
Sipka G, Farkas I, Bakos A, Maráz A, Mikó ZS, Czékus T, Bukva M, Urbán S, Pávics L, Besenyi Z

PubMed · Sep 8 2025
Background: Neuroendocrine neoplasms (NENs) are a diverse group of malignancies in which somatostatin receptor expression can be crucial in guiding therapy. We aimed to evaluate the effectiveness of [99mTc]Tc-EDDA/HYNIC-TOC SPECT/CT in differentiating neuroendocrine tumor histology, selecting candidates for radioligand therapy, and identifying correlations between somatostatin receptor expression and non-imaging parameters in metastatic NENs. Methods: This retrospective study included 65 patients (29 women, 36 men; mean age 61 years) with metastatic neuroendocrine neoplasms confirmed by histology, follow-up, or imaging, comprising 14 poorly differentiated carcinomas and 51 well-differentiated tumors. Somatostatin receptor SPECT/CT results were assessed visually and semiquantitatively, with mathematical models incorporating histological, oncological, immunohistochemical, and laboratory parameters, followed by biostatistical analysis. Results: Of 392 lesions evaluated, the majority were metastases in the liver, lymph nodes, and bones. Mathematical models estimated somatostatin receptor expression accurately (70-83%) from clinical parameters alone. Key factors included tumor origin, oncological treatments, and the immunohistochemical marker CK7. Associations were found between age, grade, disease extent, and tumor markers (CEA, CA19-9, AFP). Conclusions: Our findings suggest that [99mTc]Tc-EDDA/HYNIC-TOC SPECT/CT effectively evaluates somatostatin receptor expression in NENs. Certain immunohistochemical and laboratory parameters, beyond recognized factors, show potential prognostic value, supporting individualized treatment strategies.
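The abstract's "mathematical models" predict receptor expression from clinical parameters alone; a hedged sketch of that kind of model is below. The feature set and the logistic-regression choice are assumptions for illustration, not the study's actual specification.

```python
# Hypothetical sketch: predicting somatostatin receptor positivity from
# clinical parameters alone, as the abstract's mathematical models do.
# Feature names below are assumptions, not the study's actual variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 65  # cohort size from the abstract
X = np.column_stack([
    rng.integers(0, 3, n),   # tumor origin (encoded)
    rng.integers(0, 2, n),   # prior oncological treatment (yes/no)
    rng.integers(0, 2, n),   # CK7 immunohistochemistry (pos/neg)
    rng.normal(61, 10, n),   # age
])
y = rng.integers(0, 2, n)    # receptor-positive vs receptor-negative

model = LogisticRegression(max_iter=1000)
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")  # paper reports 70-83%
```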

Vanderbecq, Q., Xia, W. F., Chouzenoux, E., Pesquet, J.-C., Zins, M., Wagner, M.

medRxiv preprint · Sep 8 2025
Purpose: To develop and externally validate a multimodal AI model for detecting ischemia complicating small-bowel obstruction (SBO). Methods: We combined 3D CT data with routine laboratory markers (C-reactive protein, neutrophil count) and, optionally, radiology report text. From two centers, 1,350 CT examinations were curated; 771 confirmed SBO scans were used for model development with patient-level splits. Ischemia labels were defined by surgical confirmation within 24 hours of imaging. Models (MViT, ResNet-101, DaViT) were trained as unimodal and multimodal variants. External testing used 66 independent cases from a third center. Two radiologists (attending, resident) read the test set with and without AI assistance. Performance was assessed using AUC, sensitivity, specificity, and 95% bootstrap confidence intervals; predictions included a confidence score. Results: The image-plus-laboratory model performed best on external testing (AUC 0.69 [0.59-0.79], sensitivity 0.89 [0.76-1.00], specificity 0.44 [0.35-0.54]). Adding report text improved internal validation but did not generalize externally; the image+text and full multimodal variants did not exceed image+laboratory performance. Without AI, the attending outperformed the resident (AUC 0.745 [0.617-0.845] vs 0.706 [0.581-0.818]); with AI, both improved (attending 0.752 [0.637-0.853], resident 0.752 [0.629-0.867]), rising to 0.750 [0.631-0.839] and 0.773 [0.657-0.867] with confidence display; differences were not statistically significant. Conclusion: A multimodal AI model that combines CT images with routine laboratory markers outperforms single-modality approaches and boosts radiologists' performance, notably for junior readers, supporting earlier, more consistent decisions within the first 24 hours. Key Points: (1) A multimodal artificial intelligence (AI) model that combines CT images with laboratory markers detected ischemia in small-bowel obstruction with AUC 0.69 (95% CI 0.59-0.79) and sensitivity 0.89 (0.76-1.00) on external testing, outperforming single-modality models. (2) Adding report text did not generalize across sites: the image+text model fell from AUC 0.82 (internal) to 0.53 (external), and adding text to the image+laboratory model left external AUC unchanged (0.69) with similar specificity (0.43-0.44). (3) With AI assistance both junior and senior readers improved; the junior's AUC rose from 0.71 to 0.77, reaching senior-level performance. Summary Statement: A multicenter AI model combining CT and routine laboratory data (CRP and neutrophil count) improved radiologists' detection of ischemia in small-bowel obstruction, supporting earlier decision-making within the first 24 hours.
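A minimal sketch of the late-fusion design such a model implies: image features concatenated with laboratory values before a classification head. The tiny 3D backbone below is a stand-in; the study's actual architectures (MViT, ResNet-101, DaViT) and fusion details are not given in the abstract.

```python
import torch
import torch.nn as nn

class ImageLabFusion(nn.Module):
    def __init__(self, img_feat_dim=512, n_lab=2):
        super().__init__()
        # Stand-in image encoder: any 3D backbone producing a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, img_feat_dim), nn.ReLU(),
        )
        # Classifier over concatenated image features and lab markers
        # (e.g., C-reactive protein, neutrophil count).
        self.head = nn.Sequential(
            nn.Linear(img_feat_dim + n_lab, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for ischemia
        )

    def forward(self, ct_volume, labs):
        feats = self.encoder(ct_volume)
        return self.head(torch.cat([feats, labs], dim=1))

model = ImageLabFusion()
logit = model(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 2))
prob = torch.sigmoid(logit)  # per-case ischemia probability
```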

Kim, D. D., Madabhushi, A., Margulies, K. B., Peyster, E. G.

medRxiv preprint · Sep 8 2025
Background: Cardiac allograft rejection (CAR) remains the leading cause of early graft failure after heart transplantation (HT). Current diagnostics, including histologic grading of endomyocardial biopsy (EMB) and blood-based assays, lack accurate predictive power for future CAR risk. We developed a predictive model integrating routine clinical data with quantitative morphologic features extracted from routine EMBs to demonstrate the precision-medicine potential of mining existing data sources in post-HT care. Methods: In a retrospective cohort of 484 HT recipients with 1,188 EMB encounters within 6 months post-transplant, we extracted 370 quantitative pathology features describing lymphocyte infiltration and stromal architecture from digitized H&E-stained slides. Longitudinal clinical data comprising 268 variables, including lab values, immunosuppression records, and prior rejection history, were aggregated per patient. Using the XGBoost algorithm with rigorous cross-validation, we compared models based on four different data sources: clinical-only, morphology-only, cross-sectional-only, and fully integrated longitudinal data. The top predictors informed the derivation of a simplified Integrated Rejection Risk Index (IRRI), which relies on just 4 clinical and 4 morphology risk factors. Model performance was evaluated by AUROC, AUPRC, and time-to-event hazard ratios. Results: The fully integrated longitudinal model achieved superior predictive accuracy (AUROC 0.86, AUPRC 0.74). IRRI stratified patients into risk categories with distinct future CAR hazards: high-risk patients showed a markedly increased CAR risk (HR = 6.15, 95% CI: 4.17-9.09), while low-risk patients had significantly reduced risk (HR = 0.52, 95% CI: 0.33-0.84). This performance exceeded models based on cross-sectional or single-domain data alone, demonstrating the value of multimodal, temporal data integration. Conclusions: By integrating longitudinal clinical and biopsy morphologic features, IRRI provides a scalable, interpretable tool for proactive CAR risk assessment. This precision-based approach could support risk-adaptive surveillance and immunosuppression management strategies, offering a promising pathway toward safer, more personalized post-HT care with the potential to reduce unnecessary procedures and improve outcomes.
Clinical Perspective. What is new?
- Current tools for cardiac allograft monitoring detect rejection only after it occurs and are not designed to forecast future risk. This leads to missed opportunities for early intervention, avoidable patient injury, unnecessary testing, and inefficiencies in care.
- We developed a machine learning-based risk index that integrates clinical features, quantitative biopsy morphology, and longitudinal temporal trends to create a robust predictive framework.
- The Integrated Rejection Risk Index (IRRI) provides highly accurate prediction of future allograft rejection, identifying both high- and low-risk patients up to 90 days in advance, a capability entirely absent from current transplant management.
What are the clinical implications?
- Integrating quantitative histopathology with clinical data provides a more precise, individualized estimate of rejection risk in heart transplant recipients.
- This framework has the potential to guide post-transplant surveillance intensity, immunosuppressive management, and patient counseling.
- Automated biopsy analysis could be incorporated into digital pathology workflows, enabling scalable, multicenter application in real-world transplant care.
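As a rough illustration of the modeling setup named in the Methods (XGBoost over combined clinical and morphology features, with cross-validated AUROC/AUPRC), here is a self-contained sketch on synthetic data; the dimensions mirror the abstract, everything else is assumed:

```python
# Illustrative sketch (not the authors' code): an XGBoost classifier over
# concatenated clinical and biopsy-morphology features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_encounters = 1188
X_clinical = rng.normal(size=(n_encounters, 268))    # longitudinal clinical variables
X_morphology = rng.normal(size=(n_encounters, 370))  # quantitative pathology features
X = np.hstack([X_clinical, X_morphology])
y = rng.integers(0, 2, n_encounters)                 # future rejection within 90 days

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="logloss")
auroc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
auprc = cross_val_score(clf, X, y, cv=5, scoring="average_precision")
print(f"AUROC {auroc.mean():.2f}, AUPRC {auprc.mean():.2f}")
```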

Bang Andersen I, Søndergaard Svendsen MB, Risgaard AL, Sander Danstrup C, Todsen T, Tolsgaard MG, Friis ML

PubMed · Sep 7 2025
Assessing skills in simulated settings is resource-intensive and lacks validated metrics. Advances in AI offer the potential for automated competence assessment, addressing these limitations. This study aimed to develop and validate a machine learning AI model for automated evaluation during simulation-based thyroid ultrasound (US) training. Videos from eight experts and 21 novices performing thyroid US on a simulator were analyzed. Frames were processed into sequences of 1, 10, and 50 seconds. A convolutional neural network with a pre-trained ResNet-50 base and a long short-term memory (LSTM) layer analyzed these sequences. The model was trained to distinguish competence levels (competent = 1, not competent = 0) using fourfold cross-validation, with performance metrics including precision, recall, F1-score, and accuracy. Bayesian updating and adaptive thresholding assessed performance over time. The AI model effectively differentiated expert and novice US performance. The 50-second sequences achieved the highest accuracy (70%) and F1-score (0.76). Experts showed significantly longer durations above the threshold (15.71 s) than novices (9.31 s, p = .030). An LSTM-based AI model provides near real-time, automated assessments of competence in US training. Utilizing temporal video data enables detailed micro-assessments of complex procedures, which may enhance interpretability and be applied across various procedural domains.
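The architecture described (a pre-trained ResNet-50 base feeding an LSTM over frame sequences) can be sketched as follows; sequence length, frame rate, and input size are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W = 50, 224, 224  # e.g., 50 frames for a 50-second sequence at 1 fps

base = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                      input_shape=(H, W, 3))
base.trainable = False  # pre-trained base, as in the paper

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, 3)),
    layers.TimeDistributed(base),           # per-frame 2048-d features
    layers.LSTM(128),                       # temporal aggregation
    layers.Dense(1, activation="sigmoid"),  # competent (1) vs not (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```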

Duncan AE, Malkani AL, Stoltz MJ, Ahmed N, Mullick M, Whitaker JE, Swiergosz A, Smith LS, Dourado A

PubMed · Sep 7 2025
The use of cementless total knee arthroplasty (TKA) has significantly increased over the past decade. However, there are no objective criteria or consensus on parameters for patient selection for cementless TKA. The purpose of this study was to develop a machine learning model, based on patient and radiographic parameters, that could identify patients indicated for cementless TKA. We developed an explainable recommendation model using multiple patient and radiographic parameters (BMI, age, gender, and Hounsfield units [HU] from CT reflecting tibial bone density). The predictive model was trained on medical, operative, and radiographic data of 217 patients who underwent primary TKA. HU density measurements of four quadrants of the proximal tibia were obtained at regions of interest on preoperative CT scans, which were then incorporated into the model as a surrogate for bone mineral density. The model employs Local Interpretable Model-agnostic Explanations (LIME) in combination with bagging ensemble techniques for artificial neural networks. Model testing on the 217-patient cohort included 22 cemented and 38 cementless TKA cases. The model correctly identified 19 of the cemented patients (sensitivity: 86.4%) and 37 of the cementless patients (specificity: 97.4%), with an AUC of 0.94. Use of cementless TKA has grown significantly, and there are currently no standard radiographic criteria for patient selection. Our machine learning model demonstrated 97.4% specificity and should improve with more training data. Future improvements will include incorporating additional cases and developing automated HU extraction techniques.
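A hedged sketch of the named components (a bagging ensemble of neural networks explained with LIME) on placeholder features; the study's real features, preprocessing, and hyperparameters are not given in the abstract:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
# Placeholder features: BMI, age, gender, and four tibial-quadrant HU values.
feature_names = ["BMI", "age", "gender", "HU_q1", "HU_q2", "HU_q3", "HU_q4"]
X = rng.normal(size=(217, len(feature_names)))
y = rng.integers(0, 2, 217)  # 1 = cementless TKA recommended

# Bagging ensemble of small neural networks (sklearn >= 1.2 uses `estimator=`).
clf = BaggingClassifier(estimator=MLPClassifier(hidden_layer_sizes=(32,),
                                                max_iter=500),
                        n_estimators=10, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["cemented", "cementless"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # per-feature contributions for this patient
```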

Chai WY, Lin G, Wang CJ, Chiang HJ, Ng SH, Kuo YS, Lin YC

PubMed · Sep 7 2025
Automated cardiac MR segmentation enables accurate and reproducible ventricular function assessment in Tetralogy of Fallot (ToF), whereas manual segmentation remains time-consuming and variable. We aimed to evaluate deep learning (DL)-based models for automatic left ventricle (LV), right ventricle (RV), and LV myocardium segmentation in ToF, compared with manual reference standard annotations. This retrospective study included 427 patients with diverse cardiac conditions (305 non-ToF, 122 ToF): 395 for training/validation, 32 ToF for internal testing, and 12 external ToF for generalizability assessment. Imaging used a steady-state free precession cine sequence at 1.5/3 T. U-Net, Deep U-Net, and MultiResUNet were trained under three regimes (non-ToF, ToF-only, mixed), using manual segmentations from one radiologist and one researcher (20 and 10 years of experience, respectively) as reference, with consensus for discrepancies. Performance for LV, RV, and LV myocardium was evaluated using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and F1-score, alongside regional (basal, middle, apical) and global ventricular function comparisons with manual results. Friedman tests were applied for architecture and regime comparisons, paired Wilcoxon tests for ED-ES differences, and Pearson's r for assessing agreement in global function. The MultiResUNet model trained on a mixed dataset (ToF and non-ToF cases) achieved the best segmentation performance, with DSCs of 96.1% for LV and 93.5% for RV. In the internal test set, DSCs for LV, RV, and LV myocardium were 97.3%, 94.7%, and 90.7% at end-diastole, and 93.6%, 92.1%, and 87.8% at end-systole, with ventricular measurement correlations ranging from 0.84 to 0.99. Regional analysis showed LV DSCs of 96.3% (basal), 96.4% (middle), and 94.1% (apical), and RV DSCs of 92.8%, 94.2%, and 89.6%. External validation (n = 12) showed correlations ranging from 0.81 to 0.98. The MultiResUNet model enabled accurate automated cardiac MRI segmentation in ToF, with the potential to streamline workflows and improve disease monitoring. Evidence Level: 3. Technical Efficacy: Stage 2.
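Since the Dice Similarity Coefficient is the headline metric here, a minimal reference implementation may help ground the reported percentages:

```python
# DSC = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two partially overlapping square masks.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")  # 0.562 for this toy pair
```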

Jin L, Liu Z, Sun Y, Gao P, Ma Z, Ye H, Liu Z, Dong X, Sun Y, Han J, Lv L, Guan D, Li M

PubMed · Sep 7 2025
Diagnosing pulmonary ground-glass nodules (GGNs) on chest CT imaging remains challenging in clinical practice, and different stages of GGNs may require different clinical treatments. Hence, we sought to predict the progressive state of pulmonary GGNs (absorption or persistence) to support accurate clinical treatment and decision-making. We retrospectively enrolled 672 patients (absorption group: 299; control group: 373) from two medical centres from January 2017 to March 2023. Clinical information and radiomic features extracted from regions of interest on chest CT imaging were collected for all patients. All patients were randomly divided into training and test sets at a ratio of 7:3. Three models were constructed to identify GGN progression: a Rad-score model (Model 1), a clinical-factor model (Model 2), and a combined clinical-factor and Rad-score model (Model 3). In the test dataset, two radiologists (each with over 8 years of experience in chest imaging) evaluated the models' performance. Receiver operating characteristic curves, accuracy, sensitivity, and specificity were analysed. In the test set, the areas under the curve (AUCs) of Model 1 and Model 2 were 0.907 [0.868-0.946] and 0.918 [0.88-0.955], respectively. Model 3 achieved the best predictive performance, with an AUC of 0.959 [0.936-0.982], an accuracy of 0.881, a sensitivity of 0.902, and a specificity of 0.856. The intraclass correlation coefficient of Model 3 (0.86) exceeded that of the radiologists (0.83 and 0.71). We developed and validated a radiomics-based machine-learning method that achieved good performance in predicting the progressive state of GGNs on initial CT. The model may improve follow-up management of GGNs.
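A compact sketch of the combined variant (clinical factors plus Rad-score, 7:3 split), with logistic regression as a stand-in classifier since the abstract does not name one; all feature values below are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 672
rad_score = rng.normal(size=(n, 1))   # aggregate radiomics signature
clinical = rng.normal(size=(n, 5))    # assumed factors, e.g., age, nodule size
y = rng.integers(0, 2, n)             # absorption (1) vs persistence (0)

X = np.hstack([clinical, rad_score])  # "Model 3": clinical + Rad-score
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"test AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.2f}")
```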

Amna Hassan, Ilsa Afzaal, Nouman Muneeb, Aneeqa Batool, Hamail Noor

arXiv preprint · Sep 7 2025
Bone fractures present a major global health challenge, often resulting in pain, reduced mobility, and productivity loss, particularly in low-resource settings where access to expert radiology services is limited. Conventional imaging methods suffer from high costs, radiation exposure, and dependency on specialized interpretation. To address this, we developed an AI-based solution for automated fracture detection from X-ray images using a custom Convolutional Neural Network (CNN) and benchmarked it against transfer learning models including EfficientNetB0, MobileNetV2, and ResNet50. Training was conducted on the publicly available FracAtlas dataset, comprising 4,083 anonymized musculoskeletal radiographs. The custom CNN achieved 95.96% accuracy, 0.94 precision, 0.88 recall, and an F1-score of 0.91 on the FracAtlas dataset. Although the transfer learning models performed poorly in this specific setup, those results should be interpreted in light of class imbalance and dataset limitations. This work highlights the promise of lightweight CNNs for detecting fractures in X-rays and underscores the importance of fair benchmarking, diverse datasets, and external validation for clinical translation.
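For readers who want a concrete starting point, a hypothetical lightweight CNN of the kind the paper describes might look like this; the actual architecture is not specified in the abstract:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),        # grayscale radiograph
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),    # fracture vs no fracture
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# With class imbalance (as noted for FracAtlas), class weighting or
# resampling matters, e.g.:
# model.fit(train_ds, class_weight={0: 1.0, 1: 4.0})
```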

Zhengquan Luo, Chi Liu, Dongfu Xiao, Zhen Yu, Yueye Wang, Tianqing Zhu

arXiv preprint · Sep 7 2025
The integration of AI with medical images enables the extraction of implicit image-derived biomarkers for precise health assessment. Recently, retinal age, a biomarker predicted from fundus images, has proven to be a predictor of systemic disease risk, behavioral patterns, aging trajectory, and even mortality. However, the capability to infer such sensitive biometric data raises significant privacy risks, where unauthorized use of fundus images could lead to bioinformation leakage, breaching individual privacy. In response, we formulate a new research problem of biometric privacy associated with medical images and propose RetinaGuard, a novel privacy-enhancing framework that employs a feature-level generative adversarial masking mechanism to obscure retinal age while preserving image visual quality and disease diagnostic utility. The framework further utilizes a novel multiple-to-one knowledge distillation strategy incorporating a retinal foundation model and diverse surrogate age encoders to enable a universal defense against black-box age prediction models. Comprehensive evaluations confirm that RetinaGuard successfully obfuscates retinal age prediction with minimal impact on image quality and pathological feature representation. RetinaGuard is also flexible for extension to other medical-image-derived biomarkers.
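The core idea, adversarial feature masking against a surrogate age predictor, can be schematized as below. All modules, losses, and weights are illustrative assumptions; RetinaGuard's actual design is not detailed in the abstract:

```python
# Schematic: a masker perturbs image features to defeat a (frozen) surrogate
# age predictor, while a fidelity term keeps the features close to the
# originals to preserve diagnostic utility.
import torch
import torch.nn as nn

feat_dim = 256
masker = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Tanh())  # bounded perturbation
age_predictor = nn.Linear(feat_dim, 1)  # stand-in surrogate age encoder head
for p in age_predictor.parameters():
    p.requires_grad = False             # frozen: only the masker trains

opt = torch.optim.Adam(masker.parameters(), lr=1e-4)
features = torch.randn(8, feat_dim)     # retinal image features
true_age = torch.rand(8, 1) * 60 + 20   # ages 20-80

for step in range(100):
    masked = features + masker(features)
    fidelity = nn.functional.mse_loss(masked, features)           # keep utility
    age_loss = nn.functional.mse_loss(age_predictor(masked), true_age)
    loss = fidelity - 0.1 * age_loss  # maximize age error, minimize distortion
    opt.zero_grad(); loss.backward(); opt.step()
```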

Yiwen Ye, Yicheng Wu, Xiangde Luo, He Zhang, Ziyang Chen, Ting Dang, Yanning Zhang, Yong Xia

arXiv preprint · Sep 7 2025
Foundation models have become a promising paradigm for advancing medical image analysis, particularly for segmentation tasks where downstream applications often emerge sequentially. Existing fine-tuning strategies, however, remain limited: parallel fine-tuning isolates tasks and fails to exploit shared knowledge, while multi-task fine-tuning requires simultaneous access to all datasets and struggles with incremental task integration. To address these challenges, we propose MedSeqFT, a sequential fine-tuning framework that progressively adapts pre-trained models to new tasks while refining their representational capacity. MedSeqFT introduces two core components: (1) Maximum Data Similarity (MDS) selection, which identifies downstream samples most representative of the original pre-training distribution to preserve general knowledge, and (2) Knowledge and Generalization Retention Fine-Tuning (K&G RFT), a LoRA-based knowledge distillation scheme that balances task-specific adaptation with the retention of pre-trained knowledge. Extensive experiments on two multi-task datasets covering ten 3D segmentation tasks demonstrate that MedSeqFT consistently outperforms state-of-the-art fine-tuning strategies, yielding substantial performance gains (e.g., an average Dice improvement of 3.0%). Furthermore, evaluations on two unseen tasks (COVID-19-20 and Kidney) verify that MedSeqFT enhances transferability, particularly for tumor segmentation. Visual analyses of loss landscapes and parameter variations further highlight the robustness of MedSeqFT. These results establish sequential fine-tuning as an effective, knowledge-retentive paradigm for adapting foundation models to evolving clinical tasks. Code will be released.
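Of the two components, Maximum Data Similarity (MDS) selection is the easier one to sketch: rank downstream samples by similarity to the pre-training feature distribution and keep the most representative ones. The cosine-to-centroid measure below is an assumption for illustration, not necessarily the paper's criterion:

```python
import numpy as np

def mds_select(pretrain_feats: np.ndarray, downstream_feats: np.ndarray,
               k: int) -> np.ndarray:
    """Return indices of the k downstream samples closest to the
    pre-training feature centroid by cosine similarity."""
    centroid = pretrain_feats.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    normed = downstream_feats / np.linalg.norm(downstream_feats,
                                               axis=1, keepdims=True)
    sims = normed @ centroid
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
pretrain = rng.normal(size=(1000, 128))    # features from pre-training data
downstream = rng.normal(size=(200, 128))   # features from a new task
keep = mds_select(pretrain, downstream, k=32)
print(keep[:10])  # most pre-training-representative downstream samples
```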