Page 12 of 4494481 results

Generation of multimodal realistic computational phantoms as a test-bed for validating deep learning-based cross-modality synthesis techniques.

Camagni F, Nakas A, Parrella G, Vai A, Molinelli S, Vitolo V, Barcellini A, Chalaszczyk A, Imparato S, Pella A, Orlandi E, Baroni G, Riboldi M, Paganelli C

pubmed, Sep 27 2025
The validation of multimodal deep learning models for medical image translation is limited by the lack of high-quality, paired datasets. We propose a novel framework that leverages computational phantoms to generate realistic CT and MRI images, enabling reliable ground-truth datasets for robust validation of artificial intelligence (AI) methods that generate synthetic CT (sCT) from MRI, specifically for radiotherapy applications. Two CycleGANs (cycle-consistent generative adversarial networks) were trained to transfer the imaging style of real patients onto CT and MRI phantoms, producing synthetic data with realistic textures and continuous intensity distributions. These data were evaluated through paired assessments with original phantoms, unpaired comparisons with patient scans, and dosimetric analysis using patient-specific radiotherapy treatment plans. Additional external validation was performed on public CT datasets to assess the generalizability to unseen data. The resulting paired CT/MRI phantoms were used to validate a GAN-based model for sCT generation from abdominal MRI in particle therapy, available in the literature. Results showed strong anatomical consistency with original phantoms, high histogram correlation with patient images (HistCC = 0.998 ± 0.001 for MRI, HistCC = 0.97 ± 0.04 for CT), and dosimetric accuracy comparable to real data. The novelty of this work lies in using generated phantoms as validation data for deep learning-based cross-modality synthesis techniques.
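The histogram correlation coefficient (HistCC) reported above can be read as the Pearson correlation between the normalized intensity histograms of two images. The bin count and intensity range below are illustrative assumptions, not the paper's settings; a minimal numpy sketch:

```python
import numpy as np

def hist_cc(img_a, img_b, bins=64, value_range=(0.0, 1.0)):
    """Pearson correlation between the intensity histograms of two images."""
    h_a, _ = np.histogram(img_a, bins=bins, range=value_range, density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=value_range, density=True)
    return float(np.corrcoef(h_a, h_b)[0, 1])

rng = np.random.default_rng(0)
x = rng.random((64, 64))
print(hist_cc(x, x))  # identical images give a correlation of 1.0
```

A value near 1 means the synthetic image reproduces the global intensity distribution of real patient data, though it says nothing about spatial fidelity, which is why the paper also reports paired anatomical and dosimetric checks.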

Enhanced Fracture Diagnosis Based on Critical Regional and Scale Aware in YOLO

Yuyang Sun, Junchuan Yu, Cuiming Zou

arxiv preprint, Sep 27 2025
Fracture detection plays a critical role in medical imaging analysis. Traditional fracture diagnosis relies on visual assessment by experienced physicians, but the speed and accuracy of this approach are constrained by the examiner's expertise. With the rapid advancement of artificial intelligence, deep learning models based on the YOLO framework have been widely employed for fracture detection, demonstrating significant potential to improve diagnostic efficiency and accuracy. This study proposes an improved YOLO-based model, termed Fracture-YOLO, which integrates novel Critical-Region-Selector Attention (CRSelector) and Scale-Aware (ScA) heads to further enhance detection performance. Specifically, the CRSelector module utilizes global texture information to focus on critical features of fracture regions. Meanwhile, the ScA module dynamically adjusts the weights of features at different scales, enhancing the model's capacity to identify fracture targets at multiple scales. Experimental results demonstrate that, compared to the baseline model, Fracture-YOLO achieves a significant improvement in detection precision, with mAP50 and mAP50-95 increasing by 4 and 3 points, respectively, reaching state-of-the-art (SOTA) performance.
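The ScA head's dynamic re-weighting of multi-scale features can be illustrated, in spirit, as a softmax-weighted fusion of per-scale feature maps. The actual module is not specified in this abstract, so the scalar-per-scale weighting below is a hypothetical simplification:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1D array of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def scale_aware_fuse(features, scale_logits):
    """Fuse same-shape feature maps from different scales with dynamic
    softmax weights (one scalar weight per scale)."""
    w = softmax(np.asarray(scale_logits, dtype=float))
    return sum(wi * f for wi, f in zip(w, features))

# Three toy feature maps standing in for coarse/medium/fine scales.
feats = [np.ones((4, 4)) * s for s in (1.0, 2.0, 3.0)]
fused = scale_aware_fuse(feats, [0.0, 0.0, 0.0])  # equal logits -> equal weights
print(fused[0, 0])  # (1 + 2 + 3) / 3 = 2.0
```

In a real detector the logits would be predicted from the input, so the network can emphasize the scale most informative for each fracture target.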

Development of a clinical-CT-radiomics nomogram for predicting endoscopic red color sign in cirrhotic patients with esophageal varices.

Han J, Dong J, Yan C, Zhang J, Wang Y, Gao M, Zhang M, Chen Y, Cai J, Zhao L

pubmed, Sep 27 2025
To evaluate the predictive performance of a clinical-CT-radiomics nomogram based on radiomics signature and independent clinical-CT predictors for predicting endoscopic red color sign (RC) in cirrhotic patients with esophageal varices (EV). We retrospectively evaluated 215 cirrhotic patients. Among them, 108 and 107 cases were positive and negative for endoscopic RC, respectively. Patients were assigned to a training cohort (n = 150) and a validation cohort (n = 65) at a 7:3 ratio. In the training cohort, univariate and multivariate logistic regression analyses were performed on clinical and CT features to develop a clinical-CT model. Radiomic features were extracted from portal venous phase CT images to generate a Radiomic score (Rad-score) and to construct five machine learning models. A combined model was built using clinical-CT predictors and Rad-score through logistic regression. The performance of different models was evaluated using the receiver operating characteristic (ROC) curves and the area under the curve (AUC). The spleen-to-platelet ratio, liver volume, splenic vein diameter, and superior mesenteric vein diameter were independent predictors. Six radiomics features were selected to construct five machine learning models. The adaptive boosting model showed excellent predictive performance, achieving an AUC of 0.964 in the validation cohort, while the combined model achieved the highest predictive accuracy with an AUC of 0.985 in the validation cohort. The clinical-CT-radiomics nomogram demonstrates high predictive accuracy for endoscopic RC in cirrhotic patients with EV, which provides a novel tool for non-invasive prediction of esophageal varices bleeding.
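The AUC values used to compare these models correspond to the Mann-Whitney U statistic: the probability that a randomly chosen RC-positive case is scored above a randomly chosen RC-negative one. A self-contained numpy sketch:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic (ties get half credit)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]   # scores of positive cases
    neg = scores[y_true == 0]   # scores of negative cases
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(y, s))  # 0.75
```

An AUC of 0.985, as reported for the combined model, means the nomogram ranks a positive case above a negative one in 98.5% of such random pairs.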

Single-step prediction of inferior alveolar nerve injury after mandibular third molar extraction using contrastive learning and bayesian auto-tuned deep learning model.

Yoon K, Choi Y, Lee M, Kim J, Kim JY, Kim JW, Choi J, Park W

pubmed, Sep 27 2025
Inferior alveolar nerve (IAN) injury is a critical complication of mandibular third molar extraction. This study aimed to construct and evaluate a deep learning framework that integrates contrastive learning and Bayesian optimization to enhance predictive performance on cone-beam computed tomography (CBCT) and panoramic radiographs. A retrospective dataset of 902 panoramic radiographs and 1,500 CBCT images was used. Five deep learning architectures (MobileNetV2, ResNet101D, Vision Transformer, Twins-SVT, and SSL-ResNet50) were trained with and without contrastive learning and Bayesian optimization. Model performance was evaluated using accuracy, F1-score, and comparison with oral and maxillofacial surgeons (OMFSs). Contrastive learning significantly improved the F1-scores across all models (e.g., MobileNetV2: 0.302 to 0.740; ResNet101D: 0.188 to 0.689; Vision Transformer: 0.275 to 0.704; Twins-SVT: 0.370 to 0.719; SSL-ResNet50: 0.109 to 0.576). Bayesian optimization further enhanced the F1-scores for MobileNetV2 (from 0.740 to 0.923), ResNet101D (from 0.689 to 0.857), Vision Transformer (from 0.704 to 0.871), Twins-SVT (from 0.719 to 0.857), and SSL-ResNet50 (from 0.576 to 0.875). The AI model outperformed OMFSs on CBCT cross-sectional images (F1-score: 0.923 vs. 0.667) but underperformed on panoramic radiographs (0.666 vs. 0.730). The proposed single-step deep learning approach effectively predicts IAN injury, with contrastive learning addressing data imbalance and Bayesian optimization tuning model hyperparameters. While artificial intelligence surpasses human performance on CBCT images, panoramic radiograph analysis still benefits from expert interpretation. Future work should focus on multi-center validation and explainable artificial intelligence for broader clinical adoption.
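The contrastive learning component can be illustrated with the standard InfoNCE objective, which pulls matched embedding pairs together and pushes mismatched ones apart. The study's exact contrastive formulation is not given in this abstract, so the numpy version below is only a representative sketch:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor's positive is the matching row in
    `positives`; all other rows act as in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))    # -log p(correct pair)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                 # 8 embeddings of dimension 16
loss_aligned = info_nce(z, z)                # perfectly aligned pairs
loss_random = info_nce(z, rng.normal(size=(8, 16)))  # unrelated pairs
print(loss_aligned < loss_random)            # aligned pairs give lower loss
```

Training the encoder to minimize such a loss yields embeddings where injury and non-injury cases separate, which is one way contrastive pretraining helps with the class imbalance noted in the conclusion.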

Quantifying 3D foot and ankle alignment using an AI-driven framework: a pilot study.

Huysentruyt R, Audenaert E, Van den Borre I, Pižurica A, Duquesne K

pubmed, Sep 27 2025
Accurate assessment of foot and ankle alignment through clinical measurements is essential for diagnosing deformities, treatment planning, and monitoring outcomes. Traditional 2D radiographs fail to fully represent the 3D complexity of the foot and ankle. In contrast, weight-bearing CT (WBCT) provides a 3D view of bone alignment under physiological loading. Nevertheless, manual landmark identification on WBCT remains time-intensive and prone to variability. This study presents a novel AI framework that automates foot and ankle alignment assessment via deep learning landmark detection. By training 3D U-Net models to predict 22 anatomical landmarks directly from WBCT images using heatmap predictions, our approach eliminates the need for segmentation and iterative mesh registration methods. A small dataset of 74 orthopedic patients, including foot deformity cases such as pes cavus and planovalgus, was used to develop and evaluate the model in a clinically relevant population. The mean absolute error was assessed for each landmark and each angle using fivefold cross-validation. Mean absolute distance errors ranged from 1.00 mm for the proximal head center of the first phalanx to a maximum of 1.88 mm for the lowest point of the calcaneus. Automated clinical measurements derived from these landmarks achieved mean absolute errors between 0.91° for the hindfoot angle and a maximum of 2.90° for the Böhler angle. The heatmap-based AI approach enables automated foot and ankle alignment assessment from WBCT imaging, achieving accuracies comparable to the manual inter-rater variability reported in previous studies. This novel AI-driven method represents a potentially valuable approach for evaluating foot and ankle morphology. However, this exploratory study requires further evaluation with larger datasets to assess its real clinical applicability.
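The heatmap-prediction strategy described above typically encodes each landmark as a Gaussian blob and recovers the coordinate as the heatmap argmax. A minimal 2D sketch follows; the paper works on 3D WBCT volumes, and the sigma and grid size here are arbitrary:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """2D Gaussian heatmap peaked at a landmark position (row, col)."""
    rows, cols = np.indices(shape)
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def heatmap_to_landmark(hm):
    """Recover the landmark as the argmax of the predicted heatmap."""
    return np.unravel_index(np.argmax(hm), hm.shape)

hm = gaussian_heatmap((64, 64), (40, 21))
print(heatmap_to_landmark(hm))  # (40, 21)
```

At training time the network regresses such target heatmaps (one channel per landmark, 22 here); at inference the argmax, or a sub-voxel refinement of it, gives the landmark used for the downstream angle measurements.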

Test-time Uncertainty Estimation for Medical Image Registration via Transformation Equivariance

Lin Tian, Xiaoling Hu, Juan Eugenio Iglesias

arxiv preprint, Sep 27 2025
Accurate image registration is essential for downstream applications, yet current deep registration networks provide limited indications of whether and when their predictions are reliable. Existing uncertainty estimation strategies, such as Bayesian methods, ensembles, or MC dropout, require architectural changes or retraining, limiting their applicability to pretrained registration networks. Instead, we propose a test-time uncertainty estimation framework that is compatible with any pretrained network. Our framework is grounded in the transformation equivariance property of registration, which states that the true mapping between two images should remain consistent under spatial perturbations of the input. By analyzing the variance of network predictions under such perturbations, we derive a theoretical decomposition of perturbation-based uncertainty in registration. This decomposition separates into two terms: (i) an intrinsic spread, reflecting epistemic noise, and (ii) a bias jitter, capturing how systematic error drifts under perturbations. Across four anatomical structures (brain, cardiac, abdominal, and lung) and multiple registration models (uniGradICON, SynthMorph), the uncertainty maps correlate consistently with registration errors and highlight regions requiring caution. Our framework turns any pretrained registration network into a risk-aware tool at test time, placing medical image registration one step closer to safe deployment in clinical and large-scale research settings.
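The transformation-equivariance idea can be demonstrated on a toy 1D problem: perturb the input with a known shift, undo that shift in the prediction, and measure the spread of the de-perturbed estimates. The cross-correlation "network" below is a stand-in for a pretrained model, not the paper's method:

```python
import numpy as np

def estimate_shift(fixed, moving):
    """Toy 'registration network': estimate the integer shift between two
    1D signals by cross-correlation (a stand-in for a pretrained model)."""
    corr = np.correlate(moving - moving.mean(), fixed - fixed.mean(), "full")
    return int(np.argmax(corr)) - (len(fixed) - 1)

rng = np.random.default_rng(0)
fixed = rng.normal(size=200)
moving = np.roll(fixed, 7)  # ground-truth shift of 7 samples

# Transformation equivariance: shifting the input by a known t should shift
# the prediction by t. The spread of the de-perturbed estimates is the
# perturbation-based uncertainty; a reliable model yields near-zero spread.
estimates = np.array([estimate_shift(fixed, np.roll(moving, t)) - t
                      for t in range(-5, 6)])
print(estimates.mean(), estimates.var())  # mean near 7 (true shift), variance near 0
```

In the paper's decomposition, the variance of these de-perturbed predictions splits into an intrinsic spread and a bias-jitter term; here the toy estimator is consistent, so both vanish.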

Benchmarking DINOv3 for Multi-Task Stroke Analysis on Non-Contrast CT

Donghao Zhang, Yimin Chen, Kauê TN Duarte, Taha Aslan, Mohamed AlShamrani, Brij Karmur, Yan Wan, Shengcai Chen, Bo Hu, Bijoy K Menon, Wu Qiu

arxiv preprint, Sep 27 2025
Non-contrast computed tomography (NCCT) is essential for rapid stroke diagnosis but is limited by low image contrast and signal-to-noise ratio. We address this challenge by leveraging DINOv3, a state-of-the-art self-supervised vision transformer, to generate powerful feature representations for a comprehensive set of stroke analysis tasks. Our evaluation encompasses infarct and hemorrhage segmentation, anomaly classification (normal vs. stroke and normal vs. infarct vs. hemorrhage), hemorrhage subtype classification (EDH, SDH, SAH, IPH, IVH), and dichotomized ASPECTS classification (<=6 vs. >6) on multiple public and private datasets. This study establishes strong benchmarks for these tasks and demonstrates the potential of advanced self-supervised models to improve automated stroke diagnosis from NCCT, providing a clear analysis of both the advantages and current constraints of the approach. The code is available at https://github.com/Zzz0251/DINOv3-stroke.

Deep learning-driven contactless ECG in MRI via beat pilot tone for motion-resolved image reconstruction and heart rate monitoring.

Sun H, Ding Q, Zhong S, Zhang Z

pubmed, Sep 26 2025
Electrocardiogram (ECG) is crucial for synchronizing cardiovascular magnetic resonance imaging (CMRI) acquisition with the cardiac cycle and for continuous heart rate monitoring during prolonged scans. However, conventional electrode-based ECG systems in clinical MRI environments suffer from tedious setup, magnetohydrodynamic (MHD) waveform distortion, skin burn risks, and patient discomfort. This study proposes a contactless ECG measurement method in MRI to address these challenges. We integrated Beat Pilot Tone (BPT), a contactless, highly motion-sensitive, and easily integrable RF motion sensing modality, into CMRI to capture cardiac motion without direct patient contact. A deep neural network was trained to map the BPT-derived cardiac mechanical motion signals to corresponding ECG waveforms. The reconstructed ECG was evaluated against simultaneously acquired ground-truth ECG using multiple metrics: Pearson correlation coefficient, relative root mean square error (RRMSE), cardiac trigger timing accuracy, and heart rate estimation error. Additionally, we performed retrospective binning MRI reconstruction using the reconstructed ECG as reference and evaluated image quality under both standard clinical conditions and challenging scenarios involving arrhythmias and subject motion. To examine the scalability of our approach across field strengths, the model pretrained on 1.5T data was applied to 3T BPT cardiac acquisitions. In optimal acquisition scenarios, the reconstructed ECG achieved a median Pearson correlation of 89% relative to the ground truth, cardiac triggering accuracy reached 94%, and heart rate estimation error remained below 1 bpm. The quality of the reconstructed images was comparable to that of ground-truth synchronization. The method exhibited a degree of adaptability to irregular heart rate patterns and subject motion, and scaled effectively across MRI systems operating at different field strengths. The proposed contactless ECG measurement method has the potential to streamline CMRI workflows, improve patient safety and comfort, mitigate MHD distortion challenges, and find robust clinical application.
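Heart rate estimation error, one of the metrics above, reduces to detecting R-peaks in the reconstructed waveform and averaging the R-R intervals. A sketch on a synthetic pulse train, where the sampling rate, pulse shape, and detection threshold are illustrative assumptions:

```python
import numpy as np

def detect_peaks(signal, threshold):
    """Indices of local maxima above threshold (a naive R-peak detector)."""
    s = np.asarray(signal)
    idx = np.flatnonzero((s[1:-1] > threshold) &
                         (s[1:-1] >= s[:-2]) & (s[1:-1] > s[2:])) + 1
    return idx

fs = 250                              # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)          # 10 s of signal
hr_true = 72                          # simulated heart rate in bpm
beat_times = np.arange(0, 10, 60 / hr_true)
# Narrow Gaussian pulses standing in for R-waves.
ecg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)

peaks = detect_peaks(ecg, threshold=0.5)
rr = np.diff(peaks) / fs              # R-R intervals in seconds
hr_est = 60.0 / rr.mean()             # estimated heart rate in bpm
print(round(hr_est, 1))
```

Real detectors (e.g. Pan-Tompkins-style filtering) are more robust to noise and MHD residue, but the error metric itself is just the deviation of `hr_est` from the reference heart rate.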

Hybrid Fusion Model for Effective Distinguishing Benign and Malignant Parotid Gland Tumors in Gray-Scale Ultrasonography.

Mao Y, Jiang LP, Wang JL, Chen FQ, Zhang WP, Peng XQ, Chen L, Liu ZX

pubmed, Sep 26 2025
To develop a hybrid fusion model, the deep learning radiomics nomogram (DLRN), integrating radiomics and transfer learning to assist sonographers in differentiating benign from malignant parotid gland tumors. This study retrospectively analyzed a total of 328 patients with pathologically confirmed parotid gland tumors from two centers. Radiomics features extracted from ultrasound images were input into eight machine learning classifiers to construct the radiomics (Rad) model. Additionally, the images were input into seven transfer learning networks to construct the deep transfer learning (DTL) model. The prediction probabilities from these two models were combined through decision fusion to construct a DLR model. Clinical features were further integrated with the prediction probabilities of the DLR model to develop the DLRN model. The performance of these models was evaluated using receiver operating characteristic curve analysis, calibration curves, decision curve analysis, and the Hosmer-Lemeshow test. In the internal and external validation cohorts, compared with the clinical model (AUC = 0.891 and 0.734), Rad (AUC = 0.809 and 0.860), DTL (AUC = 0.905 and 0.782), and DLR (AUC = 0.932 and 0.828), the DLRN model demonstrated the greatest discriminative ability (AUC = 0.931 and 0.934). With the assistance of DLR, the diagnostic accuracy of resident, attending, and chief physicians increased by 6.6%, 6.5%, and 1.2%, respectively. The hybrid fusion model DLRN significantly enhances diagnostic performance for benign and malignant parotid gland tumors and can effectively assist sonographers in making more accurate diagnoses.
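Decision-level fusion as described, combining the prediction probabilities of the Rad and DTL models, can be as simple as a weighted average of per-case probabilities; the equal weighting below is an assumption, not the paper's stated scheme:

```python
import numpy as np

def decision_fusion(prob_a, prob_b, weight=0.5):
    """Late (decision-level) fusion: weighted average of the malignancy
    probabilities predicted by two models for the same cases."""
    return weight * np.asarray(prob_a) + (1 - weight) * np.asarray(prob_b)

p_rad = np.array([0.9, 0.2, 0.6])   # hypothetical radiomics-model outputs
p_dtl = np.array([0.7, 0.4, 0.8])   # hypothetical transfer-learning outputs
fused = decision_fusion(p_rad, p_dtl)
print(fused)  # [0.8 0.3 0.7]
```

Because fusion happens on probabilities rather than features, the two models can be trained and tuned independently, which is the practical appeal of this late-fusion design.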

Efficacy of PSMA PET/CT radiomics analysis for risk stratification in newly diagnosed prostate cancer: a multicenter study.

Jafari E, Zarei A, Dadgar H, Keshavarz A, Abdollahi H, Samimi R, Manafi-Farid R, Divband G, Nikkholgh B, Fallahi B, Amini H, Ahmadzadehfar H, Rahmim A, Zohrabi F, Assadi M

pubmed, Sep 26 2025
Prostate-specific membrane antigen (PSMA) PET/CT plays an increasing role in prostate cancer management. Radiomics analysis of PSMA PET/CT images may provide additional information for risk stratification. This study aimed to evaluate the performance of PSMA PET/CT radiomics analysis in differentiating between Gleason Grade Groups (GGG 1–3 vs. GGG 4–5) and predicting PSA levels (below vs. at or above 20 ng/ml) in patients with newly diagnosed prostate cancer. In this multicenter study, patients with confirmed primary prostate cancer who underwent [68Ga]Ga-PSMA PET/CT for staging were enrolled. Inclusion criteria required intraprostatic lesions on PET and available International Society of Urological Pathology (ISUP) grade information. Three segments were delineated: intraprostatic PSMA-avid lesions on PET, the whole prostate in PET, and the whole prostate in CT. Radiomic features (RFs) were extracted from all segments. Dimensionality reduction was achieved through principal component analysis (PCA) prior to model training on data from two centers (186 cases) with 10-fold cross-validation. Model performance was validated on an external dataset (57 cases) using various machine learning models, including random forest, nearest centroid, support vector machine (SVM), calibrated classifier CV, and logistic regression. In this retrospective study, 243 patients with a median age of 69 (range: 46–89) were enrolled. For distinguishing GGG 1–3 from GGG 4–5, the nearest centroid classifier using RFs from whole-prostate PET achieved the best performance in the internal test set, while the random forest classifier using RFs from PSMA-avid lesions in PET performed best in the external test set. However, when considering both internal and external test sets, a calibrated classifier CV using RFs from PSMA-avid PET data showed slightly improved overall performance. Regarding PSA level classification (<20 ng/ml vs. ≥20 ng/ml), the nearest centroid classifier using RFs from the whole prostate in PET achieved the best performance in the internal test set. In the external test set, the highest performance was observed using RFs derived from the concatenation of PET and CT. Notably, when combining both internal and external test sets, the best performance was again achieved with RFs from the concatenated PET/CT data. Our research suggests that [68Ga]Ga-PSMA PET/CT radiomic features, particularly features derived from intraprostatic PSMA-avid lesions, may provide valuable information for pre-biopsy risk stratification in newly diagnosed prostate cancer.
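The PCA step used for dimensionality reduction before model training can be sketched with an SVD of the centered feature matrix. The 186 x 100 matrix below is synthetic, with only the training case count taken from the study; the feature count and component count are assumptions:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a feature matrix X (samples x features) onto its top
    principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T              # scores in component space

rng = np.random.default_rng(0)
X = rng.normal(size=(186, 100))   # e.g. 186 training cases, 100 radiomic features
Z = pca_reduce(X, n_components=10)
print(Z.shape)  # (186, 10)
```

Fitting PCA on the training centers only and reusing the same projection for the external 57-case set is what keeps the validation honest; refitting on the external data would leak information.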