
SMAS: Structural MRI-based AD Score using Bayesian supervised VAE.

Nemali A, Bernal J, Yakupov R, D S, Dyrba M, Incesoy EI, Mukherjee S, Peters O, Ersözlü E, Hellmann-Regen J, Preis L, Priller J, Spruth E, Altenstein S, Lohse A, Schneider A, Fliessbach K, Kimmich O, Wiltfang J, Hansen N, Schott B, Rostamzadeh A, Glanz W, Butryn M, Buerger K, Janowitz D, Ewers M, Perneczky R, Rauchmann B, Teipel S, Kilimann I, Goerss D, Laske C, Sodenkamp S, Spottke A, Coenjaerts M, Brosseron F, Lüsebrink F, Dechent P, Scheffler K, Hetzer S, Kleineidam L, Stark M, Jessen F, Duzel E, Ziegler G

pubmed logopapers Aug 15 2025
This study introduces the Structural MRI-based Alzheimer's Disease Score (SMAS), a novel index intended to quantify Alzheimer's Disease (AD)-related morphometric patterns using a deep learning Bayesian-supervised Variational Autoencoder (Bayesian-SVAE). The SMAS index was constructed using baseline structural MRI data from the DELCODE study and evaluated longitudinally in two independent cohorts: DELCODE (n=415) and ADNI (n=190). Our findings indicate that SMAS has strong associations with cognitive performance (DELCODE: r=-0.83; ADNI: r=-0.62), age (DELCODE: r=0.50; ADNI: r=0.28), hippocampal volume (DELCODE: r=-0.44; ADNI: r=-0.66), and total gray matter volume (DELCODE: r=-0.42; ADNI: r=-0.47), suggesting its potential as a biomarker for AD-related brain atrophy. Moreover, our longitudinal studies indicated that SMAS may be useful for the early identification and tracking of AD. The model demonstrated high predictive accuracy in distinguishing cognitively healthy individuals from those with AD (DELCODE: AUC=0.971 at baseline, 0.833 at 36 months; ADNI: AUC=0.817 at baseline, improving to 0.903 at 24 months). Notably, over 36 months, the SMAS index outperformed existing measures such as SPARE-AD and hippocampal volume. The relevance map analysis revealed significant morphological changes in key AD-related brain regions, including the hippocampus, posterior cingulate cortex, precuneus, and lateral parietal cortex, highlighting that SMAS is a sensitive and interpretable biomarker of brain atrophy, suitable for early AD detection and longitudinal monitoring of disease progression.
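The supervised-VAE objective behind an index like SMAS can be sketched as a weighted sum of reconstruction, KL, and score-regression terms. The abstract does not give the exact loss; the squared-error terms and the `beta`/`gamma` weights below are illustrative assumptions, not the authors' formulation:

```python
import math

def bsvae_loss(x, x_hat, mu, logvar, score_pred, score_true,
               beta=1.0, gamma=10.0):
    # Reconstruction term: squared error between input and reconstruction.
    recon = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    # Closed-form KL divergence between q(z|x) = N(mu, exp(logvar)) and N(0, I).
    kl = -0.5 * sum(1.0 + lv - m ** 2 - math.exp(lv)
                    for m, lv in zip(mu, logvar))
    # Supervised head: regress the scalar disease score (SMAS-style).
    sup = (score_pred - score_true) ** 2
    return recon + beta * kl + gamma * sup
```

With a perfect reconstruction, a standard-normal posterior, and an exact score prediction, all three terms vanish and the loss is zero.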

Determination of Skeletal Age From Hand Radiographs Using Deep Learning.

Bram JT, Pareek A, Beber SA, Jones RH, Shariatnia MM, Daliliyazdi A, Tracey OC, Green DW, Fabricant PD

pubmed logopapers Aug 15 2025
Surgeons treating skeletally immature patients use skeletal age to determine appropriate surgical strategies, but traditional bone age estimation from hand radiographs is time-consuming. This cohort study aimed to develop highly accurate and reliable deep learning (DL) models for determining skeletal age from hand radiographs. The authors utilized 3 publicly available hand radiograph data sets for model development/validation: (1) the Radiological Society of North America (RSNA) data set, (2) the Radiological Hand Pose Estimation (RHPE) data set, and (3) the Digital Hand Atlas (DHA). All 3 data sets report corresponding sex and skeletal age; the RHPE and DHA also contain chronological age. After image preprocessing, a ConvNeXt model was first trained on the RSNA data set with sex and skeletal age as inputs using 5-fold cross-validation, then trained on the RHPE with the addition of chronological age. Final model validation was performed on the DHA and an institutional data set of 200 images. The first model, trained on the RSNA, achieved a mean absolute error (MAE) of 3.68 months on the RSNA test set and 5.66 months on the DHA. This outperformed the 4.2 months achieved on the RSNA test set by the best model from previous work (a 12.4% improvement) and the 3.9 months of the open-source software Deeplasia (a 5.6% improvement). After incorporating chronological age from the RHPE in model 2, the error on the DHA improved to an MAE of 4.65 months, again surpassing the best previously published models (a 19.8% improvement). Leveraging newer DL technologies trained on >20,000 hand radiographs across 3 distinct, diverse data sets, this study developed a robust model for predicting bone age. Utilizing features extracted from an RSNA model, combined with chronological age inputs, this model outperforms previous state-of-the-art models when applied to validation data sets.
These results indicate that the models provide a highly accurate and reliable platform for clinical use, improving confidence in appropriate surgical selection (eg, physeal-sparing procedures) and saving time for orthopaedic surgeons and radiologists evaluating skeletal age. Development of an accurate DL model for determining bone age from the hand reduces the time required for age estimation. Additionally, streamlined skeletal age estimation can aid practitioners in determining optimal treatment strategies and may be useful in research settings to decrease workload and improve reporting.
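The reported error metrics reduce to simple arithmetic. A minimal sketch of the MAE and relative-improvement computations (the function names are mine, not the authors'):

```python
def mae_months(pred, true):
    # Mean absolute error between predicted and reference skeletal ages (months).
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def pct_improvement(baseline_mae, new_mae):
    # Relative MAE reduction versus a baseline model, in percent.
    return 100.0 * (baseline_mae - new_mae) / baseline_mae
```

For example, `pct_improvement(4.2, 3.68)` reproduces the reported 12.4% gain over the best previous model, and `pct_improvement(3.9, 3.68)` the 5.6% gain over Deeplasia.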

Efficient Image-to-Image Schrödinger Bridge for CT Field of View Extension

Zhenhao Li, Long Yang, Xiaojie Yin, Haijun Yu, Jiazhou Wang, Hongbin Han, Weigang Hu, Yixing Huang

arxiv logopreprint Aug 15 2025
Computed tomography (CT) is a cornerstone imaging modality for non-invasive, high-resolution visualization of internal anatomical structures. However, when the scanned object exceeds the scanner's field of view (FOV), projection data are truncated, resulting in incomplete reconstructions and pronounced artifacts near FOV boundaries. Conventional reconstruction algorithms struggle to recover accurate anatomy from such data, limiting clinical reliability. Deep learning approaches have been explored for FOV extension, with diffusion generative models representing the latest advances in image synthesis. Yet, conventional diffusion models are computationally demanding and slow at inference due to their iterative sampling process. To address these limitations, we propose an efficient CT FOV extension framework based on the image-to-image Schrödinger Bridge (I²SB) diffusion model. Unlike traditional diffusion models that synthesize images from pure Gaussian noise, I²SB learns a direct stochastic mapping between paired limited-FOV and extended-FOV images. This direct correspondence yields a more interpretable and traceable generative process, enhancing anatomical consistency and structural fidelity in reconstructions. I²SB achieves superior quantitative performance, with root-mean-square error (RMSE) values of 49.8 HU on simulated noisy data and 152.0 HU on real data, outperforming state-of-the-art diffusion models such as conditional denoising diffusion probabilistic models (cDDPM) and patch-based diffusion methods. Moreover, its one-step inference enables reconstruction in just 0.19 s per 2D slice, representing over a 700-fold speedup compared to cDDPM (135 s) and surpassing diffusionGAN (0.58 s), the second fastest. This combination of accuracy and efficiency makes I²SB highly suitable for real-time or clinical deployment.
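The core idea can be sketched in a few lines, assuming a linear Brownian-bridge parameterization between paired images; the precise I²SB schedule and network are in the paper, not the abstract, so everything below is illustrative:

```python
import random

def bridge_sample(x_ext, x_lim, t, sigma=0.0, rng=None):
    # One point on a bridge between paired extended-FOV (t=0) and
    # limited-FOV (t=1) images: linear interpolation plus optional
    # Brownian-bridge noise with std sigma * sqrt(t * (1 - t)).
    rng = rng or random.Random(0)
    std = sigma * (t * (1.0 - t)) ** 0.5
    return [(1.0 - t) * a + t * b + rng.gauss(0.0, std)
            for a, b in zip(x_ext, x_lim)]

def one_step_reconstruct(x_lim, predict_x_ext):
    # I2SB-style single-step inference: apply the learned mapping once,
    # rather than iterating hundreds of steps from pure Gaussian noise.
    return predict_x_ext(x_lim)
```

The one-step path is what yields the reported 0.19 s per slice: inference is a single forward pass from the limited-FOV input.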

Deep learning-based non-invasive prediction of PD-L1 status and immunotherapy survival stratification in esophageal cancer using [¹⁸F]FDG PET/CT.

Xie F, Zhang M, Zheng C, Zhao Z, Wang J, Li Y, Wang K, Wang W, Lin J, Wu T, Wang Y, Chen X, Li Y, Zhu Z, Wu H, Li Y, Liu Q

pubmed logopapers Aug 14 2025
This study aimed to develop and validate deep learning models using [¹⁸F]FDG PET/CT to predict PD-L1 status in esophageal cancer (EC) patients. Additionally, we assessed the potential of derived deep learning model scores (DLS) for survival stratification in immunotherapy. In this retrospective study, we included 331 EC patients from two centers, dividing them into training, internal validation, and external validation cohorts. Fifty patients who received immunotherapy were followed up. We developed four 3D ResNet10-based models-PET + CT + clinical factors (CPC), PET + CT (PC), PET (P), and CT (C)-using pre-treatment [¹⁸F]FDG PET/CT scans. For comparison, we also constructed a logistic model incorporating clinical factors (clinical model). The DLS were evaluated as radiological markers for survival stratification, and nomograms for predicting survival were constructed. The models demonstrated accurate prediction of PD-L1 status. The areas under the curve (AUCs) for predicting PD-L1 status were as follows: CPC (0.927), PC (0.904), P (0.886), C (0.934), and the clinical model (0.603) in the training cohort; CPC (0.882), PC (0.848), P (0.770), C (0.745), and the clinical model (0.524) in the internal validation cohort; and CPC (0.843), PC (0.806), P (0.759), C (0.667), and the clinical model (0.671) in the external validation cohort. The CPC and PC models exhibited superior predictive performance. Survival analysis revealed that the DLS from most models effectively stratified overall survival and progression-free survival at appropriate cut-off points (P < 0.05), outperforming stratification based on PD-L1 status (combined positive score ≥ 10). Furthermore, incorporating model scores with clinical factors in nomograms enhanced the predictive probability of survival after immunotherapy. Deep learning models based on [¹⁸F]FDG PET/CT can accurately predict PD-L1 status in esophageal cancer patients.
The derived DLS can effectively stratify survival outcomes following immunotherapy, particularly when combined with clinical factors.
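Survival stratification by a DLS cut-off can be sketched as below. The dichotomization mirrors what the abstract describes; the median-survival summary is a deliberate simplification (the study's analysis would use Kaplan-Meier estimates and a log-rank test, which handle censoring):

```python
def stratify_by_dls(patients, cutoff):
    # Dichotomize patients into high- and low-risk groups at a DLS cut-off.
    high = [p for p in patients if p["dls"] >= cutoff]
    low = [p for p in patients if p["dls"] < cutoff]
    return high, low

def median_survival(group):
    # Crude median of observed survival times, ignoring censoring;
    # a real analysis would use Kaplan-Meier / log-rank methods.
    times = sorted(p["os_months"] for p in group)
    n = len(times)
    return times[n // 2] if n % 2 else 0.5 * (times[n // 2 - 1] + times[n // 2])
```

A well-chosen cut-off should separate the groups' survival distributions, which is what the reported P < 0.05 stratification reflects.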

An effective brain stroke diagnosis strategy based on feature extraction and hybrid classifier.

Elsayed MS, Saleh GA, Saleh AI, Khalil AT

pubmed logopapers Aug 14 2025
Stroke is a leading cause of death and long-term disability worldwide, and early detection remains a significant clinical challenge. This study proposes an Effective Brain Stroke Diagnosis Strategy (EBDS). The hybrid deep learning framework integrates Vision Transformer (ViT) and VGG16 to enable accurate and interpretable stroke detection from CT images. The model was trained and evaluated using a publicly available dataset from Kaggle, achieving impressive results: a test accuracy of 99.6%, a precision of 1.00 for normal cases and 0.98 for stroke cases, a recall of 0.99 for normal cases and 1.00 for stroke cases, and an overall F1-score of 0.99. These results demonstrate the robustness and reliability of the EBDS model, which outperforms several recent state-of-the-art methods. To enhance clinical trust, the model incorporates explainability techniques, such as Grad-CAM and LIME, which provide visual insights into its decision-making process. The EBDS framework is designed for real-time application in emergency settings, offering both high diagnostic performance and interpretability. This work addresses a critical research gap in early brain stroke diagnosis and contributes a scalable, explainable, and clinically relevant solution for medical imaging diagnostics.
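How EBDS fuses the ViT and VGG16 branches is not detailed in the abstract; late fusion by feature concatenation, sketched below, is one common assumption, and the linear classifier head is likewise illustrative:

```python
def hybrid_features(image, vit_extract, vgg_extract):
    # Late fusion (assumed): concatenate the two backbones' feature vectors
    # before a shared classifier head.
    return list(vit_extract(image)) + list(vgg_extract(image))

def classify(features, weights, bias=0.0):
    # Minimal linear head over the fused features (1 = stroke, 0 = normal).
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score >= 0.0 else 0
```

In the real model, `vit_extract` and `vgg_extract` would be pretrained networks and the head would be trained end-to-end; Grad-CAM and LIME then attribute the head's decision back to image regions.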

Cross-view Generalized Diffusion Model for Sparse-view CT Reconstruction

Jixiang Chen, Yiqun Lin, Yi Qin, Hualiang Wang, Xiaomeng Li

arxiv logopreprint Aug 14 2025
Sparse-view computed tomography (CT) reduces radiation exposure by subsampling projection views, but conventional reconstruction methods produce severe streak artifacts with undersampled data. While deep-learning-based methods enable single-step artifact suppression, they often produce over-smoothed results under significant sparsity. Though diffusion models improve reconstruction via iterative refinement and generative priors, they require hundreds of sampling steps and struggle with stability in highly sparse regimes. To tackle these concerns, we present the Cross-view Generalized Diffusion Model (CvG-Diff), which reformulates sparse-view CT reconstruction as a generalized diffusion process. Unlike existing diffusion approaches that rely on stochastic Gaussian degradation, CvG-Diff explicitly models image-domain artifacts caused by angular subsampling as a deterministic degradation operator, leveraging correlations across sparse-view CT at different sample rates. To address the inherent artifact propagation and inefficiency of sequential sampling in the generalized diffusion model, we introduce two innovations: Error-Propagating Composite Training (EPCT), which facilitates identifying error-prone regions and suppresses propagated artifacts, and Semantic-Prioritized Dual-Phase Sampling (SPDPS), an adaptive strategy that prioritizes semantic correctness before detail refinement. Together, these innovations enable CvG-Diff to achieve high-quality reconstructions with minimal iterations, achieving 38.34 dB PSNR and 0.9518 SSIM for 18-view CT using only 10 steps on the AAPM-LDCT dataset. Extensive experiments demonstrate the superiority of CvG-Diff over state-of-the-art sparse-view CT reconstruction methods. The code is available at https://github.com/xmed-lab/CvG-Diff.
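The deterministic-degradation idea can be sketched in a few lines. The zeroing operator and the restoration loop below are illustrative stand-ins for the paper's actual angular-subsampling operator and learned sampler, not implementations of EPCT or SPDPS:

```python
def degrade(clean, keep_every=4):
    # Deterministic stand-in for the image-domain artifact operator:
    # angular subsampling is mimicked by zeroing all but every k-th value.
    return [v if i % keep_every == 0 else 0.0 for i, v in enumerate(clean)]

def few_step_refine(degraded, restore, steps=2):
    # Few-step sampling: start from the deterministic degradation (not
    # Gaussian noise) and repeatedly apply a learned restoration operator.
    x = degraded
    for _ in range(steps):
        x = restore(x)
    return x
```

Because the starting point is a structured, deterministic degradation rather than noise, far fewer refinement steps are needed, which is the intuition behind the reported 10-step sampling.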

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

pubmed logopapers Aug 14 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
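Landmark-based registration reduces, in the simplest linear case, to a least-squares transform from corresponding points. A translation-only sketch is below; DINOMotion itself also estimates rotations, scaling, and nonlinear deformations from its detected correspondences:

```python
def register_translation(landmarks_fixed, landmarks_moving):
    # Least-squares translation between corresponding 2D landmark pairs:
    # for translation-only motion this is simply the mean displacement.
    n = len(landmarks_fixed)
    dx = sum(f[0] - m[0] for f, m in zip(landmarks_fixed, landmarks_moving)) / n
    dy = sum(f[1] - m[1] for f, m in zip(landmarks_fixed, landmarks_moving)) / n
    return (dx, dy)
```

The interpretability claim follows from this structure: the landmark pairs driving the estimate can be displayed directly on the sequential images.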

Performance Evaluation of Deep Learning for the Detection and Segmentation of Thyroid Nodules: Systematic Review and Meta-Analysis.

Ni J, You Y, Wu X, Chen X, Wang J, Li Y

pubmed logopapers Aug 14 2025
Thyroid cancer is one of the most common endocrine malignancies. Its incidence has steadily increased in recent years. Distinguishing between benign and malignant thyroid nodules (TNs) is challenging due to their overlapping imaging features. The rapid advancement of artificial intelligence (AI) in medical image analysis, particularly deep learning (DL) algorithms, has provided novel solutions for automated TN detection. However, existing studies exhibit substantial heterogeneity in diagnostic performance. Furthermore, no systematic evidence-based research comprehensively assesses the diagnostic performance of DL models in this field. This study aimed to execute a systematic review and meta-analysis to appraise the performance of DL algorithms in diagnosing TN malignancy, identify key factors influencing their diagnostic efficacy, and compare their accuracy with that of clinicians in image-based diagnosis. We systematically searched multiple databases, including PubMed, Cochrane, Embase, Web of Science, and IEEE, and identified 41 eligible studies for systematic review and meta-analysis. Based on the task type, studies were categorized into segmentation (n=14) and detection (n=27) tasks. The pooled sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were calculated for each group. Subgroup analyses were performed to examine the impact of transfer learning and compare model performance against clinicians. For segmentation tasks, the pooled sensitivity, specificity, and AUC were 82% (95% CI 79%-84%), 95% (95% CI 92%-96%), and 0.91 (95% CI 0.89-0.94), respectively. For detection tasks, the pooled sensitivity, specificity, and AUC were 91% (95% CI 89%-93%), 89% (95% CI 86%-91%), and 0.96 (95% CI 0.93-0.97), respectively. Some studies demonstrated that DL models could achieve diagnostic performance comparable with, or even exceeding, that of clinicians in certain scenarios. 
The application of transfer learning contributed to improved model performance. DL algorithms exhibit promising diagnostic accuracy in TN imaging, highlighting their potential as auxiliary diagnostic tools. However, current studies are limited by suboptimal methodological design, inconsistent image quality across datasets, and insufficient external validation, which may introduce bias. Future research should enhance methodological standardization, improve model interpretability, and promote transparent reporting to facilitate the sustainable clinical translation of DL-based solutions.
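Pooling sensitivities or specificities across studies is commonly done on the logit scale with inverse-variance weights. The fixed-effect sketch below illustrates that mechanism; the review may instead use a bivariate random-effects model, which jointly models sensitivity and specificity:

```python
import math

def pooled_proportion(events, totals):
    # Fixed-effect inverse-variance pooling of proportions on the logit
    # scale, then back-transformed. var(logit p) ~= 1/events + 1/non-events.
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = e / n
        logits.append(math.log(p / (1.0 - p)))
        weights.append(1.0 / (1.0 / e + 1.0 / (n - e)))
    pooled = sum(w * l for w, l in zip(logits, weights)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))
```

With a single study of 80 true positives in 100 diseased cases, the pooled sensitivity is just 0.80; with multiple studies, larger and more balanced studies receive proportionally more weight.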

Data-Driven Abdominal Phenotypes of Type 2 Diabetes in Lean, Overweight, and Obese Cohorts

Lucas W. Remedios, Chloe Choe, Trent M. Schwartz, Dingjie Su, Gaurav Rudravaram, Chenyu Gao, Aravind R. Krishnan, Adam M. Saunders, Michael E. Kim, Shunxing Bao, Alvin C. Powers, Bennett A. Landman, John Virostko

arxiv logopreprint Aug 14 2025
Purpose: Although elevated BMI is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in others with obesity suggests that detailed body composition may uncover abdominal phenotypes of type 2 diabetes. With AI, we can now extract detailed measurements of size, shape, and fat content from abdominal structures in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data. Approach: To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort (n = 1,728) and once on lean (n = 497), overweight (n = 611), and obese (n = 620) subgroups separately. Briefly, our experimental design transforms abdominal scans into collections of explainable measurements through segmentation, classifies type 2 diabetes through a cross-validated random forest, measures how features contribute to model-estimated risk or protection through SHAP analysis, groups scans by shared model decision patterns (clustering from SHAP), and links back to anatomical differences (classification). Results: The random forests achieved mean AUCs of 0.72-0.74. There were shared type 2 diabetes signatures in each group: fatty skeletal muscle, older age, greater visceral and subcutaneous fat, and a smaller or fat-laden pancreas. Univariate logistic regression confirmed the direction of 14-18 of the top 20 predictors within each subgroup (p < 0.05). Conclusions: Our findings suggest that abdominal drivers of type 2 diabetes may be consistent across weight classes.
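Grouping scans by shared model decision patterns can be approximated by bucketing SHAP vectors. The sign-bucketing below is a simplified stand-in for the paper's SHAP-based clustering (which operates on the continuous values), and the threshold-at-zero convention is my assumption:

```python
def shap_sign_patterns(shap_values):
    # Reduce each scan's SHAP vector to a tuple of contribution signs
    # (+1 pushes toward risk, -1 toward protection, 0 negligible) and
    # bucket scans with identical patterns together.
    groups = {}
    for idx, row in enumerate(shap_values):
        key = tuple((v > 0) - (v < 0) for v in row)
        groups.setdefault(key, []).append(idx)
    return groups
```

Scans that land in the same bucket share a qualitative decision pattern, which can then be linked back to anatomical differences, as the Approach describes.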
