
Instantaneous T<sub>2</sub> Mapping via Reduced Field of View Multiple Overlapping-Echo Detachment Imaging: Application in Free-Breathing Abdominal and Myocardial Imaging.

Dai C, Cai C, Wu J, Zhu L, Qu X, Yang Q, Zhou J, Cai S

pubmed logopapers · Aug 14, 2025
Quantitative magnetic resonance imaging (qMRI) has attracted increasing attention in clinical diagnosis and medical sciences due to its capability to non-invasively characterize tissue properties. Nevertheless, most qMRI methods are time-consuming and sensitive to motion, making them inadequate for quantifying organs subject to physiological movement. In this context, the single-shot multiple overlapping-echo detachment (MOLED) imaging technique has been proposed, but its acquisition efficiency and image quality are limited when the field of view (FOV) is smaller than the object, especially for abdominal organs and the myocardium. A novel single-shot reduced-FOV qMRI method was developed based on MOLED (termed rFOV-MOLED). This method combines zonal oblique multislice (ZOOM) and outer volume suppression (OVS) techniques to reduce the FOV and suppress signals outside it. A deep neural network was trained on synthetic data generated from Bloch simulations to achieve high-quality T<sub>2</sub> map reconstruction from rFOV-MOLED images. Numerical simulation, water phantom, and in vivo abdominal and myocardial imaging experiments were performed to evaluate the method. The coefficient of variation and repeatability index were used to evaluate reproducibility. Multiple statistical analyses were used to evaluate the accuracy and significance of the method, including linear regression, Bland-Altman analysis, the Wilcoxon signed-rank test, and the Mann-Whitney U test, with a p-value significance level of 0.05. Experimental results show that rFOV-MOLED achieved excellent performance in reducing aliasing signals due to FOV reduction. It provided T<sub>2</sub> maps closely resembling the reference maps. Moreover, it yielded finer tissue details than MOLED and was highly repeatable. rFOV-MOLED can rapidly and stably provide accurate T<sub>2</sub> maps for the myocardium and specific abdominal organs with improved acquisition efficiency and image quality.
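The reproducibility claims above rest on two simple statistics, the coefficient of variation and a repeatability index. As a rough illustration (not the authors' code; the paper's exact repeatability definition may differ, and the Bland-Altman repeatability coefficient is assumed here), both can be computed from per-ROI T<sub>2</sub> estimates:

```python
import math

def coefficient_of_variation(values):
    """Coefficient of variation (%) of repeated measurements: SD / mean * 100."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * math.sqrt(var) / mean

def repeatability_index(test, retest):
    """Bland-Altman repeatability coefficient: 1.96 * SD of test-retest
    differences (assumed definition; the paper may use a variant)."""
    diffs = [a - b for a, b in zip(test, retest)]
    mean_d = sum(diffs) / len(diffs)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd_d
```

Lower values of both statistics indicate more repeatable T<sub>2</sub> estimates across repositioned scans.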

Deep learning-based non-invasive prediction of PD-L1 status and immunotherapy survival stratification in esophageal cancer using [<sup>18</sup>F]FDG PET/CT.

Xie F, Zhang M, Zheng C, Zhao Z, Wang J, Li Y, Wang K, Wang W, Lin J, Wu T, Wang Y, Chen X, Li Y, Zhu Z, Wu H, Li Y, Liu Q

pubmed logopapers · Aug 14, 2025
This study aimed to develop and validate deep learning models using [<sup>18</sup>F]FDG PET/CT to predict PD-L1 status in esophageal cancer (EC) patients. Additionally, we assessed the potential of derived deep learning model scores (DLS) for survival stratification in immunotherapy. In this retrospective study, we included 331 EC patients from two centers, dividing them into training, internal validation, and external validation cohorts. Fifty patients who received immunotherapy were followed up. We developed four 3D ResNet10-based models-PET + CT + clinical factors (CPC), PET + CT (PC), PET (P), and CT (C)-using pre-treatment [<sup>18</sup>F]FDG PET/CT scans. For comparison, we also constructed a logistic model incorporating clinical factors (clinical model). The DLS were evaluated as radiological markers for survival stratification, and nomograms for predicting survival were constructed. The models demonstrated accurate prediction of PD-L1 status. The areas under the curve (AUCs) for predicting PD-L1 status were as follows: CPC (0.927), PC (0.904), P (0.886), C (0.934), and the clinical model (0.603) in the training cohort; CPC (0.882), PC (0.848), P (0.770), C (0.745), and the clinical model (0.524) in the internal validation cohort; and CPC (0.843), PC (0.806), P (0.759), C (0.667), and the clinical model (0.671) in the external validation cohort. The CPC and PC models exhibited superior predictive performance. Survival analysis revealed that the DLS from most models effectively stratified overall survival and progression-free survival at appropriate cut-off points (P < 0.05), outperforming stratification based on PD-L1 status (combined positive score ≥ 10). Furthermore, incorporating model scores with clinical factors in nomograms enhanced the predictive probability of survival after immunotherapy. Deep learning models based on [<sup>18</sup>F]FDG PET/CT can accurately predict PD-L1 status in esophageal cancer patients. 
The derived DLS can effectively stratify survival outcomes following immunotherapy, particularly when combined with clinical factors.
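The model comparisons above hinge on the area under the ROC curve. As a minimal sketch (hypothetical labels and scores, not the study's data), the AUC equals the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one:

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive scores higher (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

With this reading, the CPC model's training AUC of 0.927 means that in roughly 93% of PD-L1-positive/negative patient pairs, the model scores the positive patient higher.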

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

pubmed logopapers · Aug 14, 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
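For readers unfamiliar with the two reported metrics, here is a minimal sketch (toy point sets, not the study's segmentation masks) of the Dice score and the symmetric Hausdorff distance:

```python
def dice(a, b):
    """Dice similarity between two binary masks given as sets of pixel coords."""
    inter = len(a & b)
    return 2.0 * inter / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2D point sets (Euclidean):
    the worst-case distance from a point in one set to the other set."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(x, y):
        return max(min(d(p, q) for q in y) for p in x)
    return max(directed(a, b), directed(b, a))
```

Dice rewards overall overlap, while the Hausdorff distance penalizes the single worst boundary error, which is why the two are usually reported together.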

Medico 2025: Visual Question Answering for Gastrointestinal Imaging

Sushant Gautam, Vajira Thambawita, Michael Riegler, Pål Halvorsen, Steven Hicks

arxiv logopreprint · Aug 14, 2025
The Medico 2025 challenge addresses Visual Question Answering (VQA) for Gastrointestinal (GI) imaging, organized as part of the MediaEval task series. The challenge focuses on developing Explainable Artificial Intelligence (XAI) models that answer clinically relevant questions based on GI endoscopy images while providing interpretable justifications aligned with medical reasoning. It introduces two subtasks: (1) answering diverse types of visual questions using the Kvasir-VQA-x1 dataset, and (2) generating multimodal explanations to support clinical decision-making. The Kvasir-VQA-x1 dataset, created from 6,500 images and 159,549 complex question-answer (QA) pairs, serves as the benchmark for the challenge. By combining quantitative performance metrics and expert-reviewed explainability assessments, this task aims to advance trustworthy Artificial Intelligence (AI) in medical image analysis. Instructions, data access, and an updated guide for participation are available in the official competition repository: https://github.com/simula/MediaEval-Medico-2025

Performance Evaluation of Deep Learning for the Detection and Segmentation of Thyroid Nodules: Systematic Review and Meta-Analysis.

Ni J, You Y, Wu X, Chen X, Wang J, Li Y

pubmed logopapers · Aug 14, 2025
Thyroid cancer is one of the most common endocrine malignancies, and its incidence has steadily increased in recent years. Distinguishing between benign and malignant thyroid nodules (TNs) is challenging due to their overlapping imaging features. The rapid advancement of artificial intelligence (AI) in medical image analysis, particularly deep learning (DL) algorithms, has provided novel solutions for automated TN detection. However, existing studies exhibit substantial heterogeneity in diagnostic performance, and no systematic evidence-based research has comprehensively assessed the diagnostic performance of DL models in this field. This study aimed to conduct a systematic review and meta-analysis appraising the performance of DL algorithms in diagnosing TN malignancy, identify key factors influencing their diagnostic efficacy, and compare their accuracy with that of clinicians in image-based diagnosis. We systematically searched multiple databases, including PubMed, Cochrane, Embase, Web of Science, and IEEE, and identified 41 eligible studies for systematic review and meta-analysis. Based on task type, studies were categorized into segmentation (n=14) and detection (n=27) tasks. The pooled sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for each group. Subgroup analyses were performed to examine the impact of transfer learning and to compare model performance against clinicians. For segmentation tasks, the pooled sensitivity, specificity, and AUC were 82% (95% CI 79%-84%), 95% (95% CI 92%-96%), and 0.91 (95% CI 0.89-0.94), respectively. For detection tasks, they were 91% (95% CI 89%-93%), 89% (95% CI 86%-91%), and 0.96 (95% CI 0.93-0.97), respectively. Some studies demonstrated that DL models could achieve diagnostic performance comparable with, or even exceeding, that of clinicians in certain scenarios. The application of transfer learning contributed to improved model performance. DL algorithms exhibit promising diagnostic accuracy in TN imaging, highlighting their potential as auxiliary diagnostic tools. However, current studies are limited by suboptimal methodological design, inconsistent image quality across datasets, and insufficient external validation, which may introduce bias. Future research should enhance methodological standardization, improve model interpretability, and promote transparent reporting to facilitate the sustainable clinical translation of DL-based solutions.
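Pooling sensitivities or specificities across studies, as done above, is typically performed with a bivariate random-effects model. As a simplified stand-in (fixed-effect inverse-variance pooling on the logit scale, with hypothetical study counts), the basic mechanics look like this:

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale, with a 0.5 continuity correction. This is a simplified stand-in
    for the bivariate model usually used in diagnostic meta-analysis."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)
        logits.append(math.log(p / (1.0 - p)))
        # inverse of the approximate variance of the logit
        weights.append(1.0 / (1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))
```

A real meta-analysis would add a between-study variance term (random effects) and model sensitivity and specificity jointly; this sketch only shows why larger studies dominate the pooled estimate.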

Using deep learning methods to shorten acquisition time in children's renal cortical imaging

Gan, C., Niu, P., Pan, B., Chen, X., Xu, L., Huang, K., Chen, H., Wang, Q., Ding, L., Yin, Y., Wu, S., Gong, N.-j.

medrxiv logopreprint · Aug 13, 2025
Purpose: This study evaluates the capability of diffusion-based generative models to reconstruct diagnostic-quality renal cortical images from reduced-acquisition-time pediatric 99mTc-DMSA scintigraphy. Materials and Methods: A prospective study was conducted on 99mTc-DMSA scintigraphic data from consecutive pediatric patients with suspected urinary tract infections (UTIs) acquired between November 2023 and October 2024. A diffusion model, SR3, was trained to reconstruct standard-quality images from simulated reduced-count data. Performance was benchmarked against U-Net, U2-Net, Restormer, and a Poisson-based variant of SR3 (PoissonSR3). Quantitative assessment employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), Fréchet inception distance (FID), and learned perceptual image patch similarity (LPIPS). Renal contrast and anatomic fidelity were assessed using the target-to-background ratio (TBR) and the Dice similarity coefficient, respectively. Wilcoxon signed-rank tests were used for statistical analysis. Results: The training cohort comprised 94 participants (mean age 5.16 ± 3.90 years; 48 male) with corresponding Poisson-downsampled images, while the test cohort included 36 patients (mean age 6.39 ± 3.16 years; 14 male). SR3 outperformed all models, achieving the best PSNR (30.976 ± 2.863, P < .001), SSIM (0.760 ± 0.064, P < .001), FID (25.687 ± 16.223, P < .001), and LPIPS (0.055 ± 0.022, P < .001). Further, SR3 maintained excellent renal contrast (TBR: left kidney 7.333 ± 2.176; right kidney 7.156 ± 1.808) and anatomical consistency (Dice coefficient: left kidney 0.749 ± 0.200; right kidney 0.745 ± 0.176), representing significant improvements over the fast scan (all P < .001). While Restormer, U-Net, and PoissonSR3 showed statistically significant improvements across all metrics, U2-Net's improvement was limited to SSIM and left-kidney TBR (P < .001). Conclusion: SR3 enables high-quality reconstruction of 99mTc-DMSA images from 4-fold accelerated acquisitions, demonstrating potential for a substantial reduction in imaging duration while preserving both diagnostic image quality and renal anatomical integrity.
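Of the image-quality metrics reported above, PSNR is the simplest to reproduce. A minimal sketch (flat pixel lists and an assumed 8-bit dynamic range, not the study's pipeline):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a
    reconstructed image, both given as flat lists of pixel values."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better: each halving of the root-mean-square error adds about 6 dB, so the roughly 31 dB reported for SR3 reflects a substantially lower pixel-wise error than the fast scan.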

Applications of artificial intelligence in liver cancer: A scoping review.

Chierici A, Lareyre F, Iannelli A, Salucki B, Goffart S, Guzzi L, Poggi E, Delingette H, Raffort J

pubmed logopapers · Aug 13, 2025
This review explores the application of artificial intelligence (AI) in managing primary liver cancer, focusing on recent advancements. AI, particularly machine learning (ML) and deep learning (DL), shows potential in improving screening, diagnosis, treatment planning, efficacy assessment, prognosis prediction, and follow-up, all crucial elements given the high mortality of liver cancer. A systematic search was conducted in the PubMed, Scopus, Embase, and Web of Science databases, focusing on original research published until June 2024 on AI's clinical applications in liver cancer. Studies that were not relevant or lacked clinical evaluation were excluded. Of 13,122 screened articles, 62 were selected for full review. The studies highlight significant improvements in detecting hepatocellular carcinoma and intrahepatic cholangiocarcinoma through AI. DL models show high sensitivity and specificity, particularly in early detection. In diagnosis, AI models using CT and MRI data improve precision in distinguishing benign from malignant lesions through multimodal data integration. Recent AI models outperform earlier non-neural-network versions, though a gap remains between development and clinical implementation. Many models lack thorough clinical applicability assessments and external validation. AI integration in primary liver cancer management is promising but requires rigorous development and validation practices to fully enhance clinical outcomes.

In vivo variability of MRI radiomics features in prostate lesions assessed by a test-retest study with repositioning.

Zhang KS, Neelsen CJO, Wennmann M, Hielscher T, Kovacs B, Glemser PA, Görtz M, Stenzinger A, Maier-Hein KH, Huber J, Schlemmer HP, Bonekamp D

pubmed logopapers · Aug 13, 2025
Despite academic success, radiomics-based machine learning algorithms have not reached clinical practice, partially due to limited repeatability/reproducibility. To address this issue, this work aims to identify a stable subset of radiomics features in prostate MRI for radiomics modelling. A prospective study was conducted in 43 patients who received a clinical MRI examination and a research exam with repetition of T2-weighted and two different diffusion-weighted imaging (DWI) sequences with repositioning in between. Radiomics feature (RF) extraction was performed from MRI segmentations accounting for intra-rater and inter-rater effects, and three different image normalization methods were compared. Stability of RFs was assessed using the concordance correlation coefficient (CCC) for different comparisons: rater effects, inter-scan (before and after repositioning) and inter-sequence (between the two diffusion-weighted sequences) variability. In total, only 64 out of 321 (~ 20%) extracted features demonstrated stability, defined as CCC ≥ 0.75 in all settings (5 high-b value, 7 ADC- and 52 T2-derived features). For DWI, primarily intensity-based features proved stable with no shape feature passing the CCC threshold. T2-weighted images possessed the largest number of stable features with multiple shape (7), intensity-based (7) and texture features (28). Z-score normalization for high-b value images and muscle-normalization for T2-weighted images were identified as suitable.
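The stability screening above uses the concordance correlation coefficient. A minimal sketch of Lin's CCC in its population-moment form (toy values, not the study's radiomics features):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.
    Penalizes both poor correlation and systematic shifts in mean or scale."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx + sy + (mx - my) ** 2)
```

Unlike the Pearson correlation, the CCC drops when one measurement is systematically offset from the other, which is why it suits test-retest agreement: a feature only passes the study's CCC ≥ 0.75 threshold if repeated scans reproduce both its ranking and its absolute value.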

CT-Based radiomics and deep learning for the preoperative prediction of peritoneal metastasis in ovarian cancers.

Liu Y, Yin H, Li J, Wang Z, Wang W, Cui S

pubmed logopapers · Aug 13, 2025
To develop a CT-based deep learning radiomics nomogram (DLRN) for the preoperative prediction of peritoneal metastasis (PM) in patients with ovarian cancer (OC). A total of 296 patients with OC were randomly divided into a training dataset (N = 207) and a test dataset (N = 89). Radiomics and DL features were extracted from the CT images of each patient. Specifically, radiomics features were extracted from the 3D tumor regions, while DL features were extracted from the 2D slice with the largest tumor region of interest (ROI). The least absolute shrinkage and selection operator (LASSO) algorithm was used to select radiomics and DL features, and a radiomics score (Radscore) and a DL score (Deepscore) were calculated. Multivariate logistic regression was employed to construct the clinical model. The important clinical factors and the radiomics and DL features were integrated to build the DLRN. The predictive performance of the models was evaluated using the area under the receiver operating characteristic curve (AUC) and DeLong's test. Nine radiomics features and 10 DL features were selected. Carbohydrate antigen 125 (CA-125) was the independent clinical predictor. In the training dataset, the AUC values of the clinical, radiomics, and DL models were 0.618, 0.842, and 0.860, respectively. In the test dataset, the AUC values of these models were 0.591, 0.819, and 0.917, respectively. The DLRN outperformed the other models in both the training and test datasets, with AUCs of 0.943 and 0.951, respectively. Decision curve analysis and calibration curves showed that the DLRN provided relatively high clinical benefit in both datasets. The DLRN demonstrated superior performance in predicting preoperative PM in patients with OC. 
This model offers a highly accurate and noninvasive tool for preoperative prediction, with substantial clinical potential to provide critical information for individualized treatment planning, thereby enabling more precise and effective management of OC patients.
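The LASSO feature-selection step used in pipelines like the one above can be sketched with plain coordinate descent. This is a toy implementation minimizing (1/2n)·||y − Xb||² + λ·||b||₁ (not the authors' code; production work would use an optimized solver):

```python
def soft_threshold(z, g):
    """Proximal operator of the L1 penalty: shrink z toward zero by g."""
    return z - g if z > g else z + g if z < -g else 0.0

def lasso(X, y, lam, iters=500):
    """Coordinate-descent LASSO. X is a list of rows, y a list of targets.
    Assumes every column of X has at least one nonzero entry."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (excluding j)
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / z
    return b
```

The L1 penalty zeroes out weak coefficients entirely, which is how a LASSO fit reduces hundreds of candidate radiomics/DL features to the small set (here 9 + 10) whose weighted sum forms a score such as a Radscore.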