
Comparative evaluation of supervised and unsupervised deep learning strategies for denoising hyperpolarized <sup>129</sup>Xe lung MRI.

Bdaiwi AS, Willmering MM, Hussain R, Hysinger E, Woods JC, Walkup LL, Cleveland ZI

PubMed · Aug 14 2025
Reduced signal-to-noise ratio (SNR) in hyperpolarized <sup>129</sup>Xe MR images can compromise accurate quantification for research and diagnostic evaluations. This study therefore explores supervised deep learning (DL) denoising approaches, traditional (Trad) and Noise2Noise (N2N), and an unsupervised Noise2Void (N2V) approach for <sup>129</sup>Xe MR imaging. The DL denoising frameworks were trained and tested on 952 <sup>129</sup>Xe MRI data sets (421 ventilation, 125 diffusion-weighted, and 406 gas-exchange acquisitions) from healthy subjects and participants with cardiopulmonary conditions, and were compared with the block-matching 3D (BM3D) denoising technique. Evaluation involved mean signal, noise standard deviation (SD), SNR, and sharpness. Ventilation defect percentage (VDP), apparent diffusion coefficient (ADC), and membrane uptake, red blood cell (RBC) transfer, and RBC:Membrane were also evaluated for ventilation, diffusion, and gas-exchange images, respectively. All denoising methods significantly reduced noise SDs and enhanced SNR (p < 0.05) across all imaging types. The traditional ventilation model (Trad<sub>vent</sub>) improved sharpness in ventilation images but underestimated VDP (bias = -1.37%) relative to raw images, whereas N2N<sub>vent</sub> overestimated VDP (bias = +1.88%). BM3D and N2V<sub>vent</sub> showed minimal VDP bias (≤ 0.35%). Denoising significantly reduced ADC mean and SD (p < 0.05, bias ≤ -0.63 × 10<sup>-2</sup>). Trad<sub>vent</sub> and N2N<sub>vent</sub> increased mean membrane uptake and RBC transfer (p < 0.001) with no change in RBC:Membrane. Denoising also reduced the SDs of all gas-exchange metrics (p < 0.01). Low SNR may limit the potential of <sup>129</sup>Xe MRI for clinical diagnosis and lung function assessment; the supervised and unsupervised DL denoising methods evaluated here enhanced <sup>129</sup>Xe image quality, offering promise for improved clinical interpretation and diagnosis.
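As a concrete illustration of the unsupervised N2V idea: the network is trained only on noisy images by masking random pixels and predicting them from their neighbors. A minimal numpy sketch of that blind-spot masking step (function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def n2v_mask(img, n_masked=64, radius=2, rng=None):
    """Blind-spot masking as in Noise2Void: replace randomly chosen
    pixels with a random neighbor, returning the masked image, the
    masked coordinates, and the original target values to regress."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    ys = rng.integers(0, h, n_masked)
    xs = rng.integers(0, w, n_masked)
    masked = img.copy()
    for y, x in zip(ys, xs):
        # pick a neighbor inside the image, never the pixel itself
        while True:
            dy, dx = rng.integers(-radius, radius + 1, 2)
            ny, nx = y + dy, x + dx
            if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                break
        masked[y, x] = img[ny, nx]
    return masked, (ys, xs), img[ys, xs]
```

A denoising network would then be trained so that its output at the masked coordinates matches the returned target values.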

AI-based prediction of best-corrected visual acuity in patients with multiple retinal diseases using multimodal medical imaging.

Dong L, Gao W, Niu L, Deng Z, Gong Z, Li HY, Fang LJ, Shao L, Zhang RH, Zhou WD, Ma L, Wei WB

PubMed · Aug 14 2025
This study evaluated the performance of artificial intelligence (AI) algorithms in predicting best-corrected visual acuity (BCVA) for patients with multiple retinal diseases, using multimodal medical imaging including macular optical coherence tomography (OCT), optic disc OCT and fundus images. The goal was to enhance clinical BCVA evaluation efficiency and precision. A retrospective study used data from 2545 patients (4028 eyes) for training, 896 (1006 eyes) for testing and 196 (200 eyes) for internal validation, with an external prospective dataset of 741 patients (1381 eyes). Single-modality analyses employed different backbone networks and feature fusion methods, while multimodal fusion combined modalities using average aggregation, concatenation/reduction and maximum feature selection. Predictive accuracy was measured by mean absolute error (MAE), root mean squared error (RMSE) and R² score. Macular OCT achieved better single-modality prediction than optic disc OCT, with MAE of 3.851 vs 4.977 and RMSE of 7.844 vs 10.026. Fundus images showed an MAE of 3.795 and RMSE of 7.954. Multimodal fusion significantly improved accuracy, with the best results using average aggregation, achieving an MAE of 2.865, RMSE of 6.229 and R² of 0.935. External validation yielded an MAE of 8.38 and RMSE of 10.62. Multimodal fusion provided the most accurate BCVA predictions, demonstrating AI's potential to improve clinical evaluation. However, challenges remain regarding disease diversity and applicability in resource-limited settings.
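The error metrics the study reports (MAE, RMSE, R²) and its best-performing fusion rule (average aggregation of per-modality features) are simple to state in code; a small numpy sketch, with function names chosen here for illustration:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE and R^2, the scores used to rate BCVA predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    ss_res = (err ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

def fuse_average(*feature_vectors):
    """Average-aggregation fusion of per-modality feature vectors,
    the fusion rule the abstract reports as working best."""
    return np.mean(np.stack(feature_vectors), axis=0)
```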

Optimized AI-based Neural Decoding from BOLD fMRI Signal for Analyzing Visual and Semantic ROIs in the Human Visual System.

Veronese L, Moglia A, Pecco N, Della Rosa P, Scifo P, Mainardi LT, Cerveri P

PubMed · Aug 14 2025
AI-based neural decoding reconstructs visual perception by leveraging generative models to map brain activity, measured through functional MRI (fMRI), into the observed visual stimulus. Traditionally, ridge linear models transform fMRI into a latent space, which is then decoded using variational autoencoders (VAE) or latent diffusion models (LDM). Owing to the complexity and noisiness of fMRI data, newer approaches split the reconstruction into two sequential stages: the first provides a rough visual approximation using a VAE, and the second incorporates semantic information through an LDM guided by contrastive language-image pre-training (CLIP) embeddings. This work addressed key scientific and technical gaps of two-stage neural decoding by: 1) implementing a gated recurrent unit (GRU)-based architecture to establish a non-linear mapping between the fMRI signal and the VAE latent space, 2) optimizing the dimensionality of the VAE latent space, 3) systematically evaluating the contribution of the first reconstruction stage, and 4) analyzing the impact of different brain regions of interest (ROIs) on reconstruction quality. Experiments on the Natural Scenes Dataset, containing 73,000 unique natural images along with fMRI from eight subjects, demonstrated that the proposed architecture maintained competitive performance while reducing the complexity of its first stage by 85%. The sensitivity analysis showed that the first reconstruction stage is essential for preserving high structural similarity in the final reconstructions. Restricting the analysis to semantic ROIs, while excluding early visual areas, diminished visual coherence, though semantic content was preserved. Inter-subject repeatability across ROIs was about 92% and 98% for visual and semantic metrics, respectively. This study represents a key step toward optimized neural decoding architectures leveraging non-linear models for stimulus prediction. The sensitivity analysis highlighted the interplay between the two reconstruction stages, while the ROI-based analysis provided strong evidence that the two-stage AI model reflects the brain's hierarchical processing of visual information.
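The GRU at the heart of the proposed non-linear fMRI-to-latent mapping can be sketched compactly; the cell below uses random stand-in weights and illustrative dimensions, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell in numpy, illustrating the kind of non-linear
    fMRI -> VAE-latent mapping the paper describes; the weights are
    random stand-ins, not trained parameters."""
    def __init__(self, d_in, d_hidden, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d_hidden)
        self.Wz = rng.uniform(-s, s, (d_hidden, d_in + d_hidden))
        self.Wr = rng.uniform(-s, s, (d_hidden, d_in + d_hidden))
        self.Wh = rng.uniform(-s, s, (d_hidden, d_in + d_hidden))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)            # update gate
        r = sigmoid(self.Wr @ xh)            # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde     # new hidden state
```

In a decoding pipeline, the hidden state after the final step would be projected into the VAE latent space.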

Performance Evaluation of Deep Learning for the Detection and Segmentation of Thyroid Nodules: Systematic Review and Meta-Analysis.

Ni J, You Y, Wu X, Chen X, Wang J, Li Y

PubMed · Aug 14 2025
Thyroid cancer is one of the most common endocrine malignancies. Its incidence has steadily increased in recent years. Distinguishing between benign and malignant thyroid nodules (TNs) is challenging due to their overlapping imaging features. The rapid advancement of artificial intelligence (AI) in medical image analysis, particularly deep learning (DL) algorithms, has provided novel solutions for automated TN detection. However, existing studies exhibit substantial heterogeneity in diagnostic performance. Furthermore, no systematic evidence-based research comprehensively assesses the diagnostic performance of DL models in this field. This study aimed to conduct a systematic review and meta-analysis to appraise the performance of DL algorithms in diagnosing TN malignancy, identify key factors influencing their diagnostic efficacy, and compare their accuracy with that of clinicians in image-based diagnosis. We systematically searched multiple databases, including PubMed, Cochrane, Embase, Web of Science, and IEEE, and identified 41 eligible studies for systematic review and meta-analysis. Based on the task type, studies were categorized into segmentation (n=14) and detection (n=27) tasks. The pooled sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for each group. Subgroup analyses were performed to examine the impact of transfer learning and compare model performance against clinicians. For segmentation tasks, the pooled sensitivity, specificity, and AUC were 82% (95% CI 79%-84%), 95% (95% CI 92%-96%), and 0.91 (95% CI 0.89-0.94), respectively. For detection tasks, the pooled sensitivity, specificity, and AUC were 91% (95% CI 89%-93%), 89% (95% CI 86%-91%), and 0.96 (95% CI 0.93-0.97), respectively. Some studies demonstrated that DL models could achieve diagnostic performance comparable with, or even exceeding, that of clinicians in certain scenarios. The application of transfer learning contributed to improved model performance. DL algorithms exhibit promising diagnostic accuracy in TN imaging, highlighting their potential as auxiliary diagnostic tools. However, current studies are limited by suboptimal methodological design, inconsistent image quality across datasets, and insufficient external validation, which may introduce bias. Future research should enhance methodological standardization, improve model interpretability, and promote transparent reporting to facilitate the sustainable clinical translation of DL-based solutions.
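For intuition, per-study sensitivities or specificities are commonly pooled on the logit scale with inverse-variance weights; the sketch below is a simplified fixed-effect stand-in for the bivariate random-effects models such meta-analyses typically use:

```python
import numpy as np

def pooled_proportion(successes, totals):
    """Fixed-effect inverse-variance pooling of per-study proportions
    (e.g. sensitivities) on the logit scale; a simplified stand-in
    for the bivariate models used in diagnostic meta-analysis."""
    k, n = np.asarray(successes, float), np.asarray(totals, float)
    p = k / n
    logit = np.log(p / (1 - p))
    var = 1.0 / (n * p * (1 - p))        # approx. variance of the logit
    w = 1.0 / var                        # inverse-variance weights
    pooled_logit = (w * logit).sum() / w.sum()
    return 1.0 / (1.0 + np.exp(-pooled_logit))
```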

A software ecosystem for brain tractometry processing, analysis, and insight.

Kruper J, Richie-Halford A, Qiao J, Gilmore A, Chang K, Grotheer M, Roy E, Caffarra S, Gomez T, Chou S, Cieslak M, Koudoro S, Garyfallidis E, Satterthwaite TD, Yeatman JD, Rokem A

PubMed · Aug 14 2025
Tractometry uses diffusion-weighted magnetic resonance imaging (dMRI) to assess physical properties of brain connections. Here, we present an integrative ecosystem of software that performs all steps of tractometry: post-processing of dMRI data, delineation of major white matter pathways, and modeling of the tissue properties within them. This ecosystem also provides a set of interoperable and extensible tools for visualization and interpretation of the results, extracting insights from these measurements. These include novel machine learning and statistical analysis methods adapted to the characteristic structure of tract-based data. We benchmark the performance of these statistical analysis methods in different datasets and analysis tasks, including hypothesis testing on group differences and predictive analysis of subject age. We also demonstrate that computational advances implemented in the software offer orders-of-magnitude acceleration. Taken together, these open-source software tools, freely available at https://tractometry.org, provide a transformative environment for the analysis of dMRI data.
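Tractometry's core measurement, resampling each streamline to a fixed number of nodes and reading out a tissue property along the bundle, can be sketched as follows (a simplified nearest-voxel version, not the ecosystem's actual implementation):

```python
import numpy as np

def tract_profile(scalar_map, streamlines, n_nodes=100):
    """Resample each streamline to n_nodes by arc length, look up a
    scalar tissue property (e.g. FA) at the nearest voxel of every
    node, and average across streamlines to get the bundle profile."""
    profiles = []
    for sl in streamlines:
        sl = np.asarray(sl, float)                    # (n_points, 3)
        seg = np.linalg.norm(np.diff(sl, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])
        t /= t[-1]                                    # arc-length in [0, 1]
        ti = np.linspace(0, 1, n_nodes)
        pts = np.stack([np.interp(ti, t, sl[:, k]) for k in range(3)],
                       axis=1)
        idx = np.clip(np.round(pts).astype(int), 0,
                      np.array(scalar_map.shape) - 1)  # nearest voxel
        profiles.append(scalar_map[idx[:, 0], idx[:, 1], idx[:, 2]])
    return np.mean(profiles, axis=0)                  # mean bundle profile
```

Group statistics and the machine-learning analyses mentioned above then operate on these per-node profiles rather than on whole-brain voxels.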

Lung-DDPM: Semantic Layout-guided Diffusion Models for Thoracic CT Image Synthesis.

Jiang Y, Lemarechal Y, Bafaro J, Abi-Rjeile J, Joubert P, Despres P, Manem V

PubMed · Aug 14 2025
With the rapid development of artificial intelligence (AI), AI-assisted medical imaging analysis demonstrates remarkable performance in early lung cancer screening. However, the costly annotation process and privacy concerns limit the construction of large-scale medical datasets, hampering the further application of AI in healthcare. To address the data scarcity in lung cancer screening, we propose Lung-DDPM, a thoracic CT image synthesis approach that effectively generates high-fidelity 3D synthetic CT images, which prove helpful in downstream lung nodule segmentation tasks. Our method is based on semantic layout-guided denoising diffusion probabilistic models (DDPM), enabling anatomically reasonable, seamless, and consistent sample generation even from incomplete semantic layouts. Our results suggest that the proposed method outperforms other state-of-the-art (SOTA) generative models in image quality evaluation and downstream lung nodule segmentation tasks. Specifically, Lung-DDPM achieved superior performance on our large validation cohort, with a Fréchet inception distance (FID) of 0.0047, maximum mean discrepancy (MMD) of 0.0070, and mean squared error (MSE) of 0.0024. These results were 7.4×, 3.1×, and 29.5× better than the second-best competitors, respectively. Furthermore, the lung nodule segmentation model, trained on a dataset combining real and Lung-DDPM-generated synthetic samples, attained a Dice Coefficient (Dice) of 0.3914 and sensitivity of 0.4393. This represents 8.8% and 18.6% improvements in Dice and sensitivity compared to the model trained solely on real samples. The experimental results highlight Lung-DDPM's potential for a broader range of medical imaging applications, such as general tumor segmentation, cancer survival estimation, and risk prediction. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM/.
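For context, the DDPM framework underlying Lung-DDPM admits a closed-form forward (noising) process; a sketch with a generic linear beta schedule (the paper's semantic-layout guidance is not shown):

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng=None):
    """Closed-form forward diffusion q(x_t | x_0):
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    rng = np.random.default_rng(rng)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps

# Generic linear schedule over 1000 steps (illustrative, not the paper's).
betas = np.linspace(1e-4, 0.02, 1000)
```

A DDPM is trained to predict `eps` from `xt` and `t`; sampling then reverses this process step by step.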

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

PubMed · Aug 14 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30 ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
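Once explicit landmark correspondences are available, a least-squares rigid (linear) registration follows in closed form via the Kabsch algorithm; a sketch of that step (DINOMotion also estimates nonlinear transformations, which this does not cover):

```python
import numpy as np

def rigid_from_landmarks(P, Q):
    """Least-squares rigid transform (Kabsch) mapping landmark set P
    onto Q, illustrating how explicit point correspondences, like the
    ones DINOMotion detects, determine an image registration."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T                     # rotation
    t = cq - R @ cp                        # translation
    return R, t
```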

Instantaneous T<sub>2</sub> Mapping via Reduced Field of View Multiple Overlapping-Echo Detachment Imaging: Application in Free-Breathing Abdominal and Myocardial Imaging.

Dai C, Cai C, Wu J, Zhu L, Qu X, Yang Q, Zhou J, Cai S

PubMed · Aug 14 2025
Quantitative magnetic resonance imaging (qMRI) has attracted increasing attention in clinical diagnosis and medical sciences due to its capability to non-invasively characterize tissue properties. Nevertheless, most qMRI methods are time-consuming and sensitive to motion, making them inadequate for quantifying organs with physiological movement. In this context, the single-shot multiple overlapping-echo detachment (MOLED) imaging technique has been presented, but its acquisition efficiency and image quality are limited when the field of view (FOV) is smaller than the object, especially for abdominal organs and the myocardium. A novel single-shot reduced-FOV qMRI method was developed based on MOLED (termed rFOV-MOLED). This method combines zonal oblique multislice (ZOOM) and outer volume suppression (OVS) techniques to reduce the FOV and suppress signals outside it. A deep neural network was trained using synthetic data generated from Bloch simulations to achieve high-quality T<sub>2</sub> map reconstruction from rFOV-MOLED images. Numerical simulation, water phantom, and in vivo abdominal and myocardial imaging experiments were performed to evaluate the method. The coefficient of variation and the repeatability index were used to evaluate reproducibility. Multiple statistical analyses were used to evaluate the accuracy and significance of the method, including linear regression, Bland-Altman analysis, the Wilcoxon signed-rank test, and the Mann-Whitney U test, with a significance level of 0.05. Experimental results show that rFOV-MOLED performed excellently in suppressing aliasing signals arising from FOV reduction. It provided T<sub>2</sub> maps closely resembling the reference maps. Moreover, it gave finer tissue details than MOLED and was highly repeatable. rFOV-MOLED can thus rapidly and stably provide accurate T<sub>2</sub> maps for the myocardium and specific abdominal organs with improved acquisition efficiency and image quality.
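The quantity being mapped, T2 under a mono-exponential decay S(TE) = S0 exp(-TE/T2), can be recovered classically by a log-linear fit; a sketch for intuition (the paper instead reconstructs T2 maps with a trained neural network):

```python
import numpy as np

def fit_t2(te, signal):
    """Log-linear mono-exponential T2 fit: taking logs of
    S(TE) = S0 * exp(-TE / T2) gives a straight line in TE whose
    slope is -1/T2 and whose intercept is log(S0)."""
    te, s = np.asarray(te, float), np.asarray(signal, float)
    slope, intercept = np.polyfit(te, np.log(s), 1)
    return -1.0 / slope, np.exp(intercept)   # (T2, S0)
```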

A novel hybrid convolutional and recurrent neural network model for automatic pituitary adenoma classification using dynamic contrast-enhanced MRI.

Motamed M, Bastam M, Tabatabaie SM, Elhaie M, Shahbazi-Gahrouei D

PubMed · Aug 14 2025
Pituitary adenomas, ranging from subtle microadenomas to mass-effect macroadenomas, pose diagnostic challenges for radiologists due to increasing scan volumes and the complexity of dynamic contrast-enhanced MRI interpretation. A hybrid CNN-LSTM model was trained and validated on a multi-center dataset of 2,163 samples from Tehran and Babolsar, Iran. Transfer learning and preprocessing techniques (e.g., Wiener filters) were utilized to improve classification performance for microadenomas (< 10 mm) and macroadenomas (> 10 mm). The model achieved 90.5% accuracy, an area under the receiver operating characteristic curve (AUROC) of 0.92, and 89.6% sensitivity (93.5% for microadenomas, 88.3% for macroadenomas), outperforming standard CNNs by 5-18% across metrics. With a processing time of 0.17 s per scan, the model demonstrated robustness to variations in imaging conditions, including scanner differences and contrast variations, excelling in real-time detection and differentiation of adenoma subtypes. This dual-path approach, the first to synergize spatial and temporal MRI features for pituitary diagnostics, offers high precision and efficiency. Supported by comparisons with existing models, it provides a scalable, reproducible tool to improve patient outcomes, with potential adaptability to broader neuroimaging challenges.

Deep learning-based non-invasive prediction of PD-L1 status and immunotherapy survival stratification in esophageal cancer using [<sup>18</sup>F]FDG PET/CT.

Xie F, Zhang M, Zheng C, Zhao Z, Wang J, Li Y, Wang K, Wang W, Lin J, Wu T, Wang Y, Chen X, Li Y, Zhu Z, Wu H, Li Y, Liu Q

PubMed · Aug 14 2025
This study aimed to develop and validate deep learning models using [<sup>18</sup>F]FDG PET/CT to predict PD-L1 status in esophageal cancer (EC) patients. Additionally, we assessed the potential of derived deep learning model scores (DLS) for survival stratification in immunotherapy. In this retrospective study, we included 331 EC patients from two centers, dividing them into training, internal validation, and external validation cohorts. Fifty patients who received immunotherapy were followed up. We developed four 3D ResNet10-based models-PET + CT + clinical factors (CPC), PET + CT (PC), PET (P), and CT (C)-using pre-treatment [<sup>18</sup>F]FDG PET/CT scans. For comparison, we also constructed a logistic model incorporating clinical factors (clinical model). The DLS were evaluated as radiological markers for survival stratification, and nomograms for predicting survival were constructed. The models demonstrated accurate prediction of PD-L1 status. The areas under the curve (AUCs) for predicting PD-L1 status were as follows: CPC (0.927), PC (0.904), P (0.886), C (0.934), and the clinical model (0.603) in the training cohort; CPC (0.882), PC (0.848), P (0.770), C (0.745), and the clinical model (0.524) in the internal validation cohort; and CPC (0.843), PC (0.806), P (0.759), C (0.667), and the clinical model (0.671) in the external validation cohort. The CPC and PC models exhibited superior predictive performance. Survival analysis revealed that the DLS from most models effectively stratified overall survival and progression-free survival at appropriate cut-off points (P < 0.05), outperforming stratification based on PD-L1 status (combined positive score ≥ 10). Furthermore, incorporating model scores with clinical factors in nomograms enhanced the predictive probability of survival after immunotherapy. Deep learning models based on [<sup>18</sup>F]FDG PET/CT can accurately predict PD-L1 status in esophageal cancer patients. The derived DLS can effectively stratify survival outcomes following immunotherapy, particularly when combined with clinical factors.
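The AUCs reported above follow the rank-sum (Mann-Whitney) identity; a small numpy sketch of computing AUC directly from model scores:

```python
import numpy as np

def auc(scores, labels):
    """AUC from prediction scores via the Mann-Whitney U identity:
    the fraction of (positive, negative) pairs ranked correctly,
    counting tied scores as half-correct."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```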
