Page 37 of 343 (3423 results)

Overcoming Site Variability in Multisite fMRI Studies: an Autoencoder Framework for Enhanced Generalizability of Machine Learning Models.

Almuqhim F, Saeed F

pubmed · Sep 2 2025
Harmonizing multisite functional magnetic resonance imaging (fMRI) data is crucial for eliminating site-specific variability that hinders the generalizability of machine learning models. Traditional harmonization techniques, such as ComBat, depend on additive and multiplicative factors and may struggle to capture the non-linear interactions between scanner hardware, acquisition protocols, and signal variations across imaging sites. In addition, these statistical techniques require data from all sites during model training, which can cause data leakage for ML models trained on the harmonized data. Such models may show low reliability and reproducibility when tested on unseen datasets, limiting their applicability for general clinical use. In this study, we propose autoencoders (AEs) as an alternative for harmonizing multisite fMRI data. Our framework leverages the non-linear representation learning capabilities of AEs to reduce site-specific effects while preserving biologically meaningful features. Our evaluation on the Autism Brain Imaging Data Exchange I (ABIDE-I) dataset, containing 1,035 subjects collected from 17 centers, demonstrates statistically significant improvements in leave-one-site-out (LOSO) cross-validation. All AE variants (AE, SAE, TAE, and DAE) significantly outperformed the baseline model (p < 0.01), with mean accuracy improvements ranging from 3.41% to 5.04%. Our findings demonstrate the potential of AEs to harmonize multisite neuroimaging data effectively, enabling robust downstream analyses across neuroscience applications while reducing data leakage and preserving neurobiological features. Our open-source code is available at https://github.com/pcdslab/Autoencoder-fMRI-Harmonization.
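The leave-one-site-out protocol described above can be sketched as follows. This is a toy illustration, not the authors' pipeline: the data are random stand-ins, and a small MLP trained to reconstruct its input substitutes for the paper's AE variants (the real implementation is at the linked repository). The key point is that the harmonizer is fit only on training sites, so the held-out site never leaks into the learned representation.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

# Toy stand-ins for fMRI-derived features: 60 subjects from 3 sites.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)   # diagnosis labels
sites = np.repeat([0, 1, 2], 20)  # acquisition site per subject

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
    # Fit the harmonizer (here a bottlenecked MLP autoencoder) on the
    # training sites only; the held-out site stays completely unseen.
    ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    ae.fit(X[train_idx], X[train_idx])  # reconstruct inputs (autoencoding)
    clf = LogisticRegression().fit(ae.predict(X[train_idx]), y[train_idx])
    accuracies.append(clf.score(ae.predict(X[test_idx]), y[test_idx]))

print(len(accuracies))  # -> 3, one accuracy per held-out site
```

With random features the accuracies themselves are near chance; the sketch only shows the split discipline that avoids the data leakage the abstract attributes to all-site statistical harmonization.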

Integrating GANs, Contrastive Learning, and Transformers for Robust Medical Image Analysis.

Heng Y, Khan FG, Yinghua M, Khan A, Ali F, Khan N, Kwak D

pubmed · Sep 2 2025
Despite the widespread success of convolutional neural networks (CNNs) in general computer vision tasks, their application to complex medical image analysis faces persistent challenges. These include limited labeled data availability, which restricts model generalization; class imbalance, where minority classes are underrepresented and lead to biased predictions; and inadequate feature representation, since conventional CNNs often struggle to capture subtle patterns and intricate dependencies characteristic of medical imaging. To address these limitations, we propose CTNGAN, a unified framework that integrates generative modeling with Generative Adversarial Networks (GANs), contrastive learning, and Transformer architectures to enhance the robustness and accuracy of medical image analysis. Each component is designed to tackle a specific challenge: the GAN model mitigates data scarcity and imbalance, contrastive learning strengthens feature robustness against domain shifts, and the Transformer captures long-range spatial patterns. This tripartite integration not only overcomes the limitations of conventional CNNs but also achieves superior generalizability, as demonstrated by classification experiments on benchmark medical imaging datasets, with up to 98.5% accuracy and an F1-score of 0.968, outperforming existing methods. The framework's ability to jointly optimize data generation, feature discrimination, and contextual modeling establishes a new paradigm for accurate and reliable medical image diagnosis.
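The contrastive-learning component is described only at a high level. As a hedged illustration, a standard NT-Xent (normalized temperature-scaled cross-entropy) loss — a common choice for this kind of feature-robustness objective, not necessarily CTNGAN's exact formulation — looks like this:

```python
import numpy as np

def ntxent_loss(z1, z2, tau=0.5):
    """NT-Xent: pull each embedding toward its paired positive and
    push it away from every other embedding in the batch."""
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
# Identical pairs score a lower loss than unrelated pairs.
print(ntxent_loss(z, z) < ntxent_loss(z, rng.normal(size=(4, 8))))
```

Minimizing this loss clusters the two views of each sample while separating different samples, which is the "strengthens feature robustness" behavior the abstract ascribes to the contrastive branch.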

Application and assessment of deep learning to routine 2D T2 FLEX spine imaging at 1.5T.

Shaikh IS, Milshteyn E, Chulsky S, Maclellan CJ, Soman S

pubmed · Sep 2 2025
2D T2 FSE is an essential routine spine MRI sequence, allowing assessment of fractures, soft tissues, and pathology. Fat suppression using a DIXON-type approach (2D FLEX) improves water/fat separation. Recently, a deep learning (DL) reconstruction (AIR™ Recon DL, GE HealthCare) became available for 2D FLEX, offering increased signal-to-noise ratio (SNR), reduced artifacts, and sharper images. This study aimed to compare DL-reconstructed versus non-DL-reconstructed spine 2D T2 FLEX images for diagnostic image quality and quantitative metrics at 1.5T. Forty-one patients with clinically indicated cervical or lumbar spine MRI were scanned between May and August 2023 on a 1.5T Voyager (GE HealthCare). A 2D T2 FLEX sequence was acquired, and DL-based reconstruction (noise reduction strength: 75%) was applied. Raw data were also reconstructed without DL. Three readers (CAQ-neuroradiologist, PGY-6 neuroradiology fellow, PGY-2 radiology resident) rated diagnostic preference (0 = non-DL, 1 = DL, 2 = equivalent) for 39 cases. Quantitative measures (SNR, total variation [TV], number of edges, and fat fraction [FF]) were compared using paired t-tests with significance set at p < .05. Among evaluations, 79.5% preferred DL, 11% found images equivalent, and 9.4% favored non-DL, with strong inter-rater agreement (p < .001, Fleiss' Kappa = 0.99). DL images had higher SNR, lower TV, and fewer edges (p < .001), indicating effective noise reduction. FF remained statistically unchanged in subcutaneous fat (p = .25) but differed slightly in vertebral bodies (1.4% difference, p = .01). DL reconstruction notably improved image quality by enhancing SNR and reducing noise without clinically meaningful changes in fat quantification. These findings support the use of DL-enhanced 2D T2 FLEX in routine spine imaging at 1.5T. Incorporating DL-based reconstruction into standard spine MRI protocols can increase diagnostic confidence and workflow efficiency. 
Further studies with larger cohorts and diverse pathologies are warranted to refine this approach and explore potential benefits for clinical decision-making.
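The quantitative comparison above rests on simple image statistics. As a sketch — synthetic image and illustrative masks, not the study's measurement protocol — SNR and total variation can be computed like this, and a crude noise suppression step shows the expected direction of change (higher SNR, lower TV):

```python
import numpy as np

def total_variation(img):
    """Sum of absolute intensity differences between neighboring pixels."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def snr(img, signal_mask, noise_mask):
    """Mean signal divided by the standard deviation of a background region."""
    return img[signal_mask].mean() / img[noise_mask].std()

# Toy image: a uniform bright patch over a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, size=(64, 64))
img[16:48, 16:48] += 1.0
signal = np.zeros_like(img, dtype=bool); signal[20:44, 20:44] = True
noise = np.zeros_like(img, dtype=bool); noise[:8, :8] = True

denoised = img.copy()
denoised[~signal] *= 0.5  # crude noise suppression outside the signal region
```

After suppression, `snr(denoised, signal, noise)` rises and `total_variation(denoised)` falls, mirroring the higher-SNR / lower-TV pattern reported for the DL reconstruction.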

Navigator motion-resolved MR fingerprinting using implicit neural representation: Feasibility for free-breathing three-dimensional whole-liver multiparametric mapping.

Li C, Li J, Zhang J, Solomon E, Dimov AV, Spincemaille P, Nguyen TD, Prince MR, Wang Y

pubmed · Sep 2 2025
To develop free-breathing three-dimensional whole-liver multiparametric quantitative mapping of water T<sub>1</sub>, water T<sub>2</sub>, fat fraction (FF), and R<sub>2</sub>*. A multi-echo 3D stack-of-spiral gradient-echo sequence with inversion recovery and T<sub>2</sub>-prep magnetization preparations was implemented for multiparametric MRI. Fingerprinting and a neural network based on implicit neural representation (FINR) were developed to simultaneously reconstruct the motion deformation fields and static images, perform water-fat separation, and generate T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF maps. FINR performance was evaluated in 10 healthy subjects by comparison with quantitative maps generated using conventional breath-holding imaging. FINR consistently generated sharp images in all subjects, free of motion artifacts. FINR showed minimal bias and narrow 95% limits of agreement for T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF values in the liver compared with conventional imaging. FINR training took about 3 h per subject, and FINR inference took less than 1 min to produce static images and motion deformation fields. FINR is a promising approach for 3D whole-liver T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF mapping in a single free-breathing continuous scan.
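An implicit neural representation models a signal as a coordinate-to-intensity function rather than a voxel grid. The following is a minimal, generic sketch — random weights, a SIREN-style sine activation, and illustrative layer sizes — not the FINR architecture itself:

```python
import numpy as np

# Minimal implicit neural representation: a coordinate MLP mapping
# (x, y, motion-phase) positions to image intensity.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 64)) * 0.5
W2 = rng.normal(size=(64, 1)) * 0.5

def inr(coords):
    """coords: (N, 3) array of normalized (x, y, phase) positions."""
    h = np.sin(coords @ W1)  # sinusoidal activation (SIREN-style)
    return h @ W2            # predicted intensity per coordinate

# Query a 16x16 grid at a single motion phase.
coords = np.stack(np.meshgrid(np.linspace(-1, 1, 16),
                              np.linspace(-1, 1, 16),
                              [0.0]), axis=-1).reshape(-1, 3)
print(inr(coords).shape)  # -> (256, 1)
```

Because the phase enters as just another input coordinate, one network can be queried at any motion state, which is what makes this representation attractive for motion-resolved free-breathing reconstruction.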

Predicting Prognosis of Light-Chain Cardiac Amyloidosis by Magnetic Resonance Imaging and Deep Learning.

Wang S, Liu C, Guo Y, Sang H, Li X, Lin L, Li X, Wu Y, Zhang L, Tian J, Li J, Wang Y

pubmed · Sep 2 2025
Light-chain cardiac amyloidosis (AL-CA) is a progressive heart disease with a high mortality rate and variable prognosis. The presently used Mayo staging method can only stratify patients into four stages, highlighting the need for a more individualized prognosis prediction method. We aim to develop a novel deep learning (DL) model for whole-heart analysis of cardiovascular magnetic resonance-derived late gadolinium enhancement (LGE) images to predict individualized prognosis in AL-CA. This study included 394 patients with AL-CA who underwent standardized chemotherapy and had at least one year of follow-up. The approach involved automated segmentation of the heart in LGE images and feature extraction using a Transformer-based DL model. To enhance feature differentiation and mitigate overfitting, a contrastive pretraining strategy was employed to accentuate distinctions between patients with different prognoses while clustering similar cases. Finally, an ensemble learning strategy was used to integrate predictions from 15 models at 15 survival time points into a comprehensive prognostic model. In the testing set of 79 patients, the DL model achieved a C-index of 0.91 and an AUC of 0.95 in predicting 2.6-year survival (HR: 2.67), outperforming the Mayo model (C-index = 0.65, AUC = 0.71). The DL model effectively distinguished patients with the same Mayo stage but different prognoses. Visualization techniques revealed that the model captures complex, high-dimensional prognostic features across multiple cardiac regions, extending beyond the amyloid-affected areas. This fully automated DL model can predict individualized prognosis of AL-CA from LGE images, complementing the presently used Mayo staging method.
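The C-index reported above measures how well predicted risk ranks patients by observed survival. A minimal implementation of Harrell's concordance index, on toy data rather than the study's cohort:

```python
import numpy as np

def c_index(times, events, risk):
    """Harrell's C: fraction of comparable pairs in which the
    higher-risk patient experiences the event earlier."""
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has an observed event
            # strictly before subject j's time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / comparable

times = np.array([2.0, 5.0, 3.0, 8.0])   # follow-up in years
events = np.array([1, 1, 0, 1])          # 0 = censored
risk = np.array([0.9, 0.4, 0.7, 0.1])    # model-predicted risk scores
print(c_index(times, events, risk))      # -> 1.0 (perfect ranking)
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which frames the reported gap between the DL model (0.91) and the Mayo model (0.65).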

Super-Resolution MR Spectroscopic Imaging via Diffusion Models for Tumor Metabolism Mapping.

Alsubaie M, Perera SM, Gu L, Subasi SB, Andronesi OC, Li X

pubmed · Sep 2 2025
High-resolution magnetic resonance spectroscopic imaging (MRSI) plays a crucial role in characterizing tumor metabolism and guiding clinical decisions for glioma patients. However, due to inherently low metabolite concentrations and signal-to-noise ratio (SNR) limitations, MRSI data are often acquired at low spatial resolution, hindering accurate visualization of tumor heterogeneity and margins. In this study, we propose a novel deep learning framework based on conditional denoising diffusion probabilistic models for super-resolution reconstruction of MRSI, with a particular focus on mutant isocitrate dehydrogenase (IDH) gliomas. The model progressively transforms noise into high-fidelity metabolite maps through a learned reverse diffusion process, conditioned on low-resolution inputs. Leveraging a Self-Attention UNet backbone, the proposed approach integrates global contextual features and achieves superior detail preservation. On simulated patient data, the proposed method achieved Structural Similarity Index Measure (SSIM) values of 0.956, 0.939, and 0.893; Peak Signal-to-Noise Ratio (PSNR) values of 29.73, 27.84, and 26.39 dB; and Learned Perceptual Image Patch Similarity (LPIPS) values of 0.025, 0.036, and 0.045 for upsampling factors of 2, 4, and 8, respectively, with LPIPS improvements statistically significant compared to all baselines (p < 0.01). We validated the framework on in vivo MRSI from healthy volunteers and glioma patients, where it accurately reconstructed small lesions, preserved critical textural and structural information, and enhanced tumor boundary delineation in metabolic ratio maps, revealing heterogeneity not visible in other approaches.
These results highlight the promise of diffusion-based deep learning models as clinically relevant tools for noninvasive, high-resolution metabolic imaging in glioma and potentially other neurological disorders.
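Of the metrics reported, PSNR has the simplest closed form: 10·log10(MAX²/MSE). A small self-contained check on toy images:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8)); ref[2:6, 2:6] = 1.0
noisy = ref + 0.1                  # constant offset -> MSE = 0.01
print(round(psnr(ref, noisy), 2))  # -> 20.0
```

Higher is better, so the reported 29.73 dB at 2x upsampling corresponds to a reconstruction error roughly three orders of magnitude below the peak intensity squared.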

Fusion of Deep Transfer Learning and Radiomics in MRI-Based Prediction of Post-Surgical Recurrence in Soft Tissue Sarcoma.

Wang Y, Wang T, Zheng F, Hao W, Hao Q, Zhang W, Yin P, Hong N

pubmed · Sep 2 2025
Soft tissue sarcomas (STS) are heterogeneous malignancies with high post-surgical recurrence rates (33-39%), necessitating improved prognostic tools. This study proposes a fusion model integrating deep transfer learning and radiomics from MRI to predict postoperative STS recurrence. Axial T2-weighted fat-suppressed imaging (T<sub>2</sub>WI) of 803 STS patients from two institutions was retrospectively collected and divided into training (n = 527), internal validation (n = 132), and external validation (n = 144) cohorts. Tumor segmentation was performed using the SegResNet model within the Auto3DSeg framework. Radiomic features and deep learning features were extracted, and feature selection employed LASSO regression; the deep learning radiomics (DLR) model combined the radiomic and deep learning signatures. From these features, nine models were constructed based on three classifiers. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, negative predictive value, and positive predictive value were calculated for performance evaluation. The SegResNet model achieved a Dice coefficient of 0.728 after refinement. Recurrence rates were 22.8% (120/527) in the training, 25.0% (33/132) in the internal validation, and 32.6% (47/144) in the external validation cohorts. The DLR model (ExtraTrees) demonstrated superior performance, achieving an AUC of 0.818 in internal validation and 0.809 in external validation, better than the radiomic model (0.710, 0.612) and the deep learning model (0.751, 0.667). Sensitivity and specificity ranged from 0.702 to 0.976 and from 0.732 to 0.830, respectively. Decision curve analysis confirmed superior clinical utility. The DLR model provides a robust, non-invasive tool for preoperative prediction of STS recurrence, enabling personalized treatment decisions and postoperative management.
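The fusion-then-selection pipeline — concatenate radiomic and deep signatures, prune with LASSO, classify with ExtraTrees — can be sketched with scikit-learn. The data here are random stand-ins with two planted informative features; feature counts, seeds, and hyperparameters are illustrative, not the study's configuration:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
n = 200
radiomic = rng.normal(size=(n, 30))  # stand-in radiomic features
deep = rng.normal(size=(n, 30))      # stand-in deep-learning features
# Outcome driven by one feature from each signature (plus noise).
y = (radiomic[:, 0] + deep[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

fused = np.hstack([radiomic, deep])     # concatenate the two signatures
lasso = LassoCV(cv=5).fit(fused, y)
selected = np.flatnonzero(lasso.coef_)  # LASSO zeroes out weak features
clf = ExtraTreesClassifier(random_state=0).fit(fused[:, selected], y)
print(0 in selected and 30 in selected)
```

With the planted signal, LASSO retains columns 0 (radiomic) and 30 (deep), showing how the sparsity step keeps informative features from both signatures before the ExtraTrees classifier is fit.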

An MRI-pathology foundation model for noninvasive diagnosis and grading of prostate cancer.

Shao L, Liang C, Yan Y, Zhu H, Jiang X, Bao M, Zang P, Huang X, Zhou H, Nie P, Wang L, Li J, Zhang S, Ren S

pubmed · Sep 2 2025
Prostate cancer is a leading health concern for men, yet current clinical assessments of tumor aggressiveness rely on invasive procedures that often lead to inconsistencies. There remains a critical need for accurate, noninvasive diagnosis and grading methods. Here we developed a foundation model trained on multiparametric magnetic resonance imaging (MRI) and paired pathology data for noninvasive diagnosis and grading of prostate cancer. Our model, MRI-based Predicted Transformer for Prostate Cancer (MRI-PTPCa), was trained under contrastive learning on nearly 1.3 million image-pathology pairs from over 5,500 patients in discovery, modeling, external and prospective cohorts. During real-world testing, prediction of MRI-PTPCa demonstrated consistency with pathology and superior performance (area under the curve above 0.978; grading accuracy 89.1%) compared with clinical measures and other prediction models. This work introduces a scalable, noninvasive approach to prostate cancer diagnosis and grading, offering a robust tool to support clinical decision-making while reducing reliance on biopsies.

Mask-Guided and Fidelity-Constrained Deep Learning Model for Accurate Translation of Brain CT Images to Diffusion MRI Images in Acute Stroke Patients.

Khalil MA, Bajger M, Skeats A, Delnooz C, Dwyer A, Lee G

pubmed · Sep 2 2025
The early and precise diagnosis of stroke plays an important role in treatment planning. Computed Tomography (CT) is utilised as a first diagnostic tool for quick diagnosis and to rule out haemorrhage. Diffusion Magnetic Resonance Imaging (MRI) provides superior sensitivity to CT for detecting early acute ischaemia and small lesions; however, its long scan time and limited availability make it infeasible in emergency settings. To address this problem, this study presents a brain mask-guided and fidelity-constrained cycle-consistent generative adversarial network for translating CT images into diffusion MRI images for stroke diagnosis. A brain mask is concatenated with the input CT image and given to the generator to encourage focus on the critical foreground areas. A fidelity-constrained loss is utilised to preserve details for better translation results. A publicly available dataset, A Paired CT-MRI Dataset for Ischemic Stroke Segmentation (APIS), is utilised to train and test the models. The proposed method yields MSE 197.45 [95% CI: 180.80, 214.10], PSNR 25.50 [95% CI: 25.10, 25.92], and SSIM 88.50 [95% CI: 87.50, 89.50] on the testing set, significantly outperforming techniques based on UNet, cycle-consistent generative adversarial networks (CycleGAN), and attention generative adversarial networks (GAN). Furthermore, an ablation study demonstrated the effectiveness of incorporating the fidelity-constrained loss and brain-mask information as a soft guide when translating CT images into diffusion MRI images. The experimental results demonstrate that the proposed approach has the potential to support faster and more precise diagnosis of stroke.
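The two ideas named in the abstract — mask-guided generator input and a fidelity-constrained loss — can be shown schematically. This is not the authors' implementation: the array shapes, threshold-based mask, and loss weight are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
ct = rng.normal(size=(256, 256))        # stand-in CT slice
brain_mask = (ct > 0).astype(ct.dtype)  # placeholder brain mask

# The generator sees the CT slice and its brain mask as a 2-channel
# input, steering the network toward foreground anatomy.
gen_input = np.stack([ct, brain_mask])  # shape (2, 256, 256)

def fidelity_l1(fake, real, mask, weight=10.0):
    """Sketch of a fidelity term: a masked L1 penalty (added to the
    adversarial loss) that discourages structural drift in the brain."""
    return weight * np.abs((fake - real) * mask).mean()
```

In training, the fidelity term is zero only when the translated image matches the target inside the mask, which is the "soft guide" behavior the ablation study isolates.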

Evaluation efficacy and accuracy of a real-time computer-aided polyp detection system during colonoscopy: a prospective, multicentric, randomized, parallel-controlled study trial.

Xu X, Ba L, Lin L, Song Y, Zhao C, Yao S, Cao H, Chen X, Mu J, Yang L, Feng Y, Wang Y, Wang B, Zheng Z

pubmed · Sep 2 2025
Colorectal cancer (CRC) ranks as the second deadliest cancer globally, impacting patients' quality of life. Colonoscopy is the primary screening method for detecting adenomas and polyps, crucial for reducing long-term CRC risk, but it misses about 30% of cases. Efforts to improve detection rates include using AI to enhance colonoscopy. This study assesses the effectiveness and accuracy of a real-time AI-assisted polyp detection system during colonoscopy. The study included 390 patients aged 40 to 75 undergoing colonoscopy for either colorectal cancer screening (risk score ≥ 4) or clinical diagnosis. Participants were randomly assigned to an experimental group using software-assisted diagnosis or a control group with physician diagnosis. The software, a medical image processing tool with B/S and MVC architecture, runs on Windows 10 (64-bit) and supports real-time image handling and lesion identification via HDMI, SDI, AV, and DVI outputs from endoscopy devices. Expert evaluations of retrospective video lesions served as the gold standard. Efficacy was assessed by polyps per colonoscopy (PPC), adenomas per colonoscopy (APC), adenoma detection rate (ADR), and polyp detection rate (PDR), while accuracy was measured as sensitivity and specificity against the gold standard. In this multicenter, randomized controlled trial, computer-aided detection (CADe) significantly improved the polyp detection rate, achieving 67.18% in the CADe group versus 56.92% in the control group. The CADe group identified more polyps, especially those 5 mm or smaller (61.03% vs. 56.92%). In addition, the CADe group demonstrated higher specificity (98.44%) and sensitivity (95.19%) in the FAS dataset, and improved sensitivity (95.82% vs. 77.53%) in the PPS dataset, with both groups maintaining 100% specificity. These results suggest that the AI-assisted system improves polyp detection accuracy.
This real-time computer-aided polyp detection system enhances efficacy by boosting adenoma and polyp detection rates, while also achieving high accuracy with excellent sensitivity and specificity.
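Sensitivity, specificity, and PDR follow directly from confusion counts against the expert-video gold standard. In this sketch the counts are hypothetical, chosen only so the ratios reproduce the reported 95.19% / 98.44% figures — they are not the trial's actual tallies:

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity and specificity from confusion counts."""
    return tp / (tp + fn), tn / (tn + fp)

def pdr(polyps_per_exam):
    """Polyp detection rate: fraction of exams with at least one polyp found."""
    return sum(1 for n in polyps_per_exam if n > 0) / len(polyps_per_exam)

# Illustrative counts that reproduce the reported percentages.
sens, spec = detection_metrics(tp=198, fp=3, tn=189, fn=10)
print(round(sens, 4), round(spec, 4))  # -> 0.9519 0.9844
```

The same counting logic underlies the 67.18% vs. 56.92% PDR comparison: each colonoscopy contributes 1 to the numerator if any polyp was detected, regardless of how many.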