Page 6 of 1601593 results

Leveraging multi-modal foundation model image encoders to enhance brain MRI-based headache classification.

Rafsani F, Sheth D, Che Y, Shah J, Siddiquee MMR, Chong CD, Nikolova S, Ross K, Dumkrieger G, Li B, Wu T, Schwedt TJ

PubMed · Sep 26 2025
Headaches are a nearly universal human experience, traditionally diagnosed based solely on symptoms. Recent advances in imaging techniques and artificial intelligence (AI) have enabled the development of automated headache detection systems, which can enhance clinical diagnosis, especially when symptom-based evaluations are insufficient. Current AI models often require extensive data, limiting their clinical applicability where data availability is low. However, deep learning models, particularly pre-trained models fine-tuned with smaller, targeted datasets, can potentially overcome this limitation. By leveraging BioMedCLIP, a pre-trained foundation model combining a vision transformer (ViT) image encoder with a PubMedBERT text encoder, we fine-tuned the pre-trained ViT model for the specific purpose of classifying headaches and detecting biomarkers from brain MRI data. The dataset consisted of 721 individuals: 424 healthy controls (HC) from the IXI dataset and 297 local participants, including individuals with migraine (n = 96), acute post-traumatic headache (APTH, n = 48), persistent post-traumatic headache (PPTH, n = 49), and additional HC (n = 104). The model achieved high accuracy across multiple balanced test sets: 89.96% for migraine versus HC, 88.13% for APTH versus HC, and 83.13% for PPTH versus HC, all validated through five-fold cross-validation for robustness. Brain regions identified by Gradient-weighted Class Activation Mapping (Grad-CAM) analysis as responsible for migraine classification included the postcentral cortex, supramarginal gyrus, superior temporal cortex, and precuneus cortex; for APTH, the rostral middle frontal and precentral cortices; and for PPTH, the cerebellar cortex and precentral cortex.
To our knowledge, this is the first study to leverage a multimodal biomedical foundation model in the context of headache classification and biomarker detection using structural MRI, offering complementary insights into the causes and brain changes associated with headache disorders.
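The reported accuracies are averaged over five-fold cross-validation. A minimal, generic sketch of that evaluation loop (the contiguous fold-splitting and the `evaluate_fold` callback are illustrative placeholders, not the authors' pipeline):

```python
from statistics import mean

def five_fold_indices(n_samples, n_folds=5):
    """Split sample indices into contiguous, near-equal folds."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // n_folds + (1 if i < n_samples % n_folds else 0)
                  for i in range(n_folds)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    return folds

def cross_validated_accuracy(n_samples, evaluate_fold):
    """Average test accuracy over folds.

    evaluate_fold(train_idx, test_idx) -> accuracy for one fold
    (in the paper this would train/test the fine-tuned ViT classifier).
    """
    folds = five_fold_indices(n_samples)
    accuracies = []
    for i, test_idx in enumerate(folds):
        train_idx = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        accuracies.append(evaluate_fold(train_idx, test_idx))
    return mean(accuracies)
```

In practice the splits would be stratified by diagnosis so each fold preserves the class balance of the test sets described above.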

Deep learning-driven contactless ECG in MRI via beat pilot tone for motion-resolved image reconstruction and heart rate monitoring.

Sun H, Ding Q, Zhong S, Zhang Z

PubMed · Sep 26 2025
The electrocardiogram (ECG) is crucial for synchronizing cardiovascular magnetic resonance imaging (CMRI) acquisition with the cardiac cycle and for continuous heart rate monitoring during prolonged scans. However, conventional electrode-based ECG systems in clinical MRI environments suffer from tedious setup, magnetohydrodynamic (MHD) waveform distortion, skin burn risks, and patient discomfort. This study proposes a contactless ECG measurement method in MRI to address these challenges. We integrated Beat Pilot Tone (BPT), a contactless, highly motion-sensitive, and easily integrable RF motion-sensing modality, into CMRI to capture cardiac motion without direct patient contact. A deep neural network was trained to map the BPT-derived cardiac mechanical motion signals to corresponding ECG waveforms. The reconstructed ECG was evaluated against simultaneously acquired ground truth ECG through multiple metrics: Pearson correlation coefficient, relative root mean square error (RRMSE), cardiac trigger timing accuracy, and heart rate estimation error. Additionally, we performed MRI retrospective binning reconstruction using the reconstructed ECG as the reference and evaluated image quality under both standard clinical conditions and challenging scenarios involving arrhythmias and subject motion. To examine the scalability of our approach across field strengths, the model pretrained on 1.5T data was applied to 3T BPT cardiac acquisitions. In optimal acquisition scenarios, the reconstructed ECG achieved a median Pearson correlation of 89% relative to the ground truth, while cardiac triggering accuracy reached 94%, and heart rate estimation error remained below 1 bpm. The quality of the reconstructed images was comparable to that of ground truth synchronization. The method exhibited a degree of adaptability to irregular heart rate patterns and subject motion, and scaled effectively across MRI systems operating at different field strengths.
The proposed contactless ECG measurement method has the potential to streamline CMRI workflows, improve patient safety and comfort, mitigate MHD distortion challenges, and support robust clinical application.
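Two of the reported metrics, the Pearson correlation coefficient and relative RMSE, are straightforward to compute from paired signals. A small sketch using one common RRMSE convention (error RMS normalized by the RMS of the reference; the paper's exact definition may differ):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rrmse(reconstructed, reference):
    """Relative RMSE: RMS error divided by the RMS of the reference signal."""
    n = len(reference)
    err = sqrt(sum((r - g) ** 2 for r, g in zip(reconstructed, reference)) / n)
    ref_rms = sqrt(sum(g ** 2 for g in reference) / n)
    return err / ref_rms
```

A perfectly reconstructed waveform gives a Pearson correlation of 1 and an RRMSE of 0; a reconstruction scaled to twice the reference amplitude still correlates perfectly but yields an RRMSE of 1, which is why both metrics are reported together.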

Ultra-fast whole-brain T2-weighted imaging in 7 seconds using dual-type deep learning reconstruction with single-shot acquisition: clinical feasibility and comparison with conventional methods.

Ikebe Y, Fujima N, Kameda H, Harada T, Shimizu Y, Kwon J, Yoneyama M, Kudo K

PubMed · Sep 26 2025
To evaluate the image quality and clinical utility of ultra-fast T2-weighted imaging (UF-T2WI), which acquires all slice data in 7 s using a single-shot turbo spin-echo (SSTSE) technique combined with dual-type deep learning (DL) reconstruction incorporating DL-based image denoising and super-resolution processing, by comparing UF-T2WI with conventional T2WI. We analyzed data from 38 patients who underwent both conventional T2WI and UF-T2WI with the dual-type DL-based image reconstruction. Two board-certified radiologists independently performed blinded qualitative assessments of the images obtained with UF-T2WI with DL and conventional T2WI, evaluating overall image quality, anatomical structure visibility, and levels of noise and artifacts. In cases involving central nervous system diseases, lesion delineation was also assessed. The quantitative analysis included measurements of the signal-to-noise ratio (SNR) in white and gray matter and the contrast-to-noise ratio (CNR) between gray and white matter. Compared to conventional T2WI, UF-T2WI with DL received significantly higher ratings for overall image quality and lower noise and artifact levels (p < 0.001 for both readers). Anatomical visibility was significantly better in UF-T2WI for one reader, with no significant difference for the other reader. Lesion visibility in UF-T2WI was comparable to that in conventional T2WI. Quantitatively, the SNRs and CNRs were all significantly higher in UF-T2WI than in conventional T2WI (p < 0.001). The combination of SSTSE with dual-type DL reconstruction allows the acquisition of clinically acceptable T2WI images in just 7 s. This technique shows strong potential to reduce MRI scan times and improve clinical workflow efficiency.
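As an illustration of the quantitative analysis, SNR and CNR can be estimated from region-of-interest statistics. This sketch assumes the common background-noise convention (mean tissue signal over the standard deviation of a background ROI); the study's exact measurement protocol is not specified here:

```python
from statistics import mean, stdev

def snr(tissue_roi, background_roi):
    """SNR: mean signal in a tissue ROI divided by background noise SD."""
    return mean(tissue_roi) / stdev(background_roi)

def cnr(roi_a, roi_b, background_roi):
    """CNR between two tissues (e.g. gray vs. white matter):
    absolute difference of ROI means over background noise SD."""
    return abs(mean(roi_a) - mean(roi_b)) / stdev(background_roi)
```

Under this convention a higher SNR/CNR after DL denoising reflects either increased apparent signal or, more typically, reduced background noise.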

Automated deep learning method for whole-breast segmentation in contrast-free quantitative MRI.

Gao W, Zhang Y, Gao B, Xia Y, Liang W, Yang Q, Shi F, He T, Han G, Li X, Su X, Zhang Y

PubMed · Sep 26 2025
To develop a deep learning segmentation method utilizing the nnU-Net architecture for fully automated whole-breast segmentation based on diffusion-weighted imaging (DWI) and synthetic MRI (SyMRI) images. A total of 98 patients with 196 breasts were evaluated. All patients underwent 3.0T magnetic resonance (MR) examinations, which incorporated DWI and SyMRI techniques. The ground truth for breast segmentation was established through a manual, slice-by-slice approach performed by two experienced radiologists. The U-Net and nnU-Net deep learning algorithms were employed to segment the whole breast. Performance was evaluated using various metrics, including the Dice Similarity Coefficient (DSC), accuracy, and Pearson's correlation coefficient. For DWI and the proton density (PD) images of SyMRI, nnU-Net outperformed U-Net, achieving higher DSCs in both the testing set (DWI, 0.930 ± 0.029 vs. 0.785 ± 0.161; PD, 0.969 ± 0.010 vs. 0.936 ± 0.018) and the independent testing set (DWI, 0.953 ± 0.019 vs. 0.789 ± 0.148; PD, 0.976 ± 0.008 vs. 0.939 ± 0.018). The PD images of SyMRI exhibited better performance than DWI, attaining the highest DSC and accuracy. The correlation coefficients (R²) for nnU-Net were 0.99–1.00 for DWI and PD, significantly surpassing the performance of U-Net. The nnU-Net exhibited exceptional segmentation performance for fully automated breast segmentation of contrast-free quantitative images. This method serves as an effective tool for processing large-scale clinical datasets and represents a significant advancement toward computer-aided quantitative analysis of breast DWI and SyMRI images.
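The Dice Similarity Coefficient used above measures overlap between a predicted mask and the manual ground truth. A minimal sketch for binary masks given as flat 0/1 lists:

```python
def dice_coefficient(pred_mask, true_mask):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```

A DSC of 1.0 means the segmentations coincide exactly, so values such as 0.976 indicate near-complete voxel-wise overlap with the radiologists' annotations.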

Generating Synthetic MR Spectroscopic Imaging Data with Generative Adversarial Networks to Train Machine Learning Models.

Maruyama S, Takeshima H

PubMed · Sep 26 2025
To develop a new method to generate synthetic MR spectroscopic imaging (MRSI) data for training machine learning models. This study targeted routine MRI examination protocols with single voxel spectroscopy (SVS). A novel model derived from pix2pix generative adversarial networks was proposed to generate synthetic MRSI data using MRI and SVS data as inputs. T1- and T2-weighted, SVS, and reference MRSI data were acquired from healthy brains with clinically available sequences. The proposed model was trained to generate synthetic MRSI data. Quantitative evaluation involved calculating the mean squared error (MSE) against the reference and the metabolite ratio values. The effect of the location and number of SVS data on the quality of the synthetic MRSI data was investigated using the MSE. The synthetic MRSI data generated by the proposed model were visually closer to the reference. The 95% confidence interval (CI) of the metabolite ratio values of the synthetic MRSI data overlapped with the reference for seven of eight metabolite ratios. The MSEs tended to be lower for the same location than for different locations. The MSEs did not differ significantly among groups with different numbers of SVS data. A new method was developed to generate MRSI data by integrating MRI and SVS data. Our method can potentially increase the volume of MRSI training data for other machine learning models by adding SVS acquisition to routine MRI examinations.
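The MSE used for quantitative evaluation is the average squared difference between synthetic and reference data. A minimal sketch, treating the data as flat value lists (the paper's exact normalization and spectral handling are not specified here):

```python
def mse(synthetic, reference):
    """Mean squared error between synthetic and reference data (flat lists)."""
    return sum((s - r) ** 2 for s, r in zip(synthetic, reference)) / len(reference)
```

Comparing MSEs between "same-location" and "different-location" SVS inputs, as the authors do, then reduces to comparing these averages across the corresponding groups of synthetic spectra.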

The Evolution and Clinical Impact of Deep Learning Technologies in Breast MRI.

Fujioka T, Fujita S, Ueda D, Ito R, Kawamura M, Fushimi Y, Tsuboyama T, Yanagawa M, Yamada A, Tatsugami F, Kamagata K, Nozaki T, Matsui Y, Fujima N, Hirata K, Nakaura T, Tateishi U, Naganawa S

PubMed · Sep 26 2025
The integration of deep learning (DL) in breast MRI has revolutionized the field of medical imaging, notably enhancing diagnostic accuracy and efficiency. This review discusses the substantial influence of DL technologies across various facets of breast MRI, including image reconstruction, classification, object detection, segmentation, and prediction of clinical outcomes such as response to neoadjuvant chemotherapy and recurrence of breast cancer. Utilizing sophisticated models such as convolutional neural networks, recurrent neural networks, and generative adversarial networks, DL has improved image quality and precision, enabling more accurate differentiation between benign and malignant lesions and providing deeper insights into disease behavior and treatment responses. DL's predictive capabilities for patient-specific outcomes also suggest potential for more personalized treatment strategies. The advancements in DL are pioneering a new era in breast cancer diagnostics, promising more personalized and effective healthcare solutions. Nonetheless, the integration of this technology into clinical practice faces challenges, necessitating further research, validation, and development of legal and ethical frameworks to fully leverage its potential.

Brain Tumor Classification from MRI Scans via Transfer Learning and Enhanced Feature Representation

Ahta-Shamul Hoque Emran, Hafija Akter, Abdullah Al Shiam, Abu Saleh Musa Miah, Anichur Rahman, Fahmid Al Farid, Hezerul Abdul Karim

arXiv preprint · Sep 26 2025
Brain tumors are abnormal cell growths in the central nervous system (CNS), and their timely detection is critical for improving patient outcomes. This paper proposes an automatic and efficient deep-learning framework for brain tumor detection from magnetic resonance imaging (MRI) scans. The framework employs a pre-trained ResNet50 model for feature extraction, followed by Global Average Pooling (GAP) and linear projection to obtain compact, high-level image representations. These features are then processed by a novel Dense-Dropout sequence, a core contribution of this work, which enhances non-linear feature learning, reduces overfitting, and improves robustness through diverse feature transformations. Another major contribution is the creation of the Mymensingh Medical College Brain Tumor (MMCBT) dataset, designed to address the lack of reliable brain tumor MRI resources. The dataset comprises MRI scans from 209 subjects (ages 9 to 65), including 3671 tumor and 13273 non-tumor images, all clinically verified under expert supervision. To overcome class imbalance, the tumor class was augmented, resulting in a balanced dataset well-suited for deep learning research.
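Global Average Pooling, applied here after the ResNet50 backbone, collapses each spatial feature map to a single per-channel value before the linear projection. A minimal sketch on nested lists (a real pipeline would use a tensor library; the input layout C×H×W is assumed for illustration):

```python
def global_average_pool(feature_maps):
    """GAP: reduce each C×H×W feature map to one value per channel
    by averaging over all spatial positions."""
    pooled = []
    for channel in feature_maps:                 # channel: H×W nested list
        values = [v for row in channel for v in row]
        pooled.append(sum(values) / len(values))
    return pooled
```

The resulting C-dimensional vector is what the linear projection and the Dense-Dropout head operate on, which is why GAP is a common way to obtain compact image representations regardless of input resolution.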

Model-driven individualized transcranial direct current stimulation for the treatment of insomnia disorder: protocol for a randomized, sham-controlled, double-blind study.

Wang Y, Jia W, Zhang Z, Bai T, Xu Q, Jiang J, Wang Z

PubMed · Sep 26 2025
Insomnia disorder is a prevalent condition associated with significant negative impacts on health and daily functioning. Transcranial direct current stimulation (tDCS) has emerged as a potential technique for improving sleep. However, questions remain regarding its clinical efficacy, and there is a lack of standardized individualized stimulation protocols. This study aims to evaluate the efficacy of model-driven, individualized tDCS for treating insomnia disorder through a randomized, double-blind, sham-controlled trial. A total of 40 patients diagnosed with insomnia disorder will be recruited and randomly assigned to either an active tDCS group or a sham stimulation group. Individualized stimulation parameters will be determined through machine learning-based electric field modeling incorporating structural MRI and EEG data. Participants will undergo 10 sessions of tDCS (5 days/week for 2 consecutive weeks), with follow-up assessments conducted at 2 and 4 weeks after treatment. The primary outcome is the reduction in the Insomnia Severity Index (ISI) score at 2 weeks post-treatment. Secondary outcomes include changes in sleep parameters, anxiety, and depression scores. This study is expected to provide evidence for the effectiveness of individualized tDCS in improving sleep quality and reducing insomnia symptoms. This integrative approach, combining advanced neuroimaging and electrophysiological biomarkers, has the potential to establish an evidence-based framework for individualized brain stimulation, optimizing therapeutic outcomes. This study was registered at ClinicalTrials.gov (Identifier: NCT06671457) on 4 November 2024. The online version contains supplementary material available at 10.1186/s12888-025-07347-5.

AI-driven MRI biomarker for triple-class HER2 expression classification in breast cancer: a large-scale multicenter study.

Wong C, Yang Q, Liang Y, Wei Z, Dai Y, Xu Z, Chen X, Du S, Han C, Liang C, Zhang L, Liu Z, Wang Y, Shi Z

PubMed · Sep 26 2025
Accurate classification of human epidermal growth factor receptor 2 (HER2) expression is crucial for guiding treatment in breast cancer, especially with emerging therapies such as trastuzumab deruxtecan (T-DXd) for HER2-low patients. Current gold-standard methods, which rely on invasive biopsy and immunohistochemistry, suffer from sampling bias and interobserver variability, highlighting the need for reliable non-invasive alternatives. We developed an artificial intelligence framework that integrates a pretrained foundation model with a task-specific classifier to predict HER2 expression categories (HER2-zero, HER2-low, HER2-positive) directly from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The model was trained and validated using multicenter datasets. Model interpretability was assessed through feature visualization using t-SNE and UMAP dimensionality reduction techniques, complemented by SHAP analysis for post-hoc interpretation of critical predictive imaging features. The developed model demonstrated robust performance across datasets, achieving micro-average AUCs of 0.821 (95% CI 0.795–0.846) and 0.835 (95% CI 0.797–0.864), and macro-average AUCs of 0.833 (95% CI 0.818–0.847) and 0.857 (95% CI 0.837–0.872) in external validation. Subgroup analysis demonstrated strong discriminative power in distinguishing HER2 categories, particularly HER2-zero and HER2-low cases. Visualization techniques revealed distinct, biologically plausible clustering patterns corresponding to HER2 expression categories. This study presents a reproducible, non-invasive solution for comprehensive HER2 phenotyping using DCE-MRI, addressing fundamental limitations of biopsy-dependent assessment. Our approach enables accurate identification of HER2-low patients who may benefit from novel therapies such as T-DXd.
This framework represents a significant advancement in precision oncology, with potential to transform diagnostic workflows and guide targeted therapy selection in breast cancer care. The online version contains supplementary material available at 10.1186/s13058-025-02118-2.
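Micro- and macro-average AUCs summarize multiclass performance differently: the macro average computes a one-vs-rest AUC per class and weights classes equally. A minimal sketch of the macro average using a pairwise-comparison AUC (illustrative, not the study's implementation):

```python
def binary_auc(scores, labels):
    """AUC via pairwise comparison: probability that a positive sample
    is scored above a negative one, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_average_auc(class_scores, true_classes):
    """Macro-average AUC: one-vs-rest AUC per class, averaged equally.

    class_scores: per-sample lists of scores indexed by class label (0..K-1).
    """
    classes = sorted(set(true_classes))
    aucs = []
    for c in classes:
        scores = [s[c] for s in class_scores]
        labels = [1 if t == c else 0 for t in true_classes]
        aucs.append(binary_auc(scores, labels))
    return sum(aucs) / len(aucs)
```

Because the macro average is not dominated by the largest class, reporting it alongside the micro average helps show that minority categories such as HER2-positive are not being ignored.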

AI demonstrates comparable diagnostic performance to radiologists in MRI detection of anterior cruciate ligament tears: a systematic review and meta-analysis.

Gill SS, Haq T, Zhao Y, Ristic M, Amiras D, Gupte CM

PubMed · Sep 25 2025
Anterior cruciate ligament (ACL) injuries are among the most common knee injuries, affecting 1 in 3500 people annually. With rising rates of ACL tears, particularly in children, timely diagnosis is critical. This study evaluates the effectiveness of artificial intelligence (AI) in diagnosing and classifying ACL tears on MRI through a systematic review and meta-analysis, comparing AI performance with clinicians and assessing radiomic and non-radiomic models. Major databases were searched for AI models diagnosing ACL tears on MRI. Thirty-six studies, representing 52 models, were included. Accuracy, sensitivity, and specificity metrics were extracted. Pooled estimates were calculated using a random-effects model. Subgroup analyses compared MRI sequences, ground truths, AI versus clinician performance, and radiomic versus non-radiomic models. This study was conducted in line with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocols. AI demonstrated strong diagnostic performance, with pooled accuracy, sensitivity, and specificity of 87.37%, 90.73%, and 91.34%, respectively. Classification models achieved pooled metrics of 90.46%, 88.68%, and 94.08%. Radiomic models outperformed non-radiomic models, and AI demonstrated comparable performance to clinicians in key metrics. Three-dimensional (3D) proton density fat suppression (PDFS) sequences with < 2 mm slice depth yielded the most promising results, despite small sample sizes, favouring arthroscopic benchmarks. Despite high heterogeneity (I² > 90%), AI models demonstrate diagnostic performance comparable to clinicians and may serve as valuable adjuncts in ACL tear detection, pending prospective validation. However, substantial heterogeneity and limited interpretability remain key challenges. Further research and standardised evaluation frameworks are needed to support clinical integration.
Question: Is AI effective and accurate in diagnosing and classifying anterior cruciate ligament (ACL) tears on MRI?
Findings: AI demonstrated high accuracy (87.37%), sensitivity (90.73%), and specificity (91.34%) in ACL tear diagnosis, matching or surpassing clinicians. Radiomic models outperformed non-radiomic approaches.
Clinical relevance: AI can enhance the accuracy of ACL tear diagnosis, reducing misdiagnoses and supporting clinicians, especially in resource-limited settings. Its integration into clinical workflows may streamline MRI interpretation, reduce diagnostic delays, and improve patient outcomes by optimising management.
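The I² statistic quoted above quantifies how much of the between-study variability exceeds what sampling error alone would explain. A minimal sketch of Higgins' I² from per-study effect estimates and their variances, using fixed-effect inverse-variance weights for Cochran's Q:

```python
def i_squared(effects, variances):
    """Higgins' I² (percent): share of total variability in study effects
    attributable to heterogeneity rather than chance."""
    weights = [1.0 / v for v in variances]       # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    if q <= df:
        return 0.0
    return 100.0 * (q - df) / q
```

An I² above 90%, as reported here, means the observed spread in study results is almost entirely genuine heterogeneity, which is why the pooled estimates warrant cautious interpretation.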