
Adaptive Breast MRI Scanning Using AI.

Eskreis-Winkler S, Bhowmik A, Kelly LH, Lo Gullo R, D'Alessio D, Belen K, Hogan MP, Saphier NB, Sevilimedu V, Sung JS, Comstock CE, Sutton EJ, Pinker K

PubMed · Jun 1, 2025
Background MRI protocols typically involve many imaging sequences and often require too much time. Purpose To simulate artificial intelligence (AI)-directed stratified scanning for screening breast MRI with various triage thresholds and evaluate its diagnostic performance against that of the full breast MRI protocol. Materials and Methods This retrospective reader study included consecutive contrast-enhanced screening breast MRI examinations performed between January 2013 and January 2019 at three regional cancer sites. In this simulation study, an in-house AI tool generated a suspicion score for subtraction maximum intensity projection images during a given MRI examination, and the score was used to determine whether to proceed with the full MRI protocol or end the examination early (abbreviated breast MRI [AB-MRI] protocol). Examinations with suspicion scores under the 50th percentile were read using both the AB-MRI protocol (ie, dynamic contrast-enhanced MRI scans only) and the full MRI protocol. Diagnostic performance metrics for screening with various AI triage thresholds were compared with those for screening without AI triage. Results Of 863 women (mean age, 52 years ± 10 [SD]; 1423 MRI examinations), 51 received a cancer diagnosis within 12 months of screening. The diagnostic performance metrics for AI-directed stratified scanning that triaged 50% of examinations to AB-MRI versus full MRI protocol scanning were as follows: sensitivity, 88.2% (45 of 51; 95% CI: 79.4, 97.1) versus 86.3% (44 of 51; 95% CI: 76.8, 95.7); specificity, 80.8% (1108 of 1372; 95% CI: 78.7, 82.8) versus 81.4% (1117 of 1372; 95% CI: 79.4, 83.5); positive predictive value 3 (ie, percent of biopsies yielding cancer), 23.6% (43 of 182; 95% CI: 17.5, 29.8) versus 24.7% (42 of 170; 95% CI: 18.2, 31.2); cancer detection rate (per 1000 examinations), 31.6 (95% CI: 22.5, 40.7) versus 30.9 (95% CI: 21.9, 39.9); and interval cancer rate (per 1000 examinations), 4.2 (95% CI: 0.9, 7.6) versus 4.9 (95% CI: 1.3, 8.6). Specificity decreased by no more than 2.7 percentage points with AI triage. There were no AI-triaged examinations for which conducting the full MRI protocol would have resulted in additional cancer detection. Conclusion AI-directed stratified MRI decreased simulated scan times while maintaining diagnostic performance. © RSNA, 2025. Supplemental material is available for this article. See also the editorial by Strand in this issue.
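The stratified-scanning decision described above reduces to a percentile cutoff on the AI suspicion score. A minimal sketch of that triage rule follows; the in-house AI tool is not public, so the score distribution, the helper function, and the exam count are illustrative placeholders, not the authors' code:

```python
import numpy as np

def triage_exam(suspicion_score: float, threshold: float) -> str:
    """Hypothetical helper: end the exam early (AB-MRI) when the AI
    suspicion score falls below the triage threshold, else run the
    full protocol. Score scale is an assumption."""
    return "full_protocol" if suspicion_score >= threshold else "ab_mri"

# Simulate a 50th-percentile triage threshold over a batch of exams,
# mirroring the study design (scores below the median end early).
rng = np.random.default_rng(0)
scores = rng.random(1423)                # placeholder suspicion scores
threshold = np.percentile(scores, 50)    # 50% of exams triaged to AB-MRI
decisions = [triage_exam(s, threshold) for s in scores]
print(sum(d == "ab_mri" for d in decisions), "exams ended early")
```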

ChatGPT-4o's Performance in Brain Tumor Diagnosis and MRI Findings: A Comparative Analysis with Radiologists.

Ozenbas C, Engin D, Altinok T, Akcay E, Aktas U, Tabanli A

PubMed · Jun 1, 2025
To evaluate the accuracy of ChatGPT-4o in identifying magnetic resonance imaging (MRI) findings and diagnosing brain tumors by comparing its performance with that of experienced radiologists. This retrospective study included 46 patients with pathologically confirmed brain tumors who underwent preoperative MRI between January 2021 and October 2024. Two experienced radiologists and ChatGPT-4o independently evaluated the anonymized MRI images, answering eight questions focused on MRI sequences, lesion characteristics, and diagnoses. ChatGPT-4o's responses were compared with those of the radiologists and with the pathology outcomes. Statistical analyses included accuracy, sensitivity, specificity, and the McNemar test, with p<0.05 considered statistically significant. ChatGPT-4o correctly identified 44 of the 46 (95.7%) lesions; it achieved 88.3% accuracy in identifying MRI sequences, 81% in perilesional edema, 79.5% in signal characteristics, and 82.2% in contrast enhancement. However, its accuracy was 53.6% in localizing lesions and 26.3% in distinguishing extra-axial from intra-axial lesions. ChatGPT-4o achieved success rates of 56.8% for differential diagnosis and 29.5% for most likely diagnosis, compared with 90.9%-93.2% and 65.9%-70.5%, respectively, for the two radiologists (p<0.005). ChatGPT-4o demonstrated high accuracy in identifying certain MRI features but underperformed the radiologists on diagnostic tasks. Despite its current limitations, future updates and advancements may enable large language models to facilitate diagnosis and offer a reliable second opinion to radiologists.
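The study's key model-versus-radiologist comparison rests on the McNemar test over paired correct/incorrect calls on the same 46 cases. A hedged sketch using statsmodels; the correctness vectors below are synthetic stand-ins, not the study data:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired correctness on the same 46 cases (1 = correct, 0 = incorrect).
# Synthetic placeholder vectors, not the study's ratings.
gpt_correct = np.array([1, 0, 1, 1, 0, 1] * 7 + [1, 0, 1, 1])
rad_correct = np.array([1, 1, 1, 1, 0, 1] * 7 + [1, 1, 1, 0])

# 2x2 agreement/disagreement table between the two raters
table = np.array([
    [np.sum((gpt_correct == 1) & (rad_correct == 1)),
     np.sum((gpt_correct == 1) & (rad_correct == 0))],
    [np.sum((gpt_correct == 0) & (rad_correct == 1)),
     np.sum((gpt_correct == 0) & (rad_correct == 0))],
])
result = mcnemar(table, exact=True)  # exact test is sensible at n = 46
print(f"McNemar p = {result.pvalue:.4f}")
```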

Deep learning driven interpretable and informed decision making model for brain tumour prediction using explainable AI.

Adnan KM, Ghazal TM, Saleem M, Farooq MS, Yeun CY, Ahmad M, Lee SW

PubMed · Jun 1, 2025
Brain tumours are highly complex, particularly in their initial diagnosis, whose accuracy determines patient prognosis. Conventional methods rely on MRI and CT scans and employ generic machine learning techniques that depend heavily on hand-crafted feature extraction and human intervention. These methods can fail in complex cases and do not produce human-interpretable results, making it difficult for clinicians to trust the model's predictions. Such limitations prolong the diagnostic process and can degrade the quality of treatment. Deep learning has become a powerful tool for complex image analysis tasks such as brain tumour detection, learning advanced patterns directly from images. However, deep learning models are often considered "black box" systems in which the reasoning behind predictions remains unclear. To address this issue, the present study applies explainable AI (XAI) alongside deep learning for accurate and interpretable brain tumour prediction. XAI enhances model interpretability by identifying key features such as tumour size, location, and texture that are crucial for clinicians, helping build their confidence in the model and enabling better-informed decisions. In this research, a deep learning model integrated with XAI is proposed as an interpretable framework for brain tumour prediction. The model is trained on an extensive dataset comprising imaging and clinical data and achieves a high AUC while leveraging XAI for model explainability and feature selection. The approach improves predictive performance, achieving an accuracy of 92.98% and a miss rate of 7.02%. Additionally, interpretability tools such as LIME and Grad-CAM give clinicians a clearer view of the decision-making process, supporting diagnosis and treatment. This model represents a significant advancement in brain tumour prediction, with the potential to improve patient outcomes and contribute to neuro-oncology.
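Grad-CAM, one of the interpretability tools named above, weights a CNN's last convolutional feature maps by their pooled gradients to produce a class-discriminative heatmap. A minimal PyTorch sketch; the paper's architecture is unspecified, so resnet18 and the 2D input shape are illustrative stand-ins:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # stand-in for the paper's CNN
feats = {}

def save_activation(module, inputs, output):
    output.retain_grad()   # keep the gradient of this non-leaf tensor
    feats["a"] = output

model.layer4[-1].register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)   # placeholder image, not a real scan
score = model(x)[0].max()         # logit of the top class
model.zero_grad()
score.backward()

grads = feats["a"].grad                          # d(score)/d(activation)
weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
cam = F.relu((weights * feats["a"]).sum(dim=1))  # weighted channel sum + ReLU
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False)
print(cam.shape)   # torch.Size([1, 1, 224, 224]) heatmap over the input
```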

Radiology: Cardiothoracic Imaging Highlights 2024.

Catania R, Mukherjee A, Chamberlin JH, Calle F, Philomina P, Mastrodicasa D, Allen BD, Suchá D, Abbara S, Hanneman K

PubMed · Jun 1, 2025
Radiology: Cardiothoracic Imaging publishes research, technical developments, and reviews related to cardiac, vascular, and thoracic imaging. The current review article, led by the Radiology: Cardiothoracic Imaging trainee editorial board, highlights the most impactful articles published in the journal between November 2023 and October 2024. The review encompasses various aspects of cardiac, vascular, and thoracic imaging related to coronary artery disease, cardiac MRI, valvular imaging, congenital and inherited heart diseases, thoracic imaging, lung cancer, artificial intelligence, and health services research. Key highlights include the role of CT fractional flow reserve analysis to guide patient management, the role of MRI elastography in identifying age-related myocardial stiffness associated with increased risk of heart failure, review of MRI in patients with cardiovascular implantable electronic devices and fractured or abandoned leads, imaging of mitral annular disjunction, specificity of the Lung Imaging Reporting and Data System version 2022 for detecting malignant airway nodules, and a radiomics-based reinforcement learning model to analyze serial low-dose CT scans in lung cancer screening. Ongoing research and future directions include artificial intelligence tools for applications such as plaque quantification using coronary CT angiography and growing understanding of the interconnectedness of environmental sustainability and cardiovascular imaging. Keywords: CT, MRI, CT-Coronary Angiography, Cardiac, Pulmonary, Coronary Arteries, Heart, Lung, Mediastinum, Mitral Valve, Aortic Valve, Artificial Intelligence. © RSNA, 2025.

Network Occlusion Sensitivity Analysis Identifies Regional Contributions to Brain Age Prediction.

He L, Wang S, Chen C, Wang Y, Fan Q, Chu C, Fan L, Xu J

PubMed · Jun 1, 2025
Deep learning frameworks utilizing convolutional neural networks (CNNs) have frequently been used for brain age prediction and have achieved outstanding performance. Nevertheless, deep learning remains a black box, as it is hard to interpret which brain parts contribute significantly to the predictions. To tackle this challenge, we first trained a lightweight, fully convolutional CNN model for brain age estimation on a large sample (N = 3054, age range 8-80 years) and tested it on an independent data set (N = 555, age range 8-80 years). We then developed an interpretable scheme combining network occlusion sensitivity analysis (NOSA) with a fine-grained human brain atlas to uncover the learned invariance of the model. Our findings show that the dorsolateral and dorsomedial frontal cortex, anterior cingulate cortex, and thalamus had the highest contributions to age prediction across the lifespan. More interestingly, we observed that different regions showed divergent patterns in their predictions for specific age groups and that the bilateral hemispheres contributed differently to the predictions. Regions in the frontal lobe were essential predictors in both the developmental and aging stages, with the thalamus remaining relatively stable and saliently correlated with other regional changes throughout the lifespan. The lateral and medial temporal brain regions gradually became involved during the aging phase. At the network level, the frontoparietal and default mode networks showed an inverted U-shaped contribution from the developmental to the aging stages. This framework can identify regional contributions to the brain age prediction model, which could help increase model interpretability when brain age serves as an aging biomarker.
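The core of NOSA as described is simple: occlude one atlas region at a time and record how far the predicted age moves from the unoccluded baseline. A toy sketch under that reading; the trained CNN, atlas, and volumes here are random placeholders, not the study's model or data:

```python
import numpy as np

def occlusion_sensitivity(predict_age, volume, atlas, region_ids):
    """Per-region contribution to predicted brain age.

    `predict_age` is any function mapping a 3D volume to a scalar age;
    the paper's trained CNN is not public, so this is a stand-in.
    """
    baseline = predict_age(volume)
    contributions = {}
    for rid in region_ids:
        occluded = volume.copy()
        occluded[atlas == rid] = 0.0   # zero out one atlas region
        contributions[rid] = abs(predict_age(occluded) - baseline)
    return contributions

# Toy example: fake predictor, random volume, and a 4-region atlas
rng = np.random.default_rng(1)
vol = rng.random((32, 32, 32))
atlas = rng.integers(1, 5, size=(32, 32, 32))
fake_model = lambda v: 40.0 + v.mean() * 10.0   # placeholder predictor
print(occlusion_sensitivity(fake_model, vol, atlas, range(1, 5)))
```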

A Large Language Model to Detect Negated Expressions in Radiology Reports.

Su Y, Babore YB, Kahn CE

PubMed · Jun 1, 2025
Natural language processing (NLP) is crucial to extract information accurately from unstructured text to provide insights for clinical decision-making, quality improvement, and medical research. This study compared the performance of a rule-based NLP system and a medical-domain transformer-based model to detect negated concepts in radiology reports. Using a corpus of 984 de-identified radiology reports from a large U.S.-based academic health system (1000 consecutive reports, excluding 16 duplicates), the investigators compared the rule-based medspaCy system and the Clinical Assertion and Negation Classification Bidirectional Encoder Representations from Transformers (CAN-BERT) system to detect negated expressions of terms from RadLex, the Unified Medical Language System Metathesaurus, and the Radiology Gamuts Ontology. Power analysis determined a sample size of 382 terms to achieve α = 0.05 and power (1 − β) = 0.8 for McNemar's test; based on an estimate of 15% negated terms, 2800 randomly selected terms were annotated manually as negated or not negated. Precision, recall, and F1 of the two models were compared using McNemar's test. Of the 2800 terms, 387 (13.8%) were negated. For negation detection, medspaCy attained a recall of 0.795, precision of 0.356, and F1 of 0.492. CAN-BERT achieved a recall of 0.785, precision of 0.768, and F1 of 0.777. Although recall was not significantly different, CAN-BERT had significantly better precision (χ² = 304.64; p < 0.001). The transformer-based CAN-BERT model detected negated terms in radiology reports with high precision and recall; its precision significantly exceeded that of the rule-based medspaCy system. Use of this system will improve data extraction from textual reports to support information retrieval, AI model training, and discovery of causal relationships.
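For readers who want to reproduce the rule-based arm, medspaCy's ConText component flags negated entities once target rules are registered. A minimal sketch; the single target rule below stands in for the much larger RadLex/UMLS/Gamuts vocabularies used in the study:

```python
import medspacy
from medspacy.target_matcher import TargetRule

# medspacy.load() builds a pipeline that includes the ConText
# negation component alongside a target matcher.
nlp = medspacy.load()
nlp.get_pipe("medspacy_target_matcher").add(
    [TargetRule("pneumothorax", "FINDING")]  # stand-in vocabulary of one term
)

doc = nlp("No evidence of pneumothorax. Small right pleural effusion.")
for ent in doc.ents:   # only terms with target rules become entities
    print(ent.text, "negated" if ent._.is_negated else "affirmed")
# -> pneumothorax negated
```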

Decoding Glioblastoma Heterogeneity: Neuroimaging Meets Machine Learning.

Fares J, Wan Y, Mayrand R, Li Y, Mair R, Price SJ

PubMed · Jun 1, 2025
Recent advancements in neuroimaging and machine learning have significantly improved our ability to diagnose and categorize isocitrate dehydrogenase (IDH)-wildtype glioblastoma, a disease characterized by notable tumoral heterogeneity, which is crucial for effective treatment. Neuroimaging techniques, such as diffusion tensor imaging and magnetic resonance radiomics, provide noninvasive insights into tumor infiltration patterns and metabolic profiles, aiding in accurate diagnosis and prognostication. Machine learning algorithms further enhance glioblastoma characterization by identifying distinct imaging patterns and features, facilitating precise diagnoses and treatment planning. Integration of these technologies allows for the development of image-based biomarkers, potentially reducing the need for invasive biopsy procedures and enabling personalized therapy targeting specific pro-tumoral signaling pathways and resistance mechanisms. Although significant progress has been made, ongoing innovation is essential to address remaining challenges and further improve these methodologies. Future directions should focus on refining machine learning models, integrating emerging imaging techniques, and elucidating the complex interplay between imaging features and underlying molecular processes. This review highlights the pivotal role of neuroimaging and machine learning in glioblastoma research, offering invaluable noninvasive tools for diagnosis, prognosis prediction, and treatment planning, ultimately improving patient outcomes. These advances in the field promise to usher in a new era in the understanding and classification of IDH-wildtype glioblastoma.

P2TC: A Lightweight Pyramid Pooling Transformer-CNN Network for Accurate 3D Whole Heart Segmentation.

Cui H, Wang Y, Zheng F, Li Y, Zhang Y, Xia Y

PubMed · Jun 1, 2025
Cardiovascular disease is a leading global cause of death, and accurate heart segmentation is required for diagnosis and surgical planning. Deep learning methods have demonstrated superior performance in segmenting cardiac structures. However, 3D whole heart segmentation still faces limitations such as inadequate spatial context modeling, difficulty capturing long-distance dependencies, high computational complexity, and limited representation of local high-level semantic information. To tackle these problems, we propose a lightweight Pyramid Pooling Transformer-CNN (P2TC) network for accurate 3D whole heart segmentation. The proposed architecture comprises a dual encoder-decoder structure with a 3D pyramid pooling Transformer for multi-scale information fusion and a lightweight large-kernel Convolutional Neural Network (CNN) for local feature extraction. The decoder has two branches for precise segmentation and contextual residual handling. The first branch generates segmentation masks for pixel-level classification based on the features extracted by the encoder, achieving accurate segmentation of cardiac structures. The second branch highlights contextual residuals across slices, enabling the network to better handle variations and boundaries. Extensive experiments on the Multi-Modality Whole Heart Segmentation (MM-WHS) 2017 challenge dataset demonstrate that P2TC achieves state-of-the-art results, with Dice scores of 92.6% and 88.1% in the Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, respectively, surpassing the baseline model by 1.5% and 1.7%.
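Dice, the metric reported above, is twice the overlap between predicted and reference masks divided by their summed sizes. A single-class sketch; the MM-WHS evaluation actually averages Dice over seven cardiac substructures, which this simplification omits:

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor,
               eps: float = 1e-6) -> float:
    """Dice coefficient for one binary 3D segmentation mask."""
    pred, target = pred.bool(), target.bool()
    intersection = (pred & target).sum().item()
    return (2.0 * intersection + eps) / (
        pred.sum().item() + target.sum().item() + eps)

# Toy check on random 3D masks
pred = torch.rand(64, 64, 64) > 0.5
target = torch.rand(64, 64, 64) > 0.5
print(f"Dice = {dice_score(pred, target):.3f}")  # ~0.5 for random masks
```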

Multi-modal large language models in radiology: principles, applications, and potential.

Shen Y, Xu Y, Ma J, Rui W, Zhao C, Heacock L, Huang C

PubMed · Jun 1, 2025
Large language models (LLMs) and multi-modal large language models (MLLMs) represent the cutting edge of artificial intelligence. This review provides a comprehensive overview of their capabilities and potential impact on radiology. Unlike most existing literature reviews, which focus solely on LLMs, this work examines both LLMs and MLLMs, highlighting their potential to support radiology workflows such as report generation, image interpretation, EHR summarization, differential diagnosis generation, and patient education. By streamlining these tasks, LLMs and MLLMs could reduce radiologist workload, improve diagnostic accuracy, support interdisciplinary collaboration, and ultimately enhance patient care. We also discuss key limitations, including the limited capacity of current MLLMs to interpret 3D medical images and to integrate image and text data, as well as the lack of effective evaluation methods, and introduce ongoing efforts to address these challenges.