Page 33 of 78779 results

Back to the Future: Cardiovascular Imaging From 1966 to Today and Tomorrow.

Wintersperger BJ, Alkadhi H, Wildberger JE

pubmed · logopapers · Jul 23 2025
On the 60th anniversary of Investigative Radiology, a journal dedicated to cutting-edge imaging technology, this article discusses key historical milestones in CT and MRI technology, as well as the ongoing advancement of contrast agent development for cardiovascular imaging over the past decades. It specifically highlights recent developments and the current state-of-the-art technology, including photon-counting detector CT and artificial intelligence, which will further push the boundaries of cardiovascular imaging. What were once ideas and visions have become today's clinical reality for the benefit of patients, and imaging technology will continue to evolve and transform modern medicine.

Artificial Intelligence Empowers Novice Users to Acquire Diagnostic-Quality Echocardiography.

Trost B, Rodrigues L, Ong C, Dezellus A, Goldberg YH, Bouchat M, Roger E, Moal O, Singh V, Moal B, Lafitte S

pubmed · logopapers · Jul 22 2025
Cardiac ultrasound exams provide real-time data to guide clinical decisions but require highly trained sonographers. Artificial intelligence (AI) that uses deep learning algorithms to guide novices in the acquisition of diagnostic echocardiographic studies may broaden access and improve care. The objective of this trial was to evaluate whether nurses without previous ultrasound experience (novices) could obtain diagnostic-quality acquisitions of 10 echocardiographic views using AI-based software. This noninferiority study was prospective, international, nonrandomized, and conducted at 2 medical centers in the United States and France from November 2023 to August 2024. Two limited cardiac exams were performed on adult patients scheduled for a clinically indicated echocardiogram; one was conducted by a novice using AI guidance and one by an expert (experienced sonographer or cardiologist) without it. Primary endpoints were evaluated by 5 experienced cardiologists to assess whether the novice exam was of sufficient quality to visually analyze the left ventricular size and function, the right ventricle size, and the presence of nontrivial pericardial effusion. Secondary endpoints included 8 additional cardiac parameters. A total of 240 patients (mean age 62.6 years; 117 women (48.8%); mean body mass index 26.6 kg/m²) completed the study. One hundred percent of the exams performed by novices with the studied software were of sufficient quality to assess the primary endpoints. Cardiac parameters assessed in exams conducted by novices and experts were strongly correlated. AI-based software provides a safe means for novices to perform diagnostic-quality cardiac ultrasounds after a short training period.

AgentMRI: A Vision Language Model-Powered AI System for Self-regulating MRI Reconstruction with Multiple Degradations.

Sajua GA, Akhib M, Chang Y

pubmed · logopapers · Jul 22 2025
Artificial intelligence (AI)-driven autonomous agents are transforming multiple domains by integrating reasoning, decision-making, and task execution into a unified framework. In medical imaging, such agents have the potential to change workflows by reducing human intervention and optimizing image quality. In this paper, we introduce AgentMRI, an AI-driven system that leverages vision language models (VLMs) for fully autonomous magnetic resonance imaging (MRI) reconstruction in the presence of multiple degradations. Unlike traditional MRI correction or reconstruction methods, AgentMRI relies neither on manual post-processing nor on fixed correction models. Instead, it dynamically detects MRI corruption and then automatically selects the best correction model for image reconstruction. The framework uses a multi-query VLM strategy to ensure robust corruption detection through consensus-based decision-making and confidence-weighted inference. AgentMRI automatically chooses among deep learning models for MRI reconstruction, motion correction, and denoising. We evaluated AgentMRI in both zero-shot and fine-tuned settings. Experimental results on a comprehensive brain MRI dataset demonstrate that AgentMRI achieves an average accuracy of 73.6% in the zero-shot setting and 95.1% after fine-tuning. Experiments show that it accurately executes the reconstruction process without human intervention. AgentMRI eliminates manual intervention and introduces a scalable, multimodal AI framework for autonomous MRI processing. This work represents a step toward fully autonomous and intelligent MR image reconstruction systems.
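The abstract does not give implementation details for the multi-query strategy; a minimal sketch of confidence-weighted consensus over repeated VLM queries (function name and label set are hypothetical) might look like:

```python
from collections import defaultdict

def detect_corruption(vlm_answers):
    """Aggregate repeated VLM queries into one corruption label.

    vlm_answers: list of (label, confidence) pairs, one per query.
    Returns the label with the highest summed confidence, i.e. a
    confidence-weighted majority vote across the queries.
    """
    scores = defaultdict(float)
    for label, confidence in vlm_answers:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Four queries about the same scan; "motion" wins with total weight 1.6.
answers = [("motion", 0.9), ("noise", 0.6), ("motion", 0.7), ("undersampling", 0.4)]
print(detect_corruption(answers))  # motion
```

Weighting by reported confidence rather than counting raw votes lets a single high-confidence answer outweigh several hesitant ones, which is one plausible reading of "confidence-weighted inference."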

Role of Brain Age Gap as a Mediator in the Relationship Between Cognitive Impairment Risk Factors and Cognition.

Tan WY, Huang X, Huang J, Robert C, Cui J, Chen CPLH, Hilal S

pubmed · logopapers · Jul 22 2025
Cerebrovascular disease (CeVD) and cognitive impairment risk factors contribute to cognitive decline, but the role of brain age gap (BAG) in mediating this relationship remains unclear, especially in Southeast Asian populations. This study investigated the influence of cognitive impairment risk factors on cognition and examined how BAG mediates this relationship, particularly in individuals with varying CeVD burden. This cross-sectional study analyzed Singaporean community and memory clinic participants. Cognitive impairment risk factors were assessed using the Cognitive Impairment Scoring System (CISS), encompassing 11 sociodemographic and vascular factors. Cognition was assessed through a neuropsychological battery, evaluating global cognition and 6 cognitive domains: executive function, attention, memory, language, visuomotor speed, and visuoconstruction. Brain age was derived from structural MRI features using an ensemble machine learning model. Propensity score matching balanced risk profiles between the model-training sample and the remaining sample. Structural equation modeling examined the mediation effect of BAG on the CISS-cognition relationship, stratified by CeVD burden (high: CeVD+, low: CeVD-). The study included 1,437 individuals without dementia, with 646 in the matched sample (mean age 66.4 ± 6.0 years, 47% female, 60% with no cognitive impairment). Higher CISS was consistently associated with poorer cognitive performance across all domains, with the strongest negative associations in visuomotor speed (β = -2.70, p < 0.001) and visuoconstruction (β = -3.02, p < 0.001). Among the CeVD+ group, BAG significantly mediated the relationship between CISS and global cognition (proportion mediated: 19.95%, p = 0.01), with the strongest mediation effects in executive function (34.1%, p = 0.03) and language (26.6%, p = 0.008). BAG also mediated the relationship between CISS and memory (21.1%) and visuoconstruction (14.4%) in the CeVD+ group, but these effects diminished after statistical adjustments. Our findings suggest that BAG is a key intermediary linking cognitive impairment risk factors to cognitive function, particularly in individuals with high CeVD burden. This mediation effect is domain-specific, with executive function, language, and visuoconstruction being the most vulnerable to accelerated brain aging. Limitations include the cross-sectional design, which precludes causal inference, and the focus on Southeast Asian populations, which limits generalizability. Future longitudinal studies should verify these relationships and explore additional factors not captured in our model.
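The "proportion mediated" quantity comes from decomposing the exposure-outcome effect into an indirect path through the mediator and a direct path. The study used structural equation modeling; a simpler product-of-coefficients sketch on synthetic data (variable names and effect sizes are invented for illustration) conveys the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical standardized variables: risk score (CISS-like),
# brain age gap (BAG-like mediator), and a cognition outcome.
ciss = rng.normal(size=n)
bag = 0.5 * ciss + rng.normal(size=n)            # risk widens the gap
cog = -0.3 * bag - 0.4 * ciss + rng.normal(size=n)

def ols(y, X):
    """Least-squares coefficients with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(bag, [ciss])[1]                   # exposure -> mediator path
b, c_direct = ols(cog, [bag, ciss])[1:3]  # mediator -> outcome, direct path
indirect = a * b
total = indirect + c_direct
prop = indirect / total
print(f"proportion mediated: {prop:.1%}")  # near (0.5*0.3)/0.55 ~ 27%
```

With both the indirect and direct effects negative, the ratio of indirect to total effect recovers roughly the 27% planted in the simulation; real SEM additionally handles measurement error and covariate adjustment.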

AURA: A Multi-Modal Medical Agent for Understanding, Reasoning & Annotation

Nima Fathi, Amar Kumar, Tal Arbel

arxiv · logopreprint · Jul 22 2025
Recent advancements in Large Language Models (LLMs) have catalyzed a paradigm shift from static prediction systems to agentic AI capable of reasoning, interacting with tools, and adapting to complex tasks. While LLM-based agentic systems have shown promise across many domains, their application to medical imaging remains in its infancy. In this work, we introduce AURA, the first visual linguistic explainability agent designed specifically for comprehensive analysis, explanation, and evaluation of medical images. By enabling dynamic interactions, contextual explanations, and hypothesis testing, AURA represents a significant advancement toward more transparent, adaptable, and clinically aligned AI systems. We highlight the promise of agentic AI in transforming medical image analysis from static predictions to interactive decision support. Leveraging Qwen-32B, an LLM-based architecture, AURA integrates a modular toolbox comprising: (i) a segmentation suite with phrase grounding, pathology segmentation, and anatomy segmentation to localize clinically meaningful regions; (ii) a counterfactual image-generation module that supports reasoning through image-level explanations; and (iii) a set of evaluation tools including pixel-wise difference-map analysis, classification, and advanced state-of-the-art components to assess diagnostic relevance and visual interpretability.
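The modular-toolbox pattern, where an agent routes a requested task to a registered tool, can be sketched in a few lines. The tool names and stub behaviors below are hypothetical stand-ins, not AURA's actual API:

```python
from typing import Callable, Dict

# Hypothetical registry mirroring the three toolbox groups in the abstract:
# segmentation, counterfactual generation, and evaluation.
TOOLS: Dict[str, Callable[[str], str]] = {
    "segment_pathology": lambda img: f"pathology mask for {img}",
    "segment_anatomy": lambda img: f"anatomy mask for {img}",
    "counterfactual": lambda img: f"counterfactual of {img}",
    "difference_map": lambda img: f"pixel-wise difference map for {img}",
}

def run_agent(task: str, image: str) -> str:
    """Dispatch a task to the matching tool; fail loudly on unknown tasks."""
    if task not in TOOLS:
        raise ValueError(f"no tool registered for task {task!r}")
    return TOOLS[task](image)

print(run_agent("counterfactual", "scan_001.png"))  # counterfactual of scan_001.png
```

In a real agent, the LLM would choose the task name and arguments from the user's question; the registry pattern keeps each tool independently swappable.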

LLM-driven Medical Report Generation via Communication-efficient Heterogeneous Federated Learning.

Che H, Jin H, Gu Z, Lin Y, Jin C, Chen H

pubmed · logopapers · Jul 21 2025
Large Language Models (LLMs) have demonstrated significant potential in Medical Report Generation (MRG), yet their development requires large amounts of medical image-report pairs, which are commonly scattered across multiple centers. Centralizing these data is exceptionally challenging due to privacy regulations, thereby impeding model development and broader adoption of LLM-driven MRG models. To address this challenge, we present FedMRG, the first framework that leverages Federated Learning (FL) to enable privacy-preserving, multi-center development of LLM-driven MRG models, specifically designed to overcome the critical challenge of communication-efficient LLM training under multi-modal data heterogeneity. First, our framework tackles the fundamental challenge of communication overhead in federated LLM tuning by employing low-rank factorization to efficiently decompose parameter updates, significantly reducing gradient transmission costs and making LLM-driven MRG feasible in bandwidth-constrained FL settings. Furthermore, we observe dual heterogeneity in MRG under the FL scenario: varying image characteristics across medical centers, as well as diverse reporting styles and terminology preferences. To address the data heterogeneity, we further enhance FedMRG with (1) client-aware contrastive learning in the MRG encoder, coupled with diagnosis-driven prompts, which capture both globally generalizable and locally distinctive features while maintaining diagnostic accuracy; and (2) a dual-adapter mutual boosting mechanism in the MRG decoder that harmonizes generic and specialized adapters to address variations in reporting styles and terminology. Through extensive evaluation of our established FL-MRG benchmark, we demonstrate the generalizability and adaptability of FedMRG, underscoring its potential in harnessing multi-center data and generating clinically accurate reports while maintaining communication efficiency.
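The communication saving from low-rank factorization is easy to see concretely: instead of sending a full d_out × d_in update, a client sends two thin factors. A minimal NumPy sketch (truncated SVD stands in for whatever factorization FedMRG actually uses):

```python
import numpy as np

def low_rank_update(delta_w: np.ndarray, rank: int):
    """Approximate a dense weight update with two thin factors.

    Transmitting B (d_out x r) and A (r x d_in) costs
    r * (d_out + d_in) numbers instead of d_out * d_in.
    """
    u, s, vt = np.linalg.svd(delta_w, full_matrices=False)
    B = u[:, :rank] * s[:rank]  # d_out x r, singular values folded in
    A = vt[:rank]               # r x d_in
    return B, A

rng = np.random.default_rng(1)
# A genuinely rank-4 update is reconstructed exactly by a rank-4 factorization.
true = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 128))
B, A = low_rank_update(true, rank=4)
print(np.allclose(B @ A, true))          # True
print(B.size + A.size, "vs", true.size)  # 768 vs 8192 numbers transmitted
```

Here the payload shrinks by more than 10×; for LLM-scale layers (d_out, d_in in the thousands) the ratio is far larger, which is what makes federated tuning bandwidth-feasible.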

Mammo-SAE: Interpreting Breast Cancer Concept Learning with Sparse Autoencoders

Krishna Kanth Nakka

arxiv · logopreprint · Jul 21 2025
Interpretability is critical in high-stakes domains such as medical imaging, where understanding model decisions is essential for clinical adoption. In this work, we introduce Sparse Autoencoder (SAE)-based interpretability to breast imaging by analyzing Mammo-CLIP, a vision-language foundation model pretrained on large-scale mammogram image-report pairs. We train a patch-level Mammo-SAE on Mammo-CLIP to identify and probe latent features associated with clinically relevant breast concepts such as mass and suspicious calcification. Our findings reveal that top-activated class-level latent neurons in the SAE latent space often align with ground truth regions, and also uncover several confounding factors influencing the model's decision-making process. Additionally, we analyze which latent neurons the model relies on during downstream fine-tuning to improve breast concept prediction. This study highlights the promise of interpretable SAE latent representations in providing deeper insight into the internal workings of foundation models at every layer for breast imaging.
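The core SAE recipe, an overcomplete dictionary with a ReLU encoder and an L1 sparsity penalty, fits in a few lines. This is an illustrative toy on random vectors, not the paper's Mammo-SAE, which is trained on Mammo-CLIP patch embeddings:

```python
import numpy as np

class SparseAutoencoder:
    """Tiny SAE sketch: overcomplete latent space, ReLU, L1 penalty."""

    def __init__(self, d_in: int, d_latent: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))
        self.b_enc = np.zeros(d_latent)
        self.W_dec = self.W_enc.T.copy()  # tied decoder for simplicity

    def encode(self, x):
        # ReLU keeps only positively-activated latents -> sparse codes
        return np.maximum(x @ self.W_enc + self.b_enc, 0.0)

    def decode(self, z):
        return z @ self.W_dec

    def loss(self, x, l1: float = 1e-3):
        z = self.encode(x)
        recon = self.decode(z)
        return np.mean((x - recon) ** 2) + l1 * np.abs(z).mean()

# 16-dim "patch embeddings" mapped into a 4x overcomplete latent space.
sae = SparseAutoencoder(d_in=16, d_latent=64)
x = np.random.default_rng(1).normal(size=(8, 16))
z = sae.encode(x)
print("fraction of active latents:", (z > 0).mean())
```

Interpretability work of this kind then asks which individual latent dimensions fire for a concept such as "mass," and whether their activations localize to the relevant image regions.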

Latent Space Synergy: Text-Guided Data Augmentation for Direct Diffusion Biomedical Segmentation

Muhammad Aqeel, Maham Nazir, Zanxi Ruan, Francesco Setti

arxiv · logopreprint · Jul 21 2025
Medical image segmentation suffers from data scarcity, particularly in polyp detection where annotation requires specialized expertise. We present SynDiff, a framework combining text-guided synthetic data generation with efficient diffusion-based segmentation. Our approach employs latent diffusion models to generate clinically realistic synthetic polyps through text-conditioned inpainting, augmenting limited training data with semantically diverse samples. Unlike traditional diffusion methods requiring iterative denoising, we introduce direct latent estimation, enabling single-step inference with a T× computational speedup, where T is the number of denoising steps. On CVC-ClinicDB, SynDiff achieves 96.0% Dice and 92.9% IoU while maintaining real-time capability suitable for clinical deployment. The framework demonstrates that controlled synthetic augmentation improves segmentation robustness without distribution shift. SynDiff bridges the gap between data-hungry deep learning models and clinical constraints, offering an efficient solution for deployment in resource-limited medical settings.
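The T× speedup is simply a count of network evaluations: iterative sampling calls the denoiser T times, direct estimation once. A toy counter (the denoiser here is a stand-in scalar map, not SynDiff's network) makes the accounting explicit:

```python
import numpy as np

calls = {"n": 0}

def denoiser(x):
    """Stand-in for one network forward pass; counts invocations."""
    calls["n"] += 1
    return 0.8 * x

def iterative_sampling(x, T):
    # Classic reverse diffusion: T sequential denoiser evaluations.
    for _ in range(T):
        x = denoiser(x)
    return x

def direct_estimation(x):
    # Direct latent estimation: one forward pass predicts the clean latent.
    return denoiser(x)

x0 = np.ones(4)
calls["n"] = 0; iterative_sampling(x0, T=50); it_calls = calls["n"]
calls["n"] = 0; direct_estimation(x0); d_calls = calls["n"]
print(f"{it_calls} vs {d_calls} evaluations -> {it_calls // d_calls}x speedup")
```

Since the network forward pass dominates inference cost, collapsing T evaluations into one is where the real-time capability comes from.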

Artificial intelligence in radiology: diagnostic sensitivity of ChatGPT for detecting hemorrhages in cranial computed tomography scans.

Bayar-Kapıcı O, Altunışık E, Musabeyoğlu F, Dev Ş, Kaya Ö

pubmed · logopapers · Jul 21 2025
Chat Generative Pre-trained Transformer (ChatGPT)-4V, a large language model developed by OpenAI, has been explored for its potential application in radiology. This study assesses ChatGPT-4V's diagnostic performance in identifying various types of intracranial hemorrhages in non-contrast cranial computed tomography (CT) images. Intracranial hemorrhages were presented to ChatGPT using the clearest 2D imaging slices. The first question, "Q1: Which imaging technique is used in this image?" was asked to determine the imaging modality. ChatGPT was then prompted with the second question, "Q2: What do you see in this image and what is the final diagnosis?" to assess whether the CT scan was normal or showed pathology. For CT scans containing hemorrhage that ChatGPT did not interpret correctly, a follow-up question ("Q3: There is bleeding in this image. Which type of bleeding do you see?") was used to evaluate whether this guidance influenced its response. ChatGPT accurately identified the imaging technique (Q1) in all cases but demonstrated difficulty diagnosing epidural hematoma (EDH), subdural hematoma (SDH), and subarachnoid hemorrhage (SAH) when no clues were provided (Q2). When a hemorrhage clue was introduced (Q3), ChatGPT correctly identified EDH in 16.7% of cases, SDH in 60%, and SAH in 15.6%, and achieved 100% diagnostic accuracy for hemorrhagic cerebrovascular disease. Its sensitivity, specificity, and accuracy for Q2 were 23.6%, 92.5%, and 57.4%, respectively. These values improved substantially with the clue in Q3, with sensitivity rising to 50.9% and accuracy to 71.3%. ChatGPT also demonstrated higher diagnostic accuracy in larger hemorrhages in EDH and SDH images. Although the model performs well in recognizing imaging modalities, its diagnostic accuracy substantially improves when guided by additional contextual information.
These findings suggest that ChatGPT's diagnostic performance improves with guided prompts, highlighting its potential as a supportive tool in clinical radiology.
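The reported sensitivity, specificity, and accuracy follow from the standard confusion-matrix definitions. The counts below are hypothetical, chosen only so the formulas reproduce percentages of the magnitude reported for Q2; they are not the study's actual case counts:

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int):
    """Confusion-matrix metrics: sensitivity, specificity, accuracy."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts: 55 hemorrhage scans, 53 normal scans.
sens, spec, acc = diagnostic_metrics(tp=13, fn=42, tn=49, fp=4)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
# sensitivity 23.6%, specificity 92.5%, accuracy 57.4%
```

The pattern in the study, high specificity with low sensitivity, means the model rarely called a normal scan abnormal but missed most hemorrhages until prompted with the clue.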

Imaging-aided diagnosis and treatment based on artificial intelligence for pulmonary nodules: A review.

Gao H, Li J, Wu Y, Tang Z, He X, Zhao F, Chen Y, He X

pubmed · logopapers · Jul 21 2025
Pulmonary nodules are critical indicators for the early detection of lung cancer; however, their diagnosis and management pose significant challenges due to the variability in nodule characteristics, reader fatigue, and limited clinical expertise, often leading to diagnostic errors. The rapid advancement of artificial intelligence (AI) presents promising solutions to address these issues. This review compares traditional rule-based methods, handcrafted feature-based machine learning, radiomics, deep learning, and hybrid models incorporating Transformers or attention mechanisms. It systematically compares their methodologies, clinical applications (diagnosis, treatment, prognosis), and dataset usage to evaluate performance, applicability, and limitations in pulmonary nodule management. AI advances have significantly improved pulmonary nodule management, with transformer-based models achieving leading accuracy in segmentation, classification, and subtyping. The fusion of multimodal imaging (CT, PET, and MRI) further enhances diagnostic precision. Additionally, AI aids treatment planning and prognosis prediction by integrating radiomics with clinical data. Despite these advances, challenges remain, including domain shift, high computational demands, limited interpretability, and variability across multi-center datasets. AI holds transformative potential for the diagnosis and treatment of pulmonary nodules, particularly for improving the accuracy of lung cancer treatment and patient prognosis, where significant progress has already been made.