From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow.

Fink A, Rau S, Kästingschäfer K, Weiß J, Bamberg F, Russe MF

PubMed · Jul 16 2025
Large language models (LLMs) hold great promise for optimizing and supporting radiology workflows amidst rising workloads. This review examines potential applications in daily radiology practice, as well as remaining challenges and potential solutions. It presents potential applications and challenges, illustrated with practical examples and concrete optimization suggestions. LLM-based assistance systems have potential applications in almost all language-based process steps of the radiological workflow. Significant progress has been made in areas such as report generation, particularly with retrieval-augmented generation (RAG) and multi-step reasoning approaches. However, challenges related to hallucinations, reproducibility, and data protection, as well as ethical concerns, need to be addressed before widespread implementation. LLMs have immense potential in radiology, particularly for supporting language-based process steps, with technological advances such as RAG and cloud-based approaches potentially accelerating clinical implementation. · LLMs can optimize reporting and other language-based processes in radiology with technologies such as RAG and multi-step reasoning approaches. · Challenges such as hallucinations, reproducibility, privacy, and ethical concerns must be addressed before widespread adoption. · RAG and cloud-based approaches could help overcome these challenges and advance the clinical implementation of LLMs. · Citation: Fink A, Rau S, Kästingschäfer K et al. From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow. Rofo 2025; DOI 10.1055/a-2641-3059.
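As a concrete illustration of the RAG approach mentioned above, the minimal sketch below retrieves the most similar prior reports and prepends them to the drafting prompt. The `embed_text` and `call_llm` helpers are hypothetical placeholders for whatever embedding model and LLM an institution actually deploys; this is not the review authors' implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for report drafting.
# `embed_text` and `call_llm` are hypothetical placeholders, not a real API.
import numpy as np

def embed_text(text: str) -> np.ndarray:
    # Placeholder: return a fixed-size embedding for `text`.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def call_llm(prompt: str) -> str:
    # Placeholder: send `prompt` to an LLM endpoint and return the draft report.
    return "<draft report>"

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k corpus reports most similar to the query (cosine similarity)."""
    q = embed_text(query)
    sims = []
    for doc in corpus:
        d = embed_text(doc)
        sims.append(float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-8)))
    top = np.argsort(sims)[::-1][:k]
    return [corpus[i] for i in top]

def draft_report(findings: str, prior_reports: list[str]) -> str:
    # Ground the prompt in retrieved prior reports before asking for a draft.
    context = "\n---\n".join(retrieve(findings, prior_reports))
    prompt = (
        "You are assisting a radiologist. Using only the dictated findings and the "
        "retrieved prior reports as grounding, draft a structured report.\n"
        f"Retrieved reports:\n{context}\n\nFindings:\n{findings}\n"
    )
    return call_llm(prompt)
```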

An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

PubMed · Jul 16 2025
The accurate early-stage diagnosis of gallbladder cancer (GBC) is regarded as one of the major challenges in the field of oncology. However, few studies have focused on the comprehensive classification of GBC based on multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 patients with gallbladder disease or healthy volunteers, with data acquired on two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To achieve better feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, namely GHL-Net, has also been developed. An ensemble learning strategy is employed to fuse the multi-modality data and obtain the final classification result. In addition, two interpretable methods are applied to help clinicians understand the model-based decisions. Model performance was evaluated through accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed the comparison methods on both datasets. In the binary classification scenario in particular, the proposed method achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC of 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. The visualization results obtained with the interpretable methods also demonstrated high clinical relevance of the intermediate decision-making processes. Ablation studies provided further insight into the methodology. The machine-learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to have a broader impact in other cancer diagnosis scenarios.
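For reference, a minimal sketch of how the reported binary-classification metrics (including MCC) can be computed with scikit-learn; the labels and scores below are illustrative placeholders, not the study's data.

```python
# Sketch: computing the binary-classification metrics reported above with scikit-learn.
# y_true / y_score are illustrative placeholders, not the study's data.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, average_precision_score,
                             matthews_corrcoef, confusion_matrix)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # ground truth (1 = GBC)
y_score = np.array([0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "accuracy":    accuracy_score(y_true, y_pred),
    "sensitivity": recall_score(y_true, y_pred),   # TP / (TP + FN)
    "specificity": tn / (tn + fp),                 # TN / (TN + FP)
    "precision":   precision_score(y_true, y_pred),
    "f1":          f1_score(y_true, y_pred),
    "roc_auc":     roc_auc_score(y_true, y_score),
    "pr_auc":      average_precision_score(y_true, y_score),
    "mcc":         matthews_corrcoef(y_true, y_pred),
}
print(metrics)
```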

Late gadolinium enhancement imaging and sudden cardiac death.

Prasad SK, Akbari T, Bishop MJ, Halliday BP, Leyva-Leon F, Marchlinski F

PubMed · Jul 16 2025
The prediction and management of sudden cardiac death risk continue to pose significant challenges in cardiovascular care despite advances in therapies over the last two decades. Late gadolinium enhancement (LGE) on cardiac magnetic resonance, a marker of myocardial fibrosis, is a powerful non-invasive tool with the potential to aid the prediction of sudden death and direct the use of preventative therapies in several cardiovascular conditions. In this state-of-the-art review, we provide a critical appraisal of the current evidence base underpinning the utility of LGE in both ischaemic and non-ischaemic cardiomyopathies, together with a focus on future perspectives and the role for machine learning and digital twin technologies.

Illuminating radiogenomic signatures in pediatric-type diffuse gliomas: insights into molecular, clinical, and imaging correlations. Part II: low-grade group.

Kurokawa R, Hagiwara A, Ito R, Ueda D, Saida T, Sakata A, Nishioka K, Sugawara S, Takumi K, Watabe T, Ide S, Kawamura M, Sofue K, Hirata K, Honda M, Yanagawa M, Oda S, Iima M, Naganawa S

PubMed · Jul 16 2025
The fifth edition of the World Health Organization classification of central nervous system tumors represents a significant advancement in the molecular-genetic classification of pediatric-type diffuse gliomas. This article comprehensively summarizes the clinical, molecular, and radiological imaging features of pediatric-type low-grade gliomas (pLGGs), including MYB- or MYBL1-altered tumors, polymorphous low-grade neuroepithelial tumor of the young (PLNTY), and diffuse low-grade glioma, MAPK pathway-altered. Most pLGGs harbor alterations in the RAS/MAPK pathway, functioning as a "one pathway disease". Specific magnetic resonance imaging features, such as the T2-fluid-attenuated inversion recovery (FLAIR) mismatch sign in MYB- or MYBL1-altered tumors and the transmantle-like sign in PLNTYs, may serve as non-invasive biomarkers for underlying molecular alterations. Recent advances in radiogenomics have enabled the differentiation of BRAF fusion from BRAF V600E-mutant tumors based on magnetic resonance imaging characteristics. Machine learning approaches have further enhanced our ability to predict molecular subtypes from imaging features. These radiology-molecular correlations offer potential clinical utility in treatment planning and prognostication, especially as targeted therapies against the MAPK pathway emerge. Continued research is needed to refine our understanding of genotype-phenotype correlations in less common molecular alterations and to validate these imaging biomarkers in larger cohorts.

LRMR: LLM-Driven Relational Multi-node Ranking for Lymph Node Metastasis Assessment in Rectal Cancer

Yaoxian Dong, Yifan Gao, Haoyue Li, Yanfen Cui, Xin Gao

arXiv preprint · Jul 15 2025
Accurate preoperative assessment of lymph node (LN) metastasis in rectal cancer guides treatment decisions, yet conventional MRI evaluation based on morphological criteria shows limited diagnostic performance. While some artificial intelligence models have been developed, they often operate as black boxes, lacking the interpretability needed for clinical trust. Moreover, these models typically evaluate nodes in isolation, overlooking the patient-level context. To address these limitations, we introduce LRMR, an LLM-Driven Relational Multi-node Ranking framework. This approach reframes the diagnostic task from a direct classification problem into a structured reasoning and ranking process. The LRMR framework operates in two stages. First, a multimodal large language model (LLM) analyzes a composite montage image of all LNs from a patient, generating a structured report that details ten distinct radiological features. Second, a text-based LLM performs pairwise comparisons of these reports between different patients, establishing a relative risk ranking based on the severity and number of adverse features. We evaluated our method on a retrospective cohort of 117 rectal cancer patients. LRMR achieved an area under the curve (AUC) of 0.7917 and an F1-score of 0.7200, outperforming a range of deep learning baselines, including ResNet50 (AUC 0.7708). Ablation studies confirmed the value of our two main contributions: removing the relational ranking stage or the structured prompting stage led to a significant performance drop, with AUCs falling to 0.6875 and 0.6458, respectively. Our work demonstrates that decoupling visual perception from cognitive reasoning through a two-stage LLM framework offers a powerful, interpretable, and effective new paradigm for assessing lymph node metastasis in rectal cancer.
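The sketch below illustrates the kind of pairwise-comparison aggregation the second LRMR stage performs: an LLM judgment compares two patients' structured reports, and win counts yield a relative risk ranking. The `compare_reports` heuristic and the aggregation rule are assumptions for illustration only, not the authors' prompts or code.

```python
# Sketch of the second LRMR stage: pairwise comparisons aggregated into a risk ranking.
# `compare_reports` is a hypothetical stand-in for the text-based LLM call.
from itertools import combinations

def compare_reports(report_a: str, report_b: str) -> int:
    """Return 1 if patient A appears higher risk than patient B, else 0 (LLM judgment)."""
    # Placeholder heuristic: more adverse-feature mentions -> higher risk.
    return int(report_a.count("irregular") + report_a.count("heterogeneous")
               >= report_b.count("irregular") + report_b.count("heterogeneous"))

def rank_patients(reports: dict[str, str]) -> list[str]:
    """Rank patient IDs from highest to lowest metastasis risk by pairwise win counts."""
    wins = {pid: 0 for pid in reports}
    for a, b in combinations(reports, 2):
        if compare_reports(reports[a], reports[b]):
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(wins, key=wins.get, reverse=True)

reports = {
    "p01": "Node with irregular border and heterogeneous signal.",
    "p02": "Small node, smooth border, homogeneous signal.",
    "p03": "Enlarged node, irregular border.",
}
print(rank_patients(reports))  # e.g. ['p01', 'p03', 'p02']
```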

3D Wavelet Latent Diffusion Model for Whole-Body MR-to-CT Modality Translation

Jiaxu Zheng, Meiman He, Xuhui Tang, Xiong Wang, Tuoyu Cao, Tianyi Zeng, Lichi Zhang, Chenyu You

arXiv preprint · Jul 14 2025
Magnetic Resonance (MR) imaging plays an essential role in contemporary clinical diagnostics. It is increasingly integrated into advanced therapeutic workflows, such as hybrid Positron Emission Tomography/Magnetic Resonance (PET/MR) imaging and MR-only radiation therapy. These integrated approaches are critically dependent on accurate estimation of radiation attenuation, which is typically facilitated by synthesizing Computed Tomography (CT) images from MR scans to generate attenuation maps. However, existing MR-to-CT synthesis methods for whole-body imaging often suffer from poor spatial alignment between the generated CT and input MR images, and insufficient image quality for reliable use in downstream clinical tasks. In this paper, we present a novel 3D Wavelet Latent Diffusion Model (3D-WLDM) that addresses these limitations by performing modality translation in a learned latent space. By incorporating a Wavelet Residual Module into the encoder-decoder architecture, we enhance the capture and reconstruction of fine-scale features across image and latent spaces. To preserve anatomical integrity during the diffusion process, we disentangle structural and modality-specific characteristics and anchor the structural component to prevent warping. We also introduce a Dual Skip Connection Attention mechanism within the diffusion model, enabling the generation of high-resolution CT images with improved representation of bony structures and soft-tissue contrast.
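For orientation, a one-level 3D Haar wavelet decomposition, the basic operation that wavelet-based latent models such as this one build on, can be sketched as follows; the paper's Wavelet Residual Module and latent diffusion components are not reproduced here.

```python
# Sketch: one level of a 3D Haar wavelet decomposition in pure NumPy.
# Each axis is split into a low-pass (average) and high-pass (difference) half,
# yielding eight subbands at half resolution.
import numpy as np

def haar_1d(x: np.ndarray, axis: int):
    """Haar analysis along one axis: returns (low-pass, high-pass) halves."""
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_3d(volume: np.ndarray) -> dict:
    """One-level 3D Haar transform: returns 8 subbands (LLL ... HHH)."""
    bands = {"": volume}
    for axis in range(3):
        new_bands = {}
        for name, data in bands.items():
            lo, hi = haar_1d(data, axis)
            new_bands[name + "L"] = lo
            new_bands[name + "H"] = hi
        bands = new_bands
    return bands

vol = np.random.rand(32, 32, 32).astype(np.float32)   # toy MR volume
subbands = haar_3d(vol)
print({k: v.shape for k, v in subbands.items()})       # each subband is (16, 16, 16)
```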

Generative AI enables medical image segmentation in ultra low-data regimes.

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

PubMed · Jul 14 2025
Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes owing to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10-20% (absolute) in both same-domain and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
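A deliberately toy, runnable schematic of the multi-level idea described above: a generator proposes synthetic training pairs, a segmenter trains on them, and the segmenter's validation score feeds back into the generator's update. Every component here is a trivial stand-in; the paper's actual generative model and optimization scheme differ.

```python
# Toy schematic: generation guided by downstream segmentation performance.
# All components are deliberately trivial stand-ins, not the paper's method.
import random

class ToyGenerator:
    def __init__(self):
        self.noise = 0.5                      # a single tunable generation parameter

    def sample(self, n):
        # Emit n (image, mask) pairs; scalars stand in for image/mask arrays here.
        return [(random.gauss(1.0, self.noise), 1) for _ in range(n)]

    def update(self, feedback, lr=0.05):
        # Higher validation loss -> tighten the generation distribution.
        self.noise = max(0.05, self.noise - lr * feedback)

class ToySegmenter:
    def __init__(self):
        self.threshold = 0.0

    def fit_step(self, pairs, lr=0.1):
        # Nudge the decision threshold using the synthetic positives.
        for x, y in pairs:
            self.threshold += lr * ((x / 2) - self.threshold)

    def evaluate(self, pairs):
        # Fraction of validation pairs misclassified by the current threshold.
        return sum((x > self.threshold) != bool(y) for x, y in pairs) / len(pairs)

gen, seg = ToyGenerator(), ToySegmenter()
val = [(1.2, 1), (0.9, 1), (-0.1, 0), (0.2, 0)]
for step in range(50):
    seg.fit_step(gen.sample(8))               # train segmenter on synthetic pairs
    gen.update(seg.evaluate(val))             # validation performance guides generation
print(round(seg.threshold, 3), round(gen.noise, 3))
```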

A Clinically-Informed Framework for Evaluating Vision-Language Models in Radiology Report Generation: Taxonomy of Errors and Risk-Aware Metric

Guan, H., Hou, P. C., Hong, P., Wang, L., Zhang, W., Du, X., Zhou, Z., Zhou, L.

medRxiv preprint · Jul 14 2025
Recent advances in vision-language models (VLMs) have enabled automatic radiology report generation, yet current evaluation methods remain limited to general-purpose NLP metrics or coarse classification-based clinical scores. In this study, we propose a clinically informed evaluation framework for VLM-generated radiology reports that goes beyond traditional performance measures. We define a taxonomy of 12 radiology-specific error types, each annotated with clinical risk levels (low, medium, high) in collaboration with physicians. Using this framework, we conduct a comprehensive error analysis of three representative VLMs, i.e., DeepSeek VL2, CXR-LLaVA, and CheXagent, on 685 gold-standard, expert-annotated MIMIC-CXR cases. We further introduce a risk-aware evaluation metric, the Clinical Risk-weighted Error Score for Text-generation (CREST), to quantify safety impact. Our findings reveal critical model vulnerabilities, common error patterns, and condition-specific risk profiles, offering actionable insights for model development and deployment. This work establishes a safety-centric foundation for evaluating and improving medical report generation models. The source code of our evaluation framework, including CREST computation and error taxonomy analysis, is available at https://github.com/guanharry/VLM-CREST.
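The general shape of a risk-weighted error score can be sketched as follows; the weights and error types shown are illustrative assumptions, not the published CREST definition (see the linked repository for the actual metric).

```python
# Sketch of a risk-weighted error score in the spirit of CREST: each detected error
# is weighted by the clinical risk of its type. Weights and error types below are
# illustrative assumptions (actual metric: https://github.com/guanharry/VLM-CREST).

RISK_WEIGHTS = {"low": 1.0, "medium": 2.0, "high": 4.0}   # assumed weights

# Hypothetical excerpt of an error taxonomy: error type -> clinical risk level.
ERROR_RISK = {
    "false_negative_finding": "high",
    "laterality_error": "high",
    "severity_understatement": "medium",
    "omitted_comparison": "low",
}

def risk_weighted_error_score(errors_per_report: list[list[str]]) -> float:
    """Average risk-weighted error burden per generated report."""
    totals = [sum(RISK_WEIGHTS[ERROR_RISK[e]] for e in errors)
              for errors in errors_per_report]
    return sum(totals) / len(totals)

reports = [
    ["laterality_error"],                                   # one high-risk error
    [],                                                     # clean report
    ["omitted_comparison", "severity_understatement"],      # low + medium
]
print(risk_weighted_error_score(reports))                   # (4 + 0 + 3) / 3 ≈ 2.33
```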

The Potential of ChatGPT as an Aiding Tool for the Neuroradiologist

Nikola, S., Paz, D.

medRxiv preprint · Jul 14 2025
Purpose: This study aims to explore whether ChatGPT can serve as an assistive tool for neuroradiologists in establishing a reasonable differential diagnosis in central nervous system tumors based on MRI image characteristics. Methods: This retrospective study included 50 patients aged 18-90 who underwent imaging and surgery at the Western Galilee Medical Center. ChatGPT was provided with demographic and radiological information for the patients to generate differential diagnoses. We compared ChatGPT's performance to that of an experienced neuroradiologist, using pathological reports as the gold standard. Quantitative data were described using means and standard deviations, medians, and ranges. Qualitative data were described using frequencies and percentages. The level of agreement between examiners (neuroradiologist versus ChatGPT) was assessed using the Fleiss kappa coefficient. A significance level below 5% was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics, version 27. Results: While ChatGPT demonstrated good performance, particularly in identifying common tumors such as glioblastoma and meningioma, its overall accuracy (48%) was lower than that of the neuroradiologist (70%). The AI tool showed moderate agreement with the neuroradiologist (kappa = 0.445) and with pathology results (kappa = 0.419). ChatGPT's performance varied across tumor types, performing better with common tumors but struggling with rarer ones. Conclusion: This study suggests that ChatGPT has the potential to serve as an assistive tool in neuroradiology for establishing a reasonable differential diagnosis in central nervous system tumors based on MRI image characteristics. However, its limitations and potential risks must be considered, and it should therefore be used with caution.
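For readers who want to reproduce this kind of agreement analysis on their own data, inter-rater kappa can be computed with statsmodels as sketched below; the diagnosis codes are made-up placeholders, not the study's cases.

```python
# Sketch: inter-rater agreement via Fleiss kappa with statsmodels.
# Rows = cases, columns = raters (e.g. neuroradiologist, ChatGPT);
# values = integer diagnosis codes. The data below are made-up placeholders.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [0, 0],   # both raters: glioblastoma
    [1, 1],   # both raters: meningioma
    [0, 2],   # disagreement
    [2, 2],
    [1, 0],
])
table, _ = aggregate_raters(ratings)   # subjects x categories count table
print(round(fleiss_kappa(table), 3))
```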

A generative model uses healthy and diseased image pairs for pixel-level chest X-ray pathology localization.

Dong K, Cheng Y, He K, Suo J

PubMed · Jul 14 2025
Medical artificial intelligence (AI) offers potential for automatic pathological interpretation, but a practicable AI model demands both pixel-level accuracy and high explainability for diagnosis. The construction of such models relies on substantial training data with fine-grained labelling, which is impractical in real applications. To circumvent this barrier, we propose a prompt-driven constrained generative model to produce anatomically aligned healthy and diseased image pairs and learn a pathology localization model in a supervised manner. This paradigm provides high-fidelity labelled data and addresses the lack of chest X-ray images with labelling at fine scales. Benefitting from the emerging text-driven generative model and the incorporated constraint, our model presents promising localization accuracy of subtle pathologies, high explainability for clinical decisions, and good transferability to many unseen pathological categories such as new prompts and mixed pathologies. These advantageous features establish our model as a promising solution to assist chest X-ray analysis. In addition, the proposed approach is also inspiring for other tasks lacking massive training data and time-consuming manual labelling.
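The paired-supervision idea can be illustrated with a toy example: an anatomically aligned healthy/diseased pair yields a pixel-level pathology mask by thresholded differencing, which can then supervise a localization model. The paper's prompt-driven constrained generator and localization network are not reproduced here.

```python
# Sketch: pixel-level supervision from an anatomically aligned healthy/diseased pair.
# The thresholded difference image serves as a pathology mask for training a
# localization model. This illustrates the paired-supervision idea only.
import numpy as np

def pseudo_mask(healthy: np.ndarray, diseased: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Binary pathology mask from an aligned healthy/diseased image pair."""
    diff = np.abs(diseased.astype(np.float32) - healthy.astype(np.float32))
    return (diff > thresh).astype(np.uint8)

# Toy aligned pair: identical backgrounds, with a synthetic opacity added to one.
healthy = np.zeros((64, 64), dtype=np.float32)
diseased = healthy.copy()
diseased[20:30, 35:45] += 0.8                      # simulated focal pathology

mask = pseudo_mask(healthy, diseased)
print(mask.sum(), mask.shape)                      # 100 positive pixels, (64, 64)
# (image, mask) pairs like this would then supervise a pixel-level localization model.
```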