
Recent Advances in Medical Image Classification

Loan Dao, Ngoc Quoc Ly

arXiv preprint · Jun 4 2025
Medical image classification is crucial for diagnosis and treatment and benefits significantly from advances in artificial intelligence. This paper reviews recent progress in the field, focusing on three levels of solutions: basic, specific, and applied. It highlights advances in traditional methods using deep learning models such as Convolutional Neural Networks and Vision Transformers, as well as state-of-the-art approaches with Vision Language Models. These models tackle the issue of limited labeled data and both enhance and explain predictive results through Explainable Artificial Intelligence.

Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.

Fink A, Rau A, Reisert M, Bamberg F, Russe MF

PubMed paper · Jun 4 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Large language models (LLMs) hold substantial promise in addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity in sources for LLM responses. Retrieval-augmented Generation (RAG) based LLMs offer a promising approach to streamline radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice. ©RSNA, 2025.

Multimodal data integration for biologically-relevant artificial intelligence to guide adjuvant chemotherapy in stage II colorectal cancer.

Xie C, Ning Z, Guo T, Yao L, Chen X, Huang W, Li S, Chen J, Zhao K, Bian X, Li Z, Huang Y, Liang C, Zhang Q, Liu Z

PubMed paper · Jun 4 2025
Adjuvant chemotherapy provides a limited survival benefit (<5%) for patients with stage II colorectal cancer (CRC) and is suggested for high-risk patients. Given the heterogeneity of stage II CRC, we aimed to develop a clinically explainable artificial intelligence (AI)-powered analyser to identify radiological phenotypes that would benefit from chemotherapy. Multimodal data from patients with CRC across six cohorts were collected, including 405 patients from the Guangdong Provincial People's Hospital for model development and 153 patients from the Yunnan Provincial Cancer Centre for validation. RNA sequencing data were used to identify the differentially expressed genes in the two radiological clusters. Histopathological patterns were evaluated to bridge the gap between the imaging and genetic information. Finally, we investigated the discovered morphological patterns in mouse models to observe their imaging features. The survival benefit of chemotherapy varied significantly among the AI-powered radiological clusters [interaction hazard ratio (iHR) = 5.35 (95% CI: 1.98, 14.41), adjusted P-interaction = 0.012]. Distinct biological pathways related to immune and stromal cell abundance were observed between the clusters. The observation-only (OO)-preferable cluster exhibited more necrosis, haemorrhage, and tortuous vessels, whereas the adjuvant chemotherapy (AC)-preferable cluster exhibited vessels with greater pericyte coverage, allowing for a more enriched infiltration of B, CD4+ T, and CD8+ T cells into the core tumoural areas. Further experiments confirmed that changes in vessel morphology led to alterations in predictive imaging features. The developed explainable AI-powered analyser effectively identified patients with stage II CRC with improved overall survival after receiving adjuvant chemotherapy, thereby contributing to the advancement of precision oncology.
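
The abstract does not detail how the two radiological clusters are derived. As a generic illustration of unsupervised clustering of imaging features, a plain k-means sketch over hypothetical 2-D radiomic features (the feature names and values are invented) might look like:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Plain k-means: assign each point to its nearest centroid, then re-average.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster (keep old one if empty).
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical 2-D radiomic features (e.g. texture score, vessel-tortuosity score).
pts = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85)]
centroids, clusters = kmeans(pts, k=2)
```

The paper's analyser is far richer (deep features, survival-aware training), but the core idea of partitioning patients into imaging phenotypes is the same.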
This work was funded by the National Science Fund of China (81925023, 82302299, and U22A2034), Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (2022B1212010011), and High-level Hospital Construction Project (DFJHBF202105 and YKY-KF202204).

A Review of Intracranial Aneurysm Imaging Modalities, from CT to State-of-the-Art MR.

Allaw S, Khabaz K, Given TC, Montas D, Alcazar-Felix RJ, Srinath A, Kass-Hout T, Carroll TJ, Hurley MC, Polster SP

PubMed paper · Jun 3 2025
Traditional guidance for intracranial aneurysm (IA) management is dichotomized by rupture status. Fundamental to the management of ruptured aneurysm is the detection and treatment of SAH, along with securing the aneurysm by the safest technique. On the other hand, unruptured aneurysms first require a careful assessment of their natural history versus treatment risk, including an imaging assessment of aneurysm size, location, and morphology, along with additional evidence-based risk factors such as smoking, hypertension, and family history. Unfortunately, a large proportion of ruptured aneurysms are in the lower risk size category (<7 mm), putting a premium on discovering a more refined noninvasive biomarker to detect and stratify aneurysm instability before rupture. In this review of aneurysm work-up, we cover the gamut of established imaging modalities (eg, CT, CTA, DSA, FLAIR, 3D TOF-MRA, contrast-enhanced-MRA) as well as more novel MR techniques (MR vessel wall imaging, dynamic contrast-enhanced MRI, computational fluid dynamics). Additionally, we evaluate the current landscape of artificial intelligence software and its integration into diagnostic and risk-stratification pipelines for IAs. These advanced MR techniques, increasingly complemented with artificial intelligence models, offer a paradigm shift by evaluating factors beyond size and morphology, including vessel wall inflammation, permeability, and hemodynamics. Additionally, we provide our institution's scan parameters for many of these modalities as a reference. Ultimately, this review provides an organized, up-to-date summary of the array of available modalities/sequences for IA imaging to help build protocols focused on IA characterization.

Deep learning reveals pathology-confirmed neuroimaging signatures in Alzheimer's, vascular and Lewy body dementias.

Wang D, Honnorat N, Toledo JB, Li K, Charisis S, Rashid T, Benet Nirmala A, Brandigampala SR, Mojtabai M, Seshadri S, Habes M

PubMed paper · Jun 3 2025
Concurrent neurodegenerative and vascular pathologies pose a diagnostic challenge in the clinical setting, with histopathology remaining the definitive modality for dementia-type diagnosis. To address this clinical challenge, we introduce a neuropathology-based, data-driven, multi-label deep-learning framework to identify and quantify in vivo biomarkers for Alzheimer's disease (AD), vascular dementia (VD) and Lewy body dementia (LBD) using antemortem T1-weighted MRI scans of 423 demented and 361 control participants from National Alzheimer's Coordinating Center and Alzheimer's Disease Neuroimaging Initiative datasets. Based on the best-performing deep-learning model, explainable heat maps were extracted to visualize disease patterns, and the novel Deep Signature of Pathology Atrophy REcognition (DeepSPARE) indices were developed, where a higher DeepSPARE score indicates more brain alterations associated with that specific pathology. A substantial discrepancy in clinical and neuropathological diagnosis was observed in the demented patients: 71% had more than one pathology, but 67% were diagnosed clinically as AD only. Based on these neuropathological diagnoses and leveraging cross-validation principles, the deep-learning model achieved the best performance, with a balanced accuracy of 0.844, 0.839 and 0.623 for AD, VD and LBD, respectively, and was used to generate the explainable deep-learning heat maps and DeepSPARE indices. The explainable deep-learning heat maps revealed distinct neuroimaging brain alteration patterns for each pathology: (i) the AD heat map highlighted bilateral hippocampal regions; (ii) the VD heat map emphasized white matter regions; and (iii) the LBD heat map exposed occipital alterations. The DeepSPARE indices were validated by examining their associations with cognitive testing and neuropathological and neuroimaging measures using linear mixed-effects models. 
The DeepSPARE-AD index was associated with the Mini-Mental State Examination, the Trail Making Test B, memory, hippocampal volume, Braak stages, Consortium to Establish a Registry for Alzheimer's Disease (CERAD) scores and Thal phases [false-discovery rate (FDR)-adjusted P < 0.05]. The DeepSPARE-VD index was associated with white matter hyperintensity volume and cerebral amyloid angiopathy (FDR-adjusted P < 0.001), and the DeepSPARE-LBD index was associated with Lewy body stages (FDR-adjusted P < 0.05). The findings were replicated in an out-of-sample Alzheimer's Disease Neuroimaging Initiative dataset by testing associations with cognitive, imaging, plasma and CSF measures. CSF and plasma tau phosphorylated at threonine-181 (pTau181) were significantly associated with DeepSPARE-AD in the amyloid-β-positive AD and mild cognitive impairment (AD/MCI Aβ+) group (FDR-adjusted P < 0.001), and CSF α-synuclein was associated solely with DeepSPARE-LBD (FDR-adjusted P = 0.036). Overall, these findings demonstrate the advantages of our innovative deep-learning framework in detecting antemortem neuroimaging signatures linked to different pathologies. The new deep-learning-derived DeepSPARE indices are precise, pathology-sensitive and single-valued non-invasive neuroimaging metrics, bridging traditional, widely available in vivo T1 imaging with histopathology.
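
Balanced accuracy, the metric reported above, is the mean of per-class recall, which keeps a dominant class (here, the many AD-only diagnoses) from inflating the score. A minimal sketch with hypothetical labels:

```python
def balanced_accuracy(y_true, y_pred):
    # Mean recall over classes: each class contributes equally regardless of size.
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Hypothetical pathology-present (1) vs absent (0) labels for one disease head.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
print(balanced_accuracy(y_true, y_pred))  # mean of recall 0.75 and 0.5 = 0.625
```

In a multi-label setting such as DeepSPARE's, this score would be computed separately per pathology head (AD, VD, LBD), matching the three figures quoted above.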

Super-resolution sodium MRI of human gliomas at 3T using physics-based generative artificial intelligence.

Raymond C, Yao J, Kolkovsky ALL, Feiweier T, Clifford B, Meyer H, Zhong X, Han F, Cho NS, Sanvito F, Oshima S, Salamon N, Liau LM, Patel KS, Everson RG, Cloughesy TF, Ellingson BM

PubMed paper · Jun 3 2025
Sodium neuroimaging provides unique insights into the cellular and metabolic properties of brain tumors. However, the low signal-to-noise ratio (SNR) and resolution of sodium MRI at 3T discourage routine clinical use. We evaluated the recently developed "Anatomically constrained GAN using physics-based synthetic MRI artifacts" (ATHENA) framework for high-resolution sodium neuroimaging of brain tumors at 3T. We hypothesized the model would improve image quality while preserving the inherent sodium information. 4,573 proton MRI scans from 1,390 suspected brain tumor patients were used for training. Sodium and proton MRI datasets from twenty glioma patients were collected for validation. Twenty-four image-guided biopsies from seven patients were available for evaluation of sodium-proton exchanger (NHE1) expression on immunohistochemistry. High-resolution synthetic sodium images were generated using the ATHENA model and then compared to native sodium MRI and to NHE1 protein expression from image-guided biopsy samples. ATHENA produced synthetic sodium MRI with significantly improved SNR (native SNR 18.20 ± 7.04; synthetic SNR 23.83 ± 9.33, P = 0.0079). The synthetic sodium values were consistent with the native measurements (P = 0.2058), with a strong linear correlation within contrast-enhancing areas of the tumor (R² = 0.7565, P = 0.0005), T2-hyperintense areas (R² = 0.7325, P < 0.0001), and necrotic areas (R² = 0.7678, P < 0.0001). Synthetic sodium MRI correlated better with relative NHE1 expression from image-guided biopsies (ρ = 0.3269, P < 0.0001) than native sodium MRI did (ρ = 0.1732, P = 0.0276), with higher sodium signal in samples expressing elevated NHE1 (P < 0.0001). ATHENA generates high-resolution synthetic sodium MRI at 3T, enabling clinically attainable multinuclear imaging for brain tumors that retains the inherent information of native sodium MRI.
The resulting synthetic sodium signal correlates significantly with tissue NHE1 expression, potentially supporting its utility as a non-invasive marker of underlying sodium homeostasis in brain tumors.
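
The SNR figures quoted above rest on a standard ROI-based estimate: mean signal intensity divided by the standard deviation of background noise. A minimal sketch with hypothetical voxel values (not the study's data):

```python
import statistics

def snr(signal_roi, noise_roi):
    # Common ROI-based SNR estimate: mean signal over noise standard deviation.
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

# Hypothetical voxel intensities from a tumour ROI and a background (air) ROI.
native = snr([110, 120, 115, 125], [4, 9, 2, 7])
```

The same estimator applied to native and synthetic images gives the paired comparison the paper reports; practical variants correct the noise estimate for the Rician distribution of magnitude MRI.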

Patient-specific prediction of glioblastoma growth via reduced order modeling and neural networks.

Cerrone D, Riccobelli D, Gazzoni S, Vitullo P, Ballarin F, Falco J, Acerbi F, Manzoni A, Zunino P, Ciarletta P

PubMed paper · Jun 3 2025
Glioblastoma is among the most aggressive brain tumors in adults, characterized by patient-specific invasion patterns driven by the underlying brain microstructure. In this work, we present a proof-of-concept for a mathematical model of GBL growth, enabling real-time prediction and patient-specific parameter identification from longitudinal neuroimaging data. The framework exploits a diffuse-interface mathematical model to describe the tumor evolution and a reduced-order modeling strategy, relying on proper orthogonal decomposition, trained on synthetic data derived from patient-specific brain anatomies reconstructed from magnetic resonance imaging and diffusion tensor imaging. A neural network surrogate learns the inverse mapping from tumor evolution to model parameters, achieving significant computational speed-up while preserving high accuracy. To ensure robustness and interpretability, we perform both global and local sensitivity analyses, identifying the key biophysical parameters governing tumor dynamics and assessing the stability of the inverse problem solution. These results establish a methodological foundation for future clinical deployment of patient-specific digital twins in neuro-oncology.
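
Proper orthogonal decomposition, the reduction strategy named above, extracts dominant spatial modes from solution snapshots; the leading mode is the top left singular vector of the snapshot matrix. A toy power-iteration sketch on hypothetical tumour-density snapshots (an illustration of the idea, not the authors' solver):

```python
def dominant_pod_mode(snapshots, iters=100):
    # Power iteration on the snapshot covariance S S^T yields the leading POD mode.
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        # w = S (S^T v), applied without forming the covariance matrix explicitly.
        coeffs = [sum(s[i] * v[i] for i in range(n)) for s in snapshots]
        w = [sum(c * s[i] for c, s in zip(coeffs, snapshots)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical tumour-density fields at three time points (flattened to vectors).
snaps = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0]]
mode = dominant_pod_mode(snaps)
```

Projecting full simulations onto a handful of such modes is what gives reduced-order models their speed; the neural network in the paper then learns the map from the reduced trajectory back to model parameters.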

Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters.

Bosbach WA, Schoeni L, Beisbart C, Senge JF, Mitrakovic M, Anderson SE, Achangwa NR, Divjak E, Ivanac G, Grieser T, Weber MA, Maurer MH, Sanal HT, Daneshvar K

PubMed paper · Jun 3 2025
Novel artificial intelligence tools have the potential to significantly enhance productivity in medicine while maintaining or even improving treatment quality. In this study, we aimed to evaluate the current capability of ChatGPT-4.0 to accurately interpret multimodal musculoskeletal tumor cases. We created 25 cases, each containing images from X-ray, computed tomography, magnetic resonance imaging, or scintigraphy. ChatGPT-4.0 was tasked with classifying each case using a six-option, two-choice question, where both a primary and a secondary diagnosis were allowed. For performance evaluation, human raters also assessed the same cases. When only the primary diagnosis was taken into account, the accuracy of human raters was greater than that of ChatGPT-4.0 by a factor of nearly 2 (87% vs. 44%). However, in a setting that also considered secondary diagnoses, the performance gap shrank substantially (accuracy: 94% vs. 71%). Power analysis relying on Cohen's w confirmed the adequacy of the sample size (n = 25). The tested artificial intelligence tool demonstrated lower performance than human raters. Considering factors such as speed, constant availability, and potential future improvements, it appears plausible that artificial intelligence tools could serve as valuable assistance systems for doctors in future clinical settings. · ChatGPT-4.0 classifies musculoskeletal cases using multimodal imaging inputs. · Human raters outperform AI in primary diagnosis accuracy by a factor of nearly two. · Including secondary diagnoses improves AI performance and narrows the gap. · AI demonstrates potential as an assistive tool in future radiological workflows. · Power analysis confirms robustness of study findings with the current sample size. · Bosbach WA, Schoeni L, Beisbart C et al. Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters. Rofo 2025; DOI 10.1055/a-2594-7085.
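
Cohen's w, the effect size behind the power analysis above, is computed from observed versus expected proportions as w = sqrt(Σ (p_obs − p_exp)² / p_exp). A sketch with hypothetical counts against a chance expectation (the study's actual contingency table is not given in the abstract):

```python
def cohens_w(observed, expected):
    # Effect size for goodness-of-fit: w = sqrt(sum((p_obs - p_exp)^2 / p_exp)).
    total = sum(observed)
    p_obs = [o / total for o in observed]
    return sum((po - pe) ** 2 / pe for po, pe in zip(p_obs, expected)) ** 0.5

# Hypothetical: 22 of 25 cases classified correctly vs a 50/50 chance expectation.
w = cohens_w([22, 3], [0.5, 0.5])
```

By Cohen's conventions, w ≈ 0.1, 0.3, and 0.5 mark small, medium, and large effects; large effects are what allow a sample of 25 to reach adequate power.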

Synthetic Ultrasound Image Generation for Breast Cancer Diagnosis Using cVAE-WGAN Models: An Approach Based on Generative Artificial Intelligence

Mondillo, G., Masino, M., Colosimo, S., Perrotta, A., Frattolillo, V., Abbate, F. G.

medRxiv preprint · Jun 2 2025
The scarcity and imbalance of medical image datasets hinder the development of robust computer-aided diagnosis (CAD) systems for breast cancer. This study explores the application of advanced generative models, based on generative artificial intelligence (GenAI), for the synthesis of digital breast ultrasound images. Using a hybrid Conditional Variational Autoencoder-Wasserstein Generative Adversarial Network (CVAE-WGAN) architecture, we developed a system to generate high-quality synthetic images conditioned on the class (malignant vs. normal/benign). These synthetic images, generated from the low-resolution BreastMNIST dataset and filtered for quality, were systematically integrated with real training data at different mixing ratios (W). The performance of a CNN classifier trained on these mixed datasets was evaluated against a baseline model trained only on real data balanced with SMOTE. The optimal integration (mixing weight W=0.25) produced a significant performance increase on the real test set: +8.17% in macro-average F1-score and +4.58% in accuracy compared to using real data alone. Analysis confirmed the originality of the generated samples. This approach offers a promising solution for overcoming data limitations in image-based breast cancer diagnostics, potentially improving the capabilities of CAD systems.
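
The mixing-ratio experiment can be sketched as follows. The abstract does not define the mixing weight W precisely, so this sketch assumes W is the fraction of the final training set drawn from synthetic images, with all real data kept; the item names are hypothetical:

```python
import random

def mix_datasets(real, synthetic, w, seed=0):
    # Assumed definition: w = fraction of the final training set that is synthetic.
    # Keep every real sample and top up with randomly drawn synthetic samples.
    rng = random.Random(seed)
    n_syn = round(len(real) * w / (1 - w))
    return real + rng.sample(synthetic, min(n_syn, len(synthetic)))

real = [f"real_{i}" for i in range(75)]
synthetic = [f"syn_{i}" for i in range(100)]
train = mix_datasets(real, synthetic, w=0.25)  # 75 real + 25 synthetic
```

Sweeping w and re-training the classifier at each setting is how the optimal W = 0.25 reported above would be found.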

Beyond Pixel Agreement: Large Language Models as Clinical Guardrails for Reliable Medical Image Segmentation

Jiaxi Sheng, Leyi Yu, Haoyue Li, Yifan Gao, Xin Gao

arXiv preprint · Jun 2 2025
Evaluating AI-generated medical image segmentations for clinical acceptability poses a significant challenge, as traditional pixel-agreement metrics often fail to capture true diagnostic utility. This paper introduces Hierarchical Clinical Reasoner (HCR), a novel framework that leverages Large Language Models (LLMs) as clinical guardrails for reliable, zero-shot quality assessment. HCR employs a structured, multistage prompting strategy that guides LLMs through a detailed reasoning process, encompassing knowledge recall, visual feature analysis, anatomical inference, and clinical synthesis, to evaluate segmentations. We evaluated HCR on a diverse dataset across six medical imaging tasks. Our results show that HCR, utilizing models like Gemini 2.5 Flash, achieved a classification accuracy of 78.12%, performing comparably to, and in instances exceeding, dedicated vision models such as ResNet50 (72.92% accuracy) that were specifically trained for this task. The HCR framework not only provides accurate quality classifications but also generates interpretable, step-by-step reasoning for its assessments. This work demonstrates the potential of LLMs, when appropriately guided, to serve as sophisticated evaluators, offering a pathway towards more trustworthy and clinically-aligned quality control for AI in medical imaging.
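
HCR's multistage prompting is described only at a high level. A minimal sketch of chaining the four named stages into one structured prompt, with hypothetical stage instructions standing in for the authors' actual templates, could look like:

```python
# The four stage names come from the abstract; the instructions are invented.
STAGES = [
    ("knowledge recall", "List the expected anatomy and appearance of the target structure."),
    ("visual feature analysis", "Describe the mask's shape, extent, and boundary smoothness."),
    ("anatomical inference", "Judge whether the mask is anatomically plausible."),
    ("clinical synthesis", "Conclude: is the segmentation clinically acceptable? Answer yes/no."),
]

def build_staged_prompt(task, stages=STAGES):
    # Chain the stages so the LLM's final verdict is conditioned on earlier steps.
    lines = [f"Task: assess a segmentation for: {task}"]
    for i, (name, instruction) in enumerate(stages, 1):
        lines.append(f"Step {i} ({name}): {instruction}")
    return "\n".join(lines)

prompt = build_staged_prompt("liver on abdominal CT")
```

Forcing the model to emit each intermediate step is what makes the final accept/reject decision auditable, which is the "guardrail" property the paper emphasizes.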