A software ecosystem for brain tractometry processing, analysis, and insight.

Kruper J, Richie-Halford A, Qiao J, Gilmore A, Chang K, Grotheer M, Roy E, Caffarra S, Gomez T, Chou S, Cieslak M, Koudoro S, Garyfallidis E, Satterthwaite TD, Yeatman JD, Rokem A

PubMed · Aug 14, 2025
Tractometry uses diffusion-weighted magnetic resonance imaging (dMRI) to assess physical properties of brain connections. Here, we present an integrative ecosystem of software that performs all steps of tractometry: post-processing of dMRI data, delineation of major white matter pathways, and modeling of the tissue properties within them. This ecosystem also provides a set of interoperable and extensible tools for visualization and interpretation of the results, extracting insights from these measurements. These include novel machine learning and statistical analysis methods adapted to the characteristic structure of tract-based data. We benchmark the performance of these statistical analysis methods on different datasets and analysis tasks, including hypothesis testing of group differences and predictive analysis of subject age. We also demonstrate that computational advances implemented in the software offer orders-of-magnitude acceleration. Taken together, these open-source software tools, freely available at https://tractometry.org, provide a transformative environment for the analysis of dMRI data.
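
The core operation these tools share, computing a "tract profile", can be illustrated compactly: resample each streamline in a bundle to a fixed number of nodes, sample a scalar map such as fractional anisotropy (FA) at each node, and average across streamlines. A minimal NumPy/SciPy sketch follows; it is illustrative only, and the function names are ours, not the ecosystem's API.

```python
# Minimal sketch of a tract profile: resample streamlines to N nodes,
# sample a scalar map (e.g., FA) at each node, average across streamlines.
# Illustrative only; the tractometry.org tools provide their own APIs.
import numpy as np
from scipy.ndimage import map_coordinates

def resample_streamline(points, n_nodes=100):
    """Resample an (M, 3) streamline to n_nodes equidistant points."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, arc[-1], n_nodes)
    return np.stack([np.interp(targets, arc, points[:, d]) for d in range(3)], axis=1)

def tract_profile(streamlines, scalar_map, affine_inv, n_nodes=100):
    """Mean scalar value at each of n_nodes positions along a bundle."""
    profiles = []
    for sl in streamlines:
        nodes = resample_streamline(sl, n_nodes)                      # world coords
        vox = affine_inv[:3, :3] @ nodes.T + affine_inv[:3, 3:4]      # to voxel coords
        profiles.append(map_coordinates(scalar_map, vox, order=1))    # trilinear sample
    return np.mean(profiles, axis=0)                                  # (n_nodes,) profile
```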

Medico 2025: Visual Question Answering for Gastrointestinal Imaging

Sushant Gautam, Vajira Thambawita, Michael Riegler, Pål Halvorsen, Steven Hicks

arXiv preprint · Aug 14, 2025
The Medico 2025 challenge addresses Visual Question Answering (VQA) for Gastrointestinal (GI) imaging, organized as part of the MediaEval task series. The challenge focuses on developing Explainable Artificial Intelligence (XAI) models that answer clinically relevant questions based on GI endoscopy images while providing interpretable justifications aligned with medical reasoning. It introduces two subtasks: (1) answering diverse types of visual questions using the Kvasir-VQA-x1 dataset, and (2) generating multimodal explanations to support clinical decision-making. The Kvasir-VQA-x1 dataset, created from 6,500 images and 159,549 complex question-answer (QA) pairs, serves as the benchmark for the challenge. By combining quantitative performance metrics and expert-reviewed explainability assessments, this task aims to advance trustworthy Artificial Intelligence (AI) in medical image analysis. Instructions, data access, and an updated guide for participation are available in the official competition repository: https://github.com/simula/MediaEval-Medico-2025
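
A baseline for subtask 1 can be assembled from off-the-shelf pieces. In the sketch below, the Hugging Face dataset id, split, and column names are guesses rather than confirmed identifiers; consult the official competition repository for the authoritative data-access instructions.

```python
# Hedged sketch of a baseline loop for subtask 1 (visual question answering).
# Dataset id, split, and column names are assumptions; see
# github.com/simula/MediaEval-Medico-2025 for the real layout.
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("SimulaMet/Kvasir-VQA-x1", split="validation")  # hypothetical id/split
vqa = pipeline("visual-question-answering")  # any VQA-capable checkpoint

predictions = []
for ex in ds:
    answer = vqa(image=ex["image"], question=ex["question"])[0]["answer"]
    predictions.append({"question": ex["question"],
                        "prediction": answer,
                        "reference": ex["answer"]})
```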

MInDI-3D: Iterative Deep Learning in 3D for Sparse-view Cone Beam Computed Tomography

Daniel Barco, Marc Stadelmann, Martin Oswald, Ivo Herzig, Lukas Lichtensteiger, Pascal Paysan, Igor Peterlik, Michal Walczak, Bjoern Menze, Frank-Peter Schilling

arXiv preprint · Aug 13, 2025
We present MInDI-3D (Medical Inversion by Direct Iteration in 3D), the first 3D conditional diffusion-based model for real-world sparse-view Cone Beam Computed Tomography (CBCT) artefact removal, aiming to reduce imaging radiation exposure. A key contribution is extending the "InDI" concept from 2D to a full 3D volumetric approach for medical images, implementing an iterative denoising process that refines the CBCT volume directly from sparse-view input. A further contribution is the generation of a large pseudo-CBCT dataset (16,182 volumes) from chest CT volumes of the public CT-RATE dataset to robustly train MInDI-3D. We performed a comprehensive evaluation, including quantitative metrics, scalability analysis, generalisation tests, and a clinical assessment by 11 clinicians. Our results show MInDI-3D's effectiveness: with only 50 projections, it achieves a PSNR gain of 12.96 dB over uncorrected scans on the CT-RATE pseudo-CBCT test set (6.10 dB on the independent real-world test set), enabling an 8x reduction in imaging radiation exposure. We demonstrate its scalability by showing that performance improves with more training data. Importantly, MInDI-3D matches the performance of a 3D U-Net on real-world scans from 16 cancer patients across distortion and task-based metrics. It also generalises to new CBCT scanner geometries. Clinicians rated our model as sufficient for patient positioning across all anatomical sites and found that it preserved lung tumour boundaries well.
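
The "direct iteration" at the heart of InDI replaces one-shot restoration with many small steps from the degraded input toward the clean estimate. A schematic of that sampling loop in PyTorch, following the published InDI update rule, is below; `model` is a placeholder denoiser, and MInDI-3D's actual architecture and conditioning are not reproduced.

```python
# Schematic InDI-style iterative refinement in 3D (PyTorch).
# `model(x_t, t)` is a placeholder predicting the clean volume from the
# current iterate; MInDI-3D's real network and conditioning differ.
import torch

@torch.no_grad()
def indi_restore(model, x_degraded, n_steps=20):
    """Iterate x_{t-d} = (d/t) * model(x_t, t) + (1 - d/t) * x_t from t=1 to 0."""
    x = x_degraded.clone()                         # (B, 1, D, H, W) sparse-view volume
    ts = torch.linspace(1.0, 0.0, n_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        d = t - t_next                             # step size
        x0_hat = model(x, t.expand(x.shape[0]))    # predicted clean volume
        x = (d / t) * x0_hat + (1 - d / t) * x     # move a fraction toward x0_hat
    return x
```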

MammosighTR: Nationwide Breast Cancer Screening Mammogram Dataset with BI-RADS Annotations for Artificial Intelligence Applications.

Koç U, Beşler MS, Sezer EA, Karakaş E, Özkaya YA, Evrimler Ş, Yalçın A, Kızıloğlu A, Kesimal U, Oruç M, Çankaya İ, Koç Keleş D, Merd N, Özkan E, Çevik Nİ, Gökhan MB, Boyraz Hayat B, Özer M, Tokur O, Işık F, Tezcan A, Battal F, Yüzkat M, Sebik NB, Karademir F, Topuz Y, Sezer Ö, Varlı S, Ülgü MM, Akdoğan E, Birinci Ş

PubMed · Aug 13, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. The MammosighTR dataset, derived from Türkiye's national breast cancer screening mammography program, provides BI-RADS-labeled mammograms with detailed annotations on breast composition and lesion quadrant location, which may be useful for developing and testing AI models in breast cancer detection. ©RSNA, 2025.

Ultrasonic Texture Analysis for Predicting Acute Myocardial Infarction.

Jamthikar AD, Hathaway QA, Maganti K, Hamirani Y, Bokhari S, Yanamala N, Sengupta PP

PubMed · Aug 13, 2025
Acute myocardial infarction (MI) alters cardiomyocyte geometry and architecture, leading to changes in the acoustic properties of the myocardium. This study examines ultrasomics, a novel cardiac ultrasound-based radiomics technique that extracts high-throughput pixel-level information from images, for identifying ultrasonic texture and morphologic changes associated with infarcted myocardium. We included 684 participants from multisource data: a) a retrospective single-center matched case-control dataset, b) a prospective multicenter matched clinical trial dataset, and c) an open-source international and multivendor dataset. Handcrafted and deep transfer learning-based ultrasomics features from 2- and 4-chamber echocardiographic views were used to train machine learning (ML) models, with leave-one-source-out cross-validation for external validation. The ML model trained on handcrafted features showed a higher AUC than the one trained on transfer learning-based deep features in identifying MI (AUCs: 0.87 [95% CI: 0.84-0.89] vs 0.74 [95% CI: 0.70-0.77]; P < 0.0001). ML probability was an independent predictor of MI even after adjusting for conventional echocardiographic parameters (adjusted OR: 1.03 [95% CI: 1.01-1.05]; P < 0.0001). ML probability showed diagnostic value in differentiating acute MI even in the presence of myocardial dysfunction (averaged longitudinal strain [LS] <16%) (AUC: 0.84 [95% CI: 0.77-0.89]). In addition, combining averaged LS with ML probability significantly improved predictive performance compared with LS alone (AUCs: 0.86 [95% CI: 0.80-0.91] vs 0.80 [95% CI: 0.72-0.87]; P = 0.02). Visualization of ultrasomics features using a Manhattan plot discriminated infarcted from noninfarcted segments (P < 0.001) and facilitated parametric visualization of infarcted myocardium. This study demonstrates the potential of cardiac ultrasomics to distinguish healthy from infarcted myocardium and highlights the need for validation in diverse populations to define its role and incremental value in myocardial tissue characterization beyond conventional echocardiography.
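
The external-validation scheme used here, training on all data sources but one and testing on the held-out source, maps directly onto scikit-learn's LeaveOneGroupOut. The sketch below illustrates that loop; the feature matrix, labels, source ids, and choice of classifier are placeholders, since the abstract does not specify the model family.

```python
# Sketch of leave-one-source-out cross-validation with scikit-learn.
# X (ultrasomics features), y (MI vs control), and `source` (data-source id
# per row, as numpy arrays) are placeholders; the classifier is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_source_out_auc(X, y, source):
    aucs = {}
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=source):
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]   # "ML probability" of MI
        aucs[source[test_idx][0]] = roc_auc_score(y[test_idx], prob)
    return aucs  # AUC per held-out data source
```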

A Chain of Diagnosis Framework for Accurate and Explainable Radiology Report Generation

Haibo Jin, Haoxuan Che, Sunan He, Hao Chen

arXiv preprint · Aug 13, 2025
Despite the progress of radiology report generation (RRG), existing works face two challenges: 1) clinical efficacy remains unsatisfactory, especially in describing lesion attributes; and 2) the generated text lacks explainability, making it difficult for radiologists to trust the results. To address these challenges, we focus on a trustworthy RRG model, which not only generates accurate descriptions of abnormalities but also provides the basis for its predictions. To this end, we propose a framework named chain of diagnosis (CoD), which maintains a chain of diagnostic steps for clinically accurate and explainable RRG. It first generates question-answer (QA) pairs via diagnostic conversation to extract key findings, then prompts a large language model with the QA diagnoses for accurate generation. To enhance explainability, a diagnosis grounding module is designed to match QA diagnoses and generated sentences, where the diagnoses act as a reference. Moreover, a lesion grounding module is designed to locate abnormalities in the image, further improving the working efficiency of radiologists. To facilitate label-efficient training, we propose an omni-supervised learning strategy with clinical consistency to leverage various types of annotations from different datasets. Our efforts lead to 1) an omni-labeled RRG dataset with QA pairs and lesion boxes; 2) an evaluation tool for assessing the accuracy of reports in describing lesion location and severity; and 3) extensive experiments demonstrating the effectiveness of CoD, which consistently outperforms both specialist and generalist models on two RRG benchmarks and shows promising explainability by accurately grounding generated sentences to QA diagnoses and images.
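
The two-stage pipeline, a diagnostic conversation that yields QA pairs followed by LLM prompting over those diagnoses, can be sketched as plain glue code. The functions below are stand-ins for the paper's components, not its actual interfaces.

```python
# Illustrative sketch of the CoD idea: extract key findings as QA pairs,
# then prompt a large language model with those diagnoses to draft the
# report. `ask_vqa_model` and `generate_report` are hypothetical stand-ins.

def build_diagnostic_chain(image, questions, ask_vqa_model):
    """Run a diagnostic conversation: one QA pair per clinical question."""
    return [(q, ask_vqa_model(image, q)) for q in questions]

def cod_prompt(qa_pairs):
    """Assemble QA diagnoses into an LLM prompt for report generation."""
    findings = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return ("You are drafting a radiology report. Base every sentence on "
            "the findings below and do not add unsupported claims.\n"
            f"{findings}\nReport:")

# report = generate_report(cod_prompt(build_diagnostic_chain(img, qs, vqa)))
```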

Diffusing the Blind Spot: Uterine MRI Synthesis with Diffusion Models

Johanna P. Müller, Anika Knupfer, Pedro Blöss, Edoardo Berardi Vittur, Bernhard Kainz, Jana Hutter

arXiv preprint · Aug 11, 2025
Despite significant progress in generative modelling, existing diffusion models often struggle to produce anatomically precise female pelvic images, limiting their application in gynaecological imaging, where data scarcity and patient privacy concerns are critical. To overcome these barriers, we introduce a novel diffusion-based framework for uterine MRI synthesis, integrating both unconditional and conditioned Denoising Diffusion Probabilistic Models (DDPMs) and Latent Diffusion Models (LDMs) in 2D and 3D. Our approach generates anatomically coherent, high-fidelity synthetic images that closely mimic real scans and provide valuable resources for training robust diagnostic models. We evaluate generative quality using advanced perceptual and distributional metrics, benchmarking against standard reconstruction methods, and demonstrate substantial gains in diagnostic accuracy on a key classification task. A blinded expert evaluation further validates the clinical realism of our synthetic images. We release our models with privacy safeguards and a comprehensive synthetic uterine MRI dataset to support reproducible research and advance equitable AI in gynaecology.
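
For readers unfamiliar with the DDPM objective named above, the standard training step is compact: diffuse a clean image to a random timestep and regress the injected noise. A minimal PyTorch sketch follows; noise schedules, conditioning, and the latent-space (LDM) variant are omitted.

```python
# Minimal DDPM training step (noise-prediction objective), the standard
# recipe behind the unconditional models described above.
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Sample t, noise x0 to x_t, and regress the injected noise."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward diffusion
    return F.mse_loss(model(x_t, t), eps)                # predict the noise
```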

MedReasoner: Reinforcement Learning Drives Reasoning Grounding from Clinical Thought to Pixel-Level Precision

Zhonghao Yan, Muxi Diao, Yuxuan Yang, Jiayuan Xu, Kaizhou Zhang, Ruoyan Jing, Lele Yang, Yanxi Liu, Kongming Liang, Zhanyu Ma

arXiv preprint · Aug 11, 2025
Accurately grounding regions of interest (ROIs) is critical for diagnosis and treatment planning in medical imaging. While multimodal large language models (MLLMs) combine visual perception with natural language, current medical-grounding pipelines still rely on supervised fine-tuning with explicit spatial hints, making them ill-equipped to handle the implicit queries common in clinical practice. This work makes three core contributions. We first define Unified Medical Reasoning Grounding (UMRG), a novel vision-language task that demands clinical reasoning and pixel-level grounding. Second, we release U-MRG-14K, a dataset of 14K samples featuring pixel-level masks alongside implicit clinical queries and reasoning traces, spanning 10 modalities, 15 super-categories, and 108 specific categories. Finally, we introduce MedReasoner, a modular framework that distinctly separates reasoning from segmentation: an MLLM reasoner is optimized with reinforcement learning, while a frozen segmentation expert converts spatial prompts into masks, with alignment achieved through format and accuracy rewards. MedReasoner achieves state-of-the-art performance on U-MRG-14K and demonstrates strong generalization to unseen clinical queries, underscoring the significant promise of reinforcement learning for interpretable medical grounding.
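
The alignment signal described above, "format and accuracy rewards", can be illustrated with a toy reward function: one term checks that the reasoner's output is well formed, the other scores the mask produced from its spatial prompt against the reference. The tag structure and weights below are assumptions, not the paper's definitions.

```python
# Hedged sketch of a format-plus-accuracy reward for RL-trained grounding.
# The <think>/<answer> tag convention and the weights are assumptions.
import re
import numpy as np

def format_reward(response: str) -> float:
    """1.0 if the response has the expected <think>/<answer> structure."""
    ok = re.search(r"<think>.*</think>\s*<answer>.*</answer>", response, re.S)
    return 1.0 if ok else 0.0

def accuracy_reward(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between the segmentation-expert mask and the ground truth."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter / union) if union else 0.0

def total_reward(response, pred_mask, gt_mask, w_fmt=0.5, w_acc=1.0):
    return (w_fmt * format_reward(response)
            + w_acc * accuracy_reward(pred_mask, gt_mask))
```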

Large-scale Multi-sequence Pretraining for Generalizable MRI Analysis in Versatile Clinical Applications

Zelin Qiu, Xi Wang, Zhuoyao Xie, Juan Zhou, Yu Wang, Lingjie Yang, Xinrui Jiang, Juyoung Bae, Moo Hyun Son, Qiang Ye, Dexuan Chen, Rui Zhang, Tao Li, Neeraj Ramesh Mahboobani, Varut Vardhanabhuti, Xiaohui Duan, Yinghua Zhao, Hao Chen

arXiv preprint · Aug 10, 2025
Multi-sequence Magnetic Resonance Imaging (MRI) offers remarkable versatility, enabling the distinct visualization of different tissue types. Nevertheless, the inherent heterogeneity among MRI sequences poses significant challenges to the generalization capability of deep learning models. These challenges undermine model performance when faced with varying acquisition parameters, thereby severely restricting their clinical utility. In this study, we present PRISM, a foundation model PRe-trained with large-scale multI-Sequence MRI. We collected a total of 64 datasets from both public and private sources, encompassing a wide range of whole-body anatomical structures, with scans spanning diverse MRI sequences. Among them, 336,476 volumetric MRI scans from 34 datasets (8 public and 26 private) were curated to construct the largest multi-organ multi-sequence MRI pretraining corpus to date. We propose a novel pretraining paradigm that disentangles anatomically invariant features from sequence-specific variations in MRI, while preserving high-level semantic representations. We established a benchmark comprising 44 downstream tasks, including disease diagnosis, image segmentation, registration, progression prediction, and report generation. These tasks were evaluated on 32 public datasets and 5 private cohorts. PRISM consistently outperformed both non-pretrained models and existing foundation models, achieving first-rank results in 39 out of 44 downstream benchmarks with statistically significant improvements. These results underscore its ability to learn robust and generalizable representations across unseen data acquired under diverse MRI protocols. PRISM provides a scalable framework for multi-sequence MRI analysis, thereby enhancing the translational potential of AI in radiology. It delivers consistent performance across diverse imaging protocols, reinforcing its clinical applicability.
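
One common way to realise the stated goal of disentangling anatomically invariant features from sequence-specific variation is a cross-sequence contrastive objective: embeddings of two sequences from the same subject are pulled together, while other subjects in the batch are pushed apart. The InfoNCE sketch below is speculative; PRISM's actual pretraining objective is more involved than this.

```python
# Speculative sketch of cross-sequence InfoNCE for sequence-invariant
# anatomy features; not PRISM's actual objective.
import torch
import torch.nn.functional as F

def cross_sequence_infonce(z_seq_a, z_seq_b, temperature=0.1):
    """z_seq_a/z_seq_b: (B, D) embeddings of paired sequences, row-aligned."""
    a = F.normalize(z_seq_a, dim=1)
    b = F.normalize(z_seq_b, dim=1)
    logits = a @ b.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(a.shape[0], device=a.device)
    return F.cross_entropy(logits, targets)   # match each subject to itself
```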

Text Embedded Swin-UMamba for DeepLesion Segmentation

Ruida Cheng, Tejas Sudharshan Mathai, Pritam Mukherjee, Benjamin Hou, Qingqing Zhu, Zhiyong Lu, Matthew McAuliffe, Ronald M. Summers

arXiv preprint · Aug 8, 2025
Segmentation of lesions on CT enables automatic measurement for clinical assessment of chronic diseases (e.g., lymphoma). Integrating large language models (LLMs) into the lesion segmentation workflow offers the potential to combine imaging features with descriptions of lesion characteristics from the radiology reports. In this study, we investigate the feasibility of integrating text into the Swin-UMamba architecture for the task of lesion segmentation. The publicly available ULS23 DeepLesion dataset was used, along with short-form descriptions of the findings from the reports. On the test dataset, a high Dice score of 82% and a low Hausdorff distance of 6.58 pixels were obtained for lesion segmentation. The proposed Text-Swin-UMamba model outperformed prior approaches: a 37% improvement over the LLM-driven LanGuideMedSeg model (p < 0.001), and it surpassed the purely image-based xLSTM-UNet and nnUNet models by 1.74% and 0.22%, respectively. The dataset and code can be accessed at https://github.com/ruida/LLM-Swin-UMamba
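
A common pattern for injecting report text into a segmentation backbone is FiLM-style modulation: a text embedding predicts per-channel scale and shift parameters applied to image features. Whether Text-Swin-UMamba fuses text this way is an assumption on our part; see the linked repository for the actual design. A minimal PyTorch sketch:

```python
# Hedged sketch of FiLM-style text conditioning for segmentation features.
# Whether Text-Swin-UMamba uses this fusion is an assumption.
import torch
import torch.nn as nn

class TextFiLM(nn.Module):
    """Scale-and-shift image features with parameters predicted from text."""
    def __init__(self, text_dim: int, channels: int):
        super().__init__()
        self.to_scale_shift = nn.Linear(text_dim, 2 * channels)

    def forward(self, feats: torch.Tensor, text_emb: torch.Tensor):
        # feats: (B, C, H, W) image features; text_emb: (B, text_dim)
        gamma, beta = self.to_scale_shift(text_emb).chunk(2, dim=1)
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return feats * (1 + gamma) + beta
```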