
A Generative Foundation Model for Chest Radiography

Yuanfeng Ji, Dan Lin, Xiyue Wang, Lu Zhang, Wenhui Zhou, Chongjian Ge, Ruihang Chu, Xiaoli Yang, Junhan Zhao, Junsong Chen, Xiangde Luo, Sen Yang, Jin Fang, Ping Luo, Ruijiang Li

arXiv preprint · Sep 4, 2025
The scarcity of well-annotated, diverse medical images is a major hurdle for developing reliable AI models in healthcare. Substantial technical advances have been made in generative foundation models for natural images. Here we develop 'ChexGen', a generative vision-language foundation model that introduces a unified framework for text-, mask-, and bounding box-guided synthesis of chest radiographs. Built upon the latent diffusion transformer architecture, ChexGen was pretrained on the largest curated chest X-ray dataset to date, consisting of 960,000 radiograph-report pairs. Expert evaluations and quantitative metrics confirm that ChexGen synthesizes accurate radiographs. We demonstrate the utility of ChexGen for training data augmentation and supervised pretraining, which led to performance improvements across disease classification, detection, and segmentation tasks using a small fraction of training data. Further, our model enables the creation of diverse patient cohorts that enhance model fairness by detecting and mitigating demographic biases. Our study supports the transformative role of generative foundation models in building more accurate, data-efficient, and equitable medical AI systems.
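To make the guided-synthesis interface concrete, here is a minimal sketch of one classifier-free-guidance denoising step conditioned on a text embedding and a spatial mask, in the spirit of a latent diffusion transformer. The stub denoiser, schedule values, and tensor shapes are assumptions for illustration, not ChexGen's released code.

```python
# Sketch only: one text- and mask-conditioned DDIM step with
# classifier-free guidance; all shapes and values are assumptions.
import torch

def guided_ddim_step(eps_model, z_t, t, text_emb, mask,
                     alpha_t, alpha_prev, guidance=4.0):
    """One DDIM step (eta = 0) with classifier-free guidance.

    z_t      : (B, C, H, W) latent at timestep t
    text_emb : (B, L, D) text embedding; zeros act as the null prompt
    mask     : (B, 1, H, W) spatial condition resized to the latent grid
    """
    eps_cond = eps_model(z_t, t, text_emb, mask)
    eps_uncond = eps_model(z_t, t, torch.zeros_like(text_emb), mask)
    eps = eps_uncond + guidance * (eps_cond - eps_uncond)
    # Predict the clean latent, then step to the previous noise level.
    z0 = (z_t - (1 - alpha_t).sqrt() * eps) / alpha_t.sqrt()
    return alpha_prev.sqrt() * z0 + (1 - alpha_prev).sqrt() * eps

# Toy usage with a stand-in denoiser (a real run would use the
# pretrained latent diffusion transformer).
stub = lambda z, t, txt, m: torch.randn_like(z)
z = torch.randn(2, 4, 32, 32)
z_prev = guided_ddim_step(stub, z, 500, torch.randn(2, 16, 64),
                          torch.rand(2, 1, 32, 32),
                          torch.tensor(0.5), torch.tensor(0.6))
print(z_prev.shape)  # torch.Size([2, 4, 32, 32])
```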

A Foundation Model for Chest X-ray Interpretation with Grounded Reasoning via Online Reinforcement Learning

Qika Lin, Yifan Zhu, Bin Pu, Ling Huang, Haoran Luo, Jingying Ma, Zhen Peng, Tianzhe Zhao, Fangzhi Xu, Jian Zhang, Kai He, Zhonghong Ou, Swapnil Mishra, Mengling Feng

arXiv preprint · Sep 4, 2025
Medical foundation models (FMs) have shown tremendous promise amid the rapid advancements in artificial intelligence (AI) technologies. However, current medical FMs typically generate answers in a black-box manner, lacking transparent reasoning processes and locally grounded interpretability, which hinders their practical clinical deployment. To this end, we introduce DeepMedix-R1, a holistic medical FM for chest X-ray (CXR) interpretation. It leverages a sequential training pipeline: initially fine-tuned on curated CXR instruction data to equip it with fundamental CXR interpretation capabilities, then exposed to high-quality synthetic reasoning samples to enable cold-start reasoning, and finally refined via online reinforcement learning to enhance both grounded reasoning quality and generation performance. Thus, for each query the model produces both an answer and reasoning steps tied to the image's local regions. Quantitative evaluation demonstrates substantial improvements in report generation (e.g., 14.54% and 31.32% over LLaVA-Rad and MedGemma) and visual question answering (e.g., 57.75% and 23.06% over MedGemma and CheXagent) tasks. To facilitate robust assessment, we propose Report Arena, a benchmarking framework using advanced language models to evaluate answer quality, further highlighting the superiority of DeepMedix-R1. Expert review of generated reasoning steps reveals greater interpretability and clinical plausibility compared to the established Qwen2.5-VL-7B model (0.7416 vs. 0.2584 overall preference). Collectively, our work advances medical FM development toward holistic, transparent, and clinically actionable modeling for CXR interpretation.
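As a hedged illustration of what an online-RL objective for grounded answers could look like, the sketch below combines answer correctness with overlap between the model's cited image region and a reference box. The weighting, the IoU term, and the box format are my assumptions, not the paper's actual reward definition.

```python
# Hypothetical composite reward for grounded CXR answers; the exact
# terms and weights are assumptions, not DeepMedix-R1's reward.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def grounded_reward(answer_correct, pred_box, ref_box, w_ground=0.5):
    """Reward = answer-correctness term + region-grounding term."""
    return (1.0 if answer_correct else 0.0) + w_ground * iou(pred_box, ref_box)

print(grounded_reward(True, (10, 10, 50, 50), (20, 20, 60, 60)))  # ~1.20
```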

Interpretable Transformer Models for rs-fMRI Epilepsy Classification and Biomarker Discovery

Jeyabose Sundar, A., Boerwinkle, V. L., Robinson Vimala, B., Leggio, O., Kazemi, M.

medRxiv preprint · Sep 4, 2025
Background: Automated interpretation of resting-state fMRI (rs-fMRI) for epilepsy diagnosis remains a challenge. We developed a regularized transformer that models parcel-wise spatial patterns and long-range temporal dynamics to classify epilepsy and generate interpretable, network-level candidate biomarkers. Methods: Inputs were Schaefer-200 parcel time series extracted after standardized preprocessing (fMRIPrep). The regularized transformer is an attention-based sequence model with learned positional encoding and multi-head self-attention, combined with fMRI-specific regularization (dropout, weight decay, gradient clipping) and augmentation to improve robustness on modest clinical cohorts. Training used stratified group 4-fold cross-validation on n=65 (30 epilepsy, 35 controls) with fMRI-specific augmentation (time-warping, adaptive noise, structured masking). We compared the transformer to seven baselines (MLP, 1D-CNN, LSTM, CNN-LSTM, GCN, GAT, attention-only). External validation used an independent set (10 UNC epilepsy cohort, 10 controls). Biomarker discovery combined gradient-based attributions with parcel-wise statistics and connectivity contrasts. Results: On an illustrative best-performing fold, the transformer attained accuracy 0.77, sensitivity 0.83, specificity 0.88, F1-score 0.77, and AUC 0.76. Averaged cross-validation performance was lower but consistent with these findings. External testing yielded accuracy 0.60, AUC 0.64, specificity 0.80, and sensitivity 0.40. Attribution-guided analysis identified distributed, network-level candidate biomarkers concentrated in limbic, somatomotor, default-mode, and salience systems. Conclusions: A regularized transformer on parcel-level rs-fMRI can achieve strong within-fold discrimination and produce interpretable candidate biomarkers. Results are encouraging but preliminary; larger multi-site validation, stability testing, and multiple-comparison control are required prior to clinical translation.
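A minimal sketch of the kind of model the Methods describe: a Transformer encoder over Schaefer-200 parcel time series with learned positional encoding, plus the named regularizers (dropout, weight decay via the optimizer, gradient clipping in the training step). All hyperparameters and shapes are illustrative assumptions, not the study's configuration.

```python
# Sketch under stated assumptions; not the authors' implementation.
import torch
import torch.nn as nn

class ParcelTransformer(nn.Module):
    def __init__(self, n_parcels=200, d_model=128, n_heads=4, n_layers=2,
                 n_classes=2, dropout=0.3, max_len=512):
        super().__init__()
        self.proj = nn.Linear(n_parcels, d_model)                   # parcels -> model dim
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (B, T, 200) parcel time series
        h = self.proj(x) + self.pos[:, : x.size(1)]
        h = self.encoder(h).mean(dim=1)      # temporal average pooling
        return self.head(h)

model = ParcelTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)  # weight decay
x, y = torch.randn(8, 120, 200), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping
opt.step()
```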

TauGenNet: Plasma-Driven Tau PET Image Synthesis via Text-Guided 3D Diffusion Models

Yuxin Gong, Se-in Jang, Wei Shao, Yi Su, Kuang Gong

arXiv preprint · Sep 4, 2025
Accurate quantification of tau pathology via tau positron emission tomography (PET) is crucial for diagnosing and monitoring Alzheimer's disease (AD). However, the high cost and limited availability of tau PET restrict its widespread use. In contrast, structural magnetic resonance imaging (MRI) and plasma-based biomarkers provide non-invasive and widely available complementary information related to brain anatomy and disease progression. In this work, we propose a text-guided 3D diffusion model for 3D tau PET image synthesis, leveraging multimodal conditions from both structural MRI and plasma measurements. Specifically, the textual prompt is derived from the plasma p-tau217 measurement, a key indicator of AD progression, while MRI provides anatomical structure constraints. The proposed framework is trained and evaluated using clinical AV1451 tau PET data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results demonstrate that our approach can generate realistic, clinically meaningful 3D tau PET across a range of disease stages. The proposed framework can help perform tau PET data augmentation under different settings, provide a non-invasive, cost-effective alternative for visualizing tau pathology, and support the simulation of disease progression under varying plasma biomarker levels and cognitive conditions.
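A hedged sketch of one way the plasma-derived textual condition could be assembled before text encoding. The wording, the binning thresholds, and the fields below are invented for illustration only; the paper does not specify this prompt format.

```python
# Hypothetical prompt builder; thresholds and phrasing are assumptions.
def tau_prompt(ptau217_pg_ml: float, diagnosis: str) -> str:
    """Turn a plasma p-tau217 value and clinical status into a
    conditioning prompt for the text-guided diffusion model."""
    level = ("high" if ptau217_pg_ml > 0.63          # invented cutoffs
             else "intermediate" if ptau217_pg_ml > 0.40
             else "low")
    return (f"tau PET scan of a subject with {level} plasma p-tau217 "
            f"({ptau217_pg_ml:.2f} pg/mL), clinical status: {diagnosis}")

print(tau_prompt(0.71, "mild cognitive impairment"))
```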

Diffusion Generative Models Meet Compressed Sensing, with Applications to Image Data and Financial Time Series

Zhengyi Guo, Jiatu Li, Wenpin Tang, David D. Yao

arXiv preprint · Sep 4, 2025
This paper develops dimension reduction techniques for accelerating diffusion model inference in the context of synthetic data generation. The idea is to integrate compressed sensing into diffusion models: (i) compress the data into a latent space, (ii) train a diffusion model in the latent space, and (iii) apply a compressed sensing algorithm to the samples generated in the latent space, improving the efficiency of both model training and inference. Under suitable sparsity assumptions on the data, the proposed algorithm is proved to enjoy faster convergence by combining diffusion model inference with sparse recovery. As a byproduct, we obtain an optimal value for the latent space dimension. We also conduct numerical experiments on a range of datasets, including image data (handwritten digits, medical images, and climate data) and financial time series for stress testing.
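Step (iii), recovering a sparse signal from a sample generated in the compressed latent space, can be illustrated with a basic iterative soft-thresholding (ISTA) solver. The measurement matrix, sparsity level, and regularization weight below are toy assumptions, not the paper's configuration.

```python
# Toy sparse-recovery sketch (ISTA); not the paper's exact algorithm.
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    soft-thresholding; y plays the role of a latent-space sample."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
n, d, k = 64, 256, 8                         # latent dim n << ambient dim d
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                               # "sample generated in latent space"
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```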

SynBT: High-quality Tumor Synthesis for Breast Tumor Segmentation by 3D Diffusion Model

Hongxu Yang, Edina Timko, Levente Lippenszky, Vanda Czipczer, Lehel Ferenczi

arXiv preprint · Sep 3, 2025
Synthetic tumors in medical images offer controllable characteristics that facilitate the training of machine learning models, leading to improved segmentation performance. However, existing tumor-synthesis methods perform suboptimally when the tumor occupies a large spatial volume, as in breast tumor segmentation in MRI with a large field-of-view (FOV), because commonly used tumor generation methods operate on small patches. In this paper, we propose a 3D medical diffusion model, called SynBT, to generate high-quality breast tumors (BT) in contrast-enhanced MRI images. The proposed model consists of a patch-to-volume autoencoder, which compresses high-resolution MRIs into a compact latent space while preserving the resolution of volumes with a large FOV. Using the resulting latent feature vector, a mask-conditioned diffusion model synthesizes breast tumors within selected regions of breast tissue, producing realistic tumor appearances. We evaluated the proposed method on a tumor segmentation task, demonstrating that the high-quality synthesized tumors improve common segmentation models by a 2-3% Dice score on a large public dataset, and therefore benefit tumor segmentation in MRI images.
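One common way to realize "synthesize tumors within selected regions" is inpainting-style blending during latent denoising: keep the model's sample inside the tumor mask and the re-noised original tissue latent outside it. Whether SynBT uses this exact blending is an assumption; the sketch below is illustrative only.

```python
# Hedged sketch of region-restricted latent synthesis; not SynBT's code.
import torch

def masked_denoise(sample_step, add_noise, z_orig, mask, timesteps):
    """z_orig: clean latent of the original MRI volume; mask: 1 inside
    the selected tumor region, 0 elsewhere (broadcastable to z_orig)."""
    z = torch.randn_like(z_orig)
    for t in timesteps:                      # e.g., reversed(range(T))
        z = sample_step(z, t)                # one reverse-diffusion step
        z_known = add_noise(z_orig, t)       # original latent noised to level t
        z = mask * z + (1 - mask) * z_known  # synthesize only inside the mask
    return z

z0 = torch.zeros(1, 4, 8, 8, 8)              # toy 3D latent
m = torch.zeros_like(z0)
m[..., 2:6, 2:6, 2:6] = 1.0                  # selected tumor region
out = masked_denoise(lambda z, t: z * 0.9,   # stand-in reverse step
                     lambda z, t: z + 0.1 * torch.randn_like(z),
                     z0, m, range(10))
print(out.shape)  # torch.Size([1, 4, 8, 8, 8])
```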

Interpretable Artificial Intelligence Analysis of Functional Magnetic Resonance Imaging for Migraine Classification: Quantitative Study.

Li G, Yang H, He L, Zeng G

PubMed · Sep 3, 2025
Deep learning has demonstrated significant potential in advancing computer-aided diagnosis for neuropsychiatric disorders, such as migraine, enabling patient-specific diagnosis at an individual level. However, despite the superior accuracy of deep learning models, the interpretability of image classification models remains limited. Their black-box nature continues to pose a major obstacle in clinical applications, hindering biomarker discovery and personalized treatment. This study aims to investigate explainable artificial intelligence (XAI) techniques combined with multiple functional magnetic resonance imaging (fMRI) indicators to (1) compare their efficacy in migraine classification, (2) identify optimal model-indicator pairings, and (3) evaluate XAI's potential in clinical diagnostics by localizing discriminative brain regions. We analyzed resting-state fMRI data from 64 participants, including 21 (33%) patients with migraine without aura, 15 (23%) patients with migraine with aura, and 28 (44%) healthy controls. Three fMRI metrics (amplitude of low-frequency fluctuation, regional homogeneity, and regional functional connectivity strength [RFCS]) were extracted and classified using GoogleNet, ResNet18, and Vision Transformer. For comprehensive model comparison, conventional machine learning methods, including support vector machine and random forest, were also used as benchmarks. Model performance was evaluated through accuracy and area under the curve metrics, while activation heat maps were generated via gradient-weighted class activation mapping for the convolutional neural networks and self-attention mechanisms for the Vision Transformer. The GoogleNet model combined with the RFCS indicator achieved the best classification performance, with an accuracy of 98.44% and an area under the receiver operating characteristic curve of 0.99 on the test set. In addition, among the 3 indicators, the RFCS indicator improved accuracy by approximately 8% compared with the amplitude of low-frequency fluctuation. Brain activation heat maps generated by XAI technology revealed that the precuneus and cuneus were the most discriminative brain regions, with slight activation also observed in the frontal gyrus. The use of XAI technology combined with brain region features provides visual explanations for the progression of migraine in patients. Understanding the decision-making process of the network has significant potential for clinical diagnosis of migraines, offering promising applications in enhancing diagnostic accuracy and aiding in the development of new diagnostic techniques.
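Since the abstract names gradient-weighted class activation mapping (Grad-CAM) for its CNN heat maps, here is a minimal Grad-CAM sketch over a torchvision GoogLeNet stand-in. The target layer, input size, and randomly initialized weights are illustrative assumptions, not the study's setup.

```python
# Minimal Grad-CAM sketch; layer choice and shapes are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import googlenet

model = googlenet(weights=None, aux_logits=False).eval()
feats, grads = {}, {}
layer = model.inception5b  # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in fMRI-metric map
score = model(x)[0].max()                # class score to explain
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)   # channel weights from gradients
cam = F.relu((w * feats["a"]).sum(dim=1))       # weighted sum + ReLU
cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 224, 224) activation heat map
```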

Using Explainable AI to Characterize Features in the Mirai Mammographic Breast Cancer Risk Prediction Model.

Wang YK, Klanecek Z, Wagner T, Cockmartin L, Marshall N, Studen A, Jeraj R, Bosmans H

PubMed · Sep 3, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate whether features extracted by Mirai can be aligned with mammographic observations, and contribute meaningfully to the prediction. Materials and Methods This retrospective study examined the correlation of 512 Mirai features with mammographic observations in terms of receptive field and anatomic location. A total of 29,374 screening examinations with mammograms (10,415 women, mean age at examination 60 [SD: 11] years) from the EMBED Dataset (2013-2020) were used to evaluate feature importance using a feature-centric explainable AI pipeline. Risk prediction was evaluated using only calcification features (CalcMirai) or mass features (MassMirai) against Mirai. Performance was assessed in screening and screen-negative (time-to-cancer > 6 months) populations using the area under the receiver operating characteristic curve (AUC). Results Eighteen calcification features and 18 mass features were selected for CalcMirai and MassMirai, respectively. Both CalcMirai and MassMirai had lower performance than Mirai in lesion detection (screening population, 1-year AUC: Mirai, 0.81 [95% CI: 0.78, 0.84], CalcMirai, 0.76 [95% CI: 0.73, 0.80]; MassMirai, 0.74 [95% CI: 0.71, 0.78]; <i>P</i> values < 0.001). In risk prediction, there was no evidence of a difference in performance between CalcMirai and Mirai (screen-negative population, 5-year AUC: Mirai, 0.66 [95% CI: 0.63, 0.69], CalcMirai, 0.66 [95% CI: 0.64, 0.69]; <i>P</i> value: 0.71); however, MassMirai achieved lower performance than Mirai (AUC, 0.57 [95% CI: 0.54, 0.60]; <i>P</i> value < .001). Radiologist review of calcification features confirmed Mirai's use of benign calcification in risk prediction. Conclusion The explainable AI pipeline demonstrated that Mirai implicitly learned to identify mammographic lesion features, particularly calcifications, for lesion detection and risk prediction. ©RSNA, 2025.

Evaluating large language model-generated brain MRI protocols: performance of GPT4o, o3-mini, DeepSeek-R1 and Qwen2.5-72B.

Kim SH, Schramm S, Schmitzer L, Serguen K, Ziegelmayer S, Busch F, Komenda A, Makowski MR, Adams LC, Bressem KK, Zimmer C, Kirschke J, Wiestler B, Hedderich D, Finck T, Bodden J

PubMed · Sep 3, 2025
To evaluate the potential of LLMs to generate sequence-level brain MRI protocols. This retrospective study employed a dataset of 150 brain MRI cases derived from local imaging request forms. Reference protocols were established by two neuroradiologists. GPT-4o, o3-mini, DeepSeek-R1, and Qwen2.5-72B were employed to generate brain MRI protocols based on the case descriptions. Protocol generation was conducted (1) with additional in-context learning involving local standard protocols (enhanced) and (2) without additional information (base). Additionally, two radiology residents independently defined MRI protocols. The sum of redundant and missing sequences (accuracy index) was defined as the performance metric. Accuracy indices were compared between groups using paired t tests. The two neuroradiologists achieved substantial inter-rater agreement (Cohen's κ = 0.74). o3-mini demonstrated superior performance (base: 2.65 ± 1.61; enhanced: 1.94 ± 1.25), followed by GPT-4o (base: 3.11 ± 1.83; enhanced: 2.23 ± 1.48), DeepSeek-R1 (base: 3.42 ± 1.84; enhanced: 2.37 ± 1.42), and Qwen2.5-72B (base: 5.95 ± 2.78; enhanced: 2.75 ± 1.54). o3-mini consistently outperformed the other models by a significant margin. All four models showed highly significant performance improvements under the enhanced condition (adj. p < 0.001 for all models). The highest-performing LLM (o3-mini [enhanced]) yielded an accuracy index comparable to the residents' (o3-mini [enhanced]: 1.94 ± 1.25; resident 1: 1.77 ± 1.29; resident 2: 1.77 ± 1.28). Our findings demonstrate the promising potential of LLMs in automating brain MRI protocoling, especially when augmented through in-context learning; o3-mini exhibited superior performance, followed by GPT-4o. Question: Brain MRI protocoling is a time-consuming, non-interpretative task, exacerbating radiologist workload. Findings: o3-mini demonstrated superior brain MRI protocoling performance. All models showed notable improvements when augmented with local standard protocols. Clinical relevance: MRI protocoling is a time-intensive, non-interpretative task that adds to radiologist workload; large language models offer potential for (semi-)automation of this process.
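The accuracy index is defined directly in the abstract as the sum of redundant and missing sequences (lower is better). Here is a small sketch of that metric; the sequence names are invented examples, and treating protocols as multisets is my assumption.

```python
# Accuracy index = redundant + missing sequences; example names invented.
from collections import Counter

def accuracy_index(generated, reference):
    """Count sequences the model added that the reference lacks
    (redundant) plus those it omitted (missing). Lower is better."""
    gen, ref = Counter(generated), Counter(reference)
    redundant = sum((gen - ref).values())
    missing = sum((ref - gen).values())
    return redundant + missing

reference = ["T1 MPRAGE", "T2 TSE", "FLAIR", "DWI", "SWI"]
generated = ["T1 MPRAGE", "FLAIR", "DWI", "T1 post-contrast"]
print(accuracy_index(generated, reference))  # 1 redundant + 2 missing = 3
```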

Fully-Guided Placement of Dental Implants Utilizing Nasopalatine Canal Fixation in a Novel Rotational Path Surgical Template Design: A Retrospective Case Series.

Ganz SD

PubMed · Sep 3, 2025
Precise implant placement in the anterior and posterior maxilla often presents challenges due to variable bone and soft-tissue anatomy. Many clinicians elect a freehand surgical approach because conventional surgical guides may not always be easy to design, fabricate, or utilize. Guided surgery has proven advantages over freehand protocols, and the present study therefore proposed utilizing the nasopalatine canal (NPC) as an anatomical reference and point of fixation for a novel rotational-path surgical template during computer-aided implant surgery (CAIS). The digital workflow combined artificial intelligence (AI)-facilitated cone beam computed tomography (CBCT) software bone segmentation of the maxillary arch to assess the NPC and surrounding hard tissues and to design and fabricate static surgical guides for precise implant placement. After rotational engagement of the maxillary buccal undercuts, each novel surgical guide incorporated the NPC for fixation with a single pin to achieve initial stability. Twenty-two consecutive patients requiring maxillary reconstruction (7 fully and 15 partially edentulous) received 123 implants via a fully guided surgical protocol to complete 4 overdenture and 18 full-arch fixed restorations. Twelve patients required extensive maxillary bone augmentation before implant placement. Thirteen patients required delayed loading based on bone density, and 9 patients were restoratively loaded within 24 to 96 hours post-surgery, accomplished with the use of photogrammetry for the fabrication of 3D-printed restorations. The initial implant success rate was 98.37%, with 100% initial prosthetic success. The use of the NPC for fixation of surgical guides did not result in any neurovascular post-operative complications. The novel template concept can improve surgical outcomes using a bone-borne template design for implant-supported rehabilitation of the partially and fully edentulous maxillary arch. This preliminary case series confirmed controlled placement accuracy with limited risk of neurovascular complications for full-arch overdenture and fixed restorations. The NPC is a vital maxillary anatomic landmark for implant planning, with an expanded role in stabilizing novel surgical guide designs enabled by advances in AI bone segmentation.