
Noise-inspired diffusion model for generalizable low-dose CT reconstruction.

Gao Q, Chen Z, Zeng D, Zhang J, Ma J, Shan H

PubMed | Jul 8 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization and robustness, either by collecting diverse CT data for re-training or a few test samples for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because the CT image noise deviates from a Gaussian distribution and because the guidance from noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to locate prior information more accurately and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels at test time via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is available at https://github.com/qgao21/NEED.
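
The shifted Poisson model that NEED aligns its diffusion process with is a standard description of pre-log CT measurements: detected counts are Poisson-distributed, with Gaussian electronic noise added. As a rough illustration only, a sketch of simulating a low-dose pre-log measurement from a line integral under that standard model (parameter values are made up, not the paper's) might look like:

```python
import numpy as np

def simulate_low_dose_projection(p_normal, I0=1e5, sigma_e=10.0, rng=None):
    """Simulate a pre-log low-dose measurement from a line integral p_normal.

    Shifted-Poisson model common in the LDCT literature: counts are Poisson
    with mean I0 * exp(-p); adding the electronic-noise variance sigma_e**2
    to the noisy data yields a variable that is approximately Poisson with
    mean I0 * exp(-p) + sigma_e**2. All parameters here are illustrative,
    not taken from the paper.
    """
    rng = rng or np.random.default_rng(0)
    mean_counts = I0 * np.exp(-p_normal)             # ideal detector mean
    y = rng.poisson(mean_counts).astype(np.float64)  # quantum (Poisson) noise
    y += rng.normal(0.0, sigma_e, size=y.shape)      # electronic (Gaussian) noise
    return np.maximum(y + sigma_e**2, 1.0)           # shifted-Poisson variable

# Example: a flat 4x4 projection with line integral 2.0 at reduced tube output
low_dose = simulate_low_dose_projection(np.full((4, 4), 2.0), I0=2.5e4)
```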

A Unified Platform for Radiology Report Generation and Clinician-Centered AI Evaluation

Ma, Z., Yang, X., Atalay, Z., Yang, A., Collins, S., Bai, H., Bernstein, M., Baird, G., Jiao, Z.

medRxiv preprint | Jul 8 2025
Generative AI models have demonstrated strong potential in radiology report generation, but their clinical adoption depends on physician trust. In this study, we conducted a radiology-focused Turing test to evaluate how well attendings and residents distinguish AI-generated reports from those written by radiologists, and how their confidence and decision time reflect trust. We developed an integrated web-based platform comprising two core modules: Report Generation and Report Evaluation. Using this platform, eight participants evaluated 48 anonymized X-ray cases, each paired with two reports drawn from three comparison groups: radiologist vs. AI model 1, radiologist vs. AI model 2, and AI model 1 vs. AI model 2. Participants selected the report they believed was AI-generated, rated their confidence, and indicated which report they preferred. Attendings outperformed residents in identifying AI-generated reports (49.9% vs. 41.1%) and exhibited longer decision times, suggesting more deliberate judgment. Both groups took more time when both reports were AI-generated. Our findings highlight the role of clinical experience in AI acceptance and the need for design strategies that foster trust in clinical applications. The evaluation platform's project page is available at: https://zachatalay89.github.io/Labsite.

External Validation of an Upgraded AI Model for Screening Ileocolic Intussusception Using Pediatric Abdominal Radiographs: Multicenter Retrospective Study.

Lee JH, Kim PH, Son NH, Han K, Kang Y, Jeong S, Kim EK, Yoon H, Gatidis S, Vasanawala S, Yoon HM, Shin HJ

PubMed | Jul 8 2025
Artificial intelligence (AI) is increasingly used in radiology, but its development in pediatric imaging remains limited, particularly for emergent conditions. Ileocolic intussusception is an important cause of acute abdominal pain in infants and toddlers and requires timely diagnosis to prevent complications such as bowel ischemia or perforation. While ultrasonography is the diagnostic standard due to its high sensitivity and specificity, its accessibility may be limited, especially outside tertiary centers. Abdominal radiographs (AXRs), despite their limited sensitivity, are often the first-line imaging modality in clinical practice. In this context, AI could support early screening and triage by analyzing AXRs and identifying patients who require further ultrasonography evaluation. This study aimed to upgrade and externally validate an AI model for screening ileocolic intussusception using pediatric AXRs with multicenter data and to assess the diagnostic performance of the model in comparison with radiologists of varying experience levels with and without AI assistance. This retrospective study included pediatric patients (≤5 years) who underwent both AXRs and ultrasonography for suspected intussusception. Based on the preliminary study from hospital A, the AI model was retrained using data from hospital B and validated with external datasets from hospitals C and D. Diagnostic performance of the upgraded AI model was evaluated using sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). A reader study was conducted with 3 radiologists, including 2 trainees and 1 pediatric radiologist, to evaluate diagnostic performance with and without AI assistance. Based on the previously developed AI model trained on 746 patients from hospital A, an additional 431 patients from hospital B (including 143 intussusception cases) were used for further training to develop an upgraded AI model. External validation was conducted using data from hospital C (n=68; 19 intussusception cases) and hospital D (n=90; 30 intussusception cases). The upgraded AI model achieved a sensitivity of 81.7% (95% CI 68.6%-90%) and a specificity of 81.7% (95% CI 73.3%-87.8%), with an AUC of 86.2% (95% CI 79.2%-92.1%) in the external validation set. Without AI assistance, radiologists showed lower performance (overall AUC 64%; sensitivity 49.7%; specificity 77.1%). With AI assistance, radiologists' specificity improved to 93% (difference +15.9%; P<.001), and AUC increased to 79.2% (difference +15.2%; P=.05). The least experienced reader showed the largest improvement in specificity (+37.6%; P<.001) and AUC (+14.7%; P=.08). The upgraded AI model improved diagnostic performance for screening ileocolic intussusception on pediatric AXRs. It effectively enhanced the specificity and overall accuracy of radiologists, particularly those with less experience in pediatric radiology. A user-friendly software platform was introduced to support broader clinical validation, underscoring the potential of AI as a screening and triage tool in pediatric emergency settings.
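
For readers who want to reproduce the reported operating characteristics on their own data, a minimal sketch of how sensitivity, specificity, and AUC are computed for such a binary screening task (the threshold and variable names are illustrative, not the paper's) could be:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def screening_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and AUC for a binary screening model.

    y_true: 1 = intussusception confirmed on ultrasonography, 0 = negative.
    y_score: model probability from the AXR classifier. The 0.5 threshold
    is illustrative; the paper does not state its operating point here.
    """
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    auc = roc_auc_score(y_true, y_score)
    return sensitivity, specificity, auc
```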

CineMyoPS: Segmenting Myocardial Pathologies from Cine Cardiac MR.

Ding W, Li L, Qiu J, Lin B, Yang M, Huang L, Wu L, Wang S, Zhuang X

PubMed | Jul 7 2025
Myocardial infarction (MI) is a leading cause of death worldwide. Late gadolinium enhancement (LGE) and T2-weighted cardiac magnetic resonance (CMR) imaging can respectively identify scarring and edema areas, both of which are essential for MI risk stratification and prognosis assessment. Although combining complementary information from multi-sequence CMR is useful, acquiring these sequences can be time-consuming and prohibitive, e.g., due to the administration of contrast agents. Cine CMR is a rapid and contrast-free imaging technique that can visualize both motion and structural abnormalities of the myocardium induced by acute MI. Therefore, we present a new end-to-end deep neural network, referred to as CineMyoPS, to segment myocardial pathologies, i.e., scars and edema, solely from cine CMR images. Specifically, CineMyoPS extracts both motion and anatomy features associated with MI. Given the interdependence between these features, we design a consistency loss (resembling the co-training strategy) to facilitate their joint learning. Furthermore, we propose a time-series aggregation strategy to integrate MI-related features across the cardiac cycle, thereby enhancing segmentation accuracy for myocardial pathologies. Experimental results on a multi-center dataset demonstrate that CineMyoPS achieves promising performance in myocardial pathology segmentation, motion estimation, and anatomy segmentation.
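
The abstract describes its consistency loss only at a high level; a generic sketch of a co-training-style consistency term between two feature pathways (the function and the projection heads are assumptions for illustration, not the authors' implementation) might look like:

```python
import torch
import torch.nn.functional as F

def consistency_loss(motion_feats, anatomy_feats, proj_m, proj_a):
    """Co-training-style consistency term between two feature branches.

    motion_feats / anatomy_feats: feature maps from the motion and anatomy
    pathways; proj_m / proj_a are small projection heads mapping both into
    a shared space. This is a generic sketch of the idea the abstract
    describes, not the paper's actual loss.
    """
    z_m = proj_m(motion_feats)
    z_a = proj_a(anatomy_feats)
    # Each branch is pulled toward a detached copy of the other, so
    # gradients update one pathway at a time (the co-training flavor).
    return 0.5 * (F.mse_loss(z_m, z_a.detach()) + F.mse_loss(z_a, z_m.detach()))
```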

Evaluation of AI-based detection of incidental pulmonary emboli in cardiac CT angiography scans.

Brin D, Gilat EK, Raskin D, Goitein O

PubMed | Jul 7 2025
Incidental pulmonary embolism (PE) is detected in 1% of cardiac CT angiography (CCTA) scans, despite the targeted aortic opacification and limited field of view. While artificial intelligence (AI) algorithms have proven effective in detecting PE in CT pulmonary angiography (CTPA), their use in CCTA remains unexplored. This study aimed to evaluate the feasibility of an AI algorithm for detecting incidental PE in CCTA scans. A dedicated AI algorithm was retrospectively applied to CCTA scans to detect PE. Radiology reports were reviewed using a natural language processing (NLP) tool to detect mentions of PE. Discrepancies between the AI output and the radiology reports triggered a blinded review by a cardiothoracic radiologist. All scans identified as positive for PE were thoroughly assessed for radiographic features, including the location of emboli and right ventricular (RV) strain. The performance of the AI algorithm for PE detection was compared to the original radiology report. Between 2021 and 2023, 1534 CCTA scans were analyzed. The AI algorithm identified 27 scans as positive for PE, with subsequent review confirming PE in 22/27 cases. Of these, 10 (45.5%) were missed in the initial radiology report, all involving segmental or subsegmental arteries (P < 0.05) with no evidence of RV strain. This study demonstrates the feasibility of using an AI algorithm to detect incidental PE in CCTA scans. A notable miss rate (45.5%) for segmental and subsegmental emboli in the original radiology reports was documented. While these findings emphasize the potential value of AI for PE detection in the daily radiology workflow, further research is needed to fully determine its clinical impact.
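
The adjudication step, comparing AI detections against NLP-extracted report mentions and sending disagreements to blinded review, reduces to a simple set comparison. A hypothetical sketch (function and variable names are illustrative):

```python
def triage_discrepancies(ai_positive: dict, report_mentions_pe: dict) -> list:
    """Flag scans where the AI and the radiology report disagree on PE.

    ai_positive / report_mentions_pe map scan IDs to booleans (AI detection
    and NLP-detected PE mention, respectively). Disagreements are queued
    for blinded expert review, mirroring the study's workflow. Names are
    illustrative, not from the paper.
    """
    return [scan_id for scan_id in ai_positive
            if ai_positive[scan_id] != report_mentions_pe.get(scan_id, False)]

# Example: scan "c102" is AI-positive but unreported, so it gets reviewed
queue = triage_discrepancies({"c101": False, "c102": True},
                             {"c101": False, "c102": False})
```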

External Validation on a Japanese Cohort of a Computer-Aided Diagnosis System Aimed at Characterizing ISUP ≥ 2 Prostate Cancers at Multiparametric MRI.

Escande R, Jaouen T, Gonindard-Melodelima C, Crouzet S, Kuroda S, Souchon R, Rouvière O, Shoji S

PubMed | Jul 7 2025
To evaluate the generalizability of a computer-aided diagnosis (CADx) system based on the apparent diffusion coefficient (ADC) and wash-in rate, and trained on a French population, in diagnosing International Society of Urological Pathology (ISUP) grade ≥ 2 prostate cancer on multiparametric MRI. Sixty-eight consecutive patients who underwent radical prostatectomy at a single Japanese institution were retrospectively included. Pre-prostatectomy MRIs were reviewed by an experienced radiologist, who assigned a Prostate Imaging Reporting and Data System version 2.1 (PI-RADS v2.1) score to suspicious lesions and delineated them. The CADx score was computed from these regions of interest. Using prostatectomy whole-mounts as reference, the CADx and PI-RADS v2.1 scores were compared at the lesion level using areas under the receiver operating characteristic curve (AUC), and sensitivities and specificities obtained with predefined thresholds. In the peripheral zone (PZ), AUCs were 80% (95% confidence interval [95% CI]: 71-90) for the CADx score and 80% (95% CI: 71-89; p = 0.886) for the PI-RADS v2.1 score; in the transition zone (TZ), AUCs were 79% (95% CI: 66-90) for the CADx score and 93% (95% CI: 82-96; p = 0.051) for the PI-RADS v2.1 score. The CADx diagnostic thresholds that provided sensitivities of 86%-91% and specificities of 64%-75% in French test cohorts yielded sensitivities of 60% (95% CI: 38-83) in the PZ and 42% (95% CI: 20-71) in the TZ, with specificities of 95% (95% CI: 86-100) and 92% (95% CI: 73-100), respectively. This shift may be attributed to higher ADC values and lower dynamic contrast-enhanced temporal resolution in the test cohort. The CADx system obtained good overall results in this external cohort. However, the predefined diagnostic thresholds provided lower sensitivities and higher specificities than expected.
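
The key failure mode reported here, a fixed threshold landing at a different operating point under cohort shift, is easy to demonstrate numerically. A minimal sketch (threshold and inputs are illustrative, not the study's CADx internals):

```python
import numpy as np

def apply_fixed_threshold(scores, labels, threshold):
    """Sensitivity/specificity of a CADx score at a pre-specified cutoff.

    When the score distribution shifts between cohorts (e.g., higher ADC
    values in the external cohort), a threshold chosen on the training
    cohort yields a different operating point, here lower sensitivity and
    higher specificity. Inputs are illustrative.
    """
    scores, labels = np.asarray(scores), np.asarray(labels)
    pred = scores >= threshold
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    return sens, spec

# Same cutoff, shifted score distribution -> shifted sensitivity/specificity
sens, spec = apply_fixed_threshold([0.2, 0.4, 0.6, 0.9], [0, 0, 1, 1], 0.5)
```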

Performance of GPT-4 for automated prostate biopsy decision-making based on mpMRI: a multi-center evidence study.

Shi MJ, Wang ZX, Wang SK, Li XH, Zhang YL, Yan Y, An R, Dong LN, Qiu L, Tian T, Liu JX, Song HC, Wang YF, Deng C, Cao ZB, Wang HY, Wang Z, Wei W, Song J, Lu J, Wei X, Wang ZC

PubMed | Jul 7 2025
Multiparametric magnetic resonance imaging (mpMRI) has significantly advanced prostate cancer (PCa) detection, yet decisions on invasive biopsy for moderate Prostate Imaging Reporting and Data System (PI-RADS) scores remain ambiguous. To explore the decision-making capacity of Generative Pretrained Transformer-4 (GPT-4) for automated prostate biopsy recommendations, we included 2299 individuals who underwent prostate biopsy from 2018 to 2023 at 3 large medical centers, with mpMRI available before biopsy and documented clinical-histopathological records. GPT-4 generated structured reports with given prompts. Its performance was quantified using confusion matrices, and sensitivity, specificity, and area under the curve were calculated. Multiple human evaluation procedures were conducted. Wilcoxon's rank sum test, Fisher's exact test, and Kruskal-Wallis tests were used for comparisons. In the largest such sample in a Chinese population, patients with moderate PI-RADS scores (scores 3 and 4) accounted for 39.7% (912/2299) and were defined as the subset-of-interest (SOI). The detection rates of clinically significant PCa for PI-RADS scores 2-5 were 9.4%, 27.3%, 49.2%, and 80.1%, respectively. Nearly 47.5% (433/912) of SOI patients were histopathologically proven to have undergone unnecessary prostate biopsies. With the assistance of GPT-4, 20.8% (190/912) of the SOI population could have avoided unnecessary biopsies, and performance was even better [28.8% (118/410)] in the most heterogeneous subgroup, PI-RADS score 3. More than 90.0% of GPT-4-generated reports were rated comprehensive and easy to understand, although satisfaction with their accuracy was lower (82.8%). GPT-4 also demonstrated cognitive potential for handling complex problems. Additionally, the Chain of Thought method enabled us to better understand the decision-making logic behind GPT-4. Eventually, we developed a ProstAIGuide platform to facilitate accessibility for both doctors and patients. This multi-center study highlights the clinical utility of GPT-4 for prostate biopsy decision-making and advances our understanding of the latest artificial intelligence implementation in various medical scenarios.
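
The abstract does not reproduce the prompts themselves. A hypothetical sketch of how such a structured, Chain-of-Thought biopsy prompt could be assembled (all field names and wording are assumptions, not the study's prompts):

```python
def build_biopsy_prompt(pirads, psa, psa_density, prostate_volume, age):
    """Assemble a structured prompt asking GPT-4 for a biopsy recommendation.

    The fields and wording are illustrative; a Chain-of-Thought instruction
    is included because the authors report using that method.
    """
    return (
        "You are a urology decision-support assistant.\n"
        f"Patient: age {age}, PSA {psa} ng/mL, PSA density {psa_density}, "
        f"prostate volume {prostate_volume} mL, mpMRI PI-RADS score {pirads}.\n"
        "Think step by step about the risk of clinically significant "
        "prostate cancer, then answer with a structured report: "
        "RECOMMENDATION (biopsy / no biopsy), CONFIDENCE, RATIONALE."
    )

# Example prompt for a PI-RADS 3 patient
print(build_biopsy_prompt(pirads=3, psa=6.2, psa_density=0.14,
                          prostate_volume=45, age=64))
```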

Prediction of tissue and clinical thrombectomy outcome in acute ischaemic stroke using deep learning.

von Braun MS, Starke K, Peter L, Kürsten D, Welle F, Schneider HR, Wawrzyniak M, Kaiser DPO, Prasse G, Richter C, Kellner E, Reisert M, Klingbeil J, Stockert A, Hoffmann KT, Scheuermann G, Gillmann C, Saur D

PubMed | Jul 7 2025
The advent of endovascular thrombectomy has significantly improved outcomes for stroke patients with intracranial large vessel occlusion, yet individual benefits can vary widely. As demand for thrombectomy rises and geographical disparities in stroke care access persist, there is a growing need for predictive models that quantify individual benefits. However, current imaging methods for estimating outcomes may not fully capture the dynamic nature of cerebral ischaemia and lack a patient-specific assessment of thrombectomy benefits. Our study introduces a deep learning approach to predict individual responses to thrombectomy in acute ischaemic stroke patients. The proposed models provide predictions for both tissue and clinical outcomes under two scenarios: one assuming successful reperfusion and another assuming unsuccessful reperfusion. The resulting simulations of penumbral salvage and difference in National Institutes of Health Stroke Scale (NIHSS) at discharge quantify the potential individual benefits of the intervention. Our models were developed on an extensive dataset from routine stroke care, which included 405 ischaemic stroke patients who underwent thrombectomy. We used acute data for training (n = 304), including multimodal CT imaging and clinical characteristics, along with post hoc markers such as thrombectomy success, final infarct localization and NIHSS at discharge. We benchmarked our tissue outcome predictions under the observed reperfusion scenario against a thresholding-based clinical method and a generalized linear model. Our deep learning model showed significant superiority, with a mean Dice score of 0.48 on internal test data (n = 50) and 0.52 on external test data (n = 51), versus 0.26/0.36 and 0.34/0.35 for the baselines, respectively. The NIHSS sum score prediction achieved median absolute errors of 1.5 NIHSS points on the internal test dataset and 3.0 NIHSS points on the external test dataset, outperforming other machine learning models. By predicting the patient-specific response to thrombectomy for both tissue and clinical outcomes, our approach offers an innovative biomarker that captures the dynamics of cerebral ischaemia. We believe this method holds significant potential to enhance personalized therapeutic strategies and to facilitate efficient resource allocation in acute stroke care.
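
The reported evaluation rests on two standard quantities: Dice overlap against the observed final infarct, and the per-patient difference between the two counterfactual predictions. A minimal sketch of both (variable names are illustrative, following the abstract):

```python
import numpy as np

def dice(pred_mask, true_mask, eps=1e-8):
    """Dice overlap between predicted and observed final infarct masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    return 2.0 * (pred & true).sum() / (pred.sum() + true.sum() + eps)

def predicted_benefit(infarct_if_failed, infarct_if_reperfused,
                      nihss_failed, nihss_reperfused):
    """Per-patient thrombectomy benefit from the two counterfactuals.

    Penumbral salvage is the tissue predicted to infarct without
    reperfusion but to survive with it; the NIHSS difference quantifies
    the predicted clinical benefit. Names are illustrative.
    """
    salvage_voxels = (infarct_if_failed.astype(bool)
                      & ~infarct_if_reperfused.astype(bool)).sum()
    return salvage_voxels, nihss_failed - nihss_reperfused
```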

MedGemma Technical Report

Andrew Sellergren, Sahar Kazemzadeh, Tiam Jaroensri, Atilla Kiraly, Madeleine Traverse, Timo Kohlberger, Shawn Xu, Fayaz Jamil, Cían Hughes, Charles Lau, Justin Chen, Fereshteh Mahvar, Liron Yatziv, Tiffany Chen, Bram Sterling, Stefanie Anna Baby, Susanna Maria Baby, Jeremy Lai, Samuel Schmidgall, Lu Yang, Kejia Chen, Per Bjornsson, Shashir Reddy, Ryan Brush, Kenneth Philbrick, Howard Hu, Howard Yang, Richa Tiwari, Sunny Jansen, Preeti Singh, Yun Liu, Shekoofeh Azizi, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Riviere, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Elena Buchatskaya, Jean-Baptiste Alayrac, Dmitry Lepikhin, Vlad Feinberg, Sebastian Borgeaud, Alek Andreev, Cassidy Hardin, Robert Dadashi, Léonard Hussenot, Armand Joulin, Olivier Bachem, Yossi Matias, Katherine Chou, Avinatan Hassidim, Kavi Goel, Clement Farabet, Joelle Barral, Tris Warkentin, Jonathon Shlens, David Fleet, Victor Cotruta, Omar Sanseviero, Gus Martins, Phoebe Kirk, Anand Rao, Shravya Shetty, David F. Steiner, Can Kirmizibayrak, Rory Pilgrim, Daniel Golden, Lin Yang

arXiv preprint | Jul 7 2025
Artificial intelligence (AI) has significant potential in healthcare applications, but its training and deployment face challenges due to healthcare's diverse data, complex tasks, and the need to preserve privacy. Foundation models that perform well on medical tasks and require less task-specific tuning data are critical to accelerate the development of healthcare AI applications. We introduce MedGemma, a collection of medical vision-language foundation models based on Gemma 3 4B and 27B. MedGemma demonstrates advanced medical understanding and reasoning on images and text, significantly exceeding the performance of similar-sized generative models and approaching the performance of task-specific models, while maintaining the general capabilities of the Gemma 3 base models. For out-of-distribution tasks, MedGemma achieves 2.6-10% improvement on medical multimodal question answering, 15.5-18.1% improvement on chest X-ray finding classification, and 10.8% improvement on agentic evaluations compared to the base models. Fine-tuning MedGemma further improves performance in subdomains, reducing errors in electronic health record information retrieval by 50% and reaching comparable performance to existing specialized state-of-the-art methods for pneumothorax classification and histopathology patch classification. We additionally introduce MedSigLIP, a medically tuned vision encoder derived from SigLIP. MedSigLIP powers the visual understanding capabilities of MedGemma and, as an encoder, achieves comparable or better performance than specialized medical image encoders. Taken together, the MedGemma collection provides a strong foundation of medical image and text capabilities, with potential to significantly accelerate medical research and the development of downstream applications. The MedGemma collection, including tutorials and model weights, can be found at https://goo.gle/medgemma.
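
For readers who want to try the released checkpoints, a minimal sketch of querying a MedGemma model through Hugging Face transformers follows; the model ID, pipeline task, and message format are assumptions based on the public release, not details from this report:

```python
from transformers import pipeline

# Assumed checkpoint name from the public Hugging Face release.
pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")

messages = [{
    "role": "user",
    "content": [
        # Hypothetical image location, used only for illustration.
        {"type": "image", "url": "https://example.com/chest_xray.png"},
        {"type": "text", "text": "Describe the findings on this chest X-ray."},
    ],
}]
print(pipe(text=messages, max_new_tokens=200))
```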

Uncovering Neuroimaging Biomarkers of Brain Tumor Surgery with AI-Driven Methods

Carmen Jimenez-Mesa, Yizhou Wan, Guilio Sansone, Francisco J. Martinez-Murcia, Javier Ramirez, Pietro Lio, Juan M. Gorriz, Stephen J. Price, John Suckling, Michail Mamalakis

arXiv preprint | Jul 7 2025
Brain tumor resection is a complex procedure with significant implications for patient survival and quality of life. Predictions of patient outcomes provide clinicians and patients the opportunity to select the most suitable onco-functional balance. In this study, global features derived from structural magnetic resonance imaging in a clinical dataset of 49 pre- and post-surgery patients identified potential biomarkers associated with survival outcomes. We propose a framework that integrates Explainable AI (XAI) with neuroimaging-based feature engineering for survival assessment, offering guidance for surgical decision-making, and we introduce a global explanation optimizer that refines survival-related feature attribution in deep learning models, enhancing interpretability and reliability. Our findings suggest that survival is influenced by alterations in regions associated with cognitive and sensory functions, indicating the importance of preserving areas involved in decision-making and emotional regulation during surgery to improve outcomes. The global explanation optimizer improves both the fidelity and the comprehensibility of explanations compared to state-of-the-art XAI methods, and it effectively identifies survival-related variability, underscoring its relevance in precision medicine for brain tumor treatment.
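
Fidelity of an explanation is commonly checked with deletion-style tests: remove the features an attribution ranks as most important and watch how quickly the prediction degrades. A generic sketch of that check (a standard XAI fidelity metric, not the paper's global explanation optimizer):

```python
import numpy as np

def deletion_fidelity(model_predict, x, attribution, steps=10):
    """Deletion-style fidelity check for a feature attribution.

    Features of the 1-D input x are zeroed in decreasing order of
    attributed importance; a faithful explanation should make the model's
    survival score drop quickly. model_predict maps a feature vector to a
    scalar score. Names are illustrative.
    """
    order = np.argsort(-np.abs(attribution))   # most important first
    x_masked = x.astype(float).copy()
    scores = [model_predict(x_masked)]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_masked[order[i:i + chunk]] = 0.0     # delete the next block
        scores.append(model_predict(x_masked))
    return np.array(scores)  # a steep early decline suggests high fidelity
```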