Page 31 of 78779 results

All-in-One Medical Image Restoration with Latent Diffusion-Enhanced Vector-Quantized Codebook Prior

Haowei Chen, Zhiwen Yang, Haotian Hou, Hui Zhang, Bingzheng Wei, Gang Zhou, Yan Xu

arXiv preprint · Jul 26 2025
All-in-one medical image restoration (MedIR) aims to address multiple MedIR tasks using a unified model, concurrently recovering various high-quality (HQ) medical images (e.g., MRI, CT, and PET) from low-quality (LQ) counterparts. However, all-in-one MedIR presents significant challenges due to the heterogeneity across different tasks. Each task involves distinct degradations, leading to diverse information losses in LQ images. Existing methods struggle to handle these diverse information losses associated with different tasks. To address these challenges, we propose a latent diffusion-enhanced vector-quantized codebook prior and develop DiffCode, a novel framework leveraging this prior for all-in-one MedIR. Specifically, to compensate for diverse information losses associated with different tasks, DiffCode constructs a task-adaptive codebook bank to integrate task-specific HQ prior features across tasks, capturing a comprehensive prior. Furthermore, to enhance prior retrieval from the codebook bank, DiffCode introduces a latent diffusion strategy that utilizes the diffusion model's powerful mapping capabilities to iteratively refine the latent feature distribution, estimating more accurate HQ prior features during restoration. With the help of the task-adaptive codebook bank and latent diffusion strategy, DiffCode achieves superior performance in both quantitative metrics and visual quality across three MedIR tasks: MRI super-resolution, CT denoising, and PET synthesis.
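The codebook retrieval at the heart of this design reduces to nearest-neighbour lookup in a learned code space. A minimal sketch of vector-quantized retrieval, assuming plain Euclidean matching; the `quantize` helper and the toy codebook are illustrative, not taken from the paper:

```python
import numpy as np

def quantize(features, codebook):
    """Replace each feature vector with its nearest codebook entry.

    features: (n, d) array of latent features
    codebook: (k, d) array of learned HQ prior vectors
    Returns the quantized features and the selected code indices.
    """
    # Pairwise squared Euclidean distances between features and codes
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)  # nearest code per feature
    return codebook[idx], idx

# Toy example: 2-D features, a 3-entry codebook
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
feats = np.array([[0.1, -0.1], [1.9, 2.2]])
quantized, idx = quantize(feats, codebook)
```

In DiffCode, a latent diffusion step would refine the query features before a lookup of this kind, so that degraded inputs retrieve more accurate HQ codes.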

Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting: the DECIPHER study.

Bloom B, Haimovich A, Pott J, Williams SL, Cheetham M, Langsted S, Skene I, Astin-Chamberlain R, Thomas SH

PubMed paper · Jul 25 2025
Identifying whether there is a traumatic intracranial bleed (ICB+) on head CT is critical for clinical care and research. Free-text CT reports are unstructured and therefore must undergo time-consuming manual review. Existing artificial intelligence classification schemes are not optimised for the emergency department endpoint of classifying reports as ICB+ or ICB-. We sought to assess three methods for classifying CT reports: a text classification (TC) programme, a commercial natural language processing programme (Clinithink) and a generative pretrained transformer large language model (Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting (DECIPHER)-LLM). The primary objective was to determine the diagnostic classification performance of the dichotomous categorisation of each of the three approaches; the secondary objective was to determine whether the LLM could achieve a substantial reduction in CT report review workload while maintaining 100% sensitivity. Anonymised radiology reports of head CT scans performed for trauma were manually labelled as ICB+/-. Training and validation sets were randomly created to train the TC and natural language processing models, and prompts were written for the LLM. 898 reports were manually labelled. Sensitivity and specificity (95% CI) of TC, Clinithink and DECIPHER-LLM (with the probability-of-ICB threshold set at 10%) were, respectively, 87.9% (76.7% to 95.0%) and 98.2% (96.3% to 99.3%); 75.9% (62.8% to 86.1%) and 96.2% (93.8% to 97.8%); and 100% (93.8% to 100%) and 97.4% (95.3% to 98.8%). With the DECIPHER-LLM ICB+ probability threshold of 10% used to identify CT reports requiring manual evaluation, the number of reports requiring manual classification fell by an estimated 385/449 cases (85.7% (95% CI 82.1% to 88.9%)) while maintaining 100% sensitivity. DECIPHER-LLM outperformed the other tested free-text classification methods.
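The workload-reduction result follows from a simple triage rule: only reports whose predicted ICB+ probability reaches the threshold go to manual review. A hedged sketch of that accounting; the `triage` helper and the toy numbers are illustrative, not the study's code:

```python
import numpy as np

def triage(probs, labels, threshold=0.10):
    """Flag reports at/above the ICB+ probability threshold for manual review.

    probs:  model-estimated probability of ICB+ per report
    labels: ground-truth ICB status (1 = ICB+, 0 = ICB-)
    Returns the sensitivity of the triage rule and the fraction of reports
    that no longer need manual classification.
    """
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    flagged = probs >= threshold
    sensitivity = flagged[labels == 1].mean()   # ICB+ cases sent to review
    workload_reduction = 1.0 - flagged.mean()   # reports auto-cleared
    return sensitivity, workload_reduction

# Toy example: 6 reports, two true bleeds with above-threshold probabilities
sens, saved = triage([0.02, 0.95, 0.01, 0.40, 0.05, 0.08],
                     [0,    1,    0,    1,    0,    0])
```

At a 10% threshold the study retained 100% sensitivity while auto-clearing an estimated 85.7% of reports; the sketch only makes that bookkeeping explicit.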

Deep learning-based image classification for integrating pathology and radiology in AI-assisted medical imaging.

Lu C, Zhang J, Liu R

PubMed paper · Jul 25 2025
The integration of pathology and radiology in medical imaging has emerged as a critical need for advancing diagnostic accuracy and improving clinical workflows. Current AI-driven approaches for medical image analysis, despite significant progress, face several challenges, including handling multi-modal imaging, imbalanced datasets, and the lack of robust interpretability and uncertainty quantification. These limitations often hinder the deployment of AI systems in real-world clinical settings, where reliability and adaptability are essential. To address these issues, this study introduces a novel framework, the Domain-Informed Adaptive Network (DIANet), combined with an Adaptive Clinical Workflow Integration (ACWI) strategy. DIANet leverages multi-scale feature extraction, domain-specific priors, and Bayesian uncertainty modeling to enhance interpretability and robustness. The proposed model is tailored for multi-modal medical imaging tasks, integrating adaptive learning mechanisms to mitigate domain shifts and imbalanced datasets. Complementing the model, the ACWI strategy ensures seamless deployment through explainable AI (XAI) techniques, uncertainty-aware decision support, and modular workflow integration compatible with clinical systems like PACS. Experimental results demonstrate significant improvements in diagnostic accuracy, segmentation precision, and reconstruction fidelity across diverse imaging modalities, validating the potential of this framework to bridge the gap between AI innovation and clinical utility.
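DIANet's internals are not spelled out here, but the Bayesian uncertainty modelling it cites is commonly realised by drawing several stochastic forward passes (e.g. MC dropout) and scoring the entropy of the mean prediction. A minimal sketch under that assumption; `predictive_entropy` is an illustrative name, not part of the paper:

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Uncertainty from Monte Carlo predictive samples (e.g. MC dropout).

    prob_samples: (T, C) array of T sampled class-probability vectors.
    Returns the entropy of the mean prediction in nats; higher = less certain.
    """
    mean_p = np.asarray(prob_samples).mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-12)).sum()

# Agreeing samples yield low entropy; disagreeing samples yield high entropy
confident = predictive_entropy([[0.98, 0.02]] * 10)
uncertain = predictive_entropy([[0.9, 0.1], [0.1, 0.9]] * 5)
```

An uncertainty-aware decision support layer, as in the ACWI strategy, would route high-entropy cases to a human reader instead of auto-reporting them.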

Privacy-Preserving Generation of Structured Lymphoma Progression Reports from Cross-sectional Imaging: A Comparative Analysis of Llama 3.3 and Llama 4.

Prucker P, Bressem KK, Kim SH, Weller D, Kader A, Dorfner FJ, Ziegelmayer S, Graf MM, Lemke T, Gassert F, Can E, Meddeb A, Truhn D, Hadamitzky M, Makowski MR, Adams LC, Busch F

PubMed paper · Jul 25 2025
Efficient processing of radiology reports for monitoring disease progression is crucial in oncology. Although large language models (LLMs) show promise in extracting structured information from medical reports, privacy concerns limit their clinical implementation. This study evaluates the feasibility and accuracy of two of the most recent Llama models for generating structured lymphoma progression reports from cross-sectional imaging data in a privacy-preserving, real-world clinical setting. This single-center, retrospective study included adult lymphoma patients who underwent cross-sectional imaging and treatment between July 2023 and July 2024. We established a chain-of-thought prompting strategy to leverage the locally deployed Llama-3.3-70B-Instruct and Llama-4-Scout-17B-16E-Instruct models to generate lymphoma disease progression reports across three iterations. Two radiologists independently scored nodal and extranodal involvement, as well as Lugano staging and treatment response classifications. For each LLM and task, we calculated the F1 score, accuracy, recall, precision, and specificity per label, as well as the case-weighted average with 95% confidence intervals (CIs). Both LLMs correctly implemented the template structure for all 65 patients included in this study. Llama-4-Scout-17B-16E-Instruct demonstrated significantly greater accuracy in extracting nodal and extranodal involvement information (nodal: 0.99 [95% CI = 0.98-0.99] vs. 0.97 [95% CI = 0.95-0.96], p < 0.001; extranodal: 0.99 [95% CI = 0.99-1.00] vs. 0.99 [95% CI = 0.98-0.99], p = 0.013). This difference was more pronounced when predicting Lugano stage and treatment response (stage: 0.85 [95% CI = 0.79-0.89] vs. 0.60 [95% CI = 0.53-0.67], p < 0.001; treatment response: 0.88 [95% CI = 0.83-0.92] vs. 0.65 [95% CI = 0.58-0.71], p < 0.001). Neither model produced hallucinations of newly involved nodal or extranodal sites. The highest relative error rates were found when interpreting the level of disease after treatment. In conclusion, privacy-preserving LLMs can effectively extract clinical information from lymphoma imaging reports. While they excel at data extraction, they are limited in their ability to generate new clinical inferences from the extracted information. Our findings suggest their potential utility in streamlining documentation and highlight areas requiring optimization before clinical implementation.
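The per-label scores reported above (precision, recall, F1) can be computed directly from binary involvement matrices. A small sketch, assuming one row per case and one column per nodal/extranodal label; the helper name and toy data are illustrative:

```python
import numpy as np

def per_label_f1(y_true, y_pred):
    """Per-label precision, recall, and F1 for binary extraction labels.

    y_true, y_pred: (cases, labels) binary arrays (1 = site involved).
    Returns (precision, recall, f1) arrays, one entry per label.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0)
    fp = ((y_true == 0) & (y_pred == 1)).sum(axis=0)
    fn = ((y_true == 1) & (y_pred == 0)).sum(axis=0)
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

# Toy example: 3 cases, 2 labels; the model misses one involvement in case 2
prec, rec, f1 = per_label_f1([[1, 0], [1, 1], [0, 1]],
                             [[1, 0], [0, 1], [0, 1]])
```

A case-weighted average, as used in the study, would then weight each label's score by its case count before averaging.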

Could a New Method of Acromiohumeral Distance Measurement Emerge? Artificial Intelligence vs. Physician.

Dede BT, Çakar İ, Oğuz M, Alyanak B, Bağcıer F

PubMed paper · Jul 25 2025
The aim of this study was to evaluate the reliability of ChatGPT-4 measurements of acromiohumeral distance (AHD), a popular assessment in patients with shoulder pain. In this retrospective study, 71 registered shoulder magnetic resonance imaging (MRI) scans were included. AHD measurements were performed on a coronal oblique T1 sequence with a clear view of the acromion and humerus. Measurements were performed twice by an experienced radiologist at a 3-day interval and twice by ChatGPT-4 at a 3-day interval in different sessions. The first, second, and mean values of AHD measured by the physician were 7.6 ± 1.7, 7.5 ± 1.6, and 7.6 ± 1.7, respectively. The first, second, and mean values measured by ChatGPT-4 were 6.7 ± 0.8, 7.3 ± 1.1, and 7.1 ± 0.8, respectively. There was a significant difference between the physician and ChatGPT-4 for the first and mean measurements (p < 0.0001 and p = 0.009, respectively), but not for the second measurements (p = 0.220). Intrarater reliability for the physician was excellent (ICC = 0.99), whereas intrarater reliability for ChatGPT-4 was poor (ICC = 0.41), as was interrater reliability (ICC = 0.45). In conclusion, this study demonstrated that the reliability of ChatGPT-4 in AHD measurements is inferior to that of an experienced radiologist. This study may inform the possible future contribution of large language models to medical science.
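The reliability figures quoted here are intraclass correlation coefficients. One common variant for repeated measurements by fixed raters is ICC(3,1), sketched below from the standard two-way ANOVA decomposition; the study does not state which ICC form was used, so treating it as ICC(3,1) is an assumption:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    ratings: (subjects, raters) matrix, e.g. one AHD value per scan and rater.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between raters
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# A constant offset between raters still yields perfect consistency;
# unstructured disagreement drags the coefficient down.
perfect = icc_3_1([[7.6, 8.1], [7.5, 8.0], [6.9, 7.4], [8.2, 8.7]])
noisy = icc_3_1([[7.6, 6.7], [7.5, 7.3], [6.9, 7.8], [8.2, 7.1]])
```

The gap between the physician's ICC of 0.99 and ChatGPT-4's 0.41 corresponds to exactly this kind of unstructured run-to-run disagreement.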

Exploring AI-Based System Design for Pixel-Level Protected Health Information Detection in Medical Images.

Truong T, Baltruschat IM, Klemens M, Werner G, Lenga M

PubMed paper · Jul 25 2025
De-identification of medical images is a critical step to ensure privacy during data sharing in research and clinical settings. The initial step in this process involves detecting Protected Health Information (PHI), which can be found in image metadata or imprinted within image pixels. Despite the importance of such systems, there has been limited evaluation of existing AI-based solutions, creating barriers to the development of reliable and robust tools. In this study, we present an AI-based pipeline for PHI detection, comprising three key modules: text detection, text extraction, and text analysis. We benchmark three models (YOLOv11, EasyOCR, and GPT-4o) across different setups corresponding to these modules, evaluating their performance on two different datasets encompassing multiple imaging modalities and PHI categories. Our findings indicate that the optimal setup uses dedicated vision and language models for each module, achieving a good balance of performance, latency, and the cost associated with large language model (LLM) usage. Additionally, we show that LLMs not only identify PHI content but also enhance OCR tasks and enable an end-to-end PHI detection pipeline, with promising outcomes in our analysis.
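Of the three modules, the text-analysis stage is the easiest to sketch without a model: pattern rules applied to strings already extracted by OCR. A deliberately minimal, rule-based stand-in; the real pipeline uses learned models, and `PHI_PATTERNS` with its two categories is purely illustrative:

```python
import re

# Minimal sketch of the text-analysis stage: flag extracted strings that
# look like PHI. The patterns below (dates and long ID-like digit runs)
# are examples only, not a complete PHI taxonomy.
PHI_PATTERNS = {
    "date": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "id_number": re.compile(r"\b\d{6,}\b"),
}

def analyze(texts):
    """Return (text, category) pairs for strings matching a PHI pattern."""
    hits = []
    for t in texts:
        for name, pat in PHI_PATTERNS.items():
            if pat.search(t):
                hits.append((t, name))
    return hits

# Burned-in pixel text as it might come out of the OCR stage
found = analyze(["DOE, JOHN 12/03/1965", "KVP 120", "MRN 00482917"])
```

A production system would add patterns for names, addresses, and institutions, or hand the extracted strings to an LLM, which is the trade-off the study benchmarks.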

Carotid and femoral bifurcation plaques detected by ultrasound as predictors of cardiovascular events.

Blinc A, Nicolaides AN, Poredoš P, Paraskevas KI, Heiss C, Müller O, Rammos C, Stanek A, Jug B

PubMed paper · Jul 25 2025
Risk factor-based algorithms give a good estimate of cardiovascular (CV) risk at the population level but are often inaccurate at the individual level. Detecting preclinical atherosclerotic plaques in the carotid and common femoral arterial bifurcations by ultrasound is a simple, non-invasive way of detecting atherosclerosis in an individual and thus more accurately estimating his or her risk of future CV events. The presence of plaques in these bifurcations is independently associated with increased risk of CV death and myocardial infarction, even after adjusting for traditional risk factors, while ultrasonographic characteristics of vulnerable plaque are mostly associated with increased risk of ipsilateral ischaemic stroke. The predictive value of carotid and femoral plaques for CV events increases in proportion to plaque burden and especially with plaque progression over time. Assessing the burden of carotid and/or common femoral bifurcation plaques enables reclassification of a significant number of individuals deemed low risk by risk factor-based algorithms into intermediate or high CV risk, and of intermediate-risk individuals into low or high CV risk. Ongoing multimodality imaging studies, supplemented by clinical and genetic data and aided by machine learning/artificial intelligence analysis, are expected to advance our understanding of atherosclerosis progression from the asymptomatic into the symptomatic phase and to personalize prevention.

Current evidence of low-dose CT screening benefit.

Yip R, Mulshine JL, Oudkerk M, Field J, Silva M, Yankelevitz DF, Henschke CI

PubMed paper · Jul 25 2025
Lung cancer is the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Low-dose computed tomography (LDCT) screening has emerged as a powerful tool for early detection, enabling diagnosis at curable stages and reducing lung cancer mortality. Despite strong evidence, LDCT screening uptake remains suboptimal globally. This review synthesizes current evidence supporting LDCT screening, highlights ongoing global implementation efforts, and discusses key insights from the 1st AGILE conference. Lung cancer screening is gaining global momentum, with many countries advancing plans for national LDCT programs. Expanding eligibility through risk-based models and targeting high-risk never- and light-smokers are emerging strategies to improve efficiency and equity. Technological advancements, including AI-assisted interpretation and image-based biomarkers, are addressing concerns around false positives, overdiagnosis, and workforce burden. Integrating cardiac and smoking-related disease assessment within LDCT screening offers added preventive health benefits. To maximize global impact, screening strategies must be tailored to local health systems and populations. Efforts should focus on increasing awareness, standardizing protocols, optimizing screening intervals, and strengthening multidisciplinary care pathways. International collaboration and shared infrastructure can accelerate progress and ensure sustainability. LDCT screening represents a cost-effective opportunity to reduce lung cancer mortality and premature deaths.

Counterfactual Explanations in Medical Imaging: Exploring SPN-Guided Latent Space Manipulation

Julia Siekiera, Stefan Kramer

arXiv preprint · Jul 25 2025
Artificial intelligence is increasingly leveraged across various domains to automate decision-making processes that significantly impact human lives. In medical image analysis, deep learning models have demonstrated remarkable performance. However, their inherent complexity makes them black-box systems, raising concerns about reliability and interpretability. Counterfactual explanations provide comprehensible insights into decision processes by presenting hypothetical "what-if" scenarios that alter model classifications. By examining input alterations, counterfactual explanations reveal the patterns that influence the decision-making process. Despite their potential, generating plausible counterfactuals that adhere to similarity constraints while providing human-interpretable explanations remains a challenge. In this paper, we investigate this challenge through a model-specific optimization approach. While deep generative models such as variational autoencoders (VAEs) exhibit significant generative power, probabilistic models like sum-product networks (SPNs) efficiently represent complex joint probability distributions. By modeling the likelihood of a semi-supervised VAE's latent space with an SPN, we leverage its dual role as both a latent space descriptor and a classifier for a given discrimination task. This formulation enables the optimization of latent space counterfactuals that are both close to the original data distribution and aligned with the target class distribution. We conduct an experimental evaluation on the CheXpert dataset. To evaluate the effectiveness of the SPN integration, our SPN-guided latent space manipulation is compared against a neural network baseline. Additionally, the trade-off between latent variable regularization and counterfactual quality is analyzed.
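The latent-space optimization described here can be illustrated with a far simpler stand-in: gradient descent on a latent code that trades off flipping a classifier (here, plain logistic) against staying close to the original code. The SPN likelihood term is replaced by a quadratic proximity penalty, so everything below is an assumption-laden sketch, not the authors' method:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def latent_counterfactual(z0, w, b, target=1.0, lam=0.1, lr=0.5, steps=200):
    """Gradient descent in latent space toward the target class, while a
    proximity penalty keeps the counterfactual near the original code.

    Minimises BCE(sigmoid(w.z + b), target) + lam * ||z - z0||^2 by
    hand-derived gradients: (p - target) * w + 2 * lam * (z - z0).
    """
    z = z0.copy()
    for _ in range(steps):
        p = sigmoid(w @ z + b)
        z -= lr * ((p - target) * w + 2 * lam * (z - z0))
    return z

# Toy 4-D latent space with a fixed linear classifier
rng = np.random.default_rng(0)
z0 = rng.normal(size=4)
w = np.array([1.0, -1.0, 0.5, 0.0])
b = 0.0
z_cf = latent_counterfactual(z0, w, b)
```

In the paper's formulation, the proximity term would instead be the SPN-modelled likelihood of the VAE latent space, pulling the counterfactual toward the data distribution rather than merely toward `z0`.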

DeepJIVE: Learning Joint and Individual Variation Explained from Multimodal Data Using Deep Learning

Matthew Drexler, Benjamin Risk, James J Lah, Suprateek Kundu, Deqiang Qiu

arXiv preprint · Jul 25 2025
Conventional multimodal data integration methods provide a comprehensive assessment of the shared or unique structure within each individual data type but suffer from several limitations such as the inability to handle high-dimensional data and identify nonlinear structures. In this paper, we introduce DeepJIVE, a deep-learning approach to performing Joint and Individual Variance Explained (JIVE). We perform mathematical derivation and experimental validations using both synthetic and real-world 1D, 2D, and 3D datasets. Different strategies of achieving the identity and orthogonality constraints for DeepJIVE were explored, resulting in three viable loss functions. We found that DeepJIVE can successfully uncover joint and individual variations of multimodal datasets. Our application of DeepJIVE to the Alzheimer's Disease Neuroimaging Initiative (ADNI) also identified biologically plausible covariation patterns between the amyloid positron emission tomography (PET) and magnetic resonance (MR) images. In conclusion, the proposed DeepJIVE can be a useful tool for multimodal data analysis.
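One ingredient behind the identity and orthogonality constraints mentioned above can be written as a Frobenius-norm penalty on the cross-product of joint and individual score matrices. A minimal sketch of that single penalty term; the helper name and toy matrices are illustrative, and this is not the full DeepJIVE loss:

```python
import numpy as np

def orthogonality_penalty(joint, individual):
    """Frobenius-norm penalty encouraging joint and individual latent
    scores to occupy orthogonal subspaces, a JIVE-style constraint.

    joint, individual: (samples, dims) score matrices.
    """
    cross = joint.T @ individual
    return (cross ** 2).sum()

# Orthogonal score directions give zero penalty; overlap does not.
j = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
i_orth = np.array([[0.0], [0.0], [1.0]])
i_overlap = np.array([[1.0], [0.0], [0.0]])
```

Added to a reconstruction loss, a term like this pushes the network to separate variation shared across modalities (e.g. PET and MR) from variation unique to each.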
