Page 10 of 58574 results

Deep Learning-Based Acceleration in MRI: Current Landscape and Clinical Applications in Neuroradiology.

Rai P, Mark IT, Soni N, Diehn F, Messina SA, Benson JC, Madhavan A, Agarwal A, Bathla G

PubMed · Jul 28 2025
Magnetic resonance imaging (MRI) is a cornerstone of neuroimaging, providing unparalleled soft-tissue contrast. However, its clinical utility is often limited by long acquisition times, which contribute to motion artifacts, patient discomfort, and increased costs. Although traditional acceleration techniques, such as parallel imaging and compressed sensing, help reduce scan times, they may reduce signal-to-noise ratio (SNR) and introduce artifacts. The advent of deep learning-based image reconstruction (DLBIR) may help reduce scan times further while preserving or improving image quality. Various DLBIR techniques are currently available from different vendors, with claimed scan-time reductions of up to 85% while maintaining or enhancing lesion conspicuity, noise suppression, and diagnostic accuracy. The evolution of DLBIR from 2D to 3D acquisitions, coupled with advancements in self-supervised learning, further expands its capabilities and clinical applicability. Despite these advancements, challenges persist in generalizability across scanners and imaging conditions, susceptibility to artifacts, and potential alterations in pathology representation. Additionally, limited data on the training, underlying algorithms, and clinical validation of these vendor-specific closed-source algorithms pose barriers to end-user trust and widespread adoption. This review explores the current applications of DLBIR in neuroimaging, vendor-driven implementations, and emerging trends that may impact accelerated MRI acquisitions. ABBREVIATIONS: PI = parallel imaging; CS = compressed sensing; DLBIR = deep learning-based image reconstruction; AI = artificial intelligence; DR = Deep Resolve; ACS = artificial intelligence-assisted compressed sensing.
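
The noise penalty that acceleration trades against scan time follows the classic parallel-imaging relation SNR_acc = SNR_full / (g·√R), where R is the acceleration factor and g the geometry factor; DLBIR methods aim to recover the SNR lost to this penalty. A minimal illustrative sketch (not from the review):

```python
import math

def accelerated_snr(snr_full, r, g=1.0):
    """SNR after undersampling by acceleration factor r with geometry factor g.

    Classic parallel-imaging relation: SNR_acc = SNR_full / (g * sqrt(r)).
    """
    return snr_full / (g * math.sqrt(r))

# Four-fold acceleration at an ideal g-factor of 1 halves the SNR.
print(accelerated_snr(100.0, 4))  # 50.0
```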

Harnessing deep learning to optimize induction chemotherapy choices in nasopharyngeal carcinoma.

Chen ZH, Han X, Lin L, Lin GY, Li B, Kou J, Wu CF, Ai XL, Zhou GQ, Gao MY, Lu LJ, Sun Y

PubMed · Jul 28 2025
Currently, there is no guidance for the personalized choice of induction chemotherapy (IC) regimen (TPF: docetaxel + cisplatin + 5-FU; or GP: gemcitabine + cisplatin) for locoregionally advanced nasopharyngeal carcinoma (LA-NPC). This study aimed to develop deep learning models for IC response prediction in LA-NPC. For 1438 LA-NPC patients, pretreatment magnetic resonance imaging (MRI) scans and complete biological response (cBR) information after 3 cycles of IC were collected from two centers. All models were trained in 969 patients (TPF: 548, GP: 421), internally validated in 243 patients (TPF: 138, GP: 105), and then tested on an internal dataset of 226 patients (TPF: 125, GP: 101). MRI models for the TPF and GP cohorts were constructed to predict cBR from MRI using radiomics and a graph convolutional network (GCN). The MRI-Clinical models were built on both MRI and clinical parameters. The MRI models and MRI-Clinical models achieved high discriminative accuracy in both the TPF cohort (MRI model: AUC, 0.835; MRI-Clinical model: AUC, 0.838) and the GP cohort (MRI model: AUC, 0.764; MRI-Clinical model: AUC, 0.777). The MRI-Clinical models also showed good performance in risk stratification. The survival curves revealed that 3-year disease-free survival in the high-sensitivity group was better than in the low-sensitivity group in both the TPF and GP cohorts. An online tool guiding the personalized choice of IC regimen was developed based on the MRI-Clinical models. Our radiomics- and GCN-based IC response prediction tool has robust predictive performance and may provide guidance for personalized treatment.
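
The discrimination figures reported above are AUCs, which can be computed from predicted scores as the probability that a random positive outranks a random negative. A toy, self-contained sketch of the rank-based AUC (illustrative only, not the authors' code):

```python
def auc(labels, scores):
    """Rank-based AUC: P(random positive scores above random negative), ties = 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of responders (1) from non-responders (0) gives AUC = 1.0.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
```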

The evolving role of multimodal imaging, artificial intelligence and radiomics in the radiologic assessment of immune related adverse events.

Das JP, Ma HY, DeJong D, Prendergast C, Baniasadi A, Braumuller B, Giarratana A, Khonji S, Paily J, Shobeiri P, Yeh R, Dercle L, Capaccione KM

PubMed · Jul 28 2025
Immunotherapy, in particular checkpoint blockade, has revolutionized the treatment of many advanced cancers. Imaging plays a critical role in assessing both treatment response and the development of immune toxicities. Both conventional and molecular imaging techniques can be used to evaluate multisystemic immune-related adverse events (irAEs), including thoracic, abdominal, and neurologic irAEs. As artificial intelligence (AI) proliferates in medical imaging, radiologic assessment of irAEs will become more efficient, improving the diagnosis, prognosis, and management of patients affected by immune-related toxicities. This review addresses advancements in medical imaging, including the potential future role of radiomics in evaluating irAEs, which may facilitate clinical decision-making and improvements in patient care.

Contextual structured annotations on PACS: a futuristic vision for reporting routine oncologic imaging studies and its potential to transform clinical work and research.

Wong VK, Wang MX, Bethi E, Nagarakanti S, Morani AC, Marcal LP, Rauch GM, Brown JJ, Yedururi S

PubMed · Jul 26 2025
Radiologists currently have very limited and time-consuming options for annotating findings on images, mostly restricted to arrows, calipers, and lines on most PACS systems. We propose a framework that places encoded, transferable, highly contextual structured text annotations directly on PACS images, indicating the type of lesion, level of suspicion, location, lesion measurement, and TNM status for malignant lesions, along with automated integration of this information into the radiology report. This approach offers a one-stop solution to generate radiology reports that are easily understood by other radiologists, patient care providers, patients, and machines, while reducing the effort needed to dictate a detailed radiology report and minimizing speech recognition errors. It also provides a framework for the automated generation of large-volume, high-quality annotated datasets for machine learning algorithms from the daily work of radiologists. Enabling voice dictation of these contextual annotations directly into PACS, similar to voice-enabled Google search, would further enhance the user experience. Wider adoption of contextualized structured annotations in the future could facilitate studies of the temporal evolution of tumor lesions across multiple lines of treatment and the early detection of asynchronous response or areas of treatment failure. We present a futuristic vision and a solution with the potential to transform clinical work and research in oncologic imaging.
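
As an illustration of what such an encoded, transferable annotation might look like, here is a hypothetical sketch: every field name and value below is invented for illustration and is not part of the authors' proposal.

```python
import json

# Hypothetical structured annotation; field names are illustrative only.
annotation = {
    "lesion_type": "mass",
    "suspicion": "high",                       # level of suspicion
    "location": "segment VII",                 # anatomic location (example value)
    "measurement_mm": [21, 18],                # lesion measurement, long x short axis
    "tnm": {"T": "T2", "N": "N0", "M": "M0"},  # TNM status for malignant lesions
}

# Encoded as compact text, it can travel with the image and be parsed back
# by machines for automated report integration.
encoded = json.dumps(annotation)
decoded = json.loads(encoded)
print(decoded["lesion_type"], decoded["tnm"]["T"])  # mass T2
```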

Leveraging Fine-Tuned Large Language Models for Interpretable Pancreatic Cystic Lesion Feature Extraction and Risk Categorization

Ebrahim Rasromani, Stella K. Kang, Yanqi Xu, Beisong Liu, Garvit Luhadia, Wan Fung Chui, Felicia L. Pasadyn, Yu Chih Hung, Julie Y. An, Edwin Mathieu, Zehui Gu, Carlos Fernandez-Granda, Ammar A. Javed, Greg D. Sacks, Tamas Gonda, Chenchan Huang, Yiqiu Shen

arXiv preprint · Jul 26 2025
Background: Manual extraction of pancreatic cystic lesion (PCL) features from radiology reports is labor-intensive, limiting large-scale studies needed to advance PCL research. Purpose: To develop and evaluate large language models (LLMs) that automatically extract PCL features from MRI/CT reports and assign risk categories based on guidelines. Materials and Methods: We curated a training dataset of 6,000 abdominal MRI/CT reports (2005-2024) from 5,134 patients that described PCLs. Labels were generated by GPT-4o using chain-of-thought (CoT) prompting to extract PCL and main pancreatic duct features. Two open-source LLMs were fine-tuned using QLoRA on GPT-4o-generated CoT data. Features were mapped to risk categories per institutional guideline based on the 2017 ACR White Paper. Evaluation was performed on 285 held-out human-annotated reports. Model outputs for 100 cases were independently reviewed by three radiologists. Feature extraction was evaluated using exact match accuracy, risk categorization with macro-averaged F1 score, and radiologist-model agreement with Fleiss' Kappa. Results: CoT fine-tuning improved feature extraction accuracy for LLaMA (80% to 97%) and DeepSeek (79% to 98%), matching GPT-4o (97%). Risk categorization F1 scores also improved (LLaMA: 0.95; DeepSeek: 0.94), closely matching GPT-4o (0.97), with no statistically significant differences. Radiologist inter-reader agreement was high (Fleiss' Kappa = 0.888) and showed no statistically significant difference with the addition of DeepSeek-FT-CoT (Fleiss' Kappa = 0.893) or GPT-CoT (Fleiss' Kappa = 0.897), indicating that both models achieved agreement levels on par with radiologists. Conclusion: Fine-tuned open-source LLMs with CoT supervision enable accurate, interpretable, and efficient phenotyping for large-scale PCL research, achieving performance comparable to GPT-4o.
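
The two headline metrics above, exact-match accuracy for feature extraction and macro-averaged F1 for risk categorization, can be sketched in a few lines of generic evaluation code (illustrative, not the study's pipeline):

```python
def exact_match_accuracy(preds, golds):
    """Fraction of predictions that match the gold label exactly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def macro_f1(preds, golds, labels):
    """F1 per class, averaged with equal weight per class."""
    f1s = []
    for c in labels:
        tp = sum(p == c and g == c for p, g in zip(preds, golds))
        fp = sum(p == c and g != c for p, g in zip(preds, golds))
        fn = sum(p != c and g == c for p, g in zip(preds, golds))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

print(exact_match_accuracy(["low", "high"], ["low", "high"]))  # 1.0
```

Macro averaging matters here because risk categories are imbalanced: each category contributes equally regardless of its frequency.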

All-in-One Medical Image Restoration with Latent Diffusion-Enhanced Vector-Quantized Codebook Prior

Haowei Chen, Zhiwen Yang, Haotian Hou, Hui Zhang, Bingzheng Wei, Gang Zhou, Yan Xu

arXiv preprint · Jul 26 2025
All-in-one medical image restoration (MedIR) aims to address multiple MedIR tasks using a unified model, concurrently recovering various high-quality (HQ) medical images (e.g., MRI, CT, and PET) from low-quality (LQ) counterparts. However, all-in-one MedIR presents significant challenges due to the heterogeneity across different tasks. Each task involves distinct degradations, leading to diverse information losses in LQ images. Existing methods struggle to handle these diverse information losses associated with different tasks. To address these challenges, we propose a latent diffusion-enhanced vector-quantized codebook prior and develop DiffCode, a novel framework leveraging this prior for all-in-one MedIR. Specifically, to compensate for diverse information losses associated with different tasks, DiffCode constructs a task-adaptive codebook bank to integrate task-specific HQ prior features across tasks, capturing a comprehensive prior. Furthermore, to enhance prior retrieval from the codebook bank, DiffCode introduces a latent diffusion strategy that utilizes the diffusion model's powerful mapping capabilities to iteratively refine the latent feature distribution, estimating more accurate HQ prior features during restoration. With the help of the task-adaptive codebook bank and latent diffusion strategy, DiffCode achieves superior performance in both quantitative metrics and visual quality across three MedIR tasks: MRI super-resolution, CT denoising, and PET synthesis.
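
At its core, retrieval from a vector-quantized codebook is a nearest-neighbor lookup: each degraded feature is replaced by its closest high-quality prior entry. A minimal sketch of that lookup alone (the task-adaptive bank and latent diffusion refinement described above are omitted):

```python
def quantize(feature, codebook):
    """Return the index and entry of the nearest codebook vector (squared L2)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: d2(feature, codebook[i]))
    return idx, codebook[idx]

# A tiny 3-entry codebook of 2-D "HQ prior features" (made-up values).
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
idx, entry = quantize([0.9, 1.2], codebook)
print(idx, entry)  # 1 [1.0, 1.0]
```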

Current evidence of low-dose CT screening benefit.

Yip R, Mulshine JL, Oudkerk M, Field J, Silva M, Yankelevitz DF, Henschke CI

PubMed · Jul 25 2025
Lung cancer is the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Low-dose computed tomography (LDCT) screening has emerged as a powerful tool for early detection, enabling diagnosis at curable stages and reducing lung cancer mortality. Despite strong evidence, LDCT screening uptake remains suboptimal globally. This review synthesizes current evidence supporting LDCT screening, highlights ongoing global implementation efforts, and discusses key insights from the 1st AGILE conference. Lung cancer screening is gaining global momentum, with many countries advancing plans for national LDCT programs. Expanding eligibility through risk-based models and targeting high-risk never- and light-smokers are emerging strategies to improve efficiency and equity. Technological advancements, including AI-assisted interpretation and image-based biomarkers, are addressing concerns around false positives, overdiagnosis, and workforce burden. Integrating cardiac and smoking-related disease assessment within LDCT screening offers added preventive health benefits. To maximize global impact, screening strategies must be tailored to local health systems and populations. Efforts should focus on increasing awareness, standardizing protocols, optimizing screening intervals, and strengthening multidisciplinary care pathways. International collaboration and shared infrastructure can accelerate progress and ensure sustainability. LDCT screening represents a cost-effective opportunity to reduce lung cancer mortality and premature deaths.

Carotid and femoral bifurcation plaques detected by ultrasound as predictors of cardiovascular events.

Blinc A, Nicolaides AN, Poredoš P, Paraskevas KI, Heiss C, Müller O, Rammos C, Stanek A, Jug B

PubMed · Jul 25 2025
Risk factor-based algorithms give a good estimate of cardiovascular (CV) risk at the population level but are often inaccurate at the individual level. Detecting preclinical atherosclerotic plaques in the carotid and common femoral arterial bifurcations by ultrasound is a simple, non-invasive way of detecting atherosclerosis in the individual and thus more accurately estimating his/her risk of future CV events. The presence of plaques in these bifurcations is independently associated with an increased risk of CV death and myocardial infarction, even after adjusting for traditional risk factors, while ultrasonographic characteristics of vulnerable plaque are mostly associated with an increased risk of ipsilateral ischaemic stroke. The predictive value of carotid and femoral plaques for CV events increases in proportion to plaque burden, and especially with plaque progression over time. Assessing the burden of carotid and/or common femoral bifurcation plaques enables reclassification of a significant number of individuals deemed low risk by risk factor-based algorithms into intermediate or high CV risk, and of intermediate-risk individuals into low or high CV risk. Ongoing multimodality imaging studies, supplemented by clinical and genetic data and aided by machine learning/artificial intelligence analysis, are expected to advance our understanding of atherosclerosis progression from the asymptomatic to the symptomatic phase and to personalize prevention.

Counterfactual Explanations in Medical Imaging: Exploring SPN-Guided Latent Space Manipulation

Julia Siekiera, Stefan Kramer

arXiv preprint · Jul 25 2025
Artificial intelligence is increasingly leveraged across various domains to automate decision-making processes that significantly impact human lives. In medical image analysis, deep learning models have demonstrated remarkable performance. However, their inherent complexity makes them black-box systems, raising concerns about reliability and interpretability. Counterfactual explanations provide comprehensible insights into decision processes by presenting hypothetical "what-if" scenarios that alter model classifications. By examining input alterations, counterfactual explanations reveal patterns that influence the decision-making process. Despite their potential, generating plausible counterfactuals that adhere to similarity constraints while providing human-interpretable explanations remains a challenge. In this paper, we investigate this challenge through a model-specific optimization approach. While deep generative models such as variational autoencoders (VAEs) exhibit significant generative power, probabilistic models like sum-product networks (SPNs) efficiently represent complex joint probability distributions. By modeling the likelihood of a semi-supervised VAE's latent space with an SPN, we leverage its dual role as both a latent space descriptor and a classifier for a given discrimination task. This formulation enables the optimization of latent space counterfactuals that are both close to the original data distribution and aligned with the target class distribution. We conduct an experimental evaluation on the CheXpert dataset. To evaluate the effectiveness of the SPN integration, our SPN-guided latent space manipulation is compared against a neural network baseline. Additionally, the trade-off between latent variable regularization and counterfactual quality is analyzed.
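
The core optimization, finding a latent point that flips the classifier while a proximity penalty keeps it near the original, can be sketched with a toy linear classifier standing in for the VAE/SPN machinery (every value below is an illustrative assumption, not the paper's method):

```python
import math

# Toy stand-in: a fixed linear classifier over a 2-D "latent" z. We seek a
# counterfactual z' crossing the decision boundary while the proximity
# penalty lam * ||z' - z||^2 keeps it close to the original point.
w, b = [1.0, -1.0], 0.0

def score(z):
    """Sigmoid probability of class 1 under the toy classifier."""
    return 1.0 / (1.0 + math.exp(-(w[0] * z[0] + w[1] * z[1] + b)))

def counterfactual(z, target=1.0, lam=0.1, lr=0.5, steps=200):
    """Gradient descent on cross-entropy toward `target` plus proximity penalty."""
    zc = list(z)
    for _ in range(steps):
        p = score(zc)
        for i in range(len(zc)):
            grad = (p - target) * w[i] + 2.0 * lam * (zc[i] - z[i])
            zc[i] -= lr * grad
    return zc

z0 = [-1.0, 1.0]            # starts on the "class 0" side of the boundary
zc = counterfactual(z0)
print(score(z0) < 0.5, score(zc) > 0.5)  # True True
```

The `lam` parameter plays the role of the similarity constraint discussed above: larger values yield counterfactuals closer to the original but less confidently classified as the target.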

Reconstruct or Generate: Exploring the Spectrum of Generative Modeling for Cardiac MRI

Niklas Bubeck, Yundi Zhang, Suprosanna Shit, Daniel Rueckert, Jiazhen Pan

arXiv preprint · Jul 25 2025
In medical imaging, generative models are increasingly relied upon for two distinct but equally critical tasks: reconstruction, where the goal is to restore medical images (usually inverse problems such as inpainting or super-resolution), and generation, where synthetic data is created to augment datasets or carry out counterfactual analysis. Despite shared architectures and learning frameworks, they prioritize different goals: generation seeks high perceptual quality and diversity, while reconstruction focuses on data fidelity and faithfulness. In this work, we introduce a "generative model zoo" and systematically analyze how modern latent diffusion models and autoregressive models navigate the reconstruction-generation spectrum. We benchmark a suite of generative models across representative cardiac imaging tasks, focusing on image inpainting with varying masking ratios and sampling strategies, as well as unconditional image generation. Our findings show that diffusion models offer superior perceptual quality for unconditional generation but tend to hallucinate as masking ratios increase, whereas autoregressive models maintain stable perceptual performance across masking levels, albeit with generally lower fidelity.
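
The inpainting benchmark above varies the masking ratio and scores fidelity against the ground truth; a minimal sketch of constructing such masked inputs on a toy 1-D "image" (illustrative, not the paper's pipeline):

```python
import random

def apply_mask(pixels, ratio, seed=0):
    """Zero out a fixed fraction of pixels, mimicking an inpainting input."""
    rng = random.Random(seed)                      # deterministic for the demo
    n_mask = int(len(pixels) * ratio)
    hidden = set(rng.sample(range(len(pixels)), n_mask))
    masked = [0.0 if i in hidden else v for i, v in enumerate(pixels)]
    return masked, hidden

def mse(a, b):
    """Fidelity proxy: mean squared error against the ground truth."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

img = [0.2, 0.4, 0.6, 0.8] * 4    # toy 16-"pixel" image
masked, hidden = apply_mask(img, 0.5)
print(len(hidden), mse(masked, img) > 0)  # 8 True
```

Sweeping `ratio` upward reproduces the setting in which the paper observes diffusion models beginning to hallucinate while autoregressive models degrade more gracefully.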
