Page 8 of 40393 results

AgentMRI: A Vision Language Model-Powered AI System for Self-regulating MRI Reconstruction with Multiple Degradations.

Sajua GA, Akhib M, Chang Y

PubMed | Jul 22 2025
Artificial intelligence (AI)-driven autonomous agents are transforming multiple domains by integrating reasoning, decision-making, and task execution into a unified framework. In medical imaging, such agents have the potential to change workflows by reducing human intervention and optimizing image quality. In this paper, we introduce AgentMRI, an AI-driven system that leverages vision language models (VLMs) for fully autonomous magnetic resonance imaging (MRI) reconstruction in the presence of multiple degradations. Unlike traditional MRI correction or reconstruction methods, AgentMRI relies neither on manual post-processing nor on fixed correction models. Instead, it dynamically detects MRI corruption and then automatically selects the best correction model for image reconstruction. The framework uses a multi-query VLM strategy to ensure robust corruption detection through consensus-based decision-making and confidence-weighted inference. AgentMRI automatically chooses among deep learning models for MRI reconstruction, motion correction, and denoising. We evaluated AgentMRI in both zero-shot and fine-tuned settings. Experimental results on a comprehensive brain MRI dataset demonstrate that AgentMRI achieves an average accuracy of 73.6% in the zero-shot setting and 95.1% in the fine-tuned setting. Experiments show that it accurately executes the reconstruction process without human intervention. AgentMRI eliminates manual intervention and introduces a scalable, multimodal AI framework for autonomous MRI processing. This work may mark a significant step toward fully autonomous and intelligent MR image reconstruction systems.
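The multi-query, confidence-weighted consensus described above can be sketched as follows; `query_vlm` and the toy answers are hypothetical stand-ins, not the paper's implementation:

```python
from collections import defaultdict

def detect_corruption(query_vlm, image, n_queries=5):
    """Aggregate repeated VLM queries into a confidence-weighted consensus.

    `query_vlm(image)` is a hypothetical callable returning a
    (corruption_label, confidence) pair; it stands in for the paper's VLM.
    """
    scores = defaultdict(float)
    for _ in range(n_queries):
        label, conf = query_vlm(image)
        scores[label] += conf  # confidence-weighted vote
    # winning label and its normalized weight form the consensus decision
    label = max(scores, key=scores.get)
    return label, scores[label] / sum(scores.values())

# toy stand-in VLM that mostly answers "motion" with high confidence
_answers = iter([("motion", 0.9), ("noise", 0.4), ("motion", 0.8)])
label, weight = detect_corruption(lambda img: next(_answers), image=None, n_queries=3)
```

The consensus label would then route the image to the matching correction model (reconstruction, motion correction, or denoising).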

Dual-Network Deep Learning for Accelerated Head and Neck MRI: Enhanced Image Quality and Reduced Scan Time.

Li S, Yan W, Zhang X, Hu W, Ji L, Yue Q

PubMed | Jul 22 2025
Head-and-neck MRI faces inherent challenges, including motion artifacts and trade-offs between spatial resolution and acquisition time. We aimed to evaluate a dual-network deep learning (DL) super-resolution method for improving image quality and reducing scan time in T1- and T2-weighted head-and-neck MRI. In this prospective study, 97 patients with head-and-neck masses were enrolled at xx from August 2023 to August 2024. After exclusions, 58 participants underwent paired conventional and accelerated T1WI and T2WI MRI sequences, with the accelerated sequences being reconstructed using a dual-network DL framework for super-resolution. Image quality was assessed both quantitatively (signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], contrast ratio [CR]) and qualitatively by two blinded radiologists using a 5-point Likert scale for image sharpness, lesion conspicuity, structure delineation, and artifacts. Wilcoxon signed-rank tests were used to compare paired outcomes. Among 58 participants (34 men, 24 women; mean age 51.37 ± 13.24 years), DL reconstruction reduced scan times by 46.3% (T1WI) and 26.9% (T2WI). Quantitative analysis showed significant improvements in SNR (T1WI: 26.33 vs. 20.65; T2WI: 14.14 vs. 11.26) and CR (T1WI: 0.20 vs. 0.18; T2WI: 0.34 vs. 0.30; all p < 0.001), with comparable CNR (p > 0.05). Qualitatively, image sharpness, lesion conspicuity, and structure delineation improved significantly (p < 0.05), while artifact scores remained similar (all p > 0.05). The dual-network DL method significantly enhanced image quality and reduced scan times in head-and-neck MRI while maintaining diagnostic performance comparable to conventional methods. This approach offers potential for improved workflow efficiency and patient comfort.
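The quantitative metrics above (SNR, CNR, CR) are typically computed from region-of-interest statistics; a minimal sketch, assuming generic definitions rather than the study's exact formulas:

```python
import numpy as np

def roi_metrics(lesion, parenchyma, background):
    """Common image-quality metrics from region-of-interest samples.

    Definitions vary across papers; these are one standard choice, not
    necessarily the exact formulas used in the study above.
    """
    snr = lesion.mean() / background.std()  # signal-to-noise ratio
    cnr = abs(lesion.mean() - parenchyma.mean()) / background.std()  # contrast-to-noise
    cr = abs(lesion.mean() - parenchyma.mean()) / (lesion.mean() + parenchyma.mean())
    return snr, cnr, cr

# synthetic ROI samples standing in for lesion, parenchyma, and air/background
rng = np.random.default_rng(0)
snr, cnr, cr = roi_metrics(rng.normal(200, 5, 100),
                           rng.normal(100, 5, 100),
                           rng.normal(0, 10, 100))
```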

A Tutorial on MRI Reconstruction: From Modern Methods to Clinical Implications

Tolga Çukur, Salman U. H. Dar, Valiyeh Ansarian Nezhad, Yohan Jun, Tae Hyung Kim, Shohei Fujita, Berkin Bilgic

arXiv preprint | Jul 22 2025
MRI is an indispensable clinical tool, offering a rich variety of tissue contrasts to support broad diagnostic and research applications. Clinical exams routinely acquire multiple structural sequences that provide complementary information for differential diagnosis, while research protocols often incorporate advanced functional, diffusion, spectroscopic, and relaxometry sequences to capture multidimensional insights into tissue structure and composition. However, these capabilities come at the cost of prolonged scan times, which reduce patient throughput, increase susceptibility to motion artifacts, and may require trade-offs in image quality or diagnostic scope. Over the last two decades, advances in image reconstruction algorithms--alongside improvements in hardware and pulse sequence design--have made it possible to accelerate acquisitions while preserving diagnostic quality. Central to this progress is the ability to incorporate prior information to regularize the solutions to the reconstruction problem. In this tutorial, we overview the basics of MRI reconstruction and highlight state-of-the-art approaches, beginning with classical methods that rely on explicit hand-crafted priors, and then turning to deep learning methods that leverage a combination of learned and crafted priors to further push the performance envelope. We also explore the translational aspects and eventual clinical implications of these methods. We conclude by discussing future directions to address remaining challenges in MRI reconstruction. The tutorial is accompanied by a Python toolbox (https://github.com/tutorial-MRI-recon/tutorial) to demonstrate select methods discussed in the article.
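The role of a hand-crafted prior in regularizing an ill-posed reconstruction can be illustrated with classical iterative soft-thresholding (ISTA) on a toy sparse problem; this is a generic compressed-sensing sketch, not code from the accompanying toolbox:

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Iterative soft-thresholding: min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    The L1 term is an explicit sparsity prior; its proximal operator is
    elementwise soft-thresholding.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)  # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # prox of L1
    return x

# undersampled toy problem: 40 measurements of a 3-sparse length-100 signal
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 50, 97]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.05, iters=500)
```

Deep learning methods discussed in the tutorial replace or augment this hand-crafted prior with learned ones, but the data-consistency step remains structurally similar.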

MedSR-Impact: Transformer-Based Super-Resolution for Lung CT Segmentation, Radiomics, Classification, and Prognosis

Marc Boubnovski Martell, Kristofer Linton-Reid, Mitchell Chen, Sumeet Hindocha, Benjamin Hunter, Marco A. Calzado, Richard Lee, Joram M. Posma, Eric O. Aboagye

arXiv preprint | Jul 21 2025
High-resolution volumetric computed tomography (CT) is essential for accurate diagnosis and treatment planning in thoracic diseases; however, it is limited by radiation dose and hardware costs. We present the Transformer Volumetric Super-Resolution Network (TVSRN-V2), a transformer-based super-resolution (SR) framework designed for practical deployment in clinical lung CT analysis. Built from scalable components -- including Through-Plane Attention Blocks (TAB) and Swin Transformer V2 -- our model effectively reconstructs fine anatomical details in low-dose CT volumes and integrates seamlessly with downstream analysis pipelines. We evaluate its effectiveness on three critical lung cancer tasks -- lobe segmentation, radiomics, and prognosis -- across multiple clinical cohorts. To enhance robustness across variable acquisition protocols, we introduce pseudo-low-resolution augmentation, simulating scanner diversity without requiring private data. TVSRN-V2 demonstrates a significant improvement in segmentation accuracy (+4% Dice), higher radiomic feature reproducibility, and enhanced predictive performance (+0.06 C-index and AUC). These results indicate that SR-driven recovery of structural detail significantly enhances clinical decision support, positioning TVSRN-V2 as a well-engineered, clinically viable system for dose-efficient imaging and quantitative analysis in real-world CT workflows.
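Pseudo-low-resolution augmentation can be sketched as through-plane slice averaging followed by re-expansion; a simplified guess at the idea, not the paper's exact pipeline:

```python
import numpy as np

def pseudo_low_res(volume, factor=2):
    """Simulate a thick-slice, low-resolution acquisition from a HR volume.

    A simplified stand-in for the augmentation described above: average
    `factor` adjacent slices (through-plane), then re-expand to the original
    grid so the network sees LR content at HR shape.
    """
    z = volume.shape[0] - volume.shape[0] % factor
    lr = volume[:z].reshape(z // factor, factor, *volume.shape[1:]).mean(axis=1)
    return np.repeat(lr, factor, axis=0)  # nearest-neighbour re-expansion

# tiny synthetic volume: 8 slices of 4x4 pixels
vol = np.arange(8 * 4 * 4, dtype=float).reshape(8, 4, 4)
aug = pseudo_low_res(vol, factor=2)
```

Training on (augmented, original) pairs like this exposes the SR network to scanner-like through-plane resolution loss without needing additional private data.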

Advances in IPMN imaging: deep learning-enhanced HASTE improves lesion assessment.

Kolck J, Pivetta F, Hosse C, Cao H, Fehrenbach U, Malinka T, Wagner M, Walter-Rittel T, Geisel D

PubMed | Jul 21 2025
The prevalence of asymptomatic pancreatic cysts is increasing due to advances in imaging techniques. Among these, intraductal papillary mucinous neoplasms (IPMNs) are most common, with potential for malignant transformation, often necessitating close follow-up. This study evaluates novel MRI techniques for the assessment of IPMN. From May to December 2023, 59 patients undergoing abdominal MRI were retrospectively enrolled. Examinations were conducted on 3-Tesla scanners using a Deep-Learning Accelerated Half-Fourier Single-Shot Turbo Spin-Echo (HASTE<sub>DL</sub>) and a standard HASTE (HASTE<sub>S</sub>) sequence. Two readers assessed minimum detectable lesion size and lesion-to-parenchyma contrast quantitatively; qualitative assessments focused on image quality. Statistical analyses included the Wilcoxon signed-rank and chi-squared tests. HASTE<sub>DL</sub> demonstrated superior overall image quality (p < 0.001), with higher sharpness and contrast ratings (p < 0.001 and p = 0.112, respectively). HASTE<sub>DL</sub> showed enhanced conspicuity of IPMN (p < 0.001) and lymph nodes (p < 0.001), with more frequent visualization of IPMN communication with the pancreatic duct (p < 0.001). Visualization of complex features (dilated pancreatic duct, septa, and mural nodules) was superior in HASTE<sub>DL</sub> (p < 0.001). The minimum detectable cyst size was significantly smaller for HASTE<sub>DL</sub> (4.17 ± 3.00 mm vs. 5.51 ± 4.75 mm; p < 0.001). Inter-reader agreement was κ = 0.936 for HASTE<sub>DL</sub> and slightly lower (κ = 0.885) for HASTE<sub>S</sub>. HASTE<sub>DL</sub> in IPMN imaging provides superior image quality and significantly reduced scan times. Given the increasing prevalence of IPMN and the ensuing clinical need for fast and precise imaging, HASTE<sub>DL</sub> improves the availability and quality of patient care.
Question: Are there advantages of deep-learning-accelerated MRI in imaging and assessing intraductal papillary mucinous neoplasms (IPMN)?
Findings: Deep-Learning Accelerated Half-Fourier Single-Shot Turbo Spin-Echo (HASTE<sub>DL</sub>) demonstrated superior image quality, improved conspicuity of "worrisome features", and detection of smaller cysts, with significantly reduced scan times.
Clinical relevance: HASTE<sub>DL</sub> provides faster, high-quality MRI imaging, enabling improved diagnostic accuracy and timely risk stratification for IPMN, potentially enhancing patient care and addressing the growing clinical demand for efficient imaging of IPMN.
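The paired qualitative comparison used above (5-point Likert scores, Wilcoxon signed-rank test) can be reproduced on toy data; the scores below are invented for illustration:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired 5-point Likert ratings: DL-accelerated vs. standard
# sequence images of the same patients, rated by the same reader.
dl_scores  = np.array([5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 4, 4])
std_scores = np.array([4, 4, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3])

# Wilcoxon signed-rank test for paired ordinal outcomes; tied pairs
# (zero differences) are dropped by the default zero_method="wilcox".
stat, p = wilcoxon(dl_scores, std_scores)
```

With every nonzero difference favoring the DL sequence, the test rejects the null of no difference at the usual 0.05 level.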

Lightweight Network Enhancing High-Resolution Feature Representation for Efficient Low Dose CT Denoising.

Li J, Li Y, Qi F, Wang S, Zhang Z, Huang Z, Yu Z

PubMed | Jul 21 2025
Low-dose computed tomography plays a crucial role in reducing radiation exposure in clinical imaging; however, the resultant noise significantly impacts image quality and diagnostic precision. Recent transformer-based models have demonstrated strong denoising capabilities but are often constrained by high computational complexity. To overcome these limitations, we propose AMFA-Net, an adaptive multi-order feature aggregation network that provides a lightweight architecture for enhancing high-resolution feature representation in low-dose CT imaging. AMFA-Net effectively integrates local and global contexts within high-resolution feature maps while learning discriminative representations through multi-order context aggregation. We introduce an agent-based self-attention cross-shaped window transformer block that efficiently captures global context in high-resolution feature maps, which is subsequently fused with backbone features to preserve critical structural information. Our approach employs multi-order gated aggregation to adaptively guide the network in capturing expressive interactions that may be overlooked in fused features, thereby producing robust representations for denoised image reconstruction. Experiments on two challenging public datasets with 25% and 10% full-dose CT image quality demonstrate that our method surpasses state-of-the-art approaches in denoising performance with low computational cost, highlighting its potential for real-time medical applications.
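A toy, NumPy-only caricature of multi-order aggregation with gating; the real AMFA-Net uses learned convolutions and attention, so everything here is a stand-in:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_multi_order_aggregation(feats, orders=(1, 3, 5)):
    """Toy multi-order context aggregation with gate-weighted fusion.

    Each "order" is approximated by a 1-D moving average of a different width
    (standing in for depthwise convolutions with different receptive fields);
    random gates stand in for the learned gating branch. Illustrative only.
    """
    rng = np.random.default_rng(0)
    branches = []
    for k in orders:
        kernel = np.ones(k) / k
        branches.append(np.convolve(feats, kernel, mode="same"))
    gates = sigmoid(rng.normal(size=(len(orders), feats.size)))
    gates /= gates.sum(axis=0)  # normalize gates across branches
    return (gates * np.stack(branches)).sum(axis=0)

out = gated_multi_order_aggregation(np.linspace(0.0, 1.0, 32))
```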

Ultra-low dose imaging in a standard axial field-of-view PET.

Lima T, Gomes CV, Fargier P, Strobel K, Leimgruber A

PubMed | Jul 21 2025
Though ultra-low dose (ULD) imaging offers notable benefits, its widespread clinical adoption faces challenges. Long-axial field-of-view (LAFOV) PET/CT systems are expensive and scarce, while artificial intelligence (AI) shows great potential but remains largely limited to specific systems and is not yet widely used in clinical practice. However, integrating AI techniques and technological advancements into ULD imaging is helping bridge the gap between standard axial field-of-view (SAFOV) and LAFOV PET/CT systems. This paper offers an initial evaluation of ULD capabilities using one of the latest SAFOV PET/CT devices. A patient injected with 16.4 MBq <sup>18</sup>F-FDG underwent a local protocol consisting of a dynamic acquisition (first 30 min) of the abdominal section and a static whole-body acquisition 74 min post-injection on a GE Omni PET/CT. From the acquired images we computed the dosimetry and compared clinical outputs (kidney function and brain uptake) to a kidney model and normal databases, respectively. The effective PET dose for this patient was 0.27 ± 0.01 mSv, and the absorbed doses were 0.56 mGy, 0.89 mGy, and 0.20 mGy to the brain, heart, and kidneys, respectively. The recorded kidney concentration closely followed the kidney model, matching the increase and decrease in activity concentration over time. Normal z-score values were observed for brain uptake, indicating typical brain function and activity patterns consistent with healthy individuals. The signal-to-noise ratio obtained in this study (13.1) was comparable to reported LAFOV values. This study shows promising capabilities of ultra-low-dose imaging on SAFOV PET devices, previously deemed unattainable with SAFOV PET imaging.
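The z-score comparison of brain uptake against a normal database can be sketched as follows; the regional uptake values and reference statistics are invented for illustration:

```python
import numpy as np

def regional_z_scores(uptake, normal_mean, normal_sd):
    """z-scores of a patient's regional uptake against a normal database.

    Values within roughly +/-2 are conventionally read as normal; the
    reference means and SDs here are invented for illustration.
    """
    return (np.asarray(uptake) - np.asarray(normal_mean)) / np.asarray(normal_sd)

# hypothetical regional uptake (e.g., SUV-like values) for three brain regions
z = regional_z_scores(uptake=[6.1, 5.8, 7.0],
                      normal_mean=[6.0, 6.0, 6.8],
                      normal_sd=[0.5, 0.4, 0.6])
is_normal = bool(np.all(np.abs(z) < 2.0))
```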

PET Image Reconstruction Using Deep Diffusion Image Prior

Fumio Hashimoto, Kuang Gong

arXiv preprint | Jul 20 2025
Diffusion models have shown great promise in medical image denoising and reconstruction, but their application to Positron Emission Tomography (PET) imaging remains limited by tracer-specific contrast variability and high computational demands. In this work, we proposed an anatomical prior-guided PET image reconstruction method based on diffusion models, inspired by the deep diffusion image prior (DDIP) framework. The proposed method alternated between diffusion sampling and model fine-tuning guided by the PET sinogram, enabling the reconstruction of high-quality images from various PET tracers using a score function pretrained on a dataset of another tracer. To improve computational efficiency, the half-quadratic splitting (HQS) algorithm was adopted to decouple network optimization from iterative PET reconstruction. The proposed method was evaluated using one simulation and two clinical datasets. For the simulation study, a model pretrained on [$^{18}$F]FDG data was tested on amyloid-negative PET data to assess out-of-distribution (OOD) performance. For the clinical-data validation, ten low-dose [$^{18}$F]FDG datasets and one [$^{18}$F]Florbetapir dataset were tested on a model pretrained on data from another tracer. Experimental results show that the proposed PET reconstruction method can generalize robustly across tracer distributions and scanner types, providing an efficient and versatile reconstruction framework for low-dose PET imaging.
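The half-quadratic splitting idea, decoupling the data-fidelity solve from the prior (denoising) step, can be sketched on a linear toy problem; the `denoise` callable stands in for the diffusion-model prior:

```python
import numpy as np

def hqs_reconstruct(A, y, denoise, mu=1.0, iters=20):
    """Half-quadratic splitting for min_x 0.5*||Ax - y||^2 + R(x).

    An auxiliary variable z is split from x: the x-update is a quadratic
    (least-squares) problem solved in closed form, and the z-update is a
    plug-in denoising step standing in for the learned prior.
    """
    n = A.shape[1]
    z = np.zeros(n)
    H = A.T @ A + mu * np.eye(n)  # normal-equations matrix for the x-update
    for _ in range(iters):
        x = np.linalg.solve(H, A.T @ y + mu * z)  # data-fidelity step
        z = denoise(x)                            # prior (denoising) step
    return x

# well-posed toy system with mild noise and an identity "denoiser"
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
y = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = hqs_reconstruct(A, y, denoise=lambda v: v, mu=0.1)
```

In the paper's setting the forward operator is the PET projection and the denoiser is the pretrained score network; the splitting keeps the expensive network updates out of the inner reconstruction solve.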

OpenBreastUS: Benchmarking Neural Operators for Wave Imaging Using Breast Ultrasound Computed Tomography

Zhijun Zeng, Youjia Zheng, Hao Hu, Zeyuan Dong, Yihang Zheng, Xinliang Liu, Jinzhuo Wang, Zuoqiang Shi, Linfeng Zhang, Yubing Li, He Sun

arXiv preprint | Jul 20 2025
Accurate and efficient simulation of wave equations is crucial in computational wave imaging applications, such as ultrasound computed tomography (USCT), which reconstructs tissue material properties from observed scattered waves. Traditional numerical solvers for wave equations are computationally intensive and often unstable, limiting their practical applications for quasi-real-time image reconstruction. Neural operators offer an innovative approach by accelerating PDE solving using neural networks; however, their effectiveness in realistic imaging is limited because existing datasets oversimplify real-world complexity. In this paper, we present OpenBreastUS, a large-scale wave equation dataset designed to bridge the gap between theoretical equations and practical imaging applications. OpenBreastUS includes 8,000 anatomically realistic human breast phantoms and over 16 million frequency-domain wave simulations using real USCT configurations. It enables a comprehensive benchmarking of popular neural operators for both forward simulation and inverse imaging tasks, allowing analysis of their performance, scalability, and generalization capabilities. By offering a realistic and extensive dataset, OpenBreastUS not only serves as a platform for developing innovative neural PDE solvers but also facilitates their deployment in real-world medical imaging problems. For the first time, we demonstrate efficient in vivo imaging of the human breast using neural operator solvers.
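The core building block of many neural operators, a spectral (Fourier-domain) convolution acting on a small number of retained modes, can be sketched in 1-D; the weights here are random placeholders, not trained parameters:

```python
import numpy as np

def spectral_conv1d(u, weights, modes):
    """One Fourier-layer convolution in the style of FNO-type neural operators.

    Transforms the input field to frequency space, applies complex weights to
    the lowest `modes` frequencies, zeroes the rest, and transforms back.
    """
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights[:modes]  # learned spectral filter
    return np.fft.irfft(out_hat, n=u.size)

# toy field with a low- and a high-frequency component on a periodic grid
rng = np.random.default_rng(3)
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(9.0 * x)
w = rng.normal(size=33) + 1j * rng.normal(size=33)  # placeholder weights
v = spectral_conv1d(u, w, modes=8)
```

Because only the first 8 modes are kept, the frequency-9 component of the input is removed from the output; stacking such layers with pointwise nonlinearities yields the solvers benchmarked on OpenBreastUS.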

QUTCC: Quantile Uncertainty Training and Conformal Calibration for Imaging Inverse Problems

Cassandra Tong Ye, Shamus Li, Tyler King, Kristina Monakhova

arXiv preprint | Jul 19 2025
Deep learning models often hallucinate, producing realistic artifacts that are not truly present in the sample. This can have dire consequences for scientific and medical inverse problems, such as MRI and microscopy denoising, where accuracy is more important than perceptual quality. Uncertainty quantification techniques, such as conformal prediction, can pinpoint outliers and provide guarantees for image regression tasks, improving reliability. However, existing methods utilize a linear constant scaling factor to calibrate uncertainty bounds, resulting in larger, less informative bounds. We propose QUTCC, a quantile uncertainty training and calibration technique that enables nonlinear, non-uniform scaling of quantile predictions to enable tighter uncertainty estimates. Using a U-Net architecture with a quantile embedding, QUTCC enables the prediction of the full conditional distribution of quantiles for the imaging task. During calibration, QUTCC generates uncertainty bounds by iteratively querying the network for upper and lower quantiles, progressively refining the bounds to obtain a tighter interval that captures the desired coverage. We evaluate our method on several denoising tasks as well as compressive MRI reconstruction. Our method successfully pinpoints hallucinations in image estimates and consistently achieves tighter uncertainty intervals than prior methods while maintaining the same statistical coverage.
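The baseline that QUTCC refines, split-conformal calibration of predicted quantile bounds (CQR-style), can be sketched as follows; the data and quantile predictions are synthetic:

```python
import numpy as np

def conformalize(lo, hi, y_cal, alpha=0.1):
    """CQR-style calibration of predicted lower/upper quantile bounds.

    Computes the score s_i = max(lo_i - y_i, y_i - hi_i) on a calibration set
    and returns its finite-sample-corrected (1 - alpha) empirical quantile q;
    adding q to hi and subtracting it from lo yields ~(1 - alpha) marginal
    coverage. This is the classic CQR recipe, not QUTCC's nonlinear refinement.
    """
    scores = np.maximum(lo - y_cal, y_cal - hi)
    n = y_cal.size
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

# synthetic calibration set with noisy quantile predictions around the truth
rng = np.random.default_rng(4)
y = rng.normal(size=500)
lo = y - 0.5 + 0.1 * rng.normal(size=500)
hi = y + 0.5 + 0.1 * rng.normal(size=500)
q = conformalize(lo, hi, y, alpha=0.1)
coverage = float(np.mean((y >= lo - q) & (y <= hi + q)))
```

Because the correction q is a single constant, intervals are widened uniformly; QUTCC's contribution is replacing this uniform scaling with nonlinear, non-uniform quantile adjustments.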
