
Ultra-Low-Dose CTPA Using Sparse Sampling CT Combined with the U-Net for Deep Learning-Based Artifact Reduction: An Exploratory Study.

Sauter AP, Thalhammer J, Meurer F, Dorosti T, Sasse D, Ritter J, Leonhardt Y, Pfeiffer F, Schaff F, Pfeiffer D

PubMed · Aug 27, 2025
This retrospective study evaluates U-Net-based artifact reduction for dose-reduced sparse-sampling CT (SpSCT) in terms of image quality and diagnostic performance using a reader study and automated detection. CT pulmonary angiograms from 89 patients were used to generate SpSCT data with 16 to 512 views. Twenty patients were reserved for a reader study and test set; the remaining 69 were used to train (53) and validate (16) a dual-frame U-Net for artifact reduction. U-Net post-processed images were assessed for image quality, diagnostic performance, and automated pulmonary embolism (PE) detection using the top-performing network from the 2020 RSNA PE detection challenge. Statistical comparisons were made using two-sided Wilcoxon signed-rank and two-sided DeLong tests. Post-processing with the dual-frame U-Net significantly improved image quality in the internal test set, with structural similarity indices of 0.634/0.378/0.234/0.152 for FBP and 0.894/0.892/0.866/0.778 for the U-Net at 128/64/32/16 views, respectively. The reader study showed significantly enhanced image quality (3.15 vs. 3.53 for 256 views; 0.00 vs. 2.52 for 32 views), increased diagnostic confidence (0.00 vs. 2.38 for 32 views), and fewer artifacts across all subsets (P < 0.05). Diagnostic performance, measured by the Sørensen-Dice coefficient, was significantly better for 64- and 32-view images (0.23 vs. 0.44 and 0.00 vs. 0.09, P < 0.05). Automated PE detection was better at fewer views (64 views: 0.77 vs. 0.80; 16 views: 0.59 vs. 0.80), although the differences were not statistically significant. U-Net-based post-processing of SpSCT data significantly enhances image quality and diagnostic performance, supporting substantial dose reduction in CT pulmonary angiography.
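
For readers unfamiliar with the structural similarity index reported above, here is a minimal sketch of computing SSIM between a reconstruction and a full-view reference, assuming 2D NumPy arrays on a shared intensity scale (variable names are illustrative, not from the study):

```python
# Minimal SSIM sketch using scikit-image; array names are illustrative.
import numpy as np
from skimage.metrics import structural_similarity

fbp_slice = np.random.rand(512, 512)    # stand-in for an FBP reconstruction
unet_slice = np.random.rand(512, 512)   # stand-in for the U-Net output
reference = np.random.rand(512, 512)    # stand-in for the full-view reference

# SSIM is computed against the full-view reference; data_range must match
# the intensity scale of the inputs.
ssim_fbp = structural_similarity(reference, fbp_slice, data_range=1.0)
ssim_unet = structural_similarity(reference, unet_slice, data_range=1.0)
print(f"SSIM  FBP: {ssim_fbp:.3f}  U-Net: {ssim_unet:.3f}")
```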

PWLS-SOM: alternative PWLS reconstruction for limited-view CT by strategic optimization of a deep learning model.

Chen C, Zhang L, Xing Y, Chen Z

PubMed · Aug 27, 2025
While deep learning (DL) methods have exhibited promising results in mitigating streaking artifacts caused by limited-view computed tomography (CT), their generalization to practical applications remains challenging. To address this challenge, we aim to develop a novel approach that integrates DL priors with target-case data consistency for improved artifact suppression and robust reconstruction.
Approach: We propose an alternative Penalized Weighted Least Squares reconstruction framework by Strategic Optimization of a DL Model (PWLS-SOM). This framework combines data-driven DL priors with data-consistency constraints in a three-stage process: (1) Group-level embedding: DL network parameters are optimized on a large-scale paired dataset to learn general artifact elimination. (2) Significance evaluation: A novel significance score quantifies the contribution of DL model parameters, guiding the subsequent strategic adaptation. (3) Individual-level consistency adaptation: PWLS-driven strategic optimization further adapts DL parameters to the target-specific projection data.
Main Results: Experiments were conducted on sparse-view (90 views) circular-trajectory CT data and a multi-segment linear-trajectory CT scan with a mixed data-missing problem. PWLS-SOM reconstruction demonstrated superior generalization across variations in patients, anatomical structures, and data distributions. It outperformed supervised DL methods in recovering contextual structures and adapting to practical CT scenarios. The method was validated with real experiments on a dead rat, showcasing its applicability to real-world CT scans.
Significance: PWLS-SOM reconstruction advances the field of limited-view CT reconstruction by uniting DL priors with PWLS adaptation. This approach facilitates robust and personalized imaging. The introduced significance score provides an efficient metric to evaluate generalization and guide the strategic optimization of DL parameters, enhancing adaptability across diverse data and practical imaging conditions.
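
For context, a sketch of the generic PWLS objective that frameworks of this kind adapt, in standard notation (the paper's exact formulation may differ):

```latex
% Generic PWLS objective in standard notation; A is the system matrix,
% y the measured projections, W a statistical weighting matrix, and
% R a regularizer (here, the role played by the DL prior) with strength \beta.
\hat{x} = \arg\min_{x \geq 0} \; (Ax - y)^{\top} W (Ax - y) + \beta\, R(x)
```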

Physical foundations for trustworthy medical imaging: A survey for artificial intelligence researchers.

Cobo M, Corral Fontecha D, Silva W, Lloret Iglesias L

PubMed · Aug 26, 2025
Artificial intelligence in medical imaging has grown rapidly in the past decade, driven by advances in deep learning and widespread access to computing resources. Applications cover diverse imaging modalities, including those based on electromagnetic radiation (e.g., X-rays), subatomic particles (e.g., nuclear imaging), and acoustic waves (ultrasound). Each modality's features and limitations are defined by its underlying physics. However, many artificial intelligence practitioners lack a solid understanding of the physical principles involved in medical image acquisition. This gap hinders leveraging the full potential of deep learning, as incorporating physics knowledge into artificial intelligence systems promotes trustworthiness, especially in limited-data scenarios. This work reviews the fundamental physical concepts behind medical imaging and examines their influence on recent developments in artificial intelligence, particularly generative models and reconstruction algorithms. Finally, we describe physics-informed machine learning approaches to improve feature learning in medical imaging.

Displacement-Guided Anisotropic 3D-MRI Super-Resolution with Warp Mechanism.

Wang L, Liu S, Yu Z, Du J, Li Y

PubMed · Aug 25, 2025
Enhancing the resolution of Magnetic Resonance Imaging (MRI) through super-resolution (SR) reconstruction is crucial for boosting diagnostic precision. However, current SR methods primarily rely on single LR images or multi-contrast features, limiting detail restoration. Inspired by video frame interpolation, this work utilizes the spatiotemporal correlations between adjacent slices to reformulate the SR task for anisotropic 3D-MRI images as the generation of new high-resolution (HR) slices between adjacent 2D slices. The generated SR slices are subsequently combined with the adjacent HR slices to create a new HR 3D-MRI image. We propose an innovative network architecture termed DGWMSR, comprising a backbone network and a feature supplement module (FSM). The backbone's core innovations include the displacement former block (DFB) module, which independently extracts structural and displacement features, and the mask-displacement vector network (MDVNet), which combines with a warp mechanism to facilitate edge-pixel detailing. The DFB integrates the inter-slice attention (ISA) mechanism into the Transformer, effectively minimizing mutual interference between the two types of features and mitigating volume effects during reconstruction. Additionally, the FSM module combines self-attention with a feed-forward neural network to emphasize critical details derived from the backbone architecture. Experimental results demonstrate that the DGWMSR network outperforms current MRI SR methods on the Kirby21, ANVIL-adult, and MSSEG datasets. Our code is publicly available on GitHub at https://github.com/Dohbby/DGWMSR.
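
As an illustration of the warp mechanism such architectures build on, here is a minimal sketch of backward-warping an adjacent slice by a displacement field with PyTorch's grid_sample; the function and variable names are hypothetical, not taken from the DGWMSR code:

```python
# Generic backward warp of an adjacent MRI slice by a displacement field,
# in the spirit of frame-interpolation methods; names are illustrative.
import torch
import torch.nn.functional as F

def warp_slice(slice_2d: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """slice_2d: (N, C, H, W); flow: (N, 2, H, W) displacement in pixels."""
    n, _, h, w = slice_2d.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow
    # Normalize coordinates to [-1, 1], as grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(slice_2d, grid, align_corners=True)

prev_slice = torch.randn(1, 1, 256, 256)  # stand-in adjacent slice
flow = torch.zeros(1, 2, 256, 256)        # zero flow -> identity warp
mid_slice = warp_slice(prev_slice, flow)
```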

Artificial Intelligence-Guided PET Image Reconstruction and Multi-Tracer Imaging: Novel Methods, Challenges, and Opportunities.

Dassanayake M, Lopez A, Reader A, Cook GJR, Mingels C, Rahmim A, Seifert R, Alberts I, Yousefirizi F

PubMed · Aug 25, 2025
This article reviews recent advancements in PET/computed tomography imaging, emphasizing the transformative impact of total-body and long-axial field-of-view scanners, which offer increased sensitivity, larger coverage, and faster, lower-dose imaging. It highlights the growing role of artificial intelligence (AI) in enhancing image reconstruction, resolution, and multi-tracer applications, enabling rapid processing and improved quantification. AI-driven techniques, such as super-resolution, positron range correction, and motion compensation, are improving lesion detectability and image quality. The review underscores the potential of these innovations to revolutionize clinical and research PET imaging, while also noting the challenges in validation and implementation for routine practice.

Motion Management in Positron Emission Tomography/Computed Tomography and Positron Emission Tomography/Magnetic Resonance.

Guo L, Liu C, Soultanidis G

PubMed · Aug 25, 2025
Motion in clinical positron emission tomography (PET) examinations degrades image quality and quantification, requiring tailored correction strategies. Recent advancements integrate external devices and/or data-driven motion tracking with image registration and motion modeling, particularly deep learning-based methods, to address complex motion scenarios. The development of total-body PET systems with long axial field-of-view enables advanced motion correction by leveraging extended coverage and continuous acquisition. These innovations enhance the accuracy of motion estimation and correction across various clinical applications, improve quantitative reliability in static and dynamic imaging, and enable more precise assessments in oncology, neurology, and cardiovascular PET studies.

2D Ultrasound Elasticity Imaging of Abdominal Aortic Aneurysms Using Deep Neural Networks

Utsav Ratna Tuladhar, Richard Simon, Doran Mix, Michael Richards

arXiv preprint · Aug 25, 2025
Abdominal aortic aneurysms (AAA) pose a significant clinical risk due to their potential for rupture, which is often asymptomatic but can be fatal. Although maximum diameter is commonly used for risk assessment, diameter alone is insufficient as it does not capture the material properties of the vessel wall, which play a critical role in determining rupture risk. To overcome this limitation, we propose a deep learning-based framework for elasticity imaging of AAAs with 2D ultrasound. Leveraging finite element simulations, we generate a diverse dataset of displacement fields with their corresponding modulus distributions. We train a model with a U-Net architecture and a normalized mean squared error (NMSE) loss to infer the spatial modulus distribution from the axial and lateral components of the displacement fields. This model is evaluated across three experimental domains: digital phantom data from 3D COMSOL simulations, physical phantom experiments using biomechanically distinct vessel models, and clinical ultrasound exams from AAA patients. Our simulated results demonstrate that the proposed deep learning model is able to reconstruct modulus distributions, achieving an NMSE score of 0.73%. Similarly, in phantom data, the predicted modulus ratio closely matches the expected values, affirming the model's ability to generalize to phantom data. We compare our approach with an iterative method, which shows comparable performance but higher computation time. In contrast, the deep learning method can provide quick and effective estimates of tissue stiffness from ultrasound images, which could help assess the risk of AAA rupture without invasive procedures.
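
A minimal sketch of a normalized mean squared error loss of the kind described, assuming PyTorch tensors; the normalization convention shown is one common choice and may differ from the paper's:

```python
# One common normalized-MSE convention; the paper's exact normalization
# may differ. Tensors are (batch, ...) modulus maps.
import torch

def nmse_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    dims = tuple(range(1, pred.ndim))
    num = torch.sum((pred - target) ** 2, dim=dims)
    den = torch.sum(target ** 2, dim=dims) + eps
    return torch.mean(num / den)

pred = torch.rand(4, 1, 128, 128)    # predicted modulus distribution (illustrative)
target = torch.rand(4, 1, 128, 128)  # simulated ground-truth modulus
print(nmse_loss(pred, target).item())
```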

UniSino: Physics-Driven Foundational Model for Universal CT Sinogram Standardization

Xingyu Ai, Shaoyu Wang, Zhiyuan Jia, Ao Xu, Hongming Shan, Jianhua Ma, Qiegen Liu

arXiv preprint · Aug 25, 2025
During raw-data acquisition in CT imaging, diverse factors can degrade the collected sinograms, with undersampling and noise leading to severe artifacts and noise in reconstructed images, compromising diagnostic accuracy. Conventional correction methods rely on manually designed algorithms or fixed empirical parameters, but these approaches often lack generalizability across heterogeneous artifact types. To address these limitations, we propose UniSino, a foundation model for universal CT sinogram standardization. Unlike existing foundation models that operate in the image domain, UniSino directly standardizes data in the projection domain, which enables stronger generalization across diverse undersampling scenarios. Its training framework incorporates the physical characteristics of sinograms, enhancing generalization and enabling robust performance across multiple subtasks spanning four benchmark datasets. Experimental results demonstrate that UniSino achieves superior reconstruction quality in both single and mixed undersampling cases, exhibiting exceptional robustness and generalization in sinogram enhancement for CT imaging. The code is available at: https://github.com/yqx7150/UniSino.
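
To illustrate the projection-domain setting, a small sketch of simulating sparse-view undersampling on a sinogram in NumPy; the mask construction is a generic illustration, not the UniSino pipeline:

```python
# Simulate sparse-view undersampling of a sinogram (generic illustration).
import numpy as np

full_sino = np.random.rand(720, 512)  # (views, detector bins), stand-in data

def sparse_view_mask(n_views: int, keep: int) -> np.ndarray:
    """Boolean mask keeping `keep` evenly spaced views out of n_views."""
    kept = np.linspace(0, n_views - 1, keep).round().astype(int)
    mask = np.zeros(n_views, dtype=bool)
    mask[kept] = True
    return mask

mask = sparse_view_mask(full_sino.shape[0], keep=90)
sparse_sino = np.where(mask[:, None], full_sino, 0.0)  # zero out missing views
```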

FoundDiff: Foundational Diffusion Model for Generalizable Low-Dose CT Denoising

Zhihao Chen, Qi Gao, Zilong Li, Junping Zhang, Yi Zhang, Jun Zhao, Hongming Shan

arXiv preprint · Aug 24, 2025
Low-dose computed tomography (CT) denoising is crucial for reducing radiation exposure while ensuring diagnostically acceptable image quality. Despite significant advancements driven by deep learning (DL) in recent years, existing DL-based methods, typically trained on a specific dose level and anatomical region, struggle to handle diverse noise characteristics and anatomical heterogeneity under varied scanning conditions, limiting their generalizability and robustness in clinical scenarios. In this paper, we propose FoundDiff, a foundational diffusion model for unified and generalizable LDCT denoising across dose levels and anatomical regions. FoundDiff employs a two-stage strategy: (i) dose-anatomy perception and (ii) adaptive denoising. First, we develop a dose- and anatomy-aware contrastive language-image pre-training model (DA-CLIP) to achieve robust dose and anatomy perception, leveraging specialized contrastive learning strategies to learn continuous representations that quantify ordinal dose variations and identify salient anatomical regions. Second, we design a dose- and anatomy-aware diffusion model (DA-Diff) that performs adaptive and generalizable denoising by synergistically integrating the learned dose and anatomy embeddings from DA-CLIP into the diffusion process via a novel dose and anatomy conditional block (DACB) based on Mamba. Extensive experiments on two public LDCT datasets encompassing eight dose levels and three anatomical regions demonstrate the superior denoising performance of FoundDiff over existing state-of-the-art methods and its remarkable generalization to unseen dose levels. The code and models are available at https://github.com/hao1635/FoundDiff.
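
As a generic illustration of conditioning a denoiser on learned embeddings, here is a simple FiLM-style scale-and-shift block in PyTorch; this is a simplified stand-in, not the paper's Mamba-based DACB:

```python
# Generic FiLM-style conditioning: scale-and-shift feature maps with a
# condition embedding. A simplified stand-in, not the paper's DACB.
import torch
import torch.nn as nn

class ConditionalBlock(nn.Module):
    def __init__(self, channels: int, cond_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        h = self.conv(x)
        return h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

block = ConditionalBlock(channels=64, cond_dim=128)
feats = torch.randn(2, 64, 32, 32)
dose_anatomy_emb = torch.randn(2, 128)  # stand-in for learned embeddings
out = block(feats, dose_anatomy_emb)
```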

Deep Learning Architectures for Medical Image Denoising: A Comparative Study of CNN-DAE, CADTra, and DCMIEDNet

Asadullah Bin Rahman, Masud Ibn Afjal, Md. Abdulla Al Mamun

arXiv preprint · Aug 24, 2025
Medical imaging modalities are inherently susceptible to noise contamination that degrades diagnostic utility and clinical assessment accuracy. This paper presents a comprehensive comparative evaluation of three state-of-the-art deep learning architectures for MRI brain image denoising: CNN-DAE, CADTra, and DCMIEDNet. We systematically evaluate these models across multiple Gaussian noise intensities ($\sigma = 10, 15, 25$) using the Figshare MRI Brain Dataset. Our experimental results demonstrate that DCMIEDNet achieves superior performance at lower noise levels, with PSNR values of $32.921 \pm 2.350$ dB and $30.943 \pm 2.339$ dB for $\sigma = 10$ and $15$, respectively. However, CADTra exhibits greater robustness under severe noise conditions ($\sigma = 25$), achieving the highest PSNR of $27.671 \pm 2.091$ dB. All deep learning approaches significantly outperform traditional wavelet-based methods, with improvements of 5-8 dB across the tested conditions. This study establishes quantitative benchmarks for medical image denoising and provides insights into architecture-specific strengths at varying noise intensities.
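
The PSNR figures above follow the standard definition, 10·log10(MAX²/MSE); a minimal computation sketch, assuming 8-bit images with a peak value of 255:

```python
# Standard PSNR: 10 * log10(MAX^2 / MSE), here with an 8-bit peak of 255.
import numpy as np

def psnr(reference: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

clean = np.random.randint(0, 256, (256, 256)).astype(np.uint8)
noisy = np.clip(clean + np.random.normal(0, 15, clean.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```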
