
Feasibility study of "double-low" scanning protocol combined with artificial intelligence iterative reconstruction algorithm for abdominal computed tomography enhancement in patients with obesity.

Ji MT, Wang RR, Wang Q, Li HS, Zhao YX

PubMed · Jul 9 2025
To evaluate the efficacy of the "double-low" scanning protocol combined with the artificial intelligence iterative reconstruction (AIIR) algorithm for abdominal computed tomography (CT) enhancement in obese patients, and to identify the optimal AIIR algorithm level. Patients with a body mass index ≥ 30.00 kg/m² who underwent abdominal CT enhancement were randomly assigned to group A or group B. Group A underwent the conventional protocol with the Karl 3D iterative reconstruction algorithm at levels 3-5; group B underwent the "double-low" protocol with the AIIR algorithm at levels 1-5. Radiation dose, total iodine intake, and subjective and objective image quality were recorded, and the optimal reconstruction levels for arterial-phase and portal-venous-phase images were identified. Comparisons were made in terms of radiation dose, iodine intake, and image quality. Overall, 150 patients with obesity were enrolled, with 75 cases in each group. Karl 3D level 5 was the optimal algorithm level for group A, while AIIR level 4 was optimal for group B. AIIR level 4 images in group B exhibited significantly better subjective and objective image quality than Karl 3D level 5 images in group A (P < 0.001). Group B showed reductions in mean CT dose index values, dose-length product, size-specific dose estimate based on water-equivalent diameter, and total iodine intake compared with group A (P < 0.001). The "double-low" scanning protocol combined with the AIIR algorithm significantly reduces radiation dose and iodine intake during abdominal CT enhancement in obese patients, and AIIR level 4 is the optimal reconstruction level for arterial-phase and portal-venous-phase imaging in this patient population.
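
As a reading aid, the dose metrics reported above relate through two standard formulas: the dose-length product is CTDIvol integrated over the scan length, and the size-specific dose estimate rescales CTDIvol by a size-dependent conversion factor f(Dw) tabulated in AAPM Report 220. The sketch below illustrates this arithmetic only; the exponential fit and its constants are placeholders, not the published coefficients or anything from this study.

```python
# Illustrative dose-metric arithmetic (not from the paper).
import math

def dlp(ctdi_vol_mgy: float, scan_length_cm: float) -> float:
    """Dose-length product: CTDIvol integrated over the scan range (mGy*cm)."""
    return ctdi_vol_mgy * scan_length_cm

def ssde(ctdi_vol_mgy: float, water_equiv_diameter_cm: float) -> float:
    """Size-specific dose estimate: CTDIvol times a size-dependent factor f(Dw).
    AAPM Report 220 tabulates f; an exponential fit a*exp(-b*Dw) is commonly
    used. The constants below are hypothetical stand-ins, not the published fit."""
    a, b = 3.7, 0.037
    return ctdi_vol_mgy * a * math.exp(-b * water_equiv_diameter_cm)
```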

Evolution of CT perfusion software in stroke imaging: from deconvolution to artificial intelligence.

Gragnano E, Cocozza S, Rizzuti M, Buono G, Elefante A, Guida A, Marseglia M, Tarantino M, Manganelli F, Tortora F, Briganti F

PubMed · Jul 9 2025
Computed tomography perfusion (CTP) is one of the main determinants in the decision-making strategy for stroke patients and plays a central role in their triage. The aim of this review is to describe the current knowledge and future applications of AI in CTP. This review contains a short technical description of the CTP technique and of how perfusion parameters are currently estimated and applied in clinical practice. We then provide a comprehensive literature review on the performance of CTP analysis software, aimed at understanding whether differences between commercially available packages might have direct implications for neuroradiological patient stratification, and therefore for clinical outcomes. An overview of the past, present, and future of software used for CTP estimation, with an emphasis on AI-based solutions, is provided. Finally, future challenges regarding technical aspects and ethical considerations are discussed. At present, the use of AI in CTP estimation is largely limited to certain technical steps of the processing pipeline, especially the correction of motion artifacts, while deconvolution methods remain the standard way to generate CTP-derived variables. Major obstacles to AI implementation persist, notably the "black-box" nature of some models, technical workflow integration, and economic costs. In the future, the integration of AI with all the information available in clinical practice should enable patient-specific CTP maps, overcoming the current limitations of threshold-based decision-making and leading physicians to better patient selection and earlier, more efficient treatments. KEY POINTS: Question: AI is a widely investigated field in neuroradiology, yet no comprehensive review is available on its role in CT perfusion (CTP) in stroke patients. Findings: AI in CTP is mainly used for motion correction; future integration with clinical data could enable personalized stroke treatment, despite ethical and economic challenges. Clinical relevance: To date, AI in CTP mainly finds applications in image motion correction; although some ethical, technical, and vendor-standardization issues remain, integrating AI with clinical data in stroke patients promises improved patient outcomes.
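
For context on the deconvolution baseline the review discusses: the tissue time-attenuation curve is modeled as the arterial input function (AIF) convolved with a flow-scaled residue function, and truncated-SVD deconvolution inverts this relation to obtain CBF and MTT. Below is a minimal sketch under common simplifying assumptions (uniform sampling, no delay correction, an illustrative truncation threshold); it is not any vendor's implementation.

```python
import numpy as np

def svd_deconvolve(aif, tac, dt, psvd=0.2):
    """Estimate k(t) = CBF * R(t) from tac(t) ~= dt * (A @ k), where A is
    the lower-triangular (Toeplitz) convolution matrix built from the AIF."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]              # A[i, j] = aif[i - j]
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > psvd * s.max(), 1.0 / s, 0.0)  # truncate small SVs
    k = Vt.T @ (s_inv * (U.T @ tac))
    cbf = k.max()                # flow = peak of the residue curve
    mtt = k.sum() * dt / cbf     # central volume theorem: MTT = CBV / CBF
    return k, cbf, mtt
```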

4KAgent: Agentic Any Image to 4K Super-Resolution

Yushen Zuo, Qi Zheng, Mingyang Wu, Xinrui Jiang, Renjie Li, Jian Wang, Yide Zhang, Gengchen Mai, Lihong V. Wang, James Zou, Xiaoyu Wang, Ming-Hsuan Yang, Zhengzhong Tu

arXiv preprint · Jul 9 2025
We present 4KAgent, a unified agentic super-resolution generalist system designed to universally upscale any image to 4K resolution (and even higher, if applied iteratively). Our system can transform images from extremely low resolutions with severe degradations, for example highly distorted 256x256 inputs, into crystal-clear, photorealistic 4K outputs. 4KAgent comprises three core components: (1) Profiling, a module that customizes the 4KAgent pipeline for bespoke use cases; (2) a Perception Agent, which leverages vision-language models alongside image quality assessment experts to analyze the input image and make a tailored restoration plan; and (3) a Restoration Agent, which executes the plan following a recursive execution-reflection paradigm, guided by a quality-driven mixture-of-expert policy that selects the optimal output at each step. Additionally, 4KAgent embeds a specialized face restoration pipeline, significantly enhancing facial details in portrait and selfie photos. We rigorously evaluate 4KAgent across 11 distinct task categories encompassing a total of 26 diverse benchmarks, setting a new state of the art across a broad spectrum of imaging domains. Our evaluations cover natural images, portrait photos, AI-generated content, satellite imagery, fluorescence microscopy, and medical imaging such as fundoscopy, ultrasound, and X-ray, demonstrating superior performance on both perceptual (e.g., NIQE, MUSIQ) and fidelity (e.g., PSNR) metrics. By establishing a novel agentic paradigm for low-level vision tasks, we aim to catalyze broader interest and innovation in vision-centric autonomous agents across diverse research communities. We will release all code, models, and results at: https://4kagent.github.io.
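
The execution-reflection loop described in (3) can be pictured as a score-and-select routine: each expert in a pool proposes a candidate, a no-reference quality metric judges the candidates, and the winner feeds the next step. The sketch below is a loose illustration of that control flow; the expert pools, the `score` function, and the plan structure are assumptions, not 4KAgent's actual interfaces.

```python
from typing import Callable, List
import numpy as np

Image = np.ndarray
Expert = Callable[[Image], Image]   # e.g. a denoiser, deblurrer, or SR model

def restore_step(img: Image, pool: List[Expert],
                 score: Callable[[Image], float]) -> Image:
    """One execution-reflection step: every expert proposes a candidate and
    a quality score (e.g. NIQE, where lower is better) selects the winner."""
    candidates = [expert(img) for expert in pool]
    return min(candidates, key=score)

def restore(img: Image, plan: List[List[Expert]],
            score: Callable[[Image], float]) -> Image:
    """Follow the perception agent's plan: a sequence of expert pools
    (e.g. denoise -> deblur -> upscale), reflecting after each step."""
    for pool in plan:
        img = restore_step(img, pool, score)
    return img
```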

Speckle2Self: Self-Supervised Ultrasound Speckle Reduction Without Clean Data

Xuesong Li, Nassir Navab, Zhongliang Jiang

arXiv preprint · Jul 9 2025
Image denoising is a fundamental task in computer vision, particularly in medical ultrasound (US) imaging, where speckle noise significantly degrades image quality. Although recent advances in deep neural networks have led to substantial improvements in denoising for natural images, these methods cannot be directly applied to US speckle noise, as it is not purely random. Instead, US speckle arises from complex wave interference within the body's microstructure, making it tissue-dependent. This dependency means that obtaining two independent noisy observations of the same scene, as required by the pioneering Noise2Noise approach, is not feasible. Blind-spot networks likewise cannot handle US speckle noise because of its high spatial dependency. To address this challenge, we introduce Speckle2Self, a novel self-supervised algorithm for speckle reduction using only single noisy observations. The key insight is that applying a multi-scale perturbation (MSP) operation introduces tissue-dependent variations in the speckle pattern across different scales, while preserving the shared anatomical structure. This enables effective speckle suppression by modeling the clean image as a low-rank signal and isolating the sparse noise component. To demonstrate its effectiveness, Speckle2Self is comprehensively compared with conventional filter-based denoising algorithms and state-of-the-art learning-based methods, using both realistic simulated US images and human carotid US images. Additionally, data from multiple US machines are employed to evaluate model generalization and adaptability to images from unseen domains. Code and datasets will be released upon acceptance.
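
One way to picture the MSP operation, as I read the abstract: resampling the noisy image at several scales decorrelates the tissue-dependent speckle while leaving the anatomy intact, so the perturbed copies share a low-rank structure. The sketch below shows such a perturbation; the scale factors and bilinear resampling are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def msp(noisy: np.ndarray, scale: float) -> np.ndarray:
    """Down- then up-sample so the speckle pattern changes while the
    underlying anatomical structure is preserved."""
    h, w = noisy.shape
    small = cv2.resize(noisy, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

def msp_stack(noisy: np.ndarray, scales=(0.5, 0.7, 0.9)) -> np.ndarray:
    """Stack of perturbed copies: the shared anatomy is (approximately)
    the low-rank component of this stack, the speckle the sparse residual."""
    return np.stack([msp(noisy, s) for s in scales] + [noisy])
```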

LangMamba: A Language-driven Mamba Framework for Low-dose CT Denoising with Vision-language Models

Zhihao Chen, Tao Chen, Chenhui Wang, Qi Gao, Huidong Xie, Chuang Niu, Ge Wang, Hongming Shan

arXiv preprint · Jul 8 2025
Low-dose computed tomography (LDCT) reduces radiation exposure but often degrades image quality, potentially compromising diagnostic accuracy. Existing deep learning-based denoising methods focus primarily on pixel-level mappings, overlooking the potential benefits of high-level semantic guidance. Recent advances in vision-language models (VLMs) suggest that language can serve as a powerful tool for capturing structured semantic information, offering new opportunities to improve LDCT reconstruction. In this paper, we introduce LangMamba, a Language-driven Mamba framework for LDCT denoising that leverages VLM-derived representations to enhance supervision from normal-dose CT (NDCT). LangMamba follows a two-stage learning strategy. First, we pre-train a Language-guided AutoEncoder (LangAE) that leverages frozen VLMs to map NDCT images into a semantic space enriched with anatomical information. Second, we synergize LangAE with two key components to guide LDCT denoising: a Semantic-Enhanced Efficient Denoiser (SEED), which enhances NDCT-relevant local semantics while capturing global features with an efficient Mamba mechanism, and a Language-engaged Dual-space Alignment (LangDA) loss, which ensures that denoised images align with NDCT in both the perceptual and semantic spaces. Extensive experiments on two public datasets demonstrate that LangMamba outperforms conventional state-of-the-art methods, significantly improving detail preservation and visual fidelity. Remarkably, LangAE exhibits strong generalizability to unseen datasets, thereby reducing training costs. Furthermore, the LangDA loss improves explainability by integrating language-guided insights into image reconstruction and operates in a plug-and-play fashion. Our findings shed new light on the potential of language as a supervisory signal to advance LDCT denoising. The code is publicly available at https://github.com/hao1635/LangMamba.
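
The LangDA idea of aligning denoised output and NDCT target in two spaces can be sketched as a composite loss: a pixel-fidelity term plus a semantic term computed with a frozen language-grounded encoder. The snippet below is a guess at that structure; `lang_encoder` stands in for the frozen LangAE, and the cosine formulation and weighting are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def dual_space_loss(denoised: torch.Tensor, ndct: torch.Tensor,
                    lang_encoder: torch.nn.Module,
                    w_sem: float = 0.1) -> torch.Tensor:
    pixel = F.mse_loss(denoised, ndct)           # fidelity in image space
    with torch.no_grad():
        z_target = lang_encoder(ndct)            # frozen semantic target
    z_pred = lang_encoder(denoised)              # gradients flow to the denoiser
    semantic = 1 - F.cosine_similarity(z_pred.flatten(1),
                                       z_target.flatten(1)).mean()
    return pixel + w_sem * semantic
```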

Deep supervised transformer-based noise-aware network for low-dose PET denoising across varying count levels.

Azimi MS, Felfelian V, Zeraatkar N, Dadgar H, Arabi H, Zaidi H

PubMed · Jul 8 2025
Reducing the radiation dose from PET imaging is essential to minimize cancer risks; however, it often leads to increased noise and degraded image quality, compromising diagnostic reliability. Recent advances in deep learning have shown promising results in addressing these limitations through effective denoising. However, existing networks trained on specific noise levels often fail to generalize across diverse acquisition conditions, and training multiple models for different noise levels is impractical due to data and computational constraints. This study aimed to develop a supervised Swin Transformer-based unified noise-aware network (ST-UNN) that handles diverse noise levels and reconstructs high-quality images in low-dose PET imaging. ST-UNN incorporates multiple sub-networks, each designed to address a specific noise level ranging from 1% to 10%, and an adaptive weighting mechanism that dynamically integrates their outputs to achieve effective denoising. The model was trained and evaluated on a PET/CT dataset encompassing the entire head and malignant lesions in the head and neck region. Performance was assessed using a combination of structural and statistical metrics, including the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), mean Standardized Uptake Value (SUVmean) bias, SUVmax bias, and Root Mean Square Error (RMSE). This comprehensive evaluation ensured reliable results for both global and localized regions within PET images. ST-UNN consistently outperformed conventional networks, particularly in ultra-low-dose scenarios. At the 1% count level, it achieved a PSNR of 34.77, an RMSE of 0.05, and an SSIM of 0.97, notably surpassing the baseline networks; it also achieved the lowest SUVmean bias (0.08) and lesion RMSE (0.12) at this level. Across all count levels, ST-UNN maintained high performance and low error, demonstrating strong generalization and diagnostic integrity. ST-UNN offers a scalable, transformer-based solution for low-dose PET imaging: by dynamically integrating sub-networks, it effectively addresses noise variability and provides superior image quality, advancing the capabilities of low-dose and dynamic PET imaging.
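
The adaptive weighting mechanism can be pictured as a gating head that predicts one mixing weight per noise-level-specific sub-network and blends their outputs. The module below is a minimal sketch of that idea, assuming single-channel PET input; the gate design and sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NoiseAwareEnsemble(nn.Module):
    """Blend K sub-networks (one per count level) with input-dependent weights."""

    def __init__(self, subnets: nn.ModuleList):
        super().__init__()
        self.subnets = subnets
        self.gate = nn.Sequential(      # tiny gating head; assumes 1-channel input
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1, len(subnets)), nn.Softmax(dim=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x)                                       # (B, K)
        outs = torch.stack([net(x) for net in self.subnets],
                           dim=1)                              # (B, K, C, H, W)
        return (w[:, :, None, None, None] * outs).sum(dim=1)   # weighted sum
```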

Usefulness of compressed sensing coronary magnetic resonance angiography with deep learning reconstruction.

Tabo K, Kido T, Matsuda M, Tokui S, Mizogami G, Takimoto Y, Matsumoto M, Miyoshi M, Kido T

PubMed · Jul 7 2025
Coronary magnetic resonance angiography (CMRA) scans are generally time-consuming. CMRA with compressed sensing (CS) and artificial intelligence (AI) (CSAI CMRA) is expected to shorten the imaging time while maintaining image quality. This study aimed to evaluate the usefulness of CS and AI for non-contrast CMRA. Twenty volunteers underwent both CS and conventional CMRA. Conventional CMRA employed parallel imaging (PI) with an acceleration factor of 2; CS CMRA employed a combination of PI and CS with an acceleration factor of 3. Deep learning reconstruction was performed offline on the CS CMRA data after scanning, which was defined as CSAI CMRA. We compared the imaging time, image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and vessel sharpness for each CMRA scan. The CS CMRA scan time was significantly shorter than that of conventional CMRA (460 s [343, 753 s] vs. 727 s [567, 939 s], p < 0.001). The image quality scores of the left anterior descending artery (LAD) and left circumflex artery (LCX) were significantly higher for conventional CMRA (LAD: 3.3 ± 0.7, LCX: 3.3 ± 0.7) and CSAI CMRA (LAD: 3.7 ± 0.6, LCX: 3.5 ± 0.7) than for CS CMRA (LAD: 2.9 ± 0.6, LCX: 2.9 ± 0.6) (p < 0.05). The right coronary artery scores did not differ among the three groups (p = 0.087). The SNR and CNR were significantly higher for CSAI CMRA (SNR: 12.3 [9.7, 13.7], CNR: 12.3 [10.5, 14.5]) and CS CMRA (SNR: 10.5 [8.2, 12.6], CNR: 9.5 [7.9, 12.6]) than for conventional CMRA (SNR: 9.0 [7.8, 11.1], CNR: 7.7 [6.0, 10.1]) (p < 0.01). Vessel sharpness was significantly higher for CSAI CMRA (LAD: 0.87 [0.78, 0.91]) (p < 0.05), with no significant difference between CS CMRA (LAD: 0.77 [0.71, 0.83]) and conventional CMRA (LAD: 0.77 [0.71, 0.86]). CSAI CMRA can shorten the imaging time while maintaining good image quality.
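
For reference, the SNR and CNR figures above are conventionally computed from region-of-interest statistics; a minimal sketch follows. The ROI choices (blood pool, adjacent tissue, background) reflect common CMRA practice rather than this paper's exact measurement protocol, and background-noise SD is only an approximation under parallel imaging.

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR = mean signal intensity / SD of background noise."""
    return float(signal_roi.mean() / noise_roi.std())

def cnr(blood_roi: np.ndarray, tissue_roi: np.ndarray,
        noise_roi: np.ndarray) -> float:
    """CNR = (blood signal - surrounding tissue signal) / SD of noise."""
    return float((blood_roi.mean() - tissue_roi.mean()) / noise_roi.std())
```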

2-D Stationary Wavelet Transform and 2-D Dual-Tree DWT for MRI Denoising.

Talbi M, Nasraoui B, Alfaidi A

PubMed · Jul 7 2025
Noise can be introduced into a digital image during acquisition, transmission, and processing, so it must be removed before further processing. This study aims to denoise noisy images, including Magnetic Resonance Images (MRIs), with a proposed approach based on the 2-D Stationary Wavelet Transform (SWT 2-D) and the 2-D Dual-Tree Discrete Wavelet Transform (DWT). The first step applies the 2-D Dual-Tree DWT to the noisy image to obtain noisy wavelet coefficients. The second step denoises each of these coefficients with an SWT 2-D-based denoising technique. The denoised image is finally obtained by applying the inverse 2-D Dual-Tree DWT to the denoised coefficients. The proposed approach is evaluated against four denoising techniques from the literature: thresholding in the SWT 2-D domain, a deep-neural-network-based technique, soft thresholding in the 2-D Dual-Tree DWT domain, and the Non-Local Means filter. All five methods were applied to a set of noisy grey-scale images and noisy MRIs, and the results were measured in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Normalized Mean Square Error (NMSE), and Feature Similarity (FSIM). The proposed approach outperforms the four reference techniques, achieving the highest PSNR, SSIM, and FSIM values and the lowest NMSE values. Moreover, at noise levels σ = 10 or σ = 20, it removes the noise while introducing only slight distortions in the details of the original images; at σ = 30 or σ = 40, it removes most of the noise but introduces some distortions.
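
The SWT-based thresholding building block of this approach can be sketched with PyWavelets; the dual-tree front end is omitted here (it requires a separate package such as `dtcwt`), and the wavelet, level, and universal threshold are illustrative choices, not the paper's parameters.

```python
import numpy as np
import pywt

def swt2_denoise(img: np.ndarray, wavelet: str = "db4",
                 level: int = 2) -> np.ndarray:
    """Soft-threshold the detail bands of a 2-D stationary wavelet transform.
    Image dimensions must be divisible by 2**level."""
    coeffs = pywt.swt2(img, wavelet, level=level)
    # Noise sigma from the finest diagonal detail band (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1][1][2])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))      # universal threshold
    denoised = [(cA, tuple(pywt.threshold(d, thr, mode="soft")
                           for d in details))
                for cA, details in coeffs]
    return pywt.iswt2(denoised, wavelet)
```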

Self-supervised Deep Learning for Denoising in Ultrasound Microvascular Imaging

Lijie Huang, Jingyi Yin, Jingke Zhang, U-Wai Lok, Ryan M. DeRuiter, Jieyang Jin, Kate M. Knoll, Kendra E. Petersen, James D. Krier, Xiang-yang Zhu, Gina K. Hesley, Kathryn A. Robinson, Andrew J. Bentall, Thomas D. Atwell, Andrew D. Rule, Lilach O. Lerman, Shigao Chen, Chengwu Huang

arXiv preprint · Jul 7 2025
Ultrasound microvascular imaging (UMI) is often hindered by low signal-to-noise ratio (SNR), especially in contrast-free or deep tissue scenarios, which impairs subsequent vascular quantification and reliable disease diagnosis. To address this challenge, we propose Half-Angle-to-Half-Angle (HA2HA), a self-supervised denoising framework specifically designed for UMI. HA2HA constructs training pairs from complementary angular subsets of beamformed radio-frequency (RF) blood flow data, across which vascular signals remain consistent while noise varies. HA2HA was trained using in-vivo contrast-free pig kidney data and validated across diverse datasets, including contrast-free and contrast-enhanced data from pig kidneys, as well as human liver and kidney. An improvement exceeding 15 dB in both contrast-to-noise ratio (CNR) and SNR was observed, indicating a substantial enhancement in image quality. In addition to power Doppler imaging, denoising directly in the RF domain is also beneficial for other downstream processing such as color Doppler imaging (CDI). CDI results of human liver derived from the HA2HA-denoised signals exhibited improved microvascular flow visualization, with a suppressed noisy background. HA2HA offers a label-free, generalizable, and clinically applicable solution for robust vascular imaging in both contrast-free and contrast-enhanced UMI.
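
The pairing trick at the heart of HA2HA can be sketched as follows: beamform two complementary halves of the transmit angles so that the vascular signal is shared but the noise realizations differ, then train a network to map one half to the other, Noise2Noise-style. In the sketch, `beamform` is a placeholder for the actual delay-and-sum pipeline, and the even/odd split and MSE loss are assumptions.

```python
import torch
import torch.nn.functional as F

def ha2ha_pair(rf_angles: torch.Tensor, beamform):
    """rf_angles: per-angle RF data stacked on dim 0 (n_angles, ...)."""
    half_a = beamform(rf_angles[0::2])   # even-indexed transmit angles
    half_b = beamform(rf_angles[1::2])   # odd-indexed transmit angles
    return half_a, half_b

def training_step(model, rf_angles, beamform, optimizer):
    a, b = ha2ha_pair(rf_angles, beamform)
    loss = F.mse_loss(model(a), b)       # predict one half from the other
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```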

Introducing Image-Space Preconditioning in the Variational Formulation of MRI Reconstructions

Bastien Milani, Jean-Baptist Ledoux, Berk Can Acikgoz, Xavier Richard

arXiv preprint · Jul 7 2025
The aim of the present article is to enrich the understanding of iterative magnetic resonance imaging (MRI) reconstructions, including compressed sensing (CS) and iterative deep learning (DL) reconstructions, by describing them in the general framework of finite-dimensional inner-product spaces. In particular, we show that image-space preconditioning (ISP) and data-space preconditioning (DSP) can be formulated as non-conventional inner products. The main gain of our reformulation is an embedding of ISP in the variational formulation of the MRI reconstruction problem (in an algorithm-independent way), which in principle allows ISP to be propagated naturally and systematically to all iterative reconstructions, including the many iterative DL and CS reconstructions where preconditioning is lacking. The way in which we apply linear-algebraic tools to MRI reconstruction in this article is novel. A secondary aim of the article is to offer didactic material to scientists who are new to the field of MRI reconstruction. Since we explore some mathematical concepts of reconstruction, we take the opportunity to recall principles that may be obvious to experts but hard to find in the literature for beginners: the description of many mathematical tools of MRI reconstruction is fragmented across the literature, or missing altogether because it is considered general knowledge. Some of these concepts can be found in mathematics textbooks, but not in a form oriented toward MRI; we think, for example, of conjugate-gradient descent, the notion of a derivative with respect to a non-conventional inner product, or simply the notion of an adjoint. The authors therefore believe it is beneficial for their field of research to dedicate some space to such didactic material.
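
The article's central observation, that a preconditioner amounts to a change of inner product, can be illustrated with the gradient. A sketch in assumed notation (not copied from the article), for a symmetric positive-definite matrix P:

```latex
\[
  \langle u, v \rangle_{P} = u^{H} P\, v , \qquad P \succ 0 .
\]
% The gradient with respect to this inner product is the vector g satisfying
% df(x)[h] = <g, h>_P for all h; comparing with the Euclidean gradient gives
\[
  \mathrm{d}f(x)[h]
  = \langle \nabla f(x),\, h \rangle
  = \langle P^{-1} \nabla f(x),\, h \rangle_{P}
  \quad\Longrightarrow\quad
  \nabla_{P} f(x) = P^{-1} \nabla f(x) .
\]
% Hence gradient descent in the P-inner-product is exactly preconditioned
% gradient descent, which is how an image-space preconditioner propagates
% into any gradient-based iterative reconstruction:
\[
  x_{k+1} = x_{k} - \tau\, P^{-1} \nabla f(x_{k}) .
\]
```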