
Introducing Image-Space Preconditioning in the Variational Formulation of MRI Reconstructions

Bastien Milani, Jean-Baptist Ledoux, Berk Can Acikgoz, Xavier Richard

arXiv preprint, Jul 7 2025
The aim of the present article is to enrich the understanding of iterative magnetic resonance imaging (MRI) reconstructions, including compressed sensing (CS) and iterative deep learning (DL) reconstructions, by describing them in the general framework of finite-dimensional inner-product spaces. In particular, we show that image-space preconditioning (ISP) and data-space preconditioning (DSP) can be formulated as non-conventional inner products. The main gain of our reformulation is an embedding of ISP in the variational formulation of the MRI reconstruction problem (in an algorithm-independent way), which in principle allows ISP to be propagated naturally and systematically through all iterative reconstructions, including many iterative DL and CS reconstructions where preconditioning is lacking. The way in which we apply linear-algebraic tools to MRI reconstructions in this article is a novelty. A secondary aim of the article is to offer didactic material to scientists who are new to the field of MRI reconstruction. Since we explore some mathematical concepts of reconstruction, we take the opportunity to recall principles that may be well understood by experts but hard to find in the literature for beginners. In fact, the description of many mathematical tools of MRI reconstruction is fragmented across the literature, or sometimes missing because it is considered general knowledge. Further, some of those concepts can be found in mathematics textbooks, but not in a form oriented toward MRI: we think, for example, of conjugate gradient descent, the notion of the derivative with respect to non-conventional inner products, or simply the notion of the adjoint. The authors therefore believe that it is beneficial for their field of research to dedicate some space to such didactic material.
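The weighted-adjoint identity underlying this kind of reformulation can be sketched numerically. This is our own illustration (not the authors' code), using a diagonal image-space preconditioner P: with the weighted inner product <x, y>_P = x^T P y, the adjoint of a linear operator A becomes P^{-1} A^T.

```python
import numpy as np

# With <x, y>_P = x^T P y on image space, the adjoint A* of A satisfies
# <Ax, y> = <x, A*y>_P, which gives A* = P^{-1} A^T (real case).
rng = np.random.default_rng(0)
n, m = 4, 6
A = rng.standard_normal((m, n))        # forward operator (image -> data)
w = rng.uniform(1.0, 2.0, size=n)      # diagonal image-space preconditioner
P = np.diag(w)

A_star = np.linalg.inv(P) @ A.T        # adjoint w.r.t. <., .>_P

x = rng.standard_normal(n)
y = rng.standard_normal(m)
lhs = (A @ x) @ y                      # standard inner product in data space
rhs = x @ (P @ (A_star @ y))           # <x, A*y>_P
assert np.isclose(lhs, rhs)
```

Any algorithm written in terms of adjoints (e.g. conjugate gradient) then inherits the preconditioner simply by swapping the inner product.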

Self-supervised Deep Learning for Denoising in Ultrasound Microvascular Imaging

Lijie Huang, Jingyi Yin, Jingke Zhang, U-Wai Lok, Ryan M. DeRuiter, Jieyang Jin, Kate M. Knoll, Kendra E. Petersen, James D. Krier, Xiang-yang Zhu, Gina K. Hesley, Kathryn A. Robinson, Andrew J. Bentall, Thomas D. Atwell, Andrew D. Rule, Lilach O. Lerman, Shigao Chen, Chengwu Huang

arXiv preprint, Jul 7 2025
Ultrasound microvascular imaging (UMI) is often hindered by low signal-to-noise ratio (SNR), especially in contrast-free or deep tissue scenarios, which impairs subsequent vascular quantification and reliable disease diagnosis. To address this challenge, we propose Half-Angle-to-Half-Angle (HA2HA), a self-supervised denoising framework specifically designed for UMI. HA2HA constructs training pairs from complementary angular subsets of beamformed radio-frequency (RF) blood flow data, across which vascular signals remain consistent while noise varies. HA2HA was trained using in-vivo contrast-free pig kidney data and validated across diverse datasets, including contrast-free and contrast-enhanced data from pig kidneys, as well as human liver and kidney. An improvement exceeding 15 dB in both contrast-to-noise ratio (CNR) and SNR was observed, indicating a substantial enhancement in image quality. In addition to power Doppler imaging, denoising directly in the RF domain is also beneficial for other downstream processing such as color Doppler imaging (CDI). CDI results of human liver derived from the HA2HA-denoised signals exhibited improved microvascular flow visualization, with a suppressed noisy background. HA2HA offers a label-free, generalizable, and clinically applicable solution for robust vascular imaging in both contrast-free and contrast-enhanced UMI.
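The pairing idea behind HA2HA can be sketched as follows. This is our simplified illustration (variable names and noise model are ours): frames from complementary angular subsets share the vascular signal but carry (approximately) independent noise, so the two half-angle compounds can serve as input/target in a Noise2Noise-style loss.

```python
import numpy as np

# Simulate per-angle beamformed frames: shared flow signal + independent noise.
rng = np.random.default_rng(1)
n_angles, h, w = 8, 16, 16
signal = rng.standard_normal((h, w))                           # shared signal
frames = signal + 0.5 * rng.standard_normal((n_angles, h, w))  # angle noise

half_a = frames[0::2].mean(axis=0)   # compound of even-indexed angles
half_b = frames[1::2].mean(axis=0)   # compound of odd-indexed angles

# A denoiser f would be trained to minimize ||f(half_a) - half_b||^2; here we
# only verify that the two halves agree on the signal, not on the noise.
corr = np.corrcoef(half_a.ravel(), half_b.ravel())[0, 1]
assert corr > 0.5
```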

PGMI assessment in mammography: AI software versus human readers.

Santner T, Ruppert C, Gianolini S, Stalheim JG, Frei S, Hondl M, Fröhlich V, Hofvind S, Widmann G

PubMed paper, Jul 5 2025
The aim of this study was to evaluate human inter-reader agreement of parameters included in PGMI (perfect-good-moderate-inadequate) classification of screening mammograms and explore the role of artificial intelligence (AI) as an alternative reader. Five radiographers from three European countries independently performed a PGMI assessment of 520 anonymized mammography screening examinations randomly selected from representative subsets from 13 imaging centres within two European countries. As a sixth reader, a dedicated AI software was used. Accuracy, Cohen's kappa, and confusion matrices were calculated to compare the predictions of the software against the individual assessments of the readers, as well as potential discrepancies between them. A questionnaire and a personality test were used to better understand the decision-making processes of the human readers. Significant inter-reader variability among human readers with poor to moderate agreement (κ = -0.018 to κ = 0.41) was observed, with some showing more homogeneous interpretations of single features and overall quality than others. In comparison, the software surpassed human inter-reader agreement in detecting glandular tissue cuts, mammilla deviation, pectoral muscle detection, and pectoral angle measurement, while the remaining features and overall image quality exhibited performance comparable to human assessment. Notably, human inter-reader disagreement in PGMI assessment of mammography is considerable. AI software may already reliably categorize quality. Its potential for standardization and immediate feedback to achieve and monitor high levels of quality in screening programs needs further attention and should be included in future approaches. AI has promising potential for automated assessment of diagnostic image quality. Faster, more representative and more objective feedback may support radiographers in their quality management processes.
Direct transformation of common PGMI workflows into an AI algorithm could be challenging.
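The agreement statistic used in this study, Cohen's kappa, can be sketched as follows; the reader labels below are invented for illustration, not taken from the study data.

```python
import numpy as np

def cohens_kappa(a, b, labels):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                 # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in labels)  # by chance
    return (po - pe) / (1 - pe)

# Hypothetical PGMI labels from two readers on eight mammograms.
reader1 = ["P", "G", "G", "M", "I", "G", "P", "M"]
reader2 = ["P", "G", "M", "M", "I", "G", "G", "M"]
kappa = cohens_kappa(reader1, reader2, labels=["P", "G", "M", "I"])
```

With these toy labels, observed agreement is 6/8 = 0.75 but kappa is lower (about 0.65) because some agreement is expected by chance; that gap is exactly why kappa, not raw accuracy, is reported for inter-reader studies.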

Impact of super-resolution deep learning-based reconstruction for hippocampal MRI: A volunteer and phantom study.

Takada S, Nakaura T, Yoshida N, Uetani H, Shiraishi K, Kobayashi N, Matsuo K, Morita K, Nagayama Y, Kidoh M, Yamashita Y, Takayanagi R, Hirai T

PubMed paper, Jul 5 2025
To evaluate the effects of super-resolution deep learning-based reconstruction (SR-DLR) on thin-slice T2-weighted hippocampal MR image quality using 3 T MRI, in both human volunteers and phantoms. Thirteen healthy volunteers underwent hippocampal MRI at standard and high resolutions. Original (standard-resolution; StR) images were reconstructed with and without deep learning-based reconstruction (DLR) (matrix = 320 × 320), and with SR-DLR (matrix = 960 × 960). High-resolution (HR) images were also reconstructed with and without DLR (matrix = 960 × 960). Contrast, contrast-to-noise ratio (CNR), and septum slope were analyzed. Two radiologists evaluated the images for noise, contrast, artifacts, sharpness, and overall quality. Quantitative and qualitative results are reported as medians and interquartile ranges (IQR). Comparisons used the Wilcoxon signed-rank test with Holm correction. We also scanned an American College of Radiology (ACR) phantom to evaluate the ability of our SR-DLR approach to reduce artifacts induced by zero-padding interpolation (ZIP). SR-DLR exhibited contrast comparable to the original images and significantly higher than that of HR images. Its slope was comparable to that of HR images but significantly steeper than that of StR images (p < 0.01). Furthermore, the CNR of SR-DLR (10.53; IQR: 10.08, 11.69) was significantly superior to that of StR images without DLR (7.5; IQR: 6.4, 8.37), StR images with DLR (8.73; IQR: 7.68, 9.0), HR images without DLR (2.24; IQR: 1.43, 2.38), and HR images with DLR (4.84; IQR: 2.99, 5.43) (p < 0.05). In the phantom study, artifacts induced by ZIP were scarcely observed when using SR-DLR. SR-DLR for hippocampal MRI potentially improves image quality beyond that of actual HR images while reducing acquisition time.
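The CNR figures compared above follow the standard region-of-interest definition, which can be sketched as follows; the ROI values below are made up for illustration, not taken from the study.

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    s = np.asarray(roi_signal, dtype=float)
    b = np.asarray(roi_background, dtype=float)
    return abs(s.mean() - b.mean()) / b.std(ddof=1)

# Hypothetical pixel intensities from two ROIs on a T2-weighted slice.
hippocampus = [112, 118, 115, 120, 117]
adjacent_white_matter = [80, 84, 78, 82, 81]
value = cnr(hippocampus, adjacent_white_matter)
```

The same mean difference yields a higher CNR when the background ROI is less noisy, which is why denoising reconstructions such as SR-DLR raise CNR even at fixed contrast.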

EdgeSRIE: A hybrid deep learning framework for real-time speckle reduction and image enhancement on portable ultrasound systems

Hyunwoo Cho, Jongsoo Lee, Jinbum Kang, Yangmo Yoo

arXiv preprint, Jul 5 2025
Speckle patterns in ultrasound images often obscure anatomical details, leading to diagnostic uncertainty. Recently, various deep learning (DL)-based techniques have been introduced to effectively suppress speckle; however, their high computational costs pose challenges for low-resource devices, such as portable ultrasound systems. To address this issue, we introduce EdgeSRIE, a lightweight hybrid DL framework for real-time speckle reduction and image enhancement in portable ultrasound imaging. The proposed framework consists of two main branches: an unsupervised despeckling branch, which is trained by minimizing a loss function between speckled images, and a deblurring branch, which restores blurred images to sharp images. For hardware implementation, the trained network is quantized to 8-bit integer precision and deployed on a low-resource system-on-chip (SoC) with limited power consumption. In the performance evaluation with phantom and in vivo analyses, EdgeSRIE achieved the highest contrast-to-noise ratio (CNR) and average gradient magnitude (AGM) compared with the other baselines (two rule-based methods and four other DL-based methods). Furthermore, EdgeSRIE enabled real-time inference at over 60 frames per second while satisfying computational requirements (< 20K parameters) on actual portable ultrasound hardware. These results demonstrate the feasibility of EdgeSRIE for real-time, high-quality ultrasound imaging in resource-limited environments.
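The 8-bit quantization step mentioned for the SoC deployment can be sketched as symmetric per-tensor weight quantization; the paper does not detail its scheme, so the scale choice and rounding below are our assumptions.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = np.abs(w).max() / 127.0          # map the largest |weight| to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([-0.8, -0.1, 0.0, 0.35, 0.79], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Round-to-nearest bounds the per-weight error by half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```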

A tailored deep learning approach for early detection of oral cancer using a 19-layer CNN on clinical lip and tongue images.

Liu P, Bagi K

PubMed paper, Jul 4 2025
Early and accurate detection of oral cancer plays a pivotal role in improving patient outcomes. This research introduces a custom-designed, 19-layer convolutional neural network (CNN) for the automated diagnosis of oral cancer using clinical images of the lips and tongue. The methodology integrates advanced preprocessing steps, including min-max normalization and histogram-based contrast enhancement, to optimize image features critical for reliable classification. The model is extensively validated on the publicly available Oral Cancer (Lips and Tongue) Images (OCI) dataset, which is divided into 80% training and 20% testing subsets. Comprehensive performance evaluation employs established metrics: accuracy, sensitivity, specificity, precision, and F1-score. Our CNN architecture achieved an accuracy of 99.54%, sensitivity of 95.73%, specificity of 96.21%, precision of 96.34%, and F1-score of 96.03%, demonstrating substantial improvements over prominent transfer learning benchmarks, including SqueezeNet, AlexNet, Inception, VGG19, and ResNet50, all tested under identical experimental protocols. The model's robust performance, efficient computation, and high reliability underline its practicality for clinical application and support its superiority over existing approaches. This study provides a reproducible pipeline and a new reference point for deep learning-based oral cancer detection, facilitating translation into real-world healthcare environments and promising enhanced diagnostic confidence.
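The min-max normalization step named in the preprocessing pipeline can be sketched as follows; the [0, 1] target range is our assumption (the abstract does not state it).

```python
import numpy as np

def min_max_normalize(img):
    """Rescale image intensities linearly so min -> 0 and max -> 1."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # constant image: avoid divide-by-zero
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)

img = np.array([[10, 60], [110, 210]], dtype=np.uint8)
out = min_max_normalize(img)
```

Per-image rescaling like this removes exposure differences between clinical photographs before they reach the CNN, at the cost of discarding absolute intensity information.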

CT-Mamba: A hybrid convolutional State Space Model for low-dose CT denoising.

Li L, Wei W, Yang L, Zhang W, Dong J, Liu Y, Huang H, Zhao W

PubMed paper, Jul 3 2025
Low-dose CT (LDCT) significantly reduces the radiation dose received by patients; however, dose reduction introduces additional noise and artifacts. Currently, denoising methods based on convolutional neural networks (CNNs) face limitations in long-range modeling capabilities, while Transformer-based denoising methods, although capable of powerful long-range modeling, suffer from high computational complexity. Furthermore, the denoised images predicted by deep learning-based techniques inevitably exhibit differences in noise distribution compared to normal-dose CT (NDCT) images, which can also impact the final image quality and diagnostic outcomes. This paper proposes CT-Mamba, a hybrid convolutional State Space Model for LDCT image denoising. The model combines the local feature extraction advantages of CNNs with Mamba's strength in capturing long-range dependencies, enabling it to capture both local details and global context. Additionally, we introduce an innovative spatially coherent Z-shaped scanning scheme to ensure spatial continuity between adjacent pixels in the image. We design a Mamba-driven deep noise power spectrum (NPS) loss function to guide model training, ensuring that the noise texture of the denoised LDCT images closely resembles that of NDCT images, thereby enhancing overall image quality and diagnostic value. Experimental results demonstrate that CT-Mamba performs excellently in reducing noise in LDCT images, enhancing detail preservation, and optimizing noise texture distribution, and that its outputs exhibit higher statistical similarity with the radiomics features of NDCT images. The proposed CT-Mamba demonstrates outstanding performance in LDCT denoising and holds promise as a representative approach for applying the Mamba framework to LDCT denoising tasks.
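The noise power spectrum that the NPS loss builds on can be sketched as the squared Fourier magnitude of the noise residual; this simplified single-image 2D version is our illustration, not the paper's Mamba-driven loss.

```python
import numpy as np

def noise_power_spectrum(noisy, clean):
    """NPS of the noise residual: |FFT(residual)|^2, normalized by pixel count."""
    residual = noisy - clean
    return np.abs(np.fft.fft2(residual)) ** 2 / residual.size

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
noisy = clean + rng.standard_normal((32, 32))   # white noise residual
nps = noise_power_spectrum(noisy, clean)

# Parseval's relation: the mean NPS equals the residual's mean squared value.
assert np.isclose(nps.mean(), np.mean((noisy - clean) ** 2))
```

An NPS-matching loss would penalize the distance between the denoised image's residual spectrum and a reference NDCT noise spectrum, steering the noise texture rather than only its magnitude.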

Integrating MobileNetV3 and SqueezeNet for Multi-class Brain Tumor Classification.

Kantu S, Kaja HS, Kukkala V, Aly SA, Sayed K

PubMed paper, Jul 3 2025
Brain tumors pose a critical health threat requiring timely and accurate classification for effective treatment. Traditional MRI analysis is labor-intensive and prone to variability, necessitating reliable automated solutions. This study explores lightweight deep learning models for multi-class brain tumor classification across four categories: glioma, meningioma, pituitary tumors, and no tumor. We investigate the performance of MobileNetV3 and SqueezeNet individually, and a feature-fusion hybrid model that combines their embedding layers. We utilized a publicly available MRI dataset containing 7023 images with a consistent internal split (65% training, 17% validation, 18% test) to ensure reliable evaluation. MobileNetV3 offers deep semantic understanding through its expressive features, while SqueezeNet provides minimal computational overhead. Their feature-level integration creates a balanced approach between diagnostic accuracy and deployment efficiency. Experiments conducted with consistent hyperparameters and preprocessing showed MobileNetV3 achieved the highest test accuracy (99.31%) while maintaining a low parameter count (3.47M), making it suitable for real-world deployment. Grad-CAM visualizations were employed for model explainability, highlighting tumor-relevant regions and helping visualize the specific areas contributing to predictions. Our proposed models outperform several baseline architectures like VGG16 and InceptionV3, achieving high accuracy with significantly fewer parameters. These results demonstrate that well-optimized lightweight networks can deliver accurate and interpretable brain tumor classification.
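The feature-level fusion described above can be sketched as concatenating the two backbones' pooled embeddings ahead of a shared classifier head; the embedding sizes and the linear head below are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
batch = 4
# Stand-ins for pooled embeddings from the two backbones (dims assumed).
feat_mobilenet = rng.standard_normal((batch, 576))   # e.g. MobileNetV3 pooled
feat_squeezenet = rng.standard_normal((batch, 512))  # e.g. SqueezeNet pooled

# Feature fusion: concatenate along the channel axis, then classify jointly.
fused = np.concatenate([feat_mobilenet, feat_squeezenet], axis=1)
W = rng.standard_normal((fused.shape[1], 4))  # glioma/meningioma/pituitary/none
logits = fused @ W
assert fused.shape == (batch, 1088) and logits.shape == (batch, 4)
```

Concatenation keeps both representations intact and lets the head learn which backbone to trust per feature, at the cost of a wider classifier than either model alone.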

Towards reliable WMH segmentation under domain shift: An application study using maximum entropy regularization to improve uncertainty estimation.

Matzkin F, Larrazabal A, Milone DH, Dolz J, Ferrante E

PubMed paper, Jul 2 2025
Accurate segmentation of white matter hyperintensities (WMH) is crucial for clinical decision-making, particularly in the context of multiple sclerosis. However, domain shifts, such as variations in MRI machine types or acquisition parameters, pose significant challenges to model calibration and uncertainty estimation. This comparative study investigates the impact of domain shift on WMH segmentation, proposing maximum-entropy regularization techniques to enhance model calibration and uncertainty estimation. The purpose is to identify errors appearing after model deployment in clinical scenarios using predictive uncertainty as a proxy measure, since it does not require ground-truth labels to be computed. We conducted experiments using a classic U-Net architecture and evaluated maximum entropy regularization schemes to improve model calibration under domain shift on two publicly available datasets: the WMH Segmentation Challenge and the 3D-MR-MS dataset. Performance is assessed with Dice coefficient, Hausdorff distance, expected calibration error, and entropy-based uncertainty estimates. Entropy-based uncertainty estimates can anticipate segmentation errors, both in-distribution and out-of-distribution, with maximum-entropy regularization further strengthening the correlation between uncertainty and segmentation performance, while also improving model calibration under domain shift. Maximum-entropy regularization improves uncertainty estimation for WMH segmentation under domain shift. By strengthening the relationship between predictive uncertainty and segmentation errors, these methods allow models to better flag unreliable predictions without requiring ground-truth annotations. Additionally, maximum-entropy regularization contributes to better model calibration, supporting more reliable and safer deployment of deep learning models in multi-center and heterogeneous clinical environments.
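The entropy-based uncertainty estimate described above can be sketched per voxel from the softmax output; this simplified two-class version is our illustration of the idea, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))   # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def predictive_entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of the predictive distribution at each voxel."""
    return -(p * np.log(p + eps)).sum(axis=axis)

# Two voxels: one confident (peaked logits), one uncertain (near-flat logits).
logits = np.array([[4.0, 0.0],
                   [0.2, 0.1]])
p = softmax(logits)
h = predictive_entropy(p)
assert h[0] < h[1]   # the confident voxel has lower entropy

# A maximum-entropy regularizer adds -lambda * mean(entropy) to the training
# loss, rewarding entropy to counteract overconfident predictions.
```

At deployment, high-entropy regions flag likely segmentation errors without any ground-truth labels, which is the proxy the study evaluates under domain shift.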

Clinical value of the 70-kVp ultra-low-dose CT pulmonary angiography with deep learning image reconstruction.

Zhang Y, Wang L, Yuan D, Qi K, Zhang M, Zhang W, Gao J, Liu J

PubMed paper, Jul 2 2025
This study aims to assess the feasibility of "double-low" (low radiation dose and low contrast media dose) CT pulmonary angiography (CTPA) based on deep-learning image reconstruction (DLIR) algorithms. One hundred consecutive patients (41 females; average age 60.9 years, range 18-90) were prospectively scanned on multi-detector CT systems. Fifty patients in the conventional-dose group (CD group) underwent CTPA with a 100 kVp protocol using a traditional iterative reconstruction algorithm, and 50 patients in the low-dose group (LD group) underwent CTPA with a 70 kVp DLIR protocol. Radiation and contrast agent doses were recorded and compared between groups. Objective parameters were measured and compared. Two radiologists separately evaluated images for overall image quality, artifacts, and image contrast on a 5-point scale. The furthest visible branches were compared between groups. Compared to the CD group, the LD group reduced the dose-length product by 80.3% (p < 0.01) and the contrast media dose by 33.3%. CT values, SD values, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) showed no statistically significant differences (all p > 0.05) between the LD and CD groups. The overall image quality scores were comparable between the LD and CD groups (p > 0.05), with good inter-reader agreement (κ = 0.75). More peripheral pulmonary vessels could be assessed in the LD group than in the CD group. 70 kVp acquisition combined with DLIR reconstruction for CTPA can further reduce radiation and contrast agent dose while maintaining image quality and improving the visibility of distal pulmonary artery branches. Question: Elevated radiation exposure and substantial doses of contrast media during CT pulmonary angiography (CTPA) augment patient risks. Findings: The "double-low" CTPA protocol can diminish radiation doses by 80.3% and contrast doses by one-third while maintaining image quality.
Clinical relevance: With deep learning algorithms, we confirmed that CTPA images maintained excellent quality despite reduced radiation and contrast dosages, helping to reduce radiation exposure and kidney burden on patients. The "double-low" CTPA protocol, complemented by deep learning image reconstruction, prioritizes patient safety.