
Unsupervised learning for inverse problems in computed tomography

Laura Hellwege, Johann Christopher Engster, Moritz Schaar, Thorsten M. Buzug, Maik Stille

arXiv preprint · Aug 7 2025
This study presents an unsupervised deep learning approach for computed tomography (CT) image reconstruction, leveraging the inherent similarities between deep neural network training and conventional iterative reconstruction methods. By incorporating forward and backward projection layers within the deep learning framework, we demonstrate the feasibility of reconstructing images from projection data without relying on ground-truth images. Our method is evaluated on the two-dimensional 2DeteCT dataset, showcasing superior performance in terms of mean squared error (MSE) and structural similarity index (SSIM) compared to traditional filtered backprojection (FBP) and maximum likelihood (ML) reconstruction techniques. Additionally, our approach significantly reduces reconstruction time, making it a promising alternative for real-time medical imaging applications. Future work will focus on extending this methodology to three-dimensional reconstructions and enhancing the adaptability of the projection geometry.
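A minimal sketch of the core idea, under stated assumptions: the reconstruction network is trained only against the measured projection data, so no ground-truth image enters the loss. A toy dense matrix stands in for the differentiable forward/backward projection layers described in the abstract; a real implementation would use a Radon-transform operator.

```python
# Sketch only: unsupervised training driven purely by projection consistency.
# The dense matrix A is an assumed stand-in for a differentiable CT forward projector.
import torch

n_pix, n_rays = 64 * 64, 90 * 64                     # toy image / sinogram sizes (assumed)
A = torch.randn(n_rays, n_pix) / n_rays**0.5          # stand-in forward projection operator
y = A @ torch.rand(n_pix)                             # simulated measured projection data

net = torch.nn.Sequential(                            # small reconstruction network
    torch.nn.Linear(n_rays, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, n_pix),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    x_hat = net(y)                                    # image estimate from projections
    loss = torch.mean((A @ x_hat - y) ** 2)           # data-consistency loss only: no ground-truth image
    opt.zero_grad(); loss.backward(); opt.step()
```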

A novel approach for CT image smoothing: Quaternion Bilateral Filtering for kernel conversion.

Nasr M, Piórkowski A, Brzostowski K, El-Samie FEA

PubMed · Aug 7 2025
Denoising reconstructed Computed Tomography (CT) images without access to raw projection data remains a significant difficulty in medical imaging, particularly when utilizing sharp or medium reconstruction kernels that generate high-frequency noise. This work introduces an innovative method that integrates quaternion mathematics with bilateral filtering to resolve this issue. The proposed Quaternion Bilateral Filter (QBF) effectively maintains anatomical structures and mitigates noise caused by the kernel by expressing CT scans in quaternion form, with the red, green, and blue channels encoded together. Compared to conventional methods that depend on raw data or grayscale filtering, our approach functions directly on reconstructed sharp kernel images. It converts them to mimic the quality of soft-kernel outputs, obtained with kernels such as B30f, using paired data from the same patients. The efficacy of the QBF is evidenced by both full-reference metrics (Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE)) and no-reference perceptual metrics (Naturalness Image Quality Evaluator (NIQE), Blind Referenceless Image Spatial Quality Evaluator (BRISQUE), and Perception-based Image Quality Evaluator (PIQE)). The results indicate that the QBF demonstrates improved denoising efficacy compared to traditional Bilateral Filter (BF), Non-Local Means (NLM), wavelet, and Convolutional Neural Network (CNN)-based processing, achieving an SSIM of 0.96 and a PSNR of 36.3 on B50f reconstructions. Additionally, segmentation-based visual validation verifies that QBF-filtered outputs maintain essential structural details necessary for subsequent diagnostic tasks. This study emphasizes the importance of quaternion-based filtering as a lightweight, interpretable, and efficient substitute for deep learning models in post-reconstruction CT image enhancement.
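The paper's QBF is not reproduced here; the sketch below only illustrates the underlying idea of a bilateral filter whose range weight is computed on the norm of a quaternion-valued (multi-channel) pixel difference rather than a single grayscale difference. Window radius and sigma values are illustrative assumptions.

```python
# Hedged sketch of a quaternion-style bilateral filter: each pixel is treated as a pure
# quaternion whose three imaginary components hold the colour channels, and the range
# kernel uses the quaternion-difference norm. Parameters are assumptions, not the paper's.
import numpy as np

def quaternion_bilateral(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    """img: H x W x 3 float array (channels as quaternion imaginary parts)."""
    h, w, _ = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))        # spatial Gaussian kernel
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="reflect")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            diff = patch - img[i, j]                              # quaternion difference
            qnorm = np.sqrt((diff**2).sum(axis=2))                # |q_neighbour - q_centre|
            wgt = spatial * np.exp(-qnorm**2 / (2 * sigma_r**2))  # joint spatial-range weight
            out[i, j] = (wgt[..., None] * patch).sum(axis=(0, 1)) / wgt.sum()
    return out
```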

HiFi-Mamba: Dual-Stream W-Laplacian Enhanced Mamba for High-Fidelity MRI Reconstruction

Hongli Chen, Pengcheng Fang, Yuxia Chen, Yingxuan Ren, Jing Hao, Fangfang Tang, Xiaohao Cai, Shanshan Shan, Feng Liu

arXiv preprint · Aug 7 2025
Reconstructing high-fidelity MR images from undersampled k-space data remains a challenging problem in MRI. While Mamba variants for vision tasks offer promising long-range modeling capabilities with linear-time complexity, their direct application to MRI reconstruction inherits two key limitations: (1) insensitivity to high-frequency anatomical details; and (2) reliance on redundant multi-directional scanning. To address these limitations, we introduce High-Fidelity Mamba (HiFi-Mamba), a novel dual-stream Mamba-based architecture comprising stacked W-Laplacian (WL) and HiFi-Mamba blocks. Specifically, the WL block performs fidelity-preserving spectral decoupling, producing complementary low- and high-frequency streams. This separation enables the HiFi-Mamba block to focus on low-frequency structures, enhancing global feature modeling. Concurrently, the HiFi-Mamba block selectively integrates high-frequency features through adaptive state-space modulation, preserving comprehensive spectral details. To eliminate the scanning redundancy, the HiFi-Mamba block adopts a streamlined unidirectional traversal strategy that preserves long-range modeling capability with improved computational efficiency. Extensive experiments on standard MRI reconstruction benchmarks demonstrate that HiFi-Mamba consistently outperforms state-of-the-art CNN-based, Transformer-based, and other Mamba-based models in reconstruction accuracy while maintaining a compact and efficient model design.
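As a rough illustration of the "spectral decoupling into complementary streams" idea: the W-Laplacian block itself is a learned, fidelity-preserving module, so this generic Fourier-domain split is only an assumption-laden stand-in showing how an image can be separated into low- and high-frequency streams that are then processed by different branches.

```python
# Illustrative sketch only: split an MR image into complementary low- and high-frequency
# streams via a Gaussian k-space mask. Not the paper's W-Laplacian block.
import numpy as np

def spectral_split(image, sigma=0.1):
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low_mask = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))   # smooth low-pass mask in k-space
    k = np.fft.fft2(image)
    low = np.fft.ifft2(k * low_mask).real                  # low-frequency stream
    high = image - low                                      # complementary high-frequency stream
    return low, high
```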

Towards Globally Predictable k-Space Interpolation: A White-box Transformer Approach

Chen Luo, Qiyu Jin, Taofeng Xie, Xuemei Wang, Huayu Wang, Congcong Liu, Liming Tang, Guoqing Chen, Zhuo-Xu Cui, Dong Liang

arXiv preprint · Aug 6 2025
Interpolating missing data in k-space is essential for accelerating imaging. However, existing methods, including convolutional neural network-based deep learning, primarily exploit local predictability while overlooking the inherent global dependencies in k-space. Recently, Transformers have demonstrated remarkable success in natural language processing and image analysis due to their ability to capture long-range dependencies. This inspires the use of Transformers for k-space interpolation to better exploit its global structure. However, their lack of interpretability raises concerns regarding the reliability of interpolated data. To address this limitation, we propose GPI-WT, a white-box Transformer framework based on Globally Predictable Interpolation (GPI) for k-space. Specifically, we formulate GPI from the perspective of annihilation as a novel k-space structured low-rank (SLR) model. The global annihilation filters in the SLR model are treated as learnable parameters, and the subgradients of the SLR model naturally induce a learnable attention mechanism. By unfolding the subgradient-based optimization algorithm of SLR into a cascaded network, we construct the first white-box Transformer specifically designed for accelerated MRI. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in k-space interpolation accuracy while providing superior interpretability.
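A toy 1D illustration of the structured low-rank (SLR) property the method builds on: k-space samples that satisfy a short annihilation relation stack into a Hankel matrix of low rank, which is what makes the missing samples globally predictable. The paper's 2D formulation with learnable global filters is considerably more general.

```python
# Sketch of the SLR / annihilation idea in 1D: a sum of two damped exponentials is
# annihilated by a length-3 filter, so its Hankel matrix has numerical rank 2.
import numpy as np

n, filt_len = 128, 8
t = np.arange(n)
k = 0.9**t * np.exp(2j * np.pi * 0.11 * t) + 0.7**t * np.exp(2j * np.pi * 0.31 * t)

hankel = np.array([k[i:i + filt_len] for i in range(n - filt_len + 1)])   # sliding windows
rank = np.linalg.matrix_rank(hankel, tol=1e-6)
print("Hankel matrix shape:", hankel.shape, "numerical rank:", rank)       # rank ~ 2 << filt_len
```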

Machine Learning-Based Reconstruction of 2D MRI for Quantitative Morphometry in Epilepsy

Ratcliffe, C., Taylor, P. N., de Bezenac, C., Das, K., Biswas, S., Marson, A., Keller, S. S.

medRxiv preprint · Aug 6 2025
Introduction: Structural neuroimaging analyses require research-quality images acquired with costly MRI protocols. Isotropic (3D-T1) images are desirable for quantitative analyses; however, a routine compromise in the clinical setting is to acquire anisotropic (2D-T1) analogues for qualitative visual inspection. Machine learning (ML)-based software has shown promise in addressing some of the limitations of 2D-T1 scans in research applications, yet its efficacy in quantitative research is generally poorly understood. Pathology-related abnormalities of the subcortical structures that are overlooked on visual inspection have previously been identified in idiopathic generalised epilepsy (IGE) through quantitative morphometric analyses. As such, IGE biomarkers present a suitable model in which to evaluate the applicability of image preprocessing methods. This study therefore explores subcortical structural biomarkers of IGE, first in our silver-standard 3D-T1 scans, then in 2D-T1 scans that were either untransformed, resampled using a classical interpolation approach, or synthesised with a resolution- and contrast-agnostic ML model (the latter of which is compared to a separate model).

Methods: 2D-T1 and 3D-T1 MRI scans were acquired during the same scanning session for 33 individuals with drug-responsive IGE (age mean 32.16 ± SD 14.20, male n = 14) and 42 individuals with drug-resistant IGE (31.76 ± 11.12, 17), all diagnosed at the Walton Centre NHS Foundation Trust, Liverpool, alongside 39 age- and sex-matched healthy controls (32.32 ± 8.65, 16). The untransformed 2D-T1 scans were resampled into isotropic images using NiBabel (res-T1) and preprocessed into synthetic isotropic images using SynthSR (syn-T1). For the 3D-T1, 2D-T1, res-T1, and syn-T1 images, the recon-all command from FreeSurfer 8.0.0 was used to create parcellations of 174 anatomical regions (equivalent to the 174 regional parcellations provided as part of the DL+DiReCT pipeline), defined by the aseg and Destrieux atlases, and FSL run_first_all was used to segment subcortical surface shapes. The new ML FreeSurfer pipeline, recon-all-clinical, was also tested on the 2D-T1, 3D-T1, and res-T1 images. As a model comparison for SynthSR, the DL+DiReCT pipeline was used to provide segmentations of the 2D-T1 and res-T1 images, including estimates of regional volume and thickness. Spatial overlap and intraclass correlations between the morphometrics of the eight resulting parcellations were first determined, then subcortical surface-shape abnormalities associated with IGE were identified by comparing the FSL run_first_all outputs of patients with those of controls.

Results: When standardised to the metrics derived from the 3D-T1 scans, cortical volume and thickness estimates trended lower for the 2D-T1, res-T1, syn-T1, and DL+DiReCT outputs, whereas subcortical volume estimates were more coherent. Dice coefficients revealed an acceptable spatial similarity between the cortices of the 3D-T1 scans and the other images overall, and similarity was higher in the subcortical structures. Intraclass correlation coefficients were consistently lowest when metrics were computed for model-derived inputs, and estimates of thickness were less similar to the ground truth than those of volume. For the people with epilepsy, the 3D-T1 scans showed significant surface deflations across various subcortical structures when compared with healthy controls. Analysis of the 2D-T1 scans enabled reliable detection of a subset of subcortical abnormalities, whereas analyses of the res-T1 and syn-T1 images were more prone to false-positive results.

Conclusions: Resampling and ML image-synthesis methods do not currently attenuate the partial volume effects resulting from low through-plane resolution in anisotropic MRI scans; quantitative analyses using 2D-T1 scans should therefore be interpreted with caution, and researchers should consider the potential implications of preprocessing. The recon-all-clinical pipeline is promising but requires further evaluation, especially when considered as an alternative to the classical pipeline.

Key Points:
- Surface deviations indicative of regional atrophy and hypertrophy were identified in people with idiopathic generalised epilepsy.
- Partial volume effects are likely to attenuate subtle morphometric abnormalities, increasing the likelihood of erroneous inference.
- Priors in synthetic image creation models may render them insensitive to subtle biomarkers.
- Resampling and machine-learning-based image synthesis are not currently replacements for research-quality acquisitions in quantitative MRI research.
- The results of studies using synthetic images should be interpreted in a separate context to those using untransformed data.
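For the res-T1 step described above, a plausible NiBabel-based isotropic resampling might look like the sketch below; the target voxel size, interpolation order, and file names are assumptions, as the study's exact call is not given here.

```python
# Hedged sketch: resample an anisotropic 2D-T1 volume to 1 mm isotropic voxels with NiBabel.
# Voxel size, interpolation order, and paths are illustrative assumptions.
import nibabel as nib
from nibabel.processing import resample_to_output

img_2d_t1 = nib.load("sub-01_T1w_2D.nii.gz")                                  # hypothetical input path
res_t1 = resample_to_output(img_2d_t1, voxel_sizes=(1.0, 1.0, 1.0), order=3)  # cubic spline interpolation
nib.save(res_t1, "sub-01_T1w_res.nii.gz")
```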

Artificial Intelligence Iterative Reconstruction Algorithm Combined with Low-Dose Aortic CTA for Preoperative Access Assessment of Transcatheter Aortic Valve Implantation: A Prospective Cohort Study.

Li Q, Liu D, Li K, Li J, Zhou Y

PubMed · Aug 6 2025
This study aimed to explore whether an artificial intelligence iterative reconstruction (AIIR) algorithm combined with low-dose aortic computed tomography angiography (CTA) demonstrates clinical effectiveness in assessing preoperative access for transcatheter aortic valve implantation (TAVI). A total of 109 patients were prospectively recruited for aortic CTA scans and divided into two groups: group A (n = 51) with standard-dose CT examinations (SDCT) and group B (n = 58) with low-dose CT examinations (LDCT). Group B was further subdivided into groups B1 and B2. Groups A and B2 used the hybrid iterative algorithm (HIR: Karl 3D), whereas Group B1 used the AIIR algorithm. CT attenuation and noise of different vessel segments were measured, and the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) were calculated. Two radiologists, who were blinded to the study details, rated the subjective image quality on a 5-point scale. The effective radiation doses were also recorded for groups A and B. Group B1 demonstrated the highest CT attenuation, SNR, and CNR and the lowest image noise among the three groups (p < 0.05). The scores of subjective image noise, vessel and non-calcified plaque edge sharpness, and overall image quality in Group B1 were higher than those in groups A and B2 (p < 0.001). Group B2 had the highest artifacts scores compared with groups A and B1 (p < 0.05). The radiation dose in group B was reduced by 50.33% compared with that in group A (p < 0.001). The AIIR algorithm combined with low-dose CTA yielded better diagnostic images before TAVI than the Karl 3D algorithm.
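The objective metrics quoted above follow the usual ROI-based definitions; the small sketch below shows those formulas, with ROI placement and the noise reference being assumptions rather than the study's protocol: SNR = mean(vessel ROI) / SD(noise ROI) and CNR = (mean(vessel) − mean(background)) / SD(noise).

```python
# Sketch of standard ROI-based SNR/CNR computation; simulated HU values, not study data.
import numpy as np

def snr_cnr(vessel_roi, background_roi, noise_roi):
    noise_sd = np.std(noise_roi)                                   # image noise estimate
    snr = np.mean(vessel_roi) / noise_sd                           # signal-to-noise ratio
    cnr = (np.mean(vessel_roi) - np.mean(background_roi)) / noise_sd  # contrast-to-noise ratio
    return snr, cnr

rng = np.random.default_rng(0)
snr, cnr = snr_cnr(rng.normal(350, 15, 500), rng.normal(60, 15, 500), rng.normal(0, 15, 500))
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```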

Deep Distillation Gradient Preconditioning for Inverse Problems

Romario Gualdrón-Hurtado, Roman Jacome, Leon Suarez, Laura Galvis, Henry Arguello

arXiv preprint · Aug 6 2025
Imaging inverse problems are commonly addressed by minimizing measurement consistency and signal prior terms. While huge attention has been paid to developing high-performance priors, even the most advanced signal prior may lose its effectiveness when paired with an ill-conditioned sensing matrix that hinders convergence and degrades reconstruction quality. In optimization theory, preconditioners allow improving the algorithm's convergence by transforming the gradient update. Traditional linear preconditioning techniques enhance convergence, but their performance remains limited due to their dependence on the structure of the sensing matrix. Learning-based linear preconditioners have been proposed, but they are optimized only for data-fidelity optimization, which may lead to solutions in the null-space of the sensing matrix. This paper employs knowledge distillation to design a nonlinear preconditioning operator. In our method, a teacher algorithm using a better-conditioned (synthetic) sensing matrix guides the student algorithm with an ill-conditioned sensing matrix through gradient matching via a preconditioning neural network. We validate our nonlinear preconditioner for plug-and-play FISTA in single-pixel, magnetic resonance, and super-resolution imaging tasks, showing consistent performance improvements and better empirical convergence.
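A minimal sketch of the gradient-matching idea, assuming toy dense operators and a small MLP preconditioner (not the authors' architecture or training setup): the network is trained so that the preconditioned gradient of the ill-conditioned student data-fidelity term mimics the gradient obtained with a better-conditioned teacher matrix.

```python
# Sketch only: knowledge distillation of a nonlinear gradient preconditioner via gradient matching.
import torch

n = 64
x_true = torch.rand(n)
A_s = torch.randn(n, n) @ torch.diag(torch.logspace(0, -4, n))   # ill-conditioned student operator (assumed)
A_t = torch.randn(n, n)                                           # better-conditioned synthetic teacher operator
y_s, y_t = A_s @ x_true, A_t @ x_true

P = torch.nn.Sequential(torch.nn.Linear(n, 128), torch.nn.ReLU(), torch.nn.Linear(128, n))
opt = torch.optim.Adam(P.parameters(), lr=1e-3)

for step in range(500):
    x = torch.rand(n)                                   # iterate at which gradients are compared
    g_student = A_s.T @ (A_s @ x - y_s)                 # gradient of ill-conditioned fidelity term
    g_teacher = A_t.T @ (A_t @ x - y_t)                 # gradient under the well-conditioned teacher
    loss = torch.mean((P(g_student) - g_teacher) ** 2)  # distillation via gradient matching
    opt.zero_grad(); loss.backward(); opt.step()
```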

MCA-GAN: A lightweight Multi-scale Context-Aware Generative Adversarial Network for MRI reconstruction.

Hou B, Du H

PubMed · Aug 6 2025
Magnetic Resonance Imaging (MRI) is widely utilized in medical imaging due to its high resolution and non-invasive nature. However, the prolonged acquisition time significantly limits its clinical applicability. Although traditional compressed sensing (CS) techniques can accelerate MRI acquisition, they often lead to degraded reconstruction quality under high undersampling rates. Deep learning-based methods, including CNN- and GAN-based approaches, have improved reconstruction performance, yet are limited by their local receptive fields, making it challenging to effectively capture long-range dependencies. Moreover, these models typically exhibit high computational complexity, which hinders their efficient deployment in practical scenarios. To address these challenges, we propose a lightweight Multi-scale Context-Aware Generative Adversarial Network (MCA-GAN), which enhances MRI reconstruction through dual-domain generators that collaboratively optimize both k-space and image-domain representations. MCA-GAN integrates several lightweight modules, including Depthwise Separable Local Attention (DWLA) for efficient local feature extraction, Adaptive Group Rearrangement Block (AGRB) for dynamic inter-group feature optimization, Multi-Scale Spatial Context Modulation Bridge (MSCMB) for multi-scale feature fusion in skip connections, and Channel-Spatial Multi-Scale Self-Attention (CSMS) for improved global context modeling. Extensive experiments conducted on the IXI, MICCAI 2013, and MRNet knee datasets demonstrate that MCA-GAN consistently outperforms existing methods in terms of PSNR and SSIM. Compared to SepGAN, the latest lightweight model, MCA-GAN achieves a 27.3% reduction in parameter size and a 19.6% reduction in computational complexity, while attaining the shortest reconstruction time among all compared methods. Furthermore, MCA-GAN exhibits robust performance across various undersampling masks and acceleration rates. Cross-dataset generalization experiments further confirm its ability to maintain competitive reconstruction quality, underscoring its strong generalization potential. Overall, MCA-GAN improves MRI reconstruction quality while significantly reducing computational cost through a lightweight architecture and multi-scale feature fusion, offering an efficient and accurate solution for accelerated MRI.
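The DWLA module itself is not reproduced here; the sketch below only shows the depthwise-separable convolution pattern that such lightweight blocks build on, which is where much of the parameter and FLOP savings comes from (a per-channel spatial filter followed by 1x1 channel mixing).

```python
# Generic depthwise-separable convolution block; an assumed building block, not MCA-GAN's DWLA module.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)  # per-channel spatial filtering
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)                        # 1x1 conv mixes channels
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))
```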

Utilizing 3D fast spin echo anatomical imaging to reduce the number of contrast preparations in T1ρ quantification of knee cartilage using learning-based methods.

Zhong J, Huang C, Yu Z, Xiao F, Blu T, Li S, Ong TM, Ho KK, Chan Q, Griffith JF, Chen W

PubMed · Aug 5 2025
To propose and evaluate an accelerated T1ρ quantification method that combines T1ρ-weighted fast spin echo (FSE) images and proton density (PD)-weighted anatomical FSE images, leveraging deep learning models for T1ρ mapping. The goal is to reduce scan time and facilitate integration into routine clinical workflows for osteoarthritis (OA) assessment. This retrospective study utilized MRI data from 40 participants (30 OA patients and 10 healthy volunteers). A volume of PD-weighted anatomical FSE images and a volume of T1ρ-weighted images acquired at a non-zero spin-lock time were used as input to train deep learning models, including a 2D U-Net and a multi-layer perceptron (MLP). T1ρ maps generated by these models were compared with ground-truth maps derived from a traditional non-linear least squares (NLLS) fitting method using four T1ρ-weighted images. Evaluation metrics included mean absolute error (MAE), mean absolute percentage error (MAPE), regional error (RE), and regional percentage error (RPE). The best-performing deep learning models achieved RPEs below 5% across all evaluated scenarios. This performance was consistent even in reduced acquisition settings that included only one PD-weighted image and one T1ρ-weighted image, where NLLS methods cannot be applied. Furthermore, the results were comparable to those obtained with NLLS when longer acquisitions with four T1ρ-weighted images were used. The proposed approach enables efficient T1ρ mapping using PD-weighted anatomical images, reducing scan time while maintaining clinical standards. This method has the potential to facilitate the integration of quantitative MRI techniques into routine clinical practice, benefiting OA diagnosis and monitoring.
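The NLLS ground truth referenced above is conventionally a voxel-wise mono-exponential fit of signal versus spin-lock time; a minimal sketch with illustrative (assumed) TSL values is shown below.

```python
# Sketch of a conventional mono-exponential T1rho fit: S(TSL) = S0 * exp(-TSL / T1rho).
# The spin-lock times and noise level are illustrative, not the study's protocol.
import numpy as np
from scipy.optimize import curve_fit

def t1rho_decay(tsl, s0, t1rho):
    return s0 * np.exp(-tsl / t1rho)

tsl = np.array([0.0, 10.0, 30.0, 50.0])                                 # spin-lock times in ms (assumed)
signal = t1rho_decay(tsl, 100.0, 40.0) + np.random.normal(0, 1.0, tsl.shape)  # simulated voxel signal

(p_s0, p_t1rho), _ = curve_fit(t1rho_decay, tsl, signal, p0=(signal[0], 30.0))
print(f"Fitted T1rho = {p_t1rho:.1f} ms")
```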