Page 2 of 323 results

Deep learning based rapid X-ray fluorescence signal extraction and image reconstruction for preclinical benchtop X-ray fluorescence computed tomography applications.

Kaphle A, Jayarathna S, Cho SH

PubMed | Jun 4, 2025
Recent research advances have resulted in an experimental benchtop X-ray fluorescence computed tomography (XFCT) system that likely meets the imaging dose/scan time constraints for benchtop XFCT imaging of live mice injected with gold nanoparticles (GNPs). For routine in vivo benchtop XFCT imaging, however, additional challenges, most notably the need for rapid/near-real-time handling of X-ray fluorescence (XRF) signal extraction and XFCT image reconstruction, must be successfully addressed. Here we propose a novel end-to-end deep learning (DL) framework that integrates a one-dimensional convolutional neural network (1D CNN) for rapid XRF signal extraction with a U-Net model for XFCT image reconstruction. We trained the models using a comprehensive dataset including experimentally acquired and augmented XRF/scatter photon spectra from various GNP concentrations and imaging scenarios, including phantom and synthetic mouse models. The DL framework demonstrated exceptional performance in both tasks. The 1D CNN achieved a high coefficient of determination (R² > 0.9885) and a low mean absolute error (MAE < 0.6248) in XRF signal extraction. The U-Net model achieved an average structural similarity index measure (SSIM) of 0.9791 and a peak signal-to-noise ratio (PSNR) of 39.11 in XFCT image reconstruction, closely matching ground truth images. Notably, the DL approach (vs. the conventional approach) reduced the total post-processing time per slice from approximately 6 min to just 1.25 s.
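The agreement metrics quoted above (MAE, PSNR) reduce to a few lines of arithmetic. Below is a minimal pure-Python sketch of MAE and PSNR for flattened, intensity-normalized images; the `peak` value of 1.0 and the toy arrays are illustrative assumptions, not values from the study (and SSIM, which needs windowed statistics, is omitted).

```python
import math

def mae(pred, ref):
    """Mean absolute error between two equal-length flat image arrays."""
    assert len(pred) == len(ref)
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

def psnr(pred, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB; `peak` is the assumed maximum intensity."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Toy example on normalized intensities (peak = 1.0).
ref = [0.0, 0.5, 1.0, 0.25]
pred = [0.1, 0.5, 0.9, 0.25]
print(round(mae(pred, ref), 3))   # → 0.05
print(round(psnr(pred, ref), 2))  # → 23.01
```

Higher PSNR and lower MAE both indicate closer agreement with the ground-truth reconstruction.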

Accelerating 3D radial MPnRAGE using a self-supervised deep factor model.

Chen Y, Kecskemeti SR, Holmes JH, Corum CA, Yaghoobi N, Magnotta VA, Jacob M

PubMed | Jun 2, 2025
To develop a self-supervised and memory-efficient deep learning image reconstruction method for 4D non-Cartesian MRI with high resolution and a large parametric dimension. The deep factor model (DFM) represents a parametric series of 3D multicontrast images using a neural network conditioned on the inversion time, with efficient zero-filled reconstructions as input estimates. The model parameters are learned in a single-shot learning (SSL) fashion from the k-space data of each acquisition. A compatible transfer learning (TL) approach using previously acquired data is also developed to reduce reconstruction time. The DFM is compared to subspace methods with different regularization strategies in a series of phantom and in vivo experiments using the MPnRAGE acquisition for multicontrast T1 imaging and quantitative T1 estimation. DFM-SSL improved image quality and reduced bias and variance in quantitative T1 estimates in both phantom and in vivo studies, outperforming all other tested methods. DFM-TL reduced the inference time while maintaining performance comparable to DFM-SSL and outperforming subspace methods with multiple regularization techniques. The proposed DFM offers a superior representation of the multicontrast images compared to subspace models, especially in the highly accelerated MPnRAGE setting.
The self-supervised training is ideal for methods with both high resolution and a large parametric dimension, where training neural networks can become computationally demanding without a dedicated high-end GPU array.

Evaluation of a Deep Learning Denoising Algorithm for Dose Reduction in Whole-Body Photon-Counting CT Imaging: A Cadaveric Study.

Dehdab R, Brendel JM, Streich S, Ladurner R, Stenzl B, Mueck J, Gassenmaier S, Krumm P, Werner S, Herrmann J, Nikolaou K, Afat S, Brendlin A

PubMed | Jun 1, 2025
Photon-counting CT (PCCT) offers advanced imaging capabilities with potential for substantial radiation dose reduction; however, achieving this without compromising image quality remains a challenge due to increased noise at lower doses. This study aims to evaluate the effectiveness of a deep learning (DL)-based denoising algorithm in maintaining diagnostic image quality in whole-body PCCT imaging at reduced radiation levels, using real intraindividual cadaveric scans. Twenty-four cadaveric human bodies underwent whole-body CT scans on a PCCT scanner (NAEOTOM Alpha, Siemens Healthineers) at four different dose levels (100%, 50%, 25%, and 10% mAs). Each scan was reconstructed using both QIR level 2 and a DL algorithm (ClariCT.AI, ClariPi Inc.), resulting in 192 datasets. Objective image quality was assessed by measuring CT value stability, image noise, and contrast-to-noise ratio (CNR) across consistent regions of interest (ROIs) in the liver parenchyma. Two radiologists independently evaluated subjective image quality based on overall image clarity, sharpness, and contrast. Inter-rater agreement was determined using Spearman's correlation coefficient, and statistical analysis included mixed-effects modeling to assess objective and subjective image quality. Objective analysis showed that the DL denoising algorithm did not significantly alter CT values (p ≥ 0.9975). Noise levels were consistently lower in denoised datasets compared to the original reconstructions (p < 0.0001). No significant differences were observed between the 25% mAs denoised and the 100% mAs original datasets in terms of noise and CNR (p ≥ 0.7870). Subjective analysis revealed strong inter-rater agreement (r ≥ 0.78), with the 50% mAs denoised datasets rated superior to the 100% mAs original datasets (p < 0.0001) and no significant differences detected between the 25% mAs denoised and 100% mAs original datasets (p ≥ 0.9436).
The DL denoising algorithm maintains image quality in PCCT imaging while enabling up to a 75% reduction in radiation dose. This approach offers a promising method for reducing radiation exposure in clinical PCCT without compromising diagnostic quality.
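Noise and CNR comparisons like those above come down to simple ROI statistics. The sketch below uses one common CNR definition (ROI-to-background mean difference over background standard deviation); both the definition and the Hounsfield-unit samples are illustrative assumptions and may not match the study's exact formula.

```python
import statistics

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute difference of region means divided by
    the background standard deviation (one common definition; others exist)."""
    contrast = abs(statistics.fmean(roi) - statistics.fmean(background))
    noise = statistics.stdev(background)  # sample standard deviation
    return contrast / noise

# Hypothetical HU samples: liver parenchyma ROI vs. an adjacent reference ROI.
liver = [62, 60, 61, 63, 59]
reference = [40, 42, 38, 41, 39]
print(round(cnr(liver, reference), 2))  # → 13.28
```

Denoising lowers the background standard deviation, so a denoised low-dose scan can match the CNR of a full-dose original, which is what the 25% mAs comparison above shows.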

A Dual-Energy Computed Tomography Guided Intelligent Radiation Therapy Platform.

Wen N, Zhang Y, Zhang H, Zhang M, Zhou J, Liu Y, Liao C, Jia L, Zhang K, Chen J

PubMed | Jun 1, 2025
The integration of advanced imaging and artificial intelligence technologies in radiation therapy has revolutionized cancer treatment by enhancing precision and adaptability. This study introduces a novel dual-energy computed tomography (DECT) guided intelligent radiation therapy (DEIT) platform designed to streamline and optimize the radiation therapy process. The DEIT system combines DECT, a newly designed dual-layer multileaf collimator, deep learning algorithms for auto-segmentation, and automated planning and quality assurance capabilities. The DEIT system integrates an 80-slice computed tomography (CT) scanner with an 87 cm bore size, a linear accelerator delivering 4 photon and 5 electron energies, and a flat panel imager optimized for megavoltage (MV) cone beam CT acquisition. A comprehensive evaluation of the system's accuracy was conducted using end-to-end tests. Virtual monoenergetic CT images and electron density images of the DECT were generated and compared in both phantom and patient studies. The system's auto-segmentation algorithms were tested on 5 cases for each of the 99 organs at risk, and the automated optimization and planning capabilities were evaluated on clinical cases. The DEIT system demonstrated systematic errors of less than 1 mm for target localization. DECT reconstruction showed electron density mapping deviations ranging from -0.052 to 0.001, with stable Hounsfield unit consistency across monoenergetic levels above 60 keV, except for high-Z materials at lower energies. Auto-segmentation achieved dice similarity coefficients above 0.9 for most organs with an inference time of less than 2 seconds. Dose-volume histogram comparisons showed improved dose conformity indices and reduced doses to critical structures in auto-plans compared to manual plans across various clinical cases.
In addition, high gamma passing rates at 2%/2 mm in both 2-dimensional (above 97%) and 3-dimensional (above 99%) in vivo analyses further validate the accuracy and reliability of treatment plans. The DEIT platform represents a viable solution for radiation treatment. The DEIT system uses artificial intelligence-driven automation, real-time adjustments, and CT imaging to enhance the radiation therapy process, improving efficiency and flexibility.
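The Dice similarity coefficient used above to grade the auto-segmentation can be sketched in a few lines; the toy binary masks below are hypothetical, not data from the study.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2 * |A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks by convention."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Toy voxel masks: 4 of 5 labeled voxels in each mask overlap.
auto = [1, 1, 1, 1, 0, 1]
manual = [1, 1, 1, 1, 1, 0]
print(dice(auto, manual))  # → 0.8
```

A DSC above 0.9, as reported for most organs, indicates near-complete overlap between the automatic and reference contours.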

Phantom-Based Ultrasound-ECG Deep Learning Framework for Prospective Cardiac Computed Tomography.

Ganesh S, Lindsey BD, Tridandapani S, Bhatti PT

PubMed | May 30, 2025
We present the first multimodal deep learning framework combining ultrasound (US) and electrocardiography (ECG) data to predict cardiac quiescent periods (QPs) for optimized computed tomography angiography (CTA) gating. The framework integrates a 3D convolutional neural network (CNN) for US data and an artificial neural network (ANN) for ECG data. A dynamic heart motion phantom, replicating diverse cardiac conditions, including arrhythmias, was used to validate the framework. Performance was assessed across varying QP lengths, cardiac segments, and motions to simulate real-world conditions. The multimodal US-ECG 3D CNN-ANN framework demonstrated improved QP prediction accuracy compared to single-modality ECG-only gating, achieving 96.87% accuracy compared to 85.56%, including scenarios involving arrhythmic conditions. Notably, the framework shows higher accuracy for longer QP durations (100-200 ms) compared to shorter durations (<100 ms), while still outperforming single-modality methods, which often fail to detect shorter quiescent phases, especially in arrhythmic cases. Consistently outperforming single-modality approaches, it achieves reliable QP prediction across cardiac regions, including the whole phantom, interventricular septum, and cardiac wall regions. Analysis of QP prediction accuracy across cardiac segments demonstrated an average accuracy of 92% in clinically relevant echocardiographic views, highlighting the framework's robustness. Combining US and ECG data using a multimodal framework improves QP prediction accuracy under variable cardiac motion, particularly in arrhythmic conditions. Since even small errors in cardiac CTA can result in non-diagnostic scans, the potential benefits of multimodal gating may improve diagnostic scan rates in patients with high and variable heart rates and arrhythmias.

End-to-end 2D/3D registration from pre-operative MRI to intra-operative fluoroscopy for orthopedic procedures.

Ku PC, Liu M, Grupp R, Harris A, Oni JK, Mears SC, Martin-Gomez A, Armand M

PubMed | May 30, 2025
Soft tissue pathologies and bone defects are not easily visible in intra-operative fluoroscopic images; therefore, we develop an end-to-end MRI-to-fluoroscopic image registration framework, aiming to enhance intra-operative visualization for surgeons during orthopedic procedures. The proposed framework utilizes deep learning to segment MRI scans and generate synthetic CT (sCT) volumes. These sCT volumes are then used to produce digitally reconstructed radiographs (DRRs), enabling 2D/3D registration with intra-operative fluoroscopic images. The framework's performance was validated through simulation and cadaver studies for core decompression (CD) surgery, focusing on the registration accuracy of femur and pelvic regions. The framework achieved a mean translational registration accuracy of 2.4 ± 1.0 mm and rotational accuracy of 1.6 ± 0.8° for the femoral region in cadaver studies. The method successfully enabled intra-operative visualization of necrotic lesions that were not visible on conventional fluoroscopic images, marking a significant advancement in image guidance for femur and pelvic surgeries. The MRI-to-fluoroscopic registration framework offers a novel approach to image guidance in orthopedic surgeries, exclusively using MRI without the need for CT scans. This approach enhances the visualization of soft tissues and bone defects, reduces radiation exposure, and provides a safer, more effective alternative for intra-operative surgical guidance.
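Translational registration accuracy of the kind reported above is typically the Euclidean distance between the estimated and ground-truth translation vectors, averaged over trials. The sketch below uses hypothetical translation pairs in millimeters; the study's actual evaluation protocol may differ.

```python
import math

def translation_error(t_est, t_true):
    """Euclidean distance (mm) between estimated and ground-truth 3D translations."""
    return math.dist(t_est, t_true)

# Hypothetical (estimated, ground-truth) femur translations from two trials, in mm.
pairs = [
    ((1.0, 0.5, -0.2), (0.0, 0.0, 0.0)),
    ((2.1, -1.0, 0.4), (1.0, -1.0, 0.0)),
]
errors = [translation_error(est, true) for est, true in pairs]
mean_err = sum(errors) / len(errors)
print(round(mean_err, 2))  # → 1.15
```

Averaging such per-trial errors (and their spread) yields the mean ± standard deviation format used in the abstract.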

Deep learning reconstruction enhances tophus detection in a dual-energy CT phantom study.

Schmolke SA, Diekhoff T, Mews J, Khayata K, Kotlyarov M

PubMed | May 28, 2025
This study aimed to compare two deep learning reconstruction (DLR) techniques (AiCE mild; AiCE strong) with two established methods-iterative reconstruction (IR) and filtered back projection (FBP)-for the detection of monosodium urate (MSU) in dual-energy computed tomography (DECT). An ex vivo bio-phantom and a raster phantom were prepared by inserting syringes containing different MSU concentrations and scanned in a 320-row volume DECT scanner at different tube currents. The scans were reconstructed in a soft tissue kernel using the four reconstruction techniques mentioned above, followed by quantitative assessment of MSU volumes and image quality parameters, i.e., signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Both DLR techniques outperformed conventional IR and FBP in terms of volume detection and image quality. Notably, unlike IR and FBP, the two DLR methods showed no positive correlation of the MSU detection rate with the CT dose index (CTDIvol) in the bio-phantom. Our study highlights the potential of DLR for DECT imaging in gout, where it offers enhanced detection sensitivity, improved image contrast, reduced image noise, and lower radiation exposure. Further research is needed to assess the clinical reliability of this approach.

Brightness-Invariant Tracking Estimation in Tagged MRI

Zhangxing Bian, Shuwen Wei, Xiao Liang, Yuan-Chiao Lu, Samuel W. Remedios, Fangxu Xing, Jonghye Woo, Dzung L. Pham, Aaron Carass, Philip V. Bayly, Jiachen Zhuo, Ahmed Alshareef, Jerry L. Prince

arXiv preprint | May 23, 2025
Magnetic resonance (MR) tagging is an imaging technique for noninvasively tracking tissue motion in vivo by creating a visible pattern of magnetization saturation (tags) that deforms with the tissue. Due to longitudinal relaxation and progression to steady-state, the tags and tissue brightnesses change over time, which makes tracking with optical flow methods error-prone. Although Fourier methods can alleviate these problems, they are also sensitive to brightness changes as well as spectral spreading due to motion. To address these problems, we introduce the brightness-invariant tracking estimation (BRITE) technique for tagged MRI. BRITE disentangles the anatomy from the tag pattern in the observed tagged image sequence and simultaneously estimates the Lagrangian motion. The inherent ill-posedness of this problem is addressed by leveraging the expressive power of denoising diffusion probabilistic models to represent the probabilistic distribution of the underlying anatomy and the flexibility of physics-informed neural networks to estimate biologically-plausible motion. A set of tagged MR images of a gel phantom was acquired with various tag periods and imaging flip angles to demonstrate the impact of brightness variations and to validate our method. The results show that BRITE achieves more accurate motion and strain estimates as compared to other state of the art methods, while also being resistant to tag fading.

Application of a pulmonary nodule detection program using AI technology to ultra-low-dose CT: differences in detection ability among various image reconstruction methods.

Tsuchiya N, Kobayashi S, Nakachi R, Tomori Y, Yogi A, Iida G, Ito J, Nishie A

PubMed | May 9, 2025
This study aimed to investigate the performance of an artificial intelligence (AI)-based lung nodule detection program in ultra-low-dose CT (ULDCT) imaging, with a focus on the influence of various image reconstruction methods on detection accuracy. A chest phantom embedded with artificial lung nodules (solid and ground-glass nodules [GGNs]; diameters: 12 mm, 8 mm, 5 mm, and 3 mm) was scanned using six combinations of tube currents (160 mA, 80 mA, and 10 mA) and voltages (120 kV and 80 kV) on a Canon Aquilion One CT scanner. Images were reconstructed using filtered back projection (FBP), hybrid iterative reconstruction (HIR), model-based iterative reconstruction (MBIR), and deep learning reconstruction (DLR). Nodule detection was performed using an AI-based lung nodule detection program, and performance metrics were analyzed across different reconstruction methods and radiation dose protocols. At the lowest dose protocol (80 kV, 10 mA), FBP showed a 0% detection rate for all nodule sizes. HIR and DLR consistently achieved 100% detection rates for solid nodules ≥ 5 mm and GGNs ≥ 8 mm. No method detected 3 mm GGNs under any protocol. DLR demonstrated the highest detection rates, even under ultra-low-dose settings, while maintaining high image quality. AI-based lung nodule detection in ULDCT is strongly dependent on the choice of image reconstruction method.

Towards order of magnitude X-ray dose reduction in breast cancer imaging using phase contrast and deep denoising

Ashkan Pakzad, Robert Turnbull, Simon J. Mutch, Thomas A. Leatham, Darren Lockie, Jane Fox, Beena Kumar, Daniel Häsermann, Christopher J. Hall, Anton Maksimenko, Benedicta D. Arhatari, Yakov I. Nesterets, Amir Entezam, Seyedamir T. Taba, Patrick C. Brennan, Timur E. Gureyev, Harry M. Quiney

arXiv preprint | May 9, 2025
Breast cancer is the most frequently diagnosed human cancer in the United States at present. Early detection is crucial for its successful treatment. X-ray mammography and digital breast tomosynthesis are currently the main methods for breast cancer screening. However, both have known limitations in terms of their sensitivity and specificity to breast cancers, while also frequently causing patient discomfort due to the requirement for breast compression. Breast computed tomography is a promising alternative; however, to obtain high-quality images, the X-ray dose needs to be sufficiently high. As the breast is highly radiosensitive, dose reduction is particularly important. Phase-contrast computed tomography (PCT) has been shown to produce higher-quality images at lower doses and has no need for breast compression. It is demonstrated in the present study that, when imaging full fresh mastectomy samples with PCT, deep learning-based image denoising can further reduce the radiation dose by a factor of 16 or more, without any loss of image quality. The image quality has been assessed both in terms of objective metrics, such as spatial resolution and contrast-to-noise ratio, and in an observer study by experienced medical imaging specialists and radiologists. This work was carried out in preparation for live patient PCT breast cancer imaging, initially at specialized synchrotron facilities.
