
Photon-counting micro-CT scanner for deep learning-enabled small animal perfusion imaging.

Allphin AJ, Nadkarni R, Clark DP, Badea CT

PubMed · Jun 27, 2025
In this work, we introduce a benchtop, turn-table photon-counting (PC) micro-CT scanner and highlight its application for dynamic small animal perfusion imaging.
Approach: Built on recently published hardware, the system now features a CdTe-based photon-counting detector (PCD). We validated its static spectral PC micro-CT imaging using conventional phantoms and assessed dynamic performance with a custom flow-configurable dual-compartment perfusion phantom. The phantom was scanned under varied flow conditions during injections of a low molecular weight iodinated contrast agent. In vivo mouse studies with identical injection settings demonstrated potential applications. A pretrained denoising CNN processed large multi-energy, temporal datasets (20 timepoints × 4 energies × 3 spatial dimensions), reconstructed via weighted filtered back projection. A separate CNN, trained on simulated data, performed gamma variate-based 2D perfusion mapping, evaluated qualitatively in phantom and in vivo tests.
Main Results: Full five-dimensional reconstructions were denoised using a CNN in ~3% of the time of iterative reconstruction, reducing noise in water at the highest energy threshold from 1206 HU to 86 HU. Decomposed iodine maps, which improved contrast to noise ratio from 16.4 (in the lowest energy CT images) to 29.4 (in the iodine maps), were used for perfusion analysis. The perfusion CNN outperformed pixelwise gamma variate fitting by ~33%, with a test set error of 0.04 vs. 0.06 in blood flow index (BFI) maps, and quantified linear BFI changes in the phantom with a coefficient of determination of 0.98.
Significance: This work underscores the PC micro-CT scanner's utility for high-throughput small animal perfusion imaging, leveraging spectral PC micro-CT and iodine decomposition. It provides a versatile platform for preclinical vascular research and advanced, time-resolved studies of disease models and therapeutic interventions.
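
For readers unfamiliar with the classical baseline that the perfusion CNN is compared against, the sketch below fits a gamma variate model to a single pixel's contrast time-attenuation curve, i.e. the pixelwise fitting the abstract references. The parameterization, initial guesses, bounds, and time units are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of pixelwise gamma variate fitting (classical baseline).
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma variate model of a contrast time-attenuation curve."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

def fit_pixel_curve(t, c):
    """Fit one pixel's contrast curve; returns (A, t0, alpha, beta)."""
    p0 = [max(c.max(), 1e-3), 0.5 * t[np.argmax(c)], 2.0, 2.0]   # rough initial guess
    bounds = ([0.0, 0.0, 0.1, 0.1], [np.inf, t.max(), 10.0, 10.0])
    params, _ = curve_fit(gamma_variate, t, c, p0=p0, bounds=bounds, maxfev=5000)
    return params

# Example: 20 timepoints, matching the 20-timepoint datasets described above
t = np.arange(20, dtype=float)                     # frame times (arbitrary units)
clean = gamma_variate(t, 1.0, 2.0, 2.5, 1.5)
noisy = clean + 0.05 * np.random.randn(t.size)     # one noisy pixel curve
A, t0, alpha, beta = fit_pixel_curve(t, noisy)     # pixelwise baseline fit
```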

Noise-Inspired Diffusion Model for Generalizable Low-Dose CT Reconstruction

Qi Gao, Zhihao Chen, Dong Zeng, Junping Zhang, Jianhua Ma, Hongming Shan

arXiv preprint · Jun 27, 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data, collecting either diverse CT data for re-training or a few test data for fine-tuning, to improve generalization performance and robustness. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because the CT image noise deviates from a Gaussian distribution and because the guidance of noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that our NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is made available at https://github.com/qgao21/NEED.
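
For readers unfamiliar with the pre-log noise model that NEED's first stage builds on, the sketch below simulates low-dose projections under the standard shifted Poisson approximation (photon counts plus Gaussian electronic noise of variance sigma_e^2, folded into a single Poisson draw with matched mean and variance). The function name, tube flux I0, and noise level are illustrative assumptions, not taken from the paper's code.

```python
# Illustrative simulation of pre-log low-dose CT projections (shifted Poisson model).
import numpy as np

def shifted_poisson_projection(line_integrals, I0=1e4, sigma_e=5.0, rng=None):
    """Simulate noisy pre-log measurements for given line integrals.

    The shifted Poisson approximation replaces Poisson photon counts plus
    Gaussian electronic noise (variance sigma_e**2) with a single Poisson
    variable of matched mean and variance, then removes the shift.
    """
    rng = np.random.default_rng(rng)
    lam = I0 * np.exp(-line_integrals)             # expected photon counts
    shifted = rng.poisson(lam + sigma_e**2)        # shifted Poisson draw
    counts = np.clip(shifted - sigma_e**2, 1.0, None)
    return counts                                   # pre-log measurement

# Example: a low-dose acquisition of a synthetic sinogram
p = np.random.rand(64, 90) * 3.0                   # synthetic line integrals
y_low = shifted_poisson_projection(p, I0=2e3)      # lower I0 -> lower dose
post_log = -np.log(y_low / 2e3)                    # conventional log transform
```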

Self-supervised learning for MRI reconstruction: a review and new perspective.

Li X, Huang J, Sun G, Yang Z

PubMed · Jun 26, 2025
To review the latest developments in self-supervised deep learning (DL) techniques for magnetic resonance imaging (MRI) reconstruction, emphasizing their potential to overcome the limitations of supervised methods dependent on fully sampled k-space data. While DL has significantly advanced MRI, supervised approaches require large amounts of fully sampled k-space data for training, a major limitation given the impracticality and expense of acquiring such data clinically. Self-supervised learning has emerged as a promising alternative, enabling model training using only undersampled k-space data, thereby enhancing feasibility and driving research interest. We conducted a comprehensive literature review to synthesize recent progress in self-supervised DL for MRI reconstruction. The analysis focused on methods and architectures designed to improve image quality, reduce scanning time, and address data scarcity challenges, drawing from peer-reviewed publications and technical innovations in the field. Self-supervised DL holds transformative potential for MRI reconstruction, offering solutions to data limitations while maintaining image quality and accelerating scans. Key challenges include robustness across diverse anatomies, standardization of validation, and clinical integration. Future research should prioritize hybrid methodologies, domain-specific adaptations, and rigorous clinical validation. This review consolidates advancements and unresolved issues, providing a foundation for next-generation medical imaging technologies.
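
As a concrete illustration of the core idea such reviews survey, the sketch below shows a loss-masking scheme in the spirit of SSDU-style self-supervision: the acquired undersampling mask is split into disjoint input and loss subsets so that training never needs fully sampled k-space. The function names, split ratio, and mask geometry are assumptions for illustration, not drawn from any specific method in the review.

```python
# Minimal sketch of self-supervised loss masking for MRI reconstruction.
import numpy as np

def split_acquired_mask(mask, loss_fraction=0.3, rng=None):
    """Split an undersampling mask into disjoint input/loss masks."""
    rng = np.random.default_rng(rng)
    acquired = np.flatnonzero(mask)
    n_loss = int(loss_fraction * acquired.size)
    loss_idx = rng.choice(acquired, size=n_loss, replace=False)
    loss_mask = np.zeros_like(mask)
    loss_mask.flat[loss_idx] = 1
    input_mask = mask - loss_mask                   # disjoint by construction
    return input_mask, loss_mask

# Example: a 1D Cartesian mask over 256 phase-encode lines at ~4x acceleration
mask = (np.random.rand(256) < 0.25).astype(np.int64)
input_mask, loss_mask = split_acquired_mask(mask)
# Training would reconstruct from input_mask * kspace and penalize the
# re-predicted k-space only at loss_mask locations, with no fully sampled reference.
```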

Morphology-based radiological-histological correlation on ultra-high-resolution energy-integrating detector CT using cadaveric human lungs: nodule and airway analysis.

Hata A, Yanagawa M, Ninomiya K, Kikuchi N, Kurashige M, Nishigaki D, Doi S, Yamagata K, Yoshida Y, Ogawa R, Tokuda Y, Morii E, Tomiyama N

PubMed · Jun 26, 2025
To evaluate the capability of high-resolution settings on ultra-high-resolution energy-integrating detector CT (UHR-CT) to depict fine lung nodules and airways, incorporating large matrix sizes, thin slice thickness, and iterative reconstruction (IR)/deep-learning reconstruction (DLR), using cadaveric human lungs and corresponding histological images. Images of 20 lungs were acquired using conventional CT (CCT), UHR-CT, and photon-counting detector CT (PCD-CT). CCT images were reconstructed with a 512 matrix and IR (CCT-512-IR). UHR-CT images were reconstructed with four settings by varying the matrix size and the reconstruction method: UHR-512-IR, UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. Two imaging settings of PCD-CT were used: PCD-512-IR and PCD-1024-IR. CT images were visually evaluated and compared with histology. Overall, 6769 nodules (median: 1321 µm) and 92 airways (median: 851 µm) were evaluated. For nodules, UHR-2048-IR outperformed CCT-512-IR, UHR-512-IR, and UHR-1024-IR (p < 0.001). UHR-1024-DLR showed no significant difference from UHR-2048-IR in the overall nodule score after Bonferroni correction (uncorrected p = 0.043); however, for nodules > 1000 µm, UHR-2048-IR demonstrated significantly better scores than UHR-1024-DLR (p = 0.003). For airways, UHR-1024-IR and UHR-512-IR showed significant differences (p < 0.001), with no notable differences among UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. UHR-2048-IR detected nodules and airways with median diameters of 604 µm and 699 µm, respectively. No significant difference was observed between UHR-512-IR and PCD-512-IR (p > 0.1). PCD-1024-IR outperformed the UHR-CT settings for nodules > 1000 µm (p ≤ 0.001), while UHR-1024-DLR outperformed PCD-1024-IR for airways > 1000 µm (p = 0.005). UHR-2048-IR demonstrated the highest scores among the evaluated energy-integrating detector CT (EID-CT) images. UHR-CT showed potential for detecting submillimeter nodules and airways; with the 512 matrix, its performance was comparable to that of PCD-CT.
Question: Data evaluating the depiction capability of UHR-CT for fine structures are scarce, and comparisons with PCD-CT are lacking.
Findings: UHR-CT depicted nodules and airways with median diameters of 604 µm and 699 µm, showing no significant difference from PCD-CT with the 512 matrix.
Clinical relevance: High-resolution imaging is crucial for lung diagnosis. UHR-CT may contribute to pulmonary nodule diagnosis and airway disease evaluation by detecting fine opacities and airways.

Improving Clinical Utility of Fetal Cine CMR Using Deep Learning Super-Resolution.

Vollbrecht TM, Hart C, Katemann C, Isaak A, Voigt MB, Pieper CC, Kuetting D, Geipel A, Strizek B, Luetkens JA

PubMed · Jun 26, 2025
Fetal cardiovascular magnetic resonance is an emerging tool for prenatal congenital heart disease assessment, but long acquisition times and fetal movements limit its clinical use. This study evaluates the clinical utility of deep learning super-resolution reconstructions for rapidly acquired, low-resolution fetal cardiovascular magnetic resonance. This prospective study included participants with fetal congenital heart disease undergoing fetal cardiovascular magnetic resonance in the third trimester of pregnancy, with axial cine images acquired at normal resolution and low resolution. Low-resolution cine data were subsequently reconstructed using a deep learning super-resolution framework (cineDL). Acquisition times, apparent signal-to-noise ratio, contrast-to-noise ratio, and edge rise distance were assessed. Volumetry and functional analysis were performed. Qualitative image scores were rated on a 5-point Likert scale. Cardiovascular structures and pathological findings visible in cineDL images only were assessed. Statistical analysis included the Student paired t test and the Wilcoxon test. A total of 42 participants were included (median gestational age, 35.9 weeks [interquartile range (IQR), 35.1-36.4]). CineDL acquisition was faster than normal-resolution cine acquisition (134±9.6 s versus 252±8.8 s; P<0.001). Quantitative image quality metrics and image quality scores for cineDL were higher than or comparable with those of normal-resolution cine images (eg, fetal motion, 4.0 [IQR, 4.0-5.0] versus 4.0 [IQR, 3.0-4.0]; P<0.001). Nonpatient-related artifacts (eg, backfolding) were more pronounced in cineDL than in normal-resolution cine images (4.0 [IQR, 4.0-5.0] versus 5.0 [IQR, 3.0-4.0]; P<0.001). Volumetry and functional results were comparable. CineDL revealed additional structures in 10 of 42 fetuses (24%) and additional pathologies in 5 of 42 fetuses (12%), including partial anomalous pulmonary venous connection. Deep learning super-resolution reconstructions of low-resolution acquisitions shorten acquisition times and achieve diagnostic quality comparable with standard images, while being less sensitive to fetal bulk movements, leading to additional diagnostic findings. Therefore, deep learning super-resolution may improve the clinical utility of fetal cardiovascular magnetic resonance for accurate prenatal assessment of congenital heart disease.
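
The quantitative metrics named in this abstract are straightforward to picture in code; the minimal helpers below show one plausible way to compute apparent SNR, CNR, and a 10-90% edge rise distance from image ROIs and a line profile. ROI selection, normalization, and the function names are assumptions, not the authors' exact protocol.

```python
# Hypothetical helpers for apparent SNR, CNR, and edge rise distance.
import numpy as np

def apparent_snr(signal_roi, noise_roi):
    """Apparent SNR: mean signal in a structure ROI over noise std in background."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

def edge_rise_distance(profile, pixel_spacing_mm, lo=0.1, hi=0.9):
    """Distance over which a line profile across an edge rises from 10% to 90%."""
    p = (profile - profile.min()) / (profile.max() - profile.min() + 1e-12)
    return abs(int(np.argmax(p >= hi)) - int(np.argmax(p >= lo))) * pixel_spacing_mm

# Example with synthetic ROIs and a smooth edge profile
rng = np.random.default_rng(0)
blood = rng.normal(100.0, 5.0, 500)
myocardium = rng.normal(60.0, 5.0, 500)
background = rng.normal(0.0, 5.0, 500)
profile = 1.0 / (1.0 + np.exp(-np.linspace(-6, 6, 40)))   # sigmoid edge
print(apparent_snr(blood, background),
      cnr(blood, myocardium, background),
      edge_rise_distance(profile, pixel_spacing_mm=1.0))
```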

Lightweight Physics-Informed Zero-Shot Ultrasound Plane Wave Denoising

Hojat Asgariandehkordi, Mostafa Sharifzadeh, Hassan Rivaz

arXiv preprint · Jun 26, 2025
Ultrasound Coherent Plane Wave Compounding (CPWC) enhances image contrast by combining echoes from multiple steered transmissions. While increasing the number of angles generally improves image quality, it drastically reduces the frame rate and can introduce blurring artifacts in fast-moving targets. Moreover, compounded images remain susceptible to noise, particularly when acquired with a limited number of transmissions. We propose a zero-shot denoising framework tailored for low-angle CPWC acquisitions, which enhances contrast without relying on a separate training dataset. The method divides the available transmission angles into two disjoint subsets, each used to form compound images that include higher noise levels. The new compounded images are then used to train a deep model via a self-supervised residual learning scheme, enabling it to suppress incoherent noise while preserving anatomical structures. Because angle-dependent artifacts vary between the subsets while the underlying tissue response is similar, this physics-informed pairing allows the network to learn to disentangle the inconsistent artifacts from the consistent tissue signal. Unlike supervised methods, our model requires no domain-specific fine-tuning or paired data, making it adaptable across anatomical regions and acquisition setups. The entire pipeline supports efficient training with low computational cost due to the use of a lightweight architecture, which comprises only two convolutional layers. Evaluations on simulation, phantom, and in vivo data demonstrate superior contrast enhancement and structure preservation compared to both classical and deep learning-based denoising methods.
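
The angle-splitting idea lends itself to a compact sketch. The PyTorch code below pairs two disjoint-angle compounds as input and target for a two-convolutional-layer residual denoiser, under the assumption of a Noise2Noise-style pairing, which is one plausible reading of the self-supervised residual scheme described above. Layer widths, the optimizer, and the iteration count are illustrative, not the authors' settings.

```python
# Sketch of zero-shot, physics-informed pairing for plane-wave compounding denoising.
import torch
import torch.nn as nn

class TwoLayerResidualDenoiser(nn.Module):
    """Lightweight residual denoiser with only two convolutional layers."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)            # residual learning: the net predicts noise

def compound(frames, angle_indices):
    """Average the beamformed frames from a subset of transmission angles."""
    return frames[angle_indices].mean(dim=0, keepdim=True)

frames = torch.randn(8, 1, 256, 256)                  # 8 beamformed angle frames
img_a = compound(frames, torch.arange(0, 8, 2))       # even-indexed angles
img_b = compound(frames, torch.arange(1, 8, 2))       # odd-indexed angles

model = TwoLayerResidualDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                                  # zero-shot: train on this case only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(img_a), img_b)
    loss.backward()
    opt.step()
```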

Generalizable Neural Electromagnetic Inverse Scattering

Yizhe Cheng, Chunxun Tian, Haoru Wang, Wentao Zhu, Xiaoxuan Ma, Yizhou Wang

arXiv preprint · Jun 26, 2025
Solving Electromagnetic Inverse Scattering Problems (EISP) is fundamental in applications such as medical imaging, where the goal is to reconstruct the relative permittivity from the scattered electromagnetic field. This inverse process is inherently ill-posed and highly nonlinear, making it particularly challenging. A recent machine learning-based approach, Img-Interiors, shows promising results by leveraging continuous implicit functions. However, it requires case-specific optimization, lacks generalization to unseen data, and fails under sparse transmitter setups (e.g., with only one transmitter). To address these limitations, we revisit EISP from a physics-informed perspective, reformulating it as a two-stage inverse transmission-scattering process. This formulation reveals the induced current as a generalizable intermediate representation, effectively decoupling the nonlinear scattering process from the ill-posed inverse problem. Built on this insight, we propose the first generalizable physics-driven framework for EISP, comprising a current estimator and a permittivity solver, working in an end-to-end manner. The current estimator explicitly learns the induced current as a physical bridge between the incident and scattered fields, while the permittivity solver computes the relative permittivity directly from the estimated induced current. This design enables data-driven training and generalizable feed-forward prediction of relative permittivity on unseen data while maintaining strong robustness to transmitter sparsity. Extensive experiments show that our method outperforms state-of-the-art approaches in reconstruction accuracy, generalization, and robustness. This work offers a fundamentally new perspective on electromagnetic inverse scattering and represents a major step toward cost-effective practical solutions for electromagnetic imaging.
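
To make the two-stage split concrete, the sketch below implements the standard discretized (method-of-moments) transmission-scattering relations that the induced-current formulation rests on: a state equation ties the induced current to the contrast and incident field, a data equation maps the current to scattered measurements, and the contrast follows in closed form from a current estimate. This is a generic textbook formulation written as hypothetical code, not the paper's estimator or solver.

```python
# Generic discretized forward scattering model and current-to-contrast step.
import numpy as np

def forward_scatter(chi, E_inc, G_D, G_S):
    """chi: (N,) contrast; E_inc: (N,) incident field; G_D: (N,N) domain Green's
    operator; G_S: (M,N) measurement Green's operator. Returns (J, E_sca)."""
    N = chi.size
    # State equation: J = chi * (E_inc + G_D @ J)  =>  (I - diag(chi) @ G_D) J = chi * E_inc
    A = np.eye(N) - np.diag(chi) @ G_D
    J = np.linalg.solve(A, chi * E_inc)
    E_sca = G_S @ J                       # data equation: scattered field at receivers
    return J, E_sca

def contrast_from_current(J, E_inc, G_D):
    """Second stage: recover contrast from an (estimated) induced current."""
    E_tot = E_inc + G_D @ J
    return J / E_tot                      # chi = eps_r - 1 in the usual convention

# Tiny synthetic consistency check
rng = np.random.default_rng(0)
N, M = 16, 8
chi = rng.uniform(0.0, 0.5, N)
E_inc = rng.standard_normal(N) + 1j * rng.standard_normal(N)
G_D = 0.05 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
G_S = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
J, E_sca = forward_scatter(chi, E_inc, G_D, G_S)
chi_rec = contrast_from_current(J, E_inc, G_D)    # matches chi up to numerics
```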

Dose-aware denoising diffusion model for low-dose CT.

Kim S, Kim BJ, Baek J

PubMed · Jun 26, 2025
Low-dose computed tomography (LDCT) denoising plays an important role in medical imaging for reducing the radiation dose to patients. Recently, various data-driven and diffusion-based deep learning (DL) methods have been developed and shown promising results in LDCT denoising. However, challenges remain in ensuring generalizability to different datasets and mitigating uncertainty from stochastic sampling. In this paper, we introduce a novel dose-aware diffusion model that effectively reduces CT image noise while maintaining structural fidelity and being generalizable to different dose levels.
Approach: Our approach employs a physics-based forward process with continuous timesteps, enabling flexible representation of diverse noise levels. We incorporate a computationally efficient noise calibration module in our diffusion framework that resolves misalignment between intermediate results and their corresponding timesteps. Furthermore, we present a simple yet effective method for estimating appropriate timesteps for unseen LDCT images, allowing generalization to unknown, arbitrary dose levels.
Main Results: Both qualitative and quantitative evaluation results on Mayo Clinic datasets show that the proposed method outperforms existing denoising methods in preserving the noise texture and restoring anatomical structures. The proposed method also shows consistent results on different dose levels and an unseen dataset.
Significance: We propose a novel dose-aware diffusion model for LDCT denoising, aiming to address the generalization and uncertainty issues of existing diffusion-based DL methods. Our experimental results demonstrate the effectiveness of the proposed method across different dose levels. We expect that our approach can provide a clinically practical solution for LDCT denoising with its high structural fidelity and computational efficiency.
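
The abstract does not spell out how the starting timestep is chosen for an unseen dose, so the snippet below only illustrates the general idea of timestep matching: estimate the test image's noise level and pick the diffusion step whose marginal noise standard deviation is closest. The noise estimator, the DDPM-style linear beta schedule, and all names are assumptions for illustration, not the authors' procedure.

```python
# Illustrative timestep matching for an unseen noise level (assumed scheme).
import numpy as np

def marginal_noise_std(alphas_cumprod):
    """Per-timestep noise std of a variance-preserving diffusion, sqrt(1 - alpha_bar_t)."""
    return np.sqrt(1.0 - alphas_cumprod)

def match_timestep(noise_std_estimate, alphas_cumprod):
    """Return the timestep whose marginal noise std best matches the estimate."""
    stds = marginal_noise_std(alphas_cumprod)
    return int(np.argmin(np.abs(stds - noise_std_estimate)))

# Linear beta schedule with 1000 steps (a common DDPM default, assumed here)
betas = np.linspace(1e-4, 0.02, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)

# Estimate noise from a flat ROI of the (normalized) unseen LDCT image
roi = np.random.normal(0.0, 0.08, size=(32, 32))   # stand-in for a flat patch
t_start = match_timestep(roi.std(), alphas_cumprod)
```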

Constructing high-quality enhanced 4D-MRI with personalized modeling for liver cancer radiotherapy.

Yao Y, Chen B, Wang K, Cao Y, Zuo L, Zhang K, Chen X, Kuo M, Dai J

PubMed · Jun 26, 2025
In magnetic resonance imaging (MRI), short acquisition times and good image quality are difficult to achieve simultaneously. Thus, reconstructing time-resolved volumetric MRI (4D-MRI) to delineate and monitor thoracic and upper abdominal tumor movements is a challenge, and existing MRI sequences have limited applicability to 4D-MRI. A method is proposed for reconstructing high-quality, personalized, enhanced 4D-MR images: low-quality 4D-MR images are acquired first, followed by deep learning-based personalization to generate high-quality 4D-MR images. High-speed multiphase 3D fast spoiled gradient recalled echo (FSPGR) sequences were utilized to generate low-quality enhanced free-breathing 4D-MR images and paired low-/high-quality breath-holding 4D-MR images for 58 liver cancer patients. Then, a personalized model guided by the paired breath-holding 4D-MR images was developed for each patient to cope with patient heterogeneity. The 4D-MR images generated by the personalized model were of much higher quality than the low-quality 4D-MR images obtained by conventional scanning, as demonstrated by significant improvements in the peak signal-to-noise ratio, structural similarity, normalized root mean square error, and cumulative probability of blur detection. The introduction of individualized information enabled the personalized model to achieve a statistically significant improvement over the general model (p < 0.001). The proposed method can be used to quickly reconstruct high-quality 4D-MR images and is potentially applicable to radiotherapy for liver cancer.
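
The quality gains here are reported with standard full-reference metrics; the snippet below sketches how PSNR, SSIM, and NRMSE could be computed with scikit-image (the cumulative probability of blur detection has no widely used library function and is omitted). The reference/test pairing and data-range handling are assumptions, not the authors' evaluation code.

```python
# Example full-reference quality metrics with scikit-image.
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             normalized_root_mse)

def quality_report(reference, test):
    """PSNR, SSIM, and NRMSE between a reference and a test image slice."""
    data_range = reference.max() - reference.min()
    return {
        "psnr": peak_signal_noise_ratio(reference, test, data_range=data_range),
        "ssim": structural_similarity(reference, test, data_range=data_range),
        "nrmse": normalized_root_mse(reference, test),
    }

# Synthetic 2D slices standing in for breath-hold reference vs. enhanced 4D-MRI
ref = np.random.rand(128, 128)
out = ref + 0.02 * np.random.randn(128, 128)
print(quality_report(ref, out))
```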

[Advances in low-dose cone-beam computed tomography image reconstruction methods based on deep learning].

Shi J, Song Y, Li G, Bai S

PubMed · Jun 25, 2025
Cone-beam computed tomography (CBCT) is widely used in dentistry, surgery, radiotherapy and other medical fields. However, repeated CBCT scans expose patients to additional radiation doses, increasing the risk of secondary malignant tumors. Low-dose CBCT image reconstruction technology, which employs advanced algorithms to reduce radiation dose while enhancing image quality, has emerged as a focal point of recent research. This review systematically examined deep learning-based methods for low-dose CBCT reconstruction. It compared different network architectures in terms of noise reduction, artifact removal, detail preservation, and computational efficiency, covering three approaches: image-domain, projection-domain, and dual-domain techniques. The review also explored how emerging technologies like multimodal fusion and self-supervised learning could enhance these methods. By summarizing the strengths and weaknesses of current approaches, this work provides insights to optimize low-dose CBCT algorithms and support their clinical adoption.