Page 3 of 54537 results

Photon-counting detector computed tomography in thoracic oncology: revolutionizing tumor imaging through precision and detail.

Yanagawa M, Ueno M, Ito R, Ueda D, Saida T, Kurokawa R, Takumi K, Nishioka K, Sugawara S, Ide S, Honda M, Iima M, Kawamura M, Sakata A, Sofue K, Oda S, Watabe T, Hirata K, Naganawa S

pubmed · Sep 24 2025
Photon-counting detector computed tomography (PCD-CT) is an emerging imaging technology that promises to overcome the limitations of conventional energy-integrating detector (EID)-CT, particularly in thoracic oncology. This narrative review summarizes technical advances and clinical applications of PCD-CT in the thorax with emphasis on spatial resolution, dose-image-quality balance, and intrinsic spectral imaging, and it outlines practical implications relevant to thoracic oncology. A literature review of PubMed through May 31, 2025, was conducted using combinations of "photon counting," "computed tomography," "thoracic oncology," and "artificial intelligence." We screened the retrieved records and included studies with direct relevance to lung and mediastinal tumors, image quality, radiation dose, spectral/iodine imaging, or artificial intelligence-based reconstruction; case reports, editorials, and animal-only or purely methodological reports were excluded. PCD-CT demonstrated superior spatial resolution compared with EID-CT, enabling clearer visualization of fine pulmonary structures, such as bronchioles and subsolid nodules; slice thicknesses of approximately 0.4 mm and <i>ex vivo</i> resolvable structures approaching 0.11 mm have been reported. Across intraindividual clinical comparisons, radiation-dose reductions of 16%-43% have been achieved while maintaining or improving diagnostic image quality. Intrinsic spectral imaging enables accurate iodine mapping and low-keV virtual monoenergetic images and has shown quantitative advantages versus dual-energy CT in phantoms and early clinical work. Artificial intelligence-based deep-learning reconstruction and super-resolution can complement detector capabilities to reduce noise and stabilize fine-structure depiction without increasing dose. 
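The review's quantitative claims about low-keV virtual monoenergetic images rest on the contrast-to-noise ratio (CNR). As a minimal sketch, the standard definition can be computed between a lesion ROI and a background ROI; the HU values below are hypothetical, not taken from the paper.

```python
import numpy as np

def contrast_to_noise_ratio(roi, background):
    """CNR = |mean(ROI) - mean(background)| / SD(background)."""
    return abs(roi.mean() - background.mean()) / background.std()

# Hypothetical HU samples: an iodine-enhancing nodule vs. aerated lung.
rng = np.random.default_rng(0)
nodule = rng.normal(60.0, 12.0, size=500)        # enhancing lesion, HU
lung_bg = rng.normal(-750.0, 25.0, size=500)     # lung background, HU

cnr = contrast_to_noise_ratio(nodule, lung_bg)
```

Lower-keV virtual monoenergetic images raise iodine attenuation (the numerator) faster than noise (the denominator), which is the mechanism behind the plausible contrast-volume reductions the review describes.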
Potential reductions in contrast volume are biologically plausible given improved low-keV contrast-to-noise ratio, although clinical dose-finding data remain limited, and routine K-edge imaging has not yet translated to clinical thoracic practice. In conclusion, PCD-CT provides higher spatial and spectral fidelity at lower or comparable doses, supporting earlier and more precise tumor detection and characterization; future work should prioritize outcome-oriented trials, protocol harmonization, and implementation studies aligned with "Green Radiology".

Dose reduction in radiotherapy treatment planning CT via deep learning-based reconstruction: a single‑institution study.

Yasui K, Kasugai Y, Morishita M, Saito Y, Shimizu H, Uezono H, Hayashi N

pubmed · Sep 24 2025
To quantify radiation dose reduction in radiotherapy treatment-planning CT (RTCT) using a deep learning-based reconstruction (DLR; AiCE) algorithm compared with adaptive iterative dose reduction (IR; AIDR), and to evaluate its potential to inform RTCT-specific diagnostic reference levels (DRLs). In this single-institution retrospective study, 4-part RTCT scans (head, head and neck, lung, and pelvis) were acquired on a large-bore CT. Scans reconstructed with IR (n = 820) and DLR (n = 854) were compared. The 75th-percentile CTDI<sub>vol</sub> and DLP (CTDI<sub>IR</sub>, DLP<sub>IR</sub> vs. CTDI<sub>DLR</sub>, DLP<sub>DLR</sub>) were determined per site. Dose reduction rates were calculated as (CTDI<sub>DLR</sub> - CTDI<sub>IR</sub>)/CTDI<sub>IR</sub> × 100%, and similarly for DLP. Statistical significance was assessed by the Mann-Whitney U-test. DLR yielded CTDI<sub>vol</sub> reductions of 30.4-75.4% and DLP reductions of 23.1-73.5% across sites (p < 0.001), with the greatest reductions in head and neck RTCT (CTDI<sub>vol</sub>: 75.4%; DLP: 73.5%). Variability also narrowed. Compared with published national DRLs, DLR achieved 34.8 mGy and 18.8 mGy lower CTDI<sub>vol</sub> for head and neck versus UK-DRLs and Japanese multi-institutional data, respectively. DLR substantially lowers RTCT dose indices, providing quantitative data to guide RTCT-specific DRLs and optimize clinical workflows.
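The study's two key quantities, the 75th-percentile dose index used for DRLs and the reduction rate (CTDI_DLR - CTDI_IR)/CTDI_IR × 100%, are straightforward to compute; a minimal sketch with made-up CTDIvol values (mGy), not the paper's data:

```python
import numpy as np

def drl_and_reduction(ctdi_ir, ctdi_dlr):
    """75th-percentile CTDIvol for each reconstruction and the reduction
    rate as defined in the study: (CTDI_DLR - CTDI_IR) / CTDI_IR * 100.
    A negative rate means DLR lowered the dose index."""
    p75_ir = np.percentile(ctdi_ir, 75)
    p75_dlr = np.percentile(ctdi_dlr, 75)
    reduction_pct = (p75_dlr - p75_ir) / p75_ir * 100.0
    return p75_ir, p75_dlr, reduction_pct

# Hypothetical per-exam CTDIvol distributions for one anatomical site.
p75_ir, p75_dlr, reduction = drl_and_reduction(
    np.array([40.0, 50.0, 60.0, 70.0]),   # IR-reconstructed exams
    np.array([10.0, 12.0, 14.0, 16.0]))   # DLR-reconstructed exams
```

Using the 75th percentile rather than the mean follows DRL convention, which flags the highest-dose quartile of practice rather than typical exams.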

An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation

Kwang-Hyun Uhm, Hyunjun Cho, Sung-Hoo Hong, Seung-Won Jung

arxiv preprint · Sep 24 2025
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volume has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation by fully utilizing the anisotropic nature of 3D CT volume. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
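The core operation the abstract describes, attending from through-plane query features to features pooled from several high-resolution in-plane reference slices, can be sketched as scaled dot-product attention over the concatenated references. This is a simplified stand-in for the paper's module, with assumed feature shapes, not the authors' implementation.

```python
import numpy as np

def multi_reference_nonlocal_attention(query, references):
    """Aggregate texture features from several reference feature maps.

    query:      (N, C) features of low-resolution through-plane positions
    references: list of (M_i, C) features from high-resolution in-plane slices
    Returns (N, C): softmax-weighted combination over all reference positions.
    """
    keys = np.concatenate(references, axis=0)           # (sum M_i, C)
    scores = query @ keys.T / np.sqrt(query.shape[1])   # scaled dot product
    scores -= scores.max(axis=1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over refs
    return weights @ keys
```

Because the softmax runs over all reference positions jointly, each through-plane location can borrow texture from whichever in-plane slice matches it best, which is the "non-local" aspect of the module.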

A Kernel Space-based Multidimensional Sparse Model for Dynamic PET Image Denoising

Kuang Xiaodong, Li Bingxuan, Li Yuan, Rao Fan, Ma Gege, Xie Qingguo, Mok Greta S P, Liu Huafeng, Zhu Wentao

arxiv preprint · Sep 23 2025
Achieving high image quality for temporal frames in dynamic positron emission tomography (PET) is challenging due to limited count statistics, especially for short frames. Recent studies have shown that deep learning (DL) is useful in a wide range of medical image denoising tasks. In this paper, we propose a model-based neural network for dynamic PET image denoising. The inter-frame spatial correlation and intra-frame structural consistency in dynamic PET are used to establish the kernel space-based multidimensional sparse (KMDS) model. We then substitute the inherent forms of the parameter estimation with neural networks to enable adaptive parameter optimization, forming the end-to-end neural KMDS-Net. Extensive experimental results from simulated and real data demonstrate that the neural KMDS-Net exhibits strong denoising performance for dynamic PET, outperforming previous baseline methods. The proposed method may be used to effectively achieve high temporal and spatial resolution for dynamic PET. Our source code is available at https://github.com/Kuangxd/Neural-KMDS-Net/tree/main.
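Kernel-space models for dynamic PET typically represent the image as x = Kα, where K is a voxel-similarity kernel built from composite-frame features and α is a coefficient image. A minimal sketch of the classic k-nearest-neighbor Gaussian kernel construction (a generic baseline, not the KMDS model itself):

```python
import numpy as np

def knn_kernel_matrix(features, k=3, sigma=1.0):
    """Row-normalized kNN Gaussian kernel over per-voxel feature vectors,
    e.g. intensities from high-count composite frames, so that each voxel
    is expressed as a weighted average of its most similar neighbors."""
    n = features.shape[0]
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k + 1]              # nearest voxels (incl. self)
        K[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)           # rows sum to 1

K = knn_kernel_matrix(np.array([[0.0], [0.5], [1.0], [10.0]]), k=1)
# A denoised short frame would then be x = K @ alpha for fitted alpha.
```

The paper's contribution is to replace the hand-set parameter estimation in such a model with learned networks; this sketch only shows the fixed-kernel starting point.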

Deep Learning for Standardized Head CT Reformatting: A Quantitative Analysis of Image Quality and Operator Variability.

Chang PD, Chu E, Floriolli D, Soun J, Fussell D

pubmed · Sep 23 2025
To validate a deep learning foundation model for automated head computed tomography (CT) reformatting and to quantify the quality, speed, and variability of conventional manual reformats in a real-world dataset. A foundation artificial intelligence (AI) model was used to create automated reformats for 1,763 consecutive non-contrast head CT examinations. Model accuracy was first validated on a 100-exam subset by assessing landmark detection as well as rotational, centering, and zoom error against expert manual annotations. The validated model was subsequently used as a reference standard to evaluate the quality and speed of the original technician-generated reformats from the full dataset. The AI model demonstrated high concordance with expert annotations, with a mean landmark localization error of 0.6-0.9 mm. Compared to expert-defined planes, AI-generated reformats exhibited a mean rotational error of 0.7 degrees, a mean centering error of 0.3%, and a mean zoom error of 0.4%. By contrast, technician-generated reformats demonstrated a mean rotational error of 11.2 degrees, a mean centering error of 6.4%, and a mean zoom error of 6.2%. Significant variability in manual reformat quality was observed across different factors including patient age, scanner location, report findings, and individual technician operators. Manual head CT reformatting is subject to substantial variability in both quality and speed. A single-shot deep learning foundation model can generate reformats with high accuracy and consistency. The implementation of such an automated method offers the potential to improve standardization, increase workflow efficiency, and reduce operational costs in clinical practice.
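The study's error metrics, rotational, centering, and zoom error against expert-defined planes, have simple geometric definitions. A minimal sketch under assumed conventions (the paper does not specify its exact formulas; the FOV normalization for centering error is an assumption here):

```python
import numpy as np

def rotational_error_deg(normal_a, normal_b):
    """Angle in degrees between two reformat-plane normal vectors."""
    a = np.asarray(normal_a) / np.linalg.norm(normal_a)
    b = np.asarray(normal_b) / np.linalg.norm(normal_b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def centering_error_pct(center_a, center_b, fov_mm):
    """In-plane offset between image centers as a percentage of the
    field of view (assumed normalization)."""
    offset = np.linalg.norm(np.asarray(center_a) - np.asarray(center_b))
    return 100.0 * offset / fov_mm

rot = rotational_error_deg([0.0, 0.0, 1.0], [0.0, 1.0, 0.0])   # orthogonal planes
cen = centering_error_pct([0.0, 0.0], [12.0, 16.0], fov_mm=200.0)
```

On these definitions, the reported gap (0.7 vs. 11.2 degrees of rotation) is directly interpretable as how far a technician's reformat plane tilts from the expert reference.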

Neural Network-Driven Direct CBCT-Based Dose Calculation for Head-and-Neck Proton Treatment Planning

Muheng Li, Evangelia Choulilitsa, Lisa Fankhauser, Francesca Albertini, Antony Lomax, Ye Zhang

arxiv preprint · Sep 22 2025
Accurate dose calculation on cone beam computed tomography (CBCT) images is essential for modern proton treatment planning workflows, particularly when accounting for inter-fractional anatomical changes in adaptive treatment scenarios. Traditional CBCT-based dose calculation suffers from image quality limitations, requiring complex correction workflows. This study develops and validates a deep learning approach for direct proton dose calculation from CBCT images using extended Long Short-Term Memory (xLSTM) neural networks. A retrospective dataset of 40 head-and-neck cancer patients with paired planning CT and treatment CBCT images was used to train an xLSTM-based neural network (CBCT-NN). The architecture incorporates energy token encoding and beam's-eye-view sequence modelling to capture spatial dependencies in proton dose deposition patterns. Training utilized 82,500 paired beam configurations with Monte Carlo-generated ground truth doses. Validation was performed on 5 independent patients using gamma analysis, mean percentage dose error assessment, and dose-volume histogram comparison. The CBCT-NN achieved gamma pass rates of 95.1 $\pm$ 2.7% using 2mm/2% criteria. Mean percentage dose errors were 2.6 $\pm$ 1.4% in high-dose regions ($>$90% of max dose) and 5.9 $\pm$ 1.9% globally. Dose-volume histogram analysis showed excellent preservation of target coverage metrics (Clinical Target Volume V95% difference: -0.6 $\pm$ 1.1%) and organ-at-risk constraints (parotid mean dose difference: -0.5 $\pm$ 1.5%). Computation time is under 3 minutes without sacrificing Monte Carlo-level accuracy. This study demonstrates the proof-of-principle of direct CBCT-based proton dose calculation using xLSTM neural networks. The approach eliminates traditional correction workflows while achieving comparable accuracy and computational efficiency suitable for adaptive protocols.
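The 95.1% figure comes from gamma analysis with 2 mm / 2% criteria, a standard dose-comparison metric. A minimal 1D sketch of the global gamma pass rate (real evaluations are 3D and use tools such as PyMedPhys; this only illustrates the definition):

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, spacing_mm, dta_mm=2.0, dd_pct=2.0):
    """Global 1D gamma analysis. For each reference point, gamma is the
    minimum over evaluated points of sqrt((dist/DTA)^2 + (dose diff/DD)^2),
    with DD taken as dd_pct of the reference maximum (global normalization).
    Returns the fraction of points with gamma <= 1."""
    x = np.arange(ref_dose.size) * spacing_mm
    dd = dd_pct / 100.0 * ref_dose.max()
    gammas = np.empty(ref_dose.size)
    for i in range(ref_dose.size):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((eval_dose - ref_dose[i]) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return float(np.mean(gammas <= 1.0))

profile = np.array([10.0, 50.0, 100.0, 50.0, 10.0])     # hypothetical dose profile
rate = gamma_pass_rate(profile, profile.copy(), spacing_mm=1.0)
```

The distance-to-agreement term is what makes gamma forgiving in steep dose gradients, where a small spatial shift causes a large point-wise dose difference.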

CPT-4DMR: Continuous sPatial-Temporal Representation for 4D-MRI Reconstruction

Xinyang Wu, Muheng Li, Xia Li, Orso Pusterla, Sairos Safai, Philippe C. Cattin, Antony J. Lomax, Ye Zhang

arxiv preprint · Sep 22 2025
Four-dimensional MRI (4D-MRI) is a promising technique for capturing respiratory-induced motion in radiation therapy planning and delivery. Conventional 4D reconstruction methods, which typically rely on phase binning or separate template scans, struggle to capture temporal variability, complicate workflows, and impose heavy computational loads. We introduce a neural representation framework that treats respiratory motion as a smooth, continuous deformation steered by a 1D surrogate signal, completely replacing the conventional discrete sorting approach. The new method fuses motion modeling with image reconstruction through two synergistic networks: the Spatial Anatomy Network (SAN) encodes a continuous 3D anatomical representation, while a Temporal Motion Network (TMN), guided by Transformer-derived respiratory signals, produces temporally consistent deformation fields. Evaluation using a free-breathing dataset of 19 volunteers demonstrates that our template- and phase-free method accurately captures both regular and irregular respiratory patterns, while preserving vessel and bronchial continuity with high anatomical fidelity. The proposed method significantly improves efficiency, reducing the total processing time from approximately five hours required by conventional discrete sorting methods to just 15 minutes of training. Furthermore, it enables inference of each 3D volume in under one second. The framework accurately reconstructs 3D images at any respiratory state, achieves superior performance compared to conventional methods, and demonstrates strong potential for application in 4D radiation therapy planning and real-time adaptive treatment.
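The central idea, a deformation field that is a continuous function of spatial coordinates and a 1D respiratory surrogate, can be sketched as a tiny coordinate network. The weights below are random placeholders (in the paper they are trained so warped volumes match the acquired data), and the architecture is a generic stand-in for the TMN, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coordinate network: (x, y, z, s) -> 3D displacement, where s is the
# 1D respiratory surrogate signal. Random weights, illustration only.
W1 = rng.normal(0.0, 0.5, (4, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 3)); b2 = np.zeros(3)

def deformation(coords_xyz, surrogate):
    """Query the continuous field at any point and any respiratory state."""
    s = np.full((coords_xyz.shape[0], 1), surrogate)
    h = np.tanh(np.concatenate([coords_xyz, s], axis=1) @ W1 + b1)
    return h @ W2 + b2

pts = rng.uniform(-1.0, 1.0, (5, 3))
disp_inhale = deformation(pts, surrogate=1.0)    # one breathing state
disp_exhale = deformation(pts, surrogate=-1.0)   # another breathing state
```

Because the field is continuous in the surrogate, any intermediate respiratory state can be reconstructed without binning, which is why the method needs no phase sorting or template scan.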

Measurement Score-Based MRI Reconstruction with Automatic Coil Sensitivity Estimation

Tingjun Liu, Chicago Y. Park, Yuyang Hu, Hongyu An, Ulugbek S. Kamilov

arxiv preprint · Sep 22 2025
Diffusion-based inverse problem solvers (DIS) have recently shown outstanding performance in compressed-sensing parallel MRI reconstruction by combining diffusion priors with physical measurement models. However, they typically rely on pre-calibrated coil sensitivity maps (CSMs) and ground truth images, making them often impractical: CSMs are difficult to estimate accurately under heavy undersampling and ground-truth images are often unavailable. We propose Calibration-free Measurement Score-based diffusion Model (C-MSM), a new method that eliminates these dependencies by jointly performing automatic CSM estimation and self-supervised learning of measurement scores directly from k-space data. C-MSM reconstructs images by approximating the full posterior distribution through stochastic sampling over partial measurement posterior scores, while simultaneously estimating CSMs. Experiments on the multi-coil brain fastMRI dataset show that C-MSM achieves reconstruction performance close to DIS with clean diffusion priors -- even without access to clean training data and pre-calibrated CSMs.
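For context on what C-MSM replaces: the conventional calibration baseline estimates coil sensitivity maps by normalizing each coil image with the root-sum-of-squares combination. A minimal sketch of that baseline (C-MSM itself estimates CSMs jointly during sampling; this is only the classical alternative):

```python
import numpy as np

def sos_sensitivity_maps(coil_images, eps=1e-8):
    """Baseline CSM estimate: each coil image divided by the
    root-sum-of-squares (RSS) combination across coils.
    coil_images: (coils, H, W) complex array."""
    rss = np.sqrt((np.abs(coil_images) ** 2).sum(axis=0)) + eps
    return coil_images / rss

rng = np.random.default_rng(1)
coils = rng.normal(size=(8, 16, 16)) + 1j * rng.normal(size=(8, 16, 16))
csm = sos_sensitivity_maps(coils)
```

By construction the per-pixel sensitivity magnitudes satisfy sum_i |S_i|^2 ≈ 1, but the estimate degrades under heavy undersampling, which is precisely the limitation the abstract cites as motivation for joint estimation.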

Development of a patient-specific cone-beam computed tomography dose optimization model using machine learning in image-guided radiation therapy.

Miura S

pubmed · Sep 22 2025
Cone-beam computed tomography (CBCT) is commonly utilized in radiation therapy to visualize soft tissues and bone structures. This study aims to develop a machine learning model that predicts optimal, patient-specific CBCT doses that minimize radiation exposure while maintaining soft tissue image quality in prostate radiation therapy. Phantom studies evaluated the relationship between dose and two image quality metrics: image standard deviation (SD) and contrast-to-noise ratio (CNR). In a prostate-simulating phantom, CNR did not significantly decrease at doses above 40% compared to the 100% dose. Based on low-contrast resolution, this value was selected as the minimum clinical dose level. In clinical image analysis, both SD and CNR degraded with decreasing dose, consistent with the phantom findings. The structural similarity index between CBCT and planning computed tomography (CT) significantly decreased at doses below 60%, with a mean value of 0.69 at 40%. Previous studies suggest that this level may correspond to acceptable registration accuracy within the typical planning target volume margins applied in image-guided radiotherapy. A machine learning model was developed to predict CBCT doses using patient-specific metrics from planning CT scans and CBCT image quality parameters. Among the tested models, support vector regression achieved the highest accuracy, with an R<sup>2</sup> value of 0.833 and a root mean squared error of 0.0876, and was therefore adopted for dose prediction. These results support the feasibility of patient-specific CBCT imaging protocols that reduce radiation dose while maintaining clinically acceptable image quality for soft tissue registration.
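The model-selection criteria reported for the support vector regressor, R² = 0.833 and RMSE = 0.0876, follow the standard definitions; a minimal sketch of both metrics (generic formulas, not code from the study):

```python
import numpy as np

def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination and root-mean-squared error,
    the two metrics used to compare candidate regression models."""
    resid = y_true - y_pred
    ss_res = (resid ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt((resid ** 2).mean()))
    return r2, rmse

# Hypothetical predicted vs. actual relative CBCT dose levels.
r2, rmse = r2_and_rmse(np.array([0.4, 0.6, 0.8, 1.0]),
                       np.array([0.42, 0.57, 0.83, 0.98]))
```

R² measures the fraction of dose variance the model explains, while RMSE stays in the units of the predicted dose fraction, so reporting both, as the study does, separates goodness of fit from absolute error size.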

Learning Scan-Adaptive MRI Undersampling Patterns with Pre-Optimized Mask Supervision

Aryan Dhar, Siddhant Gautam, Saiprasad Ravishankar

arxiv preprint · Sep 21 2025
Deep learning techniques have gained considerable attention for their ability to accelerate MRI data acquisition while maintaining scan quality. In this work, we present a convolutional neural network (CNN) based framework for learning undersampling patterns directly from multi-coil MRI data. Unlike prior approaches that rely on in-training mask optimization, our method is trained with precomputed scan-adaptive optimized masks as supervised labels, enabling efficient and robust scan-specific sampling. The training procedure alternates between optimizing a reconstructor and a data-driven sampling network, which generates scan-specific sampling patterns from observed low-frequency $k$-space data. Experiments on the fastMRI multi-coil knee dataset demonstrate significant improvements in sampling efficiency and image reconstruction quality, providing a robust framework for enhancing MRI acquisition through deep learning.
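The notion of a scan-adaptive sampling pattern, a fixed low-frequency block plus lines chosen from the observed k-space of this particular scan, can be illustrated with a simple energy-based heuristic. This is not the paper's method (the authors use precomputed optimized masks as supervision for a CNN); it only shows what "scan-specific" means in practice.

```python
import numpy as np

def scan_adaptive_mask(kspace, n_low=8, budget=32):
    """Pick phase-encode lines for one scan: always keep a centered
    low-frequency block, then fill the remaining budget with the
    highest-energy lines of this scan's multi-coil k-space.
    kspace: (coils, lines, readout) complex array. Returns bool (lines,)."""
    n_lines = kspace.shape[1]
    mask = np.zeros(n_lines, dtype=bool)
    center = n_lines // 2
    mask[center - n_low // 2: center + n_low // 2] = True   # ACS region
    energy = (np.abs(kspace) ** 2).sum(axis=(0, 2))          # per-line energy
    energy[mask] = -np.inf                                   # already chosen
    extra = np.argsort(energy)[::-1][: budget - mask.sum()]
    mask[extra] = True
    return mask

rng = np.random.default_rng(0)
k = rng.normal(size=(4, 64, 32)) + 1j * rng.normal(size=(4, 64, 32))
mask = scan_adaptive_mask(k, n_low=8, budget=32)             # 2x acceleration
```

The always-sampled low-frequency block mirrors the paper's setup, where the sampling network conditions on observed low-frequency k-space to decide the remaining lines.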
