
Determining the scanning range of coronary computed tomography angiography based on deep learning.

Zhao YH, Fan YH, Wu XY, Qin T, Sun QT, Liang BH

PubMed · Jul 28 2025
Coronary computed tomography angiography (CCTA) is essential for diagnosing coronary artery disease, as it provides detailed images of the heart's blood vessels to identify blockages or abnormalities. Traditionally, determining the computed tomography (CT) scanning range has relied on manual methods because of limited automation in this area. The aim of this study was to develop and evaluate a novel deep learning approach that automates the determination of CCTA scan ranges from anteroposterior scout images. A retrospective analysis was conducted on chest CT data from 1388 patients at the Radiology Department of the First Affiliated Hospital of a university, collected between February 27 and March 27, 2024. A deep learning model was trained on anteroposterior scout images with annotations based on CCTA standards. The dataset was split into training (672 cases), validation (167 cases), and test (167 cases) sets to ensure robust model evaluation. The model demonstrated exceptional performance on the test set, achieving a mean average precision (mAP50) of 0.995 and an mAP50-95 of 0.994 for determining CCTA scan ranges. This study demonstrates that (1) anteroposterior scout images can effectively estimate CCTA scan ranges, and (2) these estimates can be dynamically adjusted to meet the needs of various medical institutions.
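
The reported mAP50 and mAP50-95 metrics are those produced by standard object-detection frameworks, which suggests the scan range was annotated as a bounding box on the scout image. The abstract does not name the detector, so the sketch below is purely illustrative: it assumes a YOLO-style model via the ultralytics package and a hypothetical dataset config file.

from ultralytics import YOLO

# Hypothetical data config: images are anteroposterior scout views, and the single
# annotated "object" is the CCTA scan range (a box spanning the required z-extent).
model = YOLO("yolov8n.pt")                      # any small detector would do; the paper does not name one
model.train(data="scout_scan_range.yaml", epochs=100, imgsz=640)
metrics = model.val(split="test")               # evaluate on the held-out test split
print(metrics.box.map50, metrics.box.map)       # mAP50 and mAP50-95, the metrics reported above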

From promise to practice: a scoping review of AI applications in abdominal radiology.

Fotis A, Lalwani N, Gupta P, Yee J

PubMed · Jul 28 2025
AI is rapidly transforming abdominal radiology. This scoping review mapped current applications across segmentation, detection, classification, prediction, and workflow optimization based on 432 studies published between 2019 and 2024. Most studies focused on CT imaging, with fewer involving MRI, ultrasound, or X-ray. Segmentation models (e.g., U-Net) performed well in liver and pancreatic imaging (Dice coefficient 0.65-0.90). Classification models (e.g., ResNet, DenseNet) were commonly used for diagnostic labeling, with reported sensitivities ranging from 52 to 100% and specificities from 40.7 to 99%. A small number of studies employed true object detection models (e.g., YOLOv3, YOLOv7, Mask R-CNN) capable of spatial lesion localization, marking an emerging trend toward localization-based AI. Predictive models demonstrated AUCs between 0.62 and 0.99 but often lacked interpretability and external validation. Workflow optimization studies reported improved efficiency (e.g., reduced report turnaround and scan repetition), though standardized benchmarks were often missing. Major gaps identified include limited real-world validation, underuse of non-CT modalities, and unclear regulatory pathways. Successful clinical integration will require robust validation, practical implementation, and interdisciplinary collaboration.
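For reference, the Dice coefficients quoted for the segmentation models (0.65-0.90) measure volume overlap between a predicted mask and a reference mask. A minimal sketch of the computation for binary masks:

import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example with two overlapping synthetic masks
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[20:50, 20:50] = True
print(f"Dice: {dice_coefficient(a, b):.3f}")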

Dosimetric evaluation of synthetic kilo-voltage CT images generated from megavoltage CT for head and neck tomotherapy using a conditional GAN network.

Choghazardi Y, Tavakoli MB, Abedi I, Roayaei M, Hemati S, Shanei A

PubMed · Jul 28 2025
The lower image contrast of megavoltage computed tomography (MVCT) relative to kilovoltage computed tomography (kVCT) can hinder accurate dosimetric assessment. This study proposes a deep learning approach, specifically the pix2pix network, to generate high-quality synthetic kVCT (skVCT) images from MVCT data. The model was trained on a dataset of 25 paired patient images and evaluated on a test set of 15 paired images. We performed visual inspections to assess the quality of the generated skVCT images and calculated the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Dosimetric equivalence was evaluated by comparing the gamma pass rates of treatment plans derived from skVCT and kVCT images. The skVCT images exhibited significantly higher quality than the MVCT images, with PSNR and SSIM values of 31.9 ± 1.1 dB and 94.8% ± 1.3%, respectively, compared with 26.8 ± 1.7 dB and 89.5% ± 1.5% for the MVCT-to-kVCT comparison. Furthermore, treatment plans based on skVCT images achieved excellent gamma pass rates of 99.78 ± 0.14% and 99.82 ± 0.20% for the 2 mm/2% and 3 mm/3% criteria, respectively, comparable to those obtained from kVCT-based plans (99.70 ± 0.31% and 99.79 ± 1.32%). This study demonstrates the potential of pix2pix models for generating high-quality skVCT images, which could significantly enhance adaptive radiation therapy (ART).
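
PSNR and SSIM values of the kind reported above can be computed with scikit-image. The sketch below assumes co-registered 2D slices in Hounsfield units and an illustrative data_range, which would need to match how the study windowed its images.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(skvct_slice: np.ndarray, kvct_slice: np.ndarray, data_range: float = 2000.0):
    """PSNR (dB) and SSIM between a synthetic-kVCT slice and the reference kVCT slice.
    data_range is an assumed HU window width covering the anatomy of interest."""
    psnr = peak_signal_noise_ratio(kvct_slice, skvct_slice, data_range=data_range)
    ssim = structural_similarity(kvct_slice, skvct_slice, data_range=data_range)
    return psnr, ssim

# Toy example: a reference slice and a slightly noisier synthetic version
ref = np.random.normal(0.0, 200.0, (256, 256))
syn = ref + np.random.normal(0.0, 20.0, (256, 256))
print(image_quality(syn, ref))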

Predicting Intracranial Pressure Levels: A Deep Learning Approach Using Computed Tomography Brain Scans.

Theodoropoulos D, Trivizakis E, Marias K, Xirouchaki N, Vakis A, Papadaki E, Karantanas A, Karabetsos DA

PubMed · Jul 28 2025
Elevated intracranial pressure (ICP) is a serious condition that demands prompt diagnosis to avoid significant neurological injury or even death. Although invasive techniques remain the "gold standard" for ICP measurement, they are time-consuming and pose risks of complications. Various noninvasive methods have been suggested, but their experimental status limits their use in emergency situations. Meanwhile, although artificial intelligence has evolved rapidly, it has not yet fully harnessed fast-acquisition modalities such as computed tomography (CT) to evaluate ICP, likely because of the lack of available annotated data sets. In this article, we present research that addresses this gap by training four distinct deep learning models on a custom data set enhanced with demographic data and Glasgow Coma Scale (GCS) values. A key innovation of our study is the incorporation of demographic data and GCS values as additional channels of the scans. The models were trained and validated on a custom data set consisting of paired CT brain scans (n = 578) with corresponding ICP values, supplemented by GCS scores and demographic data. The algorithm addresses a binary classification problem by predicting whether ICP levels exceed a predetermined threshold of 15 mm Hg. The top-performing models achieved an area under the curve of 88.3% and a recall of 81.8%. An explainability algorithm was used to provide insight into where the models focus when generating predictions, for both the best- and lowest-performing models. This study demonstrates the potential of AI-based models to evaluate ICP levels from brain CT scans with high recall. Although promising, these findings require further validation and refinement to improve clinical applicability.
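
A minimal PyTorch sketch of the abstract's key idea, broadcasting demographic values and the GCS score into constant extra channels alongside the CT volume. The channel layout and normalization constants here are assumptions, not the authors' exact design.

import torch

def add_scalar_channels(ct: torch.Tensor, age_years: float, sex_male: float, gcs: float) -> torch.Tensor:
    """ct: (1, D, H, W) single-channel CT volume, intensities already normalized.
    Each scalar is broadcast into a constant channel so a standard 3D CNN can consume it."""
    scalars = torch.tensor([age_years / 100.0, sex_male, gcs / 15.0])  # rough normalization (assumed)
    extra = scalars.view(-1, 1, 1, 1).expand(-1, *ct.shape[1:])        # (3, D, H, W)
    return torch.cat([ct, extra], dim=0)                               # (4, D, H, W)

x = add_scalar_channels(torch.rand(1, 32, 128, 128), age_years=62, sex_male=1.0, gcs=9)
print(x.shape)  # torch.Size([4, 32, 128, 128])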

Enhancing Synthetic Pelvic CT Generation from CBCT using Vision Transformer with Adaptive Fourier Neural Operators.

Bhaskara R, Oderinde OM

PubMed · Jul 28 2025
This study introduces a novel approach to improve Cone Beam CT (CBCT) image quality by developing a synthetic CT (sCT) generation method using CycleGAN with a Vision Transformer (ViT) and an Adaptive Fourier Neural Operator (AFNO). 

Approach: A dataset of 20 prostate cancer patients who received stereotactic body radiation therapy (SBRT) was used, consisting of paired CBCT and planning CT (pCT) images. The dataset was preprocessed by registering the pCTs to the CBCTs with deformable registration (B-spline), followed by resampling to uniform voxel sizes and normalization. The model architecture integrates a CycleGAN with bidirectional generators, in which the UNet generator is enhanced with a ViT at the bottleneck. AFNO serves as the attention mechanism of the ViT, operating on the input data in the Fourier domain; it accommodates varying resolutions, is mesh-invariant, and captures long-range dependencies efficiently.

Main Results: Our model showed significant improvements in preserving anatomical detail and capturing complex image dependencies. The AFNO mechanism processed global image information effectively, adapting to interpatient variation for accurate sCT generation. Evaluation metrics, including Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Normalized Cross-Correlation (NCC), demonstrated the superiority of our method. Specifically, the model achieved an MAE of 9.71, a PSNR of 37.08 dB, an SSIM of 0.97, and an NCC of 0.99, confirming its efficacy.

Significance: The integration of AFNO within the CycleGAN UNet framework addresses Cone Beam CT image quality limitations. The model generates synthetic CTs that allow adaptive treatment planning during SBRT, enabling adjustments to the dose based on tumor response, thus reducing radiotoxicity from increased doses. This method's ability to preserve both global and local anatomical features shows potential for improving tumor targeting, adaptive radiotherapy planning, and clinical decision-making.
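
As a rough illustration of the AFNO idea, token mixing is performed in the Fourier domain rather than with quadratic spatial self-attention. The sketch below is a heavily simplified, channel-wise spectral mixer, not the authors' full adaptive block-diagonal operator.

import torch
import torch.nn as nn

class FourierMixer2d(nn.Module):
    """Minimal AFNO-style token mixer: mix spatial tokens by a learned,
    channel-wise complex scaling of their 2D Fourier modes (a simplification
    of the adaptive block-diagonal MLP used in AFNO)."""
    def __init__(self, channels: int):
        super().__init__()
        self.w_real = nn.Parameter(torch.ones(channels))
        self.w_imag = nn.Parameter(torch.zeros(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map at the UNet bottleneck
        freq = torch.fft.rfft2(x, norm="ortho")            # (B, C, H, W//2+1), complex
        weight = torch.complex(self.w_real, self.w_imag)   # one complex scale per channel
        freq = freq * weight.view(1, -1, 1, 1)
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")

y = FourierMixer2d(64)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])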

Multi-Attention Stacked Ensemble for Lung Cancer Detection in CT Scans

Uzzal Saha, Surya Prakash

arXiv preprint · Jul 27 2025
In this work, we address the challenge of binary lung nodule classification (benign vs. malignant) on CT images by proposing a multi-level attention stacked ensemble of deep neural networks. Three pretrained backbones - EfficientNet V2 S, MobileViT XXS, and DenseNet201 - are each adapted with a custom classification head tailored to 96 x 96 pixel inputs. A two-stage attention mechanism learns both model-wise and class-wise importance scores from the concatenated logits, and a lightweight meta-learner refines the final prediction. To mitigate class imbalance and improve generalization, we employ dynamic focal loss with empirically calculated class weights, MixUp augmentation during training, and test-time augmentation at inference. Experiments on the LIDC-IDRI dataset demonstrate exceptional performance, achieving 98.09% accuracy and an AUC of 0.9961, representing a 35% reduction in error rate compared to state-of-the-art methods. The model exhibits balanced performance across sensitivity (98.73%) and specificity (98.96%), with particularly strong results on challenging cases where radiologist disagreement was high. Statistical significance testing confirms the robustness of these improvements across multiple experimental runs. Our approach can serve as a robust, automated aid for radiologists in lung cancer screening.
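
A minimal sketch of the model-wise attention stage described above: importance scores are learned from the concatenated logits of the three backbones and used to fuse their predictions. Layer sizes here are illustrative, and the class-wise attention stage and meta-learner are omitted.

import torch
import torch.nn as nn

class ModelWiseAttention(nn.Module):
    """Learn per-model importance scores from the concatenated logits of
    M backbones and return their weighted combination."""
    def __init__(self, n_models: int = 3, n_classes: int = 2):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(n_models * n_classes, 32), nn.ReLU(),
            nn.Linear(32, n_models)
        )

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # logits: (B, M, C) stacked outputs of the M backbones
        b, m, c = logits.shape
        weights = torch.softmax(self.scorer(logits.reshape(b, m * c)), dim=-1)  # (B, M)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)                      # (B, C)

fused = ModelWiseAttention()(torch.randn(8, 3, 2))
print(fused.shape)  # torch.Size([8, 2])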

KC-UNIT: Multi-kernel conversion using unpaired image-to-image translation with perceptual guidance in chest computed tomography imaging.

Choi C, Kim D, Park S, Lee H, Kim H, Lee SM, Kim N

PubMed · Jul 26 2025
Computed tomography (CT) images are reconstructed from raw data (sinograms) by back projection using various convolution kernels. Kernels are typically chosen depending on the anatomical structure being imaged and the specific purpose of the scan, balancing the trade-off between image sharpness and pixel noise. Because sinograms require large storage capacity and storage space is often limited in clinical settings, CT images are generally reconstructed with only one specific kernel, and the sinogram is typically discarded after a week. Therefore, many researchers have proposed deep learning-based image-to-image translation methods for CT kernel conversion. However, transferring the style of the target kernel while preserving anatomical structure remains challenging, particularly when translating CT images from a source domain to a target domain in an unpaired manner, which is often the case in real-world settings. We therefore propose a novel kernel conversion method using unpaired image-to-image translation (KC-UNIT). This approach uses discriminator regularization, leveraging feature maps from the generator to improve semantic representation learning. To capture content and style features, a cosine-similarity content loss and a contrastive style loss are defined between the generator's feature maps and the discriminator's semantic label maps. These losses can be easily incorporated by modifying the discriminator's architecture, without requiring any additional learnable or pre-trained networks. KC-UNIT preserved fine-grained anatomical structure from the source domain during transfer and outperformed existing generative adversarial network-based methods on most kernel conversion tasks across three kernel domains. The code is available at https://github.com/cychoi97/KC-UNIT.
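
A minimal sketch of a cosine-similarity content term of the kind described, comparing a generator feature map against a discriminator semantic map at each spatial location; the exact layer choices, loss weighting, and the contrastive style counterpart are not reproduced here.

import torch
import torch.nn.functional as F

def cosine_content_loss(gen_feat: torch.Tensor, disc_feat: torch.Tensor) -> torch.Tensor:
    """Encourage the generator feature map and the discriminator's semantic map to point
    in the same direction at every spatial location (1 - mean cosine similarity).
    Both inputs: (B, C, H, W); spatial sizes are assumed to match (resize beforehand if not)."""
    cos = F.cosine_similarity(gen_feat, disc_feat, dim=1)  # (B, H, W)
    return 1.0 - cos.mean()

loss = cosine_content_loss(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(loss.item())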

A Metabolic-Imaging Integrated Model for Prognostic Prediction in Colorectal Liver Metastases

Qinlong Li, Pu Sun, Guanlin Zhu, Tianjiao Liang, Honggang QI

arXiv preprint · Jul 26 2025
Prognostic evaluation in patients with colorectal liver metastases (CRLM) remains challenging due to suboptimal accuracy of conventional clinical models. This study developed and validated a robust machine learning model for predicting postoperative recurrence risk. Preliminary ensemble models achieved exceptionally high performance (AUC > 0.98) but incorporated postoperative features, introducing data leakage risks. To enhance clinical applicability, we restricted input variables to preoperative baseline clinical parameters and radiomic features from contrast-enhanced CT imaging, specifically targeting recurrence prediction at 3, 6, and 12 months postoperatively. The 3-month recurrence prediction model demonstrated optimal performance with an AUC of 0.723 in cross-validation. Decision curve analysis revealed that across threshold probabilities of 0.55-0.95, the model consistently provided greater net benefit than "treat-all" or "treat-none" strategies, supporting its utility in postoperative surveillance and therapeutic decision-making. This study successfully developed a robust predictive model for early CRLM recurrence with confirmed clinical utility. Importantly, it highlights the critical risk of data leakage in clinical prognostic modeling and proposes a rigorous framework to mitigate this issue, enhancing model reliability and translational value in real-world settings.
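
The decision curve analysis mentioned above compares the model's net benefit with the "treat-all" and "treat-none" strategies across threshold probabilities. A minimal sketch of the standard net benefit calculation:

import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, threshold: float) -> float:
    """Standard DCA net benefit: TP/N - FP/N * pt/(1 - pt) at threshold probability pt."""
    n = len(y_true)
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

def net_benefit_treat_all(y_true: np.ndarray, threshold: float) -> float:
    """Reference curve: every patient is classified as recurrence ('treat-all')."""
    prevalence = np.mean(y_true)
    return prevalence - (1.0 - prevalence) * threshold / (1.0 - threshold)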

Quantification of hepatic steatosis on post-contrast computed tomography scans using artificial intelligence tools.

Derstine BA, Holcombe SA, Chen VL, Pai MP, Sullivan JA, Wang SC, Su GL

PubMed · Jul 26 2025
Early detection of steatotic liver disease (SLD) is critically important. In clinical practice, hepatic steatosis is frequently diagnosed using computed tomography (CT) performed for unrelated clinical indications. An equation for estimating magnetic resonance proton density fat fraction (MR-PDFF) from liver attenuation on non-contrast CT exists, but no equivalent equation exists for post-contrast CT. We sought to (1) determine whether an automated workflow can accurately measure liver attenuation, (2) validate previously identified optimal thresholds for liver or liver-spleen attenuation in post-contrast studies, and (3) develop a method for estimating MR-PDFF (FF) on post-contrast CT. The fully automated TotalSegmentator 'total' machine learning model was used to segment the 3D liver and spleen from non-contrast and post-contrast CT scans. Mean attenuation was extracted from liver (L) and spleen (S) volumes and from manually placed regions of interest (ROIs) in multi-phase CT scans of two cohorts: derivation (n = 1740) and external validation (n = 1044). Non-linear regression was used to determine the optimal coefficients for three phase-specific (arterial, venous, delayed) increasing exponential decay equations relating post-contrast L to non-contrast L. MR-PDFF estimated from non-contrast CT served as the reference standard. Mean attenuation from manual ROIs and automated volumes was nearly perfectly correlated for both liver and spleen (r > .96, p < .001). For moderate-to-severe steatosis (L < 40 HU), liver attenuation (L) alone was a better classifier than either the liver-spleen difference (L-S) or ratio (L/S) on post-contrast CT. Fat fraction calculated using a corrected post-contrast liver attenuation measure agreed with non-contrast FF > 15% in both the derivation and external validation cohorts, with AUROC between 0.92 and 0.97 on the arterial, venous, and delayed phases. Automated volumetric mean attenuation of the liver and spleen can be used instead of manually placed ROIs for liver fat assessment. Liver attenuation alone on post-contrast phases can be used to assess the presence of moderate-to-severe hepatic steatosis. Correction equations for liver attenuation on post-contrast CT enable reasonable quantification of liver steatosis, providing potential opportunities for utilizing clinical scans to develop large-scale screening or studies in SLD.
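
The phase-specific correction described above amounts to fitting a saturating ("increasing exponential decay") curve that maps post-contrast liver attenuation back to its non-contrast equivalent. The sketch below fits one such curve with scipy; the functional form, starting values, and data points are placeholders, not the paper's actual coefficients.

import numpy as np
from scipy.optimize import curve_fit

def saturating_exponential(l_post, a, b, c):
    """Placeholder 'increasing exponential decay' form: approaches a as l_post grows.
    The paper's actual parameterization is not reproduced here."""
    return a - b * np.exp(-l_post / c)

# l_post: post-contrast liver attenuation (HU); l_non: non-contrast liver attenuation (HU)
l_post = np.array([60.0, 75.0, 90.0, 105.0, 120.0])   # illustrative values only
l_non = np.array([35.0, 45.0, 52.0, 56.0, 58.0])
params, _ = curve_fit(saturating_exponential, l_post, l_non, p0=[60.0, 300.0, 25.0])
print(params)  # fitted a, b, c for one contrast phase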

Current evidence of low-dose CT screening benefit.

Yip R, Mulshine JL, Oudkerk M, Field J, Silva M, Yankelevitz DF, Henschke CI

PubMed · Jul 25 2025
Lung cancer is the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Low-dose computed tomography (LDCT) screening has emerged as a powerful tool for early detection, enabling diagnosis at curable stages and reducing lung cancer mortality. Despite strong evidence, LDCT screening uptake remains suboptimal globally. This review synthesizes current evidence supporting LDCT screening, highlights ongoing global implementation efforts, and discusses key insights from the 1st AGILE conference. Lung cancer screening is gaining global momentum, with many countries advancing plans for national LDCT programs. Expanding eligibility through risk-based models and targeting high-risk never- and light-smokers are emerging strategies to improve efficiency and equity. Technological advancements, including AI-assisted interpretation and image-based biomarkers, are addressing concerns around false positives, overdiagnosis, and workforce burden. Integrating cardiac and smoking-related disease assessment within LDCT screening offers added preventive health benefits. To maximize global impact, screening strategies must be tailored to local health systems and populations. Efforts should focus on increasing awareness, standardizing protocols, optimizing screening intervals, and strengthening multidisciplinary care pathways. International collaboration and shared infrastructure can accelerate progress and ensure sustainability. LDCT screening represents a cost-effective opportunity to reduce lung cancer mortality and premature deaths.