Machine learning to predict high-risk coronary artery disease on CT in the SCOT-HEART trial.

Williams MC, Guimaraes ARM, Jiang M, Kwieciński J, Weir-McCall JR, Adamson PD, Mills NL, Roditi GH, van Beek EJR, Nicol E, Berman DS, Slomka PJ, Dweck MR, Newby DE, Dey D

pubmed · papers · Sep 1 2025
Machine learning based on clinical characteristics has the potential to predict coronary CT angiography (CCTA) findings and help guide resource utilisation. From the SCOT-HEART (Scottish Computed Tomography of the HEART) trial, data from 1769 patients were used to train and test machine learning models (XGBoost, 10-fold cross validation, grid search hyperparameter selection). Two models were generated separately to predict the presence of coronary artery disease (CAD) and an increased burden of low-attenuation coronary artery plaque (LAP) using symptoms, demographic and clinical characteristics, electrocardiography and exercise tolerance testing (ETT). Machine learning predicted the presence of CAD on CCTA (area under the curve (AUC) 0.80, 95% CI 0.74 to 0.85) better than the 10-year cardiovascular risk score alone (AUC 0.75, 95% CI 0.70 to 0.81, p=0.004). The most important features in this model were the 10-year cardiovascular risk score, age, sex, total cholesterol and an abnormal ETT. In contrast, the second model, used to predict an increased LAP burden, performed similarly to the 10-year cardiovascular risk score (AUC 0.75, 95% CI 0.70 to 0.80 vs AUC 0.72, 95% CI 0.66 to 0.77, p=0.08), with the most important features being the 10-year cardiovascular risk score, age, body mass index and total and high-density lipoprotein cholesterol concentrations. Machine learning models can improve prediction of the presence of CAD on CCTA over the standard cardiovascular risk score. However, it was not possible to improve prediction of an increased LAP burden based on clinical factors alone.
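
The modelling recipe above (XGBoost, 10-fold cross-validation, grid-search hyperparameter selection, AUC scoring) can be sketched roughly as follows; the file name, feature columns, label, and parameter grid are illustrative assumptions, not the trial's actual configuration.

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from xgboost import XGBClassifier

# Hypothetical clinical feature table: one row per patient,
# columns assumed already numeric/encoded.
df = pd.read_csv("scot_heart_clinical.csv")           # assumed file name
features = ["risk_score_10yr", "age", "sex",
            "total_cholesterol", "abnormal_ett"]       # assumed columns
X, y = df[features], df["cad_on_ccta"]                 # assumed binary label

# 10-fold cross-validation with grid-search hyperparameter selection,
# scored by area under the ROC curve.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
param_grid = {"max_depth": [2, 3, 4],
              "learning_rate": [0.05, 0.1],
              "n_estimators": [100, 300]}
search = GridSearchCV(XGBClassifier(eval_metric="logloss"),
                      param_grid, scoring="roc_auc", cv=cv)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated AUC: %.3f" % search.best_score_)
```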

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

pubmed · papers · Sep 1 2025
To evaluate a deep-learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance with that of expert readers, using invasive coronary angiography as the reference standard. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification, and the results were compared with invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. On 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared with invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed the algorithm in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy to 95%, and reading time decreased by 54% (p < 0.001). This deep-learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and markedly reduces interpretation time.
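
A minimal sketch of the per-segment evaluation reported above: sensitivity, specificity, and accuracy against the invasive-angiography reference, plus Cohen's kappa for inter-reader agreement. The arrays below are placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

reference = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # invasive angiography (placeholder)
algorithm = np.array([1, 0, 1, 0, 0, 0, 1, 1])   # automated read (placeholder)
reader_1  = np.array([1, 0, 1, 1, 0, 1, 1, 0])   # expert reader (placeholder)

# Per-segment diagnostic performance of the algorithm vs. the reference.
tn, fp, fn, tp = confusion_matrix(reference, algorithm).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)

# Agreement between a human reader and the algorithm.
kappa = cohen_kappa_score(reader_1, algorithm)

print(f"sens {sensitivity:.2f}  spec {specificity:.2f}  "
      f"acc {accuracy:.2f}  kappa {kappa:.2f}")
```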

CT-based deep learning radiomics model for predicting proliferative hepatocellular carcinoma: application in transarterial chemoembolization and radiofrequency ablation.

Zhang H, Zhang Z, Zhang K, Gao Z, Shen Z, Shen W

pubmed · papers · Sep 1 2025
Proliferative hepatocellular carcinoma (HCC) is an aggressive tumor whose prognosis varies with disease stage and subsequent treatment. This study aims to develop and validate a deep learning radiomics (DLR) model based on contrast-enhanced CT to predict proliferative HCC and to implement risk prediction in patients treated with transarterial chemoembolization (TACE) and radiofrequency ablation (RFA). A total of 312 patients (mean age, 58 years ± 10 [SD]; 261 men and 51 women) with HCC undergoing surgery at two medical centers were included and divided into a training set (n = 182), an internal test set (n = 46) and an external test set (n = 84). DLR features were extracted from preoperative contrast-enhanced CT images. Multiple machine learning algorithms were used to develop and validate proliferative HCC prediction models in the training and test sets. Subsequently, patients from two independent new sets (RFA and TACE sets) were divided into high- and low-risk groups using the DLR score generated by the optimal model. The risk prediction value of DLR scores for recurrence-free survival (RFS) and time to progression (TTP) was examined separately in the RFA and TACE sets. The DLR proliferative HCC prediction model demonstrated excellent predictive performance with an AUC of 0.906 (95% CI 0.861–0.952) in the training set, 0.901 (95% CI 0.779–1.000) in the internal test set and 0.837 (95% CI 0.746–0.928) in the external test set. The DLR score effectively enabled risk prediction for patients in the RFA and TACE sets. In the RFA set, the low-risk group had significantly longer RFS than the high-risk group (P = 0.037). Similarly, in the TACE set, the low-risk group showed a longer TTP than the high-risk group (P = 0.034). The DLR-based contrast-enhanced CT model enables non-invasive prediction of proliferative HCC. Furthermore, DLR-based risk prediction helps identify high-risk patients undergoing RFA or TACE, providing prognostic insights for personalized management. The online version contains supplementary material available at 10.1186/s12880-025-01913-9.
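
The risk-stratification step described above (dichotomising patients by DLR score and comparing recurrence-free survival between groups) could look roughly like the sketch below; the file, column names, and median cut-off are assumptions for illustration, not the authors' code.

```python
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.read_csv("rfa_cohort.csv")               # assumed file name
cutoff = df["dlr_score"].median()                # assumed cut-off rule
low = df[df["dlr_score"] < cutoff]               # low-risk group
high = df[df["dlr_score"] >= cutoff]             # high-risk group

# Log-rank test on recurrence-free survival
# (duration in months and event indicator columns are assumed).
result = logrank_test(low["rfs_months"], high["rfs_months"],
                      event_observed_A=low["recurrence"],
                      event_observed_B=high["recurrence"])
print("log-rank p-value: %.3f" % result.p_value)
```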

3D Deep Learning for Virtual Orbital Defect Reconstruction: A Precise and Automated Approach.

Yu F, Liu C, Zhong C, Zeng W, Chen J, Liu W, Guo J, Tang W

pubmed · papers · Sep 1 2025
Accurate virtual orbital reconstruction is crucial for preoperative planning. Traditional methods, such as the mirroring technique, are unsuitable for orbital defects involving both sides of the midline and are time-consuming and labor-intensive. This study introduces a modified 3D U-Net+++ architecture for orbital defect reconstruction, aiming to enhance precision and automation. The model was trained and tested with 300 synthetic defects from cranial spiral CT scans. The method was validated in 15 clinical cases of orbital fractures and evaluated by 3 surgeons using quantitative metrics, visual assessments, and a 5-point Likert scale. For synthetic defect reconstruction, the network achieved a 95% Hausdorff distance (HD95) of < 2.0 mm, an average symmetric surface distance (ASSD) of ∼0.02 mm, a surface Dice similarity coefficient (Surface DSC) > 0.94, a peak signal-to-noise ratio (PSNR) > 35 dB, and a structural similarity index (SSIM) > 0.98, outperforming the compared state-of-the-art networks. For clinical cases, the average 5-point Likert scale scores for structural integrity, edge consistency, and overall morphology were > 4, with no significant difference between unilateral and bilateral/trans-midline defects. For clinical unilateral defect reconstruction, the HD95 was ∼2.5 mm, ASSD < 0.02 mm, Surface DSC > 0.91, PSNR > 30 dB, and SSIM > 0.99. The automatic reconstruction process took ∼10 seconds per case. In conclusion, this method offers a precise and highly automated solution for orbital defect reconstruction, particularly for bilateral and trans-midline defects. We anticipate that this method will significantly assist future clinical practice.
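
Two of the surface metrics reported above (HD95 and ASSD) can be computed from binary masks with SciPy distance transforms, roughly as sketched below; the masks and voxel spacing are placeholders, not the study's data or implementation.

```python
import numpy as np
from scipy import ndimage

def surface_distances(mask_a, mask_b, spacing):
    """Distances from the surface voxels of mask_a to the surface of mask_b."""
    surf_a = mask_a ^ ndimage.binary_erosion(mask_a)
    surf_b = mask_b ^ ndimage.binary_erosion(mask_b)
    # Distance map to the nearest surface voxel of mask_b, in mm.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

# Placeholder prediction and ground-truth masks (assumed, not study data).
pred = np.zeros((64, 64, 64), dtype=bool);  pred[20:40, 20:40, 20:40] = True
truth = np.zeros((64, 64, 64), dtype=bool); truth[22:42, 20:40, 20:40] = True
spacing = (0.5, 0.5, 0.5)  # mm per voxel, assumed

d_ab = surface_distances(pred, truth, spacing)
d_ba = surface_distances(truth, pred, spacing)
all_d = np.concatenate([d_ab, d_ba])

hd95 = np.percentile(all_d, 95)   # 95th-percentile Hausdorff distance
assd = all_d.mean()               # average symmetric surface distance
print(f"HD95 {hd95:.2f} mm, ASSD {assd:.2f} mm")
```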

Clinical Metadata Guided Limited-Angle CT Image Reconstruction

Yu Shi, Shuyi Fan, Changsheng Fang, Shuo Han, Haodong Li, Li Zhou, Bahareh Morovati, Dayang Wang, Hengyong Yu

arxiv · preprint · Sep 1 2025
Limited-angle computed tomography (LACT) offers improved temporal resolution and reduced radiation dose for cardiac imaging, but suffers from severe artifacts due to truncated projections. To address the ill-posedness of LACT reconstruction, we propose a two-stage diffusion framework guided by structured clinical metadata. In the first stage, a transformer-based diffusion model conditioned exclusively on metadata, including acquisition parameters, patient demographics, and diagnostic impressions, generates coarse anatomical priors from noise. The second stage further refines the images by integrating both the coarse prior and metadata to produce high-fidelity results. Physics-based data consistency is enforced at each sampling step in both stages using an Alternating Direction Method of Multipliers module, ensuring alignment with the measured projections. Extensive experiments on both synthetic and real cardiac CT datasets demonstrate that incorporating metadata significantly improves reconstruction fidelity, particularly under severe angular truncation. Compared to existing metadata-free baselines, our method achieves superior performance in SSIM, PSNR, nMI, and PCC. Ablation studies confirm that different types of metadata contribute complementary benefits, particularly diagnostic and demographic priors under limited-angle conditions. These findings highlight the dual role of clinical metadata in improving both reconstruction quality and efficiency, supporting their integration into future metadata-guided medical imaging frameworks.
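
The reconstructions above are scored with SSIM, PSNR, nMI, and PCC; a minimal sketch of how those four metrics might be computed for one reconstructed slice against its reference, using scikit-image and SciPy, is shown below. The arrays are random placeholders, and this is not the authors' evaluation code.

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             normalized_mutual_information)

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                          # assumed ground-truth slice
recon = np.clip(reference + 0.05 * rng.standard_normal((256, 256)), 0, 1)  # assumed output

ssim = structural_similarity(reference, recon, data_range=1.0)
psnr = peak_signal_noise_ratio(reference, recon, data_range=1.0)
nmi = normalized_mutual_information(reference, recon)
pcc, _ = pearsonr(reference.ravel(), recon.ravel())

print(f"SSIM {ssim:.3f}  PSNR {psnr:.1f} dB  nMI {nmi:.3f}  PCC {pcc:.3f}")
```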

Deep Learning Application of YOLOv8 for Aortic Dissection Screening using Non-contrast Computed Tomography.

Tang Z, Huang Y, Hu S, Shen T, Meng M, Xue T, Jia Z

pubmed · papers · Sep 1 2025
Acute aortic dissection (AD) is a life-threatening condition that poses considerable challenges for timely diagnosis. Non-contrast computed tomography (CT) is frequently used to diagnose AD in certain clinical settings, but its diagnostic accuracy can vary among radiologists. This study aimed to develop and validate an interpretable YOLOv8 deep learning model based on non-contrast CT to detect AD. This retrospective study included patients from five institutions, divided into training, internal validation, and external validation cohorts. The YOLOv8 deep learning model was trained on annotated non-contrast CT images. Its performance was evaluated using area under the curve (AUC), sensitivity, specificity, and inference time, and compared with the findings of vascular interventional radiologists, general radiologists, and radiology residents. In addition, gradient-weighted class activation mapping (Grad-CAM) saliency map analysis was performed. A total of 1138 CT scans were assessed (569 with AD, 569 controls). The YOLOv8s model achieved an AUC of 0.964 (95% confidence interval [CI] 0.939–0.988) in the internal validation cohort and 0.970 (95% CI 0.946–0.990) in the external validation cohort. In the external validation cohort, the performance of all three groups of radiologists in detecting AD was inferior to that of the YOLOv8s model. The model's sensitivity (0.976) was slightly higher than that of vascular interventional specialists (0.965; p = .18), and its specificity (0.935) was superior to that of general radiologists (0.835; p < .001). The model's inference time was 3.47 seconds, statistically significantly shorter than the radiologists' mean interpretation time of 25.32 seconds (p < .001). Grad-CAM analysis confirmed that the model focused on anatomically and clinically relevant regions, supporting its interpretability. The YOLOv8s deep learning model reliably detected AD on non-contrast CT and outperformed radiologists, particularly in time efficiency and diagnostic accuracy. Its implementation could enhance AD screening in specific settings, support clinical decision making, and improve diagnostic quality.
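
A minimal sketch of training and running a YOLOv8s detector with the Ultralytics API on axial CT slices exported as images; the dataset YAML, image file, epoch count, and confidence threshold are assumptions, not the study's actual pipeline.

```python
import time
from ultralytics import YOLO

# Start from the pretrained small variant and fine-tune on an assumed
# dataset config describing annotated non-contrast CT slices.
model = YOLO("yolov8s.pt")
model.train(data="aortic_dissection.yaml",      # assumed dataset YAML
            epochs=100, imgsz=512)              # assumed training settings

# Inference on a single exported axial slice, timing the prediction.
start = time.perf_counter()
results = model.predict("case_0001_axial.png",  # assumed exported CT slice
                        conf=0.25)              # assumed confidence threshold
elapsed = time.perf_counter() - start

for box in results[0].boxes:
    print("class", int(box.cls), "confidence %.2f" % float(box.conf))
print("inference time %.2f s" % elapsed)
```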

Predicting radiation pneumonitis in lung cancer patients using robust 4DCT-ventilation and perfusion imaging.

Neupane T, Castillo E, Chen Y, Pahlavian SH, Castillo R, Vinogradskiy Y, Choi W

pubmed · papers · Sep 1 2025
Methods have been developed that apply image processing to four-dimensional computed tomography (4DCT) to generate lung ventilation images (4DCT-ventilation). Traditional 4DCT-ventilation methods rely on density changes, lack reproducibility, and do not provide 4DCT-perfusion data. Novel 4DCT-ventilation/perfusion methods have been developed that are statistically robust and also provide 4DCT-perfusion information. The purpose of this study was to use prospective clinical trial data to evaluate the ability of these novel 4DCT-based lung function imaging methods to predict radiation pneumonitis (RP). Sixty-three advanced-stage lung cancer patients enrolled in a multi-institutional, phase 2 clinical trial of 4DCT-based functional avoidance radiation therapy were included. 4DCTs were used to generate four lung function images: 1) 4DCT-ventilation using the traditional HU approach ('4DCT-vent-HU'), and three images using the novel statistically robust methods: 2) 4DCT-ventilation based on the Mass Conserving Volume Change ('4DCT-vent-MCVC'), 3) 4DCT-ventilation using the Integrated Jacobian Formulation ('4DCT-vent-IJF'), and 4) 4DCT-perfusion. Dose-function metrics, including mean functional lung dose (fMLD) and the percentages of functional lung receiving ≥5 Gy (fV5) and ≥20 Gy (fV20), were calculated using various structure-based thresholds. The ability of dose-function metrics to predict grade ≥2 RP was assessed using logistic regression and machine learning. Model performance was evaluated using the area under the curve (AUC) and validated through 10-fold cross-validation. 10/63 (15.9%) patients developed grade ≥2 RP. Logistic regression yielded mean AUCs of 0.70 ± 0.02 (p = 0.04), 0.64 ± 0.04 (p = 0.13), 0.60 ± 0.03 (p = 0.27), and 0.63 ± 0.03 (p = 0.20) for 4DCT-vent-MCVC, 4DCT-perfusion, 4DCT-vent-IJF, and 4DCT-vent-HU, respectively, compared with 0.65 ± 0.10 (p > 0.05) for standard lung metrics. Machine learning modeling resulted in AUCs of 0.83 ± 0.04, 0.82 ± 0.05, 0.76 ± 0.05, 0.74 ± 0.06, and 0.75 ± 0.02 for 4DCT-vent-MCVC, 4DCT-perfusion, 4DCT-vent-IJF, 4DCT-vent-HU, and standard lung metrics, respectively, with accuracies of 75-85%. This is the first study to comprehensively evaluate 4DCT-perfusion and robust 4DCT-ventilation for predicting clinical outcomes. In this 63-patient cohort, using classic logistic regression and machine learning methods, 4DCT-vent-MCVC was the best predictor of RP.
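
A minimal sketch of the dose-function metrics named above (fMLD, fV5, fV20) and of the cross-validated logistic-regression step; the dose grid, functional-lung threshold, and cohort arrays are placeholders rather than trial data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
dose = rng.uniform(0, 60, size=(128, 128, 64))              # Gy, assumed dose grid
ventilation = rng.random((128, 128, 64))                    # assumed 4DCT-ventilation map
functional = ventilation > np.percentile(ventilation, 50)   # assumed functional-lung threshold

# Dose statistics restricted to voxels classified as functional lung.
fmld = dose[functional].mean()
fv5 = 100 * (dose[functional] >= 5).mean()
fv20 = 100 * (dose[functional] >= 20).mean()
print(f"fMLD {fmld:.1f} Gy, fV5 {fv5:.1f}%, fV20 {fv20:.1f}%")

# Per-patient dose-function metrics vs. grade >=2 pneumonitis (placeholder cohort).
X = rng.normal(size=(63, 3))       # fMLD, fV5, fV20 per patient (assumed)
y = rng.integers(0, 2, size=63)    # RP label (assumed)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(LogisticRegression(), X, y, cv=cv, scoring="roc_auc")
print("mean cross-validated AUC %.2f" % auc.mean())
```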

Deep Learning for Automated 3D Assessment of Rotator Cuff Muscle Atrophy and Fat Infiltration prior to Total Shoulder Arthroplasty.

Levin JM, Satir OB, Hurley ET, Colasanti C, Becce F, Terrier A, Eghbali P, Goetti P, Klifto C, Anakwenze O, Frankle MA, Namdari S, Büchler P

pubmed · papers · Sep 1 2025
Rotator cuff muscle pathology affects outcomes following total shoulder arthroplasty, yet current assessment methods lack reliability in quantifying muscle atrophy and fat infiltration. We developed a deep learning-based model for automated segmentation of the rotator cuff muscles on computed tomography (CT) and propose a T-score classification of volumetric muscle atrophy. We further characterized distinct atrophy phenotypes, 3D fat infiltration percentage (3DFI%), and anterior-posterior (AP) balance, which were compared between healthy controls, anatomic total shoulder arthroplasty (aTSA) patients, and reverse total shoulder arthroplasty (rTSA) patients. A total of 952 shoulder CT scans were included (762 controls, 103 undergoing aTSA for glenohumeral osteoarthritis, and 87 undergoing rTSA for cuff tear arthropathy). A deep learning model was developed to allow automated segmentation of the supraspinatus (SS), subscapularis (SC), infraspinatus (IS), and teres minor (TM). Muscle volumes were normalized to scapula volume, and control muscle volumes were used as the reference to calculate T-scores for each muscle. T-scores were classified as no atrophy (> -1.0), moderate atrophy (-1.0 to -2.5), and severe atrophy (< -2.5). 3DFI% was quantified as the proportion of fat within each muscle using Hounsfield unit thresholds. T-scores, 3DFI%, and AP balance were compared between the three cohorts. The aTSA cohort had significantly greater atrophy in all muscles compared with controls (p < 0.001), whereas the rTSA cohort had significantly greater atrophy in the SS, SC, and IS than the aTSA cohort (p < 0.001). In the aTSA cohort, the most common phenotype was SS severe / SC moderate / IS+TM moderate, while in the rTSA cohort it was SS severe / SC moderate / IS+TM severe. The aTSA group had significantly higher 3DFI% than controls for all muscles (p < 0.001), while the rTSA cohort had significantly higher 3DFI% than the aTSA and control cohorts for all muscles (p < 0.001). Additionally, the aTSA cohort had a significantly lower AP muscle volume ratio than controls (1.06 vs. 1.14, p < 0.001), whereas the rTSA group had a significantly higher AP muscle volume ratio than the control cohort (1.31 vs. 1.14, p < 0.001). Our study demonstrates the successful development of a deep learning model for automated volumetric assessment of rotator cuff muscle atrophy, 3DFI%, and AP balance on shoulder CT scans. We found that aTSA patients had significantly greater muscle atrophy and 3DFI% than controls, while rTSA patients had the most severe muscle atrophy and 3DFI%. Additionally, distinct phenotypes of muscle atrophy and AP muscle balance exist in aTSA and rTSA that warrant further investigation with regard to shoulder arthroplasty outcomes.
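
The T-score classification and 3DFI% described above can be expressed compactly as in the sketch below; the control statistics, fat Hounsfield-unit threshold, and example values are assumptions for illustration, not the study's reference data.

```python
import numpy as np

def t_score(norm_volume, control_mean, control_sd):
    """Muscle volume (normalised to scapula volume) expressed as a T-score."""
    return (norm_volume - control_mean) / control_sd

def atrophy_class(t):
    # Thresholds from the abstract: > -1.0 none, -1.0 to -2.5 moderate, < -2.5 severe.
    if t > -1.0:
        return "no atrophy"
    if t >= -2.5:
        return "moderate atrophy"
    return "severe atrophy"

def fat_infiltration_pct(hu_values, fat_upper_hu=0.0):
    """Share of intramuscular voxels below an assumed fat HU threshold."""
    hu_values = np.asarray(hu_values)
    return 100 * (hu_values < fat_upper_hu).mean()

# Example: supraspinatus volume ratio 0.012 vs. assumed control 0.020 +/- 0.004.
t = t_score(0.012, control_mean=0.020, control_sd=0.004)
print(atrophy_class(t), "(T = %.1f)" % t)

hu = np.random.default_rng(0).normal(20, 40, size=10_000)  # placeholder muscle voxels
print("3DFI%% = %.1f" % fat_infiltration_pct(hu))
```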

RibPull: Implicit Occupancy Fields and Medial Axis Extraction for CT Ribcage Scans

Emmanouil Nikolakakis, Amine Ouasfi, Julie Digne, Razvan Marinescu

arxiv · preprint · Sep 1 2025
We present RibPull, a methodology that utilizes implicit occupancy fields to bridge computational geometry and medical imaging. Implicit 3D representations use continuous functions that handle sparse and noisy data more effectively than discrete methods. While voxel grids are standard for medical imaging, they suffer from resolution limitations, loss of topological information, and inefficient handling of sparsity. Coordinate functions preserve complex geometrical information, represent sparse data more effectively, and allow further morphological operations. Implicit scene representations enable neural networks to encode entire 3D scenes within their weights. The result is a continuous function that can implicitly compensate for sparse signals and infer further information about the 3D scene when any combination of 3D coordinates is passed as input to the model. In this work, we use neural occupancy fields that predict whether a 3D point lies inside or outside an object to represent CT-scanned ribcages. We also apply a Laplacian-based contraction to extract the medial axis of the ribcage, demonstrating a geometrical operation that benefits greatly from continuous coordinate-based 3D scene representations compared with voxel-based representations. We evaluate our methodology on 20 medical scans from the RibSeg dataset, which is itself an extension of the RibFrac dataset. We will release our code upon publication.
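
A minimal PyTorch sketch of a coordinate-based occupancy field of the kind described above: an MLP mapping a 3D point to the probability of lying inside the object. The architecture, training loop, and synthetic labels are assumptions for illustration, not the RibPull implementation.

```python
import torch
import torch.nn as nn

class OccupancyField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, xyz):                  # xyz: (N, 3) coordinates
        return torch.sigmoid(self.net(xyz))  # (N, 1) occupancy probability

model = OccupancyField()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Placeholder supervision: points with inside/outside labels, which in
# practice would come from a segmented CT ribcage (assumed here to be a sphere).
points = torch.rand(4096, 3) * 2 - 1
labels = (points.norm(dim=1, keepdim=True) < 0.5).float()

for step in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(points), labels)
    loss.backward()
    optimiser.step()
print("final training loss %.4f" % loss.item())
```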

Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment III Trial Revisited: Objective Classification of Traumatic Brain Injury With Brain Imaging Segmentation and Biomarker Levels.

Cheong S, Gupta R, Kadaba Sridhar S, Hall AJ, Frankel M, Wright DW, Sham YY, Samadani U

pubmed · papers · Sep 1 2025
This post hoc study of the Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment (ProTECT) III trial investigates whether improving traumatic brain injury (TBI) classification, using serum biomarkers (glial fibrillary acidic protein [GFAP] and ubiquitin carboxyl-terminal esterase L1 [UCH-L1]) and algorithmically assessed total lesion volume, could identify a subset of responders to progesterone treatment beyond broad measures such as the Glasgow Coma Scale (GCS) and Glasgow Outcome Scale-Extended (GOS-E), which may fail to capture subtle changes in TBI recovery. Brain lesion volumes on CT scans were quantified using the Brain Lesion Analysis and Segmentation Tool for CT. Patients were classified into true-positive and true-negative groups based on an optimization scheme to determine a threshold that maximizes agreement between radiological assessment and objectively measured lesion volume. True-positives were further categorized into low (>0.2-10 mL), medium (>10-50 mL), and high (>50 mL) lesion volumes for analysis with protein biomarkers and injury severity. Correlation analyses linked Rotterdam scores (RSs) with biomarker levels and lesion volumes, whereas Welch's t-test evaluated biomarker differences between groups and progesterone's effects. Setting: forty-nine level 1 trauma centers in the United States. Patients: patients with moderate-to-severe TBI. Intervention: progesterone. GFAP and UCH-L1 levels were significantly higher in true-positive cases with low to medium lesion volume. Only UCH-L1 differed between the progesterone and placebo groups at 48 hours. Both biomarkers and lesion volume in the true-positive group correlated with the RS. No sex-specific or treatment differences were found. This study reaffirms elevated levels of GFAP and UCH-L1 as biomarkers for detecting TBI in patients with brain lesions and for predicting clinical outcomes. Despite improved classification using CT-imaging segmentation and serum biomarkers, we did not identify a subset of progesterone responders within 24 or 48 hours of progesterone treatment. More rigorous and quantifiable measures for classifying the nature of injury may be needed to enable the development of therapeutics, as neither serum markers nor algorithmic CT analysis performed better than the older Rotterdam or GCS metrics.
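
A minimal sketch of the group analyses described above: lesion-volume categories, correlation of lesion volume with the Rotterdam score, and a Welch's t-test between treatment arms. The file, column names, and the choice of Spearman correlation are assumptions, since the abstract does not specify them.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("protect_iii_posthoc.csv")      # assumed file name

def volume_category(ml):
    # Bins from the abstract: low >0.2-10 mL, medium >10-50 mL, high >50 mL.
    if 0.2 < ml <= 10:
        return "low"
    if 10 < ml <= 50:
        return "medium"
    if ml > 50:
        return "high"
    return "none"

df["volume_group"] = df["lesion_volume_ml"].apply(volume_category)  # assumed column

# Correlation of lesion volume with Rotterdam score (Spearman assumed).
rho, p = stats.spearmanr(df["lesion_volume_ml"], df["rotterdam_score"])
print(f"lesion volume vs Rotterdam: rho={rho:.2f}, p={p:.3f}")

# Welch's t-test comparing a 48-hour biomarker level between arms.
prog = df.loc[df["arm"] == "progesterone", "uchl1_48h"]   # assumed columns
plac = df.loc[df["arm"] == "placebo", "uchl1_48h"]
t, p = stats.ttest_ind(prog, plac, equal_var=False)       # Welch's t-test
print(f"UCH-L1 at 48 h: t={t:.2f}, p={p:.3f}")
```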