Page 26 of 141 (1403 results)

Deep Learning for Automated 3D Assessment of Rotator Cuff Muscle Atrophy and Fat Infiltration prior to Total Shoulder Arthroplasty.

Levin JM, Satir OB, Hurley ET, Colasanti C, Becce F, Terrier A, Eghbali P, Goetti P, Klifto C, Anakwenze O, Frankle MA, Namdari S, Büchler P

PubMed · Sep 1 2025
Rotator cuff muscle pathology affects outcomes following total shoulder arthroplasty, yet current assessment methods lack reliability in quantifying muscle atrophy and fat infiltration. We developed a deep learning-based model for automated segmentation of the rotator cuff muscles on computed tomography (CT) and propose a T-score classification of volumetric muscle atrophy. We further characterized distinct atrophy phenotypes, 3D fat infiltration percentage (3DFI%), and anterior-posterior (AP) balance, which were compared between healthy controls, anatomic total shoulder arthroplasty (aTSA) patients, and reverse total shoulder arthroplasty (rTSA) patients. A total of 952 shoulder CT scans were included (762 controls, 103 patients undergoing aTSA for glenohumeral osteoarthritis, and 87 undergoing rTSA for cuff tear arthropathy). A deep learning model was developed for automated segmentation of the supraspinatus (SS), subscapularis (SC), infraspinatus (IS), and teres minor (TM). Muscle volumes were normalized to scapula volume, and control muscle volumes served as the reference for calculating T-scores for each muscle. T-scores were classified as no atrophy (>-1.0), moderate atrophy (-1 to -2.5), and severe atrophy (<-2.5). 3DFI% was quantified as the proportion of fat within each muscle using Hounsfield unit thresholds. T-scores, 3DFI%, and AP balance were compared between the three cohorts. The aTSA cohort had significantly greater atrophy in all muscles than controls (p<0.001), whereas the rTSA cohort had significantly greater atrophy in the SS, SC, and IS than the aTSA cohort (p<0.001). In the aTSA cohort, the most common phenotype was SS<sub>severe</sub>/SC<sub>moderate</sub>/IS+TM<sub>moderate</sub>, while in the rTSA cohort it was SS<sub>severe</sub>/SC<sub>moderate</sub>/IS+TM<sub>severe</sub>. The aTSA group had significantly higher 3DFI% than controls for all muscles (p<0.001), while the rTSA cohort had significantly higher 3DFI% than the aTSA and control cohorts for all muscles (p<0.001).
Additionally, the aTSA cohort had a significantly lower AP muscle volume ratio than the control cohort (1.06 vs. 1.14, p<0.001), whereas the rTSA group had a significantly higher AP muscle volume ratio than the control cohort (1.31 vs. 1.14, p<0.001). Our study demonstrates successful development of a deep learning model for automated volumetric assessment of rotator cuff muscle atrophy, 3DFI%, and AP balance on shoulder CT scans. We found that aTSA patients had significantly greater muscle atrophy and 3DFI% than controls, while rTSA patients had the most severe muscle atrophy and 3DFI%. Additionally, distinct phenotypes of muscle atrophy and AP muscle balance exist in aTSA and rTSA that warrant further investigation with regard to shoulder arthroplasty outcomes.
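The volumetric T-score scheme described in this abstract is straightforward to operationalize. A minimal sketch (function names are illustrative; in the study, the reference mean and standard deviation would come from the 762-scan control cohort):

```python
def t_score(volume_ratio, control_mean, control_sd):
    """T-score of a scapula-normalized muscle volume against the control reference."""
    return (volume_ratio - control_mean) / control_sd

def classify_atrophy(t):
    """Thresholds from the abstract: >-1.0 no atrophy, -1 to -2.5 moderate, <-2.5 severe."""
    if t > -1.0:
        return "no atrophy"
    elif t >= -2.5:
        return "moderate atrophy"
    return "severe atrophy"
```

For example, a muscle volume one control standard deviation below the control mean maps to a T-score of -1.0 and is classified as moderate atrophy.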

Predicting radiation pneumonitis in lung cancer patients using robust 4DCT-ventilation and perfusion imaging.

Neupane T, Castillo E, Chen Y, Pahlavian SH, Castillo R, Vinogradskiy Y, Choi W

PubMed · Sep 1 2025
Methods have been developed that apply image processing to four-dimensional computed tomography (4DCT) to generate lung ventilation images (4DCT-ventilation). Traditional 4DCT-ventilation methods rely on density changes, lack reproducibility, and do not provide 4DCT-perfusion data. Novel 4DCT-ventilation/perfusion methods have been developed that are robust and provide 4DCT-perfusion information. The purpose of this study was to use prospective clinical trial data to evaluate the ability of novel 4DCT-based lung function imaging methods to predict radiation pneumonitis (RP). Sixty-three advanced-stage lung cancer patients enrolled in a multi-institutional, phase 2 clinical trial on 4DCT-based functional avoidance radiation therapy were included. 4DCTs were used to generate four lung function images: 1) 4DCT-ventilation using the traditional HU approach ('4DCT-vent-HU'), and three using the novel statistically robust methods: 2) 4DCT-ventilation based on the Mass Conserving Volume Change ('4DCT-vent-MCVC'), 3) 4DCT-ventilation using the Integrated Jacobian Formulation ('4DCT-vent-IJF'), and 4) 4DCT-perfusion. Dose-function metrics, including mean functional lung dose (fMLD) and the percentage of functional lung receiving ≥5 Gy (fV5) and ≥20 Gy (fV20), were calculated using various structure-based thresholds. The ability of dose-function metrics to predict grade ≥2 RP was assessed using logistic regression and machine learning. Model performance was evaluated using the area under the curve (AUC) and validated through 10-fold cross-validation. 10/63 (15.9%) patients developed grade ≥2 RP. Logistic regression yielded mean AUCs of 0.70 ± 0.02 (p = 0.04), 0.64 ± 0.04 (p = 0.13), 0.60 ± 0.03 (p = 0.27), and 0.63 ± 0.03 (p = 0.20) for 4DCT-vent-MCVC, 4DCT-perfusion, 4DCT-vent-IJF, and 4DCT-vent-HU, respectively, compared to 0.65 ± 0.10 (p > 0.05) for standard lung metrics.
Machine learning modeling resulted in AUCs of 0.83 ± 0.04, 0.82 ± 0.05, 0.76 ± 0.05, 0.74 ± 0.06, and 0.75 ± 0.02 for 4DCT-vent-MCVC, 4DCT-perfusion, 4DCT-vent-IJF, 4DCT-vent-HU, and standard lung metrics, respectively, with accuracies of 75-85%. This is the first study to comprehensively evaluate 4DCT-perfusion and robust 4DCT-ventilation in predicting clinical outcomes. The data showed that, in this 63-patient study and using classic logistic regression and machine learning methods, 4DCT-vent-MCVC was the best predictor of RP.
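The dose-function metrics named in this abstract (fMLD, fV5, fV20) weight each lung voxel by its functional value rather than treating all lung tissue equally. A minimal sketch, assuming per-voxel dose and normalized function values have already been extracted (names are illustrative, not the authors' code):

```python
def dose_function_metrics(doses, function_weights, thresholds=(5.0, 20.0)):
    """Function-weighted mean lung dose (fMLD) and fVx, the percentage of
    functional lung receiving >= x Gy.

    doses: per-voxel dose in Gy; function_weights: per-voxel functional value
    (e.g., normalized 4DCT-ventilation), same length as doses.
    """
    total_f = sum(function_weights)
    # fMLD: dose averaged with functional weighting
    fmld = sum(d * f for d, f in zip(doses, function_weights)) / total_f
    # fVx: functional volume fraction at or above each dose threshold
    fvx = {
        f"fV{int(t)}": 100.0 * sum(f for d, f in zip(doses, function_weights) if d >= t) / total_f
        for t in thresholds
    }
    return fmld, fvx
```

With uniform function weights this reduces to the standard (anatomical) mean lung dose and Vx metrics, which is why functional avoidance planning only changes the picture where ventilation or perfusion is heterogeneous.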

RibPull: Implicit Occupancy Fields and Medial Axis Extraction for CT Ribcage Scans

Emmanouil Nikolakakis, Amine Ouasfi, Julie Digne, Razvan Marinescu

arXiv preprint · Sep 1 2025
We present RibPull, a methodology that utilizes implicit occupancy fields to bridge computational geometry and medical imaging. Implicit 3D representations use continuous functions that handle sparse and noisy data more effectively than discrete methods. While voxel grids are standard for medical imaging, they suffer from resolution limitations, topological information loss, and inefficient handling of sparsity. Coordinate-based functions preserve complex geometrical information, offer a better representation of sparse data, and allow for further morphological operations. Implicit scene representations enable neural networks to encode entire 3D scenes within their weights. The result is a continuous function that can implicitly compensate for sparse signals and infer further information about the 3D scene when passed any combination of 3D coordinates as input. In this work, we use neural occupancy fields that predict whether a 3D point lies inside or outside an object to represent CT-scanned ribcages. We also apply a Laplacian-based contraction to extract the medial axis of the ribcage, demonstrating a geometrical operation that benefits greatly from continuous coordinate-based 3D scene representations over voxel-based ones. We evaluate our methodology on 20 medical scans from the RibSeg dataset, which is itself an extension of the RibFrac dataset. We will release our code upon publication.
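The core idea of an occupancy field is a continuous function mapping any 3D coordinate to inside/outside, with no voxel grid in between. A toy sketch using an analytic sphere in place of the trained network (the real model replaces this hand-written rule with an MLP, but the query interface is the same):

```python
import math

def sphere_occupancy(x, y, z, center=(0.0, 0.0, 0.0), radius=1.0):
    """Implicit occupancy: 1.0 inside the shape, 0.0 outside.

    A trained neural occupancy field replaces this analytic rule with a
    network forward pass, but the contract is identical: any continuous
    3D coordinate in, occupancy out -- no voxel resolution limit.
    """
    dist = math.dist((x, y, z), center)
    return 1.0 if dist <= radius else 0.0
```

Because the function is defined everywhere, downstream geometric operations (such as the Laplacian-based medial-axis contraction in the abstract) can sample the surface at arbitrary density instead of being locked to a voxel grid.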

Deep Learning Application of YOLOv8 for Aortic Dissection Screening using Non-contrast Computed Tomography.

Tang Z, Huang Y, Hu S, Shen T, Meng M, Xue T, Jia Z

PubMed · Sep 1 2025
Acute aortic dissection (AD) is a life-threatening condition that poses considerable challenges for timely diagnosis. Non-contrast computed tomography (CT) is frequently used to diagnose AD in certain clinical settings, but its diagnostic accuracy can vary among radiologists. This study aimed to develop and validate an interpretable YOLOv8 deep learning model based on non-contrast CT to detect AD. This retrospective study included patients from five institutions, divided into training, internal validation, and external validation cohorts. The YOLOv8 deep learning model was trained on annotated non-contrast CT images. Its performance was evaluated using area under the curve (AUC), sensitivity, specificity, and inference time, compared with findings from vascular interventional radiologists, general radiologists, and radiology residents. In addition, gradient-weighted class activation mapping (Grad-CAM) saliency map analysis was performed. A total of 1138 CT scans were assessed (569 with AD, 569 controls). The YOLOv8s model achieved an AUC of 0.964 (95% confidence interval [CI] 0.939-0.988) in the internal validation cohort and 0.970 (95% CI 0.946-0.990) in the external validation cohort. In the external validation cohort, the performance of the three groups of radiologists in detecting AD was inferior to that of the YOLOv8s model. The model's sensitivity (0.976) was slightly higher than that of vascular interventional specialists (0.965; p = .18), and its specificity (0.935) was superior to that of general radiologists (0.835; p < .001). The model's inference time was 3.47 seconds, significantly shorter than the radiologists' mean interpretation time of 25.32 seconds (p < .001). Grad-CAM analysis confirmed that the model focused on anatomically and clinically relevant regions, supporting its interpretability.
The YOLOv8s deep learning model reliably detected AD on non-contrast CT and outperformed radiologists, particularly in time efficiency and diagnostic accuracy. Its implementation could enhance AD screening in specific settings, support clinical decision making, and improve diagnostic quality.
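The reader-versus-model comparisons in this abstract reduce to confusion-matrix statistics. A minimal sketch of how sensitivity, specificity, and accuracy are computed from binary labels (1 = AD present; names are illustrative):

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of true AD cases detected
    specificity = tn / (tn + fp)   # fraction of controls correctly cleared
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```

In a balanced cohort like this one (569 AD, 569 controls), accuracy is simply the mean of sensitivity and specificity, which is worth remembering when comparing the model's 0.976/0.935 profile against the readers'.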

Comparison of segmentation performance of CNNs, vision transformers, and hybrid networks for paranasal sinuses with sinusitis on CT images.

Song D, Yang S, Han JY, Kim KG, Kim ST, Yi WJ

PubMed · Sep 1 2025
Accurate segmentation of the paranasal sinuses, including the frontal sinus (FS), ethmoid sinus (ES), sphenoid sinus (SS), and maxillary sinus (MS), plays an important role in supporting image-guided surgery (IGS) for sinusitis, facilitating safer intraoperative navigation by identifying anatomical variations and delineating surgical landmarks on CT imaging. To the best of our knowledge, no comparative studies of convolutional neural networks (CNNs), vision transformers (ViTs), and hybrid networks for segmenting each paranasal sinus in patients with sinusitis have been conducted. Therefore, the objective of this study was to compare the segmentation performance of CNNs, ViTs, and hybrid networks for individual paranasal sinuses with varying degrees of anatomical complexity and morphological and textural variations caused by sinusitis on CT images. The performance of CNNs, ViTs, and hybrid networks was compared using the Jaccard index (JI), Dice similarity coefficient (DSC), precision (PR), recall (RC), and 95% Hausdorff distance (HD95) as segmentation accuracy metrics, and the number of parameters (Params) and inference time (IT) as computational efficiency metrics. The Swin UNETR hybrid network outperformed the other networks, achieving the highest segmentation scores, with a JI of 0.719, a DSC of 0.830, a PR of 0.935, and an RC of 0.758, the lowest HD95 value of 10.529, and the smallest model size of 15.705 M Params. Also, CoTr, another hybrid network, demonstrated superior segmentation performance compared to the CNNs and ViTs and achieved the fastest inference time (IT of 0.149). Compared with CNNs and ViTs, hybrid networks significantly reduced false positives and enabled more precise boundary delineation, effectively capturing anatomical relationships among the sinuses and surrounding structures. This resulted in the lowest segmentation errors near critical surgical landmarks.
In conclusion, hybrid networks may provide a more balanced trade-off between segmentation accuracy and computational efficiency, with potential applicability in clinical decision support systems for sinusitis.
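The overlap metrics reported above (DSC and JI) can be computed directly from binary masks. A minimal sketch on flattened 0/1 masks (names are illustrative; real pipelines operate on 3D arrays, but the formulas are identical):

```python
def dice_jaccard(mask_a, mask_b):
    """Dice similarity coefficient and Jaccard index for flat binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    size_a = sum(mask_a)
    size_b = sum(mask_b)
    union = size_a + size_b - inter
    dice = 2 * inter / (size_a + size_b)  # DSC = 2|A∩B| / (|A|+|B|)
    jaccard = inter / union               # JI  = |A∩B| / |A∪B|
    return dice, jaccard
```

The two metrics are monotonically related (DSC = 2·JI/(1+JI)), which is why the Swin UNETR result leads on both simultaneously; HD95, by contrast, is a boundary-distance metric and can disagree with them.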

Machine learning to predict high-risk coronary artery disease on CT in the SCOT-HEART trial.

Williams MC, Guimaraes ARM, Jiang M, Kwieciński J, Weir-McCall JR, Adamson PD, Mills NL, Roditi GH, van Beek EJR, Nicol E, Berman DS, Slomka PJ, Dweck MR, Newby DE, Dey D

PubMed · Sep 1 2025
Machine learning based on clinical characteristics has the potential to predict coronary CT angiography (CCTA) findings and help guide resource utilisation. From the SCOT-HEART (Scottish Computed Tomography of the HEART) trial, data from 1769 patients were used to train and test machine learning models (XGBoost, 10-fold cross-validation, grid search hyperparameter selection). Two models were separately generated to predict the presence of coronary artery disease (CAD) and an increased burden of low-attenuation coronary artery plaque (LAP) using symptoms, demographic and clinical characteristics, electrocardiography, and exercise tolerance testing (ETT). Machine learning predicted the presence of CAD on CCTA (area under the curve (AUC) 0.80, 95% CI 0.74 to 0.85) better than the 10-year cardiovascular risk score alone (AUC 0.75, 95% CI 0.70 to 0.81; p=0.004). The most important features in this model were the 10-year cardiovascular risk score, age, sex, total cholesterol, and an abnormal ETT. In contrast, the second model, used to predict an increased LAP burden, performed similarly to the 10-year cardiovascular risk score (AUC 0.75, 95% CI 0.70 to 0.80 vs AUC 0.72, 95% CI 0.66 to 0.77, p=0.08), with the most important features being the 10-year cardiovascular risk score, age, body mass index, and total and high-density lipoprotein cholesterol concentrations. Machine learning models can improve prediction of the presence of CAD on CCTA over the standard cardiovascular risk score. However, it was not possible to improve the prediction of an increased LAP burden based on clinical factors alone.
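The AUCs reported above have a useful rank interpretation: the probability that a randomly chosen patient with CAD receives a higher model score than a randomly chosen patient without (ties counting half). A minimal sketch of that Mann-Whitney identity (illustrative, not the authors' code):

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney identity: the probability that a
    positive case scores above a negative one, with ties counting half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

This interpretation makes the comparison in the abstract concrete: AUC 0.80 versus 0.75 means the model correctly ranks a CAD/no-CAD pair 80% of the time versus 75% for the risk score alone.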

Clinical Metadata Guided Limited-Angle CT Image Reconstruction

Yu Shi, Shuyi Fan, Changsheng Fang, Shuo Han, Haodong Li, Li Zhou, Bahareh Morovati, Dayang Wang, Hengyong Yu

arXiv preprint · Sep 1 2025
Limited-angle computed tomography (LACT) offers improved temporal resolution and reduced radiation dose for cardiac imaging, but suffers from severe artifacts due to truncated projections. To address the ill-posedness of LACT reconstruction, we propose a two-stage diffusion framework guided by structured clinical metadata. In the first stage, a transformer-based diffusion model conditioned exclusively on metadata, including acquisition parameters, patient demographics, and diagnostic impressions, generates coarse anatomical priors from noise. The second stage further refines the images by integrating both the coarse prior and metadata to produce high-fidelity results. Physics-based data consistency is enforced at each sampling step in both stages using an Alternating Direction Method of Multipliers module, ensuring alignment with the measured projections. Extensive experiments on both synthetic and real cardiac CT datasets demonstrate that incorporating metadata significantly improves reconstruction fidelity, particularly under severe angular truncation. Compared to existing metadata-free baselines, our method achieves superior performance in SSIM, PSNR, nMI, and PCC. Ablation studies confirm that different types of metadata contribute complementary benefits, particularly diagnostic and demographic priors under limited-angle conditions. These findings highlight the dual role of clinical metadata in improving both reconstruction quality and efficiency, supporting their integration into future metadata-guided medical imaging frameworks.
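The physics-based data-consistency step described above solves, at each sampling step, a proximal problem balancing fidelity to the measured projections against closeness to the current diffusion sample. A toy gradient-descent sketch on a tiny dense system (the paper uses an ADMM module with a real CT forward operator; the solver choice and all names here are illustrative assumptions):

```python
def data_consistency_step(x, z, A, b, rho=1.0, lr=0.1, iters=200):
    """One ADMM-style data-consistency update: pull the diffusion sample z
    toward agreement with measured projections b under forward model A,
    by gradient descent on ||Ax - b||^2 + rho * ||x - z||^2.

    A is a list of rows (list of lists); x, z, b are flat lists.
    """
    x = list(x)
    m, n = len(A), len(x)
    for _ in range(iters):
        # residual of the measurement term
        Ax = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
        resid = [axi - bi for axi, bi in zip(Ax, b)]
        # gradient: 2 A^T (Ax - b) + 2 rho (x - z)
        grad = [2 * sum(A[i][j] * resid[i] for i in range(m)) + 2 * rho * (x[j] - z[j])
                for j in range(n)]
        x = [xj - lr * gj for xj, gj in zip(x, grad)]
    return x
```

With A as the identity, the minimizer is the closed-form blend (b + rho·z)/(1 + rho), which makes the trade-off between measurement fidelity and the prior sample explicit.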

Predicting perineural invasion of intrahepatic cholangiocarcinoma based on CT: a multicenter study.

Lin Y, Liu Z, Li J, Feng ST, Dong Z, Tang M, Song C, Peng Z, Cai H, Hu Q, Zou Y, Zhou X

PubMed · Sep 1 2025
This study explored the feasibility of preoperatively predicting perineural invasion (PNI) of intrahepatic cholangiocarcinoma (ICC) through machine learning based on clinical and CT image features, which may help in individualized clinical decision making and modification of further treatment strategies. The study enrolled 199 patients with histologically confirmed ICC from three institutions for final analysis. Of these, 111 patients from Institution I were recruited as the training and internal validation cohorts. Significant clinical and CT image features for predicting PNI were screened using the least absolute shrinkage and selection operator (LASSO) to construct machine learning models. Seventy-two patients from Institutions II and III were recruited as two external validation cohorts, and 16 patients from Institution I were enrolled as a prospective cohort to assess model performance. Tumor location (perihilar), intrahepatic bile duct dilatation, and arterial enhancement pattern were selected using LASSO for model construction. Machine learning models were developed based on these three features using five algorithms: multilayer perceptron, random forest, support vector machine, logistic regression, and XGBoost. The AUCs of the models exceeded 0.86, 0.84, 0.79, and 0.72 in the training cohort, internal validation cohort, external validation cohorts, and prospective cohort, respectively. Machine learning models based on CT were accurate in predicting PNI of ICC, which may help in treatment decision making.
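LASSO's ability to screen features, as used above, comes from its proximal operator, soft-thresholding, which shrinks coefficients and sets small ones exactly to zero so the corresponding features drop out. A minimal sketch of that operator (illustrative; the study applied LASSO to its clinical and CT features):

```python
def soft_threshold(z, lam):
    """LASSO proximal operator S(z, lam): shrink toward zero, and return
    exactly zero when |z| <= lam -- this is what eliminates features."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```

Applied coordinate-wise inside coordinate descent, this is how a candidate pool of clinical and imaging variables collapses to a sparse set such as the three features (tumor location, bile duct dilatation, arterial enhancement pattern) retained here.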

Deep learning-based super-resolution method for projection image compression in radiotherapy.

Chang Z, Shang J, Fan Y, Huang P, Hu Z, Zhang K, Dai J, Yan H

PubMed · Sep 1 2025
Cone-beam computed tomography (CBCT) is a three-dimensional (3D) imaging method designed for routine target verification of cancer patients during radiotherapy. The images are reconstructed from a sequence of projection images obtained by the on-board imager attached to a radiotherapy machine. CBCT images are usually stored in a health information system, but the projection images are mostly discarded due to their massive volume. To store them economically, in this study, a deep learning (DL)-based super-resolution (SR) method for compressing the projection images was investigated. In image compression, low-resolution (LR) images were down-sampled from the high-resolution (HR) projection images by a down-sampling factor (DSF) and then encoded into a video file. In image restoration, LR images were decoded from the video file and then up-sampled to HR projection images via the DL network. Three SR DL networks, a convolutional neural network (CNN), a residual network (ResNet), and a generative adversarial network (GAN), were tested along with three video coding-decoding (CODEC) algorithms: Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), and AOMedia Video 1 (AV1). Based on two databases of natural and projection images, the performance of the SR networks and video codecs was evaluated with the compression ratio (CR), peak signal-to-noise ratio (PSNR), video quality metric (VQM), and structural similarity index measure (SSIM). The codec AV1 achieved the highest CR among the three codecs. The CRs of AV1 were 13.91, 42.08, 144.32, and 289.80 for DSFs of 0 (non-SR), 2, 4, and 6, respectively. The SR network ResNet achieved the best restoration accuracy among the three SR networks. Its PSNRs were 69.08, 41.60, 37.08, and 32.44 dB for the four DSFs, respectively; its VQMs were 0.06%, 3.65%, 6.95%, and 13.03%; and its SSIMs were 0.9984, 0.9878, 0.9798, and 0.9518.
As the DSF increased, the CR increased proportionally with modest degradation of the restored images. Applying the SR model can further improve the CR beyond what the video encoders alone achieve. This compression method is not only effective for two-dimensional (2D) projection images but is also applicable to the 3D images used in radiotherapy.
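Two of the evaluation quantities above, CR and PSNR, are simple to compute directly. A minimal sketch on flat pixel lists (names are illustrative; VQM and SSIM require considerably more machinery):

```python
import math

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two flat images."""
    mse = sum((o - r) ** 2 for o, r in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def compression_ratio(raw_bytes, compressed_bytes):
    """CR = original size / compressed size."""
    return raw_bytes / compressed_bytes
```

The trade-off reported in the abstract is exactly the interplay of these two numbers: raising the DSF multiplies the CR (13.91 up to 289.80) while the PSNR of the SR-restored images falls from 69.08 to 32.44 dB.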

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

PubMed · Sep 1 2025
To evaluate a deep learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance with that of expert readers, using invasive coronary angiography as the reference standard. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification, and results were compared against invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. Across 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared with invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed the algorithm in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy to 95%, and reading time decreased by 54% (p < 0.001). This deep learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and markedly reduces interpretation time.
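The inter-reader agreement figures in this abstract (κ = 0.75 and 0.86) are Cohen's kappa values: observed agreement corrected for the agreement expected by chance. A minimal sketch for two readers' categorical scores (names are illustrative):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters: (p_obs - p_exp) / (1 - p_exp)."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # observed agreement: fraction of cases where the raters match
    p_obs = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    # chance agreement: product of each rater's marginal category frequencies
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

On the conventional Landis-Koch scale used in the abstract, 0.61-0.80 is "substantial" and 0.81-1.00 "almost perfect", which is why the move from 0.75 to 0.86 is described as crossing that boundary.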
