Deep learning enables fully automated cineCT-based assessment of regional right ventricular function

Craine, A., Simon, K., Severance, L., Alshawabkeh, L., Kim, N. H., Adler, E. D., Narezkina, A., Ben-Yehuda, O., Contijoch, F.

medRxiv preprint · Sep 30, 2025
Background: Right ventricular (RV) function is a key factor in the diagnosis and prognosis of heart disease. However, current advanced CT-based assessments rely on semi-automated segmentation of the RV blood pool and manual delineation of the RV free and septal wall boundaries. Both steps are time-consuming and prone to inter- and intra-observer variability.
Methods: We developed and evaluated a fully automated pipeline consisting of two deep learning methods to automate volumetric and regional strain analysis of the RV from contrast-enhanced, ECG-gated cineCT images. The Right Heart Blood Segmenter (RHBS) is a 3D high-resolution configuration of nnU-Net that defines the endocardial boundary, while the Right Ventricular Wall Labeler (RVWL) is a 3D point cloud-based deep learning method that labels the free and septal walls. We trained our models on a diverse cohort of patients with different RV phenotypes and tested them in an independent cohort of patients with aortic stenosis undergoing TAVR.
Results: Our approach demonstrated high accuracy in both the cross-validation and independent validation cohorts. RHBS yielded Dice scores of 0.96 and accurate volumetry metrics, while RVWL achieved high Dice scores (>0.90) and high accuracy (>93%) for wall labeling. The combination of RHBS and RVWL provided accurate assessment of free and septal wall regional strain, with a median cosine similarity of 0.97 in the independent cohort.
Conclusions: A fully automated 3D cineCT-based RV regional strain analysis pipeline has the potential to significantly enhance the efficiency and reproducibility of RV function assessment, enabling the evaluation of large cohorts and multi-center studies.
Key Points:
- RV endocardial segmentation of contrast-enhanced CT scans can be used to perform volumetry and, when paired with labeling of the free and septal walls, regional evaluation of surface strain.
- However, this has previously required time-intensive semi-automated segmentation methods and manual labeling of the free and septal wall regions.
- Here, we describe an automated, deep learning-based approach that uses two separate DL models to define the endocardial boundary (in 3D) and then label the free and septal walls on the endocardial surface.
- Our approach enables rapid, automatic advanced phenotyping of patients, reducing prior limitations of interobserver variability and the challenges of evaluating large cohorts.
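The agreement metric reported above (cosine similarity between automated and reference regional strain) is straightforward to reproduce; the minimal sketch below, using entirely hypothetical strain curves, shows how it might be computed for a single free-wall region across cardiac phases.

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two 1-D strain curves (strain vs. cardiac phase)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical free-wall surface strain across 10 cardiac phases:
# one curve from the automated RHBS+RVWL pipeline, one from a
# semi-automated reference segmentation of the same patient.
auto_strain = np.array([0.0, -0.04, -0.09, -0.14, -0.18, -0.20, -0.17, -0.12, -0.06, -0.01])
ref_strain  = np.array([0.0, -0.05, -0.10, -0.15, -0.19, -0.21, -0.18, -0.13, -0.07, -0.02])

print(f"cosine similarity: {cosine_similarity(auto_strain, ref_strain):.3f}")
```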

S²CAC: Semi-supervised coronary artery calcium segmentation via scoring-driven consistency and negative sample boosting.

Hao J, Shah NS, Zhou B

PubMed paper · Sep 30, 2025
Coronary artery calcium (CAC) scoring plays a pivotal role in assessing the risk for cardiovascular disease events to guide the intensity of cardiovascular disease preventive efforts. Accurate CAC scoring from gated cardiac computed tomography (CT) relies on precise segmentation of calcification. However, the small size, irregular shape, and sparse distribution of calcification in 3D volumes present significant challenges for automated CAC assessment. Training reliable automatic segmentation models typically requires large-scale annotated datasets, yet the annotation process is resource-intensive, requiring highly trained specialists. To address this limitation, we propose S²CAC, a semi-supervised learning framework for CAC segmentation that achieves robust performance with minimal labeled data. First, we design a dual-path hybrid transformer architecture that jointly optimizes pixel-level segmentation and volume-level scoring through feature symbiosis, minimizing the information loss caused by down-sampling operations and enhancing the model's ability to preserve fine-grained calcification details. Second, we introduce a scoring-driven consistency mechanism that aligns pixel-level segmentation with volume-level CAC scores through differentiable score estimation, effectively leveraging unlabeled data. Third, we address the challenge of incorporating negative samples (cases without CAC) into training. Directly using these samples risks model collapse, as the sparse nature of CAC regions may lead the model to predict all-zero maps. To mitigate this, we design a dynamic weighted loss function that integrates negative samples into the training process while preserving the model's sensitivity to calcification. This approach effectively reduces over-segmentation and enhances overall model performance. We validate our framework on two public non-contrast gated CT datasets, achieving state-of-the-art performance over previous baseline methods. Additionally, the Agatston scores derived from our segmentation maps demonstrate strong concordance with manual annotations. These results highlight the potential of our approach to reduce dependence on annotated data while maintaining high accuracy in CAC scoring. Code and trained model weights are available at: https://github.com/JinkuiH/S2CAC.
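The abstract does not spell out the form of the differentiable score estimation, so the sketch below is only one plausible reading of the scoring-driven consistency idea: approximate the Agatston density weights with smooth sigmoid steps so that a score implied by the soft segmentation can be compared, differentiably, against a volume-level score prediction on unlabeled scans. All names, tensor shapes, and the sharpness constant are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def soft_density_weight(hu: torch.Tensor, sharpness: float = 0.1) -> torch.Tensor:
    """Differentiable surrogate of the Agatston density weight (1-4), built from
    sigmoid steps at the 130/200/300/400 HU thresholds."""
    w = torch.zeros_like(hu)
    for thr in (130.0, 200.0, 300.0, 400.0):
        w = w + torch.sigmoid(sharpness * (hu - thr))
    return w  # roughly 0 well below 130 HU, approaching 4 well above 400 HU

def score_consistency_loss(seg_prob: torch.Tensor,
                           pred_score: torch.Tensor,
                           hu: torch.Tensor,
                           voxel_area_mm2: float = 0.25) -> torch.Tensor:
    """Penalize disagreement between the score implied by the pixel-level
    segmentation and a volume-level score head; usable on unlabeled scans."""
    implied = (seg_prob * soft_density_weight(hu)).sum(dim=(1, 2, 3, 4)) * voxel_area_mm2
    return F.l1_loss(torch.log1p(implied), torch.log1p(pred_score))

# Hypothetical batch: 2 volumes, 1 channel, 32 x 128 x 128 voxels.
hu = torch.randn(2, 1, 32, 128, 128) * 300 + 100      # stand-in HU values
seg_prob = torch.rand(2, 1, 32, 128, 128)              # soft segmentation output
pred_score = torch.tensor([150.0, 800.0])              # volume-level score head output
print(score_consistency_loss(seg_prob, pred_score, hu))
```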

Non-contrast CT-based pulmonary embolism detection using GAN-generated synthetic contrast enhancement: Development and validation of an AI framework.

Kim YT, Bak SH, Han SS, Son Y, Park J

PubMed paper · Sep 30, 2025
Acute pulmonary embolism (PE) is a life-threatening condition often diagnosed using CT pulmonary angiography (CTPA). However, CTPA is contraindicated in patients with contrast allergies or at risk for contrast-induced nephropathy. This study explores an AI-driven approach to generate synthetic contrast-enhanced images from non-contrast CT scans for accurate diagnosis of acute PE without contrast agents. This retrospective study used dual-energy and standard CT datasets from two institutions. The internal dataset included 84 patients: 41 PE-negative cases for generative model training and 43 patients (30 PE-positive) for diagnostic evaluation. An external dataset of 62 patients (26 PE-positive) was used for further validation. We developed a generative adversarial network (GAN) based on U-Net, trained on paired non-contrast and contrast-enhanced images. The model was optimized using contrast-enhanced L1-loss with hyperparameter λ to improve anatomical accuracy. A ConvNeXt-based classifier trained on the RSNA dataset (N = 7,122) generated per-slice PE probabilities, which were aggregated for patient-level prediction via a Random Forest model. Diagnostic performance was assessed using five-fold cross-validation on both internal and external datasets. The GAN achieved optimal image similarity at λ = 0.5, with the lowest mean absolute error (0.0089) and highest MS-SSIM (0.9674). PE classification yielded AUCs of 0.861 and 0.836 in the internal dataset, and 0.787 and 0.680 in the external dataset, using real and synthetic images, respectively. No statistically significant differences were observed. Our findings demonstrate that synthetic contrast CT can serve as a viable alternative for PE diagnosis in patients contraindicated for CTPA, supporting safe and accessible imaging strategies.
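The exact definition of the contrast-enhanced L1 loss is not given in the abstract; one common construction, sketched below under that assumption, adds a λ-weighted L1 term restricted to contrast-enhancing regions (e.g., a vessel mask) on top of the global L1 between synthetic and real contrast-enhanced slices. Tensor shapes and the mask are hypothetical.

```python
import torch
import torch.nn.functional as F

def contrast_weighted_l1(fake_cect: torch.Tensor,
                         real_cect: torch.Tensor,
                         vessel_mask: torch.Tensor,
                         lam: float = 0.5) -> torch.Tensor:
    """Global L1 plus a lambda-weighted L1 restricted to contrast-enhancing
    regions (averaged over all pixels for simplicity)."""
    global_term = F.l1_loss(fake_cect, real_cect)
    masked_term = F.l1_loss(fake_cect * vessel_mask, real_cect * vessel_mask)
    return global_term + lam * masked_term

# Hypothetical 2-D slices (batch=4, 1 channel, 256x256) and a binary vessel mask.
fake = torch.rand(4, 1, 256, 256)
real = torch.rand(4, 1, 256, 256)
mask = (torch.rand(4, 1, 256, 256) > 0.9).float()
print(contrast_weighted_l1(fake, real, mask, lam=0.5))
```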

Artificial Intelligence in Low-Dose Computed Tomography Screening of the Chest: Past, Present, and Future.

Yip R, Jirapatnakul A, Avila R, Gutierrez JG, Naghavi M, Yankelevitz DF, Henschke CI

PubMed paper · Sep 30, 2025
The integration of artificial intelligence (AI) with low-dose computed tomography (LDCT) has the potential to transform lung cancer screening into a comprehensive approach to early detection of multiple diseases. Building on over 3 decades of research and global implementation by the International Early Lung Cancer Action Program (I-ELCAP), this paper reviews the development and clinical integration of AI for interpreting LDCT scans. We describe the historical milestones in AI-assisted lung nodule detection, emphysema quantification, and cardiovascular risk assessment using visual and quantitative imaging features. We also discuss challenges related to image acquisition variability, ground truth curation, and clinical integration, with a particular focus on the design and implementation of the open-source IELCAP-AIRS system and the ScreeningPLUS infrastructure, which enable AI training, validation, and deployment in real-world screening environments. AI algorithms for rule-out decisions, nodule tracking, and disease quantification have the potential to reduce radiologist workload and advance precision screening. With the ability to evaluate multiple diseases from a single LDCT scan, AI-enabled screening offers a powerful, scalable tool for improving population health. Ongoing collaboration, standardized protocols, and large annotated datasets are critical to advancing the future of integrated, AI-driven preventive care.

Identification of structural predictors of lung function improvement in adults with cystic fibrosis treated with elexacaftor-tezacaftor-ivacaftor using deep-learning.

Chassagnon G, Marini R, Ong V, Da Silva J, Habip Gatenyo D, Honore I, Kanaan R, Carlier N, Fesenbeckh J, Burnet E, Revel MP, Martin C, Burgel PR

PubMed paper · Sep 30, 2025
The purpose of this study was to evaluate the relationship between structural abnormalities on CT and lung function prior to and after initiation of elexacaftor-tezacaftor-ivacaftor (ETI) in adults with cystic fibrosis (CF) using a deep learning model. A deep learning quantification model was developed using 100 chest computed tomography (CT) examinations of patients with CF and 150 chest CT examinations of patients with various other bronchial diseases to quantify seven types of abnormalities. This model was then applied to an independent dataset of CT examinations of 218 adults with CF who were treated with ETI. The relationship between structural abnormalities and percent predicted forced expiratory volume in one second (ppFEV₁) was examined using general linear regression models. The deep learning model performed as well as radiologists for the quantification of the seven types of abnormalities. Chest CT examinations obtained before and one year after the initiation of ETI were analyzed. The independent structural predictors of ppFEV₁ prior to ETI were bronchial wall thickening (P = 0.011), mucus plugging (P < 0.001), consolidation/atelectasis (P < 0.001), and mosaic perfusion (P < 0.001). An increase in ppFEV₁ after initiation of ETI independently correlated with a decrease in bronchial wall thickening (-49%; P = 0.004), mucus plugging (-92%; P < 0.001), centrilobular nodules (-78%; P = 0.009), and mosaic perfusion (-14%; P < 0.001). Younger age (P < 0.001), greater mucus plugging extent (P = 0.016), and centrilobular nodules (P < 0.001) prior to ETI initiation were independent predictors of ppFEV₁ improvement. A deep learning model can quantify CT lung abnormalities in adults with CF. Lung function impairment in adults with CF is associated with muco-inflammatory lesions on CT, which are largely reversible with ETI, and with mosaic perfusion, which appears less reversible and is presumably related to irreversible damage. Predictors of lung function improvement are a younger age and a greater extent of muco-inflammatory lesions obstructing the airways.
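As a minimal illustration of the general linear regression step described above, the sketch below fits ppFEV₁ against DL-quantified abnormality extents with statsmodels; the table, column names, and values are entirely hypothetical stand-ins, not study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per patient, DL-quantified extent (% of lung volume)
# of each abnormality plus ppFEV1 before ETI. Column names are illustrative.
df = pd.DataFrame({
    "ppFEV1": [45, 62, 80, 38, 71, 55, 90, 66],
    "wall_thickening": [12, 8, 4, 15, 6, 10, 2, 7],
    "mucus_plugging": [9, 5, 2, 14, 3, 8, 1, 4],
    "consolidation": [4, 2, 1, 6, 2, 3, 0, 2],
    "mosaic_perfusion": [20, 12, 6, 25, 10, 15, 3, 11],
})

# General linear model relating lung function to structural abnormality extents.
model = smf.ols(
    "ppFEV1 ~ wall_thickening + mucus_plugging + consolidation + mosaic_perfusion",
    data=df,
).fit()
print(model.summary())
```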

Radiomics analysis using machine learning to predict perineural invasion in pancreatic cancer.

Sun Y, Li Y, Li M, Hu T, Wang J

PubMed paper · Sep 30, 2025
Pancreatic cancer is one of the most aggressive and lethal malignancies of the digestive system and is characterized by an extremely low five-year survival rate. The perineural invasion (PNI) status in patients with pancreatic cancer is positively correlated with adverse prognoses, including overall survival and recurrence-free survival. Emerging radiomic methods can reveal subtle variations in tumor structure by analyzing preoperative contrast-enhanced computed tomography (CECT) imaging data. Therefore, we propose the development of a preoperative CECT-based radiomic model to predict the risk of PNI in patients with pancreatic cancer. This study enrolled patients with pancreatic malignancies who underwent radical resection. Computerized tools were employed to extract radiomic features from tumor regions of interest (ROIs). The optimal radiomic features associated with PNI were selected to construct a radiomic score (RadScore). The model's reliability was comprehensively evaluated by integrating clinical and follow-up information, with SHapley Additive exPlanations (SHAP)-based visualization to interpret the decision-making process. A total of 167 patients with pancreatic malignancies were included. From the CECT images, 851 radiomic features were extracted, 22 of which were identified as most strongly correlated with PNI. These 22 features were evaluated using seven machine learning methods. We ultimately selected the Gaussian naive Bayes model, which demonstrated robust predictive performance in both the training and validation cohorts, achieving area under the ROC curve (AUC) values of 0.899 and 0.813, respectively. Among the clinical features, maximum tumor diameter, CA 19-9 level, blood glucose concentration, and lymph node metastasis were found to be independent risk factors for PNI. The integrated model yielded AUCs of 0.945 (training cohort) and 0.881 (validation cohort). Decision curve analysis confirmed the clinical utility of the ensemble model for predicting perineural invasion. The combined model integrating clinical and radiomic features exhibited excellent performance in predicting the probability of perineural invasion in patients with pancreatic cancer. This approach has significant potential to optimize therapeutic decision-making and prognostic evaluation in patients with PNI.
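A minimal sketch of the classification step named in the abstract (Gaussian naive Bayes on the selected radiomic features, evaluated by AUC) is shown below using scikit-learn; the feature matrix and labels are randomly generated stand-ins, not study data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical radiomics matrix: 167 patients x 22 selected features,
# with a binary perineural-invasion (PNI) label.
X = rng.normal(size=(167, 22))
y = rng.integers(0, 2, size=167)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)
clf = GaussianNB().fit(scaler.transform(X_train), y_train)

val_prob = clf.predict_proba(scaler.transform(X_val))[:, 1]
print(f"validation AUC: {roc_auc_score(y_val, val_prob):.3f}")
```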

Deep Learning-Based Cardiac CT Coronary Motion Correction Method with Temporal Weight Adjustment: Clinical Data Evaluation.

Yao D, Yan C, Du W, Zhang J, Wang Z, Zhang S, Yang M, Dai S

PubMed paper · Sep 30, 2025
Cardiac motion artifacts frequently degrade the quality and interpretability of coronary computed tomography angiography (CCTA) images, making it difficult for radiologists to accurately identify and evaluate the details of the coronary vessels. In this paper, a deep learning-based approach for coronary artery motion compensation, namely a temporal-weighted motion correction network (TW-MoCoNet), was proposed. Firstly, the motion data required for TW-MoCoNet training were generated using a motion artifact simulation method based on the original artifact-free CCTA images. Secondly, TW-MoCoNet, consisting of a temporal weighting correction module and a differentiable spatial transformer module, was trained using these generated paired images. Finally, the proposed method was evaluated on 67 clinical cases with objective metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), fold-overlap ratio (FOR), low-intensity region score (LIRS), and motion artifact score (MAS). Additionally, subjective image quality was evaluated using a 4-point Likert scale to assess visual improvements. The experimental results demonstrated a substantial improvement in both the objective and subjective evaluations of image quality after motion correction was applied. The proportion of segments with moderate artifacts (scored 2 points) decreased markedly by 80.2% (from 26.37% to 5.22%), and the proportion of artifact-free segments (scored 4 points) reached 50.0%, which is of great clinical significance. In conclusion, the deep learning-based motion correction method proposed in this paper can effectively reduce motion artifacts, enhance image clarity, and improve clinical interpretability, thus effectively assisting doctors in accurately identifying and evaluating the details of coronary vessels.
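Two of the objective metrics listed above, PSNR and SSIM, can be computed directly with scikit-image; the sketch below uses synthetic patches as stand-ins for the reference and motion-corrected CCTA images.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

# Hypothetical CCTA patches around a coronary segment: reference (artifact-free)
# and a motion-corrected output, both scaled to [0, 1].
reference = rng.random((128, 128))
corrected = np.clip(reference + rng.normal(0, 0.02, (128, 128)), 0, 1)

psnr = peak_signal_noise_ratio(reference, corrected, data_range=1.0)
ssim = structural_similarity(reference, corrected, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```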

Artificial Intelligence Model for Imaging-Based Extranodal Extension Detection and Outcome Prediction in Human Papillomavirus-Positive Oropharyngeal Cancer.

Dayan GS, Hénique G, Bahig H, Nelson K, Brodeur C, Christopoulos A, Filion E, Nguyen-Tan PF, O'Sullivan B, Ayad T, Bissada E, Tabet P, Guertin L, Desilets A, Kadoury S, Letourneau-Guillon L

PubMed paper · Sep 30, 2025
Although not included in the eighth edition of the American Joint Committee on Cancer Staging System, there is growing evidence suggesting that imaging-based extranodal extension (iENE) is associated with worse outcomes in HPV-associated oropharyngeal carcinoma (OPC). Key challenges with iENE include the lack of standardized criteria, reliance on radiological expertise, and interreader variability. To develop an artificial intelligence (AI)-driven pipeline for lymph node segmentation and iENE classification using pretreatment computed tomography (CT) scans, and to evaluate its association with oncologic outcomes in HPV-positive OPC. This was a single-center cohort study conducted at a tertiary oncology center in Montreal, Canada, of adult patients with HPV-positive cN+ OPC treated with up-front (chemo)radiotherapy from January 2009 to January 2020. Participants were followed up until January 2024. Data analysis was performed from March 2024 to April 2025. Pretreatment planning CT scans along with lymph node gross tumor volume segmentations performed by expert radiation oncologists were extracted. For lymph node segmentation, an nnU-Net model was developed. For iENE classification, radiomic and deep learning feature extraction methods were compared. iENE classification accuracy was assessed against 2 expert neuroradiologist evaluations using the area under the receiver operating characteristic curve (AUC). Subsequently, the association of AI-predicted iENE with oncologic outcomes (overall survival [OS], recurrence-free survival [RFS], distant control [DC], and locoregional control [LRC]) was assessed. Among 397 patients (mean [SD] age, 62.3 [9.1] years; 80 females [20.2%] and 317 males [79.8%]), AI-iENE classification using radiomics achieved an AUC of 0.81. Patients with AI-predicted iENE had worse 3-year OS (83.8% vs 96.8%), RFS (80.7% vs 93.7%), and DC (84.3% vs 97.1%), but similar LRC. AI-iENE had significantly higher concordance indices than radiologist-assessed iENE for OS (0.64 vs 0.55), RFS (0.67 vs 0.60), and DC (0.79 vs 0.68). In multivariable analysis, AI-iENE remained independently associated with OS (adjusted hazard ratio [aHR], 2.82; 95% CI, 1.21-6.57), RFS (aHR, 4.20; 95% CI, 1.93-9.11), and DC (aHR, 12.33; 95% CI, 4.15-36.67), adjusting for age, tumor category, node category, and number of lymph nodes. This single-center cohort study found that an AI-driven pipeline can successfully automate lymph node segmentation and iENE classification from pretreatment CT scans in HPV-associated OPC. Predicted iENE was independently associated with worse oncologic outcomes. External validation is required to assess generalizability and the potential for implementation in institutions without specialized imaging expertise.
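As an illustration of the multivariable analysis described above, the sketch below fits a Cox proportional hazards model with lifelines to obtain an adjusted hazard ratio for an AI-predicted iENE flag; the toy table covers only two adjustment covariates (the study also adjusted for tumor category, node category, and number of lymph nodes), and every value is hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table: follow-up time (months), event indicator,
# AI-predicted iENE flag, and age.
df = pd.DataFrame({
    "time_months": [36, 48, 12, 60, 24, 54, 18, 40, 30, 66, 22, 58],
    "event":       [0,  0,  1,  0,  1,  0,  1,  0,  1,  0,  1,  0],
    "ai_iene":     [0,  0,  1,  0,  1,  0,  0,  1,  1,  0,  1,  0],
    "age":         [58, 61, 66, 55, 70, 72, 56, 62, 67, 57, 69, 60],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()  # exp(coef) for ai_iene is the adjusted hazard ratio
```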

Centiloid values from deep learning-based CT parcellation: a valid alternative to freesurfer.

Yoon YJ, Seo S, Lee S, Lim H, Choo K, Kim D, Han H, So M, Kang H, Kang S, Kim D, Lee YG, Shin D, Jeon TJ, Yun M

PubMed paper · Sep 30, 2025
Amyloid PET/CT is essential for quantifying amyloid-beta (Aβ) deposition in Alzheimer's disease (AD), with the Centiloid (CL) scale standardizing measurements across imaging centers. However, MRI-based CL pipelines face challenges: high cost, contraindications, and patient burden. To address these challenges, we developed a deep learning-based CT parcellation pipeline calibrated to the standard CL scale using CT images from PET/CT scans and evaluated its performance relative to standard pipelines. A total of 306 participants (23 young controls [YCs] and 283 patients) underwent 18F-florbetaben (FBB) PET/CT and MRI. Based on visual assessment, 207 patients were classified as Aβ-positive and 76 as Aβ-negative. PET images were processed using the CT parcellation pipeline and compared to FreeSurfer (FS) and standard pipelines. Agreement was assessed via regression analyses. Effect size, variance, and ROC analyses were used to compare pipelines and determine the optimal CL threshold relative to visual Aβ assessment. The CT parcellation showed high concordance with FS and provided reliable CL quantification (R² = 0.99). Both pipelines demonstrated similar variance in YCs and similar effect sizes between YCs and ADCI. ROC analyses confirmed comparable accuracy and similar CL thresholds, supporting CT parcellation as a viable MRI-free alternative. Our findings indicate that the CT parcellation pipeline achieves a level of accuracy similar to FS in CL quantification, demonstrating its reliability as an MRI-free alternative. In PET/CT, CT and PET are acquired sequentially within the same session on a shared bed and headrest, which helps maintain consistent positioning and adequate spatial alignment, reducing registration errors and supporting more reliable and precise quantification.
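For context, the Centiloid transform itself is a linear anchoring of tracer- and pipeline-specific SUVR values so that young controls average 0 CL and typical AD patients average 100 CL; the sketch below applies that transform with placeholder anchor values, which in practice would come from a calibration of the CT-parcellation pipeline rather than the numbers shown here.

```python
import numpy as np

def centiloid(suvr, suvr_yc_mean: float, suvr_ad_mean: float) -> np.ndarray:
    """Standard Centiloid anchoring: mean young-control SUVR maps to 0 CL,
    mean typical-AD SUVR maps to 100 CL."""
    return 100.0 * (np.asarray(suvr, dtype=float) - suvr_yc_mean) / (suvr_ad_mean - suvr_yc_mean)

# Placeholder anchor values for illustration only (not the study's calibration).
suvr_yc_mean, suvr_ad_mean = 1.02, 1.68

subject_suvr = np.array([1.05, 1.31, 1.72])
print(centiloid(subject_suvr, suvr_yc_mean, suvr_ad_mean))
```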

Inter-slice Complementarity Enhanced Ring Artifact Removal using Central Region Reinforced Neural Network.

Zhang Y, Liu G, Chen Z, Huang Z, Kan S, Ji X, Luo S, Zhu S, Yang J, Chen Y

PubMed paper · Sep 30, 2025
In computed tomography (CT), non-uniform detector responses often lead to ring artifacts in reconstructed images. For conventional energy-integrating detectors (EIDs), such artifacts can be effectively addressed through dead-pixel correction and flat-dark field calibration. However, the response characteristics of photon-counting detectors (PCDs) are more complex, and standard calibration procedures can only partially mitigate ring artifacts. Consequently, developing high-performance ring artifact removal algorithms is essential for PCD-based CT systems. To this end, we propose the Inter-slice Complementarity Enhanced Ring Artifact Removal (ICE-RAR) algorithm. Since artifact removal in the central region is particularly challenging, ICE-RAR utilizes a dual-branch neural network that simultaneously performs global artifact removal and enhances restoration of the central region. Moreover, recognizing that the detector response is also non-uniform in the vertical direction, ICE-RAR extracts and exploits inter-slice complementarity to improve artifact elimination and image restoration. Experiments on simulated data and two real datasets acquired from PCD-based CT systems demonstrate the effectiveness of ICE-RAR in reducing ring artifacts while preserving structural details. More importantly, since the system-specific characteristics are incorporated into the data simulation process, models trained on the simulated data can be directly applied to unseen real data from the target PCD-based CT system, demonstrating ICE-RAR's potential to address the ring artifact removal problem in practical CT systems. The implementation is publicly available at https://github.com/DarkBreakerZero/ICE-RAR.
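To see why per-detector response errors reconstruct as rings (the problem ICE-RAR targets, and the kind of effect a system-specific simulation has to reproduce), the toy sketch below perturbs each detector row of a sinogram with a fixed gain error and reconstructs with filtered back-projection; it is an illustration only, not the ICE-RAR simulation pipeline.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

rng = np.random.default_rng(0)

# A per-detector gain error is constant along the angular direction of the
# sinogram, so it reconstructs as concentric rings around the rotation axis.
phantom = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sino = radon(phantom, theta=angles)                # rows: detector bins, cols: angles
gain = 1.0 + rng.normal(0.0, 0.02, sino.shape[0])  # non-uniform detector response
sino_bad = sino * gain[:, None]                    # same error at every angle

recon = iradon(sino_bad, theta=angles, filter_name="ramp")
# 'recon' now shows ring artifacts; a corrected sinogram would remove them.
```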