
Multimodal Machine Learning-Based Technical Failure Prediction in Patients Undergoing Transcatheter Aortic Valve Replacement.

Tomii D, Shiri I, Baj G, Nakase M, Kazaj PM, Samim D, Bartkowiak J, Praz F, Lanz J, Stortecky S, Reineke D, Windecker S, Pilgrim T, Gräni C

PubMed · Sep 18, 2025
Technical failure is not uncommon and is associated with unfavorable outcomes in patients undergoing transcatheter aortic valve replacement (TAVR). However, predicting procedural failure remains challenging due to the complex interplay of clinical, anatomical, and procedural factors. The objective of this study was to develop and validate a data-driven prediction model for technical failure of TAVR using multimodal information and machine learning algorithms. In a prospective TAVR registry, 184 parameters derived from clinical examination, laboratory studies, electrocardiography, echocardiography, cardiac catheterization, computed tomography, and procedural measurements were used for machine learning modeling of TAVR technical failure prediction. Twenty-four different model combinations were developed using a standardized machine learning pipeline. All model development steps were performed solely on the training set, whereas the holdout test set was kept separate for final evaluation. Technical success/failure was defined according to the Valve Academic Research Consortium (VARC)-3 definition, which differentiates between vascular and cardiac complications. Among 2,937 consecutive patients undergoing TAVR, the rates of cardiac and vascular technical failure were 2.4% and 7.0%, respectively. For both categories of technical failure, the best-performing model demonstrated moderate-to-high discrimination (cardiac: area under the curve: 0.769; vascular: area under the curve: 0.788), with high negative predictive values (0.995 and 0.976, respectively). Interpretability analysis showed that atherosclerotic comorbidities, computed tomography-based aortic root and iliofemoral anatomies, antithrombotic management, and procedural features were consistently identified as key determinants of VARC-3 technical failure across all models. Machine learning-based models that integrate multimodal data can effectively predict VARC-3 technical failure in TAVR, refining patient selection and optimizing procedural strategies.
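
As a reading aid, here is a minimal sketch (scikit-learn, synthetic data) of the training discipline the abstract describes: all tuning confined to the training split, with the holdout set touched only once for the final AUC and negative-predictive-value estimates. The feature matrix, failure rate, and model choice are placeholders, not the study's 24-model pipeline.

```python
# Hypothetical sketch of a train/holdout discipline for rare-event prediction.
# Synthetic data stands in for the registry's 184 multimodal parameters.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(2937, 184))           # stand-in for multimodal parameters
y = (rng.random(2937) < 0.07).astype(int)  # ~7% vascular failure rate, per abstract

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000, class_weight="balanced"))])
search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0]},
                      scoring="roc_auc", cv=5)
search.fit(X_tr, y_tr)                     # all tuning stays on the training set

proba = search.predict_proba(X_te)[:, 1]   # holdout set used once, at the end
tn, fp, fn, tp = confusion_matrix(y_te, proba >= 0.5).ravel()
print(f"AUC={roc_auc_score(y_te, proba):.3f}  NPV={tn / (tn + fn):.3f}")
```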

Transplant-Ready? Evaluating AI Lung Segmentation Models in Candidates with Severe Lung Disease

Jisoo Lee, Michael R. Harowicz, Yuwen Chen, Hanxue Gu, Isaac S. Alderete, Lin Li, Maciej A. Mazurowski, Matthew G. Hartwig

arXiv preprint · Sep 18, 2025
This study evaluates publicly available deep learning-based lung segmentation models in transplant-eligible patients to determine their performance across disease severity levels, pathology categories, and lung sides, and to identify limitations impacting their use in preoperative planning for lung transplantation. This retrospective study included 32 patients who underwent chest CT scans at Duke University Health System between 2017 and 2019 (a total of 3,645 2D axial slices). Patients with standard axial CT scans were selected based on the presence of two or more lung pathologies of varying severity. Lung segmentation was performed using three previously developed deep learning models: Unet-R231, TotalSegmentator, and MedSAM. Performance was assessed using quantitative metrics (volumetric similarity, Dice similarity coefficient, Hausdorff distance) and a qualitative measure (a four-point clinical acceptability scale). Unet-R231 consistently outperformed TotalSegmentator and MedSAM overall and across severity levels and pathology categories (p<0.05). All models showed significant performance declines from mild to moderate-to-severe cases, particularly in volumetric similarity (p<0.05), without significant differences between lung sides or among pathology types. Unet-R231 provided the most accurate automated lung segmentation among the evaluated models, with TotalSegmentator a close second, though performance declined significantly in moderate-to-severe cases, emphasizing the need for specialized model fine-tuning in severe pathology contexts.
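
For readers unfamiliar with the three quantitative metrics named above, the sketch below shows one common way to compute them on binary masks (NumPy/SciPy, toy arrays); definitions of volumetric similarity in particular vary slightly across papers.

```python
# Toy computation of Dice, volumetric similarity, and Hausdorff distance
# on binary 2D masks; real evaluations run on full CT volumes.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volumetric_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1 - |Va - Vb| / (Va + Vb): one common definition
    return 1.0 - abs(int(a.sum()) - int(b.sum())) / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
ref  = np.zeros((64, 64), bool); ref[12:42, 12:42] = True
print(dice(pred, ref), volumetric_similarity(pred, ref), hausdorff(pred, ref))
```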

Optimising Generalisable Deep Learning Models for CT Coronary Segmentation: A Multifactorial Evaluation.

Zhang S, Gharleghi R, Singh S, Shen C, Adikari D, Zhang M, Moses D, Vickers D, Sowmya A, Beier S

PubMed · Sep 18, 2025
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, with incidence rates continuing to rise. Automated coronary artery segmentation in medical images can ultimately improve CAD management by enabling more advanced and efficient diagnostic assessments. Deep learning-based segmentation methods have shown significant promise, offering higher accuracy while reducing reliance on manual inputs. However, achieving consistent performance across diverse datasets remains a persistent challenge due to substantial variability in imaging protocols, equipment and patient-specific factors, such as signal intensities, anatomical differences and disease severity. This study investigates the influence of image quality and resolution, governed by vessel size and common disease characteristics that introduce artefacts, such as calcification, on coronary artery segmentation accuracy in computed tomography coronary angiography (CTCA). Two datasets were utilised for model training and validation: the publicly available ASOCA dataset (40 cases) and the GeoCAD dataset (70 cases), which contains more cases of coronary disease. Coronary artery segmentations were generated using three deep learning frameworks/architectures: default U-Net, Swin-UNETR, and EfficientNet-LinkNet. The impact of various factors on model generalisation was evaluated, focusing on imaging characteristics (contrast-to-noise ratio, artery contrast enhancement, and edge sharpness) and the extent of calcification at both the coronary tree and individual vessel branch levels. The calcification score ranges considered were 0 (no calcification), 1-99 (low), 100-399 (moderate), and > 400 (high). The findings demonstrated that image features, including artery contrast enhancement (r = 0.408, p < 0.001) and edge sharpness (r = 0.239, p = 0.046), were significantly correlated with improved segmentation performance in test cases. Regardless of severity, calcification had a negative impact on segmentation accuracy, with low calcification (1-99) degrading segmentation the most (p < 0.05). This may be because smaller calcified lesions produce less distinct contrast against the bright lumen, making it harder for the model to accurately identify and segment them. Additionally, in males, a larger diameter of the first obtuse marginal branch (OM1) (p = 0.036) was associated with improved segmentation performance for OM1. Similarly, in females, larger diameters of the left main (LM) coronary artery (p = 0.008) and right coronary artery (RCA) (p < 0.001) were associated with better segmentation performance for the LM and RCA, respectively. These findings emphasise the importance of accounting for imaging characteristics and anatomical variability when developing generalisable deep learning models for coronary artery segmentation. Unlike previous studies, which broadly acknowledge the role of image quality in segmentation, our work quantitatively demonstrates the extent to which contrast enhancement, edge sharpness, calcification and vessel diameter affect segmentation performance, offering a data-driven foundation for model adaptation strategies. Potential improvements include optimising pre-segmentation imaging (e.g. ensuring adequate edge sharpness in low-contrast regions) and developing algorithms to address vessel-specific challenges, such as improving segmentation of low-level calcifications and accurately identifying smaller-diameter LM, RCA and OM1 vessels.
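
The correlation analysis reported here reduces, per test case, to a Pearson r between an image-quality measurement and the resulting segmentation score. A hypothetical sketch with synthetic values (SciPy); the variable names and effect size are invented for illustration.

```python
# Per-case correlation between an image-quality feature and Dice score,
# in the style of the r/p values quoted above. Data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
edge_sharpness = rng.uniform(0.2, 1.0, size=70)            # one value per test case
dice_scores = 0.6 + 0.25 * edge_sharpness + rng.normal(0, 0.05, 70)

r, p = pearsonr(edge_sharpness, dice_scores)
print(f"r = {r:.3f}, p = {p:.3g}")  # abstract reports r = 0.239, p = 0.046 for sharpness
```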

Advancing X-ray microcomputed tomography image processing of avian eggshells: An improved registration metric for multiscale 3D images and resolution-enhanced segmentation of eggshell pores using edge-attentive neural networks.

Jia S, Piché N, McKee MD, Reznikov N

PubMed · Sep 17, 2025
Avian eggs exhibit a variety of shapes and sizes, reflecting different reproductive strategies. The eggshell not only protects the egg contents, but also regulates gas and water vapor exchange vital for embryonic development. While many studies have explored eggshell ultrastructure, the distribution of pores across the entire shell is less well understood because of a trade-off between resolution and field-of-view in imaging. To overcome this, a neural network was developed for resolution enhancement of low-resolution 3D tomographic data, while performing voxel-wise labeling. Trained on X-ray microcomputed tomography images of ostrich, guillemot and crow eggshells from a natural history museum collection, the model used stepwise magnification to create low- and high-resolution training sets. Registration performance was validated with a novel metric based on local grayscale gradients. An edge-attentive loss function prevented bias towards the dominant background class (95% of all voxels), ensuring accurate labeling of eggshell (5%) and pore (0.1%) voxels. The results indicate that besides edge-attention and class balancing, 3D context preservation and 3D convolution are of paramount importance for extrapolating subvoxel features.
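
The edge-attentive, class-balanced loss is the methodological crux here. The paper's exact formulation is not given in the abstract, so the following PyTorch sketch only illustrates the general idea: up-weight the rare shell and pore classes, then further up-weight voxels adjacent to label boundaries. All weights and the edge detector are invented for the example.

```python
# Illustrative edge-attentive, class-weighted 3D segmentation loss.
# Not the authors' implementation; weights are placeholders.
import torch
import torch.nn.functional as F

def edge_attentive_loss(logits, target, class_weights, edge_gain=4.0):
    # logits: (B, C, D, H, W); target: (B, D, H, W) integer labels
    per_voxel = F.cross_entropy(logits, target, weight=class_weights,
                                reduction="none")
    # crude 3D edge map: a voxel counts as an edge if its label differs
    # from the local average of its neighbourhood
    t = target.float().unsqueeze(1)
    grad = (t - F.avg_pool3d(t, 3, stride=1, padding=1)).abs().squeeze(1)
    weights = 1.0 + edge_gain * (grad > 1e-6).float()
    return (weights * per_voxel).mean()

logits = torch.randn(1, 3, 16, 16, 16, requires_grad=True)
target = torch.randint(0, 3, (1, 16, 16, 16))
w = torch.tensor([1.0, 20.0, 1000.0])  # background / shell / pore (illustrative)
loss = edge_attentive_loss(logits, target, w)
loss.backward()
print(loss.item())
```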

Multimodal deep learning integration for predicting renal function outcomes in living donor kidney transplantation: a retrospective cohort study.

Kim JM, Jung H, Kwon HE, Ko Y, Jung JH, Shin S, Kim YH, Kim YH, Jun TJ, Kwon H

PubMed · Sep 17, 2025
Accurately predicting post-transplant renal function is essential for optimizing donor-recipient matching and improving long-term outcomes in kidney transplantation (KT). Traditional models using only structured clinical data often fail to account for complex biological and anatomical factors. This study aimed to develop and validate a multimodal deep learning model that integrates computed tomography (CT) imaging, radiology report text, and structured clinical variables to predict 1-year estimated glomerular filtration rate (eGFR) in living donor kidney transplantation (LDKT) recipients. A retrospective cohort of 1,937 LDKT recipients was selected from 3,772 KT cases. Exclusions included deceased donor KT, immunologic high-risk recipients (n = 304), missing CT imaging, early graft complications, and anatomical abnormalities. eGFR at 1 year post-transplant was classified into four categories: > 90, 75-90, 60-75, and 45-60 mL/min/1.73 m². Radiology reports were embedded using BioBERT, while CT videos were encoded using a CLIP-based visual extractor. These were fused with structured clinical features and input into ensemble classifiers, including XGBoost. Model performance was evaluated using cross-validation and SHapley Additive exPlanations (SHAP) analysis. The full multimodal model achieved a macro F1 score of 0.675, a micro F1 score of 0.704, and a weighted F1 score of 0.698, substantially outperforming the clinical-only model (macro F1 = 0.292). CT imaging contributed more than text data (clinical + CT macro F1 = 0.651; clinical + text = 0.486). The model showed the highest accuracy in the > 90 (F1 = 0.7773) and 60-75 (F1 = 0.7303) categories. SHAP analysis identified donor age, BMI, and donor sex as key predictors. Dimensionality reduction confirmed internal feature validity. Multimodal deep learning integrating clinical, imaging, and textual data enhances prediction of post-transplant renal function. This framework offers a robust and interpretable approach to individualized risk stratification in LDKT, supporting precision medicine in transplantation.
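
The fusion step described above is, at its core, feature concatenation followed by gradient boosting. The sketch below mimics that shape with random placeholder embeddings and scikit-learn's HistGradientBoostingClassifier standing in for the paper's XGBoost ensemble; the dimensions (768 for a BioBERT-style text vector, 512 for a CLIP-style image vector) are conventional assumptions, not confirmed details of the study.

```python
# Late-fusion sketch: concatenate precomputed text, image, and clinical
# features, then train a boosted classifier on the four eGFR categories.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1937
text_emb = rng.normal(size=(n, 768))   # BioBERT-style report embedding
img_emb = rng.normal(size=(n, 512))    # CLIP-style CT embedding
clinical = rng.normal(size=(n, 20))    # structured variables
X = np.hstack([clinical, text_emb, img_emb])
y = rng.integers(0, 4, size=n)         # four eGFR categories (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = HistGradientBoostingClassifier().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("macro F1:", f1_score(y_te, pred, average="macro"))
print("micro F1:", f1_score(y_te, pred, average="micro"))
```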

Augmenting conventional criteria: a CT-based deep learning radiomics nomogram for early recurrence risk stratification in hepatocellular carcinoma after liver transplantation.

Wu Z, Liu D, Ouyang S, Hu J, Ding J, Guo Q, Gao J, Luo J, Ren K

PubMed · Sep 17, 2025
We developed a deep learning radiomics nomogram (DLRN) using CT scans to improve clinical decision-making and risk stratification for early recurrence of hepatocellular carcinoma (HCC) after liver transplantation, which typically carries a poor prognosis. In this two-center study, 245 HCC patients who had contrast-enhanced CT before liver transplantation were split into a training set (n = 184) and a validation set (n = 61). We extracted radiomics and deep learning features from tumor and peritumor areas on preoperative CT images. The DLRN was created by combining these features with significant clinical variables using multivariate logistic regression. Its performance was validated against four traditional risk criteria to assess its additional value. The DLRN model showed strong predictive accuracy for early HCC recurrence post-transplant, with AUCs of 0.884 and 0.829 in the training and validation groups. High DLRN scores were associated with a 16.370-fold increase in relapse risk (95% CI: 7.100-31.690; p < 0.001). Combining the DLRN with the Metro-Ticket 2.0 criteria yielded the best prediction (AUC: training/validation: 0.936/0.863). The CT-based DLRN offers a non-invasive method for predicting early recurrence following liver transplantation in patients with HCC, and it provides substantial additional predictive value when combined with traditional prognostic scoring systems. AI-driven predictive models utilizing preoperative CT imaging enable accurate identification of early HCC recurrence risk following liver transplantation, facilitating risk-stratified surveillance protocols and optimized post-transplant management. A CT-based DLRN for predicting early HCC recurrence post-transplant was developed. The DLRN predicted recurrence with high accuracy (AUC: 0.829) and a 16.370-fold increased recurrence risk. Combining the DLRN with the Metro-Ticket 2.0 criteria achieved optimal prediction (AUC: 0.863).
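
A nomogram of this kind is, computationally, a multivariate logistic regression over a learned signature plus clinical covariates, scored by AUC on the validation split. Below is a hedged sketch with fabricated data matching the cohort sizes (184/61); the clinical variables and coefficients are illustrative guesses, not the study's selected predictors.

```python
# Sketch of a nomogram-style logistic model: fused radiomics/deep-learning
# signature + clinical covariates -> recurrence probability, scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make(n):
    sig = rng.normal(size=n)             # fused radiomics + DL signature
    clin = rng.normal(size=(n, 3))       # placeholder clinical variables
    X = np.column_stack([sig, clin])
    logit = 1.5 * sig + clin @ np.array([0.5, 0.3, -0.4])
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return X, y

X_tr, y_tr = make(184)                   # training cohort size from the abstract
X_va, y_va = make(61)                    # validation cohort size
model = LogisticRegression().fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))
```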

Machine learning in sex estimation using CBCT morphometric measurements of canines.

Silva-Sousa AC, Dos Santos Cardoso G, Branco AC, Küchler EC, Baratto-Filho F, Candemil AP, Sousa-Neto MD, de Araujo CM

PubMed · Sep 17, 2025
The aim of this study was to assess measurements of the maxillary canines using cone beam computed tomography (CBCT) and develop a machine learning model for sex estimation. CBCT scans from 610 patients were screened. The maxillary canines were examined to measure total tooth length, average enamel thickness, and mesiodistal width. Various supervised machine learning algorithms were employed to construct predictive models, including Decision Tree, Gradient Boosting Classifier, K-Nearest Neighbors (KNN), Logistic Regression, Multi-Layer Perceptron (MLP), Random Forest Classifier, Support Vector Machine (SVM), XGBoost, LightGBM, and CatBoost. Each model was validated using 10-fold cross-validation. Metrics such as area under the curve (AUC), accuracy, recall, precision, and F1 score were computed, with ROC curves generated for visualization. Total tooth length proved to be the variable with the highest predictive power. The algorithms that demonstrated superior performance in terms of AUC were LightGBM and Logistic Regression, achieving AUC values of 0.77 [95% CI = 0.65-0.89] and 0.75 [95% CI = 0.62-0.86] on the test data, and 0.74 [95% CI = 0.70-0.80] and 0.75 [95% CI = 0.70-0.79] in cross-validation, respectively. Both models also showed high precision values. The use of maxillary canine measurements, combined with supervised machine learning techniques, has proven viable for sex estimation. The machine learning approach combined with CBCT is a low-cost option, as it relies solely on a single anatomical structure.
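
The validation scheme is standard 10-fold cross-validation repeated across classifiers. The sketch below reproduces that loop for two of the listed algorithms on simulated canine measurements (scikit-learn); the effect sizes are invented and the remaining eight models are omitted for brevity.

```python
# 10-fold cross-validated AUC comparison across classifiers, on simulated
# maxillary canine measurements (length, enamel thickness, mesiodistal width).
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 610
sex = rng.integers(0, 2, n)
# total tooth length carries the most signal, per the abstract
length = 26 + 1.2 * sex + rng.normal(0, 1.5, n)
enamel = 0.9 + 0.02 * sex + rng.normal(0, 0.1, n)
width = 7.5 + 0.15 * sex + rng.normal(0, 0.5, n)
X = np.column_stack([length, enamel, width])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in [("LogReg", LogisticRegression(max_iter=1000)),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    aucs = cross_val_score(clf, X, sex, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {aucs.mean():.2f} ± {aucs.std():.2f}")
```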

SAMIR, an efficient registration framework via robust feature learning from SAM

Yue He, Min Liu, Qinghao Liu, Jiazheng Wang, Yaonan Wang, Hang Zhang, Xiang Chen

arXiv preprint · Sep 17, 2025
Image registration is a fundamental task in medical image analysis. Deformations are often closely related to the morphological characteristics of tissues, making accurate feature extraction crucial. Recent weakly supervised methods improve registration by incorporating anatomical priors such as segmentation masks or landmarks, either as inputs or in the loss function. However, such weak labels are often not readily available, limiting their practical use. Motivated by the strong representation learning ability of visual foundation models, this paper introduces SAMIR, an efficient medical image registration framework that utilizes the Segment Anything Model (SAM) to enhance feature extraction. SAM is pretrained on large-scale natural image datasets and can learn robust, general-purpose visual representations. Rather than using raw input images, we design a task-specific adaptation pipeline using SAM's image encoder to extract structure-aware feature embeddings, enabling more accurate modeling of anatomical consistency and deformation patterns. We further design a lightweight 3D head to refine features within the embedding space, adapting to local deformations in medical images. Additionally, we introduce a Hierarchical Feature Consistency Loss to guide coarse-to-fine feature matching and improve anatomical alignment. Extensive experiments demonstrate that SAMIR significantly outperforms state-of-the-art methods on benchmark datasets for both intra-subject cardiac image registration and inter-subject abdomen CT image registration, achieving performance improvements of 2.68% on ACDC and 6.44% on the abdomen dataset. The source code will be publicly available on GitHub following the acceptance of this paper.
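
The Hierarchical Feature Consistency Loss is described only at a high level, so the PyTorch sketch below is a generic reconstruction of the idea: penalize cosine dissimilarity between fixed-image features and warped moving-image features at several pyramid scales. The feature pyramids and the warp are stand-ins; this is not SAMIR's released code.

```python
# Generic multi-scale feature consistency loss, in the spirit of a
# coarse-to-fine registration objective. Features here are random tensors;
# in the paper they would come from SAM's image encoder plus a 3D head.
import torch
import torch.nn.functional as F

def hierarchical_consistency(feats_fixed, feats_moving_warped):
    loss = 0.0
    for f_fix, f_mov in zip(feats_fixed, feats_moving_warped):
        # 1 - cosine similarity along the channel dimension, averaged spatially
        loss = loss + (1.0 - F.cosine_similarity(f_fix, f_mov, dim=1)).mean()
    return loss / len(feats_fixed)

# multi-scale feature pyramids, coarse to fine: (B, C, H, W)
fixed = [torch.randn(1, 64, s, s) for s in (16, 32, 64)]
moving = [f + 0.1 * torch.randn_like(f) for f in fixed]  # identity "warp" + noise
print(hierarchical_consistency(fixed, moving).item())
```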

Patient-Specific Cardio-Respiratory Model for Optimization of Cardiac Radioablation.

Rigal L, Bellec J, Lemaire L, Duverge L, Benali K, Lederlin M, Martins R, De Crevoisier R, Simon A

PubMed · Sep 17, 2025
Stereotactic Arrhythmia Radioablation (STAR) is a promising treatment for refractory ventricular tachycardia. However, its precision may be hampered by cardiac and respiratory motion. Multiple techniques exist to mitigate the effects of these displacements. The purpose of this work was to generate, from cardiac and respiratory dynamic CT scans, a patient-specific dynamic model of the structures of interest that enables treatment simulation for the evaluation of motion management methods. Deep learning-based segmentation was used to extract the geometry of the cardiac structures, whose deformations and displacements were assessed using deformable and rigid image registration. Combining the model with dose maps enabled evaluation of the dose locally accumulated during treatment. The reproducibility of each step was evaluated against expert references, and treatment simulations were evaluated using data from a physical phantom. Use of the model was illustrated on data from nine patients, demonstrating that the impact of cardiorespiratory dynamics is potentially important and highly patient-specific, and allowing for future evaluations of motion management methods.
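
"Dose locally accumulated" means pulling each motion phase's dose map back through that phase's deformation field and summing in the reference anatomy. A toy 2D sketch of that accumulation step follows (SciPy); the motion fields and dose distribution are synthetic, and the actual model is 3D and patient-specific.

```python
# Toy dose accumulation over motion phases: sample each phase's dose map
# at the displaced reference coordinates and average in the reference frame.
import numpy as np
from scipy.ndimage import map_coordinates

shape = (64, 64)
yy, xx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
accumulated = np.zeros(shape)

n_phases = 8
for k in range(n_phases):
    dose_k = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 100)        # per-phase dose
    dy = 2.0 * np.sin(2 * np.pi * k / n_phases) * np.ones(shape)     # toy motion field
    dx = np.zeros(shape)
    accumulated += map_coordinates(dose_k, [yy + dy, xx + dx], order=1) / n_phases

print("max accumulated dose:", accumulated.max())
```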

Head-to-Head Comparison of Two AI Computer-Aided Triage Solutions for Detecting Intracranial Hemorrhage on Non-Contrast Head CT.

Garcia GM, Young P, Dawood L, Elshikh M

PubMed · Sep 16, 2025
This study aims to provide a comprehensive comparison of the performance and reproducibility of two commercially available artificial intelligence (AI) computer-aided triage and notification solutions, Vendor A (Aidoc) and Vendor B (Viz.ai), for the detection of intracranial hemorrhage (ICH) on non-contrast enhanced head CT (NCHCT) scans performed within a single academic institution. The retrospective analysis was conducted on a large patient cohort from multiple healthcare settings within a single academic institution, using standardized scanning protocols. Sensitivity, specificity, false positive, and false negative rates were evaluated for both vendors. Outputs assessed included AI-generated case-level classification. Among 4,081 scans, 595 were positive for ICH. Vendor A demonstrated a sensitivity of 94.4%, specificity of 97.4%, PPV of 85.9%, and NPV of 99.1%. Vendor B showed a sensitivity of 59.5%, specificity of 99.0%, PPV of 90.0%, and NPV of 92.6%. Vendor A had 20 false negatives, primarily involving subdural and intraparenchymal hemorrhages, and 97 false positives, which appeared to be related to motion artifact. Vendor B had 145 false negatives, largely comprising subdural and subarachnoid hemorrhages, and 36 false positives, which appeared to be related to motion artifact and calcified or dense lesions. Concordantly, 18 cases were false negatives and 11 cases were false positives for both AI solutions. The findings of this study provide valuable information for clinicians and healthcare institutions considering the implementation of AI software for computer-aided triage and notification in the detection of intracranial hemorrhage. The discussion encompasses the implications of the results, the importance of evaluating AI findings in context (especially in the absence of explainability tools), potential areas for improvement, and the relevance of standardized scanning protocols in ensuring the reliability of AI-based diagnostic tools in clinical practice. ICH = Intracranial Hemorrhage; NCHCT = Non-contrast Enhanced Head CT; AI = Artificial Intelligence; SDH = Subdural Hemorrhage; SAH = Subarachnoid Hemorrhage; IPH = Intraparenchymal Hemorrhage; IVH = Intraventricular Hemorrhage; PPV = Positive Predictive Value; NPV = Negative Predictive Value; CADt = Computer-Aided Triage; PACS = Picture Archiving and Communication System; FN = False Negative; FP = False Positive; CI = Confidence Interval.
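
All four headline metrics follow directly from the 2×2 confusion counts, which makes the sensitivity/specificity trade-off between the two vendors easy to reason about. A small helper makes the relationships explicit; the counts below are illustrative, not the study's exact per-vendor tallies.

```python
# Triage metrics from confusion counts; illustrative numbers only.
def triage_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # fraction of ICH cases flagged
        "specificity": tn / (tn + fp),   # fraction of negatives passed through
        "ppv": tp / (tp + fp),           # trust in a positive flag
        "npv": tn / (tn + fn),           # trust in a negative result
    }

print(triage_metrics(tp=560, fp=95, tn=3390, fn=35))
```

Note how the low-prevalence cohort (595 positives among 4,081 scans) keeps NPV high for both vendors even at very different sensitivities.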