Automated Evaluation of Female Pelvic Organ Descent on Transperineal Ultrasound: Model Development and Validation.

Wu S, Wu J, Xu Y, Tan J, Wang R, Zhang X

PubMed · Jun 28, 2025
Transperineal ultrasound (TPUS) is a widely used tool for evaluating female pelvic organ prolapse (POP), but its accurate interpretation relies on experience, causing diagnostic variability. This study aims to develop and validate a multi-task deep learning model to automate POP assessment using TPUS images. TPUS images from 1340 female patients (January-June 2023) were evaluated by two experienced physicians. The presence and severity of cystocele, uterine prolapse, rectocele, and excessive mobility of the perineal body (EMoPB) were documented. After preprocessing, 1072 images were used for training and 268 for validation. The model used ResNet34 as the feature extractor and four parallel fully connected layers to predict the four conditions. Model performance was assessed using confusion matrices and the area under the curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the model's focus areas. The model demonstrated strong diagnostic performance, with accuracies and AUC values as follows: cystocele, 0.869 (95% CI, 0.824-0.905) and 0.947 (95% CI, 0.930-0.962); uterine prolapse, 0.799 (95% CI, 0.746-0.842) and 0.931 (95% CI, 0.911-0.948); rectocele, 0.978 (95% CI, 0.952-0.990) and 0.892 (95% CI, 0.849-0.927); and EMoPB, 0.869 (95% CI, 0.824-0.905) and 0.942 (95% CI, 0.907-0.967). Grad-CAM heatmaps revealed that the model's focus areas were consistent with those observed by human experts. This study presents a multi-task deep learning model for automated POP assessment using TPUS images, showing promising efficacy and potential to benefit a broader population of women.
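The architecture described — a shared ResNet34 feature extractor feeding four parallel fully connected heads, one per condition — can be sketched in a few lines of PyTorch. The per-head class counts below are illustrative assumptions; the abstract does not state how presence/severity is encoded.

```python
# Minimal sketch of a multi-task classifier: shared ResNet34 backbone with
# four parallel heads. Class counts per task are placeholder assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskTPUSNet(nn.Module):
    def __init__(self, n_classes_per_task=(4, 4, 4, 2)):
        super().__init__()
        backbone = models.resnet34(weights=None)
        n_features = backbone.fc.in_features      # 512 for ResNet34
        backbone.fc = nn.Identity()               # strip the original classifier
        self.backbone = backbone
        # One independent head per condition: cystocele, uterine prolapse,
        # rectocele, EMoPB.
        self.heads = nn.ModuleList(
            [nn.Linear(n_features, n) for n in n_classes_per_task]
        )

    def forward(self, x):
        features = self.backbone(x)               # shared representation
        return [head(features) for head in self.heads]

model = MultiTaskTPUSNet()
logits = model(torch.randn(2, 3, 224, 224))       # one logit tensor per task
```

Training such a model would typically sum one cross-entropy loss per head over the shared backbone, though the abstract does not specify the loss.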

Developing ultrasound-based machine learning models for accurate differentiation between sclerosing adenosis and invasive ductal carcinoma.

Liu G, Yang N, Qu Y, Chen G, Wen G, Li G, Deng L, Mai Y

PubMed · Jun 28, 2025
This study aimed to develop a machine learning model using breast ultrasound images to improve the non-invasive differential diagnosis between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC). A total of 2046 ultrasound images from 772 patients with SA or IDC were collected, regions of interest (ROIs) were delineated, and features were extracted. The dataset was split into training and test cohorts, and feature selection was performed using correlation coefficients and Recursive Feature Elimination. Ten classifiers with grid search and 5-fold cross-validation were applied during model training. The receiver operating characteristic (ROC) curve and Youden index were used for model evaluation. SHapley Additive exPlanations (SHAP) was employed for model interpretation. Another 224 ROIs from 84 patients at other hospitals were used for external validation. For the ROI-level model, XGBoost with 18 features achieved an area under the curve (AUC) of 0.9758 (0.9654-0.9847) in the test cohort and 0.9906 (0.9805-0.9973) in the validation cohort. For the patient-level model, logistic regression with 9 features achieved an AUC of 0.9653 (0.9402-0.9859) in the test cohort and 0.9846 (0.9615-0.9978) in the validation cohort. The feature "Original shape Major Axis Length" was identified as the most important, with higher values positively correlated with a higher likelihood of the sample being IDC. Feature contributions for specific ROIs were visualized as well. We developed explainable, ultrasound-based machine learning models with high performance for differentiating SA and IDC, offering a potential non-invasive tool for improved differential diagnosis. Question: Accurately distinguishing between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC) in a non-invasive manner has been a diagnostic challenge. Findings: Explainable, ultrasound-based machine learning models with high performance were developed for differentiating SA and IDC, and validated well in an external validation cohort. Critical relevance: These models provide non-invasive tools to reduce misdiagnoses of SA and improve early detection of IDC.
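A minimal sketch of the described pipeline — correlation-based filtering, Recursive Feature Elimination down to 18 features, then a grid-searched classifier with 5-fold cross-validation — assuming scikit-learn and synthetic data. Logistic regression stands in for the ten classifiers (including the winning XGBoost) the study actually compared; the correlation threshold and grid are illustrative.

```python
# Sketch of the selection-plus-training chain on toy data.
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def drop_correlated(X: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop one feature from each highly correlated pair."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    return X.drop(columns=[c for c in upper.columns if (upper[c] > threshold).any()])

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((200, 50)), columns=[f"f{i}" for i in range(50)])
y = rng.integers(0, 2, 200)                        # 0 = SA, 1 = IDC (toy labels)

X_filtered = drop_correlated(X)
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=18)
X_selected = rfe.fit_transform(X_filtered, y)      # 18 features, as in the ROI model

grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1, 10]},
                    scoring="roc_auc", cv=5).fit(X_selected, y)
print(f"best cross-validated AUC: {grid.best_score_:.3f}")
```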

Identifying visible tissue in intraoperative ultrasound: a method and application.

Weld A, Dixon L, Dyck M, Anichini G, Ranne A, Camp S, Giannarou S

PubMed · Jun 28, 2025
Intraoperative ultrasound scanning is a demanding visuotactile task. It requires operators to simultaneously localise the ultrasound perspective and manually perform slight adjustments to the pose of the probe, making sure not to apply excessive force or break contact with the tissue, while also characterising the visible tissue. To analyse the probe-tissue contact, an iterative filtering and topological method is proposed to identify the underlying visible tissue, which can be used to detect acoustic shadow and construct confidence maps of perceptual salience. For evaluation, datasets containing both in vivo and medical phantom data were created. A suite of evaluations is performed, including an evaluation of acoustic shadow classification. Compared with an ablated variant, a deep learning method, and a statistical method, the proposed approach achieves superior classification on in vivo data, achieving an Fβ score of 0.864, versus 0.838, 0.808, and 0.808, respectively. A novel framework for evaluating the confidence estimation of probe-tissue contact is created. The phantom data were captured specifically for this, and comparison is made against two established methods. The proposed method produced the superior response, achieving an average normalised root-mean-square error of 0.168, versus 1.836 and 4.542. Evaluation is also extended to determine the algorithm's robustness to parameter perturbation, speckle noise, and data distribution shift, and its capability for guiding a robotic scan. The results of this comprehensive set of experiments justify the potential clinical value of the proposed algorithm, which can be used to support clinical training and robotic ultrasound automation.
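For illustration of the problem only, a generic column-wise heuristic for acoustic shadow — flagging everything below a sustained echo dropout along each scan line — is sketched below. This is not the authors' iterative filtering and topological method; the intensity threshold and run length are arbitrary assumptions.

```python
# Toy acoustic-shadow detector: in each image column (A-line), once echo
# intensity stays below a threshold for a sustained run, everything deeper
# is marked as shadow. Illustrative heuristic, not the paper's algorithm.
import numpy as np

def shadow_mask(image: np.ndarray, threshold: float = 0.1,
                min_run: int = 20) -> np.ndarray:
    img = image.astype(float) / max(float(image.max()), 1e-6)
    mask = np.zeros(img.shape, dtype=bool)
    for col in range(img.shape[1]):
        run = 0
        for row in range(img.shape[0]):            # scan from probe downwards
            run = run + 1 if img[row, col] < threshold else 0
            if run >= min_run:                     # sustained dropout found
                mask[row - min_run + 1:, col] = True
                break
    return mask

demo = np.random.rand(128, 128)
demo[60:, 40:60] *= 0.02                           # synthetic shadowed region
print(f"shadow fraction: {shadow_mask(demo).mean():.2%}")
```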

Non-contrast computed tomography radiomics model to predict benign and malignant thyroid nodules with lobe segmentation: A dual-center study.

Wang H, Wang X, Du YS, Wang Y, Bai ZJ, Wu D, Tang WL, Zeng HL, Tao J, He J

PubMed · Jun 28, 2025
Accurate preoperative differentiation of benign and malignant thyroid nodules is critical for optimal patient management; however, conventional imaging modalities present inherent diagnostic limitations. This study aimed to develop a non-contrast computed tomography-based machine learning model integrating radiomics and clinical features for preoperative thyroid nodule classification. This multicenter retrospective study enrolled 272 patients with thyroid nodules (376 thyroid lobes) from center A (May 2021-April 2024), using histopathological findings as the reference standard. The dataset was stratified into a training cohort (264 lobes) and an internal validation cohort (112 lobes). Additional prospective temporal (97 lobes, May-August 2024, center A) and external multicenter (81 lobes, center B) test cohorts were incorporated to enhance generalizability. Thyroid lobes were segmented along the isthmus midline, with segmentation reliability confirmed by an intraclass correlation coefficient of at least 0.80. Radiomics features were extracted, and feature selection was performed using Pearson correlation analysis followed by least absolute shrinkage and selection operator (LASSO) regression with 10-fold cross-validation. Seven machine learning algorithms were systematically evaluated, with model performance quantified through the area under the receiver operating characteristic curve (AUC), Brier score, decision curve analysis, and the DeLong test for comparison with radiologists' interpretations. Model interpretability was elucidated using SHapley Additive exPlanations (SHAP). The extreme gradient boosting model demonstrated robust diagnostic performance across all datasets, achieving AUCs of 0.899 [95% confidence interval (CI): 0.845-0.932] in the training cohort, 0.803 (95% CI: 0.715-0.890) in internal validation, 0.855 (95% CI: 0.775-0.935) in temporal testing, and 0.802 (95% CI: 0.664-0.939) in external testing. These results were significantly superior to radiologists' assessments (AUCs: 0.596, 0.529, 0.558, and 0.538, respectively; P < 0.001 by DeLong test). SHAP analysis identified radiomic score, age, tumor size stratification, calcification status, and cystic components as key predictive features. The model exhibited excellent calibration (Brier scores: 0.125-0.144) and provided significant clinical net benefit at decision thresholds exceeding 20%, as evidenced by decision curve analysis. The non-contrast computed tomography-based radiomics-clinical fusion model enables robust preoperative thyroid nodule classification, with SHAP-driven interpretability enhancing its clinical applicability for personalized decision-making.
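A hedged sketch of the selection-plus-model chain described — LASSO with 10-fold cross-validation to pick radiomic features, then an extreme gradient boosting classifier evaluated by AUC — assuming scikit-learn and the xgboost package, with fabricated data and placeholder hyperparameters.

```python
# LASSO feature selection followed by gradient boosting on toy data.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((376, 100))                         # 376 lobes x 100 radiomic features (toy)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 376) > 0.85).astype(int)

lasso = LassoCV(cv=10, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)             # features with nonzero coefficients
if selected.size == 0:                             # fallback if the toy data yields none
    selected = np.arange(X.shape[1])

X_train, X_test, y_train, y_test = train_test_split(
    X[:, selected], y, test_size=0.3, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=3).fit(X_train, y_train)
print(f"test AUC: {roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]):.3f}")
```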

Pulmonary hypertension: diagnostic aspects-what is the role of imaging?

Ali HJ, Guha A

PubMed · Jun 27, 2025
The role of imaging in the diagnosis of pulmonary hypertension is multifaceted, spanning estimation of pulmonary arterial pressures, assessment of pulmonary artery-right ventricular interaction, and identification of the underlying cause. The purpose of this review is to provide a comprehensive overview of multimodality imaging in the evaluation of pulmonary hypertension, as well as the novel applications of imaging techniques that have improved our detection and understanding of pulmonary hypertension. Diverse imaging modalities are available for comprehensive assessment of pulmonary hypertension and are expanding with new tracers (e.g., hyperpolarized xenon-129 gas) and imaging techniques (e.g., C-arm cone-beam computed tomography). Artificial intelligence applications may improve the efficiency and accuracy of screening for pulmonary hypertension, as well as further characterize pulmonary vasculopathies using computed tomography of the chest. In the face of increasing imaging options, a "value-based imaging" approach should be adopted to reduce unnecessary burden on the patient and the healthcare system without compromising the accuracy and completeness of diagnostic assessment. Future studies are needed to optimize the use of multimodality imaging and artificial intelligence in the comprehensive evaluation of patients with pulmonary hypertension.

Quantifying Sagittal Craniosynostosis Severity: A Machine Learning Approach With CranioRate.

Tao W, Somorin TJ, Kueper J, Dixon A, Kass N, Khan N, Iyer K, Wagoner J, Rogers A, Whitaker R, Elhabian S, Goldstein JA

PubMed · Jun 27, 2025
Objective: To develop and validate machine learning (ML) models for objective and comprehensive quantification of sagittal craniosynostosis (SCS) severity, enhancing clinical assessment, management, and research. Design: A cross-sectional study that combined the analysis of computed tomography (CT) scans and expert ratings. Setting: The study was conducted at a children's hospital and a major computer imaging institution; a survey collected expert ratings from participating surgeons. Participants: The study included 195 patients with nonsyndromic SCS, 221 patients with nonsyndromic metopic craniosynostosis (CS), and 178 age-matched controls. Fifty-four craniofacial surgeons participated in rating 20 patients' head CT scans. Interventions: Computed tomography scans for cranial morphology assessment and a radiographic diagnosis of nonsyndromic SCS. Main outcomes: Accuracy of the proposed Sagittal Severity Score (SSS) in predicting expert ratings compared with the cephalic index (CI). Secondary outcomes compared Likert ratings with SCS status, the predictive power of skull-based versus skin-based landmarks, and assessments of an unsupervised ML model, the Cranial Morphology Deviation (CMD), as an alternative that requires no ratings. Results: The SSS achieved significantly higher accuracy in predicting expert responses than the CI (P < .05). Likert ratings outperformed SCS status in supervising ML models to quantify within-group variations. Skin-based landmarks demonstrated predictive power equivalent to that of skull-based landmarks (P < .05, threshold 0.02). The CMD demonstrated a strong correlation with the SSS (Pearson coefficient: 0.92; Spearman coefficient: 0.90; P < .01). Conclusions: The SSS and CMD can provide accurate, consistent, and comprehensive quantification of SCS severity. Implementing these data-driven ML models can significantly advance CS care through standardized assessments, enhanced precision, and informed surgical planning.
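The reported agreement check between the supervised SSS and the unsupervised CMD can be reproduced in form (not in substance) with SciPy's correlation functions; the score arrays below are synthetic stand-ins, not study data.

```python
# Pearson and Spearman agreement between two severity scores (synthetic).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sss = rng.normal(size=195)                         # stand-in supervised scores
cmd = sss + rng.normal(scale=0.3, size=195)        # stand-in unsupervised scores

pearson_r, pearson_p = stats.pearsonr(sss, cmd)
spearman_r, spearman_p = stats.spearmanr(sss, cmd)
print(f"Pearson r={pearson_r:.2f} (p={pearson_p:.1e}), "
      f"Spearman rho={spearman_r:.2f} (p={spearman_p:.1e})")
```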

Artificial intelligence in coronary CT angiography: transforming the diagnosis and risk stratification of atherosclerosis.

Irannejad K, Mafi M, Krishnan S, Budoff MJ

PubMed · Jun 27, 2025
Coronary CT angiography (CCTA) is essential for assessing atherosclerosis and coronary artery disease, aiding in early detection, risk prediction, and clinical assessment. However, traditional CCTA interpretation is limited by observer variability, time inefficiency, and inconsistent plaque characterization. AI has emerged as a transformative tool, enhancing diagnostic accuracy, workflow efficiency, and risk prediction for major adverse cardiovascular events (MACE). Studies show that AI improves stenosis detection by 27% and inter-reader agreement by 30%, and reduces reporting times by 40%, thereby addressing key limitations of manual interpretation. Integrating AI with multimodal imaging (e.g., FFR-CT, PET-CT) further enhances ischemia detection by 28% and lesion classification by 35%, providing a more comprehensive cardiovascular evaluation. This review synthesizes recent advancements in CCTA-AI automation, risk stratification, and precision diagnostics while critically analyzing challenges in data quality, generalizability, ethics, and regulation. Future directions, including real-time AI-assisted triage, cloud-based diagnostics, and AI-driven personalized medicine, are explored for their potential to revolutionize clinical workflows and optimize patient outcomes.

Early prediction of adverse outcomes in liver cirrhosis using a CT-based multimodal deep learning model.

Xie N, Liang Y, Luo Z, Hu J, Ge R, Wan X, Wang C, Zou G, Guo F, Jiang Y

PubMed · Jun 27, 2025
Early-stage cirrhosis frequently presents without symptoms, making timely identification of high-risk patients challenging. We aimed to develop a deep learning-based triple-modal fusion liver cirrhosis network (TMF-LCNet) for the prediction of adverse outcomes, offering a promising tool to enhance early risk assessment and improve clinical management strategies. This retrospective study included 243 patients with early-stage cirrhosis across two centers. Adverse outcomes were defined as the development of severe complications such as ascites, hepatic encephalopathy, and variceal bleeding. TMF-LCNet was developed by integrating three types of data: non-contrast abdominal CT images, radiomic features extracted from the liver and spleen, and clinical text detailing laboratory parameters and adipose tissue composition measurements. TMF-LCNet was compared with conventional methods on the same dataset, and single-modality versions of TMF-LCNet were tested to determine the impact of each data type. Model effectiveness was measured using the area under the receiver operating characteristic curve (AUC) for discrimination, calibration curves for model fit, and decision curve analysis (DCA) for clinical utility. TMF-LCNet demonstrated superior predictive performance compared with conventional image-based, radiomics-based, and multimodal methods, achieving an AUC of 0.797 in the training cohort (n = 184) and 0.747 in the external test cohort (n = 59). Only TMF-LCNet exhibited robust model calibration in both cohorts. Of the three data types, the imaging modality contributed the most, as the image-only version of TMF-LCNet achieved performance closest to the complete version (AUC = 0.723 and 0.716, respectively; p > 0.05). This was followed by the text modality, with radiomics contributing the least, a pattern consistent with the clinical utility trends observed in DCA. TMF-LCNet represents an accurate and robust tool for predicting adverse outcomes in early-stage cirrhosis by integrating multiple data types. It holds potential for early identification of high-risk patients, guiding timely interventions, and ultimately improving patient prognosis.
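A minimal sketch of a triple-modal fusion head in the spirit of TMF-LCNet, assuming PyTorch: image, radiomic, and clinical/text features are embedded separately and concatenated before a single outcome logit. All dimensions, the ResNet18 image branch, and the late-fusion strategy are assumptions; the paper's actual architecture may differ.

```python
# Late fusion of CT image, radiomic vector, and clinical vector (sketch).
import torch
import torch.nn as nn
from torchvision import models

class TripleModalFusion(nn.Module):
    def __init__(self, n_radiomics=50, n_clinical=20, hidden=64):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                     # 512-dim image embedding
        self.image_branch = cnn
        self.radiomics_branch = nn.Sequential(nn.Linear(n_radiomics, hidden), nn.ReLU())
        self.clinical_branch = nn.Sequential(nn.Linear(n_clinical, hidden), nn.ReLU())
        self.classifier = nn.Linear(512 + 2 * hidden, 1)   # adverse-outcome logit

    def forward(self, ct, radiomics, clinical):
        fused = torch.cat([self.image_branch(ct),
                           self.radiomics_branch(radiomics),
                           self.clinical_branch(clinical)], dim=1)
        return self.classifier(fused)

model = TripleModalFusion()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 50), torch.randn(2, 20))
```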

Association of Covert Cerebrovascular Disease With Falls Requiring Medical Attention.

Clancy Ú, Puttock EJ, Chen W, Whiteley W, Vickery EM, Leung LY, Luetmer PH, Kallmes DF, Fu S, Zheng C, Liu H, Kent DM

PubMed · Jun 27, 2025
The impact of covert cerebrovascular disease on falls in the general population is not well known. Here, we determine the time to a first fall following incidentally detected covert cerebrovascular disease during a clinical neuroimaging episode. This longitudinal cohort study assessed computed tomography (CT) and magnetic resonance imaging from 2009 to 2019 of patients aged >50 years registered with Kaiser Permanente Southern California, a healthcare organization combining health plan coverage with coordinated medical services, excluding those with prior stroke or dementia. We extracted evidence of incidental covert brain infarcts (CBI) and white matter hyperintensities/hypoattenuation (WMH) from imaging reports using natural language processing. We examined associations of CBI and WMH with falls requiring medical attention, using Cox proportional hazards regression models with adjustment for 12 variables, including age, sex, ethnicity, multimorbidity, polypharmacy, and incontinence. We assessed 241 050 patients, mean age 64.9 (SD, 10.42) years, 61.3% female, detecting covert cerebrovascular disease in 31.1% over a mean follow-up duration of 3.04 years. A recorded fall occurred in 21.2% (51 239/241 050) during follow-up. On CT, the single-fall incidence rate per 1000 person-years (p-y) was highest in individuals with both CBI and WMH (129.3 falls/1000 p-y [95% CI, 123.4-135.5]), followed by WMH alone (109.9 falls/1000 p-y [108.0-111.9]). On magnetic resonance imaging, the incidence rate was highest with both CBI and WMH (76.3 falls/1000 p-y [95% CI, 69.7-83.2]), followed by CBI alone (71.4 falls/1000 p-y [95% CI, 65.9-77.2]). The adjusted hazard ratio for a single index fall in individuals with CBI on CT was 1.13 (95% CI, 1.09-1.17); on magnetic resonance imaging, 1.17 (95% CI, 1.08-1.27). On CT, the risk for a single index fall increased incrementally for mild (1.37 [95% CI, 1.32-1.43]), moderate (1.57 [95% CI, 1.48-1.67]), and severe WMH (1.57 [95% CI, 1.45-1.70]). On magnetic resonance imaging, index fall risk similarly increased with increasing WMH severity: mild (1.11 [95% CI, 1.07-1.17]), moderate (1.21 [95% CI, 1.13-1.28]), and severe WMH (1.34 [95% CI, 1.22-1.46]). In a large population with neuroimaging, CBI and WMH are independently associated with greater risks of an index fall, and increasing severities of WMH are associated incrementally with fall risk across imaging modalities.
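The described analysis — Cox proportional hazards regression of time to first fall on imaging findings with covariate adjustment — could be set up as below, assuming the lifelines package; the data frame is fabricated and uses only a few of the study's 12 adjustment variables.

```python
# Cox proportional hazards on a synthetic cohort (illustration only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "years_to_fall_or_censor": rng.exponential(3.0, n),
    "fall_observed": rng.integers(0, 2, n),
    "cbi": rng.integers(0, 2, n),                  # covert brain infarct on imaging
    "wmh": rng.integers(0, 2, n),                  # white matter hyperintensities
    "age": rng.normal(65, 10, n),
    "female": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_fall_or_censor", event_col="fall_observed")
# Hazard ratios with 95% CIs, analogous to the adjusted HRs reported above.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```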

A multi-view CNN model to predict resolving of new lung nodules on follow-up low-dose chest CT.

Wang J, Zhang X, Tang W, van Tuinen M, Vliegenthart R, van Ooijen P

PubMed · Jun 27, 2025
New, intermediate-sized nodules in lung cancer screening undergo follow-up CT, but some of these will resolve. We evaluated the performance of a multi-view convolutional neural network (CNN) in distinguishing resolving from non-resolving new, intermediate-sized lung nodules. This retrospective study utilized data on 344 intermediate-sized nodules (50-500 mm³) in 250 participants from the NELSON (Dutch-Belgian Randomized Lung Cancer Screening) trial. We implemented four-fold cross-validation for model training and testing. A multi-view CNN model was developed by combining three two-dimensional (2D) CNN models and one three-dimensional (3D) CNN model; 2D, 2.5D, and 3D models were used for comparison. The models' performance was evaluated using sensitivity, specificity, and area under the ROC curve (AUC). Specificity, the percentage of non-resolving nodules (which require follow-up) correctly identified, was maximized. Among all nodules, 18.3% (63) were resolving. The multi-view CNN model achieved an AUC of 0.81, with a mean sensitivity of 0.63 (SD, 0.15) and a mean specificity of 0.93 (SD, 0.02). The model significantly improved performance compared with the 2D, 2.5D, and 3D models (p < 0.05). At a specificity above 90% (i.e., fewer than 10% of non-resolving nodules incorrectly identified as resolving), follow-up CT in 14% of individuals could be prevented. The multi-view CNN model achieved high specificity in discriminating new intermediate nodules that would need follow-up CT by identifying non-resolving nodules. After further validation and optimization, this model may assist with decision-making when new intermediate nodules are found in lung cancer screening, and has the potential to reduce unnecessary follow-up scans, aiding radiologists in making earlier, more informed decisions. Predicting the resolution of new intermediate lung nodules on lung cancer screening CT is a challenge; our multi-view CNN model showed an AUC of 0.81, a specificity of 0.93, and a sensitivity of 0.63 at the nodule level, a significant improvement in AUC compared with the three 2D models, one 2.5D model, and one 3D model.
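Choosing the operating threshold that maximizes sensitivity subject to the reported specificity floor (>90%) is straightforward with scikit-learn's ROC utilities; the labels and scores below are synthetic stand-ins, not NELSON data.

```python
# Pick the highest-sensitivity ROC operating point with specificity >= 0.90.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 344)                   # 1 = resolving nodule (toy labels)
y_score = np.clip(0.4 * y_true + rng.normal(0.3, 0.2, 344), 0.0, 1.0)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
specificity = 1 - fpr
ok = specificity >= 0.90                           # enforce the 90% specificity floor
best = np.argmax(tpr[ok])                          # highest sensitivity among those
print(f"AUC={roc_auc_score(y_true, y_score):.2f}  "
      f"threshold={thresholds[ok][best]:.2f}  "
      f"sensitivity={tpr[ok][best]:.2f}  "
      f"specificity={specificity[ok][best]:.2f}")
```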
