Page 2 of 56552 results

End-to-end Spatiotemporal Analysis of Color Doppler Echocardiograms: Application for Rheumatic Heart Disease Detection.

Roshanitabrizi P, Nath V, Brown K, Broudy TG, Jiang Z, Parida A, Rwebembera J, Okello E, Beaton A, Roth HR, Sable CA, Linguraru MG

PubMed · Sep 29 2025
Rheumatic heart disease (RHD) represents a significant global health challenge, disproportionately affecting over 40 million people in low- and middle-income countries. Early detection through color Doppler echocardiography is crucial for treating RHD, but it requires specialized physicians who are often scarce in resource-limited settings. To address this disparity, artificial intelligence (AI)-driven tools for RHD screening can provide scalable, autonomous solutions to improve access to critical healthcare services in underserved regions. This paper introduces RADAR (Rapid AI-Assisted Echocardiography Detection and Analysis of RHD), a novel and generalizable AI approach for end-to-end spatiotemporal analysis of color Doppler echocardiograms, aimed at detecting early RHD in resource-limited settings. RADAR identifies key imaging views and employs convolutional neural networks to analyze diagnostically relevant phases of the cardiac cycle. It also localizes essential anatomical regions and examines blood flow patterns. It then integrates all findings into a cohesive analytical framework. RADAR was trained and validated on 1,022 echocardiogram videos from 511 Ugandan children, acquired using standard portable ultrasound devices. An independent set of 318 cases, acquired using a handheld ultrasound device with diverse imaging characteristics, was also tested. On the validation set, RADAR outperformed existing methods, achieving an average accuracy of 0.92, sensitivity of 0.94, and specificity of 0.90. In independent testing, it maintained high, clinically acceptable performance, with an average accuracy of 0.79, sensitivity of 0.87, and specificity of 0.70. These results highlight RADAR's potential to improve RHD detection and promote health equity for vulnerable children by enhancing timely, accurate diagnoses in underserved regions.
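The final step described above, integrating per-view findings into one cohesive case-level call, can be illustrated with a toy aggregation rule. The abstract does not specify RADAR's actual fusion logic; the view names, mean-pooling, and 0.5 threshold below are illustrative assumptions only.

```python
import numpy as np

def case_level_decision(view_probs, threshold=0.5):
    """Aggregate per-view RHD probabilities into one case-level call.

    view_probs maps a view name to per-frame probabilities from a
    (hypothetical) CNN. Each view is summarized by its mean frame
    probability; the case score is the mean over views.
    """
    view_scores = [np.mean(frames) for frames in view_probs.values()]
    case_score = float(np.mean(view_scores))
    return case_score, case_score >= threshold

# Toy example: two Doppler views with per-frame CNN outputs
probs = {
    "parasternal_long_axis": [0.8, 0.9, 0.7],
    "apical_4_chamber": [0.6, 0.65, 0.7],
}
score, is_rhd = case_level_decision(probs)
```

Mean-pooling is only one plausible fusion; a learned combiner over view scores would be closer to an end-to-end design.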

Development of a High-Performance Ultrasound Prediction Model for the Diagnosis of Endometrial Cancer: An Interpretable XGBoost Algorithm Utilizing SHAP Analysis.

Lai H, Wu Q, Weng Z, Lyu G, Yang W, Ye F

PubMed · Sep 29 2025
To develop and validate an ultrasonography-based machine learning (ML) model for predicting malignant endometrial and cavitary lesions. This retrospective study was conducted on patients with pathologically confirmed results following transvaginal or transrectal ultrasound from 2021 to 2023. Endometrial ultrasound features were characterized using the International Endometrial Tumor Analysis (IETA) terminology. The dataset was randomly divided (7:3) into training and validation sets. LASSO (least absolute shrinkage and selection operator) regression was applied for feature selection, and an extreme gradient boosting (XGBoost) model was developed. Performance was assessed via receiver operating characteristic (ROC) analysis, calibration, decision curve analysis, sensitivity, specificity, and accuracy. Among 1080 patients, 6 had a non-measurable endometrium. Of the remaining 1074 cases, 641 were premenopausal and 433 postmenopausal. On the test set, the area under the curve (AUC) for the premenopausal group was 0.845 (0.781-0.909), with relatively low sensitivity (0.588, 0.442-0.722) and relatively high specificity (0.923, 0.863-0.959); the AUC for the postmenopausal group was 0.968 (0.944-0.992), with both sensitivity (0.895, 0.778-0.956) and specificity (0.931, 0.839-0.974) relatively high. SHapley Additive exPlanations (SHAP) analysis identified key predictors: endometrial-myometrial junction, endometrial thickness, endometrial echogenicity, color Doppler flow score, and vascular pattern in premenopausal women; endometrial thickness, endometrial-myometrial junction, endometrial echogenicity, and color Doppler flow score in postmenopausal women. The XGBoost-based model exhibited excellent predictive performance, particularly in postmenopausal patients. SHAP analysis further enhances interpretability by identifying key ultrasonographic predictors of malignancy.
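The pipeline described above (7:3 split, LASSO feature selection, gradient-boosted classifier, ROC evaluation) can be sketched on synthetic data. This is an illustrative assumption-laden stand-in, not the study's code: it uses scikit-learn's GradientBoostingClassifier in place of XGBoost, fits LASSO directly on the 0/1 label as a selection shortcut, and picks `alpha=0.01` arbitrarily.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the ultrasound feature table
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# 7:3 split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LASSO-based feature selection (regression on the 0/1 label as a shortcut)
selector = SelectFromModel(Lasso(alpha=0.01, random_state=0)).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

# Gradient boosting as a stand-in for XGBoost
clf = GradientBoostingClassifier(random_state=0).fit(X_tr_s, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te_s)[:, 1])
```

Swapping in `xgboost.XGBClassifier` (if installed) and adding calibration and decision-curve analysis would bring the sketch closer to the paper's full evaluation.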

A machine learning approach for non-invasive PCOS diagnosis from ultrasound and clinical features.

Agirsoy M, Oehlschlaeger MA

PubMed · Sep 29 2025
This study investigates the use of machine learning (ML) algorithms to support faster and more accurate diagnosis of polycystic ovary syndrome (PCOS), with a focus on both predictive performance and clinical applicability. Multiple algorithms were evaluated, including Artificial Neural Networks (ANN), Support Vector Machines (SVM), Logistic Regression (LR), K-Nearest Neighbors (KNN), and Extreme Gradient Boosting (XGBoost). XGBoost consistently outperformed the other models and was selected for final development and validation. To align with the Rotterdam criteria, the dataset was structured into three feature categories: clinical, biochemical, and ultrasound (USG) data. The study explored various combinations of these feature subsets to identify the most efficient diagnostic pathways. Feature selection using the chi-square-based SelectKBest method revealed the top 10 predictive features, which were further validated through XGBoost's internal feature importance, SHAP analysis, and expert clinical assessment. The final XGBoost model demonstrated robust performance across multiple feature combinations:
• Clinical + USG + AMH: AUC = 0.9947, Precision = 0.9553, F1 Score = 0.9553, Accuracy = 0.9553.
• Clinical + USG: AUC = 0.9852, Precision = 0.9583, F1 Score = 0.9388, Accuracy = 0.9384.
The most influential features included follicle count on both ovaries, weight gain, Anti-Müllerian Hormone (AMH), hair growth, menstrual irregularity, fast food consumption, pimples, and hair loss. External validation was performed using a publicly available dataset containing 320 instances and 18 diagnostic features. The XGBoost model trained on the top-ranked features achieved perfect performance on the test set (AUC = 1.0, Precision = 1.0, F1 Score = 1.0, Accuracy = 1.0), though further validation is necessary to rule out overfitting or data leakage.
These findings suggest that combining clinical and ultrasound features enables highly accurate, non-invasive, and cost-effective PCOS diagnosis. This study demonstrates the potential of ML-driven tools to streamline clinical workflows, reduce reliance on invasive diagnostics, and support early intervention in women's health.
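The chi-square-based SelectKBest ranking used above is a standard scikit-learn step. A minimal sketch on synthetic non-negative features (chi2 requires non-negative input; the feature layout and label rule here are fabricated for illustration):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
# 100 patients x 5 non-negative, count-like features
X = rng.integers(0, 10, size=(100, 5)).astype(float)
y = (X[:, 0] > 4).astype(int)   # label driven by feature 0
X[:, 3] += 5 * y                # feature 3 also tracks the label

# Keep the k features with the largest chi-square statistic
selector = SelectKBest(chi2, k=2).fit(X, y)
top = set(np.flatnonzero(selector.get_support()))
```

In the study the same mechanism was run with k = 10 over the clinical, biochemical, and ultrasound feature pool.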

Artificial Intelligence Deep Learning Ultrasound Discrimination of Cosmetic Fillers: A Multicenter Study.

Wortsman X, Lozano M, Rodriguez FJ, Valderrama Y, Ortiz-Orellana G, Zattar L, de Cabo F, Ducati E, Sigrist R, Fontan C, Rezende J, Gonzalez C, Schelke L, Zavariz J, Barrera P, Velthuis P

PubMed · Sep 29 2025
Despite the growing use of artificial intelligence (AI) in medicine, imaging, and dermatology, to date there is no information on the use of AI for discriminating cosmetic fillers on ultrasound (US). An international collaborative group working in dermatologic and esthetic US was formed and worked with the staff of the Department of Computer Science and AI of the Universidad de Granada to gather and process a relevant number of anonymized images. AI techniques based on deep learning (DL) with the YOLO (you only look once) architecture, together with a bounding box annotation tool, allowed experts to manually delineate regions of interest for the discrimination of common cosmetic fillers under real-world conditions. A total of 14 physicians from 6 countries participated in the AI study and compiled a final dataset comprising 1432 US images, including HA (hyaluronic acid), PMMA (polymethylmethacrylate), CaHA (calcium hydroxyapatite), and SO (silicone oil) filler cases. The model exhibited robust and consistent classification performance, with an average accuracy of 0.92 ± 0.04 across the cross-validation folds. YOLOv11 demonstrated outstanding performance in the detection of HA and SO, yielding F1 scores of 0.96 ± 0.02 and 0.94 ± 0.04, respectively. In contrast, CaHA and PMMA showed somewhat lower and less consistent performance in terms of precision and recall, with F1 scores around 0.83. AI using YOLOv11 allowed reliable discrimination between HA and SO across high-frequency US devices of varying complexity and across operators. Further AI DL-specific work is needed to identify CaHA and PMMA more accurately.
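The bounding-box annotations described above are typically scored against detections with intersection-over-union (IoU), the standard overlap measure for YOLO-style detectors. A generic sketch, not the study's evaluation code:

```python
# IoU between two axis-aligned boxes given as [x1, y1, x2, y2]
def iou(a, b):
    # Intersection rectangle (empty if the boxes are disjoint)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

overlap = iou([0, 0, 2, 2], [1, 1, 3, 3])   # partially overlapping boxes
```

A detection is usually counted as correct when its IoU with an annotated box exceeds a threshold such as 0.5, which is what the per-class precision, recall, and F1 figures above rest on.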

Advances in ultrasound-based imaging for diagnosis of endometrial cancer.

Tlais M, Hamze H, Hteit A, Haddad K, El Fassih I, Zalzali I, Mahmoud S, Karaki S, Jabbour D

PubMed · Sep 28 2025
Endometrial cancer (EC) is the most common gynecological malignancy in high-income countries, with incidence rates rising globally. Early and accurate diagnosis is essential for improving outcomes. Transvaginal ultrasound (TVUS) remains a cost-effective first-line tool, and emerging techniques such as three-dimensional (3D) ultrasound (US), contrast-enhanced US (CEUS), elastography, and artificial intelligence (AI)-enhanced imaging may further improve diagnostic performance. To systematically review recent advances in US-based imaging techniques for the diagnosis and staging of EC, and to compare their performance with magnetic resonance imaging (MRI). A systematic search of PubMed, Scopus, Web of Science, and Google Scholar was performed to identify studies published between January 2010 and March 2025. Eligible studies evaluated TVUS, 3D-US, CEUS, elastography, or AI-enhanced US in EC diagnosis and staging. Methodological quality was assessed using the QUADAS-2 tool. Sensitivity, specificity, and area under the curve (AUC) were extracted where available, with narrative synthesis due to heterogeneity. Forty-one studies met the inclusion criteria. TVUS demonstrated high sensitivity (76%-96%) but moderate specificity (61%-86%), while MRI achieved higher specificity (84%-95%) and superior staging accuracy. 3D-US yielded accuracy comparable to MRI in selected early-stage cases. CEUS and elastography enhanced tissue characterization, and AI-enhanced US achieved pooled AUCs up to 0.91 for risk prediction and lesion segmentation. Variability in performance was noted across modalities due to patient demographics, equipment differences, and operator experience. TVUS remains a highly sensitive initial screening tool, with MRI preferred for definitive staging. 3D-US, CEUS, elastography, and AI-enhanced techniques show promise as complementary or alternative approaches, particularly in low-resource settings. 
Standardization, multicenter validation, and integration of multi-modal imaging are needed to optimize diagnostic pathways for EC.

Modified UNet-enhanced ultrasonic superb microvascular imaging feature extraction and grading of carpal tunnel syndrome.

Gong X, Zhang G, Zhao D, Jin Z, Zhu Y, Jiang L, Ding B, Xue H, Lin H, Zhang W, Zhang D, Tu J

PubMed · Sep 28 2025
Carpal tunnel syndrome (CTS) is recognized as the most frequently encountered median nerve (MN) entrapment neuropathy, with a disproportionate burden in middle-aged and elderly individuals and in occupational groups with repetitive wrist use. Anatomically, CTS is characterized by compression of the median nerve within the confined space between the transverse carpal ligament and flexor tendons, and microcirculatory impairment is regarded as one of its key pathological bases. Although electrodiagnostic assessments are considered the diagnostic gold standard, their utility is limited by suboptimal patient compliance, procedural discomfort, and inadequate sensitivity for detecting mild disease. This study integrates ultrafast Superb Microvascular Imaging (SMI) with a classification-guided, improved UNet segmentation model and quantitative image analysis to objectively extract microvascular features for CTS grading. In a cohort of 105 patients (21 mild, 71 moderate, 13 severe CTS) and 21 healthy controls, longitudinal and transverse SMI cine loops were segmented using an improved UNet with cross-plane classification guidance. The modified network yielded superior segmentation performance compared with a traditional UNet. From the segmented regions we extracted 6 SMI-derived geometric features, which were then used as predictors in a nonlinear quadratic regression model for CTS severity grading. The model achieved 93.7% overall classification accuracy and an AUC of 0.95 in cross-validation. Independent blind validation (n = 12) showed strong agreement with expert sonographers (Kappa = 0.87). These results demonstrate that high-spatiotemporal-resolution SMI combined with an anatomy-aware deep learning model enables reproducible extraction of microvascular geometry and supports robust, noninvasive grading of CTS, with potential for deployment on portable ultrasound platforms for point-of-care screening and bedside ultrasonic monitoring.
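Segmentation quality for UNet-style models such as the one above is conventionally reported with the Dice coefficient, the overlap between predicted and reference binary masks. A generic sketch, not the paper's implementation:

```python
import numpy as np

def dice(pred, ref, eps=1e-8):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])   # predicted mask
ref  = np.array([[1, 0, 0], [0, 1, 1]])   # reference (expert) mask
score = dice(pred, ref)
```

Dice of 1.0 means perfect overlap; the `eps` term only guards against division by zero when both masks are empty.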

A Novel Hybrid Deep Learning and Chaotic Dynamics Approach for Thyroid Cancer Classification

Nada Bouchekout, Abdelkrim Boukabou, Morad Grimes, Yassine Habchi, Yassine Himeur, Hamzah Ali Alkhazaleh, Shadi Atalla, Wathiq Mansoor

arXiv preprint · Sep 28 2025
Timely and accurate diagnosis is crucial in addressing the global rise in thyroid cancer, ensuring effective treatment strategies and improved patient outcomes. We present an intelligent classification method that couples an Adaptive Convolutional Neural Network (CNN) with Cohen-Daubechies-Feauveau (CDF9/7) wavelets whose detail coefficients are modulated by an n-scroll chaotic system to enrich discriminative features. We evaluate on the public DDTI thyroid ultrasound dataset (n = 1,638 images; 819 malignant / 819 benign) using 5-fold cross-validation, where the proposed method attains 98.17% accuracy, 98.76% sensitivity, 97.58% specificity, 97.55% F1-score, and an AUC of 0.9912. A controlled ablation shows that adding chaotic modulation to CDF9/7 improves accuracy by +8.79 percentage points over a CDF9/7-only CNN (from 89.38% to 98.17%). To objectively position our approach, we trained state-of-the-art backbones on the same data and splits: EfficientNetV2-S (96.58% accuracy; AUC 0.987), Swin-T (96.41%; 0.986), ViT-B/16 (95.72%; 0.983), and ConvNeXt-T (96.94%; 0.987). Our method outperforms the best of these by +1.23 points in accuracy and +0.0042 in AUC, while remaining computationally efficient (28.7 ms per image; 1,125 MB peak VRAM). Robustness is further supported by cross-dataset testing on TCIA (accuracy 95.82%) and transfer to an ISIC skin-lesion subset (n = 28 unique images, augmented to 2,048; accuracy 97.31%). Explainability analyses (Grad-CAM, SHAP, LIME) highlight clinically relevant regions. Altogether, the wavelet-chaos-CNN pipeline delivers state-of-the-art thyroid ultrasound classification with strong generalization and practical runtime characteristics suitable for clinical integration.
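The core idea above, a wavelet split whose detail coefficients are amplified by a chaotic signal, can be illustrated with simpler stand-ins: a one-level Haar transform instead of CDF9/7, and the logistic map instead of an n-scroll system. Both substitutions are assumptions made for brevity; they are not the authors' components.

```python
import numpy as np

def haar_split(x):
    """One-level Haar wavelet decomposition into approximation + detail."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def logistic_gains(n, r=3.9, s0=0.5):
    """Chaotic gain sequence from the logistic map s <- r*s*(1-s)."""
    s, out = s0, []
    for _ in range(n):
        s = r * s * (1 - s)
        out.append(s)
    return np.array(out)

x = np.array([4.0, 2.0, 6.0, 8.0])
approx, detail = haar_split(x)
# Modulate detail coefficients with chaotic gains to enrich features
modulated = detail * (1.0 + logistic_gains(len(detail)))
```

In the paper's pipeline the (chaos-modulated) coefficients then feed an adaptive CNN; here the point is only the decomposition-plus-modulation step.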

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

PubMed · Sep 26 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark the performance metrics of several state-of-the-art object detection algorithms to localize the site of injury and semantic segmentation models to label the anatomy for comparison and creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 segmentation model achieves the highest accuracy on unseen porcine anatomy, with a Mean Dice score of 0.587, while SAMed achieves the highest mean Dice score generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures to assess anatomical markers in the spinal cord for methodology development and clinical applications.
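The mAP50-95 metric reported for YOLOv8 above averages detection quality over IoU thresholds from 0.50 to 0.95 in steps of 0.05. The toy sketch below keeps only that multi-threshold averaging over per-detection IoUs; real mAP additionally integrates precision-recall over confidence rankings, so this is a simplification, not the benchmark's scoring code.

```python
import numpy as np

def match_rate_over_thresholds(ious):
    """Average, over IoU thresholds 0.50..0.95, the fraction of
    detections whose IoU with ground truth clears each threshold."""
    thresholds = np.linspace(0.50, 0.95, 10)
    rates = [(np.asarray(ious) >= t).mean() for t in thresholds]
    return float(np.mean(rates))

# Three detections with IoUs against their matched ground-truth boxes
score = match_rate_over_thresholds([0.92, 0.62, 0.40])
```

A detection with IoU 0.92 counts at nearly every threshold, one at 0.62 only at the loose thresholds, and one at 0.40 never, which is why mAP50-95 rewards tight localization more than plain mAP50.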

Prediction of neoadjuvant chemotherapy efficacy in patients with HER2-low breast cancer based on ultrasound radiomics.

Peng Q, Ji Z, Xu N, Dong Z, Zhang T, Ding M, Qu L, Liu Y, Xie J, Jin F, Chen B, Song J, Zheng A

PubMed · Sep 26 2025
Neoadjuvant chemotherapy (NAC) is a crucial therapeutic approach for treating breast cancer, yet accurately predicting treatment response remains a significant clinical challenge. Conventional ultrasound plays a vital role in assessing tumor morphology but lacks the ability to quantitatively capture intratumoral heterogeneity. Ultrasound radiomics, which extracts high-throughput quantitative imaging features, offers a novel approach to enhance NAC response prediction. This study aims to evaluate the predictive efficacy of ultrasound radiomics models based on pre-treatment, post-treatment, and combined imaging features for assessing the NAC response in patients with HER2-low breast cancer. This retrospective multicenter study included 359 patients with HER2-low breast cancer who underwent NAC between January 1, 2016, and December 31, 2020. A total of 488 radiomic features were extracted from pre- and post-treatment ultrasound images. Feature selection was conducted in two stages: first, Pearson correlation analysis (threshold: 0.65) was applied to remove highly correlated features and reduce redundancy; then, Recursive Feature Elimination with Cross-Validation (RFECV) was employed to identify the optimal feature subset for model construction. The dataset was divided into a training set (244 patients) and an external validation set (115 patients from independent centers). Model performance was assessed via the area under the receiver operating characteristic curve (AUC), accuracy, precision, recall, and F1 score. 
Three models were initially developed: (1) a pre-treatment model (AUC = 0.716), (2) a post-treatment model (AUC = 0.772), and (3) a combined pre- and post-treatment model (AUC = 0.762). After RFECV-based feature selection, the optimized models with reduced feature sets achieved: (1) pre-treatment, AUC = 0.746; (2) post-treatment, AUC = 0.712; and (3) combined, AUC = 0.759. Ultrasound radiomics is a non-invasive and promising approach for predicting response to neoadjuvant chemotherapy in HER2-low breast cancer. The pre-treatment model yielded reliable performance after feature selection. While the combined model did not substantially enhance predictive accuracy, its stable performance suggests that longitudinal ultrasound imaging may help capture treatment-induced phenotypic changes. These findings offer preliminary support for individualized therapeutic decision-making.
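The first feature-selection stage described above, removing redundant radiomic features whose pairwise Pearson correlation exceeds 0.65, can be sketched as a greedy filter. This is an illustration on fabricated features, not the study's code:

```python
import numpy as np

def drop_correlated(X, threshold=0.65):
    """Greedily keep each column only if its |Pearson r| with every
    already-kept column is at or below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + 0.01 * rng.normal(size=200)   # nearly duplicates feature a
c = rng.normal(size=200)              # independent feature
X = np.column_stack([a, b, c])
kept = drop_correlated(X)
```

In the study, RFECV then searched the surviving features for the subset that maximized cross-validated performance.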

Hybrid Fusion Model for Effective Distinguishing Benign and Malignant Parotid Gland Tumors in Gray-Scale Ultrasonography.

Mao Y, Jiang LP, Wang JL, Chen FQ, Zhang WP, Peng XQ, Chen L, Liu ZX

PubMed · Sep 26 2025
To develop a hybrid fusion model, the deep learning radiomics nomogram (DLRN), integrating radiomics and transfer learning to assist sonographers in differentiating benign and malignant parotid gland tumors. This study retrospectively analyzed a total of 328 patients with pathologically confirmed parotid gland tumors from two centers. Radiomics features extracted from ultrasound images were input into eight machine learning classifiers to construct a radiomics (Rad) model. Additionally, the images were input into seven transfer learning networks to construct a deep transfer learning (DTL) model. The prediction probabilities from these two models were combined through decision fusion to construct a DLR model. Clinical features were further integrated with the prediction probabilities of the DLR model to develop the DLRN model. The performance of these models was evaluated using receiver operating characteristic curve analysis, calibration curves, decision curve analysis, and the Hosmer-Lemeshow test. In the internal and external validation cohorts, compared with the Clinic (AUC = 0.891 and 0.734), Rad (AUC = 0.809 and 0.860), DTL (AUC = 0.905 and 0.782), and DLR (AUC = 0.932 and 0.828) models, the DLRN model demonstrated the greatest discriminative ability (AUC = 0.931 and 0.934). With the assistance of the DLR model, the diagnostic accuracy of resident, attending, and chief physicians increased by 6.6%, 6.5%, and 1.2%, respectively. The hybrid fusion model DLRN significantly enhances diagnostic performance for benign and malignant parotid gland tumors and can effectively assist sonographers in making more accurate diagnoses.
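The decision-fusion structure above, averaging the Rad and DTL probabilities into a DLR score and then blending in a clinical score, can be sketched as follows. The equal weights are an illustrative assumption; the paper's DLRN fits its coefficients as a nomogram rather than fixing them.

```python
import numpy as np

def fuse(p_rad, p_dtl, p_clinic, w_dlr=0.5):
    """Average radiomics and transfer-learning probabilities into a DLR
    score, then blend the DLR score with a clinical score."""
    p_dlr = (np.asarray(p_rad) + np.asarray(p_dtl)) / 2.0
    return w_dlr * p_dlr + (1.0 - w_dlr) * np.asarray(p_clinic)

# Two patients: per-model malignancy probabilities
p = fuse([0.8, 0.2], [0.6, 0.4], [0.9, 0.1])
```

Replacing the fixed weights with coefficients from a logistic regression over the component scores would recover the usual nomogram construction.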