
Artificial intelligence aided ultrasound imaging of foetal congenital heart disease: A scoping review.

Norris L, Lockwood P

PubMed | Sep 16, 2025
Congenital heart diseases (CHDs) are a significant cause of neonatal mortality and morbidity. Detecting these abnormalities during pregnancy increases survival rates, enhances prognosis, and improves pregnancy management and quality of life for the affected families. Foetal echocardiography can be considered an accurate method for detecting CHDs. However, the detection of CHDs can be limited by factors such as the sonographer's skill and expertise, as well as patient-specific variables. Artificial intelligence (AI) has the potential to address these challenges, increasing antenatal CHD detection during prenatal care. A scoping review was conducted using the Google Scholar, PubMed, and ScienceDirect databases, employing keywords, Boolean operators, and inclusion and exclusion criteria to identify peer-reviewed studies. Thematic mapping and synthesis of the identified literature were conducted to review key concepts, research methods, and findings. A total of n = 233 articles were identified; after applying the exclusion criteria, the focus was narrowed to n = 7 studies that met the inclusion criteria. Themes in the literature identified the potential of AI to assist clinicians and trainees, alongside emerging ethical limitations in ultrasound imaging. AI-based tools in ultrasound imaging offer great potential in assisting sonographers and doctors with decision-making in CHD diagnosis. However, due to the paucity of data and small sample sizes, further research and technological advancements are needed to improve reliability and integrate AI into routine clinical practice. This scoping review identified the reported accuracy and limitations of AI-based tools within foetal cardiac ultrasound imaging. AI has the potential to aid in reducing missed diagnoses, enhance training, and improve pregnancy management. There is a need to understand and address the ethical and legal considerations involved with this new paradigm in imaging.

A Fully Open and Generalizable Foundation Model for Ultrasound Clinical Applications

Hongyuan Zhang, Yuheng Wu, Mingyang Zhao, Zhiwei Chen, Rebecca Li, Fei Zhu, Haohan Zhao, Xiaohua Yuan, Meng Yang, Chunli Qiu, Xiang Cong, Haiyan Chen, Lina Luan, Randolph H. L. Wong, Huai Liao, Colin A Graham, Shi Chang, Guowei Tao, Dong Yi, Zhen Lei, Nassir Navab, Sebastien Ourselin, Jiebo Luo, Hongbin Liu, Gaofeng Meng

arXiv preprint | Sep 15, 2025
Artificial intelligence (AI) that can effectively learn ultrasound representations by integrating multi-source data holds significant promise for advancing clinical care. However, the scarcity of large labeled datasets in real-world clinical environments and the limited generalizability of task-specific models have hindered the development of generalizable clinical AI models for ultrasound applications. In this study, we present EchoCare, a novel ultrasound foundation model for generalist clinical use, developed via self-supervised learning on our curated, publicly available, large-scale dataset EchoCareData. EchoCareData comprises 4.5 million ultrasound images, sourced from over 23 countries across 5 continents and acquired via a diverse range of distinct imaging devices, thus encompassing global cohorts that are multi-center, multi-device, and multi-ethnic. Unlike prior studies that adopt off-the-shelf vision foundation model architectures, we introduce a hierarchical classifier into EchoCare to enable joint learning of pixel-level and representation-level features, capturing both global anatomical contexts and local ultrasound characteristics. With minimal training, EchoCare outperforms state-of-the-art comparison models across 10 representative ultrasound benchmarks of varying diagnostic difficulties, spanning disease diagnosis, lesion segmentation, organ detection, landmark prediction, quantitative regression, imaging enhancement and report generation. The code and pretrained model are publicly released, rendering EchoCare accessible for fine-tuning and local adaptation, supporting extensibility to additional applications. EchoCare provides a fully open and generalizable foundation model to boost the development of AI technologies for diverse clinical ultrasound applications.

Deep Learning for Breast Mass Discrimination: Integration of B-Mode Ultrasound & Nakagami Imaging with Automatic Lesion Segmentation

Hassan, M. W., Hossain, M. M.

medRxiv preprint | Sep 15, 2025
Objective: This study aims to enhance breast cancer diagnosis by developing an automated deep learning framework for real-time, quantitative ultrasound imaging. Breast cancer is the second leading cause of cancer-related deaths among women, and early detection is crucial for improving survival rates. Conventional ultrasound, valued for its non-invasive nature and real-time capability, is limited by qualitative assessments and inter-observer variability. Quantitative ultrasound (QUS) methods, including Nakagami imaging, which models the statistical distribution of backscattered signals and lesion morphology, present an opportunity for more objective analysis. Methods: The proposed framework integrates three convolutional neural networks (CNNs): (1) NakaSynthNet, synthesizing quantitative Nakagami parameter images from B-mode ultrasound; (2) SegmentNet, enabling automated lesion segmentation; and (3) FeatureNet, which combines anatomical and statistical features for classifying lesions as benign or malignant. Training utilized a diverse dataset of 110,247 images, comprising clinical B-mode scans and various simulated examples (fruit, mammographic lesions, digital phantoms). Quantitative performance was evaluated using mean squared error (MSE), structural similarity index (SSIM), segmentation accuracy, sensitivity, specificity, and area under the curve (AUC). Results: NakaSynthNet achieved real-time synthesis at 21 frames/s, with an MSE of 0.09% and SSIM of 98%. SegmentNet reached 98.4% accuracy, and FeatureNet delivered 96.7% overall classification accuracy, 93% sensitivity, 98% specificity, and an AUC of 98%. Conclusion: The proposed multi-parametric deep learning pipeline enables accurate, real-time breast cancer diagnosis from ultrasound data using objective quantitative imaging. Significance: This framework advances the clinical utility of ultrasound by reducing subjectivity and providing robust, multi-parametric information for improved breast cancer detection.
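Nakagami imaging, as referenced in this abstract, fits the Nakagami distribution to the backscattered ultrasound envelope; the shape parameter m is commonly estimated from the first two moments of the squared envelope. A minimal sketch of that moment-based estimator follows (this illustrates the underlying parameter map only, not the paper's NakaSynthNet; the function name and synthetic Rayleigh data are illustrative):

```python
import numpy as np

def nakagami_params(envelope):
    """Moment-based (inverse normalized variance) estimates of the
    Nakagami shape (m) and scale (omega) parameters from envelope samples."""
    e2 = np.asarray(envelope, dtype=float) ** 2
    omega = e2.mean()            # scale: mean backscattered power, E[R^2]
    m = omega**2 / e2.var()      # shape: E[R^2]^2 / Var(R^2)
    return m, omega

# A Rayleigh-distributed envelope (fully developed speckle) corresponds to m = 1.
rng = np.random.default_rng(0)
env = rng.rayleigh(scale=1.0, size=200_000)
m, omega = nakagami_params(env)
print(round(m, 2), round(omega, 2))  # m near 1.0, omega near 2.0
```

In a Nakagami parametric image, this estimator is applied within a sliding window over the envelope data, so each pixel reflects the local scattering statistics.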

Pseudo-D: Informing Multi-View Uncertainty Estimation with Calibrated Neural Training Dynamics

Ang Nan Gu, Michael Tsang, Hooman Vaseli, Purang Abolmaesumi, Teresa Tsang

arXiv preprint | Sep 15, 2025
Computer-aided diagnosis systems must make critical decisions from medical images that are often noisy, ambiguous, or conflicting, yet today's models are trained on overly simplistic labels that ignore diagnostic uncertainty. One-hot labels erase inter-rater variability and force models to make overconfident predictions, especially when faced with incomplete or artifact-laden inputs. We address this gap by introducing a novel framework that brings uncertainty back into the label space. Our method leverages neural network training dynamics (NNTD) to assess the inherent difficulty of each training sample. By aggregating and calibrating model predictions during training, we generate uncertainty-aware pseudo-labels that reflect the ambiguity encountered during learning. This label augmentation approach is architecture-agnostic and can be applied to any supervised learning pipeline to enhance uncertainty estimation and robustness. We validate our approach on a challenging echocardiography classification benchmark, demonstrating superior performance over specialized baselines in calibration, selective classification, and multi-view fusion.
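The core idea of deriving uncertainty-aware labels from training dynamics can be sketched simply: record the model's per-epoch predictions for each training sample and aggregate them into soft labels, so samples the model flip-flops on receive higher-entropy targets. This is a minimal illustration of the general principle, not the paper's exact NNTD or calibration procedure; all names and numbers are hypothetical:

```python
import numpy as np

def pseudo_labels_from_dynamics(epoch_probs):
    """Aggregate per-epoch predicted class probabilities
    (epochs x samples x classes) into soft pseudo-labels whose
    entropy reflects how ambiguous each sample was during training."""
    probs = np.asarray(epoch_probs, dtype=float)
    soft = probs.mean(axis=0)                   # average over epochs
    soft /= soft.sum(axis=1, keepdims=True)     # renormalize
    entropy = -(soft * np.log(soft + 1e-12)).sum(axis=1)
    return soft, entropy

# Two samples: one the model agrees on every epoch, one it flip-flops on.
epoch_probs = [
    [[0.9, 0.1], [0.8, 0.2]],
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.9, 0.1], [0.7, 0.3]],
]
soft, ent = pseudo_labels_from_dynamics(epoch_probs)
# The flip-flopping sample ends up with the higher-entropy pseudo-label.
```

Training against such soft targets, instead of one-hot labels, is what lets the downstream model express calibrated uncertainty on ambiguous inputs.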

Open-Source AI for Vastus Lateralis and Adipose Tissue Segmentation to Assess Muscle Size and Quality.

White MS, Horikawa-Strakovsky A, Mayer KP, Noehren BW, Wen Y

PubMed | Sep 13, 2025
Ultrasound imaging is a clinically feasible method for assessing muscle size and quality, but manual processing is time-consuming and difficult to scale. Existing artificial intelligence (AI) models measure muscle cross-sectional area, but they do not include assessments of muscle quality or account for the influence of subcutaneous adipose tissue thickness on echo intensity measurements. We developed an open-source AI model to accurately segment the vastus lateralis and subcutaneous adipose tissue in B-mode images for automating measurements of muscle size and quality. The model was trained on 612 ultrasound images from 44 participants who had anterior cruciate ligament reconstruction. Model generalizability was evaluated on a test set of 50 images from 14 unique participants. A U-Net architecture with a ResNet50 backbone was used for segmentation. Performance was assessed using the Dice coefficient and Intersection over Union (IoU). Agreement between model predictions and manual measurements was evaluated using intraclass correlation coefficients (ICCs), R² values, and standard errors of measurement (SEM). Dice coefficients were 0.9095 and 0.9654 for subcutaneous adipose tissue and vastus lateralis segmentation, respectively. Excellent agreement was observed between model predictions and manual measurements for cross-sectional area (ICC = 0.986), echo intensity (ICC = 0.991), and subcutaneous adipose tissue thickness (ICC = 0.996). The model demonstrated high reliability with low SEM values for clinical measurements (cross-sectional area: 1.15 cm², echo intensity: 1.28-1.78 a.u.). We developed an open-source AI model that accurately segments the vastus lateralis and subcutaneous adipose tissue in B-mode ultrasound images, enabling automated measurements of muscle size and quality.
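The Dice coefficient and IoU reported above are standard overlap metrics between a predicted mask and a manually drawn reference mask. A minimal illustration on toy binary masks (the masks here are made up for demonstration):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and Intersection-over-Union for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return dice, iou

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_and_iou(pred, target)
print(dice, iou)  # 0.666..., 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so Dice is always the larger of the two for partial overlaps, which is worth remembering when comparing scores across papers.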

Biomechanical assessment of Hoffa fat pad characteristics with ultrasound: a narrative review focusing on diagnostic imaging and image-guided interventions.

Qin N, Zhang B, Zhang X, Tian L

PubMed | Sep 13, 2025
The infrapatellar fat pad (IFP), a key intra-articular knee structure, plays a crucial role in biomechanical cushioning and metabolic regulation, with fibrosis and inflammation contributing to osteoarthritis-related pain and dysfunction. This review outlines the anatomy and clinical value of IFP ultrasonography in static and dynamic assessment, as well as guided interventions. Shear wave elastography (SWE), Doppler imaging, and dynamic ultrasound effectively quantify tissue stiffness, vascular signals, and flexion-extension morphology. Because of the limited penetration of ultrasound imaging, it is difficult to observe the IFP directly through the patella. However, its real-time capability and sensitivity effectively complement the detailed anatomical information provided by MRI, making it an important supplementary method to MRI-based IFP assessment. This integrated approach creates a robust diagnostic pathway, from initial assessment and precise treatment guidance to long-term monitoring. Advances in ultrasound-guided precision medicine, protocol standardization, and the integration of artificial intelligence (AI) with multimodal imaging hold significant promise for improving the management of IFP pathologies.

Harnessing Artificial Intelligence for Shoulder Ultrasonography: A Narrative Review.

Wu WT, Shu YC, Lin CY, Gonzalez-Suarez CB, Özçakar L, Chang KV

PubMed | Sep 12, 2025
Shoulder pain is a common musculoskeletal complaint requiring accurate imaging for diagnosis and management. Ultrasound is favored for its accessibility, dynamic imaging, and high-resolution soft tissue visualization. However, its operator dependency and variability in interpretation present challenges. Recent advancements in artificial intelligence (AI), particularly deep learning algorithms like convolutional neural networks, offer promising applications in musculoskeletal imaging, enhancing diagnostic accuracy and efficiency. This narrative review explores AI integration in shoulder ultrasound, emphasizing automated pathology detection, image segmentation, and outcome prediction. Deep learning models have demonstrated high accuracy in grading bicipital peritendinous effusion and discriminating rotator cuff tendon tears, while machine learning techniques have shown efficacy in predicting the success of ultrasound-guided percutaneous irrigation for rotator cuff calcification. AI-powered segmentation models have improved anatomical delineation. Despite these advancements, challenges remain, including the need for large, well-annotated datasets, model generalizability across diverse populations, and clinical validation. Future research should optimize AI algorithms for real-time applications, integrate multimodal imaging, and enhance clinician-AI collaboration.

A machine learning model combining ultrasound features and serological markers predicts gallbladder polyp malignancy: A retrospective cohort study.

Yang Y, Tu H, Lin Y, Wei J

PubMed | Sep 12, 2025
Differentiating benign from malignant gallbladder polyps (GBPs) is critical for clinical decisions. Pathological biopsy, the gold standard, requires cholecystectomy, underscoring the need for noninvasive alternatives. This retrospective study included 202 patients (50 malignant, 152 benign) who underwent cholecystectomy (2018-2024) at Fujian Provincial Hospital. Ultrasound features (polyp diameter, stalk presence), serological markers (neutrophil-to-lymphocyte ratio [NLR], CA19-9), and demographics (age, sex, body mass index, waist-to-hip ratio, comorbidities, alcohol history) were analyzed. Patients were split into training (70%) and validation (30%) sets. Ten machine learning (ML) algorithms were trained; the model with the highest area under the receiver operating characteristic curve (AUC) was selected. Shapley additive explanations (SHAP) identified key predictors. Models were categorized as clinical (ultrasound + age), hematological (NLR + CA19-9), and combined (all 5 variables). ROC, precision-recall, calibration, and decision curve analysis plots were generated. A web-based calculator was developed. The Extra Trees model achieved the highest AUC (0.97 in training, 0.93 in validation). SHAP analysis highlighted polyp diameter, sessile morphology, NLR, age, and CA19-9 as top predictors. The combined model outperformed the clinical (AUC 0.89) and hematological (AUC 0.68) models, with balanced sensitivity (66%-54%), specificity (94%-93%), and accuracy (87%-83%). This ML model integrating ultrasound and serological markers accurately predicts GBP malignancy. The web-based calculator facilitates clinical adoption, potentially reducing unnecessary surgeries.
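The AUC used for model selection above has a simple rank interpretation: it is the probability that a randomly chosen malignant case receives a higher predicted risk than a randomly chosen benign case (the Mann-Whitney U statistic, with ties counted as one half). A minimal sketch, with made-up model scores rather than anything from the study:

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    outscores a randomly chosen negative case, counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical outputs: higher score = higher predicted malignancy risk.
auc = auc_from_scores([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
print(round(auc, 3))  # 8 of 9 pairs ranked correctly -> 0.889
```

Because the statistic depends only on ranks, it is unaffected by any monotone rescaling of the model's scores, which is why AUC is a common yardstick across otherwise incomparable models.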

Artificial Intelligence and Carpal Tunnel Syndrome: A Systematic Review and Contemporary Update on Imaging Techniques.

Misch M, Medani K, Rhisheekesan A, Manjila S

PubMed | Sep 12, 2025
Trailblazing strides in artificial intelligence (AI) programs have led to enhanced diagnostic imaging, including ultrasound (US), magnetic resonance imaging, and infrared thermography. This systematic review summarizes current efforts to integrate AI into the diagnosis of carpal tunnel syndrome (CTS) and its potential to improve clinical decision-making. A comprehensive literature search was conducted in PubMed, Embase, and the Cochrane database in accordance with PRISMA guidelines. Articles were included if they evaluated the application of AI in the diagnosis or detection of CTS. Search terms included "carpal tunnel syndrome" and "artificial intelligence", along with relevant MeSH terms. A total of 22 studies met the inclusion criteria and were analyzed qualitatively. AI models, especially deep learning algorithms, demonstrated strong diagnostic performance, particularly with US imaging. Frequently used inputs included echointensity, pixelation patterns, and the cross-sectional area of the median nerve. AI-assisted image analysis enabled superior detection and segmentation of the median nerve, often outperforming radiologists in sensitivity and specificity. Additionally, AI complemented electromyography by offering insight into the physiological integrity of the nerve. AI holds significant promise as an adjunctive tool in the diagnosis and management of CTS. Its ability to extract and quantify radiomic features may support accurate, reproducible diagnoses and allow for longitudinal digital documentation. When integrated with existing modalities, AI may enhance clinical assessments, inform surgical decision-making, and extend diagnostic capabilities into telehealth and point-of-care settings. Continued development and prospective validation of these technologies are essential for streamlining widespread integration into clinical practice.

Ultrasound-Based Deep Learning Radiomics to Predict Cervical Lymph Node Metastasis in Major Salivary Gland Carcinomas.

Su HZ, Hong LC, Li ZY, Fu QM, Wu YH, Wu SF, Zhang ZB, Yang DH, Zhang XD

PubMed | Sep 12, 2025
Cervical lymph node metastasis (CLNM) critically impacts surgical approaches, prognosis, and recurrence in patients with major salivary gland carcinomas (MSGCs). We aimed to develop and validate an ultrasound (US)-based deep learning (DL) radiomics model for noninvasive prediction of CLNM in MSGCs. A total of 214 patients with MSGCs from 4 medical centers were divided into training (Centers 1-2, n = 144) and validation (Centers 3-4, n = 70) cohorts. Radiomics and DL features were extracted from preoperative US images. Following feature selection, a radiomics score and a DL score were constructed. Subsequently, least absolute shrinkage and selection operator (LASSO) regression was used to identify optimal features, which were then employed to develop predictive models using logistic regression (LR) and 8 machine learning algorithms. Model performance was evaluated using multiple metrics, with particular focus on the area under the receiver operating characteristic curve (AUC). Radiomics and DL scores showed robust performance in predicting CLNM in MSGCs, with AUCs of 0.819 and 0.836 in the validation cohort, respectively. After LASSO regression, 6 key features (patient age, tumor edge, calcification, US-reported CLN positivity, radiomics score, and DL score) were selected to construct 9 predictive models. In the validation cohort, the models' AUCs ranged from 0.770 to 0.962. The LR model achieved the best performance, with an AUC of 0.962, accuracy of 0.886, precision of 0.762, recall of 0.842, and an F1 score of 0.8. The composite model integrating clinical, US, radiomics, and DL features accurately and noninvasively predicts CLNM preoperatively in MSGCs. CLNM in MSGCs is critical for treatment planning, but noninvasive prediction is limited. This study developed a US-based DL radiomics model to enable noninvasive CLNM prediction, supporting personalized surgery and reducing unnecessary interventions.
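LASSO regression, used here (and via SHAP-ranked models in the gallbladder study above) for feature selection, performs selection through its L1 penalty: the associated proximal operator soft-thresholds coefficients, shrinking all of them toward zero and zeroing out any below the regularization strength. A minimal sketch of that mechanism (the coefficient values are invented for illustration, not taken from either study):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty: shrinks every coefficient
    toward zero and zeroes out anything with magnitude below lam --
    the mechanism by which LASSO drops uninformative features."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coefs = np.array([0.8, -0.05, 0.3, 0.02, -0.6])
selected = soft_threshold(coefs, lam=0.1)
print(selected)  # keeps features 0, 2 and 4; zeroes the rest
```

Coordinate-descent LASSO solvers apply this operator repeatedly per coefficient; the surviving nonzero features are then handed to the downstream classifiers, as in the 6-feature models described above.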
