Page 18 of 56556 results

Using Machine Learning to Improve the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System Diagnosis of Hepatocellular Carcinoma in Indeterminate Liver Nodules.

Hoopes JR, Lyshchik A, Xiao TS, Berzigotti A, Fetzer DT, Forsberg F, Sidhu PS, Wessner CE, Wilson SR, Keith SW

PubMed · Aug 11, 2025
Liver cancer ranks among the most lethal cancers. Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer, and better diagnostic tools are needed to diagnose patients at risk. The aim of this study is to develop a machine learning algorithm that enhances the sensitivity and specificity of the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System (CEUS LI-RADS) in classifying indeterminate at-risk liver nodules (LR-M, LR-3, LR-4) as HCC or non-HCC. The study includes patients at risk for HCC with untreated indeterminate focal liver observations detected on US or contrast-enhanced CT or MRI performed as part of their clinical standard of care from January 2018 to November 2022. Recursive partitioning was used to improve HCC diagnosis in indeterminate at-risk nodules. Demographics, blood biomarkers, and CEUS imaging features were evaluated as potential predictors for the algorithm to classify nodules as HCC or non-HCC. We evaluated 244 indeterminate liver nodules from 224 patients (mean age 62.9 y); 73.2% of patients (164/224) were male. The algorithm was trained on a random 2/3 partition of 163 liver nodules and correctly reclassified more than half of the HCC liver nodules previously categorized as indeterminate in the independent 1/3 test partition of 81 liver nodules, achieving a sensitivity of 56.3% (95% CI: 42.0%, 70.2%) and a specificity of 93.9% (95% CI: 84.4%, 100.0%). In this multicenter, multinational study of CEUS LI-RADS indeterminate at-risk liver nodules, machine learning correctly diagnosed HCC in more than half of the HCC nodules.
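The sensitivity and specificity reported above follow the standard confusion-matrix definitions. A minimal sketch (not the authors' code; the test-partition counts below are hypothetical illustrations, not figures from the paper):

```python
# Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), computed on a
# held-out test partition as in the study design described above.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split of the 81 test nodules: 27 of 48 HCC nodules
# correctly reclassified, 31 of 33 non-HCC nodules correctly ruled out.
sens, spec = sensitivity_specificity(tp=27, fn=21, tn=31, fp=2)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```

With these hypothetical counts the sketch reproduces the order of magnitude of the reported operating point (moderate sensitivity, high specificity).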

C⁵-Net: Cross-organ cross-modality CSwin-Transformer coupled convolutional network for dual task transfer learning in lymph node segmentation and classification.

Wang M, Chen H, Mao L, Jiao W, Han H, Zhang Q

PubMed · Aug 11, 2025
Deep learning has made notable strides in the ultrasonic diagnosis of lymph nodes, yet it faces three primary challenges: a limited number of lymph node images and a scarcity of annotated data; difficulty in comprehensively learning both local and global semantic information; and obstacles in collaborative learning for both image segmentation and classification to achieve accurate diagnosis. To address these issues, we propose the Cross-organ Cross-modality CSwin-Transformer Coupled Convolutional Network (C⁵-Net). First, we design a cross-organ, cross-modality transfer learning strategy that leverages skin lesion dermoscopic images, which have abundant annotations and share similarities in field of view and morphology with lymph node ultrasound images. Second, we couple a Transformer with a convolutional network to comprehensively learn both local details and global information. Third, the encoder weights in the C⁵-Net are shared between the segmentation and classification tasks to exploit their synergistic knowledge, enhancing overall performance in ultrasound lymph node diagnosis. Our study leverages 690 lymph node ultrasound images and 1000 skin lesion dermoscopic images. Experimental results show that the C⁵-Net achieves the best segmentation and classification performance for lymph nodes among advanced methods, with a segmentation Dice coefficient of 0.854 and a classification accuracy of 0.874. Our method has consistently shown accuracy and robustness in the segmentation and classification of lymph nodes, contributing to the early and accurate detection of lymph node malignancy, which is potentially essential for effective treatment planning in clinical oncology.
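The Dice coefficient used above to score segmentation overlap is straightforward to compute from binary masks. A minimal sketch (an assumed implementation, not the authors' code):

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for binary masks flattened to 0/1 sequences.

def dice(pred, truth):
    """Overlap between a predicted and a ground-truth binary mask."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both empty -> perfect match

# Tiny illustrative masks: 3 overlapping positives out of 4 in each mask.
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
```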

Ultrasound-Based Machine Learning and SHapley Additive exPlanations Method Evaluating Risk of Gallbladder Cancer: A Bicentric and Validation Study.

Chen B, Zhong H, Lin J, Lyu G, Su S

PubMed · Aug 9, 2025
This study aims to construct and evaluate eight machine learning models by integrating ultrasound imaging features, clinical characteristics, and serological features to assess the risk of gallbladder cancer (GBC) in patients. A retrospective analysis was conducted on ultrasound and clinical data of 300 suspected GBC patients who visited the Second Affiliated Hospital of Fujian Medical University from January 2020 to January 2024 and 69 patients who visited the Zhongshan Hospital Affiliated to Xiamen University from January 2024 to January 2025. Key relevant features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using XGBoost, logistic regression, support vector machine, k-nearest neighbors, random forest, decision tree, naive Bayes, and neural network, with the SHapley Additive exPlanations (SHAP) method employed to explain model predictions. LASSO regression identified gender, age, alkaline phosphatase (ALP), clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, and intracapsular punctiform strong lesions as key features for GBC. The XGBoost model achieved areas under the receiver operating characteristic curve (AUC) of 0.934, 0.916, and 0.813 in the training, validation, and test sets, respectively. SHAP analysis ranked the features by importance as clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, intracapsular punctiform strong lesions, ALP, gender, and age. Personalized prediction explanations through SHAP values showed the contribution of each feature to the final prediction, enhancing result interpretability. Furthermore, decision plots were generated to display the influence trajectory of each feature on model predictions, aiding in analyzing which features had the greatest impact on mispredictions and thereby facilitating further model optimization or feature adjustment. This study proposed a GBC machine learning model based on ultrasound, clinical, and serological characteristics; the XGBoost model performed best, and the SHAP method enhanced its interpretability.
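The AUC values reported for the XGBoost model can be read through the rank interpretation of the ROC curve: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (assumed, with hypothetical model scores, not study data):

```python
# AUC via the Mann-Whitney U statistic: count pairwise "wins" of positive
# scores over negative scores, with ties counted as half a win.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for 4 GBC and 4 benign cases.
print(auc([0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1],
          [1,   1,   1,   1,   0,   0,   0,   0]))  # 14/16 = 0.875
```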

Automated coronary artery segmentation/tissue characterization and detection of lipid-rich plaque: An integrated backscatter intravascular ultrasound study.

Masuda Y, Takeshita R, Tsujimoto A, Sahashi Y, Watanabe T, Fukuoka D, Hara T, Kanamori H, Okura H

PubMed · Aug 8, 2025
Intravascular ultrasound (IVUS)-based tissue characterization has been used to detect vulnerable plaque or lipid-rich plaque (LRP). Recently, advancements in artificial intelligence (AI) technology have enabled automated coronary arterial plaque segmentation and tissue characterization. The purpose of this study was to evaluate the feasibility and diagnostic accuracy of a deep learning model for plaque segmentation, tissue characterization, and identification of LRP. A total of 1,098 IVUS images from 67 patients who underwent IVUS-guided percutaneous coronary intervention were selected for the training group, while 1,100 IVUS images from 100 vessels (88 patients) were used for the validation group. A 7-layer U-Net++ was applied for automated coronary artery segmentation and tissue characterization. Segmentation and quantification of the external elastic membrane (EEM), lumen, and guidewire artifact were performed and compared with manual measurements. Plaque tissue characterization was conducted using integrated backscatter (IB)-IVUS as the gold standard. LRP was defined as a %lipid area of ≥65%. The deep learning model accurately segmented the EEM and lumen. AI-predicted %lipid area (R = 0.90, P < 0.001), %fibrosis area (R = 0.89, P < 0.001), %dense fibrosis area (R = 0.81, P < 0.001), and %calcification area (R = 0.89, P < 0.001) showed strong correlations with IB-IVUS measurements. The model predicted LRP with a sensitivity of 62%, specificity of 94%, positive predictive value of 69%, negative predictive value of 92%, and an area under the receiver operating characteristic curve of 0.919 (95% CI: 0.902-0.934). The deep learning model demonstrated accurate automatic segmentation and tissue characterization of human coronary arteries, showing promise for identifying LRP.
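The R values above are Pearson correlation coefficients between AI-predicted and IB-IVUS-measured tissue areas. A minimal sketch (assumed; the %lipid-area values below are hypothetical, not study measurements):

```python
# Pearson R = cov(x, y) / (std(x) * std(y)), computed from paired samples.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical %lipid areas: AI prediction vs. IB-IVUS reference.
ai  = [10.0, 25.0, 40.0, 55.0, 70.0]
ref = [12.0, 22.0, 45.0, 50.0, 72.0]
print(f"R = {pearson_r(ai, ref):.2f}")
```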

Enhancing B-mode-based breast cancer diagnosis via cross-attention fusion of H-scan and Nakagami imaging with multi-CAM-QUS-driven XAI.

Mondol SS, Hasan MK

PubMed · Aug 8, 2025
B-mode ultrasound is widely employed for breast lesion diagnosis due to its affordability, widespread availability, and effectiveness, particularly in cases of dense breast tissue where mammography may be less sensitive. However, it disregards critical tissue information embedded in raw radiofrequency (RF) data. While both modalities have demonstrated promise in computer-aided diagnosis (CAD), their combined potential remains largely unexplored. Approach: This paper presents an automated breast lesion classification network that utilizes H-scan and Nakagami parametric images derived from RF ultrasound signals, combined with machine-generated B-mode images, seamlessly integrated through a Multi-Modal Cross Attention Fusion (MM-CAF) mechanism to extract complementary information. The proposed architecture also incorporates an attention-guided modified InceptionV3 for feature extraction, a Knowledge-Guided Cross-Modality Learning (KGCML) module for inter-modal knowledge sharing, and Attention-Driven Context Enhancement (ADCE) modules to improve contextual understanding and fusion with the classification network. The network employs categorical cross-entropy loss, a Multi-CAM-based loss to guide learning toward accurate lesion-specific features, and a Multi-QUS-based loss to embed clinically meaningful domain knowledge and effectively distinguish between benign and malignant lesions, all while supporting explainable AI (XAI) principles. Main results: Experiments conducted on multi-center breast ultrasound datasets (BUET-BUSD, ATL, and OASBUD), characterized by demographic diversity, demonstrate the effectiveness of the proposed approach, achieving classification accuracies of 92.54%, 89.93%, and 90.0%, respectively, along with high interpretability and trustworthiness. These results surpass those of existing methods based on B-mode and/or RF data, highlighting the superior performance and robustness of the proposed technique. By integrating complementary RF-derived information with B-mode imaging, together with pseudo-segmentation and domain-informed loss functions, the method significantly boosts lesion classification accuracy, enabling fully automated, explainable CAD and paving the way for wider clinical adoption of AI-driven breast screening.

Thyroid Volume Measurement With AI-Assisted Freehand 3D Ultrasound Compared to 2D Ultrasound-A Clinical Trial.

Rask KB, Makouei F, Wessman MHJ, Kristensen TT, Todsen T

PubMed · Aug 8, 2025
Accurate thyroid volume assessment is critical in thyroid disease diagnostics, yet conventional high-resolution 2D ultrasound has limitations. Freehand 3D ultrasound with AI-assisted segmentation presents a potential advancement, but its clinical accuracy requires validation. This prospective clinical trial included 14 patients scheduled for total thyroidectomy. Preoperative thyroid volume was measured using both 2D ultrasound (ellipsoid method) and freehand 3D ultrasound with AI segmentation. Postoperative thyroid volume, determined via the water displacement method, served as the reference standard. The median postoperative thyroid volume was 14.8 mL (IQR 8.8-20.2). The median volume difference was 1.7 mL (IQR 1.2-3.3) for 3D ultrasound and 3.6 mL (IQR 2.3-6.6) for 2D ultrasound (p = 0.02). The inter-operator reliability coefficient for 3D ultrasound was 0.986 (p < 0.001). These findings suggest that freehand 3D ultrasound with AI-assisted segmentation provides superior accuracy and reproducibility compared to 2D ultrasound and may enhance clinical thyroid volume assessment. ClinicalTrials.gov identifier: NCT05510609.
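The 2D-ultrasound comparator uses the ellipsoid method, which in standard practice estimates each lobe's volume as π/6 × length × width × depth. A minimal sketch (assumed formula and hypothetical lobe dimensions, not trial data):

```python
# Ellipsoid method for thyroid lobe volume: V = pi/6 * L * W * D,
# with dimensions in centimetres (1 cm^3 = 1 mL).
import math

def ellipsoid_volume_ml(length_cm, width_cm, depth_cm):
    return math.pi / 6 * length_cm * width_cm * depth_cm

# Hypothetical lobes; total thyroid volume is the sum of both lobes.
right = ellipsoid_volume_ml(4.8, 1.8, 1.9)
left  = ellipsoid_volume_ml(4.5, 1.7, 1.8)
print(f"estimated thyroid volume: {right + left:.1f} mL")
```

The trial's finding is that this geometric approximation deviated more from the water-displacement reference (median 3.6 mL) than AI-segmented freehand 3D volumes did (median 1.7 mL).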

A Co-Plane Machine Learning Model Based on Ultrasound Radiomics for the Evaluation of Diabetic Peripheral Neuropathy.

Jiang Y, Peng R, Liu X, Xu M, Shen H, Yu Z, Jiang Z

PubMed · Aug 8, 2025
Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches to the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN. A total of 516 feet from 262 patients with diabetes across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were used to construct radiomics models based on the transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA), and SHapley Additive exPlanations (SHAP) were employed to elucidate the key features and their contributions to predictions within the optimal model. The co-plane support vector machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models (DeLong test, P < 0.05). Calibration curves and DCA indicated good model fit and potential clinical utility. The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated the best performance for DPN prediction and may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.

Hybrid Neural Networks for Precise Hydronephrosis Classification Using Deep Learning.

Salam A, Naznine M, Chowdhury MEH, Agzamkhodjaev S, Tekin A, Vallasciani S, Ramírez-Velázquez E, Abbas TO

PubMed · Aug 7, 2025
To develop and evaluate a deep learning framework for automatic kidney and fluid segmentation in renal ultrasound images, aiming to enhance diagnostic accuracy and reduce variability in hydronephrosis assessment. A dataset of 1,731 renal ultrasound images, annotated by four experienced urologists, was used for model training and evaluation. The proposed framework integrates a DenseNet201 backbone, Feature Pyramid Network (FPN), and Self-Organizing Neural Network (SelfONN) layers to enable multi-scale feature extraction and improve spatial precision. Several architectures were tested under identical conditions to ensure fair comparison. Segmentation performance was assessed using standard metrics, including Dice coefficient, precision, and recall. The framework also supported hydronephrosis classification using the fluid-to-kidney area ratio, with a threshold of 0.213 derived from prior literature. The model achieved strong segmentation performance for kidneys (Dice: 0.92, precision: 0.93, recall: 0.91) and fluid regions (Dice: 0.89, precision: 0.90, recall: 0.88), outperforming baseline methods. The classification accuracy for detecting hydronephrosis reached 94%, based on the computed fluid-to-kidney ratio. Performance was consistent across varied image qualities, reflecting the robustness of the overall architecture. This study presents an automated, objective pipeline for analyzing renal ultrasound images. The proposed framework supports high segmentation accuracy and reliable classification, facilitating standardized and reproducible hydronephrosis assessment. Future work will focus on model optimization and incorporating explainable AI to enhance clinical integration.
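The classification step described above reduces to a simple rule on the fluid-to-kidney area ratio with the literature-derived threshold of 0.213. A minimal sketch (assumed; the pixel areas below are hypothetical and would in practice come from the segmentation masks):

```python
# Hydronephrosis rule: classify from the ratio of segmented fluid area
# to segmented kidney area, using the threshold cited in the study.

THRESHOLD = 0.213  # fluid-to-kidney area ratio, derived from prior literature

def classify_hydronephrosis(fluid_area_px, kidney_area_px):
    ratio = fluid_area_px / kidney_area_px
    label = "hydronephrosis" if ratio >= THRESHOLD else "normal"
    return label, ratio

# Hypothetical mask areas from one ultrasound image.
label, ratio = classify_hydronephrosis(fluid_area_px=5200, kidney_area_px=18500)
print(label, round(ratio, 3))  # 5200/18500 ≈ 0.281 -> hydronephrosis
```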

Artificial Intelligence for the Detection of Fetal Ultrasound Findings Concerning for Major Congenital Heart Defects.

Zelop CM, Lam-Rachlin J, Arunamata A, Punn R, Behera SK, Lachaud M, David N, DeVore GR, Rebarber A, Fox NS, Gayanilo M, Garmel S, Boukobza P, Uzan P, Joly H, Girardot R, Cohen L, Stos B, De Boisredon M, Askinazi E, Thorey V, Gardella C, Levy M, Geiger M

PubMed · Aug 7, 2025
To evaluate the performance of artificial intelligence (AI)-based software in identifying second-trimester fetal ultrasound examinations suspicious for congenital heart defects. The software analyzes all grayscale two-dimensional ultrasound cine clips of an examination to evaluate eight morphologic findings associated with severe congenital heart defects. A data set of 877 examinations was retrospectively collected from 11 centers. The presence of suspicious findings was determined by a panel of expert pediatric cardiologists, who found that 311 examinations had at least one of the eight suspicious findings. The AI software processed each examination, labeling each finding as present, absent, or inconclusive. Of the 280 examinations with known severe congenital heart defects, 278 (sensitivity 0.993, 95% CI, 0.974-0.998) had at least one of the eight suspicious findings present as determined by the fetal cardiologists, highlighting the relevance of these eight findings. The AI software identified at least one finding as present in 271 of the 280 examinations with severe congenital heart defects, found all eight findings absent in five, and was inconclusive in four, yielding a sensitivity of 0.968 (95% CI, 0.940-0.983) for severe congenital heart defects. When comparing the AI with the cardiologists' determination of findings, detection of any finding by the AI had a sensitivity of 0.987 (95% CI, 0.967-0.995) and a specificity of 0.977 (95% CI, 0.961-0.986) after exclusion of inconclusive examinations. The AI rendered a decision for every finding (either present or absent) in 98.7% of examinations. The AI-based software demonstrated high accuracy in identifying suspicious findings associated with severe congenital heart defects, yielding a high sensitivity for detecting severe congenital heart defects.
These results show that AI has potential to improve antenatal congenital heart defect detection.
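The confidence intervals quoted above (e.g., 278/280 → sensitivity 0.993, 95% CI 0.974-0.998) are consistent with a Wilson score interval for a binomial proportion. A minimal sketch (an assumed method; the paper does not state which interval construction it used):

```python
# Wilson score interval for a binomial proportion at confidence level
# determined by z (z = 1.96 for a 95% interval).
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(278, 280)
print(f"sensitivity = {278/280:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

For 278/280 this yields an interval that rounds to (0.974, 0.998), matching the reported figures; the Wilson form is preferred over the normal approximation when the proportion is near 1.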

Towards Real-Time Detection of Fatty Liver Disease in Ultrasound Imaging: Challenges and Opportunities.

Alshagathrh FM, Schneider J, Househ MS

PubMed · Aug 7, 2025
This study presents an AI framework for real-time NAFLD detection using ultrasound imaging, addressing operator dependency, imaging variability, and class imbalance. It integrates CNNs with machine learning classifiers and applies preprocessing techniques, including normalization and GAN-based augmentation, to enhance prediction for underrepresented disease stages. Grad-CAM provides visual explanations to support clinical interpretation. Trained on 10,352 annotated images from multiple Saudi centers, the framework achieved 98.9% accuracy and an AUC of 0.99, outperforming baseline CNNs by 12.4% and improving sensitivity for advanced fibrosis and subtle features. Future work will extend multi-class classification, validate performance across settings, and integrate with clinical systems.