
Enhancing B-mode-based breast cancer diagnosis via cross-attention fusion of H-scan and Nakagami imaging with multi-CAM-QUS-driven XAI.

Mondol SS, Hasan MK

pubmed logopapers · Aug 8, 2025
B-mode ultrasound is widely employed for breast lesion diagnosis due to its affordability, widespread availability, and effectiveness, particularly in cases of dense breast tissue where mammography may be less sensitive. However, it disregards critical tissue information embedded in raw radiofrequency (RF) data. While both modalities have demonstrated promise in Computer-Aided Diagnosis (CAD), their combined potential remains largely unexplored.
Approach. This paper presents an automated breast lesion classification network that utilizes H-scan and Nakagami parametric images derived from RF ultrasound signals, combined with machine-generated B-mode images, seamlessly integrated through a Multi-Modal Cross-Attention Fusion (MM-CAF) mechanism to extract complementary information. The proposed architecture also incorporates an attention-guided modified InceptionV3 for feature extraction, a Knowledge-Guided Cross-Modality Learning (KGCML) module for inter-modal knowledge sharing, and Attention-Driven Context Enhancement (ADCE) modules to improve contextual understanding and fusion with the classification network. The network employs categorical cross-entropy loss, a Multi-CAM-based loss that guides learning toward accurate lesion-specific features, and a Multi-QUS-based loss that embeds clinically meaningful domain knowledge to effectively distinguish between benign and malignant lesions, all while supporting explainable AI (XAI) principles.
Main results. Experiments conducted on multi-center breast ultrasound datasets (BUET-BUSD, ATL, and OASBUD), characterized by demographic diversity, demonstrate the effectiveness of the proposed approach, achieving classification accuracies of 92.54%, 89.93%, and 90.0%, respectively, along with high interpretability and trustworthiness. These results surpass those of existing methods based on B-mode and/or RF data, highlighting the superior performance and robustness of the proposed technique. By integrating complementary RF-derived information with B-mode imaging, together with pseudo-segmentation and domain-informed loss functions, our method significantly boosts lesion classification accuracy, enabling fully automated, explainable CAD and paving the way for widespread clinical adoption of AI-driven breast screening.
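
As a concrete illustration of the cross-attention fusion idea, the sketch below fuses token features from two modalities (e.g., B-mode and an RF-derived parametric image) with multi-head cross-attention and a residual connection. The dimensions, single-block design, and token layout are illustrative assumptions, not the authors' MM-CAF implementation.

```python
# Minimal sketch of cross-attention fusion between two modality feature maps.
# Shapes and the single-block design are assumptions for illustration.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        # query_feats: (B, N, dim) tokens from one modality (e.g., B-mode)
        # context_feats: (B, M, dim) tokens from another (e.g., H-scan/Nakagami)
        fused, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + fused)  # residual connection

bmode = torch.randn(2, 64, 256)      # B-mode feature tokens
rf_params = torch.randn(2, 64, 256)  # RF-derived parametric-image tokens
fusion = CrossAttentionFusion()
print(fusion(bmode, rf_params).shape)  # torch.Size([2, 64, 256])
```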

A Co-Plane Machine Learning Model Based on Ultrasound Radiomics for the Evaluation of Diabetic Peripheral Neuropathy.

Jiang Y, Peng R, Liu X, Xu M, Shen H, Yu Z, Jiang Z

pubmed logopapers · Aug 8, 2025
Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches for the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN. A total of 516 feet from 262 patients with diabetes across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were utilized to construct radiomics models based on transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA), and Shapley Additive exPlanations (SHAP) were employed to elucidate the key features and their contributions to predictions within the optimal model. The co-plane Support Vector Machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models, as determined by the DeLong test (P < 0.05). Calibration and DCA curves indicated a good model fit and suggested potential clinical utility. The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated optimal performance in DPN prediction, significantly enhancing the efficacy of DPN diagnosis. This model may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.
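
To make the co-plane fusion step concrete, here is a minimal sketch that concatenates transverse- and longitudinal-plane radiomic feature vectors and evaluates an SVM with cross-validated AUC. The synthetic data, the 658 + 658 = 1316 feature split, and the RBF kernel are assumptions; the study's actual feature-selection pipeline is not reproduced.

```python
# Illustrative co-plane radiomics pipeline: concatenate per-plane features,
# then fit an SVM with 5-fold cross-validated AUC. Data here are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_trans = rng.normal(size=(309, 658))  # transverse-plane radiomic features
X_long = rng.normal(size=(309, 658))   # longitudinal-plane radiomic features
y = rng.integers(0, 2, size=309)       # DPN label (0 = absent, 1 = present)

X_coplane = np.hstack([X_trans, X_long])  # co-plane fusion by concatenation
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
aucs = cross_val_score(model, X_coplane, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```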

Artificial Intelligence for the Detection of Fetal Ultrasound Findings Concerning for Major Congenital Heart Defects.

Zelop CM, Lam-Rachlin J, Arunamata A, Punn R, Behera SK, Lachaud M, David N, DeVore GR, Rebarber A, Fox NS, Gayanilo M, Garmel S, Boukobza P, Uzan P, Joly H, Girardot R, Cohen L, Stos B, De Boisredon M, Askinazi E, Thorey V, Gardella C, Levy M, Geiger M

pubmed logopapers · Aug 7, 2025
To evaluate the performance of an artificial intelligence (AI)-based software to identify second-trimester fetal ultrasound examinations suspicious for congenital heart defects. The software analyzes all grayscale two-dimensional ultrasound cine clips of an examination to evaluate eight morphologic findings associated with severe congenital heart defects. A data set of 877 examinations was retrospectively collected from 11 centers. The presence of suspicious findings was determined by a panel of expert pediatric cardiologists, who found that 311 examinations had at least one of the eight suspicious findings. The AI software processed each examination, labeling each finding as present, absent, or inconclusive. Of the 280 examinations with known severe congenital heart defects, 278 (sensitivity 0.993, 95% CI, 0.974-0.998) had at least one of the eight suspicious findings present as determined by the fetal cardiologists, highlighting the relevance of these eight findings. We then evaluated the performance of the AI software: among the 280 examinations with severe congenital heart defects, it identified at least one finding as present in 271, reported all eight findings as absent in five, and was inconclusive in four, yielding a sensitivity of 0.968 (95% CI, 0.940-0.983) for severe congenital heart defects. When comparing the AI with the fetal cardiologists' determination of findings, the detection of any finding by the AI had a sensitivity of 0.987 (95% CI, 0.967-0.995) and a specificity of 0.977 (95% CI, 0.961-0.986) after exclusion of inconclusive examinations. The AI rendered a decision for any finding (either present or absent) in 98.7% of examinations. The AI-based software demonstrated high accuracy in identification of suspicious findings associated with severe congenital heart defects, yielding a high sensitivity for detecting severe congenital heart defects. These results show that AI has potential to improve antenatal congenital heart defect detection.
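
The headline sensitivity can be checked directly: 271 of the 280 severe-CHD examinations had at least one finding flagged as present, with inconclusive exams counted as misses. The snippet below reproduces that figure with a Wilson 95% CI; the paper's exact interval method is assumed, though a Wilson interval matches the reported bounds.

```python
# Worked check of the reported sensitivity: 271 detected / 280 severe-CHD exams.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

k, n = 271, 280
lo, hi = wilson_ci(k, n)
print(f"sensitivity = {k/n:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# sensitivity = 0.968, 95% CI (0.940, 0.983), matching the abstract
```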

Automatic Multi-Stage Classification Model for Fetal Ultrasound Images Based on EfficientNet.

Shih CS, Chiu HW

pubmed logopapers · Aug 7, 2025
This study aims to enhance the accuracy of fetal ultrasound image classification using convolutional neural networks, specifically EfficientNet. The research focuses on data collection, preprocessing, model training, and evaluation at different pregnancy stages: early, midterm, and newborn. EfficientNet showed the best performance, particularly in the newborn stage, demonstrating deep learning's potential to improve classification accuracy and support clinical workflows.
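
A minimal transfer-learning sketch along these lines: an EfficientNet backbone from torchvision with its classifier head replaced for stage classification. The B0 variant, three-class head, and input size are illustrative assumptions, not details from the study.

```python
# Sketch of EfficientNet fine-tuning for stage classification (assumed setup).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3  # e.g., early, midterm, newborn
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
# Replace the classifier head; EfficientNet-B0's final layer takes 1280 inputs.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)

x = torch.randn(4, 3, 224, 224)  # batch of preprocessed ultrasound frames
logits = model(x)
print(logits.shape)  # torch.Size([4, 3])
```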

Lower Extremity Bypass Surveillance and Peak Systolic Velocities Value Prediction Using Recurrent Neural Networks.

Luo X, Tahabi FM, Rollins DM, Sawchuk AP

pubmed logopapers · Aug 7, 2025
Routine duplex ultrasound surveillance is recommended after femoral-popliteal and femoral-tibial-pedal vein bypass grafts at various post-operative intervals. Currently, there is no systematic method for bypass graft surveillance using a set of peak systolic velocities (PSVs) collected during these exams. This research aims to explore the use of recurrent neural networks to predict the next set of PSVs, which can then indicate occlusion status. Recurrent neural network models were developed to predict occlusion and stenosis based on one to three prior sets of PSVs, with a sequence-to-sequence model utilized to forecast future PSVs within the stent graft and nearby arteries. The study employed 5-fold cross-validation for model performance comparison, revealing that the BiGRU model outperformed BiLSTM when two or more sets of PSVs were included and demonstrating that additional duplex ultrasound exams improve prediction accuracy and reduce error rates. This work establishes a basis for integrating comprehensive clinical data, including demographics, comorbidities, symptoms, and other risk factors, with PSVs to enhance lower extremity bypass graft surveillance predictions.
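
A hedged sketch of the sequence-modeling setup described: a bidirectional GRU consumes a short history of PSV sets and a linear head predicts the next set. The number of PSVs per exam (eight here) and the layer sizes are assumptions for illustration.

```python
# Sketch: BiGRU maps a history of PSV sets to the next predicted set.
import torch
import torch.nn as nn

class PSVForecaster(nn.Module):
    def __init__(self, n_psv=8, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_psv, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_psv)  # 2x for both directions

    def forward(self, history):
        # history: (batch, n_exams, n_psv) prior duplex exams
        out, _ = self.gru(history)
        return self.head(out[:, -1, :])  # predict the next exam's PSV set

model = PSVForecaster()
history = torch.randn(16, 3, 8)  # three prior exams, eight PSVs each
next_psvs = model(history)
print(next_psvs.shape)  # torch.Size([16, 8])
```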

Towards Real-Time Detection of Fatty Liver Disease in Ultrasound Imaging: Challenges and Opportunities.

Alshagathrh FM, Schneider J, Househ MS

pubmed logopapers · Aug 7, 2025
This study presents an AI framework for real-time NAFLD detection using ultrasound imaging, addressing operator dependency, imaging variability, and class imbalance. It integrates CNNs with machine learning classifiers and applies preprocessing techniques, including normalization and GAN-based augmentation, to enhance prediction for underrepresented disease stages. Grad-CAM provides visual explanations to support clinical interpretation. Trained on 10,352 annotated images from multiple Saudi centers, the framework achieved 98.9% accuracy and an AUC of 0.99, outperforming baseline CNNs by 12.4% and improving sensitivity for advanced fibrosis and subtle features. Future work will extend multi-class classification, validate performance across settings, and integrate with clinical systems.
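
Grad-CAM, the explanation method mentioned, can be sketched in a few lines: average the gradients of the class score over a late convolutional layer's spatial grid, weight the activations, and rectify. The snippet uses a torchvision ResNet18 as a stand-in backbone, since the study's CNN is not specified in the abstract.

```python
# Minimal Grad-CAM sketch on a stand-in backbone (assumed, not the study's CNN).
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feats, grads = {}, {}
layer = model.layer4  # last convolutional stage

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(x)[0].max()  # top class score
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # global-average gradients
cam = torch.relu((weights * feats["a"]).sum(dim=1)).detach()
cam = cam / cam.max()  # normalize to [0, 1]
print(cam.shape)  # (1, 7, 7) heatmap, upsampled onto the input for display
```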

Hybrid Neural Networks for Precise Hydronephrosis Classification Using Deep Learning.

Salam A, Naznine M, Chowdhury MEH, Agzamkhodjaev S, Tekin A, Vallasciani S, Ramírez-Velázquez E, Abbas TO

pubmed logopapers · Aug 7, 2025
To develop and evaluate a deep learning framework for automatic kidney and fluid segmentation in renal ultrasound images, aiming to enhance diagnostic accuracy and reduce variability in hydronephrosis assessment. A dataset of 1,731 renal ultrasound images, annotated by four experienced urologists, was used for model training and evaluation. The proposed framework integrates a DenseNet201 backbone, Feature Pyramid Network (FPN), and Self-Organized Operational Neural Network (SelfONN) layers to enable multi-scale feature extraction and improve spatial precision. Several architectures were tested under identical conditions to ensure fair comparison. Segmentation performance was assessed using standard metrics, including Dice coefficient, precision, and recall. The framework also supported hydronephrosis classification using the fluid-to-kidney area ratio, with a threshold of 0.213 derived from prior literature. The model achieved strong segmentation performance for kidneys (Dice: 0.92, precision: 0.93, recall: 0.91) and fluid regions (Dice: 0.89, precision: 0.90, recall: 0.88), outperforming baseline methods. The classification accuracy for detecting hydronephrosis reached 94%, based on the computed fluid-to-kidney ratio. Performance was consistent across varied image qualities, reflecting the robustness of the overall architecture. This study presents an automated, objective pipeline for analyzing renal ultrasound images. The proposed framework supports high segmentation accuracy and reliable classification, facilitating standardized and reproducible hydronephrosis assessment. Future work will focus on model optimization and incorporating explainable AI to enhance clinical integration.
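
The ratio-based classification step reduces to simple mask arithmetic: divide the segmented fluid area by the segmented kidney area and compare against the 0.213 literature-derived threshold. The toy masks below are placeholders for the segmentation model's actual outputs.

```python
# Sketch of fluid-to-kidney ratio classification from binary masks.
import numpy as np

def classify_hydronephrosis(kidney_mask, fluid_mask, threshold=0.213):
    """Masks are boolean arrays from the segmentation model."""
    kidney_area = kidney_mask.sum()
    fluid_area = fluid_mask.sum()
    ratio = fluid_area / max(kidney_area, 1)  # guard against empty masks
    return ratio, ratio >= threshold

kidney = np.zeros((256, 256), bool); kidney[40:200, 60:180] = True
fluid = np.zeros((256, 256), bool); fluid[90:150, 90:150] = True
ratio, positive = classify_hydronephrosis(kidney, fluid)
print(f"ratio = {ratio:.3f}, hydronephrosis = {positive}")
```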

Automated ultrasound Doppler angle estimation using deep learning

Nilesh Patil, Ajay Anand

arxiv logopreprint · Aug 6, 2025
Angle estimation is an important step in the Doppler ultrasound clinical workflow to measure blood velocity. It is widely recognized that incorrect angle estimation is a leading cause of error in Doppler-based blood velocity measurements. In this paper, we propose a deep learning-based approach for automated Doppler angle estimation. The approach was developed using 2100 human carotid ultrasound images, including image augmentation. Five pre-trained models were used to extract image features, and these features were passed to a custom shallow network for Doppler angle estimation. Independently, measurements were obtained by a human observer reviewing the images for comparison. The mean absolute error (MAE) between the automated and manual angle estimates ranged from 3.9° to 9.4° for the models evaluated. Furthermore, the MAE for the best-performing model was less than the acceptable clinical Doppler angle error threshold, thus avoiding misclassification of normal velocity values as a stenosis. The results demonstrate the potential of a deep learning-based technique for automated ultrasound Doppler angle estimation. Such a technique could potentially be implemented within the imaging software on commercial ultrasound scanners.
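
The two-stage design (pretrained feature extractor feeding a custom shallow network) can be sketched as follows; the ResNet18 backbone and head sizes are illustrative assumptions, as the abstract does not name the five pretrained models.

```python
# Sketch: frozen pretrained CNN features + shallow regression head for angle.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()      # expose 512-d features
for p in backbone.parameters():
    p.requires_grad = False      # frozen feature extractor

head = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(8, 3, 224, 224)  # batch of carotid Doppler images
with torch.no_grad():
    feats = backbone(x)
angle_deg = head(feats)          # predicted Doppler angle in degrees
print(angle_deg.shape)           # torch.Size([8, 1])
```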

ERDES: A Benchmark Video Dataset for Retinal Detachment and Macular Status Classification in Ocular Ultrasound

Pouyan Navard, Yasemin Ozkut, Srikar Adhikari, Elaine Situ-LaCasse, Josie Acuña, Adrienne Yarnish, Alper Yilmaz

arxiv logopreprint · Aug 5, 2025
Retinal detachment (RD) is a vision-threatening condition that requires timely intervention to preserve vision. Macular involvement -- whether the macula is still intact (macula-intact) or detached (macula-detached) -- is the key determinant of visual outcomes and treatment urgency. Point-of-care ultrasound (POCUS) offers a fast, non-invasive, cost-effective, and accessible imaging modality widely used in diverse clinical settings to detect RD. However, ultrasound image interpretation is limited by a lack of expertise among healthcare providers, especially in resource-limited settings. Deep learning offers the potential to automate ultrasound-based assessment of RD. However, there are no ML ultrasound algorithms currently available for clinical use to detect RD and no prior research has been done on assessing macular status using ultrasound in RD cases -- an essential distinction for surgical prioritization. Moreover, no public dataset currently supports macular-based RD classification using ultrasound video clips. We introduce Eye Retinal DEtachment ultraSound, ERDES, the first open-access dataset of ocular ultrasound clips labeled for (i) presence of retinal detachment and (ii) macula-intact versus macula-detached status. The dataset is intended to facilitate the development and evaluation of machine learning models for detecting retinal detachment. We also provide baseline benchmarks using multiple spatiotemporal convolutional neural network (CNN) architectures. All clips, labels, and training code are publicly available at https://osupcvlab.github.io/ERDES/.
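
A baseline of the kind the benchmark reports can be set up in a few lines of torchvision: a pretrained spatiotemporal CNN (here r3d_18) with its final layer swapped for the clip-level task. The binary RD head and clip shape are assumptions for illustration; the dataset page above hosts the actual training code.

```python
# Sketch of a spatiotemporal CNN baseline for ocular ultrasound clips.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

model = r3d_18(weights=R3D_18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., RD vs no RD

clip = torch.randn(2, 3, 16, 112, 112)  # (batch, channels, frames, H, W)
logits = model(clip)
print(logits.shape)  # torch.Size([2, 2])
```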

Automated ultrasound system ARTHUR V.2.0 with AI analysis DIANA V.2.0 matches expert rheumatologist in hand joint assessment of rheumatoid arthritis patients.

Frederiksen BA, Hammer HB, Terslev L, Ammitzbøll-Danielsen M, Savarimuthu TR, Weber ABH, Just SA

pubmed logopapers · Aug 5, 2025
To evaluate the agreement and repeatability of an automated robotic ultrasound system (ARTHUR V.2.0) combined with an AI model (DIANA V.2.0) in assessing synovial hypertrophy (SH) and Doppler activity in rheumatoid arthritis (RA) patients, using an expert rheumatologist's assessment as the reference standard. Thirty RA patients underwent two consecutive ARTHUR V.2.0 scans and rheumatologist assessment of 22 hand joints, with the rheumatologist blinded to the automated system's results. Images were scored for SH and Doppler by DIANA V.2.0 using the EULAR-OMERACT scale (0-3). Agreement was evaluated by weighted Cohen's kappa, percent exact agreement (PEA), percent close agreement (PCA), and binary outcomes using Global OMERACT-EULAR Synovitis Scoring (healthy ≤1 vs diseased ≥2). Comparisons included intra-robot repeatability and agreement with the expert rheumatologist and a blinded independent assessor. ARTHUR successfully scanned 564 out of 660 joints, corresponding to an overall success rate of 85.5%. Intra-robot agreement for SH was PEA 63.0%, PCA 93.0%, and binary 90.5% (kappa 0.54); for Doppler, PEA 74.8%, PCA 93.7%, and binary 88.1% (kappa 0.49). Agreement between ARTHUR+DIANA and the rheumatologist was, for SH, PEA 57.9%, PCA 92.9%, binary 87.3%, kappa 0.38 and, for Doppler, PEA 77.3%, PCA 94.2%, binary 91.2%, kappa 0.44; with the independent assessor it was, for SH, PEA 49.0%, PCA 91.2%, binary 80.0%, kappa 0.39 and, for Doppler, PEA 62.6%, PCA 94.4%, binary 88.1%, kappa 0.48. ARTHUR V.2.0 and DIANA V.2.0 demonstrated repeatability on par with intra-expert agreement reported in the literature and showed encouraging agreement with human assessors, though further refinement is needed to optimise performance across specific joints.
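
The agreement metrics used here are straightforward to compute from paired 0-3 scores, as in the sketch below: weighted Cohen's kappa, percent exact agreement (PEA), percent close agreement (PCA, within one grade), and binary agreement after the healthy (≤1) vs diseased (≥2) split. The scores shown are invented, not study data.

```python
# Sketch of the agreement metrics on paired 0-3 EULAR-OMERACT scores.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([0, 1, 2, 3, 1, 0, 2, 2, 1, 3])  # e.g., DIANA scores
rater_b = np.array([0, 1, 1, 3, 2, 0, 2, 3, 1, 2])  # e.g., expert scores

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
pea = np.mean(rater_a == rater_b) * 100              # percent exact agreement
pca = np.mean(np.abs(rater_a - rater_b) <= 1) * 100  # percent close agreement
binary = np.mean((rater_a >= 2) == (rater_b >= 2)) * 100
print(f"kappa={kappa:.2f}, PEA={pea:.0f}%, PCA={pca:.0f}%, binary={binary:.0f}%")
```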