Page 19 of 56556 results

Hybrid Neural Networks for Precise Hydronephrosis Classification Using Deep Learning.

Salam A, Naznine M, Chowdhury MEH, Agzamkhodjaev S, Tekin A, Vallasciani S, Ramírez-Velázquez E, Abbas TO

pubmed · Aug 7, 2025
To develop and evaluate a deep learning framework for automatic kidney and fluid segmentation in renal ultrasound images, aiming to enhance diagnostic accuracy and reduce variability in hydronephrosis assessment. A dataset of 1,731 renal ultrasound images, annotated by four experienced urologists, was used for model training and evaluation. The proposed framework integrates a DenseNet201 backbone, Feature Pyramid Network (FPN), and Self-Organizing Neural Network (SelfONN) layers to enable multi-scale feature extraction and improve spatial precision. Several architectures were tested under identical conditions to ensure fair comparison. Segmentation performance was assessed using standard metrics, including Dice coefficient, precision, and recall. The framework also supported hydronephrosis classification using the fluid-to-kidney area ratio, with a threshold of 0.213 derived from prior literature. The model achieved strong segmentation performance for kidneys (Dice: 0.92, precision: 0.93, recall: 0.91) and fluid regions (Dice: 0.89, precision: 0.90, recall: 0.88), outperforming baseline methods. The classification accuracy for detecting hydronephrosis reached 94%, based on the computed fluid-to-kidney ratio. Performance was consistent across varied image qualities, reflecting the robustness of the overall architecture. This study presents an automated, objective pipeline for analyzing renal ultrasound images. The proposed framework supports high segmentation accuracy and reliable classification, facilitating standardized and reproducible hydronephrosis assessment. Future work will focus on model optimization and incorporating explainable AI to enhance clinical integration.
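The abstract's evaluation rests on two simple computations: the Dice coefficient for segmentation overlap, and a fluid-to-kidney area ratio thresholded at 0.213 for hydronephrosis classification. A minimal sketch of both (the threshold is from the abstract; representing masks as pixel-index sets is a simplification for illustration):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks, here represented as
    sets of foreground pixel indices: 2*|A∩B| / (|A|+|B|)."""
    inter = len(pred & truth)
    return 2 * inter / (len(pred) + len(truth))

def classify_hydronephrosis(fluid_area, kidney_area, threshold=0.213):
    """Flag hydronephrosis when the fluid-to-kidney area ratio exceeds
    the literature-derived threshold cited in the abstract (0.213)."""
    ratio = fluid_area / kidney_area
    return ratio > threshold, ratio
```

With segmented areas in hand, the classification step reduces to a single ratio comparison, which is what makes the pipeline objective and reproducible.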

Lower Extremity Bypass Surveillance and Peak Systolic Velocities Value Prediction Using Recurrent Neural Networks.

Luo X, Tahabi FM, Rollins DM, Sawchuk AP

pubmed · Aug 7, 2025
Routine duplex ultrasound surveillance is recommended after femoral-popliteal and femoral-tibial-pedal vein bypass grafts at various post-operative intervals. Currently, there is no systematic method for bypass graft surveillance using a set of peak systolic velocities (PSVs) collected during these exams. This research aims to explore the use of recurrent neural networks to predict the next set of PSVs, which can then indicate occlusion status. Recurrent neural network models were developed to predict occlusion and stenosis based on one to three prior sets of PSVs, with a sequence-to-sequence model utilized to forecast future PSVs within the stent graft and nearby arteries. The study employed 5-fold cross-validation for model performance comparison, revealing that the BiGRU model outperformed BiLSTM when two or more sets of PSVs were included, demonstrating that additional duplex ultrasound exams improve prediction accuracy and reduce error rates. This work establishes a basis for integrating comprehensive clinical data, including demographics, comorbidities, symptoms, and other risk factors, with PSVs to enhance lower extremity bypass graft surveillance predictions.
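Once a next set of PSVs is predicted (or measured), flagging a graft at risk is a rule-based step. A sketch under assumed, commonly cited duplex surveillance cut-offs — a mid-graft PSV below ~45 cm/s or a focal PSV ratio above ~3.5 — which are illustrative conventions, not values from this study:

```python
def graft_risk_flags(psvs_cm_s, low_flow=45.0, ratio_cut=3.5):
    """Screen a set of PSVs sampled along a bypass graft.

    `low_flow` and `ratio_cut` are assumed illustrative thresholds:
    low uniform velocities suggest impending occlusion, while a high
    ratio of the focal peak to the slowest segment suggests a focal
    stenosis.
    """
    ratio = max(psvs_cm_s) / min(psvs_cm_s)
    return {
        "low_flow": min(psvs_cm_s) < low_flow,
        "focal_stenosis": ratio > ratio_cut,
        "psv_ratio": round(ratio, 2),
    }
```

In the workflow the abstract describes, the RNN supplies the forecast PSVs and a rule of this kind turns them into an occlusion/stenosis status.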

Automated ultrasound doppler angle estimation using deep learning

Nilesh Patil, Ajay Anand

arxiv preprint · Aug 6, 2025
Angle estimation is an important step in the Doppler ultrasound clinical workflow to measure blood velocity. It is widely recognized that incorrect angle estimation is a leading cause of error in Doppler-based blood velocity measurements. In this paper, we propose a deep learning-based approach for automated Doppler angle estimation. The approach was developed using 2100 human carotid ultrasound images including image augmentation. Five pre-trained models were used to extract image features, and these features were passed to a custom shallow network for Doppler angle estimation. Independently, measurements were obtained by a human observer reviewing the images for comparison. The mean absolute error (MAE) between the automated and manual angle estimates ranged from 3.9° to 9.4° for the models evaluated. Furthermore, the MAE for the best performing model was less than the acceptable clinical Doppler angle error threshold, thus avoiding misclassification of normal velocity values as a stenosis. The results demonstrate potential for applying a deep-learning based technique for automated ultrasound Doppler angle estimation. Such a technique could potentially be implemented within the imaging software on commercial ultrasound scanners.
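The headline metric here is the mean absolute error between automated and observer angle estimates, which is a one-liner to compute — a minimal sketch with hypothetical angle values:

```python
def mean_absolute_error(auto_deg, manual_deg):
    """MAE in degrees between automated and manual angle estimates."""
    assert len(auto_deg) == len(manual_deg)
    return sum(abs(a - m) for a, m in zip(auto_deg, manual_deg)) / len(auto_deg)
```

Comparing this MAE against a clinically acceptable angle-error threshold is what determines whether the model's errors could push a normal velocity reading into the stenosis range.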

Beyond unimodal analysis: Multimodal ensemble learning for enhanced assessment of atherosclerotic disease progression.

Guarrasi V, Bertgren A, Näslund U, Wennberg P, Soda P, Grönlund C

pubmed · Aug 5, 2025
Atherosclerosis is a leading cardiovascular disease typified by fatty streaks accumulating within arterial walls, culminating in potential plaque ruptures and subsequent strokes. Existing clinical risk scores, such as systematic coronary risk estimation and Framingham risk score, profile cardiovascular risks based on factors like age, cholesterol, and smoking, among others. However, these scores display limited sensitivity in early disease detection. In parallel, ultrasound-based risk markers, such as the carotid intima media thickness, while informative, only offer limited predictive power. Notably, current models largely focus on either ultrasound image-derived risk markers or clinical risk factor data without combining both for a comprehensive, multimodal assessment. This study introduces a multimodal ensemble learning framework to assess atherosclerosis severity, especially in its early sub-clinical stage. We utilize a multi-objective optimization targeting both performance and diversity, aiming to integrate features from each modality effectively. Our objective is to measure the efficacy of models using multimodal data in assessing vascular aging, i.e., plaque presence and vascular age, over a six-year period. We also delineate a procedure for optimal model selection from a vast pool, focusing on best-suited models for classification tasks. Additionally, through eXplainable Artificial Intelligence techniques, this work delves into understanding key model contributors and discerning unique subject subgroups.
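Selecting an ensemble by jointly optimizing performance and diversity can be sketched as a scalarized search over model subsets. This toy version scores each candidate subset by a weighted sum of mean accuracy and mean pairwise disagreement; the abstract's actual multi-objective procedure is more elaborate, so treat the `alpha` weighting and exhaustive search as illustrative assumptions:

```python
from itertools import combinations

def disagreement(preds_a, preds_b):
    """Pairwise diversity: fraction of samples where two models disagree."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def select_ensemble(models, labels, alpha=0.5, size=2):
    """Pick the subset of `size` models maximizing a weighted sum of mean
    accuracy and mean pairwise disagreement. `models` maps a model name to
    its list of predictions; `alpha` trades performance against diversity."""
    def acc(p):
        return sum(pi == yi for pi, yi in zip(p, labels)) / len(labels)
    best, best_score = None, -1.0
    for subset in combinations(models, size):
        accs = [acc(models[m]) for m in subset]
        divs = [disagreement(models[a], models[b])
                for a, b in combinations(subset, 2)]
        score = alpha * sum(accs) / len(accs) + (1 - alpha) * sum(divs) / len(divs)
        if score > best_score:
            best, best_score = subset, score
    return best, round(best_score, 3)
```

The interesting property is that the winning pair is not necessarily the two most accurate models: a weaker but complementary model can raise the combined score through diversity.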

Prediction of breast cancer HER2 status changes based on ultrasound radiomics attention network.

Liu J, Xue X, Yan Y, Song Q, Cheng Y, Wang L, Wang X, Xu D

pubmed · Aug 5, 2025
Following Neoadjuvant Chemotherapy (NAC), there exists a probability of changes occurring in the Human Epidermal Growth Factor Receptor 2 (HER2) status. If these changes are not promptly addressed, it could hinder the timely adjustment of treatment plans, thereby affecting the optimal management of breast cancer. Consequently, the accurate prediction of HER2 status changes holds significant clinical value, underscoring the need for a model capable of precisely forecasting these alterations. In this paper, we elucidate the intricacies surrounding HER2 status changes, and propose a deep learning architecture combined with radiomics techniques, named Ultrasound Radiomics Attention Network (URAN), to predict HER2 status changes. Firstly, radiomics technology is used to extract ultrasound image features to provide rich and comprehensive medical information. Secondly, a HER2 Key Feature Selection (HKFS) network is constructed to retain crucial features relevant to HER2 status changes. Thirdly, we design a Max and Average Attention and Excitation (MAAE) network to adjust the model's focus on different key features. Finally, a fully connected neural network is utilized to predict HER2 status changes. The code to reproduce our experiments can be found at https://github.com/joanaapa/Foundation-Medical. Our research was carried out using genuine ultrasound images sourced from hospitals. On this dataset, URAN outperformed both state-of-the-art and traditional methods in predicting HER2 status changes, achieving an accuracy of 0.8679 and an AUC of 0.8328 (95% CI: 0.77-0.90). Comparative experiments on the public BUS_UCLM dataset further demonstrated URAN's superiority, attaining an accuracy of 0.9283 and an AUC of 0.9161 (95% CI: 0.91-0.92). Additionally, we undertook rigorously crafted ablation studies, which validated the soundness and effectiveness of the radiomics techniques, as well as the HKFS and MAAE modules integrated within the URAN model. The results pertaining to specific HER2 statuses indicate that URAN exhibits superior accuracy in predicting changes in HER2 status characterized by low expression and IHC scores of 2+ or below. Furthermore, we examined the radiomics attributes of ultrasound images and discovered that various wavelet transform features significantly impacted the changes in HER2 status. We have developed a URAN method for predicting HER2 status changes that combines radiomics techniques and deep learning. The URAN model has better predictive performance than competing algorithms and can mine key radiomics features related to HER2 status changes.
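The MAAE module's details are not given in the abstract; combining max-pooled and average-pooled channel statistics through a learned gate is a common attention pattern, so the following is only a toy, non-learned sketch of that general idea (the fixed sigmoid gate and list-of-lists feature layout are my assumptions, not URAN's architecture):

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def maae_weights(features):
    """Toy max-plus-average excitation gate over feature channels.
    `features` is a list of per-channel activation lists; each channel's
    weight is a sigmoid of (max + mean), illustrating how a module can
    re-weight key radiomics features before classification."""
    return [_sigmoid(max(chan) + sum(chan) / len(chan)) for chan in features]

def apply_attention(features, weights):
    """Scale every channel by its attention weight."""
    return [[w * x for x in chan] for chan, w in zip(features, weights)]
```

A channel with uniformly strong activations receives a gate near 1 and passes through; a flat, uninformative channel is damped toward half weight.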

Automated ultrasound system ARTHUR V.2.0 with AI analysis DIANA V.2.0 matches expert rheumatologist in hand joint assessment of rheumatoid arthritis patients.

Frederiksen BA, Hammer HB, Terslev L, Ammitzbøll-Danielsen M, Savarimuthu TR, Weber ABH, Just SA

pubmed · Aug 5, 2025
To evaluate the agreement and repeatability of an automated robotic ultrasound system (ARTHUR V.2.0) combined with an AI model (DIANA V.2.0) in assessing synovial hypertrophy (SH) and Doppler activity in rheumatoid arthritis (RA) patients, using an expert rheumatologist's assessment as the reference standard. Thirty RA patients underwent two consecutive ARTHUR V.2.0 scans and rheumatologist assessment of 22 hand joints, with the rheumatologist blinded to the automated system's results. Images were scored for SH and Doppler by DIANA V.2.0 using the EULAR-OMERACT scale (0-3). Agreement was evaluated by weighted Cohen's kappa, percent exact agreement (PEA), percent close agreement (PCA) and binary outcomes using Global OMERACT-EULAR Synovitis Scoring (healthy ≤1 vs diseased ≥2). Comparisons included intra-robot repeatability and agreement with the expert rheumatologist and a blinded independent assessor. ARTHUR successfully scanned 564 of 660 joints, an overall success rate of 85.5%. Intra-robot agreement was PEA 63.0%, PCA 93.0% and binary 90.5% for SH, and PEA 74.8%, PCA 93.7% and binary 88.1% for Doppler, with kappa values of 0.54 and 0.49, respectively. Agreement between ARTHUR+DIANA and the rheumatologist: SH (PEA 57.9%, PCA 92.9%, binary 87.3%, kappa 0.38); Doppler (PEA 77.3%, PCA 94.2%, binary 91.2%, kappa 0.44); and with the independent assessor: SH (PEA 49.0%, PCA 91.2%, binary 80.0%, kappa 0.39); Doppler (PEA 62.6%, PCA 94.4%, binary 88.1%, kappa 0.48). ARTHUR V.2.0 and DIANA V.2.0 demonstrated repeatability on par with intra-expert agreement reported in the literature and showed encouraging agreement with human assessors, though further refinement is needed to optimise performance across specific joints.
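The three agreement measures used throughout this abstract are straightforward to compute from paired 0-3 joint scores. A minimal sketch (the dichotomization at ≥2 mirrors the healthy ≤1 vs diseased ≥2 rule stated in the abstract; the example scores are hypothetical):

```python
def agreement_stats(scores_a, scores_b, binary_cut=2):
    """Agreement between two raters on EULAR-OMERACT 0-3 joint scores.
    PEA: fraction of exact matches; PCA: fraction within one grade;
    binary: agreement after dichotomizing at healthy (<=1) vs
    diseased (>=2)."""
    n = len(scores_a)
    pea = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    pca = sum(abs(a - b) <= 1 for a, b in zip(scores_a, scores_b)) / n
    binary = sum((a >= binary_cut) == (b >= binary_cut)
                 for a, b in zip(scores_a, scores_b)) / n
    return {"PEA": pea, "PCA": pca, "binary": binary}
```

Note how PCA is always at least PEA, and the binary rate can sit between them: a 1-vs-2 disagreement is "close" on the ordinal scale yet crosses the healthy/diseased boundary.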

ERDES: A Benchmark Video Dataset for Retinal Detachment and Macular Status Classification in Ocular Ultrasound

Pouyan Navard, Yasemin Ozkut, Srikar Adhikari, Elaine Situ-LaCasse, Josie Acuña, Adrienne Yarnish, Alper Yilmaz

arxiv preprint · Aug 5, 2025
Retinal detachment (RD) is a vision-threatening condition that requires timely intervention to preserve vision. Macular involvement -- whether the macula is still intact (macula-intact) or detached (macula-detached) -- is the key determinant of visual outcomes and treatment urgency. Point-of-care ultrasound (POCUS) offers a fast, non-invasive, cost-effective, and accessible imaging modality widely used in diverse clinical settings to detect RD. However, ultrasound image interpretation is limited by a lack of expertise among healthcare providers, especially in resource-limited settings. Deep learning offers the potential to automate ultrasound-based assessment of RD. However, there are no ML ultrasound algorithms currently available for clinical use to detect RD and no prior research has been done on assessing macular status using ultrasound in RD cases -- an essential distinction for surgical prioritization. Moreover, no public dataset currently supports macular-based RD classification using ultrasound video clips. We introduce Eye Retinal DEtachment ultraSound, ERDES, the first open-access dataset of ocular ultrasound clips labeled for (i) presence of retinal detachment and (ii) macula-intact versus macula-detached status. The dataset is intended to facilitate the development and evaluation of machine learning models for detecting retinal detachment. We also provide baseline benchmarks using multiple spatiotemporal convolutional neural network (CNN) architectures. All clips, labels, and training code are publicly available at https://osupcvlab.github.io/ERDES/.

NUTRITIONAL IMPACT OF LEUCINE-ENRICHED SUPPLEMENTS: EVALUATING PROTEIN TYPE THROUGH ARTIFICIAL INTELLIGENCE (AI)-AUGMENTED MUSCLE ULTRASONOGRAPHY IN HYPERCALORIC, HYPERPROTEIC SUPPORT.

López Gómez JJ, Gutiérrez JG, Jauregui OI, Cebriá Á, Asensio LE, Martín DP, Velasco PF, Pérez López P, Sahagún RJ, Bargues DR, Godoy EJ, de Luis Román DA

pubmed · Aug 5, 2025
Malnutrition adversely affects physical function and body composition in patients with chronic diseases. Leucine supplementation has shown benefits in improving body composition and clinical outcomes. This study aimed to evaluate the effects of a leucine-enriched oral nutritional supplement (ONS) on the nutritional status of patients at risk of malnutrition. This prospective observational study followed two cohorts of malnourished patients receiving personalized nutritional interventions over 3 months. One group received a leucine-enriched oral supplement (20% protein, 100% whey, 3 g leucine), while the other received a standard supplement (hypercaloric and normo-hyperproteic) with mixed protein sources. Nutritional status was assessed at baseline and after 3 months using anthropometry, bioelectrical impedance analysis, AI-assisted muscle ultrasound, and handgrip strength. A total of 142 patients were included (76 Leucine-ONS, 66 Standard-ONS), mostly women (65.5%), mean age 62.00 (18.66) years. Malnutrition was present in 90.1% and 34.5% had sarcopenia. Cancer was the most common condition (30.3%). The Leucine-ONS group showed greater improvements in phase angle (+2.08% vs. -1.57%; p=0.02) and rectus femoris thickness (+1.72% vs. -5.89%; p=0.03). Multivariate analysis confirmed associations between Leucine-ONS and improved phase angle (OR=2.41; 95%CI: 1.18-4.92; p=0.02) and reduced intramuscular fat (OR=2.24; 95%CI: 1.13-4.46; p=0.02). Leucine-enriched ONS significantly improved phase angle and muscle thickness compared to standard ONS, supporting its role in enhancing body composition in malnourished patients. These results must be interpreted in the context of the observational design of the study, the heterogeneity of comparison groups and the short duration of intervention. Further randomized controlled trials are needed to confirm these results and assess long-term clinical and functional outcomes.
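The multivariate results are reported as odds ratios with 95% confidence intervals. For a single 2x2 table, the unadjusted version of that statistic is a short calculation — a sketch using the Woolf (log) method with hypothetical counts, not the study's adjusted model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a/b = outcome present/absent in the exposed group,
    c/d = outcome present/absent in the comparison group.
    Counts here are hypothetical; the study's ORs come from a
    multivariate model, which this sketch does not reproduce."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return round(or_, 2), round(lo, 2), round(hi, 2)
```

An interval whose lower bound stays above 1 (as in the reported OR=2.41, 95% CI 1.18-4.92) is what supports the claimed association.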

A Novel Deep Learning Radiomics Nomogram Integrating B-Mode Ultrasound and Contrast-Enhanced Ultrasound for Preoperative Prediction of Lymphovascular Invasion in Invasive Breast Cancer.

Niu R, Chen Z, Li Y, Fang Y, Gao J, Li J, Li S, Huang S, Zou X, Fu N, Jin Z, Shao Y, Li M, Kang Y, Wang Z

pubmed · Aug 4, 2025
This study aimed to develop a deep learning radiomics nomogram (DLRN) that integrated B-mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) images for preoperative lymphovascular invasion (LVI) prediction in invasive breast cancer (IBC). A total of 981 patients with IBC from three hospitals were retrospectively enrolled. Of 834 patients recruited from Hospital I, 688 were designated as the training cohort and 146 as the internal test cohort, whereas 147 patients from Hospitals II and III were assigned to constitute the external test cohort. Deep learning and handcrafted radiomics features of BMUS and CEUS images were extracted from breast cancer to construct a deep learning radiomics (DLR) signature. The DLRN was developed by integrating the DLR signature and independent clinicopathological parameters. The performance of the DLRN is evaluated with respect to discrimination, calibration, and clinical benefit. The DLRN exhibited good performance in predicting LVI, with areas under the receiver operating characteristic curves (AUCs) of 0.885 (95% confidence interval [CI], 0.858-0.912), 0.914 (95% CI, 0.868-0.960) and 0.914 (95% CI, 0.867-0.960) in the training, internal test, and external test cohorts, respectively. The DLRN exhibited good stability and clinical practicability, as demonstrated by the calibration curve and decision curve analysis. In addition, the DLRN outperformed the traditional clinical model and the DLR signature for LVI prediction in the internal and external test cohorts (all p < 0.05). The DLRN exhibited good performance in predicting LVI, representing a non-invasive approach to preoperatively determining LVI status in IBC.
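The discrimination metric reported throughout — the AUC — has a simple empirical form: the probability that a randomly chosen positive case (here, LVI-present) scores higher than a randomly chosen negative one. A minimal sketch of that Mann-Whitney formulation, with hypothetical scores:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: probability a positive case outscores a negative
    one, with ties counting one half (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))
```

This pairwise definition is exactly what the ROC-curve area summarizes, which is why AUC is insensitive to the choice of a single decision threshold.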

Deep Learning-Enabled Ultrasound for Advancing Anterior Talofibular Ligament Injuries Classification: A Multicenter Model Development and Validation Study.

Shi X, Zhang H, Yuan Y, Xu Z, Meng L, Xi Z, Qiao Y, Liu S, Sun J, Cui J, Du R, Yu Q, Wang D, Shen S, Gao C, Li P, Bai L, Xu H, Wang K

pubmed · Aug 4, 2025
Ultrasound (US) is the preferred modality for assessing anterior talofibular ligament (ATFL) injuries. We aimed to advance ATFL injuries classification by developing a US-based deep learning (DL) model, and explore how artificial intelligence (AI) could help radiologists improve diagnostic performance. Consecutive healthy controls and patients with acute ATFL injuries (mild strain, partial tear, complete tear, and avulsion fracture) at 10 hospitals were retrospectively included. A US-based DL model (ATFLNet) was trained (n=2566), internally validated (n=642), and externally validated (n=717 and 493). Surgical or radiological findings based on the majority consensus of three experts served as the reference standard. Prospective validation was conducted at three additional hospitals (n=472). The performance was compared to that of 12 radiologists at different levels (external validation sets 1 and 2); an ATFLNet-aided strategy was developed, comparing with the radiologists when reviewing B-mode images (external validation set 2); the strategy was then tested in a simulated scenario (reviewing images alongside dynamic clips; prospective validation set). Statistical comparisons were performed using McNemar's test, while inter-reader agreement was evaluated with the multireader Fleiss κ statistic. ATFLNet obtained a macro-average area under the curve ≥0.970 across all five classes in each dataset, indicating robust overall performance. Additionally, it consistently outperformed senior radiologists in external validation sets (all p<.05). The ATFLNet-aided strategy improved radiologists' average accuracy (0.707 vs. 0.811, p<.001) for image review. In the simulated scenario, it led to enhanced accuracy (0.794 to 0.864, p=.003), and a reduction in diagnostic variability, particularly for junior radiologists. Our US-based model outperformed human experts for ATFL injury evaluation. AI-aided strategies hold the potential to enhance diagnostic performance in real-world clinical scenarios.
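The accuracy comparisons here rely on McNemar's test, which uses only the discordant pairs (cases one reader got right and the other got wrong). A sketch of the continuity-corrected chi-square version with 1 degree of freedom (the example counts are hypothetical; large-sample approximation assumed):

```python
import math

def mcnemar(b, c):
    """McNemar's test on paired classifications: `b` = cases only
    reader A got right, `c` = cases only reader B got right
    (concordant cases drop out). Continuity-corrected chi-square
    with 1 df; returns (statistic, p_value)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi2(1) at `stat`, via the error function:
    # P(X > x) = erfc(sqrt(x / 2)) for X ~ chi-square with 1 df.
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p
```

Because only discordant pairs enter the statistic, the test directly asks whether the aided strategy flips more errors to correct than it flips correct answers to errors.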