
ERDES: A Benchmark Video Dataset for Retinal Detachment and Macular Status Classification in Ocular Ultrasound

Pouyan Navard, Yasemin Ozkut, Srikar Adhikari, Elaine Situ-LaCasse, Josie Acuña, Adrienne Yarnish, Alper Yilmaz

arXiv preprint · Aug 5, 2025
Retinal detachment (RD) is a vision-threatening condition that requires timely intervention to preserve vision. Macular involvement -- whether the macula is still intact (macula-intact) or detached (macula-detached) -- is the key determinant of visual outcomes and treatment urgency. Point-of-care ultrasound (POCUS) offers a fast, non-invasive, cost-effective, and accessible imaging modality widely used in diverse clinical settings to detect RD. However, ultrasound interpretation is limited by a lack of expertise among healthcare providers, especially in resource-limited settings. Deep learning offers the potential to automate ultrasound-based assessment of RD, yet no machine learning ultrasound algorithm is currently available for clinical use to detect RD, and no prior research has assessed macular status using ultrasound in RD cases -- an essential distinction for surgical prioritization. Moreover, no public dataset currently supports macula-based RD classification using ultrasound video clips. We introduce Eye Retinal DEtachment ultraSound (ERDES), the first open-access dataset of ocular ultrasound clips labeled for (i) presence of retinal detachment and (ii) macula-intact versus macula-detached status. The dataset is intended to facilitate the development and evaluation of machine learning models for detecting retinal detachment. We also provide baseline benchmarks using multiple spatiotemporal convolutional neural network (CNN) architectures. All clips, labels, and training code are publicly available at https://osupcvlab.github.io/ERDES/.
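
To make the benchmark concrete, here is a minimal baseline sketch in PyTorch: a spatiotemporal 3D CNN (torchvision's r3d_18) classifying a fixed-length ultrasound clip as RD-present or RD-absent. The clip length, preprocessing, and classification head are illustrative assumptions, not the ERDES training code (which is available at the project page).

    # Hedged sketch of a spatiotemporal CNN baseline for clip-level RD
    # classification; dataset loading and preprocessing are assumed.
    import torch
    import torch.nn as nn
    from torchvision.models.video import r3d_18

    class ClipClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.backbone = r3d_18(weights=None)  # 3D ResNet-18 over (C, T, H, W)
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

        def forward(self, clips: torch.Tensor) -> torch.Tensor:
            # clips: (batch, 3, frames, height, width); grayscale US frames
            # would be repeated to 3 channels before this call
            return self.backbone(clips)

    model = ClipClassifier(num_classes=2)        # RD present vs. absent
    dummy_clip = torch.randn(1, 3, 16, 112, 112)  # one 16-frame clip
    print(model(dummy_clip).shape)               # torch.Size([1, 2])

The same head could be widened to three outputs for a no-RD / macula-intact / macula-detached target.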

Prediction of breast cancer HER2 status changes based on ultrasound radiomics attention network.

Liu J, Xue X, Yan Y, Song Q, Cheng Y, Wang L, Wang X, Xu D

PubMed · Aug 5, 2025
Following neoadjuvant chemotherapy (NAC), there is a probability that Human Epidermal Growth Factor Receptor 2 (HER2) status will change. If these changes are not promptly detected, the timely adjustment of treatment plans can be hindered, compromising the optimal management of breast cancer. Consequently, accurate prediction of HER2 status changes holds significant clinical value, underscoring the need for a model capable of precisely forecasting these alterations. In this paper, we elucidate the intricacies surrounding HER2 status changes and propose a deep learning architecture combined with radiomics techniques, named the Ultrasound Radiomics Attention Network (URAN), to predict them. First, radiomics is used to extract ultrasound image features that provide rich and comprehensive medical information. Second, a HER2 Key Feature Selection (HKFS) network is constructed to retain crucial features relevant to HER2 status change. Third, we design a Max and Average Attention and Excitation (MAAE) network to adjust the model's focus on different key features. Finally, a fully connected neural network predicts HER2 status changes. The code to reproduce our experiments can be found at https://github.com/joanaapa/Foundation-Medical. Our research was carried out on genuine ultrasound images sourced from hospitals. On this dataset, URAN outperformed both state-of-the-art and traditional methods in predicting HER2 status changes, achieving an accuracy of 0.8679 and an AUC of 0.8328 (95% CI: 0.77-0.90). Comparative experiments on the public BUS_UCLM dataset further demonstrated URAN's superiority, attaining an accuracy of 0.9283 and an AUC of 0.9161 (95% CI: 0.91-0.92). Additionally, rigorously designed ablation studies validated the effectiveness of the radiomics techniques as well as the HKFS and MAAE modules integrated within the URAN model. Results for specific HER2 statuses indicate that URAN is most accurate at predicting changes in HER2 status characterized by low expression and IHC scores of 2+ or below. Furthermore, examination of the radiomics attributes of the ultrasound images revealed that several wavelet transform features significantly influenced HER2 status changes. In summary, we have developed URAN, a method for predicting HER2 status changes that combines radiomics techniques and deep learning; it offers better predictive performance than competing algorithms and can mine key radiomics features related to HER2 status changes.
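
The abstract does not spell out the MAAE internals; the sketch below shows one plausible reading, a CBAM-style excitation block that re-weights feature channels using both max- and average-pooled statistics. The module name, dimensions, and placement are assumptions for illustration, not the paper's specification.

    # Hedged sketch of a "max + average" excitation block in the spirit of MAAE,
    # applied to an intermediate CNN feature map.
    import torch
    import torch.nn as nn

    class MaxAvgExcitation(nn.Module):
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            avg_w = self.mlp(x.mean(dim=(2, 3)))   # average-pooled channel descriptor
            max_w = self.mlp(x.amax(dim=(2, 3)))   # max-pooled channel descriptor
            weights = torch.sigmoid(avg_w + max_w).view(b, c, 1, 1)
            return x * weights                     # re-weight feature channels

    feats = torch.randn(2, 64, 28, 28)
    print(MaxAvgExcitation(64)(feats).shape)       # torch.Size([2, 64, 28, 28])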

Beyond unimodal analysis: Multimodal ensemble learning for enhanced assessment of atherosclerotic disease progression.

Guarrasi V, Bertgren A, Näslund U, Wennberg P, Soda P, Grönlund C

PubMed · Aug 5, 2025
Atherosclerosis is a leading cardiovascular disease typified by fatty streaks accumulating within arterial walls, culminating in potential plaque rupture and subsequent stroke. Existing clinical risk scores, such as the Systematic Coronary Risk Evaluation and the Framingham risk score, profile cardiovascular risk based on factors such as age, cholesterol, and smoking. However, these scores display limited sensitivity in early disease detection. In parallel, ultrasound-based risk markers such as carotid intima-media thickness, while informative, offer only limited predictive power. Notably, current models largely focus on either ultrasound image-derived risk markers or clinical risk factor data, without combining both for a comprehensive, multimodal assessment. This study introduces a multimodal ensemble learning framework to assess atherosclerosis severity, especially in its early, sub-clinical stage. We use multi-objective optimization targeting both performance and diversity to integrate features from each modality effectively. Our objective is to measure the efficacy of models using multimodal data in assessing vascular aging, i.e., plaque presence and vascular age, over a six-year period. We also delineate a procedure for selecting, from a vast pool, the models best suited to the classification tasks. Additionally, through eXplainable Artificial Intelligence techniques, this work delves into understanding key model contributors and discerning unique subject subgroups.
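
As a rough illustration of the performance-plus-diversity idea, the sketch below scores candidate model pairs by blending mean validation accuracy with pairwise disagreement. The scoring rule, weights, and data are assumptions for illustration, not the study's optimizer.

    # Hedged sketch: picking ensemble members by a joint
    # performance/diversity score on synthetic validation data.
    import numpy as np

    def disagreement(pa: np.ndarray, pb: np.ndarray) -> float:
        """Fraction of cases where two models disagree (a diversity proxy)."""
        return float(np.mean(pa != pb))

    def score_pair(acc_a, acc_b, pa, pb, alpha=0.5):
        """Weighted blend of mean accuracy and pairwise diversity."""
        return (1 - alpha) * (acc_a + acc_b) / 2 + alpha * disagreement(pa, pb)

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)
    # Simulated validation predictions from three candidate unimodal models
    preds = {m: np.where(rng.random(200) < 0.8, y, 1 - y)
             for m in ("us", "clinical", "fusion")}
    accs = {m: np.mean(p == y) for m, p in preds.items()}

    best = max(((a, b) for a in preds for b in preds if a < b),
               key=lambda ab: score_pair(accs[ab[0]], accs[ab[1]],
                                         preds[ab[0]], preds[ab[1]]))
    print("selected pair:", best)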

Deep Learning-Enabled Ultrasound for Advancing Anterior Talofibular Ligament Injuries Classification: A Multicenter Model Development and Validation Study.

Shi X, Zhang H, Yuan Y, Xu Z, Meng L, Xi Z, Qiao Y, Liu S, Sun J, Cui J, Du R, Yu Q, Wang D, Shen S, Gao C, Li P, Bai L, Xu H, Wang K

PubMed · Aug 4, 2025
Ultrasound (US) is the preferred modality for assessing anterior talofibular ligament (ATFL) injuries. We aimed to advance ATFL injury classification by developing a US-based deep learning (DL) model and to explore how artificial intelligence (AI) could help radiologists improve diagnostic performance. Consecutive healthy controls and patients with acute ATFL injuries (mild strain, partial tear, complete tear, and avulsion fracture) at 10 hospitals were retrospectively included. A US-based DL model (ATFLNet) was trained (n=2566), internally validated (n=642), and externally validated (n=717 and n=493). Surgical or radiological findings based on the majority consensus of three experts served as the reference standard. Prospective validation was conducted at three additional hospitals (n=472). ATFLNet's performance was compared to that of 12 radiologists of different seniority levels (external validation sets 1 and 2); an ATFLNet-aided strategy was developed and compared with the radiologists reviewing B-mode images (external validation set 2); the strategy was then tested in a simulated scenario in which readers reviewed images alongside dynamic clips (prospective validation set). Statistical comparisons were performed using McNemar's test, while inter-reader agreement was evaluated with the multireader Fleiss κ statistic. ATFLNet obtained a macro-average area under the curve ≥0.970 across all five classes in every dataset, indicating robust overall performance. It also consistently outperformed senior radiologists in the external validation sets (all p<.05). The ATFLNet-aided strategy improved radiologists' average accuracy for image review (0.707 vs. 0.811, p<.001). In the simulated scenario, it led to higher accuracy (0.794 to 0.864, p=.003) and reduced diagnostic variability, particularly for junior radiologists. Our US-based model outperformed human experts in ATFL injury evaluation; AI-aided strategies hold the potential to enhance diagnostic performance in real-world clinical scenarios.
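
For reference, a macro-average AUC over five classes like the one reported for ATFLNet can be computed as a one-vs-rest average, e.g. with scikit-learn as in the sketch below; the labels and probabilities here are synthetic.

    # Hedged sketch: macro-average one-vs-rest AUC for a 5-class classifier.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    classes = ["normal", "mild strain", "partial tear",
               "complete tear", "avulsion fracture"]
    rng = np.random.default_rng(42)
    y_true = rng.integers(0, len(classes), 500)              # ground-truth class ids
    y_prob = rng.dirichlet(np.ones(len(classes)), size=500)  # per-class probabilities

    macro_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
    print(f"macro-average AUC: {macro_auc:.3f}")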

A Novel Deep Learning Radiomics Nomogram Integrating B-Mode Ultrasound and Contrast-Enhanced Ultrasound for Preoperative Prediction of Lymphovascular Invasion in Invasive Breast Cancer.

Niu R, Chen Z, Li Y, Fang Y, Gao J, Li J, Li S, Huang S, Zou X, Fu N, Jin Z, Shao Y, Li M, Kang Y, Wang Z

PubMed · Aug 4, 2025
This study aimed to develop a deep learning radiomics nomogram (DLRN) integrating B-mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) images for preoperative prediction of lymphovascular invasion (LVI) in invasive breast cancer (IBC). A total of 981 patients with IBC from three hospitals were retrospectively enrolled. Of the 834 patients recruited from Hospital I, 688 were designated the training cohort and 146 the internal test cohort, whereas the 147 patients from Hospitals II and III constituted the external test cohort. Deep learning and handcrafted radiomics features of BMUS and CEUS images were extracted from the breast tumors to construct a deep learning radiomics (DLR) signature. The DLRN was developed by integrating the DLR signature with independent clinicopathological parameters. The performance of the DLRN was evaluated with respect to discrimination, calibration, and clinical benefit. The DLRN exhibited good performance in predicting LVI, with areas under the receiver operating characteristic curve (AUCs) of 0.885 (95% confidence interval [CI], 0.858-0.912), 0.914 (95% CI, 0.868-0.960), and 0.914 (95% CI, 0.867-0.960) in the training, internal test, and external test cohorts, respectively. The DLRN showed good stability and clinical practicability, as demonstrated by calibration curve and decision curve analysis. In addition, the DLRN outperformed the traditional clinical model and the DLR signature alone for LVI prediction in the internal and external test cohorts (all p < 0.05). The DLRN thus represents a non-invasive approach to preoperatively determining LVI status in IBC.
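
A nomogram of this kind typically reduces to a logistic model over the imaging signature and clinical covariates; the sketch below illustrates that structure on simulated data, with an assumed covariate (tumor size) standing in for the study's actual clinicopathological parameters.

    # Hedged sketch: a logistic "nomogram" combining a fused imaging
    # signature with one clinical covariate; data are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 688                                      # training-cohort size from the abstract
    dlr_signature = rng.normal(size=n)           # fused BMUS + CEUS radiomics score
    tumor_size_cm = rng.uniform(0.5, 5.0, n)     # hypothetical clinical input
    lvi = (dlr_signature + 0.3 * tumor_size_cm + rng.normal(0, 1, n) > 1.0).astype(int)

    X = np.column_stack([dlr_signature, tumor_size_cm])
    nomogram = LogisticRegression().fit(X, lvi)
    risk = nomogram.predict_proba(X)[:, 1]       # per-patient LVI probability
    print(f"predicted LVI prevalence: {risk.mean():.2f}")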

AI enhanced diagnostic accuracy and workload reduction in hepatocellular carcinoma screening.

Lu RF, She CY, He DN, Cheng MQ, Wang Y, Huang H, Lin YD, Lv JY, Qin S, Liu ZZ, Lu ZR, Ke WP, Li CQ, Xiao H, Xu ZF, Liu GJ, Yang H, Ren J, Wang HB, Lu MD, Huang QH, Chen LD, Wang W, Kuang M

PubMed · Aug 2, 2025
Hepatocellular carcinoma (HCC) ultrasound screening faces challenges in both accuracy and radiologist workload. This retrospective, multicenter study assessed four artificial intelligence (AI)-enhanced strategies using 21,934 liver ultrasound images from 11,960 patients, aiming to improve HCC screening accuracy and reduce radiologist workload. UniMatch was used for lesion detection and LivNet for classification, both trained on 17,913 images. Among the strategies tested, Strategy 4, which combined AI for initial detection with radiologist review of AI-negative cases in both the detection and classification phases, outperformed the others. It largely preserved the high sensitivity of the original algorithm (0.956 vs. 0.991), improved specificity (0.787 vs. 0.698), reduced radiologist workload by 54.5%, and decreased both recall and false-positive rates. This approach demonstrates a successful model of human-AI collaboration, enhancing clinical outcomes while mitigating unnecessary patient anxiety and system burden by minimizing recalls and false positives.
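
The workload arithmetic behind such a triage strategy is simple to simulate. In the sketch below, the AI reads every study and radiologists re-review only AI-negative cases; prevalence, operating points, and human sensitivity on review are all assumed numbers, not the study's figures.

    # Hedged sketch of an "AI first, humans review the negatives" triage loop.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 10_000
    truth = rng.random(n) < 0.05                           # assumed 5% HCC prevalence
    ai_pos = np.where(truth, rng.random(n) < 0.95,         # assumed AI sensitivity
                             rng.random(n) < 0.10)         # assumed AI false-positive rate

    reviewed = ~ai_pos                                     # radiologists re-read AI-negatives
    rad_catch = reviewed & truth & (rng.random(n) < 0.80)  # assumed human sensitivity on review
    final_pos = ai_pos | rad_catch

    print(f"human reading load: {reviewed.mean():.1%} of studies")
    print(f"combined sensitivity: {final_pos[truth].mean():.3f}")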

Artificial Intelligence in Abdominal, Gynecological, Obstetric, Musculoskeletal, Vascular and Interventional Ultrasound.

Graumann O, Cui Xin W, Goudie A, Blaivas M, Braden B, Campbell Westerway S, Chammas MC, Dong Y, Gilja OH, Hsieh PC, Jiang Tian A, Liang P, Möller K, Nolsøe CP, Săftoiu A, Dietrich CF

PubMed · Aug 2, 2025
Artificial Intelligence (AI) refers to the theory and systematic development of computational models designed to execute tasks that traditionally require human cognition. In medical imaging, AI is applied across modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, and across pathologies in multiple organ systems. However, integrating AI into medical ultrasound presents unique challenges compared with modalities like CT and MRI, owing to its operator-dependent nature and the inherent variability of the image acquisition process. Applied to ultrasound, AI holds the potential to mitigate these variabilities, recalibrate interpretative consistency, and uncover diagnostic patterns that may be difficult for humans to detect. Progress has led to significant innovation in ultrasound-based AI applications, facilitating their adoption in various clinical settings and for multiple diseases. This manuscript primarily aims to provide a concise yet comprehensive exploration of current and emerging AI applications in abdominal, gynecological and obstetric, musculoskeletal, vascular, and interventional medical ultrasound. A secondary aim is to discuss the present limitations and potential challenges such technological implementations may encounter.

Deep learning-based super-resolution US radiomics to differentiate testicular seminoma and non-seminoma: an international multicenter study.

Zhang Y, Lu S, Peng C, Zhou S, Campo I, Bertolotto M, Li Q, Wang Z, Xu D, Wang Y, Xu J, Wu Q, Hu X, Zheng W, Zhou J

PubMed · Aug 1, 2025
Subvariants of testicular germ cell tumor (TGCT) significantly affect therapeutic strategy and patient prognosis, yet preoperatively distinguishing seminoma (SE) from non-seminoma (n-SE) remains challenging. This study evaluated the performance of a deep learning-based super-resolution (SR) US radiomics model for SE/n-SE differentiation. This international multicenter retrospective study recruited patients with confirmed TGCT between 2015 and 2023. A pre-trained SR reconstruction algorithm was applied to enhance native-resolution (NR) images. NR and SR radiomics models were constructed, and the superior model was then integrated with clinical features to construct a clinical-radiomics model. Diagnostic performance was evaluated by ROC analysis (AUC) and compared with radiologists' assessments using the DeLong test. A total of 486 male patients were enrolled across the training (n = 338), domestic validation (n = 92), and international validation (n = 59) sets. The SR radiomics model achieved AUCs of 0.90, 0.82, and 0.91 in the training, domestic, and international validation sets, respectively, significantly surpassing the NR model (p < 0.001, p = 0.031, and p = 0.001, respectively). The clinical-radiomics model exhibited a significantly higher AUC than the SR radiomics model alone in both the domestic and international validation sets (0.95 vs 0.82, p = 0.004; 0.97 vs 0.91, p = 0.031). Moreover, the clinical-radiomics model surpassed experienced radiologists in both the domestic (AUC, 0.95 vs 0.85, p = 0.012) and international (AUC, 0.97 vs 0.77, p < 0.001) validation cohorts. In summary, clinical parameters and radiologists' assessments offer limited diagnostic accuracy for SE/n-SE differentiation in TGCT; radiomics models built on SR-reconstructed scrotal US images outperform their NR counterparts; and the SR-based clinical-radiomics model outperforms both the radiomics model alone and radiologists' assessment, enabling accurate, non-invasive preoperative differentiation between SE and n-SE.
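
DeLong's test compares correlated AUCs analytically; as a simpler stand-in, the sketch below contrasts an NR-style and an SR-style score with a paired patient-level bootstrap. The scores, sample size, and effect size are synthetic assumptions, not the study's data.

    # Hedged sketch: paired bootstrap comparison of two models' AUCs
    # (a stand-in for the DeLong test used in the study).
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    y = rng.integers(0, 2, 300)                   # SE vs. n-SE labels (synthetic)
    score_nr = y * 0.8 + rng.normal(0, 0.6, 300)  # native-resolution model scores
    score_sr = y * 1.2 + rng.normal(0, 0.6, 300)  # super-resolution model scores

    diffs = []
    for _ in range(2000):
        idx = rng.integers(0, len(y), len(y))     # resample patients with replacement
        if len(np.unique(y[idx])) < 2:
            continue
        diffs.append(roc_auc_score(y[idx], score_sr[idx])
                     - roc_auc_score(y[idx], score_nr[idx]))
    diffs = np.array(diffs)
    p_two_sided = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    print(f"AUC gain: {diffs.mean():.3f}, bootstrap p ≈ {p_two_sided:.4f}")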

Explainable multimodal deep learning for predicting thyroid cancer lateral lymph node metastasis using ultrasound imaging.

Shen P, Yang Z, Sun J, Wang Y, Qiu C, Wang Y, Ren Y, Liu S, Cai W, Lu H, Yao S

PubMed · Aug 1, 2025
Preoperative prediction of lateral lymph node metastasis is clinically crucial for guiding surgical strategy and prognosis assessment, yet precise prediction methods are lacking. We therefore develop the Lateral Lymph Node Metastasis Network (LLNM-Net), a bidirectional-attention deep-learning model that fuses multimodal data (preoperative ultrasound images, radiology reports, pathological findings, and demographics) from 29,615 patients and 9,836 surgical cases across seven centers. Integrating nodule morphology and position with clinical text, LLNM-Net achieves an area under the curve (AUC) of 0.944 and 84.7% accuracy in multicenter testing, outperforming human experts (64.3% accuracy) and surpassing previous models by 7.4%. Here we show that tumors within 0.25 cm of the thyroid capsule carry >72% metastasis risk, with the middle and upper lobes as high-risk regions. Leveraging location, shape, echogenicity, margins, demographics, and clinician inputs, LLNM-Net further attains an AUC of 0.983 for identifying high-risk patients. The model is thus a promising tool for preoperative screening and risk stratification.
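
The "bidirectional attention" fusion named in the abstract can be pictured as two cross-attention passes, image-to-text and text-to-image, whose pooled outputs feed a risk head. The sketch below is one such reading with assumed dimensions and pooling, not the LLNM-Net specification.

    # Hedged sketch of bidirectional cross-attention fusion of image
    # and report embeddings for a metastasis-risk logit.
    import torch
    import torch.nn as nn

    class BidirectionalFusion(nn.Module):
        def __init__(self, dim: int = 256, heads: int = 4):
            super().__init__()
            self.img_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.txt_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.head = nn.Linear(2 * dim, 1)  # metastasis-risk logit

        def forward(self, img_tokens, txt_tokens):
            # Each modality attends to the other; attended tokens are mean-pooled.
            img_ctx, _ = self.img_to_txt(img_tokens, txt_tokens, txt_tokens)
            txt_ctx, _ = self.txt_to_img(txt_tokens, img_tokens, img_tokens)
            fused = torch.cat([img_ctx.mean(dim=1), txt_ctx.mean(dim=1)], dim=-1)
            return self.head(fused)

    model = BidirectionalFusion()
    img = torch.randn(2, 49, 256)   # e.g. 7x7 ultrasound feature-map tokens
    txt = torch.randn(2, 32, 256)   # e.g. encoded radiology-report tokens
    print(model(img, txt).shape)    # torch.Size([2, 1])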

High-grade glioma: combined use of 5-aminolevulinic acid and intraoperative ultrasound for resection and a predictor algorithm for detection.

Aibar-Durán JÁ, Mirapeix RM, Gallardo Alcañiz A, Salgado-López L, Freixer-Palau B, Casitas Hernando V, Hernández FM, de Quintana-Schmidt C

PubMed · Aug 1, 2025
The primary goal in neuro-oncology is the maximally safe resection of high-grade glioma (HGG). More extensive resection improves both overall and disease-free survival, while complication-free surgery enables better tolerance of adjuvant therapies such as chemotherapy and radiotherapy. Techniques such as 5-aminolevulinic acid (5-ALA) fluorescence and intraoperative ultrasound (ioUS) are valuable, cost-effective aids to safe resection, but the benefit of combining them remains undocumented. The aim of this study was to investigate outcomes when combining 5-ALA and ioUS. From January 2019 to January 2024, 72 patients (mean age 62.2 years, 62.5% male) underwent HGG resection at a single hospital. Tumor histology included glioblastoma (90.3%), grade IV astrocytoma (4.1%), grade III astrocytoma (2.8%), and grade III oligodendroglioma (2.8%). Tumor resection was performed under natural light, after which 5-ALA and ioUS were used to detect residual tumor. Biopsies from the surgical bed were analyzed for tumor presence and categorized according to the 5-ALA and ioUS results. Results of 5-ALA and ioUS were classified as positive, weak/doubtful, or negative, and histological findings of the biopsies as solid tumor, infiltration, or no tumor. Sensitivity, specificity, and predictive values for both techniques, separately and combined, were calculated, and a machine learning algorithm (HGGPredictor) was developed to predict tumor presence in biopsies. The overall sensitivities of 5-ALA and ioUS were 84.9% and 76%, with specificities of 57.8% and 84.5%, respectively. Combining both methods in the positive/positive scenario yielded the highest performance, with a sensitivity of 91% and specificity of 86%; the positive/doubtful combination followed, with a sensitivity of 67.9% and specificity of 95.2%. Area-under-the-curve analysis indicated superior performance when both techniques were combined compared with each method used individually. Additionally, the HGGPredictor tool effectively estimated the quantity of tumor cells in surgical margins. Combining 5-ALA and ioUS enhanced diagnostic accuracy for HGG resection, suggesting a new surgical standard, and an intraoperative predictive algorithm could further automate decision-making.
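
The combined-test logic is straightforward to reproduce: each modality yields a categorical reading, a decision rule maps the pair to a single call, and sensitivity and specificity follow from the biopsy labels. The sketch below mirrors the positive/positive rule on synthetic data; the score model and thresholds are assumptions.

    # Hedged sketch: scoring a two-modality decision rule against biopsy labels.
    import numpy as np

    rng = np.random.default_rng(5)
    levels = np.array(["negative", "doubtful", "positive"])
    tumor = rng.random(400) < 0.6   # biopsy: tumor or infiltration present

    # Noisy per-modality scores thresholded into the three reading categories
    ala = levels[np.digitize(tumor * 1.2 + rng.normal(0, 0.8, 400), [0.4, 0.9])]
    ious = levels[np.digitize(tumor * 1.2 + rng.normal(0, 0.8, 400), [0.4, 0.9])]

    # Decision rule mirroring the best-performing combination in the abstract:
    # call the margin positive only when both modalities read "positive".
    call_pos = (ala == "positive") & (ious == "positive")
    sens = (call_pos & tumor).sum() / tumor.sum()
    spec = (~call_pos & ~tumor).sum() / (~tumor).sum()
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")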