Page 5 of 40400 results

Explainable multimodal deep learning for predicting thyroid cancer lateral lymph node metastasis using ultrasound imaging.

Shen P, Yang Z, Sun J, Wang Y, Qiu C, Wang Y, Ren Y, Liu S, Cai W, Lu H, Yao S

pubmed · Aug 1 2025
Preoperative prediction of lateral lymph node metastasis is clinically crucial for guiding surgical strategy and prognosis assessment, yet precise prediction methods are lacking. We therefore develop the Lateral Lymph Node Metastasis Network (LLNM-Net), a bidirectional-attention deep-learning model that fuses multimodal data (preoperative ultrasound images, radiology reports, pathological findings, and demographics) from 29,615 patients and 9,836 surgical cases across seven centers. Integrating nodule morphology and position with clinical text, LLNM-Net achieves an area under the curve (AUC) of 0.944 and 84.7% accuracy in multicenter testing, outperforming human experts (64.3% accuracy) and surpassing previous models by 7.4%. Here we show that tumors within 0.25 cm of the thyroid capsule carry a >72% metastasis risk, with the middle and upper lobes as high-risk regions. Leveraging location, shape, echogenicity, margins, demographics, and clinician inputs, LLNM-Net further attains an AUC of 0.983 for identifying high-risk patients. The model is thus a promising tool for preoperative screening and risk stratification.

Contrast-Enhanced Ultrasound-Based Intratumoral and Peritumoral Radiomics for Discriminating Carcinoma In Situ and Invasive Carcinoma of the Breast.

Zheng Y, Song Y, Wu T, Chen J, Du Y, Liu H, Wu R, Kuang Y, Diao X

pubmed · Aug 1 2025
This study aimed to evaluate the efficacy of a diagnostic model integrating intratumoral and peritumoral radiomic features based on contrast-enhanced ultrasound (CEUS) for differentiating carcinoma in situ (CIS) from invasive breast carcinoma (IBC). Consecutive cases confirmed by postoperative histopathological analysis were retrospectively gathered, comprising 143 cases of CIS from January 2018 to May 2024 and 186 cases of IBC from May 2022 to May 2024, totaling 322 patients with 329 lesions and complete preoperative CEUS imaging. Intratumoral regions of interest (ROIs) were defined on CEUS peak-phase images with reference to gray-scale mode, while peritumoral ROIs were defined by expanding 2 mm, 5 mm, and 8 mm beyond the tumor margin for radiomic feature extraction. Statistical and machine learning techniques were employed for feature selection. A logistic regression classifier was used to construct radiomic models integrating intratumoral, peritumoral, and clinical features. Model performance was assessed using the area under the curve (AUC). The model incorporating 5 mm peritumoral features with intratumoral and clinical data exhibited superior diagnostic performance, achieving AUCs of 0.927 and 0.911 in the training and test sets, respectively. It outperformed models based only on clinical features or other radiomic configurations, with the 5 mm peritumoral region proving most effective for lesion discrimination. This study highlights the significant potential of combined intratumoral and peritumoral CEUS radiomics for classifying CIS and IBC, with the integration of 5 mm peritumoral features notably enhancing diagnostic accuracy.
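The study's key design choice is expanding the tumor ROI by a fixed physical margin (2, 5, or 8 mm) to capture peritumoral tissue. As an illustration only (the paper's actual segmentation and feature-extraction pipeline is not described at code level), a peritumoral ring can be derived from a binary tumor mask by repeated one-pixel dilation; the function name and the spacing handling below are hypothetical:

```python
import numpy as np

def peritumoral_ring(mask: np.ndarray, margin_mm: float, pixel_mm: float) -> np.ndarray:
    """Return the ring of pixels within `margin_mm` outside the tumor mask.

    `mask` is a 2-D boolean array (True inside the tumor). The ring is
    built by repeated 4-neighbour dilation, one pixel per step. np.roll
    wraps at the image border, so real use should pad the mask first.
    """
    steps = int(round(margin_mm / pixel_mm))
    dilated = mask.copy()
    for _ in range(steps):
        shifted = (
            np.roll(dilated, 1, axis=0) | np.roll(dilated, -1, axis=0) |
            np.roll(dilated, 1, axis=1) | np.roll(dilated, -1, axis=1)
        )
        dilated = dilated | shifted
    return dilated & ~mask
```

Radiomic features would then be extracted separately from the intratumoral mask and from each ring, and the resulting feature sets fused.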

Do We Need Pre-Processing for Deep Learning Based Ultrasound Shear Wave Elastography?

Sarah Grube, Sören Grünhagen, Sarah Latus, Michael Meyling, Alexander Schlaefer

arxiv preprint · Aug 1 2025
Estimating the elasticity of soft tissue can provide useful information for various diagnostic applications. Ultrasound shear wave elastography offers a non-invasive approach. However, its generalizability and standardization across different systems and processing pipelines remain limited. Considering the influence of image processing on ultrasound-based diagnostics, recent literature has discussed the impact of different image processing steps on reliable and reproducible elasticity analysis. In this work, we investigate the need for ultrasound pre-processing steps in deep learning-based ultrasound shear wave elastography. We evaluate the performance of a 3D convolutional neural network in predicting shear wave velocities from spatio-temporal ultrasound images, studying different degrees of pre-processing on the input images, ranging from fully beamformed and filtered ultrasound images to raw radiofrequency data. We compare the predictions from our deep learning approach to a conventional time-of-flight method across four gelatin phantoms with different elasticity levels. Our results demonstrate statistically significant differences in the predicted shear wave velocity among all elasticity groups, regardless of the degree of pre-processing. Although pre-processing slightly improves performance metrics, our results show that the deep learning approach can reliably differentiate between elasticity groups using raw, unprocessed radiofrequency data. These results show that deep learning-based approaches could reduce the need for and the bias of traditional ultrasound pre-processing steps in ultrasound shear wave elastography, enabling faster and more reliable clinical elasticity assessments.
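The pre-processing degrees compared here span raw RF data to fully beamformed, filtered B-mode images. A minimal sketch of the conventional envelope-detection and log-compression steps that turn one RF scan line into a B-mode line, using a plain-numpy Hilbert transform (this is the textbook pipeline, not the authors' exact code, and the dynamic-range value is a placeholder):

```python
import numpy as np

def envelope_logcompress(rf: np.ndarray, dynamic_range_db: float = 60.0) -> np.ndarray:
    """Convert one RF scan line to a log-compressed B-mode line.

    Envelope detection uses the analytic signal (FFT-based Hilbert
    transform); log compression then maps the envelope into
    `dynamic_range_db` decibels, clipped to [0, 1].
    """
    n = rf.size
    spectrum = np.fft.fft(rf)
    h = np.zeros(n)                 # analytic-signal filter
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    envelope = np.abs(np.fft.ifft(spectrum * h))
    env_db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    return np.clip(env_db / dynamic_range_db + 1.0, 0.0, 1.0)
```

Feeding raw `rf` directly to the network, as the paper tests, simply skips these steps.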

Automated Assessment of Choroidal Mass Dimensions Using Static and Dynamic Ultrasonographic Imaging

Emmert, N., Wall, G., Nabavi, A., Rahdar, A., Wilson, M., King, B., Cernichiaro-Espinosa, L., Yousefi, S.

medrxiv preprint · Aug 1 2025
Purpose: To develop and validate an artificial intelligence (AI)-based model that automatically measures choroidal mass dimensions on B-scan ophthalmic ultrasound still images and cine loops. Design: Retrospective diagnostic accuracy study with internal and external validation. Participants: The dataset included 1,822 still images and 283 cine loops of choroidal masses for model development and testing. An additional 182 still images were used for external validation, and 302 control images with other diagnoses were included to assess specificity. Methods: A deep convolutional neural network (CNN) based on the U-Net architecture was developed to automatically measure the apical height and basal diameter of choroidal masses on B-scan ultrasound. All still images were manually annotated by expert graders and reviewed by a senior ocular oncologist. Cine loops were analyzed frame by frame, and the frame with the largest detected mass dimensions was selected for evaluation. Outcome Measures: The primary outcome was the model's measurement accuracy, defined by the mean absolute error (MAE) in millimeters relative to expert manual annotations, for both apical height and basal diameter. Secondary metrics included the Dice coefficient, coefficient of determination (R2), and mean pixel distance between predicted and reference measurements. Results: On the internal test set of still images, the model successfully detected the tumor in 99.7% of cases. The MAE was 0.38 ± 0.55 mm for apical height (95.1% of measurements within 1 mm of the expert annotation) and 0.99 ± 1.15 mm for basal diameter (64.4% of measurements within 1 mm). Linear agreement between predicted and reference measurements was strong, with R2 values of 0.74 for apical height and 0.89 for basal diameter. When applied to the set of 302 control images, the model demonstrated a moderate false positive rate. On the external validation set, the model maintained comparable accuracy. Among the cine loops, the model detected tumors in 89.4% of cases with comparable accuracy. Conclusion: Deep learning can deliver fast, reproducible, millimeter-level measurements of choroidal mass dimensions with robust performance across different mass types and imaging sources. These findings support the potential clinical utility of AI-assisted measurement tools in ocular oncology workflows.
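Turning a predicted segmentation mask into apical height and basal diameter amounts to measuring mask extents scaled by the probe's physical pixel spacing. A deliberately crude bounding-box version (the study's exact measurement geometry is not specified, and the spacing values here are hypothetical; a faithful implementation would measure along the mass's principal axes):

```python
import numpy as np

def mass_dimensions(mask: np.ndarray, mm_per_px_axial: float,
                    mm_per_px_lateral: float) -> tuple:
    """Crude bounding-box dimensions of a segmented mass.

    Apical height is taken as the axial (row) extent of the mask and
    basal diameter as the lateral (column) extent, each scaled by the
    corresponding pixel spacing.
    """
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    apical = (rows[-1] - rows[0] + 1) * mm_per_px_axial
    basal = (cols[-1] - cols[0] + 1) * mm_per_px_lateral
    return apical, basal
```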

Generative artificial intelligence for counseling of fetal malformations following ultrasound diagnosis.

Grünebaum A, Chervenak FA

pubmed · Jul 31 2025
To explore the potential role of generative artificial intelligence (GenAI) in enhancing patient counseling following prenatal ultrasound diagnosis of fetal malformations, with an emphasis on clinical utility, patient comprehension, and ethical implementation. The detection of fetal anomalies during the mid-trimester ultrasound is emotionally distressing for patients and presents significant challenges in communication and decision-making. Generative AI tools, such as GPT-4 and similar models, offer novel opportunities to support clinicians in delivering accurate, empathetic, and accessible counseling while preserving the physician's central role. We present a narrative review and applied framework illustrating how GenAI can assist obstetricians before, during, and after the fetal anomaly scan. Use cases include lay summaries, visual aids, anticipatory guidance, multilingual translation, and emotional support. Tables and sample prompts demonstrate practical applications across a range of anomalies.

Enhanced stroke risk prediction in hypertensive patients through deep learning integration of imaging and clinical data.

Li H, Zhang T, Han G, Huang Z, Xiao H, Ni Y, Liu B, Lin W, Lin Y

pubmed · Jul 31 2025
Stroke is one of the leading causes of death and disability worldwide, with a significantly elevated incidence among individuals with hypertension. Conventional risk assessment methods primarily rely on a limited set of clinical parameters and often exclude imaging-derived structural features, resulting in suboptimal predictive accuracy. This study aimed to develop a deep learning-based multimodal stroke risk prediction model by integrating carotid ultrasound imaging with multidimensional clinical data to enable precise identification of high-risk individuals among hypertensive patients. A total of 2,176 carotid artery ultrasound images from 1,088 hypertensive patients were collected. ResNet50 was employed to automatically segment the carotid intima-media and extract key structural features. These imaging features, along with clinical variables such as age, blood pressure, and smoking history, were fused using a Vision Transformer (ViT) and fed into a Radial Basis Probabilistic Neural Network (RBPNN) for risk stratification. The model's performance was systematically evaluated using metrics including AUC, Dice coefficient, IoU, and Precision-Recall curves. The proposed multimodal fusion model achieved outstanding performance on the test set, with an AUC of 0.97, a Dice coefficient of 0.90, and an IoU of 0.80. Ablation studies demonstrated that the inclusion of ViT and RBPNN modules significantly enhanced predictive accuracy. Subgroup analysis further confirmed the model's robust performance in high-risk populations, such as those with diabetes or smoking history. The deep learning-based multimodal fusion model effectively integrates carotid ultrasound imaging and clinical features, significantly improving the accuracy of stroke risk prediction in hypertensive patients. The model demonstrates strong generalizability and clinical application potential, offering a valuable tool for early screening and personalized intervention planning for stroke prevention. 
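The abstract names a Radial Basis Probabilistic Neural Network (RBPNN) as the final risk-stratification stage. A minimal Parzen-style probabilistic neural network conveys the core idea; the fused feature vectors, labels, and kernel width below are placeholders, not the study's data or architecture:

```python
import numpy as np

def pnn_predict(train_x: np.ndarray, train_y: np.ndarray,
                query: np.ndarray, sigma: float = 0.5):
    """Parzen-style probabilistic neural network.

    One Gaussian kernel is centred on each training sample; a class's
    score is the mean kernel response of its samples, and the
    highest-scoring class wins.
    """
    best_label, best_score = None, -1.0
    for label in np.unique(train_y):
        d2 = np.sum((train_x[train_y == label] - query) ** 2, axis=1)
        score = float(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

In the paper's pipeline, `train_x` would be the ViT-fused image-plus-clinical feature vectors rather than raw coordinates.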

Optimizing Thyroid Nodule Management With Artificial Intelligence: Multicenter Retrospective Study on Reducing Unnecessary Fine Needle Aspirations.

Ni JH, Liu YY, Chen C, Shi YL, Zhao X, Li XL, Ye BB, Hu JL, Mou LC, Sun LP, Fu HJ, Zhu XX, Zhang YF, Guo L, Xu HX

pubmed · Jul 30 2025
Most artificial intelligence (AI) models for thyroid nodules are designed to screen for malignancy to guide further interventions; however, these models have not yet been fully implemented in clinical practice. This study aimed to evaluate AI in real clinical settings for identifying potentially benign thyroid nodules initially deemed to be at risk for malignancy by radiologists, reducing unnecessary fine needle aspiration (FNA) and optimizing management. We retrospectively collected a validation cohort of thyroid nodules that had undergone FNA. These nodules were initially assessed as "suspicious for malignancy" by radiologists based on ultrasound features, following standard clinical practice, which prompted further FNA procedures. Ultrasound images of these nodules were re-evaluated using a deep learning-based AI system, and its diagnostic performance was assessed in terms of correct identification of benign nodules and erroneous classification of malignant nodules as benign. Performance metrics such as sensitivity, specificity, and the area under the receiver operating characteristic curve were calculated. In addition, a separate comparison cohort was retrospectively assembled to compare the AI system's ability to correctly identify benign thyroid nodules with that of radiologists. The validation cohort comprised 4572 thyroid nodules (benign: n=3134, 68.5%; malignant: n=1438, 31.5%). AI correctly identified 2719 (86.8% of benign nodules) and reduced unnecessary FNAs from 68.5% (3134/4572) to 9.1% (415/4572). However, 123 malignant nodules (8.6% of malignant cases) were mistakenly identified as benign, the majority of these being of low or intermediate suspicion. In the comparison cohort, AI successfully identified 81.4% (96/118) of benign nodules. It outperformed junior and senior radiologists, who identified only 40% and 55%, respectively.
The area under the curve (AUC) for the AI model was 0.88 (95% CI 0.85-0.91), demonstrating a superior AUC compared with that of the junior radiologists (AUC=0.43, 95% CI 0.36-0.50; P=.002) and senior radiologists (AUC=0.63, 95% CI 0.55-0.70; P=.003). Compared with radiologists, AI can better serve as a "goalkeeper" in reducing unnecessary FNAs by identifying benign nodules that are initially assessed as malignant by radiologists. However, active surveillance is still necessary for all these nodules since a very small number of low-aggressiveness malignant nodules may be mistakenly identified.

Advancing Fetal Ultrasound Image Quality Assessment in Low-Resource Settings

Dongli He, Hu Wang, Mohammad Yaqub

arxiv preprint · Jul 30 2025
Accurate fetal biometric measurements, such as abdominal circumference, play a vital role in prenatal care. However, obtaining high-quality ultrasound images for these measurements heavily depends on the expertise of sonographers, posing a significant challenge in low-income countries due to the scarcity of trained personnel. To address this issue, we leverage FetalCLIP, a vision-language model pretrained on a curated dataset of over 210,000 fetal ultrasound image-caption pairs, to perform automated fetal ultrasound image quality assessment (IQA) on blind-sweep ultrasound data. We introduce FetalCLIP$_{CLS}$, an IQA model adapted from FetalCLIP using Low-Rank Adaptation (LoRA), and evaluate it on the ACOUSLIC-AI dataset against six CNN and Transformer baselines. FetalCLIP$_{CLS}$ achieves the highest F1 score of 0.757. Moreover, we show that an adapted segmentation model, when repurposed for classification, further improves performance, achieving an F1 score of 0.771. Our work demonstrates how parameter-efficient fine-tuning of fetal ultrasound foundation models can enable task-specific adaptations, advancing prenatal care in resource-limited settings. The experimental code is available at: https://github.com/donglihe-hub/FetalCLIP-IQA.
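FetalCLIP$_{CLS}$ adapts a frozen foundation model with Low-Rank Adaptation (LoRA), which adds a trainable low-rank update to each frozen weight matrix instead of fine-tuning it. A one-function numpy sketch of the LoRA forward pass (shapes and the alpha/r scaling follow the original LoRA formulation; nothing here is specific to FetalCLIP):

```python
import numpy as np

def lora_forward(x: np.ndarray, W: np.ndarray, A: np.ndarray,
                 B: np.ndarray, alpha: float) -> np.ndarray:
    """Forward pass through a LoRA-adapted linear layer.

    W (d_out, d_in) stays frozen; the trainable update is the rank-r
    product B @ A with A (r, d_in) and B (d_out, r), scaled by alpha/r.
    Only A and B are learned, so very few parameters are updated.
    """
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T
```

Initializing B to zeros makes the adapted layer start out identical to the frozen one, which is the usual LoRA training setup.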

High-Resolution Ultrasound Data for AI-Based Segmentation in Mouse Brain Tumor.

Dorosti S, Landry T, Brewer K, Forbes A, Davis C, Brown J

pubmed · Jul 30 2025
Glioblastoma multiforme (GBM) is the most aggressive type of brain cancer, making effective treatments essential to improve patient survival. To advance the understanding of GBM and develop more effective therapies, preclinical studies commonly use mouse models due to their genetic and physiological similarities to humans. In particular, the GL261 mouse glioma model is employed for its reproducible tumor growth and ability to mimic key aspects of human gliomas. Ultrasound imaging is a valuable modality in preclinical studies, offering real-time, non-invasive tumor monitoring and facilitating treatment response assessment. Furthermore, its potential therapeutic applications, such as in tumor ablation, expand its utility in preclinical studies. However, real-time segmentation of GL261 tumors during surgery introduces significant complexities, such as precise tumor boundary delineation and maintaining processing efficiency. Automated segmentation offers a solution, but its success relies on high-quality datasets with precise labeling. Our study introduces the first publicly available ultrasound dataset specifically developed to improve tumor segmentation in GL261 glioblastomas, providing 1,856 annotated images to support AI model development in preclinical research. This dataset bridges preclinical insights and clinical practice, laying the foundation for developing more accurate and effective tumor resection techniques.

Ultrasound derived deep learning features for predicting axillary lymph node metastasis in breast cancer using graph convolutional networks in a multicenter study.

Agyekum EA, Kong W, Agyekum DN, Issaka E, Wang X, Ren YZ, Tan G, Jiang X, Shen X, Qian X

pubmed · Jul 30 2025
The purpose of this study was to create and validate an ultrasound-based graph convolutional network (US-based GCN) model for the prediction of axillary lymph node metastasis (ALNM) in patients with breast cancer. A total of 820 eligible patients with breast cancer who underwent preoperative breast ultrasonography (US) between April 2016 and June 2022 were retrospectively enrolled. The training cohort consisted of 621 patients, whereas validation cohort 1 included 112 patients, and validation cohort 2 included 87 patients. A US-based GCN model was built using US deep learning features. In validation cohort 1, the US-based GCN model performed satisfactorily, with an AUC of 0.88 and an accuracy of 0.76. In validation cohort 2, the US-based GCN model performed satisfactorily, with an AUC of 0.84 and an accuracy of 0.75. This approach has the potential to help guide optimal ALNM management in breast cancer patients, particularly by preventing overtreatment. In conclusion, we developed a US-based GCN model to assess the ALN status of breast cancer patients prior to surgery. The US-based GCN model can provide a possible noninvasive method for detecting ALNM and aid in clinical decision-making. High-level evidence for clinical use in later studies is anticipated to be obtained through prospective studies.
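A graph convolutional network propagates node features over a normalized adjacency matrix. A single-layer numpy sketch in the standard Kipf-Welling form, relu(D^(-1/2) (A + I) D^(-1/2) X W); how this study builds its patient graph is not described in the abstract, so the adjacency below is a placeholder:

```python
import numpy as np

def gcn_layer(adj: np.ndarray, feats: np.ndarray,
              weight: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: add self-loops, symmetrically
    normalise the adjacency, then apply a linear transform and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])              # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # D^{-1/2}
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)
```

Stacking two or three such layers over a graph whose nodes carry the ultrasound deep-learning features would yield per-patient embeddings for the final ALNM classifier.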
