
AI enhanced diagnostic accuracy and workload reduction in hepatocellular carcinoma screening.

Lu RF, She CY, He DN, Cheng MQ, Wang Y, Huang H, Lin YD, Lv JY, Qin S, Liu ZZ, Lu ZR, Ke WP, Li CQ, Xiao H, Xu ZF, Liu GJ, Yang H, Ren J, Wang HB, Lu MD, Huang QH, Chen LD, Wang W, Kuang M

PubMed · Aug 2 2025
Hepatocellular carcinoma (HCC) ultrasound screening encounters challenges related to accuracy and the workload of radiologists. This retrospective, multicenter study assessed four artificial intelligence (AI)-enhanced strategies using 21,934 liver ultrasound images from 11,960 patients to improve HCC ultrasound screening accuracy and reduce radiologist workload. UniMatch was used for lesion detection and LivNet for classification, trained on 17,913 images. Among the strategies tested, Strategy 4, which combined AI for initial detection with radiologist evaluation of negative cases in both the detection and classification phases, outperformed the others. It largely preserved the high sensitivity of the original algorithm (0.956 vs. 0.991) while improving specificity (0.787 vs. 0.698), reducing radiologist workload by 54.5%, and decreasing both recall and false-positive rates. This approach demonstrates a successful model of human-AI collaboration, not only enhancing clinical outcomes but also mitigating unnecessary patient anxiety and system burden by minimizing recalls and false positives.
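
To make the workload arithmetic of such a triage strategy concrete, the sketch below recomputes sensitivity, specificity, and the workload reduction under a rule where radiologists re-read only AI-negative studies (one plausible reading of Strategy 4). The function, variable names, and toy data are illustrative assumptions, not the study's code or data.

```python
import numpy as np

def triage_metrics(y_true, ai_pred, rad_pred):
    """Screening metrics when radiologists re-read only AI-negative studies.

    y_true, ai_pred, rad_pred: 1-D arrays of 0/1 labels (1 = HCC-positive call).
    The final call is positive if the AI flags the study, or if the radiologist
    flags a study the AI called negative.
    """
    y_true, ai_pred, rad_pred = map(np.asarray, (y_true, ai_pred, rad_pred))
    final = np.where(ai_pred == 1, 1, rad_pred)        # combined decision per study
    tp = np.sum((final == 1) & (y_true == 1))
    tn = np.sum((final == 0) & (y_true == 0))
    fp = np.sum((final == 1) & (y_true == 0))
    fn = np.sum((final == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    workload = np.mean(ai_pred == 0)                   # share of studies a radiologist must read
    return sensitivity, specificity, 1.0 - workload    # last value = workload reduction

# Toy example with made-up labels (not study data)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
ai = np.where(rng.random(1000) < 0.90, y, 1 - y)       # ~90%-accurate AI reader
rad = np.where(rng.random(1000) < 0.95, y, 1 - y)      # ~95%-accurate human reader
print(triage_metrics(y, ai, rad))
```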

Artificial Intelligence in Abdominal, Gynecological, Obstetric, Musculoskeletal, Vascular and Interventional Ultrasound.

Graumann O, Cui Xin W, Goudie A, Blaivas M, Braden B, Campbell Westerway S, Chammas MC, Dong Y, Gilja OH, Hsieh PC, Jiang Tian A, Liang P, Möller K, Nolsøe CP, Săftoiu A, Dietrich CF

PubMed · Aug 2 2025
Artificial intelligence (AI) refers to the theory and systematic development of computational models designed to perform tasks that traditionally require human cognition. In medical imaging, AI is applied across modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, and across pathologies in multiple organ systems. However, integrating AI into medical ultrasound presents unique challenges compared with modalities like CT and MRI because of its operator-dependent nature and the inherent variability of the image acquisition process. Applying AI to ultrasound holds the potential to mitigate these sources of variability, improve interpretative consistency, and uncover diagnostic patterns that may be difficult for humans to detect. Progress has led to significant innovation in ultrasound-based AI applications, facilitating their adoption in a range of clinical settings and for multiple diseases. This manuscript primarily aims to provide a concise yet comprehensive overview of current and emerging AI applications in abdominal, musculoskeletal, obstetric and gynecological, and interventional medical ultrasound. The secondary aim is to discuss present limitations and the challenges such technological implementations may encounter.

From Consensus to Standardization: Evaluating Deep Learning for Nerve Block Segmentation in Ultrasound Imaging.

Pelletier ED, Jeffries SD, Suissa N, Sarty I, Malka N, Song K, Sinha A, Hemmerling TM

PubMed · Aug 1 2025
Deep learning can automate nerve identification by learning from expert-labeled examples to detect and highlight nerves in ultrasound images. This study aims to evaluate the performance of deep-learning models in identifying nerves for ultrasound-guided nerve blocks. A total of 3594 raw ultrasound images were collected from public sources (an open GitHub repository and publicly available YouTube videos), covering 9 nerve block regions: Transversus Abdominis Plane (TAP), Femoral Nerve, Posterior Rectus Sheath, Median and Ulnar Nerves, Pectoralis Plane, Sciatic Nerve, Infraclavicular Brachial Plexus, Supraclavicular Brachial Plexus, and Interscalene Brachial Plexus. Of these, 10 images per nerve region were held out for testing, with each image labeled by 10 expert anesthesiologists. The remaining 3504 were labeled by a medical anesthesia resident and augmented to create a diverse training dataset of 25,000 images per nerve region. Additionally, 908 negative ultrasound images, which do not contain the targeted nerve structures, were included to improve model robustness. Ten convolutional neural network-based deep-learning architectures were selected to identify nerve structures. Models were trained using a 5-fold cross-validation approach on an EVGA GeForce RTX 3090 GPU, with batch size, number of epochs, and Adam optimizer settings tuned to improve performance. After training, models were evaluated on the set of 10 images per nerve region, using the Dice score (range: 0 to 1, where 1 indicates perfect agreement and 0 indicates no overlap) to compare model predictions with expert-labeled images. Further validation was conducted by 10 medical experts who assessed whether they would insert a needle into the model's predictions. Statistical analyses were performed to explore the relationship between Dice scores and expert responses. The R2U-Net model achieved the highest average Dice score (0.7619) across all nerve regions, outperforming the other models (range 0.7123-0.7619). However, statistically significant differences in model performance were observed only for the TAP nerve region (χ² = 26.4, df = 9, P = .002, ε² = 0.267). Expert evaluations indicated high accuracy in the model predictions, particularly for the popliteal nerve region, where experts agreed to insert a needle based on all 100 model-generated predictions. Logistic modeling suggested that higher Dice overlap might increase the odds of expert acceptance in the Supraclavicular region (odds ratio [OR] = 8.59 × 10⁴, 95% confidence interval [CI], 0.33-2.25 × 10¹⁰; P = .073). The findings demonstrate the potential of deep-learning models, such as R2U-Net, to deliver consistent segmentation results in ultrasound-guided nerve block procedures.
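
The Dice score used above has the closed form 2|A∩B| / (|A| + |B|) for two binary masks A and B. A minimal sketch of that computation (array names and toy masks are illustrative, not the study's code):

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, expert_mask: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks (1 = nerve pixel)."""
    pred = pred_mask.astype(bool)
    ref = expert_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:                       # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * intersection / denom

# Example: overlap between a model prediction and one expert label
pred = np.zeros((128, 128), dtype=np.uint8)
ref = np.zeros((128, 128), dtype=np.uint8)
pred[40:80, 40:80] = 1
ref[50:90, 50:90] = 1
print(f"Dice = {dice_score(pred, ref):.3f}")   # ~0.56 for this toy overlap
```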

Do We Need Pre-Processing for Deep Learning Based Ultrasound Shear Wave Elastography?

Sarah Grube, Sören Grünhagen, Sarah Latus, Michael Meyling, Alexander Schlaefer

arXiv preprint · Aug 1 2025
Estimating the elasticity of soft tissue can provide useful information for various diagnostic applications. Ultrasound shear wave elastography offers a non-invasive approach. However, its generalizability and standardization across different systems and processing pipelines remain limited. Considering the influence of image processing on ultrasound-based diagnostics, recent literature has discussed the impact of different image processing steps on reliable and reproducible elasticity analysis. In this work, we investigate the need for ultrasound pre-processing steps in deep learning-based ultrasound shear wave elastography. We evaluate the performance of a 3D convolutional neural network in predicting shear wave velocities from spatio-temporal ultrasound images, studying different degrees of pre-processing of the input images, ranging from fully beamformed and filtered ultrasound images to raw radiofrequency data. We compare the predictions from our deep learning approach to a conventional time-of-flight method across four gelatin phantoms with different elasticity levels. Our results demonstrate statistically significant differences in the predicted shear wave velocity among all elasticity groups, regardless of the degree of pre-processing. Although pre-processing slightly improves performance metrics, our results show that the deep learning approach can reliably differentiate between elasticity groups using raw, unprocessed radiofrequency data. These results suggest that deep learning-based approaches could reduce the need for, and the bias introduced by, traditional pre-processing steps in ultrasound shear wave elastography, enabling faster and more reliable clinical elasticity assessments.
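
As background on the conventional baseline compared against here, a time-of-flight estimate recovers shear wave speed as the lateral distance between two tracking positions divided by the arrival-time lag, commonly found from the peak of the cross-correlation between their displacement traces. The sketch below is a generic illustration of that idea under those assumptions, not the paper's pipeline.

```python
import numpy as np

def tof_shear_wave_speed(disp_a, disp_b, dx_m, fs_hz):
    """Estimate shear wave speed from two axial-displacement traces.

    disp_a, disp_b: displacement-vs-time traces at two lateral positions
    dx_m:           lateral distance between the positions (metres)
    fs_hz:          temporal sampling rate of the traces (Hz)
    """
    a = disp_a - np.mean(disp_a)
    b = disp_b - np.mean(disp_b)
    xcorr = np.correlate(b, a, mode="full")        # lag of b relative to a
    lag_samples = np.argmax(xcorr) - (len(a) - 1)
    dt_s = lag_samples / fs_hz
    return np.inf if dt_s == 0 else dx_m / dt_s

# Toy example: a Gaussian displacement pulse arriving 0.5 ms later 2 mm away
fs = 10_000.0                                      # 10 kHz tracking rate
t = np.arange(0, 0.02, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.001 ** 2))
c = tof_shear_wave_speed(pulse(0.005), pulse(0.0055), dx_m=0.002, fs_hz=fs)
print(f"{c:.2f} m/s")                              # ~4 m/s for this toy case
```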

Automated Assessment of Choroidal Mass Dimensions Using Static and Dynamic Ultrasonographic Imaging

Emmert, N., Wall, G., Nabavi, A., Rahdar, A., Wilson, M., King, B., Cernichiaro-Espinosa, L., Yousefi, S.

medRxiv preprint · Aug 1 2025
Purpose: To develop and validate an artificial intelligence (AI)-based model that automatically measures choroidal mass dimensions on B-scan ophthalmic ultrasound still images and cine loops. Design: Retrospective diagnostic accuracy study with internal and external validation. Participants: The dataset included 1,822 still images and 283 cine loops of choroidal masses for model development and testing. An additional 182 still images were used for external validation, and 302 control images with other diagnoses were included to assess specificity. Methods: A deep convolutional neural network (CNN) based on the U-Net architecture was developed to automatically measure the apical height and basal diameter of choroidal masses on B-scan ultrasound. All still images were manually annotated by expert graders and reviewed by a senior ocular oncologist. Cine loops were analyzed frame by frame, and the frame with the largest detected mass dimensions was selected for evaluation. Outcome Measures: The primary outcome was the model's measurement accuracy, defined by the mean absolute error (MAE) in millimeters relative to expert manual annotations, for both apical height and basal diameter. Secondary metrics included the Dice coefficient, coefficient of determination (R²), and mean pixel distance between predicted and reference measurements. Results: On the internal test set of still images, the model successfully detected the tumor in 99.7% of cases. The MAE was 0.38 ± 0.55 mm for apical height (95.1% of measurements within 1 mm of the expert annotation) and 0.99 ± 1.15 mm for basal diameter (64.4% within 1 mm). Linear agreement between predicted and reference measurements was strong, with R² values of 0.74 for apical height and 0.89 for basal diameter. On the set of 302 control images, the model demonstrated a moderate false-positive rate. On the external validation set, the model maintained comparable accuracy. Among the cine loops, the model detected tumors in 89.4% of cases with comparable accuracy. Conclusion: Deep learning can deliver fast, reproducible, millimeter-level measurements of choroidal mass dimensions with robust performance across different mass types and imaging sources. These findings support the potential clinical utility of AI-assisted measurement tools in ocular oncology workflows.
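
The headline agreement metrics here (MAE in millimetres, the share of measurements within 1 mm of the expert value, and R²) are straightforward to reproduce from paired predictions and annotations. A small illustrative sketch with made-up numbers, not study data:

```python
import numpy as np

def measurement_agreement(pred_mm, ref_mm, tol_mm=1.0):
    """Mean absolute error, share of predictions within tol_mm, and R^2 vs. reference."""
    pred = np.asarray(pred_mm, dtype=float)
    ref = np.asarray(ref_mm, dtype=float)
    err = np.abs(pred - ref)
    mae = err.mean()
    within_tol = np.mean(err < tol_mm)
    ss_res = np.sum((ref - pred) ** 2)              # residual sum of squares
    ss_tot = np.sum((ref - ref.mean()) ** 2)        # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, within_tol, r2

# Toy apical-height measurements (mm); values are illustrative only
ref = np.array([2.1, 3.4, 5.0, 7.2, 4.4])
pred = np.array([2.4, 3.1, 5.6, 6.9, 4.5])
print(measurement_agreement(pred, ref))
```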

Contrast-Enhanced Ultrasound-Based Intratumoral and Peritumoral Radiomics for Discriminating Carcinoma In Situ and Invasive Carcinoma of the Breast.

Zheng Y, Song Y, Wu T, Chen J, Du Y, Liu H, Wu R, Kuang Y, Diao X

PubMed · Aug 1 2025
This study aimed to evaluate the efficacy of a diagnostic model integrating intratumoral and peritumoral radiomic features based on contrast-enhanced ultrasound (CEUS) for differentiating carcinoma in situ (CIS) from invasive breast carcinoma (IBC). Consecutive cases confirmed by postoperative histopathological analysis were retrospectively gathered, comprising 143 cases of CIS from January 2018 to May 2024 and 186 cases of IBC from May 2022 to May 2024, totaling 322 patients with 329 lesions and complete preoperative CEUS imaging. Intratumoral regions of interest (ROIs) were defined on CEUS peak-phase images with reference to the gray-scale mode, while peritumoral ROIs were defined by expanding 2 mm, 5 mm, and 8 mm beyond the tumor margin for radiomic feature extraction. Statistical and machine learning techniques were employed for feature selection. A logistic regression classifier was used to construct radiomic models integrating intratumoral, peritumoral, and clinical features. Model performance was assessed using the area under the curve (AUC). The model incorporating 5 mm peritumoral features with intratumoral and clinical data exhibited superior diagnostic performance, achieving AUCs of 0.927 and 0.911 in the training and test sets, respectively. It outperformed models based only on clinical features or other radiomic configurations, with the 5 mm peritumoral region proving most effective for lesion discrimination. This study highlights the significant potential of combined intratumoral and peritumoral CEUS radiomics for classifying CIS and IBC, with the integration of 5 mm peritumoral features notably enhancing diagnostic accuracy.
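
A common way to build the 2 mm, 5 mm, and 8 mm peritumoral regions described above is to dilate the intratumoral mask by the corresponding number of pixels (given the image's physical pixel spacing) and subtract the original mask, leaving a ring around the lesion. A hedged sketch of that step, assuming a binary lesion mask and a known pixel spacing; this is not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(tumor_mask: np.ndarray, margin_mm: float, pixel_spacing_mm: float) -> np.ndarray:
    """Binary ring extending margin_mm beyond the tumor border (intratumoral pixels excluded)."""
    n_pixels = max(1, int(round(margin_mm / pixel_spacing_mm)))
    expanded = binary_dilation(tumor_mask.astype(bool), iterations=n_pixels)
    return expanded & ~tumor_mask.astype(bool)

# Toy example: a square "lesion" on a 0.1 mm/pixel grid, expanded by 5 mm
mask = np.zeros((200, 200), dtype=bool)
mask[80:120, 80:120] = True
ring_5mm = peritumoral_ring(mask, margin_mm=5.0, pixel_spacing_mm=0.1)
print(ring_5mm.sum(), "peritumoral pixels")   # area of the ring, in pixels
```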

Explainable multimodal deep learning for predicting thyroid cancer lateral lymph node metastasis using ultrasound imaging.

Shen P, Yang Z, Sun J, Wang Y, Qiu C, Wang Y, Ren Y, Liu S, Cai W, Lu H, Yao S

PubMed · Aug 1 2025
Preoperative prediction of lateral lymph node metastasis is clinically crucial for guiding surgical strategy and prognostic assessment, yet precise prediction methods are lacking. We therefore develop the Lateral Lymph Node Metastasis Network (LLNM-Net), a bidirectional-attention deep-learning model that fuses multimodal data (preoperative ultrasound images, radiology reports, pathological findings, and demographics) from 29,615 patients and 9,836 surgical cases across seven centers. Integrating nodule morphology and position with clinical text, LLNM-Net achieves an area under the curve (AUC) of 0.944 and 84.7% accuracy in multicenter testing, outperforming human experts (64.3% accuracy) and surpassing previous models by 7.4%. Here we show that tumors within 0.25 cm of the thyroid capsule carry a >72% metastasis risk, with the middle and upper lobes as high-risk regions. Leveraging location, shape, echogenicity, margins, demographics, and clinician inputs, LLNM-Net further attains an AUC of 0.983 for identifying high-risk patients. The model is thus a promising tool for preoperative screening and risk stratification.

Deep learning-based super-resolution US radiomics to differentiate testicular seminoma and non-seminoma: an international multicenter study.

Zhang Y, Lu S, Peng C, Zhou S, Campo I, Bertolotto M, Li Q, Wang Z, Xu D, Wang Y, Xu J, Wu Q, Hu X, Zheng W, Zhou J

PubMed · Aug 1 2025
Subvariants of testicular germ cell tumor (TGCT) significantly affect therapeutic strategies and patient prognosis. However, preoperatively distinguishing seminoma (SE) from non-seminoma (n-SE) remains a challenge. This study aimed to evaluate the performance of a deep learning-based super-resolution (SR) US radiomics model for SE/n-SE differentiation. This international multicenter retrospective study recruited patients with confirmed TGCT between 2015 and 2023. A pre-trained SR reconstruction algorithm was applied to enhance native-resolution (NR) images. NR and SR radiomics models were constructed, and the superior model was then integrated with clinical features to construct clinical-radiomics models. Diagnostic performance was evaluated by ROC analysis (AUC) and compared with radiologists' assessments using the DeLong test. A total of 486 male patients were enrolled, forming training (n = 338), domestic validation (n = 92), and international validation (n = 59) sets. The SR radiomics model achieved AUCs of 0.90, 0.82, and 0.91 in the training, domestic, and international validation sets, respectively, significantly surpassing the NR model (p < 0.001, p = 0.031, and p = 0.001, respectively). The clinical-radiomics model exhibited significantly higher AUCs than the SR radiomics model alone across both the domestic and international validation sets (0.95 vs 0.82, p = 0.004; 0.97 vs 0.91, p = 0.031). Moreover, the clinical-radiomics model surpassed the performance of experienced radiologists in both the domestic (AUC, 0.95 vs 0.85, p = 0.012) and international (AUC, 0.97 vs 0.77, p < 0.001) validation cohorts. The SR-based clinical-radiomics model can therefore effectively differentiate between SE and n-SE. This international multicenter study demonstrated that a radiomics model built on deep learning-based SR-reconstructed US images enables effective differentiation between SE and n-SE. Clinical parameters and radiologists' assessments exhibit limited diagnostic accuracy for SE/n-SE differentiation in TGCT. Based on scrotal US images of TGCT, the SR radiomics models performed better than the NR radiomics models. The SR-based clinical-radiomics model outperforms both the radiomics model and radiologists' assessment, enabling accurate, non-invasive preoperative differentiation between SE and n-SE.
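
The AUC comparisons above rely on the DeLong test for paired predictions. As a hedged sketch of the simpler paired-bootstrap alternative (not the DeLong procedure itself) for comparing two models scored on the same validation cases; all variable names are assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_auc_diff(y, scores_a, scores_b, n_boot=2000, seed=0):
    """Bootstrap the AUC difference between two models scored on the same cases."""
    rng = np.random.default_rng(seed)
    y, scores_a, scores_b = map(np.asarray, (y, scores_a, scores_b))
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample cases with replacement
        if len(np.unique(y[idx])) < 2:              # need both classes for an AUC
            continue
        diffs.append(roc_auc_score(y[idx], scores_a[idx]) -
                     roc_auc_score(y[idx], scores_b[idx]))
    diffs = np.asarray(diffs)
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return diffs.mean(), (lo, hi)                   # mean AUC difference and 95% CI

# Hypothetical usage: y is 0/1 SE vs n-SE; scores_* are per-patient model probabilities
# mean_diff, ci = paired_bootstrap_auc_diff(y, p_clinical_radiomics, p_sr_radiomics)
```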

Reference charts for first-trimester placental volume derived using OxNNet.

Mathewlynn S, Starck LN, Yin Y, Soltaninejad M, Swinburne M, Nicolaides KH, Syngelaki A, Contreras AG, Bigiotti S, Woess EM, Gerry S, Collins S

PubMed · Aug 1 2025
To establish a comprehensive reference range for OxNNet-derived first-trimester placental volume (FTPV), based on values observed in healthy pregnancies. Data were obtained from the First Trimester Placental Ultrasound Study, an observational cohort study in which three-dimensional placental ultrasound imaging was performed between 11 + 2 and 14 + 1 weeks' gestation, alongside otherwise routine care. A subgroup of singleton pregnancies resulting in term live birth, without neonatal unit admission or major chromosomal or structural abnormality, was included. Exclusion criteria were fetal growth restriction, maternal diabetes mellitus, hypertensive disorders of pregnancy, and other maternal medical conditions (e.g. chronic hypertension, antiphospholipid syndrome, systemic lupus erythematosus). Placental images were processed using the OxNNet toolkit, a software solution based on a fully convolutional neural network, for automated placental segmentation and volume calculation. Quantile regression and the lambda-mu-sigma (LMS) method were applied to model the distribution of FTPV, using both crown-rump length (CRL) and gestational age as predictors. Model fit was assessed using the Akaike information criterion (AIC), and centile curves were constructed for visual inspection. The cohort comprised 2547 cases. The distribution of FTPV across gestational ages was positively skewed, with the shape of the distribution varying across gestational timepoints. In model comparisons, the LMS method yielded lower AIC values than quantile regression. For predicting FTPV from CRL, the LMS model with the Sinh-Arcsinh distribution achieved the best performance, with the lowest AIC value. For gestational-age-based prediction, the LMS model with the Box-Cox Cole and Green original distribution achieved the lowest AIC value. The LMS models were therefore selected to construct centile charts for FTPV based on both CRL and gestational age. Evaluation of the centile charts revealed strong agreement between predicted and observed centiles, with minimal deviations. Both models demonstrated excellent calibration, and the Z-scores derived from each model followed a normal distribution. This study established reference ranges for FTPV based on both CRL and gestational age in healthy pregnancies. The LMS method provided the best model fit, demonstrating excellent calibration and minimal deviations between predicted and observed centiles. These findings should facilitate the exploration of FTPV as a potential biomarker for adverse pregnancy outcome and provide a foundation for future research into its clinical applications.
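
Since the authors benchmark quantile regression against the LMS method, a minimal Python sketch of the quantile-regression route (fitting several conditional centiles of FTPV against CRL with statsmodels) is shown below; LMS/Sinh-Arcsinh fitting itself is typically done with GAMLSS-style tooling and is not reproduced here. The column names and data are assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: 'crl' (crown-rump length, mm) and 'ftpv' (placental volume, cm^3)
df = pd.DataFrame({"crl": np.random.uniform(45, 84, 500)})
df["ftpv"] = 0.9 * df["crl"] + np.random.gamma(2.0, 5.0, len(df))   # toy, right-skewed volumes

grid = pd.DataFrame({"crl": np.linspace(45, 84, 40)})
centiles = {}
for q in (0.05, 0.50, 0.95):
    # One conditional-quantile fit per centile, quadratic in CRL
    fit = smf.quantreg("ftpv ~ crl + I(crl ** 2)", df).fit(q=q)
    centiles[q] = fit.predict(grid)

# centiles[0.05], centiles[0.50], centiles[0.95] trace the reference curves over CRL
```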

High-grade glioma: combined use of 5-aminolevulinic acid and intraoperative ultrasound for resection and a predictor algorithm for detection.

Aibar-Durán JÁ, Mirapeix RM, Gallardo Alcañiz A, Salgado-López L, Freixer-Palau B, Casitas Hernando V, Hernández FM, de Quintana-Schmidt C

PubMed · Aug 1 2025
The primary goal in neuro-oncology is the maximally safe resection of high-grade glioma (HGG). A more extensive resection improves both overall and disease-free survival, while a complication-free surgery enables better tolerance of adjuvant therapies such as chemotherapy and radiotherapy. Techniques such as 5-aminolevulinic acid (5-ALA) fluorescence and intraoperative ultrasound (ioUS) are valuable, cost-effective aids to safe resection. However, the benefit of combining these techniques remains undocumented. The aim of this study was to investigate outcomes when 5-ALA and ioUS are combined. From January 2019 to January 2024, 72 patients (mean age 62.2 years, 62.5% male) underwent HGG resection at a single hospital. Tumor histology included glioblastoma (90.3%), grade IV astrocytoma (4.1%), grade III astrocytoma (2.8%), and grade III oligodendroglioma (2.8%). Tumor resection was performed under natural light, followed by the use of 5-ALA fluorescence and ioUS to detect residual tumor. Biopsies from the surgical bed were analyzed for tumor presence and categorized according to the 5-ALA and ioUS results. Results of 5-ALA and ioUS were classified as positive, weak/doubtful, or negative; histological findings of the biopsies were categorized as solid tumor, infiltration, or no tumor. Sensitivity, specificity, and predictive values for both techniques, separately and combined, were calculated. A machine learning algorithm (HGGPredictor) was developed to predict tumor presence in biopsies. The overall sensitivities of 5-ALA and ioUS were 84.9% and 76%, with specificities of 57.8% and 84.5%, respectively. The combination of both methods in a positive/positive scenario yielded the highest performance, achieving a sensitivity of 91% and a specificity of 86%. The positive/doubtful combination followed, with a sensitivity of 67.9% and a specificity of 95.2%. Area under the curve analysis indicated superior performance when both techniques were combined compared with each method used individually. Additionally, the HGGPredictor tool effectively estimated the quantity of tumor cells in surgical margins. Combining 5-ALA and ioUS enhanced diagnostic accuracy for HGG resection, suggesting a new surgical standard. An intraoperative predictive algorithm could further automate decision-making.
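
The combined-test figures reported above follow directly from cross-tabulating the two intraoperative readings against biopsy histology. A small illustrative sketch of scoring such a positive/positive rule (all labels and readings below are hypothetical, not study data):

```python
import numpy as np

def sens_spec(test_positive, tumor_present):
    """Sensitivity and specificity of a binary test against histology."""
    test_positive = np.asarray(test_positive, dtype=bool)
    tumor_present = np.asarray(tumor_present, dtype=bool)
    sens = np.mean(test_positive[tumor_present])      # P(test positive | tumor)
    spec = np.mean(~test_positive[~tumor_present])    # P(test negative | no tumor)
    return sens, spec

# Hypothetical per-biopsy readings: True = positive, False = negative/weak
ala  = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1], dtype=bool)   # 5-ALA fluorescence
ious = np.array([1, 0, 0, 1, 1, 1, 0, 0, 0, 1], dtype=bool)   # ioUS residual-tumor call
hist = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0], dtype=bool)   # tumor on histology

print("5-ALA alone:  ", sens_spec(ala, hist))
print("ioUS alone:   ", sens_spec(ious, hist))
print("both positive:", sens_spec(ala & ious, hist))          # the positive/positive rule
```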