
Diabetic Tibial Neuropathy Prediction: Improving interpretability of Various Machine-Learning Models Based on Multimodal-Ultrasound Features Using SHAP Methodology.

Chen Y, Sun Z, Zhong H, Chen Y, Wu X, Su L, Lai Z, Zheng T, Lyu G, Su Q

PubMed · Jul 12 2025
This study aimed to develop and evaluate eight machine learning models based on multimodal ultrasound to precisely predict diabetic tibial neuropathy (DTN) in patients. Additionally, the SHapley Additive exPlanations (SHAP) framework was introduced to quantify the importance of each feature variable, providing a precise and noninvasive assessment tool for DTN patients, optimizing clinical management strategies, and enhancing patient prognosis. A prospective analysis was conducted using multimodal ultrasound and clinical data from 255 suspected DTN patients who visited the Second Affiliated Hospital of Fujian Medical University between January 2024 and November 2024. Key features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using Extreme Gradient Boosting (XGB), Logistic Regression, Support Vector Machines, k-Nearest Neighbors, Random Forest, Decision Tree, Naïve Bayes, and Neural Network. The SHAP method was employed to refine model interpretability. Furthermore, to assess the model's generalizability, data from 135 patients at three other tertiary hospitals were collected for external testing. LASSO regression identified echo intensity (EI), cross-sectional area (CSA), mean elasticity value (Emean), superb microvascular imaging (SMI), and history of smoking as key features for DTN prediction. The XGB model achieved an Area Under the Curve (AUC) of 0.94, 0.83, and 0.79 in the training, internal test, and external test sets, respectively. SHAP analysis highlighted the ranked importance of EI, CSA, Emean, SMI, and history of smoking. Personalized prediction explanations provided by the SHAP values demonstrated the contribution of each feature to the final prediction, enhancing model interpretability. Furthermore, decision plots depicted how different features influenced mispredictions, thereby facilitating further model optimization or feature adjustment. This study proposed a DTN prediction model based on machine-learning algorithms applied to multimodal ultrasound data. The results indicated the superior performance of the XGB model, and its interpretability was enhanced using SHAP analysis. This cost-effective and user-friendly approach provides potential support for personalized treatment and precision medicine for DTN.
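For orientation, a minimal sketch of the kind of pipeline described above (LASSO feature selection, an XGBoost classifier, SHAP attribution) is shown below; the file name, column names, and hyperparameters are illustrative assumptions, not the authors' code.

```python
# Sketch of a LASSO -> XGBoost -> SHAP workflow similar to the one described above.
# The CSV path and column names are illustrative placeholders, not the study's data.
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier
import shap

df = pd.read_csv("dtn_multimodal_ultrasound.csv")     # hypothetical file
X, y = df.drop(columns=["DTN"]), df["DTN"]            # hypothetical binary label

# LASSO keeps features with non-zero coefficients
# (e.g., EI, CSA, Emean, SMI, smoking history in the study).
lasso = LassoCV(cv=5).fit(X, y)
selected = X.columns[lasso.coef_ != 0]

X_tr, X_te, y_tr, y_te = train_test_split(
    X[selected], y, test_size=0.3, stratify=y, random_state=42)
model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("internal test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# SHAP quantifies each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```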

Integrating Artificial Intelligence in Thyroid Nodule Management: Clinical Outcomes and Cost-Effectiveness Analysis.

Bodoque-Cubas J, Fernández-Sáez J, Martínez-Hervás S, Pérez-Lacasta MJ, Carles-Lavila M, Pallarés-Gasulla RM, Salazar-González JJ, Gil-Boix JV, Miret-Llauradó M, Aulinas-Masó A, Argüelles-Jiménez I, Tofé-Povedano S

PubMed · Jul 12 2025
The increasing incidence of thyroid nodules (TN) raises concerns about overdiagnosis and overtreatment. This study evaluates the clinical and economic impact of KOIOS, an FDA-approved artificial intelligence (AI) tool for the management of TN. A retrospective analysis was conducted on 176 patients who underwent thyroid surgery between May 2022 and November 2024. Ultrasound images were evaluated independently by expert and novice operators using the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS), while KOIOS provided AI-adapted risk stratification. Sensitivity, specificity, and receiver operating characteristic (ROC) curve analyses were performed. The incremental cost-effectiveness ratio (ICER) was defined based on the number of optimal care interventions (fine-needle aspiration biopsy [FNAB] and thyroid surgery). Both deterministic and probabilistic sensitivity analyses were conducted to evaluate model robustness. KOIOS AI demonstrated similar diagnostic performance to the expert operator (AUC: 0.794, 95% CI: 0.718-0.871 vs. 0.784, 95% CI: 0.706-0.861; p = 0.754) and significantly outperformed the novice operator (AUC: 0.619, 95% CI: 0.526-0.711; p < 0.001). ICER analysis estimated the cost per additional optimal care decision at -€8,085.56, indicating KOIOS as a dominant and cost-saving strategy when considering a third-party payer perspective over a one-year horizon. Deterministic sensitivity analysis identified surgical costs as the main drivers of variability, while probabilistic analysis consistently favored KOIOS as the optimal strategy. KOIOS AI is a cost-effective alternative, particularly in reducing overdiagnosis and overtreatment for benign TNs. Prospective, real-life studies are needed to validate these findings and explore long-term implications.
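The ICER quoted above follows the standard incremental cost-effectiveness definition, shown in the sketch below with invented cost and effectiveness figures; only the formula itself is standard.

```python
# Incremental cost-effectiveness ratio (ICER): difference in cost divided by
# difference in effect between the AI-assisted strategy and standard practice.
# All numbers below are illustrative, not the study's figures.

def icer(cost_new, cost_std, effect_new, effect_std):
    """ICER = (C_new - C_std) / (E_new - E_std); the effect here is the number
    of optimal care decisions (appropriate FNAB or surgery)."""
    return (cost_new - cost_std) / (effect_new - effect_std)

print(icer(cost_new=410_000, cost_std=450_000, effect_new=120, effect_std=115))
# A negative ICER with higher effectiveness and lower cost marks the new
# strategy as dominant (cost-saving), as reported for KOIOS.
```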

A View-Agnostic Deep Learning Framework for Comprehensive Analysis of 2D-Echocardiography

Anisuzzaman, D. M., Malins, J. G., Jackson, J. I., Lee, E., Naser, J. A., Rostami, B., Bird, J. G., Spiegelstein, D., Amar, T., Ngo, C. C., Oh, J. K., Pellikka, P. A., Thaden, J. J., Lopez-Jimenez, F., Poterucha, T. J., Friedman, P. A., Pislaru, S., Kane, G. C., Attia, Z. I.

medRxiv preprint · Jul 11 2025
Echocardiography traditionally requires experienced operators to select and interpret clips from specific viewing angles. Clinical decision-making is therefore limited for handheld cardiac ultrasound (HCU), which is often collected by novice users. In this study, we developed a view-agnostic deep learning framework to estimate left ventricular ejection fraction (LVEF), patient age, and patient sex from any of several views containing the left ventricle. Model performance was: (1) consistently strong across retrospective transthoracic echocardiography (TTE) datasets; (2) comparable between prospective HCU and TTE (625 patients; LVEF r² 0.80 vs. 0.86, LVEF classification [>40% vs. ≤40%] AUC 0.981 vs. 0.993, age r² 0.85 vs. 0.87, sex classification AUC 0.985 vs. 0.996); (3) comparable between prospective HCU data collected by expert versus novice users (100 patients; LVEF r² 0.78 vs. 0.66, LVEF AUC 0.982 vs. 0.966). This approach may broaden the clinical utility of echocardiography by lessening the need for user expertise in image acquisition.
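As a rough illustration of the multi-task setup the abstract implies (a single shared video encoder with separate heads for LVEF, age, and sex), the sketch below uses a generic 3D ResNet backbone from torchvision; the backbone choice, head sizes, and input shape are assumptions, not the authors' architecture.

```python
# Sketch of a multi-task video model: shared clip encoder plus separate heads for
# LVEF (regression), age (regression), and sex (classification).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class ViewAgnosticEcho(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = r3d_18(weights=None)            # assumed backbone, not the paper's
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # shared clip-level feature extractor
        self.backbone = backbone
        self.lvef_head = nn.Linear(feat_dim, 1)    # ejection fraction (%)
        self.age_head = nn.Linear(feat_dim, 1)     # patient age (years)
        self.sex_head = nn.Linear(feat_dim, 1)     # logit for sex classification

    def forward(self, clips):                      # clips: (batch, 3, frames, H, W)
        feats = self.backbone(clips)
        return self.lvef_head(feats), self.age_head(feats), self.sex_head(feats)

model = ViewAgnosticEcho()
lvef, age, sex_logit = model(torch.randn(2, 3, 16, 112, 112))
```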

Acute Management of Nasal Bone Fractures: A Systematic Review and Practice Management Guideline.

Paliwoda ED, Newman-Plotnick H, Buzzetta AJ, Post NK, LaClair JR, Trandafirescu M, Gildener-Leapman N, Kpodzo DS, Edwards K, Tafen M, Schalet BJ

PubMed · Jul 10 2025
Nasal bone fractures represent the most common facial skeletal injury, challenging both function and aesthetics. This Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-based review analyzed 23 studies published within the past 5 years, selected from 998 records retrieved from PubMed, Embase, and Web of Science. Data from 1780 participants were extracted, focusing on diagnostic methods, surgical techniques, anesthesia protocols, and long-term outcomes. Ultrasound and artificial intelligence-based algorithms improved diagnostic accuracy, while telephone triage streamlined necessary encounters. Navigation-assisted reduction, ballooning, and septal reduction with polydioxanone plates improved outcomes. Anesthetic approaches ranged from local nerve blocks to general anesthesia with intraoperative administration of lidocaine, alongside techniques to manage pain from postoperative nasal pack removal. Long-term follow-up demonstrated improved quality of life, breathing function, and aesthetic satisfaction with timely and individualized treatment. This review highlights the trend toward personalized, technology-assisted approaches in nasal fracture management and identifies areas for future research.

Intratumoral and peritumoral radiomics based on 2D ultrasound imaging in breast cancer was used to determine the optimal peritumoral range for predicting KI-67 expression.

Huang W, Zheng S, Zhang X, Qi L, Li M, Zhang Q, Zhen Z, Yang X, Kong C, Li D, Hua G

PubMed · Jul 10 2025
Radiomics studies to date have focused on intratumoral regions and fixed peritumoral regions, and the optimal peritumoral region for predicting Ki-67 expression has not been established. The aim of this study was to develop a machine learning model that analyzes ultrasound radiomics features extracted from peritumoral regions of different widths in order to determine the optimal peritumoral region for predicting Ki-67 expression. A total of 453 breast cancer patients were included. They were randomly assigned to training and validation sets in a 7:3 ratio. In the training cohort, machine learning models were constructed for the intratumoral region and for different peritumoral regions (2 mm, 4 mm, 6 mm, 8 mm, 10 mm), identifying the Ki-67-relevant features for each ROI and comparing the models to determine the best one. These models were validated using a test cohort to find the peritumoral region most accurate for Ki-67 prediction. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of predicting Ki-67 expression, and the DeLong test was used to assess differences between AUCs. SHAP (SHapley Additive exPlanations) analysis was performed on the optimal prediction model to quantify the contribution of the major radiomics features. In the validation cohort, the SVM model combining the intratumoral and peritumoral 6-mm regions showed the highest predictive performance, with an AUC of 0.9342. This intratumoral plus peritumoral 6-mm SVM model differed significantly (P < 0.05) from the other models. SHAP analysis showed that peritumoral 6-mm features were more important than intratumoral features. The SVM model using the intratumoral and peritumoral 6-mm regions performed best in predicting Ki-67 expression.
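One way to obtain peritumoral regions of different widths, as described above, is to dilate the intratumoral mask and subtract it; the sketch below assumes a hypothetical pixel spacing and a toy mask, and is not the authors' implementation.

```python
# Sketch: build peritumoral ring masks (2-10 mm) around an intratumoral mask by
# binary dilation; radiomics features would then be extracted per region.
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(tumor_mask: np.ndarray, margin_mm: float, pixel_mm: float) -> np.ndarray:
    """Ring of tissue extending margin_mm beyond the tumor boundary."""
    radius_px = int(round(margin_mm / pixel_mm))
    dilated = binary_dilation(tumor_mask, iterations=radius_px)
    return dilated & ~tumor_mask.astype(bool)

tumor = np.zeros((256, 256), dtype=bool)
tumor[100:150, 110:170] = True                      # toy intratumoral mask
rings = {m: peritumoral_ring(tumor, m, pixel_mm=0.1) for m in (2, 4, 6, 8, 10)}
print({m: int(r.sum()) for m, r in rings.items()})  # ring areas in pixels
```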

Multiparametric ultrasound techniques are superior to AI-assisted ultrasound for assessment of solid thyroid nodules: a prospective study.

Li Y, Li X, Yan L, Xiao J, Yang Z, Zhang M, Luo Y

PubMed · Jul 10 2025
To evaluate the diagnostic performance of multiparametric ultrasound (mpUS) and AI-assisted B-mode ultrasound (AI-US), and their potential to reduce unnecessary biopsies compared with B-mode alone for solid thyroid nodules. This prospective study enrolled 226 solid thyroid nodules with 145 malignant and 81 benign pathological results from 189 patients (35 men and 154 women; age range, 19-73 years; mean age, 45 years). Each nodule was examined using B-mode, microvascular flow imaging (MVFI), elastography with elasticity contrast index (ECI), and an AI system. Image data were recorded for each modality. Ten readers with different experience levels independently evaluated the B-mode images of each nodule and made a "benign" or "malignant" diagnosis, both blinded and unblinded to the AI reports. The most accurate ECI value and MVFI mode were selected and combined with the dichotomous prediction of all readers. Descriptive statistics and AUCs were used to evaluate the diagnostic performance of mpUS and AI-US. Triple mpUS with B-mode, MVFI, and ECI exhibited the highest diagnostic performance (average AUC = 0.811 vs. 0.677 for B-mode, p = 0.001), followed by AI-US (average AUC = 0.718, p = 0.315). Triple mpUS significantly reduced the unnecessary biopsy rate by up to 12% (p = 0.007). AUC and specificity were significantly higher for triple mpUS than for AI-US mode (both p < 0.05). Compared to AI-US, triple mpUS (B-mode, MVFI, and ECI) exhibited better diagnostic performance for thyroid cancer diagnosis and resulted in a significant reduction in the unnecessary biopsy rate. AI systems are expected to take advantage of multimodal information to facilitate diagnosis.
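For illustration only, the sketch below computes per-strategy AUC and an unnecessary-biopsy rate (benign nodules referred to biopsy) on synthetic scores; the decision threshold is arbitrary, and the DeLong test used in the study is omitted because it is not available in scikit-learn.

```python
# Sketch: compare AUC and unnecessary-biopsy rate between two strategies
# on synthetic labels and scores (not the study's data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                        # 1 = malignant, 0 = benign
score_mpus = y * 0.6 + rng.normal(0, 0.30, 200)    # synthetic mpUS scores
score_aius = y * 0.4 + rng.normal(0, 0.35, 200)    # synthetic AI-US scores

for name, s in [("mpUS", score_mpus), ("AI-US", score_aius)]:
    biopsy = s > 0.5                               # toy biopsy-referral threshold
    unnecessary = np.mean(biopsy & (y == 0)) / max(np.mean(biopsy), 1e-9)
    print(name, "AUC:", round(roc_auc_score(y, s), 3),
          "unnecessary biopsy rate:", round(unnecessary, 3))
```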

Breast Ultrasound Tumor Generation via Mask Generator and Text-Guided Network: A Clinically Controllable Framework with Downstream Evaluation

Haoyu Pan, Hongxin Lin, Zetian Feng, Chuxuan Lin, Junyang Mo, Chu Zhang, Zijian Wu, Yi Wang, Qingqing Zheng

arXiv preprint · Jul 10 2025
The development of robust deep learning models for breast ultrasound (BUS) image analysis is significantly constrained by the scarcity of expert-annotated data. To address this limitation, we propose a clinically controllable generative framework for synthesizing BUS images. This framework integrates clinical descriptions with structural masks to generate tumors, enabling fine-grained control over tumor characteristics such as morphology, echogenicity, and shape. Furthermore, we design a semantic-curvature mask generator, which synthesizes structurally diverse tumor masks guided by clinical priors. During inference, synthetic tumor masks serve as input to the generative framework, producing highly personalized synthetic BUS images with tumors that reflect real-world morphological diversity. Quantitative evaluations on six public BUS datasets demonstrate the significant clinical utility of our synthetic images, showing their effectiveness in enhancing downstream breast cancer diagnosis tasks. Furthermore, visual Turing tests conducted by experienced sonographers confirm the realism of the generated images, indicating the framework's potential to support broader clinical applications.

Deformable detection transformers for domain adaptable ultrasound localization microscopy with robustness to point spread function variations.

Gharamaleki SK, Helfield B, Rivaz H

PubMed · Jul 10 2025
Super-resolution imaging has emerged as a rapidly advancing field in diagnostic ultrasound. Ultrasound Localization Microscopy (ULM) achieves sub-wavelength precision in microvasculature imaging by tracking gas microbubbles (MBs) flowing through blood vessels. However, MB localization faces challenges due to dynamic point spread functions (PSFs) caused by harmonic and sub-harmonic emissions, as well as depth-dependent PSF variations in ultrasound imaging. Additionally, deep learning models often struggle to generalize from simulated to in vivo data due to significant disparities between the two domains. To address these issues, we propose a novel approach using the DEformable DEtection TRansformer (DE-DETR). This object detection network tackles object deformations by utilizing multi-scale feature maps and incorporating a deformable attention module. We further refine the super-resolution map by employing a KDTree algorithm for efficient MB tracking across consecutive frames. We evaluated our method using both simulated and in vivo data, demonstrating improved precision and recall compared to current state-of-the-art methodologies. These results highlight the potential of our approach to enhance ULM performance in clinical applications.
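The frame-to-frame microbubble linking step mentioned above can be sketched with scipy's cKDTree as below; the coordinates and the matching radius are illustrative assumptions rather than the authors' settings.

```python
# Sketch: link microbubble detections across consecutive frames with a KD-tree
# nearest-neighbour search gated by a maximum displacement.
import numpy as np
from scipy.spatial import cKDTree

def link_detections(prev_pts: np.ndarray, curr_pts: np.ndarray, max_dist: float = 2.0):
    """Return (prev_index, curr_index) pairs for detections within max_dist pixels."""
    tree = cKDTree(prev_pts)
    dist, idx = tree.query(curr_pts, k=1, distance_upper_bound=max_dist)
    return [(int(idx[j]), j) for j in range(len(curr_pts)) if np.isfinite(dist[j])]

frame_t  = np.array([[10.2, 40.1], [55.0, 12.3], [80.5, 70.0]])  # detections at t
frame_t1 = np.array([[11.0, 40.6], [54.2, 13.0], [95.0, 20.0]])  # detections at t+1
print(link_detections(frame_t, frame_t1))   # matches found for the first two bubbles
```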

Intelligent quality assessment of ultrasound images for fetal nuchal translucency measurement during the first trimester of pregnancy based on deep learning models.

Liu L, Wang T, Zhu W, Zhang H, Tian H, Li Y, Cai W, Yang P

PubMed · Jul 10 2025
As increased nuchal translucency (NT) thickness is notably associated with fetal chromosomal abnormalities, structural defects, and genetic syndromes, accurate measurement of NT thickness is crucial for the screening of fetal abnormalities during the first trimester. We aimed to develop a model for quality assessment of ultrasound images for precise measurement of fetal NT thickness. We collected 2140 ultrasound images of midsagittal sections of the fetal face between 11 and 14 weeks of gestation. Several image segmentation models were trained, and the one with the best Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD95) was chosen to automatically segment the region of interest (ROI). Radiomics features and deep transfer learning (DTL) features were extracted and selected to construct radiomics and DTL models. Feature screening was conducted using the t-test, Mann-Whitney U-test, Spearman's rank correlation analysis, and LASSO. We also developed early fusion and late fusion models to integrate the advantages of the radiomics and DTL models. The optimal model was compared with junior radiologists. We used SHapley Additive exPlanations (SHAP) to investigate the model's interpretability. The DeepLabV3 ResNet achieved the best segmentation performance (DSC: 98.07 ± 0.02%, HD95: 0.75 ± 0.15 mm). The feature fusion model demonstrated the optimal performance (AUC: 0.978, 95% CI: 0.965–0.990, accuracy: 93.2%, sensitivity: 93.1%, specificity: 93.4%, PPV: 93.5%, NPV: 93.0%, precision: 93.5%). This model exhibited more reliable performance than junior radiologists and significantly improved their capabilities. The SHAP summary plot showed that DTL features were the most important features for the feature fusion model. The proposed models innovatively bridge the gaps in previous studies, achieving intelligent quality assessment of ultrasound images for NT measurement and highly accurate automatic segmentation of ROIs. These models are potential tools to enhance quality control for fetal ultrasound examinations, streamline clinical workflows, and improve the professional skills of less-experienced radiologists. The online version contains supplementary material available at 10.1186/s12884-025-07863-y.
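The two segmentation metrics quoted above can be computed as in the sketch below (Dice similarity coefficient, plus a simplified 95th-percentile Hausdorff distance over all foreground pixels rather than surface points); the toy masks and pixel spacing are placeholders.

```python
# Sketch: Dice similarity coefficient (DSC) and a simplified 95th-percentile
# Hausdorff distance (HD95) between a predicted and a reference mask.
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

def hd95(pred: np.ndarray, ref: np.ndarray, spacing_mm: float = 1.0) -> float:
    # Simplification: uses all foreground pixels; a strict HD95 uses surface points.
    p = np.argwhere(pred) * spacing_mm
    r = np.argwhere(ref) * spacing_mm
    d = cdist(p, r)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True   # toy predicted mask
ref  = np.zeros((64, 64), bool); ref[22:42, 21:41] = True    # toy reference mask
print(round(dice(pred, ref), 3), round(hd95(pred, ref, spacing_mm=0.1), 3))
```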

Integrative multimodal ultrasound and radiomics for early prediction of neoadjuvant therapy response in breast cancer: a clinical study.

Wang S, Liu J, Song L, Zhao H, Wan X, Peng Y

PubMed · Jul 9 2025
This study aimed to develop an early predictive model for neoadjuvant therapy (NAT) response in breast cancer by integrating multimodal ultrasound (conventional B-mode, shear-wave elastography, and contrast-enhanced ultrasound) and radiomics with clinical-pathological data, and to evaluate its predictive accuracy after two cycles of NAT. This retrospective study included 239 breast cancer patients receiving neoadjuvant therapy, divided into training (n = 167) and validation (n = 72) cohorts. Multimodal ultrasound (B-mode, shear-wave elastography [SWE], and contrast-enhanced ultrasound [CEUS]) was performed at baseline and after two cycles. Tumors were segmented using a U-Net-based deep learning model with radiologist adjustment, and radiomic features were extracted via PyRadiomics. Candidate variables were screened using univariate analysis and multicollinearity checks, followed by LASSO and stepwise logistic regression to build three models: a clinical-ultrasound model, a radiomics-only model, and a combined model. Model performance for early response prediction was assessed using ROC analysis. In the training cohort (n = 167), Model_Clinic achieved an AUC of 0.85, with HER2 positivity, maximum tumor stiffness (Emax), stiffness heterogeneity (Estd), and the CEUS "radiation sign" emerging as independent predictors (all P < 0.05). The radiomics model showed moderate performance at baseline (AUC 0.69) but improved after two cycles (AUC 0.83), and a model using radiomic feature changes achieved an AUC of 0.79. Model_Combined demonstrated the best performance, with a training AUC of 0.91 (sensitivity 89.4%, specificity 82.9%). In the validation cohort (n = 72), all models showed comparable AUCs (Model_Combined ~ 0.90) without significant degradation, and Model_Combined significantly outperformed Model_Clinic and Model_RSA (DeLong P = 0.006 and 0.042, respectively). In our study, integrating multimodal ultrasound and radiomic features improved the early prediction of NAT response in breast cancer and could provide valuable information to enable timely treatment adjustments and more personalized management strategies.
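A minimal sketch of a combined clinical-plus-radiomics model of the kind described above, using an L1-penalized logistic regression; the file name, column names, response label, and penalty strength are hypothetical, and the study's full variable screening (univariate tests, LASSO, stepwise selection) is condensed here.

```python
# Sketch: fuse clinical-ultrasound predictors with radiomic features in an
# L1-penalized logistic regression and report ROC AUC. Placeholders throughout.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("nat_response_features.csv")                 # hypothetical file
clinical = ["HER2_positive", "Emax", "Estd", "ceus_radiation_sign"]
radiomic = [c for c in df.columns if c.startswith("rad_")]    # e.g., PyRadiomics outputs
X, y = df[clinical + radiomic], df["response"]                # hypothetical binary label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
combined = make_pipeline(StandardScaler(),
                         LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
combined.fit(X_tr, y_tr)
print("validation AUC:", round(roc_auc_score(y_te, combined.predict_proba(X_te)[:, 1]), 3))
```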