
Diagnosis of carpal tunnel syndrome using deep learning with comparative guidance.

Sim J, Lee S, Kim S, Jeong SH, Yoon J, Baek S

PubMed · Jun 1 2025
This study aims to develop a deep learning model for robust diagnosis of carpal tunnel syndrome (CTS) based on comparative classification of ultrasound images of the thenar and hypothenar muscles. We recruited 152 participants, including patients with varying severities of CTS and healthy individuals. Participants underwent ultrasonography, which provided image data of the thenar and hypothenar muscles, innervated by the median and ulnar nerves, respectively. These images were used to train a deep learning model. We compared the performance of our model with previous comparative methods based on echo intensity ratio or machine learning, and with non-comparative methods based on deep learning. During training, comparative guidance based on cosine similarity was used so that the model learns to automatically identify abnormal differences in echotexture between the ultrasound images of the two muscles. The proposed deep learning model with comparative guidance showed the highest performance, and the comparison of receiver operating characteristic (ROC) curves between models demonstrated that the comparative guidance was effective in autonomously identifying complex features within the CTS dataset. The proposed model was shown to automatically identify features important for CTS diagnosis from the ultrasound images, and the comparative approach was found to be robust to traditional problems in ultrasound image analysis such as differing cut-off values and anatomical variation between patients. The proposed deep learning methodology facilitates accurate and efficient diagnosis of CTS from ultrasound images.
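
For illustration only, here is a minimal sketch (not the authors' code) of how a cosine-similarity comparative-guidance term might look in PyTorch; the pairing of thenar/hypothenar embeddings and the exact loss form are assumptions.

```python
import torch
import torch.nn.functional as F

def comparative_guidance_loss(thenar_feat, hypothenar_feat, is_cts):
    """Hypothetical comparative-guidance term: pull paired thenar/hypothenar
    embeddings together for healthy hands and push them apart for CTS.

    thenar_feat, hypothenar_feat: (batch, dim) embeddings from a shared encoder.
    is_cts: (batch,) float tensor, 1.0 for CTS, 0.0 for healthy.
    """
    sim = F.cosine_similarity(thenar_feat, hypothenar_feat, dim=1)  # in [-1, 1]
    # Healthy pairs should look alike (sim -> 1); CTS pairs should differ (sim -> -1).
    target = 1.0 - 2.0 * is_cts
    return F.mse_loss(sim, target)

# total = classification_loss + lambda_cg * comparative_guidance_loss(f_thenar, f_hypo, labels)
```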

Predicting long-term patency of radiocephalic arteriovenous fistulas with machine learning and the PREDICT-AVF web app.

Fitzgibbon JJ, Ruan M, Heindel P, Appah-Sampong A, Dey T, Khan A, Hentschel DM, Ozaki CK, Hussain MA

PubMed · Jun 1 2025
The goal of this study was to expand our previously created prediction tool (PREDICT-AVF) and web app by estimating long-term primary and secondary patency of radiocephalic AVFs. The data source was 911 patients from the PATENCY-1 and PATENCY-2 randomized controlled trials, which enrolled patients undergoing new radiocephalic AVF creation with prospective longitudinal follow-up and ultrasound measurements. Models were built using a combination of baseline characteristics and post-operative ultrasound measurements to estimate patency up to 2.5 years. Discrimination performance was assessed, and an interactive web app was created using the most robust model. At 2.5 years, the unadjusted primary and secondary patency (95% CI) were 29% (26-33%) and 68% (65-72%), respectively. Models using baseline characteristics generally did not perform as well as those using post-operative ultrasound measurements. Overall, the Cox model (4-6 week ultrasound) had the best discrimination performance for primary and secondary patency, with integrated Brier scores of 0.183 (0.167, 0.199) and 0.106 (0.085, 0.126), respectively. Expansion of the PREDICT-AVF web app to include prediction of long-term patency can help guide clinicians in developing comprehensive end-stage kidney disease Life-Plans with hemodialysis access patients.
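
As a sketch of the general workflow (not the study's code or data), fitting a Cox model and computing an integrated Brier score can be done with scikit-survival; the feature count, event rates, and follow-up times below are synthetic placeholders.

```python
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import integrated_brier_score
from sksurv.util import Surv

rng = np.random.default_rng(0)

# Placeholder features standing in for baseline characteristics plus the
# 4-6 week post-operative ultrasound measurements used by the best model.
X_train, X_test = rng.normal(size=(700, 8)), rng.normal(size=(211, 8))
y_train = Surv.from_arrays(event=rng.random(700) < 0.6,
                           time=rng.uniform(0.1, 3.0, 700))
y_test = Surv.from_arrays(event=rng.random(211) < 0.6,
                          time=rng.uniform(0.1, 3.0, 211))

cox = CoxPHSurvivalAnalysis().fit(X_train, y_train)

# Integrated Brier score over follow-up (lower is better; the paper reports
# 0.183 for primary and 0.106 for secondary patency at up to 2.5 years).
times = np.linspace(0.25, 2.4, 20)
preds = np.asarray([fn(times) for fn in cox.predict_survival_function(X_test)])
print(integrated_brier_score(y_train, y_test, preds, times))
```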

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed · Jun 1 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but they depend heavily on pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images; as a result, segmentation over-fits to the weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework that optimizes the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of a bounding box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding box mask. The secondary segmentation prediction induced from these prototypes is compared with the preliminary prediction to quantify the rationality of the elaborated target and background semantic feature perception. Experiments on three thyroid datasets show that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while reducing annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distributions, improving the applicability of deep learning-based segmentation in clinical practice.
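
One plausible reading of the spatial arrangement consistency branch, sketched below as an assumption rather than the paper's exact formulation: compare the row-wise and column-wise maximum activations of the soft prediction against those of the box mask.

```python
import torch
import torch.nn.functional as F

def spatial_arrangement_loss(pred, box_mask):
    """Sketch of a spatial-arrangement consistency term: the row-wise and
    column-wise maxima of the soft prediction should match those of the
    bounding-box mask, constraining where the target may appear.

    pred: (B, 1, H, W) sigmoid probabilities; box_mask: (B, 1, H, W) floats in {0, 1}.
    """
    row_pred, _ = pred.max(dim=3)      # max-project along width  -> (B, 1, H)
    col_pred, _ = pred.max(dim=2)      # max-project along height -> (B, 1, W)
    row_box, _ = box_mask.max(dim=3)
    col_box, _ = box_mask.max(dim=2)
    return F.binary_cross_entropy(row_pred, row_box) + \
           F.binary_cross_entropy(col_pred, col_box)
```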

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

PubMed · Jun 1 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer using a combination of radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients used demographic data, radiomics, O-RADS standardized descriptions, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and Pyradiomics, with feature selection via Least Absolute Shrinkage and Selection Operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features, with 1049 features initially extracted and 7 selected through regression analysis. Multimodal predictors such as ascites, fallopian tube invasion, greatest diameter, HE4, and D-dimer levels were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively, and displayed superior diagnostic performance compared to single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment, offering a potential strategy for improving outcomes in the management of ovarian cancer with PM.
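
A minimal sketch of the LASSO-based feature selection step described here, using scikit-learn on synthetic stand-in data (the real pipeline uses Pyradiomics features; dimensions and the downstream classifier are assumptions).

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in for the 1049 radiomics features; y = PM status (1 = PM, 0 = no PM).
X = rng.normal(size=(619, 1049))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=619) > 0).astype(int)

# LASSO shrinks uninformative coefficients to exactly zero; the paper keeps
# the 7 features that survive this step.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} radiomics features retained")

# The retained features are then combined with clinical predictors
# (ascites, HE4, D-dimer, ...) in a logistic-regression nomogram.
clf = LogisticRegression(max_iter=1000).fit(Xs[:, selected], y)
```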

Machine learning can reliably predict malignancy of breast lesions based on clinical and ultrasonographic features.

Buzatto IPC, Recife SA, Miguel L, Bonini RM, Onari N, Faim ALPA, Silvestre L, Carlotti DP, Fröhlich A, Tiezzi DG

PubMed · Jun 1 2025
To establish a reliable machine learning model to predict malignancy in breast lesions identified by ultrasound (US), optimizing the negative predictive value (NPV) to minimize unnecessary biopsies. We included clinical and ultrasonographic attributes from 1526 breast lesions classified as BI-RADS 3, 4a, 4b, 4c, 5, and 6 that underwent US-guided breast biopsy at four institutions. We selected the most informative attributes to train nine machine learning models, ensemble models, and models with tuned thresholds to make inferences about the diagnosis of BI-RADS 4a and 4b lesions (validation dataset). We then tested the performance of the final model on 403 new suspicious lesions. The most informative attributes were the shape, margin, orientation, and size of the lesion, the resistance index of the internal vessel, the age of the patient, and the presence of a palpable lump. The highest mean NPV was achieved with the K-nearest neighbors algorithm (97.9%). Building ensembles did not improve performance, but tuning the threshold did, and we chose XGBoost with a tuned threshold as the final model. On the test set, the final model achieved an NPV of 98.1% (false negatives 1.9%) and a positive predictive value of 77.1% (false positives 22.9%); it would have missed 2 of the 231 malignant lesions in the test dataset (0.8%). Machine learning can help physicians predict malignancy in suspicious breast lesions identified by US. Our final model would avoid 60.4% of biopsies in benign lesions while missing less than 1% of cancer cases.
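
One way to tune a decision threshold toward a high NPV, sketched below; this is an illustrative construction, not the authors' procedure, and the helper name and target value are assumptions.

```python
import numpy as np
from xgboost import XGBClassifier

def tune_threshold_for_npv(y_val, prob_val, target_npv=0.98):
    """Pick the highest probability cut-off whose negative predictive value
    on the validation set still meets the target, maximizing avoided biopsies.
    """
    best = None
    for t in np.unique(prob_val):                # thresholds in ascending order
        neg = prob_val < t                       # lesions the model would not biopsy
        if neg.sum() == 0:
            continue
        npv = (y_val[neg] == 0).mean()           # fraction of 'negatives' truly benign
        if npv >= target_npv:
            best = t                             # keep the last (largest) passing cut-off
    return best

# model = XGBClassifier().fit(X_train, y_train)            # trained as in the paper
# t = tune_threshold_for_npv(y_val, model.predict_proba(X_val)[:, 1])
# biopsy_recommended = model.predict_proba(X_new)[:, 1] >= t
```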

Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence.

Wang Q, He B, Yu J, Zhang B, Yang J, Liu J, Ma X, Wei S, Li S, Zheng H, Tang Z

PubMed · Jun 1 2025
Ultrasound-guided quadratus lumborum block (QLB) has become a widely used perioperative analgesia technique for abdominal and pelvic surgeries. Because of the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, these nerve blocks rely heavily on anesthesiologist experience, so using artificial intelligence (AI) to identify the different tissue regions in ultrasound images is crucial. In our study, we retrospectively collected 3162 images from 112 patients and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM, the external oblique, internal oblique, and transversus abdominis muscles (collectively referred to as the EIT), and bone. Our model demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, Dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. For the QLM specifically, the IoU, recall, precision, and Dice coefficient were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, 85% of the pixels Q-VUM predicted as blocked fell within the actual blocked area, and the model exhibited stronger segmentation performance than common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720). In summary, we proposed a model named Q-VUM that can accurately identify the anatomical structures around the quadratus lumborum in real time, aiding anesthesiologists in precisely locating the nerve block site, reducing potential complications, and enhancing the effectiveness of nerve block procedures.
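
A rough sketch of a U-shaped network with a VGG16 encoder, in the spirit of Q-VUM; the decoder layout, channel sizes, and class count are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGUNet(nn.Module):
    """U-shaped segmentation net on a VGG16 backbone (illustrative sketch).
    Input height/width must be divisible by 8; logits come out at 1/2 the
    input resolution and should be upsampled for full-size training."""

    def __init__(self, n_classes=4):  # e.g., background / QLM / EIT / bone
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features
        # Split the VGG16 feature stack at its pooling layers into encoder stages.
        self.enc1, self.enc2, self.enc3 = feats[:5], feats[5:10], feats[10:17]
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = nn.Conv2d(256, 128, 3, padding=1)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = nn.Conv2d(128, 64, 3, padding=1)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)             # 64 channels, 1/2 resolution
        e2 = self.enc2(e1)            # 128 channels, 1/4
        e3 = self.enc3(e2)            # 256 channels, 1/8
        d2 = torch.relu(self.dec2(torch.cat([self.up2(e3), e2], dim=1)))
        d1 = torch.relu(self.dec1(torch.cat([self.up1(d2), e1], dim=1)))
        return self.head(d1)

# logits = VGGUNet()(torch.randn(1, 3, 256, 256))  # -> (1, 4, 128, 128)
```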

Adaptive ensemble loss and multi-scale attention in breast ultrasound segmentation with UMA-Net.

Dar MF, Ganivada A

PubMed · Jun 1 2025
The generalization of deep learning (DL) models is critical for accurate lesion segmentation in breast ultrasound (BUS) images. Traditional DL models often struggle to generalize due to the high frequency and scale variations inherent in BUS images. Moreover, the conventional loss functions used in these models frequently result in imbalanced optimization, prioritizing either region overlap or boundary accuracy and leading to suboptimal segmentation performance. To address these issues, we propose UMA-Net, an enhanced UNet architecture specifically designed for BUS image segmentation. UMA-Net integrates residual connections, attention mechanisms, and a bottleneck with atrous convolutions to effectively capture multi-scale contextual information without compromising spatial resolution. Additionally, we introduce an adaptive ensemble loss function that dynamically balances the contributions of different loss components during training, ensuring optimization across key segmentation metrics and mitigating the imbalances found in conventional loss functions. We validate UMA-Net on five diverse BUS datasets (BUET, BUSI, Mendeley, OMI, and UDIAT), demonstrating superior performance. Our findings highlight the importance of addressing frequency and scale variations, confirming UMA-Net as a robust and generalizable solution for BUS image segmentation.
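
One minimal sketch of an adaptively weighted loss ensemble; the specific reweighting rule (an exponential moving average of each term's scale) is an assumption, not UMA-Net's exact formulation.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    # Region-overlap term on (B, 1, H, W) probability and binary-target maps.
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

def adaptive_ensemble_loss(pred, target, running):
    """Sketch: each term (pixel fidelity vs. region overlap) is normalized by a
    running estimate of its magnitude so no single objective dominates training.
    `running` is a dict carried across iterations.
    """
    terms = {"bce": F.binary_cross_entropy(pred, target),
             "dice": dice_loss(pred, target)}
    total = 0.0
    for name, value in terms.items():
        running[name] = 0.9 * running.get(name, value.item()) + 0.1 * value.item()
        total = total + value / (running[name] + 1e-8)
    return total
```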

BCT-Net: semantic-guided breast cancer segmentation on BUS.

Xin J, Yu Y, Shen Q, Zhang S, Su N, Wang Z

PubMed · Jun 1 2025
Accurate and fast segmentation of breast tumors is significant for cancer diagnosis and treatment, and ultrasound imaging is one of the most widely employed methods in clinical practice. However, challenges such as low contrast, blurred boundaries, and prevalent shadows in ultrasound images make tumor segmentation a daunting task. In this study, we propose BCT-Net, a network combining CNN and transformer components for breast tumor segmentation. BCT-Net integrates a dual-level attention mechanism to capture richer features and redefines the skip connection module. We use a classification task as an auxiliary task to impart additional semantic information to the segmentation network through supervised contrastive learning, and we propose a hybrid objective loss function that combines pixel-wise cross-entropy, binary cross-entropy, and a supervised contrastive learning loss. Experiments on the BUSI dataset of breast ultrasound images demonstrate that BCT-Net segments breast tumors with high accuracy, achieving precision (Pre) and Dice similarity coefficient (DSC) values of 86.12% and 88.70%, respectively.
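
For the supervised contrastive component of such a hybrid loss, a minimal SupCon-style term (after Khosla et al.) on image-level embeddings might look as follows; this is a generic sketch, not BCT-Net's exact loss.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """Embeddings with the same benign/malignant label are pulled together,
    others pushed apart. embeddings: (B, D); labels: (B,) int tensor."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                           # pairwise similarities
    mask = (labels[:, None] == labels[None, :]).float()
    mask.fill_diagonal_(0)                                # exclude self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # stability
    exp = torch.exp(logits) * (1 - torch.eye(len(z), device=z.device))
    log_prob = logits - torch.log(exp.sum(dim=1, keepdim=True))
    pos_count = mask.sum(dim=1).clamp(min=1)
    return -(mask * log_prob).sum(dim=1).div(pos_count).mean()
```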

Human-AI collaboration for ultrasound diagnosis of thyroid nodules: a clinical trial.

Edström AB, Makouei F, Wennervaldt K, Lomholt AF, Kaltoft M, Melchiors J, Hvilsom GB, Bech M, Tolsgaard M, Todsen T

PubMed · Jun 1 2025
This clinical trial examined how the artificial intelligence (AI)-based diagnostic system S-Detect for Thyroid influences the diagnostic work-up of thyroid ultrasound (US) performed by different US users in clinical practice, and how different US users influence the diagnostic accuracy of S-Detect. We conducted a clinical trial with 20 participants, including medical students, US-novice physicians, and US-experienced physicians. Five patients with thyroid nodules (one malignant and four benign) volunteered to undergo a thyroid US scan performed by all 20 participants using the same US systems with S-Detect installed. Participants performed a focused thyroid US on each patient case and classified the nodule according to the European Thyroid Imaging Reporting And Data System (EU-TIRADS). They then performed an S-Detect analysis of the same nodule and were asked to re-evaluate their EU-TIRADS classification. From the participants' EU-TIRADS assessments, we derived a biopsy recommendation outcome: whether fine needle aspiration biopsy (FNAB) was recommended. The mean diagnostic accuracy of S-Detect was 71.3% (range 40-100%) across all participants, with no significant difference between the groups (p = 0.31). The accuracy of the biopsy recommendation outcome was 69.8% before and 69.2% after AI for all participants (p = 0.75). In this trial, we did not find S-Detect to improve the thyroid diagnostic work-up in clinical practice among novice and intermediate ultrasound operators. However, the operator had a substantial impact on the AI-generated ultrasound diagnosis, with diagnostic accuracy varying from 40% to 100% despite the same patients and ultrasound machines being used throughout the trial.

Diagnosis of Thyroid Nodule Malignancy Using Peritumoral Region and Artificial Intelligence: Results of Hand-Crafted, Deep Radiomics Features and Radiologists' Assessment in Multicenter Cohorts.

Abbasian Ardakani A, Mohammadi A, Yeong CH, Ng WL, Ng AH, Tangaraju KN, Behestani S, Mirza-Aghazadeh-Attari M, Suresh R, Acharya UR

PubMed · Jun 1 2025
To develop, test, and externally validate a hybrid artificial intelligence (AI) model, based on hand-crafted and deep radiomics features extracted from B-mode ultrasound images, for differentiating benign and malignant thyroid nodules, compared against senior and junior radiologists. A total of 1602 thyroid nodules from four centers in two countries (Iran and Malaysia) were included for the development and validation of the AI models. From each original and expanded contour, the latter including the peritumoral region, 2060 hand-crafted and 1024 deep radiomics features were extracted to assess the contribution of the peritumoral region to the AI diagnostic profile. The performance of four algorithms, namely support vector machines with linear (SVM_lin) and radial basis function (SVM_RBF) kernels, logistic regression, and K-nearest neighbors, was evaluated. The diagnostic performance of the proposed AI model was compared with that of two radiologists following the American Thyroid Association (ATA) and Thyroid Imaging Reporting & Data System (TI-RADS™) guidelines to show the model's applicability in clinical routine. Thirty-five hand-crafted and 36 deep radiomics features were retained for model development. In training, SVM_RBF and SVM_lin performed best when rectangular contours 40% larger than the original contours were used for both hand-crafted and deep features. Ensemble learning with SVM_RBF and SVM_lin obtained AUCs of 0.954, 0.949, 0.932, and 0.921 in the internal and external validations of the Iran cohort and in Malaysia cohorts 1 and 2, respectively, outperforming both radiologists. The proposed AI model trained on the nodule plus the peritumoral region performed optimally in the external validations and outperformed the radiologists using the ATA and TI-RADS guidelines.
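
A soft-voting combination of the two SVM kernels is one plausible reading of the ensemble described here; the sketch below uses scikit-learn on synthetic stand-in features and is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for the 35 hand-crafted + 36 deep radiomics features extracted
# from contours expanded 40% beyond the nodule boundary.
X = rng.normal(size=(1602, 71))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1602) > 0).astype(int)

svm_lin = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))

# Soft voting averages the two kernels' malignancy probabilities.
ensemble = VotingClassifier([("lin", svm_lin), ("rbf", svm_rbf)], voting="soft")
ensemble.fit(X, y)
malignancy_prob = ensemble.predict_proba(X)[:, 1]
```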