Page 44 of 56552 results

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed · Jun 1, 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but they depend heavily on quantifying the pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images; segmentation then over-fits to the weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework that optimizes the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of the bounding box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding box mask. The secondary segmentation prediction induced from the prototypes is compared with the preliminary prediction to quantify the rationality of the elaborated target and background semantic feature perception. Experiments on three thyroid datasets show that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while requiring less annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distribution, improving the applicability of deep learning-based segmentation in clinical practice.
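The spatial arrangement consistency idea, comparing axis-wise maximum activations of the soft prediction against the bounding box mask, can be sketched in a few lines. This is a minimal NumPy illustration; the function name and the L1 loss form are assumptions, not the paper's exact formulation.

```python
import numpy as np

def spatial_arrangement_loss(pred, box):
    """Compare per-row and per-column maximum activations of a soft
    segmentation prediction against a binary bounding-box mask.
    A low loss means the predicted target's approximate localization
    agrees with the box along both spatial axes."""
    row_pred, row_box = pred.max(axis=1), box.max(axis=1)  # vertical profile
    col_pred, col_box = pred.max(axis=0), box.max(axis=0)  # horizontal profile
    return float(np.abs(row_pred - row_box).mean() + np.abs(col_pred - col_box).mean())

# Demo: a prediction inside the box scores better than one outside it.
box = np.zeros((8, 8)); box[2:6, 3:7] = 1.0
inside = np.zeros((8, 8)); inside[3:5, 4:6] = 1.0
outside = np.zeros((8, 8)); outside[0:2, 0:2] = 1.0
```

Because only axis-wise maxima are compared, the guidance stays coarse: it penalizes predictions that leak outside the box without forcing the prediction to fill the box pixel-by-pixel.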

Diagnosis of carpal tunnel syndrome using deep learning with comparative guidance.

Sim J, Lee S, Kim S, Jeong SH, Yoon J, Baek S

PubMed · Jun 1, 2025
This study aims to develop a deep learning model for robust diagnosis of Carpal Tunnel Syndrome (CTS) based on comparative classification leveraging ultrasound images of the thenar and hypothenar muscles. We recruited 152 participants, both patients with varying severities of CTS and healthy individuals. The enrolled participants underwent ultrasonography, which provided ultrasound image data of the thenar muscle (innervated by the median nerve) and the hypothenar muscle (innervated by the ulnar nerve). These images were used to train a deep learning model. We compared the performance of our model with previous comparative methods using echo intensity ratio or machine learning, and with non-comparative methods based on deep learning. During training, comparative guidance based on cosine similarity was used so that the model learns to automatically identify abnormal differences in echotexture between the ultrasound images of the thenar and hypothenar muscles. The proposed deep learning model with comparative guidance showed the highest performance, and the comparison of receiver operating characteristic (ROC) curves between models demonstrated that comparative guidance was effective in autonomously identifying complex features within the CTS dataset. The model was shown to automatically identify important features for CTS diagnosis from the ultrasound images, and the comparative approach was found to be robust to traditional problems in ultrasound image analysis such as varying cut-off values and anatomical variation among patients. The proposed methodology facilitates accurate and efficient diagnosis of CTS from ultrasound images.
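A cosine-similarity comparative guidance term can be sketched as a contrastive-style objective on the two muscle embeddings. This is a hypothetical formulation for illustration only; the abstract does not give the paper's exact loss.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def comparative_guidance_loss(feat_thenar, feat_hypothenar, has_cts):
    """Hypothetical contrastive-style objective: pull the thenar and
    hypothenar embeddings together for healthy wrists, and push them
    apart for CTS, where the echotexture difference between the two
    muscles is the diagnostic signal."""
    sim = cosine_similarity(feat_thenar, feat_hypothenar)
    return max(0.0, sim) if has_cts else 1.0 - sim
```

In practice such a term would be added to the classification loss so the network is rewarded for making the two muscle representations diverge exactly when the median nerve is affected.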

Predicting long-term patency of radiocephalic arteriovenous fistulas with machine learning and the PREDICT-AVF web app.

Fitzgibbon JJ, Ruan M, Heindel P, Appah-Sampong A, Dey T, Khan A, Hentschel DM, Ozaki CK, Hussain MA

PubMed · Jun 1, 2025
The goal of this study was to expand our previously created prediction tool (PREDICT-AVF) and web app by estimating long-term primary and secondary patency of radiocephalic AVFs. The data source was 911 patients from the PATENCY-1 and PATENCY-2 randomized controlled trials, which enrolled patients undergoing new radiocephalic AVF creation with prospective longitudinal follow-up and ultrasound measurements. Models were built using a combination of baseline characteristics and post-operative ultrasound measurements to estimate patency up to 2.5 years. Discrimination performance was assessed, and an interactive web app was created using the most robust model. At 2.5 years, the unadjusted primary and secondary patency rates (95% CI) were 29% (26-33%) and 68% (65-72%), respectively. Models using baseline characteristics generally did not perform as well as those using post-operative ultrasound measurements. Overall, the Cox model (4-6 week ultrasound) had the best discrimination performance for primary and secondary patency, with integrated Brier scores of 0.183 (0.167, 0.199) and 0.106 (0.085, 0.126), respectively. Expansion of the PREDICT-AVF web app to include prediction of long-term patency can help guide clinicians in developing comprehensive end-stage kidney disease Life-Plans with hemodialysis access patients.
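The Brier score used to compare the models measures calibration and discrimination together. A minimal sketch at a single fixed horizon, ignoring censoring (the study's integrated Brier score additionally averages over time and weights for censoring, which is omitted here):

```python
import numpy as np

def brier_score(event_by_horizon, predicted_patency):
    """Mean squared difference between the predicted probability of
    remaining patent and the observed outcome at a fixed horizon
    (1 = still patent, 0 = failed). Censoring is ignored in this sketch."""
    observed = 1.0 - np.asarray(event_by_horizon, dtype=float)
    return float(np.mean((np.asarray(predicted_patency, dtype=float) - observed) ** 2))
```

Lower is better: a perfectly calibrated, perfectly discriminating model scores 0, while always predicting 0.5 scores 0.25 regardless of outcomes.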

Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

PubMed · Jun 1, 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that need breast density. The purpose of this study is to explore the use of artificial intelligence (AI) to predict BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods for predicting breast density directly from breast ultrasound imaging, as well as machine learning models trained on breast ultrasound image gray-level histograms alone. The use of AI-derived breast ultrasound breast density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing. 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) were retrospectively selected for inclusion from three institutions: 10,393 women for training (302,574 images), 2593 for validation (69,842 images), and 1074 for testing (28,616 images). The AI model achieves AUROC 0.854 in breast density classification and statistically significantly outperforms all image statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predict 5-year breast cancer risk with 0.606 and 0.599 AUROC (DeLong's test p-value: 0.67), respectively. BI-RADS breast density can be estimated from breast ultrasound imaging with high accuracy, and the AI model provided superior estimates to other machine learning approaches. Furthermore, we demonstrate that age-adjusted, AI-derived breast ultrasound breast density provides similar predictive power to mammographic breast density in our population. Estimated breast density from ultrasound may be useful in performing breast cancer risk assessment in areas where mammography may not be available. National Cancer Institute.
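The AUROC values reported above have a simple rank-based interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch via the Mann-Whitney U statistic (the function is illustrative, not the study's evaluation code):

```python
import numpy as np

def auroc(labels, scores):
    """AUROC as the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive outranks the
    negative, with ties counting half."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))
```

Under this reading, the reported 0.606 vs. 0.599 means both density measures rank a future cancer case above a non-case only slightly better than chance, which is typical for a single risk factor inside a 5-year risk model.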

Tailoring ventilation and respiratory management in pediatric critical care: optimizing care with precision medicine.

Beauchamp FO, Thériault J, Sauthier M

PubMed · Jun 1, 2025
Critically ill children admitted to the intensive care unit frequently need respiratory care to support lung function. Mechanical ventilation is a complex field with multiple parameters to set, and the development of precision medicine will allow clinicians to personalize respiratory care and improve patient outcomes. Lung and diaphragmatic ultrasound, electrical impedance tomography, neurally adjusted ventilatory assist ventilation, and the use of monitoring data in machine learning models are increasingly used to tailor care. Each modality offers insight into a different aspect of the patient's respiratory function and enables treatment to be adjusted to better support the patient's physiology. Precision medicine in respiratory care has been associated with decreased ventilation time, increased extubation and ventilation weaning success, and an improved ability to identify phenotypes to guide treatment and predict outcomes. This review focuses on the use of precision medicine in pediatric acute respiratory distress syndrome, asthma, bronchiolitis, extubation readiness trials and ventilation weaning, ventilator-associated pneumonia, and other respiratory tract infections. Precision medicine is revolutionizing respiratory care and will decrease complications associated with ventilation. More research is needed to standardize its use and better evaluate its impact on patient outcomes.

Axial Skeletal Assessment in Osteoporosis Using Radiofrequency Echographic Multi-spectrometry: Diagnostic Performance, Clinical Utility, and Future Directions.

As'ad M

PubMed · Jun 1, 2025
Osteoporosis, a prevalent skeletal disorder, necessitates accurate and accessible diagnostic tools for effective disease management and fracture prevention. While dual-energy X-ray absorptiometry (DXA) remains the clinical standard for bone mineral density (BMD) assessment, its limitations, including ionizing radiation exposure and susceptibility to artifacts, underscore the need for alternative technologies. Ultrasound-based methods have emerged as promising radiation-free alternatives, with radiofrequency echographic multi-spectrometry (REMS) representing a significant advancement in axial skeleton assessment, specifically at the lumbar spine and proximal femur. REMS analyzes unfiltered radiofrequency ultrasound signals, providing not only BMD estimates but also a novel fragility score (FS), which reflects bone quality and microarchitectural integrity. This review critically evaluates the underlying principles, diagnostic performance, and clinical applications of REMS. It compares REMS with DXA, quantitative computed tomography (QCT), and trabecular bone score (TBS), highlighting REMS's potential advantages in artifact-prone scenarios and specific populations, including children and patients with secondary osteoporosis. The clinical utility of REMS in fracture risk prediction and therapy monitoring is explored alongside its operational precision, cost-effectiveness, and portability. In addition, the integration of artificial intelligence (AI) within REMS software has enhanced its capacity for artifact exclusion and automated spectral interpretation, improving usability and reproducibility. Current limitations, such as the need for broader validation and guideline inclusion, are identified, and future research directions are proposed. These include multicenter validation studies, development of pediatric and secondary osteoporosis reference models, and deeper evaluation of AI-driven enhancements. REMS offers a compelling, non-ionizing alternative for axial bone health assessment and may significantly advance the diagnostic landscape for osteoporosis care.

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

PubMed · Jun 1, 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer using a combination of radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients involved demographic data, radiomics, O-RADS standardized descriptions, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and PyRadiomics, with feature selection performed using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features, with 1049 features initially extracted and 7 features selected through regression analysis. Multimodal information such as ascites, fallopian tube invasion, greatest diameter, HE4, and D-dimer levels were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively. The nomogram displayed superior diagnostic performance compared to single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment, offering a potential strategy for improving outcomes in the management of ovarian cancer with PM.
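The LASSO feature-selection step (1049 radiomics features reduced to 7) works by driving uninformative coefficients exactly to zero. A minimal coordinate-descent sketch on standardized features; the `lasso_select` helper and its `alpha` value are illustrative, not the study's actual pipeline:

```python
import numpy as np

def lasso_select(X, y, alpha=0.1, n_iter=200):
    """Coordinate-descent LASSO on standardized features. Features whose
    coefficients are driven to exactly zero by the soft-threshold are
    dropped, mirroring the radiomics feature-selection step.
    Returns (coefficients, indices of kept features)."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = y - y.mean()
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]          # partial residual excluding j
            rho = X[:, j] @ r / n                   # correlation with residual
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / (X[:, j] @ X[:, j] / n)
    return w, np.flatnonzero(np.abs(w) > 1e-8)
```

Features surviving this shrinkage would then feed the logistic-regression nomogram alongside the clinical predictors.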

Deep Learning to Localize Photoacoustic Sources in Three Dimensions: Theory and Implementation.

Gubbi MR, Bell MAL

PubMed · Jun 1, 2025
Surgical tool tip localization and tracking are essential components of surgical and interventional procedures. The cross sections of tool tips can be considered as acoustic point sources to achieve these tasks with deep learning applied to photoacoustic channel data. However, source localization was previously limited to the lateral and axial dimensions of an ultrasound transducer. In this article, we developed a novel deep learning-based 3-D photoacoustic point source localization system using an object detection-based approach extended from our previous work. In addition, we derived theoretical relationships among point source locations, sound speeds, and waveform shapes in raw photoacoustic channel data frames. We then used this theory to develop a novel deep learning instance segmentation-based 3-D point source localization system. When tested with 4000 simulated, 993 phantom, and 1983 ex vivo channel data frames, the two systems achieved F1 scores as high as 99.82%, 93.05%, and 98.20%, respectively, and Euclidean localization errors (mean ± one standard deviation) as low as 1.46 ± 1.11 mm, 1.58 ± 1.30 mm, and 1.55 ± 0.86 mm, respectively. In addition, the instance segmentation-based system simultaneously estimated sound speeds with absolute errors (mean ± one standard deviation) of 19.22 ± 26.26 m/s in simulated data and standard deviations ranging from 14.6 to 32.3 m/s in experimental data. These results demonstrate the potential of the proposed photoacoustic imaging-based methods to localize and track tool tips in three dimensions during surgical and interventional procedures.
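The core geometric relationship behind the waveform shapes is time-of-flight: a point source produces a hyperbolic arrival-time curve across the transducer elements, parameterized by the source position and the sound speed. A minimal 2-D sketch (array geometry and source position are made-up example values, not the paper's setup):

```python
import numpy as np

# Transducer elements along x at depth z = 0; point source at (x_s, z_s).
elements_x = np.linspace(-0.019, 0.019, 128)   # 128-element array, metres
x_s, z_s = 0.004, 0.030                        # example source location, metres
c = 1540.0                                     # assumed soft-tissue sound speed, m/s

# Arrival time at each element: Euclidean distance / sound speed.
# Plotted against element index, t traces the hyperbola seen in channel data.
t = np.sqrt((elements_x - x_s) ** 2 + z_s ** 2) / c
```

The curve's vertex sits at the element nearest the source and its curvature depends on depth and sound speed, which is why a network reading the waveform shape can, in principle, recover both the source location and the sound speed.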

Machine learning can reliably predict malignancy of breast lesions based on clinical and ultrasonographic features.

Buzatto IPC, Recife SA, Miguel L, Bonini RM, Onari N, Faim ALPA, Silvestre L, Carlotti DP, Fröhlich A, Tiezzi DG

PubMed · Jun 1, 2025
To establish a reliable machine learning model to predict malignancy in breast lesions identified by ultrasound (US) and optimize the negative predictive value to minimize unnecessary biopsies. We included clinical and ultrasonographic attributes from 1526 breast lesions classified as BI-RADS 3, 4a, 4b, 4c, 5, and 6 that underwent US-guided breast biopsy in four institutions. We selected the most informative attributes to train nine machine learning models, ensemble models, and models with tuned thresholds to make inferences about the diagnosis of BI-RADS 4a and 4b lesions (validation dataset). We tested the performance of the final model with 403 new suspicious lesions. The most informative attributes were the shape, margin, orientation, and size of the lesion, the resistance index of the internal vessel, the age of the patient, and the presence of a palpable lump. The highest mean negative predictive value (NPV) was achieved with the K-Nearest Neighbors algorithm (97.9%). Making ensembles did not improve performance, but tuning the threshold did, and we chose the XGBoost algorithm with a tuned threshold as the final model. Its tested performance was: NPV 98.1%, false-negative rate 1.9%, positive predictive value 77.1%, false-positive rate 22.9%. Applying this final model, we would have missed 2 of the 231 malignant lesions in the test dataset (0.8%). Machine learning can help physicians predict malignancy in suspicious breast lesions identified by US. Our final model would avoid 60.4% of the biopsies in benign lesions while missing less than 1% of cancer cases.
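Threshold tuning for a high-NPV screening model amounts to choosing the most permissive "benign" cut-off that still keeps the false-negative rate below a target. A minimal sketch; the function and the `max_fnr` budget are illustrative, not the study's procedure:

```python
import numpy as np

def tune_threshold(y_true, probs, max_fnr=0.02):
    """Return the highest probability threshold such that calling every
    lesion below it 'benign' misses at most max_fnr of the malignant
    cases. Lesions scoring below the threshold would be spared biopsy."""
    y_true, probs = np.asarray(y_true), np.asarray(probs)
    best = 0.0
    for thr in np.unique(probs):
        missed = np.mean(probs[y_true == 1] < thr)  # malignant called benign
        if missed <= max_fnr:
            best = max(best, thr)
    return best
```

Raising the threshold spares more benign lesions from biopsy but monotonically increases the miss rate, which is the trade-off the study resolves at under 1% missed cancers.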

Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence.

Wang Q, He B, Yu J, Zhang B, Yang J, Liu J, Ma X, Wei S, Li S, Zheng H, Tang Z

PubMed · Jun 1, 2025
Ultrasound-guided quadratus lumborum block (QLB) technology has become a widely used perioperative analgesia method during abdominal and pelvic surgeries. Due to the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks rely heavily on anesthesiologist experience, so using artificial intelligence (AI) to identify different tissue regions in ultrasound images is crucial. In our study, we retrospectively collected 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM; the external oblique, internal oblique, and transversus abdominis muscles (collectively referred to as the EIT); and the bones. Q-VUM demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, Dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. For the QLM specifically, the IoU, recall, precision, and Dice coefficient were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, 85% of the pixels Q-VUM predicted as blocked area fell within the actual blocked area, and our model exhibited stronger segmentation performance than common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720). In summary, we proposed a model named Q-VUM that can accurately identify the anatomical structure of the quadratus lumborum in real time, aiding anesthesiologists in precisely locating the nerve block site and thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.
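The overlap metrics quoted above (IoU and Dice) are simple set-overlap ratios on binary masks. A minimal NumPy sketch of how such per-class scores are computed (illustrative, not the study's evaluation code):

```python
import numpy as np

def iou_and_dice(pred, gt):
    """Per-class overlap metrics for binary segmentation masks:
    IoU  = |A ∩ B| / |A ∪ B|
    Dice = 2|A ∩ B| / (|A| + |B|)"""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union, 2 * inter / (pred.sum() + gt.sum())
```

The mIoU reported for Q-VUM is the mean of the per-class IoU over all segmented tissue classes; Dice is always at least as large as IoU for the same masks, which is why the paper's Dice (0.841) exceeds its mIoU (0.734).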