Page 32 of 41404 results

Subclinical atrial fibrillation prediction based on deep learning and strain analysis using echocardiography.

Huang SH, Lin YC, Chen L, Unankard S, Tseng VS, Tsao HM, Tang GJ

pubmed logopapers · May 31 2025
Subclinical atrial fibrillation (SCAF), also known as atrial high-rate episodes (AHREs), refers to asymptomatic episodes of elevated atrial rate associated with increased risks of atrial fibrillation and cardiovascular events. Although deep learning (DL) models leveraging echocardiographic ultrasound images are widely used for cardiac function analysis, their application to AHRE prediction remains unexplored. This study introduces a novel DL-based framework for automatic AHRE detection using echocardiograms. The approach encompasses left atrium (LA) segmentation, LA strain feature extraction, and AHRE classification. Data from 117 patients with cardiac implantable electronic devices undergoing echocardiography were analyzed, with 80% allocated to the development set and 20% to the test set. LA segmentation accuracy was quantified using the Dice coefficient, yielding scores of 0.923 for the LA cavity and 0.741 for the LA wall. For AHRE classification, metrics such as area under the curve (AUC), accuracy, sensitivity, and specificity were employed. A transformer-based model integrating patient characteristics demonstrated robust performance, achieving a mean AUC of 0.815, accuracy of 0.809, sensitivity of 0.800, and specificity of 0.783 for a 24-h AHRE duration threshold. This framework represents a reliable tool for AHRE assessment and holds significant potential for early SCAF detection, enhancing clinical decision-making and patient outcomes.
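The Dice coefficient used above to quantify LA segmentation overlap can be sketched as follows (a minimal illustration with toy binary masks, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy 4x4 masks: 3 overlapping pixels, 4 predicted, 4 true -> Dice = 6/8 = 0.75
pred = np.zeros((4, 4), dtype=int)
pred[0, :4] = 1
true = np.zeros((4, 4), dtype=int)
true[0, 1:4] = 1
true[1, 0] = 1
print(dice_coefficient(pred, true))
```

A score of 1.0 means the predicted and reference masks coincide exactly; the study's 0.923 (LA cavity) versus 0.741 (LA wall) gap reflects how much harder thin-wall structures are to delineate.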

Combining structural equation modeling analysis with machine learning for early malignancy detection in Bethesda Category III thyroid nodules.

Kasap ZA, Kurt B, Güner A, Özsağır E, Ercin ME

pubmed logopapers · May 30 2025
Atypia of Undetermined Significance (AUS), classified as Category III in the Bethesda Thyroid Cytopathology Reporting System, presents significant diagnostic challenges for clinicians. This study aims to develop a clinical decision support system that integrates structural equation modeling (SEM) and machine learning to predict malignancy in AUS thyroid nodules. The model integrates preoperative clinical data, ultrasonography (USG) findings, and cytopathological and morphometric variables. This retrospective cohort study was conducted between 2011 and 2019 at Karadeniz Technical University (KTU) Farabi Hospital. The dataset included 56 variables derived from 204 thyroid nodules diagnosed via ultrasound-guided fine-needle aspiration biopsy (FNAB) in 183 patients older than 18 years. Logistic regression (LR) and SEM were used to identify risk factors for early thyroid cancer detection. Subsequently, machine learning algorithms, including Support Vector Machines (SVM), Naive Bayes (NB), and Decision Trees (DT), were used to construct decision support models. After feature selection with SEM, the SVM model achieved the highest performance, with an accuracy of 82%, a specificity of 97%, and an AUC of 84%. Additional models were developed for different scenarios, and their performance metrics were compared. Accurate preoperative prediction of malignancy in thyroid nodules is crucial for avoiding unnecessary surgeries. The proposed model supports more informed clinical decision-making by effectively identifying benign cases, thereby reducing surgical risk and improving patient care.
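The accuracy, sensitivity, and specificity figures reported throughout these abstracts all derive from a binary confusion matrix; a minimal sketch (the toy labels below are illustrative, not the study's data):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels (1 = malignant)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # true negatives
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
    return {
        "accuracy": (tp + tn) / y_true.size,
        "sensitivity": tp / (tp + fn),  # recall on malignant cases
        "specificity": tn / (tn + fp),  # recall on benign cases
    }

# Hypothetical predictions over 8 nodules, for illustration only
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
m = classification_metrics(y_true, y_pred)
```

High specificity with moderate accuracy, as in the SVM model above, means the model rarely flags benign nodules as malignant, which is what makes it useful for avoiding unnecessary surgery.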

Deep learning without borders: recent advances in ultrasound image classification for liver diseases diagnosis.

Yousefzamani M, Babapour Mofrad F

pubmed logopapers · May 30 2025
Liver diseases are among the top global health burdens. Noninvasive, patient-friendly diagnostics have grown increasingly important, and among them ultrasound is the most widely used. Deep learning, in particular convolutional neural networks (CNNs), has transformed liver disease classification by automating the analysis of images that are difficult to interpret. This review summarizes progress in deep learning techniques for classifying liver diseases from ultrasound imaging. It evaluates models ranging from CNNs to hybrid architectures such as CNN-Transformer for detecting fatty liver, fibrosis, and liver cancer, among others. Challenges in generalizing data and models across different clinical environments are also discussed. Deep learning holds great promise for automatic diagnosis of liver diseases, and most models have achieved high accuracy in clinical studies. Despite this promise, generalization remains a challenge. Future hardware developments and access to high-quality clinical data should further improve the performance of these models and secure their role in the diagnosis of liver diseases.

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

pubmed logopapers · May 30 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
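The mAP@50 metric reported above counts a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5; a minimal IoU sketch with hypothetical boxes (not from the study):

```python
def iou(box_a, box_b):
    """Intersection-over-union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

pred_box = (0, 0, 10, 10)  # hypothetical predicted tumor box
gt_box = (0, 0, 10, 5)     # hypothetical ground-truth box
hit_at_50 = iou(pred_box, gt_box) >= 0.5  # would count as a TP at the mAP@50 cutoff
```

mAP@50-95 (0.65 here) averages the same precision computation over IoU thresholds from 0.5 to 0.95, which is why it is always the stricter of the two numbers.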

Using Deep learning to Predict Cardiovascular Magnetic Resonance Findings from Echocardiography Videos.

Sahashi Y, Vukadinovic M, Duffy G, Li D, Cheng S, Berman DS, Ouyang D, Kwan AC

pubmed logopapers · May 30 2025
Echocardiography is the most common modality for assessing cardiac structure and function. While cardiac magnetic resonance (CMR) imaging is less accessible, CMR can provide unique tissue characterization including late gadolinium enhancement (LGE), T1 and T2 mapping, and extracellular volume (ECV), which are associated with tissue fibrosis, infiltration, and inflammation. Deep learning has been shown to uncover findings not recognized by clinicians; however, it is unknown whether CMR-based tissue characteristics can be derived from echocardiography videos using deep learning. We assessed the performance of a deep learning model applied to echocardiography for detecting CMR-specific parameters, including LGE presence and abnormal T1, T2, or ECV. In a retrospective single-center study, adult patients with CMR and echocardiography studies within 30 days were included. A video-based convolutional neural network was trained on echocardiography videos to predict CMR-derived labels, including LGE presence and abnormal T1, T2, or ECV, across echocardiography views. The model was also trained to predict the presence or absence of wall motion abnormality (WMA) as a positive control for model function. Model performance was evaluated on a held-out test dataset not used for training. The study population included 1,453 adult patients (mean age 56±18 years, 42% female) with 2,556 paired echocardiography studies occurring at a median of 2 days after CMR (interquartile range 2 days prior to 6 days after). The model had high predictive capability for presence of WMA (AUC 0.873 [95% CI 0.816-0.922]), which served as the positive control. However, the model was unable to reliably detect the presence of LGE (AUC 0.699 [0.613-0.780]), abnormal native T1 (AUC 0.614 [0.500-0.715]), abnormal T2 (AUC 0.553 [0.420-0.692]), or abnormal ECV (AUC 0.564 [0.455-0.691]).
Deep learning applied to echocardiography accurately identified CMR-based WMA but was unable to predict tissue characteristics, suggesting that the signal for these tissue characteristics may not be present within ultrasound videos and that CMR remains essential for tissue characterization within cardiology.
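The AUC values compared here can be computed rank-wise as the probability that a random positive case outscores a random negative one (ties count half); a minimal sketch with toy scores, not the study's data:

```python
def auc_score(y_true, scores):
    """Rank-based AUC: P(random positive outscores random negative), ties = 0.5."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos
        for n in neg
    )
    return wins / (len(pos) * len(neg))

# Toy example: the model ranks most (but not all) positives above negatives
y = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc_score(y, scores))
```

An AUC near 0.5, like the T2 (0.553) and ECV (0.564) results above, means the model's scores barely separate positives from negatives at all, which is the basis for the paper's negative conclusion.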

Deep learning based motion correction in ultrasound microvessel imaging approach improves thyroid nodule classification.

Saini M, Larson NB, Fatemi M, Alizad A

pubmed logopapers · May 30 2025
To address inter-frame motion artifacts in ultrasound quantitative high-definition microvasculature imaging (qHDMI), we introduce a novel deep learning-based motion correction technique. This approach enables more accurate quantitative biomarkers to be derived from motion-corrected HDMI images, improving the classification of thyroid nodules. Inter-frame motion, often caused by carotid artery pulsation near the thyroid, can degrade image quality and compromise biomarker reliability, potentially leading to misdiagnosis. Our technique compensates for these motion-induced artifacts, preserving the fine vascular structures critical for accurate biomarker extraction. In this study, we used the motion-corrected images obtained through this framework to derive quantitative biomarkers and evaluated their effectiveness in thyroid nodule classification. Based on inter-frame correlation values, we split the dataset into low-motion and high-motion cases and performed thyroid nodule classification on both the high-motion subset and the full dataset. A comprehensive analysis of the biomarker distributions obtained from the motion-corrected images shows significantly clearer separation between benign and malignant nodule characteristics than the original motion-containing images. Specifically, bifurcation angle values derived from qHDMI become more consistent with the expected trend after motion correction. Classification sensitivity remained unchanged for low-motion cases, while improving by 9.2% for high-motion cases. These findings highlight that motion correction yields more accurate biomarkers, which improves overall classification performance.
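One plausible reading of the inter-frame correlation criterion used to split low- and high-motion cases is the mean Pearson correlation between consecutive frames; a sketch with synthetic frames (the study's exact criterion may differ):

```python
import numpy as np

def interframe_correlation(frames):
    """Mean Pearson correlation between consecutive flattened frames."""
    corrs = []
    for a, b in zip(frames[:-1], frames[1:]):
        a = a.ravel().astype(float)
        b = b.ravel().astype(float)
        corrs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
base = rng.random((32, 32))
# "Still" sequence: same anatomy plus tiny noise -> correlation near 1
still = [base + 0.01 * rng.random((32, 32)) for _ in range(5)]
# "Moving" sequence: independent frames -> correlation near 0
moving = [rng.random((32, 32)) for _ in range(5)]
print(interframe_correlation(still), interframe_correlation(moving))
```

Thresholding such a statistic is one simple way to triage which clips need motion correction before biomarker extraction.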

Dharma: A novel machine learning framework for pediatric appendicitis: diagnosis, severity assessment and evidence-based clinical decision support.

Thapa, A., Pahari, S., Timilsina, S., Chapagain, B.

medrxiv logopreprint · May 29 2025
Background: Acute appendicitis remains a challenging diagnosis in pediatric populations, with high rates of misdiagnosis and negative appendectomies despite advances in imaging modalities. Current diagnostic tools, including clinical scoring systems such as the Alvarado and Pediatric Appendicitis Score (PAS), lack sufficient sensitivity and specificity, while reliance on CT scans raises concerns about radiation exposure, contrast hazards, and sedation in children. Moreover, no established tool effectively predicts progression from uncomplicated to complicated appendicitis, creating a critical gap in clinical decision-making.
Objective: To develop and evaluate a machine learning model that integrates clinical, laboratory, and radiological findings for accurate diagnosis and complication prediction in pediatric appendicitis, and to deploy this model as an interpretable web-based tool for clinical decision support.
Methods: We analyzed data from 780 pediatric patients (ages 0-18) with suspected appendicitis admitted to Children's Hospital St. Hedwig, Regensburg, between 2016 and 2021. For severity prediction, the dataset was augmented with 430 additional cases from published literature, and only confirmed cases of acute appendicitis (n=602) were used. After feature selection using statistical methods and recursive feature elimination, we developed a Random Forest model named Dharma, optimized through hyperparameter tuning and cross-validation. Model performance was evaluated on independent test sets and compared with conventional diagnostic tools.
Results: Dharma demonstrated superior diagnostic performance with an AUC-ROC of 0.96 (±0.02 SD) in cross-validation and 0.97-0.98 on independent test sets. At an optimal threshold of 64%, the model achieved specificity of 88%-98%, sensitivity of 89%-95%, and positive predictive value of 93%-99%. For complication prediction, Dharma attained a sensitivity of 93% (±0.05 SD) in cross-validation and 96% on the test set, with a negative predictive value of 98%. The model maintained strong performance even in cases where the appendix could not be visualized on ultrasonography (AUC-ROC 0.95, sensitivity 89%, specificity 87% at a threshold of 30%).
Conclusion: Dharma is a novel, interpretable machine learning-based clinical decision support tool designed to address the diagnostic challenges of pediatric appendicitis by integrating easily obtainable clinical, laboratory, and radiological data into a unified, real-time predictive framework. Unlike traditional scoring systems and imaging modalities, which may lack specificity or raise safety concerns in children, Dharma demonstrates high accuracy in diagnosing appendicitis and predicting progression from uncomplicated to complicated cases, potentially reducing unnecessary surgeries and CT scans. Its robust performance, even with incomplete imaging data, underscores its utility in resource-limited settings. Delivered through an intuitive, transparent, and interpretable web application, Dharma supports frontline providers, particularly in low- and middle-income settings, in making timely, evidence-based decisions, streamlining patient referrals, and improving clinical outcomes. By bridging critical gaps in current diagnostic and prognostic tools, Dharma offers a practical and accessible solution tailored to real-world pediatric surgical care across diverse healthcare contexts. Furthermore, the underlying framework and concepts of Dharma may be adaptable to other clinical challenges beyond pediatric appendicitis, providing a foundation for broader applications of machine learning in healthcare.
Author Summary: Accurate diagnosis of pediatric appendicitis remains challenging, with current clinical scores and imaging tests limited by sensitivity, specificity, predictive values, and safety concerns. We developed Dharma, an interpretable machine learning model that integrates clinical, laboratory, and radiological data to assist in diagnosing appendicitis and predicting its severity in children. Evaluated on a large dataset supplemented by published cases, Dharma demonstrated strong diagnostic and prognostic performance, including in cases with incomplete imaging, making it potentially especially useful in resource-limited settings for early decision-making and streamlined referrals. Available as a web-based tool, it provides real-time support to healthcare providers in making evidence-based decisions that could reduce negative appendectomies while avoiding hazards associated with advanced imaging modalities such as sedation, contrast, or radiation exposure. Furthermore, the open-access concepts and framework underlying Dharma have the potential to address diverse healthcare challenges beyond pediatric appendicitis.
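Dharma's two operating points (a 64% threshold for diagnosis, 30% when the appendix is not visualized) amount to binarizing predicted probabilities at different cutoffs; a minimal sketch with hypothetical model outputs:

```python
def apply_threshold(probs, threshold=0.64):
    """Binarize predicted appendicitis probabilities at a decision threshold."""
    return [int(p >= threshold) for p in probs]

# Hypothetical model outputs for five patients (illustrative values only)
probs = [0.10, 0.55, 0.64, 0.80, 0.95]
labels_64 = apply_threshold(probs)        # stricter operating point from the abstract
labels_30 = apply_threshold(probs, 0.30)  # looser point for non-visualized appendix cases
```

Lowering the threshold flags more patients as positive, raising sensitivity at the cost of specificity, which is why the non-visualized-appendix setting uses the looser cutoff.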

Prediction of clinical stages of cervical cancer via machine learning integrated with clinical features and ultrasound-based radiomics.

Zhang M, Zhang Q, Wang X, Peng X, Chen J, Yang H

pubmed logopapers · May 29 2025
To investigate the predictive performance of a model combining machine learning (ML) with clinical features and ultrasound radiomics for the clinical staging of cervical cancer. General clinical and ultrasound data of 227 patients with cervical cancer who underwent transvaginal ultrasonography were retrospectively analyzed. Radiomics features were extracted from regions of interest (ROIs) in the original and derived images, and feature screening was performed. The selected features were used to construct the radiomics model and the Radscore formula. Prediction models were developed in Python using several ML algorithms on an integrated dataset of clinical features and ultrasound radiomics. Model performance was evaluated via AUC, and calibration curves and clinical decision curves were plotted to assess model efficacy. The model developed with a support vector machine (SVM) emerged as the superior model. Integrating clinical characteristics with ultrasound radiomics, it showed notable performance in both the training and validation datasets. Specifically, in the training set the model obtained an AUC of 0.88 (95% confidence interval (CI): 0.83-0.93), alongside 0.84 accuracy, 0.68 sensitivity, and 0.91 specificity. On validation, the model maintained an AUC of 0.77 (95% CI: 0.63-0.88), with 0.77 accuracy, 0.62 sensitivity, and 0.83 specificity. The calibration curve aligned closely with the perfect calibration line, and clinical decision curve analysis indicated clinical utility across a wide range of thresholds. The clinical- and radiomics-based SVM model provides a noninvasive tool for predicting cervical cancer stage, integrating ultrasound radiomics and key clinical factors (age, abortion history) to improve risk stratification. This approach could guide personalized treatment (surgery vs. chemoradiation) and optimize staging accuracy, particularly in resource-limited settings where advanced imaging is scarce.
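A Radscore of the kind constructed here is typically a weighted linear combination of the selected radiomic features; a minimal sketch with hypothetical weights (the actual coefficients come from the fitted model, not from this example):

```python
def radscore(features, weights, intercept=0.0):
    """Radscore = intercept + sum of (weight * feature) over selected features."""
    return intercept + sum(w * f for w, f in zip(weights, features))

# Hypothetical selected features and coefficients, for illustration only
features = [1.2, 0.5, 3.0]   # e.g. texture, shape, intensity descriptors
weights = [0.4, -1.1, 0.2]   # signed weights from feature screening
score = radscore(features, weights, intercept=-0.3)
```

The scalar score can then be fed alongside clinical variables (here, age and abortion history) into the downstream SVM classifier.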

Ultrasound image-based contrastive fusion non-invasive liver fibrosis staging algorithm.

Dong X, Tan Q, Xu S, Zhang J, Zhou M

pubmed logopapers · May 29 2025
The diagnosis of liver fibrosis is usually based on histopathological examination of liver biopsy specimens. Although liver biopsy is accurate, it carries invasive risks and high costs that some patients find difficult to accept. This study therefore uses deep learning to build a liver fibrosis diagnosis model that achieves non-invasive staging of liver fibrosis, avoiding complications and reducing costs. Ultrasound examination was used to obtain image sections of pure liver parenchyma. With patient consent, and in combination with the results of percutaneous liver biopsy, the degree of liver fibrosis indicated by the ultrasound data was graded. Our method introduces the concept of a Fibrosis Contrast Layer (FCL), which helps the model capture the significant differences among the characteristics of liver fibrosis at various grades. Finally, through label fusion (LF), the characteristics of liver specimens at the same fibrosis stage are abstracted and fused to improve the accuracy and stability of the diagnostic model. Experimental evaluation demonstrated that our model achieved an accuracy of 85.6%, outperforming baseline models such as ResNet (81.9%), InceptionNet (80.9%), and VGG (80.8%). Even under a small-sample condition (30% of the data), the model maintained an accuracy of 84.8%, significantly outperforming traditional deep-learning models, which exhibited sharp performance declines. In both the full-dataset and 30% small-sample training settings, the FCLLF model's test performance exceeded that of traditional deep learning models such as VGG, ResNet, and InceptionNet, and its performance was more stable, especially in the small-sample setting. Our proposed FCLLF model effectively improves the accuracy and stability of non-invasive liver fibrosis staging using ultrasound imaging.
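The abstract describes label fusion (LF) only at a high level; one plausible reading is averaging feature vectors that share a fibrosis-stage label so each stage gets a fused prototype (the function and toy data below are hypothetical, not the paper's implementation):

```python
import numpy as np

def label_fusion(features, labels):
    """Average feature vectors that share the same fibrosis-stage label."""
    fused = {}
    for lab in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == lab]
        fused[lab] = np.mean([features[i] for i in idx], axis=0)
    return fused

# Toy 2-D feature vectors for three specimens across two fibrosis stages
feats = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
stages = ["F1", "F1", "F4"]
fused = label_fusion(feats, stages)
```

Fusing same-stage examples into a single representation is one way such a model could smooth out specimen-level noise, consistent with the stability gains reported in the small-sample setting.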