Page 5 of 56552 results

Bayesian machine learning enables discovery of risk factors for hepatosplenic multimorbidity related to schistosomiasis

Zhi, Y.-C., Anguajibi, V., Oryema, J. B., Nabatte, B., Opio, C. K., Kabatereine, N. B., Chami, G. F.

medRxiv preprint · Sep 19 2025
One in 25 deaths worldwide is related to liver disease, often involving multiple hepatosplenic conditions. Yet little is understood about the risk factors for hepatosplenic multimorbidity, especially in the context of chronic infections. We present a novel Bayesian multitask learning framework to jointly model 45 hepatosplenic conditions, assessed using point-of-care B-mode ultrasound, in 3155 individuals aged 5-91 years within the SchistoTrack cohort across rural Uganda, where chronic intestinal schistosomiasis is endemic. We identified distinct and shared biomedical, socioeconomic, and spatial risk factors for individual conditions and for hepatosplenic multimorbidity, and introduced methods for measuring condition dependencies as risk factors. Notably, for gastro-oesophageal varices, we identified older age, lower hemoglobin concentration, and severe schistosomal liver fibrosis as key risk factors. Our findings provide a compendium of risk factors to inform surveillance, triage, and follow-up, while our model enables improved prediction of hepatosplenic multimorbidity and, if validated on other systems, of general multimorbidity.
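The Bayesian framework itself is not reproduced here, but the abstract's idea of measuring condition dependencies as risk factors can be illustrated with a simple pairwise co-occurrence odds ratio over a binary condition matrix — a minimal sketch of a common dependency-screening step, not the paper's method (all names illustrative):

```python
import numpy as np

def pairwise_odds_ratio(conditions: np.ndarray, i: int, j: int, eps: float = 0.5) -> float:
    """Odds ratio between two binary condition columns.

    conditions: (n_individuals, n_conditions) 0/1 matrix.
    eps: Haldane-Anscombe correction to avoid division by zero.
    """
    a = np.sum((conditions[:, i] == 1) & (conditions[:, j] == 1))  # both present
    b = np.sum((conditions[:, i] == 1) & (conditions[:, j] == 0))  # only i
    c = np.sum((conditions[:, i] == 0) & (conditions[:, j] == 1))  # only j
    d = np.sum((conditions[:, i] == 0) & (conditions[:, j] == 0))  # both absent
    return ((a + eps) * (d + eps)) / ((b + eps) * (c + eps))
```

An odds ratio well above 1 flags conditions that co-occur more often than chance, a candidate dependency to feed into a joint model.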

Deep Learning Integration of Endoscopic Ultrasound Features and Serum Data Reveals LTB4 as a Diagnostic and Therapeutic Target in ESCC.

Huo S, Zhang W, Wang Y, Qi J, Wang Y, Bai C

PubMed · Sep 18 2025
Background: Early diagnosis and accurate prediction of treatment response in esophageal squamous cell carcinoma (ESCC) remain major clinical challenges due to the lack of reliable, noninvasive biomarkers. Recently, artificial intelligence-driven endoscopic ultrasound image analysis has shown great promise in revealing genomic features associated with imaging phenotypes. Methods: A prospective study of 115 patients with ESCC was conducted. Deep features were extracted from endoscopic ultrasound images using a ResNet50 convolutional neural network. Important features shared across three machine learning models (NN, GLM, DT) were used to construct an image-derived signature. Plasma levels of leukotriene B4 (LTB4) and other inflammatory markers were measured using enzyme-linked immunosorbent assay. Correlations between the signature and inflammation markers were analyzed, followed by logistic regression and subgroup analyses. Results: The endoscopic ultrasound image-derived signature, generated using deep learning algorithms, effectively distinguished esophageal cancer from normal esophageal tissue. Among all inflammatory markers, LTB4 exhibited the strongest negative correlation with the image signature and showed significantly higher expression in the healthy control group. Multivariate logistic regression analysis identified LTB4 as an independent risk factor for ESCC (odds ratio = 1.74, p = 0.037). Furthermore, LTB4 expression was significantly associated with patient sex, age, and chemotherapy response. Notably, higher LTB4 levels were linked to an increased likelihood of achieving a favorable therapeutic response. Conclusions: This study demonstrates that deep learning-derived endoscopic ultrasound image features can effectively distinguish ESCC from normal esophageal tissue. By integrating image features with serological data, the authors identified LTB4 as a key inflammation-related biomarker with significant diagnostic and therapeutic predictive value.
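One simple realization of the feature-consensus step described above (features "shared across three machine learning models") is to intersect each model's top-k features by importance score. The helper below is illustrative, assuming per-model importance dictionaries are available; it is not the study's code:

```python
def shared_top_features(importances_per_model, k=5):
    """Return the features ranked in the top-k by every model.

    importances_per_model: list of dicts mapping feature name -> importance.
    """
    top_sets = []
    for imp in importances_per_model:
        ranked = sorted(imp, key=imp.get, reverse=True)  # highest importance first
        top_sets.append(set(ranked[:k]))
    return sorted(set.intersection(*top_sets))
```

Features surviving the intersection are then combined (e.g. by a weighted sum) into the image-derived signature.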

Guidance for reporting artificial intelligence technology evaluations for ultrasound scanning in regional anaesthesia (GRAITE-USRA): an international multidisciplinary consensus reporting framework.

Zhang X, Ferry J, Hewson DW, Collins GS, Wiles MD, Zhao Y, Martindale APL, Tomaschek M, Bowness JS

PubMed · Sep 18 2025
The application of artificial intelligence to enhance the clinical practice of ultrasound-guided regional anaesthesia is of increasing interest to clinicians, researchers and industry. The lack of standardised reporting for studies in this field hinders the comparability, reproducibility and integration of findings. We aimed to develop a consensus-based reporting guideline for research evaluating artificial intelligence applications for ultrasound scanning in regional anaesthesia. We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines. Review of published literature and expert consultation generated a preliminary list of candidate reporting items. An international, multidisciplinary, modified Delphi process was then undertaken, involving experts from clinical practice, academia and industry. Two rounds of expert consultation were conducted, in which participants evaluated each item for inclusion in a final reporting guideline, followed by an online discussion. A total of 67 experts participated in the first Delphi round, 63 in the second round and 25 in the roundtable consensus meeting. The GRAITE-USRA reporting guideline comprises 40 items addressing key aspects of reporting in artificial intelligence research for ultrasound scanning in regional anaesthesia. Specific items include ultrasound acquisition protocols and operator expertise, which are not covered in existing artificial intelligence reporting guidelines. The GRAITE-USRA reporting guideline provides a minimum set of recommendations for artificial intelligence-related research for ultrasound scanning in regional anaesthesia. Its adoption will promote consistent reporting standards, enhance transparency, improve study reproducibility and ultimately support the effective integration of evidence into clinical practice.

Development and validation of machine learning predictive models for gastric volume based on ultrasonography: A multicentre study.

Liu J, Li S, Li M, Li G, Huang N, Shu B, Chen J, Zhu T, Huang H, Duan G

PubMed · Sep 18 2025
Aspiration of gastric contents is a serious complication associated with anaesthesia. Accurate prediction of gastric volume may assist in risk stratification and help prevent aspiration. This study aimed to develop and validate machine learning models to predict gastric volume based on ultrasound and clinical features. This cross-sectional multicentre study was conducted at two hospitals and included adult patients undergoing gastroscopy under intravenous anaesthesia. Patients from Centre 1 were prospectively enrolled and randomly divided into a training set (Cohort A, n = 415) and an internal validation set (Cohort B, n = 179), while patients from Centre 2 were used as an external validation set (Cohort C, n = 199). The primary outcome was gastric volume, which was measured by endoscopic aspiration immediately following ultrasonographic examination. Least absolute shrinkage and selection operator (LASSO) regression was used for feature selection, and eight machine learning models were developed and evaluated using Bland-Altman analysis. The models' ability to predict medium-to-high and high gastric volumes was assessed. The top-performing models were externally validated, and their predictive performance was compared with the traditional Perlas model. Among the 793 enrolled patients, the number and proportion of patients with high gastric volume were as follows: 23 (5.5 %) in the development cohort, 10 (5.6 %) in the internal validation cohort, and 3 (1.5 %) in the external validation cohort. Eight models were developed using age, cross-sectional area of gastric antrum in right lateral decubitus (RLD-CSA) position, and Perlas grade, with these variables selected through LASSO regression. In internal validation, Bland-Altman analysis showed that the Perlas model overestimated gastric volume (mean bias 23.5 mL), while the new models provided accurate estimates (mean bias -0.1 to 2.0 mL). 
The models significantly improved prediction of medium-to-high gastric volume (area under the curve [AUC]: 0.74-0.77 vs. 0.63) and high gastric volume (AUC: 0.85-0.94 vs. 0.74). The best-performing adaptive boosting and linear regression models underwent external validation, with AUCs of 0.81 (95 % confidence interval [CI], 0.74-0.89) and 0.80 (95 % CI, 0.72-0.89) for medium-to-high and 0.96 (95 % CI, 0.91-1.00) and 0.96 (95 % CI, 0.89-1.00) for high gastric volume. We propose a novel machine learning-based predictive model that outperforms the Perlas model by incorporating the key features of age, RLD-CSA, and Perlas grade, enabling accurate prediction of gastric volume.
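The Bland-Altman comparison used above to judge model bias reduces to the mean of the paired differences plus 95% limits of agreement. A generic NumPy sketch, not the study's analysis code:

```python
import numpy as np

def bland_altman(predicted: np.ndarray, measured: np.ndarray):
    """Mean bias and 95% limits of agreement between two paired measurements."""
    diff = predicted - measured
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A model that systematically overestimates volume (like the Perlas model's reported +23.5 mL bias) shows up directly in the first return value.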

Bridging the quality gap: Robust colon wall segmentation in noisy transabdominal ultrasound.

Gago L, González MAF, Engelmann J, Remeseiro B, Igual L

PubMed · Sep 18 2025
Colon wall segmentation in transabdominal ultrasound is challenging due to variations in image quality, speckle noise, and ambiguous boundaries. Existing methods struggle with low-quality images because they cannot adapt to varying noise levels, poor boundary definition, and reduced contrast in ultrasound imaging, resulting in inconsistent segmentation performance. We present a novel quality-aware segmentation framework that simultaneously predicts image quality and adapts the segmentation process accordingly. Our approach uses a U-Net architecture with a ConvNeXt encoder backbone, enhanced with a parallel quality prediction branch that serves as a regularization mechanism. Our model learns robust features by explicitly modeling image quality during training. We evaluate our method on the C-TRUS dataset and demonstrate superior performance compared to state-of-the-art approaches, particularly on challenging low-quality images. Our method achieves Dice scores of 0.7780, 0.7025, and 0.5970 for high-, medium-, and low-quality images, respectively. The proposed quality-aware segmentation framework represents a significant step toward clinically viable automated colon wall segmentation systems.
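The Dice scores reported above follow the standard overlap formula, 2|A∩B| / (|A|+|B|). A minimal NumPy version (illustrative, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))
```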

Video Transformer for Segmentation of Echocardiography Images in Myocardial Strain Measurement.

Huang KC, Lin CE, Lin DS, Lin TT, Wu CK, Jeng GS, Lin LY, Lin LC

PubMed · Sep 17 2025
The adoption of left ventricular global longitudinal strain (LVGLS) is still restricted by variability among vendors and observers, despite advancements from tissue Doppler to speckle tracking imaging, machine learning, and, more recently, convolutional neural network (CNN)-based segmentation strain analysis. While CNNs have enabled fully automated strain measurement, they are inherently constrained by restricted receptive fields and a lack of temporal consistency. Transformer-based networks have emerged as a powerful alternative in medical imaging, offering enhanced global attention. Among these, the Video Swin Transformer (V-SwinT) architecture, with its 3D-shifted windows and locality inductive bias, is particularly well suited for ultrasound imaging, providing temporal consistency while optimizing computational efficiency. In this study, we propose the DTHR-SegStrain model based on a V-SwinT backbone. This model incorporates contour regression and utilizes FCN-style multiscale feature fusion. As a result, it can generate accurate and temporally consistent left ventricle (LV) contours, allowing for direct calculation of myocardial strain without the need for conversion from segmentation to contours or any additional postprocessing. Compared to EchoNet-dynamic and Unity-GLS, DTHR-SegStrain showed greater efficiency, reliability, and validity in LVGLS measurements. Furthermore, the hybridization experiments assessed the interaction between segmentation models and strain algorithms, reinforcing that consistent segmentation contours over time can simplify strain calculations and decrease measurement variability. These findings emphasize the potential of V-SwinT-based frameworks to enhance the standardization and clinical applicability of LVGLS assessments.
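Computing strain directly from temporally consistent contours, as described, amounts to measuring the relative change in contour length against the end-diastolic reference. A simplified Lagrangian-strain sketch (the model's actual pipeline may differ):

```python
import numpy as np

def contour_length(points: np.ndarray) -> float:
    """Total length of an open polyline given (N, 2) contour points."""
    segs = np.diff(points, axis=0)
    return float(np.sqrt((segs ** 2).sum(axis=1)).sum())

def gls_percent(contour_ed: np.ndarray, contour_t: np.ndarray) -> float:
    """Longitudinal strain (%) at time t relative to end-diastole (ED)."""
    l0 = contour_length(contour_ed)
    lt = contour_length(contour_t)
    return 100.0 * (lt - l0) / l0
```

Because strain falls straight out of the contour geometry, a model that regresses contours directly avoids any segmentation-to-contour conversion step.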

A machine learning model based on high-frequency ultrasound for differentiating benign and malignant skin tumors.

Qin Y, Zhang Z, Qu X, Liu W, Yan Y, Huang Y

PubMed · Sep 17 2025
This study aims to explore the potential of machine learning as a non-invasive automated tool for skin tumor differentiation. Data from 156 lesions, collected retrospectively from September 2021 to February 2024, were included. Univariate and multivariate analyses of traditional clinical features were performed to establish a logistic regression model. Ultrasound-based radiomics features were extracted from grayscale images after delineating regions of interest (ROIs). Independent samples t-tests, Mann-Whitney U tests, and Least Absolute Shrinkage and Selection Operator (LASSO) regression were employed to select ultrasound-based radiomics features. Subsequently, five machine learning methods were used to construct radiomics models based on the selected features. Model performance was evaluated using receiver operating characteristic (ROC) curves and the DeLong test. Age, poorly defined margins, and irregular shape were identified as independent risk factors for malignant skin tumors. The multilayer perceptron (MLP) model achieved the best performance, with area under the curve (AUC) values of 0.963 and 0.912, respectively. The DeLong test revealed a statistically significant difference in performance between the MLP and clinical models (Z=2.611, p=0.009). Machine learning-based skin tumor models may serve as a potential non-invasive method to improve diagnostic efficiency.
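The AUC values compared by the DeLong test have a simple probabilistic reading: the chance that a randomly chosen malignant lesion scores higher than a randomly chosen benign one. A minimal pairwise (Mann-Whitney) implementation, illustrative only:

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a random positive outscores a
    random negative; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```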

Deep learning-based automated detection and diagnosis of gouty arthritis in ultrasound images of the first metatarsophalangeal joint.

Xiao L, Zhao Y, Li Y, Yan M, Liu M, Ning C

PubMed · Sep 17 2025
This study aimed to develop a deep learning (DL) model for automatic detection and diagnosis of gouty arthritis (GA) in the first metatarsophalangeal joint (MTPJ) using ultrasound (US) images. A retrospective study included individuals who underwent first MTPJ ultrasonography between February and July 2023. Five-fold cross-validation (training-to-testing ratio of 4:1) was employed. A deep residual convolutional neural network (CNN) was trained, and Gradient-weighted Class Activation Mapping (Grad-CAM) was used for visualization. ResNet18 models with varying numbers of residual blocks (2, 3, 4, or 6) were compared to select the optimal model for image classification. Diagnostic decisions were based on a threshold proportion of abnormal images, determined from the training set. A total of 2401 US images from 260 patients (149 gout, 111 control) were analyzed. The model with 3 residual blocks performed best, achieving an AUC of 0.904 (95% CI: 0.887~0.927). Visualization results aligned with radiologist opinions in 2000 images. The diagnostic model attained an accuracy of 91.1% (95% CI: 90.4%~91.8%) on the testing set, with a diagnostic threshold of 0.328. The DL model demonstrated excellent performance in automatically detecting and diagnosing GA in the first MTPJ.
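The patient-level decision rule described above — call the joint positive when the proportion of abnormal images exceeds the threshold learned on the training set (0.328) — can be sketched as (function name illustrative):

```python
def diagnose_patient(image_abnormal_flags, threshold=0.328):
    """Aggregate per-image classifications (0/1) into a patient-level call.

    Returns True when the fraction of images flagged abnormal exceeds
    the training-set-derived threshold.
    """
    proportion = sum(image_abnormal_flags) / len(image_abnormal_flags)
    return proportion > threshold
```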

A Novel Ultrasound-based Nomogram Using Contrast-enhanced and Conventional Ultrasound Features to Improve Preoperative Diagnosis of Parathyroid Adenomas versus Cervical Lymph Nodes.

Xu Y, Zuo Z, Peng Q, Zhang R, Tang K, Niu C

PubMed · Sep 17 2025
Precise preoperative localization of parathyroid gland lesions is essential for guiding surgery in primary hyperparathyroidism (PHPT). The aim of our study was to investigate the contrast-enhanced ultrasound (CEUS) characteristics of parathyroid gland adenoma (PGA) and to evaluate whether PGA can be differentiated from central cervical lymph nodes (CCLN). Fifty-four consecutive patients with PHPT were retrospectively enrolled, underwent preoperative imaging with high-resolution ultrasound (US) and CEUS, and underwent subsequent parathyroidectomy. One hundred and seventy-four lymph nodes from patients with papillary thyroid carcinoma (PTC) who underwent unilateral, subtotal, or total thyroidectomy with central neck dissection were also examined by high-resolution US and CEUS. By incorporating US and CEUS characteristics, a predictive model presented as a nomogram was developed, and its performance and utility were evaluated by plotting receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA) curves. Three US characteristics and two CEUS characteristics were independently related to PGA for its differentiation from CCLN and were used for model construction. The area under the ROC curve (AUC) of the US+CEUS model was 0.915, higher than that of the US model (0.874) and the CEUS model (0.791). It is recommended that CEUS techniques be used to enhance the diagnostic utility of US in cases of suspected parathyroid lesions. This is the first study to combine US and CEUS in a nomogram to distinguish PGA from CCLN, filling a gap in the existing literature.
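A nomogram is a graphical rendering of an underlying regression model: reading a probability off it corresponds to evaluating the logistic function on a weighted sum of characteristic points. A generic sketch (feature names and coefficients are placeholders, not the study's fitted values):

```python
import math

def nomogram_probability(features, coefficients, intercept):
    """Predicted probability from the logistic model behind a nomogram.

    features / coefficients: dicts keyed by characteristic name.
    """
    z = intercept + sum(coefficients[k] * features[k] for k in coefficients)
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link
```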

AI-powered insights in pediatric nephrology: current applications and future opportunities.

Nada A, Ahmed Y, Hu J, Weidemann D, Gorman GH, Lecea EG, Sandokji IA, Cha S, Shin S, Bani-Hani S, Mannemuddhu SS, Ruebner RL, Kakajiwala A, Raina R, George R, Elchaki R, Moritz ML

PubMed · Sep 16 2025
Artificial intelligence (AI) is rapidly emerging as a transformative force in pediatric nephrology, enabling improvements in diagnostic accuracy, therapeutic precision, and operational workflows. By integrating diverse datasets-including patient histories, genomics, imaging, and longitudinal clinical records-AI-driven tools can detect subtle kidney anomalies, predict acute kidney injury, and forecast disease progression. Deep learning models, for instance, have demonstrated the potential to enhance ultrasound interpretations, refine kidney biopsy assessments, and streamline pathology evaluations. Coupled with robust decision support systems, these innovations also optimize medication dosing and dialysis regimens, ultimately improving patient outcomes. AI-powered chatbots hold promise for improving patient engagement and adherence, while AI-assisted documentation solutions offer relief from administrative burdens, mitigating physician burnout. However, ethical and practical challenges remain. Healthcare professionals must receive adequate training to harness AI's capabilities, ensuring that such technologies bolster rather than erode the vital doctor-patient relationship. Safeguarding data privacy, minimizing algorithmic bias, and establishing standardized regulatory frameworks are critical for safe deployment. Beyond clinical care, AI can accelerate pediatric nephrology research by identifying biomarkers, enabling more precise patient recruitment, and uncovering novel therapeutic targets. As these tools evolve, interdisciplinary collaborations and ongoing oversight will be key to integrating AI responsibly. Harnessing AI's vast potential could revolutionize pediatric nephrology, championing a future of individualized, proactive, and empathetic care for children with kidney diseases. 
Through strategic collaboration and transparent development, these advanced technologies promise to minimize disparities, foster innovation, and sustain compassionate patient-centered care, shaping a new horizon in pediatric nephrology research and practice.
