AI-Driven Fetal Liver Echotexture Analysis: A New Frontier in Predicting Neonatal Insulin Imbalance.

Da Correggio KS, Santos LO, Muylaert Barroso FS, Galluzzo RN, Chaves TZL, Wangenheim AV, Onofre ASC

PubMed · Sep 8, 2025
To evaluate the performance of artificial intelligence (AI)-based models in predicting elevated neonatal insulin levels through fetal hepatic echotexture analysis. This diagnostic accuracy study analyzed ultrasound images of fetal livers from pregnancies between 37 and 42 weeks, including cases with and without gestational diabetes mellitus (GDM). Images were stored in Digital Imaging and Communications in Medicine (DICOM) format, annotated by experts, and converted to segmented masks after quality checks. A balanced dataset was created by randomly excluding overrepresented categories. Artificial intelligence classification models developed using the FastAI library (ResNet-18, ResNet-34, ResNet-50, EfficientNet-B0, and EfficientNet-B7) were trained to detect elevated C-peptide levels (>75th percentile) in umbilical cord blood at birth, based on fetal hepatic ultrasonographic images. Out of 2339 ultrasound images, 606 were excluded due to poor quality, resulting in 1733 images analyzed. Elevated C-peptide levels were observed in 34.3% of neonates. Among the 5 CNN models evaluated, EfficientNet-B0 demonstrated the highest overall performance, achieving a sensitivity of 86.5%, specificity of 82.1%, positive predictive value (PPV) of 83.0%, negative predictive value (NPV) of 85.7%, accuracy of 84.3%, and an area under the ROC curve (AUC) of 0.83 in predicting elevated neonatal insulin levels through fetal hepatic echotexture analysis. AI-based analysis of fetal liver echotexture via ultrasound effectively predicted elevated neonatal C-peptide levels, offering a promising non-invasive method for detecting insulin imbalance in newborns.
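
The abstract names FastAI with ResNet and EfficientNet backbones. As a rough illustration, a minimal FastAI training sketch might look like the following; the folder layout, label scheme, and hyperparameters are assumptions, not the authors' code.

```python
# Hypothetical sketch: a FastAI image classifier over fetal-liver ultrasound
# images labelled by cord-blood C-peptide status. Paths and settings are
# illustrative assumptions.
from fastai.vision.all import *

# Assumed layout: data/elevated/*.png and data/normal/*.png, where the
# folder name encodes the label (>75th percentile C-peptide or not).
path = Path("data")
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224),        # common CNN input size
    batch_tfms=aug_transforms(),  # light augmentation
)

# ResNet-18 shown here; the study also compared ResNet-34/50 and
# EfficientNet-B0/B7.
learn = vision_learner(dls, resnet18, metrics=[accuracy, RocAucBinary()])
learn.fine_tune(10)
```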

FetalMLOps: operationalizing machine learning models for standard fetal ultrasound plane classification.

Testi M, Fiorentino MC, Ballabio M, Visani G, Ciccozzi M, Frontoni E, Moccia S, Vessio G

PubMed · Sep 8, 2025
Fetal standard plane detection is essential in prenatal care, enabling accurate assessment of fetal development and early identification of potential anomalies. Despite significant advancements in machine learning (ML) in this domain, its integration into clinical workflows remains limited, primarily due to the lack of standardized, end-to-end operational frameworks. To address this gap, we introduce FetalMLOps, the first comprehensive MLOps framework specifically designed for fetal ultrasound imaging. Our approach adopts a ten-step MLOps methodology that covers the entire ML lifecycle, with each phase meticulously adapted to clinical needs. From defining the clinical objective to curating and annotating fetal US datasets, every step ensures alignment with real-world medical practice. ETL (extract, transform, load) processes are developed to standardize, anonymize, and harmonize inputs, enhancing data quality. Model development prioritizes architectures that balance accuracy and efficiency, using clinically relevant evaluation metrics to guide selection. The best-performing model is deployed via a RESTful API, following MLOps best practices for continuous integration, delivery, and performance monitoring. Crucially, the framework embeds principles of explainability and environmental sustainability, promoting ethical, transparent, and responsible AI. By operationalizing ML models within a clinically meaningful pipeline, FetalMLOps bridges the gap between algorithmic innovation and real-world application, setting a precedent for trustworthy and scalable AI adoption in prenatal care.
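
For the deployment step, a RESTful inference endpoint of the kind the framework describes could be sketched with FastAPI as below; the route name, label set, stand-in model, and preprocessing are assumptions, not FetalMLOps' actual interface.

```python
# Minimal sketch of a RESTful plane-classification endpoint (FastAPI).
# Run with: uvicorn app:app
import io

import torch
from fastapi import FastAPI, UploadFile
from PIL import Image
from torchvision import transforms

app = FastAPI()

# Stand-in model so the sketch runs end to end; in practice this would be
# the trained, versioned artifact loaded from the model registry.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 5))
model.eval()

PLANES = ["abdomen", "brain", "femur", "thorax", "other"]  # assumed labels
prep = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@app.post("/predict")
async def predict(file: UploadFile):
    img = Image.open(io.BytesIO(await file.read()))
    x = prep(img).unsqueeze(0)
    with torch.no_grad():
        probs = model(x).softmax(dim=1)[0]
    idx = int(probs.argmax())
    return {"plane": PLANES[idx], "confidence": float(probs[idx])}
```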

Leveraging Information Divergence for Robust Semi-Supervised Fetal Ultrasound Image Segmentation

Fangyijie Wang, Guénolé Silvestre, Kathleen M. Curran

arXiv preprint · Sep 8, 2025
Maternal-fetal ultrasound is the primary modality for monitoring fetal development, yet automated segmentation remains challenging due to the scarcity of high-quality annotations. To address this limitation, we propose a semi-supervised learning framework that leverages information divergence for robust fetal ultrasound segmentation. Our method employs a lightweight convolutional network (1.47M parameters) and a Transformer-based network, trained jointly with labelled data through standard supervision and with unlabelled data via cross-supervision. To encourage consistent and confident predictions, we introduce an information divergence loss that combines per-pixel Kullback-Leibler divergence and Mutual Information Gap, effectively reducing prediction disagreement between the two models. In addition, we apply mixup on unlabelled samples to further enhance robustness. Experiments on two fetal ultrasound datasets demonstrate that our approach consistently outperforms seven state-of-the-art semi-supervised methods. When only 5% of training data is labelled, our framework improves the Dice score by 2.39%, reduces the 95% Hausdorff distance by 14.90, and decreases the Average Surface Distance by 4.18. These results highlight the effectiveness of leveraging information divergence for annotation-efficient and robust medical image segmentation. Our code is publicly available on GitHub.
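
As a sketch of one ingredient of the loss, the per-pixel Kullback-Leibler term between the two networks' predictions could look like the following in PyTorch; the symmetric formulation and tensor shapes are assumptions, and the Mutual Information Gap term and loss weighting are omitted.

```python
# Sketch of a per-pixel KL consistency term between the two students'
# segmentation outputs, one component of the information-divergence loss.
import torch
import torch.nn.functional as F

def pixelwise_kl(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Symmetric per-pixel KL between two (B, C, H, W) segmentation outputs."""
    log_pa, log_pb = F.log_softmax(logits_a, 1), F.log_softmax(logits_b, 1)
    pa, pb = log_pa.exp(), log_pb.exp()
    # KL(a||b) + KL(b||a), summed over classes, averaged over pixels and batch
    kl_ab = (pa * (log_pa - log_pb)).sum(dim=1)
    kl_ba = (pb * (log_pb - log_pa)).sum(dim=1)
    return (kl_ab + kl_ba).mean()

# Demo on random tensors; in training, each model's prediction on unlabelled
# data supervises the other (cross-supervision), and this term pulls the two
# predictions into agreement.
cnn_logits = torch.randn(2, 2, 128, 128)          # lightweight CNN branch
transformer_logits = torch.randn(2, 2, 128, 128)  # Transformer branch
loss_div = pixelwise_kl(cnn_logits, transformer_logits)
```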

AI Model Based on Diaphragm Ultrasound to Improve the Predictive Performance of Invasive Mechanical Ventilation Weaning: Prospective Cohort Study.

Song F, Liu H, Ma H, Chen X, Wang S, Qin T, Liang H, Huang D

PubMed · Sep 8, 2025
Point-of-care ultrasonography has become a valuable tool for assessing diaphragmatic function in critically ill patients receiving invasive mechanical ventilation. However, conventional diaphragm ultrasound assessment remains highly operator-dependent and subjective. Previous research introduced automatic measurement of diaphragmatic excursion and velocity using 2D speckle-tracking technology. This study aimed to develop an artificial intelligence (AI) multimodal learning framework to improve the prediction of weaning failure and guide individualized weaning strategies. This prospective study enrolled critically ill patients older than 18 years who received mechanical ventilation for more than 48 hours and were eligible for a spontaneous breathing trial in 2 intensive care units in Guangzhou, China. Before the spontaneous breathing trial, diaphragm ultrasound videos were collected using a standardized protocol, and automatic measurements of excursion and velocity were obtained. A total of 88 patients were included, with 50 successfully weaned and 38 experiencing weaning failure. Each patient record included 27 clinical and 6 diaphragmatic indicators, selected based on previous literature and phenotyping studies. Clinical variables were preprocessed using OneHotEncoder, normalization, and scaling. Ultrasound videos were interpolated to a uniform resolution of 224×224×96. A multimodal co-learning model based on clinical characteristics, laboratory parameters, and diaphragm ultrasound videos was established. Four experiments were conducted in an ablation setting to evaluate model performance using different combinations of input data: (1) diaphragmatic excursion only, (2) clinical and diaphragmatic indicators, (3) ultrasound videos only, and (4) all modalities combined (multimodal). Metrics for evaluation included classification accuracy, area under the receiver operating characteristic curve (AUC), average precision in the precision-recall curve, and calibration curve. Variable importance was assessed using SHAP (Shapley Additive Explanations) to interpret feature contributions and understand model predictions. The multimodal co-learning model outperformed all single-modal approaches: accuracy improved from clinical or ultrasound indicators alone (accuracy=0.7381, AUC=0.746) to diaphragm ultrasound video data with a Video Vision Transformer (accuracy=0.8095, AUC=0.852) to multimodal co-learning (accuracy=0.8331, AUC=0.894). The proposed co-learning model achieved the highest score (average precision=0.91) among the 4 experiments. Furthermore, calibration curve analysis demonstrated that the proposed co-learning model was well calibrated, as its curve was closest to the perfectly calibrated line. Combining ultrasound and clinical data for co-learning improved the accuracy of weaning outcome prediction. Multimodal learning based on automatic point-of-care ultrasonography measurements and automated collection of objective clinical indicators greatly enhanced the practical operability and user-friendliness of the system. The proposed model offers promising potential for widespread clinical application in intensive care settings.
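
As an illustration of the SHAP variable-importance step, the sketch below ranks tabular indicators using a gradient-boosted stand-in model on synthetic data; the study's actual co-learning network would require a different explainer, and the data shapes merely mirror the reported 88 patients and 33 indicators.

```python
# Illustrative SHAP feature attribution over clinical/diaphragmatic
# indicators; model and data are synthetic stand-ins, not the study's.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 33))    # 27 clinical + 6 diaphragmatic indicators
y = rng.integers(0, 2, size=88)  # weaning failure vs success (synthetic)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean |SHAP| per feature ranks each indicator's contribution to the
# prediction, mirroring the interpretation analysis described above.
importance = np.abs(shap_values).mean(axis=0)
print(importance.argsort()[::-1][:5])  # indices of the five strongest features
```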

XBusNet: Text-Guided Breast Ultrasound Segmentation via Multimodal Vision-Language Learning

Raja Mallina, Bryar Shareef

arXiv preprint · Sep 8, 2025
Background: Precise breast ultrasound (BUS) segmentation supports reliable measurement, quantitative analysis, and downstream classification, yet remains difficult for small or low-contrast lesions with fuzzy margins and speckle noise. Text prompts can add clinical context, but directly applying weakly localized text-image cues (e.g., CAM/CLIP-derived signals) tends to produce coarse, blob-like responses that smear boundaries unless additional mechanisms recover fine edges. Methods: We propose XBusNet, a novel dual-prompt, dual-branch multimodal model that combines image features with clinically grounded text. A global pathway based on a CLIP Vision Transformer encodes whole-image semantics conditioned on lesion size and location, while a local U-Net pathway emphasizes precise boundaries and is modulated by prompts that describe shape, margin, and Breast Imaging Reporting and Data System (BI-RADS) terms. Prompts are assembled automatically from structured metadata, requiring no manual clicks. We evaluate on the Breast Lesions USG (BLU) dataset using five-fold cross-validation. Primary metrics are Dice and Intersection over Union (IoU); we also conduct size-stratified analyses and ablations to assess the roles of the global and local paths and the text-driven modulation. Results: XBusNet achieves state-of-the-art performance on BLU, with mean Dice of 0.8765 and IoU of 0.8149, outperforming six strong baselines. Small lesions show the largest gains, with fewer missed regions and fewer spurious activations. Ablation studies show complementary contributions of global context, local boundary modeling, and prompt-based modulation. Conclusions: A dual-prompt, dual-branch multimodal design that merges global semantics with local precision yields accurate BUS segmentation masks and improves robustness for small, low-contrast lesions.
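
As a toy illustration of assembling prompts automatically from structured metadata, the sketch below builds a global (size/location) prompt and a local (shape/margin/BI-RADS) prompt; the field names, thresholds, and templates are assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch of dual text-prompt assembly from structured lesion
# metadata, in the spirit of XBusNet's automatic prompts.
def global_prompt(size_mm: float, location: str) -> str:
    """Whole-image prompt conditioning on lesion size and location."""
    size_term = "small" if size_mm < 10 else "large"  # assumed cutoff
    return f"a {size_term} breast lesion in the {location} of the image"

def local_prompt(shape: str, margin: str, birads: str) -> str:
    """Boundary-focused prompt from shape, margin, and BI-RADS terms."""
    return f"an {shape} lesion with {margin} margins, BI-RADS {birads}"

print(global_prompt(8.2, "upper outer quadrant"))
print(local_prompt("irregular", "spiculated", "4"))
```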

Artificial Intelligence Algorithm Supporting the Diagnosis of Developmental Dysplasia of the Hip: Automated Ultrasound Image Segmentation.

Pulik Ł, Czech P, Kaliszewska J, Mulewicz B, Pykosz M, Wiszniewska J, Łęgosz P

PubMed · Sep 8, 2025
Background: Developmental dysplasia of the hip (DDH), if not treated, can lead to osteoarthritis and disability. Ultrasound (US) is a primary screening method for the detection of DDH, but its interpretation remains highly operator-dependent. We propose a supervised machine learning (ML) image segmentation model for the automated recognition of anatomical structures in hip US images. Methods: We conducted a retrospective observational analysis based on a dataset of 10,767 hip US images from 311 patients. All images were annotated for eight key structures according to the Graf method and split into training (75.0%), validation (9.5%), and test (15.5%) sets. Model performance was assessed using the Intersection over Union (IoU) and Dice Similarity Coefficient (DSC). Results: The best-performing model was based on the SegNeXt architecture with an MSCAN_L backbone. The model achieved high segmentation accuracy (IoU; DSC) for the chondro-osseous border (0.632; 0.774), femoral head (0.916; 0.956), labrum (0.625; 0.769), cartilaginous roof (0.672; 0.804), and bony roof (0.725; 0.841). The average Euclidean distance for point-based landmarks (bony rim and lower limb) was 4.8 and 4.5 pixels, respectively, and the baseline deflection angle was 1.7 degrees. Conclusions: This ML-based approach demonstrates promising accuracy and may enhance the reliability and accessibility of US-based DDH screening. Future applications could integrate real-time angle measurement and automated classification to support clinical decision-making.
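
The two reported segmentation metrics are straightforward to compute on binary masks; a minimal NumPy sketch (not the study's evaluation code) follows.

```python
# IoU and Dice Similarity Coefficient for binary segmentation masks.
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # empty masks count as perfect

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

# Tiny demo: a 2x2 prediction inside a 3x3 ground-truth region.
pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), bool); gt[1:4, 1:4] = True
print(iou(pred, gt), dice(pred, gt))  # 4/9 ≈ 0.444, 8/13 ≈ 0.615
```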

Enabling micro-assessments of skills in the simulated setting using temporal artificial intelligence-models.

Bang Andersen I, Søndergaard Svendsen MB, Risgaard AL, Sander Danstrup C, Todsen T, Tolsgaard MG, Friis ML

PubMed · Sep 7, 2025
Assessing skills in simulated settings is resource-intensive and lacks validated metrics. Advances in AI offer the potential for automated competence assessment, addressing these limitations. This study aimed to develop and validate a machine learning model for automated evaluation during simulation-based thyroid ultrasound (US) training. Videos from eight experts and 21 novices performing thyroid US on a simulator were analyzed. Frames were processed into sequences of 1, 10, and 50 seconds. A convolutional neural network with a pre-trained ResNet-50 base and a long short-term memory (LSTM) layer analyzed these sequences. The model was trained to distinguish competence levels (competent=1, not competent=0) using fourfold cross-validation, with performance metrics including precision, recall, F1 score, and accuracy. Bayesian updating and adaptive thresholding assessed performance over time. The AI model effectively differentiated expert and novice US performance. The 50-second sequences achieved the highest accuracy (70%) and F1 score (0.76). Experts showed significantly longer durations above the threshold (15.71 s) compared to novices (9.31 s, p = .030). An LSTM-based AI model provides near real-time, automated assessments of competence in US training. Utilizing temporal video data enables detailed micro-assessments of complex procedures, which may enhance interpretability and be applied across various procedural domains.
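
The described architecture (frame-wise ResNet-50 features aggregated by an LSTM into a competence score) could be sketched in PyTorch as below; the hidden size and single-linear-layer head are assumptions.

```python
# Sketch: pre-trained ResNet-50 encodes each frame; an LSTM aggregates the
# sequence into a competent / not-competent logit.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class TemporalCompetenceNet(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V2")
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # 2048-d feats
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # competence logit

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)     # last hidden state summarizes the clip
        return self.head(h_n[-1]).squeeze(-1)

model = TemporalCompetenceNet()
logits = model(torch.randn(2, 16, 3, 224, 224))  # 2 clips of 16 frames each
```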

Early postnatal characteristics and differential diagnosis of choledochal cyst and cystic biliary atresia.

Tian Y, Chen S, Ji C, Wang XP, Ye M, Chen XY, Luo JF, Li X, Li L

PubMed · Sep 7, 2025
Choledochal cysts (CC) and cystic biliary atresia (CBA) present similarly in early infancy but require different treatment approaches. While CC surgery can be delayed until 3-6 months of age in asymptomatic patients, CBA requires intervention within 60 days to prevent cirrhosis. To develop a diagnostic model for early differentiation between these conditions, a total of 319 patients with hepatic hilar cysts (< 60 days old at surgery), treated at three hospitals between 2011 and 2022, were retrospectively analyzed. Clinical features including biochemical markers and ultrasonographic measurements were compared between the CC (n = 274) and CBA (n = 45) groups. Least absolute shrinkage and selection operator (LASSO) regression identified key diagnostic features, and 11 machine learning models were developed and compared. The CBA group showed higher levels of total bile acid, total bilirubin, direct bilirubin, γ-glutamyl transferase, aspartate aminotransferase, and alanine aminotransferase, while the longitudinal and transverse diameters of the cysts were larger in the CC group. The multilayer perceptron model demonstrated optimal performance with 95.8% accuracy, 92.9% sensitivity, 96.3% specificity, and an area under the curve of 0.990. Decision curve analysis confirmed its clinical utility. Based on the model, we developed user-friendly diagnostic software for clinical implementation. Our machine learning approach differentiates CC from CBA in early infancy using routinely available clinical parameters. Early accurate diagnosis facilitates timely surgical intervention for CBA cases, potentially improving patient outcomes.
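
A scikit-learn sketch of the described pipeline, L1-penalized (LASSO-style) feature selection feeding a multilayer perceptron classifier, is shown below on synthetic data; the feature count and hyperparameters are assumptions.

```python
# Sketch: L1-penalized feature selection followed by an MLP classifier,
# on synthetic stand-in data (319 patients, CC vs CBA).
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(319, 8))     # biochemical markers + cyst diameters
y = rng.integers(0, 2, size=319)  # CC vs CBA label (synthetic)

clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))  # class probabilities for three patients
```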

Prenatal diagnosis of cerebellar hypoplasia in fetal ultrasound using deep learning under the constraint of the anatomical structures of the cerebellum and cistern.

Wu X, Liu F, Xu G, Ma Y, Cheng C, He R, Yang A, Gan J, Liang J, Wu X, Zhao S

PubMed · Sep 5, 2025
The objective of this retrospective study was to develop and validate an artificial intelligence model constrained by the anatomical structures of the brain, with the aim of improving the accuracy of prenatal diagnosis of fetal cerebellar hypoplasia using ultrasound imaging. Fetal central nervous system dysplasia is one of the most prevalent congenital malformations, and cerebellar hypoplasia represents a significant manifestation of this anomaly. Accurate clinical diagnosis is of great importance for the prenatal screening of fetal health. Although ultrasound has been extensively utilized to assess fetal development, the accurate assessment of cerebellar development remains challenging due to the inherent limitations of ultrasound imaging, including low resolution, artifacts, and acoustic shadowing of the skull. This retrospective study included 302 cases diagnosed with cerebellar hypoplasia and 549 normal pregnancies collected from the Maternal and Child Health Hospital of Hubei Province between September 2019 and September 2023. For each case, experienced ultrasound physicians selected appropriate brain ultrasound images to delineate the boundaries of the skull, cerebellum, and cerebellomedullary cistern. These cases were divided into a training set and two test sets based on examination dates. This study proposed a dual-branch deep learning classification network, the anatomical structure-constrained network (ASC-Net), which took ultrasound images and anatomical structure masks as separate inputs. The performance of ASC-Net was extensively evaluated and compared with several state-of-the-art deep learning networks, and the impact of anatomical structures on its performance was carefully examined. ASC-Net demonstrated superior performance in the diagnosis of cerebellar hypoplasia, achieving classification accuracies of 0.9778 and 0.9222, as well as areas under the receiver operating characteristic curve of 0.9986 and 0.9265 on the two test sets. These results significantly outperformed several state-of-the-art networks on the same dataset. In comparison to other studies on auxiliary diagnosis of cerebellar hypoplasia, ASC-Net also demonstrated comparable or even better performance. A subgroup analysis revealed that ASC-Net was more capable of distinguishing cerebellar hypoplasia in cases beyond 30 weeks of gestation. Furthermore, when constrained by the anatomical structures of both the cerebellum and cistern, ASC-Net exhibited the best performance compared with other structural constraints. The development and validation of ASC-Net have significantly enhanced the accuracy of prenatal diagnosis of cerebellar hypoplasia using ultrasound images. This study highlights the importance of the anatomical structures of the fetal cerebellum and cistern to the performance of diagnostic artificial intelligence models in ultrasound. This might provide new insights for clinical diagnosis of cerebellar hypoplasia, assist clinicians in providing more targeted advice and treatment during pregnancy, and contribute to improved perinatal healthcare. ASC-Net is open-sourced and publicly available in a GitHub repository at https://github.com/Wwwwww111112/ASC-Net .
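
A minimal sketch of a dual-branch classifier that takes the ultrasound image and the anatomical-structure masks as separate inputs, in the spirit of ASC-Net, follows; the backbones, fusion by concatenation, and head are assumptions, and the authors' actual implementation is in the linked repository.

```python
# Sketch: two encoders, one for the ultrasound image and one for the
# skull/cerebellum/cistern masks, fused for binary classification.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def encoder(in_ch: int) -> nn.Module:
    net = resnet18(weights=None)
    net.conv1 = nn.Conv2d(in_ch, 64, 7, 2, 3, bias=False)  # non-RGB inputs
    net.fc = nn.Identity()                                  # expose 512-d features
    return net

class DualBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_branch = encoder(1)       # grayscale ultrasound image
        self.mask_branch = encoder(3)      # one channel per structure mask
        self.head = nn.Linear(512 * 2, 2)  # normal vs cerebellar hypoplasia

    def forward(self, image, masks):
        z = torch.cat([self.img_branch(image), self.mask_branch(masks)], dim=1)
        return self.head(z)

model = DualBranchNet()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 3, 224, 224))
```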

Predicting Efficacy of Neoadjuvant Chemoradiotherapy for Locally Advanced Rectal Cancer Using Transrectal Contrast-Enhanced Ultrasound-Based Radiomics Model.

Liao Z, Yang Y, Luo Y, Yin H, Jing J, Zhuang H

PubMed · Sep 5, 2025
Accurately predicting tumor regression grade (TRG) after neoadjuvant chemoradiotherapy (NCRT) preoperatively in patients with locally advanced rectal cancer (LARC) is crucial for providing individualized treatment plans. This study aims to develop transrectal contrast-enhanced ultrasound (TR-CEUS)-based radiomics models for predicting TRG. A total of 190 LARC patients undergoing NCRT and subsequent total mesorectal excision were categorized into good and poor response groups based on pathological TRG. TR-CEUS examinations were conducted before and after NCRT. Machine learning (ML) models for predicting TRG were developed from pre- and post-NCRT TR-CEUS image series using seven classifiers, including random forest (RF) and multi-layer perceptron (MLP). The predictive performance of the models was evaluated using receiver operating characteristic curve analysis and the DeLong test. A total of 1525 TR-CEUS images were included for analysis, and 3360 ML models were constructed from image series before and after NCRT, respectively. The optimal pre-NCRT ML model, constructed from imaging series before NCRT, was RF, whereas the optimal post-NCRT model, derived from imaging series after NCRT, was MLP. The areas under the curve for the optimal RF and MLP models were 0.609 and 0.857, respectively, in the cross-validation cohort, with corresponding values of 0.659 and 0.841 in the independent test cohort. DeLong tests showed that the predictive efficacy of the post-NCRT model was statistically higher than that of the pre-NCRT model (p < 0.05). The radiomics model developed from post-NCRT TR-CEUS images demonstrated high predictive performance for TRG, thereby facilitating precise evaluation of therapeutic response to NCRT in LARC patients.
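
The model comparison hinges on paired AUC testing. Since the DeLong test is not available in scikit-learn, the sketch below uses a bootstrap confidence interval on the AUC difference as a stand-in, on synthetic scores; it illustrates the comparison, not the study's statistics.

```python
# Bootstrap comparison of two models' AUCs on the same cases (a stand-in
# for the DeLong test); labels and scores are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)
pre_scores = rng.random(100)                  # pre-NCRT model outputs
post_scores = y + rng.normal(0, 0.5, 100)     # post-NCRT model outputs

diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))     # resample cases with replacement
    if len(np.unique(y[idx])) < 2:            # skip degenerate resamples
        continue
    diffs.append(roc_auc_score(y[idx], post_scores[idx])
                 - roc_auc_score(y[idx], pre_scores[idx]))

lo, hi = np.percentile(np.asarray(diffs), [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 suggests a real gap
```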