Page 35 of 41404 results

Construction of a Prediction Model for Adverse Perinatal Outcomes in Foetal Growth Restriction Based on a Machine Learning Algorithm: A Retrospective Study.

Meng X, Wang L, Wu M, Zhang N, Li X, Wu Q

pubmed · May 23 2025
To create and validate a machine learning (ML)-based model for predicting adverse perinatal outcomes (APO) in foetal growth restriction (FGR) at diagnosis. A retrospective study. Multi-centre in China. Pregnancies affected by FGR. We enrolled singleton foetuses with a perinatal diagnosis of FGR who were admitted between January 2021 and November 2023. A total of 361 pregnancies from Beijing Obstetrics and Gynecology Hospital were used as the training set and the internal test set, while data from 50 pregnancies from Haidian Maternal and Child Health Hospital were used as the external test set. Feature screening was performed using random forest (RF), the Least Absolute Shrinkage and Selection Operator (LASSO) and logistic regression (LR). Subsequently, six ML methods, including Stacking, were used to construct models to predict the APO of FGR. The models' performance was evaluated using indicators such as the area under the receiver operating characteristic curve (AUROC). Shapley Additive Explanations (SHAP) analysis was used to rank the model features and explain the final model. The mean ± SD gestational age at diagnosis was 32.3 ± 4.8 weeks in the absent-APO group and 27.3 ± 3.7 weeks in the present-APO group. Women in the present-APO group had a higher rate of hypertension related to pregnancy (74.8% vs. 18.8%, p < 0.001). Among 17 candidate predictors (including maternal characteristics, maternal comorbidities, obstetric characteristics and ultrasound parameters), the integration of RF, LASSO and LR methodologies identified maternal body mass index, hypertension, gestational age at diagnosis of FGR, estimated foetal weight (EFW) z score, EFW growth velocity and abnormal umbilical artery Doppler (defined as a pulsatility index above the 95th percentile or absent/reversed diastolic flow) as significant predictors.
The Stacking model demonstrated good performance in both the internal test set [AUROC: 0.861; 95% confidence interval (CI), 0.838-0.896] and the external test set [AUROC: 0.906; 95% CI, 0.875-0.947]. The calibration curve showed high agreement between predicted and observed risks; the Hosmer-Lemeshow test gave p = 0.387 and p = 0.825 for the internal and external test sets, respectively. The ML algorithm for APO, which integrates maternal clinical factors and ultrasound parameters, demonstrates good predictive value for APO in FGR at diagnosis. This suggests that ML techniques may be a valid approach for the early detection of FGR pregnancies at high risk of APO.
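The stacking step described above — feeding base learners' out-of-fold predictions into a meta-learner — can be sketched with scikit-learn's `StackingClassifier`. This is an illustrative stand-in, not the authors' code: the synthetic data, the choice of random forest and logistic regression as base learners, and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 17 candidate predictors described in the abstract
X, y = make_classification(n_samples=400, n_features=17, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stacking: base learners' out-of-fold predictions feed a logistic meta-learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(round(auc, 3))
```

The 5-fold `cv` argument ensures the meta-learner is trained on held-out base-model predictions, which is what distinguishes stacking from naive model averaging.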

An Ultrasound Image-Based Deep Learning Radiomics Nomogram for Differentiating Between Benign and Malignant Indeterminate Cytology (Bethesda III) Thyroid Nodules: A Retrospective Study.

Zhong L, Shi L, Li W, Zhou L, Wang K, Gu L

pubmed · May 21 2025
Our objective is to develop and validate a deep learning radiomics nomogram (DLRN) based on preoperative ultrasound images and clinical features for predicting the malignancy of thyroid nodules with indeterminate cytology (Bethesda III). Between June 2017 and June 2022, we conducted a retrospective study of 194 patients with surgically confirmed indeterminate cytology (Bethesda III) at our hospital. The training and internal validation cohorts comprised 155 and 39 patients, in a 7:3 ratio. To facilitate external validation, we selected an additional 80 patients from each of the two remaining medical centers. Utilizing preoperative ultrasound data, we obtained imaging markers encompassing both deep learning and handcrafted radiomic features. After feature selection, we developed a comprehensive diagnostic model to evaluate the predictive value for benign versus malignant Bethesda III cases. The model's diagnostic accuracy, calibration, and clinical applicability were systematically assessed. The results showed that the prediction model, which integrated 512 deep transfer learning (DTL) features extracted from the pre-trained ResNet34 network, ultrasound radiomics features, and clinical features, exhibited superior stability in distinguishing between benign and malignant indeterminate thyroid nodules (Bethesda Class III). In the validation set, the AUC was 0.92 (95% CI: 0.831-1.000), and the accuracy, sensitivity, specificity, precision, and recall were 0.897, 0.882, 0.909, 0.882, and 0.882, respectively. The comprehensive multidimensional data model based on deep transfer learning, ultrasound radiomics features, and clinical characteristics can effectively distinguish benign from malignant indeterminate thyroid nodules (Bethesda Class III), providing valuable guidance for treatment selection in these patients.
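The accuracy, sensitivity, specificity, and precision figures quoted above all derive from a confusion matrix. A minimal numpy sketch of those definitions, using a hypothetical 8-nodule example rather than the study's data:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity and precision from labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "precision": tp / (tp + fp)}

# Hypothetical toy example (1 = malignant, 0 = benign), not the study's data
m = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 0, 1])
print(m["sensitivity"], m["specificity"])  # 0.75 0.75
```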

Update on the detection of frailty in older adults: a multicenter cohort machine learning-based study protocol.

Fernández-Carnero S, Martínez-Pozas O, Pecos-Martín D, Pardo-Gómez A, Cuenca-Zaldívar JN, Sánchez-Romero EA

pubmed · May 21 2025
This study aims to investigate the relationship between muscle activation variables assessed via ultrasound and the comprehensive assessment of geriatric patients, as well as to analyze ultrasound images to determine their correlation with morbimortality factors in frail patients. The present cohort study will be conducted in 500 older adults diagnosed with frailty. A multicenter study will be conducted among the day care centers and nursing homes. This will be achieved through the evaluation of frail older adults via instrumental and functional tests, along with specific ultrasound images to study sarcopenia and nutrition, followed by a detailed analysis of the correlation between all collected variables. This study aims to investigate the correlation between ultrasound-assessed muscle activation variables and the overall health of geriatric patients. It addresses the limitations of previous research by including a large sample size of 500 patients and measuring various muscle parameters beyond thickness. Additionally, it aims to analyze ultrasound images to identify markers associated with higher risk of complications in frail patients. The study involves frail older adults undergoing functional tests and specific ultrasound examinations. A comprehensive analysis of functional, ultrasound, and nutritional variables will be conducted to understand their correlation with overall health and risk of complications in frail older patients. The study was approved by the Research Ethics Committee of the Hospital Universitario Puerta de Hierro, Madrid, Spain (Act nº 18/2023). In addition, the study was registered with https://clinicaltrials.gov/ (NCT06218121).

Automated Fetal Biometry Assessment with Deep Ensembles using Sparse-Sampling of 2D Intrapartum Ultrasound Images

Jayroop Ramesh, Valentin Bacher, Mark C. Eid, Hoda Kalabizadeh, Christian Rupprecht, Ana IL Namburete, Pak-Hei Yeung, Madeleine K. Wyburd, Nicola K. Dinsdale

arxiv preprint · May 20 2025
The International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) advocates intrapartum ultrasound (US) imaging to monitor labour progression through changes in fetal head position. Two reliable ultrasound-derived parameters used to predict outcomes of instrumental vaginal delivery are the angle of progression (AoP) and head-symphysis distance (HSD). In this work, as part of the Intrapartum Ultrasound Grand Challenge (IUGC) 2024, we propose an automated fetal biometry measurement pipeline to reduce intra- and inter-observer variability and improve measurement reliability. Our pipeline consists of three key tasks: (i) classification of standard planes (SP) from US videos, (ii) segmentation of the fetal head and pubic symphysis from the detected SPs, and (iii) computation of the AoP and HSD from the segmented regions. We perform sparse sampling to mitigate class imbalances and reduce spurious correlations in task (i), and utilize ensemble-based deep learning methods for tasks (i) and (ii) to enhance generalizability under different US acquisition settings. Finally, to promote robustness in task (iii) with respect to the structural fidelity of measurements, we retain the largest connected components and apply ellipse fitting to the segmentations. Our solution achieved ACC: 0.9452, F1: 0.9225, AUC: 0.983, MCC: 0.8361, DSC: 0.918, HD: 19.73, ASD: 5.71, $\Delta_{AoP}$: 8.90 and $\Delta_{HSD}$: 14.35 across an unseen hold-out set of 4 patients and 224 US frames. The results from the proposed automated pipeline can improve the understanding of labour arrest causes and guide the development of clinical risk stratification tools for efficient and effective prenatal care.
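Retaining the largest connected component before ellipse fitting, as in task (iii), can be sketched with `scipy.ndimage`. The toy mask and function name here are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def largest_component(mask):
    """Keep only the largest connected component of a binary mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return (labels == (np.argmax(sizes) + 1)).astype(mask.dtype)

mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:4, 1:4] = 1          # 9-pixel blob: the "real" structure
mask[6, 6] = 1              # 1-pixel spurious blob to be discarded
clean = largest_component(mask)
print(int(clean.sum()))     # 9: the spurious pixel is removed
```

Dropping small spurious components before fitting an ellipse keeps a single stray false-positive pixel from skewing the fitted geometry, and hence the AoP/HSD measurements.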

An explainable AI-driven deep neural network for accurate breast cancer detection from histopathological and ultrasound images.

Alom MR, Farid FA, Rahaman MA, Rahman A, Debnath T, Miah ASM, Mansor S

pubmed · May 20 2025
Breast cancer represents a significant global health challenge, which makes it essential to detect breast cancer early and accurately to improve patient prognosis and reduce mortality rates. However, traditional diagnostic processes relying on manual analysis of medical images are inherently complex and subject to variability between observers, highlighting the urgent need for robust automated breast cancer detection systems. While deep learning has demonstrated potential, many current models struggle with limited accuracy and lack of interpretability. This research introduces the Deep Neural Breast Cancer Detection (DNBCD) model, an explainable AI-based framework that utilizes deep learning methods for classifying breast cancer using histopathological and ultrasound images. The proposed model employs DenseNet121 as a foundation, integrating customized Convolutional Neural Network (CNN) layers, including GlobalAveragePooling2D, Dense, and Dropout layers, along with transfer learning to achieve both high accuracy and interpretability for breast cancer diagnosis. The proposed DNBCD model integrates several preprocessing techniques, including image normalization and resizing, and augmentation techniques to enhance the model's robustness, and addresses class imbalance using class weights. It employs Grad-CAM (Gradient-weighted Class Activation Mapping) to offer visual justifications for its predictions, increasing trust and transparency among healthcare providers. The model was assessed using two benchmark datasets: BreakHis-400x (B-400x) and the Breast Ultrasound Images Dataset (BUSI), containing 1820 and 1578 images, respectively. We systematically divided the datasets into training (70%), testing (20%), and validation (10%) sets, ensuring efficient model training and evaluation, and obtained accuracies of 93.97% on the B-400x dataset (benign and malignant classes) and 89.87% on the BUSI dataset (benign, malignant, and normal classes) for breast cancer detection.
Experimental results demonstrate that the proposed DNBCD model significantly outperforms existing state-of-the-art approaches with potential uses in clinical environments. We also made all the materials publicly accessible for the research community at: https://github.com/romzanalom/XAI-Based-Deep-Neural-Breast-Cancer-Detection .
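One imbalance-handling step the abstract mentions — class weights — is commonly computed as inverse class frequencies (scikit-learn's "balanced" heuristic). A sketch under that assumption, with a hypothetical label vector rather than the actual B-400x/BUSI label counts:

```python
import numpy as np

def inverse_freq_weights(labels):
    """Class weights w_c = N / (K * n_c): rarer classes get larger weights."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Hypothetical imbalanced benign (0) / malignant (1) label vector
y = np.array([0] * 120 + [1] * 60)
print(inverse_freq_weights(y))  # {0: 0.75, 1: 1.5}
```

Passing such a dict as `class_weight` to a training loop scales each class's loss contribution, so the minority class is not drowned out by the majority.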

Blind Restoration of High-Resolution Ultrasound Video

Chu Chen, Kangning Cui, Pasquale Cascarano, Wei Tang, Elena Loli Piccolomini, Raymond H. Chan

arxiv preprint · May 20 2025
Ultrasound imaging is widely applied in clinical practice, yet ultrasound videos often suffer from low signal-to-noise ratios (SNR) and limited resolutions, posing challenges for diagnosis and analysis. Variations in equipment and acquisition settings can further exacerbate differences in data distribution and noise levels, reducing the generalizability of pre-trained models. This work presents a self-supervised ultrasound video super-resolution algorithm called Deep Ultrasound Prior (DUP). DUP employs a video-adaptive optimization process of a neural network that enhances the resolution of given ultrasound videos without requiring paired training data while simultaneously removing noise. Quantitative and visual evaluations demonstrate that DUP outperforms existing super-resolution algorithms, leading to substantial improvements for downstream applications.
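DUP's deep-prior idea — optimizing a model against a single noisy observation and relying on the model's inductive bias to reject noise, with no paired training data — has a toy 1-D analogue: an under-parameterized polynomial fit can represent a smooth signal but not per-sample noise. This sketch only illustrates the principle, not DUP itself; the signal, noise level, and polynomial degree are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * t)                    # latent clean signal
noisy = clean + 0.3 * rng.standard_normal(200)   # observed degraded signal

# Under-parameterized fit (degree-5 polynomial) acts as an implicit prior:
# it is expressive enough for the smooth signal, not for the noise.
coeffs = np.polyfit(t, noisy, deg=5)
restored = np.polyval(coeffs, t)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_restored = float(np.mean((restored - clean) ** 2))
print(mse_restored < mse_noisy)  # True: the prior removed most of the noise
```

DUP replaces the polynomial with a neural network optimized per video, but the mechanism — restoration without ground-truth targets via architectural bias — is the same.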

Semiautomated segmentation of breast tumor on automatic breast ultrasound image using a large-scale model with customized modules.

Zhou Y, Ye M, Ye H, Zeng S, Shu X, Pan Y, Wu A, Liu P, Zhang G, Cai S, Chen S

pubmed · May 19 2025
To verify the capability of the Segment Anything Model for medical images in 3D (SAM-Med3D), tailored with low-rank adaptation (LoRA) strategies, in segmenting breast tumors in Automated Breast Ultrasound (ABUS) images. This retrospective study collected data from 329 patients diagnosed with breast cancer (average age 54 years). The dataset was randomly divided into training (n = 204), validation (n = 29), and test (n = 59) sets. Two experienced radiologists manually annotated the regions of interest of each sample in the dataset, which served as ground truth for training and evaluating the SAM-Med3D model with additional customized modules. For semi-automatic tumor segmentation, points were randomly sampled within the lesion areas to simulate radiologists' clicks in real-world scenarios. Segmentation performance was evaluated using the Dice coefficient. A total of 492 cases (200 from the Tumor Detection, Segmentation, and Classification Challenge on Automated 3D Breast Ultrasound (TDSC-ABUS) 2023 challenge) were subjected to semi-automatic segmentation inference. The average Dice Similarity Coefficient (DSC) scores for the training, validation, and test sets of the Lishui dataset were 0.75, 0.78, and 0.75, respectively. The Breast Imaging Reporting and Data System (BI-RADS) categories of the samples ranged from BI-RADS 3 to 6, yielding average DSCs between 0.73 and 0.77. When the samples (lesion volumes ranging from 1.64 to 100.03 cm<sup>3</sup>) were categorized by lesion size, the average DSC fell between 0.72 and 0.77. The overall average DSC for the TDSC-ABUS 2023 challenge dataset was 0.79, with the test set achieving a state-of-the-art score of 0.79. The SAM-Med3D model with additional customized modules demonstrates good performance in semi-automatic 3D ABUS breast cancer tumor segmentation, indicating its feasibility for application in computer-aided diagnosis systems.
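The Dice similarity coefficient used throughout these results is twice the overlap divided by the summed mask sizes. A minimal numpy sketch on a 2-D toy example (the 3-D ABUS case is identical, just with volumetric masks):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

a = np.zeros((10, 10)); a[2:6, 2:6] = 1   # "ground truth": 16 px
b = np.zeros((10, 10)); b[3:7, 3:7] = 1   # "prediction": 16 px, overlap 9 px
print(round(float(dice(a, b)), 4))        # 2*9 / 32 = 0.5625
```

The small `eps` keeps the ratio defined when both masks are empty, a common convention in segmentation evaluation code.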

Portable Ultrasound Bladder Volume Measurement Over Entire Volume Range Using a Deep Learning Artificial Intelligence Model in a Selected Cohort: A Proof of Principle Study.

Jeong HJ, Seol A, Lee S, Lim H, Lee M, Oh SJ

pubmed · May 19 2025
We aimed to prospectively investigate whether bladder volume measured using deep learning artificial intelligence (AI) algorithms (AI-BV) is more accurate than that measured using conventional methods (C-BV) with a portable ultrasound bladder scanner (PUBS). Patients who underwent filling cystometry because of lower urinary tract symptoms between January 2021 and July 2022 were enrolled. Every time the bladder was filled serially with normal saline from 0 mL to maximum cystometric capacity in 50 mL increments, C-BV was measured using PUBS. Ultrasound images obtained during this process were manually annotated to define the bladder contour, which was used to build a deep learning AI model. The true bladder volume (T-BV) for each bladder volume range was compared with C-BV and AI-BV for analysis. We enrolled 250 patients (213 men and 37 women), and a deep learning AI model was established using 1912 bladder images. There was a significant difference between C-BV (205.5 ± 170.8 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.001), but no significant difference between AI-BV (197.0 ± 161.1 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.081). In bladder volume ranges of 101-150, 151-200, and 201-300 mL, there were significant differences in the percentage volume differences between [C-BV and T-BV] and [AI-BV and T-BV] (p < 0.05), but no significant difference when converted to absolute values (p > 0.05). C-BV (R<sup>2</sup> = 0.91, p < 0.001) and AI-BV (R<sup>2</sup> = 0.90, p < 0.001) were highly correlated with T-BV. The mean difference between AI-BV and T-BV (6.5 ± 50.4 mL) was significantly smaller than that between C-BV and T-BV (15.0 ± 50.9 mL) (p = 0.001). Following image pre-processing, deep learning AI-BV estimated true BV more accurately than conventional methods in this selected cohort on internal validation. Determination of the clinical relevance of these findings and of performance in external cohorts requires further study.
The clinical trial was conducted using an approved product for its approved indication, so approval from the Ministry of Food and Drug Safety (MFDS) was not required. Therefore, there is no clinical trial registration number.
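The abstract does not state which geometric formula the conventional scanner uses; a common convention for portable bladder scanners is the prolate-ellipsoid approximation V ≈ (π/6)·width·depth·height. A sketch under that assumption, with hypothetical dimensions:

```python
import math

def ellipsoid_volume_ml(width_cm, depth_cm, height_cm):
    """Ellipsoid approximation of bladder volume: cm inputs -> mL output."""
    return math.pi / 6.0 * width_cm * depth_cm * height_cm

# Hypothetical bladder diameters measured on two orthogonal ultrasound planes
print(round(ellipsoid_volume_ml(8.0, 6.0, 7.0), 1))  # ≈ 175.9 mL
```

The deep learning approach in the study instead derives volume from a learned contour segmentation, which is one plausible reason its mean error against T-BV was smaller than the conventional estimate's.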

Longitudinal Validation of a Deep Learning Index for Aortic Stenosis Progression

Park, J., Kim, J., Yoon, Y. E., Jeon, J., Lee, S.-A., Choi, H.-M., Hwang, I.-C., Cho, G.-Y., Chang, H.-J., Park, J.-H.

medrxiv preprint · May 19 2025
Aims: Aortic stenosis (AS) is a progressive disease requiring timely monitoring and intervention. While transthoracic echocardiography (TTE) remains the diagnostic standard, deep learning (DL)-based approaches offer potential for improved disease tracking. This study examined the longitudinal changes in a previously developed DL-derived index for the AS continuum (DLi-ASc) and assessed its value in predicting progression to severe AS. Methods and Results: We retrospectively analysed 2,373 patients (7,371 TTEs) from two tertiary hospitals. DLi-ASc (scaled 0-100), derived from parasternal long- and/or short-axis views, was tracked longitudinally. DLi-ASc increased in parallel with worsening AS stages (p for trend <0.001) and showed strong correlations with AV maximal velocity (Vmax) (Pearson correlation coefficient [PCC] = 0.69, p<0.001) and mean pressure gradient (mPG) (PCC = 0.66, p<0.001). Higher baseline DLi-ASc was associated with a faster AS progression rate (p for trend <0.001). Additionally, the annualised change in DLi-ASc, estimated using linear mixed-effects models, correlated strongly with the annualised progression of AV Vmax (PCC = 0.71, p<0.001) and mPG (PCC = 0.68, p<0.001). In Fine-Gray competing risk models, baseline DLi-ASc independently predicted progression to severe AS, even after adjustment for AV Vmax or mPG (hazard ratio per 10-point increase = 2.38 and 2.80, respectively). Conclusion: DLi-ASc increased in parallel with AS progression and independently predicted severe AS progression. These findings support its role as a non-invasive imaging-based digital marker for longitudinal AS monitoring and risk stratification.
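The annualised DLi-ASc change was estimated with linear mixed-effects models; a simplified per-patient analogue is an ordinary least-squares slope in points per year. The serial measurements below are hypothetical, not the study's data:

```python
import numpy as np

def annualized_slope(years, values):
    """OLS slope of a biomarker vs. time (units per year) for one patient."""
    A = np.vstack([years, np.ones_like(years)]).T
    slope, _ = np.linalg.lstsq(A, values, rcond=None)[0]
    return float(slope)

# Hypothetical patient: DLi-ASc rising ~4 points/year across serial TTEs
t = np.array([0.0, 0.9, 2.1, 3.0])          # years since baseline echo
v = np.array([40.0, 43.5, 48.5, 52.0])      # DLi-ASc at each echo
print(round(annualized_slope(t, v), 2))     # 4.02
```

A mixed-effects model generalizes this by pooling patients: it estimates a population-average slope plus patient-specific random deviations, which stabilizes slopes for patients with few echoes.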

Transformer model based on Sonazoid contrast-enhanced ultrasound for microvascular invasion prediction in hepatocellular carcinoma.

Qin Q, Pang J, Li J, Gao R, Wen R, Wu Y, Liang L, Que Q, Liu C, Peng J, Lv Y, He Y, Lin P, Yang H

pubmed · May 19 2025
Microvascular invasion (MVI) is strongly associated with the prognosis of patients with hepatocellular carcinoma (HCC). To evaluate the value of Transformer models with Sonazoid contrast-enhanced ultrasound (CEUS) in the preoperative prediction of MVI, this retrospective study included 164 HCC patients. Deep learning features and radiomic features were extracted from arterial- and Kupffer-phase images, alongside the collection of clinicopathological parameters. Normality was assessed using the Shapiro-Wilk test. The Mann-Whitney U-test and the least absolute shrinkage and selection operator algorithm were applied to screen features. Transformer, radiomic, and clinical prediction models for MVI were constructed with logistic regression. Repeated random splits followed a 7:3 ratio, with model performance evaluated over 50 iterations. The area under the receiver operating characteristic curve (AUC, 95% confidence interval [CI]), sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), decision curve, and calibration curve were used to evaluate model performance. The DeLong test was applied to compare performance between models, and the Bonferroni method was used to control type I error rates arising from multiple comparisons. A two-sided p-value of < 0.05 was considered statistically significant. In the training set, the diagnostic performance of the arterial-phase Transformer (AT) and Kupffer-phase Transformer (KT) models was better than that of the radiomic and clinical (Clin) models (p < 0.0001). In the validation set, both the AT and KT models also outperformed the radiomic and Clin models (p < 0.05). The AUC (95% CI) for the AT model was 0.821 (0.72-0.925) with an accuracy of 80.0%, and that for the KT model was 0.859 (0.766-0.977) with an accuracy of 70.0%.
Logistic regression analysis indicated that tumor size (p = 0.016) and alpha-fetoprotein (AFP) (p = 0.046) were independent predictors of MVI. Transformer models using Sonazoid CEUS have potential for effectively identifying MVI-positive patients preoperatively.
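The LASSO-screening-then-logistic-regression pattern described here can be sketched with scikit-learn: an L1 penalty shrinks uninformative coefficients to exactly zero, and a plain logistic model is refit on the surviving features. The synthetic data and every setting (penalty strength, solver, feature counts) are assumptions, not the study's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 40 candidate features, only a handful truly informative
X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           random_state=0)
X = StandardScaler().fit_transform(X)  # L1 selection assumes comparable scales

# L1-penalized screening: features with zero coefficients are dropped
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
keep = np.flatnonzero(lasso.coef_[0])

# Final model refit on the selected subset only
model = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
print(len(keep), round(model.score(X[:, keep], y), 3))
```

Smaller `C` means a stronger penalty and a sparser feature set; in practice it is tuned by cross-validation rather than fixed as here.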
