
Decoding fetal motion in 4D ultrasound with DeepLabCut.

Inubashiri E, Kaishi Y, Miyake T, Yamaguchi R, Hamaguchi T, Inubashiri M, Ota H, Watanabe Y, Deguchi K, Kuroki K, Maeda N

PubMed · Aug 11, 2025
This study aimed to objectively and quantitatively analyze fetal motor behavior using DeepLabCut (DLC), a markerless posture estimation tool based on deep learning, applied to four-dimensional ultrasound (4DUS) data collected during the second trimester. We propose a novel clinical method for precise assessment of fetal neurodevelopment. Fifty 4DUS video recordings of normal singleton fetuses aged 12 to 22 gestational weeks were analyzed. Eight fetal joints were manually labeled in 2% of each video to train a customized DLC model. The model's accuracy was evaluated using likelihood scores. Intra- and inter-rater reliability of manual labeling were assessed using intraclass correlation coefficients (ICC). Angular velocity time series derived from joint coordinates were analyzed to quantify fetal movement patterns and developmental coordination. Manual labeling demonstrated excellent reproducibility (inter-rater ICC = 0.990, intra-rater ICC = 0.961). The trained DLC model achieved a mean likelihood score of 0.960, confirming high tracking accuracy. Kinematic analysis revealed developmental trends: localized rapid limb movements were common at 12-13 weeks; movements became more coordinated and systemic by 18-20 weeks, reflecting advancing neuromuscular maturation. Although a modest increase in tracking accuracy was observed with gestational age, this trend did not reach statistical significance (p < 0.001). DLC enables precise quantitative analysis of fetal motor behavior from 4DUS recordings. This AI-driven approach offers a promising, noninvasive alternative to conventional qualitative assessments, providing detailed insights into early fetal neurodevelopmental trajectories and potential early screening for neurodevelopmental disorders.
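
As a concrete illustration of the kinematic analysis the abstract describes, the sketch below computes a joint-angle and angular-velocity time series from markerless tracking output; the joint names, CSV column layout, and frame rate are illustrative assumptions, not the authors' actual DeepLabCut configuration.

```python
import numpy as np
import pandas as pd

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by points a-b-c, computed per frame."""
    v1, v2 = a - b, c - b
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-8)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Hypothetical flattened DLC output: one (x, y) pair per labeled joint per frame.
df = pd.read_csv("fetal_dlc_output.csv")
shoulder = df[["shoulder_x", "shoulder_y"]].to_numpy()
elbow = df[["elbow_x", "elbow_y"]].to_numpy()
wrist = df[["wrist_x", "wrist_y"]].to_numpy()

fps = 25.0                                        # assumed 4DUS frame rate
theta = joint_angle(shoulder, elbow, wrist)       # elbow angle per frame
angular_velocity = np.gradient(theta, 1.0 / fps)  # rad/s time series
print(angular_velocity[:10])
```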

A Deep Learning-Based Automatic Recognition Model for Polycystic Ovary Ultrasound Images.

Zhao B, Wen L, Huang Y, Fu Y, Zhou S, Liu J, Liu M, Li Y

PubMed · Aug 11, 2025
Polycystic ovary syndrome (PCOS) has a significant impact on endocrine metabolism, reproductive function, and mental health in women of reproductive age. Ultrasound remains an essential diagnostic tool for PCOS, particularly in individuals presenting with oligomenorrhea or ovulatory dysfunction accompanied by polycystic ovaries, as well as hyperandrogenism associated with polycystic ovaries. However, the accuracy of ultrasound in identifying polycystic ovarian morphology remains variable. This prospective diagnostic accuracy study aimed to develop a deep learning model capable of rapidly and accurately identifying PCOS using ovarian ultrasound images. The study included data from 1,751 women with suspected PCOS who presented at two hospitals affiliated with Central South University, with clinical and ultrasound information collected and archived. Patients from center 1 were randomly divided into a training set and an internal validation set in a 7:3 ratio, while patients from center 2 served as the external validation set. Using the YOLOv11 deep learning framework, an automated recognition model for ovarian ultrasound images in PCOS cases was constructed, and its diagnostic performance was evaluated. Ultrasound images from 933 patients (781 from center 1 and 152 from center 2) were analyzed. The mean average precision of the YOLOv11 model in detecting the target ovary was 95.7%, 97.6%, and 97.8% for the training, internal validation, and external validation sets, respectively. For diagnostic classification, the model achieved an F1 score of 95.0% in the training set and 96.9% in both validation sets. The area under the curve values were 0.953, 0.973, and 0.967 for the training, internal validation, and external validation sets, respectively. The model also evaluated a single ovary significantly faster than clinicians (clinician, 5.0 seconds; model, 0.1 seconds; p < 0.01). The YOLOv11-based automatic recognition model for PCOS ovarian ultrasound images exhibits strong target detection and diagnostic performance. This approach can streamline the follicle counting process in conventional ultrasound and enhance the efficiency and generalizability of ultrasound-based PCOS assessment.
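
The abstract builds on the YOLOv11 framework; the sketch below shows how such a detector is typically trained and validated with the open-source ultralytics package. The dataset YAML, weights file, and hyperparameters are placeholders rather than the study's actual settings.

```python
from ultralytics import YOLO

# Start from pretrained YOLO11 weights (placeholder model size).
model = YOLO("yolo11n.pt")

# 'ovary.yaml' is a hypothetical dataset config listing train/val images
# with the ovary classes to detect.
model.train(data="ovary.yaml", epochs=100, imgsz=640, batch=16)

# Evaluate mean average precision on the validation split.
metrics = model.val()
print(metrics.box.map50)   # mAP@0.5, the detection metric reported in the abstract

# Run inference on a new ultrasound frame.
results = model.predict("patient_ovary.png", conf=0.25)
```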

Ratio of visceral-to-subcutaneous fat area improves long-term mortality prediction over either measure alone: automated CT-based AI measures with longitudinal follow-up in a large adult cohort.

Liu D, Kuchnia AJ, Blake GM, Lee MH, Garrett JW, Pickhardt PJ

PubMed · Aug 11, 2025
Fully automated AI-based algorithms can quantify adipose tissue on abdominal CT images. The aim of this study was to investigate the clinical value of these biomarkers by determining the association between adipose tissue measures and all-cause mortality. This retrospective study included 151,141 patients who underwent abdominal CT for any reason between 2000 and 2021. A validated AI-based algorithm quantified subcutaneous (SAT) and visceral (VAT) adipose tissue cross-sectional area, and a visceral-to-subcutaneous adipose tissue area ratio (VSR) was calculated. Clinical data (age at the time of CT, sex, date of death, date of last contact) were obtained from a database search of the electronic health record. Hazard ratios (HR) and Kaplan-Meier curves assessed the relationship between adipose tissue measures and mortality. The endpoint of interest was all-cause mortality, with additional subgroup analyses by age and sex. A total of 138,169 patients were included in the final analysis. Higher VSR was associated with increased mortality; this association was strongest in younger women (HR 3.32 for the highest versus lowest risk quartile at 18-39 years). Lower SAT was associated with increased mortality regardless of sex or age group (HR up to 1.63 at 18-39 years). Higher VAT was associated with increased mortality in younger age groups, with the trend weakening and reversing with age; this association was stronger in women. AI-based CT measures of SAT, VAT, and VSR are predictive of mortality, with VSR being the highest-performing fat area biomarker overall. These metrics tended to perform better for women and younger patients. Incorporating AI tools can augment patient assessment and management, improving outcomes.
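
As a hedged sketch of the survival analysis described (hazard ratios and Kaplan-Meier curves for VSR against all-cause mortality), the snippet below uses the lifelines package; the column names, adjustment covariates, and quartile coding are assumptions for illustration only.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Assumed columns: follow-up time (years), death indicator (0/1), sex coded 0/1,
# and AI-derived fat areas in cm^2.
df = pd.read_csv("ct_body_comp_cohort.csv")
df["vsr"] = df["vat_cm2"] / df["sat_cm2"]                  # visceral/subcutaneous ratio
df["vsr_quartile"] = pd.qcut(df["vsr"], 4, labels=[1, 2, 3, 4])

# Cox proportional hazards model: hazard per unit VSR, adjusted for age and sex.
cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "died", "age", "sex", "vsr"]],
    duration_col="followup_years",
    event_col="died",
)
cph.print_summary()   # the exp(coef) column gives the hazard ratios

# Kaplan-Meier estimates for the highest vs lowest VSR quartile.
kmf = KaplanMeierFitter()
for q in (1, 4):
    mask = df["vsr_quartile"] == q
    kmf.fit(df.loc[mask, "followup_years"], df.loc[mask, "died"], label=f"VSR Q{q}")
    print(q, kmf.median_survival_time_)
```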

A Systematic Review of Multimodal Deep Learning and Machine Learning Fusion Techniques for Prostate Cancer Classification

Manzoor, F., Gupta, V., Pinky, L., Wang, Z., Chen, Z., Deng, Y., Neupane, S.

medRxiv preprint · Aug 11, 2025
Prostate cancer remains one of the most prevalent malignancies and a leading cause of cancer-related deaths among men worldwide. Despite advances in traditional diagnostic methods such as prostate-specific antigen (PSA) testing, digital rectal examination, and multiparametric magnetic resonance imaging, these approaches remain constrained by modality-specific limitations, suboptimal sensitivity and specificity, and reliance on expert interpretation, which may introduce diagnostic inconsistency. Multimodal deep learning and machine learning fusion, which integrates diverse data sources including imaging, clinical, and molecular information, has emerged as a promising strategy to enhance the accuracy of prostate cancer classification. This review aims to outline the current state-of-the-art deep learning- and machine learning-based fusion techniques for prostate cancer classification, focusing on their implementation, performance, challenges, and clinical applicability. Following the PRISMA guidelines, a total of 131 studies were identified, of which 27 studies published between 2021 and 2025 met the inclusion criteria. Extracted data included input techniques, deep learning architectures, performance metrics, and validation approaches. The majority of the studies used an early fusion approach with convolutional neural networks to integrate the data. Clinical and imaging data were the most commonly used modalities in the reviewed studies. Overall, multimodal deep learning and machine learning-based fusion significantly advances prostate cancer classification and outperforms unimodal approaches.
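
Since the review identifies early fusion with convolutional neural networks as the dominant design, the minimal PyTorch sketch below illustrates feature-level (early) fusion of an imaging branch with clinical variables; the architecture and dimensions are invented for illustration and are not taken from any reviewed study.

```python
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    """Toy early-fusion classifier: CNN image features concatenated with clinical features."""
    def __init__(self, n_clinical: int = 8, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(               # small encoder for a 1-channel MRI slice (assumed)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Sequential(              # classifier over the fused feature vector
            nn.Linear(32 + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), clinical], dim=1)   # early fusion by concatenation
        return self.head(fused)

model = EarlyFusionNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 8))
print(logits.shape)   # torch.Size([4, 2])
```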

Construction and validation of a urinary stone composition prediction model based on machine learning.

Guo J, Zhang J, Zhang J, Xu C, Wang X, Liu C

PubMed · Aug 11, 2025
The composition of urinary calculi serves as a critical determinant for personalized surgical strategies; however, such compositional data are often unavailable preoperatively. This study aimed to develop a machine learning-based preoperative prediction model for stone composition and evaluate its clinical utility. A retrospective cohort study design was employed, including patients with urinary calculi admitted to the Department of Urology at the Second Affiliated Hospital of Zhengzhou University from 2019 to 2024. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) regression combined with multivariate logistic regression, and binary prediction models for stone composition were subsequently constructed. Model validation was conducted using metrics such as the area under the curve (AUC), while Shapley Additive Explanations (SHAP) values were applied to interpret the predictive outcomes. Among 708 eligible patients, distinct prediction models were established for four stone types. For calcium oxalate stones, logistic regression achieved optimal performance (AUC = 0.845), with maximum stone CT value, 24-hour urinary oxalate, and stone size as the top predictors (SHAP-ranked). For infection stones, logistic regression (AUC = 0.864) prioritized stone size, urinary pH, and recurrence history. For uric acid stones, a LASSO-ridge-elastic net model demonstrated exceptional accuracy (AUC = 0.961), driven by maximum CT value, 24-hour oxalate, and urinary calcium. For calcium-containing stones, logistic regression attained strong performance (AUC = 0.953), relying on CT value, 24-hour calcium, and stone size. This study developed machine learning prediction models based on multi-algorithm integration, achieving accurate preoperative discrimination of urinary stone composition. The integration of key imaging features with metabolic indicators enhanced the models' predictive performance.
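
A minimal sketch of the modeling pipeline described (LASSO-based feature screening followed by logistic regression, with SHAP for interpretation) is shown below; the file name, feature names, and outcome column are hypothetical, and the exact algorithmic details of the published models may differ.

```python
import pandas as pd
import shap
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical feature table: CT value, stone size, 24-h urine metabolites, etc.
df = pd.read_csv("stone_features.csv")
X, y = df.drop(columns=["uric_acid_stone"]), df["uric_acid_stone"]

# Step 1: LASSO-based feature screening (features with nonzero coefficients retained).
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = X.columns[lasso.coef_ != 0]

# Step 2: logistic regression on the selected features.
X_tr, X_te, y_tr, y_te = train_test_split(X[selected], y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Step 3: SHAP values to rank feature contributions.
explainer = shap.LinearExplainer(clf, X_tr)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```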

Artificial Intelligence-Driven Body Composition Analysis Enhances Chemotherapy Toxicity Prediction in Colorectal Cancer.

Liu YZ, Su PF, Tai AS, Shen MR, Tsai YS

PubMed · Aug 11, 2025
Body surface area (BSA)-based chemotherapy dosing remains standard despite its limitations in predicting toxicity. Variations in body composition, particularly skeletal muscle and adipose tissue, influence drug metabolism and toxicity risk. This study aims to investigate the mediating role of body composition in the relationship between BSA-based dosing and dose-limiting toxicities (DLTs) in colorectal cancer patients receiving oxaliplatin-based chemotherapy. We retrospectively analyzed 483 stage III colorectal cancer patients treated at National Cheng Kung University Hospital (2013-2021). An artificial intelligence (AI)-driven algorithm quantified skeletal muscle and adipose tissue compartments from computed tomography (CT) scans at the third lumbar vertebra (L3) level. Mediation analysis evaluated the role of body composition in chemotherapy-related toxicities. Among the cohort, 18.2% (n = 88) experienced DLTs. While BSA alone was not significantly associated with DLTs (OR = 0.473, p = 0.376), increased intramuscular adipose tissue (IMAT) significantly predicted higher DLT risk (OR = 1.047, p = 0.038), whereas skeletal muscle area was protective. Mediation analysis confirmed that IMAT partially mediated the relationship between BSA and DLTs (indirect effect: 0.05, p = 0.040), highlighting the role of adipose infiltration in chemotherapy toxicity. BSA-based dosing inadequately accounts for interindividual variation in chemotherapy tolerance. AI-assisted body composition analysis provides a precision oncology framework for identifying high-risk patients and optimizing chemotherapy regimens. Prospective validation is warranted before body composition can be integrated into routine clinical decision-making.
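
The mediation analysis can be illustrated with a simple product-of-coefficients bootstrap, sketched below under assumed column names; this is a generic illustration of mediation between BSA (exposure), IMAT (mediator), and DLT (outcome), not the authors' exact statistical procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Assumed columns: bsa (m^2), imat_cm2 (mediator), dlt (0/1 outcome).
df = pd.read_csv("crc_chemo_cohort.csv")

def indirect_effect(d):
    # Path a: exposure (BSA) -> mediator (IMAT), linear regression.
    a = sm.OLS(d["imat_cm2"], sm.add_constant(d["bsa"])).fit().params["bsa"]
    # Path b: mediator (IMAT) -> outcome (DLT), adjusted for exposure, logistic regression.
    Xb = sm.add_constant(d[["bsa", "imat_cm2"]])
    b = sm.Logit(d["dlt"], Xb).fit(disp=0).params["imat_cm2"]
    return a * b   # product-of-coefficients indirect effect

# Nonparametric bootstrap for a confidence interval on the indirect effect.
rng = np.random.default_rng(0)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(df), len(df))
    boot.append(indirect_effect(df.iloc[idx]))

print("indirect effect:", indirect_effect(df),
      "95% CI:", np.percentile(boot, [2.5, 97.5]))
```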

Diffusing the Blind Spot: Uterine MRI Synthesis with Diffusion Models

Johanna P. Müller, Anika Knupfer, Pedro Blöss, Edoardo Berardi Vittur, Bernhard Kainz, Jana Hutter

arXiv preprint · Aug 11, 2025
Despite significant progress in generative modelling, existing diffusion models often struggle to produce anatomically precise female pelvic images, limiting their application in gynaecological imaging, where data scarcity and patient privacy concerns are critical. To overcome these barriers, we introduce a novel diffusion-based framework for uterine MRI synthesis, integrating both unconditional and conditional Denoising Diffusion Probabilistic Models (DDPMs) and Latent Diffusion Models (LDMs) in 2D and 3D. Our approach generates anatomically coherent, high-fidelity synthetic images that closely mimic real scans and provide valuable resources for training robust diagnostic models. We evaluate generative quality using advanced perceptual and distributional metrics, benchmarking against standard reconstruction methods, and demonstrate substantial gains in diagnostic accuracy on a key classification task. A blinded expert evaluation further validates the clinical realism of our synthetic images. We release our models with privacy safeguards and a comprehensive synthetic uterine MRI dataset to support reproducible research and advance equitable AI in gynaecology.
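
For readers unfamiliar with DDPM training, the sketch below shows the standard epsilon-prediction objective (forward noising plus noise regression) that underlies such models; the noise schedule and the toy stand-in for the denoising U-Net are illustrative only and do not reflect the authors' architecture.

```python
import torch
import torch.nn.functional as F

# Linear noise schedule for a toy DDPM with T steps; values are illustrative.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0):
    """Standard epsilon-prediction objective: noise an image, predict the noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward diffusion q(x_t | x_0)
    return F.mse_loss(model(x_t, t), noise)

class TinyNoisePredictor(torch.nn.Module):
    """Toy stand-in for the 2D/3D U-Net that would predict the added noise."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(1, 1, 3, padding=1)
    def forward(self, x, t):   # the timestep t is ignored in this stand-in
        return self.net(x)

loss = ddpm_loss(TinyNoisePredictor(), torch.randn(2, 1, 64, 64))
loss.backward()
```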

Enhanced Liver Tumor Detection in CT Images Using 3D U-Net and Bat Algorithm for Hyperparameter Optimization

Nastaran Ghorbani, Bitasadat Jamshidi, Mohsen Rostamy-Malkhalifeh

arXiv preprint · Aug 11, 2025
Liver cancer is one of the most prevalent and lethal forms of cancer, making early detection crucial for effective treatment. This paper introduces a novel approach for automated liver tumor segmentation in computed tomography (CT) images by integrating a 3D U-Net architecture with the Bat Algorithm for hyperparameter optimization. The method enhances segmentation accuracy and robustness by intelligently optimizing key parameters like the learning rate and batch size. Evaluated on a publicly available dataset, our model demonstrates a strong ability to balance precision and recall, with a high F1-score at lower prediction thresholds. This is particularly valuable for clinical diagnostics, where ensuring no potential tumors are missed is paramount. Our work contributes to the field of medical image analysis by demonstrating that the synergy between a robust deep learning architecture and a metaheuristic optimization algorithm can yield a highly effective solution for complex segmentation tasks.
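
A compact, simplified sketch of a Bat Algorithm search over learning rate and batch size is shown below; the objective function is a toy stand-in for training and validating the 3D U-Net, and the parameter ranges and update rules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(lr, batch_size):
    """Placeholder: would train the 3D U-Net and return 1 - validation F1-score."""
    return (np.log10(lr) + 3.5) ** 2 + 0.01 * (batch_size - 8) ** 2  # toy surface

# Search space: log10(learning rate) in [-5, -2], batch size in [2, 16].
lo, hi = np.array([-5.0, 2.0]), np.array([-2.0, 16.0])
n_bats, n_iter = 10, 30
pos = rng.uniform(lo, hi, size=(n_bats, 2))
vel = np.zeros_like(pos)
fitness = np.array([objective(10 ** p[0], round(p[1])) for p in pos])
best = pos[fitness.argmin()].copy()

alpha, gamma, loudness, pulse = 0.9, 0.9, 1.0, 0.5
for it in range(n_iter):
    freq = rng.uniform(0, 2, n_bats)[:, None]            # random frequency per bat
    vel += (pos - best) * freq
    cand = np.clip(pos + vel, lo, hi)
    # Local random walk around the current best, taken with probability (1 - pulse rate).
    walk = np.where(rng.random((n_bats, 1)) > pulse,
                    np.clip(best + 0.01 * rng.normal(size=(n_bats, 2)), lo, hi),
                    cand)
    new_fit = np.array([objective(10 ** p[0], round(p[1])) for p in walk])
    accept = (new_fit < fitness) & (rng.random(n_bats) < loudness)
    pos[accept], fitness[accept] = walk[accept], new_fit[accept]
    best = pos[fitness.argmin()].copy()
    loudness *= alpha                                    # loudness decays
    pulse = 0.5 * (1 - np.exp(-gamma * it))              # pulse rate increases

print("best lr:", 10 ** best[0], "best batch size:", round(best[1]))
```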

Using Machine Learning to Improve the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System Diagnosis of Hepatocellular Carcinoma in Indeterminate Liver Nodules.

Hoopes JR, Lyshchik A, Xiao TS, Berzigotti A, Fetzer DT, Forsberg F, Sidhu PS, Wessner CE, Wilson SR, Keith SW

PubMed · Aug 11, 2025
Liver cancer ranks among the most lethal cancers. Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer, and better tools are needed to diagnose patients at risk. The aim was to develop a machine learning algorithm that enhances the sensitivity and specificity of the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System (CEUS LI-RADS) in classifying indeterminate at-risk liver nodules (LR-M, LR-3, LR-4) as HCC or non-HCC. Our study included patients at risk for HCC with untreated indeterminate focal liver observations detected on US or contrast-enhanced CT or MRI performed as part of their clinical standard of care from January 2018 to November 2022. Recursive partitioning was used to improve HCC diagnosis in indeterminate at-risk nodules. Demographics, blood biomarkers, and CEUS imaging features were evaluated as potential predictors for the algorithm to classify nodules as HCC or non-HCC. We evaluated 244 indeterminate liver nodules from 224 patients (mean age 62.9 y); 73.2% (164/224) of the patients were male. The algorithm was trained on a random 2/3 partition of 163 liver nodules and correctly reclassified more than half of the HCC liver nodules previously categorized as indeterminate in the independent 1/3 test partition of 81 liver nodules, achieving a sensitivity of 56.3% (95% CI: 42.0%, 70.2%) and specificity of 93.9% (95% CI: 84.4%, 100.0%). Machine learning was applied to this multicenter, multinational study of CEUS LI-RADS indeterminate at-risk liver nodules and correctly diagnosed HCC in more than half of the HCC nodules.
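
Recursive partitioning of the kind described can be sketched with a single decision tree and a 2/3-1/3 split, as below; the predictor table, tree depth, and file name are hypothetical and not the study's actual model.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical predictor table: demographics, blood biomarkers, CEUS features, and HCC label.
df = pd.read_csv("ceus_lirads_indeterminate.csv")
X, y = df.drop(columns=["hcc"]), df["hcc"]

# 2/3 training, 1/3 test split, mirroring the partition sizes in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, stratify=y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=0)
tree.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))   # proportion of HCC nodules correctly reclassified
print("specificity:", tn / (tn + fp))
```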

Prediction of cervical cancer lymph node metastasis based on multisequence magnetic resonance imaging radiomics and deep learning features: a dual-center study.

Luo S, Guo Y, Ye Y, Mu Q, Huang W, Tang G

PubMed · Aug 10, 2025
Cervical cancer is a leading cause of death from malignant tumors in women, and accurate evaluation of occult lymph node metastasis (OLNM) is crucial for optimal treatment. This study aimed to develop several predictive models for assessing the risk of OLNM in cervical cancer patients, including a Clinical model, Radiomics models (RD), Deep Learning models (DL), Radiomics-Deep Learning fusion models (RD-DL), and a combined Clinical-RD-DL model. The study included 130 patients from Center 1 (training set) and 55 from Center 2 (test set). Clinical data and imaging sequences (T1, T2, and DWI) were used to extract features for model construction. Model performance was assessed using the DeLong test, and SHAP analysis was used to examine feature contributions. Results showed that both the RD-combined (AUC = 0.803) and DL-combined (AUC = 0.818) models outperformed the single-sequence models as well as the standalone Clinical model (AUC = 0.702). The RD-DL model yielded the highest performance, achieving an AUC of 0.981 in the training set and 0.903 in the test set. Notably, integrating clinical variables did not further improve predictive performance; the Clinical-RD-DL model performed comparably to the RD-DL model. SHAP analysis showed that deep learning features had the greatest impact on model predictions. Both the RD and DL models effectively predict OLNM, and the RD-DL model offers superior performance. These findings provide a rapid, non-invasive method for clinical prediction.
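
A minimal sketch of feature-level radiomics-deep learning fusion feeding a classifier is shown below; the feature files, split, and classifier choice are assumptions for illustration, not the published RD-DL pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient feature matrices extracted from T1, T2, and DWI sequences:
# radiomics features (e.g., from PyRadiomics) and deep features (e.g., a CNN encoder).
rd_feats = np.load("radiomics_features.npy")   # shape (n_patients, n_rd)
dl_feats = np.load("deep_features.npy")        # shape (n_patients, n_dl)
y = np.load("olnm_labels.npy")                 # 1 = occult lymph node metastasis

# Feature-level fusion: concatenate radiomics and deep-learning features.
X = np.hstack([rd_feats, dl_feats])

clf = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", max_iter=2000))
clf.fit(X[:130], y[:130])                      # first 130 cases as a stand-in training set
auc = roc_auc_score(y[130:], clf.predict_proba(X[130:])[:, 1])
print("test AUC:", auc)
```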