
Enhancing weakly supervised data augmentation networks for thyroid nodule assessment using traditional and Doppler ultrasound images.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

Jun 30 2025
Thyroid ultrasound (US) is an essential tool for detecting and characterizing thyroid nodules. In this study, we propose an innovative approach to enhance thyroid nodule assessment by integrating Doppler US images with grayscale US images through weakly supervised data augmentation networks (WSDAN). Our method reduces background noise by replacing inefficient augmentation strategies, such as random cropping, with an advanced technique guided by bounding boxes derived from Doppler US images. This targeted augmentation significantly improves model performance in both classification and localization of thyroid nodules. The training dataset comprises 1288 paired grayscale and Doppler US images, with an additional 190 pairs used for three-fold cross-validation. To evaluate the model's efficacy, we tested it on a separate set of 190 grayscale US images. Compared to five state-of-the-art models and the original WSDAN, our Enhanced WSDAN model achieved superior performance. For classification, it reached an accuracy of 91%. For localization, it achieved Dice and Jaccard indices of 75% and 87%, respectively, demonstrating its potential as a valuable clinical tool.
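
The augmentation step described here, replacing unconstrained random crops with crops guided by bounding boxes derived from the paired Doppler frame, can be illustrated with a short sketch. This is not the authors' implementation: the `doppler_guided_crop` function name, the `(y_min, x_min, y_max, x_max)` box format, and the margin/jitter parameters are illustrative assumptions, and detecting the box from the Doppler color-flow signal is taken as given.

```python
import numpy as np

def doppler_guided_crop(gray_img: np.ndarray,
                        bbox: tuple[int, int, int, int],
                        margin: float = 0.2,
                        rng: np.random.Generator | None = None) -> np.ndarray:
    """Crop a grayscale US frame around a nodule bounding box.

    `bbox` is (y_min, x_min, y_max, x_max) in pixel coordinates, assumed to
    come from the paired Doppler frame. A small random jitter keeps some
    augmentation diversity while avoiding the background-heavy patches that
    unconstrained random cropping tends to produce.
    """
    rng = rng or np.random.default_rng()
    h, w = gray_img.shape[:2]
    y0, x0, y1, x1 = bbox
    pad_y = int((y1 - y0) * margin)
    pad_x = int((x1 - x0) * margin)
    # Jitter the expanded window by up to half the margin in each direction.
    jy = rng.integers(-pad_y // 2, pad_y // 2 + 1)
    jx = rng.integers(-pad_x // 2, pad_x // 2 + 1)
    top = np.clip(y0 - pad_y + jy, 0, h - 1)
    left = np.clip(x0 - pad_x + jx, 0, w - 1)
    bottom = np.clip(y1 + pad_y + jy, top + 1, h)
    right = np.clip(x1 + pad_x + jx, left + 1, w)
    return gray_img[top:bottom, left:right]
```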

Enhanced abdominal multi-organ segmentation with 3D UNet and UNet++ deep neural networks utilizing the MONAI framework.

Tejashwini PS, Thriveni J, Venugopal KR

Jun 30 2025
Accurate segmentation of abdominal organs is a primary requirement for medical analysis and treatment planning. In this study, we propose an approach based on 3D UNet and UNet++ architectures implemented in the MONAI framework to address challenges arising from anatomical variability, complex organ shapes, and noise in CT/MRI scans. The models analyze volumetric data in three dimensions, make use of skip and dense connections, and optimize their parameters using Secretary Bird Optimization (SBO), which together improve feature extraction and boundary delineation of the structures of interest across multi-organ tissue sets. The models' performance was evaluated on multiple datasets: Pancreas-CT, Liver-CT, and BTCV. On the Pancreas-CT dataset, 3D UNet achieved a DSC of 94.54%, while 3D UNet++ achieved a slightly higher DSC of 95.62%. Both models performed well on the Liver-CT dataset, with 3D UNet attaining a DSC of 95.67% and 3D UNet++ a DSC of 97.36%. On the BTCV dataset, both models had DSC values ranging from 93.42% to 95.31%. These results demonstrate the robustness and efficiency of the presented models for clinical applications and medical research in multi-organ segmentation. The study validates the proposed architectures, underscoring their accuracy in medical imaging and opening avenues for scalable solutions to complex abdominal imaging tasks.
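
For readers unfamiliar with MONAI, the following is a minimal sketch of how a 3D UNet training step is typically instantiated with MONAI's public API (`monai.networks.nets.UNet`, `monai.losses.DiceCELoss`). The channel sizes, label count, patch size, and learning rate are illustrative assumptions; the UNet++ variant and the SBO-based parameter optimization described in the abstract are not reproduced here.

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceCELoss

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 3D UNet for multi-organ segmentation (background + organ labels).
num_classes = 14  # hypothetical label count; BTCV-style setups use 13 organs + background
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=num_classes,
    channels=(16, 32, 64, 128, 256),  # feature maps per resolution level
    strides=(2, 2, 2, 2),             # downsampling factors between levels
    num_res_units=2,
).to(device)

loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One optimization step on random stand-ins for a CT patch and its label map.
image = torch.randn(1, 1, 96, 96, 96, device=device)                    # (B, C, D, H, W)
label = torch.randint(0, num_classes, (1, 1, 96, 96, 96), device=device)

optimizer.zero_grad()
logits = model(image)
loss = loss_fn(logits, label)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```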

Bidirectional Prototype-Guided Consistency Constraint for Semi-Supervised Fetal Ultrasound Image Segmentation.

Lyu C, Han K, Liu L, Chen J, Ma L, Pang Z, Liu Z

Jun 30 2025
Fetal ultrasound (US) image segmentation plays an important role in fetal development assessment, maternal pregnancy management, and intrauterine surgery planning. However, obtaining large-scale, accurately annotated fetal US imaging data is time-consuming and labor-intensive, posing challenges to the application of deep learning in this field. To address this challenge, we propose a semi-supervised fetal US image segmentation method based on a bidirectional prototype-guided consistency constraint (BiPCC). BiPCC utilizes prototypes to bridge labeled and unlabeled data and establish interaction between them. Specifically, the model generates pseudo-labels using prototypes from labeled data and then utilizes these pseudo-labels to generate pseudo-prototypes for segmenting the labeled data in the reverse direction, thereby achieving bidirectional consistency. Additionally, uncertainty-based cross-supervision is incorporated to provide additional supervision signals, enhancing the quality of the pseudo-labels. Extensive experiments on two fetal US datasets demonstrate that BiPCC outperforms state-of-the-art methods for semi-supervised fetal US segmentation. Furthermore, experimental results on two additional medical segmentation datasets exhibit BiPCC's outstanding generalization capability for diverse medical image segmentation tasks. Our proposed method offers novel insight into semi-supervised fetal US image segmentation and holds promise for further advancing the development of intelligent healthcare.
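
The prototype mechanism that such methods build on can be sketched in two functions: class prototypes obtained by masked average pooling over labeled feature maps, and soft pseudo-labels obtained from cosine similarity between unlabeled features and those prototypes. The function names, tensor shapes, and temperature value are assumptions for illustration, not the BiPCC implementation.

```python
import torch
import torch.nn.functional as F

def masked_average_prototypes(feats: torch.Tensor,
                              labels: torch.Tensor,
                              num_classes: int) -> torch.Tensor:
    """Compute one prototype per class by masked average pooling.

    feats:  (B, C, H, W) feature maps from the labeled branch.
    labels: (B, H, W) integer masks, assumed resized to the feature resolution.
    Returns (num_classes, C) prototype vectors.
    """
    b, c, h, w = feats.shape
    flat_feats = feats.permute(0, 2, 3, 1).reshape(-1, c)       # (B*H*W, C)
    flat_labels = labels.reshape(-1)                            # (B*H*W,)
    protos = []
    for k in range(num_classes):
        mask = (flat_labels == k).float().unsqueeze(1)          # (B*H*W, 1)
        denom = mask.sum().clamp(min=1.0)                       # avoid divide-by-zero
        protos.append((flat_feats * mask).sum(dim=0) / denom)   # (C,)
    return torch.stack(protos, dim=0)                           # (K, C)

def prototype_pseudo_labels(feats: torch.Tensor,
                            prototypes: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """Soft pseudo-labels for unlabeled features via cosine similarity to prototypes."""
    b, c, h, w = feats.shape
    f = F.normalize(feats.permute(0, 2, 3, 1).reshape(-1, c), dim=1)       # (B*H*W, C)
    p = F.normalize(prototypes, dim=1)                                      # (K, C)
    logits = f @ p.t() / temperature                                        # (B*H*W, K)
    return logits.softmax(dim=1).reshape(b, h, w, -1).permute(0, 3, 1, 2)   # (B, K, H, W)
```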

Predicting Crohn's Disease Activity Using Computed Tomography Enterography-Based Radiomics and Serum Markers.

Wang P, Liu Y, Wang Y

Jun 30 2025
Accurate stratification of Crohn's disease (CD) activity using computed tomography enterography (CTE) radiomics and serum markers can aid in predicting disease progression and assist physicians in personalizing therapeutic regimens for patients with CD. This retrospective study enrolled 233 patients diagnosed with CD between January 2019 and August 2024. Patients were divided into training and testing cohorts at a ratio of 7:3 and further categorized into remission, mild active phase, and moderate-severe active phase groups based on the simple endoscopic score for CD (SEC-CD). Radiomics features were extracted from venous-phase CTE images, and t-tests and least absolute shrinkage and selection operator (LASSO) regression were applied for feature selection. Serum markers were selected based on analysis of variance. We also developed a random forest (RF) model for multi-class stratification of CD activity. Model performance was evaluated by the area under the receiver operating characteristic curve (AUC), and the contribution of each feature to CD activity was quantified via Shapley additive exPlanations (SHAP) values. Finally, we incorporated gender, radiomics scores, and serum scores into a nomogram model to verify the effectiveness of feature extraction. Fourteen radiomics features with non-zero coefficients and six serum markers with significant differences (P<0.01) were ultimately selected to predict CD activity. The AUC (micro/macro) of the ensemble machine learning model combining radiomics features and serum markers was 0.931/0.928 for the three-class task. The AUCs for the remission, mild active, and moderate-severe active phases were 0.983, 0.852, and 0.917, respectively. The mean AUC for the nomogram model was 0.940. A model integrating radiomics and serum markers of CD patients was developed, achieving enhanced consistency with SEC-CD in grading CD activity. This model has the potential to assist clinicians in accurate diagnosis and treatment.
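
The general feature-selection-plus-classifier pattern described here can be sketched with scikit-learn: an L1-penalized selector as a stand-in for LASSO screening, followed by a random forest scored with a multi-class (one-vs-rest) AUC. The synthetic data, the `max_features` cap, and all hyperparameters are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# Stand-in data: rows are patients, columns are CTE radiomics features plus serum
# markers; y encodes the three activity classes (0 = remission, 1 = mild, 2 = moderate-severe).
rng = np.random.default_rng(0)
X = rng.normal(size=(233, 120))
y = rng.integers(0, 3, size=233)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# L1-penalized selection as a stand-in for LASSO feature screening; keep the top 20 features.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000),
    max_features=20, threshold=-np.inf,
).fit(X_train_s, y_train)
X_train_sel = selector.transform(X_train_s)
X_test_sel = selector.transform(X_test_s)

clf = RandomForestClassifier(n_estimators=500, random_state=42)
clf.fit(X_train_sel, y_train)

proba = clf.predict_proba(X_test_sel)
print("macro one-vs-rest AUC:",
      round(roc_auc_score(y_test, proba, multi_class="ovr", average="macro"), 3))
```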

Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies.

Umapathy L, Johnson PM, Dutt T, Tong A, Chopra S, Sodickson DK, Chandarana H

Jun 30 2025
Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists were confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa; n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations and compared them with the performance of radiologists and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. On the two test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUCs=0.73 [0.66, 0.79] and 0.88 [0.85, 0.91]) compared with radiologists (equivocal; AUC=0.79 [0.75, 0.83]) and the clinical model (AUCs=0.69 [0.62, 0.75] and 0.78 [0.74, 0.82]). In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies compared with the clinical model (26%, P<0.001) and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, on the all-PI-RADS cohort, the RL decision model avoided 27% of additional benign biopsies (138/231) compared with radiologists (33%, P<0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). The combination of the clinical and RL decision models further avoided benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yields (0.52, 0.76) across the two test cohorts. Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that can provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa for the all-PI-RADS cohort. Such AI models can easily be integrated into clinical practice to supplement radiologists' reads in general and improve biopsy yield for any equivocal decisions.
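
The two-stage idea, pretraining a representation on confidently scored exams (PI-RADS 1-2 vs. 4-5) and then fitting a lightweight biopsy-decision model on the frozen representations of a biopsy cohort, can be sketched as follows. The toy encoder, the random stand-in tensors, and the logistic-regression decision model are illustrative assumptions; the study's actual architecture, training procedure, and cohorts are not reproduced here.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stage 1: representation learner trained on "confident" exams
# (PI-RADS 1-2 labeled 0, PI-RADS 4-5 labeled 1); PI-RADS 3 exams are held out.
encoder = nn.Sequential(                         # toy 2-channel (T2w + DWI) encoder
    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())       # -> 32-dim representation
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(64, 2, 128, 128)            # stand-in bi-parametric slices
pirads_label = torch.randint(0, 2, (64, 1)).float()
for _ in range(5):                               # a few toy optimization steps
    opt.zero_grad()
    loss = loss_fn(head(encoder(images)), pirads_label)
    loss.backward()
    opt.step()

# Stage 2: freeze the encoder and fit a biopsy-decision model on its representations
# for a biopsy cohort with Gleason >= 7 outcomes (random stand-ins here).
encoder.eval()
with torch.no_grad():
    reps = encoder(torch.randn(100, 2, 128, 128)).numpy()
cspca = torch.randint(0, 2, (100,)).numpy()
decision_model = LogisticRegression(max_iter=1000).fit(reps[:70], cspca[:70])
auc = roc_auc_score(cspca[70:], decision_model.predict_proba(reps[70:])[:, 1])
print(f"held-out AUC on stand-in data: {auc:.2f}")
```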

CMT-FFNet: A CMT-based feature-fusion network for predicting TACE treatment response in hepatocellular carcinoma.

Wang S, Zhao Y, Cai X, Wang N, Zhang Q, Qi S, Yu Z, Liu A, Yao Y

Jun 30 2025
Accurately and preoperatively predicting tumor response to transarterial chemoembolization (TACE) is crucial for individualized treatment decision-making in hepatocellular carcinoma (HCC). In this study, we propose a novel feature-fusion network based on the Convolutional Neural Networks Meet Vision Transformers (CMT) architecture, termed CMT-FFNet, to predict TACE efficacy using preoperative multiphase magnetic resonance imaging (MRI) scans. CMT-FFNet combines local feature extraction with global dependency modeling through attention mechanisms, enabling the extraction of complementary information from multiphase MRI data. Additionally, we introduce an orthogonality loss to optimize the fusion of imaging and clinical features, further enhancing the complementarity of cross-modal features. Moreover, visualization techniques were employed to highlight key regions contributing to model decisions. Extensive experiments were conducted to evaluate the effectiveness of the proposed modules and network architecture. The results demonstrate that our model effectively captures latent correlations among features extracted from multiphase MRI data and multimodal inputs, significantly improving the prediction of TACE treatment response in HCC patients.
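
The abstract does not give the exact form of the orthogonality loss, so the sketch below shows one common formulation: penalizing the cross-correlation between normalized imaging and clinical feature embeddings so that the two branches carry complementary information. The function name, feature widths, and batch size are assumptions, not the CMT-FFNet definition.

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(img_feats: torch.Tensor, clin_feats: torch.Tensor) -> torch.Tensor:
    """One common formulation of an orthogonality penalty between two feature sets.

    img_feats:  (B, D) imaging features (e.g. pooled multiphase MRI embeddings).
    clin_feats: (B, D) clinical-feature embeddings projected to the same width.
    Penalizes the squared Frobenius norm of the cross-correlation matrix, which
    pushes the two representations toward encoding complementary information.
    """
    img = F.normalize(img_feats, dim=1)
    clin = F.normalize(clin_feats, dim=1)
    cross = img.t() @ clin                  # (D, D) cross-correlation matrix
    return (cross ** 2).sum() / img_feats.shape[0]

# Example: this term would typically be added to the main task loss during training.
img_feats = torch.randn(8, 64, requires_grad=True)
clin_feats = torch.randn(8, 64, requires_grad=True)
loss = orthogonality_loss(img_feats, clin_feats)
loss.backward()
print(f"orthogonality penalty: {loss.item():.4f}")
```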

Automated Evaluation of Female Pelvic Organ Descent on Transperineal Ultrasound: Model Development and Validation.

Wu S, Wu J, Xu Y, Tan J, Wang R, Zhang X

Jun 28 2025
Transperineal ultrasound (TPUS) is a widely used tool for evaluating female pelvic organ prolapse (POP), but its accurate interpretation relies on experience, causing diagnostic variability. This study aims to develop and validate a multi-task deep learning model to automate POP assessment using TPUS images. TPUS images from 1340 female patients (January-June 2023) were evaluated by two experienced physicians. The presence and severity of cystocele, uterine prolapse, rectocele, and excessive mobility of the perineal body (EMoPB) were documented. After preprocessing, 1072 images were used for training and 268 for validation. The model used ResNet34 as the feature extractor and four parallel fully connected layers to predict the four conditions. Model performance was assessed using confusion matrices and the area under the curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the model's focus areas. The model demonstrated strong diagnostic performance, with accuracies and AUC values as follows: cystocele, 0.869 (95% CI, 0.824-0.905) and 0.947 (95% CI, 0.930-0.962); uterine prolapse, 0.799 (95% CI, 0.746-0.842) and 0.931 (95% CI, 0.911-0.948); rectocele, 0.978 (95% CI, 0.952-0.990) and 0.892 (95% CI, 0.849-0.927); and EMoPB, 0.869 (95% CI, 0.824-0.905) and 0.942 (95% CI, 0.907-0.967). Grad-CAM heatmaps revealed that the model's focus areas were consistent with those observed by human experts. This study presents a multi-task deep learning model for automated POP assessment using TPUS images, showing promising efficacy and potential to benefit a broader population of women.
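
A minimal sketch of the described architecture pattern, a shared ResNet34 feature extractor with four parallel fully connected heads, using torchvision. The single-channel input convolution, the binary heads (severity grading would need more classes per head), and the summed cross-entropy loss are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class MultiTaskPOPNet(nn.Module):
    """ResNet34 backbone with four parallel heads, one per pelvic-floor finding."""

    def __init__(self, classes_per_task=(2, 2, 2, 2)):
        super().__init__()
        backbone = resnet34(weights=None)
        # Accept single-channel (grayscale) TPUS input instead of RGB.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()                      # expose the 512-dim feature vector
        self.backbone = backbone
        # One head each for cystocele, uterine prolapse, rectocele, and EMoPB.
        self.heads = nn.ModuleList([nn.Linear(512, n) for n in classes_per_task])

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]

model = MultiTaskPOPNet()
logits = model(torch.randn(4, 1, 224, 224))              # batch of 4 single-channel images
targets = [torch.randint(0, 2, (4,)) for _ in range(4)]  # stand-in labels per task
loss = sum(nn.functional.cross_entropy(l, t) for l, t in zip(logits, targets))
loss.backward()
```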

Prognostic value of body composition derived from PSMA-PET/CT in prostate cancer patients undergoing PSMA therapy.

Roll W, Plagwitz L, Ventura D, Masthoff M, Backhaus C, Varghese J, Rahbar K, Schindler P

Jun 28 2025
This retrospective study aims to develop a deep learning-based approach to whole-body CT segmentation from standard PSMA-PET/CT in order to assess body composition in metastatic castration-resistant prostate cancer (mCRPC) patients prior to [177Lu]Lu-PSMA radioligand therapy (RLT). Our goal is to go beyond standard PSMA-PET-based pretherapeutic assessment and identify additional body composition metrics from the CT component with potential prognostic value. We used a deep learning segmentation model to perform fully automated segmentation of different tissue compartments, including visceral (VAT), subcutaneous (SAT), and intra-/intermuscular adipose tissue (IMAT), from [68Ga]Ga-PSMA-PET/CT scans of n = 86 prostate cancer patients before RLT. The proportions of the adipose tissue compartments relative to total adipose tissue (TAT), assessed either on a 3D CT volume of the abdomen or on a 2D single slice centered at the third lumbar vertebra (L3), were compared for their prognostic value. First, univariate and multivariate Cox proportional hazards regression analyses were performed. Subsequently, the subjects were dichotomized at the median tissue composition, and these subgroups were evaluated by Kaplan-Meier analysis with the log-rank test. The automated segmentation model was useful for delineating different adipose tissue compartments and skeletal muscle across different patient anatomies. Analyses revealed significant associations between lower SAT and higher IMAT ratios and poorer therapeutic outcomes in Cox regression analysis (SAT/TAT: p = 0.038; IMAT/TAT: p < 0.001) in the 3D model. In the single-slice approach, only IMAT/SAT was significantly associated with survival in Cox regression analysis (p < 0.001; SAT/TAT: p > 0.05). The IMAT ratio remained an independent predictor of survival in multivariate analysis including PSMA-PET and blood-based prognostic factors. In this proof-of-principle study, the implementation of a deep learning-based whole-body analysis provides a robust and detailed CT-based assessment of body composition in mCRPC patients undergoing RLT. The potential prognostic parameters have to be corroborated in larger prospective datasets.
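
The survival analyses described here (univariate Cox regression, median dichotomization, Kaplan-Meier curves with a log-rank test) follow a standard pattern that can be sketched with the lifelines library. The data frame below uses random stand-in values, and the column names are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Stand-in per-patient table: adipose-tissue ratios from the CT segmentation
# plus overall-survival time (months) and an event indicator.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "imat_tat": rng.uniform(0.05, 0.4, 86),
    "sat_tat": rng.uniform(0.2, 0.7, 86),
    "os_months": rng.exponential(18, 86),
    "event": rng.integers(0, 2, 86),
})

# Univariate Cox model for the IMAT/TAT ratio.
cph = CoxPHFitter()
cph.fit(df[["imat_tat", "os_months", "event"]], duration_col="os_months", event_col="event")
cph.print_summary()

# Dichotomize at the median and compare the two groups with Kaplan-Meier + log-rank.
high = df["imat_tat"] >= df["imat_tat"].median()
km = KaplanMeierFitter()
km.fit(df.loc[high, "os_months"], df.loc[high, "event"], label="high IMAT/TAT")
print("median survival (high IMAT/TAT):", km.median_survival_time_)

result = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                      df.loc[high, "event"], df.loc[~high, "event"])
print("log-rank p-value:", result.p_value)
```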

Non-contrast computed tomography radiomics model to predict benign and malignant thyroid nodules with lobe segmentation: A dual-center study.

Wang H, Wang X, Du YS, Wang Y, Bai ZJ, Wu D, Tang WL, Zeng HL, Tao J, He J

Jun 28 2025
Accurate preoperative differentiation of benign and malignant thyroid nodules is critical for optimal patient management. However, conventional imaging modalities present inherent diagnostic limitations. This study aimed to develop a non-contrast computed tomography-based machine learning model integrating radiomics and clinical features for preoperative thyroid nodule classification. This multicenter retrospective study enrolled 272 patients with thyroid nodules (376 thyroid lobes) from center A (May 2021-April 2024), using histopathological findings as the reference standard. The dataset was stratified into a training cohort (264 lobes) and an internal validation cohort (112 lobes). Additional prospective temporal (97 lobes, May-August 2024, center A) and external multicenter (81 lobes, center B) test cohorts were incorporated to enhance generalizability. Thyroid lobes were segmented along the isthmus midline, with segmentation reliability confirmed by an intraclass correlation coefficient (≥ 0.80). Radiomics features were selected using Pearson correlation analysis followed by least absolute shrinkage and selection operator regression with 10-fold cross-validation. Seven machine learning algorithms were systematically evaluated, with model performance quantified through the area under the receiver operating characteristic curve (AUC), Brier score, decision curve analysis, and the DeLong test for comparison with radiologists' interpretations. Model interpretability was elucidated using SHapley Additive exPlanations (SHAP). The extreme gradient boosting model demonstrated robust diagnostic performance across all datasets, achieving AUCs of 0.899 [95% confidence interval (CI): 0.845-0.932] in the training cohort, 0.803 (95% CI: 0.715-0.890) in internal validation, 0.855 (95% CI: 0.775-0.935) in temporal testing, and 0.802 (95% CI: 0.664-0.939) in external testing. These results were significantly superior to radiologists' assessments (AUCs: 0.596, 0.529, 0.558, and 0.538, respectively; P < 0.001 by DeLong test). SHAP analysis identified the radiomics score, age, tumor size stratification, calcification status, and cystic components as key predictive features. The model exhibited excellent calibration (Brier scores: 0.125-0.144) and provided significant clinical net benefit at decision thresholds exceeding 20%, as evidenced by decision curve analysis. The non-contrast computed tomography-based radiomics-clinical fusion model enables robust preoperative thyroid nodule classification, with SHAP-driven interpretability enhancing its clinical applicability for personalized decision-making.
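
The evaluation pattern described for the extreme gradient boosting model can be sketched as follows: fit an XGBoost classifier, report AUC and Brier score, and inspect per-feature contributions with SHAP's TreeExplainer. The synthetic features, labels, and hyperparameters are placeholders, not the study's data or tuned settings.

```python
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

# Stand-in design matrix: a radiomics score plus a few clinical variables per thyroid lobe,
# with 1 = malignant and 0 = benign as the histopathological label.
rng = np.random.default_rng(7)
X = rng.normal(size=(376, 6))
y = (X[:, 0] + 0.5 * rng.normal(size=376) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, proba), 3))
print("Brier score:", round(brier_score_loss(y_te, proba), 3))

# SHAP values quantify each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))
```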

White Box Modeling of Self-Determined Sequence Exercise Program Among Sarcopenic Older Adults: Uncovering a Novel Strategy Overcoming Decline of Skeletal Muscle Area.

Wei M, He S, Meng D, Lv Z, Guo H, Yang G, Wang Z

Jun 27 2025
Resistance exercise, Taichi exercise, and a hybrid program combining the two have been demonstrated to increase skeletal muscle mass in older individuals with sarcopenia. However, the exercise sequence has not been comprehensively investigated. Therefore, we designed a self-determined sequence exercise program, incorporating resistance exercise, Taichi, and the hybrid program, to overcome the decline of skeletal muscle area and reverse sarcopenia in older individuals. Ninety-one older patients with sarcopenia between the ages of 60 and 75 completed this 24-week, three-stage randomized controlled trial, comprising the self-determined sequence exercise program group (n = 31), the resistance training group (n = 30), and the control group (n = 30). We used quantitative computed tomography to measure the effects of the different intervention protocols on participants' skeletal muscle mass. Participants' demographic variables were analyzed using one-way analysis of variance and chi-square tests, and experimental data were examined using repeated-measures analysis of variance. Furthermore, we utilized a Markov model to explain the effectiveness of the exercise programs across the three-stage intervention and explainable artificial intelligence to predict whether the intervention programs can reverse sarcopenia. Repeated-measures analysis of variance indicated statistically significant Group × Time interactions for L3 skeletal muscle density, L3 skeletal muscle area, muscle fat infiltration, handgrip strength, and relative skeletal muscle mass index. The stacking model exhibited the best accuracy (84.5%) and the best F1-score (68.8%) compared with other algorithms. In the self-determined sequence exercise program group, strength training contributed most to the reversal of sarcopenia. A self-determined sequence exercise program can improve skeletal muscle area among older people with sarcopenia, and based on our stacking model we can accurately predict whether sarcopenia in older people can be reversed. The trial was registered at ClinicalTrials.gov (TRN: NCT05694117). Our findings indicate that such tailored exercise interventions can substantially benefit sarcopenic patients, and our stacking model provides an accurate predictive tool for assessing the reversibility of sarcopenia in older adults. This approach not only enhances individual health outcomes but also informs the future development of targeted exercise programs to mitigate age-related muscle decline.
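
A minimal sketch of a stacking classifier of the kind reported here, using scikit-learn's StackingClassifier with a logistic-regression meta-learner and cross-validated accuracy. The base learners, the synthetic data, and the outcome definition (1 = sarcopenia reversed, 0 = not reversed) are illustrative assumptions rather than the study's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in tabular data: baseline QCT and strength measures as predictors,
# with 1 = sarcopenia reversed after the intervention and 0 = not reversed.
X, y = make_classification(n_samples=91, n_features=8, n_informative=5, random_state=3)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=3)),
        ("gb", GradientBoostingClassifier(random_state=3)),
        ("svm", SVC(probability=True, random_state=3)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions feed the meta-learner
)

scores = cross_val_score(stack, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```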