Attention-enhanced residual U-Net: lymph node segmentation method with bimodal MRI images.

Qiu J, Chen C, Li M, Hong J, Dong B, Xu S, Lin Y

PubMed | Jun 2 2025
In medical images, lymph nodes (LNs) have fuzzy boundaries, diverse shapes and sizes, and structures similar to surrounding tissues. To automatically segment uterine LNs from sagittal magnetic resonance imaging (MRI) scans, we combined T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images and tested the final results in our proposed model. This study used a dataset of 158 MRI images from FIGO-staged patients with pathologically confirmed LNs. To improve the robustness of the model, data augmentation was applied to expand the dataset. The training data were manually annotated by two experienced radiologists. The DWI and T2 images were fused and input into a U-Net. An efficient channel attention (ECA) module was added to the U-Net, and residual connections were added to the encoding-decoding stages; the resulting network, named Efficient Residual U-Net (ERU-Net), produced the final segmentation results, evaluated by mean intersection-over-union (mIoU). The experimental results demonstrated that ERU-Net showed strong segmentation performance, significantly better than other segmentation networks. The mIoU reached 0.83, and the average pixel accuracy was 0.91. In addition, the precision was 0.90, and the corresponding recall was 0.91. In this study, ERU-Net successfully segmented LNs in uterine MRI images. Compared with other segmentation networks, our network achieved the best segmentation of uterine LNs, providing a valuable reference for doctors developing more effective and efficient treatment plans.
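
The abstract's two key architectural ingredients, efficient channel attention and residual encoding-decoding blocks, combine naturally in code. Below is a minimal PyTorch sketch of one plausible ERU-Net building block; the class names, kernel sizes, and the 2-channel fused T2WI+DWI input are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: a 1D conv over channel-wise pooled descriptors."""
    def __init__(self, channels, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (B, C, H, W) -> channel descriptor (B, C, 1, 1)
        y = self.pool(x)
        # treat channels as a 1D sequence: (B, 1, C) -> conv -> back to (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        return x * self.sigmoid(y)

class ECAResBlock(nn.Module):
    """Residual conv block with ECA, one plausible ERU-Net building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.eca = ECA(out_ch)
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.eca(self.body(x)) + self.skip(x))

# Fused T2WI+DWI input could enter as a 2-channel image, e.g. ECAResBlock(2, 64).
```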

Multimodal Artificial Intelligence Using Endoscopic USG, CT, and MRI to Differentiate Between Serous and Mucinous Cystic Neoplasms.

Seza K, Tawada K, Kobayashi A, Nakamura K

PubMed | Jun 1 2025
Introduction Serous cystic neoplasms (SCN) and mucinous cystic neoplasms (MCN) often exhibit similar imaging features when evaluated with a single imaging modality. Differentiating between SCN and MCN typically necessitates multiple imaging techniques, including computed tomography (CT), magnetic resonance imaging (MRI), and endoscopic ultrasonography (EUS). Recent research indicates that artificial intelligence (AI) can effectively distinguish between SCN and MCN using single-modal imaging. Despite these advancements, the diagnostic performance of AI has not yet reached an optimal level. This study compares the efficacy of AI in classifying SCN and MCN using multimodal versus single-modal imaging. The objective was to assess the effectiveness of AI utilizing multimodal imaging with EUS, CT, and MRI to classify these two types of pancreatic cysts. Methods We retrospectively gathered data from 25 patients with surgically confirmed SCN and 24 patients with surgically confirmed MCN as part of a multicenter study. Imaging was conducted using four modalities: EUS, early-phase contrast-enhanced abdominal CT, T2-weighted MRI, and magnetic resonance pancreatography. Four images per modality were obtained for each tumor. Data augmentation techniques were utilized, resulting in a final dataset of 39,200 images per modality. An AI model based on ResNet was employed to categorize the cysts as SCN or MCN, incorporating clinical features and combinations of imaging modalities (single, double, triple, and all four modalities). The classification outcomes were compared with those of five gastroenterologists, each with over 10 years of experience. The comparison was based on three performance metrics: sensitivity, specificity, and accuracy. Results For AI utilizing a single imaging modality, the sensitivity, specificity, and accuracy were 87.0%, 92.7%, and 90.8%, respectively. Combining two imaging modalities improved the sensitivity, specificity, and accuracy to 95.3%, 95.1%, and 94.9%, respectively. With three modalities, AI achieved a sensitivity of 96.0%, a specificity of 99.0%, and an accuracy of 97.0%. Employing all four imaging modalities, AI achieved 98.0% sensitivity, 100% specificity, and 99.0% accuracy. In contrast, experts utilizing all four modalities attained a sensitivity of 78.0%, a specificity of 82.0%, and an accuracy of 81.0%. The AI models consistently outperformed the experts across all metrics, and performance improved with each additional imaging modality, with AI utilizing three and four modalities significantly surpassing single-modal imaging AI. Conclusion AI utilizing multimodal imaging offers better performance than both single-modal imaging AI and experienced human experts in classifying SCN and MCN.
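
As a rough sketch of how a ResNet-based classifier might consume several modalities at once, one common design is late fusion: one backbone per modality, with pooled features concatenated before a shared linear head. The paper does not specify its fusion scheme, so everything below (torchvision ResNet-18 backbones, feature concatenation) is an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LateFusionClassifier(nn.Module):
    """One ResNet backbone per modality; pooled features are concatenated
    and passed to a linear head for binary SCN-vs-MCN classification."""
    def __init__(self, n_modalities=4, n_classes=2):
        super().__init__()
        self.backbones = nn.ModuleList()
        for _ in range(n_modalities):
            net = resnet18(weights=None)
            net.fc = nn.Identity()          # keep the 512-d pooled feature
            self.backbones.append(net)
        self.head = nn.Linear(512 * n_modalities, n_classes)

    def forward(self, images):
        # images: list of (B, 3, H, W) tensors, one per modality
        # (e.g. EUS, CT, T2-weighted MRI, MR pancreatography)
        feats = [net(x) for net, x in zip(self.backbones, images)]
        return self.head(torch.cat(feats, dim=1))
```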

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

PubMed | Jun 1 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer using a combination of radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients involved demographic data, radiomics, O-RADS standardized descriptions, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and Pyradiomics, with feature selection performed using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features, with 1049 features initially extracted and 7 features selected through regression analysis. Multimodal features such as ascites, fallopian tube invasion, greatest diameter, and HE4 and D-dimer levels were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively. The nomogram displayed superior diagnostic performance compared to single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment. This multimodal approach offers a potential strategy for improving patient outcomes in the management of ovarian cancer with PM.
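
The LASSO-then-logistic-regression pipeline described here is standard enough to sketch. The snippet below uses scikit-learn with synthetic stand-in data: the feature count mirrors the 1049 extracted features, but the data, sample size, and hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the real cohort: 200 patients, 1049 radiomics features, PM label.
X, y = make_classification(n_samples=200, n_features=1049, n_informative=10,
                           random_state=0)

# LASSO shrinks most coefficients to exactly zero; the survivors are the
# selected radiomics signature.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0)).fit(X, y)
selected = np.flatnonzero(lasso[-1].coef_)   # indices of surviving features
print(f"{selected.size} features retained")

# The retained radiomics features (plus clinical predictors such as ascites,
# HE4, and D-dimer in the real model) feed a logistic regression nomogram.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X[:, selected], y)
```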

MRI-based Radiomics for Predicting Prostate Cancer Grade Groups: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies.

Lomer NB, Ashoobi MA, Ahmadzadeh AM, Sotoudeh H, Tabari A, Torigian DA

PubMed | Jun 1 2025
Prostate cancer (PCa) is the second most common cancer among men and a leading cause of cancer-related mortality. Radiomics has shown promising performance in the classification of PCa grade group (GG) in several studies. Here, we aimed to systematically review and meta-analyze the performance of radiomics in predicting GG in PCa. Adhering to PRISMA-DTA guidelines, we included studies employing magnetic resonance imaging-derived radiomics for predicting GG, with histopathologic evaluation as the reference standard. Databases searched included Web of Science, PubMed, Scopus, and Embase. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) and METhodological RadiomICs Score (METRICS) tools were used for quality assessment. Pooled estimates for sensitivity, specificity, likelihood ratios, diagnostic odds ratio, and area under the curve (AUC) were calculated. Cochran's Q and I-squared tests assessed heterogeneity, while meta-regression, subgroup analysis, and sensitivity analysis addressed its potential sources. Publication bias was evaluated using Deeks' funnel plot, while clinical applicability was assessed with Fagan nomograms and likelihood ratio scattergrams. Data were extracted from 43 studies involving 9983 patients. Radiomics models demonstrated high accuracy in predicting GG. Patient-based analyses yielded AUCs of 0.93 for GG≥2, 0.91 for GG≥3, and 0.93 for GG≥4. Lesion-based analyses showed AUCs of 0.84 for GG≥2 and 0.89 for GG≥3. Significant heterogeneity was observed, and meta-regression identified its sources. Radiomics models showed moderate power to exclude and confirm GG. Radiomics appears to be an accurate noninvasive tool for predicting PCa GG. It improves on the performance of standard diagnostic methods, enhancing clinical decision-making.
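
For readers unfamiliar with the heterogeneity statistics used here: Cochran's Q sums inverse-variance-weighted squared deviations of study effects from the fixed-effect pooled estimate, and I² expresses the excess of Q over its degrees of freedom as a percentage. A small self-contained sketch, with made-up study values rather than the review's data, follows.

```python
import numpy as np

def cochran_q_i2(effects, variances):
    """Cochran's Q and I-squared for a set of study effects
    (e.g., logit-transformed sensitivities) with within-study variances."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                       # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)
    df = effects.size - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Hypothetical logit-sensitivities from a handful of studies
pooled, q, i2 = cochran_q_i2([1.9, 2.2, 1.5, 2.8], [0.04, 0.09, 0.06, 0.12])
print(f"pooled logit = {pooled:.2f}, Q = {q:.2f}, I2 = {i2:.1f}%")
```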

Integration of Deep Learning and Sub-regional Radiomics Improves the Prediction of Pathological Complete Response to Neoadjuvant Chemoradiotherapy in Locally Advanced Rectal Cancer Patients.

Wu X, Wang J, Chen C, Cai W, Guo Y, Guo K, Chen Y, Shi Y, Chen J, Lin X, Jiang X

PubMed | Jun 1 2025
The precise prediction of response to neoadjuvant chemoradiotherapy is crucial for tailoring perioperative treatment in patients diagnosed with locally advanced rectal cancer (LARC). This retrospective study aims to develop and validate a model that integrates deep learning and sub-regional radiomics from MRI to predict pathological complete response (pCR) in patients with LARC. We retrospectively enrolled 768 eligible participants from three independent hospitals who had received neoadjuvant chemoradiotherapy followed by radical surgery. Pretreatment pelvic MRI scans (T2-weighted) were collected for annotation and feature extraction. The K-means approach was used to segment the tumor into sub-regions. Radiomics and deep learning features were extracted using Pyradiomics and a 3D ResNet50, respectively. The predictive models were developed from the radiomics, sub-regional radiomics, and deep learning features with machine learning algorithms in the training cohort, and then validated on the external test sets. The models' performance was assessed using various metrics, including the area under the curve (AUC), decision curve analysis, and Kaplan-Meier survival analysis. We constructed a combined model, named SRADL, which integrates deep learning with sub-regional radiomics signatures, enabling precise prediction of pCR in LARC patients. SRADL performed well in predicting pCR in the training cohort (AUC 0.925 [95% CI 0.894 to 0.948]), in test 1 (AUC 0.915 [95% CI 0.869 to 0.949]), and in test 2 (AUC 0.902 [95% CI 0.846 to 0.945]). At an optimal threshold of 0.486, the predicted pCR group had longer survival than the predicted non-pCR group across all three cohorts. SRADL also outperformed other single-modality prediction models. The novel SRADL, which integrates deep learning with sub-regional signatures, showed high accuracy and robustness in predicting pCR to neoadjuvant chemoradiotherapy from pretreatment MRI, making it a promising tool for the personalized management of LARC.
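
Sub-regional (habitat) radiomics of the kind described usually clusters tumor voxels before extracting features from each cluster separately. A minimal sketch with scikit-learn's KMeans follows; clustering on raw intensity with k=3 is an assumption, since the abstract does not give the paper's exact voxel features or cluster count.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_subregions(image, mask, k=3, seed=0):
    """Partition tumor voxels into k intensity-based sub-regions (habitats).
    image, mask: 3D arrays of the same shape; mask is the tumor segmentation."""
    voxels = image[mask > 0].reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(voxels)
    sub = np.zeros_like(mask, dtype=np.int32)
    sub[mask > 0] = labels + 1   # 0 = background, 1..k = sub-regions
    return sub

# Each sub-region mask can then be fed to Pyradiomics separately, yielding
# the sub-regional radiomics features that complement the 3D ResNet50 features.
```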

Deep Learning Radiomics Nomogram Based on MRI for Differentiating between Borderline Ovarian Tumors and Stage I Ovarian Cancer: A Multicenter Study.

Wang X, Quan T, Chu X, Gao M, Zhang Y, Chen Y, Bai G, Chen S, Wei M

PubMed | Jun 1 2025
To develop and validate a deep learning radiomics nomogram (DLRN) based on T2-weighted MRI to distinguish between borderline ovarian tumors (BOTs) and stage I epithelial ovarian cancer (EOC) preoperatively. This retrospective multicenter study enrolled 279 patients from three centers, divided into a training set (n = 207) and an external test set (n = 72). The intra- and peritumoral radiomics analysis was employed to develop a combined radiomics model. A deep learning model was constructed based on the largest orthogonal slices of the tumor volume, and a clinical model was constructed using independent clinical predictors. The DLRN was then constructed by integrating deep learning, intra- and peritumoral radiomics, and clinical predictors. For comparison, an original radiomics model based solely on tumor volume (excluding the peritumoral area) was also constructed. All models were validated through 10-fold cross-validation and external testing, and their predictive performance was evaluated by the area under the receiver operating characteristic curve (AUC). The DLRN demonstrated superior performance across the 10-fold cross-validation, with the highest AUC of 0.825±0.082. On the external test set, the DLRN significantly outperformed the clinical model and the original radiomics model (AUC = 0.819 vs. 0.708 and 0.670, P = 0.047 and 0.015, respectively). Furthermore, the combined radiomics model performed significantly better than the original radiomics model (AUC = 0.778 vs. 0.670, P = 0.043). The DLRN exhibited promising performance in distinguishing BOTs from stage I EOC preoperatively, thus potentially assisting clinical decision-making.
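
A common way to obtain the peritumoral region used in such combined radiomics models is to dilate the tumor mask and subtract the original mask, leaving a surrounding ring. The sketch below uses SciPy; the 5-pixel margin is an illustrative choice, not the paper's setting.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(mask, margin_px=5):
    """Peritumoral region as a ring: dilate the tumor mask, subtract the tumor."""
    tumor = mask > 0
    dilated = binary_dilation(tumor, iterations=margin_px)
    return dilated & ~tumor

# Radiomics features are then extracted from both the tumor mask (intratumoral)
# and the ring (peritumoral), and combined with deep-learning and clinical
# predictors in the DLRN.
```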

Deep Learning-Based Automated Measurement of Cervical Length in Transvaginal Ultrasound Images of Pregnant Women.

Kwon H, Sun S, Cho HC, Yun HS, Park S, Jung YJ, Kwon JY, Seo JK

PubMed | Jun 1 2025
Cervical length (CL) measurement using transvaginal ultrasound is an effective screening tool to assess the risk of preterm birth. An adequate assessment of CL is crucial; however, manual sonographic CL measurement is highly operator-dependent and cumbersome. A reliable and reproducible automatic method for CL measurement is therefore in high demand to reduce inter-rater variability and improve workflow. Despite the increasing use of artificial intelligence techniques in ultrasound, applying deep learning (DL) to analyze ultrasound images of the cervix remains a challenge due to low signal-to-noise ratios and difficulties in capturing the cervical canal, which appears as a thin line with extremely low contrast against the surrounding tissues. To address these challenges, we have developed CL-Net, a novel DL network that incorporates expert anatomical knowledge to identify the cervix, similar to the approach taken by clinicians. CL-Net captures anatomical features related to CL measurement, facilitating the identification of the cervical canal. It then identifies the cervical canal and automatically provides reproducible and reliable CL measurements. CL-Net achieved a success rate of 95.5% in recognizing the cervical canal, comparable to that of human experts (96.4%). Furthermore, the differences between the CL measurements of CL-Net and the ground truth were considerably smaller than those of non-experts and comparable to those of experts (median 1.36 mm, IQR 0.87-2.82 mm, range 0.06-6.95 mm for a straight cervix; median 1.31 mm, IQR 0.61-2.65 mm, range 0.01-8.18 mm for a curved one).
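
Once the cervical canal has been traced, CL itself reduces to the arc length of the traced polyline, which also covers the curved-cervix case the authors report. A minimal sketch with hypothetical points and pixel spacing:

```python
import numpy as np

def cervical_length_mm(canal_points, pixel_spacing_mm):
    """Length of the cervical canal as the arc length of an ordered polyline
    of (row, col) points traced along the predicted canal."""
    pts = np.asarray(canal_points, dtype=float) * pixel_spacing_mm
    segs = np.diff(pts, axis=0)                    # consecutive displacements
    return float(np.sum(np.linalg.norm(segs, axis=1)))

# A straight cervix needs only two endpoints; a curved one needs more points
# along the canal to capture the bend.
print(cervical_length_mm([(0, 0), (30, 5), (55, 18)], pixel_spacing_mm=0.5))
```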

Semi-Supervised Gland Segmentation via Feature-Enhanced Contrastive Learning and Dual-Consistency Strategy.

Yu J, Li B, Pan X, Shi Z, Wang H, Lan R, Luo X

PubMed | Jun 1 2025
In the field of gland segmentation in histopathology, deep-learning methods have made significant progress. However, most existing methods not only require a large amount of high-quality annotated data but also tend to confuse the interior of the gland with the background. To address this challenge, we propose a new semi-supervised method for gland segmentation, named DCCL-Seg, which follows the teacher-student framework. Our approach can be divided into the following steps. First, we design a contrastive learning module to improve the ability of the student model's feature extractor to distinguish between gland and background features. Then, we introduce a Signed Distance Field (SDF) prediction task and employ a dual-consistency strategy (across tasks and models) to better reinforce learning of the gland interior. Next, we propose a pseudo-label filtering and reweighting mechanism, which filters and reweights the pseudo labels generated by the teacher model based on confidence. However, even after reweighting, the pseudo labels may still be influenced by unreliable pixels. Finally, we design an assistant predictor to learn the reweighted pseudo labels; it does not interfere with the student model's predictor and ensures the reliability of the student model's predictions. Experimental results on the publicly available GlaS and CRAG datasets demonstrate that our method outperforms other semi-supervised medical image segmentation methods.
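
The SDF prediction task is easy to make concrete: for a binary gland mask, the regression target is typically the Euclidean distance to the gland boundary, signed by whether a pixel lies inside or outside the gland. The sketch below uses SciPy's distance transform; the sign convention (negative inside) is an assumption, as papers vary.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask):
    """Signed distance field of a binary gland mask: negative inside,
    positive outside, zero on the boundary (up to discretization)."""
    inside = mask > 0
    dist_out = distance_transform_edt(~inside)   # distance to gland, outside
    dist_in = distance_transform_edt(inside)     # distance to background, inside
    return dist_out - dist_in

# Toy example: a 4x4 gland padded with background.
sdf = signed_distance_field(np.pad(np.ones((4, 4), dtype=np.uint8), 3))
```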

GAN Inversion for Data Augmentation to Improve Colonoscopy Lesion Classification.

Golhar MV, Bobrow TL, Ngamruengphong S, Durr NJ

PubMed | Jun 1 2025
A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study explores the use of synthetic images for data augmentation to address the challenge of limited annotated data in colonoscopy lesion classification. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can be used as training data to improve polyp classification performance by deep learning models. We invert pairs of images with the same label into a semantically rich and disentangled latent space and manipulate the latent representations to produce new synthetic images that retain the label of the input pairs. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI), and we generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showed an improvement of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
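
The interpolation step is the simplest part to sketch: given latent codes recovered by inversion for two same-label images, intermediate codes are convex combinations that the generator decodes into new synthetic images. In the snippet below, `inverter` and `generator` are hypothetical stand-ins for the paper's models.

```python
import torch

def interpolate_latents(z1, z2, n_steps=5):
    """Linear interpolation between two inverted latent codes of same-label
    images; each interpolated code decodes to a new synthetic training image."""
    alphas = torch.linspace(0.0, 1.0, n_steps)
    return [(1 - a) * z1 + a * z2 for a in alphas]

# z1, z2 = inverter(img_a), inverter(img_b)        # two images, same polyp label
# synthetic = [generator(z) for z in interpolate_latents(z1, z2)]
# The synthetic images inherit the shared label and join the training set.
```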

Diagnostic value of deep learning of multimodal imaging of thyroid for TI-RADS category 3-5 classification.

Qian T, Feng X, Zhou Y, Ling S, Yao J, Lai M, Chen C, Lin J, Xu D

PubMed | Jun 1 2025
Thyroid nodules classified as Thyroid Imaging Reporting and Data System (TI-RADS) category 3-5 are typically regarded as having varying degrees of malignancy risk, increasing from TI-RADS 3 to TI-RADS 5. While some of these nodules may undergo fine-needle aspiration (FNA) biopsy to assess their nature, this procedure carries a risk of false negatives and inherent complications. To avoid unnecessary biopsies, we explored a method for distinguishing benign from malignant thyroid TI-RADS 3-5 nodules based on deep learning of ultrasound images combined with computed tomography (CT). Thyroid nodules assessed as American College of Radiology (ACR) TI-RADS category 3-5 on conventional ultrasound, all with postoperative pathology results, were examined with both conventional ultrasound and CT before surgery. We investigated the effectiveness of deep-learning models based on ultrasound alone, CT alone, and a combination of both imaging modalities using the following metrics: area under the curve (AUC), sensitivity, accuracy, and positive predictive value (PPV). Additionally, we compared the diagnostic efficacy of the combined method with manual readings of ultrasound and CT. A total of 768 thyroid nodules falling within TI-RADS categories 3-5 were identified across 768 patients. The dataset comprised 499 malignant and 269 benign cases. For the automatic identification of thyroid TI-RADS category 3-5 nodules, deep learning combining ultrasound and CT demonstrated a significantly higher AUC (0.930; 95% CI: 0.892, 0.969) than ultrasound alone (AUC 0.901; 95% CI: 0.856, 0.947) or CT alone (AUC 0.776; 95% CI: 0.713, 0.840). The AUC of the combined modalities also surpassed radiologists' assessments using ultrasound alone (mean AUC 0.725; 95% CI: 0.677, 0.773) or CT alone (mean AUC 0.617; 95% CI: 0.564, 0.669). A deep-learning method combining ultrasound and CT imaging of the thyroid thus allows more accurate and precise classification of nodules within TI-RADS categories 3-5.
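
The abstract does not state how the ultrasound and CT predictions are combined, but one simple baseline consistent with the description is probability-level late fusion: average the two models' softmax outputs. A minimal PyTorch sketch, with the equal weighting as an illustrative assumption:

```python
import torch

@torch.no_grad()
def fused_probability(us_model, ct_model, us_img, ct_img, w_us=0.5):
    """Late fusion at the probability level: weighted average of the softmax
    outputs of an ultrasound model and a CT model for the same nodule."""
    p_us = torch.softmax(us_model(us_img), dim=1)
    p_ct = torch.softmax(ct_model(ct_img), dim=1)
    return w_us * p_us + (1 - w_us) * p_ct
```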