
MBLEformer: Multi-Scale Bidirectional Lesion Enhancement Transformer for Cervical Cancer Image Segmentation.

Li S, Chen P, Zhang J, Wang B

PubMed · Sep 16, 2025
Accurate segmentation of lesion areas from Lugol's iodine staining images is crucial for screening pre-cancerous cervical lesions. However, in underdeveloped regions lacking skilled clinicians, visual interpretation of these images may lead to misdiagnosis and missed diagnoses. In recent years, deep learning methods have been widely applied to assist in medical image segmentation. This study aims to improve the accuracy of cervical cancer lesion segmentation by addressing the limitations of Convolutional Neural Networks (CNNs) and attention mechanisms in capturing global features and refining upsampling details. This paper presents a Multi-Scale Bidirectional Lesion Enhancement Network, named MBLEformer, which employs a Swin Transformer encoder to extract image features at multiple stages and utilizes a multi-scale attention mechanism to capture semantic features from different perspectives. Additionally, a bidirectional lesion enhancement upsampling strategy is introduced to refine the edge details of lesion areas. Experimental results demonstrate that the proposed model exhibits superior segmentation performance on a proprietary cervical cancer colposcopic dataset, outperforming other medical image segmentation methods, with a mean Intersection over Union (mIoU) of 82.5%, an accuracy of 94.9%, and a specificity of 83.6%. MBLEformer significantly improves the accuracy of lesion segmentation in iodine-stained cervical cancer images, with the potential to enhance the efficiency and accuracy of pre-cancerous lesion diagnosis and help address the issue of imbalanced medical resources.
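
The abstract describes a multi-scale attention mechanism over Swin Transformer encoder features but gives no implementation; below is a minimal PyTorch sketch of the general idea, where the module name, channel widths, and shapes are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttention(nn.Module):
    """Fuse encoder features from several stages via per-scale spatial
    attention maps; channel widths and shapes are illustrative."""
    def __init__(self, channels=(96, 192, 384), out_channels=96):
        super().__init__()
        self.projs = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in channels])
        self.attns = nn.ModuleList([nn.Conv2d(out_channels, 1, 1) for _ in channels])

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) tensors from encoder stages
        size = feats[0].shape[-2:]
        fused = []
        for f, proj, attn in zip(feats, self.projs, self.attns):
            f = F.interpolate(proj(f), size=size, mode="bilinear",
                              align_corners=False)
            fused.append(f * torch.sigmoid(attn(f)))  # weight each scale spatially
        return torch.stack(fused).sum(dim=0)

feats = [torch.randn(1, c, s, s) for c, s in [(96, 56), (192, 28), (384, 14)]]
print(MultiScaleAttention()(feats).shape)  # torch.Size([1, 96, 56, 56])
```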

Machine and deep learning for MRI-based quantification of liver iron overload: a systematic review and meta-analysis.

Elhaie M, Koozari A, Alshammari QT

PubMed · Sep 16, 2025
Liver iron overload, associated with conditions such as hereditary hemochromatosis and β-thalassemia major, requires accurate quantification of liver iron concentration (LIC) to guide timely interventions and prevent complications. Magnetic resonance imaging (MRI) is the gold standard for noninvasive LIC assessment, but challenges in protocol variability and diagnostic consistency persist. Machine learning (ML) and deep learning (DL) offer potential to enhance MRI-based LIC quantification, yet their efficacy remains underexplored. This systematic review and meta-analysis evaluates the diagnostic accuracy, algorithmic performance, and clinical applicability of ML and DL techniques for MRI-based LIC quantification in liver iron overload, adhering to PRISMA guidelines. A comprehensive search across PubMed, Embase, Scopus, Web of Science, Cochrane Library, and IEEE Xplore identified studies applying ML/DL to MRI-based LIC quantification. Eligible studies were assessed for diagnostic accuracy (sensitivity, specificity, AUC), LIC quantification precision (correlation, mean absolute error), and clinical applicability (automation, processing time). Methodological quality was evaluated using the QUADAS-2 tool, with qualitative synthesis and meta-analysis where feasible. Eight studies were included, employing algorithms such as convolutional neural networks (CNNs), radiomics, and fuzzy C-means clustering on T2*-weighted and multiparametric MRI. Pooled diagnostic accuracy from three studies showed a sensitivity of 0.79 (95% CI: 0.66-0.88) and specificity of 0.77 (95% CI: 0.64-0.86), with an AUC of 0.84. The DL methods demonstrated high precision (e.g., Pearson's r = 0.999) and automation, reducing processing times to as low as 0.1 s/slice. Limitations included heterogeneity, limited generalizability, and small external validation sets. Both ML and DL enhance MRI-based LIC quantification, offering high accuracy and efficiency. Standardized protocols and multicenter validation are needed to ensure clinical scalability and equitable access.
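
For readers unfamiliar with how pooled sensitivities like the one above are derived, here is a simplified sketch of fixed-effect inverse-variance pooling on the logit scale; the per-study counts are hypothetical, and published diagnostic meta-analyses typically use bivariate random-effects models instead.

```python
import numpy as np

def pool_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale; a simplified stand-in for bivariate random-effects models."""
    p = np.asarray(events) / np.asarray(totals)
    logits = np.log(p / (1 - p))
    w = np.asarray(totals) * p * (1 - p)      # inverse of the logit variance
    pooled = np.sum(w * logits) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(pooled), expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)

# Hypothetical per-study true-positive counts among diseased patients
sens, lo, hi = pool_proportion(events=[40, 55, 62], totals=[50, 70, 80])
print(f"pooled sensitivity {sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```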

Fully automatic bile duct segmentation in magnetic resonance cholangiopancreatography for biliary surgery planning using deep learning.

Tao H, Wang J, Guo K, Luo W, Zeng X, Lu M, Lin J, Li B, Qian Y, Yang J

PubMed · Sep 15, 2025
To automatically and accurately perform three-dimensional reconstruction of dilated and non-dilated bile ducts based on magnetic resonance cholangiopancreatography (MRCP) data, assisting in the formulation of optimal surgical plans and guiding precise bile duct surgery. A total of 249 consecutive patients who underwent standardized 3D-MRCP scans were randomly divided into a training cohort (n = 208) and a testing cohort (n = 41). Ground truth segmentation was manually delineated by two hepatobiliary surgeons or radiologists following industry certification procedures and reviewed by two expert-level physicians for biliary surgery planning. The deep learning semantic segmentation model was constructed using the nnU-Net framework. Model performance was assessed by comparing model predictions with ground truth segmentation as well as real surgical scenarios. The generalization of the model was tested on a dataset of 10 3D-MRCP scans from other centers, with ground truth segmentation of biliary structures. The evaluation was performed on the 41 internal and 10 external test cases, with mean Dice Similarity Coefficient (DSC) values of 0.9403 and 0.9070, respectively. The correlation coefficient between the 3D model based on automatic segmentation predictions and the ground truth results exceeded 0.95. The 95% limits of agreement (LoA) ranged from -4.456 to 4.781 for biliary tract length and from -3.404 to 3.650 ml for biliary tract volume. Furthermore, intraoperative indocyanine green (ICG) fluorescence imaging and operative findings confirmed that this model can accurately reconstruct biliary landmarks. By leveraging a deep learning algorithmic framework, an AI model can be trained to perform automatic and accurate 3D reconstructions of non-dilated bile ducts, thereby providing guidance for the preoperative planning of complex biliary surgeries.
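
A brief sketch of the two evaluation metrics reported above, the Dice Similarity Coefficient and Bland-Altman 95% limits of agreement, computed on toy data (the masks and paired measurements below are fabricated for illustration):

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

rng = np.random.default_rng(0)
truth = rng.random((32, 32, 32)) > 0.5   # toy ground-truth bile duct voxels
pred = truth.copy()
pred[:2] ^= True                         # perturb a slab to mimic model error
print(f"DSC = {dice(pred, truth):.4f}")
print("LoA:", limits_of_agreement([10.2, 11.5, 9.8], [10.6, 11.1, 10.3]))
```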

Artificial Intelligence-Derived Intramuscular Adipose Tissue Assessment Predicts Perineal Wound Complications Following Abdominoperineal Resection.

Besson A, Cao K, Kokelaar R, Hajdarevic E, Wirth L, Yeung J, Yeung JM

PubMed · Sep 15, 2025
Perineal wound complications following abdominoperineal resection (APR) significantly impact patient morbidity. Despite various closure techniques, no method has proven superior. Body composition is a key factor influencing postoperative outcomes, and AI-assisted CT scan analysis is an accurate and efficient approach to assessing it. This study aimed to evaluate whether body composition characteristics can predict perineal wound complications following APR. A retrospective cohort study of APR patients from 2012 to 2024 was conducted, comparing outcomes of primary closure and inferior gluteal artery myocutaneous (IGAM) flap closure. Preoperative CT scans were analyzed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue, and subcutaneous adipose tissue. Greater IMAT volume correlated with increased wound dehiscence in males undergoing IGAM closure (40% vs. 4.8%, p = 0.027). A lower SM-to-IMAT volume ratio was associated with higher wound infection rates (60% vs. 19%, p = 0.04). Closure technique did not significantly affect wound infection or dehiscence rates. This study is the first to use AI-derived 3D body composition analysis to assess perineal wound complications after APR. IMAT volume significantly influences wound healing in male patients undergoing IGAM reconstruction.
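
As a rough illustration of the quantities involved, the sketch below derives tissue volumes from hypothetical binary CT masks and applies a Fisher exact test to a 2x2 outcome table. The voxel spacing and masks are invented, and the group denominators are assumptions chosen only so the proportions match the reported 40% and 4.8%.

```python
import numpy as np
from scipy.stats import fisher_exact

def volume_ml(mask, spacing_mm=(1.0, 1.0, 5.0)):
    """Tissue volume in millilitres from a binary CT segmentation mask."""
    return mask.sum() * np.prod(spacing_mm) / 1000.0  # mm^3 -> ml

rng = np.random.default_rng(1)
sm_mask = rng.random((64, 64, 40)) > 0.6     # toy skeletal muscle mask
imat_mask = rng.random((64, 64, 40)) > 0.9   # toy IMAT mask
print(f"SM/IMAT volume ratio: {volume_ml(sm_mask) / volume_ml(imat_mask):.2f}")

# Dehiscence table with hypothetical denominators: 4/10 (40%) vs 1/21 (4.8%)
odds, p = fisher_exact([[4, 6], [1, 20]])
print(f"Fisher exact p = {p:.3f}")  # ~0.027
```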

Image analysis of cardiac hepatopathy secondary to heart failure: Machine learning vs gastroenterologists and radiologists.

Miida S, Kamimura H, Fujiki S, Kobayashi T, Endo S, Maruyama H, Yoshida T, Watanabe Y, Kimura N, Abe H, Sakamaki A, Yokoo T, Tsukada M, Numano F, Kashimura T, Inomata T, Fuzawa Y, Hirata T, Horii Y, Ishikawa H, Nonaka H, Kamimura K, Terai S

PubMed · Sep 14, 2025
Congestive hepatopathy, also known as nutmeg liver, is liver damage secondary to chronic heart failure (HF). Its morphological characteristics on medical imaging remain poorly defined. To leverage machine learning to capture imaging features of congestive hepatopathy using incidentally acquired computed tomography (CT) scans. We retrospectively analyzed 179 chronic HF patients who underwent echocardiography and CT within one year. Right HF severity was classified into three grades. Liver CT images at the level of the paraumbilical vein were used to develop a ResNet-based machine learning model to predict tricuspid regurgitation (TR) severity. Model accuracy was compared with that of six gastroenterology and four radiology experts. Of the included patients, 120 were male (mean age: 73.1 ± 14.4 years). The machine learning model predicted TR severity from a single CT image with significantly higher accuracy than the experts' average. The model was exceptionally reliable for identifying severe TR. Deep learning models, particularly those using ResNet architectures, can help identify morphological changes associated with TR severity, aiding early detection of liver dysfunction in patients with HF and thereby improving outcomes.
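
A minimal PyTorch sketch of the kind of ResNet-based, single-slice severity classifier the abstract describes, assuming a grayscale CT input and three TR grades; the architecture choice (resnet18) and all hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Adapt an ImageNet ResNet to grade TR severity (three classes) from a
# single-channel CT slice; resnet18 and all hyperparameters are illustrative.
model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

ct_slices = torch.randn(8, 1, 224, 224)   # toy batch of liver CT slices
grades = torch.randint(0, 3, (8,))        # toy TR severity labels (0..2)

optimizer.zero_grad()
loss = criterion(model(ct_slices), grades)
loss.backward()
optimizer.step()
print(f"training loss = {loss.item():.3f}")
```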

Multiparametric magnetic resonance imaging of deep learning-based super-resolution reconstruction for predicting histopathologic grade in hepatocellular carcinoma.

Wang ZZ, Song SM, Zhang G, Chen RQ, Zhang ZC, Liu R

PubMed · Sep 14, 2025
Deep learning-based super-resolution (SR) reconstruction can produce high-quality images with more detailed information. To compare multiparametric normal-resolution (NR) and SR magnetic resonance imaging (MRI) for predicting histopathologic grade in hepatocellular carcinoma (HCC). We retrospectively analyzed a total of 826 patients from two medical centers (training, 459; validation, 196; test, 171). T2-weighted imaging, diffusion-weighted imaging, and portal venous phase images were collected. Tumor segmentation was performed automatically with a 3D U-Net. Using a generative adversarial network, we applied 3D SR reconstruction to produce SR MRI. Radiomics models were developed and validated with XGBoost and CatBoost. Predictive efficiency was assessed with calibration curves, decision curve analysis, area under the curve (AUC), and the net reclassification index (NRI). We extracted 3045 radiomic features from both NR and SR MRI, retaining 29 and 28 features, respectively. For the XGBoost models, SR MRI yielded higher AUC values than NR MRI in the validation and test cohorts (0.83 vs 0.79 and 0.80 vs 0.78, respectively). Consistent trends were seen in the CatBoost models: SR MRI achieved AUCs of 0.89 and 0.80 compared to NR MRI's 0.81 and 0.76. The NRI indicated that the SR MRI models improved prediction accuracy by -1.6% to 20.9% compared with the NR MRI models. Deep learning-based SR MRI could improve the prediction of histopathologic grade in HCC and may be a powerful tool for better stratification and management of patients with operable HCC.
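
As a sketch of the radiomics modeling step, the snippet below trains an XGBoost classifier on a toy feature matrix and reports test AUC. The feature values and labels are synthetic; only the feature count (29 retained SR features) and patient count come from the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(826, 29))                       # 29 retained SR features
y = ((X[:, :3].sum(axis=1) + rng.normal(size=826)) > 0).astype(int)  # toy grade

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print(f"test AUC = {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")
```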

A machine learning model combining ultrasound features and serological markers predicts gallbladder polyp malignancy: A retrospective cohort study.

Yang Y, Tu H, Lin Y, Wei J

PubMed · Sep 12, 2025
Differentiating benign from malignant gallbladder polyps (GBPs) is critical for clinical decision-making. Pathological biopsy, the gold standard, requires cholecystectomy, underscoring the need for noninvasive alternatives. This retrospective study included 202 patients (50 malignant, 152 benign) who underwent cholecystectomy (2018-2024) at Fujian Provincial Hospital. Ultrasound features (polyp diameter, stalk presence), serological markers (neutrophil-to-lymphocyte ratio [NLR], CA19-9), and demographics (age, sex, body mass index, waist-to-hip ratio, comorbidities, alcohol history) were analyzed. Patients were split into training (70%) and validation (30%) sets. Ten machine learning (ML) algorithms were trained; the model with the highest area under the receiver operating characteristic curve (AUC) was selected. Shapley additive explanations (SHAP) identified key predictors. Models were categorized as clinical (ultrasound + age), hematological (NLR + CA19-9), and combined (all 5 variables). ROC, precision-recall, calibration, and decision curves were generated. A web-based calculator was developed. The Extra Trees model achieved the highest AUC (0.97 in training, 0.93 in validation). SHAP analysis highlighted polyp diameter, sessile morphology, NLR, age, and CA19-9 as the top predictors. The combined model outperformed the clinical (AUC 0.89) and hematological (AUC 0.68) models, with sensitivity of 66% and 54%, specificity of 94% and 93%, and accuracy of 87% and 83% in the training and validation sets, respectively. This ML model integrating ultrasound and serological markers accurately predicts GBP malignancy. The web-based calculator facilitates clinical adoption, potentially reducing unnecessary surgeries.
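
A minimal sketch of the winning pipeline's two ingredients, an Extra Trees classifier plus SHAP feature attribution, on synthetic stand-in data; the five feature names come from the abstract, while the data, labels, and hyperparameters are fabricated for illustration.

```python
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

features = ["polyp_diameter_mm", "sessile", "NLR", "age", "CA19_9"]
rng = np.random.default_rng(7)
X = rng.normal(size=(202, 5))                       # 202 patients, 5 predictors
y = ((X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=202)) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_te)
# Older shap versions return a list per class; newer ones a stacked array
vals = sv[1] if isinstance(sv, list) else sv[..., 1]
for name, imp in sorted(zip(features, np.abs(vals).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {imp:.3f}")
```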

Machine Learning for Preoperative Assessment and Postoperative Prediction in Cervical Cancer: Multicenter Retrospective Model Integrating MRI and Clinicopathological Data.

Li S, Guo C, Fang Y, Qiu J, Zhang H, Ling L, Xu J, Peng X, Jiang C, Wang J, Hua K

PubMed · Sep 12, 2025
Machine learning (ML) has been increasingly applied to cervical cancer (CC) research. However, few studies have combined both clinical parameters and imaging data, and there remains an urgent need for more robust and accurate preoperative assessment of parametrial invasion and lymph node metastasis, as well as postoperative prognosis prediction. The objective of this study is to develop an integrated ML model combining clinicopathological variables and magnetic resonance image features for (1) preoperative detection of parametrial invasion and lymph node metastasis and (2) postoperative recurrence and survival prediction. Retrospective data from 250 patients with CC (2014-2022; 2 tertiary hospitals) were analyzed. Variables were assessed for their predictive value regarding parametrial invasion, lymph node metastasis, survival, and recurrence using 7 ML models: K-nearest neighbor (KNN), support vector machine, decision tree (DT), random forest (RF), balanced RF, weighted DT, and weighted KNN. Performance was assessed via 5-fold cross-validation using accuracy, sensitivity, specificity, precision, F1-score, and area under the receiver operating characteristic curve (AUC). The optimal models were deployed in an artificial intelligence-assisted contouring and prognosis prediction system. Among the 250 women, there were 11 deaths and 24 recurrences. (1) For preoperative evaluation, the integrated model using balanced RF achieved optimal performance for parametrial invasion (sensitivity 0.81, specificity 0.85), while weighted KNN performed best for lymph node metastasis (sensitivity 0.98, AUC 0.72). (2) For postoperative prognosis, weighted KNN also demonstrated high accuracy for recurrence (accuracy 0.94, AUC 0.86) and mortality (accuracy 0.97, AUC 0.77), with sensitivities of 0.80 and 0.33, respectively. (3) An artificial intelligence-assisted contouring and prognosis prediction system was developed to support preoperative evaluation and postoperative prognosis prediction. Integrating clinical data and magnetic resonance images provides enhanced diagnostic capability to preoperatively detect parametrial invasion and lymph node metastasis, and prognostic capability to predict recurrence and mortality in CC, facilitating personalized, precise treatment strategies.
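
A brief sketch of how the two best-performing model families could be benchmarked under 5-fold cross-validation, assuming imbalanced-learn's BalancedRandomForestClassifier and scikit-learn's distance-weighted KNN as stand-ins for the paper's models; the data, outcome prevalence, and hyperparameters are invented.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from imblearn.ensemble import BalancedRandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(250, 20))             # toy clinicopathological + MRI features
y = (rng.random(250) < 0.1).astype(int)    # rare outcome, e.g. recurrence

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "balanced RF": BalancedRandomForestClassifier(n_estimators=200, random_state=0),
    "weighted KNN": KNeighborsClassifier(n_neighbors=7, weights="distance"),
}
for name, model in models.items():
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {aucs.mean():.2f} +/- {aucs.std():.2f}")
```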

The Combined Use of Cervical Ultrasound and Deep Learning Improves the Detection of Patients at Risk for Spontaneous Preterm Delivery.

Sejer EPF, Pegios P, Lin M, Bashir Z, Wulff CB, Christensen AN, Nielsen M, Feragen A, Tolsgaard MG

PubMed · Sep 11, 2025
Preterm birth is the leading cause of neonatal mortality and morbidity. While ultrasound-based cervical length measurement is the current standard for predicting preterm birth, its performance is limited. Artificial intelligence (AI) has shown potential in ultrasound analysis, yet only a few small-scale studies have evaluated its use for predicting preterm birth. To develop and validate an AI model for spontaneous preterm birth prediction from cervical ultrasound images and compare its performance to cervical length. In this multicenter study, we developed a deep learning-based AI model using data from women who underwent cervical ultrasound scans as part of antenatal care between 2008 and 2018 in Denmark. Indications for ultrasound were not systematically recorded, and scans were likely performed due to risk factors or symptoms of preterm labor. We compared the performance of the AI model with cervical length measurement for spontaneous preterm birth prediction by assessing the area under the curve (AUC), sensitivity, specificity, and likelihood ratios. Subgroup analyses evaluated model performance across baseline characteristics, and saliency heat maps identified the anatomical features that most influenced the AI model's predictions. The final dataset included 4,224 pregnancies and 7,862 cervical ultrasound images, with 50% resulting in spontaneous preterm birth. The AI model surpassed cervical length for predicting spontaneous preterm birth before 37 weeks, with a sensitivity of 0.51 (95% CI 0.50-0.53) versus 0.41 (0.39-0.42) at a fixed specificity of 0.85 (p<0.001) and a higher AUC of 0.75 (0.74-0.76) versus 0.67 (0.66-0.68) (p<0.001). For identifying late preterm births at 34-37 weeks, the AI model had 36.6% higher sensitivity than cervical length (0.47 versus 0.34, p<0.001). The AI model achieved higher AUCs across all subgroups, especially at earlier gestational ages. Saliency heat maps indicated that in 54% of preterm birth cases, the AI model focused on the posterior inner lining of the lower uterine segment, suggesting it incorporates more information than cervical length alone. To our knowledge, this is the first large-scale, multicenter study demonstrating that AI is more sensitive than cervical length measurement in identifying spontaneous preterm births across multiple patient characteristics, 19 hospital sites, and different ultrasound machines. The AI model performs particularly well at earlier gestational ages, enabling more timely prophylactic interventions.
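
The headline comparison, sensitivity at a fixed specificity of 0.85, is straightforward to compute from ROC coordinates; the sketch below does so on synthetic scores. The ~50% class balance and cohort size come from the abstract, while the score distributions are invented.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def sensitivity_at_specificity(y_true, scores, target_spec=0.85):
    """Sensitivity at the ROC operating point closest to a target specificity."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    return tpr[np.argmin(np.abs((1 - fpr) - target_spec))]

rng = np.random.default_rng(5)
y = (rng.random(4224) < 0.5).astype(int)          # ~50% preterm, as in the study
ai_score = y + rng.normal(scale=1.0, size=4224)   # toy AI model scores
cl_score = y + rng.normal(scale=1.6, size=4224)   # toy cervical-length proxy

for name, s in [("AI model", ai_score), ("cervical length", cl_score)]:
    print(f"{name}: AUC={roc_auc_score(y, s):.2f}, "
          f"sens@spec=0.85: {sensitivity_at_specificity(y, s):.2f}")
```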

Training With Local Data Remains Important for Deep Learning MRI Prostate Cancer Detection.

Carere SG, Jewell J, Nasute Fauerbach PV, Emerson DB, Finelli A, Ghai S, Haider MA

PubMed · Sep 11, 2025
Domain shift has been shown to have a major detrimental effect on AI model performance; however, prior studies of domain shift in MRI prostate cancer segmentation have been limited to small or heterogeneous cohorts. Our objective was to assess whether prostate cancer segmentation models trained on local MRI data continue to outperform those trained on external data when cohorts exceed 1000 exams. We simulated a multi-institutional consortium using the public PICAI dataset (PICAI-TRAIN: 1241 exams; PICAI-TEST: 259) and a local dataset (LOCAL-TRAIN: 1400 exams; LOCAL-TEST: 308). IRB approval was obtained and consent waived. We compared nnUNet-v2 models trained on the combined data (CENTRAL-TRAIN) and separately on PICAI-TRAIN and LOCAL-TRAIN. Accuracy was evaluated on LOCAL-TEST using the open-source PICAI Score. Significance was tested using bootstrapping. Just 22% (309/1400) of LOCAL-TRAIN exams would be sufficient to match the performance of a model trained on PICAI-TRAIN. CENTRAL-TRAIN performance was similar to LOCAL-TRAIN performance, with PICAI Scores [95% CI] of 65 [58-71] and 66 [60-72], respectively. Both of these models exceeded the model trained on PICAI-TRAIN alone, which had a score of 58 [51-64] (P < .002). Reducing the training set size did not alter these relative trends. Domain shift limits MRI prostate cancer segmentation performance even when training with over 1000 exams from 3 external institutions. Use of local data is paramount at these scales.
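
The significance testing described above can be approximated with a paired bootstrap over test exams; the sketch below uses AUC as a stand-in metric (the study used the open-source PICAI Score) with synthetic predictions of the LOCAL-TEST size.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_p(metric, labels, preds_a, preds_b, n_boot=2000, seed=0):
    """Paired bootstrap over test exams: one-sided p-value for the
    hypothesis that model A does not outperform model B on `metric`."""
    rng = np.random.default_rng(seed)
    n, not_better = len(labels), 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample exams with replacement
        if metric(labels[idx], preds_a[idx]) <= metric(labels[idx], preds_b[idx]):
            not_better += 1
    return not_better / n_boot

rng = np.random.default_rng(9)
labels = (rng.random(308) < 0.3).astype(int)         # toy LOCAL-TEST outcomes
local = labels + rng.normal(scale=0.8, size=308)     # locally trained model
external = labels + rng.normal(scale=1.2, size=308)  # externally trained model
print("p =", bootstrap_p(roc_auc_score, labels, local, external))
```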
