
MRI Radiomics and Automated Habitat Analysis Enhance Machine Learning Prediction of Bone Metastasis and High-Grade Gleason Scores in Prostate Cancer.

Yang Y, Zheng B, Zou B, Liu R, Yang R, Chen Q, Guo Y, Yu S, Chen B

PubMed | Jun 23, 2025
To explore the value of machine learning models based on MRI radiomics and automated habitat analysis in predicting bone metastasis and high-grade pathological Gleason scores in prostate cancer. This retrospective study enrolled 214 patients with pathologically diagnosed prostate cancer from May 2013 to January 2025, including 93 cases with bone metastasis and 159 cases with high-grade Gleason scores. Clinical, pathological, and MRI data were collected. An nnUNet model automatically segmented the prostate in MRI scans, and K-means clustering identified habitat subregions within the whole prostate on T2-FS images. Senior radiologists manually segmented regions of interest (ROIs) in prostate lesions. Radiomics features were extracted from the habitat subregions and lesion ROIs and, combined with clinical features, were used to build multiple machine learning classifiers for predicting bone metastasis and high-grade Gleason scores. Finally, the models underwent interpretability analysis based on feature importance. The nnUNet model achieved a mean Dice coefficient of 0.970 for segmentation. Habitat analysis with 2 clusters yielded the highest average silhouette coefficient (0.57). Machine learning models combining lesion radiomics, habitat radiomics, and clinical features achieved the best performance in both prediction tasks: the Extra Trees classifier achieved the highest AUC (0.900) for predicting bone metastasis, while the CatBoost classifier performed best (AUC 0.895) for predicting high-grade Gleason scores. Interpretability analysis of the optimal models showed that PSA was the most influential clinical feature, while both habitat radiomics and lesion radiomics also played important roles. The study proposed an automated habitat analysis for prostate cancer, enabling a comprehensive analysis of tumor heterogeneity. The machine learning models developed achieved excellent performance in predicting the risk of bone metastasis and high-grade Gleason scores in prostate cancer. This approach overcomes the limitations of manual feature extraction and the inadequate heterogeneity analysis often encountered in traditional radiomics, thereby improving model performance.
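As a rough sketch of the habitat step described above — K-means clustering of intra-prostate voxels with the cluster count chosen by the average silhouette coefficient — the following minimal Python example may help; the per-voxel feature matrix and the candidate cluster range are placeholder assumptions, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder: rows are voxels inside the segmented prostate, columns are
# per-voxel features (e.g., T2-FS intensity and simple local statistics).
rng = np.random.default_rng(0)
voxel_features = rng.normal(size=(5000, 3))

best_k, best_score = None, -1.0
for k in range(2, 6):  # candidate numbers of habitat subregions
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxel_features)
    score = silhouette_score(voxel_features, labels, sample_size=2000, random_state=0)
    print(f"k={k}: mean silhouette = {score:.2f}")
    if score > best_score:
        best_k, best_score = k, score

print(f"selected k = {best_k} (silhouette = {best_score:.2f})")
```

In the study, k = 2 gave the highest average silhouette coefficient (0.57), so each prostate is partitioned into two habitat subregions from which radiomics features are then extracted.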

Machine Learning Models Based on CT Enterography for Differentiating Between Ulcerative Colitis and Colonic Crohn's Disease Using Intestinal Wall, Mesenteric Fat, and Visceral Fat Features.

Wang X, Wang X, Lei J, Rong C, Zheng X, Li S, Gao Y, Wu X

PubMed | Jun 23, 2025
This study aimed to develop radiomics-based machine learning models using computed tomography enterography (CTE) features derived from the intestinal wall, mesenteric fat, and visceral fat to differentiate between ulcerative colitis (UC) and colonic Crohn's disease (CD). Clinical and imaging data from 116 patients with inflammatory bowel disease (IBD) (68 with UC and 48 with colonic CD) were retrospectively collected. Radiomic features were extracted from venous-phase CTE images. Feature selection was performed via the intraclass correlation coefficient (ICC), correlation analysis, SelectKBest, and least absolute shrinkage and selection operator (LASSO) regression. Support vector machine models were constructed using features from individual and combined regions, with model performance evaluated using the area under the ROC curve (AUC). The combined radiomic model, integrating features from all three regions, exhibited superior classification performance (AUC = 0.857; 95% CI, 0.732-0.982), with a sensitivity of 0.762 (95% CI, 0.547-0.903) and specificity of 0.857 (95% CI, 0.601-0.960) in the testing cohort. The models based on features from the intestinal wall, mesenteric fat, and visceral fat achieved AUCs of 0.847 (95% CI, 0.710-0.984), 0.707 (95% CI, 0.526-0.889), and 0.731 (95% CI, 0.553-0.910), respectively, in the testing cohort. The intestinal wall model demonstrated the best calibration. This study demonstrated the feasibility of constructing machine learning models based on radiomic features of the intestinal wall, mesenteric fat, and visceral fat to distinguish between UC and colonic CD.
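A hedged sketch of the feature-selection chain the abstract names (SelectKBest followed by LASSO, feeding a support vector machine) is shown below; the ICC/correlation pre-filtering is omitted, and the feature table, dimensions, and selection sizes are placeholder assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, SelectFromModel, f_classif
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder radiomics table: 116 patients x 800 venous-phase CTE features,
# labels 0 = UC, 1 = colonic CD (the real feature count is study-specific).
rng = np.random.default_rng(0)
X = rng.normal(size=(116, 800))
y = rng.integers(0, 2, size=116)

model = Pipeline([
    ("scale", StandardScaler()),
    ("kbest", SelectKBest(f_classif, k=50)),  # univariate screening
    # LASSO-based selection; max_features keeps the top coefficients so the
    # downstream SVM always receives a non-empty feature set.
    ("lasso", SelectFromModel(LassoCV(cv=5, max_iter=5000),
                              threshold=-np.inf, max_features=20)),
    ("svm", SVC(kernel="rbf")),
])

auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```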

Development and validation of a SOTA-based system for biliopancreatic segmentation and station recognition in EUS.

Zhang J, Zhang J, Chen H, Tian F, Zhang Y, Zhou Y, Jiang Z

PubMed | Jun 23, 2025
Endoscopic ultrasound (EUS) is a vital tool for diagnosing biliopancreatic disease, offering detailed imaging to identify key abnormalities. Its interpretation demands expertise, which limits its accessibility for less trained practitioners, so tools that assist in interpreting EUS images are crucial for improving diagnostic accuracy and efficiency. This study aimed to develop an AI-assisted EUS system for accurate pancreatic and biliopancreatic duct segmentation and to evaluate its impact on endoscopists' ability to identify biliopancreatic diseases during segmentation and anatomical localization. The EUS-AI system was designed to perform station recognition and anatomical structure segmentation. A total of 45,737 EUS images from 1852 patients were used for model training; 2881 images were reserved for internal testing, and 2747 images from 208 patients were used for external validation. An additional 340 images formed a man-machine competition test set. Several recent state-of-the-art (SOTA) deep learning algorithms were compared. In the station recognition task, the Mean Teacher algorithm achieved the highest accuracy, averaging 95.60% (92.07%-99.12%) on the internal test set and 92.72% (88.30%-97.15%) on the external test set, compared with ResNet-50 and YOLOv8-CLS. For segmentation, the U-Net v2 algorithm was optimal, outperforming UNet++ and YOLOv8. The EUS-AI system was then constructed from the optimal models of the two tasks, and a man-machine competition experiment was conducted. The results demonstrated that the EUS-AI system significantly outperformed mid-level endoscopists in both station recognition (p < 0.001) and pancreas and biliopancreatic duct segmentation (p < 0.001, p = 0.004). The EUS-AI system is expected to significantly shorten the learning curve for pancreatic EUS examination and enhance procedural standardization.
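The Mean Teacher result above rests on a semi-supervised scheme in which a teacher network tracks an exponential moving average (EMA) of the student's weights. A minimal PyTorch sketch of that update follows; the tiny linear network and the 12-station output are placeholder assumptions, not the paper's backbone.

```python
import copy
import torch
import torch.nn as nn

# Placeholder student classifier; the real station-recognition backbone differs.
student = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 12))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is never updated by gradient descent

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.99) -> None:
    """Teacher weights follow an exponential moving average of the student's."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1 - decay)

# Called after each optimizer step on the supervised + consistency losses:
ema_update(teacher, student)
```

The teacher's predictions on augmented unlabeled images supply consistency targets, which is what lets Mean Teacher exploit large pools of unannotated frames.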

CT Radiomics-Based Explainable Machine Learning Model for Accurate Differentiation of Malignant and Benign Endometrial Tumors: A Two-Center Study

Tingrui Zhang, Honglin Wu, Zekun Jiang, Yingying Wang, Rui Ye, Huiming Ni, Chang Liu, Jin Cao, Xuan Sun, Rong Shao, Xiaorong Wei, Yingchun Sun

arXiv preprint | Jun 22, 2025
This study aimed to develop and validate a CT radiomics-based explainable machine learning model for differentiating malignant from benign endometrial tumors. A total of 83 patients from two centers, including 46 with malignant and 37 with benign conditions, were included, with data split into a training set (n = 59) and a testing set (n = 24). Regions of interest (ROIs) were manually segmented from pre-surgical CT scans, and 1132 radiomic features were extracted from them using Pyradiomics. Six explainable machine learning algorithms were implemented to determine the optimal radiomics pipeline. Diagnostic performance was evaluated using sensitivity, specificity, accuracy, precision, F1 score, confusion matrices, and ROC curves. To enhance clinical understanding and usability, SHAP analysis and feature-map visualization were implemented, and calibration and decision curves were evaluated. Among the six modeling strategies, the Random Forest model emerged as the optimal choice, with a training AUC of 1.00 and a testing AUC of 0.96. SHAP identified the most important radiomic features, revealing that all selected features were significantly associated with endometrial cancer (P < 0.05). Radiomics feature maps also provide a feasible assessment tool for clinical applications. Decision curve analysis (DCA) indicated a higher net benefit for the model compared with the "All" and "None" strategies, suggesting its clinical utility in identifying high-risk cases and reducing unnecessary interventions. In conclusion, the CT radiomics-based explainable machine learning model achieved high diagnostic performance and could serve as an intelligent auxiliary tool for the diagnosis of endometrial cancer.
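The SHAP step can be illustrated with a short sketch: a Random Forest fitted on a placeholder feature table, then features ranked by mean absolute SHAP value. The data shapes mirror the abstract (83 patients, 1132 features), but everything else is an assumption, not the authors' pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Placeholder: 83 patients x 1132 Pyradiomics features; 1 = malignant, 0 = benign.
rng = np.random.default_rng(0)
X = rng.normal(size=(83, 1132))
y = rng.integers(0, 2, size=83)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; the output
# shape convention differs across SHAP versions, so handle both.
sv = np.asarray(shap.TreeExplainer(rf).shap_values(X))
if sv.ndim == 3 and sv.shape[0] == 2:  # older SHAP: (classes, samples, features)
    sv = sv[1]
elif sv.ndim == 3:                     # newer SHAP: (samples, features, classes)
    sv = sv[..., 1]

importance = np.abs(sv).mean(axis=0)   # mean |SHAP| per feature
print("top-10 feature indices:", np.argsort(importance)[::-1][:10])
```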

Ultrasound placental image texture analysis using artificial intelligence and deep learning models to predict hypertension in pregnancy.

Arora U, Vigneshwar P, Sai MK, Yadav R, Sengupta D, Kumar M

PubMed | Jun 21, 2025
This study examined the application of ultrasound placental image texture analysis for the prediction of hypertensive disorders of pregnancy (HDP) using deep learning (DL) algorithms. In this prospective observational study, placental ultrasound images were taken serially at 11-14 weeks (T1), 20-24 weeks (T2), and 28-32 weeks (T3). Pregnant women with blood pressure at or above 140/90 mmHg on two occasions 4 h apart were considered to have HDP. The image data of women with HDP were compared with those of women with a normal outcome using DL techniques including convolutional neural networks (CNN), transfer learning, and a Vision Transformer (ViT) with a TabNet classifier. The accuracy and Cohen kappa scores of the different DL techniques were compared. A total of 600/1008 (59.5%) subjects had a normal outcome and 143/1008 (14.2%) had HDP; the remainder, 265/1008 (26.3%), had other adverse outcomes. The basic CNN model achieved an accuracy of 81.6% for T1, 80% for T2, and 82.8% for T3. With the EfficientNet-B0 transfer learning model, the accuracy was 87.7%, 85.3%, and 90.3% for T1, T2, and T3, respectively. With the TabNet classifier and ViT, the accuracy and area under the receiver operating characteristic curve were 91.4% and 0.915 for T1, 90.2% and 0.904 for T2, and 90.3% and 0.907 for T3. The sensitivity and specificity for HDP prediction using the ViT were 89.1% and 91.7% for T1, 86.6% and 93.7% for T2, and 85.6% and 94.6% for T3. Ultrasound placental image texture analysis using DL differentiated women with a normal outcome from those with HDP with excellent accuracy and could open new avenues for research in this field.
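The transfer-learning arm can be sketched in a few lines of PyTorch: load an ImageNet-pretrained EfficientNet-B0 and swap its head for the two-class task. Freezing the backbone and the input handling are illustrative choices, not details reported by the study.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained EfficientNet-B0 with a new binary head (normal vs. HDP).
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False  # freeze the convolutional backbone (one common choice)

in_features = model.classifier[1].in_features  # classifier = [Dropout, Linear]
model.classifier[1] = nn.Linear(in_features, 2)

# Dummy grayscale placental frame replicated to 3 channels at 224 x 224.
x = torch.randn(1, 1, 224, 224).repeat(1, 3, 1, 1)
print(model(x).shape)  # torch.Size([1, 2])
```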

SE-ATT-YOLO: A deep learning-driven, ultrasound-based respiratory motion compensation system for precision radiotherapy.

Kuo CC, Pillai AG, Liao AH, Yu HW, Ramanathan S, Zhou H, Boominathan CM, Jeng SC, Chiou JF, Chuang HC

PubMed | Jun 21, 2025
The therapeutic management of neoplasms employs high-energy beams to ablate malignant cells, which can cause collateral damage to adjacent normal tissue. Furthermore, respiration-induced organ motion during radiotherapy can lead to significant displacement of neoplasms. In this work, a non-invasive, ultrasound-based deep learning algorithm for a respiratory motion compensation system (RMCS) was developed to mitigate the effect of respiration-induced neoplasm movement in radiotherapy. The deep learning model, SE-ATT-YOLO, was built on a modified YOLOv8n (You Only Look Once) by incorporating squeeze-and-excitation blocks for channel-wise recalibration and enhanced attention mechanisms for spatial focus, enabling real-time ultrasound image detection. The trained model was run on ultrasound recordings of human diaphragm movement, and the bounding-box coordinates were tracked using BoT-SORT to drive the RMCS. The SE-ATT-YOLO model achieved a mean average precision (mAP) of 0.88, outperforming YOLOv8n's 0.85, with an inference speed of approximately 50 FPS. The root mean square error (RMSE) between prerecorded respiratory signals and the compensated RMCS signal was calculated: 4.342 for baseline shift, 3.105 for the sinusoidal signal, 1.778 for deep breathing, and 1.667 for the slow signal. SE-ATT-YOLO outperformed previous models, and the loss-function instability observed in YOLOv8n was rectified in SE-ATT-YOLO, reflecting the model's stability. The model's stability, speed, and accuracy optimized the performance of the RMCS.
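The squeeze-and-excitation recalibration at the heart of SE-ATT-YOLO is a standard module; a self-contained PyTorch version is shown below as an illustration of the mechanism, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-average-pool the feature map, pass the
    channel descriptor through a bottleneck MLP, and rescale the channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))  # squeeze: (B, C)
        return x * weights.view(b, c, 1, 1)    # excite: channel-wise rescaling

# Example: recalibrate a 64-channel feature map from a detection backbone.
feat = torch.randn(2, 64, 80, 80)
print(SEBlock(64)(feat).shape)  # torch.Size([2, 64, 80, 80])
```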

Development of Radiomics-Based Risk Prediction Models for Stages of Hashimoto's Thyroiditis Using Ultrasound, Clinical, and Laboratory Factors.

Chen JH, Kang K, Wang XY, Chi JN, Gao XM, Li YX, Huang Y

PubMed | Jun 21, 2025
To develop a radiomics risk-prediction model for differentiating the stages of Hashimoto's thyroiditis (HT). Data from patients with HT who underwent definitive surgical pathology between January 2018 and December 2023 were retrospectively collected and categorized into early HT (positive antibodies alone or accompanied by elevated thyroid hormones) and late HT (positive antibodies with incipient subclinical hypothyroidism or established hypothyroidism). Ultrasound images and 5 clinical and 12 laboratory indicators were obtained. Six classifiers were used to construct radiomics models, and the gradient boosting decision tree (GBDT) classifier was used to screen the most informative features and explore the main risk factors for differentiating early HT. The performance of each model was evaluated with the receiver operating characteristic (ROC) curve, and the model was validated in one internal and two external test cohorts. A total of 785 patients were enrolled. Extreme gradient boosting (XGBoost) showed the best performance in the training cohort, with an AUC of 0.999 (0.998, 1), and AUC values of 0.993 (0.98, 1), 0.947 (0.866, 1), and 0.98 (0.939, 1) in the internal test, first external, and second external cohorts, respectively. Ultrasound radiomic features contributed 78.6% (11/14) of the model's features. A first-order feature from the transverse thyroid ultrasound section, a gray-level run-length matrix (GLRLM) texture feature from the longitudinal section, and free thyroxine contributed most to the model. This study developed and tested a risk-prediction model that effectively differentiates HT stages, enabling more precise and proactive management of patients with HT at an earlier stage.
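A minimal sketch of the XGBoost classification step follows, with a synthetic feature table standing in for the ultrasound radiomic, clinical, and laboratory indicators; the hyperparameters are illustrative, not the study's.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Placeholder table: radiomic + clinical + laboratory features per patient;
# label 0 = early HT, 1 = late HT.
rng = np.random.default_rng(0)
X = rng.normal(size=(785, 30))
y = rng.integers(0, 2, size=785)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4,
                    learning_rate=0.1, random_state=0)
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, probs):.3f}")
```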

DRIMV_TSK: An Interpretable Surgical Evaluation Model for Incomplete Multi-View Rectal Cancer Data

Wei Zhang, Zi Wang, Hanwen Zhou, Zhaohong Deng, Weiping Ding, Yuxi Ge, Te Zhang, Yuanpeng Zhang, Kup-Sze Choi, Shitong Wang, Shudong Hu

arXiv preprint | Jun 21, 2025
A reliable evaluation of surgical difficulty can improve the success of rectal cancer treatment, yet the current evaluation method is based on clinical data alone. With advances in technology, richer data about rectal cancer can now be collected, and advances in artificial intelligence make its application to rectal cancer treatment feasible. In this paper, a multi-view rectal cancer dataset is first constructed to give a more comprehensive view of patients, comprising a high-resolution MRI image view, a pressed-fat MRI image view, and a clinical data view. Then, an interpretable incomplete multi-view surgical evaluation model is proposed, since extensive and complete patient data are hard to obtain in real application scenarios. Specifically, a dual-representation incomplete multi-view learning model is first proposed to extract the common information between views and the specific information within each view. In this model, missing-view imputation is integrated into representation learning, and a second-order similarity constraint is introduced to improve the cooperative learning between these two parts. Then, based on the imputed multi-view data and the learned dual representation, a multi-view surgical evaluation model with a TSK fuzzy system is proposed. In the proposed model, a cooperative learning mechanism is constructed to explore the consistent information between views, and Shannon entropy is introduced to adapt the view weights. On the MVRC dataset, DRIMV_TSK was compared with several advanced algorithms and obtained the best results.
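The entropy-based view weighting mentioned above can be illustrated as follows: views whose class posteriors carry lower Shannon entropy (i.e., are more confident) receive larger weights. This is a sketch of the general idea under assumed inputs, not the paper's exact formulation.

```python
import numpy as np

def entropy_view_weights(view_probs):
    """Weight each view inversely to the mean Shannon entropy of its
    predicted class distributions; weights are normalized to sum to 1."""
    eps = 1e-12
    entropies = np.array([
        -np.mean(np.sum(p * np.log(p + eps), axis=1)) for p in view_probs
    ])
    raw = 1.0 / (entropies + eps)
    return raw / raw.sum()

# Three views (high-resolution MRI, pressed-fat MRI, clinical data), each
# giving class probabilities for 5 patients over 2 difficulty classes.
rng = np.random.default_rng(0)
views = [rng.dirichlet([2.0, 2.0], size=5) for _ in range(3)]
print(entropy_view_weights(views))
```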

Multitask Deep Learning for Automated Segmentation and Prognostic Stratification of Endometrial Cancer via Biparametric MRI.

Yan R, Zhang X, Cao Q, Xu J, Chen Y, Qin S, Zhang S, Zhao W, Xing X, Yang W, Lang N

PubMed | Jun 19, 2025
Endometrial cancer (EC) is a common gynecologic malignancy; accurate assessment of key prognostic factors is important for treatment planning. This study aimed to develop a deep learning (DL) framework based on biparametric MRI for automated segmentation and multitask classification of key EC prognostic factors, including grade, stage, histological subtype, lymphovascular space invasion (LVSI), and deep myometrial invasion (DMI). The design was retrospective. A total of 325 patients with histologically confirmed EC were included: 211 training, 54 validation, and 60 test cases. T2-weighted imaging (T2WI, FSE/TSE) and diffusion-weighted imaging (DWI, SS-EPI) sequences were acquired at 1.5 and 3 T. The DL model comprised tumor segmentation and multitask classification. Manual delineation on T2WI and DWI served as the reference standard for segmentation. Separate models were trained using T2WI alone, DWI alone, and combined T2WI + DWI to classify dichotomized key prognostic factors, and performance was assessed in the validation and test cohorts. For DMI, the combined model's performance was compared with visual assessment by four radiologists (with 1, 4, 7, and 20 years of experience), each of whom independently reviewed all cases. Segmentation was evaluated using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), 95th-percentile Hausdorff distance (HD95), and average surface distance (ASD). Classification performance was assessed using the area under the receiver operating characteristic curve (AUC), and model AUCs were compared using DeLong's test; p < 0.05 was considered significant. In the test cohort, DSCs were 0.80 (T2WI) and 0.78 (DWI), with JSCs of 0.69 for both; HD95/ASD were 7.02/1.71 mm (T2WI) versus 10.58/2.13 mm (DWI). The classification framework achieved AUCs of 0.78-0.94 (validation) and 0.74-0.94 (test). For DMI, the combined model performed comparably to the radiologists (p = 0.07-0.84). The unified DL framework demonstrates strong EC segmentation and classification performance, with high accuracy across multiple tasks. Evidence Level: 3. Technical Efficacy: Stage 3.
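Of the segmentation metrics reported above, the Dice similarity coefficient is the simplest to state: DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A short self-contained example, with toy masks standing in for real tumor segmentations:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A & B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two overlapping square "tumor" masks on a 2D slice.
a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 15:35] = True
print(f"DSC = {dice_coefficient(a, b):.2f}")  # 0.56
```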

Artificial intelligence in imaging diagnosis of liver tumors: current status and future prospects.

Hori M, Suzuki Y, Sofue K, Sato J, Nishigaki D, Tomiyama M, Nakamoto A, Murakami T, Tomiyama N

PubMed | Jun 19, 2025
Liver cancer remains a significant global health concern, ranking as the sixth most common malignancy and the third leading cause of cancer-related deaths worldwide. Medical imaging plays a vital role in managing liver tumors, particularly hepatocellular carcinoma (HCC) and metastatic lesions. However, the large volume and complexity of imaging data can make accurate and efficient interpretation challenging. Artificial intelligence (AI) is recognized as a promising tool to address these challenges. Therefore, this review aims to explore the recent advances in AI applications in liver tumor imaging, focusing on key areas such as image reconstruction, image quality enhancement, lesion detection, tumor characterization, segmentation, and radiomics. Among these, AI-based image reconstruction has already been widely integrated into clinical workflows, helping to enhance image quality while reducing radiation exposure. While the adoption of AI-assisted diagnostic tools in liver imaging has lagged behind other fields, such as chest imaging, recent developments are driving their increasing integration into clinical practice. In the future, AI is expected to play a central role in various aspects of liver cancer care, including comprehensive image analysis, treatment planning, response evaluation, and prognosis prediction. This review offers a comprehensive overview of the status and prospects of AI applications in liver tumor imaging.