
A Case Study on Colposcopy-Based Cervical Cancer Staging Reveals an Alarming Lack of Data Sharing Hindering the Adoption of Machine Learning in Clinical Practice

Schulz, M., Leha, A.

medRxiv preprint · Aug 15, 2025
Background: The built-in ability to adapt existing models to new applications has been one of the key drivers of the success of deep learning models. Consequently, sharing trained models is crucial for their adaptation to different populations and domains. Not sharing models precludes validation and, potentially, subsequent translation into clinical practice, and hinders scientific progress. In this paper we examine the current state of data and model sharing in the medical field, using cervical cancer staging on colposcopy images as a case example. Methods: We conducted a comprehensive literature search in PubMed to identify studies employing machine learning techniques in the analysis of colposcopy images. For studies where raw data was not directly accessible, we systematically inquired about access to the pre-trained model weights and/or raw colposcopy image data by contacting the authors through various channels. Results: We included 46 studies and one publicly available dataset in our study. We retrieved the data of the latter and inquired about data access for the 46 studies by contacting a total of 92 authors. We received 15 responses related to 14 studies (30%); the remaining 32 studies remained unresponsive (70%). Of the 15 responses received, two redirected our inquiry to other authors, two were initially pending, and 11 declined data sharing. Despite our follow-up efforts on all responses received, none of the inquiries led to actual data sharing (0%). The only available data source remained the publicly available dataset. Conclusions: Despite long-standing demands for reproducible research and efforts to incentivize data sharing, such as the requirement of data availability statements, our case study reveals a persistent lack of a data sharing culture. Reasons identified in this case study include a lack of resources to provide the data, data privacy concerns, ongoing trial registrations and low response rates to inquiries. Potential routes for improvement include comprehensive data availability statements required by journals, data preparation and deposition in a repository as part of the publication process, an automatic maximal embargo time after which data become openly accessible, and data sharing rules set by funders.

Automatic segmentation of cone beam CT images using treatment planning CT images in patients with prostate cancer.

Takayama Y, Kadoya N, Yamamoto T, Miyasaka Y, Kusano Y, Kajikawa T, Tomori S, Katsuta Y, Tanaka S, Arai K, Takeda K, Jingu K

PubMed · Aug 14, 2025
Cone-beam computed tomography-based online adaptive radiotherapy (CBCT-based online ART) is currently used in clinical practice; however, deep learning-based segmentation of CBCT images remains challenging. Previous studies generated CBCT datasets for segmentation by adding contours outside clinical practice or synthesizing tissue contrast-enhanced diagnostic images paired with CBCT images. This study aimed to improve CBCT segmentation by matching the treatment planning CT (tpCT) image quality to CBCT images without altering the tpCT image or its contours. A deep-learning-based CBCT segmentation model was trained for the male pelvis using only the tpCT dataset. To bridge the quality gap between tpCT and routine CBCT images, an artificial pseudo-CBCT dataset was generated using Gaussian noise and Fourier domain adaptation (FDA) for 80 tpCT datasets (the hybrid FDA method). A five-fold cross-validation approach was used for model training. For comparison, atlas-based segmentation was performed with a registered tpCT dataset. The Dice similarity coefficient (DSC) assessed contour quality between the model-predicted and reference manual contours. The average DSC values for the clinical target volume, bladder, and rectum using the hybrid FDA method were 0.71 ± 0.08, 0.84 ± 0.08, and 0.78 ± 0.06, respectively. Conversely, the values for the model using plain tpCT were 0.40 ± 0.12, 0.17 ± 0.21, and 0.18 ± 0.14, and for the atlas-based model were 0.66 ± 0.13, 0.59 ± 0.16, and 0.66 ± 0.11, respectively. The segmentation model using the hybrid FDA method demonstrated significantly higher accuracy than models trained on plain tpCT datasets and those using atlas-based segmentation.
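The core of the hybrid FDA method is a standard Fourier domain adaptation step: the low-frequency amplitude spectrum of a tpCT slice is swapped with that of a CBCT-style reference while the phase is kept, before Gaussian noise is added. Below is a minimal NumPy sketch of that idea; the band size "beta", the noise level, and the 2D-slice input are illustrative assumptions, not parameters reported in the paper.

```python
# Minimal sketch of Fourier domain adaptation (FDA) plus Gaussian noise for creating
# pseudo-CBCT slices from planning-CT (tpCT) slices. Band size and noise level are
# illustrative assumptions.
import numpy as np

def fda_amplitude_swap(source: np.ndarray, target: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Replace the low-frequency amplitude of `source` with that of `target`, keep source phase."""
    fft_src = np.fft.fftshift(np.fft.fft2(source.astype(np.float32)))
    fft_trg = np.fft.fftshift(np.fft.fft2(target.astype(np.float32)))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    h, w = source.shape
    b = int(min(h, w) * beta)                 # half-width of the swapped low-frequency band
    cy, cx = h // 2, w // 2
    amp_src[cy - b:cy + b, cx - b:cx + b] = amp_trg[cy - b:cy + b, cx - b:cx + b]

    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

def make_pseudo_cbct(tpct_slice: np.ndarray, cbct_style_slice: np.ndarray,
                     noise_sigma: float = 20.0) -> np.ndarray:
    """Degrade a tpCT slice toward CBCT appearance: FDA style transfer + additive Gaussian noise."""
    styled = fda_amplitude_swap(tpct_slice, cbct_style_slice)
    return styled + np.random.normal(0.0, noise_sigma, styled.shape)
```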

Radiomics-based machine-learning method to predict extrahepatic metastasis in hepatocellular carcinoma after hepatectomy: a multicenter study.

He Y, Dong B, Hu B, Hao X, Xia N, Yang C, Dong Q, Zhu C

PubMed · Aug 14, 2025
This study investigates the use of CT-based radiomics for predicting extrahepatic metastasis in hepatocellular carcinoma (HCC) following hepatectomy. We analyzed data from 374 patients from two centers (277 in the training cohort and 97 in an external validation cohort). Radiomic features were extracted from contrast-enhanced CT scans. Key features were identified using the least absolute shrinkage and selection operator (LASSO) to compute radiomics scores (radscore) for model development. A clinical model based on risk factors was also created. We developed a combined model integrating both radscore and clinical variables, constructing nomograms for personalized risk assessment. Model performance was compared via the DeLong test, with calibration curves assessing prediction consistency. Decision curve analysis (DCA) was employed to assess the clinical utility and net benefit of the predictive models across different threshold probabilities, thereby evaluating their potential value in guiding clinical decision-making for extrahepatic metastasis. Radscore based on CT was an independent predictor of extrahepatic disease (p < 0.05). The combined model showed high predictive performance with an AUC of 87.2% (95% CI: 81.8%-92.6%) in the training group and 86.0% (95% CI: 69.4%-100%) in the validation group. The combined model significantly outperformed both the radiomics and clinical models (p < 0.05). DCA showed that the combined model had a higher net benefit in predicting extrahepatic metastases of HCC than either the clinical or radiomics model. The combined prediction model, utilizing CT radscore alongside clinical risk factors, effectively forecasts extrahepatic metastasis in HCC patients.
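As a rough illustration of the LASSO-to-radscore step described above, the sketch below fits a cross-validated LASSO on standardized radiomic features and uses its linear predictor as the radscore. The scikit-learn pipeline, cross-validation settings, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: LASSO-based feature selection and radiomics-score (radscore) computation.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def fit_radscore(features: np.ndarray, labels: np.ndarray):
    """Fit a cross-validated LASSO on standardized radiomic features; return the scaler,
    the fitted model, and the indices of features with non-zero coefficients."""
    scaler = StandardScaler().fit(features)
    lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(features), labels)
    selected = np.flatnonzero(lasso.coef_)
    return scaler, lasso, selected

def radscore(scaler, lasso, features: np.ndarray) -> np.ndarray:
    """Radscore = intercept + weighted sum of the retained (non-zero) standardized features."""
    return lasso.predict(scaler.transform(features))
```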

A novel unified Inception-U-Net hybrid gravitational optimization model (UIGO) incorporating automated medical image segmentation and feature selection for liver tumor detection.

Banerjee T, Singh DP, Kour P, Swain D, Mahajan S, Kadry S, Kim J

PubMed · Aug 14, 2025
Segmenting liver tumors in medical imaging is pivotal for precise diagnosis, treatment, and evaluation of therapy outcomes. Even with modern imaging technologies, fully automated segmentation systems have not overcome the challenge posed by the diversity in the shape, size, and texture of liver tumors, which often hinders clinicians from making timely and accurate decisions. This study addresses these issues through the development of UIGO, a new deep learning model that merges U-Net and Inception networks and incorporates advanced feature selection and optimization strategies. UIGO aims to deliver highly precise segmentation results while keeping computational requirements low enough for efficient real-world clinical use. Publicly available liver tumor segmentation datasets were used for testing the model: LiTS (Liver Tumor Segmentation Challenge), CHAOS (Combined Healthy Abdominal Organ Segmentation), and 3D-IRCADb1 (3D-IRCAD liver dataset). Covering a range of tumor shapes and sizes across imaging modalities such as CT and MRI, these datasets ensured comprehensive testing of UIGO's performance in diverse clinical scenarios. The experimental outcomes show the effectiveness of UIGO, with a segmentation accuracy of 99.93%, an AUC of 99.89%, a Dice coefficient of 0.997, and an IoU of 0.998. UIGO outperformed other contemporary liver tumor segmentation techniques, indicating its potential to help clinicians deliver precise and prompt evaluations at lower computational expense. This study underscores the effort toward streamlined, dependable, and clinically useful tools for liver tumor segmentation in medical imaging.
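To make the U-Net/Inception hybrid idea concrete, here is a minimal PyTorch sketch of an Inception-style block of the kind such an encoder might use: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, concatenated channel-wise. Branch widths and kernel sizes are illustrative assumptions; the paper's exact architecture and its gravitational optimization component are not reproduced here.

```python
# Minimal sketch of an Inception-style convolution block usable inside a U-Net encoder.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3, 5x5 convolutions plus a pooled branch, concatenated channel-wise."""
    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, kernel_size=1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1))

# Example: one block at a U-Net encoder level, fed a single-channel CT slice.
block = InceptionBlock(in_ch=1)
out = block(torch.randn(2, 1, 128, 128))   # -> shape (2, 64, 128, 128)
```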

Multimodal artificial intelligence for subepithelial lesion classification and characterization: a multicenter comparative study (with video).

Li J, Jing X, Zhang Q, Wang X, Wang L, Shan J, Zhou Z, Fan L, Gong X, Sun X, He S

PubMed · Aug 14, 2025
Subepithelial lesions (SELs) present significant diagnostic challenges in gastrointestinal endoscopy, particularly in differentiating malignant types, such as gastrointestinal stromal tumors (GISTs) and neuroendocrine tumors, from benign types like leiomyomas. Misdiagnosis can lead to unnecessary interventions or delayed treatment. To address this challenge, we developed ECMAI-WME, a parallel fusion deep learning model integrating white light endoscopy (WLE) and microprobe endoscopic ultrasonography (EUS), to improve SEL classification and lesion characterization. A total of 523 SELs from four hospitals were used to develop serial and parallel fusion AI models. The Parallel Model, demonstrating superior performance, was designated as ECMAI-WME. The model was tested on an external validation cohort (n = 88) and a multicenter test cohort (n = 274). Diagnostic performance, lesion characterization, and clinical decision-making support were comprehensively evaluated and compared with endoscopists' performance. The ECMAI-WME model significantly outperformed endoscopists in diagnostic accuracy (96.35% vs. 63.87-86.13%, p < 0.001) and treatment decision-making accuracy (96.35% vs. 78.47-86.13%, p < 0.001). It achieved 98.72% accuracy in internal validation, 94.32% in external validation, and 96.35% in multicenter testing. For distinguishing gastric GISTs from leiomyomas, the model reached 91.49% sensitivity, 100% specificity, and 96.38% accuracy. Lesion characteristics were identified with a mean accuracy of 94.81% (range: 90.51-99.27%). The model maintained robust performance despite class imbalance, confirmed by five complementary analyses. Subgroup analyses showed consistent accuracy across lesion size, location, or type (p > 0.05), demonstrating strong generalizability. The ECMAI-WME model demonstrates excellent diagnostic performance and robustness in the multiclass SEL classification and characterization, supporting its potential for real-time deployment to enhance diagnostic consistency and guide clinical decision-making.
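A parallel fusion model of the kind described (as opposed to serial fusion) typically encodes each modality with its own backbone and concatenates the feature vectors before a shared classification head. The PyTorch sketch below illustrates that pattern with two ResNet-18 branches for WLE and EUS images; the backbone choice, feature dimensions, and number of SEL classes are assumptions, not details from the paper.

```python
# Minimal sketch of a parallel two-branch fusion classifier (WLE branch + EUS branch).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ParallelFusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.wle_branch = resnet18(weights=None)
        self.eus_branch = resnet18(weights=None)
        feat_dim = self.wle_branch.fc.in_features      # 512 for ResNet-18
        self.wle_branch.fc = nn.Identity()              # keep pooled features only
        self.eus_branch.fc = nn.Identity()
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, wle: torch.Tensor, eus: torch.Tensor) -> torch.Tensor:
        # Encode each modality separately, then fuse by concatenation before the classifier.
        fused = torch.cat([self.wle_branch(wle), self.eus_branch(eus)], dim=1)
        return self.head(fused)

model = ParallelFusionClassifier()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
```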

Data-Driven Abdominal Phenotypes of Type 2 Diabetes in Lean, Overweight, and Obese Cohorts

Lucas W. Remedios, Chloe Choe, Trent M. Schwartz, Dingjie Su, Gaurav Rudravaram, Chenyu Gao, Aravind R. Krishnan, Adam M. Saunders, Michael E. Kim, Shunxing Bao, Alvin C. Powers, Bennett A. Landman, John Virostko

arXiv preprint · Aug 14, 2025
Purpose: Although elevated BMI is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in others with obesity suggests that detailed body composition may uncover abdominal phenotypes of type 2 diabetes. With AI, we can now extract detailed measurements of size, shape, and fat content from abdominal structures in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data. Approach: To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort (n = 1,728) and once on lean (n = 497), overweight (n = 611), and obese (n = 620) subgroups separately. Briefly, our experimental design transforms abdominal scans into collections of explainable measurements through segmentation, classifies type 2 diabetes through a cross-validated random forest, measures how features contribute to model-estimated risk or protection through SHAP analysis, groups scans by shared model decision patterns (clustering from SHAP), and links them back to anatomical differences (classification). Results: The random forests achieved mean AUCs of 0.72-0.74. There were shared type 2 diabetes signatures in each group: fatty skeletal muscle, older age, greater visceral and subcutaneous fat, and a smaller or fat-laden pancreas. Univariate logistic regression confirmed the direction of 14-18 of the top 20 predictors within each subgroup (p < 0.05). Conclusions: Our findings suggest that abdominal drivers of type 2 diabetes may be consistent across weight classes.
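The analysis pattern in the Approach section (cross-validated random forest, SHAP attribution, then clustering scans by shared SHAP patterns) can be sketched in a few lines of scikit-learn and shap code. The hyperparameters, cluster count, and handling of the SHAP output shape below are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch: cross-validated random forest, SHAP attribution, clustering in SHAP space.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def phenotype_pipeline(X: np.ndarray, y: np.ndarray, n_clusters: int = 3):
    # 1) Cross-validated discrimination (AUC), as reported per BMI subgroup.
    aucs = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                           X, y, cv=5, scoring="roc_auc")

    # 2) Fit on the full data and attribute per-scan predictions with SHAP.
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    sv = shap.TreeExplainer(rf).shap_values(X)
    # For binary classifiers shap may return a per-class list or a single 3-D array.
    shap_t2d = sv[1] if isinstance(sv, list) else sv[..., 1]

    # 3) Group scans by shared model decision patterns (clustering in SHAP space).
    clusters = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(shap_t2d)
    return aucs.mean(), shap_t2d, clusters
```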

Deep Learning-Based Automated Segmentation of Uterine Myomas

Tausifa Jan Saleem, Mohammad Yaqub

arXiv preprint · Aug 14, 2025
Uterine fibroids (myomas) are the most common benign tumors of the female reproductive system, particularly among women of childbearing age. With a prevalence exceeding 70%, they pose a significant burden on female reproductive health. Clinical symptoms such as abnormal uterine bleeding, infertility, pelvic pain, and pressure-related discomfort play a crucial role in guiding treatment decisions, which are largely influenced by the size, number, and anatomical location of the fibroids. Magnetic Resonance Imaging (MRI) is a non-invasive and highly accurate imaging modality commonly used by clinicians for the diagnosis of uterine fibroids. Segmenting uterine fibroids requires a precise assessment of both the uterus and fibroids on MRI scans, including measurements of volume, shape, and spatial location. However, this process is labor-intensive, time-consuming, and subject to variability due to intra- and inter-expert differences at both pre- and post-treatment stages. As a result, there is a critical need for an accurate and automated segmentation method for uterine fibroids. In recent years, deep learning algorithms have shown remarkable improvements in medical image segmentation, outperforming traditional methods. These approaches offer the potential for fully automated segmentation. Several studies have explored the use of deep learning models to achieve automated segmentation of uterine fibroids. However, most of the previous work has been conducted using private datasets, which poses challenges for validation and comparison between studies. In this study, we leverage the publicly available Uterine Myoma MRI Dataset (UMD) to establish a baseline for automated segmentation of uterine fibroids, enabling standardized evaluation and facilitating future research in this domain.

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy

Soorena Salari, Catherine Spino, Laurie-Anne Pharand, Fabienne Lathuiliere, Hassan Rivaz, Silvain Beriault, Yiming Xiao

arXiv preprint · Aug 14, 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
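The parameter-efficiency claim rests on LoRA: pretrained weights stay frozen and only a low-rank update is trained. A minimal PyTorch sketch of a LoRA-wrapped linear layer is shown below; the rank, scaling, and initialization are illustrative assumptions, and loading the actual DINOv2 backbone is not shown.

```python
# Minimal sketch of a LoRA-adapted linear layer: the pretrained weight is frozen and
# only the low-rank update (B @ A) is trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.normal_(self.A, std=0.01)             # B stays zero, so the update starts at 0
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: adapting one projection of a transformer block (ViT-style token features).
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 197, 768))
```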

Preoperative ternary classification using DCE-MRI radiomics and machine learning for HCC, ICC, and HIPT.

Xie P, Liao ZJ, Xie L, Zhong J, Zhang X, Yuan W, Yin Y, Chen T, Lv H, Wen X, Wang X, Zhang L

PubMed · Aug 14, 2025
This study develops a machine learning model using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomics and clinical data to preoperatively differentiate hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), and hepatic inflammatory pseudotumor (HIPT), addressing limitations of conventional diagnostics. This retrospective study included 280 patients (HCC = 160, ICC = 80, HIPT = 40) who underwent DCE-MRI from 2008 to 2024 at three hospitals. Radiomics features and clinical data were extracted and analyzed using LASSO regression and machine learning algorithms (Logistic Regression, Random Forest, and Extreme Gradient Boosting), with class weighting (HCC:ICC:HIPT = 1:2:4) to address class imbalance. Models were compared using macro-average Area Under the Curve (AUC), accuracy, recall, and precision. The fusion model, integrating radiomics and clinical features, achieved an AUC of 0.933 (95% CI: 0.91-0.95) and 84.5% accuracy, outperforming radiomics-only (AUC = 0.856, 72.6%) and clinical-only (AUC = 0.795, 66.7%) models (p < 0.05). Rim enhancement is a key model feature for distinguishing HCC from ICC and HIPT, while hepatic lobe atrophy distinguishes ICC and HIPT from HCC. This study developed a novel preoperative imaging-based model to differentiate HCC, ICC, and HIPT. The fusion model performed exceptionally well, demonstrating superior accuracy in ICC identification, significantly outperforming traditional diagnostic methods (e.g., radiology and biomarkers) and single-modality machine learning models (p < 0.05). This noninvasive approach enhances diagnostic precision and supports personalized treatment planning in liver disease management. This study develops a novel preoperative imaging-based machine learning model to differentiate hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), and hepatic inflammatory pseudotumor (HIPT), improving diagnostic accuracy and advancing personalized treatment strategies in clinical radiology. A machine learning model integrates DCE-MRI radiomics and clinical data for liver lesion differentiation. The fusion model outperforms single-modality models with 0.933 AUC and 84.5% accuracy. This model provides a noninvasive, reliable tool for personalized liver disease diagnosis and treatment planning.
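One concrete detail here is the class weighting (HCC:ICC:HIPT = 1:2:4) used to counter class imbalance, evaluated with a macro-average AUC. The scikit-learn sketch below illustrates that combination with a random forest; the specific estimator, split, and feature matrix are assumptions for illustration only.

```python
# Minimal sketch: class-weighted ternary classification evaluated with macro-average AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

CLASS_WEIGHTS = {0: 1.0, 1: 2.0, 2: 4.0}      # 0 = HCC, 1 = ICC, 2 = HIPT

def train_and_evaluate(X: np.ndarray, y: np.ndarray) -> float:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=500, class_weight=CLASS_WEIGHTS,
                                 random_state=0).fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)
    # Macro-average one-vs-rest AUC, as reported for the fusion model.
    return roc_auc_score(y_te, proba, multi_class="ovr", average="macro")
```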

Deep learning-based non-invasive prediction of PD-L1 status and immunotherapy survival stratification in esophageal cancer using [18F]FDG PET/CT.

Xie F, Zhang M, Zheng C, Zhao Z, Wang J, Li Y, Wang K, Wang W, Lin J, Wu T, Wang Y, Chen X, Li Y, Zhu Z, Wu H, Li Y, Liu Q

PubMed · Aug 14, 2025
This study aimed to develop and validate deep learning models using [18F]FDG PET/CT to predict PD-L1 status in esophageal cancer (EC) patients. Additionally, we assessed the potential of derived deep learning model scores (DLS) for survival stratification in immunotherapy. In this retrospective study, we included 331 EC patients from two centers, dividing them into training, internal validation, and external validation cohorts. Fifty patients who received immunotherapy were followed up. We developed four 3D ResNet10-based models using pre-treatment [18F]FDG PET/CT scans: PET + CT + clinical factors (CPC), PET + CT (PC), PET (P), and CT (C). For comparison, we also constructed a logistic model incorporating clinical factors (clinical model). The DLS were evaluated as radiological markers for survival stratification, and nomograms for predicting survival were constructed. The models demonstrated accurate prediction of PD-L1 status. The areas under the curve (AUCs) for predicting PD-L1 status were as follows: CPC (0.927), PC (0.904), P (0.886), C (0.934), and the clinical model (0.603) in the training cohort; CPC (0.882), PC (0.848), P (0.770), C (0.745), and the clinical model (0.524) in the internal validation cohort; and CPC (0.843), PC (0.806), P (0.759), C (0.667), and the clinical model (0.671) in the external validation cohort. The CPC and PC models exhibited superior predictive performance. Survival analysis revealed that the DLS from most models effectively stratified overall survival and progression-free survival at appropriate cut-off points (P < 0.05), outperforming stratification based on PD-L1 status (combined positive score ≥ 10). Furthermore, incorporating model scores with clinical factors in nomograms enhanced the predictive probability of survival after immunotherapy. Deep learning models based on [18F]FDG PET/CT can accurately predict PD-L1 status in esophageal cancer patients. The derived DLS can effectively stratify survival outcomes following immunotherapy, particularly when combined with clinical factors.
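Survival stratification by a deep learning score usually amounts to dichotomizing the DLS at a cut-off and comparing Kaplan-Meier curves with a log-rank test. The sketch below shows that step using the lifelines package; the median split and variable names are illustrative assumptions, not the cut-off selection used in the study.

```python
# Minimal sketch: stratify survival by a deep-learning model score (DLS) cut-off and
# compare the resulting groups with Kaplan-Meier curves and a log-rank test.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def stratify_by_dls(dls: np.ndarray, time: np.ndarray, event: np.ndarray):
    high = dls >= np.median(dls)                     # illustrative median cut-off

    km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
    km_high.fit(time[high], event[high], label="high DLS")
    km_low.fit(time[~high], event[~high], label="low DLS")

    test = logrank_test(time[high], time[~high], event[high], event[~high])
    return km_high, km_low, test.p_value
```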