
Ultrasound-based machine learning model to predict the risk of endometrial cancer among postmenopausal women.

Li YX, Lu Y, Song ZM, Shen YT, Lu W, Ren M

pubmed · logopapers · Jul 1 2025
Current ultrasound-based screening for endometrial cancer (EC) primarily relies on endometrial thickness (ET) and morphological evaluation, which suffer from low specificity and high interobserver variability. This study aimed to develop and validate an artificial intelligence (AI)-driven diagnostic model to improve diagnostic accuracy and reduce variability. A total of 1,861 consecutive postmenopausal women were enrolled from two centers between April 2021 and April 2024. A super-resolution (SR) technique was applied to enhance image quality before feature extraction. Radiomics features were extracted using PyRadiomics, and deep learning features were derived from a convolutional neural network (CNN). Three models were developed: (1) an R model, using radiomics features with machine learning (ML) algorithms; (2) a CNN model, using image-based CNN algorithms; and (3) a DLR model, a hybrid combining radiomics and deep learning features with ML algorithms. Using endometrium-level regions of interest (ROIs), the DLR model achieved the best diagnostic performance, with an area under the receiver operating characteristic curve (AUROC) of 0.893 (95% CI: 0.847-0.932), sensitivity of 0.847 (95% CI: 0.692-0.944), and specificity of 0.810 (95% CI: 0.717-0.910) in the internal testing dataset. Consistent performance was observed in the external testing dataset (AUROC 0.871, sensitivity 0.792, specificity 0.829). The DLR model consistently outperformed both the R and CNN models, and endometrium-level ROIs yielded better results than uterine-corpus-level ROIs. This study demonstrates the feasibility and clinical value of AI-enhanced ultrasound analysis for EC detection. By integrating radiomics and deep learning features with SR-based image preprocessing, the model improves diagnostic specificity, reduces false positives, and mitigates operator-dependent variability, offering a more accurate and reliable non-invasive tool for EC screening in postmenopausal women.
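
Since the abstract does not give implementation details, the following is a minimal, hedged sketch of how a hybrid "DLR"-style feature set can be assembled in Python: handcrafted radiomics features from PyRadiomics concatenated with deep features from a pretrained CNN, then passed to a classical classifier. The ResNet-18 backbone and logistic-regression classifier are stand-ins, not the authors' actual choices.

# Hedged sketch: fusing handcrafted radiomics features (PyRadiomics) with deep
# CNN features before a classical ML classifier, as in a generic "DLR"-style model.
# The specific CNN backbone, classifier, and preprocessing used in the paper are
# not stated in the abstract; ResNet-18 and logistic regression are stand-ins.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from radiomics import featureextractor          # pip install pyradiomics
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from PIL import Image

def radiomics_features(image_path, mask_path):
    """Extract handcrafted radiomics features from an image/ROI-mask pair."""
    extractor = featureextractor.RadiomicsFeatureExtractor()
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values only (skip the diagnostic metadata entries).
    return np.array([v for k, v in result.items() if k.startswith("original_")],
                    dtype=float)

def deep_features(image_path, device="cpu"):
    """Extract a 512-d deep feature vector with an ImageNet-pretrained ResNet-18."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()            # drop the classification head
    backbone.eval().to(device)
    tfm = T.Compose([T.Resize((224, 224)), T.Grayscale(3), T.ToTensor()])
    x = tfm(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        return backbone(x).squeeze(0).cpu().numpy()

def build_dlr_matrix(image_paths, mask_paths):
    """Concatenate radiomics and deep features for each case."""
    rows = [np.concatenate([radiomics_features(i, m), deep_features(i)])
            for i, m in zip(image_paths, mask_paths)]
    return np.vstack(rows)

# Illustrative training step: X from build_dlr_matrix(...), y = 0/1 EC labels.
# clf = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X), y)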

Multi-parametric MRI Habitat Radiomics Based on Interpretable Machine Learning for Preoperative Assessment of Microsatellite Instability in Rectal Cancer.

Wang Y, Xie B, Wang K, Zou W, Liu A, Xue Z, Liu M, Ma Y

pubmed · logopapers · Jul 1 2025
This study constructed an interpretable machine learning model based on multi-parametric MRI sub-region habitat radiomics and clinicopathological features, aiming to preoperatively evaluate the microsatellite instability (MSI) status of rectal cancer (RC) patients. This retrospective study recruited 291 rectal cancer patients with pathologically confirmed MSI status and randomly divided them into a training cohort and a testing cohort at a ratio of 8:2. First, the K-means method was used for cluster analysis of tumor voxels, and sub-region radiomics features and classical radiomics features were extracted from the multi-parametric MRI sequences. The synthetic minority over-sampling technique (SMOTE) was then used to balance the classes, and the features were screened. Prediction models were established using logistic regression based on clinicopathological variables, classical radiomics features, and MSI-related sub-region radiomics features, and the contribution of each feature to the model decision was quantified with the Shapley Additive Explanations (SHAP) algorithm. The area under the curve (AUC) of the sub-region radiomics model was 0.848 in the training group and 0.800 in the testing group, both better than the classical radiomics and clinical models. The combined model performed best, with AUCs of 0.908 and 0.863 in the training and testing groups, respectively. We developed and validated a robust combined model that integrates clinical variables, classical radiomics features, and sub-region radiomics features to accurately determine the MSI status of RC patients. Visualizing the prediction process with SHAP enables more effective personalized treatment planning and may ultimately improve RC patient survival.
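
As an illustration of the pipeline described above, here is a hedged Python sketch combining K-means voxel clustering for habitat sub-regions, SMOTE class balancing, logistic regression, and SHAP attributions; the cluster count, voxel feature set, and synthetic data are placeholders, not the study's actual configuration.

# Hedged sketch: K-means "habitat" sub-regions from tumor voxels, SMOTE class
# balancing, a logistic-regression classifier, and SHAP attributions. The number
# of clusters and the voxel feature set in the paper are not given in the
# abstract; the choices below are placeholders.
import numpy as np
import shap
from imblearn.over_sampling import SMOTE
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def habitat_labels(voxel_features, n_clusters=3, seed=0):
    """Cluster tumor voxels (rows = voxels, cols = per-sequence intensities)
    into habitat sub-regions; returns one cluster label per voxel."""
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(
        StandardScaler().fit_transform(voxel_features))

def fit_msi_model(X, y, seed=0):
    """Balance the minority class with SMOTE, then fit logistic regression."""
    X_bal, y_bal = SMOTE(random_state=seed).fit_resample(X, y)
    return LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

# Illustrative run on synthetic data (150 cases x 20 selected features).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=150) > 0.8).astype(int)
model = fit_msi_model(X, y)

# SHAP values quantify each feature's contribution to the MSI prediction.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))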

Prediction of High-risk Capsule Characteristics for Recurrence of Pleomorphic Adenoma in the Parotid Gland Based on Habitat Imaging and Peritumoral Radiomics: A Two-center Study.

Wang Y, Dai A, Wen Y, Sun M, Gao J, Yin Z, Han R

pubmed · logopapers · Jul 1 2025
This study aims to develop and validate an ultrasound-based habitat imaging and peritumoral radiomics model for predicting high-risk capsule characteristics for recurrence of pleomorphic adenoma (PA) of the parotid gland, and to explore the optimal extent of the peritumoral region. A retrospective analysis was conducted on 325 patients (171 in the training set, 74 in the validation set, and 80 in the testing set) diagnosed with PA at two medical centers. Univariate and multivariate logistic regression analyses were performed to identify clinical risk factors. The tumor was segmented into four habitat subregions using K-means clustering, and peritumoral regions were generated at thicknesses of 1, 3, and 5 mm. Radiomics features were extracted from the intratumoral, habitat, and peritumoral regions to construct predictive models with three machine learning classifiers: SVM, RandomForest, and XGBoost. Additionally, a combined model was developed by incorporating peritumoral features and clinical factors into the habitat imaging model. Model performance was evaluated using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA), and SHAP analysis was employed to improve interpretability. The RandomForest habitat imaging model consistently outperformed the other single models, with AUC values of 0.881, 0.823, and 0.823 for the training, validation, and testing sets, respectively. Incorporating peritumoral 1 mm features and clinical factors into the combined model slightly improved performance, yielding AUC values of 0.898, 0.833, and 0.829. The calibration curves showed excellent fit for the combined model, and DCA indicated substantial clinical net benefit. The combined model exhibits robust predictive performance in identifying high-risk capsule characteristics for recurrence of PA in the parotid gland and may assist in determining the optimal surgical margin and assessing patient prognosis.
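
The peritumoral expansion step can be sketched as follows; this hedged Python example builds 1, 3, and 5 mm ring masks by binary dilation of the tumor mask, assuming an isotropic pixel spacing, which may differ from the authors' exact expansion method.

# Hedged sketch: building peritumoral "ring" masks at 1, 3, and 5 mm by binary
# dilation of the tumor mask, then subtracting the tumor. Pixel spacing and the
# exact expansion method used in the paper are not specified in the abstract.
import numpy as np
from scipy import ndimage

def peritumoral_ring(tumor_mask, thickness_mm, pixel_spacing_mm):
    """Return a boolean ring of given physical thickness around the tumor mask.

    tumor_mask       : 2-D boolean array (ultrasound ROI segmentation)
    thickness_mm     : ring thickness in millimetres (e.g. 1, 3 or 5)
    pixel_spacing_mm : in-plane pixel size in millimetres
    """
    n_iter = max(1, int(round(thickness_mm / pixel_spacing_mm)))
    dilated = ndimage.binary_dilation(tumor_mask, iterations=n_iter)
    return dilated & ~tumor_mask

# Illustrative example: a 10-pixel-radius "tumor" on a 0.2 mm/pixel grid.
mask = np.zeros((128, 128), dtype=bool)
yy, xx = np.ogrid[:128, :128]
mask[(yy - 64) ** 2 + (xx - 64) ** 2 <= 10 ** 2] = True
for t in (1, 3, 5):
    ring = peritumoral_ring(mask, t, pixel_spacing_mm=0.2)
    print(f"{t} mm ring: {ring.sum()} pixels")  # ring area grows with thickness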

A Machine Learning Model for Predicting the HER2 Positive Expression of Breast Cancer Based on Clinicopathological and Imaging Features.

Qin X, Yang W, Zhou X, Yang Y, Zhang N

pubmed · logopapers · Jul 1 2025
To develop a machine learning (ML) model based on clinicopathological and imaging features to predict Human Epidermal Growth Factor Receptor 2 (HER2)-positive expression (HER2-p) in breast cancer (BC), and to compare its performance with that of a logistic regression (LR) model. A total of 2541 consecutive female patients with pathologically confirmed primary breast lesions were enrolled. Based on chronological order, 2034 patients treated between January 2018 and December 2022 formed the retrospective development cohort, and 507 patients treated between January 2023 and May 2024 formed the prospective validation cohort. Within the development cohort, patients were randomly divided into a training cohort (n=1628) and a test cohort (n=406) in an 8:2 ratio. Pretreatment mammography (MG) and breast MRI data, along with clinicopathological features, were recorded. Extreme Gradient Boosting (XGBoost) combined with an Artificial Neural Network (ANN), and multivariate LR analysis, were used to identify features associated with HER2 positivity in BC and to develop an ANN model (using XGBoost-selected features) and an LR model, respectively. Predictive value was assessed using receiver operating characteristic (ROC) curves. Following Recursive Feature Elimination with Cross-Validation (RFE-CV) for feature dimensionality reduction, the XGBoost algorithm identified tumor size, suspicious calcifications, Ki-67 index, spiculation, and minimum apparent diffusion coefficient (minimum ADC) as the key feature subset indicative of HER2-p in BC. The ANN model consistently outperformed the LR model, achieving an area under the curve (AUC) of 0.853 (95% CI: 0.837-0.872) in the training cohort, 0.821 (95% CI: 0.798-0.853) in the test cohort, and 0.809 (95% CI: 0.776-0.841) in the validation cohort. The ANN model, built using the feature subset identified by the XGBoost algorithm with RFE-CV, shows potential for predicting HER2-p in BC.
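
A hedged sketch of the feature-selection-plus-ANN idea described above is shown below, using scikit-learn's RFECV wrapped around an XGBoost ranker and an MLP classifier on synthetic data; the paper's actual ANN architecture and hyperparameters are not stated in the abstract.

# Hedged sketch: XGBoost-driven feature ranking with recursive feature elimination
# and cross-validation (RFE-CV), followed by a small neural-network classifier.
# MLPClassifier is a stand-in for the paper's ANN; the data below are synthetic.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 15))                 # e.g. clinicopathological + imaging features
y = (X[:, 0] - 0.7 * X[:, 3] + rng.normal(scale=0.8, size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# RFE-CV ranks features by XGBoost importance and keeps the best-scoring subset.
selector = RFECV(XGBClassifier(n_estimators=200, eval_metric="logloss"),
                 step=1, cv=5, scoring="roc_auc")
selector.fit(X_tr, y_tr)
print("selected features:", np.flatnonzero(selector.support_))

# Train the ANN on the reduced feature subset and evaluate with AUC.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
ann.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, ann.predict_proba(selector.transform(X_te))[:, 1])
print(f"test AUC: {auc:.3f}")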

Accelerated Multi-b-Value DWI Using Deep Learning Reconstruction: Image Quality Improvement and Microvascular Invasion Prediction in BCLC Stage A Hepatocellular Carcinoma.

Zhu Y, Wang P, Wang B, Feng B, Cai W, Wang S, Meng X, Wang S, Zhao X, Ma X

pubmed · logopapers · Jul 1 2025
To investigate the effect of accelerated deep-learning (DL) multi-b-value DWI (Mb-DWI) on acquisition time, image quality, and prediction of microvascular invasion (MVI) in BCLC stage A hepatocellular carcinoma (HCC), compared with standard Mb-DWI. Patients who underwent liver MRI were prospectively enrolled. Subjective image quality, signal-to-noise ratio (SNR), lesion contrast-to-noise ratio (CNR), and Mb-DWI-derived parameters from various models (mono-exponential, intravoxel incoherent motion, diffusion kurtosis imaging, and stretched exponential) were calculated and compared between the two sequences. The Mb-DWI parameters of the two sequences were compared between MVI-positive and MVI-negative groups. ROC and logistic regression analyses were performed to evaluate and identify predictive performance. The study included 118 patients, of whom 48 (40.67%) had MVI-positive lesions. DL Mb-DWI significantly reduced acquisition time by 52.86% and produced significantly higher overall image quality, SNR, and CNR than standard Mb-DWI. All diffusion-related parameters except the pseudo-diffusion coefficient showed significant differences between the two sequences. In both DL and standard Mb-DWI, the apparent diffusion coefficient, true diffusion coefficient (D), perfusion fraction (f), mean diffusivity (MD), mean kurtosis (MK), and distributed diffusion coefficient (DDC) values differed significantly between MVI-positive and MVI-negative groups. The combination of D, f, and MK yielded the highest AUCs of 0.912 and 0.928 in the standard and DL sequences, respectively, with no significant difference in predictive efficiency. DL Mb-DWI significantly reduces acquisition time and improves image quality, with predictive performance comparable to standard Mb-DWI in discriminating MVI status in BCLC stage A HCC.
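
The Mb-DWI parameters named above come from standard signal models; the hedged Python sketch below fits a synthetic voxel signal to the mono-exponential, diffusion-kurtosis, and stretched-exponential forms with SciPy. The b-value scheme and fitting details are assumptions, not those of the study.

# Hedged sketch: per-voxel fits of multi-b-value DWI signals to three of the
# models named in the abstract: mono-exponential (ADC), diffusion kurtosis
# (MD, MK), and stretched exponential (DDC, alpha).
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 50, 100, 200, 400, 800, 1500, 2000], dtype=float)  # s/mm^2 (assumed)

def mono_exp(b, S0, adc):
    return S0 * np.exp(-b * adc)

def kurtosis(b, S0, md, mk):
    return S0 * np.exp(-b * md + (b ** 2) * (md ** 2) * mk / 6.0)

def stretched(b, S0, ddc, alpha):
    return S0 * np.exp(-(b * ddc) ** alpha)

# Synthetic voxel signal (ground truth: MD = 1.0e-3 mm^2/s, MK = 0.9).
signal = kurtosis(b, 1.0, 1.0e-3, 0.9) + np.random.default_rng(0).normal(0, 0.01, b.size)

(p_mono, _) = curve_fit(mono_exp, b, signal, p0=[1.0, 1.0e-3])
(p_dki, _) = curve_fit(kurtosis, b, signal, p0=[1.0, 1.0e-3, 1.0],
                       bounds=([0, 0, 0], [2, 5e-3, 3]))
(p_str, _) = curve_fit(stretched, b, signal, p0=[1.0, 1.0e-3, 0.8],
                       bounds=([0, 0, 0], [2, 5e-3, 1]))

print(f"ADC = {p_mono[1]:.2e} mm^2/s")
print(f"MD  = {p_dki[1]:.2e} mm^2/s, MK = {p_dki[2]:.2f}")
print(f"DDC = {p_str[1]:.2e} mm^2/s, alpha = {p_str[2]:.2f}")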

Radiomics Analysis of Different Machine Learning Models based on Multiparametric MRI to Identify Benign and Malignant Testicular Lesions.

Jian Y, Yang S, Liu R, Tan X, Zhao Q, Wu J, Chen Y

pubmed · logopapers · Jul 1 2025
To develop and validate a machine learning-based prediction model using multiparametric magnetic resonance imaging (MRI) to differentiate benign from malignant testicular lesions. The study retrospectively enrolled 148 patients with pathologically confirmed benign or malignant testicular lesions, divided into a training set (n=103) and a validation set (n=45). Radiomics features were derived from T2-weighted (T2WI), contrast-enhanced T1-weighted (CE-T1WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) MRI images, followed by feature selection. A machine learning-based combined model was developed by incorporating radiomics scores (rad-scores) from the optimal radiomics model along with clinical predictors. Receiver operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) was used to evaluate and compare the predictive performance of each model; the diagnostic efficacy of the machine learning models was compared using the DeLong test. Radiomics features were extracted from the four-sequence combination (CE-T1WI+DWI+ADC+T2WI), and the logistic regression (LR) model showed the best performance among the radiomics models. The clinical model identified one independent predictor. The combined clinical-radiomics model performed best, with an AUC of 0.932 (95% confidence interval [CI]: 0.868-0.978), sensitivity of 0.875, specificity of 0.871, and accuracy of 0.884 in the validation set. The combined clinical-radiomics model can serve as a reliable tool to differentiate benign from malignant testicular lesions and to support clinical treatment decisions.
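
The abstract compares model AUCs with the DeLong test; since no widely used scikit-learn implementation exists, the hedged sketch below uses a paired bootstrap as a substitute way to compare two models' validation-set AUCs, which is not the authors' exact procedure.

# Hedged sketch: comparing the validation-set AUCs of two models (e.g. radiomics
# only vs. combined clinical-radiomics) with a paired bootstrap. This is a common
# substitute for the DeLong test, not the paper's exact statistical method.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_difference(y_true, p_model_a, p_model_b, n_boot=2000, seed=0):
    """Return the observed AUC difference (B - A) and a two-sided bootstrap p-value."""
    rng = np.random.default_rng(seed)
    y_true, p_a, p_b = map(np.asarray, (y_true, p_model_a, p_model_b))
    observed = roc_auc_score(y_true, p_b) - roc_auc_score(y_true, p_a)
    diffs, n = [], y_true.size
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():
            continue                       # resample must contain both classes
        diffs.append(roc_auc_score(y_true[idx], p_b[idx]) -
                     roc_auc_score(y_true[idx], p_a[idx]))
    diffs = np.array(diffs)
    # Two-sided p-value: how often the centred bootstrap difference is at least
    # as large in magnitude as the observed difference.
    p_value = np.mean(np.abs(diffs - observed) >= np.abs(observed))
    return observed, p_value

# Illustrative call with hypothetical validation-set probabilities:
# delta, p = bootstrap_auc_difference(y_val, prob_radiomics, prob_combined)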

PROTEUS: A Physically Realistic Contrast-Enhanced Ultrasound Simulator-Part I: Numerical Methods.

Blanken N, Heiles B, Kuliesh A, Versluis M, Jain K, Maresca D, Lajoinie G

pubmed · logopapers · Jul 1 2025
Ultrasound contrast agents (UCAs) have been used as vascular reporters for the past 40 years. The ability to enhance vascular features in ultrasound images with engineered lipid-shelled microbubbles has enabled breakthroughs such as the detection of tissue perfusion or super-resolution imaging of the microvasculature. However, advances in the field of contrast-enhanced ultrasound are hindered by experimental variables that are difficult to control in a laboratory setting, such as complex vascular geometries, the lack of ground truth, and tissue nonlinearities. In addition, the demand for large datasets to train deep learning-based computational ultrasound imaging methods calls for the development of a simulation tool that can reproduce the physics of ultrasound wave interactions with tissues and microbubbles. Here, we introduce a physically realistic contrast-enhanced ultrasound simulator (PROTEUS) consisting of four interconnected modules that account for blood flow dynamics in segmented vascular geometries, intravascular microbubble trajectories, ultrasound wave propagation, and nonlinear microbubble scattering. The first part of this study describes the numerical methods that enabled this development. We demonstrate that PROTEUS can generate contrast-enhanced radio-frequency (RF) data in various vascular architectures across the range of medical ultrasound frequencies. PROTEUS offers a customizable framework to explore novel ideas in the field of contrast-enhanced ultrasound imaging. It is released as an open-source tool for the scientific community.
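
As a loose illustration of the nonlinear microbubble dynamics module, the hedged sketch below integrates the unshelled Rayleigh-Plesset equation for a single driven microbubble with SciPy; PROTEUS models lipid-shelled agents, and its actual shell model and parameters are described in the paper, not reproduced here.

# Hedged sketch: integrating the (unshelled) Rayleigh-Plesset equation for one
# microbubble driven by an ultrasound pulse. This is only an illustration of the
# "nonlinear microbubble scattering" building block, not PROTEUS's shell model.
import numpy as np
from scipy.integrate import solve_ivp

# Physical constants (water, 1 atm) and a 2-micron-radius bubble (assumed values).
rho, sigma, mu = 998.0, 0.072, 1.0e-3      # density, surface tension, viscosity (SI)
p0, kappa, R0 = 101.3e3, 1.07, 2.0e-6      # ambient pressure, polytropic exponent, rest radius
f0, p_ac = 2.0e6, 50.0e3                   # 2 MHz drive, 50 kPa amplitude

def drive(t):
    """Gaussian-windowed sinusoidal acoustic pressure pulse."""
    return p_ac * np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 2e-6) / 1e-6) ** 2)

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_liq = p_gas - 2 * sigma / R - 4 * mu * Rdot / R - p0 - drive(t)
    Rddot = (p_liq / rho - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 5e-6), [R0, 0.0],
                max_step=1e-9, rtol=1e-8, atol=1e-12)
print(f"max radial excursion: {sol.y[0].max() / R0:.2f} x R0")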

Convolutional neural network-based measurement of crown-implant ratio for implant-supported prostheses.

Zhang JP, Wang ZH, Zhang J, Qiu J

pubmed · logopapers · Jul 1 2025
Research has shown that the crown-implant ratio (CIR) is a critical variable influencing the long-term stability of implant-supported prostheses in the oral cavity. Nevertheless, inefficient manual measurement and varied measurement methods have caused significant inconvenience in both clinical and scientific work. This study aimed to develop an automated system for measuring the CIR of implant-supported prostheses on radiographs, with the objective of enhancing the efficiency of radiograph interpretation for dentists. The method was based on convolutional neural networks (CNNs) designed to recognize implant-supported prostheses and identify key points around them. The You Only Look Once version 4 (YOLOv4) network was used to locate the implant-supported prosthesis with a rectangular bounding box. Two CNNs were then used to identify key points: the first determined the approximate position of the feature points, and the second fine-tuned the output of the first network to precisely locate them. The networks were tested on a self-built dataset, and the anatomic CIR and clinical CIR were obtained simultaneously through the vertical distance method. Key point accuracy was validated through normalized error (NE) values, and a set of data was selected to compare machine and manual measurement results; for statistical analysis, the paired t test was applied (α=.05). A dataset comprising 1106 images was constructed. The integration of multiple networks demonstrated satisfactory recognition of implant-supported prostheses and their surrounding key points, and the average NE value for the key points indicated a high level of accuracy. Statistical analysis confirmed no significant difference in the crown-implant ratio between machine and manual measurements (P>.05). The proposed approach proved effective in identifying implant-supported prostheses and measuring their crown-implant ratios and, if applied as a clinical tool for analyzing radiographs, could assist dentists in efficiently and accurately obtaining CIR results.
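
The vertical-distance computation of the two ratios can be sketched as follows; the four landmark definitions (crown top, implant platform, crestal bone level, implant apex) and the division levels in this hedged Python example are assumptions for illustration, not the paper's exact key-point definitions.

# Hedged sketch: computing anatomic and clinical crown-implant ratios from four
# detected key points by vertical pixel distances. The landmark definitions below
# are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class KeyPoints:
    """Image coordinates in pixels; y increases downward (radiograph convention)."""
    crown_top_y: float       # most occlusal point of the prosthetic crown
    platform_y: float        # implant platform (implant-abutment junction)
    bone_level_y: float      # first radiographic bone-to-implant contact
    apex_y: float            # implant apex

def crown_implant_ratios(kp: KeyPoints):
    """Return (anatomic CIR, clinical CIR) from vertical pixel distances."""
    anatomic = (kp.platform_y - kp.crown_top_y) / (kp.apex_y - kp.platform_y)
    clinical = (kp.bone_level_y - kp.crown_top_y) / (kp.apex_y - kp.bone_level_y)
    return anatomic, clinical

# Illustrative values (pixels): crown 180 px above the platform, implant 240 px
# long, 30 px of crestal bone loss below the platform.
kp = KeyPoints(crown_top_y=100, platform_y=280, bone_level_y=310, apex_y=520)
print("anatomic CIR = %.2f, clinical CIR = %.2f" % crown_implant_ratios(kp))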

Photon-counting detector CT of the brain reduces variability of Hounsfield units and has a mean offset compared with energy-integrating detector CT.

Stein T, Lang F, Rau S, Reisert M, Russe MF, Schürmann T, Fink A, Kellner E, Weiss J, Bamberg F, Urbach H, Rau A

pubmed · logopapers · Jul 1 2025
Distinguishing gray matter (GM) from white matter (WM) is essential in CT of the brain. The recently established photon-counting detector CT (PCD-CT) technology employs a novel detection technique that may allow more precise measurement of tissue attenuation (Hounsfield units, HU) and improved image quality compared with energy-integrating detector CT (EID-CT). To investigate this, we compared HU, GM vs. WM contrast, and image noise using automated deep learning-based brain segmentations. We retrospectively included patients who underwent either PCD-CT or EID-CT and had no cerebral pathology. A deep learning-based segmentation of GM and WM was used to extract HU, from which the gray-to-white matter ratio and contrast-to-noise ratio were calculated. We included 329 patients with EID-CT (mean age 59.8 ± 20.2 years) and 180 with PCD-CT (mean age 64.7 ± 16.5 years). GM and WM showed significantly lower HU on PCD-CT (GM: 40.4 ± 2.2 HU; WM: 33.4 ± 1.5 HU) compared with EID-CT (GM: 45.1 ± 1.6 HU; WM: 37.4 ± 1.6 HU; p < .001). Standard deviations of HU were also lower on PCD-CT (GM and WM both p < .001), and the contrast-to-noise ratio was significantly higher on PCD-CT than on EID-CT (p < .001). Gray-to-white matter ratios did not differ significantly between the modalities (p > .99). In an age-matched subset (n = 157 patients from both cohorts), all findings were replicated. This comprehensive comparison of HU in cerebral gray and white matter revealed substantially reduced image noise and a systematic offset toward lower HU with PCD-CT, while the ratio between GM and WM remained constant. The potential need to adapt windowing presets based on this finding should be investigated in future studies. CNR = contrast-to-noise ratio; CTDIvol = volume CT dose index; EID = energy-integrating detector; GWR = gray-to-white matter ratio; HU = Hounsfield units; PCD = photon-counting detector; ROI = region of interest; VMI = virtual monoenergetic images.
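
The reported metrics can be reproduced from segmentation masks along these lines; in this hedged sketch, the pooled standard deviation is used as the noise term in the CNR, which is one common convention and not necessarily the authors' exact definition. The synthetic GM/WM values reuse the PCD-CT means and SDs quoted above.

# Hedged sketch: mean HU, gray-to-white matter ratio (GWR), and a contrast-to-noise
# ratio (CNR) from deep-learning GM/WM segmentation masks. The CNR definition here
# (pooled SD as noise) is an assumption.
import numpy as np

def gm_wm_statistics(ct_hu, gm_mask, wm_mask):
    """ct_hu: 3-D array of Hounsfield units; gm_mask/wm_mask: boolean arrays."""
    gm, wm = ct_hu[gm_mask], ct_hu[wm_mask]
    gm_mean, wm_mean = gm.mean(), wm.mean()
    noise = np.sqrt((gm.std() ** 2 + wm.std() ** 2) / 2.0)   # pooled SD as noise proxy
    return {
        "GM mean HU": gm_mean,
        "WM mean HU": wm_mean,
        "GWR": gm_mean / wm_mean,
        "CNR": (gm_mean - wm_mean) / noise,
    }

# Illustrative synthetic volume using the PCD-CT group values from the abstract.
rng = np.random.default_rng(1)
vol = np.zeros((64, 64, 64))
gm = np.zeros_like(vol, dtype=bool); gm[:, :, :32] = True
wm = ~gm
vol[gm] = rng.normal(40.4, 2.2, size=gm.sum())    # GM: 40.4 +/- 2.2 HU
vol[wm] = rng.normal(33.4, 1.5, size=wm.sum())    # WM: 33.4 +/- 1.5 HU
for k, v in gm_wm_statistics(vol, gm, wm).items():
    print(f"{k}: {v:.2f}")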

Deep learning-based lung cancer classification of CT images.

Faizi MK, Qiang Y, Wei Y, Qiao Y, Zhao J, Aftab R, Urrehman Z

pubmed · logopapers · Jul 1 2025
Lung cancer remains a leading cause of cancer-related deaths worldwide, with accurate classification of lung nodules being critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates several key innovations: a dual-branch architecture that combines CNNs for local feature extraction and Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
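
A hedged sketch of the dual-branch idea (local CNN features plus global Swin-Tiny features) is given below in PyTorch with timm; it mirrors the architecture's concept but omits the Conv-MLP module and 3-D handling, so it is not DCSwinB itself.

# Hedged sketch: a dual-branch classifier that concatenates local CNN features
# (ResNet-18) with global Swin-Tiny transformer features for benign/malignant
# nodule classification. Not the DCSwinB architecture itself.
import torch
import torch.nn as nn
import timm                                   # pip install timm
import torchvision.models as tvm

class DualBranchNoduleClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Local-feature branch: ResNet-18 backbone without its classification head
        # (pretrained weights would normally be loaded; omitted to stay offline).
        self.cnn = tvm.resnet18(weights=None)
        self.cnn.fc = nn.Identity()           # 512-d output
        # Global-feature branch: Swin-Tiny with pooled features, no head.
        self.swin = timm.create_model("swin_tiny_patch4_window7_224",
                                      pretrained=False, num_classes=0)  # 768-d output
        self.head = nn.Sequential(
            nn.Linear(512 + 768, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.3), nn.Linear(256, num_classes))

    def forward(self, x):                     # x: (B, 3, 224, 224) nodule patches
        feats = torch.cat([self.cnn(x), self.swin(x)], dim=1)
        return self.head(feats)

model = DualBranchNoduleClassifier()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)                           # torch.Size([2, 2])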