Wang C, Zhang S, Xu J, Wang H, Wang Q, Zhu Y, Xing X, Hao D, Lang N

PubMed · Jul 1 2025
To generate virtual T1 contrast-enhanced (T1CE) sequences from plain spinal MRI sequences using the denoising diffusion probabilistic model (DDPM) and to compare its performance against a baseline model (pix2pix) and three advanced models. A total of 1195 consecutive spinal tumor patients who underwent contrast-enhanced MRI at two hospitals were divided into a training set (n = 809, 49 ± 17 years, 437 men), an internal test set (n = 203, 50 ± 16 years, 105 men), and an external test set (n = 183, 52 ± 16 years, 94 men). Input sequences were T1-weighted, T2-weighted, and T2 fat-saturated images; the output was T1CE images. In the test set, one radiologist read the virtual images and marked all visible enhancing lesions. Results were evaluated using sensitivity (SE) and false discovery rate (FDR). We compared differences in lesion size and enhancement degree between reference and virtual images and calculated the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) for image quality assessment. In the external test set, the mean squared error was 0.0038 ± 0.0065 and the structural similarity index was 0.78 ± 0.10. Upon evaluation by the reader, the overall SE of the generated T1CE images was 94% with an FDR of 2%. There was no difference in lesion size or signal intensity ratio between the reference and generated images. The CNR was higher in the generated images than in the reference images (9.241 vs. 4.021; P < 0.001). The proposed DDPM demonstrates potential as an alternative to gadolinium contrast in spinal MRI examinations of oncologic patients.
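
As a hedged illustration of the image-quality metrics reported above (not the authors' code), the sketch below computes SNR and CNR from manually chosen regions of interest; the ROI choices and the particular CNR definition are assumptions, since several variants are in use.

```python
# Illustrative sketch: SNR and CNR from hand-picked ROIs (synthetic data).
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean signal over the std of a background-noise ROI."""
    return float(signal_roi.mean() / noise_roi.std())

def cnr(lesion_roi: np.ndarray, reference_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio: absolute mean difference of two tissue ROIs over noise std."""
    return float(abs(lesion_roi.mean() - reference_roi.mean()) / noise_roi.std())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lesion = rng.normal(300, 20, size=(32, 32))   # enhancing lesion ROI (synthetic)
    muscle = rng.normal(150, 20, size=(32, 32))   # reference tissue ROI (synthetic)
    air = rng.normal(0, 10, size=(32, 32))        # background-noise ROI (synthetic)
    print(f"SNR = {snr(lesion, air):.1f}, CNR = {cnr(lesion, muscle, air):.1f}")
```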

Elgarba BM, Ali S, Fontenele RC, Meeus J, Jacobs R

PubMed · Jul 1 2025
Accurately registering intraoral and cone beam computed tomography (CBCT) scans in patients with metal artifacts poses a significant challenge. Whether a cloud-based platform trained for artificial intelligence (AI)-driven segmentation can improve registration is unclear. The purpose of this clinical study was to validate a cloud-based platform trained for the AI-driven segmentation of prosthetic crowns on CBCT scans and subsequent multimodal intraoral scan-to-CBCT registration in the presence of high metal artifact expression. A dataset consisting of 30 time-matched maxillary and mandibular CBCT and intraoral scans, each containing at least 4 prosthetic crowns, was collected. CBCT acquisition involved placing cotton rolls between the cheeks and teeth to facilitate soft tissue delineation. Segmentation and registration were compared using either a semi-automated (SA) or an AI-automated (AA) method. The SA method served as the clinical reference: prosthetic crowns and their radicular parts (natural roots or implants) were segmented by thresholding, followed by point surface-based registration. The AA method comprised fully automated segmentation and registration based on AI algorithms. Quantitative assessment compared AA's median surface deviation (MSD) and root mean square (RMS) in crown segmentation and subsequent intraoral scan-to-CBCT registration with those of SA. Additionally, segmented crown STL files were analyzed voxel-wise to compare AA and SA. A qualitative assessment of AA-based crown segmentation evaluated the need for refinement, while the AA-based registration assessment scrutinized the alignment of the registered intraoral scan with the CBCT teeth and soft tissue contours. Ultimately, the study compared the time efficiency and consistency of both methods. Quantitative outcomes were analyzed with the Kruskal-Wallis, Mann-Whitney, and Student t tests, and qualitative outcomes with the Wilcoxon test (all α=.05). Consistency was evaluated using the intraclass correlation coefficient (ICC). Quantitatively, the AA method excelled, with a 0.91 Dice similarity coefficient for crown segmentation and an MSD of 0.03 ± 0.05 mm for intraoral scan-to-CBCT registration. Additionally, AA achieved 91% clinically acceptable matches of teeth and gingiva on CBCT scans, surpassing the SA method's 80%. Furthermore, AA was significantly faster than SA (P<.05), being 200 times faster in segmentation and 4.5 times faster in registration. Both AA and SA exhibited excellent consistency in segmentation and registration, with ICC values of 0.99 and 1 for AA and 0.99 and 0.96 for SA, respectively. The novel cloud-based platform demonstrated accurate, consistent, and time-efficient prosthetic crown segmentation, as well as intraoral scan-to-CBCT registration in scenarios with high artifact expression.
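
A minimal sketch (assumed, not the study's pipeline) of the two quantitative metrics reported above: the Dice similarity coefficient between binary crown segmentations, and a median surface deviation (MSD) / RMS computed from nearest-neighbour distances between registered surface point clouds. The synthetic masks and point clouds are placeholders.

```python
# Sketch of Dice and surface-deviation metrics on synthetic data.
import numpy as np
from scipy.spatial import cKDTree

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))

def surface_deviation(points_a: np.ndarray, points_b: np.ndarray):
    """Nearest-neighbour distances from surface A to surface B (N x 3 arrays) -> (MSD, RMS)."""
    d, _ = cKDTree(points_b).query(points_a)
    return float(np.median(d)), float(np.sqrt(np.mean(d ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mask_ai = rng.random((64, 64, 64)) > 0.5
    mask_ref = mask_ai.copy(); mask_ref[:2] ^= True        # slightly perturbed reference
    pts = rng.random((500, 3)) * 10.0                       # mm-scale surface samples
    msd, rms = surface_deviation(pts, pts + rng.normal(0, 0.03, pts.shape))
    print(f"Dice={dice(mask_ai, mask_ref):.3f}  MSD={msd:.3f} mm  RMS={rms:.3f} mm")
```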

Li YX, Lu Y, Song ZM, Shen YT, Lu W, Ren M

PubMed · Jul 1 2025
Current ultrasound-based screening for endometrial cancer (EC) primarily relies on endometrial thickness (ET) and morphological evaluation, which suffer from low specificity and high interobserver variability. This study aimed to develop and validate an artificial intelligence (AI)-driven diagnostic model to improve diagnostic accuracy and reduce variability. A total of 1,861 consecutive postmenopausal women were enrolled from two centers between April 2021 and April 2024. A super-resolution (SR) technique was applied to enhance image quality before feature extraction. Radiomics features were extracted using Pyradiomics, and deep learning features were derived from a convolutional neural network (CNN). Three models were developed: (1) R model: radiomics-based machine learning (ML) algorithms; (2) CNN model: image-based CNN algorithms; (3) DLR model: a hybrid model combining radiomics and deep learning features with ML algorithms. Using endometrium-level regions of interest (ROI), the DLR model achieved the best diagnostic performance, with an area under the receiver operating characteristic curve (AUROC) of 0.893 (95% CI: 0.847-0.932), sensitivity of 0.847 (95% CI: 0.692-0.944), and specificity of 0.810 (95% CI: 0.717-0.910) in the internal testing dataset. Consistent performance was observed in the external testing dataset (AUROC 0.871, sensitivity 0.792, specificity 0.829). The DLR model consistently outperformed both the R and CNN models. Moreover, endometrium-level ROIs yielded better results than uterine-corpus-level ROIs. This study demonstrates the feasibility and clinical value of AI-enhanced ultrasound analysis for EC detection. By integrating radiomics and deep learning features with SR-based image preprocessing, our model improves diagnostic specificity, reduces false positives, and mitigates operator-dependent variability. This non-invasive approach offers a more accurate and reliable tool for EC screening in postmenopausal women. Trial registration: not applicable.
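
A hedged sketch of the feature-fusion idea behind a "DLR"-style model: per-lesion radiomics features (e.g., from Pyradiomics) are concatenated with a CNN embedding and passed to a conventional classifier. The feature dimensions, random stand-in data, and the choice of logistic regression are illustrative assumptions, not the authors' configuration.

```python
# Radiomics + deep-feature fusion followed by a classical ML classifier (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 200
radiomics_feats = rng.normal(size=(n, 100))   # stand-in for Pyradiomics features per ROI
cnn_feats = rng.normal(size=(n, 512))         # stand-in for a CNN embedding per image
y = rng.integers(0, 2, size=n)                # 1 = malignant, 0 = benign (synthetic labels)

X = np.hstack([radiomics_feats, cnn_feats])   # "DLR": handcrafted + learned features
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUROC on synthetic data: {auc:.2f}")
```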

Wang Y, Xie B, Wang K, Zou W, Liu A, Xue Z, Liu M, Ma Y

PubMed · Jul 1 2025
This study constructed an interpretable machine learning model based on multi-parameter MRI sub-region habitat radiomics and clinicopathological features, aiming to preoperatively evaluate the microsatellite instability (MSI) status of rectal cancer (RC) patients. This retrospective study recruited 291 rectal cancer patients with pathologically confirmed MSI status and randomly divided them into a training cohort and a testing cohort at a ratio of 8:2. First, the K-means method was used for cluster analysis of tumor voxels, and sub-region radiomics features and classical radiomics features were extracted separately from multi-parameter MRI sequences. Then, the synthetic minority over-sampling technique (SMOTE) was used to balance the sample size, and finally, the features were screened. Prediction models were established using logistic regression based on clinicopathological variables, classical radiomics features, and MSI-related sub-region radiomics features, and the contribution of each feature to the model decision was quantified by the Shapley Additive Explanations (SHAP) algorithm. The area under the curve (AUC) of the sub-region radiomics model in the training and testing cohorts was 0.848 and 0.8, respectively, both better than that of the classical radiomics and clinical models. The combined model performed the best, with AUCs of 0.908 and 0.863 in the training and testing cohorts, respectively. We developed and validated a robust combined model that integrates clinical variables, classical radiomics features, and sub-region radiomics features to accurately determine the MSI status of RC patients. We visualized the prediction process using SHAP, enabling more effective personalized treatment plans and ultimately improving RC patient survival rates.
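
An illustrative sketch, with assumptions throughout, of the pipeline steps named above: K-means clustering of tumour voxels into habitat sub-regions, SMOTE class balancing, a logistic-regression classifier, and SHAP attribution. Per-sub-region radiomics extraction is abstracted away; the intensity channels, feature matrices, and class prevalence are synthetic stand-ins (requires imbalanced-learn and shap).

```python
# Habitat clustering + SMOTE + logistic regression + SHAP on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
import shap

rng = np.random.default_rng(0)

# 1) Habitat definition: cluster tumour voxels (here: two synthetic intensity channels).
voxels = rng.normal(size=(5000, 2))                     # e.g., T2 and ADC intensities per voxel
habitat_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
print("Voxels per habitat sub-region:", np.bincount(habitat_labels))

# 2) Per-patient sub-region radiomics features would be extracted per habitat label;
#    random stand-ins with an imbalanced MSI label are used here.
X = rng.normal(size=(291, 40))
y = (rng.random(291) < 0.15).astype(int)                # MSI-high as the minority class

# 3) Balance the training sample with SMOTE, then fit logistic regression.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)

# 4) SHAP attribution for interpretability.
explainer = shap.Explainer(clf, X_res)
shap_values = explainer(X_res[:50])
print("SHAP value matrix shape:", shap_values.values.shape)
```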

Wang Y, Dai A, Wen Y, Sun M, Gao J, Yin Z, Han R

PubMed · Jul 1 2025
This study aims to develop and validate an ultrasound-based habitat imaging and peritumoral radiomics model for predicting high-risk capsule characteristics for recurrence of pleomorphic adenoma (PA) of the parotid gland, while also exploring the optimal extent of the peritumoral region. Retrospective analysis was conducted on 325 patients (171 in the training set, 74 in the validation set, and 80 in the testing set) diagnosed with PA at two medical centers. Univariate and multivariate logistic regression analyses were performed to identify clinical risk factors. The tumor was segmented into four habitat subregions using K-means clustering, with peritumoral regions expanded at thicknesses of 1, 3, and 5 mm. Radiomics features were extracted from the intratumoral, habitat, and peritumoral regions, respectively, to construct predictive models integrating three machine learning classifiers: SVM, RandomForest, and XGBoost. Additionally, a combined model was developed by incorporating peritumoral features and clinical factors based on habitat imaging. Model performance was evaluated using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). SHAP analysis was employed to improve interpretability. The RandomForest model based on habitat imaging consistently outperformed the other models in predictive performance, with AUC values of 0.881, 0.823, and 0.823 for the training, validation, and testing sets, respectively. Incorporating peri-1 mm features and clinical factors into the combined model slightly improved its performance, yielding AUC values of 0.898, 0.833, and 0.829 for each set. The calibration curves and DCA showed an excellent fit for the combined model while providing considerable clinical net benefit. The combined model exhibits robust predictive performance in identifying high-risk capsule characteristics for recurrence of PA in the parotid gland. This model may assist in determining optimal surgical margins and assessing patient prognosis.
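
A hedged sketch of how a peritumoral region of a given thickness can be derived from a tumour mask by a Euclidean distance transform; the 1/3/5 mm thicknesses come from the abstract, while the toy mask and the ultrasound pixel spacing are assumptions for illustration only.

```python
# Build peritumoral rings of fixed physical thickness around a binary tumour mask.
import numpy as np
from scipy import ndimage

def peritumoral_ring(mask: np.ndarray, thickness_mm: float, spacing_mm: float) -> np.ndarray:
    """Return the ring of pixels lying within `thickness_mm` outside the tumour mask."""
    # Distance (in mm) from each non-tumour pixel to the nearest tumour pixel.
    dist_outside = ndimage.distance_transform_edt(~mask.astype(bool), sampling=spacing_mm)
    return (dist_outside > 0) & (dist_outside <= thickness_mm)

if __name__ == "__main__":
    mask = np.zeros((200, 200), dtype=bool)
    mask[80:120, 80:120] = True                     # toy tumour mask
    for t in (1.0, 3.0, 5.0):                       # peri-1/3/5 mm regions
        ring = peritumoral_ring(mask, t, spacing_mm=0.2)
        print(f"{t:.0f} mm ring: {ring.sum()} pixels")
```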

Qin X, Yang W, Zhou X, Yang Y, Zhang N

PubMed · Jul 1 2025
To develop a machine learning (ML) model based on clinicopathological and imaging features to predict human epidermal growth factor receptor 2 (HER2)-positive expression (HER2-p) in breast cancer (BC), and to compare its performance with that of a logistic regression (LR) model. A total of 2541 consecutive female patients with pathologically confirmed primary breast lesions were enrolled in this study. Based on chronological order, 2034 patients treated between January 2018 and December 2022 were designated as the retrospective development cohort, while 507 patients treated between January 2023 and May 2024 were designated as the prospective validation cohort. Within the development cohort, patients were randomly divided into a training cohort (n=1628) and a test cohort (n=406) in an 8:2 ratio. Pretreatment mammography (MG) and breast MRI data, along with clinicopathological features, were recorded. Extreme Gradient Boosting (XGBoost) in combination with an artificial neural network (ANN) and multivariate LR analyses were employed to extract features associated with HER2 positivity in BC and to develop an ANN model (using XGBoost features) and an LR model, respectively. Predictive value was assessed using receiver operating characteristic (ROC) curves. Following the application of recursive feature elimination with cross-validation (RFE-CV) for feature dimensionality reduction, the XGBoost algorithm identified tumor size, suspicious calcifications, Ki-67 index, spiculation, and minimum apparent diffusion coefficient (minimum ADC) as the key feature subset indicative of HER2-p in BC. The constructed ANN model consistently outperformed the LR model, achieving an area under the curve (AUC) of 0.853 (95% CI: 0.837-0.872) in the training cohort, 0.821 (95% CI: 0.798-0.853) in the test cohort, and 0.809 (95% CI: 0.776-0.841) in the validation cohort. The ANN model, built using the significant feature subset identified by the XGBoost algorithm with RFE-CV, demonstrates potential in predicting HER2-p in BC.
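
A sketch, with assumed hyper-parameters and synthetic data, of the two-stage approach described above: RFE-CV feature selection driven by XGBoost importances, followed by a small feed-forward neural network trained on the selected feature subset. The dataset, network size, and cross-validation settings are illustrative.

```python
# RFE-CV (XGBoost-ranked) feature selection, then an ANN on the selected subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Stage 1: recursive feature elimination with cross-validation, ranked by XGBoost importances.
selector = RFECV(XGBClassifier(n_estimators=200, eval_metric="logloss"),
                 step=1, cv=5, scoring="roc_auc").fit(X_tr, y_tr)

# Stage 2: train an ANN on the surviving feature subset.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
ann.fit(selector.transform(X_tr), y_tr)

auc = roc_auc_score(y_te, ann.predict_proba(selector.transform(X_te))[:, 1])
print(f"Selected {selector.n_features_} features; test AUC = {auc:.3f}")
```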

Zhu Y, Wang P, Wang B, Feng B, Cai W, Wang S, Meng X, Wang S, Zhao X, Ma X

PubMed · Jul 1 2025
To investigate the effect of accelerated deep-learning (DL) multi-b-value DWI (Mb-DWI) on acquisition time, image quality, and the ability to predict microvascular invasion (MVI) in BCLC stage A hepatocellular carcinoma (HCC), compared to standard Mb-DWI. Patients who underwent liver MRI were prospectively enrolled. Subjective image quality, signal-to-noise ratio (SNR), lesion contrast-to-noise ratio (CNR), and Mb-DWI-derived parameters from various models (mono-exponential model, intravoxel incoherent motion, diffusion kurtosis imaging, and stretched exponential model) were calculated and compared between the two sequences. The Mb-DWI parameters of the two sequences were compared between MVI-positive and MVI-negative groups, respectively. ROC and logistic regression analyses were performed to evaluate and identify predictive performance. The study included 118 patients; 48/118 (40.67%) lesions were identified as MVI-positive. DL Mb-DWI significantly reduced acquisition time by 52.86%. DL Mb-DWI produced significantly higher overall image quality, SNR, and CNR than standard Mb-DWI. All diffusion-related parameters except the pseudo-diffusion coefficient showed significant differences between the two sequences. In both DL and standard Mb-DWI, the apparent diffusion coefficient, true diffusion coefficient (D), perfusion fraction (f), mean diffusivity (MD), mean kurtosis (MK), and distributed diffusion coefficient (DDC) values were significantly different between MVI-positive and MVI-negative groups. The combination of D, f, and MK yielded the highest AUCs of 0.912 and 0.928 in the standard and DL sequences, respectively, with no significant difference in predictive efficiency. DL Mb-DWI significantly reduces acquisition time and improves image quality, with predictive performance comparable to standard Mb-DWI in discriminating MVI status in BCLC stage A HCC.
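
As an illustrative sketch (not the vendor or study fitting code), two of the diffusion models mentioned above are fitted to a synthetic multi-b-value voxel signal by least squares: the mono-exponential ADC model and the diffusion-kurtosis model (MD, MK). The b-value scheme, noise level, and starting values are assumptions.

```python
# Mono-exponential and diffusion-kurtosis fits to a multi-b-value DWI signal.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 50, 200, 500, 800, 1500, 2000], dtype=float)   # s/mm^2 (assumed scheme)

def mono_exp(b, s0, adc):
    # S(b) = S0 * exp(-b * ADC)
    return s0 * np.exp(-b * adc)

def kurtosis(b, s0, d, k):
    # S(b) = S0 * exp(-b*D + (1/6) * b^2 * D^2 * K)
    return s0 * np.exp(-b * d + (b ** 2) * (d ** 2) * k / 6.0)

rng = np.random.default_rng(0)
signal = kurtosis(b, 1000.0, 1.1e-3, 0.9) + rng.normal(0, 5, b.size)  # noisy synthetic voxel

(s0_m, adc), _ = curve_fit(mono_exp, b, signal, p0=(signal[0], 1e-3))
(s0_k, md, mk), _ = curve_fit(kurtosis, b, signal, p0=(signal[0], 1e-3, 1.0),
                              bounds=([0, 1e-5, 0], [np.inf, 5e-3, 3]))
print(f"ADC = {adc:.2e} mm^2/s, MD = {md:.2e} mm^2/s, MK = {mk:.2f}")
```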

Jian Y, Yang S, Liu R, Tan X, Zhao Q, Wu J, Chen Y

PubMed · Jul 1 2025
To develop and validate a machine learning-based prediction model using multiparametric magnetic resonance imaging (MRI) to differentiate benign from malignant testicular lesions. The study retrospectively enrolled 148 patients with pathologically confirmed benign or malignant testicular lesions, divided into a training set (n=103) and a validation set (n=45). Radiomics features were derived from T2-weighted (T2WI), contrast-enhanced T1-weighted (CE-T1WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) MRI images, followed by feature selection. A machine learning-based combined model was developed by incorporating radiomics scores (rad scores) from the optimal radiomics model along with clinical predictors. Receiver operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) was used to evaluate and compare the predictive performance of each model. The diagnostic efficacy of the various machine learning models was compared using the DeLong test. Radiomics features were extracted from the four-sequence combination (CE-T1WI+DWI+ADC+T2WI), and the logistic regression (LR)-based model showed the best performance among the radiomics models. The clinical model identified one independent predictor. The combined clinical-radiomics model showed the best performance, with an AUC of 0.932 (95% confidence interval (CI) 0.868-0.978), sensitivity of 0.875, specificity of 0.871, and accuracy of 0.884 in the validation set. The combined clinical-radiomics model can be used as a reliable tool to predict benign and malignant testicular lesions and provide a reference for clinical treatment decisions.
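
A minimal sketch, on assumed synthetic data, of the "combined clinical-radiomics" idea: the rad score from the best radiomics model is entered into a logistic regression together with a clinical predictor, and AUC, sensitivity, and specificity are read off the validation set at a Youden-optimal threshold (a stand-in choice; the DeLong comparison used in the paper is not reproduced here).

```python
# Combined rad-score + clinical logistic regression, evaluated on a validation split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
y_tr = rng.integers(0, 2, 103)                 # training labels (synthetic)
y_va = rng.integers(0, 2, 45)                  # validation labels (synthetic)

def features(y):
    # Rad score and one clinical predictor, loosely correlated with the label.
    rad_score = y + rng.normal(0, 0.8, y.size)
    clinical = y + rng.normal(0, 1.2, y.size)
    return np.column_stack([rad_score, clinical])

clf = LogisticRegression().fit(features(y_tr), y_tr)
p = clf.predict_proba(features(y_va))[:, 1]

fpr, tpr, _ = roc_curve(y_va, p)
j = np.argmax(tpr - fpr)                       # Youden-optimal operating point
print(f"AUC={roc_auc_score(y_va, p):.3f}  sensitivity={tpr[j]:.3f}  specificity={1 - fpr[j]:.3f}")
```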

Blanken N, Heiles B, Kuliesh A, Versluis M, Jain K, Maresca D, Lajoinie G

PubMed · Jul 1 2025
Ultrasound contrast agents (UCAs) have been used as vascular reporters for the past 40 years. The ability to enhance vascular features in ultrasound images with engineered lipid-shelled microbubbles has enabled breakthroughs such as the detection of tissue perfusion or super-resolution imaging of the microvasculature. However, advances in the field of contrast-enhanced ultrasound are hindered by experimental variables that are difficult to control in a laboratory setting, such as complex vascular geometries, the lack of ground truth, and tissue nonlinearities. In addition, the demand for large datasets to train deep learning-based computational ultrasound imaging methods calls for the development of a simulation tool that can reproduce the physics of ultrasound wave interactions with tissues and microbubbles. Here, we introduce a physically realistic contrast-enhanced ultrasound simulator (PROTEUS) consisting of four interconnected modules that account for blood flow dynamics in segmented vascular geometries, intravascular microbubble trajectories, ultrasound wave propagation, and nonlinear microbubble scattering. The first part of this study describes the numerical methods that enabled this development. We demonstrate that PROTEUS can generate contrast-enhanced radio-frequency (RF) data in various vascular architectures across the range of medical ultrasound frequencies. PROTEUS offers a customizable framework to explore novel ideas in the field of contrast-enhanced ultrasound imaging. It is released as an open-source tool for the scientific community.
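
As a hedged sketch of the kind of single-bubble physics a contrast-agent simulator must solve, the code below integrates a basic Rayleigh-Plesset oscillator driven by an ultrasound pulse. PROTEUS itself models lipid-shelled microbubbles (with shell terms such as Marmottant-type viscoelasticity) and couples them to flow and wave propagation; none of that is reproduced here, and the bubble radius, drive frequency, and pressure amplitude are assumptions.

```python
# Free-gas-bubble Rayleigh-Plesset response to a sinusoidal ultrasound drive.
import numpy as np
from scipy.integrate import solve_ivp

# Fluid / gas constants (water, adiabatic gas) and an assumed 2-um-radius bubble.
rho, mu, sigma, p0, gamma = 1000.0, 1e-3, 0.072, 101.3e3, 1.4
R0 = 2e-6
f_drive, p_ac = 2e6, 50e3          # 2 MHz drive, 50 kPa peak pressure (assumptions)

def p_drive(t):
    return p_ac * np.sin(2 * np.pi * f_drive * t)

def rayleigh_plesset(t, y):
    R, Rdot = y
    # Gas pressure from the equilibrium condition at R0, polytropic compression.
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * gamma)
    rhs = p_gas - 2 * sigma / R - 4 * mu * Rdot / R - p0 - p_drive(t)
    Rddot = (rhs / rho - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0, 5e-6), [R0, 0.0], max_step=1e-9, rtol=1e-8)
print(f"Max radial excursion: {sol.y[0].max() / R0:.2f} R0")
```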

Zhang JP, Wang ZH, Zhang J, Qiu J

PubMed · Jul 1 2025
Research has revealed that the crown-implant ratio (CIR) is a critical variable influencing the long-term stability of implant-supported prostheses in the oral cavity. Nevertheless, inefficient manual measurement and varied measurement methods have caused significant inconvenience in both clinical and scientific work. This study aimed to develop an automated system for detecting the CIR of implant-supported prostheses from radiographs, with the objective of enhancing the efficiency of radiograph interpretation for dentists. The method for measuring the CIR of implant-supported prostheses was based on convolutional neural networks (CNNs) and was designed to recognize implant-supported prostheses and identify key points around them. The experiment used You Only Look Once version 4 (YOLOv4) to locate the implant-supported prosthesis with a rectangular frame. Subsequently, two CNNs were used to identify key points: the first CNN determined the general position of the feature points, while the second CNN refined the output of the first network to precisely locate the key points. The network was tested on a self-built dataset, and the anatomic CIR and clinical CIR were obtained simultaneously through the vertical distance method. Key point accuracy was validated through normalized error (NE) values, and a set of data was selected to compare machine and manual measurement results. For statistical analysis, the paired t test was applied (α=.05). A dataset comprising 1106 images was constructed. The integration of multiple networks demonstrated satisfactory recognition of implant-supported prostheses and their surrounding key points. The average NE value for key points indicated a high level of accuracy. Statistical analysis confirmed no significant difference in the crown-implant ratio between machine and manual measurements (P>.05). Machine learning proved effective in identifying implant-supported prostheses and detecting their crown-implant ratios. If applied as a clinical tool for analyzing radiographs, this approach can assist dentists in efficiently and accurately obtaining crown-implant ratio results.
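
A minimal sketch of the "vertical distance method" for the crown-implant ratio once a key-point network has returned image coordinates: the anatomic CIR divides at the implant platform, the clinical CIR at the bone level. The exact key-point definitions and the example coordinates are assumptions, not the paper's specification.

```python
# Crown-implant ratios from detected key points via vertical (y-axis) distances.
from dataclasses import dataclass

@dataclass
class KeyPoints:                              # (x, y) pixel coordinates, y increasing downwards
    crown_top: tuple[float, float]
    implant_platform: tuple[float, float]
    bone_level: tuple[float, float]
    implant_apex: tuple[float, float]

def vertical(a, b) -> float:
    """Vertical distance between two key points, in pixels."""
    return abs(a[1] - b[1])

def crown_implant_ratios(kp: KeyPoints) -> tuple[float, float]:
    # Anatomic CIR: crown above the implant platform vs. implant below it.
    anatomic = vertical(kp.crown_top, kp.implant_platform) / vertical(kp.implant_platform, kp.implant_apex)
    # Clinical CIR: crown above the bone level vs. implant below it.
    clinical = vertical(kp.crown_top, kp.bone_level) / vertical(kp.bone_level, kp.implant_apex)
    return anatomic, clinical

if __name__ == "__main__":
    kp = KeyPoints(crown_top=(120, 40), implant_platform=(122, 130),
                   bone_level=(121, 150), implant_apex=(125, 260))
    a, c = crown_implant_ratios(kp)
    print(f"anatomic CIR = {a:.2f}, clinical CIR = {c:.2f}")
```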