Page 14 of 2602591 results

A Machine Learning Model for Predicting the HER2 Positive Expression of Breast Cancer Based on Clinicopathological and Imaging Features.

Qin X, Yang W, Zhou X, Yang Y, Zhang N

PubMed | Jul 1 2025
To develop a machine learning (ML) model based on clinicopathological and imaging features to predict Human Epidermal Growth Factor Receptor 2 (HER2)-positive expression (HER2-p) of breast cancer (BC), and to compare its performance with that of a logistic regression (LR) model. A total of 2541 consecutive female patients with pathologically confirmed primary breast lesions were enrolled in this study. Based on chronological order, 2034 patients treated between January 2018 and December 2022 were designated as the retrospective development cohort, while 507 patients treated between January 2023 and May 2024 were designated as the prospective validation cohort. Within the development cohort, patients were randomly divided into a training cohort (n=1628) and a test cohort (n=406) in an 8:2 ratio. Pretreatment mammography (MG) and breast MRI data, along with clinicopathological features, were recorded. Extreme Gradient Boosting (XGBoost) combined with an Artificial Neural Network (ANN), and multivariate LR analysis, were employed to extract features associated with HER2 positivity in BC and to develop an ANN model (using the XGBoost features) and an LR model, respectively. Predictive value was assessed using receiver operating characteristic (ROC) curves. Following the application of Recursive Feature Elimination with Cross-Validation (RFE-CV) for feature dimensionality reduction, the XGBoost algorithm identified tumor size, suspicious calcifications, Ki-67 index, spiculation, and minimum apparent diffusion coefficient (minimum ADC) as the key feature subset indicative of HER2-p in BC. The constructed ANN model consistently outperformed the LR model, achieving an area under the curve (AUC) of 0.853 (95% CI: 0.837-0.872) in the training cohort, 0.821 (95% CI: 0.798-0.853) in the test cohort, and 0.809 (95% CI: 0.776-0.841) in the validation cohort.
The ANN model, built using the significant feature subsets identified by the XGBoost algorithm with RFE-CV, demonstrates potential in predicting HER2-p in BC.
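The AUC comparison above is the core of the model evaluation. As a rough illustration (not the authors' code), an AUC can be estimated directly from predicted scores with the rank-based Mann-Whitney formulation; the scores below are made-up values:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) estimate of the ROC AUC: the
    probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case, counting ties as 0.5."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs for HER2-positive vs HER2-negative cases.
example_auc = auc_from_scores([0.9, 0.8, 0.4], [0.3, 0.5])
```

An AUC of 0.5 corresponds to chance-level discrimination, 1.0 to perfect separation of the two groups.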

Accelerated Multi-b-Value DWI Using Deep Learning Reconstruction: Image Quality Improvement and Microvascular Invasion Prediction in BCLC Stage A Hepatocellular Carcinoma.

Zhu Y, Wang P, Wang B, Feng B, Cai W, Wang S, Meng X, Wang S, Zhao X, Ma X

PubMed | Jul 1 2025
To investigate the effect of accelerated deep-learning (DL) multi-b-value DWI (Mb-DWI) on acquisition time, image quality, and predictive ability for microvascular invasion (MVI) in BCLC stage A hepatocellular carcinoma (HCC), compared to standard Mb-DWI. Patients who underwent liver MRI were prospectively collected. Subjective image quality, signal-to-noise ratio (SNR), lesion contrast-to-noise ratio (CNR), and Mb-DWI-derived parameters from various models (mono-exponential model, intravoxel incoherent motion, diffusion kurtosis imaging, and stretched exponential model) were calculated and compared between the two sequences. The Mb-DWI parameters of the two sequences were compared between MVI-positive and MVI-negative groups, respectively. ROC and logistic regression analyses were performed to evaluate and identify the predictive performance. The study included 118 patients, and 48/118 (40.67%) lesions were identified as MVI-positive. DL Mb-DWI significantly reduced acquisition time by 52.86%. DL Mb-DWI produced significantly higher overall image quality, SNR, and CNR than standard Mb-DWI. All diffusion-related parameters except the pseudo-diffusion coefficient showed significant differences between the two sequences. In both DL and standard Mb-DWI, the apparent diffusion coefficient, true diffusion coefficient (D), perfusion fraction (f), mean diffusivity (MD), mean kurtosis (MK), and distributed diffusion coefficient (DDC) values were significantly different between MVI-positive and MVI-negative groups. The combination of D, f, and MK yielded the highest AUCs of 0.912 and 0.928 in the standard and DL sequences, respectively, with no significant difference in predictive efficiency. The DL Mb-DWI significantly reduces acquisition time and improves image quality, with predictive performance comparable to standard Mb-DWI in discriminating MVI status in BCLC stage A HCC.
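SNR and CNR definitions vary between studies; one common convention, sketched here with hypothetical ROI values (not taken from the study), is:

```python
import statistics

def snr(signal_roi):
    # SNR as the mean signal divided by the signal's sample standard
    # deviation within the ROI; other conventions use a separate
    # background-noise estimate.
    return statistics.mean(signal_roi) / statistics.stdev(signal_roi)

def cnr(lesion_roi, background_roi, noise_sd):
    # CNR as the absolute difference between mean lesion and mean
    # background signal, normalized by a noise estimate.
    return abs(statistics.mean(lesion_roi) - statistics.mean(background_roi)) / noise_sd

# Made-up pixel intensities for illustration only.
example_cnr = cnr([100.0, 102.0, 98.0], [60.0, 62.0, 58.0], 10.0)
```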

Radiomics Analysis of Different Machine Learning Models based on Multiparametric MRI to Identify Benign and Malignant Testicular Lesions.

Jian Y, Yang S, Liu R, Tan X, Zhao Q, Wu J, Chen Y

PubMed | Jul 1 2025
To develop and validate a machine learning-based prediction model using multiparametric magnetic resonance imaging (MRI) to distinguish benign from malignant testicular lesions. The study retrospectively enrolled 148 patients with pathologically confirmed benign or malignant testicular lesions, divided into a training set (n=103) and a validation set (n=45). Radiomics features were derived from T2-weighted (T2WI), contrast-enhanced T1-weighted (CE-T1WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) MRI images, followed by feature selection. A machine learning-based combined model was developed by incorporating radiomics scores (rad-scores) from the optimal radiomics model along with clinical predictors. Receiver operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) was used to evaluate and compare the predictive performance of each model. The diagnostic efficacy of the various machine learning models was compared using the DeLong test. Radiomics features were extracted from the four-sequence combination (CE-T1WI+DWI+ADC+T2WI), and the model built with logistic regression (LR) showed the best performance among the radiomics models. The clinical model identified one independent predictor. The combined clinical-radiomics model showed the best performance, with an AUC of 0.932 (95% confidence interval (CI): 0.868-0.978), sensitivity of 0.875, specificity of 0.871, and accuracy of 0.884 in the validation set. The combined clinical-radiomics model can serve as a reliable tool to predict benign and malignant testicular lesions and provide a reference for clinical treatment decisions.
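Sensitivity, specificity, and accuracy as reported above all derive from the same confusion-matrix counts. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix
    counts (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)          # fraction of malignant lesions detected
    specificity = tn / (tn + fp)          # fraction of benign lesions correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```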

PROTEUS: A Physically Realistic Contrast-Enhanced Ultrasound Simulator-Part I: Numerical Methods.

Blanken N, Heiles B, Kuliesh A, Versluis M, Jain K, Maresca D, Lajoinie G

PubMed | Jul 1 2025
Ultrasound contrast agents (UCAs) have been used as vascular reporters for the past 40 years. The ability to enhance vascular features in ultrasound images with engineered lipid-shelled microbubbles has enabled breakthroughs such as the detection of tissue perfusion or super-resolution imaging of the microvasculature. However, advances in the field of contrast-enhanced ultrasound are hindered by experimental variables that are difficult to control in a laboratory setting, such as complex vascular geometries, the lack of ground truth, and tissue nonlinearities. In addition, the demand for large datasets to train deep learning-based computational ultrasound imaging methods calls for the development of a simulation tool that can reproduce the physics of ultrasound wave interactions with tissues and microbubbles. Here, we introduce a physically realistic contrast-enhanced ultrasound simulator (PROTEUS) consisting of four interconnected modules that account for blood flow dynamics in segmented vascular geometries, intravascular microbubble trajectories, ultrasound wave propagation, and nonlinear microbubble scattering. The first part of this study describes the numerical methods that enabled this development. We demonstrate that PROTEUS can generate contrast-enhanced radio-frequency (RF) data in various vascular architectures across the range of medical ultrasound frequencies. PROTEUS offers a customizable framework to explore novel ideas in the field of contrast-enhanced ultrasound imaging. It is released as an open-source tool for the scientific community.
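The wave-propagation module of a simulator like PROTEUS solves the acoustic wave equation numerically. As a greatly simplified illustration only (PROTEUS itself uses far more sophisticated 3D methods with realistic media and boundaries), a second-order finite-difference time-domain (FDTD) update for a 1D homogeneous medium looks like:

```python
def fdtd_1d(n_cells, n_steps, cfl=0.9, src_pos=10):
    """Minimal 1D acoustic wave propagation using a second-order
    FDTD update. `cfl` is the Courant number c*dt/dx, which must be
    <= 1 for stability. Boundaries are held at p = 0."""
    prev = [0.0] * n_cells
    curr = [0.0] * n_cells
    curr[src_pos] = 1.0  # impulsive point source
    c2 = cfl * cfl
    for _ in range(n_steps):
        nxt = [0.0] * n_cells
        for i in range(1, n_cells - 1):
            # Discrete wave equation: p_next = 2p - p_prev + c2 * laplacian(p)
            nxt[i] = (2.0 * curr[i] - prev[i]
                      + c2 * (curr[i + 1] - 2.0 * curr[i] + curr[i - 1]))
        prev, curr = curr, nxt
    return curr
```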

Convolutional neural network-based measurement of crown-implant ratio for implant-supported prostheses.

Zhang JP, Wang ZH, Zhang J, Qiu J

PubMed | Jul 1 2025
Research has revealed that the crown-implant ratio (CIR) is a critical variable influencing the long-term stability of implant-supported prostheses in the oral cavity. Nevertheless, inefficient manual measurement and varied measurement methods have caused significant inconvenience in both clinical and scientific work. This study aimed to develop an automated system for detecting the CIR of implant-supported prostheses from radiographs, with the objective of enhancing the efficiency of radiograph interpretation for dentists. The method for measuring the CIR of implant-supported prostheses was based on convolutional neural networks (CNNs) and was designed to recognize implant-supported prostheses and identify key points around them. The system used You Only Look Once version 4 (YOLOv4) to locate the implant-supported prosthesis with a rectangular frame. Subsequently, two CNNs were used to identify key points: the first CNN determined the general position of the feature points, while the second CNN fine-tuned the output of the first network to precisely locate them. The network was tested on a self-built dataset, and the anatomic CIR and clinical CIR were obtained simultaneously through the vertical distance method. Key point accuracy was validated through Normalized Error (NE) values, and a set of data was selected to compare machine and manual measurement results. For statistical analysis, the paired t test was applied (α=.05). A dataset comprising 1106 images was constructed. The integration of multiple networks demonstrated satisfactory recognition of implant-supported prostheses and their surrounding key points. The average NE value for key points indicated a high level of accuracy. Statistical analysis confirmed no significant difference in the crown-implant ratio between machine and manual measurements (P>.05). Machine learning proved effective in identifying implant-supported prostheses and detecting their crown-implant ratios.
If applied as a clinical tool for analyzing radiographs, this research can assist dentists in efficiently and accurately obtaining crown-implant ratio results.
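Once the key points are detected, the vertical distance method reduces to simple arithmetic on their coordinates. A minimal sketch, assuming three hypothetical key points (crown top, crown-implant junction, implant apex) in image coordinates where y grows downward; the choice of junction point is what distinguishes the anatomic from the clinical CIR:

```python
def crown_implant_ratio(crown_top_y, junction_y, implant_apex_y):
    """Crown-implant ratio by the vertical distance method: crown
    length divided by implant length, both measured as vertical
    (y-axis) distances between detected key points. Key-point names
    here are illustrative, not the study's labels."""
    crown_len = junction_y - crown_top_y
    implant_len = implant_apex_y - junction_y
    return crown_len / implant_len

# A CIR well above 1 (long crown, short implant) is generally
# considered biomechanically unfavorable.
example_cir = crown_implant_ratio(0.0, 40.0, 120.0)
```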

Photon-counting detector CT of the brain reduces variability of Hounsfield units and has a mean offset compared with energy-integrating detector CT.

Stein T, Lang F, Rau S, Reisert M, Russe MF, Schürmann T, Fink A, Kellner E, Weiss J, Bamberg F, Urbach H, Rau A

PubMed | Jul 1 2025
Distinguishing gray matter (GM) from white matter (WM) is essential for CT of the brain. The recently established photon-counting detector CT (PCD-CT) technology employs a novel detection technique that might allow more precise measurement of tissue attenuation for an improved delineation of attenuation values (Hounsfield units - HU) and improved image quality in comparison with energy-integrating detector CT (EID-CT). To investigate this, we compared HU, GM vs. WM contrast, and image noise using automated deep learning-based brain segmentations. We retrospectively included patients who received either PCD-CT or EID-CT and did not display a cerebral pathology. A deep learning-based segmentation of the GM and WM was used to extract HU. From this, the gray-to-white ratio and contrast-to-noise ratio were calculated. We included 329 patients with EID-CT (mean age 59.8 ± 20.2 years) and 180 with PCD-CT (mean age 64.7 ± 16.5 years). GM and WM showed significantly lower HU in PCD-CT (GM: 40.4 ± 2.2 HU; WM: 33.4 ± 1.5 HU) compared to EID-CT (GM: 45.1 ± 1.6 HU; WM: 37.4 ± 1.6 HU, p < .001). Standard deviations of HU were also lower in PCD-CT (GM and WM both p < .001) and contrast-to-noise ratio was significantly higher in PCD-CT compared to EID-CT (p < .001). Gray-to-white matter ratios were not significantly different across both modalities (p > .99). In an age-matched subset (n = 157 patients from both cohorts), all findings were replicated. This comprehensive comparison of HU in cerebral gray and white matter revealed substantially reduced image noise and an average offset with lower HU in PCD-CT while the ratio between GM and WM remained constant. The potential need to adapt windowing presets based on this finding should be investigated in future studies.
CNR = Contrast-to-Noise Ratio; CTDIvol = Volume Computed Tomography Dose Index; EID = Energy-Integrating Detector; GWR = Gray-to-White Matter Ratio; HU = Hounsfield Units; PCD = Photon-Counting Detector; ROI = Region of Interest; VMI = Virtual Monoenergetic Images.
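Using the mean HU values reported in the abstract, the gray-to-white matter ratio (GWR) can be reproduced with simple arithmetic, which illustrates why the ratio is preserved despite the systematic HU offset between detector types:

```python
def gray_white_ratio(gm_hu, wm_hu):
    # Gray-to-white matter ratio (GWR) from mean attenuation values.
    return gm_hu / wm_hu

# Mean HU values as reported in the abstract.
pcd_gwr = gray_white_ratio(40.4, 33.4)  # photon-counting detector CT
eid_gwr = gray_white_ratio(45.1, 37.4)  # energy-integrating detector CT
```

Both ratios come out near 1.21, consistent with the abstract's finding that GWR did not differ significantly between modalities even though absolute HU did.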

Deep learning-based lung cancer classification of CT images.

Faizi MK, Qiang Y, Wei Y, Qiao Y, Zhao J, Aftab R, Urrehman Z

PubMed | Jul 1 2025
Lung cancer remains a leading cause of cancer-related deaths worldwide, with accurate classification of lung nodules being critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates several key innovations: a dual-branch architecture that combines CNNs for local feature extraction and Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
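Ten-fold cross-validation, as used to evaluate DCSwinB, partitions the data so that every sample serves as validation exactly once. A minimal index-splitting sketch (not the authors' pipeline):

```python
import random

def k_fold_indices(n_samples, k=10, seed=42):
    """Shuffled index splits for k-fold cross-validation. Returns a
    list of (train_indices, val_indices) pairs; each sample appears
    in exactly one validation fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits
```

Reported metrics (accuracy, recall, specificity, AUC) are then averaged over the k held-out folds.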

Automated classification of chondroid tumor using 3D U-Net and radiomics with deep features.

Le Dinh T, Lee S, Park H, Lee S, Choi H, Chun KS, Jung JY

PubMed | Jul 1 2025
Classifying chondroid tumors is an essential step for effective treatment planning. Recently, with the advances in computer-aided diagnosis and the increasing availability of medical imaging data, automated tumor classification using deep learning shows promise in assisting clinical decision-making. In this study, we propose a hybrid approach that integrates deep learning and radiomics for chondroid tumor classification. First, we performed tumor segmentation using the nnUNetv2 framework, which provided three-dimensional (3D) delineation of tumor regions of interest (ROIs). From these ROIs, we extracted a set of radiomics features and deep learning-derived features. After feature selection, we identified 15 radiomics and 15 deep features to build classification models. We developed five machine learning classifiers: Random Forest, XGBoost, Gradient Boosting, LightGBM, and CatBoost. The approach integrating radiomics features, ROI-derived deep learning features, and clinical variables yielded the best overall classification results. Among the classifiers, the CatBoost classifier achieved the highest accuracy of 0.90 (95% CI 0.90-0.93), a weighted kappa of 0.85, and an AUC of 0.91. These findings highlight the potential of integrating 3D U-Net-assisted segmentation with radiomics and deep learning features to improve classification of chondroid tumors.
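The weighted kappa reported above measures chance-corrected agreement between predicted and reference classes. Assuming the common quadratic weighting (the abstract does not state which weighting scheme was used), it can be computed as:

```python
def quadratic_weighted_kappa(y1, y2, n_classes):
    """Quadratic-weighted Cohen's kappa for ordinal labels in
    {0, ..., n_classes-1}: agreement beyond chance, with a squared
    penalty for larger disagreements."""
    n = len(y1)
    # Observed joint distribution of the two ratings.
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(y1, y2):
        obs[a][b] += 1.0 / n
    # Marginal distributions (chance-expected agreement).
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[r][c] for r in range(n_classes)) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic weight
            num += w * obs[i][j]
            den += w * p1[i] * p2[j]
    return 1.0 - num / den
```

Kappa is 1 for perfect agreement and 0 for agreement no better than chance.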

Deep Learning for Detecting and Subtyping Renal Cell Carcinoma on Contrast-Enhanced CT Scans Using 2D Neural Network with Feature Consistency Techniques.

Gupta A, Dhanakshirur RR, Jain K, Garg S, Yadav N, Seth A, Das CJ

PubMed | Jul 1 2025
<b>Objective</b> The aim of this study was to explore an innovative approach for developing a deep learning (DL) algorithm for renal cell carcinoma (RCC) detection and subtyping on computed tomography (CT): clear cell RCC (ccRCC) versus non-ccRCC, using a two-dimensional (2D) neural network architecture and feature consistency modules. <b>Materials and Methods</b> This retrospective study included baseline CT scans from 196 histopathologically proven RCC patients: 143 ccRCCs and 53 non-ccRCCs. Manual tumor annotations were performed on axial slices of corticomedullary phase images, serving as ground truth. After image preprocessing, the dataset was divided into training, validation, and testing subsets. The study tested multiple 2D DL architectures, with FocalNet-DINO demonstrating the highest effectiveness in detecting and classifying RCC. The study further incorporated spatial and class consistency modules to enhance prediction accuracy. The models' performance was evaluated using free-response receiver operating characteristic curves, recall rates, specificity, accuracy, F1 scores, and area under the curve (AUC) scores. <b>Results</b> The FocalNet-DINO architecture achieved the highest recall rate of 0.823 at 0.025 false positives per image (FPI) for RCC detection. The integration of spatial and class consistency modules into the architecture led to a 0.2% increase in recall rate at 0.025 FPI, along with improvements of 0.1% in both accuracy and AUC scores for RCC classification. These enhancements allowed detection of cancer in an additional 21 slices and reduced false positives in 126 slices. <b>Conclusion</b> This study demonstrates high performance for RCC detection and classification using a DL algorithm that leverages 2D neural networks with spatial and class consistency modules, offering a novel, computationally simpler, and accurate DL approach to RCC characterization.
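A recall value at a fixed false-positives-per-image (FPI) operating point, as reported above, is read off the FROC curve by sweeping the detection-score threshold. A minimal sketch with made-up detections (ties and per-lesion matching are simplified here):

```python
def recall_at_fpi(detections, n_lesions, n_images, max_fpi):
    """Highest recall achievable while keeping false positives per
    image (FPI) <= max_fpi. `detections` is a list of
    (score, is_true_positive) pairs over the whole test set."""
    best_recall = 0.0
    tp = fp = 0
    # Sweep the threshold from the most to the least confident detection.
    for score, is_tp in sorted(detections, key=lambda d: d[0], reverse=True):
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / n_images <= max_fpi:
            best_recall = max(best_recall, tp / n_lesions)
    return best_recall
```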

Establishment and evaluation of an automatic multi-sequence MRI segmentation model of primary central nervous system lymphoma based on the nnU-Net deep learning network method.

Wang T, Tang X, Du J, Jia Y, Mou W, Lu G

PubMed | Jul 1 2025
Accurate quantitative assessment using gadolinium-contrast magnetic resonance imaging (MRI) is crucial in therapy planning, surveillance and prognostic assessment of primary central nervous system lymphoma (PCNSL). The present study aimed to develop a multimodal artificial intelligence deep learning segmentation model to address the challenges associated with traditional 2D measurements and manual volume assessments in MRI. Data from 49 pathologically-confirmed patients with PCNSL from six Chinese medical centers were analyzed, and regions of interest were manually segmented on contrast-enhanced T1-weighted and T2-weighted MRI scans for each patient, followed by fully automated voxel-wise segmentation of tumor components using a three-dimensional convolutional deep neural network. Furthermore, the efficiency of the model was evaluated using practical indicators, and its consistency and accuracy were compared with traditional methods. The performance of the models was assessed using the Dice similarity coefficient (DSC). The Mann-Whitney U test was used to compare continuous clinical variables and the χ<sup>2</sup> test was used for comparisons between categorical clinical variables. T1WI sequences exhibited the optimal performance (training Dice: 0.923, testing Dice: 0.830, outer validation Dice: 0.801), while T2WI showed relatively poor performance (training Dice: 0.761, testing Dice: 0.647, outer validation Dice: 0.643). In conclusion, the automatic multi-sequence MRI segmentation model for PCNSL in the present study displayed a high spatial overlap ratio and similar tumor volume to routine manual segmentation, indicating its significant potential.
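The Dice similarity coefficient used above measures the spatial overlap between a predicted mask and the manual reference: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on flattened binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size_a = sum(mask_a)
    size_b = sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * inter / (size_a + size_b)
```

A DSC of 1.0 means the automatic and manual segmentations coincide exactly; values above roughly 0.8, as seen for T1WI here, are commonly read as strong spatial overlap.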