
Automated classification of chondroid tumor using 3D U-Net and radiomics with deep features.

Le Dinh T, Lee S, Park H, Lee S, Choi H, Chun KS, Jung JY

PubMed · Jul 1, 2025
Classifying chondroid tumors is an essential step for effective treatment planning. Recently, with the advances in computer-aided diagnosis and the increasing availability of medical imaging data, automated tumor classification using deep learning shows promise in assisting clinical decision-making. In this study, we propose a hybrid approach that integrates deep learning and radiomics for chondroid tumor classification. First, we performed tumor segmentation using the nnUNetv2 framework, which provided three-dimensional (3D) delineation of tumor regions of interest (ROIs). From these ROIs, we extracted a set of radiomics features and deep learning-derived features. After feature selection, we identified 15 radiomics and 15 deep features to build classification models. We developed five machine learning classifiers for the classification models: Random Forest, XGBoost, Gradient Boosting, LightGBM, and CatBoost. The approach integrating features from radiomics, ROI-originated deep learning features, and clinical variables yielded the best overall classification results. Among the classifiers, the CatBoost classifier achieved the highest accuracy of 0.90 (95% CI 0.90-0.93), a weighted kappa of 0.85, and an AUC of 0.91. These findings highlight the potential of integrating 3D U-Net-assisted segmentation with radiomics and deep learning features to improve classification of chondroid tumors.
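The fusion step described above (concatenating selected radiomics, deep, and clinical features and fitting a boosted-tree classifier) can be sketched as follows. This is a minimal stand-in, not the paper's pipeline: CatBoost is swapped for scikit-learn's GradientBoostingClassifier, and the arrays are synthetic placeholders for the 15+15 selected features.

```python
# Sketch of feature fusion: concatenate radiomics, deep, and clinical
# features, then fit a boosted-tree classifier on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
radiomics = rng.normal(size=(n, 15))   # 15 selected radiomics features
deep = rng.normal(size=(n, 15))        # 15 selected deep features
clinical = rng.integers(0, 2, size=(n, 2)).astype(float)  # e.g. sex, age band
y = (radiomics[:, 0] + deep[:, 0] > 0).astype(int)        # toy tumor label

X = np.hstack([radiomics, deep, clinical])                # feature fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(round(acc, 2))
```

Any tree-based classifier with `fit`/`predict` can be dropped into the same slot, which is why the paper could compare five of them on identical fused features.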

Radiomics analysis based on dynamic contrast-enhanced MRI for predicting early recurrence after hepatectomy in hepatocellular carcinoma patients.

Wang KD, Guan MJ, Bao ZY, Shi ZJ, Tong HH, Xiao ZQ, Liang L, Liu JW, Shen GL

PubMed · Jul 1, 2025
This study aimed to develop a machine learning model based on Magnetic Resonance Imaging (MRI) radiomics for predicting early recurrence after curative surgery in patients with hepatocellular carcinoma (HCC). A retrospective analysis was conducted on 200 patients with HCC who underwent curative hepatectomy. Patients were randomly allocated to training (n = 140) and validation (n = 60) cohorts. Preoperative arterial, portal venous, and delayed phase images were acquired. Tumor regions of interest (ROIs) were manually delineated, with an additional ROI obtained by expanding the tumor boundary by 5 mm. Radiomic features were extracted and selected using the Least Absolute Shrinkage and Selection Operator (LASSO). Multiple machine learning algorithms were employed to develop predictive models. Model performance was evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, and calibration curves. The 20 most discriminative radiomic features were integrated with tumor size and satellite nodules for model development. In the validation cohort, the clinical-peritumoral radiomics model demonstrated superior predictive accuracy (AUC = 0.85, 95% CI: 0.74-0.95) compared to the clinical-intratumoral radiomics model (AUC = 0.82, 95% CI: 0.68-0.93) and the radiomics-only model (AUC = 0.82, 95% CI: 0.69-0.93). Furthermore, calibration curves and decision curve analyses indicated superior calibration ability and clinical benefit. The MRI-based peritumoral radiomics model demonstrates significant potential for predicting early recurrence of HCC.
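The LASSO selection step mentioned above can be sketched with scikit-learn: the penalty strength is chosen by cross-validation, and features with non-zero coefficients survive. Data here are synthetic stand-ins for the extracted radiomic features, with five planted signal features.

```python
# Minimal LASSO-based radiomics feature selection on synthetic data.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(140, 100))          # 140 patients x 100 radiomic features
coef_true = np.zeros(100)
coef_true[:5] = 2.0                      # only the first 5 features matter
y = X @ coef_true + rng.normal(scale=0.5, size=140)  # recurrence-score proxy

Xs = StandardScaler().fit_transform(X)   # standardize before penalizing
lasso = LassoCV(cv=5, random_state=1).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)   # indices of surviving features
print(len(selected))
```

Standardizing first matters: LASSO penalizes all coefficients equally, so features on larger scales would otherwise be unfairly favored.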

Brain structural features with functional priori to classify Parkinson's disease and multiple system atrophy using diagnostic MRI.

Zhou K, Li J, Huang R, Yu J, Li R, Liao W, Lu F, Hu X, Chen H, Gao Q

PubMed · Jul 1, 2025
Clinical two-dimensional (2D) MRI data has seen limited application in the early diagnosis of Parkinson's disease (PD) and multiple system atrophy (MSA) due to quality limitations, yet its diagnostic and therapeutic potential remains underexplored. This study presents a novel machine learning framework using reconstructed clinical images to accurately distinguish PD from MSA and identify disease-specific neuroimaging biomarkers. The structure constrained super-resolution network (SCSRN) algorithm was employed to reconstruct clinical 2D MRI data for 56 PD and 58 MSA patients. Features were derived from a functional template, and hierarchical SHAP-based feature selection improved model accuracy and interpretability. In the test set, the Extra Trees and logistic regression models based on the functional template demonstrated an improved accuracy of 95.65% and an AUC of 0.99. The positive and negative impacts of various features predicting PD and MSA were clarified, with larger fourth-ventricle and smaller brainstem volumes being most significant. The proposed framework provides new insights into the comprehensive utilization of clinical 2D MRI images to explore underlying neuroimaging biomarkers that can distinguish between PD and MSA, highlighting disease-specific alterations in brain morphology observed in these conditions.
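The importance-based feature ranking behind the study's selection step can be sketched as below. This is a loose stand-in: the shap package is swapped for Extra Trees' built-in impurity importances, and the data are synthetic placeholders for regional volume features (e.g. fourth ventricle, brainstem).

```python
# Rank synthetic regional features by Extra Trees impurity importance.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(114, 20))            # 56 + 58 subjects, 20 brain regions
y = (X[:, 3] - X[:, 7] > 0).astype(int)   # label driven by regions 3 and 7

clf = ExtraTreesClassifier(n_estimators=300, random_state=2).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]  # most important first
print(ranking[:3])
```

SHAP values additionally give each feature a signed, per-subject contribution (the "positive and negative impacts" the abstract reports), which impurity importances alone cannot provide.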

Machine learning for Parkinson's disease: a comprehensive review of datasets, algorithms, and challenges.

Shokrpour S, MoghadamFarid A, Bazzaz Abkenar S, Haghi Kashani M, Akbari M, Sarvizadeh M

PubMed · Jul 1, 2025
Parkinson's disease (PD) is a devastating neurological ailment affecting both mobility and cognitive function, posing considerable problems to the health of the elderly across the world. The absence of a conclusive treatment underscores the requirement to investigate cutting-edge diagnostic techniques to improve patient outcomes. Machine learning (ML) has the potential to revolutionize PD detection by applying large repositories of structured data to enhance diagnostic accuracy. A total of 133 papers published between 2021 and April 2024 were reviewed using a systematic literature review (SLR) methodology, and subsequently classified into five categories: acoustic data, biomarkers, medical imaging, movement data, and multimodal datasets. This comprehensive analysis offers valuable insights into the applications of ML in PD diagnosis. Our SLR identifies the datasets and ML algorithms used for PD diagnosis, as well as their merits, limitations, and evaluation factors. We also discuss challenges, future directions, and outstanding issues.

Hybrid transfer learning and self-attention framework for robust MRI-based brain tumor classification.

Panigrahi S, Adhikary DRD, Pattanayak BK

PubMed · Jul 1, 2025
Brain tumors are a significant contributor to cancer-related deaths worldwide. Accurate and prompt detection is crucial to reduce mortality rates and improve patient survival prospects. Magnetic Resonance Imaging (MRI) is crucial for diagnosis, but manual analysis is resource-intensive and error-prone, highlighting the need for robust Computer-Aided Diagnosis (CAD) systems. This paper proposes a novel hybrid model combining Transfer Learning (TL) and attention mechanisms to enhance brain tumor classification accuracy. Leveraging features from the pre-trained DenseNet201 Convolutional Neural Network (CNN) model and integrating a Transformer-based architecture, our approach overcomes challenges like computational intensity, detail detection, and noise sensitivity. We also evaluated five additional pre-trained models (VGG19, InceptionV3, Xception, MobileNetV2, and ResNet50V2) and incorporated Multi-Head Self-Attention (MHSA) and Squeeze-and-Excitation Attention (SEA) blocks individually to improve feature representation. Using the Br35H dataset of 3,000 MRI images, our proposed DenseTransformer model achieved a consistent accuracy of 99.41%, demonstrating its reliability as a diagnostic tool. Statistical analysis using a Z-test on Cohen's kappa score, DeLong's test on the AUC, and McNemar's test on the F1-score confirmed the model's reliability. Additionally, Explainable AI (XAI) techniques like Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) enhanced model transparency and interpretability. This study underscores the potential of hybrid Deep Learning (DL) models in advancing brain tumor diagnosis and improving patient outcomes.
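The Squeeze-and-Excitation Attention idea the paper adds can be shown in a few lines of numpy: globally average-pool each channel, pass the pooled vector through a two-layer bottleneck, and rescale channels by the resulting sigmoid gates. Weights here are random stand-ins; a real model would learn them.

```python
# Minimal numpy sketch of a Squeeze-and-Excitation (SE) block.
import numpy as np

def se_block(x, w1, w2):
    """x: (channels, h, w) feature map; w1: (c, c//r); w2: (c//r, c)."""
    z = x.mean(axis=(1, 2))                  # squeeze: per-channel average
    s = np.maximum(z @ w1, 0.0)              # excitation: ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))   # sigmoid channel gates in (0, 1)
    return x * gate[:, None, None]           # rescale each channel

rng = np.random.default_rng(3)
x = rng.normal(size=(16, 8, 8))              # toy feature map, 16 channels
w1 = rng.normal(size=(16, 4))                # reduction ratio r = 4
w2 = rng.normal(size=(4, 16))
out = se_block(x, w1, w2)
print(out.shape)
```

Because the gates lie strictly in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps relative to the rest.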

CBCT radiomics features combine machine learning to diagnose cystic lesions in the jaw.

Sha X, Wang C, Sun J, Qi S, Yuan X, Zhang H, Yang J

PubMed · Jul 1, 2025
The aim of this study was to develop a radiomics model based on cone beam CT (CBCT) to differentiate odontogenic cysts (OCs), odontogenic keratocysts (OKCs), and ameloblastomas (ABs). In this retrospective study, CBCT images were collected from 300 patients diagnosed with OC, OKC, and AB who underwent histopathological diagnosis. These patients were randomly divided into training (70%) and test (30%) cohorts. Radiomics features were extracted from the images, and the optimal features were incorporated into a random forest model, a support vector classifier (SVC) model, a logistic regression model, and a soft VotingClassifier combining these three algorithms. The performance of the models was evaluated using a receiver operating characteristic (ROC) curve and the area under the curve (AUC). The optimal model among these was then used to establish the final radiomics prediction model, whose performance was evaluated using the sensitivity, accuracy, precision, specificity, and F1 score in both the training cohort and the test cohort. The six optimal radiomics features were incorporated into a soft VotingClassifier, whose performance was the best overall. The AUC values under the One-vs-Rest (OvR) multi-classification strategy were 0.963 (AB-vs-Rest), 0.928 (OKC-vs-Rest), and 0.919 (OC-vs-Rest) in the training cohort, and 0.814, 0.781, and 0.849, respectively, in the test cohort. The overall accuracy of the model was 0.757 in the training cohort and 0.711 in the test cohort. The VotingClassifier model demonstrated the ability of CBCT radiomics to distinguish multiple types of jaw disease (OC, OKC, and AB) and may have the potential to diagnose accurately under non-invasive conditions.
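The soft-voting ensemble and one-vs-rest AUC evaluation described above map directly onto scikit-learn, as the sketch below shows. A synthetic three-class problem stands in for the OC/OKC/AB radiomics data.

```python
# Soft VotingClassifier (RF + SVC + logistic regression) with OvR AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=4)

vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=4)),
                ("svc", SVC(probability=True, random_state=4)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")                         # average predicted probabilities
vote.fit(X_tr, y_tr)
auc_ovr = roc_auc_score(y_te, vote.predict_proba(X_te), multi_class="ovr")
print(round(auc_ovr, 3))
```

Note that soft voting requires every member to expose `predict_proba`, which is why the SVC is constructed with `probability=True`.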

A Preoperative CT-based Multiparameter Deep Learning and Radiomic Model with Extracellular Volume Parameter Images Can Predict the Tumor Budding Grade in Rectal Cancer Patients.

Tang X, Zhuang Z, Jiang L, Zhu H, Wang D, Zhang L

PubMed · Jul 1, 2025
To investigate a computed tomography (CT)-based multiparameter deep learning-radiomic model (DLRM) for predicting the preoperative tumor budding (TB) grade in patients with rectal cancer. Data from 135 patients with histologically confirmed rectal cancer (85 in the Bd1+2 group and 50 in the Bd3 group) were retrospectively included. Deep learning (DL) features and hand-crafted radiomic (HCR) features were separately extracted and selected from preoperative CT-based extracellular volume (ECV) parameter images and venous-phase images. Six predictive signatures were subsequently constructed from machine learning classification algorithms. Finally, a combined DL and HCR model, the DLRM, was established to predict the TB grade of rectal cancer patients by merging the DL and HCR features from the two image sets. In the training and test cohorts, the AUC values of the DLRM were 0.976 [95% CI: 0.942-0.997] and 0.976 [95% CI: 0.942-1.00], respectively. The DLRM had good output agreement and clinical applicability according to calibration curve analysis and decision curve analysis (DCA), respectively. The DLRM outperformed the individual DL and HCR signatures in terms of predicting the TB grade of rectal cancer patients (p < 0.05). The DLRM can be used to evaluate the TB grade of rectal cancer patients in a noninvasive manner before surgery, thereby providing support for clinical treatment decision-making for these patients.
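Confidence intervals like the DLRM's 0.976 [0.942-0.997] are commonly obtained by bootstrapping the test cases; the abstract does not state the method, so the sketch below shows one standard, assumed approach on synthetic scores: resample patients with replacement and take percentiles of the resampled AUCs.

```python
# Percentile-bootstrap 95% CI for an AUC, on synthetic scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=135)               # 135 patients, as in the study
scores = y + rng.normal(scale=0.7, size=135)   # scores correlated with grade

aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y), size=len(y))  # resample with replacement
    if len(np.unique(y[idx])) < 2:              # skip one-class resamples
        continue
    aucs.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))
```

With only 135 cases the interval is driven by which patients land in each resample, which is why small test cohorts often report wide CIs despite high point estimates.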

Dual-Modality Virtual Biopsy System Integrating MRI and MG for Noninvasive Predicting HER2 Status in Breast Cancer.

Wang Q, Zhang ZQ, Huang CC, Xue HW, Zhang H, Bo F, Guan WT, Zhou W, Bai GJ

PubMed · Jul 1, 2025
Accurate determination of human epidermal growth factor receptor 2 (HER2) expression is critical for guiding targeted therapy in breast cancer. This study aimed to develop and validate a deep learning (DL)-based decision-making visual biomarker system (DM-VBS) for predicting HER2 status using radiomics and DL features derived from magnetic resonance imaging (MRI) and mammography (MG). Radiomics features were extracted from MRI, and DL features were derived from MG. Four submodels were constructed: Model I (MRI-radiomics) and Model III (mammography-DL) for distinguishing HER2-zero/low from HER2-positive cases, and Model II (MRI-radiomics) and Model IV (mammography-DL) for differentiating HER2-zero from HER2-low/positive cases. These submodels were integrated into an XGBoost model for ternary classification of HER2 status. Radiologists assessed imaging features associated with HER2 expression, and model performance was validated using two independent datasets from The Cancer Imaging Archive. A total of 550 patients were divided into training, internal validation, and external validation cohorts. Models I and III achieved an area under the curve (AUC) of 0.800-0.850 for distinguishing HER2-zero/low from HER2-positive cases, while Models II and IV demonstrated AUC values of 0.793-0.847 for differentiating HER2-zero from HER2-low/positive cases. The DM-VBS achieved average accuracies of 85.42%, 80.4%, and 89.68% for HER2-zero, -low, and -positive patients in the validation cohorts, respectively. Imaging features such as lesion size, number of lesions, enhancement type, and microcalcifications significantly differed across HER2 statuses, except between the HER2-zero and -low groups. DM-VBS can predict HER2 status and assist clinicians in making treatment decisions for breast cancer.
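The fusion step, feeding submodel outputs into a meta-classifier for three-way HER2 status, can be sketched as a small stacking example. This is a loose stand-in: XGBoost is swapped for scikit-learn's LogisticRegression, and the two "submodel scores" are simulated rather than taken from real MRI/MG models.

```python
# Stacking sketch: submodel scores -> meta-classifier -> 3-class HER2 status.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
n = 300
y = rng.integers(0, 3, size=n)   # 0: HER2-zero, 1: HER2-low, 2: HER2-positive
# Simulated, noisily informative submodel scores (hypothetical):
score_pos = (y == 2) + rng.normal(scale=0.4, size=n)   # zero/low vs positive
score_low = (y >= 1) + rng.normal(scale=0.4, size=n)   # zero vs low/positive
X = np.column_stack([score_pos, score_low])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=6)
meta = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, meta.predict(X_te))
print(round(acc, 2))
```

The design mirrors the paper's idea: each binary submodel answers one boundary of the ordinal zero/low/positive scale, and the meta-model reconciles the two answers into a single ternary call.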

CT Differentiation and Prognostic Modeling in COVID-19 and Influenza A Pneumonia.

Chen X, Long Z, Lei Y, Liang S, Sima Y, Lin R, Ding Y, Lin Q, Ma T, Deng Y

PubMed · Jul 1, 2025
This study aimed to compare CT features of COVID-19 and Influenza A pneumonia, develop a diagnostic differential model, and explore a prognostic model for lesion resolution. A total of 446 patients diagnosed with COVID-19 and 80 with Influenza A pneumonia underwent baseline chest CT evaluation. Logistic regression analysis was conducted after multivariate analysis, and the results were presented as nomograms. Machine learning models were also evaluated for their diagnostic performance. Prognostic factors for lesion resolution were analyzed using Cox regression after excluding patients who were lost to follow-up, with a nomogram being created. COVID-19 patients showed more features such as thickening of bronchovascular bundles, crazy paving sign, and traction bronchiectasis. Influenza A patients exhibited more features such as consolidation, coarse banding, and pleural effusion (P < 0.05). The logistic regression model achieved AUC values of 0.937 (training) and 0.931 (validation). Machine learning models exhibited area under the curve values ranging from 0.8486 to 0.9017. COVID-19 patients showed better lesion resolution. Independent prognostic factors for resolution at baseline included age, sex, lesion distribution, morphology, coarse banding, and widening of the main pulmonary artery. Distinct imaging features can differentiate COVID-19 from Influenza A pneumonia. The logistic discriminative model and the machine learning models constructed in this study demonstrated efficacy, and the nomogram for the logistic discriminative model exhibited high utility. Patients with COVID-19 may exhibit a better resolution of lesions. Certain baseline characteristics may act as independent prognostic factors for complete resolution of lesions.
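A nomogram turns a fitted logistic model into a points chart: each feature's coefficient is rescaled so the largest single contribution equals 100 points, and a patient's total points map back to a predicted probability. The sketch below uses illustrative, hypothetical coefficients for binary CT features (1 = present), not the study's fitted values.

```python
# Convert hypothetical logistic coefficients into nomogram-style points.
# Positive points push toward COVID-19, negative toward Influenza A.
coefs = {"crazy_paving": 1.8, "traction_bronchiectasis": 1.2,
         "consolidation": -1.5, "pleural_effusion": -0.9}

max_abs = max(abs(b) for b in coefs.values())          # anchor: 100 points
points = {name: round(100 * b / max_abs) for name, b in coefs.items()}
total = sum(p for p in points.values() if p > 0)       # max pro-COVID points
print(points, total)
```

Because the rescaling is a single positive constant, the points chart preserves the model's ranking of patients exactly; only the axis labels change from log-odds to points.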

Deep learning-based lung cancer classification of CT images.

Faizi MK, Qiang Y, Wei Y, Qiao Y, Zhao J, Aftab R, Urrehman Z

PubMed · Jul 1, 2025
Lung cancer remains a leading cause of cancer-related deaths worldwide, with accurate classification of lung nodules being critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates several key innovations: a dual-branch architecture that combines CNNs for local feature extraction and Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
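The ten-fold cross-validation protocol used to evaluate the model is a standard recipe; a minimal sketch with scikit-learn follows, where a random forest and synthetic data stand in for DCSwinB and the LUNA16 CT nodules.

```python
# Stratified ten-fold cross-validation with AUC scoring on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=7)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
scores = cross_val_score(RandomForestClassifier(random_state=7), X, y,
                         cv=cv, scoring="roc_auc")
print(round(scores.mean(), 3), round(scores.std(), 3))
```

Stratification keeps the benign/malignant ratio constant across folds, so each of the ten held-out AUCs is computed on a representative class mix.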