Page 136 of 2432422 results

Heterogeneity Habitats-Derived Radiomics of Gd-EOB-DTPA-Enhanced MRI for Predicting Proliferation of Hepatocellular Carcinoma.

Sun S, Yu Y, Xiao S, He Q, Jiang Z, Fan Y

PubMed · Jul 2 2025
To construct and validate the optimal model for preoperative prediction of proliferative HCC based on habitat-derived radiomics features of Gd-EOB-DTPA-enhanced MRI. A total of 187 patients who underwent Gd-EOB-DTPA-enhanced MRI before curative partial hepatectomy were divided into a training cohort (n=130; 50 proliferative, 80 nonproliferative HCC) and a validation cohort (n=57; 25 proliferative, 32 nonproliferative HCC). Habitat subregions were generated with Gaussian Mixture Model (GMM) clustering, which groups all tumor pixels into similar intratumoural subregions. Radiomic features were extracted from each tumor subregion in the arterial phase (AP) and hepatobiliary phase (HBP). Independent-sample t tests, Pearson correlation coefficients, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm were used to select the optimal subregion features. After feature integration and selection, machine-learning classification models were constructed with the scikit-learn library. Receiver Operating Characteristic (ROC) curves and the DeLong test were used to compare the models' performance in predicting proliferative HCC. The optimal number of clusters was determined to be 3 based on the silhouette coefficient. Twenty, 12, and 23 features were retained from the AP, HBP, and combined AP and HBP habitat (subregions 1, 2, 3) radiomics features, respectively, and three models were constructed from these selections. ROC analysis and the DeLong test showed that the Naive Bayes model built on AP and HBP habitat radiomics (AP-HBP-Hab-Rad) achieved the best performance. Finally, a combined model using the Light Gradient Boosting Machine (LightGBM) algorithm, incorporating AP-HBP-Hab-Rad, age, and alpha-fetoprotein (AFP), was identified as the optimal model for predicting proliferative HCC.
For the training and validation cohorts, the accuracy, sensitivity, specificity, and AUC were 0.923, 0.880, 0.950, and 0.966 (95% CI: 0.937-0.994) and 0.825, 0.680, 0.937, and 0.877 (95% CI: 0.786-0.969), respectively. In the validation cohort, the AUC of the combined model was significantly higher than that of the other models (P<0.01). A combined model including AP-HBP-Hab-Rad, serum AFP, and age, built with the LightGBM algorithm, can satisfactorily predict proliferative HCC preoperatively.
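The habitat-generation step described above, GMM clustering of tumour pixels with the cluster count chosen by silhouette coefficient, can be sketched with scikit-learn; the synthetic two-feature `pixels` array below is a stand-in for per-voxel AP/HBP intensities, not the paper's data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-voxel intensity pairs (e.g. AP and HBP values);
# three well-separated groups play the role of three intratumoural habitats.
pixels = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 2))
                    for m in (0.0, 2.0, 4.0)])

# Choose the habitat count by silhouette coefficient, as described above.
scores = {}
for k in range(2, 6):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(pixels)
    scores[k] = silhouette_score(pixels, labels)
best_k = max(scores, key=scores.get)
```

With clearly separated groups the silhouette criterion recovers three habitats, matching the optimum reported above.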

A deep learning model for early diagnosis of Alzheimer's disease combining 3D CNN and Video Swin Transformer.

Zhou J, Wei Y, Li X, Zhou W, Tao R, Hua Y, Liu H

PubMed · Jul 2 2025
Alzheimer's disease (AD) is a neurodegenerative disorder predominantly observed in the geriatric population. Early diagnosis of AD is highly beneficial to patients, both for prevention and for treatment. Our team therefore proposes a novel deep learning model named 3D-CNN-VSwinFormer. The model consists of two components: the first is a 3D CNN equipped with a 3D Convolutional Block Attention Module (3D CBAM), and the second is a fine-tuned Video Swin Transformer. Our investigation extracts features from subject-level 3D magnetic resonance imaging (MRI) data, retaining only a single 3D MRI image per participant. This method circumvents data leakage and addresses the inability of 2D slices to capture global spatial information. We validated the proposed model on the ADNI dataset. In differentiating between AD patients and cognitively normal (CN) individuals, it achieved accuracy of 92.92% and an AUC of 0.9660. Compared with other studies on AD and CN recognition, our model yielded superior results, improving the efficiency of AD diagnosis.

Intelligent diagnosis model for chest X-ray diseases based on a convolutional neural network.

Yang S, Wu Y

PubMed · Jul 2 2025
To address misdiagnosis caused by feature coupling in multi-label medical image classification, this study introduces a chest X-ray pathology reasoning method that combines hierarchical attention convolutional networks with a multi-label decoupling loss function, aiming to enhance the precise identification of complex lesions. An adaptive dilated convolution module with 3 × 3 deformable kernels dynamically captures multi-scale lesion morphological features, and a channel-space dual-path attention mechanism enables precise feature selection for lung field partitioning and lesion localization, thereby improving clinical disease prediction accuracy. Cross-scale skip connections fuse shallow texture and deep semantic information, enhancing microlesion detection. A KL-divergence-constrained contrastive loss function decouples the representations of the 14 pathologies via orthogonal regularization, effectively resolving multi-label coupling. Experiments on ChestX-ray14 show a weighted F1-score of 0.97, a Hamming loss of 0.086, and AUC values exceeding 0.94 for all pathologies. This study provides a reliable tool for multi-disease collaborative diagnosis.
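The metrics reported for the 14-label classifier (weighted F1, Hamming loss, per-pathology AUC) can be illustrated with scikit-learn; the labels and probabilities below are synthetic stand-ins, not ChestX-ray14 outputs:

```python
import numpy as np
from sklearn.metrics import f1_score, hamming_loss, roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_labels = 100, 14
y_true = rng.integers(0, 2, size=(n_samples, n_labels))
# Hypothetical probabilities: high where the label is present, low otherwise
y_prob = np.clip(y_true * 0.8 + rng.uniform(0, 0.3, size=y_true.shape), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

wf1 = f1_score(y_true, y_pred, average="weighted")   # weighted F1 across labels
hl = hamming_loss(y_true, y_pred)                    # fraction of wrong labels
aucs = [roc_auc_score(y_true[:, j], y_prob[:, j])    # one AUC per pathology
        for j in range(n_labels)]
```

These are the standard multi-label formulations; the scores are trivially perfect here because the synthetic probabilities always agree with the labels.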

Habitat-Derived Radiomic Features of Planning Target Volume to Determine the Local Recurrence After Radiotherapy in Patients with Gliomas: A Feasibility Study.

Wang Y, Lin L, Hu Z, Wang H

PubMed · Jul 2 2025
To develop a machine learning-based model for predicting local recurrence after radiotherapy in patients with gliomas, with interpretability provided by SHapley Additive exPlanations (SHAP). We retrospectively enrolled 145 patients with pathologically confirmed gliomas who underwent brain radiotherapy (training: validation = 102:43). Physiological and structural magnetic resonance imaging (MRI) were used to define habitat regions. A total of 2153 radiomic features were extracted from each MRI sequence in each habitat region. Relief and Recursive Feature Elimination were used for radiomic feature selection. Support vector machine (SVM) and random forest models incorporating clinical and radiomic features were constructed for each habitat region, and the SHAP method was used to explain the predictive models. In the training and validation cohorts, the Physiological_Habitat1 (e-THRIVE) radiomic SVM model achieved the best AUCs among the radiomic models: 0.703 (95% CI 0.569-0.836) and 0.670 (95% CI 0.623-0.717), respectively. SHAP summary and force plots were used to interpret this best-performing model. Radiomic features derived from Physiological_Habitat1 (e-THRIVE) were predictive of local recurrence in glioma patients following radiotherapy, and the SHAP method provided insights into how the tumor microenvironment might influence the effectiveness of radiotherapy in postoperative gliomas.
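The selection-plus-SVM pipeline described above can be sketched with scikit-learn's Recursive Feature Elimination (Relief has no scikit-learn implementation, so only RFE is shown); the features are synthetic stand-ins and the dimensions are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for habitat radiomic features (the paper extracts 2153
# per sequence; 50 are used here to keep the sketch fast)
X, y = make_classification(n_samples=145, n_features=50, n_informative=8,
                           random_state=0)

# RFE with a linear SVM ranks features, then a linear SVM classifies
model = Pipeline([
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=10)),
    ("svm", SVC(kernel="linear")),
])
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

Wrapping selection inside the cross-validation pipeline, rather than selecting once on all data, avoids the optimistic bias that leaks test information into feature choice.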

Individualized structural network deviations predict surgical outcome in mesial temporal lobe epilepsy: a multicentre validation study.

Feng L, Han H, Mo J, Huang Y, Huang K, Zhou C, Wang X, Zhang J, Yang Z, Liu D, Zhang K, Chen H, Liu Q, Li R

PubMed · Jul 2 2025
Surgical resection is an effective treatment for medically refractory mesial temporal lobe epilepsy (mTLE); however, more than one-third of patients fail to achieve seizure freedom after surgery. This study aimed to evaluate preoperative individual morphometric network characteristics and to develop a machine learning model to predict surgical outcome in mTLE. This multicentre, retrospective study included 189 mTLE patients who underwent unilateral temporal lobectomy and 78 normal controls between February 2018 and June 2023. Postoperative seizure outcomes were categorized as seizure-free (SF, n = 125) or non-seizure-free (NSF, n = 64) at a minimum of one year of follow-up. The preoperative individualized structural covariance network (iSCN), derived from T1-weighted MRI, was constructed for each patient by calculating deviations from a control-based reference distribution, and was further divided into the surgery network and the surgically spared network using a standard resection mask merged with each patient's individual lacuna. Regional features were selected separately from bilateral, ipsilateral and contralateral iSCN abnormalities to train support vector machine models, validated in two independent external datasets. NSF patients showed greater iSCN deviations from the normative distribution in the surgically spared network than SF patients (P = 0.02). These deviations were widely distributed across contralateral functional modules (P < 0.05, false discovery rate corrected). Seizure outcome was best predicted by the contralateral iSCN features, with an accuracy of 82% (P < 0.05, permutation test) and an area under the receiver operating characteristic curve (AUC) of 0.81, with the default mode and fronto-parietal areas contributing most. External validation in two independent cohorts showed accuracies of 80% and 88%, with AUCs of 0.80 and 0.82, respectively, underscoring the generalizability of the model.
This study provides reliable personalized structural biomarkers for predicting surgical outcome in mTLE and has the potential to assist tailored surgical treatment strategies.
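The core of the iSCN idea, scoring each patient region against a normative reference built from controls, can be sketched as regional z-scores; the 90-region atlas size and the injected abnormality below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_controls, n_regions = 78, 90        # 78 controls as above; 90-region atlas assumed
controls = rng.normal(size=(n_controls, n_regions))

# Control-based normative reference, per region
mu = controls.mean(axis=0)
sigma = controls.std(axis=0, ddof=1)

def deviation_map(patient):
    """Regional z-score deviations of one patient from the normative reference."""
    return (patient - mu) / sigma

patient = rng.normal(size=n_regions)
patient[:5] += 6.0                    # inject a clear abnormality into five regions
z = deviation_map(patient)
abnormal = np.where(np.abs(z) > 2)[0] # regions deviating beyond |z| = 2
```

Deviation vectors like `z`, split into surgery and surgically spared subsets, are the kind of regional features the SVM models above would consume.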

Development and validation of a deep learning ultrasound radiomics model for predicting drug resistance in lymph node tuberculosis: a multicenter study.

Zhang X, Dong Z, Li H, Cheng Y, Tang W, Ni T, Zhang Y, Ai Q, Yang G

PubMed · Jul 2 2025
To develop and validate an ensemble machine learning ultrasound radiomics model for predicting drug resistance in lymph node tuberculosis (LNTB). This multicenter study retrospectively included 234 cervical LNTB patients from one center, randomly divided into training (70%) and internal validation (30%) cohorts. Radiomic features were extracted from ultrasound images, and an L1-based method was used for feature selection. A predictive model combining ensemble machine learning and the AdaBoost algorithm was developed to predict drug resistance. Model performance was assessed on independent external test sets (Test A and Test B) from two other centers, with metrics including AUC, accuracy, precision, recall, F1 score, and decision curve analysis. Of the 851 radiomic features extracted, 161 were selected for the model. The model achieved AUCs of 0.998 (95% CI: 0.996-0.999), 0.798 (95% CI: 0.692-0.904), 0.846 (95% CI: 0.700-0.992), and 0.831 (95% CI: 0.688-0.974) in the training, internal validation, and external test sets A and B, respectively. Decision curve analysis showed a substantial net benefit across a threshold probability range of 0.38 to 0.57. The LNTB drug-resistance prediction model demonstrated high diagnostic efficacy in both internal and external validation. Radiomics, through the application of ensemble machine learning algorithms, provides new insights into drug resistance mechanisms and offers potential strategies for more effective patient treatment. Keywords: lymph node tuberculosis; drug resistance; ultrasound; radiomics; machine learning.
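The two stages named above, L1-based feature selection followed by an AdaBoost classifier, can be sketched with scikit-learn on synthetic stand-ins for the 851 radiomic features (the L1 penalty strength and estimator count are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the ultrasound radiomic feature matrix
X, y = make_classification(n_samples=234, n_features=100, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1-penalised logistic regression zeroes out uninformative features
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
).fit(X_tr, y_tr)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1])
```

Fitting the selector only on the training split mirrors the train/validation separation the study describes.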

Multichannel deep learning prediction of major pathological response after neoadjuvant immunochemotherapy in lung cancer: a multicenter diagnostic study.

Geng Z, Li K, Mei P, Gong Z, Yan R, Huang Y, Zhang C, Zhao B, Lu M, Yang R, Wu G, Ye G, Liao Y

PubMed · Jul 2 2025
This study aimed to develop a pretreatment CT-based multichannel predictor integrating deep learning features encoded by Transformer models for preoperative diagnosis of major pathological response (MPR) in non-small cell lung cancer (NSCLC) patients receiving neoadjuvant immunochemotherapy. This multicenter diagnostic study retrospectively included 332 NSCLC patients from four centers. Pretreatment computed tomography images were preprocessed and segmented into region-of-interest cubes for radiomics modeling. These cubes were cropped into four groups of 2-dimensional image modules. A GoogLeNet architecture was trained independently on each group within a multichannel framework, with gradient-weighted class activation mapping and SHapley Additive exPlanations values for visualization. Deep learning features were extracted and fused across the four image groups using the Transformer fusion model. After model training, performance was evaluated via the area under the curve (AUC), sensitivity, specificity, F1 score, confusion matrices, calibration curves, decision curve analysis, integrated discrimination improvement, net reclassification improvement, and the DeLong test. The dataset was allocated into training (n = 172, Center 1), internal validation (n = 44, Center 1), and external test (n = 116, Centers 2-4) cohorts. Four optimal deep learning models and the best Transformer fusion model were developed. In the external test cohort, the traditional radiomics model exhibited an AUC of 0.736 [95% confidence interval (CI): 0.645-0.826]. The optimal deep learning imaging module showed a superior AUC of 0.855 (95% CI: 0.777-0.934). The fusion model, named Transformer_GoogLeNet, further improved classification accuracy (AUC = 0.924, 95% CI: 0.875-0.973). The new method of fusing multichannel deep learning with the Transformer encoder can accurately diagnose whether NSCLC patients receiving neoadjuvant immunochemotherapy will achieve MPR.
Our findings may support improved surgical planning and contribute to better treatment outcomes through more accurate preoperative assessment.
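Among the comparison metrics listed above, the net reclassification improvement (NRI) is simple to illustrate; this is a sketch of the continuous (category-free) NRI on hypothetical risk scores from an old model and a uniformly improved new one, not the study's computation:

```python
import numpy as np

def nri(p_old, p_new, y):
    """Continuous (category-free) net reclassification improvement."""
    up, down = p_new > p_old, p_new < p_old
    event, nonevent = (y == 1), (y == 0)
    return ((up[event].mean() - down[event].mean())
            + (down[nonevent].mean() - up[nonevent].mean()))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 50)                  # hypothetical MPR labels
p_old = rng.uniform(0, 1, 50)               # risk scores from the old model
# A new model that moves every score toward the true label
p_new = np.clip(p_old + np.where(y == 1, 0.1, -0.1), 0, 1)

print(nri(p_old, p_new, y))                 # prints 2.0: every score moved correctly
```

The continuous NRI ranges from -2 to +2; it reaches +2 here because every event's score rises and every non-event's score falls.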

A novel neuroimaging-based early detection framework for Alzheimer's disease using deep learning.

Alasiry A, Shinan K, Alsadhan AA, Alhazmi HE, Alanazi F, Ashraf MU, Muhammad T

PubMed · Jul 2 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that significantly impacts cognitive function, posing a major global health challenge. Despite its rising prevalence, particularly in low- and middle-income countries, early diagnosis remains inadequate; estimates put the affected population at over 55 million as of 2022, a figure expected to triple by 2050. Accurate early detection is critical for effective intervention. This study presents Neuroimaging-based Early Detection of Alzheimer's Disease using Deep Learning (NEDA-DL), a novel computer-aided diagnostic (CAD) framework leveraging a hybrid ResNet-50 and AlexNet architecture optimized with CUDA-based parallel processing. The proposed deep learning model processes MRI and PET neuroimaging data, utilizing depthwise separable convolutions to enhance computational efficiency. Performance evaluation using key metrics, including accuracy, sensitivity, specificity, and F1-score, demonstrates state-of-the-art classification performance, with the Softmax classifier achieving 99.87% accuracy. Comparative analyses further validate the superiority of NEDA-DL over existing methods. By integrating structural and functional neuroimaging insights, this approach enhances diagnostic precision and supports clinical decision-making in Alzheimer's disease detection.

Optimizing the early diagnosis of neurological disorders through the application of machine learning for predictive analytics in medical imaging.

Sadu VB, Bagam S, Naved M, Andluru SKR, Ramineni K, Alharbi MG, Sengan S, Khadhar Moideen R

PubMed · Jul 2 2025
Early diagnosis of Neurological Disorders (ND) such as Alzheimer's disease (AD) and Brain Tumors (BT) can be highly challenging, since these diseases cause only minor changes in the brain's anatomy. Magnetic Resonance Imaging (MRI) is a vital tool for diagnosing and visualizing these ND; however, standard techniques that depend on human analysis can be inaccurate and time-consuming, and may miss the early-stage signs necessary for effective treatment. Spatial Feature Extraction (FE) has been improved by Convolutional Neural Networks (CNN) and hybrid models, both advances in Deep Learning (DL). However, these methods frequently fail to capture temporal dynamics, which is significant for a complete assessment. The present investigation introduces STGCN-ViT, a hybrid model that integrates CNN, Spatial-Temporal Graph Convolutional Network (STGCN), and Vision Transformer (ViT) components to address these gaps: EfficientNet-B0 performs spatial FE, STGCN performs temporal FE, and ViT performs FE through its attention mechanism. Using the Open Access Series of Imaging Studies (OASIS) and Harvard Medical School (HMS) benchmark datasets, the recommended approach proved effective: Group A attained an accuracy of 93.56%, a precision of 94.41%, and an Area under the Receiver Operating Characteristic Curve (AUC-ROC) score of 94.63%. Compared with standard and transformer-based models, the model attained better results for Group B, with an accuracy of 94.52%, precision of 95.03%, and AUC-ROC score of 95.24%. These results support the model's use in real-time medical applications, offering the prospect of accurate diagnosis of ND at an early stage.

Lightweight convolutional neural networks using nonlinear Lévy chaotic moth flame optimisation for brain tumour classification via efficient hyperparameter tuning.

Dehkordi AA, Neshat M, Khosravian A, Thilakaratne M, Safaa Sadiq A, Mirjalili S

PubMed · Jul 2 2025
Deep convolutional neural networks (CNNs) have seen significant growth in medical image classification applications due to their ability to automate feature extraction, leverage hierarchical learning, and deliver high classification accuracy. However, deep CNNs require substantial computational power and memory, particularly for large datasets and complex architectures. In addition, optimising the hyperparameters of deep CNNs, although critical for enhancing model performance, is challenging without access to high-performance computing resources because of the computational costs involved. To address these limitations, this study presents a fast and efficient model that aims to achieve superior classification performance compared to popular deep CNNs by developing lightweight CNNs combined with the Nonlinear Lévy Chaotic Moth Flame Optimiser (NLCMFO) for automatic hyperparameter optimisation. NLCMFO integrates Lévy flight, chaotic parameters, and nonlinear control mechanisms to enhance the exploration capabilities of the Moth Flame Optimiser during the search phase, while also leveraging the Lévy flight theorem to improve the exploitation phase. To assess the efficiency of the proposed model, empirical analyses were performed on a dataset of 2314 brain tumour detection images (1245 images of brain tumours and 1069 normal brain images). The evaluation results indicate that CNN_NLCMFO outperformed a non-optimised CNN (92.40% accuracy) by 5% and surpassed established models such as DarkNet19 (96.41%), EfficientNetB0 (96.32%), Xception (96.41%), ResNet101 (92.15%), and InceptionResNetV2 (95.63%) by margins ranging from 1% to 5.25%. The findings demonstrate that the lightweight CNN combined with NLCMFO provides a computationally efficient yet highly accurate solution for medical image classification, addressing the challenges associated with traditional deep CNNs.
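The NLCMFO itself is not specified in this abstract; as a sketch of the underlying idea only, here is a greedy hyperparameter search whose proposals are perturbed by Mantegna-style Lévy-flight steps, minimising a toy quadratic surrogate for validation loss (the real objective would train the lightweight CNN):

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(beta=1.5, size=2):
    """Mantegna's algorithm for heavy-tailed Levy-flight step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def objective(x):
    # Toy surrogate for validation loss over two normalised hyperparameters
    # (e.g. log-learning-rate and dropout); assumed optimum at (0.3, 0.7).
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

best_x = rng.uniform(0, 1, 2)          # random initial hyperparameter vector
best_f = objective(best_x)
for _ in range(200):
    cand = np.clip(best_x + 0.1 * levy_step(), 0, 1)   # Levy-flight exploration
    f = objective(cand)
    if f < best_f:                                     # greedy exploitation
        best_x, best_f = cand, f
```

The heavy-tailed steps occasionally jump far from the current best, which is the exploration property the NLCMFO borrows from Lévy flight; the chaotic and nonlinear control terms of the actual optimiser are omitted here.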
