Multichannel deep learning prediction of major pathological response after neoadjuvant immunochemotherapy in lung cancer: a multicenter diagnostic study.

Geng Z, Li K, Mei P, Gong Z, Yan R, Huang Y, Zhang C, Zhao B, Lu M, Yang R, Wu G, Ye G, Liao Y

PubMed · Jul 2, 2025
This study aimed to develop a pretreatment CT-based multichannel predictor integrating deep learning features encoded by Transformer models for preoperative diagnosis of major pathological response (MPR) in non-small cell lung cancer (NSCLC) patients receiving neoadjuvant immunochemotherapy. This multicenter diagnostic study retrospectively included 332 NSCLC patients from four centers. Pretreatment computed tomography images were preprocessed and segmented into region-of-interest cubes for radiomics modeling. These cubes were cropped into four groups of two-dimensional image modules. A GoogLeNet architecture was trained independently on each group within a multichannel framework, with gradient-weighted class activation mapping and SHapley Additive exPlanations values used for visualization. Deep learning features were extracted and fused across the four image groups using the Transformer fusion model. After model training, performance was evaluated via the area under the curve (AUC), sensitivity, specificity, F1 score, confusion matrices, calibration curves, decision curve analysis, integrated discrimination improvement, net reclassification improvement, and the DeLong test. The dataset was allocated into training (n = 172, Center 1), internal validation (n = 44, Center 1), and external test (n = 116, Centers 2-4) cohorts. Four optimal deep learning models and the best Transformer fusion model were developed. In the external test cohort, the traditional radiomics model exhibited an AUC of 0.736 [95% confidence interval (CI): 0.645-0.826]. The optimal deep learning imaging module showed a superior AUC of 0.855 (95% CI: 0.777-0.934). The fusion model, named Transformer_GoogLeNet, further improved classification accuracy (AUC = 0.924, 95% CI: 0.875-0.973). Fusing multichannel deep learning features with a Transformer encoder can accurately diagnose whether NSCLC patients receiving neoadjuvant immunochemotherapy will achieve MPR. Our findings may support improved surgical planning and contribute to better treatment outcomes through more accurate preoperative assessment.
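
The fusion step described above can be pictured, in simplified form, as four per-channel CNN backbones whose feature vectors are treated as a short token sequence for a Transformer encoder. The PyTorch layout below is an illustrative sketch under assumed feature dimensions, layer counts, and head names; it is not the authors' Transformer_GoogLeNet code.

```python
# Minimal sketch (not the authors' code) of fusing features from four
# per-channel GoogLeNet backbones with a Transformer encoder.
import torch
import torch.nn as nn
from torchvision.models import googlenet

class MultichannelTransformerFusion(nn.Module):
    def __init__(self, n_channels: int = 4, feat_dim: int = 256):
        super().__init__()
        # One GoogLeNet backbone per 2D image group, each followed by a small
        # projection to a shared feature dimension (assumed value).
        self.backbones = nn.ModuleList([
            nn.Sequential(googlenet(weights=None, aux_logits=False, init_weights=True),
                          nn.Linear(1000, feat_dim))
            for _ in range(n_channels)
        ])
        encoder_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                                   batch_first=True)
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(feat_dim, 2)  # MPR vs. non-MPR

    def forward(self, groups):  # groups: list of 4 tensors, each (B, 3, H, W)
        tokens = torch.stack([b(x) for b, x in zip(self.backbones, groups)], dim=1)
        fused = self.fusion(tokens).mean(dim=1)  # pool over the 4 channel tokens
        return self.head(fused)
```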

Classification based deep learning models for lung cancer and disease using medical images

Ahmad Chaddad, Jihao Peng, Yihang Wu

arXiv preprint · Jul 2, 2025
The use of deep learning (DL) in medical image analysis has significantly improved the ability to predict lung cancer. In this study, we introduce a novel deep convolutional neural network (CNN) model, named ResNet+, which is based on the established ResNet framework. This model is specifically designed to improve the prediction of lung cancer and lung disease from medical images. To address the challenge of missing feature information that occurs during the downsampling process in CNNs, we integrate the ResNet-D module, a variant designed to enhance feature extraction capabilities by modifying the downsampling layers, into the traditional ResNet model. Furthermore, a convolutional attention module was incorporated into the bottleneck layers to enhance model generalization by allowing the network to focus on relevant regions of the input images. We evaluated the proposed model using five public datasets, comprising lung cancer (LC2500, n = 3183; IQ-OTH/NCCD, n = 1336; and LCC, n = 25000 images) and lung disease (ChestXray, n = 5856; and COVIDx-CT, n = 425024 images). To address class imbalance, we used data augmentation techniques to artificially increase the representation of underrepresented classes in the training dataset. The experimental results show that the ResNet+ model achieved remarkable accuracy/F1, reaching 98.14/98.14% on the LC25000 dataset and 99.25/99.13% on the IQ-OTH/NCCD dataset. Furthermore, the ResNet+ model reduced computational cost compared to the original ResNet series in predicting lung cancer images. The proposed model outperformed the baseline models on publicly available datasets, achieving better performance metrics. Our codes are publicly available at https://github.com/AIPMLab/Graduation-2024/tree/main/Peng.
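
The ResNet-D idea mentioned above moves the spatial stride of the projection shortcut into an average-pooling layer so that the 1x1 convolution no longer skips activations. The snippet below is a minimal PyTorch sketch of that shortcut alongside the conventional one for comparison; it is not the ResNet+ implementation from the linked repository.

```python
# Minimal sketch of a ResNet-D style downsampling shortcut: average pooling
# handles the spatial stride, then a stride-1 1x1 conv changes channel count.
import torch.nn as nn

def resnet_d_shortcut(in_ch: int, out_ch: int, stride: int = 2) -> nn.Sequential:
    return nn.Sequential(
        nn.AvgPool2d(kernel_size=stride, stride=stride, ceil_mode=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1, bias=False),
        nn.BatchNorm2d(out_ch),
    )

# Conventional ResNet projection shortcut for comparison: the strided 1x1 conv
# reads only one pixel per 2x2 block, discarding 3/4 of the activations.
def plain_shortcut(in_ch: int, out_ch: int, stride: int = 2) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(out_ch),
    )
```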

Individualized structural network deviations predict surgical outcome in mesial temporal lobe epilepsy: a multicentre validation study.

Feng L, Han H, Mo J, Huang Y, Huang K, Zhou C, Wang X, Zhang J, Yang Z, Liu D, Zhang K, Chen H, Liu Q, Li R

PubMed · Jul 2, 2025
Surgical resection is an effective treatment for medically refractory mesial temporal lobe epilepsy (mTLE); however, more than one-third of patients fail to achieve seizure freedom after surgery. This study aimed to evaluate preoperative individual morphometric network characteristics and develop a machine learning model to predict surgical outcome in mTLE. This multicentre, retrospective study included 189 mTLE patients who underwent unilateral temporal lobectomy and 78 normal controls between February 2018 and June 2023. Postoperative seizure outcomes were categorized as seizure-free (SF, n = 125) or non-seizure-free (NSF, n = 64) at a minimum of one-year follow-up. The preoperative individualized structural covariance network (iSCN) derived from T1-weighted MRI was constructed for each patient by calculating deviations from the control-based reference distribution, and further divided into the surgery network and the surgically spared network using a standard resection mask obtained by merging each patient's individual lacuna. Regional features were selected separately from bilateral, ipsilateral and contralateral iSCN abnormalities to train support vector machine models, validated in two independent external datasets. NSF patients showed greater iSCN deviations from the normative distribution in the surgically spared network compared to SF patients (P = 0.02). These deviations were widely distributed in the contralateral functional modules (P < 0.05, false discovery rate corrected). Seizure outcome was optimally predicted by the contralateral iSCN features, with an accuracy of 82% (P < 0.05, permutation test) and an area under the receiver operating characteristic curve (AUC) of 0.81, with the default mode and fronto-parietal areas contributing most. External validation in two independent cohorts showed accuracies of 80% and 88%, with AUCs of 0.80 and 0.82, respectively, emphasizing the generalizability of the model. This study provides reliable personalized structural biomarkers for predicting surgical outcome in mTLE and has the potential to assist tailored surgical treatment strategies.
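
The core prediction pipeline above (regional deviations from a control-based reference feeding a support vector machine) can be sketched in a few lines of scikit-learn. The example below runs on synthetic data with assumed region counts and labels; a simple z-score stands in for the paper's iSCN construction and feature selection.

```python
# Minimal sketch, under assumed inputs, of deviation-based SVM outcome prediction.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_regions = 100
controls = rng.normal(size=(78, n_regions))    # control morphometric features
patients = rng.normal(size=(189, n_regions))   # patient morphometric features
labels = rng.integers(0, 2, size=189)          # 1 = seizure-free, 0 = not

# Deviation of each patient region from the control reference distribution.
z_dev = (patients - controls.mean(axis=0)) / controls.std(axis=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
print(cross_val_score(clf, z_dev, labels, cv=5, scoring="roc_auc").mean())
```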

Lightweight convolutional neural networks using nonlinear Lévy chaotic moth flame optimisation for brain tumour classification via efficient hyperparameter tuning.

Dehkordi AA, Neshat M, Khosravian A, Thilakaratne M, Safaa Sadiq A, Mirjalili S

PubMed · Jul 2, 2025
Deep convolutional neural networks (CNNs) have seen significant growth in medical image classification applications due to their ability to automate feature extraction, leverage hierarchical learning, and deliver high classification accuracy. However, deep CNNs require substantial computational power and memory, particularly for large datasets and complex architectures. Additionally, optimising the hyperparameters of deep CNNs, although critical for enhancing model performance, is challenging due to the high computational costs involved, making it difficult without access to high-performance computing resources. To address these limitations, this study presents a fast and efficient model that aims to achieve superior classification performance compared to popular deep CNNs by developing lightweight CNNs combined with the Nonlinear Lévy chaotic moth flame optimiser (NLCMFO) for automatic hyperparameter optimisation. NLCMFO integrates Lévy flight, chaotic parameters, and nonlinear control mechanisms to enhance the exploration capabilities of the Moth Flame Optimiser during the search phase while also leveraging the Lévy flight theorem to improve the exploitation phase. To assess the efficiency of the proposed model, empirical analyses were performed using a dataset of 2314 brain tumour detection images (1245 images of brain tumours and 1069 normal brain images). The evaluation results indicate that the CNN_NLCMFO outperformed a non-optimised CNN (92.40% accuracy) by 5% and surpassed established models such as DarkNet19 (96.41%), EfficientNetB0 (96.32%), Xception (96.41%), ResNet101 (92.15%), and InceptionResNetV2 (95.63%) by margins ranging from 1 to 5.25%. The findings demonstrate that the lightweight CNN combined with NLCMFO provides a computationally efficient yet highly accurate solution for medical image classification, addressing the challenges associated with traditional deep CNNs.
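
A minimal sketch of the kind of Lévy-flight-driven population search the abstract describes is given below. It implements Mantegna's algorithm for Lévy step lengths and a simple moth-flame-style update toward the current best solution; the chaotic maps and nonlinear control terms of the full NLCMFO are omitted, and the objective function is only a stand-in for the validation error of a lightweight CNN trained with candidate hyperparameters.

```python
# Simplified Lévy-flight hyperparameter search in the spirit of NLCMFO.
import numpy as np

def levy_step(size, beta=1.5, rng=np.random.default_rng()):
    """Mantegna's algorithm for Lévy-distributed step lengths."""
    from math import gamma, pi, sin
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def objective(x):
    # Placeholder: in practice this would be the validation error of a CNN
    # trained with hyperparameters x (learning rate, filters, dropout, ...).
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(1)
dim, n_moths, iters = 3, 20, 50
pop = rng.random((n_moths, dim))                 # candidate hyperparameter vectors
for t in range(iters):
    fitness = np.array([objective(m) for m in pop])
    best = pop[fitness.argmin()]
    # Moths move toward the best solution; Lévy steps keep exploration alive.
    pop = np.clip(best + 0.5 * (pop - best)
                  + 0.01 * levy_step((n_moths, dim), rng=rng), 0.0, 1.0)

fitness = np.array([objective(m) for m in pop])
print("best hyperparameter vector:", pop[fitness.argmin()], "objective:", fitness.min())
```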

Artificial Intelligence-Driven Cancer Diagnostics: Enhancing Radiology and Pathology through Reproducibility, Explainability, and Multimodality.

Khosravi P, Fuchs TJ, Ho DJ

PubMed · Jul 2, 2025
The integration of artificial intelligence (AI) in cancer research has significantly advanced radiology, pathology, and multimodal approaches, offering unprecedented capabilities in image analysis, diagnosis, and treatment planning. AI techniques can provide standardized assistance to clinicians for diagnostic and predictive tasks that are otherwise conducted manually with low reproducibility. These AI methods can additionally provide explainability to help clinicians make the best decisions for patient care. This review explores state-of-the-art AI methods, focusing on their application in image classification, image segmentation, multiple instance learning, generative models, and self-supervised learning. In radiology, AI enhances tumor detection, diagnosis, and treatment planning through advanced imaging modalities and real-time applications. In pathology, AI-driven image analysis improves cancer detection, biomarker discovery, and diagnostic consistency. Multimodal AI approaches can integrate data from radiology, pathology, and genomics to provide comprehensive diagnostic insights. Emerging trends, challenges, and future directions in AI-driven cancer research are discussed, emphasizing the transformative potential of these technologies in improving patient outcomes and advancing cancer care. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.

Classifying and diagnosing Alzheimer's disease with deep learning using 6735 brain MRI images.

Mousavi SM, Moulaei K, Ahmadian L

PubMed · Jul 2, 2025
Traditional diagnostic methods for Alzheimer's disease often suffer from low accuracy and lengthy processing times, delaying crucial interventions and patient care. Deep convolutional neural networks (CNNs) trained on MRI data can enhance diagnostic precision. This study aims to utilize deep CNNs trained on MRI data for Alzheimer's disease diagnosis and classification. The Alzheimer MRI Preprocessed Dataset was used, which includes 6735 structural brain MRI scans. After data preprocessing and normalization, four models (Xception, VGG19, VGG16, and InceptionResNetV2) were utilized. Regularization and hyperparameter tuning were applied to improve training. Early stopping and a dynamic learning rate were used to prevent overfitting. Model performance was evaluated based on accuracy, F-score, recall, and precision. The InceptionResNetV2 model showed superior performance in predicting Alzheimer's patients, with an accuracy, F-score, recall, and precision of 0.99. The Xception model followed, with precision, recall, and F-score of 0.97 and an accuracy of 96.89%. Notably, InceptionResNetV2 and VGG19 demonstrated faster learning, reaching convergence sooner and requiring fewer training iterations than the other models. The InceptionResNetV2 model achieved the highest performance, with precision, recall, and F-score of 100% for both the mild and moderate dementia classes. The Xception model also performed well, attaining 100% for the moderate dementia class and 99-100% for the mild dementia class. Additionally, the VGG16 and VGG19 models showed strong results, with VGG16 reaching 100% precision, recall, and F-score for the moderate dementia class. Deep convolutional neural networks enhance Alzheimer's diagnosis, surpassing traditional methods with improved precision and efficiency. Models like InceptionResNetV2 show outstanding performance, potentially speeding up patient interventions.
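
For readers wanting a concrete starting point, the training setup described above (a pretrained InceptionResNetV2 backbone, early stopping, and a dynamic learning rate) corresponds roughly to the Keras sketch below. The directory layout, image size, and four-class output are assumptions for illustration, not details taken from the paper.

```python
# Minimal transfer-learning sketch with early stopping and a dynamic learning rate.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "alzheimer_mri/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "alzheimer_mri/val", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(4, activation="softmax"),      # assumed 4 dementia classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=2),  # dynamic LR
]
model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)
```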

Optimizing the early diagnosis of neurological disorders through the application of machine learning for predictive analytics in medical imaging.

Sadu VB, Bagam S, Naved M, Andluru SKR, Ramineni K, Alharbi MG, Sengan S, Khadhar Moideen R

PubMed · Jul 2, 2025
Early diagnosis of Neurological Disorders (ND) such as Alzheimer's disease (AD) and Brain Tumors (BT) can be highly challenging since these diseases cause only minor changes in the brain's anatomy. Magnetic Resonance Imaging (MRI) is a vital tool for diagnosing and visualizing these ND; however, standard techniques that depend on human analysis can be inaccurate, are time-consuming, and may miss the early-stage signs necessary for effective treatment. Spatial Feature Extraction (FE) has been improved by Convolutional Neural Networks (CNN) and hybrid models, both of which are advances in Deep Learning (DL). However, these analysis methods frequently fail to capture temporal dynamics, which are significant for a complete assessment. The present investigation introduces the STGCN-ViT, a hybrid model that integrates CNN + Spatial-Temporal Graph Convolutional Network (STGCN) + Vision Transformer (ViT) components to address these gaps. The model uses EfficientNet-B0 for spatial FE, STGCN for temporal FE, and ViT for attention-based FE. Applying the Open Access Series of Imaging Studies (OASIS) and Harvard Medical School (HMS) benchmark datasets, the proposed approach proved effective: Group A attained an accuracy of 93.56%, a precision of 94.41%, and an Area under the Receiver Operating Characteristic Curve (AUC-ROC) score of 94.63%. Compared with standard and transformer-based models, the model attained better results for Group B, with an accuracy of 94.52%, precision of 95.03%, and AUC-ROC score of 95.24%. These results support the model's use in real-time medical applications by demonstrating the feasibility of accurate early-stage ND diagnosis.
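
A heavily simplified sketch of the hybrid idea above is shown below: an EfficientNet-B0 backbone extracts per-slice spatial features, and a transformer encoder (standing in for the STGCN and ViT stages) models the resulting sequence. Dimensions, class count, and the collapsed architecture are assumptions for illustration, not the STGCN-ViT design.

```python
# Simplified spatial-then-sequence classifier inspired by the described hybrid.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class SpatialTemporalClassifier(nn.Module):
    def __init__(self, n_classes: int = 2, d_model: int = 256):
        super().__init__()
        backbone = efficientnet_b0(weights=None)
        backbone.classifier = nn.Identity()          # keep 1280-dim pooled features
        self.backbone = backbone
        self.proj = nn.Linear(1280, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.sequence = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                            # x: (B, T, 3, H, W) slice series
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))       # (B*T, 1280) spatial features
        tokens = self.proj(feats).view(b, t, -1)     # (B, T, d_model)
        return self.head(self.sequence(tokens).mean(dim=1))
```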

Multitask Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer.

Qiu B, Zheng Y, Liu S, Song R, Wu L, Lu C, Yang X, Wang W, Liu Z, Cui Y

PubMed · Jul 2, 2025
Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer receiving neoadjuvant chemotherapy, providing timely guidance for clinical decision-making. However, current approaches to evaluate LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 patients with locally advanced gastric cancer to develop and validate a multitask deep learning model, named co-attention tri-oriented spatial Mamba (CTSMamba), to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and the performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all of the cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images and may provide clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies. CTSMamba is a multitask deep learning model trained on longitudinal CT images of neoadjuvant chemotherapy-treated locally advanced gastric cancer that accurately predicts lymph node metastasis and overall survival to inform clinical decision-making. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
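
Multitask training of the kind described, with one head for LNM classification and one for survival, is commonly set up as a cross-entropy term plus a Cox partial-likelihood term over a shared encoder. The PyTorch sketch below illustrates that objective on synthetic features; the encoder, dimensions, and loss weighting are assumptions and do not reflect the CTSMamba architecture.

```python
# Minimal multitask objective: classification head + Cox survival head.
import torch
import torch.nn as nn

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor):
    """Negative Cox partial log-likelihood (Breslow-style risk sets)."""
    order = torch.argsort(time, descending=True)     # longest survival first
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)     # running risk-set denominator
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

shared = nn.Sequential(nn.Linear(512, 128), nn.ReLU())   # stand-in image encoder
lnm_head, surv_head = nn.Linear(128, 2), nn.Linear(128, 1)

features = torch.randn(16, 512)                      # pretend longitudinal-CT features
lnm_label = torch.randint(0, 2, (16,))
surv_time = torch.rand(16) * 60                      # follow-up in months
surv_event = torch.randint(0, 2, (16,)).float()      # 1 = death observed

h = shared(features)
loss = (nn.functional.cross_entropy(lnm_head(h), lnm_label)
        + 0.5 * cox_ph_loss(surv_head(h).squeeze(-1), surv_time, surv_event))
loss.backward()
```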

CareAssist GPT improves patient user experience with a patient-centered approach to computer-aided diagnosis.

Algarni A

PubMed · Jul 2, 2025
The rapid integration of artificial intelligence (AI) into healthcare has enhanced diagnostic accuracy; however, patient engagement and satisfaction remain significant challenges that hinder the widespread acceptance and effectiveness of AI-driven clinical tools. This study introduces CareAssist-GPT, a novel AI-assisted diagnostic model designed to improve both diagnostic accuracy and the patient experience through real-time, understandable, and empathetic communication. CareAssist-GPT combines high-resolution X-ray images, real-time physiological vital signs, and clinical notes within a unified predictive framework using deep learning. Feature extraction is performed using convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformer-based NLP modules. Model performance was evaluated in terms of accuracy, precision, recall, specificity, and response time, alongside patient satisfaction through a structured user feedback survey. CareAssist-GPT achieved a diagnostic accuracy of 95.8%, improving by 2.4% over conventional models. It reported high precision (94.3%), recall (93.8%), and specificity (92.7%), with an AUC-ROC of 0.97. The system responded within 500 ms (23.1% faster than existing tools) and achieved a patient satisfaction score of 9.3 out of 10, demonstrating its real-time usability and communicative effectiveness. CareAssist-GPT significantly enhances the diagnostic process by improving accuracy and fostering patient trust through transparent, real-time explanations. These findings position it as a promising patient-centered AI solution capable of transforming healthcare delivery by bridging the gap between advanced diagnostics and human-centered communication.
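
The kind of multimodal fusion the abstract outlines (a CNN branch for the X-ray, a GRU branch for vital-sign time series, and a transformer text encoder for clinical notes, concatenated before a final prediction) can be sketched as follows. Dimensions, branch choices, and vocabulary size are assumptions; this is illustrative, not the CareAssist-GPT implementation.

```python
# Minimal multimodal fusion sketch: image + vitals + notes -> diagnosis.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalDiagnosis(nn.Module):
    def __init__(self, vocab_size: int = 30000, n_classes: int = 2):
        super().__init__()
        img = resnet18(weights=None)
        img.fc = nn.Linear(img.fc.in_features, 128)          # X-ray branch
        self.image_branch = img
        self.vitals_branch = nn.GRU(input_size=6, hidden_size=64, batch_first=True)
        self.text_embed = nn.Embedding(vocab_size, 128)
        text_layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
        self.text_branch = nn.TransformerEncoder(text_layer, num_layers=2)
        self.classifier = nn.Linear(128 + 64 + 128, n_classes)

    def forward(self, xray, vitals, note_tokens):
        img_feat = self.image_branch(xray)                   # (B, 128)
        _, h_n = self.vitals_branch(vitals)                  # h_n: (1, B, 64)
        text_feat = self.text_branch(self.text_embed(note_tokens)).mean(dim=1)
        return self.classifier(torch.cat([img_feat, h_n[-1], text_feat], dim=1))
```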

Multiparametric MRI-based Interpretable Machine Learning Radiomics Model for Distinguishing Between Luminal and Non-luminal Tumors in Breast Cancer: A Multicenter Study.

Zhou Y, Lin G, Chen W, Chen Y, Shi C, Peng Z, Chen L, Cai S, Pan Y, Chen M, Lu C, Ji J, Chen S

PubMed · Jul 1, 2025
To construct and validate an interpretable machine learning (ML) radiomics model derived from multiparametric magnetic resonance imaging (MRI) images to differentiate between luminal and non-luminal breast cancer (BC) subtypes. This study enrolled 1098 BC participants from four medical centers, categorized into a training cohort (n = 580) and validation cohorts 1-3 (n = 252, 89, and 177, respectively). Multiparametric MRI-based radiomics features, including T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced (DCE) imaging, were extracted. Five ML algorithms were applied to develop various radiomics models, from which the best performing model was identified. A ML-based combined model including optimal radiomics features and clinical predictors was constructed, with performance assessed through receiver operating characteristic (ROC) analysis. The Shapley additive explanation (SHAP) method was utilized to assess model interpretability. Tumor size and MR-reported lymph node status were chosen as significant clinical variables. Thirteen radiomics features were identified from multiparametric MRI images. The extreme gradient boosting (XGBoost) radiomics model performed the best, achieving area under the curves (AUCs) of 0.941, 0.903, 0.862, and 0.894 across training and validation cohorts 1-3, respectively. The XGBoost combined model showed favorable discriminative power, with AUCs of 0.956, 0.912, 0.894, and 0.906 in training and validation cohorts 1-3, respectively. The SHAP visualization facilitated global interpretation, identifying "ADC_wavelet-HLH_glszm_ZoneEntropy" and "DCE_wavelet-HLL_gldm_DependenceVariance" as the most significant features for the model's predictions. The XGBoost combined model derived from multiparametric MRI may proficiently differentiate between luminal and non-luminal BC and aid in treatment decision-making. An interpretable machine learning radiomics model can preoperatively predict luminal and non-luminal subtypes in breast cancer, thereby aiding therapeutic decision-making.
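
The reported workflow, an XGBoost classifier over selected radiomics and clinical features with SHAP used for global interpretation, can be sketched as follows. The feature matrix, column names, and hyperparameters are placeholders, and the example assumes the xgboost and shap packages are installed; it is not the study's code.

```python
# Minimal sketch of an XGBoost radiomics classifier with SHAP interpretation.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(580, 15)),
                 columns=[f"radiomics_feat_{i}" for i in range(13)]
                         + ["tumor_size", "mr_ln_status"])   # placeholder features
y = rng.integers(0, 2, size=580)                              # 1 = luminal subtype

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                      eval_metric="auc")
model.fit(X_tr, y_tr)

# Global interpretation: which features drive the luminal vs. non-luminal call.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```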