Page 7 of 1321311 results

Vision transformer and complex network analysis for autism spectrum disorder classification in T1 structural MRI.

Gao X, Xu Y

pubmed · Jul 15 2025
Autism spectrum disorder (ASD) affects social interaction, communication, and behavior. Early diagnosis is important because it enables timely intervention that can significantly improve long-term outcomes, but current diagnostic approaches, which rely heavily on behavioral observations and clinical interviews, are often subjective and time-consuming. This study introduces an AI-based approach that uses T1-weighted structural MRI (sMRI) scans, network analysis, and vision transformers to automatically diagnose ASD. sMRI data from 79 ASD patients and 105 healthy controls were obtained from the Autism Brain Imaging Data Exchange (ABIDE) database. Complex network analysis (CNA) features and Vision Transformer (ViT) features were developed for predicting ASD. Five models were developed for each type of feature: logistic regression, support vector machine (SVM), gradient boosting (GB), K-nearest neighbors (KNN), and neural network (NN). Twenty-five further models were developed by federating each pairing of the two sets of five models. Model performance was evaluated using accuracy, area under the receiver operating characteristic curve (AUC-ROC), sensitivity, and specificity via fivefold cross-validation. The federated model CNA(KNN)-ViT(NN) achieved the highest performance, with accuracy 0.951 ± 0.067, AUC-ROC 0.980 ± 0.020, sensitivity 0.963 ± 0.050, and specificity 0.943 ± 0.047. The ViT-based models exceeded the complex network-based models on 80% of the performance metrics, and federating with CNA models improved the ViT models further. This study demonstrates the feasibility of using CNA and ViT models for the automated diagnosis of ASD: the proposed CNA(KNN)-ViT(NN) model achieved high accuracy in ASD classification based solely on T1 sMRI images. The method's reliance on widely available T1 sMRI scans highlights its potential for integration into routine clinical examinations, facilitating more efficient and accessible ASD screening.
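The "federated" pairing of a CNA-based and a ViT-based classifier can be read as late fusion of per-model probabilities. A minimal numpy sketch under that assumption (the probabilities and threshold below are hypothetical, not the authors' code):

```python
import numpy as np

def late_fusion(p_a, p_b, threshold=0.5):
    """Average two models' per-subject ASD probabilities, then threshold."""
    fused = (np.asarray(p_a) + np.asarray(p_b)) / 2.0
    return fused, (fused >= threshold).astype(int)

# hypothetical outputs of a CNA(KNN) model and a ViT(NN) model for 3 subjects
p_cna = [0.80, 0.30, 0.55]
p_vit = [0.90, 0.20, 0.45]
fused, labels = late_fusion(p_cna, p_vit)  # fused = [0.85, 0.25, 0.50]
```

In practice each base model would first be fit and evaluated with fivefold cross-validation, as in the study, before its probabilities are fused.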

Deep Learning Applications in Lymphoma Imaging.

Sorin V, Cohen I, Lekach R, Partovi S, Raskin D

pubmed · Jul 14 2025
Lymphomas are a diverse group of disorders characterized by the clonal proliferation of lymphocytes. While definitive diagnosis of lymphoma relies on histopathology, immunophenotyping, and additional molecular analyses, imaging modalities such as PET/CT, CT, and MRI play a central role in the diagnostic process and management, from assessing disease extent to evaluating response to therapy and detecting recurrence. Artificial intelligence (AI), particularly deep learning models such as convolutional neural networks (CNNs), is transforming lymphoma imaging by enabling automated detection, segmentation, and classification. This review elaborates on recent advancements in deep learning for lymphoma imaging and its integration into clinical practice. Challenges include obtaining high-quality annotated datasets, addressing biases in training data, and ensuring consistent model performance. Ongoing efforts focus on enhancing model interpretability, incorporating diverse patient populations to improve generalizability, and ensuring safe and effective integration of AI into clinical workflows, with the goal of improving patient outcomes.

Multimodal Deep Learning Model Based on Ultrasound and Cytological Images Predicts Risk Stratification of cN0 Papillary Thyroid Carcinoma.

He F, Chen S, Liu X, Yang X, Qin X

pubmed · Jul 14 2025
Accurately assessing the risk stratification of cN0 papillary thyroid carcinoma (PTC) preoperatively aids treatment decisions. We integrated preoperative ultrasound and cytological images to develop and validate a multimodal deep learning (DL) model for non-invasive assessment of cN0 PTC risk stratification before surgery. In this retrospective multicenter study, we developed a comprehensive DL model based on ultrasound and cytological images. The model was trained and validated on 890 PTC patients undergoing thyroidectomy and lymph node dissection across five medical centers; the testing group comprised 107 patients from one medical center. We analyzed the model's performance, including the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The combined DL model demonstrated strong performance, with an AUC of 0.922 (0.866-0.979) in the internal validation group and an AUC of 0.845 (0.794-0.895) in the testing group, and its diagnostic performance surpassed that of clinical models. Image region heatmaps assisted in interpreting the risk-stratification diagnosis. The multimodal DL model based on ultrasound and cytological images can accurately determine the risk stratification of cN0 PTC and guide treatment decisions.

ESE and Transfer Learning for Breast Tumor Classification.

He Y, Batumalay M, Thinakaran R

pubmed · Jul 14 2025
In this study, we proposed a lightweight neural network architecture based on an inverted residual network, an efficient squeeze-excitation (ESE) module, and double transfer learning, called TLese-ResNet, for breast cancer molecular subtype recognition. The inverted ResNet reduces the number of network parameters while enhancing cross-layer gradient propagation and feature expression capabilities. The ESE module reduces network complexity while preserving the modeling of channel relationships. The dataset comes from mammography images of patients diagnosed with invasive breast cancer at a hospital in Jiangxi and comprises preoperative mammography images in CC and MLO views. Given that the dataset is somewhat small, double transfer learning is used in addition to the commonly used data augmentation methods. Double transfer learning comprises a first transfer, in which the source domain is ImageNet and the target domain is a COVID-19 chest X-ray image dataset, and a second transfer, in which the source domain is the target domain of the first transfer and the target domain is the mammography dataset we collected. Using five-fold cross-validation, the mean accuracy and area under the receiver operating characteristic curve on mammographic images of the CC and MLO views were 0.818 and 0.883, respectively, outperforming other state-of-the-art deep learning-based models such as ResNet-50 and DenseNet-121. The proposed model can therefore provide clinicians with an effective and non-invasive auxiliary tool for molecular subtype identification of breast cancer.
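The two-stage transfer can be sketched generically: train on a source task, then continue from those weights on the target task. A toy numpy logistic-regression sketch of that weight hand-off (synthetic data standing in for the COVID-19 X-ray and mammography stages; not the authors' network):

```python
import numpy as np

def fine_tune(w, X, y, lr=0.1, epochs=200):
    """Gradient steps of logistic regression, starting from the passed-in weights."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
# stage 1: a larger "source" task (stands in for ImageNet -> COVID-19 chest X-rays)
X1 = rng.normal(size=(200, 5)); y1 = (X1[:, 0] > 0).astype(float)
# stage 2: a small "target" task (stands in for the mammography dataset)
X2 = rng.normal(size=(50, 5)); y2 = (X2[:, 0] + 0.1 * X2[:, 1] > 0).astype(float)

w = np.zeros(5)
w = fine_tune(w, X1, y1)   # first transfer: learn the source task
w = fine_tune(w, X2, y2)   # second transfer: continue from the stage-1 weights
```

The key point is that stage 2 initializes from stage-1 weights rather than from scratch, which is what makes the small target dataset workable.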

Pathological omics prediction of early and advanced colon cancer based on artificial intelligence model.

Wang Z, Wu Y, Li Y, Wang Q, Yi H, Shi H, Sun X, Liu C, Wang K

pubmed · Jul 14 2025
Artificial intelligence (AI) models based on pathological slides have great potential to assist pathologists in disease diagnosis and have become an important research direction in medical image analysis. The aim of this study was to develop an AI model based on whole-slide images to predict the stage of colon cancer. A total of 100 pathological slides from colon cancer patients were collected as the training set, and 421 colon cancer pathological slides were downloaded from The Cancer Genome Atlas (TCGA) database as the external validation set. CellProfiler and CLAM tools were used to extract pathological features, and machine learning and deep learning algorithms were used to construct prediction models. The area under the curve (AUC) of the best machine learning model was 0.78 in the internal test set and 0.68 in the external test set. The deep learning model achieved an AUC of 0.889 and an accuracy of 0.854 in the internal test set, and an AUC of 0.700 in the external test set. The prediction model shows potential to generalize as part of a combined pathology-omics diagnostic workflow. Compared with the machine learning models, the deep learning model recognized image patterns more accurately and achieved better overall performance.
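The AUC values compared above are rank statistics; a self-contained numpy sketch of the computation (the labels and scores are toy values, not the study's data):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney rank statistic (ties get average ranks)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    for s in np.unique(y_score):          # average ranks over tied scores
        ranks[y_score == s] = ranks[y_score == s].mean()
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# hypothetical slide-level scores, higher = more likely advanced stage
auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUC of 0.5 corresponds to chance-level ranking of early versus advanced slides, and 1.0 to perfect separation.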

Is a score enough? Pitfalls and solutions for AI severity scores.

Bernstein MH, van Assen M, Bruno MA, Krupinski EA, De Cecco C, Baird GL

pubmed · Jul 14 2025
Severity scores, which often refer to the likelihood or probability of a pathology, are commonly provided by artificial intelligence (AI) tools in radiology. However, little attention has been given to how these AI scores are used, and there is a lack of transparency into how they are generated. In this comment, we draw on key principles from psychological science and statistics to elucidate six human-factors limitations of AI scores that undermine their utility: (1) variability across AI systems; (2) variability within AI systems; (3) variability between radiologists; (4) variability within radiologists; (5) unknown distribution of AI scores; and (6) perceptual challenges. We hypothesize that these limitations can be mitigated by providing the false discovery rate and false omission rate for each score used as a threshold, and we discuss how this hypothesis could be empirically tested. KEY POINTS: The radiologist-AI interaction has not been given sufficient attention. The utility of AI scores is limited by six key human-factors limitations. We propose mitigating these limitations by reporting the false discovery rate and false omission rate.
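The proposed mitigation (reporting the false discovery rate and false omission rate at each score threshold) reduces to two ratios over the confusion counts at that threshold. A minimal sketch with illustrative, hypothetical counts:

```python
def fdr_for(tp, fp, tn, fn):
    """False discovery rate P(no disease | flagged) and
    false omission rate P(disease | not flagged) at a fixed score threshold."""
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    fom = fn / (tn + fn) if (tn + fn) else 0.0
    return fdr, fom

# hypothetical counts for cases at or above some AI-score cutoff
fdr, fom = fdr_for(tp=40, fp=10, tn=900, fn=50)  # 0.20 and ~0.053
```

Unlike sensitivity and specificity, these two quantities answer the question the radiologist actually faces at the threshold: given this flag (or its absence), how likely is the finding real?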

Leveraging Swin Transformer for enhanced diagnosis of Alzheimer's disease using multi-shell diffusion MRI

Quentin Dessain, Nicolas Delinte, Bernard Hanseeuw, Laurence Dricot, Benoît Macq

arxiv preprint · Jul 14 2025
Objective: This study aims to support early diagnosis of Alzheimer's disease and detection of amyloid accumulation by leveraging the microstructural information available in multi-shell diffusion MRI (dMRI) data, using a vision transformer-based deep learning framework. Methods: We present a classification pipeline that employs the Swin Transformer, a hierarchical vision transformer model, on multi-shell dMRI data for the classification of Alzheimer's disease and amyloid presence. Key metrics from diffusion tensor imaging (DTI) and neurite orientation dispersion and density imaging (NODDI) were extracted and projected onto 2D planes to enable transfer learning with ImageNet-pretrained models. To efficiently adapt the transformer to limited labeled neuroimaging data, we integrated Low-Rank Adaptation. We assessed the framework on diagnostic group prediction (cognitively normal, mild cognitive impairment, Alzheimer's disease dementia) and amyloid status classification. Results: The framework achieved competitive classification results within the scope of multi-shell dMRI-based features, with the best balanced accuracy of 95.2% for distinguishing cognitively normal individuals from those with Alzheimer's disease dementia using NODDI metrics. For amyloid detection, it reached 77.2% balanced accuracy in distinguishing amyloid-positive mild cognitive impairment/Alzheimer's disease dementia subjects from amyloid-negative cognitively normal subjects, and 67.9% for identifying amyloid-positive individuals among cognitively normal subjects. Grad-CAM-based explainability analysis identified clinically relevant brain regions, including the parahippocampal gyrus and hippocampus, as key contributors to model predictions. Conclusion: This study demonstrates the promise of diffusion MRI and transformer-based architectures for early detection of Alzheimer's disease and amyloid pathology, supporting biomarker-driven diagnostics in data-limited biomedical settings.
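Low-Rank Adaptation (LoRA) freezes the pretrained weight matrix and learns only a low-rank additive update, which is why it suits limited labeled data. A numpy sketch of the forward pass (layer sizes are hypothetical, not the paper's Swin configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4      # hypothetical sizes; r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                 # B starts at zero, so the update is zero at init

def lora_forward(x):
    """Frozen weight plus scaled low-rank update: (W + (alpha/r) * B @ A) @ x."""
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=d_in)
y0 = lora_forward(x)                     # equals W @ x at initialization
```

Only A and B (2 * r * d parameters per adapted layer instead of d_out * d_in) receive gradients, so the number of trainable parameters shrinks by orders of magnitude.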

Predicting the molecular subtypes of 2021 WHO grade 4 glioma by a multiparametric MRI-based machine learning model.

Xu W, Li Y, Zhang J, Zhang Z, Shen P, Wang X, Yang G, Du J, Zhang H, Tan Y

pubmed · Jul 14 2025
Accurately distinguishing the molecular subtypes of 2021 World Health Organization (WHO) grade 4 central nervous system (CNS) gliomas is highly relevant for prognostic stratification and personalized treatment. This study aimed to develop and validate a machine learning (ML) model using multiparametric MRI for the preoperative differentiation of astrocytoma, CNS WHO grade 4, from glioblastoma (GBM), isocitrate dehydrogenase-wild-type (IDH-wt) (WHO 2021) (Task 1: grade 4 vs. GBM); to stratify astrocytoma, CNS WHO grade 4, by distinguishing astrocytoma, IDH-mutant (IDH-mut), CNS WHO grade 4 from astrocytoma, IDH-wild-type (IDH-wt), CNS WHO grade 4 (Task 2: IDH-mut grade 4 vs. IDH-wt grade 4); and to evaluate the model's prognostic value. We retrospectively analyzed 320 glioma patients from three hospitals (training/testing split, 7:3 ratio) and 99 patients from The Cancer Genome Atlas (TCGA) database for external validation. Radiomic features were extracted from tumor and edema regions on contrast-enhanced T1-weighted imaging (CE-T1WI) and T2 fluid-attenuated inversion recovery (T2-FLAIR). Extreme gradient boosting (XGBoost) was used to construct the ML, clinical, and combined models. Model performance was evaluated with receiver operating characteristic (ROC) curves, decision curves, and calibration curves; stability was evaluated using six additional classifiers. Kaplan-Meier (KM) survival analysis and the log-rank test assessed the model's prognostic value. In Task 1 and Task 2, the combined model (AUC = 0.907, 0.852, and 0.830 for Task 1; AUC = 0.899, 0.895, and 0.792 for Task 2) and the optimal ML model (AUC = 0.902, 0.854, and 0.832 for Task 1; AUC = 0.904, 0.899, and 0.783 for Task 2) significantly outperformed the clinical model (AUC = 0.671, 0.656, and 0.543 for Task 1; AUC = 0.619, 0.605, and 0.400 for Task 2) in the training, testing, and validation sets, respectively.
Survival analysis showed the combined model stratified prognosis similarly to molecular subtyping in both tasks (p = 0.964 and p = 0.746). The multiparametric MRI ML model effectively distinguished astrocytoma, CNS WHO grade 4, from GBM, IDH-wt (WHO 2021) and differentiated astrocytoma, IDH-mut from astrocytoma, IDH-wt, CNS WHO grade 4. Additionally, the model provided reliable survival stratification for glioma patients across the molecular subtypes.
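The "combined" model concatenates radiomic and clinical feature vectors before boosting. A minimal sketch of that pattern using scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost (synthetic features and labels, not the study's data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 200
radiomic = rng.normal(size=(n, 10))   # stand-in for CE-T1WI / T2-FLAIR radiomic features
clinical = rng.normal(size=(n, 3))    # stand-in for clinical variables

# synthetic label with signal in one radiomic and one clinical feature
y = (radiomic[:, 0] + 0.5 * clinical[:, 0] > 0).astype(int)

X = np.hstack([radiomic, clinical])   # the "combined" model sees both blocks
clf = GradientBoostingClassifier(random_state=0).fit(X[:150], y[:150])
acc = clf.score(X[150:], y[150:])     # holdout accuracy
```

Because the label here depends on both blocks, the combined input outperforms either block alone, which mirrors why the study's combined model beats its clinical-only model.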

Classification of Renal Lesions by Leveraging Hybrid Features from CT Images Using Machine Learning Techniques.

Kaur R, Khattar S, Singla S

pubmed · Jul 14 2025
Renal cancer is among the leading contributors to rising mortality rates worldwide, and mortality can be reduced by early detection and diagnosis. The classification of lesions is based mostly on their characteristics, which include varied shape and texture properties. Computed tomography (CT) is a regularly used imaging modality for the study of the renal soft tissues. However, a radiologist's capacity to assess a large corpus of CT images is limited, and kidney lesions may consequently be misdiagnosed, risking cancer progression or unnecessary chemotherapy. To address these challenges, this study presents a machine learning technique based on a novel feature vector for the automated classification of renal lesions using multi-model texture-based feature extraction. The proposed feature vector could serve as an integral component in improving the accuracy of a computer-aided diagnosis (CAD) system for characterizing the texture of renal lesions and can assist physicians in providing more precise lesion interpretation. The authors employed different texture models for the analysis of CT scans to classify benign and malignant kidney lesions. Texture analysis is performed using features such as first-order statistics (FoS), the spatial gray-level co-occurrence matrix (SGLCM), the Fourier power spectrum (FPS), the statistical feature matrix (SFM), Laws' texture energy measures (TEM), gray-level difference statistics (GLDS), fractal features, and the neighborhood gray-tone difference matrix (NGTDM). Multiple texture models were used to quantify renal texture patterns, applying image texture analysis to a selected region of interest (ROI) within the renal lesions. In addition, dimensionality reduction is employed to discover the most discriminative features for categorizing benign and malignant lesions, and a unique feature vector based on correlation-based feature selection, information gain, and gain ratio is proposed.
The final feature set was evaluated using different machine learning classifiers, among which the random forest (RF) model outperformed all other techniques in distinguishing benign from malignant tumors across the performance evaluation metrics. The proposed system was validated on a dataset of 50 subjects, achieving a classification accuracy of 95.8% and outperforming other conventional models.
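Among the listed texture models, the spatial gray-level co-occurrence matrix (SGLCM) is the most widely used. A minimal numpy sketch computing a horizontal-offset GLCM and its contrast feature on a toy 4-level ROI (illustrative only, not the paper's pipeline):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Co-occurrence counts for one pixel offset, normalized to probabilities."""
    img = np.asarray(img)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Contrast: expected squared gray-level difference of co-occurring pairs."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# toy 4-level ROI with homogeneous blocks, so most co-occurring pairs are equal
roi = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(roi)
contrast = glcm_contrast(p)
```

A full pipeline would aggregate such statistics over several offsets and angles, alongside the other texture families the study lists.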

Deep Learning-Based Prediction for Bone Cement Leakage During Percutaneous Kyphoplasty Using Preoperative Computed Tomography: Model Development and Validation.

Chen R, Wang T, Liu X, Xi Y, Liu D, Xie T, Wang A, Fan N, Yuan S, Du P, Jiao S, Zhang Y, Zang L

pubmed · Jul 14 2025
Retrospective study. To develop a deep learning (DL) model that predicts bone cement leakage (BCL) subtypes during percutaneous kyphoplasty (PKP) from preoperative computed tomography (CT), and to use multicenter data to evaluate the model's effectiveness and generalizability. DL excels at automatically extracting features from medical images; however, there is a lack of models that can predict BCL subtypes based on preoperative images. This study included an internal dataset for DL model training, validation, and testing, as well as an external dataset for additional model testing. Our model integrated a segment-localization module, based on vertebral segmentation with a three-dimensional (3D) U-Net, with a classification module based on a 3D ResNet-50. Vertebral level mismatch rates were calculated, and confusion matrices were used to compare the performance of the DL model with that of spine surgeons in predicting BCL subtypes. The simple (unweighted) Cohen's kappa coefficient was used to assess the reliability of the spine surgeons and the DL model against the reference standard. A total of 901 patients comprising 997 eligible segments were included in the internal dataset. The model demonstrated a vertebral segment identification accuracy of 96.9%, with area under the curve (AUC) values of 0.734-0.831 and sensitivities of 0.649-0.900 for BCL prediction in the internal dataset. Similar AUC values of 0.709-0.818 and sensitivities of 0.706-0.857 were observed in the external dataset, indicating the stability and generalizability of the model. Moreover, the model outperformed nonexpert spine surgeons in predicting BCL subtypes, except for type II. The model achieved satisfactory accuracy, reliability, generalizability, and interpretability in predicting BCL subtypes, outperforming nonexpert spine surgeons.
This study offers valuable insights for assessing osteoporotic vertebral compression fractures, thereby aiding preoperative surgical decision-making. Level of Evidence: 3.
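The simple (unweighted) Cohen's kappa used to assess reliability can be computed directly from paired subtype calls. A stdlib sketch with hypothetical labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Simple (unweighted) Cohen's kappa between two raters over the same cases."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# hypothetical BCL subtype calls: reference standard vs. model, 10 segments
ref   = ["I", "I", "II", "II", "III", "III", "I", "II", "III", "I"]
model = ["I", "I", "II", "III", "III", "III", "I", "II", "II", "I"]
kappa = cohens_kappa(ref, model)   # observed agreement 0.8, chance agreement 0.34
```

Kappa corrects raw agreement for what two raters would agree on by chance given their label frequencies, which is why it is preferred over plain accuracy for reliability assessment.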
