
Combining multi-parametric MRI radiomics features with tumor abnormal protein to construct a machine learning-based predictive model for prostate cancer.

Zhang C, Wang Z, Shang P, Zhou Y, Zhu J, Xu L, Chen Z, Yu M, Zang Y

pubmed · Jul 2, 2025
This study investigates the diagnostic value of integrating multi-parametric magnetic resonance imaging (mpMRI) radiomic features with tumor abnormal protein (TAP) and clinical characteristics for diagnosing prostate cancer. A cohort of 109 patients who underwent both mpMRI and TAP assessments prior to prostate biopsy was enrolled. Radiomic features were extracted from T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Feature selection was performed using t-tests and Least Absolute Shrinkage and Selection Operator (LASSO) regression, followed by model construction using the random forest algorithm. To further improve accuracy and predictive performance, the study incorporated clinical factors including age, serum prostate-specific antigen (PSA) level, and prostate volume; integrating these clinical indicators with the radiomic features yielded a more comprehensive and precise predictive model. Model performance was quantified by accuracy, sensitivity, specificity, precision, recall, F1 score, and area under the curve (AUC). From the T2WI, dADC (b = 100/1000 s/mm²), and dADC (b = 100/2000 s/mm²) sequences, 8, 10, and 13 radiomic features, respectively, were identified as significantly correlated with prostate cancer. Random forest models built on these three feature sets achieved AUCs of 0.83, 0.86, and 0.87, respectively; integrating all three sets into a single random forest model yielded an AUC of 0.84. A random forest model built on TAP and clinical characteristics alone achieved an AUC of 0.85. Notably, combining mpMRI radiomic features with TAP and clinical characteristics, or combining the dADC (b = 100/2000 s/mm²) sequence with TAP and clinical characteristics, improved the AUCs to 0.91 and 0.92, respectively. The proposed model, which integrates radiomic features, TAP, and clinical characteristics using machine learning, demonstrated high predictive efficiency in diagnosing prostate cancer.
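
A minimal sketch of the pipeline this abstract describes (t-test screening, LASSO selection, random forest, AUC evaluation) using scikit-learn. The data are synthetic stand-ins, and details such as the p-value threshold and forest size are assumptions, not the authors' settings:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(109, 200))   # stand-in for the radiomic feature matrix
y = rng.integers(0, 2, size=109)  # stand-in for biopsy-confirmed labels
X[y == 1, :20] += 1.0             # inject signal so selection finds features

# 1) Univariate screening: keep features with t-test p < 0.05 (assumed threshold)
_, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
Xs = X[:, p < 0.05]

# 2) LASSO: retain features with non-zero coefficients
lasso = LassoCV(cv=5).fit(Xs, y)
Xl = Xs[:, lasso.coef_ != 0]

# 3) Random forest, evaluated by AUC on a held-out split
Xtr, Xte, ytr, yte = train_test_split(Xl, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, rf.predict_proba(Xte)[:, 1]))
```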

Deep learning strategies for semantic segmentation of pediatric brain tumors in multiparametric MRI.

Cariola A, Sibilano E, Guerriero A, Bevilacqua V, Brunetti A

pubmed · Jul 2, 2025
Automated segmentation of pediatric brain tumors (PBTs) can support precise diagnosis and treatment monitoring, but it remains poorly investigated in the literature. This study proposes two Deep Learning approaches for semantic segmentation of tumor regions in PBTs from MRI scans. Two pipelines were developed for segmenting enhancing tumor (ET), tumor core (TC), and whole tumor (WT) in pediatric gliomas from the BraTS-PEDs 2024 dataset. First, a pre-trained SegResNet model was retrained with a transfer learning approach and tested on the pediatric cohort. Then, two novel multi-encoder architectures leveraging the attention mechanism were designed and trained from scratch. To enhance performance on ET regions, an ensemble paradigm and post-processing techniques were implemented. Overall, the 3-encoder model achieved the best Dice Score on TC and WT when trained with Dice Loss, and on ET when trained with Generalized Dice Focal Loss. SegResNet showed higher recall on TC and WT, and higher precision on ET. After post-processing, we reached Dice Scores of 0.843, 0.869, and 0.757 with the pre-trained model and 0.852, 0.876, and 0.764 with the ensemble model for TC, WT, and ET, respectively. Both strategies yielded state-of-the-art performance, with the ensemble demonstrating significantly superior results. Segmentation of the ET region improved after post-processing, which increased test metrics while maintaining the integrity of the data.
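
The Dice Score reported above, and one plausible cleanup step, are easy to make concrete. The abstract does not specify its post-processing, so the small-component filter below is an assumption, shown only to illustrate the kind of step that can lift ET metrics:

```python
import numpy as np
from scipy import ndimage

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (the metric reported above)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def drop_small_components(mask: np.ndarray, min_voxels: int = 50) -> np.ndarray:
    """Assumed post-processing: remove tiny connected components that are
    likely false positives in the predicted tumor mask."""
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labeled, keep)
```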

Multi channel fusion diffusion models for brain tumor MRI data augmentation.

Zuo C, Xue J, Yuan C

pubmed · Jul 2, 2025
The early diagnosis of brain tumors is crucial for patient prognosis, and medical imaging techniques such as MRI and CT scans are essential tools for diagnosing them. However, high-quality medical image data for brain tumors are often scarce and difficult to obtain, which hinders the development and application of medical image analysis models. With the advancement of artificial intelligence, particularly deep learning in medical imaging, new concepts and tools have been introduced for the early diagnosis, treatment planning, and prognosis evaluation of brain tumors. To address the challenge of imbalanced brain tumor datasets, we propose a novel data augmentation technique based on a diffusion model, the Multi-Channel Fusion Diffusion Model (MCFDiffusion). This method tackles data imbalance by converting healthy brain MRI images into images containing tumors, enabling deep learning models to achieve better performance and assisting physicians in making more accurate diagnoses and treatment plans. In our experiments, we used a publicly available brain tumor dataset and compared performance on image classification and segmentation tasks between the original data and the data augmented by our method. The results show that the augmented data improved classification accuracy by approximately 3% and the Dice coefficient for segmentation by 1.5%-2.5%. Our work builds upon previous work on Denoising Diffusion Implicit Models (DDIMs) for image generation and broadens the model's applicability in medical imaging by introducing a multi-channel approach and fusing defective areas with healthy images. Future work will apply the model to other types of medical images and further optimize it to improve generalization. We release our code at https://github.com/feiyueaaa/MCFDiffusion.
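
A hypothetical sketch of the multi-channel fusion idea: stacking a healthy slice, a target tumor mask, and the masked tumor texture into one conditioning input for the diffusion model. The actual channel layout of MCFDiffusion may differ; see the linked repository for the authors' implementation:

```python
import numpy as np

def make_fusion_input(healthy: np.ndarray, tumor_mask: np.ndarray,
                      tumor_texture: np.ndarray) -> np.ndarray:
    """Stack healthy anatomy, target mask, and masked tumor texture as
    channels (hypothetical layout, for illustration only)."""
    fused = np.stack([healthy, tumor_mask, tumor_texture * tumor_mask], axis=0)
    return fused.astype(np.float32)

slice_hw = (240, 240)
fused = make_fusion_input(np.zeros(slice_hw), np.zeros(slice_hw), np.zeros(slice_hw))
print(fused.shape)  # (3, 240, 240): one channel per fused source
```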

Classifying and diagnosing Alzheimer's disease with deep learning using 6735 brain MRI images.

Mousavi SM, Moulaei K, Ahmadian L

pubmed · Jul 2, 2025
Traditional diagnostic methods for Alzheimer's disease often suffer from low accuracy and lengthy processing times, delaying crucial interventions and patient care. Deep convolutional neural networks (CNNs) trained on MRI data can enhance diagnostic precision. This study uses deep CNNs trained on MRI data for Alzheimer's disease diagnosis and classification. The Alzheimer MRI Preprocessed Dataset, comprising 6735 structural brain MRI scans, was used. After data preprocessing and normalization, four models (Xception, VGG19, VGG16, and InceptionResNetV2) were trained. Generalization techniques and hyperparameter tuning were applied to improve training, with early stopping and a dynamic learning rate used to prevent overfitting. Model performance was evaluated by accuracy, F-score, recall, and precision. The InceptionResNetV2 model showed superior performance in predicting Alzheimer's patients, with an accuracy, F-score, recall, and precision of 0.99. The Xception model followed, with precision, recall, and F-score of 0.97 and an accuracy of 96.89%. Notably, InceptionResNetV2 and VGG19 learned faster, reaching convergence sooner and requiring fewer training iterations than the other models. The InceptionResNetV2 model achieved the highest performance, with precision, recall, and F-score of 100% for both the mild and moderate dementia classes. The Xception model also performed well, attaining 100% for the moderate dementia class and 99-100% for the mild dementia class. Additionally, the VGG16 and VGG19 models showed strong results, with VGG16 reaching 100% precision, recall, and F-score for the moderate dementia class. Deep convolutional neural networks enhance Alzheimer's diagnosis, surpassing traditional methods in precision and efficiency; models like InceptionResNetV2 show outstanding performance, potentially speeding up patient interventions.
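
A hedged sketch (not the authors' exact configuration) of the training setup the abstract describes: an ImageNet-pretrained InceptionResNetV2 with early stopping and a dynamic learning rate. Input size, head design, class count, and the train_ds/val_ds datasets are assumptions:

```python
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(4, activation="softmax"),  # 4 dementia classes assumed
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=2),  # dynamic LR
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)
```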

Multi-scheme cross-level attention embedded U-shape transformer for MRI semantic segmentation.

Wang Q, Xue Y

pubmed · Jul 2, 2025
Accurate MRI image segmentation is crucial for disease diagnosis, but current Transformer-based methods face two key challenges: limited capability to capture detailed information, leading to blurred boundaries and false localization, and the lack of MRI-specific embedding paradigms for attention modules, which limits their potential and representation capability. To address these challenges, this paper proposes a multi-scheme cross-level attention embedded U-shape Transformer (MSCL-SwinUNet). This model integrates cross-level spatial-wise attention (SW-Attention) to transfer detailed information from encoder to decoder, cross-stage channel-wise attention (CW-Attention) to filter out redundant features and enhance task-related channels, and multi-stage scale-wise attention (ScaleW-Attention) to adaptively process multi-scale features. Extensive experiments on the ACDC, MM-WHS and Synapse datasets demonstrate that the proposed MSCL-SwinUNet surpasses state-of-the-art methods in accuracy and generalizability. Visualization further confirms the superiority of our model in preserving detailed boundaries. This work not only advances Transformer-based segmentation in medical imaging but also provides new insights into designing MRI-specific attention embedding paradigms. Our code is available at https://github.com/waylans/MSCL-SwinUNet.
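
For intuition, a minimal squeeze-and-excitation style channel-attention block in PyTorch, illustrating the kind of channel re-weighting CW-Attention performs; the paper's actual module design is in the linked repository:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative channel-wise attention: pool global context, then learn
    per-channel gates that suppress redundant feature channels."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze spatial dims
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                            # re-weight channels

attn = ChannelAttention(64)
print(attn(torch.randn(2, 64, 56, 56)).shape)  # torch.Size([2, 64, 56, 56])
```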

Optimizing the early diagnosis of neurological disorders through the application of machine learning for predictive analytics in medical imaging.

Sadu VB, Bagam S, Naved M, Andluru SKR, Ramineni K, Alharbi MG, Sengan S, Khadhar Moideen R

pubmed · Jul 2, 2025
Early diagnosis of Neurological Disorders (ND) such as Alzheimer's disease (AD) and Brain Tumors (BT) can be highly challenging, since these diseases cause only minor changes in the brain's anatomy. Magnetic Resonance Imaging (MRI) is a vital tool for diagnosing and visualizing these ND; however, standard techniques that depend on human analysis can be inaccurate, are time-consuming, and may miss the early-stage signs necessary for effective treatment. Spatial Feature Extraction (FE) has been improved by Convolutional Neural Networks (CNN) and hybrid models, both advances in Deep Learning (DL). However, these methods frequently fail to capture temporal dynamics, which are significant for a complete assessment. The present investigation introduces STGCN-ViT, a hybrid model that integrates CNN, Spatial-Temporal Graph Convolutional Network (STGCN), and Vision Transformer (ViT) components to address these gaps. The model uses EfficientNet-B0 for spatial FE, STGCN for temporal FE, and ViT for attention-based FE. On the Open Access Series of Imaging Studies (OASIS) and Harvard Medical School (HMS) benchmark datasets, the recommended approach proved effective: Group A attained an accuracy of 93.56%, a precision of 94.41%, and an Area under the Receiver Operating Characteristic Curve (AUC-ROC) score of 94.63%. Compared with standard and transformer-based models, the model attains better results for Group B, with an accuracy of 94.52%, precision of 95.03%, and AUC-ROC score of 95.24%. These results support the model's use in real-time medical applications by demonstrating the feasibility of accurate early-stage ND diagnosis.
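
A hedged sketch of the first stage only: extracting spatial features with EfficientNet-B0 via torchvision. The STGCN and ViT stages are omitted, and the batch and tensor shapes are assumptions:

```python
import torch
from torchvision import models

# Pretrained EfficientNet-B0 as a spatial feature extractor
backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()  # keep the 1280-d pooled features
backbone.eval()

scans = torch.randn(8, 3, 224, 224)        # stand-in batch of MRI slices
with torch.no_grad():
    feats = backbone(scans)                # (8, 1280) spatial feature vectors
print(feats.shape)
```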

A multi-modal graph-based framework for Alzheimer's disease detection.

Mashhadi N, Marinescu R

pubmed · Jul 2, 2025
We propose a compositional graph-based Machine Learning (ML) framework for Alzheimer's disease (AD) detection that constructs complex ML predictors from modular components. In our directed computational graph, datasets are represented as nodes and deep learning (DL) models as directed edges, allowing us to model complex image-processing pipelines as end-to-end DL predictors. Each directed path in the graph functions as a DL predictor, supporting both forward propagation for transforming data representations and backpropagation for model fine-tuning, saliency map computation, and input data optimization. We demonstrate our framework on Alzheimer's disease prediction, a complex problem that requires integrating multimodal data: scans of different modalities and contrasts, genetic data, and cognitive tests. We built a graph of 11 nodes (data) and 14 edges (ML models), where each model has been trained to handle a specific task (e.g. skull-stripping MRI scans, AD detection, image-to-image translation). By using a modular and adaptive approach, our framework effectively integrates diverse data types, handles distribution shifts, and scales to arbitrary complexity, offering a practical tool that remains accurate even when modalities are missing, for advancing Alzheimer's disease diagnosis and potentially other complex medical prediction tasks.
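
The compositional idea in miniature, as a hedged sketch: nodes hold data representations, edges hold trained models, and any directed path composes into one end-to-end predictor. Node names and the toy models are illustrative only:

```python
from typing import Callable, Dict, List, Tuple

class ModelGraph:
    """Directed graph whose edges are models mapping one data node to another."""
    def __init__(self) -> None:
        self.edges: Dict[Tuple[str, str], Callable] = {}

    def add_edge(self, src: str, dst: str, model: Callable) -> None:
        self.edges[(src, dst)] = model

    def predictor(self, path: List[str]) -> Callable:
        """Compose the models along a node path into a single predictor."""
        steps = [self.edges[(a, b)] for a, b in zip(path, path[1:])]
        def run(x):
            for step in steps:
                x = step(x)
            return x
        return run

g = ModelGraph()
g.add_edge("raw_mri", "stripped_mri", lambda x: x)      # skull-stripping stub
g.add_edge("stripped_mri", "ad_score", lambda x: 0.42)  # AD-detection stub
predict = g.predictor(["raw_mri", "stripped_mri", "ad_score"])
print(predict("scan"))  # 0.42
```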

Lightweight convolutional neural networks using nonlinear Lévy chaotic moth flame optimisation for brain tumour classification via efficient hyperparameter tuning.

Dehkordi AA, Neshat M, Khosravian A, Thilakaratne M, Safaa Sadiq A, Mirjalili S

pubmed · Jul 2, 2025
Deep convolutional neural networks (CNNs) have seen significant growth in medical image classification applications due to their ability to automate feature extraction, leverage hierarchical learning, and deliver high classification accuracy. However, deep CNNs require substantial computational power and memory, particularly for large datasets and complex architectures. Additionally, optimising the hyperparameters of deep CNNs, although critical for enhancing model performance, is challenging due to the high computational costs involved, making it difficult without access to high-performance computing resources. To address these limitations, this study presents a fast and efficient model that aims to achieve superior classification performance compared to popular deep CNNs by combining lightweight CNNs with the Nonlinear Lévy chaotic moth flame optimiser (NLCMFO) for automatic hyperparameter optimisation. NLCMFO integrates Lévy flight, chaotic parameters, and nonlinear control mechanisms to enhance the exploration capabilities of the Moth Flame Optimiser during the search phase, while also leveraging the Lévy flight theorem to improve the exploitation phase. To assess the efficiency of the proposed model, empirical analyses were performed on a dataset of 2314 brain tumour detection images (1245 images of brain tumours and 1069 normal brain images). The evaluation results indicate that CNN_NLCMFO outperformed a non-optimised CNN (92.40% accuracy) by 5% and surpassed established models such as DarkNet19 (96.41%), EfficientNetB0 (96.32%), Xception (96.41%), ResNet101 (92.15%), and InceptionResNetV2 (95.63%) by margins ranging from 1% to 5.25%. The findings demonstrate that the lightweight CNN combined with NLCMFO provides a computationally efficient yet highly accurate solution for medical image classification, addressing the challenges associated with traditional deep CNNs.
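
For reference, Lévy-flight steps are conventionally drawn with Mantegna's algorithm, sketched below; the chaotic parameters, nonlinear controls, and moth-flame update that complete NLCMFO are not reproduced here:

```python
import math
import numpy as np

def levy_step(dim: int, beta: float = 1.5,
              rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """One Lévy(beta) step via Mantegna's algorithm: heavy-tailed moves that
    mix many small steps with occasional long exploratory jumps."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)

print(levy_step(5))
```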

Automated grading of rectocele with an MRI radiomics model.

Lai W, Wang S, Li J, Qi R, Zhao Z, Wang M

pubmed · Jul 2, 2025
To develop an automated grading model for rectocele (RC) based on radiomics and evaluate its efficacy. This study retrospectively analyzed 9,392 magnetic resonance imaging (MRI) images from 222 patients who underwent dynamic magnetic resonance defecography (DMRD) between August 2021 and June 2023. The focus was on the defecation-phase images of the DMRD, as this phase provides the critical information for assessing RC. To develop and evaluate the model, the MRI images from all patients were randomly divided into two groups: 70% of the data were allocated to the training cohort to build the model, and the remaining 30% were reserved as a test cohort to evaluate its performance. First, the severity of RC was assessed using the RC MRI grading criteria by two independent radiologists. To extract radiomic features, two additional radiologists independently delineated the regions of interest (ROIs). The extracted features were then dimensionality-reduced to retain only the most relevant ones, and a machine learning model was developed using a Support Vector Machine (SVM). Finally, the receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the classification efficiency of the model. The AUC (macro/micro) of the model using defecation-phase images was 0.794/0.824, and the overall accuracy was 0.754. The radiomics model built on DMRD defecation-phase images is well suited for grading RC and can help clinicians diagnose and treat the disease.
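
A minimal sketch of the evaluation the abstract reports: a multi-class SVM scored with macro and micro AUC via scikit-learn. The features are synthetic stand-ins and the three-grade label set is an assumption:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
X = rng.normal(size=(222, 30))    # stand-in radiomic features (one row per patient)
y = rng.integers(0, 3, size=222)  # stand-in RC grades (three grades assumed)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", probability=True).fit(Xtr, ytr)
proba = svm.predict_proba(Xte)

yte_bin = label_binarize(yte, classes=[0, 1, 2])
print("macro AUC:", roc_auc_score(yte_bin, proba, average="macro"))
print("micro AUC:", roc_auc_score(yte_bin, proba, average="micro"))
```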

Robust brain age estimation from structural MRI with contrastive learning

Carlo Alberto Barbano, Benoit Dufumier, Edouard Duchesnay, Marco Grangetto, Pietro Gori

arXiv preprint · Jul 2, 2025
Estimating brain age from structural MRI has emerged as a powerful tool for characterizing normative and pathological aging. In this work, we explore contrastive learning as a scalable and robust alternative to supervised approaches for brain age estimation. We introduce a novel contrastive loss function, $\mathcal{L}^{exp}$, and evaluate it across multiple public neuroimaging datasets comprising over 20,000 scans. Our experiments reveal four key findings. First, scaling pre-training on diverse, multi-site data consistently improves generalization performance, cutting external mean absolute error (MAE) nearly in half. Second, $\mathcal{L}^{exp}$ is robust to site-related confounds, maintaining low scanner-predictability as training size increases. Third, contrastive models reliably capture accelerated aging in patients with cognitive impairment and Alzheimer's disease, as shown through brain age gap analysis, ROC curves, and longitudinal trends. Lastly, unlike supervised baselines, $\mathcal{L}^{exp}$ maintains a strong correlation between brain age accuracy and downstream diagnostic performance, supporting its potential as a foundation model for neuroimaging. These results position contrastive learning as a promising direction for building generalizable and clinically meaningful brain representations.
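
The brain-age-gap analysis mentioned above reduces to a simple quantity: predicted brain age minus chronological age, expected to be elevated in cognitive impairment and AD. A sketch with synthetic numbers:

```python
import numpy as np

chronological = np.array([70.0, 72.0, 68.0, 75.0])  # subjects' true ages
predicted = np.array([71.5, 78.0, 69.0, 83.5])      # model estimates (stand-ins)

bag = predicted - chronological  # positive gap suggests accelerated aging
print("per-subject brain age gap:", bag)
print("group mean gap:", bag.mean())
```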