Page 92 of 92919 results

RRFNet: A free-anchor brain tumor detection and classification network based on reparameterization technology.

Liu W, Guo X

PubMed · Jan 1, 2025
Advancements in medical imaging technology have facilitated the acquisition of high-quality brain images through computed tomography (CT) or magnetic resonance imaging (MRI), enabling brain specialists to diagnose brain tumors more effectively. However, manual diagnosis is time-consuming, which has made automatic detection and classification from brain imaging increasingly important. Conventional object detection models face limitations in brain tumor detection owing to the significant differences between medical images and natural scene images, as well as challenges such as complex backgrounds, noise interference, and blurred boundaries between cancerous and normal tissues. This study investigates the application of deep learning to brain tumor detection, analyzing the effect of three factors on detection performance: the number of model parameters, the input batch size, and the use of anchor boxes. Experimental results reveal that an excessive number of model parameters or the use of anchor boxes may reduce detection accuracy, whereas increasing the number of brain tumor samples improves detection performance. This study introduces a backbone network built from RepConv and RepC3, along with an FGConcat feature-map splicing module, to optimize the brain tumor detection model. The experimental results show that the proposed RepConv-RepC3-FGConcat Network (RRFNet) can learn underlying semantic information about brain tumors during the training stage while maintaining a low parameter count during inference, which improves the speed of brain tumor detection. Compared with YOLOv8, RRFNet achieved higher brain tumor detection accuracy, with an mAP of 79.2%. This optimized approach enhances both accuracy and efficiency, which is essential in clinical settings where time and precision are critical.
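The structural-reparameterization idea behind RepConv can be sketched numerically: parallel convolution branches used during training collapse, by linearity of convolution, into a single kernel for inference, so the deployed model carries fewer parameters. The sketch below (plain NumPy; the illustrative 3×3 + 1×1 branch topology is an assumption, not the paper's exact design) verifies the equivalence on a single channel.

```python
import numpy as np

def merge_rep_branches(k3, k1):
    """Fold a parallel 1x1 branch into a 3x3 kernel.

    k3: (out_c, in_c, 3, 3) weights of the 3x3 branch
    k1: (out_c, in_c, 1, 1) weights of the 1x1 branch
    Because convolution is linear, conv(x, k3) + conv(x, k1)
    equals conv(x, k3 + centre_pad(k1)), so both branches
    collapse into one kernel for inference.
    """
    merged = k3.copy()
    merged[:, :, 1, 1] += k1[:, :, 0, 0]  # centre-pad the 1x1 kernel into the 3x3
    return merged

def conv2d_single(x, k):
    # naive "valid" 3x3 cross-correlation for one channel
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

x = np.random.rand(8, 8)
k3 = np.random.rand(1, 1, 3, 3)
k1 = np.random.rand(1, 1, 1, 1)
merged = merge_rep_branches(k3, k1)
# the 1x1 branch on the valid interior is just a per-pixel scale
y_two_branch = conv2d_single(x, k3[0, 0]) + x[1:-1, 1:-1] * k1[0, 0, 0, 0]
y_merged = conv2d_single(x, merged[0, 0])
assert np.allclose(y_two_branch, y_merged)
```

The same algebra extends to batch-norm folding and identity branches, which is what lets such networks train with a rich multi-branch topology yet infer with a plain single-branch backbone.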

Radiomics machine learning based on asymmetrically prominent cortical and deep medullary veins combined with clinical features to predict prognosis in acute ischemic stroke: a retrospective study.

Li H, Chang C, Zhou B, Lan Y, Zang P, Chen S, Qi S, Ju R, Duan Y

PubMed · Jan 1, 2025
Acute ischemic stroke (AIS) has a poor prognosis and a high recurrence rate. Predicting the outcomes of AIS patients in the early stages of the disease is therefore important. The establishment of intracerebral collateral circulation significantly improves the survival of brain cells and the outcomes of AIS patients. However, no machine learning method has been applied to investigate the correlation between the dynamic evolution of intracerebral venous collateral circulation and AIS prognosis. Therefore, we employed a support vector machine (SVM) algorithm to analyze asymmetrically prominent cortical veins (APCVs) and deep medullary veins (DMVs), establishing a radiomic model that predicts AIS prognosis in combination with clinical indicators. The magnetic resonance imaging (MRI) data and clinical indicators of 150 AIS patients were retrospectively analyzed. Regions of interest corresponding to the DMVs and APCVs were delineated, and least absolute shrinkage and selection operator (LASSO) regression was used to select features extracted from these regions. An APCV-DMV radiomic model was created via the SVM algorithm, and independent clinical risk factors associated with AIS were combined with the radiomic model to generate a joint model. The SVM algorithm was selected because of its proven efficacy in handling high-dimensional radiomic data compared with alternative classifiers (<i>e.g.</i>, random forest) in pilot experiments. Nine radiomic features associated with AIS patient outcomes were ultimately selected. In the internal test set, the AUCs of the clinical, DMV-APCV radiomic and joint models were 0.816, 0.976 and 0.996, respectively. The DeLong test revealed that the predictive performance of the joint model was better than that of the individual models, with a test set AUC of 0.996, sensitivity of 0.905, and specificity of 1.000 (<i>P</i> < 0.05).
Using radiomic methods, we propose a novel joint predictive model that combines the radiomic features of the APCV and DMV with clinical indicators. This model quantitatively characterizes the morphological and functional attributes of venous collateral circulation, elucidating its important role in accurately evaluating the prognosis of patients with AIS and providing a noninvasive and highly accurate imaging tool for early prognostic prediction.
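The LASSO-then-SVM pattern described in this abstract is a standard radiomics pipeline and can be sketched with scikit-learn. Everything below is illustrative: the synthetic data, the `alpha` value, and the kernel choice are assumptions, not the paper's settings.

```python
# Hypothetical sketch of a LASSO feature-selection + SVM radiomics
# pipeline on synthetic stand-in data (not the study's real features).
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 100))                 # 150 patients x 100 radiomic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic good/poor outcome label

model = Pipeline([
    ("scale", StandardScaler()),                          # radiomic features vary in scale
    ("lasso_select", SelectFromModel(Lasso(alpha=0.05))), # sparse LASSO keeps informative features
    ("svm", SVC(kernel="rbf")),                           # SVM classifier on the reduced set
])
model.fit(X, y)
n_kept = int(model.named_steps["lasso_select"].get_support().sum())
print("features kept:", n_kept)
print("train accuracy:", model.score(X, y))
```

A joint model like the paper's would simply concatenate the selected radiomic features with clinical covariates before the final classifier; cross-validated AUC, not training accuracy, is what a real evaluation would report.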

Recognition of flight cadets' brain functional magnetic resonance imaging data based on machine learning analysis.

Ye L, Weng S, Yan D, Ma S, Chen X

PubMed · Jan 1, 2025
The rapid advancement of the civil aviation industry has attracted significant attention to research on pilots. However, the brain changes experienced by flight cadets following their training remain, to some extent, unexplored territory compared with those of the general population. The aim of this study was to examine the impact of flight training on brain function by employing machine learning (ML) techniques. We collected resting-state functional magnetic resonance imaging (resting-state fMRI) data from 79 flight cadets and ground program cadets, extracting blood oxygenation level-dependent (BOLD) signals, amplitude of low frequency fluctuation (ALFF), regional homogeneity (ReHo), and functional connectivity (FC) metrics as feature inputs for ML models. After conducting feature selection using a two-sample t-test, we established various ML classification models, including Extreme Gradient Boosting (XGBoost), Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM), and Gaussian Naive Bayes (GNB). Comparative analysis of the model results revealed that the LR classifier based on BOLD signals could accurately distinguish flight cadets from the general population, achieving an AUC of 83.75% and an accuracy of 0.93. Furthermore, an analysis of the features contributing significantly to the ML classification models indicated that these features were predominantly located in brain regions associated with auditory-visual processing, motor function, emotional regulation, and cognition, primarily within the Default Mode Network (DMN), Visual Network (VN), and SomatoMotor Network (SMN). These findings suggest that flight-trained cadets may exhibit enhanced functional dynamics and cognitive flexibility.
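The two-sample t-test feature-selection step can be illustrated directly: rank each feature by the magnitude of its between-group t-statistic and keep the top-ranked columns before classification. The sketch below uses a Welch t-statistic on synthetic data; group sizes, the shift, and `k` are illustrative assumptions.

```python
import numpy as np

def ttest_select(X_a, X_b, k):
    """Rank features by |Welch two-sample t-statistic| between groups
    and return the top-k column indices (a minimal stand-in for the
    t-test feature-selection step described above)."""
    m_a, m_b = X_a.mean(axis=0), X_b.mean(axis=0)
    v_a, v_b = X_a.var(axis=0, ddof=1), X_b.var(axis=0, ddof=1)
    n_a, n_b = len(X_a), len(X_b)
    t = (m_a - m_b) / np.sqrt(v_a / n_a + v_b / n_b)
    return np.argsort(-np.abs(t))[:k]

rng = np.random.default_rng(1)
flight = rng.normal(size=(40, 50))   # hypothetical flight-cadet features
flight[:, 3] += 2.0                  # make feature 3 strongly group-separating
ground = rng.normal(size=(39, 50))   # hypothetical ground-cadet features
top = ttest_select(flight, ground, k=5)
assert 3 in top                      # the shifted feature is selected
```

In a real analysis the retained p-value threshold (rather than a fixed `k`) would be chosen, and selection would be nested inside cross-validation to avoid leaking test-set information into the classifier.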

Neurovision: A deep learning-driven web application for brain tumour detection using weight-aware decision approach.

Santhosh TRS, Mohanty SN, Pradhan NR, Khan T, Derbali M

PubMed · Jan 1, 2025
Accurate diagnosis of brain tumours is a crucial task in modern medical systems, yet identifying a potential brain tumour is challenging owing to the complex behaviour and structure of the human brain. To address this issue, a deep learning-driven framework consisting of four pre-trained models, viz. DenseNet169, VGG-19, Xception, and EfficientNetV2B2, is developed to classify potential brain tumours from magnetic resonance images. First, the deep learning models are trained and fine-tuned on the training dataset, and the validation scores of the trained models are taken as model-wise weights. The trained models are then evaluated on the test dataset to generate model-specific predictions. In the weight-aware decision module, the class bucket of a probable output class is updated with the weights of the deep models whose predictions match that class. Finally, the bucket with the highest aggregated value is selected as the final output class for the input image. This novel weight-aware decision mechanism is a key feature of the framework, as it effectively handles tie situations in multi-class classification compared with conventional majority-based techniques. The developed framework obtained promising accuracies of 98.7%, 97.52%, and 94.94% on three different datasets. The entire framework is seamlessly integrated into an end-to-end web application for user convenience. The source code, dataset and other particulars are publicly released at https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app [Rishik Sai Santhosh, "Brain Tumour Image Classification Application," https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app] for academic, research and other non-commercial usage.
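The weight-aware decision module reduces to weighted voting: each model's validation score is its vote weight, class buckets accumulate the weights of agreeing models, and the heaviest bucket wins. The sketch below is a minimal illustration; the weights and predictions shown are invented, not the paper's values.

```python
# Minimal sketch of the weight-aware decision described above.
# Validation-score weights break 2-2 ties that defeat plain majority voting.

def weight_aware_decision(predictions, weights):
    """predictions: {model_name: predicted_class};
    weights: {model_name: validation score used as vote weight}."""
    buckets = {}
    for model, cls in predictions.items():
        buckets[cls] = buckets.get(cls, 0.0) + weights[model]
    return max(buckets, key=buckets.get)  # heaviest class bucket wins

# Illustrative validation scores (hypothetical, not from the paper):
weights = {"DenseNet169": 0.97, "VGG-19": 0.94,
           "Xception": 0.96, "EfficientNetV2B2": 0.95}
# A 2-2 tie under majority voting, resolved by weight aggregation:
preds = {"DenseNet169": "glioma", "VGG-19": "meningioma",
         "Xception": "meningioma", "EfficientNetV2B2": "glioma"}
print(weight_aware_decision(preds, weights))  # glioma: 0.97+0.95 > 0.94+0.96
```

The design choice is that a tie between equally sized camps is decided by whichever camp contains the historically more reliable models, rather than arbitrarily.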

Refining CT image analysis: Exploring adaptive fusion in U-nets for enhanced brain tissue segmentation.

Chen BC, Shen CY, Chai JW, Hwang RH, Chiang WC, Chou CH, Liu WM

PubMed · Jan 1, 2025
Non-contrast Computed Tomography (NCCT) quickly diagnoses acute cerebral hemorrhage or infarction. However, Deep-Learning (DL) algorithms often generate false alarms (FA) beyond the cerebral region. We introduce an enhanced brain tissue segmentation method for infarction lesion segmentation (ILS). This method integrates an adaptive result fusion strategy to confine the search operation within cerebral tissue, effectively reducing FAs. By leveraging fused brain masks, DL-based ILS algorithms focus on pertinent radiomic correlations. Various U-Net models underwent rigorous training, with exploration of diverse fusion strategies. Further refinement entailed applying a 9×9 Gaussian filter with unit standard deviation followed by binarization to mitigate false positives. Performance evaluation utilized Intersection over Union (IoU) and Hausdorff Distance (HD) metrics, complemented by external validation on a subset of the COCO dataset. Our study comprised 20 ischemic stroke patients (14 males, 4 females) with an average age of 68.9 ± 11.7 years. Fusion with UNet2+ and UNet3+ yielded an IoU of 0.955 and an HD of 1.33, while fusion with U-Net, UNet2+, and UNet3+ resulted in an IoU of 0.952 and an HD of 1.61. Evaluation on the COCO dataset demonstrated an IoU of 0.463 and an HD of 584.1 for fusion with UNet2+ and UNet3+, and an IoU of 0.453 and an HD of 728.0 for fusion with U-Net, UNet2+, and UNet3+. Our adaptive fusion strategy significantly diminishes FAs and enhances the training efficacy of DL-based ILS algorithms, surpassing individual U-Net models. This methodology holds promise as a versatile, data-independent approach for cerebral lesion segmentation.
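The post-processing step this abstract describes, a 9×9 unit-sigma Gaussian blur followed by binarization, suppresses isolated false-positive pixels while preserving contiguous lesion blobs, and IoU then scores the cleaned mask. A plain-NumPy sketch (the separable-filter implementation and the 0.5 threshold are assumptions; only the 9×9 size and unit sigma come from the text):

```python
import numpy as np

def gaussian_kernel_1d(size=9, sigma=1.0):
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_and_binarize(mask, thresh=0.5):
    """9x9 Gaussian blur (sigma=1, applied separably) then binarization:
    the false-positive-suppression step described above."""
    k = gaussian_kernel_1d()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                                  1, mask.astype(float))
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"),
                                  0, blurred)
    return (blurred >= thresh).astype(np.uint8)

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

pred = np.zeros((32, 32), dtype=np.uint8)
pred[10:20, 10:20] = 1           # contiguous lesion blob survives the blur
pred[2, 2] = 1                   # isolated single-pixel false alarm
cleaned = smooth_and_binarize(pred)
gt = np.zeros_like(pred)
gt[10:20, 10:20] = 1
assert cleaned[2, 2] == 0        # the lone false alarm is removed
print("IoU after cleaning:", round(iou(cleaned, gt), 3))
```

A lone pixel's peak value after the blur (roughly 0.16) falls below the threshold, while the blob's interior stays near 1.0, which is exactly why this step trims false alarms at a small cost in blob-corner pixels.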

Enhancing Attention Network Spatiotemporal Dynamics for Motor Rehabilitation in Parkinson's Disease.

Pei G, Hu M, Ouyang J, Jin Z, Wang K, Meng D, Wang Y, Chen K, Wang L, Cao LZ, Funahashi S, Yan T, Fang B

PubMed · Jan 1, 2025
Optimizing resource allocation for Parkinson's disease (PD) motor rehabilitation necessitates identifying biomarkers of responsiveness and dynamic neuroplasticity signatures underlying efficacy. A cohort study of 52 early-stage PD patients undergoing 2-week multidisciplinary intensive rehabilitation therapy (MIRT) was conducted, stratifying participants into responders and nonresponders. A multimodal analysis of resting-state electroencephalography (EEG) microstates and functional magnetic resonance imaging (fMRI) coactivation patterns was performed to characterize MIRT-induced spatiotemporal network reorganization. Responders demonstrated clinically meaningful improvement in motor symptoms, exceeding the minimal clinically important difference threshold of 3.25 on the Unified PD Rating Scale part III, alongside significant reductions in bradykinesia and a significant enhancement in quality-of-life scores at the 3-month follow-up. Resting-state EEG in responders showed a significant attenuation in microstate C and a significant enhancement in microstate D occurrences, along with significantly increased transitions from microstate A/B to D, which correlated with motor function, especially with bradykinesia gains. Concurrently, fMRI analyses identified a prolonged dwell time of the dorsal attention network coactivation/ventral attention network deactivation pattern, which was significantly inversely associated with microstate C occurrence and significantly linked to motor improvement. The identified spatiotemporal neural markers were validated using machine learning models to assess the efficacy of MIRT in motor rehabilitation for PD patients, achieving an average accuracy of 86%. These findings suggest that MIRT may facilitate a shift in neural networks from sensory processing to higher-order cognitive control, with a dynamic reallocation of attentional resources.
This preliminary study validates the necessity of integrating cognitive-motor strategies for the motor rehabilitation of PD and identifies novel neural markers for assessing treatment efficacy.

Brain tumor classification using MRI images and deep learning techniques.

Wong Y, Su ELM, Yeong CF, Holderbaum W, Yang C

PubMed · Jan 1, 2025
Brain tumors pose a significant medical challenge, necessitating early detection and precise classification for effective treatment. This study aims to address this challenge by introducing an automated brain tumor classification system that utilizes deep learning (DL) and Magnetic Resonance Imaging (MRI) images. The main purpose of this research is to develop a model that can accurately detect and classify different types of brain tumors, including glioma, meningioma, pituitary tumors, and normal brain scans. A convolutional neural network (CNN) architecture with pretrained VGG16 as the base model is employed, and diverse public datasets are utilized to ensure comprehensive representation. Data augmentation techniques are employed to enhance the training dataset, resulting in a total of 17,136 brain MRI images across the four classes. The accuracy of this model was 99.24%, higher than that of other similar works, demonstrating its potential clinical utility. This higher accuracy was achieved mainly through the use of a large and diverse dataset, an improved network configuration, a fine-tuning strategy to adjust pretrained weights, and data augmentation techniques that enhanced classification performance for brain tumor detection. In addition, a web application was developed by leveraging HTML and Dash components to enhance usability, allowing for easy image upload and tumor prediction. By harnessing artificial intelligence (AI), the developed system addresses the need to reduce human error and enhance diagnostic accuracy. The proposed approach provides an efficient and reliable solution for brain tumor classification, facilitating early diagnosis and enabling timely medical interventions. This work signifies a potential advancement in brain tumor classification, promising improved patient care and outcomes.
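The dataset expansion to 17,136 images comes from geometric data augmentation of the kind sketched below. The specific transforms (flips and right-angle rotations) are illustrative assumptions; the abstract does not state which augmentations were used.

```python
import numpy as np

def augment(image):
    """Generate simple geometric variants of one MRI slice, the kind of
    augmentation used to expand a training set (illustrative only; the
    study's exact transform list is not specified in the abstract)."""
    return [
        image,                   # original
        np.fliplr(image),        # horizontal flip
        np.flipud(image),        # vertical flip
        np.rot90(image, 1),      # 90-degree rotation
        np.rot90(image, 2),      # 180-degree rotation
        np.rot90(image, 3),      # 270-degree rotation
    ]

mri = np.arange(224 * 224).reshape(224, 224)  # stand-in for a 224x224 MRI slice
batch = augment(mri)
print(len(batch), "augmented views per image")
```

With six views per image, a few thousand source scans multiply into a dataset on the order of the 17,136 images reported, which is what lets a fine-tuned VGG16 base train without severe overfitting.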

3D-MRI brain glioma intelligent segmentation based on improved 3D U-net network.

Wang T, Wu T, Yang D, Xu Y, Lv D, Jiang T, Wang H, Chen Q, Xu S, Yan Y, Lin B

PubMed · Jan 1, 2025
To enhance glioma segmentation, a 3D-MRI intelligent glioma segmentation method based on deep learning is introduced. This method offers significant guidance for medical diagnosis, grading, and treatment strategy selection. Glioma case data were sourced from the BraTS2023 public dataset. First, we preprocess the dataset, including 3D clipping, resampling, artifact elimination and normalization. Second, to enhance the network's ability to perceive features at different scales, we introduce a spatial pyramid pooling module. Third, to make the model focus on glioma details while suppressing irrelevant background information, we propose a multi-scale fusion attention mechanism. Finally, to address class imbalance and enhance learning of misclassified voxels, a combination of Dice and Focal loss functions was employed; this loss not only maintains segmentation accuracy but also improves the recognition of challenging samples, thereby improving the accuracy and generalization of the model in glioma segmentation. Experimental findings reveal that the enhanced 3D U-Net network model stabilizes training loss at 0.1 after 150 training iterations. The refined model demonstrates superior performance, with the highest DSC, Recall, and Precision values of 0.7512, 0.7064, and 0.77451, respectively. In whole tumor (WT) segmentation, the Dice Similarity Coefficient (DSC), Recall, and Precision scores are 0.9168, 0.9426, and 0.9375, respectively. For tumor core (TC) segmentation, these scores are 0.8954, 0.9014, and 0.9369, respectively. In enhancing tumor (ET) segmentation, the method achieves DSC, Recall, and Precision values of 0.8674, 0.9045, and 0.9011, respectively. The DSC, Recall, and Precision indices in the WT, TC, and ET segments using this method are the highest recorded, significantly enhancing glioma segmentation.
This improvement bolsters the accuracy and reliability of diagnoses, ultimately providing a scientific foundation for clinical diagnosis and treatment.
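The combined Dice + Focal loss mentioned above has a compact form: soft Dice rewards overlap despite class imbalance, while the focal term down-weights easy voxels so hard, misclassified ones dominate. The sketch below is a plain-NumPy illustration; the equal weighting `w=0.5` and `gamma=2` are assumptions, not the paper's values.

```python
import numpy as np

def dice_loss(p, y, eps=1e-6):
    """Soft Dice loss on predicted probabilities p and binary targets y."""
    inter = np.sum(p * y)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss: the (1-pt)^gamma factor shrinks the
    contribution of well-classified voxels."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)   # probability assigned to the true class
    return np.mean(-((1 - pt) ** gamma) * np.log(pt))

def combined_loss(p, y, w=0.5):
    """Weighted sum of Dice and Focal terms, per the loss design above
    (the weighting w is an illustrative assumption)."""
    return w * dice_loss(p, y) + (1 - w) * focal_loss(p, y)

y = np.array([1, 1, 0, 0, 0, 0, 0, 0], float)           # imbalanced voxel labels
good = np.array([0.9, 0.8, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
bad = np.array([0.3, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
assert combined_loss(good, y) < combined_loss(bad, y)   # better prediction, lower loss
```

In a training loop these would be framework tensors with gradients, but the arithmetic and the imbalance-handling rationale are identical.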

Ensuring Fairness in Detecting Mild Cognitive Impairment with MRI.

Tong B, Edwards T, Yang S, Hou B, Tarzanagh DA, Urbanowicz RJ, Moore JH, Ritchie MD, Davatzikos C, Shen L

PubMed · Jan 1, 2024
Machine learning (ML) algorithms play a crucial role in the early and accurate diagnosis of Alzheimer's Disease (AD), which is essential for effective treatment planning. However, existing methods are not well-suited for identifying Mild Cognitive Impairment (MCI), a critical transitional stage between normal aging and AD. This inadequacy is primarily due to label imbalance and bias from different sensitive attributes in MCI classification. To overcome these challenges, we have designed an end-to-end fairness-aware approach for label-imbalanced classification, tailored specifically for neuroimaging data. This method, built on the recently developed FACIMS framework, integrates into STREAMLINE, an automated ML environment. We evaluated our approach against nine other ML algorithms and found that it achieves comparable balanced accuracy to other methods while prioritizing fairness in classifications with five different sensitive attributes. This analysis contributes to the development of equitable and reliable ML diagnostics for MCI detection.
