3D-MRI brain glioma intelligent segmentation based on improved 3D U-net network.

Wang T, Wu T, Yang D, Xu Y, Lv D, Jiang T, Wang H, Chen Q, Xu S, Yan Y, Lin B

PubMed · Jan 1 2025
To enhance glioma segmentation, a 3D-MRI intelligent glioma segmentation method based on deep learning is introduced. This method offers significant guidance for medical diagnosis, grading, and treatment strategy selection. Glioma case data were sourced from the BraTS2023 public dataset. First, we preprocess the dataset, including 3D clipping, resampling, artifact elimination, and normalization. Second, to enhance the network's perception of features at different scales, we introduce a spatial pyramid pooling module. Third, we propose a multi-scale fusion attention mechanism that makes the model focus on glioma details while suppressing irrelevant background information. Finally, to address class imbalance and enhance the learning of misclassified voxels, a loss function combining Dice and Focal losses was employed; this combination maintains segmentation accuracy while improving the recognition of challenging samples, thereby improving the accuracy and generalization of the model in glioma segmentation. Experimental findings reveal that the enhanced 3D U-Net network model stabilizes its training loss at 0.1 after 150 training iterations. The refined model demonstrates superior performance, with the highest DSC, Recall, and Precision values of 0.7512, 0.7064, and 0.77451, respectively. In Whole Tumor (WT) segmentation, the Dice Similarity Coefficient (DSC), Recall, and Precision scores are 0.9168, 0.9426, and 0.9375, respectively. For Tumor Core (TC) segmentation, these scores are 0.8954, 0.9014, and 0.9369, respectively. In Enhancing Tumor (ET) segmentation, the method achieves DSC, Recall, and Precision values of 0.8674, 0.9045, and 0.9011, respectively. The DSC, Recall, and Precision indices achieved by this method in the WT, TC, and ET regions are the highest recorded, significantly enhancing glioma segmentation. This improvement bolsters the accuracy and reliability of diagnoses, ultimately providing a scientific foundation for clinical diagnosis and treatment.
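The abstract gives no implementation detail for the combined loss; the sketch below shows one way a Dice + Focal combination of the kind described could look in PyTorch. The tensor shapes, weighting factor alpha, and focusing parameter gamma are assumptions for illustration, not the authors' settings.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, targets, alpha=0.5, gamma=2.0, smooth=1e-5):
    """Minimal sketch: soft Dice addresses class imbalance at the region level,
    focal cross-entropy up-weights hard (misclassified) voxels.
    logits:  (B, C, D, H, W) raw network outputs
    targets: (B, D, H, W) integer class labels
    """
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes=logits.shape[1])       # (B, D, H, W, C)
    onehot = onehot.permute(0, 4, 1, 2, 3).float()                 # (B, C, D, H, W)

    # Soft Dice over batch and spatial dimensions, averaged across classes.
    dims = (0, 2, 3, 4)
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice = 1.0 - ((2.0 * intersection + smooth) / (cardinality + smooth)).mean()

    # Focal term: down-weight easy voxels by (1 - p_t)^gamma.
    ce = F.cross_entropy(logits, targets, reduction="none")        # (B, D, H, W)
    pt = torch.exp(-ce)
    focal = ((1.0 - pt) ** gamma * ce).mean()

    return alpha * dice + (1.0 - alpha) * focal
```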

Same-model and cross-model variability in knee cartilage thickness measurements using 3D MRI systems.

Katano H, Kaneko H, Sasaki E, Hashiguchi N, Nagai K, Ishijima M, Ishibashi Y, Adachi N, Kuroda R, Tomita M, Masumoto J, Sekiya I

PubMed · Jan 1 2025
Magnetic Resonance Imaging (MRI)-based three-dimensional analysis of knee cartilage has evolved to become fully automatic. However, when implementing these measurements across multiple clinical centers, scanner variability becomes a critical consideration. Our purposes were to quantify and compare same-model variability (between repeated scans on the same MRI system) and cross-model variability (across different MRI systems) in knee cartilage thickness measurements using MRI scanners from five manufacturers, as analyzed with specific 3D volume analysis software. Ten healthy volunteers (eight males and two females, aged 22-60 years) underwent two scans of their right knee on 3T MRI systems from five manufacturers (Canon, Fujifilm, GE, Philips, and Siemens). The imaging protocol included fat-suppressed spoiled gradient echo and proton density-weighted sequences. Cartilage regions were automatically segmented into seven subregions using specific deep learning-based 3D volume analysis software. This resulted in 350 measurements for same-model variability and 2,800 measurements for cross-model variability. For same-model variability, 82% of measurements showed variability ≤0.10 mm, and 98% showed variability ≤0.20 mm. For cross-model variability, 51% showed variability ≤0.10 mm, and 84% showed variability ≤0.20 mm. The mean same-model variability (0.06 ± 0.05 mm) was significantly lower than the mean cross-model variability (0.11 ± 0.09 mm) (p < 0.001). This study demonstrates that knee cartilage thickness measurements exhibit significantly higher variability across different MRI systems than between repeated measurements on the same system when analyzed with this software. This finding has important implications for multi-center studies and longitudinal assessments using different MRI systems and highlights the software-dependent nature of such variability assessments.
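For readers who want to reproduce the style of analysis, the sketch below shows how same-model and cross-model variability of this shape could be tabulated from a thickness array. The data layout, placeholder values, and choice of a Mann-Whitney U test are assumptions (the abstract does not name the test behind p < 0.001), not the authors' pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical layout: thickness[subject, scanner, scan, subregion] in mm,
# i.e. 10 subjects x 5 scanners x 2 repeated scans x 7 subregions.
rng = np.random.default_rng(0)
thickness = 2.0 + 0.1 * rng.standard_normal((10, 5, 2, 7))   # placeholder data

# Same-model variability: |scan1 - scan2| on the same scanner
# -> 10 x 5 x 7 = 350 values, matching the paper's count.
same_model = np.abs(thickness[:, :, 0, :] - thickness[:, :, 1, :]).ravel()

# Cross-model variability: |difference| between scan instances from different scanners
# -> 10 subjects x 40 cross-scanner scan pairs x 7 subregions = 2,800 values.
scans = thickness.reshape(10, 10, 7)          # (subject, scanner*scan instance, subregion)
scanner_of = np.repeat(np.arange(5), 2)       # scanner index of each of the 10 instances
pairs = [(i, j) for i in range(10) for j in range(i + 1, 10) if scanner_of[i] != scanner_of[j]]
cross_model = np.abs(np.stack([scans[:, i] - scans[:, j] for i, j in pairs], axis=1)).ravel()

print(f"same-model  mean ± SD: {same_model.mean():.2f} ± {same_model.std():.2f} mm")
print(f"cross-model mean ± SD: {cross_model.mean():.2f} ± {cross_model.std():.2f} mm")
print(f"share ≤ 0.10 mm: {np.mean(same_model <= 0.10):.0%} vs {np.mean(cross_model <= 0.10):.0%}")
print(stats.mannwhitneyu(same_model, cross_model, alternative="less"))
```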

RRFNet: A free-anchor brain tumor detection and classification network based on reparameterization technology.

Liu W, Guo X

PubMed · Jan 1 2025
Advancements in medical imaging technology have facilitated the acquisition of high-quality brain images through computed tomography (CT) or magnetic resonance imaging (MRI), enabling brain specialists to diagnose brain tumors more effectively. However, manual diagnosis is time-consuming, which has made automatic detection and classification from brain imaging increasingly important. Conventional object detection models face limitations in brain tumor detection owing to the significant differences between medical images and natural scene images, as well as challenges such as complex backgrounds, noise interference, and blurred boundaries between cancerous and normal tissues. This study investigates the application of deep learning to brain tumor detection, analyzing the effect of three factors on detection performance: the number of model parameters, the input data batch size, and the use of anchor boxes. Experimental results reveal that an excessive number of model parameters or the use of anchor boxes may reduce detection accuracy, whereas increasing the number of brain tumor samples improves detection performance. This study introduces a backbone network built from RepConv and RepC3, along with an FGConcat feature-map splicing module, to optimize the brain tumor detection model. The experimental results show that the proposed RepConv-RepC3-FGConcat Network (RRFNet) can learn underlying semantic information about brain tumors during the training stage while maintaining a low number of parameters during inference, which improves the speed of brain tumor detection. Compared with YOLOv8, RRFNet achieved higher accuracy in brain tumor detection, with a mAP value of 79.2%. This optimized approach enhances both accuracy and efficiency, which is essential in clinical settings where time and precision are critical.
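The abstract names RepConv and RepC3 but does not show them; the sketch below only illustrates the structural-reparameterization idea such blocks rely on, i.e. parallel branches at training time fused into a single convolution for inference. It uses a deliberately simplified two-branch PyTorch block (no batch normalization) rather than the authors' modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleRepConv(nn.Module):
    """Training-time block with parallel 3x3 and 1x1 branches (biases only, no BN,
    so the fusion arithmetic is easy to verify)."""
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.conv3(x) + self.conv1(x)

    def fuse(self):
        """Collapse both branches into a single 3x3 conv for inference."""
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels, 3, padding=1)
        # Pad the 1x1 kernel to 3x3 (value at the center) so it can be added to the 3x3 kernel.
        k1_padded = F.pad(self.conv1.weight, [1, 1, 1, 1])
        fused.weight.data = self.conv3.weight.data + k1_padded
        fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
        return fused

x = torch.randn(1, 8, 32, 32)
block = SimpleRepConv(8).eval()
fused = block.fuse().eval()
print(torch.allclose(block(x), fused(x), atol=1e-6))   # True: same output, one conv
```

The fused block performs the same computation with a single kernel, which is the usual source of the "low number of parameters during inference" behavior the abstract refers to.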

Enhancement of Fairness in AI for Chest X-ray Classification.

Jackson NJ, Yan C, Malin BA

PubMed · Jan 1 2024
The use of artificial intelligence (AI) in medicine has shown promise to improve the quality of healthcare decisions. However, AI can be biased in a manner that produces unfair predictions for certain demographic subgroups. In MIMIC-CXR, a publicly available dataset of over 300,000 chest X-ray images, diagnostic AI has been shown to have a higher false negative rate for racial minorities. We evaluated the capacity of synthetic data augmentation, oversampling, and demographic-based corrections to enhance the fairness of AI predictions. We show that adjusting unfair predictions for demographic attributes, such as race, is ineffective at improving fairness or predictive performance. However, using oversampling and synthetic data augmentation to modify disease prevalence reduced such disparities by 74.7% and 10.6%, respectively. Moreover, such fairness gains were accomplished without reduction in performance (95% CI AUC: [0.816, 0.820] versus [0.810, 0.819] versus [0.817, 0.821] for baseline, oversampling, and augmentation, respectively).
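The abstract does not show the resampling step; the sketch below illustrates prevalence-equalizing oversampling of the general kind evaluated above, using pandas. The column names ("race", "pneumonia") and target prevalence are placeholders, not the authors' configuration, and the synthetic-augmentation arm is not shown.

```python
import pandas as pd

def oversample_to_target_prevalence(df, group_col, label_col, target_prev, seed=0):
    """Sketch: within each demographic subgroup, oversample positive cases
    (with replacement) until the subgroup's disease prevalence reaches a common
    target, so the classifier does not learn group-specific base rates."""
    pieces = []
    for _, g in df.groupby(group_col):
        pos, neg = g[g[label_col] == 1], g[g[label_col] == 0]
        # n_pos needed so that n_pos / (n_pos + n_neg) == target_prev
        n_pos_needed = int(round(target_prev * len(neg) / (1 - target_prev)))
        extra = max(n_pos_needed - len(pos), 0)
        pieces.append(pd.concat([g, pos.sample(n=extra, replace=True, random_state=seed)]))
    return pd.concat(pieces, ignore_index=True)

# Hypothetical usage with a MIMIC-CXR style label table:
# balanced = oversample_to_target_prevalence(labels_df, "race", "pneumonia", target_prev=0.3)
```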

Ensuring Fairness in Detecting Mild Cognitive Impairment with MRI.

Tong B, Edwards T, Yang S, Hou B, Tarzanagh DA, Urbanowicz RJ, Moore JH, Ritchie MD, Davatzikos C, Shen L

PubMed · Jan 1 2024
Machine learning (ML) algorithms play a crucial role in the early and accurate diagnosis of Alzheimer's Disease (AD), which is essential for effective treatment planning. However, existing methods are not well suited for identifying Mild Cognitive Impairment (MCI), a critical transitional stage between normal aging and AD. This inadequacy is primarily due to label imbalance and bias associated with different sensitive attributes in MCI classification. To overcome these challenges, we have designed an end-to-end fairness-aware approach for label-imbalanced classification, tailored specifically for neuroimaging data. This method, built on the recently developed FACIMS framework, integrates into STREAMLINE, an automated ML environment. We evaluated our approach against nine other ML algorithms and found that it achieves comparable balanced accuracy to other methods while prioritizing fairness in classifications with five different sensitive attributes. This analysis contributes to the development of equitable and reliable ML diagnostics for MCI detection.
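FACIMS and STREAMLINE are not reproduced here; as a rough illustration of the style of evaluation the abstract describes (balanced accuracy reported alongside a per-group fairness check), the sketch below computes balanced accuracy and the spread in true-positive rate across levels of one sensitive attribute in plain NumPy. The metric choice is an assumption, not the authors' exact protocol.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, sensitive):
    """Sketch: overall balanced accuracy plus the largest gap in true-positive rate
    across levels of a single sensitive attribute (a common equal-opportunity proxy)."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    tpr = lambda t, p: np.mean(p[t == 1] == 1) if np.any(t == 1) else np.nan
    tnr = lambda t, p: np.mean(p[t == 0] == 0) if np.any(t == 0) else np.nan
    balanced_acc = 0.5 * (tpr(y_true, y_pred) + tnr(y_true, y_pred))
    group_tprs = {g: tpr(y_true[sensitive == g], y_pred[sensitive == g])
                  for g in np.unique(sensitive)}
    gap = np.nanmax(list(group_tprs.values())) - np.nanmin(list(group_tprs.values()))
    return {"balanced_accuracy": balanced_acc, "tpr_by_group": group_tprs, "tpr_gap": gap}

# e.g. group_fairness_report(y_true, y_pred, sensitive=sex_labels)
```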

Integrating AI into Clinical Workflows: A Simulation Study on Implementing AI-aided Same-day Diagnostic Testing Following an Abnormal Screening Mammogram.

Lin Y, Hoyt AC, Manuel VG, Inkelas M, Maehara CK, Ayvaci MUS, Ahsen ME, Hsu W

PubMed · Jan 1 2024
Artificial intelligence (AI) shows promise in clinical tasks, yet its integration into workflows remains underexplored. This study proposes an AI-aided same-day diagnostic imaging workup to reduce recall rates following abnormal screening mammograms and to alleviate the anxiety patients experience while waiting for diagnostic examinations. Using discrete simulation, we found minimal disruption to the workflow (a 4% reduction in daily patient volume or a 2% increase in operating time) under specific conditions: operation from 9 am to 12 pm with all radiologists managing all patient types (screenings, diagnostics, and biopsies). Costs specific to the AI-aided same-day diagnostic workup include AI software expenses and potential losses from unused pre-reserved slots for same-day diagnostic workups. These simulation findings can inform the implementation of an AI-aided same-day diagnostic workup, with future research focusing on its potential benefits, including improved patient satisfaction, reduced anxiety, lower recall rates, and shorter times to cancer diagnosis and treatment.
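The paper's calibrated simulation model is not reproduced here; the sketch below only shows the general shape of such a discrete-event clinic model in SimPy. Arrival rates, service times, staffing, and the same-day window are made-up values, not the study's parameters.

```python
import random
import simpy

# Illustrative parameters only -- not the paper's calibrated values.
RADIOLOGISTS = 3
SAMEDAY_WINDOW_END = 180          # same-day workups offered 9 am-12 pm (minutes from opening)

def patient(env, name, kind, radiologists, log):
    mean_service = {"screening": 10, "diagnostic": 25, "biopsy": 40}[kind]   # minutes
    with radiologists.request() as req:
        yield req
        yield env.timeout(random.expovariate(1.0 / mean_service))
    log.append((name, kind, env.now))

def arrivals(env, radiologists, log):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / 8))       # roughly one arrival per 8 min
        if env.now < SAMEDAY_WINDOW_END and random.random() < 0.1:
            kind = "diagnostic"        # abnormal screening rolled into a same-day workup
        else:
            kind = random.choice(["screening", "screening", "screening", "biopsy"])
        env.process(patient(env, f"p{i}", kind, radiologists, log))
        i += 1

random.seed(0)
env = simpy.Environment()
radiologists = simpy.Resource(env, capacity=RADIOLOGISTS)
log = []
env.process(arrivals(env, radiologists, log))
env.run(until=480)                   # one 8-hour clinic day
print(f"patients completed: {len(log)}")
```

Metrics such as daily patient volume and radiologist utilization can then be read off the event log and compared between runs with and without the same-day window enabled.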