
A novel spectral transformation technique based on special functions for improved chest X-ray image classification.

Aljohani A

pubmed logopapers · Jan 1 2025
Chest X-ray image classification plays an important role in medical diagnostics. Machine learning has enhanced the performance of these classification tasks by introducing advanced techniques. Such classification algorithms often require converting medical data into another space in which the original data are reduced to a set of informative values, or moments. We developed a mechanism that converts a given medical image into a spectral space whose basis is composed of special functions. In this study, we propose a chest X-ray image classification method based on spectral coefficients derived from an orthogonal system of smooth Legendre-type polynomials. We developed the mathematical theory to calculate spectral moments in the Legendre polynomial space and use these moments to train traditional classifiers such as SVM and random forest. The procedure is applied to a recent dataset of X-ray images comprising three classes of patients: normal, COVID-infected, and pneumonia. The moments designed in this study, when used with SVM or random forest, improve the ability to classify a given X-ray image with high accuracy. A parametric study of the proposed approach is presented, and the performance of the spectral moments is evaluated with both support vector machine and random forest classifiers. The efficiency and accuracy of the proposed method are reported in detail. All simulations were performed in MATLAB and Python: image preprocessing and spectral moment generation were carried out in MATLAB, and the classifiers were implemented in Python. The proposed approach works well and provides satisfactory results (0.975 accuracy); however, further studies are required to establish a more accurate and faster version of this approach.
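For illustration, a minimal Python sketch of the pipeline described in this abstract might look as follows: project each X-ray onto a 2D Legendre-polynomial basis and feed the resulting moments to an SVM. The function names, moment order, and classifier settings are assumptions for illustration, not the authors' code.

```python
# A minimal, hypothetical sketch: Legendre spectral moments of an image as
# features for a conventional classifier (SVM). Not the authors' implementation.
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.svm import SVC

def legendre_moments(image: np.ndarray, order: int = 8) -> np.ndarray:
    """Compute 2D Legendre moments of a grayscale image up to a given order."""
    h, w = image.shape
    x = np.linspace(-1.0, 1.0, w)   # normalized column coordinates
    y = np.linspace(-1.0, 1.0, h)   # normalized row coordinates
    # Evaluate P_0..P_order at every coordinate: shape (order+1, length)
    Px = np.stack([legval(x, np.eye(order + 1)[p]) for p in range(order + 1)])
    Py = np.stack([legval(y, np.eye(order + 1)[q]) for q in range(order + 1)])
    moments = np.empty((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            moments[p, q] = norm * (2.0 / w) * (2.0 / h) * \
                (Py[q][:, None] * Px[p][None, :] * image).sum()
    return moments.ravel()

# Usage sketch: X-ray images -> moment features -> SVM
# (labels: normal / COVID / pneumonia; `images` and `labels` assumed to exist)
# features = np.stack([legendre_moments(img) for img in images])
# clf = SVC(kernel="rbf").fit(features, labels)
```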

SA-UMamba: Spatial attention convolutional neural networks for medical image segmentation.

Liu L, Huang Z, Wang S, Wang J, Liu B

pubmed logopapers · Jan 1 2025
Medical image segmentation plays an important role in medical diagnosis and treatment. Most recent medical image segmentation methods are based on a convolutional neural network (CNN) or Transformer model. However, CNN-based methods are limited by locality, whereas Transformer-based methods are constrained by the quadratic complexity of attention computations. Alternatively, the state-space model-based Mamba architecture has garnered widespread attention owing to its linear computational complexity for global modeling. However, Mamba and its variants are still limited in their ability to extract local receptive field features. To address this limitation, we propose a novel residual spatial state-space (RSSS) block that enhances spatial feature extraction by integrating global and local representations. The RSSS block combines the Mamba module for capturing global dependencies with a receptive field attention convolution (RFAC) module to extract location-sensitive local patterns. Furthermore, we introduce a residual adjust strategy to dynamically fuse global and local information, improving spatial expressiveness. Based on the RSSS block, we design a U-shaped SA-UMamba segmentation framework that effectively captures multi-scale spatial context across different stages. Experiments conducted on the Synapse, ISIC17, ISIC18 and CVC-ClinicDB datasets validate the segmentation performance of our proposed SA-UMamba framework.
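As a rough illustration of the fusion idea described here, the sketch below combines an abstracted global branch (standing in for the Mamba module) with a convolutional local branch (standing in for the RFAC module) through learnable residual weights. It is a simplified PyTorch sketch under those assumptions, not the authors' SA-UMamba implementation.

```python
# Hypothetical sketch of a residual global/local fusion block; the Mamba and
# RFAC modules are abstracted as placeholders.
from typing import Optional
import torch
import torch.nn as nn

class RSSSBlockSketch(nn.Module):
    def __init__(self, channels: int, global_branch: Optional[nn.Module] = None):
        super().__init__()
        # Placeholder for the Mamba global-dependency module.
        self.global_branch = global_branch if global_branch is not None else nn.Identity()
        # Placeholder for the receptive-field attention convolution (RFAC).
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )
        # "Residual adjust": learnable scalars weighting the global and local paths.
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.global_branch(x)   # global representation (Mamba stand-in)
        l = self.local_branch(x)    # location-sensitive local pattern (RFAC stand-in)
        return x + self.alpha * g + self.beta * l   # residual fusion of both scales

# x = torch.randn(1, 64, 56, 56); y = RSSSBlockSketch(64)(x)   # shape preserved
```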

Providing context: Extracting non-linear and dynamic temporal motifs from brain activity.

Geenjaar E, Kim D, Calhoun V

pubmed logopapers · Jan 1 2025
Approaches studying the dynamics of resting-state functional magnetic resonance imaging (rs-fMRI) activity often focus on time-resolved functional connectivity (tr-FC). While many tr-FC approaches have been proposed, most are linear, e.g., computing the linear correlation at a timestep or within a window. In this work, we propose to use a generative non-linear deep learning model, a disentangled variational autoencoder (DSVAE), that factorizes out window-specific (context) information from timestep-specific (local) information. This allows our model to capture differences at multiple temporal scales. We find that, by separating temporal scales, our model's window-specific embeddings (which we refer to as context embeddings) separate windows from schizophrenia patients and control subjects in a low-dimensional space more accurately than baseline models and the standard tr-FC approach. Moreover, we find that for individuals with schizophrenia, our model's context embedding space is significantly correlated with both age and symptom severity. Interestingly, patients appear to spend more time in three clusters: one closer to controls, showing increased visual-sensorimotor and cerebellar-subcortical functional network connectivity (FNC) and reduced cerebellar-visual FNC; an intermediate cluster showing increased subcortical-sensorimotor FNC; and one showing decreased visual-sensorimotor, decreased subcortical-sensorimotor, and increased visual-subcortical FNC. We verify that our model captures features that are complementary to, but not the same as, standard tr-FC features. Our model can thus help broaden the neuroimaging toolset for analyzing fMRI dynamics and shows potential as an approach for finding psychiatric links that are more sensitive to individual and group characteristics.
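To make the factorization concrete, the sketch below shows one hypothetical way an encoder could emit a single window-level (context) latent alongside per-timestep (local) latents; layer sizes and the pooling step are assumptions, and the decoder, reparameterization trick, and ELBO terms of the full DSVAE are omitted.

```python
# Minimal, assumed sketch of a context/local factorizing encoder in PyTorch.
import torch
import torch.nn as nn

class ContextLocalEncoder(nn.Module):
    def __init__(self, n_regions: int, local_dim: int = 8, context_dim: int = 16):
        super().__init__()
        self.timestep_net = nn.Sequential(nn.Linear(n_regions, 64), nn.ReLU())
        self.local_head = nn.Linear(64, 2 * local_dim)      # per-timestep mu, logvar
        self.context_head = nn.Linear(64, 2 * context_dim)  # per-window mu, logvar

    def forward(self, window: torch.Tensor):
        # window: (batch, timesteps, regions)
        h = self.timestep_net(window)                  # (batch, T, 64)
        local_mu, local_logvar = self.local_head(h).chunk(2, dim=-1)
        pooled = h.mean(dim=1)                         # aggregate over the window
        ctx_mu, ctx_logvar = self.context_head(pooled).chunk(2, dim=-1)
        return (ctx_mu, ctx_logvar), (local_mu, local_logvar)

# enc = ContextLocalEncoder(n_regions=53)
# (ctx_mu, _), (loc_mu, _) = enc(torch.randn(4, 40, 53))  # 4 windows of 40 timesteps
```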

3D-MRI brain glioma intelligent segmentation based on improved 3D U-net network.

Wang T, Wu T, Yang D, Xu Y, Lv D, Jiang T, Wang H, Chen Q, Xu S, Yan Y, Lin B

pubmed logopapers · Jan 1 2025
To enhance glioma segmentation, a 3D-MRI intelligent glioma segmentation method based on deep learning is introduced. This method offers significant guidance for medical diagnosis, grading, and treatment strategy selection. Glioma case data were sourced from the BraTS2023 public dataset. First, we preprocess the dataset, including 3D clipping, resampling, artifact elimination, and normalization. Second, to enhance the network's ability to perceive features at different scales, we introduce a spatial pyramid pooling module. Then, to make the model focus on glioma details while suppressing irrelevant background information, we propose a multi-scale fusion attention mechanism. Finally, to address class imbalance and enhance learning of misclassified voxels, a combination of Dice and Focal loss functions is employed as the loss function; this not only maintains segmentation accuracy but also improves recognition of challenging samples, thereby improving the accuracy and generalization of the model in glioma segmentation. Experimental findings reveal that the enhanced 3D U-Net model stabilizes training loss at 0.1 after 150 training iterations. The refined model demonstrates superior performance, with the highest DSC, Recall, and Precision values of 0.7512, 0.7064, and 0.77451, respectively. In Whole Tumor (WT) segmentation, the Dice Similarity Coefficient (DSC), Recall, and Precision scores are 0.9168, 0.9426, and 0.9375, respectively. For Tumor Core (TC) segmentation, these scores are 0.8954, 0.9014, and 0.9369, respectively. In Enhancing Tumor (ET) segmentation, the method achieves DSC, Recall, and Precision values of 0.8674, 0.9045, and 0.9011, respectively. The DSC, Recall, and Precision indices in the WT, TC, and ET segments using this method are the highest recorded, significantly enhancing glioma segmentation. This improvement bolsters the accuracy and reliability of diagnoses, ultimately providing a scientific foundation for clinical diagnosis and treatment.
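A hedged sketch of the combined Dice-plus-Focal objective mentioned above, written for a single-channel segmentation map in PyTorch; the weighting factor and focal gamma are illustrative defaults rather than the paper's reported settings.

```python
# Illustrative combined Dice + Focal loss (not the paper's exact formulation).
import torch
import torch.nn.functional as F

def dice_focal_loss(logits: torch.Tensor, target: torch.Tensor,
                    gamma: float = 2.0, lam: float = 0.5, eps: float = 1e-6):
    """logits, target: (batch, D, H, W); target is a float tensor in {0., 1.}."""
    prob = torch.sigmoid(logits)
    # Dice term: region-level overlap, robust to class imbalance.
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)
    # Focal term: down-weights easy voxels, emphasizes misclassified ones.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    focal = ((1.0 - p_t) ** gamma * bce).mean(dim=(1, 2, 3))
    return (lam * dice + (1.0 - lam) * focal).mean()

# loss = dice_focal_loss(model_output, ground_truth.float())
```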

Same-model and cross-model variability in knee cartilage thickness measurements using 3D MRI systems.

Katano H, Kaneko H, Sasaki E, Hashiguchi N, Nagai K, Ishijima M, Ishibashi Y, Adachi N, Kuroda R, Tomita M, Masumoto J, Sekiya I

pubmed logopapers · Jan 1 2025
Magnetic Resonance Imaging (MRI) based three-dimensional analysis of knee cartilage has evolved to become fully automatic. However, when implementing these measurements across multiple clinical centers, scanner variability becomes a critical consideration. Our purposes were to quantify and compare same-model variability (between repeated scans on the same MRI system) and cross-model variability (across different MRI systems) in knee cartilage thickness measurements using MRI scanners from five manufacturers, as analyzed with a specific 3D volume analysis software. Ten healthy volunteers (eight males and two females, aged 22-60 years) underwent two scans of their right knee on 3T MRI systems from five manufacturers (Canon, Fujifilm, GE, Philips, and Siemens). The imaging protocol included fat-suppressed spoiled gradient echo and proton density weighted sequences. Cartilage regions were automatically segmented into 7 subregions using a specific deep learning-based 3D volume analysis software. This resulted in 350 measurements for same-model variability and 2,800 measurements for cross-model variability. For same-model variability, 82% of measurements showed variability ≤0.10 mm, and 98% showed variability ≤0.20 mm. For cross-model variability, 51% showed variability ≤0.10 mm, and 84% showed variability ≤0.20 mm. The mean same-model variability (0.06 ± 0.05 mm) was significantly lower than cross-model variability (0.11 ± 0.09 mm) (p < 0.001). This study demonstrates that knee cartilage thickness measurements exhibit significantly higher variability across different MRI systems compared to repeated measurements on the same system, when analyzed using this specific software. This finding has important implications for multi-center studies and longitudinal assessments using different MRI systems and highlights the software-dependent nature of such variability assessments.
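As an illustration of how such variability figures can be tabulated (not the study's analysis code), the sketch below computes mean absolute same-model and cross-model differences from an assumed thickness[subject][scanner] data layout with two scans per scanner and seven subregion values per scan.

```python
# Illustrative tabulation of same-model vs. cross-model variability;
# the data layout is an assumption for the sketch.
import itertools
import numpy as np

def variability(thickness: dict) -> tuple[float, float]:
    same, cross = [], []
    for subject, by_scanner in thickness.items():
        for scanner, scans in by_scanner.items():
            # same-model: |scan1 - scan2| on the same scanner, per subregion
            same.extend(np.abs(np.asarray(scans[0]) - np.asarray(scans[1])))
        for s1, s2 in itertools.combinations(by_scanner, 2):
            # cross-model: every scan pair across two different scanners
            for a, b in itertools.product(by_scanner[s1], by_scanner[s2]):
                cross.extend(np.abs(np.asarray(a) - np.asarray(b)))
    return float(np.mean(same)), float(np.mean(cross))

# same_mm, cross_mm = variability(thickness)  # cf. ~0.06 mm vs. ~0.11 mm reported
```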

RRFNet: A free-anchor brain tumor detection and classification network based on reparameterization technology.

Liu W, Guo X

pubmed logopapers · Jan 1 2025
Advancements in medical imaging technology have facilitated the acquisition of high-quality brain images through computed tomography (CT) or magnetic resonance imaging (MRI), enabling specialists to diagnose brain tumors more effectively. However, manual diagnosis is time-consuming, which has made automatic detection and classification from brain imaging increasingly important. Conventional object detection models face limitations in brain tumor detection owing to the significant differences between medical images and natural scene images, as well as challenges such as complex backgrounds, noise interference, and blurred boundaries between cancerous and normal tissues. This study investigates the application of deep learning to brain tumor detection, analyzing the effect of three factors on detection performance: the number of model parameters, the input batch size, and the use of anchor boxes. Experimental results reveal that an excessive number of model parameters or the use of anchor boxes may reduce detection accuracy, whereas increasing the number of brain tumor samples improves detection performance. This study introduces a backbone network built from RepConv and RepC3 modules, along with an FGConcat feature-map splicing module, to optimize the brain tumor detection model. The experimental results show that the proposed RepConv-RepC3-FGConcat Network (RRFNet) can learn underlying semantic information about brain tumors during the training stage while maintaining a low parameter count at inference, which improves the speed of brain tumor detection. Compared with YOLOv8, RRFNet achieved higher accuracy in brain tumor detection, with an mAP of 79.2%. This optimized approach enhances both accuracy and efficiency, which is essential in clinical settings where time and precision are critical.
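For readers unfamiliar with reparameterization, the sketch below shows the general RepConv-style idea in PyTorch: a 3x3 and a 1x1 branch used during training are collapsed into a single 3x3 convolution for inference. BatchNorm fusion is omitted and the class name is hypothetical; this illustrates the technique rather than the RRFNet code.

```python
# Minimal reparameterization sketch: two linear conv branches fused into one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepConvSketch(nn.Module):
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.conv3 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.conv1 = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):                       # training-time multi-branch form
        return self.conv3(x) + self.conv1(x)

    @torch.no_grad()
    def reparameterize(self) -> nn.Conv2d:      # inference-time single conv
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels, 3, padding=1)
        # zero-pad the 1x1 kernel to 3x3 and sum kernels and biases
        fused.weight.copy_(self.conv3.weight + F.pad(self.conv1.weight, [1, 1, 1, 1]))
        fused.bias.copy_(self.conv3.bias + self.conv1.bias)
        return fused

# block = RepConvSketch(16, 32); x = torch.randn(1, 16, 64, 64)
# torch.allclose(block(x), block.reparameterize()(x), atol=1e-5)  # same output
```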

Patients', clinicians' and developers' perspectives and experiences of artificial intelligence in cardiac healthcare: A qualitative study.

Baillie L, Stewart-Lord A, Thomas N, Frings D

pubmed logopapers · Jan 1 2025
This study investigated perspectives and experiences of artificial intelligence (AI) developers, clinicians and patients about the use of AI-based software in cardiac healthcare. A qualitative study took place at two hospitals in England that had trialled AI-based software use in stress echocardiography, a scan that uses ultrasound to assess heart function. Semi-structured interviews were conducted with patients (n = 9), clinicians (n = 16) and AI software developers (n = 5). Data were analysed using thematic analysis. Potential benefits identified were increasing consistency and reliability through reducing human error, and greater efficiency. Concerns included over-reliance on the AI technology, and data security. Participants discussed the need for human input and empathy within healthcare, transparency about AI use, and issues around trusting AI. Participants considered AI's role as assisting diagnosis but not replacing clinician involvement. Clinicians and patients emphasised holistic diagnosis that involves more than the scan. Clinicians considered their diagnostic ability as superior and discrepancies were managed in line with clinicians' diagnoses rather than AI reports. The practicalities of using the AI software concerned image acquisition to meet AI processing requirements and workflow integration. There was positivity towards AI use, but the AI software was considered an adjunct to clinicians rather than replacing their input. Clinicians' experiences were that their diagnostic ability remained superior to the AI, and acquiring images acceptable to AI was sometimes problematic. Despite hopes for increased efficiency through AI use, clinicians struggled to identify fit with clinical workflow to bring benefit.

Enhancing Attention Network Spatiotemporal Dynamics for Motor Rehabilitation in Parkinson's Disease.

Pei G, Hu M, Ouyang J, Jin Z, Wang K, Meng D, Wang Y, Chen K, Wang L, Cao LZ, Funahashi S, Yan T, Fang B

pubmed logopapers · Jan 1 2025
Optimizing resource allocation for Parkinson's disease (PD) motor rehabilitation necessitates identifying biomarkers of responsiveness and dynamic neuroplasticity signatures underlying efficacy. A cohort study of 52 early-stage PD patients undergoing 2-week multidisciplinary intensive rehabilitation therapy (MIRT) was conducted, which stratified participants into responders and nonresponders. A multimodal analysis of resting-state electroencephalography (EEG) microstates and functional magnetic resonance imaging (fMRI) coactivation patterns was performed to characterize MIRT-induced spatiotemporal network reorganization. Responders demonstrated clinically meaningful improvement in motor symptoms, exceeding the minimal clinically important difference threshold of 3.25 on the Unified PD Rating Scale part III, alongside significant reductions in bradykinesia and a significant enhancement in quality-of-life scores at the 3-month follow-up. Resting-state EEG in responders showed a significant attenuation in microstate C and a significant enhancement in microstate D occurrences, along with significantly increased transitions from microstate A/B to D, which significantly correlated with motor function, especially in bradykinesia gains. Concurrently, fMRI analyses identified a prolonged dwell time of the dorsal attention network coactivation/ventral attention network deactivation pattern, which was significantly inversely associated with microstate C occurrence and significantly linked to motor improvement. The identified brain spatiotemporal neural markers were validated using machine learning models to assess the efficacy of MIRT in motor rehabilitation for PD patients, achieving an average accuracy rate of 86%. These findings suggest that MIRT may facilitate a shift in neural networks from sensory processing to higher-order cognitive control, with the dynamic reallocation of attentional resources. This preliminary study validates the necessity of integrating cognitive-motor strategies for the motor rehabilitation of PD and identifies novel neural markers for assessing treatment efficacy.
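As a concrete (and assumed) illustration of the kind of EEG-derived features involved, the sketch below converts a microstate label sequence into occurrence rates and transition rates such as the A/B-to-D transitions highlighted above; the feature definitions are simplified and not the study's pipeline.

```python
# Simplified, assumed microstate feature extraction for a responder classifier.
import numpy as np

STATES = ["A", "B", "C", "D"]

def microstate_features(labels: list[str]) -> dict[str, float]:
    seq = np.asarray(labels)
    n = len(seq)
    # occurrence: fraction of timepoints assigned to each microstate
    feats = {f"occ_{s}": float((seq == s).mean()) for s in STATES}
    # transition counts between consecutive, different microstates
    trans = {(a, b): 0 for a in STATES for b in STATES if a != b}
    for a, b in zip(seq[:-1], seq[1:]):
        if a != b:
            trans[(a, b)] += 1
    for (a, b), count in trans.items():
        feats[f"trans_{a}{b}"] = count / max(n - 1, 1)
    return feats

# feats = microstate_features(["A", "A", "D", "C", "D", "B", "D"])
# feats["trans_AD"], feats["occ_D"]  # features of the kind fed to a classifier
```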

AI-Assisted 3D Planning of CT Parameters for Personalized Femoral Prosthesis Selection in Total Hip Arthroplasty.

Yang TJ, Qian W

pubmed logopapers · Jan 1 2025
To investigate the efficacy of CT measurement parameters combined with AI-assisted 3D planning for personalized femoral prosthesis selection in total hip arthroplasty (THA). A retrospective analysis was conducted on clinical data from 247 patients with unilateral hip or knee joint disorders treated at Renmin Hospital of Hubei University of Medicine between April 2021 and February 2024. All patients underwent preoperative full-pelvis and bilateral full-length femoral CT scans. The raw CT data were imported into Mimics 19.0 software to reconstruct a three-dimensional (3D) model of the healthy femur. Using 3-matic Research 11.0 software, the femoral head rotation center was located, and parameters including femoral head diameter (FHD), femoral neck length (FNL), femoral neck-shaft angle (FNSA), femoral offset (FO), femoral neck anteversion angle (FNAA), tip-apex distance (TAD), and tip-apex angle (TAA) were measured. The AI-assisted THA 3D planning system AIJOINT V1.0.0.0 was used for preoperative planning and design, enabling personalized selection of femoral prostheses with varying neck-shaft angles and surgical simulation. Parameters were compared between groups defined by gender and age, and ROC curves were used to evaluate predictive efficacy. Females exhibited smaller FHD, FNL, FO, TAD, and TAA but larger FNSA and FNAA than males (P<0.05). Patients older than 65 years had higher FO, TAD, and TAA (P<0.05). TAD and TAA were strongly correlated (r=0.954), whereas FNSA was negatively correlated with both TAD and TAA (r=-0.773 and r=-0.701, respectively). ROC analysis demonstrated high predictive accuracy: TAD (AUC=0.891, sensitivity=91.7%, specificity=87.6%) and TAA (AUC=0.882, sensitivity=100%, specificity=88.8%). CT parameters (TAA, TAD, FNSA, FO) are interrelated and effective predictors for femoral prosthesis selection. Integration with AI-assisted planning optimizes personalized THA, reducing the risk of biomechanical mismatch.
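For context, the sketch below shows a generic ROC summary of a single CT parameter such as TAD against a binary outcome, using scikit-learn; the variable names and the Youden-index cutoff are assumptions, not the study's code.

```python
# Generic ROC summary for one continuous predictor; illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def roc_summary(values: np.ndarray, outcome: np.ndarray):
    """values: measured parameter (e.g. TAD); outcome: binary label per patient."""
    auc = roc_auc_score(outcome, values)
    fpr, tpr, thresholds = roc_curve(outcome, values)
    j = np.argmax(tpr - fpr)                          # Youden index -> cutoff
    return auc, thresholds[j], tpr[j], 1.0 - fpr[j]   # AUC, cutoff, sens, spec

# auc, cutoff, sens, spec = roc_summary(tad, outcome)  # cf. AUC = 0.891 for TAD
```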

Current application, possibilities, and challenges of artificial intelligence in the management of rheumatoid arthritis, axial spondyloarthritis, and psoriatic arthritis.

Bilgin E

pubmed logopapers · Jan 1 2025
This narrative review outlines the current applications and considerations of artificial intelligence (AI) for diagnosis, management, and prognosis in rheumatoid arthritis (RA), axial spondyloarthritis (axSpA), and psoriatic arthritis (PsA). Advances in AI, mainly in machine learning and deep learning, have significantly influenced medical research and clinical practice over the past decades by offering greater precision in data interpretation and treatment approaches. In RA, AI applications have enhanced risk prediction models, early diagnosis, and management. Predictive models have guided treatment decisions, such as response to methotrexate and biologics, while wearable devices and electronic health records (EHR) improve disease activity monitoring. In addition, AI applications are reported to be promising for the early identification of extra-articular involvement and the prediction, detection, and assessment of comorbidities. In axSpA, AI-driven models using imaging techniques such as sacroiliac radiography, magnetic resonance imaging, and computed tomography have increased diagnostic accuracy, especially for early inflammatory changes. Predictive algorithms help stratify patients and predict disease outcomes, while clinical decision support systems integrate clinical and imaging data for optimized management. For PsA, AI has also enabled early detection among psoriasis patients using genetic markers, immune profiling, and EHR-based natural language processing systems. Overall, AI models may predict diagnosis, disease severity, treatment response, and comorbidities to improve care in patients with RA, axSpA, and PsA. As a rapidly developing area, AI has the potential to change current medical practice by offering better diagnostic evaluation, better treatments, and improved patient follow-up. Multimodal AI that emphasizes collaboration, reliability, transparency, and patient-centered innovation appears to be the future of medical practice. However, data quality, model interpretability, and ethical considerations must be addressed to ensure reliable and equitable applications in clinical practice.