Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

pubmed · Jan 1 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and performance of ML myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring by matching each image to a set of labeled template images. It uses the highest correlation score from these matches for classification and is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Results are reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both the CNN and OM, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
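
The core matching step lends itself to a compact illustration. The sketch below is a minimal Python/NumPy rendering of template-matching classification in the spirit of OM, not the authors' implementation: the normalized cross-correlation metric, the toy templates, and the label set are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch of correlation-based template matching for classification;
# metric, templates, and labels are assumptions, not the published OM algorithm.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two same-sized images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def one_match_classify(image, templates, template_labels):
    """Assign the label of the template with the highest correlation score."""
    scores = [ncc(image, t) for t in templates]
    best = int(np.argmax(scores))
    return template_labels[best], scores[best]

# Toy usage with random arrays standing in for cardiac MRI slices.
rng = np.random.default_rng(0)
templates = [rng.normal(size=(128, 128)) for _ in range(4)]
labels = ["scar", "scar", "no_scar", "no_scar"]
pred, score = one_match_classify(rng.normal(size=(128, 128)), templates, labels)
print(pred, round(score, 3))
```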

A plaque recognition algorithm for coronary OCT images by Dense Atrous Convolution and attention mechanism.

Meng H, Zhao R, Zhang Y, Zhang B, Zhang C, Wang D, Sun J

pubmed · Jan 1 2025
Currently, plaque segmentation in Optical Coherence Tomography (OCT) images of coronary arteries is primarily carried out manually by physicians, and the accuracy of existing automatic segmentation techniques needs further improvement. To furnish efficient and precise decision support, automated detection of plaques in coronary OCT images is of paramount importance. To address these challenges, we propose a novel deep learning algorithm featuring Dense Atrous Convolution (DAC) and an attention mechanism to achieve high-precision segmentation and classification of coronary artery plaques. We also constructed a relatively comprehensive dataset of 760 original images, expanded to 8,000 through data augmentation; this dataset serves as a significant resource for future research. The experimental results demonstrate that the Dice coefficients for calcified, fibrous, and lipid plaques are 0.913, 0.900, and 0.879, respectively, surpassing those of five other conventional medical image segmentation networks. These outcomes attest to the effectiveness and superiority of our proposed algorithm for automatic coronary artery plaque segmentation.
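
As context for the reported Dice scores, here is a minimal sketch of the per-class Dice coefficient as typically computed on integer segmentation masks; the class encoding and the smoothing constant are assumptions, not taken from the paper.

```python
# Per-class Dice = 2|P ∩ T| / (|P| + |T|) on integer label masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, cls: int, eps: float = 1e-8) -> float:
    p = (pred == cls)
    t = (target == cls)
    return float(2.0 * np.logical_and(p, t).sum() / (p.sum() + t.sum() + eps))

# Toy masks: 0 = background, 1 = calcified, 2 = fibrous, 3 = lipid (assumed encoding).
rng = np.random.default_rng(1)
pred = rng.integers(0, 4, size=(256, 256))
target = rng.integers(0, 4, size=(256, 256))
for cls, name in [(1, "calcified"), (2, "fibrous"), (3, "lipid")]:
    print(name, round(dice_coefficient(pred, target, cls), 3))
```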

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

pubmed · Jan 1 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and SVM yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application. This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiates between complicated and uncomplicated AA.
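
For readers who want to reproduce the style of comparison, the sketch below trains the eight listed classifier families with their scikit-learn implementations on synthetic features and compares test AUCs; the hyperparameters and synthetic data are assumptions, and the radiomics feature extraction is omitted, so this illustrates the workflow rather than the study itself.

```python
# Compare eight classifiers by test AUC on synthetic stand-in features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(),
    "DT": DecisionTreeClassifier(),
    "GB": GradientBoostingClassifier(),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```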

Fully automated MRI-based analysis of the locus coeruleus in aging and Alzheimer's disease dementia using ELSI-Net.

Dünnwald M, Krohn F, Sciarra A, Sarkar M, Schneider A, Fliessbach K, Kimmich O, Jessen F, Rostamzadeh A, Glanz W, Incesoy EI, Teipel S, Kilimann I, Goerss D, Spottke A, Brustkern J, Heneka MT, Brosseron F, Lüsebrink F, Hämmerer D, Düzel E, Tönnies K, Oeltze-Jafra S, Betts MJ

pubmed · Jan 1 2025
The locus coeruleus (LC) is linked to the development and pathophysiology of neurodegenerative diseases such as Alzheimer's disease (AD). Magnetic resonance imaging-based LC features have shown potential to assess LC integrity in vivo. We present a deep learning-based LC segmentation and feature extraction method called the Ensemble-based Locus Coeruleus Segmentation Network (ELSI-Net) and apply it to healthy aging and AD dementia datasets. Agreement with expert raters and previously published LC atlases was assessed. We aimed to reproduce previously reported differences in LC integrity in aging and AD dementia and to correlate extracted features with cerebrospinal fluid (CSF) biomarkers of AD pathology. ELSI-Net demonstrated high agreement with expert raters and published atlases. Previously reported group differences in LC integrity were detected, and correlations with CSF biomarkers were found. Although we found excellent performance, further evaluations on more diverse datasets from clinical cohorts are required for a conclusive assessment of ELSI-Net's general applicability. We provide a thorough evaluation of a fully automatic locus coeruleus (LC) segmentation method termed the Ensemble-based Locus Coeruleus Segmentation Network (ELSI-Net) in aging and Alzheimer's disease (AD) dementia. ELSI-Net outperforms previous work and shows high agreement with manual ratings and previously published LC atlases. ELSI-Net replicates previously shown LC group differences in aging and AD. ELSI-Net's LC mask volume correlates with cerebrospinal fluid biomarkers of AD pathology.

Improved swin transformer-based thorax disease classification with optimal feature selection using chest X-ray.

Rana N, Coulibaly Y, Noor A, Noor TH, Alam MI, Khan Z, Tahir A, Khan MZ

pubmed · Jan 1 2025
Thoracic diseases, including pneumonia, tuberculosis, lung cancer, and others, pose significant health risks and require timely and accurate diagnosis to ensure proper treatment. In this research, a deep-learning model for thorax disease classification using chest X-rays is proposed. The input is pre-processed by resizing, normalizing pixel values, and applying data augmentation to address imbalanced datasets and improve model generalization. Significant features are extracted from the images using an Enhanced Auto-Encoder (EnAE) model, which combines a stacked auto-encoder architecture with an attention module to enhance feature representation and classification accuracy. To further improve feature selection, we utilize the Chaotic Whale Optimization (ChWO) algorithm, which selects the most relevant attributes from the extracted features. Finally, disease classification is performed using the novel Improved Swin Transformer (IMSTrans) model, which is designed to efficiently process high-dimensional medical image data and achieve superior classification performance. The proposed EnAE + ChWO + IMSTrans model was evaluated on extensive chest X-ray datasets and the Lung Disease Dataset. It achieves Accuracy, Precision, Recall, F-Score, MCC, and MAE of 0.964, 0.977, 0.9845, 0.964, 0.9647, and 0.184, respectively, indicating a reliable and efficient solution for thorax disease classification.
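
The chaotic-optimization step can be loosely illustrated as follows: a logistic chaotic map replaces uniform random numbers in a simple wrapper feature-selection search. This is a deliberate simplification, not the paper's ChWO algorithm; the fitness function, update rule, and the use of KNN as the wrapped classifier are assumptions.

```python
# Chaotic-map-driven wrapper feature selection (simplified stand-in for ChWO).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def logistic_map(x):
    # Chaotic sequence generator on (0, 1); drives the feature-mask updates.
    return 4.0 * x * (1.0 - x)

X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)
rng = np.random.default_rng(0)
best_mask, best_score = np.ones(X.shape[1], dtype=bool), -np.inf
chaos = rng.uniform(0.1, 0.9, size=X.shape[1])

for it in range(30):
    chaos = logistic_map(chaos)              # chaotic values steer which features flip on
    mask = chaos > 0.5
    if not mask.any():
        continue
    score = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    if score > best_score:
        best_mask, best_score = mask, score

print(int(best_mask.sum()), "features selected, CV accuracy", round(best_score, 3))
```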

Providing context: Extracting non-linear and dynamic temporal motifs from brain activity.

Geenjaar E, Kim D, Calhoun V

pubmed · Jan 1 2025
Approaches studying the dynamics of resting-state functional magnetic resonance imaging (rs-fMRI) activity often focus on time-resolved functional connectivity (tr-FC). While many tr-FC approaches have been proposed, most are linear, e.g., computing the linear correlation at a timestep or within a window. In this work, we propose to use a generative non-linear deep learning model, a disentangled variational autoencoder (DSVAE), that factorizes out window-specific (context) information from timestep-specific (local) information. This has the advantage of allowing our model to capture differences at multiple temporal scales. We find that, by separating out temporal scales, our model's window-specific embeddings, which we refer to as context embeddings, separate windows from schizophrenia patients and control subjects in a low-dimensional space more accurately than baseline models and the standard tr-FC approach. Moreover, we find that for individuals with schizophrenia, our model's context embedding space is significantly correlated with both age and symptom severity. Interestingly, patients appear to spend more time in three clusters: one closer to controls that shows increased visual-sensorimotor and cerebellar-subcortical and reduced cerebellar-visual functional network connectivity (FNC), an intermediate cluster showing increased subcortical-sensorimotor FNC, and one that shows decreased visual-sensorimotor, decreased subcortical-sensorimotor, and increased visual-subcortical FNC. We verify that our model captures features that are complementary to, but not the same as, standard tr-FC features. Our model can thus help broaden the neuroimaging toolset in analyzing fMRI dynamics and shows potential as an approach for finding psychiatric links that are more sensitive to individual and group characteristics.
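
A toy sketch of the context/local factorization idea (not the authors' DSVAE architecture) is given below in PyTorch; the layer sizes, the GRU encoder, the 53-region input, and the window length are arbitrary assumptions chosen only to make the shapes concrete.

```python
# Tiny sequential VAE-style model: one window-level "context" latent plus
# per-timestep "local" latents, decoded jointly back to regional activity.
import torch
import torch.nn as nn

class TinyDSVAE(nn.Module):
    def __init__(self, n_regions=53, d_context=8, d_local=4):
        super().__init__()
        self.gru = nn.GRU(n_regions, 32, batch_first=True)
        self.to_context = nn.Linear(32, 2 * d_context)   # mean and log-variance
        self.to_local = nn.Linear(n_regions, 2 * d_local)
        self.decoder = nn.Linear(d_context + d_local, n_regions)

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):                       # x: (batch, timesteps, regions)
        _, h = self.gru(x)                      # window summary -> context latent
        context = self.sample(self.to_context(h[-1]))         # (batch, d_context)
        local = self.sample(self.to_local(x))                 # (batch, T, d_local)
        ctx = context.unsqueeze(1).expand(-1, x.size(1), -1)  # broadcast over time
        recon = self.decoder(torch.cat([ctx, local], dim=-1))
        return recon, context, local

model = TinyDSVAE()
window = torch.randn(2, 20, 53)   # 2 windows, 20 timesteps, 53 regions (toy values)
recon, context, local = model(window)
print(recon.shape, context.shape, local.shape)
```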

A novel spectral transformation technique based on special functions for improved chest X-ray image classification.

Aljohani A

pubmed · Jan 1 2025
Chest X-ray image classification plays an important role in medical diagnostics, and machine learning algorithms have enhanced its performance by introducing advanced techniques. These classification algorithms often require converting a medical image into another space in which the original data are reduced to a set of important values or moments. We developed a mechanism that converts a given medical image into a spectral space whose basis is composed of special functions. In this study, we propose a chest X-ray image classification method based on spectral coefficients derived from an orthogonal system of smooth Legendre-type polynomials. We developed the mathematical theory to calculate spectral moments in the Legendre polynomial space and use these moments to train traditional classifiers such as SVM and random forest. The procedure is applied to a recent dataset of X-ray images comprising three classes of patients: normal, COVID-infected, and pneumonia. The moments designed in this study, when used with SVM or random forest, improve the ability to classify a given X-ray image with high accuracy. A parametric study of the proposed approach is presented, and the performance of the spectral moments is assessed with the support vector machine and random forest algorithms. The efficiency and accuracy of the proposed method are presented in detail. All simulations were performed in MATLAB and Python: image pre-processing and spectral moment generation in MATLAB, and the classifiers in Python. The proposed approach works well and provides satisfactory results (0.975 accuracy), although further studies are required to establish a more accurate and faster version of this approach.
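
A minimal Python sketch of the pipeline described here, projecting an image onto products of Legendre polynomials and feeding the resulting moments to an SVM, is shown below; the moment order, the normalization, and the synthetic data are assumptions (the paper performs moment generation in MATLAB).

```python
# Project an image onto P_p(x) * P_q(y) Legendre products and classify the moments.
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.svm import SVC

def legendre_moments(img: np.ndarray, max_order: int = 6) -> np.ndarray:
    h, w = img.shape
    x = np.linspace(-1.0, 1.0, w)
    y = np.linspace(-1.0, 1.0, h)
    # Row k holds the k-th Legendre polynomial evaluated along one axis.
    Px = np.stack([legval(x, [0] * k + [1]) for k in range(max_order + 1)])
    Py = np.stack([legval(y, [0] * k + [1]) for k in range(max_order + 1)])
    # Moment matrix M[p, q] = sum_ij Py[p, i] * img[i, j] * Px[q, j].
    M = Py @ img @ Px.T
    return M.ravel()

# Toy usage: random images standing in for two synthetic X-ray classes.
rng = np.random.default_rng(0)
X = np.stack([legendre_moments(rng.normal(loc=c, size=(64, 64)))
              for c in (0.0, 1.0) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)
clf = SVC().fit(X, y)
print("training accuracy:", clf.score(X, y))
```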

Verity plots: A novel method of visualizing reliability assessments of artificial intelligence methods in quantitative cardiovascular magnetic resonance.

Hadler T, Ammann C, Saad H, Grassow L, Reisdorf P, Lange S, Däuber S, Schulz-Menger J

pubmed · Jan 1 2025
Artificial intelligence (AI) methods have established themselves in cardiovascular magnetic resonance (CMR) as automated quantification tools for ventricular volumes, function, and myocardial tissue characterization. Quality assurance approaches focus on measuring and controlling AI-expert differences, but there is a need for tools that better communicate reliability and agreement. This study introduces the Verity plot, a novel statistical visualization that communicates the reliability of quantitative parameters (QPs) with clear agreement criteria and descriptive statistics. Tolerance ranges for the acceptability of the bias and variance of AI-expert differences were derived from intra- and interreader evaluations. AI-expert agreement was defined by the bias confidence interval and variance tolerance interval lying within the bias and variance tolerance ranges. A reliability plot was designed to communicate this statistical test for agreement. Verity plots merge reliability plots with a density and a scatter plot to illustrate AI-expert differences. Their utility was compared against correlation, box, and Bland-Altman plots. Bias and variance tolerance ranges were established for volume, function, and myocardial tissue characterization QPs. Verity plots provided insights into statistical properties, outlier detection, and parametric test assumptions, outperforming correlation, box, and Bland-Altman plots. Additionally, they offered a framework for determining the acceptability of AI-expert bias and variance. Verity plots offer markers for bias, variance, trends, and outliers, in addition to a decision on the acceptability of the AI quantification. The plots were successfully applied to various AI methods in CMR and decisively communicated AI-expert agreement.
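
The agreement criterion can be sketched numerically: accept a quantitative parameter when the confidence interval of the AI-expert bias and an upper bound on the spread of their differences both fall inside predefined tolerance ranges. The interval constructions, the tolerance values, and the simulated differences below are illustrative assumptions, not the published definition.

```python
# Accept agreement if the bias CI and an upper bound on the SD sit inside tolerances.
import numpy as np
from scipy import stats

def agreement_test(diffs, bias_range=(-5.0, 5.0), sd_limit=8.0, alpha=0.05):
    n = len(diffs)
    mean, sd = np.mean(diffs), np.std(diffs, ddof=1)
    # Two-sided confidence interval for the bias (t distribution).
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    bias_ci = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))
    # Upper confidence bound for the standard deviation (chi-square).
    sd_upper = sd * np.sqrt((n - 1) / stats.chi2.ppf(alpha, df=n - 1))
    bias_ok = bias_range[0] <= bias_ci[0] and bias_ci[1] <= bias_range[1]
    var_ok = sd_upper <= sd_limit
    return {"bias_ci": bias_ci, "sd_upper": sd_upper, "accepted": bias_ok and var_ok}

# Toy usage: simulated AI-expert differences for one quantitative parameter.
rng = np.random.default_rng(42)
print(agreement_test(rng.normal(loc=1.0, scale=3.0, size=60)))
```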

3D-MRI brain glioma intelligent segmentation based on improved 3D U-net network.

Wang T, Wu T, Yang D, Xu Y, Lv D, Jiang T, Wang H, Chen Q, Xu S, Yan Y, Lin B

pubmed · Jan 1 2025
To enhance glioma segmentation, a 3D-MRI intelligent glioma segmentation method based on deep learning is introduced, offering significant guidance for medical diagnosis, grading, and treatment strategy selection. Glioma cases were sourced from the public BraTS2023 dataset. First, we preprocess the data, including 3D clipping, resampling, artifact elimination, and normalization. Second, to enhance the network's perception of features at different scales, we introduce a spatial pyramid pooling module. Third, we propose a multi-scale fusion attention mechanism that makes the model focus on glioma details while suppressing irrelevant background information. Finally, to address class imbalance and enhance the learning of misclassified voxels, a combined Dice and Focal loss function is employed; this not only maintains segmentation accuracy but also improves the recognition of challenging samples, thereby improving the accuracy and generalization of the model in glioma segmentation. Experimental findings reveal that the enhanced 3D U-Net model stabilizes its training loss at 0.1 after 150 training iterations. The refined model demonstrates superior performance, with highest DSC, Recall, and Precision values of 0.7512, 0.7064, and 0.77451, respectively. In Whole Tumor (WT) segmentation, the Dice Similarity Coefficient (DSC), Recall, and Precision scores are 0.9168, 0.9426, and 0.9375, respectively. For Tumor Core (TC) segmentation, these scores are 0.8954, 0.9014, and 0.9369, respectively. In Enhancing Tumor (ET) segmentation, the method achieves DSC, Recall, and Precision values of 0.8674, 0.9045, and 0.9011, respectively. The DSC, Recall, and Precision indices in the WT, TC, and ET segments using this method are the highest recorded, significantly enhancing glioma segmentation. This improvement bolsters the accuracy and reliability of diagnoses, ultimately providing a scientific foundation for clinical diagnosis and treatment.
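
The combined loss can be sketched in a few lines of PyTorch: a soft Dice term averaged over classes plus a focal term that down-weights easy voxels. The class weighting, gamma, and smoothing constants below are assumptions, not the paper's settings.

```python
# Combined soft Dice + Focal loss for multi-class 3D segmentation.
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, gamma=2.0, w_dice=0.5, w_focal=0.5, eps=1e-6):
    """logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    # Soft Dice averaged over classes.
    dims = (0, 2, 3, 4)
    inter = (probs * onehot).sum(dims)
    union = probs.sum(dims) + onehot.sum(dims)
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    # Focal term: down-weight easy voxels via (1 - p_t)^gamma.
    logpt = F.log_softmax(logits, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    pt = logpt.exp()
    focal = (-(1.0 - pt) ** gamma * logpt).mean()
    return w_dice * dice + w_focal * focal

# Toy usage on a tiny random volume.
logits = torch.randn(1, 4, 8, 16, 16, requires_grad=True)
target = torch.randint(0, 4, (1, 8, 16, 16))
loss = dice_focal_loss(logits, target)
loss.backward()
print(float(loss))
```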

Improving lung cancer diagnosis and survival prediction with deep learning and CT imaging.

Wang X, Sharpnack J, Lee TCM

pubmed · Jan 1 2025
Lung cancer is a major cause of cancer-related deaths, and early diagnosis and treatment are crucial for improving patients' survival outcomes. In this paper, we propose to employ convolutional neural networks to model the non-linear relationship between the risk of lung cancer and the lungs' morphology revealed in CT images. We apply a mini-batched loss that extends the Cox proportional hazards model to handle the non-convexity induced by neural networks, which also enables training on large datasets. Additionally, we propose to combine the mini-batched loss with binary cross-entropy to predict both lung cancer occurrence and the risk of mortality. Simulation results demonstrate the effectiveness of the mini-batched loss both with and without the censoring mechanism, as well as its combination with binary cross-entropy. We evaluate our approach on the National Lung Screening Trial data set with several 3D convolutional neural network architectures, achieving high AUC and C-index scores for lung cancer classification and survival prediction. These results, obtained from simulations and real data experiments, highlight the potential of our approach to improve the diagnosis and treatment of lung cancer.
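
A minimal PyTorch sketch of a mini-batched Cox partial-likelihood loss combined with binary cross-entropy is shown below; the within-batch risk-set construction (sorting by follow-up time), the loose handling of tied times, and the loss weights are assumptions rather than the authors' exact formulation.

```python
# Mini-batched negative Cox partial log-likelihood plus binary cross-entropy.
import torch
import torch.nn.functional as F

def cox_partial_likelihood(risk, time, event):
    """risk: (N,) predicted log-risk scores; time: (N,) follow-up times;
    event: (N,) 1 if the event was observed, 0 if censored."""
    order = torch.argsort(time, descending=True)      # longest follow-up first
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)      # log of within-batch risk-set sums
    ll = (risk - log_cumsum) * event
    return -ll.sum() / event.sum().clamp(min=1)

def combined_loss(risk, cancer_logit, time, event, cancer_label, w_surv=1.0, w_cls=1.0):
    surv = cox_partial_likelihood(risk, time, event)
    cls = F.binary_cross_entropy_with_logits(cancer_logit, cancer_label)
    return w_surv * surv + w_cls * cls

# Toy mini-batch standing in for CNN outputs on CT volumes.
n = 16
risk = torch.randn(n, requires_grad=True)
cancer_logit = torch.randn(n, requires_grad=True)
time, event = torch.rand(n) * 6.0, torch.randint(0, 2, (n,)).float()
loss = combined_loss(risk, cancer_logit, time, event, torch.randint(0, 2, (n,)).float())
loss.backward()
print(float(loss))
```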