
3D-MRI brain glioma intelligent segmentation based on improved 3D U-net network.

Wang T, Wu T, Yang D, Xu Y, Lv D, Jiang T, Wang H, Chen Q, Xu S, Yan Y, Lin B

PubMed · Jan 1, 2025
To enhance glioma segmentation, a 3D-MRI intelligent glioma segmentation method based on deep learning is introduced. This method offers significant guidance for medical diagnosis, grading, and treatment strategy selection. Glioma case data were sourced from the BraTS2023 public dataset. First, we preprocess the dataset, including 3D cropping, resampling, artifact elimination, and normalization. Second, to enhance the network's ability to perceive features at different scales, we introduce a spatial pyramid pooling module. Third, we propose a multi-scale fusion attention mechanism that makes the model focus on glioma details while suppressing irrelevant background information. Finally, to address class imbalance and enhance learning of misclassified voxels, a combination of Dice and Focal loss functions was employed; this loss not only maintains segmentation accuracy but also improves the recognition of challenging samples, thereby improving the accuracy and generalization of the model in glioma segmentation. Experimental findings reveal that the enhanced 3D U-Net model stabilizes at a training loss of 0.1 after 150 training iterations. The refined model demonstrates superior performance, with the highest Dice Similarity Coefficient (DSC), Recall, and Precision values of 0.7512, 0.7064, and 0.77451, respectively. In Whole Tumor (WT) segmentation, the DSC, Recall, and Precision scores are 0.9168, 0.9426, and 0.9375, respectively. For Tumor Core (TC) segmentation, these scores are 0.8954, 0.9014, and 0.9369. In Enhancing Tumor (ET) segmentation, the method achieves DSC, Recall, and Precision values of 0.8674, 0.9045, and 0.9011. The DSC, Recall, and Precision indices in the WT, TC, and ET segments using this method are the highest recorded, significantly enhancing glioma segmentation. This improvement bolsters the accuracy and reliability of diagnoses, ultimately providing a scientific foundation for clinical diagnosis and treatment.
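The combined Dice + Focal objective described above maps directly to a short loss function. A minimal PyTorch sketch for binary voxel masks (the focal weight `lambda_focal`, focusing parameter `gamma`, and smoothing constant are illustrative assumptions, not values from the paper):

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, targets, lambda_focal=1.0, gamma=2.0, eps=1e-6):
    """Combined Dice + Focal loss for binary voxel segmentation.

    logits:  (B, 1, D, H, W) raw network outputs
    targets: (B, 1, D, H, W) binary ground-truth masks
    """
    probs = torch.sigmoid(logits)

    # Soft Dice loss: 1 - 2|P∩G| / (|P| + |G|), handles class imbalance
    inter = (probs * targets).sum(dim=(2, 3, 4))
    denom = probs.sum(dim=(2, 3, 4)) + targets.sum(dim=(2, 3, 4))
    dice_loss = 1.0 - (2.0 * inter + eps) / (denom + eps)

    # Focal loss: down-weights easy voxels, focuses on misclassified ones
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = probs * targets + (1.0 - probs) * (1.0 - targets)
    focal_loss = ((1.0 - p_t) ** gamma * bce).mean(dim=(2, 3, 4))

    return (dice_loss + lambda_focal * focal_loss).mean()
```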

MRI-based early Temporal Lobe Epilepsy detection using DGWO-optimized HAETN and Fuzzy-AAL Segmentation Framework (FASF).

Khan H, Alutaibi AI, Tejani GG, Sharma SK, Khan AR, Ahmad F, Mousavirad SJ

PubMed · Jan 1, 2025
This work aims to promote early and accurate diagnosis of Temporal Lobe Epilepsy (TLE) by developing state-of-the-art deep learning techniques, with the goal of minimizing the consequences of epilepsy for individuals and society. Current approaches to TLE detection have drawbacks, including applicability to only particular MRI sequences, moderate ability to determine the side of the onset zone, and weak cross-validation across different patient groups, which hampers their practical use. To overcome these difficulties, a new Hybrid Attention-Enhanced Transformer Network (HAETN) is introduced for early TLE diagnosis. This approach uses the newly developed Fuzzy-AAL Segmentation Framework (FASF), which combines the Fuzzy Possibilistic C-Means (FPCM) algorithm for tissue segmentation with AAL labelling of tissues. Furthermore, an effective feature selection method is proposed using the Dipper-grey wolf optimization (DGWO) algorithm to improve the performance of the proposed model. The performance of the proposed method is thoroughly assessed in terms of accuracy, sensitivity, and F1-score. On the Temporal Lobe Epilepsy-UNAM MRI dataset, it attains an accuracy of 98.61%, a sensitivity of 99.83%, and an F1-score of 99.82%, indicating its efficiency and applicability in clinical practice.
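The FPCM segmentation step builds on fuzzy clustering of voxel intensities. For orientation, a plain fuzzy c-means loop is sketched below; note this is the standard FCM update, not the paper's possibilistic (FPCM) variant, and all parameters are illustrative:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on flattened voxel features.

    X: (n_voxels, n_features) array. Returns (memberships, centroids).
    This is plain FCM, a simplification of the FPCM variant used in the paper.
    """
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))  # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every voxel to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-10
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return U, centroids
```

Each voxel then takes the label of its highest-membership cluster, after which an atlas such as AAL can assign anatomical labels to the resulting regions.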

Verity plots: A novel method of visualizing reliability assessments of artificial intelligence methods in quantitative cardiovascular magnetic resonance.

Hadler T, Ammann C, Saad H, Grassow L, Reisdorf P, Lange S, Däuber S, Schulz-Menger J

PubMed · Jan 1, 2025
Artificial intelligence (AI) methods have established themselves in cardiovascular magnetic resonance (CMR) as automated quantification tools for ventricular volumes, function, and myocardial tissue characterization. Quality assurance approaches focus on measuring and controlling AI-expert differences, but there is a need for tools that better communicate reliability and agreement. This study introduces the Verity plot, a novel statistical visualization that communicates the reliability of quantitative parameters (QPs) with clear agreement criteria and descriptive statistics. Tolerance ranges for the acceptability of the bias and variance of AI-expert differences were derived from intra- and interreader evaluations. AI-expert agreement was defined as the bias confidence interval and the variance tolerance interval lying within the bias and variance tolerance ranges, respectively. A reliability plot was designed to communicate this statistical test for agreement. Verity plots merge reliability plots with a density and a scatter plot to illustrate AI-expert differences. Their utility was compared against Correlation, Box, and Bland-Altman plots. Bias and variance tolerance ranges were established for volume, function, and myocardial tissue characterization QPs. Verity plots provided insights into statistical properties, outlier detection, and parametric test assumptions, outperforming Correlation, Box, and Bland-Altman plots. Additionally, they offered a framework for determining the acceptability of AI-expert bias and variance. Verity plots offer markers for bias, variance, trends, and outliers, in addition to a decision on the acceptability of an AI quantification. The plots were successfully applied to various AI methods in CMR and decisively communicated AI-expert agreement.
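The agreement definition above reduces to a containment test. A minimal Python sketch, assuming a t-based interval for the bias and a chi-square interval for the variance (the paper's exact interval construction may differ):

```python
import numpy as np
from scipy import stats

def agreement_test(diffs, bias_tol, var_tol, alpha=0.05):
    """Containment test in the spirit of the Verity plot criterion.

    diffs:    per-case AI-minus-expert differences for one QP
    bias_tol: (low, high) tolerance range for acceptable bias
    var_tol:  (low, high) tolerance range for acceptable variance
    Agreement holds when both confidence intervals lie inside
    their respective tolerance ranges.
    """
    n = len(diffs)
    mean, sd = np.mean(diffs), np.std(diffs, ddof=1)

    # t-based confidence interval for the bias (mean difference)
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    bias_ci = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))

    # chi-square confidence interval for the variance
    chi_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
    chi_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)
    var_ci = ((n - 1) * sd**2 / chi_hi, (n - 1) * sd**2 / chi_lo)

    bias_ok = bias_tol[0] <= bias_ci[0] and bias_ci[1] <= bias_tol[1]
    var_ok = var_tol[0] <= var_ci[0] and var_ci[1] <= var_tol[1]
    return bias_ok and var_ok
```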

Providing context: Extracting non-linear and dynamic temporal motifs from brain activity.

Geenjaar E, Kim D, Calhoun V

PubMed · Jan 1, 2025
Approaches studying the dynamics of resting-state functional magnetic resonance imaging (rs-fMRI) activity often focus on time-resolved functional connectivity (tr-FC). While many tr-FC approaches have been proposed, most are linear, e.g., computing the linear correlation at a timestep or within a window. In this work, we propose to use a generative non-linear deep learning model, a disentangled variational autoencoder (DSVAE), that factorizes out window-specific (context) information from timestep-specific (local) information. This has the advantage of allowing our model to capture differences at multiple temporal scales. We find that by separating out temporal scales, our model's window-specific embeddings, which we refer to as context embeddings, separate windows from schizophrenia patients and control subjects more accurately than baseline models and the standard tr-FC approach in a low-dimensional space. Moreover, we find that for individuals with schizophrenia, our model's context embedding space is significantly correlated with both age and symptom severity. Interestingly, patients appear to spend more time in three clusters: one closer to controls that shows increased visual-sensorimotor, increased cerebellar-subcortical, and reduced cerebellar-visual functional network connectivity (FNC); an intermediate cluster showing increased subcortical-sensorimotor FNC; and one that shows decreased visual-sensorimotor, decreased subcortical-sensorimotor, and increased visual-subcortical FNC. We verify that our model captures features that are complementary to, but not the same as, standard tr-FC features. Our model can thus help broaden the neuroimaging toolset for analyzing fMRI dynamics and shows potential as an approach for finding psychiatric links that are more sensitive to individual and group characteristics.
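The core factorization idea, one latent per window (context) alongside one latent per timestep (local), can be sketched compactly. The toy encoder below illustrates that split only; it is not the paper's DSVAE architecture, and all layer choices and sizes are assumptions:

```python
import torch
import torch.nn as nn

class ContextLocalEncoder(nn.Module):
    """Toy sketch of a two-scale VAE encoder: one latent per window
    (context) plus one latent per timestep (local)."""

    def __init__(self, n_features, local_dim=8, context_dim=16):
        super().__init__()
        self.local_net = nn.Linear(n_features, 2 * local_dim)        # per timestep
        self.context_net = nn.GRU(n_features, 2 * context_dim, batch_first=True)

    def forward(self, x):                      # x: (batch, timesteps, features)
        mu_l, logvar_l = self.local_net(x).chunk(2, dim=-1)
        _, h = self.context_net(x)             # h: (1, batch, 2*context_dim)
        mu_c, logvar_c = h.squeeze(0).chunk(2, dim=-1)
        # reparameterization trick at both temporal scales
        z_local = mu_l + torch.randn_like(mu_l) * torch.exp(0.5 * logvar_l)
        z_context = mu_c + torch.randn_like(mu_c) * torch.exp(0.5 * logvar_c)
        return z_local, z_context
```

A decoder would then reconstruct each window from its context embedding together with the per-timestep local embeddings, encouraging window-level information to collect in the context latent.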

Enhancing Disease Detection in Radiology Reports Through Fine-tuning Lightweight LLM on Weak Labels.

Wei Y, Wang X, Ong H, Zhou Y, Flanders A, Shih G, Peng Y

PubMed · Jan 1, 2025
Despite significant progress in applying large language models (LLMs) to the medical domain, several limitations still prevent their practical application. Among these are constraints on model size and the lack of cohort-specific labeled datasets. In this work, we investigated the potential of improving a lightweight LLM, such as Llama 3.1-8B, through fine-tuning with datasets using synthetic labels. Two tasks are jointly trained by combining their respective instruction datasets. When the quality of the task-specific synthetic labels is relatively high (e.g., generated by GPT-4o), Llama 3.1-8B achieves satisfactory performance on the open-ended disease detection task, with a micro F1 score of 0.91. Conversely, when the quality of the task-relevant synthetic labels is relatively low (e.g., from the MIMIC-CXR dataset), fine-tuned Llama 3.1-8B is able to surpass its noisy teacher labels (micro F1 score of 0.67 vs. 0.63) when calibrated against curated labels, indicating the strong inherent capability of the model. These findings demonstrate the potential of fine-tuning LLMs with synthetic labels, offering a promising direction for future research on LLM specialization in the medical domain.
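For reference, the micro F1 metric quoted above pools true/false positives over all (report, label) pairs rather than averaging per class. A toy illustration with made-up multi-label outputs:

```python
from sklearn.metrics import f1_score

# Hypothetical multi-label disease detection scored against curated labels;
# the disease list and values below are invented for illustration only.
diseases = ["pneumonia", "edema", "effusion"]
y_true = [[1, 0, 1], [0, 0, 1], [1, 1, 0]]   # curated labels per report
y_pred = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]   # model predictions per report

# Micro-averaging pools TP/FP/FN over all (report, disease) pairs,
# so frequent findings contribute in proportion to their prevalence.
print(f1_score(y_true, y_pred, average="micro"))
```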

XLLC-Net: A lightweight and explainable CNN for accurate lung cancer classification using histopathological images.

Jim JR, Rayed ME, Mridha MF, Nur K

PubMed · Jan 1, 2025
Lung cancer imaging plays a crucial role in early diagnosis and treatment, where machine learning and deep learning have significantly advanced the accuracy and efficiency of disease classification. This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a streamlined convolutional neural network designed for classifying lung cancer from histopathological images. Using the LC25000 dataset, which includes three lung cancer classes and two colon cancer classes, we focused solely on the three lung cancer classes for this study. XLLC-Net effectively discerns complex disease patterns within these classes. The model consists of four convolutional layers and contains merely 3 million parameters, considerably reducing its computational footprint compared to existing deep learning models. This compact architecture facilitates efficient training, completing each epoch in just 60 seconds. Remarkably, XLLC-Net achieves a classification accuracy of 99.62% ± 0.16%, with precision, recall, and F1 score of 99.33% ± 0.30%, 99.67% ± 0.30%, and 99.70% ± 0.30%, respectively. Furthermore, the integration of explainable AI techniques, such as saliency maps and Grad-CAM, enhances the interpretability of the model, offering clear visual insights into its decision-making process. Our results underscore the potential of lightweight DL models in medical imaging, providing high accuracy and rapid training while ensuring model transparency and reliability.
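As a rough sense of scale for a four-convolutional-layer classifier, the sketch below builds a lightweight CNN in the spirit of XLLC-Net; the channel widths, input resolution, and classifier head are assumptions, so its parameter count differs from the published 3 million:

```python
import torch
import torch.nn as nn

class TinyLungCNN(nn.Module):
    """Illustrative lightweight CNN: four conv blocks plus a small head.
    All sizes are assumptions, not the published XLLC-Net design."""

    def __init__(self, n_classes=3):          # 3 lung cancer classes (LC25000)
        super().__init__()
        chans = [3, 32, 64, 128, 256]
        self.features = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1),
                nn.BatchNorm2d(chans[i + 1]),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),               # halve spatial size each block
            )
            for i in range(4)
        ])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, n_classes)
        )

    def forward(self, x):                      # x: (batch, 3, H, W)
        return self.head(self.features(x))

model = TinyLungCNN()
print(sum(p.numel() for p in model.parameters()))  # ~0.4M with these widths
```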

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

PubMed · Jan 1, 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and SVM yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application. This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
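The eight-algorithm comparison corresponds to a standard scikit-learn loop. A hedged sketch on synthetic placeholder features (not the study's radiomics pipeline or data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder features standing in for clinical + radiomics predictors
X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(),
    "DT": DecisionTreeClassifier(),
    "GB": GradientBoostingClassifier(),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```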

Refining CT image analysis: Exploring adaptive fusion in U-nets for enhanced brain tissue segmentation.

Chen BC, Shen CY, Chai JW, Hwang RH, Chiang WC, Chou CH, Liu WM

PubMed · Jan 1, 2025
Non-contrast computed tomography (NCCT) quickly diagnoses acute cerebral hemorrhage or infarction. However, deep-learning (DL) algorithms often generate false alarms (FAs) beyond the cerebral region. We introduce an enhanced brain tissue segmentation method for infarction lesion segmentation (ILS). This method integrates an adaptive result fusion strategy to confine the search operation within cerebral tissue, effectively reducing FAs. By leveraging fused brain masks, DL-based ILS algorithms focus on pertinent radiomic correlations. Various U-Net models underwent rigorous training, with exploration of diverse fusion strategies. Further refinement entailed applying a 9×9 Gaussian filter with unit standard deviation followed by binarization to mitigate false positives. Performance evaluation utilized Intersection over Union (IoU) and Hausdorff Distance (HD) metrics, complemented by external validation on a subset of the COCO dataset. Our study comprised 20 ischemic stroke patients (14 males, 4 females) with an average age of 68.9 ± 11.7 years. Fusion with UNet2+ and UNet3+ yielded an IoU of 0.955 and an HD of 1.33, while fusion with U-Net, UNet2+, and UNet3+ resulted in an IoU of 0.952 and an HD of 1.61. Evaluation on the COCO dataset demonstrated an IoU of 0.463 and an HD of 584.1 for fusion with UNet2+ and UNet3+, and an IoU of 0.453 and an HD of 728.0 for fusion with U-Net, UNet2+, and UNet3+. Our adaptive fusion strategy significantly diminishes FAs and enhances the training efficacy of DL-based ILS algorithms, surpassing individual U-Net models. This methodology holds promise as a versatile, data-independent approach for cerebral lesion segmentation.
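The mask post-processing described above (fusion across U-Net variants, 9×9 Gaussian smoothing with unit standard deviation, then binarization) can be sketched in a few lines; the soft-vote fusion rule and the 0.5 threshold are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_and_clean(masks, threshold=0.5):
    """masks: list of binary 2D arrays, one per U-Net variant."""
    fused = np.mean(masks, axis=0)          # soft vote across models (assumed rule)
    # sigma=1 with radius=4 gives a 9x9 kernel footprint (radius kwarg
    # requires SciPy >= 1.10)
    smoothed = gaussian_filter(fused, sigma=1.0, radius=4)
    return (smoothed >= threshold).astype(np.uint8)
```

Smoothing before thresholding suppresses isolated high-response pixels outside the brain, which is exactly the false-alarm mode the fusion strategy targets.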

SA-UMamba: Spatial attention convolutional neural networks for medical image segmentation.

Liu L, Huang Z, Wang S, Wang J, Liu B

PubMed · Jan 1, 2025
Medical image segmentation plays an important role in medical diagnosis and treatment. Most recent medical image segmentation methods are based on a convolutional neural network (CNN) or a Transformer model. However, CNN-based methods are limited by locality, whereas Transformer-based methods are constrained by the quadratic complexity of attention computations. Alternatively, the state-space-model-based Mamba architecture has garnered widespread attention owing to its linear computational complexity for global modeling. However, Mamba and its variants are still limited in their ability to extract local receptive field features. To address this limitation, we propose a novel residual spatial state-space (RSSS) block that enhances spatial feature extraction by integrating global and local representations. The RSSS block combines the Mamba module for capturing global dependencies with a receptive field attention convolution (RFAC) module to extract location-sensitive local patterns. Furthermore, we introduce a residual adjustment strategy to dynamically fuse global and local information, improving spatial expressiveness. Based on the RSSS block, we design a U-shaped SA-UMamba segmentation framework that effectively captures multi-scale spatial context across different stages. Experiments conducted on the Synapse, ISIC17, ISIC18, and CVC-ClinicDB datasets validate the segmentation performance of our proposed SA-UMamba framework.
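The RSSS idea, a global branch plus a local receptive-field branch fused through a residual adjustment, can be caricatured as follows. The global mixer here is a plain linear layer standing in for the Mamba module, the local branch is a depthwise convolution standing in for RFAC, and the scalar fusion weight is an illustrative assumption:

```python
import torch
import torch.nn as nn

class RSSSBlockSketch(nn.Module):
    """Toy sketch of residual global+local fusion, not the published block."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.global_mix = nn.Linear(dim, dim)              # placeholder for Mamba
        self.local_mix = nn.Conv1d(dim, dim, kernel_size=3,
                                   padding=1, groups=dim)  # depthwise, local RF
        self.alpha = nn.Parameter(torch.tensor(0.5))       # learnable balance

    def forward(self, x):                                  # x: (batch, tokens, dim)
        h = self.norm(x)
        g = self.global_mix(h)                             # global dependencies
        loc = self.local_mix(h.transpose(1, 2)).transpose(1, 2)  # local patterns
        return x + self.alpha * g + (1 - self.alpha) * loc # residual fusion
```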

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

PubMed · Jan 1, 2025
Currently, there are inconsistencies among different studies on preoperative prediction of Cytokeratin 19 (CK19) expression in hepatocellular carcinoma (HCC) using traditional imaging, radiomics, and deep learning. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened and extracted data based on inclusion and exclusion criteria. Eligible studies were included, and key findings were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3395 HCC patients were included. Of these, 72.7% (16/22) focused on traditional imaging, 36.4% (8/22) on radiomics, 9.1% (2/22) on deep learning, and 54.5% (12/22) on combined models. Magnetic resonance imaging was the most commonly used imaging modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The area under the curve (AUC) of models using clinical and traditional imaging features ranged from 0.560 to 0.917; radiomics models ranged from 0.648 to 0.951; and deep learning models ranged from 0.718 to 0.820. Notably, combined models of clinical, imaging, radiomics, and deep learning features achieved AUCs ranging from 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) of studies incorporating external validation. The combined model integrating traditional imaging, radiomics, and deep learning achieves excellent potential and performance for predicting CK19 in HCC. Based on current limitations, future research should focus on building an easy-to-use dynamic online tool, combining multicenter multimodal imaging and advanced deep learning approaches to enhance the accuracy and robustness of model predictions.
