
The Role of Computed Tomography and Artificial Intelligence in Evaluating the Comorbidities of Chronic Obstructive Pulmonary Disease: A One-Stop CT Scanning for Lung Cancer Screening.

Lin X, Zhang Z, Zhou T, Li J, Jin Q, Li Y, Guan Y, Xia Y, Zhou X, Fan L

pubmed logopapers · Jan 1, 2025
Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide. Comorbidities in patients with COPD significantly increase morbidity, mortality, and healthcare costs, posing a significant burden on the management of COPD. Given the complex clinical manifestations and varying severity of COPD comorbidities, accurate diagnosis and evaluation are particularly important in selecting appropriate treatment options. With the development of medical imaging technology, AI-based chest CT, as a noninvasive imaging modality, provides a detailed assessment of COPD comorbidities. Recent studies have shown that certain radiographic features on chest CT can serve as surrogate markers of comorbidities in COPD patients. CT-based radiomics features have provided incremental predictive value over clinical risk factors alone, reaching an AUC of 0.73 for predicting COPD combined with cardiovascular disease (CVD). However, AI has inherent limitations, such as a lack of interpretability, and further research is needed to address them. This review evaluates the progress of AI technology combined with chest CT imaging in COPD comorbidities, including lung cancer, cardiovascular disease, osteoporosis, sarcopenia, excess adipose depots, and pulmonary hypertension, with the aim of deepening the imaging-based understanding and management of COPD comorbidities and ultimately improving disease screening, efficacy assessment, and prognostic evaluation.
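The incremental value of radiomics over clinical risk factors alone, as in the CVD example above, is typically quantified by comparing the AUCs of nested models. A minimal scikit-learn sketch on synthetic data (all feature names and labels below are hypothetical placeholders, not study data):

```python
# Sketch: comparing a clinical-only model against a clinical + CT-radiomics
# model by AUC. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
clinical = rng.normal(size=(n, 4))    # e.g., age, smoking, BMI, FEV1 (hypothetical)
radiomics = rng.normal(size=(n, 20))  # CT-based radiomics features (hypothetical)
y = rng.integers(0, 2, size=n)        # COPD + CVD label (synthetic)

Xc_tr, Xc_te, Xr_tr, Xr_te, y_tr, y_te = train_test_split(
    clinical, radiomics, y, test_size=0.3, random_state=0)

clin_model = LogisticRegression(max_iter=1000).fit(Xc_tr, y_tr)
fused_model = LogisticRegression(max_iter=1000).fit(np.hstack([Xc_tr, Xr_tr]), y_tr)

auc_clin = roc_auc_score(y_te, clin_model.predict_proba(Xc_te)[:, 1])
auc_fused = roc_auc_score(
    y_te, fused_model.predict_proba(np.hstack([Xc_te, Xr_te]))[:, 1])
print(f"clinical-only AUC: {auc_clin:.2f}, clinical+radiomics AUC: {auc_fused:.2f}")
```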

Improved swin transformer-based thorax disease classification with optimal feature selection using chest X-ray.

Rana N, Coulibaly Y, Noor A, Noor TH, Alam MI, Khan Z, Tahir A, Khan MZ

pubmed logopapers · Jan 1, 2025
Thoracic diseases, including pneumonia, tuberculosis, lung cancer, and others, pose significant health risks and require timely and accurate diagnosis to ensure proper treatment. This research therefore proposes a deep-learning model for thorax disease classification using chest X-rays. The input is pre-processed by resizing, normalizing pixel values, and applying data augmentation to address the issue of imbalanced datasets and improve model generalization. Significant features are extracted from the images using an Enhanced Auto-Encoder (EnAE) model, which combines a stacked auto-encoder architecture with an attention module to enhance feature representation and classification accuracy. To further improve feature selection, we utilize the Chaotic Whale Optimization (ChWO) algorithm, which optimally selects the most relevant attributes from the extracted features. Finally, disease classification is performed using the novel Improved Swin Transformer (IMSTrans) model, which is designed to efficiently process high-dimensional medical image data and achieve superior classification performance. The proposed EnAE+ChWO+IMSTrans model was evaluated using extensive chest X-ray datasets and the Lung Disease Dataset, demonstrating Accuracy, Precision, Recall, F-Score, MCC, and MAE of 0.964, 0.977, 0.9845, 0.964, 0.9647, and 0.184, respectively, indicating a reliable and efficient solution for thorax disease classification.
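The reported evaluation metrics can all be computed with scikit-learn. A minimal sketch on placeholder predictions (the macro averaging choice and the use of MAE over class indices are assumptions, not taken from the paper):

```python
# Sketch: computing the evaluation metrics reported above with scikit-learn.
# y_true / y_pred are placeholder arrays, not the paper's data.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, mean_absolute_error)

y_true = [0, 1, 2, 1, 0, 2, 1, 0]   # hypothetical thorax-disease class labels
y_pred = [0, 1, 2, 1, 0, 1, 1, 0]   # hypothetical model predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F-Score  :", f1_score(y_true, y_pred, average="macro"))
print("MCC      :", matthews_corrcoef(y_true, y_pred))
print("MAE      :", mean_absolute_error(y_true, y_pred))  # over class indices
```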

Enhancing Disease Detection in Radiology Reports Through Fine-tuning Lightweight LLM on Weak Labels.

Wei Y, Wang X, Ong H, Zhou Y, Flanders A, Shih G, Peng Y

pubmed logopapers · Jan 1, 2025
Despite significant progress in applying large language models (LLMs) to the medical domain, several limitations still prevent them from practical applications. Among these are the constraints on model size and the lack of cohort-specific labeled datasets. In this work, we investigated the potential of improving a lightweight LLM, such as Llama 3.1-8B, through fine-tuning with datasets using synthetic labels. Two tasks are jointly trained by combining their respective instruction datasets. When the quality of the task-specific synthetic labels is relatively high (e.g., generated by GPT-4o), Llama 3.1-8B achieves satisfactory performance on the open-ended disease detection task, with a micro F1 score of 0.91. Conversely, when the quality of the task-relevant synthetic labels is relatively low (e.g., from the MIMIC-CXR dataset), fine-tuned Llama 3.1-8B is able to surpass its noisy teacher labels (micro F1 score of 0.67 vs. 0.63) when calibrated against curated labels, indicating the model's strong inherent capability. These findings demonstrate the potential of fine-tuning LLMs with synthetic labels, offering a promising direction for future research on LLM specialization in the medical domain.
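Scoring the fine-tuned student against its noisy teacher on curated labels, as described above, reduces to computing micro-averaged F1 for each. A sketch (the multi-label arrays are illustrative placeholders for per-report disease annotations):

```python
# Sketch: scoring a fine-tuned student and its noisy teacher against curated
# reference labels with micro F1. Labels are hypothetical placeholders.
from sklearn.metrics import f1_score

curated = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]  # gold disease labels
student = [[1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]]  # fine-tuned model outputs
teacher = [[1, 0, 0], [0, 1, 1], [1, 0, 0], [0, 1, 1]]  # noisy synthetic labels

print("student micro F1:", f1_score(curated, student, average="micro"))
print("teacher micro F1:", f1_score(curated, teacher, average="micro"))
```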

SA-UMamba: Spatial attention convolutional neural networks for medical image segmentation.

Liu L, Huang Z, Wang S, Wang J, Liu B

pubmed logopapers · Jan 1, 2025
Medical image segmentation plays an important role in medical diagnosis and treatment. Most recent medical image segmentation methods are based on a convolutional neural network (CNN) or Transformer model. However, CNN-based methods are limited by locality, whereas Transformer-based methods are constrained by the quadratic complexity of attention computations. Alternatively, the state-space model-based Mamba architecture has garnered widespread attention owing to its linear computational complexity for global modeling. However, Mamba and its variants are still limited in their ability to extract local receptive field features. To address this limitation, we propose a novel residual spatial state-space (RSSS) block that enhances spatial feature extraction by integrating global and local representations. The RSSS block combines the Mamba module for capturing global dependencies with a receptive field attention convolution (RFAC) module to extract location-sensitive local patterns. Furthermore, we introduce a residual adjustment strategy to dynamically fuse global and local information, improving spatial expressiveness. Based on the RSSS block, we design a U-shaped SA-UMamba segmentation framework that effectively captures multi-scale spatial context across different stages. Experiments conducted on the Synapse, ISIC17, ISIC18 and CVC-ClinicDB datasets validate the segmentation performance of our proposed SA-UMamba framework.
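The authors' RSSS implementation is not reproduced here; a schematic PyTorch sketch of the described pattern (a global-dependency branch fused residually with a local-convolution branch via a learned adjustment) might look like the following, where GlobalBranch and the depthwise convolution are simplified stand-ins for the Mamba and RFAC modules:

```python
# Schematic sketch of a residual block fusing a global branch with a local
# branch via a learned gate, in the spirit of the RSSS block described above.
# The branches are simplified stand-ins, not the authors' Mamba / RFAC code.
import torch
import torch.nn as nn

class RSSSLikeBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        # Stand-in for the Mamba (state-space) global branch: a channel MLP.
        self.global_branch = nn.Sequential(
            nn.Linear(channels, channels), nn.GELU(), nn.Linear(channels, channels))
        # Stand-in for the RFAC local branch: a depthwise 3x3 convolution.
        self.local_branch = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # Learned scalar gate for the "residual adjustment" fusion.
        self.gate = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))          # (B, H*W, C)
        g = self.global_branch(tokens).transpose(1, 2).reshape(b, c, h, w)
        l = self.local_branch(x)
        return x + self.gate * g + (1 - self.gate) * l            # residual fusion

y = RSSSLikeBlock(32)(torch.randn(2, 32, 16, 16))
print(y.shape)  # torch.Size([2, 32, 16, 16])
```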

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

pubmed logopapers · Jan 1, 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created, namely, the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these diverse models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and the SVM classifier yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application. This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
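Fitting the eight listed classifier families and comparing test-set AUCs follows a standard scikit-learn pattern. A sketch on synthetic features (the real study used clinical parameters and CT radiomics, not the placeholder data below):

```python
# Sketch: fitting the eight classifier families named above and comparing
# test AUCs. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X = np.random.default_rng(0).normal(size=(400, 30))     # clinical+radiomics stand-in
y = np.random.default_rng(1).integers(0, 2, size=400)   # complicated vs uncomplicated
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(),
    "DT": DecisionTreeClassifier(),
    "GB": GradientBoostingClassifier(),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```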

MRISeqClassifier: A Deep Learning Toolkit for Precise MRI Sequence Classification.

Pan J, Chen Q, Sun C, Liang R, Bian J, Xu J

pubmed logopapers · Jan 1, 2025
Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool in medicine, widely used to detect and assess various health conditions. Different MRI sequences, such as T1-weighted, T2-weighted, and FLAIR, serve distinct roles by highlighting different tissue characteristics and contrasts. However, distinguishing them based solely on the description file is currently impossible due to confusing or incorrect annotations. Additionally, there is a notable lack of effective tools to differentiate these sequences. In response, we developed a deep learning-based toolkit tailored for small, unrefined MRI datasets. This toolkit enables precise sequence classification and delivers performance comparable to systems trained on large, meticulously curated datasets. Utilizing lightweight model architectures and incorporating a voting ensemble method, the toolkit enhances accuracy and stability. It achieves a 99% accuracy rate using only 10% of the data typically required in other research. The code is available at https://github.com/JinqianPan/MRISeqClassifier.
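The released toolkit is linked above; conceptually, its voting ensemble reduces to soft voting, averaging per-model class probabilities before taking the argmax. A minimal sketch (the model outputs below are placeholders, not outputs of the released code):

```python
# Sketch: soft-voting ensemble over several lightweight classifiers, the
# general mechanism behind the voting step described above.
import numpy as np

def soft_vote(prob_stacks: list[np.ndarray]) -> np.ndarray:
    """Average per-model class probabilities and take the argmax."""
    return np.mean(np.stack(prob_stacks), axis=0).argmax(axis=1)

# Three hypothetical models' softmax outputs for 2 images over 4 sequence
# classes (e.g., T1, T2, FLAIR, other).
m1 = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]])
m2 = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1]])
m3 = np.array([[0.5, 0.3, 0.1, 0.1], [0.3, 0.3, 0.3, 0.1]])
print(soft_vote([m1, m2, m3]))  # predicted sequence class per image
```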

Refining CT image analysis: Exploring adaptive fusion in U-nets for enhanced brain tissue segmentation.

Chen BC, Shen CY, Chai JW, Hwang RH, Chiang WC, Chou CH, Liu WM

pubmed logopapers · Jan 1, 2025
Non-contrast Computed Tomography (NCCT) quickly diagnoses acute cerebral hemorrhage or infarction. However, Deep-Learning (DL) algorithms often generate false alarms (FAs) beyond the cerebral region. We introduce an enhanced brain tissue segmentation method for infarction lesion segmentation (ILS). This method integrates an adaptive result fusion strategy to confine the search operation within cerebral tissue, effectively reducing FAs. By leveraging fused brain masks, DL-based ILS algorithms focus on pertinent radiomic correlations. Various U-Net models underwent rigorous training, with exploration of diverse fusion strategies. Further refinement entailed applying a 9×9 Gaussian filter with unit standard deviation followed by binarization to mitigate false positives. Performance evaluation utilized Intersection over Union (IoU) and Hausdorff Distance (HD) metrics, complemented by external validation on a subset of the COCO dataset. Our study comprised 20 ischemic stroke patients (14 males, 4 females) with an average age of 68.9 ± 11.7 years. Fusion with UNet2+ and UNet3+ yielded an IoU of 0.955 and an HD of 1.33, while fusion with U-Net, UNet2+, and UNet3+ resulted in an IoU of 0.952 and an HD of 1.61. Evaluation on the COCO dataset demonstrated an IoU of 0.463 and an HD of 584.1 for fusion with UNet2+ and UNet3+, and an IoU of 0.453 and an HD of 728.0 for fusion with U-Net, UNet2+, and UNet3+. Our adaptive fusion strategy significantly diminishes FAs and enhances the training efficacy of DL-based ILS algorithms, surpassing individual U-Net models. This methodology holds promise as a versatile, data-independent approach for cerebral lesion segmentation.
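The described post-processing (9×9 Gaussian smoothing with unit sigma, then binarization) and the IoU/HD metrics can be sketched with SciPy. Note that with sigma=1 and SciPy's default truncate=4.0, gaussian_filter uses a 9-point kernel per axis; the masks and threshold below are illustrative assumptions, not study data:

```python
# Sketch: post-processing and metrics described above. Masks are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((64, 64)); pred[20:40, 20:40] = 1.0  # hypothetical fused mask
gt = np.zeros((64, 64)); gt[22:42, 22:42] = 1.0      # hypothetical ground truth

smoothed = gaussian_filter(pred, sigma=1.0)          # 9x9 Gaussian, unit sigma
binary = (smoothed > 0.5).astype(np.uint8)           # binarization (assumed 0.5)

inter = np.logical_and(binary, gt).sum()
union = np.logical_or(binary, gt).sum()
iou = inter / union

p_pts, g_pts = np.argwhere(binary), np.argwhere(gt)
hd = max(directed_hausdorff(p_pts, g_pts)[0],
         directed_hausdorff(g_pts, p_pts)[0])        # symmetric Hausdorff
print(f"IoU = {iou:.3f}, HD = {hd:.2f}")
```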

Providing context: Extracting non-linear and dynamic temporal motifs from brain activity.

Geenjaar E, Kim D, Calhoun V

pubmed logopapers · Jan 1, 2025
Approaches studying the dynamics of resting-state functional magnetic resonance imaging (rs-fMRI) activity often focus on time-resolved functional connectivity (tr-FC). While many tr-FC approaches have been proposed, most are linear approaches, e.g., computing the linear correlation at a timestep or within a window. In this work, we propose to use a generative non-linear deep learning model, a disentangled variational autoencoder (DSVAE), that factorizes out window-specific (context) information from timestep-specific (local) information. This has the advantage of allowing our model to capture differences at multiple temporal scales. We find that by separating out temporal scales, our model's window-specific embeddings, or as we refer to them, context embeddings, more accurately separate windows from schizophrenia patients and control subjects than baseline models and the standard tr-FC approach in a low-dimensional space. Moreover, we find that for individuals with schizophrenia, our model's context embedding space is significantly correlated with both age and symptom severity. Interestingly, patients appear to spend more time in three clusters: one closer to controls, which shows increased visual-sensorimotor, cerebellar-subcortical, and reduced cerebellar-visual functional network connectivity (FNC); an intermediate state showing increased subcortical-sensorimotor FNC; and one that shows decreased visual-sensorimotor, decreased subcortical-sensorimotor, and increased visual-subcortical FNC. We verify that our model captures features that are complementary to, but not the same as, standard tr-FC features. Our model can thus help broaden the neuroimaging toolset in analyzing fMRI dynamics and shows potential as an approach for finding psychiatric links that are more sensitive to individual and group characteristics.
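A schematic sketch of the factorization idea, one context latent per window plus one local latent per timestep feeding a shared decoder, might look like the following in PyTorch. This is a simplified stand-in for the paper's DSVAE, with illustrative dimensions and layers:

```python
# Schematic sketch of a window/timestep factorized VAE in the spirit of the
# DSVAE described above. Layers and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FactorizedVAE(nn.Module):
    def __init__(self, n_features: int, d_context: int = 8, d_local: int = 4):
        super().__init__()
        self.local_enc = nn.GRU(n_features, 2 * d_local, batch_first=True)
        self.context_enc = nn.Linear(n_features, 2 * d_context)
        self.decoder = nn.Linear(d_context + d_local, n_features)

    @staticmethod
    def sample(stats: torch.Tensor) -> torch.Tensor:
        mu, logvar = stats.chunk(2, dim=-1)          # reparameterization trick
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x: torch.Tensor):              # x: (B, T, n_features)
        z_local = self.sample(self.local_enc(x)[0])  # per-timestep latents
        z_ctx = self.sample(self.context_enc(x.mean(dim=1)))  # per-window latent
        z_rep = z_ctx.unsqueeze(1).expand(-1, x.size(1), -1)
        return self.decoder(torch.cat([z_rep, z_local], dim=-1)), z_ctx

recon, context = FactorizedVAE(n_features=53)(torch.randn(2, 20, 53))
print(recon.shape, context.shape)  # (2, 20, 53) reconstruction, (2, 8) context
```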

Brain tumor classification using MRI images and deep learning techniques.

Wong Y, Su ELM, Yeong CF, Holderbaum W, Yang C

pubmed logopapers · Jan 1, 2025
Brain tumors pose a significant medical challenge, necessitating early detection and precise classification for effective treatment. This study aims to address this challenge by introducing an automated brain tumor classification system that utilizes deep learning (DL) and Magnetic Resonance Imaging (MRI) images. The main purpose of this research is to develop a model that can accurately detect and classify different types of brain tumors, including glioma, meningioma, pituitary tumors, and normal brain scans. A convolutional neural network (CNN) architecture with pretrained VGG16 as the base model is employed, and diverse public datasets are utilized to ensure comprehensive representation. Data augmentation techniques are employed to enhance the training dataset, resulting in a total of 17,136 brain MRI images across the four classes. The accuracy of this model was 99.24%, a higher accuracy than other similar works, demonstrating its potential clinical utility. This higher accuracy was achieved mainly due to the utilization of a large and diverse dataset, the improvement of network configuration, the application of a fine-tuning strategy to adjust pretrained weights, and the implementation of data augmentation techniques in enhancing classification performance for brain tumor detection. In addition, a web application was developed by leveraging HTML and Dash components to enhance usability, allowing for easy image upload and tumor prediction. By harnessing artificial intelligence (AI), the developed system addresses the need to reduce human error and enhance diagnostic accuracy. The proposed approach provides an efficient and reliable solution for brain tumor classification, facilitating early diagnosis and enabling timely medical interventions. This work signifies a potential advancement in brain tumor classification, promising improved patient care and outcomes.
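Using a pretrained VGG16 backbone with a replaced four-class head is a standard transfer-learning pattern; a PyTorch sketch follows (the study's exact configuration, fine-tuning schedule, and data pipeline are not reproduced here):

```python
# Sketch: pretrained VGG16 backbone with a new 4-class head (glioma,
# meningioma, pituitary, normal), the transfer-learning pattern described
# above. Training details are illustrative, not the study's exact setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the convolutional base
model.classifier[6] = nn.Linear(4096, 4)     # replace the 1000-class head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch of 224x224 MRI slices.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```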

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

pubmed logopapers · Jan 1, 2025
Currently, there are inconsistencies among different studies on preoperative prediction of Cytokeratin 19 (CK19) expression in hepatocellular carcinoma (HCC) using traditional imaging, radiomics, and deep learning. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened and extracted data based on inclusion and exclusion criteria. Eligible studies were included, and key findings were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3395 HCC patients were included: 72.7% (16/22) focused on traditional imaging, 36.4% (8/22) on radiomics, 9.1% (2/22) on deep learning, and 54.5% (12/22) on combined models. Magnetic resonance imaging was the most commonly used imaging modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The AUC range for models using clinical and traditional imaging features was 0.560 to 0.917; for radiomics, 0.648 to 0.951; and for deep learning, 0.718 to 0.820. Notably, the AUCs of combined models integrating clinical, imaging, radiomics, and deep learning features ranged from 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) of studies incorporating external validation. The combined model integrating traditional imaging, radiomics, and deep learning shows excellent potential and performance for predicting CK19 in HCC. Given current limitations, future research should focus on building an easy-to-use dynamic online tool, combining multicenter multimodal imaging and advanced deep learning approaches to enhance the accuracy and robustness of model predictions.