Page 382 of 385 (3843 results)

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

PubMed · Jan 1 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and the SVM classifier yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application.
This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
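The AUC values reported above are rank statistics over classifier scores. As an illustration of how such an AUC is computed (the labels and probabilities below are invented, not data from this study), a minimal stdlib-only sketch of the Mann-Whitney formulation:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 for the positive class (e.g. complicated AA), 0 otherwise.
    scores: classifier probabilities for the positive class.
    Ties contribute half a concordant pair.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one sample from each class")
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))

# Hypothetical scores for six patients:
y_true = [1, 1, 1, 0, 0, 0]
y_prob = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(round(auc_score(y_true, y_prob), 2))  # 8/9 pairs concordant: 0.89
```

In practice libraries such as scikit-learn's `roc_auc_score` compute the same quantity; the explicit pairwise form above just makes the ranking interpretation visible.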

Refining CT image analysis: Exploring adaptive fusion in U-nets for enhanced brain tissue segmentation.

Chen BC, Shen CY, Chai JW, Hwang RH, Chiang WC, Chou CH, Liu WM

PubMed · Jan 1 2025
Non-contrast Computed Tomography (NCCT) enables rapid diagnosis of acute cerebral hemorrhage or infarction. However, deep-learning (DL) algorithms often generate false alarms (FAs) beyond the cerebral region. We introduce an enhanced brain tissue segmentation method for infarction lesion segmentation (ILS). This method integrates an adaptive result-fusion strategy to confine the search operation to cerebral tissue, effectively reducing FAs. By leveraging fused brain masks, DL-based ILS algorithms focus on pertinent radiomic correlations. Various U-Net models underwent rigorous training, with exploration of diverse fusion strategies. Further refinement applied a 9x9 Gaussian filter with unit standard deviation, followed by binarization, to mitigate false positives. Performance was evaluated using Intersection over Union (IoU) and Hausdorff Distance (HD), complemented by external validation on a subset of the COCO dataset. Our study comprised 20 ischemic stroke patients (14 males, 4 females) with an average age of 68.9 ± 11.7 years. Fusion with UNet2+ and UNet3+ yielded an IoU of 0.955 and an HD of 1.33, while fusion with U-Net, UNet2+, and UNet3+ resulted in an IoU of 0.952 and an HD of 1.61. Evaluation on the COCO dataset demonstrated an IoU of 0.463 and an HD of 584.1 for fusion with UNet2+ and UNet3+, and an IoU of 0.453 and an HD of 728.0 for fusion with U-Net, UNet2+, and UNet3+. Our adaptive fusion strategy significantly diminishes FAs and enhances the training efficacy of DL-based ILS algorithms, surpassing individual U-Net models. This methodology holds promise as a versatile, data-independent approach for cerebral lesion segmentation.
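The two evaluation metrics above have compact definitions. A stdlib-only sketch on toy binary masks (the 3x3 masks are invented for illustration, not segmentation output):

```python
import math

def iou(a, b):
    """Intersection over Union of two same-shaped binary masks (lists of rows)."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x or y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixels of two
    binary masks (both assumed non-empty)."""
    pa = [(i, j) for i, row in enumerate(a) for j, v in enumerate(row) if v]
    pb = [(i, j) for i, row in enumerate(b) for j, v in enumerate(row) if v]
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(pa, pb), directed(pb, pa))

pred = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
true = [[0, 1, 1],
        [0, 1, 0],
        [0, 0, 0]]
print(iou(pred, true))        # 3 shared pixels over 4 in the union: 0.75
print(hausdorff(pred, true))  # worst-case nearest-pixel distance: 1.0
```

IoU rewards overlap area, while HD penalizes the single worst boundary discrepancy, which is why the paper reports both.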

SA-UMamba: Spatial attention convolutional neural networks for medical image segmentation.

Liu L, Huang Z, Wang S, Wang J, Liu B

PubMed · Jan 1 2025
Medical image segmentation plays an important role in medical diagnosis and treatment. Most recent medical image segmentation methods are based on a convolutional neural network (CNN) or Transformer model. However, CNN-based methods are limited by locality, whereas Transformer-based methods are constrained by the quadratic complexity of attention computations. Alternatively, the state-space model-based Mamba architecture has garnered widespread attention owing to its linear computational complexity for global modeling. However, Mamba and its variants are still limited in their ability to extract local receptive field features. To address this limitation, we propose a novel residual spatial state-space (RSSS) block that enhances spatial feature extraction by integrating global and local representations. The RSSS block combines the Mamba module for capturing global dependencies with a receptive field attention convolution (RFAC) module to extract location-sensitive local patterns. Furthermore, we introduce a residual adjust strategy to dynamically fuse global and local information, improving spatial expressiveness. Based on the RSSS block, we design a U-shaped SA-UMamba segmentation framework that effectively captures multi-scale spatial context across different stages. Experiments conducted on the Synapse, ISIC17, ISIC18 and CVC-ClinicDB datasets validate the segmentation performance of our proposed SA-UMamba framework.
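The residual adjust strategy is described only at a high level above. A toy, list-based sketch of one plausible reading (a scalar gate blending the global Mamba branch and the local RFAC branch, added residually; this is purely an assumption for illustration, not the authors' implementation):

```python
def residual_adjust(x, global_feat, local_feat, alpha):
    """Toy residual fusion: input plus a convex combination of the global
    and local branch outputs, gated by a scalar alpha in [0, 1] that would
    be learned in a real network. Features are flat lists of floats."""
    assert 0.0 <= alpha <= 1.0
    return [xi + alpha * g + (1.0 - alpha) * l
            for xi, g, l in zip(x, global_feat, local_feat)]

x = [1.0, 2.0]    # block input
g = [0.5, 0.5]    # hypothetical global (Mamba) branch output
l = [0.1, 0.3]    # hypothetical local (RFAC) branch output
print(residual_adjust(x, g, l, 0.5))
```

The residual term keeps gradients flowing through the identity path, while the gate lets the block lean on global context or local texture per training signal.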

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

PubMed · Jan 1 2025
Currently, there are inconsistencies among studies on the preoperative prediction of Cytokeratin 19 (CK19) expression in hepatocellular carcinoma (HCC) using traditional imaging, radiomics, and deep learning. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened studies and extracted data based on inclusion and exclusion criteria, and key findings from eligible studies were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3395 HCC patients were included: 72.7% (16/22) examined traditional imaging, 36.4% (8/22) radiomics, 9.1% (2/22) deep learning, and 54.5% (12/22) combined models. Magnetic resonance imaging was the most commonly used modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The area under the curve (AUC) of models using clinical and traditional imaging ranged from 0.560 to 0.917, that of radiomics models from 0.648 to 0.951, and that of deep learning models from 0.718 to 0.820. Notably, the AUCs of combined models of clinical, imaging, radiomics, and deep learning features ranged from 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) of studies incorporating external validation. The combined model integrating traditional imaging, radiomics, and deep learning shows excellent potential and performance for predicting CK19 status in HCC.
Based on current limitations, future research should focus on building an easy-to-use dynamic online tool, combining multicenter-multimodal imaging and advanced deep learning approaches to enhance the accuracy and robustness of model predictions.

Brain tumor classification using MRI images and deep learning techniques.

Wong Y, Su ELM, Yeong CF, Holderbaum W, Yang C

PubMed · Jan 1 2025
Brain tumors pose a significant medical challenge, necessitating early detection and precise classification for effective treatment. This study addresses that challenge with an automated brain tumor classification system that applies deep learning (DL) to Magnetic Resonance Imaging (MRI) images. The main purpose of this research is to develop a model that can accurately detect and classify different types of brain tumors, including glioma, meningioma, pituitary tumors, and normal brain scans. A convolutional neural network (CNN) architecture with pretrained VGG16 as the base model is employed, and diverse public datasets are used to ensure comprehensive representation. Data augmentation expands the training dataset to a total of 17,136 brain MRI images across the four classes. The model achieved an accuracy of 99.24%, higher than that reported in comparable works, demonstrating its potential clinical utility. This accuracy was achieved mainly through the use of a large and diverse dataset, an improved network configuration, fine-tuning of the pretrained weights, and data augmentation, all of which enhanced classification performance for brain tumor detection. In addition, a web application was developed using HTML and Dash components to enhance usability, allowing easy image upload and tumor prediction. By harnessing artificial intelligence (AI), the developed system addresses the need to reduce human error and enhance diagnostic accuracy. The proposed approach provides an efficient and reliable solution for brain tumor classification, facilitating early diagnosis and enabling timely medical interventions. This work represents a potential advance in brain tumor classification, promising improved patient care and outcomes.
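Augmentation of the kind described multiplies each scan into several training samples. A stdlib sketch with images as lists of rows (the specific transforms, flips and a 90-degree rotation, are illustrative assumptions, not the paper's exact pipeline):

```python
def hflip(img):
    """Mirror an image left-to-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror an image top-to-bottom."""
    return img[::-1]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    """One original image becomes four training samples."""
    return [img, hflip(img), vflip(img), rot90(img)]

mri = [[1, 2],
       [3, 4]]
for sample in augment(mri):
    print(sample)
```

Label-preserving transforms like these let a dataset grow severalfold without new acquisitions, which is how counts like 17,136 images are typically reached from smaller source sets.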

Integrating multimodal imaging and peritumoral features for enhanced prostate cancer diagnosis: A machine learning approach.

Zhou H, Xie M, Shi H, Shou C, Tang M, Zhang Y, Hu Y, Liu X

PubMed · Jan 1 2025
Prostate cancer is a common malignancy in men, and accurately distinguishing between benign and malignant nodules at an early stage is crucial for optimizing treatment. Multimodal imaging (such as ADC and T2) plays an important role in the diagnosis of prostate cancer, but effectively combining these imaging features for accurate classification remains a challenge. This retrospective study included MRI data from 199 prostate cancer patients. Radiomic features from both the tumor and peritumoral regions were extracted, and a random forest model was used to select the most contributive features for classification. Three machine learning models (Random Forest, XGBoost, and Extra Trees) were then constructed and trained on four different feature combinations (tumor ADC, tumor T2, tumor ADC+T2, and tumor + peritumoral ADC+T2). The model incorporating multimodal imaging features and peritumoral characteristics showed superior classification performance. The Extra Trees model outperformed the others across all feature combinations, particularly in the tumor + peritumoral ADC+T2 group, where the AUC reached 0.729. The AUC values for the other combinations also exceeded 0.65. While the Random Forest and XGBoost models performed slightly lower, they still demonstrated strong classification abilities, with AUCs ranging from 0.63 to 0.72. SHAP analysis revealed that key features, such as tumor texture and peritumoral gray-level features, significantly contributed to the model's classification decisions. The combination of multimodal imaging data with peritumoral features moderately improved the accuracy of prostate cancer classification. This model provides a non-invasive and effective diagnostic tool for clinical use and supports future personalized treatment decisions.
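Combining tumor and peritumoral radiomics boils down to concatenating per-region feature vectors. A stdlib sketch of a few first-order features (the intensity values and feature set are invented; real pipelines typically use dedicated toolkits such as PyRadiomics, which defines many more features):

```python
import statistics

def first_order_features(intensities):
    """A few first-order radiomic features of an ROI's voxel intensities."""
    return {
        "mean": statistics.fmean(intensities),
        "variance": statistics.pvariance(intensities),
        "minimum": min(intensities),
        "maximum": max(intensities),
    }

# Hypothetical ADC intensities sampled from the two regions:
tumor = [620, 640, 600, 610, 630]
peritumoral = [900, 950, 870, 920, 940]

# Prefix each feature with its region and merge into one feature vector,
# mirroring the tumor + peritumoral combinations used in the study.
features = {}
for region, voxels in [("tumor", tumor), ("peritumoral", peritumoral)]:
    for name, value in first_order_features(voxels).items():
        features[f"{region}_{name}"] = value

print(features["tumor_mean"], features["peritumoral_mean"])
```

The merged dictionary is what a classifier (or a SHAP explainer) would see as its input columns, with region prefixes keeping tumor and peritumoral contributions attributable.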

A plaque recognition algorithm for coronary OCT images by Dense Atrous Convolution and attention mechanism.

Meng H, Zhao R, Zhang Y, Zhang B, Zhang C, Wang D, Sun J

PubMed · Jan 1 2025
Currently, plaque segmentation in Optical Coherence Tomography (OCT) images of coronary arteries is primarily carried out manually by physicians, and the accuracy of existing automatic segmentation techniques needs further improvement. To furnish efficient and precise decision support, automated detection of plaques in coronary OCT images is of paramount importance. To address these challenges, we propose a novel deep learning algorithm featuring Dense Atrous Convolution (DAC) and an attention mechanism to achieve high-precision segmentation and classification of coronary artery plaques. We also constructed a relatively comprehensive dataset of 760 original images, expanded to 8,000 through data augmentation; this dataset serves as a significant resource for future research. The experimental results demonstrate that the Dice coefficients for calcified, fibrous, and lipid plaques are 0.913, 0.900, and 0.879, respectively, surpassing those of five other conventional medical image segmentation networks. These outcomes attest to the effectiveness and superiority of the proposed algorithm for automatic coronary artery plaque segmentation.
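The Dice coefficients above compare a predicted plaque mask against the ground truth. A minimal stdlib sketch (the masks are toy examples, not OCT data):

```python
def dice(pred, true):
    """Dice coefficient of two same-shaped binary masks: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p and t for rp, rt in zip(pred, true) for p, t in zip(rp, rt))
    size = sum(v for row in pred for v in row) + sum(v for row in true for v in row)
    return 2 * inter / size if size else 1.0

pred = [[1, 1, 0],
        [1, 0, 0]]
true = [[1, 1, 0],
        [0, 1, 0]]
print(dice(pred, true))  # 2 * 2 / (3 + 3)
```

Dice weights overlap against the mean mask size, so it is less sensitive to large true-negative backgrounds than raw pixel accuracy, which is why it dominates in segmentation papers like this one.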

Comparative analysis of diagnostic performance in mammography: A reader study on the impact of AI assistance.

Ramli Hamid MT, Ab Mumin N, Abdul Hamid S, Mohd Ariffin N, Mat Nor K, Saib E, Mohamed NA

PubMed · Jan 1 2025
This study evaluates the impact of artificial intelligence (AI) assistance on the diagnostic performance of radiologists with varying levels of experience in interpreting mammograms in a Malaysian tertiary referral center, particularly in women with dense breasts. This retrospective study included 434 digital mammograms interpreted by two general radiologists (12 and 6 years of experience) and two trainees (2 years of experience). Diagnostic performance was assessed with and without AI assistance (Lunit INSIGHT MMG), using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). Inter-reader agreement was measured using kappa statistics. AI assistance significantly improved the diagnostic performance of all reader groups across all metrics (p < 0.05). The senior radiologist consistently achieved the highest sensitivity (86.5% without AI, 88.0% with AI) and specificity (60.5% without AI, 59.2% with AI). The junior radiologist demonstrated the highest PPV (56.9% without AI, 74.6% with AI) and NPV (90.3% without AI, 92.2% with AI). The trainees showed the lowest performance, but AI significantly enhanced their accuracy. AI assistance was particularly beneficial in interpreting mammograms of women with dense breasts. AI assistance significantly enhances the diagnostic accuracy and consistency of radiologists in mammogram interpretation, with notable benefits for less experienced readers. These findings support the integration of AI into clinical practice, particularly in resource-limited settings where access to specialized breast radiologists is constrained.
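The reader-study metrics above all derive from 2x2 confusion tables, and inter-reader agreement from Cohen's kappa. A stdlib sketch (the counts and reader calls are invented, not the study's data):

```python
def reader_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def cohen_kappa(a, b):
    """Cohen's kappa for two readers' binary calls (1 = recall, 0 = normal):
    observed agreement corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p1, p2 = sum(a) / n, sum(b) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Hypothetical confusion counts and reader calls:
print(reader_metrics(tp=80, fp=30, fn=12, tn=90))
print(cohen_kappa([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]))
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the study sample, which is worth keeping in mind when comparing readers across cohorts.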

Deep learning-based fine-grained assessment of aneurysm wall characteristics using 4D-CT angiography.

Kumrai T, Maekawa T, Chen Y, Sugiyama Y, Takagaki M, Yamashiro S, Takizawa K, Ichinose T, Ishida F, Kishima H

PubMed · Jan 1 2025
This study proposes a novel deep learning-based approach for assessing aneurysm wall characteristics, including thin-walled (TW) and hyperplastic-remodeling (HR) regions. We analyzed fifty-two unruptured cerebral aneurysms using 4D computed tomography angiography (4D-CTA) and intraoperative recordings. The TW and HR regions were identified in intraoperative images. The 3D trajectories of observation points on the aneurysm walls were processed to compute time series of 3D speed, acceleration, and smoothness of motion, aiming to evaluate the aneurysm wall characteristics. To facilitate point-level risk evaluation using the time-series data, we developed a convolutional neural network (CNN)-long short-term memory (LSTM)-based regression model enriched with attention layers. To accommodate patient heterogeneity, a patient-independent feature extraction mechanism was introduced. Furthermore, unlabeled data were incorporated to enhance the data-intensive deep model. The proposed method achieved an average diagnostic accuracy of 92%, significantly outperforming a simpler model lacking attention. These results underscore the significance of patient-independent feature extraction and the use of unlabeled data. This study demonstrates the efficacy of a fine-grained deep learning approach in predicting aneurysm wall characteristics using 4D-CTA. Notably, incorporating an attention-based network structure proved particularly effective, contributing to enhanced performance.
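The kinematic time series described above (3D speed and acceleration of wall observation points) follow from positions by finite differences. A stdlib sketch with an invented trajectory and frame interval (the units and sampling rate are assumptions for illustration):

```python
import math

def speeds(points, dt):
    """Speed time series from 3D positions sampled every dt seconds."""
    return [math.dist(p, q) / dt for p, q in zip(points, points[1:])]

def accelerations(speed_series, dt):
    """Acceleration as the finite difference of consecutive speeds."""
    return [(s2 - s1) / dt for s1, s2 in zip(speed_series, speed_series[1:])]

# Hypothetical observation-point trajectory (mm) over three frames, dt = 0.1 s:
traj = [(0.0, 0.0, 0.0), (0.3, 0.4, 0.0), (0.9, 1.2, 0.0)]
v = speeds(traj, 0.1)          # ≈ [5.0, 10.0] mm/s
a = accelerations(v, 0.1)      # ≈ [50.0] mm/s^2
print(v, a)
```

Sequences like `v` and `a`, one per observation point, are the kind of per-point time series a CNN-LSTM regression model would consume.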

Intelligent and precise auxiliary diagnosis of breast tumors using deep learning and radiomics.

Wang T, Zang B, Kong C, Li Y, Yang X, Yu Y

PubMed · Jan 1 2025
Breast cancer is the most common malignant tumor among women worldwide, and early diagnosis is crucial for reducing mortality rates. Traditional diagnostic methods have significant limitations in accuracy and consistency. Imaging is a common technique for diagnosing and predicting breast cancer, but human error remains a concern, and artificial intelligence (AI) is increasingly employed to help physicians reduce diagnostic errors. We developed an intelligent diagnostic model combining deep learning and radiomics to enhance breast tumor diagnosis. The model integrates MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, improving feature processing and efficiency while reducing the parameter count. Using the Al-Dhabyani and TCIA breast ultrasound datasets, we validated the model internally and externally and compared it with VGG16, ResNet, AlexNet, and MobileNet. Results: The internal validation set achieved an accuracy of 83.84% with an AUC of 0.92, outperforming the other models. The external validation set showed an accuracy of 69.44% with an AUC of 0.75, demonstrating high robustness and generalizability. Conclusions: The proposed deep learning and radiomics model delivers accurate and generalizable breast tumor diagnosis across internal and external validation.
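The parameter savings that depthwise separable convolutions provide over standard convolutions can be checked arithmetically. A sketch (the kernel and channel sizes below are illustrative choices, not the paper's configuration):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k per-channel filter plus a 1x1 pointwise projection."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 3*3*64*128 = 73728
sep = depthwise_separable_params(k, c_in, c_out)  # 3*3*64 + 64*128 = 8768
print(std, sep, round(std / sep, 1))
```

For these sizes the separable form needs roughly an eighth of the weights, which is the efficiency lever behind MobileNet-style backbones like the one described above.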