Page 77 of 94940 results

Evaluation of tumour pseudocapsule using computed tomography-based radiomics in pancreatic neuroendocrine tumours to predict prognosis and guide surgical strategy: a cohort study.

Wang Y, Gu W, Huang D, Zhang W, Chen Y, Xu J, Li Z, Zhou C, Chen J, Xu X, Tang W, Yu X, Ji S

pubmed logopapers · May 16 2025
To date, indications for surgical treatment of small pancreatic neuroendocrine tumours (PanNETs) remain controversial. This cohort study aimed to identify pseudocapsule status preoperatively in order to assess the rationale for enucleation and the survival prognosis of PanNETs, particularly for small tumours. Clinicopathological data were collected from patients with PanNETs who underwent initial pancreatectomy at our hospital (n = 578) between February 2012 and September 2023. Kaplan-Meier curves were constructed to visualise prognostic differences. Five distinct tissue samples were obtained for single-cell RNA sequencing (scRNA-seq) to evaluate variations in the tumour microenvironment. Radiological features were extracted from preoperative arterial-phase contrast-enhanced computed tomography. The performance of the pseudocapsule radiomics model was assessed using the area under the curve (AUC). In total, 475 cases (mean [SD] age, 53.01 [12.20] years; female-to-male ratio, 1.24:1) were eligible for this study. The mean pathological tumour diameter was 2.99 cm (median: 2.50 cm; interquartile range [IQR]: 1.50-4.00 cm). Cases were stratified into complete (n = 223, 46.95%) and incomplete (n = 252, 53.05%) pseudocapsule groups, with a statistically significant difference in markers of aggressiveness between the two groups (P < 0.001). scRNA-seq analysis showed that the incomplete group presented a markedly immunosuppressive microenvironment. Regarding recurrence-free survival, the 3-year and 5-year rates were 94.8% and 92.5%, respectively, for the complete pseudocapsule group, compared with 76.7% and 70.4% for the incomplete pseudocapsule group. The radiomics model showed significant discrimination of pseudocapsule status, particularly in small tumours (AUC, 0.744; 95% CI, 0.652-0.837).
By combining computed tomography-based radiomics and machine learning to identify pseudocapsule status preoperatively, patients with an intact pseudocapsule can be identified as those most likely to benefit from enucleation.
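Several abstracts on this page report AUCs with 95% confidence intervals, as in the radiomics model above. One common way to obtain such an interval is bootstrap resampling of cases. A minimal illustrative sketch in pure Python (the scores, sample sizes, and stratified resampling scheme are assumptions, not details from the study):

```python
import random

def auc(scores_pos, scores_neg):
    """Rank-based AUC: probability a positive case scores above a negative one."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

def bootstrap_ci(scores_pos, scores_neg, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for the AUC, resampling each class separately."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        bp = [rng.choice(scores_pos) for _ in scores_pos]
        bn = [rng.choice(scores_neg) for _ in scores_neg]
        stats.append(auc(bp, bn))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]
```

Resampling positives and negatives separately (stratified bootstrap) keeps both classes represented in every replicate, which matters for small cohorts.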

EScarcityS: A framework for enhancing medical image classification performance in scarcity of trainable samples scenarios.

Wang T, Dai Q, Xiong W

pubmed logopapers · May 16 2025
In healthcare, the acquisition and annotation of medical images present significant challenges, resulting in a scarcity of trainable samples. This data limitation hinders the performance of deep learning models, creating bottlenecks in clinical applications. To address this issue, we construct a framework (EScarcityS) aimed at enhancing the success rate of disease diagnosis in scenarios with scarce trainable medical images. First, considering that Transformer-based deep learning networks rely on large amounts of training data, this study takes into account the unique characteristics of pathological regions. By extracting feature representations of all particles in medical images at different granularities, a multi-granularity Transformer network (MGVit) is designed. This network leverages additional prior knowledge to assist the Transformer network during training, thereby reducing the data requirement to some extent. Next, the importance maps of particles at different granularities, generated by MGVit, are fused to construct disease probability maps corresponding to the images. Based on these maps, a disease-probability-map-guided diffusion generation model is designed to generate more realistic and interpretable synthetic data. Subsequently, authentic and synthetic data are mixed and used to retrain MGVit, aiming to enhance the accuracy of medical image classification when trainable medical images are scarce. Finally, we conducted detailed experiments on four real medical image datasets to validate the effectiveness of EScarcityS and its specific modules.

Deep learning progressive distill for predicting clinical response to conversion therapy from preoperative CT images of advanced gastric cancer patients.

Han S, Zhang T, Deng W, Han S, Wu H, Jiang B, Xie W, Chen Y, Deng T, Wen X, Liu N, Fan J

pubmed logopapers · May 16 2025
Identifying patients suitable for conversion therapy through early non-invasive screening is crucial for tailoring treatment in advanced gastric cancer (AGC). This study aimed to develop and validate a deep learning method, utilizing preoperative computed tomography (CT) images, to predict the response to conversion therapy in AGC patients. This retrospective study involved 140 patients. We utilized the Progressive Distill (PD) methodology to construct a deep learning model for predicting clinical response to conversion therapy based on preoperative CT images. Patients in the training set (n = 112) and the test set (n = 28) were sourced from The First Affiliated Hospital of Wenzhou Medical University between September 2017 and November 2023. Our PD models' performance was compared with baseline models and models utilizing Knowledge Distillation (KD), with evaluation metrics including accuracy, sensitivity, specificity, receiver operating characteristic curves, areas under the receiver operating characteristic curve (AUCs), and heat maps. The PD model exhibited the best performance, demonstrating robust discrimination of clinical response to conversion therapy with an AUC of 0.99 and accuracy of 99.11% in the training set, and an AUC of 0.87 and accuracy of 85.71% in the test set. Sensitivity and specificity were 97.44% and 100%, respectively, in the training set, and 85.71% each in the test set, suggesting an absence of discernible bias. The PD deep learning model accurately predicts clinical response to conversion therapy in AGC patients. Further investigation is warranted to assess its clinical utility alongside clinicopathological parameters.
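The PD model above is benchmarked against Knowledge Distillation. The abstract does not specify the PD loss, but the standard KD baseline trains a student to match temperature-softened teacher probabilities via a KL-divergence term. A minimal sketch of that baseline, assuming the usual Hinton-style formulation (the logit values used are illustrative):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

In practice this term is combined with an ordinary cross-entropy loss on the hard labels, weighted by a mixing coefficient.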

Diagnostic challenges of carpal tunnel syndrome in patients with congenital thenar hypoplasia: a comprehensive review.

Naghizadeh H, Salkhori O, Akrami S, Khabiri SS, Arabzadeh A

pubmed logopapers · May 16 2025
Carpal Tunnel Syndrome (CTS) is the most common entrapment neuropathy, frequently presenting with pain, numbness, and muscle weakness due to median nerve compression. However, diagnosing CTS becomes particularly challenging in patients with Congenital Thenar Hypoplasia (CTH), a rare congenital anomaly characterized by underdeveloped thenar muscles. The overlapping symptoms of CTH and CTS, such as thumb weakness, impaired hand function, and thenar muscle atrophy, can obscure the identification of median nerve compression. This review highlights the diagnostic complexities arising from this overlap and evaluates existing clinical, imaging, and electrophysiological assessment methods. While traditional diagnostic tests, including Phalen's and Tinel's signs, exhibit limited sensitivity in CTH patients, advanced imaging modalities such as ultrasonography (US), magnetic resonance imaging (MRI), and diffusion tensor imaging (DTI) provide valuable insights into structural abnormalities. Additionally, emerging technologies such as artificial intelligence (AI) enhance diagnostic precision by automating imaging analysis and identifying subtle nerve alterations. An interdisciplinary approach combining clinical history, functional assessments, and advanced imaging is critical to accurately differentiate CTH-related anomalies from CTS. This comprehensive review underscores the need for tailored diagnostic protocols to improve early detection, personalised management, and outcomes for this unique patient population.

Machine Learning-Based Multimodal Radiomics and Transcriptomics Models for Predicting Radiotherapy Sensitivity and Prognosis in Esophageal Cancer.

Ye C, Zhang H, Chi Z, Xu Z, Cai Y, Xu Y, Tong X

pubmed logopapers · May 15 2025
Radiotherapy plays a critical role in treating esophageal cancer, but individual responses vary significantly, impacting patient outcomes. This study integrates machine learning-driven multimodal radiomics and transcriptomics to develop predictive models for radiotherapy sensitivity and prognosis in esophageal cancer. We applied the SEResNet101 deep learning model to imaging and transcriptomic data from the UCSC Xena and TCGA databases, identifying prognosis-associated genes such as STUB1, PEX12, and HEXIM2. Using Lasso regression and Cox analysis, we constructed a prognostic risk model that accurately stratifies patients based on survival probability. Notably, STUB1, an E3 ubiquitin ligase, enhances radiotherapy sensitivity by promoting the ubiquitination and degradation of SRC, a key oncogenic protein. In vitro and in vivo experiments confirmed that STUB1 overexpression or SRC silencing significantly improves radiotherapy response in esophageal cancer models. These findings highlight the predictive power of multimodal data integration for individualized radiotherapy planning and underscore STUB1 as a promising therapeutic target for enhancing radiotherapy efficacy in esophageal cancer.
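At inference time, a Lasso-Cox prognostic model like the one described above reduces to a linear predictor over the selected genes followed by a threshold split into risk groups. A hedged sketch of that final step (the gene coefficients and expression values below are hypothetical, not the study's fitted model):

```python
def risk_scores(expr, coefs):
    """Linear predictor: sum of each gene's expression times its Cox coefficient.
    expr: list of per-patient {gene: expression} dicts."""
    return [sum(coefs[g] * x[g] for g in coefs) for x in expr]

def stratify(scores):
    """Median split of risk scores into high- and low-risk groups."""
    s = sorted(scores)
    n = len(s)
    median = (s[n // 2] + s[(n - 1) // 2]) / 2
    return ["high" if sc > median else "low" for sc in scores]

# Hypothetical coefficients for two of the reported genes; signs and
# magnitudes are made up for illustration only.
coefs = {"STUB1": -0.8, "PEX12": 0.5}
```

Kaplan-Meier curves for the resulting "high" and "low" groups would then show whether the split carries prognostic information.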

Performance of Artificial Intelligence in Diagnosing Lumbar Spinal Stenosis: A Systematic Review and Meta-Analysis.

Yang X, Zhang Y, Li Y, Wu Z

pubmed logopapers · May 15 2025
The present study followed the reporting guidelines for systematic reviews and meta-analyses. We conducted this study to review the diagnostic value of artificial intelligence (AI) for various types of lumbar spinal stenosis (LSS) and the level of stenosis, offering evidence-based support for the development of smart diagnostic tools. AI is currently being utilized for image processing in clinical practice, and some studies have explored AI techniques for identifying the severity of LSS in recent years; nevertheless, there remains a shortage of structured data proving its effectiveness. Four databases (PubMed, Cochrane, Embase, and Web of Science) were searched up to March 2024, including original studies that utilized deep learning (DL) and machine learning (ML) models to diagnose LSS. The risk of bias of included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The accuracy in the validation set was extracted for meta-analysis, which was completed in R 4.4.0. A total of 48 articles were included, with an overall accuracy of 0.885 (95% CI: 0.860-0.907) for dichotomous tasks. Among them, the accuracy was 0.892 (95% CI: 0.867-0.915) for DL and 0.833 (95% CI: 0.760-0.895) for ML. The overall accuracy for LSS was 0.895 (95% CI: 0.858-0.927), with an accuracy of 0.912 (95% CI: 0.873-0.944) for DL and 0.843 (95% CI: 0.766-0.907) for ML. The overall accuracy for central canal stenosis was 0.875 (95% CI: 0.821-0.920), with an accuracy of 0.881 (95% CI: 0.829-0.925) for DL and 0.733 (95% CI: 0.541-0.877) for ML. The overall accuracy for neural foramen stenosis was 0.893 (95% CI: 0.851-0.928).
In polytomous tasks, the accuracy was 0.936 (95% CI: 0.895-0.967) for no LSS, 0.503 (95% CI: 0.391-0.614) for mild LSS, 0.512 (95% CI: 0.336-0.688) for moderate LSS, and 0.860 (95% CI: 0.733-0.954) for severe LSS. AI is highly valuable for diagnosing LSS. However, further external validation is necessary to enhance the analysis of different stenosis categories and to improve diagnostic accuracy for mild to moderate stenosis.
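Pooled accuracies like those above are typically obtained by inverse-variance weighting of per-study proportions on the logit scale. The review used R, but the core arithmetic can be sketched in Python; this is a fixed-effect sketch with illustrative study counts, whereas the review itself may well have fitted a random-effects model:

```python
import math

def pool_accuracies(studies):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    studies: list of (correct, total) pairs. Returns (estimate, ci_low, ci_high)."""
    num = den = 0.0
    for k, n in studies:
        p = (k + 0.5) / (n + 1.0)            # continuity correction for extreme counts
        logit = math.log(p / (1 - p))
        weight = n * p * (1 - p)             # inverse of the logit's variance
        num += weight * logit
        den += weight
    pooled = num / den
    se = math.sqrt(1.0 / den)
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))
    return inv(pooled), inv(pooled - 1.96 * se), inv(pooled + 1.96 * se)
```

Back-transforming the pooled logit and its Wald interval keeps the final estimate and CI inside (0, 1), which a pooling of raw proportions would not guarantee.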

MIMI-ONET: Multi-Modal image augmentation via Butterfly Optimized neural network for Huntington Disease Detection.

Amudaria S, Jawhar SJ

pubmed logopapers · May 15 2025
Huntington's disease (HD) is a chronic neurodegenerative ailment that causes cognitive decline, motor impairment, and psychiatric symptoms. However, existing HD detection methods struggle with limited annotated datasets, which restrict their generalization performance. This work proposes a novel MIMI-ONET for primary detection of HD using augmented multi-modal brain MRI images. The two-dimensional stationary wavelet transform (2DSWT) decomposes the MRI images into different frequency wavelet sub-bands. These sub-bands are enhanced with Contrast Stretching Adaptive Histogram Equalization (CSAHE) and Multi-scale Adaptive Retinex (MSAR), reducing irrelevant distortions. The proposed MIMI-ONET introduces a Hepta Generative Adversarial Network (Hepta-GAN) to generate noise-free HD images at seven azimuth angles (45°, 90°, 135°, 180°, 225°, 270°, 315°). Hepta-GAN incorporates an Affine Estimation Module (AEM) to extract multi-scale features using dilated convolutional layers for efficient HD image generation. Moreover, Hepta-GAN is tuned with the Butterfly Optimization (BO) algorithm, which enhances augmentation performance by balancing the parameters. Finally, the generated images are given to a deep neural network (DNN) for the classification of normal control (NC), Adult-Onset HD (AHD), and Juvenile HD (JHD) cases. The proposed MIMI-ONET is evaluated with precision, specificity, F1 score, recall, accuracy, PSNR, and MSE. In experiments on the gathered Image-HD dataset, MIMI-ONET attains an accuracy of 98.85% and a PSNR of 48.05. MIMI-ONET improves overall accuracy by 9.96%, 1.85%, 5.91%, 13.80%, and 13.5% over 3DCNN, KNN, FCN, RNN, and ML frameworks, respectively.

Artificial intelligence algorithm improves radiologists' bone age assessment accuracy.

Chang TY, Chou TY, Jen IA, Yuh YS

pubmed logopapers · May 15 2025
Artificial intelligence (AI) algorithms can provide rapid and precise radiographic bone age (BA) assessment. This study assessed the effects of an AI algorithm on the BA assessment performance of radiologists and evaluated how automation bias could affect them. In this prospective randomized crossover study, six radiologists with varying levels of experience (senior, mid-level, and junior) assessed cases from a test set of 200 standard BA radiographs. The test set was equally divided into two subsets, datasets A and B. Each radiologist assessed BA independently without AI assistance (A- B-) and with AI assistance (A+ B+). We used the mean of assessments made by two experts as the ground truth for accuracy assessment; we then calculated the mean absolute difference (MAD) between the radiologists' BA predictions and the ground-truth BA and evaluated the proportion of estimates for which the MAD exceeded one year. Additionally, we compared the radiologists' performance under early versus delayed AI assistance; the radiologists were allowed to reject AI interpretations. The overall accuracy of senior, mid-level, and junior radiologists was significantly better with AI assistance than without it (MAD: 0.74 vs. 0.46 years, p < 0.001; proportion of assessments for which MAD exceeded 1 year: 24.0% vs. 8.4%, p < 0.001). The proportion of BA predictions improved by AI assistance (16.8%) was significantly higher than the proportion made less accurate by it (2.3%; p < 0.001). No consistent timing effect was observed between early and delayed AI assistance. Most disagreements between radiologists and AI occurred over images from patients aged ≤8 years, and senior radiologists had more disagreements than other radiologists. The AI algorithm improved the BA assessment accuracy of radiologists across experience levels.
Automation bias was more likely to affect less experienced radiologists.
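The bone-age study's headline metrics, the mean absolute difference against ground truth and the share of estimates off by more than one year, are straightforward to compute. A short sketch with made-up predictions (the values below are illustrative, not study data):

```python
def mad_and_outliers(pred_years, truth_years, threshold=1.0):
    """Mean absolute difference (years) and fraction of estimates whose
    absolute error exceeds `threshold` years."""
    errs = [abs(p - t) for p, t in zip(pred_years, truth_years)]
    mad = sum(errs) / len(errs)
    frac = sum(e > threshold for e in errs) / len(errs)
    return mad, frac
```

Reporting both numbers is useful because a low MAD can coexist with a non-trivial tail of large errors, which is what the >1-year proportion captures.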

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound.

Yu M, Peterson MR, Burgoine K, Harbaugh T, Olupot-Olupot P, Gladstone M, Hagmann C, Cowan FM, Weeks A, Morton SU, Mulondo R, Mbabazi-Kabachelor E, Schiff SJ, Monga V

pubmed logopapers · May 15 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region, then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing prevailing state-of-the-art infection detection techniques.

Privacy-Protecting Image Classification Within the Web Browser Using Deep Learning Models from Zenodo.

Auer F, Mayer S, Kramer F

pubmed logopapers · May 15 2025
Integrating deep learning into clinical workflows for medical image analysis holds promise for improving diagnostic accuracy. However, strict data privacy regulations and the sensitivity of clinical IT infrastructure limit the deployment of cloud-based solutions. This paper introduces WebIPred, a web-based application that loads deep learning models directly within the client's web browser, protecting patient privacy while maintaining compatibility with clinical IT environments. WebIPred supports the application of pre-trained models published on Zenodo and other repositories, allowing clinicians to apply these models to real patient data without the need for extensive technical knowledge. This paper outlines WebIPred's model integration system, prediction workflow, and privacy features. Our results show that WebIPred offers a privacy-protecting and flexible application for image classification, only relying on client-side processing. WebIPred combines its strong commitment to data privacy and security with a user-friendly interface that makes it easy for clinicians to integrate AI into their workflows.