
Automatic head and neck tumor segmentation through deep learning and Bayesian optimization on three-dimensional medical images.

Douglas Z, Rahman A, Duggar WN, Wang H

PubMed · May 15, 2025
Medical imaging constitutes critical information in the diagnostic and prognostic evaluation of patients, as it serves to uncover a broad spectrum of pathologies and abnormalities. Clinical practitioners who carry out medical image screening rely primarily on their knowledge and experience for disease diagnosis. Convolutional Neural Networks (CNNs) hold the potential to serve as a formidable decision-support tool in medical image analysis due to their high capacity to extract hierarchical features and perform direct classification and segmentation from image data. However, CNNs contain a myriad of hyperparameters, and optimizing these hyperparameters poses a major obstacle to their effective implementation. In this work, a two-phase Bayesian Optimization-derived Scheduling (BOS) approach is proposed for hyperparameter optimization in head and neck tumor segmentation tasks. This two-phase BOS approach is designed to combine rapid convergence in the first training phase with slower (but overfitting-resistant) improvement in the second training phase. Furthermore, we found that batch size and learning rate have a significant impact on the training process, but optimizing them separately can lead to sub-optimal hyperparameter combinations. Therefore, batch size and learning rate were coupled as the batch size to learning rate (B2L) ratio and optimized simultaneously. The optimized hyperparameters were tested on a three-dimensional V-Net model with computed tomography (CT) and positron emission tomography (PET) scans to segment and classify cancerous and noncancerous tissues. The results of 10-fold cross-validation indicate that the optimal B2L ratio for each phase of the training method can improve overall medical image segmentation performance.
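The coupled B2L search can be pictured with a short sketch. The following is a minimal illustration, not the authors' code: the base batch size, search bounds, and the synthetic objective standing in for a V-Net training run are all assumptions.

```python
# Illustrative sketch: optimizing a coupled batch-size-to-learning-rate (B2L)
# ratio with Gaussian-process Bayesian optimization (scikit-optimize).
from skopt import gp_minimize
from skopt.space import Real

BASE_BATCH_SIZE = 8  # assumed fixed here; the paper couples batch size and LR via B2L

def train_and_validate(batch_size, learning_rate):
    # Placeholder for one training phase returning a validation loss.
    # A toy quadratic with its optimum near lr = 1e-3 keeps the sketch runnable.
    return (learning_rate - 1e-3) ** 2

def objective(params):
    log_b2l = params[0]                    # search log10(B2L) for scale invariance
    b2l = 10.0 ** log_b2l
    learning_rate = BASE_BATCH_SIZE / b2l  # LR follows from the coupled ratio
    return train_and_validate(BASE_BATCH_SIZE, learning_rate)

# The paper's two phases would each run their own search with different budgets;
# a single phase is shown here.
result = gp_minimize(objective, [Real(2.0, 6.0)], n_calls=15, random_state=0)
print("best log10(B2L):", result.x[0], "val loss:", result.fun)
```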

Machine Learning-Based Multimodal Radiomics and Transcriptomics Models for Predicting Radiotherapy Sensitivity and Prognosis in Esophageal Cancer.

Ye C, Zhang H, Chi Z, Xu Z, Cai Y, Xu Y, Tong X

PubMed · May 15, 2025
Radiotherapy plays a critical role in treating esophageal cancer, but individual responses vary significantly, impacting patient outcomes. This study integrates machine learning-driven multimodal radiomics and transcriptomics to develop predictive models for radiotherapy sensitivity and prognosis in esophageal cancer. We applied the SEResNet101 deep learning model to imaging and transcriptomic data from the UCSC Xena and TCGA databases, identifying prognosis-associated genes such as STUB1, PEX12, and HEXIM2. Using Lasso regression and Cox analysis, we constructed a prognostic risk model that accurately stratifies patients based on survival probability. Notably, STUB1, an E3 ubiquitin ligase, enhances radiotherapy sensitivity by promoting the ubiquitination and degradation of SRC, a key oncogenic protein. In vitro and in vivo experiments confirmed that STUB1 overexpression or SRC silencing significantly improves radiotherapy response in esophageal cancer models. These findings highlight the predictive power of multimodal data integration for individualized radiotherapy planning and underscore STUB1 as a promising therapeutic target for enhancing radiotherapy efficacy in esophageal cancer.
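As a rough illustration of the Lasso-penalized Cox workflow described here, the sketch below fits a pure-Lasso Cox model with lifelines and stratifies patients by the median risk score; the synthetic expression values, follow-up times, and penalty strength are invented for the example.

```python
# Hedged sketch of a Lasso-penalized Cox prognostic model (lifelines);
# all data below are synthetic stand-ins for the transcriptomic features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "STUB1": rng.normal(size=n),
    "PEX12": rng.normal(size=n),
    "HEXIM2": rng.normal(size=n),
    "time": rng.exponential(24, size=n),  # synthetic follow-up (months)
    "event": rng.integers(0, 2, size=n),  # 1 = event observed
})

# l1_ratio=1.0 makes the penalty pure Lasso, shrinking weak genes toward zero.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")

# The linear predictor acts as a prognostic risk score; a median split
# stratifies patients into high- and low-risk groups.
risk = cph.predict_partial_hazard(df)
df["risk_group"] = np.where(risk > risk.median(), "high", "low")
print(df["risk_group"].value_counts())
```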

Machine learning prediction prior to onset of mild cognitive impairment using T1-weighted magnetic resonance imaging radiomic of the hippocampus.

Zhan S, Wang J, Dong J, Ji X, Huang L, Zhang Q, Xu D, Peng L, Wang X, Zhang Y, Liang S, Chen L

PubMed · May 15, 2025
Early identification of individuals who progress from normal cognition (NC) to mild cognitive impairment (MCI) may help prevent cognitive decline. We aimed to build predictive models using radiomic features of the bilateral hippocampus in combination with scores from neuropsychological assessments. We utilized the Alzheimer's Disease Neuroimaging Initiative (ADNI) database to study 175 NC individuals, identifying 50 who progressed to MCI within seven years. Employing the Least Absolute Shrinkage and Selection Operator (LASSO) on T1-weighted images, we extracted and refined hippocampal features. Classification models, including Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM), were built based on significant neuropsychological scores. Model validation was conducted using 5-fold cross-validation, and hyperparameters were optimized with Scikit-learn, using an 80:20 data split for training and testing. We found that the LightGBM model achieved an area under the receiver operating characteristic (ROC) curve (AUC) value of 0.89 and an accuracy of 0.79 in the training set, and an AUC value of 0.80 and an accuracy of 0.74 in the test set. The study indicates that T1-weighted magnetic resonance imaging radiomics of the hippocampus can be used to predict progression to MCI at the normal cognitive stage, which may provide new insight for clinical research.
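The reported pipeline (80:20 split, 5-fold cross-validation, LightGBM) can be sketched in a few lines; the feature matrix, labels, and hyperparameter grid below are placeholders, not the study's actual data or settings.

```python
# Minimal sketch of the described evaluation protocol with LightGBM.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score, accuracy_score
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(175, 30))    # placeholder: radiomic + neuropsychological features
y = rng.integers(0, 2, size=175)  # placeholder: progressed-to-MCI labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)  # 80:20 split

# 5-fold CV over an assumed small hyperparameter grid.
grid = GridSearchCV(
    LGBMClassifier(random_state=0),
    param_grid={"num_leaves": [15, 31], "learning_rate": [0.05, 0.1]},
    cv=5, scoring="roc_auc",
)
grid.fit(X_tr, y_tr)

proba = grid.predict_proba(X_te)[:, 1]
print("test AUC:", roc_auc_score(y_te, proba))
print("test accuracy:", accuracy_score(y_te, grid.predict(X_te)))
```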

Privacy-Protecting Image Classification Within the Web Browser Using Deep Learning Models from Zenodo.

Auer F, Mayer S, Kramer F

PubMed · May 15, 2025
Integrating deep learning into clinical workflows for medical image analysis holds promise for improving diagnostic accuracy. However, strict data privacy regulations and the sensitivity of clinical IT infrastructure limit the deployment of cloud-based solutions. This paper introduces WebIPred, a web-based application that loads deep learning models directly within the client's web browser, protecting patient privacy while maintaining compatibility with clinical IT environments. WebIPred supports the application of pre-trained models published on Zenodo and other repositories, allowing clinicians to apply these models to real patient data without the need for extensive technical knowledge. This paper outlines WebIPred's model integration system, prediction workflow, and privacy features. Our results show that WebIPred offers a privacy-protecting and flexible application for image classification that relies solely on client-side processing. WebIPred combines its strong commitment to data privacy and security with a user-friendly interface that makes it easy for clinicians to integrate AI into their workflows.

Uncertainty Co-estimator for Improving Semi-Supervised Medical Image Segmentation.

Zeng X, Xiong S, Xu J, Du G, Rong Y

PubMed · May 15, 2025
Recently, combining consistency regularization with uncertainty estimation has shown promising performance on semi-supervised medical image segmentation tasks. However, most existing methods estimate uncertainty solely from the outputs of a single neural network, which results in imprecise uncertainty estimates and ultimately degrades segmentation performance. In this paper, we propose a novel Uncertainty Co-estimator (UnCo) framework to address this problem. Inspired by the co-training technique, UnCo establishes two different mean-teacher modules (i.e., two pairs of teacher and student models) and estimates three types of uncertainty from the multi-source predictions generated by these models. When these uncertainties are combined, their differences help filter out noise in each estimate, allowing the final fused uncertainty maps to be more accurate. The resulting maps are then used to enhance a cross-consistency regularization imposed between the two modules. In addition, UnCo applies an internal consistency regularization within each module, so that the student models can aggregate diverse feature information from both modules, thus promoting semi-supervised segmentation performance. Finally, an adversarial constraint is introduced to maintain model diversity. Experimental results on four medical image datasets indicate that UnCo achieves new state-of-the-art performance on both 2D and 3D semi-supervised segmentation tasks. The source code will be available at https://github.com/z1010x/UnCo.
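To make the fusion idea concrete, here is a minimal PyTorch sketch of combining uncertainty maps from two teacher predictions; entropy and cross-model disagreement are common choices of uncertainty, and the exponential weighting is one standard option, not necessarily UnCo's exact formulation.

```python
# Hedged sketch: fusing uncertainties from two mean-teacher modules to weight
# a cross-consistency loss on unlabeled data.
import torch
import torch.nn.functional as F

def entropy_map(logits):
    # Per-pixel predictive entropy over the class dimension.
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1)

# Two teacher predictions for the same unlabeled batch (B, C, H, W).
logits_t1 = torch.randn(2, 2, 64, 64)
logits_t2 = torch.randn(2, 2, 64, 64)

u1, u2 = entropy_map(logits_t1), entropy_map(logits_t2)
# Cross-model disagreement as a third uncertainty source.
u_dis = (F.softmax(logits_t1, 1) - F.softmax(logits_t2, 1)).abs().sum(dim=1)

# Fused map: a pixel is unreliable if any estimator flags it.
u_fused = torch.stack([u1, u2, u_dis]).max(dim=0).values
weight = torch.exp(-u_fused)  # reliability weight for the consistency loss

# Uncertainty-weighted cross-consistency between the two student predictions.
student1, student2 = torch.randn_like(logits_t1), torch.randn_like(logits_t2)
loss = (weight * (F.softmax(student1, 1) - F.softmax(student2, 1)).pow(2).sum(1)).mean()
print(loss.item())
```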

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound.

Yu M, Peterson MR, Burgoine K, Harbaugh T, Olupot-Olupot P, Gladstone M, Hagmann C, Cowan FM, Weeks A, Morton SU, Mulondo R, Mbabazi-Kabachelor E, Schiff SJ, Monga V

PubMed · May 15, 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, called the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing prevailing state-of-the-art infection detection techniques.
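The cross-attention step can be illustrated with a small PyTorch module; the token shapes, embedding width, and residual connection are assumptions for the sketch rather than CLIF-Net's actual architecture.

```python
# Illustrative cross-view attention: features from one view query the other,
# so each branch absorbs context from the geometrically intersecting region.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_coronal, feat_sagittal):
        # Queries from the coronal branch attend to keys/values from the
        # sagittal branch; a residual keeps the original features intact.
        enhanced, _ = self.attn(feat_coronal, feat_sagittal, feat_sagittal)
        return enhanced + feat_coronal

# Tokens pooled from the intersecting region of each view: (batch, tokens, dim).
cor = torch.randn(2, 49, 256)
sag = torch.randn(2, 49, 256)
print(CrossViewAttention()(cor, sag).shape)  # torch.Size([2, 49, 256])
```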

Artificial intelligence algorithm improves radiologists' bone age assessment accuracy.

Chang TY, Chou TY, Jen IA, Yuh YS

PubMed · May 15, 2025
Artificial intelligence (AI) algorithms can provide rapid and precise radiographic bone age (BA) assessment. This study assessed the effects of an AI algorithm on the BA assessment performance of radiologists and evaluated how automation bias could affect radiologists. In this prospective randomized crossover study, six radiologists with varying levels of experience (senior, mid-level, and junior) assessed cases from a test set of 200 standard BA radiographs. The test set was equally divided into two subsets: datasets A and B. Each radiologist assessed BA independently without AI assistance (A- B-) and with AI assistance (A+ B+). We used the mean of assessments made by two experts as the ground truth for accuracy assessment; subsequently, we calculated the mean absolute difference (MAD) between the radiologists' BA predictions and ground-truth BA and evaluated the proportion of estimates for which the MAD exceeded one year. Additionally, we compared the radiologists' performance under conditions of early AI assistance with their performance under conditions of delayed AI assistance; the radiologists were allowed to reject AI interpretations. The overall accuracy of senior, mid-level, and junior radiologists improved significantly with AI assistance compared with no AI assistance (MAD: 0.74 vs. 0.46 years, p < 0.001; proportion of assessments for which MAD exceeded 1 year: 24.0% vs. 8.4%, p < 0.001). The proportion of BA predictions improved by AI assistance (16.8%) was significantly higher than the proportion made less accurate by AI assistance (2.3%; p < 0.001). No consistent timing effect was observed between conditions of early and delayed AI assistance. Most disagreements between radiologists and AI occurred over images for patients aged ≤8 years. Senior radiologists had more disagreements than other radiologists. The AI algorithm improved the BA assessment accuracy of radiologists with varying experience levels. Automation bias was more likely to affect less experienced radiologists.
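The two accuracy metrics are straightforward to compute; the readings below are invented for illustration.

```python
# Sketch of the metrics used above: mean absolute difference (MAD) against
# expert ground truth, and the share of estimates off by more than one year.
import numpy as np

def bone_age_metrics(predicted_years, ground_truth_years):
    err = np.abs(np.asarray(predicted_years) - np.asarray(ground_truth_years))
    return err.mean(), (err > 1.0).mean()

pred = [9.5, 7.0, 12.3, 5.8]   # hypothetical radiologist estimates (years)
truth = [9.0, 8.4, 12.0, 6.0]  # hypothetical expert-mean ground truth (years)
mad, frac_over_1y = bone_age_metrics(pred, truth)
print(f"MAD = {mad:.2f} years; {frac_over_1y:.0%} of estimates off by > 1 year")
```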

MIMI-ONET: Multi-Modal image augmentation via Butterfly Optimized neural network for Huntington Disease Detection.

Amudaria S, Jawhar SJ

PubMed · May 15, 2025
Huntington's disease (HD) is a chronic neurodegenerative disorder that causes cognitive decline, motor impairment, and psychiatric symptoms. However, existing HD detection methods struggle with limited annotated datasets, which restricts their generalization performance. This research work proposes a novel MIMI-ONET for primary detection of HD using augmented multi-modal brain MRI images. The two-dimensional stationary wavelet transform (2DSWT) decomposes the MRI images into different frequency wavelet sub-bands. These sub-bands are enhanced with Contrast Stretching Adaptive Histogram Equalization (CSAHE) and Multi-scale Adaptive Retinex (MSAR), reducing irrelevant distortions. The proposed MIMI-ONET introduces a Hepta Generative Adversarial Network (Hepta-GAN) to generate noise-free HD images at seven azimuth angles (45°, 90°, 135°, 180°, 225°, 270°, 315°). Hepta-GAN incorporates an Affine Estimation Module (AEM) that extracts multi-scale features using dilated convolutional layers for efficient HD image generation. Moreover, Hepta-GAN is tuned with the Butterfly Optimization (BO) algorithm, which enhances augmentation performance by balancing the parameters. Finally, the generated images are fed to a deep neural network (DNN) for classification of normal control (NC), Adult-Onset HD (AHD), and Juvenile HD (JHD) cases. The proposed MIMI-ONET is evaluated with precision, specificity, F1 score, recall, accuracy, PSNR, and MSE. In experiments on the gathered Image-HD dataset, the proposed MIMI-ONET attains an accuracy of 98.85% and reaches a PSNR value of 48.05. The proposed MIMI-ONET improves overall accuracy by 9.96%, 1.85%, 5.91%, 13.80%, and 13.5% over 3DCNN, KNN, FCN, RNN, and ML frameworks, respectively.
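The first preprocessing step, a 2D stationary wavelet decomposition, can be sketched with PyWavelets; the wavelet, level, and placeholder slice are assumptions, and the CSAHE/MSAR enhancement would follow on the resulting sub-bands.

```python
# Hedged sketch of 2D stationary wavelet decomposition (undecimated, so every
# sub-band stays at full resolution), as a stand-in for the paper's 2DSWT step.
import numpy as np
import pywt

slice_2d = np.random.rand(64, 64)  # placeholder MRI slice; dims divisible by 2**level

coeffs = pywt.swt2(slice_2d, wavelet="db2", level=2)
for lvl, (approx, (horiz, vert, diag)) in enumerate(coeffs, start=1):
    print(f"level {lvl}: approximation {approx.shape}, detail bands {horiz.shape}")
```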

Performance of Artificial Intelligence in Diagnosing Lumbar Spinal Stenosis: A Systematic Review and Meta-Analysis.

Yang X, Zhang Y, Li Y, Wu Z

PubMed · May 15, 2025
The present study followed the reporting guidelines for systematic reviews and meta-analyses. We conducted this study to review the diagnostic value of artificial intelligence (AI) for various types of lumbar spinal stenosis (LSS) and the level of stenosis, offering evidence-based support for the development of smart diagnostic tools. AI is currently being utilized for image processing in clinical practice, and some studies have explored AI techniques for identifying the severity of LSS in recent years; nevertheless, there remains a shortage of structured data proving its effectiveness. Four databases (PubMed, Cochrane, Embase, and Web of Science) were searched until March 2024, including original studies that utilized deep learning (DL) and machine learning (ML) models to diagnose LSS. The risk of bias of included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The accuracy in the validation set was extracted for meta-analysis, which was completed in R 4.4.0. A total of 48 articles were included, with an overall accuracy of 0.885 (95% CI: 0.860-0.907) for dichotomous tasks. Among them, the accuracy was 0.892 (95% CI: 0.867-0.915) for DL and 0.833 (95% CI: 0.760-0.895) for ML. The overall accuracy for LSS was 0.895 (95% CI: 0.858-0.927), with an accuracy of 0.912 (95% CI: 0.873-0.944) for DL and 0.843 (95% CI: 0.766-0.907) for ML. The overall accuracy for central canal stenosis was 0.875 (95% CI: 0.821-0.920), with an accuracy of 0.881 (95% CI: 0.829-0.925) for DL and 0.733 (95% CI: 0.541-0.877) for ML. The overall accuracy for neural foramen stenosis was 0.893 (95% CI: 0.851-0.928). In polytomous tasks, the accuracy was 0.936 (95% CI: 0.895-0.967) for no LSS, 0.503 (95% CI: 0.391-0.614) for mild LSS, 0.512 (95% CI: 0.336-0.688) for moderate LSS, and 0.860 (95% CI: 0.733-0.954) for severe LSS. AI is highly valuable for diagnosing LSS. However, further external validation is necessary to enhance the analysis of different stenosis categories and improve the diagnostic accuracy for mild to moderate stenosis levels.
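For readers unfamiliar with how such pooled accuracies arise, the sketch below applies DerSimonian-Laird random-effects pooling on the logit scale; the per-study numbers are invented, and the review itself performed its analysis in R 4.4.0, so this Python version is purely illustrative.

```python
# Illustrative random-effects pooling of per-study accuracies (logit scale).
import numpy as np

acc = np.array([0.90, 0.85, 0.92, 0.80])  # hypothetical per-study accuracies
n = np.array([120, 80, 150, 60])          # hypothetical validation-set sizes

y = np.log(acc / (1 - acc))               # logit transform
v = 1.0 / (n * acc * (1 - acc))           # approximate within-study variance

w = 1.0 / v                               # fixed-effect weights
y_fe = np.sum(w * y) / w.sum()
q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1.0 / (v + tau2)                   # random-effects weights
pooled_logit = np.sum(w_re * y) / w_re.sum()
pooled = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled accuracy: {pooled:.3f}")
```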
