Page 116 of 129 · 1284 results

A fully automatic radiomics pipeline for postoperative facial nerve function prediction of vestibular schwannoma.

Song G, Li K, Wang Z, Liu W, Xue Q, Liang J, Zhou Y, Geng H, Liu D

PubMed · May 14 2025
Vestibular schwannoma (VS) is the most prevalent intracranial schwannoma. Surgery is one of the options for the treatment of VS, with the preservation of facial nerve (FN) function being the primary objective. Therefore, postoperative FN function prediction is essential. However, achieving automation for such a method remains a challenge. In this study, we proposed a fully automatic deep learning approach based on multi-sequence magnetic resonance imaging (MRI) to predict FN function after surgery in VS patients. We first developed a segmentation network, 2.5D Trans-UNet, which combines Transformer and U-Net to optimize contour segmentation for radiomic feature extraction. Next, we built a deep learning network that integrates a 1D Convolutional Neural Network (1DCNN) with a Gated Recurrent Unit (GRU) to predict postoperative FN function using the extracted features. We trained and tested the 2.5D Trans-UNet segmentation network on public and private datasets, achieving accuracies of 89.51% and 90.66%, respectively, confirming the model's strong performance. Feature extraction and selection were then performed on the private dataset's segmentation results from 2.5D Trans-UNet, and the selected features were used to train the 1DCNN-GRU network for classification. The results showed that our proposed fully automatic radiomics pipeline outperformed the traditional radiomics pipeline on the test set, achieving an accuracy of 88.64% and demonstrating its effectiveness in predicting postoperative FN function in VS patients. Our proposed automatic method has the potential to become a valuable decision-making tool in neurosurgery, assisting neurosurgeons in making more informed decisions regarding surgical interventions and improving the treatment of VS patients.
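The 1DCNN-GRU classifier described above can be illustrated with a minimal sketch: a 1D convolution slides over the radiomic feature vector, a GRU accumulates the resulting sequence into a hidden state, and a sigmoid head produces a probability. All shapes, weights, and the scalar hidden state below are toy assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(x, kernel, bias):
    """Valid 1D convolution over a feature sequence (single channel)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) + bias
                     for i in range(len(x) - k + 1)])

def gru_step(x_t, h, W, U, b):
    """One GRU step; W, U, b each hold update/reset/candidate parameters."""
    z = sigmoid(W["z"] * x_t + U["z"] * h + b["z"])   # update gate
    r = sigmoid(W["r"] * x_t + U["r"] * h + b["r"])   # reset gate
    h_tilde = np.tanh(W["h"] * x_t + U["h"] * (r * h) + b["h"])
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
features = rng.normal(size=32)            # stand-in radiomic feature vector
conv_out = conv1d(features, kernel=np.array([0.5, -0.3, 0.2]), bias=0.1)

# toy scalar GRU parameters
W = {k: 0.5 for k in "zrh"}
U = {k: 0.3 for k in "zrh"}
b = {k: 0.0 for k in "zrh"}
h = 0.0
for x_t in conv_out:                      # run the GRU over the conv features
    h = gru_step(x_t, h, W, U, b)

prob = sigmoid(2.0 * h)                   # toy classification head
```

A real implementation would use vector-valued hidden states and learned weights; the point here is only the data flow conv → recurrence → probability.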

CT-based AI framework leveraging multi-scale features for predicting pathological grade and Ki67 index in clear cell renal cell carcinoma: a multicenter study.

Yang H, Zhang Y, Li F, Liu W, Zeng H, Yuan H, Ye Z, Huang Z, Yuan Y, Xiang Y, Wu K, Liu H

PubMed · May 14 2025
To explore whether a CT-based AI framework, leveraging multi-scale features, can offer a non-invasive approach to accurately predict pathological grade and Ki67 index in clear cell renal cell carcinoma (ccRCC). In this multicenter retrospective study, a total of 1073 pathologically confirmed ccRCC patients from seven cohorts were split into internal cohorts (training and validation sets) and an external test set. The AI framework comprised an image processor, a 3D kidney and tumor segmentation model based on 3D-UNet, a multi-scale feature extractor built upon unsupervised learning, and a multi-task classifier utilizing XGBoost. A quantitative model interpretation technique, SHapley Additive exPlanations (SHAP), was employed to explore the contribution of the multi-scale features. The 3D-UNet model showed excellent performance in segmenting both the kidney and tumor regions, with Dice coefficients exceeding 0.92. The proposed multi-scale feature model exhibited strong predictive capability for pathological grading and Ki67 index, with AUROC values of 0.84 and 0.87, respectively, in the internal validation set, and 0.82 and 0.82, respectively, in the external test set. The SHAP results demonstrated that features from radiomics, the 3D Auto-Encoder, and dimensionality reduction all made significant contributions to both prediction tasks. The proposed AI framework, leveraging multi-scale features, accurately predicts the pathological grade and Ki67 index of ccRCC. It offers a promising avenue for non-invasive preoperative assessment, since determining pathological grade and Ki67 index in ccRCC could guide treatment decisions. The framework integrates segmentation, classification, and model interpretation, enabling fully automated analysis.
The AI framework enables non-invasive preoperative detection of high-risk tumors, assisting clinical decision-making.
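The paper reports Dice coefficients above 0.92 for the 3D-UNet segmentations. The Dice coefficient itself is standard and easy to compute; the sketch below uses a toy 8x8 binary mask, not the paper's data:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# toy 2D "tumor" masks: the prediction misses one pixel of the target
target = np.zeros((8, 8), dtype=np.uint8)
target[2:6, 2:6] = 1          # 16-pixel square ground truth
pred = target.copy()
pred[2, 2] = 0                # one false-negative pixel

score = dice_coefficient(pred, target)
```

Here the overlap is 15 pixels against mask sizes 15 and 16, giving Dice = 30/31 ≈ 0.968; for 3D volumes the same formula applies voxel-wise.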

Cardiovascular imaging techniques for electrophysiologists.

Rogers AJ, Reynbakh O, Ahmed A, Chung MK, Charate R, Yarmohammadi H, Gopinathannair R, Khan H, Lakkireddy D, Leal M, Srivatsa U, Trayanova N, Wan EY

PubMed · May 13 2025
Rapid technological advancements in noninvasive and invasive imaging, including echocardiography, computed tomography, magnetic resonance imaging, and positron emission tomography, have allowed for improved anatomical visualization and precise measurement of cardiac structure and function. These imaging modalities allow for evaluation of how changes in cardiac substrate, such as myocardial wall thickness, fibrosis, scarring, and chamber enlargement and/or dilation, play an important role in arrhythmia initiation and perpetuation. Here, we review the various imaging techniques and modalities used by clinical and basic electrophysiologists to study cardiac arrhythmia mechanisms, periprocedural planning, risk stratification, and precise delivery of ablation therapy. We also review the use of artificial intelligence and machine learning to improve identification of areas of triggered activity and isthmuses in reentrant arrhythmias, which may be favorable ablation targets.

Blockchain-enabled collective and combined deep learning framework for COVID-19 diagnosis.

Periyasamy S, Kaliyaperumal P, Thirumalaisamy M, Balusamy B, Elumalai T, Meena V, Jadoun VK

PubMed · May 13 2025
The rapid spread of SARS-CoV-2 has highlighted the need for intelligent methodologies in COVID-19 diagnosis. Clinicians face significant challenges due to the virus's fast transmission rate and the lack of reliable diagnostic tools. Although artificial intelligence (AI) has improved image processing, conventional approaches still rely on centralized data storage and training. This reliance increases complexity and raises privacy concerns, which hinder global data exchange. Therefore, it is essential to develop collaborative models that balance accuracy with privacy protection. This research presents a novel framework that combines blockchain technology with a combined learning paradigm to ensure secure data distribution and reduced complexity. The proposed Combined Learning Collective Deep Learning Blockchain Model (CLCD-Block) aggregates data from multiple institutions and leverages a hybrid capsule learning network for accurate predictions. Extensive testing with lung CT images demonstrates that the model outperforms existing models, achieving an accuracy exceeding 97%. Specifically, on four benchmark datasets, CLCD-Block achieved up to 98.79% Precision, 98.84% Recall, 98.79% Specificity, 98.81% F1-Score, and 98.71% Accuracy, showcasing its superior diagnostic capability. Designed for COVID-19 diagnosis, the CLCD-Block framework is adaptable to other applications, integrating AI, decentralized training, privacy protection, and secure blockchain collaboration. It addresses challenges in diagnosing chronic diseases, facilitates cross-institutional research and monitors infectious outbreaks. Future work will focus on enhancing scalability, optimizing real-time performance and adapting the model for broader healthcare datasets.
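The abstract does not detail the aggregation mechanism, but a common way to combine models trained at multiple institutions without sharing raw images is weighted averaging of model parameters (FedAvg-style). The sketch below is a generic illustration of that idea, not the CLCD-Block implementation; the site names and weight vectors are invented:

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Aggregate per-institution model weights, weighted by local dataset size."""
    total = sum(site_sizes)
    return sum((n / total) * w for w, n in zip(site_weights, site_sizes))

# three hypothetical institutions share weight vectors, never raw CT images
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 4.0])
w_c = np.array([5.0, 6.0])
global_w = federated_average([w_a, w_b, w_c], site_sizes=[100, 100, 200])
```

In a blockchain-backed variant, the exchanged weight updates (or their hashes) would additionally be recorded on a ledger so each aggregation round is auditable.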

Development and validation of an early diagnosis model for severe mycoplasma pneumonia in children based on interpretable machine learning.

Xie S, Wu M, Shang Y, Tuo W, Wang J, Cai Q, Yuan C, Yao C, Xiang Y

PubMed · May 13 2025
Pneumonia is a major threat to the health of children, especially those under the age of five. Mycoplasma pneumoniae infection is a core cause of pediatric pneumonia, and the incidence of severe Mycoplasma pneumoniae pneumonia (SMPP) has increased in recent years. Therefore, there is an urgent need to establish an early warning model for SMPP to improve the prognosis of pediatric pneumonia. The study comprised 597 SMPP patients aged between 1 month and 18 years. Clinical data were selected through Lasso regression analysis, followed by the application of eight machine learning algorithms to develop early warning models. The accuracy of the models was assessed using validation and prospective cohorts. To facilitate clinical assessment, the study simplified the indicators and constructed a visualized simplified model, whose clinical applicability was evaluated by DCA and CIC curves. After variable selection, eight machine learning models were developed using age, sex, and 21 serum indicators identified as predictive factors for SMPP. A Light Gradient Boosting Machine (LightGBM) model demonstrated strong performance, achieving an AUC of 0.92 in prospective validation. SHAP analysis was used to screen the most informative variables, consisting of serum S100A8/A9, tracheal computed tomography (CT), retinol-binding protein (RBP), platelet larger cell ratio (P-LCR), and CD4+CD25+ Treg cell counts, for constructing a simplified model (SCRPT) to improve clinical applicability. The SCRPT diagnostic model exhibited favorable diagnostic efficacy (AUC > 0.8). Additionally, the study found that S100A8/A9 outperformed clinical inflammatory markers and can also differentiate the severity of MPP. The SCRPT model, consisting of five dominant variables (S100A8/A9, CT, RBP, P-LCR, and Treg cell counts) screened from eight machine learning models, is expected to be a tool for early diagnosis of SMPP.
S100A8/A9 can also be used as a biomarker for severity differentiation of SMPP when medical resources are limited.
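The AUC values quoted above can be computed without any library: AUROC equals the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. A minimal pairwise implementation, with toy risk scores and labels:

```python
def auroc(scores, labels):
    """AUROC as the fraction of (positive, negative) pairs ranked
    correctly, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy model scores vs. true severe (1) / non-severe (0) labels
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
auc = auroc(scores, labels)
```

The pairwise form is O(P·N); for large cohorts the equivalent rank-sum (Mann-Whitney U) formulation is used instead.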

A Deep Learning-Driven Inhalation Injury Grading Assistant Using Bronchoscopy Images

Yifan Li, Alan W Pang, Jo Woon Chong

arXiv preprint · May 13 2025
Inhalation injuries present a challenge in clinical diagnosis and grading because conventional grading methods, such as the Abbreviated Injury Score (AIS), are subjective and lack robust correlation with clinical parameters like mechanical ventilation duration and patient mortality. This study introduces a novel deep learning-based diagnosis assistant tool for grading inhalation injuries from bronchoscopy images, aiming to overcome subjective variability and enhance consistency in severity assessment. Our approach leverages data augmentation techniques, including graphic transformations, Contrastive Unpaired Translation (CUT), and CycleGAN, to address the scarcity of medical imaging data. We evaluate the classification performance of two deep learning models, GoogLeNet and Vision Transformer (ViT), on a dataset significantly expanded through these augmentation methods. The results show that GoogLeNet combined with CUT is the most effective configuration for grading inhalation injuries from bronchoscopy images, achieving a classification accuracy of 97.8%. Histogram and frequency analyses reveal variations introduced by CUT augmentation, with distribution changes in the histogram and in the texture details of the frequency spectrum. PCA visualizations show that CUT substantially enhances class separability in the feature space. Moreover, Grad-CAM analyses provide insight into the decision-making process; the mean intensity of the CUT heatmaps is 119.6, significantly exceeding the 98.8 of the original dataset. Our proposed tool leverages mechanical ventilation periods as a novel grading standard, providing comprehensive diagnostic support.
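The Grad-CAM comparison above boils down to mean heatmap intensity and histogram shape before and after augmentation. The sketch below reproduces that kind of measurement on synthetic 8-bit heatmaps; the +20 intensity shift is an arbitrary stand-in for the effect of CUT, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
# stand-ins for Grad-CAM heatmaps (8-bit intensities) before/after augmentation
original = rng.integers(0, 200, size=(64, 64))
augmented = np.clip(original + 20, 0, 255)   # simulated CUT intensity shift

def mean_intensity(heatmap):
    """Scalar summary used to compare heatmap activation levels."""
    return float(heatmap.mean())

def histogram(heatmap, bins=8):
    """Coarse intensity histogram over the full 8-bit range."""
    counts, _ = np.histogram(heatmap, bins=bins, range=(0, 256))
    return counts

shift = mean_intensity(augmented) - mean_intensity(original)
```

Comparing `histogram(original)` against `histogram(augmented)` shows the distribution change directly; the mean-intensity difference is the single-number summary the paper reports.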

Deep learning based on ultrasound images to predict platinum resistance in patients with epithelial ovarian cancer.

Su C, Miao K, Zhang L, Dong X

PubMed · May 13 2025
The study aimed to develop and validate a deep learning (DL) model based on ultrasound imaging for predicting platinum resistance in patients with epithelial ovarian cancer (EOC). This retrospective study enrolled 392 patients who had been diagnosed with EOC between 2014 and 2020 and underwent pelvic ultrasound before initial treatment. A DL model was developed to predict patients' platinum resistance, and the model was evaluated using receiver-operating characteristic (ROC) curves, decision curve analysis (DCA), and calibration curves. The ROC curves showed that the area under the curve (AUC) of the DL model for predicting platinum resistance was 0.86 (95% CI 0.83-0.90) in the internal test set and 0.86 (95% CI 0.84-0.89) in the external test set. The model demonstrated high clinical value through decision curve analysis and exhibited good calibration in the training cohort. Kaplan-Meier analyses showed that the model's optimal cutoff value successfully distinguished between patients at high and low risk of recurrence, with hazard ratios of 3.1 (95% CI 2.3-4.1, P < 0.0001) and 2.9 (95% CI 2.3-3.9, P < 0.0001) for the high-risk group in the internal and external test sets, respectively, serving as a prognostic indicator. The DL model based on ultrasound imaging can predict platinum resistance in patients with EOC and may support clinicians in making the most appropriate treatment decisions.
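The Kaplan-Meier analyses mentioned above rest on the product-limit estimator, which can be written in a few lines. The sketch below handles one observation per time point (no ties) and uses invented follow-up data:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = recurrence observed, 0 = censored."""
    at_risk = len(times)
    survival, s = [], 1.0
    for t, e in sorted(zip(times, events)):
        if e == 1:                       # survival drops only at event times
            s *= (at_risk - 1) / at_risk
        survival.append((t, s))          # censored cases only shrink the risk set
        at_risk -= 1
    return survival

# toy follow-up data for a high-risk stratum (months, event flag)
curve = kaplan_meier(times=[6, 8, 12, 20, 30], events=[1, 1, 0, 1, 0])
```

Fitting one curve per risk group and comparing them (e.g. via a log-rank test or Cox model hazard ratio) is what produces figures like the HR of 3.1 reported above.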

Artificial intelligence for chronic total occlusion percutaneous coronary interventions.

Rempakos A, Pilla P, Alexandrou M, Mutlu D, Strepkos D, Carvalho PEP, Ser OS, Bahbah A, Amin A, Prasad A, Azzalini L, Ybarra LF, Mastrodemos OC, Rangan BV, Al-Ogaili A, Jalli S, Burke MN, Sandoval Y, Brilakis ES

PubMed · May 13 2025
Artificial intelligence (AI) has become pivotal in advancing medical care, particularly in interventional cardiology. Recent AI developments have proven effective in guiding advanced procedures and complex decisions. The authors review the latest AI-based innovations in the diagnosis of chronic total occlusions (CTO) and in determining the probability of success of CTO percutaneous coronary intervention (PCI). Neural networks and deep learning strategies were the most commonly used algorithms, and the models were trained and deployed using a variety of data types, such as clinical parameters and imaging. AI holds great promise in facilitating CTO PCI.

Diagnosis of thyroid cartilage invasion by laryngeal and hypopharyngeal cancers based on CT with deep learning.

Takano Y, Fujima N, Nakagawa J, Dobashi H, Shimizu Y, Kanaya M, Kano S, Homma A, Kudo K

PubMed · May 13 2025
To develop a convolutional neural network (CNN) model to diagnose thyroid cartilage invasion by laryngeal and hypopharyngeal cancers observed on computed tomography (CT) images and evaluate the model's diagnostic performance. We retrospectively analyzed 91 cases of laryngeal or hypopharyngeal cancer treated surgically at our hospital during the period April 2010 through May 2023, and we divided the cases into datasets for training (n = 61) and testing (n = 30). We reviewed the CT images and pathological diagnoses in all cases to determine invasion-positive or -negative status as the ground truth. We trained the new CNN model to classify thyroid cartilage invasion-positive or -negative status from the pre-treatment axial CT images by transfer learning from Residual Network 101 (ResNet101), using the training dataset. We then used the test dataset to evaluate the model's performance. Two radiologists, one with extensive head and neck imaging experience (senior reader) and the other with less experience (junior reader), reviewed the CT images of the test dataset to determine whether thyroid cartilage invasion was present. On the test dataset, the CNN model achieved an area under the curve (AUC) of 0.82, with 90% accuracy, 80% sensitivity, and 95% specificity. The CNN model showed a significant difference in AUC compared with the junior reader (p = 0.035) but not the senior reader (p = 0.61). The CNN-based diagnostic model can be a useful supportive tool for the assessment of thyroid cartilage invasion in patients with laryngeal or hypopharyngeal cancer.
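The reported test-set figures (90% accuracy, 80% sensitivity, 95% specificity) are plain confusion-matrix ratios. A minimal sketch, with a toy 30-case test set constructed to reproduce those numbers rather than the study's actual predictions:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # recall on positives
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # recall on negatives
    }

# toy test set: 10 invasion-positive and 20 invasion-negative cases
y_true = [1] * 10 + [0] * 20
y_pred = [1] * 8 + [0] * 2 + [0] * 19 + [1] * 1
m = binary_metrics(y_true, y_pred)
```

With 8/10 positives and 19/20 negatives correct, the three metrics come out to 0.90, 0.80, and 0.95, matching the abstract's figures.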

Congenital Heart Disease recognition using Deep Learning/Transformer models

Aidar Amangeldi, Vladislav Yarovenko, Angsar Taigonyrov

arXiv preprint · May 13 2025
Congenital Heart Disease (CHD) remains a leading cause of infant morbidity and mortality, yet non-invasive screening methods often yield false negatives. Deep learning models, with their ability to automatically extract features, can assist doctors in detecting CHD more effectively. In this work, we investigate the use of dual-modality (sound and image) deep learning methods for CHD diagnosis. We achieve 73.9% accuracy on the ZCHSound dataset and 80.72% accuracy on the DICOM Chest X-ray dataset.
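The abstract does not say how the sound and image modalities are combined; one simple baseline is late fusion, a weighted average of the per-modality probabilities. The function, weights, and probabilities below are illustrative assumptions, not the authors' method:

```python
def late_fusion(p_sound, p_image, w_sound=0.5):
    """Weighted average of per-modality CHD probabilities."""
    return w_sound * p_sound + (1.0 - w_sound) * p_image

# toy per-modality predictions for one patient
p = late_fusion(p_sound=0.7, p_image=0.9, w_sound=0.4)
```

The fusion weight would normally be tuned on a validation set; richer alternatives fuse intermediate feature vectors rather than final probabilities.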
