DSA-NRP: No-Reflow Prediction from Angiographic Perfusion Dynamics in Stroke EVT

Shreeram Athreya, Carlos Olivares, Ameera Ismail, Kambiz Nael, William Speier, Corey Arnold

arxiv logopreprintJun 20 2025
Following successful large-vessel recanalization via endovascular thrombectomy (EVT) for acute ischemic stroke (AIS), some patients experience a complication known as no-reflow, defined by persistent microvascular hypoperfusion that undermines tissue recovery and worsens clinical outcomes. Although prompt identification is crucial, standard clinical practice relies on perfusion magnetic resonance imaging (MRI) within 24 hours post-procedure, delaying intervention. In this work, we introduce the first-ever machine learning (ML) framework to predict no-reflow immediately after EVT by leveraging previously unexplored intra-procedural digital subtraction angiography (DSA) sequences and clinical variables. Our retrospective analysis included AIS patients treated at UCLA Medical Center (2011-2024) who achieved favorable mTICI scores (2b-3) and underwent pre- and post-procedure MRI. No-reflow was defined as persistent hypoperfusion (Tmax > 6 s) on post-procedural imaging. From DSA sequences (AP and lateral views), we extracted statistical and temporal perfusion features from the target downstream territory to train ML classifiers for predicting no-reflow. Our novel method significantly outperformed a clinical-features baseline (AUC: 0.7703 ± 0.12 vs. 0.5728 ± 0.12; accuracy: 0.8125 ± 0.10 vs. 0.6331 ± 0.09), demonstrating that real-time DSA perfusion dynamics encode critical insights into microvascular integrity. This approach establishes a foundation for immediate, accurate no-reflow prediction, enabling clinicians to proactively manage high-risk patients without reliance on delayed imaging.
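
A minimal sketch of the kind of feature pipeline this abstract describes, assuming DSA frames are available as a NumPy array and that per-frame intensity statistics within the downstream-territory mask serve as perfusion features; the specific features and classifier below are illustrative, not the authors' exact method.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dsa_perfusion_features(frames: np.ndarray, territory_mask: np.ndarray) -> np.ndarray:
    """Statistical + temporal features from a DSA sequence (T, H, W),
    restricted to the target downstream territory (H, W boolean mask)."""
    curve = frames[:, territory_mask].mean(axis=1)   # mean opacification per frame
    t = np.arange(len(curve))
    return np.array([
        curve.mean(), curve.std(), skew(curve), kurtosis(curve),  # statistical
        curve.argmax(),                    # time-to-peak (frame index)
        np.trapz(curve, t),                # area under the intensity-time curve
        np.gradient(curve).max(),          # steepest wash-in slope
    ])

# X: one row per patient (AP and lateral features concatenated); y: no-reflow labels.
# X = np.stack([dsa_perfusion_features(f, m) for f, m in sequences]); y = labels
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc"))
```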

Deep learning NTCP model for late dysphagia after radiotherapy for head and neck cancer patients based on 3D dose, CT and segmentations

de Vette, S. P., Neh, H., van der Hoek, L., MacRae, D. C., Chu, H., Gawryszuk, A., Steenbakkers, R. J., van Ooijen, P. M., Fuller, C. D., Hutcheson, K. A., Langendijk, J. A., Sijtsema, N. M., van Dijk, L. V.

medrxiv logopreprintJun 20 2025
Background & purpose: Late radiation-associated dysphagia after head and neck cancer (HNC) significantly impacts patients' health and quality of life. Conventional normal tissue complication probability (NTCP) models use discrete dose parameters to predict toxicity risk but fail to fully capture the complexity of this side effect. Deep learning (DL) offers potential improvements by incorporating 3D dose data for all anatomical structures involved in swallowing. This study aims to enhance dysphagia prediction with 3D DL NTCP models compared to conventional NTCP models. Materials & methods: A multi-institutional cohort of 1484 HNC patients was used to train and validate a 3D DL model (Residual Network) incorporating 3D dose distributions, organ-at-risk segmentations, and CT scans, with or without patient- or treatment-related data. Predictions of grade ≥2 dysphagia (CTCAEv4) at six months post-treatment were evaluated using area under the curve (AUC) and calibration curves. Results were compared to a conventional NTCP model based on pre-treatment dysphagia, tumour location, and mean dose to swallowing organs. Attention maps highlighting regions of interest for individual patients were assessed. Results: DL models outperformed the conventional NTCP model in both the independent test set (AUC = 0.80-0.84 versus 0.76) and the external test set (AUC = 0.73-0.74 versus 0.63) in AUC and calibration. Attention maps showed a focus on the oral cavity and superior pharyngeal constrictor muscle. Conclusion: DL NTCP models performed better than the conventional NTCP model, suggesting the benefit of 3D input over conventional discrete dose parameters. Attention maps highlighted relevant regions linked to dysphagia, supporting the utility of DL for improved predictions.
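
A minimal sketch of the model input described above, assuming PyTorch: dose, CT, and organ-at-risk segmentation volumes stacked as input channels to a small 3D CNN. The architecture and tensor shapes are illustrative stand-ins for the paper's Residual Network.

```python
import torch
import torch.nn as nn

class Small3DNTCP(nn.Module):
    """Toy 3D CNN; input channels = [dose, CT, OAR segmentation]."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)   # probability of grade >=2 dysphagia

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))

# One patient: dose, CT, and segmentation volumes stacked as channels.
x = torch.randn(1, 3, 64, 96, 96)    # (batch, channels, D, H, W)
print(Small3DNTCP()(x))              # NTCP estimate in [0, 1]
```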

Proportional Sensitivity in Generative Adversarial Network (GAN)-Augmented Brain Tumor Classification Using Convolutional Neural Network

Mahin Montasir Afif, Abdullah Al Noman, K. M. Tahsin Kabir, Md. Mortuza Ahmmed, Md. Mostafizur Rahman, Mufti Mahmud, Md. Ashraful Babu

arxiv logopreprintJun 20 2025
Generative Adversarial Networks (GANs) have shown potential in expanding limited medical imaging datasets. This study explores how different ratios of GAN-generated and real brain tumor MRI images impact the performance of a CNN in classifying healthy vs. tumorous scans. A DCGAN was used to create synthetic images, which were mixed with real ones at various ratios to train a custom CNN. The CNN was then evaluated on a separate real-world test set. Our results indicate that the model maintains high sensitivity and precision in tumor classification, even when trained predominantly on synthetic data. When only a small portion of GAN data was added, such as 900 real images and 100 GAN images, the model achieved excellent performance, with test accuracy reaching 95.2% and precision, recall, and F1-score all exceeding 95%. However, as the proportion of GAN images increased further, performance gradually declined. This study suggests that while GANs are useful for augmenting limited datasets, especially when real data is scarce, too much synthetic data can introduce artifacts that affect the model's ability to generalize to real-world cases.
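
A minimal sketch of the ratio experiment, assuming real and GAN-generated images are already available as arrays; the sampling and evaluation loop is illustrative, not the study's exact protocol.

```python
import numpy as np

def mix_training_set(real_x, real_y, gan_x, gan_y, gan_fraction, rng):
    """Build a fixed-size training set with the given fraction of GAN images."""
    n = len(real_x)
    n_gan = int(round(gan_fraction * n))
    ri = rng.choice(len(real_x), n - n_gan, replace=False)
    gi = rng.choice(len(gan_x), n_gan, replace=False)
    x = np.concatenate([real_x[ri], gan_x[gi]])
    y = np.concatenate([real_y[ri], gan_y[gi]])
    p = rng.permutation(len(x))
    return x[p], y[p]

rng = np.random.default_rng(0)
for frac in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9]:
    # x_tr, y_tr = mix_training_set(real_x, real_y, gan_x, gan_y, frac, rng)
    # Train the CNN on (x_tr, y_tr), then evaluate on the held-out real test set.
    pass
```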

Robust Training with Data Augmentation for Medical Imaging Classification

Josué Martínez-Martínez, Olivia Brown, Mostafa Karami, Sheida Nabavi

arxiv logopreprintJun 20 2025
Deep neural networks are increasingly being used to detect and diagnose medical conditions from medical imaging. Despite their utility, these models are highly vulnerable to adversarial attacks and distribution shifts, which can affect diagnostic reliability and undermine trust among healthcare professionals. In this study, we propose a robust training algorithm with data augmentation (RTDA) to mitigate these vulnerabilities in medical image classification. We benchmark the robustness of RTDA and six competing baseline techniques, including adversarial training and data augmentation approaches in isolation and in combination, against adversarial perturbations and natural variations, using experimental datasets from three imaging technologies (mammograms, X-rays, and ultrasound). We demonstrate that RTDA achieves superior robustness against adversarial attacks and improved generalization under distribution shift in each image classification task, while maintaining high clean accuracy.
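
A minimal sketch of one ingredient such methods typically combine, namely an FGSM adversarial example generated on augmented inputs, assuming PyTorch. The abstract does not specify RTDA's actual algorithm, so this is a generic illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.01):
    """Generate an FGSM adversarial example (one-step gradient-sign attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def robust_step(model, optimizer, x, y, augment, eps=0.01):
    """One training step on augmented clean and adversarial batches."""
    x_aug = augment(x)                       # e.g. random flips/crops/noise
    x_adv = fgsm_example(model, x_aug, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x_aug), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```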

BioTransX: A novel bi-former based hybrid model with bi-level routing attention for brain tumor classification with explainable insights.

Rajpoot R, Jain S, Semwal VB

pubmed logopapersJun 20 2025
Brain tumors, known for their life-threatening implications, underscore the urgency of precise and interpretable early detection. Expertise remains essential for accurate identification through MRI scans due to the intricacies involved. However, the growing recognition of automated detection systems holds the potential to enhance accuracy and improve interpretability. By consistently providing easily comprehensible results, these automated solutions could boost the overall efficiency and effectiveness of brain tumor diagnosis, promising a transformative era in healthcare. This paper introduces a new hybrid model, BioTransX, which uses a bi-former encoder mechanism, a dynamic sparse attention-based transformer, in conjunction with ensemble convolutional networks. Recognizing the importance of better contrast and data quality, we applied Contrast-Limited Adaptive Histogram Equalization (CLAHE) during the initial data processing stage. Additionally, to address the crucial aspect of model interpretability, we integrated Grad-CAM and Gradient Attention Rollout, which elucidate decisions by highlighting influential regions within medical images. Our hybrid deep learning model was primarily evaluated on the Kaggle MRI dataset for multi-class brain tumor classification, achieving a mean accuracy and F1-score of 99.29%. To validate its generalizability and robustness, BioTransX was further tested on two additional benchmark datasets, BraTS and Figshare, where it consistently maintained high performance across key evaluation metrics. The transformer-based hybrid model demonstrated promising performance in explainable identification and offered notable advantages in computational efficiency and memory usage. These strengths differentiate BioTransX from existing models in the literature and make it ideal for real-world deployment in resource-constrained clinical infrastructures.
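
A minimal sketch of the CLAHE preprocessing step the abstract mentions, assuming OpenCV and a grayscale MRI slice; the clip limit and tile size are illustrative defaults, not the paper's settings.

```python
import cv2

def preprocess_mri(path: str):
    """Load a grayscale MRI slice and apply CLAHE contrast enhancement."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)

# enhanced = preprocess_mri("slice_0001.png")  # hypothetical filename
```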

Generalizable model to predict new or progressing compression fractures in tumor-infiltrated thoracolumbar vertebrae in an all-comer population.

Flores A, Nitturi V, Kavoussi A, Feygin M, Andrade de Almeida RA, Ramirez Ferrer E, Anand A, Nouri S, Allam AK, Ricciardelli A, Reyes G, Reddy S, Rampalli I, Rhines L, Tatsui CE, North RY, Ghia A, Siewerdsen JH, Ropper AE, Alvarez-Breckenridge C

pubmed logopapersJun 20 2025
Neurosurgical evaluation is required in the setting of spinal metastases at high risk for leading to a vertebral body fracture. Both irradiated and nonirradiated vertebrae are affected. Understanding fracture risk is critical in determining management, including follow-up timing and prophylactic interventions. Herein, the authors report the results of a machine learning model that predicts the development or progression of a pathological vertebral compression fracture (VCF) in metastatic tumor-infiltrated thoracolumbar vertebrae in an all-comer population. A multi-institutional all-comer cohort of patients with tumor-containing vertebral levels spanning T1 through L5 and at least 1 year of follow-up was included in the study. Clinical features of the patients, diseases, and treatments were collected. CT radiomic features of the vertebral bodies were extracted from tumor-infiltrated vertebrae that did or did not subsequently fracture or progress. Recursive feature elimination (RFE) of both radiomic and clinical features was performed. The resulting features were used to create a purely clinical model, a purely radiomic model, and a combined clinical-radiomic model. A Spine Instability Neoplastic Score (SINS) model was created for a baseline performance comparison. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity (with 95% confidence intervals) with tenfold cross-validation. Within 1 year from initial CT, 123 of 977 vertebrae developed VCF. Selected clinical features included SINS, the SINS component for < 50% vertebral body collapse, the SINS component for "none of the prior 3" (i.e., "none of the above" on the SINS component for vertebral body involvement), histology, age, and BMI. Of the 2015 radiomic features, RFE selected 19 to be used in the pure radiomic model and the combined clinical-radiomic model. The best performing model was a random forest classifier using both clinical and radiomic features, demonstrating an AUROC of 0.86 (95% CI 0.82-0.9), sensitivity of 0.78 (95% CI 0.70-0.84), and specificity of 0.80 (95% CI 0.77-0.82). This performance was significantly higher than the best SINS-alone model (AUROC 0.75, 95% CI 0.70-0.80) and outperformed the clinical-only model, although not in a statistically significant manner (AUROC 0.82, 95% CI 0.77-0.87). The authors developed a clinically generalizable machine learning model to predict the risk of a new or progressing VCF in an all-comer population. This model addresses limitations from prior work and was trained on the largest cohort of patients and vertebrae published to date. If validated, the model could lead to more consistent and systematic identification of high-risk vertebrae, resulting in faster, more accurate triage of patients for optimal management.
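
A minimal sketch of the feature-selection and classification pipeline described, assuming scikit-learn and precomputed clinical plus radiomic feature matrices; the estimator settings and feature count are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_vertebrae, n_clinical + n_radiomic) feature matrix; y: fracture within 1 year.
# Recursive feature elimination followed by a random forest, mirroring the
# abstract's setup; n_features_to_select is an assumed value.
model = make_pipeline(
    StandardScaler(),
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=25),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
# aurocs = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
# print(aurocs.mean())
```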

Detection of breast cancer using fractional discrete sinc transform based on empirical Fourier decomposition.

Azmy MM

pubmed logopapersJun 20 2025
Breast cancer is the most common cause of death among women worldwide, so early detection is important for saving patients' lives. Ultrasound and mammography are the most common noninvasive methods for detecting breast cancer, and computer techniques are used to help physicians diagnose it. In most previous studies, the classification parameter rates were not high enough to achieve a correct diagnosis. In this study, new approaches were applied to detect breast cancer images from three databases, with features extracted from the images using MATLAB R2022a. Novel approaches were obtained using new fractional transforms, deduced from the fractional Fourier transform and novel discrete transforms; the novel discrete transforms were derived from the discrete sine and cosine transforms. The steps of the approaches were as follows. First, fractional transforms were applied to the breast images. Then, the empirical Fourier decomposition (EFD) was obtained, and the mean, variance, kurtosis, and skewness were calculated. Finally, an RNN-BiLSTM (recurrent neural network with bidirectional long short-term memory) was used in the classification phase. The proposed approaches were compared to obtain the highest accuracy rate in the classification phase across the different fractional transforms. The highest accuracy rate was obtained when the fractional discrete sinc transform of approach 4 was applied: the area under the receiver operating characteristic curve (AUC) was 1, and the accuracy, sensitivity, specificity, precision, G-mean, and F-measure rates were all 100%. If traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), were used instead, the classification parameter rates would be low. The fourth approach therefore used the RNN-BiLSTM to capture the features of breast images most effectively; it can be programmed on a computer to help physicians correctly classify breast images.
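
A minimal sketch of the statistical feature stage, assuming each image has already been transformed and decomposed into components; skew and kurtosis come from SciPy, and the decomposition itself is treated as a precomputed input since EFD and the fractional discrete sinc transform are non-standard.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def component_features(components):
    """Mean, variance, skewness, and kurtosis for each decomposed component.

    components: iterable of 1-D or flattened 2-D arrays produced by the
    transform + decomposition stage (assumed precomputed)."""
    feats = []
    for c in components:
        c = np.ravel(c).astype(float)
        feats.extend([c.mean(), c.var(), skew(c), kurtosis(c)])
    return np.array(feats)

# x = component_features(efd_components)  # feeds the RNN-BiLSTM classifier
```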

Effective workflow from multimodal MRI data to model-based prediction.

Jung K, Wischnewski KJ, Eickhoff SB, Popovych OV

pubmed logopapersJun 20 2025
Predicting human behavior from neuroimaging data remains a complex challenge in neuroscience. To address this, we propose a systematic, multi-faceted framework that incorporates a model-based workflow using dynamical brain models. This approach utilizes multimodal MRI data for brain modeling and applies the optimized modeling outcome to machine learning. We demonstrate the performance of this approach through several examples, such as sex classification and the prediction of cognition or personality traits. In particular, we show that incorporating the simulated data into machine learning can significantly improve prediction performance compared to using empirical features alone. These results suggest treating the output of dynamical brain models as an additional neuroimaging data modality that complements empirical data by capturing brain features that are difficult to measure directly. The discussed model-based workflow offers a promising avenue for investigating and understanding inter-individual variability in brain-behavior relationships and enhancing prediction performance in neuroimaging research.
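
A minimal sketch of the comparison the abstract reports, assuming empirical and model-simulated feature matrices are precomputed per subject; the ridge classifier and scoring choices are illustrative.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def score(features, y):
    """Cross-validated accuracy for a given feature set (e.g. sex classification)."""
    clf = make_pipeline(StandardScaler(), RidgeClassifier())
    return cross_val_score(clf, features, y, cv=5).mean()

# X_emp: empirical connectivity features; X_sim: features simulated by the
# fitted dynamical brain model (both assumed precomputed).
# print("empirical only:", score(X_emp, y))
# print("empirical + simulated:", score(np.hstack([X_emp, X_sim]), y))
```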

MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images.

Ling D, Jiao X

pubmed logopapersJun 20 2025
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
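
A minimal sketch of the distillation objective such a setup typically uses, assuming PyTorch: a temperature-scaled KL term between the multi-view teacher's and single-view student's logits plus the usual cross-entropy. The temperature and weighting are illustrative, not the paper's values.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Knowledge distillation: soft targets from the teacher + hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                        # standard gradient-scale correction
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Teacher sees all views of the tumor; student sees a single view.
# loss = kd_loss(student(x_single), teacher(x_views).detach(), y)
```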