Page 104 of 2542539 results

Generative deep-learning-model based contrast enhancement for digital subtraction angiography using a text-conditioned image-to-image model.

Takata T, Yamada K, Yamamoto M, Kondo H

PubMed · Jun 20 2025
Digital subtraction angiography (DSA) is an essential imaging technique in interventional radiology, enabling detailed visualization of blood vessels by subtracting pre- and post-contrast images. However, reduced contrast, whether accidental or intentional, can impair the clarity of vascular structures. This issue is particularly critical in patients with chronic kidney disease (CKD), where minimizing iodinated contrast is necessary to reduce the risk of contrast-induced nephropathy (CIN). This study explored the potential of a generative deep-learning-based contrast enhancement technique for DSA. A text-conditioned image-to-image model was developed using Stable Diffusion, augmented with ControlNet to reduce hallucinations and Low-Rank Adaptation (LoRA) for fine-tuning. A total of 1207 DSA series were used for training and testing, with additional low-contrast images generated through data augmentation. The model was trained using tagged text labels and evaluated using metrics such as root mean square (RMS) contrast, Michelson contrast, signal-to-noise ratio (SNR), and entropy. Evaluation results indicated significant improvements: RMS contrast increased from 7.91 to 17.7, Michelson contrast from 0.875 to 0.992, and entropy from 3.60 to 5.60, reflecting enhanced detail. However, SNR decreased from 21.3 to 8.50, indicating increased noise. This study demonstrates the feasibility of deep-learning-based contrast enhancement for DSA images and highlights the potential of generative models to improve angiographic imaging. Further refinements, particularly in artifact suppression and clinical validation, are necessary for practical implementation in medical settings.
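
The four reported metrics have standard definitions; a minimal NumPy sketch for an 8-bit grayscale image, assuming the conventional forms (the paper may use variants, e.g. a different SNR definition):

```python
import numpy as np

def image_metrics(img):
    """Compute RMS contrast, Michelson contrast, SNR, and Shannon entropy
    for an 8-bit grayscale image. Assumes non-constant intensities
    (a constant image would divide by zero in SNR/Michelson)."""
    img = np.asarray(img, dtype=np.float64)
    rms = img.std()                                   # RMS contrast = intensity std
    michelson = (img.max() - img.min()) / (img.max() + img.min())
    snr = img.mean() / img.std()                      # simple mean/std SNR
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                                      # drop empty bins before log
    entropy = -(p * np.log2(p)).sum()                 # Shannon entropy in bits
    return rms, michelson, snr, entropy
```

A decrease in SNR alongside increased RMS contrast, as reported above, is exactly what these formulas would show when enhancement amplifies both vessel signal and background noise.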

Three-dimensional U-Net with transfer learning improves automated whole brain delineation from MRI brain scans of rats, mice, and monkeys.

Porter VA, Hobson BA, D'Almeida AJ, Bales KL, Lein PJ, Chaudhari AJ

PubMed · Jun 20 2025
Automated whole-brain delineation (WBD) techniques often struggle to generalize across pre-clinical studies due to variations in animal models, magnetic resonance imaging (MRI) scanners, and tissue contrasts. We developed a 3D U-Net neural network for WBD pre-trained on organophosphate intoxication (OPI) rat brain MRI scans. We used transfer learning (TL) to adapt this OPI-pretrained network to other animal models: a rat model of Alzheimer's disease (AD), a mouse model of tetramethylenedisulfotetramine (TETS) intoxication, and a titi monkey model of social bonding. We assessed the OPI-pretrained 3D U-Net across animal models under three conditions: (1) direct application to each dataset; (2) utilizing TL; and (3) training disease-specific U-Net models. For each condition, training dataset size (TDS) was optimized, and output WBDs were compared to manual segmentations for accuracy. The OPI-pretrained 3D U-Net (TDS = 100) achieved the best accuracy (median [min-max]) for the test OPI dataset, with a Dice coefficient (DC) of 0.987 [0.977-0.992] and a Hausdorff distance (HD) of 0.86 [0.55-1.27] mm. TL improved generalization across all models (AD (TDS = 40): DC = 0.987 [0.977-0.992], HD = 0.72 [0.54-1.00] mm; TETS (TDS = 10): DC = 0.992 [0.984-0.993], HD = 0.40 [0.31-0.50] mm; monkey (TDS = 8): DC = 0.977 [0.968-0.979], HD = 3.03 [2.19-3.91] mm), showing performance comparable to disease-specific networks. The OPI-pretrained 3D U-Net with TL achieved accuracy comparable to disease-specific networks with reduced training data (TDS ≤ 40 scans) across all models. Future work will focus on developing a multi-region delineation pipeline for pre-clinical MRI brain data, utilizing the proposed WBD as an initial step.
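
Both evaluation metrics can be computed directly from binary masks; a small NumPy sketch assuming the standard definitions (distances here are in voxel units, taken over all foreground voxels; the paper reports mm, which would require scaling by voxel spacing):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between foreground voxels of two
    binary masks (brute-force pairwise distances; fine for small masks)."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```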

Proportional Sensitivity in Generative Adversarial Network (GAN)-Augmented Brain Tumor Classification Using Convolutional Neural Network

Mahin Montasir Afif, Abdullah Al Noman, K. M. Tahsin Kabir, Md. Mortuza Ahmmed, Md. Mostafizur Rahman, Mufti Mahmud, Md. Ashraful Babu

arXiv preprint · Jun 20 2025
Generative Adversarial Networks (GANs) have shown potential in expanding limited medical imaging datasets. This study explores how different ratios of GAN-generated and real brain tumor MRI images impact the performance of a CNN in classifying healthy vs. tumorous scans. A DCGAN was used to create synthetic images, which were mixed with real ones at various ratios to train a custom CNN. The CNN was then evaluated on a separate real-world test set. Our results indicate that the model maintains high sensitivity and precision in tumor classification, even when trained predominantly on synthetic data. When only a small portion of GAN data was added, such as 900 real images and 100 GAN images, the model achieved excellent performance, with test accuracy reaching 95.2% and precision, recall, and F1-score all exceeding 95%. However, as the proportion of GAN images increased further, performance gradually declined. This study suggests that while GANs are useful for augmenting limited datasets, especially when real data is scarce, too much synthetic data can introduce artifacts that affect the model's ability to generalize to real-world cases.
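
The ratio sweep described above (e.g. 900 real + 100 GAN images) can be reproduced with a simple mixing helper; the function name and sampling scheme below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mix_training_set(real_imgs, gan_imgs, gan_fraction, rng=None):
    """Build a fixed-size training set in which `gan_fraction` of the
    samples are GAN-generated and the rest are real, then shuffle.
    Returns the mixed images and a boolean synthetic-sample flag."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_total = len(real_imgs)
    n_gan = int(round(n_total * gan_fraction))
    n_real = n_total - n_gan
    chosen_real = rng.choice(len(real_imgs), n_real, replace=False)
    chosen_gan = rng.choice(len(gan_imgs), n_gan, replace=False)
    mixed = np.concatenate([real_imgs[chosen_real], gan_imgs[chosen_gan]])
    is_synthetic = np.array([False] * n_real + [True] * n_gan)
    order = rng.permutation(n_total)          # shuffle real/synthetic together
    return mixed[order], is_synthetic[order]
```

Sweeping `gan_fraction` from 0.0 upward and retraining the CNN at each setting is the experiment the abstract describes.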

TextBraTS: Text-Guided Volumetric Brain Tumor Segmentation with Innovative Dataset Development and Fusion Module Exploration

Xiaoyu Shi, Rahul Kumar Jain, Yinhao Li, Ruibo Hou, Jingliang Cheng, Jie Bai, Guohua Zhao, Lanfen Lin, Rui Xu, Yen-wei Chen

arXiv preprint · Jun 20 2025
Deep learning has demonstrated remarkable success in medical image segmentation and computer-aided diagnosis. In particular, numerous advanced methods have achieved state-of-the-art performance in brain tumor segmentation from MRI scans. While recent studies in other medical imaging domains have revealed that integrating textual reports with visual data can enhance segmentation accuracy, the field of brain tumor analysis lacks a comprehensive dataset that combines radiological images with corresponding textual annotations. This limitation has hindered the exploration of multimodal approaches that leverage both imaging and textual data. To bridge this critical gap, we introduce the TextBraTS dataset, the first publicly available volume-level multimodal dataset that contains paired MRI volumes and rich textual annotations, derived from the widely adopted BraTS2020 benchmark. Building upon this novel dataset, we propose a baseline framework with a sequential cross-attention method for text-guided volumetric medical image segmentation. Through extensive experiments with various text-image fusion strategies and templated text formulations, our approach demonstrates significant improvements in brain tumor segmentation accuracy, offering valuable insights into effective multimodal integration techniques. Our dataset, implementation code, and pre-trained models are publicly available at https://github.com/Jupitern52/TextBraTS.
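
The abstract does not detail the sequential cross-attention module itself; as a generic illustration of the mechanism, here is a single-head scaled dot-product cross-attention in NumPy, in which image features (queries) attend over text-token features (keys/values). This is a textbook sketch, not the authors' implementation:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention:
    softmax(QK^T / sqrt(d)) V, with queries from one modality
    (e.g. image patches) and keys/values from another (e.g. text tokens)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over text tokens
    return weights @ values
```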

BioTransX: A novel bi-former based hybrid model with bi-level routing attention for brain tumor classification with explainable insights.

Rajpoot R, Jain S, Semwal VB

PubMed · Jun 20 2025
Brain tumors, known for their life-threatening implications, underscore the urgency of precise and interpretable early detection. Expertise remains essential for accurate identification through MRI scans due to the intricacies involved. However, the growing recognition of automated detection systems holds the potential to enhance accuracy and improve interpretability. By consistently providing easily comprehensible results, these automated solutions could boost the overall efficiency and effectiveness of brain tumor diagnosis, promising a transformative era in healthcare. This paper introduces a new hybrid model, BioTransX, which uses a bi-former encoder mechanism, a dynamic sparse attention-based transformer, in conjunction with ensemble convolutional networks. Recognizing the importance of better contrast and data quality, we applied Contrast-Limited Adaptive Histogram Equalization (CLAHE) during the initial data processing stage. Additionally, to address the crucial aspect of model interpretability, we integrated Grad-CAM and Gradient Attention Rollout, which elucidate decisions by highlighting influential regions within medical images. Our hybrid deep learning model was primarily evaluated on the Kaggle MRI dataset for multi-class brain tumor classification, achieving a mean accuracy and F1-score of 99.29%. To validate its generalizability and robustness, BioTransX was further tested on two additional benchmark datasets, BraTS and Figshare, where it consistently maintained high performance across key evaluation metrics. The transformer-based hybrid model demonstrated promising performance in explainable identification and offered notable advantages in computational efficiency and memory usage. These strengths differentiate BioTransX from existing models in the literature and make it ideal for real-world deployment in resource-constrained clinical infrastructures.
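
BioTransX applies CLAHE during preprocessing. As a simplified illustration of the underlying mapping, here is plain global histogram equalization in NumPy; CLAHE adds per-tile processing and a clip limit on top of this, so treat it as a sketch of the idea rather than the paper's exact pipeline (libraries such as OpenCV provide full CLAHE):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities through the normalized cumulative histogram so the
    output uses the full dynamic range."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
    return np.round(cdf[img] * 255).astype(np.uint8)
```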

DSA-NRP: No-Reflow Prediction from Angiographic Perfusion Dynamics in Stroke EVT

Shreeram Athreya, Carlos Olivares, Ameera Ismail, Kambiz Nael, William Speier, Corey Arnold

arXiv preprint · Jun 20 2025
Following successful large-vessel recanalization via endovascular thrombectomy (EVT) for acute ischemic stroke (AIS), some patients experience a complication known as no-reflow, defined by persistent microvascular hypoperfusion that undermines tissue recovery and worsens clinical outcomes. Although prompt identification is crucial, standard clinical practice relies on perfusion magnetic resonance imaging (MRI) within 24 hours post-procedure, delaying intervention. In this work, we introduce the first-ever machine learning (ML) framework to predict no-reflow immediately after EVT by leveraging previously unexplored intra-procedural digital subtraction angiography (DSA) sequences and clinical variables. Our retrospective analysis included AIS patients treated at UCLA Medical Center (2011-2024) who achieved favorable mTICI scores (2b-3) and underwent pre- and post-procedure MRI. No-reflow was defined as persistent hypoperfusion (Tmax > 6 s) on post-procedural imaging. From DSA sequences (AP and lateral views), we extracted statistical and temporal perfusion features from the target downstream territory to train ML classifiers for predicting no-reflow. Our novel method significantly outperformed a clinical-features baseline (AUC: 0.7703 ± 0.12 vs. 0.5728 ± 0.12; accuracy: 0.8125 ± 0.10 vs. 0.6331 ± 0.09), demonstrating that real-time DSA perfusion dynamics encode critical insights into microvascular integrity. This approach establishes a foundation for immediate, accurate no-reflow prediction, enabling clinicians to proactively manage high-risk patients without reliance on delayed imaging.
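
The abstract does not enumerate the statistical and temporal features used; the sketch below shows the kind of time-intensity-curve summaries one might extract from a DSA sequence over a downstream-territory ROI. The feature names and choices are illustrative assumptions, not the authors' feature set:

```python
import numpy as np

def perfusion_features(dsa_seq, roi_mask):
    """Summarize the time-intensity curve of a DSA sequence
    (frames x H x W) inside a boolean ROI mask (H x W)."""
    curve = dsa_seq[:, roi_mask].mean(axis=1)   # mean ROI intensity per frame
    auc = float(((curve[:-1] + curve[1:]) / 2.0).sum())  # trapezoidal AUC
    return {
        "peak": float(curve.max()),
        "time_to_peak": int(curve.argmax()),    # frame index of peak opacification
        "auc": auc,
        "mean": float(curve.mean()),
        "std": float(curve.std()),
    }
```

Feature vectors like this, concatenated across AP and lateral views and combined with clinical variables, could then feed any standard ML classifier.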

Segmentation of clinical imagery for improved epidural stimulation to address spinal cord injury

Matelsky, J. K., Sharma, P., Johnson, E. C., Wang, S., Boakye, M., Angeli, C., Forrest, G. F., Harkema, S. J., Tenore, F.

medRxiv preprint · Jun 20 2025
Spinal cord injury (SCI) can severely impair motor and autonomic function, with long-term consequences for quality of life. Epidural stimulation has emerged as a promising intervention, offering partial recovery by activating neural circuits below the injury. To make this therapy effective in practice, precise placement of stimulation electrodes is essential -- and that requires accurate segmentation of spinal cord structures in MRI data. We present a protocol for manual segmentation tailored to SCI anatomy, and evaluate a deep learning approach using a U-Net architecture to automate this segmentation process. Our approach yields accurate, efficient segmentations that identify potential electrode placement sites with high fidelity. Preliminary results suggest that this framework can accelerate SCI MRI analysis and improve planning for epidural stimulation, helping bridge the gap between advanced neurotechnologies and real-world clinical application through faster surgeries and more accurate electrode placement.

Robust Training with Data Augmentation for Medical Imaging Classification

Josué Martínez-Martínez, Olivia Brown, Mostafa Karami, Sheida Nabavi

arxiv logopreprintJun 20 2025
Deep neural networks are increasingly being used to detect and diagnose medical conditions from medical imaging. Despite their utility, these models are highly vulnerable to adversarial attacks and distribution shifts, which can affect diagnostic reliability and undermine trust among healthcare professionals. In this study, we propose a robust training algorithm with data augmentation (RTDA) to mitigate these vulnerabilities in medical image classification. We benchmark the robustness of RTDA and six competing baseline techniques, including adversarial training and data augmentation approaches in isolation and in combination, against adversarial perturbations and natural variations, using experimental datasets spanning three imaging technologies (mammograms, X-rays, and ultrasound). We demonstrate that RTDA achieves superior robustness against adversarial attacks and improved generalization performance in the presence of distribution shift in each image classification task, while maintaining high clean accuracy.
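
Adversarial training typically generates perturbed inputs with a single-step method such as FGSM; the abstract does not specify the attack used, so the following is a generic FGSM sketch on a toy logistic model, not RTDA itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, eps):
    """Fast Gradient Sign Method for a logistic-regression model with
    weights w: perturb input x by eps along the sign of the gradient of
    the binary cross-entropy loss with respect to x."""
    grad_x = (sigmoid(x @ w) - y) * w   # d(BCE)/dx for a linear-logistic model
    return x + eps * np.sign(grad_x)
```

In adversarial training, such perturbed samples are mixed back into each minibatch alongside standard augmentations, which is the combination the study benchmarks.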