
OpenMAP-BrainAge: Generalizable and Interpretable Brain Age Predictor

Pengyu Kan, Craig Jones, Kenichi Oishi

arXiv preprint · Jun 21 2025
Purpose: To develop an age prediction model that is interpretable and robust to demographic and technological variances in brain MRI scans. Materials and Methods: We propose a transformer-based architecture that leverages self-supervised pre-training on large-scale datasets. Our model processes pseudo-3D T1-weighted MRI scans from three anatomical views and incorporates brain volumetric information. By introducing a stem architecture, we reduce the conventional quadratic complexity of transformer models to linear complexity, enabling scalability for high-dimensional MRI data. We trained our model on the ADNI2 & 3 (N=1348) and OASIS3 (N=716) datasets (age range: 42-95) from North America, with an 8:1:1 split for training, validation, and testing, and then validated it on the AIBL dataset (N=768, age range: 60-92) from Australia. Results: We achieved an MAE of 3.65 years on the ADNI2 & 3 and OASIS3 test set and strong generalizability, with an MAE of 3.54 years on AIBL. There was a notable increase in brain age gap (BAG) across cognitive groups, with means of 0.15 years (95% CI: [-0.22, 0.51]) in CN, 2.55 years ([2.40, 2.70]) in MCI, and 6.12 years ([5.82, 6.43]) in AD. Additionally, a significant negative correlation between BAG and cognitive scores was observed, with correlation coefficients of -0.185 (p < 0.001) for MoCA and -0.231 (p < 0.001) for MMSE. Gradient-based feature attribution highlighted the ventricles and white matter structures as key regions influenced by brain aging. Conclusion: Our model effectively fused information from different anatomical views with brain volumetric information to achieve state-of-the-art brain age prediction accuracy, improved generalizability, and interpretability, with BAG associated with neurodegenerative disorders.
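The brain age gap analysis reported here reduces to simple statistics on predicted versus chronological age. A minimal sketch of that post-hoc computation on synthetic stand-in data (all values below are illustrative, not from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins: chronological ages, model-predicted ages, and MoCA scores.
age = rng.uniform(42, 95, size=500)
predicted = age + rng.normal(2.0, 3.6, size=500)          # illustrative bias/noise
moca = 30 - 0.15 * (predicted - age) + rng.normal(0, 2, size=500)

bag = predicted - age                                     # brain age gap (years)

# Mean BAG with a 95% CI (normal approximation), as reported per cognitive group.
mean_bag = bag.mean()
ci = stats.norm.interval(0.95, loc=mean_bag, scale=bag.std(ddof=1) / np.sqrt(len(bag)))
print(f"mean BAG: {mean_bag:.2f} years, 95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")

# Pearson correlation between BAG and a cognitive score (e.g., MoCA).
r, p = stats.pearsonr(bag, moca)
print(f"BAG vs. MoCA: r = {r:.3f}, p = {p:.3g}")
```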

Generalizable model to predict new or progressing compression fractures in tumor-infiltrated thoracolumbar vertebrae in an all-comer population.

Flores A, Nitturi V, Kavoussi A, Feygin M, Andrade de Almeida RA, Ramirez Ferrer E, Anand A, Nouri S, Allam AK, Ricciardelli A, Reyes G, Reddy S, Rampalli I, Rhines L, Tatsui CE, North RY, Ghia A, Siewerdsen JH, Ropper AE, Alvarez-Breckenridge C

PubMed paper · Jun 20 2025
Neurosurgical evaluation is required for spinal metastases at high risk of leading to a vertebral body fracture; both irradiated and nonirradiated vertebrae are affected. Understanding fracture risk is critical in determining management, including follow-up timing and prophylactic interventions. Herein, the authors report the results of a machine learning model that predicts the development or progression of a pathological vertebral compression fracture (VCF) in metastatic tumor-infiltrated thoracolumbar vertebrae in an all-comer population. A multi-institutional all-comer cohort of patients with tumor-containing vertebral levels spanning T1 through L5 and at least 1 year of follow-up was included in the study. Clinical features of the patients, diseases, and treatments were collected. CT radiomic features were extracted from tumor-infiltrated vertebral bodies that did or did not subsequently fracture or progress. Recursive feature elimination (RFE) of both radiomic and clinical features was performed. The resulting features were used to create a purely clinical model, a purely radiomic model, and a combined clinical-radiomic model; a Spine Instability Neoplastic Score (SINS) model was created for baseline comparison. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity (with 95% confidence intervals) under tenfold cross-validation. Within 1 year from initial CT, 123 of 977 vertebrae developed VCF. Selected clinical features included SINS, the SINS component for < 50% vertebral body collapse, the SINS component for "none of the prior 3" (i.e., "none of the above" on the SINS component for vertebral body involvement), histology, age, and BMI. Of the 2015 radiomic features, RFE selected 19 for use in the pure radiomic model and the combined clinical-radiomic model. The best performing model was a random forest classifier using both clinical and radiomic features, demonstrating an AUROC of 0.86 (95% CI 0.82-0.90), sensitivity of 0.78 (95% CI 0.70-0.84), and specificity of 0.80 (95% CI 0.77-0.82). This performance was significantly higher than that of the best SINS-alone model (AUROC 0.75, 95% CI 0.70-0.80) and exceeded the clinical-only model (AUROC 0.82, 95% CI 0.77-0.87), although not with statistical significance. The authors developed a clinically generalizable machine learning model to predict the risk of a new or progressing VCF in an all-comer population. This model addresses limitations of prior work and was trained on the largest cohort of patients and vertebrae published to date. If validated, it could lead to more consistent and systematic identification of high-risk vertebrae, resulting in faster, more accurate triage of patients for optimal management.
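The described pipeline, recursive feature elimination followed by a random forest scored with tenfold cross-validated AUROC, maps onto standard scikit-learn components. A sketch under that assumption, with synthetic data standing in for the clinical-radiomic features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for combined clinical + radiomic features;
# 977 vertebrae with ~13% positives mirrors the cohort proportions above.
X, y = make_classification(n_samples=977, n_features=60, n_informative=12,
                           weights=[0.87, 0.13], random_state=0)

pipe = Pipeline([
    # Recursive feature elimination down to a small subset
    # (the study retained 19 radiomic features).
    ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=19)),
    ("clf", RandomForestClassifier(n_estimators=500, random_state=0)),
])

# Tenfold cross-validated AUROC, matching the reported evaluation protocol.
auroc = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
print(f"AUROC: {auroc.mean():.3f} +/- {auroc.std():.3f}")
```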

DSA-NRP: No-Reflow Prediction from Angiographic Perfusion Dynamics in Stroke EVT

Shreeram Athreya, Carlos Olivares, Ameera Ismail, Kambiz Nael, William Speier, Corey Arnold

arXiv preprint · Jun 20 2025
Following successful large-vessel recanalization via endovascular thrombectomy (EVT) for acute ischemic stroke (AIS), some patients experience a complication known as no-reflow, defined by persistent microvascular hypoperfusion that undermines tissue recovery and worsens clinical outcomes. Although prompt identification is crucial, standard clinical practice relies on perfusion magnetic resonance imaging (MRI) within 24 hours post-procedure, delaying intervention. In this work, we introduce the first machine learning (ML) framework to predict no-reflow immediately after EVT by leveraging previously unexplored intra-procedural digital subtraction angiography (DSA) sequences and clinical variables. Our retrospective analysis included AIS patients treated at UCLA Medical Center (2011-2024) who achieved favorable mTICI scores (2b-3) and underwent pre- and post-procedure MRI. No-reflow was defined as persistent hypoperfusion (Tmax > 6 s) on post-procedural imaging. From DSA sequences (AP and lateral views), we extracted statistical and temporal perfusion features from the target downstream territory to train ML classifiers for predicting no-reflow. Our novel method significantly outperformed a clinical-features baseline (AUC: 0.7703 ± 0.12 vs. 0.5728 ± 0.12; accuracy: 0.8125 ± 0.10 vs. 0.6331 ± 0.09), demonstrating that real-time DSA perfusion dynamics encode critical insights into microvascular integrity. This approach establishes a foundation for immediate, accurate no-reflow prediction, enabling clinicians to proactively manage high-risk patients without reliance on delayed imaging.
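The statistical and temporal perfusion features come from time-intensity curves inside the downstream territory. A hedged sketch of the kind of features one might extract per DSA view (this specific feature set is an illustrative assumption, not the authors' exact list):

```python
import numpy as np

def perfusion_features(frames: np.ndarray, mask: np.ndarray) -> dict:
    """Statistical/temporal features of a DSA sequence (T, H, W) within a territory mask (H, W)."""
    tic = frames[:, mask].mean(axis=1)   # time-intensity curve over the territory, shape (T,)
    peak = int(tic.argmax())
    # Washout slope: linear fit over the post-peak tail (0 if the peak is the last frame).
    slope = (float(np.polyfit(np.arange(peak, len(tic)), tic[peak:], 1)[0])
             if peak < len(tic) - 1 else 0.0)
    return {
        "peak": float(tic.max()),        # peak opacification
        "time_to_peak": peak,            # frames until peak
        "auc": float(tic.sum()),         # contrast passage (unit frame spacing)
        "mean": float(tic.mean()),
        "std": float(tic.std()),
        "washout_slope": slope,
    }

# Toy example: a 30-frame 64x64 sequence with a synthetic contrast bolus.
rng = np.random.default_rng(0)
bolus = np.exp(-0.5 * ((np.arange(30) - 12) / 4.0) ** 2)
frames = rng.normal(0.0, 0.05, (30, 64, 64)) + bolus[:, None, None]
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(perfusion_features(frames, mask))
```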

TextBraTS: Text-Guided Volumetric Brain Tumor Segmentation with Innovative Dataset Development and Fusion Module Exploration

Xiaoyu Shi, Rahul Kumar Jain, Yinhao Li, Ruibo Hou, Jingliang Cheng, Jie Bai, Guohua Zhao, Lanfen Lin, Rui Xu, Yen-wei Chen

arXiv preprint · Jun 20 2025
Deep learning has demonstrated remarkable success in medical image segmentation and computer-aided diagnosis. In particular, numerous advanced methods have achieved state-of-the-art performance in brain tumor segmentation from MRI scans. While recent studies in other medical imaging domains have revealed that integrating textual reports with visual data can enhance segmentation accuracy, the field of brain tumor analysis lacks a comprehensive dataset that combines radiological images with corresponding textual annotations. This limitation has hindered the exploration of multimodal approaches that leverage both imaging and textual data. To bridge this critical gap, we introduce the TextBraTS dataset, the first publicly available volume-level multimodal dataset that contains paired MRI volumes and rich textual annotations, derived from the widely adopted BraTS2020 benchmark. Building on this dataset, we propose a baseline framework with a sequential cross-attention method for text-guided volumetric medical image segmentation. Through extensive experiments with various text-image fusion strategies and templated text formulations, our approach demonstrates significant improvements in brain tumor segmentation accuracy, offering valuable insights into effective multimodal integration techniques. Our dataset, implementation code, and pre-trained models are publicly available at https://github.com/Jupitern52/TextBraTS.
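A sequential cross-attention step in which image tokens query text embeddings can be sketched generically in PyTorch; this illustrates the fusion idea only, not the authors' implementation (dimensions and layer choices are assumptions):

```python
import torch
import torch.nn as nn

class TextImageCrossAttention(nn.Module):
    """Image tokens attend to text tokens; output keeps the image token layout."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        # Query: flattened 3D image features; key/value: report embeddings.
        fused, _ = self.attn(img_tokens, txt_tokens, txt_tokens)
        return self.norm(img_tokens + fused)   # residual connection + norm

# Toy shapes: a batch of 2 volumes flattened to 8x8x8 = 512 tokens, 32 text tokens.
img = torch.randn(2, 512, 256)
txt = torch.randn(2, 32, 256)
out = TextImageCrossAttention()(img, txt)
print(out.shape)  # torch.Size([2, 512, 256])
```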

Trans²-CBCT: A Dual-Transformer Framework for Sparse-View CBCT Reconstruction

Minmin Yang, Huantao Ren, Senem Velipasalar

arXiv preprint · Jun 20 2025
Cone-beam computed tomography (CBCT) using only a few X-ray projection views enables faster scans with lower radiation dose, but the resulting severe under-sampling causes strong artifacts and poor spatial coverage. We address these challenges in a unified framework. First, we replace conventional UNet/ResNet encoders with TransUNet, a hybrid CNN-Transformer model. Convolutional layers capture local details, while self-attention layers enhance global context. We adapt TransUNet to CBCT by combining multi-scale features, querying view-specific features per 3D point, and adding a lightweight attenuation-prediction head. This yields Trans-CBCT, which surpasses prior baselines by 1.17 dB PSNR and 0.0163 SSIM on the LUNA16 dataset with six views. Second, we introduce a neighbor-aware Point Transformer to enforce volumetric coherence. This module uses 3D positional encoding and attention over k-nearest neighbors to improve spatial consistency. The resulting model, Trans²-CBCT, provides an additional gain of 0.63 dB PSNR and 0.0117 SSIM. Experiments on LUNA16 and ToothFairy show consistent gains from six to ten views, validating the effectiveness of combining CNN-Transformer features with point-based geometry reasoning for sparse-view CBCT reconstruction.
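The neighbor-aware Point Transformer restricts attention to each point's k nearest neighbors with a positional term from relative 3D coordinates. A heavily simplified single-head sketch of that idea (not the authors' code; the geometric bias and dimensions are assumptions):

```python
import torch
import torch.nn.functional as F

def knn_attention(xyz: torch.Tensor, feats: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Single-head attention over each 3D point's k nearest neighbors (simplified)."""
    n, c = feats.shape
    idx = torch.cdist(xyz, xyz).topk(k, largest=False).indices   # (N, k) neighbor indices
    keys = feats[idx]                                            # (N, k, C) neighbor features
    # Crude positional bias from relative coordinates (illustrative stand-in
    # for a learned 3D positional encoding).
    rel = xyz[idx] - xyz[:, None, :]                             # (N, k, 3)
    keys = keys + torch.tanh(rel.sum(-1, keepdim=True))
    # Scaled dot-product: each point queries its own neighborhood.
    scores = (keys @ feats[:, :, None]).squeeze(-1) / c ** 0.5   # (N, k)
    w = F.softmax(scores, dim=-1)
    return (w[:, :, None] * keys).sum(1)                         # (N, C) aggregated

pts = torch.randn(1024, 3)          # 3D point positions
f = torch.randn(1024, 64)           # per-point features
print(knn_attention(pts, f).shape)  # torch.Size([1024, 64])
```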

Significance of Papillary and Trabecular Muscular Volume in Right Ventricular Volumetry with Cardiac MR Imaging.

Shibagaki Y, Oka H, Imanishi R, Shimada S, Nakau K, Takahashi S

PubMed paper · Jun 20 2025
Pulmonary valve regurgitation after repaired tetralogy of Fallot (TOF) or double-outlet right ventricle (DORV) causes hypertrophy and papillary muscle enlargement. Cardiac magnetic resonance imaging (CMR) can evaluate right ventricular (RV) dilatation, but the effect of excluding trabecular and papillary muscle (TPM) volume on RV volume, and thus on TOF or DORV reoperation decisions, is unclear. Twenty-three patients with repaired TOF or DORV and 19 healthy controls aged ≥15 years underwent CMR from 2012 to 2022. TPM volume was measured by artificial intelligence. Reoperation was considered when the RV end-diastolic volume index (RVEDVI) exceeded 150 mL/m² or the RV end-systolic volume index (RVESVI) exceeded 80 mL/m². RV volumes were higher in the disease group than in controls (P < 0.001), as were RV mass and TPM volumes (P < 0.001). The reduction in RV volume from excluding TPM volume was 6.3% (2.1-10.5) in controls, 11.7% (6.9-13.8) in the volume-load group, and 13.9% (9.5-19.4) in the volume + pressure-load group. The TPM/RV volume ratio was highest in the volume + pressure-load group (control: 0.07 g/mL, volume: 0.14 g/mL, volume + pressure: 0.17 g/mL) and correlated with QRS duration (R = 0.77). In 3 patients in the volume + pressure-load group, RV volume including TPM met the reoperation threshold, but the TPM-excluded RV volume did not. RV volume measurements that include TPM under volume + pressure loading may help determine appropriate volume thresholds for reoperation.
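The reduction rates above are the percentage change in RV volume when TPM volume is excluded; a worked example with illustrative numbers (not patient data from the study) shows how a reoperation decision can flip around the 150 mL/m² RVEDVI threshold:

```python
def tpm_reduction_rate(rv_with_tpm: float, tpm_volume: float) -> float:
    """Percent reduction in RV volume when trabecular/papillary muscle is excluded."""
    rv_without_tpm = rv_with_tpm - tpm_volume
    return 100.0 * (rv_with_tpm - rv_without_tpm) / rv_with_tpm

# Illustrative: RVEDVI of 155 mL/m2 including TPM, with TPM contributing 21 mL/m2.
rvedvi_incl = 155.0
reduction = tpm_reduction_rate(rvedvi_incl, 21.0)
rvedvi_excl = rvedvi_incl * (1 - reduction / 100)
print(f"reduction: {reduction:.1f}%  ->  RVEDVI excluding TPM: {rvedvi_excl:.1f} mL/m2")
# Against a 150 mL/m2 threshold, the TPM-inclusive volume qualifies for reoperation
# while the TPM-excluded volume (134.0 mL/m2) does not, mirroring the discrepancy above.
```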

Proportional Sensitivity in Generative Adversarial Network (GAN)-Augmented Brain Tumor Classification Using Convolutional Neural Network

Mahin Montasir Afif, Abdullah Al Noman, K. M. Tahsin Kabir, Md. Mortuza Ahmmed, Md. Mostafizur Rahman, Mufti Mahmud, Md. Ashraful Babu

arXiv preprint · Jun 20 2025
Generative adversarial networks (GANs) have shown potential in expanding limited medical imaging datasets. This study explores how different ratios of GAN-generated and real brain tumor MRI images impact the performance of a CNN in classifying healthy vs. tumorous scans. A DCGAN was used to create synthetic images, which were mixed with real ones at various ratios to train a custom CNN; the CNN was then evaluated on a separate real-world test set. Our results indicate that the model maintains high sensitivity and precision in tumor classification even when trained predominantly on synthetic data. When only a small portion of GAN data was added, such as 900 real images and 100 GAN images, the model achieved excellent performance, with test accuracy reaching 95.2% and precision, recall, and F1-score all exceeding 95%. However, as the proportion of GAN images increased further, performance gradually declined. This study suggests that while GANs are useful for augmenting limited datasets, especially when real data is scarce, too much synthetic data can introduce artifacts that affect the model's ability to generalize to real-world cases.
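Assembling a training set at a fixed real:synthetic ratio is a simple sampling step; a sketch with random arrays standing in for real and DCGAN-generated slices (the 900:100 split mirrors the ratio quoted above, and all data here is synthetic filler):

```python
import numpy as np

def mix_training_set(real_imgs, real_labels, gan_imgs, gan_labels,
                     n_real: int, n_gan: int, seed: int = 0):
    """Build a training set with a chosen real:synthetic ratio, then shuffle."""
    rng = np.random.default_rng(seed)
    ri = rng.choice(len(real_imgs), n_real, replace=False)
    gi = rng.choice(len(gan_imgs), n_gan, replace=False)
    X = np.concatenate([real_imgs[ri], gan_imgs[gi]])
    y = np.concatenate([real_labels[ri], gan_labels[gi]])
    perm = rng.permutation(len(X))          # shuffle so batches mix both sources
    return X[perm], y[perm]

# Toy stand-ins for real and GAN-generated MRI slices (64x64 grayscale).
real = np.random.rand(2000, 64, 64); real_y = np.random.randint(0, 2, 2000)
fake = np.random.rand(1000, 64, 64); fake_y = np.random.randint(0, 2, 1000)
X, y = mix_training_set(real, real_y, fake, fake_y, n_real=900, n_gan=100)
print(X.shape, y.shape)  # (1000, 64, 64) (1000,)
```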

Robust Training with Data Augmentation for Medical Imaging Classification

Josué Martínez-Martínez, Olivia Brown, Mostafa Karami, Sheida Nabavi

arXiv preprint · Jun 20 2025
Deep neural networks are increasingly being used to detect and diagnose medical conditions from medical imaging. Despite their utility, these models are highly vulnerable to adversarial attacks and distribution shifts, which can affect diagnostic reliability and undermine trust among healthcare professionals. In this study, we propose a robust training algorithm with data augmentation (RTDA) to mitigate these vulnerabilities in medical image classification. We benchmark the robustness of RTDA and six competing baseline techniques, including adversarial training and data augmentation approaches in isolation and in combination, against adversarial perturbations and natural variations, using experimental datasets spanning three imaging technologies (mammograms, X-rays, and ultrasound). We demonstrate that RTDA achieves superior robustness against adversarial attacks and improved generalization under distribution shift in each image classification task, while maintaining high clean accuracy.
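A typical way to combine data augmentation with adversarial training in a single step looks like the following PyTorch sketch; FGSM is used as a stand-in perturbation, since the abstract does not specify RTDA's exact algorithm:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.01):
    """One-step FGSM adversarial example (illustrative perturbation choice)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def train_step(model, opt, x, y, augment, eps=0.01):
    """Train on augmented clean images and their adversarial counterparts."""
    x_aug = augment(x)                      # e.g., flips/noise via torchvision transforms
    x_adv = fgsm(model, x_aug, y, eps)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_aug), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: a tiny classifier on random "scans".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 2, (16,))
print(train_step(model, opt, x, y, augment=lambda t: torch.flip(t, dims=[-1])))
```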

Combination of 2D and 3D nnU-Net for ground glass opacity segmentation in CT images of Post-COVID-19 patients.

Nguyen QH, Hoang DA, Pham HV

PubMed paper · Jun 20 2025
The COVID-19 pandemic has had a significant impact on global health, highlighting the need for effective management of post-recovery symptoms. In this context, ground-glass opacity (GGO) in lung computed tomography (CT) scans is a critical indicator for early intervention. Recent work has organized challenges to refine GGO segmentation techniques and to compare state-of-the-art methods for analyzing lung CT images of patients recovering from COVID-19. While many methods in these challenges use the nnU-Net architecture, its generic configuration does not fully address the characteristics of GGO regions, such as infected-area delineation, irregular shapes, and fuzzy boundaries. This research advances the nnU-Net framework with a specialized machine learning approach to accurately segment GGO in lung CT scans of post-COVID-19 patients. We propose a novel two-stage segmentation method that combines nnU-Net 2D and 3D models for lung and lesion segmentation and incorporates an attention mechanism. Combining the models, together with different loss functions during training, improves automatic segmentation accuracy. Experimental results show that the proposed model's DSC ranks fifth among the compared methods, and its sensitivity is the second highest, indicating a higher true segmentation rate than most competing methods. The proposed method achieved a Hausdorff95 of 54.566, surface Dice of 0.7193, sensitivity of 0.7528, and specificity of 0.7749. Compared with state-of-the-art methods, it markedly improves segmentation of infected areas. Deployed as a combination of 2D and 3D models in a real-world case study, it demonstrated the capacity to comprehensively and correctly detect lung lesions. Additionally, the boundary loss function helped achieve more precise segmentation for low-resolution images, and initially segmenting the lung area reduced the volume of images requiring processing and the cost of training.
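One way to combine 2D and 3D nnU-Net outputs is to average their probability maps before thresholding; a sketch of that fusion step (the equal weighting is an illustrative assumption, not necessarily the paper's exact scheme):

```python
import numpy as np

def fuse_2d_3d(prob_2d: np.ndarray, prob_3d: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse slice-wise 2D and volumetric 3D GGO probability maps of shape (D, H, W)."""
    assert prob_2d.shape == prob_3d.shape
    fused = w * prob_2d + (1 - w) * prob_3d    # weighted average of probabilities
    return (fused >= 0.5).astype(np.uint8)     # binary GGO mask

# Toy volumes: the 2D model sees each slice independently; the 3D model sees context.
rng = np.random.default_rng(0)
p2d = rng.random((32, 128, 128))
p3d = rng.random((32, 128, 128))
mask = fuse_2d_3d(p2d, p3d)
print(mask.shape, mask.mean())   # fraction of voxels labeled GGO
```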