Deep Learning based Collateral Scoring on Multi-Phase CTA in patients with acute ischemic stroke in MCA region.

Liu H, Zhang J, Chen S, Ganesh A, Xu Y, Hu B, Menon BK, Qiu W

PubMed · Jul 7, 2025
Collateral circulation is a critical determinant of clinical outcomes in acute ischemic stroke (AIS) patients and plays a key role in patient selection for endovascular therapy. This study aimed to develop an automated method for assessing and quantifying collateral circulation on multi-phase CT angiography, with the goal of reducing observer variability and improving diagnostic efficiency. This retrospective study included mCTA images from 420 AIS patients within 14 hours of stroke symptom onset. A deep learning-based classification method with a tailored preprocessing module was developed to assess collateral circulation status. Manual evaluations using the simplified Menon method served as the ground truth. Model performance was assessed through five-fold cross-validation using metrics including accuracy, F1 score, precision, sensitivity, specificity, and the area under the receiver operating characteristic curve. The median age of the 420 patients was 73 years (IQR: 64-80 years; 222 men), and the median time from symptom onset to mCTA acquisition was 123 minutes (IQR: 79-245.5 minutes). The proposed framework achieved an accuracy of 87.6% for three-class collateral scores (good, intermediate, poor), with an F1 score of 85.7%, precision of 83.8%, sensitivity of 89.3%, specificity of 92.9%, AUC of 93.7%, ICC of 0.832, and kappa of 0.781. For two-class collateral scores, it achieved 94.0% accuracy for good vs. non-good scores (F1 score 94.4%, precision 95.9%, sensitivity 93.0%, specificity 94.1%, AUC 97.1%, ICC 0.882, kappa 0.881) and 97.1% accuracy for poor vs. non-poor scores (F1 score 98.5%, precision 98.0%, sensitivity 99.0%, specificity 84.8%, AUC 95.6%, ICC 0.740, kappa 0.738). Additional analyses demonstrated that multi-phase CTA outperformed single- or two-phase CTA in collateral assessment. The proposed deep learning framework demonstrated high accuracy and consistency with radiologist-assigned scores for evaluating collateral circulation on multi-phase CTA in AIS patients. This method may offer a useful tool to aid clinical decision-making, reducing variability and improving diagnostic workflow. AIS = Acute Ischemic Stroke; mCTA = multi-phase Computed Tomography Angiography; DL = deep learning; AUC = area under the receiver operating characteristic curve; IQR = interquartile range; ROC = receiver operating characteristic.
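The evaluation protocol reported above (five-fold cross-validation with accuracy, F1, precision, sensitivity, specificity, and AUC) can be reproduced with scikit-learn. The sketch below is illustrative only: the feature matrix, the label coding, and the logistic-regression stand-in classifier are assumptions, not the paper's deep network operating on mCTA volumes.

```python
# Minimal sketch of five-fold cross-validated multi-class evaluation, assuming
# per-patient feature vectors and a three-class collateral label (good /
# intermediate / poor). All data and the classifier are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression  # stand-in classifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score, confusion_matrix)

def specificity_macro(y_true, y_pred, n_classes=3):
    """Macro-averaged specificity from the one-vs-rest confusion matrices."""
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    specs = []
    for k in range(n_classes):
        tn = cm.sum() - cm[k, :].sum() - cm[:, k].sum() + cm[k, k]
        fp = cm[:, k].sum() - cm[k, k]
        specs.append(tn / (tn + fp))
    return float(np.mean(specs))

X = np.random.rand(420, 64)           # hypothetical per-patient features
y = np.random.randint(0, 3, 420)      # 0=poor, 1=intermediate, 2=good (assumed coding)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, te) in enumerate(cv.split(X, y)):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    pred, prob = clf.predict(X[te]), clf.predict_proba(X[te])
    print(f"fold {fold}: "
          f"acc={accuracy_score(y[te], pred):.3f} "
          f"F1={f1_score(y[te], pred, average='macro'):.3f} "
          f"prec={precision_score(y[te], pred, average='macro'):.3f} "
          f"sens={recall_score(y[te], pred, average='macro'):.3f} "
          f"spec={specificity_macro(y[te], pred):.3f} "
          f"AUC={roc_auc_score(y[te], prob, multi_class='ovr'):.3f}")
```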

AG-MS3D-CNN multiscale attention guided 3D convolutional neural network for robust brain tumor segmentation across MRI protocols.

Lilhore UK, Sunder R, Simaiya S, Alsafyani M, Monish Khan MD, Alroobaea R, Alsufyani H, Baqasah AM

PubMed · Jul 7, 2025
Accurate segmentation of brain tumors from multimodal Magnetic Resonance Imaging (MRI) plays a critical role in diagnosis, treatment planning, and disease monitoring in neuro-oncology. Traditional methods of tumor segmentation, often manual and labour-intensive, are prone to inconsistencies and inter-observer variability. Recently, deep learning models, particularly Convolutional Neural Networks (CNNs), have shown great promise in automating this process. However, these models face challenges in terms of generalization across diverse datasets, accurate tumor boundary delineation, and uncertainty estimation. To address these challenges, we propose AG-MS3D-CNN, an attention-guided multiscale 3D convolutional neural network for brain tumor segmentation. Our model integrates local and global contextual information through multiscale feature extraction and leverages spatial attention mechanisms to enhance boundary delineation, particularly in complex tumor regions. We also introduce Monte Carlo dropout for uncertainty estimation, providing clinicians with confidence scores for each segmentation, which is crucial for informed decision-making. Furthermore, we adopt a multitask learning framework, which enables the simultaneous segmentation, classification, and volume estimation of tumors. To ensure robustness and generalizability across diverse MRI acquisition protocols and scanners, we integrate a domain adaptation module into the network. Extensive evaluations on the BraTS 2021 dataset and additional external datasets, such as OASIS, ADNI, and IXI, demonstrate the superior performance of AG-MS3D-CNN compared to existing state-of-the-art methods. Our model achieves high Dice scores and shows excellent robustness, making it a valuable tool for clinical decision support in neuro-oncology.
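The Monte Carlo dropout idea mentioned above can be sketched in a few lines of PyTorch: dropout layers are kept active at inference time, several stochastic forward passes are averaged, and the per-voxel variance serves as an uncertainty map. The tiny 3D CNN below is a placeholder, not the AG-MS3D-CNN architecture itself.

```python
# Hedged sketch of Monte Carlo dropout for segmentation uncertainty estimation.
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """Placeholder 3D segmentation network with a dropout layer."""
    def __init__(self, in_ch=4, n_classes=4, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p_drop),
            nn.Conv3d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, volume, n_samples=20):
    """Run n_samples stochastic passes; return mean softmax and voxelwise variance."""
    model.eval()
    for m in model.modules():              # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout3d)):
            m.train()
    probs = torch.stack([torch.softmax(model(volume), dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)     # segmentation probabilities, uncertainty

model = Tiny3DSegNet()
vol = torch.randn(1, 4, 32, 32, 32)        # hypothetical multimodal MRI patch
mean_prob, uncertainty = mc_dropout_predict(model, vol)
print(mean_prob.shape, uncertainty.shape)
```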

Prediction of Motor Symptom Progression of Parkinson's Disease Through Multimodal Imaging-Based Machine Learning.

Dai Y, Imami M, Hu R, Zhang C, Zhao L, Kargilis DC, Zhang H, Yu G, Liao WH, Jiao Z, Zhu C, Yang L, Bai HX

PubMed · Jul 7, 2025
The unrelenting progression of Parkinson's disease (PD) leads to severely impaired quality of life, with considerable variability in progression rates among patients. Identifying biomarkers of PD progression could improve clinical monitoring and management. Radiomics, which facilitates data extraction from imaging for use in machine learning models, offers a promising approach to this challenge. This study investigated the use of multi-modality imaging, combining conventional magnetic resonance imaging (MRI) and dopamine transporter single photon emission computed tomography (DAT-SPECT), to predict motor progression in PD. Motor progression was measured by changes in the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) motor subscale scores. Radiomic features were selected from the midbrain region in MRI and from the caudate nucleus, putamen, and ventral striatum in DAT-SPECT. Patients were stratified into fast vs. slow progression based on the change in MDS-UPDRS at follow-up. Various feature selection methods and machine learning classifiers were evaluated for each modality, and the best-performing models were combined into an ensemble. On the internal test set, the ensemble model, which integrated clinical information, T1WI, T2WI, and DAT-SPECT, achieved a ROC AUC of 0.93 (95% CI: 0.80-1.00), PR AUC of 0.88 (95% CI: 0.61-1.00), accuracy of 0.85 (95% CI: 0.70-0.89), sensitivity of 0.72 (95% CI: 0.43-1.00), and specificity of 0.92 (95% CI: 0.77-1.00). On the external test set, the ensemble model outperformed single-modality models with a ROC AUC of 0.77 (95% CI: 0.53-0.93), PR AUC of 0.79 (95% CI: 0.56-0.95), accuracy of 0.68 (95% CI: 0.50-0.86), sensitivity of 0.53 (95% CI: 0.27-0.82), and specificity of 0.82 (95% CI: 0.55-1.00). In conclusion, this study developed an imaging-based model to identify baseline characteristics predictive of disease progression in PD patients. The findings highlight the strength of using multiple imaging modalities and integrating imaging data with clinical information to enhance the prediction of motor progression in PD.
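A late-fusion ensemble of per-modality classifiers, as described above, can be illustrated with a short scikit-learn sketch: separate models are trained on clinical, T1WI, T2WI, and DAT-SPECT feature blocks, their probabilities averaged, and the result scored with ROC AUC and PR AUC. The feature arrays, label coding, and random-forest base learners are assumptions, not the study's tuned pipeline.

```python
# Illustrative probability-averaging ensemble over hypothetical per-modality features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_train, n_test = 120, 40
modalities = {  # hypothetical per-modality feature blocks
    "clinical":  (rng.random((n_train, 10)), rng.random((n_test, 10))),
    "T1WI":      (rng.random((n_train, 50)), rng.random((n_test, 50))),
    "T2WI":      (rng.random((n_train, 50)), rng.random((n_test, 50))),
    "DAT-SPECT": (rng.random((n_train, 30)), rng.random((n_test, 30))),
}
y_train = rng.integers(0, 2, n_train)   # 1 = fast progression (assumed coding)
y_test = rng.integers(0, 2, n_test)

per_modality_probs = []
for modality, (X_tr, X_te) in modalities.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_train)
    per_modality_probs.append(clf.predict_proba(X_te)[:, 1])

ensemble_prob = np.mean(per_modality_probs, axis=0)  # simple probability averaging
print("ROC AUC:", roc_auc_score(y_test, ensemble_prob))
print("PR AUC :", average_precision_score(y_test, ensemble_prob))
```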

Impact of super-resolution deep learning-based reconstruction for hippocampal MRI: A volunteer and phantom study.

Takada S, Nakaura T, Yoshida N, Uetani H, Shiraishi K, Kobayashi N, Matsuo K, Morita K, Nagayama Y, Kidoh M, Yamashita Y, Takayanagi R, Hirai T

PubMed · Jul 5, 2025
To evaluate the effects of super-resolution deep learning-based reconstruction (SR-DLR) on thin-slice T2-weighted hippocampal MR image quality using 3 T MRI, in both human volunteers and phantoms. Thirteen healthy volunteers underwent hippocampal MRI at standard and high resolutions. Original (standard-resolution; StR) images were reconstructed with and without deep learning-based reconstruction (DLR) (matrix = 320 × 320), and with SR-DLR (matrix = 960 × 960). High-resolution (HR) images were also reconstructed with and without DLR (matrix = 960 × 960). Contrast, contrast-to-noise ratio (CNR), and septum slope were analyzed. Two radiologists evaluated the images for noise, contrast, artifacts, sharpness, and overall quality. Quantitative and qualitative results are reported as medians and interquartile ranges (IQR). Comparisons used the Wilcoxon signed-rank test with Holm correction. We also scanned an American College of Radiology (ACR) phantom to evaluate the ability of our SR-DLR approach to reduce artifacts induced by zero-padding interpolation (ZIP). SR-DLR exhibited contrast comparable to that of the original images and significantly higher than that of HR images. Its slope was comparable to that of HR images but significantly steeper than that of StR images (p < 0.01). Furthermore, the CNR of SR-DLR (10.53; IQR: 10.08, 11.69) was significantly superior to that of StR images without DLR (7.5; IQR: 6.4, 8.37), StR images with DLR (8.73; IQR: 7.68, 9.0), HR images without DLR (2.24; IQR: 1.43, 2.38), and HR images with DLR (4.84; IQR: 2.99, 5.43) (p < 0.05). In the phantom study, artifacts induced by ZIP were scarcely observed when using SR-DLR. SR-DLR for hippocampal MRI potentially improves image quality beyond that of actual HR images while reducing acquisition time.
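The contrast-to-noise ratio (CNR) comparison reported above can be expressed with a short numerical sketch. The ROI positions are hypothetical, and CNR is computed here as the difference of two region means divided by the standard deviation of background noise, which is one common convention; the paper may define it differently.

```python
# Minimal CNR computation sketch on a placeholder reconstructed slice.
import numpy as np

def cnr(roi_a, roi_b, background):
    """CNR = |mean(A) - mean(B)| / std(background)."""
    return abs(roi_a.mean() - roi_b.mean()) / background.std()

img = np.random.rand(960, 960)           # placeholder 960 x 960 reconstructed slice
hippocampus = img[400:430, 450:480]      # assumed ROI locations
white_matter = img[300:330, 450:480]
noise = img[20:60, 20:60]                # assumed background/noise region
print(f"CNR = {cnr(hippocampus, white_matter, noise):.2f}")
```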

Unveiling genetic architecture of white matter microstructure through unsupervised deep representation learning of fractional anisotropy images.

Zhao X, Xie Z, He W, Fornage M, Zhi D

PubMed · Jul 5, 2025
Fractional anisotropy (FA) derived from diffusion MRI is a widely used marker of white matter (WM) integrity. However, conventional FA-based genetic studies focus on phenotypes representing tract- or atlas-defined averages, which may oversimplify spatial patterns of WM integrity and thus limit genetic discovery. Here, we propose a deep learning-based framework, termed unsupervised deep representation of white matter (UDR-WM), to extract brain-wide FA features, referred to as UDIP-FAs, that capture distributed microstructural variation without prior anatomical assumptions. UDIP-FAs exhibit enhanced sensitivity to aging and substantially higher SNP-based heritability compared to traditional FA phenotypes (P < 2.20e-16, Mann-Whitney U test; mean h² = 50.81%). Through multivariate GWAS, we identified 939 significant lead SNPs in 586 loci, mapped to 3480 genes, dubbed UDIP-FA-related genes (UFAGs). UFAGs are overexpressed in glial cells, particularly astrocytes and oligodendrocytes (Bonferroni-corrected P < 2e-6, Wald test), and show strong overlap with risk gene sets for schizophrenia and Parkinson's disease (Bonferroni-corrected P < 7.06e-3, Fisher exact test). UDIP-FAs are genetically correlated with multiple brain disorders and cognitive traits, including fluid intelligence and reaction time, and are associated with polygenic risk for bone mineral density. Network analyses reveal that UFAGs form disease-enriched modules across protein-protein interaction and co-expression networks, implicating core pathways in myelination and axonal structure. Notably, several UFAGs, including ACHE and ALDH2, are targets of existing neuropsychiatric drugs. Together, our findings establish UDIP-FA as a biologically and clinically informative brain phenotype, enabling high-resolution dissection of white matter genetic architecture and its genetic links to complex brain traits.
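The heritability comparison quoted above (a Mann-Whitney U test between SNP-based heritability estimates of UDIP-FA phenotypes and traditional FA phenotypes) can be sketched as follows; the h² arrays are random placeholders, not the study's estimates.

```python
# Sketch of a Mann-Whitney U comparison of two sets of heritability estimates.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
h2_udip_fa = rng.normal(0.50, 0.08, size=128).clip(0, 1)         # hypothetical values
h2_traditional_fa = rng.normal(0.30, 0.08, size=40).clip(0, 1)   # hypothetical values

stat, p = mannwhitneyu(h2_udip_fa, h2_traditional_fa, alternative="greater")
print(f"U = {stat:.1f}, one-sided P = {p:.2e}, mean h2 (UDIP-FA) = {h2_udip_fa.mean():.2%}")
```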

MRI-based detection of multiple sclerosis using an optimized attention-based deep learning framework.

Palaniappan R, Delshi Howsalya Devi R, Mathankumar M, Ilangovan K

PubMed · Jul 5, 2025
Multiple Sclerosis (MS) is a chronic neurological disorder affecting millions worldwide. Early detection is vital to prevent long-term disability. Magnetic Resonance Imaging (MRI) plays a crucial role in MS diagnosis, yet differentiating MS lesions from other brain anomalies remains a complex challenge. This work aims to develop and evaluate a novel deep learning framework, 2DRK-MSCAN, for the early and accurate detection of MS lesions using MRI data. The proposed approach is validated using three publicly available MRI-based brain tumor datasets and comprises three main stages. First, Gradient Domain Guided Filtering (GDGF) is applied during pre-processing to enhance image quality. Next, an EfficientNetV2L backbone embedded within a U-shaped encoder-decoder architecture facilitates precise segmentation and rich feature extraction. Finally, classification of MS lesions is performed using the 2DRK-MSCAN model, which incorporates deep diffusion residual kernels and multiscale snake convolutional attention mechanisms to improve detection accuracy and robustness. The proposed framework achieved 99.9% accuracy in cross-validation experiments, demonstrating its capability to distinguish MS lesions from other anomalies with high precision. The 2DRK-MSCAN framework offers a reliable and effective solution for early MS detection using MRI. While clinical validation is ongoing, the method shows promising potential for aiding timely intervention and improving patient care.
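The U-shaped encoder-decoder idea referenced above can be illustrated with a generic sketch. A small convolutional encoder stands in for the EfficientNetV2L backbone named in the abstract, and skip connections carry encoder features to the decoder; this is not the 2DRK-MSCAN model, only the architectural pattern.

```python
# Generic U-shaped encoder-decoder with skip connections (illustrative only).
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)          # 64 (skip) + 64 (upsampled) channels in
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel lesion logits

print(TinyUNet()(torch.randn(1, 1, 128, 128)).shape)  # -> torch.Size([1, 2, 128, 128])
```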

Improving risk assessment of local failure in brain metastases patients using vision transformers - A multicentric development and validation study.

Erdur AC, Scholz D, Nguyen QM, Buchner JA, Mayinger M, Christ SM, Brunner TB, Wittig A, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus JU, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Bilger-Z A, Grosu AL, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Rueckert D, Peeken JC

PubMed · Jul 4, 2025
This study investigates the use of Vision Transformers (ViTs) to predict Freedom from Local Failure (FFLF) in patients with brain metastases using pre-operative MRI scans. The goal is to develop a model that enhances risk stratification and informs personalized treatment strategies. Within the AURORA retrospective trial, patients (n = 352) who received surgical resection followed by post-operative stereotactic radiotherapy (SRT) were collected from seven hospitals. We trained our ViT for the direct image-to-risk task on T1-CE and FLAIR sequences and incorporated clinical features alongside the imaging data. We employed segmentation-guided image modifications, model adaptations, and specialized patient sampling strategies during training. The model was evaluated with five-fold cross-validation and ensemble learning across all validation runs. An external, international test cohort (n = 99) within the dataset was used to assess the generalization capabilities of the model, and saliency maps were generated for explainability analysis. We achieved a C-index of 0.7982 on the test cohort, surpassing all clinical, CNN-based, and hybrid baselines. Kaplan-Meier analysis showed significant FFLF risk stratification. Saliency maps focusing on the brain metastasis core confirmed that model explanations aligned with expert observations. Our ViT-based model offers potential for personalized treatment strategies and follow-up regimens in patients with brain metastases. It provides an alternative to radiomics as a robust, automated tool for clinical workflows, capable of improving patient outcomes through effective risk assessment and stratification.
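The concordance-index (C-index) evaluation used above can be sketched briefly: risk scores from the five cross-validation models are averaged (ensembling over validation runs) and compared against follow-up times and event indicators. All arrays below are synthetic placeholders.

```python
# Sketch of ensemble risk scoring and C-index evaluation with lifelines.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n_patients = 99                                   # size of the external test cohort
fold_risk_scores = rng.random((5, n_patients))    # hypothetical per-fold model outputs
risk = fold_risk_scores.mean(axis=0)              # ensemble risk score per patient

time_to_event = rng.exponential(24, n_patients)   # months to local failure / censoring
event_observed = rng.integers(0, 2, n_patients)   # 1 = local failure observed

# lifelines expects higher predicted scores for longer event-free time, so negate risk.
print("C-index:", concordance_index(time_to_event, -risk, event_observed))
```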

A preliminary attempt to harmonize using physics-constrained deep neural networks for multisite and multiscanner MRI datasets (PhyCHarm).

Lee G, Ye DH, Oh SH

PubMed · Jul 4, 2025
In magnetic resonance imaging (MRI), variations in scan parameters and scanner specifications can result in differences in image appearance. To minimize these differences, harmonization has been suggested as a crucial image processing technique for MRI. In this study, we developed an MR physics-based harmonization framework, the Physics-Constrained Deep Neural Network for multisite and multiscanner Harmonization (PhyCHarm). PhyCHarm includes two deep neural networks: (1) the Quantitative Maps Generator, which generates T1 and M0 maps, and (2) the Harmonization Network. We used an open dataset consisting of 3T MP2RAGE images from 50 healthy individuals for the Quantitative Maps Generator and a traveling dataset consisting of 3T T1w images from 9 healthy individuals for the Harmonization Network. PhyCHarm was evaluated using the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and normalized root mean square error (NRMSE) for the Quantitative Maps Generator, and using SSIM, PSNR, and volumetric analysis for the Harmonization Network. PhyCHarm demonstrated increased SSIM and PSNR and the highest Dice score in the FSL FAST segmentation results for gray and white matter compared to U-Net, Pix2Pix, CALAMITI, and HarmonizingFlows. PhyCHarm also showed a greater reduction in volume differences after harmonization for gray and white matter than U-Net, Pix2Pix, CALAMITI, or HarmonizingFlows. As an initial step toward developing advanced harmonization techniques, we investigated the applicability of physics-based constraints within a supervised training strategy. The proposed physics constraints could be integrated with unsupervised methods, paving the way for more sophisticated harmonization quality.
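The image-similarity metrics named above (SSIM, PSNR, NRMSE) are available in scikit-image; a hedged sketch follows, with the arrays standing in for co-registered reference and harmonized T1w images from the traveling-subject dataset.

```python
# Sketch of SSIM / PSNR / NRMSE evaluation between a reference and a harmonized image.
import numpy as np
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             normalized_root_mse)

rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)                       # target-site image
harmonized = (reference + 0.05 * rng.standard_normal((256, 256))).astype(np.float32)

data_range = float(reference.max() - reference.min())
print("SSIM :", structural_similarity(reference, harmonized, data_range=data_range))
print("PSNR :", peak_signal_noise_ratio(reference, harmonized, data_range=data_range))
print("NRMSE:", normalized_root_mse(reference, harmonized))
```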

An Advanced Deep Learning Framework for Ischemic and Hemorrhagic Brain Stroke Diagnosis Using Computed Tomography (CT) Images

Md. Sabbir Hossen, Eshat Ahmed Shuvo, Shibbir Ahmed Arif, Pabon Shaha, Md. Saiduzzaman, Mostofa Kamal Nasir

arXiv preprint · Jul 4, 2025
Brain stroke is one of the leading causes of mortality and long-term disability worldwide, highlighting the need for precise and fast prediction techniques. Computed Tomography (CT) scan is considered one of the most effective methods for diagnosing brain strokes. The majority of stroke classification techniques rely on a single slice-level prediction mechanism, allowing the radiologist to manually choose the most critical CT slice from the original CT volume. Although clinical evaluations are often used in traditional diagnostic procedures, machine learning (ML) has opened up new avenues for improving stroke diagnosis. To supplement traditional diagnostic techniques, this study investigates the use of machine learning models, specifically concerning the prediction of brain stroke at an early stage utilizing CT scan images. In this research, we proposed a novel approach to brain stroke detection leveraging machine learning techniques, focusing on optimizing classification performance with pre-trained deep learning models and advanced optimization strategies. Pre-trained models, including DenseNet201, InceptionV3, MobileNetV2, ResNet50, and Xception, are utilized for feature extraction. Additionally, we employed feature engineering techniques, including BFO, PCA, and LDA, to enhance models' performance further. These features are subsequently classified using machine learning algorithms such as SVC, RF, XGB, DT, LR, KNN, and GNB. Our experiments demonstrate that the combination of MobileNetV2, LDA, and SVC achieved the highest classification accuracy of 97.93%, significantly outperforming other model-optimizer-classifier combinations. The results underline the effectiveness of integrating lightweight pre-trained models with robust optimization and classification techniques for brain stroke diagnosis.
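The best-performing combination reported above (MobileNetV2 features, LDA reduction, SVC classification) maps naturally onto a Keras-plus-scikit-learn pipeline. The sketch below is illustrative: the image arrays, label coding, and hyperparameters are assumptions, and the study's exact preprocessing and tuning are not reproduced.

```python
# Illustrative transfer-learning pipeline: frozen MobileNetV2 features -> LDA -> SVC.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Frozen ImageNet backbone with global average pooling -> one 1280-d vector per CT slice.
backbone = MobileNetV2(include_top=False, weights="imagenet", pooling="avg",
                       input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images), verbose=0)

images = np.random.rand(32, 224, 224, 3) * 255     # placeholder CT slices
labels = np.random.randint(0, 2, 32)                # 0 = normal, 1 = stroke (assumed coding)

features = extract_features(images)
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, features, labels, cv=4).mean())
```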

Intelligent brain tumor detection using hybrid finetuned deep transfer features and ensemble machine learning algorithms.

Salakapuri R, Terlapu PV, Kalidindi KR, Balaka RN, Jayaram D, Ravikumar T

PubMed · Jul 4, 2025
Brain tumors (BTs) are severe neurological disorders that affect more than 308,000 people each year worldwide, with a mortality rate of over 251,000 deaths annually (IARC, 2020). Detecting BTs is complex because they vary in nature, and early diagnosis is essential for better survival rates. This study presents a new system for detecting BTs that combines deep learning (DL) and machine learning (ML) techniques. The system uses advanced models such as Inception-V3, ResNet-50, and VGG-16 for feature extraction and PCA for dimensionality reduction. It also employs ensemble methods such as stacking, k-NN, gradient boosting, AdaBoost, multi-layer perceptron (MLP), and support vector machines for classification, predicting BTs from MRI scans. The MRI scans were resized to 224 × 224 pixels, and pixel intensities were normalized to a [0, 1] scale. A Gaussian filter was applied for stability, and the Keras ImageDataGenerator was used for image augmentation, applying methods such as zooming and ±10% brightness adjustments. The dataset contains 5,712 MRI scans classified into four groups: meningioma, no tumor, glioma, and pituitary. Tenfold cross-validation was used to assess model reliability. Deep transfer learning (TL) and ensemble ML models work well together and showed excellent results in detecting BTs. The stacking ensemble model achieved the highest accuracy across all feature extraction methods, with ResNet-50 features reduced by PCA (500 components) producing an accuracy of 0.957 (95% CI: 0.948-0.966) and an AUC of 0.996 (95% CI: 0.989-0.998), significantly outperforming baselines (p < 0.01). Neural networks and gradient-boosting models also showed strong performance. The stacking model is robust and reliable, making this method useful for medical applications. Future studies will focus on multi-modal imaging to further improve diagnostic accuracy and the early detection of brain tumors.
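The winning configuration described above (deep features reduced with PCA and classified with a stacking ensemble under tenfold cross-validation) can be sketched with scikit-learn. The feature arrays are synthetic, and the PCA dimensionality is reduced so the toy example runs; the study used 500 components on ResNet-50 features.

```python
# Sketch of a PCA + stacking-ensemble classifier evaluated with tenfold CV.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.random((200, 2048))    # placeholder ResNet-50 pooled features
labels = rng.integers(0, 4, 200)      # meningioma / no tumor / glioma / pituitary

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("gb", GradientBoostingClassifier()),
        ("ada", AdaBoostClassifier()),
        ("mlp", MLPClassifier(max_iter=500)),
        ("svc", SVC(probability=True)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
model = make_pipeline(PCA(n_components=50), stack)   # 500 components in the study
scores = cross_val_score(model, features, labels, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```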