
Pilot research on predicting the sub-volume with high risk of tumor recurrence inside peritumoral edema using the ratio-maxiADC/meanADC from the advanced MRI.

Zhang J, Liu H, Wu Y, Zhu J, Wang Y, Zhou Y, Wang M, Sun Q, Che F, Li B

pubmed · papers · Sep 24 2025
This study aimed to identify key imaging parameters from traditional and advanced MR sequences within the peritumoral edema of glioblastoma that could predict the sub-volume at high risk of tumor recurrence. The retrospective cohort comprised 32 cases with recurrent glioblastoma, and the retrospective validation cohort comprised 5 cases. Volumes of interest (VOIs) covering the tumor and edema were manually contoured on each MR sequence. Rigid registration was performed between the sequences acquired before and after tumor recurrence. The edema before recurrence was divided into subedema-rec and subedema-no-rec depending on whether tumor recurrence occurred in that region after registration. Histogram parameters of the VOI on each sequence were collected and statistically analyzed. Alongside Spearman's rank correlation analysis, Wilcoxon's paired test, and least absolute shrinkage and selection operator (LASSO) analysis, a forward stepwise logistic regression model (FSLRM) was developed and compared with two machine learning models to distinguish subedema-rec from subedema-no-rec. The efficiency and applicability of the model were evaluated using receiver operating characteristic (ROC) curve analysis, image-based prediction, and pathological detection. Characteristics from the ADC map differed between subedema-rec and subedema-no-rec, including the standard deviation of the mean ADC value (stdmeanADC), the maximum ADC value (maxiADC), the minimum ADC value (miniADC), the Ratio-maxiADC/meanADC (maxiADC divided by meanADC), and the kurtosis coefficient of the ADC value (all P < 0.05). FSLRM showed that the area under the ROC curve (AUC) of a single-parameter model based on Ratio-maxiADC/meanADC (0.823) was higher than that of the support vector machine (0.813) and random forest (0.592) models; in the retrospective validation cohort, the AUC was 0.776. Image-based location prediction revealed that tumors recurred mostly in areas with Ratio-maxiADC/meanADC less than 2.408. Pathological detection in 10 patients confirmed that tumor cells were scattered within the subedema-rec but not within the subedema-no-rec. The Ratio-maxiADC/meanADC is useful for predicting the location of the subedema-rec.
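
As an illustration of the abstract's headline feature, the minimal sketch below computes the reported ADC histogram parameters, including Ratio-maxiADC/meanADC, inside a VOI mask. The array names and the NumPy/SciPy implementation are assumptions, not the authors' code.

```python
import numpy as np
from scipy import stats

def adc_histogram_features(adc_map: np.ndarray, voi_mask: np.ndarray) -> dict:
    """Histogram parameters of an ADC map inside a binary VOI mask."""
    values = adc_map[voi_mask > 0].astype(float)
    mean_adc = values.mean()
    return {
        "meanADC": mean_adc,
        "maxiADC": values.max(),
        "miniADC": values.min(),
        "ratio_maxiADC_meanADC": values.max() / mean_adc,
        "kurtosis": stats.kurtosis(values),
    }
```

Under the study's reported threshold, a sub-region whose ratio falls below 2.408 would be flagged as likely subedema-rec.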

Machine Learning-Based Meningioma Location Classification Using Vision Transformers and Transfer Learning

Bande, J. K., Johnson, E. T., Banderudrappagari, R.

medrxiv · preprint · Sep 24 2025
Purpose: In this study, we aimed to use advanced machine learning (ML) techniques, specifically transfer learning and Vision Transformers (ViTs), to accurately classify meningioma location in brain MRI scans. ViTs process images similarly to how humans visually perceive details and are useful for analyzing complex medical images. Transfer learning is a technique that adapts models previously trained on large datasets to specific use cases. Using transfer learning, this study aimed to enhance the diagnostic accuracy of meningioma location classification and demonstrate the capabilities of this new technology. Approach: We used a Google ViT model pre-trained on ImageNet-21k (a dataset with 14 million images and 21,843 classes) and fine-tuned on ImageNet 2012 (a dataset with 1 million images and 1,000 classes). Using a model pre-trained and fine-tuned on large image datasets allowed us to leverage its predictive capabilities without needing to train an entirely new model on meningioma MRI scans alone. Transfer learning was used to adapt the pre-trained ViT to our specific use case, meningioma location classification, using a dataset of 1,094 T1, contrast-enhanced, and T2-weighted MRI scans of meningiomas sorted into 11 classes according to location in the brain. Results: The final model, trained and adapted on the meningioma MRI dataset, achieved an average validation accuracy of 98.17% and a test accuracy of 89.95%. Conclusions: This study demonstrates the potential of ViTs in meningioma location classification, leveraging their ability to analyze spatial relationships in medical images. While transfer learning enabled effective adaptation with limited data, class imbalance affected classification performance. Future work should focus on expanding datasets and incorporating ensemble learning to improve diagnostic reliability.
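
A minimal transfer-learning sketch in the spirit of this approach is shown below, using the Hugging Face Transformers API. The specific checkpoint google/vit-base-patch16-224 (pre-trained on ImageNet-21k, fine-tuned on ImageNet-1k) is an assumption that matches the pre-training regime described; the 11-class head replaces the original ImageNet classifier.

```python
from transformers import ViTForImageClassification, ViTImageProcessor

# Assumed checkpoint; matches the ImageNet-21k pre-training / ImageNet-1k
# fine-tuning described in the abstract.
CHECKPOINT = "google/vit-base-patch16-224"
NUM_LOCATION_CLASSES = 11  # meningioma location classes per the abstract

processor = ViTImageProcessor.from_pretrained(CHECKPOINT)
model = ViTForImageClassification.from_pretrained(
    CHECKPOINT,
    num_labels=NUM_LOCATION_CLASSES,
    ignore_mismatched_sizes=True,  # swap the 1,000-class head for an 11-class one
)
# Fine-tuning then proceeds with a standard classification loop over the
# preprocessed meningioma MRI slices.
```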

A novel hybrid deep learning model for segmentation and Fuzzy Res-LeNet based classification for Alzheimer's disease.

R S, Maganti S, Akundi SH

pubmed · papers · Sep 24 2025
Alzheimer's disease (AD) is a progressive illness that can cause behavioural abnormalities, personality changes, and memory loss. Early detection helps with future planning for both the affected person and caregivers. Thus, an innovative hybrid Deep Learning (DL) method is introduced for the segmentation and classification of AD, with classification performed by a Fuzzy Res-LeNet model. First, an input Magnetic Resonance Imaging (MRI) image is obtained from the database. Image preprocessing is then performed with a Bilateral Filter (BF) to enhance image quality by denoising. Segmentation is carried out by the proposed O-SegUNet, which integrates the O-SegNet and U-Net models using Pearson correlation coefficient-based fusion. After segmentation, augmentation is performed using the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance, followed by feature extraction. Finally, AD classification is performed by the Fuzzy Res-LeNet, which is devised by integrating fuzzy logic, ResNeXt, and LeNet. The stages are classified as Mild Cognitive Impairment (MCI), AD, Cognitive Normal (CN), Early Mild Cognitive Impairment (EMCI), and Late Mild Cognitive Impairment (LMCI). The proposed Fuzzy Res-LeNet obtained the best performance, with an accuracy of 93.887%, sensitivity of 94.587%, and specificity of 94.008%.
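
Two of the pipeline steps map directly onto standard library calls, sketched below as an illustration only: bilateral-filter denoising via OpenCV and SMOTE rebalancing via imbalanced-learn. All shapes, filter parameters, and class counts are placeholders, not the paper's settings.

```python
import cv2
import numpy as np
from imblearn.over_sampling import SMOTE

# Denoising: a bilateral filter smooths noise while preserving edges.
# Filter parameters (d, sigmaColor, sigmaSpace) are illustrative.
mri_slice = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder slice
denoised = cv2.bilateralFilter(mri_slice, d=9, sigmaColor=75, sigmaSpace=75)

# Rebalancing: SMOTE synthesizes minority-class samples from extracted
# feature vectors X with stage labels y (MCI, AD, CN, EMCI, LMCI).
X = np.random.rand(100, 64)                          # placeholder feature vectors
y = np.repeat([0, 1, 2, 3, 4], [40, 25, 15, 12, 8])  # imbalanced stage labels
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
```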

Artificial intelligence in cerebral cavernous malformations: a scoping review.

Santos AN, Venkatesh V, Chidambaram S, Piedade Santos G, Dawoud B, Rauschenbach L, Choucha A, Bingöl S, Wipplinger T, Wipplinger C, Siegel AM, Dammann P, Abou-Hamden A

pubmed · papers · Sep 24 2025
Artificial Intelligence (AI) and Machine Learning (ML) are increasingly being applied in medical research, including studies on cerebral cavernous malformations (CCM). This scoping review aims to analyze the scope and impact of AI in CCM, focusing on diagnostic tools, risk assessment, biomarker identification, outcome prediction, and treatment planning. We conducted a comprehensive literature search across different databases, reviewing articles that explore AI applications in CCM. Articles were selected based on predefined eligibility criteria and categorized according to their primary focus: drug discovery, diagnostic imaging, genetic analysis, biomarker identification, outcome prediction, and treatment planning. Sixteen studies met the inclusion criteria, showcasing diverse AI applications in CCM. Nearly half (47%) were cohort or prospective studies, primarily focused on biomarker discovery and risk prediction. Technical notes and diagnostic studies accounted for 27%, concentrating on computer-aided diagnosis (CAD) systems and drug screening. Other studies included a conceptual review on AI for surgical planning and a systematic review confirming ML's superiority in predicting clinical outcomes within neurosurgery. AI applications in CCM show significant promise, particularly in enhancing diagnostic accuracy, risk assessment, and surgical planning. These advancements suggest that AI could transform CCM management, offering pathways to improved patient outcomes and personalized care strategies.

A systematic review of early neuroimaging and neurophysiological biomarkers for post-stroke mobility prognostication

Levy, C., Dalton, E. J., Ferris, J. K., Campbell, B. C. V., Brodtmann, A., Brauer, S., Churilov, L., Hayward, K. S.

medrxiv · preprint · Sep 23 2025
Background: Accurate prognostication of mobility outcomes is essential to guide rehabilitation and manage patient expectations. The prognostic utility of neuroimaging and neurophysiological biomarkers remains uncertain when measured early post-stroke. This systematic review aimed to examine the prognostic capacity of early neuroimaging and neurophysiological biomarkers of mobility outcomes up to 24 months post-stroke. Methods: MEDLINE and EMBASE were searched from inception to June 2025. Cohort studies that reported neuroimaging or neurophysiological biomarkers measured ≤14 days post-stroke and mobility outcome(s) assessed >14 days and ≤24 months post-stroke were included. Biomarker analyses were classified by statistical analysis approach (association, discrimination/classification, or validation). The magnitude of relevant statistical measures was used as the primary indicator of prognostic capacity. Risk of bias was assessed using the Quality in Prognostic Studies tool. Meta-analysis was not performed due to heterogeneity. Results: Twenty reports from 18 independent study samples (n=2,160 participants) were included. Biomarkers were measured a median of 7.5 days post-stroke, and outcomes were assessed between 1 and 12 months. Eighty-six biomarker analyses were identified (61 neuroimaging, 25 neurophysiological), and the majority used an association approach (88%). Few used discrimination/classification methods (11%), and only one conducted internal validation (1%): an MRI-based machine learning model which demonstrated excellent discrimination but still requires external validation. Structural and functional corticospinal tract integrity were frequently investigated, and most associations were small or non-significant. Lesion location and size were also commonly examined, but findings were inconsistent and often lacked magnitude reporting. Methodological limitations were common, including small sample sizes, moderate to high risk of bias, poor reporting of magnitudes, and heterogeneous outcome measures and follow-up time points. Conclusions: Current evidence provides limited support for early neuroimaging and neurophysiological biomarkers to prognosticate post-stroke mobility outcomes. Most analyses remain at the association stage, with minimal progress toward validation and clinical implementation. Advancing the field requires international collaboration using harmonized methodologies, standardised statistical reporting, and consistent outcome measures and timepoints. Registration: URL: https://www.crd.york.ac.uk/prospero/; Unique identifier: CRD42022350771.

Exploiting Cross-modal Collaboration and Discrepancy for Semi-supervised Ischemic Stroke Lesion Segmentation from Multi-sequence MRI Images.

Cao Y, Qin T, Liu Y

pubmed · papers · Sep 23 2025
Accurate ischemic stroke lesion segmentation helps define the optimal reperfusion treatment and unveil the stroke etiology. Despite the importance of diffusion-weighted MRI (DWI) for stroke diagnosis, learning from multi-sequence MRI images such as apparent diffusion coefficient (ADC) maps can capitalize on the complementary nature of information from various modalities and shows strong potential to improve segmentation performance. However, existing deep learning-based methods require large amounts of well-annotated data from multiple modalities for training, and acquiring such datasets is often impractical. We explore semi-supervised stroke lesion segmentation from multi-sequence MRI images, using unlabeled data to improve performance under limited annotation, and propose a novel framework that exploits cross-modal collaboration and discrepancy to efficiently utilize unlabeled data. Specifically, we adopt a cross-modal bidirectional copy-paste strategy to enable information collaboration between different modalities and a cross-modal discrepancy-informed correction strategy to efficiently learn from limited labeled multi-sequence MRI data and abundant unlabeled data. Extensive experiments on the ischemic stroke lesion segmentation (ISLES 22) dataset demonstrate that our method efficiently utilizes unlabeled data, achieving a 12.32% DSC improvement over a supervised baseline when using 10% of annotations, and outperforms existing semi-supervised segmentation methods.
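
A minimal sketch of what a cross-modal bidirectional copy-paste step might look like is given below. The abstract does not specify the exact mixing scheme, so the cuboid mask, tensor shapes, and modality pairing are assumptions.

```python
import torch

def bidirectional_copy_paste(vol_a: torch.Tensor, vol_b: torch.Tensor,
                             region: torch.Tensor):
    """Swap a masked sub-region between two volumes in both directions.

    region is a binary mask (1 inside the copy-paste cuboid). Returns two
    mixed volumes: a with b's region pasted in, and vice versa.
    """
    mixed_ab = vol_a * (1 - region) + vol_b * region
    mixed_ba = vol_b * (1 - region) + vol_a * region
    return mixed_ab, mixed_ba

# Example: exchange the central cuboid between a DWI and an ADC volume
# (shapes and cuboid size are illustrative).
dwi, adc = torch.rand(1, 1, 64, 64, 64), torch.rand(1, 1, 64, 64, 64)
mask = torch.zeros_like(dwi)
mask[..., 16:48, 16:48, 16:48] = 1
mixed_dwi, mixed_adc = bidirectional_copy_paste(dwi, adc, mask)
```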

3D CoAt U SegNet-enhanced deep learning framework for accurate segmentation of acute ischemic stroke lesions from non-contrast CT scans.

Nag MK, Sadhu AK, Das S, Kumar C, Choudhary S

pubmed · papers · Sep 23 2025
Segmenting ischemic stroke lesions from Non-Contrast CT (NCCT) scans is a complex task due to the hypo-intense nature of these lesions compared to surrounding healthy brain tissue and their iso-intensity with lateral ventricles in many cases. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, this study presents an advanced 3D deep learning approach to enhance delineation accuracy. Traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating notable improvement in segmentation accuracy. An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct area, contributing to improved clinical decision-making.
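
The multi-scale encoder idea, parallel convolutions with dilation rates 1, 3, and 5, can be sketched as follows. The channel widths and concatenation-based fusion are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiScaleEncoderBlock(nn.Module):
    """Encoder block with parallel 3D convolutions at dilation rates 1, 3, 5."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding=d with a 3x3x3 kernel keeps the spatial size constant
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 3, 5)
        ])
        self.fuse = nn.Conv3d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(feats))

block = MultiScaleEncoderBlock(1, 16)
print(block(torch.rand(1, 1, 32, 64, 64)).shape)  # torch.Size([1, 16, 32, 64, 64])
```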

Deep Learning for Standardized Head CT Reformatting: A Quantitative Analysis of Image Quality and Operator Variability.

Chang PD, Chu E, Floriolli D, Soun J, Fussell D

pubmed · papers · Sep 23 2025
To validate a deep learning foundation model for automated head computed tomography (CT) reformatting and to quantify the quality, speed, and variability of conventional manual reformats in a real-world dataset. A foundation artificial intelligence (AI) model was used to create automated reformats for 1,763 consecutive non-contrast head CT examinations. Model accuracy was first validated on a 100-exam subset by assessing landmark detection as well as rotational, centering, and zoom error against expert manual annotations. The validated model was subsequently used as a reference standard to evaluate the quality and speed of the original technician-generated reformats from the full dataset. The AI model demonstrated high concordance with expert annotations, with a mean landmark localization error of 0.6-0.9 mm. Compared to expert-defined planes, AI-generated reformats exhibited a mean rotational error of 0.7 degrees, a mean centering error of 0.3%, and a mean zoom error of 0.4%. By contrast, technician-generated reformats demonstrated a mean rotational error of 11.2 degrees, a mean centering error of 6.4%, and a mean zoom error of 6.2%. Significant variability in manual reformat quality was observed across different factors including patient age, scanner location, report findings, and individual technician operators. Manual head CT reformatting is subject to substantial variability in both quality and speed. A single-shot deep learning foundation model can generate reformats with high accuracy and consistency. The implementation of such an automated method offers the potential to improve standardization, increase workflow efficiency, and reduce operational costs in clinical practice.
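
The reported error metrics can be formulated as below; the abstract does not publish exact definitions, so this is one plausible reading in which rotational error is the angle between reformat-plane normals and centering error is the plane-center offset normalized by the field of view.

```python
import numpy as np

def rotation_error_deg(normal_ai: np.ndarray, normal_ref: np.ndarray) -> float:
    """Angle in degrees between two reformat-plane normal vectors."""
    cos = np.dot(normal_ai, normal_ref) / (
        np.linalg.norm(normal_ai) * np.linalg.norm(normal_ref))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def centering_error_pct(center_ai, center_ref, fov_mm: float) -> float:
    """Plane-center offset as a percentage of the field of view."""
    offset = np.linalg.norm(np.asarray(center_ai) - np.asarray(center_ref))
    return 100.0 * offset / fov_mm

# Illustrative values only (not from the study):
print(rotation_error_deg(np.array([0.0, 0.02, 1.0]), np.array([0.0, 0.0, 1.0])))
print(centering_error_pct([1.2, 0.5, 0.0], [0.0, 0.0, 0.0], fov_mm=250.0))
```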

Generating Brain MRI with StyleGAN2-ADA: The Effect of the Training Set Size on the Quality of Synthetic Images.

Lai M, Mascalchi M, Tessa C, Diciotti S

pubmed · papers · Sep 23 2025
The potential of deep learning for medical imaging is often constrained by limited data availability. Generative models can unlock this potential by generating synthetic data that reproduces the statistical properties of real data while being more accessible for sharing. In this study, we investigated the influence of training set size on the performance of a state-of-the-art generative adversarial network, the StyleGAN2-ADA, trained on a cohort of 3,227 subjects from the OpenBHB dataset to generate 2D slices of brain MR images from healthy subjects. The quality of the synthetic images was assessed through qualitative evaluations and state-of-the-art quantitative metrics, which are provided in a publicly accessible repository. Our results demonstrate that StyleGAN2-ADA generates realistic and high-quality images, deceiving even expert radiologists while preserving privacy, as it did not memorize training images. Notably, increasing the training set size led to slight improvements in fidelity metrics. However, training set size had no noticeable impact on diversity metrics, highlighting the persistent limitation of mode collapse. Furthermore, we observed that diversity metrics, such as coverage and β-recall, are highly sensitive to the number of synthetic images used in their computation, leading to inflated values when synthetic data significantly outnumber real ones. These findings underscore the need to carefully interpret diversity metrics and the importance of employing complementary evaluation strategies for robust assessment. Overall, while StyleGAN2-ADA shows promise as a tool for generating privacy-preserving synthetic medical images, overcoming diversity limitations will require exploring alternative generative architectures or incorporating additional regularization techniques.
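
The sensitivity of coverage to the number of synthetic samples is easy to reproduce with a small sketch of the metric (Naeem et al.'s 2020 definition, re-implemented here with scikit-learn on random features as a stand-in for real image embeddings):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def coverage(real: np.ndarray, fake: np.ndarray, k: int = 5) -> float:
    """Fraction of real samples whose k-NN ball (radius = distance to the
    k-th nearest real neighbor) contains at least one synthetic sample."""
    nn_real = NearestNeighbors(n_neighbors=k + 1).fit(real)
    radii = nn_real.kneighbors(real)[0][:, -1]   # k-th real neighbor (excl. self)
    nn_fake = NearestNeighbors(n_neighbors=1).fit(fake)
    d_fake = nn_fake.kneighbors(real)[0][:, 0]   # nearest synthetic per real sample
    return float(np.mean(d_fake < radii))

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(1000, 64))
small_fake, large_fake = rng.normal(size=(100, 64)), rng.normal(size=(10000, 64))
# Coverage inflates as synthetic samples outnumber real ones:
print(coverage(real_feats, small_fake), coverage(real_feats, large_fake))
```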

Improved pharmacokinetic parameter estimation from DCE-MRI via spatial-temporal information-driven unsupervised learning.

He X, Wang L, Yang Q, Wang J, Xing Z, Cao D, Cai C, Cai S

pubmed · papers · Sep 23 2025
Objective: Pharmacokinetic (PK) parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) provide quantitative characterization of tissue perfusion and permeability. However, existing deep learning methods for PK parameter estimation rely on either temporal or spatial features alone, overlooking the integrated spatial-temporal characteristics of DCE-MRI data. This study aims to remove this barrier by fully leveraging the spatial and temporal information to improve parameter estimation. Approach: A spatial-temporal information-driven unsupervised deep learning method (STUDE) was proposed. STUDE combines convolutional neural networks (CNNs) and a customized Vision Transformer (ViT) to separately capture spatial and temporal features, enabling comprehensive modelling of contrast agent dynamics and tissue heterogeneity. In addition, a spatial-temporal attention (STA) feature fusion module was proposed to enable adaptive focus on both dimensions for more effective feature fusion. Moreover, the extended Tofts model imposed physical constraints on PK parameter estimation, enabling unsupervised training of STUDE. The accuracy and diagnostic value of STUDE were compared with the orthodox non-linear least squares (NLLS) method and representative deep learning-based methods (i.e., GRU, CNN, U-Net, and VTDCE-Net) on a numerical brain phantom and 87 glioma patients, respectively. Main results: On the numerical brain phantom, STUDE produced PK parameter maps with the lowest systematic and random errors, even under low SNR conditions (SNR = 10 dB). On glioma data, STUDE generated parameter maps with reduced noise compared to NLLS and superior structural clarity compared to other methods. Furthermore, STUDE outperformed all other methods in identifying glioma isocitrate dehydrogenase (IDH) mutation status, achieving areas under the receiver operating characteristic curve (AUC) of 0.840 and 0.908 for Ktrans and Ve, respectively. A combination of all PK parameters improved the AUC to 0.926. Significance: STUDE advances spatial-temporal information-driven and physics-informed learning for precise PK parameter estimation, demonstrating its potential clinical significance.
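
The physical constraint at the heart of the unsupervised training is the extended Tofts forward model, Ct(t) = vp*Cp(t) + Ktrans * integral from 0 to t of Cp(tau) * exp(-(Ktrans/ve)*(t - tau)) dtau, which can be evaluated as a discrete convolution. The sketch below shows the standard forward model; the arterial input function and parameter values are toy assumptions, not the paper's settings.

```python
import numpy as np

def extended_tofts(t: np.ndarray, cp: np.ndarray,
                   ktrans: float, ve: float, vp: float) -> np.ndarray:
    """Extended Tofts model evaluated by discrete convolution on a uniform
    time grid: Ct = vp*Cp + Ktrans * (Cp conv exp(-kep*t)), kep = Ktrans/ve."""
    dt = t[1] - t[0]
    kep = ktrans / ve
    kernel = np.exp(-kep * t)
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

# Illustrative use with a toy gamma-variate-like arterial input function:
t = np.linspace(0, 5, 300)            # minutes
cp = 5.0 * t * np.exp(-t / 0.5)       # placeholder AIF
ct = extended_tofts(t, cp, ktrans=0.1, ve=0.2, vp=0.02)
```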