Page 3 of 90896 results

Multimodal radiomics in glioma: predicting recurrence in the peritumoural brain zone using integrated MRI.

Li Q, Xiang C, Zeng X, Liao A, Chen K, Yang J, Li Y, Jia M, Song L, Hu X

PubMed · Aug 11, 2025
Gliomas exhibit a high recurrence rate, particularly in the peritumoural brain zone after surgery. This study aims to develop and validate a radiomics-based model using preoperative fluid-attenuated inversion recovery (FLAIR) and T1-weighted contrast-enhanced (T1-CE) magnetic resonance imaging (MRI) sequences to predict glioma recurrence within specific quadrants of the surgical margin. In this retrospective study, 149 patients with confirmed glioma recurrence were included. Twenty-three cases from Guizhou Medical University were used as a test set, and the remaining data were randomly split into a training set (70%) and a validation set (30%). Two radiologists from the research group established a Cartesian coordinate system centred on the tumour, based on FLAIR and T1-CE MRI sequences, dividing the tumour into four quadrants. Recurrence in each quadrant after surgery was assessed, categorising preoperative tumour quadrants as recurrent and non-recurrent. Following the division of tumours into quadrants and the removal of outliers, these quadrants were assigned to a training set (105 non-recurrence quadrants and 226 recurrence quadrants), a validation set (45 non-recurrence quadrants and 97 recurrence quadrants) and a test set (16 non-recurrence quadrants and 68 recurrence quadrants). Imaging features were extracted from preoperative sequences, and feature selection was performed using the least absolute shrinkage and selection operator (LASSO). Machine learning models included support vector machine, random forest, extra trees, XGBoost, and LightGBM. Clinical efficacy was evaluated through model calibration and decision curve analysis. The fusion model, which combines features from FLAIR and T1-CE sequences, exhibited higher predictive accuracy than single-modality models. Among the models, the LightGBM model demonstrated the highest predictive accuracy, with an area under the curve of 0.906 in the training set, 0.832 in the validation set and 0.805 in the test set.
The study highlights the potential of a multimodal radiomics approach for predicting glioma recurrence, with the fusion model serving as a robust tool for clinical decision-making.
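The LASSO feature-selection step described in this abstract can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the study's pipeline: the feature matrix, its dimensions, and the recurrence signal are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomics feature matrix:
# rows = tumour quadrants, columns = extracted imaging features.
rng = np.random.default_rng(0)
X = rng.normal(size=(331, 50))  # e.g. 331 training quadrants, 50 features
# Pretend only features 0 and 3 carry recurrence signal.
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=331)

# Standardize so the L1 penalty treats all features comparably.
X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)

# Features with non-zero coefficients survive the LASSO penalty.
selected = np.flatnonzero(lasso.coef_)
print("selected feature indices:", selected)
```

In the study's setting, the surviving features would then feed the downstream classifiers (support vector machine, random forest, extra trees, gradient boosting).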

Enhanced MRI brain tumor detection using deep learning in conjunction with explainable AI SHAP based diverse and multi feature analysis.

Rahman A, Hayat M, Iqbal N, Alarfaj FK, Alkhalaf S, Alturise F

PubMed · Aug 11, 2025
Recent innovations in medical imaging have markedly improved brain tumor identification, surpassing conventional diagnostic approaches that suffer from low resolution, radiation exposure, and limited contrast. Magnetic Resonance Imaging (MRI) is pivotal in precise and accurate tumor characterization owing to its high-resolution, non-invasive nature. This study investigates the synergy among multiple feature representation schemes, namely Local Binary Patterns (LBP), Gabor filters, Discrete Wavelet Transform, Fast Fourier Transform, Convolutional Neural Networks (CNN), and Gray-Level Run Length Matrix, alongside five learning algorithms: k-Nearest Neighbor, Random Forest, Support Vector Classifier (SVC), Probabilistic Neural Network (PNN), and CNN. Empirical findings indicate that LBP in conjunction with SVC and CNN obtained high specificity and accuracy, rendering it a promising method for MRI-based tumor diagnosis. To further investigate the contribution of LBP, chi-square and p-value statistical tests were used to confirm the significant impact of the LBP feature space for identification of brain tumors. In addition, SHAP analysis was used to identify the most important features in classification. On a small dataset, CNN obtained 97.8% accuracy while SVC yielded 98.06% accuracy. In subsequent analysis, a large benchmark dataset was also utilized to evaluate the performance of the learning algorithms and investigate the generalization power of the proposed model. CNN achieved the highest accuracy of 98.9%, followed by SVC at 96.7%. These results highlight CNN's effectiveness in automated, high-precision tumor diagnosis. This achievement is ascribed to MRI-based feature extraction, which combines high-resolution, non-invasive imaging capabilities with the powerful analytical abilities of CNN. CNN demonstrates superiority in medical imaging owing to its ability to learn intricate spatial patterns and generalize effectively.
This synergy enhances the accuracy, speed, and consistency of brain tumor detection, ultimately leading to better patient outcomes and more efficient healthcare delivery. Code: https://github.com/asifrahman557/BrainTumorDetection.
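As a rough illustration of the LBP texture representation highlighted above, here is a minimal pure-NumPy 8-neighbour LBP at radius 1; real pipelines typically use `skimage.feature.local_binary_pattern`, and the random image here merely stands in for an MRI slice.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern (radius 1), pure NumPy.
    Each interior pixel gets an 8-bit code: one bit per neighbour that is
    >= the centre pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= ((neigh >= c).astype(np.uint8) << bit)
    return code

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
codes = lbp_codes(img)
# The normalized histogram of codes is the texture feature vector
# that would be fed to a classifier such as SVC.
hist = np.bincount(codes.ravel(), minlength=256) / codes.size
print(hist.shape)
```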

Outcome Prediction in Pediatric Traumatic Brain Injury Utilizing Social Determinants of Health and Machine Learning Methods.

Kaliaev A, Vejdani-Jahromi M, Gunawan A, Qureshi M, Setty BN, Farris C, Takahashi C, AbdalKader M, Mian A

PubMed · Aug 11, 2025
Considerable socioeconomic disparities exist among pediatric traumatic brain injury (TBI) patients. This study aims to analyze the effects of social determinants of health on head injury outcomes and to create a novel machine-learning algorithm (MLA) that incorporates socioeconomic factors to predict the likelihood of a positive or negative trauma-related finding on head computed tomography (CT). A cohort of blunt trauma patients under age 15 who presented to the largest safety net hospital in New England between January 2006 and December 2013 (n=211) was included in this study. Patient socioeconomic data such as race, language, household income, and insurance type were collected alongside other parameters like Injury Severity Score (ISS), age, sex, and mechanism of injury. Multivariable analysis was performed to identify significant factors in predicting a positive head CT outcome. The cohort was split into 80% training (168 samples) and 20% testing (43 samples) datasets using stratified sampling. Twenty-two multi-parametric MLAs were trained with 5-fold cross-validation and hyperparameter tuning via GridSearchCV, and top-performing models were evaluated on the test dataset. Significant factors associated with pediatric head CT outcome included ISS, age, and insurance type (p<0.05). The age of the subjects with a clinically relevant trauma-related head CT finding (median = 1.8 years) was significantly different from the age of patients without such findings (median = 9.1 years). These predictors were utilized to train the machine learning models. With ISS, the Fine Gaussian SVM achieved the highest test AUC (0.923), with accuracy=0.837, sensitivity=0.647, and specificity=0.962. The Coarse Tree yielded accuracy=0.837, AUC=0.837, sensitivity=0.824, and specificity=0.846. Without ISS, the Narrow Neural Network performed best with accuracy=0.837, AUC=0.857, sensitivity=0.765, and specificity=0.885.
Key predictors of clinically relevant head CT findings in pediatric TBI include ISS, age, and social determinants of health, with children under 5 at higher risk. A novel Fine Gaussian SVM model outperformed the other MLAs, offering high accuracy in predicting outcomes. This tool shows promise for improving clinical decisions while minimizing radiation exposure in children. TBI = Traumatic Brain Injury; ISS = Injury Severity Score; MLA = Machine Learning Algorithm; CT = Computed Tomography; AUC = Area Under the Curve.
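The 80/20 stratified split with 5-fold GridSearchCV tuning described above can be sketched as follows. The data are synthetic stand-ins for the tabular cohort, and an RBF-kernel SVC with a small hyperparameter grid is assumed here to approximate the MATLAB-style "Fine Gaussian SVM".

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the cohort's tabular features (ISS, age, insurance, ...).
X, y = make_classification(n_samples=211, n_features=6, n_informative=3,
                           random_state=0)

# 80/20 stratified split preserves the positive/negative CT outcome ratio.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# 5-fold cross-validated grid search over C and gamma, as in the abstract.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]},
                    cv=5)
grid.fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))
```

With n=211 and test_size=0.2, scikit-learn produces exactly the 168/43 split reported in the abstract.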

Deep Learning-Based Desikan-Killiany Parcellation of the Brain Using Diffusion MRI

Yousef Sadegheih, Dorit Merhof

arXiv preprint · Aug 11, 2025
Accurate brain parcellation in diffusion MRI (dMRI) space is essential for advanced neuroimaging analyses. However, most existing approaches rely on anatomical MRI for segmentation and inter-modality registration, a process that can introduce errors and limit the versatility of the technique. In this study, we present a novel deep learning-based framework for direct parcellation based on the Desikan-Killiany (DK) atlas using only diffusion MRI data. Our method utilizes a hierarchical, two-stage segmentation network: the first stage performs coarse parcellation into broad brain regions, and the second stage refines the segmentation to delineate more detailed subregions within each coarse category. We conduct an extensive ablation study to evaluate various diffusion-derived parameter maps, identifying an optimal combination of fractional anisotropy, trace, sphericity, and maximum eigenvalue that enhances parcellation accuracy. When evaluated on the Human Connectome Project and Consortium for Neuropsychiatric Phenomics datasets, our approach achieves superior Dice Similarity Coefficients compared to existing state-of-the-art models. Additionally, our method demonstrates robust generalization across different image resolutions and acquisition protocols, producing more homogeneous parcellations as measured by the relative standard deviation within regions. This work represents a significant advancement in dMRI-based brain segmentation, providing a precise, reliable, and registration-free solution that is critical for improved structural connectivity and microstructural analyses in both research and clinical applications. The implementation of our method is publicly available on github.com/xmindflow/DKParcellationdMRI.
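The Dice Similarity Coefficient used to evaluate the parcellation reduces to a few lines. This per-label sketch assumes integer label maps and is not tied to the paper's implementation; the tiny arrays are purely illustrative.

```python
import numpy as np

def dice(pred, target, label):
    """Dice similarity coefficient for one parcellation label:
    2*|P ∩ T| / (|P| + |T|)."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

pred   = np.array([[1, 1, 2], [0, 2, 2]])
target = np.array([[1, 2, 2], [0, 2, 2]])
print(dice(pred, target, 2))  # overlap 3, sizes 3 and 4 -> 2*3/7 ≈ 0.857
```

In practice this would be averaged over all DK atlas labels and all test subjects.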

Automated Prediction of Bone Volume Removed in Mastoidectomy.

Nagururu NV, Ishida H, Ding AS, Ishii M, Unberath M, Taylor RH, Munawar A, Sahu M, Creighton FX

PubMed · Aug 11, 2025
The bone volume drilled by surgeons during mastoidectomy is determined by the need to localize the position, optimize the view, and reach the surgical endpoint while avoiding critical structures. Predicting the volume of bone removed before an operation can significantly enhance surgical training by providing precise, patient-specific guidance and enable the development of more effective computer-assisted and robotic surgical interventions. This was a single-institution, cross-sectional study using virtual reality (VR) simulation. We developed a deep learning pipeline to automate the prediction of bone volume removed during mastoidectomy using data from virtual reality mastoidectomy simulations. The data set included 15 deidentified temporal bone computed tomography scans. The network was evaluated using fivefold cross-validation, comparing predicted and actual bone removal with metrics such as the Dice score (DSC) and Hausdorff distance (HD). Our method achieved a median DSC of 0.775 (interquartile range [IQR]: 0.725-0.810) and a median HD of 0.492 mm (IQR: 0.298-0.757 mm). Predictions reached the mastoidectomy endpoint of visualizing the horizontal canal and incus in 80% (12/15) of temporal bones. Qualitative analysis indicated that predictions typically produced realistic mastoidectomy endpoints, though some cases showed excessive or insufficient bone removal, particularly at the temporal bone cortex and tegmen mastoideum. This study establishes a foundational step in using deep learning to predict bone volume removal during mastoidectomy. The results indicate that learning-based methods can reasonably approximate the surgical endpoint of mastoidectomy. Further refinement with larger, more diverse data sets and improved model architectures will be essential for enhancing prediction accuracy.
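The Hausdorff distance metric reported above can be computed with SciPy. The two toy point clouds below are illustrative stand-ins for predicted and actual removed-bone surfaces, not data from the study.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Toy 2-D point clouds standing in for predicted vs. actual surfaces.
pred   = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
actual = np.array([[0.0, 0.1], [1.0, 0.0], [0.0, 1.2]])

# Symmetric Hausdorff distance: the worst-case nearest-neighbour gap,
# taken in both directions.
hd = max(directed_hausdorff(pred, actual)[0],
         directed_hausdorff(actual, pred)[0])
print(hd)  # 0.2
```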

Generative Artificial Intelligence to Automate Cerebral Perfusion Mapping in Acute Ischemic Stroke from Non-contrast Head Computed Tomography Images: Pilot Study.

Primiano NJ, Changa AR, Kohli S, Greenspan H, Cahan N, Kummer BR

PubMed · Aug 11, 2025
Acute ischemic stroke (AIS) is a leading cause of death and long-term disability worldwide, where rapid reperfusion remains critical for salvaging brain tissue. Although CT perfusion (CTP) imaging provides essential hemodynamic information, its limitations (extended processing times, additional radiation exposure, and variable software outputs) can delay treatment. In contrast, non-contrast head CT (NCHCT) is ubiquitously available in acute stroke settings. This study explores a generative artificial intelligence approach to predict key perfusion parameters (relative cerebral blood flow [rCBF] and time-to-maximum [Tmax]) directly from NCHCT, potentially streamlining stroke imaging workflows and expanding access to critical perfusion data. We retrospectively identified patients evaluated for AIS who underwent NCHCT, CT angiography, and CTP. Ground truth perfusion maps (rCBF and Tmax) were extracted from VIZ.ai post-processed CTP studies. A modified pix2pix-turbo generative adversarial network (GAN) was developed to translate co-registered NCHCT images into corresponding perfusion maps. The network was trained using paired NCHCT-CTP data, with training, validation, and testing splits of 80%:10%:10%. Performance was assessed on the test set using quantitative metrics including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and Fréchet inception distance (FID). Out of 120 patients, studies from 99 patients fitting our inclusion and exclusion criteria were used as the primary cohort (mean age 73.3 ± 13.5 years; 46.5% female). Cerebral occlusions were predominantly in the middle cerebral artery. GAN-generated Tmax maps achieved an SSIM of 0.827, PSNR of 16.99, and FID of 62.21, while the rCBF maps demonstrated comparable performance (SSIM 0.79, PSNR 16.38, FID 59.58). These results indicate that the model approximates ground truth perfusion maps to a moderate degree and successfully captures key cerebral hemodynamic features.
Our findings demonstrate the feasibility of generating functional perfusion maps directly from widely available NCHCT images using a modified GAN. This cross-modality approach may serve as a valuable adjunct in AIS evaluation, particularly in resource-limited settings or when traditional CTP provides limited diagnostic information. Future studies with larger, multicenter datasets and further model refinements are warranted to enhance clinical accuracy and utility.
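Of the metrics used above, PSNR is the simplest to state in code. This sketch uses random arrays in place of real Tmax maps, with `data_range` assumed to be 1.0 for intensity-normalized images.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio between a ground-truth map and a
    synthesized one: 10*log10(data_range^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
truth = rng.random((32, 32))  # stand-in for a ground-truth Tmax map
synth = np.clip(truth + rng.normal(scale=0.05, size=(32, 32)), 0, 1)
print(round(psnr(truth, synth), 2))
```

SSIM and FID are considerably more involved (windowed statistics and an Inception-feature distance, respectively) and are usually taken from libraries such as scikit-image and torchmetrics.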

Neonatal neuroimaging: from research to bedside practice.

Cizmeci MN, El-Dib M, de Vries LS

PubMed · Aug 11, 2025
Neonatal neuroimaging is essential in research and clinical practice, offering important insights into brain development and neurologic injury mechanisms. Visualizing the brain enables researchers and clinicians to improve neonatal care and parental counselling through better diagnosis and prognostication of disease. Common neuroimaging modalities used in the neonatal intensive care unit (NICU) are cranial ultrasonography (cUS) and magnetic resonance imaging (MRI). Between these modalities, conventional MRI provides the optimal image resolution and detail about the developing brain, while advanced MRI techniques allow for the evaluation of tissue microstructure and functional networks. Over the last two decades, medical imaging techniques using brain MRI have rapidly progressed, and these advances have facilitated high-quality extraction of quantitative features as well as the implementation of novel devices for use in neurological disorders. Major advancements encompass the use of low-field dedicated MRI systems within the NICU and trials of ultralow-field portable MRI systems at the bedside. Additionally, higher-field magnets are utilized to enhance image quality, and ultrafast brain MRI is employed to decrease image acquisition time. Furthermore, the implementation of advanced MRI sequences, the application of machine learning algorithms, multimodal neuroimaging techniques, motion correction techniques, and novel modalities are used to visualize pathologies that are not visible to the human eye. In this narrative review, we will discuss the fundamentals of these neuroimaging modalities, and their clinical applications to explore the present landscape of neonatal neuroimaging from bench to bedside.

A Physics-Driven Neural Network with Parameter Embedding for Generating Quantitative MR Maps from Weighted Images

Lingjing Chen, Chengxiu Zhang, Yinqiao Yi, Yida Wang, Yang Song, Xu Yan, Shengfang Xu, Dalin Zhu, Mengqiu Cao, Yan Zhou, Chenglong Wang, Guang Yang

arXiv preprint · Aug 11, 2025
We propose a deep learning-based approach that integrates MRI sequence parameters to improve the accuracy and generalizability of quantitative image synthesis from clinical weighted MRI. Our physics-driven neural network embeds MRI sequence parameters -- repetition time (TR), echo time (TE), and inversion time (TI) -- directly into the model via parameter embedding, enabling the network to learn the underlying physical principles of MRI signal formation. The model takes conventional T1-weighted, T2-weighted, and T2-FLAIR images as input and synthesizes T1, T2, and proton density (PD) quantitative maps. Trained on healthy brain MR images, it was evaluated on both internal and external test datasets. The proposed method achieved high performance with PSNR values exceeding 34 dB and SSIM values above 0.92 for all synthesized parameter maps. It outperformed conventional deep learning models in accuracy and robustness, including data with previously unseen brain structures and lesions. Notably, our model accurately synthesized quantitative maps for these unseen pathological regions, highlighting its superior generalization capability. Incorporating MRI sequence parameters via parameter embedding allows the neural network to better learn the physical characteristics of MR signals, significantly enhancing the performance and reliability of quantitative MRI synthesis. This method shows great potential for accelerating qMRI and improving its clinical utility.
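The "physical principles of MRI signal formation" that the parameter embedding is meant to capture can be made concrete with the textbook spin-echo signal equation; the tissue values below are illustrative white-matter-like numbers, not values from the paper.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo signal equation:
    S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    This is the forward physics mapping quantitative maps (PD, T1, T2)
    plus sequence parameters (TR, TE) to weighted signal; the network
    learns the inverse of this kind of mapping."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Illustrative white-matter-like values: PD=0.7, T1=800 ms, T2=80 ms.
s_t1w = spin_echo_signal(0.7, 800.0, 80.0, tr=500.0, te=15.0)    # T1-weighted
s_t2w = spin_echo_signal(0.7, 800.0, 80.0, tr=4000.0, te=100.0)  # T2-weighted
print(s_t1w, s_t2w)
```

Embedding TR, TE, and TI as network inputs lets one trained model handle acquisitions with different sequence settings, rather than implicitly baking one protocol into the weights.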

Post-deployment Monitoring of AI Performance in Intracranial Hemorrhage Detection by ChatGPT.

Rohren E, Ahmadzade M, Colella S, Kottler N, Krishnan S, Poff J, Rastogi N, Wiggins W, Yee J, Zuluaga C, Ramis P, Ghasemi-Rad M

PubMed · Aug 11, 2025
To evaluate the post-deployment performance of an artificial intelligence (AI) system (Aidoc) for intracranial hemorrhage (ICH) detection and assess the utility of ChatGPT-4 Turbo for automated AI monitoring. This retrospective study evaluated 332,809 head CT examinations from 37 radiology practices across the United States (December 2023-May 2024). Of these, 13,569 cases were flagged as positive for ICH by the Aidoc AI system. A HIPAA (Health Insurance Portability and Accountability Act)-compliant version of ChatGPT-4 Turbo was used to extract data from radiology reports. Ground truth was established through radiologists' review of 200 randomly selected cases. Performance metrics were calculated for ChatGPT, Aidoc and radiologists. ChatGPT-4 Turbo demonstrated high diagnostic accuracy in identifying intracranial hemorrhage (ICH) from radiology reports, with a positive predictive value of 1 and a negative predictive value of 0.988 (AUC: 0.996). Aidoc's false positive classifications were influenced by scanner manufacturer, midline shift, mass effect, artifacts, and neurologic symptoms. Multivariate analysis identified Philips scanners (OR: 6.97, p=0.003) and artifacts (OR: 3.79, p=0.029) as significant contributors to false positives, while midline shift (OR: 0.08, p=0.021) and mass effect (OR: 0.18, p=0.021) were associated with a reduced false positive rate. Aidoc-assisted radiologists achieved a sensitivity of 0.936 and a specificity of 1. This study underscores the importance of continuous performance monitoring for AI systems in clinical practice. The integration of LLMs offers a scalable solution for evaluating AI performance, ensuring reliable deployment and enhancing diagnostic workflows.
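The PPV/NPV figures above come from confusion-matrix counts. The counts in this sketch are hypothetical, chosen only to reproduce a PPV of 1 and an NPV near 0.988 on a 200-case review; they are not the study's actual tallies.

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive value from confusion-matrix counts:
    PPV = TP/(TP+FP), NPV = TN/(TN+FN)."""
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv

# Hypothetical counts (tp + fp + tn + fn = 200), invented for illustration.
ppv, npv = ppv_npv(tp=115, fp=0, tn=84, fn=1)
print(ppv, round(npv, 3))  # 1.0 0.988
```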

Improving early detection of Alzheimer's disease through MRI slice selection and deep learning techniques.

Şener B, Açıcı K, Sümer E

PubMed · Aug 10, 2025
Alzheimer's disease is a progressive neurodegenerative disorder marked by cognitive decline, memory loss, and behavioral changes. Early diagnosis, particularly identifying Early Mild Cognitive Impairment (EMCI), is vital for managing the disease and improving patient outcomes. Detecting EMCI is challenging due to the subtle structural changes in the brain, making precise slice selection from MRI scans essential for accurate diagnosis. In this context, the careful selection of specific MRI slices that provide distinct anatomical details significantly enhances the ability to identify these early changes. The chief novelty of the study is that, instead of selecting all slices, an approach for identifying the important slices is developed. The ADNI-3 dataset was used when running the models for early detection of Alzheimer's disease. Satisfactory results were obtained by classifying with deep learning models and vision transformers (ViT), adding new structures to them alongside the proposed model. An accuracy of 99.45% was achieved with EfficientNetB2 + FPN in AD vs. LMCI classification from the slices selected with SSIM, and an accuracy of 99.19% in AD vs. EMCI classification; the study thus significantly advances early detection by demonstrating improved diagnostic accuracy at the EMCI stage. These results emphasize the importance of developing deep learning models with slice selection integrated into the Vision Transformer architecture. Focusing on accurate slice selection enables early detection of Alzheimer's at the EMCI stage, allowing for timely interventions and preventive measures before the disease progresses to more advanced stages. This approach not only facilitates early and accurate diagnosis, but also lays the groundwork for timely intervention and treatment, offering hope for better patient outcomes in Alzheimer's disease. Finally, the results are validated with a statistical significance test.
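The SSIM-based slice selection that the study's chief novelty rests on can be sketched with a simplified, single-window SSIM (global statistics, no sliding window, unlike `skimage.metrics.structural_similarity`); the "slices" below are noisy copies of a random reference, not MRI data.

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Simplified single-window SSIM using whole-image statistics;
    used here to rank candidate slices by similarity to a reference."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))  # reference "informative" slice
# Candidate slices: increasingly noisy copies of the reference.
slices = [ref + rng.normal(scale=s, size=ref.shape) for s in (0.02, 0.2, 0.5)]
scores = [ssim_global(ref, s) for s in slices]
best = int(np.argmax(scores))  # pick the most similar slice
print(best, [round(x, 3) for x in scores])
```

Selected slices would then be routed to the classifiers (e.g. EfficientNetB2 + FPN or a ViT variant) rather than feeding every slice of the volume.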
