Page 21 of 58579 results

FFLUNet: Feature Fused Lightweight UNet for brain tumor segmentation.

Kundu S, Dutta S, Mukhopadhyay J, Chakravorty N

pubmed logopapers · Jun 14 2025
Brain tumors, particularly glioblastoma multiforme, are considered one of the most threatening types of tumors in neuro-oncology. Segmenting brain tumors is a crucial part of medical imaging. It plays a key role in diagnosing conditions, planning treatments, and keeping track of patients' progress. This paper presents a novel lightweight deep convolutional neural network (CNN) model specifically designed for accurate and efficient brain tumor segmentation from magnetic resonance imaging (MRI) scans. Our model leverages a streamlined architecture that reduces computational complexity while maintaining high segmentation accuracy. We have introduced several novel approaches, including optimized convolutional layers that capture both local and global features with minimal parameters. A layerwise adaptive weighting feature fusion technique is implemented that enhances comprehensive feature representation. By incorporating shifted windowing, the model achieves better generalization across data variations. Dynamic weighting is introduced in skip connections that allows backpropagation to determine the ideal balance between semantic and positional features. To evaluate our approach, we conducted experiments on publicly available MRI datasets and compared our model against state-of-the-art segmentation methods. Our lightweight model has an efficient architecture with 1.45 million parameters - 95% fewer than nnUNet (30.78M), 91% fewer than standard UNet (16.21M), and 85% fewer than a lightweight hybrid CNN-transformer network (Liu et al., 2024) (9.9M). Coupled with a 4.9× faster GPU inference time (0.904 ± 0.002 s vs. nnUNet's 4.416 ± 0.004 s), the design enables real-time deployment on resource-constrained devices while maintaining competitive segmentation accuracy. Code is available at: FFLUNet.
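The dynamically weighted skip connections described above can be pictured as a learnable gate that blends positional (encoder) and semantic (decoder) features. The sketch below is a minimal pure-Python illustration under that assumption; the names (`fused_skip`, `gate_logit`) are hypothetical, and the authors' implementation would operate on feature tensors with the gate tuned by backpropagation.

```python
import math

def sigmoid(x):
    """Squash a raw gate parameter into the (0, 1) blending range."""
    return 1.0 / (1.0 + math.exp(-x))

def fused_skip(encoder_feat, decoder_feat, gate_logit):
    """Blend encoder (positional) and decoder (semantic) features with a
    single learnable gate; training would adjust gate_logit to find the
    ideal balance between the two feature streams."""
    w = sigmoid(gate_logit)
    return [w * e + (1.0 - w) * d for e, d in zip(encoder_feat, decoder_feat)]
```

With `gate_logit = 0` the gate sits at 0.5 and the two streams contribute equally; large positive or negative logits let the network favor one stream.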

Hierarchical Deep Feature Fusion and Ensemble Learning for Enhanced Brain Tumor MRI Classification

Zahid Ullah, Jihie Kim

arxiv logopreprint · Jun 14 2025
Accurate brain tumor classification is crucial in medical imaging to ensure reliable diagnosis and effective treatment planning. This study introduces a novel double ensembling framework that synergistically combines pre-trained deep learning (DL) models for feature extraction with optimized machine learning (ML) classifiers for robust classification. The framework incorporates comprehensive preprocessing and data augmentation of brain magnetic resonance images (MRI), followed by deep feature extraction using transfer learning with pre-trained Vision Transformer (ViT) networks. The novelty lies in the dual-level ensembling strategy: feature-level ensembling, which integrates deep features from the top-performing ViT models, and classifier-level ensembling, which aggregates predictions from hyperparameter-optimized ML classifiers. Experiments on two public Kaggle MRI brain tumor datasets demonstrate that this approach significantly surpasses state-of-the-art methods, underscoring the importance of feature and classifier fusion. The proposed methodology also highlights the critical roles of hyperparameter optimization (HPO) and advanced preprocessing techniques in improving diagnostic accuracy and reliability, advancing the integration of DL and ML for clinically relevant medical image analysis.
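The dual-level ensembling strategy can be sketched schematically: feature-level fusion concatenates per-model deep feature vectors, and classifier-level fusion aggregates the classifiers' predictions. The sketch below uses majority voting as one plausible aggregation; the function names are illustrative, and the actual pipeline extracts features with ViT backbones and aggregates hyperparameter-optimized ML classifiers.

```python
from collections import Counter

def feature_level_ensemble(feature_sets):
    """Concatenate the feature vectors that several backbone models
    produced for the same image into one fused representation."""
    fused = []
    for feats in feature_sets:
        fused.extend(feats)
    return fused

def classifier_level_ensemble(predictions):
    """Aggregate per-classifier labels by majority vote (one plausible
    scheme; weighted or soft voting are common alternatives)."""
    return Counter(predictions).most_common(1)[0][0]
```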

Sex-estimation method for three-dimensional shapes of the skull and skull parts using machine learning.

Imaizumi K, Usui S, Nagata T, Hayakawa H, Shiotani S

pubmed logopapers · Jun 14 2025
Sex estimation is an indispensable test for identifying skeletal remains in the field of forensic anthropology. We developed a novel sex-estimation method for skulls and several parts of the skull using machine learning. A total of 240 skull shapes were obtained from postmortem computed tomography scans. The shapes of the whole skull, cranium, and mandible were simplified by wrapping them with virtual elastic film. These were then transformed into homologous shape models. Homologous models of the cranium and mandible were segmented into six regions containing well-known sexually dimorphic areas. Shape data were reduced in dimensionality by principal component analysis (PCA) or partial least squares regression (PLS). The components of PCA and PLS were applied to a support vector machine (SVM), and the accuracy rates of sex estimation were assessed. High accuracy rates in sex estimation were observed in SVM after reducing the dimensionality of data with PLS. The rates exceeded 90% in two of the nine regions examined, whereas the SVM with PCA components did not reach 90% in any region. Virtual shapes created from very large and very small scores on the first PLS component closely resembled the masculine and feminine models created by emphasizing the shape difference between the averaged shapes of male and female skulls. Such similarities were observed in all skull regions examined, particularly in sexually dimorphic areas. Estimation models also achieved high estimation accuracies on newly prepared skull shapes, suggesting that the estimation method developed here may be sufficiently applicable to actual casework.
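The two-stage pipeline above first reduces each homologous shape model to a few latent scores, then classifies those scores with an SVM. A minimal sketch of both stages, using fixed stand-in components and a linear decision function (assumptions for illustration; a trained pipeline would learn the PLS components and SVM weights from the 240 skull shapes):

```python
def project(shape_vec, components):
    """Project a flattened homologous shape model onto latent components,
    producing low-dimensional scores (stand-ins for PLS scores)."""
    return [sum(s * c for s, c in zip(shape_vec, comp)) for comp in components]

def linear_svm_decision(scores, weights, bias):
    """Apply a linear decision function to the scores, as a trained
    linear SVM would; the sign of the margin picks the class."""
    margin = sum(s * w for s, w in zip(scores, weights)) + bias
    return "male" if margin > 0 else "female"
```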

Prediction of NIHSS Scores and Acute Ischemic Stroke Severity Using a Cross-attention Vision Transformer Model with Multimodal MRI.

Tuxunjiang P, Huang C, Zhou Z, Zhao W, Han B, Tan W, Wang J, Kukun H, Zhao W, Xu R, Aihemaiti A, Subi Y, Zou J, Xie C, Chang Y, Wang Y

pubmed logopapers · Jun 13 2025
This study aimed to develop and evaluate models for classifying the severity of neurological impairment in acute ischemic stroke (AIS) patients using multimodal MRI data. A retrospective cohort of 1227 AIS patients was collected and categorized into mild (NIHSS<5) and moderate-to-severe (NIHSS≥5) stroke groups based on NIHSS scores. Eight baseline models were constructed for performance comparison, including a clinical model, radiomics models using DWI or multiple MRI sequences, and deep learning (DL) models with varying fusion strategies (early fusion, late fusion, full cross-fusion, and DWI-centered cross-fusion). All DL models were based on the Vision Transformer (ViT) framework. Model performance was evaluated using metrics such as AUC and ACC, and robustness was assessed through subgroup analyses and visualization using Grad-CAM. Among the eight models, the DL model using DWI as the primary sequence with cross-fusion of other MRI sequences (Model 8) achieved the best performance. In the test cohort, Model 8 demonstrated an AUC of 0.914, an ACC of 0.830, high specificity (0.818), and high sensitivity (0.853). Subgroup analysis showed that Model 8 was robust in most subgroups, with no significant difference in predictions (p > 0.05) and AUC values consistently exceeding 0.900; a significant predictive difference was observed only in the BMI group (p < 0.001). In external validation, Model 8 achieved AUC values of 0.910 and 0.912 in center 2 and center 3, respectively. Visualization using Grad-CAM identified the infarct core as the most critical region contributing to predictions, with consistent feature attention across DWI, T1WI, T2WI, and FLAIR sequences, further supporting the interpretability of the model. A ViT-based DL model with cross-modal fusion strategies provides a non-invasive and efficient tool for classifying AIS severity. Its robust performance across subgroups and its interpretability make it a promising tool for personalized management and decision-making in clinical practice.
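DWI-centered cross-fusion can be pictured as cross-attention in which DWI features supply the query and the other MRI sequences supply keys and values. The single-head, single-query sketch below illustrates only that mechanism; the study's actual model uses multi-head ViT blocks over patch tokens, and all names here are illustrative.

```python
import math

def cross_attention(query, keys, values):
    """Let one query vector (e.g. a DWI feature) attend over per-sequence
    key/value vectors (e.g. T1WI, T2WI, FLAIR features) and return the
    attention-weighted mix of the values."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    # Numerically stable softmax over the attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

When all keys match the query equally, the output is simply the average of the value vectors, which is a quick sanity check for the softmax weighting.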

Does restrictive anorexia nervosa impact brain aging? A machine learning approach to estimate age based on brain structure.

Gupta Y, de la Cruz F, Rieger K, di Giuliano M, Gaser C, Cole J, Breithaupt L, Holsen LM, Eddy KT, Thomas JJ, Cetin-Karayumak S, Kubicki M, Lawson EA, Miller KK, Misra M, Schumann A, Bär KJ

pubmed logopapers · Jun 13 2025
Anorexia nervosa (AN), a severe eating disorder marked by extreme weight loss and malnutrition, leads to significant alterations in brain structure. This study used machine learning (ML) to estimate brain age from structural MRI scans and investigated brain-predicted age difference (brain-PAD) as a potential biomarker in AN. Structural MRI scans were collected from female participants aged 10-40 years across two institutions (Boston, USA, and Jena, Germany), including acute AN (acAN; n=113), weight-restored AN (wrAN; n=35), and age-matched healthy controls (HC; n=90). The ML model was trained on 3487 healthy female participants (ages 5-45 years) from ten datasets, using 377 neuroanatomical features extracted from T1-weighted MRI scans. The model achieved strong performance with a mean absolute error (MAE) of 1.93 years and a correlation of r = 0.88 in HCs. In acAN patients, brain age was overestimated by an average of +2.25 years, suggesting advanced brain aging. In contrast, wrAN participants showed significantly lower brain-PAD than acAN (+0.26 years, p=0.0026) and did not differ from HC (p=0.98), suggesting normalization of brain age estimates following weight restoration. A significant group-by-age interaction effect on predicted brain age (p<0.001) indicated that brain age deviations were most pronounced in younger acAN participants. Brain-PAD in acAN was significantly negatively associated with BMI (r = -0.291, p_FDR = 0.005), but not in the wrAN or HC groups. Importantly, no significant associations were found between brain-PAD and clinical symptom severity. These findings suggest that AN is linked to advanced brain aging during the acute stage of illness, and that this deviation may partially normalize following weight recovery.
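Brain-PAD is simply the gap between model-predicted and chronological age, and MAE is the standard way to validate the underlying age model. Both quantities used in the study above can be sketched in a few lines (illustrative helper names):

```python
def brain_pad(predicted_ages, chronological_ages):
    """Brain-predicted age difference per participant: positive values
    mean the brain looks older than its chronological age."""
    return [p - c for p, c in zip(predicted_ages, chronological_ages)]

def mean_absolute_error(predicted, actual):
    """Average absolute gap between predicted and true ages, used to
    benchmark the age-estimation model on healthy controls."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)
```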

Prediction of functional outcome after traumatic brain injury: a narrative review.

Iaquaniello C, Scordo E, Robba C

pubmed logopapers · Jun 13 2025
To synthesize current evidence on prognostic factors, tools, and strategies influencing functional outcomes in patients with traumatic brain injury (TBI), with a focus on the acute and postacute phases of care. Key early predictors such as Glasgow Coma Scale (GCS) scores, pupillary reactivity, and computed tomography (CT) imaging findings remain fundamental in guiding clinical decision-making. Prognostic models like IMPACT and CRASH enhance early risk stratification, while outcome measures such as the Glasgow Outcome Scale-Extended (GOS-E) provide structured long-term assessments. Despite their utility, heterogeneity in assessment approaches and treatment protocols continues to limit consistency in outcome predictions. Recent advancements highlight the value of fluid biomarkers like neurofilament light chain (NFL) and glial fibrillary acidic protein (GFAP), which offer promising avenues for improved accuracy. Additionally, artificial intelligence models are emerging as powerful tools to integrate complex datasets and refine individualized outcome forecasting. Neurological prognostication after TBI is evolving through the integration of clinical, radiological, molecular, and computational data. Although standardized models and scales remain foundational, emerging technologies and therapies - such as biomarkers, machine learning, and neurostimulants - represent a shift toward more personalized and actionable strategies to optimize recovery and long-term function.

Recent Advances in sMRI and Artificial Intelligence for Presurgical Planning in Focal Cortical Dysplasia: A Systematic Review.

Mahmoudi A, Alizadeh A, Ganji Z, Zare H

pubmed logopapers · Jun 13 2025
Focal Cortical Dysplasia (FCD) is a leading cause of drug-resistant epilepsy, particularly in children and young adults, necessitating precise presurgical planning. Traditional structural MRI often fails to detect subtle FCD lesions, especially in MRI-negative cases. Recent advancements in Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), have the potential to enhance the sensitivity and specificity of FCD detection. This systematic review, following PRISMA guidelines, searched PubMed, Embase, Scopus, Web of Science, and Science Direct for articles published from 2020 onwards, using keywords related to "Focal Cortical Dysplasia," "MRI," and "Artificial Intelligence/Machine Learning/Deep Learning." Included were original studies employing AI and structural MRI (sMRI) for FCD detection in humans, reporting quantitative performance metrics, and published in English. Data extraction was performed independently by two reviewers, with discrepancies resolved by a third. Among 88 full-text articles reviewed, 27 met the inclusion criteria. The included studies demonstrated that AI significantly improved FCD detection, achieving sensitivity up to 97.1% and specificity up to 84.3% across various MRI sequences, including MPRAGE, MP2RAGE, and FLAIR. AI models, particularly deep learning models, matched or surpassed human radiologist performance, with combined AI-human expertise reaching detection rates of up to 87%. The studies emphasized the importance of advanced MRI sequences and multimodal MRI for enhanced detection, though model performance varied with FCD type and training datasets. Recent advances in sMRI and AI, especially deep learning, offer substantial potential to improve FCD detection, leading to better presurgical planning and patient outcomes in drug-resistant epilepsy. These methods enable faster, more accurate, and automated FCD detection, potentially enhancing surgical decision-making. Further clinical validation and optimization of AI algorithms across diverse datasets are essential for broader clinical translation.
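The sensitivity and specificity figures reported across these studies follow the standard confusion-matrix definitions, sketched here for reference:

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of actual FCD cases the detector flags (true positive rate)."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Fraction of non-lesional cases correctly cleared (true negative rate)."""
    return true_negatives / (true_negatives + false_positives)
```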

Enhancing Privacy: The Utility of Stand-Alone Synthetic CT and MRI for Tumor and Bone Segmentation

André Ferreira, Kunpeng Xie, Caroline Wilpert, Gustavo Correia, Felix Barajas Ordonez, Tiago Gil Oliveira, Maike Bode, Robert Siepmann, Frank Hölzle, Rainer Röhrig, Jens Kleesiek, Daniel Truhn, Jan Egger, Victor Alves, Behrus Puladi

arxiv logopreprint · Jun 13 2025
AI requires extensive datasets, while medical data are subject to strict data protection. Anonymization is essential but poses a challenge for some regions, such as the head, where identifying structures overlap with regions of clinical interest. Synthetic data offer a potential solution, but studies often lack rigorous evaluation of realism and utility. Therefore, we investigate to what extent synthetic data can replace real data in segmentation tasks. We employed head and neck cancer CT scans and brain glioma MRI scans from two large datasets. Synthetic data were generated using generative adversarial networks and diffusion models. We evaluated the quality of the synthetic data using MAE, MS-SSIM, radiomics, and a Visual Turing Test (VTT) performed by 5 radiologists, and their usefulness in segmentation tasks using DSC. Radiomics analysis indicates high fidelity of synthetic MRIs but falls short for CT tissue, with correlation coefficients of 0.8784 and 0.5461 for MRI and CT tumors, respectively. DSC results indicate limited utility of synthetic data: tumor segmentation achieved DSC=0.064 on CT and 0.834 on MRI, while bone segmentation achieved a mean DSC=0.841. A relation between DSC and correlation is observed but is limited by the complexity of the task. VTT results show the utility of synthetic CTs, but with limited educational applications. Synthetic data can be used independently for the segmentation task, although limited by the complexity of the structures to segment. Advancing generative models to better tolerate heterogeneous inputs and learn subtle details is essential for enhancing their realism and expanding their application potential.
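The DSC used throughout these comparisons is the Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|), computed over binary segmentation masks. A minimal sketch over flattened 0/1 masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, flattened to
    lists of 0/1 values: 1.0 means perfect overlap, 0.0 means none."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / size if size else 1.0
```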

BraTS orchestrator: Democratizing and Disseminating state-of-the-art brain tumor image analysis

Florian Kofler, Marcel Rosier, Mehdi Astaraki, Ujjwal Baid, Hendrik Möller, Josef A. Buchner, Felix Steinbauer, Eva Oswald, Ezequiel de la Rosa, Ivan Ezhov, Constantin von See, Jan Kirschke, Anton Schmick, Sarthak Pati, Akis Linardos, Carla Pitarch, Sanyukta Adap, Jeffrey Rudie, Maria Correia de Verdier, Rachit Saluja, Evan Calabrese, Dominic LaBella, Mariam Aboian, Ahmed W. Moawad, Nazanin Maleki, Udunna Anazodo, Maruf Adewole, Marius George Linguraru, Anahita Fathi Kazerooni, Zhifan Jiang, Gian Marco Conte, Hongwei Li, Juan Eugenio Iglesias, Spyridon Bakas, Benedikt Wiestler, Marie Piraud, Bjoern Menze

arxiv logopreprint · Jun 13 2025
The Brain Tumor Segmentation (BraTS) cluster of challenges has significantly advanced brain tumor image analysis by providing large, curated datasets and addressing clinically relevant tasks. However, despite its success and popularity, algorithms and models developed through BraTS have seen limited adoption in both scientific and clinical communities. To accelerate their dissemination, we introduce BraTS orchestrator, an open-source Python package that provides seamless access to state-of-the-art segmentation and synthesis algorithms for diverse brain tumors from the BraTS challenge ecosystem. Available on GitHub (https://github.com/BrainLesion/BraTS), the package features intuitive tutorials designed for users with minimal programming experience, enabling both researchers and clinicians to easily deploy winning BraTS algorithms for inference. By abstracting the complexities of modern deep learning, BraTS orchestrator democratizes access to the specialized knowledge developed within the BraTS community, making these advances readily available to broader neuro-radiology and neuro-oncology audiences.

Deep-Learning Based Contrast Boosting Improves Lesion Visualization and Image Quality: A Multi-Center Multi-Reader Study on Clinical Performance with Standard Contrast Enhanced MRI of Brain Tumors

Pasumarthi, S., Campbell Arnold, T., Colombo, S., Rudie, J. D., Andre, J. B., Elor, R., Gulaka, P., Shankaranarayanan, A., Erb, G., Zaharchuk, G.

medrxiv logopreprint · Jun 13 2025
Background: Gadolinium-based contrast agents (GBCAs) are used in brain MRI exams to improve the visualization of pathology and the delineation of lesions. Higher doses of GBCAs can improve lesion sensitivity but involve substantial deviation from standard-of-care procedures and may have safety implications, particularly in light of recent findings on gadolinium retention and deposition.

Purpose: To evaluate the clinical performance of an FDA-cleared deep-learning (DL) based contrast boosting algorithm in routine clinical brain MRI exams.

Methods: A multi-center retrospective database of contrast-enhanced brain MRI images (obtained from April 2017 to December 2023) was used to evaluate a DL-based contrast boosting algorithm. Pre-contrast and standard post-contrast (SC) images were processed with the algorithm to obtain contrast-boosted (CB) images. Quantitative performance of CB images relative to SC images was assessed using contrast-to-noise ratio (CNR), lesion-to-brain ratio (LBR), and contrast enhancement percentage (CEP). Three board-certified radiologists reviewed CB and SC images side by side for qualitative evaluation and rated them on a 4-point Likert scale for lesion contrast enhancement, border delineation, internal morphology, overall image quality, presence of artefacts, and changes in vessel conspicuity. The presence, cause, and severity of any false lesions were recorded. CB results were compared to SC using the Wilcoxon signed-rank test for statistical significance.

Results: Brain MRI images from 110 patients (47 ± 22 years; 52 females, 47 males, 11 N/A) were evaluated. CB images showed superior quantitative performance to SC images in terms of CNR (+634%), LBR (+70%), and CEP (+150%). In the qualitative assessment, CB images showed better lesion visualization (3.73 vs 3.16) and better image quality (3.55 vs 3.07). Readers were able to rule out all false lesions on CB by using SC for comparison.

Conclusions: Deep-learning based contrast boosting improves lesion visualization and image quality without increasing contrast dosage.

Key Results: In a retrospective study of 110 patients, deep-learning based contrast-boosted (CB) images showed better lesion visualization than standard post-contrast (SC) brain MRI images (3.73 vs 3.16; mean reader scores on a 4-point Likert scale). CB images had better overall image quality than SC images (3.55 vs 3.07). Contrast-to-noise ratio, lesion-to-brain ratio, and contrast enhancement percentage for CB images were significantly higher than for SC images (+729%, +88%, and +165%; p < 0.001).

Summary Statement: Deep-learning based contrast boosting achieves better lesion visualization and overall image quality and provides more contrast information, without increasing the contrast dosage in contrast-enhanced brain MR protocols.
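The study's quantitative metrics (CNR, CEP) have widely used textbook forms; since the abstract does not state the exact definitions, the sketch below uses common formulations as an assumption, with illustrative function names.

```python
def contrast_to_noise_ratio(lesion_mean, background_mean, noise_std):
    """One common CNR definition: lesion-to-background signal difference
    normalized by the image noise standard deviation."""
    return abs(lesion_mean - background_mean) / noise_std

def contrast_enhancement_percentage(post_mean, pre_mean):
    """CEP as the percent signal increase from pre- to post-contrast
    within a lesion region of interest (assumed formulation)."""
    return 100.0 * (post_mean - pre_mean) / pre_mean
```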
