
SMART MRS: A Simulated MEGA-PRESS ARTifacts toolbox for GABA-edited MRS.

Bugler H, Shamaei A, Souza R, Harris AD

PubMed · Jun 8, 2025
To create a Python-based toolbox to simulate commonly occurring artifacts in single-voxel gamma-aminobutyric acid (GABA)-edited MRS data. The toolbox was designed to maximize user flexibility and contains artifact, applied, input/output (I/O), and support functions. The artifact functions can produce spurious echoes, eddy currents, nuisance peaks, line broadening, baseline contamination, linear frequency drifts, and frequency and phase shift artifacts. Applied functions combine or apply specific parameter values to produce recognizable effects such as lipid peak and motion contamination. I/O and support functions provide additional functionality to accommodate different kinds of input data (MATLAB FID-A .mat files, NIfTI-MRS files), which vary by domain (time vs. frequency), MRS data type (e.g., edited vs. non-edited), and scale. To highlight the utility of the toolbox, we present a frequency and phase correction experiment in which a machine learning model is trained on corrupted simulated data and validated on in vivo data. Data simulated with the toolbox complement existing resources for research applications, as demonstrated by training a frequency and phase correction deep learning model that is then applied to in vivo data containing artifacts. Visual assessment also confirms that the simulated artifacts resemble those found in in vivo data. Our easy-to-install Python artifact simulation toolbox SMART_MRS is useful for enhancing the diversity and quality of existing simulated edited-MRS data and is complementary to existing MRS simulation software.
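
The frequency- and phase-shift artifacts described above reduce to complex-exponential multiplications of the time-domain FID. The sketch below illustrates that math with plain NumPy on a synthetic single-resonance FID; it does not reproduce the actual SMART_MRS function names or signatures, and all parameter values are arbitrary assumptions.

```python
import numpy as np

def simulate_fid(n_points=2048, dwell_time=0.0005, freq_hz=150.0, t2=0.1):
    """Synthetic single-resonance FID: complex exponential with T2 decay."""
    t = np.arange(n_points) * dwell_time
    return np.exp(2j * np.pi * freq_hz * t) * np.exp(-t / t2), t

def apply_frequency_shift(fid, t, shift_hz):
    """Shift every resonance by shift_hz (models scanner frequency drift)."""
    return fid * np.exp(2j * np.pi * shift_hz * t)

def apply_phase_shift(fid, phase_deg):
    """Apply a zero-order phase offset to the whole transient."""
    return fid * np.exp(1j * np.deg2rad(phase_deg))

fid, t = simulate_fid()
corrupted = apply_phase_shift(apply_frequency_shift(fid, t, shift_hz=5.0), phase_deg=15.0)
spectrum = np.fft.fftshift(np.fft.fft(corrupted))  # frequency-domain view
```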

Simultaneous Segmentation of Ventricles and Normal/Abnormal White Matter Hyperintensities in Clinical MRI using Deep Learning

Mahdi Bashiri Bawil, Mousa Shamsi, Abolhassan Shakeri Bavil

arXiv preprint · Jun 8, 2025
Multiple sclerosis (MS) diagnosis and monitoring rely heavily on accurate assessment of brain MRI biomarkers, particularly white matter hyperintensities (WMHs) and ventricular changes. Current segmentation approaches suffer from several limitations: they typically segment these structures independently despite their pathophysiological relationship, struggle to differentiate between normal and pathological hyperintensities, and are poorly optimized for anisotropic clinical MRI data. We propose a novel 2D pix2pix-based deep learning framework for simultaneous segmentation of ventricles and WMHs with the unique capability to distinguish between normal periventricular hyperintensities and pathological MS lesions. Our method was developed and validated on FLAIR MRI scans from 300 MS patients. Compared to established methods (SynthSeg, Atlas Matching, BIANCA, LST-LPA, LST-LGA, and WMH-SynthSeg), our approach achieved superior performance for both ventricle segmentation (Dice: 0.801 ± 0.025, HD95: 18.46 ± 7.1 mm) and WMH segmentation (Dice: 0.624 ± 0.061, precision: 0.755 ± 0.161). Furthermore, our method successfully differentiated between normal and abnormal hyperintensities with a Dice coefficient of 0.647. Notably, our approach demonstrated exceptional computational efficiency, completing end-to-end processing in approximately 4 seconds per case, up to 36 times faster than baseline methods, while maintaining minimal resource requirements. This combination of improved accuracy, clinically relevant differentiation capability, and computational efficiency addresses critical limitations in current neuroimaging analysis, potentially enabling integration into routine clinical workflows and enhancing MS diagnosis and monitoring.
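
For reference, the Dice similarity coefficient reported throughout these results is the overlap measure 2|A ∩ B| / (|A| + |B|). A minimal sketch, assuming binary NumPy masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```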

Transfer Learning and Explainable AI for Brain Tumor Classification: A Study Using MRI Data from Bangladesh

Shuvashis Sarker

arXiv preprint · Jun 8, 2025
Brain tumors, whether benign or malignant, pose considerable health risks; malignant tumors are more perilous due to their swift and uncontrolled proliferation. Timely identification is crucial for enhancing patient outcomes, particularly in nations such as Bangladesh, where healthcare infrastructure is constrained. Manual MRI analysis is arduous and susceptible to inaccuracies, rendering it inefficient for prompt diagnosis. This research sought to tackle these problems by creating an automated brain tumor classification system utilizing MRI data obtained from several hospitals in Bangladesh. Advanced deep learning models, including VGG16, VGG19, and ResNet50, were utilized to classify glioma, meningioma, and other brain tumors. Explainable AI (XAI) methodologies, such as Grad-CAM and Grad-CAM++, were employed to improve model interpretability by highlighting the regions of the MRI scans that most influenced the classification. VGG16 achieved the highest accuracy, attaining 99.17%. The integration of XAI enhanced the system's transparency and reliability, rendering it more appropriate for clinical application in resource-limited environments such as Bangladesh. This study highlights the capability of deep learning models, in conjunction with explainable artificial intelligence (XAI), to enhance brain tumor detection and identification in areas with restricted access to advanced medical technologies.
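
Grad-CAM, named above, weights the activations of a late convolutional layer by the spatially averaged class-score gradients to localize the evidence for a prediction. A minimal sketch on a torchvision VGG16; the hooked layer and normalization are illustrative choices, not the study's exact configuration:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
target_layer = model.features[28]  # last Conv2d in the VGG16 feature stack

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

def grad_cam(image: torch.Tensor, class_idx=None) -> torch.Tensor:
    """Return a [0, 1] heatmap the size of the input; image is (1, 3, H, W), normalized."""
    logits = model(image)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # GAP over space
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```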

MRI-mediated intelligent multimodal imaging system: from artificial intelligence to clinical imaging diagnosis.

Li Y, Wang J, Pan X, Shan Y, Zhang J

PubMed · Jun 8, 2025
Although MRI is a mature diagnostic method in clinical application, favored by doctors and patients, it still faces difficult bottleneck problems. AI strategies such as multimodal imaging integration and machine learning can be used to build an intelligent multimodal imaging system based on MRI data to address unmet clinical needs in various medical environments. This review systematically discusses the development of MRI-guided multimodal imaging systems and the application of intelligent multimodal imaging systems integrated with artificial intelligence to the early diagnosis of brain and cardiovascular diseases. The safe and effective deployment of AI in clinical diagnostic equipment can help enhance early, accurate diagnosis and personalized patient care.

A review of multimodal fusion-based deep learning for Alzheimer's disease.

Zhang R, Sheng J, Zhang Q, Wang J, Wang B

PubMed · Jun 7, 2025
Alzheimer's Disease (AD) is one of the most prevalent neurodegenerative disorders worldwide, characterized by significant memory and cognitive decline in its later stages that severely impacts daily life. Consequently, early diagnosis and accurate assessment are crucial for delaying disease progression. In recent years, multimodal imaging has gained widespread adoption in AD diagnosis and research, particularly the combined use of Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). The complementarity of these modalities in structural and metabolic information offers a unique advantage for comprehensive disease understanding and precise diagnosis. With the rapid advancement of deep learning techniques, efficient fusion of MRI and PET multimodal data has emerged as a prominent research focus. This review systematically surveys the latest advancements in deep learning-based multimodal fusion of MRI and PET images for AD research, with a particular focus on studies published in the past five years (2021-2025). It first introduces the main sources of AD-related data, along with data preprocessing and feature extraction methods. Then, it summarizes performance metrics and multimodal fusion techniques. Next, it explores the application of various deep learning models and their variants in multimodal fusion tasks. Finally, it analyzes the key challenges currently faced in the field, including data scarcity and imbalance and inter-institutional data heterogeneity, and discusses potential solutions and future research directions. This review aims to provide systematic guidance for researchers in the field of MRI and PET multimodal fusion, with the ultimate goal of advancing the development of early AD diagnosis and intervention strategies.
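
One of the simplest fusion patterns covered by such surveys is feature-level (late) fusion, in which MRI and PET embeddings are encoded separately and concatenated before classification. A minimal PyTorch sketch; the encoder input sizes, layer widths, and three-class output are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Encode each modality separately, concatenate, then classify."""
    def __init__(self, in_dim=512, embed_dim=256, n_classes=3):
        super().__init__()
        self.mri_encoder = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())
        self.pet_encoder = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, mri_feat, pet_feat):
        fused = torch.cat([self.mri_encoder(mri_feat), self.pet_encoder(pet_feat)], dim=1)
        return self.classifier(fused)

# e.g. logits = LateFusionClassifier()(torch.randn(8, 512), torch.randn(8, 512))
```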

Current utilization and impact of AI LVO detection tools in acute stroke triage: a multicenter survey analysis.

Darkhabani Z, Ezzeldin R, Delora A, Kass-Hout O, Alderazi Y, Nguyen TN, El-Ghanem M, Anwoju T, Ali Z, Ezzeldin M

PubMed · Jun 7, 2025
Artificial intelligence (AI) tools for large vessel occlusion (LVO) detection are increasingly used in acute stroke triage to expedite diagnosis and intervention. However, variability in access and workflow integration limits their potential impact. This study assessed current usage patterns, access disparities, and integration levels across U.S. stroke programs. We conducted a cross-sectional, web-based survey of 97 multidisciplinary stroke care providers from diverse institutions. Descriptive statistics summarized demographics, AI tool usage, access, and integration. Two-proportion Z-tests assessed differences across institutional types. Most respondents (97.9%) reported AI tool use, primarily Viz AI and Rapid AI, but only 62.1% consistently used them for triage prior to radiologist interpretation. Just 37.5% reported formal protocol integration, and 43.6% had designated personnel for AI alert response. Access varied significantly across departments, and in only 61.7% of programs did all relevant team members have access. Formal implementation of the AI detection tools did not differ based on certification (z = -0.2; p = 0.4) or whether the program was academic or community-based (z = -0.3; p = 0.3). AI-enabled LVO detection tools have the potential to improve stroke care and patient outcomes by expediting workflows and reducing treatment delays. This survey evaluated current utilization of these tools and revealed widespread adoption alongside significant variability in access, integration, and workflow standardization. Larger, more diverse samples are needed to validate these findings across different hospital types, and further prospective research is essential to determine how formal integration of AI tools can enhance stroke care delivery, reduce disparities, and improve clinical outcomes.
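
The two-proportion Z-test used above compares two sample proportions under a pooled standard error. A minimal sketch with placeholder counts (the survey's raw group sizes are not given in the abstract):

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> float:
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_ztest(x1=18, n1=45, x2=21, n2=52)  # hypothetical counts
```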

SCAI-Net: An AI-driven framework for optimized, fast, and resource-efficient skull implant generation for cranioplasty using CT images.

Juneja M, Poddar A, Kharbanda M, Sudhir A, Gupta S, Joshi P, Goel A, Fatma N, Gupta M, Tarkas S, Gupta V, Jindal P

PubMed · Jun 7, 2025
Skull damage caused by craniectomy or trauma necessitates accurate and precise Patient-Specific Implant (PSI) design to restore the cranial cavity. Conventional Computer-Aided Design (CAD)-based methods for PSI design are highly infrastructure-intensive, require specialised skills, and are time-consuming, resulting in prolonged patient wait times. Recent advancements in Artificial Intelligence (AI) provide automated, faster, and scalable alternatives. This study introduces the Skull Completion using AI Network (SCAI-Net) framework, a deep-learning-based approach for automated cranial defect reconstruction using Computed Tomography (CT) images. The framework proposes two defect reconstruction variants: SCAI-Net-SDR (Subtraction-based Defect Reconstruction), which first reconstructs the full skull and then performs binary subtraction to obtain the reconstructed defect, and SCAI-Net-DDR (Direct Defect Reconstruction), which generates the reconstructed defect directly without requiring full-skull reconstruction. To enhance model robustness, SCAI-Net was trained on an augmented dataset of 2760 images, created by combining the MUG500+ and SkullFix datasets and featuring artificial defects across multiple cranial regions. Unlike subtraction-based SCAI-Net-SDR, which requires full-skull reconstruction before binary subtraction, and conventional CAD-based methods, which rely on interpolation or mirroring, SCAI-Net-DDR significantly reduces computational overhead. By eliminating the full-skull reconstruction step, DDR reduces training time by 66% (85 min vs. 250 min for SDR) and achieves a 99.996% faster defect reconstruction time compared to CAD (0.1 s vs. 2400 s). Based on the quantitative evaluation conducted on the SkullFix test cases, SCAI-Net-DDR emerged as the leading model among all evaluated approaches, achieving the highest Dice Similarity Coefficient (DSC: 0.889), a low Hausdorff Distance (HD: 1.856 mm), and a superior Structural Similarity Index (SSIM: 0.897). Similarly, within the subset of subtraction-based reconstruction approaches evaluated, SCAI-Net-SDR demonstrated competitive performance, achieving the best HD (1.855 mm) and the highest SSIM (0.889), confirming its strong standing among methods using the subtraction paradigm. SCAI-Net generates reconstructed defects, which undergo post-processing to ensure manufacturing readiness; steps include surface smoothing, thickness validation, and edge preparation for secure fixation and seamless digital manufacturing compatibility. End-to-end implant generation time for DDR demonstrated a 96.68% reduction (93.5 s), while SDR achieved a 96.64% reduction (94.6 s), significantly outperforming CAD-based methods (2820 s). Finite Element Analysis (FEA) confirmed the SCAI-Net-generated implants' robust load-bearing capacity under extreme loading conditions (1780 N), while edge gap analysis validated precise anatomical fit. Clinical validation further confirmed boundary accuracy, curvature alignment, and secure fit within the cranial cavity. These results position SCAI-Net as a transformative, time-efficient, and resource-optimized solution for AI-driven cranial defect reconstruction and implant generation.
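
The binary subtraction step behind the SDR variant is a voxelwise set difference: the defect is whatever the reconstructed full skull contains that the defective input does not. A minimal sketch, assuming co-registered binary volumes; the array names are illustrative, not the paper's code:

```python
import numpy as np

def subtract_defect(full_skull: np.ndarray, defective_skull: np.ndarray) -> np.ndarray:
    """Binary subtraction: defect = full_skull AND NOT defective_skull."""
    full = full_skull.astype(bool)
    defective = defective_skull.astype(bool)
    return np.logical_and(full, np.logical_not(defective))
```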

NeXtBrain: Combining local and global feature learning for brain tumor classification.

Pacal I, Akhan O, Deveci RT, Deveci M

PubMed · Jun 7, 2025
The accurate and timely diagnosis of brain tumors is of paramount clinical significance for effective treatment planning and improved patient outcomes. While deep learning has advanced medical image analysis, concurrently achieving high classification accuracy, robust generalization, and computational efficiency remains a formidable challenge. This is often due to the difficulty of optimally capturing both fine-grained local tumor features and their broader global contextual cues without incurring substantial computational costs. This paper introduces NeXtBrain, a novel hybrid architecture meticulously designed to overcome these limitations. NeXtBrain's core innovations, the NeXt Convolutional Block (NCB) and the NeXt Transformer Block (NTB), synergistically enhance feature learning: NCB leverages Multi-Head Convolutional Attention and a SwiGLU-based MLP to precisely extract subtle local tumor morphologies and detailed textures, while NTB integrates self-attention with convolutional attention and a SwiGLU MLP to effectively model long-range spatial dependencies and global contextual relationships, crucial for differentiating complex tumor characteristics. Evaluated on two publicly available benchmark datasets, Figshare and Kaggle, NeXtBrain was rigorously compared against 17 state-of-the-art (SOTA) models. On Figshare, it achieved 99.78% accuracy and a 99.77% F1-score. On Kaggle, it attained 99.78% accuracy and a 99.81% F1-score, surpassing leading SOTA ViT, CNN, and hybrid models. Critically, NeXtBrain demonstrates exceptional computational efficiency, achieving these SOTA results with only 23.91 million parameters, requiring just 10.32 GFLOPs, and exhibiting a rapid inference time of 0.007 ms. This efficiency allows it to outperform significantly larger models such as DeiT3-Base (85.82 M parameters) and Swin-Base (86.75 M parameters) in both accuracy and computational demand.
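
The SwiGLU-based MLP that both blocks rely on replaces the usual two-layer MLP with a gated variant, out = W3(SiLU(W1 x) ⊙ (W2 x)). A minimal PyTorch sketch as a standalone module; NeXtBrain's exact widths and placement are not specified in the abstract:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """Gated MLP: out = W3(SiLU(W1 x) * (W2 x))."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)  # gate branch
        self.w2 = nn.Linear(dim, hidden, bias=False)  # value branch
        self.w3 = nn.Linear(hidden, dim, bias=False)  # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w3(F.silu(self.w1(x)) * self.w2(x))

# e.g. y = SwiGLUMLP(dim=64, hidden=256)(torch.randn(8, 64))
```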

Hypothalamus and intracranial volume segmentation at the group level by use of a Gradio-CNN framework.

Vernikouskaya I, Rasche V, Kassubek J, Müller HP

PubMed · Jun 6, 2025
This study aimed to develop and evaluate a graphical user interface (GUI) for the automated segmentation of the hypothalamus and intracranial volume (ICV) in brain MRI scans. The interface was designed to facilitate efficient and accurate segmentation for research applications, with a focus on accessibility and ease of use for end-users. We developed a web-based GUI using the Gradio library, integrating deep learning-based segmentation models trained on annotated brain MRI scans. The model utilizes a U-Net architecture to delineate the hypothalamus and ICV. The GUI allows users to upload high-resolution MRI scans, visualize the segmentation results, calculate hypothalamic volume and ICV, and manually correct individual segmentation results. To ensure widespread accessibility, we deployed the interface using ngrok, allowing users to access the tool via a shared link. As an example of the universality of the approach, the tool was applied to a group of 90 patients with Parkinson's disease (PD) and 39 controls. The GUI demonstrated high usability and efficiency in segmenting the hypothalamus and the ICV, with no significant difference in normalized hypothalamic volume observed between PD patients and controls, consistent with previously published findings. The average processing time per patient volume was 18 s for the hypothalamus and 44 s for the ICV segmentation on a 6 GB NVIDIA GeForce GTX 1060 GPU. The ngrok-based deployment allowed for seamless access across different devices and operating systems, with an average connection time of less than 5 s. The developed GUI provides a powerful and accessible tool for applications in neuroimaging. The combination of the intuitive interface, accurate deep learning-based segmentation, and easy deployment via ngrok addresses the need for user-friendly tools in brain MRI analysis. This approach has the potential to streamline workflows in neuroimaging research.
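
Gradio makes this kind of wrapper very small: a function mapping an uploaded image to a mask becomes a web UI via gr.Interface. A minimal sketch in which segment_hypothalamus is a hypothetical stand-in for the authors' U-Net inference; note the paper exposed its app through ngrok, whereas launch(share=True) shown here uses Gradio's own tunnel:

```python
import gradio as gr
import numpy as np

def segment_hypothalamus(image: np.ndarray) -> np.ndarray:
    """Placeholder: run the trained U-Net here and return a mask image."""
    return np.zeros_like(image)  # replace with model inference

demo = gr.Interface(
    fn=segment_hypothalamus,
    inputs=gr.Image(label="Brain MRI slice"),
    outputs=gr.Image(label="Segmentation mask"),
    title="Hypothalamus segmentation",
)
demo.launch(share=True)  # or expose a local server through ngrok, as in the paper
```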

Can Transfer Learning Improve Supervised Segmentation of White Matter Bundles in Glioma Patients?

Riccardi, C., Ghezzi, S., Amorosino, G., Zigiotto, L., Sarubbo, S., Jovicich, J., Avesani, P.

bioRxiv preprint · Jun 6, 2025
In clinical neuroscience, the segmentation of the main white matter bundles is a prerequisite for many tasks, such as pre-operative neurosurgical planning and the monitoring of neuro-related diseases. Automating bundle segmentation with data-driven approaches and deep learning models has shown promising accuracy for healthy individuals. The lack of large clinical datasets is preventing the translation of these results to patients: inference on patient data with models trained on a healthy population is not effective because of domain shift. This study carries out an empirical analysis to investigate how transfer learning might help overcome these limitations. For our analysis, we consider a public dataset with hundreds of individuals and a clinical dataset of glioma patients. We focus our preliminary investigation on the corticospinal tract. The results show that transfer learning might be effective in partially overcoming the domain shift.
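
The transfer-learning recipe under investigation amounts to initializing from a model pretrained on the large healthy-population dataset and fine-tuning on the small clinical set. A minimal PyTorch sketch, assuming a segmentation network whose encoder parameters share an "encoder" name prefix; the freezing policy and learning rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

def finetune_on_clinical(model: nn.Module, clinical_loader, epochs: int = 10) -> nn.Module:
    """Adapt a healthy-population bundle segmenter to glioma-patient data."""
    # Freeze the pretrained encoder so scarce clinical data only adapts
    # the decoder/head (assumed parameter-name prefix).
    for name, param in model.named_parameters():
        if name.startswith("encoder"):
            param.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    loss_fn = nn.BCEWithLogitsLoss()  # binary bundle mask, e.g. corticospinal tract

    for _ in range(epochs):
        for images, masks in clinical_loader:  # small glioma dataset
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()
    return model
```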
