
Multiple deep learning models based on MRI images in discriminating glioblastoma from solitary brain metastases: a multicentre study.

Kong C, Yan D, Liu K, Yin Y, Ma C

PubMed · May 19 2025
To develop a deep learning model for accurate preoperative discrimination of glioblastoma from solitary brain metastases by combining multi-centre, multi-sequence magnetic resonance images, and to compare the performance of different deep learning models. Clinical data and MR images of 236 patients with pathologically confirmed glioblastoma or solitary brain metastases were retrospectively collected from January 2019 to May 2024 at the Provincial Hospital of Shandong First Medical University. The data were randomly split at a ratio of 8:2 into a training set of 197 cases and a test set of 39 cases. Images were preprocessed and the tumour regions were labelled. Individual MRI sequences and sequence combinations were used to train a 3D ResNet-18 model, and the optimal sequence combination was identified by five-fold cross-validation with data augmentation; this combination was then used to train the 3D Vision Transformer (3D ViT), 3D DenseNet, and 3D VGG models. Receiver operating characteristic (ROC) curves were plotted, and the area under the curve (AUC), accuracy, precision, recall, and F1 score were used to evaluate the discriminative performance of the models. In addition, 48 patients with glioblastoma or solitary brain metastases treated between January 2020 and December 2022 at the Affiliated Cancer Hospital of Shandong First Medical University were collected as an external test set to compare the discriminative performance, robustness, and generalisation ability of the four deep learning models. Among the sequence comparisons, the combination of the three sequences T1-CE, T2, and T2-FLAIR achieved the best discrimination, with an accuracy of 0.8718 and an AUC of 0.9305. When this sequence combination was used as input to the four deep learning models, the 3D ResNet-18 model achieved an external-validation accuracy of 0.8125 and an AUC of 0.8899, both the highest among all models. Combining multi-sequence MR images with a deep learning model can efficiently discriminate glioblastoma from solitary brain metastases preoperatively, and the 3D ResNet-18 model showed the highest discriminative efficacy.
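
As a concrete illustration of the pipeline described above, the sketch below shows one plausible way to stack the three MRI sequences (T1-CE, T2, T2-FLAIR) as input channels to a 3D ResNet-18 for the two-class task. The paper does not publish code; the torchvision `r3d_18` backbone, the channel-stacking strategy, and all dimensions here are assumptions for illustration only.

```python
# Hedged sketch: feeding a T1-CE/T2/T2-FLAIR combination to a 3D ResNet-18 for
# two-class (glioblastoma vs. solitary metastasis) prediction. Backbone choice
# and channel stacking are assumptions, not the authors' published code.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class MultiSequenceResNet18(nn.Module):
    def __init__(self, n_sequences: int = 3, n_classes: int = 2):
        super().__init__()
        self.backbone = r3d_18(weights=None)
        # Replace the RGB stem so it accepts one channel per MRI sequence.
        self.backbone.stem[0] = nn.Conv3d(
            n_sequences, 64, kernel_size=(3, 7, 7),
            stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_classes)

    def forward(self, x):  # x: (batch, n_sequences, D, H, W)
        return self.backbone(x)

model = MultiSequenceResNet18()
volumes = torch.randn(2, 3, 32, 128, 128)  # toy batch: T1-CE, T2, T2-FLAIR stacked
logits = model(volumes)
print(logits.shape)  # torch.Size([2, 2])
```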

A Skull-Adaptive Framework for AI-Based 3D Transcranial Focused Ultrasound Simulation

Vinkle Srivastav, Juliette Puel, Jonathan Vappou, Elijah Van Houten, Paolo Cabras, Nicolas Padoy

arXiv preprint · May 19 2025
Transcranial focused ultrasound (tFUS) is an emerging modality for non-invasive brain stimulation and therapeutic intervention, offering millimeter-scale spatial precision and the ability to target deep brain structures. However, the heterogeneous and anisotropic nature of the human skull introduces significant distortions to the propagating ultrasound wavefront, which require time-consuming patient-specific planning and corrections using numerical solvers for accurate targeting. To enable data-driven approaches in this domain, we introduce TFUScapes, the first large-scale, high-resolution dataset of tFUS simulations through anatomically realistic human skulls derived from T1-weighted MRI images. We have developed a scalable simulation engine pipeline using the k-Wave pseudo-spectral solver, where each simulation returns a steady-state pressure field generated by a focused ultrasound transducer placed at realistic scalp locations. In addition to the dataset, we present DeepTFUS, a deep learning model that estimates normalized pressure fields directly from input 3D CT volumes and transducer position. The model extends a U-Net backbone with transducer-aware conditioning, incorporating Fourier-encoded position embeddings and MLP layers to create global transducer embeddings. These embeddings are fused with U-Net encoder features via feature-wise modulation, dynamic convolutions, and cross-attention mechanisms. The model is trained using a combination of spatially weighted and gradient-sensitive loss functions, enabling it to approximate high-fidelity wavefields. The TFUScapes dataset is publicly released to accelerate research at the intersection of computational acoustics, neurotechnology, and deep learning. The project page is available at https://github.com/CAMMA-public/TFUScapes.
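
The transducer-aware conditioning described above (Fourier-encoded position embeddings, an MLP producing a global transducer embedding, and feature-wise modulation of U-Net encoder features) can be sketched as follows. This is a minimal FiLM-style interpretation, not the DeepTFUS implementation; all module names and sizes are illustrative assumptions.

```python
# Hedged sketch of transducer-aware conditioning: Fourier-encoded position ->
# MLP embedding -> feature-wise (FiLM-style) modulation of encoder features.
# Dimensions and module names are illustrative, not DeepTFUS internals.
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    def __init__(self, n_freqs: int = 8):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * math.pi)

    def forward(self, pos):  # pos: (batch, 3) transducer coordinates in [0, 1]
        ang = pos[..., None] * self.freqs          # (batch, 3, n_freqs)
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1)
        return enc.flatten(1)                      # (batch, 3 * 2 * n_freqs)

class FiLMConditioning(nn.Module):
    def __init__(self, n_freqs: int = 8, feat_channels: int = 64):
        super().__init__()
        self.fourier = FourierFeatures(n_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, 128), nn.ReLU(),
            nn.Linear(128, 2 * feat_channels))     # per-channel scale and shift

    def forward(self, feats, pos):  # feats: (batch, C, D, H, W) encoder features
        gamma, beta = self.mlp(self.fourier(pos)).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None, None]
        beta = beta[:, :, None, None, None]
        return (1 + gamma) * feats + beta          # FiLM: modulate encoder features

film = FiLMConditioning()
feats = torch.randn(2, 64, 16, 16, 16)             # toy U-Net encoder features
pos = torch.rand(2, 3)                             # toy normalized transducer positions
print(film(feats, pos).shape)                      # torch.Size([2, 64, 16, 16, 16])
```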

Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.

Ye S, Chen Y, Wang S, Xing L, Gao Y

PubMed · May 19 2025
Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range over which the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, therefore limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, which is referred to as 2D-3D registration. The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans. We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and the imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity differences and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method. In the simulation experiments, two X-ray projections of a head-and-neck image with a 45° angular discrepancy were used for the registration. The accuracy of the registration results was evaluated at four different moving positions with ground-truth motion parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of < 2 mm and sub-degree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angular discrepancies demonstrated the improved accuracy and robustness of the proposed method compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.
We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.
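
A minimal sketch of a composite similarity measure of the kind described above, combining a pixel-wise intensity term with a gradient-difference term between generated DRRs and acquired projections; the choice of L1 norms and the weighting factor are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a composite similarity measure: a pixel-wise intensity term
# plus a gradient-difference term between DRRs and projections. The weighting
# and exact norms are assumptions.
import torch
import torch.nn.functional as F

def image_gradients(img):  # img: (batch, 1, H, W)
    gx = img[..., :, 1:] - img[..., :, :-1]   # horizontal finite differences
    gy = img[..., 1:, :] - img[..., :-1, :]   # vertical finite differences
    return gx, gy

def composite_similarity(drr, proj, w_grad: float = 0.5):
    intensity_term = F.l1_loss(drr, proj)
    gx_d, gy_d = image_gradients(drr)
    gx_p, gy_p = image_gradients(proj)
    gradient_term = F.l1_loss(gx_d, gx_p) + F.l1_loss(gy_d, gy_p)
    return intensity_term + w_grad * gradient_term

drr = torch.rand(1, 1, 256, 256, requires_grad=True)   # toy generated DRR
proj = torch.rand(1, 1, 256, 256)                      # toy acquired projection
loss = composite_similarity(drr, proj)
loss.backward()                                        # differentiable w.r.t. the DRR
```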

FreqSelect: Frequency-Aware fMRI-to-Image Reconstruction

Junliang Ye, Lei Wang, Md Zakir Hossain

arXiv preprint · May 18 2025
Reconstructing natural images from functional magnetic resonance imaging (fMRI) data remains a core challenge in neural decoding due to the mismatch between the richness of visual stimuli and the noisy, low-resolution nature of fMRI signals. While recent two-stage models, combining deep variational autoencoders (VAEs) with diffusion models, have advanced this task, they treat all spatial-frequency components of the input equally. This uniform treatment forces the model to extract meaningful features and suppress irrelevant noise simultaneously, limiting its effectiveness. We introduce FreqSelect, a lightweight, adaptive module that selectively filters spatial-frequency bands before encoding. By dynamically emphasizing frequencies that are most predictive of brain activity and suppressing those that are uninformative, FreqSelect acts as a content-aware gate between image features and neural data. It integrates seamlessly into standard very deep VAE-diffusion pipelines and requires no additional supervision. Evaluated on the Natural Scenes Dataset, FreqSelect consistently improves reconstruction quality across both low- and high-level metrics. Beyond performance gains, the learned frequency-selection patterns offer interpretable insights into how different visual frequencies are represented in the brain. Our method generalizes across subjects and scenes, and holds promise for extension to other neuroimaging modalities, offering a principled approach to enhancing both decoding accuracy and neuroscientific interpretability.
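
A minimal sketch of frequency-band gating in the spirit of FreqSelect: decompose an image into radial spatial-frequency bands via the FFT and re-weight each band with a learnable gate before encoding. The band count, radial partitioning, and sigmoid gating are illustrative assumptions, not the published module.

```python
# Hedged sketch: learnable re-weighting of radial spatial-frequency bands.
import torch
import torch.nn as nn

class FrequencyBandGate(nn.Module):
    def __init__(self, n_bands: int = 4):
        super().__init__()
        self.n_bands = n_bands
        self.gates = nn.Parameter(torch.zeros(n_bands))  # one learnable logit per band

    def forward(self, x):  # x: (batch, C, H, W)
        H, W = x.shape[-2:]
        spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        fy = torch.linspace(-0.5, 0.5, H, device=x.device)
        fx = torch.linspace(-0.5, 0.5, W, device=x.device)
        radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)  # radial frequency map
        edges = torch.linspace(0.0, float(radius.max()), self.n_bands + 1,
                               device=x.device)
        band = torch.bucketize(radius, edges[1:-1])               # band index per bin
        weight = torch.sigmoid(self.gates)[band]                  # (H, W) gate map
        out = spec * weight                                       # re-weight each band
        return torch.fft.ifft2(torch.fft.ifftshift(out, dim=(-2, -1))).real

gate = FrequencyBandGate()
img = torch.randn(2, 3, 64, 64)   # toy stimulus batch
print(gate(img).shape)            # torch.Size([2, 3, 64, 64])
```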

From Low Field to High Value: Robust Cortical Mapping from Low-Field MRI

Karthik Gopinath, Annabel Sorby-Adams, Jonathan W. Ramirez, Dina Zemlyanker, Jennifer Guo, David Hunt, Christine L. Mac Donald, C. Dirk Keene, Timothy Coalson, Matthew F. Glasser, David Van Essen, Matthew S. Rosen, Oula Puonti, W. Taylor Kimberly, Juan Eugenio Iglesias

arXiv preprint · May 18 2025
Three-dimensional reconstruction of cortical surfaces from MRI for morphometric analysis is fundamental for understanding brain structure. While high-field MRI (HF-MRI) is standard in research and clinical settings, its limited availability hinders widespread use. Low-field MRI (LF-MRI), particularly portable systems, offers a cost-effective and accessible alternative. However, existing cortical surface analysis tools are optimized for high-resolution HF-MRI and struggle with the lower signal-to-noise ratio and resolution of LF-MRI. In this work, we present a machine learning method for 3D reconstruction and analysis of portable LF-MRI across a range of contrasts and resolutions. Our method works "out of the box" without retraining. It uses a 3D U-Net trained on synthetic LF-MRI to predict signed distance functions of cortical surfaces, followed by geometric processing to ensure topological accuracy. We evaluate our method using paired HF/LF-MRI scans of the same subjects, showing that LF-MRI surface reconstruction accuracy depends on acquisition parameters, including contrast type (T1 vs T2), orientation (axial vs isotropic), and resolution. A 3 mm isotropic T2-weighted scan acquired in under 4 minutes yields strong agreement with HF-derived surfaces: surface area correlates at r=0.96, cortical parcellations reach Dice=0.98, and gray matter volume achieves r=0.93. Cortical thickness remains more challenging, with correlations up to r=0.70, reflecting the difficulty of sub-mm precision with 3 mm voxels. We further validate our method on challenging postmortem LF-MRI, demonstrating its robustness. Our method represents a step toward enabling cortical surface analysis on portable LF-MRI. Code is available at https://surfer.nmr.mgh.harvard.edu/fswiki/ReconAny
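
The surface-extraction step described above, recovering a mesh from a predicted signed distance function, can be illustrated with a toy example: the zero level set of an SDF volume is the surface itself, which marching cubes extracts directly. The spherical SDF below merely stands in for the 3D U-Net output, and the paper's topology-correcting geometric processing is omitted.

```python
# Hedged sketch: extracting a surface mesh from a signed distance function (SDF)
# volume via its zero level set. A synthetic sphere SDF stands in for the
# network prediction; topology correction is not shown.
import numpy as np
from skimage.measure import marching_cubes

# Toy SDF of a sphere of radius 20 voxels centered in a 64^3 grid.
coords = np.mgrid[0:64, 0:64, 0:64].astype(np.float32)
sdf = np.sqrt(((coords - 32.0) ** 2).sum(axis=0)) - 20.0

# The zero level set of the SDF is the surface itself.
verts, faces, normals, values = marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)  # (N, 3) vertices and (M, 3) triangle faces
```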

ML-Driven Alzheimer's disease prediction: A deep ensemble modeling approach.

Jumaili MLF, Sonuç E

PubMed · May 17 2025
Alzheimer's disease (AD) is a progressive neurological disorder characterized by cognitive decline due to brain cell death, typically manifesting later in life. Early and accurate detection is critical for effective disease management and treatment. This study proposes an ensemble learning framework that combines five deep learning architectures (VGG16, VGG19, ResNet50, InceptionV3, and EfficientNetB7) to improve the accuracy of AD diagnosis. We use a comprehensive dataset of 3,714 MRI brain scans collected from specialized clinics in Iraq, categorized into three classes: NonDemented (834 images), MildDemented (1,824 images), and VeryDemented (1,056 images). The proposed voting ensemble model achieves a diagnostic accuracy of 99.32% on our dataset. The effectiveness of the model is further validated on two external datasets: OASIS (achieving 86.6% accuracy) and ADNI (achieving 99.5% accuracy), demonstrating competitive performance compared to existing approaches. Moreover, the proposed model exhibits high precision and recall across all stages of dementia, providing a reliable and robust tool for early AD detection. This study highlights the effectiveness of ensemble learning in AD diagnosis and shows promise for clinical applications.
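
A minimal sketch of the soft-voting idea underlying such an ensemble: average the softmax probabilities of the member networks and take the argmax. The unweighted mean as the combination rule and the toy stand-in members are assumptions; the study's members are fine-tuned VGG16, VGG19, ResNet50, InceptionV3, and EfficientNetB7.

```python
# Hedged sketch of a soft-voting ensemble over three dementia classes.
import torch
import torch.nn as nn

class SoftVotingEnsemble(nn.Module):
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        probs = [torch.softmax(m(x), dim=1) for m in self.members]
        return torch.stack(probs).mean(dim=0)   # average class probabilities

# Toy stand-ins for the five fine-tuned backbones; each maps an image to 3 logits.
members = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3)) for _ in range(5)]
ensemble = SoftVotingEnsemble(members)
x = torch.randn(4, 3, 32, 32)      # toy batch of scans
pred = ensemble(x).argmax(dim=1)   # predicted class per scan
print(pred.shape)                  # torch.Size([4])
```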

Evaluation of synthetic images derived from a neural network in pediatric brain magnetic resonance imaging.

Nagaraj UD, Meineke J, Sriwastwa A, Tkach JA, Leach JL, Doneva M

PubMed · May 17 2025
Synthetic MRI (SyMRI) is a technique used to estimate tissue properties and generate multiple MR sequence contrasts from a single acquisition. However, image quality can be suboptimal. To evaluate a neural network approach using artificial intelligence-based direct contrast synthesis (AI-DCS) of multi-contrast weighted images to improve image quality. This prospective, IRB-approved study enrolled 50 pediatric patients undergoing clinical brain MRI. In addition to the standard-of-care (SOC) clinical protocol, a 2D multi-delay multi-echo (MDME) sequence was acquired. SOC 3D T1-weighted (T1W), 2D T2-weighted (T2W), and 2D T2W fluid-attenuated inversion recovery (FLAIR) images from 35 patients were used to train a neural network that generates synthetic T1W, T2W, and FLAIR images. Quantitative analysis of grey matter (GM) and white matter (WM) apparent signal-to-noise (aSNR) and grey-white matter (GWM) apparent contrast-to-noise (aCNR) ratios was performed. Eight patients were evaluated. When compared to SyMRI, T1W AI-DCS had better overall image quality, reduced noise/artifacts, and better subjective SNR in 100% (16/16) of evaluations. When compared to SyMRI, T2W AI-DCS had better overall image quality and diagnostic confidence in 93.8% (15/16) and 87.5% (14/16) of evaluations, respectively. When compared to SyMRI, FLAIR AI-DCS was better in 93.8% (15/16) of evaluations for overall image quality and in 100% (16/16) of evaluations for noise/artifacts and subjective SNR. Quantitative analysis revealed higher WM aSNR compared with SyMRI (p < 0.05) for T1W, T2W, and FLAIR. AI-DCS demonstrates better overall image quality than SyMRI on T1W, T2W, and FLAIR.
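
A sketch of how apparent SNR and GM-WM apparent CNR are commonly computed from ROI statistics, assuming a background-noise convention; the study's exact ROI and noise definitions are not given in the abstract, so this is an illustration of the general quantities, not the paper's protocol.

```python
# Hedged sketch: apparent SNR and GM-WM apparent CNR from ROI statistics,
# using background standard deviation as the noise estimate (an assumption).
import numpy as np

def apparent_snr(tissue_roi: np.ndarray, background_roi: np.ndarray) -> float:
    return float(tissue_roi.mean() / background_roi.std())

def apparent_cnr(gm_roi: np.ndarray, wm_roi: np.ndarray,
                 background_roi: np.ndarray) -> float:
    return float(abs(gm_roi.mean() - wm_roi.mean()) / background_roi.std())

rng = np.random.default_rng(0)
gm = rng.normal(120, 10, 500)   # toy grey-matter intensities
wm = rng.normal(180, 10, 500)   # toy white-matter intensities
bg = rng.normal(0, 5, 500)      # toy background (noise) samples
print(apparent_snr(wm, bg), apparent_cnr(gm, wm, bg))
```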

A Robust Automated Segmentation Method for White Matter Hyperintensity of Vascular-origin.

He H, Jiang J, Peng S, He C, Sun T, Fan F, Song H, Sun D, Xu Z, Wu S, Lu D, Zhang J

PubMed · May 17 2025
White matter hyperintensity (WMH) is a primary manifestation of small vessel disease (SVD), leading to vascular cognitive impairment and other disorders. Accurate WMH quantification is vital for diagnosis and prognosis, but current automatic segmentation methods often fall short, especially across different datasets. The aim of this study is to develop and validate a robust deep learning segmentation method for WMH of vascular origin. We developed a transformer-based method for the automatic segmentation of vascular-origin WMH using both 3D T1 and 3D T2-FLAIR images. Our initial dataset comprised 126 participants with varying WMH burdens due to SVD, each with manually segmented WMH masks used for training and testing. External validation was performed on two independent datasets: the WMH Segmentation Challenge 2017 dataset (170 subjects) and an in-house vascular risk factor dataset (70 subjects), which included scans acquired on eight different MRI systems at field strengths of 1.5T, 3T, and 5T. This enabled a comprehensive assessment of the method's generalizability across diverse imaging conditions. We further compared our method against LGA, LPA, BIANCA, UBO-detector, and TrUE-Net in optimized settings. Our method consistently outperformed the others, achieving a median Dice coefficient of 0.78±0.09 on our primary dataset, 0.72±0.15 on external dataset 1, and 0.72±0.14 on external dataset 2. The relative volume errors were 0.15±0.14, 0.50±0.86, and 0.47±1.02, respectively. The true positive rates were 0.81±0.13, 0.92±0.09, and 0.92±0.12, and the false positive rates were 0.20±0.09, 0.40±0.18, and 0.40±0.19, respectively. None of the external validation datasets were used for model training; they comprise previously unseen MRI scans acquired with different scanners and protocols. This setup closely reflects real-world clinical scenarios and demonstrates the robustness and generalizability of our model across diverse MRI systems and acquisition settings. The proposed method thus provides a reliable solution for WMH segmentation in large-scale cohort studies.
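
A sketch of voxel-wise versions of the reported metrics for binary WMH masks. The abstract does not specify whether rates are voxel- or lesion-wise, nor how the false positive rate is normalized, so the conventions below (including FP as a fraction of predicted positives) are assumptions.

```python
# Hedged sketch: Dice, relative volume error, TPR, and a false-positive ratio
# for binary segmentation masks, under assumed voxel-wise definitions.
import numpy as np

def wmh_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "relative_volume_error": abs(pred.sum() - ref.sum()) / ref.sum(),
        "true_positive_rate": tp / (tp + fn),
        "false_positive_ratio": fp / max(tp + fp, 1),  # FP fraction of predictions
    }

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 64)) > 0.98                      # toy reference mask
pred = np.logical_or(ref, rng.random(ref.shape) > 0.995)   # toy prediction
print(wmh_metrics(pred, ref))
```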

A self-supervised multimodal deep learning approach to differentiate post-radiotherapy progression from pseudoprogression in glioblastoma.

Gomaa A, Huang Y, Stephan P, Breininger K, Frey B, Dörfler A, Schnell O, Delev D, Coras R, Donaubauer AJ, Schmitter C, Stritzelberger J, Semrau S, Maier A, Bayer S, Schönecker S, Heiland DH, Hau P, Gaipl US, Bert C, Fietkau R, Schmidt MA, Putz F

PubMed · May 17 2025
Accurate differentiation of pseudoprogression (PsP) from true progression (TP) following radiotherapy (RT) in glioblastoma patients is crucial for optimal treatment planning. However, this task remains challenging due to the overlapping imaging characteristics of PsP and TP. This study therefore proposes a multimodal deep-learning approach utilizing complementary information from routine anatomical MR images, clinical parameters, and RT treatment planning information for improved predictive accuracy. The approach utilizes a self-supervised Vision Transformer (ViT) to encode multi-sequence MR brain volumes, effectively capturing both global and local context from the high-dimensional input. The encoder is trained in a self-supervised upstream task on unlabeled glioma MRI datasets from the open BraTS2021, UPenn-GBM, and UCSF-PDGM datasets (n = 2317 MRI studies) to generate compact, clinically relevant representations from FLAIR and T1 post-contrast sequences. These encoded MR inputs are then integrated with clinical data and RT treatment planning information through guided cross-modal attention, improving progression classification accuracy. This work was developed using two datasets from different centers: the Burdenko Glioblastoma Progression Dataset (n = 59) for training and validation, and the GlioCMV progression dataset from the University Hospital Erlangen (UKER) (n = 20) for testing. The proposed method achieved competitive performance, with an AUC of 75.3%, outperforming the current state-of-the-art data-driven approaches. Importantly, the proposed approach relies solely on readily available anatomical MRI sequences, clinical data, and RT treatment planning information, enhancing its clinical feasibility. The proposed approach addresses the challenge of limited data availability for PsP and TP differentiation and could allow for improved clinical decision-making and optimized treatment plans for glioblastoma patients.
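
The guided cross-modal attention step can be sketched as follows: a projected clinical/RT feature vector serves as the query over ViT-encoded image tokens, so the non-imaging data steers which image features contribute to the TP-vs-PsP logits. The query/key assignment, dimensions, and module names are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch: clinical/RT features query ViT image tokens via cross-attention.
import torch
import torch.nn as nn

class GuidedCrossModalAttention(nn.Module):
    def __init__(self, img_dim: int = 768, clin_dim: int = 16, d_model: int = 256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.clin_proj = nn.Linear(clin_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, 2)   # TP vs. PsP logits

    def forward(self, img_tokens, clinical):
        # img_tokens: (batch, n_tokens, img_dim); clinical: (batch, clin_dim)
        q = self.clin_proj(clinical).unsqueeze(1)   # clinical data as the query
        kv = self.img_proj(img_tokens)              # image tokens as keys/values
        fused, _ = self.attn(q, kv, kv)
        return self.head(fused.squeeze(1))

model = GuidedCrossModalAttention()
tokens = torch.randn(2, 196, 768)   # toy ViT-encoded MR volume tokens
clin = torch.randn(2, 16)           # toy clinical + RT planning features
print(model(tokens, clin).shape)    # torch.Size([2, 2])
```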

An integrated deep learning model for early and multi-class diagnosis of Alzheimer's disease from MRI scans.

Vinukonda ER, Jagadesh BN

PubMed · May 17 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely affects memory, behavior, and cognitive function. Early and accurate diagnosis is crucial for effective intervention, yet detecting subtle changes in the early stages remains a challenge. In this study, we propose a hybrid deep learning-based multi-class classification system for AD using magnetic resonance imaging (MRI). The proposed approach integrates an improved DeepLabV3+ (IDeepLabV3+) model for lesion segmentation, followed by feature extraction using the LeNet-5 model. A novel feature selection method based on average correlation and error probability is employed to enhance classification efficiency. Finally, an Enhanced ResNext (EResNext) model is used to classify AD into four stages: non-dementia (ND), very mild dementia (VMD), mild dementia (MD), and moderate dementia (MOD). The proposed model achieves an accuracy of 98.12%, demonstrating its superior performance over existing methods. The area under the ROC curve (AUC) further validates its effectiveness, with the highest score of 0.97 for moderate dementia. This study highlights the potential of hybrid deep learning models in improving early AD detection and staging, contributing to more accurate clinical diagnosis and better patient care.
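
One plausible reading of the "average correlation" part of the feature selection step is sketched below: rank the extracted LeNet-5 features by their mean absolute correlation with all other features and keep the least redundant ones. The error-probability criterion is omitted, and this interpretation is an assumption, not the paper's exact procedure.

```python
# Hedged sketch: keep the k features with the lowest mean absolute correlation
# to the rest (least redundant). The paper's error-probability term is omitted.
import numpy as np

def select_low_correlation_features(X: np.ndarray, k: int) -> np.ndarray:
    corr = np.abs(np.corrcoef(X, rowvar=False))   # (n_features, n_features)
    np.fill_diagonal(corr, 0.0)
    avg_corr = corr.mean(axis=1)                  # redundancy score per feature
    return np.argsort(avg_corr)[:k]               # indices of k least-redundant

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))   # toy feature matrix (samples x features)
keep = select_low_correlation_features(X, k=8)
print(keep, X[:, keep].shape)
```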