Optimized AI-based Neural Decoding from BOLD fMRI Signal for Analyzing Visual and Semantic ROIs in the Human Visual System.

Veronese L, Moglia A, Pecco N, Della Rosa P, Scifo P, Mainardi LT, Cerveri P

pubmed logopapers · Aug 14 2025
AI-based neural decoding reconstructs visual perception by leveraging generative models to map brain activity, measured through functional MRI (fMRI), back to the observed visual stimulus. Traditionally, ridge linear models transform fMRI into a latent space, which is then decoded using variational autoencoders (VAE) or latent diffusion models (LDM). Owing to the complexity and noisiness of fMRI data, newer approaches split the reconstruction into two sequential stages: the first provides a rough visual approximation using a VAE, and the second incorporates semantic information through an LDM guided by contrastive language-image pre-training (CLIP) embeddings. This work addressed key scientific and technical gaps of two-stage neural decoding by: 1) implementing a gated recurrent unit (GRU)-based architecture to establish a non-linear mapping between the fMRI signal and the VAE latent space, 2) optimizing the dimensionality of the VAE latent space, 3) systematically evaluating the contribution of the first reconstruction stage, and 4) analyzing the impact of different brain regions of interest (ROIs) on reconstruction quality. Experiments on the Natural Scenes Dataset, containing 73,000 unique natural images along with fMRI data from eight subjects, demonstrated that the proposed architecture maintained competitive performance while reducing the complexity of its first stage by 85%. The sensitivity analysis showed that the first reconstruction stage is essential for preserving high structural similarity in the final reconstructions. Restricting the analysis to semantic ROIs, while excluding early visual areas, diminished visual coherence but preserved semantics. The inter-subject repeatability across ROIs was approximately 92% and 98% for visual and semantic metrics, respectively. This study represents a key step toward optimized neural decoding architectures leveraging non-linear models for stimulus prediction. Sensitivity analysis highlighted the interplay between the two reconstruction stages, while ROI-based analysis provided strong evidence that the two-stage AI model reflects the brain's hierarchical processing of visual information.
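
A minimal PyTorch sketch of the kind of non-linear fMRI-to-latent mapping the first stage describes, with a GRU replacing the traditional ridge regression; the layer sizes, the treatment of the voxel signal as a short sequence, and all names are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class FMRIToLatentGRU(nn.Module):
    """Hypothetical GRU mapping an fMRI signal to a VAE latent vector."""
    def __init__(self, n_voxels=4096, hidden=512, latent_dim=1024):
        super().__init__()
        self.gru = nn.GRU(input_size=n_voxels, hidden_size=hidden,
                          batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        # x: (batch, timesteps, n_voxels) fMRI input
        _, h = self.gru(x)            # final hidden state: (1, batch, hidden)
        return self.to_latent(h[-1])  # predicted latent: (batch, latent_dim)

z = FMRIToLatentGRU()(torch.randn(2, 8, 4096))  # -> torch.Size([2, 1024])
```

The predicted latent would then be decoded by the VAE to produce the stage-one rough reconstruction, before the LDM refines it with CLIP-guided semantics.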

A novel hybrid convolutional and recurrent neural network model for automatic pituitary adenoma classification using dynamic contrast-enhanced MRI.

Motamed M, Bastam M, Tabatabaie SM, Elhaie M, Shahbazi-Gahrouei D

pubmed logopapers · Aug 14 2025
Pituitary adenomas, ranging from subtle microadenomas to mass-effect macroadenomas, pose diagnostic challenges for radiologists due to increasing scan volumes and the complexity of dynamic contrast-enhanced MRI interpretation. A hybrid CNN-LSTM model was trained and validated on a multi-center dataset of 2,163 samples from Tehran and Babolsar, Iran. Transfer learning and preprocessing techniques (e.g., Wiener filters) were utilized to improve classification performance for microadenomas (< 10 mm) and macroadenomas (> 10 mm). The model achieved 90.5% accuracy, an area under the receiver operating characteristic curve (AUROC) of 0.92, and 89.6% sensitivity (93.5% for microadenomas, 88.3% for macroadenomas), outperforming standard CNNs by 5-18% across metrics. With a processing time of 0.17 s per scan, the model demonstrated robustness to variations in imaging conditions, including scanner differences and contrast variations, excelling in real-time detection and differentiation of adenoma subtypes. This dual-path approach, the first to synergize spatial and temporal MRI features for pituitary diagnostics, offers high precision and efficiency. Supported by comparisons with existing models, it provides a scalable, reproducible tool to improve patient outcomes, with potential adaptability to broader neuroimaging challenges.
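
As a rough sketch of the dual-path idea (a CNN extracting spatial features from each dynamic frame, an LSTM modeling the contrast-enhancement time course), under assumed layer sizes and a binary head that are not the authors' configuration:

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Hypothetical hybrid: per-frame CNN features aggregated by an LSTM."""
    def __init__(self, n_classes=2, feat=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-frame spatial encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat),
        )
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)  # temporal model
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, frames, 1, H, W) dynamic contrast-enhanced series
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
        _, (h, _) = self.lstm(f)                      # last hidden state
        return self.head(h[-1])

logits = CNNLSTMClassifier()(torch.randn(2, 6, 1, 64, 64))  # -> (2, 2)
```

The design choice mirrors the abstract's claim: the CNN path captures spatial morphology per phase, while the LSTM path captures the enhancement dynamics that a single-frame CNN discards.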

Delineation of the Centromedian Nucleus for Epilepsy Neuromodulation Using Deep Learning Reconstruction of White Matter-Nulled Imaging.

Ryan MV, Satzer D, Hu H, Litwiller DV, Rettmann DW, Tanabe J, Thompson JA, Ojemann SG, Kramer DR

pubmed logopapers · Aug 14 2025
Neuromodulation of the centromedian nucleus (CM) of the thalamus has shown promise in treating refractory epilepsy, particularly for idiopathic generalized epilepsy and Lennox-Gastaut syndrome. However, precise targeting of CM remains challenging. The combination of deep learning reconstruction (DLR) and fast gray matter acquisition T1 inversion recovery (FGATIR) offers potential improvements in visualization of CM for deep brain stimulation (DBS) targeting. The goal of the study was to evaluate the visualization of the putative CM on DLR-FGATIR and its alignment with atlas-defined CM boundaries, with the aim of facilitating direct targeting of CM for neuromodulation. This retrospective study included 12 patients with drug-resistant epilepsy treated with thalamic neuromodulation by using DLR-FGATIR for direct targeting. Postcontrast T1-weighted MRI, DLR-FGATIR, and postoperative CT were coregistered and normalized into Montreal Neurological Institute (MNI) space and compared with the Morel histologic atlas. Contrast-to-noise ratios were measured between CM and neighboring nuclei. CM segmentations were compared between an experienced rater, a trainee rater, the Morel atlas, and the Thalamus Optimized Multi Atlas Segmentation (THOMAS) atlas (derived from expert segmentation of high-field MRI) by using the Sørensen-Dice coefficient (Dice score, a measure of overlap) and volume ratios. The number of electrode contacts within the Morel atlas CM was assessed. On DLR-FGATIR, CM was visible as an ovoid hypointensity in the intralaminar thalamus. Contrast-to-noise ratios were highest (P < .001) for the mediodorsal and medial pulvinar nuclei. The Dice score with the Morel atlas CM was higher (median 0.49, interquartile range 0.40-0.58) for the experienced rater (P < .001) than for the trainee rater (0.32, 0.19-0.46) and no different (P = .32) from the THOMAS atlas CM (0.56, 0.55-0.58). Both raters and the THOMAS atlas tended to under-segment the lateral portion of the Morel atlas CM, reflected by smaller segmentation volumes (P < .001). All electrodes targeting CM based on DLR-FGATIR traversed the Morel atlas CM. DLR-FGATIR permitted visualization and delineation of CM commensurate with a group atlas derived from high-field MRI. This technique provided reliable guidance for accurate electrode placement within CM, highlighting its potential use for direct targeting.
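
The Dice score used for the segmentation comparisons above is straightforward to compute; a small sketch over toy voxel masks (the masks here are synthetic placeholders, not study data):

```python
import numpy as np

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen-Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# e.g., a rater's CM segmentation vs. an atlas CM, both as voxel masks
rater = np.zeros((64, 64, 64), bool); rater[20:40, 20:40, 20:40] = True
atlas = np.zeros((64, 64, 64), bool); atlas[25:45, 25:45, 25:45] = True
print(round(dice_score(rater, atlas), 2))  # fraction of overlapping volume
```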

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

pubmed logopapers · Aug 14 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30 ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
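
The efficiency claim rests on LoRA: the pretrained DINOv2 weights stay frozen and only small low-rank matrices are trained. A generic sketch of a LoRA-wrapped linear layer (the rank, scaling, and choice of layer are assumptions, not DINOMotion's exact configuration):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update W + B @ A."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # backbone weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path plus rank-limited trainable correction
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))      # e.g., a ViT projection layer
out = layer(torch.randn(4, 197, 768))        # token sequence, ViT-style
```

Because B is initialized to zero, the wrapped layer initially behaves exactly like the frozen backbone; only the low-rank update is learned, which is what keeps the trainable parameter count small.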

Severity Classification of Pediatric Spinal Cord Injuries Using Structural MRI Measures and Deep Learning: A Comprehensive Analysis across All Vertebral Levels.

Sadeghi-Adl Z, Naghizadehkashani S, Middleton D, Krisa L, Alizadeh M, Flanders AE, Faro SH, Wang Z, Mohamed FB

pubmed logopapers · Aug 14 2025
Spinal cord injury (SCI) in the pediatric population presents a unique challenge in diagnosis and prognosis due to the complexity of performing clinical assessments on children. Accurate evaluation of structural changes in the spinal cord is essential for effective treatment planning. This study aims to evaluate structural characteristics in pediatric patients with SCI by comparing cross-sectional area (CSA), anterior-posterior (AP) width, and right-left (RL) width across all vertebral levels of the spinal cord between typically developing (TD) participants and participants with SCI. We employed deep learning techniques to utilize these measures for detecting SCI cases and determining their injury severity. Sixty-one pediatric participants (ages 6-18), including 20 with chronic SCI and 41 TD, were enrolled and scanned by using a 3T MRI scanner. All SCI participants underwent the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) test to assess their neurologic function and determine their American Spinal Injury Association (ASIA) Impairment Scale (AIS) category. T2-weighted MRI scans were utilized to measure CSA, AP width, and RL width along the entire cervical and thoracic cord. These measures were automatically extracted at every vertebral level by using the Spinal Cord Toolbox. Deep convolutional neural networks (CNNs) were utilized to classify participants into SCI or TD groups and determine their AIS classification based on structural parameters and demographic factors such as age and height. Significant differences (P < .05) were found in CSA, AP width, and RL width between SCI and TD participants, indicating notable structural alterations due to SCI. The CNN-based models demonstrated high performance, achieving 96.59% accuracy in distinguishing SCI from TD participants. Furthermore, the models determined AIS category classification with 94.92% accuracy. The study demonstrates the effectiveness of integrating cross-sectional structural imaging measures with deep learning methods for classification and severity assessment of pediatric SCI. The deep learning approach outperforms traditional machine learning models in diagnostic accuracy, offering potential improvements in patient care in pediatric SCI management.
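
One plausible way to feed per-level structural measures into a CNN, as described, is to treat the vertebral levels as a 1D sequence with CSA, AP width, and RL width as channels, appending demographics before the classifier head; the sketch below is an assumed architecture, not the paper's:

```python
import torch
import torch.nn as nn

class SpineMeasureCNN(nn.Module):
    """Hypothetical 1D CNN over per-vertebral-level measures (CSA, AP, RL)."""
    def __init__(self, n_levels=19, n_demo=2, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(3, 16, 3, padding=1), nn.ReLU(),   # local level patterns
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),       # pool over levels
        )
        self.head = nn.Linear(32 + n_demo, n_classes)    # append age, height

    def forward(self, measures, demo):
        # measures: (batch, 3, n_levels); demo: (batch, n_demo)
        return self.head(torch.cat([self.conv(measures), demo], dim=1))

logits = SpineMeasureCNN()(torch.randn(4, 3, 19), torch.randn(4, 2))  # -> (4, 2)
```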

Machine Learning-Driven Radiomic Profiling of Thalamus-Amygdala Nuclei for Prediction of Postoperative Delirium After STN-DBS in Parkinson's Disease Patients: A Pilot Study.

Radziunas A, Davidavicius G, Reinyte K, Pranckeviciene A, Fedaravicius A, Kucinskas V, Laucius O, Tamasauskas A, Deltuva V, Saudargiene A

pubmed logopapers · Aug 13 2025
Postoperative delirium is a common complication following subthalamic nucleus deep brain stimulation (STN-DBS) surgery in Parkinson's disease patients. It has been shown to prolong hospital stays, harm cognitive function, and negatively impact outcomes. Utilizing radiomics as a predictive tool for identifying patients at risk of delirium is a novel and personalized approach. This pilot study analyzed preoperative T1-weighted and T2-weighted magnetic resonance images from 34 Parkinson's disease patients, which were used to segment the thalamus, amygdala, and hippocampus, resulting in 10,680 extracted radiomic features. Feature selection using the minimum redundancy maximum relevance (mRMR) method identified the 20 most informative features, which were input into eight different machine learning algorithms. High predictive accuracy for postoperative delirium was achieved by applying regularized binary logistic regression and linear discriminant analysis to the 10 most informative radiomic features. Regularized logistic regression resulted in 96.97% (±6.20) balanced accuracy, 99.5% (±4.97) sensitivity, 94.43% (±10.70) specificity, and an area under the receiver operating characteristic curve of 0.97 (±0.06). Linear discriminant analysis showed 98.42% (±6.57) balanced accuracy, 98.00% (±9.80) sensitivity, 98.83% (±4.63) specificity, and an area under the receiver operating characteristic curve of 0.98 (±0.07). The feed-forward neural network also demonstrated strong predictive capacity, achieving 96.17% (±10.40) balanced accuracy, 94.5% (±19.87) sensitivity, 97.83% (±7.87) specificity, and an area under the receiver operating characteristic curve of 0.96 (±0.10). However, when the feature set was extended to 20 features, both logistic regression and linear discriminant analysis showed reduced performance, while the feed-forward neural network achieved the highest predictive accuracy of 99.28% (±2.71), with 100.0% (±0.00) sensitivity, 98.57% (±5.42) specificity, and an area under the receiver operating characteristic curve of 0.99 (±0.03). The selected radiomic features might indicate network dysfunction between the thalamic laterodorsal, reuniens medial ventral, and amygdala basal nuclei and hippocampal cornu ammonis 4 in these patients. This finding expands previous research suggesting the importance of the thalamic-hippocampal-amygdala network for postoperative delirium due to alterations in neuronal activity.
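
A sketch of the general pipeline shape (feature selection feeding a regularized classifier, scored by balanced accuracy under cross-validation); scikit-learn has no built-in mRMR, so mutual-information ranking stands in for it here, and the data are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the radiomic matrix: with 34 patients and thousands of
# features, selection must live inside the CV loop to avoid leakage.
X, y = make_classification(n_samples=34, n_features=500, n_informative=10,
                           random_state=0)
pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),   # stand-in for mRMR selection
    LogisticRegression(penalty="l2", C=1.0),  # regularized logistic regression
)
scores = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")
print(scores.mean())
```

Wrapping selection inside the pipeline matters at this sample size: selecting features on the full dataset first would leak test information into training and inflate the reported accuracy.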

Automatic detection of arterial input function for brain DCE-MRI in multi-site cohorts.

Saca L, Gaggar R, Pappas I, Benzinger T, Reiman EM, Shiroishi MS, Joe EB, Ringman JM, Yassine HN, Schneider LS, Chui HC, Nation DA, Zlokovic BV, Toga AW, Chakhoyan A, Barnes S

pubmed logopapers · Aug 13 2025
Arterial input function (AIF) extraction is a crucial step in quantitative pharmacokinetic modeling of DCE-MRI. This work proposes a robust deep learning model that can precisely extract an AIF from DCE-MRI images. A diverse dataset of human brain DCE-MRI images from 289 participants (384 scans in total) across five institutions, with gadolinium-based contrast agent curves extracted from large penetrating arteries and most data collected for blood-brain barrier (BBB) permeability measurement, was retrospectively analyzed. A 3D UNet model was implemented and trained on manually drawn AIF regions. The testing cohort was compared using the proposed AIF quality metric, AIFitness, and Ktrans values from a standard DCE pipeline. This UNet was then applied to a separate dataset of 326 participants with a total of 421 DCE-MRI images with analyzed AIF quality and Ktrans values. The resulting 3D UNet model achieved an average AIFitness score of 93.9 compared to 99.7 for manually selected AIFs, and white matter Ktrans values were 0.45 × 10⁻³/min and 0.45 × 10⁻³/min, respectively. The intraclass correlation between automated and manual Ktrans values was 0.89. The separate replication dataset yielded an AIFitness score of 97.0 and white matter Ktrans of 0.44 × 10⁻³/min. Findings suggest a 3D UNet model with additional convolutional neural network kernels and a modified Huber loss function achieves superior performance for identifying AIF curves from DCE-MRI in a diverse multi-center cohort. AIFitness scores and DCE-MRI-derived metrics, such as Ktrans maps, showed no significant differences in gray and white matter between manually drawn and automated AIFs.
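
Once an arterial region has been identified, whether manually drawn or predicted by the UNet, the AIF itself is simply the mean contrast-agent signal over that region at each time point; a minimal NumPy sketch with toy data:

```python
import numpy as np

def extract_aif(dce: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average the DCE signal over an arterial mask at each time point.

    dce:  (T, X, Y, Z) dynamic series; mask: (X, Y, Z) boolean arterial region.
    Returns the arterial input function as a length-T signal curve.
    """
    return dce[:, mask].mean(axis=1)

dce = np.random.rand(60, 16, 16, 8)              # toy 60-frame dynamic series
mask = np.zeros((16, 16, 8), bool); mask[8, 8, 4] = True
aif = extract_aif(dce, mask)                     # shape (60,)
```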

Quantitative Prostate MRI, From the AJR Special Series on Quantitative Imaging.

Margolis DJA, Chatterjee A, deSouza NM, Fedorov A, Fennessy F, Maier SE, Obuchowski N, Punwani S, Purysko AS, Rakow-Penner R, Shukla-Dave A, Tempany CM, Boss M, Malyarenko D

pubmed logopapers · Aug 13 2025
Prostate MRI has traditionally relied on qualitative interpretation. However, quantitative components hold the potential to markedly improve performance. The ADC from DWI is probably the most widely recognized quantitative MRI biomarker and has shown strong discriminatory value for clinically significant prostate cancer as well as for recurrent cancer after treatment. Advanced diffusion techniques, including intravoxel incoherent motion imaging, diffusion kurtosis imaging, diffusion-tensor imaging, and specific implementations such as restriction spectrum imaging, purport even better discrimination but are more technically challenging. The inherent T1 and T2 of tissue also provide diagnostic value, with more advanced techniques deriving luminal water fraction and hybrid multidimensional MRI metrics. Dynamic contrast-enhanced imaging, primarily using a modified Tofts model, also shows independent discriminatory value. Finally, quantitative lesion size and shape features can be combined with the aforementioned techniques and can be further refined using radiomics, texture analysis, and artificial intelligence. Which technique will ultimately find widespread clinical use will depend on validation across a myriad of platforms and use cases.
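
For reference, the modified (extended) Tofts model mentioned above is commonly written as follows, with C_t and C_p the tissue and plasma concentrations and v_p and v_e the plasma and extravascular extracellular volume fractions; the notation follows common DCE-MRI usage rather than any single paper:

```latex
C_t(t) = v_p\,C_p(t) + K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
         e^{-k_{ep}(t-\tau)}\,\mathrm{d}\tau,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}
```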

Development of a multimodal vision transformer model for predicting traumatic versus degenerative rotator cuff tears on magnetic resonance imaging: A single-centre retrospective study.

Oettl FC, Malayeri AB, Furrer PR, Wieser K, Fürnstahl P, Bouaicha S

pubmed logopapers · Aug 13 2025
The differentiation between traumatic and degenerative rotator cuff tears (RCTs) remains a diagnostic challenge with significant implications for treatment planning. While magnetic resonance imaging (MRI) is standard practice, traditional radiological interpretation has shown limited reliability in distinguishing these etiologies. This study evaluates the potential of artificial intelligence (AI) models, specifically a multimodal vision transformer (ViT), to differentiate between traumatic and degenerative RCT. In this retrospective, single-centre study, 99 shoulder MRIs were analysed from patients who underwent surgery at a specialised university shoulder unit between 2016 and 2019. The cohort was divided into training (n = 79) and validation (n = 20) sets. The traumatic group required a documented relevant trauma (excluding simple lifting injuries), a previously asymptomatic shoulder, and MRI within 3 months post-trauma. The degenerative group was matched for age and injured tendon, with patients presenting with at least 1 year of constant shoulder pain prior to imaging and no trauma history. The ViT was subsequently combined with demographic data to form a multimodal ViT. Saliency maps were utilised as an explainability tool. The multimodal ViT model achieved an accuracy of 0.75 ± 0.08 with a recall of 0.8 ± 0.08, specificity of 0.71 ± 0.11 and an F1 score of 0.76 ± 0.1. The model maintained consistent performance across different patient subsets, demonstrating robust generalisation. Saliency maps did not show a consistent focus on the rotator cuff. AI shows potential in supporting the challenging differentiation between traumatic and degenerative RCT on MRI. The achieved accuracy of 75% is particularly significant given the similarity between the groups, which presented a challenging diagnostic scenario. Saliency maps were utilised to ensure explainability; the lack of a consistent focus on the rotator cuff tendons hints at underappreciated aspects of the differentiation.
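
A common way to make a ViT "multimodal" with tabular data is late fusion: concatenate the image embedding with the demographic vector before the classification head. The sketch below illustrates that pattern under assumed dimensions; the paper does not specify its exact fusion mechanism:

```python
import torch
import torch.nn as nn

class MultimodalViTHead(nn.Module):
    """Hypothetical late-fusion head: ViT image embedding + demographics."""
    def __init__(self, img_dim=768, n_demo=3, n_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + n_demo, 128), nn.ReLU(),
            nn.Linear(128, n_classes),        # traumatic vs. degenerative
        )

    def forward(self, cls_token, demo):
        # cls_token: (batch, img_dim) ViT [CLS] embedding; demo: (batch, n_demo)
        return self.classifier(torch.cat([cls_token, demo], dim=1))

logits = MultimodalViTHead()(torch.randn(2, 768), torch.randn(2, 3))  # -> (2, 2)
```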