Page 1 of 87863 results

Nonsuicidal self-injury prediction with pain-processing neural circuits using interpretable graph neural network.

Wu S, Xue Y, Hang Y, Xie Y, Zhang P, Liang M, Zhong Y, Wang C

PubMed · Dec 1 2025
Nonsuicidal self-injury (NSSI) involves the intentional destruction of one's own body tissues without suicidal intent. Prior research has shown that individuals with NSSI exhibit abnormal pain perception; however, the pain-processing neural circuits underlying NSSI remain poorly understood. This study leverages graph neural networks to predict NSSI risk and examine the learned connectivity underlying NSSI using multimodal data. Resting-state functional MRI and diffusion tensor imaging were collected from 50 patients with NSSI, 79 healthy controls (HC), and 44 patients with mental disorders who did not engage in NSSI, serving as disease controls (DC). We constructed pain-related brain networks for each participant. An interpretable graph attention network (GAT) model was developed, accounting for demographic factors, to predict NSSI risk and highlight NSSI-specific connectivity using learned attention matrices. The proposed GAT model based on imaging data achieved an accuracy of 80%, which increased to 88% when self-reported pain scales were incorporated alongside imaging data in distinguishing patients with NSSI from HC. It highlighted amygdala-parahippocampus and inferior frontal gyrus (IFG)-insula connectivity as pivotal in NSSI-related pain processing. After incorporating imaging data from DC, the model's accuracy reached 74%, underscoring consistent neural connectivity patterns. The GAT model demonstrates high predictive accuracy for NSSI, enhanced by including self-reported pain scales. Our proposed GAT model underscores the significance of the functional integration of limbic regions, paralimbic regions, and the IFG in NSSI pain processing. Our findings suggest altered pain processing as a key mechanism in NSSI, providing insights for potential neural modulation intervention strategies.
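As an illustration of the attention mechanism that makes such a GAT model interpretable, here is a minimal NumPy sketch of a single graph-attention layer: the learned row-normalized attention matrix is the kind of quantity one would inspect for NSSI-specific connectivity. The toy graph, weights, and dimensions are all hypothetical; the paper's actual architecture, training procedure, and pain-network construction are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_attention_layer(X, A, W, a):
    """One GAT-style layer: attention logits over edges, softmax-normalized
    per node, then attention-weighted aggregation of neighbor features.
    X: (N, F) node features; A: (N, N) binary adjacency (with self-loops);
    W: (F, F') projection; a: (2*F',) attention vector."""
    H = X @ W                                    # project node features
    N = H.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            e[i, j] = np.concatenate([H[i], H[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)              # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask out non-edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)    # row-wise softmax -> attention matrix
    return alpha @ H, alpha                      # new features + interpretable attention

# toy "pain-network" graph: 5 regions on a ring, plus self-loops
N, F = 5, 4
X = rng.standard_normal((N, F))
A = np.eye(N) + np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
W = rng.standard_normal((F, F))
a = rng.standard_normal(2 * F)
H_out, alpha = graph_attention_layer(X, A, W, a)
```

Each row of `alpha` sums to one and is exactly zero on non-edges, so it can be read directly as a learned connectivity weighting.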

Deep learning for differential diagnosis of parotid tumors based on 2.5D magnetic resonance imaging.

Mai W, Fan X, Zhang L, Li J, Chen L, Hua X, Zhang D, Li H, Cai M, Shi C, Liu X

PubMed · Dec 1 2025
Accurate preoperative diagnosis of parotid gland tumors (PGTs) is crucial for surgical planning, since malignant tumors require more extensive excision. Though fine-needle aspiration biopsy is the diagnostic gold standard, its sensitivity in detecting malignancies is limited. While deep learning (DL) models based on magnetic resonance imaging (MRI) are common in medicine, they have been less studied for parotid gland tumors. This study used a 2.5D imaging approach (incorporating inter-slice information) to train a DL model to differentiate between benign and malignant PGTs. This retrospective study included 122 parotid tumor patients, using MRI and clinical features to build predictive models. In the traditional model, univariate analysis identified statistically significant features, which were then used in multivariate logistic regression to determine independent predictors. The model was built using four-fold cross-validation. The deep learning model was trained using 2D and 2.5D imaging approaches, with a transformer-based architecture employed for transfer learning. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) and confusion matrix metrics. In the traditional model, boundary and peritumoral invasion were identified as independent predictors for PGTs, and the model was constructed based on these features. The model achieved an AUC of 0.79 but demonstrated low sensitivity (0.54). In contrast, the DL model based on 2.5D T2 fat-suppressed images showed superior performance, with an AUC of 0.86 and a sensitivity of 0.78. The 2.5D imaging technique, when integrated with a transformer-based transfer learning model, demonstrates significant efficacy in differentiating between benign and malignant PGTs.
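The 2.5D idea, stacking each slice with its neighbors so a 2D network still sees inter-slice context, can be sketched as follows; the neighborhood size `k`, the replication of edge slices, and the toy volume are assumptions for illustration, not details from the paper.

```python
import numpy as np

def make_25d_inputs(volume, k=1):
    """Build 2.5D samples: each slice becomes a (2k+1)-channel image whose
    extra channels are its k neighbors above and below, giving a 2D network
    inter-slice context. Edge slices are clamped (replicated)."""
    n = volume.shape[0]
    samples = []
    for z in range(n):
        idx = np.clip(np.arange(z - k, z + k + 1), 0, n - 1)
        samples.append(volume[idx])              # (2k+1, H, W) channel stack
    return np.stack(samples)                     # (n, 2k+1, H, W)

vol = np.random.rand(10, 64, 64)                 # toy MRI volume (slices, H, W)
x = make_25d_inputs(vol, k=1)                    # (10, 3, 64, 64)
```

The center channel of every sample is the original slice, so a 2D backbone can be reused unchanged with `2k+1` input channels.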

Sureness of classification of breast cancers as pure ductal carcinoma <i>in situ</i> or with invasive components on dynamic contrast-enhanced magnetic resonance imaging: application of likelihood assurance metrics for computer-aided diagnosis.

Whitney HM, Drukker K, Edwards A, Giger ML

PubMed · Nov 1 2025
Breast cancer may persist within milk ducts (ductal carcinoma <i>in situ</i>, DCIS) or advance into surrounding breast tissue (invasive ductal carcinoma, IDC). Occasionally, invasiveness may be underestimated at biopsy, leading to adjustments in the treatment plan based on unexpected surgical findings. Artificial intelligence/computer-aided diagnosis (AI/CADx) techniques in medical imaging may have the potential to predict whether a lesion is purely DCIS or exhibits a mixture of IDC and DCIS components, serving as a valuable supplement to biopsy findings. To enhance the evaluation of AI/CADx performance, assessing variability on a lesion-by-lesion basis via likelihood assurance measures could add value. We evaluated performance in the task of distinguishing between pure DCIS and mixed IDC/DCIS breast cancers using computer-extracted radiomic features from dynamic contrast-enhanced magnetic resonance imaging, with 0.632+ bootstrapping (2000 folds) on 550 lesions (135 pure DCIS, 415 mixed IDC/DCIS). Lesion-based likelihood assurance was measured using a sureness metric based on the 95% confidence interval of the classifier output for each lesion. The median and 95% CI of the 0.632+-corrected area under the receiver operating characteristic curve for the task of classifying lesions as pure DCIS or mixed IDC/DCIS were 0.81 [0.75, 0.86]. The sureness metric varied across the dataset, ranging from 0.0002 (low sureness) to 0.96 (high sureness), with some lesions showing every combination of high or low classifier output with high or low sureness. Sureness metrics can provide additional insight into the ability of CADx algorithms to preoperatively predict whether a lesion is invasive.
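The abstract defines sureness from the 95% CI of per-lesion classifier output across bootstrap folds but does not give its exact form; one plausible formulation, with the complement of the CI width standing in for the published metric (an assumption, not the authors' definition), is:

```python
import numpy as np

rng = np.random.default_rng(1)

def lesion_sureness(per_fold_outputs, level=0.95):
    """Per-lesion 'sureness' from the spread of classifier outputs across
    bootstrap folds: 1 minus the width of the percentile CI, so a lesion
    whose output barely moves across folds scores near 1 (high sureness)."""
    lo = (1 - level) / 2 * 100
    hi = 100 - lo
    lower = np.percentile(per_fold_outputs, lo, axis=0)
    upper = np.percentile(per_fold_outputs, hi, axis=0)
    return 1.0 - (upper - lower)

# toy: 2000 bootstrap folds x 3 lesions; lesion 0 stable, lesion 2 unstable
outputs = np.column_stack([
    0.90 + 0.01 * rng.standard_normal(2000),
    0.50 + 0.05 * rng.standard_normal(2000),
    0.50 + 0.25 * rng.standard_normal(2000),
])
s = lesion_sureness(outputs)
```

Under this formulation a narrow CI (stable fold-to-fold output) yields sureness near 1 and a wide CI yields sureness near 0, matching the reported 0.0002-0.96 range in spirit.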

Machine learning based differential diagnosis of schizophrenia, major depressive disorder and bipolar disorder using structural magnetic resonance imaging.

Cao P, Li R, Li Y, Dong Y, Tang Y, Xu G, Si Q, Chen C, Chen L, Liu W, Yao Y, Sui Y, Zhang J

PubMed · Aug 15 2025
Cortical morphological abnormalities in schizophrenia (SCZ), major depressive disorder (MDD), and bipolar disorder (BD) have been identified in past research. However, their potential as objective biomarkers to differentiate these disorders remains uncertain. Machine learning models may offer a novel diagnostic tool. Structural MRI (sMRI) scans of 220 patients with SCZ, 220 with MDD, 220 with BD, and 220 healthy controls were obtained using a 3T scanner. Volume, thickness, surface area, and mean curvature of 68 cerebral cortical regions were extracted using FreeSurfer. The resulting 272 features underwent three feature selection techniques to isolate important variables for model construction. These features were incorporated into three classifiers for classification. After model evaluation and hyperparameter tuning, the best-performing model was identified, along with the most significant brain measures. The univariate feature selection-Naive Bayes model achieved the best performance, with an accuracy of 0.66, a macro-average AUC of 0.86, and sensitivities and specificities ranging from 0.58 to 0.86 and from 0.81 to 0.93, respectively. Key features included the thickness of the right isthmus-cingulate cortex, area of the left inferior temporal gyrus, thickness of the right superior temporal gyrus, mean curvature of the right pars orbitalis, thickness of the left transverse temporal cortex, volume of the left caudal anterior-cingulate cortex, area of the right banks of the superior temporal sulcus, and thickness of the right temporal pole. The machine learning model based on sMRI data shows promise for aiding the differential diagnosis of SCZ, MDD, and BD. Cortical features from the cingulate and temporal lobes may highlight distinct biological mechanisms underlying each disorder.
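A minimal from-scratch NumPy sketch of the winning pipeline's two ingredients, univariate feature selection followed by Gaussian Naive Bayes, on toy three-class data; the real study used 272 cortical features and compared several selectors and classifiers, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def anova_f(X, y):
    """Univariate one-way ANOVA F statistic per feature (between-class over
    within-class variance) -- a simple univariate feature-selection score."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ssb = sum((y == c).sum() * (X[y == c].mean(axis=0) - grand) ** 2 for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0) for c in classes)
    dfb, dfw = len(classes) - 1, len(y) - len(classes)
    return (ssb / dfb) / (ssw / dfw)

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and variances,
    prediction by maximum log-posterior."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None]) +
                     (X[None] - self.mu[:, None]) ** 2 / self.var[:, None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior[:, None], axis=0)]

# toy 3-class data: 2 informative features, 8 noise features
n = 150
y = np.repeat([0, 1, 2], n // 3)
X = rng.standard_normal((n, 10))
X[:, 0] += 3 * y                                 # informative
X[:, 1] -= 2 * y                                 # informative
top = np.argsort(anova_f(X, y))[::-1][:2]        # keep the 2 best features
clf = GaussianNB().fit(X[:, top], y)
acc = (clf.predict(X[:, top]) == y).mean()
```

The selector correctly recovers the two informative features, and the classifier separates the well-spaced toy classes almost perfectly.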

Optimized AI-based Neural Decoding from BOLD fMRI Signal for Analyzing Visual and Semantic ROIs in the Human Visual System.

Veronese L, Moglia A, Pecco N, Della Rosa P, Scifo P, Mainardi LT, Cerveri P

PubMed · Aug 14 2025
AI-based neural decoding reconstructs visual perception by leveraging generative models to map brain activity, measured through functional MRI (fMRI), onto the observed visual stimulus. Traditionally, ridge linear models transform fMRI into a latent space, which is then decoded using variational autoencoders (VAE) or latent diffusion models (LDM). Owing to the complexity and noisiness of fMRI data, newer approaches split the reconstruction into two sequential stages: the first provides a rough visual approximation using a VAE, and the second incorporates semantic information through an LDM guided by contrastive language-image pre-training (CLIP) embeddings. This work addressed key scientific and technical gaps of two-stage neural decoding by: 1) implementing a gated recurrent unit (GRU)-based architecture to establish a non-linear mapping between the fMRI signal and the VAE latent space, 2) optimizing the dimensionality of the VAE latent space, 3) systematically evaluating the contribution of the first reconstruction stage, and 4) analyzing the impact of different brain regions of interest (ROIs) on reconstruction quality. Experiments on the Natural Scenes Dataset, containing 73,000 unique natural images along with fMRI from eight subjects, demonstrated that the proposed architecture maintained competitive performance while reducing the complexity of its first stage by 85%. The sensitivity analysis showed that the first reconstruction stage is essential for preserving high structural similarity in the final reconstructions. Restricting the analysis to semantic ROIs, excluding early visual areas, diminished visual coherence while preserving semantic content. Inter-subject repeatability across ROIs was about 92% and 98% for visual and semantic metrics, respectively. This study represents a key step toward optimized neural decoding architectures leveraging non-linear models for stimulus prediction. The sensitivity analysis highlighted the interplay between the two reconstruction stages, while the ROI-based analysis provided strong evidence that the two-stage AI model reflects the brain's hierarchical processing of visual information.
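The non-linear fMRI-to-latent mapping rests on the standard GRU recurrence; a single-cell NumPy sketch of that recurrence (toy dimensions, untrained random weights — not the paper's architecture) is:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, P):
    """One GRU step: update gate z, reset gate r, candidate state h~.
    A stack of such cells can map an fMRI feature sequence non-linearly
    into a VAE latent vector; the paper's exact stacking is not shown."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h + P["bz"])
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h + P["br"])
    h_tilde = np.tanh(P["Wh"] @ x + P["Uh"] @ (r * h) + P["bh"])
    return (1 - z) * h + z * h_tilde             # convex blend of old and candidate

d_in, d_h = 16, 8                                # toy input and latent sizes
P = {k: 0.1 * rng.standard_normal((d_h, d_in if k[0] == "W" else d_h))
     for k in ["Wz", "Wr", "Wh", "Uz", "Ur", "Uh"]}
P.update({k: np.zeros(d_h) for k in ["bz", "br", "bh"]})
h = np.zeros(d_h)
for t in range(5):                               # feed a short fMRI "sequence"
    h = gru_cell(rng.standard_normal(d_in), h, P)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden vector stays within (-1, 1), which keeps the recurrence numerically stable.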

Delineation of the Centromedian Nucleus for Epilepsy Neuromodulation Using Deep Learning Reconstruction of White Matter-Nulled Imaging.

Ryan MV, Satzer D, Hu H, Litwiller DV, Rettmann DW, Tanabe J, Thompson JA, Ojemann SG, Kramer DR

PubMed · Aug 14 2025
Neuromodulation of the centromedian nucleus (CM) of the thalamus has shown promise in treating refractory epilepsy, particularly idiopathic generalized epilepsy and Lennox-Gastaut syndrome. However, precise targeting of CM remains challenging. The combination of deep learning reconstruction (DLR) and fast gray matter acquisition T1 inversion recovery (FGATIR) offers potential improvements in visualization of CM for deep brain stimulation (DBS) targeting. The goal of the study was to evaluate visualization of the putative CM on DLR-FGATIR and its alignment with atlas-defined CM boundaries, with the aim of facilitating direct targeting of CM for neuromodulation. This retrospective study included 12 patients with drug-resistant epilepsy treated with thalamic neuromodulation by using DLR-FGATIR for direct targeting. Postcontrast T1-weighted MRI, DLR-FGATIR, and postoperative CT were coregistered, normalized into Montreal Neurological Institute (MNI) space, and compared with the Morel histologic atlas. Contrast-to-noise ratios were measured between CM and neighboring nuclei. CM segmentations were compared between an experienced rater, a trainee rater, the Morel atlas, and the Thalamus Optimized Multi Atlas Segmentation (THOMAS) atlas (derived from expert segmentation of high-field MRI) by using the Sørensen-Dice coefficient (Dice score, a measure of overlap) and volume ratios. The number of electrode contacts within the Morel atlas CM was assessed. On DLR-FGATIR, CM was visible as an ovoid hypointensity in the intralaminar thalamus. Contrast-to-noise ratios were highest (<i>P</i> < .001) for the mediodorsal and medial pulvinar nuclei. The Dice score with the Morel atlas CM was higher (median 0.49, interquartile range 0.40-0.58) for the experienced rater (<i>P</i> < .001) than for the trainee rater (0.32, 0.19-0.46) and no different (<i>P</i> = .32) from that of the THOMAS atlas CM (0.56, 0.55-0.58). Both raters and the THOMAS atlas tended to under-segment the lateral portion of the Morel atlas CM, reflected by smaller segmentation volumes (<i>P</i> < .001). All electrodes targeting CM based on DLR-FGATIR traversed the Morel atlas CM. DLR-FGATIR permitted visualization and delineation of CM commensurate with a group atlas derived from high-field MRI. This technique provided reliable guidance for accurate electrode placement within CM, highlighting its potential use for direct targeting.
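The Dice score used to compare segmentations is simple to state in code; this toy example also mimics the reported lateral under-segmentation, where a rater mask covering only the medial half of the atlas mask yields a Dice of about 0.67 (the masks are illustrative, not patient data):

```python
import numpy as np

def dice_score(a, b):
    """Sorensen-Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|);
    1 = perfect overlap, 0 = disjoint."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy CM masks: the rater's mask under-segments the lateral half of the atlas mask
atlas = np.zeros((20, 20), dtype=bool); atlas[5:15, 5:15] = True   # 100 voxels
rater = np.zeros((20, 20), dtype=bool); rater[5:15, 5:10] = True   # medial 50 voxels
d = dice_score(atlas, rater)   # 2*50 / (100 + 50) = 2/3
```

Note that Dice penalizes both missed and spurious voxels symmetrically, which is why systematic under-segmentation shows up directly as a lower score.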

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

PubMed · Aug 14 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30 ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
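Given landmark correspondences such as those a feature matcher produces, the linear-transform estimation step reduces to a least-squares fit; a 2D NumPy sketch (synthetic points and a hypothetical `fit_affine` helper, not DINOMotion's actual solver) is:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_affine(src, dst):
    """Least-squares 2D affine transform from matched landmark pairs:
    solves dst ≈ src @ A.T + t in homogeneous coordinates -- the kind of
    registration step one can run on detected landmark correspondences."""
    n = src.shape[0]
    M = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    sol, *_ = np.linalg.lstsq(M, dst, rcond=None)
    A, t = sol[:2].T, sol[2]
    return A, t

# toy: a known rotation + translation, recovered from exact correspondences
theta = 0.3
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
src = rng.uniform(0, 100, (30, 2))               # landmarks in the moving image
dst = src @ A_true.T + t_true                    # matched landmarks in the fixed image
A_hat, t_hat = fit_affine(src, dst)
```

With exact correspondences the fit recovers the transform to numerical precision; with noisy matches the same least-squares solution gives the best-fit linear registration.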

A software ecosystem for brain tractometry processing, analysis, and insight.

Kruper J, Richie-Halford A, Qiao J, Gilmore A, Chang K, Grotheer M, Roy E, Caffarra S, Gomez T, Chou S, Cieslak M, Koudoro S, Garyfallidis E, Satterthwaite TD, Yeatman JD, Rokem A

PubMed · Aug 14 2025
Tractometry uses diffusion-weighted magnetic resonance imaging (dMRI) to assess the physical properties of brain connections. Here, we present an integrative ecosystem of software that performs all steps of tractometry: post-processing of dMRI data, delineation of major white matter pathways, and modeling of the tissue properties within them. This ecosystem also provides a set of interoperable and extensible tools for visualization and interpretation of the results that extract insights from these measurements. These include novel machine learning and statistical analysis methods adapted to the characteristic structure of tract-based data. We benchmark the performance of these statistical analysis methods on different datasets and analysis tasks, including hypothesis testing of group differences and predictive analysis of subject age. We also demonstrate that computational advances implemented in the software offer orders-of-magnitude acceleration. Taken together, these open-source software tools, freely available at https://tractometry.org, provide a transformative environment for the analysis of dMRI data.
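The core tractometry step, resampling each streamline to a fixed number of nodes and sampling a tissue-property map along it to form a tract profile, can be sketched as below; the nearest-voxel sampling and the toy straight-line "tract" are simplifications, not what the ecosystem actually implements.

```python
import numpy as np

def tract_profile(streamlines, scalar_map, n_nodes=100):
    """Resample each streamline to n_nodes equidistant points by arc length,
    sample a scalar map (e.g. FA) at each node with nearest-voxel lookup,
    and average across streamlines to get one (n_nodes,) tract profile."""
    profiles = []
    for sl in streamlines:                       # sl: (n_points, 3) coordinates
        seg = np.linalg.norm(np.diff(sl, axis=0), axis=1)
        s = np.concatenate([[0], np.cumsum(seg)])            # cumulative arc length
        s_new = np.linspace(0, s[-1], n_nodes)
        pts = np.column_stack([np.interp(s_new, s, sl[:, d]) for d in range(3)])
        ijk = np.clip(np.round(pts).astype(int), 0, np.array(scalar_map.shape) - 1)
        profiles.append(scalar_map[ijk[:, 0], ijk[:, 1], ijk[:, 2]])
    return np.mean(profiles, axis=0)

# toy: straight "tract" through a volume whose scalar rises linearly along x
vol = np.fromfunction(lambda i, j, k: i / 19.0, (20, 20, 20))
sls = [np.column_stack([np.linspace(0, 19, 40),
                        np.full(40, 10.0), np.full(40, 10.0)]) for _ in range(5)]
profile = tract_profile(sls, vol, n_nodes=50)
```

On the toy volume the profile rises monotonically from 0 to 1 along the tract, which is exactly the node-wise structure the ecosystem's statistical methods then analyze.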

Comparative evaluation of supervised and unsupervised deep learning strategies for denoising hyperpolarized <sup>129</sup>Xe lung MRI.

Bdaiwi AS, Willmering MM, Hussain R, Hysinger E, Woods JC, Walkup LL, Cleveland ZI

PubMed · Aug 14 2025
Reduced signal-to-noise ratio (SNR) in hyperpolarized <sup>129</sup>Xe MR images can affect accurate quantification for research and diagnostic evaluations. This study therefore explores supervised deep learning (DL) denoising approaches, traditional (Trad) and Noise2Noise (N2N), and an unsupervised Noise2Void (N2V) approach for <sup>129</sup>Xe MR imaging. The DL denoising frameworks were trained and tested on 952 <sup>129</sup>Xe MRI data sets (421 ventilation, 125 diffusion-weighted, and 406 gas-exchange acquisitions) from healthy subjects and participants with cardiopulmonary conditions, and compared with the block matching 3D denoising technique. Evaluation involved mean signal, noise standard deviation (SD), SNR, and sharpness. Ventilation defect percentage (VDP), apparent diffusion coefficient (ADC), membrane uptake, red blood cell (RBC) transfer, and RBC:Membrane were also evaluated for ventilation, diffusion, and gas-exchange images, respectively. Denoising methods significantly reduced noise SDs and enhanced SNR (p < 0.05) across all imaging types. The traditional ventilation model (Trad<sub>vent</sub>) improved sharpness in ventilation images but underestimated VDP (bias = -1.37%) relative to raw images, whereas N2N<sub>vent</sub> overestimated VDP (bias = +1.88%). Block matching 3D and N2V<sub>vent</sub> showed minimal VDP bias (≤ 0.35%). Denoising significantly reduced the ADC mean and SD (p < 0.05, bias ≤ -0.63 × 10<sup>-2</sup>). Trad<sub>vent</sub> and N2N<sub>vent</sub> increased mean membrane uptake and RBC transfer (p < 0.001) with no change in RBC:Membrane. Denoising also reduced the SDs of all gas-exchange metrics (p < 0.01). Low SNR may impair the potential of <sup>129</sup>Xe MRI for clinical diagnosis and lung function assessment. The supervised and unsupervised DL denoising methods evaluated here enhanced <sup>129</sup>Xe imaging quality, offering promise for improved clinical interpretation and diagnosis.
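What makes Noise2Void unsupervised is its blind-spot masking: randomly chosen target pixels are replaced by random neighbors, and a network is trained to predict the original values from the surrounding context. A NumPy sketch of just the masking step (hypothetical parameters; the training loop is omitted) is:

```python
import numpy as np

rng = np.random.default_rng(5)

def n2v_mask(img, n_pix=64):
    """Noise2Void-style blind-spot masking: pick random target pixels,
    replace each with a random pixel from its 5x5 neighborhood, and return
    the masked image plus target coordinates. A network trained to predict
    the original values at those coordinates learns to denoise without
    clean targets."""
    masked = img.copy()
    H, W = img.shape
    ys = rng.integers(2, H - 2, n_pix)
    xs = rng.integers(2, W - 2, n_pix)
    for y, x in zip(ys, xs):
        dy, dx = rng.integers(-2, 3, 2)
        if dy == 0 and dx == 0:
            dy = 1                               # never copy a pixel onto itself
        masked[y, x] = img[y + dy, x + dx]
    return masked, ys, xs

noisy = rng.normal(0.0, 1.0, (32, 32))           # toy noisy image
masked, ys, xs = n2v_mask(noisy)
changed = masked != noisy                        # only the blind-spot pixels differ
```

The blind spot prevents the network from trivially copying its noisy input, which is why N2V needs neither clean references (as Trad does) nor paired noisy acquisitions (as N2N does).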

Instantaneous T<sub>2</sub> Mapping via Reduced Field of View Multiple Overlapping-Echo Detachment Imaging: Application in Free-Breathing Abdominal and Myocardial Imaging.

Dai C, Cai C, Wu J, Zhu L, Qu X, Yang Q, Zhou J, Cai S

PubMed · Aug 14 2025
Quantitative magnetic resonance imaging (qMRI) has attracted increasing attention in clinical diagnosis and medical science due to its capability to non-invasively characterize tissue properties. Nevertheless, most qMRI methods are time-consuming and sensitive to motion, making them inadequate for quantifying organs with physiological movement. In this context, the single-shot multiple overlapping-echo detachment (MOLED) imaging technique has been presented, but its acquisition efficiency and image quality are limited when the field of view (FOV) is smaller than the object, especially for abdominal organs and the myocardium. A novel single-shot reduced-FOV qMRI method was developed based on MOLED (termed rFOV-MOLED). This method combines zonal oblique multislice (ZOOM) and outer volume suppression (OVS) techniques to reduce the FOV and suppress signals outside it. A deep neural network was trained using synthetic data generated from Bloch simulations to achieve high-quality T<sub>2</sub> map reconstruction from rFOV-MOLED images. Numerical simulation, water phantom, and in vivo abdominal and myocardial imaging experiments were performed to evaluate the method. The coefficient of variation and repeatability index were used to evaluate reproducibility. Multiple statistical analyses were used to evaluate the accuracy and significance of the method, including linear regression, Bland-Altman analysis, the Wilcoxon signed-rank test, and the Mann-Whitney U test, with a significance level of 0.05. Experimental results show that rFOV-MOLED performed excellently in suppressing aliasing signals caused by FOV reduction. It provided T<sub>2</sub> maps closely resembling the reference maps. Moreover, it gave finer tissue details than MOLED and was highly repeatable. rFOV-MOLED can rapidly and stably provide accurate T<sub>2</sub> maps for the myocardium and specific abdominal organs with improved acquisition efficiency and image quality.
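Downstream of any T<sub>2</sub>-mapping acquisition sits a voxel-wise exponential fit; a log-linear least-squares sketch on clean synthetic multi-echo signals (a conventional fitting baseline, not the paper's network-based reconstruction) is:

```python
import numpy as np

def fit_t2(echo_times, signals):
    """Voxel-wise mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2),
    linearized as log S = log S0 - TE / T2 and solved by least squares.
    signals: (n_echoes, n_voxels) array of positive signal intensities."""
    te = np.asarray(echo_times)
    logs = np.log(signals)
    M = np.column_stack([np.ones_like(te), -te])  # columns: [log S0, 1/T2]
    coef, *_ = np.linalg.lstsq(M, logs, rcond=None)
    s0 = np.exp(coef[0])
    t2 = 1.0 / coef[1]
    return s0, t2

te = np.array([10.0, 30.0, 50.0, 70.0])          # echo times in ms
t2_true = np.array([45.0, 80.0, 120.0])          # three toy "voxels"
signals = np.exp(-te[:, None] / t2_true[None, :])
s0, t2 = fit_t2(te, signals)
```

On noisy real data the log transform skews the noise at long echo times, which is one reason learned reconstructions such as the paper's network can outperform this simple baseline.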
