Page 3 of 3953948 results

Aphasia severity prediction using a multi-modal machine learning approach.

Hu X, Varkanitsa M, Kropp E, Betke M, Ishwar P, Kiran S

PubMed · Aug 15, 2025
The present study examined an integrated multimodal neuroimaging approach (T1 structural MRI, diffusion tensor imaging (DTI), and resting-state fMRI (rsfMRI)) to predict aphasia severity, measured by the Western Aphasia Battery-Revised Aphasia Quotient (WAB-R AQ), in 76 individuals with post-stroke aphasia. We employed Support Vector Regression (SVR) and Random Forest (RF) models with supervised feature selection and a stacked feature prediction approach. The SVR model outperformed RF, achieving an average root mean square error (RMSE) of 16.38±5.57, Pearson's correlation coefficient (r) of 0.70±0.13, and mean absolute error (MAE) of 12.67±3.27, compared to RF's RMSE of 18.41±4.34, r of 0.66±0.15, and MAE of 14.64±3.04. Resting-state neural activity and structural integrity emerged as crucial predictors of aphasia severity, appearing in the top 20% of predictor combinations for both SVR and RF. Finally, the feature selection method revealed that functional connectivity in both hemispheres and between homologous language areas is critical for predicting language outcomes in patients with aphasia. The statistically significant difference in performance between the single-modality models and the optimal multimodal SVR/RF model (which included both resting-state connectivity and structural information) underscores that aphasia severity is influenced by factors beyond lesion location and volume. These findings suggest that integrating multiple neuroimaging modalities enhances the prediction of language outcomes in aphasia beyond lesion characteristics alone, offering insights that could inform personalized rehabilitation strategies.
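The RMSE, MAE, and Pearson's r values reported above are standard regression metrics; a minimal pure-Python sketch of how they are computed from paired true and predicted severity scores (illustrative only, not the authors' code):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute RMSE, MAE, and Pearson's r for paired true/predicted values."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return rmse, mae, cov / (st * sp)
```

In a cross-validated setting like the study's, these metrics would be computed per fold and then averaged, yielding the "mean ± SD" figures quoted.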

Machine learning based differential diagnosis of schizophrenia, major depressive disorder and bipolar disorder using structural magnetic resonance imaging.

Cao P, Li R, Li Y, Dong Y, Tang Y, Xu G, Si Q, Chen C, Chen L, Liu W, Yao Y, Sui Y, Zhang J

PubMed · Aug 15, 2025
Cortical morphological abnormalities in schizophrenia (SCZ), major depressive disorder (MDD), and bipolar disorder (BD) have been identified in past research. However, their potential as objective biomarkers to differentiate these disorders remains uncertain. Machine learning models may offer a novel diagnostic tool. Structural MRI (sMRI) scans of 220 SCZ, 220 MDD, and 220 BD patients and 220 healthy controls were obtained using a 3T scanner. Volume, thickness, surface area, and mean curvature of 68 cerebral cortices were extracted using FreeSurfer. The resulting 272 features underwent 3 feature selection techniques to isolate important variables for model construction. These features were incorporated into 3 classifiers for classification. After model evaluation and hyperparameter tuning, the best-performing model was identified, along with the most significant brain measures. The univariate feature selection-Naive Bayes model achieved the best performance, with an accuracy of 0.66, macro-average AUC of 0.86, sensitivities ranging from 0.58 to 0.86, and specificities ranging from 0.81 to 0.93. Key features included thickness of right isthmus-cingulate cortex, area of left inferior temporal gyrus, thickness of right superior temporal gyrus, mean curvature of right pars orbitalis, thickness of left transverse temporal cortex, volume of left caudal anterior-cingulate cortex, area of right banks superior temporal sulcus, and thickness of right temporal pole. The machine learning model based on sMRI data shows promise for aiding in the differential diagnosis of SCZ, MDD, and BD. Cortical features from the cingulate and temporal lobes may highlight distinct biological mechanisms underlying each disorder.
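The winning pipeline pairs univariate feature selection with a Naive Bayes classifier. A pure-Python sketch of the univariate step, ranking features by a one-way ANOVA F statistic across diagnostic groups (an assumption about the exact scoring function; the selected indices would then feed the classifier):

```python
import math
from collections import defaultdict

def anova_f(values_by_class):
    """One-way ANOVA F statistic for one feature across diagnostic groups."""
    all_vals = [v for vals in values_by_class.values() for v in vals]
    n, k = len(all_vals), len(values_by_class)
    grand = sum(all_vals) / n
    ss_between = sum(len(v) * (sum(v) / len(v) - grand) ** 2
                     for v in values_by_class.values())
    ss_within = sum(sum((x - sum(v) / len(v)) ** 2 for x in v)
                    for v in values_by_class.values())
    return (ss_between / (k - 1)) / (ss_within / (n - k) + 1e-12)

def select_top_k(X, y, k):
    """Keep the k feature indices with the highest univariate F scores."""
    scored = []
    for j in range(len(X[0])):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row[j])
        scored.append((anova_f(groups), j))
    return [j for _, j in sorted(scored, reverse=True)[:k]]
```

With 272 cortical features and four groups, this reduces the input to the handful of discriminative measures listed above before classification.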

Delineation of the Centromedian Nucleus for Epilepsy Neuromodulation Using Deep Learning Reconstruction of White Matter-Nulled Imaging.

Ryan MV, Satzer D, Hu H, Litwiller DV, Rettmann DW, Tanabe J, Thompson JA, Ojemann SG, Kramer DR

PubMed · Aug 14, 2025
Neuromodulation of the centromedian nucleus (CM) of the thalamus has shown promise in treating refractory epilepsy, particularly for idiopathic generalized epilepsy and Lennox-Gastaut syndrome. However, precise targeting of CM remains challenging. The combination of deep learning reconstruction (DLR) and fast gray matter acquisition T1 inversion recovery (FGATIR) offers potential improvements in visualization of CM for deep brain stimulation (DBS) targeting. The goal of the study was to evaluate the visualization of the putative CM on DLR-FGATIR and its alignment with atlas-defined CM boundaries, with the aim of facilitating direct targeting of CM for neuromodulation. This retrospective study included 12 patients with drug-resistant epilepsy treated with thalamic neuromodulation by using DLR-FGATIR for direct targeting. Postcontrast-T1-weighted MRI, DLR-FGATIR, and postoperative CT were coregistered and normalized into Montreal Neurological Institute (MNI) space and compared with the Morel histologic atlas. Contrast-to-noise ratios were measured between CM and neighboring nuclei. CM segmentations were compared between an experienced rater, a trainee rater, the Morel atlas, and the Thalamus Optimized Multi Atlas Segmentation (THOMAS) atlas (derived from expert segmentation of high-field MRI) by using the Sørensen-Dice coefficient (Dice score, a measure of overlap) and volume ratios. The number of electrode contacts within the Morel atlas CM was assessed. On DLR-FGATIR, CM was visible as an ovoid hypointensity in the intralaminar thalamus. Contrast-to-noise ratios were highest (<i>P</i> < .001) for the mediodorsal and medial pulvinar nuclei. Dice score with the Morel atlas CM was higher (median 0.49, interquartile range 0.40-0.58) for the experienced rater (<i>P</i> < .001) than for the trainee rater (0.32, 0.19-0.46) and no different (<i>P</i> = .32) from the THOMAS atlas CM (0.56, 0.55-0.58).
Both raters and the THOMAS atlas tended to under-segment the lateral portion of the Morel atlas CM, reflected by smaller segmentation volumes (<i>P</i> < .001). All electrodes targeting CM based on DLR-FGATIR traversed the Morel atlas CM. DLR-FGATIR permitted visualization and delineation of CM commensurate with a group atlas derived from high-field MRI. This technique provided reliable guidance for accurate electrode placement within CM, highlighting its potential use for direct targeting.
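The Dice scores and volume ratios used to compare segmentations have simple definitions; a pure-Python sketch over binary masks represented as sets of voxel indices (illustrative, not the study's implementation):

```python
def dice_score(mask_a, mask_b):
    """Sørensen-Dice overlap between two binary segmentations
    (each a set of voxel coordinates). 1.0 = perfect overlap."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def volume_ratio(mask_a, mask_b):
    """Ratio of segmentation volumes; < 1 indicates mask_a
    under-segments relative to mask_b (e.g., rater vs. atlas CM)."""
    return len(set(mask_a)) / len(set(mask_b))
```

A volume ratio below 1 alongside a moderate Dice score is exactly the pattern described above: the raters captured the medial CM but missed its lateral extent.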

Comparative evaluation of supervised and unsupervised deep learning strategies for denoising hyperpolarized <sup>129</sup>Xe lung MRI.

Bdaiwi AS, Willmering MM, Hussain R, Hysinger E, Woods JC, Walkup LL, Cleveland ZI

PubMed · Aug 14, 2025
Reduced signal-to-noise ratio (SNR) in hyperpolarized <sup>129</sup>Xe MR images can affect accurate quantification for research and diagnostic evaluations. Thus, this study explores supervised deep learning (DL) denoising approaches, traditional (Trad) and Noise2Noise (N2N), and an unsupervised Noise2Void (N2V) approach for <sup>129</sup>Xe MR imaging. The DL denoising frameworks were trained and tested on 952 <sup>129</sup>Xe MRI data sets (421 ventilation, 125 diffusion-weighted, and 406 gas-exchange acquisitions) from healthy subjects and participants with cardiopulmonary conditions and compared with the block matching 3D denoising technique. Evaluation involved mean signal, noise standard deviation (SD), SNR, and sharpness. Ventilation defect percentage (VDP), apparent diffusion coefficient (ADC), membrane uptake, red blood cell (RBC) transfer, and RBC:Membrane were also evaluated for ventilation, diffusion, and gas-exchange images, respectively. Denoising methods significantly reduced noise SDs and enhanced SNR (p < 0.05) across all imaging types. The traditional ventilation model (Trad<sub>vent</sub>) improved sharpness in ventilation images but underestimated VDP (bias = -1.37%) relative to raw images, whereas N2N<sub>vent</sub> overestimated VDP (bias = +1.88%). Block matching 3D and N2V<sub>vent</sub> showed minimal VDP bias (≤ 0.35%). Denoising significantly reduced ADC mean and SD (p < 0.05, bias ≤ -0.63 × 10<sup>-2</sup>). Trad<sub>vent</sub> and N2N<sub>vent</sub> increased mean membrane uptake and RBC transfer (p < 0.001) with no change in RBC:Membrane. Denoising also reduced SDs of all gas-exchange metrics (p < 0.01). Low SNR may impair the potential of <sup>129</sup>Xe MRI for clinical diagnosis and lung function assessment. The evaluation of supervised and unsupervised DL denoising methods enhanced <sup>129</sup>Xe imaging quality, offering promise for improved clinical interpretation and diagnosis.
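The SNR and bias figures above follow common conventions; a minimal sketch, assuming SNR is taken as mean signal intensity in a lung ROI over the SD of a background (noise-only) region, and bias as the simple difference of a metric before and after denoising:

```python
import statistics

def snr(signal_roi, noise_roi):
    """SNR = mean signal intensity / SD of a background noise region."""
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

def metric_bias(denoised_value, raw_value):
    """Signed bias of a quantitative metric (e.g., VDP %) after denoising;
    negative = underestimation relative to the raw image."""
    return denoised_value - raw_value
```

Under this convention, halving the background noise SD while preserving mean signal doubles SNR, which is the effect the denoising frameworks target.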

AI-based prediction of best-corrected visual acuity in patients with multiple retinal diseases using multimodal medical imaging.

Dong L, Gao W, Niu L, Deng Z, Gong Z, Li HY, Fang LJ, Shao L, Zhang RH, Zhou WD, Ma L, Wei WB

PubMed · Aug 14, 2025
This study evaluated the performance of artificial intelligence (AI) algorithms in predicting best-corrected visual acuity (BCVA) for patients with multiple retinal diseases, using multimodal medical imaging including macular optical coherence tomography (OCT), optic disc OCT and fundus images. The goal was to enhance clinical BCVA evaluation efficiency and precision. A retrospective study used data from 2545 patients (4028 eyes) for training, 896 (1006 eyes) for testing and 196 (200 eyes) for internal validation, with an external prospective dataset of 741 patients (1381 eyes). Single-modality analyses employed different backbone networks and feature fusion methods, while multimodal fusion combined modalities using average aggregation, concatenation/reduction and maximum feature selection. Predictive accuracy was measured by mean absolute error (MAE), root mean squared error (RMSE) and R² score. Macular OCT achieved better single-modality prediction than optic disc OCT, with MAE of 3.851 vs 4.977 and RMSE of 7.844 vs 10.026. Fundus images showed an MAE of 3.795 and RMSE of 7.954. Multimodal fusion significantly improved accuracy, with the best results using average aggregation, achieving an MAE of 2.865, RMSE of 6.229 and R² of 0.935. External validation yielded an MAE of 8.38 and RMSE of 10.62. Multimodal fusion provided the most accurate BCVA predictions, demonstrating AI's potential to improve clinical evaluation. However, challenges remain regarding disease diversity and applicability in resource-limited settings.
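The three multimodal fusion strategies named above (average aggregation, concatenation, and maximum feature selection) combine per-modality feature vectors in straightforward ways; a pure-Python sketch of each (illustrative, not the authors' network code):

```python
def fuse_average(feature_sets):
    """Element-wise mean across per-modality feature vectors of equal length
    (the 'average aggregation' that performed best in the study)."""
    return [sum(vals) / len(vals) for vals in zip(*feature_sets)]

def fuse_concat(feature_sets):
    """Concatenate per-modality feature vectors into one long vector
    (typically followed by a dimensionality-reducing layer)."""
    return [v for feats in feature_sets for v in feats]

def fuse_max(feature_sets):
    """Element-wise maximum across modalities ('maximum feature selection')."""
    return [max(vals) for vals in zip(*feature_sets)]
```

In practice these operate on learned embeddings from each backbone (macular OCT, optic disc OCT, fundus), and the fused vector feeds a regression head that outputs the BCVA estimate.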

Optimized AI-based Neural Decoding from BOLD fMRI Signal for Analyzing Visual and Semantic ROIs in the Human Visual System.

Veronese L, Moglia A, Pecco N, Della Rosa P, Scifo P, Mainardi LT, Cerveri P

PubMed · Aug 14, 2025
AI-based neural decoding reconstructs visual perception by leveraging generative models to map brain activity measured through functional MRI (fMRI) into the observed visual stimulus. Traditionally, ridge linear models transform fMRI into a latent space, which is then decoded using variational autoencoders (VAE) or latent diffusion models (LDM). Owing to the complexity and noisiness of fMRI data, newer approaches split the reconstruction into two sequential stages: the first provides a rough visual approximation using a VAE, and the second incorporates semantic information through an LDM guided by contrastive language-image pre-training (CLIP) embeddings. This work addressed some key scientific and technical gaps of two-stage neural decoding by: 1) implementing a gated recurrent unit (GRU)-based architecture to establish a non-linear mapping between the fMRI signal and the VAE latent space, 2) optimizing the dimensionality of the VAE latent space, 3) systematically evaluating the contribution of the first reconstruction stage, and 4) analyzing the impact of different brain regions of interest (ROIs) on reconstruction quality. Experiments on the Natural Scenes Dataset, containing 73,000 unique natural images, along with fMRI of eight subjects, demonstrated that the proposed architecture maintained competitive performance while reducing the complexity of its first stage by 85%. The sensitivity analysis showed that the first reconstruction stage is essential for preserving high structural similarity in the final reconstructions. Restricting analysis to semantic ROIs, while excluding early visual areas, diminished visual coherence, though semantic content was preserved. The inter-subject repeatability across ROIs was approximately 92% and 98% for visual and semantic metrics, respectively. This study represents a key step toward optimized neural decoding architectures leveraging non-linear models for stimulus prediction.
Sensitivity analysis highlighted the interplay between the two reconstruction stages, while ROI-based analysis provided strong evidence that the two-stage AI model reflects the brain's hierarchical processing of visual information.
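The GRU at the heart of the proposed non-linear fMRI-to-latent mapping is a standard gated recurrent cell; a minimal pure-Python sketch of one update step, with toy weight matrices (the real model would be a trained deep network, e.g. in PyTorch):

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def _matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def gru_step(x, h, params):
    """One GRU cell update. The update gate z and reset gate r decide how
    much of the previous hidden state h to keep vs. overwrite, which is what
    lets the cell model non-linear temporal structure in the fMRI signal."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = [_sigmoid(a + b) for a, b in zip(_matvec(Wz, x), _matvec(Uz, h))]
    r = [_sigmoid(a + b) for a, b in zip(_matvec(Wr, x), _matvec(Ur, h))]
    rh = [ri * hi for ri, hi in zip(r, h)]
    h_tilde = [math.tanh(a + b) for a, b in zip(_matvec(Wh, x), _matvec(Uh, rh))]
    return [(1 - zi) * hi + zi * hti for zi, hi, hti in zip(z, h, h_tilde)]
```

In the decoding pipeline, the final hidden state after processing the fMRI input would be projected to the VAE latent vector that stage one decodes into a rough image.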

Performance Evaluation of Deep Learning for the Detection and Segmentation of Thyroid Nodules: Systematic Review and Meta-Analysis.

Ni J, You Y, Wu X, Chen X, Wang J, Li Y

PubMed · Aug 14, 2025
Thyroid cancer is one of the most common endocrine malignancies. Its incidence has steadily increased in recent years. Distinguishing between benign and malignant thyroid nodules (TNs) is challenging due to their overlapping imaging features. The rapid advancement of artificial intelligence (AI) in medical image analysis, particularly deep learning (DL) algorithms, has provided novel solutions for automated TN detection. However, existing studies exhibit substantial heterogeneity in diagnostic performance. Furthermore, no systematic evidence-based research comprehensively assesses the diagnostic performance of DL models in this field. This study aimed to conduct a systematic review and meta-analysis to appraise the performance of DL algorithms in diagnosing TN malignancy, identify key factors influencing their diagnostic efficacy, and compare their accuracy with that of clinicians in image-based diagnosis. We systematically searched multiple databases, including PubMed, Cochrane, Embase, Web of Science, and IEEE, and identified 41 eligible studies for systematic review and meta-analysis. Based on the task type, studies were categorized into segmentation (n=14) and detection (n=27) tasks. The pooled sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were calculated for each group. Subgroup analyses were performed to examine the impact of transfer learning and compare model performance against clinicians. For segmentation tasks, the pooled sensitivity, specificity, and AUC were 82% (95% CI 79%-84%), 95% (95% CI 92%-96%), and 0.91 (95% CI 0.89-0.94), respectively. For detection tasks, the pooled sensitivity, specificity, and AUC were 91% (95% CI 89%-93%), 89% (95% CI 86%-91%), and 0.96 (95% CI 0.93-0.97), respectively. Some studies demonstrated that DL models could achieve diagnostic performance comparable with, or even exceeding, that of clinicians in certain scenarios.
The application of transfer learning contributed to improved model performance. DL algorithms exhibit promising diagnostic accuracy in TN imaging, highlighting their potential as auxiliary diagnostic tools. However, current studies are limited by suboptimal methodological design, inconsistent image quality across datasets, and insufficient external validation, which may introduce bias. Future research should enhance methodological standardization, improve model interpretability, and promote transparent reporting to facilitate the sustainable clinical translation of DL-based solutions.
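Pooling per-study sensitivities or specificities is typically done on the logit scale with inverse-variance weights. A simplified fixed-effect sketch (the review likely used a bivariate random-effects model, so this is illustrative only):

```python
import math

def pool_proportions(events, totals):
    """Inverse-variance fixed-effect pooling of proportions (e.g., per-study
    sensitivities) on the logit scale, with a 0.5 continuity correction.
    Returns the pooled proportion back-transformed to [0, 1]."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0                      # continuity correction
        p = e / n
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / (1.0 / e + 1.0 / (n - e)))  # 1 / Var(logit p)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))
```

Larger studies get proportionally more weight, so the pooled 91% detection sensitivity above is dominated by the biggest, most precise studies.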

A software ecosystem for brain tractometry processing, analysis, and insight.

Kruper J, Richie-Halford A, Qiao J, Gilmore A, Chang K, Grotheer M, Roy E, Caffarra S, Gomez T, Chou S, Cieslak M, Koudoro S, Garyfallidis E, Satthertwaite TD, Yeatman JD, Rokem A

PubMed · Aug 14, 2025
Tractometry uses diffusion-weighted magnetic resonance imaging (dMRI) to assess physical properties of brain connections. Here, we present an integrative ecosystem of software that performs all steps of tractometry: post-processing of dMRI data, delineation of major white matter pathways, and modeling of the tissue properties within them. This ecosystem also provides a set of interoperable and extensible tools for visualization and interpretation of the results that extract insights from these measurements. These include novel machine learning and statistical analysis methods adapted to the characteristic structure of tract-based data. We benchmark the performance of these statistical analysis methods in different datasets and analysis tasks, including hypothesis testing on group differences and predictive analysis of subject age. We also demonstrate that computational advances implemented in the software offer orders of magnitude of acceleration. Taken together, these open-source software tools-freely available at https://tractometry.org-provide a transformative environment for the analysis of dMRI data.

Lung-DDPM: Semantic Layout-guided Diffusion Models for Thoracic CT Image Synthesis.

Jiang Y, Lemarechal Y, Bafaro J, Abi-Rjeile J, Joubert P, Despres P, Manem V

PubMed · Aug 14, 2025
With the rapid development of artificial intelligence (AI), AI-assisted medical imaging analysis demonstrates remarkable performance in early lung cancer screening. However, the costly annotation process and privacy concerns limit the construction of large-scale medical datasets, hampering the further application of AI in healthcare. To address the data scarcity in lung cancer screening, we propose Lung-DDPM, a thoracic CT image synthesis approach that effectively generates high-fidelity 3D synthetic CT images, which prove helpful in downstream lung nodule segmentation tasks. Our method is based on semantic layout-guided denoising diffusion probabilistic models (DDPM), enabling anatomically reasonable, seamless, and consistent sample generation even from incomplete semantic layouts. Our results suggest that the proposed method outperforms other state-of-the-art (SOTA) generative models in image quality evaluation and downstream lung nodule segmentation tasks. Specifically, Lung-DDPM achieved superior performance on our large validation cohort, with a Fréchet inception distance (FID) of 0.0047, maximum mean discrepancy (MMD) of 0.0070, and mean squared error (MSE) of 0.0024. These results were 7.4×, 3.1×, and 29.5× better than the second-best competitors, respectively. Furthermore, the lung nodule segmentation model, trained on a dataset combining real and Lung-DDPM-generated synthetic samples, attained a Dice Coefficient (Dice) of 0.3914 and sensitivity of 0.4393. This represents 8.8% and 18.6% improvements in Dice and sensitivity compared to the model trained solely on real samples. The experimental results highlight Lung-DDPM's potential for a broader range of medical imaging applications, such as general tumor segmentation, cancer survival estimation, and risk prediction. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM/.
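Of the three synthesis-quality metrics reported (FID, MMD, MSE), maximum mean discrepancy has the most self-contained definition; a pure-Python sketch of the biased squared-MMD estimator with an RBF kernel over feature vectors (illustrative; the paper's exact kernel and features are assumptions):

```python
import math

def _rbf(x, y, gamma=1.0):
    """RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy between two
    samples of feature vectors (e.g., features of real vs. synthetic CTs).
    0 means the two distributions look identical under this kernel."""
    m, n = len(X), len(Y)
    kxx = sum(_rbf(a, b, gamma) for a in X for b in X) / (m * m)
    kyy = sum(_rbf(a, b, gamma) for a in Y for b in Y) / (n * n)
    kxy = sum(_rbf(a, b, gamma) for a in X for b in Y) / (m * n)
    return kxx + kyy - 2 * kxy
```

A low MMD like the 0.0070 reported indicates that features of synthetic CT volumes are statistically close to those of real scans.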

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

PubMed · Aug 14, 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30 ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
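The Hausdorff distances quoted above measure the worst-case boundary disagreement between tracked and reference contours; a minimal pure-Python sketch over point sets (illustrative; clinical pipelines often report the 95th-percentile variant to reduce outlier sensitivity):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets, e.g. contour
    points (in mm) of a tracked organ vs. the ground-truth contour."""
    def directed(P, Q):
        # Worst-case distance from any point of P to its nearest point in Q.
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```

A Hausdorff distance of 5.47 mm for the kidney thus means no tracked boundary point was farther than about 5.5 mm from the reference contour.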
