
Dynamic neural network modulation associated with rumination in major depressive disorder: a prospective observational comparative analysis of cognitive behavioral therapy and pharmacotherapy.

Katayama N, Shinagawa K, Hirano J, Kobayashi Y, Nakagawa A, Umeda S, Kamiya K, Tajima M, Amano M, Nogami W, Ihara S, Noda S, Terasawa Y, Kikuchi T, Mimura M, Uchida H

PubMed · Aug 6, 2025
Cognitive behavioral therapy (CBT) and pharmacotherapy are primary treatments for major depressive disorder (MDD). However, their differential effects on the neural networks associated with rumination, or repetitive negative thinking, remain poorly understood. This study included 135 participants, whose rumination severity was measured using the rumination response scale (RRS) and whose resting brain activity was measured using functional magnetic resonance imaging (fMRI) at baseline and after 16 weeks. MDD patients received either standard CBT based on Beck's manual (n = 28) or pharmacotherapy (n = 32). Using a hidden Markov model, we observed that MDD patients exhibited increased activity in the default mode network (DMN) and decreased occupancies in the sensorimotor and central executive networks (CEN). The DMN occurrence rate correlated positively with rumination severity. CBT, while not specifically designed to target rumination, reduced DMN occurrence rate and facilitated transitions toward a CEN-dominant brain state as part of broader therapeutic effects. Pharmacotherapy shifted DMN activity to the posterior region of the brain. These findings suggest that CBT and pharmacotherapy modulate brain network dynamics related to rumination through distinct therapeutic pathways.
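To make the occupancy idea concrete, here is a minimal sketch of fitting a hidden Markov model to resting-state network time series and computing each state's fractional occupancy. It assumes the hmmlearn package; the data shape, state count, and network signals are placeholders, not the study's pipeline.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# 400 fMRI time points x 10 network signals (synthetic stand-in).
timeseries = rng.standard_normal((400, 10))

# Fit an HMM with a handful of latent brain states.
model = hmm.GaussianHMM(n_components=5, covariance_type="full",
                        n_iter=100, random_state=0)
model.fit(timeseries)
states = model.predict(timeseries)

# Fractional occupancy: the share of time points assigned to each state,
# analogous to the DMN occurrence rate correlated with rumination above.
occupancy = np.bincount(states, minlength=model.n_components) / len(states)
print({f"state_{k}": round(float(p), 3) for k, p in enumerate(occupancy)})
```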

Development and validation of the multidimensional machine learning model for preoperative risk stratification in papillary thyroid carcinoma: a multicenter, retrospective cohort study.

Feng JW, Zhang L, Yang YX, Qin RJ, Liu SQ, Qin AC, Jiang Y

PubMed · Aug 6, 2025
This study aimed to develop and validate a multi-modal machine learning model for preoperative risk stratification in papillary thyroid carcinoma (PTC), addressing the limitations of current systems that rely on postoperative pathological features. We analyzed 974 PTC patients from three medical centers in China using a multi-modal approach integrating: (1) clinical indicators, (2) immunological indices, (3) ultrasound radiomics features, and (4) CT radiomics features. Our methodology employed a gradient boosting machine for feature selection and a random forest for classification, with model interpretability provided through SHapley Additive exPlanations (SHAP) analysis. The model was validated on an internal cohort (n = 225) and two external cohorts (n = 51, n = 174). The final 15-feature model achieved AUCs of 0.91, 0.84, and 0.77 across the three validation cohorts, improving to 0.96, 0.95, and 0.89 after cohort-specific refitting. SHAP analysis revealed CT texture features, ultrasound morphological features, and immune-inflammatory markers as key predictors, with consistent patterns across validation sites despite center-specific variations. Subgroup analysis showed superior performance in tumors > 1 cm and in patients without extrathyroidal extension. Our multi-modal machine learning approach provides accurate preoperative risk stratification for PTC with robust cross-center applicability. This computational framework for integrating heterogeneous imaging and clinical data demonstrates the potential of multi-modal joint learning in healthcare imaging to transform clinical decision-making by enabling personalized treatment planning.
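As a rough illustration of the selection-then-classification pattern described above (gradient boosting to rank features, a random forest as the final classifier, SHAP for interpretation), the sketch below uses synthetic data; the feature count, sizes, and hyperparameters are assumptions, not the study's.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 60))        # clinical + radiomics features (synthetic)
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # toy risk label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features with a gradient boosting machine and keep the top 15.
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
top15 = np.argsort(gbm.feature_importances_)[::-1][:15]

# Train the final random forest on the selected features.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[:, top15], y_tr)

# SHAP values attribute each prediction to individual features.
shap_values = shap.TreeExplainer(rf).shap_values(X_te[:, top15])
```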

AI-derived CT biomarker score for robust COVID-19 mortality prediction across multiple waves and regions using machine learning.

De Smet K, De Smet D, De Jaeger P, Dewitte J, Martens GA, Buls N, De Mey J

PubMed · Aug 6, 2025
This study aimed to develop a simple, interpretable model for predicting COVID-19 mortality at admission from routinely available data, addressing the limitations of more complex models, and to provide a statistically robust framework for controlled clinical use that manages model uncertainty for responsible healthcare application. Data from Belgium's first COVID-19 wave (UZ Brussel, n = 252) were used for model development. External validation utilized data from unvaccinated patients during the late second and early third waves (AZ Delta, n = 175). After data preprocessing and feature selection, various machine learning methods were trained and compared for diagnostic performance. The final model, the M3-score, incorporated three features: age, white blood cell (WBC) count, and AI-derived total lung involvement (TOTAL_AI) quantified from CT scans using Icolung software. The M3-score demonstrated strong classification performance in the training cohort (AUC 0.903) and clinically useful performance in the external validation dataset (AUC 0.826), indicating potential generalizability. To enhance clinical utility and interpretability, predicted probabilities were categorized into actionable likelihood ratio (LR) intervals based on the training cohort: highly unlikely (LR 0.0), unlikely (LR 0.13), gray zone (LR 0.85), more likely (LR 2.14), and likely (LR 8.19). External validation suggested temporal and geographical robustness, though some variability in AUC and LR performance was observed, as anticipated in real-world settings. The parsimonious M3-score, integrating AI-based CT quantification with clinical and laboratory data, offers an interpretable tool for predicting in-hospital COVID-19 mortality, with robust training performance. The performance variations observed in external validation underscore the need for careful interpretation and further extensive validation across international cohorts to confirm wider applicability and robustness before widespread clinical adoption.
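For intuition, here is a hedged sketch of a three-feature admission model in the spirit of the M3-score (age, WBC count, AI-derived lung involvement), and of how a likelihood ratio for a probability band can be computed. The data, coefficients, and band edges are synthetic; the real score and the Icolung TOTAL_AI output are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 250
X = np.column_stack([
    rng.normal(70, 12, n),   # age (years)
    rng.normal(8, 3, n),     # WBC count (10^9/L)
    rng.uniform(0, 60, n),   # TOTAL_AI: % lung involvement on CT (placeholder)
])
y = rng.binomial(1, 0.25, n)  # in-hospital mortality (synthetic)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]

# Likelihood ratio of a probability band: P(band | died) / P(band | survived).
band = (prob >= 0.4) & (prob < 0.6)
lr = band[y == 1].mean() / max(band[y == 0].mean(), 1e-9)
print(f"LR for the 0.4-0.6 band: {lr:.2f}")
```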

Clinical information prompt-driven retinal fundus image for brain health evaluation.

Tong N, Hui Y, Gou SP, Chen LX, Wang XH, Chen SH, Li J, Li XS, Wu YT, Wu SL, Wang ZC, Sun J, Lv H

PubMed · Aug 6, 2025
Brain volume measurement serves as a critical approach for assessing brain health status. Given the close biological connection between the eyes and brain, this study aims to investigate the feasibility of estimating brain volume from retinal fundus imaging integrated with clinical metadata, offering a cost-effective approach to assessing brain health. Based on clinical information, retinal fundus images, and neuroimaging data from 755 subjects in a multicenter, population-based cohort study (the KaiLuan Study), we proposed a cross-modal correlation representation (CMCR) network to elucidate the intricate co-degenerative relationships between the eyes and brain. Specifically, individual clinical information, followed up for as long as 12 years, was encoded as a prompt to enhance the accuracy of brain volume estimation. Independent internal and external validation were performed to assess the robustness of the proposed model. Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics were employed to quantitatively evaluate the quality of synthetic brain images derived from retinal imaging data. The proposed framework yielded average RMSE, PSNR, and SSIM values of 98.23, 35.78 dB, and 0.64, respectively, significantly outperforming five other methods: multi-channel Variational Autoencoder (mcVAE), Pixel-to-Pixel (Pixel2pixel), transformer-based U-Net (TransUNet), multi-scale transformer network (MT-Net), and residual vision transformer (ResViT). Two-dimensional (2D) and three-dimensional (3D) visualizations showed that the shape and texture of the synthetic brain images generated by the proposed method most closely resembled those of actual brain images. Thus, the CMCR framework accurately captured the latent structural correlations between the fundus and the brain. The average difference between predicted and actual brain volumes was 61.36 cm³, with a relative error of 4.54%. When all of the clinical information (including age and sex, daily habits, cardiovascular factors, metabolic factors, and inflammatory factors) was encoded, the difference decreased to 53.89 cm³, with a relative error of 3.98%. Based on the brain MR images synthesized from retinal fundus images, brain tissue volumes could be estimated with high accuracy. This study provides an innovative, accurate, and cost-effective approach to characterizing brain health status through readily accessible retinal fundus images. Trial registration: NCT05453877 (https://clinicaltrials.gov/).
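The image-quality metrics used above (RMSE, PSNR, SSIM) can be computed as in the minimal sketch below, which assumes scikit-image; the arrays are placeholders for real and synthesized MR slices.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real = rng.random((128, 128)).astype(np.float32)  # placeholder "actual" slice
synthetic = np.clip(real + rng.normal(0, 0.05, real.shape), 0, 1).astype(np.float32)

rmse = float(np.sqrt(np.mean((real - synthetic) ** 2)))
psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
ssim = structural_similarity(real, synthetic, data_range=1.0)
print(f"RMSE={rmse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```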

MCA-GAN: A lightweight Multi-scale Context-Aware Generative Adversarial Network for MRI reconstruction.

Hou B, Du H

PubMed · Aug 6, 2025
Magnetic Resonance Imaging (MRI) is widely utilized in medical imaging due to its high resolution and non-invasive nature. However, the prolonged acquisition time significantly limits its clinical applicability. Although traditional compressed sensing (CS) techniques can accelerate MRI acquisition, they often lead to degraded reconstruction quality under high undersampling rates. Deep learning-based methods, including CNN- and GAN-based approaches, have improved reconstruction performance, yet are limited by their local receptive fields, making it challenging to effectively capture long-range dependencies. Moreover, these models typically exhibit high computational complexity, which hinders their efficient deployment in practical scenarios. To address these challenges, we propose a lightweight Multi-scale Context-Aware Generative Adversarial Network (MCA-GAN), which enhances MRI reconstruction through dual-domain generators that collaboratively optimize both k-space and image-domain representations. MCA-GAN integrates several lightweight modules, including Depthwise Separable Local Attention (DWLA) for efficient local feature extraction, Adaptive Group Rearrangement Block (AGRB) for dynamic inter-group feature optimization, Multi-Scale Spatial Context Modulation Bridge (MSCMB) for multi-scale feature fusion in skip connections, and Channel-Spatial Multi-Scale Self-Attention (CSMS) for improved global context modeling. Extensive experiments conducted on the IXI, MICCAI 2013, and MRNet knee datasets demonstrate that MCA-GAN consistently outperforms existing methods in terms of PSNR and SSIM. Compared to SepGAN, the latest lightweight model, MCA-GAN achieves a 27.3% reduction in parameter size and a 19.6% reduction in computational complexity, while attaining the shortest reconstruction time among all compared methods. Furthermore, MCA-GAN exhibits robust performance across various undersampling masks and acceleration rates. Cross-dataset generalization experiments further confirm its ability to maintain competitive reconstruction quality, underscoring its strong generalization potential. Overall, MCA-GAN improves MRI reconstruction quality while significantly reducing computational cost through a lightweight architecture and multi-scale feature fusion, offering an efficient and accurate solution for accelerated MRI.
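A sketch of the depthwise-separable convolution that lightweight modules like the paper's DWLA build on: a depthwise convolution mixes spatial context per channel, and a 1x1 pointwise convolution mixes channels, at a fraction of a standard convolution's cost. The actual DWLA/AGRB/MSCMB/CSMS designs are not reproduced here; this PyTorch block only illustrates the principle.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution depthwise (one filter per channel).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))

block = DepthwiseSeparableConv(32, 64)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 64, 64, 64])
```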

Deep learning-based radiomics does not improve residual cancer burden prediction post-chemotherapy in LIMA breast MRI trial.

Janse MHA, Janssen LM, Wolters-van der Ben EJM, Moman MR, Viergever MA, van Diest PJ, Gilhuijs KGA

PubMed · Aug 6, 2025
This study aimed to evaluate the potential added value of deep radiomics for assessing residual cancer burden (RCB) in locally advanced breast cancer after neoadjuvant chemotherapy (NAC) but before surgery, compared to standard predictors: tumor volume and subtype. This retrospective study used a 105-patient single-institution training set and a 41-patient external test set from three institutions in the LIMA trial. DCE-MRI was performed before and after NAC, and RCB was determined post-surgery. Three networks (nnU-Net, Attention U-Net, and a vector-quantized encoder-decoder) were trained for tumor segmentation. For each network, deep features were extracted from the bottleneck layer and used to train random forest regression models to predict the RCB score. These models were compared to (1) a model trained on tumor volume alone and (2) a model combining tumor volume and subtype. The potential complementary performance of combining deep radiomics with a clinical-radiological model was also assessed. From the predicted RCB score, three metrics were calculated: area under the curve (AUC) for RCB-0/RCB-I versus RCB-II/III, AUC for pathological complete response (pCR) versus non-pCR, and Spearman's correlation. Deep radiomics models had AUCs between 0.68 and 0.74 for pCR and between 0.68 and 0.79 for RCB, while the volume-only model had AUCs of 0.74 and 0.70 for pCR and RCB, respectively. Spearman's correlation varied from 0.45-0.51 (deep radiomics) to 0.53 (combined model). No statistically significant difference between models was observed. Segmentation-network-derived deep radiomics contain information similar to tumor volume and subtype for inferring pCR and RCB after NAC, but do not complement standard clinical predictors in the LIMA trial.
Question: It is unknown whether, and which, deep radiomics approach is most suitable for extracting relevant features to assess neoadjuvant chemotherapy response on breast MRI.
Findings: Radiomic features extracted from deep learning networks yield results similar to tumor volume and subtype in predicting neoadjuvant chemotherapy response in the LIMA study, but they provide no complementary information.
Clinical relevance: For predicting response to neoadjuvant chemotherapy in breast cancer patients, tumor volume on MRI and subtype remain important predictors of treatment outcome; deep radiomics might be an alternative when determining tumor volume and/or subtype is not feasible.
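A hedged sketch of the evaluation pipeline described above: regress the RCB score from pre-extracted bottleneck features with a random forest, then compute an AUC on a binarized endpoint and Spearman's correlation. The features, labels, and pCR threshold are synthetic stand-ins, not the trial's data.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
feats = rng.standard_normal((105, 256))              # bottleneck features per patient
rcb = np.abs(feats[:, 0] + rng.normal(0, 1, 105))    # toy continuous RCB score
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(feats, rcb)

test_feats = rng.standard_normal((41, 256))
test_rcb = np.abs(test_feats[:, 0] + rng.normal(0, 1, 41))
pred = model.predict(test_feats)

pcr = (test_rcb < 0.5).astype(int)                   # toy pCR definition
print("AUC (pCR):", roc_auc_score(pcr, -pred))       # lower predicted RCB -> pCR
rho, _ = spearmanr(pred, test_rcb)
print("Spearman rho:", rho)
```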

The development of a multimodal prediction model based on CT and MRI for the prognosis of pancreatic cancer.

Dou Z, Lin J, Lu C, Ma X, Zhang R, Zhu J, Qin S, Xu C, Li J

PubMed · Aug 6, 2025
To develop and validate a hybrid radiomics model to predict overall survival in pancreatic cancer patients and to identify risk factors that affect patient prognosis. We conducted a retrospective analysis of 272 pancreatic cancer patients diagnosed at the First Affiliated Hospital of Soochow University from January 2013 to December 2023, divided into a training set and a test set at a ratio of 7:3. Pre-treatment contrast-enhanced computed tomography (CT) images, magnetic resonance imaging (MRI) images, and clinical features were collected. Dimensionality reduction was performed on the radiomics features using principal component analysis (PCA), and important features with non-zero coefficients were selected using the least absolute shrinkage and selection operator (LASSO) with 10-fold cross-validation. In the training set, we built prediction models using both random survival forests (RSF) and traditional Cox regression analysis. These models included a radiomics model based on contrast-enhanced CT, a radiomics model based on MRI, a clinical model, three bimodal models combining two feature types, and a multimodal model combining radiomics features with clinical features. Model performance in the test set was evaluated on two dimensions: discrimination and calibration. In addition, risk stratification was performed in the test set based on predicted risk scores to evaluate the model's prognostic utility. The RSF-based hybrid model performed best, with a C-index of 0.807 and a Brier score of 0.101, outperforming the Cox hybrid model (C-index 0.726, Brier score 0.145) and all unimodal and bimodal models. The SurvSHAP(t) plot highlighted CA125 as the most important variable. In the test set, patients were stratified into high- and low-risk groups based on the predicted risk scores, and Kaplan-Meier analysis demonstrated a significant survival difference between the two groups (p < 0.0001). A multimodal RSF model combining clinical tabular data with contrast-enhanced CT and MRI radiomics showed strong performance in predicting prognosis in pancreatic cancer patients.
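As a minimal sketch of the random survival forest component, the snippet below uses scikit-survival (assumed available): a structured array encodes (event, time), an RSF is fit, and discrimination is summarized by the concordance index. Features, follow-up times, and event rates are synthetic, not the cohort's.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
X = rng.standard_normal((190, 20))              # ~70% of 272 patients, 20 features (toy)
time = rng.exponential(24.0, 190)               # follow-up in months (synthetic)
event = rng.binomial(1, 0.7, 190).astype(bool)  # True if death observed
y = np.array(list(zip(event, time)), dtype=[("event", bool), ("time", float)])

rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)

risk = rsf.predict(X)  # higher value = higher predicted risk
cindex = concordance_index_censored(y["event"], y["time"], risk)[0]
print(f"C-index: {cindex:.3f}")
```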

On the effectiveness of multimodal privileged knowledge distillation in two vision transformer based diagnostic applications

Simon Baur, Alexandra Benova, Emilio Dolgener Cantú, Jackie Ma

arXiv preprint · Aug 6, 2025
Deploying deep learning models in clinical practice often requires leveraging multiple data modalities, such as images, text, and structured data, to achieve robust and trustworthy decisions. However, not all modalities are always available at inference time. In this work, we propose multimodal privileged knowledge distillation (MMPKD), a training strategy that utilizes additional modalities available solely during training to guide a unimodal vision model. Specifically, we use a text-based teacher model for chest radiographs (MIMIC-CXR) and a tabular metadata-based teacher model for mammography (CBIS-DDSM) to distill knowledge into a vision transformer student model. We show that MMPKD improves the zero-shot ability of the resulting attention maps to localize regions of interest in input images, although, contrary to what prior research has suggested, this effect does not generalize across domains.
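A sketch of the privileged-distillation loss at the heart of this setup: a teacher that saw an extra modality during training produces soft targets for the vision student, which at inference needs only the image. The temperature, weighting, and tensor shapes below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets from the privileged teacher, softened by temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # ordinary supervised term
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 2, requires_grad=True)  # vision student outputs
teacher_logits = torch.randn(8, 2)                      # teacher saw image + text/metadata
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```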

UNISELF: A Unified Network with Instance Normalization and Self-Ensembled Lesion Fusion for Multiple Sclerosis Lesion Segmentation

Jinwei Zhang, Lianrui Zuo, Blake E. Dewey, Samuel W. Remedios, Yihao Liu, Savannah P. Hays, Dzung L. Pham, Ellen M. Mowry, Scott D. Newsome, Peter A. Calabresi, Aaron Carass, Jerry L. Prince

arXiv preprint · Aug 6, 2025
Automated segmentation of multiple sclerosis (MS) lesions using multicontrast magnetic resonance (MR) images improves efficiency and reproducibility compared to manual delineation, with deep learning (DL) methods achieving state-of-the-art performance. However, when trained on a single source with limited data, these DL-based methods have yet to simultaneously optimize in-domain accuracy and out-of-domain generalization, or their performance has been unsatisfactory. To fill this gap, we propose UNISELF, a method that achieves high accuracy within a single training domain while demonstrating strong generalizability across multiple out-of-domain test datasets. UNISELF employs a novel test-time self-ensembled lesion fusion to improve segmentation accuracy, and leverages test-time instance normalization (TTIN) of latent features to address domain shifts and missing input contrasts. Trained on the ISBI 2015 longitudinal MS segmentation challenge training dataset, UNISELF ranks among the best-performing methods on the challenge test dataset. Additionally, UNISELF outperforms all benchmark methods trained on the same ISBI training data across diverse out-of-domain test datasets, including the public MICCAI 2016 and UMCL datasets as well as a private multisite dataset. These test datasets exhibit domain shifts and/or missing contrasts caused by variations in acquisition protocols, scanner types, and imaging artifacts arising from imperfect acquisition. Our code is available at https://github.com/uponacceptance.
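A toy sketch of test-time instance normalization: the latent features of each test volume are re-standardized with that volume's own per-channel statistics, re-centering domain-shifted inputs without retraining. Where UNISELF actually applies this inside the network is not reproduced here.

```python
import torch

def instance_normalize(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # feat: (batch, channels, *spatial). Statistics come from the test
    # instance itself, computed per sample and per channel.
    dims = tuple(range(2, feat.ndim))
    mean = feat.mean(dim=dims, keepdim=True)
    std = feat.std(dim=dims, keepdim=True)
    return (feat - mean) / (std + eps)

latent = 3.0 * torch.randn(1, 64, 32, 32, 32) + 1.5  # shifted-domain features
normed = instance_normalize(latent)
print(normed.mean().item(), normed.std().item())     # ~0 and ~1
```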

Towards Globally Predictable k-Space Interpolation: A White-box Transformer Approach

Chen Luo, Qiyu Jin, Taofeng Xie, Xuemei Wang, Huayu Wang, Congcong Liu, Liming Tang, Guoqing Chen, Zhuo-Xu Cui, Dong Liang

arXiv preprint · Aug 6, 2025
Interpolating missing data in k-space is essential for accelerating imaging. However, existing methods, including convolutional neural network-based deep learning, primarily exploit local predictability while overlooking the inherent global dependencies in k-space. Recently, Transformers have demonstrated remarkable success in natural language processing and image analysis due to their ability to capture long-range dependencies. This inspires the use of Transformers for k-space interpolation to better exploit its global structure. However, their lack of interpretability raises concerns regarding the reliability of interpolated data. To address this limitation, we propose GPI-WT, a white-box Transformer framework based on Globally Predictable Interpolation (GPI) for k-space. Specifically, we formulate GPI from the perspective of annihilation as a novel k-space structured low-rank (SLR) model. The global annihilation filters in the SLR model are treated as learnable parameters, and the subgradients of the SLR model naturally induce a learnable attention mechanism. By unfolding the subgradient-based optimization algorithm of SLR into a cascaded network, we construct the first white-box Transformer specifically designed for accelerated MRI. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in k-space interpolation accuracy while providing superior interpretability.
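For background on the annihilation view used here, the sketch below shows the classical structured low-rank construction: for a signal that is a sum of a few exponentials, a Hankel matrix of its k-space samples is rank-deficient, and a null-space vector acts as an annihilating filter. This is textbook SLR intuition, not the paper's learned white-box Transformer.

```python
import numpy as np

n, r = 64, 3
freqs = np.array([0.11, 0.27, 0.40])
# 1-D "k-space" of a signal that is a sum of r complex exponentials.
kspace = np.exp(2j * np.pi * np.outer(np.arange(n), freqs)).sum(axis=1)

# Hankel matrix of k-space samples; filter length one above the rank.
f = r + 1
H = np.lib.stride_tricks.sliding_window_view(kspace, f)

# Rank equals the number of exponentials, so the smallest singular value
# vanishes and its right-singular vector annihilates every row.
U, s, Vh = np.linalg.svd(H)
print("singular values:", np.round(s, 3))          # last one ~0
h = Vh[-1].conj()
print("max |H @ h|:", float(np.abs(H @ h).max()))  # ~0: annihilation
```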