Deep learning model for predicting lymph node metastasis around rectal cancer based on rectal tumor core area and mesangial imaging features.

Guo L, Fu K, Wang W, Zhou L, Chen L, Jiang M

pubmed logopapersSep 1 2025
Assessing lymph node metastasis (LNM) involvement in patients with rectal cancer (RC) is fundamental to disease management. In this study, we used artificial intelligence (AI) to develop a segmentation model that automatically delineates the tumor core area and mesorectal tissue from magnetic resonance T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) images collected from 122 RC patients, with the aim of improving the accuracy of LNM prediction; radiomics-based machine learning modeling was then performed on the segmented regions of interest (ROIs). The automatic segmentation model was developed using nn-UNet. This pipeline integrates deep learning (DL), specifically a 3D U-Net, for semantic segmentation; image processing steps including resampling, normalization, connected component analysis, and image registration; and radiomics feature extraction coupled with machine learning. The results showed that the DL segmentation method could effectively segment the tumor and mesorectal areas from the MR sequences (median Dice coefficient: 0.90 ± 0.08 for the tumor and 0.85 ± 0.36 for the mesorectum), and that the radiological characteristics of rectal and mesorectal tissues on T2WI and ADC images could help inform RC treatment decisions. The nn-UNet model showed promising preliminary results, achieving the highest area under the curve (AUC) values across the evaluated scenarios. For the combined evaluation of tumor lesions and mesorectum involvement, the model reached an AUC of 0.743, reflecting good discriminatory ability for the combined outcome. For tumor lesions alone, it achieved an AUC of 0.731, and for mesorectum involvement alone it showed moderate predictive utility with an AUC of 0.753. Overall, the nn-UNet model performed consistently across all evaluated scenarios: combined tumor lesions and mesorectum involvement, tumor lesions alone, and mesorectum involvement alone. The online version contains supplementary material available at 10.1186/s12880-025-01878-9.
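For readers unfamiliar with this kind of pipeline, the sketch below illustrates the radiomics half of such a workflow: extracting features from a segmented ROI with PyRadiomics and fitting a simple classifier for LNM status. It is a minimal illustration under assumed inputs, not the authors' code; the file names, labels, and choice of logistic regression are placeholders.

    # Illustrative sketch (not the published pipeline): radiomics feature
    # extraction from a segmented ROI, followed by a simple LNM classifier.
    # File names and labels are hypothetical placeholders.
    import numpy as np
    from radiomics import featureextractor            # pip install pyradiomics
    from sklearn.linear_model import LogisticRegression

    extractor = featureextractor.RadiomicsFeatureExtractor()   # default feature classes

    def roi_features(image_path: str, mask_path: str) -> np.ndarray:
        """Radiomics features for one patient (e.g. a T2WI or ADC image + ROI mask)."""
        result = extractor.execute(image_path, mask_path)
        return np.array([v for k, v in result.items()
                         if not k.startswith("diagnostics_")], dtype=float)

    # Hypothetical cohort: (image, mask, LNM label) per patient.
    cohort = [("pt001_t2wi.nii.gz", "pt001_tumor_roi.nii.gz", 1),
              ("pt002_t2wi.nii.gz", "pt002_tumor_roi.nii.gz", 0)]
    X = np.vstack([roi_features(img, msk) for img, msk, _ in cohort])
    y = np.array([label for _, _, label in cohort])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(X.shape, clf.predict_proba(X)[:, 1])

In such a workflow the extraction would typically be run separately on the tumor and mesorectal masks and on both the T2WI and ADC images, with the resulting feature blocks concatenated before model fitting.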

MRI detection and grading of knee osteoarthritis - a pilot study using an AI technique with a novel imaging-based scoring system.

Roy C, Roshan M, Goyal N, Rana P, Ghonge NP, Jena A, Vaishya R, Ghosh S

pubmed logopapersSep 1 2025
Precise and rapid identification of knee osteoarthritis (OA) is essential for efficient management and therapy planning. Conventional diagnostic techniques frequently depend on subjective interpretation, which has shortcomings, particularly in the early phases of the disease. In this study, magnetic resonance imaging (MRI) was used to create knee datasets for a novel approach to evaluating knee OA. This methodology applies artificial intelligence (AI) algorithms to identify and evaluate key indicators of knee OA, including osteophytes, eburnation, bone marrow lesions (BMLs), and cartilage thickness. We trained and evaluated multiple deep learning models, including ResNet50, DenseNet121, VGG16, and ResNet101, using annotated MRI data. Through thorough statistical analysis and validation, we demonstrated the efficacy of our models in accurately diagnosing and grading knee OA. This research presents a new grading method, verified by experienced radiologists, that uses eburnation as a significant indicator of knee OA severity. The study provides the basis for an AI-powered automated system for diagnosing knee OA that can simplify the diagnostic process, reduce human error, and improve the effectiveness of clinical management. By integrating AI and machine learning (ML) technologies, our goal is to improve patient outcomes, optimize the use of healthcare resources, and enable personalized knee OA therapy.
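As a point of reference for how such CNN backbones are typically adapted, the sketch below fine-tunes an ImageNet-pretrained ResNet50 head for multi-grade classification in PyTorch. It is not the study's code: the number of grades, the 2D slice input, and the training-loop details are assumptions for illustration.

    # Illustrative sketch (not the study's code): adapting a pretrained ResNet50
    # for knee-OA grading from 2D MRI slices. Grade count, input size, and the
    # data pipeline are assumed for demonstration.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_GRADES = 4                      # hypothetical number of severity grades

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)   # replace the classifier head

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_step(images: torch.Tensor, grades: torch.Tensor) -> float:
        """One optimization step on a batch of (B, 3, 224, 224) slices."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), grades)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Smoke test with random tensors standing in for annotated MRI slices.
    print(train_step(torch.randn(2, 3, 224, 224), torch.tensor([0, 3])))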

Multimodal dynamic hierarchical clustering model for post-stroke cognitive impairment prediction.

Bai C, Li T, Zheng Y, Yuan G, Zheng J, Zhao H

pubmed logopapersSep 1 2025
Post-stroke cognitive impairment (PSCI) is a common and debilitating consequence of stroke that often arises from complex interactions between diverse brain alterations. The accurate early prediction of PSCI is critical for guiding personalized interventions. However, existing methods often struggle to capture complex structural disruptions and integrate multimodal information effectively. This study proposes the multimodal dynamic hierarchical clustering network (MDHCNet), a graph neural network designed for accurate and interpretable PSCI prediction. MDHCNet constructs brain graphs from diffusion-weighted imaging, magnetic resonance angiography, and T1- and T2-weighted images and integrates them with clinical features using a hierarchical cross-modal fusion module. Experimental results using a real-world stroke cohort demonstrated that MDHCNet consistently outperformed deep learning baselines. Ablation studies validated the benefits of multimodal fusion, while saliency-based interpretation highlighted discriminative brain regions associated with cognitive decline. These findings suggest that MDHCNet is an effective and explainable tool for early PSCI prediction, with the potential to support individualized clinical decision-making in stroke rehabilitation.
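MDHCNet itself is not specified in the abstract, but the general pattern it describes, a graph neural network over brain graphs whose pooled embedding is fused with clinical features, can be sketched with PyTorch Geometric as below. Layer sizes, feature dimensions, and the fusion head are assumptions, not the published architecture.

    # Simplified sketch of the general idea (not MDHCNet): a GCN over a brain
    # graph, globally pooled and concatenated with clinical features to produce
    # a binary PSCI logit. All dimensions are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torch_geometric.nn import GCNConv, global_mean_pool

    class GraphClinicalFusion(nn.Module):
        def __init__(self, node_feat_dim=16, clin_dim=8, hidden=64):
            super().__init__()
            self.conv1 = GCNConv(node_feat_dim, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.head = nn.Sequential(nn.Linear(hidden + clin_dim, 32),
                                      nn.ReLU(), nn.Linear(32, 1))

        def forward(self, x, edge_index, batch, clinical):
            h = torch.relu(self.conv1(x, edge_index))
            h = torch.relu(self.conv2(h, edge_index))
            g = global_mean_pool(h, batch)                      # one embedding per subject
            return self.head(torch.cat([g, clinical], dim=1))   # PSCI logit

    # Smoke test: one toy graph with 3 nodes plus random clinical covariates.
    x = torch.randn(3, 16)
    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
    batch = torch.zeros(3, dtype=torch.long)
    print(GraphClinicalFusion()(x, edge_index, batch, torch.randn(1, 8)))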

Deep Learning-Based Multimodal Prediction of NAC Response in LARC by Integrating MRI and Proteomics.

Li Y, Ding J, Du F, Wang Z, Liu Z, Liu Y, Zhou Y, Zhang Q

pubmed logopapersSep 1 2025
Locally advanced rectal cancer (LARC) exhibits significant heterogeneity in response to neoadjuvant chemotherapy (NAC), with poor responders facing delayed treatment and unnecessary toxicity. Although MRI provides spatial pathophysiological information and proteomics reveals molecular mechanisms, current single-modal approaches cannot integrate these complementary perspectives, resulting in limited predictive accuracy and biological insight. This retrospective study developed a multimodal deep learning framework using a cohort of 274 LARC patients treated with NAC (2012-2021). Graph neural networks analyzed proteomic profiles from FFPE tissues, incorporating KEGG/GO pathways and PPI networks, while a spatially enhanced 3D ResNet152 processed T2WI. A LightGBM classifier integrated both modalities with clinical features, using zero-imputation for missing data. Model performance was assessed through AUC-ROC, decision curve analysis, and interpretability techniques (SHAP and Grad-CAM). The integrated model achieved superior NAC response prediction (test AUC 0.828, sensitivity 0.875, specificity 0.750), significantly outperforming single-modal approaches (MRI ΔAUC +0.109; proteomics ΔAUC +0.125). SHAP analysis revealed that MRI-derived features contributed 57.7% of the predictive power, primarily through quantification of peritumoral stromal heterogeneity. Proteomics identified 10 key chemoresistance proteins: CYBA, GUSB, ATP6AP2, DYNC1I2, DAD1, ACOX1, COPG1, FBP1, DHRS7, and SSR3. Decision curve analysis confirmed clinical utility across threshold probabilities (0-0.75). Our study established a novel MRI-proteomics integration framework for NAC response prediction, with MRI defining spatial resistance patterns and proteomics deciphering molecular drivers, enabling early organ preservation strategies. The zero-imputation design ensures deployability in diverse clinical settings.
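The late-fusion step described, combining modality-level feature vectors with zero-imputation in a LightGBM classifier scored by ROC AUC, can be sketched as follows on synthetic data. Feature dimensions and the simulated missingness are assumptions; only the cohort size of 274 comes from the abstract.

    # Illustrative sketch of the late-fusion step (not the published model):
    # concatenate MRI-derived, proteomic, and clinical feature vectors,
    # zero-impute missing values, and train a LightGBM classifier.
    # All data below are synthetic placeholders.
    import numpy as np
    from lightgbm import LGBMClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 274                                        # cohort size reported in the study
    mri = rng.normal(size=(n, 32))                 # e.g. 3D-CNN image features
    prot = rng.normal(size=(n, 64))                # e.g. GNN proteomic embedding
    clin = rng.normal(size=(n, 6))                 # clinical covariates
    y = rng.integers(0, 2, size=n)                 # NAC response label (synthetic)

    X = np.concatenate([mri, prot, clin], axis=1)
    X[rng.random(X.shape) < 0.1] = np.nan          # simulate missing entries
    X = np.nan_to_num(X, nan=0.0)                  # zero-imputation, as described

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0, stratify=y)
    clf = LGBMClassifier(n_estimators=200, learning_rate=0.05).fit(X_tr, y_tr)
    print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))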

Multi-Modal Machine Learning Framework for Predicting Early Recurrence of Brain Tumors Using MRI and Clinical Biomarkers

Cheng Cheng, Zeping Chen, Rui Xie, Peiyao Zheng, Xavier Wang

arxiv logopreprintSep 1 2025
Accurately predicting early recurrence in brain tumor patients following surgical resection remains a clinical challenge. This study proposes a multi-modal machine learning framework that integrates structural MRI features with clinical biomarkers to improve postoperative recurrence prediction. We employ four machine learning algorithms -- Gradient Boosting Machine (GBM), Random Survival Forest (RSF), CoxBoost, and XGBoost -- and validate model performance using concordance index (C-index), time-dependent AUC, calibration curves, and decision curve analysis. Our model demonstrates promising performance, offering a potential tool for risk stratification and personalized follow-up planning.
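To make the survival-modeling setup concrete, the sketch below fits a Random Survival Forest and computes the concordance index with scikit-survival on synthetic data standing in for the MRI-derived features and clinical biomarkers; it is not the authors' pipeline, and all values are placeholders.

    # Minimal sketch (not the authors' code): Random Survival Forest plus
    # concordance index on synthetic recurrence data.
    import numpy as np
    from sksurv.ensemble import RandomSurvivalForest
    from sksurv.metrics import concordance_index_censored
    from sksurv.util import Surv
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))                     # imaging + clinical features
    time = rng.exponential(scale=24, size=200)         # months to recurrence
    event = rng.random(200) < 0.6                      # True = recurrence observed
    y = Surv.from_arrays(event=event, time=time)       # structured survival array

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10,
                               random_state=0).fit(X_tr, y_tr)

    risk = rsf.predict(X_te)                           # higher score = higher risk
    cindex = concordance_index_censored(y_te["event"], y_te["time"], risk)[0]
    print("C-index:", round(cindex, 3))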

Predicting Postoperative Prognosis in Pediatric Malignant Tumor With MRI Radiomics and Deep Learning Models: A Retrospective Study.

Chen Y, Hu X, Fan T, Zhou Y, Yu C, Yu J, Zhou X, Wang B

pubmed logopapersSep 1 2025
The aim of this study was to develop a multimodal machine learning model that integrates magnetic resonance imaging (MRI) radiomics, deep learning features, and clinical indexes to predict 3-year postoperative disease-free survival (DFS) in pediatric patients with malignant tumors. A cohort of 260 pediatric patients (aged ≤ 14 y) with brain tumors who underwent R0 resection was retrospectively included. Preoperative contrast-enhanced T1-weighted MRI images and clinical data were collected. Image preprocessing involved N4 bias field correction and Z-score standardization, with tumor areas manually delineated using 3D Slicer. A total of 1130 radiomics features (PyRadiomics) and 511 deep learning features (3D ResNet-18) were extracted. After dimensionality reduction through Lasso regression, six machine learning models (e.g., SVM, RF, LightGBM) were developed in combination with selected clinical indexes such as tumor diameter, GCS score, and nutritional status. Bayesian optimization was applied to tune model parameters. The evaluation metrics included AUC, sensitivity, and specificity. The fusion model (LightGBM) achieved an AUC of 0.859 and an accuracy of 85.2% in the validation set; when combined with the clinical indexes, the final model's AUC improved to 0.909. Radiomics features such as texture heterogeneity, together with clinical indexes including tumor diameter ≥ 5 cm and preoperative low albumin, contributed significantly to prognosis prediction. The multimodal model effectively predicted 3-year postoperative DFS in pediatric brain tumors, offering a scientific foundation for personalized treatment.
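The Lasso-based dimensionality-reduction step described above can be sketched as follows: the combined radiomics (1130) and deep-learning (511) feature blocks are reduced to a sparse subset, which then feeds a downstream classifier. The synthetic labels, the RBF-SVM choice, and the fallback when no feature survives are illustrative assumptions.

    # Illustrative sketch (not the study's code): Lasso feature selection over
    # concatenated radiomics + deep features, then an SVM on the retained subset.
    # Data and labels are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(260, 1130 + 511))        # radiomics + deep-learning features
    y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=260) > 0).astype(int)  # synthetic label

    # Step 1: Lasso regression retains a sparse subset of informative features.
    lasso = LassoCV(cv=5, random_state=0).fit(StandardScaler().fit_transform(X), y)
    keep = np.flatnonzero(lasso.coef_)
    cols = keep if keep.size else np.arange(X.shape[1])   # fallback if nothing survives
    print("features retained by Lasso:", keep.size)

    # Step 2: train a classifier (here an RBF SVM) on the selected features.
    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    print("AUC:", cross_val_score(clf, X[:, cols], y, cv=5, scoring="roc_auc").mean())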

Detection of Microscopic Glioblastoma Infiltration in Peritumoral Edema Using Interactive Deep Learning With DTI Biomarkers: Testing via Stereotactic Biopsy.

Tu J, Shen C, Liu J, Hu B, Chen Z, Yan Y, Li C, Xiong J, Daoud AM, Wang X, Li Y, Zhu F

pubmed logopapersSep 1 2025
Microscopic tumor cell infiltration beyond contrast-enhancing regions influences glioblastoma prognosis but remains undetectable with conventional MRI. This retrospective study developed and evaluated the glioblastoma infiltrating area interactive detection framework (GIAIDF), an interactive deep-learning framework that integrates diffusion tensor imaging (DTI) biomarkers to identify microscopic infiltration within peritumoral edema. The study included 73 training patients (51.13 ± 13.87 years; 47M/26F) and 25 internal validation patients (52.82 ± 10.76 years; 14M/11F) from Center 1, 25 external validation patients (47.29 ± 11.39 years; 16M/9F) from Center 2, and 13 prospective biopsy patients (45.62 ± 9.28 years; 8M/5F) from Center 1. Imaging was performed at 3.0 T and included a three-dimensional contrast-enhanced T1-weighted BRAVO sequence (repetition time = 7.8 ms, echo time = 3.0 ms, inversion time = 450 ms, slice thickness = 1 mm), three-dimensional T2-weighted fluid-attenuated inversion recovery (repetition time = 7000 ms, echo time = 120 ms, inversion time = 2000 ms, slice thickness = 1 mm), and diffusion tensor imaging (repetition time = 8500 ms, echo time = 63 ms, slice thickness = 2 mm). Histopathology of 25 stereotactic biopsy specimens served as the reference standard. Primary metrics included AUC, accuracy, sensitivity, and specificity; GIAIDF heatmaps were co-registered to biopsy trajectories using Ratio-FAcpcic (0.16-0.22) as interactive priors. Statistical analysis used ROC analysis (DeLong's method) for AUC, and recall, precision, and F1 score for prediction validation. GIAIDF achieved recall = 0.800 ± 0.060, precision = 0.915 ± 0.057, and F1 = 0.852 ± 0.044 in internal validation (n = 25), and recall = 0.778 ± 0.053, precision = 0.890 ± 0.051, and F1 = 0.829 ± 0.040 in external validation (n = 25). Among the 13 patients undergoing stereotactic biopsy, 25 peritumoral edema specimens were analyzed: 18 without tumor cell infiltration and 7 with infiltration, yielding AUC = 0.929 (95% CI: 0.804-1.000), sensitivity = 0.714, specificity = 0.944, and accuracy = 0.880. Infiltrated sites showed significantly higher risk scores than non-infiltrated sites (0.549 ± 0.194 vs. 0.205 ± 0.175, p < 0.001). This study provides a potential tool, GIAIDF, for identifying regions of glioblastoma infiltration within peritumoral edema based on preoperative MR images.
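As an aid to reading the biopsy-level results, the sketch below reproduces that style of evaluation (AUC plus sensitivity, specificity, and accuracy at a fixed risk-score threshold) on synthetic scores; only the 18 versus 7 site counts mirror the abstract, and the scores and operating threshold are invented for illustration.

    # Illustrative sketch (not the study's code): ROC AUC and threshold-based
    # sensitivity/specificity/accuracy for per-site infiltration prediction.
    # Risk scores and the 0.4 threshold are synthetic stand-ins.
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(0)
    y_true = np.array([0] * 18 + [1] * 7)                # infiltration at each biopsy site
    risk = np.clip(np.concatenate([rng.normal(0.20, 0.15, 18),   # non-infiltrated sites
                                   rng.normal(0.55, 0.20, 7)]),  # infiltrated sites
                   0, 1)

    print("AUC:", round(roc_auc_score(y_true, risk), 3))

    threshold = 0.4                                      # hypothetical operating point
    y_pred = (risk >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))
    print("accuracy:", (tp + tn) / len(y_true))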

Magnetic Resonance-Based Artificial Intelligence- Supported Osteochondral Allograft Transplantation for Massive Osteochondral Defects of the Knee.

Hangody G, Szoldán P, Egyed Z, Szabó E, Hangody LR, Hangody L

pubmed logopapersSep 1 2025
Transplantation of fresh osteochondral allografts is a possible biological resurfacing option to substitute massive bone loss and provide proper gliding surfaces for extended and deep osteochondral lesions of weight-bearing articular surfaces. Limited chondrocyte survival and technical difficulties may compromise the efficacy of osteochondral transfers. Because experimental data suggest that minimizing the time between graft harvest and implantation may improve the chondrocyte survival rate, a donor-to-recipient time of less than 48 hours was used to repair massive osteochondral defects. For optimal graft congruency, a magnetic resonance-based artificial intelligence algorithm was also developed to provide technical support. Over 3 years of experience, an increased survival rate of transplanted chondrocytes and improved clinical outcomes were observed.

LoRA-PT: Low-rank adapting UNETR for hippocampus segmentation using principal tensor singular values and vectors.

He G, Cheng W, Zhu H, Yu G

pubmed logopapersSep 1 2025
The hippocampus is an important brain structure involved in various psychiatric disorders, and its automatic and accurate segmentation is vital for studying these diseases. Recently, deep learning-based methods have made significant progress in hippocampus segmentation. However, training deep neural network models requires substantial computational resources, time, and a large amount of labeled training data, which is frequently scarce in medical image segmentation. To address these issues, we propose LoRA-PT, a novel parameter-efficient fine-tuning (PEFT) method that transfers the pre-trained UNETR model from the BraTS2021 dataset to the hippocampus segmentation task. Specifically, LoRA-PT groups the parameter matrices of the transformer structure by three distinct sizes, stacking each group into a third-order tensor. These tensors are decomposed using tensor singular value decomposition to generate low-rank tensors consisting of the principal singular values and vectors, with the remaining singular values and vectors forming the residual tensors. During fine-tuning, only the low-rank tensors (i.e., the principal tensor singular values and vectors) are updated, while the residual tensors remain unchanged. We validated the proposed method on three public hippocampus datasets, and the experimental results show that LoRA-PT outperformed state-of-the-art PEFT methods in segmentation accuracy while significantly reducing the number of updated parameters. Our source code is available at https://github.com/WangangCheng/LoRA-PT/tree/LoRA-PT.
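The paper's key idea, updating only the principal singular components of grouped weight tensors while freezing the residual, has a simpler matrix analogue that is easy to sketch; the code below applies a truncated matrix SVD to a single linear layer and is not the authors' tensor-SVD implementation (see their repository for that). The rank and layer size are arbitrary.

    # Simplified matrix analogue of the principal-component fine-tuning idea
    # (the paper operates on third-order tensors via tensor SVD; this sketch
    # uses an ordinary SVD on one weight matrix for illustration).
    import torch
    import torch.nn as nn

    class PrincipalSVDLinear(nn.Module):
        def __init__(self, linear: nn.Linear, rank: int = 8):
            super().__init__()
            U, S, Vh = torch.linalg.svd(linear.weight.data, full_matrices=False)
            # Trainable principal factors: the top-`rank` singular values/vectors.
            self.U = nn.Parameter(U[:, :rank].clone())
            self.S = nn.Parameter(S[:rank].clone())
            self.Vh = nn.Parameter(Vh[:rank, :].clone())
            # Frozen residual built from the remaining singular components.
            self.register_buffer("residual",
                                 U[:, rank:] @ torch.diag(S[rank:]) @ Vh[rank:, :])
            self.bias = (nn.Parameter(linear.bias.data.clone(), requires_grad=False)
                         if linear.bias is not None else None)

        def forward(self, x):
            W = self.U @ torch.diag(self.S) @ self.Vh + self.residual
            return nn.functional.linear(x, W, self.bias)

    # Only the rank-8 principal factors remain trainable after wrapping.
    layer = PrincipalSVDLinear(nn.Linear(256, 256), rank=8)
    print(sum(p.numel() for p in layer.parameters() if p.requires_grad))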

Resting-state fMRI Analysis using Quantum Time-series Transformer

Junghoon Justin Park, Jungwoo Seo, Sangyoon Bae, Samuel Yen-Chi Chen, Huan-Hsin Tseng, Jiook Cha, Shinjae Yoo

arxiv logopreprintAug 31 2025
Resting-state functional magnetic resonance imaging (fMRI) has emerged as a pivotal tool for revealing intrinsic brain network connectivity and identifying neural biomarkers of neuropsychiatric conditions. However, classical self-attention transformer models--despite their formidable representational power--struggle with quadratic complexity, large parameter counts, and substantial data requirements. To address these barriers, we introduce a Quantum Time-series Transformer, a novel quantum-enhanced transformer architecture leveraging Linear Combination of Unitaries and Quantum Singular Value Transformation. Unlike classical transformers, Quantum Time-series Transformer operates with polylogarithmic computational complexity, markedly reducing training overhead and enabling robust performance even with fewer parameters and limited sample sizes. Empirical evaluation on the largest-scale fMRI datasets from the Adolescent Brain Cognitive Development Study and the UK Biobank demonstrates that Quantum Time-series Transformer achieves comparable or superior predictive performance compared to state-of-the-art classical transformer models, with especially pronounced gains in small-sample scenarios. Interpretability analyses using SHapley Additive exPlanations further reveal that Quantum Time-series Transformer reliably identifies clinically meaningful neural biomarkers of attention-deficit/hyperactivity disorder (ADHD). These findings underscore the promise of quantum-enhanced transformers in advancing computational neuroscience by more efficiently modeling complex spatio-temporal dynamics and improving clinical interpretability.
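The quantum components (Linear Combination of Unitaries and Quantum Singular Value Transformation) are beyond a short sketch, but the classical self-attention baseline the paper compares against is easy to make concrete: a standard transformer encoder over parcel-wise resting-state time series, pooled into a single prediction. Parcel count, sequence length, and layer sizes below are assumptions.

    # Sketch of a classical self-attention baseline (not the quantum model):
    # a transformer encoder over rs-fMRI ROI time series with mean pooling.
    import torch
    import torch.nn as nn

    N_PARCELS, N_TIMEPOINTS, D_MODEL = 100, 200, 64    # assumed dimensions

    class ClassicalTimeSeriesTransformer(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Linear(N_PARCELS, D_MODEL)         # per-timepoint embedding
            layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                               dim_feedforward=128, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(D_MODEL, 1)

        def forward(self, ts):                                 # ts: (batch, time, parcels)
            h = self.encoder(self.embed(ts))                   # attention is quadratic in time
            return self.head(h.mean(dim=1))                    # pooled logit

    model = ClassicalTimeSeriesTransformer()
    print(model(torch.randn(4, N_TIMEPOINTS, N_PARCELS)).shape)   # torch.Size([4, 1])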