
Cross-domain subcortical brain structure segmentation algorithm based on low-rank adaptation fine-tuning SAM.

Sui Y, Hu Q, Zhang Y

PubMed | Jul 1, 2025
Accurate and robust segmentation of anatomical structures in brain MRI provides a crucial basis for the subsequent observation, analysis, and treatment planning of various brain diseases. Deep learning foundation models designed and trained on large-scale natural scene image datasets suffer significant performance degradation when applied to subcortical brain structure segmentation in MRI, limiting their direct applicability to clinical diagnosis. This paper proposes a subcortical brain structure segmentation algorithm that uses Low-Rank Adaptation (LoRA) to fine-tune SAM (Segment Anything Model): SAM's image encoder is frozen and LoRA approximates updates to the encoder's weights with low-rank matrices, while SAM's lightweight prompt encoder and mask decoder are also fine-tuned. The fine-tuned model's learnable parameters (5.92 MB) occupy only 6.39% of the original model's parameter size (92.61 MB). For training, a model warm-up (preheating) phase is employed to stabilize the fine-tuning process. During inference, adaptive prompt learning with point or box prompts is introduced to improve the model's accuracy for arbitrary brain MRI segmentation. This interactive prompt learning approach gives clinicians a means of intelligent segmentation for deep brain structures, effectively addressing the challenges of limited data labels and high manual annotation costs in medical image segmentation. Experiments on five MRI datasets (IBSR, MALC, LONI LPBA, Hammers, and CANDI) across various segmentation scenarios, including cross-domain settings with inference samples from diverse MRI datasets as well as supervised fine-tuning settings, demonstrate the proposed algorithm's generalization and effectiveness compared with current mainstream and supervised segmentation algorithms.
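
To make the LoRA mechanism concrete, here is a minimal PyTorch sketch of wrapping a frozen linear projection with a trainable low-rank update; the layer sizes, rank, and attribute names are illustrative assumptions, not the paper's actual SAM implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as identity update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Illustrative usage: replace a projection of a ViT-style image encoder
# (the attribute names and dimensions are hypothetical; SAM's actual modules differ by implementation).
frozen_proj = nn.Linear(768, 768)
lora_proj = LoRALinear(frozen_proj, r=8)
x = torch.randn(2, 196, 768)
print(lora_proj(x).shape)  # torch.Size([2, 196, 768])
```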

Machine learning-based brain magnetic resonance imaging radiomics for identifying rapid eye movement sleep behavior disorder in Parkinson's disease patients.

Lian Y, Xu Y, Hu L, Wei Y, Wang Z

PubMed | Jul 1, 2025
Traditional clinical diagnostic methods for rapid eye movement sleep behavior disorder (RBD) have certain limitations, especially in the early stages. This study aims to develop and validate a magnetic resonance imaging (MRI) radiomics-based machine learning classifier to accurately detect RBD in patients with Parkinson's disease (PD). Data from 183 subjects, including 63 PD patients with RBD, sourced from the PPMI database, were utilized in this study. The data were randomly divided into training (70%) and testing (30%) sets. Quantitative radiomic features of white matter, gray matter, and cerebrospinal fluid were extracted from whole-brain structural MRI images. Feature reduction was performed on the training set to construct radiomics signatures. Additionally, multifactor logistic regression analysis identified clinical predictors associated with PD-RBD, and these clinical features were integrated with the radiomics signatures to develop predictive models using various machine learning algorithms. The best-performing model was selected, and receiver operating characteristic (ROC) curves were used to evaluate its performance in both the training and testing sets. Furthermore, based on the optimal cut-off value of the model, subjects were categorized into low- and high-risk groups, and the actual numbers of RBD patients in the two groups were compared to assess the clinical effectiveness of the model. The radiomics signatures achieved areas under the curve (AUC) of 0.754 and 0.707 in the training and testing sets, respectively. Multifactor logistic regression analysis revealed that postural instability was an independent predictor of PD-RBD. The random forest model, which integrated radiomics signatures with postural instability, demonstrated superior performance in predicting PD-RBD: its AUCs in the training and testing sets were 0.917 and 0.882, with sensitivities of 0.933 and 0.889, and specificities of 0.786 and 0.722, respectively. Based on the optimal cut-off value of 0.3772, significant differences in the actual number of PD-RBD patients were observed between the low-risk and high-risk groups in both the training and testing sets (P < 0.05). MRI-based radiomic signatures have the potential to serve as biomarkers for PD-RBD. The random forest model, which integrates radiomic signatures with postural instability, shows improved performance in identifying PD-RBD. This approach offers valuable insights for prognostic evaluation and preventive treatment strategies.
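
As an illustration of the modeling pipeline described above (radiomic features combined with a clinical predictor, a random forest classifier, ROC evaluation, and a probability cutoff), the following sketch uses synthetic data and hypothetical variable names rather than PPMI data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 183
radiomics = rng.normal(size=(n, 20))             # stand-in for the selected radiomic features
postural_instability = rng.integers(0, 2, n)     # binary clinical predictor
X = np.column_stack([radiomics, postural_instability])
y = rng.integers(0, 2, n)                        # 1 = PD-RBD, 0 = PD without RBD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)
clf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
print("test AUC:", roc_auc_score(y_te, prob))

# Risk stratification at a chosen probability cutoff (0.3772 in the paper)
high_risk = prob >= 0.3772
```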

MCAUnet: a deep learning framework for automated quantification of body composition in liver cirrhosis patients.

Wang J, Xia S, Zhang J, Wang X, Zhao C, Zheng W

PubMed | Jul 1, 2025
Traditional methods for measuring body composition in CT scans rely on labor-intensive manual delineation, which is time-consuming and imprecise. This study proposes a deep learning-driven framework, MCAUnet, for accurate and automated quantification of body composition and comprehensive survival analysis in cirrhotic patients. A total of 11,362 L3-level lumbar CT slices were collected to train and validate the segmentation model. The proposed model incorporates a channel attention mechanism, enabling adaptive fusion of critical channel features. Experimental results demonstrate that our approach achieves an average Dice coefficient of 0.952 for visceral fat segmentation, significantly outperforming existing segmentation models. Based on the quantified body composition, sarcopenic visceral obesity (SVO) was defined, and an association model was developed to analyze the relationship between SVO and survival rates in cirrhotic patients. The study revealed that the 3-year and 5-year survival rates of SVO patients were significantly lower than those of non-SVO patients. Regression analysis further validated the strong correlation between SVO and mortality in cirrhotic patients. In summary, the MCAUnet framework provides a novel, precise, and automated tool for body composition quantification and survival analysis in cirrhotic patients, offering potential support for clinical decision-making and personalized treatment strategies.
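
The abstract does not give architectural details, so the block below is a generic squeeze-and-excitation-style channel attention module of the kind described, written in PyTorch purely as an illustration of adaptive channel reweighting.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate: reweight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # per-channel weights in (0, 1)
        return x * w

feat = torch.randn(1, 64, 128, 128)      # a feature map from a U-Net encoder stage
print(ChannelAttention(64)(feat).shape)  # torch.Size([1, 64, 128, 128])
```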

MRI radiomics model for predicting tumor immune microenvironment types and efficacy of anti-PD-1/PD-L1 therapy in hepatocellular carcinoma.

Zhang R, Peng W, Wang Y, Jiang Y, Wang J, Zhang S, Li Z, Shi Y, Chen F, Feng Z, Xiao W

PubMed | Jul 1, 2025
To improve the prediction of the efficacy of immune checkpoint inhibitors (ICIs) in hepatocellular carcinoma (HCC), this study categorized the tumor immune microenvironment (TIME) into two types: immune-activated (IA), characterized by a high CD8+ score and a high PD-L1 combined positive score (CPS), and non-immune-activated (NIA), encompassing all other conditions. We aimed to develop an MRI-based radiomics model to predict TIME types and validate its predictive capability for the efficacy of ICIs in HCC patients receiving anti-PD-1/PD-L1 therapy. The study included 200 HCC patients who underwent preoperative/pretreatment multiparametric contrast-enhanced MRI (Cohort 1: 168 HCC patients who underwent hepatectomy at two centres; Cohort 2: 42 advanced HCC patients on anti-PD-1/PD-L1 therapy). In Cohort 1, after feature selection, clinical, intratumoral radiomics, peritumoral radiomics, combined radiomics, and clinical-radiomics models were established using machine learning algorithms. In Cohort 2, the clinical-radiomics model's ability to predict the efficacy of ICIs was assessed. In Cohort 1, the AUC values for the intratumoral, peritumoral, and combined radiomics models were 0.825, 0.809, and 0.868, respectively, in the internal validation set, and 0.730, 0.759, and 0.822 in the external validation set; the clinical-radiomics model, incorporating the neutrophil-to-lymphocyte ratio, tumor size, and combined radiomics score, achieved an AUC of 0.887 in the internal validation set, outperforming the clinical model (P = 0.049), and an AUC of 0.837 in the external validation set. In Cohort 2, the clinical-radiomics model stratified patients into low- and high-score groups, demonstrating significant differences in objective response rate (P = 0.003) and progression-free survival (P = 0.031). The clinical-radiomics model is effective in predicting TIME types and the efficacy of ICIs in HCC, potentially aiding treatment decision-making.
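
The Cohort 2 analysis compares progression-free survival between low- and high-score groups; a sketch of that kind of comparison with a log-rank test is shown below, using synthetic survival times rather than study data.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Synthetic progression-free survival times (months) and event indicators for the two
# risk groups defined by the clinical-radiomics score cutoff (placeholder values only).
t_low, e_low = rng.exponential(14.0, 20), rng.integers(0, 2, 20)
t_high, e_high = rng.exponential(6.0, 22), rng.integers(0, 2, 22)

result = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print("log-rank p-value:", result.p_value)
```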

Federated learning-based CT liver tumor detection using a teacher‒student SANet with semisupervised learning.

Lee CS, Lien JJ, Chain K, Huang LC, Hsu ZW

PubMed | Jul 1, 2025
Detecting liver tumors via computed tomography (CT) scans is a critical but labor-intensive task, and extensive expert annotations are needed to train effective machine learning models. This study presents an innovative approach that leverages federated learning in combination with a teacher-student framework, an enhanced slice-aware network (SANet), and semisupervised learning (SSL) techniques to improve CT-based liver tumor detection while significantly reducing its labor and time costs. Federated learning enables collaborative model training across multiple institutions without sharing sensitive patient data, thus ensuring privacy and security. The teacher-student SANet framework takes advantage of both teacher and student models, with the teacher model providing reliable pseudolabels that guide the student model in a semisupervised manner. This method not only improves the accuracy of liver tumor detection but also reduces dependence on extensively annotated datasets. The proposed method was validated through simulation experiments conducted in four scenarios and demonstrated a model accuracy of 83%, an improvement over the original locally trained models. In summary, this study presents a promising method for enhancing CT-based liver tumor detection while reducing the associated labor and time costs by utilizing federated learning, the teacher-student SANet framework, and SSL techniques; compared with previous approaches, the 83% model accuracy represents a significant improvement.
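
Two ingredients carry the approach: federated averaging of site-level model weights and teacher-generated pseudolabels for unlabeled scans. The sketch below illustrates both in PyTorch under simplifying assumptions (a generic segmentation model and an arbitrary confidence threshold); it is not the authors' SANet implementation.

```python
import copy
import torch
import torch.nn as nn

def fedavg(global_model: nn.Module, client_state_dicts, client_sizes):
    """Weight-average client parameters in proportion to local dataset size (FedAvg)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        if avg[key].is_floating_point():  # skip integer buffers such as BatchNorm counters
            avg[key] = sum(sd[key] * (n / total) for sd, n in zip(client_state_dicts, client_sizes))
    global_model.load_state_dict(avg)
    return global_model

@torch.no_grad()
def pseudolabel(teacher: nn.Module, unlabeled_batch: torch.Tensor, threshold: float = 0.9):
    """Teacher outputs per-voxel tumor probabilities; keep only confident voxels for the student loss."""
    probs = torch.sigmoid(teacher(unlabeled_batch))
    labels = (probs >= 0.5).float()
    mask = (probs >= threshold) | (probs <= 1 - threshold)  # confidence mask
    return labels, mask
```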

A novel deep learning system for automated diagnosis and grading of lumbar spinal stenosis based on spine MRI: model development and validation.

Wang T, Wang A, Zhang Y, Liu X, Fan N, Yuan S, Du P, Wu Q, Chen R, Xi Y, Gu Z, Fei Q, Zang L

PubMed | Jul 1, 2025
The study aimed to develop a single-stage deep learning (DL) screening system for automated binary and multiclass grading of lumbar central stenosis (LCS), lateral recess stenosis (LRS), and lumbar foraminal stenosis (LFS). Consecutive inpatients who underwent lumbar MRI at our center were retrospectively reviewed for the internal dataset, and axial and sagittal lumbar MRI scans were collected. Based on a new MRI diagnostic criterion, all MRI studies were labeled by two spine specialists and calibrated by a third spine specialist to serve as the reference standard. Furthermore, two spine clinicians labeled all MRI studies independently so that their interobserver reliability could be compared with that of the DL model. Samples were assigned to training, validation, and test sets in a ratio of 8:1:1. Additional patients from another center were enrolled as the external test dataset. A modified single-stage YOLOv5 network was designed for simultaneous detection of regions of interest (ROIs) and grading of LCS, LRS, and LFS. Quantitative metrics of accuracy and reliability were computed for the model. In total, 420 and 50 patients were enrolled in the internal and external datasets, respectively. High recalls of 97.4%-99.8% were achieved for ROI detection of lumbar spinal stenosis (LSS). The system achieved multigrade area under the curve (AUC) values of 0.93-0.97 in the internal test set and 0.85-0.94 in the external test set for LCS, LRS, and LFS. In binary grading, the DL model achieved high sensitivities of 0.97 for LCS, 0.98 for LRS, and 0.96 for LFS, slightly better than those achieved by spine clinicians in the internal test set. In the external test set, the binary sensitivities were 0.98 for LCS, 0.96 for LRS, and 0.95 for LFS. For reliability assessment, the kappa coefficients between the DL model and the reference standard were 0.92, 0.88, and 0.91 for LCS, LRS, and LFS, respectively, slightly higher than those achieved by the nonexpert spine clinicians. The authors designed a novel DL system that demonstrated promising performance, especially in sensitivity, for automated diagnosis and grading of different types of lumbar spinal stenosis on spine MRI. The reliability of the system was better than that of the spine clinicians, and the system may serve as a triage tool for LSS to reduce misdiagnosis and optimize routine clinical workflows.
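
The reliability figures quoted are kappa coefficients; a brief scikit-learn illustration of computing multiclass kappa and binary sensitivity on placeholder grade labels (not the study's data) follows.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, recall_score

rng = np.random.default_rng(2)
reference = rng.integers(0, 4, 200)                 # expert-calibrated stenosis grades 0-3
model = np.where(rng.random(200) < 0.9, reference,  # a model that agrees ~90% of the time
                 rng.integers(0, 4, 200))

print("multiclass kappa:", cohen_kappa_score(reference, model))

# Binary grading (stenosis present vs. absent) and its sensitivity (recall on the positive class)
ref_bin, model_bin = (reference > 0).astype(int), (model > 0).astype(int)
print("binary sensitivity:", recall_score(ref_bin, model_bin))
```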

A multiregional multimodal machine learning model for predicting outcome of surgery for symptomatic hemorrhagic brainstem cavernous malformations.

Dong X, Gui H, Quan K, Li Z, Xiao Y, Zhou J, Zhao Y, Wang D, Liu M, Duan H, Yang S, Lin X, Dong J, Wang L, Ma Y, Zhu W

PubMed | Jul 1, 2025
Given that resection of brainstem cavernous malformations (BSCMs) ends hemorrhaging but carries a high risk of neurological deficits, it is necessary to develop and validate a model predicting surgical outcomes. This study aimed to construct a BSCM surgery outcome prediction model based on clinical characteristics and T2-weighted MRI-based radiomics. Two separate cohorts of patients undergoing BSCM resection were included as discovery and validation sets. Patient characteristics and imaging data were analyzed. An unfavorable outcome was defined as a modified Rankin Scale score > 2 at the 12-month follow-up. Image features were extracted from regions of interest within lesions and adjacent brainstem. A nomogram was constructed using the risk score from the optimal model. The discovery and validation sets comprised 218 and 49 patients, respectively (mean age 40 ± 14 years, 127 females); 63 patients in the discovery set and 35 in the validation set had an unfavorable outcome. The eXtreme Gradient Boosting imaging model with selected radiomics features achieved the best performance (area under the receiver operating characteristic curve [AUC] 0.82). Patients were stratified into high- and low-risk groups based on risk scores computed from this model (optimal cutoff 0.37). The final integrative multimodal prognostic model attained an AUC of 0.90, surpassing both the imaging and clinical models alone. Inclusion of BSCM and brainstem subregion imaging data in machine learning models yielded significant predictive capability for unfavorable postoperative outcomes. The integration of specific clinical features enhanced prediction accuracy.
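
A sketch of the winning model family, gradient-boosted trees on selected radiomic features thresholded into risk groups, is given below; the feature matrix, labels, and cutoff handling are synthetic stand-ins, not the study's data.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(218, 30))   # stand-in for selected radiomic features (lesion + brainstem ROIs)
y = rng.integers(0, 2, 218)      # 1 = unfavorable outcome (mRS > 2 at 12 months)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05, eval_metric="logloss")
model.fit(X_tr, y_tr)

risk = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, risk))
high_risk = risk >= 0.37         # stratify with a cutoff analogous to the paper's 0.37
```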

Deep learning-based clinical decision support system for intracerebral hemorrhage: an imaging-based AI-driven framework for automated hematoma segmentation and trajectory planning.

Gan Z, Xu X, Li F, Kikinis R, Zhang J, Chen X

PubMed | Jul 1, 2025
Intracerebral hemorrhage (ICH) remains a critical neurosurgical emergency with high mortality and long-term disability. Despite advancements in minimally invasive techniques, procedural precision remains limited by hematoma complexity and resource disparities, particularly in underserved regions where 68% of global ICH cases occur. Therefore, the authors aimed to introduce a deep learning-based decision support and planning system to democratize surgical planning and reduce operator dependence. A retrospective cohort of 347 patients (31,024 CT slices) from a single hospital (March 2016-June 2024) was analyzed. The framework integrated nnU-Net-based hematoma and skull segmentation, CT reorientation via ocular landmarks (mean angular correction 20.4° [SD 8.7°]), safety zone delineation with dual anatomical corridors, and trajectory optimization prioritizing maximum hematoma traversal and critical structure avoidance. A validated scoring system was implemented for risk stratification. With the artificial intelligence (AI)-driven system, the automated segmentation accuracy reached clinical-grade performance (Dice similarity coefficient 0.90 [SD 0.14] for hematoma and 0.99 [SD 0.035] for skull), with strong interrater reliability (intraclass correlation coefficient 0.91). For trajectory planning of supratentorial hematomas, the system achieved a low-risk trajectory in 80.8% (252/312) and a moderate-risk trajectory in 15.4% (48/312) of patients, while replanning was required due to high-risk designations in 3.8% of patients (12/312). This AI-driven system demonstrated robust efficacy for supratentorial ICH, addressing 60% of prevalent hemorrhage subtypes. While limitations remain in infratentorial hematomas, this novel automated hematoma segmentation and surgical planning system could be helpful in assisting less-experienced neurosurgeons with limited resources in primary healthcare settings.
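
The segmentation results above are reported as Dice similarity coefficients; the short NumPy sketch below shows how that metric is computed on binary masks, using toy arrays rather than the study's CT data.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a predicted hematoma mask that overlaps most of the reference mask.
ref = np.zeros((64, 64), dtype=bool); ref[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool); pred[22:40, 20:42] = True
print(round(dice_coefficient(pred, ref), 3))  # ~0.9
```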

Generation of synthetic CT-like imaging of the spine from biplanar radiographs: comparison of different deep learning architectures.

Bottini M, Zanier O, Da Mutten R, Gandia-Gonzalez ML, Edström E, Elmi-Terander A, Regli L, Serra C, Staartjes VE

PubMed | Jul 1, 2025
This study compared two deep learning architectures, generative adversarial networks (GANs) and convolutional neural networks combined with implicit neural representations (CNN-INRs), for generating synthetic CT (sCT) images of the spine from biplanar radiographs. The aim of the study was to identify the most robust and clinically viable approach for this potential intraoperative imaging technique. A spine CT dataset of 216 training and 54 validation cases was used. Digitally reconstructed radiographs (DRRs) served as 2D inputs for training both models under identical conditions for 170 epochs. Evaluation metrics included the Structural Similarity Index Measure (SSIM), peak signal-to-noise ratio (PSNR), and cosine similarity (CS), complemented by qualitative assessments of anatomical fidelity. The GAN model achieved a mean SSIM of 0.932 ± 0.015, PSNR of 19.85 ± 1.40 dB, and CS of 0.671 ± 0.177. The CNN-INR model demonstrated a mean SSIM of 0.921 ± 0.015, PSNR of 21.96 ± 1.20 dB, and CS of 0.707 ± 0.114. Statistical analysis revealed significant differences for SSIM (p = 0.001) and PSNR (p < 0.001), while CS differences were not statistically significant (p = 0.667). Qualitative evaluations consistently favored the GAN model, which produced more anatomically detailed and visually realistic sCT images. This study demonstrated the feasibility of generating spine sCT images from biplanar radiographs using GAN and CNN-INR models. While neither model achieved clinical-grade outputs, the GAN architecture showed greater potential for generating anatomically accurate and visually realistic images. These findings highlight the promise of sCT image generation from biplanar radiographs as an innovative approach to reducing radiation exposure and improving imaging accessibility, with GANs emerging as the more promising avenue for further research and clinical integration.
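
The three reported metrics can be computed as in the sketch below with scikit-image and NumPy on synthetic slices; it illustrates the evaluation only, not the GAN or CNN-INR models themselves.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(4)
ct = rng.random((256, 256)).astype(np.float32)                               # reference CT slice (normalized)
sct = np.clip(ct + rng.normal(0, 0.05, ct.shape), 0, 1).astype(np.float32)   # synthetic CT slice

ssim = structural_similarity(ct, sct, data_range=1.0)
psnr = peak_signal_noise_ratio(ct, sct, data_range=1.0)
cs = float(np.dot(ct.ravel(), sct.ravel()) /
           (np.linalg.norm(ct.ravel()) * np.linalg.norm(sct.ravel())))
print(f"SSIM={ssim:.3f}  PSNR={psnr:.2f} dB  CS={cs:.3f}")
```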

Does alignment alone predict mechanical complications after adult spinal deformity surgery? A machine learning comparison of alignment, bone quality, and soft tissue.

Sundrani S, Doss DJ, Johnson GW, Jain H, Zakieh O, Wegner AM, Lugo-Pico JG, Abtahi AM, Stephens BF, Zuckerman SL

PubMed | Jul 1, 2025
Mechanical complications are a vexing occurrence after adult spinal deformity (ASD) surgery. While achieving ideal spinal alignment in ASD surgery is critical, alignment alone may not fully explain all mechanical complications. The authors sought to determine which combination of inputs produced the most sensitive and specific machine learning model for predicting mechanical complications using postoperative alignment, bone quality, and soft tissue data. A retrospective cohort study was performed in patients undergoing ASD surgery from 2009 to 2021. Inclusion criteria were a fusion of ≥ 5 levels, sagittal/coronal deformity, and at least 2 years of follow-up. The primary exposure variables were 1) alignment, evaluated in both the sagittal and coronal planes using the L1-pelvic angle ± 3°, L4-S1 lordosis, sagittal vertical axis, pelvic tilt, and coronal vertical axis; 2) bone quality, evaluated by the T-score from a dual-energy x-ray absorptiometry scan; and 3) soft tissue, evaluated by the paraspinal muscle-to-vertebral body ratio and fatty infiltration. The primary outcome was mechanical complications. Seven machine learning models, covering every combination of the three domains (alignment, bone quality, and soft tissue) alongside demographic data, were trained. The positive predictive value (PPV) was calculated for each model. Of 231 patients (24% male) undergoing ASD surgery with a mean age of 64 ± 17 years, 147 (64%) developed at least one mechanical complication. The model with alignment alone performed poorly, with a PPV of 0.85. However, the model with alignment, bone quality, and soft tissue achieved a high PPV of 0.90, a sensitivity of 0.67, and a specificity of 0.84. Moreover, the model with alignment alone failed to predict 15 of 100 complications, whereas the model with all three domains failed to predict only 10 of 100. These results support the notion that not every mechanical failure is explained by alignment alone. The authors found that a combination of alignment, bone quality, and soft tissue provided the most accurate prediction of mechanical complications after ASD surgery. While achieving optimal alignment is essential, additional data, including bone quality and soft tissue, are necessary to minimize mechanical complications.
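
The comparison rests on PPV, sensitivity, and specificity; the brief sketch below derives them from a confusion matrix with scikit-learn, using synthetic predictions rather than the study cohort.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
y_true = rng.integers(0, 2, 231)                               # 1 = mechanical complication
y_pred = np.where(rng.random(231) < 0.8, y_true, 1 - y_true)   # a model right ~80% of the time

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ppv = tp / (tp + fp)                                           # positive predictive value
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"PPV={ppv:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```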