
LADDA: Latent Diffusion-based Domain-adaptive Feature Disentangling for Unsupervised Multi-modal Medical Image Registration.

Yuan P, Dong J, Zhao W, Lyu F, Xue C, Zhang Y, Yang C, Wu Z, Gao Z, Lyu T, Coatrieux JL, Chen Y

PubMed | Jul 15, 2025
Deformable image registration (DIR) is critical for accurate clinical diagnosis and effective treatment planning. However, patient movement, significant intensity differences, and large breathing deformations hinder accurate anatomical alignment in multi-modal image registration. These factors exacerbate the entanglement of anatomical and modality-specific style information, severely limiting the performance of multi-modal registration. To address this, we propose a novel LAtent Diffusion-based Domain-Adaptive feature disentangling (LADDA) framework for unsupervised multi-modal medical image registration, which explicitly addresses representation disentanglement. First, LADDA extracts reliable anatomical priors from a Latent Diffusion Model (LDM), facilitating downstream content-style disentangled learning. A Domain-Adaptive Feature Disentangling (DAFD) module is proposed to further promote anatomical structure alignment. This module disentangles image features into content and style information, encouraging the network to focus on cross-modal content information. Next, a Neighborhood-Preserving Hashing (NPH) module is constructed to perceive and integrate hierarchical content information through local neighborhood encoding, thereby maintaining cross-modal structural consistency. Furthermore, a Unilateral-Query-Frozen Attention (UQFA) module is proposed to enhance the coupling between upstream priors and downstream content information. Feature interaction within intra-domain consistent structures improves the fine recovery of detailed textures. The proposed framework is extensively evaluated on large-scale multi-center datasets, demonstrating superior performance across diverse clinical scenarios and strong generalization on out-of-distribution (OOD) data.
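The DAFD module's split of features into content and style is not specified in the abstract; a common minimal way to realize such a split is via per-channel feature statistics (instance-normalization style), where style is the channel-wise mean/std and content is the standardized map. The sketch below illustrates that idea only, under that assumption; the function names are hypothetical, not LADDA's API.

```python
import numpy as np

def disentangle(feat, eps=1e-5):
    """Split a feature map (C, H, W) into normalized content and
    per-channel style statistics, in the spirit of instance norm.
    A generic content-style split, not the paper's DAFD module."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + eps
    content = (feat - mu) / sigma          # style-free structure
    style = (mu.squeeze(), sigma.squeeze())  # modality "style"
    return content, style

def restylize(content, style):
    """Re-apply style statistics to a content map (AdaIN-style)."""
    mu, sigma = style
    return content * sigma[:, None, None] + mu[:, None, None]
```

Because `restylize(disentangle(x))` reconstructs the input, the split is lossless; a registration network can then align the content parts across modalities while ignoring style.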

OMT and tensor SVD-based deep learning model for segmentation and predicting genetic markers of glioma: A multicenter study.

Zhu Z, Wang H, Li T, Huang TM, Yang H, Tao Z, Tan ZH, Zhou J, Chen S, Ye M, Zhang Z, Li F, Liu D, Wang M, Lu J, Zhang W, Li X, Chen Q, Jiang Z, Chen F, Zhang X, Lin WW, Yau ST, Zhang B

PubMed | Jul 15, 2025
Glioma is the most common primary malignant brain tumor, and preoperative genetic profiling is essential for the management of glioma patients. Our study focused on tumor region segmentation and on predicting the World Health Organization (WHO) grade, isocitrate dehydrogenase (IDH) mutation, and 1p/19q codeletion status using deep learning models on preoperative MRI. To achieve accurate tumor segmentation, we developed an optimal mass transport (OMT) approach to transform irregular MRI brain images into tensors. In addition, we proposed an algebraic preclassification (APC) model utilizing multimode OMT tensor singular value decomposition (SVD) to estimate preclassification probabilities. The fully automated deep learning model named OMT-APC was used for multitask classification. Our study incorporated preoperative brain MRI data from 3,565 glioma patients across 16 datasets spanning Asia, Europe, and America. Among these, 2,551 patients from 5 datasets were used for training and internal validation, while 1,014 patients from 11 datasets, including 242 patients from The Cancer Genome Atlas (TCGA), were used as an independent external test set. The OMT segmentation model achieved a mean lesion-wise Dice score of 0.880. The OMT-APC model was evaluated on the TCGA dataset, achieving accuracies of 0.855, 0.917, and 0.809, with AUC scores of 0.845, 0.908, and 0.769 for WHO grade, IDH mutation, and 1p/19q codeletion, respectively, outperforming the four radiologists in all tasks. These results highlight the effectiveness of our OMT and tensor SVD-based methods in brain tumor genetic profiling, suggesting promising applications for algebraic and geometric methods in medical image analysis.
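The paper's OMT transform of irregular brain volumes into tensors is not detailed in the abstract, so as background only, here is a generic entropic optimal-transport solver (Sinkhorn iterations) between two discrete mass distributions; it illustrates the mass-transport machinery, not the authors' solver, and all names are ours.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iter=200):
    """Entropic-regularized optimal transport plan between two
    histograms a and b (each summing to 1) under a cost matrix.
    Plain Sinkhorn scaling iterations."""
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # match column marginals
        u = a / (K @ v)              # match row marginals
    return u[:, None] * K * v[None, :]
```

The returned plan's row and column sums reproduce the two input distributions, which is the defining constraint of a transport plan.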

Restore-RWKV: Efficient and Effective Medical Image Restoration with RWKV.

Yang Z, Li J, Zhang H, Zhao D, Wei B, Xu Y

PubMed | Jul 15, 2025
Transformers have revolutionized medical image restoration, but their quadratic complexity still poses limitations for application to high-resolution medical images. The recent advent of the Receptance Weighted Key Value (RWKV) model in the natural language processing field has attracted much attention due to its ability to process long sequences efficiently. To leverage its advanced design, we propose Restore-RWKV, the first RWKV-based model for medical image restoration. Since the original RWKV model is designed for 1D sequences, we make two necessary modifications for modeling spatial relations in 2D medical images. First, we present a recurrent WKV (Re-WKV) attention mechanism that captures global dependencies with linear computational complexity. Re-WKV incorporates bidirectional attention as the basis for a global receptive field and recurrent attention to effectively model 2D dependencies from various scan directions. Second, we develop an omnidirectional token shift (Omni-Shift) layer that enhances local dependencies by shifting tokens from all directions and across a wide context range. These adaptations make the proposed Restore-RWKV an efficient and effective model for medical image restoration. Even a lightweight variant of Restore-RWKV, with only 1.16 million parameters, achieves comparable or even superior results compared to existing state-of-the-art (SOTA) methods. Extensive experiments demonstrate that the resulting Restore-RWKV achieves SOTA performance across a range of medical image restoration tasks, including PET image synthesis, CT image denoising, MRI super-resolution, and all-in-one medical image restoration. Code is available at: https://github.com/Yaziwel/Restore-RWKV.
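The token-shift idea behind Omni-Shift can be sketched without any learned weights: mix each token with shifted copies of the feature map from all four axis directions. The real Omni-Shift learns per-direction convolution weights over a wider context; this fixed-weight NumPy version only illustrates the shifting pattern.

```python
import numpy as np

def omni_shift(x, alpha=0.5):
    """Toy omnidirectional token shift on an (H, W, C) feature map:
    average the four axis-aligned neighbours of each token (zero-padded
    at the borders) and blend with the original by weight alpha."""
    shifted = np.zeros_like(x)
    shifted[1:, :] += x[:-1, :]   # neighbour from above
    shifted[:-1, :] += x[1:, :]   # neighbour from below
    shifted[:, 1:] += x[:, :-1]   # neighbour from the left
    shifted[:, :-1] += x[:, 1:]   # neighbour from the right
    return (1 - alpha) * x + alpha * shifted / 4.0
```

On a constant feature map the interior is unchanged, which is a quick sanity check that the mixing weights sum to one away from the borders.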

Vision transformer and complex network analysis for autism spectrum disorder classification in T1 structural MRI.

Gao X, Xu Y

PubMed | Jul 15, 2025
Autism spectrum disorder (ASD) affects social interaction, communication, and behavior. Early diagnosis is important because it enables timely intervention that can significantly improve long-term outcomes, but current diagnostic approaches, which rely heavily on behavioral observations and clinical interviews, are often subjective and time-consuming. This study introduces an AI-based approach that uses T1-weighted structural MRI (sMRI) scans, network analysis, and vision transformers to automatically diagnose ASD. sMRI data from 79 ASD patients and 105 healthy controls were obtained from the Autism Brain Imaging Data Exchange (ABIDE) database. Complex network analysis (CNA) features and Vision Transformer (ViT) features were developed for predicting ASD. Five models were developed for each feature type: logistic regression, support vector machine (SVM), gradient boosting (GB), K-nearest neighbors (KNN), and neural network (NN). Twenty-five further models were developed by federating the two sets of 5 models. Model performance was evaluated using accuracy, area under the receiver operating characteristic curve (AUC-ROC), sensitivity, and specificity via fivefold cross-validation. The federated model CNA(KNN)-ViT(NN) achieved the highest performance, with accuracy 0.951 ± 0.067, AUC-ROC 0.980 ± 0.020, sensitivity 0.963 ± 0.050, and specificity 0.943 ± 0.047. The performance of the ViT-based models exceeded that of the complex network-based models on 80% of the performance metrics, and federating with the CNA models improved the ViT models' performance further. This study demonstrates the feasibility of using CNA and ViT models for the automated diagnosis of ASD. The proposed CNA(KNN)-ViT(NN) model achieved better accuracy in ASD classification based solely on T1 sMRI images. The method's reliance on widely available T1 sMRI scans highlights its potential for integration into routine clinical examinations, facilitating more efficient and accessible ASD screening.
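The abstract does not spell out how two models are "federated"; a standard minimal interpretation is weighted soft voting over the two classifiers' predicted probabilities. The sketch below is that stand-in only, with hypothetical names, not the paper's exact scheme.

```python
import numpy as np

def federate(p_cna, p_vit, w=0.5):
    """Combine positive-class probabilities from a CNA-based and a
    ViT-based classifier by weighted soft voting, then threshold at
    0.5 to obtain binary ASD / control labels."""
    p = w * np.asarray(p_cna) + (1 - w) * np.asarray(p_vit)
    return p, (p >= 0.5).astype(int)
```

With `w=0.5` this is a plain probability average; tuning `w` on validation folds is the usual way to weight one feature stream over the other.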

Poincare guided geometric UNet for left atrial epicardial adipose tissue segmentation in Dixon MRI images.

Firouznia M, Ylipää E, Henningsson M, Carlhäll CJ

PubMed | Jul 15, 2025
Epicardial Adipose Tissue (EAT) is a recognized risk factor for cardiovascular diseases and plays a pivotal role in the pathophysiology of Atrial Fibrillation (AF). Accurate automatic segmentation of the EAT around the Left Atrium (LA) from Magnetic Resonance Imaging (MRI) data remains challenging. While Convolutional Neural Networks excel at multi-scale feature extraction using stacked convolutions, they struggle to capture long-range self-similarity and hierarchical relationships, which are essential in medical image segmentation. In this study, we present and validate PoinUNet, a deep learning model that integrates a Poincaré embedding layer into a 3D UNet to enhance LA wall and fat segmentation from Dixon MRI data. By using hyperbolic space learning, PoinUNet captures complex LA and EAT relationships and addresses class imbalance and fat geometry challenges using a new loss function. Sixty-six participants, including forty-eight AF patients, were scanned at 1.5T. The first network identified fat regions, while the second utilized Poincaré embeddings and convolutional layers for precise segmentation, enhanced by fat fraction maps. PoinUNet achieved a Dice Similarity Coefficient of 0.87 and a Hausdorff distance of 9.42 on the test set. This performance surpasses state-of-the-art methods, providing accurate quantification of the LA wall and LA EAT.
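The hyperbolic-space learning that a Poincaré embedding layer relies on is built around the geodesic distance of the Poincaré ball model, which grows without bound near the ball's boundary and thus naturally encodes hierarchy. A minimal NumPy version of that metric (not PoinUNet's layer itself):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare
    ball: arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    nu, nv = np.dot(u, u), np.dot(v, v)
    d2 = np.dot(u - v, u - v)
    arg = 1.0 + 2.0 * d2 / max((1.0 - nu) * (1.0 - nv), eps)
    return np.arccosh(arg)
```

Points near the origin behave almost Euclidean, while points near the boundary are exponentially far from everything, which is why tree-like anatomical relationships embed well in this geometry.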

Fully Automated Online Adaptive Radiation Therapy Decision-Making for Cervical Cancer Using Artificial Intelligence.

Sun S, Gong X, Cheng S, Cao R, He S, Liang Y, Yang B, Qiu J, Zhang F, Hu K

PubMed | Jul 15, 2025
Interfraction variations during radiation therapy pose a challenge for patients with cervical cancer, highlighting the benefits of online adaptive radiation therapy (oART). However, adaptation decisions rely on subjective image reviews by physicians, leading to high interobserver variability and inefficiency. This study explores the feasibility of using artificial intelligence for decision-making in oART. A total of 24 patients with cervical cancer who underwent 671 fractions of daily fan-beam computed tomography (FBCT) guided oART were included in this study, with each fraction consisting of a daily FBCT image series and a pair of scheduled and adaptive plans. Dose deviations of scheduled plans exceeding predefined criteria were labeled as "trigger," otherwise as "nontrigger." A data set comprising 588 fractions from 21 patients was used for model development. For the machine learning model (ML), 101 morphologic, gray-level, and dosimetric features were extracted, with feature selection by the least absolute shrinkage and selection operator (LASSO) and classification by support vector machine (SVM). For deep learning, a Siamese network approach was used: the deep learning model of contour (DL_C) used only imaging data and contours, whereas a deep learning model of contour and dose (DL_D) also incorporated dosimetric data. A 5-fold cross-validation strategy was employed for model training and testing, and model performance was evaluated using the area under the curve (AUC), accuracy, precision, and recall. An independent data set comprising 83 fractions from 3 patients was used for model evaluation, with predictions compared against trigger labels assigned by 3 experienced radiation oncologists. Based on dosimetric labels, the 671 fractions were classified into 492 trigger and 179 nontrigger cases. 
The ML model selected 39 key features, primarily reflecting morphologic and gray-level changes in the clinical target volume (CTV) of the uterus (CTV_U), the CTV of the cervix, vagina, and parametrial tissues (CTV_C), and the small intestine. It achieved an AUC of 0.884, with accuracy, precision, and recall of 0.825, 0.824, and 0.827, respectively. The DL_C model demonstrated superior performance with an AUC of 0.917, accuracy of 0.869, precision of 0.860, and recall of 0.881. The DL_D model, which incorporated additional dosimetric data, exhibited a slight decline in performance compared with DL_C. Heatmap analyses indicated that for trigger fractions, the deep learning models focused on regions where the reference CT's CTV_U did not fully encompass the daily FBCT's CTV_U. Evaluation on an independent data set confirmed the robustness of all models. The weighted model's prediction accuracy significantly outperformed the physician consensus (0.855 vs 0.795), with comparable precision (0.917 vs 0.925) but substantially higher recall (0.887 vs 0.790). This study proposes machine learning and deep learning models to identify treatment fractions that may benefit from adaptive replanning in radical radiation therapy for cervical cancer, providing a promising decision-support tool to assist clinicians in determining when to trigger the oART workflow during treatment.
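The accuracy, precision, and recall figures above all derive from the binary trigger / nontrigger confusion matrix; for reference, here is how those three metrics are computed from paired labels and predictions (generic definitions, not code from the study).

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels, where
    1 = trigger and 0 = nontrigger."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```

Note the trade-off the study reports: the weighted model matched physicians on precision (few false triggers) while substantially raising recall (fewer missed triggers), which is usually the costlier error in adaptive replanning.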

Assessment of local recurrence risk in extremity high-grade osteosarcoma through multimodality radiomics integration.

Luo Z, Liu R, Li J, Ye Q, Zhou Z, Shen X

PubMed | Jul 15, 2025
Background: A timely assessment of local recurrence (LoR) risk in extremity high-grade osteosarcoma is crucial for optimizing treatment strategies and improving patient outcomes.
Purpose: To explore the potential of machine-learning algorithms in predicting LoR in patients with osteosarcoma.
Material and Methods: Data from patients with high-grade osteosarcoma who underwent preoperative radiography and multiparametric magnetic resonance imaging (MRI) were collected. Machine-learning models were developed and trained on this dataset to predict LoR. The study involved selecting relevant features, training the models, and evaluating their performance using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). DeLong's test was utilized for comparing the AUCs.
Results: The performance (AUC, sensitivity, specificity, and accuracy) of four classifiers (random forest [RF], support vector machine, logistic regression, and extreme gradient boosting) using radiograph-MRI image inputs was stable (all Hosmer-Lemeshow index > 0.05), with fair to good prognostic efficacy. The RF classifier using radiograph-MRI features as training inputs exhibited better performance (AUC = 0.806, 0.868) than that using MRI only (AUC = 0.774, 0.771) and radiograph only (AUC = 0.613, 0.627) in the training and testing sets (P < 0.05), while the other three classifiers showed no difference between the MRI-only and radiograph-MRI models.
Conclusion: This study provides valuable insights into the use of machine learning for predicting LoR in osteosarcoma patients. These findings emphasize the potential of integrating radiomics data with machine-learning algorithms to improve prognostic assessments.
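All of the AUC comparisons above rest on the same statistic: the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney U formulation, which is also what DeLong's test builds on). A dependency-free version:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: fraction of
    (positive, negative) pairs where the positive outranks the
    negative, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form makes it clear why AUC is threshold-free: only the ranking of recurrence scores matters, not their absolute values.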

Exploring the robustness of TractOracle methods in RL-based tractography

Jeremi Levesque, Antoine Théberge, Maxime Descoteaux, Pierre-Marc Jodoin

arXiv preprint | Jul 15, 2025
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.
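The IRT loop described above, i.e., alternating between scoring streamlines with the current oracle and refitting the oracle on labels from a bundle-filtering step, can be caricatured in a few lines. Everything below is a toy: streamlines are scalars, the oracle is a single threshold, and `bundle_filter` stands in for an anatomy-based filtering method; none of this is the paper's implementation.

```python
def iterative_reward_training(streamlines, bundle_filter, rounds=3):
    """Toy IRT skeleton: start from a permissive oracle, then
    repeatedly (1) keep streamlines the current oracle accepts and
    (2) refit the oracle on the subset the filtering step marks
    valid, mimicking RLHF-style reward refinement without a human."""
    threshold = 0.0                       # initial oracle accepts everything
    for _ in range(rounds):
        accepted = [s for s in streamlines if s >= threshold]
        valid = [s for s in accepted if bundle_filter(s)]
        if valid:                         # refit oracle to filtered labels
            threshold = min(valid)
    return threshold
```

The point of the loop is that the oracle's acceptance boundary converges toward the filter's notion of anatomical validity instead of staying fixed at its initial, hand-set value.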

Preoperative prediction value of 2.5D deep learning model based on contrast-enhanced CT for lymphovascular invasion of gastric cancer.

Sun X, Wang P, Ding R, Ma L, Zhang H, Zhu L

PubMed | Jul 15, 2025
To develop and validate artificial intelligence models based on venous-phase contrast-enhanced CT (CECT) images, using deep learning (DL) and radiomics approaches, to predict lymphovascular invasion (LVI) in gastric cancer prior to surgery. We retrospectively analyzed data from 351 gastric cancer patients, randomly splitting them into two cohorts (training cohort, n = 246; testing cohort, n = 105) in a 7:3 ratio. The tumor region of interest (ROI) was outlined on venous-phase CT images as the input for the development of radiomics, 2D, and 3D DL models (DL2D and DL3D). Of note, by centering the analysis on the tumor's maximum cross-section and incorporating seven adjacent 2D images, we generated stable 2.5D data to establish a multi-instance learning (MIL) model. Clinical and feature-combined models that integrated traditional CT enhancement parameters (Ratio), radiomics, and MIL features were also constructed. Model performance was evaluated by the area under the curve (AUC), confusion matrices, and detailed metrics such as sensitivity and specificity. A nomogram based on the combined model was established and applied to clinical practice. Calibration curves were used to evaluate the consistency between each model's predicted LVI and the actual LVI status, and decision curve analysis (DCA) was used to evaluate each model's net benefit. Among the developed models, the 2.5D MIL and combined models exhibited superior performance compared with the clinical, radiomics, DL2D, and DL3D models, as evidenced by AUC values of 0.820, 0.822, 0.748, 0.725, 0.786, and 0.711 on the testing set, respectively. Additionally, the 2.5D MIL and combined models showed good calibration for LVI prediction and provided a net clinical benefit when the threshold probability ranged from 0.31 to 0.98 and from 0.28 to 0.84, respectively, indicating their clinical usefulness.
The 2.5D MIL and combined models demonstrated strong performance in predicting preoperative lymphovascular invasion in gastric cancer, offering valuable insights for clinicians in selecting appropriate treatment options for gastric cancer patients.
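The 2.5D construction described above (the maximum tumor cross-section plus seven adjacent slices) can be sketched directly: pick the slice where the tumor mask has the largest area, then stack neighboring slices around it. The exact neighbor layout in the paper is not specified, so the 3-below / 4-above split here is an assumption, as is every name in the sketch.

```python
import numpy as np

def make_25d_stack(volume, mask, n_neighbors=7):
    """Build a 2.5D input from a CT volume (D, H, W): select the
    slice with the largest tumor cross-section in `mask`, then stack
    it with n_neighbors adjacent slices (clipped at volume borders),
    yielding n_neighbors + 1 slices in total."""
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1)
    center = int(np.argmax(areas))        # max cross-section slice
    lo = n_neighbors // 2
    hi = n_neighbors - lo
    idx = np.clip(np.arange(center - lo, center + hi + 1),
                  0, volume.shape[0] - 1)
    return volume[idx]
```

The clipping simply repeats the border slice when the tumor sits near the top or bottom of the scan, keeping the stack size fixed for the MIL network.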

Direct-to-Treatment Adaptive Radiation Therapy: Live Planning of Spine Metastases Using Novel Cone Beam Computed Tomography.

McGrath KM, MacDonald RL, Robar JL, Cherpak A

PubMed | Jul 15, 2025
Cone beam computed tomography (CBCT)-based online adaptive radiation therapy is carried out using a synthetic CT (sCT) created through deformable registration between the patient-specific fan-beam computed tomography (FBCT) and the daily CBCT. Ethos 2.0 allows for plan calculation directly on HyperSight CBCT and uses artificial intelligence-informed tools for daily contouring without the use of a priori information. This breaks an important link between daily adaptive sessions and initial reference plan preparation. This study explores adaptive radiation therapy for spine metastases without prior patient-specific imaging or treatment planning. We hypothesize that adaptive plans can be created when patient-specific positioning and anatomy are incorporated only once the patient has arrived at the treatment unit. An Ethos 2.0 emulator was used to create initial reference plans on 10 patient-specific FBCTs. Reference plans were also created using FBCTs of (1) a library patient with clinically acceptable contours and (2) a water-equivalent phantom with placeholder contours. Adaptive sessions were simulated for each patient using the 3 different starting points, and the resulting adaptive plans were compared to determine the significance of patient-specific information prior to the start of treatment. The library patient and phantom reference plans did not generate adaptive plans that differed significantly from the standard workflow for any clinical constraint on target coverage or organ-at-risk sparing (P > .2). Gamma comparison between the 3 adaptive plans for each patient (3%/3 mm) demonstrated overall similarity of dose distributions (pass rate > 95%) for all but 2 cases. Failures occurred mainly in low-dose regions, highlighting differences in the fluence used to achieve the same clinical goals. This study confirmed the feasibility of a procedure for treatment of spine metastases that does not rely on previously acquired patient-specific imaging, contours, or plans.
Reference-free direct-to-treatment workflows are possible and can condense a multistep process to a single location with dedicated resources.
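The 3%/3 mm gamma comparison used above combines a dose-difference criterion and a distance-to-agreement criterion into a single pass/fail index per point. Clinical gamma tools operate on 3D dose grids with low-dose thresholds; the 1D global-gamma sketch below only illustrates the formula, and its function and parameter names are ours.

```python
import numpy as np

def gamma_pass_rate(ref, evl, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified global 1D gamma analysis (default 3%/3 mm): for each
    reference point, gamma is the minimum over all evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / 3 mm)^2);
    a point passes when gamma <= 1."""
    ref, evl, positions = map(np.asarray, (ref, evl, positions))
    crit = dose_tol * ref.max()          # global dose criterion
    gammas = []
    for r, p in zip(ref, positions):
        dd = (evl - r) / crit            # normalized dose differences
        dx = (positions - p) / dist_tol  # normalized distances
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return float(np.mean(np.asarray(gammas) <= 1.0))
```

Because the minimum is taken over nearby points as well, small spatial shifts of a steep dose gradient can still pass, which is exactly the tolerance the distance criterion is meant to provide.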
