
Exploring the robustness of TractOracle methods in RL-based tractography

Jeremi Levesque, Antoine Théberge, Maxime Descoteaux, Pierre-Marc Jodoin

arXiv preprint · Jul 15, 2025
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.
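The abstract ships no code, but the IRT idea is easy to sketch: alternate between optimizing the tracking policy against the current oracle and retraining the oracle on automatically filtered tractograms. A minimal Python sketch with hypothetical `agent`, `oracle`, and `bundle_filter` components, none of them taken from the authors' codebase:

```python
# Minimal sketch of an Iterative Reward Training (IRT) loop.
# `agent`, `oracle`, and `bundle_filter` are hypothetical stand-ins,
# not objects from the TractOracle-RL implementation.

def iterative_reward_training(agent, oracle, subjects,
                              n_rounds=5, rl_steps=10_000):
    for _ in range(n_rounds):
        # 1. Optimize the tracking policy against the current oracle reward.
        agent.train(reward_fn=oracle.score, steps=rl_steps)

        # 2. Generate tractograms with the current policy.
        tractograms = [agent.track(subject) for subject in subjects]

        # 3. Label streamlines with an automatic bundle-filtering method,
        #    standing in for the human raters of classic RLHF.
        labels = [bundle_filter(t) for t in tractograms]

        # 4. Refine the oracle on the freshly filtered data.
        oracle.fit(tractograms, labels)
    return agent, oracle
```

The point mirrored from RLHF is that the reward model is refreshed between RL phases rather than fixed once before training.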

Direct-to-Treatment Adaptive Radiation Therapy: Live Planning of Spine Metastases Using Novel Cone Beam Computed Tomography.

McGrath KM, MacDonald RL, Robar JL, Cherpak A

PubMed paper · Jul 15, 2025
Cone beam computed tomography (CBCT)-based online adaptive radiation therapy is carried out using a synthetic CT (sCT) created through deformable registration between a patient-specific fan-beam computed tomography (FBCT) and the daily CBCT. Ethos 2.0 allows for plan calculation directly on HyperSight CBCT and uses artificial intelligence-informed tools for daily contouring without the use of a priori information. This breaks an important link between daily adaptive sessions and initial reference plan preparation. This study explores adaptive radiation therapy for spine metastases without prior patient-specific imaging or treatment planning. We hypothesize that adaptive plans can be created when patient-specific positioning and anatomy are incorporated only once the patient has arrived at the treatment unit. An Ethos 2.0 emulator was used to create initial reference plans on 10 patient-specific FBCTs. Reference plans were also created using FBCTs of (1) a library patient with clinically acceptable contours and (2) a water-equivalent phantom with placeholder contours. Adaptive sessions were simulated for each patient using the 3 different starting points. Resulting adaptive plans were compared to determine the significance of patient-specific information acquired prior to the start of treatment. The library patient and phantom reference plans did not generate adaptive plans that differed significantly from the standard workflow for any clinical constraint on target coverage or organ-at-risk sparing (P > .2). Gamma comparison between the 3 adaptive plans for each patient (3%/3 mm) demonstrated overall similarity of dose distributions (pass rate > 95%) for all but 2 cases. Failures occurred mainly in low-dose regions, highlighting differences in fluence used to achieve the same clinical goals. This study confirmed the feasibility of a procedure for treatment of spine metastases that does not rely on previously acquired patient-specific imaging, contours, or plans. Reference-free direct-to-treatment workflows are possible and can condense a multistep process to a single location with dedicated resources.
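The 3%/3 mm gamma comparison used above is a standard dose-distribution similarity test. Below is a brute-force global-gamma pass rate for small 2D dose grids, a simplified illustration assuming uniform grid spacing and no low-dose cutoff; clinical gamma tools add interpolation, dose thresholds, and much faster search:

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm=1.0, dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global gamma (default 3%/3 mm) for two 2D dose grids.

    O(N^2) exhaustive search; suitable only for small grids and meant
    to illustrate the criterion, not to replace clinical QA software.
    """
    ys, xs = np.indices(ref.shape)
    coords = np.stack([ys, xs], axis=-1).reshape(-1, 2) * spacing_mm
    evl_flat = evl.ravel()
    d_norm = dose_tol * ref.max()  # global dose-difference normalization

    gamma = np.empty(ref.size)
    for i, (point, d_ref) in enumerate(zip(coords, ref.ravel())):
        dist_term = ((coords - point) ** 2).sum(axis=1) / dist_tol_mm ** 2
        dose_term = ((evl_flat - d_ref) / d_norm) ** 2
        gamma[i] = np.sqrt((dist_term + dose_term).min())
    return (gamma <= 1.0).mean()  # fraction of reference points passing
```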

Patient-Specific Deep Learning Tracking Framework for Real-Time 2D Target Localization in Magnetic Resonance Imaging-Guided Radiation Therapy.

Lombardo E, Velezmoro L, Marschner SN, Rabe M, Tejero C, Papadopoulou CI, Sui Z, Reiner M, Corradini S, Belka C, Kurz C, Riboldi M, Landry G

PubMed paper · Jul 15, 2025
We propose a tumor tracking framework for 2D cine magnetic resonance imaging (MRI) based on a pair of deep learning (DL) models relying on patient-specific (PS) training. The chosen DL models are: (1) an image registration transformer and (2) an auto-segmentation convolutional neural network (CNN). We collected over 1,400,000 cine MRI frames from 219 patients treated on a 0.35 T MRI-linac, plus 7,500 manually labeled frames from an additional 35 patients, subdivided into fine-tuning, validation, and testing sets. The transformer was first trained on the unlabeled data (without segmentations). We then continued training (with segmentations) either on the fine-tuning set or, for the PS models, on 8 randomly selected frames from the first 5 seconds of each patient's cine MRI. The PS auto-segmentation CNN was trained from scratch with the same 8 frames for each patient, without pre-training. Furthermore, we implemented B-spline image registration as a conventional model, as well as different baselines. Output segmentations of all models were compared on the testing set using the Dice similarity coefficient, the 50% and 95% Hausdorff distances (HD50%/HD95%), and the root-mean-square error of the target centroid in the superior-inferior direction. The PS transformer and CNN significantly outperformed all other models, achieving a median (interquartile range) Dice similarity coefficient of 0.92 (0.03)/0.90 (0.04), HD50% of 1.0 (0.1)/1.0 (0.4) mm, HD95% of 3.1 (1.9)/3.8 (2.0) mm, and root-mean-square error of the target centroid in the superior-inferior direction of 0.7 (0.4)/0.9 (1.0) mm on the testing set. Their inference time was about 36/8 ms per frame, and PS fine-tuning required 3 min for labeling and 8/4 min for training. The transformer was better than the CNN in 9/12 patients, the CNN better in 1/12 patients, and the 2 PS models achieved the same performance on the remaining 2/12 testing patients. For targets in the thorax, abdomen, and pelvis, we found the 2 PS DL models to provide accurate real-time target localization during MRI-guided radiotherapy.
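All four reported metrics can be computed from binary masks. A NumPy/SciPy sketch, assuming a voxel-set (rather than strictly surface-based) Hausdorff approximation and that array axis 0 is the superior-inferior direction:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_percentile(a, b, spacing, pct=95):
    """Symmetric percentile Hausdorff distance (pct=50 or 95) in mm."""
    # Distance maps to each mask, respecting anisotropic pixel spacing.
    dt_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    dt_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    d_ab = dt_b[a]  # distance of every voxel of a to the nearest voxel of b
    d_ba = dt_a[b]
    return np.percentile(np.concatenate([d_ab, d_ba]), pct)

def si_centroid_rmse(preds, gts, si_spacing_mm):
    """RMSE of the target centroid along the superior-inferior axis."""
    errs = [(ndimage.center_of_mass(p)[0] - ndimage.center_of_mass(g)[0])
            * si_spacing_mm for p, g in zip(preds, gts)]
    return float(np.sqrt(np.mean(np.square(errs))))
```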

OMT and tensor SVD-based deep learning model for segmentation and predicting genetic markers of glioma: A multicenter study.

Zhu Z, Wang H, Li T, Huang TM, Yang H, Tao Z, Tan ZH, Zhou J, Chen S, Ye M, Zhang Z, Li F, Liu D, Wang M, Lu J, Zhang W, Li X, Chen Q, Jiang Z, Chen F, Zhang X, Lin WW, Yau ST, Zhang B

PubMed paper · Jul 15, 2025
Glioma is the most common primary malignant brain tumor, and preoperative genetic profiling is essential for the management of glioma patients. Our study focused on tumor region segmentation and on predicting World Health Organization (WHO) grade, isocitrate dehydrogenase (IDH) mutation, and 1p/19q codeletion status using deep learning models on preoperative MRI. To achieve accurate tumor segmentation, we developed an optimal mass transport (OMT) approach to transform irregular MRI brain images into tensors. In addition, we proposed an algebraic preclassification (APC) model utilizing multimode OMT tensor singular value decomposition (SVD) to estimate preclassification probabilities. The fully automated deep learning model named OMT-APC was used for multitask classification. Our study incorporated preoperative brain MRI data from 3,565 glioma patients across 16 datasets spanning Asia, Europe, and America. Among these, 2,551 patients from 5 datasets were used for training and internal validation, while 1,014 patients from 11 datasets, including 242 patients from The Cancer Genome Atlas (TCGA), were used as an independent external test set. The OMT segmentation model achieved a mean lesion-wise Dice score of 0.880. The OMT-APC model was evaluated on the TCGA dataset, achieving accuracies of 0.855, 0.917, and 0.809, with AUC scores of 0.845, 0.908, and 0.769 for WHO grade, IDH mutation, and 1p/19q codeletion, respectively, outperforming the four radiologists in all tasks. These results highlight the effectiveness of our OMT and tensor SVD-based methods in brain tumor genetic profiling, suggesting promising applications for algebraic and geometric methods in medical image analysis.
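The tensor SVD behind the algebraic preclassification builds on the standard t-SVD, which takes an FFT along the third tensor mode and a matrix SVD of each frontal slice in the transform domain. A generic NumPy sketch of that construction; the paper's multimode OMT variant adds steps not shown here:

```python
import numpy as np

def tsvd(tensor):
    """Kilmer-Martin t-SVD of an (n1, n2, n3) array: A = U * S * V^T
    under the t-product. Generic construction, not the paper's code."""
    n1, n2, n3 = tensor.shape
    k = min(n1, n2)
    a_hat = np.fft.fft(tensor, axis=2)  # frontal slices in Fourier domain

    U = np.zeros((n1, k, n3), dtype=complex)
    S = np.zeros((k, k, n3), dtype=complex)
    V = np.zeros((n2, k, n3), dtype=complex)
    for i in range(n3):  # one matrix SVD per transformed frontal slice
        u, s, vh = np.linalg.svd(a_hat[:, :, i], full_matrices=False)
        U[:, :, i], S[:, :, i], V[:, :, i] = u, np.diag(s), vh.conj().T

    # Transform back; results are real up to floating-point error.
    U, S, V = (np.real(np.fft.ifft(x, axis=2)) for x in (U, S, V))
    return U, S, V
```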

Automated Whole-Liver Fat Quantification with Magnetic Resonance Imaging-Derived Proton Density Fat Fraction Map: A Prospective Study in Taiwan.

Wu CH, Yen KC, Wang LY, Hsieh PL, Wu WK, Lee PL, Liu CJ

PubMed paper · Jul 15, 2025
Magnetic resonance imaging (MRI) with a proton density fat fraction (PDFF) sequence is the most accurate noninvasive method for assessing hepatic steatosis. However, manual measurement on the PDFF map is time-consuming. This study aimed to validate automated whole-liver fat quantification for assessing hepatic steatosis with MRI-PDFF. In this prospective study, 80 patients were enrolled from August 2020 to January 2023. Baseline MRI-PDFF and magnetic resonance spectroscopy (MRS) data were collected. The analysis of MRI-PDFF included values from automated whole-liver segmentation (autoPDFF) and the average of measurements taken from eight segments (avePDFF). Twenty patients with autoPDFF values ≥10% who completed 24 weeks of exercise training were also included for chronologic evaluation. Correlation and concordance coefficients (r and ρ) among the values and their differences were calculated. There were strong correlations between autoPDFF and avePDFF, autoPDFF and MRS, and avePDFF and MRS (r=0.963, r=0.955, and r=0.977, respectively; all p<0.001). The autoPDFF values were also highly concordant with the avePDFF and MRS values (ρ=0.941 and ρ=0.942). The autoPDFF, avePDFF, and MRS values consistently decreased after 24 weeks of exercise. The change in autoPDFF was also highly correlated with the changes in avePDFF and MRS (r=0.961 and r=0.870; both p<0.001). Automated whole-liver fat quantification may be feasible for clinical trials and practice, yielding values that correlate and agree closely with the time-consuming manual measurements from the PDFF map and with the values from the highly complex processing of MRS (ClinicalTrials.gov identifier: NCT04463667).
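The reported r and ρ are Pearson correlation and a concordance coefficient, commonly Lin's concordance correlation coefficient, which penalizes systematic bias as well as scatter. A small sketch with hypothetical paired fat-fraction values (percent), not data from the study:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical paired measurements for illustration only.
auto = np.array([4.2, 11.8, 7.5, 15.1, 9.3])
ave = np.array([4.5, 11.2, 7.9, 14.6, 9.0])
r = np.corrcoef(auto, ave)[0, 1]  # Pearson: linear association
rho = lins_ccc(auto, ave)         # concordance: association plus agreement
```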

Flatten Wisely: How Patch Order Shapes Mamba-Powered Vision for MRI Segmentation

Osama Hardan, Omar Elshenhabi, Tamer Khattab, Mohamed Mabrok

arXiv preprint · Jul 15, 2025
Vision Mamba models promise transformer-level performance at linear computational cost, but their reliance on serializing 2D images into 1D sequences introduces a critical, yet overlooked, design choice: the patch scan order. In medical imaging, where modalities like brain MRI contain strong anatomical priors, this choice is non-trivial. This paper presents the first systematic study of how scan order impacts MRI segmentation. We introduce Multi-Scan 2D (MS2D), a parameter-free module for Mamba-based architectures that facilitates exploring diverse scan paths without additional computational cost. We conduct a large-scale benchmark of 21 scan strategies on three public datasets (BraTS 2020, ISLES 2022, LGG), covering over 70,000 slices. Our analysis shows conclusively that scan order is a statistically significant factor (Friedman test: $\chi^{2}_{20}=43.9, p=0.0016$), with performance varying by as much as 27 Dice points. Spatially contiguous paths -- simple horizontal and vertical rasters -- consistently outperform disjointed diagonal scans. We conclude that scan order is a powerful, cost-free hyperparameter, and provide an evidence-based shortlist of optimal paths to maximize the performance of Mamba models in medical imaging.
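In this setting a scan order is just a permutation of the flattened patch sequence, which is why a module like MS2D can be parameter-free. A sketch of three such orders and of applying and inverting a permutation around a sequence model; illustrative, not the authors' MS2D code:

```python
import numpy as np

def scan_orders(h, w):
    """Index permutations for flattening an h x w patch grid."""
    idx = np.arange(h * w).reshape(h, w)
    return {
        "horizontal": idx.ravel(),    # row-major raster
        "vertical": idx.T.ravel(),    # column-major raster
        "diagonal": np.concatenate(   # anti-diagonal sweep
            [np.diag(np.fliplr(idx), k) for k in range(w - 1, -h, -1)]),
    }

# Reorder a (sequence, channels) token array, then undo the permutation.
tokens = np.random.rand(6 * 8, 32)
order = scan_orders(6, 8)["vertical"]
scanned = tokens[order]            # would be fed to the Mamba block
restored = np.empty_like(scanned)
restored[order] = scanned          # inverse permutation
assert np.allclose(restored, tokens)
```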

LUMEN-A Deep Learning Pipeline for Analysis of the 3D Morphology of the Cerebral Lenticulostriate Arteries from Time-of-Flight 7T MRI.

Li R, Chatterjee S, Jiaerken Y, Zhou X, Radhakrishna C, Benjamin P, Nannoni S, Tozer DJ, Markus HS, Rodgers CT

PubMed paper · Jul 15, 2025
The lenticulostriate arteries (LSAs) supply critical subcortical brain structures and are affected in cerebral small vessel disease (CSVD). Changes in their morphology are linked to cardiovascular risk factors and may indicate early pathology. 7T time-of-flight MR angiography (TOF-MRA) enables clear LSA visualisation. We aimed to develop a semi-automated pipeline for quantifying 3D LSA morphology from 7T TOF-MRA in CSVD patients. We used data from a local 7T CSVD study to create a pipeline, LUMEN, comprising two stages: vessel segmentation and LSA quantification. For segmentation, we fine-tuned a deep learning model, DS6, and compared it against nnU-Net and a Frangi-filter pipeline, MSFDF. For quantification, centrelines of LSAs within the basal ganglia were extracted to compute branch counts, length, tortuosity, and maximum curvature. This pipeline was applied to 69 subjects, with results compared to traditional analysis measuring LSA morphology on 2D coronal maximum intensity projection (MIP) images. For vessel segmentation, fine-tuned DS6 achieved the highest test Dice score (0.814±0.029) and sensitivity, whereas nnU-Net achieved the best balanced average Hausdorff distance and precision. Visual inspection confirmed that DS6 was most sensitive in detecting LSAs with weak signals. Across the 69 subjects, the pipeline with DS6 identified 23.5±8.5 LSA branches. Branch length inside the basal ganglia was 26.4±3.5 mm, and tortuosity was 1.5±0.1. LSA metrics extracted from the 2D MIP analysis and our 3D analysis showed fair-to-moderate correlations, and outliers highlighted the added value of 3D analysis. This open-source deep-learning-based pipeline offers a validated tool for quantifying 3D LSA morphology in CSVD patients from 7T TOF-MRA for clinical research.
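Branch length and tortuosity are simple functions of an ordered centreline. A sketch assuming tortuosity is arc length over chord length (a common definition, which may not match the paper's exactly) and anisotropic voxel spacing:

```python
import numpy as np

def branch_metrics(points, spacing=(1.0, 1.0, 1.0)):
    """Length (mm) and tortuosity of one centreline branch.

    `points` is an (N, 3) array of ordered centreline voxel coordinates;
    `spacing` converts voxel indices to millimetres per axis.
    """
    p = np.asarray(points, float) * np.asarray(spacing, float)
    segments = np.diff(p, axis=0)
    arc_length = np.linalg.norm(segments, axis=1).sum()
    chord = np.linalg.norm(p[-1] - p[0])  # straight end-to-end distance
    return arc_length, arc_length / chord
```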

Poincaré guided geometric UNet for left atrial epicardial adipose tissue segmentation in Dixon MRI images.

Firouznia M, Ylipää E, Henningsson M, Carlhäll CJ

PubMed paper · Jul 15, 2025
Epicardial Adipose Tissue (EAT) is a recognized risk factor for cardiovascular diseases and plays a pivotal role in the pathophysiology of Atrial Fibrillation (AF). Accurate automatic segmentation of the EAT around the Left Atrium (LA) from Magnetic Resonance Imaging (MRI) data remains challenging. While Convolutional Neural Networks excel at multi-scale feature extraction using stacked convolutions, they struggle to capture long-range self-similarity and hierarchical relationships, which are essential in medical image segmentation. In this study, we present and validate PoinUNet, a deep learning model that integrates a Poincaré embedding layer into a 3D UNet to enhance LA wall and fat segmentation from Dixon MRI data. By using hyperbolic space learning, PoinUNet captures complex LA and EAT relationships and addresses class imbalance and fat geometry challenges using a new loss function. Sixty-six participants, including forty-eight AF patients, were scanned at 1.5T. The first network identified fat regions, while the second utilized Poincaré embeddings and convolutional layers for precise segmentation, enhanced by fat fraction maps. PoinUNet achieved a Dice Similarity Coefficient of 0.87 and a Hausdorff distance of 9.42 on the test set. This performance surpasses state-of-the-art methods, providing accurate quantification of the LA wall and LA EAT.
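A Poincaré embedding layer typically moves Euclidean feature vectors into the Poincaré ball through the exponential map at the origin. A PyTorch sketch of that standard map (textbook formula, not the paper's implementation), which could sit between encoder features and a hyperbolic head:

```python
import torch

def expmap0(v, c=1.0, eps=1e-6):
    """Exponential map at the origin of the Poincare ball, curvature -c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    mapped = torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)
    # Clamp just inside the unit ball for numerical stability.
    max_norm = (1.0 - 1e-5) / sqrt_c
    scale = (max_norm / mapped.norm(dim=-1, keepdim=True)).clamp(max=1.0)
    return mapped * scale
```

Distances in the ball grow rapidly toward the boundary, which is what lets hyperbolic layers encode hierarchical relationships more compactly than Euclidean ones.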

Explainable AI for Precision Oncology: A Task-Specific Approach Using Imaging, Multi-omics, and Clinical Data

Park, Y., Park, S., Bae, E.

medRxiv preprint · Jul 14, 2025
Despite continued advances in oncology, cancer remains a leading cause of global mortality, highlighting the need for diagnostic and prognostic tools that are both accurate and interpretable. Unimodal approaches often fail to capture the biological and clinical complexity of tumors. In this study, we present a suite of task-specific AI models that leverage CT imaging, multi-omics profiles, and structured clinical data to address distinct challenges in segmentation, classification, and prognosis. We developed three independent models across large public datasets. Task 1 applied a 3D U-Net to segment pancreatic tumors from CT scans, achieving a Dice Similarity Coefficient (DSC) of 0.7062. Task 2 employed a hierarchical ensemble of omics-based classifiers to distinguish tumor from normal tissue and classify six major cancer types with 98.67% accuracy. Task 3 benchmarked classical machine learning models on clinical data for prognosis prediction across three cancers (LIHC, KIRC, STAD), achieving strong performance (e.g., C-index of 0.820 in KIRC, AUC of 0.978 in LIHC). Across all tasks, explainable AI methods such as SHAP and attention-based visualization enabled transparent interpretation of model outputs. These results demonstrate the value of tailored, modality-aware models and underscore the clinical potential of such AI systems for precision oncology.

Technical foundations:
- Segmentation (Task 1): A custom 3D U-Net was trained using the Task07_Pancreas dataset from the Medical Segmentation Decathlon (MSD). CT images were preprocessed with MONAI-based pipelines, resampled to (64, 96, 96) voxels, and intensity-windowed to an HU range of -100 to 240.
- Classification (Task 2): Multi-omics data from TCGA (gene expression, methylation, miRNA, CNV, and mutation profiles) were log-transformed and normalized. Five modality-specific LightGBM classifiers generated meta-features for a late-fusion ensemble. Stratified 5-fold cross-validation was used for evaluation.
- Prognosis (Task 3): Clinical variables from TCGA were curated and imputed (median/mode), with high-missing-rate columns removed. Survival models (e.g., Cox-PH, Random Forest, XGBoost) were trained with early stopping. No omics or imaging data were used in this task.
- Interpretability: SHAP values were computed for all tree-based models, and attention-based overlays were used in imaging tasks to visualize salient regions.
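Of the interpretability tooling named above, SHAP for tree ensembles is the most directly reproducible piece. A minimal sketch on synthetic stand-in data for a single omics modality, using lightgbm and shap; hyperparameters are placeholders, not the study's:

```python
import numpy as np
import lightgbm as lgb
import shap

# Synthetic stand-in for one normalized omics matrix (samples x features).
rng = np.random.default_rng(0)
X = rng.random((200, 50))
y = rng.integers(0, 2, size=200)

model = lgb.LGBMClassifier(n_estimators=200).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Older shap versions return one array per class for binary models.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
shap.summary_plot(sv, X)  # global view of per-feature contributions
```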

Graph-based Multi-Modal Interaction Lightweight Network for Brain Tumor Segmentation (GMLN-BTS) in Edge Iterative MRI Lesion Localization System (EdgeIMLocSys)

Guohao Huo, Ruiting Dai, Hao Tang

arXiv preprint · Jul 14, 2025
Brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning, yet the variability in imaging quality across different MRI scanners presents significant challenges to model generalization. To address this, we propose the Edge Iterative MRI Lesion Localization System (EdgeIMLocSys), which integrates Continuous Learning from Human Feedback to adaptively fine-tune segmentation models based on clinician feedback, thereby enhancing robustness to scanner-specific imaging characteristics. Central to this system is the Graph-based Multi-Modal Interaction Lightweight Network for Brain Tumor Segmentation (GMLN-BTS), which employs a Modality-Aware Adaptive Encoder (M2AE) to extract multi-scale semantic features efficiently, and a Graph-based Multi-Modal Collaborative Interaction Module (G2MCIM) to model complementary cross-modal relationships via graph structures. Additionally, we introduce a novel Voxel Refinement UpSampling Module (VRUM) that synergistically combines linear interpolation and multi-scale transposed convolutions to suppress artifacts while preserving high-frequency details, improving segmentation boundary accuracy. Our proposed GMLN-BTS model achieves a Dice score of 85.1% on the BraTS2017 dataset with only 4.58 million parameters, representing a 98% reduction compared to mainstream 3D Transformer models, and significantly outperforms existing lightweight approaches. This work demonstrates a synergistic breakthrough in achieving high-accuracy, resource-efficient brain tumor segmentation suitable for deployment in resource-constrained clinical environments.
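The stated VRUM recipe, linear interpolation fused with multi-scale transposed convolutions, can be sketched as a three-branch block; the kernel sizes, branch count, and 1x1x1 fusion below are guesses for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn

class UpsampleFusionSketch(nn.Module):
    """Guessed VRUM-style x2 upsampler: a smooth trilinear path plus two
    learned transposed-convolution paths, fused by a 1x1x1 convolution."""

    def __init__(self, channels):
        super().__init__()
        self.interp = nn.Upsample(scale_factor=2, mode="trilinear",
                                  align_corners=False)
        self.deconv3 = nn.ConvTranspose3d(channels, channels, kernel_size=3,
                                          stride=2, padding=1, output_padding=1)
        self.deconv5 = nn.ConvTranspose3d(channels, channels, kernel_size=5,
                                          stride=2, padding=2, output_padding=1)
        self.fuse = nn.Conv3d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        # All three branches double the spatial size; concatenate and fuse.
        return self.fuse(torch.cat(
            [self.interp(x), self.deconv3(x), self.deconv5(x)], dim=1))
```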