Page 5 of 65646 results

Development of Multiparametric Prognostic Models for Stereotactic Magnetic Resonance Guided Radiation Therapy of Pancreatic Cancers.

Michalet M, Valenzuela G, Nougaret S, Tardieu M, Azria D, Riou O

PubMed | Jul 1 2025
Stereotactic magnetic resonance guided adaptive radiation therapy (SMART) is a new option for local treatment of unresectable pancreatic ductal adenocarcinoma, showing promising survival and local control (LC) results. Nevertheless, some patients experience early local and/or metastatic recurrence leading to death. We aimed to develop multiparametric prognostic models for these patients. All patients treated at our institution with SMART for unresectable pancreatic ductal adenocarcinoma between October 21, 2019, and August 5, 2022 were included. Initial clinical characteristics as well as SMART dosimetric data were recorded. Radiomics features were extracted from 0.35-T simulation magnetic resonance imaging. These data were combined to build prognostic models of overall survival (OS) and LC using machine learning algorithms. Eighty-three patients with a median age of 64.9 years were included. Most patients (77%) had locally advanced pancreatic cancer. The median OS was 21 months after SMART completion and 27 months after chemotherapy initiation. The 6- and 12-month post-SMART OS was 87.8% (95% CI, 78.2%-93.2%) and 70.9% (95% CI, 58.8%-80.0%), respectively. The best model for OS was Cox proportional hazards survival analysis using clinical data, with an inverse-probability-of-censoring-weighted concordance index of 0.87. Tested on its 12-month OS prediction, this model performed well (sensitivity 67%, specificity 71%, and area under the curve 0.90). The median LC was not reached. The 6- and 12-month post-SMART LC was 92.4% (95% CI, 83.7%-96.6%) and 76.3% (95% CI, 62.6%-85.5%), respectively. The best model for LC was component-wise gradient boosting survival analysis using clinical and radiomics data, with an inverse-probability-of-censoring-weighted concordance index of 0.80. Tested on its 9-month LC prediction, this model performed well (sensitivity 50%, specificity 97%, and area under the curve 0.78).
Combining clinical and radiomics data in multiparametric prognostic models using machine learning algorithms showed good performance for the prediction of OS and LC. External validation of these models will be needed.
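The survival models above are ranked by a concordance index. As background, here is a minimal sketch of the unweighted Harrell's concordance index; the study's inverse-probability-of-censoring-weighted variant additionally reweights comparable pairs by estimated censoring probabilities, which this sketch omits. Data and function names are illustrative, not the study's code.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: fraction of comparable patient pairs in which the
    higher predicted risk corresponds to the earlier observed event.
    events[i] is 1 if the event (e.g. death) was observed, 0 if censored."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair (i, j) is comparable only if i's event was observed
            # and occurred before j's (possibly censored) time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

# toy example: risk scores perfectly ordered against event times
times = [2, 5, 7, 9]
events = [1, 1, 0, 1]   # third patient censored
scores = [0.9, 0.7, 0.5, 0.2]
print(concordance_index(times, events, scores))  # 1.0 for a perfect ranking
```

A value of 0.5 corresponds to random ranking; the reported 0.87 (OS) and 0.80 (LC) indicate strong discrimination.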

Deformable image registration with strategic integration pyramid framework for brain MRI.

Zhang Y, Zhu Q, Xie B, Li T

PubMed | Jul 1 2025
Medical image registration plays a crucial role in medical imaging, with a wide range of clinical applications. In this context, brain MRI registration is commonly used in clinical practice for accurate diagnosis and treatment planning. In recent years, deep learning-based deformable registration methods have achieved remarkable results. However, existing methods are not flexible or efficient in handling the feature relationships of anatomical structures at different levels when dealing with large deformations. To address this limitation, we propose a novel strategic integration registration network based on the pyramid structure. Our strategy involves two kinds of integration: fusion of features at different scales, and integration of different neural network architectures. Specifically, we design a CNN encoder and a Transformer decoder to efficiently extract and enhance both global and local features. Moreover, to overcome the error accumulation inherent in pyramid structures, we introduce progressive optimization iterations at the lowest scale for deformation field generation. This approach handles the spatial relationships of images more efficiently while improving accuracy. We conduct extensive evaluations across multiple brain MRI datasets, and experimental results show that our method outperforms other deep learning-based methods in registration accuracy and robustness.
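Whatever network produces the deformation field, the final step of deformable registration is resampling the moving image through that field. A minimal sketch of that resampling for a dense 2D displacement field with bilinear interpolation (the paper's own network and warping layer are not shown; names here are illustrative):

```python
import math

def warp_bilinear(image, disp):
    """Warp a 2D image (list of rows) by a dense displacement field.
    disp[y][x] = (dy, dx): the output at (y, x) samples the input at
    (y + dy, x + dx) via bilinear interpolation; out-of-bounds reads are 0."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = disp[y][x]
            sy, sx = y + dy, x + dx
            y0, x0 = math.floor(sy), math.floor(sx)
            wy, wx = sy - y0, sx - x0
            val = 0.0
            # blend the four neighboring pixels by their fractional weights
            for yy, fy in ((y0, 1 - wy), (y0 + 1, wy)):
                for xx, fx in ((x0, 1 - wx), (x0 + 1, wx)):
                    if 0 <= yy < h and 0 <= xx < w:
                        val += fy * fx * image[yy][xx]
            out[y][x] = val
    return out

img = [[1.0, 2.0], [3.0, 4.0]]
identity = [[(0, 0), (0, 0)], [(0, 0), (0, 0)]]
print(warp_bilinear(img, identity))  # unchanged under the zero field
```

In a pyramid framework this warp is applied at each scale, which is why errors in coarse-level fields can accumulate unless corrected, as the paper's progressive optimization aims to do.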

Multi-modal MRI synthesis with conditional latent diffusion models for data augmentation in tumor segmentation.

Kebaili A, Lapuyade-Lahorgue J, Vera P, Ruan S

PubMed | Jul 1 2025
Multimodality is often necessary for improving object segmentation tasks, especially multilabel tasks such as tumor segmentation, which is crucial for clinical diagnosis and treatment planning. However, a major challenge in utilizing multimodality with deep learning remains: the limited availability of annotated training data, primarily due to the time-consuming acquisition process and the necessity for expert annotations. Although deep learning has significantly advanced many tasks in medical imaging, conventional augmentation techniques are often insufficient due to the inherent complexity of volumetric medical data. To address this problem, we propose an innovative slice-based latent diffusion architecture for the generation of 3D multi-modal images and their corresponding multi-label masks. Our approach enables the simultaneous generation of the image and mask in a slice-by-slice fashion, leveraging a positional encoding and a Latent Aggregation module to maintain spatial coherence and capture slice sequentiality. This method effectively reduces the computational complexity and memory demands typically associated with diffusion models. Additionally, we condition our architecture on tumor characteristics to generate a diverse array of tumor variations, and enhance texture using a refining module that acts like a super-resolution mechanism, mitigating the blurriness caused by data scarcity in the autoencoder. We evaluate the effectiveness of our synthesized volumes on the BRATS2021 dataset, segmenting the tumor with three tissue labels, and compare against other state-of-the-art diffusion models through a downstream segmentation task, demonstrating the superior performance and efficiency of our method. While our primary application is tumor segmentation, this method can be readily adapted to other modalities. Code is available at https://github.com/Arksyd96/multi-modal-mri-and-mask-synthesis-with-conditional-slice-based-ldm.
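The positional encoding that tells the generator where a slice sits in the volume is commonly the sinusoidal scheme from the Transformer literature. A minimal sketch of that standard form, assuming this is the variant used (the paper does not spell it out; function names are illustrative):

```python
import math

def slice_positional_encoding(slice_index, dim):
    """Sinusoidal positional encoding of a slice index: interleaved
    sin/cos at geometrically spaced frequencies, so nearby slices get
    similar vectors and ordering is recoverable."""
    pe = []
    for i in range(dim // 2):
        freq = 1.0 / (10000 ** (2 * i / dim))
        pe.append(math.sin(slice_index * freq))
        pe.append(math.cos(slice_index * freq))
    return pe

# slice 0 encodes to alternating sin(0)=0, cos(0)=1
print(slice_positional_encoding(0, 8))
```

Concatenating or adding such a vector to each slice's latent lets a 2D slice-wise generator keep 3D sequentiality without the memory cost of a full 3D diffusion model.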

Radiomics-based MRI model to predict hypoperfusion in lacunar infarction.

Chang CP, Huang YC, Tsai YH, Lin LC, Yang JT, Wu KH, Wu PH, Peng SJ

PubMed | Jul 1 2025
Approximately 20-30% of patients with acute ischemic stroke due to lacunar infarction experience early neurological deterioration (END) within the first three days after onset, leading to disability or more severe sequelae. Hemodynamic perfusion deficits may play a crucial role in END, causing growth of the infarcted area, functional impairment, and even poor long-term prognosis. It is therefore vitally important to predict which patients are at risk of perfusion deficits, so that treatment and close monitoring can be initiated early in preparation for potential reperfusion. Our goal was to use radiomic features from magnetic resonance imaging (MRI) and machine learning techniques to develop a predictive model for hypoperfusion. Between January 2011 and December 2020, we retrospectively collected 92 patients with lacunar stroke who underwent MRI within 48 h and had clinical laboratory values, follow-up prognosis records, and advanced perfusion imaging confirming the presence or absence of hypoperfusion. From the initial MRI of these patients, radiomics features were extracted and selected from Diffusion Weighted Imaging (DWI), Apparent Diffusion Coefficient (ADC), and Fluid Attenuated Inversion Recovery (FLAIR) sequences. The data were divided into an 80% training set and a 20% testing set, and a hypoperfusion prediction model was developed using machine learning. The model trained on the DWI + FLAIR sequences showed superior performance, with an accuracy of 84.1%, AUC 0.92, recall 79.5%, specificity 87.8%, precision 83.8%, and F1 score 81.2%. Statistically significant clinical factors between patients with and without hypoperfusion included the NIHSS score and the size of the lacunar infarction. Combining these two features with the top seven weighted radiomics features from the DWI + FLAIR sequences, a total of nine features were used to develop a new prediction model through machine learning.
In the test set, this model achieved an accuracy of 88.9%, AUC 0.91, recall 87.5%, specificity 90.0%, precision 87.5%, and F1 score 87.5%. By applying radiomics techniques to DWI and FLAIR sequences from the MRI of patients with lacunar stroke, it is possible to predict the presence of hypoperfusion, which warrants close monitoring to prevent deterioration of clinical symptoms. Incorporating stroke volume and NIHSS scores into the prediction model enhances its performance. Larger studies are required to validate these findings.
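The metrics reported above all derive from a single binary confusion matrix. A minimal sketch of their definitions, with toy counts (tp=7, fp=1, tn=9, fn=1 over an 18-patient test set) that happen to reproduce the reported figures; the study's actual confusion matrix is not given, so these counts are purely illustrative:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, specificity, precision, f1

acc, rec, spec, prec, f1 = binary_metrics(tp=7, fp=1, tn=9, fn=1)
print(f"acc={acc:.3f} recall={rec:.3f} spec={spec:.3f} "
      f"prec={prec:.3f} f1={f1:.3f}")
# → acc=0.889 recall=0.875 spec=0.900 prec=0.875 f1=0.875
```

Note that F1 equals precision and recall here only because the two coincide; in general it is their harmonic mean.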

TCDE-Net: An unsupervised dual-encoder network for 3D brain medical image registration.

Yang X, Li D, Deng L, Huang S, Wang J

PubMed | Jul 1 2025
Medical image registration is a critical task in aligning medical images from different time points, modalities, or individuals, essential for accurate diagnosis and treatment planning. Despite significant progress in deep learning-based registration methods, current approaches still face considerable challenges, such as insufficient capture of local details, difficulty in effectively modeling global contextual information, and limited robustness in handling complex deformations. These limitations hinder the precision of high-resolution registration, particularly when dealing with medical images with intricate structures. To address these issues, this paper presents a novel registration network (TCDE-Net), an unsupervised medical image registration method based on a dual-encoder architecture. The dual encoders complement each other in feature extraction, enabling the model to effectively handle large-scale nonlinear deformations and capture intricate local details, thereby enhancing registration accuracy. Additionally, the detail-enhancement attention module aids in restoring fine-grained features, improving the network's capability to address complex deformations such as those at gray-white matter boundaries. Experimental results on the OASIS, IXI, and Hammers-n30r95 3D brain MR dataset demonstrate that this method outperforms commonly used registration techniques across multiple evaluation metrics, achieving superior performance and robustness. Our code is available at https://github.com/muzidongxue/TCDE-Net.

Efficient Brain Tumor Detection and Segmentation Using DN-MRCNN With Enhanced Imaging Technique.

N JS, Ayothi S

PubMed | Jul 1 2025
This article proposes a method called DenseNet 121-Mask R-CNN (DN-MRCNN) for the detection and segmentation of brain tumors. The main objective is to reduce the execution time and accurately locate and segment the tumor, including its subareas. The input images undergo preprocessing such as median filtering and Gaussian filtering to reduce noise and artifacts and improve image quality. Histogram equalization is used to enhance the tumor regions, and image augmentation is employed to improve the model's diversity and robustness. To capture important patterns, a gated axial self-attention layer is added to the DenseNet 121 model, allowing for increased attention during analysis of the input images. For accurate segmentation, bounding boxes are generated using a Region Proposal Network with anchor customization. Post-processing, specifically non-maximum suppression, is performed to discard redundant bounding boxes caused by overlapping regions. The Mask R-CNN model is used to accurately detect and segment the whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The proposed model is evaluated using the BraTS 2019 dataset, the UCSF-PDGM dataset, and the UPENN-GBM dataset, which are commonly used for brain tumor detection and segmentation.
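The non-maximum suppression step mentioned above is a standard greedy procedure: keep the highest-scoring box, discard any remaining box that overlaps it beyond an IoU threshold, and repeat. A minimal sketch (box format and threshold are illustrative, not the paper's settings):

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: visit boxes by descending score, keep a box only if it
    does not overlap an already-kept box above the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # → [0, 2]: the near-duplicate is dropped
```

The second box overlaps the first with IoU ≈ 0.68, so it is suppressed, while the distant third box survives.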

Dynamic glucose enhanced imaging using direct water saturation.

Knutsson L, Yadav NN, Mohammed Ali S, Kamson DO, Demetriou E, Seidemo A, Blair L, Lin DD, Laterra J, van Zijl PCM

PubMed | Jul 1 2025
Dynamic glucose enhanced (DGE) MRI studies employ CEST or spin lock (CESL) to study glucose uptake. Currently, these methods are hampered by low effect size and sensitivity to motion. To overcome this, we propose to utilize exchange-based linewidth (LW) broadening of the direct water saturation (DS) curve of the water saturation spectrum (Z-spectrum) during and after glucose infusion (DS-DGE MRI). To estimate the glucose-infusion-induced LW changes (ΔLW), Bloch-McConnell simulations were performed for normoglycemia and hyperglycemia in blood, gray matter (GM), white matter (WM), CSF, and malignant tumor tissue. Whole-brain DS-DGE imaging was implemented at 3 T using dynamic Z-spectral acquisitions (1.2 s per offset frequency, 38 s per spectrum) and assessed on four brain tumor patients using infusion of 35 g of D-glucose. To assess ΔLW, a deep learning-based Lorentzian fitting approach was used on voxel-based DS spectra acquired before, during, and post-infusion. Area-under-the-curve (AUC) images, obtained from the dynamic ΔLW time curves, were compared qualitatively to perfusion-weighted imaging parametric maps. In simulations, ΔLW was 1.3%, 0.30%, 0.29/0.34%, 7.5%, and 13% in arterial blood, venous blood, GM/WM, malignant tumor tissue, and CSF, respectively. In vivo, ΔLW was approximately 1% in GM/WM, 5% to 20% for different tumor types, and 40% in CSF. The resulting DS-DGE AUC maps clearly outlined lesion areas. DS-DGE MRI is highly promising for assessing D-glucose uptake. Initial results in brain tumor patients show high-quality AUC maps of glucose-induced line broadening and DGE-based lesion enhancement similar and/or complementary to perfusion-weighted imaging.
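Two building blocks of the DS-DGE analysis above are the Lorentzian lineshape fitted to the direct-water-saturation spectrum (here the analytic form only; the paper fits it with a deep learning approach) and the area under the dynamic ΔLW curve. A minimal sketch, with illustrative names and units:

```python
def lorentzian(offset, amplitude, center, linewidth):
    """Lorentzian lineshape as a function of saturation offset;
    linewidth is the full width at half maximum (FWHM)."""
    hw = linewidth / 2.0  # half-width
    return amplitude * hw ** 2 / ((offset - center) ** 2 + hw ** 2)

def auc_trapezoid(times, values):
    """Area under a dynamic curve (e.g. ΔLW over time) by the
    trapezoidal rule."""
    return sum((values[i] + values[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# at the center offset the line reaches its full amplitude;
# at one half-width away it falls to half, by construction
print(lorentzian(0.0, 1.0, 0.0, 2.0), lorentzian(1.0, 1.0, 0.0, 2.0))
```

Glucose exchange broadens the fitted linewidth, so ΔLW tracked over the infusion and integrated into an AUC map highlights tissue with high glucose uptake.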

SHFormer: Dynamic spectral filtering convolutional neural network and high-pass kernel generation transformer for adaptive MRI reconstruction.

Ramanarayanan S, G S R, Fahim MA, Ram K, Venkatesan R, Sivaprakasam M

PubMed | Jul 1 2025
Attention Mechanism (AM) selectively focuses on essential information for imaging tasks and captures relationships between regions from distant pixel neighborhoods to compute feature representations. Accelerated magnetic resonance image (MRI) reconstruction can benefit from AM, as the imaging process involves acquiring Fourier domain measurements that influence the image representation in a non-local manner. However, AM-based models are more adept at capturing low-frequency information and have limited capacity in constructing high-frequency representations, restricting the models to smooth reconstruction. Secondly, AM-based models need mode-specific retraining for multimodal MRI data as their knowledge is restricted to local contextual variations within modes that might be inadequate to capture the diverse transferable features across heterogeneous data domains. To address these challenges, we propose a neuromodulation-based discriminative multi-spectral AM for scalable MRI reconstruction, that can (i) propagate the context-aware high-frequency details for high-quality image reconstruction, and (ii) capture features reusable to deviated unseen domains in multimodal MRI, to offer high practical value for the healthcare industry and researchers. The proposed network consists of a spectral filtering convolutional neural network to capture mode-specific transferable features to generalize to deviated MRI data domains and a dynamic high-pass kernel generation transformer that focuses on high-frequency details for improved reconstruction. We have evaluated our model on various aspects, such as comparative studies in supervised and self-supervised learning, diffusion model-based training, closed-set and open-set generalization under heterogeneous MRI data, and interpretation-based analysis. Our results show that the proposed method offers scalable and high-quality reconstruction with best improvement margins of ∼1 dB in PSNR and ∼0.01 in SSIM under unseen scenarios. 
Our code is available at https://github.com/sriprabhar/SHFormer.
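The "∼1 dB in PSNR" improvement quoted above refers to the standard peak signal-to-noise ratio between a reconstruction and its reference. A minimal sketch of that definition (images flattened to lists for brevity; names illustrative):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images, here given as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float('inf')  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# a uniform error of 0.1 on a [0, 1] scale gives MSE 0.01, i.e. 20 dB
print(psnr([0.5, 0.5, 0.5], [0.6, 0.6, 0.6]))
```

Because the scale is logarithmic, a 1 dB gain corresponds to roughly a 21% reduction in mean squared error.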

Comprehensive evaluation of pipelines for classification of psychiatric disorders using multi-site resting-state fMRI datasets.

Takahara Y, Kashiwagi Y, Tokuda T, Yoshimoto J, Sakai Y, Yamashita A, Yoshioka T, Takahashi H, Mizuta H, Kasai K, Kunimitsu A, Okada N, Itai E, Shinzato H, Yokoyama S, Masuda Y, Mitsuyama Y, Okada G, Okamoto Y, Itahashi T, Ohta H, Hashimoto RI, Harada K, Yamagata H, Matsubara T, Matsuo K, Tanaka SC, Imamizu H, Ogawa K, Momosaki S, Kawato M, Yamashita O

PubMed | Jul 1 2025
Objective classification biomarkers that are developed using resting-state functional magnetic resonance imaging (rs-fMRI) data are expected to contribute to more effective treatment for psychiatric disorders. Unfortunately, no widely accepted biomarkers are available at present, partially because of the large variety of analysis pipelines for their development. In this study, we comprehensively evaluated analysis pipelines using a large-scale, multi-site fMRI dataset for major depressive disorder (MDD). We explored combinations of options in four sub-processes of the analysis pipelines: six types of brain parcellation, four types of functional connectivity (FC) estimations, three types of site-difference harmonization, and five types of machine-learning methods. A total of 360 different MDD classification biomarkers were constructed using the SRPBS dataset acquired with unified protocols (713 participants from four sites) as the discovery dataset, and datasets from other projects acquired with heterogeneous protocols (449 participants from four sites) were used for independent validation. We repeated the procedure after swapping the roles of the two datasets to identify superior pipelines, regardless of the discovery dataset. The classification results of the top 10 biomarkers showed high similarity, and weight similarity was observed between eight of the biomarkers, except for two that used both data-driven parcellation and FC computation. We applied the top 10 pipelines to the datasets of other psychiatric disorders (autism spectrum disorder and schizophrenia), and eight of the biomarkers exhibited sufficient classification performance for both disorders. Our results will be useful for establishing a standardized pipeline for classification biomarkers.
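One of the pipeline sub-processes above is functional connectivity (FC) estimation; the simplest of the four FC options evaluated in such studies is typically the pairwise Pearson correlation between parcel time series. A minimal sketch of that baseline (assuming full correlation; the study also compares other estimators not shown here):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def fc_matrix(roi_timeseries):
    """Functional connectivity matrix: pairwise Pearson correlation
    between ROI time series (one list per parcel)."""
    n = len(roi_timeseries)
    return [[pearson(roi_timeseries[i], roi_timeseries[j]) for j in range(n)]
            for i in range(n)]

ts = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
fc = fc_matrix(ts)  # parcels 0 and 1 co-vary; parcel 2 anti-correlates
```

The upper triangle of this symmetric matrix is then vectorized into the feature set fed to the machine-learning classifiers, after site-difference harmonization.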

Dual-type deep learning-based image reconstruction for advanced denoising and super-resolution processing in head and neck T2-weighted imaging.

Fujima N, Shimizu Y, Ikebe Y, Kameda H, Harada T, Tsushima N, Kano S, Homma A, Kwon J, Yoneyama M, Kudo K

PubMed | Jul 1 2025
To assess the utility of dual-type deep learning (DL)-based image reconstruction, combining DL-based image denoising and super-resolution processing, in comparison with conventionally reconstructed images in head and neck fat-suppressed (Fs) T2-weighted imaging (T2WI). We retrospectively analyzed 43 patients who underwent head and neck Fs-T2WI for assessment of head and neck lesions. All patients underwent two sets of Fs-T2WI scans, one with conventional and one with DL-based reconstruction. The Fs-T2WI with DL-based reconstruction was acquired with a 30% reduction in spatial resolution along both the x- and y-axes, shortening the scan time. Qualitative and quantitative assessments were performed for both the conventional and DL-based reconstructions. For the qualitative assessment, we visually evaluated overall image quality, visibility of anatomical structures, degree of artifact(s), lesion conspicuity, and lesion edge sharpness on a five-point scale. For the quantitative assessment, we measured the signal-to-noise ratio (SNR) of the lesion and the contrast-to-noise ratio (CNR) between the lesion and the adjacent or nearest muscle. In the qualitative analysis, significant differences were observed between conventional and DL-based reconstruction in all evaluation items except the degree of artifact(s) (p < 0.001). In the quantitative analysis, a significant difference in SNR was observed between the conventional (21.4 ± 14.7) and DL-based (26.2 ± 13.5) reconstructions (p < 0.001). In the CNR assessment, the CNR between the lesion and adjacent or nearest muscle in the DL-based Fs-T2WI (16.8 ± 11.6) was significantly higher than in the conventional Fs-T2WI (14.2 ± 12.9) (p < 0.001).
Dual-type DL-based image reconstruction by an effective denoising and super-resolution process successfully provided high image quality in head and neck Fs-T2WI with a shortened scan time compared to the conventional imaging method.
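The SNR and CNR figures compared above follow the usual ROI-based definitions, though exact conventions (e.g. which noise estimate is used) vary between studies. A minimal sketch, assuming noise is summarized by a single standard deviation from a background ROI; names and values are illustrative:

```python
def snr(lesion_signals, noise_sd):
    """Signal-to-noise ratio: mean signal in the lesion ROI divided by
    the noise standard deviation."""
    return sum(lesion_signals) / len(lesion_signals) / noise_sd

def cnr(lesion_signals, muscle_signals, noise_sd):
    """Contrast-to-noise ratio: absolute difference between the lesion
    and reference-muscle ROI means, divided by the noise SD."""
    mean_lesion = sum(lesion_signals) / len(lesion_signals)
    mean_muscle = sum(muscle_signals) / len(muscle_signals)
    return abs(mean_lesion - mean_muscle) / noise_sd

# toy ROIs: lesion mean 100, muscle mean 60, noise SD 10
print(snr([100, 110, 90], 10), cnr([100, 100], [60, 60], 10))
```

Under these definitions, DL denoising raises SNR by lowering the effective noise SD, which in turn lifts CNR for the same lesion-muscle contrast, consistent with the study's findings.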
