Page 70 of 1241232 results

A Deep Learning Based Method for Fast Registration of Cardiac Magnetic Resonance Images

Benjamin Graham

arxiv logopreprintJun 23 2025
Image registration is used in many medical image analysis applications, such as tracking the motion of tissue in cardiac images, where cardiac kinematics can be an indicator of tissue health. Registration is a challenging problem for deep learning algorithms because ground-truth transformations are not feasible to create, and because multiple transformations can potentially produce images that appear equally well aligned with the target. Unsupervised methods have been proposed to learn to predict effective transformations, but these methods take significantly longer to run than established baseline methods. For a deep learning method to see adoption in wider research and clinical settings, it should be designed to run in a reasonable time on common, mid-level hardware. Fast methods have been proposed for image registration but often rely on patch-based processing, which can reduce registration accuracy for a highly dynamic organ such as the heart. In this thesis, a fast, volumetric registration model is proposed for quantifying cardiac strain. The proposed deep learning neural network (DLNN) uses an architecture that computes convolutions very efficiently, allowing the model to achieve registration fidelity similar to other state-of-the-art models in a fraction of the inference time. The proposed fast and lightweight registration (FLIR) model is used to predict tissue motion, which is then used to quantify the non-uniform strain experienced by the tissue. For acquisitions taken from the same patient at approximately the same time, strain values measured between the acquisitions would be expected to differ very little. By this metric, strain values computed with the FLIR method are shown to be very consistent.
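Once a registration model predicts a displacement field, strain follows from its spatial gradients. A minimal sketch (not the FLIR model itself) of the infinitesimal strain tensor E = ½(∇u + ∇uᵀ) for a 2-D displacement field, assuming pixel units:

```python
import numpy as np

def small_strain(u):
    """Infinitesimal strain tensor from a 2-D displacement field.

    u: array of shape (2, H, W) holding (u_y, u_x) displacements in pixels.
    Returns E of shape (2, 2, H, W) with E = 0.5 * (grad u + grad u^T).
    """
    # du[i][j] = d u_i / d x_j, approximated with central differences
    du = np.array([np.gradient(u[i]) for i in range(2)])  # (2, 2, H, W)
    return 0.5 * (du + du.transpose(1, 0, 2, 3))

# Uniform 1% stretch along x: u_x = 0.01 * x, u_y = 0
H, W = 8, 8
ys, xs = np.mgrid[0:H, 0:W].astype(float)
u = np.stack([np.zeros_like(ys), 0.01 * xs])
E = small_strain(u)
# E[1, 1] is the xx strain component, ~0.01 everywhere for this field
```

For a volumetric model the same construction extends to a (3, D, H, W) field with a 3×3 tensor per voxel.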

VHU-Net: Variational Hadamard U-Net for Body MRI Bias Field Correction

Xin Zhu

arxiv logopreprintJun 23 2025
Bias field artifacts in magnetic resonance imaging (MRI) scans introduce spatially smooth intensity inhomogeneities that degrade image quality and hinder downstream analysis. To address this challenge, we propose a novel variational Hadamard U-Net (VHU-Net) for effective body MRI bias field correction. The encoder comprises multiple convolutional Hadamard transform blocks (ConvHTBlocks), each integrating convolutional layers with a Hadamard transform (HT) layer. Specifically, the HT layer performs channel-wise frequency decomposition to isolate low-frequency components, while a subsequent scaling layer and semi-soft thresholding mechanism suppress redundant high-frequency noise. To compensate for the HT layer's inability to model inter-channel dependencies, the decoder incorporates an inverse HT-reconstructed transformer block, enabling global, frequency-aware attention for the recovery of spatially consistent bias fields. The stacked decoder ConvHTBlocks further enhance the capacity to reconstruct the underlying ground-truth bias field. Building on the principles of variational inference, we formulate a new evidence lower bound (ELBO) as the training objective, promoting sparsity in the latent space while ensuring accurate bias field estimation. Comprehensive experiments on abdominal and prostate MRI datasets demonstrate the superiority of VHU-Net over existing state-of-the-art methods in terms of intensity uniformity, signal fidelity, and tissue contrast. Moreover, the corrected images yield substantial downstream improvements in segmentation accuracy. Our framework offers computational efficiency, interpretability, and robust performance across multi-center datasets, making it suitable for clinical deployment.
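As an illustration of the channel-wise frequency decomposition the ConvHTBlocks rely on, here is a minimal fast Walsh-Hadamard transform, with plain soft thresholding standing in for the paper's scaling layer and semi-soft thresholding mechanism (a sketch of the idea, not the VHU-Net layer):

```python
import numpy as np

def hadamard(x):
    """Fast Walsh-Hadamard transform along the last axis.

    Length must be a power of two; unnormalized, so applying it twice
    returns the input scaled by the length."""
    x = np.asarray(x, dtype=float).copy()
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b          # sum branch (low frequency)
            x[..., i + h:i + 2 * h] = a - b  # difference branch (high frequency)
        h *= 2
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t, zeroing the smallest ones."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Transform, suppress small (noise-like) coefficients, invert
sig = np.array([4.0, 4.1, 3.9, 4.0, -4.0, -3.9, -4.1, -4.0])
coef = hadamard(sig)
denoised = hadamard(soft_threshold(coef, 0.5)) / len(sig)
```

The learned scaling and thresholds in the paper play the role of the fixed `t` here.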

MRI Radiomics and Automated Habitat Analysis Enhance Machine Learning Prediction of Bone Metastasis and High-Grade Gleason Scores in Prostate Cancer.

Yang Y, Zheng B, Zou B, Liu R, Yang R, Chen Q, Guo Y, Yu S, Chen B

pubmed logopapersJun 23 2025
To explore the value of machine learning models based on MRI radiomics and automated habitat analysis in predicting bone metastasis and high-grade pathological Gleason scores in prostate cancer. This retrospective study enrolled 214 patients with pathologically diagnosed prostate cancer from May 2013 to January 2025, including 93 cases with bone metastasis and 159 cases with high-grade Gleason scores. Clinical, pathological and MRI data were collected. An nnUNet model automatically segmented the prostate in MRI scans, and K-means clustering identified habitat subregions within the whole prostate in T2-FS images. Senior radiologists manually segmented regions of interest (ROIs) in prostate lesions. Radiomics features were extracted from the habitat subregions and lesion ROIs and, combined with clinical features, were used to build multiple machine learning classifiers to predict bone metastasis and high-grade Gleason scores. Finally, the models underwent interpretability analysis based on feature importance. The nnUNet model achieved a mean Dice coefficient of 0.970 for segmentation. Habitat analysis with 2 clusters yielded the highest average silhouette coefficient (0.57). Machine learning models combining lesion radiomics, habitat radiomics, and clinical features achieved the best performance in both prediction tasks: the Extra Trees Classifier achieved the highest AUC (0.900) for predicting bone metastasis, while the CatBoost Classifier performed best (AUC 0.895) for predicting high-grade Gleason scores. Interpretability analysis of the optimal models showed that the PSA clinical feature was crucial for predictions, while both habitat radiomics and lesion radiomics also played important roles. The study proposes an automated prostate habitat analysis for prostate cancer, enabling a comprehensive analysis of tumor heterogeneity.
The machine learning models developed achieved excellent performance in predicting the risk of bone metastasis and high-grade Gleason scores in prostate cancer. This approach overcomes the limitations of manual feature extraction, and the inadequate analysis of heterogeneity often encountered in traditional radiomics, thereby improving model performance.
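The habitat step amounts to clustering voxel intensities inside the segmented prostate. A minimal 1-D k-means sketch (quantile initialization; the study's actual pipeline and feature set are richer) shows how two intensity habitats emerge:

```python
import numpy as np

def kmeans_1d(intensities, k, iters=50):
    """Minimal Lloyd's k-means on scalar voxel intensities.

    Quantile initialization makes the result deterministic for
    well-separated clusters. Returns (labels, centers)."""
    x = np.asarray(intensities, dtype=float)
    centers = np.quantile(x, np.linspace(0, 1, k))
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each voxel to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned voxels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

# Two synthetic intensity "habitats" inside a mask
x = np.concatenate([np.full(50, 100.0), np.full(50, 300.0)])
labels, centers = kmeans_1d(x, 2)
```

Radiomics features would then be computed separately over the voxels of each habitat label.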

Ensemble-based Convolutional Neural Networks for brain tumor classification in MRI: Enhancing accuracy and interpretability using explainable AI.

Sánchez-Moreno L, Perez-Peña A, Duran-Lopez L, Dominguez-Morales JP

pubmed logopapersJun 23 2025
Accurate and efficient classification of brain tumors, including gliomas, meningiomas, and pituitary adenomas, is critical for early diagnosis and treatment planning. Magnetic resonance imaging (MRI) is a key diagnostic tool, and deep learning models have shown promise in automating tumor classification. However, challenges remain in achieving high accuracy while maintaining interpretability for clinical use. This study explores the use of transfer learning with pre-trained architectures, including VGG16, DenseNet121, and Inception-ResNet-v2, to classify brain tumors from MRI images. An ensemble-based classifier was developed using a majority voting strategy to improve robustness. To enhance clinical applicability, explainability techniques such as Grad-CAM++ and Integrated Gradients were employed, allowing visualization of model decision-making. The ensemble model outperformed individual Convolutional Neural Network (CNN) architectures, achieving an accuracy of 86.17% in distinguishing gliomas, meningiomas, pituitary adenomas, and benign cases. Interpretability techniques provided heatmaps that identified key regions influencing model predictions, aligning with radiological features and enhancing trust in the results. The proposed ensemble-based deep learning framework improves the accuracy and interpretability of brain tumor classification from MRI images. By combining multiple CNN architectures and integrating explainability methods, this approach offers a more reliable and transparent diagnostic tool to support medical professionals in clinical decision-making.
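The ensemble step described above is a plain majority vote across per-model labels. A minimal sketch with hypothetical model outputs (the class names mirror the four categories in the abstract):

```python
from collections import Counter

def majority_vote(predictions):
    """Majority vote over per-model label lists.

    predictions: list of models, each a list with one label per image.
    Ties are broken by the first label encountered among the voters."""
    n_images = len(predictions[0])
    out = []
    for i in range(n_images):
        votes = [model[i] for model in predictions]
        out.append(Counter(votes).most_common(1)[0][0])
    return out

# Three hypothetical CNN backbones classifying four scans
p = [["glioma", "meningioma", "pituitary", "benign"],
     ["glioma", "glioma",     "pituitary", "benign"],
     ["benign", "meningioma", "glioma",    "benign"]]
result = majority_vote(p)
# → ['glioma', 'meningioma', 'pituitary', 'benign']
```

Soft voting (averaging class probabilities) is a common alternative when the models expose calibrated scores.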

Evaluation of deep learning reconstruction in accelerated knee MRI: comparison of visual and diagnostic performance metrics.

Wen S, Xu Y, Yang G, Huang F, Zeng Z

pubmed logopapersJun 23 2025
To investigate the clinical value of deep learning reconstruction (DLR) in accelerated magnetic resonance imaging (MRI) of the knee and compare its visual quality and diagnostic performance metrics with conventional fast spin-echo T2-weighted imaging with fat suppression (FSE-T2WI-FS). This prospective study included 116 patients with knee injuries. All patients underwent both conventional FSE-T2WI-FS and DLR-accelerated FSE-T2WI-FS scans on a 1.5-T MRI scanner. Two radiologists independently evaluated overall image quality, artifacts, and image sharpness using a 5-point Likert scale. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of lesion regions were measured. Subjective scores were compared using the Wilcoxon signed-rank test, SNR/CNR differences were analyzed via paired t tests, and inter-reader agreement was assessed using Cohen's kappa. The accelerated sequences with DLR achieved a 36% reduction in total scan time compared to conventional sequences (p < 0.05), shortening acquisition from 9 min 50 s to 6 min 15 s. Moreover, DLR demonstrated superior artifact suppression and enhanced quantitative image quality, with significantly higher SNR and CNR (p < 0.001). Despite these improvements, diagnostic equivalence was maintained: no significant differences were observed in overall image quality, sharpness (p > 0.05), or lesion detection rates. Inter-reader agreement was good (κ > 0.75), further validating the clinical reliability of the DLR technique. Using DLR-accelerated FSE-T2WI-FS reduces scan time, suppresses artifacts, and improves quantitative image quality while maintaining diagnostic accuracy comparable to conventional sequences. This technology holds promise for optimizing clinical workflows in MRI of the knee.
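The SNR and CNR figures above follow standard ROI-based definitions, though conventions vary (which std estimator and which background ROI, for instance). A minimal sketch using the population standard deviation of a background noise ROI:

```python
import numpy as np

def snr(lesion, noise):
    """SNR = mean lesion signal / standard deviation of background noise."""
    return lesion.mean() / noise.std()

def cnr(lesion, reference, noise):
    """CNR = |mean(lesion) - mean(reference)| / background noise std."""
    return abs(lesion.mean() - reference.mean()) / noise.std()

# Toy ROIs: lesion, adjacent reference tissue, and background noise
lesion = np.array([10.0, 10.0, 10.0, 10.0])
reference = np.array([4.0, 4.0, 4.0, 4.0])
noise = np.array([1.0, 3.0, 1.0, 3.0])  # mean 2, population std 1
```

With these toy values, SNR is 10 and CNR is 6; real measurements would use voxel arrays drawn from the placed ROIs.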

DCLNet: Double Collaborative Learning Network on Stationary-Dynamic Functional Brain Network for Brain Disease Classification.

Zhou J, Jie B, Wang Z, Zhang Z, Bian W, Yang Y, Li H, Sun F, Liu M

pubmed logopapersJun 23 2025
Stationary functional brain networks (sFBNs) and dynamic functional brain networks (dFBNs) derived from resting-state functional MRI characterize the complex interactions of the human brain from different aspects and can offer complementary information for brain disease analysis. Most current studies focus on sFBN or dFBN analysis alone, limiting the performance of brain network analysis. A few works have explored integrating sFBNs and dFBNs to identify brain diseases and achieved better performance than conventional methods. However, these studies still ignore some valuable discriminative information, such as the distribution of subjects between and within categories. This paper presents a Double Collaborative Learning Network (DCLNet), which takes advantage of both a collaborative encoder and collaborative contrastive learning, to learn the complementary information of sFBNs and dFBNs and the distribution of subjects between and within categories for brain disease classification. Specifically, we first construct the sFBN and dFBN using traditional correlation-based methods on rs-fMRI data. Then, we build a collaborative encoder to extract brain network features at different levels (i.e., connectivity-based, brain-region-based, and brain-network-based features), and design a prune-graft transformer module to embed the complementary information of the features at each level between the two kinds of FBNs. We also develop a collaborative contrastive learning module to capture the distribution of subjects between and within categories, thereby learning more discriminative brain network features. We evaluate DCLNet on two real brain disease datasets with rs-fMRI data, with experimental results demonstrating the superiority of the proposed method.
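The sFBN and dFBN construction that the paper starts from is conventionally correlation-based: Pearson correlation over the full scan for the stationary network, and over sliding windows for the dynamic one. A minimal sketch with synthetic ROI time series (window and step sizes are illustrative choices):

```python
import numpy as np

def stationary_fbn(ts):
    """Pearson correlation between ROI time series: the sFBN edge weights.

    ts: array of shape (n_rois, n_timepoints)."""
    return np.corrcoef(ts)

def dynamic_fbn(ts, win=30, step=10):
    """Sliding-window correlations: one connectivity matrix per window (dFBN)."""
    n = ts.shape[1]
    return [np.corrcoef(ts[:, s:s + win]) for s in range(0, n - win + 1, step)]

rng = np.random.default_rng(0)
base = rng.standard_normal(100)
ts = np.stack([base,
               base + 0.05 * rng.standard_normal(100),  # near-duplicate ROI
               rng.standard_normal(100)])               # independent ROI
sfbn = stationary_fbn(ts)   # strong edge between ROIs 0 and 1, weak to ROI 2
dfbn = dynamic_fbn(ts)      # a sequence of 3x3 matrices
```

In practice the time series come from atlas-defined ROIs, and the resulting matrices feed the encoder described above.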

Adaptive Mask-guided K-space Diffusion for Accelerated MRI Reconstruction

Qinrong Cai, Yu Guan, Zhibo Chen, Dong Liang, Qiuyun Fan, Qiegen Liu

arxiv logopreprintJun 23 2025
As the deep learning revolution marches on, masked modeling has emerged as a distinctive approach that predicts parts of the original data that are proportionally masked during training, and it has demonstrated exceptional performance in multiple fields. Magnetic resonance imaging (MRI) reconstruction is a critical task in medical imaging that seeks to recover high-quality images from under-sampled k-space data. However, previous MRI reconstruction strategies usually optimized the entire image domain or k-space without considering the importance of different frequency regions of the k-space. This work introduces a diffusion model based on adaptive masks (AMDM), which exploits the frequency distribution of the k-space data to develop a hybrid mask mechanism that adapts to different k-space inputs. This enables the effective separation of high-frequency and low-frequency components, producing diverse frequency-specific representations. Additionally, the k-space frequency distribution informs the generation of adaptive masks, which, in turn, guide a closed-loop diffusion process. Experimental results verified the ability of this method to learn specific frequency information and thereby improve the quality of MRI reconstruction, providing a flexible framework for optimizing k-space data using masks in the future.
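The starting point that k-space reconstruction methods refine is the zero-filled inverse FFT of masked k-space. A minimal sketch with a hand-built low-frequency column mask (a fixed mask for illustration, not the paper's adaptive one):

```python
import numpy as np

def zero_filled_recon(img, mask):
    """Mask the centered k-space of an image and invert the FFT.

    Returns the zero-filled reconstruction that reconstruction
    models take as their degraded input."""
    k = np.fft.fftshift(np.fft.fft2(img))            # centered k-space
    return np.fft.ifft2(np.fft.ifftshift(k * mask)).real

# Keep only the central quarter of phase-encode columns (low frequencies)
H, W = 32, 32
mask = np.zeros((H, W))
mask[:, W // 2 - W // 8: W // 2 + W // 8] = 1.0

img = np.ones((H, W))                  # flat phantom: energy only at DC
recon = zero_filled_recon(img, mask)   # DC lies inside the mask, so exact
```

A flat image survives this mask exactly because all of its energy sits at the k-space center; images with fine structure lose their high-frequency content, which is what the learned model must restore.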

DRIMV_TSK: An Interpretable Surgical Evaluation Model for Incomplete Multi-View Rectal Cancer Data

Wei Zhang, Zi Wang, Hanwen Zhou, Zhaohong Deng, Weiping Ding, Yuxi Ge, Te Zhang, Yuanpeng Zhang, Kup-Sze Choi, Shitong Wang, Shudong Hu

arxiv logopreprintJun 21 2025
A reliable evaluation of surgical difficulty can improve the success of treatment for rectal cancer, and the current evaluation method is based on clinical data. However, more data about rectal cancer can be collected as technology develops, and advances in artificial intelligence make its application to rectal cancer treatment possible. In this paper, a multi-view rectal cancer dataset is first constructed to give a more comprehensive view of patients, including a high-resolution MRI image view, a pressed-fat MRI image view, and a clinical data view. Then, an interpretable incomplete multi-view surgical evaluation model is proposed, considering that it is hard to obtain extensive and complete patient data in real application scenarios. Specifically, a dual-representation incomplete multi-view learning model is first proposed to extract the common information between views and the specific information in each view. In this model, missing-view imputation is integrated into representation learning, and a second-order similarity constraint is introduced to improve the cooperative learning between these two parts. Then, based on the imputed multi-view data and the learned dual representation, a multi-view surgical evaluation model with the TSK fuzzy system is proposed. In the proposed model, a cooperative learning mechanism is constructed to explore the consistent information between views, and Shannon entropy is introduced to adapt the view weights. On the constructed multi-view rectal cancer (MVRC) dataset, DRIMV_TSK was compared with several advanced algorithms and obtained the best results.
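The Shannon-entropy view weighting can be sketched as down-weighting views whose predictions are uncertain; the inverse-entropy score below is an illustrative choice, not necessarily the paper's exact formula:

```python
import math

def shannon_entropy(p):
    """Entropy (in bits) of a discrete probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def view_weights(view_dists):
    """Give confident (low-entropy) views larger weights; weights sum to 1.

    view_dists: one predicted class distribution per view."""
    scores = [1.0 / (1.0 + shannon_entropy(p)) for p in view_dists]
    total = sum(scores)
    return [s / total for s in scores]

# A confident view vs. an uninformative (maximum-entropy) one
w = view_weights([[0.9, 0.1], [0.5, 0.5]])
# the first view receives the larger weight
```

The weighted views would then be combined before the final TSK fuzzy-system decision.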

Advances of MR imaging in glioma: what the neurosurgeon needs to know.

Falk Delgado A

pubmed logopapersJun 21 2025
Glial tumors and especially glioblastoma present a major challenge in neuro-oncology due to their infiltrative growth, resistance to therapy, and poor overall survival, despite aggressive treatments such as maximal safe resection and chemoradiotherapy. These tumors typically manifest through neurological symptoms such as seizures, headaches, and signs of increased intracranial pressure, prompting urgent neuroimaging. At initial diagnosis, MRI plays a central role in differentiating true neoplasms from tumor mimics, including inflammatory or infectious conditions. Advanced techniques such as perfusion-weighted imaging (PWI) and diffusion-weighted imaging (DWI) enhance diagnostic specificity and may prevent unnecessary surgical intervention. In the preoperative phase, MRI contributes to surgical planning through the use of functional MRI (fMRI) and diffusion tensor imaging (DTI), enabling localization of eloquent cortex and white matter tracts. These modalities support safer resections by informing trajectory planning and risk assessment. Emerging MR techniques, including magnetic resonance spectroscopy, amide proton transfer imaging, and 2HG quantification, offer further potential in delineating tumor infiltration beyond contrast-enhancing margins. Postoperatively, MRI is important for evaluating residual tumor, detecting surgical complications, and guiding radiotherapy planning. During treatment surveillance, MRI assists in distinguishing true progression from pseudoprogression or radiation necrosis, thereby guiding decisions on additional surgery, changes in systemic therapy, or inclusion into clinical trials. The continued evolution of MRI hardware, software, and image analysis, particularly with the integration of machine learning, will be critical for supporting precision neurosurgical oncology. This review highlights how advanced MRI techniques can inform clinical decision-making at each stage of care in patients with high-grade gliomas.

Independent histological validation of MR-derived radio-pathomic maps of tumor cell density using image-guided biopsies in human brain tumors.

Nocera G, Sanvito F, Yao J, Oshima S, Bobholz SA, Teraishi A, Raymond C, Patel K, Everson RG, Liau LM, Connelly J, Castellano A, Mortini P, Salamon N, Cloughesy TF, LaViolette PS, Ellingson BM

pubmed logopapersJun 21 2025
In brain gliomas, non-invasive biomarkers reflecting tumor cellularity would be useful to guide supramarginal resections and to plan stereotactic biopsies. We aim to validate a previously-trained machine learning algorithm that generates cellularity prediction maps (CPM) from multiparametric MRI data on an independent, retrospective external cohort of gliomas undergoing image-guided biopsies, and to compare the performance of CPM and diffusion MRI apparent diffusion coefficient (ADC) in predicting cellularity. A cohort of patients with treatment-naïve or recurrent gliomas were prospectively studied. All patients underwent pre-surgical MRI according to the standardized brain tumor imaging protocol. The surgical sampling site was planned based on image-guided biopsy targets and tissue was stained with hematoxylin-eosin for cell density count. The correlation between MRI-derived CPM values and histological cellularity, and between ADC and histological cellularity, was evaluated both assuming independent observations and accounting for non-independent observations. Sixty-six samples from twenty-seven patients were collected. Thirteen patients had treatment-naïve tumors and fourteen had recurrent lesions. CPM value accurately predicted histological cellularity in treatment-naïve patients (b = 1.4, R<sup>2</sup> = 0.2, p = 0.009, rho = 0.41, p = 0.016, RMSE = 1503 cell/mm<sup>2</sup>), but not in the recurrent sub-cohort. Similarly, ADC values showed a significant association with histological cellularity only in treatment-naïve patients (b = 1.3, R<sup>2</sup> = 0.22, p = 0.007; rho = -0.37, p = 0.03), not statistically different from the CPM correlation. These findings were confirmed with statistical tests accounting for non-independent observations.
MRI-derived machine learning generated cellularity prediction maps (CPM) enabled a non-invasive evaluation of tumor cellularity in treatment-naïve glioma patients, although CPM did not clearly outperform ADC alone in this cohort.