
A novel few-shot learning framework for supervised diffeomorphic image registration network.

Chen K, Han H, Wei J, Zhang Y

PubMed · Jul 2 2025
Image registration is a key technique in image processing and analysis. Due to its high complexity, traditional registration frameworks often fail to meet real-time demands in practice. To address this demand, several deep learning networks for registration have been proposed, both supervised and unsupervised. Unsupervised networks rely on large amounts of training data to minimize specific loss functions, but the lack of physical information constraints results in lower accuracy than supervised networks. Supervised networks for medical image registration, however, face two major challenges: physical mesh folding and the scarcity of labeled training data. To address these two challenges, we propose a novel few-shot learning framework for image registration. The framework contains two parts: a random diffeomorphism generator (RDG) and a supervised few-shot learning network for image registration. By randomly generating a complex vector field, the RDG produces a series of diffeomorphisms. With the diffeomorphisms generated by the RDG, only a few images (theoretically, a single image is enough) are needed to generate a series of labels for training the supervised few-shot learning network. To eliminate the physical mesh folding phenomenon, the loss function of the proposed network only needs to ensure the smoothness of the deformation; no additional folding-control term is necessary. The experimental results indicate that the proposed method demonstrates superior performance in eliminating physical mesh folding compared to other existing learning-based methods. Our code is available at https://github.com/weijunping111/RDG-TMI.git.
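
The abstract does not detail how the RDG builds its warps; one standard way to generate guaranteed-diffeomorphic deformations is to draw a smooth random stationary velocity field and integrate it by scaling and squaring. The 2D NumPy/SciPy sketch below follows that recipe; the field size, smoothing scale, amplitude, and step count are illustrative assumptions, not the paper's settings.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_diffeomorphism(shape=(128, 128), sigma=8.0, amplitude=4.0, steps=6):
    # Smooth random stationary velocity field (2 channels for a 2D warp).
    v = np.stack([gaussian_filter(np.random.randn(*shape), sigma) for _ in range(2)])
    v *= amplitude / (np.abs(v).max() + 1e-8)
    # Scaling and squaring: integrate v by composing the small displacement
    # v / 2**steps with itself `steps` times, which keeps the map folding-free.
    phi = v / (2 ** steps)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
    for _ in range(steps):
        coords = grid + phi
        # Displacement composition: phi <- phi(x + phi(x)) + phi(x).
        phi = np.stack([map_coordinates(phi[c], coords, order=1, mode="nearest")
                        for c in range(2)]) + phi
    return phi  # dense displacement field of a diffeomorphic warp

def warp(image, phi):
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij"))
    return map_coordinates(image, grid + phi, order=1, mode="nearest")

Warping one fixed image with many such fields yields (moving image, ground-truth field) training pairs, which matches the few-shot idea above: a single image can seed an arbitrarily large supervised training set.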

Robust brain age estimation from structural MRI with contrastive learning

Carlo Alberto Barbano, Benoit Dufumier, Edouard Duchesnay, Marco Grangetto, Pietro Gori

arXiv preprint · Jul 2 2025
Estimating brain age from structural MRI has emerged as a powerful tool for characterizing normative and pathological aging. In this work, we explore contrastive learning as a scalable and robust alternative to supervised approaches for brain age estimation. We introduce a novel contrastive loss function, $\mathcal{L}^{exp}$, and evaluate it across multiple public neuroimaging datasets comprising over 20,000 scans. Our experiments reveal four key findings. First, scaling pre-training on diverse, multi-site data consistently improves generalization performance, cutting external mean absolute error (MAE) nearly in half. Second, $\mathcal{L}^{exp}$ is robust to site-related confounds, maintaining low scanner-predictability as training size increases. Third, contrastive models reliably capture accelerated aging in patients with cognitive impairment and Alzheimer's disease, as shown through brain age gap analysis, ROC curves, and longitudinal trends. Lastly, unlike supervised baselines, $\mathcal{L}^{exp}$ maintains a strong correlation between brain age accuracy and downstream diagnostic performance, supporting its potential as a foundation model for neuroimaging. These results position contrastive learning as a promising direction for building generalizable and clinically meaningful brain representations.
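
The closed form of $\mathcal{L}^{exp}$ is not given in the abstract. Purely as an illustration of the general idea of age-aware contrastive learning, the hypothetical PyTorch sketch below weights the attraction between two embeddings by a kernel that decays exponentially with their age difference; the kernel width `tau`, the temperature, and the InfoNCE-style normalization are assumptions, not the authors' definition.

import torch
import torch.nn.functional as F

def age_contrastive_loss(z, age, tau=2.0, temperature=0.1):
    """z: (B, D) embeddings; age: (B,) chronological ages."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                    # pairwise cosine similarities
    # Exponential age kernel: subjects of similar age act as soft positives.
    w = torch.exp(-torch.abs(age[:, None] - age[None, :]) / tau)
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    w = w.masked_fill(eye, 0.0)                    # exclude self-pairs
    log_p = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                                  dim=1, keepdim=True)
    return -(w * log_p).sum(1).div(w.sum(1).clamp_min(1e-8)).mean()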

Deformation registration based on reconstruction of brain MRI images with pathologies.

Lian L, Chang Q

PubMed · Jul 1 2025
Deformable registration between brain tumor images and a brain atlas is an important tool for pathological analysis. However, registration of images with tumors is challenging due to the absent correspondences induced by the tumor. Furthermore, tumor growth may displace surrounding tissue, causing larger deformations than those observed in healthy brains. We therefore propose a new reconstruction-driven cascade feature warping (RCFW) network for brain tumor images. We first introduce a symmetric-constrained feature reasoning (SFR) module that reconstructs the missing normal appearance within tumor regions, allowing a dense spatial correspondence between the reconstructed quasi-normal appearance and the atlas. We further introduce a dilated multi-receptive feature fusion module, which collects long-range features from different dimensions to facilitate tumor-region reconstruction, especially for large tumors. The reconstructed tumor images and the atlas are then jointly fed into the multi-stage feature warping (MFW) module to progressively predict spatial transformations. The method was evaluated on the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge database and compared with six existing methods. Experimental results show that the proposed method effectively handles brain tumor image registration, maintaining a smooth deformation of the tumor region while maximizing the image similarity of normal regions.

Mamba-based deformable medical image registration with an annotated brain MR-CT dataset.

Wang Y, Guo T, Yuan W, Shu S, Meng C, Bai X

PubMed · Jul 1 2025
Deformable registration is essential in medical image analysis, especially for handling multi- and mono-modal registration tasks in neuroimaging. Brain MR-CT registration remains underexplored, and learning-based methods still face challenges in improving both accuracy and efficiency. To broaden the practice of multi-modal registration in the brain, we present SR-Reg, a new benchmark dataset comprising 180 volumetric paired MR-CT images with annotated anatomical regions. Building on this foundation, we introduce MambaMorph, a novel deformable registration network that uses the efficient state-space model Mamba for global feature learning together with a fine-grained feature extractor for low-level embedding. Experimental results demonstrate that MambaMorph surpasses advanced ConvNet-based and Transformer-based networks across several multi- and mono-modal tasks, with notable gains in both accuracy and efficiency. Code and dataset are available at https://github.com/mileswyn/MambaMorph.

Comparison of Deep Learning Models for fast and accurate dose map prediction in Microbeam Radiation Therapy.

Arsini L, Humphreys J, White C, Mentzel F, Paino J, Bolst D, Caccia B, Cameron M, Ciardiello A, Corde S, Engels E, Giagu S, Rosenfeld A, Tehei M, Tsoi AC, Vogel S, Lerch M, Hagenbuchner M, Guatelli S, Terracciano CM

PubMed · Jul 1 2025
Microbeam Radiation Therapy (MRT) is an innovative radiotherapy modality that uses highly focused, synchrotron-generated X-ray microbeams. Current pre-clinical research in MRT mostly relies on Monte Carlo (MC) simulations for dose estimation, which are highly accurate but computationally intensive. Recently, deep learning (DL) dose engines have proven effective at generating fast and reliable dose distributions in different RT modalities, yet relatively few studies compare different models on the same task. This work compares a graph-convolutional-network-based DL model, developed in the context of very-high-energy electron RT, with the convolutional 3D U-Net that we recently implemented for MRT dose prediction. The two DL solutions are trained on 3D dose maps, generated with the MC toolkit Geant4, in rats used in MRT pre-clinical research. The models are evaluated against Geant4 simulations, used as ground truth, in terms of mean absolute error, mean relative error, and a voxel-wise version of the γ-index. We also present specific comparisons of predictions in relevant tumor regions, at tissue boundaries, and in air pockets, and finally compare the two models in terms of execution time and model size. The study finds that the two models achieve comparable overall performance; the main differences lie in their dosimetric accuracy within specific regions, such as air pockets, and in their inference times. Consequently, the choice between the models should be guided primarily by data structure and time constraints, favoring the graph-based method for its flexibility or the 3D U-Net for its faster execution.
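
For reference, the voxel-wise metrics named above can be sketched as follows. The dose threshold and the 3%/3-voxel γ criteria are illustrative assumptions, and the brute-force neighborhood search (which ignores the wrap-around at volume edges introduced by np.roll) is a simplification of a production γ-index implementation.

import numpy as np

def mae_mre(pred, ref, threshold=0.1):
    # Errors are computed only in voxels above a fraction of the maximum dose.
    mask = ref > threshold * ref.max()
    err = np.abs(pred - ref)
    return err[mask].mean(), (err[mask] / ref[mask]).mean()

def gamma_pass_rate(pred, ref, dose_tol=0.03, dist_tol=3, threshold=0.1):
    # For each reference voxel, search a (2*dist_tol+1)^3 neighborhood of the
    # prediction for the point minimizing the combined dose/distance measure.
    norm = dose_tol * ref.max()
    mask = ref > threshold * ref.max()
    gamma_sq = np.full(ref.shape, np.inf)
    for dz in range(-dist_tol, dist_tol + 1):
        for dy in range(-dist_tol, dist_tol + 1):
            for dx in range(-dist_tol, dist_tol + 1):
                shifted = np.roll(pred, (dz, dy, dx), axis=(0, 1, 2))
                g = ((shifted - ref) / norm) ** 2 \
                    + (dz ** 2 + dy ** 2 + dx ** 2) / dist_tol ** 2
                gamma_sq = np.minimum(gamma_sq, g)
    return (np.sqrt(gamma_sq)[mask] <= 1.0).mean()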

TCDE-Net: An unsupervised dual-encoder network for 3D brain medical image registration.

Yang X, Li D, Deng L, Huang S, Wang J

PubMed · Jul 1 2025
Medical image registration is a critical task in aligning medical images from different time points, modalities, or individuals, and is essential for accurate diagnosis and treatment planning. Despite significant progress in deep learning-based registration methods, current approaches still face considerable challenges, such as insufficient capture of local details, difficulty in effectively modeling global contextual information, and limited robustness in handling complex deformations. These limitations hinder the precision of high-resolution registration, particularly for medical images with intricate structures. To address these issues, this paper presents TCDE-Net, an unsupervised medical image registration network based on a dual-encoder architecture. The two encoders complement each other in feature extraction, enabling the model to handle large-scale nonlinear deformations while capturing intricate local details, thereby enhancing registration accuracy. Additionally, a detail-enhancement attention module aids in restoring fine-grained features, improving the network's ability to address complex deformations such as those at gray-white matter boundaries. Experimental results on the OASIS, IXI, and Hammers-n30r95 3D brain MR datasets demonstrate that this method outperforms commonly used registration techniques across multiple evaluation metrics, achieving superior performance and robustness. Our code is available at https://github.com/muzidongxue/TCDE-Net.
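
The abstract does not state TCDE-Net's exact training objective. Unsupervised registration networks of this kind are typically trained with an image-similarity term plus a smoothness penalty on the predicted displacement field; the PyTorch sketch below uses MSE similarity and a first-order (finite-difference) smoothness term, with the weight `lam` as an assumption.

import torch

def registration_loss(warped, fixed, disp, lam=1.0):
    # warped, fixed: (B, 1, D, H, W) images; disp: (B, 3, D, H, W) displacements.
    similarity = torch.mean((warped - fixed) ** 2)
    # Smoothness: mean squared finite differences along each spatial axis.
    dz = disp[:, :, 1:] - disp[:, :, :-1]
    dy = disp[:, :, :, 1:] - disp[:, :, :, :-1]
    dx = disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]
    smooth = (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()
    return similarity + lam * smooth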

Regression modeling with convolutional neural network for predicting extent of resection from preoperative MRI in giant pituitary adenomas: a pilot study.

Patel BK, Tariciotti L, DiRocco L, Mandile A, Lohana S, Rodas A, Zohdy YM, Maldonado J, Vergara SM, De Andrade EJ, Revuelta Barbero JM, Reyes C, Solares CA, Garzon-Muvdi T, Pradilla G

PubMed · Jul 1 2025
Giant pituitary adenomas (GPAs) are challenging skull base tumors due to their size and proximity to critical neurovascular structures. Achieving gross-total resection (GTR) can be difficult, and residual tumor burden is commonly reported. This study evaluated the ability of convolutional neural networks (CNNs) to predict the extent of resection (EOR) from preoperative MRI, with the goals of enhancing surgical planning, improving preoperative patient counseling, and strengthening multidisciplinary postoperative coordination of care. A retrospective study of 100 consecutive patients with GPAs was conducted. Patients underwent surgery via the endoscopic endonasal transsphenoidal approach. CNN models were trained on DICOM images from preoperative MR images to predict EOR, using a split of 80 patients for training and 20 for validation. The models included different architectural modules to refine image selection and predict EOR from tumor-containing images in various anatomical planes. Model design, training, and validation were conducted in a local environment in Python using the TensorFlow machine learning framework. The median preoperative tumor volume was 19.4 cm³. The median EOR was 94.5%, with GTR achieved in 49% of cases. The CNN model showed high predictive accuracy, especially when analyzing images in the coronal plane, with a root mean square error of 2.9916 and a mean absolute error of 2.6225. The coefficient of determination (R²) was 0.9823, indicating excellent model performance. CNN-based models may effectively predict the EOR for GPAs from preoperative MRI scans, offering a promising tool for presurgical assessment and patient counseling. Confirmatory studies with large patient samples are needed to definitively validate these findings.
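
The abstract reports that the models were built with TensorFlow but does not give their architecture. As a minimal hypothetical sketch, a small Keras CNN mapping a single-channel coronal slice to an EOR percentage could look like the following; every layer size here is an assumption for illustration.

import tensorflow as tf

def build_eor_regressor(input_shape=(224, 224, 1)):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        # Sigmoid rescaled to 0-100 keeps predictions in the valid EOR range.
        tf.keras.layers.Dense(1, activation="sigmoid"),
        tf.keras.layers.Rescaling(100.0),
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError(),
                           tf.keras.metrics.MeanAbsoluteError()])
    return model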

Association of Psychological Resilience With Decelerated Brain Aging in Cognitively Healthy World Trade Center Responders.

Seeley SH, Fremont R, Schreiber Z, Morris LS, Cahn L, Murrough JW, Schiller D, Charney DS, Pietrzak RH, Perez-Rodriguez MM, Feder A

PubMed · Jul 1 2025
Despite their exposure to potentially traumatic stressors, the majority of World Trade Center (WTC) responders-those who worked on rescue, recovery, and cleanup efforts on or following September 11, 2001-have shown psychological resilience, never developing long-term psychopathology. Psychological resilience may be protective against the earlier age-related cognitive changes associated with posttraumatic stress disorder (PTSD) in this cohort. In the current study, we calculated the difference between estimated brain age from structural magnetic resonance imaging (MRI) data and chronological age in WTC responders who participated in a parent functional MRI study of resilience (N = 97). We hypothesized that highly resilient responders would show the least brain aging, and we explored associations between brain aging and psychological and cognitive measures. WTC responders screened for the absence of cognitive impairment were classified into 3 groups: a WTC-related PTSD group (n = 32), a Highly Resilient group without lifetime psychopathology despite high WTC-related exposure (n = 34), and a Lower WTC-Exposed control group also without lifetime psychopathology (n = 31). We used BrainStructureAges, a deep learning algorithm that estimates voxelwise age from T1-weighted MRI data, to calculate decelerated (or accelerated) brain aging relative to chronological age. Globally, brain aging was decelerated in the Highly Resilient group and accelerated in the PTSD group, with a significant group difference (p = .021, Cohen's d = 0.58); the Lower WTC-Exposed control group exhibited no significant brain age gap or group difference. Lesser brain aging was associated with resilience-linked factors, including lower emotional suppression, greater optimism, and better verbal learning. Cognitively healthy WTC responders show differences in brain aging related to resilience and PTSD.
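
As a minimal sketch of the headline comparison (not of the BrainStructureAges algorithm itself), the brain age gap is the difference between estimated and chronological age, and the reported group effect is a pooled-standard-deviation Cohen's d:

import numpy as np

def brain_age_gap(predicted_age, chronological_age):
    # Positive values indicate accelerated brain aging; negative, decelerated.
    return np.asarray(predicted_age) - np.asarray(chronological_age)

def cohens_d(a, b):
    # Effect size between two groups' brain age gaps, using the pooled SD.
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd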

Deformable image registration with strategic integration pyramid framework for brain MRI.

Zhang Y, Zhu Q, Xie B, Li T

PubMed · Jul 1 2025
Medical image registration plays a crucial role in medical imaging, with a wide range of clinical applications; brain MRI registration in particular is commonly used in clinical practice for accurate diagnosis and treatment planning. In recent years, deep learning-based deformable registration methods have achieved remarkable results. However, existing methods are neither flexible nor efficient in handling the feature relationships of anatomical structures at different levels when dealing with large deformations. To address this limitation, we propose a novel strategic-integration registration network based on a pyramid structure. Our strategy includes two aspects of integration: fusion of features at different scales, and integration of different neural network structures. Specifically, we design a CNN encoder and a Transformer decoder to efficiently extract and enhance both global and local features. Moreover, to overcome the error accumulation inherent in pyramid structures, we introduce progressive optimization iterations at the lowest scale for deformation-field generation, which handles the spatial relationships of images more efficiently while improving accuracy. Extensive evaluations across multiple brain MRI datasets show that our method outperforms other deep learning-based methods in registration accuracy and robustness.
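
The paper's integration strategy is only summarized above. As a generic illustration of pyramid registration, the sketch below upsamples each coarser flow, rescales it to the finer grid's voxel units, and adds the finer residual flow; additive combination is a common approximation of true field composition, and the coarsest-to-finest ordering and isotropic scaling are assumptions.

import torch
import torch.nn.functional as F

def compose_pyramid(flows):
    # flows: displacement fields from coarsest to finest, each (B, 2, H_i, W_i)
    # in voxel units of its own grid.
    total = flows[0]
    for f in flows[1:]:
        up = F.interpolate(total, size=f.shape[2:], mode="bilinear",
                           align_corners=False)
        scale = f.shape[2] / total.shape[2]   # convert voxel units between grids
        total = up * scale + f                # additive approximation of composition
    return total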

Machine-Learning-Based Computed Tomography Radiomics Regression Model for Predicting Pulmonary Function.

Wang W, Sun Y, Wu R, Jin L, Shi Z, Tuersun B, Yang S, Li M

PubMed · Jul 1 2025
Chest computed tomography (CT) radiomics can be used for categorical predictions; however, models that directly predict pulmonary function indices are lacking. This study aimed to develop machine-learning-based regression models to predict pulmonary function from chest CT radiomics. This retrospective study enrolled patients who underwent chest CT and pulmonary function tests between January 2018 and April 2024. Machine-learning regression models were constructed and validated to predict pulmonary function indices, including forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1). The models incorporated whole-lung radiomics and clinical features. Model performance was evaluated using mean absolute error, mean squared error, root mean squared error, the concordance correlation coefficient (CCC), and the coefficient of determination (R²), and was compared with spirometry results. Individual model decisions were analyzed with an explainable approach based on SHapley Additive exPlanations (SHAP). In total, 1585 cases were included in the analysis, 102 of them external. Across the training, validation, test, and external test sets, the combined model consistently achieved the best performance in predicting FVC (e.g., external test set: CCC, 0.745 [95% confidence interval 0.642-0.818]; R², 0.601 [0.453-0.707]) and FEV1 (e.g., external test set: CCC, 0.744 [0.633-0.824]; R², 0.527 [0.298-0.675]). Age, sex, and emphysema were important factors for both FVC and FEV1, while distinct radiomics features contributed to each. Whole-lung radiomics features can thus be used to construct regression models that improve pulmonary function prediction.
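
For reference, the concordance correlation coefficient used above measures both correlation and calibration; a small sketch (not the authors' code):

import numpy as np

def concordance_correlation(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mx, my = y_true.mean(), y_pred.mean()
    vx, vy = y_true.var(), y_pred.var()
    cov = ((y_true - mx) * (y_pred - my)).mean()
    # CCC = 1 only when predictions match observations in scale and location.
    return 2 * cov / (vx + vy + (mx - my) ** 2)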