
Network Occlusion Sensitivity Analysis Identifies Regional Contributions to Brain Age Prediction.

He L, Wang S, Chen C, Wang Y, Fan Q, Chu C, Fan L, Xu J

PubMed | Jun 1 2025
Deep learning frameworks utilizing convolutional neural networks (CNNs) have frequently been used for brain age prediction and have achieved outstanding performance. Nevertheless, deep learning remains a black box, as it is hard to interpret which brain regions contribute significantly to the predictions. To tackle this challenge, we first trained a lightweight, fully convolutional CNN model for brain age estimation on a large sample (N = 3,054, age range 8-80 years) and tested it on an independent dataset (N = 555, age range 8-80 years). We then developed an interpretable scheme combining network occlusion sensitivity analysis (NOSA) with a fine-grained human brain atlas to uncover what the model had learned. Our findings show that the dorsolateral and dorsomedial frontal cortex, anterior cingulate cortex, and thalamus contributed most to age prediction across the lifespan. More interestingly, different regions showed divergent prediction patterns for specific age groups, and the bilateral hemispheres contributed differently to the predictions. Regions in the frontal lobe were essential predictors in both the developmental and aging stages, while the thalamus remained relatively stable and saliently correlated with other regional changes throughout the lifespan. The lateral and medial temporal regions gradually became involved during the aging phase. At the network level, the frontoparietal and default mode networks showed an inverted U-shaped contribution from the developmental to the aging stages. The framework can identify regional contributions to a brain age prediction model, which could improve model interpretability when brain age serves as an aging biomarker.
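
Editor's note: the core of occlusion sensitivity analysis is to mask one atlas region at a time and measure how much the prediction shifts. A minimal sketch of that loop, assuming a trained `model` callable that maps a 3D volume to a predicted age and an integer-labeled `atlas` volume; both names are hypothetical placeholders, not the authors' code:

```python
# Minimal occlusion-sensitivity sketch: region contribution = |prediction
# shift when that region is masked out|. `model` and `atlas` are assumed.
import numpy as np

def occlusion_sensitivity(model, volume, atlas):
    baseline = model(volume)
    contributions = {}
    for label in np.unique(atlas):
        if label == 0:                    # skip background
            continue
        occluded = volume.copy()
        occluded[atlas == label] = 0.0    # zero out one atlas parcel
        contributions[int(label)] = abs(model(occluded) - baseline)
    return contributions
```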

Brain Age Gap Associations with Body Composition and Metabolic Indices in an Asian Cohort: An MRI-Based Study.

Lee HJ, Kuo CY, Tsao YC, Lee PL, Chou KH, Lin CJ, Lin CP

PubMed | Jun 1 2025
Global aging raises concerns about cognitive health, metabolic disorders, and sarcopenia. Prevention of reversible decline and disease in middle-aged individuals is essential for promoting healthy aging. We hypothesize that changes in body composition, specifically muscle mass and visceral fat, and in metabolic indices are associated with accelerated brain aging. To explore these relationships, we employed a brain age model to investigate the links between the brain age gap (BAG), body composition, and metabolic markers. Using T1-weighted anatomical brain MRIs, we developed a machine learning model to predict brain age from gray matter features, trained on 2,675 healthy individuals aged 18-92 years. This model was then applied to a separate cohort of 458 Taiwanese adults (57.8 ± 11.6 years; 280 men) to assess associations between BAG, body composition quantified by MRI, and metabolic markers. Our model demonstrated reliable generalizability for predicting individual age in the clinical dataset (MAE = 6.11 years, r = 0.900). Key findings included significant correlations between larger BAG and reduced total abdominal muscle area (r = -0.146, p = 0.018), lower BMI-adjusted skeletal muscle indices (r = -0.134, p = 0.030), increased systemic inflammation as indicated by high-sensitivity C-reactive protein levels (r = 0.121, p = 0.048), and elevated fasting glucose levels (r = 0.149, p = 0.020). Our findings confirm that declines in muscle mass and metabolic health are associated with accelerated brain aging. Interventions to improve muscle health and metabolic control may mitigate adverse effects of brain aging, supporting healthier aging trajectories.
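
Editor's note: the BAG is simply predicted minus chronological age, usually residualized against age to remove the well-known regression-to-the-mean bias before correlating it with other markers. A hedged sketch of that analysis; variable names are illustrative, not from the study's code:

```python
# BAG = predicted age - chronological age, with an optional age-bias
# correction, then correlated against a marker of interest.
import numpy as np
from scipy import stats

def brain_age_gap(predicted_age, true_age, bias_correct=True):
    gap = predicted_age - true_age
    if bias_correct:
        # Residualize the gap against chronological age (common correction).
        slope, intercept, *_ = stats.linregress(true_age, gap)
        gap = gap - (slope * true_age + intercept)
    return gap

# Example: r, p = stats.pearsonr(brain_age_gap(pred, age), fasting_glucose)
```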

Predicting strength of femora with metastatic lesions from single 2D radiographic projections using convolutional neural networks.

Synek A, Benca E, Licandro R, Hirtler L, Pahr DH

PubMed | Jun 1 2025
Patients with metastatic bone disease are at risk of pathological femoral fractures and may require prophylactic surgical fixation. Current clinical decision support tools often overestimate fracture risk, leading to overtreatment. While novel scores integrating femoral strength assessment via finite element (FE) models show promise, they require 3D imaging, extensive computation, and are difficult to automate. Predicting femoral strength directly from single 2D radiographic projections using convolutional neural networks (CNNs) could address these limitations, but this approach has not yet been explored for femora with metastatic lesions. This study aimed to test whether CNNs can accurately predict the strength of femora with metastatic lesions from single 2D radiographic projections. CNNs with various architectures were developed and trained on a dataset generated with FE models. This training dataset was based on 36,000 modified computed tomography (CT) scans, created by randomly inserting artificial lytic lesions into the CT scans of 36 intact anatomical femoral specimens. From each modified CT scan, an anterior-posterior 2D projection was generated, and femoral strength in one-legged stance was determined using nonlinear FE models. Following training, CNN performance was evaluated on an independent experimental test dataset consisting of 31 anatomical femoral specimens (16 intact, 15 with artificial lytic lesions). 2D projections of each specimen were created from the corresponding CT scans, and femoral strength was assessed in mechanical tests. The CNNs' performance was evaluated using linear regression analysis and compared to 2D densitometric predictors (bone mineral density and content) and CT-based 3D FE models. All CNNs accurately predicted the experimentally measured strength in femora with and without metastatic lesions of the test dataset (R²≥0.80, CCC≥0.81). In femora with metastatic lesions, the performance of the CNNs (best: R²=0.84, CCC=0.86) was considerably superior to 2D densitometric predictors (R²≤0.07) and slightly inferior to 3D FE models (R²=0.90, CCC=0.94). CNNs, trained on a large dataset generated via FE models, predicted experimentally measured strength of femora with artificial metastatic lesions with accuracy comparable to 3D FE models. By eliminating the need for 3D imaging and reducing computational demands, this novel approach demonstrates potential for application in a clinical setting.
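
Editor's note: the R² and CCC values above follow standard definitions (R² here from the regression fit; CCC is Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic offset). A minimal NumPy sketch of both metrics; array names are illustrative:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

def r_squared(y_true, y_pred):
    """Coefficient of determination (one common definition)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot
```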

Towards fast and reliable estimations of 3D pressure, velocity and wall shear stress in aortic blood flow: CFD-based machine learning approach.

Lin D, Kenjereš S

PubMed | Jun 1 2025
In this work, we developed deep neural networks for the fast and comprehensive estimation of the most salient features of aortic blood flow. These features include velocity magnitude and direction, 3D pressure, and wall shear stress. Starting from 40 subject-specific aortic geometries obtained from 4D Flow MRI, we applied statistical shape modeling to generate 1,000 synthetic aorta geometries. Complete computational fluid dynamics (CFD) simulations of these geometries were performed to obtain ground-truth values. We then trained deep neural networks for each characteristic flow feature using 900 randomly selected aorta geometries. Testing on the remaining 100 geometries resulted in average errors of 3.11% for velocity and 4.48% for pressure. For wall shear stress predictions, we applied two approaches: (i) deriving it directly from the neural network-predicted velocity, and (ii) predicting it with a separate neural network. Both approaches yielded similar accuracy, with average errors of 4.8% and 4.7% compared to complete 3D CFD results, respectively. We recommend the second approach for potential clinical use due to its significantly simplified workflow. In conclusion, this proof-of-concept analysis demonstrates the numerical robustness, rapid calculation speed (on the order of seconds), and good accuracy of the CFD-based machine learning approach in predicting velocity, pressure, and wall shear stress distributions in subject-specific aortic flows.
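
Editor's note: the abstract reports percentage errors against the full 3D CFD results without spelling out the normalization. A plausible minimal sketch, assuming mean absolute deviation scaled by the peak ground-truth magnitude (one common choice; the paper's exact definition may differ):

```python
import numpy as np

def mean_relative_error(pred, truth, eps=1e-12):
    """Mean |pred - truth| over mesh nodes, as % of the peak truth magnitude."""
    scale = np.abs(truth).max() + eps
    return np.mean(np.abs(pred - truth)) / scale * 100.0
```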

MR2US-Pro: Prostate MR to Ultrasound Image Translation and Registration Based on Diffusion Models

Xudong Ma, Nantheera Anantrasirichai, Stefanos Bolomytis, Alin Achim

arXiv preprint | May 31 2025
The diagnosis of prostate cancer increasingly depends on multimodal imaging, particularly magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS). However, accurate registration between these modalities remains a fundamental challenge due to differences in dimensionality and anatomical representation. In this work, we present a novel framework that addresses these challenges through a two-stage process: TRUS 3D reconstruction followed by cross-modal registration. Unlike existing TRUS 3D reconstruction methods that rely heavily on external probe tracking information, we propose a fully probe-location-independent approach that leverages the natural correlation between sagittal and transverse TRUS views. With the help of our clustering-based feature matching method, we enable the spatial localization of 2D frames without any additional probe tracking information. For the registration stage, we introduce an unsupervised diffusion-based framework guided by modality translation. Unlike existing methods that translate one modality into another, we map both MR and US into a pseudo intermediate modality. This design lets the intermediate modality be customized to retain only registration-critical features, greatly easing registration. To further enhance anatomical alignment, we incorporate an anatomy-aware registration strategy that prioritizes internal structural coherence while adaptively reducing the influence of boundary inconsistencies. Extensive validation demonstrates that our approach outperforms state-of-the-art methods, achieving superior registration accuracy with physically realistic deformations in a completely unsupervised fashion.
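
Editor's note: the "pseudo intermediate modality" idea can be illustrated with a deliberately reduced toy: two small encoders map MR and US patches into a shared representation that is pulled together on roughly aligned pairs. The actual method is an unsupervised diffusion framework; the architecture, loss, and pairing below are all hypothetical simplifications for illustration only:

```python
# Toy sketch: map both modalities into a shared "intermediate" image space.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

enc_mr, enc_us = encoder(), encoder()
opt = torch.optim.Adam(list(enc_mr.parameters()) + list(enc_us.parameters()), lr=1e-3)

mr = torch.randn(4, 1, 64, 64)   # stand-in MR patches
us = torch.randn(4, 1, 64, 64)   # stand-in (assumed roughly aligned) US patches

opt.zero_grad()
loss = nn.functional.l1_loss(enc_mr(mr), enc_us(us))  # pull intermediates together
loss.backward()
opt.step()
```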

A conditional point cloud diffusion model for deformable liver motion tracking via a single arbitrarily-angled x-ray projection.

Xie J, Shao HC, Li Y, Yan S, Shen C, Wang J, Zhang Y

PubMed | May 30 2025
Deformable liver motion tracking using a single X-ray projection enables real-time motion monitoring and treatment intervention. We introduce a conditional point cloud diffusion model-based framework for accurate and robust liver motion tracking from arbitrarily angled single X-ray projections. We propose a conditional point cloud diffusion model for liver motion tracking (PCD-Liver), which estimates volumetric liver motion by solving deformation vector fields (DVFs) of a prior liver surface point cloud, based on a single X-ray image. It is a patient-specific model with two main components: a rigid alignment model to estimate the liver's overall shifts, and a conditional point cloud diffusion model that further corrects for the liver surface's deformation. Conditioned on motion-encoded features extracted from a single X-ray projection by a geometry-informed feature pooling layer, the diffusion model iteratively solves detailed liver surface DVFs in a projection-angle-agnostic fashion. The liver surface motion solved by PCD-Liver is subsequently fed as the boundary condition into a UNet-based biomechanical model to infer the liver's internal motion and localize liver tumors. A dataset of 10 liver cancer patients was used for evaluation. We used the root mean square error (RMSE) and 95th-percentile Hausdorff distance (HD95) metrics to examine the liver point cloud motion estimation accuracy, and the center-of-mass error (COME) to quantify the liver tumor localization error. The mean (±s.d.) RMSE, HD95, and COME of the prior liver or tumor before motion estimation were 8.82 mm (±3.58 mm), 10.84 mm (±4.55 mm), and 9.72 mm (±4.34 mm), respectively. After PCD-Liver's motion estimation, the corresponding values were 3.63 mm (±1.88 mm), 4.29 mm (±1.75 mm), and 3.46 mm (±2.15 mm). Under highly noisy conditions, PCD-Liver maintained stable performance. This study presents an accurate and robust framework for liver deformable motion estimation and tumor localization for image-guided radiotherapy.
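
Editor's note: the two surface metrics above have standard definitions on point clouds. A hedged sketch for N x 3 arrays in mm, using SciPy's KD-tree for nearest-neighbour distances; names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse(pc_est, pc_ref):
    """RMSE over corresponding points (assumes identical point ordering)."""
    return np.sqrt(np.mean(np.sum((pc_est - pc_ref) ** 2, axis=1)))

def hd95(pc_a, pc_b):
    """Symmetric 95th-percentile Hausdorff distance via nearest neighbours."""
    d_ab = cKDTree(pc_b).query(pc_a)[0]   # each point in A to nearest in B
    d_ba = cKDTree(pc_a).query(pc_b)[0]   # each point in B to nearest in A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```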

Pretraining Deformable Image Registration Networks with Random Images

Junyu Chen, Shuwen Wei, Yihao Liu, Aaron Carass, Yong Du

arXiv preprint | May 30 2025
Recent advances in deep learning-based medical image registration have shown that training deep neural networks (DNNs) does not necessarily require medical images. Previous work showed that DNNs trained on randomly generated images with carefully designed noise and contrast properties can still generalize well to unseen medical data. Building on this insight, we propose using registration between random images as a proxy task for pretraining a foundation model for image registration. Empirical results show that our pretraining strategy improves registration accuracy, reduces the amount of domain-specific data needed to achieve competitive performance, and accelerates convergence during downstream training, thereby enhancing computational efficiency.
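
Editor's note: the essence of such a proxy task is generating random images and warping them with a known smooth deformation, so fixed/moving pairs exist without any medical data. A generic stand-in sketch (the paper's exact noise and contrast recipe is its own; everything here is illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_image(shape=(128, 128), smooth=4.0):
    """Smooth filtered noise as a stand-in training image."""
    return gaussian_filter(np.random.rand(*shape), smooth)

def random_pair(shape=(128, 128), warp_scale=5.0):
    """Fixed image, warped moving image, and the ground-truth displacement."""
    fixed = random_image(shape)
    dx = gaussian_filter(np.random.randn(*shape), 8) * warp_scale
    dy = gaussian_filter(np.random.randn(*shape), 8) * warp_scale
    ys, xs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    moving = map_coordinates(fixed, [ys + dy, xs + dx], order=1)
    return fixed, moving, (dx, dy)
```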

Beyond the LUMIR challenge: The pathway to foundational registration models

Junyu Chen, Shuwen Wei, Joel Honkamaa, Pekka Marttinen, Hang Zhang, Min Liu, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Benedikt Wiestler, Tim Hable, Jin Kim, Dan Ruan, Frederic Madesta, Thilo Sentker, Wiebke Heyer, Lianrui Zuo, Yuwei Dai, Jing Wu, Jerry L. Prince, Harrison Bai, Yong Du, Yihao Liu, Alessa Hering, Reuben Dorent, Lasse Hansen, Mattias P. Heinrich, Aaron Carass

arXiv preprint | May 30 2025
Medical image challenges have played a transformative role in advancing the field, catalyzing algorithmic innovation and establishing new performance standards across diverse clinical applications. Image registration, a foundational task in neuroimaging pipelines, has similarly benefited from the Learn2Reg initiative. Building on this foundation, we introduce the Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge, a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. Distinct from prior challenges that leveraged anatomical label maps for supervision, LUMIR removes this dependency by providing over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling through self-supervision. In addition to evaluating performance on 590 held-out test subjects, LUMIR introduces a rigorous suite of zero-shot generalization tasks, spanning out-of-domain imaging modalities (e.g., FLAIR, T2-weighted, T2*-weighted), disease populations (e.g., Alzheimer's disease), acquisition protocols (e.g., 9.4T MRI), and species (e.g., macaque brains). A total of 1,158 subjects and over 4,000 image pairs were included for evaluation. Performance was assessed using both segmentation-based metrics (Dice coefficient, 95th percentile Hausdorff distance) and landmark-based registration accuracy (target registration error). Across both in-domain and zero-shot tasks, deep learning-based methods consistently achieved state-of-the-art accuracy while producing anatomically plausible deformation fields. The top-performing deep learning-based models demonstrated diffeomorphic properties and inverse consistency, outperforming several leading optimization-based methods, and showing strong robustness to most domain shifts, the exception being a drop in performance on out-of-domain contrasts.
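
Editor's note: of the metrics above, target registration error (TRE) is the simplest: the residual distance between warped moving landmarks and their fixed-image counterparts. A short NumPy sketch; the N x 3 landmark arrays (in mm) are illustrative:

```python
import numpy as np

def target_registration_error(warped_landmarks, fixed_landmarks):
    """Per-landmark Euclidean distance in mm; report the mean or median."""
    return np.linalg.norm(warped_landmarks - fixed_landmarks, axis=1)
```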

End-to-end 2D/3D registration from pre-operative MRI to intra-operative fluoroscopy for orthopedic procedures.

Ku PC, Liu M, Grupp R, Harris A, Oni JK, Mears SC, Martin-Gomez A, Armand M

PubMed | May 30 2025
Soft tissue pathologies and bone defects are not easily visible in intra-operative fluoroscopic images; we therefore developed an end-to-end MRI-to-fluoroscopy registration framework aiming to enhance intra-operative visualization for surgeons during orthopedic procedures. The proposed framework uses deep learning to segment MRI scans and generate synthetic CT (sCT) volumes. These sCT volumes are then used to produce digitally reconstructed radiographs (DRRs), enabling 2D/3D registration with intra-operative fluoroscopic images. The framework's performance was validated through simulation and cadaver studies of core decompression (CD) surgery, focusing on registration accuracy in the femoral and pelvic regions. The framework achieved a mean translational registration accuracy of 2.4 ± 1.0 mm and a rotational accuracy of 1.6 ± 0.8° for the femoral region in cadaver studies. The method successfully enabled intra-operative visualization of necrotic lesions that were not visible on conventional fluoroscopic images, marking a significant advancement in image guidance for femoral and pelvic surgeries. The MRI-to-fluoroscopy registration framework offers a novel approach to image guidance in orthopedic surgery, using MRI exclusively without the need for CT scans. This approach enhances the visualization of soft tissue and bone defects, reduces radiation exposure, and provides a safer, more effective alternative for intra-operative surgical guidance.
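
Editor's note: intensity-based 2D/3D registration of this kind iterates between rendering a DRR from the sCT at a candidate pose and scoring it against the fluoroscopic image. A hedged, deliberately simplified sketch using a parallel-beam projection (the paper's renderer and pose parameterization are more involved):

```python
import numpy as np
from scipy.ndimage import rotate

def drr(volume, angle_deg):
    """Parallel-beam DRR: rotate the sCT volume, then integrate along one axis."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)

def ncc(a, b):
    """Normalized cross-correlation between a DRR and the fluoro image."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
```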

Motion-resolved parametric imaging derived from short dynamic [¹⁸F]FDG PET/CT scans.

Artesani A, van Sluis J, Providência L, van Snick JH, Slart RHJA, Noordzij W, Tsoumpas C

PubMed | May 29 2025
This study aims to assess the added value of utilizing short dynamic whole-body PET/CT scans and implementing motion correction before quantifying the metabolic rate, offering more insight into physiological processes. While this approach may not be commonly adopted, addressing motion effects is crucial given their demonstrated potential to cause significant errors in parametric imaging. A 15-minute dynamic FDG PET acquisition protocol was used for four lymphoma patients undergoing therapy evaluation. Parametric imaging was obtained using a population-based input function (PBIF) derived from twelve patients with full 65-minute dynamic FDG PET acquisitions. AI-based registration methods were employed to correct misalignments both between PET and the attenuation-correction CT (ACCT) and between PET frames. Tumour characteristics were assessed using both parametric images and standardized uptake values (SUV). The motion correction process significantly reduced mismatches between images without significantly altering voxel intensity values, except for SUVmax. Following alignment of the attenuation correction map with the PET frame, an increase in SUVmax was observed in FDG-avid lymph nodes, indicating its susceptibility to spatial misalignment. In contrast, the Patlak Ki parameter was highly sensitive to misalignment across PET frames, which notably altered the Patlak slope. Upon completion of the motion correction process, the parametric representation revealed heterogeneous behaviour among lymph nodes compared with the SUV images. Notably, a reduced volume of elevated metabolic rate was found in the mediastinal lymph nodes despite an SUV of 5 g/ml, indicating potential perfusion or inflammation. Motion-resolved short-dynamic PET can enhance the utility and reliability of parametric imaging, an aspect often overlooked in commercial software.
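
Editor's note: the Patlak slope mentioned above comes from the standard graphical analysis: past the equilibration time t*, the ratio C_T(t)/C_p(t) is linear in the normalized integral of the plasma input, with slope Ki. A minimal sketch, assuming time-activity arrays for tissue and plasma; names are illustrative:

```python
import numpy as np

def patlak_ki(tissue_tac, plasma_tac, times, t_star_idx):
    """Fit the Patlak plot from frame t_star_idx onward; return (Ki, V0).
    Assumes plasma_tac is nonzero from t_star_idx on."""
    # Trapezoidal cumulative integral of the plasma input function.
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (plasma_tac[1:] + plasma_tac[:-1]) * np.diff(times))))
    x = integral[t_star_idx:] / plasma_tac[t_star_idx:]
    y = tissue_tac[t_star_idx:] / plasma_tac[t_star_idx:]
    ki, v0 = np.polyfit(x, y, 1)   # slope = Ki, intercept = apparent volume
    return ki, v0
```

A frame-to-frame misalignment perturbs tissue_tac across frames, which directly bends this fitted slope; this is why Ki is more misalignment-sensitive than a single-frame SUV.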