
Integration of Genetic Information to Improve Brain Age Gap Estimation Models in the UK Biobank.

Mohite A, Ardila K, Charatpangoon P, Munro E, Zhang Q, Long Q, Curtis C, MacDonald ME

pubmed · Oct 1 2025
Neurodegeneration occurs when the central nervous system becomes impaired as a person ages, which can happen at an accelerated pace. Neurodegeneration impairs quality of life, affecting essential functions including memory and the ability to care for oneself. Genetics play an important role in neurodegeneration and longevity. Brain age gap estimation (BrainAGE) is a biomarker that quantifies the difference between a machine learning model's predicted biological brain age and the true chronological age of healthy subjects; however, a large portion of the variance in these models remains unaccounted for and is attributed to individual differences. This study focuses on predicting BrainAGE more accurately, aided by genetic information associated with neurodegeneration. To achieve this, a BrainAGE model was developed from MRI measures, and the associated genes were then determined with a genome-wide association study. Genetic information was subsequently incorporated into the models, improving model performance by 7% to 12% and showing that genetic information can notably reduce unexplained variance. This work helps define new ways of identifying individuals susceptible to accelerated neurological aging and reveals genes for targeted precision-medicine therapies.
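
To make the construction concrete, here is a minimal, hedged sketch of a BrainAGE pipeline with genetic features appended to MRI features; the model choice, feature dimensions, and synthetic data are illustrative assumptions, not the authors' UK Biobank pipeline.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n = 500
    mri = rng.normal(size=(n, 64))     # stand-in for regional MRI measures
    genes = rng.normal(size=(n, 10))   # stand-in for GWAS-derived genetic scores
    age = rng.uniform(45, 80, size=n)

    def brainage_gap(X, y):
        # BrainAGE: out-of-fold predicted brain age minus chronological age.
        pred = cross_val_predict(GradientBoostingRegressor(), X, y, cv=5)
        return pred - y

    gap_mri = brainage_gap(mri, age)
    gap_joint = brainage_gap(np.hstack([mri, genes]), age)
    # On real data, lower residual variance for the joint model would reflect
    # genetics explaining part of the previously unaccounted-for variance.
    print(gap_mri.var(), gap_joint.var())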

Improving data-driven gated (DDG) PET and CT registration in thoracic lesions: a comparison of AI registration and DDG CT.

Pan T, Thomas MA, Lu Y, Luo D

pubmed · Sep 30 2025
Misregistration between CT and PET can result in mis-localization and inaccurate quantification of tracer uptake in PET. Data-driven gated (DDG) CT can correct registration and quantification but requires a radiation dose of 1.3 mSv and 1 min of acquisition time. AI registration (AIR) does not require an additional CT and has been validated to improve registration and reduce the 'banana' misregistration artifacts around the diaphragm. We aimed to compare a validated AIR and DDG CT in registration and quantification of avid thoracic lesions misregistered in DDG PET scans. Thirty PET/CT patient datasets (23 with ¹⁸F-FDG, 4 with ⁶⁸Ga-Dotatate, and 3 with ¹⁸F-PSMA piflufolastat), each with at least one misregistered avid lesion in the thorax, were included. Patient studies were conducted using DDG CT to correct misregistration with DDG PET data of the phases 30 to 80% on GE Discovery MI PET/CT scanners. Non-attenuation-corrected DDG PET and the misregistered CT were input to AIR, and the AIR-corrected CT data were output to register and quantify the DDG PET data. Registration and quantification of lesion SUVmax and the signal-to-background ratio (SBR) of lesion SUVmax to the 2-cm background mean SUV were compared for each of the 51 avid lesions. DDG CT outperformed AIR in misregistration correction and quantification of avid thoracic lesions (1.16 ± 0.45 cm). Most lesions (46/51, 90%) showed improved registration with DDG CT relative to AIR, with the remaining 10% (5/51) similar between AIR and DDG CT. Lesions in the baseline CT were on average 2.06 ± 1.0 cm from their corresponding lesions in the DDG CT, while those in the AIR CT were on average 0.97 ± 0.54 cm away. AIR significantly improved lesion registration compared to the baseline CT (P < 0.0001). SUVmax increased by 18.1 ± 15.3% with AIR, but a statistically significantly larger increase of 34.4 ± 25.4% was observed with DDG CT (P < 0.0001). A statistically significant increase in SBR was also observed, from 10.5 ± 12.1% with AIR to 21.1 ± 20.5% with DDG CT (P < 0.0001). Many lesions improved by AIR nevertheless retained residual misregistration. AIR could mis-localize a lymph node to the lung parenchyma or the ribs, and could also mis-localize a lung nodule to the left atrium. AIR could also distort the rib cage and the circular cross-section of the aorta. DDG CT outperformed AIR in both localization and quantification of avid thoracic lesions. AIR improved registration of the misregistered PET/CT, but lymph nodes could still be falsely registered by AIR, and AIR-induced distortion of the rib cage can negatively impact image quality. Further research on AIR's ability to model true patient respiratory motion without introducing new misregistration or anatomical distortion is warranted.
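
As a point of reference, the signal-to-background ratio above can be computed roughly as in the sketch below: lesion SUVmax divided by the mean SUV of a 2-cm background around the lesion. Treating the background as a 2-cm shell and the mask-based implementation are assumptions for illustration.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def sbr(suv, lesion_mask, spacing_mm, background_mm=20.0):
        # Distance (mm) of every non-lesion voxel from the lesion surface.
        dist = distance_transform_edt(~lesion_mask, sampling=spacing_mm)
        background = (dist > 0) & (dist <= background_mm)
        # Lesion SUVmax over the background mean SUV.
        return suv[lesion_mask].max() / suv[background].mean()

The reported percentage changes then follow from evaluating SUVmax and SBR before and after each registration correction.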

A Scalable Distributed Framework for Multimodal GigaVoxel Image Registration

Rohit Jena, Vedant Zope, Pratik Chaudhari, James C. Gee

arxiv preprint · Sep 29 2025
In this work, we propose FFDP, a set of IO-aware non-GEMM fused kernels supplemented with a distributed framework for image registration at unprecedented scales. Image registration is an inverse problem fundamental to the biomedical and life sciences, but algorithms have not scaled in tandem with image acquisition capabilities. Our framework complements existing model parallelism techniques proposed for large-scale transformer training by optimizing non-GEMM bottlenecks and enabling convolution-aware tensor sharding. We demonstrate unprecedented capabilities by performing multimodal registration of a 100-micron ex-vivo human brain MRI volume at native resolution - an inverse problem more than 570x larger than a standard clinical datum - in about a minute using only 8 A6000 GPUs. FFDP accelerates existing state-of-the-art optimization and deep learning registration pipelines by up to 6-7x while reducing peak memory consumption by 20-59%. Comparative analysis on a 250-micron dataset shows that FFDP can fit up to 64x larger problems than existing SOTA on a single GPU, and highlights both the performance and efficiency gains of FFDP compared to SOTA image registration methods.

[Advances in the application of artificial intelligence for pulmonary function assessment based on chest imaging in thoracic surgery].

Huang LC, Liang HR, Jiang Y, Lin YC, He JX

pubmed · Sep 27 2025
In recent years, lung function assessment has attracted increasing attention in the perioperative management of thoracic surgery. However, traditional pulmonary function testing methods remain limited in clinical practice due to high equipment requirements and complex procedures. With the rapid development of artificial intelligence (AI) technology, lung function assessment based on multimodal chest imaging (such as X-rays, CT, and MRI) has become a new research focus. Through deep learning algorithms, AI models can accurately extract imaging features of patients and have made significant progress in quantitative analysis of pulmonary ventilation, evaluation of diffusion capacity, measurement of lung volumes, and prediction of lung function decline. Previous studies have demonstrated that AI models perform well in predicting key indicators such as forced expiratory volume in one second (FEV1), diffusing capacity for carbon monoxide (DLCO), and total lung capacity (TLC). Despite these promising prospects, challenges remain in clinical translation, including insufficient data standardization, limited model interpretability, and the lack of prediction models for postoperative complications. In the future, greater emphasis should be placed on multicenter collaboration, the construction of high-quality databases, the promotion of multimodal data integration, and clinical validation to further enhance the application value of AI technology in precision decision-making for thoracic surgery.

Test-time Uncertainty Estimation for Medical Image Registration via Transformation Equivariance

Lin Tian, Xiaoling Hu, Juan Eugenio Iglesias

arxiv preprint · Sep 27 2025
Accurate image registration is essential for downstream applications, yet current deep registration networks provide limited indications of whether and when their predictions are reliable. Existing uncertainty estimation strategies, such as Bayesian methods, ensembles, or MC dropout, require architectural changes or retraining, limiting their applicability to pretrained registration networks. Instead, we propose a test-time uncertainty estimation framework that is compatible with any pretrained network. Our framework is grounded in the transformation equivariance property of registration, which states that the true mapping between two images should remain consistent under spatial perturbations of the input. By analyzing the variance of network predictions under such perturbations, we derive a theoretical decomposition of perturbation-based uncertainty in registration. This decomposition separates into two terms: (i) an intrinsic spread, reflecting epistemic noise, and (ii) a bias jitter, capturing how systematic error drifts under perturbations. Across four anatomical structures (brain, cardiac, abdominal, and lung) and multiple registration models (uniGradICON, SynthMorph), the uncertainty maps correlate consistently with registration errors and highlight regions requiring caution. Our framework turns any pretrained registration network into a risk-aware tool at test time, placing medical image registration one step closer to safe deployment in clinical and large-scale research settings.
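
A hedged sketch of the perturbation scheme follows, assuming a pretrained register(fixed, moving) function that returns a dense displacement field shaped like the input plus a trailing component axis; restricting the perturbations to integer translations is an illustrative simplification of the spatial perturbations described above.

    import numpy as np

    def perturbation_uncertainty(register, fixed, moving, n_perturb=8, seed=0):
        rng = np.random.default_rng(seed)
        axes = tuple(range(fixed.ndim))
        preds = []
        for _ in range(n_perturb):
            shift = tuple(int(s) for s in rng.integers(-4, 5, size=fixed.ndim))
            # Equivariance: registering shifted inputs, then shifting the
            # prediction back, should reproduce the unperturbed field.
            phi = register(np.roll(fixed, shift, axes),
                           np.roll(moving, shift, axes))
            preds.append(np.roll(phi, tuple(-s for s in shift), axes))
        preds = np.stack(preds)
        spread = preds.var(axis=0)                             # intrinsic spread
        jitter = preds.mean(axis=0) - register(fixed, moving)  # bias jitter
        return spread, jitter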

[Advances in the application of multimodal image fusion technique in stomatology].

Ma TY, Zhu N, Zhang Y

pubmed · Sep 26 2025
In modern stomatology, obtaining detailed preoperative information is the key to accurate intraoperative planning, execution, and prognostic judgment. However, traditional single-modality images have obvious shortcomings, such as limited content and unstable measurement accuracy, and can hardly meet the diversified needs of oral patients. Multimodal medical image fusion (MMIF) was introduced into stomatology research in the 1990s, aiming to realize personalized analysis of patient data through multiple fusion algorithms; it combines the advantages of multimodal medical images while laying a stable foundation for new treatment technologies. Recently, artificial intelligence (AI) has significantly increased the precision and efficiency of MMIF registration, and advanced algorithms and networks have confirmed the strong compatibility between AI and MMIF. This article systematically reviews the development history of the multimodal image fusion technique and its current applications in stomatology, and analyzes technological progress in the field against the background of AI's rapid development, in order to provide new ideas for further advances in stomatology.

Learning KAN-based Implicit Neural Representations for Deformable Image Registration

Nikita Drozdov, Marat Zinovev, Dmitry Sorokin

arxiv preprint · Sep 26 2025
Deformable image registration (DIR) is a cornerstone of medical image analysis, enabling spatial alignment for tasks like comparative studies and multi-modal fusion. While learning-based methods (e.g., CNNs, transformers) offer fast inference, they often require large training datasets and struggle to match the precision of classical iterative approaches on some organ types and imaging modalities. Implicit neural representations (INRs) have emerged as a promising alternative, parameterizing deformations as continuous mappings from coordinates to displacement vectors. However, this comes at the cost of requiring instance-specific optimization, making computational efficiency and seed-dependent learning stability critical factors for these methods. In this work, we propose KAN-IDIR and RandKAN-IDIR, the first integration of Kolmogorov-Arnold Networks (KANs) into INR-based deformable image registration. Our proposed randomized basis sampling strategy reduces the required number of basis functions in KAN while maintaining registration quality, thereby significantly lowering computational costs. We evaluated our approach on three diverse datasets (lung CT, brain MRI, cardiac MRI) and compared it with competing instance-specific learning-based approaches, dataset-trained deep learning models, and classical registration approaches. KAN-IDIR and RandKAN-IDIR achieved the highest accuracy among INR-based methods across all evaluated modalities and anatomies, with minimal computational overhead and superior learning stability across multiple random seeds. Additionally, we found that our RandKAN-IDIR model with randomized basis sampling slightly outperforms the variant with learnable basis function indices, while eliminating its additional training-time complexity.
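
For orientation, a minimal PyTorch sketch of the instance-specific INR formulation follows, with a plain coordinate MLP standing in for the KAN layers; the loss (pure intensity MSE, no regularizer), resolution handling, and optimizer settings are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Coordinates in [-1, 1]^3 -> displacement vectors (a KAN would replace this MLP).
    inr = nn.Sequential(nn.Linear(3, 256), nn.GELU(),
                        nn.Linear(256, 256), nn.GELU(),
                        nn.Linear(256, 3))

    def register_pair(fixed, moving, steps=200, lr=1e-4):
        # fixed, moving: (1, 1, D, H, W) tensors; optimization is per image pair.
        D, H, W = fixed.shape[2:]
        grid = torch.stack(torch.meshgrid(
            *[torch.linspace(-1, 1, s) for s in (D, H, W)], indexing="ij"), dim=-1)
        opt = torch.optim.Adam(inr.parameters(), lr=lr)
        for _ in range(steps):
            disp = inr(grid.reshape(-1, 3)).reshape(D, H, W, 3)
            # grid_sample expects (x, y, z)-ordered sample locations, hence flip.
            warped = F.grid_sample(moving, (grid + disp).flip(-1).unsqueeze(0),
                                   align_corners=True)
            loss = ((warped - fixed) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        return inr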

SHMoAReg: Spark Deformable Image Registration via Spatial Heterogeneous Mixture of Experts and Attention Heads

Yuxi Zheng, Jianhui Feng, Tianran Li, Marius Staring, Yuchuan Qiao

arxiv preprint · Sep 24 2025
Encoder-decoder architectures are widely used in deep learning-based Deformable Image Registration (DIR), where the encoder extracts multi-scale features and the decoder predicts deformation fields by recovering spatial locations. However, current methods lack specialized extraction of features useful for registration and predict deformation jointly and homogeneously in all three directions. In this paper, we propose a novel expert-guided DIR network, named SHMoAReg, with a Mixture of Experts (MoE) mechanism applied in both the encoder and the decoder. Specifically, we incorporate a Mixture of Attention heads (MoA) into the encoder layers and a Spatial Heterogeneous Mixture of Experts (SHMoE) into the decoder layers. The MoA enhances the specialization of feature extraction by dynamically selecting the optimal combination of attention heads for each image token. Meanwhile, the SHMoE predicts deformation fields heterogeneously in the three directions for each voxel using experts with varying kernel sizes. Extensive experiments on two publicly available datasets show consistent improvements over various methods, with a notable increase from 60.58% to 65.58% in Dice score on the abdominal CT dataset. Furthermore, SHMoAReg enhances model interpretability by differentiating experts' utilities across and within different resolution layers. To the best of our knowledge, we are the first to introduce the MoE mechanism into DIR tasks. The code will be released soon.
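
The decoder-side SHMoE idea can be sketched as follows: each expert is a convolution with a different kernel size, and a per-voxel gate mixes the experts separately for each of the three deformation directions. Kernel sizes, channel counts, and the softmax gate are assumptions, not the published configuration.

    import torch
    import torch.nn as nn

    class SHMoEHead(nn.Module):
        def __init__(self, in_ch, kernel_sizes=(3, 5, 7)):
            super().__init__()
            # One expert per kernel size, each predicting a 3-channel flow.
            self.experts = nn.ModuleList(
                nn.Conv3d(in_ch, 3, k, padding=k // 2) for k in kernel_sizes)
            # Per-voxel gate: one expert distribution per spatial direction.
            self.gate = nn.Conv3d(in_ch, 3 * len(kernel_sizes), 1)

        def forward(self, feats):                 # feats: (N, C, D, H, W)
            flows = torch.stack([e(feats) for e in self.experts], dim=1)
            w = self.gate(feats).view(feats.shape[0], len(self.experts), 3,
                                      *feats.shape[2:]).softmax(dim=1)
            return (w * flows).sum(dim=1)         # (N, 3, D, H, W) deformation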

Improved pharmacokinetic parameter estimation from DCE-MRI via spatial-temporal information-driven unsupervised learning.

He X, Wang L, Yang Q, Wang J, Xing Z, Cao D, Cai C, Cai S

pubmed · Sep 23 2025
Objective: Pharmacokinetic (PK) parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) provide quantitative characterization of tissue perfusion and permeability. However, existing deep learning methods for PK parameter estimation rely on either temporal or spatial features alone, overlooking the integrated spatial-temporal characteristics of DCE-MRI data. This study aims to remove this barrier by fully leveraging spatial and temporal information to improve parameter estimation.
Approach: A spatial-temporal information-driven unsupervised deep learning method (STUDE) was proposed. STUDE combines convolutional neural networks (CNNs) and a customized Vision Transformer (ViT) to separately capture spatial and temporal features, enabling comprehensive modelling of contrast agent dynamics and tissue heterogeneity. In addition, a spatial-temporal attention (STA) feature fusion module was proposed to enable adaptive focus on both dimensions for more effective feature fusion. Moreover, the extended Tofts model imposed physical constraints on PK parameter estimation, enabling unsupervised training of STUDE. The accuracy and diagnostic value of STUDE were compared with the orthodox non-linear least squares (NLLS) method and representative deep learning-based methods (i.e., GRU, CNN, U-Net, and VTDCE-Net) on a numerical brain phantom and 87 glioma patients, respectively.
Main results: On the numerical brain phantom, STUDE produced PK parameter maps with the lowest systematic and random errors, even under low-SNR conditions (SNR = 10 dB). On glioma data, STUDE generated parameter maps with reduced noise compared with NLLS and superior structural clarity compared with the other methods. Furthermore, STUDE outperformed all other methods in identifying glioma isocitrate dehydrogenase (IDH) mutation status, achieving areas under the receiver operating characteristic curve (AUC) of 0.840 and 0.908 for Ktrans and Ve, respectively. A combination of all PK parameters improved the AUC to 0.926.
Significance: STUDE advances spatial-temporal information-driven and physics-informed learning for precise PK parameter estimation, demonstrating its potential clinical significance.
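
For reference, the extended Tofts model that supplies the physical constraint has the standard form Ct(t) = vp·Cp(t) + Ktrans ∫₀ᵗ Cp(τ)·exp(-Ktrans·(t − τ)/ve) dτ; the sketch below is a direct numerical evaluation with an assumed trapezoidal discretization.

    import numpy as np

    def extended_tofts(t, cp, ktrans, ve, vp):
        # Ct(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(tau)*exp(-Ktrans*(t-tau)/ve) dtau
        ct = np.zeros_like(cp, dtype=float)
        for i in range(len(t)):
            tau = t[: i + 1]
            integrand = cp[: i + 1] * np.exp(-ktrans * (t[i] - tau) / ve)
            if i > 0:  # trapezoidal rule over the convolution integral
                ct[i] = ktrans * np.sum(
                    np.diff(tau) * (integrand[1:] + integrand[:-1]) / 2)
            ct[i] += vp * cp[i]
        return ct

Penalizing the mismatch between this forward model and the measured concentration curves is what allows training without ground-truth parameter maps.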

Advanced Image-Guidance and Surgical-Navigation Techniques for Real-Time Visualized Surgery.

Fan X, Liu X, Xia Q, Chen G, Cheng J, Shi Z, Fang Y, Khadaroo PA, Qian J, Lin H

pubmed · Sep 23 2025
Surgical navigation is a rapidly evolving multidisciplinary system that plays a crucial role in precision medicine. Surgical-navigation systems have substantially enhanced modern surgery by improving the precision of resection, reducing invasiveness, and enhancing patient outcomes. However, clinicians, engineers, and professionals in other fields often view this field from their own perspectives, which usually results in a one-sided viewpoint. This article aims to provide a thorough overview of the recent advancements in surgical-navigation systems and categorizes them on the basis of their unique characteristics and applications. Established techniques (e.g., radiography, intraoperative computed tomography [CT], magnetic resonance imaging [MRI], and ultrasound) and emerging technologies (e.g., photoacoustic imaging and near-infrared [NIR]-II imaging) are systematically analyzed, highlighting their underlying mechanisms, methods of use, and respective advantages and disadvantages. Despite substantial progress, the existing navigation systems face challenges, including limited accuracy, high costs, and extensive training requirements for surgeons. Addressing these limitations is crucial for widespread adoption of these technologies. The review emphasizes the need for developing more intelligent, minimally invasive, precise, personalized, and radiation-free navigation solutions. By integrating advanced imaging modalities, machine learning algorithms, and real-time feedback mechanisms, next-generation surgical-navigation systems can further enhance surgical precision and patient safety. By bridging the knowledge gap between clinical practice and engineering innovation, this review not only provides valuable insights for surgeons seeking optimal navigation strategies, but also offers engineers a deeper understanding of clinical application scenarios.