Integration of Genetic Information to Improve Brain Age Gap Estimation Models in the UK Biobank.

Mohite A, Ardila K, Charatpangoon P, Munro E, Zhang Q, Long Q, Curtis C, MacDonald ME

PubMed · Oct 1 2025
Neurodegeneration occurs when the central nervous system becomes impaired as a person ages, a process that can occur at an accelerated pace. It impairs quality of life, affecting essential functions including memory and the capacity for self-care. Genetics play an important role in neurodegeneration and longevity. Brain age gap estimation (BrainAGE) is a biomarker that quantifies the difference between the biological brain age predicted by a machine learning model and the true chronological age of healthy subjects; however, a large portion of the variance in these models remains unaccounted for and is attributed to individual differences. This study focuses on predicting BrainAGE more accurately, aided by genetic information associated with neurodegeneration. To achieve this, a BrainAGE model was developed from MRI measures, the associated genes were identified with a genome-wide association study, and the genetic information was then incorporated into the models. Incorporating genetic information improved model performance by 7% to 12%, showing that it can notably reduce unexplained variance. This work helps to define new ways of identifying persons susceptible to neurological decline with aging and reveals genes for targeted precision-medicine therapies.
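As a rough illustration of the modeling idea (not the authors' pipeline), the sketch below fits a brain-age regression on MRI-derived features, computes the brain age gap as predicted minus chronological age, and then checks whether appending genetic features improves the explained variance. All feature names and data are synthetic placeholders.

```python
# Hedged sketch (synthetic placeholders, not the authors' pipeline): fit a brain
# age regression on MRI-derived features, compute BrainAGE = predicted minus
# chronological age, then test whether appending genetic features (e.g., risk
# variant dosages or a polygenic score) explains more of the variance.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
mri = rng.normal(size=(n, 20))        # e.g., regional volumes and thicknesses
genes = rng.normal(size=(n, 5))       # e.g., neurodegeneration-associated variants
age = 55 + 4 * mri[:, 0] + 2 * genes[:, 0] + rng.normal(scale=3, size=n)

mri_tr, mri_te, gen_tr, gen_te, age_tr, age_te = train_test_split(
    mri, genes, age, test_size=0.3, random_state=0
)

# Baseline model: MRI features only, and the resulting brain age gap
base = Ridge(alpha=1.0).fit(mri_tr, age_tr)
brainage_gap = base.predict(mri_te) - age_te   # positive = "older-looking" brain

# Augmented model: MRI plus genetic features
aug = Ridge(alpha=1.0).fit(np.hstack([mri_tr, gen_tr]), age_tr)

print("R2, MRI only:      ", r2_score(age_te, base.predict(mri_te)))
print("R2, MRI + genetics:", r2_score(age_te, aug.predict(np.hstack([mri_te, gen_te]))))
```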

Improving data-driven gated (DDG) PET and CT registration in thoracic lesions: a comparison of AI registration and DDG CT.

Pan T, Thomas MA, Lu Y, Luo D

PubMed · Sep 30 2025
Misregistration between CT and PET can result in mis-localization and inaccurate quantification of tracer uptake in PET. Data-driven gated (DDG) CT can correct registration and quantification but requires an additional radiation dose of 1.3 mSv and 1 min of acquisition time. AI registration (AIR) requires no additional CT and has been validated to improve registration and reduce the 'banana' misregistration artifacts around the diaphragm. We aimed to compare a validated AIR method and DDG CT for registration and quantification of avid thoracic lesions misregistered in DDG PET scans. Data from 30 PET/CT patients (23 with 18F-FDG, 4 with 68Ga-Dotatate, and 3 with 18F-PSMA piflufolastat), each with at least one misregistered avid lesion in the thorax, were included. Patient studies were conducted using DDG CT to correct misregistration with DDG PET data of phases 30% to 80% on GE Discovery MI PET/CT scanners. Non-attenuation-corrected DDG PET and the misregistered CT were input to AIR, and the AIR-corrected CT was used to register and quantify the DDG PET data. Registration and quantification of lesion SUVmax and of the signal-to-background ratio (SBR) of lesion SUVmax to the 2-cm background mean SUV were compared for each of the 51 avid lesions. DDG CT outperformed AIR in misregistration correction and quantification of avid thoracic lesions (1.16 ± 0.45 cm). Most lesions (46/51, 90%) showed better registration with DDG CT than with AIR, and the remaining 10% (5/51) were similar between the two. Lesions in the baseline CT were on average 2.06 ± 1.0 cm from their corresponding lesions in the DDG CT, while those in the AIR CT were on average 0.97 ± 0.54 cm away. AIR significantly improved lesion registration relative to the baseline CT (P < 0.0001). SUVmax increased by 18.1 ± 15.3% with AIR, but a statistically significantly larger increase of 34.4 ± 25.4% was observed with DDG CT (P < 0.0001). A statistically significant difference in the SBR increase was also observed: 10.5 ± 12.1% with AIR versus 21.1 ± 20.5% with DDG CT (P < 0.0001). Many lesions whose registration was improved by AIR were still left with residual misregistration. AIR could mis-localize a lymph node to the lung parenchyma or the ribs, and could also mis-localize a lung nodule to the left atrium. AIR could also distort the rib cage and the circular cross-section of the aorta. DDG CT outperformed AIR in both localization and quantification of avid thoracic lesions. AIR improved registration of the misregistered PET/CT, but registered lymph nodes could be falsely relocated by AIR, and AIR-induced distortion of the rib cage can also negatively impact image quality. Further research on AIR's ability to model true patient respiratory motion without introducing new misregistration or anatomical distortion is warranted.
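For readers unfamiliar with the quantification step, the sketch below shows one plausible way to compute lesion SUVmax and the signal-to-background ratio (lesion SUVmax over the mean SUV of a 2-cm background shell) from a SUV volume and a lesion mask. The shell construction by morphological dilation, the voxel spacing, and the toy data are illustrative assumptions, not the study's implementation.

```python
# Hedged sketch (assumed conventions, not the study's implementation): lesion
# SUVmax and signal-to-background ratio (SBR), with the background taken as a
# ~2-cm shell grown around the lesion by morphological dilation.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def lesion_sbr(suv, lesion_mask, voxel_size_mm=(2.0, 2.0, 2.0), shell_mm=20.0):
    """Return (SUVmax, SBR) for one lesion in a SUV volume."""
    suv_max = suv[lesion_mask].max()
    # Dilate the lesion mask by roughly shell_mm to define the background region
    n_iter = int(round(shell_mm / min(voxel_size_mm)))
    struct = generate_binary_structure(rank=3, connectivity=1)
    dilated = binary_dilation(lesion_mask, structure=struct, iterations=n_iter)
    background = dilated & ~lesion_mask
    return suv_max, suv_max / suv[background].mean()

# Toy example: a bright 5x5x5 "lesion" inside a low-uptake volume
suv = np.full((40, 40, 40), 1.0)
mask = np.zeros_like(suv, dtype=bool)
mask[18:23, 18:23, 18:23] = True
suv[mask] = 8.0
print(lesion_sbr(suv, mask))  # -> (8.0, ~8.0)
```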

[Advances in the application of artificial intelligence for pulmonary function assessment based on chest imaging in thoracic surgery].

Huang LC, Liang HR, Jiang Y, Lin YC, He JX

PubMed · Sep 27 2025
In recent years, lung function assessment has attracted increasing attention in the perioperative management of thoracic surgery. However, traditional pulmonary function testing methods remain limited in clinical practice due to high equipment requirements and complex procedures. With the rapid development of artificial intelligence (AI) technology, lung function assessment based on multimodal chest imaging (such as X-rays, CT, and MRI) has become a new research focus. Through deep learning algorithms, AI models can accurately extract imaging features of patients and have made significant progress in quantitative analysis of pulmonary ventilation, evaluation of diffusion capacity, measurement of lung volumes, and prediction of lung function decline. Previous studies have demonstrated that AI models perform well in predicting key indicators such as forced expiratory volume in one second (FEV1), diffusing capacity for carbon monoxide (DLCO), and total lung capacity (TLC). Despite these promising prospects, challenges remain in clinical translation, including insufficient data standardization, limited model interpretability, and the lack of prediction models for postoperative complications. In the future, greater emphasis should be placed on multicenter collaboration, the construction of high-quality databases, the promotion of multimodal data integration, and clinical validation to further enhance the application value of AI technology in precision decision-making for thoracic surgery.

[Advances in the application of multimodal image fusion technique in stomatology].

Ma TY, Zhu N, Zhang Y

PubMed · Sep 26 2025
In modern stomatology, obtaining detailed preoperative information is key to accurate intraoperative planning and execution and to prognostic judgment. Traditional single-modality images, however, have clear shortcomings, such as limited content and unstable measurement accuracy, and can hardly meet the diversified needs of oral patients. Multimodal medical image fusion (MMIF) was introduced into stomatology research in the 1990s with the aim of enabling personalized analysis of patient data through various fusion algorithms; it combines the advantages of multiple imaging modalities and lays a stable foundation for new treatment technologies. Recently, artificial intelligence (AI) has significantly increased the precision and efficiency of registration in MMIF, and advanced algorithms and networks have confirmed the strong compatibility between AI and MMIF. This article systematically reviews the development history of multimodal image fusion and its current applications in stomatology, and analyzes technological progress in the field against the background of AI's rapid development, in order to provide new ideas for further advances in stomatology.

Improved pharmacokinetic parameter estimation from DCE-MRI via spatial-temporal information-driven unsupervised learning.

He X, Wang L, Yang Q, Wang J, Xing Z, Cao D, Cai C, Cai S

PubMed · Sep 23 2025
Objective: Pharmacokinetic (PK) parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) provide quantitative characterization of tissue perfusion and permeability. However, existing deep learning methods for PK parameter estimation rely on either temporal or spatial features alone, overlooking the integrated spatial-temporal characteristics of DCE-MRI data. This study aims to remove this barrier by fully leveraging spatial and temporal information to improve parameter estimation. Approach: A spatial-temporal information-driven unsupervised deep learning method (STUDE) was proposed. STUDE combines convolutional neural networks (CNNs) and a customized Vision Transformer (ViT) to separately capture spatial and temporal features, enabling comprehensive modelling of contrast-agent dynamics and tissue heterogeneity. In addition, a spatial-temporal attention (STA) feature fusion module was proposed to enable adaptive focus on both dimensions for more effective feature fusion. Moreover, the extended Tofts model imposed physical constraints on PK parameter estimation, enabling unsupervised training of STUDE. The accuracy and diagnostic value of STUDE were compared with the conventional non-linear least squares (NLLS) method and representative deep learning-based methods (GRU, CNN, U-Net, and VTDCE-Net) on a numerical brain phantom and on 87 glioma patients, respectively. Main results: On the numerical brain phantom, STUDE produced PK parameter maps with the lowest systematic and random errors, even under low-SNR conditions (SNR = 10 dB). On glioma data, STUDE generated parameter maps with reduced noise compared with NLLS and superior structural clarity compared with the other methods. Furthermore, STUDE outperformed all other methods in identifying glioma isocitrate dehydrogenase (IDH) mutation status, achieving areas under the receiver operating characteristic curve (AUC) of 0.840 and 0.908 for Ktrans and Ve, respectively; combining all PK parameters improved the AUC to 0.926. Significance: STUDE advances spatial-temporal information-driven and physics-informed learning for precise PK parameter estimation, demonstrating its potential clinical significance.
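The physical constraint mentioned here, the extended Tofts model, can be written as Ct(t) = vp·Cp(t) + Ktrans·∫ Cp(τ)·exp(-(Ktrans/Ve)(t - τ)) dτ. The sketch below implements this forward model in NumPy with a toy arterial input function; the AIF and parameter values are illustrative assumptions, not those used in the study.

```python
# Hedged sketch of the extended Tofts forward model used as the physical
# constraint: Ct(t) = vp*Cp(t) + Ktrans * int Cp(tau) * exp(-(Ktrans/ve)*(t - tau)) dtau.
# The bi-exponential arterial input function and parameter values are toy choices.
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration Ct(t) from the extended Tofts model."""
    dt = t[1] - t[0]
    kep = ktrans / ve
    ct = np.empty_like(cp)
    for i, ti in enumerate(t):
        kernel = np.exp(-kep * (ti - t[: i + 1]))
        ct[i] = vp * cp[i] + ktrans * np.trapz(cp[: i + 1] * kernel, dx=dt)
    return ct

t = np.linspace(0.0, 5.0, 151)                           # minutes
cp = 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))         # toy arterial input function
ct = extended_tofts(t, cp, ktrans=0.2, ve=0.3, vp=0.05)  # Ktrans, kep in 1/min
print(ct.max())
```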

Advanced Image-Guidance and Surgical-Navigation Techniques for Real-Time Visualized Surgery.

Fan X, Liu X, Xia Q, Chen G, Cheng J, Shi Z, Fang Y, Khadaroo PA, Qian J, Lin H

PubMed · Sep 23 2025
Surgical navigation is a rapidly evolving multidisciplinary system that plays a crucial role in precision medicine. Surgical-navigation systems have substantially enhanced modern surgery by improving the precision of resection, reducing invasiveness, and enhancing patient outcomes. However, clinicians, engineers, and professionals in other fields often view this field from their own perspectives, which usually results in a one-sided viewpoint. This article aims to provide a thorough overview of the recent advancements in surgical-navigation systems and categorizes them on the basis of their unique characteristics and applications. Established techniques (e.g., radiography, intraoperative computed tomography [CT], magnetic resonance imaging [MRI], and ultrasound) and emerging technologies (e.g., photoacoustic imaging and near-infrared [NIR]-II imaging) are systematically analyzed, highlighting their underlying mechanisms, methods of use, and respective advantages and disadvantages. Despite substantial progress, the existing navigation systems face challenges, including limited accuracy, high costs, and extensive training requirements for surgeons. Addressing these limitations is crucial for widespread adoption of these technologies. The review emphasizes the need for developing more intelligent, minimally invasive, precise, personalized, and radiation-free navigation solutions. By integrating advanced imaging modalities, machine learning algorithms, and real-time feedback mechanisms, next-generation surgical-navigation systems can further enhance surgical precision and patient safety. By bridging the knowledge gap between clinical practice and engineering innovation, this review not only provides valuable insights for surgeons seeking optimal navigation strategies, but also offers engineers a deeper understanding of clinical application scenarios.

An Implicit Registration Framework Integrating Kolmogorov-Arnold Networks with Velocity Regularization for Image-Guided Radiation Therapy.

Sun P, Zhang C, Yang Z, Yin FF, Liu M

PubMed · Sep 22 2025
In image-guided radiation therapy (IGRT), deformable image registration between computed tomography (CT) and cone-beam computed tomography (CBCT) images remains challenging due to the computational cost of iterative algorithms and the data dependence of supervised deep learning methods. Implicit neural representation (INR) provides a promising alternative, but conventional multilayer perceptrons (MLPs) may struggle to efficiently represent complex, nonlinear deformations. This study introduces a novel INR-based registration framework that models the deformation as a continuous, time-varying velocity field, parameterized by a Kolmogorov-Arnold Network (KAN) constructed using Jacobi polynomials. To our knowledge, this is the first integration of KANs into medical image registration, establishing a new paradigm beyond standard MLP-based INR. For improved efficiency, the KAN estimates low-dimensional principal components of the velocity field, which are reconstructed via inverse principal component analysis and temporally integrated to derive the final deformation. This approach achieves a ~70% improvement in computational efficiency relative to direct velocity field modeling while ensuring smooth and topology-preserving transformations through velocity regularization. Evaluation on a publicly available pelvic CT-CBCT dataset demonstrates up to 6% improvement in registration accuracy over traditional iterative methods and ~3% over MLP-based INR baselines, indicating the potential of the proposed method as an efficient and generalizable alternative for deformable registration.
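As a rough illustration of the KAN building block referred to here, the sketch below implements a single KAN-style layer in PyTorch in which every input-output edge applies a learnable combination of orthogonal polynomials (Legendre, the alpha = beta = 0 special case of the Jacobi family). The layer shape, polynomial degree, and the idea of mapping spatial coordinates to a handful of velocity-field components are assumptions for illustration, not the authors' architecture.

```python
# Hedged sketch (assumed, not the authors' architecture): one KAN-style layer in
# which each input-output edge applies a learnable combination of orthogonal
# polynomials; Legendre is used as the alpha = beta = 0 member of the Jacobi family.
import torch
import torch.nn as nn

class JacobiKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, degree=4):
        super().__init__()
        self.degree = degree
        # One set of polynomial coefficients per input-output edge
        self.coeffs = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, degree + 1))

    def forward(self, x):
        # Squash inputs into [-1, 1], the interval on which the polynomials are defined
        x = torch.tanh(x)
        # Build the polynomial basis with the Legendre three-term recurrence
        polys = [torch.ones_like(x), x]
        for n in range(1, self.degree):
            polys.append(((2 * n + 1) * x * polys[n] - n * polys[n - 1]) / (n + 1))
        basis = torch.stack(polys[: self.degree + 1], dim=-1)  # (batch, in_dim, degree+1)
        # Sum contributions over input dimensions and polynomial orders
        return torch.einsum("bid,iod->bo", basis, self.coeffs)

# Illustrative use: map (x, y, z, t) coordinates to a few low-dimensional
# velocity-field components (analogous in spirit to the paper's PCA coefficients)
layer = JacobiKANLayer(in_dim=4, out_dim=8, degree=4)
print(layer(torch.rand(16, 4)).shape)  # torch.Size([16, 8])
```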

AI-Driven Multimodality Fusion in Cardiac Imaging: Integrating CT, MRI, and Echocardiography for Precision.

Tran HH, Thu A, Twayana AR, Fuertes A, Gonzalez M, Basta M, James M, Mehta KA, Elias D, Figaro YM, Islek D, Frishman WH, Aronow WS

PubMed · Sep 19 2025
Artificial intelligence (AI)-enabled multimodal cardiovascular imaging holds significant promise for improving diagnostic accuracy, enhancing risk stratification, and supporting clinical decision-making. However, its translation into routine practice remains limited by multiple technical, infrastructural, and clinical barriers. This review synthesizes current challenges, including variability in image quality, alignment, and acquisition protocols; scarcity of large, annotated multimodality datasets; interoperability limitations across vendors and institutions; clinical skepticism due to limited prospective validation; and substantial development and implementation costs. Drawing from recent advances, we outline future research priorities to bridge the gap between technical feasibility and clinical utility. Key strategies include developing unified, vendor-agnostic AI models resilient to inter-institutional variability; integrating diverse data types such as genomics, wearable biosensors, and longitudinal clinical records; leveraging reinforcement learning for adaptive decision-support systems; and employing longitudinal imaging fusion for disease tracking and predictive analytics. We emphasize the need for rigorous prospective clinical trials, harmonized imaging standards, and collaborative data-sharing frameworks to ensure robust, equitable, and scalable deployment. Addressing these challenges through coordinated multidisciplinary efforts will be essential to realize the full potential of AI-driven multimodal cardiovascular imaging in advancing precision cardiovascular care.

Technical Feasibility of Quantitative Susceptibility Mapping Radiomics for Predicting Deep Brain Stimulation Outcomes in Parkinson Disease.

Roberts AG, Zhang J, Tozlu C, Romano D, Akkus S, Kim H, Sabuncu MR, Spincemaille P, Li J, Wang Y, Wu X, Kopell BH

PubMed · Sep 18 2025
Parkinson disease (PD) patients with motor complications are often considered for deep brain stimulation (DBS) surgery. Predicting symptom improvement to separate DBS responders from nonresponders remains an unmet need. Currently, DBS candidacy is evaluated using the levodopa challenge test (LCT) to confirm dopamine responsiveness and diagnosis; however, predicting DBS success from presurgical symptom improvement associated with levodopa dosage changes is highly problematic. Quantitative susceptibility mapping (QSM) is a recently developed MRI method that depicts brain iron distribution. Because the substantia nigra and subthalamic nuclei are well visualized, QSM has been used in presurgical planning of DBS, and spatial features of the iron distribution in these nuclei have previously been linked with disease progression and motor symptom severity. Given its clear target depiction and these prior findings, this study demonstrates the technical feasibility of predicting DBS outcomes from presurgical QSM. A novel presurgical QSM radiomics approach using a regression model is presented to predict DBS outcome from spatial features of the deep gray nuclei in QSM. To overcome limited and noisy training data, data augmentation using label noise injection, or "compensation", was used to improve the outcome prediction of the regression model. The QSM radiomics model was evaluated on 67 patients with PD who underwent DBS at two medical centers. The QSM radiomics model predicted DBS improvement in the Unified Parkinson Disease Rating Scale at Center 1 and Center 2 with Pearson correlations of … (…) and … (…), respectively, whereas LCT failed to predict DBS improvement at Center 1 and Center 2, with Pearson correlations of … (…) and … (…), respectively. QSM radiomics has the potential to accurately predict DBS outcome in patients with PD, offering a valuable alternative to the time-consuming and low-accuracy LCT.
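A minimal sketch of the label-noise-injection ("compensation") augmentation idea described here, under the assumption that it amounts to replicating a small training set with perturbed outcome labels before fitting a regression from radiomic features to motor improvement; the features, labels, and noise level are synthetic placeholders, not the authors' implementation.

```python
# Hedged sketch of "label noise injection" (compensation), assuming it means
# stacking noisy replicates of the training labels before fitting a radiomics
# regression; data and noise level are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 30))                                # radiomic features
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=60)    # toy % motor improvement

def augment_with_label_noise(X, y, copies=5, sigma=0.3):
    """Stack noisy replicates of the training labels onto the original data."""
    X_aug = np.vstack([X] * (copies + 1))
    y_aug = np.concatenate(
        [y] + [y + rng.normal(scale=sigma, size=y.shape) for _ in range(copies)]
    )
    return X_aug, y_aug

X_aug, y_aug = augment_with_label_noise(X, y)
model = Ridge(alpha=1.0).fit(X_aug, y_aug)
print("Correlation on original data:", np.corrcoef(model.predict(X), y)[0, 1])
```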

Development and validation of machine learning predictive models for gastric volume based on ultrasonography: A multicentre study.

Liu J, Li S, Li M, Li G, Huang N, Shu B, Chen J, Zhu T, Huang H, Duan G

PubMed · Sep 18 2025
Aspiration of gastric contents is a serious complication associated with anaesthesia. Accurate prediction of gastric volume may assist in risk stratification and help prevent aspiration. This study aimed to develop and validate machine learning models to predict gastric volume from ultrasound and clinical features. This cross-sectional multicentre study was conducted at two hospitals and included adult patients undergoing gastroscopy under intravenous anaesthesia. Patients from Centre 1 were prospectively enrolled and randomly divided into a training set (Cohort A, n = 415) and an internal validation set (Cohort B, n = 179), while patients from Centre 2 formed the external validation set (Cohort C, n = 199). The primary outcome was gastric volume, measured by endoscopic aspiration immediately after ultrasonographic examination. Least absolute shrinkage and selection operator (LASSO) regression was used for feature selection, and eight machine learning models were developed and evaluated with Bland-Altman analysis. The models' ability to predict medium-to-high and high gastric volumes was assessed; the top-performing models were externally validated, and their predictive performance was compared with the traditional Perlas model. Among the 793 enrolled patients, the number and proportion with high gastric volume were 23 (5.5%) in the development cohort, 10 (5.6%) in the internal validation cohort, and 3 (1.5%) in the external validation cohort. Eight models were developed using age, cross-sectional area of the gastric antrum in the right lateral decubitus position (RLD-CSA), and Perlas grade, variables selected through LASSO regression. In internal validation, Bland-Altman analysis showed that the Perlas model overestimated gastric volume (mean bias 23.5 mL), while the new models provided accurate estimates (mean bias -0.1 to 2.0 mL). The new models significantly improved prediction of medium-to-high gastric volume (area under the curve [AUC]: 0.74-0.77 vs. 0.63) and high gastric volume (AUC: 0.85-0.94 vs. 0.74). The best-performing adaptive boosting and linear regression models underwent external validation, with AUCs of 0.81 (95% confidence interval [CI], 0.74-0.89) and 0.80 (95% CI, 0.72-0.89) for medium-to-high gastric volume, and 0.96 (95% CI, 0.91-1) and 0.96 (95% CI, 0.89-1) for high gastric volume. We propose a novel machine learning-based predictive model that outperforms the Perlas model by incorporating the key features of age, RLD-CSA, and Perlas grade, enabling accurate prediction of gastric volume.
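A hedged sketch of the modeling recipe reported here: LASSO-based selection over candidate predictors (the abstract reports age, RLD-CSA, and Perlas grade), a regression model for gastric volume, and AUC for detecting volumes above a threshold. The synthetic data, the volume cutoff, and the model choices below are illustrative assumptions, not the study's values.

```python
# Hedged sketch of the reported recipe (synthetic data, illustrative threshold):
# LASSO feature selection over age, antral cross-sectional area in right lateral
# decubitus (RLD-CSA), and Perlas grade, then a regression for gastric volume
# and AUC for detecting "high" volumes above a cutoff.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 400
age = rng.uniform(18, 80, n)
rld_csa = rng.uniform(2, 12, n)                # cm^2
perlas_grade = rng.integers(0, 3, n)           # grades 0, 1, 2
volume = 5 * rld_csa + 15 * perlas_grade + 0.1 * age + rng.normal(scale=10, size=n)

X = np.column_stack([age, rld_csa, perlas_grade])

# LASSO-based feature selection (all three predictors are expected to survive here)
lasso = LassoCV(cv=5).fit(X, volume)
selected = np.flatnonzero(lasso.coef_ != 0)
if selected.size == 0:                         # fallback if everything is zeroed out
    selected = np.arange(X.shape[1])

# Final regression on the selected features, scored for high-volume detection
reg = LinearRegression().fit(X[:, selected], volume)
pred = reg.predict(X[:, selected])
high_volume = (volume > 90).astype(int)        # illustrative cutoff, not the study's
print("Selected feature indices:", selected)
print("AUC for high gastric volume:", roc_auc_score(high_volume, pred))
```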