4D Virtual Imaging Platform for Dynamic Joint Assessment via Uni-Plane X-ray and 2D-3D Registration

Hao Tang, Rongxi Yi, Lei Li, Kaiyi Cao, Jiapeng Zhao, Yihan Xiao, Minghai Shi, Peng Yuan, Yan Xi, Hui Tang, Wei Li, Zhan Wu, Yixin Zhou

arXiv preprint · Aug 22, 2025
Conventional computed tomography (CT) lacks the ability to capture dynamic, weight-bearing joint motion. Functional evaluation, particularly after surgical intervention, requires four-dimensional (4D) imaging, but current methods are limited by excessive radiation exposure or incomplete spatial information from 2D techniques. We propose an integrated 4D joint analysis platform that combines: (1) a dual robotic arm cone-beam CT (CBCT) system with a programmable, gantry-free trajectory optimized for upright scanning; (2) a hybrid imaging pipeline that fuses static 3D CBCT with dynamic 2D X-rays using deep learning-based preprocessing, 3D-2D projection, and iterative optimization; and (3) a clinically validated framework for quantitative kinematic assessment. In simulation studies, the method achieved sub-voxel accuracy (0.235 mm) with a 99.18 percent success rate, outperforming conventional and state-of-the-art registration approaches. Clinical evaluation further demonstrated accurate quantification of tibial plateau motion and medial-lateral variance in post-total knee arthroplasty (TKA) patients. This 4D CBCT platform enables fast, accurate, and low-dose dynamic joint imaging, offering new opportunities for biomechanical research, precision diagnostics, and personalized orthopedic care.
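
The 3D-2D registration step described here (project the static CBCT, compare against the dynamic X-ray, iterate) can be illustrated with a minimal sketch. It assumes a parallel-beam projector and normalized cross-correlation as the similarity metric; the authors' pipeline adds deep-learning preprocessing and a calibrated cone-beam geometry that are not reproduced here.

    import numpy as np
    from scipy.ndimage import affine_transform
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def drr(volume, params):
        # Parallel-beam digitally reconstructed radiograph for a rigid pose.
        # params = (rx, ry, rz, tx, ty, tz): Euler angles in degrees,
        # translations in voxels; the volume is rotated about its centre.
        rot = Rotation.from_euler("xyz", params[:3], degrees=True).as_matrix()
        center = (np.array(volume.shape) - 1) / 2.0
        offset = center - rot @ center + np.asarray(params[3:])
        moved = affine_transform(volume, rot, offset=offset, order=1)
        return moved.sum(axis=0)  # integrate along the beam axis

    def ncc_loss(params, volume, xray):
        # Negative normalized cross-correlation between DRR and observed X-ray.
        p, q = drr(volume, params).ravel(), xray.ravel()
        p = (p - p.mean()) / (p.std() + 1e-8)
        q = (q - q.mean()) / (q.std() + 1e-8)
        return -float(np.mean(p * q))

    def register(volume, xray, init=np.zeros(6)):
        # Iteratively refine the pose until the projection matches the X-ray.
        return minimize(ncc_loss, init, args=(volume, xray), method="Powell").x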

Deep learning ensemble for abdominal aortic calcification scoring from lumbar spine X-ray and DXA images.

Voss A, Suoranta S, Nissinen T, Hurskainen O, Masarwah A, Sund R, Tohka J, Väänänen SP

PubMed paper · Aug 22, 2025
Abdominal aortic calcification (AAC) is an independent predictor of cardiovascular diseases (CVDs). AAC is typically detected as an incidental finding in spine scans. Early detection of AAC through opportunistic screening using any available imaging modality could help identify individuals at higher risk of developing clinical CVDs. However, AAC is not routinely assessed in clinics, and manual scoring from projection images is time-consuming and prone to inter-rater variability. Automated AAC scoring methods do exist, but earlier methods have not accounted for the inherent variability in AAC scoring and were each developed for a single imaging modality. We propose an automated method for quantifying AAC from lumbar spine X-ray and dual-energy X-ray absorptiometry (DXA) images using an ensemble of convolutional neural network models that predicts a distribution of probable AAC scores. We treat the AAC score as a normally distributed random variable to account for the variability of manual scoring. The mean and variance of the assumed normal AAC distributions are estimated from manual annotations, and the models in the ensemble are trained by simulating AAC scores from these distributions. Our ensemble approach successfully extracted AAC scores from both X-ray and DXA images, with predicted score distributions showing strong agreement with manual annotations, as evidenced by concordance correlation coefficients of 0.930 for X-ray and 0.912 for DXA. The prediction error between the average estimates of our approach and the average manual annotations was lower than errors reported previously, highlighting the benefit of incorporating uncertainty in AAC scoring.
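
The training scheme, in which each manual score is treated as N(mean, variance) and each ensemble member sees its own resampled targets, can be sketched roughly as follows. A scikit-learn regressor stands in for the paper's convolutional networks, and a Kauppila-style 0-24 score range is assumed:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    def sample_targets(mean_scores, score_vars):
        # One plausible AAC score per image, drawn from N(mean, var) and
        # clipped to a Kauppila-style 0-24 range (the range is an assumption).
        draws = rng.normal(mean_scores, np.sqrt(score_vars))
        return np.clip(draws, 0.0, 24.0)

    def train_ensemble(features, mean_scores, score_vars, n_models=10):
        # Each member regresses against independently resampled targets, so
        # the ensemble spread reflects the variability of manual annotation.
        models = []
        for _ in range(n_models):
            model = GradientBoostingRegressor()  # stand-in for the paper's CNNs
            model.fit(features, sample_targets(mean_scores, score_vars))
            models.append(model)
        return models

    def predict_distribution(models, features):
        # Ensemble mean/std approximate the predicted AAC score distribution.
        preds = np.stack([m.predict(features) for m in models])
        return preds.mean(axis=0), preds.std(axis=0)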

Ascending Aortic Dimensions and Body Size: Allometric Scaling, Normative Values, and Prognostic Performance.

Tavolinejad H, Beeche C, Dib MJ, Pourmussa B, Damrauer SM, DePaolo J, Azzo JD, Salman O, Duda J, Gee J, Kun S, Witschey WR, Chirinos JA

PubMed paper · Aug 21, 2025
Ascending aortic (AscAo) dimensions partially depend on body size. Ratiometric (linear) indexing of AscAo dimensions to height and body surface area (BSA) is currently recommended, but it is unclear whether these allometric relationships are indeed linear. This study aimed to evaluate allometric relations, normative values, and the prognostic performance of AscAo dimension indices. We studied UK Biobank (UKB) (n = 49,271) and Penn Medicine BioBank (PMBB) (n = 8,426) participants. A convolutional neural network was used to segment the thoracic aorta from available magnetic resonance and computed tomography thoracic images. Normal allometric exponents of AscAo dimensions were derived from log-log models among healthy reference subgroups. Prognostic associations of AscAo dimensions were assessed with the use of Cox models. Among reference subgroups of both UKB (n = 11,310; age 52 ± 8 years; 37% male) and PMBB (n = 799; age 50 ± 16 years; 41% male), diameter/height, diameter/BSA, and area/BSA exhibited highly nonlinear relationships. In contrast, the allometric exponent of the area/height index was close to unity (UKB: 1.04; PMBB: 1.13). Accordingly, the linear ratio of area/height did not exhibit residual associations with height (UKB: R² = 0.04 [P = 0.411]; PMBB: R² = 0.08 [P = 0.759]). Across quintiles of height and BSA, area/height was the only ratiometric index that consistently classified aortic dilation, whereas all other indices systematically underestimated or overestimated AscAo dilation at the extremes of body size. Area/height was robustly associated with thoracic aorta events in the UKB (HR: 3.73; P < 0.001) and the PMBB (HR: 1.83; P < 0.001). Among AscAo indices, area/height was allometrically correct, did not exhibit residual associations with body size, and was consistently associated with adverse events.
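
The allometric analysis amounts to fitting log(dimension) = a + b·log(body size) in the healthy reference subgroup; an exponent b near 1 is what licenses a plain linear ratio such as area/height. A small illustration with synthetic numbers:

    import numpy as np

    def allometric_exponent(dimension, body_size):
        # Fit log(dimension) = a + b * log(body_size) and return b.
        # b ~ 1 justifies a linear ratio index; b far from 1 means the
        # ratio retains a residual association with body size.
        b, a = np.polyfit(np.log(body_size), np.log(dimension), 1)
        return b

    # Synthetic example: aortic area growing linearly with height (b ~ 1).
    rng = np.random.default_rng(1)
    height = rng.uniform(1.5, 2.0, 500)                   # metres
    area = 5.0 * height * rng.lognormal(0.0, 0.05, 500)   # cm^2, noisy
    print(round(allometric_exponent(area, height), 2))    # prints ~1.0
    indexed = area / height   # residual-free index when b ~ 1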

When Age Is More Than a Number: Acceleration of Brain Aging in Neurodegenerative Diseases.

Doering E, Hoenig MC, Cole JH, Drzezga A

PubMed paper · Aug 21, 2025
Aging of the brain is characterized by deleterious processes at various levels, including cellular/molecular and structural/functional changes. Many of these processes can be assessed in vivo by means of modern neuroimaging procedures, allowing the quantification of brain age in different modalities. Brain age can be measured by suitable machine learning strategies. The deviation (in either direction) between a person's measured brain age and chronologic age is referred to as the brain age gap (BAG). Although brain age, as defined by these methods, generally tracks the chronologic age of a person, this relationship is not always parallel and can vary significantly between individuals. Importantly, although neurodegenerative disorders are not equivalent to accelerated brain aging, they may induce brain changes that resemble those of older adults, which can be captured by brain age models. Conversely, healthy brain aging may involve resistance to, or a delay in, the onset of neurodegenerative pathologies in the brain. This continuing education article explains how the BAG can be computed and explores how BAGs, derived from diverse neuroimaging modalities, offer unique insights into the phenotypes of age-related neurodegenerative diseases. Structural BAGs from T1-weighted MRI have shown promise as phenotypic biomarkers for monitoring neurodegenerative disease progression, especially in Alzheimer disease. Additionally, metabolic and molecular BAGs from molecular imaging, functional BAGs from functional MRI, and microstructural BAGs from diffusion MRI, although researched considerably less, may each provide distinct perspectives on particular brain aging processes and their deviations from healthy aging. We suggest that BAG estimation, when based on the appropriate modality, could be useful for disease monitoring and could offer interesting insights concerning the impact of therapeutic interventions.
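
In practice, the BAG computation the article describes reduces to regressing chronological age on imaging features in healthy controls and taking predicted minus actual age, usually with a linear correction for the tendency of predictions to regress toward the mean. A rough sketch with generic scikit-learn components (not any specific published model):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression

    def fit_brain_age(features, age):
        # Train the brain-age model on healthy controls only.
        model = GradientBoostingRegressor().fit(features, age)
        # Predictions regress toward the mean age, so model the residual
        # trend against chronological age for later correction.
        bias = LinearRegression().fit(
            age.reshape(-1, 1), model.predict(features) - age
        )
        return model, bias

    def brain_age_gap(model, bias, features, age):
        # BAG = bias-corrected predicted brain age minus chronological age.
        corrected = model.predict(features) - bias.predict(age.reshape(-1, 1))
        return corrected - age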

Task-Generalized Adaptive Cross-Domain Learning for Multimodal Image Fusion

Mengyu Wang, Zhenyu Liu, Kun Li, Yu Wang, Yuwei Wang, Yanyan Wei, Fei Wang

arXiv preprint · Aug 21, 2025
Multimodal Image Fusion (MMIF) aims to integrate complementary information from different imaging modalities to overcome the limitations of individual sensors. It enhances image quality and facilitates downstream applications such as remote sensing, medical diagnostics, and robotics. Despite significant advancements, current MMIF methods still face challenges such as modality misalignment, high-frequency detail destruction, and task-specific limitations. To address these challenges, we propose AdaSFFuse, a novel framework for task-generalized MMIF through adaptive cross-domain co-fusion learning. AdaSFFuse introduces two key innovations: the Adaptive Approximate Wavelet Transform (AdaWAT) for frequency decoupling, and Spatial-Frequency Mamba Blocks for efficient multimodal fusion. AdaWAT adaptively separates the high- and low-frequency components of multimodal images from different scenes, enabling fine-grained extraction and alignment of the distinct frequency characteristics of each modality. The Spatial-Frequency Mamba Blocks then perform the fusion jointly in the spatial and frequency domains, adjusting dynamically through learnable mappings to ensure robust fusion across diverse modalities. By combining these components, AdaSFFuse improves the alignment and integration of multimodal features, reduces frequency loss, and preserves critical details. Extensive experiments on four MMIF tasks -- Infrared-Visible Image Fusion (IVF), Multi-Focus Image Fusion (MFF), Multi-Exposure Image Fusion (MEF), and Medical Image Fusion (MIF) -- demonstrate AdaSFFuse's superior fusion performance at low computational cost with a compact network, offering a strong balance between performance and efficiency. The code will be publicly available at https://github.com/Zhen-yu-Liu/AdaSFFuse.
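
The frequency-decoupling idea at the heart of AdaSFFuse can be illustrated with a fixed, non-adaptive wavelet transform: split each modality into low- and high-frequency bands, fuse each band with its own rule, and invert. The sketch below uses PyWavelets and simple hand-picked fusion rules; AdaSFFuse instead learns the transform (AdaWAT) and fuses with Mamba blocks.

    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet="haar"):
        # Frequency-decoupled fusion with a fixed DWT: blend the low-frequency
        # approximations, keep the stronger high-frequency detail coefficient.
        lo_a, hi_a = pywt.dwt2(img_a, wavelet)
        lo_b, hi_b = pywt.dwt2(img_b, wavelet)
        lo = (lo_a + lo_b) / 2.0                    # low band: average structure
        hi = tuple(np.where(np.abs(da) >= np.abs(db), da, db)  # high band: max detail
                   for da, db in zip(hi_a, hi_b))
        return pywt.idwt2((lo, hi), wavelet)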

Multimodal Integration in Health Care: Development With Applications in Disease Management.

Hao Y, Cheng C, Li J, Li H, Di X, Zeng X, Jin S, Han X, Liu C, Wang Q, Luo B, Zeng X, Li K

PubMed paper · Aug 21, 2025
Multimodal data integration has emerged as a transformative approach in the health care sector, systematically combining complementary biological and clinical data sources such as genomics, medical imaging, electronic health records, and wearable device outputs. This approach provides a multidimensional perspective of patient health that enhances the diagnosis, treatment, and management of various medical conditions. This viewpoint presents an overview of the current state of multimodal integration in health care, spanning clinical applications, current challenges, and future directions. We focus primarily on its applications across different disease domains, particularly oncology and ophthalmology; other diseases are discussed only briefly because the available literature is limited. In oncology, the integration of multimodal data enables more precise tumor characterization and personalized treatment plans. For example, multimodal fusion has demonstrated accurate prediction of anti-human epidermal growth factor receptor 2 (HER2) therapy response (area under the curve = 0.91). In ophthalmology, multimodal integration through the combination of genetic and imaging data facilitates the early diagnosis of retinal diseases. However, substantial challenges remain regarding data standardization, model deployment, and model interpretability. We also highlight future directions for multimodal integration, including expanded disease applications, such as neurological and otolaryngological diseases, and the trend toward large-scale multimodal models, which promise further gains in accuracy. Overall, the innovative potential of multimodal integration is expected to further revolutionize the health care industry, providing more comprehensive and personalized solutions for disease management.

RANSAC-based global 3DUS to CT/MR rigid registration using liver surface and vessels.

Goto T, Igarashi R, Cho I, Numata K, Ishino Y, Kitamura Y, Noguchi M, Hirai T, Waki K

PubMed paper · Aug 21, 2025
Fusion imaging requires initial registration of ultrasound (US) images to computed tomography (CT) or magnetic resonance (MR) images. The sweep position of the US probe depends on the procedure: the liver may be observed from intercostal, subcostal, or epigastric positions. However, no well-established method for automatic initial registration accommodates all of these positions. We propose a global rigid 3D-3D registration technique that performs automatic initial registration independent of the US sweep position. The proposed technique uses the liver surface and vessels, such as the portal and hepatic veins, as landmarks. The algorithm segments the liver region and vessels from both US and CT/MR images using deep learning models. From these outputs, point clouds of the liver surface and vessel centerlines are extracted. The rigid transformation parameters are then estimated through point cloud registration using a RANSAC-based algorithm. To enhance speed and robustness, the RANSAC procedure incorporates constraints on the possible range of each registration parameter, derived from the relative position and orientation of the probe and body surface. Registration accuracy was quantitatively evaluated using clinical data from 80 patients, including US images taken from the intercostal, subcostal, and epigastric regions. The registration errors were 7.3 ± 3.2, 9.3 ± 3.7, and 8.4 ± 3.9 mm for the intercostal, subcostal, and epigastric regions, respectively. The proposed global rigid registration technique fully automates the complex manual registration required for liver fusion imaging and enhances the workflow efficiency of physicians and sonographers.
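
Stripped of the anatomy-specific details, the core estimator is RANSAC over minimal three-point sets with a closed-form (Kabsch/SVD) rigid fit. The sketch below assumes putative point correspondences are already available (e.g., from nearest-neighbor matching) and omits the probe-pose constraints the paper uses to prune the parameter search:

    import numpy as np

    def kabsch(src, dst):
        # Least-squares rigid transform (R, t) with R @ src_i + t ~ dst_i.
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
        d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        return r, dc - r @ sc

    def ransac_rigid(src, dst, iters=2000, tol=5.0, seed=0):
        # src[i] and dst[i] are putative correspondences (surface/vessel
        # points). Sample minimal 3-point sets, fit a rigid transform, and
        # keep the hypothesis with the most inliers (residual < tol, in mm).
        rng = np.random.default_rng(seed)
        best_rt, best_count = None, -1
        for _ in range(iters):
            idx = rng.choice(len(src), size=3, replace=False)
            r, t = kabsch(src[idx], dst[idx])
            residual = np.linalg.norm(src @ r.T + t - dst, axis=1)
            count = int((residual < tol).sum())
            if count > best_count:
                best_rt, best_count = (r, t), count
        r, t = best_rt
        inliers = np.linalg.norm(src @ r.T + t - dst, axis=1) < tol
        return kabsch(src[inliers], dst[inliers])  # refit on all inliers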

MedImg: An Integrated Database for Public Medical Image.

Zhong B, Fan R, Ma Y, Ji X, Cui Q, Cui C

PubMed paper · Aug 20, 2025
The advancements in deep learning algorithms for medical image analysis have garnered significant attention in recent years. Although several studies report promising results, with models matching or even surpassing human performance, translating these advancements into clinical practice still faces various challenges. A primary obstacle is the limited availability of large-scale, well-characterized datasets for validating how well these approaches generalize. To address this challenge, we curated a diverse collection of medical image datasets from multiple public sources, comprising 105 datasets and a total of 1,995,671 images. These images span 14 modalities, including X-ray, computed tomography, magnetic resonance imaging, optical coherence tomography, ultrasound, and endoscopy, and originate from 13 organs, such as the lung, brain, eye, and heart. We then constructed an online database, MedImg, which incorporates and systematically organizes these medical images to facilitate data accessibility. MedImg serves as an intuitive, open-access platform for research in deep learning-based medical image analysis, accessible at https://www.cuilab.cn/medimg/.

A machine learning-based decision support tool for standardizing intracavitary versus interstitial brachytherapy technique selection in high-dose-rate cervical cancer.

Kajikawa T, Masui K, Sakai K, Takenaka T, Suzuki G, Yoshino Y, Nemoto H, Yamazaki H, Yamada K

PubMed paper · Aug 20, 2025
To develop and evaluate a machine-learning (ML) decision-support tool that standardizes selection of intracavitary brachytherapy (ICBT) versus hybrid intracavitary/interstitial brachytherapy (IC/ISBT) in high-dose-rate (HDR) cervical cancer. We retrospectively analyzed 159 HDR brachytherapy plans from 50 consecutive patients treated between April 2022 and June 2024. Brachytherapy techniques (ICBT or IC/ISBT) were determined by an experienced radiation oncologist using CT/MRI-based 3-D image-guided brachytherapy. For each plan, 144 shape- and distance-based geometric features describing the high-risk clinical target volume (HR-CTV), bladder, rectum, and applicator were extracted. Nested five-fold cross-validation combined minimum-redundancy-maximum-relevance (mRMR) feature selection with five classifiers (k-nearest neighbors, logistic regression, naïve Bayes, random forest, support-vector classifier) and two voting ensembles (hard and soft voting). Model performance was benchmarked against single-factor rules (HR-CTV > 30 cm³; maximum lateral HR-CTV-tandem distance > 25 mm). Logistic regression achieved the highest test accuracy (0.849 ± 0.023) and a mean area under the curve (AUC) of 0.903 ± 0.033, outperforming the volume rule and matching the distance rule's AUC (0.907 ± 0.057) while exceeding its accuracy (0.805 ± 0.114). These differences were not statistically significant. Feature-importance analysis showed that the maximum lateral HR-CTV-tandem distance and the bladder's minimal short-axis length consistently dominated model decisions. In conclusion, a compact ML tool using two readily measurable geometric features can reliably assist clinicians in choosing between ICBT and IC/ISBT, thereby reducing inter-physician variability and promoting standardized HDR cervical brachytherapy technique selection.
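
The evaluation protocol, with feature selection nested inside cross-validation so that selection never sees the test fold, can be sketched as follows. Mutual-information SelectKBest stands in for mRMR (scikit-learn has no built-in mRMR), and the data are random placeholders with the paper's dimensions:

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(159, 144))    # stand-in for the 144 geometric features
    y = rng.integers(0, 2, size=159)   # stand-in ICBT (0) vs IC/ISBT (1) labels

    # Feature selection + classifier, mirroring the paper's pipeline shape.
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(mutual_info_classif)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    inner = GridSearchCV(pipe, {"select__k": [2, 5, 10, 20]}, cv=5)  # inner CV tunes k
    outer_scores = cross_val_score(inner, X, y, cv=5, scoring="accuracy")  # outer CV
    print(f"nested-CV accuracy: {outer_scores.mean():.3f} ± {outer_scores.std():.3f}")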

[Digital and intelligent medicine empowering precision abdominal surgery: today and the future].

Dong Q, Wang JM, Xiu WL

PubMed paper · Aug 20, 2025
The complex anatomical structure of abdominal organs demands high precision in surgical procedures, which also increases postoperative complication risks. Advancements in digital medicine have created new opportunities for precision surgery. This article summarizes the current applications of digital intelligence in precision abdominal surgery. The processing and real-time monitoring technologies of medical imaging provide powerful tools for accurate diagnosis and treatment. Meanwhile, big data analysis and precise classification capabilities of artificial intelligence further enhance diagnostic efficiency and safety. Additionally, the paper analyzes the advantages and limitations of digital intelligence in empowering precision abdominal surgery, while exploring future development directions.