
Radiomics-Driven Diffusion Model and Monte Carlo Compression Sampling for Reliable Medical Image Synthesis.

Zhao J, Li S

PubMed · Aug 25, 2025
Reliable medical image synthesis is crucial for clinical applications and downstream tasks, where high-quality anatomical structure and predictive confidence are essential. Existing studies have made significant progress by embedding prior conditional knowledge, such as conditional images or textual information, to synthesize natural images. However, medical image synthesis remains a challenging task due to: 1) Data scarcity: high-quality medical text prompts are extremely rare and require specialized expertise. 2) Insufficient uncertainty estimation: uncertainty estimation is essential for assessing the confidence of synthesized medical images. This paper presents a novel approach for medical image synthesis, driven by radiomics prompts and combined with Monte Carlo Compression Sampling (MCCS) to ensure reliability. For the first time, our method leverages clinically focused radiomics prompts to condition the generation process, guiding the model to produce reliable medical images. Furthermore, the innovative MCCS algorithm employs Monte Carlo methods to randomly select and compress sampling steps within denoising diffusion implicit models (DDIM), enabling efficient uncertainty quantification. Additionally, we introduce a MambaTrans architecture to model long-range dependencies in medical images and embed prior conditions (e.g., radiomics prompts). Extensive experiments on benchmark medical imaging datasets demonstrate that our approach significantly improves image quality and reliability, outperforming SoTA methods in both qualitative and quantitative evaluations.
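The abstract does not give implementation details for MCCS. As a rough, hedged illustration of the stated idea (randomly selecting and compressing DDIM sampling steps and using repeated runs for uncertainty quantification), the sketch below assumes a hypothetical `ddim_sample` function that accepts an explicit timestep schedule and a conditioning input; it is not the authors' code.

```python
# Minimal sketch (assumption, not the paper's implementation): Monte Carlo subsampling
# of DDIM timesteps across several runs, with per-pixel variance as an uncertainty map.
import numpy as np

def mccs_uncertainty(ddim_sample, condition, full_steps=1000, kept_steps=50, n_runs=20, seed=0):
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_runs):
        # Randomly select and compress the timestep schedule for this run (descending order).
        schedule = np.sort(rng.choice(full_steps, size=kept_steps, replace=False))[::-1]
        samples.append(ddim_sample(timesteps=schedule, cond=condition))
    samples = np.stack(samples)          # (n_runs, H, W)
    mean_image = samples.mean(axis=0)    # consensus synthesis
    uncertainty = samples.var(axis=0)    # per-pixel spread as a confidence proxy
    return mean_image, uncertainty
```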

MIE: Magnification-integrated ensemble method for improving glomeruli segmentation in medical imaging.

Han Y, Kim J, Park S, Moon JS, Lee JH

PubMed · Aug 24, 2025
Glomeruli are crucial for blood filtration, waste removal, and regulation of essential substances in the body. Traditional methods for detecting glomeruli rely on human interpretation, which can lead to variability. AI techniques have improved this process; however, most studies have used images with fixed magnification. This study proposes a novel magnification-integrated ensemble method to enhance glomerular segmentation accuracy. Whole-slide images (WSIs) from 12 patients were used for training, two for validation, and one for testing. Patch and mask images of size 256 × 256 were extracted at ×2, ×3, and ×4 magnification levels. Data augmentation techniques, such as RandomResize, RandomCrop, and RandomFlip, were used. The segmentation model was trained for 80,000 iterations with stochastic gradient descent (SGD). Performance varied with changes in magnification: models trained on high-magnification images showed significant drops when tested at lower magnifications, and vice versa. The proposed method improved segmentation accuracy across different magnifications, achieving an mIoU of 87.72 and a Dice score of 93.04 with the U-Net model. The magnification-integrated ensemble method significantly enhanced glomeruli segmentation accuracy across varying magnifications, thereby addressing the limitations of fixed magnification models. This approach improves the robustness and reliability of AI-driven diagnostic tools, potentially benefiting various medical imaging applications by ensuring consistent performance despite changes in image magnification.
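The abstract does not specify how the magnification-specific predictions are combined. A plausible minimal sketch, under the assumption that the ensemble averages per-pixel class probabilities from models trained at each magnification, is shown below; the dict-based interface is illustrative only.

```python
# Minimal sketch (assumption): fuse segmentation probabilities from magnification-specific
# models by resampling to a common size and averaging, then taking the argmax.
import torch
import torch.nn.functional as F

def magnification_ensemble(models_by_mag, patches_by_mag, out_size=(256, 256)):
    """models_by_mag / patches_by_mag: dicts keyed by magnification, e.g. {2: ..., 3: ..., 4: ...}."""
    prob_maps = []
    for mag, model in models_by_mag.items():
        logits = model(patches_by_mag[mag])                                   # (N, C, h, w)
        probs = F.softmax(logits, dim=1)
        probs = F.interpolate(probs, size=out_size, mode="bilinear", align_corners=False)
        prob_maps.append(probs)
    fused = torch.stack(prob_maps).mean(dim=0)                                # average across magnifications
    return fused.argmax(dim=1)                                                # (N, H, W) predicted labels
```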

Towards generalist foundation model for radiology by leveraging web-scale 2D&3D medical data.

Wu C, Zhang X, Zhang Y, Hui H, Wang Y, Xie W

PubMed · Aug 23, 2025
In this study, as a proof of concept, we aim to initiate the development of a Radiology Foundation Model, termed RadFM. We consider three perspectives: dataset construction, model design, and thorough evaluation, summarized as follows: (i) we contribute four multimodal datasets with 13M 2D images and 615K 3D scans; combined with a vast collection of existing datasets, this forms our training dataset, termed the Medical Multi-modal Dataset (MedMD); (ii) we propose an architecture that integrates text input with 2D or 3D medical scans and generates responses for diverse radiologic tasks, including diagnosis, visual question answering, report generation, and rationale diagnosis; (iii) beyond evaluation on 9 existing datasets, we propose a new benchmark, RadBench, comprising three tasks that aim to assess foundation models comprehensively. We conduct both automatic and human evaluations on RadBench. RadFM outperforms previously accessible multi-modal foundation models, including GPT-4V. Additionally, we adapt RadFM for diverse public benchmarks, surpassing various existing SOTA methods.
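The abstract describes the model design only at a high level. As a generic, hedged sketch of one common way to condition a language model on 2D/3D scans (projecting visual tokens into the text embedding space and prepending them), the code below uses placeholder encoder and language-model modules; it is not RadFM's actual architecture.

```python
# Minimal sketch (assumption): interleaving visual tokens from a medical scan with text
# embeddings before a causal language model. All modules and dimensions are placeholders.
import torch
import torch.nn as nn

class VisionLanguageFusion(nn.Module):
    def __init__(self, vision_encoder, language_model, vis_dim=1024, txt_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder        # maps a 2D/3D scan to (num_patches, vis_dim)
        self.project = nn.Linear(vis_dim, txt_dim)  # align visual tokens with the text embedding space
        self.language_model = language_model        # HF-style causal LM accepting inputs_embeds (assumed)

    def forward(self, scan, text_embeds):
        vis_tokens = self.project(self.vision_encoder(scan))              # (num_patches, txt_dim)
        sequence = torch.cat([vis_tokens, text_embeds], dim=0).unsqueeze(0)  # prepend visual tokens
        return self.language_model(inputs_embeds=sequence)
```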

Non-invasive intracranial pressure assessment in adult critically ill patients: A narrative review on current approaches and future perspectives.

Deana C, Biasucci DG, Aspide R, Bagatto D, Brasil S, Brunetti D, Saitta T, Vapireva M, Zanza C, Longhitano Y, Bignami EG, Vetrugno L

PubMed · Aug 23, 2025
Intracranial hypertension (IH) is a life-threatening complication that may occur after acute brain injury. Early recognition of IH allows prompt interventions that improve outcomes. Although invasive intracranial monitoring is considered the gold standard for the most severely injured patients, scarce availability of resources, the need for advanced skills, and the potential for complications often limit its utilization. On the other hand, different non-invasive methods to evaluate acutely brain-injured patients for elevated intracranial pressure have been investigated. Clinical examination and neuroradiology represent the cornerstone of a patient's evaluation in the intensive care unit (ICU). However, multimodal neuromonitoring, employing different widely used tools such as brain ultrasound, automated pupillometry, and skull micro-deformation recordings, increases the possibility of continuous or semi-continuous intracranial pressure monitoring. Furthermore, artificial intelligence (AI) has been investigated as a tool to predict elevated intracranial pressure, shedding light on new diagnostic and treatment horizons with the potential to improve patient outcomes. This narrative review, based on a systematic literature search, summarizes the best available evidence on the use of non-invasive monitoring tools and methods for the assessment of intracranial pressure.

Deep learning ensemble for abdominal aortic calcification scoring from lumbar spine X-ray and DXA images.

Voss A, Suoranta S, Nissinen T, Hurskainen O, Masarwah A, Sund R, Tohka J, Väänänen SP

PubMed · Aug 22, 2025
Abdominal aortic calcification (AAC) is an independent predictor of cardiovascular diseases (CVDs). AAC is typically detected as an incidental finding in spine scans. Early detection of AAC through opportunistic screening using any available imaging modalities could help identify individuals with a higher risk of developing clinical CVDs. However, AAC is not routinely assessed in clinics, and manual scoring from projection images is time-consuming and prone to inter-rater variability. Automated AAC scoring methods exist, but earlier methods have not accounted for the inherent variability in AAC scoring and were developed for a single imaging modality at a time. We propose an automated method for quantifying AAC from lumbar spine X-ray and Dual-energy X-ray Absorptiometry (DXA) images using an ensemble of convolutional neural network models that predicts a distribution of probable AAC scores. We treat the AAC score as a normally distributed random variable to account for the variability of manual scoring. The mean and variance of the assumed normal AAC distributions are estimated based on manual annotations, and the models in the ensemble are trained by simulating AAC scores from these distributions. Our proposed ensemble approach successfully extracted AAC scores from both X-ray and DXA images, with predicted score distributions demonstrating strong agreement with manual annotations, as evidenced by concordance correlation coefficients of 0.930 for X-ray and 0.912 for DXA. The prediction error between the average estimates of our approach and the average manual annotations was lower than the errors reported previously, highlighting the benefit of incorporating uncertainty in AAC scoring.
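The training scheme described (treating each image's AAC score as normally distributed and simulating labels from that distribution for each ensemble member) can be illustrated with a short, hedged sketch; the regressor interface (`build_model`, `fit`, `predict`) is hypothetical and not the authors' CNN pipeline.

```python
# Minimal sketch (assumption): per-image normal label distributions estimated from repeated
# manual annotations; each ensemble member is trained on a fresh draw of simulated labels,
# and the ensemble spread approximates the predicted AAC score distribution.
import numpy as np

def simulate_labels(annotations, rng):
    """annotations: (n_images, n_raters) manual AAC scores."""
    means = annotations.mean(axis=1)
    stds = annotations.std(axis=1, ddof=1)
    return rng.normal(means, stds)                      # one sampled label per image

def train_ensemble(images, annotations, build_model, n_members=10, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        labels = simulate_labels(annotations, rng)      # fresh simulated labels per member
        model = build_model()                           # hypothetical regressor factory
        model.fit(images, labels)
        members.append(model)
    return members

def predict_distribution(members, images):
    preds = np.stack([m.predict(images) for m in members])   # (n_members, n_images)
    return preds.mean(axis=0), preds.std(axis=0)
```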

4D Virtual Imaging Platform for Dynamic Joint Assessment via Uni-Plane X-ray and 2D-3D Registration

Hao Tang, Rongxi Yi, Lei Li, Kaiyi Cao, Jiapeng Zhao, Yihan Xiao, Minghai Shi, Peng Yuan, Yan Xi, Hui Tang, Wei Li, Zhan Wu, Yixin Zhou

arXiv preprint · Aug 22, 2025
Conventional computed tomography (CT) lacks the ability to capture dynamic, weight-bearing joint motion. Functional evaluation, particularly after surgical intervention, requires four-dimensional (4D) imaging, but current methods are limited by excessive radiation exposure or incomplete spatial information from 2D techniques. We propose an integrated 4D joint analysis platform that combines: (1) a dual robotic arm cone-beam CT (CBCT) system with a programmable, gantry-free trajectory optimized for upright scanning; (2) a hybrid imaging pipeline that fuses static 3D CBCT with dynamic 2D X-rays using deep learning-based preprocessing, 3D-2D projection, and iterative optimization; and (3) a clinically validated framework for quantitative kinematic assessment. In simulation studies, the method achieved sub-voxel accuracy (0.235 mm) with a 99.18 percent success rate, outperforming conventional and state-of-the-art registration approaches. Clinical evaluation further demonstrated accurate quantification of tibial plateau motion and medial-lateral variance in post-total knee arthroplasty (TKA) patients. This 4D CBCT platform enables fast, accurate, and low-dose dynamic joint imaging, offering new opportunities for biomechanical research, precision diagnostics, and personalized orthopedic care.
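The hybrid 3D-2D registration pipeline is described only at a high level. As a generic, hedged sketch of intensity-based 2D-3D registration (optimizing a rigid pose so a simulated projection of the static CBCT matches each dynamic X-ray frame), the code below uses placeholder `render_drr` and `similarity` functions; it is not the platform's implementation.

```python
# Minimal sketch (assumption): iterative optimization of a 6-DoF pose so that a digitally
# reconstructed radiograph (DRR) of the 3D volume matches a 2D X-ray frame.
import numpy as np
from scipy.optimize import minimize

def register_frame(volume, xray_2d, render_drr, similarity, init_pose=None):
    init_pose = np.zeros(6) if init_pose is None else init_pose   # (tx, ty, tz, rx, ry, rz)

    def cost(pose):
        drr = render_drr(volume, pose)        # project the 3D volume under the candidate pose
        return -similarity(drr, xray_2d)      # maximize similarity == minimize its negative

    result = minimize(cost, init_pose, method="Powell")
    return result.x                           # estimated pose for this X-ray frame
```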

Sex-specific body fat distribution predicts cardiovascular ageing.

Losev V, Lu C, Tahasildar S, Senevirathne DS, Inglese P, Bai W, King AP, Shah M, de Marvao A, O'Regan DP

PubMed · Aug 22, 2025
Cardiovascular ageing is a progressive loss of physiological reserve, modified by environmental and genetic risk factors, that contributes to multi-morbidity due to accumulated damage across diverse cell types, tissues, and organs. Obesity is implicated in premature ageing, but the effect of body fat distribution in humans is unknown. This study determined the influence of sex-dependent fat phenotypes on human cardiovascular ageing. Data from 21,241 participants in the UK Biobank were analysed. Machine learning was used to predict cardiovascular age from 126 image-derived traits of vascular function, cardiac motion, and myocardial fibrosis. An age-delta was calculated as the difference between predicted age and chronological age. The volume and distribution of body fat were assessed from whole-body imaging. The association between fat phenotypes and cardiovascular age-delta was assessed using multivariable linear regression with age and sex as covariates, reporting β coefficients with 95% confidence intervals (CI). Two-sample Mendelian randomization was used to assess causal associations. Visceral adipose tissue volume [β = 0.656 (95% CI, 0.537-0.775), P < .0001], muscle adipose tissue infiltration [β = 0.183 (95% CI, 0.122-0.244), P = .0003], and liver fat fraction [β = 1.066 (95% CI, 0.835-1.298), P < .0001] were the strongest predictors of increased cardiovascular age-delta for both sexes. Abdominal subcutaneous adipose tissue volume [β = 0.432 (95% CI, 0.269-0.596), P < .0001] and android fat mass [β = 0.983 (95% CI, 0.64-1.326), P < .0001] were each associated with increased age-delta only in males. Genetically predicted gynoid fat showed an association with decreased age-delta. Shared and sex-specific patterns of body fat are associated with both protective and harmful changes in cardiovascular ageing, highlighting adipose tissue distribution and function as a key target for interventions to extend healthy lifespan.
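The association test described (age-delta regressed on a fat phenotype with age and sex as covariates, reporting β with 95% CI) maps onto a standard multivariable linear regression. The hedged sketch below uses statsmodels with illustrative column names that are not the study's actual variable names.

```python
# Minimal sketch (assumption): OLS regression of cardiovascular age-delta on a fat phenotype,
# adjusting for chronological age and sex; returns the phenotype's beta, 95% CI, and p-value.
import pandas as pd
import statsmodels.formula.api as smf

def fat_phenotype_association(df: pd.DataFrame, phenotype: str):
    """df columns assumed: 'predicted_age', 'age', 'sex', and the phenotype column."""
    df = df.assign(age_delta=df["predicted_age"] - df["age"])    # age-delta definition from the study
    model = smf.ols(f"age_delta ~ {phenotype} + age + sex", data=df).fit()
    beta = model.params[phenotype]
    ci_low, ci_high = model.conf_int().loc[phenotype]            # 95% CI by default
    return beta, (ci_low, ci_high), model.pvalues[phenotype]
```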

Advancements in deep learning for image-guided tumor ablation therapies: a comprehensive review.

Zhao Z, Hu Y, Xu LX, Sun J

PubMed · Aug 22, 2025
Image-guided tumor ablation (IGTA) has revolutionized modern oncological treatments by providing minimally invasive options that ensure precise tumor eradication with minimal patient discomfort. Traditional techniques such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) have been instrumental in the planning, execution, and evaluation of ablation therapies. However, these methods often face limitations, including poor contrast, susceptibility to artifacts, and variability in operator expertise, which can undermine the accuracy of tumor targeting and therapeutic outcomes. Incorporating deep learning (DL) into IGTA represents a significant advancement that addresses these challenges. This review explores the role and potential of DL in the different phases of tumor ablation therapy: preoperative, intraoperative, and postoperative. In the preoperative stage, DL excels in advanced image segmentation, enhancement, and synthesis, facilitating precise surgical planning and optimized treatment strategies. During the intraoperative phase, DL supports image registration and fusion as well as real-time surgical planning, enhancing navigation accuracy and ensuring precise ablation while safeguarding surrounding healthy tissues. In the postoperative phase, DL is pivotal in automating the monitoring of treatment responses and in the early detection of recurrences through detailed analyses of follow-up imaging. This review highlights the essential role of deep learning in modernizing IGTA, showcasing its significant implications for procedural safety, efficacy, and patient outcomes in oncology. As deep learning technologies continue to evolve, they are poised to redefine the standards of care in tumor ablation therapies, making treatments more accurate, personalized, and patient-friendly.

Multimodal Integration in Health Care: Development With Applications in Disease Management.

Hao Y, Cheng C, Li J, Li H, Di X, Zeng X, Jin S, Han X, Liu C, Wang Q, Luo B, Zeng X, Li K

PubMed · Aug 21, 2025
Multimodal data integration has emerged as a transformative approach in the health care sector, systematically combining complementary biological and clinical data sources such as genomics, medical imaging, electronic health records, and wearable device outputs. This approach provides a multidimensional perspective of patient health that enhances the diagnosis, treatment, and management of various medical conditions. This viewpoint presents an overview of the current state of multimodal integration in health care, spanning clinical applications, current challenges, and future directions. We focus primarily on its applications across different disease domains, particularly in oncology and ophthalmology. Other diseases are discussed only briefly owing to the limited available literature. In oncology, the integration of multimodal data enables more precise tumor characterization and personalized treatment plans. Multimodal fusion demonstrates accurate prediction of anti-human epidermal growth factor receptor 2 therapy response (area under the curve = 0.91). In ophthalmology, multimodal integration through the combination of genetic and imaging data facilitates the early diagnosis of retinal diseases. However, substantial challenges remain regarding data standardization, model deployment, and model interpretability. We also highlight the future directions of multimodal integration, including its expanded disease applications, such as neurological and otolaryngological diseases, and the trend toward large-scale multimodal models, which enhance accuracy. Overall, the innovative potential of multimodal integration is expected to further revolutionize the health care industry, providing more comprehensive and personalized solutions for disease management.

Task-Generalized Adaptive Cross-Domain Learning for Multimodal Image Fusion

Mengyu Wang, Zhenyu Liu, Kun Li, Yu Wang, Yuwei Wang, Yanyan Wei, Fei Wang

arXiv preprint · Aug 21, 2025
Multimodal Image Fusion (MMIF) aims to integrate complementary information from different imaging modalities to overcome the limitations of individual sensors. It enhances image quality and facilitates downstream applications such as remote sensing, medical diagnostics, and robotics. Despite significant advancements, current MMIF methods still face challenges such as modality misalignment, high-frequency detail destruction, and task-specific limitations. To address these challenges, we propose AdaSFFuse, a novel framework for task-generalized MMIF through adaptive cross-domain co-fusion learning. AdaSFFuse introduces two key innovations: the Adaptive Approximate Wavelet Transform (AdaWAT) for frequency decoupling, and the Spatial-Frequency Mamba Blocks for efficient multimodal fusion. AdaWAT adaptively separates the high- and low-frequency components of multimodal images from different scenes, enabling fine-grained extraction and alignment of distinct frequency characteristics for each modality. The Spatial-Frequency Mamba Blocks then perform cross-domain fusion in both the spatial and frequency domains, dynamically adjusting through learnable mappings to ensure robust fusion across diverse modalities. By combining these components, AdaSFFuse improves the alignment and integration of multimodal features, reduces frequency loss, and preserves critical details. Extensive experiments on four MMIF tasks -- Infrared-Visible Image Fusion (IVF), Multi-Focus Image Fusion (MFF), Multi-Exposure Image Fusion (MEF), and Medical Image Fusion (MIF) -- demonstrate AdaSFFuse's superior fusion performance, ensuring both low computational cost and a compact network, offering a strong balance between performance and efficiency. The code will be publicly available at https://github.com/Zhen-yu-Liu/AdaSFFuse.
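AdaWAT is described as an adaptive, learned wavelet transform; the hedged sketch below only illustrates the underlying idea of frequency decoupling using a fixed (non-adaptive) wavelet from PyWavelets, not the paper's learnable transform.

```python
# Minimal sketch (assumption): fixed wavelet decoupling of an image into low- and
# high-frequency components, the kind of split AdaWAT learns adaptively per modality.
import numpy as np
import pywt

def frequency_decouple(image: np.ndarray, wavelet: str = "haar"):
    low, (horiz, vert, diag) = pywt.dwt2(image, wavelet)
    high = np.stack([horiz, vert, diag])   # detail sub-bands carry edges and texture
    return low, high                       # fuse each band with its counterpart from the other modality

# After band-wise fusion of two modalities, the fused image can be reconstructed with
# pywt.idwt2((fused_low, (fused_h, fused_v, fused_d)), "haar").
```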
