Page 1 of 23226 results

Role of Brain Age Gap as a Mediator in the Relationship Between Cognitive Impairment Risk Factors and Cognition.

Tan WY, Huang X, Huang J, Robert C, Cui J, Chen CPLH, Hilal S

PubMed · Jul 22 2025
Cerebrovascular disease (CeVD) and cognitive impairment risk factors contribute to cognitive decline, but the role of the brain age gap (BAG) in mediating this relationship remains unclear, especially in Southeast Asian populations. This study investigated the influence of cognitive impairment risk factors on cognition and examined how BAG mediates this relationship, particularly in individuals with varying CeVD burden. This cross-sectional study analyzed Singaporean community and memory clinic participants. Cognitive impairment risk factors were assessed using the Cognitive Impairment Scoring System (CISS), encompassing 11 sociodemographic and vascular factors. Cognition was assessed through a neuropsychological battery evaluating global cognition and 6 cognitive domains: executive function, attention, memory, language, visuomotor speed, and visuoconstruction. Brain age was derived from structural MRI features using an ensemble machine learning model. Propensity score matching balanced risk profiles between the model training sample and the remaining sample. Structural equation modeling examined the mediation effect of BAG on the CISS-cognition relationship, stratified by CeVD burden (high: CeVD+, low: CeVD-). The study included 1,437 individuals without dementia, with 646 in the matched sample (mean age 66.4 ± 6.0 years, 47% female, 60% with no cognitive impairment). Higher CISS was consistently associated with poorer cognitive performance across all domains, with the strongest negative associations in visuomotor speed (β = -2.70, <i>p</i> < 0.001) and visuoconstruction (β = -3.02, <i>p</i> < 0.001). Among the CeVD+ group, BAG significantly mediated the relationship between CISS and global cognition (proportion mediated: 19.95%, <i>p</i> = 0.01), with the strongest mediation effects in executive function (34.1%, <i>p</i> = 0.03) and language (26.6%, <i>p</i> = 0.008). BAG also mediated the relationship between CISS and memory (21.1%) and visuoconstruction (14.4%) in the CeVD+ group, but these effects diminished after statistical adjustment. Our findings suggest that BAG is a key intermediary linking cognitive impairment risk factors to cognitive function, particularly in individuals with high CeVD burden. This mediation effect is domain specific, with executive function, language, and visuoconstruction being the most vulnerable to accelerated brain aging. Limitations of this study include the cross-sectional design, which limits causal inference, and the focus on Southeast Asian populations, which limits generalizability. Future longitudinal studies should verify these relationships and explore additional factors not captured in our model.
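The "proportion mediated" statistic reported above can be illustrated with a minimal single-mediator sketch on synthetic data. This is not the authors' structural equation model: the effect sizes, noise model, and product-of-coefficients shortcut below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 646  # matched-sample size from the abstract

# Synthetic variables: risk score (CISS), mediator (brain age gap), outcome (cognition)
ciss = rng.normal(0.0, 1.0, n)
bag = 0.5 * ciss + rng.normal(0.0, 1.0, n)                     # risk factors accelerate brain aging
cognition = -0.3 * ciss - 0.4 * bag + rng.normal(0.0, 1.0, n)  # both paths lower cognition

def fit(predictors, y):
    """Least-squares coefficients of y ~ predictors (intercept prepended)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = fit([ciss], bag)[1]                        # path a: CISS -> BAG
c_prime, b = fit([ciss, bag], cognition)[1:3]  # path c' (direct) and path b: BAG -> cognition
indirect = a * b                               # mediated (indirect) effect
proportion_mediated = indirect / (indirect + c_prime)
print(f"proportion mediated: {proportion_mediated:.1%}")
```

With these made-up coefficients the mediator carries roughly 40% of the total effect; the study's stratified SEM additionally handles covariates and significance testing.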

Acquisition and Reconstruction Techniques for Coronary CT Angiography: Current Status and Trends over the Past Decade.

Fukui R, Harashima S, Samejima W, Shimizu Y, Washizuka F, Kariyasu T, Nishikawa M, Yamaguchi H, Takeuchi H, Machida H

PubMed · Jul 1 2025
Coronary CT angiography (CCTA) has been widely used as a noninvasive modality for accurate assessment of coronary artery disease (CAD) in clinical settings. However, the following limitations of CCTA remain issues of interest: motion, stair-step, and blooming artifacts; suboptimal image noise; ionizing radiation exposure; administration of contrast medium; and complex imaging workflow. Various acquisition and reconstruction techniques have been introduced over the past decade to overcome these limitations. Low-tube-voltage acquisition using a high-output x-ray tube can reasonably reduce the contrast medium volume and radiation dose. Fast x-ray tube and gantry rotation, dual-source CT, and a motion-correction algorithm (MCA) can improve temporal resolution and reduce coronary motion artifacts. High-definition CT (HDCT), ultrahigh-resolution CT (UHRCT), and superresolution deep learning reconstruction (DLR) algorithms can improve the spatial resolution and delineation of the vessel lumen with coronary calcifications or stents by reducing blooming artifacts. Whole-heart coverage using area-detector CT can eliminate stair-step artifacts. The DLR algorithm can effectively reduce image noise and radiation dose while maintaining image quality, particularly during high-resolution acquisition using HDCT or UHRCT, during low-tube-voltage acquisition, or when imaging patients with a large body habitus. Automatic cardiac protocol selection, automatic optimal cardiac phase selection, and MCA can improve the imaging workflow for each CCTA examination. A sufficient understanding of current and novel acquisition and reconstruction techniques is important to enhance the clinical value of CCTA for noninvasive assessment of CAD. <sup>©</sup>RSNA, 2025. Supplemental material is available for this article.

Use of Artificial Intelligence and Machine Learning in Critical Care Ultrasound.

Peck M, Conway H

PubMed · Jul 1 2025
This article explores the transformative potential of artificial intelligence (AI) in critical care ultrasound. AI technologies, notably deep learning and convolutional neural networks, now assist in image acquisition, interpretation, and quality assessment, streamlining workflow and reducing operator variability. By automating routine tasks, AI enhances diagnostic accuracy and bridges training gaps, potentially democratizing advanced ultrasound techniques. Furthermore, AI's integration into tele-ultrasound systems shows promise in extending expert-level diagnostics to underserved areas, significantly broadening access to quality care. The article highlights the ongoing need for explainable AI systems to gain clinician trust and facilitate broader adoption.

A lung structure and function information-guided residual diffusion model for predicting idiopathic pulmonary fibrosis progression.

Jiang C, Xing X, Nan Y, Fang Y, Zhang S, Walsh S, Yang G, Shen D

PubMed · Jul 1 2025
Idiopathic Pulmonary Fibrosis (IPF) is a progressive lung disease that continuously scars and thickens lung tissue, leading to respiratory difficulties. Timely assessment of IPF progression is essential for developing treatment plans and improving patient survival rates. However, current clinical standards require multiple (usually two) CT scans at certain intervals to assess disease progression, presenting a dilemma: progression is identified only after the disease has already progressed. To address this issue, a feasible solution is to generate the follow-up CT image from the patient's initial CT image to achieve early prediction of IPF. To this end, we propose a lung structure and function information-guided residual diffusion model. The key components of our model include (1) a 2.5D generation strategy that reduces the computational cost of generating 3D images with the diffusion model; (2) structural attention that mitigates the negative impact of spatial misalignment between the two CT images on generation performance; (3) residual diffusion that accelerates model training and inference while focusing on the differences between the two CT images (i.e., the lesion areas); and (4) a CLIP-based text extraction module that extracts lung function test information and uses it to guide the generation. Extensive experiments demonstrate that our method can effectively predict IPF progression and achieves superior generation performance compared with state-of-the-art methods.
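The residual-diffusion idea in component (3) can be sketched in a few lines: the model noises and denoises the difference image (follow-up minus baseline), which is sparse and concentrated on lesion areas, rather than the full follow-up scan. Everything below is a toy stand-in; in particular, the "reverse process" is a placeholder, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.uniform(0.0, 1.0, (64, 64))  # initial CT slice (the 2.5D strategy stacks a few such slices)
true_residual = np.zeros((64, 64))
true_residual[20:30, 20:30] = 0.4           # progression confined to a lesion region

# Forward (noising) step applied to the residual at timestep t
t, beta = 0.5, 0.8
noisy_residual = (np.sqrt(1.0 - beta * t) * true_residual
                  + np.sqrt(beta * t) * rng.normal(0.0, 1.0, (64, 64)))

def reverse_process(x_t):
    """Placeholder: a trained denoiser would map the noisy residual back to a clean one."""
    return true_residual

# The predicted follow-up is the baseline plus the denoised residual
predicted_followup = baseline + reverse_process(noisy_residual)
```

Operating on the residual means most of the image requires no work from the model, which is what makes training and inference faster.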

Radiomics for lung cancer diagnosis, management, and future prospects.

Boubnovski Martell M, Linton-Reid K, Chen M, Aboagye EO

PubMed · Jul 1 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, with its early detection and effective treatment posing significant clinical challenges. Radiomics, the extraction of quantitative features from medical imaging, has emerged as a promising approach for enhancing diagnostic accuracy, predicting treatment responses, and personalising patient care. This review explores the role of radiomics in lung cancer diagnosis and management, with methods ranging from handcrafted radiomics to deep learning techniques that can capture biological intricacies. The key applications are highlighted across various stages of lung cancer care, including nodule detection, histology prediction, and disease staging, where artificial intelligence (AI) models demonstrate superior specificity and sensitivity. The article also examines future directions, emphasising the integration of large language models, explainable AI (XAI), and super-resolution imaging techniques as transformative developments. By merging diverse data sources and incorporating interpretability into AI models, radiomics stands poised to redefine clinical workflows, offering more robust and reliable tools for lung cancer diagnosis, treatment planning, and outcome prediction. These advancements underscore radiomics' potential in supporting precision oncology and improving patient outcomes through data-driven insights.
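As a concrete illustration of the handcrafted end of that spectrum, first-order radiomics features are simple intensity statistics over a segmented region. The array, mask, and feature set below are synthetic stand-ins; real pipelines (e.g., PyRadiomics) add shape and texture feature families.

```python
import numpy as np

rng = np.random.default_rng(2)
ct_slice = rng.normal(-700.0, 150.0, (128, 128))  # synthetic lung parenchyma in HU
mask = np.zeros((128, 128), dtype=bool)
mask[50:70, 50:70] = True                         # hypothetical nodule segmentation
ct_slice[mask] += 600.0                           # nodule is denser than surrounding parenchyma

# First-order features over the region of interest
roi = ct_slice[mask]
features = {
    "mean_hu": float(roi.mean()),
    "std_hu": float(roi.std()),
    "skewness": float(((roi - roi.mean()) ** 3).mean() / roi.std() ** 3),
    "p90_hu": float(np.percentile(roi, 90)),
    "area_px": int(mask.sum()),
}
print(features)
```

Feature vectors like this one, computed per lesion, are what downstream classifiers consume for tasks such as nodule characterization or histology prediction.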

Impact of CT reconstruction algorithms on pericoronary and epicardial adipose tissue attenuation.

Xiao H, Wang X, Yang P, Wang L, Xi J, Xu J

PubMed · Jul 1 2025
This study aims to investigate the impact of adaptive statistical iterative reconstruction-Veo (ASIR-V) and deep learning image reconstruction (DLIR) algorithms on the quantification of pericoronary adipose tissue (PCAT) and epicardial adipose tissue (EAT), and to explore the feasibility of correcting these effects through fat threshold adjustment. A retrospective analysis was conducted on the imaging data of 134 patients who underwent coronary CT angiography (CCTA) between December 2023 and January 2024. These data were reconstructed into seven datasets using filtered back projection (FBP), ASIR-V at three intensities (ASIR-V 30%, ASIR-V 50%, ASIR-V 70%), and DLIR at three intensities (DLIR-L, DLIR-M, DLIR-H). Repeated-measures ANOVA was used to compare differences in fat, PCAT, and EAT attenuation values among the reconstruction algorithms, and Bland-Altman plots were used to analyze the agreement between ASIR-V or DLIR and FBP in PCAT attenuation values. Compared to FBP, ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, DLIR-M, and DLIR-H significantly increased fat attenuation values (-103.91 ± 12.99 HU, -102.53 ± 12.68 HU, -101.14 ± 12.78 HU, -101.81 ± 12.41 HU, -100.87 ± 12.25 HU, -99.08 ± 12.00 HU vs. -105.95 ± 13.01 HU, all p < 0.001). When the fat threshold was set at -190 to -30 HU, the ASIR-V and DLIR algorithms significantly increased PCAT and EAT attenuation values compared to FBP (all p < 0.05), with these values increasing as the reconstruction intensity level increased. After correction with a fat threshold of -200 to -35 HU for ASIR-V 30%, -200 to -40 HU for ASIR-V 50% and DLIR-L, and -200 to -45 HU for ASIR-V 70%, DLIR-M, and DLIR-H, the mean differences in PCAT attenuation values between ASIR-V or DLIR and FBP decreased (-0.03 to 1.68 HU vs. 2.35 to 8.69 HU), and no significant difference was found in PCAT attenuation values between FBP and ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, and DLIR-M (all p > 0.05). Compared to the FBP algorithm, ASIR-V and DLIR algorithms increase PCAT and EAT attenuation values; adjusting the fat threshold can mitigate this impact on PCAT attenuation values.
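The fat-threshold mechanics above are easy to see in code: PCAT attenuation is the mean HU of voxels falling inside a fat window, so both the reconstruction algorithm and the window bounds shift the measurement. All voxel data here are synthetic (a Gaussian fat component plus uniform partial-volume voxels), and the +4 HU shift is a made-up stand-in for the ASIR-V/DLIR effect reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
fat = rng.normal(-106.0, 13.0, 40_000)       # perivascular fat, FBP-like distribution
partial = rng.uniform(-60.0, -30.0, 10_000)  # partial-volume voxels near the vessel wall
fbp_voxels = np.concatenate([fat, partial])
dlir_voxels = fbp_voxels + 4.0               # hypothetical upward shift from a smoother algorithm

def pcat_attenuation(voxels, lo=-190.0, hi=-30.0):
    """Mean attenuation (HU) of voxels within the fat threshold [lo, hi]."""
    inside = voxels[(voxels >= lo) & (voxels <= hi)]
    return float(inside.mean())

standard_fbp = pcat_attenuation(fbp_voxels)                # reference measurement
standard_dlir = pcat_attenuation(dlir_voxels)              # inflated by the shift
corrected_dlir = pcat_attenuation(dlir_voxels, -200, -45)  # the abstract's DLIR-H window
print(f"FBP {standard_fbp:.1f} HU, DLIR {standard_dlir:.1f} HU, "
      f"DLIR corrected {corrected_dlir:.1f} HU")
```

Lowering the window pulls the DLIR measurement back toward (and in this crude toy, past) the FBP reference, which is why the study calibrates a separate threshold per reconstruction intensity.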

Medical image translation with deep learning: Advances, datasets and perspectives.

Chen J, Ye Z, Zhang R, Li H, Fang B, Zhang LB, Wang W

PubMed · Jul 1 2025
Traditional medical image generation often lacks patient-specific clinical information, limiting its clinical utility despite enhancing downstream task performance. In contrast, medical image translation precisely converts images from one modality to another, preserving both anatomical structures and cross-modal features, thus enabling efficient and accurate modality transfer and offering unique advantages for model development and clinical practice. This paper reviews the latest advancements in deep learning (DL)-based medical image translation. Initially, it elaborates on the diverse tasks and practical applications of medical image translation. Subsequently, it provides an overview of fundamental models, including convolutional neural networks (CNNs), transformers, and state space models (SSMs). Additionally, it delves into generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), autoregressive models (ARs), diffusion models, and flow models. Evaluation metrics for assessing translation quality are discussed, emphasizing their importance. Commonly used datasets in this field are also analyzed, highlighting their unique characteristics and applications. Looking ahead, the paper identifies future trends and challenges and proposes research directions and solutions in medical image translation. It aims to serve as a valuable reference and inspiration for researchers, driving continued progress and innovation in this area.
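Of the evaluation metrics the review discusses, peak signal-to-noise ratio (PSNR) is the simplest full-reference measure of translation quality. A minimal sketch on synthetic "translated" images (the images and noise levels below are invented for illustration):

```python
import numpy as np

def psnr(reference, translated, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two equally shaped images."""
    mse = np.mean((reference - translated) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

rng = np.random.default_rng(4)
target = rng.uniform(0.0, 1.0, (64, 64))  # ground-truth target modality
good = np.clip(target + rng.normal(0.0, 0.02, (64, 64)), 0.0, 1.0)  # faithful translation
poor = np.clip(target + rng.normal(0.0, 0.20, (64, 64)), 0.0, 1.0)  # noisy translation
print(f"good: {psnr(target, good):.1f} dB, poor: {psnr(target, poor):.1f} dB")
```

In practice PSNR is reported alongside structural metrics such as SSIM, since pixel-wise error alone can miss anatomically implausible outputs.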

The Evolution of Radiology Image Annotation in the Era of Large Language Models.

Flanders AE, Wang X, Wu CC, Kitamura FC, Shih G, Mongan J, Peng Y

PubMed · Jul 1 2025
Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming. Repurposing annotations and consolidating similar types of annotations from disparate sources has never been practical. Until recently, the use of natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models have made it possible to generate accurate and normalized labels at scale, using only clinical reports and specific prompt engineering. The combination of automatically generated labels extracted and normalized from reports in conjunction with foundational image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling process of medical images, from the traditional manual methods to the newest semiautomated methods that provide a more scalable solution for creating useful models more efficiently. <b>Keywords:</b> Feature Detection, Diagnosis, Semi-supervised Learning © RSNA, 2025.
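The report-to-label workflow described above can be prototyped without an LLM by using a few pattern rules as a stand-in for prompt-engineered extraction; an LLM replaces the rules with a prompt and handles far more phrasing variation. The label set, negation rule, and sample report below are all hypothetical.

```python
import re

# Hypothetical normalized label vocabulary
LABELS = ("cardiomegaly", "pleural effusion", "pneumothorax")
# Crude negation scope: from a negation cue to the end of the sentence
NEGATED_SPAN = re.compile(r"\b(?:no|without|negative for)\b[^.]*")

def extract_labels(report: str) -> dict:
    """Return {label: 0/1}; mentions inside negated spans do not count as positive."""
    text = report.lower()
    positive_text = NEGATED_SPAN.sub(" ", text)  # blank out negated mentions
    return {label: int(label in positive_text) for label in LABELS}

report = "Mild cardiomegaly. No pneumothorax. Small left pleural effusion."
labels = extract_labels(report)
print(labels)
```

Rule-based extraction like this breaks on uncertainty ("cannot exclude effusion") and synonymy, which is precisely the gap the article credits large language models with closing at scale.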

Multi-label pathology editing of chest X-rays with a Controlled Diffusion Model.

Chu H, Qi X, Wang H, Liang Y

PubMed · Jul 1 2025
Large-scale generative models have garnered significant attention in the field of medical imaging, particularly for image editing utilizing diffusion models. However, current research has predominantly concentrated on pathological editing involving single or a limited number of labels, making it challenging to achieve precise modifications. Inaccurate alterations may lead to substantial discrepancies between the generated and original images, thereby impacting the clinical applicability of these models. This paper presents a diffusion model with disentangling capabilities applied to chest X-ray image editing, incorporating a mask-based mechanism for bone and organ information. We successfully perform multi-label pathological editing of chest X-ray images without compromising the integrity of the original thoracic structure. The proposed technique comprises a chest X-ray image classifier and an intricate organ mask: the classifier supplies the feature labels that require disentangling for the stabilized diffusion model, while the organ mask enables directed and controllable edits to chest X-rays. We assessed the outcomes of our proposed algorithm, named Chest X-rays_Mpe, using MS-SSIM and CLIP scores alongside qualitative evaluations conducted by radiology experts. The results indicate that our approach surpasses existing algorithms across both quantitative and qualitative metrics.