Page 54 of 142 · 1416 results

Wall Shear Stress Estimation in Abdominal Aortic Aneurysms: Towards Generalisable Neural Surrogate Models

Patryk Rygiel, Julian Suk, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink

arXiv preprint · Jul 30, 2025
Abdominal aortic aneurysms (AAAs) are pathologic dilatations of the abdominal aorta posing a high fatality risk upon rupture. Studying AAA progression and rupture risk often involves in-silico blood flow modelling with computational fluid dynamics (CFD) and extraction of hemodynamic factors like time-averaged wall shear stress (TAWSS) or oscillatory shear index (OSI). However, CFD simulations are known to be computationally demanding. Hence, in recent years, geometric deep learning methods, operating directly on 3D shapes, have been proposed as compelling surrogates, estimating hemodynamic parameters in just a few seconds. In this work, we propose a geometric deep learning approach to estimating hemodynamics in AAA patients, and study its generalisability to common factors of real-world variation. We propose an E(3)-equivariant deep learning model utilising novel robust geometrical descriptors and projective geometric algebra. Our model is trained to estimate transient WSS using a dataset of CT scans of 100 AAA patients, from which lumen geometries are extracted and reference CFD simulations with varying boundary conditions are obtained. Results show that the model generalizes well within the distribution, as well as to the external test set. Moreover, the model can accurately estimate hemodynamics across geometry remodelling and changes in boundary conditions. Furthermore, we find that a trained model can be applied to different artery tree topologies, where new and unseen branches are added during inference. Finally, we find that the model is to a large extent agnostic to mesh resolution. These results show the accuracy and generalisation of the proposed model, and highlight its potential to contribute to hemodynamic parameter estimation in clinical practice.
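The target quantities above have standard definitions: TAWSS is the time average of the wall shear stress magnitude over a cardiac cycle, and OSI measures how much the shear vector reverses direction. A minimal, self-contained sketch of those textbook formulas (the sampling and the input vectors are illustrative, not the paper's pipeline):

```python
import math

def tawss_osi(tau, dt):
    """TAWSS and OSI from wall shear stress vectors sampled over one cycle.

    TAWSS = (1/T) * integral_0^T |tau| dt
    OSI   = 0.5 * (1 - |integral tau dt| / integral |tau| dt)
    """
    T = dt * len(tau)
    # Componentwise time integral of the shear vector.
    int_vec = [sum(v[i] for v in tau) * dt for i in range(3)]
    # Time integral of the shear magnitude.
    int_mag = sum(math.sqrt(v[0]**2 + v[1]**2 + v[2]**2) for v in tau) * dt
    tawss = int_mag / T
    osi = 0.5 * (1.0 - math.sqrt(sum(c * c for c in int_vec)) / int_mag)
    return tawss, osi

# Purely unidirectional shear: no oscillation, so OSI = 0.
print(tawss_osi([(1.0, 0.0, 0.0)] * 10, 0.1))  # -> (1.0, 0.0)
```

A fully reversing shear signal drives OSI toward its maximum of 0.5, which is why OSI is read as a marker of disturbed, oscillatory flow.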

Trabecular bone analysis: ultra-high-resolution CT goes far beyond high-resolution CT and gets closer to micro-CT (a study using Canon Medical CT devices).

Gillet R, Puel U, Amer A, Doyen M, Boubaker F, Assabah B, Hossu G, Gillet P, Blum A, Teixeira PAG

PubMed · Jul 30, 2025
High-resolution CT (HR-CT) cannot image trabecular bone due to insufficient spatial resolution. Ultra-high-resolution CT may be a valuable alternative. We aimed to describe the accuracy of Canon Medical HR, super-high-resolution (SHR), and ultra-high-resolution (UHR) CT in measuring trabecular bone microarchitectural parameters, using micro-CT as a reference. Sixteen cadaveric distal tibial epiphyses were included in this pre-clinical study. Images were acquired with HR-CT (0.5 mm slice thickness, 512² matrix) and SHR-CT (0.25 mm slice thickness, 1024² matrix) with and without deep learning reconstruction (DLR), and with UHR-CT (0.25 mm slice thickness, 2048² matrix) without DLR. Trabecular bone parameters were compared. Trabecular thickness was closest with UHR-CT but remained 1.37 times that of micro-CT (P < 0.001). With SHR-CT without and with DLR, it was 1.75 and 1.79 times that of micro-CT, respectively (P < 0.001), and 3.58 and 3.68 times that of micro-CT with HR-CT without and with DLR, respectively (P < 0.001). Trabecular separation was 0.7 times that of micro-CT with UHR-CT (P < 0.001), 0.93 and 0.94 times that of micro-CT with SHR-CT without and with DLR (P = 0.36 and 0.79, respectively), and 1.52 and 1.36 times that of micro-CT with HR-CT without and with DLR (P < 0.001). Bone volume/total volume was overestimated (1.66 to 1.92 times that of micro-CT) by all techniques (P < 0.001); however, HR-CT values were higher than UHR-CT values (P = 0.03 and 0.01, without and with DLR, respectively). UHR- and SHR-CT were the techniques closest to micro-CT and surpassed HR-CT.

A deep learning model for predicting radiation-induced xerostomia in patients with head and neck cancer based on multi-channel fusion.

Lin L, Ren Y, Jian W, Yang G, Zhang B, Zhu L, Zhao W, Meng H, Wang X, He Q

PubMed · Jul 30, 2025
Radiation-induced xerostomia is a common sequela in patients who undergo head and neck radiation therapy. This study aims to develop a three-dimensional deep learning model to predict xerostomia by fusing data from the gross tumor volume primary (GTVp) channel and the parotid glands (PGs) channel. Retrospective data were collected from 180 head and neck cancer patients. Xerostomia was defined as xerostomia of grade ≥ 2 occurring in the 6th month after radiation therapy. The dataset was split into 137 cases (58.4% xerostomia, 41.6% non-xerostomia) for training and 43 (55.8% xerostomia, 44.2% non-xerostomia) for testing. XeroNet was composed of GNet, PNet, and a naive Bayes decision fusion layer. GNet processed data from the GTVp channel (CT, the corresponding dose distributions, and the GTVp contours); PNet processed data from the PGs channel (CT, dose distributions, and the PGs contours). The naive Bayes decision fusion layer integrated the results from GNet and PNet. Model performance was evaluated using accuracy, F-score, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The proposed model achieved promising prediction results: accuracy, AUC, F-score, sensitivity, and specificity were 0.779, 0.858, 0.797, 0.777, and 0.782, respectively. Features extracted from the CT and dose distributions in the GTVp and PGs regions were also used to construct machine learning models, but the performance of these models was inferior to our method. Compared with recent studies on xerostomia prediction, our method also showed better performance. The proposed model could effectively extract features from the GTVp and PGs channels, achieving good performance in xerostomia prediction.
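The abstract does not spell out the fusion layer's arithmetic; under the usual naive Bayes (conditional independence) assumption, two channel posteriors are combined by multiplying their likelihood ratios against a shared prior. A hypothetical sketch, where `nb_fuse`, the probabilities, and the 0.5 prior are all illustrative rather than the paper's values:

```python
def nb_fuse(p1, p2, prior=0.5):
    """Fuse per-channel posteriors P(y=1 | channel) from two models
    under a naive Bayes (conditional independence) assumption.

    Each posterior is converted back to a likelihood ratio, the two
    ratios are multiplied, and the class prior is reapplied once.
    """
    prior_odds = prior / (1 - prior)
    lr1 = (p1 / (1 - p1)) / prior_odds  # evidence from channel 1
    lr2 = (p2 / (1 - p2)) / prior_odds  # evidence from channel 2
    odds = prior_odds * lr1 * lr2
    return odds / (1 + odds)

# Two moderately confident channels reinforce each other:
print(round(nb_fuse(0.8, 0.7), 3))  # -> 0.903
```

With a flat prior this reduces to `p1*p2 / (p1*p2 + (1-p1)*(1-p2))`, so the fused posterior is more confident than either channel alone when both agree.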

Role of Artificial Intelligence in Surgical Training by Assessing GPT-4 and GPT-4o on the Japan Surgical Board Examination With Text-Only and Image-Accompanied Questions: Performance Evaluation Study.

Maruyama H, Toyama Y, Takanami K, Takase K, Kamei T

PubMed · Jul 30, 2025
Artificial intelligence and large language models (LLMs), particularly GPT-4 and GPT-4o, have demonstrated high correct-answer rates in medical examinations. GPT-4o has enhanced diagnostic capabilities, advanced image processing, and updated knowledge. Japanese surgeons face critical challenges, including a declining workforce, regional health care disparities, and work-hour-related constraints. Although LLMs could be beneficial in surgical education, no studies have yet assessed GPT-4o's surgical knowledge or its performance in the field of surgery. This study aims to evaluate the potential of GPT-4 and GPT-4o in surgical education by using them to take the Japan Surgical Board Examination (JSBE), which includes both textual questions and medical images, such as surgical images and computed tomography scans, to comprehensively assess their surgical knowledge. We used 297 multiple-choice questions from the 2021-2023 JSBEs. The questions were in Japanese, and 104 of them included images. First, the GPT-4 and GPT-4o responses to the text-only questions were collected via OpenAI's application programming interface to evaluate their correct-answer rate. Subsequently, the correct-answer rate on questions that included images was assessed by inputting both text and images. The overall correct-answer rates of GPT-4o and GPT-4 for the text-only questions were 78% (231/297) and 55% (163/297), respectively, with GPT-4o outperforming GPT-4 by 23 percentage points (P<.01). By contrast, there was no significant improvement in the correct-answer rate for questions that included images compared with the results for the text-only questions. GPT-4o outperformed GPT-4 on the JSBE. However, the results of the LLMs were lower than those of the examinees. Despite the capabilities of LLMs, image recognition remains a challenge for them, and their clinical application requires caution owing to the potential inaccuracy of their results.

LAMA-Net: A Convergent Network Architecture for Dual-Domain Reconstruction

Chi Ding, Qingchao Zhang, Ge Wang, Xiaojing Ye, Yunmei Chen

arXiv preprint · Jul 30, 2025
We propose a learnable variational model that learns the features and leverages complementary information from both image and measurement domains for image reconstruction. In particular, we introduce a learned alternating minimization algorithm (LAMA) from our prior work, which tackles two-block nonconvex and nonsmooth optimization problems by incorporating a residual learning architecture in a proximal alternating framework. In this work, our goal is to provide a complete and rigorous convergence proof of LAMA and show that all accumulation points of a specified subsequence of LAMA must be Clarke stationary points of the problem. LAMA directly yields a highly interpretable neural network architecture called LAMA-Net. Notably, in addition to the results shown in our prior work, we demonstrate that the convergence property of LAMA yields outstanding stability and robustness of LAMA-Net in this work. We also show that the performance of LAMA-Net can be further improved by integrating a properly designed network that generates suitable initials, which we call iLAMA-Net. To evaluate LAMA-Net/iLAMA-Net, we conduct several experiments and compare them with several state-of-the-art methods on popular benchmark datasets for Sparse-View Computed Tomography.
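LAMA itself replaces hand-crafted proximal steps with learned residual networks, which is beyond a few lines; the underlying two-block alternating structure, updating each block of variables while holding the other fixed, can be sketched with plain gradient steps on a toy smooth objective (illustrative only, not the paper's algorithm):

```python
def alternating_minimization(grad_x, grad_z, x, z, step=0.1, iters=200):
    """Two-block alternating gradient scheme: update x with z frozen,
    then update z using the freshly updated x, and repeat."""
    for _ in range(iters):
        x = x - step * grad_x(x, z)  # image-domain block
        z = z - step * grad_z(x, z)  # measurement-domain block
    return x, z

# Toy smooth objective f(x, z) = x^2 + z^2 + 0.5*x*z, minimized at (0, 0).
gx = lambda x, z: 2 * x + 0.5 * z
gz = lambda x, z: 2 * z + 0.5 * x
x_star, z_star = alternating_minimization(gx, gz, 1.0, -1.0)
print(abs(x_star) < 1e-3 and abs(z_star) < 1e-3)  # -> True
```

The convergence analysis in the paper concerns the harder nonconvex, nonsmooth setting with proximal and learned residual terms; this sketch only shows the alternating skeleton those guarantees are built around.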

Multiple Tumor-related autoantibodies test enhances CT-based deep learning performance in diagnosing lung cancer with diameters < 70 mm: a prospective study in China.

Meng Q, Ren P, Guo L, Gao P, Liu T, Chen W, Liu W, Peng H, Fang M, Meng S, Ge H, Li M, Chen X

PubMed · Jul 29, 2025
Deep learning (DL) demonstrates high sensitivity but low specificity in lung cancer (LC) detection during CT screening. The seven tumor-associated antigen autoantibodies (7-TAAbs) test, known for its high specificity in LC, was therefore employed to improve DL's specificity for LC screening in China. To develop and evaluate a risk model combining the 7-TAAbs test and DL scores for diagnosing LC in pulmonary lesions < 70 mm. Four hundred and six patients with 406 lesions were enrolled and randomly assigned to a training set (n = 313) and a test set (n = 93). Malignant lesions were defined as those rated high-risk by DL or those with positive expression on the 7-TAAbs panel. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). In the training set, the AUCs for DL, 7-TAAbs, the combined model (DL and 7-TAAbs), and the combined model (DL or 7-TAAbs) were 0.771, 0.638, 0.606, and 0.809, respectively. In the test set, the combined model (DL or 7-TAAbs) achieved the highest sensitivity (82.6%), NPV (81.8%), and accuracy (79.6%) among the four models, and the AUCs of the DL model, the 7-TAAbs model, the combined model (DL and 7-TAAbs), and the combined model (DL or 7-TAAbs) were 0.731, 0.679, 0.574, and 0.794, respectively. The 7-TAAbs test significantly enhances DL performance in predicting LC in pulmonary lesions < 70 mm in China.
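The "DL or 7-TAAbs" model calls a lesion malignant when either test is positive. If the two tests were to err independently, the combined sensitivity and specificity would follow the standard OR-rule formulas, sketched here (the independence assumption and the example numbers are illustrative, not the study's operating points):

```python
def combine_or(se1, sp1, se2, sp2):
    """Sensitivity/specificity of 'positive if either test is positive',
    assuming the two tests err independently.

    A miss requires both tests to miss, so sensitivity rises;
    a true negative requires both to stay negative, so specificity falls.
    """
    sensitivity = 1 - (1 - se1) * (1 - se2)
    specificity = sp1 * sp2
    return sensitivity, specificity

# Illustrative numbers only:
se, sp = combine_or(0.7, 0.9, 0.6, 0.95)
print(round(se, 3), round(sp, 3))  # -> 0.88 0.855
```

This is the usual trade-off behind OR-combined screening rules: sensitivity can only go up and specificity can only go down, which matches the combined model's test-set profile of highest sensitivity among the four models.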

Evaluation and analysis of risk factors for fractured vertebral recompression post-percutaneous kyphoplasty: a retrospective cohort study based on logistic regression analysis.

Zhao Y, Li B, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

PubMed · Jul 29, 2025
Vertebral recompression after percutaneous kyphoplasty (PKP) for osteoporotic vertebral compression fractures (OVCFs) may lead to recurrent pain, deformity, and neurological impairment, compromising prognosis and quality of life. To identify independent risk factors for postoperative recompression and develop predictive models for risk assessment. We retrospectively analyzed 284 OVCF patients treated with PKP, grouped by recompression status. Predictors were screened using univariate and correlation analyses. Multicollinearity was assessed using variance inflation factor (VIF). A multivariable logistic regression model was constructed and validated via 10-fold cross-validation and temporal validation. Five independent predictors were identified: incomplete anterior cortex (odds ratio [OR] = 9.38), high paravertebral muscle fat infiltration (OR = 218.68), low vertebral CT value (OR = 0.87), large Cobb change (OR = 1.45), and high vertebral height recovery rate (OR = 22.64). The logistic regression model achieved strong performance: accuracy 97.67%, precision 97.06%, recall 97.06%, F1 score 97.06%, specificity 98.08%, area under the receiver operating characteristic curve (AUC) 0.998. Machine learning models (e.g., random forest) were also evaluated but did not outperform logistic regression in accuracy or interpretability. Five imaging-based predictors of vertebral recompression were identified. The logistic regression model showed excellent predictive accuracy and generalizability, supporting its clinical utility for early risk stratification and personalized decision-making in OVCF patients undergoing PKP.
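The reported odds ratios connect directly to the fitted logistic model: each OR is the exponential of its coefficient, and a patient's predicted risk is the logistic function of the linear predictor. A small sketch of that relationship; the intercept and the single-feature patient below are hypothetical, since the abstract does not report the intercept:

```python
import math

def predicted_risk(intercept, betas, x):
    """Logistic model: P(event) = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))

# Each reported odds ratio is exp(beta), so beta = ln(OR);
# e.g. OR = 9.38 for an incomplete anterior cortex:
beta_cortex = math.log(9.38)
print(round(beta_cortex, 2))  # -> 2.24

# Hypothetical patient with only that risk factor, under a made-up
# intercept of -3.0 (not from the paper):
print(round(predicted_risk(-3.0, [beta_cortex], [1.0]), 3))  # -> 0.318
```

The same mapping explains why an OR below 1, such as 0.87 per unit of vertebral CT value, corresponds to a negative coefficient, i.e. a protective direction.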

Cardiac-CLIP: A Vision-Language Foundation Model for 3D Cardiac CT Images

Yutao Hu, Ying Zheng, Shumei Miao, Xiaolei Zhang, Jiahao Xia, Yaolei Qi, Yiyang Zhang, Yuting He, Qian Chen, Jing Ye, Hongyan Qiao, Xiuhua Hu, Lei Xu, Jiayin Zhang, Hui Liu, Minwen Zheng, Yining Wang, Daimin Zhang, Ji Zhang, Wenqi Shao, Yun Liu, Longjiang Zhang, Guanyu Yang

arXiv preprint · Jul 29, 2025
Foundation models have demonstrated remarkable potential in the medical domain. However, their application to complex cardiovascular diagnostics remains underexplored. In this paper, we present Cardiac-CLIP, a multi-modal foundation model designed for 3D cardiac CT images. Cardiac-CLIP is developed through a two-stage pre-training strategy. The first stage employs a 3D masked autoencoder (MAE) to perform self-supervised representation learning from large-scale unlabeled volumetric data, enabling the visual encoder to capture rich anatomical and contextual features. In the second stage, contrastive learning is introduced to align visual and textual representations, facilitating cross-modal understanding. To support the pre-training, we collect 16,641 real clinical CT scans, supplemented by 114k publicly available scans. Meanwhile, we standardize free-text radiology reports into unified templates and construct pathology vectors according to diagnostic attributes, from which a soft-label matrix is generated to supervise the contrastive learning process. To comprehensively evaluate the effectiveness of Cardiac-CLIP, we collect 6,722 real clinical cases from 12 independent institutions, along with open-source data, to construct the evaluation dataset. Cardiac-CLIP is evaluated across multiple tasks, including cardiovascular abnormality classification, information retrieval, and clinical analysis. Experimental results demonstrate that Cardiac-CLIP achieves state-of-the-art performance across various downstream tasks on both internal and external data. In particular, Cardiac-CLIP is highly effective in supporting complex clinical tasks such as the prospective prediction of acute coronary syndrome, which is notoriously difficult in real-world scenarios.
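The soft-label supervision described above replaces CLIP's one-hot contrastive targets with a row of soft weights per image; the per-image loss is then cross-entropy between the softmax over image-text similarity scores and that soft row. A generic sketch of such a soft-target loss (how the soft labels themselves are built from pathology vectors follows the paper and is not reproduced here):

```python
import math

def soft_target_ce(sim, soft_labels):
    """Cross-entropy between softmax(sim) and a soft-label row.

    sim: similarity scores of one image against each text in the batch.
    soft_labels: non-negative weights summing to 1, encoding how well
    each text's pathology vector matches this image's report.
    """
    m = max(sim)                              # shift for numerical stability
    exps = [math.exp(s - m) for s in sim]
    total = sum(exps)
    return -sum(y * math.log(e / total) for y, e in zip(soft_labels, exps))

# Two equally similar texts with equal soft labels: loss = ln 2.
print(round(soft_target_ce([0.0, 0.0], [0.5, 0.5]), 4))  # -> 0.6931
```

With a one-hot `soft_labels` row this reduces to the standard InfoNCE-style contrastive term, so soft labels are a strict generalization that lets partially matching reports share credit.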

Diabetes and longitudinal changes in deep learning-derived measures of vertebral bone mineral density using conventional CT: the Multi-Ethnic Study of Atherosclerosis.

Ghotbi E, Hadidchi R, Hathaway QA, Bancks MP, Bluemke DA, Barr RG, Smith BM, Post WS, Budoff M, Lima JAC, Demehri S

PubMed · Jul 29, 2025
To investigate the longitudinal association between diabetes and changes in vertebral bone mineral density (BMD) derived from conventional chest CT, and to evaluate whether kidney function (estimated glomerular filtration rate, eGFR) modifies this relationship. This longitudinal study included 1046 participants from the Multi-Ethnic Study of Atherosclerosis Lung Study with vertebral BMD measurements from chest CTs at Exam 5 (2010-2012) and Exam 6 (2016-2018). Diabetes was classified based on the American Diabetes Association criteria, and those with impaired fasting glucose (i.e., prediabetes) were excluded. Volumetric BMD was derived using a validated deep learning model to segment the trabecular bone of the thoracic vertebrae. Linear mixed-effects models estimated the association between diabetes and BMD changes over time. Following a significant interaction between diabetes status and eGFR, additional stratified analyses examined the impact of kidney function (i.e., diabetic nephropathy), categorized by eGFR (≥ 60 vs. < 60 mL/min/body surface area). Participants with diabetes had a higher baseline vertebral BMD than those without (202 vs. 190 mg/cm³) and experienced a significant increase over a median follow-up of 6.2 years (β = 0.62 mg/cm³/year; 95% CI 0.26, 0.98). This increase was more pronounced among individuals with diabetes and reduced kidney function (β = 1.52 mg/cm³/year; 95% CI 0.66, 2.39) than among diabetic individuals with preserved kidney function (β = 0.48 mg/cm³/year; 95% CI 0.10, 0.85). Individuals with diabetes exhibited an increase in vertebral BMD over time compared with the non-diabetes group, an increase that was more pronounced in those with diabetic nephropathy. These findings suggest that conventional BMD measurements may not fully capture the well-known fracture risk in diabetes. Further studies incorporating bone microarchitecture using advanced imaging and fracture outcomes are needed to refine skeletal health assessments in the diabetic population.

Determining the scanning range of coronary computed tomography angiography based on deep learning.

Zhao YH, Fan YH, Wu XY, Qin T, Sun QT, Liang BH

PubMed · Jul 28, 2025
Coronary computed tomography angiography (CCTA) is essential for diagnosing coronary artery disease as it provides detailed images of the heart's blood vessels to identify blockages or abnormalities. Traditionally, determining the computed tomography (CT) scanning range has relied on manual methods due to limited automation in this area. To develop and evaluate a novel deep learning approach to automate the determination of CCTA scan ranges using anteroposterior scout images. A retrospective analysis was conducted on chest CT data from 1388 patients at the Radiology Department of the First Affiliated Hospital of a university-affiliated hospital, collected between February 27 and March 27, 2024. A deep learning model was trained on anteroposterior scout images with annotations based on CCTA standards. The dataset was split into training (672 cases), validation (167 cases), and test (167 cases) sets to ensure robust model evaluation. The study demonstrated exceptional performance on the test set, achieving a mean average precision (mAP50) of 0.995 and mAP50-95 of 0.994 for determining CCTA scan ranges. This study demonstrates that: (1) Anteroposterior scout images can effectively estimate CCTA scan ranges; and (2) Estimates can be dynamically adjusted to meet the needs of various medical institutions.
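A predicted CCTA scan range on an anteroposterior scout reduces to a 1-D interval along the z-axis, so the overlap computation behind detection metrics such as mAP comes down to interval intersection-over-union. A minimal sketch (the coordinates are illustrative):

```python
def iou_1d(a, b):
    """Intersection-over-union of two 1-D ranges (start, end), e.g. a
    predicted vs. reference CCTA scan range along the scout's z-axis."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# 5 units of overlap over 15 units of union:
print(round(iou_1d((0.0, 10.0), (5.0, 15.0)), 4))  # -> 0.3333
```

mAP50 then counts a predicted range as correct when its IoU with the reference range is at least 0.5, and mAP50-95 averages that over IoU thresholds from 0.5 to 0.95.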
