Aortic atherosclerosis evaluation using deep learning based on non-contrast CT: A retrospective multi-center study.

Yang M, Lyu J, Xiong Y, Mei A, Hu J, Zhang Y, Wang X, Bian X, Huang J, Li R, Xing X, Su S, Gao J, Lou X

PubMed · Aug 15, 2025
Non-contrast CT (NCCT) is widely used in clinical practice and holds potential for large-scale atherosclerosis screening, yet its application in detecting and grading aortic atherosclerosis remains limited. To address this, we propose Aortic-AAE, an automated segmentation system based on a cascaded attention mechanism within the nnU-Net framework. The cascaded attention module enhances feature learning across complex anatomical structures, outperforming existing attention modules. Integrated preprocessing and post-processing ensure anatomical consistency and robustness across multi-center data. Trained on 435 labeled NCCT scans from three centers and validated on 388 independent cases, Aortic-AAE achieved 81.12% accuracy in aortic stenosis classification and 92.37% in Agatston scoring of calcified plaques, surpassing five state-of-the-art models. This study demonstrates the feasibility of using deep learning for accurate detection and grading of aortic atherosclerosis from NCCT, supporting improved diagnostic decisions and enhanced clinical workflows.
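
The Agatston score referenced above follows a standard weighting rule: each calcified lesion contributes its area multiplied by a factor determined by its peak attenuation. A minimal sketch of that conventional calculation (Python, using scipy for connected components) is below; the thresholds follow the classic coronary protocol, and the paper's exact implementation for aortic calcium may differ.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice, pixel_area_mm2, threshold=130, min_area_mm2=1.0):
    """Conventional per-slice Agatston score: lesion area (mm^2) times a
    density weight set by the lesion's peak attenuation."""
    mask = hu_slice >= threshold
    labels, n = ndimage.label(mask)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue  # ignore sub-millimetre specks (likely noise)
        peak = hu_slice[lesion].max()
        if peak < 200:
            w = 1
        elif peak < 300:
            w = 2
        elif peak < 400:
            w = 3
        else:
            w = 4
        score += area * w
    return score

def agatston_total(volume_hu, pixel_area_mm2):
    # Total score: sum of per-slice scores over the axial stack.
    return sum(agatston_slice_score(s, pixel_area_mm2) for s in volume_hu)
```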

Fully Automatic Volume Segmentation Using Deep Learning Approaches to Assess the Thoracic Aorta, Visceral Abdominal Aorta, and Visceral Vasculature.

Pouncey AL, Charles E, Bicknell C, Bérard X, Ducasse E, Caradu C

PubMed · Aug 12, 2025
Computed tomography angiography (CTA) imaging is essential to evaluate and analyse complex abdominal and thoraco-abdominal aortic aneurysms. However, CTA analyses are labour intensive, time consuming, and prone to interphysician variability. Fully automatic volume segmentation (FAVS) using artificial intelligence with deep learning has been validated for infrarenal aorta imaging but requires further testing for thoracic and visceral aorta segmentation. This study assessed FAVS accuracy against physician controlled manual segmentation (PCMS) in the descending thoracic aorta, visceral abdominal aorta, and visceral vasculature. This was a retrospective, multicentre, observational cohort study. Fifty pre-operative CTAs of patients with abdominal aortic aneurysm were randomly selected. Comparisons between FAVS and PCMS and assessment of inter- and intra-observer reliability of PCMS were performed. Volumetric segmentation performance was evaluated using sensitivity, specificity, Dice similarity coefficient (DSC), and Jaccard index (JI). Visceral vessel identification was compared by analysing branchpoint coordinates. Bland-Altman limits of agreement (BA-LoA) were calculated for proximal visceral diameters (excluding duplicate renals). FAVS demonstrated performance comparable with PCMS for volumetric segmentation, with a median DSC of 0.93 (interquartile range [IQR] 0.03), JI of 0.87 (IQR 0.05), sensitivity of 0.99 (IQR 0.01), and specificity of 1.00 (IQR 0.00). These metrics are similar to interphysician comparisons: median DSC 0.93 (IQR 0.07), JI 0.87 (IQR 0.12), sensitivity 0.90 (IQR 0.08), and specificity 1.00 (IQR 0.00). FAVS correctly identified 99.5% (183/184) of visceral vessels. Branchpoint coordinates for FAVS and PCMS were within the limits of CTA spatial resolution (Δx -0.33 [IQR 2.82], Δy 0.61 [IQR 4.85], Δz 2.10 [IQR 4.69] mm). BA-LoA for proximal visceral diameter measurements showed reasonable agreement: FAVS vs. PCMS mean difference -0.11 ± 5.23 mm compared with interphysician variability of 0.03 ± 5.27 mm. FAVS provides accurate, efficient segmentation of the thoracic and visceral aorta, delivering performance comparable with manual segmentation by expert physicians. This technology may enhance clinical workflows for monitoring and planning treatments for complex abdominal and thoraco-abdominal aortic aneurysms.
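
For reference, the overlap and agreement metrics used in this comparison are straightforward to compute from binary masks and paired diameter measurements. The sketch below (Python/NumPy) shows the conventional definitions of the Dice similarity coefficient, Jaccard index, and Bland-Altman 95% limits of agreement; it is illustrative rather than the study's own code.

```python
import numpy as np

def dice_jaccard(pred, ref):
    """Volumetric overlap between two binary masks (e.g., FAVS vs. PCMS)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    dice = 2 * inter / (pred.sum() + ref.sum())
    jaccard = inter / np.logical_or(pred, ref).sum()
    return dice, jaccard

def bland_altman_loa(a, b):
    """Mean difference and 95% limits of agreement for paired diameter
    measurements obtained with two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mean_diff = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)
```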

Reducing motion artifacts in the aorta: super-resolution deep learning reconstruction with motion reduction algorithm.

Yasaka K, Tsujimoto R, Miyo R, Abe O

PubMed · Aug 9, 2025
To assess the efficacy of super-resolution deep learning reconstruction (SR-DLR) with motion reduction algorithm (SR-DLR-M) in mitigating aorta motion artifacts compared to SR-DLR and deep learning reconstruction with motion reduction algorithm (DLR-M). This retrospective study included 86 patients (mean age, 65.0 ± 14.1 years; 53 males) who underwent contrast-enhanced CT including the chest region. CT images were reconstructed with SR-DLR-M, SR-DLR, and DLR-M. Circular or ovoid regions of interest were placed on the aorta, and the standard deviation of the CT attenuation was recorded as quantitative noise. From the CT attenuation profile along a line region of interest that intersected the left common carotid artery wall, edge rise slope and edge rise distance were calculated. Two readers assessed the images based on artifact, sharpness, noise, structure depiction, and diagnostic acceptability (for aortic dissection). Quantitative noise was 7.4/5.4/8.3 Hounsfield units (HU) in SR-DLR-M/SR-DLR/DLR-M. Significant differences were observed between SR-DLR-M and both SR-DLR and DLR-M (p < 0.001). Edge rise slope and edge rise distance were 107.1/108.8/85.8 HU/mm and 1.6/1.5/2.0 mm, respectively, in SR-DLR-M/SR-DLR/DLR-M. Statistically significant differences were detected between SR-DLR-M and DLR-M (p ≤ 0.001 for both). Two readers scored artifacts in SR-DLR-M as significantly better than those in SR-DLR (p < 0.001). Scores for sharpness, noise, and structure depiction in SR-DLR-M were significantly better than those in DLR-M (p ≤ 0.005). Diagnostic acceptability in SR-DLR-M was significantly better than that in SR-DLR and DLR-M (p < 0.001). SR-DLR-M provided significantly better CT images in diagnosing aortic dissection compared to SR-DLR and DLR-M.
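
The quantitative image-quality measures described here can be reproduced from an ROI and a line profile. The sketch below assumes the common 10%-90% convention for the edge rise span, which the abstract does not specify; it is only an illustration of how such metrics are typically derived.

```python
import numpy as np

def roi_noise(hu_values):
    # Quantitative noise: standard deviation of attenuation inside the ROI.
    return np.std(hu_values, ddof=1)

def edge_rise_metrics(profile_hu, spacing_mm, lo=0.1, hi=0.9):
    """Edge sharpness from a line profile crossing the vessel wall, assuming
    the profile rises from lumen-adjacent low values to the wall plateau:
    rise distance between the 10% and 90% levels of the edge, and the mean
    slope over that span (HU/mm)."""
    p = np.asarray(profile_hu, float)
    p_min, p_max = p.min(), p.max()
    lo_level = p_min + lo * (p_max - p_min)
    hi_level = p_min + hi * (p_max - p_min)
    idx_lo = np.argmax(p >= lo_level)   # first sample above the low level
    idx_hi = np.argmax(p >= hi_level)   # first sample above the high level
    rise_distance = abs(idx_hi - idx_lo) * spacing_mm
    rise_slope = (hi_level - lo_level) / rise_distance if rise_distance > 0 else np.inf
    return rise_slope, rise_distance
```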

Lower Extremity Bypass Surveillance and Peak Systolic Velocities Value Prediction Using Recurrent Neural Networks.

Luo X, Tahabi FM, Rollins DM, Sawchuk AP

PubMed · Aug 7, 2025
Routine duplex ultrasound surveillance is recommended after femoral-popliteal and femoral-tibial-pedal vein bypass grafts at various post-operative intervals. Currently, there is no systematic method for bypass graft surveillance using a set of peak systolic velocities (PSVs) collected during these exams. This research aims to explore the use of recurrent neural networks to predict the next set of PSVs, which can then indicate occlusion status. Recurrent neural network models were developed to predict occlusion and stenosis based on one to three prior sets of PSVs, with a sequence-to-sequence model utilized to forecast future PSVs within the stent graft and nearby arteries. The study employed 5-fold cross-validation for model performance comparison, revealing that the BiGRU model outperformed BiLSTM when two or more sets of PSVs were included, demonstrating that additional duplex ultrasound exams improve prediction accuracy and reduce error rates. This work establishes a basis for integrating comprehensive clinical data, including demographics, comorbidities, symptoms, and other risk factors, with PSVs to enhance lower extremity bypass graft surveillance predictions.
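
As a rough illustration of the sequence-to-sequence idea, the sketch below shows a small bidirectional GRU (PyTorch) that maps prior sets of PSVs to a predicted next set. The number of measurement sites and layer sizes are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class PSVForecaster(nn.Module):
    """Bidirectional GRU mapping one to three prior sets of peak systolic
    velocities (one vector per surveillance exam) to the predicted next set."""
    def __init__(self, n_sites=10, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_sites, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_sites)

    def forward(self, psv_seq):            # psv_seq: (batch, n_exams, n_sites)
        _, h = self.encoder(psv_seq)       # h: (2, batch, hidden), one per direction
        h = torch.cat([h[0], h[1]], dim=1) # concatenate forward and backward states
        return self.head(h)                # predicted next PSV set

# Example: two prior exams, each with PSVs at 10 graft/artery sites.
model = PSVForecaster()
next_psvs = model(torch.randn(4, 2, 10))   # (batch=4, exams=2, sites=10)
```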

Automated ultrasound Doppler angle estimation using deep learning

Nilesh Patil, Ajay Anand

arXiv preprint · Aug 6, 2025
Angle estimation is an important step in the Doppler ultrasound clinical workflow to measure blood velocity. It is widely recognized that incorrect angle estimation is a leading cause of error in Doppler-based blood velocity measurements. In this paper, we propose a deep learning-based approach for automated Doppler angle estimation. The approach was developed using 2100 human carotid ultrasound images including image augmentation. Five pre-trained models were used to extract image features, and these features were passed to a custom shallow network for Doppler angle estimation. Independently, measurements were obtained by a human observer reviewing the images for comparison. The mean absolute error (MAE) between the automated and manual angle estimates ranged from 3.9° to 9.4° for the models evaluated. Furthermore, the MAE for the best performing model was less than the acceptable clinical Doppler angle error threshold, thus avoiding misclassification of normal velocity values as a stenosis. The results demonstrate potential for applying a deep-learning based technique for automated ultrasound Doppler angle estimation. Such a technique could potentially be implemented within the imaging software on commercial ultrasound scanners.
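
The clinical stakes of angle error follow directly from the Doppler equation, v = c·Δf / (2·f0·cos θ): because the velocity correction divides by cos θ, a few degrees of error at steep insonation angles inflates the estimate substantially. A short numerical illustration (the transmit frequency and frequency shift below are assumed values, not from the paper):

```python
import numpy as np

def doppler_velocity(delta_f_hz, f0_hz, angle_deg, c=1540.0):
    """Standard Doppler equation: v = c * Δf / (2 * f0 * cos(θ)).
    c = 1540 m/s is the usual soft-tissue sound speed."""
    return c * delta_f_hz / (2.0 * f0_hz * np.cos(np.radians(angle_deg)))

# Same frequency shift corrected with 60° vs. 69° (a 9° error, near the
# upper end of the MAE range reported above):
v_true = doppler_velocity(2000, 5e6, 60)   # ~0.62 m/s
v_err  = doppler_velocity(2000, 5e6, 69)   # ~0.86 m/s, roughly a 40% overestimate
```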

Beyond unimodal analysis: Multimodal ensemble learning for enhanced assessment of atherosclerotic disease progression.

Guarrasi V, Bertgren A, Näslund U, Wennberg P, Soda P, Grönlund C

PubMed · Aug 5, 2025
Atherosclerosis is a leading cardiovascular disease typified by fatty streaks accumulating within arterial walls, culminating in potential plaque ruptures and subsequent strokes. Existing clinical risk scores, such as systematic coronary risk estimation and the Framingham risk score, profile cardiovascular risks based on factors like age, cholesterol, and smoking, among others. However, these scores display limited sensitivity in early disease detection. In parallel, ultrasound-based risk markers, such as the carotid intima media thickness, while informative, only offer limited predictive power. Notably, current models largely focus on either ultrasound image-derived risk markers or clinical risk factor data without combining both for a comprehensive, multimodal assessment. This study introduces a multimodal ensemble learning framework to assess atherosclerosis severity, especially in its early sub-clinical stage. We utilize a multi-objective optimization targeting both performance and diversity, aiming to integrate features from each modality effectively. Our objective is to measure the efficacy of models using multimodal data in assessing vascular aging, i.e., plaque presence and vascular age, over a six-year period. We also delineate a procedure for optimal model selection from a vast pool, focusing on best-suited models for classification tasks. Additionally, through eXplainable Artificial Intelligence techniques, this work delves into understanding key model contributors and discerning unique subject subgroups.
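
To make the performance-plus-diversity selection idea concrete, the toy sketch below scores candidate ensembles by mean accuracy and mean pairwise disagreement. The actual framework performs a joint multi-objective optimization rather than the fixed weighted sum used here; the code is illustrative only.

```python
import numpy as np
from itertools import combinations

def disagreement(preds_a, preds_b):
    # Fraction of samples on which two models disagree (a simple diversity proxy).
    return np.mean(preds_a != preds_b)

def select_ensemble(model_preds, y_true, k=3, alpha=0.5):
    """Pick k members maximizing a weighted sum of mean accuracy and mean
    pairwise disagreement. model_preds: dict of name -> predicted labels."""
    best, best_score = None, -np.inf
    for subset in combinations(list(model_preds), k):
        acc = np.mean([np.mean(model_preds[m] == y_true) for m in subset])
        div = np.mean([disagreement(model_preds[a], model_preds[b])
                       for a, b in combinations(subset, 2)])
        score = (1 - alpha) * acc + alpha * div
        if score > best_score:
            best, best_score = subset, score
    return best
```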

Temporal consistency-aware network for renal artery segmentation in X-ray angiography.

Yang B, Li C, Fezzi S, Fan Z, Wei R, Chen Y, Tavella D, Ribichini FL, Zhang S, Sharif F, Tu S

PubMed · Aug 2, 2025
Accurate segmentation of renal arteries from X-ray angiography videos is crucial for evaluating renal sympathetic denervation (RDN) procedures but remains challenging due to dynamic changes in contrast concentration and vessel morphology across frames. The purpose of this study is to propose TCA-Net, a deep learning model that improves segmentation consistency by leveraging local and global contextual information in angiography videos. Our approach utilizes a novel deep learning framework that incorporates two key modules: a local temporal window vessel enhancement module and a global vessel refinement module (GVR). The local module fuses multi-scale temporal-spatial features to improve the semantic representation of vessels in the current frame, while the GVR module integrates decoupled attention strategies (video-level and object-level attention) and gating mechanisms to refine global vessel information and eliminate redundancy. To further improve segmentation consistency, a temporal perception consistency loss function is introduced during training. We evaluated our model using 195 renal artery angiography sequences for development and tested it on an external dataset from 44 patients. The results demonstrate that TCA-Net achieves an F1-score of 0.8678 for segmenting renal arteries, outperforming existing state-of-the-art segmentation methods. We present TCA-Net, a deep learning-based model that significantly improves segmentation consistency for renal artery angiography videos. By effectively leveraging both local and global temporal contextual information, TCA-Net outperforms current methods and provides a reliable tool for assessing RDN procedures.
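
The abstract does not define the temporal perception consistency loss, but a generic consistency term of the kind often used for video segmentation simply penalizes frame-to-frame changes in the predicted vessel maps. A minimal stand-in (PyTorch), not TCA-Net's actual loss:

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(pred_seq):
    """Penalize frame-to-frame changes in predicted vessel probability maps.
    pred_seq: (batch, frames, H, W) probabilities in [0, 1]."""
    return F.l1_loss(pred_seq[:, 1:], pred_seq[:, :-1])
```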

Enhanced stroke risk prediction in hypertensive patients through deep learning integration of imaging and clinical data.

Li H, Zhang T, Han G, Huang Z, Xiao H, Ni Y, Liu B, Lin W, Lin Y

PubMed · Jul 31, 2025
Stroke is one of the leading causes of death and disability worldwide, with a significantly elevated incidence among individuals with hypertension. Conventional risk assessment methods primarily rely on a limited set of clinical parameters and often exclude imaging-derived structural features, resulting in suboptimal predictive accuracy. This study aimed to develop a deep learning-based multimodal stroke risk prediction model by integrating carotid ultrasound imaging with multidimensional clinical data to enable precise identification of high-risk individuals among hypertensive patients. A total of 2,176 carotid artery ultrasound images from 1,088 hypertensive patients were collected. ResNet50 was employed to automatically segment the carotid intima-media and extract key structural features. These imaging features, along with clinical variables such as age, blood pressure, and smoking history, were fused using a Vision Transformer (ViT) and fed into a Radial Basis Probabilistic Neural Network (RBPNN) for risk stratification. The model's performance was systematically evaluated using metrics including AUC, Dice coefficient, IoU, and Precision-Recall curves. The proposed multimodal fusion model achieved outstanding performance on the test set, with an AUC of 0.97, a Dice coefficient of 0.90, and an IoU of 0.80. Ablation studies demonstrated that the inclusion of ViT and RBPNN modules significantly enhanced predictive accuracy. Subgroup analysis further confirmed the model's robust performance in high-risk populations, such as those with diabetes or smoking history. The deep learning-based multimodal fusion model effectively integrates carotid ultrasound imaging and clinical features, significantly improving the accuracy of stroke risk prediction in hypertensive patients. The model demonstrates strong generalizability and clinical application potential, offering a valuable tool for early screening and personalized intervention planning for stroke prevention.
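
A rough sketch of the described fusion pipeline is given below: a ResNet50 backbone encodes the ultrasound image, the resulting embedding and the projected clinical variables are fused as tokens by a small transformer encoder, and a classification head outputs the risk. The RBPNN head is replaced here by a plain linear classifier and all dimensions are illustrative; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as tvm

class MultimodalStrokeRisk(nn.Module):
    def __init__(self, n_clinical=8, d=256):
        super().__init__()
        backbone = tvm.resnet50(weights=None)
        backbone.fc = nn.Identity()                 # 2048-dim image embedding
        self.backbone = backbone
        self.img_proj = nn.Linear(2048, d)
        self.clin_proj = nn.Linear(n_clinical, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, 2)                 # high-risk vs. low-risk

    def forward(self, image, clinical):             # image: (B,3,H,W), clinical: (B,n_clinical)
        img_tok = self.img_proj(self.backbone(image)).unsqueeze(1)
        clin_tok = self.clin_proj(clinical).unsqueeze(1)
        fused = self.fusion(torch.cat([img_tok, clin_tok], dim=1))
        return self.head(fused.mean(dim=1))
```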

Wall Shear Stress Estimation in Abdominal Aortic Aneurysms: Towards Generalisable Neural Surrogate Models

Patryk Rygiel, Julian Suk, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink

arXiv preprint · Jul 30, 2025
Abdominal aortic aneurysms (AAAs) are pathologic dilatations of the abdominal aorta posing a high fatality risk upon rupture. Studying AAA progression and rupture risk often involves in-silico blood flow modelling with computational fluid dynamics (CFD) and extraction of hemodynamic factors like time-averaged wall shear stress (TAWSS) or oscillatory shear index (OSI). However, CFD simulations are known to be computationally demanding. Hence, in recent years, geometric deep learning methods, operating directly on 3D shapes, have been proposed as compelling surrogates, estimating hemodynamic parameters in just a few seconds. In this work, we propose a geometric deep learning approach to estimating hemodynamics in AAA patients, and study its generalisability to common factors of real-world variation. We propose an E(3)-equivariant deep learning model utilising novel robust geometrical descriptors and projective geometric algebra. Our model is trained to estimate transient WSS using a dataset of CT scans of 100 AAA patients, from which lumen geometries are extracted and reference CFD simulations with varying boundary conditions are obtained. Results show that the model generalizes well within the distribution, as well as to the external test set. Moreover, the model can accurately estimate hemodynamics across geometry remodelling and changes in boundary conditions. Furthermore, we find that a trained model can be applied to different artery tree topologies, where new and unseen branches are added during inference. Finally, we find that the model is to a large extent agnostic to mesh resolution. These results show the accuracy and generalisation of the proposed model, and highlight its potential to contribute to hemodynamic parameter estimation in clinical practice.
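
The two hemodynamic targets named here have standard definitions: TAWSS is the time average of the WSS magnitude over the cardiac cycle, and OSI measures how much the WSS vector changes direction, OSI = 0.5·(1 - |∫τ dt| / ∫|τ| dt). A NumPy sketch for computing both from a sampled transient WSS field (rectangle-rule integration, for illustration only):

```python
import numpy as np

def tawss_osi(wss_t, dt):
    """wss_t: (timesteps, n_points, 3) WSS vectors over one cardiac cycle
    sampled at interval dt. Returns per-point TAWSS and OSI."""
    mag = np.linalg.norm(wss_t, axis=-1)                       # |τ| per timestep and point
    mag_int = mag.sum(axis=0) * dt                             # ∫|τ| dt
    vec_int = np.linalg.norm(wss_t.sum(axis=0) * dt, axis=-1)  # |∫τ dt|
    T = wss_t.shape[0] * dt
    tawss = mag_int / T
    osi = 0.5 * (1.0 - vec_int / mag_int)
    return tawss, osi
```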

Implicit Spatiotemporal Bandwidth Enhancement Filter by Sine-activated Deep Learning Model for Fast 3D Photoacoustic Tomography

I Gede Eka Sulistyawan, Takuro Ishii, Riku Suzuki, Yoshifumi Saijo

arXiv preprint · Jul 28, 2025
3D photoacoustic tomography (3D-PAT) using high-frequency hemispherical transducers offers near-omnidirectional reception and enhanced sensitivity to the finer structural details encoded in the high-frequency components of the broadband photoacoustic (PA) signal. However, practical constraints such as a limited number of channels and a bandlimited sampling rate often result in sparse and bandlimited sensors that degrade image quality. To address this, we revisit the 2D deep learning (DL) approach applied directly to sensor-wise PA radio-frequency (PARF) data. Specifically, we introduce sine activation into the DL model to restore the broadband nature of PARF signals given the observed band-limited and high-frequency PARF data. Given the scarcity of 3D training data, we employ simplified training strategies by simulating random spherical absorbers. This combination of a sine-activated model and randomized training is designed to emphasize bandwidth learning over dataset memorization. Our model was evaluated on a leaf skeleton phantom, a micro-CT-verified 3D spiral phantom, and in-vivo human palm vasculature. The results showed that the proposed training mechanism for the sine-activated model generalized well across the different tests by effectively increasing the sensor density and recovering the spatiotemporal bandwidth. Qualitatively, the sine-activated model uniquely enhanced high-frequency content, producing clearer vascular structures with fewer artefacts. Quantitatively, the sine-activated model exhibits full bandwidth at the -12 dB spectrum level and significantly higher contrast-to-noise ratio with minimal loss of structural similarity index. Lastly, we optimized our approach to enable fast enhanced 3D-PAT at 2 volumes-per-second for better practical imaging of free-moving targets.
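
Sine activations of the kind introduced here are usually implemented as SIREN-style layers, where a linear layer is followed by sin(ω0·x). A minimal PyTorch sketch is below; the ω0 value of 30 is the common choice in the SIREN literature and is an assumption, not a detail from the paper.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """SIREN-style sine-activated linear layer. omega_0 scales the input
    frequency, which helps the network represent broadband signal content."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Such layers can replace ReLU blocks in a 1D model operating on
# sensor-wise PA radio-frequency traces, e.g.:
model = nn.Sequential(SineLayer(1024, 512), SineLayer(512, 1024))
```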