Page 3 of 323 results

A diffusion-stimulated CT-US registration model with self-supervised learning and synthetic-to-real domain adaptation.

Li S, Jia B, Huang W, Zhang X, Zhou W, Wang C, Teng G

PubMed · May 8, 2025
In abdominal interventional procedures, achieving precise registration of 2D ultrasound (US) frames with 3D computed tomography (CT) scans presents a significant challenge. Traditional tracking methods often rely on high-precision sensors, which can be prohibitively expensive. Furthermore, the clinical need for real-time registration with a broad capture range frequently exceeds the performance of standard image-based optimization techniques. Current automatic registration methods that utilize deep learning are either heavily reliant on manual annotations for training or struggle to effectively bridge the gap between different imaging domains. To address these challenges, we propose a novel diffusion-stimulated CT-US registration model. This model harnesses the physical diffusion properties of US to generate synthetic US images from preoperative CT data. Additionally, we introduce a synthetic-to-real domain adaptation strategy using a diffusion model to mitigate the discrepancies between real and synthetic US images. A dual-stream self-supervised regression neural network, trained on these synthetic images, is then used to estimate the pose within the CT space. The effectiveness of our proposed approach is verified through validation using US and CT scans from a dual-modality human abdominal phantom. The results of our experiments confirm that our method can accurately initialize the US image pose within an acceptable range of error and subsequently refine it to achieve precise alignment. This enables real-time, tracker-independent, and robust rigid registration of CT and US images.
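The geometric core of the task above is estimating a 6-DoF rigid pose that places the 2D US frame inside the 3D CT volume. As a rough, self-contained illustration of what applying such a pose means (a nearest-neighbour plane resampler in NumPy; this is an assumed sketch, not the authors' network-based estimator, and the `slice_volume` helper and its voxel-space pose convention are inventions for this example):

```python
import numpy as np

def euler_to_rot(rx, ry, rz):
    """Rotation matrix from Z-Y-X intrinsic Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def slice_volume(vol, pose, plane_shape):
    """Resample a 2D plane from a 3D volume at a 6-DoF rigid pose.

    vol is indexed [x, y, z]; pose = (tx, ty, tz, rx, ry, rz) with
    translation in voxels and rotation about the volume centre.
    Nearest-neighbour sampling; out-of-volume pixels are left at 0.
    """
    t = np.asarray(pose[:3], dtype=float)
    R = euler_to_rot(*pose[3:])
    h, w = plane_shape
    centre = (np.array(vol.shape, dtype=float) - 1) / 2
    # In-plane pixel grid, centred on the origin in the z = 0 plane.
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs - (w - 1) / 2, ys - (h - 1) / 2,
                    np.zeros_like(xs)], axis=-1).reshape(-1, 3)
    # Rigidly map plane coordinates into CT voxel space.
    vox = pts @ R.T + centre + t
    idx = np.floor(vox + 0.5).astype(int)          # nearest voxel
    inside = np.all((idx >= 0) & (idx < np.array(vol.shape)), axis=1)
    out = np.zeros(h * w, dtype=vol.dtype)
    good = idx[inside]
    out[inside] = vol[good[:, 0], good[:, 1], good[:, 2]]
    return out.reshape(h, w)
```

In the paper's pipeline the pose would come from the dual-stream regression network trained on diffusion-synthesized US images; here a given pose is simply applied to extract the corresponding oblique CT plane.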

Automated Bi-Ventricular Segmentation and Regional Cardiac Wall Motion Analysis for Rat Models of Pulmonary Hypertension.

Niglas M, Baxan N, Ashek A, Zhao L, Duan J, O'Regan D, Dawes TJW, Nien-Chen C, Xie C, Bai W, Zhao L

PubMed · Apr 1, 2025
Artificial intelligence-based cardiac motion mapping offers predictive insights into pulmonary hypertension (PH) disease progression and its impact on the heart. We propose an automated deep learning pipeline for bi-ventricular segmentation and 3D wall motion analysis in PH rodent models, bridging preclinical studies to clinical imaging developments. A data set of 163 short-axis cine cardiac magnetic resonance scans was collected longitudinally from monocrotaline (MCT) and Sugen-hypoxia (SuHx) PH rats and used to train a fully convolutional network for automated segmentation. The model produced an accurate annotation in < 1 s per scan (Dice metric > 0.92). High-resolution atlas fitting was performed to produce 3D cardiac mesh models and calculate the regional wall motion between end-diastole and end-systole. Prominent right ventricular hypokinesia was observed in PH rats (-37.7% ± 12.2 MCT; -38.6% ± 6.9 SuHx) compared to healthy controls, attributed primarily to the loss of basal longitudinal and apical radial motion. This automated, rat-specific bi-ventricular pipeline provides an efficient and novel translational tool for rodent studies, in alignment with clinical cardiac imaging AI developments.
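The two quantities reported above are both simple to state: the Dice metric is an overlap score between predicted and reference label maps, and regional wall motion reduces to per-vertex displacement between the end-diastolic and end-systolic meshes. A minimal sketch (the `dice` and `regional_motion` helpers are illustrative, not the authors' code):

```python
import numpy as np

def dice(pred, gt, label):
    """Dice overlap for one structure (e.g. LV or RV) in two label maps."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

def regional_motion(mesh_ed, mesh_es):
    """Per-vertex wall displacement (in mesh units) between the
    end-diastolic and end-systolic meshes, each an (N, 3) vertex array."""
    return np.linalg.norm(mesh_es - mesh_ed, axis=1)
```

A Dice of 1.0 means perfect overlap; the paper's > 0.92 per scan indicates near-reference automated contours. The hypokinesia percentages quoted in the abstract are reductions of such regional motion relative to healthy controls.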

Metal artifact reduction combined with deep learning image reconstruction algorithm for CT image quality optimization: a phantom study.

Zou H, Wang Z, Guo M, Peng K, Zhou J, Zhou L, Fan B

PubMed · Jan 1, 2025
This study aimed to evaluate the effects of the smart metal artifact reduction (MAR) algorithm and combinations of various scanning parameters, including radiation dose levels, tube voltage, and reconstruction algorithms, on metal artifact reduction and overall image quality, to identify the optimal protocol for clinical application. A phantom with a pacemaker was examined using standard dose (effective dose (ED): 3 mSv) and low dose (ED: 0.5 mSv), with three scan voltages (70, 100, and 120 kVp) selected for each dose. Raw data were reconstructed using 50% adaptive statistical iterative reconstruction-V (ASIR-V), ASIR-V with MAR, high-strength deep learning image reconstruction (DLIR-H), and DLIR-H with MAR. Quantitative analyses (artifact index (AI), noise, signal-to-noise ratio (SNR) of artifact-impaired pulmonary nodules (PNs), and noise power spectrum (NPS) of artifact-free regions) and qualitative evaluation were performed. Quantitatively, the deep learning image reconstruction (DLIR) algorithm or high tube voltages exhibited lower noise compared to ASIR-V or low tube voltages (<i>p</i> < 0.001). AI of images with MAR or high tube voltages was significantly lower than that of images without MAR or low tube voltages (<i>p</i> < 0.001). No significant difference was observed in AI between low-dose images with 120 kVp DLIR-H MAR and standard-dose images with 70 kVp ASIR-V MAR (<i>p</i> = 0.143). Only the 70 kVp 3 mSv protocol demonstrated statistically significant differences in SNR for artifact-impaired PNs (<i>p</i> = 0.041). The f<sub>peak</sub> and f<sub>avg</sub> values were similar across various scenarios, indicating that the MAR algorithm did not alter the image texture in artifact-free regions. The qualitative results for the extent of metal artifacts, the confidence in diagnosing artifact-impaired PNs, and the overall image quality were generally consistent with the quantitative results.
The MAR algorithm combined with DLIR-H can reduce metal artifacts and enhance the overall image quality, particularly at high kVp tube voltages.
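The quantitative measures in this abstract follow standard CT image-quality conventions; a minimal sketch assuming the common formulations SNR = mean/SD of an ROI and AI = sqrt(SD²_artifact − SD²_reference) (the abstract does not spell out its exact formulas, so these definitions are assumptions):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean attenuation over its SD."""
    return float(np.mean(roi) / np.std(roi))

def artifact_index(artifact_roi, reference_roi):
    """Artifact index under one common CT definition:
    AI = sqrt(max(SD_artifact^2 - SD_reference^2, 0)),
    i.e. the excess noise in an artifact-impaired ROI over an
    artifact-free reference ROI at the same settings."""
    diff = np.var(artifact_roi) - np.var(reference_roi)
    return float(np.sqrt(max(diff, 0.0)))
```

Under these definitions, a lower AI means the MAR algorithm brought the artifact region's noise closer to that of artifact-free tissue, which is what the abstract reports for MAR and high tube voltages.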
