Page 1 of 13 results

Application of a pulmonary nodule detection program using AI technology to ultra-low-dose CT: differences in detection ability among various image reconstruction methods.

Tsuchiya N, Kobayashi S, Nakachi R, Tomori Y, Yogi A, Iida G, Ito J, Nishie A

PubMed · May 9, 2025
This study aimed to investigate the performance of an artificial intelligence (AI)-based lung nodule detection program in ultra-low-dose CT (ULDCT) imaging, with a focus on the influence of various image reconstruction methods on detection accuracy. A chest phantom embedded with artificial lung nodules (solid and ground-glass nodules [GGNs]; diameters: 12 mm, 8 mm, 5 mm, and 3 mm) was scanned using six combinations of tube currents (160 mA, 80 mA, and 10 mA) and voltages (120 kV and 80 kV) on a Canon Aquilion One CT scanner. Images were reconstructed using filtered back projection (FBP), hybrid iterative reconstruction (HIR), model-based iterative reconstruction (MBIR), and deep learning reconstruction (DLR). Nodule detection was performed using an AI-based lung nodule detection program, and performance metrics were analyzed across different reconstruction methods and radiation dose protocols. At the lowest dose protocol (80 kV, 10 mA), FBP showed a 0% detection rate for all nodule sizes. HIR and DLR consistently achieved 100% detection rates for solid nodules ≥ 5 mm and GGNs ≥ 8 mm. No method detected 3 mm GGNs under any protocol. DLR demonstrated the highest detection rates, even under ultra-low-dose settings, while maintaining high image quality. AI-based lung nodule detection in ULDCT is strongly dependent on the choice of image reconstruction method.
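The study's core comparison is a detection rate per reconstruction method and dose protocol. A minimal sketch of how such rates could be tabulated (the helper and the sample records below are illustrative, not the paper's raw data):

```python
# Hypothetical sketch: tabulating AI nodule detection rates per
# reconstruction method and dose protocol, as in the phantom study.
from collections import defaultdict

def detection_rates(results):
    """results: iterable of (method, protocol, detected) tuples.
    Returns {(method, protocol): fraction of nodules detected}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for method, protocol, detected in results:
        totals[(method, protocol)] += 1
        hits[(method, protocol)] += int(detected)
    return {key: hits[key] / totals[key] for key in totals}

# Illustrative records mirroring the reported extremes: FBP detects
# nothing at 80 kV / 10 mA, while DLR detects both sample nodules.
sample = [
    ("FBP", "80kV/10mA", False),
    ("FBP", "80kV/10mA", False),
    ("DLR", "80kV/10mA", True),
    ("DLR", "80kV/10mA", True),
]
rates = detection_rates(sample)
```

Grouping by (method, protocol) keeps the tabulation generic, so the same helper covers all six dose combinations and four reconstruction methods.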

A diffusion-stimulated CT-US registration model with self-supervised learning and synthetic-to-real domain adaptation.

Li S, Jia B, Huang W, Zhang X, Zhou W, Wang C, Teng G

PubMed · May 8, 2025
In abdominal interventional procedures, achieving precise registration of 2D ultrasound (US) frames with 3D computed tomography (CT) scans presents a significant challenge. Traditional tracking methods often rely on high-precision sensors, which can be prohibitively expensive. Furthermore, the clinical need for real-time registration with a broad capture range frequently exceeds the performance of standard image-based optimization techniques. Current automatic registration methods that utilize deep learning are either heavily reliant on manual annotations for training or struggle to effectively bridge the gap between different imaging domains. To address these challenges, we propose a novel diffusion-stimulated CT-US registration model. This model harnesses the physical diffusion properties of US to generate synthetic US images from preoperative CT data. Additionally, we introduce a synthetic-to-real domain adaptation strategy using a diffusion model to mitigate the discrepancies between real and synthetic US images. A dual-stream self-supervised regression neural network, trained on these synthetic images, is then used to estimate the pose within the CT space. The effectiveness of our proposed approach is verified through validation using US and CT scans from a dual-modality human abdominal phantom. The results of our experiments confirm that our method can accurately initialize the US image pose within an acceptable range of error and subsequently refine it to achieve precise alignment. This enables real-time, tracker-independent, and robust rigid registration of CT and US images.
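The regression network's output is a rigid pose locating the US frame in CT space. A minimal sketch, assuming a common 6-DoF parameterization (three Euler angles plus a translation; the paper does not specify its convention, so this is an illustration rather than the authors' exact formulation):

```python
# Hypothetical sketch: converting a regressed 6-DoF pose
# (Euler angles in radians, translation in mm) into the 4x4
# rigid transform that places a 2D US frame in 3D CT space.
import numpy as np

def pose_to_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous rigid transform from Euler angles
    (applied Z*Y*X) and a translation vector."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# A zero pose maps the US frame onto the CT origin unchanged.
identity = pose_to_matrix(0, 0, 0, 0, 0, 0)
```

An initial pose estimated this way can then be handed to a conventional intensity-based optimizer for the refinement step the abstract describes.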

Automated Bi-Ventricular Segmentation and Regional Cardiac Wall Motion Analysis for Rat Models of Pulmonary Hypertension.

Niglas M, Baxan N, Ashek A, Zhao L, Duan J, O'Regan D, Dawes TJW, Nien-Chen C, Xie C, Bai W, Zhao L

PubMed · April 1, 2025
Artificial intelligence-based cardiac motion mapping offers predictive insights into pulmonary hypertension (PH) disease progression and its impact on the heart. We proposed an automated deep learning pipeline for bi-ventricular segmentation and 3D wall motion analysis in PH rodent models, bridging toward clinical developments. A dataset of 163 short-axis cine cardiac magnetic resonance scans was collected longitudinally from monocrotaline (MCT) and Sugen-hypoxia (SuHx) PH rats and used to train a fully convolutional network for automated segmentation. The model produced an accurate annotation in < 1 s per scan (Dice metric > 0.92). High-resolution atlas fitting was performed to produce 3D cardiac mesh models and calculate regional wall motion between end-diastole and end-systole. Prominent right ventricular hypokinesia was observed in PH rats (-37.7% ± 12.2 MCT; -38.6% ± 6.9 SuHx) compared with healthy controls, attributed primarily to the loss of basal longitudinal and apical radial motion. This automated, rat-specific bi-ventricular pipeline provides an efficient and novel translational tool for rodent studies, in alignment with clinical cardiac imaging AI developments.
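The segmentation quality here is reported as a Dice metric (> 0.92). The Dice coefficient itself is standard; a minimal sketch of how it would be computed between a predicted and a reference binary mask:

```python
# Sketch of the Dice similarity coefficient used to score the
# automated segmentations against reference masks.
import numpy as np

def dice(pred, ref):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

perfect = dice(np.ones((4, 4)), np.ones((4, 4)))  # 1.0 for identical masks
```

A Dice value above 0.92, as reported, indicates near-complete overlap between the network's bi-ventricular masks and the manual annotations.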