Page 12 of 139 · 1390 results

Incidental Cardiovascular Findings in Lung Cancer Screening and Noncontrast Chest Computed Tomography.

Cham MD, Shemesh J

PubMed · Sep 24 2025
While the primary goal of lung cancer screening CT is to detect early-stage lung cancer in high-risk populations, it often reveals asymptomatic cardiovascular abnormalities that can be clinically significant. These findings include coronary artery calcifications (CACs), myocardial pathologies, cardiac chamber enlargement, valvular lesions, and vascular disease. CAC, a marker of subclinical atherosclerosis, is particularly emphasized due to its strong predictive value for cardiovascular events and mortality. Guidelines recommend qualitative or quantitative CAC scoring on all noncontrast chest CTs. Other actionable findings include aortic aneurysms, pericardial disease, and myocardial pathology, some of which may indicate past or impending cardiac events. This article explores the wide range of incidental cardiovascular findings detectable during low-dose CT (LDCT) scans for lung cancer screening, as well as noncontrast chest CT scans. Distinguishing which findings warrant further evaluation is essential to avoid overdiagnosis, unnecessary anxiety, and resource misuse. The article advocates for a structured approach to follow-up based on the clinical significance of each finding and the patient's overall risk profile. It also notes the rising role of artificial intelligence in automatically detecting and quantifying these abnormalities, potentiating early behavioral modification or medical and surgical interventions. Ultimately, this piece highlights the opportunity to reframe LDCT as a comprehensive cardiothoracic screening tool.

Dose reduction in radiotherapy treatment planning CT via deep learning-based reconstruction: a single‑institution study.

Yasui K, Kasugai Y, Morishita M, Saito Y, Shimizu H, Uezono H, Hayashi N

PubMed · Sep 24 2025
To quantify radiation dose reduction in radiotherapy treatment-planning CT (RTCT) using a deep learning-based reconstruction (DLR; AiCE) algorithm compared with adaptive iterative dose reduction (IR; AIDR), and to evaluate its potential to inform RTCT-specific diagnostic reference levels (DRLs). In this single-institution retrospective study, RTCT scans of four sites (head, head and neck, lung, and pelvis) were acquired on a large-bore CT. Scans reconstructed with IR (n = 820) and DLR (n = 854) were compared. The 75th-percentile CTDIvol and DLP (CTDI_IR, DLP_IR vs. CTDI_DLR, DLP_DLR) were determined per site. Dose reduction rates were calculated as (CTDI_DLR - CTDI_IR)/CTDI_IR × 100%, and similarly for DLP. Statistical significance was assessed by the Mann-Whitney U-test. DLR yielded CTDIvol reductions of 30.4-75.4% and DLP reductions of 23.1-73.5% across sites (p < 0.001), with the greatest reductions in head and neck RTCT (CTDIvol: 75.4%; DLP: 73.5%). Variability also narrowed. Compared with published national DRLs, DLR achieved CTDIvol values 34.8 mGy and 18.8 mGy lower for head and neck versus UK DRLs and Japanese multi-institutional data, respectively. DLR substantially lowers RTCT dose indices, providing quantitative data to guide RTCT-specific DRLs and optimize clinical workflows.
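The study's dose-reduction metric and the 75th-percentile convention used for DRLs can be sketched as follows; the CTDIvol samples below are hypothetical, not the study's data:

```python
import numpy as np

def percentile_75(values):
    """75th-percentile dose index, the convention used for diagnostic
    reference levels (DRLs)."""
    return float(np.percentile(values, 75))

def reduction_rate(dose_dlr, dose_ir):
    """Dose reduction rate as defined in the abstract:
    (CTDI_DLR - CTDI_IR) / CTDI_IR * 100 (negative means DLR is lower)."""
    return (dose_dlr - dose_ir) / dose_ir * 100.0

# Hypothetical CTDIvol samples (mGy) for one anatomical site.
ctdi_ir = np.array([60.0, 62.0, 58.0, 65.0, 61.0])
ctdi_dlr = np.array([15.0, 16.5, 14.8, 17.0, 15.5])

p75_ir = percentile_75(ctdi_ir)
p75_dlr = percentile_75(ctdi_dlr)
rate = reduction_rate(p75_dlr, p75_ir)
print(p75_ir, p75_dlr, round(rate, 1))  # 62.0 16.5 -73.4
```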

Photon-counting detector computed tomography in thoracic oncology: revolutionizing tumor imaging through precision and detail.

Yanagawa M, Ueno M, Ito R, Ueda D, Saida T, Kurokawa R, Takumi K, Nishioka K, Sugawara S, Ide S, Honda M, Iima M, Kawamura M, Sakata A, Sofue K, Oda S, Watabe T, Hirata K, Naganawa S

PubMed · Sep 24 2025
Photon-counting detector computed tomography (PCD-CT) is an emerging imaging technology that promises to overcome the limitations of conventional energy-integrating detector (EID)-CT, particularly in thoracic oncology. This narrative review summarizes technical advances and clinical applications of PCD-CT in the thorax with emphasis on spatial resolution, dose-image-quality balance, and intrinsic spectral imaging, and it outlines practical implications relevant to thoracic oncology. A literature review of PubMed through May 31, 2025, was conducted using combinations of "photon counting," "computed tomography," "thoracic oncology," and "artificial intelligence." We screened the retrieved records and included studies with direct relevance to lung and mediastinal tumors, image quality, radiation dose, spectral/iodine imaging, or artificial intelligence-based reconstruction; case reports, editorials, and animal-only or purely methodological reports were excluded. PCD-CT demonstrated superior spatial resolution compared with EID-CT, enabling clearer visualization of fine pulmonary structures, such as bronchioles and subsolid nodules; slice thicknesses of approximately 0.4 mm and ex vivo resolvable structures approaching 0.11 mm have been reported. Across intraindividual clinical comparisons, radiation-dose reductions of 16%-43% have been achieved while maintaining or improving diagnostic image quality. Intrinsic spectral imaging enables accurate iodine mapping and low-keV virtual monoenergetic images and has shown quantitative advantages versus dual-energy CT in phantoms and early clinical work. Artificial intelligence-based deep-learning reconstruction and super-resolution can complement detector capabilities to reduce noise and stabilize fine-structure depiction without increasing dose.
Potential reductions in contrast volume are biologically plausible given improved low-keV contrast-to-noise ratio, although clinical dose-finding data remain limited, and routine K-edge imaging has not yet translated to clinical thoracic practice. In conclusion, PCD-CT provides higher spatial and spectral fidelity at lower or comparable doses, supporting earlier and more precise tumor detection and characterization; future work should prioritize outcome-oriented trials, protocol harmonization, and implementation studies aligned with "Green Radiology".

An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation

Kwang-Hyun Uhm, Hyunjun Cho, Sung-Hoo Hong, Seung-Won Jung

arXiv preprint · Sep 24 2025
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volume has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation by fully utilizing the anisotropic nature of 3D CT volume. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
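A minimal sketch of the multi-reference non-local attention idea described above, assuming flattened feature maps and scaled dot-product similarity; this is an illustration of the mechanism, not the authors' implementation (which is available at the linked repository):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_reference_nonlocal_attention(query, references):
    """Toy multi-reference non-local attention.

    query:      (Nq, C) through-plane feature vectors (low inter-slice resolution)
    references: list of (Nr_i, C) flattened in-plane feature maps (high resolution)
    Returns (Nq, C): each query position aggregates features from ALL positions
    of ALL references, weighted by dot-product similarity, transferring
    in-plane texture detail to the through-plane reconstruction.
    """
    keys = np.concatenate(references, axis=0)           # (sum Nr, C)
    scores = query @ keys.T / np.sqrt(query.shape[1])   # scaled dot-product
    weights = softmax(scores, axis=1)                   # attention over references
    return weights @ keys

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))                 # 4 through-plane positions
refs = [rng.standard_normal((6, 8)),            # two in-plane reference maps
        rng.standard_normal((5, 8))]
out = multi_reference_nonlocal_attention(q, refs)
print(out.shape)  # (4, 8)
```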

Artificial Intelligence Chest CT Imaging for the Diagnosis of Tuberculosis-Destroyed Lung with PH.

Yu W, Liu M, Qin W, Liu J, Chen S, Chen Y, Hu B, Chen Y, Liu E, Jin X, Liu S, Li C, Zhu Z

PubMed · Sep 24 2025
To explore the clinical characteristics of tuberculosis-destroyed lung (TDL) with pulmonary hypertension (PH), and to use artificial intelligence (AI) chest CT imaging for the diagnosis of PH in TDL patients. Fifty-one TDL patients were included. Based on the results of right heart catheterization, patients were divided into two groups: TDL with PH (n=31) and TDL without PH (n=20). The original chest CT data were reconstructed, segmented, and rendered using AI, and lung volume-related data were calculated. Clinical data, hemodynamic data, and lung volume-related data were compared between the two groups. The proportion of patients with PH was significantly higher among TDL patients than among those without TDL (61.82% vs. 22.64%, P<0.01). There were significant differences between the two groups in pulmonary function, PCWP/PVR, PASP/TRV, and total volume of destroyed lung tissue (V_TDLT) (P<0.05), and V_TDLT was positively correlated with mean pulmonary arterial pressure (mPAP). For the combined diagnosis (V_TDLT + PASP), the area under the ROC curve (AUC) was 0.917 (95% CI: 0.802-1), with a predicted probability cutoff of 0.51 and a Youden index of 0.789; sensitivity was 90% and specificity was 88.9%. TDL accompanied by pulmonary hypertension is associated with restrictive ventilatory impairment. V_TDLT is positively correlated with mPAP; calculating V_TDLT and combining it with echocardiography-estimated PASP assists in the diagnosis of PH in these patients.
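The reported Youden index follows directly from the reported sensitivity and specificity:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

# Values reported in the abstract for the combined V_TDLT + PASP model.
j = youden_index(0.90, 0.889)
print(round(j, 3))  # 0.789
```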

Scan-do Attitude: Towards Autonomous CT Protocol Management using a Large Language Model Agent

Xingjian Kang, Linda Vorberg, Andreas Maier, Alexander Katzmann, Oliver Taubmann

arXiv preprint · Sep 24 2025
Managing scan protocols in Computed Tomography (CT), which includes adjusting acquisition parameters, configuring reconstructions, and selecting postprocessing tools in a patient-specific manner, is time-consuming and requires clinical as well as technical expertise. At the same time, there is a growing shortage of skilled workforce in radiology. To address this issue, a Large Language Model (LLM)-based agent framework is proposed to assist with the interpretation and execution of protocol configuration requests given in natural language or a structured, device-independent format, aiming to improve workflow efficiency and reduce technologists' workload. The agent combines in-context learning, instruction following, and structured tool-calling abilities to identify relevant protocol elements and apply accurate modifications. In a systematic evaluation, experimental results indicate that the agent can effectively retrieve protocol components, generate device-compatible protocol definition files, and faithfully implement user requests. Despite demonstrating feasibility in principle, the approach faces limitations in syntactic and semantic validity due to the lack of a unified device API, and challenges with ambiguous or complex requests. In summary, the findings show a clear path towards LLM-based agents for supporting scan protocol management in CT imaging.
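A hypothetical sketch of what structured tool-calling for protocol edits might look like; the tool name, JSON schema, and parameter names below are invented for illustration and are not taken from the paper:

```python
import json

# Hypothetical tool schema a protocol-editing agent could expose to an LLM.
SET_PARAMETER_TOOL = {
    "name": "set_acquisition_parameter",
    "description": "Modify one acquisition parameter of the active CT protocol.",
    "parameters": {
        "type": "object",
        "properties": {
            "parameter": {"type": "string",
                          "enum": ["kVp", "mAs", "pitch", "slice_thickness_mm"]},
            "value": {"type": "number"},
        },
        "required": ["parameter", "value"],
    },
}

def apply_tool_call(protocol, call):
    """Apply a structured tool call (as an LLM would emit it) to a protocol dict."""
    args = call["arguments"]
    if isinstance(args, str):          # many APIs return arguments as a JSON string
        args = json.loads(args)
    protocol[args["parameter"]] = args["value"]
    return protocol

protocol = {"kVp": 120, "mAs": 200}
call = {"name": "set_acquisition_parameter",
        "arguments": '{"parameter": "kVp", "value": 100}'}
print(apply_tool_call(protocol, call))  # {'kVp': 100, 'mAs': 200}
```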

Deep Learning-based Automated Detection of Pulmonary Embolism: Is It Reliable?

Babacan Ö, Karkaş AY, Durak G, Uysal E, Durak Ü, Shrestha R, Bingöl Z, Okumuş G, Medetalibeyoğlu A, Ertürk ŞM

PubMed · Sep 24 2025
To assess the diagnostic accuracy and clinical applicability of the artificial intelligence (AI) program "Canon Automation Platform" for the automated detection and localization of pulmonary embolisms (PEs) in chest computed tomography pulmonary angiograms (CTPAs). A total of 1474 CTPAs suspected of PEs were retrospectively evaluated by 2 senior radiology residents with 5 years of experience. The final diagnosis was verified through radiology reports by 2 thoracic radiologists with 20 and 25 years of experience, along with the patients' clinical records and histories. The images were transferred to the Canon Automation Platform, which integrates with the picture archiving and communication system (PACS), and the diagnostic success of the platform was evaluated. This study examined all anatomic levels of the pulmonary arteries, including the left pulmonary artery, right pulmonary artery, and interlobar, segmental, and subsegmental branches. The confusion matrix data obtained at all anatomic levels considered in our study were as follows: AUC-ROC score of 0.945 to 0.996, accuracy of 95.4% to 99.7%, sensitivity of 81.4% to 99.1%, specificity of 98.7% to 100%, PPV of 89.1% to 100%, NPV of 95.6% to 99.9%, F1 score of 0.868 to 0.987, and Cohen Kappa of 0.842 to 0.986. Notably, sensitivity in the subsegmental branches was lower (81.4% to 84.7%) compared with more central locations, whereas specificity remained consistent (98.7% to 98.9%). The results showed that the chest pain package of the Canon Automation Platform accurately provides rapid automatic PE detection in chest CTPAs by leveraging deep learning algorithms to facilitate the clinical workflow. This study demonstrates that AI can provide physicians with robust diagnostic support for acute PE, particularly in hospitals without 24/7 access to radiology specialists.
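All of the reported metrics derive from a single confusion matrix per anatomic level; a minimal sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used in the abstract."""
    sens = tp / (tp + fn)                       # sensitivity (recall)
    spec = tn / (tn + fp)                       # specificity
    ppv = tp / (tp + fp)                        # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    n = tp + fp + tn + fn
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (acc - p_chance) / (1 - p_chance)
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "f1": f1, "kappa": kappa}

# Illustrative counts for one anatomic level (hypothetical, not from the study).
m = diagnostic_metrics(tp=90, fp=10, tn=880, fn=20)
print({k: round(v, 3) for k, v in m.items()})
```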

3D CoAt U SegNet-enhanced deep learning framework for accurate segmentation of acute ischemic stroke lesions from non-contrast CT scans.

Nag MK, Sadhu AK, Das S, Kumar C, Choudhary S

PubMed · Sep 23 2025
Segmenting ischemic stroke lesions from Non-Contrast CT (NCCT) scans is a complex task due to the hypo-intense nature of these lesions compared to surrounding healthy brain tissue and their iso-intensity with lateral ventricles in many cases. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, this study presents an advanced 3D deep learning approach to enhance delineation accuracy. Traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating notable improvement in segmentation accuracy. An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct area, contributing to improved clinical decision-making.
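The two overlap scores reported above are computed from binary masks as follows (note the fixed relation Dice = 2J/(1+J)); the masks below are toy examples:

```python
import numpy as np

def dice_and_jaccard(pred, gt):
    """Dice similarity coefficient and Jaccard index for binary masks.
    Dice = 2|A∩B| / (|A|+|B|); Jaccard = |A∩B| / |A∪B|."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    return float(dice), float(jaccard)

pred = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted lesion mask
gt = np.array([[1, 0, 0], [0, 1, 1]])     # toy ground-truth mask
d, j = dice_and_jaccard(pred, gt)
print(round(d, 3), round(j, 3))  # intersection 2, sizes 3 and 3 -> 0.667 0.5
```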

Enhancing the CAD-RADS™ 2.0 Category Assignment Performance of ChatGPT and DeepSeek Through "Few-shot" Prompting.

Kaya HE

PubMed · Sep 23 2025
To assess whether few-shot prompting improves the performance of 2 popular large language models (LLMs) (ChatGPT o1 and DeepSeek-R1) in assigning Coronary Artery Disease Reporting and Data System (CAD-RADS™ 2.0) categories. A detailed few-shot prompt based on the CAD-RADS™ 2.0 framework was developed using 20 reports from the MIMIC-IV database. Subsequently, 100 modified reports from the same database were categorized using zero-shot and few-shot prompts through the models' user interface. Model accuracy was evaluated by comparing assignments to a reference radiologist's classifications, including stenosis categories and modifiers. To assess reproducibility, 50 reports were reclassified using the same few-shot prompt. McNemar tests and Cohen kappa were used for statistical analysis. Using zero-shot prompting, accuracy was low for both models (ChatGPT: 14%, DeepSeek: 8%), with correct assignments occurring almost exclusively in CAD-RADS 0 cases. Hallucinations occurred frequently (ChatGPT: 19%, DeepSeek: 54%). Few-shot prompting significantly improved accuracy to 98% for ChatGPT and 93% for DeepSeek (both P<0.001) and eliminated hallucinations. Kappa values for agreement between model-generated and radiologist-assigned classifications were 0.979 (0.950, 1.000) (P<0.001) for ChatGPT and 0.916 (0.859, 0.973) (P<0.001) for DeepSeek, indicating almost perfect agreement for both models without a significant difference between the models (P=0.180). Reproducibility analysis yielded kappa values of 0.957 (0.900, 1.000) (P<0.001) for ChatGPT and 0.873 (0.779, 0.967) (P<0.001) for DeepSeek, indicating almost perfect and strong agreement between repeated assignments, respectively, with no significant difference between the models (P=0.125). Few-shot prompting substantially enhances LLMs' accuracy in assigning CAD-RADS™ 2.0 categories, suggesting potential for clinical application and facilitating system adoption.
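A sketch of how a few-shot prompt of this kind might be assembled, assuming a chat-style message API; the example reports and category labels below are invented placeholders, not the study's actual MIMIC-IV examples:

```python
# Invented worked examples; a real prompt would use curated reports
# covering all CAD-RADS 2.0 categories and modifiers.
FEW_SHOT_EXAMPLES = [
    {"report": "No coronary plaque or stenosis.", "category": "CAD-RADS 0"},
    {"report": "Maximal stenosis 25% in the LAD.", "category": "CAD-RADS 1"},
]

def build_messages(new_report):
    """Assemble a few-shot chat prompt: system instruction, then
    user/assistant pairs for each worked example, then the new report."""
    messages = [{"role": "system",
                 "content": "Assign the CAD-RADS 2.0 category and modifiers "
                            "to the coronary CTA report."}]
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": ex["report"]})
        messages.append({"role": "assistant", "content": ex["category"]})
    messages.append({"role": "user", "content": new_report})
    return messages

msgs = build_messages("Maximal stenosis 60% in the RCA.")
print(len(msgs))  # 1 system + 2 examples x 2 turns + 1 query = 6
```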

Deep Learning for Standardized Head CT Reformatting: A Quantitative Analysis of Image Quality and Operator Variability.

Chang PD, Chu E, Floriolli D, Soun J, Fussell D

PubMed · Sep 23 2025
To validate a deep learning foundation model for automated head computed tomography (CT) reformatting and to quantify the quality, speed, and variability of conventional manual reformats in a real-world dataset. A foundation artificial intelligence (AI) model was used to create automated reformats for 1,763 consecutive non-contrast head CT examinations. Model accuracy was first validated on a 100-exam subset by assessing landmark detection as well as rotational, centering, and zoom error against expert manual annotations. The validated model was subsequently used as a reference standard to evaluate the quality and speed of the original technician-generated reformats from the full dataset. The AI model demonstrated high concordance with expert annotations, with a mean landmark localization error of 0.6-0.9 mm. Compared to expert-defined planes, AI-generated reformats exhibited a mean rotational error of 0.7 degrees, a mean centering error of 0.3%, and a mean zoom error of 0.4%. By contrast, technician-generated reformats demonstrated a mean rotational error of 11.2 degrees, a mean centering error of 6.4%, and a mean zoom error of 6.2%. Significant variability in manual reformat quality was observed across different factors including patient age, scanner location, report findings, and individual technician operators. Manual head CT reformatting is subject to substantial variability in both quality and speed. A single-shot deep learning foundation model can generate reformats with high accuracy and consistency. The implementation of such an automated method offers the potential to improve standardization, increase workflow efficiency, and reduce operational costs in clinical practice.
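One simple way to quantify rotational error between an AI-defined and a manual reformat plane is the angle between the plane normals; this is an assumption for illustration, as the abstract does not specify its exact error definition:

```python
import numpy as np

def rotational_error_deg(normal_a, normal_b):
    """Angle in degrees between two reformat-plane normal vectors."""
    a = np.asarray(normal_a, dtype=float)
    b = np.asarray(normal_b, dtype=float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    cos = np.clip(np.dot(a, b), -1.0, 1.0)  # guard against rounding
    return float(np.degrees(np.arccos(cos)))

# Example: a plane tilted 10 degrees about the x-axis vs. a true axial plane.
theta = np.radians(10.0)
tilted = [0.0, -np.sin(theta), np.cos(theta)]
print(round(rotational_error_deg([0, 0, 1], tilted), 1))  # 10.0
```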