CT-Agent: A Multimodal-LLM Agent for 3D CT Radiology Question Answering

Yuren Mao, Wenyi Xu, Yuyang Qin, Yunjun Gao

arXiv preprint · May 22, 2025
A Computed Tomography (CT) scan, which produces 3D volumetric medical data that can be viewed as hundreds of cross-sectional images (a.k.a. slices), provides detailed anatomical information for diagnosis. For radiologists, creating CT radiology reports is time-consuming and error-prone. A visual question answering (VQA) system that can answer radiologists' questions about anatomical regions on a CT scan, and even automatically generate a radiology report, is urgently needed. However, existing VQA systems cannot adequately handle the CT radiology question answering (CTQA) task for two reasons: (1) anatomic complexity makes CT images difficult to understand, and (2) spatial relationships across hundreds of slices are difficult to capture. To address these issues, this paper proposes CT-Agent, a multimodal agentic framework for CTQA. CT-Agent adopts anatomically independent tools to break down the anatomic complexity; furthermore, it efficiently captures across-slice spatial relationships with a global-local token compression strategy. Experimental results on two 3D chest CT datasets, CT-RATE and RadGenome-ChestCT, verify the superior performance of CT-Agent.
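
The abstract does not spell out how the global-local compression works, but one plausible reading can be sketched as follows: keep full-resolution tokens only for slices around a locally relevant region and pool the remaining slices into a few global tokens each. Every name, shape, and pooling choice below is an assumption for illustration, not CT-Agent's actual implementation.

```python
# Minimal sketch of a global-local token compression step (assumed design,
# not the released CT-Agent code). Input: per-slice vision tokens of shape
# (num_slices, tokens_per_slice, dim).
import torch

def global_local_compress(slice_tokens: torch.Tensor,
                          local_center: int,
                          local_half_width: int = 4,
                          global_pool: int = 8) -> torch.Tensor:
    """Keep full tokens near a region of interest, heavily pool the rest."""
    s, t, d = slice_tokens.shape
    lo = max(0, local_center - local_half_width)
    hi = min(s, local_center + local_half_width + 1)

    # Local branch: keep every token for the slices around the query region.
    local = slice_tokens[lo:hi].reshape(-1, d)

    # Global branch: average-pool each remaining slice to `global_pool` tokens
    # so the across-slice context survives at a fraction of the token cost.
    rest = torch.cat([slice_tokens[:lo], slice_tokens[hi:]], dim=0)
    pooled = torch.nn.functional.adaptive_avg_pool1d(
        rest.transpose(1, 2), global_pool).transpose(1, 2)
    return torch.cat([pooled.reshape(-1, d), local], dim=0)

# Example: 120 slices x 64 tokens x 256 dims compress to ~1.5k tokens.
tokens = torch.randn(120, 64, 256)
print(global_local_compress(tokens, local_center=60).shape)
```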

On factors that influence deep learning-based dose prediction of head and neck tumors.

Gao R, Mody P, Rao C, Dankers F, Staring M

PubMed · May 22, 2025
Objective. This study investigates key factors influencing deep learning-based dose prediction models for head and neck cancer radiation therapy. The goal is to evaluate model accuracy, robustness, and computational efficiency, and to identify key components necessary for optimal performance. Approach. We systematically analyze the impact of input and dose grid resolution, input type, loss function, model architecture, and noise on model performance. Two datasets are used: a public dataset (OpenKBP) and an in-house clinical dataset. Model performance is primarily evaluated using two metrics: dose score and dose-volume histogram (DVH) score. Main results. High-resolution inputs improve prediction accuracy (dose score and DVH score) by 8.6%-13.5% compared to low resolution. Using a combination of CT, planning target volumes, and organs-at-risk as input significantly enhances accuracy, with improvements of 57.4%-86.8% over using CT alone. Integrating mean absolute error (MAE) loss with value-based and criteria-based DVH loss functions further boosts DVH score by 7.2%-7.5% compared to MAE loss alone. In the robustness analysis, most models show minimal degradation under Poisson noise (0-0.3 Gy) but are more susceptible to adversarial noise (0.2-7.8 Gy). Notably, certain models, such as SwinUNETR, demonstrate superior robustness against adversarial perturbations. Significance. These findings highlight the importance of optimizing deep learning models and provide valuable guidance for achieving more accurate and reliable radiotherapy dose prediction.
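
The key modeling detail, pairing voxel-wise MAE with DVH-based loss terms, can be illustrated with a small sketch. The soft-threshold DVH, bin edges, structure masks, and weighting below are assumptions; the paper's exact value-based and criteria-based DVH losses are not reproduced here.

```python
# Hedged sketch of an MAE + DVH loss for dose prediction (assumed form, not
# the authors' exact value-based/criteria-based DVH losses).
import torch

def dvh(dose: torch.Tensor, mask: torch.Tensor, bins: torch.Tensor) -> torch.Tensor:
    """Differentiable cumulative DVH: fraction of structure volume >= each dose level."""
    d = dose[mask.bool()]                    # doses inside the structure
    # Soft thresholding keeps the histogram differentiable for backprop.
    return torch.sigmoid((d[None, :] - bins[:, None]) / 0.5).mean(dim=1)

def mae_dvh_loss(pred, target, masks, bins, dvh_weight=0.1):
    loss = torch.mean(torch.abs(pred - target))           # voxel-wise MAE term
    for m in masks:                                        # one mask per PTV/OAR
        loss = loss + dvh_weight * torch.mean(
            torch.abs(dvh(pred, m, bins) - dvh(target, m, bins)))
    return loss

# Example on toy dose grids (values in Gy).
pred = torch.rand(1, 64, 64, 64) * 70
target = torch.rand(1, 64, 64, 64) * 70
masks = [torch.zeros_like(pred).bool()]
masks[0][0, 20:40, 20:40, 20:40] = True
bins = torch.linspace(0, 70, 35)
print(mae_dvh_loss(pred, target, masks, bins))
```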

High-resolution deep learning reconstruction to improve the accuracy of CT fractional flow reserve.

Tomizawa N, Fan R, Fujimoto S, Nozaki YO, Kawaguchi YO, Takamura K, Hiki M, Aikawa T, Takahashi N, Okai I, Okazaki S, Kumamaru KK, Minamino T, Aoki S

PubMed · May 22, 2025
This study aimed to compare the diagnostic performance of CT-derived fractional flow reserve (CT-FFR) using model-based iterative reconstruction (MBIR) and high-resolution deep learning reconstruction (HR-DLR) images to detect functionally significant stenosis with invasive FFR as the reference standard. This single-center retrospective study included 79 consecutive patients (mean age, 70 ± 11 [SD] years; 57 male) who underwent coronary CT angiography followed by invasive FFR between February 2022 and March 2024. CT-FFR was calculated using a mesh-free simulation. The cutoff for functionally significant stenosis was defined as FFR ≤ 0.80. CT-FFR derived from MBIR and HR-DLR images was compared using receiver operating characteristic curve analysis. The mean invasive FFR value was 0.81 ± 0.09, and 46 of 98 vessels (47%) had FFR ≤ 0.80. The mean noise of HR-DLR was lower than that of MBIR (14.4 ± 1.7 vs 23.5 ± 3.1, p < 0.001). The area under the receiver operating characteristic curve for the diagnosis of functionally significant stenosis of HR-DLR (0.88; 95% CI: 0.80, 0.95) was higher than that of MBIR (0.76; 95% CI: 0.67, 0.86; p = 0.003). The diagnostic accuracy of HR-DLR (88%; 86 of 98 vessels; 95% CI: 80, 94) was higher than that of MBIR (70%; 69 of 98 vessels; 95% CI: 60, 79; p < 0.001). HR-DLR improves image quality and the diagnostic performance of CT-FFR for the diagnosis of functionally significant stenosis. Question: The effect of HR-DLR on the diagnostic performance of CT-FFR has not been investigated. Findings: HR-DLR improved the diagnostic performance of CT-FFR over MBIR for the diagnosis of functionally significant stenosis as assessed by invasive FFR. Clinical relevance: HR-DLR would further enhance the clinical utility of CT-FFR in diagnosing the functional significance of coronary stenosis.
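
As a rough illustration of the per-vessel analysis, the AUC comparison between the two reconstructions could be set up as below. The arrays are simulated placeholders, and the DeLong test behind the reported p-values would need a dedicated implementation or package.

```python
# Minimal sketch of the per-vessel ROC comparison (illustrative data only).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

invasive_ffr = rng.uniform(0.6, 1.0, size=98)      # reference standard per vessel
labels = (invasive_ffr <= 0.80).astype(int)        # functionally significant stenosis

# Hypothetical CT-FFR values from the two reconstructions; real values would
# come from the mesh-free simulation run on MBIR and HR-DLR images.
ctffr_mbir = invasive_ffr + rng.normal(0, 0.08, 98)
ctffr_hrdlr = invasive_ffr + rng.normal(0, 0.04, 98)

# Lower CT-FFR indicates significant stenosis, so score with the negated value.
print(f"AUC MBIR:   {roc_auc_score(labels, -ctffr_mbir):.2f}")
print(f"AUC HR-DLR: {roc_auc_score(labels, -ctffr_hrdlr):.2f}")
```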

Influence of content-based image retrieval on the accuracy and inter-reader agreement of usual interstitial pneumonia CT pattern classification.

Park S, Hwang HJ, Yun J, Chae EJ, Choe J, Lee SM, Lee HN, Shin SY, Park H, Jeong H, Kim MJ, Lee JH, Jo KW, Baek S, Seo JB

PubMed · May 22, 2025
To investigate whether content-based image retrieval (CBIR) of similar chest CT images can improve usual interstitial pneumonia (UIP) CT pattern classification among readers with varying levels of experience. This retrospective study included patients who underwent high-resolution chest CT between 2013 and 2015 for the initial workup for fibrosing interstitial lung disease. UIP classifications were assigned to CT images by three thoracic radiologists, which served as the ground truth. One hundred patients were selected as queries. The CBIR retrieved the top three similar CT images with UIP classifications using a deep learning algorithm. The diagnostic accuracies and inter-reader agreement of nine readers before and after CBIR were evaluated. Of 587 patients (mean age, 63 years; 356 men), 100 query cases (26 UIP patterns, 26 probable UIP patterns, 5 indeterminate for UIP, and 43 alternative diagnoses) were selected. After CBIR, the mean accuracy (61.3% to 67.1%; p = 0.011) and inter-reader agreement (Fleiss kappa, 0.400 to 0.476; p = 0.003) were slightly improved. The accuracies of the radiologist group for all CT patterns except indeterminate for UIP increased after CBIR; however, they did not reach statistical significance. The resident and pulmonologist groups demonstrated mixed results: accuracy decreased for the UIP pattern, increased for alternative diagnosis, and varied for others. CBIR slightly improved diagnostic accuracy and inter-reader agreement in UIP pattern classifications. However, its impact varied depending on the readers' level of experience, suggesting that the current CBIR system may be beneficial when used to complement the interpretations of experienced readers. Question: CT pattern classification is important for the standardized assessment and management of idiopathic pulmonary fibrosis, but requires radiologic expertise and shows inter-reader variability. Findings: CBIR slightly improved diagnostic accuracy and inter-reader agreement for UIP CT pattern classifications overall. Clinical relevance: The proposed CBIR system may guide consistent work-up and treatment strategies by enhancing accuracy and inter-reader agreement in UIP CT pattern classifications by experienced readers whose expertise and experience can effectively interact with CBIR results.
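
A minimal sketch of the retrieval step, returning the three most similar archived cases by cosine similarity over deep-learning embeddings, might look like the following. The embedding source and label database are placeholders, not the authors' system.

```python
# Hedged sketch of top-3 CBIR over precomputed CT embeddings (cosine similarity).
import numpy as np

def retrieve_top3(query_emb: np.ndarray, db_embs: np.ndarray, db_labels: list):
    """Return the three database cases most similar to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                               # cosine similarity to every case
    top = np.argsort(-sims)[:3]
    return [(int(i), db_labels[i], float(sims[i])) for i in top]

# Toy example: 500 archived cases carrying their UIP classifications as labels.
rng = np.random.default_rng(1)
db_embs = rng.normal(size=(500, 128))
db_labels = list(rng.choice(["UIP", "probable UIP", "indeterminate", "alternative"], 500))
query = rng.normal(size=128)
print(retrieve_top3(query, db_embs, db_labels))
```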

HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image Synthesis for Interpretable Compression Fracture Grading.

Zhang Q, Chuang C, Zhang S, Zhao Z, Wang K, Xu J, Sun J

PubMed · May 22, 2025
Osteoporotic vertebral compression fractures (OVCFs) are prevalent in the elderly population, typically assessed on computed tomography (CT) scans by evaluating vertebral height loss. This assessment helps determine the fracture's impact on spinal stability and the need for surgical intervention. However, the absence of pre-fracture CT scans and standardized vertebral references leads to measurement errors and inter-observer variability, while irregular compression patterns further challenge the precise grading of fracture severity. While deep learning methods have shown promise in aiding OVCFs screening, they often lack interpretability and sufficient sensitivity, limiting their clinical applicability. To address these challenges, we introduce a novel vertebra synthesis-height loss quantification-OVCFs grading framework. Our proposed model, HealthiVert-GAN, utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy vertebral images that simulate the pre-fracture state of fractured vertebrae. This model integrates three auxiliary modules that leverage the morphology and height information of adjacent healthy vertebrae to ensure anatomical consistency. Additionally, we introduce the Relative Height Loss of Vertebrae (RHLV) as a quantification metric, which divides each vertebra into three sections to measure height loss between pre-fracture and post-fracture states, followed by fracture severity classification using a Support Vector Machine (SVM). Our approach achieves state-of-the-art classification performance on both the Verse2019 dataset and an in-house dataset, and it provides cross-sectional distribution maps of vertebral height loss. This practical tool enhances diagnostic accuracy in clinical settings and assists in surgical decision-making.
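
The RHLV metric and downstream SVM grading can be illustrated with a small sketch: split each vertebra into three sections, measure the height loss of the fractured vertebra relative to its synthesized pseudo-healthy counterpart, and feed the three ratios to an SVM. How heights are measured from masks, and the synthetic grades used to fit the classifier, are assumptions for illustration.

```python
# Hedged sketch of RHLV computation and SVM grading (height measurement from
# 2D sagittal masks is an assumed implementation detail).
import numpy as np
from sklearn.svm import SVC

def rhlv(healthy_mask: np.ndarray, fractured_mask: np.ndarray) -> np.ndarray:
    """Relative height loss in three sections (anterior/middle/posterior).

    Masks are binary sagittal images (rows = cranio-caudal, cols = anterior-posterior);
    per-column height = number of foreground rows, averaged within each third.
    """
    def section_heights(mask):
        heights = mask.sum(axis=0).astype(float)
        return np.array([t.mean() for t in np.array_split(heights, 3)])

    h_pre, h_post = section_heights(healthy_mask), section_heights(fractured_mask)
    return (h_pre - h_post) / np.clip(h_pre, 1e-6, None)

# Toy masks: pseudo-healthy vertebra vs. fractured one with anterior height loss.
healthy = np.zeros((40, 30), dtype=int); healthy[5:35, :] = 1
fractured = healthy.copy(); fractured[5:17, :10] = 0
features = rhlv(healthy, fractured)

# Toy severity classifier trained on synthetic RHLV triplets (illustrative grades).
rng = np.random.default_rng(2)
X = rng.uniform(0, 0.6, size=(200, 3))
y = np.digitize(X.max(axis=1), [0.2, 0.25, 0.4])
clf = SVC(kernel="rbf").fit(X, y)
print(features, clf.predict([features]))
```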

Daily proton dose re-calculation on deep-learning corrected cone-beam computed tomography scans.

Vestergaard CD, Muren LP, Elstrøm UV, Stolarczyk L, Nørrevang O, Petersen SE, Taasti VT

PubMed · May 22, 2025
Synthetic CT (sCT) generation from cone-beam CT (CBCT) must maintain stable performance and allow for accurate dose calculation across all treatment fractions to effectively support adaptive proton therapy. This study evaluated a 3D deep-learning (DL) network for sCT generation for prostate cancer patients over the full treatment course. Data from 25 prostate cancer patients were used to train the DL network, and data from six patients were used to test it. Patients in the test set had a planning CT, 39 CBCT images, and at least one repeat CT (reCT) used for replanning. The generated sCT images were compared to fan-beam planning and reCT images in terms of i) CT number accuracy and stability within spherical regions-of-interest (ROIs) in the bladder, prostate, and femoral heads, ii) proton range calculation accuracy through single-spot plans, and iii) dose trends in target coverage over the treatment course (one patient). The sCT images demonstrated image quality comparable to CT, while preserving the CBCT anatomy. The mean CT numbers on the sCT and CT images were comparable, e.g., for the prostate ROI they ranged from 29 HU to 59 HU for sCT, and from 36 HU to 50 HU for CT. The largest median proton range difference was 1.9 mm. Proton dose calculations showed excellent target coverage (V95% ≥ 99.6%) for the high-dose target. The DL network effectively generated high-quality sCT images with CT numbers, proton range, and dose characteristics comparable to fan-beam CT. Its robustness against intra-patient variations makes it a feasible tool for adaptive proton therapy.
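
The first evaluation, CT number accuracy within spherical ROIs, can be sketched as below; the ROI center, radius, and toy volumes are placeholders, not the study's measurement setup.

```python
# Hedged sketch of comparing mean HU within a spherical ROI on sCT vs. reCT.
import numpy as np

def spherical_roi_mean(volume_hu: np.ndarray, center_vox, radius_vox: float) -> float:
    """Mean CT number (HU) inside a sphere defined in voxel coordinates."""
    zz, yy, xx = np.indices(volume_hu.shape)
    dist2 = ((zz - center_vox[0]) ** 2 +
             (yy - center_vox[1]) ** 2 +
             (xx - center_vox[2]) ** 2)
    return float(volume_hu[dist2 <= radius_vox ** 2].mean())

# Toy volumes standing in for the generated sCT and a same-day repeat CT.
rng = np.random.default_rng(3)
sct = rng.normal(45, 10, size=(64, 128, 128))
rect = rng.normal(42, 10, size=(64, 128, 128))
center, radius = (32, 64, 64), 8
diff = spherical_roi_mean(sct, center, radius) - spherical_roi_mean(rect, center, radius)
print(f"Prostate-ROI mean HU difference (sCT - reCT): {diff:.1f} HU")
```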

Patient Reactions to Artificial Intelligence-Clinician Discrepancies: Web-Based Randomized Experiment.

Madanay F, O'Donohue LS, Zikmund-Fisher BJ

PubMed · May 22, 2025
As the US Food and Drug Administration (FDA)-approved use of artificial intelligence (AI) for medical imaging rises, radiologists are increasingly integrating AI into their clinical practices. In lung cancer screening, diagnostic AI offers a second set of eyes with the potential to detect cancer earlier than human radiologists. Despite AI's promise, a potential problem with its integration is the erosion of patient confidence in clinician expertise when there is a discrepancy between the radiologist's and the AI's interpretation of the imaging findings. We examined how discrepancies between AI-derived recommendations and radiologists' recommendations affect patients' agreement with radiologists' recommendations and satisfaction with their radiologists. We also analyzed how patients' medical maximizing-minimizing preferences moderate these relationships. We conducted a randomized, between-subjects experiment with 1606 US adult participants. Assuming the role of patients, participants imagined undergoing a low-dose computerized tomography scan for lung cancer screening and receiving results and recommendations from (1) a radiologist only, (2) AI and a radiologist in agreement, (3) a radiologist who recommended more testing than AI (ie, radiologist overcalled AI), or (4) a radiologist who recommended less testing than AI (ie, radiologist undercalled AI). Participants rated the radiologist on three criteria: agreement with the radiologist's recommendation, how likely they would be to recommend the radiologist to family and friends, and how good of a provider they perceived the radiologist to be. We measured medical maximizing-minimizing preferences and categorized participants as maximizers (ie, those who seek aggressive intervention), minimizers (ie, those who prefer no or passive intervention), and neutrals (ie, those in the middle). Participants' agreement with the radiologist's recommendation was significantly lower when the radiologist undercalled AI (mean 4.01, SE 0.07, P<.001) than in the other 3 conditions, with no significant differences among them (radiologist overcalled AI [mean 4.63, SE 0.06], agreed with AI [mean 4.55, SE 0.07], or had no AI [mean 4.57, SE 0.06]). Similarly, participants were least likely to recommend (P<.001) and positively rate (P<.001) the radiologist who undercalled AI, with no significant differences among the other conditions. Maximizers agreed with the radiologist who overcalled AI (β=0.82, SE 0.14; P<.001) and disagreed with the radiologist who undercalled AI (β=-0.47, SE 0.14; P=.001). However, whereas minimizers disagreed with the radiologist who overcalled AI (β=-0.43, SE 0.18, P=.02), they did not significantly agree with the radiologist who undercalled AI (β=0.14, SE 0.17, P=.41). Radiologists who recommend less testing than AI may face decreased patient confidence in their expertise, but they may not face this same penalty for giving more aggressive recommendations than AI. Patients' reactions may depend in part on whether their general preferences to maximize or minimize align with the radiologists' recommendations. Future research should test communication strategies for radiologists' disclosure of AI discrepancies to patients.
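 
The reported moderation effects correspond to regressing each rating on the experimental condition, the maximizing-minimizing group, and their interaction. A minimal sketch of that analysis on simulated data (not the study's data, variable names assumed) is below.

```python
# Hedged sketch of the condition x maximizer/minimizer moderation analysis
# using simulated ratings, not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1606
conditions = ["no_ai", "agree", "overcall", "undercall"]
groups = ["minimizer", "neutral", "maximizer"]

df = pd.DataFrame({
    "condition": rng.choice(conditions, n),
    "group": rng.choice(groups, n),
})
# Simulated agreement ratings with a lower mean in the undercall condition.
base = np.where(df["condition"] == "undercall", 4.0, 4.6)
df["agreement"] = np.clip(base + rng.normal(0, 0.8, n), 1, 6)

# OLS with a condition x group interaction captures the moderation effect.
model = smf.ols(
    "agreement ~ C(condition, Treatment('no_ai')) * C(group, Treatment('neutral'))",
    data=df).fit()
print(model.summary().tables[1])
```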

ActiveNaf: A novel NeRF-based approach for low-dose CT image reconstruction through active learning.

Zidane A, Shimshoni I

PubMed · May 22, 2025
CT imaging provides essential information about internal anatomy; however, conventional CT imaging delivers radiation doses that can become problematic for patients requiring repeated imaging, highlighting the need for dose-reduction techniques. This study aims to reduce radiation doses without compromising image quality. We propose an approach that combines Neural Attenuation Fields (NAF) with an active learning strategy to better optimize CT reconstructions given a limited number of X-ray projections. Our method uses a secondary neural network to predict the Peak Signal-to-Noise Ratio (PSNR) of 2D projections generated by NAF from a range of angles in the operational range of the CT scanner. This prediction serves as a guide for the active learning process in choosing the most informative projections. In contrast to conventional techniques that acquire all X-ray projections in a single session, our technique iteratively acquires projections. The iterative process improves reconstruction quality, reduces the number of required projections, and decreases patient radiation exposure. We tested our methodology on spinal imaging using a limited subset of the VerSe 2020 dataset. We compare image quality metrics (PSNR3D, SSIM3D, and PSNR2D) to the baseline method and find significant improvements. Our method achieves the same quality with 36 projections as the baseline method achieves with 60. Our findings demonstrate that our approach achieves high-quality 3D CT reconstructions from sparse data, producing clearer and more detailed images of anatomical structures. This work lays the groundwork for advanced imaging techniques, paving the way for safer and more efficient medical imaging procedures.
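
The active acquisition loop described above can be sketched in outline: a predictor scores candidate gantry angles by expected PSNR, the most informative projection is acquired, and the NAF is refit. The placeholder functions stand in for components the paper actually trains, and picking the lowest predicted PSNR as "most informative" is an assumed selection rule.

```python
# Hedged sketch of the active-learning acquisition loop (placeholder functions
# stand in for the NAF model and the secondary PSNR predictor).
import numpy as np

def train_naf(projections, angles):            # placeholder: fit NAF to current data
    return {"projections": list(projections), "angles": list(angles)}

def predict_psnr(naf_model, angle):            # placeholder: secondary network's score
    return -abs(np.sin(np.deg2rad(angle)))     # dummy heuristic for illustration

def acquire_projection(angle):                 # placeholder: take one X-ray projection
    return np.zeros((256, 256))

candidate_angles = list(np.arange(0.0, 180.0, 3.0))
angles = [0.0, 90.0]
projections = [acquire_projection(a) for a in angles]

for _ in range(34):                            # e.g. stop at 36 total projections
    naf = train_naf(projections, angles)
    remaining = [a for a in candidate_angles if a not in angles]
    # Pick the angle where the predictor expects the lowest PSNR, i.e. where the
    # current reconstruction is weakest and a new projection should help most.
    next_angle = min(remaining, key=lambda a: predict_psnr(naf, a))
    angles.append(next_angle)
    projections.append(acquire_projection(next_angle))

print(f"Acquired {len(angles)} projections, first angles: {sorted(angles)[:5]} ...")
```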

Deep Learning Image Reconstruction (DLIR) Algorithm to Maintain High Image Quality and Diagnostic Accuracy in Quadruple-low CT Angiography of Children with Pulmonary Sequestration: A Case Control Study.

Li H, Zhang Y, Hua S, Sun R, Zhang Y, Yang Z, Peng Y, Sun J

PubMed · May 22, 2025
CT angiography (CTA) is a commonly used clinical examination to detect abnormal arteries and diagnose pulmonary sequestration (PS). Reducing the radiation dose, contrast medium dosage, and injection pressure in CTA, especially in children, has always been an important research topic, but little of this research has been validated against pathology. The current study aimed to evaluate the diagnostic accuracy for children with PS of a quadruple-low CTA (4L-CTA: low tube voltage, radiation dose, contrast medium, and injection flow rate) using deep learning image reconstruction (DLIR), in comparison with routine-protocol CTA reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V). Fifty-three patients (1.50 ± 1.36 years) suspected of having PS were enrolled to undergo chest 4L-CTA using a 70 kVp tube voltage, a radiation dose of 0.90 mGy in volumetric CT dose index (CTDIvol), and a contrast medium dose of 0.8 ml/kg injected over 16 s. Images were reconstructed using DLIR. Another 53 patients (1.25 ± 1.02 years) scanned with a routine-dose protocol were used for comparison, and their images were reconstructed with ASIR-V. The contrast-to-noise ratio (CNR) and edge-rise distance (ERD) of the aorta were calculated. Subjective overall image quality and artery visualization were evaluated using a 5-point scale (5, excellent; 3, acceptable). All patients underwent surgery after CT, and the sensitivity and specificity for diagnosing PS were calculated. 4L-CTA reduced the radiation dose by 51%, contrast dose by 47%, injection flow rate by 44%, and injection pressure by 44% compared to the routine CTA (all p < 0.05). Both groups had satisfactory subjective image quality and achieved 100% sensitivity and specificity for diagnosing PS. Compared to the routine CTA, 4L-CTA had a reduced CNR (by 27%, p < 0.05) but a similar ERD, which reflects image spatial resolution (p > 0.05). 4L-CTA revealed small arteries with a diameter of 0.8 mm. DLIR enables 4L-CTA in children with PS, delivering significant radiation and contrast dose reductions while maintaining image quality, visualization of small arteries, and high diagnostic accuracy.
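
For context, the two objective image-quality metrics can be written out as a small sketch: CNR from aortic and background ROI statistics, and ERD as the 10%-90% rise distance of an edge profile across the aortic wall. The ROI values, thresholds, and pixel spacing are illustrative assumptions, not the authors' measurement protocol.

```python
# Hedged sketch of CNR and edge-rise-distance (ERD) measurement.
import numpy as np

def cnr(aorta_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio: attenuation difference over background noise."""
    return float((aorta_roi.mean() - background_roi.mean()) / background_roi.std())

def edge_rise_distance(profile_hu: np.ndarray, pixel_mm: float) -> float:
    """Distance over which an edge profile rises from 10% to 90% of its span."""
    lo, hi = profile_hu.min(), profile_hu.max()
    t10, t90 = lo + 0.1 * (hi - lo), lo + 0.9 * (hi - lo)
    i10 = int(np.argmax(profile_hu >= t10))
    i90 = int(np.argmax(profile_hu >= t90))
    return abs(i90 - i10) * pixel_mm

# Toy data: contrast-filled aorta vs. chest-wall background, plus a blurred edge.
rng = np.random.default_rng(5)
aorta = rng.normal(350, 15, 500)
background = rng.normal(50, 15, 500)
profile = 50 + 300 / (1 + np.exp(-np.linspace(-6, 6, 40)))   # sigmoid edge in HU
print(f"CNR: {cnr(aorta, background):.1f}, ERD: {edge_rise_distance(profile, 0.4):.2f} mm")
```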

Render-FM: A Foundation Model for Real-time Photorealistic Volumetric Rendering

Zhongpai Gao, Meng Zheng, Benjamin Planche, Anwesa Choudhuri, Terrence Chen, Ziyan Wu

arXiv preprint · May 22, 2025
Volumetric rendering of Computed Tomography (CT) scans is crucial for visualizing complex 3D anatomical structures in medical imaging. Current high-fidelity approaches, especially neural rendering techniques, require time-consuming per-scene optimization, limiting clinical applicability due to computational demands and poor generalizability. We propose Render-FM, a novel foundation model for direct, real-time volumetric rendering of CT scans. Render-FM employs an encoder-decoder architecture that directly regresses 6D Gaussian Splatting (6DGS) parameters from CT volumes, eliminating per-scan optimization through large-scale pre-training on diverse medical data. By integrating robust feature extraction with the expressive power of 6DGS, our approach efficiently generates high-quality, real-time interactive 3D visualizations across diverse clinical CT data. Experiments demonstrate that Render-FM achieves visual fidelity comparable or superior to specialized per-scan methods while drastically reducing preparation time from nearly an hour to seconds for a single inference step. This advancement enables seamless integration into real-time surgical planning and diagnostic workflows. The project page is: https://gaozhongpai.github.io/renderfm/.
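
The core idea, a feed-forward network that maps a CT volume to per-primitive splatting parameters in a single pass, can be sketched as below. The channel counts, parameter layout (position offsets, scales, rotation, color, opacity), and use of a plain 3D CNN are assumptions for illustration; the actual Render-FM architecture and 6DGS parameterization are described in the paper and project page.

```python
# Hedged sketch: a feed-forward encoder-decoder that regresses splatting
# parameters from a CT volume in one pass (assumed layout, not Render-FM).
import torch
import torch.nn as nn

class CTToGaussians(nn.Module):
    def __init__(self, params_per_voxel: int = 14):
        # e.g. 3 position offsets + 3 scales + 4 rotation quaternion components
        #      + 3 color + 1 opacity = 14 parameters per coarse voxel (assumed).
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv3d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv3d(64, params_per_voxel, 1),       # parameter map head
        )

    def forward(self, ct: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(ct)                      # (B, 64, D/4, H/4, W/4)
        params = self.decoder(feats)                  # (B, 14, D/4, H/4, W/4)
        b, c = params.shape[:2]
        return params.view(b, c, -1).transpose(1, 2)  # (B, num_primitives, 14)

model = CTToGaussians()
ct_volume = torch.randn(1, 1, 64, 64, 64)             # normalized toy CT volume
print(model(ct_volume).shape)                          # torch.Size([1, 4096, 14])
```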