
ActiveNaf: A novel NeRF-based approach for low-dose CT image reconstruction through active learning.

Zidane A, Shimshoni I

PubMed · May 22, 2025
CT imaging provides essential information about internal anatomy; however, conventional CT delivers radiation doses that can become problematic for patients requiring repeated imaging, highlighting the need for dose-reduction techniques. This study aims to reduce radiation dose without compromising image quality. We propose an approach that combines Neural Attenuation Fields (NAF) with an active learning strategy to better optimize CT reconstructions given a limited number of X-ray projections. Our method uses a secondary neural network to predict the Peak Signal-to-Noise Ratio (PSNR) of 2D projections generated by NAF from angles across the scanner's operational range. This prediction guides the active learning process in choosing the most informative projections. In contrast to conventional techniques that acquire all X-ray projections in a single session, our technique acquires projections iteratively. The iterative process improves reconstruction quality, reduces the number of required projections, and decreases patient radiation exposure. We tested our methodology on spinal imaging using a limited subset of the VerSe 2020 dataset. We compared image quality metrics (PSNR3D, SSIM3D, and PSNR2D) against the baseline method and found significant improvements: our method achieves with 36 projections the same quality the baseline achieves with 60. Our findings demonstrate that this approach yields high-quality 3D CT reconstructions from sparse data, producing clearer and more detailed images of anatomical structures. This work lays the groundwork for advanced imaging techniques, paving the way for safer and more efficient medical imaging procedures.
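
The core of the method is the selection step: score candidate acquisition angles with a surrogate PSNR predictor and acquire where predicted quality is lowest. A minimal NumPy sketch of that step follows; the toy predictor stands in for the paper's secondary network and is purely illustrative.

```python
import numpy as np

def select_next_angles(candidate_angles, predict_psnr, k=3):
    """Pick the k candidates with the lowest predicted PSNR, i.e. the
    angles where the current NAF reconstruction is expected to be worst."""
    scores = np.array([predict_psnr(a) for a in candidate_angles])
    worst = np.argsort(scores)[:k]
    return candidate_angles[worst]

# Toy predictor: quality degrades for angles far from those already acquired.
acquired = np.deg2rad([0.0, 60.0, 120.0])
toy_psnr = lambda a: 35.0 - 8.0 * np.min(np.abs(acquired - a))
candidates = np.deg2rad(np.arange(0.0, 180.0, 10.0))
print(np.rad2deg(select_next_angles(candidates, toy_psnr)))
```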

Deep Learning-Based Multimodal Feature Interaction-Guided Fusion: Enhancing the Evaluation of EGFR in Advanced Lung Adenocarcinoma.

Xu J, Feng B, Chen X, Wu F, Liu Y, Yu Z, Lu S, Duan X, Chen X, Li K, Zhang W, Dai X

PubMed · May 22, 2025
The aim of this study is to develop a deep learning-based multimodal feature interaction-guided fusion (DL-MFIF) framework that integrates macroscopic information from computed tomography (CT) images with microscopic information from whole-slide images (WSIs) to predict epidermal growth factor receptor (EGFR) mutations of primary lung adenocarcinoma in patients with advanced-stage disease. Data from 396 patients with lung adenocarcinoma across two medical institutions were analyzed. The 243 cases from the first institution were divided into a training set (n=145) and an internal validation set (n=98) in a 6:4 ratio; an additional 153 cases from another medical institution served as an external validation set. All cases included CT images and WSIs. To integrate multimodal information, we developed the DL-MFIF framework, which leverages deep learning to capture interactions between radiomic macro-features derived from CT images and micro-features obtained from WSIs. Compared with other classification models, the DL-MFIF model achieved significantly higher area under the curve (AUC) values, outperforming the others on both the internal validation set (AUC=0.856, accuracy=0.750) and the external validation set (AUC=0.817, accuracy=0.708). Decision curve analysis (DCA) demonstrated that the model provided superior net benefits (range 0.15-0.87). DeLong's test on the external validation set confirmed the statistical significance of the results (P<0.05). The DL-MFIF model demonstrated excellent performance in evaluating EGFR mutation status in patients with advanced lung adenocarcinoma. It can aid radiologists in accurately classifying EGFR mutations in patients with primary lung adenocarcinoma, thereby improving treatment outcomes for this population.
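
One plausible way to realize the macro/micro feature interaction is cross-attention between the two embedding sets. The PyTorch sketch below is illustrative only; the module name, dimensions, and head count are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InteractionFusion(nn.Module):
    """Cross-attention from CT macro-features to WSI micro-features."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 2))

    def forward(self, ct_feats, wsi_feats):
        # ct_feats: (B, Nc, D) CT radiomic tokens; wsi_feats: (B, Nw, D) WSI tokens
        fused, _ = self.attn(query=ct_feats, key=wsi_feats, value=wsi_feats)
        return self.head(fused.mean(dim=1))  # logits: EGFR mutant vs. wild-type

logits = InteractionFusion()(torch.randn(2, 8, 256), torch.randn(2, 32, 256))
print(logits.shape)  # torch.Size([2, 2])
```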

Deep Learning Image Reconstruction (DLIR) Algorithm to Maintain High Image Quality and Diagnostic Accuracy in Quadruple-low CT Angiography of Children with Pulmonary Sequestration: A Case Control Study.

Li H, Zhang Y, Hua S, Sun R, Zhang Y, Yang Z, Peng Y, Sun J

PubMed · May 22, 2025
CT angiography (CTA) is a commonly used clinical examination to detect abnormal arteries and diagnose pulmonary sequestration (PS). Reducing the radiation dose, contrast medium dose, and injection pressure in CTA, especially in children, has long been an important research topic, but little of this work has been verified against pathology. The current study aimed to evaluate the diagnostic accuracy for children with PS of a quadruple-low CTA protocol (4L-CTA: low tube voltage, radiation dose, contrast medium, and injection flow rate) using deep learning image reconstruction (DLIR), compared with a routine-protocol CTA reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V). 53 patients (1.50±1.36 years) with suspected PS were enrolled to undergo chest 4L-CTA at 70 kVp tube voltage, with a radiation dose of 0.90 mGy in volumetric CT dose index (CTDIvol) and a contrast medium dose of 0.8 ml/kg injected over 16 s. Images were reconstructed using DLIR. Another 53 patients (1.25±1.02 years) scanned with a routine-dose protocol were used for comparison, with images reconstructed using ASIR-V. The contrast-to-noise ratio (CNR) and edge-rise distance (ERD) of the aorta were calculated. Subjective overall image quality and artery visualization were evaluated on a 5-point scale (5, excellent; 3, acceptable). All patients underwent surgery after CT, and the sensitivity and specificity for diagnosing PS were calculated. 4L-CTA reduced radiation dose by 51%, contrast dose by 47%, injection flow rate by 44%, and injection pressure by 44% compared with routine CTA (all p<0.05). Both groups had satisfactory subjective image quality and achieved 100% sensitivity and specificity for diagnosing PS. 4L-CTA had a reduced CNR (by 27%, p<0.05) but a similar ERD, which reflects image spatial resolution (p>0.05), compared with routine CTA, and it revealed small arteries with a diameter of 0.8 mm. DLIR enables 4L-CTA in children with PS, delivering significant radiation and contrast dose reduction while maintaining image quality, visualization of small arteries, and high diagnostic accuracy.
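
For reference, the two objective metrics above can be computed as in the NumPy sketch below. It assumes common definitions (CNR from ROI means and background noise; ERD as a 10%-90% rise distance along a line profile across the aortic edge), since the abstract does not state the exact formulas; the sample profile values are made up.

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: (mean_signal - mean_background) / SD_background."""
    return (roi_signal.mean() - roi_background.mean()) / roi_background.std()

def edge_rise_distance(profile, pixel_mm):
    """10%-90% rise distance (mm) of a line profile drawn across an edge."""
    lo, hi = profile.min(), profile.max()
    i10 = np.argmax(profile >= lo + 0.1 * (hi - lo))  # first pixel above 10%
    i90 = np.argmax(profile >= lo + 0.9 * (hi - lo))  # first pixel above 90%
    return (i90 - i10) * pixel_mm

profile = np.array([40, 42, 45, 80, 160, 290, 350, 355, 358], dtype=float)  # HU
print(edge_rise_distance(profile, pixel_mm=0.5))
```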

Deep learning-based model for difficult transfemoral access prediction compared with human assessment in stroke thrombectomy.

Canals P, Garcia-Tornel A, Requena M, Jabłońska M, Li J, Balocco S, Díaz O, Tomasello A, Ribo M

PubMed · May 22, 2025
In mechanical thrombectomy (MT), extracranial vascular tortuosity is among the main determinants of procedure duration and success. Currently, no rapid and reliable method exists to identify the anatomical features precluding fast and stable access to the cervical vessels. A retrospective sample of 513 patients was included in this study; all underwent first-line transfemoral MT following anterior circulation large vessel occlusion stroke. Difficult transfemoral access (DTFA) was defined as impossible common carotid catheterization or a time from groin puncture to first carotid angiogram >30 min. A machine learning model based on 29 anatomical features automatically extracted from head-and-neck computed tomography angiography (CTA) was developed to predict DTFA. Three experienced raters independently assessed the likelihood of DTFA on a reduced cohort of 116 cases using a Likert scale as a benchmark for the model, rating preprocedural CTA and the automatic 3D vascular segmentation separately. Among the study population, 11.5% of procedures (59/513) presented DTFA. Six features from the aortic, supra-aortic, and cervical regions were included in the model. Cross-validation resulted in an area under the receiver operating characteristic curve (AUROC) of 0.76 (95% CI 0.75 to 0.76) for DTFA prediction, with high sensitivity for identifying impossible access (0.90, 95% CI 0.81 to 0.94). The model outperformed human assessment in the reduced cohort [F1-score (95% CI): experts with CTA, 0.43 (0.37 to 0.50); experts with 3D segmentation, 0.50 (0.46 to 0.54); model, 0.70 (0.65 to 0.75)]. A fully automatic model for DTFA prediction was developed and validated, improving on expert assessment of difficult-access prediction in stroke MT. The derived information could guide decisions regarding arterial access for MT.
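
Once the 29 anatomical features are extracted, the setup reduces to tabular classification with cross-validated AUROC. A minimal scikit-learn sketch follows; the classifier choice and the synthetic features are assumptions, with only the cohort size and DTFA prevalence taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(513, 29))        # placeholder for the 29 anatomical features
y = rng.random(513) < 0.115           # ~11.5% DTFA prevalence, as reported

scores = cross_val_score(GradientBoostingClassifier(random_state=0),
                         X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUROC: {scores.mean():.2f} +/- {scores.std():.2f}")
```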

An Interpretable Deep Learning Approach for Autism Spectrum Disorder Detection in Children Using NASNet-Mobile.

K VRP, Hima Bindu C, Devi KRM

PubMed · May 22, 2025
Autism spectrum disorder (ASD) is a multifaceted neurodevelopmental disorder characterized by impaired social interaction and communication along with restrictive or repetitive behaviours. Though incurable, early detection and intervention can reduce the severity of symptoms. Structural magnetic resonance imaging (sMRI) can improve diagnostic accuracy, facilitating early diagnosis and more tailored care. With the emergence of deep learning (DL), neuroimaging-based approaches for ASD diagnosis have become a major focus. However, many existing models lack interpretability of their diagnostic decisions. The prime objective of this work is to perform ASD classification precisely and to interpret the classification process so as to discern the major features that drive the prediction of the disorder. The proposed model employs the Neural Architecture Search Network-Mobile (NASNet-Mobile) model for ASD detection, integrated with an explainable artificial intelligence (XAI) technique called local interpretable model-agnostic explanations (LIME) for increased transparency of ASD classification. The model is trained on sMRI images of two age groups taken from the Autism Brain Imaging Data Exchange-I (ABIDE-I) dataset. The proposed model yielded an accuracy of 0.9607, F1-score of 0.9614, specificity of 0.9774, sensitivity of 0.9451, negative predictive value (NPV) of 0.9429, positive predictive value (PPV) of 0.9783, and a diagnostic odds ratio of 745.59 for the 2-11 years age group, compared with the 12-18 years group. These results are superior to those of other state-of-the-art models, Inception v3 and SqueezeNet.
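
The NASNet-Mobile + LIME pairing can be sketched with Keras and the lime package as below. Weights, preprocessing, and the placeholder slice are illustrative assumptions, not the authors' fine-tuned pipeline.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image

model = tf.keras.applications.NASNetMobile(weights=None, classes=2)

def classifier_fn(batch):
    # batch: (N, 224, 224, 3); sMRI slices would be replicated to 3 channels
    return model.predict(tf.keras.applications.nasnet.preprocess_input(batch))

slice_img = np.random.rand(224, 224, 3)          # placeholder for an sMRI slice
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(slice_img, classifier_fn,
                                         top_labels=1, num_samples=200)
# Superpixels that most support the predicted class, for visual inspection.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
```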

Generative adversarial DacFormer network for MRI brain tumor segmentation.

Zhang M, Sun Q, Han Y, Zhang M, Wang W, Zhang J

PubMed · May 22, 2025
Current brain tumor segmentation methods often utilize a U-Net architecture based on efficient convolutional neural networks. While effective, these architectures primarily model local dependencies and lack a pure Transformer's ability to capture global interactions; using a pure Transformer directly, however, causes the network to lose local feature information. To address this limitation, we propose the Generative Adversarial Dilated Attention Convolutional Transformer (GDacFormer). GDacFormer enhances interactions between tumor regions while balancing global and local information by integrating adversarial learning with an improved Transformer module. Specifically, GDacFormer leverages a generative adversarial segmentation network to learn richer and more detailed features. It integrates a novel Transformer module, DacFormer, featuring multi-scale dilated attention and a next convolution block. This module, embedded within the generator, aggregates multi-scale semantic information, efficiently reduces redundancy in the self-attention mechanism, and enhances local feature representations, thus refining the brain tumor segmentation results. GDacFormer achieves Dice values for whole tumor, core tumor, and enhancing tumor segmentation of 90.9%/90.8%/93.7%, 84.6%/85.7%/93.5%, and 77.9%/79.3%/86.3% on the BraTS2019/2020/2021 datasets, respectively. Extensive evaluations demonstrate the effectiveness and competitiveness of GDacFormer. The code will be made publicly available at https://github.com/MuqinZ/GDacFormer.
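
The adversarial training component can be pictured with a minimal generator/discriminator pair over (image, mask) inputs. The toy networks and loss weighting below are placeholders, not the DacFormer architecture.

```python
import torch
import torch.nn as nn

# Generator maps 4 MRI modalities to 3 tumor-subregion logit maps.
G = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 3, 1))
# Discriminator judges concatenated (image, mask) pairs as real or generated.
D = nn.Sequential(nn.Conv2d(4 + 3, 8, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

bce = nn.BCEWithLogitsLoss()
mri = torch.randn(2, 4, 64, 64)                      # 4 MRI modalities
gt = torch.randint(0, 2, (2, 3, 64, 64)).float()     # ground-truth subregions

pred = G(mri)
seg_loss = bce(pred, gt)                             # supervised segmentation term
adv_loss = bce(D(torch.cat([mri, pred.sigmoid()], 1)),
               torch.ones(2, 1))                     # generator tries to fool D
(seg_loss + 0.1 * adv_loss).backward()
```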

Leveraging deep learning-based kernel conversion for more precise airway quantification on CT.

Choe J, Yun J, Kim MJ, Oh YJ, Bae S, Yu D, Seo JB, Lee SM, Lee HY

PubMed · May 22, 2025
To evaluate the variability of fully automated airway quantitative CT (QCT) measures caused by different kernels and the effect of kernel conversion. This retrospective study included 96 patients who underwent non-enhanced chest CT at two centers. CT scans were reconstructed using four kernels (medium soft, medium sharp, sharp, very sharp) from three vendors. Kernel conversion targeting the medium soft kernel as reference was applied to the sharp kernel images, and fully automated airway quantification was performed before and after conversion. The effects of kernel type and conversion on airway quantification were evaluated using analysis of variance, paired t-tests, and the concordance correlation coefficient (CCC). Airway QCT measures (e.g., Pi10, wall thickness, wall area percentage, lumen diameter) decreased with sharper kernels (all p < 0.001), with varying degrees of variability across variables and vendors. Kernel conversion substantially reduced variability between medium soft and sharp kernel images for vendors A (pooled CCC: 0.59 vs. 0.92) and B (0.40 vs. 0.91) and for the lung-dedicated sharp kernels of vendor C (0.26 vs. 0.71). However, it was ineffective for the non-lung-dedicated sharp kernels of vendor C (0.81 vs. 0.43) and showed limited improvement in the variability of QCT measures at the subsegmental level. In theoretical tests, consistent airway segmentation and identical anatomic labeling improved subsegmental airway variability. Deep learning-based kernel conversion reduced the measurement variability of airway QCT across kernels and vendors but was less effective for non-lung-dedicated kernels and subsegmental airways. Consistent airway segmentation and precise anatomic labeling can further enhance reproducibility for reliable automated quantification. Question: How do different CT reconstruction kernels affect the measurement variability of automated airway measurements, and can deep learning-based kernel conversion reduce this variability? Findings: Kernel conversion improved measurement consistency across vendors for lung-dedicated kernels but showed limited effectiveness for non-lung-dedicated kernels and subsegmental airways. Clinical relevance: Understanding kernel-related variability in airway quantification and mitigating it through deep learning enables standardized analysis, but further refinements are needed for robust airway segmentation, particularly for subsegmental airways and specific kernels.
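
The agreement statistic used throughout, Lin's concordance correlation coefficient, penalizes both poor correlation and systematic bias between paired measurements. A NumPy sketch follows; the paired Pi10-style values are made up for illustration, not taken from the study.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

soft = np.array([1.02, 0.98, 1.10, 0.95])    # e.g. Pi10 on the medium-soft kernel
sharp = np.array([0.91, 0.88, 1.01, 0.86])   # same subjects, sharp kernel
print(ccc(soft, sharp))
```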

CT-Agent: A Multimodal-LLM Agent for 3D CT Radiology Question Answering

Yuren Mao, Wenyi Xu, Yuyang Qin, Yunjun Gao

arXiv preprint · May 22, 2025
Computed Tomography (CT) produces 3D volumetric medical data that can be viewed as hundreds of cross-sectional images (a.k.a. slices) and provides detailed anatomical information for diagnosis. For radiologists, creating CT radiology reports is time-consuming and error-prone, so a visual question answering (VQA) system that can answer radiologists' questions about anatomical regions on the CT scan, and even automatically generate a radiology report, is urgently needed. However, existing VQA systems cannot adequately handle the CT radiology question answering (CTQA) task because: (1) anatomic complexity makes CT images difficult to understand; and (2) spatial relationships across hundreds of slices are difficult to capture. To address these issues, this paper proposes CT-Agent, a multimodal agentic framework for CTQA. CT-Agent adopts anatomically independent tools to break down the anatomic complexity and efficiently captures cross-slice spatial relationships with a global-local token compression strategy. Experimental results on two 3D chest CT datasets, CT-RATE and RadGenome-ChestCT, verify the superior performance of CT-Agent.
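
One simple form a global-local token compression can take is pooling each slice's patch tokens into a single global token while retaining a few salient local tokens. The PyTorch sketch below illustrates this idea only; it is not CT-Agent's actual strategy, and the salience heuristic (token norm) is an assumption.

```python
import torch

def compress_tokens(slice_tokens, n_local=4):
    # slice_tokens: (num_slices, num_patches, dim) per-slice patch embeddings
    global_tok = slice_tokens.mean(dim=1, keepdim=True)          # (S, 1, D)
    # Keep the n_local highest-norm patch tokens per slice as "local" tokens.
    idx = slice_tokens.norm(dim=-1).topk(n_local, dim=1).indices  # (S, k)
    local_tok = torch.gather(
        slice_tokens, 1,
        idx.unsqueeze(-1).expand(-1, -1, slice_tokens.size(-1)))  # (S, k, D)
    return torch.cat([global_tok, local_tok], dim=1)              # (S, 1+k, D)

out = compress_tokens(torch.randn(300, 196, 768))  # 300 slices, 196 patches each
print(out.shape)  # torch.Size([300, 5, 768])
```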

SAMba-UNet: Synergizing SAM2 and Mamba in UNet with Heterogeneous Aggregation for Cardiac MRI Segmentation

Guohao Huo, Ruiting Dai, Hao Tang

arXiv preprint · May 22, 2025
To address the challenge of complex pathological feature extraction in automated cardiac MRI segmentation, this study proposes an innovative dual-encoder architecture named SAMba-UNet. The framework achieves cross-modal feature collaborative learning by integrating the vision foundation model SAM2, the state-space model Mamba, and the classical UNet. To mitigate domain discrepancies between medical and natural images, a Dynamic Feature Fusion Refiner is designed, which enhances small lesion feature extraction through multi-scale pooling and a dual-path calibration mechanism across channel and spatial dimensions. Furthermore, a Heterogeneous Omni-Attention Convergence Module (HOACM) is introduced, combining global contextual attention with branch-selective emphasis mechanisms to effectively fuse SAM2's local positional semantics and Mamba's long-range dependency modeling capabilities. Experiments on the ACDC cardiac MRI dataset demonstrate that the proposed model achieves a Dice coefficient of 0.9103 and an HD95 boundary error of 1.0859 mm, significantly outperforming existing methods, particularly in boundary localization for complex pathological structures such as right ventricular anomalies. This work provides an efficient and reliable solution for automated cardiac disease diagnosis, and the code will be open-sourced.
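
The "dual-path calibration mechanism across channel and spatial dimensions" is reminiscent of squeeze-excitation followed by spatial attention. The block below is a hedged sketch in that spirit, with illustrative layer sizes; it is not the paper's Dynamic Feature Fusion Refiner.

```python
import torch
import torch.nn as nn

class DualPathCalibration(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
                                     nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                       # channel recalibration
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)               # spatial recalibration

y = DualPathCalibration(64)(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```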

On factors that influence deep learning-based dose prediction of head and neck tumors.

Gao R, Mody P, Rao C, Dankers F, Staring M

PubMed · May 22, 2025
Objective: This study investigates key factors influencing deep learning-based dose prediction models for head and neck cancer radiation therapy. The goal is to evaluate model accuracy, robustness, and computational efficiency, and to identify key components necessary for optimal performance. Approach: We systematically analyze the impact of input and dose grid resolution, input type, loss function, model architecture, and noise on model performance. Two datasets are used: a public dataset (OpenKBP) and an in-house clinical dataset. Model performance is primarily evaluated using two metrics: dose score and dose-volume histogram (DVH) score. Main results: High-resolution inputs improve prediction accuracy (dose score and DVH score) by 8.6%-13.5% compared to low resolution. Using a combination of CT, planning target volumes, and organs-at-risk as input significantly enhances accuracy, with improvements of 57.4%-86.8% over using CT alone. Integrating mean absolute error (MAE) loss with value-based and criteria-based DVH loss functions further boosts the DVH score by 7.2%-7.5% compared to MAE loss alone. In the robustness analysis, most models show minimal degradation under Poisson noise (0-0.3 Gy) but are more susceptible to adversarial noise (0.2-7.8 Gy). Notably, certain models, such as SwinUNETR, demonstrate superior robustness against adversarial perturbations. Significance: These findings highlight the importance of optimizing deep learning models and provide valuable guidance for achieving more accurate and reliable radiotherapy dose prediction.
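
A combined MAE + DVH loss of the kind evaluated above can be prototyped with soft (sigmoid) thresholding so the DVH term stays differentiable. Thresholds, weighting, and tensor shapes in the sketch below are assumptions for illustration, not the study's exact loss.

```python
import torch

def soft_dvh(dose, mask, thresholds, tau=0.5):
    """Soft fraction of masked voxels receiving at least t Gy, per threshold."""
    d = dose[mask.bool()]
    return torch.stack([torch.sigmoid((d - t) / tau).mean() for t in thresholds])

def dose_loss(pred, target, oar_mask, thresholds=(10.0, 20.0, 30.0), w=0.1):
    mae = (pred - target).abs().mean()                    # voxelwise MAE term
    dvh = (soft_dvh(pred, oar_mask, thresholds)
           - soft_dvh(target, oar_mask, thresholds)).abs().mean()
    return mae + w * dvh

pred, target = torch.rand(64, 64, 64) * 60, torch.rand(64, 64, 64) * 60  # Gy
mask = torch.zeros_like(pred)
mask[20:40, 20:40, 20:40] = 1.0                           # a toy OAR mask
print(dose_loss(pred, target, mask))
```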