Page 57 of 66652 results

Can intraoperative improvement of radial endobronchial ultrasound imaging enhance the diagnostic yield in peripheral pulmonary lesions?

Nishida K, Ito T, Iwano S, Okachi S, Nakamura S, Chrétien B, Chen-Yoshikawa TF, Ishii M

pubmed · May 26, 2025
Data regarding the diagnostic efficacy of radial endobronchial ultrasound (R-EBUS) findings obtained via transbronchial needle aspiration (TBNA)/biopsy (TBB) with endobronchial ultrasonography with a guide sheath (EBUS-GS) for peripheral pulmonary lesions (PPLs) are lacking. We evaluated whether intraoperative probe repositioning improves R-EBUS imaging and affects diagnostic yield and safety of EBUS-guided sampling for PPLs. We retrospectively studied 363 patients with PPLs who underwent TBNA/TBB (83 lesions) or TBB (280 lesions) using EBUS-GS. Based on the R-EBUS findings before and after these procedures, patients were categorized into three groups: the improved R-EBUS image (n = 52), unimproved R-EBUS image (n = 69), and initial within-lesion groups (n = 242). The impact of improved R-EBUS findings on diagnostic yield and complications was assessed using multivariable logistic regression, adjusting for lesion size, lesion location, and the presence of a bronchus leading to the lesion on CT. A separate exploratory random-forest model with SHAP analysis was used to explore factors associated with successful repositioning in lesions not initially "within." The diagnostic yield in the improved R-EBUS group was significantly higher than that in the unimproved R-EBUS group (76.9% vs. 46.4%, p = 0.001). The regression model revealed that the improvement in intraoperative R-EBUS findings was associated with a high diagnostic yield (odds ratio: 3.55, 95% confidence interval, 1.57-8.06, p = 0.002). Machine learning analysis indicated that inner lesion location and radiographic visibility were the most influential predictors of successful repositioning. The complication rates were similar across all groups (total complications: 5.8% vs. 4.3% vs. 6.2%, p = 0.943). Improved R-EBUS findings during TBNA/TBB or TBB with EBUS-GS were associated with a high diagnostic yield without an increase in complications, even when the initial R-EBUS findings were inadequate. 
These findings suggest that intraoperative probe repositioning can safely improve diagnostic yield.
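As a numerical aside, a crude (unadjusted) odds ratio can be reconstructed from the yields reported above (76.9% of 52 vs. 46.4% of 69). This is a minimal sketch, not the paper's multivariable model, which additionally adjusted for lesion size, location, and the bronchus sign:

```python
import math

# Counts reconstructed from the reported yields:
# 76.9% of 52 = 40/52 diagnosed in the improved group;
# 46.4% of 69 = 32/69 diagnosed in the unimproved group.
a, b = 40, 52 - 40   # improved group: diagnosed, not diagnosed
c, d = 32, 69 - 32   # unimproved group: diagnosed, not diagnosed

crude_or = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)           # Woolf standard error
lo = math.exp(math.log(crude_or) - 1.96 * se_log_or)   # 95% CI lower bound
hi = math.exp(math.log(crude_or) + 1.96 * se_log_or)   # 95% CI upper bound

print(f"crude OR = {crude_or:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The crude estimate (about 3.85) is close to the adjusted odds ratio of 3.55 reported in the abstract, which is reassuring but not a substitute for the adjusted analysis.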

Deep Learning for Pneumonia Diagnosis: A Custom CNN Approach with Superior Performance on Chest Radiographs

Mehta, A., Vyas, M.

medrxiv preprint · May 26, 2025
Pneumonia is a major global health problem that causes substantial illness and death, underscoring the need to identify and treat it quickly and accurately. Despite advances in imaging technology, radiologists' manual reading of chest X-rays remains the standard method for pneumonia detection, which delays both diagnosis and treatment. This study proposes a deep learning method to automate pneumonia detection. The approach employs a custom convolutional neural network (CNN) trained on pneumonia-positive and pneumonia-negative cases from several healthcare providers. Various pre-processing steps were applied to the chest radiographs to improve data quality and training efficiency before model training. In a comparison with VGG19, ResNet50, InceptionV3, DenseNet201, and MobileNetV3, our custom CNN model achieved the best balance of accuracy, recall, and parameter complexity, reaching 96.5% accuracy and a 96.6% F1 score. This study contributes to the development of an automated and reliable pneumonia detection system that could improve patient outcomes and increase healthcare efficiency. The full project is available here.
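For reference, the reported accuracy and F1 score follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only to produce metrics of the same magnitude as those reported, not the study's actual test split:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy and F1 score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Hypothetical counts for illustration (not from the paper)
acc, f1 = classification_metrics(tp=480, fp=17, fn=18, tn=485)
print(f"accuracy={acc:.3f}, F1={f1:.3f}")
```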

Deep learning reconstruction combined with contrast-enhancement boost in dual-low dose CT pulmonary angiography: a two-center prospective trial.

Shen L, Lu J, Zhou C, Bi Z, Ye X, Zhao Z, Xu M, Zeng M, Wang M

pubmed · May 24, 2025
To investigate whether deep learning reconstruction (DLR) combined with the contrast-enhancement-boost (CE-boost) technique can improve the diagnostic quality of CT pulmonary angiography (CTPA) at low radiation and contrast doses, compared with routine CTPA using hybrid iterative reconstruction (HIR). This prospective two-center study included 130 patients who underwent CTPA for suspected pulmonary embolism. Patients were randomly divided into two groups: the routine CTPA group, reconstructed using HIR; and the dual-low dose CTPA group, reconstructed using HIR and DLR, additionally combined with CE-boost to generate HIR-boost and DLR-boost images. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the pulmonary arteries were quantitatively assessed. Two experienced radiologists independently scored the CT images (5, best; 1, worst) for overall image noise and vascular contrast. Diagnostic performance for PE detection was calculated for each dataset. Patient demographics were similar between groups. Compared to HIR images of the routine group, DLR-boost images of the dual-low dose group received significantly better qualitative scores (p < 0.001). The CT values of the pulmonary arteries were comparable between the DLR-boost and HIR images (p > 0.05), whereas the SNRs and CNRs of the pulmonary arteries in the DLR-boost images were the highest among all five datasets (p < 0.001). The AUCs of DLR, HIR-boost, and DLR-boost were 0.933, 0.924, and 0.986, respectively (all p > 0.05). DLR combined with the CE-boost technique can significantly improve the image quality of CTPA at reduced radiation and contrast doses, facilitating a more accurate diagnosis of pulmonary embolism.
Question: The dual-low dose protocol is essential for detecting pulmonary emboli (PE) in follow-up CT pulmonary angiography (CTPA), yet effective solutions are still lacking.
Findings: Deep learning reconstruction (DLR)-boost with reduced radiation and contrast doses demonstrated higher quantitative and qualitative image quality than hybrid iterative reconstruction in routine CTPA.
Clinical relevance: A DLR-boost-based low-radiation, low-contrast-dose CTPA protocol offers a novel strategy to further enhance image quality and diagnostic accuracy for pulmonary embolism patients.
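SNR and CNR of the pulmonary arteries are typically computed from ROI statistics; a minimal sketch with hypothetical HU values, since the abstract does not give its exact ROI definitions:

```python
def snr(mean_vessel, sd_noise):
    # Signal-to-noise ratio: vessel attenuation over background image noise
    return mean_vessel / sd_noise

def cnr(mean_vessel, mean_background, sd_noise):
    # Contrast-to-noise ratio: vessel-to-background contrast over image noise
    return (mean_vessel - mean_background) / sd_noise

# Hypothetical HU measurements for illustration (not from the paper)
pa_hu, muscle_hu, noise_sd = 420.0, 55.0, 18.0
print(f"SNR={snr(pa_hu, noise_sd):.1f}, CNR={cnr(pa_hu, muscle_hu, noise_sd):.1f}")
```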

Evaluation of locoregional invasiveness of early lung adenocarcinoma manifesting as ground-glass nodules via [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT imaging.

Ruan D, Shi S, Guo W, Pang Y, Yu L, Cai J, Wu Z, Wu H, Sun L, Zhao L, Chen H

pubmed · May 24, 2025
Accurate differentiation of the histologic invasiveness of early-stage lung adenocarcinoma is crucial for determining surgical strategies. This study aimed to investigate the potential of [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT in assessing the invasiveness of early lung adenocarcinoma presenting as ground-glass nodules (GGNs) and identifying imaging features with strong predictive potential. This prospective study (NCT04588064) was conducted between July 2020 and July 2022, focusing on GGNs that were confirmed postoperatively to be invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma (MIA), or precursor glandular lesions (PGL). A total of 45 patients with 53 pulmonary GGNs were included: 19 GGNs associated with PGL-MIA and 34 with IAC. Lung nodules were segmented using the Segment Anything Model in Medical Images (MedSAM) and the PET Tumor Segmentation Extension. Clinical characteristics, along with conventional and high-throughput radiomics features from high-resolution CT (HRCT) and PET scans, were analysed. The predictive performance of these features in differentiating between PGL or MIA (PGL-MIA) and IAC was assessed using 5-fold cross-validation across six machine learning algorithms. Model validation was performed on an independent external test set (n = 11). The Chi-squared, Fisher's exact, and DeLong tests were employed to compare the performance of the models. The maximum standardised uptake value (SUVmax) derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET was identified as an independent predictor of IAC. A cut-off value of 1.82 yielded a sensitivity of 94% (32/34), specificity of 84% (16/19), and an overall accuracy of 91% (48/53) in the training set, while achieving 100% (12/12) accuracy in the external test set.
Radiomics-based classification further improved diagnostic performance, achieving a sensitivity of 97% (33/34), specificity of 89% (17/19), accuracy of 94% (50/53), and an area under the receiver operating characteristic curve (AUC) of 0.97 [95% CI: 0.93-1.00]. Compared with the CT-based radiomics model and the PET-based model, the combined PET/CT radiomics model did not show significant improvement in predictive performance. The key predictive feature was [<sup>68</sup>Ga]Ga-FAPI-46 PET log-sigma-7-mm-3D_firstorder_RootMeanSquared. The SUVmax derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT can effectively differentiate the invasiveness of early-stage lung adenocarcinoma manifesting as GGNs. Integrating high-throughput features from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT images can considerably enhance classification accuracy. NCT04588064; URL: https://clinicaltrials.gov/study/NCT04588064 .
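The reported training-set operating point can be reproduced directly from the stated counts (32/34 sensitivity, 16/19 specificity); a minimal sketch of the arithmetic behind the SUVmax cut-off of 1.82:

```python
# Counts stated in the abstract for the training set (n = 53 lesions)
tp, fn = 32, 2    # IAC lesions above / below the 1.82 SUVmax cut-off
tn, fp = 16, 3    # PGL-MIA lesions below / above the cut-off

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(f"sens={sensitivity:.0%}, spec={specificity:.0%}, acc={accuracy:.0%}")
```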

A novel multimodal computer-aided diagnostic model for pulmonary embolism based on hybrid transformer-CNN and tabular transformer.

Zhang W, Gu Y, Ma H, Yang L, Zhang B, Wang J, Chen M, Lu X, Li J, Liu X, Yu D, Zhao Y, Tang S, He Q

pubmed · May 24, 2025
Pulmonary embolism (PE) is a life-threatening condition in which early diagnosis and prompt treatment are essential to reducing morbidity and mortality. While combining CT images and electronic health records (EHR) can improve computer-aided diagnosis, many challenges remain. The primary objective of this study is to leverage both 3D CT images and EHR data to improve PE diagnosis. First, for 3D CT images, we propose a network combining Swin Transformers with 3D CNNs, enhanced by a Multi-Scale Feature Fusion (MSFF) module to address fusion challenges between different encoders. Second, we introduce a Polarized Self-Attention (PSA) module to enhance the attention mechanism within the 3D CNN. Then, for EHR data, we design a Tabular Transformer for effective feature extraction. Finally, we design and evaluate three multimodal attention fusion modules to integrate CT and EHR features, selecting the most effective one for the final fusion. Experimental results on the RadFusion dataset demonstrate that our model significantly outperforms existing state-of-the-art methods, achieving an AUROC of 0.971, an F1 score of 0.926, and an accuracy of 0.920. These results underscore the effectiveness and innovation of our multimodal approach in advancing PE diagnosis.
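Multimodal attention fusion of image and tabular features can be sketched as cross-attention, with the tabular embedding querying the image tokens. This is a generic illustration, not the paper's specific MSFF/PSA design; all shapes and dimensions are assumptions:

```python
import numpy as np

def cross_attention(query, keys, values):
    """Scaled dot-product attention: the tabular query attends over image tokens."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)   # (1, n_tokens) similarity scores
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()      # softmax over the image tokens
    return weights @ values                # (1, d) fused feature vector

rng = np.random.default_rng(0)
tabular = rng.standard_normal((1, 64))        # EHR embedding (assumed dim 64)
image_tokens = rng.standard_normal((49, 64))  # CT feature tokens (assumed 49)
fused = cross_attention(tabular, image_tokens, image_tokens)
print(fused.shape)
```

The fused vector would then feed a classification head; the paper evaluates three such fusion variants and keeps the best.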

Meta-analysis of AI-based pulmonary embolism detection: How reliable are deep learning models?

Lanza E, Ammirabile A, Francone M

pubmed · May 23, 2025
Deep learning (DL)-based methods show promise in detecting pulmonary embolism (PE) on CT pulmonary angiography (CTPA), potentially improving diagnostic accuracy and workflow efficiency. This meta-analysis aimed to (1) determine pooled performance estimates of DL algorithms for PE detection; and (2) compare the diagnostic efficacy of convolutional neural network (CNN)- versus U-Net-based architectures. Following PRISMA guidelines, we searched PubMed and EMBASE through April 15, 2025 for English-language studies (2010-2025) reporting DL models for PE detection with extractable 2 × 2 data or performance metrics. True/false positives and negatives were reconstructed when necessary under an assumed 50 % PE prevalence (with 0.5 continuity correction). We approximated AUROC as the mean of sensitivity and specificity if not directly reported. Sensitivity, specificity, accuracy, PPV and NPV were pooled using a DerSimonian-Laird random-effects model with Freeman-Tukey transformation; AUROC values were combined via a fixed-effect inverse-variance approach. Heterogeneity was assessed by Cochran's Q and I<sup>2</sup>. Subgroup analyses contrasted CNN versus U-Net models. Twenty-four studies (n = 22,984 patients) met inclusion criteria. Pooled estimates were: AUROC 0.895 (95 % CI: 0.874-0.917), sensitivity 0.894 (0.856-0.923), specificity 0.871 (0.831-0.903), accuracy 0.857 (0.833-0.882), PPV 0.832 (0.794-0.869) and NPV 0.902 (0.874-0.929). Between-study heterogeneity was high (I<sup>2</sup> ≈ 97 % for sensitivity/specificity). U-Net models exhibited higher sensitivity (0.899 vs 0.893) and CNN models higher specificity (0.926 vs 0.900); subgroup Q-tests confirmed significant differences for both sensitivity (p = 0.0002) and specificity (p < 0.001). DL algorithms demonstrate high diagnostic accuracy for PE detection on CTPA, with complementary strengths: U-Net architectures excel in true-positive identification, whereas CNNs yield fewer false positives. 
However, marked heterogeneity underscores the need for standardized, prospective validation before routine clinical implementation.
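The DerSimonian-Laird random-effects pooling described above can be sketched in a few lines. For brevity this uses the simpler single-arcsine variance stabilization rather than the Freeman-Tukey double-arcsine transform the authors applied, and the input proportions are toy values:

```python
import math

def dl_pool(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions
    (single-arcsine stabilization for simplicity)."""
    t = [math.asin(math.sqrt(x / n)) for x, n in zip(events, totals)]
    v = [1.0 / (4 * n) for n in totals]               # within-study variances
    w = [1.0 / vi for vi in v]
    t_fixed = sum(wi * ti for wi, ti in zip(w, t)) / sum(w)
    q = sum(wi * (ti - t_fixed) ** 2 for wi, ti in zip(w, t))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(t) - 1)) / c)           # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]          # random-effects weights
    t_pooled = sum(wi * ti for wi, ti in zip(w_star, t)) / sum(w_star)
    return math.sin(t_pooled) ** 2, tau2              # back-transform

# Toy sensitivities from three hypothetical studies (not the meta-analysis data)
p, tau2 = dl_pool(events=[85, 92, 78], totals=[100, 100, 100])
print(f"pooled={p:.3f}, tau^2={tau2:.4f}")
```

A nonzero tau² indicates between-study heterogeneity, consistent with the high I² the meta-analysis reports.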

A Unified Multi-Scale Attention-Based Network for Automatic 3D Segmentation of Lung Parenchyma & Nodules In Thoracic CT Images

Muhammad Abdullah, Furqan Shaukat

arxiv preprint · May 23, 2025
Lung cancer remains one of the deadliest diseases worldwide, and computer-aided detection (CAD) can support early detection and thus improve survival rates. Accurate lung parenchyma segmentation (including juxta-pleural nodules) and lung nodule segmentation, the primary indicator of lung cancer, play a crucial role in the overall accuracy of a lung CAD pipeline. Lung nodule segmentation is challenging because of the diverse nodule types and other confounding structures within the lung lobes. Traditional machine/deep learning methods suffer from limited generalization and robustness. Recent vision-language models/foundation models perform well at the anatomical level but struggle on fine-grained segmentation tasks, and their semi-automatic nature limits their effectiveness in real-time clinical scenarios. In this paper, we propose a novel method for accurate 3D segmentation of lung parenchyma and lung nodules. The proposed architecture is an attention-based network with residual blocks at each encoder-decoder stage. Max pooling is replaced by strided convolutions at the encoder, and trilinear interpolation is replaced by transposed convolutions at the decoder to maximize the number of learnable parameters. Dilated convolutions at each encoder-decoder stage allow the model to capture larger context without increasing computational cost. The proposed method has been evaluated extensively on one of the largest publicly available datasets, LUNA16, and compared with recent notable work in the domain using standard performance metrics such as Dice score and IoU. The results show that the proposed method outperforms state-of-the-art methods. The source code, datasets, and pre-processed data can be accessed at: https://github.com/EMeRALDsNRPU/Attention-Based-3D-ResUNet.
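The encoder substitutions described above (strided convolutions for pooling, dilation for wider context) can be made concrete with the standard convolution output-size formula; a minimal sketch, not tied to the authors' exact layer configuration:

```python
def conv_out_size(n, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size of a convolution along one axis."""
    return (n + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

def receptive_field(kernel, dilation):
    """Effective receptive field of a single dilated convolution."""
    return dilation * (kernel - 1) + 1

# A stride-2, 3x3 conv with padding 1 halves the feature map like 2x2 max
# pooling does, but with learnable weights:
print(conv_out_size(64, kernel=3, stride=2, padding=1))   # 32
# Dilation widens the context seen by each output without extra parameters:
print(receptive_field(kernel=3, dilation=2))              # 5
```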

Pixels to Prognosis: Harmonized Multi-Region CT-Radiomics and Foundation-Model Signatures Across Multicentre NSCLC Data

Shruti Atul Mali, Zohaib Salahuddin, Danial Khan, Yumeng Zhang, Henry C. Woodruff, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Luis Marti-Bonmati, Philippe Lambin

arxiv preprint · May 23, 2025
Purpose: To evaluate the impact of harmonization and multi-region CT image feature integration on survival prediction in non-small cell lung cancer (NSCLC) patients, using handcrafted radiomics, pretrained foundation model (FM) features, and clinical data from a multicenter dataset. Methods: We analyzed CT scans and clinical data from 876 NSCLC patients (604 training, 272 test) across five centers. Features were extracted from the whole lung, tumor, mediastinal nodes, coronary arteries, and coronary artery calcium (CAC). Handcrafted radiomics and FM deep features were harmonized using ComBat, reconstruction kernel normalization (RKN), and RKN+ComBat. Regularized Cox models predicted overall survival; performance was assessed using the concordance index (C-index), 5-year time-dependent area under the curve (t-AUC), and hazard ratio (HR). SHapley Additive exPlanations (SHAP) values explained feature contributions. A consensus model used agreement across top region of interest (ROI) models to stratify patient risk. Results: TNM staging showed prognostic utility (C-index = 0.67; HR = 2.70; t-AUC = 0.85). The clinical + tumor radiomics model with ComBat achieved a C-index of 0.7552 and t-AUC of 0.8820. FM features (50-voxel cubes) combined with clinical data yielded the highest performance (C-index = 0.7616; t-AUC = 0.8866). An ensemble of all ROIs and FM features reached a C-index of 0.7142 and t-AUC of 0.7885. The consensus model, covering 78% of valid test cases, achieved a t-AUC of 0.92, sensitivity of 97.6%, and specificity of 66.7%. Conclusion: Harmonization and multi-region feature integration improve survival prediction in multicenter NSCLC data. Combining interpretable radiomics, FM features, and consensus modeling enables robust risk stratification across imaging centers.
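The concordance index used throughout the results above can be computed by pairwise comparison; a minimal sketch of Harrell's C for right-censored survival data (variable names are assumptions):

```python
def c_index(times, events, risks):
    """Harrell's concordance index: fraction of comparable pairs in which the
    higher-risk patient experiences the event earlier (ties count half)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if i has an observed event before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ordered risk scores yield C = 1.0
print(c_index(times=[2, 4, 6], events=[1, 1, 0], risks=[0.9, 0.5, 0.1]))  # 1.0
```

A C-index of 0.5 corresponds to random ranking, so the reported values around 0.75 indicate a substantial but imperfect ordering of survival times.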

Lung volume assessment for mean dark-field coefficient calculation using different determination methods.

Gassert FT, Heuchert J, Schick R, Bast H, Urban T, Dorosti T, Zimmermann GS, Ziegelmayer S, Marka AW, Graf M, Makowski MR, Pfeiffer D, Pfeiffer F

pubmed · May 23, 2025
Accurate lung volume determination is crucial for reliable dark-field imaging. We compared different approaches for determining lung volume in mean dark-field coefficient calculation. In this retrospective analysis of data prospectively acquired between October 2018 and October 2020, patients at least 18 years of age who underwent chest computed tomography (CT) were screened for study participation. Inclusion criteria were the ability to consent and to stand upright without help. Exclusion criteria were pregnancy, lung cancer, pleural effusion, atelectasis, air space disease, ground-glass opacities, and pneumothorax. Lung volume was calculated using four methods: conventional radiography (CR) using shape information; a convolutional neural network (CNN) trained for CR; CT-based volume estimation; and results from pulmonary function testing (PFT). Results were compared using a Student t-test and Spearman ρ correlation statistics. We studied 81 participants (51 men, 30 women), aged 64 ± 12 years (mean ± standard deviation). The lung volumes derived from the various methods all differed from one another: CR, 7.27 ± 1.64 L; CNN, 4.91 ± 1.05 L; CT, 5.25 ± 1.36 L; PFT, 6.54 ± 1.52 L; p < 0.001 for all comparisons. A high positive correlation was found for all combinations (p < 0.001 for all), the highest between CT and CR (ρ = 0.88) and the lowest between PFT and CNN (ρ = 0.78). Lung volume, and therefore the mean dark-field coefficient, depends strongly on the determination method, reflecting differences in positioning and inhalation depth. This study underscores the impact of the method used for lung volume determination. In the context of mean dark-field coefficient calculation, CR-based methods are preferable because dark-field and conventional images are acquired at the same breathing state, eliminating biases due to differences in inhalation depth.
Lung volume measurements vary significantly between different determination methods. Mean dark-field coefficient calculations require the same method to ensure comparability. Radiography-based methods simplify workflows and minimize biases, making them most suitable.
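Spearman ρ, the statistic used above to compare the volume estimates, is simply Pearson correlation on ranks; a minimal pure-Python sketch for the no-ties case, with hypothetical volume values:

```python
def spearman_rho(x, y):
    """Spearman rank correlation, assuming no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    # With no ties, Spearman reduces to 1 - 6*sum(d^2) / (n*(n^2 - 1))
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Monotonically related volume estimates (hypothetical litres) give rho = 1.0
print(spearman_rho([7.1, 5.2, 6.5, 4.9], [5.3, 4.0, 5.0, 3.8]))  # 1.0
```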
