
Deep learning reconstruction improves computer-aided pulmonary nodule detection and measurement accuracy for ultra-low-dose chest CT.

Wang J, Zhu Z, Pan Z, Tan W, Han W, Zhou Z, Hu G, Ma Z, Xu Y, Ying Z, Sui X, Jin Z, Song L, Song W

PubMed · May 30, 2025
To compare the image quality, pulmonary nodule detectability, and measurement accuracy between deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) of chest ultra-low-dose CT (ULDCT). Participants who underwent chest standard-dose CT (SDCT) followed by ULDCT from October 2020 to January 2022 were prospectively included. ULDCT images reconstructed with HIR and DLR were compared with SDCT images to evaluate image quality, nodule detection rate, and measurement accuracy using a commercially available deep learning-based nodule evaluation system. The Wilcoxon signed-rank test was used to compare the percentage errors of nodule size and nodule volume between HIR and DLR images. Eighty-four participants (54 ± 13 years; 26 men) were finally enrolled. The effective radiation doses of ULDCT and SDCT were 0.16 ± 0.02 mSv and 1.77 ± 0.67 mSv, respectively (P < 0.001). Lung tissue noise (mean ± standard deviation) was 61.4 ± 3.0 HU for SDCT, and 61.5 ± 2.8 HU and 55.1 ± 3.4 HU for ULDCT reconstructed with the HIR-Strong (HIR-Str) and DLR-Strong (DLR-Str) settings, respectively (P < 0.001). A total of 535 nodules were detected. The nodule detection rates of ULDCT HIR-Str and ULDCT DLR-Str were 74.0% and 83.4%, respectively (P < 0.001). The absolute percentage error in nodule volume relative to SDCT was 19.5% for ULDCT HIR-Str versus 17.9% for ULDCT DLR-Str (P < 0.001). Compared with HIR, DLR reduced image noise, increased the nodule detection rate, and improved the measurement accuracy of nodule volume at chest ULDCT.
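A minimal sketch of the paired comparison described above (a Wilcoxon signed-rank test over per-nodule volume percentage errors), using hypothetical error arrays in place of the study data, since the authors' code is not public:

```python
# Illustrative sketch, not the authors' code: compare paired absolute
# percentage errors in nodule volume between HIR and DLR reconstructions.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical per-nodule absolute percentage errors vs. the SDCT reference (%)
err_hir = rng.normal(loc=19.5, scale=6.0, size=535).clip(min=0)
err_dlr = rng.normal(loc=17.9, scale=6.0, size=535).clip(min=0)

stat, p_value = wilcoxon(err_hir, err_dlr)  # paired, two-sided by default
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```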

Machine learning-based hemodynamics quantitative assessment of pulmonary circulation using computed tomographic pulmonary angiography.

Xie H, Zhao X, Zhang N, Liu J, Yang G, Cao Y, Xu J, Xu L, Sun Z, Wen Z, Chai S, Liu D

PubMed · May 30, 2025
Pulmonary hypertension (PH) is a malignant disease of the pulmonary circulation. Right heart catheterization (RHC) is the gold-standard procedure for quantitative evaluation of pulmonary hemodynamics, but accurate and noninvasive quantitative evaluation remains challenging given the limitations of currently available assessment methods. Patients who underwent computed tomographic pulmonary angiography (CTPA) and RHC examinations within 2 weeks were included. The dataset was randomly divided into a training set and a test set at an 8:2 ratio. A radiomic feature model and a two-dimensional (2D) feature model were constructed to quantitatively evaluate pulmonary hemodynamics. Model performance was determined by calculating the mean squared error, the intraclass correlation coefficient (ICC), and the area under the precision-recall curve (AUC-PR), and by performing Bland-Altman analyses. A total of 345 patients were identified: 271 with PH (mean age 50 ± 17 years, 93 men) and 74 without PH (mean age 55 ± 16 years, 26 men). The pulmonary hemodynamic predictions of the radiomic feature model, which integrated 5 2D features and 30 radiomic features, were consistent with the results from RHC and outperformed the 2D feature model. The radiomic feature model exhibited moderate to good reproducibility in predicting pulmonary hemodynamic parameters (ICC up to 0.87). In addition, PH could be accurately identified with a classification model (AUC-PR = 0.99). This study provides a noninvasive method for comprehensively and quantitatively evaluating pulmonary hemodynamics from CTPA images, which has the potential to serve as an alternative to RHC, pending further validation.
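The Bland-Altman and AUC-PR evaluations named above can be illustrated with a short sketch; the arrays below are synthetic stand-ins for the study's RHC references and model predictions, not the actual data:

```python
# Sketch of Bland-Altman limits of agreement and AUC-PR on synthetic data.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(1)
# Hypothetical paired measurements of mean pulmonary arterial pressure (mmHg)
rhc = rng.normal(35, 12, 69)           # invasive RHC reference (test set)
pred = rhc + rng.normal(0, 4, 69)      # radiomic-model predictions

diff = pred - rhc
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"Bland-Altman bias = {bias:.2f} mmHg, "
      f"95% limits of agreement = [{bias - 1.96*sd:.2f}, {bias + 1.96*sd:.2f}]")

# PH classification performance as the area under the precision-recall curve
y_true = (rhc >= 25).astype(int)                 # hypothetical PH threshold
y_score = 1 / (1 + np.exp(-(pred - 25) / 5))     # stand-in probabilities
print(f"AUC-PR = {average_precision_score(y_true, y_score):.3f}")
```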

Multimodal AI framework for lung cancer diagnosis: Integrating CNN and ANN models for imaging and clinical data analysis.

Oncu E, Ciftci F

PubMed · May 30, 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide, emphasizing the critical need for accurate and early diagnostic solutions. This study introduces a novel multimodal artificial intelligence (AI) framework that integrates Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) to improve lung cancer classification and severity assessment. The CNN model, trained on 1019 preprocessed CT images, classified lung tissue into four histological categories (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal) with a weighted accuracy of 92%. Interpretability is enhanced using Gradient-weighted Class Activation Mapping (Grad-CAM), which highlights the salient image regions influencing the model's predictions. In parallel, an ANN trained on clinical data from 999 patients, spanning 24 key features such as demographic, symptomatic, and genetic factors, achieved 99% accuracy in predicting cancer severity (low, medium, high). SHapley Additive exPlanations (SHAP) were employed to provide both global and local interpretability of the ANN model, enabling transparent decision-making. Both models were rigorously validated using k-fold cross-validation to ensure robustness and reduce overfitting. This hybrid approach effectively combines spatial imaging data and structured clinical information, demonstrating strong predictive performance and offering an interpretable, comprehensive AI-based solution for lung cancer diagnosis and management.
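Grad-CAM, the interpretability method used here, can be sketched in a few lines of PyTorch; this is a generic implementation with a stand-in ResNet-18 and a random tensor in place of a CT slice, not the authors' code:

```python
# Generic Grad-CAM sketch: hook the last convolutional block, backpropagate
# the predicted class score, and weight feature maps by pooled gradients.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None, num_classes=4).eval()  # 4 histological classes
feats, grads = {}, {}

def fwd_hook(_, __, output):
    feats["value"] = output

def bwd_hook(_, __, grad_output):
    grads["value"] = grad_output[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # placeholder for a preprocessed CT slice
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class score

weights = grads["value"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
cam = F.relu((weights * feats["value"]).sum(dim=1))      # weighted feature maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
print(cam.shape)  # (1, 1, 224, 224) heatmap aligned to the input slice
```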

Manual and automated facial de-identification techniques for patient imaging with preservation of sinonasal anatomy.

Ding AS, Nagururu NV, Seo S, Liu GS, Sahu M, Taylor RH, Creighton FX

PubMed · May 29, 2025
Facial recognition of reconstructed computed tomography (CT) scans poses patient privacy risks, necessitating reliable facial de-identification methods. Current methods obscure sinuses, turbinates, and other anatomy relevant for otolaryngology. We present a facial de-identification method that preserves these structures, along with two automated workflows for large-volume datasets. A total of 20 adult head CTs from the New Mexico Decedent Image Database were included. A seed-growing technique in 3D Slicer was used to label the skin around the face. This label was dilated bidirectionally to form a 6-mm mask that obscures facial features. This technique was then automated using: (1) segmentation propagation that deforms an atlas head CT and corresponding mask to match other scans and (2) a deep learning model (nnU-Net). Accuracy of these methods against manually generated masks was evaluated with Dice scores and modified Hausdorff distances (mHDs). Manual de-identification resulted in facial match rates of 45.0% (zero-fill), 37.5% (deletion), and 32.5% (re-face). Dice scores for automated face masks using segmentation propagation and nnU-Net were 0.667 ± 0.109 and 0.860 ± 0.029, respectively, with mHDs of 4.31 ± 3.04 mm and 1.55 ± 0.71 mm. Match rates after de-identification using segmentation propagation (zero-fill: 42.5%; deletion: 40.0%; re-face: 35.0%) and nnU-Net (zero-fill: 42.5%; deletion: 35.0%; re-face: 30.0%) were comparable to manual masks. We present a simple facial de-identification approach for head CTs, as well as automated methods for large-scale implementation. These techniques show promise for preventing patient identification while preserving underlying sinonasal anatomy, but further studies using live patient photographs are necessary to fully validate their effectiveness.
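The Dice overlap metric and the 6-mm mask dilation step can be illustrated as follows; the binary volumes are synthetic and the 1 mm isotropic voxel spacing is an assumption:

```python
# Sketch of the mask-dilation step and Dice evaluation on synthetic volumes.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(2)
skin = rng.random((64, 64, 64)) > 0.995          # hypothetical skin seed label
# Dilate to an ~6 mm thick mask (6 iterations at an assumed 1 mm/voxel)
manual_mask = ndimage.binary_dilation(skin, iterations=6)

auto_mask = ndimage.binary_dilation(skin, iterations=5)  # stand-in automated mask
print(f"Dice = {dice(manual_mask, auto_mask):.3f}")
```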

CT-denoimer: efficient contextual transformer network for low-dose CT denoising.

Zhang Y, Xu F, Zhang R, Guo Y, Wang H, Wei B, Ma F, Meng J, Liu J, Lu H, Chen Y

PubMed · May 29, 2025
Low-dose computed tomography (LDCT) effectively reduces radiation exposure to patients, but introduces severe noise artifacts that affect diagnostic accuracy. Recently, Transformer-based network architectures have been widely applied to LDCT image denoising, generally achieving superior results compared to traditional convolutional methods. However, these methods are often hindered by high computational costs and struggle to capture complex local contextual features, which negatively impacts denoising performance. In this work, we propose CT-Denoimer, an efficient CT Denoising Transformer network that captures both global correlations and intricate, spatially varying local contextual details in CT images, enabling the generation of high-quality images. The core of our framework is a Transformer module that consists of two key components: the Multi-Dconv head Transposed Attention (MDTA) and the Mixed Contextual Feed-forward Network (MCFN). The MDTA block captures global correlations in the image with linear computational complexity, while the MCFN block manages multi-scale local contextual information, both static and dynamic, through a series of Enhanced Contextual Transformer (eCoT) modules. In addition, we incorporate Operation-Wise Attention Layers (OWALs) to enable collaborative refinement in the proposed CT-Denoimer, enhancing its ability to handle complex and varying noise patterns in LDCT images more effectively. Extensive experimental validation on both the AAPM-Mayo public dataset and a real-world clinical dataset demonstrated the state-of-the-art performance of the proposed CT-Denoimer. It achieved a peak signal-to-noise ratio (PSNR) of 33.681 dB, a structural similarity index measure (SSIM) of 0.921, an information fidelity criterion (IFC) of 2.857 and a visual information fidelity (VIF) of 0.349. Subjective assessment by radiologists gave an average score of 4.39, confirming its clinical applicability and clear advantages over existing methods. This study presents an innovative CT denoising Transformer network that sets a new benchmark in LDCT image denoising, excelling in both noise reduction and fine structure preservation.
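The reported PSNR and SSIM metrics can be computed with scikit-image; the sketch below uses synthetic arrays in place of real LDCT and normal-dose slices:

```python
# Sketch of the image-quality evaluation: PSNR and SSIM between a denoised
# slice and its normal-dose reference (synthetic data, normalized to [0, 1]).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(3)
reference = rng.random((512, 512)).astype(np.float32)   # normal-dose slice
denoised = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1)
denoised = denoised.astype(np.float32)                  # denoiser output

psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```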

Automated classification of midpalatal suture maturation stages from CBCTs using an end-to-end deep learning framework.

Milani OH, Mills L, Nikho A, Tliba M, Allareddy V, Ansari R, Cetin AE, Elnagar MH

PubMed · May 29, 2025
Accurate classification of midpalatal suture maturation stages is critical for orthodontic diagnosis, treatment planning, and the assessment of maxillary growth. Cone Beam Computed Tomography (CBCT) imaging offers detailed insights into this craniofacial structure but poses unique challenges for deep learning image recognition model design due to its high dimensionality, noise artifacts, and variability in image quality. To address these challenges, we propose a novel technique that highlights key image features through a simple filtering process to improve image clarity prior to analysis, thereby enhancing the learning process and better aligning with the distribution of the input data domain. Our preprocessing steps include region-of-interest extraction, followed by high-pass and Sobel filtering to emphasize low-level features. The feature extraction integrates Convolutional Neural Network (CNN) architectures, such as EfficientNet and ResNet18, alongside our novel Multi-Filter Convolutional Residual Attention Network (MFCRAN) enhanced with Discrete Cosine Transform (DCT) layers. Moreover, to better capture the inherent order within the data classes, we augment the supervised training process with a ranking loss by attending to the relationship within the label domain. Furthermore, to adhere to diagnostic constraints while training the model, we introduce a tailored data augmentation strategy to improve classification accuracy and robustness. To validate our method, we employed a k-fold cross-validation protocol on a private dataset comprising 618 CBCT images, annotated into five stages (A, B, C, D, and E) by expert evaluators. The experimental results demonstrate the effectiveness of our proposed approach, achieving the highest classification accuracy of 79.02%, significantly outperforming competing architectures, which achieved accuracies ranging from 71.87% to 78.05%. This work introduces a novel and fully automated framework for midpalatal suture maturation classification, marking a substantial advancement in orthodontic diagnostics and treatment planning.
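The preprocessing described above (high-pass filtering followed by Sobel edge emphasis) can be sketched with SciPy; the filter parameters here are assumptions, not the paper's settings:

```python
# Illustrative preprocessing sketch on a 2D stand-in for a CBCT ROI:
# Gaussian high-pass followed by Sobel gradient-magnitude edge emphasis.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
slice_2d = rng.random((256, 256)).astype(np.float32)  # placeholder CBCT ROI

# High-pass: subtract a Gaussian-smoothed (low-pass) copy of the image
low_pass = ndimage.gaussian_filter(slice_2d, sigma=3.0)  # sigma is an assumption
high_pass = slice_2d - low_pass

# Sobel gradient magnitude to emphasize edges such as the midpalatal suture
gx = ndimage.sobel(high_pass, axis=0)
gy = ndimage.sobel(high_pass, axis=1)
edges = np.hypot(gx, gy)
print(edges.shape, edges.dtype)
```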

CT-Based Radiomics for Predicting PD-L1 Expression in Non-small Cell Lung Cancer: A Systematic Review and Meta-analysis.

Salimi M, Vadipour P, Khosravi A, Salimi B, Mabani M, Rostami P, Seifi S

PubMed · May 29, 2025
The efficacy of immunotherapy in non-small cell lung cancer (NSCLC) is intricately associated with baseline PD-L1 expression rates. The standard method for measuring PD-L1 is immunohistochemistry, which is invasive and may not capture tumor heterogeneity. The primary aim of the current study is to assess whether CT-based radiomics models can accurately predict PD-L1 expression status in NSCLC and to evaluate their quality and potential gaps in their design. Scopus, PubMed, Web of Science, Embase, and IEEE databases were systematically searched through February 14, 2025, to retrieve relevant studies. Data from validation cohorts of models that classified patients by tumor proportion score (TPS) of 1% (TPS1) and 50% (TPS50) were extracted and analyzed separately. Quality assessment was performed with the METRICS and QUADAS-2 tools. Diagnostic test accuracy meta-analysis was conducted using a bivariate random-effects approach to pool values of performance metrics. The qualitative synthesis included twenty-two studies, and the meta-analysis analyzed 11 studies with 997 individual subjects. The pooled AUC, sensitivity, and specificity of TPS1 models were 0.85, 0.76, and 0.79, respectively. The pooled AUC, sensitivity, and specificity of TPS50 models were 0.88, 0.72, and 0.86, respectively. The QUADAS-2 tool identified a substantial risk of bias in the flow and timing and index test domains. Certain methodological limitations were highlighted by the METRICS score, which averaged 58.1% and ranged from 24% to 83.4%. CT-based radiomics demonstrates strong potential as a non-invasive method for predicting PD-L1 expression in NSCLC. While promising, significant methodological gaps must be addressed to achieve the generalizability and reliability required for clinical application.
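The review pools metrics with a bivariate random-effects model; as a deliberately simplified illustration of the random-effects idea, the sketch below performs univariate DerSimonian-Laird pooling of logit-transformed sensitivities on hypothetical study counts:

```python
# Simplified univariate random-effects pooling (DerSimonian-Laird) of
# logit sensitivities; the bivariate model used in the review jointly
# pools sensitivity and specificity, which this sketch does not attempt.
import numpy as np

# Hypothetical (true positives, total positives) per validation cohort
tp = np.array([40, 55, 33, 61, 28])
n = np.array([52, 70, 45, 80, 38])

sens = (tp + 0.5) / (n + 1.0)                    # continuity-corrected
y = np.log(sens / (1 - sens))                    # logit sensitivity
v = 1 / (tp + 0.5) + 1 / (n - tp + 0.5)          # approximate variance

w = 1 / v                                        # fixed-effect weights
mu_fe = np.sum(w * y) / w.sum()
q = np.sum(w * (y - mu_fe) ** 2)                 # Cochran's Q
k = len(y)
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_star = 1 / (v + tau2)                          # random-effects weights
pooled_logit = np.sum(w_star * y) / w_star.sum()
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"tau^2 = {tau2:.3f}, pooled sensitivity = {pooled_sens:.3f}")
```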

Classification of biomedical lung cancer images using optimized binary bat technique by constructing oblique decision trees.

Aswal S, Ahuja NJ, Mehra R

PubMed · May 29, 2025
Imbalanced data and high-dimensional features in lung cancer CT images create significant challenges in clinical research. Improper classification of these images increases the complexity of the classification process, compromises the extraction of biomedical traits, and yields incomplete classification of lung cancer. Conventional approaches are only partially successful in dealing with the complex nature of high-dimensional, imbalanced biomedical data, so there is a crucial need for a robust technique that addresses these concerns in lung cancer image classification. In this paper, we propose a novel structural formation of the oblique decision tree (ODT) using a swarm intelligence technique, the Binary Bat Swarm Algorithm (BBSA). BBSA achieves a competitive recognition rate by making structural reforms while building the ODT. This integration improves the ability of the machine learning swarm classifier (MLSC) to handle high-dimensional features and imbalanced biomedical datasets. Adaptive feature selection using BBSA explores and selects the features relevant for ODT classification. The ODT classifier introduces flexibility in decision boundaries, enabling it to capture complex linkages in biomedical data. The proposed MLSC model effectively handles high-dimensional, imbalanced lung cancer datasets from the TCGA_LUSC_2016 and TCGA_LUAD_2016 modalities, achieving superior precision, recall, F-measure, and execution efficiency. Experiments conducted in Python consistently demonstrate enhanced classification accuracy and reduced misclassification rates compared with existing methods. The MLSC is assessed with both qualitative and quantitative measurements to study its capability to classify instances more effectively than conventional state-of-the-art methods.
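One common reading of binary bat feature selection (not the authors' implementation) squashes bat velocities through a sigmoid transfer function to sample feature-selection bits, scoring each candidate subset by cross-validated accuracy:

```python
# Minimal binary-bat feature-selection sketch on a synthetic imbalanced
# dataset, with a plain decision tree standing in for the ODT classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)  # imbalanced

def fitness(bits: np.ndarray) -> float:
    """Cross-validated accuracy of a tree on the selected feature subset."""
    if bits.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, bits.astype(bool)], y, cv=3).mean()

n_bats, n_feat, n_iter = 10, 30, 20
pos = rng.integers(0, 2, (n_bats, n_feat))
vel = np.zeros((n_bats, n_feat))
scores = np.array([fitness(p) for p in pos])
best, best_score = pos[scores.argmax()].copy(), scores.max()

for _ in range(n_iter):
    freq = rng.random((n_bats, 1))                 # random pulse frequencies
    vel = vel + (pos - best) * freq                # move toward the best bat
    prob = 1 / (1 + np.exp(-vel))                  # S-shaped transfer function
    pos = (rng.random((n_bats, n_feat)) < prob).astype(int)
    scores = np.array([fitness(p) for p in pos])
    if scores.max() > best_score:
        best, best_score = pos[scores.argmax()].copy(), scores.max()

print("selected features:", np.flatnonzero(best), f"accuracy = {best_score:.3f}")
```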

Mild to moderate COPD, vitamin D deficiency, and longitudinal bone loss: The MESA study.

Ghotbi E, Hathaway QA, Hadidchi R, Momtazmanesh S, Bancks MP, Bluemke DA, Barr RG, Post WS, Budoff M, Smith BM, Lima JAC, Demehri S

PubMed · May 29, 2025
Despite the established association between chronic obstructive pulmonary disease (COPD) severity and risk of osteoporosis, even after accounting for known shared confounders (e.g., age, smoking, history of exacerbations, steroid use), there is a paucity of data on bone loss in mild to moderate COPD, which is more prevalent in the general population. We conducted a longitudinal analysis using data from the Multi-Ethnic Study of Atherosclerosis. Participants with chest CT at Exam 5 (2010-2012) and Exam 6 (2016-2018) were included. Mild to moderate COPD was defined as a ratio of forced expiratory volume in 1 s (FEV₁) to forced vital capacity of <0.70 with FEV₁ of 50% or higher. Vitamin D deficiency was defined as serum vitamin D < 20 ng/mL. We utilized a validated deep learning algorithm to perform automated multilevel segmentation of vertebral bodies (T1-T10) from chest CT and derive 3D volumetric thoracic vertebral BMD measurements at Exams 5 and 6. Of the 1226 participants, 173 had known mild to moderate COPD at baseline, while 1053 had no known COPD. After adjusting for age, race/ethnicity, sex, body mass index, bisphosphonate use, alcohol consumption, smoking, diabetes, physical activity, C-reactive protein, and vitamin D deficiency, mild to moderate COPD was associated with a faster decline in BMD (estimated difference, β = -0.38 g/cm³/year; 95% CI: -0.74, -0.02). A significant interaction between COPD and vitamin D deficiency (p = 0.001) prompted stratified analyses. Among participants with vitamin D deficiency (47% of participants), COPD was associated with a faster decline in BMD (-0.64 g/cm³/year; 95% CI: -1.17 to -0.12), whereas no significant association was observed among those with normal vitamin D in either crude or adjusted models. Mild to moderate COPD was associated with longitudinal declines in vertebral BMD exclusively in participants with vitamin D deficiency over the 6-year follow-up. Vitamin D deficiency may play a crucial role in bone loss among patients with mild to moderate COPD.
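The interaction analysis can be illustrated with a simple regression sketch; the data below are synthetic, and the study itself adjusts for many more covariates within a full longitudinal design:

```python
# Sketch of a COPD x vitamin-D-deficiency interaction model on synthetic
# data; an OLS model of annualized BMD change stands in for the study's
# fully adjusted longitudinal analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 1226
df = pd.DataFrame({
    "copd": rng.integers(0, 2, n),        # 1 = mild to moderate COPD
    "vitd_def": rng.integers(0, 2, n),    # 1 = serum vitamin D < 20 ng/mL
    "age": rng.normal(65, 9, n),
})
# Synthetic outcome embedding a COPD effect only under vitamin D deficiency
df["bmd_change"] = -0.64 * df.copd * df.vitd_def + rng.normal(0, 1.5, n)

model = smf.ols("bmd_change ~ copd * vitd_def + age", data=df).fit()
print(model.params[["copd", "copd:vitd_def"]])  # main effect vs. interaction
```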

ImmunoDiff: A Diffusion Model for Immunotherapy Response Prediction in Lung Cancer

Moinak Bhattacharya, Judy Huang, Amna F. Sher, Gagandeep Singh, Chao Chen, Prateek Prasanna

arXiv preprint · May 29, 2025
Accurately predicting immunotherapy response in Non-Small Cell Lung Cancer (NSCLC) remains a critical unmet need. Existing radiomics and deep learning-based predictive models rely primarily on pre-treatment imaging to predict categorical response outcomes, limiting their ability to capture the complex morphological and textural transformations induced by immunotherapy. This study introduces ImmunoDiff, an anatomy-aware diffusion model designed to synthesize post-treatment CT scans from baseline imaging while incorporating clinically relevant constraints. The proposed framework integrates anatomical priors, specifically lobar and vascular structures, to enhance fidelity in CT synthesis. We also introduce a novel cbi-Adapter, a conditioning module that ensures pairwise-consistent multimodal integration of imaging and clinical data embeddings. Finally, a clinical variable conditioning mechanism leverages demographic data, blood-based biomarkers, and PD-L1 expression to refine the generative process. Evaluations on an in-house NSCLC cohort treated with immune checkpoint inhibitors demonstrate a 21.24% improvement in balanced accuracy for response prediction and a 0.03 increase in c-index for survival prediction. Code will be released soon.
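The cbi-Adapter's internals are not given in the abstract; as a generic sketch of clinical-variable conditioning, the block below applies FiLM-style modulation of imaging features by a clinical embedding, one common way to inject tabular covariates into a diffusion backbone:

```python
# Generic FiLM-style conditioning sketch (an assumption, not the cbi-Adapter):
# a clinical-covariate vector predicts per-channel scale and shift terms
# that modulate an imaging feature embedding.
import torch
import torch.nn as nn

class ClinicalConditioner(nn.Module):
    def __init__(self, img_dim: int, clin_dim: int):
        super().__init__()
        # Map clinical covariates to per-channel scale and shift parameters
        self.to_scale_shift = nn.Sequential(
            nn.Linear(clin_dim, 2 * img_dim), nn.SiLU(),
            nn.Linear(2 * img_dim, 2 * img_dim),
        )

    def forward(self, img_feats: torch.Tensor, clin: torch.Tensor):
        scale, shift = self.to_scale_shift(clin).chunk(2, dim=-1)
        return img_feats * (1 + scale) + shift

cond = ClinicalConditioner(img_dim=256, clin_dim=8)
img = torch.randn(4, 256)     # pooled imaging embedding (batch of 4)
clin = torch.randn(4, 8)      # demographics, blood markers, PD-L1 stand-ins
print(cond(img, clin).shape)  # torch.Size([4, 256])
```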