Association between age and lung cancer risk: evidence from lung lobar radiomics.

Li Y, Lin C, Cui L, Huang C, Shi L, Huang S, Yu Y, Zhou X, Zhou Q, Chen K, Shi L

PubMed · Jun 5 2025
Previous studies have highlighted the prominent role of age in lung cancer risk, with signs of lung aging visible in computed tomography (CT) imaging. This study aims to characterize lung aging using quantitative radiomic features extracted from five delineated lung lobes and to explore how age contributes to lung cancer development through these features. We analyzed baseline CT scans from the Wenling lung cancer screening cohort, consisting of 29,810 participants. A deep learning-based segmentation method was used to delineate the lung lobes, and a total of 1,470 features were extracted from each lobe. The minimum redundancy maximum relevance algorithm was applied to identify the top 10 age-related radiomic features among 13,137 never smokers. Multiple regression analyses were used to adjust for confounders in the associations among age, lung lobar radiomic features, and lung cancer. Linear, Cox proportional hazards, and parametric accelerated failure time models were applied as appropriate. Mediation analyses were conducted to evaluate whether lobar radiomic features mediate the relationship between age and lung cancer risk. Age was significantly associated with an increased lung cancer risk, particularly among current smokers (hazard ratio = 1.07, P = 2.81 × 10⁻¹³). Age-related radiomic features exhibited distinct effects across lung lobes. Specifically, the wavelet-filtered first-order mean (mean attenuation value) in the right upper lobe increased with age (β = 0.019, P = 2.41 × 10⁻²⁷⁶), whereas it decreased in the right lower lobe (β = -0.028, P = 7.83 × 10⁻²⁷⁷). Three features, namely wavelet_HL_firstorder_Mean of the right upper lobe, wavelet_LH_firstorder_Mean of the right lower lobe, and original_shape_MinorAxisLength of the left upper lobe, were independently associated with lung cancer risk at a Bonferroni-adjusted P value threshold. Mediation analyses revealed that density and shape features partially mediated the relationship between age and lung cancer risk, while a suppression effect was observed for the wavelet first-order mean of the right upper lobe. The study reveals lobe-specific heterogeneity in lung aging patterns through radiomics and their associations with lung cancer risk. These findings may help identify new approaches for early intervention in aging-related lung cancer. Trial registration: Not applicable.
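
The mediation step described above can be sketched with a minimal product-of-coefficients example. This is synthetic and deliberately simplified (a continuous outcome in place of the study's survival models), and all effect sizes are made up:

```python
# Illustrative sketch (not the authors' code): product-of-coefficients
# mediation analysis for "age -> radiomic feature -> lung cancer risk".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(60, 8, n)
# Hypothetical age-related radiomic feature (e.g. a wavelet first-order mean)
feature = 0.02 * age + rng.normal(0, 1, n)
risk = 0.05 * age + 0.8 * feature + rng.normal(0, 1, n)

X_m = sm.add_constant(age)
a = sm.OLS(feature, X_m).fit().params[1]            # age -> mediator
X_y = sm.add_constant(np.column_stack([age, feature]))
fit_y = sm.OLS(risk, X_y).fit()
b = fit_y.params[2]                                 # mediator -> outcome | age
direct = fit_y.params[1]                            # age -> outcome | mediator
indirect = a * b
print(f"direct={direct:.3f} indirect={indirect:.3f} "
      f"proportion mediated={indirect / (direct + indirect):.2%}")
```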

Deep learning-based cone-beam CT motion compensation with single-view temporal resolution.

Maier J, Sawall S, Arheit M, Paysan P, Kachelrieß M

PubMed · Jun 4 2025
Cone-beam CT (CBCT) scans that are affected by motion often require motion compensation to reduce artifacts or to reconstruct 4D (3D+time) representations of the patient. To do so, most existing strategies rely on a gating strategy that sorts the acquired projections into motion bins. Subsequently, these bins can be reconstructed individually before further post-processing is applied to improve image quality. While this concept is useful for periodic motion patterns, it fails in the case of non-periodic motion as observed, for example, in irregularly breathing patients. To address this issue and to increase temporal resolution, we propose deep single angle-based motion compensation (SAMoCo). To avoid gating, and therefore its downsides, deep SAMoCo trains a U-net-like network to predict displacement vector fields (DVFs) representing the motion that occurred between any two given time points of the scan. To do so, 4D clinical CT scans are used to simulate 4D CBCT scans as well as the corresponding ground truth DVFs that map between the different motion states of the scan. The network is then trained to predict these DVFs as a function of the respective projection views and an initial 3D reconstruction. Once the network is trained, an arbitrary motion state corresponding to a certain projection view of the scan can be recovered by estimating DVFs from any other state or view and by considering them during reconstruction. Applied to 4D CBCT simulations of breathing patients, deep SAMoCo provides high-quality reconstructions for periodic and non-periodic motion. Here, the deviations with respect to the ground truth are less than 27 HU on average, while respiratory motion, or the diaphragm position, can be resolved with an accuracy of about 0.75 mm. Similar results were obtained for real measurements, where a high correlation with external motion monitoring signals could be observed, even in patients with highly irregular respiration. The ability to estimate DVFs as a function of two arbitrary projection views and an initial 3D reconstruction makes deep SAMoCo applicable to arbitrary motion patterns with single-view temporal resolution. Therefore, deep SAMoCo is particularly useful for cases with unsteady breathing, compensation of residual motion during a breath-hold scan, or scans with fast gantry rotation times in which the data acquisition covers only a very limited number of breathing cycles. Furthermore, not requiring gating signals may simplify the clinical workflow and reduce the time needed for patient preparation.
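
As a rough illustration of the final step, a predicted DVF can be applied to an initial reconstruction by resampling the volume along displaced coordinates. This sketch assumes the DVF has already been estimated; it is not the paper's implementation:

```python
# Minimal sketch: backward-warp an initial 3D reconstruction using a
# displacement vector field (DVF), i.e. sample the volume at displaced
# coordinates. Shapes and the example shift are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(vol: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """vol: (Z, Y, X); dvf: (3, Z, Y, X) displacements in voxels."""
    grid = np.indices(vol.shape).astype(np.float32)  # identity coordinates
    coords = grid + dvf                              # displaced sampling grid
    return map_coordinates(vol, coords, order=1, mode="nearest")

vol = np.random.rand(64, 64, 64).astype(np.float32)
dvf = np.zeros((3, 64, 64, 64), np.float32)
dvf[0] += 2.0        # hypothetical 2-voxel breathing-like shift along z
warped = warp_volume(vol, dvf)
```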

ReXVQA: A Large-scale Visual Question Answering Benchmark for Generalist Chest X-ray Understanding

Ankit Pal, Jung-Oh Lee, Xiaoman Zhang, Malaikannan Sankarasubbu, Seunghyeon Roh, Won Jung Kim, Meesun Lee, Pranav Rajpurkar

arXiv preprint · Jun 4 2025
We present ReXVQA, the largest and most comprehensive benchmark for visual question answering (VQA) in chest radiology, comprising approximately 696,000 questions paired with 160,000 chest X-ray studies across training, validation, and test sets. Unlike prior efforts that rely heavily on template-based queries, ReXVQA introduces a diverse and clinically authentic task suite reflecting five core radiological reasoning skills: presence assessment, location analysis, negation detection, differential diagnosis, and geometric reasoning. We evaluate eight state-of-the-art multimodal large language models, including MedGemma-4B-it, Qwen2.5-VL, Janus-Pro-7B, and Eagle2-9B. The best-performing model (MedGemma) achieves 83.24% overall accuracy. To bridge the gap between AI performance and clinical expertise, we conducted a comprehensive human reader study involving three radiology residents on 200 randomly sampled cases. Our evaluation demonstrates that MedGemma achieved superior performance (83.84% accuracy) compared to human readers (best radiology resident: 77.27%), representing a significant milestone where AI performance exceeds expert human evaluation on chest X-ray interpretation. The reader study reveals distinct performance patterns between AI models and human experts, with strong inter-reader agreement among radiologists but more variable agreement between human readers and AI models. ReXVQA establishes a new standard for evaluating generalist radiological AI systems, offering public leaderboards, fine-grained evaluation splits, structured explanations, and category-level breakdowns. This benchmark lays the foundation for next-generation AI systems capable of mimicking expert-level clinical reasoning beyond narrow pathology classification. Our dataset will be open-sourced at https://huggingface.co/datasets/rajpurkarlab/ReXVQA
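
Once the dataset is released, it should be loadable through the Hugging Face hub along these lines; note that the split and field names here are assumptions, not taken from the dataset card:

```python
# Hedged sketch for loading ReXVQA after release; field names such as
# "question", "options", and "answer" are guesses, not confirmed.
from datasets import load_dataset

ds = load_dataset("rajpurkarlab/ReXVQA", split="test")
sample = ds[0]
print(sample.keys())   # inspect the actual schema before relying on it
```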

Computed tomography-based radiomics model for predicting station 4 lymph node metastasis in non-small cell lung cancer.

Kang Y, Li M, Xing X, Qian K, Liu H, Qi Y, Liu Y, Cui Y, Zhang H

PubMed · Jun 4 2025
This study aimed to develop and validate machine learning models for preoperative identification of metastasis to station 4 mediastinal lymph nodes (MLNM) in non-small cell lung cancer (NSCLC) patients at pathological N0-N2 (pN0-pN2) stage, thereby enhancing the precision of clinical decision-making. We included a total of 356 NSCLC patients at pN0-pN2 stage, divided into training (n = 207), internal test (n = 90), and independent test (n = 59) sets. Regions of interest (ROIs) for station 4 mediastinal lymph nodes (LNs) were semi-automatically segmented on venous-phase computed tomography (CT) images for radiomics feature extraction. Least absolute shrinkage and selection operator (LASSO) regression was used to select features with non-zero coefficients. Four machine learning algorithms, namely decision tree (DT), logistic regression (LR), random forest (RF), and support vector machine (SVM), were employed to construct radiomics models. Clinical predictors were identified through univariate and multivariate logistic regression and subsequently integrated with radiomics features to develop combined models. Model performance was evaluated using receiver operating characteristic (ROC) analysis, calibration curves, decision curve analysis (DCA), and DeLong's test. Out of 1721 radiomics features, eight were selected using LASSO regression. The RF-based combined model exhibited the strongest discriminative power, with an area under the curve (AUC) of 0.934 for the training set and 0.889 for the internal test set. The calibration curve and DCA further indicated the superior performance of the RF-based combined model, and the independent test set verified the model's robustness. The combined model based on RF, integrating radiomics and clinical features, effectively and non-invasively identifies metastasis to the station 4 mediastinal LNs in NSCLC patients at pN0-pN2 stage. This model serves as an effective auxiliary tool for clinical decision-making and has the potential to optimize treatment strategies and improve prognostic assessment for pN0-pN2 patients. Trial registration: Not applicable.
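
The LASSO-then-RF pipeline can be approximated in a few lines of scikit-learn on synthetic data; the hyperparameters below (C=0.1, 500 trees) are illustrative, not the study's settings:

```python
# Illustrative sketch (synthetic data, not the study's cohort): LASSO
# feature selection followed by a random-forest classifier, mirroring
# the described pipeline at a high level.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(356, 1721))                    # 1721 radiomics features
y = (X[:, :8].sum(axis=1) + rng.normal(size=356)) > 0   # signal in 8 features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)

# L1-penalized logistic regression as the LASSO selection step
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(scaler.transform(X_tr), y_tr)
selected = np.flatnonzero(lasso.coef_[0])
print(f"{selected.size} features with non-zero coefficients")

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr[:, selected], y_tr)                     # RF is scale-invariant
auc = roc_auc_score(y_te, rf.predict_proba(X_te[:, selected])[:, 1])
print(f"test AUC = {auc:.3f}")
```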

FPA-based weighted average ensemble of deep learning models for classification of lung cancer using CT scan images.

Zhou L, Jain A, Dubey AK, Singh SK, Gupta N, Panwar A, Kumar S, Althaqafi TA, Arya V, Alhalabi W, Gupta BB

PubMed · Jun 3 2025
Cancer is among the most dangerous diseases contributing to rising global mortality rates. Lung cancer, particularly adenocarcinoma, is one of the deadliest forms and severely impacts human life. Early diagnosis and appropriate treatment significantly increase patient survival rates. Computed tomography (CT) is a preferred imaging modality for detecting lung cancer, as it offers detailed visualization of tumor structure and growth. With the advancement of deep learning, automated identification of lung cancer from CT images has become increasingly effective. This study proposes a novel lung cancer detection framework using a Flower Pollination Algorithm (FPA)-based weighted ensemble of three high-performing pretrained convolutional neural networks (CNNs): VGG16, ResNet101V2, and InceptionV3. Unlike traditional ensemble approaches that assign static or equal weights, the FPA adaptively optimizes the contribution of each CNN based on validation performance. This dynamic weighting significantly enhances diagnostic accuracy. The proposed FPA-based ensemble achieved an accuracy of 98.2%, precision of 98.4%, recall of 98.6%, and an F1 score of 98.5% on the test dataset. In comparison, the best individual CNN (VGG16) achieved 94.6% accuracy, highlighting the superiority of the ensemble approach. These results confirm the model's effectiveness in accurate and reliable cancer diagnosis. The study demonstrates the potential of deep learning to transform cancer diagnosis, aiding early detection and improving treatment outcomes.
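
The core idea, a weighted average of per-model softmax outputs with weights tuned on validation data, can be sketched as follows. For brevity, a plain random search stands in for the Flower Pollination Algorithm, and all probabilities are synthetic:

```python
# Sketch of a weighted-average ensemble over three CNNs' softmax outputs.
# The paper optimizes the weights with FPA; a random search stands in here.
import numpy as np

rng = np.random.default_rng(0)
n_val, n_cls = 200, 4
# Hypothetical validation probabilities from VGG16, ResNet101V2, InceptionV3
probs = rng.dirichlet(np.ones(n_cls), size=(3, n_val))
y_val = rng.integers(0, n_cls, n_val)

def ensemble_acc(w):
    w = np.asarray(w) / np.sum(w)              # normalize onto the simplex
    fused = np.tensordot(w, probs, axes=1)     # weighted average -> (n_val, n_cls)
    return np.mean(fused.argmax(axis=1) == y_val)

best_w, best_acc = None, -1.0
for _ in range(2000):                          # random-search stand-in for FPA
    w = rng.random(3)
    acc = ensemble_acc(w)
    if acc > best_acc:
        best_w, best_acc = w / w.sum(), acc
print(best_w, best_acc)
```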

Effect of contrast enhancement on diagnosis of interstitial lung abnormality in automatic quantitative CT measurement.

Choi J, Ahn Y, Kim Y, Noh HN, Do KH, Seo JB, Lee SM

PubMed · Jun 3 2025
To investigate the effect of contrast enhancement on the diagnosis of interstitial lung abnormalities (ILA) with automatic quantitative CT measurement in patients with paired pre- and post-contrast scans. Patients who underwent chest CT for thoracic surgery between April 2017 and December 2020 were retrospectively analyzed. ILA quantification was performed using deep learning-based automated software. Cases were categorized as ILA or non-ILA according to the Fleischner Society's definition, based on the quantification results or on radiologist assessment (reference standard). Measurement variability, agreement, and diagnostic performance between the pre- and post-contrast scans were evaluated. In the 1134 included patients, post-contrast scans quantified a slightly larger volume of nonfibrotic ILA (mean difference: -0.2%), due to increased ground-glass opacity and reticulation volumes (-0.2% and -0.1%), whereas the fibrotic ILA volume remained unchanged (0.0%). ILA was diagnosed in 15 (1.3%), 22 (1.9%), and 40 (3.5%) patients by pre-contrast scans, post-contrast scans, and radiologists, respectively. Agreement between the pre- and post-contrast scans was substantial (κ = 0.75), but both pre-contrast (κ = 0.46) and post-contrast (κ = 0.54) scans showed only moderate agreement with the radiologists. The sensitivity for ILA (32.5% vs. 42.5%, p = 0.221) and specificity for non-ILA (99.8% vs. 99.5%, p = 0.248) were comparable between pre- and post-contrast scans. Radiologist reclassification of equivocal ILA due to unilateral abnormalities increased the sensitivity for ILA in both pre- and post-contrast scans (67.5% and 75.0%, respectively). Applying automated quantification to post-contrast scans appears acceptable in terms of agreement and diagnostic performance; however, radiologists may need to reclassify equivocal ILA to improve sensitivity.
Question: The effect of contrast enhancement on the automated quantification of interstitial lung abnormality (ILA) remains unknown.
Findings: Automated quantification measured slightly larger ground-glass opacity and reticulation volumes on post-contrast scans than on pre-contrast scans; however, contrast enhancement did not affect the sensitivity for ILA.
Clinical relevance: Applying automated quantification to post-contrast scans appears acceptable in terms of agreement and diagnostic performance.
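
For reference, the substantial/moderate agreement figures above are Cohen's kappa values; a minimal sketch of how such agreement between paired scans would be computed (with synthetic labels) is:

```python
# Cohen's kappa between paired pre- and post-contrast ILA calls.
# Labels are synthetic; 0 = non-ILA, 1 = ILA.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
pre = rng.integers(0, 2, 1134)
post = pre.copy()
flip = rng.random(1134) < 0.02       # small simulated measurement disagreement
post[flip] = 1 - post[flip]
print(f"kappa = {cohen_kappa_score(pre, post):.2f}")
```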

Multi-organ metabolic profiling with [¹⁸F]F-FDG PET/CT predicts pathological response to neoadjuvant immunochemotherapy in resectable NSCLC.

Ma Q, Yang J, Guo X, Mu W, Tang Y, Li J, Hu S

PubMed · Jun 2 2025
To develop and validate a novel nomogram combining multi-organ PET metabolic metrics for major pathological response (MPR) prediction in resectable non-small cell lung cancer (rNSCLC) patients receiving neoadjuvant immunochemotherapy. This retrospective cohort included rNSCLC patients who underwent baseline [¹⁸F]F-FDG PET/CT prior to neoadjuvant immunochemotherapy at Xiangya Hospital from April 2020 to April 2024. Patients were randomly stratified into training (70%) and validation (30%) cohorts. Using deep learning-based automated segmentation, we quantified metabolic parameters (SUVmean, SUVmax, SUVpeak, MTV, TLG) and their ratios to the corresponding liver parameters for the primary tumor and nine key organs. Feature selection employed a tripartite approach: univariate analysis, LASSO regression, and random forest optimization. The final multivariable model was translated into a clinically interpretable nomogram, with validation assessing discrimination, calibration, and clinical utility. Among 115 patients (MPR rate: 63.5%, n = 73), five metabolic parameters emerged as predictive biomarkers for MPR: Spleen_SUVmean, Colon_SUVpeak, Spine_TLG, Lesion_TLG, and the spleen-to-liver SUVmax ratio. The nomogram demonstrated consistent performance across cohorts (training AUC = 0.78 [95% CI 0.67-0.88]; validation AUC = 0.78 [95% CI 0.62-0.94]), with robust calibration and enhanced clinical net benefit on decision curve analysis. Compared to tumor-only parameters, the multi-organ model showed higher specificity (100% vs. 92%) and positive predictive value (100% vs. 90%) in the validation set, while maintaining 76% overall accuracy. This first-reported multi-organ metabolic nomogram noninvasively predicts MPR in rNSCLC patients receiving neoadjuvant immunochemotherapy, outperforming conventional tumor-centric approaches. By quantifying systemic host-tumor metabolic crosstalk, this tool could help guide personalized therapeutic decisions while mitigating treatment-related risks, representing a paradigm shift towards precision immuno-oncology management.
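
As a reminder of how the listed metrics relate, here is a short sketch computing SUVmean, SUVmax, MTV, and TLG from a SUV volume and an organ mask (synthetic arrays and an assumed voxel size; SUVpeak is omitted for brevity):

```python
# Standard PET metabolic parameters from a SUV map and a binary mask.
# Arrays are synthetic; the 4 mm^3 voxel volume is an assumption.
import numpy as np

suv = np.random.rand(64, 64, 64) * 5    # hypothetical SUV map
mask = suv > 2.5                        # hypothetical organ/lesion mask
voxel_ml = 4.0 / 1000.0                 # voxel volume in mL

suv_mean = suv[mask].mean()
suv_max = suv[mask].max()
mtv = mask.sum() * voxel_ml             # metabolic tumor volume (mL)
tlg = suv_mean * mtv                    # total lesion glycolysis
print(suv_mean, suv_max, mtv, tlg)
```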

Robust Detection of Out-of-Distribution Shifts in Chest X-ray Imaging.

Karimi F, Farnia F, Bae KT

PubMed · Jun 2 2025
This study addresses the critical challenge of detecting out-of-distribution (OOD) chest X-rays, where subtle view differences between lateral and frontal radiographs can lead to diagnostic errors. We develop a GAN-based framework that learns the inherent feature distribution of frontal views from the MIMIC-CXR dataset through latent space optimization and Kolmogorov-Smirnov statistical testing. Our approach generates similarity scores to reliably identify OOD cases, achieving 100% precision and 97.5% accuracy in detecting lateral views. The method demonstrates consistent reliability across operating conditions, maintaining accuracy above 92.5% and precision exceeding 93% under varying detection thresholds. These results provide both theoretical insights and practical solutions for OOD detection in medical imaging, demonstrating how GANs can establish feature representations for identifying distributional shifts. By significantly improving model reliability when encountering view-based anomalies, our framework enhances the clinical applicability of deep learning systems, ultimately contributing to improved diagnostic safety and patient outcomes.
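
The statistical component can be sketched independently of the GAN: a two-sample Kolmogorov-Smirnov test comparing similarity scores of a candidate batch against reference frontal-view scores. The score distributions and threshold below are synthetic assumptions:

```python
# Two-sample KS test over similarity scores (the GAN that produces the
# scores is omitted); distributions and the p-value cutoff are made up.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
frontal_scores = rng.normal(0.9, 0.05, 500)    # in-distribution reference
candidate_scores = rng.normal(0.6, 0.1, 50)    # e.g. lateral views

stat, p = ks_2samp(candidate_scores, frontal_scores)
print(f"KS stat={stat:.2f}, p={p:.3g} -> {'OOD' if p < 0.01 else 'in-dist'}")
```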

SPCF-YOLO: An Efficient Feature Optimization Model for Real-Time Lung Nodule Detection.

Ren Y, Shi C, Zhu D, Zhou C

PubMed · Jun 2 2025
Accurate pulmonary nodule detection in CT imaging remains challenging due to fragmented feature integration in conventional deep learning models. This paper proposes SPCF-YOLO, a real-time detection framework that synergizes hierarchical feature fusion with anatomical context modeling. First, the space-to-depth convolution (SPDConv) module preserves fine-grained features in low-resolution images through spatial dimension reorganization. Second, the shared feature pyramid convolution (SFPConv) module dynamically extracts multi-scale contextual information using multi-dilation-rate convolutional layers. A small-object detection layer is incorporated to improve sensitivity to small nodules, in combination with an improved pyramid squeeze attention (PSA) module and an improved contextual transformer (CoTB) module, which enhance global channel dependencies and reduce feature loss. The model achieves 82.8% mean average precision (mAP) and an 82.9% F1 score on LUNA16 at 151 frames per second (improvements of 17.5% and 82.9% over YOLOv8, respectively), demonstrating real-time clinical viability. Cross-modality validation on SIIM-COVID-19 shows a 1.5% improvement, confirming robust generalization.
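
A minimal SPDConv-style block, rearranging spatial detail into channels before convolving, might look like the following PyTorch sketch; the channel sizes and activation are illustrative, not the paper's exact configuration:

```python
# Space-to-depth convolution sketch: pixel-unshuffle halves the spatial
# resolution without discarding detail, then a conv mixes the channels.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(2)        # (C,H,W) -> (4C,H/2,W/2)
        self.conv = nn.Conv2d(4 * c_in, c_out, 3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.conv(self.unshuffle(x)))

x = torch.randn(1, 32, 128, 128)
print(SPDConv(32, 64)(x).shape)   # torch.Size([1, 64, 64, 64])
```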

Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models.

Lian C, Zhou HY, Liang D, Qin J, Wang L

PubMed · Jun 2 2025
Medical vision-language alignment through cross-modal contrastive learning shows promising performance in image-text matching tasks such as retrieval and zero-shot classification. However, conventional cross-modal contrastive learning (CLIP-based) methods suffer from suboptimal visual representation capabilities, which limits their effectiveness in vision-language alignment. In contrast, although models pretrained via multimodal masked modeling struggle with direct cross-modal matching, they excel at visual representation. To address this contradiction, we propose ALTA (ALign Through Adapting), an efficient medical vision-language alignment method that uses only about 8% of the trainable parameters and less than one-fifth of the computational cost required for masked record modeling. ALTA achieves superior performance in vision-language matching tasks like retrieval and zero-shot classification by adapting the pretrained vision model from masked record modeling. Additionally, we integrate temporal-multiview radiograph inputs to enhance the information consistency between radiographs and their corresponding descriptions in reports, further improving the vision-language alignment. Experimental evaluations show that ALTA outperforms the best-performing counterpart by over 4 absolute percentage points in text-to-image retrieval accuracy and approximately 6 absolute percentage points in image-to-text retrieval accuracy. The adaptation of vision-language models during efficient alignment also promotes better vision and language understanding. Code is publicly available at https://github.com/DopamineLcy/ALTA.
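
The alignment objective behind such methods is typically a CLIP-style symmetric contrastive loss applied to adapted vision features; below is a small sketch under that assumption (the adapter shape and temperature are not ALTA's actual settings):

```python
# Symmetric InfoNCE over adapted vision features and text features.
# Dimensions, batch size, and temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

adapter = nn.Linear(768, 512)          # small trainable adapter
img_feats = torch.randn(8, 768)        # frozen masked-model vision features
txt_feats = torch.randn(8, 512)        # report text embeddings

v = F.normalize(adapter(img_feats), dim=-1)
t = F.normalize(txt_feats, dim=-1)
logits = v @ t.T / 0.07                # temperature tau = 0.07
targets = torch.arange(8)              # matched pairs lie on the diagonal
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.T, targets)) / 2
print(loss.item())
```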