Page 20 of 66652 results

The association of symptoms, pulmonary function test and computed tomography in interstitial lung disease at the onset of connective tissue disease: an observational study with artificial intelligence analysis of high-resolution computed tomography.

Hoffmann T, Teichgräber U, Brüheim LB, Lassen-Schmidt B, Renz D, Weise T, Krämer M, Oelzner P, Böttcher J, Güttler F, Wolf G, Pfeil A

pubmed logopapers | Aug 12 2025
Interstitial lung disease (ILD) is a common and serious organ manifestation in patients with connective tissue disease (CTD), but it is uncertain whether ILD differs between symptomatic and asymptomatic patients. We therefore conducted a study to evaluate differences in the extent of ILD, based on radiological findings, between symptomatic and asymptomatic patients, using artificial intelligence (AI)-based quantification of pulmonary high-resolution computed tomography (AIqpHRCT). Within the study, 67 cross-sectional HRCT datasets and clinical data (including pulmonary function tests) of consecutive patients (mean age: 57.1 ± 14.7 years; women n = 45, 67.2%) with both an initial diagnosis of CTD, systemic sclerosis being the most frequent (n = 21, 31.3%), and ILD (all without immunosuppressive therapy) were analysed using AIqpHRCT. Of the patients with ILD at initial diagnosis of CTD, 25.4% (n = 17) had no pulmonary symptoms. Regarding the baseline characteristics (age, gender, disease), there was no significant difference between the symptomatic and asymptomatic groups. The pulmonary function test (PFT) revealed the following mean values (% predicted) in the symptomatic and asymptomatic groups, respectively: forced vital capacity (FVC) 69.4 ± 17.4% versus 86.1 ± 15.8% (p = 0.001), and diffusing capacity of the lung for carbon monoxide (DLCO) 49.7 ± 17.9% versus 60.0 ± 15.8% (p = 0.043). AIqpHRCT data showed a significantly higher amount of high-attenuated volume (HAV) (14.8 ± 11.0% versus 8.9 ± 3.9%; p = 0.021) and reticulations (5.4 ± 8.7% versus 1.4 ± 1.5%; p = 0.035) in symptomatic patients. A quarter of patients with ILD at the time of initial CTD diagnosis had no pulmonary symptoms, and DLCO was reduced in both groups. AIqpHRCT also demonstrated clinically relevant ILD in asymptomatic patients.
These results underline the importance of an early risk adapted screening for ILD also in asymptomatic CTD patients, as ILD is associated with increased mortality.
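The between-group p-values above can be sanity-checked from the summary statistics alone. As an illustrative sketch (not the authors' statistical code), assuming the symptomatic group comprises the remaining 50 of the 67 patients, Welch's t-statistic for the FVC comparison is:

```python
import math

# Summary statistics from the abstract (FVC, % predicted).
# Group sizes are an assumption: 67 patients total, 17 asymptomatic.
mean_sym, sd_sym, n_sym = 69.4, 17.4, 50      # symptomatic group
mean_asym, sd_asym, n_asym = 86.1, 15.8, 17   # asymptomatic group

# Welch's t-statistic computed from summary data (no raw data needed).
se = math.sqrt(sd_sym**2 / n_sym + sd_asym**2 / n_asym)
t = (mean_asym - mean_sym) / se
print(round(t, 2))  # roughly 3.67, consistent with the reported p = 0.001
```

With roughly 30 effective degrees of freedom, a t-statistic near 3.7 corresponds to a two-sided p-value of about 0.001, matching the reported value.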

Deep learning and radiomics fusion for predicting the invasiveness of lung adenocarcinoma within ground glass nodules.

Sun Q, Yu L, Song Z, Wang C, Li W, Chen W, Xu J, Han S

pubmed logopapers | Aug 11 2025
Microinvasive adenocarcinoma (MIA) and invasive adenocarcinoma (IAC) require distinct treatment strategies and are associated with different prognoses, underscoring the importance of accurate differentiation. This study aims to develop a predictive model that combines radiomics and deep learning to effectively distinguish between MIA and IAC. In this retrospective study, 252 pathologically confirmed cases of ground-glass nodules (GGNs) were included, with 177 allocated to the training set and 75 to the testing set. Radiomics, 2D deep learning, and 3D deep learning models were constructed based on CT images. In addition, two fusion strategies were employed to integrate these modalities: early fusion, which concatenates features from all modalities prior to classification, and late fusion, which ensembles the output probabilities of the individual models. The predictive performance of all five models was evaluated using the area under the receiver operating characteristic curve (AUC), and DeLong's test was performed to compare differences in AUC between models. The radiomics model achieved an AUC of 0.794 (95% CI: 0.684-0.898), while the 2D and 3D deep learning models achieved AUCs of 0.754 (95% CI: 0.594-0.882) and 0.847 (95% CI: 0.724-0.945), respectively, in the testing set. Among the fusion models, the late fusion strategy demonstrated the highest predictive performance, with an AUC of 0.898 (95% CI: 0.784-0.962), outperforming the early fusion model, which achieved an AUC of 0.857 (95% CI: 0.731-0.936). Although the differences were not statistically significant, the late fusion model yielded the highest numerical values for diagnostic accuracy, sensitivity, and specificity across all models. The fusion of radiomics and deep learning features shows potential in improving the differentiation of MIA and IAC in GGNs. The late fusion strategy demonstrated promising results, warranting further validation in larger, multicenter studies.
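The two fusion strategies can be sketched generically. The following is a hypothetical illustration on synthetic data (not the study's code or features): early fusion concatenates the per-modality feature vectors before a single classifier, while late fusion averages the predicted probabilities of separately trained models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# Two synthetic "modalities" standing in for radiomics and deep features,
# each weakly informative about the label (illustrative only).
radiomics = rng.normal(y[:, None] * 0.8, 1.0, (n, 10))
deep = rng.normal(y[:, None] * 0.8, 1.0, (n, 16))

# Early fusion: concatenate features, train one classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([radiomics, deep]), y)

# Late fusion: train one model per modality, ensemble the probabilities.
m1 = LogisticRegression(max_iter=1000).fit(radiomics, y)
m2 = LogisticRegression(max_iter=1000).fit(deep, y)
late_proba = (m1.predict_proba(radiomics)[:, 1] + m2.predict_proba(deep)[:, 1]) / 2
late_pred = (late_proba >= 0.5).astype(int)

print(early.score(np.hstack([radiomics, deep]), y), (late_pred == y).mean())
```

Late fusion, as used in the paper's best model, lets each modality keep its own classifier and only combines decisions, which can help when modalities differ in dimensionality or noise.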

Leveraging an Image-Enhanced Cross-Modal Fusion Network for Radiology Report Generation.

Guo Y, Hou X, Liu Z, Zhang Y

pubmed logopapers | Aug 11 2025
Radiology report generation (RRG) tasks leverage computer-aided technology to automatically produce descriptive text reports for medical images, aiming to ease radiologists' workload, reduce misdiagnosis rates, and lessen the pressure on medical resources. However, previous works have yet to focus on enhancing feature extraction of low-quality images, incorporating cross-modal interaction information, and mitigating latency in report generation. We propose an Image-Enhanced Cross-Modal Fusion Network (IFNet) for automatic RRG to tackle these challenges. IFNet includes three key components. First, the image enhancement module enhances the detailed representation of typical and atypical structures in X-ray images, thereby boosting detection success rates. Second, the cross-modal fusion networks efficiently and comprehensively capture the interactions of cross-modal features. Finally, a more efficient transformer report generation module is designed to optimize report generation efficiency while being suitable for low-resource devices. Experimental results on public datasets IU X-ray and MIMIC-CXR demonstrate that IFNet significantly outperforms the current state-of-the-art methods.

LR-COBRAS: A logic reasoning-driven interactive medical image data annotation algorithm.

Zhou N, Cao J

pubmed logopapers | Aug 11 2025
The volume of image data generated in the medical field is continuously increasing. Manual annotation is both costly and prone to human error. Additionally, deep learning-based medical image algorithms rely on large, accurately annotated training datasets, which are expensive to produce and often result in instability. This study introduces LR-COBRAS, an interactive computer-aided data annotation algorithm designed for medical experts. LR-COBRAS aims to assist healthcare professionals in achieving more precise annotation outcomes through interactive processes, thereby optimizing medical image annotation tasks. The algorithm enhances must-link and cannot-link constraints during interactions through a logic reasoning module. It automatically generates potential constraint relationships, reducing the frequency of user interactions and improving clustering accuracy. By utilizing rules such as symmetry, transitivity, and consistency, LR-COBRAS effectively balances automation with clinical relevance. Experimental results on the MedMNIST+ and ChestX-ray8 datasets demonstrate that LR-COBRAS significantly outperforms existing methods in clustering accuracy and efficiency while reducing interaction burden, showcasing superior robustness and applicability. This algorithm provides a novel solution for intelligent medical image analysis. The source code for our implementation is available at https://github.com/cjw-bbxc/MILR-COBRAS.
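The constraint-propagation rules the abstract names (symmetry, transitivity, consistency) can be illustrated concretely. This is a minimal sketch of the general idea, not the LR-COBRAS implementation: must-link pairs are closed under transitivity via union-find, and a cannot-link between two items extends to every pair drawn from their must-link groups.

```python
def propagate(must_link, cannot_link):
    """Derive implied constraints from explicit ones (toy illustration)."""
    # Union-find over must-link pairs gives the symmetric/transitive closure.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in must_link:
        parent[find(a)] = find(b)

    # Group items by must-link component.
    groups = {}
    for x in list(parent):
        groups.setdefault(find(x), set()).add(x)

    ml = {frozenset((a, b)) for g in groups.values()
          for a in g for b in g if a != b}
    # Consistency: a cannot-link between two items extends to every pair
    # drawn from their respective must-link groups.
    cl = set()
    for a, b in cannot_link:
        for x in groups.get(find(a), {a}):
            for y in groups.get(find(b), {b}):
                cl.add(frozenset((x, y)))
    return ml, cl

ml, cl = propagate(must_link=[(1, 2), (2, 3)], cannot_link=[(3, 4)])
print(frozenset((1, 3)) in ml, frozenset((1, 4)) in cl)  # True True
```

Generating such implied constraints automatically is what lets an interactive system ask the annotator fewer questions.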

Machine learning models for the prediction of preclinical coal workers' pneumoconiosis: integrating CT radiomics and occupational health surveillance records.

Ma Y, Cui F, Yao Y, Shen F, Qin H, Li B, Wang Y

pubmed logopapers | Aug 11 2025
This study aims to integrate CT imaging with occupational health surveillance data to construct a multimodal model for preclinical coal workers' pneumoconiosis (CWP) identification and individualized risk evaluation. CT images and occupational health surveillance data were retrospectively collected from 874 coal workers, including 228 Stage I and 4 Stage II pneumoconiosis patients, along with 600 healthy workers and 42 workers with subcategory 0/1. First, YOLOX was employed for automated 3D lung extraction prior to radiomics feature extraction. Second, two feature selection algorithms were applied to select critical features from both the CT radiomics and occupational health data. Third, three distinct feature sets were constructed for model training: CT radiomics features, occupational health data, and their multimodal integration. Finally, five machine learning models were implemented to predict the preclinical stage of CWP. Model performance was evaluated using the receiver operating characteristic (ROC) curve, accuracy, sensitivity, and specificity. SHapley Additive exPlanations (SHAP) values were calculated to determine the contribution of each feature in the model with the highest predictive performance. The YOLOX-based lung extraction demonstrated robust performance, achieving an Average Precision (AP) of 0.98. Eight CT radiomic features and four occupational health surveillance features were selected for the multimodal model; the optimal occupational health surveillance feature subset included length of service. Among the five machine learning algorithms evaluated, the Decision Tree-based multimodal model showed superior predictive capacity on the test set of 142 samples, with an AUC of 0.94 (95% CI 0.88-0.99), accuracy of 0.95, specificity of 1.00, and Youden's index of 0.83. SHAP analysis indicated that Total Protein Results, original shape Flatness, and diagnostics Image original Mean were the most influential contributors.
Our study showed that the multimodal model achieved strong predictive capability for the preclinical stage of CWP by integrating CT radiomic features with occupational health data.
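The reported metrics are related by simple definitions; as a generic illustration with hypothetical confusion-matrix counts (chosen to mirror the reported specificity of 1.00 and Youden's index of 0.83, not taken from the study):

```python
# Hypothetical confusion-matrix counts, NOT the study's actual data.
tp, fn, tn, fp = 83, 17, 100, 0

sensitivity = tp / (tp + fn)            # true positive rate
specificity = tn / (tn + fp)            # true negative rate
accuracy = (tp + tn) / (tp + fn + tn + fp)
youden = sensitivity + specificity - 1  # Youden's J statistic

print(round(sensitivity, 2), specificity, round(youden, 2))  # 0.83 1.0 0.83
```

With specificity at 1.00, Youden's index reduces to the sensitivity, which is why a J of 0.83 implies the model caught 83% of preclinical cases in this hypothetical setup.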

Perceptual Evaluation of GANs and Diffusion Models for Generating X-rays

Gregory Schuit, Denis Parra, Cecilia Besa

arxiv logopreprint | Aug 10 2025
Generative image models have achieved remarkable progress in both natural and medical imaging. In the medical context, these techniques offer a potential solution to data scarcity, especially for low-prevalence anomalies that impair the performance of AI-driven diagnostic and segmentation tools. However, questions remain regarding the fidelity and clinical utility of synthetic images, since poor generation quality can undermine model generalizability and trust. In this study, we evaluate the effectiveness of state-of-the-art generative models, Generative Adversarial Networks (GANs) and Diffusion Models (DMs), for synthesizing chest X-rays conditioned on four abnormalities: Atelectasis (AT), Lung Opacity (LO), Pleural Effusion (PE), and Enlarged Cardiac Silhouette (ECS). Using a benchmark composed of real images from the MIMIC-CXR dataset and synthetic images from both GANs and DMs, we conducted a reader study with three radiologists of varied experience. Participants were asked to distinguish real from synthetic images and to assess the consistency between visual features and the target abnormality. Our results show that while DMs generate more visually realistic images overall, GANs can achieve better accuracy for specific conditions, such as the absence of ECS. We further identify visual cues radiologists use to detect synthetic images, offering insights into the perceptual gaps in current models. These findings underscore the complementary strengths of GANs and DMs and point to the need for further refinement to ensure generative models can reliably augment training datasets for AI diagnostic systems.

Pulmonary diseases accurate recognition using adaptive multiscale feature fusion in chest radiography.

Zhou M, Gao L, Bian K, Wang H, Wang N, Chen Y, Liu S

pubmed logopapers | Aug 10 2025
Pulmonary disease can severely impair respiratory function and be life-threatening. Accurately recognizing pulmonary diseases in chest X-ray images is challenging due to overlapping body structures and the complex anatomy of the chest. We propose an adaptive multiscale feature fusion model for recognizing chest X-ray images of three common pulmonary diseases: pneumonia, tuberculosis, and COVID-19. We introduce an Adaptive Multiscale Fusion Network (AMFNet) for pulmonary disease classification in chest X-ray images. AMFNet consists of a lightweight Multiscale Fusion Network (MFNet) and ResNet50 as the secondary feature extraction network. MFNet employs Fusion Blocks with self-calibrated convolution (SCConv) and Attention Feature Fusion (AFF) to capture multiscale semantic features, and integrates a custom activation function, MFReLU, designed to reduce the model's memory access time. A fusion module adaptively combines features from both networks. Experimental results show that AMFNet achieves 97.48% accuracy and an F1 score of 0.9781 on public datasets, outperforming models such as ResNet50, DenseNet121, ConvNeXt-Tiny, and Vision Transformer while using fewer parameters.
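The Attention Feature Fusion (AFF) idea, a learned gate that blends two feature maps, can be sketched in simplified form. This is a hypothetical toy version with a hand-rolled per-channel gate, not AMFNet's actual module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_feature_fusion(x, y, w):
    """Blend two feature maps with a per-channel attention gate.

    x, y: feature maps of shape (channels, height, width)
    w:    per-channel weights of a toy gating function (a stand-in for the
          learned attention sub-network in a real AFF module)
    """
    # Global average pooling of the summed features -> (channels,)
    context = (x + y).mean(axis=(1, 2))
    gate = sigmoid(w * context)[:, None, None]  # per-channel gate in (0, 1)
    return gate * x + (1.0 - gate) * y          # convex combination

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))
y = rng.normal(size=(8, 4, 4))
fused = attention_feature_fusion(x, y, w=np.ones(8))
print(fused.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), each fused value is a convex combination of the two inputs, letting the network weight one scale or branch over the other per channel.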

Quantitative radiomic analysis of computed tomography scans using machine and deep learning techniques accurately predicts histological subtypes of non-small cell lung cancer: A retrospective analysis.

Panchawagh S, Halder A, Haldule S, Sanker V, Lalwani D, Sequeria R, Naik H, Desai A

pubmed logopapers | Aug 9 2025
Non-small cell lung cancer (NSCLC) histological subtypes impact treatment decisions. While pre-surgical histopathological examination is ideal, it is not always feasible. CT radiomic analysis shows promise in predicting NSCLC histological subtypes. We aimed to predict NSCLC histological subtypes using machine learning and deep learning models built on radiomic features. A total of 422 lung CT scans from The Cancer Imaging Archive (TCIA) were analyzed. Primary neoplasms were segmented by expert radiologists. Using PyRadiomics, 2446 radiomic features were extracted; after feature selection, 179 features remained. Machine learning models including logistic regression (LR), support vector machine (SVM), random forest (RF), XGBoost, LightGBM, and CatBoost were employed, alongside a deep neural network (DNN) model. RF demonstrated the highest accuracy at 78% (95% CI: 70%-84%) and an AUC-ROC of 94% (95% CI: 90%-96%). LightGBM, XGBoost, and CatBoost had AUC-ROC values of 95%, 93%, and 93%, respectively. The DNN's AUC was 94.4% (95% CI: 94.1%-94.6%). Logistic regression had the least efficacy. For histological subtype prediction, random forest, boosting models, and the DNN were superior. Quantitative radiomic analysis with machine learning can accurately determine NSCLC histological subtypes. Random forest, ensemble models, and DNNs show significant promise for pre-operative NSCLC classification, which can streamline therapy decisions.
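AUC-ROC, the headline metric here, has a rank-based interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal illustration on toy scores (not the study's data):

```python
def auc_roc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted probabilities for six cases (hypothetical, for illustration).
scores = [0.9, 0.8, 0.7, 0.4, 0.35, 0.1]
labels = [1,   1,   0,   1,   0,    0]
print(round(auc_roc(scores, labels), 3))  # 0.889: 8 of 9 pairs correct
```

This pairwise view explains why AUC is insensitive to the classification threshold, unlike the accuracy figures also reported above.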

"AI tumor delineation for all breathing phases in early-stage NSCLC".

DelaO-Arevalo LR, Sijtsema NM, van Dijk LV, Langendijk JA, Wijsman R, van Ooijen PMA

pubmed logopapers | Aug 9 2025
Accurate delineation of the Gross Tumor Volume (GTV) and the Internal Target Volume (ITV) in early-stage lung tumors is crucial in Stereotactic Body Radiation Therapy (SBRT). Traditionally, the ITVs, which account for breathing motion, are generated by manually contouring GTVs across all breathing phases (BPs), a time-consuming process. This research aims to streamline this workflow by developing a deep learning algorithm to automatically delineate GTVs in all four-dimensional computed tomography (4D-CT) BPs for early-stage non-small cell lung cancer (NSCLC) patients. A dataset of 214 early-stage NSCLC patients treated with SBRT was used. Each patient had a 4D-CT scan containing ten reconstructed BPs. The data were divided into a training set (75%) and a testing set (25%). Three models, SwinUNetR, Dynamic UNet (DynUnet), and a hybrid model combining both (Swin + Dyn), were trained and evaluated using the Dice Similarity Coefficient (DSC), the 3 mm Surface Dice Similarity Coefficient (SDSC), and the 95th percentile Hausdorff distance (HD95). The best-performing model was used to delineate GTVs in all test-set BPs, and the ITVs were created using two methods: all 10 phases, and the maximum inspiration/expiration phases only. The resulting ITVs were compared to the ground-truth ITVs. The Swin + Dyn model achieved the highest performance, with a test-set SDSC of 0.79 ± 0.14 for GTV 50%. For the ITVs, the SDSC was 0.79 ± 0.16 using all 10 BPs and 0.77 ± 0.14 using 2 BPs. At the voxel level, the Swin + Dyn network achieved a sensitivity of 0.75 ± 0.14 and precision of 0.84 ± 0.10 for the ITV from 2 breathing phases, and a sensitivity of 0.79 ± 0.12 and precision of 0.80 ± 0.11 for the 10 breathing phases. The Swin + Dyn algorithm, trained on the maximum-expiration CT scan, effectively delineated gross tumor volumes in all breathing phases, and the resulting ITVs showed good agreement with the ground truth (surface DSC = 0.79 ± 0.16 using all 10 BPs and 0.77 ± 0.14 using 2 BPs).
The proposed approach could reduce delineation time and inter-performer variability in the tumor contouring process for NSCLC SBRT workflows.
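The ITV construction described above (the union of GTV contours over breathing phases) and the Dice Similarity Coefficient both have simple voxel-wise definitions. A toy sketch on tiny binary masks, not the study's pipeline:

```python
import numpy as np

def itv_from_gtvs(gtv_masks):
    """ITV as the voxel-wise union of GTV masks across breathing phases."""
    return np.any(np.stack(gtv_masks), axis=0)

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 1D "masks" standing in for 3D volumes (hypothetical data): two
# breathing phases in which the tumor has shifted by one voxel.
phase_masks = [np.array([0, 1, 1, 0, 0], bool),
               np.array([0, 0, 1, 1, 0], bool)]
itv = itv_from_gtvs(phase_masks)
print(itv.astype(int), round(dice(itv, phase_masks[0]), 2))  # [0 1 1 1 0] 0.8
```

The same union logic applies whether 2 or all 10 phases are used; adding phases can only grow (never shrink) the ITV.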

Enhanced hyper tuning using bioinspired-based deep learning model for accurate lung cancer detection and classification.

Kumari J, Sinha S, Singh L

pubmed logopapers | Aug 9 2025
Lung cancer (LC) is one of the leading causes of cancer-related deaths worldwide, and early recognition is critical for enhancing patient outcomes. However, existing LC detection techniques face challenges such as high computational demands, complex data integration, scalability limitations, and difficulties in achieving rigorous clinical validation. This research proposes an Enhanced Hyper Tuning Deep Learning (EHTDL) model utilizing bioinspired algorithms to overcome these limitations and improve the accuracy and efficiency of LC detection and classification. The methodology begins with the Smooth Edge Enhancement (SEE) technique for preprocessing CT images, followed by feature extraction using GLCM-based texture analysis. To refine the features and reduce dimensionality, a hybrid feature selection approach combining Grey Wolf Optimization (GWO) and Differential Evolution (DE) is employed. Precise lung segmentation is performed using Mask R-CNN to ensure accurate delineation of lung regions. A Deep Fractal Edge Classifier (DFEC) is introduced, consisting of five fractal blocks with convolutional and pooling layers that progressively learn LC characteristics. The proposed EHTDL model achieves remarkable performance metrics, including 99% accuracy, 100% precision, 98% recall, and a 99% F1-score, demonstrating its robustness and effectiveness. The model's scalability and efficiency make it suitable for real-time clinical application, offering a promising solution for early LC detection and significantly enhancing patient care.
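GLCM-based texture analysis, the feature-extraction step named above, counts how often pairs of gray levels co-occur at a fixed pixel offset; texture features such as contrast are then derived from the normalized matrix. A minimal sketch for a horizontal (0, 1) offset, not the paper's implementation:

```python
import numpy as np

def glcm(image, levels):
    """Gray-level co-occurrence matrix for a (0, 1) horizontal offset."""
    m = np.zeros((levels, levels))
    for row in image:
        for a, b in zip(row[:-1], row[1:]):
            m[a, b] += 1
    return m / m.sum()  # normalize to joint probabilities

def contrast(m):
    """GLCM contrast: sum of p(i, j) * (i - j)^2, a classic Haralick feature."""
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

# Tiny toy image with 3 gray levels (illustrative data).
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 0]])
m = glcm(img, levels=3)
print(round(contrast(m), 3))  # 1.0
```

High contrast indicates frequent large gray-level jumps between neighboring pixels, the kind of texture cue radiomics pipelines feed into downstream classifiers.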