Comparative evaluation of supervised and unsupervised deep learning strategies for denoising hyperpolarized ¹²⁹Xe lung MRI.

Bdaiwi AS, Willmering MM, Hussain R, Hysinger E, Woods JC, Walkup LL, Cleveland ZI

PubMed | Aug 14 2025
Reduced signal-to-noise ratio (SNR) in hyperpolarized ¹²⁹Xe MR images can compromise accurate quantification in research and diagnostic evaluations. This study therefore explores supervised deep learning (DL) denoising approaches, traditional (Trad) and Noise2Noise (N2N), and an unsupervised Noise2Void (N2V) approach for ¹²⁹Xe MR imaging. The DL denoising frameworks were trained and tested on 952 ¹²⁹Xe MRI data sets (421 ventilation, 125 diffusion-weighted, and 406 gas-exchange acquisitions) from healthy subjects and participants with cardiopulmonary conditions, and were compared with the block-matching 3D (BM3D) denoising technique. Evaluation involved mean signal, noise standard deviation (SD), SNR, and sharpness. Ventilation defect percentage (VDP), apparent diffusion coefficient (ADC), and membrane uptake, red blood cell (RBC) transfer, and RBC:membrane ratio were also evaluated for ventilation, diffusion, and gas-exchange images, respectively. Denoising methods significantly reduced noise SDs and enhanced SNR (p < 0.05) across all imaging types. The traditional ventilation model (Trad_vent) improved sharpness in ventilation images but underestimated VDP (bias = -1.37%) relative to raw images, whereas N2N_vent overestimated VDP (bias = +1.88%). BM3D and N2V_vent showed minimal VDP bias (≤ 0.35%). Denoising significantly reduced ADC mean and SD (p < 0.05, bias ≤ -0.63 × 10⁻²). Trad_vent and N2N_vent increased mean membrane uptake and RBC transfer (p < 0.001) with no change in RBC:membrane ratio. Denoising also reduced the SDs of all gas-exchange metrics (p < 0.01). Low SNR may limit the potential of ¹²⁹Xe MRI for clinical diagnosis and lung function assessment; the supervised and unsupervised DL denoising methods evaluated here enhanced ¹²⁹Xe image quality, offering promise for improved clinical interpretation and diagnosis.
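Mean signal, noise SD, and SNR are the core image-quality metrics here. Below is a minimal sketch of how such an SNR estimate is typically computed from a signal ROI and a background (noise-only) region; the mask names and the ROI convention are assumptions, since the abstract does not give the paper's exact definition.

```python
import numpy as np

def estimate_snr(image: np.ndarray, signal_mask: np.ndarray, noise_mask: np.ndarray) -> float:
    """SNR as mean intensity inside a signal ROI divided by the standard
    deviation of intensities in a background, noise-only region."""
    mean_signal = image[signal_mask].mean()
    noise_sd = image[noise_mask].std(ddof=1)
    return float(mean_signal / noise_sd)

# toy usage on a synthetic 2D slice: bright center ROI, background elsewhere
img = np.random.default_rng(0).normal(10.0, 1.0, (64, 64))
sig = np.zeros_like(img, dtype=bool)
sig[16:48, 16:48] = True
print(estimate_snr(img, sig, ~sig))
```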

Lung-DDPM: Semantic Layout-guided Diffusion Models for Thoracic CT Image Synthesis.

Jiang Y, Lemarechal Y, Bafaro J, Abi-Rjeile J, Joubert P, Despres P, Manem V

PubMed | Aug 14 2025
With the rapid development of artificial intelligence (AI), AI-assisted medical imaging analysis demonstrates remarkable performance in early lung cancer screening. However, the costly annotation process and privacy concerns limit the construction of large-scale medical datasets, hampering the further application of AI in healthcare. To address the data scarcity in lung cancer screening, we propose Lung-DDPM, a thoracic CT image synthesis approach that effectively generates high-fidelity 3D synthetic CT images, which prove helpful in downstream lung nodule segmentation tasks. Our method is based on semantic layout-guided denoising diffusion probabilistic models (DDPM), enabling anatomically reasonable, seamless, and consistent sample generation even from incomplete semantic layouts. Our results suggest that the proposed method outperforms other state-of-the-art (SOTA) generative models in image quality evaluation and downstream lung nodule segmentation tasks. Specifically, Lung-DDPM achieved superior performance on our large validation cohort, with a Fréchet inception distance (FID) of 0.0047, maximum mean discrepancy (MMD) of 0.0070, and mean squared error (MSE) of 0.0024. These results were 7.4×, 3.1×, and 29.5× better than the second-best competitors, respectively. Furthermore, the lung nodule segmentation model, trained on a dataset combining real and Lung-DDPM-generated synthetic samples, attained a Dice Coefficient (Dice) of 0.3914 and sensitivity of 0.4393. This represents 8.8% and 18.6% improvements in Dice and sensitivity compared to the model trained solely on real samples. The experimental results highlight Lung-DDPM's potential for a broader range of medical imaging applications, such as general tumor segmentation, cancer survival estimation, and risk prediction. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM/.
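Of the three reported synthesis metrics, maximum mean discrepancy is the easiest to show compactly. Below is a generic biased RBF-kernel MMD² estimator between two feature sets; the kernel choice and bandwidth are assumptions, not necessarily the paper's protocol (FID would additionally require Inception features).

```python
import numpy as np

def rbf_mmd2(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased estimate of MMD^2 between samples x (n, d) and y (m, d)
    under an RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return float(k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean())

rng = np.random.default_rng(0)
real, synth = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
print(rbf_mmd2(real, synth))  # near 0 when the two feature distributions match
```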

The association of symptoms, pulmonary function testing, and computed tomography in interstitial lung disease at the onset of connective tissue disease: an observational study with artificial intelligence analysis of high-resolution computed tomography.

Hoffmann T, Teichgräber U, Brüheim LB, Lassen-Schmidt B, Renz D, Weise T, Krämer M, Oelzner P, Böttcher J, Güttler F, Wolf G, Pfeil A

PubMed | Aug 12 2025
Interstitial lung disease (ILD) is a common and serious organ manifestation in patients with connective tissue disease (CTD), but it is uncertain whether ILD differs between symptomatic and asymptomatic patients. We therefore conducted a study to evaluate differences in the radiological extent of ILD between symptomatic and asymptomatic patients, using artificial intelligence (AI)-based quantification of pulmonary high-resolution computed tomography (AIqpHRCT). Within the study, 67 cross-sectional HRCT datasets and clinical data (including pulmonary function tests) of consecutive patients (mean age: 57.1 ± 14.7 years; women: n = 45, 67.2%) with both an initial diagnosis of CTD (systemic sclerosis being the most frequent; n = 21, 31.3%) and ILD (all without immunosuppressive therapy) were analysed using AIqpHRCT. Of the patients with ILD at initial diagnosis of CTD, 25.4% (n = 17) had no pulmonary symptoms. Regarding the baseline characteristics (age, gender, disease), there was no significant difference between the symptomatic and asymptomatic groups. Pulmonary function testing (PFT) revealed the following mean values (% predicted) in the symptomatic and asymptomatic groups, respectively: forced vital capacity (FVC) 69.4 ± 17.4% versus 86.1 ± 15.8% (p = 0.001), and diffusing capacity of the lung for carbon monoxide (DLCO) 49.7 ± 17.9% versus 60.0 ± 15.8% (p = 0.043). AIqpHRCT data showed a significantly higher amount of high-attenuated volume (HAV) (14.8 ± 11.0% versus 8.9 ± 3.9%; p = 0.021) and reticulation (5.4 ± 8.7% versus 1.4 ± 1.5%; p = 0.035) in symptomatic patients. A quarter of patients with ILD at the time of initial CTD diagnosis had no pulmonary symptoms, and DLCO was reduced in both groups. AIqpHRCT also demonstrated clinically relevant ILD in asymptomatic patients. These results underline the importance of early, risk-adapted screening for ILD in asymptomatic CTD patients as well, as ILD is associated with increased mortality.
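The group comparisons above (e.g., FVC 69.4 ± 17.4% vs. 86.1 ± 15.8%, p = 0.001) are standard two-sample tests. A sketch using Welch's t-test on synthetic values matching the reported means, SDs, and group sizes follows; the individual-level data and the paper's exact test are not given here, so this is purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic %predicted FVC values drawn to match the reported group
# statistics (50 symptomatic vs. 17 asymptomatic); not study data.
fvc_symptomatic = rng.normal(69.4, 17.4, size=50)
fvc_asymptomatic = rng.normal(86.1, 15.8, size=17)

# Welch's t-test (unequal variances), a common choice for such comparisons
t, p = stats.ttest_ind(fvc_symptomatic, fvc_asymptomatic, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```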

Leveraging an Image-Enhanced Cross-Modal Fusion Network for Radiology Report Generation.

Guo Y, Hou X, Liu Z, Zhang Y

PubMed | Aug 11 2025
Radiology report generation (RRG) tasks leverage computer-aided technology to automatically produce descriptive text reports for medical images, aiming to ease radiologists' workload, reduce misdiagnosis rates, and lessen the pressure on medical resources. However, previous works have yet to focus on enhancing feature extraction from low-quality images, incorporating cross-modal interaction information, and mitigating latency in report generation. We propose an Image-Enhanced Cross-Modal Fusion Network (IFNet) for automatic RRG to tackle these challenges. IFNet includes three key components. First, an image enhancement module enhances the detailed representation of typical and atypical structures in X-ray images, thereby boosting detection success rates. Second, cross-modal fusion networks efficiently and comprehensively capture the interactions between cross-modal features. Finally, a more efficient transformer-based report generation module optimizes generation efficiency while remaining suitable for low-resource devices. Experimental results on the public IU X-ray and MIMIC-CXR datasets demonstrate that IFNet significantly outperforms current state-of-the-art methods.
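As a rough illustration of what a cross-modal fusion component does, here is a generic cross-attention block in which report tokens attend to image features. This is a sketch of the general idea only, not IFNet's published architecture, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Text tokens query image features via multi-head cross-attention,
    with a residual connection and layer norm."""
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # queries come from the report decoder, keys/values from the image encoder
        fused, _ = self.attn(text_tokens, image_feats, image_feats)
        return self.norm(text_tokens + fused)

# usage: fuse 60 report tokens with 49 visual patch features, both 512-d
out = CrossModalFusion()(torch.randn(2, 60, 512), torch.randn(2, 49, 512))
```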

Deep learning and radiomics fusion for predicting the invasiveness of lung adenocarcinoma within ground glass nodules.

Sun Q, Yu L, Song Z, Wang C, Li W, Chen W, Xu J, Han S

PubMed | Aug 11 2025
Microinvasive adenocarcinoma (MIA) and invasive adenocarcinoma (IAC) require distinct treatment strategies and are associated with different prognoses, underscoring the importance of accurate differentiation. This study aims to develop a predictive model that combines radiomics and deep learning to effectively distinguish between MIA and IAC. In this retrospective study, 252 pathologically confirmed cases of ground-glass nodules (GGNs) were included, with 177 allocated to the training set and 75 to the testing set. Radiomics, 2D deep learning, and 3D deep learning models were constructed based on CT images. In addition, two fusion strategies were employed to integrate these modalities: early fusion, which concatenates features from all modalities prior to classification, and late fusion, which ensembles the output probabilities of the individual models. The predictive performance of all five models was evaluated using the area under the receiver operating characteristic curve (AUC), and DeLong's test was performed to compare differences in AUC between models. The radiomics model achieved an AUC of 0.794 (95% CI: 0.684-0.898), while the 2D and 3D deep learning models achieved AUCs of 0.754 (95% CI: 0.594-0.882) and 0.847 (95% CI: 0.724-0.945), respectively, in the testing set. Among the fusion models, the late fusion strategy demonstrated the highest predictive performance, with an AUC of 0.898 (95% CI: 0.784-0.962), outperforming the early fusion model, which achieved an AUC of 0.857 (95% CI: 0.731-0.936). Although the differences were not statistically significant, the late fusion model yielded the highest numerical values for diagnostic accuracy, sensitivity, and specificity across all models. The fusion of radiomics and deep learning features shows potential in improving the differentiation of MIA and IAC in GGNs. The late fusion strategy demonstrated promising results, warranting further validation in larger, multicenter studies.
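The two fusion strategies are simple to state in code: early fusion concatenates per-modality features before a single classifier, while late fusion averages the predicted probabilities of per-modality models. A sketch with logistic-regression stand-ins and random feature matrices follows; the study's actual classifiers and features (radiomics plus 2D/3D deep features) are not specified here, so everything below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 177)  # labels for a 177-case training set
rad, dl2d, dl3d = (rng.normal(size=(177, k)) for k in (30, 64, 64))

# Early fusion: concatenate all modality features, then fit one classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([rad, dl2d, dl3d]), y)

# Late fusion: fit one model per modality and ensemble their probabilities.
models = [LogisticRegression(max_iter=1000).fit(X, y) for X in (rad, dl2d, dl3d)]
late_prob = np.mean(
    [m.predict_proba(X)[:, 1] for m, X in zip(models, (rad, dl2d, dl3d))], axis=0
)
```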

Machine learning models for the prediction of preclinical coal workers' pneumoconiosis: integrating CT radiomics and occupational health surveillance records.

Ma Y, Cui F, Yao Y, Shen F, Qin H, Li B, Wang Y

PubMed | Aug 11 2025
This study aims to integrate CT imaging with occupational health surveillance data to construct a multimodal model for preclinical coal workers' pneumoconiosis (CWP) identification and individualized risk evaluation. CT images and occupational health surveillance data were retrospectively collected from 874 coal workers: 228 Stage I and 4 Stage II pneumoconiosis patients, along with 600 healthy and 42 subcategory 0/1 coal workers. First, YOLOX was employed for automated 3D lung extraction, from which radiomics features were computed. Second, two feature selection algorithms were applied to select critical features from both the CT radiomics and the occupational health data. Third, three distinct feature sets were constructed for model training: CT radiomics features, occupational health data, and their multimodal integration. Finally, five machine learning models were implemented to predict the preclinical stage of CWP. Model performance was evaluated using the receiver operating characteristic (ROC) curve, accuracy, sensitivity, and specificity. SHapley Additive exPlanations (SHAP) values were calculated to determine the contribution of each feature in the model with the highest predictive performance. The YOLOX-based lung extraction demonstrated robust performance, achieving an average precision (AP) of 0.98. Eight CT radiomic features and four occupational health surveillance variables were selected for the multimodal model; the optimal occupational health surveillance feature subset included length of service. Among the five machine learning algorithms evaluated, the decision tree-based multimodal model showed superior predictive capacity on the test set of 142 samples, with an AUC of 0.94 (95% CI 0.88-0.99), an accuracy of 0.95, a specificity of 1.00, and a Youden's index of 0.83. SHAP analysis indicated that Total Protein Results, original shape Flatness, and diagnostics Image original Mean were the most influential contributors. Our study demonstrated that a multimodal model integrating CT radiomic features with occupational health data has strong predictive capability for the preclinical stage of CWP.
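The interpretability step can be sketched with the shap library's tree explainer. The feature matrix below is a random stand-in for the 8 radiomic plus 4 surveillance features, and the model settings are assumptions rather than the study's configuration.

```python
import numpy as np
import shap
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(142, 12))   # stand-in: 8 radiomic + 4 surveillance features
y = rng.integers(0, 2, 142)

model = DecisionTreeClassifier(max_depth=4).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# shap may return per-class attributions for classifiers; keep the positive class
if isinstance(sv, list):          # older shap: list of per-class arrays
    sv = sv[1]
elif sv.ndim == 3:                # newer shap: (n_samples, n_features, n_classes)
    sv = sv[:, :, 1]
mean_abs = np.abs(sv).mean(axis=0)
print(np.argsort(-mean_abs))      # feature indices ranked by mean |SHAP| value
```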

LR-COBRAS: A logic reasoning-driven interactive medical image data annotation algorithm.

Zhou N, Cao J

PubMed | Aug 11 2025
The volume of image data generated in the medical field is continuously increasing. Manual annotation is both costly and prone to human error. Additionally, deep learning-based medical image algorithms rely on large, accurately annotated training datasets, which are expensive to produce and whose annotation errors often lead to instability. This study introduces LR-COBRAS, an interactive computer-aided data annotation algorithm designed for medical experts. LR-COBRAS aims to assist healthcare professionals in achieving more precise annotation outcomes through interactive processes, thereby optimizing medical image annotation tasks. The algorithm enhances must-link and cannot-link constraints during interactions through a logic reasoning module, automatically generating potential constraint relationships that reduce the frequency of user interactions and improve clustering accuracy. By utilizing rules such as symmetry, transitivity, and consistency, LR-COBRAS effectively balances automation with clinical relevance. Experimental results on the MedMNIST+ and ChestX-ray8 datasets demonstrate that LR-COBRAS significantly outperforms existing methods in clustering accuracy, efficiency, and interaction burden, showcasing superior robustness and applicability. This algorithm provides a novel solution for intelligent medical image analysis. The source code for our implementation is available at https://github.com/cjw-bbxc/MILR-COBRAS.
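The symmetry, transitivity, and consistency rules mentioned above can be made concrete with a small constraint store: must-link behaves as an equivalence relation (a union-find here), and a cannot-link lifts to whole must-link groups. This is a generic sketch of logic-based constraint propagation, not the published LR-COBRAS implementation.

```python
class ConstraintStore:
    """Must-link / cannot-link propagation via symmetry, transitivity
    (union-find), and consistency checking."""

    def __init__(self, n: int):
        self.parent = list(range(n))
        self.cannot = set()                     # unordered pairs of group roots

    def find(self, i: int) -> int:
        while self.parent[i] != i:              # with path compression
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def must_link(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if (min(ra, rb), max(ra, rb)) in self.cannot:
            raise ValueError("inconsistent: must-link contradicts cannot-link")
        self.parent[ra] = rb                    # merge groups (transitivity)
        self.cannot = {                         # re-root pairs touching the merged root
            tuple(sorted(rb if r == ra else r for r in pair)) for pair in self.cannot
        }

    def cannot_link(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            raise ValueError("inconsistent: cannot-link inside a must-link group")
        self.cannot.add((min(ra, rb), max(ra, rb)))

    def implied(self, a: int, b: int):
        """Constraint derivable without asking the annotator, if any."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return "must-link"
        if (min(ra, rb), max(ra, rb)) in self.cannot:
            return "cannot-link"
        return None

cs = ConstraintStore(4)
cs.must_link(0, 1); cs.must_link(1, 2); cs.cannot_link(2, 3)
assert cs.implied(0, 2) == "must-link" and cs.implied(0, 3) == "cannot-link"
```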

Accurate recognition of pulmonary diseases using adaptive multiscale feature fusion in chest radiography.

Zhou M, Gao L, Bian K, Wang H, Wang N, Chen Y, Liu S

PubMed | Aug 10 2025
Pulmonary disease can severely impair respiratory function and be life-threatening. Accurately recognizing pulmonary diseases in chest X-ray images is challenging due to overlapping body structures and the complex anatomy of the chest. We propose an adaptive multiscale feature fusion model for recognizing pneumonia, tuberculosis, and COVID-19, three common pulmonary diseases, in chest X-ray images. Specifically, we introduce an Adaptive Multiscale Fusion Network (AMFNet) for pulmonary disease classification. AMFNet consists of a lightweight Multiscale Fusion Network (MFNet) and ResNet50 as the secondary feature extraction network. MFNet employs fusion blocks with self-calibrated convolution (SCConv) and Attention Feature Fusion (AFF) to capture multiscale semantic features, and integrates a custom activation function, MFReLU, designed to reduce the model's memory access time. A fusion module adaptively combines the features from both networks. Experimental results show that AMFNet achieves 97.48% accuracy and an F1 score of 0.9781 on public datasets, outperforming models such as ResNet50, DenseNet121, ConvNeXt-Tiny, and Vision Transformer while using fewer parameters.
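The adaptive fusion of features from two branches can be pictured as a learned, per-location gate that convexly combines the two feature maps. This is a generic sketch of that idea, not AMFNet's published module, whose internals are not given in the abstract.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse two same-shape feature maps with a learned sigmoid gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([a, b], dim=1))  # per-location weights in (0, 1)
        return w * a + (1 - w) * b               # convex combination of branches

fused = AdaptiveFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```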

"AI tumor delineation for all breathing phases in early-stage NSCLC".

DelaO-Arevalo LR, Sijtsema NM, van Dijk LV, Langendijk JA, Wijsman R, van Ooijen PMA

PubMed | Aug 9 2025
Accurate delineation of the gross tumor volume (GTV) and the internal target volume (ITV) in early-stage lung tumors is crucial in stereotactic body radiation therapy (SBRT). Traditionally, the ITVs, which account for breathing motion, are generated by manually contouring GTVs across all breathing phases (BPs), a time-consuming process. This research aims to streamline this workflow by developing a deep learning algorithm to automatically delineate GTVs in all four-dimensional computed tomography (4D-CT) BPs for early-stage non-small cell lung cancer (NSCLC) patients. A dataset of 214 early-stage NSCLC patients treated with SBRT was used. Each patient had a 4D-CT scan containing ten reconstructed BPs. The data were divided into a training set (75%) and a testing set (25%). Three models, SwinUNETR, Dynamic UNet (DynUNet), and a hybrid model combining both (Swin+Dyn), were trained and evaluated using the Dice similarity coefficient (DSC), the 3 mm surface Dice similarity coefficient (SDSC), and the 95th percentile Hausdorff distance (HD95). The best-performing model was used to delineate GTVs in all test-set BPs, creating the ITVs using two methods: all 10 phases, or only the maximum inspiration/expiration phases. These ITVs were compared to the ground-truth ITVs. The Swin+Dyn model achieved the highest performance, with a test-set SDSC of 0.79 ± 0.14 for GTV 50%. For the ITVs, the SDSC was 0.79 ± 0.16 using all 10 BPs and 0.77 ± 0.14 using 2 BPs. At the voxel level, the Swin+Dyn network achieved a sensitivity of 0.75 ± 0.14 and a precision of 0.84 ± 0.10 for the ITV from 2 breathing phases, and a sensitivity of 0.79 ± 0.12 and a precision of 0.80 ± 0.11 for the 10 breathing phases. In summary, the Swin+Dyn algorithm, trained on the maximum-expiration CT scan, effectively delineated gross tumor volumes in all breathing phases, and the resulting ITVs showed good agreement with the ground truth (surface DSC = 0.79 ± 0.16 using all 10 BPs and 0.77 ± 0.14 using 2 BPs). The proposed approach could reduce delineation time and inter-observer variability in the tumor contouring process for NSCLC SBRT workflows.
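The evaluation rests on overlap metrics and on forming the ITV from per-phase GTVs. A minimal sketch of the Dice coefficient and of taking the ITV as the voxel-wise union of per-phase masks follows, assuming all phases share one grid (the usual 4D-CT convention).

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def itv_from_phases(gtv_masks: list) -> np.ndarray:
    """ITV as the voxel-wise union of the per-phase GTV masks."""
    return np.logical_or.reduce(gtv_masks)

# e.g., union over all 10 breathing phases, or only the two extreme phases
rng = np.random.default_rng(0)
phases = [rng.random((32, 32, 32)) > 0.95 for _ in range(10)]
itv10, itv2 = itv_from_phases(phases), itv_from_phases([phases[0], phases[5]])
print(dice(itv10, itv2))
```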

Prediction of Early Recurrence After Bronchial Arterial Chemoembolization in Non-small Cell Lung Cancer Patients Using Dual-energy CT: An Interpretable Model Based on SHAP Methodology.

Feng Y, Xu Y, Wang J, Cao Z, Liu B, Du Z, Zhou L, Hua H, Wang W, Mei J, Lai L, Tu J

PubMed | Aug 9 2025
Bronchial artery chemoembolization (BACE) is a new treatment for lung cancer. This study aimed to investigate the ability of dual-energy computed tomography (DECT) to predict early recurrence (ER) after BACE among patients with non-small cell lung cancer (NSCLC) who failed first-line therapy. Clinical and imaging data from NSCLC patients undergoing BACE at Wenzhou Medical University Affiliated Fifth *** Hospital (10/2023-06/2024) were retrospectively analyzed. Logistic regression (LR) machine learning models were developed using five arterial-phase (AP) virtual monoenergetic images (VMIs; 40, 70, 100, 120, and 150 keV), while deep learning models utilized ResNet50/101/152 architectures with iodine maps. A combined model integrating the optimal Rad-score, DL-score, and clinical features was then established. Model performance was assessed via the area under the receiver operating characteristic curve (AUC), with the SHapley Additive exPlanations (SHAP) framework applied for interpretability. A total of 196 patients were enrolled (training cohort: n = 158; testing cohort: n = 38). The 100 keV machine learning model demonstrated superior performance (AUC = 0.751) compared with the other VMIs, and the deep learning model based on ResNet101 (AUC = 0.791) performed better than the other architectures. The hybrid model combining Rad-score-100keV-A, Rad-score-100keV-V, DL-score-ResNet101-A, DL-score-ResNet101-V, and clinical features exhibited the best performance (AUC = 0.798) among all models. DECT holds promise for predicting ER after BACE among NSCLC patients who have failed first-line therapy, offering valuable guidance for clinical treatment planning.
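The evaluation step common to all of these models, fitting a classifier on VMI-derived features and reporting test-set AUC, looks roughly like this in scikit-learn. The feature matrices are random stand-ins sized to the reported cohorts (158 training / 38 testing), not study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Stand-ins for radiomics features from a 100 keV virtual monoenergetic image;
# real inputs would be features extracted from the arterial-phase VMI series.
X_train, y_train = rng.normal(size=(158, 20)), rng.integers(0, 2, 158)
X_test, y_test = rng.normal(size=(38, 20)), rng.integers(0, 2, 38)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.3f}")
```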