
Benign-Malignant Classification of Pulmonary Nodules in CT Images Based on Fractal Spectrum Analysis

Ma, Y., Lei, S., Wang, B., Qiao, Y., Xing, F., Liang, T.

medRxiv preprint · Aug 26 2025
This study reveals that pulmonary nodules exhibit distinct multifractal characteristics, with malignant nodules demonstrating significantly higher fractal dimensions at larger scales. Based on this finding, an automatic benign-malignant classification method for pulmonary nodules in CT images was developed using fractal spectrum analysis. By computing continuous three-dimensional fractal dimensions on 121 nodule samples from the LIDC-IDRI database, a 201-dimensional fractal feature spectrum was extracted, and a simplified multilayer perceptron neural network (with only two minimal 6-node intermediate layers) was constructed for pulmonary nodule classification. Experimental results demonstrate that this method achieved 96.69% accuracy in distinguishing benign from malignant pulmonary nodules. The discovery of scale-dependent multifractal properties enables fractal spectrum analysis to effectively capture the complexity differences in the multi-scale structure of malignant nodules, providing an efficient and interpretable AI-aided diagnostic method for early lung cancer diagnosis.
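The classifier described above can be sketched as a minimal forward pass: a 201-dimensional fractal spectrum flows through two 6-node hidden layers to a benign/malignant probability. The weights below are random placeholders (the paper's would be learned from the LIDC-IDRI spectra), so this only illustrates the network's shape, not its trained behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder weights; in the paper these would be learned parameters.
W1 = rng.normal(scale=0.1, size=(201, 6))  # input spectrum -> 6-node layer
W2 = rng.normal(scale=0.1, size=(6, 6))    # 6-node layer -> 6-node layer
W3 = rng.normal(scale=0.1, size=(6, 1))    # -> malignancy probability

def classify(spectrum):
    """Forward pass: 201-dim fractal spectrum -> malignancy probability."""
    h = relu(spectrum @ W1)   # first minimal hidden layer
    h = relu(h @ W2)          # second minimal hidden layer
    return sigmoid(h @ W3)    # probability in (0, 1)

batch = rng.normal(size=(4, 201))  # four synthetic feature spectra
probs = classify(batch)
```

The interest of such a tiny intermediate representation is interpretability and speed: with only 6-node layers, the mapping from fractal features to the decision stays inspectable.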

Multimodal Positron Emission Tomography/Computed Tomography Radiomics Combined with a Clinical Model for Preoperative Prediction of Invasive Pulmonary Adenocarcinoma in Ground-Glass Nodules.

Wang X, Li P, Li Y, Zhang R, Duan F, Wang D

PubMed · Aug 25 2025
To develop and validate predictive models based on <sup>18</sup>F-fluorodeoxyglucose positron emission tomography/computed tomography (<sup>18</sup>F-FDG PET/CT) radiomics and a clinical model for differentiating invasive adenocarcinoma (IAC) from non-invasive ground-glass nodules (GGNs) in early-stage lung cancer. A total of 164 patients with GGNs histologically confirmed as part of the lung adenocarcinoma spectrum (including both invasive and non-invasive subtypes) underwent preoperative <sup>18</sup>F-FDG PET/CT followed by surgery and were included. Radiomic features were extracted from PET and CT images. Models were constructed using support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost). Five predictive models (CT, PET, PET/CT, Clinical, Combined) were evaluated using receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and calibration curves. Statistical comparisons were performed using DeLong's test, net reclassification improvement (NRI), and integrated discrimination improvement (IDI). The Combined model, integrating PET/CT radiomic features with the clinical model, achieved the highest diagnostic performance (AUC: 0.950 in training, 0.911 in test). It consistently showed superior IDI and NRI across both cohorts and significantly outperformed the clinical model (DeLong p = 0.027), confirming its enhanced predictive power through multimodal integration. A clinical nomogram was constructed from the final model to support individualized risk stratification. Integrating PET/CT radiomic features with a clinical model significantly enhances the preoperative prediction of GGN invasiveness. This multimodal approach may assist in preoperative risk stratification and support personalized surgical decision-making in early-stage lung adenocarcinoma.
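The reported AUCs (0.950 in training, 0.911 in test) are areas under ROC curves. A minimal sketch of how such a value is computed from predicted risk scores, using the rank-sum (Mann-Whitney) identity rather than any of the paper's actual models:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum identity:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 0.5)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical labels and model scores for four cases.
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]  # one positive is outscored by one negative
```

Here three of the four positive/negative pairs are correctly ordered, so the AUC is 0.75; DeLong's test, used in the abstract, compares two such AUCs on the same cases.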

ESR Essentials: lung cancer screening with low-dose CT-practice recommendations by the European Society of Thoracic Imaging.

Revel MP, Biederer J, Nair A, Silva M, Jacobs C, Snoeckx A, Prokop M, Prosch H, Parkar AP, Frauenfelder T, Larici AR

PubMed · Aug 23 2025
Low-dose CT screening for lung cancer reduces the risk of death from lung cancer by at least 21% in high-risk participants and should be offered to people aged between 50 and 75 with at least 20 pack-years of smoking. Iterative reconstruction or deep learning algorithms should be used to keep the effective dose below 1 mSv. Deep learning algorithms are required to facilitate the detection of nodules and the measurement of their volumetric growth. Only solid nodules larger than 500 mm<sup>3</sup>, those with spiculations, bubble-like lucencies, or pleural indentation, and complex cysts should be investigated further. Short-term follow-up at 3 or 6 months is required for solid nodules of 100 to 500 mm<sup>3</sup>. A watchful waiting approach is recommended for most subsolid nodules, to limit the risk of overtreatment. Finally, the description of additional findings must be limited if lung cancer screening (LCS) is to be cost-effective. KEY POINTS: Low-dose CT screening reduces the risk of death from lung cancer by at least 21% in high-risk individuals, with a greater benefit in women. Quality assurance of screening is essential to control radiation dose and the number of false positives. Screening with low-dose CT scans detects incidental findings of variable clinical relevance; only those of clinical importance should be reported.
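The volume thresholds in these recommendations can be read as a simple triage rule. The sketch below encodes them for illustration only; the function name, flags, and return strings are invented here, and this is emphatically not a clinical tool.

```python
def nodule_recommendation(volume_mm3, solid=True, suspicious_morphology=False):
    """Toy triage rule following the volume cut-offs quoted in the
    recommendations above. `suspicious_morphology` stands in for
    spiculation, bubble-like lucencies, or pleural indentation.
    Illustrative only, not for clinical use."""
    if not solid:
        # Most subsolid nodules: watchful waiting to limit overtreatment.
        return "watchful waiting"
    if volume_mm3 > 500 or suspicious_morphology:
        return "further investigation"
    if volume_mm3 >= 100:
        return "follow-up CT at 3-6 months"
    return "routine screening interval"
```

Encoding such rules explicitly is how screening programs keep nodule management auditable; real implementations also account for growth rate between rounds.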

Predicting pediatric age from chest X-rays using deep learning: a novel approach.

Li M, Zhao J, Liu H, Jin B, Cui X, Wang D

PubMed · Aug 23 2025
Accurate age estimation is essential for assessing pediatric developmental stages and for forensic purposes. Conventionally, pediatric age is clinically estimated from bone age using wrist X-rays. However, recent advances in deep learning enable other radiological modalities to serve as a promising complement. This study aims to explore the effectiveness of deep learning for pediatric age estimation using chest X-rays. We developed a ResNet-based deep neural network model enhanced with a Coordinate Attention mechanism to predict pediatric age from chest X-rays. A dataset comprising 128,008 images was retrospectively collected from two large tertiary hospitals in Shanghai. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) were employed as the main evaluation metrics across age groups. Further analysis was conducted using Spearman correlation and heatmap visualizations. The model achieved an MAE of 5.86 months for males and 5.80 months for females on the internal validation set. On the external test set, the MAE was 7.40 months for males and 7.29 months for females. The Spearman correlation coefficient was above 0.98, indicating a strong positive correlation between predicted and true age. Heatmap analysis revealed that the model mainly focused on the spine, mediastinum, heart, and great vessels, with additional attention given to surrounding bones. We successfully constructed a large dataset of pediatric chest X-rays and developed a neural network model integrated with Coordinate Attention for age prediction. Experiments demonstrated the model's robustness and showed that chest X-rays can be effectively utilized for accurate pediatric age estimation. By integrating pediatric chest X-rays with age data using deep learning, we can provide more support for predicting children's age, thereby aiding in the screening of abnormal growth and development in children.
This study explores whether deep learning could leverage chest X-rays for pediatric age prediction. Trained on over 120,000 images, the model shows high accuracy on internal and external validation sets. This method provides a potential complement for traditional bone age assessment and could reduce radiation exposure.
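The headline numbers above are Mean Absolute Errors in months. A minimal sketch of the two reported metrics, MAE and MAPE, computed on hypothetical predicted ages (the arrays below are made up for illustration):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average |true - predicted|, in the units
    of the target (here, months of age)."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error: scale-free, so errors on
    infants and adolescents are weighted by their actual age."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical true and predicted ages in months.
true_age = [24, 60, 120, 180]
pred_age = [30, 55, 126, 174]
```

MAPE complements MAE here because a 6-month error means something very different at age 2 than at age 15, which is why the abstract reports metrics across age groups.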

NLSTseg: A Pixel-level Lung Cancer Dataset Based on NLST LDCT Images.

Chen KH, Lin YH, Wu S, Shih NW, Meng HC, Lin YY, Huang CR, Huang JW

PubMed · Aug 23 2025
Low-dose computed tomography (LDCT) is the most effective tool for early detection of lung cancer. With advancements in artificial intelligence, various Computer-Aided Diagnosis (CAD) systems now support clinical practice. For radiologists dealing with a huge volume of CT scans, CAD systems are helpful. However, the development of these systems depends on precisely annotated datasets, which are currently limited. Although several lung imaging datasets exist, few publicly available datasets provide segmentation annotations on LDCT images. To address this problem, we developed a dataset based on NLST LDCT images with pixel-level annotations of lung lesions. The dataset includes LDCT scans from 605 patients and 715 annotated lesions, comprising 662 lung tumors and 53 lung nodules. Lesion volumes range from 0.03 cm<sup>3</sup> to 372.21 cm<sup>3</sup>, with 500 lesions smaller than 5 cm<sup>3</sup>, mostly located in the right upper lung. A 2D U-Net model trained on the dataset achieved a 0.95 IoU on the training dataset. This dataset enhances the diversity and usability of lung cancer annotation resources.
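The 0.95 figure quoted for the U-Net baseline is Intersection over Union on binary masks. A minimal sketch of the metric on two tiny hypothetical masks (the arrays below are invented; real masks would be full CT slices):

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union for binary segmentation masks:
    |pred AND true| / |pred OR true|."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, true).sum() / union)

# Toy 4x4 masks: 3 overlapping pixels, 5 in the union -> IoU = 0.6.
pred = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
true = [[1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 1]]
```

Because most of a CT slice is background, IoU is computed on the lesion class only; an all-background prediction would otherwise look deceptively accurate.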

Generating Synthetic Contrast-Enhanced Chest CT Images from Non-Contrast Scans Using Slice-Consistent Brownian Bridge Diffusion Network

Pouya Shiri, Xin Yi, Neel P. Mistry, Samaneh Javadinia, Mohammad Chegini, Seok-Bum Ko, Amirali Baniasadi, Scott J. Adams

arXiv preprint · Aug 23 2025
Contrast-enhanced computed tomography (CT) imaging is essential for diagnosing and monitoring thoracic diseases, including aortic pathologies. However, contrast agents pose risks such as nephrotoxicity and allergic-like reactions. The ability to generate high-fidelity synthetic contrast-enhanced CT angiography (CTA) images without contrast administration would be transformative, enhancing patient safety and accessibility while reducing healthcare costs. In this study, we propose the first bridge diffusion-based solution for synthesizing contrast-enhanced CTA images from non-contrast CT scans. Our approach builds on the Slice-Consistent Brownian Bridge Diffusion Model (SC-BBDM), leveraging its ability to model complex mappings while maintaining consistency across slices. Unlike conventional slice-wise synthesis methods, our framework preserves full 3D anatomical integrity while operating in a high-resolution 2D fashion, allowing seamless volumetric interpretation under a low memory budget. To ensure robust spatial alignment, we implement a comprehensive preprocessing pipeline that includes resampling, registration using the Symmetric Normalization method, and a sophisticated dilated segmentation mask to extract the aorta and surrounding structures. We create two datasets from the Coltea-Lung dataset: one containing only the aorta and another including both the aorta and heart, enabling a detailed analysis of anatomical context. We compare our approach against baseline methods on both datasets, demonstrating its effectiveness in preserving vascular structures while enhancing contrast fidelity.
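The Brownian bridge underlying BBDM-style models is a diffusion pinned at both endpoints, which is what lets the method translate a non-contrast image (one endpoint) into a contrast-enhanced one (the other). A minimal sketch of sampling from such a bridge; this is the textbook process only, not the SC-BBDM training or sampling procedure, and the parameter names are my own:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge_sample(x0, xT, t, T=1.0, sigma=1.0, rng=rng):
    """Sample x_t from a Brownian bridge pinned at x0 (t=0) and xT (t=T).
    The mean interpolates linearly between the endpoints, and the
    variance sigma^2 * t * (T - t) / T vanishes at both ends, so the
    process is guaranteed to start at x0 and arrive exactly at xT."""
    mean = (1 - t / T) * x0 + (t / T) * xT
    var = sigma ** 2 * t * (T - t) / T
    return mean + np.sqrt(var) * rng.standard_normal(np.shape(x0))

# Endpoints standing in for a non-contrast and a contrast-enhanced image.
x0 = np.zeros(3)
xT = np.ones(3)
```

The pinned-endpoint property is the key design choice: unlike noise-to-image diffusion, every trajectory terminates at the paired target image, which suits paired image-to-image translation.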

Unlocking the potential of radiomics in identifying fibrosing and inflammatory patterns in interstitial lung disease.

Colligiani L, Marzi C, Uggenti V, Colantonio S, Tavanti L, Pistelli F, Alì G, Neri E, Romei C

PubMed · Aug 22 2025
To differentiate interstitial lung diseases (ILDs) with fibrotic and inflammatory patterns using high-resolution computed tomography (HRCT) and a radiomics-based artificial intelligence (AI) pipeline. This single-center study included 84 patients: 50 with idiopathic pulmonary fibrosis (IPF), representative of a fibrotic pattern, and 34 with cellular non-specific interstitial pneumonia (NSIP) secondary to connective tissue disease (CTD), as an example of a mostly inflammatory pattern. For a secondary objective, we analyzed 50 additional patients with COVID-19 pneumonia. We performed semi-automatic segmentation of ILD regions using a deep learning model followed by manual review. From each segmented region, 103 radiomic features were extracted. Classification was performed using an XGBoost model with 1000 bootstrap repetitions, and SHapley Additive exPlanations (SHAP) were applied to identify the most predictive features. The model accurately distinguished a fibrotic ILD pattern from an inflammatory one, achieving an average test set accuracy of 0.91 and an AUROC of 0.98. The classification was driven by radiomic features capturing differences in lung morphology, intensity distribution, and textural heterogeneity between the two disease patterns. In differentiating cellular NSIP from COVID-19, the model achieved an average accuracy of 0.89. Inflammatory ILDs exhibited more uniform imaging patterns compared to the greater variability typically observed in viral pneumonia. Radiomics combined with explainable AI offers promising diagnostic support in distinguishing fibrotic from inflammatory ILD patterns and in differentiating inflammatory ILDs from viral pneumonias. This approach could enhance diagnostic precision and provide quantitative support for personalized ILD management.
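The "average accuracy over 1000 bootstrap repetitions" reported above can be sketched as resampling the test cases with replacement and recomputing accuracy each time, yielding a distribution instead of a point estimate. The labels below are synthetic, and this illustrates only the bootstrap, not the paper's XGBoost or SHAP pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_accuracy(y_true, y_pred, n_boot=1000, rng=rng):
    """Bootstrap the test-set accuracy: resample cases with replacement
    n_boot times, recompute accuracy on each resample, and summarize
    with the mean and standard deviation of the distribution."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # sample n cases with replacement
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    return accs.mean(), accs.std()

# Synthetic test set: 20 cases, 2 misclassified -> point accuracy 0.9.
y_true = np.array([1] * 10 + [0] * 10)
y_pred = y_true.copy()
y_pred[:2] = 0
mean_acc, sd_acc = bootstrap_accuracy(y_true, y_pred)
```

The spread of the bootstrap distribution is what makes a small-cohort result like 84 patients interpretable: it exposes how much the accuracy estimate moves when individual cases are swapped in or out.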

Performance of chest X-ray with computer-aided detection powered by deep learning-based artificial intelligence for tuberculosis presumptive identification during case finding in the Philippines.

Marquez N, Carpio EJ, Santiago MR, Calderon J, Orillaza-Chi R, Salanap SS, Stevens L

PubMed · Aug 22 2025
The Philippines' high tuberculosis (TB) burden calls for effective point-of-care screening. Systematic TB case finding using chest X-ray (CXR) with computer-aided detection powered by deep learning-based artificial intelligence (AI-CAD) provided this opportunity. We aimed to comprehensively review AI-CAD's real-life performance in the local context to support refining its integration into the country's programmatic TB elimination efforts. Retrospective cross-sectional data analysis was done on case-finding activities conducted in four regions of the Philippines between May 2021 and March 2024. Individuals 15 years and older with complete CXR and molecular World Health Organization-recommended rapid diagnostic (mWRD) test results were included. Presumptive TB was identified either by CXR, by TB signs and symptoms, and/or by official radiologist readings. The overall diagnostic accuracy of CXR with AI-CAD, stratified by different factors, was assessed using a fixed abnormality threshold and mWRD as the standard reference. Given the imbalanced dataset, we evaluated both precision-recall (PRC) and receiver operating characteristic (ROC) plots. Due to limited verification of CAD-negative individuals, we used "pseudo-sensitivity" and "pseudo-specificity" to reflect estimates based on partial testing. We identified potential factors that may affect performance metrics. Using a 0.5 abnormality threshold in analyzing 5740 individuals, the AI-CAD model showed high pseudo-sensitivity at 95.6% (95% CI, 95.1-96.1) but low pseudo-specificity at 28.1% (26.9-29.2) and positive predictive value (PPV) at 18.4% (16.4-20.4). The area under the receiver operating characteristic curve was 0.820, whereas the area under the precision-recall curve was 0.489. Pseudo-sensitivity was higher among males, younger individuals, and newly diagnosed TB. Threshold analysis revealed trade-offs, as increasing the threshold score to 0.68 saved more mWRD tests (42%) but led to an increase in missed cases (10%).
Threshold adjustments affected PPV, tests saved, and case detection differently across settings. Scaling up AI-CAD use in TB screening to improve TB elimination efforts could be beneficial. There is a need to calibrate threshold scores based on resource availability, prevalence, and program goals. ROC and PRC plots, which specify PPV, could serve as valuable metrics for capturing the best estimate of model performance and cost-benefit ratios within the context-specific implementation of resource-limited settings.
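The metrics traded off above all derive from the same 2x2 confusion table at a given abnormality threshold. A minimal sketch with invented counts (not the study's data); as the abstract notes, when mWRD testing is largely restricted to CAD-positives, the false-negative and true-negative cells are only partially verified, which is why the resulting estimates are "pseudo"-sensitivity and "pseudo"-specificity:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from a
    2x2 confusion table (counts of true/false positives/negatives).
    With incomplete verification of screen-negatives, fn and tn are
    estimates, so the first two outputs are 'pseudo' metrics."""
    sensitivity = tp / (tp + fn)   # fraction of true TB cases flagged
    specificity = tn / (tn + fp)   # fraction of non-TB cases cleared
    ppv = tp / (tp + fp)           # fraction of flags that are true TB
    return sensitivity, specificity, ppv

# Invented counts for illustration only.
sens, spec, ppv = screening_metrics(tp=9, fp=3, fn=1, tn=7)
```

Raising the threshold moves cases from fp to tn (tests saved) but also from tp to fn (missed cases), which is exactly the 42% versus 10% trade-off reported at the 0.68 threshold.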

Covid-19 diagnosis using privacy-preserving data monitoring: an explainable AI deep learning model with blockchain security.

Bala K, Kumar KA, Venu D, Dudi BP, Veluri SP, Nirmala V

PubMed · Aug 22 2025
The COVID-19 pandemic emphasised the need for prompt, precise diagnostics, secure data storage, and robust privacy protection in healthcare. Existing diagnostic systems often suffer from limited transparency, inadequate performance, and challenges in ensuring data security and privacy. This research proposes a novel privacy-preserving diagnostic framework, Heterogeneous Convolutional-recurrent attention Transfer learning based ResNeXt with Modified Greater Cane Rat optimisation (HCTR-MGR), that integrates deep learning, Explainable Artificial Intelligence (XAI), and blockchain technology. The HCTR model combines convolutional layers for spatial feature extraction, recurrent layers for capturing sequential dependencies, and attention mechanisms to highlight diagnostically significant regions. A ResNeXt-based transfer learning backbone enhances performance, while the MGR algorithm improves robustness and convergence. A trust-based permissioned blockchain stores encrypted patient metadata to ensure data security and integrity and to eliminate centralised vulnerabilities. The framework also incorporates SHAP and LIME for interpretable predictions. Experimental evaluation on two benchmark chest X-ray datasets demonstrates superior diagnostic performance, achieving 98-99% accuracy, 97-98% precision, 95-97% recall, 99% specificity, and a 95-98% F1-score, offering a 2-6% improvement over conventional models such as ResNet, SARS-Net, and PneuNet. These results underscore the framework's potential for scalable, secure, and clinically trustworthy deployment in real-world healthcare systems.
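The integrity guarantee of the blockchain component rests on hash chaining: each block's hash commits to the previous block, so any later tampering with stored metadata breaks the chain. The toy sketch below shows only that chaining idea; it is not the paper's permissioned blockchain, and a real deployment would also encrypt the payload and enforce access permissions. All names here are invented:

```python
import hashlib
import json

def add_block(chain, metadata):
    """Append a block whose SHA-256 hash covers both its metadata and
    the previous block's hash, making retroactive edits detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"prev_hash": prev_hash, "metadata": metadata}
    # Hash a canonical serialization so the digest is reproducible.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

chain = []
add_block(chain, {"patient_id": "anon-001", "scan": "cxr-2021-03"})
add_block(chain, {"patient_id": "anon-002", "scan": "cxr-2021-04"})
```

Verifying the chain means recomputing each block's hash and checking it matches the `prev_hash` stored in its successor; a permissioned setting additionally restricts who may append.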

Automatic analysis of negation cues and scopes for medical texts in French using language models.

Sadoune S, Richard A, Talbot F, Guyet T, Boussel L, Berry H

PubMed · Aug 22 2025
Correct automatic analysis of a medical report requires the identification of negations and their scopes. Since most available training data comes from medical texts in English, applying these methods to non-English languages usually takes additional work. Here, we introduce a supervised learning method for automatically identifying negation cues and determining their scopes in French medical reports using language models based on BERT. Using a new private corpus of French-language chest CT scan reports with consistent annotation, we first fine-tuned five available transformer models on the negation cue and scope identification task. Subsequently, we extended the methodology by modifying the optimal model to encompass a wider range of clinical notes and reports (not limited to radiology reports) and more heterogeneous annotations. Lastly, we tested the generated model on its initial mask-filling task to ensure there was no catastrophic forgetting. On a corpus of thoracic CT scan reports annotated by four annotators within our team, our method reaches an F1-score of 99.4% for cue detection and 94.5% for scope detection, thus equaling or improving on state-of-the-art performance. On more generic biomedical reports, annotated with more heterogeneous rules, the quality of the automatic analysis naturally decreases, but our best-in-class model still delivers very good performance, with F1-scores of 98.2% (cue detection) and 90.9% (scope detection). Moreover, we show that fine-tuning the original model for the negation identification task preserves or even improves its performance on its initial fill-mask task, depending on the lemmatization. Considering the performance of our fine-tuned model for the detection of negation cues and scopes in medical reports in French, and its robustness with respect to the diversity of annotation rules and types of biomedical data, we conclude that it is suited for use in a real-life clinical context.
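The scope-detection F1-scores quoted above compare predicted in-scope tokens against gold annotations. A minimal sketch of that token-level metric, evaluated here on sets of token indices (the specific index sets are invented, and this is the evaluation only, not the BERT-based tagger):

```python
def scope_f1(gold_tokens, pred_tokens):
    """Token-level F1 for negation-scope detection: gold_tokens and
    pred_tokens are collections of token indices marked as in-scope.
    F1 is the harmonic mean of precision and recall on those tokens."""
    gold, pred = set(gold_tokens), set(pred_tokens)
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# E.g. for "pas d'anomalie suspecte du parenchyme", gold scope covers
# tokens 3..6 but the model predicts 4..7: three tokens overlap.
f1 = scope_f1({3, 4, 5, 6}, {4, 5, 6, 7})
```

Cue detection is scored the same way on the cue tokens alone, which is why its F1 (99.4%) can sit well above the scope F1 (94.5%): scopes span many tokens and a single boundary slip costs both precision and recall.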