Page 26 of 66652 results

Evaluating the accuracy of artificial intelligence-powered chest X-ray diagnosis for paediatric pulmonary tuberculosis (EVAL-PAEDTBAID): Study protocol for a multi-centre diagnostic accuracy study.

Aurangzeb B, Robert D, Baard C, Qureshi AA, Shaheen A, Ambreen A, McFarlane D, Javed H, Bano I, Chiramal JA, Workman L, Pillay T, Franckling-Smith Z, Mustafa T, Andronikou S, Zar HJ

PubMed · Jul 28 2025
Diagnosing pulmonary tuberculosis (PTB) in children is challenging owing to paucibacillary disease, non-specific symptoms and signs and challenges in microbiological confirmation. Chest X-ray (CXR) interpretation is fundamental for diagnosis and classifying disease as severe or non-severe. In adults with PTB, there is substantial evidence showing the usefulness of artificial intelligence (AI) in CXR interpretation, but very limited data exist in children. A prospective two-stage study of children with presumed PTB in three sites (one in South Africa and two in Pakistan) will be conducted. In stage I, eligible children will be enrolled and comprehensively investigated for PTB. A CXR radiological reference standard (RRS) will be established by an expert panel of blinded radiologists. CXRs will be classified into those with findings consistent with PTB or not based on RRS. Cases will be classified as confirmed, unconfirmed or unlikely PTB according to National Institutes of Health definitions. Data from 300 confirmed and unconfirmed PTB cases and 250 unlikely PTB cases will be collected. An AI-CXR algorithm (qXR) will be used to process CXRs. The primary endpoint will be sensitivity and specificity of AI to detect confirmed and unconfirmed PTB cases (composite reference standard); a secondary endpoint will be evaluated for confirmed PTB cases (microbiological reference standard). In stage II, a multi-reader multi-case study using a cross-over design will be conducted with 16 readers and 350 CXRs to assess the usefulness of AI-assisted CXR interpretation for readers (clinicians and radiologists). The primary endpoint will be the difference in the area under the receiver operating characteristic curve of readers with and without AI assistance in correctly classifying CXRs as per RRS. The study has been approved by a local institutional ethics committee at each site. Results will be published in academic journals and presented at conferences. 
Data will be made available as an open-source database. Trial registration: PACTR202502517486411.

Multi-Attention Stacked Ensemble for Lung Cancer Detection in CT Scans

Uzzal Saha, Surya Prakash

arXiv preprint · Jul 27 2025
In this work, we address the challenge of binary lung nodule classification (benign vs malignant) on CT images by proposing a multi-level attention stacked ensemble of deep neural networks. Three pretrained backbones (EfficientNet V2 S, MobileViT XXS, and DenseNet201) are each adapted with a custom classification head tailored to 96 x 96 pixel inputs. A two-stage attention mechanism learns both model-wise and class-wise importance scores from concatenated logits, and a lightweight meta-learner refines the final prediction. To mitigate class imbalance and improve generalization, we employ dynamic focal loss with empirically calculated class weights, MixUp augmentation during training, and test-time augmentation at inference. Experiments on the LIDC-IDRI dataset demonstrate exceptional performance, achieving 98.09% accuracy and 0.9961 AUC, representing a 35% reduction in error rate compared with state-of-the-art methods. The model exhibits balanced performance across sensitivity (98.73%) and specificity (98.96%), with particularly strong results on challenging cases where radiologist disagreement was high. Statistical significance testing confirms the robustness of these improvements across multiple experimental runs. Our approach can serve as a robust, automated aid for radiologists in lung cancer screening.
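The abstract does not give the exact form of its dynamic focal loss, so the following is only a minimal sketch of a class-weighted focal loss as commonly defined; the `class_weights` tuple and its pairing with the binary labels are assumptions:

```python
import numpy as np

def focal_loss(probs, labels, class_weights, gamma=2.0):
    """Class-weighted focal loss for binary nodule classification (sketch).

    probs: predicted probability of the positive (malignant) class.
    labels: 0 = benign, 1 = malignant.
    class_weights: (w_benign, w_malignant), e.g. inverse class frequencies.
    """
    probs = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    labels = np.asarray(labels)
    # p_t is the probability the model assigned to the true class.
    p_t = np.where(labels == 1, probs, 1.0 - probs)
    alpha_t = np.where(labels == 1, class_weights[1], class_weights[0])
    # The (1 - p_t)^gamma factor down-weights easy, well-classified examples.
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

With `gamma=0` this reduces to a weighted cross-entropy; larger `gamma` focuses training on hard examples, which is the usual motivation for focal loss under class imbalance.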

KC-UNIT: Multi-kernel conversion using unpaired image-to-image translation with perceptual guidance in chest computed tomography imaging.

Choi C, Kim D, Park S, Lee H, Kim H, Lee SM, Kim N

PubMed · Jul 26 2025
Computed tomography (CT) images are reconstructed from raw data (sinograms) by back projection using various convolution kernels. Kernels are typically chosen according to the anatomical structure being imaged and the specific purpose of the scan, balancing the trade-off between image sharpness and pixel noise. Sinograms require large storage capacity, and storage space is often limited in clinical settings; CT images are therefore generally reconstructed with only one kernel, and the sinogram is typically discarded after a week. Consequently, many researchers have proposed deep learning-based image-to-image translation methods for CT kernel conversion. However, transferring the style of the target kernel while preserving anatomical structure remains challenging, particularly when translating CT images from a source to a target domain in an unpaired manner, as is common in real-world settings. We therefore propose a novel kernel conversion method using unpaired image-to-image translation (KC-UNIT). The approach uses discriminator regularization, exploiting feature maps from the generator to improve semantic representation learning. To capture content and style features, a cosine-similarity content loss and a contrastive style loss are defined between the generator's feature maps and the discriminator's semantic label maps. This can be incorporated simply by modifying the discriminator's architecture, without requiring any additional learnable or pre-trained networks. KC-UNIT preserves fine-grained anatomical structure from the source domain during transfer. Our method outperformed existing generative adversarial network-based methods on most conversion tasks across three kernel domains. The code is available at https://github.com/cychoi97/KC-UNIT.
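The cosine-similarity content loss mentioned above can be sketched in a few lines; the exact pairing of generator feature maps with discriminator semantic maps is an assumption here (shapes are assumed to match after any resizing):

```python
import numpy as np

def cosine_content_loss(gen_feat, disc_sem):
    """Content loss: 1 - cosine similarity between a flattened generator
    feature map and the discriminator's semantic label map (sketch)."""
    g = np.ravel(gen_feat).astype(float)
    d = np.ravel(disc_sem).astype(float)
    cos = g @ d / (np.linalg.norm(g) * np.linalg.norm(d) + 1e-8)
    # Identical structure -> cosine 1 -> loss 0; unrelated structure -> loss near 1.
    return float(1.0 - cos)
```

Because the loss depends only on the angle between the two flattened maps, it penalizes structural mismatch while being insensitive to the overall intensity scale, which suits content preservation across kernel styles.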

Diagnostic performance of artificial intelligence models for pulmonary nodule classification: a multi-model evaluation.

Herber SK, Müller L, Pinto Dos Santos D, Jorg T, Souschek F, Bäuerle T, Foersch S, Galata C, Mildenberger P, Halfmann MC

PubMed · Jul 25 2025
Lung cancer is the leading cause of cancer-related mortality. While early detection improves survival, distinguishing malignant from benign pulmonary nodules remains challenging. Artificial intelligence (AI) has been proposed to enhance diagnostic accuracy, but its clinical reliability is still under investigation. Here, we aimed to evaluate the diagnostic performance of AI models in classifying pulmonary nodules. This single-center retrospective study analyzed pulmonary nodules (4-30 mm) detected on CT scans, using three AI software models. Sensitivity, specificity, false-positive and false-negative rates were calculated. The diagnostic accuracy was assessed using the area under the receiver operating characteristic (ROC) curve (AUC), with histopathology serving as the gold standard. Subgroup analyses were based on nodule size and histopathological classification. The impact of imaging parameters was evaluated using regression analysis. A total of 158 nodules (n = 30 benign, n = 128 malignant) were analyzed. One AI model classified most nodules as intermediate risk, preventing further accuracy assessment. The other models demonstrated moderate sensitivity (53.1-70.3%) but low specificity (46.7-66.7%), leading to a high false-positive rate (45.5-52.4%). AUC values were between 0.5 and 0.6 (95% CI). Subgroup analyses revealed decreased sensitivity (47.8-61.5%) but increased specificity (100%), highlighting inconsistencies. In total, up to 49.0% of the pulmonary nodules were classified as intermediate risk. CT scan type influenced performance (p = 0.03), with better classification accuracy on breath-held CT scans. AI-based software models are not ready for standalone clinical use in pulmonary nodule classification due to low specificity, a high false-negative rate and a high proportion of intermediate-risk classifications. 
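The headline sensitivity, specificity and error rates follow directly from confusion-matrix counts. The per-model counts are not given in the abstract; the numbers below are hypothetical, chosen only to be consistent with the reported cohort of 128 malignant and 30 benign nodules:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic accuracy measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }
```

For example, `diagnostic_metrics(tp=90, fn=38, tn=14, fp=16)` describes a hypothetical model that catches 90 of 128 malignant nodules (sensitivity about 70%) but clears only 14 of 30 benign ones (specificity under 50%), illustrating the sensitivity/specificity trade-off the study reports.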
Question: How accurate are commercially available AI models for the classification of pulmonary nodules compared with the gold standard of histopathology?
Findings: The evaluated AI models demonstrated moderate sensitivity, low specificity and high false-negative rates. Up to 49% of pulmonary nodules were classified as intermediate risk.
Clinical relevance: The high false-negative rates could influence radiologists' decision-making, leading to an increased number of interventions or unnecessary surgical procedures.

A DCT-UNet-based framework for pulmonary airway segmentation integrating label self-updating and terminal region growing.

Zhao S, Wu Y, Xu J, Li M, Feng J, Xia S, Chen R, Liang Z, Qian W, Qi S

PubMed · Jul 25 2025

Background:
Intrathoracic airway segmentation in computed tomography (CT) is important for quantitative and qualitative analysis of various chronic respiratory diseases and for bronchial surgery navigation. However, the airway tree's morphological complexity, incomplete labels resulting from annotation difficulty, and intra-class imbalance between main and terminal airways limit segmentation performance.
Methods:
Three methodological improvements are proposed to address these challenges. First, we design a DCT-UNet to better aggregate information from neighbouring voxels and from voxels within a larger spatial region. Second, an airway label self-updating (ALSU) strategy is proposed to iteratively update the reference labels, overcoming the problem of incomplete labels. Third, deep learning-based terminal region growing (TRG) is adopted to extract terminal airways. Extensive experiments were conducted on two internal datasets and three public datasets.
Results:
Compared with competing methods, the proposed method achieves higher Branch Detected, Tree-length Detected, Branch Ratio, and Tree-length Ratio (ISICDM2021 dataset: 95.19%, 94.89%, 166.45%, and 172.29%; BAS dataset: 96.03%, 95.11%, 129.35%, and 137.00%). Ablation experiments show the effectiveness of the three proposed solutions. Our method was applied to an in-house Chronic Obstructive Pulmonary Disease (COPD) dataset; the measures of branch count, tree length, endpoint count, airway volume, and airway surface area differ significantly between COPD severity stages.
Conclusions:
The proposed methods can segment more terminal bronchi and greater airway length; even some real bronchi missed in the manual annotation can be detected. This suggests potential application in characterizing COPD airway lesions and severity stages.
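The TRG step in the paper is deep-learning-based; as a point of reference, a minimal classical region-growing analogue, expanding 6-connected voxels above a probability threshold from a seed, might look like this (function name and threshold are illustrative, not the paper's method):

```python
from collections import deque
import numpy as np

def region_grow(prob, seed, thresh=0.5):
    """Grow a 6-connected region from `seed` over voxels with prob >= thresh."""
    mask = np.zeros(prob.shape, dtype=bool)
    if prob[seed] < thresh:
        return mask                      # seed itself fails the threshold
    mask[seed] = True
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < prob.shape[i] for i in range(3))
                    and not mask[n] and prob[n] >= thresh):
                mask[n] = True
                queue.append(n)
    return mask
```

Seeding such growth at segmented airway endpoints is one plausible way a terminal-region step can recover thin bronchi that a voxel-wise network under-segments.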

XVertNet: Unsupervised Contrast Enhancement of Vertebral Structures with Dynamic Self-Tuning Guidance and Multi-Stage Analysis.

Eidlin E, Hoogi A, Rozen H, Badarne M, Netanyahu NS

PubMed · Jul 25 2025
Chest X-ray is one of the main diagnostic tools in emergency medicine, yet its limited ability to capture fine anatomical details can result in missed or delayed diagnoses. To address this, we introduce XVertNet, a novel deep-learning framework designed to significantly enhance vertebral structure visualization in X-ray images. Our framework introduces two key innovations: (1) an unsupervised learning architecture that eliminates reliance on manually labeled training data, a persistent bottleneck in medical imaging, and (2) a dynamic self-tuned internal guidance mechanism featuring an adaptive feedback loop for real-time image optimization. Extensive validation across four major public datasets revealed that XVertNet outperforms state-of-the-art enhancement methods, as demonstrated by improvements in evaluation measures such as entropy, the Tenengrad criterion, LPC-SI, TMQI, and PIQE. Furthermore, clinical validation conducted by two board-certified clinicians confirmed that the enhanced images enabled more sensitive examination of vertebral structural changes. The unsupervised nature of XVertNet facilitates immediate clinical deployment without additional training overhead. This represents a substantial advancement for emergency radiology, providing a scalable and time-efficient means of improving diagnostic accuracy in high-pressure clinical environments.
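Of the evaluation measures listed, the Tenengrad criterion is the simplest to sketch: it scores image sharpness as the mean squared Sobel gradient magnitude. The following is a dependency-free illustration (real implementations use optimized convolutions, e.g. from scipy or OpenCV):

```python
import numpy as np

def tenengrad(img):
    """Sharpness score: mean squared Sobel gradient magnitude (higher = sharper)."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sy = sx.T

    def conv2(a, k):
        # Valid-mode 3x3 convolution (naive sliding window).
        out = np.zeros((a.shape[0] - 2, a.shape[1] - 2))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(a[i:i + 3, j:j + 3] * k)
        return out

    gx, gy = conv2(img, sx), conv2(img, sy)
    return float(np.mean(gx ** 2 + gy ** 2))
```

A flat image scores zero; any enhancement that sharpens vertebral edges raises the score, which is why the measure is used to compare enhancement methods.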

Counterfactual Explanations in Medical Imaging: Exploring SPN-Guided Latent Space Manipulation

Julia Siekiera, Stefan Kramer

arXiv preprint · Jul 25 2025
Artificial intelligence is increasingly leveraged across various domains to automate decision-making processes that significantly impact human lives. In medical image analysis, deep learning models have demonstrated remarkable performance. However, their inherent complexity makes them black box systems, raising concerns about reliability and interpretability. Counterfactual explanations provide comprehensible insights into decision processes by presenting hypothetical "what-if" scenarios that alter model classifications. By examining input alterations, counterfactual explanations reveal patterns that influence the decision-making process. Despite their potential, generating plausible counterfactuals that adhere to similarity constraints while providing human-interpretable explanations remains a challenge. In this paper, we investigate this challenge through a model-specific optimization approach. While deep generative models such as variational autoencoders (VAEs) exhibit significant generative power, probabilistic models like sum-product networks (SPNs) efficiently represent complex joint probability distributions. By modeling the likelihood of a semi-supervised VAE's latent space with an SPN, we leverage its dual role as both a latent space descriptor and a classifier for a given discrimination task. This formulation enables the optimization of latent space counterfactuals that are both close to the original data distribution and aligned with the target class distribution. We conduct an experimental evaluation on the CheXpert dataset. To evaluate the effectiveness of the SPN integration, our SPN-guided latent space manipulation is compared against a neural network baseline. Additionally, the trade-off between latent variable regularization and counterfactual quality is analyzed.
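Stripped of the SPN machinery, latent-space counterfactual optimization reduces to gradient steps that raise the target-class score while a penalty keeps the latent code close to the original. In the sketch below a simple quadratic proximity penalty stands in for the paper's SPN likelihood prior; all names and hyperparameters are illustrative:

```python
import numpy as np

def latent_counterfactual(z0, score_grad, steps=200, lr=0.1, lam=0.5):
    """Gradient ascent on a target-class score with a proximity penalty.

    score_grad(z): gradient of the target-class score at z.
    lam: weight of the 0.5 * ||z - z0||^2 penalty keeping z near the original
    (a stand-in here for the SPN likelihood term used in the paper).
    """
    z = z0.astype(float).copy()
    for _ in range(steps):
        z += lr * (score_grad(z) - lam * (z - z0))
    return z
```

The returned latent code is then decoded by the VAE to obtain the counterfactual image; the `lam` knob is exactly the regularization-versus-counterfactual-quality trade-off the paper analyzes.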

Image quality in ultra-low-dose chest CT versus chest x-rays guiding paediatric cystic fibrosis care.

Moore N, O'Regan P, Young R, Curran G, Waldron M, O'Mahony A, Suleiman ME, Murphy MJ, Maher M, England A, McEntee MF

PubMed · Jul 25 2025
Cystic fibrosis (CF) is a prevalent autosomal recessive disorder, with lung complications being the primary cause of morbidity and mortality. In paediatric patients, structural lung changes begin early, necessitating prompt detection to guide treatment and delay disease progression. This study evaluates ultra-low-dose CT (ULDCT) versus chest X-ray (CXR) for assessing lung disease in children with CF (CwCF). ULDCT uses AI-enhanced deep-learning iterative reconstruction to achieve radiation doses comparable to a CXR. This prospective study recruited radiographers and radiologists to assess the image quality (IQ) of ten paired ULDCT and CXR examinations of CwCF from a single centre. Statistical analyses, including the Wilcoxon signed-rank test and visual grading characteristic (VGC) analysis, compared diagnostic confidence and anatomical detail. Seventy-five participants were enrolled: 25 radiologists and 50 radiographers. The majority (88%) preferred ULDCT over CXR for monitoring CF lung disease, owing to higher perceived confidence (p ≤ 0.001) and better IQ ratings (p ≤ 0.05), especially among radiologists (area under the VGC curve: 0.63; asymmetric 95% CI: 0.51-0.73; p ≤ 0.05). While ULDCT showed no significant differences in anatomical visualisation compared with CXR, the overall IQ for lung pathology assessment was rated superior. ULDCT offers superior IQ over CXR in CwCF at similar radiation doses and enhances diagnostic confidence, supporting its use as a viable CXR alternative. Standardising CT protocols to optimise IQ and minimise radiation is essential to improve disease monitoring in this vulnerable group.
Question: How does chest X-ray (CXR) IQ in children compare to ULDCT at similar radiation doses for assessing CF-related lung disease?
Findings: ULDCT offers superior IQ over CXR in CwCF. Participants preferred ULDCT owing to higher perceived confidence levels and superior IQ.
Clinical relevance: ULDCT can enhance diagnosis in CwCF while maintaining comparable radiation doses, and it enhances diagnostic confidence, supporting its use as a viable CXR alternative.
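The Wilcoxon signed-rank test used in the study ranks paired rating differences; in practice one would call `scipy.stats.wilcoxon`, but the core W statistic can be sketched directly (zero differences dropped, tied absolute differences sharing their mean rank):

```python
def signed_rank_w(x, y):
    """Wilcoxon signed-rank W statistic for paired ratings (sketch)."""
    d = [a - b for a, b in zip(x, y) if a != b]      # drop zero differences
    order = sorted((abs(v), i) for i, v in enumerate(d))
    ranks = [0.0] * len(d)
    k = 0
    while k < len(order):
        j = k
        while j + 1 < len(order) and order[j + 1][0] == order[k][0]:
            j += 1                                   # extend run of tied |d|
        mean_rank = (k + j) / 2 + 1                  # mean rank of the tie run
        for m in range(k, j + 1):
            ranks[order[m][1]] = mean_rank
        k = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    w_minus = sum(r for r, v in zip(ranks, d) if v < 0)
    return min(w_plus, w_minus)
```

A small W relative to the number of pairs indicates a systematic rating shift between the two modalities, which is what drives the reported p-values.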

Current evidence of low-dose CT screening benefit.

Yip R, Mulshine JL, Oudkerk M, Field J, Silva M, Yankelevitz DF, Henschke CI

PubMed · Jul 25 2025
Lung cancer is the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Low-dose computed tomography (LDCT) screening has emerged as a powerful tool for early detection, enabling diagnosis at curable stages and reducing lung cancer mortality. Despite strong evidence, LDCT screening uptake remains suboptimal globally. This review synthesizes current evidence supporting LDCT screening, highlights ongoing global implementation efforts, and discusses key insights from the 1st AGILE conference. Lung cancer screening is gaining global momentum, with many countries advancing plans for national LDCT programs. Expanding eligibility through risk-based models and targeting high-risk never- and light-smokers are emerging strategies to improve efficiency and equity. Technological advancements, including AI-assisted interpretation and image-based biomarkers, are addressing concerns around false positives, overdiagnosis, and workforce burden. Integrating cardiac and smoking-related disease assessment within LDCT screening offers added preventive health benefits. To maximize global impact, screening strategies must be tailored to local health systems and populations. Efforts should focus on increasing awareness, standardizing protocols, optimizing screening intervals, and strengthening multidisciplinary care pathways. International collaboration and shared infrastructure can accelerate progress and ensure sustainability. LDCT screening represents a cost-effective opportunity to reduce lung cancer mortality and premature deaths.

Disease probability-enhanced follow-up chest X-ray radiology report summary generation.

Wang Z, Deng Q, So TY, Chiu WH, Lee K, Hui ES

PubMed · Jul 24 2025
A chest X-ray radiology report describes abnormal findings not only from the X-ray obtained at a given examination, but also findings on disease progression or changes in device placement relative to the X-ray from the previous examination. The majority of efforts on automatic radiology report generation address the former, but not the latter, type of findings. To the best of the authors' knowledge, only one prior work is dedicated to generating a summary of the latter findings, i.e., a follow-up radiology report summary. In this study, we propose a transformer-based framework to tackle this task. Motivated by our observations on the significance of the medical lexicon for the fidelity of report summary generation, we introduce two mechanisms to endow our model with clinical insight: disease probability soft guidance and a masked entity modeling loss. The former employs a pretrained abnormality classifier to guide the presence level of specific abnormalities, while the latter directs the model's attention toward the medical lexicon. Extensive experiments demonstrate that the performance of our model exceeds the state-of-the-art.
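The masked entity modeling loss is not specified in the abstract; one plausible reading, up-weighting the token-level loss at medical-lexicon positions, can be sketched as follows (function name, weighting scheme, and `entity_weight` value are all hypothetical):

```python
import numpy as np

def entity_weighted_nll(log_probs, targets, entity_mask, entity_weight=3.0):
    """Token-level NLL with extra weight on medical-lexicon (entity) positions.

    log_probs: per-token log-probability vectors over the vocabulary.
    targets: gold token ids; entity_mask: True where the token is an entity.
    """
    weights = np.where(np.asarray(entity_mask, dtype=bool), entity_weight, 1.0)
    nll = -np.array([lp[t] for lp, t in zip(log_probs, targets)])
    # Weighted average: errors on entity tokens dominate the loss signal.
    return float(np.sum(weights * nll) / np.sum(weights))
```

Under this reading, mispredicting a finding term (e.g. "effusion") costs the model more than mispredicting a function word, steering attention toward the medical lexicon as the abstract describes.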