
DCSNet: A Lightweight Knowledge Distillation-Based Model with Explainable AI for Lung Cancer Diagnosis from Histopathological Images

Sadman Sakib Alif, Nasim Anzum Promise, Fiaz Al Abid, Aniqua Nusrat Zereen

arXiv preprint · May 14, 2025
Lung cancer is a leading cause of cancer-related deaths globally, where early detection and accurate diagnosis are critical for improving survival rates. While deep learning, particularly convolutional neural networks (CNNs), has revolutionized medical image analysis by detecting subtle patterns indicative of early-stage lung cancer, its adoption faces challenges. These models are often computationally expensive and require significant resources, making them unsuitable for resource-constrained environments. Additionally, their lack of transparency hinders trust and broader adoption in sensitive fields like healthcare. Knowledge distillation addresses these challenges by transferring knowledge from large, complex models (teachers) to smaller, lightweight models (students). We propose a knowledge distillation-based approach for lung cancer detection, incorporating explainable AI (XAI) techniques to enhance model transparency. Eight CNNs, including ResNet50, EfficientNetB0, EfficientNetB3, and VGG16, are evaluated as teacher models. We developed and trained a lightweight student model, the Distilled Custom Student Network (DCSNet), using ResNet50 as the teacher. This approach not only ensures high diagnostic performance in resource-constrained settings but also addresses transparency concerns, facilitating the adoption of AI-driven diagnostic tools in healthcare.
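The teacher-to-student transfer the abstract describes is typically trained with a Hinton-style distillation loss. A minimal sketch in plain Python; the temperature `T`, weight `alpha`, and function names are illustrative assumptions, not details from the paper:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx, T=4.0, alpha=0.7):
    """Weighted sum of a soft KL term (teacher -> student) and hard cross-entropy.

    The T**2 factor keeps soft-target gradients on the same scale as the
    hard-label term, as in Hinton et al.'s formulation.
    """
    p_t = softmax(teacher_logits, T)   # softened teacher distribution
    p_s = softmax(student_logits, T)   # softened student distribution
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    hard = -math.log(softmax(student_logits)[true_idx])  # standard cross-entropy
    return alpha * (T ** 2) * kl + (1 - alpha) * hard

loss = distillation_loss([2.0, 0.5, -1.0], [3.0, 0.2, -2.0], true_idx=0)
```

A student such as DCSNet would minimize this loss against a frozen teacher (here, ResNet50) during training; when student and teacher agree, the KL term vanishes and only the hard-label term remains.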

Explainability Through Human-Centric Design for XAI in Lung Cancer Detection

Amy Rafferty, Rishi Ramaesh, Ajitha Rajan

arXiv preprint · May 14, 2025
Deep learning models have shown promise in lung pathology detection from chest X-rays, but widespread clinical adoption remains limited due to opaque model decision-making. In prior work, we introduced ClinicXAI, a human-centric, expert-guided concept bottleneck model (CBM) designed for interpretable lung cancer diagnosis. We now extend that approach and present XpertXAI, a generalizable expert-driven model that preserves human-interpretable clinical concepts while scaling to detect multiple lung pathologies. Using a high-performing InceptionV3-based classifier and a public dataset of chest X-rays with radiology reports, we compare XpertXAI against leading post-hoc explainability methods and an unsupervised CBM, XCBs. We assess explanations through comparison with expert radiologist annotations and medical ground truth. Although XpertXAI is trained for multiple pathologies, our expert validation focuses on lung cancer. We find that existing techniques frequently fail to produce clinically meaningful explanations, omitting key diagnostic features and disagreeing with radiologist judgments. XpertXAI not only outperforms these baselines in predictive accuracy but also delivers concept-level explanations that better align with expert reasoning. While our focus remains on explainability in lung cancer detection, this work illustrates how human-centric model design can be effectively extended to broader diagnostic contexts - offering a scalable path toward clinically meaningful explainable AI in medical diagnostics.
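A concept bottleneck model of the kind ClinicXAI and XpertXAI build on forces the diagnosis to pass through human-interpretable concepts, so each concept's contribution to the prediction can be inspected. A minimal two-stage sketch; all weights and concept names are illustrative, not the authors':

```python
def predict_concepts(features, concept_weights):
    """Stage 1: map image features to human-interpretable concept scores."""
    return [sum(w * f for w, f in zip(ws, features)) for ws in concept_weights]

def predict_label(concepts, label_weights, bias=0.0):
    """Stage 2: diagnose from the concepts alone, so each concept's
    contribution to the final score is directly inspectable."""
    return sum(w * c for w, c in zip(label_weights, concepts)) + bias

features = [0.8, 0.1, 0.5]        # e.g. pooled CNN features (illustrative)
concept_w = [[1.0, 0.0, 0.0],     # concept: "nodule present"
             [0.0, 1.0, 0.5]]     # concept: "pleural effusion"
concepts = predict_concepts(features, concept_w)
score = predict_label(concepts, [2.0, -1.0])
```

Because the label head sees only the concept scores, an expert can audit which clinical concept drove a given prediction, which is the explainability property the comparison with radiologist annotations evaluates.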

Application of artificial intelligence medical imaging aided diagnosis system in the diagnosis of pulmonary nodules.

Yang Y, Wang P, Yu C, Zhu J, Sheng J

PubMed · May 14, 2025
The application of artificial intelligence (AI) has transformed daily life and work, and has also driven rapid development in medicine, where intelligent systems are increasingly used. Drawing on AI methods and the principles of image segmentation, this paper builds and optimizes a medical imaging-aided diagnosis system to address the gaps and errors of traditional manual diagnosis of pulmonary nodules and to improve diagnostic precision. Traditional manual reading and the medical imaging-aided diagnosis system were compared in 200 cases containing 231 nodules confirmed by pathology or unchanged on follow-up of more than two years. The AI software detected 881 true nodules with a sensitivity of 99.10% (881/889), whereas the radiologists detected 385 true nodules with a sensitivity of 43.31% (385/889). For non-calcified nodules, the sensitivity of the AI software was significantly higher than that of the radiologists (99.01% vs 43.30%, P < 0.001).
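The reported sensitivities follow directly from the stated detection counts:

```python
def sensitivity(true_detected, total_true):
    """Sensitivity (recall) = detected true nodules / all true nodules."""
    return true_detected / total_true

ai_sens = sensitivity(881, 889)      # AI software
reader_sens = sensitivity(385, 889)  # radiologists
```

Rounded to two decimals these reproduce the abstract's 99.10% and 43.31%.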

A Deep Learning-Driven Inhalation Injury Grading Assistant Using Bronchoscopy Images

Yifan Li, Alan W Pang, Jo Woon Chong

arXiv preprint · May 13, 2025
Inhalation injuries are difficult to diagnose and grade because conventional methods such as the Abbreviated Injury Score (AIS) are subjective and correlate weakly with clinical parameters like mechanical ventilation duration and patient mortality. This study introduces a novel deep learning-based diagnostic assistant for grading inhalation injuries from bronchoscopy images, aiming to overcome subjective variability and enhance consistency in severity assessment. Our approach leverages data augmentation techniques, including graphic transformations, Contrastive Unpaired Translation (CUT), and CycleGAN, to address the scarcity of medical imaging data. We evaluate two deep learning classifiers, GoogLeNet and Vision Transformer (ViT), on a dataset significantly expanded through these augmentation methods. GoogLeNet combined with CUT proves the most effective configuration for grading inhalation injuries from bronchoscopy images, achieving a classification accuracy of 97.8%. Histogram and frequency analyses reveal the changes CUT introduces in intensity distributions and in the texture details of the frequency spectrum, and PCA visualizations underscore that CUT substantially enhances class separability in the feature space. Moreover, Grad-CAM analyses provide insight into the decision-making process: the mean intensity of CUT heatmaps is 119.6, significantly exceeding the 98.8 of the original dataset. The proposed tool uses mechanical ventilation duration as a novel grading standard, providing comprehensive diagnostic support.
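The GAN-based augmentations (CUT, CycleGAN) require trained networks, but the "graphic transformations" the paper also lists can be sketched as simple array operations on an image grid; this is an illustrative simplification, not the authors' pipeline:

```python
def hflip(img):
    """Horizontal flip: a basic graphic transformation for augmentation."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
# Each transform yields a label-preserving variant of the original image
augmented = [img, hflip(img), rotate90(img)]
```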

Trustworthy AI for stage IV non-small cell lung cancer: Automatic segmentation and uncertainty quantification.

Dedeken S, Conze PH, Damerjian Pieters V, Gallinato O, Faure J, Colin T, Visvikis D

PubMed · May 13, 2025
Accurate segmentation of lung tumors is essential for advancing personalized medicine in non-small cell lung cancer (NSCLC). However, stage IV NSCLC presents significant challenges due to heterogeneous tumor morphology and the presence of associated conditions including infection, atelectasis and pleural effusion. The complexity of multicentric datasets further complicates robust segmentation across diverse clinical settings. In this study, we evaluate deep-learning-based approaches for automated segmentation of advanced-stage lung tumors using 3D architectures on 387 CT scans from the Deep-Lung-IV study. Through comprehensive experiments, we assess the impact of model design, HU windowing, and dataset size on delineation performance, providing practical guidelines for robust implementation. Additionally, we propose a confidence score using deep ensembles to quantify prediction uncertainty and automate the identification of complex cases that require further review. Our results demonstrate the potential of attention-based architectures and specific preprocessing strategies to improve segmentation quality in such a challenging clinical scenario, while emphasizing the importance of uncertainty estimation to build trustworthy AI systems in medical imaging. Code is available at: https://github.com/Sacha-Dedeken/SegStageIVNSCLC.
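The deep-ensemble confidence score can be sketched as agreement across ensemble members: low spread means high confidence, while high spread flags a case for further review. A toy version; the `1 - std` formulation is an illustrative assumption, not the paper's exact score:

```python
import statistics

def ensemble_confidence(member_probs):
    """Mean prediction and a simple confidence score from a deep ensemble.

    High disagreement (spread) across members -> low confidence -> the
    case is flagged for expert review.
    """
    mean_p = statistics.mean(member_probs)
    spread = statistics.pstdev(member_probs)
    return mean_p, 1.0 - spread

# A voxel where five ensemble members agree vs. one where they disagree
agree = [0.91, 0.93, 0.90, 0.92, 0.94]
disagree = [0.15, 0.85, 0.40, 0.70, 0.55]
p1, c1 = ensemble_confidence(agree)
p2, c2 = ensemble_confidence(disagree)
```

Thresholding such a score is one way to automate the identification of complex stage IV cases that need a human in the loop.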

Development and validation of an early diagnosis model for severe mycoplasma pneumonia in children based on interpretable machine learning.

Xie S, Wu M, Shang Y, Tuo W, Wang J, Cai Q, Yuan C, Yao C, Xiang Y

PubMed · May 13, 2025
Pneumonia is a major threat to children's health, especially in those under the age of five. Mycoplasma pneumoniae infection is a core cause of pediatric pneumonia, and the incidence of severe Mycoplasma pneumoniae pneumonia (SMPP) has risen in recent years, creating an urgent need for an early warning model for SMPP to improve the prognosis of pediatric pneumonia. The study comprised 597 SMPP patients aged between 1 month and 18 years. Clinical variables were selected through Lasso regression analysis, and eight machine learning algorithms were then applied to develop early warning models. Model accuracy was assessed in validation and prospective cohorts. To facilitate clinical assessment, the indicators were simplified into a visualized model, and clinical applicability was evaluated with DCA and CIC curves. After variable selection, the eight machine learning models were developed using age, sex, and 21 serum indicators identified as predictive factors for SMPP. A Light Gradient Boosting Machine (LightGBM) model demonstrated strong performance, achieving an AUC of 0.92 in prospective validation. SHAP analysis was used to screen the most informative variables, namely serum S100A8/A9, tracheal computed tomography (CT), retinol-binding protein (RBP), platelet larger cell ratio (P-LCR), and CD4+CD25+ Treg cell counts, which were used to construct a simplified model (SCRPT) with improved clinical applicability. The SCRPT diagnostic model exhibited favorable diagnostic efficacy (AUC > 0.8). Additionally, the study found that S100A8/A9 outperformed clinical inflammatory markers and can itself differentiate the severity of MPP. The SCRPT model, built from five dominant variables (S100A8/A9, CT, RBP, P-LCR, and Treg cell counts) screened across the eight machine learning models, is expected to serve as a tool for early diagnosis of SMPP.
S100A8/A9 can also be used as a biomarker for severity differentiation of SMPP when medical resources are limited.
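The reported AUC values can be read through the Mann-Whitney interpretation: the probability that a randomly chosen severe case scores higher than a randomly chosen non-severe one. A minimal sketch with made-up scores (not study data):

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    random positive case outranks a random negative one (ties count 0.5)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

severe = [0.9, 0.8, 0.75, 0.6]       # model scores for SMPP cases (illustrative)
non_severe = [0.3, 0.5, 0.65, 0.2]   # scores for non-severe MPP (illustrative)
model_auc = auc(severe, non_severe)
```

An AUC of 0.92, as reported for LightGBM in prospective validation, means a severe case outranks a non-severe one 92% of the time.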

Blockchain-enabled collective and combined deep learning framework for COVID-19 diagnosis.

Periyasamy S, Kaliyaperumal P, Thirumalaisamy M, Balusamy B, Elumalai T, Meena V, Jadoun VK

PubMed · May 13, 2025
The rapid spread of SARS-CoV-2 has highlighted the need for intelligent methodologies in COVID-19 diagnosis. Clinicians face significant challenges due to the virus's fast transmission rate and the lack of reliable diagnostic tools. Although artificial intelligence (AI) has improved image processing, conventional approaches still rely on centralized data storage and training. This reliance increases complexity and raises privacy concerns, which hinder global data exchange. Therefore, it is essential to develop collaborative models that balance accuracy with privacy protection. This research presents a novel framework that combines blockchain technology with a combined learning paradigm to ensure secure data distribution and reduced complexity. The proposed Combined Learning Collective Deep Learning Blockchain Model (CLCD-Block) aggregates data from multiple institutions and leverages a hybrid capsule learning network for accurate predictions. Extensive testing with lung CT images demonstrates that the model outperforms existing models, achieving an accuracy exceeding 97%. Specifically, on four benchmark datasets, CLCD-Block achieved up to 98.79% Precision, 98.84% Recall, 98.79% Specificity, 98.81% F1-Score, and 98.71% Accuracy, showcasing its superior diagnostic capability. Designed for COVID-19 diagnosis, the CLCD-Block framework is adaptable to other applications, integrating AI, decentralized training, privacy protection, and secure blockchain collaboration. It addresses challenges in diagnosing chronic diseases, facilitates cross-institutional research and monitors infectious outbreaks. Future work will focus on enhancing scalability, optimizing real-time performance and adapting the model for broader healthcare datasets.
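The "combined learning" idea of aggregating models across institutions without sharing raw images can be sketched as FedAvg-style parameter averaging; this is an illustrative simplification, and CLCD-Block's blockchain layer and hybrid capsule network are not modeled here:

```python
def aggregate(weight_sets, sample_counts):
    """Sample-weighted average of model parameters from several institutions,
    so raw CT images never leave each site (FedAvg-style aggregation)."""
    total = sum(sample_counts)
    n_params = len(weight_sets[0])
    return [
        sum(ws[i] * n for ws, n in zip(weight_sets, sample_counts)) / total
        for i in range(n_params)
    ]

site_a = [0.2, 0.4]   # parameters trained locally on site A's CT images
site_b = [0.6, 0.0]   # parameters trained locally on site B's CT images
global_model = aggregate([site_a, site_b], sample_counts=[100, 300])
```

Each round, sites train locally and submit only parameters; in a blockchain-backed variant the submitted updates would additionally be recorded on a tamper-evident ledger.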

A Deep Learning-Driven Framework for Inhalation Injury Grading Using Bronchoscopy Images

Yifan Li, Alan W Pang, Jo Woon Chong

arXiv preprint · May 13, 2025
Inhalation injuries are challenging to diagnose and grade because traditional methods, such as the Abbreviated Injury Score (AIS), rely on subjective assessments and show weak correlations with clinical outcomes. This study introduces a novel deep learning-based framework for grading inhalation injuries using bronchoscopy images, with the duration of mechanical ventilation as an objective metric. To address the scarcity of medical imaging data, we propose enhanced StarGAN, a generative model that integrates Patch Loss and SSIM Loss to improve the quality and clinical relevance of synthetic images. The augmented dataset generated by enhanced StarGAN significantly improved classification performance when evaluated using the Swin Transformer, achieving an accuracy of 77.78%, an 11.11% improvement over the original dataset. Image quality was assessed using the Fréchet Inception Distance (FID), where enhanced StarGAN achieved the lowest FID of 30.06, outperforming baseline models. Burn surgeons confirmed the realism and clinical relevance of the generated images, particularly the preservation of bronchial structures and color distribution. These results highlight the potential of enhanced StarGAN in addressing data limitations and improving classification accuracy for inhalation injury grading.
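FID measures the Fréchet distance between Gaussian fits of real and generated image features (InceptionV3 activations in practice). Under a diagonal-covariance assumption the distance has a simple closed form; this is a sketch of the metric, not the full FID computation used in the paper:

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum_d (v1_d + v2_d - 2*sqrt(v1_d * v2_d)).
    Real FID uses full feature covariance matrices; this diagonal form
    illustrates the metric's behavior."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical feature distributions -> distance 0; shifted means -> positive
d_same = fid_diagonal([0.0, 1.0], [1.0, 2.0], [0.0, 1.0], [1.0, 2.0])
d_shift = fid_diagonal([0.0, 1.0], [1.0, 2.0], [0.5, 1.0], [1.0, 2.0])
```

Lower is better, which is why enhanced StarGAN's FID of 30.06 beating the baselines indicates its synthetic bronchoscopy images sit closer to the real feature distribution.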

Evaluation of an artificial intelligence noise reduction tool for conventional X-ray imaging - a visual grading study of pediatric chest examinations at different radiation dose levels using anthropomorphic phantoms.

Hultenmo M, Pernbro J, Ahlin J, Bonnier M, Båth M

PubMed · May 13, 2025
Noise reduction tools developed with artificial intelligence (AI) may be implemented to improve image quality and reduce radiation dose, which is of special interest in the more radiosensitive pediatric population. The aim of the present study was to examine the effect of the AI-based intelligent noise reduction (INR) on image quality at different dose levels in pediatric chest radiography. Anteroposterior and lateral images of two anthropomorphic phantoms were acquired with both standard noise reduction and INR at different dose levels. In total, 300 anteroposterior and 420 lateral images were included. Image quality was evaluated by three experienced pediatric radiologists. Gradings were analyzed with visual grading characteristics (VGC) resulting in area under the VGC curve (AUC_VGC) values and associated confidence intervals (CI). Image quality of different anatomical structures and overall clinical image quality were statistically significantly better in the anteroposterior INR images than in the corresponding standard noise reduced images at each dose level. Compared with reference anteroposterior images at a dose level of 100% with standard noise reduction, the image quality of the anteroposterior INR images was graded as significantly better at dose levels of ≥ 80%. Statistical significance was also achieved at lower dose levels for some structures. The assessments of the lateral images showed similar trends but with fewer significant results. The results of the present study indicate that the AI-based INR may potentially be used to improve image quality at a specific dose level or to reduce dose and maintain the image quality in pediatric chest radiography.

AI-based volumetric six-tissue body composition quantification from CT cardiac attenuation scans for mortality prediction: a multicentre study.

Yi J, Marcinkiewicz AM, Shanbhag A, Miller RJH, Geers J, Zhang W, Killekar A, Manral N, Lemley M, Buchwald M, Kwiecinski J, Zhou J, Kavanagh PB, Liang JX, Builoff V, Ruddy TD, Einstein AJ, Feher A, Miller EJ, Sinusas AJ, Berman DS, Dey D, Slomka PJ

PubMed · May 12, 2025
CT attenuation correction (CTAC) scans are routinely obtained during cardiac perfusion imaging, but currently only used for attenuation correction and visual calcium estimation. We aimed to develop a novel artificial intelligence (AI)-based approach to obtain volumetric measurements of chest body composition from CTAC scans and to evaluate these measures for all-cause mortality risk stratification. We applied AI-based segmentation and image-processing techniques on CTAC scans from a large international image-based registry at four sites (Yale University, University of Calgary, Columbia University, and University of Ottawa), to define the chest rib cage and multiple tissues. Volumetric measures of bone, skeletal muscle, subcutaneous adipose tissue, intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and epicardial adipose tissue (EAT) were quantified between automatically identified T5 and T11 vertebrae. The independent prognostic value of volumetric attenuation and indexed volumes were evaluated for predicting all-cause mortality, adjusting for established risk factors and 18 other body composition measures via Cox regression models and Kaplan-Meier curves. The end-to-end processing time was less than 2 min per scan with no user interaction. Between 2009 and 2021, we included 11 305 participants from four sites participating in the REFINE SPECT registry, who underwent single-photon emission computed tomography cardiac scans. After excluding patients who had incomplete T5-T11 scan coverage, missing clinical data, or who had been used for EAT model training, the final study group comprised 9918 patients. 5451 (55%) of 9918 participants were male and 4467 (45%) of 9918 participants were female. Median follow-up time was 2·48 years (IQR 1·46-3·65), during which 610 (6%) patients died. 
High VAT, EAT, and IMAT attenuation were associated with an increased all-cause mortality risk (adjusted hazard ratio 2·39, 95% CI 1·92-2·96; p<0·0001, 1·55, 1·26-1·90; p<0·0001, and 1·30, 1·06-1·60; p=0·012, respectively). Patients with high bone attenuation were at reduced risk of death (0·77, 0·62-0·95; p=0·016). Likewise, high skeletal muscle volume index was associated with a reduced risk of death (0·56, 0·44-0·71; p<0·0001). CTAC scans obtained routinely during cardiac perfusion imaging contain important volumetric body composition biomarkers that can be automatically measured and offer important additional prognostic value. The National Heart, Lung, and Blood Institute, National Institutes of Health.
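The survival analysis behind the Kaplan-Meier curves can be sketched with the standard product-limit estimator (toy follow-up data below, not registry data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. events[i] is 1 for a death at
    times[i], 0 for censoring. Returns (time, S(t)) pairs at event times."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:  # handle ties at time t
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk   # product-limit update
            curve.append((t, surv))
        at_risk -= removed                    # deaths and censorings leave the risk set
    return curve

# Toy cohort: deaths at t=1 and t=3 years, censoring at t=2 and t=4
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0])
```

Stratifying such curves by, say, high vs. low VAT attenuation, and comparing them alongside Cox-adjusted hazard ratios, is the kind of analysis the study reports.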