
Faithful, Interpretable Chest X-ray Diagnosis with Anti-Aliased B-cos Networks

Marcel Kleinmann, Shashank Agnihotri, Margret Keuper

arXiv preprint, Jul 22 2025
Faithfulness and interpretability are essential for deploying deep neural networks (DNNs) in safety-critical domains such as medical imaging. B-cos networks offer a promising solution by replacing standard linear layers with a weight-input alignment mechanism, producing inherently interpretable, class-specific explanations without post-hoc methods. While maintaining diagnostic performance competitive with state-of-the-art DNNs, standard B-cos models suffer from severe aliasing artifacts in their explanation maps, making them unsuitable for clinical use where clarity is essential. Additionally, the original B-cos formulation is limited to multi-class settings, whereas chest X-ray analysis often requires multi-label classification due to co-occurring abnormalities. In this work, we address both limitations: (1) we introduce anti-aliasing strategies using FLCPooling (FLC) and BlurPool (BP) to significantly improve explanation quality, and (2) we extend B-cos networks to support multi-label classification. Our experiments on chest X-ray datasets demonstrate that the modified B-cos-FLC and B-cos-BP preserve strong predictive performance while providing faithful and artifact-free explanations suitable for clinical application in multi-label settings. Code available at: https://github.com/mkleinma/B-cos-medical-paper
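For orientation, the anti-aliasing idea referenced above can be illustrated with a minimal BlurPool-style layer: low-pass filter with a fixed binomial kernel before strided subsampling, and score co-occurring findings independently for the multi-label setting. This is a generic PyTorch sketch, not the authors' implementation; the 3x3 kernel and the 14-finding head are assumptions.

```python
# Minimal sketch of BlurPool-style anti-aliased downsampling (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Blur with a fixed binomial kernel, then subsample with the given stride."""
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.channels = channels
        # 3x3 binomial (approximately Gaussian) low-pass filter, one copy per channel.
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

pool = BlurPool2d(channels=64)
y = pool(torch.randn(1, 64, 56, 56))       # -> (1, 64, 28, 28), aliasing reduced

# For the multi-label extension, co-occurring findings are typically scored
# independently with per-class sigmoids and a binary cross-entropy loss.
logits = torch.randn(4, 14)                 # batch of 4, 14 chest X-ray findings (assumed)
targets = torch.randint(0, 2, (4, 14)).float()
loss = F.binary_cross_entropy_with_logits(logits, targets)
```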

Transfer Learning for Automated Two-class Classification of Pulmonary Tuberculosis in Chest X-Ray Images.

Nayyar A, Shrivastava R, Jain S

PubMed, Jul 21 2025
Early and precise diagnosis is essential for effectively treating and managing pulmonary tuberculosis. The purpose of this research is to leverage artificial intelligence (AI), specifically convolutional neural networks (CNNs), to expedite the diagnosis of tuberculosis (TB) from chest X-ray (CXR) images. Mycobacterium tuberculosis, an aerobic bacterium, is the causative agent of TB. The disease remains a global health challenge, particularly in densely populated countries. Early detection via chest X-rays is crucial, but limited medical expertise hampers timely diagnosis. This study explores the application of CNNs, a highly efficient method, for automated TB detection, especially in areas with limited medical expertise. Pre-trained models, specifically VGG-16, VGG-19, ResNet-50, and Inception v3, were used and validated on the data. Each model's distinct design and capabilities facilitate effective feature extraction and classification in medical image analysis, and in TB diagnosis in particular. VGG-16 and VGG-19 are well suited to identifying fine distinctions and hierarchical characteristics in CXR images, while ResNet-50 avoids overfitting while retaining both low- and high-level features. Inception v3, with its capacity to extract multi-scale features, is well suited to examining the varied, complex patterns in a CXR image. Inception v3 outperformed the other models, attaining 97.60% accuracy without pre-processing and 98.78% with pre-processing. The proposed model shows promise as a tool for improving TB diagnosis and reducing the global impact of the disease, but further validation with larger and more diverse datasets is needed.
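As a rough illustration of the transfer-learning setup described in the abstract, the sketch below fine-tunes an ImageNet-pretrained Inception v3 for two-class CXR classification in PyTorch/torchvision; the frozen backbone, learning rate, and auxiliary-loss weight are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged transfer-learning sketch: Inception v3 adapted to a two-class (TB vs. normal) task.
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)

# Freeze the ImageNet-pretrained feature extractor.
for p in model.parameters():
    p.requires_grad = False

# Replace the classification heads with a two-class output.
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4  # assumed learning rate
)
criterion = nn.CrossEntropyLoss()

model.train()
x = torch.randn(2, 3, 299, 299)   # Inception v3 expects 299x299 inputs
y = torch.tensor([0, 1])
out = model(x)                     # in train mode returns (main logits, aux logits)
loss = criterion(out.logits, y) + 0.4 * criterion(out.aux_logits, y)  # 0.4 is an assumed weight
loss.backward()
optimizer.step()
```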

Imaging-aided diagnosis and treatment based on artificial intelligence for pulmonary nodules: A review.

Gao H, Li J, Wu Y, Tang Z, He X, Zhao F, Chen Y, He X

PubMed, Jul 21 2025
Pulmonary nodules are critical indicators for the early detection of lung cancer; however, their diagnosis and management pose significant challenges due to the variability in nodule characteristics, reader fatigue, and limited clinical expertise, often leading to diagnostic errors. The rapid advancement of artificial intelligence (AI) presents promising solutions to these issues. This review compares traditional rule-based methods, handcrafted feature-based machine learning, radiomics, deep learning, and hybrid models incorporating Transformers or attention mechanisms. It systematically compares their methodologies, clinical applications (diagnosis, treatment, prognosis), and dataset usage to evaluate performance, applicability, and limitations in pulmonary nodule management. AI advances have significantly improved pulmonary nodule management, with transformer-based models achieving leading accuracy in segmentation, classification, and subtyping. The fusion of multimodal imaging (CT, PET, and MRI) further enhances diagnostic precision. Additionally, AI aids treatment planning and prognosis prediction by integrating radiomics with clinical data. Despite these advances, challenges remain, including domain shift, high computational demands, limited interpretability, and variability across multi-center datasets. AI thus has transformative potential for improving the diagnosis and treatment of lung nodules, particularly for improving the accuracy of lung cancer treatment and patient prognosis, where significant progress has already been made.

Lysophospholipid metabolism, clinical characteristics, and artificial intelligence-based quantitative assessments of chest CT in patients with stable COPD and healthy smokers.

Zhou Q, Xing L, Ma M, Qiongda B, Li D, Wang P, Chen Y, Liang Y, ChuTso M, Sun Y

PubMed, Jul 21 2025
The specific role of lysophospholipids (LysoPLs) in the pathogenesis of chronic obstructive pulmonary disease (COPD) is not yet fully understood. We determined serum LysoPLs in 20 patients with stable COPD and 20 healthy smokers using liquid chromatography-mass spectrometry (LC-MS) with matching against the LipidIMMS library, and integrated these data with spirometry, systemic inflammation markers, and quantitative chest CT measures generated by an automated 3D U-Net artificial intelligence algorithm. Our analysis identified three differential LysoPLs, lysophosphatidylcholine (LPC) (18:0), LPC (18:1), and LPC (18:2), which were significantly lower in the COPD group than in healthy smokers. Significant negative correlations were observed between these LPCs and the inflammatory markers C-reactive protein and interleukin-6. LPC (18:0) and LPC (18:2) correlated with higher post-bronchodilator FEV1, and the latter also correlated with FEV1% predicted, forced vital capacity (FVC), and the FEV1/FVC ratio. Additionally, all three LPCs were negatively correlated with the volume and percentage of low-attenuation areas (LAA), high-attenuation areas (HAA), honeycombing, reticular patterns, ground-glass opacities (GGO), and consolidation on CT imaging. In the patients with COPD, the three LPCs were most strongly associated with HAA and GGO. In conclusion, patients with stable COPD exhibited a distinct LysoPL metabolism profile, with LPC (18:0), LPC (18:1), and LPC (18:2) being the most significantly altered lipid molecules. The reduction in these three LPCs was associated with impaired pulmonary function and was also linked to a greater extent of emphysema and interstitial lung abnormalities.

MedSR-Impact: Transformer-Based Super-Resolution for Lung CT Segmentation, Radiomics, Classification, and Prognosis

Marc Boubnovski Martell, Kristofer Linton-Reid, Mitchell Chen, Sumeet Hindocha, Benjamin Hunter, Marco A. Calzado, Richard Lee, Joram M. Posma, Eric O. Aboagye

arXiv preprint, Jul 21 2025
High-resolution volumetric computed tomography (CT) is essential for accurate diagnosis and treatment planning in thoracic diseases; however, it is limited by radiation dose and hardware costs. We present the Transformer Volumetric Super-Resolution Network (TVSRN-V2), a transformer-based super-resolution (SR) framework designed for practical deployment in clinical lung CT analysis. Built from scalable components, including Through-Plane Attention Blocks (TAB) and Swin Transformer V2, our model effectively reconstructs fine anatomical details in low-dose CT volumes and integrates seamlessly with downstream analysis pipelines. We evaluate its effectiveness on three critical lung cancer tasks, lobe segmentation, radiomics, and prognosis, across multiple clinical cohorts. To enhance robustness across variable acquisition protocols, we introduce pseudo-low-resolution augmentation, simulating scanner diversity without requiring private data. TVSRN-V2 demonstrates a significant improvement in segmentation accuracy (+4% Dice), higher radiomic feature reproducibility, and enhanced predictive performance (+0.06 C-index and AUC). These results indicate that SR-driven recovery of structural detail significantly enhances clinical decision support, positioning TVSRN-V2 as a well-engineered, clinically viable system for dose-efficient imaging and quantitative analysis in real-world CT workflows.
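The pseudo-low-resolution augmentation mentioned above can be sketched as follows: degrade a CT volume along the through-plane (slice) axis and re-interpolate it to the original grid, giving the super-resolution model training pairs that mimic thick-slice scanners. The downsampling factor and interpolation mode below are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of pseudo-low-resolution augmentation for volumetric CT.
import torch
import torch.nn.functional as F

def pseudo_low_res(volume: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """volume: (D, H, W) CT volume; returns a degraded volume of the same shape."""
    v = volume[None, None]                       # -> (1, 1, D, H, W)
    d, h, w = volume.shape
    # Downsample only the slice (depth) dimension to mimic thick-slice acquisition.
    low = F.interpolate(v, size=(d // factor, h, w), mode="trilinear", align_corners=False)
    # Interpolate back to the original grid; this degraded volume is the network input.
    back = F.interpolate(low, size=(d, h, w), mode="trilinear", align_corners=False)
    return back[0, 0]

vol = torch.randn(128, 256, 256)                 # synthetic stand-in for a CT volume
degraded = pseudo_low_res(vol, factor=4)         # training pair: (degraded, vol)
```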

AI-Assisted Semiquantitative Measurement of Murine Bleomycin-Induced Lung Fibrosis Using In Vivo Micro-CT: An End-to-End Approach.

Cheng H, Gao T, Sun Y, Huang F, Gu X, Shan C, Wang B, Luo S

PubMed, Jul 21 2025
Small animal models are crucial for investigating idiopathic pulmonary fibrosis (IPF) and developing preclinical therapeutic strategies. However, the quantitative measurements used in the longitudinal assessment of experimental lung fibrosis have several limitations: histological and biochemical analyses introduce inter-individual variability, while image-derived biomarkers have yet to directly and accurately quantify the severity of lung fibrosis. This study investigates artificial intelligence (AI)-assisted, end-to-end, semi-quantitative measurement of lung fibrosis using in vivo micro-CT. Based on the bleomycin (BLM)-induced lung fibrosis mouse model, the AI model predicts histopathological scores from in vivo micro-CT images, directly correlating these images with the severity of lung fibrosis in mice. Fibrosis severity was graded by the Ashcroft scale: none (0), mild (1-3), moderate (4-5), severe (≥6). In lung fibrosis severity-stratified 3-fold cross-validation on 225 micro-CT images, the proposed AI model achieved overall accuracy, precision, recall, and F1 scores of 92.9%, 90.9%, 91.6%, and 91.0%, respectively. The overall area under the receiver operating characteristic curve (AUROC) was 0.990 (95% CI: 0.977, 1.000), with AUROC values of 1.000 for none (100 images, 95% CI: 0.997, 1.000), 0.969 for mild (43 images, 95% CI: 0.918, 1.000), 0.992 for moderate (36 images, 95% CI: 0.962, 1.000), and 0.992 for severe (46 images, 95% CI: 0.967, 1.000). These preliminary results indicate that AI-assisted, in vivo micro-CT-based semi-quantitative measurement of murine lung fibrosis is feasible and likely accurate. This novel method holds promise as a tool for improving the reproducibility of experimental studies in animal models of IPF.
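For context, a severity-stratified 3-fold evaluation with macro AUROC, in the spirit of the reported protocol, might look like the scikit-learn sketch below; the random-forest classifier and synthetic features are placeholders, not the paper's CT-based deep model.

```python
# Hedged sketch: severity-stratified 3-fold cross-validation with accuracy and macro AUROC.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(225, 32))                  # stand-in image features
y = rng.integers(0, 4, size=225)                # 0 none, 1 mild, 2 moderate, 3 severe

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
accs, aurocs = [], []
for train_idx, test_idx in skf.split(X, y):
    clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])
    accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    aurocs.append(roc_auc_score(y[test_idx], proba, multi_class="ovr", average="macro"))

print(f"accuracy {np.mean(accs):.3f}, AUROC {np.mean(aurocs):.3f}")
```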

DREAM: A framework for discovering mechanisms underlying AI prediction of protected attributes

Gadgil, S. U., DeGrave, A. J., Janizek, J. D., Xu, S., Nwandu, L., Fonjungo, F., Lee, S.-I., Daneshjou, R.

medRxiv preprint, Jul 21 2025
Recent advances in Artificial Intelligence (AI) have started disrupting the healthcare industry, especially medical imaging, and AI devices are increasingly being deployed into clinical practice. Such classifiers have previously demonstrated the ability to discern a range of protected demographic attributes (like race, age, sex) from medical images with unexpectedly high performance, a sensitive task which is difficult even for trained physicians. In this study, we motivate and introduce a general explainable AI (XAI) framework called DREAM (DiscoveRing and Explaining AI Mechanisms) for interpreting how AI models trained on medical images predict protected attributes. Focusing on two modalities, radiology and dermatology, we successfully train high-performing classifiers for predicting race from chest X-rays (ROC-AUC score of ~0.96) and sex from dermoscopic lesions (ROC-AUC score of ~0.78). We highlight how incorrect use of these demographic shortcuts can have a detrimental effect on the performance of a clinically relevant downstream task like disease diagnosis under a domain shift. Further, we employ various XAI techniques to identify specific signals which can be leveraged to predict sex. Finally, we propose a technique, which we call removal via balancing, to quantify how much a signal contributes to the classification performance. Using this technique and the signals identified, we are able to explain ~15% of the total performance for radiology and ~42% of the total performance for dermatology. We envision DREAM to be broadly applicable to other modalities and demographic attributes. This analysis not only underscores the importance of cautious AI application in healthcare but also opens avenues for improving the transparency and reliability of AI-driven diagnostic tools.
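One plausible reading of the removal-via-balancing idea is sketched below: resample the evaluation set so the candidate signal is no longer associated with the label, then attribute the resulting AUC drop to that signal. This is an illustrative interpretation with synthetic data, not the DREAM authors' exact procedure.

```python
# Hedged sketch of a balancing-based attribution of classifier performance to one signal.
import numpy as np
from sklearn.metrics import roc_auc_score

def balanced_auc(scores, labels, signal, rng):
    """Subsample so each (label, signal) cell has equal size, then compute AUC."""
    cells = [(l, s) for l in (0, 1) for s in (0, 1)]
    n = min(np.sum((labels == l) & (signal == s)) for l, s in cells)
    idx = []
    for l, s in cells:
        cell_idx = np.flatnonzero((labels == l) & (signal == s))
        idx.extend(rng.choice(cell_idx, size=n, replace=False))
    idx = np.array(idx)
    return roc_auc_score(labels[idx], scores[idx])

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 2000)                    # protected attribute, e.g. sex
signal = (labels + (rng.random(2000) < 0.3)) % 2     # a candidate feature correlated with it
scores = 0.7 * signal + 0.3 * rng.random(2000)       # classifier output partly driven by the signal

auc_full = roc_auc_score(labels, scores)
auc_bal = balanced_auc(scores, labels, signal, rng)
print(f"AUC drop attributable to the signal: {auc_full - auc_bal:.3f}")
```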

Automated Quantitative Evaluation of Age-Related Thymic Involution on Plain Chest CT.

Okamura YT, Endo K, Toriihara A, Fukuda I, Isogai J, Sato Y, Yasuoka K, Kagami SI

PubMed, Jul 19 2025
The thymus is an important immune organ involved in T-cell generation. Age-related involution of the thymus has been linked to various age-related pathologies in recent studies. However, there has been no method proposed to quantify age-related thymic involution based on a clinical image. The purpose of this study was to establish an objective and automatic method to quantify age-related thymic involution based on plain chest computed tomography (CT) images. We newly defined the thymic region for quantification (TRQ) as the target anatomical region. We manually segmented the TRQ in 135 CT studies, followed by construction of segmentation neural network (NN) models using the data. We developed the estimator of thymic volume (ETV), a quantitative indicator of the thymic tissue volume inside the segmented TRQ, based on simple mathematical modeling. The Hounsfield unit (HU) value and volume of the NN-segmented TRQ were measured, and the ETV was calculated in each CT study from 853 healthy subjects. We investigated how these measures were related to age and sex using quantile additive regression models. A significant correlation between the NN-segmented and manually segmented TRQ was seen for both the HU value and volume (r = 0.996 and r = 0.986, respectively). ETV declined exponentially with age (p < 0.001), consistent with age-related decline in the thymic tissue volume. In conclusion, our method enabled robust quantification of age-related thymic involution. Our method may aid in the prediction and risk classification of pathologies related to thymic involution.
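As a rough illustration of what an ETV-style estimator could look like, the sketch below treats the segmented TRQ as a two-component fat/soft-tissue mixture and recovers the tissue fraction from the mean HU by linear mixing; the reference HU values are assumptions, not the constants used in the paper.

```python
# Hedged sketch of an ETV-style calculation from the segmented TRQ's mean HU and volume.
HU_FAT = -100.0     # assumed representative HU of intrathymic fat
HU_TISSUE = 40.0    # assumed representative HU of thymic soft tissue

def estimate_thymic_volume(mean_hu: float, trq_volume_ml: float) -> float:
    """Estimate thymic tissue volume (mL) inside the segmented TRQ."""
    fraction = (mean_hu - HU_FAT) / (HU_TISSUE - HU_FAT)
    fraction = min(max(fraction, 0.0), 1.0)      # clamp to a physical range
    return fraction * trq_volume_ml

# Example: a 25 mL TRQ with mean -30 HU -> roughly half thymic tissue, half fat.
print(f"ETV = {estimate_thymic_volume(-30.0, 25.0):.1f} mL")
```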

CXR-TFT: Multi-Modal Temporal Fusion Transformer for Predicting Chest X-ray Trajectories

Mehak Arora, Ayman Ali, Kaiyuan Wu, Carolyn Davis, Takashi Shimazui, Mahmoud Alwakeel, Victor Moas, Philip Yang, Annette Esper, Rishikesan Kamaleswaran

arXiv preprint, Jul 19 2025
In intensive care units (ICUs), patients with complex clinical conditions require vigilant monitoring and prompt interventions. Chest X-rays (CXRs) are a vital diagnostic tool, providing insights into clinical trajectories, but their irregular acquisition limits their utility. Existing tools for CXR interpretation are constrained by cross-sectional analysis, failing to capture temporal dynamics. To address this, we introduce CXR-TFT, a novel multi-modal framework that integrates temporally sparse CXR imaging and radiology reports with high-frequency clinical data, such as vital signs, laboratory values, and respiratory flow sheets, to predict the trajectory of CXR findings in critically ill patients. CXR-TFT leverages latent embeddings from a vision encoder that are temporally aligned with hourly clinical data through interpolation. A transformer model is then trained to predict CXR embeddings at each hour, conditioned on previous embeddings and clinical measurements. In a retrospective study of 20,000 ICU patients, CXR-TFT demonstrated high accuracy in forecasting abnormal CXR findings up to 12 hours before they became radiographically evident. This predictive capability in clinical data holds significant potential for enhancing the management of time-sensitive conditions like acute respiratory distress syndrome, where early intervention is crucial and diagnoses are often delayed. By providing distinctive temporal resolution in prognostic CXR analysis, CXR-TFT offers actionable 'whole patient' insights that can directly improve clinical outcomes.
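The temporal-alignment step described above can be sketched in a few lines: interpolate the sparsely acquired CXR embeddings onto an hourly grid and concatenate them with the hourly clinical features before feeding the fused sequence to the transformer. The embedding width, window length, and linear interpolation are assumptions, not the CXR-TFT authors' exact configuration.

```python
# Hedged sketch of aligning sparse CXR embeddings with hourly clinical data.
import numpy as np

hours = np.arange(0, 48)                       # 48-hour ICU window, hourly grid (assumed)
cxr_times = np.array([3, 21, 40])              # hours at which CXRs were acquired
cxr_embeddings = np.random.randn(3, 128)       # vision-encoder embeddings for the 3 CXRs
vitals = np.random.randn(48, 8)                # hourly clinical features (vitals, labs, ...)

# Linearly interpolate each embedding dimension onto the hourly grid.
aligned_cxr = np.stack(
    [np.interp(hours, cxr_times, cxr_embeddings[:, d]) for d in range(128)],
    axis=1,
)                                              # -> (48, 128)

# Fused sequence fed to the transformer: one token per hour.
fused = np.concatenate([aligned_cxr, vitals], axis=1)   # -> (48, 136)
```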

Machine learning and discriminant analysis model for predicting benign and malignant pulmonary nodules.

Li Z, Zhang W, Huang J, Lu L, Xie D, Zhang J, Liang J, Sui Y, Liu L, Zou J, Lin A, Yang L, Qiu F, Hu Z, Wu M, Deng Y, Zhang X, Lu J

PubMed, Jul 18 2025
Pulmonary nodules (PNs) are increasingly regarded as an early manifestation of lung cancer. PNs that remain stable for more than two years, or whose pathological results indicate they are not lung cancer, are considered benign PNs (BPNs), while PNs that follow the growth pattern of tumors, or whose pathological results indicate lung cancer, are considered malignant PNs (MPNs). Currently, more than 90% of PNs detected by screening tests are benign, with a false positive rate of up to 96.4%. While a range of predictive models have been developed for identifying MPNs, distinguishing BPNs from MPNs remains challenging. We included a total of 5197 patients in this case-control study according to the preset exclusion criteria and sample size. Among them, 4735 with BPNs and 2509 with MPNs were randomly divided into training, validation, and test sets in a 7:1.5:1.5 ratio. Three widely used machine learning algorithms (Random Forests, Gradient Boosting Machine, and XGBoost) were used to screen the candidate features, the corresponding predictive models were then constructed using discriminant analysis, and the best-performing model was selected as the target model. The model was internally validated with 10-fold cross-validation and compared with the PKUPH and Block models. We collated information from chest CT examinations performed from 2018 to 2021 in a physical examination population and found that the detection rate of PNs was 21.57%, with an overall upward trend. The GMU_D model, constructed by discriminant analysis on the machine-learning-screened features, showed excellent discriminative performance (AUC = 0.866, 95% CI: 0.858-0.874), outperforming the PKUPH model (AUC = 0.559, 95% CI: 0.552-0.567) and the Block model (AUC = 0.823, 95% CI: 0.814-0.833). The cross-validation results likewise showed excellent performance (AUC = 0.866, 95% CI: 0.858-0.874). In summary, the detection rate of PNs in the physical examination population undergoing chest CT was 21.57%, and a prediction tool developed and validated on real-world PN data accurately distinguished BPNs from MPNs with excellent predictive performance.
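A hedged sketch of the two-stage modeling idea, screening candidate features with a tree-ensemble importance ranking and then fitting a discriminant analysis model on the retained features with a cross-validated AUC, is shown below; the synthetic data and the top-10 feature cutoff are illustrative assumptions, not the study's clinical variables.

```python
# Hedged sketch: tree-ensemble feature screening followed by linear discriminant analysis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)     # stand-in for clinical/CT features

# Stage 1: feature screening by Random Forest importance (keep the top 10, assumed cutoff).
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]

# Stage 2: discriminant analysis on the screened features, 10-fold cross-validated AUC.
lda = LinearDiscriminantAnalysis()
auc = cross_val_score(lda, X[:, top], y, cv=10, scoring="roc_auc")
print(f"10-fold CV AUC: {auc.mean():.3f} (+/- {auc.std():.3f})")
```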