
Ontology-Based Concept Distillation for Radiology Report Retrieval and Labeling

Felix Nützel, Mischa Dombrowski, Bernhard Kainz

arXiv preprint · Aug 27, 2025
Retrieval-augmented learning based on radiology reports has emerged as a promising direction to improve performance on long-tail medical imaging tasks, such as rare disease detection in chest X-rays. Most existing methods rely on comparing high-dimensional text embeddings from models like CLIP or CXR-BERT, which are often difficult to interpret, computationally expensive, and not well-aligned with the structured nature of medical knowledge. We propose a novel, ontology-driven alternative for comparing radiology report texts based on clinically grounded concepts from the Unified Medical Language System (UMLS). Our method extracts standardised medical entities from free-text reports using an enhanced pipeline built on RadGraph-XL and SapBERT. These entities are linked to UMLS concepts (CUIs), enabling a transparent, interpretable set-based representation of each report. We then define a task-adaptive similarity measure based on a modified and weighted version of the Tversky Index that accounts for synonymy, negation, and hierarchical relationships between medical entities. This allows efficient and semantically meaningful similarity comparisons between reports. We demonstrate that our approach outperforms state-of-the-art embedding-based retrieval methods in a radiograph classification task on MIMIC-CXR, particularly in long-tail settings. Additionally, we use our pipeline to generate ontology-backed disease labels for MIMIC-CXR, offering a valuable new resource for downstream learning tasks. Our work provides more explainable, reliable, and task-specific retrieval strategies in clinical AI systems, especially when interpretability and domain knowledge integration are essential. Our code is available at https://github.com/Felix-012/ontology-concept-distillation
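
To make the set-based comparison concrete, here is a minimal sketch of a weighted Tversky-style similarity between two reports represented as sets of UMLS CUIs. The per-concept weights, the alpha/beta parameters, and the example CUIs are illustrative assumptions; the paper's actual measure additionally handles synonymy, negation, and hierarchical relationships.

```python
# Illustrative weighted Tversky similarity between two reports represented
# as sets of UMLS CUIs. Weights and alpha/beta are placeholder assumptions;
# the paper's measure also accounts for synonymy, negation, and hierarchy.
def weighted_tversky(a, b, weights, alpha=0.5, beta=0.5):
    mass = lambda cuis: sum(weights.get(c, 1.0) for c in cuis)
    common = mass(a & b)          # concepts shared by both reports
    only_a = mass(a - b)          # concepts unique to report A
    only_b = mass(b - a)          # concepts unique to report B
    denom = common + alpha * only_a + beta * only_b
    return common / denom if denom else 0.0

report_a = {"C0032285", "C0034063"}   # hypothetical CUIs for report A
report_b = {"C0032285", "C0002871"}   # hypothetical CUIs for report B
weights = {"C0032285": 2.0}           # hypothetical task-specific weight
print(weighted_tversky(report_a, report_b, weights))  # 2.0 / (2.0 + 0.5 + 0.5) ≈ 0.667
```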

Ultra-Low-Dose CTPA Using Sparse Sampling CT Combined with the U-Net for Deep Learning-Based Artifact Reduction: An Exploratory Study.

Sauter AP, Thalhammer J, Meurer F, Dorosti T, Sasse D, Ritter J, Leonhardt Y, Pfeiffer F, Schaff F, Pfeiffer D

PubMed paper · Aug 27, 2025
This retrospective study evaluates U-Net-based artifact reduction for dose-reduced sparse-sampling CT (SpSCT) in terms of image quality and diagnostic performance using a reader study and automated detection. CT pulmonary angiograms from 89 patients were used to generate SpSCT data with 16 to 512 views. Twenty patients were reserved for a reader study and test set; the remaining 69 were used to train (53) and validate (16) a dual-frame U-Net for artifact reduction. U-Net post-processed images were assessed for image quality, diagnostic performance, and automated pulmonary embolism (PE) detection using the top-performing network from the 2020 RSNA PE detection challenge. Statistical comparisons were made using two-sided Wilcoxon signed-rank and DeLong tests. Post-processing with the dual-frame U-Net significantly improved image quality in the internal test set, with structural similarity indices of 0.634/0.378/0.234/0.152 for FBP and 0.894/0.892/0.866/0.778 for U-Net at 128/64/32/16 views, respectively. The reader study showed significantly enhanced image quality (3.15 vs. 3.53 for 256 views, 0.00 vs. 2.52 for 32 views), increased diagnostic confidence (0.00 vs. 2.38 for 32 views), and fewer artifacts across all subsets (P < 0.05). Diagnostic performance, measured by the Sørensen-Dice coefficient, was significantly better for 64- and 32-view images (0.23 vs. 0.44 and 0.00 vs. 0.09, P < 0.05). Automated PE detection was better at fewer views (64 views: 0.77 vs. 0.80; 16 views: 0.59 vs. 0.80), although the differences were not statistically significant. U-Net-based post-processing of SpSCT data significantly enhances image quality and diagnostic performance, supporting substantial dose reduction in CT pulmonary angiography.
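
As a rough illustration of the reported image-quality comparison, the following sketch computes SSIM for noisy stand-ins of FBP and U-Net outputs against a reference image using scikit-image; the arrays are random placeholders, not CT reconstructions.

```python
# Sketch of the per-view-count SSIM comparison: FBP vs. U-Net output against
# a fully sampled reference. Random arrays stand in for real reconstructions.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
reference = rng.random((512, 512))            # fully sampled reconstruction
for views in (128, 64, 32, 16):
    fbp = reference + rng.normal(0, 12.8 / views, reference.shape)   # artifact proxy grows as views drop
    unet = reference + rng.normal(0, 0.02, reference.shape)          # post-processed proxy
    drange = reference.max() - reference.min()
    print(views, "views | FBP SSIM:", round(ssim(reference, fbp, data_range=drange), 3),
          "| U-Net SSIM:", round(ssim(reference, unet, data_range=drange), 3))
```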

Automatic opportunistic osteoporosis screening using chest X-ray images via deep neural networks.

Tang J, Yin X, Lai J, Luo K, Wu D

PubMed paper · Aug 27, 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and quality, which increases the risk of fragility fractures. The current diagnostic gold standard, dual-energy X-ray absorptiometry (DXA), faces limitations such as low equipment penetration, high testing costs, and radiation exposure, restricting its feasibility as a screening tool. To address these limitations, we retrospectively collected data from 1,995 patients who visited Daping Hospital in Chongqing from January 2019 to August 2024 and developed an opportunistic screening method using chest X-rays. We trained three deep neural network models via transfer learning: Inception v3, VGG16, and ResNet50. These models were evaluated on their classification performance for osteoporosis from chest X-ray images, with external validation on multi-center data. The ResNet50 model demonstrated superior performance, achieving average accuracies of 87.85% and 90.38% on the internal test dataset across two experiments, with AUC values of 0.945 and 0.957, respectively. These results outperformed traditional convolutional neural networks. In the external validation, the ResNet50 model achieved an AUC of 0.904, accuracy of 89%, sensitivity of 90%, and specificity of 88.57%, demonstrating strong generalization ability; the model also remains robust in the presence of concurrent pulmonary pathologies. This study provides an automatic screening method for osteoporosis using chest X-rays, without additional radiation exposure or cost. The ResNet50 model's high performance supports clinicians in the early identification and treatment of osteoporosis patients.
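
A minimal sketch of the transfer-learning setup such a study typically uses: an ImageNet-pretrained ResNet50 with its final layer replaced for binary osteoporosis screening. The frozen backbone, optimizer, and hyperparameters are assumptions for illustration, not the paper's exact configuration.

```python
# Transfer-learning sketch: ImageNet-pretrained ResNet50, new 2-class head.
# Freezing the backbone and the hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # osteoporosis vs. normal head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)       # placeholder batch of chest X-rays
y = torch.randint(0, 2, (4,))         # placeholder labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```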

SWiFT: Soft-Mask Weight Fine-tuning for Bias Mitigation

Junyu Yan, Feng Chen, Yuyang Xue, Yuning Du, Konstantinos Vilouras, Sotirios A. Tsaftaris, Steven McDonagh

arXiv preprint · Aug 26, 2025
Recent studies have shown that Machine Learning (ML) models can exhibit bias in real-world scenarios, posing significant challenges in ethically sensitive domains such as healthcare. Such bias can degrade model fairness and generalization and risks amplifying social discrimination, so there is a need to remove biases from trained models. Existing debiasing approaches often require access to the original training data and extensive model retraining, and they typically exhibit trade-offs between model fairness and discriminative performance. To address these challenges, we propose Soft-Mask Weight Fine-Tuning (SWiFT), a debiasing framework that efficiently improves fairness while preserving discriminative performance at a much lower debiasing cost. Notably, SWiFT requires only a small external dataset and only a few epochs of model fine-tuning. The idea behind SWiFT is to first find the relative, and yet distinct, contributions of model parameters to both bias and predictive performance. Then, a two-step fine-tuning process updates each parameter with different gradient flows defined by its contribution. Extensive experiments with three bias-sensitive attributes (gender, skin tone, and age) across four dermatological and two chest X-ray datasets demonstrate that SWiFT consistently reduces model bias while achieving competitive or even superior diagnostic accuracy under common fairness and accuracy metrics, compared to the state of the art. Specifically, we demonstrate improved model generalization, evidenced by superior performance on several out-of-distribution (OOD) datasets.
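
The core mechanism, scoring each parameter's contribution to bias versus task performance and gating its update accordingly, might look roughly like the sketch below. The bias proxy, the mask formula, and the single-step update are simplified assumptions rather than the paper's exact two-step procedure.

```python
# Simplified soft-mask fine-tuning sketch. The bias proxy and mask formula
# are illustrative assumptions, not SWiFT's exact procedure.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 2)                 # stand-in for a trained, biased model
x = torch.randn(32, 8)                  # small external debiasing batch
y = torch.randint(0, 2, (32,))          # task labels
a = torch.randint(0, 2, (32,))          # protected attribute (e.g., sex)

ce = nn.CrossEntropyLoss()
logits = model(x)
task_loss = ce(logits, y)
# Toy bias proxy: gap in mean positive-class logit between the two groups.
bias_loss = (logits[a == 0, 1].mean() - logits[a == 1, 1].mean()).abs()

params = list(model.parameters())
task_g = torch.autograd.grad(task_loss, params, retain_graph=True)
bias_g = torch.autograd.grad(bias_loss, params)

# Soft mask in [0, 1]: near 1 where a weight matters mostly for bias, near 0
# where it matters mostly for the task, so updates target bias-carrying weights.
masks = [bg.abs() / (bg.abs() + tg.abs() + 1e-8) for tg, bg in zip(task_g, bias_g)]

optimizer = torch.optim.SGD(params, lr=1e-2)
optimizer.zero_grad()
ce(model(x), y).backward()              # fresh task gradients for the update
with torch.no_grad():
    for p, m in zip(params, masks):
        p.grad *= m                     # gate each parameter's update by its mask
optimizer.step()
```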

Benign-Malignant Classification of Pulmonary Nodules in CT Images Based on Fractal Spectrum Analysis

Ma, Y., Lei, S., Wang, B., Qiao, Y., Xing, F., Liang, T.

medRxiv preprint · Aug 26, 2025
This study reveals that pulmonary nodules exhibit distinct multifractal characteristics, with malignant nodules demonstrating significantly higher fractal dimensions at larger scales. Based on this finding, an automatic benign-malignant classification method for pulmonary nodules in CT images was developed using fractal spectrum analysis. By computing continuous three-dimensional fractal dimensions on 121 nodule samples from the LIDC-IDRI database, a 201-dimensional fractal feature spectrum was extracted, and a simplified multilayer perceptron (with only 6×6 nodes in its intermediate layers) was constructed for pulmonary nodule classification. Experimental results demonstrate that this method achieved 96.69% accuracy in distinguishing benign from malignant pulmonary nodules. The discovery of scale-dependent multifractal properties enables fractal spectrum analysis to effectively capture the complexity differences in the multi-scale structure of malignant nodules, providing an efficient and interpretable AI-aided diagnostic method for early lung cancer diagnosis.
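
A minimal sketch of the kind of 3D box-counting measurement a fractal feature spectrum could be built from; the paper's continuous 201-dimensional spectrum is approximated here by a single log-log slope over a few box sizes, and the nodule mask is a random placeholder.

```python
# 3D box-counting sketch: count occupied boxes per scale, then take the
# negative log-log slope as the fractal dimension. Mask is a random stand-in.
import numpy as np

def box_counts(volume, sizes=(2, 4, 8, 16)):
    """Count occupied s x s x s boxes in a binary nodule mask for each size s."""
    counts = []
    for s in sizes:
        # Trim so each axis divides evenly, then max-pool into s-sized boxes.
        t = volume[:volume.shape[0] // s * s,
                   :volume.shape[1] // s * s,
                   :volume.shape[2] // s * s]
        pooled = t.reshape(t.shape[0] // s, s,
                           t.shape[1] // s, s,
                           t.shape[2] // s, s).max(axis=(1, 3, 5))
        counts.append(pooled.sum())
    return np.array(sizes), np.array(counts)

rng = np.random.default_rng(0)
mask = rng.random((64, 64, 64)) > 0.7          # placeholder binary nodule mask
sizes, counts = box_counts(mask)
# Fractal dimension = negative slope of log(count) vs. log(box size);
# a dense random mask comes out near 3, as expected for space-filling sets.
dim = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print("estimated box-counting dimension:", round(dim, 3))
```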

A Machine Learning Approach to Volumetric Computations of Solid Pulmonary Nodules

Yihan Zhou, Haocheng Huang, Yue Yu, Jianhui Shang

arXiv preprint · Aug 26, 2025
Early detection of lung cancer is crucial for effective treatment and relies on accurate volumetric assessment of pulmonary nodules in CT scans. Traditional methods, such as consolidation-to-tumor ratio (CTR) and spherical approximation, are limited by inconsistent estimates due to variability in nodule shape and density. We propose an advanced framework that combines a multi-scale 3D convolutional neural network (CNN) with subtype-specific bias correction for precise volume estimation. The model was trained and evaluated on a dataset of 364 cases from Shanghai Chest Hospital. Our approach achieved a mean absolute deviation of 8.0 percent compared to manual nonlinear regression, with inference times under 20 seconds per scan. This method outperforms existing deep learning and semi-automated pipelines, which typically have errors of 25 to 30 percent and require over 60 seconds for processing. Our results show a reduction in error by over 17 percentage points and a threefold acceleration in processing speed. These advancements offer a highly accurate, efficient, and scalable tool for clinical lung nodule screening and monitoring, with promising potential for improving early lung cancer detection.
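
One plausible reading of subtype-specific bias correction is a per-subtype calibration factor applied to a voxel-count volume estimate, as in the sketch below; the subtype names and correction factors are invented placeholders, not the paper's calibration.

```python
# Illustrative subtype-specific bias correction on a voxel-count volume.
# Correction factors and subtype names are invented for illustration.
import numpy as np

SUBTYPE_CORRECTION = {"pure_solid": 1.00, "part_solid": 0.93, "ground_glass": 1.08}

def nodule_volume_mm3(mask, spacing_mm=(1.0, 0.7, 0.7), subtype="pure_solid"):
    """Voxel-count volume from a binary segmentation, scaled by a hypothetical
    per-subtype multiplicative bias factor."""
    voxel_mm3 = float(np.prod(spacing_mm))
    raw = mask.sum() * voxel_mm3
    return raw * SUBTYPE_CORRECTION.get(subtype, 1.0)

mask = np.zeros((32, 32, 32), dtype=bool)
mask[8:24, 8:24, 8:24] = True                   # placeholder cubic "nodule"
print(round(nodule_volume_mm3(mask, subtype="part_solid"), 1))  # ~1866.5 mm^3
```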

Improved pulmonary embolism detection in CT pulmonary angiogram scans with hybrid vision transformers and deep learning techniques.

Abdelhamid A, El-Ghamry A, Abdelhay EH, Abo-Zahhad MM, Moustafa HE

PubMed paper · Aug 26, 2025
Pulmonary embolism (PE) is a severe, life-threatening cardiovascular condition and the third leading cause of cardiovascular mortality after myocardial infarction and stroke. It occurs when blood clots obstruct the pulmonary arteries, impeding blood flow and oxygen exchange in the lungs. Prompt and accurate detection of PE is critical for appropriate clinical decision-making and patient survival, yet the complexity involved in interpreting medical images can often result in misdiagnosis. Recent advances in Deep Learning (DL) have substantially improved the capabilities of Computer-Aided Diagnosis (CAD) systems. Despite these advancements, existing single-model DL methods are limited when handling complex, diverse, and imbalanced medical imaging datasets. Addressing this gap, our research proposes an ensemble framework for classifying PE that capitalizes on the complementary capabilities of ResNet50, DenseNet121, and Swin Transformer models. This ensemble harnesses the strengths of convolutional neural networks (CNNs) and vision transformers (ViTs), leading to improved prediction accuracy and model robustness. The proposed methodology includes a preprocessing pipeline leveraging autoencoder (AE)-based dimensionality reduction, data augmentation to avoid overfitting, the discrete wavelet transform (DWT) for multiscale feature extraction, and Sobel filtering for edge detection and noise reduction. The proposed model was rigorously evaluated on the public Radiological Society of North America (RSNA-STR) PE dataset, achieving 97.80% accuracy and an area under the receiver operating characteristic curve (AUROC) of 0.99. Comparative analysis demonstrated superior performance over state-of-the-art pre-trained models and recent ViT-based approaches, highlighting the method's effectiveness in improving early PE detection and providing robust support for clinical decision-making.
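
The two signal-processing steps named in the pipeline, a discrete wavelet transform and Sobel filtering, can be sketched with PyWavelets and SciPy as below; the wavelet choice ('haar') and the random input slice are assumptions.

```python
# Sketch of the named preprocessing steps: single-level 2D DWT for multiscale
# features and a Sobel gradient magnitude for edges. Input is a random stand-in.
import numpy as np
import pywt
from scipy import ndimage

rng = np.random.default_rng(0)
ct_slice = rng.random((256, 256)).astype(np.float32)   # placeholder CTPA slice

# Multiscale decomposition: approximation + horizontal/vertical/diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(ct_slice, "haar")

# Edge map: gradient magnitude from Sobel responses along each axis.
gx = ndimage.sobel(ct_slice, axis=0)
gy = ndimage.sobel(ct_slice, axis=1)
edges = np.hypot(gx, gy)

print(cA.shape, edges.shape)        # (128, 128) (256, 256)
```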

Classifiers Combined with DenseNet Models for Lung Cancer Computed Tomography Image Classification: A Comparative Analysis.

Mahmoud MA, Wu S, Su R, Wen Y, Liu S, Guan Y

PubMed paper · Aug 26, 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide. While deep learning approaches show promise in medical imaging, comprehensive comparisons of classifier combinations with DenseNet architectures for lung cancer classification are limited. This study investigates the performance of different classifier combinations, Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multi-Layer Perceptron (MLP), with DenseNet architectures for lung cancer classification using chest CT scan images. A comparative analysis was conducted on 1,000 chest CT scan images comprising Adenocarcinoma, Large Cell Carcinoma, Squamous Cell Carcinoma, and normal tissue samples. Three DenseNet variants (DenseNet-121, DenseNet-169, DenseNet-201) were combined with three classifiers: SVM, ANN, and MLP. Performance was evaluated using accuracy, Area Under the Curve (AUC), precision, recall, specificity, and F1-score with an 80-20 train-test split. The optimal model achieved 92% training accuracy and 83% test accuracy. Performance across models ranged from 81% to 92% for training accuracy and 73% to 83% for test accuracy. The most balanced combination demonstrated robust results (training: 85% accuracy, 0.99 AUC; test: 79% accuracy, 0.95 AUC) with minimal overfitting. Deep learning approaches effectively categorize chest CT scans for lung cancer detection; the MLP-DenseNet-169 combination's 83% test accuracy represents a promising benchmark. Limitations include the retrospective design and a limited sample size from a single source. This evaluation demonstrates the effectiveness of combining DenseNet architectures with different classifiers for lung cancer CT classification. MLP-DenseNet-169 achieved optimal performance, while SVM-DenseNet-169 showed superior stability, providing valuable benchmarks for automated lung cancer detection systems.
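
One of the nine combinations compared above might be assembled roughly as follows: a frozen DenseNet-169 as a feature extractor feeding an SVM. The preprocessing, hyperparameters, and random stand-in inputs are illustrative assumptions.

```python
# Sketch of one classifier combination: frozen DenseNet-169 features -> SVM.
# Inputs are random stand-ins for preprocessed CT slices.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Identity()            # expose the 1664-d pooled features
backbone.eval()

x = torch.randn(40, 3, 224, 224)               # placeholder CT-slice batch
y = torch.randint(0, 4, (40,)).numpy()         # 4 classes incl. normal tissue
with torch.no_grad():
    feats = backbone(x).numpy()

clf = SVC(kernel="rbf", C=1.0).fit(feats, y)   # SVM head on deep features
print(clf.score(feats, y))                     # training-set accuracy of the sketch
```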

AT-CXR: Uncertainty-Aware Agentic Triage for Chest X-rays

Xueyang Li, Mingze Jiang, Gelei Xu, Jun Xia, Mengzhao Jia, Danny Chen, Yiyu Shi

arXiv preprint · Aug 26, 2025
Agentic AI is advancing rapidly, yet truly autonomous medical-imaging triage, where a system decides when to stop, escalate, or defer under real constraints, remains relatively underexplored. To address this gap, we introduce AT-CXR, an uncertainty-aware agent for chest X-rays. The system estimates per-case confidence and distributional fit, then follows a stepwise policy to issue an automated decision or abstain with a suggested label for human intervention. We evaluate two router designs that share the same inputs and actions: a deterministic rule-based router and an LLM-decided router. In a five-fold evaluation on a balanced subset of the NIH ChestX-ray14 dataset, both variants outperform strong zero-shot vision-language models and state-of-the-art supervised classifiers, achieving higher full-coverage accuracy and superior selective-prediction performance, evidenced by a lower area under the risk-coverage curve (AURC) and a lower error rate at high coverage, while operating with latency that meets practical clinical constraints. The two routers provide complementary operating points, enabling deployments to prioritize either maximal throughput or maximal accuracy. Our code is available at https://github.com/XLIAaron/uncertainty-aware-cxr-agent.
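
A deterministic rule-based router of the kind described reduces, at its simplest, to thresholding confidence and an OOD score, as in this sketch; the thresholds and the abstention interface are assumptions, not AT-CXR's actual policy.

```python
# Minimal rule-based router sketch: act automatically only when confident and
# in-distribution, otherwise abstain with a suggested label. Thresholds are
# illustrative assumptions.
import numpy as np

def route(probs, ood_score, conf_tau=0.90, ood_tau=0.5):
    """Return ('auto', label) or ('abstain', suggested_label)."""
    label = int(np.argmax(probs))
    if probs[label] >= conf_tau and ood_score < ood_tau:
        return "auto", label                   # confident and in-distribution
    return "abstain", label                    # defer to a human, with a suggestion

print(route(np.array([0.02, 0.95, 0.03]), ood_score=0.1))   # ('auto', 1)
print(route(np.array([0.40, 0.35, 0.25]), ood_score=0.1))   # ('abstain', 0)
```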

Random forest-based out-of-distribution detection for robust lung cancer segmentation

Aneesh Rangnekar, Harini Veeraraghavan

arXiv preprint · Aug 26, 2025
Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans is essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation on in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that uses deep features from the pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, and a convolutional decoder trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 public 3D CT scans that included one ID dataset and four OOD datasets comprising chest CTs with pulmonary embolism (PE) and COVID-19, and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with an FPR95 of 18.26%, 27.66%, and less than 0.1% on PE, COVID-19, and abdominal CTs, respectively, consistently outperforming established OOD approaches. The RF-Deep classifier provides a simple and effective approach to enhancing the reliability of cancer segmentation in ID and OOD scenarios.
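
The RF-Deep idea, a random forest over encoder features separating ID from OOD scans, scored by FPR at 95% TPR, can be sketched as follows; the synthetic Gaussian features stand in for Swin-encoder embeddings, and evaluating on the training set is a simplification.

```python
# Sketch of random-forest OOD detection on deep features, scored with FPR95.
# Synthetic Gaussian features stand in for pretrained-encoder embeddings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
id_feats = rng.normal(0.0, 1.0, (300, 128))    # in-distribution embeddings
ood_feats = rng.normal(1.5, 1.0, (300, 128))   # shifted OOD embeddings
X = np.vstack([id_feats, ood_feats])
y = np.array([0] * 300 + [1] * 300)            # 0 = ID, 1 = OOD

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ood_prob = rf.predict_proba(X)[:, 1]           # evaluated on train set for brevity

# FPR95: false-positive rate on ID scans at the threshold catching 95% of OOD.
thresh = np.quantile(ood_prob[y == 1], 0.05)
fpr95 = float(np.mean(ood_prob[y == 0] >= thresh))
print("FPR95:", round(fpr95, 3))
```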