
Analyzing pediatric forearm X-rays for fracture analysis using machine learning.

Lam V, Parida A, Dance S, Tabaie S, Cleary K, Anwar SM

pubmed · Jul 24 2025
Forearm fractures constitute a significant proportion of emergency department presentations in the pediatric population. The treatment goal is to restore length and alignment between the distal and proximal bone fragments. While immobilization through splinting or casting is sufficient for non-displaced and minimally displaced fractures, moderately or severely displaced fractures often require reduction for realignment. Appropriate treatment is challenging in current practice due to the lack of resources required for specialized pediatric care, leading to delayed and unnecessary transfers between medical centers that can create treatment complications and burdens. The purpose of this study is to build a machine learning model for analyzing forearm fractures to assist clinical centers that lack surgical expertise in pediatric orthopedics. X-ray scans from 1250 children were curated, preprocessed, and manually annotated at our clinical center. Several machine learning models were fine-tuned using a pretraining strategy that leverages a self-supervised learning model with a vision transformer backbone. We further employed strategies to identify the region within the forearm X-ray most relevant to fractures. Model performance was evaluated with and without region-of-interest (ROI) detection to find an optimal model for forearm fracture analysis. Our proposed strategy leverages self-supervised pretraining (without labels) followed by supervised fine-tuning (with labels). The fine-tuned model using regions cropped with ROI identification achieved the highest classification performance, with a true-positive rate (TPR) of 0.79, true-negative rate (TNR) of 0.74, AUROC of 0.81, and AUPR of 0.86 on the testing data. The results show the feasibility of using machine learning models to predict the appropriate treatment for forearm fractures in pediatric cases. With further improvement, the algorithm could potentially be used as a tool to assist non-specialized orthopedic providers in diagnosing and treating these fractures.
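For readers who want to see the overall recipe in code, below is a minimal PyTorch sketch of the pretrain-then-finetune pattern described above; the model name, checkpoint path, and training details are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: fine-tune an SSL-pretrained ViT on ROI-cropped forearm X-rays.
# Model name, checkpoint path, and data layout are illustrative assumptions.
import torch
import torch.nn as nn
import timm

model = timm.create_model("vit_base_patch16_224", num_classes=2)  # e.g., reduction needed vs. not
state = torch.load("ssl_pretrained_vit.pth", map_location="cpu")  # hypothetical SSL checkpoint
model.load_state_dict(state, strict=False)                        # backbone only; head stays random

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """images: ROI crops resized to 224x224; labels: treatment class."""
    logits = model(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```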

Illicit object detection in X-ray imaging using deep learning techniques: A comparative evaluation

Jorgen Cani, Christos Diou, Spyridon Evangelatos, Vasileios Argyriou, Panagiotis Radoglou-Grammatikis, Panagiotis Sarigiannidis, Iraklis Varlamis, Georgios Th. Papadopoulos

arxiv preprint · Jul 23 2025
Automated X-ray inspection is crucial for efficient and unobtrusive security screening in various public settings. However, challenges such as object occlusion, variations in the physical properties of items, diversity in X-ray scanning devices, and limited training data hinder accurate and reliable detection of illicit items. Despite the large body of research in the field, reported experimental evaluations are often incomplete, with frequently conflicting outcomes. To shed light on the research landscape and facilitate further research, a systematic, detailed, and thorough comparative evaluation of recent Deep Learning (DL)-based methods for X-ray object detection is conducted. For this, a comprehensive evaluation framework is developed, composed of: a) Six recent, large-scale, and widely used public datasets for X-ray illicit item detection (OPIXray, CLCXray, SIXray, EDS, HiXray, and PIDray), b) Ten different state-of-the-art object detection schemes covering all main categories in the literature, including generic Convolutional Neural Network (CNN), custom CNN, generic transformer, and hybrid CNN-transformer architectures, and c) Various detection (mAP50 and mAP50:95) and time/computational-complexity (inference time (ms), parameter size (M), and computational load (GFLOPS)) metrics. A thorough analysis of the results leads to critical observations and insights, emphasizing key aspects such as: a) Overall behavior of the object detection schemes, b) Object-level detection performance, c) Dataset-specific observations, and d) Time efficiency and computational complexity analysis. To support reproducibility of the reported experimental results, the evaluation code and model weights are made publicly available at https://github.com/jgenc/xray-comparative-evaluation.
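As a concrete illustration of the time/computational-complexity metrics used in the comparison, here is a minimal sketch measuring parameter count and per-image inference latency; the torchvision detector is a stand-in, not one of the ten evaluated schemes, and the randomly initialized weights only matter for timing.

```python
# Sketch: parameter size (M) and inference time (ms/image) for a detector.
import time
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights=None).eval()  # stand-in detector, random weights
params_m = sum(p.numel() for p in model.parameters()) / 1e6

x = [torch.rand(3, 640, 640)]      # one dummy X-ray-sized image in [0, 1]
with torch.no_grad():
    for _ in range(3):             # warm-up iterations before timing
        model(x)
    t0 = time.perf_counter()
    for _ in range(10):
        model(x)
    ms = (time.perf_counter() - t0) / 10 * 1000

print(f"params: {params_m:.1f} M, inference: {ms:.1f} ms/image")
```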

Interpretable AI Framework for Secure and Reliable Medical Image Analysis in IoMT Systems.

Matthew UO, Rosa RL, Saadi M, Rodriguez DZ

pubmed · Jul 23 2025
The integration of artificial intelligence (AI) into medical image analysis has transformed healthcare, offering unprecedented precision in diagnosis, treatment planning, and disease monitoring. However, its adoption within the Internet of Medical Things (IoMT) raises significant challenges related to transparency, trustworthiness, and security. This paper introduces a novel Explainable AI (XAI) framework tailored for Medical Cyber-Physical Systems (MCPS), addressing these challenges by combining deep neural networks with symbolic knowledge reasoning to deliver clinically interpretable insights. The framework incorporates an Enhanced Dynamic Confidence-Weighted Attention (Enhanced DCWA) mechanism, which improves interpretability and robustness by dynamically refining attention maps through adaptive normalization and multi-level confidence weighting. Additionally, a Resilient Observability and Detection Engine (RODE) leverages sparse observability principles to detect and mitigate adversarial threats, ensuring reliable performance in dynamic IoMT environments. Evaluations conducted on benchmark datasets, including CheXpert, RSNA Pneumonia Detection Challenge, and NIH Chest X-ray Dataset, demonstrate significant advancements in classification accuracy, adversarial robustness, and explainability. The framework achieves a 15% increase in lesion classification accuracy, a 30% reduction in robustness loss, and a 20% improvement in the Explainability Index compared to state-of-the-art methods.
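To give a flavor of what confidence-weighted attention refinement could look like, the sketch below combines adaptive min-max normalization of attention maps with per-sample softmax confidence; the exact Enhanced DCWA formulation belongs to the paper, and every detail here is an assumption.

```python
# Illustrative sketch of confidence-weighted attention refinement; all
# design choices (softmax confidence, min-max normalization) are assumptions.
import torch
import torch.nn.functional as F

def confidence_weighted_attention(attn_maps, logits):
    """attn_maps: (B, H, W) raw attention; logits: (B, C) classifier outputs."""
    conf = F.softmax(logits, dim=1).max(dim=1).values        # per-sample confidence
    flat = attn_maps.flatten(1)
    norm = (flat - flat.min(1, keepdim=True).values) / (
        flat.max(1, keepdim=True).values - flat.min(1, keepdim=True).values + 1e-8
    )                                                        # adaptive normalization per map
    weighted = norm * conf.unsqueeze(1)                      # confidence weighting
    return weighted.view_as(attn_maps)
```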

Area detection improves the person-based performance of a deep learning system for classifying the presence of carotid artery calcifications on panoramic radiographs.

Kuwada C, Mitsuya Y, Fukuda M, Yang S, Kise Y, Mori M, Naitoh M, Ariji Y, Ariji E

pubmed · Jul 22 2025
This study investigated deep learning (DL) systems for diagnosing carotid artery calcifications (CAC) on panoramic radiographs. To this end, two DL systems, one with preceding and one with simultaneous area detection functions, were developed to classify CAC on panoramic radiographs, and their person-based classification performances were compared with that of a DL model directly created using entire panoramic radiographs. A total of 580 panoramic radiographs from 290 patients (with CAC) and 290 controls (without CAC) were used to create and evaluate the DL systems. Two convolutional neural networks, GoogLeNet and YOLOv7, were utilized. The following three systems were created: (1) direct classification of entire panoramic images (System 1), (2) preceding region-of-interest (ROI) detection followed by classification (System 2), and (3) simultaneous ROI detection and classification (System 3). Person-based evaluation using the same test data was performed to compare the three systems. A side-based (left and right sides of participants) evaluation was also performed on Systems 2 and 3. Between-system differences in area under the receiver-operating characteristic curve (AUC) were assessed using DeLong's test. For the side-based evaluation, the AUCs of Systems 2 and 3 were 0.89 and 0.84, respectively, and in the person-based evaluation, Systems 2 and 3 had significantly higher AUC values of 0.86 and 0.90, respectively, compared with System 1 (P < 0.001). No significant difference was found between Systems 2 and 3. Preceding or simultaneous use of area detection improved the person-based performance of DL for classifying the presence of CAC on panoramic radiographs.
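A minimal sketch of the two-stage design (System 2), detecting an ROI first and then classifying the crop, is given below; the GoogLeNet classifier and the crop-and-resize step are stand-ins for the paper's trained pipeline, and the ROI box is assumed to come from a separate detector such as YOLOv7.

```python
# Sketch: classify a detector-provided ROI crop (System 2 style pipeline).
import torch
from torchvision.models import googlenet

classifier = googlenet(weights=None, num_classes=2, aux_logits=False).eval()  # untrained stand-in

def classify_roi(radiograph, box):
    """radiograph: (3, H, W) tensor; box: (x1, y1, x2, y2) ints from the ROI detector."""
    x1, y1, x2, y2 = box
    crop = radiograph[:, y1:y2, x1:x2].unsqueeze(0)
    crop = torch.nn.functional.interpolate(crop, size=(224, 224), mode="bilinear")
    with torch.no_grad():
        logits = classifier(crop)
    return logits.softmax(1)  # probabilities for CAC present / absent
```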

Divisive Decisions: Improving Salience-Based Training for Generalization in Binary Classification Tasks

Jacob Piland, Chris Sweet, Adam Czajka

arxiv preprint · Jul 22 2025
Existing saliency-guided training approaches improve model generalization by incorporating a loss term that compares the model's class activation map (CAM) for a sample's true class (i.e., the correct-label class) against a human reference saliency map. However, prior work has ignored the false-class CAM(s), that is, the model's saliency obtained for the incorrect-label classes. We hypothesize that in binary tasks the true and false CAMs should diverge on the important classification features identified by humans (and reflected in human saliency maps). We use this hypothesis to motivate three new saliency-guided training methods incorporating both the true- and false-class CAMs into the training strategy, along with a novel post-hoc tool for identifying important features. We evaluate all introduced methods on several diverse binary close-set and open-set classification tasks, including synthetic face detection, biometric presentation attack detection, and classification of anomalies in chest X-ray scans, and find that the proposed methods improve the generalization capabilities of deep learning models over traditional (true-class-CAM-only) saliency-guided training approaches. We offer source code and model weights (GitHub repository link removed to preserve anonymity) to support reproducible research.
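The hypothesis above suggests a loss of roughly the following shape: reward true-class CAM agreement with the human map while penalizing false-class CAM overlap on the same regions. This form is an illustrative assumption, not the authors' exact objective.

```python
# Sketch of a true-/false-class CAM divergence loss; the loss form is assumed.
import torch
import torch.nn.functional as F

def cam_divergence_loss(cam_true, cam_false, human_mask):
    """All inputs (B, H, W); human_mask is a float mask, 1 on human-salient features."""
    cam_true = torch.sigmoid(cam_true)
    cam_false = torch.sigmoid(cam_false)
    # Align the true-class CAM with the human reference saliency ...
    align = F.binary_cross_entropy(cam_true, human_mask)
    # ... and penalize the false-class CAM for attending to the same regions.
    overlap = (cam_true * cam_false * human_mask).mean()
    return align + overlap
```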

Robust Noisy Pseudo-label Learning for Semi-supervised Medical Image Segmentation Using Diffusion Model

Lin Xi, Yingliang Ma, Cheng Wang, Sandra Howell, Aldo Rinaldi, Kawal S. Rhode

arxiv preprint · Jul 22 2025
Obtaining pixel-level annotations in the medical domain is both expensive and time-consuming, often requiring close collaboration between clinical experts and developers. Semi-supervised medical image segmentation aims to leverage limited annotated data alongside abundant unlabeled data to achieve accurate segmentation. However, existing semi-supervised methods often struggle to structure semantic distributions in the latent space due to noise introduced by pseudo-labels. In this paper, we propose a novel diffusion-based framework for semi-supervised medical image segmentation. Our method introduces a constraint into the latent structure of semantic labels during the denoising diffusion process by enforcing prototype-based contrastive consistency. Rather than explicitly delineating semantic boundaries, the model leverages class prototypes, i.e., centralized semantic representations in the latent space, as anchors. This strategy improves the robustness of dense predictions, particularly in the presence of noisy pseudo-labels. We also introduce a new publicly available benchmark, Multi-Object Segmentation in X-ray Angiography Videos (MOSXAV), which provides detailed, manually annotated segmentation ground truth for multiple anatomical structures in X-ray angiography videos. Extensive experiments on the EndoScapes2023 and MOSXAV datasets demonstrate that our method outperforms state-of-the-art medical image segmentation approaches under the semi-supervised learning setting. This work presents a robust and data-efficient diffusion model that offers enhanced flexibility and strong potential for a wide range of clinical applications.
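A simplified sketch of prototype-anchored contrastive consistency, pulling pixel embeddings toward their pseudo-label's class prototype, is given below; the temperature and normalization choices are assumptions rather than the paper's formulation.

```python
# Sketch: prototype-anchored contrastive consistency on pixel embeddings.
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(embeddings, pseudo_labels, prototypes, tau=0.1):
    """embeddings: (N, D) pixel features; pseudo_labels: (N,) class ids; prototypes: (C, D)."""
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = z @ p.t() / tau   # similarity of each pixel to each class prototype
    return F.cross_entropy(logits, pseudo_labels)
```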

Faithful, Interpretable Chest X-ray Diagnosis with Anti-Aliased B-cos Networks

Marcel Kleinmann, Shashank Agnihotri, Margret Keuper

arxiv preprint · Jul 22 2025
Faithfulness and interpretability are essential for deploying deep neural networks (DNNs) in safety-critical domains such as medical imaging. B-cos networks offer a promising solution by replacing standard linear layers with a weight-input alignment mechanism, producing inherently interpretable, class-specific explanations without post-hoc methods. While maintaining diagnostic performance competitive with state-of-the-art DNNs, standard B-cos models suffer from severe aliasing artifacts in their explanation maps, making them unsuitable for clinical use where clarity is essential. Additionally, the original B-cos formulation is limited to multi-class settings, whereas chest X-ray analysis often requires multi-label classification due to co-occurring abnormalities. In this work, we address both limitations: (1) we introduce anti-aliasing strategies using FLCPooling (FLC) and BlurPool (BP) to significantly improve explanation quality, and (2) we extend B-cos networks to support multi-label classification. Our experiments on chest X-ray datasets demonstrate that the modified B-cos-FLC and B-cos-BP preserve strong predictive performance while providing faithful and artifact-free explanations suitable for clinical application in multi-label settings. Code available at: https://github.com/mkleinma/B-cos-medical-paper.
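The anti-aliasing idea is easy to sketch: low-pass filter feature maps before downsampling so high frequencies do not fold back as artifacts. The hand-rolled BlurPool module below illustrates the principle; it is not the authors' implementation, and the 3x3 binomial kernel is one common choice. Dropping such a module in place of strided pooling is the standard BlurPool recipe.

```python
# Sketch: BlurPool-style anti-aliased downsampling (blur, then stride).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        k = k[:, None] * k[None, :]        # 3x3 binomial low-pass kernel
        k = k / k.sum()
        self.register_buffer("kernel", k.repeat(channels, 1, 1, 1))  # (C, 1, 3, 3)
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        # Depthwise convolution: blur each channel, then downsample by stride.
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)
```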

Deep learning algorithm for the automatic assessment of axial vertebral rotation in patients with scoliosis using the Nash-Moe method.

Kim JK, Wang MX, Park D, Chang MC

pubmed · Jul 22 2025
Accurate assessment of axial vertebral rotation (AVR) is essential for managing idiopathic scoliosis. The Nash-Moe classification method has been extensively used for AVR assessment; however, its subjective nature can lead to measurement variability. Herein, we propose an automated deep learning (DL) model for AVR assessment based on posteroanterior spinal radiographs. We developed a two-stage DL framework using the MMRotate toolbox and analyzed 1080 posteroanterior spinal radiographs of patients aged 4-18 years. The framework comprises a vertebra detection model (864 training and 216 validation images) and a pedicle detection model (14,608 training and 3652 validation images). We improved the Nash-Moe classification method by implementing a 12-segment division system and a width-ratio metric for precise pedicle assessment. The vertebra and pedicle detection models achieved mean average precision values of 0.909 and 0.905, respectively. The overall classification accuracy was 0.74, with grade-specific precision between 0.70 and 1.00 and recall between 0.33 and 0.93 across Grades 0-3. The proposed DL framework processed complete posteroanterior radiographs in under 5 s per case, compared with 114 s per radiograph for conventional manual measurements. The best performance was observed in mild to moderate rotation cases, with performance in severe rotation cases limited by insufficient data. The DL implementation of the automated Nash-Moe classification method exhibited satisfactory accuracy and exceptional efficiency. However, this study is limited by low recall (0.33) for Grade 3 and the inability to classify Grade 4 owing to dataset constraints. Further validation using augmented datasets that include severe rotation cases is necessary.
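The 12-segment division can be sketched as a simple rule: locate the pedicle center within the vertebral width and map the resulting segment index to a grade. The segment-to-grade mapping below is a hypothetical illustration, not the paper's calibrated scheme.

```python
# Sketch: grade axial rotation from the pedicle's horizontal position.
def nash_moe_grade(vertebra_x1, vertebra_x2, pedicle_cx, n_segments=12):
    """vertebra_x1/x2: horizontal extent of the vertebra; pedicle_cx: pedicle center x."""
    width_ratio = (pedicle_cx - vertebra_x1) / (vertebra_x2 - vertebra_x1)
    segment = min(int(width_ratio * n_segments), n_segments - 1)
    # Hypothetical segment-to-grade lookup: drift toward midline => higher grade.
    if segment <= 2:
        return 0
    elif segment <= 4:
        return 1
    elif segment <= 6:
        return 2
    return 3
```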

DREAM: A framework for discovering mechanisms underlying AI prediction of protected attributes

Gadgil, S. U., DeGrave, A. J., Janizek, J. D., Xu, S., Nwandu, L., Fonjungo, F., Lee, S.-I., Daneshjou, R.

medrxiv preprint · Jul 21 2025
Recent advances in Artificial Intelligence (AI) have started disrupting the healthcare industry, especially medical imaging, and AI devices are increasingly being deployed into clinical practice. Such classifiers have previously demonstrated the ability to discern a range of protected demographic attributes (like race, age, and sex) from medical images with unexpectedly high performance, a sensitive task which is difficult even for trained physicians. In this study, we motivate and introduce a general explainable AI (XAI) framework called DREAM (DiscoveRing and Explaining AI Mechanisms) for interpreting how AI models trained on medical images predict protected attributes. Focusing on two modalities, radiology and dermatology, we successfully train high-performing classifiers for predicting race from chest X-rays (ROC-AUC score of ~0.96) and sex from dermoscopic lesions (ROC-AUC score of ~0.78). We highlight how incorrect use of these demographic shortcuts can have a detrimental effect on the performance of a clinically relevant downstream task, like disease diagnosis, under a domain shift. Further, we employ various XAI techniques to identify specific signals which can be leveraged to predict sex. Finally, we propose a technique, which we call removal via balancing, to quantify how much a signal contributes to the classification performance. Using this technique and the signals identified, we are able to explain ~15% of the total performance for radiology and ~42% of the total performance for dermatology. We envision DREAM to be broadly applicable to other modalities and demographic attributes. This analysis not only underscores the importance of cautious AI application in healthcare but also opens avenues for improving the transparency and reliability of AI-driven diagnostic tools.
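One plausible reading of removal via balancing is to resample the evaluation set so a candidate signal is statistically independent of the label and then measure the resulting AUC drop. The sketch below implements that reading; it is an assumption, not the authors' exact procedure.

```python
# Sketch: estimate a signal's contribution by balancing it against the label.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_drop_after_balancing(scores, labels, signal, seed=0):
    """scores: model outputs; labels: binary protected attribute; signal: binary signal flag."""
    rng = np.random.default_rng(seed)
    base_auc = roc_auc_score(labels, scores)
    idx = []
    for y in (0, 1):
        # Subsample each (signal, label) cell to the smaller cell size,
        # making the signal independent of the label within the eval set.
        n = min(int(np.sum((signal == v) & (labels == y))) for v in (0, 1))
        for s in (0, 1):
            cell = np.where((signal == s) & (labels == y))[0]
            idx.extend(rng.choice(cell, size=n, replace=False))
    idx = np.array(idx)
    return base_auc - roc_auc_score(labels[idx], scores[idx])
```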