
A Computer Vision and Machine Learning Approach to Classify Views in Distal Radius Radiographs.

Vemu R, Birhiray D, Darwish B, Hollis R, Unnam S, Chilukuri S, Deveza L

PubMed · Aug 17, 2025
Advances in computer vision and machine learning have augmented the ability to analyze orthopedic radiographs. A critical but underexplored component of this process is the accurate classification of radiographic views and localization of relevant anatomical regions, both of which can impact the performance of downstream diagnostic models. This study presents a deep learning object detection model and mobile application designed to classify distal radius radiographs into standard views (anterior-posterior [AP], lateral [LAT], and oblique [OB]) while localizing the anatomical region most relevant to distal radius fractures. A total of 1593 deidentified radiographs were collected from a single institution between 2021 and 2023 (544 AP, 538 LAT, and 521 OB). Each image was annotated using Labellerr software to draw bounding boxes encompassing the region spanning from the second digit metacarpophalangeal (MCP) joint to the distal third of the radius, with annotations verified by an experienced orthopedic surgeon. A YOLOv5 object detection model was fine-tuned using a 70/15/15 train/validation/test split. The model achieved an overall accuracy of 97.3%, with class-specific accuracies of 99% for AP, 100% for LAT, and 93% for OB. Overall precision and recall were 96.8% and 97.5%, respectively. Model performance exceeded the expected accuracy from random guessing (p < 0.001, binomial test). A Streamlit-based mobile application was developed to support clinical deployment. This automated view classification step reduces the feature space by isolating only the relevant anatomy; focusing subsequent models on the targeted region can minimize distraction from irrelevant areas and improve the accuracy of downstream fracture classification models.
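
A minimal sketch of the reported check that classification accuracy exceeds random guessing, assuming SciPy is available; the test-set size below is hypothetical (roughly 15% of the 1593 images), since the abstract does not restate it.

```python
# Hedged sketch: testing whether a 3-class view classifier beats chance,
# as in the binomial test reported above. The counts are illustrative.
from scipy.stats import binomtest

n_test = 239                       # hypothetical test-set size (~15% of 1593)
n_correct = round(0.973 * n_test)  # reported overall accuracy of 97.3%
chance = 1 / 3                     # random guessing over AP / LAT / OB

result = binomtest(n_correct, n_test, p=chance, alternative="greater")
print(f"correct: {n_correct}/{n_test}, p-value vs. chance: {result.pvalue:.2e}")
```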

X-Ray-CoT: Interpretable Chest X-ray Diagnosis with Vision-Language Models via Chain-of-Thought Reasoning

Chee Ng, Liliang Sun, Shaoqing Tang

arXiv preprint · Aug 17, 2025
Chest X-ray imaging is crucial for diagnosing pulmonary and cardiac diseases, yet its interpretation demands extensive clinical experience and suffers from inter-observer variability. While deep learning models offer high diagnostic accuracy, their black-box nature hinders clinical adoption in high-stakes medical settings. To address this, we propose X-Ray-CoT (Chest X-Ray Chain-of-Thought), a novel framework leveraging Large Vision-Language Models (LVLMs) for intelligent chest X-ray diagnosis and interpretable report generation. X-Ray-CoT simulates human radiologists' "chain-of-thought" by first extracting multi-modal features and visual concepts, then employing an LLM-based component with a structured Chain-of-Thought prompting strategy to reason and produce detailed natural language diagnostic reports. Evaluated on the CORDA dataset, X-Ray-CoT achieves competitive quantitative performance, with a Balanced Accuracy of 80.52% and F1 score of 78.65% for disease diagnosis, slightly surpassing existing black-box models. Crucially, it uniquely generates high-quality, explainable reports, as validated by preliminary human evaluations. Our ablation studies confirm the integral role of each proposed component, highlighting the necessity of multi-modal fusion and CoT reasoning for robust and transparent medical AI. This work represents a significant step towards trustworthy and clinically actionable AI systems in medical imaging.
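
As an illustration of the structured prompting idea (not the paper's actual prompt), a hedged sketch of how extracted visual concepts might be assembled into a chain-of-thought prompt for an LVLM; the concept names and prompt wording are hypothetical.

```python
# Hypothetical sketch: assembling extracted visual concepts into a structured
# chain-of-thought prompt for a vision-language model. Concept names and the
# prompt wording are illustrative, not taken from X-Ray-CoT.
def build_cot_prompt(visual_concepts: dict[str, str]) -> str:
    findings = "\n".join(f"- {region}: {concept}"
                         for region, concept in visual_concepts.items())
    return (
        "You are a radiologist reviewing a chest X-ray.\n"
        f"Observed visual concepts:\n{findings}\n\n"
        "Reason step by step:\n"
        "1. Describe each finding and its likely cause.\n"
        "2. Relate the findings to candidate diagnoses.\n"
        "3. State the most likely diagnosis with a confidence level.\n"
        "Then write a structured report (Findings / Impression)."
    )

prompt = build_cot_prompt({
    "right lower lobe": "patchy consolidation",
    "cardiac silhouette": "normal size",
})
print(prompt)
```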

Determination of Skeletal Age From Hand Radiographs Using Deep Learning.

Bram JT, Pareek A, Beber SA, Jones RH, Shariatnia MM, Daliliyazdi A, Tracey OC, Green DW, Fabricant PD

PubMed · Aug 15, 2025
Surgeons treating skeletally immature patients use skeletal age to determine appropriate surgical strategies. Traditional bone age estimation methods utilizing hand radiographs are time-consuming. The aim was to develop highly accurate and reliable deep learning (DL) models for determining skeletal age from hand radiographs. Cohort study. The authors utilized 3 publicly available hand radiograph data sets for model development and validation: (1) the Radiological Society of North America (RSNA) data set, (2) the Radiological Hand Pose Estimation (RHPE) data set, and (3) the Digital Hand Atlas (DHA). All 3 data sets report corresponding sex and skeletal age; the RHPE and DHA also contain chronological age. After image preprocessing, a ConvNeXt model was first trained on the RSNA data set using sex and skeletal age as inputs with 5-fold cross-validation, with subsequent training on the RHPE with the addition of chronological age. Final model validation was performed on the DHA and an institutional data set of 200 images. The first model, trained on the RSNA, achieved a mean absolute error (MAE) of 3.68 months on the RSNA test set and 5.66 months on the DHA. This outperformed the 4.2 months achieved on the RSNA test set by the best model from previous work (12.4% improvement) and the 3.9 months achieved by the open-source software Deeplasia (5.6% improvement). After incorporation of chronological age from the RHPE in model 2, the error on the DHA improved to an MAE of 4.65 months, again surpassing the best previously published models (19.8% improvement). Leveraging newer DL technologies trained on >20,000 hand radiographs across 3 distinct, diverse data sets, this study developed a robust model for predicting bone age. Utilizing features extracted from an RSNA model, combined with chronological age inputs, this model outperforms previous state-of-the-art models when applied to validation data sets. These results indicate that the models provide a highly accurate and reliable platform for clinical use, improving confidence about appropriate surgical selection (e.g., physeal-sparing procedures) and saving time for orthopaedic surgeons and radiologists evaluating skeletal age. Development of an accurate DL model for determining bone age from hand radiographs reduces the time required for age estimation. Additionally, streamlined skeletal age estimation can aid practitioners in determining optimal treatment strategies and may be useful in research settings to decrease workload and improve reporting.
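
A minimal sketch of the mean absolute error (MAE) metric reported above, computed in months on a held-out set; the values below are illustrative placeholders, not study data.

```python
# Minimal sketch: mean absolute error (MAE) in months, the metric reported above.
import numpy as np

true_age_months = np.array([132.0, 96.5, 150.0, 171.2])   # reference skeletal ages
pred_age_months = np.array([128.4, 99.0, 154.1, 168.0])   # model predictions

mae = np.mean(np.abs(pred_age_months - true_age_months))
print(f"MAE: {mae:.2f} months")
```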

Enhancing Diagnostic Accuracy of Fresh Vertebral Compression Fractures With Deep Learning Models.

Li KY, Ye HB, Zhang YL, Huang JW, Li HL, Tian NF

PubMed · Aug 15, 2025
Retrospective study. The study aimed to develop and validate a deep learning model based on X-ray images to accurately diagnose fresh thoracolumbar vertebral compression fractures. In clinical practice, diagnosing fresh vertebral compression fractures often requires MRI. However, due to the scarcity of MRI resources and the high time and economic costs involved, some patients may not receive timely diagnosis and treatment. Using a deep learning model combined with X-rays for diagnostic assistance could potentially serve as an alternative to MRI. X-ray images of suspected thoracolumbar vertebral compression fractures were collected from a municipal shared database between December 2012 and February 2024. Deep learning models were constructed using the EfficientNet, MobileNet, and MnasNet architectures. We conducted a preliminary evaluation of the deep learning models using the validation set. The diagnostic performance of the models was evaluated using metrics such as AUC, accuracy, sensitivity, specificity, F1 score, precision, and the ROC curve. Finally, the deep learning models were compared with evaluations from two spine surgeons of different experience levels on the control set. This study included a total of 3025 lateral X-ray images from 2224 patients. The data set was divided into a training set of 2388 cases, a validation set of 482 cases, and a control set of 155 cases. In the validation set, the three DL models had accuracies of 83.0%, 82.4%, and 82.2%, respectively, and AUC values of 0.861, 0.852, and 0.865, respectively. In the control set, the accuracies of the three DL models were 78.1%, 78.1%, and 80.7%, respectively, all higher than those of the spine surgeons and significantly higher than that of the junior spine surgeon. This study developed deep learning models for detecting fresh vertebral compression fractures, demonstrating high accuracy.
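
A hedged sketch of how the binary-classification metrics named above (AUC, accuracy, sensitivity, specificity) are commonly computed with scikit-learn; the labels and scores below are placeholders, not study data.

```python
# Sketch: computing the evaluation metrics named above with scikit-learn.
# y_true / y_score are illustrative placeholders (1 = fresh fracture).
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.22, 0.67, 0.80, 0.35, 0.48, 0.42, 0.10])
y_pred  = (y_score >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_score)
acc = accuracy_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f} accuracy={acc:.3f} "
      f"sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```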

Development of a deep learning algorithm for radiographic detection of syndesmotic instability in ankle fractures with intraoperative validation.

Kubach J, Pogarell T, Uder M, Perl M, Betsch M, Pasurka M, Söllner S, Heiss R

PubMed · Aug 14, 2025
Identifying syndesmotic instability in ankle fractures using conventional radiographs is still a major challenge. In this study we trained a convolutional neural network (CNN) to classify fractures according to the AO classification (AO-44 A/B/C) and to simultaneously detect syndesmotic instability in conventional radiographs, leveraging intraoperative stress testing as the gold standard. In this retrospective exploratory study we identified 700 patients with rotational ankle fractures at a university hospital from 2019 to 2024, from whom 1588 digital radiographs were extracted to train, validate, and test a CNN. Radiographs were classified based on the therapy-decisive gold standard of the intraoperative hook test and the preoperatively determined AO classification from the surgical report. To perform internal validation and quality control, the algorithm results were visualized using Guided Score Class activation maps (GSCAM). The AO-44 classification sensitivity over all subclasses was 91%. Furthermore, syndesmotic instability could be identified with a sensitivity of 0.84 (95% confidence interval [CI] 0.78, 0.92) and a specificity of 0.80 (95% CI 0.67, 0.90). Consistent visualization results were obtained from the GSCAMs. The integration of an explainable deep learning algorithm trained on an intraoperative gold standard showed a sensitivity of 0.84 for syndesmotic stability testing. By providing clinically interpretable outputs, it suggests potential for enhanced preoperative decision-making in complex ankle trauma.
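
A minimal sketch of deriving sensitivity and specificity with 95% confidence intervals, as reported above, from count data; the counts are placeholders, and statsmodels' Wilson interval is used here as one common choice (the paper's exact CI method is not restated).

```python
# Sketch: sensitivity/specificity with 95% Wilson confidence intervals.
# tp/fn/tn/fp counts are illustrative placeholders, not study data.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 42, 8     # instability cases: detected / missed
tn, fp = 48, 12    # stable cases: correctly ruled out / false alarms

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}, {sens_ci[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}, {spec_ci[1]:.2f})")
```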

GNN-based Unified Deep Learning

Furkan Pala, Islem Rekik

arXiv preprint · Aug 14, 2025
Deep learning models often struggle to maintain generalizability in medical imaging, particularly under domain-fracture scenarios where distribution shifts arise from varying imaging techniques, acquisition protocols, patient populations, demographics, and equipment. In practice, each hospital may need to train distinct models - differing in learning task, width, and depth - to match local data. For example, one hospital may use Euclidean architectures such as MLPs and CNNs for tabular or grid-like image data, while another may require non-Euclidean architectures such as graph neural networks (GNNs) for irregular data like brain connectomes. How to train such heterogeneous models coherently across datasets, while enhancing each model's generalizability, remains an open problem. We propose unified learning, a new paradigm that encodes each model into a graph representation, enabling unification in a shared graph learning space. A GNN then guides optimization of these unified models. By decoupling parameters of individual models and controlling them through a unified GNN (uGNN), our method supports parameter sharing and knowledge transfer across varying architectures (MLPs, CNNs, GNNs) and distributions, improving generalizability. Evaluations on MorphoMNIST and two MedMNIST benchmarks - PneumoniaMNIST and BreastMNIST - show that unified learning boosts performance when models are trained on unique distributions and tested on mixed ones, demonstrating strong robustness to unseen data with large distribution shifts. Code and benchmarks: https://github.com/basiralab/uGNN
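
As a rough illustration of the "model as graph" idea (not the authors' actual encoding), a hedged sketch that turns a small MLP's neurons into graph nodes and its weights into edge attributes using PyTorch Geometric.

```python
# Hedged sketch: encoding an MLP's parameters as a graph, one plausible reading
# of "unifying models in a shared graph learning space". Node/edge layout is
# illustrative; the uGNN paper's actual encoding may differ.
import torch
import torch.nn as nn
from torch_geometric.data import Data

mlp = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

edges, edge_weights, offset = [], [], 0
sizes = [4, 8, 2]                      # neurons per layer
for layer in [mlp[0], mlp[2]]:         # the two Linear layers
    w = layer.weight.detach()          # shape: (out_features, in_features)
    in_off, out_off = offset, offset + w.shape[1]
    for j in range(w.shape[1]):        # source neuron (input side)
        for i in range(w.shape[0]):    # target neuron (output side)
            edges.append([in_off + j, out_off + i])
            edge_weights.append(w[i, j])
    offset = out_off

graph = Data(
    x=torch.ones(sum(sizes), 1),                       # trivial node features
    edge_index=torch.tensor(edges, dtype=torch.long).t().contiguous(),
    edge_attr=torch.stack(edge_weights).unsqueeze(1),  # weights as edge features
)
print(graph)  # e.g. Data(x=[14, 1], edge_index=[2, 48], edge_attr=[48, 1])
```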

Beam Hardening Correction in Clinical X-ray Dark-Field Chest Radiography using Deep Learning-Based Bone Segmentation

Lennard Kaster, Maximilian E. Lochschmidt, Anne M. Bauer, Tina Dorosti, Sofia Demianova, Thomas Koehler, Daniela Pfeiffer, Franz Pfeiffer

arXiv preprint · Aug 14, 2025
Dark-field radiography is a novel X-ray imaging modality that enables complementary diagnostic information by visualizing the microstructural properties of lung tissue. Implemented via a Talbot-Lau interferometer integrated into a conventional X-ray system, it allows simultaneous acquisition of perfectly temporally and spatially registered attenuation-based conventional and dark-field radiographs. Recent clinical studies have demonstrated that dark-field radiography outperforms conventional radiography in diagnosing and staging pulmonary diseases. However, the polychromatic nature of medical X-ray sources leads to beam-hardening, which introduces structured artifacts in the dark-field radiographs, particularly from osseous structures. This so-called beam-hardening-induced dark-field signal is an artificial dark-field signal and causes undesired cross-talk between attenuation and dark-field channels. This work presents a segmentation-based beam-hardening correction method using deep learning to segment ribs and clavicles. Attenuation contribution masks derived from dual-layer detector computed tomography data, decomposed into aluminum and water, were used to refine the material distribution estimation. The method was evaluated both qualitatively and quantitatively on clinical data from healthy subjects and patients with chronic obstructive pulmonary disease and COVID-19. The proposed approach reduces bone-induced artifacts and improves the homogeneity of the lung dark-field signal, supporting more reliable visual and quantitative assessment in clinical dark-field chest radiography.
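
A deliberately simplified sketch of the general idea of suppressing a bone-induced contribution inside a segmented bone mask; the linear scaling model below is a placeholder assumption, whereas the paper derives material-specific corrections from dual-layer CT attenuation contribution masks.

```python
# Deliberately simplified sketch of segmentation-based artifact suppression:
# subtract an estimated bone-induced dark-field contribution inside a bone mask.
# The linear coupling factor k is a placeholder assumption, not the paper's model.
import numpy as np

def correct_darkfield(darkfield, attenuation, bone_mask, k=0.3):
    """darkfield, attenuation: 2D arrays; bone_mask: boolean array from the
    deep learning rib/clavicle segmentation; k: hypothetical coupling factor."""
    artificial = np.zeros_like(darkfield)
    artificial[bone_mask] = k * attenuation[bone_mask]   # crude beam-hardening estimate
    return np.clip(darkfield - artificial, 0.0, None)

# Illustrative call with random data standing in for real radiographs.
rng = np.random.default_rng(0)
df, att = rng.random((256, 256)), rng.random((256, 256))
mask = att > 0.8
corrected = correct_darkfield(df, att, mask)
```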

Comparative evaluation of CAM methods for enhancing explainability in veterinary radiography.

Dusza P, Banzato T, Burti S, Bendazzoli M, Müller H, Wodzinski M

PubMed · Aug 13, 2025
Explainable Artificial Intelligence (XAI) encompasses a broad spectrum of methods that aim to enhance the transparency of deep learning models, with Class Activation Mapping (CAM) methods widely used for visual interpretability. However, systematic evaluations of these methods in veterinary radiography remain scarce. This study presents a comparative analysis of eleven CAM methods, including GradCAM, XGradCAM, ScoreCAM, and EigenCAM, on a dataset of 7362 canine and feline X-ray images. A ResNet18 model was chosen based on the specific characteristics of the dataset and preliminary results in which it outperformed other models. Quantitative and qualitative evaluations were performed to determine how well each CAM method produced interpretable heatmaps relevant to clinical decision-making. Among the techniques evaluated, EigenGradCAM achieved the highest mean score of 2.571 (standard deviation [SD] = 1.256), closely followed by EigenCAM at 2.519 (SD = 1.228) and GradCAM++ at 2.512 (SD = 1.277), with methods such as FullGrad and XGradCAM achieving the lowest scores of 2.000 (SD = 1.300) and 1.858 (SD = 1.198), respectively. Despite variations in saliency visualization, no single method universally improved veterinarians' diagnostic confidence. While certain CAM methods provide better visual cues for some pathologies, they generally offered limited explainability and did not substantially improve veterinarians' diagnostic confidence.
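
A hedged sketch of how such CAM heatmaps are typically produced with the pytorch-grad-cam package; the target layer, class index, and input preprocessing are illustrative choices, and the package API varies slightly between versions.

```python
# Sketch: generating Grad-CAM and EigenCAM heatmaps for a ResNet18 classifier
# with the pytorch-grad-cam package. Target layer, class index, and input are
# illustrative placeholders, not the study's configuration.
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM, EigenCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet18(weights=None)               # stand-in for the trained radiograph model
model.eval()
target_layers = [model.layer4[-1]]           # last convolutional block
input_tensor = torch.randn(1, 3, 224, 224)   # placeholder preprocessed radiograph

for cam_cls in (GradCAM, EigenCAM):
    cam = cam_cls(model=model, target_layers=target_layers)
    heatmap = cam(input_tensor=input_tensor,
                  targets=[ClassifierOutputTarget(0)])  # class 0 as an example
    print(cam_cls.__name__, heatmap.shape)              # (1, 224, 224)
```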

PPEA: Personalized positioning and exposure assistant based on multi-task shared pose estimation transformer.

Zhao J, Liu J, Yang C, Tang H, Chen Y, Zhang Y

PubMed · Aug 13, 2025
Hand and foot digital radiography (DR) is an indispensable tool in medical imaging, with varying diagnostic requirements necessitating different hand and foot positionings. Accurate positioning is crucial for obtaining diagnostically valuable images, and adjusting exposure parameters such as the exposure area based on patient conditions helps minimize the likelihood of image retakes. To achieve these objectives, we propose a personalized positioning and exposure assistant capable of automatically recognizing hand and foot positionings and recommending appropriate exposure parameters. The assistant comprises three modules: (1) the Progressive Iterative Hand-Foot Tracker (PIHFT), which iteratively locates hands or feet in RGB images, providing the foundation for accurate pose estimation; (2) the Multi-Task Shared Pose Estimation Transformer (MTSPET), a Transformer-based model comprising hand and foot estimation branches with similar network architectures that share a common backbone; MTSPET outperformed MediaPipe in the hand pose estimation task and successfully transferred this capability to the foot pose estimation task; and (3) the Domain Expertise-embedded Positioning and Exposure Assistant (DEPEA), which combines the key-point coordinates of hands and feet with specific positioning and exposure parameter requirements and is capable of checking patient positioning and inferring exposure areas and Regions of Interest (ROIs) for Digital Automatic Exposure Control (DAEC). Additionally, two datasets were collected and used to train MTSPET. A preliminary clinical trial showed strong agreement between PPEA's outputs and manual annotations, indicating the system's effectiveness in typical clinical scenarios. The contributions of this study lay the foundation for personalized, patient-specific imaging strategies, ultimately enhancing diagnostic outcomes and minimizing the risk of errors in clinical settings.
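
A hypothetical sketch of the final step, inferring an exposure region from detected keypoints: a padded bounding box around the keypoints stands in for the assistant's exposure-area logic, which in the paper also embeds positioning-specific domain rules.

```python
# Hypothetical sketch: deriving a rectangular exposure region from detected
# hand/foot keypoints as a padded bounding box. The padding value and keypoint
# layout are illustrative; DEPEA additionally applies positioning-specific rules.
import numpy as np

def exposure_roi(keypoints: np.ndarray, pad: float = 0.15):
    """keypoints: (N, 2) array of (x, y) in pixels; pad: fractional margin."""
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)
    dx, dy = (x_max - x_min) * pad, (y_max - y_min) * pad
    return (x_min - dx, y_min - dy, x_max + dx, y_max + dy)

kps = np.array([[120, 80], [200, 95], [180, 240], [140, 260]], dtype=float)
print(exposure_roi(kps))  # (x0, y0, x1, y1) of the suggested exposure area
```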

The Role of Radiographic Knee Alignment in Knee Replacement Outcomes and Opportunities for Artificial Intelligence-Driven Assessment

Zhisen Hu, David S. Johnson, Aleksei Tiulpin, Timothy F. Cootes, Claudia Lindner

arXiv preprint · Aug 13, 2025
Knee osteoarthritis (OA) is prevalent, imposes a substantial burden on health systems, and has no cure; its end-stage treatment is total knee replacement (TKR). Complications from surgery and recovery are difficult to predict in advance, and numerous factors may affect them. Radiographic knee alignment is one of the key factors influencing TKR outcomes such as postoperative pain and function. Recently, artificial intelligence (AI) has been introduced into the automatic analysis of knee radiographs, for example, to automate knee alignment measurements. Existing review articles tend to focus on knee OA diagnosis and segmentation of bones or cartilage in MRI rather than exploring knee alignment biomarkers for TKR outcomes and their assessment. In this review, we first examine current scoring protocols for evaluating TKR outcomes and potential knee alignment biomarkers associated with these outcomes. We then discuss existing AI-based approaches for generating knee alignment biomarkers from knee radiographs and explore future directions for knee alignment assessment and TKR outcome prediction.