
External validation of an artificial intelligence tool for fracture detection in children with osteogenesis imperfecta: a multireader study.

Pauling C, Laidlow-Singh H, Evans E, Garbera D, Williamson R, Fernando R, Thomas K, Martin H, Arthurs OJ, Shelmerdine SC

PubMed · Jul 7 2025
To determine the performance of a commercially available AI tool for fracture detection when used in children with osteogenesis imperfecta (OI). All appendicular and pelvic radiographs from 48 patients attending an OI clinic at a single centre were included. Seven radiologists evaluated anonymised images in two rounds, first without and then with AI assistance, and differences in diagnostic accuracy between the rounds were analysed. The 48 patients (mean age 12 years) provided 336 images containing 206 fractures, established by consensus opinion of two radiologists. The AI tool achieved a per-examination accuracy of 74.8% [95% CI: 65.4%, 82.7%], compared with an average unassisted radiologist accuracy of 83.4% [95% CI: 75.2%, 89.8%]. AI assistance improved average radiologist accuracy per examination to 90.7% [95% CI: 83.5%, 95.4%]. The AI tool produced more false negatives than the radiologists, missing 80 fractures versus 41. When radiologists altered their original per-image decision, it was usually (74.6%) to agree with the AI; 82.8% of these changes led to a correct result, and 64.0% converted a false positive into a true negative. Despite inferior standalone performance, AI assistance can still improve radiologist fracture detection in a rare-disease paediatric population, chiefly by reducing false positives. Future studies focusing on the real-world application of AI tools in a larger population of children with bone fragility disorders will help evaluate whether these improvements in accuracy translate into improved patient outcomes. Question: How well does a commercially available artificial intelligence (AI) tool identify fractures on appendicular radiographs of children with osteogenesis imperfecta (OI), and can it also improve radiologists' identification of fractures in this population? Findings: Specialist radiologists outperformed the AI fracture detection tool when acting alone; however, their diagnostic performance improved with AI assistance. Clinical relevance: AI assistance improves specialist radiologist fracture detection in children with OI even though the AI alone performs worse than the radiologists alone, because the AI moderates the number of false positives generated by the radiologists.
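
As a rough illustration of how the per-examination accuracies and 95% confidence intervals reported in reader studies like this one can be computed, the sketch below applies the Wilson score method to invented counts; the examination total, the correct count, and the choice of interval method are all assumptions, not the study's data:

```python
from math import sqrt

def wilson_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (e.g. accuracy)."""
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

n_exams = 107       # assumed examination count, for demonstration only
ai_correct = 80     # assumed count yielding roughly 74.8% accuracy
lo, hi = wilson_ci(ai_correct, n_exams)
print(f"AI accuracy: {ai_correct / n_exams:.1%} [95% CI: {lo:.1%}, {hi:.1%}]")
```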

SV-DRR: High-Fidelity Novel View X-Ray Synthesis Using Diffusion Model

Chun Xie, Yuichi Yoshii, Itaru Kitahara

arXiv preprint · Jul 7 2025
X-ray imaging is a rapid and cost-effective tool for visualizing internal human anatomy. While multi-view X-ray imaging provides complementary information that enhances diagnosis, intervention, and education, acquiring images from multiple angles increases radiation exposure and complicates clinical workflows. To address these challenges, we propose a novel view-conditioned diffusion model for synthesizing multi-view X-ray images from a single view. Unlike prior methods, which are limited in angular range, resolution, and image quality, our approach leverages the Diffusion Transformer to preserve fine details and employs a weak-to-strong training strategy for stable high-resolution image generation. Experimental results demonstrate that our method generates higher-resolution outputs with improved control over viewing angles. This capability has significant implications not only for clinical applications but also for medical education and dataset expansion, enabling the creation of diverse, high-quality datasets for training and analysis. Our code is available on GitHub.
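
A minimal PyTorch sketch of the kind of view conditioning the abstract describes, assuming an adaLN-style diffusion-transformer block; every module name, dimension, and the (yaw, pitch, roll) parameterisation are illustrative assumptions, not the SV-DRR implementation:

```python
import torch
import torch.nn as nn

class ViewConditionedBlock(nn.Module):
    def __init__(self, dim: int = 384, heads: int = 6):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mod = nn.Linear(dim, 2 * dim)  # scale and shift from conditioning

    def forward(self, x, cond):
        scale, shift = self.mod(cond).unsqueeze(1).chunk(2, dim=-1)
        h = self.norm(x) * (1 + scale) + shift  # condition via modulation
        out, _ = self.attn(h, h, h)
        return x + out

dim = 384
view_embed = nn.Sequential(nn.Linear(3, dim), nn.SiLU(), nn.Linear(dim, dim))
t_embed = nn.Embedding(1000, dim)            # diffusion timestep embedding

tokens = torch.randn(2, 256, dim)            # noisy image patches as tokens
angles = torch.randn(2, 3)                   # target view as (yaw, pitch, roll)
t = torch.randint(0, 1000, (2,))
cond = view_embed(angles) + t_embed(t)       # joint view + timestep condition
x = ViewConditionedBlock(dim)(tokens, cond)
print(x.shape)                               # torch.Size([2, 256, 384])
```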

Bridging Vision and Language: Optimal Transport-Driven Radiology Report Generation via LLMs

Haifeng Zhao, Yufei Zhang, Leilei Ma, Shuo Xu, Dengdi Sun

arXiv preprint · Jul 5 2025
Radiology report generation represents a significant application of medical AI and has achieved impressive results. Concurrently, large language models (LLMs) have demonstrated remarkable performance across various domains. However, empirical validation indicates that general LLMs tend to prioritize linguistic fluency over clinical effectiveness and fail to capture the relationship between X-ray images and their corresponding reports, resulting in poor clinical practicability. To address these challenges, we propose Optimal Transport-Driven Radiology Report Generation (OTDRG), a novel framework that leverages Optimal Transport (OT) to align image features with disease labels extracted from reports, effectively bridging the cross-modal gap. The core component of OTDRG is Alignment & Fine-Tuning, in which OT uses the encoded label features and image visual features to minimize cross-modal distances, and then integrates image and text features for LLM fine-tuning. Additionally, we design a novel disease prediction module to predict the disease labels contained in X-ray images during validation and testing. Evaluated on the MIMIC-CXR and IU X-Ray datasets, OTDRG achieves state-of-the-art performance in both natural language generation (NLG) and clinical efficacy (CE) metrics, delivering reports that are not only linguistically coherent but also clinically accurate.
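
To make the OT alignment idea concrete, here is a self-contained Sinkhorn sketch that treats the entropic transport cost between image and label features as an alignment loss; the shapes, cosine cost, and hyperparameters are assumptions rather than OTDRG's actual design:

```python
import torch
import torch.nn.functional as F

def sinkhorn_cost(img, txt, eps: float = 0.1, iters: int = 50) -> torch.Tensor:
    """Entropic OT cost between two uniform point clouds of features."""
    # Cosine-distance cost matrix between image and label features.
    cost = 1 - F.normalize(img, dim=-1) @ F.normalize(txt, dim=-1).T
    n, m = cost.shape
    a, b = torch.full((n,), 1 / n), torch.full((m,), 1 / m)  # uniform marginals
    K = torch.exp(-cost / eps)
    u = torch.ones(n)
    for _ in range(iters):                                   # Sinkhorn updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]                       # transport plan
    return (plan * cost).sum()                               # alignment loss

img_feats = torch.randn(196, 512)   # e.g. ViT patch features of an X-ray
lbl_feats = torch.randn(14, 512)    # e.g. encoded disease-label features
loss = sinkhorn_cost(img_feats, lbl_feats)
print(loss.item())
```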

Improving prediction of fragility fractures in postmenopausal women using random forest.

Mateo J, Usategui-Martín R, Torres AM, Campillo-Sánchez F, de Temiño ÁR, Gil J, Martín-Millán M, Hernandez JL, Pérez-Castrillón JL

PubMed · Jul 5 2025
Osteoporosis is a chronic disease characterized by a progressive decline in bone density and quality, leading to increased bone fragility and a higher susceptibility to fractures, even in response to minimal trauma. Osteoporotic fractures represent a major source of morbidity and mortality among postmenopausal women. This condition poses both clinical and societal challenges, as its consequences include a significant reduction in quality of life, prolonged dependency, and a substantial increase in healthcare costs. Therefore, the development of reliable tools for predicting fracture risk is essential for the effective management of affected patients. In this study, we developed a predictive model based on the Random Forest (RF) algorithm for risk stratification of fragility fractures, integrating clinical, demographic, and imaging variables derived from dual-energy X-ray absorptiometry (DXA) and 3D modeling. Two independent cohorts were analyzed: the HURH cohort and the Camargo cohort, enabling both internal and external validation of the model. The results showed that the RF model consistently outperformed other classification algorithms, including k-nearest neighbors (KNN), support vector machines (SVM), decision trees (DT), and Gaussian naive Bayes (GNB), demonstrating high accuracy, sensitivity, specificity, area under the ROC curve (AUC), and Matthews correlation coefficient (MCC). Additionally, variable importance analysis highlighted that previous fracture history, parathyroid hormone (PTH) levels, and lumbar spine T-score, along with other densitometric parameters, were key predictors of fracture risk. These findings suggest that the integration of advanced machine learning techniques with clinical and imaging data can optimize early identification of high-risk patients, enabling personalized preventive strategies and improving the clinical management of osteoporosis.
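
A minimal scikit-learn sketch of the modelling setup described above, comparing a Random Forest against two of the named baselines on accuracy, AUC, and MCC; the synthetic data is a placeholder for the HURH and Camargo cohorts:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, matthews_corrcoef

# Synthetic stand-in for clinical + DXA features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"auc={roc_auc_score(y_te, proba):.3f}",
          f"mcc={matthews_corrcoef(y_te, pred):.3f}")

# Variable importance, analogous to the study's predictor ranking.
print(models["RF"].feature_importances_[:5])
```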

Quantifying features from X-ray images to assess early stage knee osteoarthritis.

Helaly T, Faisal TR, Moni ASB, Naznin M

PubMed · Jul 5 2025
Knee osteoarthritis (KOA) is a progressive degenerative joint disease and a leading cause of disability worldwide. Manual diagnosis of KOA from X-ray images is subjective and prone to inter- and intra-observer variability, making early detection challenging. While deep learning (DL)-based models offer automation, they often require large labeled datasets, lack interpretability, and do not provide quantitative feature measurements. Our study presents an automated KOA severity assessment system that integrates a pretrained DL model with image processing techniques to extract and quantify key KOA imaging biomarkers. The pipeline includes contrast limited adaptive histogram equalization (CLAHE) for contrast enhancement, DexiNed-based edge extraction, and thresholding for noise reduction. We design customized algorithms that automatically detect and quantify joint space narrowing (JSN) and osteophytes from the extracted edges. The proposed model quantitatively assesses JSN and finds the number of intercondylar osteophytes, contributing to severity classification. The system achieves accuracies of 88% for JSN detection, 80% for osteophyte identification, and 73% for KOA classification. Its key strength lies in eliminating the need for any expensive training process and, consequently, the dependency on labeled data except for validation. Additionally, it provides quantitative data that can support classification in other OA grading frameworks.
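
A short OpenCV sketch of the classical stages named in the pipeline (CLAHE enhancement, edge extraction, thresholding); Canny stands in for the learned DexiNed detector, and the file names and parameters are placeholders:

```python
import cv2

img = cv2.imread("knee_xray.png", cv2.IMREAD_GRAYSCALE)

# Contrast limited adaptive histogram equalization (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Edge extraction (classical Canny as a stand-in for DexiNed).
edges = cv2.Canny(enhanced, threshold1=50, threshold2=150)

# Thresholding to suppress weak, noisy edge responses.
_, edges_clean = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)

cv2.imwrite("edges_clean.png", edges_clean)
```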

ChestGPT: Integrating Large Language Models and Vision Transformers for Disease Detection and Localization in Chest X-Rays

Shehroz S. Khan, Petar Przulj, Ahmed Ashraf, Ali Abedi

arXiv preprint · Jul 4 2025
The global demand for radiologists is increasing rapidly due to a growing reliance on medical imaging services, while the supply of radiologists is not keeping pace. Advances in computer vision and image processing technologies present significant potential to address this gap by enhancing radiologists' capabilities and improving diagnostic accuracy. Large language models (LLMs), particularly generative pre-trained transformers (GPTs), have become the primary approach for understanding and generating textual data. In parallel, vision transformers (ViTs) have proven effective at converting visual data into a format that LLMs can process efficiently. In this paper, we present ChestGPT, a deep-learning framework that integrates the EVA ViT with the Llama 2 LLM to classify diseases and localize regions of interest in chest X-ray images. The ViT converts X-ray images into tokens, which are then fed, together with engineered prompts, into the LLM, enabling joint classification and localization of diseases. This approach incorporates transfer learning techniques to enhance both explainability and performance. The proposed method achieved strong global disease classification performance on the VinDr-CXR dataset, with an F1 score of 0.76, and successfully localized pathologies by generating bounding boxes around the regions of interest. We also outline several task-specific prompts, in addition to general-purpose prompts, for scenarios radiologists might encounter. Overall, this framework offers an assistive tool that can lighten radiologists' workload by providing preliminary findings and regions of interest to facilitate their diagnostic process.
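
A schematic sketch of the ViT-to-LLM coupling the abstract describes, in which projected patch embeddings are prepended to prompt embeddings (a common vision-language recipe); the hidden sizes and token counts are assumptions, not ChestGPT's actual configuration:

```python
import torch
import torch.nn as nn

vit_dim, llm_dim = 1024, 4096      # assumed EVA-ViT and Llama 2 hidden sizes
projector = nn.Linear(vit_dim, llm_dim)

patch_feats = torch.randn(1, 257, vit_dim)   # [CLS] + 16x16 patch tokens
visual_tokens = projector(patch_feats)       # now in LLM embedding space

prompt_embeds = torch.randn(1, 32, llm_dim)  # embedded engineered prompt
llm_input = torch.cat([visual_tokens, prompt_embeds], dim=1)
print(llm_input.shape)                       # torch.Size([1, 289, 4096])
```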

A tailored deep learning approach for early detection of oral cancer using a 19-layer CNN on clinical lip and tongue images.

Liu P, Bagi K

PubMed · Jul 4 2025
Early and accurate detection of oral cancer plays a pivotal role in improving patient outcomes. This research introduces a custom-designed, 19-layer convolutional neural network (CNN) for the automated diagnosis of oral cancer using clinical images of the lips and tongue. The methodology integrates advanced preprocessing steps, including min-max normalization and histogram-based contrast enhancement, to optimize image features critical for reliable classification. The model is extensively validated on the publicly available Oral Cancer (Lips and Tongue) Images (OCI) dataset, which is divided into 80% training and 20% testing subsets. Comprehensive performance evaluation employs established metrics-accuracy, sensitivity, specificity, precision, and F1-score. Our CNN architecture achieved an accuracy of 99.54%, sensitivity of 95.73%, specificity of 96.21%, precision of 96.34%, and F1-score of 96.03%, demonstrating substantial improvements over prominent transfer learning benchmarks, including SqueezeNet, AlexNet, Inception, VGG19, and ResNet50, all tested under identical experimental protocols. The model's robust performance, efficient computation, and high reliability underline its practicality for clinical application and support its superiority over existing approaches. This study provides a reproducible pipeline and a new reference point for deep learning-based oral cancer detection, facilitating translation into real-world healthcare environments and promising enhanced diagnostic confidence.
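
A small sketch of the two preprocessing steps named above, min-max normalization and histogram-based contrast enhancement; the file names are placeholders and the paper's exact settings are not reproduced here:

```python
import cv2
import numpy as np

img = cv2.imread("lesion.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Min-max normalization to the [0, 1] range.
norm = (img - img.min()) / (img.max() - img.min() + 1e-8)

# Histogram equalization for contrast enhancement (expects 8-bit input).
equalized = cv2.equalizeHist((norm * 255).astype(np.uint8))

cv2.imwrite("preprocessed.png", equalized)
```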

ViT-GCN: A Novel Hybrid Model for Accurate Pneumonia Diagnosis from X-ray Images.

Xu N, Wu J, Cai F, Li X, Xie HB

PubMed · Jul 4 2025
This study aims to enhance the accuracy of pneumonia diagnosis from X-ray images by developing a model that integrates Vision Transformer (ViT) and Graph Convolutional Networks (GCN) for improved feature extraction and diagnostic performance. The ViT-GCN model was designed to leverage the strengths of both ViT, which captures global image information by dividing the image into fixed-size patches and processing them in sequence, and GCN, which captures node features and relationships through message passing and aggregation in graph data. A composite loss function combining multivariate cross-entropy, focal loss, and GHM loss was introduced to address dataset imbalance and improve training efficiency on small datasets. The ViT-GCN model demonstrated superior performance, achieving an accuracy of 91.43% on the COVID-19 chest X-ray database, surpassing existing models in diagnostic accuracy for pneumonia. The study highlights the effectiveness of combining ViT and GCN architectures in medical image diagnosis, particularly in addressing challenges related to small datasets. This approach can lead to more accurate and efficient pneumonia diagnoses, especially in resource-constrained settings where small datasets are common.
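
A sketch of a composite classification loss in the spirit described, combining cross-entropy with a focal term (the GHM component is omitted for brevity); the weights and gamma are assumptions, not the paper's values:

```python
import torch
import torch.nn.functional as F

def composite_loss(logits, targets, gamma: float = 2.0,
                   w_ce: float = 1.0, w_focal: float = 1.0) -> torch.Tensor:
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                 # probability of the true class
    focal = ((1 - p_t) ** gamma) * ce    # down-weight easy examples
    return w_ce * ce.mean() + w_focal * focal.mean()

logits = torch.randn(8, 3, requires_grad=True)  # 3 classes, e.g. normal /
targets = torch.randint(0, 3, (8,))             # viral / bacterial pneumonia
loss = composite_loss(logits, targets)
loss.backward()
print(loss.item())
```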

Dual-Branch Attention Fusion Network for Pneumonia Detection.

Li T, Li B, Zheng C

PubMed · Jul 4 2025
Pneumonia, a serious respiratory disease caused by bacterial, viral or fungal infections, is an important cause of morbidity and mortality in high-risk populations worldwide (e.g. the elderly, infants and young children, and immunodeficient patients). Early diagnosis is decisive for improving patient prognosis. In this study, we propose a Dual-Branch Attention Fusion Network based on transfer learning, aiming to improve the accuracy of pneumonia classification in chest X-ray images. The model adopts a dual-branch feature extraction architecture: independent feature extraction paths are built on pre-trained convolutional neural networks (CNNs) and structured state space models, respectively, and their complementary features are combined through a fusion strategy. In the fusion stage, a self-attention mechanism dynamically weights the feature representations of the two paths, improving the characterisation of key lesion regions. Experiments on the publicly available ChestX-ray dataset, with data augmentation, transfer learning optimisation and hyperparameter tuning, yield an accuracy of 97.78% on an independent test set. These results demonstrate the model's strong performance in pneumonia diagnosis, providing a rapid and accurate diagnostic aid for clinical practice and a high-performance computational framework for early pneumonia screening; its multipath, attention-fusion design can also serve as a methodological reference for other medical image analysis tasks.
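
A minimal sketch of the dual-branch fusion idea: two branch feature vectors are treated as a two-token sequence, dynamically weighted by self-attention, then pooled for classification. Both backbones are stubbed with linear layers and all sizes are assumptions; the state-space branch is not reproduced here:

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, dim: int = 256, n_classes: int = 2):
        super().__init__()
        self.branch_cnn = nn.Linear(512, dim)  # stand-in for the CNN encoder
        self.branch_ssm = nn.Linear(512, dim)  # stand-in for the SSM encoder
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feat_a, feat_b):
        tokens = torch.stack(
            [self.branch_cnn(feat_a), self.branch_ssm(feat_b)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)  # dynamic branch weighting
        return self.head(fused.mean(dim=1))           # pool and classify

model = DualBranchFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 512))
print(logits.shape)   # torch.Size([4, 2])
```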

Progression risk of adolescent idiopathic scoliosis based on SHAP-Explained machine learning models: a multicenter retrospective study.

Fang X, Weng T, Zhang Z, Gong W, Zhang Y, Wang M, Wang J, Ding Z, Lai C

PubMed · Jul 4 2025
To develop an interpretable machine learning model, explained using SHAP and based on imaging features of adolescent idiopathic scoliosis extracted by convolutional neural networks (CNNs), to predict the risk of curve progression and identify the most accurate predictive model. This study included 233 patients with adolescent idiopathic scoliosis from three medical centers. CNNs were used to extract features from full-spine coronal X-ray images taken at three follow-up points for each patient. Imaging and clinical features from center 1 were analyzed using the Boruta algorithm to identify independent predictors. Data from center 1 were divided into training (80%) and testing (20%) sets, while data from centers 2 and 3 were used as external validation sets. Six machine learning models were constructed. Receiver operating characteristic (ROC) curves were plotted, and model performance was assessed by calculating the area under the curve (AUC), accuracy, sensitivity, and specificity in the training, testing, and external validation sets. The SHAP interpreter was used to analyze the most effective model. The six models yielded AUCs ranging from 0.565 to 0.989, accuracies from 0.600 to 0.968, sensitivities from 0.625 to 1.0, and specificities from 0.571 to 0.974. The XGBoost model achieved the best performance, with an AUC of 0.896 in the external validation set. SHAP analysis identified the change in the main Cobb angle between the second and first follow-ups [Cobb1(2−1)] as the most important predictor, followed by the main Cobb angle at the second follow-up (Cobb1-2) and the change in the secondary Cobb angle [Cobb2(2−1)]. The XGBoost model demonstrated the best predictive performance in the external validation cohort, confirming its preliminary stability and generalizability. SHAP analysis indicated that Cobb1(2−1) was the most important feature for predicting scoliosis progression. This model offers a valuable tool for clinical decision-making by enabling early identification of high-risk patients and supporting early intervention strategies through automated feature extraction and interpretable analysis. The online version contains supplementary material available at 10.1186/s12891-025-08841-3.
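
A brief sketch of the final modelling step, fitting an XGBoost classifier and ranking features by mean absolute SHAP value; the synthetic data and placeholder feature names only mimic the study's Cobb-angle variables:

```python
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
features = ["Cobb1(2-1)", "Cobb1-2", "Cobb2(2-1)"]   # placeholder names

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# Per-feature contributions, analogous to the study's SHAP ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(dict(zip(features, np.abs(shap_values).mean(axis=0))))  # mean |SHAP|
```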