Page 29 of 35341 results

A dataset for quality evaluation of pelvic X-ray and diagnosis of developmental dysplasia of the hip.

Qi G, Jiao X, Li J, Qin C, Li X, Sun Z, Zhao Y, Jiang R, Zhu Z, Zhao G, Yu G

pubmed logopapers May 26 2025
Developmental dysplasia of the hip (DDH) is one of the most common hip disorders in pediatric orthopedics. Automated diagnostic tools driven by artificial intelligence can provide substantial assistance to clinicians in diagnosing DDH. We developed a dataset, Multitasking DDH (MTDDH), composed of two sub-datasets. Dataset 1 comprises 1,250 pelvic X-ray images annotated with four discrete regions for evaluating pelvic X-ray quality, together with eight key points supporting DDH diagnosis. Dataset 2 contains 906 pelvic X-ray images, each annotated with the same eight key points for assisting DDH diagnosis. MTDDH is the first dataset designed for comprehensive evaluation of pelvic X-ray quality while also offering the most complete set of eight key points for DDH diagnosis, meeting the need for improved diagnostic precision. Finally, we present the process of constructing MTDDH in detail and briefly introduce its applications.

PolyPose: Localizing Deformable Anatomy in 3D from Sparse 2D X-ray Images using Polyrigid Transforms

Vivek Gopalakrishnan, Neel Dey, Polina Golland

arxiv logopreprint May 25 2025
Determining the 3D pose of a patient from a limited set of 2D X-ray images is a critical task in interventional settings. While preoperative volumetric imaging (e.g., CT and MRI) provides precise 3D localization and visualization of anatomical targets, these modalities cannot be acquired during procedures, where fast 2D imaging (X-ray) is used instead. To integrate volumetric guidance into intraoperative procedures, we present PolyPose, a simple and robust method for deformable 2D/3D registration. PolyPose parameterizes complex 3D deformation fields as a composition of rigid transforms, leveraging the biological constraint that individual bones do not bend in typical motion. Unlike existing methods that either assume no inter-joint movement or fail outright in this under-determined setting, our polyrigid formulation enforces anatomically plausible priors that respect the piecewise rigid nature of human movement. This approach eliminates the need for expensive deformation regularizers that require patient- and procedure-specific hyperparameter optimization. Across extensive experiments on diverse datasets from orthopedic surgery and radiotherapy, we show that this strong inductive bias enables PolyPose to successfully align the patient's preoperative volume to as few as two X-ray images, thereby providing crucial 3D guidance in challenging sparse-view and limited-angle settings where current registration methods fail.
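The core polyrigid idea, blending per-bone rigid transforms into one smooth deformation field, can be sketched in a few lines of NumPy. The 2D setting, the Gaussian anchor weighting, and the linear (rather than log-Euclidean) blending below are our simplifications for illustration, not the paper's exact formulation:

```python
import numpy as np

def rigid_transform(theta, t):
    """2D rigid transform: rotation by theta plus translation t."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R, np.asarray(t, dtype=float)

def polyrigid_warp(points, transforms, anchors, sigma=20.0):
    """Warp points by a distance-weighted blend of rigid transforms.

    Each transform is anchored at a bone centroid; weights fall off
    with a Gaussian of the distance to that anchor, so nearby bones
    dominate and the field stays smooth across joints.
    """
    points = np.atleast_2d(points).astype(float)
    anchors = np.asarray(anchors, dtype=float)
    # Gaussian weights, normalized per point
    d2 = ((points[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)
    # Apply each rigid transform, then blend the warped coordinates
    warped = np.zeros_like(points)
    for k, (R, t) in enumerate(transforms):
        warped += w[:, k:k+1] * (points @ R.T + t)
    return warped

# A point at a bone's anchor moves (almost) exactly by that bone's transform
T1 = rigid_transform(0.0, [5.0, 0.0])      # pure translation
T2 = rigid_transform(np.pi / 2, [0.0, 0.0])  # pure rotation, far away
out = polyrigid_warp([[0.0, 0.0]], [T1, T2],
                     anchors=[[0.0, 0.0], [200.0, 0.0]])
```

Because no bone bends, each component stays rigid; smoothness comes entirely from the spatial weighting, which is what removes the need for a tuned deformation regularizer.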

Deep learning-based identification of vertebral fracture and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment to predict incident fracture.

Hong N, Cho SW, Lee YH, Kim CO, Kim HC, Rhee Y, Leslie WD, Cummings SR, Kim KM

pubmed logopapers May 24 2025
Deep learning (DL) identification of vertebral fractures and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment (VFA) images may improve fracture risk assessment in older adults. In 26 299 lateral spine radiographs from 9276 individuals attending a tertiary-level institution (60% train set; 20% validation set; 20% test set; VERTE-X cohort), DL models were developed to detect prevalent vertebral fracture (pVF) and osteoporosis. The pre-trained DL models from lateral spine radiographs were then fine-tuned in 30% of a DXA VFA dataset (KURE cohort), with performance evaluated in the remaining 70% test set. The area under the receiver operating characteristics curve (AUROC) for DL models to detect pVF and osteoporosis was 0.926 (95% CI 0.908-0.955) and 0.848 (95% CI 0.827-0.869) from VERTE-X spine radiographs, respectively, and 0.924 (95% CI 0.905-0.942) and 0.867 (95% CI 0.853-0.881) from KURE DXA VFA images, respectively. A total of 13.3% and 13.6% of individuals sustained an incident fracture during a median follow-up of 5.4 years and 6.4 years in the VERTE-X test set (n = 1852) and KURE test set (n = 2456), respectively. Incident fracture risk was significantly greater among individuals with DL-detected vertebral fracture (hazard ratios [HRs] 3.23 [95% CI 2.51-5.17] and 2.11 [95% CI 1.62-2.74] for the VERTE-X and KURE test sets) or DL-detected osteoporosis (HR 2.62 [95% CI 1.90-3.63] and 2.14 [95% CI 1.72-2.66]), which remained significant after adjustment for clinical risk factors and femoral neck bone mineral density. DL scores improved incident fracture discrimination and net benefit when combined with clinical risk factors. In summary, DL-detected pVF and osteoporosis in lateral spine radiographs and DXA VFA images enhanced fracture risk prediction in older adults.
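AUROC figures like those above reduce to a rank statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counted as one half. A minimal dependency-free sketch (pure NumPy, not the study's pipeline):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney identity: P(score_pos > score_neg),
    counting ties as one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels).astype(bool)
    pos, neg = scores[labels], scores[~labels]
    # Explicit pairwise comparison; O(n_pos * n_neg), fine for illustration
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # perfect separation -> 1.0
```

An AUROC of 0.926, as reported for prevalent vertebral fracture, means a randomly chosen fracture case outranks a randomly chosen non-fracture case about 93% of the time.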

Optimizing the power of AI for fracture detection: from blind spots to breakthroughs.

Behzad S, Eibschutz L, Lu MY, Gholamrezanezhad A

pubmed logopapers May 23 2025
Artificial Intelligence (AI) is increasingly being integrated into the field of musculoskeletal (MSK) radiology, from research methods to routine clinical practice. Within the field of fracture detection, AI is allowing for precision and speed previously unimaginable. Yet, AI's decision-making processes are sometimes wrought with deficiencies, undermining trust, hindering accountability, and compromising diagnostic precision. To make AI a trusted ally for radiologists, we recommend incorporating clinical history, rationalizing AI decisions by explainable AI (XAI) techniques, increasing the variety and scale of training data to approach the complexity of a clinical situation, and active interactions between clinicians and developers. By bridging these gaps, the true potential of AI can be unlocked, enhancing patient outcomes and fundamentally transforming radiology through a harmonious integration of human expertise and intelligent technology. In this article, we aim to examine the factors contributing to AI inaccuracies and offer recommendations to address these challenges-benefiting both radiologists and developers striving to improve future algorithms.

Generalizable AI approach for detecting projection type and left-right reversal in chest X-rays.

Ohta Y, Katayama Y, Ichida T, Utsunomiya A, Ishida T

pubmed logopapers May 23 2025
The verification of chest X-ray images involves several checkpoints, including orientation and reversal. To address the challenges of manual verification, this study developed an artificial intelligence (AI)-based system using a deep convolutional neural network (DCNN) to automatically verify the consistency between the imaging direction and examination orders. The system classified chest X-ray images into four categories: anteroposterior (AP), posteroanterior (PA), flipped AP, and flipped PA. To evaluate the impact of internal and external datasets on classification accuracy, the DCNN was trained on multiple publicly available chest X-ray datasets and tested on both internal and external data. The results demonstrated that the DCNN accurately classified the imaging directions and detected image reversal. However, classification accuracy was strongly influenced by the training dataset. When trained exclusively on NIH data, the network achieved an accuracy of 98.9% on that dataset, but this dropped to 87.8% when evaluated on PADChest data. When trained on a mixed dataset, accuracy improved to 96.4%, yet fell to 76.0% when tested on the external COVID-CXNet dataset. Using Grad-CAM, we visualized the network's decision-making process, highlighting influential regions such as the cardiac silhouette and arm positioning, depending on the imaging direction. This study thus demonstrates the potential of AI to automate verification of imaging direction and positioning in chest X-rays; however, the network must be fine-tuned to local data characteristics to achieve optimal performance.
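The four-class setup pairs naturally with flip-based data handling: mirroring an image horizontally moves it between the plain and "flipped" classes. A tiny NumPy sketch with a matching label remap (the mapping itself is our assumption for illustration, not taken from the paper):

```python
import numpy as np

# The four target classes from the study
CLASSES = ["AP", "PA", "flipped AP", "flipped PA"]

# How a horizontal (left-right) flip moves an image between classes.
# This mapping is our assumption for augmentation, not the paper's code.
FLIP_MAP = {"AP": "flipped AP", "flipped AP": "AP",
            "PA": "flipped PA", "flipped PA": "PA"}

def augment_with_flip(image, label):
    """Return the left-right mirrored image and its remapped class label."""
    return np.fliplr(image), FLIP_MAP[label]

img = np.arange(6).reshape(2, 3)  # stand-in for a chest radiograph
flipped, new_label = augment_with_flip(img, "PA")
```

Because flipping is an exact involution, applying the function twice recovers the original image and label, which makes the mapping easy to sanity-check.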

Artificial intelligence automated measurements of spinopelvic parameters in adult spinal deformity-a systematic review.

Bishara A, Patel S, Warman A, Jo J, Hughes LP, Khalifeh JM, Azad TD

pubmed logopapers May 23 2025
This review evaluates advances in deep learning (DL) applications for automatic spinopelvic parameter estimation, comparing their accuracy to manual measurements performed by surgeons. The PubMed database was queried for studies on DL measurement of adult spinopelvic parameters between 2014 and 2024. Studies were excluded if they focused on pediatric patients, non-deformity-related conditions, or non-human subjects, or if they lacked sufficient quantitative data comparing DL models to human measurements. Included studies were assessed on model architecture, patient demographics, training, validation, and testing methods, sample sizes, and performance relative to manual methods. Of 442 screened articles, 16 were included, with sample sizes ranging from 15 to 9,832 radiographs and reported intraclass correlation coefficients (ICCs) of 0.56 to 1.00. Measurements of pelvic tilt, pelvic incidence, T4-T12 kyphosis, L1-L4 lordosis, and SVA showed consistently high ICCs (>0.80) and low mean absolute deviations (MADs <6°), with a substantial number of studies reporting an excellent ICC of 0.90 or greater for pelvic tilt. In contrast, T1-T12 kyphosis and L4-S1 lordosis exhibited lower ICCs and higher measurement errors. Overall, most DL models demonstrated strong correlations (>0.80) with clinician measurements and minimal differences from manual references, except for T1-T12 kyphosis (average Pearson correlation: 0.68), L1-L4 lordosis (average Pearson correlation: 0.75), and L4-S1 lordosis (average Pearson correlation: 0.65). Novel computer vision algorithms show promising accuracy in measuring spinopelvic parameters, comparable to manual surgeon measurements. Future research should focus on external validation, additional imaging modalities, and the feasibility of integration into clinical settings to assess model reliability and predictive capacity.
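For agreement between a DL model and a manual rater, a common choice is the two-way random effects, absolute agreement, single-measures form ICC(2,1), computed from the ANOVA mean squares. A self-contained sketch (our own implementation, not the reviewed studies' code):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement,
    single measurement. `ratings` is (n_subjects, k_raters)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per subject
    col_means = x.mean(axis=0)   # per rater
    # ANOVA sums of squares
    ssr = k * ((row_means - grand) ** 2).sum()   # between subjects
    ssc = n * ((col_means - grand) ** 2).sum()   # between raters
    sse = ((x - grand) ** 2).sum() - ssr - ssc   # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement -> ICC of 1.0
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))
```

Because ICC(2,1) measures absolute agreement, a model that is perfectly correlated with the surgeon but systematically offset (say, always 1° high) is penalized, which is exactly the behavior wanted when DL output is meant to replace the manual number.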

Lung volume assessment for mean dark-field coefficient calculation using different determination methods.

Gassert FT, Heuchert J, Schick R, Bast H, Urban T, Dorosti T, Zimmermann GS, Ziegelmayer S, Marka AW, Graf M, Makowski MR, Pfeiffer D, Pfeiffer F

pubmed logopapers May 23 2025
Accurate lung volume determination is crucial for reliable dark-field imaging. We compared different approaches for the determination of lung volume in mean dark-field coefficient calculation. In this retrospective analysis of data prospectively acquired between October 2018 and October 2020, patients at least 18 years of age who underwent chest computed tomography (CT) were screened for study participation. Inclusion criteria were the ability to consent and to stand upright without help. Exclusion criteria were pregnancy, lung cancer, pleural effusion, atelectasis, air space disease, ground-glass opacities, and pneumothorax. Lung volume was calculated using four methods: conventional radiography (CR) using shape information; a convolutional neural network (CNN) trained for CR; CT-based volume estimation; and results from pulmonary function testing (PFT). Results were compared using a Student t-test and Spearman ρ correlation statistics. We studied 81 participants (51 men, 30 women), aged 64 ± 12 years (mean ± standard deviation). All lung volumes derived from the various methods were different from each other: CR, 7.27 ± 1.64 L; CNN, 4.91 ± 1.05 L; CT, 5.25 ± 1.36 L; PFT, 6.54 ± 1.52 L; p < 0.001 for all comparisons. A high positive correlation was found for all combinations (p < 0.001 for all), the highest between CT and CR (ρ = 0.88) and the lowest between PFT and CNN (ρ = 0.78). Lung volume, and therefore the mean dark-field coefficient, is thus highly dependent on the determination method, reflecting differences in positioning and inhalation depth. This study underscores the impact of the method used for lung volume determination. In the context of mean dark-field coefficient calculation, CR-based methods are preferable because dark-field and conventional images are acquired in the same breathing state, eliminating biases due to differences in inhalation depth.
Lung volume measurements vary significantly between different determination methods. Mean dark-field coefficient calculations require the same method to ensure comparability. Radiography-based methods simplify workflows and minimize biases, making them most suitable.
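The reported ρ values are Spearman rank correlations, which measure monotone rather than linear association between the volume estimates. A minimal pure-NumPy sketch (our own implementation, shown only to make the statistic concrete):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average ranks over ties
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Any strictly increasing relationship gives rho = 1, even if nonlinear
print(spearman_rho([1, 2, 3, 4], [1, 4, 9, 16]))
```

This is why two methods can correlate highly (ρ = 0.88) while still differing significantly in absolute volume: rank correlation is blind to systematic offsets, which the t-test detects.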

Development and validation of a radiomics model using plain radiographs to predict spine fractures with posterior wall injury.

Liu W, Zhang X, Yu C, Chen D, Zhao K, Liang J

pubmed logopapers May 23 2025
When spine fractures involve posterior wall damage, they pose a heightened risk of instability, which in turn influences treatment strategy. To enhance early diagnosis and refine treatment planning for these fractures, we performed a radiomics analysis using deep learning techniques based on both anteroposterior and lateral plain X-ray images. Retrospective data were collected for 130 patients with spine fractures who underwent anteroposterior and lateral imaging at two centers (Center 1, training cohort; Center 2, validation cohort) between January 2010 and June 2024. The Vision Transformer (ViT) technique was employed to extract imaging features. Features selected through multiple methods were then used to construct machine learning models using naive Bayes and support vector machine (SVM) classifiers. Model performance was evaluated using the area under the curve (AUC). Twelve features were selected to form the deep learning feature set. The SVM model combining anteroposterior and lateral images performed well at both centers, with a high AUC for predicting spine fractures with posterior wall injury (Center 1, AUC: 0.909, 95% CI: 0.763-1.000; Center 2, AUC: 0.837, 95% CI: 0.678-0.996). The SVM model based on the combined images outperformed both the single-view models and a spine surgeon with 3 years of clinical experience in classification performance. Our study demonstrates that a radiomics model integrating anteroposterior and lateral plain X-ray images of the spine can more effectively predict spine fractures with posterior wall injury, aiding clinicians in making accurate diagnoses and treatment decisions.

An X-ray bone age assessment method for hands and wrists of adolescents in Western China based on feature fusion deep learning models.

Wang YH, Zhou HM, Wan L, Guo YC, Li YZ, Liu TA, Guo JX, Li DY, Chen T

pubmed logopapers May 22 2025
The epiphyses of the hand and wrist serve as crucial indicators for assessing skeletal maturity in adolescents. This study aimed to develop a deep learning (DL) model for bone age (BA) assessment using hand and wrist X-ray images, addressing the challenge of classifying BA in adolescents. The results of this DL-based classification were then compared and analyzed against manual assessment. A retrospective analysis was conducted on 688 hand and wrist X-ray images of adolescents aged 11.00-23.99 years from western China, randomly divided into training, validation, and test sets. The BA assessment results were initially analyzed and compared using four DL network models: InceptionV3, InceptionV3 + SE + Sex, InceptionV3 + Bilinear, and InceptionV3 + Bilinear + SE + Sex, to identify the DL model with the best classification performance. The results of the top-performing model were then compared with those of manual classification. The study found that the InceptionV3 + Bilinear + SE + Sex model performed best, achieving classification accuracies of 96.15% and 90.48% on the training and test sets, respectively. Furthermore, based on this model, classification accuracies were calculated for four age groups (< 14.0 years, 14.0 years ≤ age < 16.0 years, 16.0 years ≤ age < 18.0 years, ≥ 18.0 years), with notable accuracies of 100% for the groups 16.0 years ≤ age < 18.0 years and ≥ 18.0 years. BA classification using the feature-fusion DL network model holds significant reference value for determining the age of criminal responsibility of adolescents, particularly at the critical legal age boundaries of 14.0, 16.0, and 18.0 years.
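The four age groups map cleanly onto a threshold lookup. A small sketch with the stdlib `bisect` module (the boundary ages come from the abstract; the group labels are our own shorthand for its ranges):

```python
import bisect

# Age-group boundaries (years) tied to the legal thresholds in the study
BOUNDS = [14.0, 16.0, 18.0]
GROUPS = ["<14.0", "14.0-15.99", "16.0-17.99", ">=18.0"]

def age_group(age_years: float) -> str:
    """Map a chronological age to the study's four classification groups.

    bisect_right puts a boundary age (e.g. exactly 16.0) into the
    higher group, matching the '16.0 <= age < 18.0' convention.
    """
    return GROUPS[bisect.bisect_right(BOUNDS, age_years)]

print(age_group(15.5))  # 14.0-15.99
print(age_group(18.0))  # >=18.0
```

Framing BA estimation as classification over these bins, rather than regression to a continuous age, aligns the model's output directly with the legal decision being supported.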

A Deep Learning Vision-Language Model for Diagnosing Pediatric Dental Diseases

Pham, T.

medrxiv logopreprint May 22 2025
This study proposes a deep learning vision-language model for the automated diagnosis of pediatric dental diseases, with a focus on differentiating between caries and periapical infections. The model integrates visual features extracted from panoramic radiographs using methods of non-linear dynamics and textural encoding with textual descriptions generated by a large language model. These multimodal features are concatenated and used to train a 1D-CNN classifier. Experimental results demonstrate that the proposed model outperforms conventional convolutional neural networks and standalone language-based approaches, achieving high accuracy (90%), sensitivity (92%), precision (92%), and an AUC of 0.96. This work highlights the value of combining structured visual and textual representations in improving diagnostic accuracy and interpretability in dental radiology. The approach offers a promising direction for the development of context-aware, AI-assisted diagnostic tools in pediatric dental care.