Application of Artificial Intelligence in rheumatic disease classification: an example of ankylosing spondylitis severity inspection model.

Chen CW, Tsai HH, Yeh CY, Yang CK, Tsou HK, Leong PY, Wei JC

PubMed · Dec 1, 2025
The development of an Artificial Intelligence (AI)-based severity inspection model for ankylosing spondylitis (AS) could support health professionals in rapidly assessing disease severity, enhance proficiency, and reduce demands on human resources. This paper aims to develop an AI-based severity inspection model for AS using patients' X-ray images and the modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS). The model is developed through data preprocessing followed by model building and testing. The training data were preprocessed by inviting three experts to check the X-ray images of 222 patients against the gold standard. The model comprises two stages: keypoint detection and mSASSS evaluation. The two-stage AI-based severity inspection model for AS was developed to automatically detect spine points and evaluate mSASSS scores. Finally, the outputs of the developed model were compared with the experts' assessments to analyse the model's accuracy. The study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. The first-stage spine point detection achieved a mean error distance of 1.57 micrometres from the ground truth, and the second-stage classification network reached a mean accuracy of 0.81. The model correctly identified 97.4% of patches belonging to mSASSS score 3, whereas patches belonging to score 0 were still sometimes misclassified as score 1 or 2. The automatic severity inspection model for AS developed in this paper is accurate and can support health professionals in rapidly assessing the severity of AS, enhancing assessment proficiency, and reducing demands on human resources.
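
A minimal sketch of such a two-stage pipeline, assuming a keypoint-regression first stage and a per-patch mSASSS classifier second stage; the architectures, patch size, and 24-point layout are illustrative placeholders, not the authors' implementation:

```python
import torch
import torch.nn as nn

class KeypointNet(nn.Module):
    """Stage 1: regress (x, y) locations of spine keypoints."""
    def __init__(self, n_points=24):
        super().__init__()
        self.n_points = n_points
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_points * 2)

    def forward(self, x):
        return self.head(self.features(x)).view(-1, self.n_points, 2)

class PatchScorer(nn.Module):
    """Stage 2: classify a patch around each keypoint into mSASSS 0-3."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, patch):
        return self.net(patch)

def crop_patch(image, xy, size=64):
    """Crop a size x size patch centred on keypoint xy (clamped to bounds)."""
    _, _, h, w = image.shape
    x = int(xy[0].clamp(size // 2, w - size // 2))
    y = int(xy[1].clamp(size // 2, h - size // 2))
    return image[:, :, y - size // 2: y + size // 2,
                 x - size // 2: x + size // 2]

xray = torch.randn(1, 1, 512, 256)      # placeholder radiograph
kp_net, scorer = KeypointNet(), PatchScorer()
points = kp_net(xray)[0]                # (24, 2) predicted keypoints
scores = [scorer(crop_patch(xray, p)).argmax(1).item() for p in points]
print("predicted mSASSS per site:", scores)
```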

Automated Measurements of Spinal Parameters for Scoliosis Using Deep Learning.

Meng X, Zhu S, Yang Q, Zhu F, Wang Z, Liu X, Dong P, Wang S, Fan L

PubMed · Jun 15, 2025
Retrospective single-institution study. To develop and validate an automated convolutional neural network (CNN) to measure the Cobb angle, T1 tilt angle, coronal balance, clavicular angle, height of the shoulders, T5-T12 Cobb angle, and sagittal balance for accurate scoliosis diagnosis. Scoliosis, characterized by a Cobb angle >10°, requires accurate and reliable measurements to guide treatment. Traditional manual measurements are time-consuming and have low interobserver and intraobserver reliability. While some automated tools exist, they often require manual intervention and focus primarily on the Cobb angle. In this study, we utilized four datasets comprising the anterior-posterior (AP) and lateral radiographs of 1682 patients with scoliosis. The CNN includes coarse segmentation, landmark localization, and fine segmentation. The measurements were evaluated using the Dice coefficient, mean absolute error (MAE), and percentage of correct keypoints (PCK) with a 3-mm threshold. An internal testing set, including 87 adolescent (7-16 yr) and 26 older adult (≥60 yr) patients, was used to evaluate the agreement between automated and manual measurements. The automated measurements by the CNN achieved high mean Dice coefficients (>0.90), a PCK of 89.7%-93.7%, and an MAE for vertebral corners of 2.87-3.62 mm on AP radiographs. Agreement between automated and manual measurements on the internal testing set was acceptable, with an MAE of 0.26-0.51 mm or degrees for the adolescent subgroup and 0.29-4.93 mm or degrees for the older adult subgroup on AP radiographs. On lateral radiographs, the MAE for the T5-T12 Cobb angle and sagittal balance was 1.03° and 0.84 mm, respectively, in adolescents, and 4.60° and 9.41 mm, respectively, in older adults. Automated measurement time was significantly shorter than manual measurement. The deep learning automated system provides rapid, accurate, and reliable measurements for scoliosis diagnosis, which could improve clinical workflow efficiency and guide scoliosis treatment. Level of Evidence: III.
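
For reference, the Cobb angle and the PCK metric named above can be computed from landmarks as in this sketch; the endplate landmark layout and pixel spacing are illustrative assumptions:

```python
import numpy as np

def cobb_angle(upper_endplate, lower_endplate):
    """Angle (degrees) between two endplate lines, each given as two
    (x, y) points on the most-tilted end vertebrae."""
    def direction(p):
        v = np.asarray(p[1], dtype=float) - np.asarray(p[0], dtype=float)
        return v / np.linalg.norm(v)
    d1, d2 = direction(upper_endplate), direction(lower_endplate)
    cos = abs(np.dot(d1, d2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def pck(pred, gt, pixel_spacing_mm, threshold_mm=3.0):
    """Fraction of predicted keypoints within threshold_mm of ground truth."""
    dist_mm = np.linalg.norm(pred - gt, axis=-1) * pixel_spacing_mm
    return float((dist_mm <= threshold_mm).mean())

upper = [(100, 120), (180, 110)]   # illustrative endplate endpoints
lower = [(105, 400), (185, 430)]
print(f"Cobb angle: {cobb_angle(upper, lower):.1f} deg")

pred = np.random.rand(68, 2) * 512
gt = pred + np.random.randn(68, 2) * 5   # simulated 5-pixel landmark error
print(f"PCK@3mm: {pck(pred, gt, pixel_spacing_mm=0.14):.2f}")
```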

Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)

Comiter, C., Chen, X., Vaishnav, E. D., Kobayashi-Kirschvink, K. J., Ciampricotti, M., Zhang, K., Murray, J., Monticolo, F., Qi, J., Tanaka, R., Brodowska, S. E., Li, B., Yang, Y., Rodig, S. J., Karatza, A., Quintanal Villalonga, A., Turner, M., Pfaff, K. L., Jane-Valbuena, J., Slyper, M., Waldman, J., Vigneau, S., Wu, J., Blosser, T. R., Segerstolpe, A., Abravanel, D., Wagle, N., Demehri, S., Zhuang, X., Rudin, C. M., Klughammer, J., Rozenblatt-Rosen, O., Stultz, C. M., Shu, J., Regev, A.

bioRxiv preprint · Jun 13, 2025
Tissue biology involves an intricate balance between cell-intrinsic processes and interactions between cells organized in specific spatial patterns. These can be captured, respectively, by single-cell profiling methods, such as single-cell RNA-seq (scRNA-seq) and spatial transcriptomics, and by histology imaging data, such as Hematoxylin-and-Eosin (H&E) stains. While single-cell profiles provide rich molecular information, they can be challenging to collect routinely in the clinic, and they lack either spatial resolution or high gene throughput. Conversely, histological H&E assays have been a cornerstone of tissue pathology for decades, but they do not directly report on molecular details, although the structures they capture arise from molecules and cells. Here, we leverage vision transformers and adversarial deep learning to develop the Single-Cell omics from Histology Analysis Framework (SCHAF), which generates a tissue sample's spatially resolved, whole-transcriptome, single-cell omics dataset from its H&E histology image. We demonstrate SCHAF on a variety of tissues--including lung cancer, metastatic breast cancer, placentae, and whole mouse pups--training with matched samples analyzed by sc/snRNA-seq, H&E staining, and, when available, spatial transcriptomics. SCHAF generated appropriate single-cell profiles from histology images in test data, related them spatially, and compared well to ground-truth scRNA-seq, expert pathologist annotations, or direct spatial transcriptomic measurements, with some limitations. SCHAF opens the way to next-generation H&E analyses and an integrated understanding of cell and tissue biology in health and disease.
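
Conceptually, the adversarial component can be pictured as a generator that maps an H&E tile embedding to an expression profile and a discriminator that distinguishes generated from real scRNA-seq profiles. The sketch below is a toy stand-in with assumed dimensions, not SCHAF's actual architecture:

```python
import torch
import torch.nn as nn

EMBED_DIM, N_GENES = 256, 2000  # assumed tile-embedding size and gene count

generator = nn.Sequential(
    nn.Linear(EMBED_DIM, 512), nn.ReLU(),
    nn.Linear(512, N_GENES), nn.Softplus(),   # non-negative expression
)
discriminator = nn.Sequential(
    nn.Linear(N_GENES, 256), nn.ReLU(),
    nn.Linear(256, 1),                        # real-vs-generated logit
)

tile_embedding = torch.randn(32, EMBED_DIM)   # e.g. from a ViT tile encoder
real_profiles = torch.rand(32, N_GENES)       # matched scRNA-seq batch

fake_profiles = generator(tile_embedding)
bce = nn.BCEWithLogitsLoss()
# discriminator learns to separate real from generated profiles
d_loss = bce(discriminator(real_profiles), torch.ones(32, 1)) + \
         bce(discriminator(fake_profiles.detach()), torch.zeros(32, 1))
# generator learns to fool the discriminator
g_loss = bce(discriminator(fake_profiles), torch.ones(32, 1))
print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```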

Anatomy-Grounded Weakly Supervised Prompt Tuning for Chest X-ray Latent Diffusion Models

Konstantinos Vilouras, Ilias Stogiannidis, Junyu Yan, Alison Q. O'Neil, Sotirios A. Tsaftaris

arXiv preprint · Jun 12, 2025
Latent Diffusion Models have shown remarkable results in text-guided image synthesis in recent years. In the domain of natural (RGB) images, recent works have shown that such models can be adapted to various vision-language downstream tasks with little to no supervision. In contrast, text-to-image Latent Diffusion Models remain relatively underexplored in the field of medical imaging, primarily due to limited data availability (e.g., owing to privacy concerns). In this work, focusing on the chest X-ray modality, we first demonstrate that a standard text-conditioned Latent Diffusion Model has not learned to align clinically relevant information in free-text radiology reports with the corresponding areas of the given scan. To alleviate this issue, we then propose a fine-tuning framework that improves multi-modal alignment in a pre-trained model so that it can be efficiently repurposed for downstream tasks such as phrase grounding. Our method sets a new state of the art on a standard benchmark dataset (MS-CXR) while also exhibiting robust performance on out-of-distribution data (VinDr-CXR). Our code will be made publicly available.
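
The grounding signal in such models typically comes from cross-attention between report tokens and latent image patches; a phrase's localisation map is read off from the attention its tokens receive. A toy sketch with random stand-in features (not the authors' model or the MS-CXR interface):

```python
import torch
import torch.nn.functional as F

n_patches, n_tokens, dim = 16 * 16, 12, 64
patch_feats = torch.randn(n_patches, dim)   # latent-space image patches
token_feats = torch.randn(n_tokens, dim)    # encoded report tokens

# scaled dot-product cross-attention weights (patches attend to text tokens)
attn = F.softmax(patch_feats @ token_feats.T / dim ** 0.5, dim=-1)

# hypothetical token span for a phrase such as "left pleural effusion"
phrase_tokens = [3, 4, 5]
heatmap = attn[:, phrase_tokens].mean(-1).reshape(16, 16)
print("peak grounding cell:", divmod(int(heatmap.argmax()), 16))
```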

Identification of Atypical Scoliosis Patterns Using X-ray Images Based on Fine-Grained Techniques in Deep Learning.

Chen Y, He Z, Yang KG, Qin X, Lau AY, Liu Z, Lu N, Cheng JC, Lee WY, Chui EC, Qiu Y, Liu X, Chen X, Zhu Z

PubMed · Jun 11, 2025
Study Design: Retrospective diagnostic study. Objectives: To develop a fine-grained classification model based on deep learning using X-ray images, to screen for scoliosis, and further to screen for atypical scoliosis patterns associated with Chiari malformation type I (CMS). Methods: A total of 508 pairs of coronal and sagittal X-ray images from patients with CMS, adolescent idiopathic scoliosis (AIS), and normal controls (NC) were used to construct ResNet-50 models, including ResNet-50 Coronal, ResNet-50 Sagittal, ResNet-50 Dual, ResNet-50 Concat, and ResNet-50 Bilinear variants. Evaluation metrics included accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for both the scoliosis diagnosis system and the CMS diagnosis system, along with receiver operating characteristic (ROC) curves and heatmaps for CMS diagnosis. Results: In the scoliosis diagnosis system, the ResNet-50 Coronal model had the best overall performance. In the CMS diagnosis system, the ResNet-50 Coronal and ResNet-50 Dual models performed best. Specifically, the ResNet-50 Dual model reached the diagnostic level of senior spine surgeons, and the ResNet-50 Coronal model even surpassed senior surgeons in specificity and PPV. The CMS heatmaps revealed that the major classification weights were concentrated on features such as atypical curve types, significant lateral shift of scoliotic segments, longer affected segments, and severe trunk tilt. Conclusions: The fine-grained classification model based on the ResNet-50 network can accurately screen for atypical scoliosis patterns associated with CMS, highlighting the importance of radiographic features such as atypical curve types in model classification.
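
A sketch of what a dual-view model of this kind might look like, in the spirit of the ResNet-50 Concat variant: two branches encode the coronal and sagittal radiographs and their pooled features are concatenated before the classifier. The class count follows the abstract (CMS / AIS / NC); the fusion details and input sizes are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class DualViewResNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.coronal = resnet50(weights=None)
        self.sagittal = resnet50(weights=None)
        feat_dim = self.coronal.fc.in_features          # 2048 per branch
        self.coronal.fc = nn.Identity()
        self.sagittal.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, coronal_img, sagittal_img):
        f = torch.cat([self.coronal(coronal_img),
                       self.sagittal(sagittal_img)], dim=1)
        return self.classifier(f)

model = DualViewResNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```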

A fully open AI foundation model applied to chest radiography.

Ma D, Pang J, Gotway MB, Liang J

PubMed · Jun 11, 2025
Chest radiography frequently serves as the baseline imaging examination for most lung diseases [1]. Deep learning has great potential for automating the interpretation of chest radiography [2]. However, existing chest radiographic deep learning models are limited in diagnostic scope, generalizability, adaptability, robustness, and extensibility. To overcome these limitations, we have developed Ark+, a foundation model applied to chest radiography and pretrained by cyclically accruing and reusing the knowledge from heterogeneous expert labels in numerous datasets. Ark+ excels in diagnosing thoracic diseases. It expands the diagnostic scope and addresses potential misdiagnosis. It can adapt to evolving diagnostic needs and respond to novel diseases. It can learn rare conditions from a few samples and transfer to new diagnostic settings without training. It tolerates data biases and long-tailed distributions, and it supports federated learning to preserve privacy. All code and pretrained models have been released, so that Ark+ is open for fine-tuning, local adaptation, and improvement. It is extensible to several modalities and is thus a foundation model for medical imaging. The exceptional capabilities of Ark+ stem from our insight: aggregating various datasets diversifies the patient populations and accrues knowledge from many experts to yield unprecedented performance while reducing annotation costs [3]. The development of Ark+ reveals that open models trained by accruing and reusing knowledge from heterogeneous expert annotations across a multitude of public (big or small) datasets can surpass the performance of proprietary models trained on large data. We hope that our findings will inspire more researchers to share code and datasets, or to federate privacy-preserving data, to create open foundation models with diverse, global expertise and patient populations, thus accelerating open science and democratizing AI for medicine.
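
One ingredient that training on heterogeneous expert labels requires is a loss masked to whichever labels each source dataset actually annotates, so that one backbone can learn from datasets with different disease coverage. A minimal sketch; the label layout and backbone are placeholders, not Ark+'s actual design:

```python
import torch
import torch.nn as nn

N_LABELS = 10                       # union of disease labels across datasets
backbone = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, N_LABELS))

def masked_bce(logits, targets, label_mask):
    """BCE computed only where this dataset actually provides a label."""
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    return (loss * label_mask).sum() / label_mask.sum().clamp(min=1)

x = torch.randn(4, 1, 224, 224)              # batch from dataset A
targets = torch.randint(0, 2, (4, N_LABELS)).float()
mask = torch.zeros(4, N_LABELS)
mask[:, :6] = 1.0                            # dataset A annotates labels 0-5 only
print(masked_bce(backbone(x), targets, mask).item())
```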

Autonomous Computer Vision Development with Agentic AI

Jin Kim, Muhammad Wahi-Anwa, Sangyun Park, Shawn Shin, John M. Hoffman, Matthew S. Brown

arXiv preprint · Jun 11, 2025
Agentic Artificial Intelligence (AI) systems leveraging Large Language Models (LLMs) exhibit significant potential for complex reasoning, planning, and tool utilization. We demonstrate that a specialized computer vision system can be built autonomously from a natural language prompt using agentic AI methods. This involved extending SimpleMind (SM), an open-source Cognitive AI environment with configurable tools for medical image analysis, with an LLM-based agent, implemented using OpenManus, to automate the planning (tool configuration) for a particular computer vision task. We provide a proof-of-concept demonstration that an agentic system can interpret a computer vision task prompt and plan a corresponding SimpleMind workflow by decomposing the task and configuring appropriate tools. From the user input prompt, "provide sm (SimpleMind) config for lungs, heart, and ribs segmentation for cxr (chest x-ray)", the agent LLM was able to generate the plan (a tool configuration file in YAML format) and execute the SM-Learn (training) and SM-Think (inference) scripts autonomously. The computer vision agent automatically configured, trained, and tested itself on 50 chest X-ray images, achieving mean dice scores of 0.96, 0.82, and 0.83 for the lungs, heart, and ribs, respectively. This work shows the potential for autonomous planning and tool configuration that has traditionally been performed by a data scientist in the development of computer vision applications.
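
The dice scores quoted above are the standard overlap metric between a predicted and a reference binary mask; a minimal reference implementation (the evaluation metric itself, not SimpleMind's code):

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (pred) and B (ref)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

pred = np.zeros((128, 128), dtype=bool); pred[30:90, 20:80] = True
ref = np.zeros((128, 128), dtype=bool);  ref[35:95, 25:85] = True
print(f"dice = {dice(pred, ref):.3f}")
```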

Improving Patient Communication by Simplifying AI-Generated Dental Radiology Reports With ChatGPT: Comparative Study.

Stephan D, Bertsch AS, Schumacher S, Puladi B, Burwinkel M, Al-Nawas B, Kämmerer PW, Thiem DG

PubMed · Jun 9, 2025
Medical reports, particularly radiology findings, are often written for professional communication, making them difficult for patients to understand. This communication barrier can reduce patient engagement and lead to misinterpretation. Artificial intelligence (AI), especially large language models such as ChatGPT, offers new opportunities for simplifying medical documentation to improve patient comprehension. We aimed to evaluate whether AI-generated radiology reports simplified by ChatGPT improve patient understanding, readability, and communication quality compared to original AI-generated reports. In total, 3 versions of radiology reports were created using ChatGPT: an original AI-generated version (text 1), a patient-friendly, simplified version (text 2), and a further simplified and accessibility-optimized version (text 3). A total of 300 patients (n=100, 33.3% per group), excluding patients with medical education, were randomly assigned to review one text version and complete a standardized questionnaire. Readability was assessed using the Flesch Reading Ease (FRE) score and LIX indices. Both simplified texts showed significantly higher readability scores (text 1: FRE score=51.1; text 2: FRE score=55.0; and text 3: FRE score=56.4; P<.001) and lower LIX scores, indicating enhanced clarity. Text 3 had the shortest sentences, had the fewest long words, and scored best on all patient-rated dimensions. Questionnaire results revealed significantly higher ratings for texts 2 and 3 across clarity (P<.001), tone (P<.001), structure, and patient engagement. For example, patients rated the ability to understand findings without help highest for text 3 (mean 1.5, SD 0.7) and lowest for text 1 (mean 3.1, SD 1.4). Both simplified texts significantly improved patients' ability to prepare for clinical conversations and promoted shared decision-making. AI-generated simplification of radiology reports significantly enhances patient comprehension and engagement. These findings highlight the potential of ChatGPT as a tool to improve patient-centered communication. While promising, future research should focus on ensuring clinical accuracy and exploring applications across diverse patient populations to support equitable and effective integration of AI in health care communication.
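
Both readability indices have simple closed forms. The sketch below uses the standard English-language FRE constants and a crude vowel-group syllable heuristic, since the study's exact tooling is not specified:

```python
import re

def syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, at least one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fre(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * n_words / sentences - 84.6 * syl / n_words

def lix(text: str) -> float:
    """LIX: words/sentences + 100 * (share of words longer than 6 letters)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    long_words = sum(1 for w in words if len(w) > 6)
    return n_words / sentences + 100.0 * long_words / n_words

report = "The radiograph shows periapical radiolucency around tooth 36."
print(f"FRE={fre(report):.1f}  LIX={lix(report):.1f}")
```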

Comparative accuracy of two commercial AI algorithms for musculoskeletal trauma detection in emergency radiographs.

Huhtanen JT, Nyman M, Blanco Sequeiros R, Koskinen SK, Pudas TK, Kajander S, Niemi P, Aronen HJ, Hirvonen J

PubMed · Jun 9, 2025
Missed fractures are the primary cause of interpretation errors in emergency radiology, and artificial intelligence has recently shown great promise in radiograph interpretation. This study compared the diagnostic performance of two AI algorithms, BoneView and RBfracture, in detecting traumatic abnormalities (fractures and dislocations) in MSK radiographs. The AI algorithms analyzed 998 radiographs (585 normal, 413 abnormal) against the consensus of two MSK specialists. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and interobserver agreement (Cohen's kappa) were calculated; 95% confidence intervals (CIs) assessed robustness, and McNemar's tests compared sensitivity and specificity between the AI algorithms. BoneView demonstrated a sensitivity of 0.893 (95% CI: 0.860-0.920), specificity of 0.885 (95% CI: 0.857-0.909), PPV of 0.846, NPV of 0.922, and accuracy of 0.889. RBfracture demonstrated a sensitivity of 0.872 (95% CI: 0.836-0.901), specificity of 0.892 (95% CI: 0.865-0.915), PPV of 0.851, NPV of 0.908, and accuracy of 0.884. No statistically significant differences were found in sensitivity (p = 0.151) or specificity (p = 0.708). Cohen's kappa was 0.81 (95% CI: 0.77-0.84), indicating almost perfect agreement between the two AI algorithms. Performance was similar in adults and children. Both AI algorithms struggled more with subtle abnormalities, which constituted 66% and 70% of false negatives but only 20% and 18% of true positives for the two AI algorithms, respectively (p < 0.001). BoneView and RBfracture exhibited high diagnostic performance and almost perfect agreement, with consistent results across adults and children, highlighting the potential of AI in emergency radiograph interpretation.
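
All of the reported metrics derive from a 2x2 confusion matrix. In this sketch, the BoneView cell counts are back-calculated approximately from the reported sensitivity and specificity (413 abnormal, 585 normal); the kappa and McNemar counts are invented for illustration:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def cohens_kappa(a_pos_b_pos, a_pos_b_neg, a_neg_b_pos, a_neg_b_neg):
    """Agreement between two raters (here: two algorithms) beyond chance."""
    n = a_pos_b_pos + a_pos_b_neg + a_neg_b_pos + a_neg_b_neg
    po = (a_pos_b_pos + a_neg_b_neg) / n
    pe = ((a_pos_b_pos + a_pos_b_neg) / n * (a_pos_b_pos + a_neg_b_pos) / n
          + (a_neg_b_pos + a_neg_b_neg) / n * (a_pos_b_neg + a_neg_b_neg) / n)
    return (po - pe) / (1 - pe)

def mcnemar_chi2(b, c):
    """b, c: discordant counts where only one of the two algorithms is correct."""
    return (b - c) ** 2 / (b + c)

print(diagnostic_metrics(tp=369, fp=67, fn=44, tn=518))   # ~BoneView
print(f"kappa={cohens_kappa(350, 30, 25, 593):.2f}")      # illustrative counts
print(f"mcnemar chi2={mcnemar_chi2(30, 25):.2f}")
```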

Sex estimation from the variables of talocrural joint by using machine learning algorithms.

Ray A, Ray G, Kürtül İ, Şenol GT

PubMed · Jun 9, 2025
This study focused on sex determination from variables measured on X-ray images of the talocrural joint using machine learning (ML) algorithms. The mediolateral diameter of the tibia (TMLD) and fibula (FMLD), the distance between the innermost points of the talocrural joint (DIT), the distance between the outermost points of the talocrural joint (DOT), and the distal articular surface of the tibia (TAS), measured on X-ray images of 150 women and 150 men, were evaluated with several ML methods: a logistic regression classifier, a decision tree classifier, a k-nearest neighbors classifier, linear discriminant analysis, naive Bayes, and a random forest classifier (RFC). The ML models achieved accuracies between 82% and 92%, with the highest accuracy achieved by the RFC. DOT was the variable that contributed most to the model. Except for age and FMLD, all variables showed statistically significant sex differences. The variables of the talocrural joint thus classified sex with high accuracy. In addition, the study yielded population-specific morphometric data and emphasized differences between populations.
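
A minimal scikit-learn sketch of the classification task described, using synthetic data with the abstract's variable names; the feature distributions are invented stand-ins, not the study's measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300                                  # 150 women + 150 men, as in the study
sex = np.repeat([0, 1], n // 2)          # 0 = female, 1 = male
# columns: TMLD, FMLD, DIT, DOT, TAS (mm); synthetic, sex-shifted means
X = rng.normal(loc=30 + 3 * sex[:, None], scale=2.5, size=(n, 5))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, sex, cv=5).mean().round(2))

clf.fit(X, sex)
for name, imp in zip(["TMLD", "FMLD", "DIT", "DOT", "TAS"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.2f}")          # analogous to the DOT finding above
```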