
Application of Artificial Intelligence in rheumatic disease classification: an example of ankylosing spondylitis severity inspection model.

Chen CW, Tsai HH, Yeh CY, Yang CK, Tsou HK, Leong PY, Wei JC

PubMed · Dec 1 2025
The development of an Artificial Intelligence (AI)-based severity inspection model for ankylosing spondylitis (AS) could help health professionals rapidly assess the severity of the disease, enhance proficiency, and reduce demands on human resources. This paper aims to develop an AI-based severity inspection model for AS using patients' X-ray images and the modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS). The model is developed through data preprocessing followed by model building and testing. The training data were preprocessed by inviting three experts to check the X-ray images of 222 patients against the gold standard. The model was then developed in two stages: keypoint detection and mSASSS evaluation. The resulting two-stage AI-based severity inspection model for AS automatically detects spine points and evaluates mSASSS scores. Finally, the outputs of the developed model were compared with the experts' assessments to analyse the model's accuracy. The study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. The first-stage spine point detection achieved a mean error distance of 1.57 micrometres from the ground truth, and the second-stage classification network reached a mean accuracy of 0.81. The model correctly identified 97.4% of patches with an mSASSS score of 3, whereas patches with a score of 0 were sometimes misclassified as score 1 or 2. The automatic severity inspection model for AS developed in this paper is accurate and can support health professionals in rapidly assessing the severity of AS, enhancing assessment proficiency, and reducing demands on human resources.
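
A minimal sketch of a two-stage pipeline of this kind follows: stage 1 regresses spine keypoints from the full radiograph, and stage 2 scores cropped patches around each keypoint on the mSASSS 0-3 scale. The class names, backbones, patch size, and number of keypoints are assumptions for illustration, not the authors' implementation. Summing the per-site scores across the scored vertebral corners yields the total mSASSS (0-72).

```python
# Hypothetical two-stage sketch: keypoint regression, then patch scoring.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_KEYPOINTS = 24   # assumed number of scored vertebral corners per view
PATCH = 64         # assumed patch size around each keypoint

class KeypointNet(nn.Module):
    """Stage 1: predict normalised (x, y) for each spine keypoint."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, N_KEYPOINTS * 2)

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return torch.sigmoid(self.head(feats)).view(-1, N_KEYPOINTS, 2)

class PatchClassifier(nn.Module):
    """Stage 2: classify a patch into mSASSS score 0-3."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 4),
        )

    def forward(self, patch):
        return self.net(patch)

def score_radiograph(image, kp_model, patch_model):
    """image: (1, 1, H, W) tensor; returns one mSASSS score per keypoint."""
    _, _, H, W = image.shape
    keypoints = kp_model(image)[0]                 # (N_KEYPOINTS, 2), in [0, 1]
    scores = []
    for x, y in keypoints:
        cx, cy = int(x * W), int(y * H)
        x0, y0 = max(cx - PATCH // 2, 0), max(cy - PATCH // 2, 0)
        patch = image[:, :, y0:y0 + PATCH, x0:x0 + PATCH]
        patch = F.interpolate(patch, size=(PATCH, PATCH), mode="bilinear",
                              align_corners=False)
        scores.append(patch_model(patch).argmax(dim=1).item())
    return scores  # summing these gives the per-view contribution to mSASSS

if __name__ == "__main__":
    img = torch.rand(1, 1, 512, 256)
    print(score_radiograph(img, KeypointNet().eval(), PatchClassifier().eval()))
```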

A quantitative tumor-wide analysis of morphological heterogeneity of colorectal adenocarcinoma.

Dragomir MP, Popovici V, Schallenberg S, Čarnogurská M, Horst D, Nenutil R, Bosman F, Budinská E

PubMed · Jul 1 2025
The intertumoral and intratumoral heterogeneity of colorectal adenocarcinoma (CRC) at the morphologic level is poorly understood. Previously, we identified morphological patterns associated with CRC molecular subtypes and their distinct molecular motifs. Here we aimed to evaluate the heterogeneity of these patterns across CRC. Three pathologists evaluated dominant, secondary, and tertiary morphology on four sections from four different FFPE blocks per tumor in a pilot set of 22 CRCs. An AI-based image analysis tool was trained on these tumors to evaluate the morphologic heterogeneity on an extended set of 161 stage I-IV primary CRCs (n = 644 H&E sections). We found that most tumors had two or three different dominant morphotypes, and the complex tubular (CT) morphotype was the most common. The CT morphotype showed no combinatorial preferences. The desmoplastic (DE) morphotype was rarely dominant and rarely combined with other dominant morphotypes. The mucinous (MU) morphotype was mostly combined with the solid/trabecular (TB) and papillary (PP) morphotypes. Most tumors showed medium or high heterogeneity, but no associations were found between heterogeneity and clinical parameters. A higher proportion of the DE morphotype was associated with higher T-stage, N-stage, distant metastases, AJCC stage, and shorter overall survival (OS) and relapse-free survival (RFS). A higher proportion of the MU morphotype was associated with higher grade, right-sided location, and microsatellite instability (MSI). The PP morphotype was associated with earlier T- and N-stage, absence of metastases, and improved OS and RFS. CT was linked to left-sided location, lower grade, and better survival in stage I-III patients. MSI tumors showed higher proportions of the MU and TB morphotypes and lower proportions of the CT and PP morphotypes. These findings suggest that morphological shifts accompany tumor progression and highlight the need for extensive sampling and AI-based analysis. In conclusion, we observed unexpectedly high intratumoral morphological heterogeneity of CRC and found that it is not heterogeneity per se, but the proportions of morphologies, that are associated with clinical outcomes.
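
As a rough illustration of how patch-level morphotype predictions could be rolled up per tumor, the sketch below computes morphotype proportions and a Shannon-entropy summary of the mix. The entropy measure is our own stand-in; the abstract does not state how heterogeneity was graded.

```python
# Aggregate per-patch morphotype calls (CT, DE, MU, PP, TB) into per-tumor
# proportions and an illustrative heterogeneity summary.
from collections import Counter
import math

MORPHOTYPES = ["CT", "DE", "MU", "PP", "TB"]

def morphotype_proportions(patch_labels):
    """patch_labels: list of per-patch morphotype codes for one tumor."""
    counts = Counter(patch_labels)
    total = sum(counts.values())
    return {m: counts.get(m, 0) / total for m in MORPHOTYPES}

def heterogeneity(proportions):
    """Shannon entropy of the morphotype mix (0 = a single morphotype)."""
    return -sum(p * math.log(p) for p in proportions.values() if p > 0)

if __name__ == "__main__":
    patches = ["CT"] * 60 + ["MU"] * 25 + ["TB"] * 15
    props = morphotype_proportions(patches)
    print(props, round(heterogeneity(props), 3))
```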

Reconstruction-based approach for chest X-ray image segmentation and enhanced multi-label chest disease classification.

Hage Chehade A, Abdallah N, Marion JM, Hatt M, Oueidat M, Chauvet P

PubMed · Jul 1 2025
U-Net is a commonly used model for medical image segmentation. However, when applied to chest X-ray images that show pathologies, it often fails to include these critical pathological areas in the generated masks. To address this limitation, we developed a novel CycleGAN-based approach for precise segmentation and mask generation that encompasses the areas affected by pathologies within the region of interest, allowing the extraction of radiomic features relevant to those pathologies. Furthermore, we adopted a feature selection approach to focus the analysis on the most significant features. The results of our proposed pipeline are promising, with an average accuracy of 92.05% and an average AUC of 89.48% for the multi-label classification of effusion and infiltration from the ChestX-ray14 dataset using the XGBoost model. Furthermore, applying our methodology to the classification of the 14 diseases in the ChestX-ray14 dataset resulted in an average AUC of 83.12%, outperforming previous studies. This research highlights the importance of effective pathological mask generation and feature selection for accurate classification of chest diseases. The promising results of our approach underscore its potential for broader applications in the classification of chest diseases.
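
The downstream step (radiomic features, feature selection, then a per-pathology XGBoost classifier) could look roughly like the sketch below. The feature matrix, the number of selected features, and the hyperparameters are placeholders, and the radiomic extraction from the CycleGAN-refined masks is omitted.

```python
# Hypothetical downstream classification step on pre-extracted radiomic features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))           # stand-in radiomic feature matrix
Y = rng.integers(0, 2, size=(500, 2))     # stand-in labels: effusion, infiltration
labels = ["effusion", "infiltration"]

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

for i, name in enumerate(labels):
    clf = make_pipeline(
        SelectKBest(f_classif, k=20),                 # keep the top 20 features
        XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss"),
    )
    clf.fit(X_tr, Y_tr[:, i])
    auc = roc_auc_score(Y_te[:, i], clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```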

Multi-label pathology editing of chest X-rays with a Controlled Diffusion Model.

Chu H, Qi X, Wang H, Liang Y

PubMed · Jul 1 2025
Large-scale generative models have garnered significant attention in medical imaging, particularly diffusion models for image editing. However, current research has predominantly concentrated on pathological editing involving a single label or a limited number of labels, making it challenging to achieve precise modifications. Inaccurate alterations may lead to substantial discrepancies between the generated and original images, thereby limiting the clinical applicability of these models. This paper presents a diffusion model with disentangling capabilities applied to chest X-ray image editing, incorporating a mask-based mechanism for bone and organ information. We successfully perform multi-label pathological editing of chest X-ray images without compromising the integrity of the original thoracic structure. The proposed method comprises a chest X-ray image classifier and an intricate organ mask: the classifier supplies the essential feature labels that require disentangling for the stabilized diffusion model, while the organ mask facilitates directed and controllable edits to chest X-rays. We assessed the outcomes of our proposed algorithm, named Chest X-rays_Mpe, using MS-SSIM and CLIP scores alongside qualitative evaluations conducted by radiology experts. The results indicate that our approach surpasses existing algorithms across both quantitative and qualitative metrics.
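
One generic way to keep protected anatomy fixed during diffusion-based editing is RePaint-style masked blending at each denoising step. The sketch below illustrates that idea with a toy DDPM schedule and a placeholder denoiser; it is not the authors' Chest X-rays_Mpe implementation (in their setting the mask would come from the bone/organ segmentation and the denoiser would be label-conditioned).

```python
# Generic mask-constrained denoising loop (RePaint-style blending), assumed.
import torch
import torch.nn as nn

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class DummyDenoiser(nn.Module):
    """Placeholder epsilon-predictor; stands in for a trained, conditioned U-Net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x, t):
        return self.net(x)

def q_sample(x0, t, noise):
    """Forward-noise the original image to step t."""
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

@torch.no_grad()
def edit(x0, keep_mask, denoiser):
    """keep_mask = 1 where the original anatomy must be preserved."""
    x = torch.randn_like(x0)
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        coef = betas[t] / (1 - alpha_bars[t]).sqrt()
        x_prev = (x - coef * eps) / alphas[t].sqrt()
        if t > 0:
            x_prev = x_prev + betas[t].sqrt() * torch.randn_like(x)
            known = q_sample(x0, t - 1, torch.randn_like(x0))
        else:
            known = x0
        # Blend: keep protected anatomy from the original, edit the rest.
        x = keep_mask * known + (1 - keep_mask) * x_prev
    return x

if __name__ == "__main__":
    x0 = torch.rand(1, 1, 64, 64)
    mask = torch.zeros_like(x0)
    mask[:, :, :, :32] = 1.0           # toy "protected" half of the image
    print(edit(x0, mask, DummyDenoiser().eval()).shape)
```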

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed · Jul 1 2025
The rapid advancement of artificial intelligence (AI) has great potential to impact healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, utilizing the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT demonstrated high performance in identifying normal chest X-rays, with a sensitivity of 98.9%, specificity of 93.9%, and accuracy of 94.7%. However, the model's performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2%, specificity 93.7%) and pneumothorax (sensitivity 77.4%, specificity 89.1%), while performance for atelectasis and emphysema was lower. ChatGPT demonstrates potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies such as pneumonia. However, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
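
For reference, the per-category metrics quoted above are one-vs-rest sensitivity, specificity, and accuracy; a small sketch of how they are computed from predicted labels follows. The label lists are synthetic placeholders, not the study data.

```python
# Per-category one-vs-rest sensitivity, specificity, and accuracy.
from sklearn.metrics import confusion_matrix

CATEGORIES = ["Atelectasis", "Effusion", "Emphysema", "Pneumothorax",
              "Pneumonia", "Mass", "No Finding"]

def one_vs_rest_metrics(y_true, y_pred, positive):
    t = [1 if y == positive else 0 for y in y_true]
    p = [1 if y == positive else 0 for y in y_pred]
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

if __name__ == "__main__":
    y_true = ["Pneumonia", "No Finding", "Effusion", "Pneumonia", "No Finding"]
    y_pred = ["Pneumonia", "No Finding", "No Finding", "Mass", "No Finding"]
    for cat in CATEGORIES:
        se, sp, acc = one_vs_rest_metrics(y_true, y_pred, cat)
        print(f"{cat}: Se={se:.2f} Sp={sp:.2f} Acc={acc:.2f}")
```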

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

PubMed · Jul 1 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department after a fall with suspected pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays alone and then with an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated by five other readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on the patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% hip and 35.2% pelvic ring) in 940 patients and 68 pelvic fractures (60% hip and 40% pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se-PELVIC = 77.25% to 83.73%, p < 0.001; Se-HIP = 93.24% to 96.49%, p < 0.001; Se-PELVIC RING = 54.60% to 64.50%, p < 0.001). However, there was a significant decrease in specificity with AI assistance (Spe-PELVIC = 95.24% to 93.25%, p = 0.005; Spe-HIP = 98.30% to 96.90%, p = 0.005). In the 100-patient sample, the two emergency physicians obtained improvements in fracture detection sensitivity across the pelvic area of +14.70% (p = 0.0011) and +10.29% (p < 0.007), respectively, without a significant decrease in specificity. For hip fractures, E1's sensitivity increased from 59.46% to 70.27% (p = 0.04), and E2's sensitivity increased from 78.38% to 86.49% (p = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (p = 0.012), and E2's sensitivity increased from 19.35% to 32.26% (p = 0.043). AI improved the diagnostic performance of emergency physicians and radiologists with limited experience in pelvic fracture screening.
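
Paired reader studies of this kind are typically analyzed case by case, for example with McNemar's test on fracture-positive cases read with and without AI. A hedged sketch with synthetic reads follows; the detection arrays are placeholders, not the study's data, and the abstract does not state which test the authors used.

```python
# Paired comparison of one reader's sensitivity without vs. with AI assistance.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
n_fractures = 68                      # fracture-positive cases in the sample
detected_alone = rng.random(n_fractures) < 0.60
detected_with_ai = detected_alone | (rng.random(n_fractures) < 0.25)

# 2x2 table of paired detections: rows = without AI, columns = with AI.
table = np.array([
    [np.sum(detected_alone & detected_with_ai),  np.sum(detected_alone & ~detected_with_ai)],
    [np.sum(~detected_alone & detected_with_ai), np.sum(~detected_alone & ~detected_with_ai)],
])

print("Se without AI:", detected_alone.mean())
print("Se with AI:   ", detected_with_ai.mean())
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```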

Estimating Periodontal Stability Using Computer Vision.

Feher B, Werdich AA, Chen CY, Barrow J, Lee SJ, Palmer N, Feres M

PubMed · Jul 1 2025
Periodontitis is a severe infection affecting oral and systemic health and is traditionally diagnosed through clinical probing, a process that is time-consuming, uncomfortable for patients, and subject to variability based on the operator's skill. We hypothesized that computer vision can be used to estimate periodontal stability from radiographs alone. At the tooth level, we used intraoral radiographs to detect and categorize individual teeth according to their periodontal stability and corresponding treatment needs: healthy (prevention), stable (maintenance), and unstable (active treatment). At the patient level, we assessed full-mouth series and classified patients as stable or unstable by the presence of at least 1 unstable tooth. Our 3-way tooth classification model achieved an area under the receiver operating characteristic curve of 0.71 for healthy teeth, 0.56 for stable, and 0.67 for unstable. The model achieved an F1 score of 0.45 for healthy teeth, 0.57 for stable, and 0.54 for unstable (recall, 0.70). Saliency maps generated by gradient-weighted class activation mapping primarily showed highly activated areas corresponding to clinically probed regions around teeth. Our binary patient classifier achieved an area under the receiver operating characteristic curve of 0.68 and an F1 score of 0.74 (recall, 0.70). Taken together, our results suggest that it is feasible to estimate periodontal stability, which traditionally requires clinical and radiographic examination, from radiographic signal alone using computer vision. Variations in model performance across different classes at the tooth level indicate the necessity of further refinement.
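
The patient-level rule stated above (unstable if at least one tooth is predicted unstable) could be implemented roughly as below, taking the maximum per-tooth probability of the unstable class as the patient-level score. The example data and the 0.5 threshold are placeholders.

```python
# Aggregate tooth-level predictions into a patient-level stability call.
from sklearn.metrics import f1_score, roc_auc_score

def patient_unstable_prob(tooth_probs):
    """tooth_probs: per-tooth P(unstable); one confidently unstable tooth
    flags the whole patient, so take the maximum."""
    return max(tooth_probs)

if __name__ == "__main__":
    # Per-patient lists of per-tooth P(unstable) from the tooth-level model.
    patients = {
        "p1": [0.05, 0.10, 0.80, 0.20],
        "p2": [0.10, 0.15, 0.05],
        "p3": [0.30, 0.55, 0.40, 0.25, 0.10],
    }
    y_true = [1, 0, 1]                            # reference patient labels
    scores = [patient_unstable_prob(p) for p in patients.values()]
    y_pred = [int(s >= 0.5) for s in scores]
    print("AUC:", roc_auc_score(y_true, scores))
    print("F1: ", f1_score(y_true, y_pred))
```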

The Chest X-Ray: The Ship has Sailed, But Has It?

Iacovino JR

PubMed · Jul 1 2025
In the past, the chest X-ray (CXR) was a traditional age-and-amount requirement used to assess potential mortality risk in life insurance applicants. It fell out of favor due to inconvenience to the applicant, cost, and lack of protective value. With the advent of deep learning techniques, can the results of the CXR, as a requirement, now add value to underwriting risk analysis?

Automated Scoliosis Cobb Angle Classification in Biplanar Radiograph Imaging With Explainable Machine Learning Models.

Yu J, Lahoti YS, McCandless KC, Namiri NK, Miyasaka MS, Ahmed H, Song J, Corvi JJ, Berman DC, Cho SK, Kim JS

PubMed · Jul 1 2025
Retrospective cohort study. To quantify the pathology of the spine in patients with scoliosis through one-dimensional feature analysis. Biplanar radiograph (EOS) imaging is a low-dose technology offering high-resolution spinal curvature measurement, crucial for assessing scoliosis severity and guiding treatment decisions. Machine learning (ML) algorithms utilizing one-dimensional image features can enable automated Cobb angle classification, improving accuracy and efficiency in scoliosis evaluation while reducing the need for manual measurements, thus supporting clinical decision-making. This study used 816 annotated AP EOS spinal images with a spine segmentation mask and a 10th-degree polynomial to represent curvature. Engineered features included the first and second derivatives, the Fourier transform, and curve energy, normalized for robustness. XGBoost selected the top 32 features. The models classified scoliosis into multiple groups based on the degree of curvature, measured through the Cobb angle. To address class imbalance, stratified sampling, undersampling, and oversampling techniques were used, with stratified 10-fold cross-validation for generalization. An automatic grid search was used for hyperparameter optimization, with K-fold cross-validation (K=3). The top-performing model was Random Forest, achieving an ROC AUC of 91.8%, an accuracy of 86.1%, a precision of 86.0%, a recall of 86.0%, and an F1 score of 85.1%. Of the three techniques used to address class imbalance, stratified sampling produced the best out-of-sample results. SHAP values were generated for the top 20 features, including spine curve length and linear regression error, with the most predictive features ranked at the top, enhancing model explainability. Feature engineering with classical ML methods offers an effective approach for classifying scoliosis severity based on Cobb angle ranges. The high interpretability of features representing spinal pathology, along with the ease of use of classical ML techniques, makes this an attractive solution for developing automated tools to manage complex spinal measurements.
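
A hedged sketch of this feature-engineering idea follows: fit a 10th-degree polynomial to the spine midline from the segmentation mask, derive one-dimensional features (derivatives, Fourier magnitudes, a curve-energy term), and train a Random Forest with stratified 10-fold cross-validation. The synthetic curves, the specific feature summaries, and the hyperparameters are illustrative, not the authors' pipeline.

```python
# Hypothetical 1D feature engineering from a spine midline, then Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def curve_features(vert, horiz, degree=10, n_fft=8):
    """vert: vertical positions along the spine; horiz: midline x-positions."""
    p = np.polynomial.Polynomial.fit(vert, horiz, degree)   # 10th-degree fit
    fitted = p(vert)
    d1 = p.deriv(1)(vert)                                    # first derivative
    d2 = p.deriv(2)(vert)                                    # local bending
    fft_mag = np.abs(np.fft.rfft(fitted - fitted.mean()))[:n_fft]
    energy = float(np.mean(d2 ** 2))                         # curve-energy proxy
    return np.concatenate([[d1.mean(), d1.std(), d2.mean(), d2.std(), energy],
                           fft_mag])

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    rows = np.linspace(0.0, 1.0, 100)
    severity = rng.integers(0, 3)                 # 0 = mild, 1 = moderate, 2 = severe
    midline = 0.05 * (severity + 1) * np.sin(2 * np.pi * rows) \
              + rng.normal(scale=0.01, size=rows.size)
    X.append(curve_features(rows, midline))
    y.append(severity)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=cv).mean())
```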
