
Estimating Periodontal Stability Using Computer Vision.

Feher B, Werdich AA, Chen CY, Barrow J, Lee SJ, Palmer N, Feres M

PubMed · Jul 1, 2025
Periodontitis is a severe infection affecting oral and systemic health and is traditionally diagnosed through clinical probing, a process that is time-consuming, uncomfortable for patients, and subject to variability based on the operator's skill. We hypothesized that computer vision can be used to estimate periodontal stability from radiographs alone. At the tooth level, we used intraoral radiographs to detect and categorize individual teeth according to their periodontal stability and corresponding treatment needs: healthy (prevention), stable (maintenance), and unstable (active treatment). At the patient level, we assessed full-mouth series and classified patients as stable or unstable by the presence of at least 1 unstable tooth. Our 3-way tooth classification model achieved an area under the receiver operating characteristic curve of 0.71 for healthy teeth, 0.56 for stable, and 0.67 for unstable. The model achieved an F1 score of 0.45 for healthy teeth, 0.57 for stable, and 0.54 for unstable (recall, 0.70). Saliency maps generated by gradient-weighted class activation mapping primarily showed highly activated areas corresponding to clinically probed regions around teeth. Our binary patient classifier achieved an area under the receiver operating characteristic curve of 0.68 and an F1 score of 0.74 (recall, 0.70). Taken together, our results suggest that it is feasible to estimate periodontal stability, which traditionally requires clinical and radiographic examination, from radiographic signal alone using computer vision. Variations in model performance across different classes at the tooth level indicate the necessity of further refinement.
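For context on the saliency maps mentioned above: gradient-weighted class activation mapping (Grad-CAM) weights the last convolutional feature maps by the spatially pooled gradients of the target class. The sketch below is a minimal, hedged illustration in PyTorch; the ResNet-18 backbone, input size, and layer choice are assumptions for demonstration, not details of the study's model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any CNN classifier would do; ResNet-18 is only a stand-in for the study's model.
model = models.resnet18(weights=None)
model.eval()

activations, gradients = {}, {}
layer = model.layer4[-1]  # last convolutional block

layer.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

x = torch.randn(1, 3, 224, 224)          # placeholder for a tooth-level radiograph crop
scores = model(x)
target = scores.argmax(dim=1).item()     # class whose evidence we want to localize
model.zero_grad()
scores[0, target].backward()

# Grad-CAM: weight each feature map by its average gradient, sum, and rectify.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized saliency map in [0, 1]
print(cam.shape)
```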

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

PubMed · Jul 1, 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department after a fall with suspicion of pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays alone and then using an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated alongside five other readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on the patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% from hip and 35.2% from pelvic ring) in 940 patients and 68 pelvic fractures (60% from hip and 40% from pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se-PELVIC = 77.25% to 83.73%; p < 0.001, Se-HIP = 93.24% to 96.49%; p < 0.001, and Se-PELVIC RING = 54.60% to 64.50%; p < 0.001). However, there was a significant decrease in specificity with AI assistance (Spe-PELVIC = 95.24% to 93.25%; p = 0.005 and Spe-HIP = 98.30% to 96.90%; p = 0.005). In the 100-patient sample, the two emergency physicians obtained improvements in fracture detection sensitivity across the pelvic area of +14.70% (p = 0.0011) and +10.29% (p < 0.007), respectively, without a significant decrease in specificity. For hip fractures, E1's sensitivity increased from 59.46% to 70.27% (p = 0.04), and E2's sensitivity increased from 78.38% to 86.49% (p = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (p = 0.012), and E2's sensitivity increased from 19.35% to 32.26% (p = 0.043). AI improved the diagnostic performance of emergency physicians and radiologists with limited experience in pelvic fracture screening.
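The sensitivity and specificity changes above compare paired readings of the same exams with and without AI assistance. As a minimal sketch of such a paired comparison (the arrays are hypothetical placeholders, and McNemar's test is a standard choice rather than the study's stated statistical method):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-exam data: ground truth and one reader's calls without / with AI (1 = fracture).
y_true     = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 1])
read_alone = np.array([1, 0, 1, 0, 1, 1, 0, 0, 0, 1])
read_ai    = np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 1])

def sens_spec(y, yhat):
    tp = np.sum((y == 1) & (yhat == 1)); fn = np.sum((y == 1) & (yhat == 0))
    tn = np.sum((y == 0) & (yhat == 0)); fp = np.sum((y == 0) & (yhat == 1))
    return tp / (tp + fn), tn / (tn + fp)

print("alone  (Se, Sp):", sens_spec(y_true, read_alone))
print("with AI (Se, Sp):", sens_spec(y_true, read_ai))

# McNemar's test on paired correctness; the discordant cells drive the statistic.
correct_alone = read_alone == y_true
correct_ai = read_ai == y_true
table = [[np.sum(correct_alone & correct_ai),  np.sum(correct_alone & ~correct_ai)],
         [np.sum(~correct_alone & correct_ai), np.sum(~correct_alone & ~correct_ai)]]
print(mcnemar(table, exact=True))
```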

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed · Jul 1, 2025
The rapid advancement of artificial intelligence (AI) has great potential to impact healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, utilizing the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT demonstrated high performance in identifying normal chest X-rays, with a sensitivity of 98.9%, specificity of 93.9%, and accuracy of 94.7%. However, the model's performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2%, specificity 93.7%) and pneumothorax (sensitivity 77.4%, specificity 89.1%), while performance for atelectasis and emphysema was lower. ChatGPT demonstrates potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies such as pneumonia. However, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
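As a hedged illustration of how per-category sensitivity, specificity, and accuracy like those reported above can be derived in a one-vs-rest fashion, the snippet below uses the dataset's category names but entirely synthetic predictions:

```python
import numpy as np

CATEGORIES = ["Atelectasis", "Effusion", "Emphysema", "Pneumothorax",
              "Pneumonia", "Mass", "No Finding"]

# Placeholder labels; in practice these would be the model's per-image outputs.
rng = np.random.default_rng(0)
y_true = rng.choice(CATEGORIES, size=200)
y_pred = rng.choice(CATEGORIES, size=200)

for c in CATEGORIES:
    t = y_true == c
    p = y_pred == c
    tp, fn = np.sum(t & p), np.sum(t & ~p)
    tn, fp = np.sum(~t & ~p), np.sum(~t & p)
    sens = tp / max(tp + fn, 1)          # guard against empty classes
    spec = tn / max(tn + fp, 1)
    acc = (tp + tn) / len(y_true)
    print(f"{c:13s} sens={sens:.3f} spec={spec:.3f} acc={acc:.3f}")
```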

Multi-label pathology editing of chest X-rays with a Controlled Diffusion Model.

Chu H, Qi X, Wang H, Liang Y

PubMed · Jul 1, 2025
Large-scale generative models have garnered significant attention in the field of medical imaging, particularly for image editing utilizing diffusion models. However, current research has predominantly concentrated on pathological editing involving single or a limited number of labels, making it challenging to achieve precise modifications. Inaccurate alterations may lead to substantial discrepancies between the generated and original images, thereby impacting the clinical applicability of these models. This paper presents a diffusion model with disentangling capabilities applied to chest X-ray image editing, incorporating a mask-based mechanism for bone and organ information. We successfully perform multi-label pathological editing of chest X-ray images without compromising the integrity of the original thoracic structure. The proposed technique comprises a chest X-ray image classifier and an intricate organ mask; the classifier supplies the essential feature labels that require disentangling for the stabilized diffusion model, while the organ mask facilitates directed and controllable edits to chest X-rays. We assessed the outcomes of our proposed algorithm, named Chest X-rays_Mpe, using MS-SSIM and CLIP scores alongside qualitative evaluations conducted by radiology experts. The results indicate that our approach surpasses existing algorithms across both quantitative and qualitative metrics.
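MS-SSIM, one of the quantitative metrics cited above, measures multi-scale structural similarity between the edited and original images. A minimal sketch using torchmetrics is shown below; the library choice and placeholder tensors are assumptions, as the paper does not state its implementation.

```python
import torch
from torchmetrics.image import MultiScaleStructuralSimilarityIndexMeasure

# Placeholder tensors standing in for original and edited chest X-rays, values in [0, 1].
original = torch.rand(4, 1, 256, 256)
edited = (original + 0.05 * torch.randn_like(original)).clamp(0, 1)

# Higher MS-SSIM means the edit preserved more of the original thoracic structure.
ms_ssim = MultiScaleStructuralSimilarityIndexMeasure(data_range=1.0)
print(f"MS-SSIM: {ms_ssim(edited, original).item():.4f}")
```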

Reconstruction-based approach for chest X-ray image segmentation and enhanced multi-label chest disease classification.

Hage Chehade A, Abdallah N, Marion JM, Hatt M, Oueidat M, Chauvet P

PubMed · Jul 1, 2025
U-Net is a commonly used model for medical image segmentation. However, when applied to chest X-ray images that show pathologies, it often fails to include these critical pathological areas in the generated masks. To address this limitation, we tackled the challenge of precise segmentation and mask generation by developing a novel approach, using CycleGAN, that encompasses the areas affected by pathologies within the region of interest, allowing the extraction of relevant radiomic features linked to pathologies. Furthermore, we adopted a feature selection approach to focus the analysis on the most significant features. The results of our proposed pipeline are promising, with an average accuracy of 92.05% and an average AUC of 89.48% for the multi-label classification of effusion and infiltration acquired from the ChestX-ray14 dataset, using the XGBoost model. Furthermore, applying our methodology to the classification of the 14 diseases in the ChestX-ray14 dataset resulted in an average AUC of 83.12%, outperforming previous studies. This research highlights the importance of effective pathological mask generation and feature selection for accurate classification of chest diseases. The promising results of our approach underscore its potential for broader applications in the classification of chest diseases.
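As a hedged sketch of the downstream step, the example below pairs selected features with an XGBoost classifier for multi-label prediction. The feature matrix and labels are synthetic placeholders, and the univariate selection method is an assumption rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Synthetic radiomic feature matrix and two binary labels standing in for effusion and infiltration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 120))
y = rng.integers(0, 2, size=(500, 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Simple univariate feature selection (fit on the first label as a simplification),
# then one XGBoost model per label via MultiOutputClassifier.
selector = SelectKBest(f_classif, k=30).fit(X_tr, y_tr[:, 0])
clf = MultiOutputClassifier(XGBClassifier(n_estimators=200, eval_metric="logloss"))
clf.fit(selector.transform(X_tr), y_tr)

probs = np.column_stack([p[:, 1] for p in clf.predict_proba(selector.transform(X_te))])
print("AUC per label:", roc_auc_score(y_te, probs, average=None))
```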

A quantitative tumor-wide analysis of morphological heterogeneity of colorectal adenocarcinoma.

Dragomir MP, Popovici V, Schallenberg S, Čarnogurská M, Horst D, Nenutil R, Bosman F, Budinská E

PubMed · Jul 1, 2025
The intertumoral and intratumoral heterogeneity of colorectal adenocarcinoma (CRC) at the morphologic level is poorly understood. Previously, we identified morphological patterns associated with CRC molecular subtypes and their distinct molecular motifs. Here we aimed to evaluate the heterogeneity of these patterns across CRC. Three pathologists evaluated dominant, secondary, and tertiary morphology on four sections from four different FFPE blocks per tumor in a pilot set of 22 CRCs. An AI-based image analysis tool was trained on these tumors to evaluate the morphologic heterogeneity on an extended set of 161 stage I-IV primary CRCs (n = 644 H&E sections). We found that most tumors had two or three different dominant morphotypes and the complex tubular (CT) morphotype was the most common. The CT morphotype showed no combinatorial preferences. Desmoplastic (DE) morphotype was rarely dominant and rarely combined with other dominant morphotypes. Mucinous (MU) morphotype was mostly combined with solid/trabecular (TB) and papillary (PP) morphotypes. Most tumors showed medium or high heterogeneity, but no associations were found between heterogeneity and clinical parameters. A higher proportion of DE morphotype was associated with higher T-stage, N-stage, distant metastases, AJCC stage, and shorter overall survival (OS) and relapse-free survival (RFS). A higher proportion of MU morphotype was associated with higher grade, right side, and microsatellite instability (MSI). PP morphotype was associated with earlier T- and N-stage, absence of metastases, and improved OS and RFS. CT was linked to left side, lower grade, and better survival in stage I-III patients. MSI tumors showed higher proportions of MU and TB, and lower CT and PP morphotypes. These findings suggest that morphological shifts accompany tumor progression and highlight the need for extensive sampling and AI-based analysis. In conclusion, we observed unexpectedly high intratumoral morphological heterogeneity of CRC and found that it is not heterogeneity per se, but the proportions of morphologies that are associated with clinical outcomes.

Convolutional neural network-based measurement of crown-implant ratio for implant-supported prostheses.

Zhang JP, Wang ZH, Zhang J, Qiu J

PubMed · Jul 1, 2025
Research has revealed that the crown-implant ratio (CIR) is a critical variable influencing the long-term stability of implant-supported prostheses in the oral cavity. Nevertheless, inefficient manual measurement and varied measurement methods have caused significant inconvenience in both clinical and scientific work. This study aimed to develop an automated system for detecting the CIR of implant-supported prostheses from radiographs, with the objective of enhancing the efficiency of radiograph interpretation for dentists. The method for measuring the CIR of implant-supported prostheses was based on convolutional neural networks (CNNs) and was designed to recognize implant-supported prostheses and identify key points around them. The experiment used You Only Look Once version 4 (YOLOv4) to locate the implant-supported prosthesis using a rectangular frame. Subsequently, two CNNs were used to identify key points. The first CNN determined the general position of the feature points, while the second CNN fine-tuned the output of the first network to precisely locate the key points. The network underwent testing on a self-built dataset, and the anatomic CIR and clinical CIR were obtained simultaneously through the vertical distance method. Key point accuracy was validated through Normalized Error (NE) values, and a set of data was selected to compare machine and manual measurement results. For statistical analysis, the paired t test was applied (α=.05). A dataset comprising 1106 images was constructed. The integration of multiple networks demonstrated satisfactory recognition of implant-supported prostheses and their surrounding key points. The average NE value for key points indicated a high level of accuracy. Statistical analysis confirmed no significant difference in the crown-implant ratio between machine and manual measurement results (P>.05). Machine learning proved effective in identifying implant-supported prostheses and detecting their crown-implant ratios. If applied as a clinical tool for analyzing radiographs, this approach can assist dentists in efficiently and accurately obtaining crown-implant ratio results.
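The vertical distance method mentioned above reduces the ratio to differences between keypoint y-coordinates. The following is a minimal, hypothetical sketch; the keypoint names, CIR definitions, and coordinates are illustrative assumptions, not the paper's actual outputs.

```python
def crown_implant_ratios(crown_tip_y, bone_level_y, implant_platform_y, implant_apex_y):
    """Compute anatomic and clinical crown-implant ratios by the vertical distance method.

    Assumed definitions for illustration (y grows downward, as in image coordinates):
      anatomic CIR = length above the implant platform / implant length below it
      clinical CIR = length above the bone level / length below the bone level
    """
    anatomic = (implant_platform_y - crown_tip_y) / (implant_apex_y - implant_platform_y)
    clinical = (bone_level_y - crown_tip_y) / (implant_apex_y - bone_level_y)
    return anatomic, clinical

# Example with hypothetical keypoint y-coordinates (pixels) from a detected prosthesis.
print(crown_implant_ratios(crown_tip_y=120, bone_level_y=300,
                           implant_platform_y=280, implant_apex_y=520))
```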

Efficient Chest X-Ray Feature Extraction and Feature Fusion for Pneumonia Detection Using Lightweight Pretrained Deep Learning Models

Chandola, Y., Uniyal, V., Bachheti, Y.

medRxiv preprint · Jun 30, 2025
Pneumonia is a respiratory condition characterized by inflammation of the alveolar sacs in the lungs, which disrupts normal oxygen exchange. The disease disproportionately impacts vulnerable populations, including young children (under five years of age) and elderly individuals (over 65 years), primarily due to their compromised immune systems. The mortality rate associated with pneumonia remains alarmingly high, particularly in low-resource settings where healthcare access is limited. Although effective prevention strategies exist, pneumonia continues to claim the lives of approximately one million children each year, earning its reputation as a "silent killer." Globally, an estimated 500 million cases are documented annually, underscoring its widespread public health burden. This study explores the design and evaluation of CNN-based computer-aided diagnostic (CAD) systems for efficient and accurate classification of chest radiographs into binary classes (Normal, Pneumonia). An augmented Kaggle dataset of 18,200 chest radiographs, split between normal and pneumonia cases, was utilized. A series of experiments evaluated lightweight CNN models (ShuffleNet, NASNet-Mobile, and EfficientNet-b0) using transfer learning, achieving accuracies of 90%, 88%, and 89%, respectively. Deep features were then extracted from each network and fused, and the fused representations were paired with SVM and XGBoost classifiers, achieving accuracies of 97% and 98%, respectively. The proposed research emphasizes the crucial role of CAD systems in advancing radiological diagnostics, delivering effective solutions that aid radiologists in distinguishing between diagnoses by combining feature fusion and feature selection with various machine learning algorithms and deep learning architectures.
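As a hedged sketch of the feature-fusion step, the example below extracts deep features from two lightweight torchvision backbones (ShuffleNetV2 and EfficientNet-B0 stand in for the paper's networks; NASNet-Mobile is not available in torchvision), concatenates them, and trains an SVM. The image batch and labels are random placeholders, not the Kaggle dataset.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Two lightweight backbones with their classification heads removed (feature extractors).
shuffle = models.shufflenet_v2_x1_0(weights=None); shuffle.fc = nn.Identity()
effnet = models.efficientnet_b0(weights=None); effnet.classifier = nn.Identity()
shuffle.eval(); effnet.eval()

def fused_features(batch):
    # Concatenate the 1024-d and 1280-d embeddings into one fused vector per image.
    with torch.no_grad():
        return torch.cat([shuffle(batch), effnet(batch)], dim=1).numpy()

# Placeholder radiograph batch and binary labels (Normal = 0, Pneumonia = 1).
X = fused_features(torch.randn(16, 3, 224, 224))
y = np.random.randint(0, 2, size=16)

clf = SVC(kernel="rbf").fit(X, y)
print("fused feature dim:", X.shape[1], "train acc:", clf.score(X, y))
```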

Self-Supervised Multiview Xray Matching

Mohamad Dabboussi, Malo Huard, Yann Gousseau, Pietro Gori

arXiv preprint · Jun 30, 2025
Accurate interpretation of multi-view radiographs is crucial for diagnosing fractures, muscular injuries, and other anomalies. While significant advances have been made in AI-based analysis of single images, current methods often struggle to establish robust correspondences between different X-ray views, an essential capability for precise clinical evaluations. In this work, we present a novel self-supervised pipeline that eliminates the need for manual annotation by automatically generating a many-to-many correspondence matrix between synthetic X-ray views. This is achieved using digitally reconstructed radiographs (DRR), which are automatically derived from unannotated CT volumes. Our approach incorporates a transformer-based training phase to accurately predict correspondences across two or more X-ray views. Furthermore, we demonstrate that learning correspondences among synthetic X-ray views can be leveraged as a pretraining strategy to enhance automatic multi-view fracture detection on real data. Extensive evaluations on both synthetic and real X-ray datasets show that incorporating correspondences improves performance in multi-view fracture classification.
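The many-to-many correspondence matrix described above can be illustrated, under simplifying assumptions, by projecting shared 3D anatomical points into two synthetic views and marking which 2D detections originate from the same point. The toy parallel projection and random landmarks below are hypothetical, not the authors' DRR pipeline.

```python
import numpy as np

# Hypothetical 3D landmarks from an unannotated CT volume (millimetres).
rng = np.random.default_rng(0)
points_3d = rng.uniform(-50, 50, size=(6, 3))

def project(points, angle_deg):
    """Toy parallel projection onto a detector rotated about the z-axis."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return (points @ rot.T)[:, [0, 2]]   # keep in-plane axes, drop depth

view_a = project(points_3d, 0)           # e.g. frontal view
view_b = project(points_3d, 90)          # e.g. lateral view

# Shuffle view B detections to mimic unordered outputs, then build the ground-truth
# correspondence matrix: entry (i, j) = 1 iff detection i in view A and detection j
# in view B stem from the same 3D point. This matrix serves as the self-supervised target.
perm = rng.permutation(len(points_3d))
view_b = view_b[perm]
correspondence = (perm[None, :] == np.arange(len(points_3d))[:, None]).astype(int)
print(correspondence)
```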