Evaluation of Artificial Intelligence-based diagnosis for facial fractures, advantages compared with conventional imaging diagnosis: a systematic review and meta-analysis.

Ju J, Qu Z, Qing H, Ding Y, Peng L

PubMed · Jul 15 2025
Currently, the application of convolutional neural networks (CNNs) in artificial intelligence (AI) for medical imaging diagnosis has emerged as a highly promising tool. In particular, AI-assisted diagnosis holds significant potential for orthopedic and emergency department physicians by improving diagnostic efficiency and enhancing the overall patient experience. This systematic review and meta-analysis aimed to assess the application of AI in diagnosing facial fractures and to evaluate its diagnostic performance. The study adhered to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and PRISMA-Diagnostic Test Accuracy (PRISMA-DTA). A comprehensive literature search was conducted in the PubMed, Cochrane Library, and Web of Science databases to identify original articles published up to December 2024. The risk of bias and applicability of the included studies were assessed using the QUADAS-2 tool. The results were analyzed using a Summary Receiver Operating Characteristic (SROC) curve. A total of 16 studies were included in the analysis, with contingency tables extracted from 11 of them. The pooled sensitivity was 0.889 (95% CI: 0.844-0.922), and the pooled specificity was 0.888 (95% CI: 0.834-0.926). The area under the SROC curve was 0.911. In the subgroup analysis of nasal and mandibular fractures, the pooled sensitivity for nasal fractures was 0.851 (95% CI: 0.806-0.887), and the pooled specificity was 0.883 (95% CI: 0.862-0.902). For mandibular fractures, the pooled sensitivity was 0.905 (95% CI: 0.836-0.947), and the pooled specificity was 0.895 (95% CI: 0.824-0.940). AI can be developed as an auxiliary tool to assist clinicians in diagnosing facial fractures. The results demonstrate high overall sensitivity and specificity, along with robust performance reflected by the high area under the SROC curve. This study was prospectively registered on PROSPERO (ID: CRD42024618650, registration date: 10 Dec 2024): https://www.crd.york.ac.uk/PROSPERO/view/CRD42024618650.
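As background to the pooled figures above, the sketch below shows how per-study sensitivity and specificity are derived from 2x2 contingency tables before pooling. The counts are placeholders, and the count-summing pool is only illustrative; the review itself fits a proper SROC/bivariate model.

```python
# Minimal sketch: per-study sensitivity/specificity from 2x2 contingency tables.
# The counts are hypothetical, not data from the review; the naive pooled
# estimate is only for illustration of the arithmetic involved.

studies = [
    # (TP, FP, FN, TN) per study -- hypothetical values
    (45, 5, 6, 44),
    (90, 12, 9, 101),
    (60, 8, 7, 55),
]

for i, (tp, fp, fn, tn) in enumerate(studies, start=1):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    print(f"study {i}: sensitivity={sens:.3f}, specificity={spec:.3f}")

# Naive pooled estimate by summing counts; a real meta-analysis would fit a
# bivariate random-effects model instead.
tp, fp, fn, tn = (sum(x) for x in zip(*studies))
print(f"pooled (naive): sensitivity={tp/(tp+fn):.3f}, specificity={tn/(tn+fp):.3f}")
```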

Semantically Informed Salient Regions Guided Radiology Report Generation

Zeyi Hou, Zeqiang Wei, Ruixin Yan, Ning Lang, Xiuzhuang Zhou

arXiv preprint · Jul 15 2025
Recent advances in automated radiology report generation from chest X-rays using deep learning algorithms have the potential to significantly reduce the arduous workload of radiologists. However, because of the inherent data bias in radiology images, where abnormalities are typically subtle and sparsely distributed, existing methods often produce fluent yet medically inaccurate reports, limiting their applicability in clinical practice. To address this issue, we propose a Semantically Informed Salient Regions-guided (SISRNet) report generation method. Specifically, our approach explicitly identifies salient regions with medically critical characteristics using fine-grained cross-modal semantics. SISRNet then systematically focuses on these high-information regions during both image modeling and report generation, effectively capturing subtle abnormal findings, mitigating the negative impact of data bias, and ultimately generating clinically accurate reports. Compared to its peers, SISRNet demonstrates superior performance on the widely used IU-Xray and MIMIC-CXR datasets.
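The abstract describes selecting salient regions via cross-modal semantics without implementation detail, so the snippet below is only a hedged sketch of one plausible mechanism: ranking image patch embeddings by similarity to a semantic text embedding and keeping the top-k. All names and shapes are hypothetical and not taken from the SISRNet paper.

```python
import torch

def select_salient_patches(patch_feats, semantic_emb, k=16):
    """Illustrative only: rank patch features by cosine similarity to a
    cross-modal semantic embedding and keep the top-k as 'salient regions'.
    The actual SISRNet mechanism is not specified at this level of detail."""
    patch_norm = torch.nn.functional.normalize(patch_feats, dim=-1)   # (N, D)
    sem_norm = torch.nn.functional.normalize(semantic_emb, dim=-1)    # (D,)
    scores = patch_norm @ sem_norm                                    # (N,)
    top_idx = scores.topk(k).indices
    return patch_feats[top_idx], top_idx

# Hypothetical usage: 196 patch embeddings of dim 768 and one text embedding.
patches = torch.randn(196, 768)
text_emb = torch.randn(768)
salient, idx = select_salient_patches(patches, text_emb, k=16)
print(salient.shape, idx[:5])
```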

Learning quality-guided multi-layer features for classifying visual types with ball sports application.

Huang X, Liu T, Yu Y

PubMed · Jul 15 2025
Breast cancer is currently one of the leading causes of death among women, which highlights the need for precise X-ray image analysis in the medical and imaging fields. In this study, we present an advanced perceptual deep learning framework that extracts key features from large X-ray datasets, mimicking human visual perception. We begin with a large dataset of breast cancer images and apply the BING objectness measure to identify relevant visual and semantic patches. To manage the large number of object-aware patches, we propose a new ranking technique for the weakly annotated setting that identifies the patches most aligned with human visual judgment. These key patches are then aggregated to extract meaningful features from each image. We use these features to train a multi-class SVM classifier, which categorizes the images into breast cancer stages. The effectiveness of our deep learning model is demonstrated through extensive comparative analysis and visual examples.
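To make the patch-aggregation and classification step concrete, here is a minimal scikit-learn sketch with synthetic features standing in for the ranked, BING-derived patch descriptors; the mean aggregation and RBF-SVM settings are assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical stand-in: each image is represented by the mean of its top-ranked
# patch descriptors (random 256-d vectors here); labels are stage classes.
n_images, n_patches, dim, n_classes = 300, 20, 256, 4
patch_feats = rng.normal(size=(n_images, n_patches, dim))
image_feats = patch_feats.mean(axis=1)          # aggregate patches per image
labels = rng.integers(0, n_classes, size=n_images)

X_train, X_test, y_train, y_test = train_test_split(
    image_feats, labels, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0)                  # multi-class handled one-vs-one
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```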

A Clinically-Informed Framework for Evaluating Vision-Language Models in Radiology Report Generation: Taxonomy of Errors and Risk-Aware Metric

Guan, H., Hou, P. C., Hong, P., Wang, L., Zhang, W., Du, X., Zhou, Z., Zhou, L.

medRxiv preprint · Jul 14 2025
Recent advances in vision-language models (VLMs) have enabled automatic radiology report generation, yet current evaluation methods remain limited to general-purpose NLP metrics or coarse classification-based clinical scores. In this study, we propose a clinically informed evaluation framework for VLM-generated radiology reports that goes beyond traditional performance measures. We define a taxonomy of 12 radiology-specific error types, each annotated with clinical risk levels (low, medium, high) in collaboration with physicians. Using this framework, we conduct a comprehensive error analysis of three representative VLMs, i.e., DeepSeek VL2, CXR-LLaVA, and CheXagent, on 685 gold-standard, expert-annotated MIMIC-CXR cases. We further introduce a risk-aware evaluation metric, the Clinical Risk-weighted Error Score for Text-generation (CREST), to quantify safety impact. Our findings reveal critical model vulnerabilities, common error patterns, and condition-specific risk profiles, offering actionable insights for model development and deployment. This work establishes a safety-centric foundation for evaluating and improving medical report generation models. The source code of our evaluation framework, including CREST computation and error taxonomy analysis, is available at https://github.com/guanharry/VLM-CREST.
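The exact CREST formula lives in the authors' repository; the snippet below is only a hedged illustration of the general idea of weighting errors by clinical risk level and averaging over reports, with hypothetical weights and normalization.

```python
# Illustrative risk-weighted error aggregation. The real CREST definition is in
# the authors' repository; the weights and normalization here are assumptions.
RISK_WEIGHTS = {"low": 1.0, "medium": 2.0, "high": 4.0}  # hypothetical weights

def risk_weighted_error_score(errors, n_reports):
    """errors: list of (error_type, risk_level) found across generated reports."""
    total = sum(RISK_WEIGHTS[risk] for _, risk in errors)
    return total / max(n_reports, 1)   # average weighted error burden per report

example_errors = [
    ("missed finding", "high"),
    ("wrong laterality", "medium"),
    ("stylistic issue", "low"),
]
print(risk_weighted_error_score(example_errors, n_reports=10))
```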

Three-dimensional high-content imaging of unstained soft tissue with subcellular resolution using a laboratory-based multi-modal X-ray microscope

Esposito, M., Astolfo, A., Zhou, Y., Buchanan, I., Teplov, A., Endrizzi, M., Egido Vinogradova, A., Makarova, O., Divan, R., Tang, C.-M., Yagi, Y., Lee, P. D., Walsh, C. L., Ferrara, J. D., Olivo, A.

medRxiv preprint · Jul 14 2025
With increasing interest in studying biological systems across spatial scales, from centimetres down to nanometres, histology continues to be the gold standard for tissue imaging at cellular resolution, providing an essential bridge between macroscopic and nanoscopic analysis. However, its inherently destructive and two-dimensional nature limits its ability to capture the full three-dimensional complexity of tissue architecture. Here we show that phase-contrast X-ray microscopy can enable three-dimensional virtual histology with subcellular resolution. This technique provides direct quantification of electron density without restrictive assumptions, allowing direct characterisation of cellular nuclei in a standard laboratory setting. By combining high spatial resolution and soft-tissue contrast with automated segmentation of cell nuclei, we demonstrated virtual H&E staining using machine learning-based style transfer, yielding volumetric datasets compatible with existing histopathological analysis tools. Furthermore, by integrating electron density with the dark-field contrast channel's sensitivity to nanometric features, we achieve stain-free, high-content imaging capable of distinguishing nuclei and extracellular matrix.

Feasibility study of fully automatic measurement of adenoid size on lateral neck and head radiographs using deep learning.

Hao D, Tang L, Li D, Miao S, Dong C, Cui J, Gao C, Li J

PubMed · Jul 14 2025
The objective and reliable quantification of adenoid size is pivotal for precise clinical diagnosis and the formulation of effective treatment strategies. Conventional manual measurement techniques, however, are often labor-intensive and time-consuming. The aim of this study was to develop and validate a fully automated system for measuring adenoid size on lateral head and neck radiographs using deep learning (DL). In this retrospective study, we analyzed 711 lateral head and neck radiographs collected from two centers between February and July 2023. A DL-based adenoid size measurement system was developed based on Fujioka's method. The system employed the RTMDet and RTMPose networks for accurate landmark detection, and mathematical formulas were applied to determine adenoid size. To evaluate the consistency and reliability of the system, we employed the intra-class correlation coefficient (ICC), mean absolute difference (MAD), and Bland-Altman plots as key assessment metrics. The DL-based system exhibited high reliability in the prediction of adenoid, nasopharynx, and adenoid-nasopharyngeal ratio measurements, showing strong agreement with the reference standard. The results indicated an ICC for adenoid measurements of 0.902 [95% CI, 0.872-0.925], with a MAD of 1.189 and a root mean square (RMS) of 1.974. For nasopharynx measurements, the ICC was 0.868 [95% CI, 0.828-0.899], with a MAD of 1.671 and an RMS of 1.916. Additionally, the adenoid-nasopharyngeal ratio measurements yielded an ICC of 0.911 [95% CI, 0.883-0.932], a MAD of 0.054, and an RMS of 0.076. The developed DL-based system effectively automates the measurement of the adenoid, nasopharynx, and adenoid-nasopharyngeal ratio on lateral head and neck radiographs with high reliability.
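To illustrate the kind of mathematical formulas applied after landmark detection, the sketch below computes a Fujioka-style adenoid-nasopharyngeal (A/N) ratio from predicted 2-D landmark coordinates. The landmark names, coordinates, and simplified geometry are assumptions for illustration, not the study's exact implementation.

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b (2-D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    return num / math.hypot(bx - ax, by - ay)

def an_ratio(adenoid_peak, basiocciput_p1, basiocciput_p2, hard_palate, synchondrosis):
    """Simplified Fujioka-style computation:
    A = perpendicular distance from the adenoid convexity to the basiocciput line,
    N = distance from the posterior hard palate to the sphenobasioccipital synchondrosis."""
    A = point_to_line_distance(adenoid_peak, basiocciput_p1, basiocciput_p2)
    N = math.dist(hard_palate, synchondrosis)
    return A, N, A / N

# Hypothetical landmark coordinates (pixels) as a pose network might output.
A, N, ratio = an_ratio((120, 80), (150, 40), (170, 130), (60, 150), (140, 120))
print(f"A={A:.1f}px, N={N:.1f}px, A/N ratio={ratio:.2f}")
```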

Pathological omics prediction of early and advanced colon cancer based on artificial intelligence model.

Wang Z, Wu Y, Li Y, Wang Q, Yi H, Shi H, Sun X, Liu C, Wang K

PubMed · Jul 14 2025
Artificial intelligence (AI) models based on pathological slides have great potential to assist pathologists in disease diagnosis and have become an important research direction in the field of medical image analysis. The aim of this study was to develop an AI model based on whole-slide images to predict the stage of colon cancer. A total of 100 pathological slides from colon cancer patients were collected as the training set, and 421 colon cancer pathological slides were downloaded from The Cancer Genome Atlas (TCGA) database as the external validation set. CellProfiler and CLAM were used to extract pathological features, and machine learning and deep learning algorithms were used to construct prediction models. The area under the curve (AUC) of the best machine learning model was 0.78 in the internal test set and 0.68 in the external test set. The deep learning model achieved an AUC of 0.889 and an accuracy of 0.854 in the internal test set, and an AUC of 0.700 in the external test set. The prediction model shows potential to generalize as part of a combined pathology-omics diagnostic workflow. Compared with the machine learning models, the deep learning model recognized image features more accurately and achieved better overall performance.
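As a generic illustration of the machine learning arm (features extracted per slide, classifier fit, AUC reported), here is a small scikit-learn sketch with random features standing in for CellProfiler/CLAM outputs; the model choice and data shapes are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical stand-in for slide-level features extracted with CellProfiler/CLAM:
# 100 slides, 64 features, binary label (early vs. advanced stage).
X = rng.normal(size=(100, 64))
y = rng.integers(0, 2, size=100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"internal test AUC: {auc:.3f}")
```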

A generative model uses healthy and diseased image pairs for pixel-level chest X-ray pathology localization.

Dong K, Cheng Y, He K, Suo J

PubMed · Jul 14 2025
Medical artificial intelligence (AI) offers potential for automatic pathological interpretation, but a practicable AI model demands both pixel-level accuracy and high explainability for diagnosis. The construction of such models relies on substantial training data with fine-grained labelling, which is impractical in real applications. To circumvent this barrier, we propose a prompt-driven constrained generative model to produce anatomically aligned healthy and diseased image pairs and learn a pathology localization model in a supervised manner. This paradigm provides high-fidelity labelled data and addresses the lack of chest X-ray images with labelling at fine scales. Benefitting from the emerging text-driven generative model and the incorporated constraint, our model presents promising localization accuracy for subtle pathologies, high explainability for clinical decisions, and good transferability to many unseen pathological categories such as new prompts and mixed pathologies. These advantageous features establish our model as a promising solution to assist chest X-ray analysis. In addition, the proposed approach may also inform other tasks that lack massive training data and where manual labelling is time-consuming.
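The supervision idea, aligned healthy/diseased pairs yielding pixel-level labels, can be conveyed with a crude difference-thresholding sketch. This is not the paper's constrained generative pipeline; the threshold and image shapes are assumptions purely for illustration.

```python
import numpy as np

def pseudo_mask_from_pair(healthy, diseased, threshold=0.3):
    """Crude illustration: where an aligned diseased image departs from its
    healthy counterpart, mark those pixels as pathology. The paper's generative
    model and constraints are far more involved; this only conveys the pairing idea."""
    diff = np.abs(diseased.astype(np.float32) - healthy.astype(np.float32))
    diff /= diff.max() + 1e-8
    return (diff > threshold).astype(np.uint8)

# Hypothetical aligned pair (e.g. 256x256 normalized chest X-rays).
healthy = np.random.rand(256, 256).astype(np.float32)
diseased = healthy.copy()
diseased[100:130, 120:160] += 0.5   # injected "lesion" for demonstration
mask = pseudo_mask_from_pair(healthy, diseased)
print("labelled pixels:", int(mask.sum()))
```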

Landmark Detection for Medical Images using a General-purpose Segmentation Model

Ekaterina Stansfield, Jennifer A. Mitterer, Abdulrahman Altahhan

arXiv preprint · Jul 13 2025
Radiographic images are a cornerstone of medical diagnostics in orthopaedics, with anatomical landmark detection serving as a crucial intermediate step for information extraction. General-purpose foundational segmentation models, such as SAM (Segment Anything Model), do not support landmark segmentation out of the box and require prompts to function. However, in medical imaging, the prompts for landmarks are highly specific. Since SAM has not been trained to recognize such landmarks, it cannot generate accurate landmark segmentations for diagnostic purposes. Even MedSAM, a medically adapted variant of SAM, has been trained to identify larger anatomical structures, such as organs and their parts, and lacks the fine-grained precision required for orthopaedic pelvic landmarks. To address this limitation, we propose leveraging another general-purpose, non-foundational model: YOLO. YOLO excels in object detection and can provide bounding boxes that serve as input prompts for SAM. While YOLO is efficient at detection, it is significantly outperformed by SAM in segmenting complex structures. In combination, these two models form a reliable pipeline capable of segmenting not only a small pilot set of eight anatomical landmarks but also an expanded set of 72 landmarks and 16 regions with complex outlines, such as the femoral cortical bone and the pelvic inlet. By using YOLO-generated bounding boxes to guide SAM, we trained the hybrid model to accurately segment orthopaedic pelvic radiographs. Our results show that the proposed combination of YOLO and SAM yields excellent performance in detecting anatomical landmarks and intricate outlines in orthopaedic pelvic radiographs.
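The described pipeline, YOLO boxes used as SAM prompts, can be wired up roughly as follows, assuming the ultralytics and segment-anything packages and locally available weights; the checkpoint names and file paths are hypothetical, and the authors' training setup will differ.

```python
# Sketch of a YOLO-to-SAM prompting pipeline, assuming the 'ultralytics' and
# 'segment_anything' packages plus locally available weights.
import cv2
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("landmark_detector.pt")                       # hypothetical fine-tuned weights
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth") # hypothetical SAM checkpoint
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("pelvis_xray.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One bounding box per detected landmark/region becomes a SAM box prompt.
for box in detector(image)[0].boxes.xyxy.cpu().numpy():
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    print("landmark mask pixels:", int(masks[0].sum()), "score:", float(scores[0]))
```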

AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs)

Abdul Manaf, Nimra Mughal

arXiv preprint · Jul 13 2025
Pneumonia is a leading cause of mortality in children under five, making accurate chest X-ray diagnosis essential. This study presents a machine learning-based Pediatric Chest Pneumonia Classification System to assist healthcare professionals in diagnosing pneumonia from chest X-ray images. The CNN-based model was trained on 5,863 labeled chest X-ray images of children aged 0-5 years from the Guangzhou Women and Children's Medical Center. To address limited data, we applied augmentation techniques (rotation, zooming, shear, horizontal flipping) and employed GANs to generate synthetic images, mitigating class imbalance. The system achieved optimal performance when trained on the combined original, augmented, and GAN-generated data, evaluated through accuracy and F1 score metrics. The final model was deployed via a Flask web application, enabling real-time classification with probability estimates. The results demonstrate the potential of deep learning and GANs to improve diagnostic accuracy and efficiency for pediatric pneumonia classification, which is particularly valuable in resource-limited clinical settings. Code: https://github.com/AbdulManaf12/Pediatric-Chest-Pneumonia-Classification
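The listed augmentations (rotation, zooming, shear, horizontal flipping) map naturally onto Keras' ImageDataGenerator; the sketch below shows that step only, with parameter values and the directory layout as assumptions, and the GAN-based synthesis omitted.

```python
# Sketch of the augmentation step using Keras' ImageDataGenerator; parameter
# values and directory layout are assumptions, and the GAN synthesis is omitted.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,       # rotation
    zoom_range=0.1,          # zooming
    shear_range=0.1,         # shear
    horizontal_flip=True,    # horizontal flipping
)

train_flow = augmenter.flow_from_directory(
    "chest_xray/train",      # hypothetical layout: one subfolder per class
    target_size=(224, 224),
    color_mode="grayscale",
    class_mode="binary",
    batch_size=32,
)
```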