Time-series X-ray image prediction of dental skeleton treatment progress via neural networks.

Kwon SW, Moon JK, Song SC, Cha JY, Kim YW, Choi YJ, Lee JS

PubMed · Jul 29, 2025
Accurate prediction of skeletal changes during orthodontic treatment in growing patients remains challenging due to significant individual variability in craniofacial growth and treatment responses. Conventional methods, such as support vector regression and multilayer perceptrons, require multiple sequential radiographs to achieve acceptable accuracy. However, they are limited by increased radiation exposure, susceptibility to landmark identification errors, and the lack of visually interpretable predictions. To overcome these limitations, this study explored advanced generative approaches, including denoising diffusion probabilistic models (DDPMs), latent diffusion models (LDMs), and ControlNet, to predict future cephalometric radiographs using minimal input data. We evaluated three diffusion-based models (a DDPM utilizing three sequential cephalometric images, or 3-input DDPM; a single-image DDPM, or 1-input DDPM; and a single-image LDM) and a vision-based generative model, ControlNet, conditioned on patient-specific attributes such as age, sex, and orthodontic treatment type. Quantitative evaluations demonstrated that the 3-input DDPM achieved the highest numerical accuracy, whereas the single-image LDM delivered comparable predictive performance with significantly reduced clinical requirements. ControlNet also exhibited competitive accuracy, highlighting its potential effectiveness in clinical scenarios. These findings indicate that the single-image LDM and ControlNet offer practical solutions for personalized orthodontic treatment planning, reducing patient visits and radiation exposure while maintaining robust predictive accuracy.
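All of the diffusion variants compared above share the same forward-noising backbone; what differs is the conditioning (prior radiographs versus patient attributes). A minimal PyTorch sketch of that shared step follows; it is not the authors' code, and the conditioning noted in the comments is our assumption about how such models are typically wired.

```python
# Minimal sketch of the DDPM forward-noising step q(x_t | x_0) shared by the
# models above. Not the authors' implementation; conditioning is hypothetical.
import torch

def q_sample(x0, t, alphas_cumprod, noise=None):
    """Diffuse a clean (latent) image x0 to timestep t."""
    if noise is None:
        noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)  # cumulative product of alphas
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

# The denoiser would then be conditioned on either three prior cephalograms
# (3-input DDPM) or embeddings of age/sex/treatment type (ControlNet-style).
```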

ToothMaker: Realistic Panoramic Dental Radiograph Generation via Disentangled Control.

Yu W, Guo X, Li W, Liu X, Chen H, Yuan Y

PubMed · Jul 28, 2025
Generating high-fidelity dental radiographs is essential for training diagnostic models. Despite the development of numerous methods for other medical data, generative approaches in dental radiology remain unexplored. Due to the intricate tooth structures and specialized terminology, these methods often yield ambiguous tooth regions and incorrect dental concepts when applied to dentistry. In this paper, we make the first attempt to investigate diffusion-based teeth X-ray image generation and propose ToothMaker, a novel framework specifically designed for the dental domain. Firstly, to synthesize X-ray images that possess accurate tooth structures and realistic radiological styles simultaneously, we design a control-disentangled fine-tuning (CDFT) strategy. Specifically, we present two separate controllers to handle style and layout control respectively, and introduce a gradient-based decoupling method that optimizes each using its corresponding disentangled gradients. Secondly, to enhance the model's understanding of dental terminology, we propose a prior-disentangled guidance module (PDGM), enabling precise synthesis of dental concepts. It utilizes a large language model to decompose dental terminology into a series of meta-knowledge elements and performs interactions and refinements through a hypergraph neural network. These elements are then fed into the network to guide the generation of dental concepts. Extensive experiments demonstrate the high fidelity and diversity of the images synthesized by our approach. By incorporating the generated data, we achieve substantial performance improvements on downstream segmentation and visual question answering tasks, indicating that our method can greatly reduce the reliance on manually annotated data. Code will be publicly available at https://github.com/CUHK-AIM-Group/ToothMaker.
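The CDFT idea, as described, amounts to routing each controller only the gradient of its own objective. A hedged PyTorch sketch of one such update follows; the function and parameter names are ours, not from the released ToothMaker code, and the two losses are assumed to be given.

```python
# Hedged sketch of a control-disentangled update in the spirit of CDFT.
# Names are ours, not ToothMaker's; style_loss/layout_loss are assumed given.
import torch

def cdft_step(style_params, layout_params, style_loss, layout_loss, opt_s, opt_l):
    # Disentangled gradients: each controller sees only its own objective.
    g_style = torch.autograd.grad(style_loss, style_params, retain_graph=True)
    g_layout = torch.autograd.grad(layout_loss, layout_params)
    for p, g in zip(style_params, g_style):
        p.grad = g
    for p, g in zip(layout_params, g_layout):
        p.grad = g
    opt_s.step()
    opt_l.step()
    opt_s.zero_grad()
    opt_l.zero_grad()
```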

Evaluating the impact of view position in X-ray imaging for the classification of lung diseases.

Hage Chehade A, Abdallah N, Marion JM, Oueidat M, Chauvet P

PubMed · Jul 28, 2025
Clinical information associated with chest X-ray images, such as view position, patient age and gender, plays a crucial role in image interpretation, as it influences the visibility of anatomical structures and pathologies. However, most classification models using the ChestX-ray14 dataset relied solely on image data, disregarding the impact of these clinical variables. This study aims to investigate which clinical variable affects image characteristics and to assess its impact on classification performance. To explore the relationships between clinical variables and image characteristics, unsupervised clustering was applied to group images based on their similarities. A statistical analysis was then conducted on each cluster to examine its clinical composition, by analyzing the distribution of age, gender, and view position. An attention-based CNN model was developed separately for each value of the clinical variable with the greatest influence on image characteristics to assess its impact on lung disease classification. The analysis identified view position as the most influential variable affecting image characteristics. Accounting for this, the proposed approach achieved a weighted area under the curve (AUC) of 0.8176 for pneumonia classification, surpassing the base model (without considering view position) by 1.65% and outperforming previous studies by 6.76%. Furthermore, it demonstrated improved performance across all 14 diseases in the ChestX-ray14 dataset. The findings highlight the importance of considering view position when developing classification models for chest X-ray analysis. Accounting for this characteristic allows for more precise disease identification, demonstrating potential for broader clinical application in lung disease evaluation.
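The reported weighted AUC pools the view-position-specific models by group size. A small sketch of that aggregation with scikit-learn follows; the dictionary layout is our assumption, not the authors' code.

```python
# Sketch of pooling per-view-position models into one weighted AUC, as the
# study reports. The data layout here is an assumption, not the authors' code.
from sklearn.metrics import roc_auc_score

def weighted_auc(per_view):
    """per_view: {"PA": (y_true, y_score), "AP": (y_true, y_score)}."""
    total = sum(len(y_true) for y_true, _ in per_view.values())
    return sum(
        (len(y_true) / total) * roc_auc_score(y_true, y_score)
        for y_true, y_score in per_view.values()
    )
```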

Self-Assessment of acute rib fracture detection system from chest X-ray: Preliminary study for early radiological diagnosis.

Lee HK, Kim HS, Kim SG, Park JY

PubMed · Jul 28, 2025
Objective: Detecting and accurately diagnosing rib fractures in chest radiographs is a challenging and time-consuming task for radiologists. This study presents a novel deep learning system designed to automate the detection and segmentation of rib fractures in chest radiographs. Methods: The proposed method combines CenterNet with HRNet v2 for precise fracture region identification and HRNet-W48 with contextual representation to enhance rib segmentation. A dataset consisting of 1006 chest radiographs from a tertiary hospital in Korea was used, with a 7:2:1 split for training, validation, and testing. Results: The rib fracture detection component achieved a sensitivity of 0.7171, indicating its effectiveness in identifying fractures. Additionally, the rib segmentation performance was measured by a Dice score of 0.86, demonstrating its accuracy in delineating rib structures. Visual assessment results further highlight the model's capability to pinpoint fractures and segment ribs accurately. Conclusion: This innovative approach holds promise for improving rib fracture detection and rib segmentation, offering potential benefits in clinical practice for more efficient and accurate diagnosis in the field of medical image analysis.
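The Dice score used to report segmentation quality (0.86 here) has a standard definition on binary masks; a short NumPy sketch follows, purely illustrative rather than the study's code.

```python
# Illustrative sketch of the Dice score used to report rib segmentation
# quality above; computed on binary masks. Not the study's implementation.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```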

Evaluation of the impact of artificial intelligence-assisted image interpretation on the diagnostic performance of clinicians in identifying endotracheal tube position on plain chest X-ray: a multi-case multi-reader study.

Novak A, Ather S, Morgado ATE, Maskell G, Cowell GW, Black D, Shah A, Bowness JS, Shadmaan A, Bloomfield C, Oke JL, Johnson H, Beggs M, Gleeson F, Aylward P, Hafeez A, Elramlawy M, Lam K, Griffiths B, Harford M, Aaron L, Seeley C, Luney M, Kirkland J, Wing L, Qamhawi Z, Mandal I, Millard T, Chimbani M, Sharazi A, Bryant E, Haithwaite W, Medonica A

PubMed · Jul 28, 2025
Incorrectly placed endotracheal tubes (ETTs) can lead to serious clinical harm. Studies have demonstrated the potential for artificial intelligence (AI)-led algorithms to detect ETT placement on chest X-ray (CXR) images; however, their effect on clinician accuracy remains unexplored. This study measured the impact of an AI-assisted ETT detection algorithm on the ability of clinical staff to correctly identify ETT misplacement on CXR images. Four hundred CXRs of intubated adult patients were retrospectively sourced from the John Radcliffe Hospital (Oxford) and two other UK NHS hospitals. Images were de-identified and selected from a range of clinical settings, including the intensive care unit (ICU) and emergency department (ED). Each image was independently reported by a panel of thoracic radiologists, whose consensus classification of ETT placement (correct, too low [distal], or too high [proximal]) served as the reference standard for the study. Correct ETT position was defined as the tip located 3-7 cm above the carina, in line with established guidelines. Eighteen clinical readers of varying seniority from six clinical specialties were recruited across four NHS hospitals. Readers viewed the dataset using an online platform and recorded a blinded classification of ETT position for each image. After a four-week washout period, this was repeated with assistance from an AI-assisted image interpretation tool. Reader accuracy, reported confidence, and timings were measured during each study phase. In total, 14,400 image interpretations were undertaken. Pooled accuracy for tube placement classification improved from 73.6% to 77.4% (p = 0.002). Accuracy for identification of critically misplaced tubes increased from 79.3% to 89.0% (p = 0.001). Reader confidence improved with AI assistance, with no change in mean interpretation time (36 s per image). Use of assistive AI technology improved accuracy and confidence in interpreting ETT placement on CXR, especially for identification of critically misplaced tubes. AI assistance may provide a useful adjunct to support clinicians in identifying misplaced ETTs on CXR.
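The reference standard above is a simple distance rule that can be stated directly in code; the function below is an illustrative restatement, not part of the study's software.

```python
# Illustrative restatement of the study's reference standard: an ETT tip
# located 3-7 cm above the carina is correctly placed. Not the study's code.
def classify_ett_position(tip_to_carina_cm: float) -> str:
    if tip_to_carina_cm < 3.0:
        return "too low (distal)"    # tip too close to the carina
    if tip_to_carina_cm > 7.0:
        return "too high (proximal)"
    return "correct"
```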

Evaluating the accuracy of artificial intelligence-powered chest X-ray diagnosis for paediatric pulmonary tuberculosis (EVAL-PAEDTBAID): Study protocol for a multi-centre diagnostic accuracy study.

Aurangzeb B, Robert D, Baard C, Qureshi AA, Shaheen A, Ambreen A, McFarlane D, Javed H, Bano I, Chiramal JA, Workman L, Pillay T, Franckling-Smith Z, Mustafa T, Andronikou S, Zar HJ

PubMed · Jul 28, 2025
Diagnosing pulmonary tuberculosis (PTB) in children is challenging owing to paucibacillary disease, non-specific symptoms and signs, and challenges in microbiological confirmation. Chest X-ray (CXR) interpretation is fundamental for diagnosis and for classifying disease as severe or non-severe. In adults with PTB, there is substantial evidence showing the usefulness of artificial intelligence (AI) in CXR interpretation, but very limited data exist in children. A prospective two-stage study of children with presumed PTB will be conducted at three sites (one in South Africa and two in Pakistan). In stage I, eligible children will be enrolled and comprehensively investigated for PTB. A CXR radiological reference standard (RRS) will be established by an expert panel of blinded radiologists. CXRs will be classified as consistent or not consistent with PTB based on the RRS. Cases will be classified as confirmed, unconfirmed or unlikely PTB according to National Institutes of Health definitions. Data from 300 confirmed and unconfirmed PTB cases and 250 unlikely PTB cases will be collected. An AI-CXR algorithm (qXR) will be used to process CXRs. The primary endpoint will be the sensitivity and specificity of AI in detecting confirmed and unconfirmed PTB cases (composite reference standard); a secondary endpoint will be evaluated for confirmed PTB cases (microbiological reference standard). In stage II, a multi-reader multi-case study using a cross-over design will be conducted with 16 readers and 350 CXRs to assess the usefulness of AI-assisted CXR interpretation for readers (clinicians and radiologists). The primary endpoint will be the difference in the area under the receiver operating characteristic curve of readers with and without AI assistance in correctly classifying CXRs as per the RRS. The study has been approved by a local institutional ethics committee at each site. Results will be published in academic journals and presented at conferences. Data will be made available as an open-source database. PACTR202502517486411.
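The stage I primary endpoint reduces to a standard two-by-two analysis against the composite reference standard. A plain-Python sketch follows; the label convention (1 = confirmed/unconfirmed PTB, 0 = unlikely PTB) is our assumption, not the protocol's analysis code.

```python
# Sketch of the stage I endpoint: sensitivity and specificity of the AI read
# against the composite reference standard. Label convention is our assumption.
def sens_spec(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```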

Counterfactual Explanations in Medical Imaging: Exploring SPN-Guided Latent Space Manipulation

Julia Siekiera, Stefan Kramer

arXiv preprint · Jul 25, 2025
Artificial intelligence is increasingly leveraged across various domains to automate decision-making processes that significantly impact human lives. In medical image analysis, deep learning models have demonstrated remarkable performance. However, their inherent complexity makes them black-box systems, raising concerns about reliability and interpretability. Counterfactual explanations provide comprehensible insights into decision processes by presenting hypothetical "what-if" scenarios that alter model classifications. By examining input alterations, counterfactual explanations reveal the patterns that influence the decision-making process. Despite their potential, generating plausible counterfactuals that adhere to similarity constraints while providing human-interpretable explanations remains a challenge. In this paper, we investigate this challenge through a model-specific optimization approach. While deep generative models such as variational autoencoders (VAEs) exhibit significant generative power, probabilistic models like sum-product networks (SPNs) efficiently represent complex joint probability distributions. By modeling the likelihood of a semi-supervised VAE's latent space with an SPN, we leverage its dual role as both a latent space descriptor and a classifier for a given discrimination task. This formulation enables the optimization of latent space counterfactuals that are both close to the original data distribution and aligned with the target class distribution. We conduct an experimental evaluation on the CheXpert dataset. To evaluate the effectiveness of the SPN integration, our SPN-guided latent space manipulation is compared against a neural network baseline. Additionally, the trade-off between latent variable regularization and counterfactual quality is analyzed.
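The optimization the abstract describes balances two terms: flip the classification to the target class while keeping the latent code likely under the data distribution. A hedged PyTorch sketch follows; `classifier` and `spn_loglik` are stand-ins for the SPN's dual roles, not the authors' API.

```python
# Hedged sketch of latent-space counterfactual optimization; our own
# simplification. `classifier` and `spn_loglik` are hypothetical stand-ins.
import torch

def counterfactual(z0, target, classifier, spn_loglik,
                   steps=200, lr=0.05, lam=0.1):
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Push toward the target class, stay near the data manifold.
        loss = torch.nn.functional.cross_entropy(classifier(z), target) \
               - lam * spn_loglik(z).mean()
        loss.backward()
        opt.step()
    return z.detach()
```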

SP-Mamba: Spatial-Perception State Space Model for Unsupervised Medical Anomaly Detection

Rui Pan, Ruiying Lu

arXiv preprint · Jul 25, 2025
Radiography imaging protocols target specific anatomical regions, resulting in highly consistent images with recurrent structural patterns across patients. Recent advances in medical anomaly detection have demonstrated the effectiveness of CNN- and transformer-based approaches. However, CNNs exhibit limitations in capturing long-range dependencies, while transformers suffer from quadratic computational complexity. In contrast, Mamba-based models, leveraging superior long-range modeling, structural feature extraction, and linear computational efficiency, have emerged as a promising alternative. To capitalize on the inherent structural regularity of medical images, this study introduces SP-Mamba, a spatial-perception Mamba framework for unsupervised medical anomaly detection. Window-sliding prototype learning and a Circular-Hilbert scanning-based Mamba are introduced to better exploit consistent anatomical patterns and leverage spatial information for medical anomaly detection. Furthermore, we exploit the concentration and contrast characteristics of anomaly maps to improve anomaly detection. Extensive experiments on three diverse medical anomaly detection benchmarks confirm the proposed method's state-of-the-art performance, validating its efficacy and robustness. The code is available at https://github.com/Ray-RuiPan/SP-Mamba.
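As a rough illustration of the "concentration and contrast" characteristics mentioned above, one can summarize an anomaly (reconstruction-error) map by how spatially concentrated and how peaked it is. The sketch below is our interpretation, not the released SP-Mamba code.

```python
# Our interpretation of simple concentration and contrast statistics of an
# anomaly map; illustrative only, not the SP-Mamba implementation.
import numpy as np

def anomaly_stats(err_map):
    m = err_map / (err_map.sum() + 1e-8)            # normalize to a distribution
    ys, xs = np.indices(m.shape)
    cy, cx = (m * ys).sum(), (m * xs).sum()         # center of mass
    spread = (m * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum()  # low = concentrated
    contrast = err_map.max() / (err_map.mean() + 1e-8)      # peakedness
    return spread, contrast
```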

PerioDet: Large-Scale Panoramic Radiograph Benchmark for Clinical-Oriented Apical Periodontitis Detection

Xiaocheng Fang, Jieyi Cai, Huanyu Liu, Chengju Zhou, Minhua Lu, Bingzhi Chen

arXiv preprint · Jul 25, 2025
Apical periodontitis is a prevalent oral pathology that presents significant public health challenges. Despite advances in automated diagnostic systems across various medical fields, the development of Computer-Aided Diagnosis (CAD) applications for apical periodontitis is still constrained by the lack of a large-scale, high-quality annotated dataset. To address this issue, we release a large-scale panoramic radiograph benchmark called "PerioXrays", comprising 3,673 images and 5,662 meticulously annotated instances of apical periodontitis. To the best of our knowledge, this is the first benchmark dataset for automated apical periodontitis diagnosis. This paper further proposes a clinical-oriented apical periodontitis detection (PerioDet) paradigm, which jointly incorporates Background-Denoising Attention (BDA) and IoU-Dynamic Calibration (IDC) mechanisms to address the challenges posed by background noise and small targets in automated detection. Extensive experiments on the PerioXrays dataset demonstrate the superiority of PerioDet in advancing automated apical periodontitis detection. Additionally, a well-designed human-computer collaborative experiment underscores the clinical applicability of our method as an auxiliary diagnostic tool for professional dentists.
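For context on the IDC mechanism, the quantity being calibrated is the ordinary box IoU, which is notoriously unstable for small targets like apical lesions. A plain sketch of it follows; this is not PerioDet's implementation.

```python
# Plain box IoU, the quantity that PerioDet's IoU-Dynamic Calibration (IDC)
# adapts for small lesions. Hypothetical sketch, not the paper's code.
def box_iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```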

XVertNet: Unsupervised Contrast Enhancement of Vertebral Structures with Dynamic Self-Tuning Guidance and Multi-Stage Analysis.

Eidlin E, Hoogi A, Rozen H, Badarne M, Netanyahu NS

PubMed · Jul 25, 2025
Chest X-ray is one of the main diagnostic tools in emergency medicine, yet its limited ability to capture fine anatomical details can result in missed or delayed diagnoses. To address this, we introduce XVertNet, a novel deep-learning framework designed to significantly enhance the visualization of vertebral structures in X-ray images. Our framework introduces two key innovations: (1) an unsupervised learning architecture that eliminates reliance on manually labeled training data, a persistent bottleneck in medical imaging, and (2) a dynamic self-tuned internal guidance mechanism featuring an adaptive feedback loop for real-time image optimization. Extensive validation across four major public datasets revealed that XVertNet outperforms state-of-the-art enhancement methods, as demonstrated by improvements in evaluation measures such as entropy, the Tenengrad criterion, LPC-SI, TMQI, and PIQE. Furthermore, clinical validation conducted by two board-certified clinicians confirmed that the enhanced images enabled more sensitive examination of vertebral structural changes. The unsupervised nature of XVertNet facilitates immediate clinical deployment without additional training overhead. This represents a transformative advancement in emergency radiology, providing a scalable and time-efficient solution to enhance diagnostic accuracy in high-pressure clinical environments.
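Two of the listed no-reference measures are easy to state concretely. The sketch below shows image entropy and the Tenengrad (Sobel gradient energy) criterion as commonly defined; the paper's exact implementations are not given here.

```python
# Common definitions of image entropy and the Tenengrad sharpness criterion,
# two of the enhancement measures cited above. Illustrative, not XVertNet code.
import numpy as np
from scipy import ndimage

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()                  # intensity distribution
    return float(-(p * np.log2(p)).sum())

def tenengrad(img):
    gx = ndimage.sobel(img.astype(float), axis=1)    # horizontal gradients
    gy = ndimage.sobel(img.astype(float), axis=0)    # vertical gradients
    return float(np.mean(gx ** 2 + gy ** 2))
```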
