Page 31 of 46453 results

Determination of Kennedy's classification in panoramic X-rays by automated tooth labeling.

Meine H, Metzger MC, Weingart P, Wüster J, Schmelzeisen R, Rörich A, Georgii J, Brandenburg LS

PubMed · Jun 24 2025
Panoramic X-rays (PX) are extensively utilized in dental and maxillofacial diagnostics, offering comprehensive imaging of teeth and surrounding structures. This study investigates the automatic determination of Kennedy's classification in partially edentulous jaws. A retrospective study involving 209 PX images from 206 patients was conducted. The established Mask R-CNN, a deep learning-based instance segmentation model, was trained for the automatic detection, position labeling (according to the Fédération Dentaire Internationale (FDI) scheme), and segmentation of teeth in PX. Subsequent post-processing steps filtered duplicate outputs by position label and by geometric overlap. Finally, a rule-based determination of Kennedy's class of partially edentulous jaws was performed. In fivefold cross-validation, Kennedy's classification was correctly determined in 83.0% of cases, with the most common errors arising from the mislabeling of morphologically similar teeth. The underlying algorithm demonstrated high sensitivity (97.1%) and precision (98.1%) in tooth detection, with an F1 score of 97.6%. FDI position label accuracy was 94.7%. Ablation studies indicated that post-processing steps, such as duplicate filtering, significantly improved algorithm performance. Our findings show that automatic dentition analysis in PX images can be extended to include clinically relevant jaw classification, reducing the workload associated with manual labeling and classification.
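The final rule-based step maps the detected dentition to a Kennedy class. A simplified sketch of such a rule set, under the assumption of a 16-position arch indexed across one jaw (the actual system works on FDI labels, and Kennedy modification spaces per Applegate's rules are ignored here):

```python
def kennedy_class(present_teeth, arch_size=16):
    """Simplified rule-based Kennedy classification of one partially
    edentulous jaw. Positions are numbered 1..arch_size across the arch
    (an illustrative stand-in for FDI numbering); present_teeth is the
    set of positions where a tooth was detected."""
    present = sorted(present_teeth)
    missing = [t for t in range(1, arch_size + 1) if t not in present_teeth]
    if not present or not missing:
        return None  # edentulous or fully dentate: no Kennedy class
    # A side has a "free end" if the most distal tooth on that side is missing.
    left_free = present[0] > 1
    right_free = present[-1] < arch_size
    if left_free and right_free:
        return "I"   # bilateral distal extension
    if left_free or right_free:
        return "II"  # unilateral distal extension
    # All gaps are tooth-bounded: group consecutive missing positions.
    gaps, run = [], [missing[0]]
    for t in missing[1:]:
        if t == run[-1] + 1:
            run.append(t)
        else:
            gaps.append(run)
            run = [t]
    gaps.append(run)
    midline = arch_size // 2  # between positions 8 and 9
    if len(gaps) == 1 and gaps[0][0] <= midline < gaps[0][-1]:
        return "IV"  # single bounded gap crossing the midline
    return "III"     # bounded gap(s) not crossing the midline
```

Once tooth positions are labeled, the classification itself is cheap; the 83.0% accuracy above is dominated by upstream position-labeling errors, not by the rules.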

NAADA: A Noise-Aware Attention Denoising Autoencoder for Dental Panoramic Radiographs

Khuram Naveed, Bruna Neves de Freitas, Ruben Pauwels

arXiv preprint · Jun 24 2025
Convolutional denoising autoencoders (DAEs) are powerful tools for image restoration. However, they inherit a key limitation of convolutional neural networks (CNNs): they tend to recover low-frequency features, such as smooth regions, more effectively than high-frequency details. This leads to the loss of fine details, which is particularly problematic in dental radiographs where preserving subtle anatomical structures is crucial. While self-attention mechanisms can help mitigate this issue by emphasizing important features, conventional attention methods often prioritize features corresponding to cleaner regions and may overlook those obscured by noise. To address this limitation, we propose a noise-aware self-attention method, which allows the model to effectively focus on and recover key features even within noisy regions. Building on this approach, we introduce the noise-aware attention-enhanced denoising autoencoder (NAADA) network for enhancing noisy panoramic dental radiographs. Compared with recent, much heavier state-of-the-art methods such as Uformer and MResDNN, our method improves the reconstruction of fine details, ensuring better image quality and diagnostic accuracy.
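The abstract does not give the exact formulation. One toy reading of "noise-aware" attention is to offset each key's attention score by its estimated noise level, so that positions in noisy regions are not suppressed merely because noise weakens their similarity to the query. A 1-D scalar sketch under that assumption (not the paper's actual mechanism):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def noise_aware_attention(q, k, v, noise, gamma=1.0):
    """Toy 1-D scalar self-attention. Standard scores q[i]*k[j] are offset
    by gamma*noise[j]; with gamma=0 this reduces to plain attention, and
    gamma>0 boosts attention toward keys in noisier positions."""
    out = []
    for qi in q:
        scores = [qi * kj + gamma * nj for kj, nj in zip(k, noise)]
        w = softmax(scores)
        out.append(sum(wi * vi for wi, vi in zip(w, v)))
    return out
```

The single hyperparameter `gamma` here is an assumption; the point is only that the attention map becomes an explicit function of a noise estimate rather than of feature similarity alone.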

Angio-Diff: Learning a Self-Supervised Adversarial Diffusion Model for Angiographic Geometry Generation

Zhifeng Wang, Renjiao Yi, Xin Wen, Chenyang Zhu, Kai Xu, Kunlun He

arXiv preprint · Jun 24 2025
Vascular diseases pose a significant threat to human health, with X-ray angiography established as the gold standard for diagnosis, allowing for detailed observation of blood vessels. However, angiographic X-rays expose personnel and patients to higher radiation doses than non-angiographic X-rays, making modality translation from non-angiographic to angiographic X-rays desirable. Data-driven deep approaches are hindered by the lack of paired large-scale X-ray angiography datasets. This makes high-quality vascular angiography synthesis crucial, yet it remains challenging. We find that current medical image synthesis methods primarily operate at the pixel level and struggle to adapt to the complex geometric structure of blood vessels, resulting in unsatisfactory synthesis quality, such as disconnections or unnatural curvatures. To overcome this issue, we propose a self-supervised method via diffusion models to transform non-angiographic X-rays into angiographic X-rays, mitigating data shortages for data-driven approaches. Our model comprises a diffusion model that learns the distribution of vascular data from diffusion latent, a generator for vessel synthesis, and a mask-based adversarial module. To enhance geometric accuracy, we propose a parametric vascular model to fit the shape and distribution of blood vessels. The proposed method contributes a pipeline and a synthetic dataset for X-ray angiography. We conducted extensive comparative and ablation experiments to evaluate Angio-Diff. The results demonstrate that our method achieves state-of-the-art performance in synthetic angiography image quality and more accurately synthesizes the geometric structure of blood vessels. The code is available at https://github.com/zfw-cv/AngioDiff.
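The abstract does not specify the parametric vascular model. As a hedged illustration of what "parametric" means here, a single vessel segment can be represented by a quadratic Bézier centerline with a linearly tapering radius, which already captures smooth curvature and caliber change:

```python
def bezier_vessel(p0, p1, p2, radius0, radius1, n=20):
    """Hypothetical parametric vessel segment: a quadratic Bezier centerline
    through control points p0, p1, p2 (2-D tuples) with a linearly tapering
    radius. A minimal stand-in for a fitted vascular shape model; not the
    paper's actual parameterization."""
    pts = []
    for i in range(n + 1):
        t = i / n
        # Quadratic Bezier: B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        r = (1 - t) * radius0 + t * radius1
        pts.append((x, y, r))
    return pts
```

Fitting such parameters to segmented vessels, rather than predicting pixels directly, is one way a synthesis pipeline can avoid disconnections and unnatural curvatures by construction.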

Benchmarking Foundation Models and Parameter-Efficient Fine-Tuning for Prognosis Prediction in Medical Imaging

Filippo Ruffini, Elena Mulero Ayllon, Linlin Shen, Paolo Soda, Valerio Guarrasi

arXiv preprint · Jun 23 2025
Artificial Intelligence (AI) holds significant promise for improving prognosis prediction in medical imaging, yet its effective application remains challenging. In this work, we introduce a structured benchmark explicitly designed to evaluate and compare the transferability of Convolutional Neural Networks and Foundation Models in predicting clinical outcomes in COVID-19 patients, leveraging diverse publicly available Chest X-ray datasets. Our experimental methodology extensively explores a wide set of fine-tuning strategies, encompassing traditional approaches such as Full Fine-Tuning and Linear Probing, as well as advanced Parameter-Efficient Fine-Tuning methods including Low-Rank Adaptation, BitFit, VeRA, and IA3. The evaluations were conducted across multiple learning paradigms, including both extensive full-data scenarios and more clinically realistic Few-Shot Learning settings, which are critical for modeling rare disease outcomes and rapidly emerging health threats. By implementing a large-scale comparative analysis involving a diverse selection of pretrained models, ranging from general-purpose architectures pretrained on large-scale datasets, such as CLIP and DINOv2, to biomedical-specific models such as MedCLIP, BioMedCLIP, and PubMedCLIP, we rigorously assess each model's capacity to effectively adapt and generalize to prognosis tasks, particularly under conditions of severe data scarcity and pronounced class imbalance. The benchmark was designed to capture critical conditions common in prognosis tasks, including variations in dataset size and class distribution, providing detailed insights into the strengths and limitations of each fine-tuning strategy. This extensive and structured evaluation aims to inform the practical deployment and adoption of robust, efficient, and generalizable AI-driven solutions in real-world clinical prognosis prediction workflows.
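Of the PEFT methods listed, Low-Rank Adaptation (LoRA) is the most widely used: the frozen weight W is adapted by a trainable low-rank product scaled by alpha/r. A minimal sketch on plain-list matrices (shapes and values purely illustrative):

```python
def lora_forward(x, W, A, B, alpha, r):
    """Compute y = (W + (alpha/r) * B @ A) x, the LoRA-adapted linear map.
    W (d_out x d_in) is frozen; only the low-rank factors A (r x d_in) and
    B (d_out x r) would be trained. B is initialized to zeros so training
    starts from the pretrained behaviour."""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]
    def matmul(M, N):
        return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
                 for j in range(len(N[0]))] for i in range(len(M))]
    scale = alpha / r
    BA = matmul(B, A)
    W_adapted = [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
                 for i in range(len(W))]
    return matvec(W_adapted, x)
```

With B at its zero initialization the adapted layer reproduces W exactly, which is why LoRA fine-tuning starts from the pretrained model's outputs.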

Towards a comprehensive characterization of arteries and veins in retinal imaging.

Andreini P, Bonechi S

PubMed · Jun 23 2025
Retinal fundus imaging is crucial for diagnosing and monitoring eye diseases, which are often linked to systemic health conditions such as diabetes and hypertension. Current deep learning techniques often narrowly focus on segmenting retinal blood vessels, lacking a more comprehensive analysis and characterization of the retinal vascular system. This study fills this gap by proposing a novel, integrated approach that leverages multiple stages to accurately determine vessel paths and extract informative features from them. The segmentation of veins and arteries, achieved through a deep semantic segmentation network, is used by a newly designed algorithm to reconstruct individual vessel paths. The reconstruction process begins at the optic disc, identified by a localization network, and uses a recurrent neural network to predict the vessel paths at various junctions. The different stages of the proposed approach are validated both qualitatively and quantitatively, demonstrating robust performance. The proposed approach enables the extraction of critical features at the individual vessel level, such as vessel tortuosity and diameter. This work lays the foundation for a comprehensive retinal image evaluation, going beyond isolated tasks like vessel segmentation, with significant potential for clinical diagnosis.
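The path reconstruction described above can be pictured as a walk over a vessel graph starting at the optic disc, choosing a branch at each junction. A greedy toy sketch, where `branch_score` is a hypothetical stand-in for the paper's recurrent junction predictor (the actual model is an image-conditioned RNN, not a lookup):

```python
def trace_vessel(graph, start, branch_score, max_steps=50):
    """Greedy vessel tracing: from the optic-disc node `start`, repeatedly
    follow the outgoing branch with the highest score until an endpoint is
    reached. `graph` maps a node to its downstream neighbours;
    `branch_score(prev, cur, nxt)` scores each candidate continuation."""
    path = [start]
    prev = None
    while len(path) < max_steps:
        nxts = [n for n in graph.get(path[-1], []) if n != prev]
        if not nxts:
            break  # vessel endpoint: no further branches
        best = max(nxts, key=lambda n: branch_score(prev, path[-1], n))
        prev = path[-1]
        path.append(best)
    return path
```

Per-vessel features such as tortuosity and diameter can then be computed along each returned path rather than on the raw segmentation mask.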

Chest X-ray Foundation Model with Global and Local Representations Integration.

Yang Z, Xu X, Zhang J, Wang G, Kalra MK, Yan P

PubMed · Jun 23 2025
Chest X-ray (CXR) is the most frequently ordered imaging test, supporting diverse clinical tasks from thoracic disease detection to postoperative monitoring. However, task-specific classification models are limited in scope, require costly labeled data, and lack generalizability to out-of-distribution datasets. To address these challenges, we introduce CheXFound, a self-supervised vision foundation model that learns robust CXR representations and generalizes effectively across a wide range of downstream tasks. We pretrained CheXFound on a curated CXR-987K dataset, comprising approximately 987K unique CXRs from 12 publicly available sources. We propose a Global and Local Representations Integration (GLoRI) head for downstream adaptation, which incorporates fine- and coarse-grained disease-specific local features with global image features for enhanced multilabel classification performance. Our experimental results showed that CheXFound outperformed state-of-the-art models in classifying 40 disease findings across different prevalence levels on the CXR-LT 24 dataset and exhibited superior label efficiency on downstream tasks with limited training data. Additionally, CheXFound achieved significant improvements on downstream tasks with out-of-distribution datasets, including opportunistic cardiovascular disease risk estimation, mortality prediction, malpositioned tube detection, and anatomical structure segmentation. The above results demonstrate CheXFound's strong generalization capabilities, which will enable diverse downstream adaptations with improved label efficiency in future applications. The project source code is publicly available at https://github.com/RPIDIAL/CheXFound.
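One schematic reading of such a global-plus-local head: each disease logit combines the shared global image feature with a disease-specific pooled local feature. The additive fusion and all names below are assumptions for illustration, not the paper's exact GLoRI architecture:

```python
def glori_logits(global_feat, local_feats, w_global, w_local, bias):
    """Toy multilabel head: for each disease d,
    logit_d = w_global[d] . global_feat + w_local[d] * local_feats[d] + bias[d].
    global_feat: shared image-level feature vector; local_feats[d]: a pooled
    disease-specific local feature (scalar here for simplicity)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [dot(global_feat, w_global[d]) + w_local[d] * local_feats[d] + bias[d]
            for d in range(len(local_feats))]
```

The design point being illustrated is only that each label gets its own local evidence channel on top of a shared global representation.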

Deep Learning-based Alignment Measurement in Knee Radiographs

Zhisen Hu, Dominic Cullen, Peter Thompson, David Johnson, Chang Bian, Aleksei Tiulpin, Timothy Cootes, Claudia Lindner

arXiv preprint · Jun 22 2025
Radiographic knee alignment (KA) measurement is important for predicting joint health and surgical outcomes after total knee replacement. Traditional methods for KA measurements are manual, time-consuming and require long-leg radiographs. This study proposes a deep learning-based method to measure KA in anteroposterior knee radiographs via automatically localized knee anatomical landmarks. Our method builds on hourglass networks and incorporates an attention gate structure to enhance robustness and focus on key anatomical features. To our knowledge, this is the first deep learning-based method to localize over 100 knee anatomical landmarks to fully outline the knee shape while integrating KA measurements on both pre-operative and post-operative images. It provides highly accurate and reliable anatomical varus/valgus KA measurements using the anatomical tibiofemoral angle, achieving mean absolute differences of approximately 1° when compared to clinical ground truth measurements. Agreement between automated and clinical measurements was excellent pre-operatively (intra-class correlation coefficient (ICC) = 0.97) and good post-operatively (ICC = 0.86). Our findings demonstrate that KA assessment can be automated with high accuracy, creating opportunities for digitally enhanced clinical workflows.
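Given localized landmarks, the anatomical tibiofemoral angle reduces to the angle between the femoral and tibial anatomical axes. A minimal sketch assuming each axis is defined by just two (x, y) landmarks (the paper's method derives the axes from over 100 landmarks outlining the knee):

```python
import math

def tibiofemoral_angle(femur_prox, femur_dist, tibia_prox, tibia_dist):
    """Angle in degrees between the femoral axis (femur_prox -> femur_dist)
    and the tibial axis (tibia_prox -> tibia_dist), each given by two (x, y)
    landmark points. Returned as the acute axis difference in [0, 180]."""
    def axis_angle(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    diff = abs(axis_angle(femur_prox, femur_dist)
               - axis_angle(tibia_prox, tibia_dist)) % 360
    return min(diff, 360 - diff)
```

Varus/valgus direction would additionally require the sign of the difference and the image's left/right laterality, which this sketch deliberately leaves out.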

Automated detection and classification of osteolytic lesions in panoramic radiographs using CNNs and vision transformers.

van Nistelrooij N, Ghanad I, Bigdeli AK, Thiem DGE, von See C, Rendenbach C, Maistreli I, Xi T, Bergé S, Heiland M, Vinayahalingam S, Gaudin R

PubMed · Jun 21 2025
Diseases underlying osteolytic lesions in jaws are characterized by the absorption of bone tissue and are often asymptomatic, delaying their diagnosis. Well-defined lesions (benign cyst-like lesions) and ill-defined lesions (osteomyelitis or malignancy) can be detected early in a panoramic radiograph (PR) by an experienced examiner, but most dentists lack appropriate training. To support dentists, this study aimed to develop and evaluate deep learning models for the detection of osteolytic lesions in PRs. A dataset of 676 PRs (165 well-defined, 181 ill-defined, 330 control) was collected from the Department of Oral and Maxillofacial Surgery at Charité Berlin, Germany. The osteolytic lesions were pixel-wise segmented and labeled as well-defined or ill-defined. Four model architectures for instance segmentation (Mask R-CNN with a Swin-Tiny or ResNet-50 backbone, Mask DINO, and YOLOv5) were employed with five-fold cross-validation. Their effectiveness was evaluated with sensitivity, specificity, F1-score, and AUC, and failure cases were examined. Mask R-CNN with a Swin-Tiny backbone was most effective (well-defined F1 = 0.784, AUC = 0.881; ill-defined F1 = 0.904, AUC = 0.971) and the model architectures including vision transformer components were more effective than those without. Model mistakes were observed around the maxillary sinus, at tooth extraction sites, and for radiolucent bands. Promising deep learning models were developed for the detection of osteolytic lesions in PRs, particularly those with vision transformer components (Mask R-CNN with Swin-Tiny and Mask DINO). These results underline the potential of vision transformers for enhancing the automated detection of osteolytic lesions, offering a significant improvement over traditional deep learning models.
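The per-class metrics reported in studies like this one follow directly from confusion counts; a small helper for reference:

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard detection metrics from raw confusion counts:
    sensitivity (recall), specificity, precision, and F1
    (the harmonic mean of precision and sensitivity)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return {"sensitivity": sens, "specificity": spec,
            "precision": prec, "f1": f1}
```

AUC, by contrast, is threshold-free and must be computed from ranked confidence scores rather than a single confusion matrix, which is why it is reported separately above.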

Generative deep-learning-model based contrast enhancement for digital subtraction angiography using a text-conditioned image-to-image model.

Takata T, Yamada K, Yamamoto M, Kondo H

PubMed · Jun 20 2025
Digital subtraction angiography (DSA) is an essential imaging technique in interventional radiology, enabling detailed visualization of blood vessels by subtracting pre- and post-contrast images. However, reduced contrast, either accidental or intentional, can impair the clarity of vascular structures. This issue becomes particularly critical in patients with chronic kidney disease (CKD), where minimizing iodinated contrast is necessary to reduce the risk of contrast-induced nephropathy (CIN). This study explored the potential of a generative deep-learning-based contrast enhancement technique for DSA. A text-conditioned image-to-image model was developed using Stable Diffusion, augmented with ControlNet to reduce hallucinations and Low-Rank Adaptation for model fine-tuning. A total of 1207 DSA series were used for training and testing, with additional low-contrast images generated through data augmentation. The model was trained using tagged text labels and evaluated using metrics such as Root Mean Square (RMS) contrast, Michelson contrast, signal-to-noise ratio (SNR), and entropy. Evaluation results indicated significant improvements, with RMS contrast, Michelson contrast, and entropy respectively increased from 7.91 to 17.7, 0.875 to 0.992, and 3.60 to 5.60, reflecting enhanced detail. However, SNR decreased from 21.3 to 8.50, indicating increased noise. This study demonstrates the feasibility of deep learning-based contrast enhancement for DSA images and highlights the potential of generative deep-learning models to improve angiographic imaging. Further refinements, particularly in artifact suppression and clinical validation, are necessary for practical implementation in medical settings.
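The quoted quality metrics are standard and easy to reproduce. A sketch computing RMS contrast, Michelson contrast, and histogram entropy over a flat list of grayscale values (SNR is omitted because it additionally requires a noise estimate, e.g. from a background region):

```python
import math

def contrast_metrics(pixels):
    """Scalar image-quality metrics over a flat list of grayscale values:
    RMS contrast = standard deviation of intensities,
    Michelson contrast = (Imax - Imin) / (Imax + Imin),
    entropy = Shannon entropy of the intensity histogram, in bits."""
    n = len(pixels)
    mean = sum(pixels) / n
    rms = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    lo, hi = min(pixels), max(pixels)
    michelson = (hi - lo) / (hi + lo) if hi + lo else 0.0
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return rms, michelson, entropy
```

Note the trade-off visible in the reported numbers: contrast and entropy can rise together with noise, which is exactly why SNR dropped even as the other three metrics improved.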

Artificial intelligence-assisted decision-making in third molar assessment using ChatGPT: is it really a valid tool?

Grinberg N, Ianculovici C, Whitefield S, Kleinman S, Feldman S, Peleg O

PubMed · Jun 20 2025
Artificial intelligence (AI) is becoming increasingly popular in medicine. The current study aims to investigate whether an AI-based chatbot, such as ChatGPT, could be a valid tool for assisting in decision-making when assessing mandibular third molars before extractions. Panoramic radiographs were collected from a publicly available library. Mandibular third molars were assessed by position and depth. Two specialists evaluated each case regarding the need for CBCT referral, followed by introducing all cases to ChatGPT under a uniform script to decide the need for further CBCT radiographs. The process was performed first without any guidelines, second after introducing the guidelines of Rood et al. (1990), and third with additional test cases. The decisions of ChatGPT and the specialist were compared and analyzed using Cohen's kappa test and the Cochran-Mantel-Haenszel test to account for the effect of different tooth positions. All analyses were performed at a 95% confidence level. The study evaluated 184 molars. Without any guidelines, ChatGPT correlated with the specialist in 49% of cases, with no statistically significant agreement (kappa < 0.1), followed by 70% and 91% with moderate (kappa = 0.39) and near-perfect (kappa = 0.81) agreement, respectively, after the second and third rounds (p < 0.05). The high correlation between the specialist and the chatbot was preserved when analyzed by the different tooth locations and positions (p < 0.01). ChatGPT has shown the ability to analyze third molars prior to surgical interventions using accepted guidelines with substantial correlation to specialists.
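The agreement statistic used here is easy to reproduce from the paired decisions; a minimal Cohen's kappa implementation:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' label lists (e.g., specialist vs.
    chatbot CBCT-referral decisions): observed agreement p_o corrected
    for the agreement p_e expected by chance from each rater's
    marginal label frequencies: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return 1.0 if pe == 1.0 else (po - pe) / (1 - pe)
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement, which is why the 49% raw correlation in the first round still yielded kappa below 0.1.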