Page 4 of 34333 results

Deep learning-driven incidental detection of vertebral fractures in cancer patients: advancing diagnostic precision and clinical management.

Mniai EM, Laletin V, Tselikas L, Assi T, Bonnet B, Camez AO, Zemmouri A, Muller S, Moussa T, Chaibi Y, Kiewsky J, Quenet S, Avare C, Lassau N, Balleyguier C, Ayobi A, Ammari S

PubMed · Aug 2, 2025
Vertebral compression fractures (VCFs) are the most prevalent skeletal manifestations of osteoporosis in cancer patients. Yet, they are frequently missed or not reported in routine clinical radiology, adversely impacting patient outcomes and quality of life. This study evaluates the diagnostic performance of a deep-learning (DL)-based application and its potential to reduce the miss rate of incidental VCFs in a high-risk cancer population. We retrospectively analysed thoraco-abdomino-pelvic (TAP) CT scans from 1556 patients with stage IV cancer collected consecutively over a 4-month period (September-December 2023) in a tertiary cancer center. A DL-based application flagged cases positive for VCFs, which were subsequently reviewed by two expert radiologists for validation. Additionally, grade 3 fractures identified by the application were independently assessed by two expert interventional radiologists to determine their eligibility for vertebroplasty. Of the 1556 cases, 501 were flagged as positive for VCF by the application, with 436 confirmed as true positives by expert review, yielding a positive predictive value (PPV) of 87%. Common causes of false positives included sclerotic vertebral metastases, scoliosis, and vertebrae misidentification. Notably, 83.5% (364/436) of true positive VCFs were absent from radiology reports, indicating a substantial non-report rate in routine practice. Ten grade 3 fractures were overlooked or not reported by radiologists. Among them, 9 were deemed suitable for vertebroplasty by expert interventional radiologists. This study underscores the potential of DL-based applications to improve the detection of VCFs. The analyzed tool can assist radiologists in detecting more incidental vertebral fractures in adult cancer patients, optimising timely treatment and reducing associated morbidity and economic burden. Moreover, it might enhance patient access to interventional treatments such as vertebroplasty. 
These findings highlight the transformative role that DL can play in optimising clinical management and outcomes for osteoporosis-related VCFs in cancer patients.
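The headline figures above follow directly from the reported counts; a minimal sketch of the arithmetic (counts taken from the abstract):

```python
def positive_predictive_value(true_positives: int, flagged: int) -> float:
    """PPV = TP / (TP + FP): expert-confirmed fractures over all AI-flagged cases."""
    return true_positives / flagged

# Counts reported above: 501 flagged scans, 436 confirmed by expert review,
# 364 of the confirmed fractures absent from the original radiology reports.
ppv = positive_predictive_value(436, 501)
non_report_rate = 364 / 436
print(f"PPV: {ppv:.0%}, non-report rate: {non_report_rate:.1%}")
# prints "PPV: 87%, non-report rate: 83.5%"
```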

High-grade glioma: combined use of 5-aminolevulinic acid and intraoperative ultrasound for resection and a predictor algorithm for detection.

Aibar-Durán JÁ, Mirapeix RM, Gallardo Alcañiz A, Salgado-López L, Freixer-Palau B, Casitas Hernando V, Hernández FM, de Quintana-Schmidt C

PubMed · Aug 1, 2025
The primary goal in neuro-oncology is the maximally safe resection of high-grade glioma (HGG). A more extensive resection improves both overall and disease-free survival, while a complication-free surgery enables better tolerance to adjuvant therapies such as chemotherapy and radiotherapy. Techniques such as 5-aminolevulinic acid (5-ALA) fluorescence and intraoperative ultrasound (ioUS) are valuable, cost-effective aids to safe resection. However, the benefits of combining these techniques remain undocumented. The aim of this study was to investigate outcomes when combining 5-ALA and ioUS. From January 2019 to January 2024, 72 patients (mean age 62.2 years, 62.5% male) underwent HGG resection at a single hospital. Tumor histology included glioblastoma (90.3%), grade IV astrocytoma (4.1%), grade III astrocytoma (2.8%), and grade III oligodendroglioma (2.8%). Tumor resection was performed under natural light, followed by 5-ALA and ioUS assessment to detect residual tumor. Biopsies from the surgical bed were analyzed for tumor presence and categorized based on 5-ALA and ioUS results. Results of 5-ALA and ioUS were classified as positive, weak/doubtful, or negative. Histological findings of the biopsies were categorized as solid tumor, infiltration, or no tumor. Sensitivity, specificity, and predictive values for both techniques, separately and combined, were calculated. A machine learning algorithm (HGGPredictor) was developed to predict tumor presence in biopsies. The overall sensitivities of 5-ALA and ioUS were 84.9% and 76%, with specificities of 57.8% and 84.5%, respectively. The combination of both methods in a positive/positive scenario yielded the highest performance, achieving a sensitivity of 91% and specificity of 86%. The positive/doubtful combination followed, with a sensitivity of 67.9% and specificity of 95.2%. Area under the curve analysis indicated superior performance when both techniques were combined, compared with each method used individually.
Additionally, the HGGPredictor tool effectively estimated the quantity of tumor cells in surgical margins. Combining 5-ALA and ioUS enhanced diagnostic accuracy for HGG resection, suggesting a new surgical standard. An intraoperative predictive algorithm could further automate decision-making.
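The positive/positive combination rule described above can be sketched at the biopsy level. The data here are hypothetical toy values, not the study's; each biopsy carries a 5-ALA call, an ioUS call, and a histology ground truth:

```python
def sens_spec(preds, truths):
    """Sensitivity and specificity from paired boolean predictions/labels."""
    tp = sum(p and t for p, t in zip(preds, truths))
    tn = sum(not p and not t for p, t in zip(preds, truths))
    fp = sum(p and not t for p, t in zip(preds, truths))
    fn = sum(not p and t for p, t in zip(preds, truths))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-biopsy calls (True = tumor / positive signal):
ala   = [True, True, False, True, False, False]
ious  = [True, False, False, True, True, False]
truth = [True, True, False, True, False, False]

# Positive/positive rule: the combined test fires only when both agree.
combined = [a and b for a, b in zip(ala, ious)]
sens, spec = sens_spec(combined, truth)
```

Requiring agreement of both modalities trades a little sensitivity for higher specificity, which matches the pattern the abstract reports for the stricter combinations.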

A Modified VGG19-Based Framework for Accurate and Interpretable Real-Time Bone Fracture Detection

Md. Ehsanul Haque, Abrar Fahim, Shamik Dey, Syoda Anamika Jahan, S. M. Jahidul Islam, Sakib Rokoni, Md Sakib Morshed

arXiv preprint · Jul 31, 2025
Early and accurate detection of bone fractures is paramount to initiating treatment promptly and avoiding delays in patient care and outcomes. Interpreting X-ray images is a time-consuming and error-prone task, especially when resources for such interpretation are limited by a lack of radiology expertise. Additionally, the deep learning approaches currently in use typically suffer from misclassifications and lack the interpretable explanations needed for clinical use. To overcome these challenges, we propose an automated bone fracture detection framework based on a modified VGG-19 model. It incorporates preprocessing techniques including Contrast Limited Adaptive Histogram Equalization (CLAHE), Otsu's thresholding, and Canny edge detection, among others, to enhance image clarity and facilitate feature extraction. For interpretability, we use Grad-CAM, an explainable AI method that generates visual heatmaps of the model's decision-making process, so that clinicians can understand and validate its predictions; this encourages trust and helps in further clinical validation. The framework is deployed in a real-time web application, where healthcare professionals can upload X-ray images and receive diagnostic feedback within 0.5 seconds. The modified VGG-19 model attains 99.78% classification accuracy and an AUC score of 1.00 on the evaluated dataset. The framework provides a reliable, fast, and interpretable solution for bone fracture detection, supporting more efficient diagnosis and better patient care.
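Of the preprocessing steps listed, Otsu's thresholding is simple enough to sketch in pure Python. This is a toy implementation for illustration only; the paper would presumably use a library such as OpenCV:

```python
def otsu_threshold(pixels):
    """Return the grayscale threshold that maximizes between-class variance
    (Otsu's method) over a flat list of 8-bit pixel values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0
    for t in range(256):
        w_bg += hist[t]                  # background weight grows with t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:       # keep the split with the largest separation
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "radiograph": dark background plus bright bone-like pixels.
px = [10] * 60 + [200] * 40
t = otsu_threshold(px)                   # lands between the two modes
```

For a bimodal X-ray histogram, this yields a cut between soft tissue and bone, which is why it pairs well with CLAHE and Canny in a fracture pipeline.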

Medical Image De-Identification Benchmark Challenge

Linmin Pei, Granger Sutton, Michael Rutherford, Ulrike Wagner, Tracy Nolan, Kirk Smith, Phillip Farmer, Peter Gu, Ambar Rana, Kailing Chen, Thomas Ferleman, Brian Park, Ye Wu, Jordan Kojouharov, Gargi Singh, Jon Lemon, Tyler Willis, Milos Vukadinovic, Grant Duffy, Bryan He, David Ouyang, Marco Pereanez, Daniel Samber, Derek A. Smith, Christopher Cannistraci, Zahi Fayad, David S. Mendelson, Michele Bufano, Elmar Kotter, Hamideh Haghiri, Rajesh Baidya, Stefan Dvoretskii, Klaus H. Maier-Hein, Marco Nolden, Christopher Ablett, Silvia Siggillino, Sandeep Kaushik, Hongzhu Jiang, Sihan Xie, Zhiyu Wan, Alex Michie, Simon J Doran, Angeline Aurelia Waly, Felix A. Nathaniel Liang, Humam Arshad Mustagfirin, Michelle Grace Felicia, Kuo Po Chih, Rahul Krish, Ghulam Rasool, Nidhal Bouaynaya, Nikolas Koutsoubis, Kyle Naddeo, Kartik Pandit, Tony O'Sullivan, Raj Krish, Qinyan Pan, Scott Gustafson, Benjamin Kopchick, Laura Opsahl-Ong, Andrea Olvera-Morales, Jonathan Pinney, Kathryn Johnson, Theresa Do, Juergen Klenk, Maria Diaz, Arti Singh, Rong Chai, David A. Clunie, Fred Prior, Keyvan Farahani

arXiv preprint · Jul 31, 2025
The de-identification (deID) of protected health information (PHI) and personally identifiable information (PII) is a fundamental requirement for sharing medical images, particularly through public repositories, to ensure compliance with patient privacy laws. In addition, preservation of non-PHI metadata to inform and enable downstream development of imaging artificial intelligence (AI) is an important consideration in biomedical research. The goal of the Medical Image De-Identification Benchmark (MIDI-B) challenge was to provide a standardized platform for benchmarking DICOM image deID tools based on a set of rules conformant to the HIPAA Safe Harbor regulation, the DICOM Attribute Confidentiality Profiles, and best practices in preservation of research-critical metadata, as defined by The Cancer Imaging Archive (TCIA). The challenge employed a large, diverse, multi-center, and multi-modality set of real de-identified radiology images with synthetic PHI/PII inserted. The MIDI-B Challenge consisted of three phases: training, validation, and test. Eighty individuals registered for the challenge. In the training phase, we encouraged participants to tune their algorithms using their in-house or public data. The validation and test phases utilized the DICOM images containing synthetic identifiers (of 216 and 322 subjects, respectively). Ten teams successfully completed the test phase of the challenge. To measure success of a rule-based approach to image deID, scores were computed as the percentage of correct actions out of the total number of required actions. The scores ranged from 97.91% to 99.93%. Participants employed a variety of open-source and proprietary tools with customized configurations, large language models, and optical character recognition (OCR). In this paper we provide a comprehensive report on the MIDI-B Challenge's design, implementation, results, and lessons learned.
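The challenge's scoring rule (percentage of correct actions out of required actions) can be sketched as follows. The action taxonomy here is a simplified assumption for illustration; the real rubric is richer:

```python
def deid_score(performed: dict, required: dict) -> float:
    """Fraction of required de-identification actions performed correctly,
    keyed by DICOM attribute name (a sketch of the MIDI-B scoring idea)."""
    correct = sum(1 for tag, action in required.items()
                  if performed.get(tag) == action)
    return correct / len(required)

# Hypothetical per-attribute rules: what should happen vs. what a tool did.
required  = {"PatientName": "remove", "StudyDate": "shift", "Modality": "keep"}
performed = {"PatientName": "remove", "StudyDate": "shift", "Modality": "remove"}
score = deid_score(performed, required)   # 2 of 3 actions correct
```

Note that over-redaction (removing `Modality`, a research-critical field) costs points just like leaking PHI would, which mirrors the challenge's emphasis on metadata preservation.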

Out-of-Distribution Detection in Medical Imaging via Diffusion Trajectories

Lemar Abdi, Francisco Caetano, Amaan Valiuddin, Christiaan Viviers, Hamdi Joudeh, Fons van der Sommen

arXiv preprint · Jul 31, 2025
In medical imaging, unsupervised out-of-distribution (OOD) detection offers an attractive approach for identifying pathological cases with extremely low incidence rates. In contrast to supervised methods, OOD-based approaches function without labels and are inherently robust to data imbalances. Current generative approaches often rely on likelihood estimation or reconstruction error, but these methods can be computationally expensive, unreliable, and require retraining if the inlier data changes. These limitations hinder their ability to distinguish nominal from anomalous inputs efficiently, consistently, and robustly. We propose a reconstruction-free OOD detection method that leverages the forward diffusion trajectories of a Stein score-based denoising diffusion model (SBDDM). By capturing trajectory curvature via the estimated Stein score, our approach enables accurate anomaly scoring with only five diffusion steps. A single SBDDM pre-trained on a large, semantically aligned medical dataset generalizes effectively across multiple Near-OOD and Far-OOD benchmarks, achieving state-of-the-art performance while drastically reducing computational cost during inference. Compared to existing methods, SBDDM achieves a relative improvement of up to 10.43% and 18.10% for Near-OOD and Far-OOD detection, making it a practical building block for real-time, reliable computer-aided diagnosis.
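A toy sketch of the trajectory idea follows. This is not the paper's SBDDM: `score_fn` is a stand-in for the trained score network, and the "curvature" here is simply the accumulated change in score estimates over five noising steps:

```python
import math
import random

def anomaly_score(x, score_fn, noise_levels):
    """Accumulate the change in estimated scores along a short forward
    (noising) trajectory; a larger total suggests an out-of-distribution input."""
    random.seed(0)                        # fixed noise for a reproducible toy run
    prev, total = None, 0.0
    for sigma in noise_levels:
        noisy = [xi + random.gauss(0.0, sigma) for xi in x]
        s = score_fn(noisy, sigma)
        if prev is not None:              # L2 distance between successive scores
            total += math.sqrt(sum((a - b) ** 2 for a, b in zip(s, prev)))
        prev = s
    return total

# Stand-in "network": the exact score of N(0, 1 + sigma^2) at v is -v / (1 + sigma^2),
# i.e. the inlier distribution is a standard normal.
score_fn = lambda v, sigma: [-vi / (1 + sigma ** 2) for vi in v]

steps = [0.1, 0.2, 0.4, 0.8, 1.6]         # five diffusion steps, as in the paper
inlier  = anomaly_score([0.1, -0.2, 0.0, 0.3], score_fn, steps)
outlier = anomaly_score([8.0, 8.0, 8.0, 8.0], score_fn, steps)
```

An input far from the inlier mode produces score estimates that swing sharply as noise grows, so `outlier` exceeds `inlier`; thresholding that gap is the reconstruction-free detection step.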

Impact of AI assistance on radiologist interpretation of knee MRI.

Herpe G, Vesoul T, Zille P, Pluot E, Guillin R, Rizk B, Ardon R, Adam C, d'Assignies G, Gondim Teixeira PA

PubMed · Jul 31, 2025
Knee injuries frequently require Magnetic Resonance Imaging (MRI) evaluation, increasing radiologists' workload. This study evaluates the impact of a knee AI assistant on radiologists' diagnostic accuracy and efficiency in detecting anterior cruciate ligament (ACL), meniscus, cartilage, and medial collateral ligament (MCL) lesions on knee MRI exams. This retrospective reader study was conducted from January 2024 to April 2024. Knee MRI studies were evaluated with and without AI assistance by six radiologists with between 2 and 10 years of experience in musculoskeletal imaging, in two sessions 1 month apart. The AI algorithm was trained on 23,074 MRI studies separate from the study dataset and tested on various knee structures, including the ACL, MCL, menisci, and cartilage. The reference standard was established by the consensus of three expert MSK radiologists. Statistical analysis included sensitivity, specificity, accuracy, and Fleiss' kappa. The study dataset comprised 165 knee MRIs (89 males, 76 females; mean age, 42.3 ± 15.7 years). AI assistance improved sensitivity from 81% (134/165, 95% CI = [79.7, 83.3]) to 86% (142/165, 95% CI = [84.2, 87.5]) (p < 0.001), accuracy from 86% (142/165, 95% CI = [85.4, 86.9]) to 91% (150/165, 95% CI = [90.7, 92.1]) (p < 0.001), and specificity from 88% (145/165, 95% CI = [87.1, 88.5]) to 93% (153/165, 95% CI = [92.7, 93.8]) (p < 0.001). Sensitivity and accuracy improvements were observed across all knee structures, with p values ranging from <0.001 to 0.28. Fleiss' kappa among readers increased from 54% (95% CI = [53.0, 55.3]) to 78% (95% CI = [76.6, 79.0]) (p < 0.001) after AI integration. The integration of AI improved diagnostic accuracy, efficiency, and inter-reader agreement in knee MRI interpretation, highlighting the value of this approach in clinical practice.
Question Can artificial intelligence (AI) assistance improve radiologists' diagnostic accuracy and efficiency in detecting anterior cruciate ligament, meniscus, cartilage, and medial collateral ligament lesions on knee MRI? Findings AI assistance in knee MRI interpretation increased radiologists' sensitivity from 81% to 86% and accuracy from 86% to 91% for detecting knee lesions, while improving inter-reader agreement (p < 0.001). Clinical relevance AI-assisted knee MRI interpretation enhances diagnostic precision and consistency among radiologists, potentially leading to more accurate injury detection, improved patient outcomes, and reduced diagnostic variability in musculoskeletal imaging.
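Fleiss' kappa, the agreement statistic cited above, is computable directly from per-case rating counts. A minimal sketch with toy data (not the study's ratings):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for ratings[i][j] = number of raters assigning subject i
    to category j; every subject is assumed rated by the same number of raters."""
    N, k = len(ratings), len(ratings[0])
    n = sum(ratings[0])                                   # raters per subject
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                                  # observed agreement
    P_e = sum(p * p for p in p_j)                         # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Six toy readers grading 3 cases into {lesion, no lesion}:
perfect = [[6, 0], [0, 6], [6, 0]]        # full agreement -> kappa = 1
kappa = fleiss_kappa(perfect)
```

Interpreting the study's jump from 0.54 to 0.78 on this scale moves the readers from "moderate" to "substantial" agreement under the common Landis-Koch labels.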

External Validation of a Winning Artificial Intelligence Algorithm from the RSNA 2022 Cervical Spine Fracture Detection Challenge.

Harper JP, Lee GR, Pan I, Nguyen XV, Quails N, Prevedello LM

PubMed · Jul 31, 2025
The Radiological Society of North America has actively promoted artificial intelligence (AI) challenges since 2017. Algorithms emerging from the recent RSNA 2022 Cervical Spine Fracture Detection Challenge demonstrated state-of-the-art performance on the competition's data set, surpassing results from prior publications. However, their performance in real-world clinical practice is not known. As an initial step toward assessing the feasibility of these models in clinical practice, we conducted a generalizability test using one of the leading algorithms of the competition. The deep learning algorithm was selected for its performance, portability, and ease of use, and was installed locally. One hundred examinations (50 consecutive cervical spine CT scans with at least 1 fracture present and 50 consecutive negative CT scans) from a level 1 trauma center not represented in the competition data set were processed at 6.4 seconds per examination. Ground truth was established based on the radiology report, with retrospective confirmation of positive fracture cases. Sensitivity, specificity, F1 score, and area under the curve were calculated. Mean patient age differed between the external validation data set and the competition set (53.5 ± 21.8 versus 58 ± 22.0 years, respectively; P < .05). Sensitivity and specificity were 86% and 70% in the external validation group and 85% and 94% in the competition group, respectively. Fractures misclassified by the convolutional neural networks frequently had features of advanced degenerative disease, subtle nondisplaced fractures not easily identified on the axial plane, and malalignment. The model performed with similar sensitivity on the test and external data sets, suggesting that such a tool could potentially generalize as a triage tool in the emergency setting. Discordant factors such as age-associated comorbidities may affect the accuracy and specificity of AI models when used in certain populations.
Further research should be encouraged to help elucidate the potential contributions and pitfalls of these algorithms in supporting clinical care.
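With 50 positive and 50 negative examinations, the reported external sensitivity and specificity imply the confusion-matrix counts, and from those the F1 score (the F1 value itself is not quoted in the abstract; this is just the implied arithmetic):

```python
def confusion_from_rates(sens, spec, n_pos, n_neg):
    """Recover TP/FN/TN/FP counts from sensitivity, specificity, and class sizes."""
    tp = round(sens * n_pos)          # 43 fractures caught
    fn = n_pos - tp                   # 7 missed
    tn = round(spec * n_neg)          # 35 correctly cleared
    fp = n_neg - tn                   # 15 false alarms
    return tp, fn, tn, fp

tp, fn, tn, fp = confusion_from_rates(0.86, 0.70, 50, 50)
f1 = 2 * tp / (2 * tp + fp + fn)      # ~0.80 on the external set
```

The 15 false alarms out of 50 negatives are what drives the specificity drop from 94% (competition) to 70% (external), consistent with the degenerative-disease confounders the authors describe.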

Enhanced Detection, Using Deep Learning Technology, of Medial Meniscal Posterior Horn Ramp Lesions in Patients with ACL Injury.

Park HJ, Ham S, Shim E, Suh DH, Kim JG

PubMed · Jul 31, 2025
Meniscal ramp lesions can impact knee stability, particularly when associated with anterior cruciate ligament (ACL) injuries. Although magnetic resonance imaging (MRI) is the primary diagnostic tool, its diagnostic accuracy remains suboptimal. We aimed to determine whether deep learning technology could enhance MRI-based ramp lesion detection. We reviewed the records of 236 patients who underwent arthroscopic procedures documenting ACL injuries and the status of the medial meniscal posterior horn. A deep learning model was developed using MRI data for ramp lesion detection. Ramp lesion risk factors among patients who underwent ACL reconstruction were analyzed using logistic regression, extreme gradient boosting (XGBoost), and random forest models and were integrated into a final prediction model using Swin Transformer Large architecture. The deep learning model using MRI data demonstrated superior overall diagnostic performance to the clinicians' assessment (accuracy of 73.3% compared with 68.1%, specificity of 78.0% compared with 62.9%, and sensitivity of 64.7% compared with 76.4%). Incorporating risk factors (age, posteromedial tibial bone marrow edema, and lateral meniscal tears) improved the model's accuracy to 80.7%, with a sensitivity of 81.8% and a specificity of 80.9%. Integrating deep learning with MRI data and risk factors significantly enhanced diagnostic accuracy for ramp lesions, surpassing that of the model using MRI alone and that of clinicians. This study highlights the potential of artificial intelligence to provide clinicians with more accurate diagnostic tools for detecting ramp lesions, potentially enhancing treatment and patient outcomes. Diagnostic Level III. See Instructions for Authors for a complete description of levels of evidence.
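The fusion of an imaging model's output with clinical risk factors, as described above, can be sketched as a logistic model. The weights and the `ramp_lesion_risk` helper below are illustrative assumptions, not the study's fitted coefficients:

```python
import math

def ramp_lesion_risk(dl_prob, age, pm_edema, lat_meniscus_tear, weights):
    """Combine the MRI deep learning probability with the three reported
    risk factors (age, posteromedial tibial edema, lateral meniscal tear)
    via a logistic model. Weights are hypothetical."""
    z = (weights["bias"]
         + weights["dl"] * dl_prob
         + weights["age"] * age
         + weights["edema"] * pm_edema
         + weights["tear"] * lat_meniscus_tear)
    return 1 / (1 + math.exp(-z))     # sigmoid -> probability in (0, 1)

w = {"bias": -3.0, "dl": 4.0, "age": 0.02, "edema": 1.2, "tear": 0.8}
high = ramp_lesion_risk(0.9, 25, 1, 1, w)   # strong image signal + both risk factors
low  = ramp_lesion_risk(0.1, 25, 0, 0, w)   # weak image signal, no risk factors
```

The point of the fusion is visible even in the toy numbers: the clinical covariates shift borderline imaging cases across the decision threshold, which is how the study lifts accuracy from 73.3% to 80.7%.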

DICOM De-Identification via Hybrid AI and Rule-Based Framework for Scalable, Uncertainty-Aware Redaction

Kyle Naddeo, Nikolas Koutsoubis, Rahul Krish, Ghulam Rasool, Nidhal Bouaynaya, Tony O'Sullivan, Raj Krish

arXiv preprint · Jul 31, 2025
Access to medical imaging and associated text data has the potential to drive major advances in healthcare research and patient outcomes. However, the presence of Protected Health Information (PHI) and Personally Identifiable Information (PII) in Digital Imaging and Communications in Medicine (DICOM) files presents a significant barrier to the ethical and secure sharing of imaging datasets. This paper presents a hybrid de-identification framework developed by Impact Business Information Solutions (IBIS) that combines rule-based and AI-driven techniques, and rigorous uncertainty quantification for comprehensive PHI/PII removal from both metadata and pixel data. Our approach begins with a two-tiered rule-based system targeting explicit and inferred metadata elements, further augmented by a large language model (LLM) fine-tuned for Named Entity Recognition (NER), and trained on a suite of synthetic datasets simulating realistic clinical PHI/PII. For pixel data, we employ an uncertainty-aware Faster R-CNN model to localize embedded text, extract candidate PHI via Optical Character Recognition (OCR), and apply the NER pipeline for final redaction. Crucially, uncertainty quantification provides confidence measures for AI-based detections to enhance automation reliability and enable informed human-in-the-loop verification to manage residual risks. This uncertainty-aware de-identification framework achieves robust performance across benchmark datasets and regulatory standards, including DICOM, HIPAA, and TCIA compliance metrics. By combining scalable automation, uncertainty quantification, and rigorous quality assurance, our solution addresses critical challenges in medical data de-identification and supports the secure, ethical, and trustworthy release of imaging data for research.
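The rule-based metadata tier described above can be sketched over a plain dict standing in for a parsed DICOM header (a real pipeline would operate on pydicom datasets and DICOM tags; the rules here are illustrative, not the IBIS rule set):

```python
# Per-attribute actions: redact identifiers, generalize quasi-identifiers,
# and deliberately pass research-critical fields through unchanged.
RULES = {
    "PatientName":      lambda v: "REDACTED",
    "PatientBirthDate": lambda v: v[:4] + "0101",   # keep year, drop month/day
    "AccessionNumber":  lambda v: "",
    "Modality":         lambda v: v,                 # research-critical: keep
}

def apply_rules(header: dict) -> dict:
    """Apply the explicit-rule tier; unknown attributes pass through
    (a real tool would flag them for the AI/NER tier instead)."""
    out = {}
    for tag, value in header.items():
        action = RULES.get(tag)
        out[tag] = action(value) if action else value
    return out

hdr = {"PatientName": "DOE^JANE", "PatientBirthDate": "19700315",
       "AccessionNumber": "A123", "Modality": "CT"}
clean = apply_rules(hdr)
```

The LLM/NER and OCR tiers then handle what rules cannot enumerate: free-text fields and text burned into the pixel data.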

Deep learning for tooth detection and segmentation in panoramic radiographs: a systematic review and meta-analysis.

Bonfanti-Gris M, Herrera A, Salido Rodríguez-Manzaneque MP, Martínez-Rus F, Pradíes G

PubMed · Jul 30, 2025
This systematic review and meta-analysis aimed to summarize and evaluate the available evidence on the performance of deep learning methods for tooth detection and segmentation in orthopantomographies. Electronic databases (Medline, Embase, and Cochrane) were searched up to September 2023 for relevant observational studies and randomized and controlled clinical trials. Two reviewers independently conducted the study selection, data extraction, and quality assessments. GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) assessment was adopted for collective grading of the overall body of evidence. From the 2,207 records identified, 20 studies were included in the analysis. Meta-analysis was conducted for the comparison of mesiodens detection and segmentation (n = 6) using sensitivity and specificity as the two main diagnostic parameters. A graphical summary of the analysis was also plotted, and a Hierarchical Summary Receiver Operating Characteristic (HSROC) curve, prediction region, summary point, and confidence region were illustrated. Quantitative analysis of the included studies showed pooled sensitivity, specificity, positive LR, negative LR, and diagnostic odds ratio of 0.92 (95% confidence interval [CI], 0.84-0.96), 0.94 (95% CI, 0.89-0.97), 15.7 (95% CI, 7.6-32.2), 0.08 (95% CI, 0.04-0.18), and 186 (95% CI, 44-793), respectively. A graphical summary of the meta-analysis was plotted based on sensitivity and specificity. HSROC curves showed a positive correlation between logit-transformed sensitivity and specificity (r = 0.886). Based on the results of the meta-analysis and GRADE assessment, a moderate recommendation is advised to dental operators relying on AI-based tools for tooth detection and segmentation in panoramic radiographs.
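The pooled estimates above hang together: plugging the pooled sensitivity and specificity into the standard definitions approximately reproduces the reported likelihood ratios and diagnostic odds ratio (exact agreement is not expected, since the pooled values come from a hierarchical bivariate model rather than point arithmetic):

```python
def likelihood_ratios(sens, spec):
    """Positive/negative likelihood ratios and diagnostic odds ratio."""
    lr_pos = sens / (1 - spec)        # how much a positive result raises the odds
    lr_neg = (1 - sens) / spec        # how much a negative result lowers them
    dor = lr_pos / lr_neg             # diagnostic odds ratio
    return lr_pos, lr_neg, dor

lr_pos, lr_neg, dor = likelihood_ratios(0.92, 0.94)
# ~15.3, ~0.085, ~180, against the reported 15.7, 0.08, and 186
```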