Could a New Method of Acromiohumeral Distance Measurement Emerge? Artificial Intelligence vs. Physician.

Dede BT, Çakar İ, Oğuz M, Alyanak B, Bağcıer F

PubMed · Jul 25, 2025
The aim of this study was to evaluate the reliability of ChatGPT-4 measurements of acromiohumeral distance (AHD), a popular assessment in patients with shoulder pain. In this retrospective study, 71 registered shoulder magnetic resonance imaging (MRI) scans were included. AHD was measured on a coronal oblique T1 sequence with a clear view of the acromion and humerus. Measurements were performed twice, 3 days apart, by an experienced radiologist and, in separate sessions, twice (3 days apart) by ChatGPT-4. The first, second, and mean AHD values measured by the physician were 7.6 ± 1.7, 7.5 ± 1.6, and 7.6 ± 1.7, respectively; the corresponding values from ChatGPT-4 were 6.7 ± 0.8, 7.3 ± 1.1, and 7.1 ± 0.8. The physician and ChatGPT-4 differed significantly on the first and mean measurements (p < 0.0001 and p = 0.009, respectively), but not on the second measurements (p = 0.220). Intrarater reliability was excellent for the physician (ICC = 0.99) but poor for ChatGPT-4 (ICC = 0.41), and interrater reliability was poor (ICC = 0.45). In conclusion, this study demonstrated that the reliability of ChatGPT-4 in AHD measurements is inferior to that of an experienced radiologist. This study may help guide the future contribution of large language models to medical science.
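Reliability figures like those above come from the intraclass correlation coefficient. A minimal pure-Python sketch of the two-way random-effects, absolute-agreement, single-rater form ICC(2,1); the measurement data below are hypothetical illustrations, not values from the study:

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    data: list of subjects, each a list of k ratings (one per rater/session).
    """
    n = len(data)       # number of subjects
    k = len(data[0])    # number of raters
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical repeated AHD-style measurements: identical sessions
# give perfect reliability (ICC = 1), disagreement lowers it.
print(round(icc2_1([[7.6, 7.6], [7.5, 7.5], [6.9, 6.9]]), 3))
```

Conventions for interpreting the result (poor below 0.5, excellent above 0.9) match the labels used in the abstract.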

Exploring AI-Based System Design for Pixel-Level Protected Health Information Detection in Medical Images.

Truong T, Baltruschat IM, Klemens M, Werner G, Lenga M

PubMed · Jul 25, 2025
De-identification of medical images is a critical step to ensure privacy during data sharing in research and clinical settings. The initial step in this process involves detecting Protected Health Information (PHI), which can be found in image metadata or imprinted within image pixels. Despite the importance of such systems, there has been limited evaluation of existing AI-based solutions, creating barriers to the development of reliable and robust tools. In this study, we present an AI-based pipeline for PHI detection, comprising three key modules: text detection, text extraction, and text analysis. We benchmark three models (YOLOv11, EasyOCR, and GPT-4o) across different setups corresponding to these modules, evaluating their performance on two different datasets encompassing multiple imaging modalities and PHI categories. Our findings indicate that the optimal setup involves utilizing dedicated vision and language models for each module, which achieves a commendable balance in performance, latency, and cost associated with the usage of large language models (LLMs). Additionally, we show that the application of LLMs not only involves identifying PHI content but also enhances OCR tasks and facilitates an end-to-end PHI detection pipeline, showcasing promising outcomes through our analysis.
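The final text-analysis module can be sketched as rule matching over OCR output. The snippet below is a hypothetical illustration of that stage only (the regex categories and interfaces are assumptions, not the authors' implementation; their pipeline uses vision/language models for detection and extraction upstream):

```python
import re

# Hypothetical rule set for the text-analysis module; a real system would
# combine such rules with an LLM or NER model as the paper describes.
PHI_PATTERNS = {
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b|\b\d{2}/\d{2}/\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,}\b", re.IGNORECASE),
    "name": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"),
}

def analyze_text(snippets):
    """Map each OCR snippet to the PHI categories it matches."""
    findings = []
    for text in snippets:
        cats = [c for c, pat in PHI_PATTERNS.items() if pat.search(text)]
        if cats:
            findings.append((text, cats))
    return findings

# OCR output from the upstream detection/extraction modules (stubbed here)
ocr_snippets = ["DOB: 1990-01-01", "MRN: 0048213", "CHEST PA VIEW"]
print(analyze_text(ocr_snippets))
```

Non-PHI burned-in text such as view labels passes through unflagged, which is the behavior a pixel-level de-identification pipeline needs.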

Current evidence of low-dose CT screening benefit.

Yip R, Mulshine JL, Oudkerk M, Field J, Silva M, Yankelevitz DF, Henschke CI

PubMed · Jul 25, 2025
Lung cancer is the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Low-dose computed tomography (LDCT) screening has emerged as a powerful tool for early detection, enabling diagnosis at curable stages and reducing lung cancer mortality. Despite strong evidence, LDCT screening uptake remains suboptimal globally. This review synthesizes current evidence supporting LDCT screening, highlights ongoing global implementation efforts, and discusses key insights from the 1st AGILE conference. Lung cancer screening is gaining global momentum, with many countries advancing plans for national LDCT programs. Expanding eligibility through risk-based models and targeting high-risk never- and light-smokers are emerging strategies to improve efficiency and equity. Technological advancements, including AI-assisted interpretation and image-based biomarkers, are addressing concerns around false positives, overdiagnosis, and workforce burden. Integrating cardiac and smoking-related disease assessment within LDCT screening offers added preventive health benefits. To maximize global impact, screening strategies must be tailored to local health systems and populations. Efforts should focus on increasing awareness, standardizing protocols, optimizing screening intervals, and strengthening multidisciplinary care pathways. International collaboration and shared infrastructure can accelerate progress and ensure sustainability. LDCT screening represents a cost-effective opportunity to reduce lung cancer mortality and premature deaths.

PerioDet: Large-Scale Panoramic Radiograph Benchmark for Clinical-Oriented Apical Periodontitis Detection

Xiaocheng Fang, Jieyi Cai, Huanyu Liu, Chengju Zhou, Minhua Lu, Bingzhi Chen

arXiv preprint · Jul 25, 2025
Apical periodontitis is a prevalent oral pathology that presents significant public health challenges. Despite advances in automated diagnostic systems across various medical fields, the development of Computer-Aided Diagnosis (CAD) applications for apical periodontitis is still constrained by the lack of a large-scale, high-quality annotated dataset. To address this issue, we release a large-scale panoramic radiograph benchmark called "PerioXrays", comprising 3,673 images and 5,662 meticulously annotated instances of apical periodontitis. To the best of our knowledge, this is the first benchmark dataset for automated apical periodontitis diagnosis. This paper further proposes a clinical-oriented apical periodontitis detection (PerioDet) paradigm, which jointly incorporates Background-Denoising Attention (BDA) and IoU-Dynamic Calibration (IDC) mechanisms to address the challenges posed by background noise and small targets in automated detection. Extensive experiments on the PerioXrays dataset demonstrate the superiority of PerioDet in advancing automated apical periodontitis detection. Additionally, a well-designed human-computer collaborative experiment underscores the clinical applicability of our method as an auxiliary diagnostic tool for professional dentists.
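Detection benchmarks of this kind are scored with intersection-over-union (IoU) between predicted and annotated boxes; the IDC mechanism itself is not detailed here, but the underlying IoU computation for axis-aligned boxes is standard. A minimal sketch:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1 / union 7
```

For small targets such as periapical lesions, a one-pixel localization error shifts IoU sharply, which is the kind of sensitivity a dynamic calibration of IoU thresholds is meant to address.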

Methinks AI software for identifying large vessel occlusion in non-contrast head CT: A pilot retrospective study in American population.

Sanders JV, Keigher K, Oliver M, Joshi K, Lopes D

PubMed · Jul 25, 2025
Background: Non-contrast computed tomography (NCCT) is the first imaging study in stroke assessment, but its sensitivity for detecting large vessel occlusion (LVO) is limited. Artificial intelligence (AI) algorithms may enable faster LVO diagnosis using only NCCT. This study evaluates the performance and potential diagnostic time savings of the Methinks LVO AI algorithm in a U.S. multi-facility stroke network.

Methods: This retrospective pilot study reviewed NCCT and computed tomography angiography (CTA) images acquired between 2015 and 2023. The Methinks AI algorithm, designed to detect LVOs in the internal carotid artery and middle cerebral artery, was tested for sensitivity, specificity, and predictive values. A neuroradiologist reviewed cases to establish a gold standard. To evaluate potential time savings in workflow, the time gaps between NCCT and CTA in true positive cases were stratified into four groups: Group 1 (<10 min), Group 2 (10-30 min), Group 3 (30-60 min), and Group 4 (>60 min).

Results: From a total of 1155 stroke codes, 608 NCCT exams were analyzed. Methinks LVO demonstrated 75% sensitivity and 83% specificity, correctly identifying 146 of 194 confirmed LVO cases. The PPV of the algorithm was 72%. The NPV was 83% when 'other occlusion', 'stenosis', and 'posteriors' were considered negatives, and 73% when the same conditions were considered positives. Among the true positive cases, 112 patients fell into Group 1, 32 into Group 2, 15 into Group 3, and 3 into Group 4.

Conclusion: The Methinks AI algorithm shows promise for improving LVO detection from NCCT, especially in resource-limited settings. However, its sensitivity remains lower than that of CTA-based systems, suggesting the need for further refinement.
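The reported figures follow from standard confusion-matrix arithmetic. A minimal sketch; the abstract fixes TP = 146 and 194 confirmed LVOs (so FN = 48), while the FP and TN counts below are illustrative placeholders chosen to roughly reproduce the reported PPV and specificity, not values from the study:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # detected fraction of true LVOs
        "specificity": tn / (tn + fp),  # correctly cleared non-LVO exams
        "ppv": tp / (tp + fp),          # positive calls that are true LVOs
        "npv": tn / (tn + fn),          # negative calls that are truly clear
    }

# TP and FN follow from the abstract (146 of 194 LVOs detected);
# FP and TN are hypothetical fillers for illustration only.
m = screening_metrics(tp=146, fp=57, fn=48, tn=277)
print({k: round(v, 2) for k, v in m.items()})
```

With these counts, sensitivity rounds to 0.75 and PPV to 0.72, matching the abstract's percentages.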

SP-Mamba: Spatial-Perception State Space Model for Unsupervised Medical Anomaly Detection

Rui Pan, Ruiying Lu

arXiv preprint · Jul 25, 2025
Radiography imaging protocols target specific anatomical regions, resulting in highly consistent images with recurrent structural patterns across patients. Recent advances in medical anomaly detection have demonstrated the effectiveness of CNN- and transformer-based approaches. However, CNNs exhibit limitations in capturing long-range dependencies, while transformers suffer from quadratic computational complexity. In contrast, Mamba-based models, leveraging superior long-range modeling, structural feature extraction, and linear computational efficiency, have emerged as a promising alternative. To capitalize on the inherent structural regularity of medical images, this study introduces SP-Mamba, a spatial-perception Mamba framework for unsupervised medical anomaly detection. Window-sliding prototype learning and a Circular-Hilbert scanning-based Mamba are introduced to better exploit consistent anatomical patterns and leverage spatial information for medical anomaly detection. Furthermore, we exploit the concentration and contrast characteristics of anomaly maps to further improve anomaly detection. Extensive experiments on three diverse medical anomaly detection benchmarks confirm the proposed method's state-of-the-art performance, validating its efficacy and robustness. The code is available at https://github.com/Ray-RuiPan/SP-Mamba.

OCSVM-Guided Representation Learning for Unsupervised Anomaly Detection

Nicolas Pinon, Carole Lartizien

arXiv preprint · Jul 25, 2025
Unsupervised anomaly detection (UAD) aims to detect anomalies without labeled data, a necessity in many machine learning applications where anomalous samples are rare or unavailable. Most state-of-the-art methods fall into two categories: reconstruction-based approaches, which often reconstruct anomalies too well, and decoupled representation learning with density estimators, which can suffer from suboptimal feature spaces. While some recent methods attempt to couple feature learning and anomaly detection, they often rely on surrogate objectives, restrict kernel choices, or introduce approximations that limit their expressiveness and robustness. To address this challenge, we propose a novel method that tightly couples representation learning with an analytically solvable one-class SVM (OCSVM), through a custom loss formulation that directly aligns latent features with the OCSVM decision boundary. The model is evaluated on two tasks: a new benchmark based on MNIST-C, and a challenging brain MRI subtle lesion detection task. Unlike most methods that focus on large, hyperintense lesions at the image level, our approach succeeds in targeting small, non-hyperintense lesions, and we evaluate voxel-wise metrics, addressing a more clinically relevant scenario. Both experiments evaluate a form of robustness to domain shifts, including corruption types in MNIST-C and scanner/age variations in MRI. Results demonstrate the performance and robustness of our proposed model, highlighting its potential for general UAD and real-world medical imaging applications. The source code is available at https://github.com/Nicolas-Pinon/uad_ocsvm_guided_repr_learning

Real-time Monitoring of Urinary Stone Status During Shockwave Lithotripsy.

Noble PA

PubMed · Jul 24, 2025
To develop a standardized, real-time feedback system for monitoring urinary stone fragmentation during shockwave lithotripsy (SWL), thereby optimizing treatment efficacy and minimizing patient risk. A two-pronged approach was implemented to quantify stone fragmentation in C-arm X-ray images. First, the initial pre-treatment stone image was compared to subsequent images to measure stone area loss. Second, a Convolutional Neural Network (CNN) was trained to estimate the probability that an image contains a urinary stone. These two criteria were integrated to create a real-time signaling system capable of evaluating shockwave efficacy during SWL. The system was developed using data from 522 shockwave treatments encompassing 4,057 C-arm X-ray images. The combined area-loss metric and CNN output enabled consistent real-time assessment of stone fragmentation, providing actionable feedback to guide SWL in diverse clinical contexts. The proposed system offers a novel and reliable method for monitoring urinary stone fragmentation during SWL. By helping to balance treatment efficacy with patient safety, it holds significant promise for semi-automated SWL platforms, particularly in resource-limited or remote environments such as arid regions and extended space missions.
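The first criterion, area loss relative to the pre-treatment image, reduces to pixel counting over segmented stone masks. A minimal sketch, assuming binary masks are already available (the segmentation step itself is out of scope here):

```python
def stone_area(mask):
    """Count foreground pixels in a binary stone mask (list of 0/1 rows)."""
    return sum(sum(row) for row in mask)

def area_loss(baseline_mask, current_mask):
    """Fraction of the pre-treatment stone area lost so far."""
    base = stone_area(baseline_mask)
    if base == 0:
        raise ValueError("baseline mask contains no stone pixels")
    return 1.0 - stone_area(current_mask) / base

baseline = [[1, 1], [1, 1]]  # 4 stone pixels before treatment
current = [[1, 0], [1, 0]]   # 2 remaining after shocks
print(area_loss(baseline, current))  # → 0.5
```

In the described system this fraction would be combined with the CNN's stone-presence probability to decide whether continued shocks are still effective.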

Q-Former Autoencoder: A Modern Framework for Medical Anomaly Detection

Francesco Dalmonte, Emirhan Bayar, Emre Akbas, Mariana-Iuliana Georgescu

arXiv preprint · Jul 24, 2025
Anomaly detection in medical images is an important yet challenging task due to the diversity of possible anomalies and the practical impossibility of collecting comprehensively annotated datasets. In this work, we tackle unsupervised medical anomaly detection by proposing a modernized autoencoder-based framework, the Q-Former Autoencoder, that leverages state-of-the-art pretrained vision foundation models such as DINO, DINOv2, and Masked Autoencoder. Instead of training encoders from scratch, we directly utilize frozen vision foundation models as feature extractors, enabling rich, multi-stage, high-level representations without domain-specific fine-tuning. We propose using the Q-Former architecture as the bottleneck, which enables control of the reconstruction sequence length while efficiently aggregating multiscale features. Additionally, we incorporate a perceptual loss computed using features from a pretrained Masked Autoencoder, guiding the reconstruction towards semantically meaningful structures. Our framework is evaluated on four diverse medical anomaly detection benchmarks, achieving state-of-the-art results on BraTS2021, RESC, and RSNA. Our results highlight the potential of vision foundation model encoders, pretrained on natural images, to generalize effectively to medical image analysis tasks without further fine-tuning. We release the code and models at https://github.com/emirhanbayar/QFAE.
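At inference, reconstruction-based frameworks of this kind score a sample by comparing features of the input and its reconstruction. A minimal sketch, with the feature extractor and autoencoder abstracted away as plain vectors (the real framework uses frozen DINO/DINOv2/MAE features and a Q-Former bottleneck; the vectors below are hypothetical):

```python
def perceptual_anomaly_score(feats_input, feats_recon):
    """Mean squared distance between feature vectors of an input and its
    reconstruction; higher scores indicate more anomalous samples."""
    assert len(feats_input) == len(feats_recon)
    n = len(feats_input)
    return sum((a - b) ** 2 for a, b in zip(feats_input, feats_recon)) / n

# Hypothetical feature vectors: a normal sample reconstructs closely,
# an anomalous one does not (the model only learned normal anatomy).
normal = perceptual_anomaly_score([0.2, 0.5, 0.1], [0.21, 0.49, 0.12])
anomalous = perceptual_anomaly_score([0.9, 0.1, 0.8], [0.3, 0.5, 0.2])
print(normal < anomalous)  # → True
```

Thresholding such a score (or a pixel-wise version of it) yields the anomaly decision.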

DEEP Q-NAS: A new algorithm based on neural architecture search and reinforcement learning for brain tumor identification from MRI.

Hasan MS, Komol MMR, Fahim F, Islam J, Pervin T, Hasan MM

PubMed · Jul 24, 2025
A significant obstacle in brain tumor treatment planning is determining the tumor's actual size. Magnetic resonance imaging (MRI) is a first-line modality for brain tumor diagnosis, but manually delineating a tumor from 3D MRI volumes is labor-intensive and depends heavily on operator experience. Machine learning has been greatly advanced by deep learning and computer-aided tumor detection methods. This study investigates the architecture of object detectors, focusing on search efficiency. More specifically, our goal is to efficiently explore the Feature Pyramid Network (FPN) and prediction head of a straightforward anchor-free object detector, called DEEP Q-NAS. The study utilized the BraTS 2021 dataset, which includes multi-parametric magnetic resonance imaging (mpMRI) scans. The architecture we found outperforms recent object detection models (such as Fast R-CNN, YOLOv7, and YOLOv8) by 2.2 to 7 points of average precision (AP) on the MS COCO 2017 dataset, with similar complexity and lower memory usage, demonstrating the effectiveness of the proposed NAS for object detection. The DEEP Q-NAS with ResNeXt-152 model achieves the highest detection accuracy, at 99%.