
Real-time Monitoring of Urinary Stone Status During Shockwave Lithotripsy.

Noble PA

PubMed · Jul 24 2025
To develop a standardized, real-time feedback system for monitoring urinary stone fragmentation during shockwave lithotripsy (SWL), thereby optimizing treatment efficacy and minimizing patient risk. A two-pronged approach was implemented to quantify stone fragmentation in C-arm X-ray images. First, the initial pre-treatment stone image was compared to subsequent images to measure stone area loss. Second, a Convolutional Neural Network (CNN) was trained to estimate the probability that an image contains a urinary stone. These two criteria were integrated to create a real-time signaling system capable of evaluating shockwave efficacy during SWL. The system was developed using data from 522 shockwave treatments encompassing 4,057 C-arm X-ray images. The combined area-loss metric and CNN output enabled consistent real-time assessment of stone fragmentation, providing actionable feedback to guide SWL in diverse clinical contexts. The proposed system offers a novel and reliable method for monitoring urinary stone fragmentation during SWL. By helping to balance treatment efficacy with patient safety, it holds significant promise for semi-automated SWL platforms, particularly in resource-limited or remote environments such as arid regions and extended space missions.
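For intuition, here is a minimal Python sketch of the fusion logic the abstract describes: a stone-area-loss measure against the pre-treatment baseline combined with a CNN stone probability. The function names and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def area_loss(baseline_mask: np.ndarray, current_mask: np.ndarray) -> float:
    """Fraction of the initial stone area no longer visible (0 = intact, 1 = gone)."""
    baseline_area = baseline_mask.sum()
    if baseline_area == 0:
        return 0.0
    return 1.0 - current_mask.sum() / baseline_area

def fragmentation_signal(baseline_mask, current_mask, stone_prob,
                         area_thresh=0.5, prob_thresh=0.3):
    """Combine both criteria into a coarse real-time treatment signal."""
    loss = area_loss(baseline_mask, current_mask)
    if loss >= area_thresh and stone_prob <= prob_thresh:
        return "fragmented"   # both criteria agree the stone is breaking up
    if loss < area_thresh and stone_prob > prob_thresh:
        return "intact"       # both criteria agree the stone persists
    return "uncertain"        # criteria disagree; keep monitoring
```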

Unsupervised anomaly detection using Bayesian flow networks: application to brain FDG PET in the context of Alzheimer's disease

Hugues Roy, Reuben Dorent, Ninon Burgos

arXiv preprint · Jul 23 2025
Unsupervised anomaly detection (UAD) plays a crucial role in neuroimaging for identifying deviations from healthy subject data and thus facilitating the diagnosis of neurological disorders. In this work, we focus on Bayesian flow networks (BFNs), a novel class of generative models, which have not yet been applied to medical imaging or anomaly detection. BFNs combine the strengths of diffusion frameworks and Bayesian inference. We introduce AnoBFN, an extension of BFNs for UAD, designed to: i) perform conditional image generation under high levels of spatially correlated noise, and ii) preserve subject specificity by incorporating recursive feedback from the input image throughout the generative process. We evaluate AnoBFN on the challenging task of Alzheimer's disease-related anomaly detection in FDG PET images. Our approach outperforms other state-of-the-art methods based on VAEs (beta-VAE), GANs (f-AnoGAN), and diffusion models (AnoDDPM), demonstrating its effectiveness at detecting anomalies while reducing false positive rates.
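The BFN-specific machinery is beyond a short sketch, but the generic reconstruction-residual recipe that such UAD methods build on can be outlined as follows; `model.reconstruct` is an assumed stand-in API and the threshold is illustrative, not the paper's method.

```python
import numpy as np

def anomaly_map(model, image: np.ndarray) -> np.ndarray:
    """Voxel-wise anomaly score: residual to a pseudo-healthy reconstruction."""
    pseudo_healthy = model.reconstruct(image)  # assumed stand-in API
    return np.abs(image - pseudo_healthy)

def detect_anomalies(model, image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Binary anomaly mask from the residual map (threshold is illustrative)."""
    return anomaly_map(model, image) > threshold
```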

Illicit object detection in X-ray imaging using deep learning techniques: A comparative evaluation

Jorgen Cani, Christos Diou, Spyridon Evangelatos, Vasileios Argyriou, Panagiotis Radoglou-Grammatikis, Panagiotis Sarigiannidis, Iraklis Varlamis, Georgios Th. Papadopoulos

arXiv preprint · Jul 23 2025
Automated X-ray inspection is crucial for efficient and unobtrusive security screening in various public settings. However, challenges such as object occlusion, variations in the physical properties of items, diversity in X-ray scanning devices, and limited training data hinder accurate and reliable detection of illicit items. Despite the large body of research in the field, reported experimental evaluations are often incomplete, with frequently conflicting outcomes. To shed light on the research landscape and facilitate further research, a systematic, detailed, and thorough comparative evaluation of recent Deep Learning (DL)-based methods for X-ray object detection is conducted. For this, a comprehensive evaluation framework is developed, composed of: a) Six recent, large-scale, and widely used public datasets for X-ray illicit item detection (OPIXray, CLCXray, SIXray, EDS, HiXray, and PIDray), b) Ten different state-of-the-art object detection schemes covering all main categories in the literature, including generic Convolutional Neural Network (CNN), custom CNN, generic transformer, and hybrid CNN-transformer architectures, and c) Various detection (mAP50 and mAP50:95) and time/computational-complexity (inference time (ms), parameter size (M), and computational load (GFLOPS)) metrics. A thorough analysis of the results leads to critical observations and insights, emphasizing key aspects such as: a) Overall behavior of the object detection schemes, b) Object-level detection performance, c) Dataset-specific observations, and d) Time efficiency and computational complexity analysis. To support reproducibility of the reported experimental results, the evaluation code and model weights are made publicly available at https://github.com/jgenc/xray-comparative-evaluation.
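The time/computational-complexity metrics listed above (parameter size in M, inference time in ms) can be measured with a few lines of PyTorch; this sketch assumes any `torch.nn.Module` detector and a nominal 640×640 input, not the paper's exact benchmarking harness.

```python
import time
import torch

def param_count_m(model: torch.nn.Module) -> float:
    """Parameter size in millions (the 'M' column of such benchmarks)."""
    return sum(p.numel() for p in model.parameters()) / 1e6

@torch.no_grad()
def mean_latency_ms(model: torch.nn.Module, input_shape=(1, 3, 640, 640),
                    warmup: int = 5, runs: int = 50) -> float:
    """Mean forward-pass latency in milliseconds on a random input."""
    model.eval()
    x = torch.randn(*input_shape)
    for _ in range(warmup):            # warm-up passes excluded from timing
        model(x)
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return (time.perf_counter() - start) / runs * 1e3
```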

Artificial Intelligence for Detecting Pulmonary Embolisms via CT: A Workflow-oriented Implementation.

Abed S, Hergan K, Dörrenberg J, Brandstetter L, Lauschmann M

PubMed · Jul 23 2025
Detecting Pulmonary Embolism (PE) is critical for effective patient care, and Artificial Intelligence (AI) has shown promise in supporting radiologists in this task. Integrating AI into radiology workflows requires not only evaluation of its diagnostic accuracy but also assessment of its acceptance among clinical staff. This study aims to evaluate the performance of an AI algorithm in detecting PEs on contrast-enhanced computed tomography pulmonary angiograms (CTPAs) and to assess the level of acceptance of the algorithm among radiology department staff. This retrospective study analyzed anonymized CTPA data from a university clinic. Surveys were conducted at three and nine months after the implementation of a commercially available AI algorithm designed to flag CTPA scans with suspected PE. A thoracic radiologist and a cardiac radiologist served as the reference standard for evaluating the performance of the algorithm. The AI analyzed 59 CTPA cases during the initial evaluation and 46 cases in the follow-up assessment. In the first evaluation, the AI algorithm demonstrated a sensitivity of 84.6% and a specificity of 94.3%. By the second evaluation, its performance had improved, achieving a sensitivity of 90.9% and a specificity of 96.7%. Radiologists' acceptance of the AI tool increased over time. Nevertheless, despite this growing acceptance, many radiologists expressed a preference for hiring an additional physician over adopting the AI solution if the costs were comparable. Our study demonstrated high sensitivity and specificity of the AI algorithm, with improved performance over time and a reduced rate of unanalyzed scans. These improvements likely reflect both algorithmic refinement and better data integration. Departmental feedback indicated growing user confidence and trust in the tool. However, many radiologists continued to prefer the addition of a resident over reliance on the algorithm. Overall, the AI showed promise as a supportive "second-look" tool in emergency radiology settings. The AI algorithm demonstrated diagnostic performance comparable to that reported in similar studies for detecting PE on CTPA, with both sensitivity and specificity showing improvement over time. Radiologists' acceptance of the algorithm increased throughout the study period, underscoring its potential as a complementary tool to physician expertise in clinical practice.
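For reference, the reported sensitivity and specificity follow directly from confusion-matrix counts; the counts in this sketch are illustrative back-of-envelope numbers consistent with the second-evaluation percentages, not the study's raw data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: detected PEs over all PEs present."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: correctly cleared scans over all PE-free scans."""
    return tn / (tn + fp)

# Illustrative counts only: 10 TP / 1 FN and 29 TN / 1 FP reproduce the
# second-evaluation figures of 90.9% and 96.7% quoted above.
print(sensitivity(10, 1), specificity(29, 1))  # 0.909..., 0.966...
```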

Deep learning algorithm for the automatic assessment of axial vertebral rotation in patients with scoliosis using the Nash-Moe method.

Kim JK, Wang MX, Park D, Chang MC

PubMed · Jul 22 2025
Accurate assessment of axial vertebral rotation (AVR) is essential for managing idiopathic scoliosis. The Nash-Moe classification method has been extensively used for AVR assessment; however, its subjective nature can lead to measurement variability. Herein, we propose an automated deep learning (DL) model for AVR assessment based on posteroanterior spinal radiographs. We developed a two-stage DL framework using the MMRotate toolbox and analyzed 1080 posteroanterior spinal radiographs of patients aged 4-18 years. The framework comprises a vertebra detection model (864 training and 216 validation images) and a pedicle detection model (14,608 training and 3652 validation images). We improved the Nash-Moe classification method by implementing a 12-segment division system and a width-ratio metric for precise pedicle assessment. The vertebra and pedicle detection models achieved mean average precision values of 0.909 and 0.905, respectively. The overall classification accuracy was 0.74, with grade-specific performance between 0.70 and 1.00 for precision and 0.33 and 0.93 for recall across Grades 0-3. The proposed DL framework processed complete posteroanterior radiographs in < 5 s per case, compared with 114 s per radiograph for conventional manual measurements. The best performance was observed in mild to moderate rotation cases, with performance in severe rotation cases limited by insufficient data. The implementation of the DL framework for the automated Nash-Moe classification method exhibited satisfactory accuracy and exceptional efficiency. However, this study is limited by low recall (0.33) for Grade 3 and the inability to classify Grade 4 owing to dataset constraints. Further validation using augmented datasets that include severe rotation cases is necessary.
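A hedged sketch of the two-stage flow: vertebrae are detected first, pedicles are then localized within each vertebra, and a grade is derived from the pedicle's horizontal position relative to the vertebral body width. The detector APIs and grade cutoffs below are illustrative assumptions, not the paper's 12-segment/width-ratio rules.

```python
def nash_moe_grade(pedicle_x: float, vert_left: float, vert_right: float) -> int:
    """Grade 0-3 from the convex-side pedicle's horizontal position, expressed
    as a fraction of vertebral body width. Cutoffs are illustrative."""
    frac = (pedicle_x - vert_left) / (vert_right - vert_left)
    if frac > 0.40:
        return 0  # pedicle near its normal, symmetric position
    if frac > 0.25:
        return 1
    if frac > 0.10:
        return 2
    return 3      # pedicle migrated toward the midline

def assess_radiograph(radiograph, vertebra_model, pedicle_model):
    """Two-stage flow: vertebra boxes first, then pedicles within each box."""
    grades = []
    for vbox in vertebra_model.detect(radiograph):            # assumed detector API
        for pbox in pedicle_model.detect(radiograph.crop(vbox)):
            grades.append(nash_moe_grade(pbox.center_x, vbox.left, vbox.right))
    return grades
```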

Area detection improves the person-based performance of a deep learning system for classifying the presence of carotid artery calcifications on panoramic radiographs.

Kuwada C, Mitsuya Y, Fukuda M, Yang S, Kise Y, Mori M, Naitoh M, Ariji Y, Ariji E

PubMed · Jul 22 2025
This study investigated deep learning (DL) systems for diagnosing carotid artery calcifications (CAC) on panoramic radiographs. To this end, two DL systems, one with a preceding and one with a simultaneous area-detection function, were developed to classify CAC on panoramic radiographs, and their person-based classification performance was compared with that of a DL model created directly from entire panoramic radiographs. A total of 580 panoramic radiographs from 290 patients (with CAC) and 290 controls (without CAC) were used to create and evaluate the DL systems. Two convolutional neural networks, GoogLeNet and YOLOv7, were utilized. The following three systems were created: (1) direct classification of entire panoramic images (System 1), (2) preceding region-of-interest (ROI) detection followed by classification (System 2), and (3) simultaneous ROI detection and classification (System 3). Person-based evaluation using the same test data was performed to compare the three systems. A side-based (left and right sides of participants) evaluation was also performed on Systems 2 and 3. Between-system differences in area under the receiver-operating characteristics curve (AUC) were assessed using DeLong's test. For the side-based evaluation, the AUCs of Systems 2 and 3 were 0.89 and 0.84, respectively, and in the person-based evaluation, Systems 2 and 3 had significantly higher AUC values of 0.86 and 0.90, respectively, compared with System 1 (P < 0.001). No significant difference was found between Systems 2 and 3. Preceding or simultaneous use of area detection improved the person-based performance of DL for classifying the presence of CAC on panoramic radiographs.
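The person-based evaluation can be pictured as aggregating side-level model scores into one score per participant before computing the AUC; the max-aggregation rule in this sketch is an assumption (the abstract does not specify it), and DeLong's test would then compare the resulting AUCs between systems.

```python
from sklearn.metrics import roc_auc_score

def person_based_auc(side_scores: dict, side_labels: dict) -> float:
    """side_scores / side_labels map person_id -> {'left': value, 'right': value}."""
    y_true, y_score = [], []
    for pid, sides in side_scores.items():
        y_score.append(max(sides.values()))                # person flagged if either side is
        y_true.append(int(any(side_labels[pid].values())))
    return roc_auc_score(y_true, y_score)
```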

Results from a Swedish model-based analysis of the cost-effectiveness of AI-assisted digital mammography.

Lyth J, Gialias P, Husberg M, Bernfort L, Bjerner T, Wiberg MK, Levin LÅ, Gustafsson H

PubMed · Jul 19 2025
To evaluate the cost-effectiveness of AI-assisted digital mammography (AI-DM) compared with conventional biennial breast cancer digital mammography screening (cDM) with double reading of screening mammograms, and to investigate the change in cost-effectiveness across four different sub-strategies of AI-DM. A decision-analytic state-transition Markov model was used to analyse the decision of whether to use cDM or AI-DM in breast cancer screening. In this Markov model, one-year cycles were used, and the analysis was performed from a healthcare perspective with a lifetime horizon. In the model, we analysed 1000 hypothetical individuals attending mammography screening assessed with AI-DM compared with 1000 hypothetical individuals assessed with cDM. The total costs, including both screening-related costs and breast cancer-related costs, were €3,468,967 and €3,528,288 for AI-DM and cDM, respectively. AI-DM resulted in a cost saving of €59,320 compared to cDM. Per 1000 individuals, AI-DM gained 10.8 quality-adjusted life years (QALYs) compared to cDM. Gaining QALYs at a lower cost means that the AI-DM screening strategy was dominant compared with cDM. Break-even occurred at the second screening, at age 42 years. This analysis showed that AI-assisted mammography for biennial breast cancer screening in a Swedish population of women aged 40-74 years is a cost-saving strategy compared with a conventional strategy using double human screen reading. Further clinical studies are needed, as scenario analyses showed that other strategies, more dependent on AI, are also cost-saving. Question: Is AI-DM cost-effective in comparison with conventional biennial cDM screening? Findings: AI-DM is cost-effective, and the break-even point occurred at the second screening at age 42 years. Clinical relevance: The implementation of AI is clearly cost-effective, as it reduces the total cost for the healthcare system and simultaneously results in a gain in QALYs.
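A decision-analytic state-transition (Markov) cohort model with one-year cycles, as described above, can be sketched in a few lines; the states, transition probabilities, costs, and utilities below are placeholders, not the study's Swedish inputs.

```python
import numpy as np

states = ["healthy", "breast_cancer", "dead"]
P = np.array([[0.985, 0.010, 0.005],    # annual transition probabilities (placeholders)
              [0.000, 0.950, 0.050],
              [0.000, 0.000, 1.000]])
cost = np.array([50.0, 20000.0, 0.0])    # per-cycle cost per state (EUR, placeholders)
utility = np.array([0.90, 0.70, 0.0])    # per-cycle QALY weight per state (placeholders)

cohort = np.array([1000.0, 0.0, 0.0])    # 1000 individuals start healthy
total_cost = total_qalys = 0.0
for _ in range(40):                      # lifetime horizon, truncated for the sketch
    total_cost += cohort @ cost
    total_qalys += cohort @ utility
    cohort = cohort @ P                  # advance the cohort one one-year cycle
print(f"cost: {total_cost:.0f} EUR, QALYs: {total_qalys:.0f}")
```

Running this twice with strategy-specific inputs (e.g. different detection probabilities and screening costs for AI-DM vs cDM) and comparing total costs and QALYs is exactly the dominance comparison reported above.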

Detecting Fifth Metatarsal Fractures on Radiographs through the Lens of Smartphones: A FIXUS AI Algorithm

Taseh, A., Shah, A., Eftekhari, M., Flaherty, A., Ebrahimi, A., Jones, S., Nukala, V., Nazarian, A., Waryasz, G., Ashkani-Esfahani, S.

medRxiv preprint · Jul 18 2025
Background: Fifth metatarsal (5MT) fractures are common but challenging to diagnose, particularly with limited expertise or subtle fractures. Deep learning shows promise but faces limitations due to image quality requirements. This study develops a deep learning model to detect 5MT fractures from smartphone-captured radiograph images, enhancing the accessibility of diagnostic tools. Methods: A retrospective study included patients aged >18 with 5MT fractures (n=1240) and controls (n=1224). Radiographs (AP, oblique, lateral) from Electronic Health Records (EHR) were obtained and photographed using a smartphone, creating a new dataset (SP). Models using ResNet152V2 were trained on the EHR, SP, and combined datasets, then evaluated on a separate smartphone test dataset (SP-test). Results: On validation, the SP model achieved optimal performance (AUROC: 0.99). On the SP-test dataset, the EHR model's performance decreased (AUROC: 0.83), whereas the SP and combined models maintained high performance (AUROC: 0.99). Conclusions: Smartphone-specific deep learning models effectively detect 5MT fractures, suggesting their practical utility in resource-limited settings.
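The classification setup maps naturally onto a standard Keras transfer-learning pattern; this sketch assumes an ImageNet-pretrained ResNet152V2 backbone with a binary head, with input size, dropout, and optimizer chosen for illustration rather than taken from the paper.

```python
import tensorflow as tf

# ImageNet-pretrained backbone with global average pooling, no classifier head
base = tf.keras.applications.ResNet152V2(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # fracture probability
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])
```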

OrthoInsight: Rib Fracture Diagnosis and Report Generation Based on Multi-Modal Large Models

Ningyong Wu, Jinzhi Wang, Wenhong Zhao, Chenzhan Yu, Zhigang Xiu, Duwei Dai

arXiv preprint · Jul 18 2025
The growing volume of medical imaging data has increased the need for automated diagnostic tools, especially for musculoskeletal injuries like rib fractures, commonly detected via CT scans. Manual interpretation is time-consuming and error-prone. We propose OrthoInsight, a multi-modal deep learning framework for rib fracture diagnosis and report generation. It integrates a YOLOv9 model for fracture detection, a medical knowledge graph for retrieving clinical context, and a fine-tuned LLaVA language model for generating diagnostic reports. OrthoInsight combines visual features from CT images with expert textual data to deliver clinically useful outputs. Evaluated on 28,675 annotated CT images and expert reports, it achieves high performance across Diagnostic Accuracy, Content Completeness, Logical Coherence, and Clinical Guidance Value, with an average score of 4.28, outperforming models like GPT-4 and Claude-3. This study demonstrates the potential of multi-modal learning in transforming medical image analysis and providing effective support for radiologists.
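Structurally, the pipeline is three stages chained together; every callable in this sketch is a hypothetical stand-in for the paper's YOLOv9 detector, knowledge-graph retrieval, and fine-tuned LLaVA report generator.

```python
def orthoinsight_report(ct_image, detector, knowledge_graph, report_model) -> str:
    """Three chained stages; all callables here are hypothetical stand-ins."""
    findings = detector.detect(ct_image)                           # 1) fracture boxes + labels
    context = [knowledge_graph.lookup(f.label) for f in findings]  # 2) clinical context
    prompt = {"image": ct_image, "findings": findings, "context": context}
    return report_model.generate(prompt)                           # 3) diagnostic report text
```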

DUSTrack: Semi-automated point tracking in ultrasound videos

Praneeth Namburi, Roger Pallarès-López, Jessica Rosendorf, Duarte Folgado, Brian W. Anthony

arXiv preprint · Jul 18 2025
Ultrasound technology enables safe, non-invasive imaging of dynamic tissue behavior, making it a valuable tool in medicine, biomechanics, and sports science. However, accurately tracking tissue motion in B-mode ultrasound remains challenging due to speckle noise, low edge contrast, and out-of-plane movement. These challenges complicate the task of tracking anatomical landmarks over time, which is essential for quantifying tissue dynamics in many clinical and research applications. This manuscript introduces DUSTrack (Deep learning and optical flow-based toolkit for UltraSound Tracking), a semi-automated framework for tracking arbitrary points in B-mode ultrasound videos. We combine deep learning with optical flow to deliver high-quality and robust tracking across diverse anatomical structures and motion patterns. The toolkit includes a graphical user interface that streamlines the generation of high-quality training data and supports iterative model refinement. It also implements a novel optical-flow-based filtering technique that reduces high-frequency frame-to-frame noise while preserving rapid tissue motion. DUSTrack demonstrates superior accuracy compared to contemporary zero-shot point trackers and performs on par with specialized methods, establishing its potential as a general and foundational tool for clinical and biomechanical research. We demonstrate DUSTrack's versatility through three use cases: cardiac wall motion tracking in echocardiograms, muscle deformation analysis during reaching tasks, and fascicle tracking during ankle plantarflexion. As an open-source solution, DUSTrack offers a powerful, flexible framework for point tracking to quantify tissue motion from ultrasound videos. DUSTrack is available at https://github.com/praneethnamburi/DUSTrack.
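The optical-flow half of such a toolkit can be illustrated with OpenCV's pyramidal Lucas-Kanade tracker; DUSTrack additionally combines a learned tracker and a flow-based temporal filter, which this sketch does not reproduce, and the window/pyramid parameters are illustrative.

```python
import cv2
import numpy as np

def track_points(frames, points_xy):
    """frames: list of grayscale uint8 images; points_xy: (N, 2) initial positions."""
    pts = np.asarray(points_xy, dtype=np.float32).reshape(-1, 1, 2)
    trajectory = [pts.reshape(-1, 2).copy()]
    for prev, curr in zip(frames, frames[1:]):
        pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev, curr, pts, None, winSize=(21, 21), maxLevel=3)
        trajectory.append(pts.reshape(-1, 2).copy())  # status flags lost points
    return np.stack(trajectory)  # (num_frames, N, 2) point positions over time
```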