Page 26 of 45442 results

Development and Validation of an AI Model to Improve the Diagnosis of Deep Infiltrating Endometriosis for Junior Sonologists.

Xu J, Zhang A, Zheng Z, Cao J, Zhang X

PubMed | Jul 1 2025
This study aims to develop and validate an artificial intelligence (AI) model based on ultrasound (US) videos and images to improve the performance of junior sonologists in detecting deep infiltrating endometriosis (DE). In this retrospective study, data were collected from female patients who underwent US examinations and had DE. The US image records were divided into two parts. First, during the model development phase, an AI-DE model was trained employing YOLOv8 to detect pelvic DE nodules. Subsequently, its clinical applicability was evaluated by comparing the diagnostic performance of junior sonologists with and without AI-model assistance. The AI-DE model was trained using 248 images and demonstrated high performance, with a mAP50 (mean Average Precision at IoU threshold 0.5) of 0.9779 on the test set. A total of 147 images were used to evaluate the diagnostic performance. The diagnostic performance of junior sonologists improved with the assistance of the AI-DE model. The area under the receiver operating characteristic curve (AUROC) improved from 0.748 (95% CI, 0.624-0.867) to 0.878 (95% CI, 0.792-0.964; p < 0.0001) for junior sonologist A, and from 0.713 (95% CI, 0.592-0.835) to 0.798 (95% CI, 0.677-0.919; p < 0.0001) for junior sonologist B. Notably, the sensitivity of both sonologists increased significantly, with the largest increase being from 77.42% to 94.35%. The AI-DE model based on US images showed good performance in DE detection and significantly improved the diagnostic performance of junior sonologists.
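The mAP50 metric reported above scores a predicted bounding box as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of the IoU computation (illustrative only, not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection counts as correct at the mAP50 threshold when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```

Averaging precision over recall levels for detections thresholded this way, across classes, yields the mAP50 figure.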

Deep learning-assisted detection of meniscus and anterior cruciate ligament combined tears in adult knee magnetic resonance imaging: a crossover study with arthroscopy correlation.

Behr J, Nich C, D'Assignies G, Zavastin C, Zille P, Herpe G, Triki R, Grob C, Pujol N

PubMed | Jul 1 2025
We aimed to compare the diagnostic performance of physicians in the detection of arthroscopically confirmed meniscus and anterior cruciate ligament (ACL) tears on knee magnetic resonance imaging (MRI), with and without assistance from a deep learning (DL) model. We obtained preoperative MR images from 88 knees of patients who underwent arthroscopic meniscal repair, with or without ACL reconstruction. Ninety-eight MR images of knees without signs of meniscus or ACL tears were obtained from a publicly available database after matching on age and ACL status (normal or torn), resulting in a global dataset of 186 MRI examinations. The Keros® (Incepto, Paris) DL algorithm, previously trained for the detection and characterization of meniscus and ACL tears, was used for MRI assessment. Magnetic resonance images were individually and blindly annotated by three physicians and the DL algorithm. After three weeks, the three human raters repeated image assessment with model assistance, performed in a different order. The Keros® algorithm achieved an area under the curve (AUC) of 0.96 (95% CI 0.93, 0.99), 0.91 (95% CI 0.85, 0.96), and 0.99 (95% CI 0.98, 0.997) in the detection of medial meniscus, lateral meniscus, and ACL tears, respectively. With model assistance, physicians achieved higher sensitivity (91% vs. 83%, p = 0.04) and similar specificity (91% vs. 87%, p = 0.09) in the detection of medial meniscus tears. Regarding lateral meniscus tears, sensitivity and specificity were similar with/without model assistance. Regarding ACL tears, physicians achieved higher specificity when assisted by the algorithm (70% vs. 51%, p = 0.01) but similar sensitivity with/without model assistance (93% vs. 96%, p = 0.13). The current model consistently helped physicians in the detection of medial meniscus and ACL tears, notably when they were combined. Diagnostic study, Level III.
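The sensitivity and specificity figures compared above derive directly from per-reader confusion counts against the arthroscopic reference. A minimal sketch, using hypothetical counts (not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts
    against a reference standard (here, arthroscopy)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for illustration only:
sens, spec = sens_spec(tp=80, fn=8, tn=89, fp=9)
print(round(sens, 2), round(spec, 2))  # 0.91 0.91
```

Comparing such paired proportions with/without model assistance is what produces the reported p-values.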

TIER-LOC: Visual Query-based Video Clip Localization in fetal ultrasound videos with a multi-tier transformer.

Mishra D, Saha P, Zhao H, Hernandez-Cruz N, Patey O, Papageorghiou AT, Noble JA

PubMed | Jul 1 2025
In this paper, we introduce the Visual Query-based task of Video Clip Localization (VQ-VCL) for medical video understanding. Specifically, we aim to retrieve a video clip containing frames similar to a given exemplar frame from a given input video. To solve the task, we propose a novel visual query-based video clip localization model called TIER-LOC. TIER-LOC is designed to improve video clip retrieval, especially in fine-grained videos, by extracting features from different levels, i.e., coarse to fine-grained, referred to as TIERS. The aim is to utilize multi-Tier features for detecting subtle differences and adapting to scale or resolution variations, leading to improved video-clip retrieval. TIER-LOC has three main components: (1) a Multi-Tier Spatio-Temporal Transformer to fuse spatio-temporal features extracted from multiple Tiers of video frames with features from multiple Tiers of the visual query, enabling better video understanding; (2) a Multi-Tier, Dual Anchor Contrastive Loss to deal with real-world annotation noise, which can be notable at event boundaries and in videos featuring highly similar objects; (3) a Temporal Uncertainty-Aware Localization Loss designed to reduce the model's sensitivity to imprecise event boundaries. This is achieved by relaxing hard boundary constraints, thus allowing the model to learn underlying class patterns and not be influenced by individual noisy samples. To demonstrate the efficacy of TIER-LOC, we evaluate it on two ultrasound video datasets and an open-source egocentric video dataset. First, we develop a sonographer workflow assistive task model to detect standard-frame clips in fetal ultrasound heart sweeps. Second, we assess our model's performance in retrieving standard-frame clips for detecting fetal anomalies in routine ultrasound scans, using the large-scale PULSE dataset. Lastly, we test our model's performance on an open-source computer vision video dataset by creating a VQ-VCL fine-grained video dataset based on the Ego4D dataset. Our model outperforms the best-performing state-of-the-art model by 7%, 4%, and 4% on the three video datasets, respectively.
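The dual-anchor contrastive idea can be viewed as a symmetric InfoNCE-style loss in which both the visual query embedding and the matching clip embedding act as anchors. The following is an illustrative, simplified sketch of that general pattern, not the paper's exact formulation:

```python
import math

def info_nce(anchor, positive, negatives, temp=0.1):
    """Single-direction InfoNCE: pull the anchor toward its positive,
    push it away from the negatives (all embeddings are plain lists)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    logits = [dot(anchor, positive) / temp] + [dot(anchor, n) / temp for n in negatives]
    m = max(logits)
    log_norm = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_norm - logits[0]  # negative log-softmax of the positive

def dual_anchor_loss(query_emb, clip_emb, neg_clips, neg_queries, temp=0.1):
    """Symmetric form: both the visual query and the matching clip serve as anchors."""
    return 0.5 * (info_nce(query_emb, clip_emb, neg_clips, temp)
                  + info_nce(clip_emb, query_emb, neg_queries, temp))
```

When the query and clip embeddings align while negatives are dissimilar, the loss approaches zero; noisy labels near event boundaries make hard negatives unreliable, which motivates the paper's multi-tier treatment.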

Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges.

Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D

PubMed | Jul 1 2025
The US healthcare system faces significant challenges, including clinician burnout, operational inefficiencies, and concerns about patient safety. Artificial intelligence (AI), particularly generative AI, has the potential to address these challenges, but its adoption, effectiveness, and barriers to implementation are not well understood. This study aimed to evaluate the current state of AI adoption in US healthcare systems and to assess successes and barriers to implementation during the early generative AI era. This cross-sectional survey was conducted in Fall 2024 and included 67 health systems that are members of the Scottsdale Institute, a collaborative of US non-profit healthcare organizations. Forty-three health systems completed the survey (64% response rate). Respondents provided data on the deployment status and perceived success of 37 AI use cases across 10 categories. The primary outcomes were the extent of AI use case development, piloting, or deployment; the degree of reported success for AI use cases; and the most significant barriers to adoption. Across the 43 responding health systems, AI adoption and perceptions of success varied significantly. Ambient Notes, a generative AI tool for clinical documentation, was the only use case with 100% of respondents reporting adoption activities, and 53% reported a high degree of success with using AI for clinical documentation. Imaging and radiology emerged as the most widely deployed clinical AI use case, with 90% of organizations reporting at least partial deployment, although successes with diagnostic use cases were limited. Similarly, many organizations have deployed AI for clinical risk stratification, such as early sepsis detection, but only 38% report high success in this area. Immature AI tools were identified as a significant barrier to adoption, cited by 77% of respondents, followed by financial concerns (47%) and regulatory uncertainty (40%). Ambient Notes is rapidly advancing in US healthcare systems and demonstrating early success. Other AI use cases show varying degrees of adoption and success, constrained by barriers such as immature AI tools, financial concerns, and regulatory uncertainty. Addressing these challenges through robust evaluations, shared strategies, and governance models will be essential to ensure effective integration and adoption of AI into healthcare practice.

Novel artificial intelligence approach in neurointerventional practice: Preliminary findings on filter movement and ischemic lesions in carotid artery stenting.

Sagawa H, Sakakura Y, Hanazawa R, Takahashi S, Wakabayashi H, Fujii S, Fujita K, Hirai S, Hirakawa A, Kono K, Sumita K

PubMed | Jul 1 2025
Embolic protection devices (EPDs) used during carotid artery stenting (CAS) are crucial in reducing ischemic complications. Although minimizing the movement of filter-type EPDs is considered important, little research has substantiated this practice. We used an artificial intelligence (AI)-based device recognition technology to investigate the correlation between filter movements and ischemic complications. We retrospectively studied 28 consecutive patients who underwent CAS using FilterWire EZ (Boston Scientific, Marlborough, MA, USA) from April 2022 to September 2023. Clinical data, procedural videos, and postoperative magnetic resonance imaging were collected. An AI-based device detection function in the Neuro-Vascular Assist (iMed Technologies, Tokyo, Japan) was used to quantify the filter movement. Multivariate proportional odds model analysis was performed to explore the correlations between postoperative diffusion-weighted imaging (DWI) hyperintense lesions and potential ischemic risk factors, including filter movement. In total, 23 patients had sufficient information and were eligible for quantitative analysis. Fourteen patients (60.9%) showed postoperative DWI hyperintense lesions. Multivariate analysis revealed that filter movement distance (odds ratio, 1.01; 95% confidence interval, 1.00-1.02; p = 0.003) and high-intensity signals on time-of-flight magnetic resonance angiography were significantly associated with DWI hyperintense lesions. Age, symptomatic status, and operative time were not significantly correlated. Increased filter movement during CAS was correlated with a higher incidence of postoperative DWI hyperintense lesions. AI-based quantitative evaluation of endovascular techniques may enable demonstration of previously unproven recommendations. To the best of our knowledge, this is the first study to use an AI system for quantitative evaluation to address real-world clinical issues.
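Quantifying filter movement from per-frame device detections reduces, in its simplest form, to summing frame-to-frame displacements of the detected coordinates. A minimal sketch under that assumption (the actual Neuro-Vascular Assist output format is not described in the abstract):

```python
import math

def total_movement(positions):
    """Total path length of a tracked device across video frames.
    positions: list of (x, y) filter coordinates, one per frame,
    as would be produced by an AI device-detection step."""
    return sum(math.dist(p, q) for p, q in zip(positions, positions[1:]))

# Hypothetical per-frame detections (pixels), for illustration only:
track = [(0, 0), (3, 4), (3, 4), (6, 8)]
print(total_movement(track))  # 5.0 + 0.0 + 5.0 = 10.0
```

A per-patient total like this is the kind of continuous covariate that can then enter a proportional odds model alongside clinical risk factors.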

Cephalometric landmark detection using vision transformers with direct coordinate prediction.

Laitenberger F, Scheuer HT, Scheuer HA, Lilienthal E, You S, Friedrich RE

PubMed | Jul 1 2025
Cephalometric Landmark Detection (CLD), i.e. annotating interest points in lateral X-ray images, is the crucial first step of every orthodontic therapy. While CLD has immense potential for automation using Deep Learning methods, carefully crafted contemporary approaches using convolutional neural networks and heatmap prediction do not qualify for large-scale clinical application due to insufficient performance. We propose a novel approach using Vision Transformers (ViTs) with direct coordinate prediction, avoiding the memory-intensive heatmap prediction common in previous work. Through extensive ablation studies comparing our method against contemporary CNN architectures (ConvNext V2) and heatmap-based approaches (Segformer), we demonstrate that ViTs with coordinate prediction achieve superior performance with more than 2 mm improvement in mean radial error compared to state-of-the-art CLD methods. Our results show that while non-adapted CNN architectures perform poorly on the given task, contemporary approaches may be too tailored to specific datasets, failing to generalize to different and especially sparse datasets. We conclude that using general-purpose Vision Transformers with direct coordinate prediction shows great promise for future research on CLD and medical computer vision.
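The mean radial error used to compare the methods above is simply the mean Euclidean distance between predicted and ground-truth landmark coordinates. A minimal sketch with hypothetical coordinates:

```python
import math

def mean_radial_error(pred, truth):
    """Mean Euclidean distance between predicted and ground-truth
    landmark coordinates (typically reported in mm)."""
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)

# Hypothetical landmark coordinates for illustration only:
pred = [(10.0, 20.0), (30.0, 43.0)]
truth = [(13.0, 24.0), (30.0, 40.0)]
print(mean_radial_error(pred, truth))  # (5.0 + 3.0) / 2 = 4.0
```

Direct coordinate prediction regresses these (x, y) pairs straight from the transformer output, avoiding the dense per-landmark heatmaps whose memory cost the abstract highlights.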

Magnetic resonance imaging of cruciate ligament disorders: current updates.

Yang T, Li Y, Yang L, Liu Q

PubMed | Jul 1 2025
While conventional structural magnetic resonance imaging (MRI) can detect cruciate ligament anatomy and injuries, it has inherent limitations. Recently, novel MRI technologies such as quantitative MRI and artificial intelligence (AI) have emerged to mitigate these shortcomings, providing critical quantitative insights beyond gross morphological imaging and poised to expand current knowledge in assessing cruciate ligament injuries and to facilitate clinical decision making. Quantitative MRI serves as a noninvasive histological and quantification tool, which significantly improves the evaluation of degeneration and repair processes. AI plays a crucial role in automating radiological estimations and enabling data-driven predictions of future events. Despite the transformative impact of advanced MRI techniques on the analytical and diagnostic algorithms related to cruciate ligament disorders, future efforts are warranted to address challenges such as economic burdens and ethical considerations.

A novel deep learning system for automated diagnosis and grading of lumbar spinal stenosis based on spine MRI: model development and validation.

Wang T, Wang A, Zhang Y, Liu X, Fan N, Yuan S, Du P, Wu Q, Chen R, Xi Y, Gu Z, Fei Q, Zang L

PubMed | Jul 1 2025
The study aimed to develop a single-stage deep learning (DL) screening system for automated binary and multiclass grading of lumbar central stenosis (LCS), lateral recess stenosis (LRS), and lumbar foraminal stenosis (LFS). Consecutive inpatients who underwent lumbar MRI at our center were retrospectively reviewed for the internal dataset. Axial and sagittal lumbar MRI scans were collected. Based on a new MRI diagnostic criterion, all MRI studies were labeled by two spine specialists and calibrated by a third spine specialist to serve as the reference standard. Furthermore, two spine clinicians labeled all MRI studies independently to compare interobserver reliability with the DL model. Samples were assigned to training, validation, and test sets at a proportion of 8:1:1. Additional patients from another center were enrolled as the external test dataset. A modified single-stage YOLOv5 network was designed for simultaneous detection of regions of interest (ROIs) and grading of LCS, LRS, and LFS. Quantitative metrics of the model's accuracy and reliability were computed. In total, 420 and 50 patients were enrolled in the internal and external datasets, respectively. High recalls of 97.4%-99.8% were achieved for ROI detection of lumbar spinal stenosis (LSS). The system achieved multigrade area under the curve (AUC) values of 0.93-0.97 in the internal test set and 0.85-0.94 in the external test set for LCS, LRS, and LFS. In binary grading, the DL model achieved high sensitivities of 0.97 for LCS, 0.98 for LRS, and 0.96 for LFS, slightly better than those achieved by spine clinicians in the internal test set. In the external test set, the binary sensitivities were 0.98 for LCS, 0.96 for LRS, and 0.95 for LFS. For reliability assessment, the kappa coefficients between the DL model and the reference standard were 0.92, 0.88, and 0.91 for LCS, LRS, and LFS, respectively, slightly higher than those achieved by nonexpert spine clinicians. The authors designed a novel DL system that demonstrated promising performance, especially in sensitivity, for automated diagnosis and grading of different types of lumbar spinal stenosis using spine MRI. The reliability of the system was better than that of spine surgeons. The authors' system may serve as a triage tool for LSS to reduce misdiagnosis and optimize routine processes in clinical work.
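The kappa coefficients reported above measure chance-corrected agreement between the model's grades and the reference standard. A minimal sketch of Cohen's kappa with hypothetical grades (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical labels over the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical stenosis grades (0 = none, 1 = mild, 2 = severe):
model_grades = [0, 1, 2, 2, 1, 0, 0, 2]
reference = [0, 1, 2, 1, 1, 0, 0, 2]
print(round(cohens_kappa(model_grades, reference), 2))  # ≈ 0.81
```

Values near the study's 0.88-0.92 indicate agreement well beyond what label frequencies alone would produce by chance.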

Federated learning-based CT liver tumor detection using a teacher‒student SANet with semisupervised learning.

Lee CS, Lien JJ, Chain K, Huang LC, Hsu ZW

PubMed | Jul 1 2025
Detecting liver tumors via computed tomography (CT) scans is a critical but labor-intensive task. Extensive expert annotations are needed to train effective machine learning models. This study presents an innovative approach that leverages federated learning in combination with a teacher-student framework, an enhanced slice-aware network (SANet), and semisupervised learning (SSL) techniques to improve the CT-based liver tumor detection process while significantly reducing its labor and time costs. Federated learning enables collaborative model training to be performed across multiple institutions without sharing sensitive patient data, thus ensuring privacy and security. The teacher-student SANet framework takes advantage of both teacher and student models, with the teacher model providing reliable pseudolabels that guide the student model in a semisupervised manner. This method not only improves the accuracy of liver tumor detection but also reduces the dependence on extensively annotated datasets. The proposed method was validated through simulation experiments conducted in four scenarios, and it demonstrated a model accuracy of 83%, which represents an improvement over the original locally trained models. This study presents a promising method for enhancing CT-based liver tumor detection while reducing the incurred labor and time costs by utilizing federated learning, the teacher-student SANet framework, and SSL techniques. Compared with previous approaches, the proposed method achieved a model accuracy of 83%, representing a significant improvement.
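The federated step described above is commonly realized as federated averaging: each institution trains locally, and only parameter updates (never patient data) are combined, weighted by local dataset size. A minimal sketch of that aggregation, with parameters flattened to plain lists (the abstract does not specify the aggregation rule, so FedAvg is an assumption here):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model parameters without
    sharing data. client_weights holds one flattened parameter vector
    per institution; client_sizes holds local training-set sizes."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two hypothetical institutions with different dataset sizes:
global_params = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
print(global_params)  # [2.5, 3.5]
```

The aggregated parameters are then redistributed to the clients, where the teacher model can generate pseudolabels for the next round of semisupervised local training.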
