
Comparative accuracy of two commercial AI algorithms for musculoskeletal trauma detection in emergency radiographs.

Huhtanen JT, Nyman M, Blanco Sequeiros R, Koskinen SK, Pudas TK, Kajander S, Niemi P, Aronen HJ, Hirvonen J

PubMed | Jun 9, 2025
Missed fractures are the primary cause of interpretation errors in emergency radiology, and artificial intelligence has recently shown great promise in radiograph interpretation. This study compared the diagnostic performance of two AI algorithms, BoneView and RBfracture, in detecting traumatic abnormalities (fractures and dislocations) in musculoskeletal (MSK) radiographs. Both AI algorithms analyzed 998 radiographs (585 normal, 413 abnormal), and their outputs were evaluated against the consensus of two MSK specialists. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and interobserver agreement (Cohen's Kappa) were calculated. 95% confidence intervals (CIs) were calculated to assess robustness, and McNemar's tests compared sensitivity and specificity between the AI algorithms. BoneView demonstrated a sensitivity of 0.893 (95% CI: 0.860-0.920), specificity of 0.885 (95% CI: 0.857-0.909), PPV of 0.846, NPV of 0.922, and accuracy of 0.889. RBfracture demonstrated a sensitivity of 0.872 (95% CI: 0.836-0.901), specificity of 0.892 (95% CI: 0.865-0.915), PPV of 0.851, NPV of 0.908, and accuracy of 0.884. No statistically significant differences were found in sensitivity (p = 0.151) or specificity (p = 0.708). Kappa was 0.81 (95% CI: 0.77-0.84), indicating almost perfect agreement between the two AI algorithms. Performance was similar in adults and children. Both AI algorithms struggled more with subtle abnormalities, which constituted 66% and 70% of false negatives but only 20% and 18% of true positives for the two AI algorithms, respectively (p < 0.001). BoneView and RBfracture exhibited high diagnostic performance and almost perfect agreement, with consistent results across adults and children, highlighting the potential of AI in emergency radiograph interpretation.
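
For readers who want to reproduce this kind of head-to-head evaluation, the sketch below computes the reported summary statistics (sensitivity, specificity, PPV, NPV, accuracy), Cohen's kappa between the two algorithms, and a McNemar test on paired results. All arrays and error rates are placeholders rather than study data, and note that the study ran separate McNemar tests on abnormal and normal cases to compare sensitivity and specificity, whereas this sketch tests overall paired correctness.

```python
# Hypothetical paired-evaluation sketch; labels and error rates are illustrative only.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 998)            # 0 = normal, 1 = abnormal (placeholder labels)
pred_a = y_true ^ (rng.random(998) < 0.11)  # simulated output of "algorithm A"
pred_b = y_true ^ (rng.random(998) < 0.12)  # simulated output of "algorithm B"

def summarize(y, p):
    tn, fp, fn, tp = confusion_matrix(y, p).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

print(summarize(y_true, pred_a))
print(summarize(y_true, pred_b))

# Agreement between the two algorithms (Cohen's kappa).
print("kappa:", cohen_kappa_score(pred_a, pred_b))

# McNemar's test on paired correctness: does one algorithm get cases right
# that the other misses more often than vice versa?
a_correct = pred_a == y_true
b_correct = pred_b == y_true
table = [[np.sum(a_correct & b_correct), np.sum(a_correct & ~b_correct)],
         [np.sum(~a_correct & b_correct), np.sum(~a_correct & ~b_correct)]]
print(mcnemar(table, exact=False, correction=True))
```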

A Narrative Review on Large AI Models in Lung Cancer Screening, Diagnosis, and Treatment Planning

Jiachen Zhong, Yiting Wang, Di Zhu, Ziwei Wang

arXiv preprint | Jun 8, 2025
Lung cancer remains one of the most prevalent and fatal diseases worldwide, demanding accurate and timely diagnosis and treatment. Recent advancements in large AI models have significantly enhanced medical image understanding and clinical decision-making. This review systematically surveys the state-of-the-art in applying large AI models to lung cancer screening, diagnosis, prognosis, and treatment. We categorize existing models into modality-specific encoders, encoder-decoder frameworks, and joint encoder architectures, highlighting key examples such as CLIP, BLIP, Flamingo, BioViL-T, and GLoRIA. We further examine their performance in multimodal learning tasks using benchmark datasets like LIDC-IDRI, NLST, and MIMIC-CXR. Applications span pulmonary nodule detection, gene mutation prediction, multi-omics integration, and personalized treatment planning, with emerging evidence of clinical deployment and validation. Finally, we discuss current limitations in generalizability, interpretability, and regulatory compliance, proposing future directions for building scalable, explainable, and clinically integrated AI systems. Our review underscores the transformative potential of large AI models to personalize and optimize lung cancer care.

Deep learning-based prospective slice tracking for continuous catheter visualization during MRI-guided cardiac catheterization.

Neofytou AP, Kowalik G, Vidya Shankar R, Kunze K, Moon T, Mellor N, Neji R, Razavi R, Pushparajah K, Roujol S

PubMed | Jun 8, 2025
This proof-of-concept study introduces a novel, deep learning-based, parameter-free, automatic slice-tracking technique for continuous catheter tracking and visualization during MR-guided cardiac catheterization. The proposed sequence includes Calibration and Runtime modes. Initially, Calibration mode identifies the catheter tip's three-dimensional coordinates using a fixed stack of contiguous slices. A U-Net architecture with a ResNet-34 encoder is used to identify the catheter tip location. Once identified, the sequence switches to Runtime mode, dynamically acquiring three contiguous slices automatically centered on the catheter tip. The catheter location is estimated from each Runtime stack using the same network and fed back to the sequence, enabling prospective slice tracking that keeps the catheter in the central slice. If the catheter remains unidentified over several dynamics, the sequence reverts to Calibration mode. This artificial intelligence (AI)-based approach was evaluated prospectively in a three-dimensional-printed heart phantom and in 3 patients undergoing MR-guided cardiac catheterization. In 2 patients, the technique was also compared retrospectively with a previous non-AI automatic tracking method relying on operator-defined parameters. In the phantom study, the tracking framework achieved 100% accuracy/sensitivity/specificity in both modes. Across all patients, the average accuracy/sensitivity/specificity were 100 ± 0/100 ± 0/100 ± 0% (Calibration) and 98.4 ± 0.8/94.1 ± 2.9/100.0 ± 0.0% (Runtime). The parametric, non-AI technique and the proposed parameter-free AI-based framework yielded identical accuracy (100%) in Calibration mode and a similar accuracy range in Runtime mode (Patients 1 and 2: 100%-97% and 100%-98%, respectively). An AI-based prospective slice-tracking framework was developed for real-time, parameter-free, operator-independent, automatic tracking of gadolinium-filled balloon catheters. Its feasibility was successfully demonstrated in patients undergoing MRI-guided cardiac catheterization.
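
The localization step described above, a U-Net with a ResNet-34 encoder that returns the catheter tip position in each slice, can be sketched as follows. The library choice (segmentation_models_pytorch), pretrained weights, single-channel input, and 256 × 256 slice size are assumptions for illustration; the paper's trained network and sequence feedback are not reproduced here.

```python
# Minimal sketch of U-Net (ResNet-34 encoder) catheter-tip localization on one slice.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",     # ResNet-34 encoder, as in the described network
    encoder_weights="imagenet",  # assumed pretraining; not specified in the abstract
    in_channels=1,               # single-channel MR image (assumption)
    classes=1,                   # one output map for the catheter tip
)
model.eval()

image = torch.randn(1, 1, 256, 256)   # placeholder runtime slice
with torch.no_grad():
    heatmap = torch.sigmoid(model(image))[0, 0]

# In-plane tip estimate: pixel with the highest predicted probability.
row, col = divmod(int(torch.argmax(heatmap)), heatmap.shape[1])
print(f"estimated tip at (row={row}, col={col}), score={float(heatmap[row, col]):.3f}")
```

In the described sequence, per-slice estimates from the three contiguous Runtime slices would be combined into a 3D coordinate and fed back to the scanner for prospective slice positioning.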

De-identification of medical imaging data: a comprehensive tool for ensuring patient privacy.

Rempe M, Heine L, Seibold C, Hörst F, Kleesiek J

PubMed | Jun 7, 2025
Medical imaging data employed in research frequently comprises sensitive Protected Health Information (PHI) and Personally Identifiable Information (PII), which are subject to rigorous legal frameworks such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). Consequently, these types of data must be de-identified prior to utilization, which presents a significant challenge for many researchers. Given the vast array of medical imaging data, it is necessary to employ a variety of de-identification techniques. To facilitate the de-identification process for medical imaging data, we have developed an open-source tool that can be used to de-identify Digital Imaging and Communications in Medicine (DICOM) magnetic resonance images, computed tomography images, whole slide images and magnetic resonance twix raw data. Furthermore, the implementation of a neural network enables the removal of text within the images. The proposed tool achieves results comparable to current state-of-the-art algorithms at reduced computational time (up to 265×). The tool also manages to fully de-identify image data of various types, such as Neuroimaging Informatics Technology Initiative (NIfTI) or Whole Slide Image (WSI) DICOMs. The proposed tool automates an elaborate de-identification pipeline for multiple types of inputs, reducing the need for additional tools used for de-identification of imaging data. Question: How can researchers effectively de-identify sensitive medical imaging data while complying with legal frameworks to protect patient health information? Findings: We developed an open-source tool that automates the de-identification of various medical imaging formats, enhancing the efficiency of de-identification processes. Clinical relevance: This tool addresses the critical need for robust and user-friendly de-identification solutions in medical imaging, facilitating data exchange in research while safeguarding patient privacy.
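
As a rough illustration of the rule-based part of such a pipeline, the sketch below blanks a handful of identifying DICOM attributes with pydicom. The tag list, pseudonym scheme, and file names are illustrative only; the published tool additionally handles NIfTI, whole slide images, and twix raw data, and uses a neural network to remove burned-in text, none of which is shown here.

```python
# Minimal rule-based DICOM tag de-identification sketch (not the published tool).
import pydicom

PHI_TAGS = [
    "PatientName", "PatientBirthDate", "PatientAddress",
    "OtherPatientIDs", "ReferringPhysicianName", "InstitutionName",
]

def deidentify(in_path: str, out_path: str, pseudonym: str = "ANON") -> None:
    ds = pydicom.dcmread(in_path)
    ds.remove_private_tags()                  # drop vendor-specific private elements
    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank identifying attributes
    ds.PatientID = pseudonym                  # replace identifier with a pseudonym
    ds.save_as(out_path)

# deidentify("input.dcm", "output_deid.dcm")  # hypothetical file names
```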

Simulating workload reduction with an AI-based prostate cancer detection pathway using a prediction uncertainty metric.

Fransen SJ, Bosma JS, van Lohuizen Q, Roest C, Simonis FFJ, Kwee TC, Yakar D, Huisman H

PubMed | Jun 7, 2025
This study compared two uncertainty quantification (UQ) metrics to rule out prostate MRI scans with a high-confidence artificial intelligence (AI) prediction and investigated the resulting potential reduction in radiologist workload in a clinically significant prostate cancer (csPCa) detection pathway. This retrospective study utilized 1612 MRI scans from three institutes for csPCa (Gleason Grade Group ≥ 2) assessment. We compared the standard diagnostic pathway (radiologist reading) to an AI-based rule-out pathway in terms of efficacy and accuracy in diagnosing csPCa. In the rule-out pathway, 15 AI submodels (trained on 7756 cases) diagnosed each MRI scan, and any prediction deemed uncertain was referred to a radiologist for reading. We compared the mean (meanUQ) and variability (varUQ) of predictions using the DeLong test on the area under the receiver operating characteristic curve (AUROC). The level of workload reduction of the best UQ method was determined based on a maintained sensitivity at non-inferior specificity using the margins 0.05 and 0.10. The workload reduction of the proposed pathway was institute-specific: up to 20% at a 0.10 non-inferiority margin (p < 0.05) and non-significant workload reduction at a 0.05 margin. VarUQ-based rule-out gave higher but non-significant AUROC scores than meanUQ in certain selected cases (+0.05 AUROC, p > 0.05). MeanUQ and varUQ showed promise in AI-based rule-out csPCa detection. Using varUQ in an AI-based csPCa detection pathway could reduce the number of scans radiologists need to read. The varying performance of the UQ rule-out indicates the need for institute-specific UQ thresholds. Question: Can AI autonomously assess prostate MRI scans with high certainty at non-inferior performance compared to radiologists, potentially reducing radiologist workload? Findings: The optimal ratio of AI-model and radiologist readings is institute-dependent and requires calibration. Clinical relevance: Semi-autonomous AI-based prostate cancer detection with variational UQ scores shows promise in reducing the number of scans radiologists need to read.
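
The two uncertainty metrics can be illustrated with a short sketch: for each scan, meanUQ summarizes the ensemble's average prediction and varUQ its disagreement, and scans below a calibrated variance threshold are handled without a radiologist read. The array shapes, random predictions, and threshold below are placeholders, not values from the study.

```python
# Ensemble-uncertainty rule-out sketch with placeholder predictions.
import numpy as np

rng = np.random.default_rng(1)
preds = rng.random((15, 1612))          # shape: (submodels, scans), hypothetical

mean_uq = preds.mean(axis=0)            # meanUQ: confident when far from 0.5
var_uq = preds.var(axis=0)              # varUQ: confident when submodels agree

# Rule-out logic: scans whose ensemble prediction is deemed certain are handled
# autonomously by AI; the rest are referred to a radiologist.
VAR_THRESHOLD = 0.02                    # would be calibrated per institute
certain = var_uq < VAR_THRESHOLD
print(f"AI-handled scans: {certain.mean():.1%}, referred: {(~certain).mean():.1%}")
```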

Current utilization and impact of AI LVO detection tools in acute stroke triage: a multicenter survey analysis.

Darkhabani Z, Ezzeldin R, Delora A, Kass-Hout O, Alderazi Y, Nguyen TN, El-Ghanem M, Anwoju T, Ali Z, Ezzeldin M

PubMed | Jun 7, 2025
Artificial intelligence (AI) tools for large vessel occlusion (LVO) detection are increasingly used in acute stroke triage to expedite diagnosis and intervention. However, variability in access and workflow integration limits their potential impact. This study assessed current usage patterns, access disparities, and integration levels across U.S. stroke programs. We conducted a cross-sectional, web-based survey of 97 multidisciplinary stroke care providers from diverse institutions. Descriptive statistics summarized demographics, AI tool usage, access, and integration. Two-proportion Z-tests assessed differences across institutional types. Most respondents (97.9%) reported AI tool use, primarily Viz AI and Rapid AI, but only 62.1% consistently used them for triage prior to radiologist interpretation. Just 37.5% reported formal protocol integration, and 43.6% had designated personnel for AI alert response. Access varied significantly across departments, and in only 61.7% of programs did all relevant team members have access. Formal implementation of the AI detection tools did not differ based on program certification (z = -0.2; p = 0.4) or whether the program was academic or community-based (z = -0.3; p = 0.3). AI-enabled LVO detection tools have the potential to improve stroke care and patient outcomes by expediting workflows and reducing treatment delays. This survey effectively evaluated current utilization of these tools and revealed widespread adoption alongside significant variability in access, integration, and workflow standardization. Larger, more diverse samples are needed to validate these findings across different hospital types, and further prospective research is essential to determine how formal integration of AI tools can enhance stroke care delivery, reduce disparities, and improve clinical outcomes.
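
The two-proportion Z-test mentioned above compares the rate of formal protocol integration between two groups of programs; a minimal sketch with hypothetical counts is shown below.

```python
# Two-proportion z-test sketch; the counts are placeholders, not survey data.
from statsmodels.stats.proportion import proportions_ztest

integrated = [18, 19]   # programs reporting formal integration in each group (hypothetical)
totals = [48, 49]       # group sizes (hypothetical)
z, p = proportions_ztest(count=integrated, nobs=totals)
print(f"z = {z:.2f}, p = {p:.3f}")
```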

Detecting neurodegenerative changes in glaucoma using deep mean kurtosis-curve-corrected tractometry

Kasa, L. W., Schierding, W., Kwon, E., Holdsworth, S., Danesh-Meyer, H. V.

medRxiv preprint | Jun 6, 2025
Glaucoma is increasingly recognized as a neurodegenerative condition involving both retinal and central nervous system structures. Here, we present an integrated framework that combines MK-Curve-corrected diffusion kurtosis imaging (DKI), tractometry, and deep autoencoder-based normative modeling to detect localized white matter abnormalities associated with glaucoma. Using UK Biobank diffusion MRI data, we show that the MK-Curve approach corrects anatomically implausible values and improves the reliability of DKI metrics - particularly mean (MK), radial (RK), and axial kurtosis (AK) - in regions of complex fiber architecture. Tractometry revealed reduced MK in glaucoma patients along the optic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus, but not in a non-visual control tract, supporting disease specificity. These abnormalities were spatially localized, with significant changes observed at multiple points along the tracts. MK demonstrated greater sensitivity than mean diffusivity (MD) and exhibited altered distributional features, reflecting microstructural heterogeneity not captured by standard metrics. Node-wise MK values in the right optic radiation showed weak but significant correlations with retinal OCT measures (ganglion cell layer and retinal nerve fiber layer thickness), reinforcing the biological relevance of these findings. Deep autoencoder-based modeling further enabled subject-level anomaly detection that aligned spatially with group-level changes and outperformed traditional approaches. Together, our results highlight the potential of advanced diffusion modeling and deep learning for sensitive, individualized detection of glaucomatous neurodegeneration and support their integration into future multimodal imaging pipelines in neuro-ophthalmology.
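
The autoencoder-based normative modeling step can be sketched as follows: an autoencoder is trained to reconstruct node-wise kurtosis profiles from control subjects, and a patient's reconstruction error then serves as a subject-level anomaly score. The architecture, profile length, training setup, and random data below are illustrative assumptions, not the authors' implementation.

```python
# Normative autoencoder sketch: train on control tract profiles, score patients
# by reconstruction error.
import torch
from torch import nn

N_NODES = 100  # points sampled along a tract (assumed)

model = nn.Sequential(
    nn.Linear(N_NODES, 32), nn.ReLU(),
    nn.Linear(32, 8),                   # low-dimensional normative representation
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, N_NODES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

controls = torch.randn(500, N_NODES)    # placeholder control MK profiles
for _ in range(200):                    # brief full-batch training on controls only
    opt.zero_grad()
    loss = loss_fn(model(controls), controls)
    loss.backward()
    opt.step()

# Subject-level anomaly score: per-node reconstruction error on a new profile.
subject = torch.randn(1, N_NODES)       # placeholder patient profile
with torch.no_grad():
    error = (model(subject) - subject) ** 2
print("anomaly score:", float(error.mean()))
```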

Inconsistency of AI in intracranial aneurysm detection with varying dose and image reconstruction.

Goelz L, Laudani A, Genske U, Scheel M, Bohner G, Bauknecht HC, Mutze S, Hamm B, Jahnke P

PubMed | Jun 6, 2025
Scanner-related changes in data quality are common in medical imaging, yet monitoring their impact on diagnostic AI performance remains challenging. In this study, we performed standardized consistency testing of an FDA-cleared and CE-marked AI for triage and notification of intracranial aneurysms across changes in image data quality caused by dose and image reconstruction. Our assessment was based on repeated examinations of a head CT phantom designed for AI evaluation, replicating a patient with three intracranial aneurysms in the anterior, middle and posterior circulation. We show that the AI maintains stable performance within the medium dose range but produces inconsistent results at reduced dose and, unexpectedly, at higher dose when filtered back projection is used. Data quality standards required for AI are stricter than those for neuroradiologists, who report higher aneurysm visibility rates and experience performance degradation only at substantially lower doses, with no decline at higher doses.

Advances in disease detection through retinal imaging: A systematic review.

Bilal H, Keles A, Bendechache M

PubMed | Jun 6, 2025
Ocular and non-ocular diseases significantly impact millions of people worldwide, leading to vision impairment or blindness if not detected and managed early. Treating these diseases early and halting their progression could prevent many individuals from going blind. Despite advances in medical imaging and diagnostic tools, the manual detection of these diseases remains labor-intensive, time-consuming, and dependent on the expert's experience. Computer-aided diagnosis (CAD) has been transformed by machine learning (ML), providing promising methods for the automated detection and grading of diseases using various retinal imaging modalities. In this paper, we present a comprehensive systematic literature review that discusses the use of ML techniques to detect diseases from retinal images, utilizing both single and multi-modal imaging approaches. We analyze the performance of various deep learning and classical ML models, highlighting their achievements in accuracy, sensitivity, and specificity. Even with these advancements, the review identifies several critical challenges, and we propose future research directions to address them. By overcoming these challenges, the potential of ML to enhance diagnostic accuracy and patient outcomes can be fully realized, opening the way for more reliable and effective ocular and non-ocular disease management.

A Decade of Advancements in Musculoskeletal Imaging.

Wojack P, Fritz J, Khodarahmi I

PubMed | Jun 6, 2025
The past decade has witnessed remarkable advancements in musculoskeletal radiology, driven by increasing demand for medical imaging and rapid technological innovations. Contrary to early concerns about artificial intelligence (AI) replacing radiologists, AI has instead enhanced imaging capabilities, aiding in automated abnormality detection and workflow efficiency. MRI has benefited from acceleration techniques that significantly reduce scan times while maintaining high-quality imaging. In addition, novel MRI methodologies now support precise anatomic and quantitative imaging across a broad spectrum of field strengths. In CT, dual-energy and photon-counting technologies have expanded diagnostic possibilities for musculoskeletal applications. This review explores these key developments, examining their impact on clinical practice and the future trajectory of musculoskeletal radiology.