Page 1 of 15 results

Real-world Evaluation of Computer-aided Pulmonary Nodule Detection Software Sensitivity and False Positive Rate.

El Alam R, Jhala K, Hammer MM

PubMed · May 12, 2025
To evaluate the false positive rate (FPR) of nodule detection software in real-world use. A total of 250 nonenhanced chest computed tomography (CT) examinations were randomly selected from an academic institution and submitted to the ClearRead nodule detection system (Riverain Technologies). Detected findings were reviewed by a thoracic imaging fellow. Nodules were classified as true nodules, lymph nodes, or other findings (branching opacity, vessel, mucus plug, etc.), and the FPR was recorded and compared with the FPR initially published in the literature. True diagnosis was based on pathology or follow-up stability. For cases with malignant nodules, we recorded whether malignancy was detected by the clinical radiology report (which was produced without software assistance) and/or ClearRead. Twenty-one CTs were excluded due to a lack of thin-slice images, leaving 229 CTs for analysis. A total of 594 findings were reported by ClearRead, of which 362 (61%) were true nodules and 232 (39%) were other findings. Of the true nodules, 297 were solid nodules, of which 79 (27%) were intrapulmonary lymph nodes. The mean number of findings identified by ClearRead per scan was 2.59. ClearRead's mean FPR was 1.36 per scan, greater than the published rate of 0.58 (P<0.0001). If true lung nodules <6 mm are also considered false positives, the FPR rises to 2.19. A malignant nodule was present in 30 scans; ClearRead identified it in 26 (87%), and the clinical report identified it in 28 (93%) (P=0.32). In real-world use, ClearRead had a much higher FPR than initially reported but a similar sensitivity for malignant nodule detection compared with unassisted radiologists.
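The abstract's headline FPR can be reproduced from its own counts. This is a quick arithmetic check, assuming (as the numbers imply) that both non-nodule findings and intrapulmonary lymph nodes are counted as false positives:

```python
# Back-of-envelope check of the per-scan figures reported in the abstract.
# Assumption: "false positives" = non-nodule findings + intrapulmonary
# lymph nodes; this reading reproduces the stated FPR of 1.36.
scans = 229
total_findings = 594
other_findings = 232   # branching opacities, vessels, mucus plugs, etc.
lymph_nodes = 79       # intrapulmonary lymph nodes among solid nodules

mean_findings_per_scan = total_findings / scans
fpr_per_scan = (other_findings + lymph_nodes) / scans
print(f"Mean findings per scan: {mean_findings_per_scan:.2f}")  # ≈ 2.59
print(f"FPR per scan: {fpr_per_scan:.2f}")                      # ≈ 1.36
```

Both values match the abstract, which supports the reading that lymph nodes were treated as false positives in the per-scan FPR.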

Artificial Intelligence in Vascular Neurology: Applications, Challenges, and a Review of AI Tools for Stroke Imaging, Clinical Decision Making, and Outcome Prediction Models.

Alqadi MM, Vidal SGM

PubMed · May 9, 2025
Artificial intelligence (AI) promises to compress stroke treatment timelines, yet its clinical return on investment remains uncertain. We interrogate state‑of‑the‑art AI platforms across imaging, workflow orchestration, and outcome prediction to clarify value drivers and execution risks. Convolutional, recurrent, and transformer architectures now trigger large‑vessel‑occlusion alerts, delineate ischemic core in seconds, and forecast 90‑day function. Commercial deployments (RapidAI, Viz.ai, Aidoc) report double‑digit reductions in door‑to‑needle metrics and expanded thrombectomy eligibility. However, dataset bias, opaque reasoning, and limited external validation constrain scalability. Hybrid image‑plus‑clinical models elevate predictive accuracy but intensify data‑governance demands. AI can operationalize precision stroke care, but enterprise‑grade adoption requires federated data pipelines, explainable‑AI dashboards, and fit‑for‑purpose regulation. Prospective multicenter trials and continuous lifecycle surveillance are mandatory to convert algorithmic promise into reproducible, equitable patient benefit.

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

PubMed · May 8, 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis is to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, for relevant studies published up to 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001) and DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), as well as shorter CT-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly shorter door-in-door-out time than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite the improvement in workflow metrics, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.

Early budget impact analysis of AI to support the review of radiographic examinations for suspected fractures in NHS emergency departments (ED).

Gregory L, Boodhna T, Storey M, Shelmerdine S, Novak A, Lowe D, Harvey H

PubMed · May 7, 2025
To develop an early budget impact analysis of, and inform future research on, the national adoption of a commercially available AI application to support clinicians reviewing radiographs for suspected fractures across NHS emergency departments in England. A decision tree framework was coded to assess the change in outcomes for suspected fractures in adults when AI fracture detection was integrated into the clinical workflow over a 1-year time horizon. Standard of care was the comparator scenario, and ground truth reference cases were characterised by radiology report findings. The effect of AI in assisting ED clinicians detecting fractures was sourced from US literature. Data on resource use conditioned on the correct identification of a fracture in the ED was extracted from a London NHS trust. Sensitivity analysis was conducted to account for the influence of parameter uncertainty on results. In one year, an estimated 658,564 radiographs were performed in emergency departments across England for suspected wrist, ankle, or hip fractures. The number of patients returning to the ED with a missed fracture was reduced by 21,674 cases, and unnecessary referrals to fracture clinics fell by 20,916. The cost of current practice was estimated at £66,646,542, versus £63,012,150 with the integration of AI, generating a return on investment of £3,634,392 to the NHS. The adoption of AI in EDs across England has the potential to generate cost savings. However, additional evidence on radiograph review accuracy and subsequent resource use is required to further demonstrate this.
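The headline return on investment is simply the difference between the two annual cost estimates reported above; a quick sanity check of the figures:

```python
# Sanity check of the budget-impact figures reported in the abstract:
# saving = cost of current practice minus cost with AI integrated.
cost_standard_care = 66_646_542  # £ per year, current practice
cost_with_ai = 63_012_150        # £ per year, with AI integrated
saving = cost_standard_care - cost_with_ai
print(f"Estimated annual saving: £{saving:,}")  # £3,634,392
```

The difference matches the abstract's stated return of £3,634,392 exactly.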

Opinions and preferences regarding artificial intelligence use in healthcare delivery: results from a national multi-site survey of breast imaging patients.

Dontchos BN, Dodelzon K, Bhole S, Edmonds CE, Mullen LA, Parikh JR, Daly CP, Epling JA, Christensen S, Grimm LJ

PubMed · May 6, 2025
Artificial intelligence (AI) utilization is growing, but patient perceptions of AI are unclear. Our objective was to understand patient perceptions of AI through a multi-site survey of breast imaging patients. A 36-question survey was distributed to eight US practices (6 academic, 2 non-academic) from October 2023 through October 2024. This manuscript analyzes a subset of questions from the survey addressing digital health literacy and attitudes towards AI in medicine and breast imaging specifically. Multivariable analysis compared responses by respondent demographics. A total of 3,532 surveys were collected (response rate: 69.9%, 3,532/5,053). Median respondent age was 55 years (IQR 20). Most respondents were White (73.0%, 2,579/3,532) and had completed college (77.3%, 2,732/3,532). Overall, respondents were undecided (range: 43.2%-50.8%) on questions about general perceptions of AI in healthcare. Respondents with higher electronic health literacy, more education, and younger age were significantly more likely to consider it useful to utilize AI for aiding medical tasks (all p<0.001). In contrast, respondents with lower electronic health literacy and less education were significantly more likely to indicate it was a bad idea for AI to perform medical tasks (p<0.001). Non-White patients were more likely to express concerns that AI will not work as well for some groups compared to others (p<0.05). Overall, favorable opinions of AI use for medical tasks were associated with younger age, more education, and higher electronic health literacy. As AI is increasingly implemented into clinical workflows, it is important to educate patients and provide transparency to build patient understanding and trust.