Page 2 of 657 results

[AI-enabled clinical decision support systems: challenges and opportunities].

Tschochohei M, Adams LC, Bressem KK, Lammert J

PubMed · Jun 25 2025
Clinical decision-making is inherently complex, time-sensitive, and prone to error. AI-enabled clinical decision support systems (CDSS) offer promising solutions by leveraging large datasets to provide evidence-based recommendations. These systems range from rule-based and knowledge-based to increasingly AI-driven approaches. However, key challenges persist, particularly concerning data quality, seamless integration into clinical workflows, and clinician trust and acceptance. Ethical and legal considerations, especially data privacy, are also paramount. AI-CDSS have demonstrated success in fields like radiology (e.g., pulmonary nodule detection, mammography interpretation) and cardiology, where they enhance diagnostic accuracy and improve patient outcomes. Looking ahead, chat and voice interfaces powered by large language models (LLMs) could support shared decision-making (SDM) by fostering better patient engagement and understanding. To fully realize the potential of AI-CDSS in advancing efficient, patient-centered care, it is essential to ensure their responsible development. This includes grounding AI models in domain-specific data, anonymizing user inputs, and implementing rigorous validation of AI-generated outputs before presentation. Thoughtful design and ethical oversight will be critical to integrating AI safely and effectively into clinical practice.
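The recommendation to anonymize user inputs before they reach an LLM-backed CDSS can be illustrated with a minimal sketch. The regex patterns below are illustrative assumptions only, not a complete de-identification scheme:

```python
import re

# Illustrative redaction patterns; a real de-identification pipeline
# would need far broader coverage (IDs, addresses, free-text names, etc.).
PATTERNS = [
    (re.compile(r"\b\d{2}[./-]\d{2}[./-]\d{4}\b"), "[DATE]"),         # dates like 01/02/1980
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),          # email addresses
    (re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.\s+[A-Z][a-z]+\b"), "[NAME]"),  # titled names
]

def anonymize(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the
    text is forwarded to an external model."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Such a filter would sit in front of the model call, so identifiers never leave the clinical system.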

[The analysis of invention patents in the field of artificial intelligent medical devices].

Zhang T, Chen J, Lu Y, Xu D, Yan S, Ouyang Z

PubMed · Jun 25 2025
The emergence of new-generation artificial intelligence technology has brought numerous innovations to the healthcare field, including telemedicine and intelligent care. However, the artificial intelligent medical device sector still faces significant challenges, such as data privacy protection and algorithm reliability. Based on an analysis of invention patents, this study revealed technological innovation trends in the field of artificial intelligent medical devices in terms of patent application time trends, hot topics, regional distribution, and key innovators. The results showed that global invention patent applications had remained active, with technological innovations primarily focused on medical image processing, physiological signal processing, surgical robots, brain-computer interfaces, and intelligent physiological parameter monitoring technologies. The United States and China led the world in the number of invention patent applications. Major international medical device companies, such as Philips, Siemens, General Electric, and Medtronic, were at the forefront of global technological innovation, with significant advantages in patent application volumes and international market presence. Chinese universities and research institutes, such as Zhejiang University, Tianjin University, and the Shenzhen Institute of Advanced Technology, had demonstrated notable technological innovation, with a relatively high number of patent applications; however, their overseas market expansion remained limited. This study provides a comprehensive overview of technological innovation trends in the artificial intelligent medical device field and offers valuable information support for industry development from an informatics perspective.

[Practical artificial intelligence for urology : Technical principles, current application and future implementation of AI in practice].

Rodler S, Hügelmann K, von Knobloch HC, Weiss ML, Buck L, Kohler J, Fabian A, Jarczyk J, Nuhn P

PubMed · Jun 24 2025
Artificial intelligence (AI) is a disruptive technology that is now finding widespread application after having long been confined to the domain of specialists. In urology in particular, new fields of application are continuously emerging and are being studied in both preclinical basic research and clinical applications. Potential applications include image recognition in the operating room, interpretation of images from radiology and pathology, automatic measurement of urinary stones, and radiotherapy. Certain medical devices, particularly in the field of AI-based predictive biomarkers, have already been incorporated into international guidelines. In addition, AI is playing an increasingly important role in administrative tasks and is expected to lead to enormous changes, especially in the outpatient sector. For urologists, it is becoming increasingly important to engage with this technology and to pursue appropriate training in order to optimally implement AI in the treatment of patients and in the management of their practices or hospitals.

Quality appraisal of radiomics-based studies on chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS).

Gitto S, Cuocolo R, Klontzas ME, Albano D, Messina C, Sconfienza LM

PubMed · Jun 18 2025
To assess the methodological quality of radiomics-based studies on bone chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS). A literature search was conducted on EMBASE and PubMed databases for research papers published up to July 2024 and focused on radiomics in bone chondrosarcoma, with no restrictions regarding the study aim. Three readers independently evaluated the study quality using METRICS and RQS. Baseline study characteristics were extracted. Inter-reader reliability was calculated using intraclass correlation coefficient (ICC). Out of 68 identified papers, 18 were finally included in the analysis. Radiomics research was aimed at lesion classification (n = 15), outcome prediction (n = 2) or both (n = 1). Study design was retrospective in all papers. Most studies employed MRI (n = 12), CT (n = 3) or both (n = 1). METRICS and RQS adherence rates ranged between 37.3-94.8% and 2.8-44.4%, respectively. Excellent inter-reader reliability was found for both METRICS (ICC = 0.961) and RQS (ICC = 0.975). Among the limitations of the evaluated studies, the absence of prospective studies and deep learning-based analyses was highlighted, along with the limited adherence to radiomics guidelines, use of external testing datasets and open science data. METRICS and RQS are reproducible quality assessment tools, with the former showing higher adherence rates in studies on chondrosarcoma. METRICS is better suited for assessing papers with retrospective design, which is often chosen in musculoskeletal oncology due to the low prevalence of bone sarcomas. Employing quality scoring systems should be promoted in radiomics-based studies to improve methodological quality and facilitate clinical translation. 
Employing reproducible quality scoring systems, especially METRICS (which shows higher adherence rates than RQS and is better suited for assessing retrospective investigations), is highly recommended when designing radiomics-based studies on chondrosarcoma, to improve methodological quality and facilitate clinical translation. The low scientific and reporting quality of radiomics studies on chondrosarcoma is the main barrier to clinical translation. Quality appraisal using METRICS and RQS showed 37.3-94.8% and 2.8-44.4% adherence rates, respectively. Room for improvement was noted in study design, deep learning methods, external testing and open science.
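The inter-reader reliability reported above is an intraclass correlation coefficient. As a worked illustration (not the authors' computation, and assuming a two-way random-effects, absolute-agreement, single-rater model), ICC(2,1) can be computed from the ANOVA mean squares:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With three readers scoring identically on every paper, this returns 1.0; the reported values near 0.96-0.98 correspond to near-perfect agreement.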

Radiologist-AI workflow can be modified to reduce the risk of medical malpractice claims

Bernstein, M., Sheppard, B., Bruno, M. A., Lay, P. S., Baird, G. L.

medRxiv preprint · Jun 16 2025
Background: Artificial Intelligence (AI) is rapidly changing the legal landscape of radiology. Results from a previous experiment suggested that providing AI error rates can reduce perceived radiologist culpability, as judged by mock jury members (4). The current study advances this work by examining whether the radiologist's behavior also impacts perceptions of liability. Methods: Participants (n=282) read about a hypothetical malpractice case in which a 50-year-old who visited the Emergency Department with acute neurological symptoms received a brain CT scan to determine whether bleeding was present. The interpreting radiologist used an AI system, which correctly flagged the case as abnormal. Nonetheless, the radiologist concluded there was no evidence of bleeding, and the blood thinner t-PA was administered. Participants were randomly assigned to either (1) a single-read condition, in which the radiologist interpreted the CT once after seeing AI feedback, or (2) a double-read condition, in which the radiologist interpreted the CT twice, first without and then with AI feedback. Participants were then told the patient suffered irreversible brain damage due to the missed brain bleed, resulting in the patient (plaintiff) suing the radiologist (defendant). Participants indicated whether the radiologist met their duty of care to the patient (yes/no). Results: Hypothetical jurors were more likely to side with the plaintiff in the single-read condition (106/142, 74.7%) than in the double-read condition (74/140, 52.9%), p=0.0002. Conclusion: This suggests that the penalty for disagreeing with correct AI can be mitigated when images are interpreted twice, or at least when a radiologist gives an interpretation before AI is used.
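The abstract does not name the statistical test behind p=0.0002. As an illustrative assumption, a Yates-corrected chi-square test on the reported 2x2 verdict counts reproduces a p-value of that magnitude:

```python
import math

def chi2_yates_p(a: int, b: int, c: int, d: int) -> float:
    """Yates-corrected chi-square test (1 df) on the 2x2 table
    [[a, b], [c, d]]; returns the two-sided p-value."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    chi2 = sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))
    # For 1 df, P(X > chi2) = erfc(sqrt(chi2 / 2))
    return math.erfc(math.sqrt(chi2 / 2))

# Plaintiff vs. defendant verdicts: 106/142 single-read, 74/140 double-read
p = chi2_yates_p(106, 36, 74, 66)
```

Running this gives p of roughly 0.0002, consistent with the value reported in the abstract.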

Appropriateness of acute breast symptom recommendations provided by ChatGPT.

Byrd C, Kingsbury C, Niell B, Funaro K, Bhatt A, Weinfurtner RJ, Ataya D

PubMed · Jun 16 2025
We evaluated the accuracy of ChatGPT-3.5's responses to common questions regarding acute breast symptoms and explored whether using lay language, as opposed to medical language, affected the accuracy of the responses. Questions were formulated addressing acute breast conditions, informed by the American College of Radiology (ACR) Appropriateness Criteria (AC) and our clinical experience at a tertiary referral breast center. Of these, seven addressed the most common acute breast symptoms, nine addressed pregnancy-associated breast symptoms, and four addressed specific management and imaging recommendations for a palpable breast abnormality. Questions were submitted three times to ChatGPT-3.5 and all responses were assessed by five fellowship-trained breast radiologists. Evaluation criteria included clinical judgment and adherence to the ACR guidelines, with responses scored as: 1) "appropriate," 2) "inappropriate" if any response contained inappropriate information, or 3) "unreliable" if responses were inconsistent. A majority vote determined the appropriateness for each question. ChatGPT-3.5 generated appropriate responses for 7/7 (100%) questions regarding common acute breast symptoms when phrased both colloquially and using standard medical terminology. In contrast, it generated appropriate responses for only 3/9 (33%) questions about pregnancy-associated breast symptoms and 3/4 (75%) questions about management and imaging recommendations for a palpable breast abnormality. ChatGPT-3.5 can automate the provision of healthcare information related to appropriate management of acute breast symptoms when prompted with either standard medical terminology or lay phrasing. However, physician oversight remains critical given the presence of inappropriate recommendations for pregnancy-associated breast symptoms and management of palpable abnormalities.
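The majority-vote scoring rule over the five readers' labels can be sketched as follows; the label strings and the handling of a tied vote are assumptions for illustration:

```python
from collections import Counter

def score_question(ratings: list[str]) -> str:
    """Majority vote over the radiologists' labels for one question.
    Labels assumed: 'appropriate', 'inappropriate', 'unreliable'."""
    counts = Counter(ratings)
    label, votes = counts.most_common(1)[0]
    # Require a strict majority (> half of the raters), not just a plurality.
    return label if votes > len(ratings) / 2 else "no majority"
```

With five raters and three labels, a strict majority always exists unless the vote splits 2-2-1, which this sketch flags explicitly.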

FairICP: identifying biases and increasing transparency at the point of care in post-implementation clinical decision support using inductive conformal prediction.

Sun X, Nakashima M, Nguyen C, Chen PH, Tang WHW, Kwon D, Chen D

PubMed · Jun 15 2025
Fairness concerns stemming from known and unknown biases in healthcare practices have raised questions about the trustworthiness of Artificial Intelligence (AI)-driven Clinical Decision Support Systems (CDSS). Studies have shown unforeseen performance disparities in subpopulations when models are applied to clinical settings different from those of training. Existing unfairness mitigation strategies often struggle with scalability and accessibility, and their pursuit of group-level parity in prediction performance does not effectively translate into fairness at the point of care. This study introduces FairICP, a flexible and cost-effective post-implementation framework based on Inductive Conformal Prediction (ICP), to provide users with actionable knowledge of model uncertainty due to subpopulation-level biases at the point of care. FairICP applies ICP to identify the model's scope of competence through group-specific calibration, ensuring equitable prediction reliability by filtering predictions based on whether they fall within the trusted competence boundaries. We evaluated FairICP against four benchmarks on three medical imaging modalities, acquired from both private and large public datasets: (1) cardiac Magnetic Resonance Imaging (MRI), (2) chest X-ray and (3) dermatology imaging. Frameworks were assessed on prediction performance enhancement and unfairness mitigation capabilities. Compared to the baseline, FairICP improved prediction accuracy by 7.2% and reduced the accuracy gap between the privileged and unprivileged subpopulations by 2.2% on average across all three datasets. Our work provides a robust solution to promote trust and transparency in AI-CDSS, fostering equality and equity in healthcare for diverse patient populations. Such post-processing methods are critical to enabling a robust framework for AI-CDSS implementation and monitoring in healthcare settings.
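In the spirit of FairICP (a minimal sketch, not the authors' implementation), group-specific conformal calibration can be expressed as: compute nonconformity scores on a held-out calibration set for each subgroup, take the finite-sample (1 - alpha) quantile as that group's competence boundary, and flag test-time predictions whose scores exceed it:

```python
import numpy as np

def calibrate(scores_by_group: dict[str, np.ndarray], alpha: float = 0.1) -> dict[str, float]:
    """Per-group nonconformity thresholds from a held-out calibration set,
    using the conformal (1 - alpha) quantile with finite-sample correction."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        n = len(scores)
        q = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        thresholds[group] = float(np.quantile(scores, q))
    return thresholds

def is_trusted(score: float, group: str, thresholds: dict[str, float]) -> bool:
    """A prediction lies within the model's competence for its subgroup
    if its nonconformity score does not exceed the group's threshold."""
    return score <= thresholds[group]
```

Because each subgroup is calibrated separately, the trusted-prediction guarantee holds per group rather than only on average, which is the point-of-care fairness property the abstract describes.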

Artificial intelligence for age-related macular degeneration diagnosis in Australia: A Novel Qualitative Interview Study.

Ly A, Herse S, Williams MA, Stapleton F

PubMed · Jun 14 2025
Artificial intelligence (AI) systems for age-related macular degeneration (AMD) diagnosis abound but are not yet widely implemented. AI implementation is complex, requiring the involvement of multiple, diverse stakeholders including technology developers, clinicians, patients, health networks, public hospitals, private providers and payers. There is a pressing need to investigate how AI might be adopted to improve patient outcomes. The purpose of this first study of its kind was to use the AI-translation extended version of the Non-adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) framework for healthcare technologies to explore stakeholder experiences, attitudes, enablers, barriers and possible futures of digital diagnosis using AI for AMD and eyecare in Australia. Semi-structured, online interviews were conducted with 37 stakeholders (12 clinicians, 10 healthcare leaders, 8 patients and 7 developers) from September 2022 to March 2023. The interviews were audio-recorded, transcribed and analysed using directed and summative content analysis. Technological features influencing implementation were most frequently discussed, followed by the context or wider system, the value proposition, adopters, organisations, the condition and finally embedding and adaptation over time. Patients preferred to focus on the condition, while healthcare leaders elaborated on organisational factors. Overall, stakeholders supported a portable, device-independent clinical decision support tool that could be integrated with existing diagnostic equipment and patient management systems. Opportunities for AI to drive new models of healthcare, patient education and outreach, and the importance of maintaining equity across population groups were consistently emphasised. This is the first investigation to report numerous, interacting perspectives on the adoption of digital diagnosis for AMD in Australia, incorporating an intentionally diverse stakeholder group and the patient voice.
It provides a series of practical considerations for the implementation of AI and digital diagnosis into existing care for people with AMD.

Generalist Models in Medical Image Segmentation: A Survey and Performance Comparison with Task-Specific Approaches

Andrea Moglia, Matteo Leccardi, Matteo Cavicchioli, Alice Maccarini, Marco Marcon, Luca Mainardi, Pietro Cerveri

arXiv preprint · Jun 12 2025
Following the successful paradigm shift of large language models, which leverage pre-training on a massive corpus of data and fine-tuning on different downstream tasks, generalist models have made their foray into computer vision. The introduction of the Segment Anything Model (SAM) set a milestone for the segmentation of natural images, inspiring the design of a multitude of architectures for medical image segmentation. In this survey we offer a comprehensive and in-depth investigation of generalist models for medical image segmentation. We start with an introduction to the fundamental concepts underpinning their development. We then provide a taxonomy covering the different adaptations of SAM (zero-shot, few-shot, fine-tuning, and adapters), the recent SAM 2, other innovative models trained on images alone, and models trained on both text and images. We thoroughly analyze their performance at the level of both primary research and best-in-literature results, followed by a rigorous comparison with state-of-the-art task-specific models. We emphasize the need to address challenges in terms of compliance with regulatory frameworks, privacy and security laws, budget, and trustworthy artificial intelligence (AI). Finally, we share our perspective on future directions concerning synthetic data, early fusion, lessons learnt from generalist models in natural language processing, agentic AI and physical AI, and clinical translation.
