Page 1 of 880 results

Artificial intelligence in medical imaging diagnosis: are we ready for its clinical implementation?

Ramos-Soto O, Aranguren I, Carrillo MM, Oliva D, Balderas-Mata SE

PubMed · Nov 1, 2025
We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms. We address the significant challenges preventing immediate clinical adoption of AI, specifically from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the necessary steps to ensure safe, effective, and ethically sound clinical implementation. We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients. The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users. Overcoming these barriers requires a collaborative, multidisciplinary approach to integrate AI into clinical practice responsibly. Addressing technical, ethical, and legal issues will support a smoother transition, fostering a more accurate, efficient, and patient-centered healthcare system in which AI augments traditional medical practices.

Recommendations for the use of functional medical imaging in the management of cancer of the cervix in New Zealand: a rapid review.

Feng S, Mdletshe S

PubMed · Aug 15, 2025
We aimed to review the role of functional imaging in cervical cancer to underscore its significance in the diagnosis and management of cervical cancer and in improving patient outcomes. This rapid literature review, targeting clinical guidelines for functional imaging in cervical cancer, sourced literature from 2017 to 2023 using PubMed, Google Scholar, MEDLINE and Scopus. Keywords such as cervical cancer, cervical neoplasms, functional imaging, stag*, treatment response, monitor* and New Zealand or NZ were used with Boolean operators to maximise results. Emphasis was on full English-language research studies pertinent to New Zealand. The quality of the reviewed articles was assessed using the Joanna Briggs Institute critical appraisal checklists. The search yielded a total of 21 papers after duplicates and records that did not meet the inclusion criteria were excluded. Only one paper was found to incorporate the New Zealand context. The reviewed papers demonstrate the important role of functional imaging in cervical cancer diagnosis, staging and treatment response monitoring. Techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), diffusion-weighted magnetic resonance imaging (DW-MRI), computed tomography perfusion (CTP) and positron emission tomography computed tomography (PET/CT) provide deep insights into tumour behaviour, facilitating personalised care. Integration of artificial intelligence in image analysis promises increased accuracy for these modalities. Functional imaging could play a significant role in a unified approach in New Zealand to improve patient outcomes in cervical cancer management. Therefore, this study advocates for New Zealand's medical sector to harness functional imaging's potential in cervical cancer management.

Explanation and Elaboration with Examples for METRICS (METRICS-E3): an initiative from the EuSoMII Radiomics Auditing Group.

Kocak B, Ammirabile A, Ambrosini I, Akinci D'Antonoli T, Borgheresi A, Cavallo AU, Cannella R, D'Anna G, Díaz O, Doniselli FM, Fanni SC, Ghezzo S, Groot Lipman KBW, Klontzas ME, Ponsiglione A, Stanzione A, Triantafyllou M, Vernuccio F, Cuocolo R

PubMed · Aug 13, 2025
Radiomics research has been hindered by inconsistent and often poor methodological quality, limiting its potential for clinical translation. To address this challenge, the METhodological RadiomICs Score (METRICS) was recently introduced as a tool for systematically assessing study rigor. However, its effective application requires clearer guidance. The METRICS-E3 (Explanation and Elaboration with Examples) resource was developed in response by the European Society of Medical Imaging Informatics (EuSoMII) Radiomics Auditing Group. This international initiative provides comprehensive support for users by offering detailed rationales, interpretive guidance, scoring recommendations, and illustrative examples for each METRICS item and condition. Each criterion includes positive examples from peer-reviewed, open-access studies and hypothetical negative examples. In total, the finalized METRICS-E3 includes over 200 examples. The complete resource is publicly available through an interactive website. CRITICAL RELEVANCE STATEMENT: METRICS-E3 offers deeper insights into each METRICS item and condition, providing concrete examples with accompanying commentary and recommendations to enhance the evaluation of methodological quality in radiomics research. KEY POINTS: As a complementary initiative to METRICS, METRICS-E3 is intended to support stakeholders in evaluating the methodological aspects of radiomics studies. In METRICS-E3, each METRICS item and condition is supplemented with interpretive guidance, positive literature-based examples, hypothetical negative examples, and scoring recommendations. The complete METRICS-E3 explanation and elaboration resource is accessible at its interactive website.

Enabling Physicians to Make an Informed Adoption Decision on Artificial Intelligence Applications in Medical Imaging Diagnostics: Qualitative Study.

Hennrich J, Doctor E, Körner MF, Lederman R, Eymann T

PubMed · Aug 12, 2025
Artificial intelligence (AI) applications hold great promise for improving accuracy and efficiency in medical imaging diagnostics. However, despite the expected benefits of AI applications, widespread adoption of the technology is progressing more slowly than expected due to technological, organizational, regulatory, and user-related barriers, with physicians playing a central role in adopting AI applications. This study aims to provide guidance on enabling physicians to make an informed adoption decision regarding AI applications by identifying and discussing measures to address key barriers from physicians' perspectives. We used a 2-step qualitative research approach. First, we conducted a structured literature review, screening 865 papers to identify potential enabling measures. Second, we interviewed 14 experts to evaluate and enrich the literature-based measures. By analyzing the literature and interview transcripts, we identified 11 measures, categorized into Enabling Adoption Decision Measures (eg, educating physicians, preparing future physicians, and providing transparency) and Supporting Adoption Measures (eg, implementation guidelines and AI marketplaces). These measures aim to inform physicians' decisions and support the adoption process. This study provides a comprehensive overview of measures to enable physicians to make an informed adoption decision on AI applications in medical imaging diagnostics. We are thus the first to give specific recommendations on how to realize the potential of AI applications in medical imaging diagnostics from a user perspective.

Dense breasts and women's health: which screenings are essential?

Mota BS, Shimizu C, Reis YN, Gonçalves R, Soares Junior JM, Baracat EC, Filassi JR

PubMed · Aug 9, 2025
This review synthesizes current evidence regarding optimal breast cancer screening strategies for women with dense breasts, a population at increased risk due to decreased mammographic sensitivity. A systematic literature review was performed in accordance with PRISMA criteria, covering MEDLINE, EMBASE, CINAHL Plus, Scopus, and Web of Science until May 2025. The analysis examines advanced imaging techniques such as digital breast tomosynthesis (DBT), contrast-enhanced spectral mammography (CESM), ultrasound, and magnetic resonance imaging (MRI), assessing their effectiveness in addressing the shortcomings of traditional mammography in dense breast tissue. The review rigorously evaluates the incorporation of risk stratification models, such as the BCSC, in customizing screening regimens, in conjunction with innovative technologies like liquid biopsy and artificial intelligence-based image analysis for improved risk prediction. A key emphasis is placed on the heterogeneity in international screening guidelines and the challenges in translating research findings to diverse clinical settings, particularly in resource-constrained environments. The discussion includes ethical implications regarding compulsory breast density notification and the possibility of intensifying disparities in health care. The review ultimately encourages the development of evidence-based, context-specific guidelines that facilitate equitable access to effective breast cancer screening for all women with dense breasts.

Value of artificial intelligence in neuro-oncology.

Voigtlaender S, Nelson TA, Karschnia P, Vaios EJ, Kim MM, Lohmann P, Galldiks N, Filbin MG, Azizi S, Natarajan V, Monje M, Dietrich J, Winter SF

PubMed · Aug 8, 2025
CNS cancers are complex, difficult-to-treat malignancies that remain insufficiently understood and mostly incurable, despite decades of research efforts. Artificial intelligence (AI) is poised to reshape neuro-oncological practice and research, driving advances in medical image analysis, neuro-molecular-genetic characterisation, biomarker discovery, therapeutic target identification, tailored management strategies, and neurorehabilitation. This Review examines key opportunities and challenges associated with AI applications along the neuro-oncological care trajectory. We highlight emerging trends in foundation models, biophysical modelling, synthetic data, and drug development, and discuss regulatory, operational, and ethical hurdles across data, translation, and implementation gaps. Near-term clinical translation depends on scaling validated AI solutions for well-defined clinical tasks. In contrast, more experimental AI solutions offer broader potential but require technical refinement and resolution of data and regulatory challenges. Addressing both general and neuro-oncology-specific issues is essential to unlock the full potential of AI and ensure its responsible, effective, and needs-based integration into neuro-oncological practice.

Artificial intelligence in radiology, nuclear medicine and radiotherapy: Perceptions, experiences and expectations from the medical radiation technologists in Central and South America.

Mendez-Avila C, Torre S, Arce YV, Contreras PR, Rios J, Raza NO, Gonzalez H, Hernandez YC, Cabezas A, Lucero M, Ezquerra V, Malamateniou C, Solis-Barquero SM

PubMed · Aug 8, 2025
Artificial intelligence (AI) has been growing in medical imaging and clinical practice. It is essential to understand the perceptions, experiences, and expectations regarding AI implementation among medical radiation technologists (MRTs) working in radiology, nuclear medicine, and radiotherapy. Several global studies have reported on AI implementation, but there is almost no information from Central and South American professionals. This study aimed to understand MRTs' perceptions of the impact of AI, as well as their varying experiences and expectations regarding its implementation. An online survey of Central and South American MRTs collected qualitative data on perceptions of AI implementation in radiology, nuclear medicine, and radiotherapy. The analysis used descriptive statistics for closed-ended questions and dimension coding for open-ended responses. A total of 398 valid responses were obtained, and 98.5% (n = 392) of respondents agreed with the implementation of AI in clinical practice. The primary contributions of AI identified were process optimization, greater diagnostic accuracy, and the possibility of job expansion. Concerns were raised regarding delayed and limited training opportunities in this domain, the displacement of roles, and dehumanization of clinical practice. This sample likely over-represents professionals with greater AI knowledge, so these results should be interpreted with caution. Our findings indicate strong professional confidence in AI's capacity to improve imaging quality while maintaining patient safety standards; however, user resistance may disrupt implementation efforts.
Our results highlight the dual need for (a) comprehensive professional training programs and (b) user education initiatives that demonstrate AI's clinical value in radiology. We therefore recommend a carefully structured, phased AI implementation approach, guided by evidence-based guidelines and validated training protocols from existing research. AI is already present in medical imaging, but its effective implementation depends on building acceptance and trust through education and training, enabling MRTs to use it safely for patient benefit.

Patient Preferences for Artificial Intelligence in Medical Imaging: A Single-Center Cross-Sectional Survey.

McGhee KN, Barrett DJ, Safarini O, Elkassem AA, Eddins JT, Smith AD, Rothenberg SA

PubMed · Aug 7, 2025
Artificial intelligence (AI) is rapidly being implemented in clinical practice to improve diagnostic accuracy and reduce provider burnout. However, patients' self-perceived knowledge and perceptions of AI's role in their care remain unclear. This study aims to explore the preferences of patients undergoing cross-sectional imaging exams regarding the use and communication of AI in their care. In this single-center cross-sectional study, a structured questionnaire was administered to patients undergoing outpatient CT or MRI examinations between June and July 2024 to assess baseline self-perceived knowledge of AI, perspectives on AI in clinical care, preferences regarding AI-generated results, and economic considerations related to AI, using Likert scales and categorical questions. A total of 226 participants (143 females; mean age 53 years) were surveyed, with 67.4% (151/224) reporting minimal to no knowledge of AI in medicine; lower knowledge levels were associated with lower socioeconomic status (p < .001). 90.3% (204/226) believed they should be informed about the use of AI in their care, and 91.1% (204/224) supported the right to opt out. Additionally, 91.1% (204/224) of participants expressed a strong preference for being informed when AI was involved in interpreting their medical images. 65.6% (143/218) indicated that they would not accept a screening imaging exam interpreted exclusively by an AI algorithm. Finally, 91.1% (204/224) of participants wanted disclosure when AI was used, and 89.1% (196/220) felt such disclosure and clarification of discrepancies should be considered standard care. To align AI adoption with patient preferences and expectations, radiology practices must prioritize disclosure, patient engagement, and standardized documentation of AI use without overly burdening the diagnostic workflow.
Patients prefer transparency about AI utilization in their care, and our study highlights the discrepancy between patient preferences and current clinical practice. Patients are not expected to understand the technical aspects of an imaging examination, such as acquisition parameters or reconstruction kernel, and must trust their providers to act in their best interest. Clear communication of how AI is being used in their care should be provided in ways that do not overly burden the radiologist.

Role of AI in Clinical Decision-Making: An Analysis of FDA Medical Device Approvals.

Fernando P, Lyell D, Wang Y, Magrabi F

PubMed · Aug 7, 2025
The U.S. Food and Drug Administration (FDA) plays an important role in ensuring safety and effectiveness of AI/ML-enabled devices through its regulatory processes. In recent years, there has been an increase in the number of these devices cleared by FDA. This study analyzes 104 FDA-approved ML-enabled medical devices from May 2021 to April 2023, extending previous research to provide a contemporary perspective on this evolving landscape. We examined clinical task, device task, device input and output, ML method and level of autonomy. Most approvals (n = 103) were via the 510(k) premarket notification pathway, indicating substantial equivalence to existing devices. Devices predominantly supported diagnostic tasks (n = 81). The majority of devices used imaging data (n = 99), with CT and MRI being the most common modalities. Device autonomy levels were distributed as follows: 52% assistive (requiring users to confirm or approve AI provided information or decision), 27% autonomous information, and 21% autonomous decision. The prevalence of assistive devices indicates a cautious approach to integrating ML into clinical decision-making, favoring support rather than replacement of human judgment.

Foundation models for radiology-the position of the AI for Health Imaging (AI4HI) network.

de Almeida JG, Alberich LC, Tsakou G, Marias K, Tsiknakis M, Lekadir K, Marti-Bonmati L, Papanikolaou N

PubMed · Aug 6, 2025
Foundation models are large models trained on big data that can be used for downstream tasks. In radiology, these models can potentially address several gaps in fairness and generalization, as they can be trained on massive datasets without labelled data and adapted to tasks for which only a small amount of labelled data is available. This reduces one of the limiting bottlenecks in clinical model construction, data annotation, as these models can be trained through a variety of techniques that require little more than radiological images, with or without their corresponding radiological reports. However, foundation models may be insufficient, as they are affected (to a smaller extent than traditional supervised learning approaches) by the same issues that lead to underperforming models, such as a lack of transparency/explainability and biases. To address these issues, we advocate that the development of foundation models should not only be pursued but also accompanied by the development of a decentralized clinical validation and continuous training framework. This does not guarantee the resolution of the problems associated with foundation models, but it enables developers, clinicians and patients to know when, how and why models should be updated, creating a clinical AI ecosystem that is better capable of serving all stakeholders. CRITICAL RELEVANCE STATEMENT: Foundation models may mitigate issues like bias and poor generalization in radiology AI, but challenges persist. We propose a decentralized, cross-institutional framework for continuous validation and training to enhance model reliability, safety, and clinical utility. KEY POINTS: Foundation models trained on large datasets reduce annotation burdens and improve fairness and generalization in radiology. Despite improvements, they still face challenges like limited transparency, explainability, and residual biases.
A decentralized, cross-institutional framework for clinical validation and continuous training can strengthen reliability and inclusivity in clinical AI.