Page 4 of 987 results

American College of Veterinary Radiology and European College of Veterinary Diagnostic Imaging position statement on artificial intelligence.

Appleby RB, Difazio M, Cassel N, Hennessey R, Basran PS

pubmed · Jun 1 2025
The American College of Veterinary Radiology (ACVR) and the European College of Veterinary Diagnostic Imaging (ECVDI) recognize the transformative potential of AI in veterinary diagnostic imaging and radiation oncology. This position statement outlines the guiding principles for the ethical development and integration of AI technologies to ensure patient safety and clinical effectiveness. Artificial intelligence systems must adhere to good machine learning practices, emphasizing transparency, error reporting, and the involvement of clinical experts throughout development. These tools should also include robust mechanisms for secure patient data handling and postimplementation monitoring. The position highlights the critical importance of maintaining a veterinarian in the loop, preferably a board-certified radiologist or radiation oncologist, to interpret AI outputs and safeguard diagnostic quality. Currently, no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety. The ACVR and ECVDI advocate for rigorous peer-reviewed research, unbiased third-party evaluations, and interdisciplinary collaboration to establish evidence-based benchmarks for AI applications. Additionally, the statement calls for enhanced education on AI for veterinary professionals, from foundational training in curricula to continuing education for practitioners. Veterinarians are encouraged to disclose AI usage to pet owners and provide alternative diagnostic options as needed. Regulatory bodies should establish guidelines to prevent misuse and protect the profession and patients. The ACVR and ECVDI stress the need for a cautious, informed approach to AI adoption, ensuring these technologies augment, rather than compromise, veterinary care.

Radiomics across modalities: a comprehensive review of neurodegenerative diseases.

Inglese M, Conti A, Toschi N

pubmed · Jun 1 2025
Radiomics allows the extraction of quantitative features from medical images that can reveal tissue patterns generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While the radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases in positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.
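The first-order radiomic features the review describes are simple intensity statistics computed over a segmented region of interest. A minimal NumPy sketch, not tied to any particular radiomics library and using a toy image rather than real scan data, might look like:

```python
import numpy as np

def entropy_from_histogram(voxels, bins=32):
    """Shannon entropy of the intensity histogram (a first-order feature)."""
    counts, _ = np.histogram(voxels, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def first_order_features(image, mask):
    """Compute a few first-order radiomic features over a masked region.

    `image` is an intensity array; `mask` is a boolean array of the same
    shape selecting the region of interest (e.g. a lesion segmentation).
    """
    voxels = image[mask]
    return {
        "mean": float(np.mean(voxels)),
        "variance": float(np.var(voxels)),
        "skewness": float(
            np.mean((voxels - voxels.mean()) ** 3) / (voxels.std() ** 3)
        ),
        "entropy": entropy_from_histogram(voxels),
        "energy": float(np.sum(voxels.astype(np.float64) ** 2)),
    }

# Toy example: a synthetic 2D "scan" with a bright lesion in the centre.
image = np.zeros((64, 64))
image[24:40, 24:40] = np.random.default_rng(0).normal(100, 10, (16, 16))
mask = image > 0
features = first_order_features(image, mask)
```

Libraries such as PyRadiomics compute these (and higher-order texture features) in a standardised way; the hand-rolled version above only illustrates the idea.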

Generative adversarial networks in medical image reconstruction: A systematic literature review.

Hussain J, Båth M, Ivarsson J

pubmed · Jun 1 2025
Recent advancements in generative adversarial networks (GANs) have demonstrated substantial potential in medical image processing. Despite this progress, reconstructing images from incomplete data remains a challenge, impacting image quality. This systematic literature review explores the use of GANs in enhancing and reconstructing medical imaging data. A document survey of computing literature was conducted using the ACM Digital Library to identify relevant articles from journals and conference proceedings using keyword combinations, such as "generative adversarial networks or generative adversarial network," "medical image or medical imaging," and "image reconstruction." Across the reviewed articles, there were 122 datasets used in 175 instances, 89 top metrics employed 335 times, 10 different tasks with a total count of 173, 31 distinct organs featured in 119 instances, and 18 modalities utilized in 121 instances, collectively depicting significant utilization of GANs in medical imaging. The adaptability and efficacy of GANs were showcased across diverse medical tasks, organs, and modalities, utilizing top public as well as private/synthetic datasets for disease diagnosis, including the identification of conditions like cancer in different anatomical regions. The study emphasized GANs' increasing integration and adaptability across diverse radiology modalities, showcasing their transformative impact on diagnostic techniques, including cross-modality tasks. The intricate interplay between network size, batch size, and loss function refinement significantly impacts GAN performance, although challenges in training persist. The study underscores GANs as dynamic tools shaping medical imaging, contributing significantly to image quality, training methodologies, and broader clinical advances.
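The adversarial objective underlying the surveyed GANs pits a discriminator, trained to score real images as 1 and generated images as 0, against a generator trained (in the common non-saturating form) to push the discriminator's score on its outputs toward 1. A minimal, library-free sketch of the two losses, with illustrative numbers rather than results from the review:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy on discriminator output probabilities."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_real, d_fake):
    # The discriminator should score real images as 1 and generated ones as 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake):
    # Non-saturating form: the generator tries to drive D(G(z)) toward 1.
    return bce(d_fake, np.ones_like(d_fake))

d_real = np.array([0.9, 0.8, 0.95])  # discriminator outputs on real scans
d_fake = np.array([0.1, 0.2, 0.05])  # discriminator outputs on generated scans
```

With a well-trained discriminator (as in these toy scores), the discriminator loss is small while the generator loss is large, which is the gradient signal that improves the generator during alternating training.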

Brain tumor segmentation with deep learning: Current approaches and future perspectives.

Verma A, Yadav AK

pubmed · Jun 1 2025
Accurate brain tumor segmentation from MRI images is critical in clinical practice, as it directly impacts the efficacy of diagnostic and treatment plans. Accurate segmentation of the tumor region can be challenging, especially when noise and abnormalities are present. This research provides a systematic review of automatic brain tumor segmentation techniques, with a specific focus on the design of network architectures. The review categorizes existing methods into unsupervised and supervised learning techniques, as well as machine learning and deep learning approaches within supervised techniques. Deep learning techniques are thoroughly reviewed, with a particular focus on CNN-based, U-Net-based, transfer learning-based, transformer-based, and hybrid transformer-based methods. This survey encompasses a broad spectrum of automatic segmentation methodologies, from traditional machine learning approaches to advanced deep learning frameworks. It provides an in-depth comparison of performance metrics, model efficiency, and robustness across multiple datasets, particularly the BraTS dataset. The study further examines multi-modal MRI imaging and its influence on segmentation accuracy, addressing domain adaptation, class imbalance, and generalization challenges. The analysis highlights the current challenges in Computer-aided Diagnostic (CAD) systems, examining how different models and imaging sequences impact performance. Recent advancements in deep learning, especially the widespread use of U-Net architectures, have significantly enhanced medical image segmentation. This review critically evaluates these developments, focusing on the iterative improvements in U-Net models that have driven progress in brain tumor segmentation. Furthermore, it explores various techniques for improving U-Net performance for medical applications, focusing on its potential for improving diagnostic and treatment planning procedures. The efficiency of these automated segmentation approaches is rigorously evaluated using the BraTS dataset, a benchmark from the annual Multimodal Brain Tumor Segmentation Challenge (MICCAI). This evaluation provides insights into the current state of the art and identifies key areas for future research and development.
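Segmentation accuracy on benchmarks such as BraTS is commonly reported with the Dice similarity coefficient, which measures overlap between predicted and reference masks. A minimal sketch on toy binary masks (not data from the challenge):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 8x8 example: a 4x4 "tumor" and a prediction shifted by one pixel.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True
score = dice_score(pred, truth)  # overlap is 3x3 = 9, so 2*9/(16+16) = 0.5625
```

BraTS additionally reports Dice separately per tumor sub-region (e.g. whole tumor, tumor core, enhancing tumor), but each is computed exactly like this on the corresponding binary mask.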

Broadening the Net: Overcoming Challenges and Embracing Novel Technologies in Lung Cancer Screening.

Czerlanis CM, Singh N, Fintelmann FJ, Damaraju V, Chang AEB, White M, Hanna N

pubmed · Jun 1 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide, with most cases diagnosed at advanced stages where curative treatment options are limited. Low-dose computed tomography (LDCT) for lung cancer screening (LCS) of individuals selected based on age and smoking history has shown a significant reduction in lung cancer-specific mortality. The number needed to screen to prevent one death from lung cancer is lower than that for breast cancer, cervical cancer, and colorectal cancer. Despite the substantial impact on reducing lung cancer-related mortality and proof that LCS with LDCT is effective, uptake of LCS has been low and LCS eligibility criteria remain imperfect. While LCS programs have historically faced patient recruitment challenges, research suggests that there are novel opportunities to both identify and improve screening for at-risk populations. In this review, we discuss the global obstacles to implementing LCS programs and strategies to overcome barriers in resource-limited settings. We explore successful approaches to promote LCS through robust engagement with community partners. Finally, we examine opportunities to enhance LCS in at-risk populations not captured by current eligibility criteria, including never smokers and individuals with a family history of lung cancer, with a focus on early detection through novel artificial intelligence technologies.
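The number needed to screen mentioned above is the reciprocal of the absolute risk reduction between the control and screened arms. A minimal sketch using hypothetical counts, not figures from any trial cited here:

```python
def number_needed_to_screen(deaths_control, n_control, deaths_screened, n_screened):
    """NNS = 1 / absolute risk reduction between control and screened arms."""
    absolute_risk_reduction = (
        deaths_control / n_control - deaths_screened / n_screened
    )
    return 1.0 / absolute_risk_reduction

# Illustrative (hypothetical) counts: 30 vs 24 lung-cancer deaths per 10,000.
nns = number_needed_to_screen(30, 10_000, 24, 10_000)  # -> ~1667
```

A lower NNS means fewer people must be screened to prevent one death, which is the basis for the comparison with breast, cervical, and colorectal cancer screening above.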

ESR Essentials: how to get to valuable radiology AI: the role of early health technology assessment-practice recommendations by the European Society of Medical Imaging Informatics.

Kemper EHM, Erenstein H, Boverhof BJ, Redekop K, Andreychenko AE, Dietzel M, Groot Lipman KBW, Huisman M, Klontzas ME, Vos F, IJzerman M, Starmans MPA, Visser JJ

pubmed · Jun 1 2025
AI tools in radiology are revolutionising the diagnosis, evaluation, and management of patients. However, there is a major gap between the large number of developed AI tools and those translated into daily clinical practice, which can be primarily attributed to limited usefulness of and trust in current AI tools. Development to date has been largely technically driven, with little effort put into value-based development to ensure AI tools will have a clinically relevant impact on patient care. An iterative, comprehensive value evaluation process covering the complete AI tool lifecycle should be part of radiology AI development. For value assessment of health technologies, health technology assessment (HTA) is an extensively used and comprehensive method. While most aspects of value covered by HTA apply to radiology AI, additional aspects, including transparency, explainability, and robustness, are unique to radiology AI and crucial in its value assessment. Additionally, value assessment should be included early in the design stage to determine the potential impact and subsequent requirements of the AI tool. Such early assessment should be systematic, transparent, and practical to ensure all stakeholders and value aspects are considered. Hence, early value-based development incorporating early HTA will lead to more valuable AI tools and thus facilitate translation to clinical practice. CLINICAL RELEVANCE STATEMENT: This paper advocates for the use of early value-based assessments. These assessments promote a comprehensive evaluation of how an AI tool in development can provide value in clinical practice and thus help improve the quality of these tools and the clinical process they support. KEY POINTS: Value in radiology AI should be perceived as a comprehensive term including health technology assessment domains and AI-specific domains. Incorporation of an early health technology assessment for radiology AI during development will lead to more valuable radiology AI tools. Comprehensive and transparent value assessment of radiology AI tools is essential for their widespread adoption.

A European Multi-Center Breast Cancer MRI Dataset

Gustav Müller-Franzes, Lorena Escudero Sánchez, Nicholas Payne, Alexandra Athanasiou, Michael Kalogeropoulos, Aitor Lopez, Alfredo Miguel Soro Busto, Julia Camps Herrero, Nika Rasoolzadeh, Tianyu Zhang, Ritse Mann, Debora Jutz, Maike Bode, Christiane Kuhl, Wouter Veldhuis, Oliver Lester Saldanha, JieFu Zhu, Jakob Nikolas Kather, Daniel Truhn, Fiona J. Gilbert

arxiv preprint · May 31 2025
Detecting breast cancer early is of the utmost importance to effectively treat the millions of women afflicted by breast cancer worldwide every year. Although mammography is the primary imaging modality for screening breast cancer, there is an increasing interest in adding magnetic resonance imaging (MRI) to screening programmes, particularly for women at high risk. Recent guidelines by the European Society of Breast Imaging (EUSOBI) recommended breast MRI as a supplemental screening tool for women with dense breast tissue. However, acquiring and reading MRI scans requires significantly more time from expert radiologists. This highlights the need to develop new automated methods to detect cancer accurately using MRI and Artificial Intelligence (AI), which have the potential to support radiologists in breast MRI interpretation and classification and help detect cancer earlier. For this reason, the ODELIA consortium has made this multi-centre dataset publicly available to assist in developing AI tools for the detection of breast cancer on MRI.

ABCDEFGH: An Adaptation-Based Convolutional Neural Network-CycleGAN Disease-Courses Evolution Framework Using Generative Models in Health Education

Ruiming Min, Minghao Liu

arxiv preprint · May 31 2025
With the advancement of modern medicine and the development of technologies such as MRI, CT, and cellular analysis, it has become increasingly critical for clinicians to accurately interpret various diagnostic images. However, modern medical education often faces challenges due to limited access to high-quality teaching materials, stemming from privacy concerns and a shortage of educational resources (Balogh et al., 2015). In this context, image data generated by machine learning models, particularly generative models, presents a promising solution. These models can create diverse and comparable imaging datasets without compromising patient privacy, thereby supporting modern medical education. In this study, we explore the use of convolutional neural networks (CNNs) and CycleGAN (Zhu et al., 2017) for generating synthetic medical images. The source code is available at https://github.com/mliuby/COMP4211-Project.
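CycleGAN, which the authors build on, trains paired translators between two image domains under a cycle-consistency constraint: an image translated to the other domain and back should match the original, typically penalised with an L1 term. A minimal illustrative sketch of that loss on toy arrays, not the authors' implementation:

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """L1 cycle-consistency loss: mean |F(G(x)) - x| over all pixels.

    G maps domain A -> B and F maps B -> A; a faithful cycle keeps this small.
    """
    return float(np.mean(np.abs(x_reconstructed - x)))

rng = np.random.default_rng(42)
x = rng.uniform(0, 1, (32, 32))              # source-domain image (toy)
x_rec = x + rng.normal(0, 0.01, (32, 32))    # imperfect reconstruction F(G(x))
loss = cycle_consistency_loss(x, x_rec)
```

In the full CycleGAN objective this term is added (with a weighting factor) to the adversarial losses of both translators, discouraging mappings that discard the content of the input image.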