Page 8 of 17169 results

Emerging Frameworks for Objective Task-based Evaluation of Quantitative Medical Imaging Methods

Yan Liu, Huitian Xia, Nancy A. Obuchowski, Richard Laforest, Arman Rahmim, Barry A. Siegel, Abhinav K. Jha

arXiv preprint · Jul 7, 2025
Quantitative imaging (QI) is demonstrating strong promise across multiple clinical applications. For clinical translation of QI methods, objective evaluation on clinically relevant tasks is essential. To address this need, multiple evaluation strategies are being developed. In this paper, based on previous literature, we outline four emerging frameworks to perform evaluation studies of QI methods. We first discuss the use of virtual imaging trials (VITs) to evaluate QI methods. Next, we outline a no-gold-standard evaluation framework to clinically evaluate QI methods without ground truth. Third, a framework to evaluate QI methods for joint detection and quantification tasks is outlined. Finally, we outline a framework to evaluate QI methods that output multi-dimensional parameters, such as radiomic features. We review these frameworks, discussing their utilities and limitations. Further, we examine future research areas in evaluation of QI methods. Given the recent advancements in PET, including long axial field-of-view scanners and the development of artificial-intelligence algorithms, we present these frameworks in the context of PET.

Enhancing Prostate Cancer Classification: A Comprehensive Review of Multiparametric MRI and Deep Learning Integration.

Valizadeh G, Morafegh M, Fatemi F, Ghafoori M, Saligheh Rad H

PubMed paper · Jul 4, 2025
Multiparametric MRI (mpMRI) has become an essential tool in the detection of prostate cancer (PCa) and can help many men avoid unnecessary biopsies. However, interpreting prostate mpMRI remains subjective, labor-intensive, and more complex compared to traditional transrectal ultrasound. These challenges will likely grow as MRI is increasingly adopted for PCa screening and diagnosis. This development has sparked interest in non-invasive artificial intelligence (AI) support, as larger and better-labeled datasets now enable deep-learning (DL) models to address important tasks in the prostate MRI workflow. Specifically, DL classification networks can be trained to differentiate between benign tissue and PCa, identify non-clinically significant disease versus clinically significant disease, and predict high-grade cancer at both the lesion and patient levels. This review focuses on the integration of DL classification networks with mpMRI for PCa assessment, examining key network architectures and strategies, the impact of different MRI sequence inputs on model performance, and the added value of incorporating domain knowledge and clinical information into MRI-based DL classifiers. It also highlights reported comparisons between DL models and the Prostate Imaging Reporting and Data System (PI-RADS) for PCa diagnosis and the potential of AI-assisted predictions, alongside ongoing efforts to improve model explainability and interpretability to support clinical trust and adoption. It further discusses the potential role of DL-based computer-aided diagnosis systems in improving the prostate MRI reporting workflow while addressing current limitations and future outlooks to facilitate better clinical integration of these systems. Evidence Level: N/A. Technical Efficacy: Stage 2.

AI-enabled obstetric point-of-care ultrasound as an emerging technology in low- and middle-income countries: provider and health system perspectives.

Della Ripa S, Santos N, Walker D

PubMed paper · Jul 4, 2025
In many low- and middle-income countries (LMICs), widespread access to obstetric ultrasound is challenged by a lack of trained providers, heavy workloads, and inadequate resources required for sustainability. Artificial intelligence (AI) is a powerful tool for automating image acquisition and interpretation and may help overcome these barriers. This study explored stakeholders' opinions about how AI-enabled point-of-care ultrasound (POCUS) might change current antenatal care (ANC) services in LMICs and identified key considerations for introduction. We purposively sampled midwives, doctors, researchers, and implementors for this mixed methods study, with a focus on those who live or work in African LMICs. Individuals completed an anonymous web-based survey, then participated in an interview or focus group. Among the 41 participants, we captured demographics, experience with and perceptions of standard POCUS, and reactions to an AI-enabled POCUS prototype description. Qualitative data were analyzed by thematic content analysis, and quantitative Likert and rank-order data were aggregated as frequencies; the latter were presented alongside illustrative quotes to highlight overall versus nuanced perceptions. The following themes emerged: (1) priority AI capabilities; (2) potential impact on ANC quality, services, and clinical outcomes; (3) health system integration considerations; and (4) research priorities. First, AI-enabled POCUS elicited concerns around algorithmic accuracy and compromised clinical acumen due to over-reliance on AI, but an interest in gestational age automation. Second, there was overall agreement that both standard and AI-enabled POCUS could improve ANC attendance (75%, 65%, respectively), provider-client trust (82%, 60%), and providers' confidence in clinical decision-making (85%, 70%). AI consistently elicited more uncertainty among respondents. Third, health system considerations emerged, including task sharing with midwives, ultrasound training delivery and curricular content, and policy-related issues such as data security and liability risks. For both standard and AI-enabled POCUS, clinical decision support and referral strengthening were deemed necessary to improve outcomes. Lastly, ranked priority research areas included algorithm accuracy across diverse populations and impact on ANC performance indicators; mortality indicators were less prioritized. Optimism that AI-enabled POCUS can increase access in settings with limited personnel and resources is coupled with expressions of caution and potential risks that warrant careful consideration and exploration.

Clinical obstacles to machine-learning POCUS adoption and system-wide AI implementation (The COMPASS-AI survey).

Wong A, Roslan NL, McDonald R, Noor J, Hutchings S, D'Costa P, Via G, Corradi F

PubMed paper · Jul 3, 2025
Point-of-care ultrasound (POCUS) has become indispensable in various medical specialties. The integration of artificial intelligence (AI) and machine learning (ML) holds significant promise to enhance POCUS capabilities further. However, a comprehensive understanding of healthcare professionals' perspectives on this integration is lacking. This study aimed to investigate the global perceptions, familiarity, and adoption of AI in POCUS among healthcare professionals. An international, web-based survey was conducted among healthcare professionals involved in POCUS. The survey instrument included sections on demographics, familiarity with AI, perceived utility, barriers (technological, training, trust, workflow, legal/ethical), and overall perceptions regarding AI-assisted POCUS. Data were analysed using descriptive statistics, frequency distributions, and group comparisons (chi-square/Fisher's exact tests and t-tests/Mann-Whitney U tests). This study surveyed 1154 healthcare professionals on perceived barriers to implementing AI in point-of-care ultrasound. Despite general enthusiasm, with 81.1% of respondents expressing agreement or strong agreement, significant barriers were identified. The most frequently cited single greatest barriers were Training & Education (27.1%) and Clinical Validation & Evidence (17.5%). Analysis also revealed that perceptions of specific barriers vary significantly based on demographic factors, including region of practice, medical specialty, and years of healthcare experience. This novel global survey provides critical insights into the perceptions and adoption of AI in POCUS. Findings highlight considerable enthusiasm alongside crucial challenges, primarily concerning training, validation, guidelines, and support. Addressing these barriers is essential for the responsible and effective implementation of AI in POCUS.
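The group comparisons the survey reports can be illustrated with a minimal Pearson chi-square computation for a 2x2 contingency table (e.g. agreement vs. disagreement across two regions); the counts below are invented for illustration and are not survey data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    `table` is a list of two rows of observed counts. The statistic sums
    (observed - expected)^2 / expected over all four cells, where the
    expected count assumes independence of rows and columns.
    """
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = two regions, columns = agree / disagree.
stat = chi_square_2x2([[30, 10], [20, 40]])
```

In practice a library routine (e.g. SciPy's `chi2_contingency`) would also return the p-value and apply continuity corrections; the raw statistic above is the core of the comparison.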

Radiological and Biological Dictionary of Radiomics Features: Addressing Understandable AI Issues in Personalized Prostate Cancer, Dictionary Version PM1.0.

Salmanpour MR, Amiri S, Gharibi S, Shariftabrizi A, Xu Y, Weeks WB, Rahmim A, Hacihaliloglu I

PubMed paper · Jul 3, 2025
Artificial intelligence (AI) can advance medical diagnostics, but limited interpretability restricts its clinical use. This work links standardized quantitative radiomics features (RFs) extracted from medical images with clinical frameworks like PI-RADS, ensuring AI models are understandable and aligned with clinical practice. We investigate the connection between visual semantic features defined in PI-RADS and associated risk factors, moving beyond abnormal imaging findings, and establish a shared framework between medical and AI professionals by creating a standardized radiological/biological RF dictionary. Six interpretable and seven complex classifiers, combined with nine interpretable feature selection algorithms (FSAs), were applied to RFs extracted from segmented lesions in T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) multiparametric MRI sequences to predict TCIA-UCLA scores, grouped as low-risk (scores 1-3) and high-risk (scores 4-5). We then utilized the created dictionary to interpret the best predictive models. Combining sequences with FSAs including the ANOVA F-test, Correlation Coefficient, and Fisher Score, and utilizing logistic regression, identified key features: the 90th percentile from T2WI (reflecting the hypo-intensity associated with prostate cancer risk); Variance from T2WI (lesion heterogeneity); shape metrics from ADC, including Least Axis Length and Surface Area to Volume ratio (lesion shape and compactness); and Run Entropy from ADC (texture consistency). This approach achieved the highest average accuracy of 0.78 ± 0.01, significantly outperforming single-sequence methods (p < 0.05). The developed dictionary for Prostate-MRI (PM1.0) serves as a common language and fosters collaboration between clinical professionals and AI developers to advance trustworthy AI solutions that support reliable, interpretable clinical decisions.
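Two of the first-order features named above, the 90th percentile and Variance, can be sketched directly from a flat list of lesion voxel intensities. The linear-interpolation percentile rule and the sample values below are illustrative choices, not the paper's exact extraction pipeline.

```python
def first_order_features(intensities):
    """Compute two first-order radiomics features from voxel intensities.

    P90 uses linear interpolation between closest ranks; Variance is the
    population variance. Both follow common IBSI-style conventions, but the
    exact definitions in any given radiomics toolkit may differ slightly.
    """
    xs = sorted(intensities)
    n = len(xs)
    # 90th percentile: interpolate between the two ranks straddling 0.9*(n-1).
    rank = 0.9 * (n - 1)
    lo = int(rank)
    frac = rank - lo
    p90 = xs[lo] if lo + 1 >= n else xs[lo] + frac * (xs[lo + 1] - xs[lo])
    mean = sum(xs) / n
    variance = sum((x - mean) ** 2 for x in xs) / n
    return {"P90": p90, "Variance": variance}

# Toy lesion intensities (illustrative, not patient data).
features = first_order_features([10, 12, 15, 20, 22, 25, 30, 35, 40, 50])
```

A low P90 on T2WI would correspond to the hypo-intensity the dictionary associates with higher risk, while Variance captures the lesion heterogeneity mentioned in the abstract.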

Embedding-Based Federated Data Sharing via Differentially Private Conditional VAEs

Francesco Di Salvo, Hanh Huyen My Nguyen, Christian Ledig

arXiv preprint · Jul 3, 2025
Deep Learning (DL) has revolutionized medical imaging, yet its adoption is constrained by data scarcity and privacy regulations, limiting access to diverse datasets. Federated Learning (FL) enables decentralized training but suffers from high communication costs and is often restricted to a single downstream task, reducing flexibility. We propose a data-sharing method via Differentially Private (DP) generative models. By adopting foundation models, we extract compact, informative embeddings, reducing redundancy and lowering computational overhead. Clients collaboratively train a Differentially Private Conditional Variational Autoencoder (DP-CVAE) to model a global, privacy-aware data distribution, supporting diverse downstream tasks. Our approach, validated across multiple feature extractors, enhances privacy, scalability, and efficiency, outperforming traditional FL classifiers while ensuring differential privacy. Additionally, DP-CVAE produces higher-fidelity embeddings than DP-CGAN while requiring 5× fewer parameters.
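The privacy step in DP training of generative models is typically the Gaussian mechanism: clip each embedding's L2 norm, then add calibrated Gaussian noise. A dependency-free sketch under assumed parameter values (the paper's actual clipping bound and noise multiplier are not given here):

```python
import math
import random

def dp_gaussian_mechanism(embedding, clip_norm=1.0, sigma=0.5, rng=None):
    """Privatize one embedding vector via the Gaussian mechanism.

    The vector is scaled down so its L2 norm is at most `clip_norm`
    (bounding each sample's sensitivity), then Gaussian noise with standard
    deviation sigma * clip_norm is added per coordinate. Parameter values
    are illustrative, not the paper's.
    """
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(x * x for x in embedding))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in embedding]
    return [x + rng.gauss(0.0, sigma * clip_norm) for x in clipped]

# A vector of norm 5 is clipped to norm 1 before noise is added.
private_embedding = dp_gaussian_mechanism([3.0, 4.0])
```

In DP-SGD-style training the same clip-then-noise step is applied to per-sample gradients of the CVAE rather than to raw embeddings, but the mechanism is identical.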

Are Vision Transformer Representations Semantically Meaningful? A Case Study in Medical Imaging

Montasir Shams, Chashi Mahiul Islam, Shaeke Salman, Phat Tran, Xiuwen Liu

arXiv preprint · Jul 2, 2025
Vision transformers (ViTs) have rapidly gained prominence in medical imaging tasks such as disease classification, segmentation, and detection due to their superior accuracy compared to conventional deep learning models. However, because of their size and the complex interactions of the self-attention mechanism, they are not well understood. In particular, it is unclear whether the representations produced by such models are semantically meaningful. In this paper, using a projected gradient-based algorithm, we show that their representations are not semantically meaningful and are inherently vulnerable to small changes. Images with imperceptible differences can have very different representations; on the other hand, images that should belong to different semantic classes can have nearly identical representations. Such vulnerability can lead to unreliable classification results; for example, unnoticeable changes reduce classification accuracy by over 60%. To the best of our knowledge, this is the first work to systematically demonstrate this fundamental lack of semantic meaningfulness in ViT representations for medical image classification, revealing a critical challenge for their deployment in safety-critical systems.
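The projected gradient idea can be sketched on a toy encoder: take ascent steps that increase the distance between the perturbed and original representations, then project back onto a small L-infinity ball so the input change stays imperceptible. The linear encoder and finite-difference gradients below are simplifications for illustration; a real attack would backpropagate through the ViT.

```python
def encoder(x):
    """Toy stand-in for a ViT embedding: a fixed linear map (illustrative)."""
    W = [[0.5, -1.0], [2.0, 0.3]]
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def repr_distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def pgd_repr_attack(x, eps=0.05, step=0.01, iters=20):
    """Maximize representation change within an L-inf ball of radius eps.

    Each iteration takes a signed gradient-ascent step on the squared
    representation distance, then clamps every coordinate back into
    [x_i - eps, x_i + eps] (the projection step of PGD).
    """
    target = encoder(x)
    adv = list(x)
    h = 1e-5  # finite-difference step for the gradient estimate
    for _ in range(iters):
        base = repr_distance(encoder(adv), target)
        grad = []
        for i in range(len(adv)):
            bumped = list(adv)
            bumped[i] += h
            grad.append((repr_distance(encoder(bumped), target) - base) / h)
        adv = [a + step * (1 if g >= 0 else -1) for a, g in zip(adv, grad)]
        adv = [min(max(a, xi - eps), xi + eps) for a, xi in zip(adv, x)]
    return adv

adv = pgd_repr_attack([1.0, 1.0])
```

Even with the perturbation capped at eps per pixel, the representation moves measurably, which is the vulnerability the paper demonstrates at scale on medical images.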

A federated learning-based privacy-preserving image processing framework for brain tumor detection from CT scans.

Al-Saleh A, Tejani GG, Mishra S, Sharma SK, Mousavirad SJ

PubMed paper · Jul 2, 2025
The detection of brain tumors is crucial in medical imaging, because accurate and early diagnosis can have a positive effect on patient outcomes. Because traditional deep learning models centralize all training data, they raise concerns about privacy, regulatory compliance, and the heterogeneous data held by different institutions. We introduce the anisotropic-residual capsule hybrid Gorilla Badger optimized network (Aniso-ResCapHGBO-Net) framework for detecting brain tumors in a privacy-preserving, decentralized system shared by many healthcare institutions. ResNet-50 and capsule networks are incorporated to achieve better feature extraction and preserve the spatial structure of the images. The hybrid Gorilla Badger optimization algorithm (HGBOA) is applied to select the key features. Preprocessing techniques include anisotropic diffusion filtering, morphological operations, and mutual information-based image registration. Model updates are made secure and tamper-evident on a private Ethereum blockchain using an SHA-256 hashing scheme. The project is built using Python, TensorFlow, and PyTorch. The model achieves 99.07% accuracy, 98.54% precision, and 99.82% sensitivity on benchmark CT imaging of brain tumors, while also reducing both false negatives and false positives. The framework protects patients' data without decreasing the accuracy of brain tumor detection.
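Two of the ingredients the abstract names, federated averaging of client models and SHA-256 hashing of updates for tamper evidence, can be sketched minimally. The size-weighted average is standard FedAvg; the JSON serialization and toy weights are illustrative assumptions, not the paper's implementation.

```python
import hashlib
import json

def fedavg(client_weights, client_sizes):
    """Size-weighted federated averaging of flat weight vectors (FedAvg).

    Each client's vector is weighted by its local dataset size, so clients
    with more data contribute proportionally more to the global model.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

def update_digest(weights):
    """SHA-256 digest of a weight vector, as a tamper-evident record.

    The abstract logs such hashes on a private Ethereum chain; rounding
    before serialization is an illustrative choice to keep the digest
    stable across platforms.
    """
    payload = json.dumps([round(w, 8) for w in weights]).encode()
    return hashlib.sha256(payload).hexdigest()

# Two toy clients: the second holds 3x as much data, so it dominates.
global_w = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
digest = update_digest(global_w)
```

Any later change to the stored weights yields a different digest, which is what makes the on-chain record tamper-evident.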

Multimodal AI to forecast arrhythmic death in hypertrophic cardiomyopathy.

Lai C, Yin M, Kholmovski EG, Popescu DM, Lu DY, Scherer E, Binka E, Zimmerman SL, Chrispin J, Hays AG, Phelan DM, Abraham MR, Trayanova NA

PubMed paper · Jul 2, 2025
Sudden cardiac death from ventricular arrhythmias is a leading cause of mortality worldwide. Arrhythmic death prognostication is challenging in patients with hypertrophic cardiomyopathy (HCM), a setting where current clinical guidelines show low performance and inconsistent accuracy. Here, we present a deep learning approach, MAARS (Multimodal Artificial intelligence for ventricular Arrhythmia Risk Stratification), to forecast lethal arrhythmia events in patients with HCM by analyzing multimodal medical data. MAARS' transformer-based neural networks learn from electronic health records, echocardiogram and radiology reports, and contrast-enhanced cardiac magnetic resonance images, the latter being a unique feature of this model. MAARS achieves an area under the curve of 0.89 (95% confidence interval (CI) 0.79-0.94) and 0.81 (95% CI 0.69-0.93) in internal and external cohorts and outperforms current clinical guidelines by 0.27-0.35 (internal) and 0.22-0.30 (external). In contrast to clinical guidelines, it demonstrates fairness across demographic subgroups. We interpret MAARS' predictions on multiple levels to promote artificial intelligence transparency and derive risk factors warranting further investigation.
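The reported figure of merit, area under the ROC curve, equals the Mann-Whitney probability that a randomly chosen positive (event) case outscores a randomly chosen negative one. A minimal computation with toy risk scores (not MAARS outputs):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Counts, over all positive/negative pairs, how often the positive case
    scores higher (ties count half), then normalizes by the number of
    pairs. Equivalent to the trapezoidal area under the ROC curve.
    """
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores: patients with an arrhythmic event vs. without.
value = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])
```

Confidence intervals like the 0.79-0.94 reported above are typically obtained by bootstrapping this statistic over resampled cohorts.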

Enhancing ultrasonographic detection of hepatocellular carcinoma with artificial intelligence: current applications, challenges and future directions.

Wongsuwan J, Tubtawee T, Nirattisaikul S, Danpanichkul P, Cheungpasitporn W, Chaichulee S, Kaewdech A

PubMed paper · Jul 1, 2025
Hepatocellular carcinoma (HCC) remains a leading cause of cancer-related mortality worldwide, with early detection playing a crucial role in improving survival rates. Artificial intelligence (AI), particularly in medical image analysis, has emerged as a potential tool for HCC diagnosis and surveillance. Recent advancements in deep learning-driven medical imaging have demonstrated significant potential in enhancing early HCC detection, particularly in ultrasound (US)-based surveillance. This review provides a comprehensive analysis of the current landscape, challenges, and future directions of AI in HCC surveillance, with a specific focus on its application in US imaging. Additionally, it explores AI's transformative potential in clinical practice and its implications for improving patient outcomes. We examine various AI models developed for HCC diagnosis, highlighting their strengths and limitations, with a particular emphasis on deep learning approaches. Among these, convolutional neural networks have shown notable success in detecting and characterising different focal liver lesions on B-mode US, often outperforming conventional radiological assessments. Despite these advancements, several challenges hinder AI integration into clinical practice, including data heterogeneity, a lack of standardisation, concerns regarding model interpretability, regulatory constraints, and barriers to real-world clinical adoption. Addressing these issues necessitates the development of large, diverse, and high-quality data sets to enhance the robustness and generalisability of AI models. Emerging trends in AI for HCC surveillance, such as multimodal integration, explainable AI, and real-time diagnostics, offer promising advancements. These innovations have the potential to significantly improve the accuracy, efficiency, and clinical applicability of AI-driven HCC surveillance, ultimately contributing to enhanced patient outcomes.
