Securing Healthcare Data Integrity: Deepfake Detection Using Autonomous AI Approaches.

Hsu CC, Tsai MY, Yu CM

PubMed · Jul 9 2025
The rapid evolution of deepfake technology poses critical challenges to healthcare systems, particularly in safeguarding the integrity of medical imaging, electronic health records (EHR), and telemedicine platforms. As autonomous AI becomes increasingly integrated into smart healthcare, the potential misuse of deepfakes to manipulate sensitive healthcare data or impersonate medical professionals highlights the urgent need for robust and adaptive detection mechanisms. In this work, we propose DProm, a dynamic deepfake detection framework leveraging visual prompt tuning (VPT) with a pre-trained Swin Transformer. Unlike traditional static detection models, which struggle to adapt to rapidly evolving deepfake techniques, DProm fine-tunes a small set of visual prompts to efficiently adapt to new data distributions with minimal computational and storage requirements. Comprehensive experiments demonstrate that DProm achieves state-of-the-art performance in both static cross-dataset evaluations and dynamic scenarios, ensuring robust detection across diverse data distributions. By addressing the challenges of scalability, adaptability, and resource efficiency, DProm offers a transformative solution for enhancing the security and trustworthiness of autonomous AI systems in healthcare, paving the way for safer and more reliable smart healthcare applications.
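The abstract doesn't include implementation details, but the core visual prompt tuning idea is compact enough to sketch: freeze the pre-trained backbone and train only a small set of prompt tokens plus a classification head. Below is a minimal PyTorch sketch under that assumption; `num_prompts`, the mean-pooled head, and the token-level backbone interface are illustrative choices, not DProm's actual design.

```python
import torch
import torch.nn as nn

class VisualPromptTuning(nn.Module):
    """Minimal VPT-style wrapper: the backbone stays frozen and only a
    small set of learnable prompt tokens (plus the classifier head) is
    trained, keeping per-adaptation compute and storage low."""

    def __init__(self, backbone: nn.Module, embed_dim: int,
                 num_prompts: int = 10, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # freeze pre-trained weights
        # Learnable prompt tokens, shared across all inputs.
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, seq_len, embed_dim) from the patch embedding
        b = patch_tokens.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        tokens = torch.cat([prompts, patch_tokens], dim=1)
        feats = self.backbone(tokens)        # frozen transformer blocks
        return self.head(feats.mean(dim=1))  # pooled real/fake logits
```

Adapting to a new deepfake distribution then means re-training only `prompts` and `head`, which is what keeps the per-distribution storage footprint small.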

Enhancing automated detection and classification of dementia in individuals with cognitive impairment using artificial intelligence techniques.

Alotaibi SD, Alharbi AAK

PubMed · Jul 9 2025
Dementia is a chronic, degenerative disorder that is increasingly prevalent among older adults, posing significant challenges in providing appropriate care. As the number of dementia cases continues to rise, delivering optimal care becomes more complex. Machine learning (ML) plays a crucial role in addressing this challenge by utilizing medical data to enhance care planning and management for individuals at risk of various types of dementia. Magnetic resonance imaging (MRI) is a commonly used method for analyzing neurological disorders. Recent evidence highlights the benefits of integrating artificial intelligence (AI) techniques with MRI, significantly enhancing diagnostic accuracy for different forms of dementia. This paper explores the use of AI in the automated detection and classification of dementia, aiming to streamline early diagnosis and improve patient outcomes. Integrating ML models into clinical practice can transform dementia care by enabling early detection, personalized treatment plans, and more effective monitoring of disease progression. In this study, an Enhancing Automated Detection and Classification of Dementia in Thinking Inability Persons using Artificial Intelligence Techniques (EADCD-TIPAIT) technique is presented. The goal of the EADCD-TIPAIT technique is the detection and classification of dementia in individuals with cognitive impairment using MRI imaging. The EADCD-TIPAIT method first preprocesses the input data with z-score normalization. Next, it applies a binary greylag goose optimization (BGGO)-based feature selection approach to efficiently identify features that distinguish between normal and dementia-affected brain regions. A wavelet neural network (WNN) classifier is then employed to detect and classify dementia. Finally, the improved salp swarm algorithm (ISSA) is used to optimally tune the WNN's hyperparameters. The EADCD-TIPAIT technique was evaluated on a dementia prediction dataset, achieving a superior accuracy of 95.00% across diverse measures.
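As a rough illustration of the pipeline's first two stages, the sketch below pairs z-score normalization with a generic wrapper-style fitness function that any binary feature-selection metaheuristic (BGGO in the paper) could maximize. The accuracy/feature-count trade-off weight `alpha` and the 5-fold evaluation are assumptions for illustration, not the authors' published settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def zscore_normalize(X: np.ndarray) -> np.ndarray:
    """Scale each feature to zero mean and unit variance (the
    preprocessing step described above)."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def selection_fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray,
                      classifier, alpha: float = 0.99) -> float:
    """Wrapper fitness for a binary feature mask: reward cross-validated
    accuracy while lightly penalizing the fraction of features kept."""
    if mask.sum() == 0:
        return 0.0  # an empty feature subset is never acceptable
    acc = cross_val_score(classifier, X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.mean())
```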

4KAgent: Agentic Any Image to 4K Super-Resolution

Yushen Zuo, Qi Zheng, Mingyang Wu, Xinrui Jiang, Renjie Li, Jian Wang, Yide Zhang, Gengchen Mai, Lihong V. Wang, James Zou, Xiaoyu Wang, Ming-Hsuan Yang, Zhengzhong Tu

arXiv preprint · Jul 9 2025
We present 4KAgent, a unified agentic super-resolution generalist system designed to universally upscale any image to 4K resolution (and even higher, if applied iteratively). Our system can transform images from extremely low resolutions with severe degradations, for example, highly distorted inputs at 256x256, into crystal-clear, photorealistic 4K outputs. 4KAgent comprises three core components: (1) Profiling, a module that customizes the 4KAgent pipeline based on bespoke use cases; (2) A Perception Agent, which leverages vision-language models alongside image quality assessment experts to analyze the input image and make a tailored restoration plan; and (3) A Restoration Agent, which executes the plan, following a recursive execution-reflection paradigm, guided by a quality-driven mixture-of-expert policy to select the optimal output for each step. Additionally, 4KAgent embeds a specialized face restoration pipeline, significantly enhancing facial details in portrait and selfie photos. We rigorously evaluate our 4KAgent across 11 distinct task categories encompassing a total of 26 diverse benchmarks, setting new state-of-the-art on a broad spectrum of imaging domains. Our evaluations cover natural images, portrait photos, AI-generated content, satellite imagery, fluorescence microscopy, and medical imaging like fundoscopy, ultrasound, and X-ray, demonstrating superior performance in terms of both perceptual (e.g., NIQE, MUSIQ) and fidelity (e.g., PSNR) metrics. By establishing a novel agentic paradigm for low-level vision tasks, we aim to catalyze broader interest and innovation within vision-centric autonomous agents across diverse research communities. We will release all the code, models, and results at: https://4kagent.github.io.
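The recursive execution-reflection loop with quality-driven expert selection can be outlined generically. The Python below is a hypothetical sketch, not 4KAgent's released code: `Expert`, the no-reference `score` function, and the reflection budget are stand-ins for the paper's restoration experts, IQA scorers, and stopping policy.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Expert:
    name: str
    restore: Callable  # image -> restored image

def restoration_step(image, experts: List[Expert],
                     score: Callable, max_reflections: int = 3):
    """One execution-reflection round: run each candidate expert, score
    every output with a quality metric, keep the best, and stop
    (reflect) once no candidate improves on the current result."""
    best, best_score = image, score(image)
    for _ in range(max_reflections):
        outputs = [e.restore(best) for e in experts]
        scored = [(score(out), out) for out in outputs]
        top_score, top_img = max(scored, key=lambda t: t[0])
        if top_score <= best_score:
            break  # no expert improved the image: end this round
        best, best_score = top_img, top_score
    return best
```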

Population-scale cross-sectional observational study for AI-powered TB screening on one million CXRs.

Munjal P, Mahrooqi AA, Rajan R, Jeremijenko A, Ahmad I, Akhtar MI, Pimentel MAF, Khan S

PubMed · Jul 9 2025
Traditional tuberculosis (TB) screening involves radiologists manually reviewing chest X-rays (CXR), which is time-consuming, error-prone, and limited by workforce shortages. Our AI model, AIRIS-TB (AI Radiology In Screening TB), aims to address these challenges by automating the reporting of all X-rays without any findings. AIRIS-TB was evaluated on over one million CXRs, achieving an AUC of 98.51% and overall false negative rate (FNR) of 1.57%, outperforming radiologists (1.85%) while maintaining a 0% TB-FNR. By selectively deferring only cases with findings to radiologists, the model has the potential to automate up to 80% of routine CXR reporting. Subgroup analysis revealed insignificant performance disparities across age, sex, HIV status, and region of origin, with sputum tests for suspected TB showing a strong correlation with model predictions. This large-scale validation demonstrates AIRIS-TB's safety and efficiency in high-volume TB screening programs, reducing radiologist workload without compromising diagnostic accuracy.
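The reported deferral behavior can be pictured, at its simplest, as a threshold on the model's abnormality score, calibrated so that no confirmed-TB study falls below it. This NumPy sketch is a hypothetical reading of that policy (the margin and variable names are illustrative, not from the paper):

```python
import numpy as np

def choose_threshold(scores: np.ndarray, tb_labels: np.ndarray,
                     margin: float = 1e-6) -> float:
    """Largest threshold that keeps every confirmed-TB study flagged,
    i.e. TB-FNR = 0 on the calibration set."""
    return float(scores[tb_labels.astype(bool)].min() - margin)

def triage(scores: np.ndarray, threshold: float) -> np.ndarray:
    """True = defer to a radiologist; False = auto-report 'no findings'."""
    return scores >= threshold

# Fraction of studies auto-reported without human review
# (the paper reports up to ~80% of routine CXRs):
# automation_rate = 1.0 - triage(scores, threshold).mean()
```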

Cross-Modality Masked Learning for Survival Prediction in ICI Treated NSCLC Patients

Qilong Xing, Zikai Song, Bingxin Gong, Lian Yang, Junqing Yu, Wei Yang

arXiv preprint · Jul 9 2025
Accurate prognosis of non-small cell lung cancer (NSCLC) patients undergoing immunotherapy is essential for personalized treatment planning, enabling informed patient decisions, and improving both treatment outcomes and quality of life. However, the lack of large, relevant datasets and effective multi-modal feature fusion strategies pose significant challenges in this domain. To address these challenges, we present a large-scale dataset and introduce a novel framework for multi-modal feature fusion aimed at enhancing the accuracy of survival prediction. The dataset comprises 3D CT images and corresponding clinical records from NSCLC patients treated with immune checkpoint inhibitors (ICI), along with progression-free survival (PFS) and overall survival (OS) data. We further propose a cross-modality masked learning approach for medical feature fusion, consisting of two distinct branches, each tailored to its respective modality: a Slice-Depth Transformer for extracting 3D features from CT images and a graph-based Transformer for learning node features and relationships among clinical variables in tabular data. The fusion process is guided by a masked modality learning strategy, wherein the model utilizes the intact modality to reconstruct missing components. This mechanism improves the integration of modality-specific features, fostering more effective inter-modality relationships and feature interactions. Our approach demonstrates superior performance in multi-modal integration for NSCLC survival prediction, surpassing existing methods and setting a new benchmark for prognostic models in this context.
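The masked modality learning strategy lends itself to a short sketch: pick one modality at random, mask it, and reconstruct its features from the intact one. The PyTorch below is illustrative only; the decoders, the 0.5 masking probability, and the MSE objective are assumptions rather than the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def masked_modality_loss(img_feat: torch.Tensor, tab_feat: torch.Tensor,
                         img_decoder, tab_decoder) -> torch.Tensor:
    """Randomly mask one modality and reconstruct it from the other,
    encouraging cross-modal feature interactions."""
    if torch.rand(()) < 0.5:
        # CT branch masked: rebuild imaging features from tabular ones
        return F.mse_loss(img_decoder(tab_feat), img_feat.detach())
    # Tabular branch masked: rebuild clinical features from CT ones
    return F.mse_loss(tab_decoder(img_feat), tab_feat.detach())
```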

Machine learning techniques for stroke prediction: A systematic review of algorithms, datasets, and regional gaps.

Soladoye AA, Aderinto N, Popoola MR, Adeyanju IA, Osonuga A, Olawade DB

PubMed · Jul 9 2025
Stroke is a leading cause of mortality and disability worldwide, with approximately 15 million people suffering strokes annually. Machine learning (ML) techniques have emerged as powerful tools for stroke prediction, enabling early identification of risk factors through data-driven approaches. However, the clinical utility and performance characteristics of these approaches require systematic evaluation. This review aims to systematically review and analyze ML techniques used for stroke prediction, synthesize performance metrics across different prediction targets and data sources, evaluate their clinical applicability, and identify research trends with a focus on patient population characteristics and stroke prevalence patterns. A systematic review was conducted following PRISMA guidelines. Five databases (Google Scholar, Lens, PubMed, ResearchGate, and Semantic Scholar) were searched for open-access publications on ML-based stroke prediction published between January 2013 and December 2024. Data were extracted on publication characteristics, datasets, ML methodologies, evaluation metrics, prediction targets (stroke occurrence vs. outcomes), data sources (EHR, imaging, biosignals), patient demographics, and stroke prevalence. Descriptive synthesis was performed because substantial heterogeneity precluded quantitative meta-analysis. Fifty-eight studies were included, with peak publication output in 2021 (21 articles). Studies targeted three main prediction objectives: stroke occurrence prediction (n = 52, 62.7 %), stroke outcome prediction (n = 19, 22.9 %), and stroke type classification (n = 12, 14.4 %); counts exceed 58 because some studies addressed multiple objectives. Data sources included electronic health records (n = 48, 57.8 %), medical imaging (n = 21, 25.3 %), and biosignals (n = 14, 16.9 %). Systematic analysis revealed that ensemble methods consistently achieved the highest accuracies for stroke occurrence prediction (range: 90.4-97.8 %), while deep learning excelled in imaging-based applications. African populations, despite having the highest stroke mortality rates globally, were represented in fewer than 4 studies. ML techniques show promising results for stroke prediction. However, significant gaps exist in the representation of high-risk populations and in real-world clinical validation. Future research should prioritize population-specific model development and clinical implementation frameworks.

Noise-inspired diffusion model for generalizable low-dose CT reconstruction.

Gao Q, Chen Z, Zeng D, Zhang J, Ma J, Shan H

PubMed · Jul 8 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization performance and robustness, either by collecting diverse CT data for re-training or a few test samples for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because CT image noise deviates from a Gaussian distribution and because the guidance of noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is available at https://github.com/qgao21/NEED.
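The shifted Poisson model the paper builds on is a standard approximation for pre-log CT measurements: detected counts are Poisson with mean I0*exp(-p) for line integral p, and adding the electronic-noise variance to the measured counts keeps the result approximately Poisson. A NumPy sketch of simulating such projections (the I0 and sigma_e values are illustrative):

```python
import numpy as np

def shifted_poisson_projection(line_integrals: np.ndarray, I0: float = 1e5,
                               sigma_e: float = 10.0, rng=None) -> np.ndarray:
    """Simulate pre-log low-dose projections under the shifted Poisson
    model: Poisson photon counts plus Gaussian electronic noise, then a
    +sigma_e^2 shift so the sum stays approximately Poisson."""
    rng = np.random.default_rng() if rng is None else rng
    mean_counts = I0 * np.exp(-line_integrals)        # Beer-Lambert attenuation
    counts = rng.poisson(mean_counts).astype(np.float64)
    counts += rng.normal(0.0, sigma_e, counts.shape)  # electronic noise
    return np.maximum(counts + sigma_e ** 2, 1e-6)    # shifted Poisson data
```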

Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation

Szymon Płotka, Maciej Chrabaszcz, Gizem Mert, Ewa Szczurek, Arkadiusz Sitek

arXiv preprint · Jul 8 2025
In recent years, artificial intelligence has significantly advanced medical image segmentation. However, challenges remain, including efficient 3D medical image processing across diverse modalities and handling data variability. In this work, we introduce Hierarchical Soft Mixture-of-Experts (HoME), a two-level token-routing layer for efficient long-context modeling, specifically designed for 3D medical image segmentation. Built on the Mamba state-space model (SSM) backbone, HoME enhances sequential modeling through sparse, adaptive expert routing. The first stage employs a Soft Mixture-of-Experts (SMoE) layer to partition input sequences into local groups, routing tokens to specialized per-group experts for localized feature extraction. The second stage aggregates these outputs via a global SMoE layer, enabling cross-group information fusion and global context refinement. This hierarchical design, combining local expert routing with global expert refinement, improves generalizability and segmentation performance, surpassing state-of-the-art results across datasets spanning the three most commonly used 3D medical imaging modalities and varying data quality.
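One routing level of this design can be sketched as a standard soft mixture-of-experts layer: tokens are softly dispatched to expert slots and softly combined back, with no hard top-k routing. The PyTorch below follows the generic Soft-MoE formulation; the expert width, slot count, and feed-forward experts are illustrative, not HoME's exact configuration.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """One soft mixture-of-experts routing level: fully differentiable
    token-to-slot dispatch and slot-to-token combination."""

    def __init__(self, dim: int, num_experts: int = 4, slots_per_expert: int = 1):
        super().__init__()
        self.slots_per_expert = slots_per_expert
        num_slots = num_experts * slots_per_expert
        self.phi = nn.Parameter(torch.randn(dim, num_slots) * dim ** -0.5)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))
            for _ in range(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        logits = x @ self.phi                # (batch, tokens, slots)
        dispatch = logits.softmax(dim=1)     # each slot averages over tokens
        combine = logits.softmax(dim=2)      # each token averages over slots
        slots = torch.einsum('bnd,bns->bsd', x, dispatch)
        outs = [expert(slots[:, i * self.slots_per_expert:
                              (i + 1) * self.slots_per_expert])
                for i, expert in enumerate(self.experts)]
        return torch.einsum('bsd,bns->bnd', torch.cat(outs, dim=1), combine)
```

In the hierarchical arrangement described above, one such layer would route within local groups and a second would aggregate across groups.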

Capsule-ConvKAN: A Hybrid Neural Approach to Medical Image Classification

Laura Pituková, Peter Sinčák, László József Kovács

arXiv preprint · Jul 8 2025
This study conducts a comprehensive comparison of four neural network architectures: Convolutional Neural Network, Capsule Network, Convolutional Kolmogorov-Arnold Network, and the newly proposed Capsule-Convolutional Kolmogorov-Arnold Network. The proposed Capsule-ConvKAN architecture combines the dynamic routing and spatial hierarchy capabilities of Capsule Network with the flexible and interpretable function approximation of Convolutional Kolmogorov-Arnold Networks. This novel hybrid model was developed to improve feature representation and classification accuracy, particularly in challenging real-world biomedical image data. The architectures were evaluated on a histopathological image dataset, where Capsule-ConvKAN achieved the highest classification performance with an accuracy of 91.21%. The results demonstrate the potential of the newly introduced Capsule-ConvKAN in capturing spatial patterns, managing complex features, and addressing the limitations of traditional convolutional models in medical image classification.

Deep supervised transformer-based noise-aware network for low-dose PET denoising across varying count levels.

Azimi MS, Felfelian V, Zeraatkar N, Dadgar H, Arabi H, Zaidi H

PubMed · Jul 8 2025
Reducing radiation dose from PET imaging is essential to minimize cancer risks; however, it often leads to increased noise and degraded image quality, compromising diagnostic reliability. Recent advances in deep learning have shown promising results in addressing these limitations through effective denoising. However, existing networks trained on specific noise levels often fail to generalize across diverse acquisition conditions, and training multiple models for different noise levels is impractical due to data and computational constraints. This study aimed to develop a supervised Swin Transformer-based unified noise-aware (ST-UNN) network that handles diverse noise levels and reconstructs high-quality images in low-dose PET imaging. The ST-UNN incorporates multiple sub-networks, each designed to address a specific noise level ranging from 1 % to 10 %, and an adaptive weighting mechanism dynamically integrates the outputs of these sub-networks to achieve effective denoising. The model was trained and evaluated using a PET/CT dataset encompassing the entire head and malignant lesions in the head and neck region. Performance was assessed using a combination of structural and statistical metrics, including the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), mean Standardized Uptake Value (SUVmean) bias, SUVmax bias, and Root Mean Square Error (RMSE), ensuring reliable results for both global and localized regions within PET images. The ST-UNN consistently outperformed conventional networks, particularly in ultra-low-dose scenarios. At the 1 % count level, it achieved a PSNR of 34.77, an RMSE of 0.05, and an SSIM of 0.97, notably surpassing the baseline networks; it also achieved the lowest SUVmean bias (0.08) and lesion RMSE (0.12) at this level. Across all count levels, ST-UNN maintained high performance and low error, demonstrating strong generalization and diagnostic integrity. ST-UNN offers a scalable, transformer-based solution for low-dose PET imaging: by dynamically integrating sub-networks, it effectively addresses noise variability and provides superior image quality, advancing the capabilities of low-dose and dynamic PET imaging.
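The adaptive weighting mechanism can be pictured as a small gating network that inspects the noisy input and softly blends the outputs of the count-level-specific sub-networks. The PyTorch sketch below is a minimal reading of that idea; the gate architecture and 3D convolutions are assumptions, not the published ST-UNN design.

```python
import torch
import torch.nn as nn

class NoiseAwareEnsemble(nn.Module):
    """Blend count-level-specific denoisers with input-dependent soft
    weights predicted by a lightweight gating network."""

    def __init__(self, subnets: nn.ModuleList, in_ch: int = 1):
        super().__init__()
        self.subnets = subnets  # one denoiser per count level (1%..10%)
        self.gate = nn.Sequential(
            nn.Conv3d(in_ch, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, len(subnets)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x).softmax(dim=1)                  # (batch, K)
        outs = torch.stack([net(x) for net in self.subnets], dim=1)
        w = w.view(w.size(0), -1, *([1] * (outs.dim() - 2)))
        return (w * outs).sum(dim=1)  # noise-adaptive weighted output
```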