Advancements in Herpes Zoster Diagnosis, Treatment, and Management: Systematic Review of Artificial Intelligence Applications.

Wu D, Liu N, Ma R, Wu P

PubMed paper · Jun 30, 2025
The application of artificial intelligence (AI) in medicine has garnered significant attention in recent years, offering new possibilities for improving patient care across various domains. For herpes zoster, a viral infection caused by the reactivation of the varicella-zoster virus, AI technologies have shown remarkable potential in enhancing disease diagnosis, treatment, and management. This study aims to investigate the current research status in the use of AI for herpes zoster, offering a comprehensive synthesis of existing advancements. A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Three databases (Web of Science Core Collection, PubMed, and IEEE) were searched on November 17, 2023, to identify relevant studies on AI applications in herpes zoster research. Inclusion criteria were as follows: (1) research articles, (2) published in English, (3) involving actual AI applications, and (4) focusing on herpes zoster. Exclusion criteria comprised nonresearch articles, non-English papers, and studies only mentioning AI without application. Two independent clinicians screened the studies, with a third senior clinician resolving disagreements. In total, 26 articles were included. Data were extracted on AI task types; algorithms; data sources; data types; and clinical applications in diagnosis, treatment, and management. Trend analysis revealed increasing annual interest in AI applications for herpes zoster. Hospital-derived data were the primary source (15/26, 57.7%), followed by public databases (6/26, 23.1%) and internet data (5/26, 19.2%). Medical images (9/26, 34.6%) and electronic medical records (7/26, 26.9%) were the most commonly used data types. Classification tasks (85.2%) dominated AI applications, with neural networks, particularly multilayer perceptrons and convolutional neural networks, being the most frequently used algorithms. AI applications were analyzed across three domains: (1) diagnosis, where mobile deep neural networks, convolutional neural network ensemble models, and mixed-scale attention-based models have improved diagnostic accuracy and efficiency; (2) treatment, where machine learning models, such as deep autoencoders combined with functional magnetic resonance imaging, electroencephalography, and clinical data, have enhanced treatment outcome predictions; and (3) management, where AI has facilitated case identification, epidemiological research, health care burden assessment, and risk factor exploration for postherpetic neuralgia and other complications. Overall, this study provides a comprehensive overview of AI applications in herpes zoster from clinical, data, and algorithmic perspectives, offering valuable insights for future research in this rapidly evolving field. AI has significantly advanced herpes zoster research by enhancing diagnostic accuracy, predicting treatment outcomes, and optimizing disease management. However, several limitations exist, including potential omissions from excluding databases such as Embase and Scopus, language bias due to the inclusion of only English publications, and the risk of subjective bias in study selection. Broader studies and continuous updates are needed to fully capture the scope of AI applications in herpes zoster.

Multimodal, Multi-Disease Medical Imaging Foundation Model (MerMED-FM)

Yang Zhou, Chrystie Wan Ning Quek, Jun Zhou, Yan Wang, Yang Bai, Yuhe Ke, Jie Yao, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arXiv preprint · Jun 30, 2025
Current artificial intelligence models for medical imaging are predominantly single modality and single disease. Attempts to create multimodal and multi-disease models have resulted in inconsistent clinical accuracy. Furthermore, training these models typically requires large, labour-intensive, well-labelled datasets. We developed MerMED-FM, a state-of-the-art multimodal, multi-specialty foundation model trained using self-supervised learning and a memory module. MerMED-FM was trained on 3.3 million medical images from over ten specialties and seven modalities, including computed tomography (CT), chest X-rays (CXR), ultrasound (US), pathology patches, color fundus photography (CFP), optical coherence tomography (OCT), and dermatology images. MerMED-FM was evaluated across multiple diseases and compared against existing foundation models. Strong performance was achieved across all modalities, with AUROCs of 0.988 (OCT), 0.982 (pathology), 0.951 (US), 0.943 (CT), 0.931 (skin), 0.894 (CFP), and 0.858 (CXR). MerMED-FM has the potential to be a highly adaptable, versatile, cross-specialty foundation model that enables robust medical imaging interpretation across diverse medical disciplines.
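
Since the headline result is one AUROC per modality, here is a minimal sketch of how such a per-modality evaluation could be computed with scikit-learn; the data layout, labels, and scores are illustrative stand-ins, not the paper's evaluation code.

```python
# Minimal sketch: per-modality AUROC evaluation of a multimodal classifier.
# Data layout and values are hypothetical, not from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_modality_auroc(labels, scores, modalities):
    """Compute one AUROC per imaging modality (binary labels assumed)."""
    results = {}
    for m in sorted(set(modalities)):
        idx = [i for i, mod in enumerate(modalities) if mod == m]
        results[m] = roc_auc_score(np.asarray(labels)[idx],
                                   np.asarray(scores)[idx])
    return results

# Toy predictions for two of the seven modalities.
labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.2, 0.8, 0.4, 0.7, 0.1]
modalities = ["OCT", "OCT", "OCT", "CXR", "CXR", "CXR"]
print(per_modality_auroc(labels, scores, modalities))
```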

VAP-Diffusion: Enriching Descriptions with MLLMs for Enhanced Medical Image Generation

Peng Huang, Junhu Fu, Bowen Guo, Zeju Li, Yuanyuan Wang, Yi Guo

arXiv preprint · Jun 30, 2025
As the appearance of medical images is influenced by multiple underlying factors, generative models require rich attribute information beyond labels to produce realistic and diverse images. For instance, generating an image of a skin lesion with specific patterns demands descriptions that go beyond diagnosis, such as shape, size, texture, and color. However, such detailed descriptions are not always accessible. To address this, we explore a framework, termed Visual Attribute Prompts (VAP)-Diffusion, that leverages external knowledge from pre-trained Multi-modal Large Language Models (MLLMs) to improve the quality and diversity of medical image generation. First, to derive descriptions from MLLMs without hallucination, we design a series of chain-of-thought prompts for common medical imaging tasks, including dermatologic, colorectal, and chest X-ray images. Generated descriptions are utilized during training and stored by category. During testing, descriptions are randomly retrieved from the corresponding category for inference. Moreover, to make the generator robust to unseen combinations of descriptions at test time, we propose a Prototype Condition Mechanism that restricts test embeddings to be similar to those from training. Experiments on three common types of medical imaging across four datasets verify the effectiveness of VAP-Diffusion.
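
A minimal sketch of the per-category description store the abstract describes, where MLLM-generated texts are cached by class during training and retrieved at random at test time; the class name, example descriptions, and API are hypothetical.

```python
# Minimal sketch of a per-category description bank. The MLLM call that
# would populate it is stubbed out with hardcoded example texts.
import random
from collections import defaultdict

class DescriptionBank:
    def __init__(self):
        self.bank = defaultdict(list)  # category -> list of descriptions

    def add(self, category, description):
        self.bank[category].append(description)

    def sample(self, category, rng=random):
        """Randomly retrieve a stored description for the given category."""
        return rng.choice(self.bank[category])

bank = DescriptionBank()
# In VAP-Diffusion these would come from chain-of-thought MLLM prompts.
bank.add("melanoma", "asymmetric lesion, irregular border, variegated color")
bank.add("melanoma", "large diameter, blue-white veil, atypical network")
print(bank.sample("melanoma"))  # condition text for the diffusion model
```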

MedSAM-CA: A CNN-Augmented ViT with Attention-Enhanced Multi-Scale Fusion for Medical Image Segmentation

Peiting Tian, Xi Chen, Haixia Bi, Fan Li

arXiv preprint · Jun 30, 2025
Medical image segmentation plays a crucial role in clinical diagnosis and treatment planning, where accurate boundary delineation is essential for precise lesion localization, organ identification, and quantitative assessment. In recent years, deep learning-based methods have significantly advanced segmentation accuracy. However, two major challenges remain. First, the performance of these methods heavily relies on large-scale annotated datasets, which are often difficult to obtain in medical scenarios due to privacy concerns and high annotation costs. Second, clinically challenging scenarios, such as low contrast in certain imaging modalities and blurry lesion boundaries caused by malignancy, still pose obstacles to precise segmentation. To address these challenges, we propose MedSAM-CA, an architecture-level fine-tuning approach that mitigates reliance on extensive manual annotations by adapting the pretrained foundation model, Medical Segment Anything (MedSAM). MedSAM-CA introduces two key components: the Convolutional Attention-Enhanced Boundary Refinement Network (CBR-Net) and the Attention-Enhanced Feature Fusion Block (Atte-FFB). CBR-Net operates in parallel with the MedSAM encoder to recover boundary information potentially overlooked by long-range attention mechanisms, leveraging hierarchical convolutional processing. Atte-FFB, embedded in the MedSAM decoder, fuses multi-level fine-grained features from skip connections in CBR-Net with global representations upsampled within the decoder to enhance boundary delineation accuracy. Experiments on publicly available datasets covering dermoscopy, CT, and MRI modalities validate the effectiveness of MedSAM-CA. On the dermoscopy dataset, MedSAM-CA achieves 94.43% Dice with only 2% of the full training data, reaching 97.25% of full-data training performance and demonstrating strong effectiveness in low-resource clinical settings.
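
A minimal PyTorch sketch of the general pattern Atte-FFB follows: fusing a fine-grained skip-connection feature with an upsampled global decoder representation under a learned attention gate. Channel sizes and the gating design are assumptions, not the paper's exact block.

```python
# Minimal sketch of attention-gated fusion of a skip feature with an
# upsampled global feature. Architecture details are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.out = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, skip_feat, global_feat):
        # Upsample the coarse global feature to the skip resolution.
        global_up = F.interpolate(global_feat, size=skip_feat.shape[-2:],
                                  mode="bilinear", align_corners=False)
        # Gate the fine-grained feature by joint attention, then fuse.
        attn = self.gate(torch.cat([skip_feat, global_up], dim=1))
        fused = torch.cat([attn * skip_feat, global_up], dim=1)
        return self.out(fused)

x_skip = torch.randn(1, 64, 128, 128)  # fine-grained boundary feature
x_glob = torch.randn(1, 64, 32, 32)    # coarse decoder representation
print(FusionBlock(64)(x_skip, x_glob).shape)  # torch.Size([1, 64, 128, 128])
```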

Exposing and Mitigating Calibration Biases and Demographic Unfairness in MLLM Few-Shot In-Context Learning for Medical Image Classification

Xing Shen, Justin Szeto, Mingyang Li, Hengguan Huang, Tal Arbel

arXiv preprint · Jun 29, 2025
Multimodal large language models (MLLMs) have enormous potential to perform few-shot in-context learning in the context of medical image analysis. However, safe deployment of these models into real-world clinical practice requires an in-depth analysis of the accuracies of their predictions, and their associated calibration errors, particularly across different demographic subgroups. In this work, we present the first investigation into the calibration biases and demographic unfairness of MLLMs' predictions and confidence scores in few-shot in-context learning for medical image classification. We introduce CALIN, an inference-time calibration method designed to mitigate the associated biases. Specifically, CALIN estimates the amount of calibration needed, represented by calibration matrices, using a bi-level procedure: progressing from the population level to the subgroup level prior to inference. It then applies this estimation to calibrate the predicted confidence scores during inference. Experimental results on three medical imaging datasets (PAPILA for fundus image classification, HAM10000 for skin cancer classification, and MIMIC-CXR for chest X-ray classification) demonstrate CALIN's effectiveness at ensuring fair confidence calibration in its predictions, while improving overall prediction accuracy and exhibiting a minimal fairness-utility trade-off.
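
A minimal sketch of the core inference-time operation: applying a calibration matrix to predicted confidence vectors and renormalizing. The matrix values here are hypothetical placeholders; CALIN estimates them with the bi-level population-to-subgroup procedure, which is not reproduced here.

```python
# Minimal sketch of matrix-based confidence calibration at inference time.
import numpy as np

def calibrate(confidences, calib_matrix):
    """Rescale confidence vectors with a calibration matrix, renormalize."""
    adjusted = confidences @ calib_matrix
    return adjusted / adjusted.sum(axis=1, keepdims=True)

probs = np.array([[0.7, 0.3],
                  [0.4, 0.6]])   # raw MLLM class-confidence scores
calib = np.diag([0.8, 1.2])      # hypothetical calibration matrix
print(calibrate(probs, calib))   # calibrated, rows still sum to 1
```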

Frequency-enhanced Multi-granularity Context Network for Efficient Vertebrae Segmentation

Jian Shi, Tianqi You, Pingping Zhang, Hongli Zhang, Rui Xu, Haojie Li

arXiv preprint · Jun 29, 2025
Automated and accurate segmentation of individual vertebrae in 3D CT and MRI images is essential for various clinical applications. Due to the limitations of current imaging techniques and the complexity of spinal structures, existing methods still struggle with reducing the impact of image blurring and distinguishing similar vertebrae. To alleviate these issues, we introduce a Frequency-enhanced Multi-granularity Context Network (FMC-Net) to improve the accuracy of vertebrae segmentation. Specifically, we first apply a wavelet transform for lossless downsampling to reduce feature distortion in blurred images. The decomposed high- and low-frequency components are then processed separately. For the high-frequency components, we apply a High-frequency Feature Refinement (HFR) module to amplify the prominence of key features and filter out noise, restoring fine-grained details in blurred images. For the low-frequency components, we use a Multi-granularity State Space Model (MG-SSM) to aggregate feature representations with different receptive fields, extracting spatially varying contexts while capturing long-range dependencies with linear complexity. The use of multi-granularity contexts is essential for distinguishing similar vertebrae and improving segmentation accuracy. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches on both CT and MRI vertebrae segmentation datasets. The source code is publicly available at https://github.com/anaanaa/FMCNet.
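
A minimal sketch of the lossless wavelet downsampling step, using PyWavelets on a single 2D slice (slice-wise processing is an assumption here; the method targets 3D volumes): one DWT yields a half-resolution low-frequency band plus three high-frequency bands, and the inverse transform reconstructs the input exactly.

```python
# Minimal sketch: a 2D DWT splits an image into low- and high-frequency
# sub-bands at half resolution; perfect reconstruction shows the
# downsampling discards no information.
import numpy as np
import pywt

image = np.random.rand(256, 256).astype(np.float32)  # one CT/MRI slice

# cA is the low-frequency band; (cH, cV, cD) are high-frequency details.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
print(cA.shape)  # (128, 128): half resolution

# In FMC-Net the branches are processed separately (HFR on the details,
# MG-SSM on cA); the inverse transform confirms losslessness.
recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(image, recon, atol=1e-6))  # True
```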

Cognition-Eye-Brain Connection in Alzheimer's Disease Spectrum Revealed by Multimodal Imaging.

Shi Y, Shen T, Yan S, Liang J, Wei T, Huang Y, Gao R, Zheng N, Ci R, Zhang M, Tang X, Qin Y, Zhu W

PubMed paper · Jun 29, 2025
Background: The connection between cognition, eye, and brain remains inconclusive in Alzheimer's disease (AD) spectrum disorders. Purpose: To explore the relationship between cognitive function, retinal biometrics, and brain alterations in the AD spectrum. Study Type: Prospective. Population: Healthy control (HC; n = 16), subjective cognitive decline (SCD; n = 35), mild cognitive impairment (MCI; n = 18), and AD (n = 7) groups. Field Strength/Sequence: 3-T; 3D T1-weighted Brain Volume (BRAVO) and resting-state functional MRI (fMRI). Assessment: In all subgroups, cortical thickness was measured from BRAVO and segmented using the Desikan-Killiany-Tourville (DKT) atlas. The fractional amplitude of low-frequency fluctuations (FALFF) and regional homogeneity (ReHo) were measured from fMRI using voxel-based analysis. The eye was imaged by optical coherence tomography angiography (OCTA), with the deep learning model FARGO segmenting the foveal avascular zone (FAZ) and retinal vessels. FAZ area and perimeter, retinal blood vessel curvature (RBVC), and thicknesses of the retinal nerve fiber layer (RNFL) and ganglion cell layer-inner plexiform layer (GCL-IPL) were calculated. Cognition-eye-brain associations were compared across the HC group and each AD spectrum stage using multivariable linear regression. Statistical Tests: Multivariable linear regression analysis; statistical significance was set at p < 0.05 with FWE correction for fMRI and at p < 1/62 (Bonferroni-corrected) for structural analyses. Results: Reductions of FALFF in temporal regions, especially the left superior temporal gyrus (STG), in MCI patients were significantly linked to decreased RNFL thickness and increased FAZ area. In AD patients, reduced ReHo values in occipital regions, especially the right middle occipital gyrus (MOG), were significantly associated with an enlarged FAZ area. The SCD group showed widespread cortical thickening significantly associated with all aforementioned retinal biometrics, with notable thickening in the right fusiform gyrus (FG) and right parahippocampal gyrus (PHG) correlating with reduced GCL-IPL thickness. Data Conclusion: Brain function and structure may be associated with cognition and retinal biometrics across the AD spectrum. Specifically, cognition-eye-brain connections may be present in SCD. Evidence Level: 2. Technical Efficacy: Stage 3.
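
A minimal sketch of the kind of multivariable linear regression reported above, using statsmodels; all variable names, covariates, and data are hypothetical stand-ins, not the study's.

```python
# Minimal sketch: regress a brain functional measure on retinal biometrics
# with a covariate. Columns and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "falff_stg": rng.normal(size=50),       # brain functional measure
    "rnfl_thickness": rng.normal(size=50),  # retinal predictor
    "faz_area": rng.normal(size=50),        # retinal predictor
    "age": rng.normal(60, 8, size=50),      # covariate
})
X = sm.add_constant(df[["rnfl_thickness", "faz_area", "age"]])
model = sm.OLS(df["falff_stg"], X).fit()
print(model.summary())  # coefficients, p-values per predictor
```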

Inpainting is All You Need: A Diffusion-based Augmentation Method for Semi-supervised Medical Image Segmentation

Xinrong Hu, Yiyu Shi

arXiv preprint · Jun 28, 2025
Collecting pixel-level labels for medical datasets can be a laborious and expensive process, and enhancing segmentation performance with scarce labeled data is a crucial challenge. This work introduces AugPaint, a data augmentation framework that uses inpainting to generate image-label pairs from limited labeled data. AugPaint leverages latent diffusion models, known for their ability to generate high-quality in-domain images with low overhead, and adapts the sampling process to the inpainting task without any retraining. Specifically, given a pair of image and label mask, we crop the area labeled as foreground and condition on it during the reverse denoising process at every noise level. The masked background area is gradually filled in, and every generated image is paired with the original label mask. This approach guarantees an accurate match between synthetic images and label masks, setting it apart from existing dataset generation methods. The generated images serve as valuable supervision for training downstream segmentation models, effectively addressing the challenge of limited annotations. We conducted extensive evaluations of our data augmentation method on four public medical image segmentation datasets, covering CT, MRI, and skin imaging. Results across all datasets demonstrate that AugPaint outperforms state-of-the-art label-efficient methodologies, significantly improving segmentation performance.
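
A minimal sketch of the mask-conditioned reverse-denoising idea: at each noise level, pixels under the label mask are reset to a noised copy of the known foreground, so only the background is synthesized. The denoiser stub and noise schedule are placeholders, not the paper's latent diffusion model.

```python
# Minimal sketch of one mask-conditioned reverse-denoising step: the
# foreground stays anchored to the real image, the background is generated.
import torch

def inpaint_step(x_t, known, mask, denoise_fn, noise_level):
    """One reverse step that keeps masked (foreground) pixels fixed."""
    x_denoised = denoise_fn(x_t, noise_level)            # model's proposal
    noised_known = known + noise_level * torch.randn_like(known)
    # mask==1: real (noised) foreground; mask==0: generated background.
    return mask * noised_known + (1 - mask) * x_denoised

# Toy usage with an identity-like "denoiser" stub and a short schedule.
img = torch.rand(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
x = torch.randn_like(img)
for t in [0.8, 0.4, 0.1]:                                # illustrative
    x = inpaint_step(x, img, mask, lambda z, s: z * (1 - s), t)
print(x.shape)  # the result pairs with the original label mask
```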

Emerging Artificial Intelligence Innovations in Rheumatoid Arthritis and Challenges to Clinical Adoption.

Gilvaz VJ, Sudheer A, Reginato AM

PubMed paper · Jun 28, 2025
This review was written to inform practicing clinical rheumatologists about recent advances in artificial intelligence (AI)-based research in rheumatoid arthritis (RA), using accessible and practical language. We highlight developments from 2023 to early 2025 across diagnostic imaging, treatment prediction, drug discovery, and patient-facing tools. Given the increasing clinical interest in AI and its potential to augment care delivery, this article aims to bridge the gap between technical innovation and real-world rheumatology practice. Several AI models have demonstrated high accuracy in early RA detection using imaging modalities such as thermal imaging and nuclear scans. Predictive models for treatment response have leveraged routinely collected electronic health record (EHR) data, moving closer to practical application in clinical workflows. Patient-facing tools like mobile symptom checkers and large language models (LLMs) such as ChatGPT show promise in enhancing education and engagement, although accuracy and safety remain variable. AI has also shown utility in identifying novel biomarkers and accelerating drug discovery. Despite these advances, as of early 2025, no AI-based tools have received FDA approval for use in rheumatology, in contrast to other specialties. Artificial intelligence holds tremendous promise to enhance clinical care in RA, from early diagnosis to personalized therapy. However, clinical adoption remains limited due to regulatory, technical, and implementation challenges. A streamlined regulatory framework and closer collaboration between clinicians, researchers, and industry partners are urgently needed. With thoughtful integration, AI can serve as a valuable adjunct in addressing clinical complexity and workforce shortages in rheumatology.