Page 9 of 71706 results

Imaging in clinical trials of rheumatoid arthritis: where are we in 2025?

Østergaard M, Rolland MAJ, Terslev L

PubMed · Aug 5 2025
Accurate detection and assessment of inflammatory activity are crucial not only for diagnosing patients with rheumatoid arthritis (RA) but also for effective monitoring of treatment effect. Ultrasound and magnetic resonance imaging (MRI) have both been shown to be truthful, reproducible, and sensitive to change for inflammation in joints and tendon sheaths, and both have validated scoring systems, which altogether allow them to be used as outcome measurement instruments in clinical trials. Furthermore, MRI also allows sensitive and discriminative assessment of structural damage progression in RA, likewise with validated outcome measures. Other relevant imaging techniques, including the use of artificial intelligence, pose interesting possibilities for future clinical trials and are briefly addressed in this review article.

MedCAL-Bench: A Comprehensive Benchmark on Cold-Start Active Learning with Foundation Models for Medical Image Analysis

Ning Zhu, Xiaochuan Ma, Shaoting Zhang, Guotai Wang

arXiv preprint · Aug 5 2025
Cold-Start Active Learning (CSAL) aims to select informative samples for annotation without prior knowledge, which is important for improving annotation efficiency and model performance under a limited annotation budget in medical image analysis. Most existing CSAL methods rely on Self-Supervised Learning (SSL) on the target dataset for feature extraction, which is inefficient and limited by insufficient feature representation. Recently, pre-trained Foundation Models (FMs) have shown powerful feature extraction ability with a potential for better CSAL. However, this paradigm has been rarely investigated, with a lack of benchmarks for comparison of FMs in CSAL tasks. To this end, we propose MedCAL-Bench, the first systematic FM-based CSAL benchmark for medical image analysis. We evaluate 14 FMs and 7 CSAL strategies across 7 datasets under different annotation budgets, covering classification and segmentation tasks from diverse medical modalities. It is also the first CSAL benchmark that evaluates both the feature extraction and sample selection stages. Our experimental results reveal that: 1) Most FMs are effective feature extractors for CSAL, with the DINO family performing best in segmentation; 2) The performance differences between these FMs are large in segmentation tasks but small for classification; 3) Different sample selection strategies should be considered for CSAL on different datasets, with Active Learning by Processing Surprisal (ALPS) performing best in segmentation and RepDiv leading for classification. The code is available at https://github.com/HiLab-git/MedCAL-Bench.
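The two-stage pipeline the benchmark evaluates (frozen FM features, then budget-constrained selection) can be sketched with a simple cluster-then-pick heuristic. The function below is our own NumPy illustration of that idea, not code from MedCAL-Bench, and the parameter names are our assumptions:

```python
import numpy as np

def select_cold_start(features, budget, n_iter=20, seed=0):
    """Pick `budget` diverse samples: run k-means on frozen FM features,
    then take the sample nearest each cluster centre (a common
    cluster-based CSAL selection heuristic)."""
    rng = np.random.default_rng(seed)
    # L2-normalise so dot products act as cosine similarity
    X = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    centres = X[rng.choice(len(X), budget, replace=False)]
    for _ in range(n_iter):
        assign = np.argmax(X @ centres.T, axis=1)   # cosine assignment
        for k in range(budget):
            members = X[assign == k]
            if len(members):
                c = members.mean(axis=0)
                centres[k] = c / (np.linalg.norm(c) + 1e-8)
    # nearest real sample to each centre becomes an annotation candidate
    return sorted({int(np.argmax(X @ c)) for c in centres})

features = np.random.default_rng(1).normal(size=(100, 32))
picked = select_cold_start(features, budget=8)
```

In a real pipeline, `features` would come from one of the 14 benchmarked foundation models rather than random noise.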

Controllable Mask Diffusion Model for medical annotation synthesis with semantic information extraction.

Heo C, Jung J

PubMed · Aug 5 2025
Medical segmentation, a prominent task in medical image analysis utilizing artificial intelligence, plays a crucial role in computer-aided diagnosis and depends heavily on the quality of the training data. However, the availability of sufficient data is constrained by strict privacy regulations associated with medical data. To mitigate this issue, research on data augmentation has gained significant attention. Medical segmentation tasks require paired datasets consisting of medical images and annotation images, also known as mask images, which represent lesion areas or radiological information within the medical images. Consequently, it is essential to apply data augmentation to both image types. This study proposes a Controllable Mask Diffusion Model, a novel approach capable of controlling and generating new masks. This model leverages the binary structure of the mask to extract semantic information, namely, the mask's size, location, and count, which is then applied as multi-conditional input to a diffusion model via a regressor. Through the regressor, newly generated masks conform to the input semantic information, thereby enabling input-driven controllable generation. Additionally, a technique that analyzes correlation within semantic information was devised for large-scale data synthesis. The generative capacity of the proposed model was evaluated against real datasets, and the model's ability to control and generate new masks based on previously unseen semantic information was confirmed. Furthermore, the practical applicability of the model was demonstrated by augmenting the data with the generated data, applying it to segmentation tasks, and comparing the performance with and without augmentation. Additionally, experiments were conducted on single-label and multi-label masks, yielding superior results for both types. This demonstrates the potential applicability of this study to various areas within the medical field.
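The semantic conditions described here (per-lesion size, location, and count) can be read off a binary mask with a connected-components pass. A minimal sketch, assuming 2D single-label masks and 4-connectivity; this is our illustration, not the paper's extraction code:

```python
import numpy as np
from collections import deque

def mask_semantics(mask):
    """Extract per-lesion size, centroid location, and lesion count
    from a binary mask via BFS flood fill over 4-connected pixels."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    lesions = []
    h, w = mask.shape
    for y0, x0 in zip(*np.nonzero(mask)):
        if seen[y0, x0]:
            continue
        queue, pix = deque([(y0, x0)]), []
        seen[y0, x0] = True
        while queue:
            y, x = queue.popleft()
            pix.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        ys, xs = zip(*pix)
        lesions.append({"size": len(pix),
                        "location": (float(np.mean(ys)), float(np.mean(xs)))})
    return {"count": len(lesions), "lesions": lesions}

demo = np.zeros((6, 6), dtype=int)
demo[0:2, 0:2] = 1   # one 4-pixel lesion
demo[4, 4] = 1       # one 1-pixel lesion
info = mask_semantics(demo)
```

Vectors like these would then serve as the multi-conditional input fed to the diffusion model via the regressor.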

Integration of Spatiotemporal Dynamics and Structural Connectivity for Automated Epileptogenic Zone Localization in Temporal Lobe Epilepsy.

Xiao L, Zheng Q, Li S, Wei Y, Si W, Pan Y

PubMed · Aug 5 2025
Accurate localization of the epileptogenic zone (EZ) is essential for surgical success in temporal lobe epilepsy. While stereoelectroencephalography (SEEG) and structural magnetic resonance imaging (MRI) provide complementary insights, existing unimodal methods fail to fully capture epileptogenic brain activity, and multimodal fusion remains challenging due to data complexity and surgeon-dependent interpretations. To address these issues, we propose a novel multimodal framework that improves EZ localization by integrating SEEG-derived electrophysiology with structural connectivity in temporal lobe epilepsy. By retrospectively analyzing SEEG, post-implant computed tomography (CT), and MRI (T1 and diffusion tensor imaging (DTI)) data from 15 patients, we reconstructed SEEG electrode positions and obtained fused SEEG and structural connectivity features. We then proposed a spatiotemporal co-attention deep neural network (ST-CANet) to identify the fusion features, categorizing electrodes into the seizure onset zone (SOZ), propagation zone (PZ), and non-involved zone (NIZ). Anatomical EZ boundaries were delineated by fusing the electrode position and classification information on a brain atlas. The proposed method was evaluated on the identification and localization of the three epilepsy-related zones. The experimental results demonstrate that our method achieves 98.08% average accuracy, outperforms other identification methods, and improves localization with Dice similarity coefficients (DSC) of 95.65% (SOZ), 92.13% (PZ), and 99.61% (NIZ), aligning with clinically validated surgical resection areas. This multimodal fusion strategy based on electrophysiological and structural connectivity information promises to assist neurosurgeons in accurately localizing the EZ and may find broader applications in preoperative planning for epilepsy surgeries.
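The DSC figures quoted in the abstract follow the standard overlap definition, 2|A∩B| / (|A|+|B|), which is easy to state in code:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks.
    Returns 1.0 when both masks are empty (conventional choice)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom
```

Here the masks would be the predicted vs. clinically validated SOZ/PZ/NIZ regions on the brain atlas.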

MAUP: Training-free Multi-center Adaptive Uncertainty-aware Prompting for Cross-domain Few-shot Medical Image Segmentation

Yazhou Zhu, Haofeng Zhang

arXiv preprint · Aug 5 2025
Cross-domain Few-shot Medical Image Segmentation (CD-FSMIS) is a potential solution for segmenting medical images with limited annotation using knowledge from other domains. The significant performance of current CD-FSMIS models relies on a heavy training procedure over source medical domains, which degrades the universality and ease of model deployment. With the development of large vision models for natural images, we propose a training-free CD-FSMIS model that introduces the Multi-center Adaptive Uncertainty-aware Prompting (MAUP) strategy for adapting the foundation model Segment Anything Model (SAM), which is trained on natural images, to the CD-FSMIS task. Specifically, MAUP consists of three key innovations: (1) K-means clustering-based multi-center prompt generation for comprehensive spatial coverage, (2) uncertainty-aware prompt selection that focuses on the challenging regions, and (3) adaptive prompt optimization that dynamically adjusts to the target region complexity. With the pre-trained DINOv2 feature encoder, MAUP achieves precise segmentation results across three medical datasets without any additional training, compared with several conventional CD-FSMIS models and a training-free FSMIS model. The source code is available at: https://github.com/YazhouZhu19/MAUP.
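Step (1), K-means multi-center prompt generation, can be sketched as follows. The probability threshold and parameter names are our assumptions for illustration, not MAUP's actual implementation (see the linked repository for that):

```python
import numpy as np

def multi_center_prompts(prob_map, k=3, thresh=0.5, n_iter=15, seed=0):
    """Run k-means over likely-foreground pixel coordinates to obtain
    k spatially spread (row, col) point prompts for a SAM-style model."""
    ys, xs = np.nonzero(prob_map >= thresh)
    pts = np.stack([ys, xs], axis=1).astype(float)
    rng = np.random.default_rng(seed)
    centres = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(pts[:, None] - centres[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centres[j] = pts[assign == j].mean(axis=0)
    return np.round(centres).astype(int)

demo = np.zeros((20, 20))
demo[2:5, 2:5] = 1.0       # two separated foreground blobs
demo[15:18, 15:18] = 1.0
prompts = multi_center_prompts(demo, k=2)
```

In MAUP these candidates would then pass through the uncertainty-aware selection and adaptive optimization stages before being handed to SAM.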

Augmenting Continual Learning of Diseases with LLM-Generated Visual Concepts

Jiantao Tan, Peixian Ma, Kanghao Chen, Zhiming Dai, Ruixuan Wang

arXiv preprint · Aug 5 2025
Continual learning is essential for medical image classification systems to adapt to dynamically evolving clinical environments. The integration of multimodal information can significantly enhance continual learning of image classes. However, while existing approaches do utilize textual modality information, they rely solely on simplistic templates containing only a class name, thereby neglecting richer semantic information. To address these limitations, we propose a novel framework that harnesses visual concepts generated by large language models (LLMs) as discriminative semantic guidance. Our method dynamically constructs a visual concept pool with a similarity-based filtering mechanism to prevent redundancy. Then, to integrate the concepts into the continual learning process, we employ a cross-modal image-concept attention module, coupled with an attention loss. Through attention, the module can leverage the semantic knowledge from relevant visual concepts and produce class-representative fused features for classification. Experiments on medical and natural image datasets show that our method achieves state-of-the-art performance, demonstrating its effectiveness and superiority. We will release the code publicly.
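The similarity-based filtering of the concept pool plausibly amounts to cosine-similarity deduplication over concept embeddings. A hedged sketch (the threshold value and function name are ours, not the paper's):

```python
import numpy as np

def filter_concepts(pool, candidates, max_sim=0.9):
    """Add a candidate concept embedding to the pool only if its cosine
    similarity to every stored concept stays below `max_sim`,
    preventing redundant near-duplicate concepts."""
    pool = [v / np.linalg.norm(v) for v in pool]
    for v in candidates:
        v = v / np.linalg.norm(v)
        if all(float(v @ u) < max_sim for u in pool):
            pool.append(v)
    return pool

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# the rescaled duplicate of e1 is filtered out; e2 is kept
pool = filter_concepts([], [e1, e1 * 2.0, e2])
```

In the full framework, the surviving embeddings would feed the cross-modal image-concept attention module.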

CAPoxy: a feasibility study to investigate multispectral imaging in nailfold capillaroscopy

Taylor-Williams, M., Khalil, I., Manning, J., Dinsdale, G., Berks, M., Porcu, L., Wilkinson, S., Bohndiek, S., Murray, A.

medRxiv preprint · Aug 5 2025
Background: Nailfold capillaroscopy enables visualisation of structural abnormalities in the microvasculature of patients with systemic sclerosis (SSc). The objective of this feasibility study was to determine whether multispectral imaging (MSI) could provide functional assessment (differences in haemoglobin concentration or oxygenation) of capillaries to aid discrimination between healthy controls and patients with SSc. MSI of nailfold capillaries visualizes the smallest blood vessels, the impact of SSc on angiogenesis, and their deformation, making it suitable for evaluating oxygenation-sensitive imaging techniques. Imaging of the nailfold capillaries offers tissue-specific oxygenation information, unlike pulse oximetry, which measures arterial blood oxygenation as a single-point measurement. Methods: The CAPoxy study was a single-centre, cross-sectional, feasibility study of nailfold capillary multispectral imaging, comparing a cohort of patients with SSc to controls. A nine-band multispectral camera was used to image 22 individuals (10 patients with SSc and 12 controls). Linear mixed-effects models and summary statistics were used to compare the different regions of the nailfold (capillaries, surrounding edges, and outside area) between SSc and controls. A machine learning model was used to compare the two groups. Results: Patients with SSc exhibited higher indicators of haemoglobin concentration in the capillary and adjacent regions compared to controls, which were significant in the regions surrounding the capillaries (p<0.001). There were also spectral differences between the SSc and control groups that could indicate differences in oxygenation of the capillaries and surrounding tissue. Additionally, a machine learning model distinguished SSc patients from healthy controls with an accuracy of 84%, suggesting potential for multispectral imaging to classify SSc based on structural and functional microvascular changes.
Conclusions: The data indicate that multispectral imaging differentiates patients with SSc from controls based on differences in vascular function. Further work to develop a targeted spectral camera would further improve the contrast between patients with SSc and controls, enabling better imaging. Key messages: Multispectral imaging holds promise for providing functional oxygenation measurement in nailfold capillaroscopy. Significant oxygenation differences between individuals with systemic sclerosis and healthy controls can be detected with multispectral imaging in the tissue surrounding capillaries.

GRASPing Anatomy to Improve Pathology Segmentation

Keyi Li, Alexander Jaus, Jens Kleesiek, Rainer Stiefelhagen

arXiv preprint · Aug 5 2025
Radiologists rely on anatomical understanding to accurately delineate pathologies, yet most current deep learning approaches use pure pattern recognition and ignore the anatomical context in which pathologies develop. To narrow this gap, we introduce GRASP (Guided Representation Alignment for the Segmentation of Pathologies), a modular plug-and-play framework that enhances pathology segmentation models by leveraging existing anatomy segmentation models through pseudo-label integration and feature alignment. Unlike previous approaches that obtain anatomical knowledge via auxiliary training, GRASP integrates into standard pathology optimization regimes without retraining anatomical components. We evaluate GRASP on two PET/CT datasets, conduct systematic ablation studies, and investigate the framework's inner workings. We find that GRASP consistently achieves top rankings across multiple evaluation metrics and diverse architectures. The framework's dual anatomy injection strategy, combining anatomical pseudo-labels as input channels with transformer-guided anatomical feature fusion, effectively incorporates anatomical context.
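The input-channel half of the dual injection strategy can be illustrated by one-hot encoding a frozen anatomy model's pseudo-labels and stacking them onto the image. A minimal NumPy sketch under channels-first conventions (our illustration, not GRASP's code):

```python
import numpy as np

def inject_anatomy(image, anatomy_logits):
    """Concatenate one-hot anatomy pseudo-labels to an image as extra
    channels, so a pathology network sees anatomical context.
    image: (C_img, H, W); anatomy_logits: (C_anat, H, W)."""
    pseudo = anatomy_logits.argmax(axis=0)             # (H, W) class map
    n_cls = anatomy_logits.shape[0]
    onehot = np.eye(n_cls, dtype=image.dtype)[pseudo]  # (H, W, C_anat)
    onehot = np.moveaxis(onehot, -1, 0)                # (C_anat, H, W)
    return np.concatenate([image, onehot], axis=0)

img = np.random.default_rng(0).random((1, 4, 4))
logits = np.random.default_rng(1).normal(size=(3, 4, 4))
fused = inject_anatomy(img, logits)
```

The second half, transformer-guided feature fusion, happens inside the network and is not captured by this input-side sketch.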

A Survey of Medical Point Cloud Shape Learning: Registration, Reconstruction and Variation

Tongxu Zhang, Zhiming Liang, Bei Wang

arXiv preprint · Aug 5 2025
Point clouds have become an increasingly important representation for 3D medical imaging, offering a compact, surface-preserving alternative to traditional voxel or mesh-based approaches. Recent advances in deep learning have enabled rapid progress in extracting, modeling, and analyzing anatomical shapes directly from point cloud data. This paper provides a comprehensive and systematic survey of learning-based shape analysis for medical point clouds, focusing on three fundamental tasks: registration, reconstruction, and variation modeling. We review recent literature from 2021 to 2025, summarize representative methods, datasets, and evaluation metrics, and highlight clinical applications and unique challenges in the medical domain. Key trends include the integration of hybrid representations, large-scale self-supervised models, and generative techniques. We also discuss current limitations, such as data scarcity, inter-patient variability, and the need for interpretable and robust solutions for clinical deployment. Finally, future directions are outlined for advancing point cloud-based shape learning in medical imaging.
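For corresponded point sets, the registration task surveyed here reduces to the closed-form Kabsch/Procrustes solution that sits inside classical ICP, which many learning-based methods still use as a final alignment step. A self-contained sketch for the rigid 3D case:

```python
import numpy as np

def kabsch(source, target):
    """Closed-form rigid alignment (R, t) so that R @ s + t ≈ target
    for corresponded 3D point clouds of shape (N, 3)."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.7
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t0 = np.array([1.0, -2.0, 0.5])
tgt = src @ R0.T + t0
R, t = kabsch(src, tgt)
```

Learning-based registration methods in the survey typically predict the correspondences (or a deformation field) that this closed-form step assumes as given.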

ClinicalFMamba: Advancing Clinical Assessment using Mamba-based Multimodal Neuroimaging Fusion

Meng Zhou, Farzad Khalvati

arXiv preprint · Aug 5 2025
Multimodal medical image fusion integrates complementary information from different imaging modalities to enhance diagnostic accuracy and treatment planning. While deep learning methods have advanced performance, existing approaches face critical limitations: Convolutional Neural Networks (CNNs) excel at local feature extraction but struggle to model global context effectively, while Transformers achieve superior long-range modeling at the cost of quadratic computational complexity, limiting clinical deployment. Recent State Space Models (SSMs) offer a promising alternative, enabling efficient long-range dependency modeling in linear time through selective scan mechanisms. Despite these advances, the extension to 3D volumetric data and the clinical validation of fused images remains underexplored. In this work, we propose ClinicalFMamba, a novel end-to-end CNN-Mamba hybrid architecture that synergistically combines local and global feature modeling for 2D and 3D images. We further design a tri-plane scanning strategy for effectively learning volumetric dependencies in 3D images. Comprehensive evaluations on three datasets demonstrate the superior fusion performance across multiple quantitative metrics while achieving real-time fusion. We further validate the clinical utility of our approach on downstream 2D/3D brain tumor classification tasks, achieving superior performance over baseline methods. Our method establishes a new paradigm for efficient multimodal medical image fusion suitable for real-time clinical deployment.
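A tri-plane scan of the kind described can be illustrated by serializing a volume along each anatomical plane; this toy sketch shows only the reordering into three 1D token sequences, without the selective-scan SSM that would consume them:

```python
import numpy as np

def triplane_sequences(volume):
    """Flatten a 3D volume (D, H, W) into three 1D sequences, one per
    plane ordering, so a linear-time SSM can model volumetric
    dependencies along each axis in turn."""
    axial    = volume.reshape(-1)                           # D, H, W order
    coronal  = np.transpose(volume, (1, 0, 2)).reshape(-1)  # H, D, W order
    sagittal = np.transpose(volume, (2, 0, 1)).reshape(-1)  # W, D, H order
    return axial, coronal, sagittal

vol = np.arange(24).reshape(2, 3, 4)
axial, coronal, sagittal = triplane_sequences(vol)
```

Each sequence visits every voxel exactly once; only the traversal order, and hence the dependencies a sequential model sees, differs between planes.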
