DMAF-Net: An Effective Modality Rebalancing Framework for Incomplete Multi-Modal Medical Image Segmentation

Libin Lan, Hongxing Li, Zunhui Xia, Yudong Zhang

arXiv preprint · Jun 13, 2025
Incomplete multi-modal medical image segmentation faces critical challenges from modality imbalance, including imbalanced modality missing rates and heterogeneous modality contributions. Because they rely on idealized assumptions of complete modality availability, existing methods fail to dynamically balance contributions and neglect the structural relationships between modalities, resulting in suboptimal performance in real-world clinical scenarios. To address these limitations, we propose a novel model, the Dynamic Modality-Aware Fusion Network (DMAF-Net), built on three key ideas. First, it introduces a Dynamic Modality-Aware Fusion (DMAF) module that suppresses missing-modality interference by combining transformer attention with adaptive masking, and weights modality contributions dynamically through attention maps. Second, it designs a synergistic relation-distillation and prototype-distillation framework to enforce global-local feature alignment via covariance consistency and masked graph attention, while ensuring semantic consistency through cross-modal class-specific prototype alignment. Third, it presents a Dynamic Training Monitoring (DTM) strategy that stabilizes optimization under imbalanced missing rates by tracking distillation gaps in real time, and balances convergence speeds across modalities by adaptively reweighting losses and scaling gradients. Extensive experiments on BraTS2020 and MyoPS2020 demonstrate that DMAF-Net outperforms existing methods for incomplete multi-modal medical image segmentation. Our code is available at https://github.com/violet-42/DMAF-Net.
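For intuition about attention-based modality masking, the sketch below fuses per-modality feature tokens while excluding absent modalities from attention, in PyTorch. It illustrates the general DMAF idea only; the module name, one-token-per-modality granularity, and mean pooling are our simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MaskedModalityFusion(nn.Module):
    """Sketch of attention-based fusion that masks out missing modalities.

    Each modality contributes one feature token; attention weights over the
    present modalities act as dynamic contribution weights, while absent
    modalities are excluded via the key padding mask.
    """
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # tokens:  (B, M, C) one feature token per modality
        # present: (B, M) boolean, True where the modality is available
        key_padding_mask = ~present          # True entries are ignored as keys
        fused, _ = self.attn(tokens, tokens, tokens,
                             key_padding_mask=key_padding_mask)
        # zero out the missing-modality positions before mean pooling
        fused = fused * present.unsqueeze(-1)
        return fused.sum(dim=1) / present.sum(dim=1, keepdim=True).clamp(min=1)

x = torch.randn(2, 4, 64)                    # e.g. 4 MRI modalities (BraTS)
present = torch.tensor([[True, True, False, True],
                        [True, False, False, True]])
fused = MaskedModalityFusion(64)(x, present)  # (2, 64)
```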

3D Skin Segmentation Methods in Medical Imaging: A Comparison

Martina Paccini, Giuseppe Patanè

arXiv preprint · Jun 13, 2025
Automatic segmentation of anatomical structures is critical in medical image analysis, aiding diagnostics and treatment planning. Skin segmentation plays a key role in registering and visualising multimodal imaging data. 3D skin segmentation enables applications in personalised medicine, surgical planning, and remote monitoring, offering realistic patient models for treatment simulation, procedural visualisation, and continuous condition tracking. This paper analyses and compares algorithmic and AI-driven skin segmentation approaches, emphasising key factors to consider when selecting a strategy based on data availability and application requirements. We evaluate an iterative region-growing algorithm and the TotalSegmentator, a deep learning-based approach, across different imaging modalities and anatomical regions. Our tests show that AI segmentation excels in automation but struggles with MRI due to its CT-based training, while the graphics-based method performs better for MRIs but introduces more noise. AI-driven segmentation also automates patient bed removal in CT, whereas the graphics-based method requires manual intervention.
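As a reference point for the graphics-based approach, here is a minimal 3D region-growing sketch in Python/NumPy: grow from a seed voxel and accept neighbours whose intensity stays within a tolerance of the running region mean. The seed choice, tolerance criterion, and 6-connectivity are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from collections import deque

def region_grow(volume: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Illustrative 3D region growing: accept 6-connected neighbours whose
    intensity lies within `tol` of the running region mean."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(volume[seed]), 1
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - total / count) <= tol:
                    mask[n] = True
                    total += float(volume[n]); count += 1
                    queue.append(n)
    return mask
```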

Enhancing Privacy: The Utility of Stand-Alone Synthetic CT and MRI for Tumor and Bone Segmentation

André Ferreira, Kunpeng Xie, Caroline Wilpert, Gustavo Correia, Felix Barajas Ordonez, Tiago Gil Oliveira, Maike Bode, Robert Siepmann, Frank Hölzle, Rainer Röhrig, Jens Kleesiek, Daniel Truhn, Jan Egger, Victor Alves, Behrus Puladi

arXiv preprint · Jun 13, 2025
AI requires extensive datasets, while medical data are subject to strict data protection. Anonymization is essential but poses a challenge for some regions, such as the head, where identifying structures overlap with regions of clinical interest. Synthetic data offer a potential solution, but studies often lack rigorous evaluation of realism and utility. Therefore, we investigate to what extent synthetic data can replace real data in segmentation tasks. We employed head and neck cancer CT scans and brain glioma MRI scans from two large datasets. Synthetic data were generated using generative adversarial networks and diffusion models. We evaluated the quality of the synthetic data using MAE, MS-SSIM, Radiomics and a Visual Turing Test (VTT) performed by 5 radiologists, and their usefulness in segmentation tasks using DSC. Radiomics indicates high fidelity of synthetic MRIs but falls short in producing highly realistic CT tissue, with correlation coefficients of 0.8784 and 0.5461 for MRI and CT tumors, respectively. DSC results indicate limited utility of synthetic data: tumor segmentation achieved DSC = 0.064 on CT and 0.834 on MRI, while bone segmentation achieved a mean DSC = 0.841. A relation between DSC and correlation is observed but is limited by the complexity of the task. VTT results show the utility of synthetic CTs, albeit with limited educational applications. Synthetic data can be used independently for the segmentation task, although limited by the complexity of the structures to segment. Advancing generative models to better tolerate heterogeneous inputs and learn subtle details is essential for enhancing their realism and expanding their application potential.
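Since the utility results hinge on the Dice similarity coefficient (DSC), a minimal NumPy implementation of the metric is shown below for reference; the epsilon term is a small assumption added to avoid division by zero on empty masks.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    2 * |pred AND gt| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))
```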

BraTS orchestrator: Democratizing and Disseminating state-of-the-art brain tumor image analysis

Florian Kofler, Marcel Rosier, Mehdi Astaraki, Ujjwal Baid, Hendrik Möller, Josef A. Buchner, Felix Steinbauer, Eva Oswald, Ezequiel de la Rosa, Ivan Ezhov, Constantin von See, Jan Kirschke, Anton Schmick, Sarthak Pati, Akis Linardos, Carla Pitarch, Sanyukta Adap, Jeffrey Rudie, Maria Correia de Verdier, Rachit Saluja, Evan Calabrese, Dominic LaBella, Mariam Aboian, Ahmed W. Moawad, Nazanin Maleki, Udunna Anazodo, Maruf Adewole, Marius George Linguraru, Anahita Fathi Kazerooni, Zhifan Jiang, Gian Marco Conte, Hongwei Li, Juan Eugenio Iglesias, Spyridon Bakas, Benedikt Wiestler, Marie Piraud, Bjoern Menze

arXiv preprint · Jun 13, 2025
The Brain Tumor Segmentation (BraTS) cluster of challenges has significantly advanced brain tumor image analysis by providing large, curated datasets and addressing clinically relevant tasks. However, despite its success and popularity, algorithms and models developed through BraTS have seen limited adoption in both scientific and clinical communities. To accelerate their dissemination, we introduce BraTS orchestrator, an open-source Python package that provides seamless access to state-of-the-art segmentation and synthesis algorithms for diverse brain tumors from the BraTS challenge ecosystem. Available on GitHub (https://github.com/BrainLesion/BraTS), the package features intuitive tutorials designed for users with minimal programming experience, enabling both researchers and clinicians to easily deploy winning BraTS algorithms for inference. By abstracting the complexities of modern deep learning, BraTS orchestrator democratizes access to the specialized knowledge developed within the BraTS community, making these advances readily available to broader neuro-radiology and neuro-oncology audiences.
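The abstract does not spell out the package interface, so the snippet below is a hypothetical usage sketch only: the class and method names are assumptions on our part, and the repository's tutorials (https://github.com/BrainLesion/BraTS) are the authoritative reference.

```python
# Hypothetical usage sketch; actual class/argument names may differ, so
# consult the tutorials at https://github.com/BrainLesion/BraTS.
from brats import AdultGliomaPreTreatmentSegmenter  # assumed class name

segmenter = AdultGliomaPreTreatmentSegmenter()      # fetches a winning BraTS model
segmenter.infer_single(                             # assumed method name
    t1c="sub-01_t1c.nii.gz",                        # contrast-enhanced T1
    t1n="sub-01_t1n.nii.gz",                        # native T1
    t2f="sub-01_t2f.nii.gz",                        # T2-FLAIR
    t2w="sub-01_t2w.nii.gz",                        # T2-weighted
    output_file="sub-01_seg.nii.gz",                # tumor segmentation output
)
```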

MRI-CORE: A Foundation Model for Magnetic Resonance Imaging

Haoyu Dong, Yuwen Chen, Hanxue Gu, Nicholas Konz, Yaqian Chen, Qihang Li, Maciej A. Mazurowski

arXiv preprint · Jun 13, 2025
The widespread use of Magnetic Resonance Imaging (MRI) and the rise of deep learning have enabled the development of powerful predictive models for a wide range of diagnostic tasks in MRI, such as image classification or object segmentation. However, training models for specific new tasks often requires large amounts of labeled data, which is difficult to obtain due to high annotation costs and data privacy concerns. To circumvent this issue, we introduce MRI-CORE (MRI COmprehensive Representation Encoder), a vision foundation model pre-trained on more than 6 million slices from over 110,000 MRI volumes across 18 main body locations. Experiments on five diverse object segmentation tasks in MRI demonstrate that MRI-CORE can significantly improve segmentation performance in realistic scenarios with limited labeled data availability, achieving an average gain of 6.97% in 3D Dice coefficient using only 10 annotated slices per task. We further demonstrate new model capabilities in MRI, such as classification of image properties including body location, sequence type and institution, and zero-shot segmentation. These results highlight the value of MRI-CORE as a generalist vision foundation model for MRI, potentially lowering the data annotation resource barriers for many applications.
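One common way such a foundation model is adapted with only a handful of labeled slices is to freeze the pretrained encoder and train a lightweight segmentation head. The PyTorch sketch below illustrates that pattern under our own assumptions (a generic encoder stand-in and a 1x1-conv head); the abstract does not specify MRI-CORE's actual fine-tuning recipe.

```python
import torch
import torch.nn as nn

class FrozenEncoderSegmenter(nn.Module):
    """Few-shot adaptation sketch: freeze a pretrained encoder and train
    only a lightweight decoder head on the few labeled slices. `encoder`
    is a stand-in for a backbone such as MRI-CORE (interface assumed)."""
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # keep pretrained weights fixed
        self.head = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.encoder(x)          # assumed shape (B, feat_dim, H', W')
        logits = self.head(feats)            # only these weights receive gradients
        return nn.functional.interpolate(logits, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)
```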

InceptionMamba: Efficient Multi-Stage Feature Enhancement with Selective State Space Model for Microscopic Medical Image Segmentation

Daniya Najiha Abdul Kareem, Abdul Hannan, Mubashir Noman, Jean Lahoud, Mustansar Fiaz, Hisham Cholakkal

arXiv preprint · Jun 13, 2025
Accurate microscopic medical image segmentation plays a crucial role in diagnosing various cancerous cells and identifying tumors. Driven by advancements in deep learning, convolutional neural networks (CNNs) and transformer-based models have been extensively studied to enlarge receptive fields and improve performance on medical image segmentation tasks. However, they often struggle to capture complex cellular and tissue structures in challenging scenarios such as background clutter and object overlap. Moreover, their reliance on large datasets for improved performance, along with their high computational cost, limits their practicality. To address these issues, we propose an efficient framework for the segmentation task, named InceptionMamba, which encodes multi-stage rich features and offers both performance and computational efficiency. Specifically, we exploit semantic cues to capture both low-frequency and high-frequency regions, enriching the multi-stage features to handle blurred region boundaries (e.g., cell boundaries). These enriched features are input to a hybrid model that combines an Inception depth-wise convolution with a Mamba block to maintain high efficiency and capture inherent variations in the scales and shapes of the regions of interest. The enriched features, along with low-resolution features, are then fused to obtain the final segmentation mask. Our model achieves state-of-the-art performance on two challenging microscopic segmentation datasets (SegPC21 and GlaS) and two skin lesion segmentation datasets (ISIC2017 and ISIC2018), while reducing computational cost by about 5 times compared to the previous best-performing method.
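To illustrate the kind of multi-branch depth-wise design the name suggests, here is a minimal Inception-style depth-wise convolution block in PyTorch. The branch layout, kernel sizes, and residual connection are our assumptions for illustration; the paper's actual block, and its pairing with a Mamba layer, may differ.

```python
import torch
import torch.nn as nn

class InceptionDWBlock(nn.Module):
    """Inception-style depth-wise block sketch: channels are split across
    parallel depth-wise branches with different kernel shapes, then merged
    with a point-wise convolution and a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        assert channels % 4 == 0
        c = channels // 4
        self.branches = nn.ModuleList([
            nn.Identity(),                                       # pass-through
            nn.Conv2d(c, c, 3, padding=1, groups=c),             # 3x3 depth-wise
            nn.Conv2d(c, c, (1, 11), padding=(0, 5), groups=c),  # 1x11 band
            nn.Conv2d(c, c, (11, 1), padding=(5, 0), groups=c),  # 11x1 band
        ])
        self.pointwise = nn.Conv2d(channels, channels, 1)        # channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = torch.chunk(x, 4, dim=1)
        out = torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)
        return self.pointwise(out) + x

y = InceptionDWBlock(64)(torch.randn(1, 64, 32, 32))  # shape preserved
```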

Protocol of the observational study STRATUM-OS: First step in the development and validation of the STRATUM tool based on multimodal data processing to assist surgery in patients affected by intra-axial brain tumours

Fabelo, H., Ramallo-Farina, Y., Morera, J., Pineiro, J. F., Lagares, A., Jimenez-Roldan, L., Burstrom, G., Garcia-Bello, M. A., Garcia-Perez, L., Falero, R., Gonzalez, M., Duque, S., Rodriguez-Jimenez, C., Hernandez, M., Delgado-Sanchez, J. J., Paredes, A. B., Hernandez, G., Ponce, P., Leon, R., Gonzalez-Martin, J. M., Rodriguez-Esparragon, F., Callico, G. M., Wagner, A. M., Clavo, B., STRATUM

medRxiv preprint · Jun 13, 2025
Introduction: Integrated digital diagnostics can support complex surgeries in many anatomic sites, and brain tumour surgery represents one of the most complex cases. Neurosurgeons face several challenges during brain tumour surgeries, such as differentiating critical tissue from brain tumour margins. To overcome these challenges, the STRATUM project will develop a 3D decision support tool for brain surgery guidance and diagnostics based on multimodal data processing, including hyperspectral imaging, integrated as a point-of-care computing tool in neurosurgical workflows. This paper reports the protocol for the development and technical validation of the STRATUM tool.

Methods and analysis: This international multicentre, prospective, open, observational cohort study, STRATUM-OS (study: 28 months; pre-recruitment: 2 months; recruitment: 20 months; follow-up: 6 months), with no control group, will collect data from 320 patients undergoing standard neurosurgical procedures to (1) develop and technically validate the STRATUM tool and (2) collect the outcome measures for comparing the standard procedure versus the standard procedure plus the use of the STRATUM tool during surgery in a subsequent historically controlled non-randomized clinical trial.

Ethics and dissemination: The protocol was approved by the participating Ethics Committees. Results will be disseminated in scientific conferences and peer-reviewed journals.

Trial registration number: [Pending Number]

Strengths and limitations of this study:
- STRATUM-OS will be the first multicentre prospective observational study to develop and technically validate a 3D decision support tool for brain surgery guidance and diagnostics in real time based on artificial intelligence and multimodal data processing, including the emerging hyperspectral imaging modality.
- This study encompasses a prospective collection of multimodal pre-, intra- and postoperative medical data, including innovative imaging modalities, from patients with intra-axial brain tumours.
- This large observational study will act as a historical control in a subsequent clinical trial to evaluate a fully working prototype.
- Although the estimated sample size is deemed adequate for the purpose of the study, the complexity of the clinical context and the type of surgery could potentially lead to under-recruitment and under-representation of less prevalent tumour types.

SWDL: Stratum-Wise Difference Learning with Deep Laplacian Pyramid for Semi-Supervised 3D Intracranial Hemorrhage Segmentation

Cheng Wang, Siqi Chen, Donghua Mi, Yang Chen, Yudong Zhang, Yinsheng Li

arXiv preprint · Jun 12, 2025
Recent advances in medical imaging have established deep learning-based segmentation as the predominant approach, though it typically requires large amounts of manually annotated data. However, obtaining annotations for intracranial hemorrhage (ICH) remains particularly challenging due to the tedious and costly labeling process. Semi-supervised learning (SSL) has emerged as a promising solution to address the scarcity of labeled data, especially in volumetric medical image segmentation. Unlike conventional SSL methods that primarily focus on high-confidence pseudo-labels or consistency regularization, we propose SWDL-Net, a novel SSL framework that exploits the complementary advantages of Laplacian pyramid and deep convolutional upsampling. The Laplacian pyramid excels at edge sharpening, while deep convolutions enhance detail precision through flexible feature mapping. Our framework achieves superior segmentation of lesion details and boundaries through a difference learning mechanism that effectively integrates these complementary approaches. Extensive experiments on a 271-case ICH dataset and public benchmarks demonstrate that SWDL-Net outperforms current state-of-the-art methods in scenarios with only 2% labeled data. Additional evaluations on the publicly available Brain Hemorrhage Segmentation Dataset (BHSD) with 5% labeled data further confirm the superiority of our approach. Code and data have been released at https://github.com/SIAT-CT-LAB/SWDL.
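For intuition about why a Laplacian pyramid helps with edge sharpening, the sketch below builds one for a 2D slice: each level stores the high-frequency residual lost by downsampling. The Gaussian blur, factor-2 resampling, and level count are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img: np.ndarray, levels: int = 3):
    """Illustrative Laplacian pyramid for a 2D slice: each level keeps the
    band-pass detail (edges) lost when downsampling, which is the property
    exploited for edge sharpening."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma=1.0)
        down = blurred[::2, ::2]                      # factor-2 downsample
        up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)                  # high-frequency residual
        current = down
    pyramid.append(current)                           # low-frequency base
    return pyramid
```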

AI-Based screening for thoracic aortic aneurysms in routine breast MRI.

Bounias D, Führes T, Brock L, Graber J, Kapsner LA, Liebert A, Schreiter H, Eberle J, Hadler D, Skwierawska D, Floca R, Neher P, Kovacs B, Wenkel E, Ohlmeyer S, Uder M, Maier-Hein K, Bickelhaupt S

PubMed paper · Jun 12, 2025
Prognosis for thoracic aortic aneurysms is significantly worse for women than for men, with a higher mortality rate observed among female patients. The increasing use of breast magnetic resonance imaging (MRI) offers a unique opportunity for simultaneous detection of both breast cancer and thoracic aortic aneurysms. We retrospectively validate a fully automated artificial neural network (ANN) pipeline on 5057 breast MRI examinations from public (Duke University Hospital/EA1141 trial) and in-house (Erlangen University Hospital) data. The ANN, benchmarked against 3D ground-truth segmentations, clinical reports, and a multireader panel, demonstrates high technical robustness (Dice/clDice 0.88-0.91/0.97-0.99) across different vendors and field strengths. The ANN improves aneurysm detection rates by 3.5-fold compared with routine clinical readings, highlighting its potential to improve early diagnosis and patient outcomes. Notably, a higher odds ratio (OR = 2.29, CI: [0.55, 9.61]) for thoracic aortic aneurysms is observed in women with breast cancer or a history of breast cancer, suggesting potential further benefits from integrated simultaneous assessment for cancer and aortic aneurysms.
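For readers unfamiliar with how an odds ratio and its confidence interval are derived, the sketch below computes both from a 2x2 contingency table using the standard Wald (Woolf) method; the counts in the usage line are hypothetical and not taken from this study.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and 95% Wald (Woolf) CI for a 2x2 table:
    a/b = outcome present/absent in the exposed group, c/d in the unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# hypothetical counts for illustration only (not from the study)
print(odds_ratio_ci(4, 120, 6, 412))
```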

Task Augmentation-Based Meta-Learning Segmentation Method for Retinopathy.

Wang J, Mateen M, Xiang D, Zhu W, Shi F, Huang J, Sun K, Dai J, Xu J, Zhang S, Chen X

PubMed paper · Jun 12, 2025
Deep learning (DL) requires large amounts of labeled data, which is extremely time-consuming and labor-intensive to obtain for medical image segmentation tasks. Meta-learning focuses on developing learning strategies that enable quick adaptation to new tasks with limited labeled data. However, rich-class medical image segmentation datasets for constructing meta-learning multi-tasks are currently unavailable. In addition, data collected from various healthcare sites and devices may present significant distribution differences, potentially degrading model performance. In this paper, we propose a task augmentation-based meta-learning method for retinal image segmentation (TAMS) to reduce the demand for labor-intensive annotation. A retinal Lesion Simulation Algorithm (LSA) is proposed to automatically generate multi-class retinal disease datasets with pixel-level segmentation labels, such that meta-learning tasks can be augmented without collecting data from various sources. In addition, a novel simulation function library is designed to control the generation process and ensure interpretability. Moreover, a generative simulation network (GSNet) with an improved adversarial training strategy is introduced to maintain high-quality representations of complex retinal diseases. TAMS is evaluated on three different OCT and CFP image datasets, and comprehensive experiments demonstrate that TAMS achieves superior segmentation performance compared with state-of-the-art models.
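To make the lesion-simulation idea tangible, here is a toy Python sketch that paints random elliptical blobs onto an image and returns the image with a matching pixel-level mask. Shapes, intensities, and blob counts are arbitrary assumptions; TAMS's LSA is far more controlled via its simulation function library.

```python
import numpy as np

def simulate_lesions(image: np.ndarray, n_lesions: int = 3, rng=None):
    """Toy lesion simulation in the spirit of an LSA (details differ):
    paint random bright elliptical blobs onto a fundus-like image and
    return the image together with its pixel-level segmentation mask."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    out = image.astype(np.float32).copy()
    mask = np.zeros((h, w), dtype=np.uint8)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_lesions):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        ry = rng.integers(3, max(4, h // 20))          # ellipse radii
        rx = rng.integers(3, max(4, w // 20))
        blob = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
        out[blob] = np.clip(out[blob] + rng.uniform(30, 80), 0, 255)
        mask |= blob.astype(np.uint8)                  # accumulate lesion mask
    return out, mask
```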