Page 44 of 1341332 results

A novel lung cancer diagnosis model using hybrid convolution (2D/3D)-based adaptive DenseUnet with attention mechanism.

Deepa J, Badhu Sasikala L, Indumathy P, Jerrin Simla A

pubmed · Aug 5 2025
Existing Lung Cancer Diagnosis (LCD) models have difficulty detecting early-stage lung cancer because the disease is asymptomatic at that stage, which increases patient mortality. It is therefore important to diagnose lung disease at an early stage to save the lives of affected persons. Hence, this research work aims to develop an efficient lung disease diagnosis using deep learning techniques for the early and accurate detection of lung cancer. The proposed model first collects the required CT images from standard benchmark datasets. Lung cancer segmentation is then performed using the proposed Hybrid Convolution (2D/3D)-based Adaptive DenseUnet with Attention mechanism (HC-ADAM). Hybrid Sewing Training with Spider Monkey Optimization (HSTSMO) is introduced to optimize the parameters of the developed HC-ADAM segmentation approach. Finally, the segmented lung nodule images are passed to the lung cancer classification stage, where Hybrid Adaptive Dilated Networks with Attention mechanism (HADN-AM), a serial cascade of ResNet and Long Short-Term Memory (LSTM), are implemented to attain better categorization performance. The accuracy, precision, and F1-score of the developed model on the LIDC-IDRI dataset are 96.3%, 96.38%, and 96.36%, respectively.

Brain tumor segmentation by optimizing deep learning U-Net model.

Asiri AA, Hussain L, Irfan M, Mehdar KM, Awais M, Alelyani M, Alshuhri M, Alghamdi AJ, Alamri S, Nadeem MA

pubmed · Aug 5 2025
Background: Magnetic Resonance Imaging (MRI) is a cornerstone in diagnosing brain tumors. However, the complex nature of these tumors makes accurate segmentation in MRI images a demanding task, and early detection is crucial for improving patient outcomes. Objective: To develop and evaluate a novel UNet-based architecture for improved brain tumor segmentation in MRI images. Methods: This paper presents a novel UNet-based architecture for improved brain tumor segmentation. The architecture incorporates Leaky ReLU activation, batch normalization, and regularization to enhance training and performance, and uses varying numbers of layers and kernel sizes to capture different levels of detail. To address class imbalance in medical image segmentation, we employ focal loss and generalized Dice loss (GDL) functions. Results: The proposed model was evaluated on the BraTS'2020 dataset, achieving an accuracy of 99.64% and Dice coefficients of 0.8984, 0.8431, and 0.8824 for the necrotic core, edema, and enhancing tumor regions, respectively. Conclusion: These findings demonstrate the efficacy of our approach in accurately delineating tumors, with the potential to enhance diagnostic systems and improve patient outcomes.
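The class-imbalance remedy named in this abstract, combining focal loss with generalized Dice loss, can be sketched with the standard formulations (a minimal NumPy illustration; the paper's exact weighting between the two terms and its `gamma` value are not stated, so the values below are assumptions):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-7):
    """Per-pixel binary focal loss: down-weights easy pixels by (1 - p_t)^gamma.
    gamma=2.0 is an assumed default, not taken from the paper."""
    probs = np.clip(probs, eps, 1.0 - eps)
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def generalized_dice_loss(probs, onehot, eps=1e-7):
    """GDL: class weights are inverse squared class volumes, so rare tumor
    classes contribute as much as the dominant background class.
    probs, onehot: arrays of shape (num_classes, num_pixels)."""
    w = 1.0 / (onehot.sum(axis=1) ** 2 + eps)
    intersect = (w * (probs * onehot).sum(axis=1)).sum()
    union = (w * (probs + onehot).sum(axis=1)).sum()
    return float(1.0 - 2.0 * intersect / (union + eps))
```

In training, the two terms would typically be summed (possibly with a mixing weight) into a single objective.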

MAUP: Training-free Multi-center Adaptive Uncertainty-aware Prompting for Cross-domain Few-shot Medical Image Segmentation

Yazhou Zhu, Haofeng Zhang

arxiv preprint · Aug 5 2025
Cross-domain Few-shot Medical Image Segmentation (CD-FSMIS) is a potential solution for segmenting medical images with limited annotation using knowledge from other domains. The strong performance of current CD-FSMIS models relies on heavy training over source medical domains, which degrades the universality and ease of model deployment. Building on large visual models of natural images, we propose a training-free CD-FSMIS model that introduces the Multi-center Adaptive Uncertainty-aware Prompting (MAUP) strategy to adapt the foundation model Segment Anything Model (SAM), trained on natural images, to the CD-FSMIS task. Specifically, MAUP consists of three key innovations: (1) K-means clustering-based multi-center prompt generation for comprehensive spatial coverage, (2) uncertainty-aware prompt selection that focuses on challenging regions, and (3) adaptive prompt optimization that dynamically adjusts to target-region complexity. With the pre-trained DINOv2 feature encoder, MAUP achieves precise segmentation results across three medical datasets without any additional training, compared with several conventional CD-FSMIS models and a training-free FSMIS model. The source code is available at: https://github.com/YazhouZhu19/MAUP.
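The first MAUP component, K-means-based multi-center prompt generation, can be sketched as follows. This is a hypothetical illustration: `multi_center_prompts` and its parameters are assumptions, it clusters raw pixel coordinates of a coarse foreground mask, and the real method additionally performs uncertainty-aware selection and adaptive optimization over DINOv2 features:

```python
import numpy as np

def multi_center_prompts(coarse_mask, k=3, iters=10, seed=0):
    """Cluster foreground pixel coordinates with plain K-means; the k
    centroids serve as spatially spread point prompts for SAM."""
    rng = np.random.default_rng(seed)
    pts = np.argwhere(coarse_mask > 0).astype(float)  # (N, 2) row/col coords
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign each foreground pixel to its nearest center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers
```

Each returned centroid would then be fed to SAM as a positive point prompt, giving coverage of spatially separated parts of the target region.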

MedCAL-Bench: A Comprehensive Benchmark on Cold-Start Active Learning with Foundation Models for Medical Image Analysis

Ning Zhu, Xiaochuan Ma, Shaoting Zhang, Guotai Wang

arxiv preprint · Aug 5 2025
Cold-Start Active Learning (CSAL) aims to select informative samples for annotation without prior knowledge, which is important for improving annotation efficiency and model performance under a limited annotation budget in medical image analysis. Most existing CSAL methods rely on Self-Supervised Learning (SSL) on the target dataset for feature extraction, which is inefficient and limited by insufficient feature representation. Recently, pre-trained Foundation Models (FMs) have shown powerful feature extraction ability with a potential for better CSAL. However, this paradigm has been rarely investigated, and benchmarks for comparing FMs on CSAL tasks are lacking. To this end, we propose MedCAL-Bench, the first systematic FM-based CSAL benchmark for medical image analysis. We evaluate 14 FMs and 7 CSAL strategies across 7 datasets under different annotation budgets, covering classification and segmentation tasks from diverse medical modalities. It is also the first CSAL benchmark that evaluates both the feature extraction and sample selection stages. Our experimental results reveal that: 1) Most FMs are effective feature extractors for CSAL, with the DINO family performing best in segmentation; 2) The performance differences among these FMs are large in segmentation tasks but small in classification; 3) Different sample selection strategies should be considered for CSAL on different datasets, with Active Learning by Processing Surprisal (ALPS) performing best in segmentation and RepDiv leading for classification. The code is available at https://github.com/HiLab-git/MedCAL-Bench.
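Cold-start selection has no labels to score against, so strategies operate on FM features alone. A minimal diversity-based baseline, farthest-point (k-center greedy) sampling, can be sketched as follows; this is a generic heuristic for illustration, not the ALPS or RepDiv strategies benchmarked in the paper:

```python
import numpy as np

def k_center_greedy(features, budget, seed=0):
    """Pick `budget` samples that cover the feature space: repeatedly take
    the sample farthest from everything selected so far."""
    rng = np.random.default_rng(seed)
    n = len(features)
    selected = [int(rng.integers(n))]  # arbitrary first pick
    # distance from every sample to its nearest selected sample
    dist = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < budget:
        nxt = int(dist.argmax())
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected
```

Run over features from a frozen FM encoder, the selected indices form the initial annotation batch; the benchmark's finding is that both the encoder and this selection stage materially affect downstream performance.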

GRASPing Anatomy to Improve Pathology Segmentation

Keyi Li, Alexander Jaus, Jens Kleesiek, Rainer Stiefelhagen

arxiv preprint · Aug 5 2025
Radiologists rely on anatomical understanding to accurately delineate pathologies, yet most current deep learning approaches use pure pattern recognition and ignore the anatomical context in which pathologies develop. To narrow this gap, we introduce GRASP (Guided Representation Alignment for the Segmentation of Pathologies), a modular plug-and-play framework that enhances pathology segmentation models by leveraging existing anatomy segmentation models through pseudo-label integration and feature alignment. Unlike previous approaches that obtain anatomical knowledge via auxiliary training, GRASP integrates into standard pathology optimization regimes without retraining anatomical components. We evaluate GRASP on two PET/CT datasets, conduct systematic ablation studies, and investigate the framework's inner workings. We find that GRASP consistently achieves top rankings across multiple evaluation metrics and diverse architectures. The framework's dual anatomy injection strategy, combining anatomical pseudo-labels as input channels with transformer-guided anatomical feature fusion, effectively incorporates anatomical context.

Towards a zero-shot low-latency navigation for open surgery augmented reality applications.

Schwimmbeck M, Khajarian S, Auer C, Wittenberg T, Remmele S

pubmed · Aug 5 2025
Augmented reality (AR) enhances surgical navigation by superimposing visible anatomical structures with three-dimensional virtual models using head-mounted displays (HMDs). In particular, interventions such as open liver surgery can benefit from AR navigation, as it aids in identifying and distinguishing tumors and risk structures. However, there is a lack of automatic and markerless methods that are robust against real-world challenges, such as partial occlusion and organ motion. We introduce a novel multi-device approach for automatic live navigation in open liver surgery that enhances the visualization and interaction capabilities of a HoloLens 2 HMD through precise and reliable registration using an Intel RealSense RGB-D camera. The intraoperative RGB-D segmentation and the preoperative CT data are utilized to register a virtual liver model to the target anatomy. An AR-prompted Segment Anything Model (SAM) enables robust segmentation of the liver in situ without the need for additional training data. To mitigate algorithmic latency, Double Exponential Smoothing (DES) is applied to forecast registration results. We conducted a phantom study for open liver surgery, investigating various scenarios of liver motion, viewpoints, and occlusion. The mean registration errors (8.31 mm-18.78 mm TRE) are comparable to those reported in prior work, while our approach demonstrates high success rates even under heavy occlusion and strong motion. Using forecasting, we bypassed the algorithmic latency of 79.8 ms per frame, with median forecasting errors below 2 mm in translation and 1.5 degrees between the quaternions. To our knowledge, this is the first work to approach markerless in situ visualization by combining a multi-device method with forecasting and a foundation model for segmentation and tracking. This enables more reliable and precise AR registration of surgical targets with low latency. Our approach can be applied to other surgical applications and AR hardware with minimal effort.
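The latency-hiding step, Double Exponential Smoothing, maintains a level and a trend estimate and extrapolates one frame ahead. A minimal sketch for a scalar pose component follows (generic Holt's method with assumed smoothing constants; the paper's constants and its quaternion handling for rotations are not reproduced here):

```python
def des_forecast(series, alpha=0.5, beta=0.5):
    """Holt's double exponential smoothing over a sequence of scalar
    observations; returns the one-step-ahead forecast, which can be
    displayed while the current frame's registration is still computing."""
    level = series[0]
    trend = series[1] - series[0]  # initial trend from the first two samples
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend  # extrapolate one step into the future
```

On a steadily translating target the forecast tracks the motion exactly; for real pose streams, one instance per translation axis (and a rotation-aware variant) would run at frame rate.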

Policy to Assist Iteratively Local Segmentation: Optimising Modality and Location Selection for Prostate Cancer Localisation

Xiangcen Wu, Shaheer U. Saeed, Yipei Wang, Ester Bonmati Coll, Yipeng Hu

arxiv preprint · Aug 5 2025
Radiologists often mix medical image reading strategies, inspecting individual modalities and local image regions and using information at different locations from different images both independently and concurrently. In this paper, we propose a recommender system to assist machine learning-based segmentation models by suggesting appropriate image portions along with the best modality, such that prostate cancer segmentation performance can be maximised. Our approach trains a policy network that assists tumor localisation by recommending both the optimal imaging modality and the specific sections of interest for review. During training, a pre-trained segmentation network mimics radiologist inspection on individual or variable combinations of these imaging modalities and their sections, selected by the policy network. Taking the locally segmented regions as input for the next step, this dynamic decision-making process iterates until all cancers are best localised. We validate our method using a dataset of 1325 labelled multiparametric MRI images from prostate cancer patients, demonstrating its potential to improve annotation efficiency and segmentation accuracy, especially when challenging pathology is present. Experimental results show that our approach can surpass standard segmentation networks. Perhaps more interestingly, our trained agent independently developed its own optimal strategy, which may or may not be consistent with current radiologist guidelines such as PI-RADS. This observation also suggests a promising interactive application, in which the proposed policy network assists human radiologists.

Are Vision-xLSTM-embedded U-Nets better at segmenting medical images?

Dutta P, Bose S, Roy SK, Mitra S

pubmed · Aug 5 2025
The development of efficient segmentation strategies for medical images has evolved from its initial dependence on Convolutional Neural Networks (CNNs) to the current investigation of hybrid models that combine CNNs with Vision Transformers (ViTs). There is an increasing focus on developing architectures that are both high-performing and computationally efficient, capable of being deployed on remote systems with limited resources. Although transformers can capture global dependencies in the input space, they face challenges from the correspondingly high computational and storage expenses involved. The objective of this research is to propose that Vision Extended Long Short-Term Memory (Vision-xLSTM) forms an appropriate backbone for medical image segmentation, offering excellent performance with reduced computational costs. This study investigates the integration of CNNs with Vision-xLSTM by introducing the novel U-VixLSTM. The Vision-xLSTM blocks capture the temporal and global relationships within the patches extracted from the CNN feature maps. The convolutional feature reconstruction path upsamples the output volume from the Vision-xLSTM blocks to produce the segmentation output. The U-VixLSTM exhibits superior performance compared to state-of-the-art networks on the publicly available Synapse, ISIC, and ACDC datasets. The findings suggest that U-VixLSTM is a promising alternative to ViTs for medical image segmentation, delivering effective performance without substantial computational burden. This makes it feasible for deployment in healthcare environments with limited resources for faster diagnosis. Code provided: https://github.com/duttapallabi2907/U-VixLSTM.

Skin lesion segmentation: A systematic review of computational techniques, tools, and future directions.

Sharma AL, Sharma K, Ghosal P

pubmed · Aug 5 2025
Skin lesion segmentation is a highly sought-after research topic in medical image processing, which may help in the early diagnosis of skin diseases. Early detection of skin diseases like melanoma can decrease the mortality rate by 95%. Distinguishing lesions from healthy skin through skin image segmentation is a critical step. Various factors, such as the color, size, and shape of the skin lesion, the presence of hair, and other noise, pose challenges in segmenting a lesion from healthy skin. Hence, the effectiveness of the segmentation technique utilized is vital for precise disease diagnosis and treatment planning. This review explores and summarizes the latest advancements in skin lesion segmentation techniques and their state-of-the-art methods from 2018 to 2025. It also covers crucial information, including input datasets, pre-processing, augmentation, method configuration, loss functions, hyperparameter settings, and performance metrics. The review addresses the primary challenges encountered in skin lesion segmentation from images and comprehensively compares state-of-the-art techniques. Researchers in this field will find this review compelling due to its insights on skin lesion segmentation, its methodological details, and its analysis of the encouraging results of the state-of-the-art methods.

Nutritional impact of leucine-enriched supplements: evaluating protein type through artificial intelligence (AI)-augmented muscle ultrasonography in hypercaloric, hyperproteic support.

López Gómez JJ, Gutiérrez JG, Jauregui OI, Cebriá Á, Asensio LE, Martín DP, Velasco PF, Pérez López P, Sahagún RJ, Bargues DR, Godoy EJ, de Luis Román DA

pubmed · Aug 5 2025
Malnutrition adversely affects physical function and body composition in patients with chronic diseases. Leucine supplementation has shown benefits in improving body composition and clinical outcomes. This study aimed to evaluate the effects of a leucine-enriched oral nutritional supplement (ONS) on the nutritional status of patients at risk of malnutrition. This prospective observational study followed two cohorts of malnourished patients receiving personalized nutritional interventions over 3 months. One group received a leucine-enriched oral supplement (20% protein, 100% whey, 3 g leucine), while the other received a standard supplement (hypercaloric and normo-hyperproteic) with mixed protein sources. Nutritional status was assessed at baseline and after 3 months using anthropometry, bioelectrical impedance analysis, AI-assisted muscle ultrasound, and handgrip strength. Results: A total of 142 patients were included (76 Leucine-ONS, 66 Standard-ONS), mostly women (65.5%), with a mean age of 62.00 (18.66) years. Malnutrition was present in 90.1% and sarcopenia in 34.5%. Cancer was the most common condition (30.3%). The Leucine-ONS group showed greater improvements in phase angle (+2.08% vs. -1.57%; p=0.02) and rectus femoris thickness (+1.72% vs. -5.89%; p=0.03). Multivariate analysis confirmed associations between Leucine-ONS and improved phase angle (OR=2.41; 95%CI: 1.18-4.92; p=0.02) and reduced intramuscular fat (OR=2.24; 95%CI: 1.13-4.46; p=0.02). The leucine-enriched ONS significantly improved phase angle and muscle thickness compared to the standard ONS, supporting its role in enhancing body composition in malnourished patients. These results must be interpreted in the context of the study's observational design, the heterogeneity of the comparison groups, and the short duration of the intervention. Further randomized controlled trials are needed to confirm these results and assess long-term clinical and functional outcomes.