A novel lung cancer diagnosis model using hybrid convolution (2D/3D)-based adaptive DenseUnet with attention mechanism.

Deepa J, Badhu Sasikala L, Indumathy P, Jerrin Simla A

pubmed papers · Aug 5 2025
Existing Lung Cancer Diagnosis (LCD) models have difficulty detecting early-stage lung cancer because the disease is asymptomatic, which leads to an increased death rate among patients. It is therefore important to diagnose lung disease at an early stage to save the lives of affected persons. Hence, this research work aims to develop an efficient lung disease diagnosis using deep learning techniques for the early and accurate detection of lung cancer. Initially, the proposed model collects the required CT images from standard benchmark datasets. Lung cancer segmentation is then performed using the developed Hybrid Convolution (2D/3D)-based Adaptive DenseUnet with Attention mechanism (HC-ADAM). Hybrid Sewing Training with Spider Monkey Optimization (HSTSMO) is introduced to optimize the parameters of the HC-ADAM segmentation approach. Finally, the segmented lung nodule images are passed to the lung cancer classification stage, where Hybrid Adaptive Dilated Networks with Attention mechanism (HADN-AM), a serial cascade of ResNet and Long Short-Term Memory (LSTM), are implemented to attain better categorization performance. The accuracy, precision, and F1-score of the developed model on the LIDC-IDRI dataset are 96.3%, 96.38%, and 96.36%, respectively.
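For orientation, the serial ResNet-LSTM cascade described for the classification stage can be sketched as below. This is a minimal illustration under assumed shapes (8 slices per nodule, 64×64 inputs, 2 classes), not the authors' HADN-AM implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetLSTMCascade(nn.Module):
    """Serial CNN -> LSTM cascade: per-slice ResNet features from a nodule
    volume are aggregated by an LSTM for classification (illustrative only)."""
    def __init__(self, num_classes=2, hidden=128):
        super().__init__()
        self.cnn = resnet18(weights=None)
        self.cnn.fc = nn.Identity()                        # expose 512-d features
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                                  # x: (B, T, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)   # (B, T, 512)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                       # logits from last step

model = ResNetLSTMCascade()
print(model(torch.randn(2, 8, 3, 64, 64)).shape)           # torch.Size([2, 2])
```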

Brain tumor segmentation by optimizing deep learning U-Net model.

Asiri AA, Hussain L, Irfan M, Mehdar KM, Awais M, Alelyani M, Alshuhri M, Alghamdi AJ, Alamri S, Nadeem MA

pubmed papers · Aug 5 2025
Background: Magnetic Resonance Imaging (MRI) is a cornerstone in diagnosing brain tumors. However, the complex nature of these tumors makes accurate segmentation in MRI images a demanding task, and early detection is crucial for improving patient outcomes. Objective: To develop and evaluate a novel UNet-based architecture for improved brain tumor segmentation in MRI images. Methods: This paper presents a novel UNet-based architecture for improved brain tumor segmentation. The model incorporates Leaky ReLU activation, batch normalization, and regularization to enhance training and performance, and consists of varying numbers of layers and kernel sizes to capture different levels of detail. To address class imbalance in medical image segmentation, we employ focal loss and generalized Dice loss (GDL). Results: The proposed model was evaluated on the BraTS 2020 dataset, achieving an accuracy of 99.64% and Dice coefficients of 0.8984, 0.8431, and 0.8824 for the necrotic core, edema, and enhancing tumor regions, respectively. Conclusion: These findings demonstrate the efficacy of our approach in accurately segmenting tumors, which has the potential to enhance diagnostic systems and improve patient outcomes.
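For reference, the two class-imbalance losses named above can be written in PyTorch roughly as follows. This is a generic formulation of focal loss and generalized Dice loss, not the authors' code, and the alpha/gamma values are common defaults rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Focal loss: down-weights easy pixels by (1 - p_t)^gamma."""
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)
    p_t = logp_t.exp()
    return (-alpha * (1 - p_t) ** gamma * logp_t).mean()

def generalized_dice_loss(logits, target, eps=1e-6):
    """Generalized Dice loss with inverse-volume class weights."""
    p = torch.softmax(logits, dim=1)
    r = F.one_hot(target, logits.shape[1]).movedim(-1, 1).float()
    dims = tuple(range(2, p.ndim))                 # spatial axes
    w = 1.0 / (r.sum(dims) ** 2 + eps)             # (N, C) class weights
    inter = (w * (p * r).sum(dims)).sum(1)
    union = (w * (p + r).sum(dims)).sum(1)
    return (1.0 - 2.0 * inter / (union + eps)).mean()

logits = torch.randn(2, 4, 32, 32)                 # (batch, classes, H, W)
target = torch.randint(0, 4, (2, 32, 32))
print((focal_loss(logits, target) + generalized_dice_loss(logits, target)).item())
```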

Utilizing 3D fast spin echo anatomical imaging to reduce the number of contrast preparations in $T_{1\rho}$ quantification of knee cartilage using learning-based methods.

Zhong J, Huang C, Yu Z, Xiao F, Blu T, Li S, Ong TM, Ho KK, Chan Q, Griffith JF, Chen W

pubmed papers · Aug 5 2025
To propose and evaluate an accelerated $T_{1\rho}$ quantification method that combines $T_{1\rho}$-weighted fast spin echo (FSE) images and proton density (PD)-weighted anatomical FSE images, leveraging deep learning models for $T_{1\rho}$ mapping. The goal is to reduce scan time and facilitate integration into routine clinical workflows for osteoarthritis (OA) assessment. This retrospective study utilized MRI data from 40 participants (30 OA patients and 10 healthy volunteers). A volume of PD-weighted anatomical FSE images and a volume of $T_{1\rho}$-weighted images acquired at a non-zero spin-lock time were used as input to train deep learning models, including a 2D U-Net and a multi-layer perceptron (MLP). $T_{1\rho}$ maps generated by these models were compared with ground-truth maps derived from a traditional non-linear least squares (NLLS) fitting method using four $T_{1\rho}$-weighted images. Evaluation metrics included mean absolute error (MAE), mean absolute percentage error (MAPE), regional error (RE), and regional percentage error (RPE). The best-performing deep learning models achieved RPEs below 5% across all evaluated scenarios. This performance was consistent even in reduced acquisition settings that included only one PD-weighted image and one $T_{1\rho}$-weighted image, where NLLS methods cannot be applied. Furthermore, the results were comparable to those obtained with NLLS when longer acquisitions with four $T_{1\rho}$-weighted images were used. The proposed approach enables efficient $T_{1\rho}$ mapping using PD-weighted anatomical images, reducing scan time while maintaining clinical standards. This method has the potential to facilitate the integration of quantitative MRI techniques into routine clinical practice, benefiting OA diagnosis and monitoring.
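The NLLS baseline referenced above fits a mono-exponential spin-lock decay, $S(\mathrm{TSL}) = S_0 \, e^{-\mathrm{TSL}/T_{1\rho}}$, to the $T_{1\rho}$-weighted images voxel by voxel. A minimal single-voxel sketch with SciPy, using illustrative spin-lock times and intensities (not values from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(tsl, s0, t1rho):
    """Mono-exponential spin-lock signal model: S(TSL) = S0 * exp(-TSL / T1rho)."""
    return s0 * np.exp(-tsl / t1rho)

tsl = np.array([0.0, 10.0, 30.0, 50.0])            # spin-lock times (ms), illustrative
signal = np.array([1000.0, 820.0, 560.0, 380.0])   # voxel intensities, illustrative

popt, _ = curve_fit(monoexp, tsl, signal, p0=(signal[0], 40.0))
print(f"fitted T1rho = {popt[1]:.1f} ms")
```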

Sex differences in white matter amplitude of low-frequency fluctuation associated with cognitive performance across the Alzheimer's disease continuum.

Chen X, Zhou S, Wang W, Gao Z, Ye W, Zhu W, Lu Y, Ma J, Li X, Yu Y, Li X

pubmed papers · Aug 5 2025
Background: Sex differences in Alzheimer's disease (AD) progression offer insights into pathogenesis and clinical management. White matter (WM) amplitude of low-frequency fluctuation (ALFF), which reflects neural activity, is a potential disease biomarker. Objective: To explore whether there are sex differences in regional WM ALFF among AD patients, amnestic mild cognitive impairment (aMCI) patients, and healthy controls (HCs); how these differences relate to cognitive performance; and whether they can be used for disease classification. Methods: Resting-state functional magnetic resonance images and cognitive assessments were obtained from 85 AD patients (36 female), 52 aMCI patients (23 female), and 78 HCs (43 female). Two-way ANOVA examined group × sex interactions for regional WM ALFF and cognitive scores. WM ALFF-cognition correlations and support vector machine diagnostic accuracy were evaluated. Results: Sex × group interaction effects on WM ALFF were detected in the right superior longitudinal fasciculus (F = 20.08, FDR-corrected p < 0.001), left superior longitudinal fasciculus (F = 5.45, GRF-corrected p < 0.001), and right inferior longitudinal fasciculus (F = 6.00, GRF-corrected p = 0.001). These WM ALFF values correlated positively with different aspects of cognitive performance between sexes. The support vector machine best differentiated aMCI from AD in the full cohort and in males (accuracy = 75%), and HCs from aMCI in females (accuracy = 93%). Conclusions: Sex differences in regional WM ALFF during AD progression are associated with cognitive performance and can be utilized for disease classification.
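ALFF itself is a standard resting-state measure: the mean spectral amplitude of a voxel's time series within a low-frequency band, commonly 0.01-0.08 Hz. A minimal sketch on a synthetic time series (the TR and band edges are typical choices, not parameters reported above):

```python
import numpy as np

def alff(ts, tr, low=0.01, high=0.08):
    """Mean FFT amplitude of a demeaned time series within [low, high] Hz."""
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts)) / len(ts)
    band = (freqs >= low) & (freqs <= high)
    return amp[band].mean()

rng = np.random.default_rng(0)
series = rng.standard_normal(200)     # synthetic 200-volume voxel time series
print(alff(series, tr=2.0))           # assume TR = 2 s
```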

MAUP: Training-free Multi-center Adaptive Uncertainty-aware Prompting for Cross-domain Few-shot Medical Image Segmentation

Yazhou Zhu, Haofeng Zhang

arxiv preprint · Aug 5 2025
Cross-domain Few-shot Medical Image Segmentation (CD-FSMIS) is a potential solution for segmenting medical images with limited annotation using knowledge from other domains. The strong performance of current CD-FSMIS models relies on heavy training over source medical domains, which degrades the universality and ease of model deployment. Building on large vision models trained on natural images, we propose a training-free CD-FSMIS model that introduces a Multi-center Adaptive Uncertainty-aware Prompting (MAUP) strategy for adapting the foundation model Segment Anything Model (SAM), which was trained on natural images, to the CD-FSMIS task. Specifically, MAUP consists of three key innovations: (1) K-means-clustering-based multi-center prompt generation for comprehensive spatial coverage, (2) uncertainty-aware prompt selection that focuses on challenging regions, and (3) adaptive prompt optimization that dynamically adjusts to the complexity of the target region. With a pre-trained DINOv2 feature encoder, MAUP achieves precise segmentation results across three medical datasets without any additional training, compared with several conventional CD-FSMIS models and a training-free FSMIS model. The source code is available at: https://github.com/YazhouZhu19/MAUP.
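The first MAUP component, K-means-based multi-center prompt generation, can be illustrated with a short sketch: cluster the foreground-likely pixels and hand the cluster centers to SAM as positive point prompts. The probability map, threshold, and number of centers below are assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def multicenter_prompts(prob_map, k=5, thresh=0.5):
    """Cluster pixels with foreground probability > thresh and return the
    k cluster centers as (x, y) point prompts (label 1) for SAM."""
    ys, xs = np.nonzero(prob_map > thresh)
    pts = np.stack([xs, ys], axis=1).astype(float)
    k = min(k, len(pts))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pts)
    return km.cluster_centers_

prob = np.zeros((64, 64))
prob[20:40, 25:45] = 0.9              # toy foreground blob
print(multicenter_prompts(prob, k=3))
```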

R2GenKG: Hierarchical Multi-modal Knowledge Graph for LLM-based Radiology Report Generation

Futian Wang, Yuhan Qiao, Xiao Wang, Fuling Wang, Yuxiang Zhang, Dengdi Sun

arxiv preprint · Aug 5 2025
X-ray medical report generation is one of the important applications of artificial intelligence in healthcare. With the support of large foundation models, the quality of medical report generation has significantly improved. However, challenges such as hallucination and weak disease diagnostic capability still persist. In this paper, we first construct a large-scale multi-modal medical knowledge graph (termed M3KG) from the ground-truth medical reports using GPT-4o. It contains 2477 entities, 3 kinds of relations, 37424 triples, and 6943 disease-aware vision tokens for the CheXpert Plus dataset. We then sample it to obtain multi-granularity semantic graphs and use an R-GCN encoder for feature extraction. For the input X-ray image, we adopt the Swin Transformer to extract vision features, which interact with the knowledge using cross-attention. The vision tokens are fed into a Q-Former, and disease-aware vision tokens are retrieved using another cross-attention. Finally, we adopt a large language model to map the semantic knowledge graph, the input X-ray image, and the disease-aware vision tokens into language descriptions. Extensive experiments on multiple datasets fully validate the effectiveness of our proposed knowledge graph and X-ray report generation framework. The source code of this paper will be released at https://github.com/Event-AHU/Medical_Image_Analysis.
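The retrieval step described above, image features querying disease-aware tokens through cross-attention, reduces to a standard attention call. The sketch below is generic, and all dimensions (196 patch tokens, 64 disease tokens, width 256) are assumed for illustration:

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
img_tokens = torch.randn(1, 196, 256)       # e.g., Swin-Transformer patch features
disease_tokens = torch.randn(1, 64, 256)    # learned disease-aware vision tokens

# image tokens attend over the disease-aware tokens (cross-attention retrieval)
retrieved, _ = attn(query=img_tokens, key=disease_tokens, value=disease_tokens)
print(retrieved.shape)                      # torch.Size([1, 196, 256])
```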

Evaluating the Predictive Value of Preoperative MRI for Erectile Dysfunction Following Radical Prostatectomy

Gideon N. L. Rouwendaal, Daniël Boeke, Inge L. Cox, Henk G. van der Poel, Margriet C. van Dijk-de Haan, Regina G. H. Beets-Tan, Thierry N. Boellaard, Wilson Silva

arxiv preprint · Aug 5 2025
Accurate preoperative prediction of erectile dysfunction (ED) is important for counseling patients undergoing radical prostatectomy. While clinical features are established predictors, the added value of preoperative MRI remains underexplored. We investigate whether MRI provides additional predictive value for ED at 12 months post-surgery, evaluating four modeling strategies: (1) a clinical-only baseline, representing the current state of the art; (2) classical models using handcrafted anatomical features derived from MRI; (3) deep learning models trained directly on MRI slices; and (4) multimodal fusion of imaging and clinical inputs. Imaging-based models (maximum AUC 0.569) slightly outperformed handcrafted anatomical approaches (AUC 0.554) but fell short of the clinical baseline (AUC 0.663). Fusion models offered marginal gains (AUC 0.586) but did not exceed clinical-only performance. SHAP analysis confirmed that clinical features contributed most to predictive performance. Saliency maps from the best-performing imaging model indicated a predominant focus on anatomically plausible regions, such as the prostate and neurovascular bundles. While MRI-based models did not improve predictive performance over clinical features, our findings suggest that they capture patterns in relevant anatomical structures and may complement clinical predictors in future multimodal approaches.
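A fusion model of the kind evaluated in strategy (4) can be sketched as a late-fusion head that concatenates an MRI embedding with clinical features before a small classifier; the dimensions and layer sizes below are hypothetical, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LateFusionED(nn.Module):
    """Hypothetical late-fusion head: MRI embedding + clinical features ->
    probability of erectile dysfunction at 12 months."""
    def __init__(self, img_dim=512, clin_dim=12):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + clin_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, img_emb, clin):
        return torch.sigmoid(self.head(torch.cat([img_emb, clin], dim=1)))

model = LateFusionED()
print(model(torch.randn(4, 512), torch.randn(4, 12)).shape)   # torch.Size([4, 1])
```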

MedCAL-Bench: A Comprehensive Benchmark on Cold-Start Active Learning with Foundation Models for Medical Image Analysis

Ning Zhu, Xiaochuan Ma, Shaoting Zhang, Guotai Wang

arxiv preprint · Aug 5 2025
Cold-Start Active Learning (CSAL) aims to select informative samples for annotation without prior knowledge, which is important for improving annotation efficiency and model performance under a limited annotation budget in medical image analysis. Most existing CSAL methods rely on Self-Supervised Learning (SSL) on the target dataset for feature extraction, which is inefficient and limited by insufficient feature representation. Recently, pre-trained Foundation Models (FMs) have shown powerful feature extraction ability, with potential for better CSAL. However, this paradigm has rarely been investigated, and benchmarks for comparing FMs on CSAL tasks are lacking. To this end, we propose MedCAL-Bench, the first systematic FM-based CSAL benchmark for medical image analysis. We evaluate 14 FMs and 7 CSAL strategies across 7 datasets under different annotation budgets, covering classification and segmentation tasks from diverse medical modalities. It is also the first CSAL benchmark that evaluates both the feature extraction and sample selection stages. Our experimental results reveal that: 1) most FMs are effective feature extractors for CSAL, with the DINO family performing best in segmentation; 2) the performance differences among these FMs are large for segmentation tasks but small for classification; and 3) different sample selection strategies should be considered for different datasets, with Active Learning by Processing Surprisal (ALPS) performing best in segmentation and RepDiv leading for classification. The code is available at https://github.com/HiLab-git/MedCAL-Bench.
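As a flavor of what a feature-based CSAL strategy does, the sketch below implements one common diversity heuristic (in the spirit of, though not identical to, ALPS): cluster the FM embeddings and label the sample nearest each cluster center. The feature dimension and budget are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def coldstart_select(features, budget):
    """Pick up to `budget` diverse samples: the nearest neighbor of each k-means center."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(features)
    picks = {int(np.argmin(np.linalg.norm(features - c, axis=1)))
             for c in km.cluster_centers_}
    return sorted(picks)

feats = np.random.default_rng(0).standard_normal((500, 384))  # e.g., DINO embeddings
print(coldstart_select(feats, budget=10))
```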

GRASPing Anatomy to Improve Pathology Segmentation

Keyi Li, Alexander Jaus, Jens Kleesiek, Rainer Stiefelhagen

arxiv preprint · Aug 5 2025
Radiologists rely on anatomical understanding to accurately delineate pathologies, yet most current deep learning approaches use pure pattern recognition and ignore the anatomical context in which pathologies develop. To narrow this gap, we introduce GRASP (Guided Representation Alignment for the Segmentation of Pathologies), a modular plug-and-play framework that enhances pathology segmentation models by leveraging existing anatomy segmentation models through pseudo-label integration and feature alignment. Unlike previous approaches that obtain anatomical knowledge via auxiliary training, GRASP integrates into standard pathology optimization regimes without retraining anatomical components. We evaluate GRASP on two PET/CT datasets, conduct systematic ablation studies, and investigate the framework's inner workings. We find that GRASP consistently achieves top rankings across multiple evaluation metrics and diverse architectures. The framework's dual anatomy injection strategy, which combines anatomical pseudo-labels as input channels with transformer-guided anatomical feature fusion, effectively incorporates anatomical context.
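The first half of the dual anatomy injection, anatomical pseudo-labels as input channels, amounts to a channel concatenation before the pathology network; the shapes below are assumed for the sketch.

```python
import torch
import torch.nn as nn

image = torch.randn(1, 1, 128, 128)                        # PET/CT slice (toy shape)
anatomy = torch.randint(0, 2, (1, 1, 128, 128)).float()    # frozen anatomy model's pseudo-label
x = torch.cat([image, anatomy], dim=1)                     # image + anatomy channels

first_conv = nn.Conv2d(2, 16, kernel_size=3, padding=1)    # segmenter stem takes 2 channels
print(first_conv(x).shape)                                 # torch.Size([1, 16, 128, 128])
```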

ClinicalFMamba: Advancing Clinical Assessment using Mamba-based Multimodal Neuroimaging Fusion

Meng Zhou, Farzad Khalvati

arxiv preprint · Aug 5 2025
Multimodal medical image fusion integrates complementary information from different imaging modalities to enhance diagnostic accuracy and treatment planning. While deep learning methods have advanced performance, existing approaches face critical limitations: Convolutional Neural Networks (CNNs) excel at local feature extraction but struggle to model global context effectively, while Transformers achieve superior long-range modeling at the cost of quadratic computational complexity, limiting clinical deployment. Recent State Space Models (SSMs) offer a promising alternative, enabling efficient long-range dependency modeling in linear time through selective scan mechanisms. Despite these advances, the extension to 3D volumetric data and the clinical validation of fused images remain underexplored. In this work, we propose ClinicalFMamba, a novel end-to-end CNN-Mamba hybrid architecture that synergistically combines local and global feature modeling for 2D and 3D images. We further design a tri-plane scanning strategy for effectively learning volumetric dependencies in 3D images. Comprehensive evaluations on three datasets demonstrate superior fusion performance across multiple quantitative metrics while achieving real-time fusion. We further validate the clinical utility of our approach on downstream 2D/3D brain tumor classification tasks, achieving superior performance over baseline methods. Our method establishes a new paradigm for efficient multimodal medical image fusion suitable for real-time clinical deployment.
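The tri-plane scanning idea, reslicing a volume so a sequential (Mamba-style) model can scan it along each anatomical plane, can be sketched as three permutations of a 3D tensor; the (D, H, W) axis convention is an assumption for illustration.

```python
import torch

def triplane_views(vol):
    """Reslice a (D, H, W) volume into axial, coronal, and sagittal stacks."""
    axial = vol                        # (D, H, W): scan within axial slices
    coronal = vol.permute(1, 0, 2)     # (H, D, W): scan within coronal slices
    sagittal = vol.permute(2, 0, 1)    # (W, D, H): scan within sagittal slices
    return axial, coronal, sagittal

v = torch.randn(32, 64, 64)
print([tuple(t.shape) for t in triplane_views(v)])
```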