
Muscle-Driven prognostication in gastric cancer: A multicenter deep learning framework integrating Iliopsoas and erector spinae radiomics for 5-Year survival prediction.

Hong Y, Zhang P, Teng Z, Cheng K, Zhang Z, Cheng Y, Cao G, Chen B

PubMed · Jul 1, 2025
This study developed a 5-year survival prediction model for gastric cancer patients by combining radiomics and deep learning, focusing on CT-based 2D and 3D features of the iliopsoas and erector spinae muscles. Retrospective data from 705 patients across two centers were analyzed, with clinical variables assessed via Cox regression and radiomic features extracted using deep learning. The 2D model outperformed the 3D approach, leading to feature fusion across five dimensions, optimized via logistic regression. Results showed no significant association between clinical baseline characteristics and survival, but the 2D model demonstrated strong prognostic performance (AUC ~ 0.8), with attention heatmaps emphasizing spinal muscle regions. The 3D model underperformed due to irrelevant data. The final integrated model achieved stable predictive accuracy, confirming the link between muscle mass and survival. This approach advances precision medicine by enabling personalized prognosis and exploring 3D imaging feasibility, offering insights for gastric cancer research.
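As a rough illustration of the fusion step described above (not the authors' implementation; the feature arrays, dimensions, and 5-fold evaluation are assumptions), a late fusion of 2D and 3D muscle features optimized with logistic regression might look like this:

```python
# Hypothetical sketch: concatenate 2D and 3D muscle radiomic/deep features
# and fit a logistic regression for 5-year survival. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 705
feats_2d = rng.normal(size=(n_patients, 64))   # 2D iliopsoas / erector spinae features
feats_3d = rng.normal(size=(n_patients, 64))   # 3D counterparts
survived_5y = rng.integers(0, 2, size=n_patients)

fused = np.hstack([feats_2d, feats_3d])        # simple feature-level fusion
clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, fused, survived_5y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```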

Determination of the oral carcinoma and sarcoma in contrast enhanced CT images using deep convolutional neural networks.

Warin K, Limprasert W, Paipongna T, Chaowchuen S, Vicharueang S

PubMed · Jul 1, 2025
Oral cancer is a hazardous disease and a major cause of morbidity and mortality worldwide. The purpose of this study was to develop deep convolutional neural network (CNN)-based multiclass classification and object detection models for distinguishing and detecting oral carcinoma and sarcoma in contrast-enhanced CT images. The study included 3,259 CT image slices of oral cancer cases collected from a cancer hospital and two regional hospitals between 2016 and 2020. Multiclass classification models were constructed using DenseNet-169, ResNet-50, EfficientNet-B0, ConvNeXt-Base, and ViT-Base-Patch16-224 to differentiate between oral carcinoma and sarcoma. Additionally, multiclass object detection models, including Faster R-CNN, YOLOv8, and YOLOv11, were designed to autonomously identify and localize lesions by placing bounding boxes on CT images. Performance evaluation on a test dataset showed that the best classification model achieved an accuracy of 0.97, while the best detection models yielded a mean average precision (mAP) of 0.87. In conclusion, CNN-based multiclass models hold great promise for accurately identifying and distinguishing oral carcinoma and sarcoma in CT imaging, potentially enhancing early detection and informing treatment strategies.
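A minimal sketch of one of the classification backbones named above, adapting a pretrained DenseNet-169 to the carcinoma-vs-sarcoma task; the class count, input size, and one-step training loop are assumptions, not the paper's protocol:

```python
# Illustrative setup (not the authors' code): fine-tuning DenseNet-169 on
# contrast-enhanced CT slices. Assumes torchvision >= 0.13 for the weights enum.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # carcinoma vs. sarcoma (assumed)
model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

dummy_ct = torch.randn(4, 3, 224, 224)          # stand-in batch of CT slices
logits = model(dummy_ct)
loss = criterion(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```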

Innovative deep learning classifiers for breast cancer detection through hybrid feature extraction techniques.

Vijayalakshmi S, Pandey BK, Pandey D, Lelisho ME

PubMed · Jul 1, 2025
Breast cancer remains a major cause of mortality among women, where early and accurate detection is critical to improving survival rates. This study presents a hybrid classification approach for mammogram analysis that combines handcrafted statistical features with deep learning techniques. The methodology involves preprocessing with the Shearlet Transform, segmentation using Improved Otsu thresholding and Canny edge detection, followed by feature extraction through the Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), and first-order statistical descriptors. These features are input into a 2D BiLSTM-CNN model designed to learn spatial and sequential patterns in mammogram images. Evaluated on the MIAS dataset, the proposed method achieved 97.14% accuracy, outperforming several benchmark models. The results indicate that this hybrid strategy improves classification performance and may assist radiologists in more effective breast cancer screening.
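The GLCM portion of the handcrafted feature pipeline can be illustrated with scikit-image; the patch, distances, and angles below are placeholders rather than the paper's settings:

```python
# A minimal sketch of GLCM texture descriptors from a (synthetic) mammogram
# patch; these handcrafted features would later be combined with deep features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in ROI
glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```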

AI-based CT assessment of 3117 vertebrae reveals significant sex-specific vertebral height differences.

Palm V, Thangamani S, Budai BK, Skornitzke S, Eckl K, Tong E, Sedaghat S, Heußel CP, von Stackelberg O, Engelhardt S, Kopytova T, Norajitra T, Maier-Hein KH, Kauczor HU, Wielpütz MO

PubMed · Jul 1, 2025
Predicting vertebral height is complex due to individual factors. AI-based medical imaging analysis offers new opportunities for vertebral assessment; these novel methods may thereby contribute to sex-adapted nomograms and vertebral height prediction models, aiding the diagnosis of spinal conditions such as compression fractures and supporting individualized, sex-specific medicine. In this study, an AI-based CT spine analysis of 262 subjects (mean age 32.36 years, range 20-54 years), covering a total of 3117 vertebrae, was conducted to assess sex-associated anatomical variations. Automated segmentations provided anterior, central, and posterior vertebral heights. Regression analysis with a cubic spline linear mixed-effects model was adapted to age, sex, and spinal segment. Measurement reliability was confirmed by two readers, with an intraclass correlation coefficient (ICC) of 0.94-0.98. Female vertebral heights were consistently smaller than those of males (p < 0.05). The largest differences were found in the upper thoracic spine (T1-T6), with mean differences of 7.9-9.0%; specifically, T1 and T2 showed differences of 8.6% and 9.0%, respectively. The strongest height increase between consecutive vertebrae was observed from T9 to L1 (mean slope 1.46, i.e. 6.63%, for females and 1.53, i.e. 6.48%, for males). This study highlights significant sex-based differences in vertebral heights, resulting in sex-adapted nomograms that can enhance diagnostic accuracy and support individualized patient assessments.
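A hedged sketch of the statistical approach, a linear mixed-effects model with a spline term for age and a random intercept per subject, using statsmodels; the synthetic data and column names are invented for illustration:

```python
# Not the authors' model specification: an illustrative cubic-spline
# mixed-effects fit of vertebral height on age, sex, and spinal level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "subject": rng.integers(0, 60, n),
    "age": rng.uniform(20, 54, n),
    "sex": rng.choice(["F", "M"], n),
    "level": rng.integers(1, 25, n),          # vertebral level coded numerically
})
df["height"] = 18 + 0.4 * df["level"] + (df["sex"] == "M") * 1.5 + rng.normal(0, 1, n)

model = smf.mixedlm("height ~ bs(age, df=4) + C(sex) + level",
                    df, groups=df["subject"])  # random intercept per subject
print(model.fit().summary())
```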

Attention residual network for medical ultrasound image segmentation.

Liu H, Zhang P, Hu J, Huang Y, Zuo S, Li L, Liu M, She C

PubMed · Jul 1, 2025
Ultrasound imaging can clearly display the morphology and structure of internal organs, enabling examination of organs such as the breast, liver, and thyroid. It can identify the locations of tumors, nodules, and other lesions, serving as an effective tool for treatment detection and rehabilitation evaluation. Typically, the attending physician must manually delineate the boundaries of lesions, such as tumors, in ultrasound images. Several issues reduce the accuracy of this delineation: the high noise level in ultrasound images, the degradation of image quality caused by surrounding tissues, and the dependence of lesion localization on the operator's experience and proficiency. With the advancement of deep learning, its application in medical image segmentation has become increasingly prevalent. The U-Net model, for instance, has demonstrated favorable performance in medical image segmentation, but the convolution layers of the traditional U-Net are relatively simple, leading to suboptimal extraction of global information; moreover, the significant noise in ultrasound images makes the model prone to interference. In this research, we propose an Attention Residual Network model (ARU-Net). Residual connections within the encoder enhance the learning capacity of the model, and a spatial hybrid convolution module is integrated to strengthen the extraction of global information and deepen the vertical architecture of the network. During feature fusion in the skip connections, a channel attention mechanism and a multi-convolutional self-attention mechanism are introduced to suppress noisy points in the fused feature maps, enabling the model to acquire more information about the target region. Finally, the predictive efficacy of the model was evaluated on publicly available breast and thyroid ultrasound data. ARU-Net achieved mean Intersection over Union (mIoU) values of 82.59% and 84.88%, accuracy values of 97.53% and 96.09%, and F1-scores of 90.06% and 89.7% for breast and thyroid ultrasound, respectively.
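To make the residual-encoder idea concrete, here is an illustrative residual block of the kind the paper describes, not the authors' ARU-Net code; channel counts and layer ordering are assumptions:

```python
# Illustrative residual encoder block: two convolutions plus a skip path
# (identity or 1x1 projection), so the block learns residual corrections.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

x = torch.randn(1, 1, 256, 256)           # stand-in grayscale ultrasound patch
print(ResidualBlock(1, 32)(x).shape)      # torch.Size([1, 32, 256, 256])
```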

Radiomics analysis based on dynamic contrast-enhanced MRI for predicting early recurrence after hepatectomy in hepatocellular carcinoma patients.

Wang KD, Guan MJ, Bao ZY, Shi ZJ, Tong HH, Xiao ZQ, Liang L, Liu JW, Shen GL

PubMed · Jul 1, 2025
This study aimed to develop a machine learning model based on Magnetic Resonance Imaging (MRI) radiomics for predicting early recurrence after curative surgery in patients with hepatocellular carcinoma (HCC). A retrospective analysis was conducted on 200 patients with HCC who underwent curative hepatectomy. Patients were randomly allocated to training (n = 140) and validation (n = 60) cohorts. Preoperative arterial, portal venous, and delayed phase images were acquired. Tumor regions of interest (ROIs) were manually delineated, with an additional peritumoral ROI obtained by expanding the tumor boundary by 5 mm. Radiomic features were extracted and selected using the Least Absolute Shrinkage and Selection Operator (LASSO). Multiple machine learning algorithms were employed to develop predictive models. Model performance was evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, and calibration curves. The 20 most discriminative radiomic features were integrated with tumor size and satellite nodules for model development. In the validation cohort, the clinical-peritumoral radiomics model demonstrated superior predictive accuracy (AUC = 0.85, 95% CI: 0.74-0.95) compared with the clinical-intratumoral radiomics model (AUC = 0.82, 95% CI: 0.68-0.93) and the radiomics-only model (AUC = 0.82, 95% CI: 0.69-0.93). Furthermore, calibration curves and decision curve analyses indicated superior calibration ability and clinical benefit. The MRI-based peritumoral radiomics model demonstrates significant potential for predicting early recurrence of HCC.
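The LASSO feature-selection step could look roughly like the following; the feature matrix, labels, and regularization path are synthetic stand-ins rather than the study's data (a logistic L1 variant is equally common for binary endpoints):

```python
# Hypothetical sketch of LASSO-based radiomic feature selection: features
# whose coefficients survive L1 shrinkage are retained for modeling.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(140, 800))            # training cohort x radiomic features
y = rng.integers(0, 2, size=140)           # early-recurrence label

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)      # indices of retained features
print(f"{selected.size} radiomic features retained")
```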

Brain structural features with functional priori to classify Parkinson's disease and multiple system atrophy using diagnostic MRI.

Zhou K, Li J, Huang R, Yu J, Li R, Liao W, Lu F, Hu X, Chen H, Gao Q

PubMed · Jul 1, 2025
Clinical two-dimensional (2D) MRI data has seen limited application in the early diagnosis of Parkinson's disease (PD) and multiple system atrophy (MSA) due to quality limitations, yet its diagnostic and therapeutic potential remains underexplored. This study presents a novel machine learning framework using reconstructed clinical images to accurately distinguish PD from MSA and identify disease-specific neuroimaging biomarkers. The structure constrained super-resolution network (SCSRN) algorithm was employed to reconstruct clinical 2D MRI data for 56 PD and 58 MSA patients. Features were derived from a functional template, and hierarchical SHAP-based feature selection improved model accuracy and interpretability. In the test set, the Extra Trees and logistic regression models based on the functional template demonstrated an improved accuracy rate of 95.65% and an AUC of 99%. The positive and negative impacts of various features predicting PD and MSA were clarified, with larger fourth ventricular and smaller brainstem volumes being most significant. The proposed framework provides new insights into the comprehensive utilization of clinical 2D MRI images to explore underlying neuroimaging biomarkers that can distinguish between PD and MSA, highlighting disease-specific alterations in brain morphology observed in these conditions.
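A rough illustration of SHAP-based feature ranking on top of an Extra Trees classifier (not the SCSRN pipeline itself; the feature matrix and cohort sizes below are placeholders):

```python
# Illustrative SHAP-guided feature ranking for PD vs. MSA classification;
# all data here is synthetic.
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(114, 90))             # 56 PD + 58 MSA subjects x ROI features
y = np.array([0] * 56 + [1] * 58)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X, y)
sv = shap.TreeExplainer(clf).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv          # handle older/newer shap layouts
importance = np.abs(sv).mean(axis=0)
if importance.ndim > 1:                              # (features, classes) case
    importance = importance.mean(axis=1)
top = np.argsort(importance)[::-1][:10]
print("top-ranked features:", top)
```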

Hybrid transfer learning and self-attention framework for robust MRI-based brain tumor classification.

Panigrahi S, Adhikary DRD, Pattanayak BK

PubMed · Jul 1, 2025
Brain tumors are a significant contributor to cancer-related deaths worldwide. Accurate and prompt detection is crucial to reduce mortality rates and improve patient survival prospects. Magnetic Resonance Imaging (MRI) is crucial for diagnosis, but manual analysis is resource-intensive and error-prone, highlighting the need for robust Computer-Aided Diagnosis (CAD) systems. This paper proposes a novel hybrid model combining Transfer Learning (TL) and attention mechanisms to enhance brain tumor classification accuracy. Leveraging features from the pre-trained DenseNet201 Convolutional Neural Network (CNN) and integrating a Transformer-based architecture, our approach addresses challenges such as computational intensity, detail detection, and noise sensitivity. We also evaluated five additional pre-trained models (VGG19, InceptionV3, Xception, MobileNetV2, and ResNet50V2) and incorporated Multi-Head Self-Attention (MHSA) and Squeeze-and-Excitation Attention (SEA) blocks individually to improve feature representation. Using the Br35H dataset of 3,000 MRI images, our proposed DenseTransformer model achieved a consistent accuracy of 99.41%, demonstrating its reliability as a diagnostic tool. Statistical analysis using a Z-test on Cohen's kappa score, DeLong's test on the AUC, and McNemar's test on the F1-score confirmed the model's reliability. Additionally, Explainable AI (XAI) techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) enhanced model transparency and interpretability. This study underscores the potential of hybrid Deep Learning (DL) models in advancing brain tumor diagnosis and improving patient outcomes.
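One of the attention blocks mentioned above, Squeeze-and-Excitation, can be sketched in a few lines; the reduction ratio and the DenseNet201 channel count are assumptions about where such a block would sit:

```python
# Illustrative Squeeze-and-Excitation (SE) block that reweights feature maps
# channel-wise; not the paper's exact wiring.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pool
        return x * w[:, :, None, None]         # excite: channel reweighting

feat = torch.randn(2, 1920, 7, 7)              # e.g. DenseNet201 final feature maps
print(SEBlock(1920)(feat).shape)               # torch.Size([2, 1920, 7, 7])
```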

Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.

Lamba K, Rani S, Shabaz M

PubMed · Jul 1, 2025
Brain tumors have life-threatening consequences, so timely detection and accurate classification are critical for determining appropriate treatment plans and improving patient outcomes. However, conventional approaches to brain tumor diagnosis, such as reading Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans, are labor-intensive, prone to human error, and rely entirely on the expertise of radiologists. The integration of advanced techniques such as Machine Learning (ML) and Deep Learning (DL) has transformed the healthcare sector in recent years, demonstrating great potential for accurate and improved outcomes when analyzing medical images; however, their black-box nature remains a drawback, because understanding the reasoning behind their predictions is still a major challenge for healthcare professionals and raises concerns about trustworthiness, interpretability, and transparency in clinical settings. To overcome this, an explainable hybrid framework has been proposed that synergizes explainable artificial intelligence (XAI) with a DenseNet201 network for deep feature extraction and a Support Vector Machine (SVM) classifier for robust binary classification of brain MRI scans. A region-adaptive preprocessing pipeline is used to enhance tumor visibility and feature clarity. To address the need for interpretability, multiple XAI techniques, Grad-CAM, Integrated Gradients (IG), and Layer-wise Relevance Propagation (LRP), have been incorporated. Our comparative evaluation shows that LRP achieves the highest performance across all explainability metrics, with 98.64% accuracy, 0.74 F1-score, and 0.78 IoU. The proposed model provides transparent and highly accurate diagnostic predictions, offering a reliable clinical decision support tool. It achieves 0.9801 accuracy, 0.9223 sensitivity, 0.9909 specificity, 0.9154 precision, and 0.9360 F1-score, demonstrating strong potential for real-world brain tumor diagnosis and personalized treatment strategies.
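A hedged sketch of the hybrid DenseNet201-plus-SVM idea (frozen backbone as feature extractor, SVM on pooled features); the data, preprocessing, and hyperparameters are illustrative only:

```python
# Not the authors' pipeline: a frozen DenseNet201 extracting pooled deep
# features that feed an SVM for a tumor / no-tumor decision.
# Assumes torchvision >= 0.13 for the weights enum.
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

backbone = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
backbone.eval()

def extract(batch):                              # batch: (N, 3, 224, 224)
    with torch.no_grad():
        fmap = backbone.features(batch)          # (N, 1920, 7, 7)
        return fmap.mean(dim=(2, 3)).numpy()     # global average pooling

images = torch.randn(16, 3, 224, 224)            # stand-in MRI slices
labels = np.random.randint(0, 2, 16)             # tumor vs. no tumor

svm = SVC(kernel="rbf").fit(extract(images), labels)
print(svm.predict(extract(images[:4])))
```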

A superpixel based self-attention network for uterine fibroid segmentation in high intensity focused ultrasound guidance images.

Wen S, Zhang D, Lei Y, Yang Y

PubMed · Jul 1, 2025
Ultrasound guidance images are widely used for high intensity focused ultrasound (HIFU) therapy; however, speckle, acoustic shadows, and signal attenuation in these images hinder observation by radiologists and make segmentation more difficult. To address these issues, we propose a superpixel-based attention network, which integrates superpixels and self-attention mechanisms to automatically segment tumor regions in ultrasound guidance images. The method is implemented within a region splitting-and-merging framework. The ultrasound guidance image is first over-segmented into superpixels; features within the superpixels are then extracted and encoded into superpixel feature matrices of uniform size. The network takes the superpixel feature matrices and their positional information as input and classifies the superpixels using self-attention modules and convolutional layers. Finally, the superpixels are merged based on the classification results to obtain the tumor region, achieving automatic tumor region segmentation. The method was applied to a local dataset of 140 ultrasound guidance images from uterine fibroid HIFU therapy, and its performance was quantitatively evaluated by comparing the segmentation results with those of pixel-wise segmentation networks. The proposed method achieved a mean intersection over union (IoU) of 75.95% and a mean normalized Hausdorff distance (NormHD) of 7.34%. Compared with the segmentation transformer (SETR), this represents improvements of 5.52% in IoU and 1.49% in NormHD. Paired t-tests assessing the differences in IoU and NormHD between the proposed method and the comparison methods all yielded p-values below 0.05. The analysis of evaluation metrics and segmentation results indicates that the proposed method outperforms existing pixel-wise segmentation networks in segmenting tumor regions on ultrasound guidance images.
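The over-segmentation stage of the split-and-merge framework can be illustrated with SLIC superpixels from scikit-image; the image and parameters below are stand-ins, not the paper's configuration:

```python
# Illustrative first step of region split-and-merge: SLIC over-segmentation
# of a (synthetic) ultrasound guidance image, with simple per-superpixel
# descriptors that a classifier could consume.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

image = np.random.rand(256, 256)                         # stand-in US image
labels = slic(image, n_segments=200, compactness=0.1,
              channel_axis=None, start_label=1)

for region in regionprops(labels, intensity_image=image)[:3]:
    # mean intensity, area, and centroid per superpixel
    print(region.label, region.mean_intensity, region.area, region.centroid)
```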