
Attention residual network for medical ultrasound image segmentation.

Liu H, Zhang P, Hu J, Huang Y, Zuo S, Li L, Liu M, She C

PubMed · Jul 1, 2025
Ultrasound imaging can clearly display the morphology and structure of internal organs, enabling examination of organs such as the breast, liver, and thyroid. It can identify the locations of tumors, nodules, and other lesions, thereby serving as an effective tool for treatment detection and rehabilitation evaluation. Typically, the attending physician must manually demarcate the boundaries of lesions, such as tumors, in ultrasound images. Several issues complicate this task: the high noise level in ultrasound images, image-quality degradation caused by surrounding tissues, and the dependence of lesion localization on the operator's experience and proficiency can all reduce the accuracy of the delineated lesion boundaries. With the advancement of deep learning, its application to medical image segmentation has become increasingly prevalent. Although the U-Net model has demonstrated favorable performance in medical image segmentation, the convolution layers of the traditional U-Net are relatively simple, leading to suboptimal extraction of global information, and the significant noise in ultrasound images makes the model prone to interference. In this research, we propose an Attention Residual Network model (ARU-Net). Residual connections within the encoder enhance the learning capacity of the model, and a spatial hybrid convolution module is integrated to augment the model's ability to extract global information and deepen the vertical architecture of the network. During the feature fusion stage of the skip connections, a channel attention mechanism and a multi-convolutional self-attention mechanism are introduced to suppress noisy points within the fused feature maps, enabling the model to acquire more information about the target region. Finally, the predictive efficacy of the model was evaluated on publicly available breast and thyroid ultrasound data. The ARU-Net achieved mean Intersection over Union (mIoU) values of 82.59% and 84.88%, accuracy values of 97.53% and 96.09%, and F1-score values of 90.06% and 89.7% for breast and thyroid ultrasound, respectively.
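For readers who want a concrete picture of the two ideas highlighted in this abstract (residual encoder blocks and channel attention applied to fused features), a minimal PyTorch sketch follows. It is not the authors' ARU-Net implementation; module names, channel counts, and the reduction ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Encoder stage with a residual (identity) connection."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip path matches the output channel count
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate to down-weight noisy channels."""
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

x = torch.randn(1, 1, 128, 128)        # single-channel ultrasound patch (placeholder)
feat = ResidualBlock(1, 32)(x)         # residual encoder stage
fused = ChannelAttention(32)(feat)     # channel re-weighting before skip-connection fusion
print(fused.shape)                     # torch.Size([1, 32, 128, 128])
```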

Radiomics analysis based on dynamic contrast-enhanced MRI for predicting early recurrence after hepatectomy in hepatocellular carcinoma patients.

Wang KD, Guan MJ, Bao ZY, Shi ZJ, Tong HH, Xiao ZQ, Liang L, Liu JW, Shen GL

PubMed · Jul 1, 2025
This study aimed to develop a machine learning model based on Magnetic Resonance Imaging (MRI) radiomics for predicting early recurrence after curative surgery in patients with hepatocellular carcinoma (HCC). A retrospective analysis was conducted on 200 patients with HCC who underwent curative hepatectomy. Patients were randomly allocated to training (n = 140) and validation (n = 60) cohorts. Preoperative arterial, portal venous, and delayed phase images were acquired. Tumor regions of interest (ROIs) were manually delineated, with an additional peritumoral ROI obtained by expanding the tumor boundary by 5 mm. Radiomic features were extracted and selected using the Least Absolute Shrinkage and Selection Operator (LASSO). Multiple machine learning algorithms were employed to develop predictive models. Model performance was evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, and calibration curves. The 20 most discriminative radiomic features were integrated with tumor size and satellite nodules for model development. In the validation cohort, the clinical-peritumoral radiomics model demonstrated superior predictive accuracy (AUC = 0.85, 95% CI: 0.74-0.95) compared to the clinical-intratumoral radiomics model (AUC = 0.82, 95% CI: 0.68-0.93) and the radiomics-only model (AUC = 0.82, 95% CI: 0.69-0.93). Calibration curves and decision curve analysis likewise indicated superior calibration and clinical benefit. The MRI-based peritumoral radiomics model demonstrates significant potential for predicting early recurrence of HCC.
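As a rough illustration of the LASSO-based feature selection step described above, the scikit-learn sketch below keeps only the radiomic features with non-zero LASSO coefficients. The feature matrix, labels, cohort size, and regularization settings are synthetic placeholders, not the study's data or configuration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 400))      # 140 training patients x 400 radiomic features (assumed shape)
y = rng.integers(0, 2, size=140)     # early-recurrence label (0/1), synthetic

X_scaled = StandardScaler().fit_transform(X)        # standardize features before penalization
lasso = LassoCV(cv=5, random_state=0).fit(X_scaled, y)

selected = np.flatnonzero(lasso.coef_)              # features with non-zero coefficients survive
print(f"{selected.size} features retained out of {X.shape[1]}")
```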

Hybrid transfer learning and self-attention framework for robust MRI-based brain tumor classification.

Panigrahi S, Adhikary DRD, Pattanayak BK

PubMed · Jul 1, 2025
Brain tumors are a significant contributor to cancer-related deaths worldwide. Accurate and prompt detection is crucial to reduce mortality rates and improve patient survival prospects. Magnetic Resonance Imaging (MRI) is central to diagnosis, but manual analysis is resource-intensive and error-prone, highlighting the need for robust Computer-Aided Diagnosis (CAD) systems. This paper proposes a novel hybrid model combining Transfer Learning (TL) and attention mechanisms to enhance brain tumor classification accuracy. Leveraging features from the pre-trained DenseNet201 Convolutional Neural Network (CNN) and integrating a Transformer-based architecture, our approach overcomes challenges such as computational intensity, fine-detail detection, and noise sensitivity. We also evaluated five additional pre-trained models (VGG19, InceptionV3, Xception, MobileNetV2, and ResNet50V2) and incorporated Multi-Head Self-Attention (MHSA) and Squeeze-and-Excitation Attention (SEA) blocks individually to improve feature representation. Using the Br35H dataset of 3,000 MRI images, our proposed DenseTransformer model achieved a consistent accuracy of 99.41%, demonstrating its reliability as a diagnostic tool. Statistical analyses (a Z-test on Cohen's Kappa score, DeLong's test on AUC, and McNemar's test on F1-score) confirm the model's reliability. Additionally, Explainable AI (XAI) techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) enhanced model transparency and interpretability. This study underscores the potential of hybrid Deep Learning (DL) models in advancing brain tumor diagnosis and improving patient outcomes.
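The hybrid idea of coupling pre-trained DenseNet201 features with multi-head self-attention can be sketched roughly as follows. This is not the paper's DenseTransformer configuration; the head count, token pooling, and classifier head are assumptions made only so the example runs end to end.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet201

backbone = densenet201(weights="IMAGENET1K_V1").features.eval()   # pre-trained CNN extractor
mhsa = nn.MultiheadAttention(embed_dim=1920, num_heads=8, batch_first=True)
classifier = nn.Linear(1920, 2)                                   # tumor vs. no tumor (assumed)

x = torch.randn(1, 3, 224, 224)              # MRI slice replicated to 3 channels (placeholder)
with torch.no_grad():
    fmap = backbone(x)                       # (1, 1920, 7, 7) feature map
tokens = fmap.flatten(2).transpose(1, 2)     # (1, 49, 1920) sequence of spatial tokens
attended, _ = mhsa(tokens, tokens, tokens)   # multi-head self-attention over spatial positions
logits = classifier(attended.mean(dim=1))    # pool tokens, then classify
print(logits.shape)                          # torch.Size([1, 2])
```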

Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.

Lamba K, Rani S, Shabaz M

PubMed · Jul 1, 2025
Brain tumors have life-threatening consequences, so timely detection and accurate classification are critical for determining appropriate treatment plans and improving patient outcomes. Conventional approaches to brain tumor diagnosis, such as reading Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans, are often labor-intensive, prone to human error, and entirely reliant on the expertise of radiologists. In recent years, Machine Learning (ML) and Deep Learning (DL) have transformed healthcare through their ability to analyze medical images and have shown great potential for accurate and improved outcomes, but their black-box nature remains a drawback: the reasoning behind their predictions is difficult for healthcare professionals to understand, raising concerns about trustworthiness, interpretability, and transparency in clinical settings. To address this, an explainable hybrid framework is proposed that integrates a DenseNet201 network for deep feature extraction with a Support Vector Machine (SVM) classifier for robust binary classification of brain MRI scans. A region-adaptive preprocessing pipeline is used to enhance tumor visibility and feature clarity. To meet the need for interpretability, multiple explainable artificial intelligence (XAI) techniques, namely Grad-CAM, Integrated Gradients (IG), and Layer-wise Relevance Propagation (LRP), have been incorporated. Our comparative evaluation shows that LRP achieves the highest performance across all explainability metrics, with 98.64% accuracy, 0.74 F1-score, and 0.78 IoU. The proposed model provides transparent and highly accurate diagnostic predictions, offering a reliable clinical decision support tool. It achieves 0.9801 accuracy, 0.9223 sensitivity, 0.9909 specificity, 0.9154 precision, and 0.9360 F1-score, demonstrating strong potential for real-world brain tumor diagnosis and personalized treatment strategies.
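A rough sketch of the DenseNet201-to-SVM pipeline described above is given below: deep features are pooled to one vector per scan and passed to a support vector classifier. Data loading, the region-adaptive preprocessing, and the XAI steps are omitted, and all tensors and labels are placeholders rather than the authors' data.

```python
import numpy as np
import torch
from torchvision.models import densenet201
from sklearn.svm import SVC

extractor = densenet201(weights="IMAGENET1K_V1").features.eval()

def deep_features(batch: torch.Tensor) -> np.ndarray:
    """Global-average-pool DenseNet201 feature maps into one vector per image."""
    with torch.no_grad():
        fmap = extractor(batch)              # (N, 1920, H, W)
    return fmap.mean(dim=(2, 3)).numpy()     # (N, 1920)

# Placeholder tensors standing in for preprocessed brain MRI slices and labels.
images = torch.randn(8, 3, 224, 224)
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # tumor / no tumor

svm = SVC(kernel="rbf").fit(deep_features(images), labels)
print(svm.predict(deep_features(images[:2])))
```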

A superpixel based self-attention network for uterine fibroid segmentation in high intensity focused ultrasound guidance images.

Wen S, Zhang D, Lei Y, Yang Y

PubMed · Jul 1, 2025
Ultrasound guidance images are widely used for high intensity focused ultrasound (HIFU) therapy; however, speckle, acoustic shadows, and signal attenuation in these images hinder radiologists' reading and make segmentation more difficult. To address these issues, we propose a superpixel-based self-attention network that integrates superpixels and self-attention mechanisms to automatically segment tumor regions in ultrasound guidance images. The method follows a region splitting and merging framework. The ultrasound guidance image is first over-segmented into superpixels; features within the superpixels are then extracted and encoded into superpixel feature matrices of uniform size. The network takes the superpixel feature matrices and their positional information as input and classifies superpixels using self-attention modules and convolutional layers. Finally, the superpixels are merged based on the classification results to obtain the tumor region, achieving automatic tumor segmentation. The method was applied to a local dataset of 140 ultrasound guidance images from uterine fibroid HIFU therapy. Performance was quantitatively evaluated by comparing the segmentation results with those of pixel-wise segmentation networks. The proposed method achieved 75.95% mean intersection over union (IoU) and 7.34% mean normalized Hausdorff distance (NormHD). Compared with the segmentation transformer (SETR), this represents an improvement of 5.52% in IoU and 1.49% in NormHD. Paired t-tests were conducted to evaluate the significance of the differences in IoU and NormHD between the proposed method and the comparison methods; all p-values were less than 0.05. The analysis of evaluation metrics and segmentation results indicates that the proposed method outperforms existing pixel-wise segmentation networks in segmenting the tumor region in ultrasound guidance images.
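The region-splitting step (over-segmentation into superpixels followed by per-superpixel feature encoding) can be illustrated with scikit-image's SLIC as below. The superpixel count and the simple (mean, std) descriptor are stand-ins for the paper's richer feature matrices, positional encoding, and attention classifier.

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(256, 256)                          # placeholder guidance image
labels = slic(image, n_segments=300, channel_axis=None)   # over-segment into superpixels

# Encode each superpixel as a simple (mean, std) descriptor.
features = np.array([
    [image[labels == sp].mean(), image[labels == sp].std()]
    for sp in np.unique(labels)
])
print(features.shape)   # (number of superpixels, 2)
```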

Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer.

Thariat J, Mesbah Z, Chahir Y, Beddok A, Blache A, Bourhis J, Fatallah A, Hatt M, Modzelewski R

PubMed · Jul 1, 2025
Reconstructive flap surgery aims to restore the substance and function lost to tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlation with functional outcomes after surgery or post-operative radiotherapy (poRT). Because flaps are ectopic tissues of varying components (fat, skin, fascia, muscle, bone), volume, shape, and texture, and because the postoperative bed shows anatomical modifications, inflammation, and edema, the segmentation task is challenging. We built an artificial intelligence-enabled automatic soft-tissue flap segmentation method from CT scans of Head and Neck Cancer (HNC) patients. Ground-truth flap segmentation masks were delineated by two experts on postoperative CT scans of 148 HNC patients undergoing poRT. All CTs and flaps (free or pedicled, soft tissue only or bone) were kept, including those with artefacts, to ensure generalizability. A deep-learning nnUNetv2 framework was built using Hounsfield Unit (HU) windowing to mimic radiological assessment. A transformer-based 2D "Segment Anything Model" (MedSAM) was also built and fine-tuned to medical CTs. Models were compared using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95). Flaps were located in the oral cavity (N = 102), oropharynx (N = 26), or larynx/hypopharynx (N = 20). There were free flaps (N = 137) and pedicled flaps (N = 11), comprising soft-tissue-only flaps (N = 92), reconstructed bone (N = 42), and bone resected without reconstruction (N = 40). The nnUNet-windowing model outperformed the nnUNetv2 and MedSAM models, achieving a mean DSC of 0.69 and an HD95 of 25.6 mm under 5-fold cross-validation. Segmentation performed better in the absence of artifacts and of rare situations such as pedicled flaps, laryngeal primaries, and resected bone without bone reconstruction (p < 0.01). Automatic flap segmentation demonstrates clinical performance sufficient to quantify spontaneous and radiation-induced volume shrinkage of flaps. Free flaps achieved excellent performance; rare situations will be addressed by fine-tuning the network.
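The Hounsfield-unit windowing used to mimic radiological assessment can be sketched as below. The soft-tissue window center and width shown here are common defaults, not values reported in the abstract, and the CT volume is synthetic.

```python
import numpy as np

def window_hu(volume: np.ndarray, center: float = 40.0, width: float = 400.0) -> np.ndarray:
    """Clip a CT volume to a Hounsfield-unit window and rescale to [0, 1]."""
    low, high = center - width / 2, center + width / 2
    clipped = np.clip(volume, low, high)
    return (clipped - low) / (high - low)

ct = np.random.randint(-1000, 1500, size=(64, 128, 128)).astype(np.float32)  # fake CT in HU
windowed = window_hu(ct)
print(windowed.min(), windowed.max())   # 0.0 ... 1.0
```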

Generative AI for weakly supervised segmentation and downstream classification of brain tumors on MR images.

Yoo JJ, Namdar K, Wagner MW, Yeom KW, Nobre LF, Tabori U, Hawkins C, Ertl-Wagner BB, Khalvati F

PubMed · Jul 1, 2025
Segmenting abnormalities is a leading problem in medical imaging. Using machine learning for segmentation generally requires manually annotated segmentations, demanding extensive time and resources from radiologists. We propose a weakly supervised approach that uses binary image-level labels, which are much simpler to acquire than manual annotations, to segment brain tumors on magnetic resonance images. The proposed method generates healthy variants of cancerous images for use as priors when training the segmentation model. However, using weakly supervised segmentations for downstream tasks such as classification can be challenging because of occasional unreliable segmentations. To address this, we propose using the generated non-cancerous variants to identify the most effective segmentations without requiring ground truths. Our method generates segmentations that achieve Dice coefficients of 79.27% on the Multimodal Brain Tumor Segmentation (BraTS) 2020 dataset and 73.58% on an internal dataset of pediatric low-grade glioma (pLGG), which increase to 88.69% and 80.29%, respectively, when suboptimal segmentations identified by the proposed method are removed. Using the segmentations for tumor classification yields Areas Under the Receiver Operating Characteristic curve (AUCs) of 93.54% and 83.74% on the BraTS and pLGG datasets, respectively. These are comparable to using manual annotations, which achieve AUCs of 95.80% and 83.03% on the BraTS and pLGG datasets, respectively.
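The Dice coefficients quoted above compare predicted and reference tumor masks; a minimal NumPy implementation of that metric is sketched here for reference, with two toy square masks as input.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
truth = np.zeros((64, 64), dtype=bool); truth[12:32, 12:32] = True
print(round(dice_coefficient(pred, truth), 3))   # overlap of two square masks
```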

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

PubMed · Jul 1, 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and then extended to additional datasets, to achieve region-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with the noise predicted by the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate the final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
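One way to picture the fusion of segmented regions, predicted diffusion noise, and original images through EfficientNet-B0 is sketched below. The DDPM and the transformer decoder are omitted, and the three input tensors are random placeholders rather than real model outputs; simple concatenation of pooled features stands in for the paper's fusion strategy.

```python
import torch
from torchvision.models import efficientnet_b0

extractor = efficientnet_b0(weights="IMAGENET1K_V1").features.eval()

def pooled(x: torch.Tensor) -> torch.Tensor:
    """Global-average-pooled EfficientNet-B0 features, shape (N, 1280)."""
    with torch.no_grad():
        return extractor(x).mean(dim=(2, 3))

patch = torch.randn(1, 3, 224, 224)          # histopathology patch (placeholder)
segmentation = torch.randn(1, 3, 224, 224)   # DDPM-derived region map (placeholder)
noise = torch.randn(1, 3, 224, 224)          # predicted diffusion noise (placeholder)

fused = torch.cat([pooled(patch), pooled(segmentation), pooled(noise)], dim=1)
print(fused.shape)                           # torch.Size([1, 3840])
```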

An adaptive deep learning approach based on InBNFus and CNNDen-GRU networks for breast cancer and maternal fetal classification using ultrasound images.

Fatima M, Khan MA, Mirza AM, Shin J, Alasiry A, Marzougui M, Cha J, Chang B

PubMed · Jul 1, 2025
Convolutional Neural Networks (CNNs), a sophisticated deep learning technique, have proven highly effective in identifying and classifying abnormalities related to various diseases. Manual classification of these abnormalities is a tedious and time-consuming process, so a computerized technique is essential. Most existing methods are designed to address a single specific problem, limiting their adaptability. In this work, we propose a novel adaptive deep-learning framework for simultaneously classifying breast cancer and maternal-fetal ultrasound datasets. Data augmentation was applied in the preprocessing phase to address the data imbalance problem. Two novel architectures are then proposed: InBNFus and CNNDen-GRU. The InBNFus network combines a 5-block inception-based architecture (Model 1) and a 5-block inverted bottleneck-based architecture (Model 2) through a depth-wise concatenation layer, while CNNDen-GRU incorporates a 5-block dense architecture with an integrated GRU layer. After training, features were extracted from the global average pooling and GRU layers and classified using neural network classifiers. The experimental evaluation achieved enhanced accuracy rates of 99.0% for breast cancer, 96.6% for maternal-fetal (common planes), and 94.6% for maternal-fetal (brain) datasets. Additionally, the models consistently achieve high precision, recall, and F1 scores across both datasets. A comprehensive ablation study was performed, and the results show the superior performance of the proposed models.
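The depth-wise (channel) concatenation that joins the two InBNFus branches can be illustrated as follows; the two branches here are trivial stand-ins for the inception-based and inverted-bottleneck blocks described in the abstract, and the channel counts are assumptions.

```python
import torch
import torch.nn as nn

branch_a = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())   # stand-in for Model 1
branch_b = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())   # stand-in for Model 2

x = torch.randn(1, 3, 224, 224)                        # ultrasound image (placeholder)
fused = torch.cat([branch_a(x), branch_b(x)], dim=1)   # depth-wise concatenation -> 64 channels
pooled = nn.AdaptiveAvgPool2d(1)(fused).flatten(1)     # global average pooling before classification
print(pooled.shape)                                    # torch.Size([1, 64])
```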

Personalized prediction model generated with machine learning for kidney function one year after living kidney donation.

Oki R, Hirai T, Iwadoh K, Kijima Y, Hashimoto H, Nishimura Y, Banno T, Unagami K, Omoto K, Shimizu T, Hoshino J, Takagi T, Ishida H, Hirai T

PubMed · Jul 1, 2025
Living kidney donors typically experience approximately a 30% reduction in kidney function after donation, although the degree of reduction varies among individuals. This study aimed to develop a machine learning (ML) model to predict serum creatinine (Cre) levels at one year post-donation using preoperative clinical data, including kidney-, fat-, and muscle-volumetry values from computed tomography. A total of 204 living kidney donors were included. Symbolic regression via genetic programming was employed to create an ML-based Cre prediction model using preoperative clinical variables. Validation was conducted using a 7:3 training-to-test data split. The ML model demonstrated a median absolute error of 0.079 mg/dL for predicting Cre. In the validation cohort, it outperformed conventional methods (which assume post-donation eGFR to be 70% of the preoperative value) with higher R² (0.58 vs. 0.27), lower root mean squared error (5.27 vs. 6.89), and lower mean absolute error (3.92 vs. 5.8). Key predictive variables included preoperative Cre and remnant kidney volume. The model was deployed as a web application for clinical use. The ML model offers accurate predictions of post-donation kidney function and may assist in monitoring donor outcomes, enhancing personalized care after kidney donation.
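Symbolic regression via genetic programming of the kind used above can be sketched with the gplearn library (an assumption; the authors' tooling is not stated), here on synthetic stand-ins for preoperative creatinine and remnant kidney volume rather than the study's cohort.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
preop_cre = rng.uniform(0.5, 1.2, size=200)        # preoperative creatinine, mg/dL (synthetic)
remnant_vol = rng.uniform(120.0, 220.0, size=200)  # remnant kidney volume, mL (synthetic)
X = np.column_stack([preop_cre, remnant_vol])
# Synthetic one-year creatinine used only so the example runs end to end.
y = 1.3 * preop_cre - 0.001 * remnant_vol + rng.normal(0.0, 0.05, 200)

model = SymbolicRegressor(population_size=500, generations=10, random_state=0)
model.fit(X, y)
print(model._program)   # evolved closed-form expression for post-donation creatinine
```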