Breast ultrasound images for segmentation and classification using multi-task U-Net.
Authors
Affiliations (8)
- Department of ECE, SR University, Warangal, Telangana, 506371, India.
- Department of ECE, Sumathi Reddy Institute of Technology for Women, Warangal, India.
- Department of Software Engineering, Faculty of Engineering and Architecture, Recep Tayyip Erdoğan University, Zihni Derin Yerleşkesi, Fener, Merkez/Rize, 53100, Türkiye.
- Department of Computer Science, Faculty of Information and Communication Sciences, University of Ilorin, Ilorin, 240003, Kwara, Nigeria.
- Department of Information Systems, Faculty of Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife, Osun State, Nigeria. [email protected].
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India. [email protected].
- Department of Operations and Quality Management, Durban University of Technology, Durban, South Africa.
- Centre for Ecological Intelligence, Faculty of Engineering and the Built Environment, University of Johannesburg, Johannesburg, South Africa.
Abstract
Breast ultrasound imaging is widely used for the early detection of breast cancer due to its accessibility and effectiveness, particularly in dense breast tissue. However, its diagnostic performance is often affected by operator dependency, speckle noise, low contrast, and variability in data quality. Although deep learning methods have shown promise in automated tumor segmentation and classification, their clinical applicability remains limited by challenges such as small and imbalanced datasets, inconsistent annotations, and the lack of integrated learning strategies. In this work, we propose a Multi-Task U-Net framework that jointly performs lesion segmentation and tumor classification by leveraging shared feature representations. The proposed method incorporates a deterministic oversampling strategy to handle class imbalance, a prediction-refinement module to ensure consistency between segmentation and classification outputs, and an attention-guided feature learning mechanism to enhance lesion localization. Additionally, a curated version of the BUSI dataset is constructed by removing duplicate and inconsistent samples to ensure reliable evaluation. The proposed model achieves a Dice score of up to 0.81 in comparative evaluation, along with a classification accuracy of 0.96-0.98, demonstrating improved performance over baseline methods. The consistent performance across both segmentation and classification tasks indicates good generalization capability despite dataset limitations. Overall, the proposed multi-task framework provides an effective and reliable solution for automated breast cancer detection in ultrasound images and shows strong potential for clinical application.
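The abstract mentions a deterministic oversampling strategy for handling class imbalance. As a hedged illustration only (the paper's actual implementation is not given here), such a strategy might resemble the following sketch, where minority-class samples are repeated in a fixed cyclic order until every class matches the majority-class count; all names are hypothetical:

```python
from collections import Counter

def deterministic_oversample(samples, labels):
    """Balance classes by deterministically cycling through each
    minority class's samples until its count matches the majority
    class. Hypothetical sketch, not the authors' implementation."""
    counts = Counter(labels)
    target = max(counts.values())  # majority-class count

    # Group samples by class label, preserving input order.
    by_class = {}
    for sample, label in zip(samples, labels):
        by_class.setdefault(label, []).append(sample)

    # Repeat each class's samples cyclically up to the target count;
    # iterating over sorted labels keeps the result deterministic.
    out_samples, out_labels = [], []
    for label in sorted(by_class):
        group = by_class[label]
        for i in range(target):
            out_samples.append(group[i % len(group)])
            out_labels.append(label)
    return out_samples, out_labels
```

Because the repetition order is fixed (no random sampling), repeated runs over the same dataset yield identical oversampled sets, which helps make evaluation reproducible.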