
HyFusion-X: hybrid deep and traditional feature fusion with ensemble classifiers for breast cancer detection using mammogram and ultrasound images.

November 3, 2025 · PubMed

Authors

Shaukat A, Farhan S, Javed R, Idris S, Alamri FS, Ara A, Saba T

Affiliations (4)

  • Department of Computer Science, Lahore College for Women University, Lahore, 54000, Pakistan.
  • Center of Excellence in Cyber Security (CYBEX), Prince Sultan University, Riyadh, Saudi Arabia.
  • Department of Mathematical Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 116711, Riyadh, Saudi Arabia. [email protected].
  • AIDA Lab, College of Computer and Information Sciences (CCIS), Prince Sultan University, 11586, Riyadh, Saudi Arabia.

Abstract

Breast cancer detection and diagnosis remain challenging due to the complexity of tumor tissue and variations in image quality, which hinder early, accurate identification. Timely diagnosis is vital for initiating treatment and improving patient outcomes. This study presents a novel hybrid feature fusion method that combines deep features from pre-trained models (ResNet50, InceptionV3, and MobileNetV2) with traditional texture features from Gabor filters and wavelet transforms, applied separately to mammogram and ultrasound datasets for breast cancer detection. A robust pre-processing pipeline, including image resizing, scaling, normalization, and CLAHE for contrast enhancement, is used to improve model performance. Data augmentation strengthens model robustness, and tumor segmentation is performed with Otsu's multi-thresholding to localize high-intensity regions accurately. The hybrid feature extraction yields 600 features, which are pruned by statistical feature selection to improve classification accuracy. Ensemble machine learning algorithms (XGBoost, AdaBoost, and CatBoost) classify breast lesions across four datasets: Mini-DDSM and INbreast for mammograms, and Rodrigues and BUSI for ultrasound. Unlike most prior work, the fusion is applied to both mammogram and ultrasound modalities within a single framework, a combination that has not been widely explored; this multi-modal design is intended to improve robustness and generalizability across imaging types. Comparing the ensemble classifiers demonstrates the effectiveness of the proposed approach: the models achieved classification accuracies of 98.67% on Rodrigues, 97.06% on INbreast, 97.02% on BUSI, and 95.00% on Mini-DDSM. These results highlight the method's effectiveness and its potential to improve breast cancer detection.
Future research will focus on comparisons with state-of-the-art models and real-world clinical applications.
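The segmentation step the abstract describes, Otsu's multi-thresholding, picks intensity cutoffs that maximize between-class variance of the image histogram, so the brightest class can be treated as the candidate tumor region. A minimal pure-Python sketch for two thresholds (three classes) using exhaustive search, which is practical only for a small number of classes; a real pipeline would use an optimized routine such as scikit-image's `threshold_multiotsu`. The synthetic pixel values are illustrative, not from the paper's data:

```python
from itertools import combinations

def multi_otsu(pixels, classes=3, levels=256):
    """Multi-level Otsu: exhaustively search for the (classes - 1)
    thresholds that maximize between-class variance of the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    # Prefix sums of probability mass and mean mass give O(1) class stats.
    cum_w = [0.0] * (levels + 1)
    cum_m = [0.0] * (levels + 1)
    for i in range(levels):
        cum_w[i + 1] = cum_w[i] + hist[i] / n
        cum_m[i + 1] = cum_m[i] + i * hist[i] / n
    best_var, best = -1.0, None
    for thresh in combinations(range(1, levels), classes - 1):
        bounds = (0,) + thresh + (levels,)
        var = 0.0
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            w = cum_w[hi] - cum_w[lo]
            if w == 0:          # empty class: skip this threshold combo
                break
            m = cum_m[hi] - cum_m[lo]
            var += m * m / w    # equivalent to sum(w_i * mu_i^2)
        else:
            if var > best_var:
                best_var, best = var, thresh
    return best

# Toy grayscale "image" with three intensity clusters (background,
# tissue, high-intensity lesion); the thresholds should separate them.
pixels = [20] * 100 + [120] * 100 + [220] * 100
t1, t2 = multi_otsu(pixels, classes=3)
```

Pixels above `t2` would then form the high-intensity mask passed on to feature extraction.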
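The fusion step concatenates deep CNN embeddings with Gabor/wavelet texture descriptors into one vector (600 features in the paper) and then prunes it with statistical feature selection. A toy sketch of that fuse-then-select idea, scoring each fused feature with a one-way ANOVA F statistic for a two-class problem; the hand-made feature values, the two-class setup, and the exact selection rule are illustrative assumptions, not the paper's implementation:

```python
def f_score(xs_pos, xs_neg):
    """One-way ANOVA F statistic for a single feature over two groups."""
    n1, n2 = len(xs_pos), len(xs_neg)
    m1, m2 = sum(xs_pos) / n1, sum(xs_neg) / n2
    m = (sum(xs_pos) + sum(xs_neg)) / (n1 + n2)
    ss_between = n1 * (m1 - m) ** 2 + n2 * (m2 - m) ** 2
    ss_within = (sum((x - m1) ** 2 for x in xs_pos)
                 + sum((x - m2) ** 2 for x in xs_neg))
    return (ss_between / 1) / (ss_within / (n1 + n2 - 2))

def fuse_and_select(deep_feats, texture_feats, labels, k):
    # 1) Fuse: concatenate deep and hand-crafted features per sample.
    fused = [d + t for d, t in zip(deep_feats, texture_feats)]
    # 2) Score every fused feature by how well it separates the classes.
    scores = []
    for j in range(len(fused[0])):
        pos = [row[j] for row, y in zip(fused, labels) if y == 1]
        neg = [row[j] for row, y in zip(fused, labels) if y == 0]
        scores.append((f_score(pos, neg), j))
    # 3) Keep only the k most discriminative features.
    keep = sorted(j for _, j in sorted(scores, reverse=True)[:k])
    return [[row[j] for j in keep] for row in fused], keep

# Tiny illustrative data: the first "deep" feature tracks the label,
# the other columns are noise, so selection should keep index 0.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
deep = [[10.1, 0.3], [9.8, 0.1], [10.3, 0.2], [9.9, 0.4],
        [0.2, 0.3], [0.1, 0.2], [0.4, 0.1], [0.3, 0.2]]
texture = [[0.5], [0.4], [0.6], [0.5], [0.5], [0.6], [0.4], [0.5]]
selected, keep = fuse_and_select(deep, texture, labels, k=1)
```

The reduced matrix `selected` is what an ensemble classifier (XGBoost, AdaBoost, or CatBoost in the paper) would be trained on.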

Topics

  • Breast Neoplasms
  • Mammography
  • Ultrasonography, Mammary
  • Image Processing, Computer-Assisted
  • Image Interpretation, Computer-Assisted
  • Journal Article
