Detection of breast cancer using fractional discrete sinc transform based on empirical Fourier decomposition.

Azmy MM

PubMed · Jun 20, 2025
Breast cancer is the most common cause of death among women worldwide. Early detection of breast cancer is important for saving patients' lives. Ultrasound and mammography are the most common noninvasive methods for detecting breast cancer, and computer techniques are used to help physicians diagnose it. In most previous studies, classification performance was not high enough to achieve a correct diagnosis. In this study, new approaches were applied to detect breast cancer images from three databases. The software used to extract features from the images was MATLAB R2022a. Novel approaches were obtained using new fractional transforms, which were deduced from the fractional Fourier transform and from novel discrete transforms derived from the discrete sine and cosine transforms. The steps of the approaches were as follows. First, fractional transforms were applied to the breast images. Then, the empirical Fourier decomposition (EFD) was obtained. The mean, variance, kurtosis, and skewness were subsequently calculated. Finally, an RNN-BiLSTM (recurrent neural network-bidirectional long short-term memory) was used in the classification phase. The proposed approaches were compared to obtain the highest accuracy rate during the classification phase based on the different fractional transforms. The highest accuracy rate was obtained when the fractional discrete sinc transform of approach 4 was applied: the area under the receiver operating characteristic curve (AUC) was 1, and the accuracy, sensitivity, specificity, precision, G-mean, and F-measure rates were all 100%. If traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), were used instead, the classification rates would be lower. The fourth approach therefore used the RNN-BiLSTM to exploit the features of breast images fully. This approach can be programmed on a computer to help physicians classify breast images correctly.
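
As a rough illustration of the feature step described above, the sketch below computes the four moment features (mean, variance, kurtosis, skewness) from decomposed components. The fractional transform and the empirical Fourier decomposition themselves are assumed to be computed elsewhere; the arrays here are placeholders.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def moment_features(components):
    """components: iterable of arrays, e.g., EFD modes of a transformed image."""
    feats = []
    for c in components:
        c = np.asarray(c, dtype=float).ravel()
        feats.extend([c.mean(), c.var(), kurtosis(c), skew(c)])
    return np.array(feats)  # one feature vector per image, fed to the classifier

modes = [np.random.randn(256) for _ in range(4)]  # placeholder EFD components
print(moment_features(modes).shape)               # (16,)
```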

MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images.

Ling D, Jiao X

PubMed · Jun 20, 2025
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
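
The abstract does not detail the distillation objective; below is a minimal sketch of the standard soft-target formulation (temperature-scaled KL divergence plus hard-label cross-entropy) that could be used to compress the multi-view teacher into a single-view student. The temperature T and weight alpha are assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: temperature-scaled KL divergence to the teacher's outputs
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 restores the gradient scale after temperature softening
    # Hard targets: ordinary cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```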

Artificial intelligence-based tumor size measurement on mammography: agreement with pathology and comparison with human readers' assessments across multiple imaging modalities.

Kwon MR, Kim SH, Park GE, Mun HS, Kang BJ, Kim YT, Yoon I

PubMed · Jun 20, 2025
To evaluate the agreement between artificial intelligence (AI)-based tumor size measurements of breast cancer and the final pathology and compare these results with those of other imaging modalities. This retrospective study included 925 women (mean age, 55.3 years ± 11.6) with 936 breast cancers, who underwent digital mammography, breast ultrasound, and magnetic resonance imaging before breast cancer surgery. AI-based tumor size measurement was performed on post-processed mammographic images, outlining areas with AI abnormality scores of 10, 50, and 90%. Absolute agreement between AI-based tumor sizes, image modalities, and histopathology was assessed using intraclass correlation coefficient (ICC) analysis. Concordant and discordant cases between AI measurements and histopathologic examinations were compared. Tumor size with an abnormality score of 50% showed the highest agreement with histopathologic examination (ICC = 0.54, 95% confidence interval [CI]: 0.49-0.59), showing comparable agreement with mammography (ICC = 0.54, 95% CI: 0.48-0.60, p = 0.40). For ductal carcinoma in situ and human epidermal growth factor receptor 2-positive cancers, AI revealed higher agreement than mammography (ICC = 0.76, 95% CI: 0.67-0.84 and ICC = 0.73, 95% CI: 0.52-0.85). Overall, 52.0% (487/936) of cases were discordant, with such cases more commonly observed in younger patients with dense breasts, multifocal malignancies, lower abnormality scores, and different imaging characteristics. AI-based tumor size measurements with an abnormality score of 50% showed moderate agreement with histopathology but demonstrated size discordance in more than half of the cases. While comparable to mammography, its limitations emphasize the need for further refinement and research.
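
For readers reproducing the agreement analysis, here is a minimal sketch of an absolute-agreement ICC using the pingouin package; the lesion IDs, sizes, and the choice of the ICC2 (two-way random, absolute agreement) form are illustrative assumptions, not the study's data.

```python
import pandas as pd
import pingouin as pg

# Long-format table: each lesion measured by two "raters" (AI and pathology)
df = pd.DataFrame({
    "lesion":  [1, 1, 2, 2, 3, 3],
    "method":  ["AI", "pathology"] * 3,
    "size_mm": [14.0, 15.5, 22.0, 20.5, 9.0, 10.0],
})
icc = pg.intraclass_corr(data=df, targets="lesion",
                         raters="method", ratings="size_mm")
print(icc[icc["Type"] == "ICC2"])  # absolute-agreement, single-measure ICC
```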

The diagnostic accuracy of MRI radiomics in axillary lymph node metastasis prediction: a systematic review and meta-analysis.

Motiei M, Mansouri SS, Tamimi A, Farokhi S, Fakouri A, Rassam K, Sedighi-Pirsaraei N, Hassanzadeh-Rad A

PubMed · Jun 20, 2025
Breast cancer is the most prevalent malignancy in women and a leading cause of mortality. Accurate assessment of axillary lymph node metastasis (LNM) is critical for breast cancer management, so exploring non-invasive methods such as radiomics for the detection of LNM is highly important. We systematically searched PubMed, Embase, Scopus, Web of Science, and Google Scholar until 11 March 2024. To assess the risk of bias and quality of studies, we utilized the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool as well as the radiomics quality score (RQS). Area under the curve (AUC), sensitivity, specificity, and accuracy were determined for each study to evaluate the diagnostic accuracy of radiomics in magnetic resonance imaging (MRI) for detecting LNM in patients with breast cancer. This meta-analysis of 20 studies (5072 patients) demonstrated an overall AUC of 0.83 (95% confidence interval (CI): 0.80-0.86). Subgroup analysis revealed a trend towards higher specificity when radiomics was combined with clinical factors (0.83) compared to radiomics alone (0.79). Sensitivity analysis confirmed the robustness of the findings, and publication bias was not evident. The radiomics models raised the probability of LNM from a 37% pre-test estimate to 73.2% after a positive result and lowered it to 8% after a negative result, highlighting their potential clinical utility. Radiomics as a non-invasive method demonstrates strong potential for detecting LNM in breast cancer, offering clinical promise. However, further standardization and validation are needed in future studies.
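
The pre- and post-test figures follow from applying likelihood ratios to pre-test odds via Bayes' rule. A worked sketch is below; the sensitivity and specificity values are illustrative choices that happen to reproduce the reported probabilities, not the paper's pooled estimates.

```python
def post_test_prob(pre_p, sens, spec, positive=True):
    """Bayes update: pre-test probability -> post-test probability."""
    pre_odds = pre_p / (1 - pre_p)
    lr = sens / (1 - spec) if positive else (1 - sens) / spec  # LR+ or LR-
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

print(post_test_prob(0.37, sens=0.88, spec=0.81, positive=True))   # ~0.732
print(post_test_prob(0.37, sens=0.88, spec=0.81, positive=False))  # ~0.080
```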

Optimized YOLOv8 for enhanced breast tumor segmentation in ultrasound imaging.

Mostafa AM, Alaerjan AS, Aldughayfiq B, Allahem H, Mahmoud AA, Said W, Shabana H, Ezz M

PubMed · Jun 19, 2025
Breast cancer significantly affects people's health globally, making early and accurate diagnosis vital. While ultrasound imaging is safe and non-invasive, its manual interpretation is subjective. This study explores machine learning (ML) techniques to improve breast ultrasound image segmentation, comparing models trained on combined versus separate classes of benign and malignant tumors. The YOLOv8 object detection algorithm is applied to the image segmentation task, aiming to capitalize on its robust feature detection capabilities. We utilized a dataset of 780 ultrasound images categorized into benign and malignant classes to train several deep learning (DL) models: UNet, UNet with DenseNet-121, VGG16, VGG19, and an adapted YOLOv8. These models were evaluated in two experimental setups: training on a combined dataset and training on separate datasets for the benign and malignant classes. Performance metrics such as the Dice Coefficient, Intersection over Union (IoU), and mean Average Precision (mAP) were used to assess model effectiveness. The study demonstrated substantial improvements in model performance when trained on separate classes, with the UNet model's F1-score increasing from 77.80% to 84.09% and its Dice Coefficient from 75.58% to 81.17%, and the adapted YOLOv8 model's F1-score improving from 93.44% to 95.29% and its Dice Coefficient from 82.10% to 84.40%. These results highlight the advantage of specialized model training and the potential of using advanced object detection algorithms for segmentation tasks. This research underscores the significant potential of specialized training strategies and innovative model adaptations in medical imaging segmentation, ultimately contributing to better patient outcomes.
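
For reference, a minimal sketch of the two overlap metrics used in the evaluation, computed on binary masks of identical shape:

```python
import numpy as np

def dice_iou(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # predicted mask
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True   # ground-truth mask
print(dice_iou(a, b))  # (0.5625, ~0.3913)
```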

Artificial Intelligence Language Models to Translate Professional Radiology Mammography Reports Into Plain Language - Impact on Interpretability and Perception by Patients.

Pisarcik D, Kissling M, Heimer J, Farkas M, Leo C, Kubik-Huch RA, Euler A

PubMed · Jun 19, 2025
This study aimed to evaluate the interpretability and patient perception of AI-translated mammography and sonography reports, focusing on comprehensibility, follow-up recommendations, and conveyed empathy, using a survey. In this observational study, three fictional mammography and sonography reports with BI-RADS categories 3, 4, and 5 were created. These reports were repeatedly translated into plain language by three different large language models (LLMs: ChatGPT-4, ChatGPT-4o, Google Gemini). In the first step, the best of these repeatedly translated reports for each BI-RADS category and LLM was selected by two experts in breast imaging, considering factual correctness, completeness, and quality. In the second step, female participants compared and rated the translated reports regarding comprehensibility, follow-up recommendations, conveyed empathy, and the additional value of each report, using a survey with Likert scales. Statistical analysis included cumulative link mixed models and the Plackett-Luce model for ranking preferences. Forty women participated in the survey. GPT-4 and GPT-4o were rated significantly higher than Gemini across all categories (P<.001). Participants >50 years of age rated the reports significantly higher than participants aged 18-29 years (P<.05). Higher education predicted lower ratings (P=.02). Having had no prior mammography increased scores (P=.03), and AI experience had no effect (P=.88). Ranking analysis showed GPT-4o as the most preferred (Plackett-Luce worth = .48), followed by GPT-4 (.37), with Gemini ranked last (.15). Patient preferences differed among the AI-translated radiology reports. Compared with a traditional report using radiological language, AI-translated reports added value for patients and enhanced comprehensibility and empathy, and they therefore hold the potential to improve patient communication in breast imaging.
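
The study's prompts are not reported, so the sketch below is only one plausible way the translation step could be invoked; the system prompt, model choice, and wording are assumptions, and it uses the OpenAI chat-completions client for the ChatGPT models evaluated.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def to_plain_language(report_text: str) -> str:
    # Hypothetical prompt; the study's actual instructions may differ.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": ("Rewrite the following mammography/sonography report in "
                         "plain, empathetic language for a patient. Keep the "
                         "BI-RADS category and follow-up recommendation explicit.")},
            {"role": "user", "content": report_text},
        ],
    )
    return resp.choices[0].message.content
```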

Applying a multi-task and multi-instance framework to predict axillary lymph node metastases in breast cancer.

Li Y, Chen Z, Ding Z, Mei D, Liu Z, Wang J, Tang K, Yi W, Xu Y, Liang Y, Cheng Y

PubMed · Jun 18, 2025
Deep learning (DL) models have shown promise in predicting axillary lymph node (ALN) status. However, most existing DL models are classification-only and do not consider the practical scenario of multi-view joint prediction. Here, we propose a Multi-Task Learning (MTL) and Multi-Instance Learning (MIL) framework that simulates the real-world clinical diagnostic scenario for ALN status prediction in breast cancer. Ultrasound images of the primary tumor and ALN (if available) regions were collected, each annotated with a segmentation label. The model was trained on a training cohort and tested on both internal and external test cohorts. The proposed two-stage DL framework, using the Transformer-based Segformer as its network backbone, was the top-performing model. It achieved an AUC of 0.832, a sensitivity of 0.815, and a specificity of 0.854 in the internal test cohort. In the external cohort, it attained an AUC of 0.918, a sensitivity of 0.851, and a specificity of 0.957. The Class Activation Mapping method demonstrated that the DL model correctly identified the characteristic areas of metastasis within the primary tumor and ALN regions. This framework may serve as an effective second reader to assist clinicians in ALN status assessment.
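
The abstract describes a shared backbone serving both a segmentation objective and an ALN-status classification objective. A schematic PyTorch sketch of such a two-head layout follows; the tiny convolutional encoder stands in for the Segformer backbone, and all widths and loss weightings are placeholders.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, encoder: nn.Module, feat_ch: int, n_classes: int = 2):
        super().__init__()
        self.encoder = encoder                # Segformer in the paper
        self.seg_head = nn.Conv2d(feat_ch, 1, kernel_size=1)  # mask logits
        self.cls_head = nn.Sequential(        # ALN-status logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, n_classes))

    def forward(self, x):
        f = self.encoder(x)                   # (B, feat_ch, H, W) feature map
        return self.seg_head(f), self.cls_head(f)

net = MultiTaskNet(nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU()),
                   feat_ch=64)
seg, cls = net(torch.randn(2, 3, 224, 224))
print(seg.shape, cls.shape)  # (2, 1, 224, 224) and (2, 2)
```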

Artificial Intelligence in Breast US Diagnosis and Report Generation.

Wang J, Tian H, Yang X, Wu H, Zhu X, Chen R, Chang A, Chen Y, Dou H, Huang R, Cheng J, Zhou Y, Gao R, Yang K, Li G, Chen J, Ni D, Dong F, Xu J, Gu N

PubMed · Jun 18, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate an artificial intelligence (AI) system for generating breast ultrasound (BUS) reports. Materials and Methods This retrospective study included 104,364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82,896 cases, validated on 10,385 cases, and tested on an internal set (10,383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (> 10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (7 years of experience), as well as reports from three junior radiologists (2-3 years of experience) with and without AI assistance. The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for non-inferiority and significance testing. Results In external test set 1 (300 cases), the midlevel radiologist and AI system achieved BI-RADS acceptance rates of 95.00% [285/300] versus 92.33% [277/300] (<i>P</i> < .001; non-inferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), three junior radiologists had BI-RADS acceptance rates of 87.00% [348/400] versus 90.75% [363/400] (<i>P</i> = .06), 86.50% [346/400] versus 92.00% [368/400] ( <i>P</i> = .007), and 84.75% [339/400] versus 90.25% [361/400] (<i>P</i> = .02) with and without AI assistance, respectively. Conclusion The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification. ©RSNA, 2025.

Step-by-Step Approach to Design Image Classifiers in AI: An Exemplary Application of the CNN Architecture for Breast Cancer Diagnosis

Lohani, A., Mishra, B. K., Wertheim, K. Y., Fagbola, T. M.

medRxiv preprint · Jun 17, 2025
In recent years, different Convolutional Neural Network (CNN) approaches have been applied to image classification, both in general and for specific problems such as breast cancer diagnosis, but there is no standard approach to facilitate comparison and synergy. This paper presents a step-by-step approach to standardising a common image-classification application, using the classification of breast ultrasound images for breast cancer diagnosis as an illustrative example. In this study, three distinct datasets, the Breast Ultrasound Image (BUSI), Breast Ultrasound Image (BUI), and Ultrasound Breast Images for Breast Cancer (UBIBC) datasets, were used to build and fine-tune custom and pre-trained CNN models systematically. Custom CNN models were built, and transfer learning (TL) was applied to deploy a broad range of pre-trained models, optimised with data augmentation techniques and hyperparameter tuning. Models were trained and tested in scenarios involving limited and large datasets to gain insight into their robustness and generality. The results indicated that the custom CNN and VGG19 are the two most suitable architectures for this problem. The experimental results highlight the significance of an effective step-by-step approach in image-classification tasks for enhancing the robustness and generalisation capabilities of CNN-based classifiers.
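
A minimal transfer-learning sketch in the spirit of the paper's VGG19 setup is shown below; the input size, classifier head, and training hyperparameters are assumptions rather than the authors' configuration.

```python
import tensorflow as tf

# Frozen ImageNet backbone with a small trainable head for benign/malignant
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```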
