Page 4 of 876 results

Improving YOLO-based breast mass detection with transfer learning pretraining on the OPTIMAM Mammography Image Database.

Ho PS, Tsai HY, Liu I, Lee YY, Chan SW

PubMed · Jul 1 2025
Early detection of breast cancer through mammography significantly improves survival rates. However, high false positive and false negative rates remain a challenge. Deep learning-based computer-aided diagnosis systems can assist in lesion detection, but their performance is often limited by the availability of labeled clinical data. This study systematically evaluated the effectiveness of transfer learning, image preprocessing techniques, and the latest You Only Look Once (YOLO) model (v9) for optimizing breast mass detection models on small proprietary datasets. We examined 133 mammography images containing masses and assessed various preprocessing strategies, including cropping and contrast enhancement. We further investigated the impact of transfer learning using the OPTIMAM Mammography Image Database (OMI-DB) compared with training on proprietary data alone. The performance of YOLOv9 was evaluated against YOLOv7 to determine improvements in detection accuracy. Pretraining on the OMI-DB dataset with cropped images significantly improved model performance, with YOLOv7 achieving a 13.9% higher mean average precision (mAP) and a 13.2% higher F1-score compared to training only on proprietary data. Among the tested models and configurations, the best results were obtained using YOLOv9 pretrained on OMI-DB and fine-tuned with cropped proprietary images, yielding an mAP of 73.3% ± 16.7% and an F1-score of 76.0% ± 13.4%; under this condition, YOLOv9 outperformed YOLOv7 by 8.1% in mAP and 9.2% in F1-score. This study provides a systematic evaluation of transfer learning and preprocessing techniques for breast mass detection in small datasets. Our results demonstrate that YOLOv9 with OMI-DB pretraining significantly enhances the performance of breast mass detection models while reducing training time, providing a valuable guideline for optimizing deep learning models in data-limited clinical applications.
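
A minimal sketch of the two-stage transfer-learning recipe described above (pretrain a detector on OMI-DB, then fine-tune on the small set of cropped proprietary images), assuming the Ultralytics YOLO API; the dataset configuration files omi_db.yaml and proprietary_cropped.yaml are hypothetical placeholders, and this is not the authors' actual pipeline.

# Stage 1: pretrain on the public OMI-DB mammograms (sketch, not the authors' code).
from ultralytics import YOLO

model = YOLO("yolov9c.pt")                      # start from generic pretrained weights
model.train(data="omi_db.yaml", epochs=100, imgsz=640)

# Stage 2: fine-tune the OMI-DB-pretrained weights on cropped proprietary images.
model = YOLO("runs/detect/train/weights/best.pt")
model.train(data="proprietary_cropped.yaml", epochs=50, imgsz=640)

# Evaluate mean average precision on the proprietary validation split.
metrics = model.val(data="proprietary_cropped.yaml")
print(metrics.box.map50)                        # mAP at IoU 0.5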

Innovative deep learning classifiers for breast cancer detection through hybrid feature extraction techniques.

Vijayalakshmi S, Pandey BK, Pandey D, Lelisho ME

PubMed · Jul 1 2025
Breast cancer remains a major cause of mortality among women, where early and accurate detection is critical to improving survival rates. This study presents a hybrid classification approach for mammogram analysis by combining handcrafted statistical features and deep learning techniques. The methodology involves preprocessing with the Shearlet Transform, segmentation using Improved Otsu thresholding and Canny edge detection, followed by feature extraction through Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), and 1st-order statistical descriptors. These features are input into a 2D BiLSTM-CNN model designed to learn spatial and sequential patterns in mammogram images. Evaluated on the MIAS dataset, the proposed method achieved 97.14% accuracy, outperforming several benchmark models. The results indicate that this hybrid strategy offers improvements in classification performance and may assist radiologists in more effective breast cancer screening.
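
As an illustration of the handcrafted part of this hybrid pipeline, the snippet below computes GLCM texture properties and first-order statistics for a region of interest with scikit-image and SciPy; the Shearlet preprocessing, GLRLM features, and the 2D BiLSTM-CNN itself are omitted, and the function name is illustrative only.

import numpy as np
from scipy.stats import kurtosis, skew
from skimage.feature import graycomatrix, graycoprops

def handcrafted_features(roi: np.ndarray) -> np.ndarray:
    """GLCM texture properties plus first-order statistics for a 2-D uint8 ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop).mean()
               for prop in ("contrast", "homogeneity", "energy", "correlation")]
    pixels = roi.astype(float).ravel()
    first_order = [pixels.mean(), pixels.var(), skew(pixels), kurtosis(pixels)]
    return np.array(texture + first_order)

features = handcrafted_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8))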

Contrast-enhanced mammography-based interpretable machine learning model for the prediction of molecular subtypes of breast cancer.

Ma M, Xu W, Yang J, Zheng B, Wen C, Wang S, Xu Z, Qin G, Chen W

PubMed · Jul 1 2025
This study aims to establish a machine learning prediction model to explore the correlation between contrast-enhanced mammography (CEM) imaging features and molecular subtypes of mass-type breast cancer. This retrospective study included women with breast cancer who underwent CEM preoperatively between 2018 and 2021. We included 241 patients, who were randomly assigned to either a training or a test set in a 7:3 ratio. Twenty-one features were visually described, including four clinical features and seventeen radiological features; the radiological features were extracted from the CEM images. Three binary classifications of subtypes were performed: Luminal vs. non-Luminal, HER2-enriched vs. non-HER2-enriched, and triple-negative (TNBC) vs. non-triple-negative. A multinomial naive Bayes (MNB) machine learning scheme was employed for the classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive features for the classifiers. The classification performance was evaluated using the area under the receiver operating characteristic curve. We also utilized SHapley Additive exPlanations (SHAP) values to explain the prediction model. The model that used a combination of low-energy (LE) and dual-energy subtraction (DES) images achieved the best performance compared to using either of the two images alone, yielding an area under the receiver operating characteristic curve of 0.798 for Luminal vs. non-Luminal subtypes, 0.695 for TNBC vs. non-TNBC, and 0.773 for HER2-enriched vs. non-HER2-enriched. The SHAP analysis shows that "LE_mass_margin_spiculated," "DES_mass_enhanced_margin_spiculated," and "DES_mass_internal_enhancement_homogeneous" have the most significant impact on the model's performance in predicting Luminal vs. non-Luminal breast cancer, while "mass_calcification_relationship_no," "calcification_type_no," and "LE_mass_margin_spiculated" have a considerable impact on the model's performance in predicting HER2-enriched vs. non-HER2-enriched breast cancer. The radiological characteristics of breast tumors extracted from CEM were found to be associated with breast cancer subtypes in our study. Future research is needed to validate these findings.
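
A rough sketch of the modelling steps named above (LASSO-style feature selection, a multinomial naive Bayes classifier, AUC evaluation, and SHAP explanations), using scikit-learn and the shap package; the synthetic binary descriptor matrix below merely stands in for the 21 visually assessed CEM features.

import numpy as np
import shap
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(241, 21)).astype(float)   # 241 patients x 21 binary descriptors (synthetic)
y = rng.integers(0, 2, size=241)                        # e.g. Luminal vs. non-Luminal labels (synthetic)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)   # 7:3 split as in the study

clf = make_pipeline(
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear")),  # L1 feature selection
    MultinomialNB(),
)
clf.fit(X_train, y_train)
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

# Model-agnostic SHAP values for the positive-class probability.
explainer = shap.Explainer(lambda d: clf.predict_proba(d)[:, 1], X_train)
shap_values = explainer(X_test)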

The evaluation of artificial intelligence in mammography-based breast cancer screening: Is breast-level analysis enough?

Taib AG, Partridge GJW, Yao L, Darker I, Chen Y

PubMed · Jun 25 2025
To assess whether the diagnostic performance of a commercial artificial intelligence (AI) algorithm for mammography differs between breast-level and lesion-level interpretations and to compare its performance with that of a large population of specialised human readers. We retrospectively analysed 1200 mammograms from the NHS breast cancer screening programme using a commercial AI algorithm and assessments from 1258 trained human readers from the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance programme. For breasts containing pathologically confirmed malignancies, breast-level and lesion-level analyses were performed; the latter considered the locations of regions of interest marked by AI and by humans, and the highest score per lesion was recorded. For non-malignant breasts, a breast-level analysis recorded the highest score per breast. Area under the curve (AUC), sensitivity and specificity were calculated at the developer's recommended threshold for recall. The study was designed to detect a medium-sized effect (odds ratio 3.5 or 0.29) for sensitivity. The test set contained 882 non-malignant breasts (73%) and 318 malignant breasts (27%), with 328 cancer lesions. The AI AUC was 0.942 at breast level and 0.929 at lesion level (difference -0.013, p < 0.01). The mean human AUC was 0.878 at breast level and 0.851 at lesion level (difference -0.027, p < 0.01). AI outperformed human readers at both the breast and lesion level (p < 0.01 for both) according to the AUC. AI's diagnostic performance significantly decreased at the lesion level, indicating reduced accuracy in localising malignancies; however, its overall performance exceeded that of human readers. Question: AI often recalls mammography cases not recalled by humans; to understand why, human readers must consider the regions of interest it has marked as cancerous. Findings: Evaluations of AI typically occur at the breast level, but performance decreases when AI is evaluated at the lesion level; this also occurs for humans. Clinical relevance: To improve human-AI collaboration, AI should be assessed at the lesion level; poor accuracy here may lead to automation bias and unnecessary patient procedures.
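
The breast-level versus lesion-level scoring contrast described above can be illustrated as follows; the grouping of per-ROI suspicion scores into dictionaries is a simplification for illustration, not the study's exact matching protocol.

from sklearn.metrics import roc_auc_score

def breast_level_auc(scores_by_breast, label_by_breast):
    """Highest ROI score per breast against breast-level ground truth."""
    breasts = list(label_by_breast)
    y = [label_by_breast[b] for b in breasts]
    s = [max(scores_by_breast.get(b, [0.0])) for b in breasts]
    return roc_auc_score(y, s)

def lesion_level_auc(scores_by_lesion, scores_by_normal_breast):
    """Highest matched score per cancer lesion against the highest score per normal breast."""
    pos = [max(v) if v else 0.0 for v in scores_by_lesion.values()]        # unmarked lesion scores 0
    neg = [max(v) if v else 0.0 for v in scores_by_normal_breast.values()]
    return roc_auc_score([1] * len(pos) + [0] * len(neg), pos + neg)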

Reconsidering Explicit Longitudinal Mammography Alignment for Enhanced Breast Cancer Risk Prediction

Solveig Thrun, Stine Hansen, Zijun Sun, Nele Blum, Suaiba A. Salahuddin, Kristoffer Wickstrøm, Elisabeth Wetzer, Robert Jenssen, Maik Stille, Michael Kampffmeyer

arXiv preprint · Jun 24 2025
Regular mammography screening is essential for early breast cancer detection. Deep learning-based risk prediction methods have sparked interest in adjusting screening intervals for high-risk groups. While early methods focused only on current mammograms, recent approaches leverage the temporal aspect of screenings to track breast tissue changes over time, which requires spatial alignment across different time points. Two main strategies for this have emerged: explicit feature alignment through deformable registration and implicit learned alignment using techniques like transformers, with the former providing more control. However, the optimal approach for explicit alignment in mammography remains underexplored. In this study, we provide insights into where explicit alignment should occur (input space vs. representation space) and whether alignment and risk prediction should be jointly optimized. We demonstrate that jointly learning explicit alignment in representation space while optimizing risk estimation performance, as done in the current state-of-the-art approach, results in a trade-off between alignment quality and predictive performance. We further show that image-level alignment is superior to representation-level alignment, leading to better deformation field quality and enhanced risk prediction accuracy. The code is available at https://github.com/sot176/Longitudinal_Mammogram_Alignment.git.
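
A minimal PyTorch sketch of the image-level explicit alignment favoured by the study: a predicted deformation field warps the prior mammogram to the current one in input space before feature extraction. The registration_net, encoder, and risk_head names in the usage comment are hypothetical, and this is not the authors' implementation.

import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp image (B, C, H, W) with a per-pixel displacement field flow (B, 2, H, W)."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(image.device)   # identity sampling grid
    coords = base.unsqueeze(0) + flow                               # absolute sampling coordinates
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0                     # normalise to [-1, 1]
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                    # (B, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)

# prior_aligned = warp(prior_image, registration_net(prior_image, current_image))
# risk = risk_head(encoder(current_image), encoder(prior_aligned))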

Detection of breast cancer using fractional discrete sinc transform based on empirical Fourier decomposition.

Azmy MM

PubMed · Jun 20 2025
Breast cancer is the most common cause of death among women worldwide. Early detection of breast cancer is important for saving patients' lives. Ultrasound and mammography are the most common noninvasive methods for detecting breast cancer, and computer techniques are used to help physicians diagnose it. In most previous studies, the classification parameter rates were not high enough to achieve a correct diagnosis. In this study, new approaches were applied to detect breast cancer images from three databases. The programming software used to extract features from the images was MATLAB R2022a. Novel approaches were obtained using new fractional transforms. These fractional transforms were deduced from the fractional Fourier transform and novel discrete transforms, which were derived from the discrete sine and cosine transforms. The steps of the approaches are described below. First, fractional transforms were applied to the breast images. Then, the empirical Fourier decomposition (EFD) was obtained. The mean, variance, kurtosis, and skewness were subsequently calculated. Finally, an RNN-BiLSTM (recurrent neural network with bidirectional long short-term memory) was used in the classification phase. The proposed approaches were compared to obtain the highest accuracy rate during the classification phase based on different fractional transforms. The highest accuracy rate was obtained when the fractional discrete sinc transform of approach 4 was applied: the area under the receiver operating characteristic curve (AUC) was 1, and the accuracy, sensitivity, specificity, precision, G-mean, and F-measure rates were 100%. If traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), were used, the classification parameter rates would be low. Therefore, the fourth approach used an RNN-BiLSTM to extract the features of breast images. This approach can be programmed on a computer to help physicians correctly classify breast images.
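
A small sketch of the feature and classification stages named above: first-order statistics (mean, variance, kurtosis, skewness) of transform coefficients feeding a bidirectional LSTM classifier, written in Python with SciPy and PyTorch rather than MATLAB. The fractional discrete sinc transform and the EFD are not standard library routines, so a random coefficient array stands in for their output here.

import numpy as np
import torch
import torch.nn as nn
from scipy.stats import kurtosis, skew

def first_order_stats(coeffs: np.ndarray) -> np.ndarray:
    c = coeffs.ravel().astype(float)
    return np.array([c.mean(), c.var(), kurtosis(c), skew(c)], dtype=np.float32)

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features=4, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, sequence, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])              # classify from the final time step

coeffs = np.random.rand(64, 64)                 # placeholder for EFD components of a transformed image
x = torch.from_numpy(first_order_stats(coeffs)).reshape(1, 1, 4)
logits = BiLSTMClassifier()(x)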

Artificial intelligence-based tumor size measurement on mammography: agreement with pathology and comparison with human readers' assessments across multiple imaging modalities.

Kwon MR, Kim SH, Park GE, Mun HS, Kang BJ, Kim YT, Yoon I

PubMed · Jun 20 2025
To evaluate the agreement between artificial intelligence (AI)-based tumor size measurements of breast cancer and the final pathology and to compare these results with those of other imaging modalities. This retrospective study included 925 women (mean age, 55.3 years ± 11.6) with 936 breast cancers, who underwent digital mammography, breast ultrasound, and magnetic resonance imaging before breast cancer surgery. AI-based tumor size measurement was performed on post-processed mammographic images, outlining areas with AI abnormality scores of 10%, 50%, and 90%. Absolute agreement between AI-based tumor sizes, imaging modalities, and histopathology was assessed using intraclass correlation coefficient (ICC) analysis. Concordant and discordant cases between AI measurements and histopathologic examinations were compared. Tumor size with an abnormality score of 50% showed the highest agreement with histopathologic examination (ICC = 0.54, 95% confidence interval [CI]: 0.49-0.59), which was comparable to the agreement of mammography (ICC = 0.54, 95% CI: 0.48-0.60, p = 0.40). For ductal carcinoma in situ and human epidermal growth factor receptor 2-positive cancers, AI showed higher agreement than mammography (ICC = 0.76, 95% CI: 0.67-0.84 and ICC = 0.73, 95% CI: 0.52-0.85, respectively). Overall, 52.0% (487/936) of cases were discordant; these cases were more commonly observed in younger patients with dense breasts, multifocal malignancies, lower abnormality scores, and different imaging characteristics. AI-based tumor size measurements with abnormality scores of 50% showed moderate agreement with histopathology but demonstrated size discordance in more than half of the cases. While comparable to mammography, its limitations emphasize the need for further refinement and research.
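
The agreement analysis described above can be sketched with pingouin's ICC routine on a long-format table (one row per tumor per measurement method); the data below are synthetic stand-ins for the pathology and AI size measurements.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
pathology_mm = rng.uniform(5, 50, size=100)                       # synthetic pathology sizes
ai_mm = pathology_mm + rng.normal(0, 8, size=100)                 # synthetic AI (50% score) sizes

df = pd.DataFrame({
    "tumor": np.tile(np.arange(100), 2),
    "method": ["pathology"] * 100 + ["ai_score_50"] * 100,
    "size_mm": np.concatenate([pathology_mm, ai_mm]),
})
icc = pg.intraclass_corr(data=df, targets="tumor", raters="method", ratings="size_mm")
print(icc[icc["Type"] == "ICC2"])   # two-way random effects, absolute agreement, single measures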

Artificial Intelligence Language Models to Translate Professional Radiology Mammography Reports Into Plain Language - Impact on Interpretability and Perception by Patients.

Pisarcik D, Kissling M, Heimer J, Farkas M, Leo C, Kubik-Huch RA, Euler A

PubMed · Jun 19 2025
This study aimed to evaluate the interpretability and patient perception of AI-translated mammography and sonography reports, focusing on comprehensibility, follow-up recommendations, and conveyed empathy, using a survey. In this observational study, three fictional mammography and sonography reports with BI-RADS categories 3, 4, and 5 were created. These reports were repeatedly translated into plain language by three different large language models (LLMs: ChatGPT-4, ChatGPT-4o, Google Gemini). In a first step, the best of these repeatedly translated reports for each BI-RADS category and LLM was selected by two experts in breast imaging, considering factual correctness, completeness, and quality. In a second step, female participants compared and rated the translated reports regarding comprehensibility, follow-up recommendations, conveyed empathy, and the additional value of each report using a survey with Likert scales. Statistical analysis included cumulative link mixed models and the Plackett-Luce model for ranking preferences. Forty women participated in the survey. GPT-4 and GPT-4o were rated significantly higher than Gemini across all categories (P<.001). Participants >50 years of age rated the reports significantly higher than participants aged 18-29 years (P<.05). Higher education predicted lower ratings (P=.02), no prior mammography increased scores (P=.03), and AI experience had no effect (P=.88). Ranking analysis showed GPT-4o as the most preferred (P=.48), followed by GPT-4 (P=.37), with Gemini ranked last (P=.15). Patient preference differed among AI-translated radiology reports. Compared with a traditional report using radiological language, AI-translated reports add value for patients and enhance comprehensibility and empathy, and therefore hold the potential to improve patient communication in breast imaging.
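
A small illustration of the Plackett-Luce ranking analysis mentioned above, fitted by maximum likelihood with SciPy; the rankings are synthetic (items 0, 1, 2 standing in for GPT-4, GPT-4o, and Gemini), not the study's survey data, and the cumulative link mixed models are not reproduced here.

import numpy as np
from scipy.optimize import minimize

# Synthetic best-to-worst orderings of the three report versions (0=GPT-4, 1=GPT-4o, 2=Gemini).
rankings = [[1, 0, 2]] * 20 + [[0, 1, 2]] * 15 + [[2, 0, 1]] * 5
n_items = 3

def neg_log_likelihood(theta_free):
    theta = np.concatenate(([0.0], theta_free))      # fix the first item's log-worth for identifiability
    nll = 0.0
    for r in rankings:
        for j in range(len(r) - 1):
            nll -= theta[r[j]] - np.logaddexp.reduce(theta[r[j:]])
    return nll

res = minimize(neg_log_likelihood, np.zeros(n_items - 1))
worth = np.exp(np.concatenate(([0.0], res.x)))
print(worth / worth.sum())                           # estimated preference shares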

Appropriateness of acute breast symptom recommendations provided by ChatGPT.

Byrd C, Kingsbury C, Niell B, Funaro K, Bhatt A, Weinfurtner RJ, Ataya D

PubMed · Jun 16 2025
We evaluated the accuracy of ChatGPT-3.5's responses to common questions regarding acute breast symptoms and explored whether using lay language, as opposed to medical language, affected the accuracy of the responses. Questions were formulated addressing acute breast conditions, informed by the American College of Radiology (ACR) Appropriateness Criteria (AC) and our clinical experience at a tertiary referral breast center. Of these, seven addressed the most common acute breast symptoms, nine addressed pregnancy-associated breast symptoms, and four addressed specific management and imaging recommendations for a palpable breast abnormality. Questions were submitted three times to ChatGPT-3.5, and all responses were assessed by five fellowship-trained breast radiologists. Evaluation criteria included clinical judgment and adherence to the ACR guidelines, with responses scored as: 1) "appropriate," 2) "inappropriate" if any response contained inappropriate information, or 3) "unreliable" if responses were inconsistent. A majority vote determined the appropriateness for each question. ChatGPT-3.5 generated appropriate responses for 7/7 (100%) questions regarding common acute breast symptoms when phrased both colloquially and using standard medical terminology. In contrast, it generated appropriate responses for 3/9 (33%) questions about pregnancy-associated breast symptoms and 3/4 (75%) questions about management and imaging recommendations for a palpable breast abnormality. ChatGPT-3.5 can provide healthcare information related to the appropriate management of acute breast symptoms when prompted with either standard medical terminology or lay phrasing of the questions. However, physician oversight remains critical given the presence of inappropriate recommendations for pregnancy-associated breast symptoms and management of palpable abnormalities.
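
A sketch of the querying step described above, assuming the OpenAI Python client; the questions shown are abbreviated illustrations of the lay and medical phrasings, and the radiologist scoring happens offline.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "I found a lump in my breast. What should I do?",                        # lay phrasing
    "What is the appropriate imaging work-up for a palpable breast mass?",   # medical terminology
]

responses = {}
for question in questions:
    # Each question is submitted three times to capture variability across responses.
    responses[question] = [
        client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content
        for _ in range(3)
    ]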
