Page 12 of 14133 results

Comparative analysis of deep learning methods for breast ultrasound lesion detection and classification.

Vallez N, Mateos-Aparicio-Ruiz I, Rienda MA, Deniz O, Bueno G

pubmed logopapers · May 16 2025
Breast ultrasound (BUS) computer-aided diagnosis (CAD) systems aim to perform two major steps: detecting lesions and classifying them as benign or malignant. However, the impact of combining both steps has not been previously addressed. Moreover, the specific method employed can influence the final outcome of the system. In this work, a comparison of the effects of using object detection, semantic segmentation and instance segmentation to detect lesions in BUS images was conducted. To this end, four approaches were examined: a) multi-class object detection, b) one-class object detection followed by localized region classification, c) multi-class segmentation, and d) one-class segmentation followed by segmented region classification. Additionally, a novel dataset for BUS segmentation, called BUS-UCLM, has been gathered, annotated and shared publicly. The evaluation of the proposed methods was carried out with this new dataset and four publicly available datasets: BUSI, OASBUD, RODTOOK and UDIAT. Among the four approaches compared, multi-class detection and multi-class segmentation achieved the best results when instance segmentation CNNs were used. The best results in detection were obtained with a multi-class Mask R-CNN, with a COCO AP50 metric of 72.9%. In the multi-class segmentation scenario, Poolformer achieved the best results, with a Dice score of 77.7%. The analysis of detection and segmentation models in BUS highlights several key challenges, emphasizing the complexity of accurately identifying and segmenting lesions. Among the methods evaluated, instance segmentation proved the most effective for BUS images, offering superior performance in delineating individual lesions.
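The COCO AP50 metric above counts a predicted lesion as a true positive when its intersection-over-union (IoU) with a ground-truth lesion reaches 0.5. A minimal sketch of that matching rule, using hypothetical bounding boxes rather than the paper's actual detections:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Hypothetical ground-truth lesion and detection: IoU = 300/500 = 0.6,
# so this detection would count as a true positive at the AP50 threshold.
gt = (10, 10, 30, 30)
det = (15, 10, 35, 30)
print(round(box_iou(det, gt), 3), box_iou(det, gt) >= 0.5)
```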

Machine learning prediction of pathological complete response to neoadjuvant chemotherapy with peritumoral breast tumor ultrasound radiomics: compare with intratumoral radiomics and clinicopathologic predictors.

Yao J, Zhou W, Jia X, Zhu Y, Chen X, Zhan W, Zhou J

pubmed logopapers · May 16 2025
Noninvasive, accurate and novel approaches to predict which patients will achieve pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) could assist treatment strategies. The aim of this study was to explore the application of a machine learning (ML) based peritumoral ultrasound radiomics signature (PURS), compared with intratumoral radiomics (IURS) and clinicopathologic factors, for early prediction of pCR. We analyzed 358 locally advanced breast cancer patients (250 in the training set and 108 in the test set) who received NAC and post-NAC surgery at our institution. The clinical and pathological data were analyzed using the independent t test and the chi-square test to determine the factors associated with pCR. The PURS and IURS of baseline breast tumors were extracted using 3D Slicer and PyRadiomics software. Five ML classifiers, including linear discriminant analysis (LDA), support vector machine (SVM), random forest (RF), logistic regression (LR), and adaptive boosting (AdaBoost), were applied to construct radiomics predictive models. The performance of the PURS and IURS models and the clinicopathologic predictors was assessed with respect to sensitivity, specificity, accuracy and the areas under the curve (AUCs). Ninety-seven patients achieved pCR. The clinicopathologic predictors obtained an AUC of 0.759. Among the PURS models, the RF classifier achieved better efficacy (AUC of 0.889) than LR (0.849), AdaBoost (0.823), SVM (0.746) and LDA (0.732). The RF classifier also obtained the maximum AUC among the IURS models in the test set, at 0.931, versus 0.920 (AdaBoost), 0.875 (LR), 0.825 (SVM), and 0.798 (LDA). The RF-based PURS yielded higher predictive ability (AUC 0.889; 95% CI 0.814, 0.947) than the clinicopathologic factors (AUC 0.759; 95% CI 0.657, 0.861; p < 0.05), but lower efficacy compared with IURS (AUC 0.931; 95% CI 0.865, 0.980; p < 0.05). Peritumoral US radiomics, as a novel potential biomarker, can assist clinical therapy decisions.
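The radiomics pipeline described above (a feature matrix in, an RF classifier out, AUC on a held-out test set) can be sketched with scikit-learn. The feature matrix here is synthetic random data standing in for PyRadiomics outputs, not the study's data, and the train/test sizes merely echo the paper's 250/108 split:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for a radiomics matrix: 358 patients x 20 features.
X = rng.normal(size=(358, 20))
# Synthetic pCR label driven by the first two features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=358) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=108, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

The same scaffold would accept any of the paper's five classifiers by swapping the estimator.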

Challenges in Implementing Artificial Intelligence in Breast Cancer Screening Programs: Systematic Review and Framework for Safe Adoption.

Goh S, Goh RSJ, Chong B, Ng QX, Koh GCH, Ngiam KY, Hartman M

pubmed logopapers · May 15 2025
Artificial intelligence (AI) studies show promise in enhancing the accuracy and efficiency of mammographic screening programs worldwide. However, integrating AI into clinical workflows faces several challenges, including unintended errors, the need for professional training, and ethical concerns. Notably, specific frameworks for AI imaging in breast cancer screening are still lacking. This study aims to identify the challenges associated with implementing AI in breast screening programs and to apply the Consolidated Framework for Implementation Research (CFIR) to discuss a practical governance framework for AI in this context. Three electronic databases (PubMed, Embase, and MEDLINE) were searched using combinations of the keywords "artificial intelligence," "regulation," "governance," "breast cancer," and "screening." Original studies evaluating AI in breast cancer detection or discussing challenges related to AI implementation in this setting were eligible for review. Findings were narratively synthesized and subsequently mapped directly onto the constructs within the CFIR. A total of 1240 results were retrieved, with 20 original studies ultimately included in this systematic review. The majority (n=19) focused on AI-enhanced mammography, while 1 addressed AI-enhanced ultrasound for women with dense breasts. Most studies originated from the United States (n=5) and the United Kingdom (n=4), with publication years ranging from 2019 to 2023. The quality of the papers was rated moderate to high. The key challenges identified were reproducibility, evidentiary standards, technological concerns, trust issues, ethical, legal, and societal concerns, and postadoption uncertainty. By aligning these findings with the CFIR constructs, action plans targeting the main challenges were incorporated into the framework, facilitating a structured approach to addressing these issues.
This systematic review identifies key challenges in implementing AI in breast cancer screening, emphasizing the need for consistency, robust evidentiary standards, technological advancements, user trust, ethical frameworks, legal safeguards, and societal benefits. These findings can serve as a blueprint for policy makers, clinicians, and AI developers to collaboratively advance AI adoption in breast cancer screening. PROSPERO CRD42024553889; https://tinyurl.com/mu4nwcxt.

Modifying the U-Net's Encoder-Decoder Architecture for Segmentation of Tumors in Breast Ultrasound Images.

Derakhshandeh S, Mahloojifar A

pubmed logopapers · May 15 2025
Segmentation is one of the most significant steps in image processing. Segmenting an image is a technique that makes it possible to separate a digital image into various areas based on the different characteristics of its pixels. In particular, the segmentation of breast ultrasound images is widely used for cancer identification, enabling early diagnosis of disease from medical images in a very effective way. Due to various ultrasound artifacts and noise, including speckle noise, low signal-to-noise ratio, and intensity heterogeneity, accurately segmenting medical images such as ultrasound images is still a challenging task. In this paper, we present a new method to improve the accuracy and effectiveness of breast ultrasound image segmentation. More precisely, we propose a neural network (NN) based on U-Net and an encoder-decoder architecture. Taking U-Net as the basis, both the encoder and decoder parts are developed by combining U-Net with other deep neural networks (Res-Net and MultiResUNet) and introducing a new approach and block (Co-Block), which preserves as much of the low-level and high-level features as possible. The designed network is evaluated using the Breast Ultrasound Images (BUSI) dataset, which consists of 780 images categorized into three classes: normal, benign, and malignant. According to our extensive evaluations on this public breast ultrasound dataset, the designed network segments breast lesions more accurately than other state-of-the-art deep learning methods. With only 8.88 M parameters, our network (CResU-Net) obtained 82.88%, 77.5%, 90.3%, and 98.4% in terms of Dice similarity coefficient (DSC), intersection over union (IoU), area under the curve (AUC), and global accuracy (ACC), respectively, on the BUSI dataset.
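The reported DSC and IoU are tightly related overlap metrics: for any single prediction/ground-truth pair, Dice = 2·IoU/(1 + IoU). A toy sketch with pixel-coordinate sets (hypothetical masks, not the paper's):

```python
def iou(pred: set, truth: set) -> float:
    """Intersection over union of two pixel sets."""
    return len(pred & truth) / len(pred | truth)

def dice(pred: set, truth: set) -> float:
    """Dice similarity coefficient of two pixel sets."""
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy lesion masks as sets of (row, col) pixel coordinates.
pred = {(1, 1), (1, 2), (2, 1)}
truth = {(1, 1), (1, 2), (2, 2)}
j, d = iou(pred, truth), dice(pred, truth)
print(round(j, 3), round(d, 3))          # IoU and Dice for this pair
print(abs(d - 2 * j / (1 + j)) < 1e-12)  # the Dice = 2*IoU/(1+IoU) identity
```

Note the identity holds per mask pair; dataset-level averages of Dice and IoU (like the 82.88% and 77.5% above) no longer satisfy it exactly.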

Assessing artificial intelligence in breast screening with stratified results on 306 839 mammograms across geographic regions, age, breast density and ethnicity: A Retrospective Investigation Evaluating Screening (ARIES) study.

Oberije CJG, Currie R, Leaver A, Redman A, Teh W, Sharma N, Fox G, Glocker B, Khara G, Nash J, Ng AY, Kecskemethy PD

pubmed logopapers · May 14 2025
Evaluate an artificial intelligence (AI) system in breast screening through stratified results across age, breast density, ethnicity and screening centres from different UK regions. A large-scale retrospective study evaluating two variations of using AI as an independent second reader in double reading was executed. Stratifications were conducted for clinical and operational metrics. Data from 306 839 mammography cases screened between 2017 and 2021 were used, covering three different UK regions. The impact on safety and effectiveness was assessed using clinical metrics: cancer detection rate and positive predictive value, stratified according to age, breast density and ethnicity. Operational impact was assessed through reading workload and recall rate, measured overall and per centre. Non-inferiority was tested for AI workflows compared with human double reading and, when passed, superiority was tested. The AI interval cancer (IC) flag rate was assessed to estimate the additional cancer detection opportunity with AI that cannot be assessed retrospectively. The AI workflows passed non-inferiority or superiority tests for every metric across all subgroups, with workload savings between 38.3% and 43.7%. The AI standalone flagged 41.2% of ICs overall, ranging between 33.3% and 46.8% across subgroups, with the highest detection rate for dense breasts. Human double reading and the AI workflows showed the same performance disparities across subgroups. The AI integrations maintained or improved performance on all metrics for all subgroups while achieving significant workload reduction. Moreover, complementing these integrations with AI as an additional reader can improve cancer detection. The granularity of assessment showed that screening with the AI-system integrations was as safe as standard double reading across heterogeneous populations.

Optimizing breast lesions diagnosis and decision-making with a deep learning fusion model integrating ultrasound and mammography: a dual-center retrospective study.

Xu Z, Zhong S, Gao Y, Huo J, Xu W, Huang W, Huang X, Zhang C, Zhou J, Dan Q, Li L, Jiang Z, Lang T, Xu S, Lu J, Wen G, Zhang Y, Li Y

pubmed logopapers · May 14 2025
This study aimed to develop a BI-RADS network (DL-UM) by integrating ultrasound (US) and mammography (MG) images and to explore its performance in improving breast lesion diagnosis and management when collaborating with radiologists, particularly in cases with discordant US and MG Breast Imaging Reporting and Data System (BI-RADS) classifications. We retrospectively collected image data from 1283 women with breast lesions who underwent both US and MG within one month at two medical centres and categorised them into concordant and discordant BI-RADS classification subgroups. We developed a DL-UM network integrating US and MG images, as well as DL networks using US (DL-U) or MG (DL-M) alone. The performance of the DL-UM network for breast lesion diagnosis was evaluated using ROC curves and compared to the DL-U and DL-M networks in the external testing dataset. The diagnostic performance of radiologists with different levels of experience assisted by the DL-UM network was also evaluated. In the external testing dataset, DL-UM outperformed DL-M in sensitivity (0.962 vs. 0.833, P = 0.016) and DL-U in specificity (0.667 vs. 0.526, P = 0.030). In the discordant BI-RADS classification subgroup, DL-UM achieved an AUC of 0.910. The diagnostic performance of four radiologists improved when collaborating with the DL-UM network, with AUCs increasing from 0.674-0.772 to 0.889-0.910 and specificities from 52.1%-75.0% to 81.3%-87.5%, and with unnecessary biopsies reduced by 16.1%-24.6%, particularly for junior radiologists. Meanwhile, DL-UM outputs and heatmaps enhanced radiologists' trust and improved interobserver agreement between US and MG, with the weighted kappa increasing from 0.048 to 0.713 (P < 0.05). The DL-UM network, integrating complementary US and MG features, assisted radiologists in improving breast lesion diagnosis and management, potentially reducing unnecessary biopsies.
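The weighted kappa used above to quantify US-MG interobserver agreement can be computed with scikit-learn's `cohen_kappa_score`. The BI-RADS ratings below are invented for illustration (categories mapped to ordinals so that near-misses are penalised less than distant disagreements):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS assessments from two modalities for ten lesions,
# with categories 3, 4a, 4b, 4c, 5 mapped to ordinals 0-4.
us = [0, 1, 1, 2, 3, 4, 2, 1, 0, 3]  # US reader
mg = [0, 1, 2, 2, 3, 4, 3, 1, 1, 3]  # MG reader

# Linear weighting: a one-category disagreement costs less than a two-category one.
kappa = cohen_kappa_score(us, mg, weights="linear")
print(f"weighted kappa: {kappa:.3f}")
```

Unweighted kappa would treat a 4a-vs-4b disagreement the same as a 3-vs-5 one, which is why ordinal scales like BI-RADS are usually compared with the weighted form.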

Automated whole-breast ultrasound tumor diagnosis using attention-inception network.

Zhang J, Huang YS, Wang YW, Xiang H, Lin X, Chang RF

pubmed logopapers · May 14 2025
Automated Whole-Breast Ultrasound (ABUS) has been widely used as an important tool in breast cancer diagnosis due to its ability to provide complete three-dimensional (3D) images of the breast. To reduce the risk of misdiagnosis, computer-aided diagnosis (CADx) systems have been proposed to assist radiologists. Convolutional neural networks (CNNs), renowned for their automatic feature extraction capabilities, have developed rapidly in medical image analysis, and this study proposes a CADx system based on a 3D CNN for ABUS. This study used a private dataset collected at Sun Yat-Sen University Cancer Center (SYSUCC) from 396 breast tumor patients. First, the tumor volume of interest (VOI) was extracted and resized, and the tumor was enhanced by histogram equalization. Second, a 3D U-Net++ was employed to segment the tumor mask. Finally, the VOI, the enhanced VOI, and the corresponding tumor mask were fed into a 3D Attention-Inception network to classify the tumor as benign or malignant. The experimental results indicate an accuracy of 89.4%, a sensitivity of 91.2%, a specificity of 87.6%, and an area under the receiver operating characteristic curve (AUC) of 0.9262, which suggests that the proposed CADx system for ABUS images rivals the performance of experienced radiologists in tumor diagnosis tasks. This study proposes a CADx system consisting of a 3D U-Net++ tumor segmentation model and a 3D Attention-Inception tumor classification model for diagnosis in ABUS images. The results indicate that the proposed CADx system is effective and efficient in tumor diagnosis tasks.
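Histogram equalization, the VOI-enhancement step in the pipeline above, spreads a narrow intensity range across the full dynamic range via the cumulative histogram. A NumPy sketch on a toy low-contrast volume standing in for a real ABUS VOI (not the paper's implementation):

```python
import numpy as np

def equalize(volume: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram-equalize an intensity volume via its normalised CDF."""
    flat = volume.ravel()
    hist, bin_edges = np.histogram(flat, bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to [0, 1]
    # Map each voxel intensity through the CDF back onto [0, levels-1].
    out = np.interp(flat, bin_edges[:-1], cdf * (levels - 1))
    return out.reshape(volume.shape).astype(np.uint8)

rng = np.random.default_rng(0)
# Toy 8x8x8 VOI whose intensities span only [90, 130): low contrast.
voi = rng.integers(90, 130, size=(8, 8, 8)).astype(np.uint8)
enhanced = equalize(voi)
print(int(voi.min()), int(voi.max()), "->", int(enhanced.min()), int(enhanced.max()))
```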

Automated field-in-field planning for tangential breast radiation therapy based on digitally reconstructed radiograph.

Srikornkan P, Khamfongkhruea C, Intanin P, Thongsawad S

pubmed logopapers · May 12 2025
The tangential field-in-field (FIF) technique is a widely used method in breast radiation therapy, known for its efficiency and the reduced number of fields required in treatment planning. However, it is labor-intensive, requiring manual shaping of the multileaf collimator (MLC) to minimize hot spots. This study aims to develop a novel automated FIF planning approach for tangential breast radiation therapy using digitally reconstructed radiograph (DRR) images. A total of 78 patients were selected to train and test a fluence map prediction model based on the U-Net architecture, with DRR images used as input data to predict the fluence maps. The predicted fluence maps for each treatment plan were then converted into MLC positions and exported as Digital Imaging and Communications in Medicine (DICOM) files. These files were used to recalculate the dose distribution and assess dosimetric parameters for both the PTV and OARs. The mean absolute error (MAE) between the predicted and original fluence maps was 0.007 ± 0.002. Gamma analysis indicated strong agreement between the predicted and original fluence maps, with gamma passing rates of 95.47 ± 4.27 for the 3%/3 mm criterion, 94.65 ± 4.32 for the 3%/2 mm criterion, and 83.4 ± 12.14 for the 2%/2 mm criterion. The plan quality, in terms of tumor coverage and doses to organs at risk (OARs), showed no significant differences between the automated FIF and original plans. The automated plans yielded promising results, with plan quality comparable to the original.
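The MAE reported above is an element-wise average of absolute differences between the predicted and original fluence maps. A sketch on synthetic 64x64 maps (invented data, not the study's fluence grids):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for an original fluence map and a U-Net prediction of it.
original = rng.random((64, 64))
predicted = original + rng.normal(scale=0.005, size=(64, 64))

# Mean absolute error over all fluence-map elements.
mae = np.mean(np.abs(predicted - original))
print(f"MAE: {mae:.4f}")
```

Gamma analysis is a stricter, spatially-aware comparison (dose difference plus distance-to-agreement); libraries such as pymedphys implement it, but it is not reproduced here.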

Identification of HER2-over-expression, HER2-low-expression, and HER2-zero-expression statuses in breast cancer based on <sup>18</sup>F-FDG PET/CT radiomics.

Hou X, Chen K, Luo H, Xu W, Li X

pubmed logopapers · May 12 2025
According to the updated classification system, human epidermal growth factor receptor 2 (HER2) expression statuses are divided into three groups: HER2-over-expression, HER2-low-expression, and HER2-zero-expression; HER2-negative expression was reclassified into HER2-low-expression and HER2-zero-expression. This study aimed to identify the three HER2 expression statuses in breast cancer (BC) patients using PET/CT radiomics and clinicopathological characteristics. A total of 315 BC patients who met the inclusion and exclusion criteria from two institutions were retrospectively included. The patients from institution 1 were divided into a training set and an independent validation set at a ratio of 7:3, and institution 2 was used as the external validation set. According to the results of pathological examination, all BC patients were divided into HER2-over-expression, HER2-low-expression, and HER2-zero-expression groups. First, PET/CT radiomic features and clinicopathological features were extracted and collected for each patient. Second, multiple methods were used to perform feature screening and feature selection. Then, four machine learning classifiers, including logistic regression (LR), k-nearest neighbors (KNN), support vector machine (SVM), and random forest (RF), were constructed to identify HER2-over-expression vs. others, HER2-low-expression vs. others, and HER2-zero-expression vs. others. The receiver operating characteristic (ROC) curve was plotted to measure each model's predictive power. Following the feature screening process, 8, 10, and 2 radiomics features and 2 clinicopathological features were finally selected to construct the three prediction models. For HER2-over-expression vs. others, the RF model outperformed the other models, with AUC values of 0.843 (95%CI: 0.774-0.897), 0.785 (95%CI: 0.665-0.877), and 0.788 (95%CI: 0.708-0.868) in the training, independent validation, and external validation sets, respectively. For HER2-low-expression vs. others, the LR model outperformed the other models, with AUC values of 0.783 (95%CI: 0.708-0.846), 0.756 (95%CI: 0.634-0.854), and 0.779 (95%CI: 0.698-0.860). The KNN model was confirmed as the optimal model to distinguish HER2-zero-expression from the others, with AUC values of 0.929 (95%CI: 0.890-0.958), 0.847 (95%CI: 0.764-0.910), and 0.835 (95%CI: 0.762-0.908). Combined PET/CT radiomic models integrated with clinicopathological characteristics can non-invasively predict the different HER2 statuses of BC patients.
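Each of the three models above scores one HER2 status against the other two, i.e. a one-vs-rest ROC AUC per class. A sketch on synthetic features (stand-ins for the selected radiomics and clinicopathological variables, with an invented class signal):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 300 patients x 10 selected features, 3 HER2 classes.
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)   # 0: over-, 1: low-, 2: zero-expression
X[np.arange(300), y] += 1.5        # inject a class signal into one feature per class

clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)      # columns follow clf.classes_ = [0, 1, 2]

aucs = {}
for k, name in enumerate(["over", "low", "zero"]):
    # One-vs-rest: class k is positive, the other two classes are negative.
    aucs[name] = roc_auc_score((y == k).astype(int), scores[:, k])
    print(f"HER2-{name} vs. others AUC: {aucs[name]:.3f}")
```

In practice each one-vs-rest problem could use a different estimator, as the study's RF/LR/KNN winners illustrate.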

Paradigm-Shifting Attention-based Hybrid View Learning for Enhanced Mammography Breast Cancer Classification with Multi-Scale and Multi-View Fusion.

Zhao H, Zhang C, Wang F, Li Z, Gao S

pubmed logopapers · May 12 2025
Breast cancer poses a serious threat to women's health, and its early detection is crucial for improving patient survival rates. While deep learning has significantly advanced mammographic image analysis, existing methods struggle to balance view consistency with input adaptability. Furthermore, current models face challenges in accurately capturing multi-scale features, especially when subtle lesion variations across different scales are involved. To address these challenges, this paper proposes a Hybrid View Learning (HVL) paradigm that unifies traditional Single-View and Multi-View Learning approaches. The core component of this paradigm, our Attention-based Hybrid View Learning (AHVL) framework, incorporates two essential attention mechanisms: Contrastive Switch Attention (CSA) and Selective Pooling Attention (SPA). The CSA mechanism flexibly alternates between self-attention and cross-attention based on data integrity, integrating a pre-trained language model for contrastive learning to enhance model stability. Meanwhile, the SPA module employs multi-scale feature pooling and selection to capture critical features from mammographic images, overcoming the limitations of traditional models that struggle with fine-grained lesion detection. Experimental validation on the INbreast and CBIS-DDSM datasets shows that the AHVL framework outperforms both single-view and multi-view methods, especially under extreme view-missing conditions. Even with an 80% missing rate on both datasets, AHVL maintains the highest accuracy and experiences the smallest performance decline in metrics like F1 score and AUC-PR, demonstrating its robustness and stability. This study redefines mammographic image analysis by leveraging attention-based hybrid view processing, setting a new standard for precise and efficient breast cancer diagnosis.
