
Intratumoral and peritumoral ultrasound radiomics analysis for predicting HER2-low expression in HER2-negative breast cancer patients: a retrospective analysis of dual-central study.

Wang J, Gu Y, Zhan Y, Li R, Bi Y, Gao L, Wu X, Shao J, Chen Y, Ye L, Peng M

PubMed · Jun 5 2025
This study explores whether intratumoral and peritumoral radiomic features extracted from ultrasound images can predict low expression of human epidermal growth factor receptor 2 (HER2) in HER2-negative breast cancer patients. HER2-negative breast cancer patients were recruited retrospectively and randomly divided into a training cohort (n = 303) and a test cohort (n = 130) at a ratio of 7:3. The region of interest within the breast ultrasound image was designated as the intratumoral region, and expansions of 3 mm, 5 mm, and 8 mm from this region were taken as the peritumoral regions for the extraction of ultrasound radiomic features. Feature extraction and selection were performed, and radiomics scores (Rad-scores) were obtained for four scenarios: intratumoral only, intratumoral + peritumoral 3 mm, intratumoral + peritumoral 5 mm, and intratumoral + peritumoral 8 mm. An optimal combined nomogram incorporating radiomic and clinical features was established and validated, and the diagnostic performance of the radiomic models was evaluated. The intratumoral + peritumoral (5 mm) radiomics model showed excellent diagnostic performance for HER2-low expression, and the nomogram combining intratumoral + peritumoral (5 mm) radiomics with clinical features performed best, achieving areas under the curve (AUC) of 0.911 and 0.869 in the training and test cohorts, respectively. The combination of intratumoral + peritumoral (5 mm) ultrasound radiomics and clinical features can thus accurately predict HER2-low status in HER2-negative breast cancer patients.
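The peritumoral expansions described above are typically obtained by morphological dilation of the intratumoral mask. Below is a minimal Python sketch of that step, not the authors' code: the mask, pixel spacing, and margin values are hypothetical, and an isotropic spacing is assumed.

```python
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def peritumoral_ring(tumor_mask: np.ndarray, spacing_mm: float, margin_mm: float) -> np.ndarray:
    """Dilate the tumor mask by margin_mm and subtract it, leaving the peritumoral ring."""
    iterations = max(1, round(margin_mm / spacing_mm))  # pixel steps for the margin
    structure = generate_binary_structure(tumor_mask.ndim, connectivity=1)
    dilated = binary_dilation(tumor_mask, structure=structure, iterations=iterations)
    return dilated & ~tumor_mask.astype(bool)

# Hypothetical example: 3/5/8 mm rings on a 0.2 mm/pixel ultrasound image.
mask = np.zeros((128, 128), dtype=bool)
mask[50:70, 50:70] = True
rings = {m: peritumoral_ring(mask, spacing_mm=0.2, margin_mm=m) for m in (3, 5, 8)}
```

Radiomic features would then be extracted separately from the intratumoral mask and from each ring (e.g., with pyradiomics) before feature selection and Rad-score computation.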

A Novel Deep Learning Framework for Nipple Segmentation in Digital Mammography.

Rogozinski M, Hurtado J, Sierra-Franco CA, R Hall Barbosa C, Raposo A

PubMed · Jun 3 2025
This study introduces a novel methodology to enhance nipple segmentation in digital mammography, a critical component for accurate medical analysis and computer-aided detection systems. The nipple is a key anatomical landmark for multi-view and multi-modality breast image registration, where accurate localization is vital for ensuring image quality and enabling precise registration of anomalies across different mammographic views. The proposed approach significantly outperforms baseline methods, particularly in challenging cases where previous techniques failed. It achieved successful detection across all cases and reached a mean Intersection over Union (mIoU) of 0.63 in instances where the baseline failed entirely. Additionally, it yielded nearly a tenfold improvement in Hausdorff distance and consistent gains in overlap-based metrics, with the mIoU increasing from 0.7408 to 0.8011 in the craniocaudal (CC) view and from 0.7488 to 0.7767 in the mediolateral oblique (MLO) view. Furthermore, its generalizability suggests the potential for application to other breast imaging modalities and related domains facing challenges such as class imbalance and high variability in object characteristics.
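The overlap and distance metrics reported here are straightforward to compute from binary masks. A small sketch under stated assumptions (predicted and ground-truth masks as boolean NumPy arrays; distances in pixels, measured over all mask points rather than boundaries only):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two masks' pixel sets."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```

Averaging `iou` over a dataset gives the mIoU values quoted above (e.g., 0.7408 vs. 0.8011 in the CC view).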

Validation of a Dynamic Risk Prediction Model Incorporating Prior Mammograms in a Diverse Population.

Jiang S, Bennett DL, Colditz GA

PubMed · Jun 2 2025
For breast cancer risk prediction to be clinically useful, it must be accurate and applicable to diverse groups of women across multiple settings. This study examined whether a dynamic risk prediction model incorporating prior mammograms, previously validated in Black and White women, could predict future risk of breast cancer across a racially and ethnically diverse population in a population-based screening program. This prognostic study included women aged 40 to 74 years with 1 or more screening mammograms drawn from the British Columbia Breast Screening Program from January 1, 2013, to December 31, 2019, with follow-up via linkage to the British Columbia Cancer Registry through June 2023. This provincial, organized screening program offers screening with full-field digital mammography (FFDM) every 2 years. Data were analyzed from May to August 2024. The exposure was an FFDM-based, artificial intelligence-generated mammogram risk score (MRS) incorporating up to 4 years of prior mammograms. The primary outcomes were 5-year risk of breast cancer (measured with the area under the receiver operating characteristic curve [AUROC]) and absolute risk of breast cancer calibrated to US Surveillance, Epidemiology, and End Results incidence rates. Among 206 929 women (mean [SD] age, 56.1 [9.7] years), 118 093 had data on race and ethnicity (34 266 East Asian, 1946 Indigenous, 6116 South Asian, and 66 742 White women), and 4168 pathology-confirmed incident breast cancers were diagnosed through June 2023. Mean (SD) follow-up was 5.3 (3.0) years. Using up to 4 years of prior mammograms in addition to the most current mammogram, a 5-year AUROC of 0.78 (95% CI, 0.77-0.80) was obtained from images alone. Performance was consistent across subgroups defined by race and ethnicity: East Asian (AUROC, 0.77; 95% CI, 0.75-0.79), Indigenous (AUROC, 0.77; 95% CI, 0.71-0.83), and South Asian (AUROC, 0.75; 95% CI, 0.71-0.79) women. Stratification by age gave a 5-year AUROC of 0.76 (95% CI, 0.74-0.78) for women aged 50 years or younger and 0.80 (95% CI, 0.78-0.82) for women older than 50 years. There were 18 839 participants (9.0%) with a 5-year risk greater than 3%, for whom the positive predictive value was 4.9% with an incidence of 11.8 per 1000 person-years. A dynamic MRS generated from both current and prior mammograms showed robust performance across diverse racial and ethnic populations in a province-wide screening program starting from age 40 years, reflecting improved accuracy for racially and ethnically diverse populations.
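As a concrete illustration of the subgroup analysis, the sketch below computes AUROC overall and within race and ethnicity strata with scikit-learn. The column names and toy values are hypothetical; the study's actual MRS pipeline is not reproduced here.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical columns: 'mrs' = mammogram risk score, 'cancer_5yr' = 0/1 outcome.
df = pd.DataFrame({
    "mrs":        [0.12, 0.80, 0.45, 0.05, 0.91, 0.33],
    "cancer_5yr": [0,    1,    0,    0,    1,    1],
    "group": ["East Asian", "East Asian", "White",
              "South Asian", "White", "South Asian"],
})

print("overall AUROC:", roc_auc_score(df["cancer_5yr"], df["mrs"]))
for name, g in df.groupby("group"):
    if g["cancer_5yr"].nunique() > 1:  # AUROC needs both classes present
        print(name, roc_auc_score(g["cancer_5yr"], g["mrs"]))
```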

Inferring single-cell spatial gene expression with tissue morphology via explainable deep learning

Zhao, Y., Alizadeh, E., Taha, H. B., Liu, Y., Xu, M., Mahoney, J. M., Li, S.

bioRxiv preprint · Jun 2 2025
Deep learning models trained with spatial omics data uncover complex patterns and relationships among cells, genes, and proteins in a high-dimensional space. State-of-the-art in silico methods for predicting spatial multi-cell gene expression from histological images of tissue stained with hematoxylin and eosin (H&E) allow us to characterize cellular heterogeneity. We developed a vision transformer (ViT) framework, named SPiRiT, that maps histological signatures to spatial single-cell transcriptomic signatures. SPiRiT predicts single-cell spatial gene expression from matched H&E image tiles of human breast cancer and whole mouse pup, evaluated on Xenium (10x Genomics) datasets. Importantly, SPiRiT incorporates rigorous strategies to ensure reproducibility and robustness of predictions and provides trustworthy interpretation through attention-based model explainability. Model interpretation revealed the image regions and attention details SPiRiT uses to predict gene expression, such as marker genes in invasive cancer cells. In a like-for-like comparison with ST-Net, SPiRiT improved predictive accuracy by 40%, and its gene predictions and expression levels were highly consistent with the tumor region annotation. In summary, SPiRiT demonstrates the feasibility of inferring spatial single-cell gene expression from tissue morphology across multiple species.
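SPiRiT's code is not reproduced here, but the core idea, regressing a gene-expression vector from an H&E tile with a ViT backbone, can be sketched in PyTorch. Everything below (tile size, gene count, the plain MSE loss) is an assumption for illustration:

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class TileToExpression(nn.Module):
    """Map a 224x224 H&E tile to a vector of predicted gene expression values."""
    def __init__(self, n_genes: int = 300):
        super().__init__()
        self.backbone = vit_b_16(weights=None)     # ViT-B/16 encoder
        self.backbone.heads = nn.Identity()        # drop the classification head
        self.regressor = nn.Linear(768, n_genes)   # 768 = ViT-B hidden width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.backbone(x))

model = TileToExpression(n_genes=300)
tiles = torch.randn(4, 3, 224, 224)                # a hypothetical batch of tiles
loss = nn.functional.mse_loss(model(tiles), torch.randn(4, 300))
```

Attention maps from such a backbone would then support the kind of explainability analysis the authors describe.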

Synthetic Ultrasound Image Generation for Breast Cancer Diagnosis Using cVAE-WGAN Models: An Approach Based on Generative Artificial Intelligence

Mondillo, G., Masino, M., Colosimo, S., Perrotta, A., Frattolillo, V., Abbate, F. G.

medRxiv preprint · Jun 2 2025
The scarcity and imbalance of medical image datasets hinder the development of robust computer-aided diagnosis (CAD) systems for breast cancer. This study explores the application of advanced generative models, based on generative artificial intelligence (GenAI), to the synthesis of digital breast ultrasound images. Using a hybrid Conditional Variational Autoencoder-Wasserstein Generative Adversarial Network (cVAE-WGAN) architecture, we developed a system to generate high-quality synthetic images conditioned on class (malignant vs. normal/benign). These synthetic images, generated from the low-resolution BreastMNIST dataset and filtered for quality, were systematically integrated with real training data at different mixing ratios (W). The performance of a CNN classifier trained on these mixed datasets was evaluated against a baseline model trained only on real data balanced with SMOTE. The optimal integration (mixing weight W = 0.25) produced a significant performance increase on the real test set: +8.17% in macro-average F1-score and +4.58% in accuracy compared with real data alone. Analysis confirmed the originality of the generated samples. This approach offers a promising solution for overcoming data limitations in image-based breast cancer diagnostics, potentially improving the capabilities of CAD systems.
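The mixing-ratio experiment can be sketched as follows. Note that the abstract does not define W precisely; the sketch assumes W is the synthetic fraction of the mixed training set, and all array names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_training_set(real_x, real_y, synth_x, synth_y, w=0.25):
    """Build a training set in which a fraction w of the samples is synthetic."""
    # n_synth / (n_real + n_synth) = w  =>  n_synth = w / (1 - w) * n_real
    n_synth = min(int(w / (1.0 - w) * len(real_x)), len(synth_x))
    idx = rng.choice(len(synth_x), size=n_synth, replace=False)
    x = np.concatenate([real_x, synth_x[idx]])
    y = np.concatenate([real_y, synth_y[idx]])
    perm = rng.permutation(len(x))  # shuffle real and synthetic together
    return x[perm], y[perm]
```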

A Comparative Performance Analysis of Regular Expressions and an LLM-Based Approach to Extract the BI-RADS Score from Radiological Reports

Dennstaedt, F., Lerch, L., Schmerder, M., Cihoric, N., Cerghetti, G. M., Gaio, R., Bonel, H., Filchenko, I., Hastings, J., Dammann, F., Aebersold, D. M., von Tengg, H., Nairz, K.

medRxiv preprint · Jun 2 2025
Background: Different natural language processing (NLP) techniques have demonstrated promising results for data extraction from radiological reports. Both traditional rule-based methods such as regular expressions (Regex) and modern large language models (LLMs) can extract structured information, but the two approaches have rarely been compared for extraction of specific radiological data elements. Methods: We compared accuracy and processing time between Regex- and LLM-based approaches for extracting BI-RADS scores from 7,764 radiology reports (mammography, ultrasound, MRI, and biopsy). We developed a rule-based algorithm using Regex patterns and implemented LLM-based extraction using the Rombos-LLM-V2.6-Qwen-14b model. A ground-truth dataset of 199 manually classified reports was used for evaluation. Results: There was no statistically significant difference in accuracy between the two methods (89.20% for Regex versus 87.69% for the LLM-based method; p = 0.56). Regex processing was far more efficient, completing the task 28,120 times faster (0.06 seconds vs. 1687.20 seconds). Further analysis revealed that the LLM favored common classifications (particularly BI-RADS 2), whereas Regex more frequently returned "unclear" values. We could also confirm in our sample a previously reported laterality bias for breast cancer (BI-RADS 6) and detected a slight laterality skew for suspected breast cancer (BI-RADS 5) as well. Conclusion: For structured, standardized data such as BI-RADS, traditional NLP techniques appear superior, though future work should explore hybrid approaches that combine Regex precision for standardized elements with LLM contextual understanding for more complex extraction tasks.
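A minimal illustration of the rule-based arm: the pattern below is not the authors' actual Regex, but it shows how BI-RADS scores (including sub-categories 4a-4c) can be pulled from free text, with ambiguous reports mapped to "unclear":

```python
import re

# Matches variants such as "BI-RADS 4a", "BIRADS: 5", "bi-rads 0".
BIRADS_RE = re.compile(r"\bBI[\s-]?RADS\b[\s:]*([0-6])\s*([abc])?", re.IGNORECASE)

def extract_birads(report: str) -> str:
    matches = BIRADS_RE.findall(report)
    if not matches:
        return "unclear"
    scores = {f"{num}{sub}".lower() for num, sub in matches}
    # A report naming several different scores is ambiguous.
    return scores.pop() if len(scores) == 1 else "unclear"

print(extract_birads("Mammography both breasts. Assessment: BI-RADS 4a."))  # 4a
print(extract_birads("No assessment category documented."))                # unclear
```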

Machine learning can reliably predict malignancy of breast lesions based on clinical and ultrasonographic features.

Buzatto IPC, Recife SA, Miguel L, Bonini RM, Onari N, Faim ALPA, Silvestre L, Carlotti DP, Fröhlich A, Tiezzi DG

PubMed · Jun 1 2025
This study aimed to establish a reliable machine learning model to predict malignancy in breast lesions identified by ultrasound (US) and to optimize the negative predictive value (NPV) to minimize unnecessary biopsies. We included clinical and ultrasonographic attributes from 1526 breast lesions classified as BI-RADS 3, 4a, 4b, 4c, 5, or 6 that underwent US-guided breast biopsy in four institutions. We selected the most informative attributes to train nine machine learning models, ensemble models, and models with tuned thresholds to make inferences about the diagnosis of BI-RADS 4a and 4b lesions (validation dataset). We then tested the performance of the final model on 403 new suspicious lesions. The most informative attributes were the shape, margin, orientation, and size of the lesion, the resistance index of the internal vessel, the age of the patient, and the presence of a palpable lump. The highest mean NPV was achieved with the K-Nearest Neighbors algorithm (97.9%). Ensembling did not improve performance, but tuning the decision threshold did, and we chose XGBoost with a tuned threshold as the final model. On the test set, the final model achieved an NPV of 98.1% (false-negative rate 1.9%) and a positive predictive value of 77.1% (false-positive rate 22.9%). Applying this final model, we would have missed 2 of the 231 malignant lesions in the test dataset (0.8%). Machine learning can help physicians predict malignancy in suspicious breast lesions identified by US; our final model would avoid 60.4% of biopsies in benign lesions while missing less than 1% of cancers.
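Threshold tuning toward a high NPV, the step that distinguished the final XGBoost model, can be sketched generically with scikit-learn (here with a gradient-boosting stand-in and synthetic data, since the study's features are not available):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, weights=[0.6], random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=42)

clf = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
probs = clf.predict_proba(X_val)[:, 1]

# Sweep thresholds; keep the largest one whose NPV stays above the target,
# since a larger threshold spares more biopsies at the same NPV.
best_t = 0.0
for t in np.linspace(0.05, 0.50, 46):
    neg = probs < t                    # lesions the model would call benign
    if neg.any() and (y_val[neg] == 0).mean() >= 0.98:
        best_t = t
print("chosen threshold:", best_t)
```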

Advanced image preprocessing and context-aware spatial decomposition for enhanced breast cancer segmentation.

Kalpana G, Deepa N, Dhinakaran D

PubMed · Jun 1 2025
Breast cancer segmentation in medical imaging is hampered by noise, contrast variation, and low resolution, which make malignant regions difficult to distinguish. In this paper, we propose a new solution that integrates AIPT (Advanced Image Preprocessing Techniques) with CASDN (Context-Aware Spatial Decomposition Network) to overcome these problems. The preprocessing pipeline applies several methods, including Adaptive Thresholding, Hierarchical Contrast Normalization, Contextual Feature Augmentation, Multi-Scale Region Enhancement, and Dynamic Histogram Equalization, to improve image quality. These methods smooth edges, equalize contrast, and embed contextual details, effectively suppressing noise and yielding clearer images with fewer distortions. Experimental outcomes demonstrate the approach's effectiveness, delivering a Dice coefficient of 0.89, an IoU of 0.85, and a Hausdorff distance of 5.2, indicating enhanced capability in segmenting significant tumor margins compared with other techniques. Furthermore, the improved preprocessing pipeline benefits classification: convolutional neural networks trained on the preprocessed images reach a classification accuracy of 85.3% and an AUC-ROC of 0.90, a significant improvement over conventional techniques.
• Enhanced segmentation accuracy with advanced preprocessing and CASDN, achieving superior performance metrics.
• Robust multi-modality compatibility, ensuring effectiveness across mammograms, ultrasounds, and MRI scans.
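Two of the named preprocessing steps have well-known OpenCV counterparts; the sketch below uses CLAHE as a stand-in for dynamic histogram equalization, plus local adaptive thresholding, with a hypothetical input file (it is illustrative, not the AIPT implementation):

```python
import cv2

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
assert img is not None, "image not found"

# Contrast-limited adaptive histogram equalization (CLAHE): a common
# stand-in for dynamic-histogram-equalization-style contrast normalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(img)

# Local (adaptive) thresholding separates candidate lesion regions from a
# background whose brightness varies across the image.
mask = cv2.adaptiveThreshold(equalized, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 31, -5)

# Light median filtering suppresses speckle noise before segmentation.
denoised = cv2.medianBlur(equalized, 5)
```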

AI image analysis as the basis for risk-stratified screening.

Strand F

PubMed · Jun 1 2025
Artificial intelligence (AI) has emerged as a transformative tool in breast cancer screening, with two distinct applications: computer-aided cancer detection (CAD) and risk prediction. While AI CAD systems are slowly finding their way into clinical practice to assist radiologists or make independent reads, this review focuses on AI risk models, which aim to predict a patient's likelihood of being diagnosed with breast cancer within a few years after a negative screening. Unlike AI CAD systems, AI risk models are mainly explored in research settings without widespread clinical adoption. This review synthesizes advances in AI-driven risk prediction models, from traditional imaging biomarkers to cutting-edge deep learning methodologies and multimodal approaches. Contributions by leading researchers are explored with critical appraisal of their methods and findings. Ethical, practical, and clinical challenges in implementing AI models are also discussed, with an emphasis on real-world applications. The review concludes by proposing future directions to optimize the adoption of AI tools in breast cancer screening and improve equity and outcomes for diverse populations.

Image normalization techniques and their effect on the robustness and predictive power of breast MRI radiomics.

Schwarzhans F, George G, Escudero Sanchez L, Zaric O, Abraham JE, Woitek R, Hatamikia S

PubMed · Jun 1 2025
Radiomics analysis has emerged as a promising approach to aid in cancer diagnosis and treatment. However, radiomics research currently lacks standardization, and radiomics features can be highly dependent on the acquisition and pre-processing techniques used. In this study, we investigate the effect of various image normalization techniques on the robustness of radiomics features extracted from MRI scans of breast cancer patients. MRI scans from the publicly available MAMA-MIA dataset and an internal breast MRI test set depicting triple-negative breast cancer (TNBC) were used. We compared the effect of commonly used image normalization techniques on radiomics feature robustness using the concordance correlation coefficient (CCC) between multiple combinations of normalization approaches. We also trained machine learning models to predict pathologic complete response (pCR) from radiomics computed after each normalization technique and compared their areas under the receiver operating characteristic curve (ROC-AUC). For predicting pCR from pre-treatment breast MRI radiomics, the highest overall ROC-AUC was achieved by combining three different normalization techniques, indicating their potentially powerful role when working with heterogeneous imaging data. The effect of normalization was more pronounced with smaller training data, and normalization may become less important as training data grow more abundant. Additionally, we observed considerable differences between MRI datasets in the robustness of their features to normalization. Overall, we demonstrate the importance of selecting and standardizing normalization methods for accurate and reliable radiomics analysis of breast MRI scans, especially with small training datasets.
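Feature robustness between two normalization schemes is scored here with Lin's concordance correlation coefficient, which is simple to compute directly; the feature vectors below are hypothetical stand-ins for the same radiomics feature extracted after two different normalizations:

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two feature vectors."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (x.var() + y.var() + (mx - my) ** 2))

rng = np.random.default_rng(1)
feat_a = rng.normal(size=100)                            # feature after normalization A
feat_b = 0.9 * feat_a + rng.normal(scale=0.1, size=100)  # same feature after B
print(ccc(feat_a, feat_b))  # values near 1 indicate a normalization-robust feature
```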