Page 15 of 23225 results

Intratumoral and peritumoral ultrasound radiomics analysis for predicting HER2-low expression in HER2-negative breast cancer patients: a retrospective analysis of dual-central study.

Wang J, Gu Y, Zhan Y, Li R, Bi Y, Gao L, Wu X, Shao J, Chen Y, Ye L, Peng M

PubMed · Jun 5 2025
This study aimed to explore whether intratumoral and peritumoral ultrasound radiomics can predict low expression of human epidermal growth factor receptor 2 (HER2) in HER2-negative breast cancer patients. HER2-negative breast cancer patients were recruited retrospectively and randomly divided into a training cohort (n = 303) and a test cohort (n = 130) at a ratio of 7:3. The region of interest within the breast ultrasound image was designated as the intratumoral region, and expansions of 3 mm, 5 mm, and 8 mm from this region were taken as the peritumoral regions for the extraction of ultrasound radiomic features. Feature extraction and selection were performed, and radiomics scores (Rad-scores) were obtained for four scenarios: intratumoral only, intratumoral + peritumoral 3 mm, intratumoral + peritumoral 5 mm, and intratumoral + peritumoral 8 mm. An optimal combined nomogram radiomic model incorporating clinical features was established and validated, and the diagnostic performance of the radiomic models was evaluated. The results indicated that intratumoral + peritumoral (5 mm) ultrasound radiomics exhibited excellent diagnostic performance in evaluating HER2-low expression. The nomogram combining intratumoral + peritumoral (5 mm) radiomics and clinical features showed superior diagnostic performance, achieving an area under the curve (AUC) of 0.911 and 0.869 in the training and test cohorts, respectively. The combination of intratumoral + peritumoral (5 mm) ultrasound radiomics and clinical features can therefore accurately predict the low-expression status of HER2 in HER2-negative breast cancer patients.
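The nomogram described above combines a radiomics score (Rad-score) with clinical covariates through a logistic model. A minimal sketch of that combination follows; the weights, intercept, and clinical covariates here are hypothetical illustrations, not the study's fitted values:

```python
import numpy as np

def nomogram_probability(rad_score, clinical_features, weights, intercept):
    """Combine a radiomics score with clinical features via a logistic model,
    as a nomogram does. All coefficients here are hypothetical."""
    z = intercept + weights[0] * rad_score + np.dot(weights[1:], clinical_features)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical case: Rad-score 1.2 plus two clinical covariates
p = nomogram_probability(1.2, np.array([0.0, 1.0]), np.array([1.5, 0.3, -0.4]), -1.0)
```

The returned value is the predicted probability of HER2-low status for that case; a threshold (e.g. from the Youden index on the training cohort) would turn it into a binary call.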

Comparative analysis of semantic-segmentation models for screen film mammograms.

Rani J, Singh J, Virmani J

PubMed · Jun 5 2025
Accurate segmentation of mammographic masses is very important, as the shape characteristics of these masses play a significant role in helping radiologists diagnose benign and malignant cases. Recently, various deep learning segmentation algorithms have become popular for segmentation tasks. In the present work, a rigorous performance analysis of ten semantic-segmentation models was performed on 518 images taken from the DDSM dataset (Digital Database for Screening Mammography), comprising 208, 150, and 160 mass images from the BI-RADS 3, BI-RADS 4, and BI-RADS 5 classes, respectively. These models are (1) simple convolution series models, namely VGG16/VGG19; (2) simple convolution DAG (directed acyclic graph) models, namely U-Net; (3) dilated convolution DAG models, namely ResNet18/ResNet50/ShuffleNet/XceptionNet/InceptionV2/MobileNetV2; and (4) a hybrid model, i.e. hybrid U-Net. On the basis of exhaustive experimentation, it was observed that the dilated convolution DAG models ResNet50, ShuffleNet and MobileNetV2 outperform the other network models, yielding cumulative JI and F1 score values of 0.87 and 0.92, 0.85 and 0.91, and 0.84 and 0.90, respectively. The segmented images obtained by the best performing models were subjectively analyzed by the participating radiologist in terms of (a) size, (b) margins and (c) shape characteristics. From the objective and subjective analysis it was concluded that ResNet50 is the optimal model for segmentation of difficult-to-delineate breast masses with dense backgrounds and masses where both masses and micro-calcifications are simultaneously present. The results of the study indicate that the ResNet50 model can be used in a routine clinical environment for segmentation of mammographic masses.
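The JI (Jaccard index) and F1 (Dice) values reported above are standard overlap metrics between a predicted binary mask and the ground-truth mask. A minimal sketch of how they are computed:

```python
import numpy as np

def jaccard_and_f1(pred, gt):
    """Jaccard index (IoU) and F1/Dice score for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    ji = inter / union if union else 1.0
    f1 = 2 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
    return ji, f1
```

Note the relationship F1 = 2·JI / (1 + JI), which is why the paper's paired values (e.g. 0.87 and 0.92) track each other so closely.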

A Novel Deep Learning Framework for Nipple Segmentation in Digital Mammography.

Rogozinski M, Hurtado J, Sierra-Franco CA, R Hall Barbosa C, Raposo A

PubMed · Jun 3 2025
This study introduces a novel methodology to enhance nipple segmentation in digital mammography, a critical component for accurate medical analysis and computer-aided detection systems. The nipple is a key anatomical landmark for multi-view and multi-modality breast image registration, where accurate localization is vital for ensuring image quality and enabling precise registration of anomalies across different mammographic views. The proposed approach significantly outperforms baseline methods, particularly in challenging cases where previous techniques failed. It achieved successful detection across all cases and reached a mean Intersection over Union (mIoU) of 0.63 in instances where the baseline failed entirely. Additionally, it yielded nearly a tenfold improvement in Hausdorff distance and consistent gains in overlap-based metrics, with the mIoU increasing from 0.7408 to 0.8011 in the craniocaudal (CC) view and from 0.7488 to 0.7767 in the mediolateral oblique (MLO) view. Furthermore, its generalizability suggests the potential for application to other breast imaging modalities and related domains facing challenges such as class imbalance and high variability in object characteristics.
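The Hausdorff distance used above to quantify boundary agreement measures the largest distance from any point of one contour to the nearest point of the other. A brute-force sketch for two 2-D boundary point sets (for illustration only; production code would use an optimized implementation):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets,
    shaped (N, 2) and (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(),   # farthest point of a from b
               d.min(axis=0).max())   # farthest point of b from a
```

A tenfold improvement in this metric, as reported above, means the worst-case boundary error shrank by roughly an order of magnitude.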

A Comparative Performance Analysis of Regular Expressions and an LLM-Based Approach to Extract the BI-RADS Score from Radiological Reports

Dennstaedt, F., Lerch, L., Schmerder, M., Cihoric, N., Cerghetti, G. M., Gaio, R., Bonel, H., Filchenko, I., Hastings, J., Dammann, F., Aebersold, D. M., von Tengg, H., Nairz, K.

medRxiv preprint · Jun 2 2025
Background: Different Natural Language Processing (NLP) techniques have demonstrated promising results for data extraction from radiological reports. Both traditional rule-based methods like regular expressions (Regex) and modern Large Language Models (LLMs) can extract structured information. However, comparison between these approaches for extraction of specific radiological data elements has not been widely conducted. Methods: We compared accuracy and processing time between Regex and LLM-based approaches for extracting BI-RADS scores from 7,764 radiology reports (mammography, ultrasound, MRI, and biopsy). We developed a rule-based algorithm using Regex patterns and implemented an LLM-based extraction using the Rombos-LLM-V2.6-Qwen-14b model. A ground truth dataset of 199 manually classified reports was used for evaluation. Results: There was no statistically significant difference in accuracy in extracting BI-RADS scores between Regex and the LLM-based method (accuracy of 89.20% for Regex versus 87.69% for the LLM-based method; p=0.56). Compared to the LLM-based method, Regex processing was more efficient, completing the task 28,120 times faster (0.06 seconds vs. 1687.20 seconds). Further analysis revealed that the LLM favored common classifications (particularly a BI-RADS value of 2) while Regex more frequently returned "unclear" values. We could also confirm in our sample an already known laterality bias for breast cancer (BI-RADS 6) and detected a slight laterality skew for suspected breast cancer (BI-RADS 5) as well. Conclusion: For structured, standardized data like BI-RADS, traditional NLP techniques seem to be superior, though future work should explore hybrid approaches combining Regex precision for standardized elements with LLM contextual understanding for more complex information extraction tasks.
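A minimal sketch of a Regex-based BI-RADS extractor in the spirit of the paper's rule-based arm; the pattern and the "unclear" fallback are illustrative, not the authors' actual rules:

```python
import re

# Matches "BI-RADS 4", "BIRADS: 4a", etc.; pattern is a hypothetical example.
BIRADS_RE = re.compile(r"BI-?RADS[\s:]*([0-6])[abc]?", re.IGNORECASE)

def extract_birads(report: str) -> str:
    """Return the single BI-RADS score found in a report, else 'unclear'."""
    scores = {m.group(1) for m in BIRADS_RE.finditer(report)}
    if len(scores) == 1:
        return scores.pop()
    return "unclear"  # none found, or conflicting scores

extract_birads("Findings consistent with BI-RADS 2.")
extract_birads("No assessment given.")
```

The "unclear" fallback mirrors the behavior noted in the results: a conservative rule-based system abstains where an LLM may guess a common class.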

Validation of a Dynamic Risk Prediction Model Incorporating Prior Mammograms in a Diverse Population.

Jiang S, Bennett DL, Colditz GA

PubMed · Jun 2 2025
For breast cancer risk prediction to be clinically useful, it must be accurate and applicable to diverse groups of women across multiple settings. To examine whether a dynamic risk prediction model incorporating prior mammograms, previously validated in Black and White women, could predict future risk of breast cancer across a racially and ethnically diverse population in a population-based screening program. This prognostic study included women aged 40 to 74 years with 1 or more screening mammograms drawn from the British Columbia Breast Screening Program from January 1, 2013, to December 31, 2019, with follow-up via linkage to the British Columbia Cancer Registry through June 2023. This provincial, organized screening program offers screening mammography with full field digital mammography (FFDM) every 2 years. Data were analyzed from May to August 2024. FFDM-based, artificial intelligence-generated mammogram risk score (MRS), including up to 4 years of prior mammograms. The primary outcomes were 5-year risk of breast cancer (measured with the area under the receiver operating characteristic curve [AUROC]) and absolute risk of breast cancer calibrated to the US Surveillance, Epidemiology, and End Results incidence rates. Among 206 929 women (mean [SD] age, 56.1 [9.7] years; of the 118 093 with data on race, there were 34 266 East Asian; 1946 Indigenous; 6116 South Asian; and 66 742 White women), there were 4168 pathology-confirmed incident breast cancers diagnosed through June 2023. Mean (SD) follow-up time was 5.3 (3.0) years. Using up to 4 years of prior mammogram images in addition to the most current mammogram, a 5-year AUROC of 0.78 (95% CI, 0.77-0.80) was obtained based on analysis of images alone. Performance was consistent across subgroups defined by race and ethnicity in East Asian (AUROC, 0.77; 95% CI, 0.75-0.79), Indigenous (AUROC, 0.77; 95% CI, 0.71-0.83), and South Asian (AUROC, 0.75; 95% CI, 0.71-0.79) women. Stratification by age gave a 5-year AUROC of 0.76 (95% CI, 0.74-0.78) for women aged 50 years or younger and 0.80 (95% CI, 0.78-0.82) for women older than 50 years. There were 18 839 participants (9.0%) with a 5-year risk greater than 3%, and the positive predictive value was 4.9% with an incidence of 11.8 per 1000 person-years. A dynamic MRS generated from both current and prior mammograms showed robust performance across diverse racial and ethnic populations in a province-wide screening program starting from age 40 years, reflecting improved accuracy for racially and ethnically diverse populations.
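The AUROC values reported above can be computed from risk scores and outcome labels without plotting an ROC curve, via the Mann-Whitney U identity (the probability that a randomly chosen case scores higher than a randomly chosen control). A minimal sketch:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic, with tie-averaged ranks."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):          # average the ranks of tied scores
        tied = scores == v
        ranks[tied] = ranks[tied].mean()
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUROC of 0.78, as in the study, means a woman who later developed breast cancer outranked a woman who did not about 78% of the time.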

Inferring single-cell spatial gene expression with tissue morphology via explainable deep learning

Zhao, Y., Alizadeh, E., Taha, H. B., Liu, Y., Xu, M., Mahoney, J. M., Li, S.

bioRxiv preprint · Jun 2 2025
Deep learning models trained with spatial omics data uncover complex patterns and relationships among cells, genes, and proteins in a high-dimensional space. State-of-the-art in silico spatial multi-cell gene expression methods using histological images of tissue stained with hematoxylin and eosin (H&E) allow us to characterize cellular heterogeneity. We developed a vision transformer (ViT) framework, named SPiRiT, to map histological signatures to spatial single-cell transcriptomic signatures. SPiRiT predicts single-cell spatial gene expression from matched H&E image tiles of human breast cancer and whole mouse pup, evaluated with Xenium (10x Genomics) datasets. Importantly, SPiRiT incorporates rigorous strategies to ensure reproducibility and robustness of predictions and provides trustworthy interpretation through attention-based model explainability. Model interpretation revealed the areas and attention details SPiRiT uses to predict the expression of genes such as marker genes in invasive cancer cells. In an apples-to-apples comparison with ST-Net, SPiRiT improved predictive accuracy by 40%. The gene predictions and expression levels were highly consistent with the tumor region annotation. In summary, SPiRiT highlights the feasibility of inferring spatial single-cell gene expression from tissue morphology across multiple species.

Synthetic Ultrasound Image Generation for Breast Cancer Diagnosis Using cVAE-WGAN Models: An Approach Based on Generative Artificial Intelligence

Mondillo, G., Masino, M., Colosimo, S., Perrotta, A., Frattolillo, V., Abbate, F. G.

medRxiv preprint · Jun 2 2025
The scarcity and imbalance of medical image datasets hinder the development of robust computer-aided diagnosis (CAD) systems for breast cancer. This study explores the application of advanced generative models, based on generative artificial intelligence (GenAI), for the synthesis of digital breast ultrasound images. Using a hybrid Conditional Variational Autoencoder-Wasserstein Generative Adversarial Network (cVAE-WGAN) architecture, we developed a system to generate high-quality synthetic images conditioned on the class (malignant vs. normal/benign). These synthetic images, generated from the low-resolution BreastMNIST dataset and filtered for quality, were systematically integrated with real training data at different mixing ratios (W). The performance of a CNN classifier trained on these mixed datasets was evaluated against a baseline model trained only on real data balanced with SMOTE. The optimal integration (mixing weight W=0.25) produced a significant performance increase on the real test set: +8.17% in macro-average F1-score and +4.58% in accuracy compared to using real data alone. Analysis confirmed the originality of the generated samples. This approach offers a promising solution for overcoming data limitations in image-based breast cancer diagnostics, potentially improving the capabilities of CAD systems.
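One way to realize the mixing weight W above is to make a fraction W of the final training set synthetic. A minimal sketch under that assumption (the abstract does not spell out the exact mixing scheme, so this interpretation is hypothetical):

```python
import random

def mix_datasets(real, synthetic, w, seed=0):
    """Build a training set the same size as `real` in which a fraction w
    of the samples is synthetic. Interpretation of w is an assumption."""
    rng = random.Random(seed)
    n_syn = int(round(w * len(real)))
    mixed = rng.sample(real, len(real) - n_syn) + rng.sample(synthetic, n_syn)
    rng.shuffle(mixed)
    return mixed
```

In the study's setup, W would then be swept (e.g. 0, 0.25, 0.5, ...) and the classifier retrained on each mixture, with W=0.25 found optimal.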

Utilizing Pseudo Color Image to Improve the Performance of Deep Transfer Learning-Based Computer-Aided Diagnosis Schemes in Breast Mass Classification.

Jones MA, Zhang K, Faiz R, Islam W, Jo J, Zheng B, Qiu Y

PubMed · Jun 1 2025
The purpose of this study is to investigate the impact of using morphological information in classifying suspicious breast lesions. The widespread use of deep transfer learning can significantly improve the performance of mammogram-based CADx schemes. However, digital mammograms are grayscale images, while deep learning models are typically optimized using natural images containing three channels. The grayscale mammograms therefore need to be converted into three-channel images for input to deep transfer models. This study aims to develop a novel pseudo color image generation method which utilizes mass contour information to enhance classification performance. Accordingly, a total of 830 breast cancer cases were retrospectively collected, comprising 310 benign and 520 malignant cases. For each case, four regions of interest (ROIs) were collected from the grayscale images captured for both the CC and MLO views of the two breasts. Meanwhile, a total of seven pseudo color image sets were generated as input to the deep learning models, created through combinations of the original grayscale image, a histogram-equalized image, a bilaterally filtered image, and a segmented mass. The output features from four identical pre-trained deep learning models were concatenated and then processed by a support vector machine-based classifier to generate the final benign/malignant labels. The performance of each image set was evaluated and compared. The results demonstrate that the pseudo color sets containing the manually segmented mass performed significantly better than all other pseudo color sets, achieving an AUC (area under the ROC curve) of up to 0.889 ± 0.012 and an overall accuracy of up to 0.816 ± 0.020. At the same time, the performance improvement also depends on the accuracy of the mass segmentation. These results support our hypothesis that adding accurately segmented mass contours provides complementary information, thereby enhancing the performance of deep transfer models in classifying suspicious breast lesions.
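The pseudo-color idea above amounts to stacking differently processed versions of the grayscale ROI into the three channels a pre-trained network expects. A minimal sketch of one such channel combination (original, histogram-equalized, segmented mass); this exact trio is illustrative, one of several combinations the paper evaluates:

```python
import numpy as np

def pseudo_color(gray, mask):
    """Stack a grayscale ROI, its histogram-equalized version, and the
    segmented-mass channel into a 3-channel pseudo-color image."""
    gray = gray.astype(np.uint8)
    # histogram equalization via the cumulative distribution function
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    equalized = cdf[gray].astype(np.uint8)
    # keep gray values only inside the segmented mass
    masked = np.where(mask > 0, gray, 0).astype(np.uint8)
    return np.stack([gray, equalized, masked], axis=-1)
```

The resulting (H, W, 3) array can be fed directly to an ImageNet-pretrained backbone in place of an RGB photograph.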

Deep Learning in Digital Breast Tomosynthesis: Current Status, Challenges, and Future Trends.

Wang R, Chen F, Chen H, Lin C, Shuai J, Wu Y, Ma L, Hu X, Wu M, Wang J, Zhao Q, Shuai J, Pan J

PubMed · Jun 1 2025
The high-resolution three-dimensional (3D) images generated with digital breast tomosynthesis (DBT) in the screening of breast cancer offer new possibilities for early disease diagnosis. Early detection is especially important as the incidence of breast cancer increases. However, DBT also presents challenges in terms of poorer results for dense breasts, increased false positive rates, slightly higher radiation doses, and increased reading times. Deep learning (DL) has been shown to effectively increase the processing efficiency and diagnostic accuracy of DBT images. This article reviews the application and outlook of DL in DBT-based breast cancer screening. First, the fundamentals and challenges of DBT technology are introduced. The applications of DL in DBT are then grouped into three categories: diagnostic classification of breast diseases, lesion segmentation and detection, and medical image generation. Additionally, the current public databases for mammography are summarized in detail. Finally, this paper analyzes the main challenges in the application of DL techniques in DBT, such as the lack of public datasets and model training issues, and proposes possible directions for future research, including large language models, multisource domain transfer, and data augmentation, to encourage innovative applications of DL in medical imaging.

Data Augmentation for Medical Image Classification Based on Gaussian Laplacian Pyramid Blending With a Similarity Measure.

Kumar A, Sharma A, Singh AK, Singh SK, Saxena S

PubMed · Jun 1 2025
Breast cancer is a devastating disease that affects women worldwide, and computer-aided algorithms have shown potential in automating cancer diagnosis. Recently, Generative Artificial Intelligence (GenAI) has opened new possibilities for addressing the challenges of labeled-data scarcity and accurate prediction in critical applications. However, a lack of diversity, as well as unrealistic and unreliable data, have a detrimental impact on performance. Therefore, this study proposes an augmentation scheme to address the scarcity of labeled data and data imbalance in medical datasets. The approach integrates the concepts of the Gaussian-Laplacian pyramid and pyramid blending with similarity measures. In order to maintain the structural properties of images and capture the inter-variability of patient images of the same category, similarity-metric-based intermixing has been introduced. It helps to maintain the overall quality and integrity of the dataset. Subsequently, a deep learning approach with significant modifications, which leverages transfer learning through the use of concatenated pre-trained models, is applied to classify breast cancer histopathological images. The effectiveness of the proposal, including the impact of data augmentation, is demonstrated through a detailed analysis of three different medical datasets, showing significant performance improvement over baseline models. The proposal has the potential to contribute to the development of more accurate and reliable approaches for breast cancer diagnosis.
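The Gaussian-Laplacian pyramid blending the abstract builds on is a classic technique: blend Laplacian pyramids of two images with a Gaussian pyramid of a soft mask, then collapse. A minimal NumPy sketch of that core step (using a binomial kernel as the Gaussian approximation; the paper's similarity-measure intermixing is not reproduced here):

```python
import numpy as np

def _blur(img):
    # separable 1-4-6-4-1 binomial kernel (approximate Gaussian)
    k = np.array([1, 4, 6, 4, 1], float) / 16.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def _down(img):
    return _blur(img)[::2, ::2]          # blur then decimate by 2

def _up(img, shape):
    out = np.zeros(shape)
    out[::2, ::2] = img                  # zero-insert then blur; x4 restores energy
    return _blur(out) * 4.0

def pyramid_blend(a, b, mask, levels=3):
    """Blend 2-D float images a and b with a soft mask in [0, 1], using
    Laplacian pyramids of a/b and a Gaussian pyramid of the mask."""
    la, lb, gm = [], [], [mask]
    for _ in range(levels):
        a2, b2 = _down(a), _down(b)
        la.append(a - _up(a2, a.shape))  # Laplacian band at this level
        lb.append(b - _up(b2, b.shape))
        gm.append(_down(gm[-1]))         # Gaussian pyramid of the mask
        a, b = a2, b2
    blended = gm[levels] * a + (1 - gm[levels]) * b
    for i in range(levels - 1, -1, -1):  # collapse the blended pyramid
        lap = gm[i] * la[i] + (1 - gm[i]) * lb[i]
        blended = _up(blended, lap.shape) + lap
    return blended
```

For augmentation, a and b would be two same-class patient images (paired by a similarity measure in the paper's scheme), producing a new sample that preserves multi-scale structure from both.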