
Dense breasts and women's health: which screenings are essential?

Mota BS, Shimizu C, Reis YN, Gonçalves R, Soares Junior JM, Baracat EC, Filassi JR

PubMed · Aug 9, 2025
This review synthesizes current evidence on optimal breast cancer screening strategies for women with dense breasts, a population at increased risk due to decreased mammographic sensitivity. A systematic literature review was performed in accordance with PRISMA criteria, covering MEDLINE, EMBASE, CINAHL Plus, Scopus, and Web of Science through May 2025. The analysis examines advanced imaging techniques such as digital breast tomosynthesis (DBT), contrast-enhanced spectral mammography (CESM), ultrasound, and magnetic resonance imaging (MRI), assessing their effectiveness in addressing the shortcomings of traditional mammography in dense breast tissue. The review evaluates the incorporation of risk stratification models, such as the BCSC model, in customizing screening regimens, alongside innovative technologies like liquid biopsy and artificial intelligence-based image analysis for improved risk prediction. A key emphasis is placed on the heterogeneity of international screening guidelines and the challenges of translating research findings to diverse clinical settings, particularly in resource-constrained environments. The discussion includes the ethical implications of mandatory breast density notification and the risk of widening healthcare disparities. The review ultimately encourages the development of evidence-based, context-specific guidelines that facilitate equitable access to effective breast cancer screening for all women with dense breasts.

Supporting intraoperative margin assessment using deep learning for automatic tumour segmentation in breast lumpectomy micro-PET-CT.

Maris L, Göker M, De Man K, Van den Broeck B, Van Hoecke S, Van de Vijver K, Vanhove C, Keereman V

PubMed · Aug 9, 2025
Complete tumour removal is vital in curative breast cancer (BCa) surgery to prevent recurrence. Recently, [18F]FDG micro-PET-CT of lumpectomy specimens has shown promise for intraoperative margin assessment (IMA). To aid interpretation, we trained a 2D Residual U-Net to delineate invasive carcinoma of no special type in micro-PET-CT lumpectomy images. We collected 53 BCa lamella images from 19 patients with true histopathology-defined tumour segmentations. Grouped five-fold cross-validation yielded a Dice similarity coefficient of 0.71 ± 0.20 for segmentation. Afterwards, an ensemble model was generated to segment tumours and predict margin status. Comparing predicted and true histopathological margin status in a separate set of 31 micro-PET-CT lumpectomy images from 31 patients achieved an F1 score of 84%, closely matching the mean performance of seven physicians who manually interpreted the same images. This model represents an important step towards a decision-support system that enhances micro-PET-CT-based IMA in BCa, facilitating its clinical adoption.
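
For reference, the headline segmentation metric here, the Dice similarity coefficient, can be computed as in this minimal sketch (the toy masks and names are illustrative, not from the paper):

```python
# Minimal sketch: Dice similarity coefficient between a predicted and a
# reference binary segmentation mask, as commonly used to score models
# like the 2D Residual U-Net described above.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Example: two overlapping 4x4 masks
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1
truth = np.zeros((4, 4)); truth[1:4, 1:4] = 1
print(f"DSC = {dice_coefficient(pred, truth):.3f}")  # 0.615
```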

Enhancing B-mode-based breast cancer diagnosis via cross-attention fusion of H-scan and Nakagami imaging with multi-CAM-QUS-driven XAI.

Mondol SS, Hasan MK

PubMed · Aug 8, 2025
B-mode ultrasound is widely employed for breast lesion diagnosis due to its affordability, widespread availability, and effectiveness, particularly in dense breast tissue where mammography may be less sensitive. However, it disregards critical tissue information embedded in raw radiofrequency (RF) data. While both modalities have shown promise in Computer-Aided Diagnosis (CAD), their combined potential remains largely unexplored. Approach: This paper presents an automated breast lesion classification network that utilizes H-scan and Nakagami parametric images derived from RF ultrasound signals, combined with machine-generated B-mode images, integrated through a Multi-Modal Cross-Attention Fusion (MM-CAF) mechanism to extract complementary information. The proposed architecture also incorporates an attention-guided modified InceptionV3 for feature extraction, a Knowledge-Guided Cross-Modality Learning (KGCML) module for inter-modal knowledge sharing, and Attention-Driven Context Enhancement (ADCE) modules to improve contextual understanding and fusion with the classification network. The network employs categorical cross-entropy loss, a Multi-CAM-based loss that guides learning toward accurate lesion-specific features, and a Multi-QUS-based loss that embeds clinically meaningful domain knowledge to effectively distinguish benign from malignant lesions, all while supporting explainable AI (XAI) principles. Main results: Experiments conducted on multi-center breast ultrasound datasets (BUET-BUSD, ATL, and OASBUD), characterized by demographic diversity, demonstrate the effectiveness of the proposed approach, achieving classification accuracies of 92.54%, 89.93%, and 90.0%, respectively, along with high interpretability and trustworthiness. These results surpass those of existing methods based on B-mode and/or RF data, highlighting the superior performance and robustness of the proposed technique. By integrating complementary RF-derived information with B-mode imaging, pseudo-segmentation, and domain-informed loss functions, our method significantly boosts lesion classification accuracy, enabling fully automated, explainable CAD and paving the way for widespread clinical adoption of AI-driven breast screening.
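
The MM-CAF internals are not spelled out in the abstract; the following hedged PyTorch sketch shows only the generic cross-attention fusion mechanism the name implies, with all names and dimensions illustrative:

```python
# Hedged sketch of cross-attention fusion between two modality feature
# sets (e.g., B-mode tokens as queries, H-scan/Nakagami tokens as
# keys/values). This is a generic mechanism, not the paper's design.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # query_feats: (B, Nq, dim) tokens from one modality
        # context_feats: (B, Nk, dim) tokens from the other modality
        fused, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + fused)  # residual connection

bmode = torch.randn(2, 49, 256)      # e.g., a 7x7 B-mode feature map, flattened
rf_params = torch.randn(2, 49, 256)  # H-scan/Nakagami feature tokens
fused = CrossAttentionFusion()(bmode, rf_params)
print(fused.shape)  # torch.Size([2, 49, 256])
```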

BCDCNN: breast cancer deep convolutional neural network for breast cancer detection using MRI images.

Martina Jaincy DE, Pattabiraman V

PubMed · Aug 8, 2025
Breast cancer (BC) arises from the cells of breast tissue and is the most common primary cancer in women. Early identification of BC is critical to treatment. To reduce unnecessary biopsies, Magnetic Resonance Imaging (MRI) is now used in BC diagnosis; it is the most recommended examination for detecting and monitoring BC and delineating lesion areas because of its superior soft-tissue imaging. However, it is a time-consuming procedure and requires skilled radiologists. Here, a Breast Cancer Deep Convolutional Neural Network (BCDCNN) is presented for Breast Cancer Detection (BCD) using MRI images. First, the input image is taken from the database and pre-processed with an Adaptive Kalman Filter (AKF). Thereafter, cancer-area segmentation is performed on the filtered images by a Pyramid Scene Parsing Network (PSPNet). To improve segmentation accuracy and adapt to complex tumor boundaries, PSPNet is optimized using the Jellyfish Search Optimizer (JSO), a recent nature-inspired metaheuristic that converges to an optimal solution in fewer iterations than conventional methods. Image augmentation is then performed using rotation, random erasing, and flipping. Afterwards, features are extracted and, finally, BCD is conducted with BCDCNN, whose loss function is newly designed around an adaptive error similarity. This improves overall performance by dynamically emphasizing samples with ambiguous predictions, enabling the model to focus on diagnostically challenging cases and enhancing its discriminative capability. BCDCNN achieved an accuracy of 90.2%, a sensitivity of 90.6%, and a specificity of 90.9%. The proposed method not only demonstrates strong classification performance but also holds promise for real-world clinical application in early and accurate breast cancer diagnosis.
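
As an illustration of the augmentation step named above (rotation, random erasing, flipping), here is a sketch using torchvision transforms; the paper's actual parameters are not given, so all values are assumptions:

```python
# Illustrative augmentation pipeline for single-channel MRI slices.
# Parameter choices (degrees, probabilities, scales) are placeholders.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                 # rotation
    transforms.RandomHorizontalFlip(p=0.5),                # flipping
    transforms.RandomErasing(p=0.25, scale=(0.02, 0.1)),   # random erasing
])

mri_slice = torch.rand(1, 224, 224)  # a normalized single-channel MRI slice
augmented = augment(mri_slice)
print(augmented.shape)  # torch.Size([1, 224, 224])
```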

Transformer-Based Explainable Deep Learning for Breast Cancer Detection in Mammography: The MammoFormer Framework

Ojonugwa Oluwafemi Ejiga Peter, Daniel Emakporuena, Bamidele Dayo Tunde, Maryam Abdulkarim, Abdullahi Bn Umar

arXiv preprint · Aug 8, 2025
Breast cancer detection through mammography interpretation remains difficult because of the subtlety of the abnormalities experts must identify and the variability of interpretation between readers. The potential of CNNs for medical image analysis faces two limitations: they cannot adequately process both local information and wide contextual data, and they do not provide the explainable AI (XAI) operations that doctors need before accepting them in clinics. The researchers developed the MammoFormer framework, which unites transformer-based architecture with multi-feature enhancement components and XAI functionalities within one framework. Seven architectures, spanning CNNs, Vision Transformer, Swin Transformer, and ConvNeXt, were tested alongside four enhancement techniques: original images, negative transformation, adaptive histogram equalization, and histogram of oriented gradients. The MammoFormer framework addresses critical clinical adoption barriers of AI mammography systems through: (1) systematic optimization of transformer architectures via architecture-specific feature enhancement, achieving up to 13% performance improvement, (2) comprehensive explainable AI integration providing multi-perspective diagnostic interpretability, and (3) a clinically deployable ensemble system combining CNN reliability with transformer global context modeling. With suitable feature enhancements, transformer models achieve results equal to or better than CNN approaches: ViT reaches 98.3% accuracy with AHE, while Swin Transformer gains a 13.0% advantage through HOG enhancement.
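
Two of the enhancement techniques named above, adaptive histogram equalization and HOG, map to standard scikit-image calls; the following sketch uses illustrative parameters rather than the paper's settings:

```python
# Sketch of CLAHE (adaptive histogram equalization) and histogram of
# oriented gradients (HOG) on a stand-in grayscale image.
import numpy as np
from skimage import data, exposure
from skimage.feature import hog

image = data.camera().astype(np.float64) / 255.0  # stand-in for a mammogram

# Adaptive histogram equalization (CLAHE)
enhanced = exposure.equalize_adapthist(image, clip_limit=0.03)

# HOG feature vector plus a visualization image
features, hog_image = hog(
    enhanced, orientations=9, pixels_per_cell=(8, 8),
    cells_per_block=(2, 2), visualize=True,
)
print(features.shape, hog_image.shape)
```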

Structured Report Generation for Breast Cancer Imaging Based on Large Language Modeling: A Comparative Analysis of GPT-4 and DeepSeek.

Chen K, Hou X, Li X, Xu W, Yi H

PubMed · Aug 7, 2025
The purpose of this study is to compare the performance of the GPT-4 and DeepSeek large language models in generating structured breast cancer multimodality imaging integrated reports from free-text radiology reports, including mammography, ultrasound, MRI, and PET/CT. A retrospective analysis was conducted on 1358 free-text reports from 501 breast cancer patients across two institutions. The study design involved synthesizing multimodal imaging data into structured reports with three components: primary lesion characteristics, metastatic lesions, and TNM staging. Input prompts were standardized for both models, with GPT-4 using predesigned instructions and DeepSeek requiring manual input. Reports were evaluated on physician satisfaction using a Likert scale; descriptive accuracy, including lesion localization, size, SUV, and metastasis assessment; and TNM staging correctness according to NCCN guidelines. Statistical analysis included McNemar tests for binary outcomes and correlation analysis for multiclass comparisons, with a significance threshold of P < .05. Physician satisfaction scores showed strong correlation between models, with r-values of 0.665 and 0.558 and P-values below .001. Both models demonstrated high accuracy in data extraction and integration. The mean accuracy for primary lesion features was 91.7% for GPT-4 and 92.1% for DeepSeek, while feature synthesis accuracy was 93.4% for GPT-4 and 93.9% for DeepSeek. Metastatic lesion identification showed comparable overall accuracy, at 93.5% for GPT-4 and 94.4% for DeepSeek. GPT-4 performed better in pleural lesion detection, with 94.9% accuracy compared to 79.5% for DeepSeek, whereas DeepSeek achieved higher accuracy in mesenteric metastasis identification, at 87.5% vs 43.8% for GPT-4. TNM staging accuracy exceeded 92% for T-stage and 94% for M-stage, with N-stage accuracy improving beyond 90% when supplemented with physical exam data. Both GPT-4 and DeepSeek effectively generate structured breast cancer imaging reports with high accuracy in data mining, integration, and TNM staging. Integrating these models into clinical practice is expected to enhance report standardization and physician productivity.
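
A minimal sketch of the McNemar test used here for the paired binary comparisons (e.g., correct/incorrect extraction by GPT-4 vs. DeepSeek on the same reports), via statsmodels; the 2x2 contingency counts below are invented for illustration:

```python
# McNemar test on a paired 2x2 table of correct/incorrect outcomes.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: GPT-4 correct / incorrect; columns: DeepSeek correct / incorrect
table = [[850, 40],
         [55, 55]]

result = mcnemar(table, exact=False, correction=True)
print(f"statistic={result.statistic:.3f}, p-value={result.pvalue:.4f}")
```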

Advanced Multi-Architecture Deep Learning Framework for BIRADS-Based Mammographic Image Retrieval: Comprehensive Performance Analysis with Super-Ensemble Optimization

MD Shaikh Rahman, Feiroz Humayara, Syed Maudud E Rabbi, Muhammad Mahbubur Rashid

arXiv preprint · Aug 6, 2025
Content-based mammographic image retrieval systems require exact BIRADS categorical matching across five distinct classes, presenting significantly greater complexity than binary classification tasks commonly addressed in literature. Current medical image retrieval studies suffer from methodological limitations including inadequate sample sizes, improper data splitting, and insufficient statistical validation that hinder clinical translation. We developed a comprehensive evaluation framework systematically comparing CNN architectures (DenseNet121, ResNet50, VGG16) with advanced training strategies including sophisticated fine-tuning, metric learning, and super-ensemble optimization. Our evaluation employed rigorous stratified data splitting (50%/20%/30% train/validation/test), 602 test queries, and systematic validation using bootstrap confidence intervals with 1,000 samples. Advanced fine-tuning with differential learning rates achieved substantial improvements: DenseNet121 (34.79% precision@10, 19.64% improvement) and ResNet50 (34.54%, 19.58% improvement). Super-ensemble optimization combining complementary architectures achieved 36.33% precision@10 (95% CI: [34.78%, 37.88%]), representing 24.93% improvement over baseline and providing 3.6 relevant cases per query. Statistical analysis revealed significant performance differences between optimization strategies (p<0.001) with large effect sizes (Cohen's d>0.8), while maintaining practical search efficiency (2.8 milliseconds). Performance significantly exceeds realistic expectations for 5-class medical retrieval tasks, where literature suggests 20-25% precision@10 represents achievable performance for exact BIRADS matching. Our framework establishes new performance benchmarks while providing evidence-based architecture selection guidelines for clinical deployment in diagnostic support and quality assurance applications.
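
For concreteness, precision@10 with a bootstrap percentile confidence interval, the abstract's headline evaluation, can be computed as in this sketch (the per-query hit data are random placeholders):

```python
# Precision@10 with a 1,000-resample bootstrap CI over queries.
# In the paper, a retrieved image counts as relevant only if its
# BIRADS class exactly matches the query's.
import numpy as np

rng = np.random.default_rng(0)
n_queries, k = 602, 10
# relevant[i, j] = 1 if the j-th retrieved image for query i is a BIRADS match
relevant = rng.random((n_queries, k)) < 0.36  # placeholder hit rate

prec_at_10 = relevant.mean(axis=1)            # per-query precision@10
point_estimate = prec_at_10.mean()

# Bootstrap over queries (percentile interval)
boots = [
    prec_at_10[rng.integers(0, n_queries, n_queries)].mean()
    for _ in range(1000)
]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"precision@10 = {point_estimate:.4f} (95% CI: [{lo:.4f}, {hi:.4f}])")
```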

Deep learning-based radiomics does not improve residual cancer burden prediction post-chemotherapy in LIMA breast MRI trial.

Janse MHA, Janssen LM, Wolters-van der Ben EJM, Moman MR, Viergever MA, van Diest PJ, Gilhuijs KGA

PubMed · Aug 6, 2025
This study aimed to evaluate the potential added value of deep radiomics for assessing residual cancer burden (RCB) in locally advanced breast cancer after neoadjuvant chemotherapy (NAC) but before surgery, compared to standard predictors: tumor volume and subtype. This retrospective study used a 105-patient single-institution training set and a 41-patient external test set from three institutions in the LIMA trial. DCE-MRI was performed before and after NAC, and RCB was determined post-surgery. Three networks (nnU-Net, Attention U-Net, and a vector-quantized encoder-decoder) were trained for tumor segmentation. For each network, deep features were extracted from the bottleneck layer and used to train random forest regression models to predict the RCB score. Models were compared to (1) a model trained on tumor volume and (2) a model combining tumor volume and subtype, and the potential complementary performance of combining deep radiomics with a clinical-radiological model was assessed. From the predicted RCB score, three metrics were calculated: area under the curve (AUC) for RCB-0/RCB-I versus RCB-II/III, AUC for pathological complete response (pCR) versus non-pCR, and Spearman's correlation. Deep radiomics models had an AUC between 0.68-0.74 for pCR and 0.68-0.79 for RCB, while the volume-only model had an AUC of 0.74 and 0.70 for pCR and RCB, respectively. Spearman's correlation varied from 0.45-0.51 (deep radiomics) to 0.53 (combined model). No statistical difference between models was observed. Segmentation-network-derived deep radiomics contain information similar to tumor volume and subtype for inferring pCR and RCB after NAC, but do not complement standard clinical predictors in the LIMA trial.

Question: It is unknown if, and which, deep radiomics approach is most suitable for extracting relevant features to assess neoadjuvant chemotherapy response on breast MRI.
Findings: Radiomic features extracted from deep-learning networks predict neoadjuvant chemotherapy response about as well as tumor volume and subtype in the LIMA study, but provide no complementary information.
Clinical relevance: For predicting response to neoadjuvant chemotherapy in breast cancer patients, tumor volume on MRI and subtype remain important predictors of treatment outcome; deep radiomics might be an alternative when determining tumor volume and/or subtype is not feasible.
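
A hedged sketch of the study's second stage, regressing RCB score on bottleneck features with a random forest and scoring with Spearman's correlation; the features below are random placeholders, since the extraction itself depends on the specific segmentation network:

```python
# Random forest regression of RCB score on deep (bottleneck) features,
# evaluated with Spearman's correlation on an external test set.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X_train = rng.normal(size=(105, 128))   # placeholder bottleneck features, training set
y_train = rng.uniform(0, 5, size=105)   # placeholder RCB scores
X_test = rng.normal(size=(41, 128))     # placeholder external test set
y_test = rng.uniform(0, 5, size=41)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
rho, p = spearmanr(y_test, model.predict(X_test))
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```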

Prediction of breast cancer HER2 status changes based on ultrasound radiomics attention network.

Liu J, Xue X, Yan Y, Song Q, Cheng Y, Wang L, Wang X, Xu D

PubMed · Aug 5, 2025
Following neoadjuvant chemotherapy (NAC), the Human Epidermal Growth Factor Receptor 2 (HER2) status of a tumor can change. If such changes are not promptly identified, treatment plans cannot be adjusted in time, compromising the optimal management of breast cancer. Accurate prediction of HER2 status changes therefore holds significant clinical value, underscoring the need for a model capable of precisely forecasting them. In this paper, we elucidate the intricacies surrounding HER2 status changes and propose a deep learning architecture combined with radiomics techniques, named the Ultrasound Radiomics Attention Network (URAN), to predict HER2 status changes. First, radiomics technology is used to extract ultrasound image features, providing rich and comprehensive medical information. Second, a HER2 Key Feature Selection (HKFS) network is constructed to retain crucial features relevant to HER2 status changes. Third, we design a Max and Average Attention and Excitation (MAAE) network to adjust the model's focus on different key features. Finally, a fully connected neural network is used to predict HER2 status changes. The code to reproduce our experiments can be found at https://github.com/joanaapa/Foundation-Medical. Our research was carried out on genuine ultrasound images sourced from hospitals. On this dataset, URAN outperformed both state-of-the-art and traditional methods in predicting HER2 status changes, achieving an accuracy of 0.8679 and an AUC of 0.8328 (95% CI: 0.77-0.90). Comparative experiments on the public BUS_UCLM dataset further demonstrated URAN's superiority, attaining an accuracy of 0.9283 and an AUC of 0.9161 (95% CI: 0.91-0.92). Additionally, we undertook rigorously crafted ablation studies, which validated the logic and effectiveness of the radiomics techniques as well as the HKFS and MAAE modules integrated within URAN. Results for specific HER2 statuses indicate that URAN is most accurate at predicting changes in HER2 status characterized by low expression and IHC scores of 2+ or below. Furthermore, we examined the radiomics attributes of the ultrasound images and discovered that various wavelet transform features significantly impact HER2 status changes. In summary, we have developed URAN, a method for predicting HER2 status changes that combines radiomics techniques and deep learning. The URAN model has better predictive performance than competing algorithms and can mine key radiomics features related to HER2 status changes.
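
The abstract does not detail the MAAE module; the sketch below shows one plausible reading, a channel-attention block that combines max- and average-pooled descriptors (in the style of CBAM channel attention), and is our assumption rather than the authors' design:

```python
# Hedged sketch: gated channel reweighting driven by both max- and
# average-pooled feature descriptors, one plausible reading of "Max and
# Average Attention and Excitation". Shapes are illustrative.
import torch
import torch.nn as nn

class MaxAvgExcitation(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, L) batch of radiomics feature sequences
        avg_desc = x.mean(dim=-1)            # (B, C) average-pooled descriptor
        max_desc = x.amax(dim=-1)            # (B, C) max-pooled descriptor
        gate = torch.sigmoid(self.mlp(avg_desc) + self.mlp(max_desc))
        return x * gate.unsqueeze(-1)        # reweight channels

feats = torch.randn(8, 64, 32)  # batch of 8, 64 channels, length 32
print(MaxAvgExcitation(64)(feats).shape)  # torch.Size([8, 64, 32])
```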

Retrospective evaluation of interval breast cancer screening mammograms by radiologists and AI.

Subelack J, Morant R, Blum M, Gräwingholt A, Vogel J, Geissler A, Ehlig D

PubMed · Aug 4, 2025
To determine whether an AI system can identify breast cancer risk in interval breast cancer (IBC) screening mammograms, IBC screening mammograms from a Swiss screening program were retrospectively analyzed by radiologists and an AI system. Radiologists determined whether the IBC mammogram showed visible signs of breast cancer (potentially missed IBCs) or not (IBCs without retrospective abnormalities). The AI system provided a case score and a prognostic risk category per mammogram. 119 IBC cases (mean age 57.3 ± 5.4 years) were available with complete retrospective evaluations by radiologists and the AI system; 82 (68.9%) were classified as IBCs without retrospective abnormalities and 37 (31.1%) as potentially missed IBCs. 46.2% of all IBCs received a case score ≥ 25, 25.2% ≥ 50, and 13.4% ≥ 75. Of the 25.2% of IBCs with a case score ≥ 50 (vs. 13.4% in a no-breast-cancer population), 45.2% had not been discussed during a consensus conference, reflecting 11.4% of all IBC cases. The potentially missed IBCs received significantly higher case scores and risk classifications than IBCs without retrospective abnormalities (mean case score: 54.1 vs. 23.1; high risk: 48.7% vs. 14.7%; p < 0.05). 13.4% of the IBCs without retrospective abnormalities received a case score ≥ 50, of which 62.5% had not been discussed during a consensus conference. An AI system can thus identify IBC screening mammograms with a higher risk of breast cancer, particularly among potentially missed IBCs but also in some IBCs without retrospective abnormalities where radiologists saw nothing, indicating its ability to improve mammography screening quality.

Question: AI presents a promising opportunity to enhance breast cancer screening in general, but evidence is missing regarding its ability to reduce interval breast cancers.
Findings: The AI system detected a high risk of breast cancer in most interval breast cancer screening mammograms in which radiologists retrospectively detected abnormalities.
Clinical relevance: Using an AI system in mammography screening programs can flag breast cancer risk in many interval breast cancer screening mammograms and thus potentially reduce the number of interval breast cancers.
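
The abstract reports p < 0.05 without naming the test; a Mann-Whitney U comparison is one standard choice for such score distributions, sketched here on data simulated around the reported group means:

```python
# Comparing AI case-score distributions between potentially missed IBCs
# and IBCs without retrospective abnormalities. Scores are simulated
# around the reported means (54.1 vs. 23.1); spreads are assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
missed = np.clip(rng.normal(54.1, 25, size=37), 0, 100)     # potentially missed IBCs
no_abnorm = np.clip(rng.normal(23.1, 20, size=82), 0, 100)  # no retrospective abnormalities

stat, p = mannwhitneyu(missed, no_abnorm, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.2e}")
print(f"share with case score >= 50: missed {np.mean(missed >= 50):.1%}, "
      f"others {np.mean(no_abnorm >= 50):.1%}")
```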