
Relationship Between [¹⁸F]FDG PET/CT Texture Analysis and Progression-Free Survival in Patients Diagnosed With Invasive Breast Carcinoma.

Bülbül O, Bülbül HM, Göksel S

PubMed · Aug 22, 2025
Breast cancer is the most common cancer and the leading cause of cancer-related deaths in women. Texture analysis provides crucial prognostic information about many types of cancer, including breast cancer. The aim of this study was to examine the relationship between texture features (TFs) of 2-deoxy-2-[¹⁸F]fluoro-D-glucose positron emission tomography (PET)/computed tomography and disease progression in patients with invasive breast cancer. TFs of the primary malignant lesion were extracted from PET images of 112 patients. TFs that differed significantly between patients who achieved one-, three-, and five-year progression-free survival (PFS) and those who did not were selected and subjected to least absolute shrinkage and selection operator (LASSO) regression to reduce the feature set and prevent overfitting. Machine learning (ML) was used to predict PFS from TFs and selected clinicopathological parameters. In models using only TFs, random forest predicted one-, three-, and five-year PFS with area under the curve (AUC) values of 0.730, 0.758, and 0.797, respectively; naive Bayes with AUC values of 0.857, 0.804, and 0.843; and the neural network with AUC values of 0.782, 0.828, and 0.780. AUC values increased when the models combined TFs with clinicopathological parameters: the lowest AUCs among these combined models for one-, three-, and five-year PFS were 0.867, 0.898, and 0.867, respectively. ML models incorporating PET-derived TFs and clinical parameters may assist in predicting progression during the pre-treatment period in patients with invasive breast carcinoma.
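The pipeline described here follows a common radiomics pattern: LASSO-based feature reduction followed by standard classifiers. The sketch below illustrates that pattern with scikit-learn, assuming texture features have already been extracted into a feature matrix; the placeholder data, names, and hyperparameters are illustrative, not the authors'.

```python
# Sketch: LASSO feature reduction followed by the three classifier
# families reported in the abstract. X and y are random placeholders
# standing in for extracted PET texture features and PFS labels.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 100))   # placeholder: 112 patients x 100 texture features
y = rng.integers(0, 2, size=112)  # placeholder: 3-year PFS labels

X_scaled = StandardScaler().fit_transform(X)

# LASSO shrinks uninformative coefficients to exactly zero -> feature selection
lasso = LassoCV(cv=5).fit(X_scaled, y)
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:            # fall back if LASSO zeroes everything
    selected = np.arange(X.shape[1])
X_sel = X_scaled[:, selected]

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("naive Bayes", GaussianNB()),
                  ("neural network", MLPClassifier(max_iter=1000, random_state=0))]:
    auc = cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```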

Combined use of two artificial intelligence-based algorithms for mammography triaging: a retrospective simulation study.

Kim HJ, Kim HH, Eom HJ, Choi WJ, Chae EY, Shin HJ, Cha JH

PubMed · Aug 21, 2025
To evaluate triaging scenarios involving two commercial AI algorithms to enhance mammography interpretation and reduce workload. A total of 3012 screening or diagnostic mammograms, including 213 cancer cases, were analyzed using two AI algorithms (AI-1, AI-2) and categorized as "high-risk" (top 10%), "minimal-risk" (bottom 20%), or "indeterminate" based on malignancy likelihood. Five triaging scenarios of combined AI use (Sensitive, Specific, Conservative, Sequential Modes A and B) determined whether cases would be autonomously recalled, classified as negative, or referred for radiologist interpretation. Sensitivity, specificity, number of mammograms requiring review, and abnormal interpretation rate (AIR) were compared against single AIs and manual reading using McNemar's test. Sensitive Mode achieved 84% sensitivity, outperforming single AI (p = 0.03 [AI-1], 0.01 [AI-2]) and manual reading (p = 0.03), with an 18.3% reduction in mammograms requiring review (AIR, 23.3%). Specific Mode achieved 87.7% specificity, exceeding single AI (p < 0.001 [AI-1, AI-2]) and comparable to manual reading (p = 0.37), with a 41.7% reduction in mammograms requiring review (AIR, 17%). Conservative and Sequential Modes A and B achieved sensitivities of 82.2%, 80.8%, and 80.3%, respectively, comparable to single AI or manual reading (p > 0.05 for all), with reductions of 9.8%, 49.8%, and 49.8% in mammograms requiring review (AIRs, 18.6%, 21.6%, 21.7%). Combining two AI algorithms improved sensitivity or specificity in mammography interpretation while reducing the number of mammograms requiring radiologist review in this cancer-enriched dataset from a tertiary center. Scenario selection should consider clinical needs and requires validation in a screening population.

Question: AI algorithms have the potential to improve workflow efficiency by triaging mammograms. Combining algorithms trained under different conditions may offer synergistic benefits.

Findings: The combined use of two commercial AI algorithms for triaging mammograms improved sensitivity or specificity, depending on the scenario, while also reducing the number of mammograms requiring radiologist review.

Clinical relevance: Integrating two commercial AI algorithms could enhance mammography interpretation over using a single AI for triaging or manual reading.
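The abstract does not spell out the combination rule for each mode, so the sketch below shows one plausible "Sensitive Mode"-style rule (recall if either AI flags high risk, auto-negative only if both agree on minimal risk) as an assumption for illustration; the cutoffs mirror the top-10%/bottom-20% categorization described above.

```python
# Illustrative two-AI triage sketch. The true per-mode logic is not
# given in the abstract; this "sensitive"-style rule is an assumption.
import numpy as np

def categorize(scores: np.ndarray) -> np.ndarray:
    """Map continuous malignancy scores to 'high', 'minimal', or 'indeterminate'."""
    hi_cut = np.quantile(scores, 0.90)   # top 10% -> high-risk
    lo_cut = np.quantile(scores, 0.20)   # bottom 20% -> minimal-risk
    cats = np.full(scores.shape, "indeterminate", dtype=object)
    cats[scores >= hi_cut] = "high"
    cats[scores <= lo_cut] = "minimal"
    return cats

def sensitive_mode(cat1: str, cat2: str) -> str:
    """Per-case decision: autonomous 'recall', autonomous 'negative', or 'review'."""
    if cat1 == "high" or cat2 == "high":
        return "recall"                   # either AI flags high risk
    if cat1 == "minimal" and cat2 == "minimal":
        return "negative"                 # both AIs agree on minimal risk
    return "review"                       # radiologist interpretation

rng = np.random.default_rng(1)
s1, s2 = rng.random(3012), rng.random(3012)   # placeholder AI scores
decisions = [sensitive_mode(a, b) for a, b in zip(categorize(s1), categorize(s2))]
workload_reduction = 1 - decisions.count("review") / len(decisions)
print(f"mammograms removed from radiologist worklist: {workload_reduction:.1%}")
```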

DoSReMC: Domain Shift Resilient Mammography Classification using Batch Normalization Adaptation

Uğurcan Akyüz, Deniz Katircioglu-Öztürk, Emre K. Süslü, Burhan Keleş, Mete C. Kaya, Gamze Durhan, Meltem G. Akpınar, Figen B. Demirkazık, Gözde B. Akar

arXiv preprint · Aug 21, 2025
Numerous deep learning-based solutions have been developed for the automatic recognition of breast cancer using mammography images. However, their performance often declines when applied to data from different domains, primarily due to domain shift - the variation in data distributions between source and target domains. This performance drop limits the safe and equitable deployment of AI in real-world clinical settings. In this study, we present DoSReMC (Domain Shift Resilient Mammography Classification), a batch normalization (BN) adaptation framework designed to enhance cross-domain generalization without retraining the entire model. Using three large-scale full-field digital mammography (FFDM) datasets - including HCTP, a newly introduced, pathologically confirmed in-house dataset - we conduct a systematic cross-domain evaluation with convolutional neural networks (CNNs). Our results demonstrate that BN layers are a primary source of domain dependence: they perform effectively when training and testing occur within the same domain, and they significantly impair model generalization under domain shift. DoSReMC addresses this limitation by fine-tuning only the BN and fully connected (FC) layers, while preserving pretrained convolutional filters. We further integrate this targeted adaptation with an adversarial training scheme, yielding additional improvements in cross-domain generalizability. DoSReMC can be readily incorporated into existing AI pipelines and applied across diverse clinical environments, providing a practical pathway toward more robust and generalizable mammography classification systems.
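A minimal PyTorch sketch of the core adaptation idea as described: freeze the pretrained convolutional filters and fine-tune only the batch normalization and fully connected parameters on target-domain data. The backbone choice is an illustrative stand-in, and the adversarial training component is omitted.

```python
# Sketch of BN + FC adaptation: convolutional weights stay frozen,
# only BatchNorm and the classification head are fine-tuned.
# Backbone and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs malignant head

# Freeze everything, then unfreeze BN layers and the FC head
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        for p in m.parameters():
            p.requires_grad = True
for p in model.fc.parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()  # keep BN in training mode so running stats adapt too
x = torch.randn(4, 3, 224, 224)          # placeholder target-domain batch
y = torch.randint(0, 2, (4,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```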

Differentiation of Suspicious Microcalcifications Using Deep Learning: DCIS or IDC.

Xu W, Deng S, Mao G, Wang N, Huang Y, Zhang C, Sa G, Wu S, An Y

PubMed · Aug 20, 2025
To explore the value of a deep learning-based model in distinguishing between ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC) manifesting as suspicious microcalcifications on mammography. A total of 294 breast cancer cases (106 DCIS and 188 IDC) from two centers were randomly allocated into training, internal validation, and external validation sets in this retrospective study. Clinical variables differentiating DCIS from IDC were identified through univariate and multivariate analyses and used to build a clinical model. Deep learning features were extracted using ResNet101 and selected by minimum redundancy maximum relevance (mRMR) and least absolute shrinkage and selection operator (LASSO). A deep learning model was developed using deep learning features, and a combined model was constructed by combining these features with clinical variables. The area under the receiver operating characteristic curve (AUC) was used to assess the performance of each model. Multivariate logistic regression identified lesion type and BI-RADS category as independent predictors for differentiating DCIS from IDC. The clinical model incorporating these factors achieved an AUC of 0.67, sensitivity of 0.53, specificity of 0.81, and accuracy of 0.63 in the external validation set. In comparison, the deep learning model showed an AUC of 0.97, sensitivity of 0.94, specificity of 0.92, and accuracy of 0.93. For the combined model, the AUC, sensitivity, specificity, and accuracy were 0.97, 0.96, 0.92, and 0.95, respectively. The diagnostic efficacy of the deep learning model and the combined model was comparable (p > 0.05), and both models outperformed the clinical model (p < 0.05). Deep learning provides an effective non-invasive approach to differentiate DCIS from IDC presenting as suspicious microcalcifications on mammography.
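The feature pipeline maps to a short script: extract penultimate-layer features from a pretrained ResNet101, then reduce and classify them. In the sketch below the mRMR step is replaced by a simple univariate filter and LASSO by an L1-penalized logistic regression for brevity; both substitutions, and all placeholder data, are assumptions rather than the authors' exact pipeline.

```python
# Sketch: deep-feature extraction with a pretrained ResNet101, then
# feature selection and an L1-penalized classifier. The univariate
# filter is a stand-in for mRMR; data are random placeholders.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet101
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = resnet101(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()        # expose the 2048-d penultimate features
backbone.eval()

with torch.no_grad():
    imgs = torch.randn(32, 3, 224, 224)   # placeholder lesion crops
    feats = backbone(imgs).numpy()        # (32, 2048) deep features

labels = np.random.randint(0, 2, 32)      # placeholder DCIS/IDC labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=64),                          # mRMR stand-in
    LogisticRegression(penalty="l1", solver="liblinear"),  # LASSO-like
)
clf.fit(feats, labels)
```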

Deep Learning Model for Breast Shear Wave Elastography to Improve Breast Cancer Diagnosis (INSPiRED 006): An International, Multicenter Analysis.

Cai L, Pfob A, Barr RG, Duda V, Alwafai Z, Balleyguier C, Clevert DA, Fastner S, Gomez C, Goncalo M, Gruber I, Hahn M, Kapetas P, Nees J, Ohlinger R, Riedel F, Rutten M, Stieber A, Togawa R, Sidey-Gibbons C, Tozaki M, Wojcinski S, Heil J, Golatta M

PubMed · Aug 20, 2025
Shear wave elastography (SWE) has been investigated as a complement to B-mode ultrasound for breast cancer diagnosis. Although multicenter trials suggest benefits for patients with Breast Imaging Reporting and Data System (BI-RADS) 4(a) breast masses, widespread adoption remains limited because of the absence of validated velocity thresholds. This study aims to develop and validate a deep learning (DL) model using SWE images (artificial intelligence [AI]-SWE) for BI-RADS 3 and 4 breast masses and compare its performance with human experts using B-mode ultrasound. We used data from an international, multicenter trial (ClinicalTrials.gov identifier: NCT02638935) evaluating SWE in women with BI-RADS 3 or 4 breast masses across 12 institutions in seven countries. Images from 11 sites were used to develop an EfficientNetB1-based DL model. An external validation was conducted using data from the 12th site. A second validation was performed on a separate institutional cohort using the latest SWE software. Performance metrics included sensitivity, specificity, false-positive reduction, and area under the receiver operating characteristic curve (AUROC). The development set included 924 patients (4,026 images); the external validation sets included 194 patients (562 images) and 176 patients (188 images, latest SWE software). AI-SWE achieved an AUROC of 0.94 (95% CI, 0.91 to 0.96) and 0.93 (95% CI, 0.88 to 0.98) in the two external validation sets. Compared with B-mode ultrasound, AI-SWE significantly reduced false-positive rates by 62.1% (20.4% [30/147] vs 53.8% [431/801]; P < .001) and 38.1% (33.3% [14/42] vs 53.8% [431/801]; P < .001), with comparable sensitivity (97.9% [46/47] and 97.8% [131/134] vs 98.1% [311/317]; P = .912 and P = .810). AI-SWE demonstrated accuracy comparable with that of human experts in malignancy detection while significantly reducing false-positive imaging findings (i.e., unnecessary biopsies). Future studies should explore its integration into multimodal breast cancer diagnostics.
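For readers reproducing the evaluation, the reported quantities reduce to standard confusion-matrix arithmetic. The sketch below computes AUROC, sensitivity, and the false-positive rate from model scores, using placeholder data and an illustrative operating threshold.

```python
# Sketch: the evaluation metrics reported above, computed from
# model scores. Labels, scores, and threshold are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 194)                                # placeholder labels
y_score = np.clip(y_true * 0.6 + rng.random(194) * 0.5, 0, 1)   # placeholder AI-SWE scores

auroc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)            # illustrative operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
fpr = fp / (fp + tn)                             # false positives drive unnecessary biopsies
print(f"AUROC={auroc:.2f}  sensitivity={sensitivity:.1%}  false-positive rate={fpr:.1%}")
```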

Evolution and integration of artificial intelligence across the cancer continuum in women: advances in risk assessment, prevention, and early detection.

Desai M, Desai B

PubMed · Aug 20, 2025
Artificial intelligence (AI) is revolutionizing the prevention and control of breast cancer by improving risk assessment, prevention, and early diagnosis. With an emphasis on AI applications across the breast cancer spectrum in women, this review summarizes developments, existing applications, and future prospects. We conducted an in-depth review of the literature on AI applications in breast cancer risk prediction, prevention, and early detection from 2000 to 2025, with particular emphasis on explainable AI (XAI), deep learning (DL), and machine learning (ML). We examined algorithmic fairness, model transparency, dataset representation, and clinical performance indicators. Compared with traditional methods, AI-based models consistently improved risk categorization, screening sensitivity, and early detection (AUCs ranging from 0.65 to 0.975). However, challenges remain in algorithmic bias, underrepresentation of minority populations, and limited external validation. Notably, 58% of public datasets focused on mammography, leaving gaps in modalities such as tomosynthesis and histopathology. AI technologies offer numerous opportunities for enhancing the diagnosis and treatment of breast cancer. However, transparent models, inclusive datasets, and standardized frameworks for explainability and external validation should be prioritized in subsequent studies to ensure equitable and effective implementation.

A Fully Transformer Based Multimodal Framework for Explainable Cancer Image Segmentation Using Radiology Reports

Enobong Adahada, Isabel Sassoon, Kate Hone, Yongmin Li

arXiv preprint · Aug 19, 2025
We introduce Med-CTX, a fully transformer-based multimodal framework for explainable breast cancer ultrasound segmentation. We integrate clinical radiology reports to boost both performance and interpretability. Med-CTX achieves precise lesion delineation by using a dual-branch visual encoder that combines ViT and Swin transformers, as well as uncertainty-aware fusion. Clinical language structured with BI-RADS semantics is encoded by BioClinicalBERT and combined with visual features utilising cross-modal attention, allowing the model to provide clinically grounded, model-generated explanations. Our methodology generates segmentation masks, uncertainty maps, and diagnostic rationales all at once, increasing confidence and transparency in computer-assisted diagnosis. On the BUS-BRA dataset, Med-CTX achieves a Dice score of 99% and an IoU of 95%, outperforming the existing baselines U-Net, ViT, and Swin. Clinical text plays a key role in segmentation accuracy and explanation quality, as evidenced by ablation studies that show a 5.4% decline in Dice score and a 31% decline in CIDEr when it is removed. Med-CTX achieves strong multimodal alignment (CLIP score: 85%) and improved confidence calibration (ECE: 3.2%), setting a new bar for trustworthy multimodal medical architectures.
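The fusion step described here, where visual tokens attend to encoded report text, can be sketched with a standard multi-head attention layer. The dimensions and the projected ViT/BioClinicalBERT token shapes below are illustrative stand-ins, not the paper's architecture.

```python
# Sketch of cross-modal attention fusion: image tokens query text
# tokens so segmentation features become conditioned on the report.
# Dimensions and encoder shapes are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens, text_tokens):
        # Queries from the image; keys/values from the report text
        fused, _ = self.attn(visual_tokens, text_tokens, text_tokens)
        return self.norm(visual_tokens + fused)   # residual connection

fusion = CrossModalFusion()
vis = torch.randn(2, 196, 256)    # e.g., 14x14 ViT patch tokens, projected to 256-d
txt = torch.randn(2, 64, 256)     # e.g., projected BioClinicalBERT token embeddings
out = fusion(vis, txt)            # (2, 196, 256) text-conditioned visual features
```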

Comparing Conditional Diffusion Models for Synthesizing Contrast-Enhanced Breast MRI from Pre-Contrast Images

Sebastian Ibarra, Javier del Riego, Alessandro Catanese, Julian Cuba, Julian Cardona, Nataly Leon, Jonathan Infante, Karim Lekadir, Oliver Diaz, Richard Osuala

arXiv preprint · Aug 19, 2025
Dynamic contrast-enhanced (DCE) MRI is essential for breast cancer diagnosis and treatment. However, its reliance on contrast agents introduces safety concerns, contraindications, increased cost, and workflow complexity. To address this, we present pre-contrast-conditioned denoising diffusion probabilistic models to synthesize DCE-MRI, introducing, evaluating, and comparing a total of 22 generative model variants in both single-breast and full-breast settings. Towards enhancing lesion fidelity, we introduce both tumor-aware loss functions and explicit tumor segmentation mask conditioning. Using a public multicenter dataset and comparing to respective pre-contrast baselines, we observe that subtraction image-based models consistently outperform post-contrast-based models across five complementary evaluation metrics. Apart from assessing the entire image, we also separately evaluate the region of interest, where both tumor-aware losses and segmentation mask inputs improve evaluation metrics. The latter notably enhance qualitative results capturing contrast uptake, albeit assuming access to tumor localization inputs that are not guaranteed to be available in screening settings. A reader study involving 2 radiologists and 4 MRI technologists confirms the high realism of the synthetic images, indicating an emerging clinical potential of generative contrast-enhancement. We share our codebase at https://github.com/sebastibar/conditional-diffusion-breast-MRI.
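A tumor-aware loss of the kind described can be sketched as a mask-weighted denoising objective: the diffusion loss is re-weighted inside the tumor segmentation mask so lesion regions contribute more. The weighting scheme and values below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a tumor-aware diffusion loss: per-pixel MSE on the noise
# prediction, up-weighted inside the lesion mask. Weight value and
# shapes are illustrative placeholders.
import torch
import torch.nn.functional as F

def tumor_aware_mse(pred_noise, true_noise, tumor_mask, tumor_weight=5.0):
    """pred_noise/true_noise: (B, C, H, W); tumor_mask: (B, 1, H, W) binary."""
    weights = 1.0 + (tumor_weight - 1.0) * tumor_mask   # 1 outside, 5 inside lesion
    per_pixel = F.mse_loss(pred_noise, true_noise, reduction="none")
    return (weights * per_pixel).mean()

pred = torch.randn(2, 1, 128, 128)                    # placeholder noise estimate
target = torch.randn(2, 1, 128, 128)                  # placeholder noise target
mask = (torch.rand(2, 1, 128, 128) > 0.95).float()    # placeholder tumor mask
loss = tumor_aware_mse(pred, target, mask)
```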

Overview of Multimodal Radiomics and Deep Learning in the Prediction of Axillary Lymph Node Status in Breast Cancer.

Zhao X, Wang M, Wei Y, Lu Z, Peng Y, Cheng X, Song J

PubMed · Aug 18, 2025
Breast cancer is the most prevalent malignancy in women, and the status of the axillary lymph nodes is a pivotal factor in treatment decision-making and prognostic evaluation. With the integration of deep learning algorithms, radiomics has become a transformative tool with increasingly extensive applications across imaging modalities, particularly in oncological imaging. Recent studies of radiomics and deep learning have demonstrated considerable potential for noninvasive diagnosis and prediction in breast cancer across multiple modalities (mammography, ultrasonography, MRI, and PET/CT), specifically for predicting axillary lymph node status. Although significant progress has been achieved in radiomics-based prediction of axillary lymph node metastasis in breast cancer, several methodological and technical challenges remain to be addressed. This comprehensive review incorporates a detailed analysis of the radiomics workflow and model-construction strategies. Its objective is to synthesize and evaluate current research findings, thereby providing valuable references for the precision diagnosis and assessment of axillary lymph node metastasis in breast cancer, while promoting development and advancement in this evolving field.

Craniocaudal Mammograms Generation using Image-to-Image Translation Techniques.

Piras V, Bonatti AF, De Maria C, Cignoni P, Banterle F

PubMed · Aug 18, 2025
Breast cancer is the leading cause of cancer death in women worldwide, emphasizing the need for prevention and early detection. Mammography screening plays a crucial role in secondary prevention, but large datasets of referred mammograms from hospital databases are hard to access due to privacy concerns, and publicly available datasets are often unreliable and unbalanced. We propose a novel workflow using a statistical generative model based on generative adversarial networks to generate high-resolution synthetic mammograms. Utilizing a unique 2D parametric model of the compressed breast in craniocaudal projection and image-to-image translation techniques, our approach allows full and precise control over breast features and the generation of both normal and tumor cases. Quality assessment was conducted through visual analysis and statistical analysis using the first five statistical moments. Additionally, a questionnaire was administered to 45 medical experts (radiologists and radiology residents). The results showed that the features of the real mammograms were accurately replicated in the synthetic ones, that the image statistics corresponded reasonably well overall, and that the two groups of images were statistically indistinguishable in almost all cases according to the experts. The proposed workflow generates realistic synthetic mammograms with fine-tuned features. Synthetic mammograms are powerful tools that can create new datasets or balance existing ones, allowing for the training of machine learning and deep learning algorithms. These algorithms can then assist radiologists in tasks like classification and segmentation, improving diagnostic performance. The code and dataset are available at: https://github.com/cnr-isti-vclab/CC-Mammograms-Generation_GUI.
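The image-to-image translation step can be sketched as a pix2pix-style objective: a generator maps the parametric breast-model rendering to a realistic mammogram while a discriminator judges (input, output) pairs. The toy architectures and loss weights below are illustrative stand-ins, not the authors' implementation.

```python
# Sketch of a paired image-to-image translation training step.
# Toy generator/discriminator stand in for the U-Net/PatchGAN
# architectures typically used; all shapes/weights are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(                       # toy generator
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                       # toy PatchGAN-style discriminator
    nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

param_img = torch.randn(4, 1, 128, 128)   # parametric breast-model rendering
real_mammo = torch.randn(4, 1, 128, 128)  # paired real mammogram (placeholder)

fake = G(param_img)
d_fake = D(torch.cat([param_img, fake], dim=1))   # condition D on the input
# Generator objective: fool the discriminator + stay close to the paired target
g_loss = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, real_mammo)
g_loss.backward()
```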
