Page 6 of 23225 results

Establishment of an interpretable MRI radiomics-based machine learning model capable of predicting axillary lymph node metastasis in invasive breast cancer.

Zhang D, Shen M, Zhang L, He X, Huang X

pubmed logopapers · Jul 18 2025
This study sought to develop a radiomics model capable of predicting axillary lymph node metastasis (ALNM) in patients with invasive breast cancer (IBC) based on dual-sequence magnetic resonance imaging (MRI) of diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) data. The interpretability of the resultant model was probed with the SHAP (Shapley Additive Explanations) method. Established inclusion/exclusion criteria were used to retrospectively compile MRI and matching clinical data from 183 patients with pathologically confirmed IBC from our hospital evaluated between June 2021 and December 2023. All of these patients had undergone plain and enhanced MRI scans prior to treatment. These patients were separated according to their pathological biopsy results into those with ALNM (n = 107) and those without ALNM (n = 76). These patients were then randomized into training (n = 128) and testing (n = 55) cohorts at a 7:3 ratio. Optimal radiomics features were selected from the extracted data. The random forest method was used to establish three predictive models (DWI, DCE, and combined DWI + DCE sequence models). Area under the curve (AUC) values for receiver operating characteristic (ROC) curves were utilized to assess model performance. The DeLong test was utilized to compare model predictive efficacy. Model discrimination was assessed with the integrated discrimination improvement (IDI) method. Decision curves revealed net clinical benefits for each of these models. The SHAP method was used to interpret the best-performing model. Clinicopathological characteristics (age, menopausal status, molecular subtypes, and estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2, and Ki-67 status) were comparable between the ALNM and non-ALNM groups as well as between the training and testing cohorts (P > 0.05). 
AUC values for the DWI, DCE, and combined models in the training cohort were 0.793, 0.774, and 0.864, respectively, with corresponding values of 0.728, 0.760, and 0.859 in the testing cohort. The predictive efficacy of the DWI and combined models was found to differ significantly according to the DeLong test, as did the predictive efficacy of the DCE and combined models in the training groups (P < 0.05), while no other significant differences were noted in model performance (P > 0.05). IDI results indicated that the combined model offered predictive power levels that were 13.5% (P < 0.05) and 10.2% (P < 0.05) higher than those for the respective DWI and DCE models. In a decision curve analysis, the combined model offered a net clinical benefit over the DCE model. The combined dual-sequence MRI-based radiomics model constructed herein and the supporting interpretability analyses can aid in the prediction of the ALNM status of IBC patients, helping to guide clinical decision-making in these cases.
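The IDI statistic used above has a simple closed form: the change in mean predicted risk among events (ALNM-positive cases) minus the change among non-events, comparing the new model to the old one. A minimal sketch with made-up probabilities, not the study's data:

```python
def integrated_discrimination_improvement(p_old, p_new, labels):
    """IDI = (mean risk gain among events) - (mean risk gain among non-events)."""
    mean = lambda xs: sum(xs) / len(xs)
    ev_gain = mean([n for n, y in zip(p_new, labels) if y == 1]) - \
              mean([o for o, y in zip(p_old, labels) if y == 1])
    ne_gain = mean([n for n, y in zip(p_new, labels) if y == 0]) - \
              mean([o for o, y in zip(p_old, labels) if y == 0])
    return ev_gain - ne_gain

# Hypothetical probabilities from a single-sequence vs. a combined model:
idi = integrated_discrimination_improvement(
    p_old=[0.6, 0.4, 0.3, 0.2], p_new=[0.8, 0.6, 0.2, 0.1], labels=[1, 1, 0, 0])
```

A positive IDI means the new model assigns higher risk to events and lower risk to non-events, which is the pattern the combined model shows over the single-sequence models.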

Are Vision Foundation Models Ready for Out-of-the-Box Medical Image Registration?

Hanxue Gu, Yaqian Chen, Nicholas Konz, Qihang Li, Maciej A. Mazurowski

arxiv logopreprint · Jul 15 2025
Foundation models, pre-trained on large image datasets and capable of capturing rich feature representations, have recently shown potential for zero-shot image registration. However, their performance has mostly been tested in the context of rigid or less complex structures, such as the brain or abdominal organs, and it remains unclear whether these models can handle more challenging, deformable anatomy. Breast MRI registration is particularly difficult due to significant anatomical variation between patients, deformation caused by patient positioning, and the presence of thin and complex internal structures of fibroglandular tissue, where accurate alignment is crucial. Whether foundation model-based registration algorithms can address this level of complexity remains an open question. In this study, we provide a comprehensive evaluation of foundation model-based registration algorithms for breast MRI. We assess five pre-trained encoders, including DINO-v2, SAM, MedSAM, SSLSAM, and MedCLIP, across four key breast registration tasks that capture variations in different years and dates, sequences, modalities, and patient disease status (lesion versus no lesion). Our results show that foundation model-based algorithms such as SAM outperform traditional registration baselines for overall breast alignment, especially under large domain shifts, but struggle with capturing fine details of fibroglandular tissue. Interestingly, additional pre-training or fine-tuning on medical or breast-specific images in MedSAM and SSLSAM does not improve registration performance and may even decrease it in some cases. Further work is needed to understand how domain-specific training influences registration and to explore targeted strategies that improve both global alignment and fine structure accuracy. We also publicly release our code at https://github.com/mazurowski-lab/Foundation-based-reg.
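Zero-shot registration pipelines of this kind typically extract patch descriptors with the frozen encoder and then match them across the two images before fitting a transform. A minimal numpy sketch of the matching step only (descriptor values are synthetic; the encoder is assumed upstream and is not part of this sketch):

```python
import numpy as np

def match_descriptors(feat_a, feat_b):
    """Nearest-neighbour matching of L2-normalised patch descriptors by cosine similarity."""
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    sim = a @ b.T               # (Na, Nb) cosine-similarity matrix
    return sim.argmax(axis=1)   # best match in b for each descriptor in a
```

The matched pairs would then feed a transform estimator (rigid, affine, or deformable); a robust pipeline would also filter matches, e.g. by mutual nearest neighbours.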

A literature review of radio-genomics in breast cancer: Lessons and insights for low and middle-income countries.

Mooghal M, Shaikh K, Shaikh H, Khan W, Siddiqui MS, Jamil S, Vohra LM

pubmed logopapers · Jul 15 2025
To improve precision medicine in breast cancer (BC) decision-making, radio-genomics is an emerging branch of artificial intelligence (AI) that links cancer characteristics assessed radiologically with the histopathology and genomic properties of the tumour. By employing MRIs, mammograms, and ultrasounds to uncover distinctive radiomics traits that potentially predict genomic abnormalities, this review attempts to find literature that links AI-based models with the genetic mutations discovered in BC patients. The review's findings can be used to create AI-based population models for low and middle-income countries (LMIC) and evaluate how well they predict outcomes for our cohort. Magnetic resonance imaging (MRI) appears to be the modality employed most frequently to research radio-genomics in BC patients in our systematic review. According to the papers we analysed, AI can identify genetic markers and mutations linked to imaging traits, such as tumour size, shape, and enhancement patterns, as well as clinical outcomes of treatment response, disease progression, and survival. The use of radio-genomics can help LMICs overcome some of the barriers that keep the general population from accessing high-quality cancer care, thereby improving health outcomes for BC patients in these regions. It is imperative to ensure that emerging technologies are used responsibly, in a way that is accessible to and affordable for all patients, regardless of their socio-economic condition.

Learning quality-guided multi-layer features for classifying visual types with ball sports application.

Huang X, Liu T, Yu Y

pubmed logopapers · Jul 15 2025
Nowadays, breast cancer is one of the leading causes of death among women. This highlights the need for precise X-ray image analysis in the medical and imaging fields. In this study, we present an advanced perceptual deep learning framework that extracts key features from large X-ray datasets, mimicking human visual perception. We begin by using a large dataset of breast cancer images and apply the BING objectness measure to identify relevant visual and semantic patches. To manage the large number of object-aware patches, we propose a new ranking technique in the weak annotation context. This technique identifies the patches that are most aligned with human visual judgment. These key patches are then aggregated to extract meaningful features from each image. We leverage these features to train a multi-class SVM classifier, which categorizes the images into various breast cancer stages. The effectiveness of our deep learning model is demonstrated through extensive comparative analysis and visual examples.
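The rank-then-aggregate step described above can be sketched generically: score the object-aware patches, keep the top-k, and pool their features into one image-level vector. The function name, mean pooling, and toy values below are assumptions, not the paper's exact ranking technique:

```python
import numpy as np

def aggregate_top_patches(patch_feats, objectness, k=3):
    """Rank object-aware patches by objectness score and mean-pool the top-k features."""
    top = np.argsort(objectness)[::-1][:k]   # indices of the k highest-scoring patches
    return patch_feats[top].mean(axis=0)     # one image-level feature vector
```

The resulting per-image vectors would then be the inputs to the multi-class SVM stage.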

Enhancing breast positioning quality through real-time AI feedback.

Sexauer R, Riehle F, Borkowski K, Ruppert C, Potthast S, Schmidt N

pubmed logopapers · Jul 15 2025
Enhance mammography quality to increase cancer detection by implementing continuous AI-driven feedback mechanisms, ensuring reliable, consistent, and high-quality screening according to the 'Perfect', 'Good', 'Moderate', and 'Inadequate' (PGMI) criteria. To assess the impact of the AI software 'b-box™' on mammography quality, we conducted a comparative analysis of PGMI scores. We evaluated scores 50 days before (A) and after the software's implementation in 2021 (B), along with assessments made in the first week of August 2022 (C1) and 2023 (C2), comparing them to evaluations conducted by two readers. Except for postsurgical patients, we included all diagnostic and screening mammograms from one tertiary hospital. A total of 4577 mammograms from 1220 women (mean age: 59, range: 21-94, standard deviation: 11.18) were included. 1728 images were obtained before (A) and 2330 images after the 2021 software implementation (B), along with 269 images in 2022 (C1) and 250 images in 2023 (C2). The results indicated a significant improvement in diagnostic image quality (p < 0.01). The percentage of 'Perfect' examinations rose from 22.34% to 32.27%, while 'Inadequate' images decreased from 13.31% to 5.41% in 2021, continuing the positive trend with 4.46% and 3.20% 'Inadequate' images in 2022 and 2023, respectively (p < 0.01). Using a reliable software platform to perform AI-driven quality evaluation in real time has the potential to make lasting improvements in image quality, support radiographers' professional growth, and elevate institutional quality standards and documentation simultaneously. Question How can AI-powered quality assessment reduce inadequate mammographic quality, which is known to impact sensitivity and increase the risk of interval cancers? Findings AI implementation decreased 'Inadequate' mammograms from 13.31% to 3.20% and substantially improved parenchyma visualization, with consistent subgroup trends. 
Clinical relevance By reducing 'inadequate' mammograms and enhancing imaging quality, AI-driven tools improve diagnostic reliability and support better outcomes in breast cancer screening.
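As a sanity check on the headline drop in 'Inadequate' images, a pooled two-proportion z-test on counts back-calculated from the reported rates and cohort sizes (approximate, illustrative only; the paper's own test may differ) yields a z-score far beyond conventional significance thresholds:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)               # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Approximate counts: 13.31% of 1728 pre-implementation images vs. 5.41% of 2330 after.
z = two_proportion_z(230, 1728, 126, 2330)
```

A z-score of this magnitude is consistent with the paper's reported p < 0.01.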

ESE and Transfer Learning for Breast Tumor Classification.

He Y, Batumalay M, Thinakaran R

pubmed logopapers · Jul 14 2025
In this study, we proposed a lightweight neural network architecture based on an inverted residual network, an efficient squeeze excitation (ESE) module, and double transfer learning, called TLese-ResNet, for breast cancer molecular subtype recognition. The inverted ResNet reduces the number of network parameters while enhancing cross-layer gradient propagation and feature expression capabilities. The ESE module reduces network complexity while preserving channel-wise relationships. The dataset of this study comes from the mammography images of patients diagnosed with invasive breast cancer in a hospital in Jiangxi. The dataset comprises preoperative mammography images with CC and MLO views. Given that the dataset is somewhat small, in addition to the commonly used data augmentation methods, double transfer learning is also used. Double transfer learning includes a first transfer, in which the source domain is ImageNet and the target domain is a COVID-19 chest X-ray image dataset, and a second transfer, in which the source domain is the target domain of the first transfer and the target domain is the mammography dataset we collected. Using five-fold cross-validation, the mean accuracy and area under the receiver operating characteristic curve (AUC) on mammographic images of CC and MLO views were 0.818 and 0.883, respectively, outperforming other state-of-the-art deep learning-based models such as ResNet-50 and DenseNet-121. Therefore, the proposed model can provide clinicians with an effective and non-invasive auxiliary tool for molecular subtype identification of breast cancer.
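The effective squeeze-excitation idea replaces the usual two-layer squeeze-excitation bottleneck with a single fully connected layer acting on globally pooled channel statistics, which is where the complexity saving comes from. A numpy sketch of one such block (the weight shapes and names are illustrative, not the paper's exact layer):

```python
import numpy as np

def ese_block(x, w, b):
    """Effective squeeze-excitation: one FC layer instead of the usual bottleneck pair.
    x: (C, H, W) feature map; w: (C, C) weights; b: (C,) bias (illustrative shapes)."""
    s = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    a = 1.0 / (1.0 + np.exp(-(w @ s + b)))   # excite: single FC + sigmoid -> channel gates
    return x * a[:, None, None]              # re-scale each channel by its gate
```

Compared with the standard SE bottleneck (reduce then expand), the single layer keeps a full C-to-C channel interaction while halving the FC cost.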

Early breast cancer detection via infrared thermography using a CNN enhanced with particle swarm optimization.

Alzahrani RM, Sikkandar MY, Begum SS, Babetat AFS, Alhashim M, Alduraywish A, Prakash NB, Ng EYK

pubmed logopapers · Jul 13 2025
Breast cancer remains the most prevalent cause of cancer-related mortality among women worldwide, with an estimated incidence exceeding 500,000 new cases annually. Timely diagnosis is vital for enhancing therapeutic outcomes and increasing survival probabilities. Although conventional diagnostic tools such as mammography are widely used and generally effective, they are often invasive, costly, and exhibit reduced efficacy in patients with dense breast tissue. Infrared thermography, by contrast, offers a non-invasive and economical alternative; however, its clinical adoption has been limited, largely due to difficulties in accurate thermal image interpretation and the suboptimal tuning of machine learning algorithms. To overcome these limitations, this study proposes an automated classification framework that employs convolutional neural networks (CNNs) for distinguishing between malignant and benign thermographic breast images. An Enhanced Particle Swarm Optimization (EPSO) algorithm is integrated to automatically fine-tune CNN hyperparameters, thereby minimizing manual effort and enhancing computational efficiency. The methodology also incorporates advanced image preprocessing techniques-including Mamdani fuzzy logic-based edge detection, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, and median filtering for noise suppression-to bolster classification performance. The proposed model achieves a superior classification accuracy of 98.8%, significantly outperforming conventional CNN implementations in terms of both computational speed and predictive accuracy. These findings suggest that the developed system holds substantial potential for early, reliable, and cost-effective breast cancer screening in real-world clinical environments.
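The EPSO component is not specified in detail in the abstract; a plain particle swarm loop over a hyperparameter box conveys the idea, with a toy quadratic standing in for the CNN's validation error. All names, constants, and the (learning rate, dropout) parameterisation below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(objective, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimisation over a box; a stand-in for the paper's EPSO."""
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()     # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)     # keep particles inside the search box
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy surrogate for validation error as a function of (learning rate, dropout):
best, val = pso_minimize(lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.3) ** 2,
                         bounds=[(1e-4, 0.1), (0.0, 0.8)])
```

In the real pipeline each `objective` call would train and validate a CNN, so the swarm size and iteration budget trade tuning quality against compute.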

Breast lesion classification via colorized mammograms and transfer learning in a novel CAD framework.

Hussein AA, Valizadeh M, Amirani MC, Mirbolouk S

pubmed logopapers · Jul 11 2025
Medical imaging sciences and diagnostic techniques for Breast Cancer (BC) imaging have advanced tremendously, particularly with the use of mammography images; however, radiologists may still misinterpret medical images of the breast, resulting in limitations and flaws in the screening process. As a result, Computer-Aided Diagnosis (CAD) systems have become increasingly popular due to their ability to operate independently of human analysis. Current CAD systems use grayscale analysis, which lacks the contrast needed to differentiate benign from malignant lesions. As part of this study, an innovative CAD system is presented that transforms standard grayscale mammography images into RGB color through a three-path preprocessing framework developed for noise reduction, lesion highlighting, and tumor-centric intensity adjustment using a data-driven transfer function. In contrast to a generic approach, this approach statistically tailors colorization to emphasize malignant regions, thus enhancing the ability of both machines and humans to recognize cancerous areas. As a consequence of this conversion, breast tumors with anomalies become more visible, which allows us to extract more accurate features about them. In a subsequent step, Machine Learning (ML) algorithms are employed to classify these tumors as malignant or benign. A pre-trained model is developed to extract comprehensive features from colored mammography images by employing this approach. A variety of techniques are implemented in the preprocessing stage to minimize noise and improve image perception; however, the most challenging methodology is the application of creative techniques to adjust pixel intensity values in mammography images using a data-driven transfer function derived from tumor intensity histograms. This adjustment serves to draw attention to tumors while reducing the brightness of other areas in the breast image. 
Metrics such as accuracy, sensitivity, specificity, precision, F1-score, and Area Under the Curve (AUC) are used to evaluate the efficacy of the employed methodologies. This work employed and tested a variety of pre-training and ML techniques. The combination of EfficientNetB0 pre-training with a Support Vector Machine (SVM) classifier produced optimal results, with accuracy, sensitivity, specificity, precision, F1-score, and AUC of 99.4%, 98.7%, 99.1%, 99%, 98.8%, and 100%, respectively. These results show that the developed method not only advances the state of the art in technical terms but also provides radiologists with a practical tool to aid in reducing diagnostic errors and increasing early breast cancer detection.
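All of the reported metrics derive from a single binary confusion matrix, so their definitions are worth making explicit. A small sketch (the counts in the example are made up, not the paper's):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a binary confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)              # recall on the malignant class
    specificity = tn / (tn + fp)              # recall on the benign class
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

acc, sens, spec, prec, f1 = classification_metrics(tp=8, fp=2, tn=9, fn=1)
```

AUC, by contrast, is threshold-free and is computed from the ranking of predicted scores rather than from one confusion matrix.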

Attention-based multimodal deep learning for interpretable and generalizable prediction of pathological complete response in breast cancer.

Nishizawa T, Maldjian T, Jiao Z, Duong TQ

pubmed logopapers · Jul 10 2025
Accurate prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) has significant clinical utility in the management of breast cancer treatment. Although multimodal deep learning models have shown promise for predicting pCR from medical imaging and other clinical data, their adoption has been limited due to challenges with interpretability and generalizability across institutions. We developed a multimodal deep learning model combining post contrast-enhanced whole-breast MRI at pre- and post-treatment timepoints with non-imaging clinical features. The model integrates 3D convolutional neural networks and self-attention to capture spatial and cross-modal interactions. We utilized two public multi-institutional datasets to perform internal and external validation of the model. For model training and validation, we used data from the I-SPY 2 trial (N = 660). For external validation, we used the I-SPY 1 dataset (N = 114). Of the 660 patients in I-SPY 2, 217 patients achieved pCR (32.88%). Of the 114 patients in I-SPY 1, 29 achieved pCR (25.44%). The attention-based multimodal model yielded the best predictive performance with an AUC of 0.73 ± 0.04 on the internal data and an AUC of 0.71 ± 0.02 on the external dataset. The MRI-only model (internal AUC = 0.68 ± 0.03, external AUC = 0.70 ± 0.04) and the non-MRI clinical features-only model (internal AUC = 0.66 ± 0.08, external AUC = 0.71 ± 0.03) trailed in performance, indicating the combination of both modalities is most effective. We present a robust and interpretable deep learning framework for pCR prediction in breast cancer patients undergoing NAC. By combining imaging and clinical data with attention-based fusion, the model achieves strong predictive performance and generalizes across institutions.
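One common form of the attention-based fusion described here is to use the clinical feature vector as a query over image-patch tokens via scaled dot-product attention, then concatenate the attended image summary with the clinical vector. A numpy sketch with a single head and no learned projections (both simplifications relative to any real model):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())                 # shift for numerical stability
    return e / e.sum()

def attention_fuse(img_tokens, clin_vec):
    """Clinical vector queries (N, d) image tokens; returns a (2d,) fused vector."""
    d = clin_vec.shape[0]
    scores = img_tokens @ clin_vec / np.sqrt(d)   # relevance of each token to the clinical query
    weights = softmax(scores)
    context = weights @ img_tokens                # attention-weighted image summary (d,)
    return np.concatenate([context, clin_vec])    # fused representation (2d,)
```

The attention weights themselves are a natural interpretability hook: they indicate which image regions the clinical context pulled into the fused representation.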

A two-stage dual-task learning strategy for early prediction of pathological complete response to neoadjuvant chemotherapy for breast cancer using dynamic contrast-enhanced magnetic resonance images.

Jing B, Wang J

pubmed logopapers · Jul 10 2025
Early prediction of treatment response can facilitate personalized treatment for breast cancer patients. Studies on the I-SPY 2 clinical trial demonstrate that multi-time point dynamic contrast-enhanced magnetic resonance (DCEMR) imaging improves the accuracy of predicting pathological complete response (pCR) to chemotherapy. However, previous image-based prediction models usually rely on mid- or post-treatment images to ensure the accuracy of prediction, which may outweigh the benefit of response-based adaptive treatment strategy. Accurately predicting the pCR at the early time point is desired yet remains challenging. To improve prediction accuracy at the early time point of treatment, we proposed a two-stage dual-task learning strategy to train a deep neural network for early prediction using only early-treatment data. We developed and evaluated our proposed method using the I-SPY 2 dataset, which included DCEMR images acquired at three time points: pretreatment (T0), after 3 weeks (T1) and 12 weeks of treatment (T2). At the first stage, we trained a convolutional long short-term memory (LSTM) model using all the data to predict pCR and extract the latent space image representation at T2. At the second stage, we trained a dual-task model to simultaneously predict pCR and the image representation at T2 using images from T0 and T1. This allowed us to predict pCR earlier without using images from T2. By using the conventional single-stage single-task strategy, the area under the receiver operating characteristic curve (AUROC) was 0.799. By using the proposed two-stage dual-task learning strategy, the AUROC was improved to 0.820. Our proposed two-stage dual-task learning strategy can improve model performance significantly (p=0.0025) for predicting pCR at the early time point (3rd week) of neoadjuvant chemotherapy for high-risk breast cancer patients. 
The early prediction model can potentially help physicians to intervene early and develop personalized plans at the early stage of chemotherapy.
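The stage-two objective can be read as a weighted sum of two losses: binary cross-entropy on pCR plus a regression term pulling the early-time-point representation toward the stage-one (T2) representation. A sketch under that reading, with the MSE form and the `alpha` weight as assumptions:

```python
import numpy as np

def dual_task_loss(pcr_prob, pcr_label, pred_repr, target_repr, alpha=0.5):
    """Classification loss on pCR plus distillation of the (teacher) T2 representation."""
    eps = 1e-7                                           # guard against log(0)
    bce = -(pcr_label * np.log(pcr_prob + eps)
            + (1 - pcr_label) * np.log(1 - pcr_prob + eps))
    mse = np.mean((pred_repr - target_repr) ** 2)        # representation-matching term
    return bce + alpha * mse

loss = dual_task_loss(0.5, 1, np.array([1.0, 2.0]), np.array([1.0, 2.0]))
```

At inference only the pCR head is needed, so the T2 images are never required after training, which is what enables prediction at week 3.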