Page 8 of 23225 results

Potential Time and Recall Benefits for Adaptive AI-Based Breast Cancer MRI Screening.

Balkenende L, Ferm J, van Veldhuizen V, Brunekreef J, Teuwen J, Mann RM

PubMed, Jul 7 2025
Abbreviated breast MRI protocols are advocated for breast screening because they limit acquisition duration and increase resource availability. However, radiologists' specificity may be slightly lowered when only such short protocols are evaluated. An adaptive approach, in which a full protocol is performed only when abnormalities are detected by artificial intelligence (AI)-based models in the abbreviated protocol, might improve and speed up MRI screening. This study explores the potential benefits of such an approach, assessing the potential impact of adaptive breast MRI scanning based on AI detection of malignancies (study type: mathematical model; population: breast cancer screening protocols).
Assessment:
Theoretical upper and lower limits on expected protocol duration and recall rate were determined for the adaptive approach, and the influence of the AI model's and radiologists' performance metrics on these limits was assessed, under the assumption that any finding on the abbreviated protocol would, in an ideal follow-up scenario, prompt a second MRI with the full protocol. The most likely scenario was also estimated.
Main results:
The theoretical limits for the proposed adaptive AI-based MRI breast cancer screening showed that the recall rates of the abbreviated and full screening protocols always constrained the adaptive protocol's recall rate. The abbreviated and full protocols did not fully constrain the expected protocol duration, so an adaptive protocol's expected duration could be shorter than the abbreviated protocol's duration. Specificity, whether of the AI models or of the radiologists, had the largest effect on the theoretical limits. In the most likely scenario, the adaptive protocol reduced expected protocol duration by roughly 47%-60% compared with the full protocol.
Significance:
The proposed adaptive approach may shorten expected protocol duration compared with the full protocol alone, while achieving a lower recall rate than an abbreviated-only approach. Optimal performance was observed when AI models emulated radiologists' decision-making behavior rather than focusing solely on near-perfect malignancy detection. (Evidence level: not applicable; technical efficacy: Stage 6.)
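The duration trade-off described above can be illustrated with a toy expected-value model: under a simple triage rule, only AI-positive abbreviated scans proceed to the full protocol. All durations and positive-call rates below are illustrative assumptions, not values from the study.

```python
def expected_duration(t_abbr, t_full, p_positive):
    """Expected scan time when only AI-positive abbreviated scans
    proceed to the full protocol (toy model, assumed numbers)."""
    return t_abbr + p_positive * t_full

t_abbr, t_full = 3.0, 15.0            # minutes; illustrative, not study values
for p in (0.05, 0.15, 0.30):          # assumed AI positive-call rates
    e = expected_duration(t_abbr, t_full, p)
    print(f"positive rate {p:.0%}: {e:.2f} min "
          f"({1 - e / t_full:.0%} shorter than full protocol)")
```

The sketch makes the paper's qualitative point visible: the expected duration is driven less by the two fixed protocol lengths than by how often the AI triggers the full protocol.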

Development and validation of an improved volumetric breast density estimation model using the ResNet technique.

Asai Y, Yamamuro M, Yamada T, Kimura Y, Ishii K, Nakamura Y, Otsuka Y, Kondo Y

PubMed, Jul 7 2025

Temporal changes in volumetric breast density (VBD) may serve as prognostic biomarkers for predicting the risk of future breast cancer development. However, accurately measuring VBD from archived X-ray mammograms remains challenging. In a previous study, we proposed a method to estimate volumetric breast density using imaging parameters (tube voltage, tube current, and exposure time) and patient age. This approach, based on a multiple regression model, achieved a determination coefficient (R²) of 0.868. 
Approach:
In this study, we developed and applied two machine learning models, Random Forest and XGBoost, and a deep learning model, Residual Network (ResNet), to the same dataset. Model performance was assessed using several metrics: the determination coefficient, correlation coefficient, root mean square error, mean absolute error, root mean square percentage error, and mean absolute percentage error. Five-fold cross-validation was conducted to ensure robust validation.
Main results:
The best-performing fold yielded R² values of 0.895, 0.907, and 0.918 for Random Forest, XGBoost, and ResNet, respectively, all surpassing the previous study's result. ResNet consistently achieved the lowest error values across all metrics.
Significance:
These findings suggest that ResNet accomplished the task of accurately determining VBD from past mammograms, a task that had not been realised to date. We are confident that this achievement will advance research aimed at predicting future breast cancer risk by enabling high-accuracy time-series analyses of retrospective VBD.
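As a hedged illustration of the evaluation protocol above, the sketch below runs five-fold cross-validation on synthetic data and reports the same kinds of error metrics. The four feature stand-ins (for tube voltage, tube current, exposure time, and age) and the choice of a Random Forest regressor are assumptions for demonstration, not the study's data or code.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import (r2_score, mean_squared_error,
                             mean_absolute_error,
                             mean_absolute_percentage_error)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # stand-ins for kVp, mA, time, age
y = X @ [0.5, -0.3, 0.2, 0.4] + rng.normal(scale=0.1, size=200) + 5.0

r2s = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(random_state=0).fit(X[train], y[train])
    pred = model.predict(X[test])
    r2s.append(r2_score(y[test], pred))
    rmse = mean_squared_error(y[test], pred) ** 0.5   # root mean square error
    mae = mean_absolute_error(y[test], pred)
    mape = mean_absolute_percentage_error(y[test], pred)
print(f"mean R2 {np.mean(r2s):.3f} | last fold: "
      f"RMSE {rmse:.3f}, MAE {mae:.3f}, MAPE {mape:.2%}")
```

Reporting per-fold metrics rather than a single split is what makes the comparison between models in the abstract robust to a lucky partition.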

PGMI assessment in mammography: AI software versus human readers.

Santner T, Ruppert C, Gianolini S, Stalheim JG, Frei S, Hondl M, Fröhlich V, Hofvind S, Widmann G

PubMed, Jul 5 2025
The aim of this study was to evaluate human inter-reader agreement on the parameters included in the PGMI (perfect-good-moderate-inadequate) classification of screening mammograms and to explore the role of artificial intelligence (AI) as an alternative reader. Five radiographers from three European countries independently performed a PGMI assessment of 520 anonymized mammography screening examinations randomly selected from representative subsets from 13 imaging centres within two European countries. A dedicated AI software served as a sixth reader. Accuracy, Cohen's kappa, and confusion matrices were calculated to compare the predictions of the software against the individual assessments of the readers, as well as potential discrepancies between them. A questionnaire and a personality test were used to better understand the decision-making processes of the human readers. Significant inter-reader variability among human readers, with poor to moderate agreement (κ = -0.018 to κ = 0.41), was observed; some readers showed more homogeneous interpretations of single features and overall quality than others. In comparison, the software surpassed human inter-reader agreement in detecting glandular tissue cuts, mammilla deviation, pectoral muscle detection, and pectoral angle measurement, while the remaining features and overall image quality showed performance comparable to human assessment. Notably, human inter-reader disagreement in PGMI assessment of mammograms is considerably high. AI software may already reliably categorize image quality. Its potential for standardization and immediate feedback to achieve and monitor high quality in screening programs needs further attention and should be included in future approaches. AI has promising potential for automated assessment of diagnostic image quality: faster, more representative, and more objective feedback may support radiographers in their quality management processes.
Direct transformation of common PGMI workflows into an AI algorithm could be challenging.
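The agreement statistic underlying these comparisons, Cohen's kappa, corrects raw agreement for chance and can be computed directly. The PGMI ratings below are invented examples, not study data.

```python
from sklearn.metrics import cohen_kappa_score

labels = ["P", "G", "M", "I"]                      # perfect/good/moderate/inadequate
reader_a = ["P", "G", "G", "M", "I", "P", "G", "M"]
reader_b = ["P", "G", "M", "M", "I", "G", "G", "M"]

# Raw agreement here is 6/8 = 0.75; kappa discounts the agreement
# expected by chance from each reader's label frequencies.
kappa = cohen_kappa_score(reader_a, reader_b, labels=labels)
print(f"Cohen's kappa: {kappa:.3f}")
```

Values near 0 indicate chance-level agreement, which is why the study's κ = -0.018 for some reader pairs is such a striking finding.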

Multi-modality radiomics diagnosis of breast cancer based on MRI, ultrasound and mammography.

Wu J, Li Y, Gong W, Li Q, Han X, Zhang T

PubMed, Jul 4 2025
To develop a multi-modality machine learning-based radiomics model utilizing magnetic resonance imaging (MRI), ultrasound (US), and mammography (MMG) for the differentiation of benign and malignant breast nodules. This study retrospectively collected data from 204 patients across three hospitals, including MRI, US, and MMG imaging data along with confirmed pathological diagnoses. Lesions on 2D US, 2D MMG, and 3D MRI images were outlined as regions of interest, which were then automatically expanded outward by 3 mm, 5 mm, and 8 mm to extract radiomic features within and around the tumor. ANOVA, the minimum-redundancy maximum-relevance (mRMR) algorithm, and the least absolute shrinkage and selection operator (LASSO) were used to select features for breast cancer diagnosis through logistic regression analysis. The performance of the radiomics models was evaluated using receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curves. Among the various radiomics models tested, the MRI_US_MMG multi-modality logistic regression model with 5 mm peritumoral features performed best: in the test cohort, it achieved an AUC of 0.905 (95% confidence interval [CI]: 0.805-1). These results suggest that the inclusion of peritumoral features, specifically at a 5 mm expansion, significantly enhanced the diagnostic efficiency of the multi-modality radiomics model in differentiating benign from malignant breast nodules. The multi-modality radiomics model based on MRI, ultrasound, and mammography can predict benign and malignant breast lesions.
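A minimal sketch of a feature-selection chain of this kind: ANOVA filtering followed by LASSO, then logistic regression. The mRMR step is omitted (scikit-learn has no built-in implementation), and the synthetic "radiomic" features, cohort size, and all parameters are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(204, 100))            # 204 patients x 100 mock features
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=204) > 0).astype(int)

# Stage 1: ANOVA F-test keeps the 20 most class-associated features.
X_anova = SelectKBest(f_classif, k=20).fit_transform(X, y)
# Stage 2: LASSO zeroes out redundant coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X_anova, y)
keep = lasso.coef_ != 0
# Stage 3: logistic regression on the surviving features.
clf = LogisticRegression().fit(X_anova[:, keep], y)
auc = roc_auc_score(y, clf.predict_proba(X_anova[:, keep])[:, 1])
print(f"{keep.sum()} features kept, in-sample AUC {auc:.2f}")
```

The staged design matters with radiomics: hundreds of correlated features against ~200 patients overfit badly without aggressive selection before the final classifier.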

A Multimodal Ultrasound-Driven Approach for Automated Tumor Assessment with B-Mode and Multi-Frequency Harmonic Motion Images.

Hu S, Liu Y, Wang R, Li X, Konofagou EE

PubMed, Jul 4 2025
Harmonic Motion Imaging (HMI) is an ultrasound elasticity imaging method that measures the mechanical properties of tissue using amplitude-modulated acoustic radiation force (AM-ARF). Multi-frequency HMI (MF-HMI) excites tissue at various AM frequencies simultaneously, allowing for image optimization without prior knowledge of inclusion size and stiffness. However, challenges remain in size estimation, as inconsistent boundary effects result in different perceived sizes across AM frequencies. Herein, we developed an automated assessment method for tumors and focused ultrasound surgery (FUS)-induced lesions using a transformer-based multi-modality neural network, HMINet, and further automated neoadjuvant chemotherapy (NACT) response prediction. HMINet was trained on 380 pairs of MF-HMI and B-mode images of phantoms and in vivo orthotopic breast cancer mice (4T1). Test datasets included phantoms (n = 32), in vivo 4T1 mice (n = 24), breast cancer patients (n = 20), and FUS-induced lesions in ex vivo animal tissue and in vivo clinical settings with real-time inference, with average segmentation accuracy (Dice) of 0.91, 0.83, 0.80, and 0.81, respectively. HMINet outperformed state-of-the-art models; we also demonstrated the enhanced robustness of the multi-modality strategy over B-mode-only input, both quantitatively through Dice scores and in terms of interpretability using saliency analysis. Ranking AM frequencies by their number of salient pixels showed that the most informative frequencies were 800 and 200 Hz across clinical cases. We developed an automated, multimodality ultrasound-based tumor and FUS lesion assessment method, which facilitates the clinical translation of stiffness-based breast cancer treatment response prediction and real-time image-guided FUS therapy.

Development of a deep learning-based automated diagnostic system (DLADS) for classifying mammographic lesions - a first large-scale multi-institutional clinical trial in Japan.

Yamaguchi T, Koyama Y, Inoue K, Ban K, Hirokaga K, Kujiraoka Y, Okanami Y, Shinohara N, Tsunoda H, Uematsu T, Mukai H

PubMed, Jul 3 2025
Recently, Western countries have built evidence on mammographic artificial intelligence-based computer-aided diagnosis (AI-CADx) systems; however, their effectiveness has not yet been sufficiently validated in Japanese women. In this study, we aimed to establish a Japanese mammographic AI-CADx system for the first time. We retrospectively collected screening or diagnostic mammograms from 63 institutions in Japan. We then randomly divided the images into training, validation, and test datasets in a balanced ratio of 8:1:1 on a case-level basis. The gold standard of annotation for the AI-CADx system was mammographic findings based on pathologic references. The AI-CADx system was developed using SE-ResNet modules and a sliding-window algorithm. A cut-off concentration gradient of the heatmap image was set at 15%. The AI-CADx system was considered accurate if it detected the presence of a malignant lesion in a breast cancer mammogram. The primary endpoint of the AI-CADx system was defined as a sensitivity and specificity of over 80% for breast cancer diagnosis in the test dataset. We collected 20,638 mammograms from 11,450 Japanese women with a median age of 55 years. The mammograms included 5019 breast cancer (24.3%), 5026 benign (24.4%), and 10,593 normal (51.3%) mammograms. In the test dataset of 2059 mammograms, the AI-CADx system achieved a sensitivity of 83.5% and a specificity of 84.7% for breast cancer diagnosis. The AUC in the test dataset was 0.841 (DeLong 95% CI: 0.822-0.859). Accuracy was almost consistent independent of breast density, mammographic findings, type of cancer, and mammography vendor (AUC range: 0.639-0.906). The developed Japanese mammographic AI-CADx system diagnosed breast cancer with the pre-specified sensitivity and specificity. We are planning a prospective study to validate the breast cancer diagnostic performance of Japanese physicians using this AI-CADx system as a second reader. UMIN, trial number UMIN000039009.
Registered 26 December 2019, https://www.umin.ac.jp/ctr/.
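The primary-endpoint metrics above, sensitivity and specificity, follow directly from a confusion matrix. The counts below are illustrative toy numbers, not trial data.

```python
from sklearn.metrics import confusion_matrix

y_true = [1] * 10 + [0] * 10                 # 1 = cancer present (toy cohort)
y_pred = [1] * 8 + [0] * 2 + [0] * 9 + [1] * 1

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                 # fraction of cancers detected
specificity = tn / (tn + fp)                 # fraction of non-cancers cleared
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```

Framing the endpoint as a joint >80%/>80% target, as the trial does, prevents trading one metric away to inflate the other.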

SPACE: Subregion Perfusion Analysis for Comprehensive Evaluation of Breast Tumor Using Contrast-Enhanced Ultrasound-A Retrospective and Prospective Multicenter Cohort Study.

Fu Y, Chen J, Chen Y, Lin Z, Ye L, Ye D, Gao F, Zhang C, Huang P

PubMed, Jul 2 2025
To develop a dynamic contrast-enhanced ultrasound (CEUS)-based method for segmenting tumor perfusion subregions, quantifying tumor heterogeneity, and constructing models for distinguishing benign from malignant breast tumors. This retrospective-prospective cohort study analyzed CEUS videos of patients with breast tumors from four academic medical centers between September 2015 and October 2024. Pixel-based time-intensity curve (TIC) perfusion variables were extracted, followed by the generation of perfusion heterogeneity maps through cluster analysis. A combined diagnostic model incorporating clinical variables, subregion percentages, and radiomics scores was developed, and subsequently, a nomogram based on this model was constructed for clinical application. A total of 339 participants were included in this bidirectional study. Retrospective data included 233 tumors divided into training and test sets. The prospective data comprised 106 tumors as an independent test set. Subregion analysis revealed Subregion 2 dominated benign tumors, while Subregion 3 was prevalent in malignant tumors. Among 59 machine-learning models, Elastic Net (ENET) (α = 0.7) performed best. Age and subregion radiomics scores were independent risk factors. The combined model achieved area under the curve (AUC) values of 0.93, 0.82, and 0.90 in the training, retrospective, and prospective test sets, respectively. The proposed CEUS-based method enhances visualization and quantification of tumor perfusion dynamics, significantly improving the diagnostic accuracy for breast tumors.
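A hedged sketch of the subregion idea described above: per-pixel time-intensity-curve (TIC) descriptors are clustered into perfusion subregions with k-means. The three TIC descriptors (standing in for quantities such as peak intensity, time to peak, and area under the curve), the image size, and the cluster count are synthetic assumptions, not the study's method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
h, w = 32, 32
tic_features = rng.normal(size=(h * w, 3))   # 3 mock TIC descriptors per pixel
tic_features[: h * w // 2, 0] += 3.0         # simulate a well-perfused region

# Cluster pixels by perfusion behavior, then fold labels back into an image.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(tic_features)
subregion_map = labels.reshape(h, w)         # perfusion-subregion image
fractions = np.bincount(labels, minlength=3) / labels.size
print("subregion fractions:", np.round(fractions, 2))
```

The subregion fractions are exactly the kind of per-tumor percentages the study feeds, alongside radiomics scores, into its combined diagnostic model.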

Improving YOLO-based breast mass detection with transfer learning pretraining on the OPTIMAM Mammography Image Database.

Ho PS, Tsai HY, Liu I, Lee YY, Chan SW

PubMed, Jul 1 2025
Early detection of breast cancer through mammography significantly improves survival rates. However, high false positive and false negative rates remain a challenge. Deep learning-based computer-aided diagnosis systems can assist in lesion detection, but their performance is often limited by the availability of labeled clinical data. This study systematically evaluated the effectiveness of transfer learning, image preprocessing techniques, and the latest You Only Look Once (YOLO) model (v9) for optimizing breast mass detection models on small proprietary datasets. We examined 133 mammography images containing masses and assessed various preprocessing strategies, including cropping and contrast enhancement. We further investigated the impact of transfer learning using the OPTIMAM Mammography Image Database (OMI-DB) compared with training on proprietary data alone. The performance of YOLOv9 was evaluated against YOLOv7 to determine improvements in detection accuracy. Pretraining on the OMI-DB dataset with cropped images significantly improved model performance, with YOLOv7 achieving a 13.9 % higher mean average precision (mAP) and a 13.2 % higher F1-score compared to training only on proprietary data. Among the tested models and configurations, the best results were obtained with YOLOv9 pretrained on OMI-DB and fine-tuned with cropped proprietary images, yielding an mAP of 73.3 % ± 16.7 % and an F1-score of 76.0 % ± 13.4 %; under this condition, YOLOv9 outperformed YOLOv7 by 8.1 % in mAP and 9.2 % in F1-score. This study provides a systematic evaluation of transfer learning and preprocessing techniques for breast mass detection in small datasets. Our results demonstrate that YOLOv9 with OMI-DB pretraining significantly enhances breast mass detection performance while reducing training time, providing a valuable guideline for optimizing deep learning models in data-limited clinical applications.
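The detection metrics above (mAP, F1) rest on an overlap criterion between predicted and ground-truth boxes. A minimal sketch of that intersection-over-union (IoU) computation, with illustrative boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

gt = (10, 10, 50, 50)
pred = (20, 20, 60, 60)
print(f"IoU: {iou(pred, gt):.3f}")   # 900 / 2300 ~= 0.391
```

A detection counts as a true positive only when IoU clears a threshold (commonly 0.5), which is what turns raw boxes into the precision/recall values behind mAP and F1.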

Knowledge mapping of ultrasound technology and triple-negative breast cancer: a visual and bibliometric analysis.

Wan Y, Shen Y, Wang J, Zhang T, Fu X

PubMed, Jul 1 2025
This study explores the application of ultrasound technology in triple-negative breast cancer (TNBC) using bibliometric methods. It presents a visual knowledge map of global research dynamics and elucidates the research directions, hotspots, trends, and frontiers in this field. The Web of Science Core Collection database was used, and CiteSpace and VOSviewer software were employed to visualize the annual publication volume, collaborative networks (countries, institutions, and authors), citation characteristics (references, co-citations, and publications), and keywords (including emergence and clustering) related to ultrasound applications in TNBC over the past 15 years. A total of 310 papers were included. The first paper was published in 2010, after which publications in this field increased markedly, especially after 2020. China emerged as the leading country by publication volume, while Shanghai Jiao Tong University had the highest output among institutions. Memorial Sloan Kettering Cancer Center was recognized as a key research institution within this domain. Adrada BE was the most prolific author by publication count, and Ko ES had the highest citation frequency among authors. Co-occurrence analysis of keywords revealed that the top three keywords by frequency were "triple-negative breast cancer," "breast cancer," and "sonography." The timeline visualization indicated strong temporal continuity in the clusters of "breast cancer," "recommendations," "biopsy," "estrogen receptor," and "radiomics." The keyword with the highest emergence value was "neoplasms" (6.80). Trend analysis of emerging terms indicated a growing focus on "machine learning approaches," "prognosis," and "molecular subtypes," with "machine learning approach" currently emerging as a significant keyword. This study provides a systematic analysis of the current state of ultrasound technology applications in TNBC.
It highlighted that "machine learning methods" have emerged as a central focus and frontier in this research area, both presently and for the foreseeable future. The findings offer valuable theoretical insights for the application of ultrasound technology in TNBC diagnosis and treatment and establish a solid foundation for further advancements in medical imaging research related to TNBC.

Prediction of axillary lymph node metastasis in triple negative breast cancer using MRI radiomics and clinical features.

Shen Y, Huang R, Zhang Y, Zhu J, Li Y

PubMed, Jul 1 2025
To develop and validate a machine learning-based model to predict axillary lymph node (ALN) metastasis in triple-negative breast cancer (TNBC) patients using magnetic resonance imaging (MRI) and clinical characteristics. This retrospective study included TNBC patients from the First Affiliated Hospital of Soochow University and Jiangsu Province Hospital (2016-2023). We analyzed clinical characteristics and radiomic features from T2-weighted MRI. Using LASSO regression for feature selection, we applied logistic regression (LR), Random Forest (RF), and Support Vector Machine (SVM) classifiers to build prediction models. A total of 163 patients, with a median age of 53 years (range: 24-73), were divided into a training group (n = 115) and a validation group (n = 48). Among them, 54 (33.13%) had ALN metastasis and 109 (66.87%) did not. Nottingham grade (P = 0.005) and tumor size (P = 0.016) differed significantly between metastasis and non-metastasis cases. In the validation set, the LR-based combined model achieved the highest AUC (0.828, 95% CI: 0.706-0.950) with excellent sensitivity (0.813) and accuracy (0.812). Although the RF-based model had the highest AUC in the training set and the highest specificity (0.906) in the validation set, its performance was less consistent than that of the LR model. T2-weighted MRI radiomic features can predict ALN metastasis in TNBC; integrating them with clinical features enhances preoperative prediction and supports personalized management.
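A compact sketch of this kind of three-way model comparison, scoring LR, RF, and SVM by AUC on a held-out split. The synthetic features mirror only the cohort sizes (163 patients, 48 held out); everything else is an illustrative assumption, not the study's data or tuning.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(163, 10))                 # 163 patients, 10 mock features
y = (X[:, 0] - X[:, 1] + rng.normal(size=163) > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=48, random_state=0)

models = {
    "LR": LogisticRegression(),
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
aucs = {name: roc_auc_score(y_va, m.fit(X_tr, y_tr).predict_proba(X_va)[:, 1])
        for name, m in models.items()}
for name, auc in aucs.items():
    print(f"{name}: validation AUC {auc:.2f}")
```

Comparing all three on the same held-out 48 patients, as the study does, is what exposes the RF model's train/validation inconsistency that a training-set AUC alone would hide.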
