
Risk inventory and mitigation actions for AI in medical imaging-a qualitative study of implementing standalone AI for screening mammography.

Gerigoorian A, Kloub M, Dembrower K, Engwall M, Strand F

pubmed · Jul 30 2025
Recent prospective studies have shown that AI may be integrated into double-reader settings to increase cancer detection. The ScreenTrustCAD study was conducted at the breast radiology department at the Capio S:t Göran Hospital, where AI is now implemented in clinical practice. This study reports on how the hospital prepared by exploring risks from an enterprise risk management perspective, i.e., applying a holistic and proactive perspective, and developed risk mitigation actions. The study was conducted as an integral part of the preparations before implementing AI in a breast imaging department. Collaborative ideation sessions were conducted with personnel at the hospital, either directly or indirectly involved with AI, to identify risks. Two external experts with competencies in cybersecurity, machine learning, and the ethical aspects of AI were interviewed as a complement. The risks identified were analyzed according to an Enterprise Risk Management framework, adapted for healthcare, that assumes risks emerge from eight different domains. Finally, appropriate risk mitigation actions were identified and discussed. Twenty-three risks were identified, covering seven of the eight risk domains and generating 51 suggested risk mitigation actions. The study indicates not only patient safety risks but also operational, strategic, financial, human capital, legal, and technological risks. The risks with the most suggested mitigation actions were ‘Radiographers unable to answer difficult questions from patients’, ‘Increased risk that patient-reported symptoms are missed by the single radiologist’, ‘Increased pressure on the single reader knowing they are the only radiologist to catch a mistake by AI’, and ‘The performance of the AI algorithm might deteriorate’. Before integrating AI clinically, hospitals should broaden their risk perspective to identify and address risks beyond immediate patient safety through comprehensive and proactive risk management. The online version contains supplementary material available at 10.1186/s12913-025-13176-9.

Radiomics meets transformers: A novel approach to tumor segmentation and classification in mammography for breast cancer.

Saadh MJ, Hussain QM, Albadr RJ, Doshi H, Rekha MM, Kundlas M, Pal A, Rizaev J, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Farhood B

pubmed · Jul 29 2025
Objective: This study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images. Materials and Methods: A multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated through Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and pixel-wise accuracy. Radiomic features were extracted from segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) employed to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors as benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance. Results: The Transformer-based model achieved superior segmentation results with DSC (0.94 ± 0.01 training, 0.92 ± 0.02 test), IoU (0.91 ± 0.01 training, 0.89 ± 0.02 test), HD95 (3.0 ± 0.3 mm training, 3.3 ± 0.4 mm test), and pixel-wise accuracy (0.96 ± 0.01 training, 0.94 ± 0.02 test), consistently outperforming U-Net across all metrics. For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features achieved lower metrics, with the best test accuracy at 84%. SHAP analysis confirmed the importance of features like Gray-Level Non-Uniformity and Zone Entropy. Conclusion: This study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential for scalability to 3D imaging and multimodal datasets.
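
As a rough sketch of the classification stage described above, the following scikit-learn pipeline chains ANOVA screening, Recursive Feature Elimination, and a stacking ensemble. The synthetic features, selected-feature counts, and base learners are illustrative assumptions; the paper's actual radiomic features and its XGBoost/CatBoost configurations are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for radiomic features extracted from segmented masks.
X, y = make_classification(n_samples=800, n_features=100, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))

pipe = Pipeline([
    ("anova", SelectKBest(f_classif, k=40)),        # ANOVA screening
    ("rfe", RFE(LogisticRegression(max_iter=1000),  # RFE refinement
                n_features_to_select=15)),
    ("clf", stack)])                                # stacking ensemble

pipe.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1]))
```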

Harnessing infrared thermography and multi-convolutional neural networks for early breast cancer detection.

Attallah O

pubmed · Jul 28 2025
Breast cancer is a common carcinoma among women worldwide and remains a considerable public health concern. Prompt identification of cancer is therefore crucial, as research indicates that 96% of cancers are treatable if diagnosed prior to metastasis. Despite being considered the gold standard for breast cancer evaluation, conventional mammography possesses inherent drawbacks, including accessibility issues, especially in rural regions, and discomfort associated with the procedure. Therefore, there has been a surge of interest in non-invasive, radiation-free alternative diagnostic techniques, such as thermal imaging (thermography). Thermography employs infrared thermal sensors to capture and assess temperature maps of human breasts, identifying potential tumours from areas of thermal irregularity. This study proposes an advanced computer-aided diagnosis (CAD) system called Thermo-CAD for early breast cancer detection using thermal imaging, aimed at assisting radiologists. The CAD system employs a variety of deep learning techniques, specifically incorporating multiple convolutional neural networks (CNNs), to enhance diagnostic accuracy and reliability. To effectively integrate multiple deep features and diminish the dimensionality of features derived from each CNN, feature transformation and selection methods, including non-negative matrix factorization and Relief-F, are used, reducing classification complexity. The Thermo-CAD system is assessed utilising two datasets: the DMR-IR (Database for Mastology Research Infrared Images), for distinguishing between normal and abnormal breast tissues, and a novel thermography dataset for distinguishing abnormal instances as benign or malignant. Thermo-CAD proved to be an outstanding CAD system for thermographic breast cancer detection, attaining 100% accuracy on the DMR-IR dataset (normal versus abnormal) using CSVM and MGSVM classifiers, and lower accuracy using LSVM and QSVM classifiers. It showed a lower ability to distinguish benign from malignant cases (second dataset), achieving an accuracy of 79.3% using CSVM. Yet it remains a promising tool for early-stage cancer detection, especially in resource-constrained environments.
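
A minimal sketch of the fusion step named in the abstract: features from multiple CNNs are concatenated, compressed with non-negative matrix factorization (NMF), and classified with an SVM. The random arrays stand in for pooled CNN features, the cubic-kernel CSVM is approximated by a polynomial-kernel SVC, and the Relief-F step is omitted; all are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
feats_cnn_a = np.abs(rng.normal(size=(n, 512)))   # e.g., pooled features from one CNN
feats_cnn_b = np.abs(rng.normal(size=(n, 1024)))  # e.g., pooled features from another CNN
y = rng.integers(0, 2, size=n)                    # normal vs. abnormal labels

fused = np.hstack([feats_cnn_a, feats_cnn_b])     # multi-CNN feature fusion
reduced = NMF(n_components=32, init="nndsvda", max_iter=500,
              random_state=0).fit_transform(fused)  # NMF needs non-negative input

csvm = SVC(kernel="poly", degree=3)               # cubic-kernel SVM stand-in
print("CV accuracy:", cross_val_score(csvm, reduced, y, cv=5).mean())
```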

Performance of AI-Based software in predicting malignancy risk in breast lesions identified on targeted ultrasound.

Lima IRM, Cruz RM, de Lima Rodrigues CL, Lago BM, da Cunha RF, Damião SQ, Wanderley MC, Bitencourt AGV

pubmed · Jul 27 2025
Targeted ultrasound is commonly used to identify lesions characterized on magnetic resonance imaging (MRI) that were not recognized on initial mammography or ultrasound, and it is especially valuable for guiding percutaneous biopsies. Although artificial intelligence (AI) algorithms have been used to differentiate benign from malignant breast lesions on ultrasound, their application in classifying lesions on targeted ultrasound has not yet been studied. The aim was to evaluate the performance of AI-based software in predicting malignancy risk in breast lesions identified on targeted ultrasound. This was a retrospective, cross-sectional, single-center study that included patients with breast lesions identified on MRI who underwent targeted ultrasound and percutaneous ultrasound-guided biopsy. The ultrasound findings were analyzed using AI-based software and subsequently correlated with the pathological results. A total of 334 lesions were evaluated, including 183 mass and 151 non-mass lesions. On histological analysis, 257 (76.9%) lesions were benign and 77 (23.1%) were malignant. Both the AI software and radiologists demonstrated high sensitivity in predicting the malignancy risk of the lesions. Specificity was higher when the radiologist evaluated lesions using the AI software than when evaluating alone (p < 0.001). All lesions classified as BI-RADS 2 or 3 on targeted ultrasound by the radiologist or the AI software (n = 72; 21.6%) had benign pathology results. The AI software, when integrated into the radiologist's evaluation, demonstrated high diagnostic accuracy and improved specificity for both mass and non-mass lesions on targeted ultrasound, supporting more accurate biopsy decisions and potentially reducing false positives without missing cancers.
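
To make the reported comparison concrete, here is a hedged sketch of how sensitivity and specificity for the two reading strategies can be computed against pathology, with McNemar's test on paired benign-lesion calls (the abstract reports p < 0.001 but does not name its test). The ten-case arrays are invented stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from statsmodels.stats.contingency_tables import mcnemar

truth  = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 0])  # 1 = malignant pathology
rad    = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])  # radiologist alone
rad_ai = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])  # radiologist using AI software

def sens_spec(y_true, y_pred):
    # sklearn orders the 2x2 confusion matrix as [[tn, fp], [fn, tp]]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

print("radiologist:      sens/spec =", sens_spec(truth, rad))
print("radiologist + AI: sens/spec =", sens_spec(truth, rad_ai))

# McNemar on benign cases: cross-tabulate the two readers' paired calls.
benign = truth == 0
table = confusion_matrix(rad[benign], rad_ai[benign], labels=[0, 1])
print(mcnemar(table, exact=True))
```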

Hybrid Deep Learning and Handcrafted Feature Fusion for Mammographic Breast Cancer Classification

Maximilian Tschuchnig, Michael Gadermayr, Khalifa Djemal

arxiv preprint · Jul 26 2025
Automated breast cancer classification from mammography remains a significant challenge due to subtle distinctions between benign and malignant tissue. In this work, we present a hybrid framework combining deep convolutional features from a ResNet-50 backbone with handcrafted descriptors and transformer-based embeddings. Using the CBIS-DDSM dataset, we benchmark our ResNet-50 baseline (AUC: 78.1%) and demonstrate that fusing handcrafted features with deep ResNet-50 and DINOv2 features improves AUC to 79.6%, with a peak recall of 80.5% and highest F1 score of 67.4% (all in setup d1). Our experiments show that handcrafted features not only complement deep representations but also enhance performance beyond transformer-based embeddings. This hybrid fusion approach achieves results comparable to state-of-the-art methods while maintaining architectural simplicity and computational efficiency, making it a practical and effective solution for clinical decision support.
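
A minimal sketch of the fusion idea, assuming standard torchvision and DINOv2 hub models and simple GLCM texture descriptors as the handcrafted features; the actual descriptors, preprocessing, and classifier head in the paper may differ.

```python
import numpy as np
import torch
import torchvision.models as models
from skimage.feature import graycomatrix, graycoprops

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()  # expose 2048-d pooled features
dino = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")  # 384-d CLS token

def handcrafted(img_u8):
    """GLCM contrast/homogeneity/energy from an 8-bit grayscale patch."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy")])

x = torch.rand(1, 3, 224, 224)  # stand-in mammogram patch
gray = (x[0, 0].numpy() * 255).astype(np.uint8)
with torch.no_grad():
    fused = torch.cat([resnet(x),                          # (1, 2048) deep CNN
                       dino(x),                            # (1, 384) transformer
                       torch.tensor(handcrafted(gray))[None].float()], dim=1)
head = torch.nn.Linear(fused.shape[1], 2)  # benign vs. malignant logits
print(head(fused).shape)
```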

Multimodal prediction based on ultrasound for response to neoadjuvant chemotherapy in triple negative breast cancer.

Lyu M, Yi S, Li C, Xie Y, Liu Y, Xu Z, Wei Z, Lin H, Zheng Y, Huang C, Lin X, Liu Z, Pei S, Huang B, Shi Z

pubmed · Jul 25 2025
Pathological complete response (pCR) can guide surgical strategy and postoperative treatments in triple-negative breast cancer (TNBC). In this study, we developed a Breast Cancer Response Prediction (BCRP) model to predict pCR in patients with TNBC. The BCRP model integrated multi-dimensional longitudinal quantitative imaging features, clinical factors, and features from the Breast Imaging Reporting and Data System (BI-RADS). The multi-dimensional longitudinal quantitative imaging features, including deep learning features and radiomics features, were extracted from multiview B-mode and colour Doppler ultrasound images before and after treatment. The BCRP model achieved areas under the receiver operating characteristic curve (AUCs) of 0.94 [95% confidence interval (CI), 0.91-0.98] and 0.84 [95% CI, 0.75-0.92] in the training and external test cohorts, respectively. Additionally, a low BCRP score was an independent risk factor for event-free survival (P < 0.05). The BCRP model showed promising ability to predict response to neoadjuvant chemotherapy in TNBC and could provide valuable prognostic information.
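
For readers wanting to reproduce the style of the reported metrics, below is a small sketch of an AUC with a 95% bootstrap confidence interval; the labels and scores are illustrative stand-ins for the BCRP model's test-cohort outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                              # 1 = pCR
scores = np.clip(y * 0.4 + rng.normal(0.3, 0.2, 200), 0, 1)   # model scores

aucs = []
for _ in range(2000):                       # bootstrap resamples of the cohort
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) == 2:         # need both classes for an AUC
        aucs.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])   # percentile 95% CI
print(f"AUC {roc_auc_score(y, scores):.2f} [95% CI {lo:.2f}-{hi:.2f}]")
```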

A novel approach for breast cancer detection using a Nesterov accelerated adam optimizer with an attention mechanism.

Saber A, Emara T, Elbedwehy S, Hassan E

pubmed · Jul 25 2025
Image-based automatic breast tumor detection has become a significant research focus, driven by recent advancements in machine learning (ML) algorithms. Traditional disease detection methods often involve manual feature extraction from images, a process requiring extensive expertise from specialists and pathologists. This labor-intensive approach is not only time-consuming but also impractical for widespread application. However, advancements in digital technologies and computer vision have enabled convolutional neural networks (CNNs) to learn features automatically, thereby overcoming these challenges. This paper presents a deep neural network model based on the MobileNet-V2 architecture, enhanced with a convolutional block attention mechanism for identifying tumor types in ultrasound images. The attention module improves the MobileNet-V2 model's performance by highlighting disease-affected areas within the images. The proposed model refines features extracted by MobileNet-V2 using the Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimizer. This integration enhances convergence and stability, leading to improved classification accuracy. The proposed approach was evaluated on the BUSI ultrasound image dataset. Experimental results demonstrated strong performance, achieving an accuracy of 99.1%, sensitivity of 99.7%, specificity of 99.5%, precision of 97.7%, and an area under the curve (AUC) of 1.0 using an 80-20 data split. Additionally, under 10-fold cross-validation, the model achieved an accuracy of 98.7%, sensitivity of 99.1%, specificity of 98.3%, precision of 98.4%, F1-score of 98.04%, and an AUC of 0.99.
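
A minimal sketch of the described architecture, assuming a standard convolutional block attention module (CBAM) placed after the MobileNet-V2 feature extractor and trained with PyTorch's NAdam (Nesterov-accelerated Adam) optimizer; where the paper inserts the attention block, and its exact hyperparameters, are not specified here.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CBAM(nn.Module):
    def __init__(self, ch, reduction=16, k=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                 nn.Linear(ch // reduction, ch))
        self.spatial = nn.Conv2d(2, 1, k, padding=k // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))               # channel attention
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True),          # spatial attention
                       x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model = nn.Sequential(backbone.features, CBAM(1280),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(1280, 3))                # BUSI: normal/benign/malignant
opt = torch.optim.NAdam(model.parameters(), lr=1e-4)     # Nesterov-accelerated Adam
print(model(torch.rand(2, 3, 224, 224)).shape)           # (2, 3) logits
```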

Joint Holistic and Lesion Controllable Mammogram Synthesis via Gated Conditional Diffusion Model

Xin Li, Kaixiang Yang, Qiang Li, Zhiwei Wang

arxiv preprint · Jul 25 2025
Mammography is the most commonly used imaging modality for breast cancer screening, driving an increasing demand for deep-learning techniques to support large-scale analysis. However, the development of accurate and robust methods is often limited by insufficient data availability and a lack of diversity in lesion characteristics. While generative models offer a promising solution for data synthesis, current approaches often fail to adequately emphasize lesion-specific features and their relationships with surrounding tissues. In this paper, we propose Gated Conditional Diffusion Model (GCDM), a novel framework designed to jointly synthesize holistic mammogram images and localized lesions. GCDM is built upon a latent denoising diffusion framework, where the noised latent image is concatenated with a soft mask embedding that represents breast, lesion, and their transitional regions, ensuring anatomical coherence between them during the denoising process. To further emphasize lesion-specific features, GCDM incorporates a gated conditioning branch that guides the denoising process by dynamically selecting and fusing the most relevant radiomic and geometric properties of lesions, effectively capturing their interplay. Experimental results demonstrate that GCDM achieves precise control over small lesion areas while enhancing the realism and diversity of synthesized mammograms. These advancements position GCDM as a promising tool for clinical applications in mammogram synthesis. Our code is available at https://github.com/lixinHUST/Gated-Conditional-Diffusion-Model/
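
A hedged sketch of the two conditioning ideas named above: concatenating the noised latent with a soft mask embedding, and gating lesion-property features into the denoiser. All shapes, the gate design, and the property encoder are assumptions, not GCDM's actual layers (see the authors' repository for the real implementation).

```python
import torch
import torch.nn as nn

class GatedCondition(nn.Module):
    def __init__(self, latent_ch=4, mask_ch=3, prop_dim=16, hidden=64):
        super().__init__()
        self.inp = nn.Conv2d(latent_ch + mask_ch, hidden, 3, padding=1)
        self.gate = nn.Sequential(nn.Linear(prop_dim, hidden), nn.Sigmoid())
        self.scale = nn.Linear(prop_dim, hidden)

    def forward(self, z_t, soft_mask, props):
        # Concatenate noised latent with breast/lesion/transition soft mask.
        h = self.inp(torch.cat([z_t, soft_mask], dim=1))
        g = self.gate(props)[:, :, None, None]   # dynamic selection gate
        s = self.scale(props)[:, :, None, None]  # radiomic/geometric signal
        return h * (1 - g) + s * g               # gated fusion of conditions

z_t = torch.randn(2, 4, 32, 32)      # noised latent image
mask = torch.rand(2, 3, 32, 32)      # soft region embedding
props = torch.randn(2, 16)           # lesion property vector
print(GatedCondition()(z_t, mask, props).shape)  # (2, 64, 32, 32)
```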

Deep Learning-Driven High Spatial Resolution Attenuation Imaging for Ultrasound Tomography (AI-UT).

Liu M, Kou Z, Wiskin JW, Czarnota GJ, Oelze ML

pubmed · Jul 24 2025
Ultrasonic attenuation can be used to characterize tissue properties of the human breast. Both quantitative ultrasound (QUS) and ultrasound computed tomography (USCT) can provide attenuation estimation. However, limitations have been identified for both approaches. In QUS, generating attenuation maps involves separating the whole image into data blocks, and the optimal block size is around 15 to 30 pulse lengths, which dramatically decreases the spatial resolution of attenuation imaging. In USCT, attenuation is often estimated with a full wave inversion (FWI) method, which is affected by background noise. To achieve a high-resolution attenuation image with low variance, a deep learning (DL)-based method was proposed. In the approach, RF data from 60 angle views acquired from the QTI Breast Acoustic CT™ Scanner served as the input and attenuation images as the output. To improve image quality for the DL method, the spatial correlation between speed of sound (SOS) and attenuation was used as a constraint in the model. The results indicated that including the SOS structural information improved the performance of the model. With a higher spatial resolution attenuation image, further segmentation of the breast can be achieved. The structural information and actual attenuation values provided by DL-generated attenuation images were validated against values from the literature and the SOS-based segmentation map. The information provided by DL-generated attenuation images can be used as an additional biomarker for breast cancer diagnosis.
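
One plausible way to encode the SOS constraint is to add a structural term to the training loss that penalizes disagreement between edge maps of the predicted attenuation and the co-registered SOS image, so the two share boundaries. The edge operator and weighting below are assumptions; the abstract does not detail the exact formulation.

```python
import torch
import torch.nn.functional as F

def edges(img):
    """Finite-difference gradient magnitude, cropped to a common size."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return torch.sqrt(dx[..., :-1, :] ** 2 + dy[..., :, :-1] ** 2 + 1e-8)

def attenuation_loss(pred_atten, true_atten, sos, alpha=0.1):
    recon = F.mse_loss(pred_atten, true_atten)            # attenuation fidelity
    structure = F.l1_loss(edges(pred_atten), edges(sos))  # shared SOS boundaries
    return recon + alpha * structure

pred = torch.rand(1, 1, 64, 64, requires_grad=True)
print(attenuation_loss(pred, torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)))
```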

Enhanced HER-2 prediction in breast cancer through synergistic integration of deep learning, ultrasound radiomics, and clinical data.

Hu M, Zhang L, Wang X, Xiao X

pubmed · Jul 24 2025
This study integrates ultrasound radiomics with clinical data to enhance the diagnostic accuracy of HER-2 expression status in breast cancer, aiming to provide more reliable treatment strategies for this aggressive disease. We included ultrasound images and clinicopathologic data from 210 female breast cancer patients, employing a Generative Adversarial Network (GAN) to enhance image clarity and segment the region of interest (ROI) for radiomics feature extraction. Features were optimized through Z-score normalization and various statistical methods. We constructed and compared multiple machine learning models, including Linear Regression, Random Forest, and XGBoost, with deep learning models such as CNNs (ResNet101, VGG19) and Transformer technology. The Grad-CAM technique was used to visualize the decision-making process of the deep learning models. The Deep Learning Radiomics (DLR) model integrated radiomics features with deep learning features, and a combined model further integrated clinical features to predict HER-2 status. The LightGBM and ResNet101 models showed high performance, but the combined model achieved the highest AUC values in both training and testing, demonstrating the effectiveness of integrating diverse data sources. The study successfully demonstrates that the fusion of deep learning with radiomics analysis significantly improves the prediction accuracy of HER-2 status, offering a new strategy for personalized breast cancer treatment and prognostic assessments.
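
Grad-CAM, the visualization technique the study used to inspect its deep models, can be sketched with forward and backward hooks; the torchvision ResNet-101, layer choice, and random input below are illustrative stand-ins, not the study's trained model or its GAN-enhanced ultrasound ROIs.

```python
import torch
import torchvision.models as models

model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT).eval()
feats, grads = {}, {}
layer = model.layer4  # last convolutional stage

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.rand(1, 3, 224, 224)         # stand-in ultrasound ROI
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top-scoring class

w = grads["a"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
cam = torch.relu((w * feats["a"]).sum(dim=1))  # gradient-weighted activation map
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                               # (1, 7, 7) heatmap to upsample
```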