
Whole-lesion-aware network based on freehand ultrasound video for breast cancer assessment: a prospective multicenter study.

Han J, Gao Y, Huo L, Wang D, Xie X, Zhang R, Xiao M, Zhang N, Lei M, Wu Q, Ma L, Sun C, Wang X, Liu L, Cheng S, Tang B, Wang L, Zhu Q, Wang Y

PubMed · Jun 16, 2025
The clinical application of artificial intelligence (AI) models based on static breast ultrasound images has been hindered in real-world workflows by the operator dependence of standardized image acquisition and the incomplete view of breast lesions on static images. To better exploit the real-time advantages of ultrasound and make the approach more conducive to clinical application, we proposed a whole-lesion-aware network based on freehand ultrasound video (WAUVE), scanned in an arbitrary direction, to predict an overall breast cancer risk score. WAUVE was developed using 2912 videos (2912 lesions) from 2771 patients retrospectively collected from May 2020 to August 2022 in two hospitals. We compared the diagnostic performance of WAUVE with that of a static 2D-ResNet50 model and a dynamic TimeSformer model on the internal validation set. Subsequently, a dataset comprising 190 videos (190 lesions) from 175 patients prospectively collected from December 2022 to April 2023 in two other hospitals was used as an independent external validation set. A reader study was conducted by four experienced radiologists on the external validation set. We compared the diagnostic performance of WAUVE with that of the four experienced radiologists and evaluated the auxiliary value of the model for radiologists. WAUVE demonstrated performance superior to the 2D-ResNet50 model and similar to the TimeSformer model. On the external validation set, WAUVE achieved an area under the receiver operating characteristic curve (AUC) of 0.8998 (95% CI = 0.8529-0.9439) and showed diagnostic performance comparable to that of the four experienced radiologists in terms of sensitivity (97.39% vs. 98.48%, p = 0.36), specificity (49.33% vs. 50.00%, p = 0.92), and accuracy (78.42% vs. 79.34%, p = 0.60). With the assistance of the WAUVE model, the average specificity of the four experienced radiologists improved by 6.67%, and higher consistency was achieved (from 0.807 to 0.838). WAUVE, based on non-standardized ultrasound scanning, demonstrated excellent performance in breast cancer assessment and yielded outcomes similar to those of experienced radiologists, indicating that its clinical application is promising.
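
The abstract above reports an AUC with a 95% confidence interval on the external validation set. As a point of reference, a minimal sketch of how such an interval is commonly obtained by bootstrap resampling of per-lesion risk scores and labels is shown below; the variable names are hypothetical and the authors' exact CI method is not specified in the abstract.

```python
# Minimal sketch: bootstrap 95% CI for AUC from per-lesion scores and labels.
# Names (y_true, y_score) are illustrative; the paper's exact CI method is not stated.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    aucs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample lesions with replacement
        if len(np.unique(y_true[idx])) < 2:  # skip degenerate resamples
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```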

ThreeF-Net: Fine-grained feature fusion network for breast ultrasound image segmentation.

Bian X, Liu J, Xu S, Liu W, Mei L, Xiao C, Yang F

PubMed · Jun 14, 2025
Convolutional Neural Networks (CNNs) have achieved remarkable success in breast ultrasound image segmentation, but they still face several challenges when dealing with breast lesions. Because CNNs are limited in modeling long-range dependencies, they often perform poorly on lesions with similar intensity distributions, irregular shapes, and blurry boundaries, leading to low segmentation accuracy. To address these issues, we propose ThreeF-Net, a fine-grained feature fusion network. This network combines the advantages of CNNs and Transformers, aiming to simultaneously capture local features and model long-range dependencies, thereby improving the accuracy and stability of segmentation tasks. Specifically, we designed a Transformer-assisted Dual Encoder Architecture (TDE), which integrates convolutional modules and self-attention modules to achieve collaborative learning of local and global features. Additionally, we designed a Global Group Feature Extraction (GGFE) module, which effectively fuses the features learned by CNNs and Transformers, enhancing feature representation capability. To further improve model performance, we also introduced a Dynamic Fine-grained Convolution (DFC) module, which significantly improves lesion boundary segmentation accuracy by dynamically adjusting convolution kernels and capturing multi-scale features. Comparative experiments with state-of-the-art segmentation methods on three public breast ultrasound datasets demonstrate that ThreeF-Net outperforms existing methods across multiple key evaluation metrics.
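
As a rough illustration of the CNN-plus-Transformer dual-encoder idea described above (local convolutional features fused with long-range self-attention features), a minimal PyTorch sketch follows. Module names, dimensions, and the fusion scheme are assumptions for illustration only, not the authors' TDE/GGFE/DFC implementation.

```python
# Minimal sketch of a dual-encoder segmentation model: a CNN branch for local features
# and a Transformer branch for global context, fused before a per-pixel head.
import torch
import torch.nn as nn

class DualEncoderSeg(nn.Module):
    def __init__(self, in_ch=1, dim=64, patch=16, img=256):
        super().__init__()
        # CNN encoder: captures local texture and boundary cues at full resolution
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
        )
        # Transformer encoder: models long-range dependencies over patch tokens
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, (img // patch) ** 2, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.up = nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False)
        # Fusion of local + global features, then a 1-channel segmentation head
        self.fuse = nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(dim, 1, 1)

    def forward(self, x):
        local = self.cnn(x)                               # (B, dim, H, W)
        tokens = self.patch_embed(x)                      # (B, dim, H/p, W/p)
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2) + self.pos
        glob = self.transformer(seq).transpose(1, 2).reshape(b, d, h, w)
        glob = self.up(glob)                              # back to full resolution
        fused = self.fuse(torch.cat([local, glob], dim=1))
        return torch.sigmoid(self.head(fused))            # per-pixel lesion probability

# mask = DualEncoderSeg()(torch.randn(1, 1, 256, 256))    # -> (1, 1, 256, 256)
```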

Application of Machine Learning to Breast MR Imaging.

Lo Gullo R, van Veldhuizen V, Roa T, Kapetas P, Teuwen J, Pinker K

PubMed · Jun 14, 2025
The demand for breast imaging services continues to grow, driven by expanding indications in breast cancer diagnosis and treatment. This increasing demand underscores the potential role of artificial intelligence (AI) to enhance workflow efficiency as well as to further unlock the abundant imaging data to achieve improvements along the breast cancer pathway. Although AI has made significant advancements in mammography and digital breast tomosynthesis, with commercially available computer-aided detection (CAD) systems widely used for breast cancer screening and detection, its adoption in breast MRI has been slower. This lag is primarily attributed to the inherent complexity of breast MRI examinations and, consequently, the more limited availability of large, well-annotated, publicly available breast MRI datasets. Despite these challenges, interest in AI implementation in breast MRI remains strong, fueled by the expanding use and indications for breast MRI. This article explores the implementation of AI in breast MRI across the breast cancer care pathway, highlighting its potential to revolutionize the way we detect and manage breast cancer. By addressing current challenges and examining emerging AI applications, we aim to provide a comprehensive overview of how AI is reshaping breast MRI and improving outcomes for patients.

BreastDCEDL: Curating a Comprehensive DCE-MRI Dataset and developing a Transformer Implementation for Breast Cancer Treatment Response Prediction

Naomi Fridman, Bubby Solway, Tomer Fridman, Itamar Barnea, Anat Goldshtein

arXiv preprint · Jun 13, 2025
Breast cancer remains a leading cause of cancer-related mortality worldwide, making early detection and accurate treatment response monitoring critical priorities. We present BreastDCEDL, a curated, deep learning-ready dataset comprising pre-treatment 3D Dynamic Contrast-Enhanced MRI (DCE-MRI) scans from 2,070 breast cancer patients drawn from the I-SPY1, I-SPY2, and Duke cohorts, all sourced from The Cancer Imaging Archive. The raw DICOM imaging data were rigorously converted into standardized 3D NIfTI volumes with preserved signal integrity, accompanied by unified tumor annotations and harmonized clinical metadata including pathologic complete response (pCR), hormone receptor (HR), and HER2 status. Although DCE-MRI provides essential diagnostic information and deep learning offers tremendous potential for analyzing such complex data, progress has been limited by the lack of accessible, public, multicenter datasets. BreastDCEDL addresses this gap by enabling development of advanced models, including state-of-the-art transformer architectures that require substantial training data. To demonstrate its capacity for robust modeling, we developed the first transformer-based model for breast DCE-MRI, leveraging Vision Transformer (ViT) architecture trained on RGB-fused images from three contrast phases (pre-contrast, early post-contrast, and late post-contrast). Our ViT model achieved state-of-the-art pCR prediction performance in HR+/HER2- patients (AUC 0.94, accuracy 0.93). BreastDCEDL includes predefined benchmark splits, offering a framework for reproducible research and enabling clinically meaningful modeling in breast cancer imaging.
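
A minimal sketch of the RGB-fusion idea described in the abstract, assuming three single-channel 224x224 slices for the pre-contrast, early post-contrast, and late post-contrast phases and a torchvision ViT-B/16 backbone with a binary pCR head; the dataset's actual preprocessing and the authors' training setup may differ.

```python
# Sketch: stack three DCE-MRI contrast phases as RGB channels and classify with a ViT.
# Backbone, normalization, and slice selection are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

def fuse_phases(pre, early, late):
    """Stack three single-channel 224x224 slices into an RGB tensor, min-max scaled per phase."""
    def norm(t):
        return (t - t.min()) / (t.max() - t.min() + 1e-8)
    return torch.stack([norm(pre), norm(early), norm(late)], dim=0)  # (3, 224, 224)

model = vit_b_16(weights=None)                                  # ViT-B/16 backbone
model.heads.head = nn.Linear(model.heads.head.in_features, 2)   # pCR vs. non-pCR

x = torch.stack([fuse_phases(torch.rand(224, 224),
                             torch.rand(224, 224),
                             torch.rand(224, 224))])             # batch of 1 fused image
logits = model(x)                                                # (1, 2)
```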

OneTouch Automated Photoacoustic and Ultrasound Imaging of Breast in Standing Pose.

Zhang H, Zheng E, Zheng W, Huang C, Xi Y, Cheng Y, Yu S, Chakraborty S, Bonaccio E, Takabe K, Fan XC, Xu W, Xia J

PubMed · Jun 12, 2025
We developed an automated photoacoustic and ultrasound breast tomography system that images the patient in the standing pose. The system, named OneTouch-PAT, uses linear transducer arrays with optical-acoustic combiners for effective dual-modal imaging. During scanning, subjects only need to gently attach their breasts to the imaging window, and co-registered three-dimensional ultrasonic and photoacoustic images of the breast can be obtained within one minute. Our system has a large field of view of 17 cm by 15 cm and achieves an imaging depth of 3 cm with sub-millimeter resolution. A three-dimensional deep-learning network was also developed to further improve image quality by increasing 3D resolution, enhancing vasculature, eliminating skin signals, and reducing noise. The performance of the system was tested on four healthy subjects and 61 patients with breast cancer. Our results indicate that the ultrasound structural information can be combined with the photoacoustic vascular information for better tissue characterization. Representative cases from different molecular subtypes have indicated different photoacoustic and ultrasound features that could potentially be used for imaging-based cancer classification. Statistical analysis across all patients indicates that the regional photoacoustic intensity and vessel branching points are indicators of breast malignancy. These promising results suggest that our system could significantly enhance breast cancer diagnosis and classification.

Multimodal deep learning for enhanced breast cancer diagnosis on sonography.

Wei TR, Chang A, Kang Y, Patel M, Fang Y, Yan Y

PubMed · Jun 12, 2025
This study introduces a novel multimodal deep learning model tailored for the differentiation of benign and malignant breast masses using dual-view breast ultrasound images (radial and anti-radial views) in conjunction with corresponding radiology reports. The proposed multimodal model architecture includes specialized image and text encoders for independent feature extraction, along with a transformation layer to align the multimodal features for the subsequent classification task. The model achieved an area under the curve (AUC) of 85% and outperformed unimodal models by 6% and 8% in the Youden index. Additionally, our multimodal model surpassed zero-shot predictions generated by prominent foundation models such as CLIP and MedCLIP. In direct comparison with classification results based on physician-assessed ratings, our model exhibited clear superiority, highlighting its practical significance in diagnostics. By integrating both image and text modalities, this study exemplifies the potential of multimodal deep learning in enhancing diagnostic performance, laying the foundation for developing robust and transparent AI-assisted solutions.
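
The dual-encoder design described above (separate image and text encoders, a transformation layer to align features, and a joint classifier) could be sketched roughly as follows. The backbones, dimensions, and tokenization are illustrative assumptions rather than the authors' implementation.

```python
# Sketch: image encoder + text encoder, projection layers into a shared space,
# and a joint benign/malignant classifier over the fused features.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalClassifier(nn.Module):
    def __init__(self, vocab_size=5000, dim=256):
        super().__init__()
        img_backbone = resnet18(weights=None)
        img_backbone.fc = nn.Identity()                      # expose 512-d image features
        self.image_encoder = img_backbone
        self.text_embed = nn.EmbeddingBag(vocab_size, dim)   # mean-pooled report tokens
        self.img_proj = nn.Linear(512, dim)                  # align image features to shared space
        self.txt_proj = nn.Linear(dim, dim)                  # align text features to shared space
        self.classifier = nn.Linear(2 * dim, 2)              # benign vs. malignant

    def forward(self, radial_img, antiradial_img, report_tokens):
        # Dual-view ultrasound: average features from radial and anti-radial views
        img_feat = (self.image_encoder(radial_img) + self.image_encoder(antiradial_img)) / 2
        txt_feat = self.text_embed(report_tokens)
        fused = torch.cat([self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=-1)
        return self.classifier(fused)

# logits = MultimodalClassifier()(torch.randn(2, 3, 224, 224),
#                                 torch.randn(2, 3, 224, 224),
#                                 torch.randint(0, 5000, (2, 32)))
```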

Using a Large Language Model for Breast Imaging Reporting and Data System Classification and Malignancy Prediction to Enhance Breast Ultrasound Diagnosis: Retrospective Study.

Miaojiao S, Xia L, Xian Tao Z, Zhi Liang H, Sheng C, Songsong W

PubMed · Jun 11, 2025
Breast ultrasound is essential for evaluating breast nodules, with the Breast Imaging Reporting and Data System (BI-RADS) providing standardized classification. However, interobserver variability among radiologists can affect diagnostic accuracy. Large language models (LLMs) like ChatGPT-4 have shown potential in medical imaging interpretation. This study explores the feasibility of ChatGPT-4 in improving BI-RADS classification consistency and malignancy prediction compared to radiologists. This study aims to evaluate the feasibility of using LLMs, particularly ChatGPT-4, to assess the consistency and diagnostic accuracy of standardized breast ultrasound imaging reports, using pathology as the reference standard. This retrospective study analyzed breast nodule ultrasound data from 671 female patients (mean age 45.82 years, SD 9.20 years; range 26-75 years) who underwent biopsy or surgical excision at our hospital between June 2019 and June 2024. ChatGPT-4 was used to interpret BI-RADS classifications and predict benign versus malignant nodules. The study compared the model's performance to that of two senior radiologists (≥15 years of experience) and two junior radiologists (<5 years of experience) using key diagnostic metrics, including accuracy, sensitivity, specificity, area under the receiver operating characteristic curve, P values, and odds ratios with 95% CIs. Two diagnostic models were evaluated: (1) an image interpretation model, where ChatGPT-4 classified nodules based on BI-RADS features, and (2) an image-to-text-LLM model, where radiologists provided textual descriptions and ChatGPT-4 determined malignancy probability based on keywords. Radiologists were blinded to pathological outcomes, and BI-RADS classifications were finalized through consensus. ChatGPT-4 achieved an overall BI-RADS classification accuracy of 96.87%, outperforming junior radiologists (617/671, 91.95% and 604/671, 90.01%, P<.01). For malignancy prediction, ChatGPT-4 achieved an area under the receiver operating characteristic curve of 0.82 (95% CI 0.79-0.85), an accuracy of 80.63% (541/671 cases), a sensitivity of 90.56% (259/286 cases), and a specificity of 73.51% (283/385 cases). The image interpretation model demonstrated performance comparable to that of senior radiologists, while the image-to-text-LLM model further improved diagnostic accuracy for all radiologists, increasing their sensitivity and specificity significantly (P<.001). Statistical analyses, including the McNemar test and DeLong test, confirmed that ChatGPT-4 outperformed junior radiologists (P<.01) and showed noninferiority compared to senior radiologists (P>.05). Pathological diagnoses served as the reference standard, ensuring robust evaluation reliability. Integrating ChatGPT-4 into an image-to-text-LLM workflow improves BI-RADS classification accuracy and supports radiologists in breast ultrasound diagnostics. These results demonstrate its potential as a decision-support tool to enhance diagnostic consistency and reduce variability.
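
A minimal sketch of the image-to-text-LLM step described above, in which a radiologist's textual lesion description is sent to an LLM that returns a BI-RADS category and a malignancy estimate. The prompt, model name, and response handling are assumptions for illustration; they are not the study's exact protocol.

```python
# Sketch: send a radiologist's lesion description to an LLM and get back a BI-RADS
# category and a benign/malignant impression. Prompt and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_report(report_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are assisting with breast ultrasound interpretation. "
                         "Given a lesion description, return the BI-RADS category (2-5) "
                         "and whether the lesion is more likely benign or malignant.")},
            {"role": "user", "content": report_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# print(classify_report("Irregular hypoechoic mass, 12 mm, angular margins, posterior shadowing."))
```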

Diagnostic accuracy of machine learning-based magnetic resonance imaging models in breast cancer classification: a systematic review and meta-analysis.

Zhang J, Wu Q, Lei P, Zhu X, Li B

PubMed · Jun 11, 2025
This meta-analysis evaluates the diagnostic accuracy of machine learning (ML)-based magnetic resonance imaging (MRI) models in distinguishing benign from malignant breast lesions and explores factors influencing their performance. A systematic search of PubMed, Embase, Cochrane Library, Scopus, and Web of Science identified 12 eligible studies (from 3,739 records) up to August 2024. Data were extracted to calculate sensitivity, specificity, and area under the curve (AUC) using bivariate models in R 4.4.1. Study quality was assessed via QUADAS-2. Pooled sensitivity and specificity were 0.86 (95% CI: 0.82-0.90) and 0.82 (95% CI: 0.78-0.86), respectively, with an overall AUC of 0.90 (95% CI: 0.85-0.90). Diagnostic odds ratio (DOR) was 39.11 (95% CI: 25.04-53.17). Support vector machine (SVM) classifiers outperformed Naive Bayes, with higher sensitivity (0.88 vs. 0.86) and specificity (0.82 vs. 0.78). Heterogeneity was primarily attributed to MRI equipment (P = 0.037). ML-based MRI models demonstrate high diagnostic accuracy for breast cancer classification, with pooled sensitivity of 0.86 (95% CI: 0.82-0.90), specificity of 0.82 (95% CI: 0.78-0.86), and AUC of 0.90 (95% CI: 0.85-0.90). These results support their clinical utility as screening and diagnostic adjuncts, while highlighting the need for standardized protocols to improve generalizability.
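
For reference, the diagnostic odds ratio (DOR) reported above is conventionally defined from the 2x2 counts or, equivalently, from sensitivity and specificity; note that the pooled DOR from a bivariate model generally differs from a naive plug-in of the pooled sensitivity and specificity.

```latex
\mathrm{DOR} \;=\; \frac{\mathrm{TP}\cdot\mathrm{TN}}{\mathrm{FP}\cdot\mathrm{FN}}
\;=\; \frac{\text{sensitivity}/(1-\text{sensitivity})}{(1-\text{specificity})/\text{specificity}}
```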

A machine learning approach for personalized breast radiation dosimetry in CT: Integrating radiomics and deep neural networks.

Tzanis E, Stratakis J, Damilakis J

PubMed · Jun 11, 2025
To develop a machine learning-based workflow for patient-specific breast radiation dosimetry in CT. Two hundred eighty-six chest CT examinations, with corresponding right and left breast contours, were retrospectively collected from the radiotherapy department at our institution to develop and validate breast segmentation U-Nets. Additionally, Monte Carlo simulations were performed for each CT scan to determine radiation doses to the breasts. The derived breast doses, along with predictors such as X-ray tube current and radiomic features, were then used to train deep neural networks (DNNs) for breast dose prediction. The breast segmentation models achieved a mean dice similarity coefficient of 0.92, with precision and sensitivity scores above 0.90 for both breasts, indicating high segmentation accuracy. The DNNs demonstrated close alignment with ground truth values, with mean predicted doses of 5.05 ± 0.50 mGy for the right breast and 5.06 ± 0.55 mGy for the left breast, compared to ground truth values of 5.03 ± 0.57 mGy and 5.02 ± 0.61 mGy, respectively. The mean absolute percentage errors were 4.01 % (range: 3.90 %-4.12 %) for the right breast and 4.82 % (range: 4.56 %-5.11 %) for the left breast. The mean inference time was 30.2 ± 4.3 s. Statistical analysis showed no significant differences between predicted and actual doses (p ≥ 0.07). This study presents an automated, machine learning-based workflow for breast radiation dosimetry in CT, integrating segmentation and dose prediction models. The models and code are available at: https://github.com/eltzanis/ML-based-Breast-Radiation-Dosimetry-in-CT.
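
The segmentation accuracy above is reported as a mean Dice similarity coefficient of 0.92. For reference, a minimal sketch of the standard Dice computation between a predicted and a reference binary breast mask follows; array names are illustrative.

```python
# Sketch: Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray, eps: float = 1e-8) -> float:
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# dsc = dice_coefficient(predicted_breast_mask, contoured_breast_mask)  # mean ~0.92 reported above
```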