Deep learning-based prediction of axillary pathological complete response in patients with breast cancer using longitudinal multiregional ultrasound.

Liu Y, Wang Y, Huang J, Pei S, Wang Y, Cui Y, Yan L, Yao M, Wang Y, Zhu Z, Huang C, Liu Z, Liang C, Shi J, Li Z, Pei X, Wu L

pubmed paper · Aug 27 2025
Noninvasive biomarkers that capture the longitudinal multiregional tumour burden in patients with breast cancer may improve the assessment of residual nodal disease and guide axillary surgery. Additionally, a significant barrier to the clinical translation of current data-driven deep learning models is their lack of interpretability. This study aims to develop and validate an information shared-private (iShape) model to predict axillary pathological complete response in patients with axillary lymph node (ALN)-positive breast cancer receiving neoadjuvant therapy (NAT) by learning common and specific image representations from longitudinal primary tumour and ALN ultrasound images. A total of 1135 patients with biopsy-proven ALN-positive breast cancer who received NAT were included in this multicentre, retrospective study. iShape was trained on a dataset of 371 patients and validated on three external validation sets (EVS1-3), with 295, 244, and 225 patients, respectively. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). The false-negative rates (FNRs) of iShape alone and in combination with sentinel lymph node biopsy (SLNB) were also evaluated. Imaging feature visualisation and RNA sequencing analysis were performed to explore the underlying basis of iShape. iShape achieved AUCs of 0.950-0.971 for EVS1-3, better than those of the clinical model and the image signatures derived from the primary tumour, longitudinal primary tumour, or ALN (P < 0.05, DeLong test). The performance of iShape remained satisfactory in subgroup analyses stratified by age, menstrual status, T stage, molecular subtype, treatment regimens, and machine type (AUCs of 0.812-1.000). More importantly, the FNR of iShape was 7.7%-8.1% in the EVSs, and the FNR of SLNB decreased from 13.4% to 3.6% with the aid of iShape in patients receiving SLNB and ALN dissection. The decision-making process of iShape was explained by feature visualisation, and RNA sequencing analysis revealed that a lower deep learning score was associated with immune infiltration and tumour proliferation pathways. The iShape model demonstrated good performance for the precise quantification of ALN status in patients with ALN-positive breast cancer receiving NAT, potentially supporting individualised decision-making and avoiding unnecessary axillary lymph node dissection. This study was supported by (1) Noncommunicable Chronic Diseases-National Science and Technology Major Project (No. 2024ZD0531100); (2) Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006); (3) National Natural Science Foundation of China (No. 82472051, 82471947, 82271941, 82272088); (4) National Science Foundation for Young Scientists of China (No. 82402270, 82202095, 82302190); (5) Guangzhou Municipal Science and Technology Planning Project (No. 2025A04J4773, 2025A04J4774); (6) the Natural Science Foundation of Guangdong Province of China (No. 2025A1515011607); (7) Medical Scientific Research Foundation of Guangdong Province of China (No. A2024403); (8) Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); (9) Outstanding Youth Science Foundation of Yunnan Basic Research Project (No. 202401AY070001-316); (10) Innovative Research Team of Yunnan Province (No. 202505AS350013).
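As a rough illustration of the evaluation reported above, the sketch below computes an AUC and false-negative rates for a hypothetical nodal-response classifier, alone and combined with SLNB (a case is called node-negative only when both agree). The synthetic data, variable names, and the 0.5 operating threshold are assumptions for illustration, not the study's protocol.

```python
# Minimal sketch (assumed data and threshold): AUC and false-negative rate for a
# nodal-response classifier, and a combined model + SLNB rule in which a case is
# called node-negative only if both the model and SLNB agree.
import numpy as np
from sklearn.metrics import roc_auc_score

def false_negative_rate(has_residual_disease, called_negative):
    """FNR = residual-disease cases called negative / all residual-disease cases."""
    residual = has_residual_disease.astype(bool)
    return called_negative[residual].mean()

rng = np.random.default_rng(0)
y_residual = rng.integers(0, 2, 300)                      # 1 = residual nodal disease on pathology
model_prob = np.clip(y_residual * 0.7 + rng.normal(0.3, 0.2, 300), 0, 1)
slnb_negative = rng.random(300) > (0.9 * y_residual)      # toy SLNB result (~10% miss rate)

auc = roc_auc_score(y_residual, model_prob)
model_negative = model_prob < 0.5                         # assumed operating threshold
combined_negative = model_negative & slnb_negative

print(f"AUC = {auc:.3f}")
print(f"FNR, model alone : {false_negative_rate(y_residual, model_negative):.3f}")
print(f"FNR, model + SLNB: {false_negative_rate(y_residual, combined_negative):.3f}")
```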

[Comparison of diagnostic performance between artificial intelligence-assisted automated breast ultrasound and handheld ultrasound in breast cancer screening].

Yi DS, Sun WY, Song HP, Zhao XL, Hu SY, Gu X, Gao Y, Zhao FH

pubmed paper · Aug 26 2025
Objective: To compare the diagnostic performance of artificial intelligence-assisted automated breast ultrasound (AI-ABUS) with traditional handheld ultrasound (HHUS) in breast cancer screening. Methods: A total of 36 171 women undergoing breast cancer ultrasound screening in Futian District, Shenzhen, between July 1, 2023 and June 30, 2024 were prospectively recruited and assigned to either the AI-ABUS or HHUS group based on the screening modality used. In the AI-ABUS group, image acquisition was performed on-site by technicians, and two ultrasound physicians conducted remote diagnoses with AI assistance, supported by a follow-up management system. In the HHUS group, one ultrasound physician conducted both image acquisition and diagnosis on-site, and follow-up was led by clinical physicians. Based on the reported malignancy rates of different BI-RADS categories, the number of undiagnosed breast cancer cases in individuals without pathology was estimated, and adjusted detection rates were calculated. Primary outcomes included screening positive rate, biopsy rate, cancer detection rate, loss-to-follow-up rate, specificity, and sensitivity. Results: The median age [interquartile range, M (Q1, Q3)] of the 36 171 women was 43.8 (36.6, 50.8) years. A total of 14 766 women (40.82%) were screened with AI-ABUS and 21 405 (59.18%) with HHUS. Baseline characteristics showed no significant differences between the groups (all P>0.05). The AI-ABUS group had a lower screening positive rate [0.59% (87/14 766) vs 1.94% (416/21 405)], but a higher biopsy rate [47.13% (41/87) vs 16.10% (67/416)], a higher cancer detection rate [1.69‰ (25/14 766) vs 0.47‰ (10/21 428)], and a lower loss-to-follow-up rate (6.90% vs 71.39%) compared with the HHUS group (all P<0.05). There was no statistically significant difference in the distribution of breast cancer pathological stages among those who underwent biopsy between the two groups (P>0.05). The specificity of AI-ABUS was higher than that of HHUS [89.77% (13 231/14 739) vs 74.12% (15 858/21 394), P<0.05], while sensitivity did not differ significantly [92.59% (25/27) vs 90.91% (10/11), P>0.05]. After estimating undiagnosed cancer cases among participants without pathology, the adjusted detection rate was 2.30‰ (34/14 766) in the AI-ABUS group and ranged from 1.17‰ to 2.75‰ [(25-59)/21 428] in the HHUS group. In the minimum estimation scenario, the detection rate in the AI-ABUS group was significantly higher (P<0.05); in the maximum estimation scenario, the difference was not statistically significant (P>0.05). Conclusions: The AI-ABUS model, combined with an intelligent follow-up management system, enables a higher breast cancer detection rate with a lower screening positive rate, improved specificity, and reduced loss to follow-up. This suggests AI-ABUS is a promising alternative model for breast cancer screening.
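The adjusted detection rate described above can be illustrated with a toy calculation: undiagnosed cancers among screen-positive women without pathology are estimated from per-BI-RADS-category malignancy rates and added to the confirmed cases. All rates and counts below are illustrative assumptions, not the study's data.

```python
# Toy sketch (all malignancy rates and counts are assumed): estimate undiagnosed
# cancers among screen-positives who never had pathology, then adjust the
# detection rate per 1000 screened women.
assumed_malignancy_rate = {"3": 0.02, "4A": 0.10, "4B": 0.35, "4C": 0.79, "5": 0.95}

# counts of screen-positive women WITHOUT pathology, by BI-RADS category (assumed)
no_pathology_counts = {"3": 120, "4A": 40, "4B": 8, "4C": 2, "5": 1}

confirmed_cancers = 25        # biopsy-confirmed cancers (example figure)
screened_total = 14_766

estimated_missed = sum(no_pathology_counts[c] * assumed_malignancy_rate[c]
                       for c in no_pathology_counts)
adjusted_rate = (confirmed_cancers + estimated_missed) / screened_total

print(f"estimated undiagnosed cancers: {estimated_missed:.1f}")
print(f"adjusted detection rate: {adjusted_rate * 1000:.2f} per 1000")
```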

Relation knowledge distillation 3D-ResNet-based deep learning for breast cancer molecular subtypes prediction on ultrasound videos: a multicenter study.

Wu Y, Zhou L, Zhao J, Peng Y, Li X, Wang Y, Zhu S, Hou C, Du P, Ling L, Wang Y, Tian J, Sun L

pubmed paper · Aug 26 2025
To develop and test a relation knowledge distillation three-dimensional residual network (RKD-R3D) model for predicting breast cancer molecular subtypes from ultrasound (US) videos to aid personalized clinical management. This multicentre study retrospectively included 882 breast cancer patients (2375 US videos and 9499 images) between January 2017 and December 2021, divided into training, validation, and internal test cohorts. Additionally, 86 patients enrolled between May 2023 and November 2023 formed the external test cohort. St. Gallen molecular subtypes (luminal A, luminal B, HER2-positive, and triple-negative) were confirmed via postoperative immunohistochemistry. RKD-R3D was developed and validated on US videos to predict the four molecular subtype classes, and its predictive performance was compared with RKD-R2D, traditional R3D, and preoperative core needle biopsy (CNB). The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, balanced accuracy, precision, recall, and F1-score were analyzed. RKD-R3D (AUC: 0.88, 0.95) outperformed RKD-R2D (AUC: 0.72, 0.85) and traditional R3D (AUC: 0.65, 0.79) for four-class molecular subtype prediction in the internal and external test cohorts, respectively. RKD-R3D also outperformed CNB (accuracy: 0.87 vs. 0.79) in the external test cohort, achieved good performance in distinguishing triple-negative from non-triple-negative breast cancers (AUC: 0.98), and obtained satisfactory predictions for both T1 and non-T1 lesions (AUC: 0.96, 0.90). Applied to US videos, RKD-R3D is a potential supplementary tool for non-invasively assessing breast cancer molecular subtypes.
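Relation knowledge distillation generally transfers the pairwise structure of a teacher's embedding space to a student rather than matching individual outputs. A minimal distance-wise RKD loss in PyTorch might look like the sketch below; the tensor shapes, the Huber loss, and the teacher/student roles are assumptions in the spirit of the generic RKD recipe, not the paper's exact RKD-R3D configuration.

```python
# Minimal sketch of a distance-wise relation knowledge distillation (RKD) loss:
# the student is trained so that its normalised pairwise embedding distances
# match those of the teacher.
import torch
import torch.nn.functional as F

def pairwise_distances(e):                      # e: (batch, dim)
    d = torch.cdist(e, e, p=2)                  # (batch, batch) Euclidean distances
    mean = d[d > 0].mean()                      # normalise by mean non-zero distance
    return d / (mean + 1e-8)

def rkd_distance_loss(student_emb, teacher_emb):
    with torch.no_grad():
        t = pairwise_distances(teacher_emb)
    s = pairwise_distances(student_emb)
    return F.smooth_l1_loss(s, t)

student = torch.randn(16, 128, requires_grad=True)   # e.g. 3D video-branch features
teacher = torch.randn(16, 256)                        # e.g. 2D image-branch features
loss = rkd_distance_loss(student, teacher)
loss.backward()
print(loss.item())
```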

Enhanced Sarcopenia Detection in Nursing Home Residents Using Ultrasound Radiomics and Machine Learning.

Fu H, Luo S, Zhuo Y, Lian R, Chen X, Jiang W, Wang L, Yang M

pubmed paper · Aug 26 2025
Ultrasound alone has only low-to-moderate diagnostic accuracy for sarcopenia. We aimed to investigate whether ultrasound radiomics combined with machine learning enhances sarcopenia diagnostic accuracy compared with conventional ultrasound parameters among older adults in long-term care. Diagnostic accuracy study of 628 residents from 15 nursing homes in China. Sarcopenia was diagnosed according to the AWGS 2019 criteria. Ultrasound of the thigh muscles (rectus femoris [ReF], vastus intermedius [VI], and quadriceps femoris [QF]) was performed, and conventional parameters (muscle thickness [MT], echo intensity [EI]) and radiomic features were extracted. Participants were split into training (70%) and validation (30%) sets. Conventional (MT + EI), radiomics, and integrated (MT, EI, radiomics, and basic clinical data: age, sex, and body mass index) models were built using 5 machine learning algorithms, including logistic regression (LR). Performance was assessed in the validation set using the area under the receiver operating characteristic curve (AUC), calibration, and decision curve analysis (DCA). Sarcopenia prevalence was 61.9%. The LR algorithm consistently exhibited superior performance. The radiomics models were more accurate than the conventional ultrasound models for every muscle group: in the validation set, the AUCs (95% CIs) of the conventional models were 0.70 (0.63-0.78) for ReF, 0.73 (0.65-0.80) for VI, and 0.75 (0.68-0.82) for QF, versus 0.76 (0.69-0.83), 0.76 (0.69-0.83), and 0.78 (0.71-0.85) for the corresponding radiomics models. The integrated models further improved accuracy, achieving AUCs of 0.85 (0.79-0.91) for ReF, 0.81 (0.75-0.87) for VI, and 0.83 (0.77-0.90) for QF, and demonstrated good calibration and net benefit in DCA. Ultrasound radiomics, especially when integrated with conventional parameters and clinical data using LR, significantly improves sarcopenia diagnostic accuracy in nursing home residents. This accessible, noninvasive approach holds promise for enhancing sarcopenia screening and early detection in long-term care settings.
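A minimal sketch of the "integrated" logistic-regression idea, on synthetic data with assumed feature groupings (conventional, radiomic, clinical) and a 70/30 split, could look like this; it illustrates the workflow rather than the study's exact feature set or preprocessing.

```python
# Minimal sketch (synthetic data, assumed feature names): an integrated logistic
# regression combining conventional ultrasound measures, radiomic features, and
# basic clinical variables, evaluated with ROC AUC on a held-out 30% split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 628
X_conventional = rng.normal(size=(n, 2))     # muscle thickness, echo intensity
X_radiomics = rng.normal(size=(n, 20))       # selected radiomic features
X_clinical = rng.normal(size=(n, 3))         # age, sex, BMI (already encoded)
X = np.hstack([X_conventional, X_radiomics, X_clinical])
y = rng.integers(0, 2, n)                    # 1 = sarcopenia (AWGS 2019)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))
```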

MedVQA-TREE: A Multimodal Reasoning and Retrieval Framework for Sarcopenia Prediction

Pardis Moradbeiki, Nasser Ghadiri, Sayed Jalal Zahabi, Uffe Kock Wiil, Kristoffer Kittelmann Brockhattingen, Ali Ebrahimi

arxiv preprint · Aug 26 2025
Accurate sarcopenia diagnosis via ultrasound remains challenging due to subtle imaging cues, limited labeled data, and the absence of clinical context in most models. We propose MedVQA-TREE, a multimodal framework that integrates a hierarchical image interpretation module, a gated feature-level fusion mechanism, and a novel multi-hop, multi-query retrieval strategy. The vision module includes anatomical classification, region segmentation, and graph-based spatial reasoning to capture coarse, mid-level, and fine-grained structures. A gated fusion mechanism selectively integrates visual features with textual queries, while clinical knowledge is retrieved through a UMLS-guided pipeline accessing PubMed and a sarcopenia-specific external knowledge base. MedVQA-TREE was trained and evaluated on two public MedVQA datasets (VQA-RAD and PathVQA) and a custom sarcopenia ultrasound dataset. The model achieved up to 99% diagnostic accuracy and outperformed previous state-of-the-art methods by over 10%. These results underscore the benefit of combining structured visual understanding with guided knowledge retrieval for effective AI-assisted diagnosis in sarcopenia.
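A gated feature-level fusion step of the kind described can be sketched as a sigmoid gate over projected visual and textual features; the dimensions and the single-layer gate below are assumptions for illustration, not MedVQA-TREE's actual design.

```python
# Minimal sketch of gated feature-level fusion: a per-dimension sigmoid gate,
# computed from the concatenated projected features, decides how much of each
# modality to pass on.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, vis_dim, txt_dim, out_dim):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, out_dim)
        self.txt_proj = nn.Linear(txt_dim, out_dim)
        self.gate = nn.Sequential(nn.Linear(2 * out_dim, out_dim), nn.Sigmoid())

    def forward(self, vis, txt):
        v, t = self.vis_proj(vis), self.txt_proj(txt)
        g = self.gate(torch.cat([v, t], dim=-1))     # gate values in [0, 1]
        return g * v + (1 - g) * t

fusion = GatedFusion(vis_dim=512, txt_dim=768, out_dim=256)
fused = fusion(torch.randn(4, 512), torch.randn(4, 768))
print(fused.shape)   # torch.Size([4, 256])
```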

HarmonicEchoNet: Leveraging harmonic convolutions for automated standard plane detection in fetal heart ultrasound videos.

Sarker MMK, Mishra D, Alsharid M, Hernandez-Cruz N, Ahuja R, Patey O, Papageorghiou AT, Noble JA

pubmed paper · Aug 26 2025
Fetal echocardiography offers non-invasive, real-time imaging of the fetal heart to identify congenital heart conditions. Manual acquisition of standard heart views is time-consuming, whereas automated detection remains challenging due to high spatial similarity across anatomical views with subtle local image appearance variations. To address these challenges, we introduce a very lightweight frequency-guided deep learning model named HarmonicEchoNet that can automatically detect standard heart views in a transverse sweep or freehand ultrasound scan of the fetal heart. HarmonicEchoNet uses harmonic convolution blocks (HCBs) and a harmonic spatial and channel squeeze-and-excitation (hscSE) module. The HCBs apply a Discrete Cosine Transform (DCT)-based harmonic decomposition to input features, and the resulting responses are combined using learned weights. The hscSE module identifies significant regions in the spatial domain to improve feature extraction of fetal heart anatomical structures, capturing both spatial and channel-wise dependencies in an ultrasound image. The combination of these modules improves model performance relative to recent CNN-based, transformer-based, and CNN+transformer-based image classification models. We use four datasets from two private studies, PULSE (Perception Ultrasound by Learning Sonographic Experience) and CAIFE (Clinical Artificial Intelligence in Fetal Echocardiography), to develop and evaluate HarmonicEchoNet models. Experimental results show that HarmonicEchoNet is 10-15 times faster than ConvNeXt, DeiT, and VOLO, with an inference time of just 3.9 ms, and achieves a 2%-7% accuracy improvement in classifying fetal heart standard planes compared with these baselines. Furthermore, with just 19.9 million parameters compared to ConvNeXt's 196.24 million, HarmonicEchoNet is nearly ten times more parameter-efficient.
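A harmonic convolution block in this spirit convolves each input channel with a fixed bank of 2D DCT filters and then mixes the responses with learned 1x1 weights. The sketch below illustrates the idea; the basis construction and block layout are assumptions in the style of harmonic networks, not the HarmonicEchoNet implementation.

```python
# Minimal sketch of a harmonic convolution block: each output channel is a
# learned 1x1 combination of responses to a fixed 3x3 DCT-II filter bank.
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import dct

def dct_filter_bank(k=3):
    basis_1d = dct(np.eye(k), norm="ortho", axis=0)           # rows = 1-D DCT basis vectors
    filters = [np.outer(basis_1d[u], basis_1d[v]) for u in range(k) for v in range(k)]
    return torch.tensor(np.stack(filters), dtype=torch.float32)   # (k*k, k, k)

class HarmonicConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.register_buffer("bank", dct_filter_bank(k).unsqueeze(1))  # (k*k, 1, k, k)
        self.mix = nn.Conv2d(in_ch * k * k, out_ch, kernel_size=1)     # learned weights
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.k = k

    def forward(self, x):
        c = x.shape[1]
        # depthwise convolution of every input channel with every DCT filter
        resp = nn.functional.conv2d(x, self.bank.repeat(c, 1, 1, 1),
                                    padding=self.k // 2, groups=c)
        return self.act(self.bn(self.mix(resp)))

block = HarmonicConvBlock(in_ch=3, out_ch=16)
print(block(torch.randn(2, 3, 64, 64)).shape)   # torch.Size([2, 16, 64, 64])
```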

2D Ultrasound Elasticity Imaging of Abdominal Aortic Aneurysms Using Deep Neural Networks

Utsav Ratna Tuladhar, Richard Simon, Doran Mix, Michael Richards

arxiv preprint · Aug 25 2025
Abdominal aortic aneurysms (AAA) pose a significant clinical risk due to their potential for rupture, which is often asymptomatic but can be fatal. Although maximum diameter is commonly used for risk assessment, diameter alone is insufficient because it does not capture the material properties of the vessel wall, which play a critical role in determining the risk of rupture. To overcome this limitation, we propose a deep learning-based framework for elasticity imaging of AAAs with 2D ultrasound. Leveraging finite element simulations, we generate a diverse dataset of displacement fields with their corresponding modulus distributions. We train a U-Net model with a normalized mean squared error (NMSE) loss to infer the spatial modulus distribution from the axial and lateral components of the displacement fields. This model is evaluated across three experimental domains: digital phantom data from 3D COMSOL simulations, physical phantom experiments using biomechanically distinct vessel models, and clinical ultrasound exams from AAA patients. Our simulated results demonstrate that the proposed deep learning model is able to reconstruct modulus distributions, achieving an NMSE score of 0.73%. Similarly, in phantom data, the predicted modulus ratio closely matches the expected values, affirming the model's ability to generalize to phantom data. We compare our approach with an iterative method, which shows comparable performance but a higher computation time. In contrast, the deep learning method can provide quick and effective estimates of tissue stiffness from ultrasound images, which could help assess the risk of AAA rupture without invasive procedures.
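The NMSE objective mentioned above can be written as ||pred − target||² / ||target||², averaged over the batch. A minimal PyTorch version, with assumed tensor shapes for the two-channel (axial, lateral) displacement input and the modulus output, is sketched below.

```python
# Minimal sketch of a normalized mean squared error (NMSE) loss between a
# predicted and a reference modulus map; shapes are assumptions.
import torch

def nmse_loss(pred, target, eps=1e-8):
    # ||pred - target||^2 / ||target||^2, averaged over the batch
    num = ((pred - target) ** 2).flatten(1).sum(dim=1)
    den = (target ** 2).flatten(1).sum(dim=1) + eps
    return (num / den).mean()

displacements = torch.randn(4, 2, 128, 128)      # axial + lateral components (network input)
predicted_modulus = torch.rand(4, 1, 128, 128)   # stand-in for the U-Net output
true_modulus = torch.rand(4, 1, 128, 128)
print(f"NMSE = {nmse_loss(predicted_modulus, true_modulus):.4f}")
```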

TransSeg: Leveraging Transformer with Channel-Wise Attention and Semantic Memory for Semi-Supervised Ultrasound Segmentation.

Lyu J, Li L, Al-Hazzaa SAF, Wang C, Hossain MS

pubmed paper · Aug 25 2025
During labor, transperineal ultrasound imaging can acquire real-time midsagittal images, through which the pubic symphysis and fetal head can be accurately identified, and the angle of progression (AoP) between them can be calculated, thereby quantitatively evaluating the descent and position of the fetal head in the birth canal in real time. However, current segmentation methods based on convolutional neural networks (CNNs) and Transformers generally depend heavily on large-scale manually annotated data, which limits their adoption in practical applications. In light of this limitation, this paper develops a new Transformer-based Semi-supervised Segmentation Network (TransSeg). This method employs a Vision Transformer as the backbone network and introduces a Channel-wise Cross Attention (CCA) mechanism to effectively reconstruct the features of unlabeled samples into the labeled feature space, promoting architectural innovation in semi-supervised segmentation and eliminating the need for complex training strategies. In addition, we design a Semantic Information Storage (S-InfoStore) module and a Channel Semantic Update (CSU) strategy to dynamically store and update feature representations of unlabeled samples, thereby continuously enhancing their expressiveness in the feature space and significantly improving the model's utilization of unlabeled data. We conduct a systematic evaluation of the proposed method on the FH-PS-AoP dataset. Experimental results demonstrate that TransSeg outperforms existing mainstream methods across all evaluation metrics, verifying its effectiveness and advancement in semi-supervised semantic segmentation tasks.
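A channel-wise cross-attention step of the kind described, in which the channels of an unlabeled feature map are rebuilt as attention-weighted combinations of a labeled map's channels, might be sketched as follows; the flattened-spatial projections and single head are assumptions, not TransSeg's exact CCA.

```python
# Minimal sketch of channel-wise cross attention: each channel of the unlabeled
# feature map attends over the channels of the labeled feature map, so the
# unlabeled features are reconstructed in the labeled feature space.
import torch
import torch.nn as nn

class ChannelCrossAttention(nn.Module):
    def __init__(self, spatial_dim):
        super().__init__()
        self.q = nn.Linear(spatial_dim, spatial_dim)
        self.k = nn.Linear(spatial_dim, spatial_dim)
        self.v = nn.Linear(spatial_dim, spatial_dim)

    def forward(self, unlabeled, labeled):               # both: (B, C, H, W)
        B, C, H, W = unlabeled.shape
        u, l = unlabeled.flatten(2), labeled.flatten(2)  # channel tokens (B, C, H*W)
        q, k, v = self.q(u), self.k(l), self.v(l)
        attn = torch.softmax(q @ k.transpose(1, 2) / (H * W) ** 0.5, dim=-1)  # (B, C, C)
        return (attn @ v).view(B, C, H, W)               # unlabeled channels rebuilt from labeled ones

cca = ChannelCrossAttention(spatial_dim=32 * 32)
out = cca(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)   # torch.Size([2, 64, 32, 32])
```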

Breast Cancer Diagnosis Using a Dual-Modality Complementary Deep Learning Network With Integrated Attention Mechanism Fusion of B-Mode Ultrasound and Shear Wave Elastography.

Dong L, Cai X, Ge H, Sun L, Pan X, Sun F, Meng Q

pubmed paper · Aug 25 2025
To develop and evaluate a Dual-modality Complementary Feature Attention Network (DCFAN) that integrates spatial and stiffness information from B-mode ultrasound and shear wave elastography (SWE) for improved breast tumor classification and axillary lymph node (ALN) metastasis prediction. A total of 387 paired B-mode and SWE images from 218 patients were retrospectively analyzed. The proposed DCFAN incorporates attention mechanisms to effectively fuse structural features from B-mode ultrasound with stiffness features from SWE. Two classification tasks were performed: (1) differentiating benign from malignant tumors, and (2) classifying benign tumors, malignant tumors without ALN metastasis, and malignant tumors with ALN metastasis. Model performance was assessed using accuracy, sensitivity, specificity, and AUC, and compared with conventional CNN-based models and two radiologists with varying experience. In Task 1, DCFAN achieved an accuracy of 94.36% ± 1.45% and the highest AUC of 0.97. In Task 2, it attained 91.70% ± 3.77% accuracy and an average AUC of 0.83. The multimodal approach significantly outperformed the single-modality models in both tasks. Notably, in Task 1, DCFAN demonstrated higher specificity (94.9%) compared to the experienced radiologist (p = 0.002), and yielded higher F1-scores than both radiologists. It also outperformed several state-of-the-art deep learning models in diagnostic accuracy. DCFAN demonstrated robust and superior performance over existing CNN-based methods and radiologists in both breast tumor classification and ALN metastasis prediction. This approach may serve as a valuable assistive tool to enhance diagnostic accuracy in breast ultrasound.
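An attention-based fusion of B-mode and SWE feature vectors can be sketched with modality-level attention weights learned from the concatenated features; the two-branch layout, dimensions, and classification head below are assumptions for illustration, not the DCFAN architecture.

```python
# Minimal sketch of attention-weighted fusion of B-mode and SWE features,
# followed by a classification head (e.g. benign / malignant without ALN
# metastasis / malignant with ALN metastasis).
import torch
import torch.nn as nn

class DualModalityFusion(nn.Module):
    def __init__(self, dim, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, bmode_feat, swe_feat):             # both: (B, dim)
        w = self.attn(torch.cat([bmode_feat, swe_feat], dim=-1))   # (B, 2) modality weights
        fused = w[:, :1] * bmode_feat + w[:, 1:] * swe_feat
        return self.head(fused)

model = DualModalityFusion(dim=256, n_classes=3)
logits = model(torch.randn(8, 256), torch.randn(8, 256))
print(logits.shape)   # torch.Size([8, 3])
```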

ControlEchoSynth: Boosting Ejection Fraction Estimation Models via Controlled Video Diffusion

Nima Kondori, Hanwen Liang, Hooman Vaseli, Bingyu Xie, Christina Luong, Purang Abolmaesumi, Teresa Tsang, Renjie Liao

arxiv preprint · Aug 25 2025
Synthetic data generation represents a significant advancement in boosting the performance of machine learning (ML) models, particularly in fields where data acquisition is challenging, such as echocardiography. The acquisition and labeling of echocardiograms (echo) for heart assessment, crucial in point-of-care ultrasound (POCUS) settings, often encounter limitations due to the restricted number of echo views available, typically captured by operators with varying levels of experience. This study proposes a novel approach for enhancing clinical diagnosis accuracy by synthetically generating echo views. These views are conditioned on existing, real views of the heart, focusing specifically on the estimation of ejection fraction (EF), a critical parameter traditionally measured from biplane apical views. By integrating a conditional generative model, we demonstrate an improvement in EF estimation accuracy, providing a comparative analysis with traditional methods. Preliminary results indicate that our synthetic echoes, when used to augment existing datasets, not only enhance EF estimation but also show potential in advancing the development of more robust, accurate, and clinically relevant ML models. This approach is anticipated to catalyze further research in synthetic data applications, paving the way for innovative solutions in medical imaging diagnostics.
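A toy sketch of the augmentation idea, mixing synthetic clips (conditioned on real views and inheriting their EF labels) into a training pool: the generator call is only a placeholder, and all shapes and names are assumptions rather than the paper's pipeline.

```python
# Toy sketch (assumed names throughout): augmenting an ejection-fraction
# regression training set with synthetic echo clips from a conditional
# generator, then building a combined training loader.
import torch
from torch.utils.data import DataLoader, TensorDataset

real_clips = torch.randn(200, 1, 16, 112, 112)     # (N, C, frames, H, W)
real_ef = torch.rand(200) * 60 + 20                # EF in percent

def synthesize_conditioned_on(clips):              # placeholder for a conditional video model
    return clips + 0.1 * torch.randn_like(clips)

synthetic_clips = synthesize_conditioned_on(real_clips[:100])
synthetic_ef = real_ef[:100]                       # conditioned views share the source EF label

all_clips = torch.cat([real_clips, synthetic_clips])
all_ef = torch.cat([real_ef, synthetic_ef])
loader = DataLoader(TensorDataset(all_clips, all_ef), batch_size=8, shuffle=True)
print(f"training pool: {len(loader.dataset)} clips")
```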