
Transformer model based on Sonazoid contrast-enhanced ultrasound for microvascular invasion prediction in hepatocellular carcinoma.

Qin Q, Pang J, Li J, Gao R, Wen R, Wu Y, Liang L, Que Q, Liu C, Peng J, Lv Y, He Y, Lin P, Yang H

pubmed · May 19, 2025
Microvascular invasion (MVI) is strongly associated with the prognosis of patients with hepatocellular carcinoma (HCC). This study evaluated the value of Transformer models with Sonazoid contrast-enhanced ultrasound (CEUS) for the preoperative prediction of MVI. This retrospective study included 164 HCC patients. Deep learning features and radiomic features were extracted from arterial- and Kupffer-phase images, alongside the collection of clinicopathological parameters. Normality was assessed using the Shapiro-Wilk test. The Mann-Whitney U-test and the least absolute shrinkage and selection operator (LASSO) algorithm were applied to screen features. Transformer, radiomic, and clinical prediction models for MVI were constructed with logistic regression. The data were repeatedly split at random in a 7:3 ratio, and model performance was evaluated over 50 iterations. The area under the receiver operating characteristic curve (AUC, 95% confidence interval [CI]), sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), decision curves, and calibration curves were used to evaluate model performance. The DeLong test was applied to compare performance between models, and the Bonferroni method was used to control type I error rates arising from multiple comparisons. A two-sided p-value of < 0.05 was considered statistically significant. In the training set, the diagnostic performance of the arterial-phase Transformer (AT) and Kupffer-phase Transformer (KT) models was better than that of the radiomic and clinical (Clin) models (p < 0.0001). In the validation set, both the AT and KT models outperformed the radiomic and Clin models in terms of diagnostic performance (p < 0.05). The AUC (95% CI) for the AT model was 0.821 (0.72-0.925) with an accuracy of 80.0%, and that for the KT model was 0.859 (0.766-0.977) with an accuracy of 70.0%. Logistic regression analysis indicated that tumor size (p = 0.016) and alpha-fetoprotein (AFP) (p = 0.046) were independent predictors of MVI. Transformer models using Sonazoid CEUS have the potential to effectively identify MVI-positive patients preoperatively.
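
The feature-screening and evaluation loop is straightforward to reproduce in outline. Below is a minimal sketch, assuming placeholder data and scikit-learn defaults: LASSO screening followed by logistic regression, repeated over 50 random 7:3 splits and scored by AUC. Feature counts, hyperparameters, and data are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(164, 120))           # placeholder: deep-learning + radiomic features
y = rng.integers(0, 2, size=164)          # placeholder: MVI status (0/1)

aucs = []
for seed in range(50):                    # 50 repeated random splits
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)  # 7:3 ratio
    scaler = StandardScaler().fit(X_tr)
    X_tr_s, X_va_s = scaler.transform(X_tr), scaler.transform(X_va)
    keep = LassoCV(cv=5, random_state=seed).fit(X_tr_s, y_tr).coef_ != 0
    if not keep.any():                    # fall back if LASSO zeroes everything
        keep[:] = True
    clf = LogisticRegression(max_iter=1000).fit(X_tr_s[:, keep], y_tr)
    aucs.append(roc_auc_score(y_va, clf.predict_proba(X_va_s[:, keep])[:, 1]))

print(f"AUC over 50 splits: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```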

Portable Ultrasound Bladder Volume Measurement Over Entire Volume Range Using a Deep Learning Artificial Intelligence Model in a Selected Cohort: A Proof of Principle Study.

Jeong HJ, Seol A, Lee S, Lim H, Lee M, Oh SJ

pubmed · May 19, 2025
We aimed to prospectively investigate whether bladder volume measured using a deep learning artificial intelligence (AI) algorithm (AI-BV) is more accurate than that measured using the conventional method (C-BV) when using a portable ultrasound bladder scanner (PUBS). Patients who underwent filling cystometry because of lower urinary tract symptoms between January 2021 and July 2022 were enrolled. As the bladder was serially filled with normal saline from 0 mL to maximum cystometric capacity in 50 mL increments, C-BV was measured at each step using PUBS. Ultrasound images obtained during this process were manually annotated to define the bladder contour, which was used to build a deep learning AI model. The true bladder volume (T-BV) for each bladder volume range was compared with C-BV and AI-BV for analysis. We enrolled 250 patients (213 men and 37 women), and the deep learning AI model was established using 1912 bladder images. There was a significant difference between C-BV (205.5 ± 170.8 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.001), but no significant difference between AI-BV (197.0 ± 161.1 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.081). In the bladder volume ranges of 101-150, 151-200, and 201-300 mL, there were significant differences in the percentage volume differences between [C-BV and T-BV] and [AI-BV and T-BV] (p < 0.05), but no significant difference when converted to absolute values (p > 0.05). C-BV (R² = 0.91, p < 0.001) and AI-BV (R² = 0.90, p < 0.001) were highly correlated with T-BV. The mean difference between AI-BV and T-BV (6.5 ± 50.4 mL) was significantly smaller than that between C-BV and T-BV (15.0 ± 50.9 mL) (p = 0.001). Following image pre-processing, deep learning AI-BV estimated true bladder volume more accurately than the conventional method in this selected cohort on internal validation. Determination of the clinical relevance of these findings and of performance in external cohorts requires further study. The clinical trial was conducted using an approved product for its approved indication, so approval from the Ministry of Food and Drug Safety (MFDS) was not required; therefore, there is no clinical trial registration number.
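
For the headline comparison, here is a minimal sketch of the paired error analysis, assuming synthetic volumes that merely mimic the reported means and spreads rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t_bv = rng.uniform(50, 500, size=250)            # placeholder true volumes (mL)
c_bv = t_bv + rng.normal(15.0, 50.9, size=250)   # conventional scanner readings
ai_bv = t_bv + rng.normal(6.5, 50.4, size=250)   # AI model readings

err_c, err_ai = c_bv - t_bv, ai_bv - t_bv
print(f"C-BV error:  {err_c.mean():.1f} +/- {err_c.std():.1f} mL")
print(f"AI-BV error: {err_ai.mean():.1f} +/- {err_ai.std():.1f} mL")

# Paired test on absolute errors: is the AI measurement closer to the truth?
t_stat, p = stats.ttest_rel(np.abs(err_c), np.abs(err_ai))
print(f"paired t-test on |error|: t={t_stat:.2f}, p={p:.4f}")
```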

Development and validation of ultrasound-based radiomics deep learning model to identify bone erosion in rheumatoid arthritis.

Yan L, Xu J, Ye X, Lin M, Gong Y, Fang Y, Chen S

pubmed · May 19, 2025
To develop and validate a deep learning radiomics fusion model (DLR) based on ultrasound (US) images to identify bone erosion (BE) in rheumatoid arthritis (RA) patients. Data from 432 patients with RA at two institutions were collected. Three hundred twelve patients from center 1 were randomly divided into a training set (N = 218) and an internal test set (N = 94) in a 7:3 ratio, while 124 patients from center 2 served as an external test set. Radiomic (Rad) and deep learning (DL) features were extracted using hand-crafted radiomics and deep transfer learning networks. Least absolute shrinkage and selection operator regression was employed to establish the DLR fusion feature set from the Rad and DL features. Subsequently, 10 machine learning algorithms were used to construct models, and the final optimal model was selected. Model performance was evaluated using receiver operating characteristic (ROC) curves and decision curve analysis (DCA). The diagnostic efficacy of sonographers was compared with and without the assistance of the optimal model. Logistic regression (LR) was chosen as the optimal algorithm for model construction on account of its superior performance (Rad/DL/DLR: area under the curve [AUC] = 0.906/0.974/0.979) in the training set. In the internal test set, DLR_LR, as the final model, had the highest AUC (0.966), which was also validated in the external test set (AUC = 0.932). With the aid of the DLR_LR model, the overall performance of both junior and senior sonographers improved significantly (P < 0.05), and there was no significant difference between the junior sonographer with DLR_LR model assistance and the senior sonographer without assistance (P > 0.05). The DLR model based on US images is the best performer and is expected to become an important tool for identifying bone erosion in RA patients. Key Points: • The DLR model based on US images is the best performer in identifying BE in RA patients. • The DLR model may assist sonographers in improving the accuracy of BE evaluations.
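
A minimal sketch of the fusion-and-selection pipeline this abstract describes, with placeholder features and a representative subset of candidate classifiers (the abstract does not list all 10 algorithms):

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
rad = rng.normal(size=(312, 60))     # placeholder hand-crafted radiomic features
dl = rng.normal(size=(312, 128))     # placeholder deep transfer-learning features
y = rng.integers(0, 2, size=312)     # placeholder bone-erosion labels

X = np.hstack([rad, dl])                               # feature fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
keep = LassoCV(cv=5).fit(X_tr, y_tr).coef_ != 0        # DLR feature selection
if not keep.any():
    keep[:] = True

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(probability=True),
}
for name, model in models.items():                     # compare by test AUC
    model.fit(X_tr[:, keep], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, keep])[:, 1])
    print(f"{name}: AUC={auc:.3f}")
```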

Semiautomated segmentation of breast tumor on automatic breast ultrasound image using a large-scale model with customized modules.

Zhou Y, Ye M, Ye H, Zeng S, Shu X, Pan Y, Wu A, Liu P, Zhang G, Cai S, Chen S

pubmed · May 19, 2025
To verify the capability of the Segment Anything Model for medical images in 3D (SAM-Med3D), tailored with low-rank adaptation (LoRA) strategies, in segmenting breast tumors in Automated Breast Ultrasound (ABUS) images. This retrospective study collected data from 329 patients diagnosed with breast cancer (average age 54 years). The dataset was randomly divided into training (n = 204), validation (n = 29), and test (n = 59) sets. Two experienced radiologists manually annotated the regions of interest of each sample in the dataset, which served as the ground truth for training and evaluating the SAM-Med3D model with additional customized modules. For semi-automatic tumor segmentation, points were randomly sampled within the lesion areas to simulate radiologists' clicks in real-world scenarios. Segmentation performance was evaluated using the Dice similarity coefficient (DSC). A total of 492 cases (200 from the Tumor Detection, Segmentation, and Classification Challenge on Automated 3D Breast Ultrasound (TDSC-ABUS) 2023 challenge) were subjected to semi-automatic segmentation inference. The average DSC scores for the training, validation, and test sets of the Lishui dataset were 0.75, 0.78, and 0.75, respectively. The Breast Imaging Reporting and Data System (BI-RADS) categories of the samples ranged from BI-RADS 3 to 6, with average DSC between 0.73 and 0.77. When the samples (lesion volumes ranging from 1.64 to 100.03 cm³) were categorized by lesion size, the average DSC fell between 0.72 and 0.77. The overall average DSC for the TDSC-ABUS 2023 challenge dataset was 0.79, with the test set achieving a state-of-the-art score of 0.79. The SAM-Med3D model with additional customized modules demonstrates good performance in semi-automatic 3D ABUS breast cancer tumor segmentation, indicating its feasibility for application in computer-aided diagnosis systems.
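
Two ingredients of this evaluation, click simulation inside the ground-truth mask and the DSC metric, can be sketched directly. The masks below are synthetic placeholders, and SAM-Med3D itself is not invoked:

```python
import numpy as np

def sample_click_points(mask: np.ndarray, n_points: int = 3, seed: int = 0):
    """Sample voxel coordinates uniformly from within the lesion mask."""
    rng = np.random.default_rng(seed)
    coords = np.argwhere(mask > 0)
    idx = rng.choice(len(coords), size=min(n_points, len(coords)), replace=False)
    return coords[idx]

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary volumes."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

gt = np.zeros((64, 64, 64), dtype=bool)
gt[20:40, 20:40, 20:40] = True                 # placeholder lesion
pred = np.roll(gt, shift=2, axis=0)            # placeholder prediction
print("simulated clicks:\n", sample_click_points(gt))
print(f"DSC = {dice(pred, gt):.3f}")
```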

ChatGPT-4-Driven Liver Ultrasound Radiomics Analysis: Advantages and Drawbacks Compared to Traditional Techniques.

Sultan L, Venkatakrishna SSB, Anupindi S, Andronikou S, Acord M, Otero H, Darge K, Sehgal C, Holmes J

pubmed · May 18, 2025
Artificial intelligence (AI) is transforming medical imaging, with large language models such as ChatGPT-4 emerging as potential tools for automated image interpretation. While AI-driven radiomics has shown promise in diagnostic imaging, the efficacy of ChatGPT-4 in liver ultrasound analysis remains largely unexamined. This study evaluates the capability of ChatGPT-4 in liver ultrasound radiomics, specifically its ability to differentiate fibrosis, steatosis, and normal liver tissue, compared to conventional image analysis software. Seventy grayscale ultrasound images from a preclinical liver disease model, including fibrosis (n=31), fatty liver (n=18), and normal liver (n=21), were analyzed. ChatGPT-4 extracted texture features, which were compared to those obtained using Interactive Data Language (IDL), a traditional image analysis software. One-way ANOVA was used to identify statistically significant features differentiating liver conditions, and logistic regression models were employed to assess diagnostic performance. ChatGPT-4 extracted nine key textural features (echo intensity, heterogeneity, skewness, kurtosis, contrast, homogeneity, dissimilarity, angular second moment, and entropy), all of which differed significantly across liver conditions (p < 0.05). Among individual features, echo intensity achieved the highest F1-score (0.85). When the features were combined, ChatGPT-4 attained 76% accuracy and 83% sensitivity in classifying liver disease. ROC analysis demonstrated strong discriminatory performance, with AUC values of 0.75 for fibrosis, 0.87 for normal liver, and 0.97 for steatosis. Compared to IDL, ChatGPT-4 exhibited slightly lower sensitivity (0.83 vs. 0.89) but showed moderate correlation (R = 0.68, p < 0.0001) with IDL-derived features. However, it significantly outperformed IDL in processing efficiency, reducing analysis time by 40%, highlighting its potential for high-throughput radiomic analysis. Despite slightly lower sensitivity than IDL, ChatGPT-4 demonstrated high feasibility for ultrasound radiomics, offering faster processing, high-throughput analysis, and automated multi-image evaluation. These findings support its potential integration into AI-driven imaging workflows, with further refinements needed to enhance feature reproducibility and diagnostic accuracy.
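
The nine textural features are standard first-order and grey-level co-occurrence matrix (GLCM) statistics. A minimal sketch with scikit-image and SciPy on a placeholder image follows; the study itself used ChatGPT-4 and IDL, not this code path:

```python
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder B-mode image

# First-order features
features = {
    "echo_intensity": img.mean(),
    "heterogeneity": img.std(),
    "skewness": stats.skew(img.ravel()),
    "kurtosis": stats.kurtosis(img.ravel()),
}

# Second-order (GLCM) features
glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "dissimilarity", "ASM"):
    features[prop] = graycoprops(glcm, prop)[0, 0]
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))

for name, value in features.items():
    print(f"{name}: {value:.4f}")
```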

OpenPros: A Large-Scale Dataset for Limited View Prostate Ultrasound Computed Tomography

Hanchen Wang, Yixuan Wu, Yinan Feng, Peng Jin, Shihang Feng, Yiming Mao, James Wiskin, Baris Turkbey, Peter A. Pinto, Bradford J. Wood, Songting Luo, Yinpeng Chen, Emad Boctor, Youzuo Lin

arxiv preprint · May 18, 2025
Prostate cancer is one of the most common and lethal cancers among men, making its early detection critically important. Although ultrasound imaging offers greater accessibility and cost-effectiveness compared to MRI, traditional transrectal ultrasound methods suffer from low sensitivity, especially in detecting anteriorly located tumors. Ultrasound computed tomography (USCT) provides quantitative tissue characterization, but its clinical implementation faces significant challenges, particularly under the anatomically constrained limited-angle acquisition conditions specific to prostate imaging. To address these unmet needs, we introduce OpenPros, the first large-scale benchmark dataset explicitly developed for limited-view prostate USCT. Our dataset includes over 280,000 paired samples of realistic 2D speed-of-sound (SOS) phantoms and corresponding ultrasound full-waveform data, generated from anatomically accurate 3D digital prostate models derived from real clinical MRI/CT scans and ex vivo ultrasound measurements, annotated by medical experts. Simulations are conducted under clinically realistic configurations using advanced finite-difference time-domain and Runge-Kutta acoustic wave solvers, both provided as open-source components. Through comprehensive baseline experiments, we demonstrate that state-of-the-art deep learning methods surpass traditional physics-based approaches in both inference efficiency and reconstruction accuracy. Nevertheless, current deep learning models still fall short of delivering clinically acceptable high-resolution images with sufficient accuracy. By publicly releasing OpenPros, we aim to encourage the development of advanced machine learning algorithms capable of bridging this performance gap and producing clinically usable, high-resolution, and highly accurate prostate ultrasound images. The dataset is publicly accessible at https://open-pros.github.io/.
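
To make the forward-modeling step concrete, here is a toy second-order finite-difference time-domain (FDTD) update for the 2D acoustic wave equation over a speed-of-sound map. This is illustrative only and unrelated to the dataset's open-source solvers; the grid, source, inclusion, and wrap-around boundaries are simplifications:

```python
import numpy as np

nx, nz, nt = 200, 200, 500
dx, dt = 5e-4, 5e-8                        # 0.5 mm grid, 50 ns steps (CFL-stable)
sos = np.full((nz, nx), 1500.0)            # background SOS (water, m/s)
sos[80:120, 80:120] = 1600.0               # placeholder prostate-like inclusion

p_prev = np.zeros((nz, nx))
p_curr = np.zeros((nz, nx))
src_z, src_x = 10, nx // 2
trace = []

for it in range(nt):
    # Five-point Laplacian (periodic boundaries via np.roll, a toy shortcut)
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4 * p_curr) / dx**2
    p_next = 2 * p_curr - p_prev + (sos * dt) ** 2 * lap
    t = it * dt
    p_next[src_z, src_x] += np.exp(-((t - 5e-6) / 1e-6) ** 2)  # Gaussian pulse
    p_prev, p_curr = p_curr, p_next
    trace.append(p_curr[src_z, src_x + 40])   # one "receiver" sample per step

print(f"recorded {len(trace)} time samples; peak amplitude {max(trace):.3e}")
```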

Fair ultrasound diagnosis via adversarial protected attribute aware perturbations on latent embeddings.

Xu Z, Tang F, Quan Q, Yao Q, Kong Q, Ding J, Ning C, Zhou SK

pubmed · May 17, 2025
Deep learning techniques have significantly enhanced the convenience and precision of ultrasound image diagnosis, particularly in the crucial step of lesion segmentation. However, recent studies reveal that both train-from-scratch models and pre-trained models often exhibit performance disparities across sex and age attributes, leading to biased diagnoses for different subgroups. In this paper, we propose APPLE, a novel approach designed to mitigate unfairness without altering the parameters of the base model. APPLE achieves this by learning fair perturbations in the latent space through a generative adversarial network. Extensive experiments on both a publicly available dataset and an in-house ultrasound image dataset demonstrate that our method improves segmentation and diagnostic fairness across all sensitive attributes and various backbone architectures compared to the base models. Through this study, we aim to highlight the critical importance of fairness in medical segmentation and contribute to the development of a more equitable healthcare system.
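
The core idea, an adversarially learned perturbation on frozen latent embeddings, can be sketched in a few lines of PyTorch. Shapes, losses, and weights below are assumptions for illustration, not the authors' APPLE implementation:

```python
import torch
import torch.nn as nn

dim = 256                                         # assumed latent dimension
gen = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
adv = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    z = torch.randn(32, dim)                      # placeholder frozen embeddings
    a = torch.randint(0, 2, (32, 1)).float()      # protected attribute labels
    z_fair = z + gen(z)                           # perturbed embedding

    # Adversary: learn to recover the protected attribute
    opt_a.zero_grad()
    bce(adv(z_fair.detach()), a).backward()
    opt_a.step()

    # Generator: confuse the adversary, stay close to the original embedding
    opt_g.zero_grad()
    fool = bce(adv(z_fair), 0.5 * torch.ones_like(a))   # push towards chance
    keep = (z_fair - z).pow(2).mean()                   # preserve content
    (fool + 0.1 * keep).backward()
    opt_g.step()
```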

Foundation versus Domain-Specific Models for Left Ventricular Segmentation on Cardiac Ultrasound

Chao, C.-J., Gu, Y., Kumar, W., Xiang, T., Appari, L., Wu, J., Farina, J. M., Wraith, R., Jeong, J., Arsanjani, R., Garvan, K. C., Oh, J. K., Langlotz, C. P., Banerjee, I., Li, F.-F., Adeli, E.

medrxiv preprint · May 17, 2025
The Segment Anything Model (SAM) was fine-tuned on the EchoNet-Dynamic dataset and evaluated on external transthoracic echocardiography (TTE) and point-of-care ultrasound (POCUS) datasets from CAMUS (University Hospital of St Etienne) and the Mayo Clinic (99 patients: 58 TTE, 41 POCUS). The fine-tuned SAM was superior or comparable to MedSAM. It also outperformed the EchoNet and U-Net models, demonstrating strong generalization, especially on apical 2-chamber (A2C) images (fine-tuned SAM vs. EchoNet: CAMUS-A2C: DSC 0.891 ± 0.040 vs. 0.752 ± 0.196, p < 0.0001) and POCUS (DSC 0.857 ± 0.047 vs. 0.667 ± 0.279, p < 0.0001). Additionally, the SAM-enhanced workflow reduced annotation time by 50% (11.6 ± 4.5 sec vs. 5.7 ± 1.7 sec, p < 0.0001) while maintaining segmentation quality. We demonstrate an effective strategy for fine-tuning a vision foundation model to enhance clinical workflow efficiency and support human-AI collaboration.
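
A minimal sketch of this kind of fine-tuning with the segment-anything package: freeze the heavy image encoder and train only the mask decoder with a Dice-style loss. The checkpoint path, model variant, and dummy batch are assumptions; a real setup iterates a DataLoader of preprocessed echo frames and left-ventricle masks:

```python
import torch
from segment_anything import sam_model_registry

# Load a SAM checkpoint (path and variant assumed) and freeze everything
# except the lightweight mask decoder.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
for p in sam.image_encoder.parameters():
    p.requires_grad = False
for p in sam.prompt_encoder.parameters():
    p.requires_grad = False
opt = torch.optim.AdamW(sam.mask_decoder.parameters(), lr=1e-4)

def dice_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Soft Dice loss on mask logits."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + 1) / (prob.sum() + target.sum() + 1)

# One illustrative step on a dummy batch
img = torch.randn(1, 3, 1024, 1024)              # SAM's expected input size
with torch.no_grad():
    emb = sam.image_encoder(img)                 # frozen image embedding
sparse, dense = sam.prompt_encoder(points=None, boxes=None, masks=None)
low_res_masks, _ = sam.mask_decoder(
    image_embeddings=emb,
    image_pe=sam.prompt_encoder.get_dense_pe(),
    sparse_prompt_embeddings=sparse,
    dense_prompt_embeddings=dense,
    multimask_output=False,
)
target = torch.zeros_like(low_res_masks)         # placeholder LV mask
opt.zero_grad()
dice_loss(low_res_masks, target).backward()
opt.step()
```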

Exploring interpretable echo analysis using self-supervised parcels.

Majchrowska S, Hildeman A, Mokhtari R, Diethe T, Teare P

pubmed · May 17, 2025
The application of AI for predicting critical heart failure endpoints using echocardiography is a promising avenue to improve patient care and treatment planning. However, fully supervised training of deep learning models in medical imaging requires a substantial amount of labelled data, posing significant challenges due to the need for skilled medical professionals to annotate image sequences. Our study addresses this limitation by exploring the potential of self-supervised learning, emphasising interpretability, robustness, and safety as crucial factors in cardiac imaging analysis. We leverage self-supervised learning on a large unlabelled dataset, facilitating the discovery of features applicable to various downstream tasks. The backbone model not only generates informative features for training smaller models using simple techniques but also produces features that are interpretable by humans. The study employs a modified Self-supervised Transformer with Energy-based Graph Optimisation (STEGO) network on top of self-DIstillation with NO labels (DINO) as a backbone model, pre-trained on diverse medical and non-medical data. This approach facilitates the generation of self-segmented outputs, termed "parcels", which identify distinct anatomical sub-regions of the heart. Our findings highlight the robustness of these self-learned parcels across diverse patient profiles and phases of the cardiac cycle. Moreover, these parcels offer high interpretability and effectively encapsulate clinically relevant cardiac substructures. We conduct a comprehensive evaluation of the proposed self-supervised approach on publicly available datasets, demonstrating its adaptability to a wide range of requirements. Our results underscore the potential of self-supervised learning to address labelled data scarcity in medical imaging, offering a path to improved cardiac imaging analysis and enhanced efficiency and interpretability of diagnostic procedures, thus positively impacting patient care and clinical decision-making.
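
A simplified stand-in for the parcel idea: extract patch tokens from a pre-trained DINO ViT and cluster them into regions. K-means here replaces STEGO's energy-based graph optimisation, so this illustrates the concept rather than reproducing the authors' pipeline:

```python
import torch
from sklearn.cluster import KMeans

model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

img = torch.randn(1, 3, 224, 224)                # placeholder echo frame
with torch.no_grad():
    tokens = model.get_intermediate_layers(img, n=1)[0]   # (1, 1+196, 384)
patches = tokens[0, 1:]                          # drop CLS token -> (196, 384)

k = 6                                            # assumed number of parcels
labels = KMeans(n_clusters=k, n_init=10).fit_predict(patches.numpy())
parcels = labels.reshape(14, 14)                 # 224 / 16 = 14 patches per side
print(parcels)                                   # self-segmented "parcel" map
```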

Deep learning model based on ultrasound images predicts BRAF V600E mutation in papillary thyroid carcinoma.

Yu Y, Zhao C, Guo R, Zhang Y, Li X, Liu N, Lu Y, Han X, Tang X, Mao R, Peng C, Yu J, Zhou J

pubmed · May 16, 2025
BRAF V600E mutation status detection facilitates prognosis prediction in papillary thyroid carcinoma (PTC). We developed a deep learning model to determine BRAF V600E status in PTC. Patients with PTC from three centers were collected as the training set (1341 patients), validation set (148 patients), and external test set (135 patients). After testing the performance of the ResNeSt-50, Vision Transformer, and Swin Transformer V2 (SwinT) models, SwinT was chosen as the optimal backbone. An integrated BrafSwinT model was developed by combining the backbone with a radiomics feature branch and a clinical parameter branch. BrafSwinT demonstrated an AUC of 0.869 in the external test set, outperforming the original SwinT, Vision Transformer, and ResNeSt-50 models (AUC: 0.782-0.824; p value: 0.017-0.041). BrafSwinT showed promising results in determining BRAF V600E mutation status in PTC based on routinely acquired ultrasound images and basic clinical information, thus facilitating risk stratification.
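
The three-branch design reduces to feature concatenation ahead of a small classification head. A minimal sketch, with an assumed timm Swin backbone and made-up radiomics/clinical dimensions rather than the BrafSwinT specification:

```python
import torch
import torch.nn as nn
import timm

class FusionNet(nn.Module):
    def __init__(self, n_rad: int = 50, n_clin: int = 8):
        super().__init__()
        self.backbone = timm.create_model(
            "swinv2_tiny_window8_256", pretrained=False, num_classes=0)
        dim = self.backbone.num_features             # pooled image feature size
        self.head = nn.Sequential(
            nn.Linear(dim + n_rad + n_clin, 128), nn.ReLU(),
            nn.Linear(128, 1))                        # BRAF V600E logit

    def forward(self, img, rad, clin):
        # Concatenate image, radiomics, and clinical branches
        feats = torch.cat([self.backbone(img), rad, clin], dim=1)
        return self.head(feats)

net = FusionNet()
logit = net(torch.randn(2, 3, 256, 256), torch.randn(2, 50), torch.randn(2, 8))
print(torch.sigmoid(logit).shape)                     # (2, 1) mutation probability
```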