
Intra- and Peritumoral Radiomics Based on Ultrasound Images for Preoperative Differentiation of Follicular Thyroid Adenoma, Carcinoma, and Follicular Tumor With Uncertain Malignant Potential.

Fu Y, Mei F, Shi L, Ma Y, Liang H, Huang L, Fu R, Cui L

PubMed · May 10, 2025
Differentiating between follicular thyroid adenoma (FTA), follicular thyroid carcinoma (FTC), and follicular tumor with uncertain malignant potential (FT-UMP) remains challenging because of their overlapping ultrasound characteristics. This retrospective study aimed to improve preoperative diagnostic accuracy using intra- and peritumoral radiomics based on ultrasound images. We collected post-thyroidectomy ultrasound images from 774 patients diagnosed with FTA (n = 429), FTC (n = 158), or FT-UMP (n = 187) between January 2018 and December 2023. Six peritumoral regions were generated by expanding the tumor boundary by 5%-30% in 5% increments, with the Segment Anything Model using prompt learning to detect the field of view and constrain the expanded boundaries. A stepwise classification strategy addressed three tasks: distinguishing FTA from the other types (task 1), differentiating FTC from FT-UMP (task 2), and classifying all three tumors. Diagnostic models were developed by combining radiomic features from the tumor and peritumoral regions with clinical characteristics. Clinical characteristics combined with intratumoral and 5% peritumoral radiomic features performed best across all tasks (test set: areas under the curve of 0.93 for task 1 and 0.90 for task 2; diagnostic accuracy, 79.9%). The DeLong test indicated that every peritumoral expansion significantly improved on the performance of intratumoral radiomics and clinical characteristics alone (p < 0.04); the 5% peritumoral region performed best, although not all comparisons reached significance (p = 0.01-0.91). Ultrasound-based intratumoral and peritumoral radiomics can substantially enhance preoperative diagnostic accuracy for FTA, FTC, and FT-UMP, supporting better treatment strategies and patient outcomes. Furthermore, the 5% peritumoral area may indicate regions of potential tumor invasion that warrant further investigation.
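
As context for the peritumoral expansions described above, here is a minimal, hypothetical sketch (NumPy/SciPy morphological dilation, not the authors' Segment-Anything-based pipeline) of how a 5% peritumoral ring might be derived from a binary tumor mask.

```python
# Illustrative only: expand a binary tumor mask outward by a fixed fraction of its
# equivalent radius to define a peritumoral ring, as is common in intra-/peritumoral
# radiomics studies. Boundary handling and field-of-view constraints are omitted.
import numpy as np
from scipy import ndimage

def peritumoral_ring(mask: np.ndarray, expand_fraction: float = 0.05) -> np.ndarray:
    """Return a ring around `mask` whose width is `expand_fraction` of the
    tumor's equivalent circular radius (e.g. 0.05 for a 5% expansion)."""
    area = mask.sum()
    radius = np.sqrt(area / np.pi)                 # equivalent radius in pixels
    width = max(1, int(round(radius * expand_fraction)))
    dilated = ndimage.binary_dilation(mask, iterations=width)
    return dilated & ~mask.astype(bool)            # peritumoral ring only

# Toy example: a circular "tumor" mask
yy, xx = np.mgrid[:256, :256]
tumor = (yy - 128) ** 2 + (xx - 128) ** 2 < 40 ** 2
ring_5pct = peritumoral_ring(tumor, 0.05)
print(tumor.sum(), ring_5pct.sum())
```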

UltrasOM: A mamba-based network for 3D freehand ultrasound reconstruction using optical flow.

Sun R, Liu C, Wang W, Song Y, Sun T

PubMed · May 10, 2025
Three-dimensional (3D) ultrasound (US) reconstruction is of significant clinical value, offering safety, portability, low cost, and real-time capability. 3D freehand ultrasound reconstruction aims to eliminate the need for tracking devices, relying solely on image data to infer the spatial relationships between frames. However, inherent jitter during handheld scanning introduces significant inaccuracies, making current methods ineffective at precisely predicting the spatial motion of ultrasound image frames. This leads to substantial cumulative errors over long sequences, producing deformations or artifacts in the reconstructed volume. To address these challenges, we propose UltrasOM, a 3D ultrasound reconstruction network designed for spatial relative motion estimation. First, we designed a video embedding module that integrates optical-flow dynamics with the original static information to enhance motion-change features between frames. Next, we developed a Mamba-based spatiotemporal attention module that uses multi-layer stacked Space-Time Blocks to capture global spatiotemporal correlations within video frame sequences. Finally, we incorporated a correlation loss and a motion-speed loss to prevent overfitting to scanning speed and pose, enhancing the model's generalization capability. Experimental results on a dataset of 200 forearm cases comprising 58,011 frames demonstrated that the proposed method achieved a final drift rate (FDR) of 10.24%, a frame-to-frame distance error (DE) of 7.34 mm, a symmetric Hausdorff distance error (HD) of 10.81 mm, and a mean angular error (MEA) of 2.05°, outperforming state-of-the-art methods by 13.24%, 15.11%, 3.57%, and 6.32%, respectively. By integrating optical-flow features and deeply exploiting contextual spatiotemporal dependencies, the proposed network can directly predict the relative motion between multiple ultrasound frames without tracking, surpassing the accuracy of existing methods.
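
For illustration only: a small sketch of the general idea of pairing dense optical flow with the static frame as a motion-aware input, using OpenCV's Farneback flow as a stand-in. The paper's video embedding module is a learned component and is not reproduced here.

```python
# Hedged sketch: compute dense optical flow between consecutive grayscale ultrasound
# frames and stack it with the current frame as a [intensity, flow_x, flow_y] input.
import cv2
import numpy as np

def flow_augmented_input(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """prev_frame, curr_frame: uint8 grayscale images of identical shape.
    Returns an (H, W, 3) float32 array."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, curr_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    intensity = curr_frame.astype(np.float32)[..., None] / 255.0
    return np.concatenate([intensity, flow.astype(np.float32)], axis=-1)

# Synthetic demo: simulate a small lateral probe motion between two frames
prev = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
curr = np.roll(prev, shift=2, axis=1)
x = flow_augmented_input(prev, curr)
print(x.shape)   # (256, 256, 3)
```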

Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

H M Dipu Kabir, Subrota Kumar Mondal, Mohammad Ali Moni

arXiv preprint · May 10, 2025
This paper proposes batch augmentation with unimodal fine-tuning to detect fetal organs from ultrasound images and associated clinical text. We also propose pre-training the initial layers on the investigated medical data before multimodal training. First, we apply a transferred initialization to the unimodal image portion of the dataset with batch augmentation; this step adapts the initial-layer weights to medical data. Then, we apply neural networks (NNs) with the fine-tuned initial layers to image batches with batch augmentation to obtain image features. We also extract information from the image descriptions and combine it with the image features to train the head layer. We wrote a dataloader script that loads the multimodal data and applies existing unimodal image augmentation techniques with batch augmentation; the dataloader draws a new random augmentation for each batch to improve generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods, and we achieve near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. The scripts for the proposed method and its traditional counterparts are available at github.com/dipuk0506/multimodal.
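
To make the batch-level augmentation idea concrete, here is a hedged sketch of a hypothetical PyTorch collate function that samples one augmentation per batch; the authors' released dataloader script (see the repository above) may differ.

```python
# Sketch of "a new random augmentation for each batch": one transform is drawn per
# batch and applied to every image in that batch of (image, text, label) samples.
import random
import torch
from torchvision import transforms

BATCH_AUGS = [
    transforms.RandomHorizontalFlip(p=1.0),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
]

def batch_augment_collate(batch):
    """batch: list of (image_tensor, text, label). One augmentation per batch."""
    aug = random.choice(BATCH_AUGS)
    images = torch.stack([aug(img) for img, _, _ in batch])
    texts = [t for _, t, _ in batch]
    labels = torch.tensor([y for _, _, y in batch])
    return images, texts, labels

# Tiny demo with fake multimodal samples; in practice pass this to DataLoader via
# collate_fn=batch_augment_collate.
fake_batch = [(torch.rand(3, 224, 224), "fetal abdomen plane", 1) for _ in range(3)]
imgs, txts, labels = batch_augment_collate(fake_batch)
print(imgs.shape, labels)
```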

Dynamic AI Ultrasound-Assisted Diagnosis System to Reduce Unnecessary Fine Needle Aspiration of Thyroid Nodules.

Li F, Tao S, Ji M, Liu L, Qin Z, Yang X, Wu R, Zhan J

PubMed · May 9, 2025
This study compares the diagnostic efficiency of the American College of Radiology Thyroid Imaging, Reporting and Data System (ACR TI-RADS), fine-needle aspiration (FNA) cytopathology alone, and a dynamic artificial intelligence (AI) diagnostic system. A total of 1035 patients from three hospitals were included: 590 in the retrospective dataset and 445 in the prospective dataset. The diagnostic accuracy of the dynamic AI system for thyroid nodules was evaluated against the gold standard of postoperative pathology. The sensitivity, specificity, ROC curves, and agreement (κ) with the gold standard were analyzed for the AI system and FNA. The dynamic AI diagnostic system showed good diagnostic stability across ages, sexes, and nodule sizes. Its diagnostic AUC improved significantly over ACR TI-RADS, from 0.89 to 0.93. Compared with FNA cytopathology, the diagnostic efficacy of the dynamic AI system showed no statistically significant difference in either the retrospective or the prospective cohort. The dynamic AI diagnostic system enhances the accuracy of ACR TI-RADS-based diagnoses and has the potential to replace biopsies, reducing the need for invasive procedures.
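
As a generic illustration of the evaluation metrics mentioned above (AUC and κ agreement with postoperative pathology), the snippet below uses scikit-learn on synthetic scores; it is not the study's analysis code.

```python
# Synthetic example: compare ROC AUC of an AI score versus a TI-RADS-style score,
# and measure agreement of binarized AI output with the pathology gold standard.
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)
pathology = rng.integers(0, 2, 200)                     # 0 = benign, 1 = malignant
ai_score = np.clip(pathology * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)
tirads_score = np.clip(pathology * 0.4 + rng.normal(0.35, 0.3, 200), 0, 1)

print("AI AUC:      ", roc_auc_score(pathology, ai_score))
print("TI-RADS AUC: ", roc_auc_score(pathology, tirads_score))
print("AI vs pathology kappa:",
      cohen_kappa_score(pathology, (ai_score > 0.5).astype(int)))
```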

CirnetamorNet: An ultrasonic temperature measurement network for microwave hyperthermia based on deep learning.

Cui F, Du Y, Qin L, Li B, Li C, Meng X

PubMed · May 9, 2025
Microwave thermotherapy is a promising approach to cancer treatment, but accurate noninvasive temperature monitoring remains challenging. This study aims to achieve accurate temperature prediction during microwave thermotherapy by efficiently integrating multi-feature data, thereby improving the accuracy and reliability of noninvasive thermometry. We propose an enhanced recurrent neural network architecture, CirnetamorNet. The experimental data acquisition system uses a tissue-mimicking material to construct the body model. Ultrasound images were collected at different temperatures, and five parameters with high temperature correlation were extracted from the grayscale covariance matrix and the Homodyned-K distribution. Using these multi-feature data as input and temperature prediction as output, the CirnetamorNet model was constructed with a multi-head attention mechanism. Model performance was evaluated via training loss, prediction mean squared error, and accuracy, and ablation experiments assessed the contribution of each module. Compared with common models, CirnetamorNet performs well, with a training loss as low as 1.4589 and a mean squared error of only 0.1856; its temperature prediction accuracy of 0.3°C exceeds that of many advanced models. Ablation experiments show that removing any key module degrades performance, confirming that the modules jointly contribute to the model's performance. The proposed CirnetamorNet model exhibits exceptional performance in noninvasive thermometry for microwave thermotherapy, offers a novel approach to multi-feature data fusion in medicine, and holds significant practical value.
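
The temperature-correlated parameters above are drawn from the grayscale covariance matrix and the Homodyned-K distribution; as a loose, assumption-laden stand-in, the sketch below extracts a handful of gray-level co-occurrence matrix (GLCM) texture features with scikit-image, a related but different texture descriptor, to show what a small feature vector for a regression network might look like.

```python
# Illustrative texture-feature extraction from a grayscale ultrasound patch.
# The specific descriptors and the Homodyned-K parameters used in the paper
# are not reproduced here.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch: np.ndarray) -> np.ndarray:
    """patch: uint8 grayscale image. Returns a small feature vector."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).mean()
             for prop in ("contrast", "homogeneity", "energy", "correlation")]
    feats.append(patch.mean() / 255.0)   # simple intensity term as a fifth feature
    return np.array(feats, dtype=np.float32)

patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(texture_features(patch))
```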

Artificial intelligence applied to ultrasound diagnosis of pelvic gynecological tumors: a systematic review and meta-analysis.

Geysels A, Garofalo G, Timmerman S, Barreñada L, De Moor B, Timmerman D, Froyman W, Van Calster B

PubMed · May 8, 2025
To perform a systematic review on artificial intelligence (AI) studies focused on identifying and differentiating pelvic gynecological tumors on ultrasound scans. Studies developing or validating AI models for diagnosing gynecological pelvic tumors on ultrasound scans were eligible for inclusion. We systematically searched PubMed, Embase, Web of Science, and Cochrane Central from their database inception until April 30th, 2024. To assess the quality of the included studies, we adapted the QUADAS-2 risk of bias tool to address the unique challenges of AI in medical imaging. Using multi-level random effects models, we performed a meta-analysis to generate summary estimates of the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. To provide a reference point of current diagnostic support tools for ultrasound examiners, we descriptively compared the pooled performance to that of the well-recognized ADNEX model on external validation. Subgroup analyses were performed to explore sources of heterogeneity. From 9151 records retrieved, 44 studies were eligible: 40 on ovarian, three on endometrial, and one on myometrial pathology. Overall, 95% were at high risk of bias - primarily due to inappropriate study inclusion criteria, the absence of a patient-level split of training and testing image sets, and no calibration assessment. For ovarian tumors, the summary AUC for AI models distinguishing benign from malignant tumors was 0.89 (95% CI: 0.85-0.92). In lower-risk studies (at least three low-risk domains), the summary AUC dropped to 0.87 (0.83-0.90), with deep learning models outperforming radiomics-based machine learning approaches in this subset. Only five studies included an external validation, and six evaluated calibration performance. In a recent systematic review of external validation studies, the ADNEX model had a pooled AUC of 0.93 (0.91-0.94) in studies at low risk of bias. Studies on endometrial and myometrial pathologies were reported individually. Although AI models show promising discriminative performances for diagnosing gynecological tumors on ultrasound, most studies have methodological shortcomings that result in a high risk of bias. In addition, the ADNEX model appears to outperform most AI approaches for ovarian tumors. Future research should emphasize robust study designs - ideally large, multicenter, and prospective cohorts that mirror real-world populations - along with external validation, proper calibration, and standardized reporting. This study was pre-registered with Open Science Framework (OSF): https://doi.org/10.17605/osf.io/bhkst.
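
The pooled AUC estimates above come from multi-level random-effects models; as a simplified, hedged illustration of the underlying idea (not the authors' model), the sketch below pools study-level AUCs with the classic DerSimonian-Laird random-effects estimator, using hypothetical AUCs and standard errors.

```python
# DerSimonian-Laird random-effects pooling of study-level estimates.
import numpy as np

def dersimonian_laird(estimates, std_errors):
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(std_errors, dtype=float) ** 2
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

aucs = [0.91, 0.87, 0.85, 0.93, 0.88]              # hypothetical study AUCs
ses = [0.02, 0.03, 0.04, 0.02, 0.03]               # hypothetical standard errors
print(dersimonian_laird(aucs, ses))                # pooled AUC with 95% CI
```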

Hierarchical diagnosis of breast phyllodes tumors enabled by deep learning of ultrasound images: a retrospective multi-center study.

Yan Y, Liu Y, Wang Y, Jiang T, Xie J, Zhou Y, Liu X, Yan M, Zheng Q, Xu H, Chen J, Sui L, Chen C, Ru R, Wang K, Zhao A, Li S, Zhu Y, Zhang Y, Wang VY, Xu D

PubMed · May 8, 2025
Phyllodes tumors (PTs) are rare breast tumors with high recurrence rates; current approaches, which rely on post-resection pathology, often delay detection and require further surgery. We propose a deep-learning-based Phyllodes Tumors Hierarchical Diagnosis Model (PTs-HDM) for preoperative identification and grading. Ultrasound images from five hospitals were retrospectively collected, with all patients having surgical pathological confirmation of either PTs or fibroadenomas (FAs). PTs-HDM follows a two-stage classification: it first distinguishes PTs from FAs and then grades PTs as benign or borderline/malignant. Model performance metrics, including AUC and accuracy, were quantitatively evaluated. The algorithm's diagnostic capabilities were compared with those of radiologists of varying clinical experience in an external validation cohort, and we systematically assessed how PTs-HDM's automated classification outputs and associated thermal activation maps improved radiologists' diagnostic concordance and classification accuracy. A total of 712 patients were included. On the external test set, PTs-HDM achieved an AUC of 0.883 and an accuracy of 87.3% for PT vs. FA classification; subgroup analysis showed high accuracy for tumors < 2 cm (90.9%). In hierarchical classification, the model obtained an AUC of 0.856 and an accuracy of 80.9%. Radiologists' performance improved with PTs-HDM assistance: binary classification accuracy increased from 82.7%, 67.7%, and 64.2% to 87.6%, 76.6%, and 82.1% for senior, attending, and resident radiologists, respectively, and their hierarchical classification AUCs improved from 0.566-0.827 to 0.725-0.837. PTs-HDM also enhanced inter-radiologist consistency (kappa: -0.05 to 0.41 without assistance vs. 0.12 to 0.65 with assistance; intraclass correlation coefficient: 0.19 vs. 0.45). PTs-HDM shows strong diagnostic performance, especially for small lesions, and improves radiologists' accuracy across all experience levels, bridging diagnostic gaps and providing reliable support for the hierarchical diagnosis of PTs.
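
The two-stage logic of PTs-HDM can be sketched as a simple cascade; the snippet below uses hypothetical stand-in models (`pt_vs_fa`, `pt_grading`) purely to illustrate the control flow, not the released network.

```python
# Illustrative hierarchical cascade: stage 1 separates phyllodes tumors (PT) from
# fibroadenomas (FA); stage 2 grades predicted PTs as benign vs borderline/malignant.
import numpy as np

def hierarchical_predict(image_batch, pt_vs_fa, pt_grading, pt_threshold=0.5):
    """image_batch: array of preprocessed ultrasound images.
    pt_vs_fa / pt_grading: callables returning per-image probabilities."""
    p_pt = pt_vs_fa(image_batch)                           # P(phyllodes tumor)
    labels = np.where(p_pt >= pt_threshold, "PT", "FA").astype(object)
    pt_idx = np.flatnonzero(p_pt >= pt_threshold)
    if pt_idx.size:
        p_high = pt_grading(image_batch[pt_idx])           # P(borderline/malignant)
        for i, p in zip(pt_idx, p_high):
            labels[i] = "PT-borderline/malignant" if p >= 0.5 else "PT-benign"
    return labels

# Dummy stand-ins just to show the control flow
fake_stage1 = lambda x: np.random.rand(len(x))
fake_stage2 = lambda x: np.random.rand(len(x))
print(hierarchical_predict(np.zeros((4, 224, 224)), fake_stage1, fake_stage2))
```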

Effective data selection via deep learning processes and corresponding learning strategies in ultrasound image classification.

Lee H, Kwak JY, Lee E

PubMed · May 8, 2025
In this study, we propose a novel approach to enhancing transfer learning by optimizing data selection through deep learning techniques and corresponding learning strategies. The method is particularly beneficial when the available dataset has reached its limit and cannot be expanded further; it focuses on maximizing the use of existing data to improve learning outcomes, offering an effective solution for data-limited medical imaging classification. The proposed method consists of two stages. In the first stage, an original network performs the initial classification. When the original network exhibits low confidence in its predictions, the ambiguous classifications are passed to a secondary decision-making step involving a newly trained network, referred to as the True network. The True network shares the same architecture as the original network but is trained on a subset of the original dataset selected by consensus among multiple independent networks. It is then used to verify the classification results of the original network, identifying and correcting misclassified images. To evaluate the approach, we conducted experiments on thyroid nodule ultrasound images using the ResNet101 and Vision Transformer architectures along with eleven other pre-trained neural networks. The proposed method improved all five key metrics (accuracy, sensitivity, specificity, F1-score, and AUC) compared with using only the original or True networks with ResNet101. The True network also performed strongly with the Vision Transformer, and similar enhancements were observed across multiple convolutional neural network architectures. Furthermore, to assess robustness and adaptability across medical imaging modalities, we applied the method to dermoscopic images and observed similar performance gains. These results demonstrate the effectiveness of our approach in improving transfer-learning-based medical image classification without requiring additional training data.
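
The confidence-gated hand-off from the original network to the True network can be illustrated with a short PyTorch routine; the softmax-confidence threshold used here is an assumption, and the tiny linear networks are placeholders, not the paper's trained models.

```python
# Sketch of confidence-gated two-network inference: low-confidence predictions from
# the original network are re-examined by the True network.
import torch
import torch.nn.functional as F

@torch.no_grad()
def gated_predict(x, original_net, true_net, conf_threshold=0.8):
    """x: batch of images. Returns class predictions after the gated re-check."""
    logits = original_net(x)
    probs = F.softmax(logits, dim=1)
    conf, preds = probs.max(dim=1)
    uncertain = conf < conf_threshold
    if uncertain.any():
        true_logits = true_net(x[uncertain])
        preds[uncertain] = true_logits.argmax(dim=1)
    return preds

# Example with tiny stand-in networks on random inputs
original_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
true_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
print(gated_predict(torch.randn(8, 3, 32, 32), original_net, true_net))
```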

Construction of risk prediction model of sentinel lymph node metastasis in breast cancer patients based on machine learning algorithm.

Yang Q, Liu C, Wang Y, Dong G, Sun J

PubMed · May 8, 2025
The aim of this study was to develop and validate a machine learning (ML)-based prediction model for sentinel lymph node metastasis in breast cancer to identify patients at high risk of metastasis. We retrospectively collected data on 225 female breast cancer patients who underwent sentinel lymph node biopsy (SLNB). Feature screening was performed using logistic regression analysis. Subsequently, five ML algorithms (LOGIT, LASSO, XGBoost, random forest, and GBM) were employed to train and develop the models. In addition, model interpretation was performed with Shapley Additive Explanations (SHAP) analysis to clarify the importance of each feature and the model's decision basis. Combined univariate and multivariate logistic regression analysis identified Multifocal, LVI, Maximum Diameter, Shape US, and Maximum Cortical Thickness as significant predictors. We then leveraged the ML algorithms, particularly the random forest model, to develop a predictive model for sentinel lymph node metastasis in breast cancer. Finally, the SHAP analysis identified Maximum Diameter and Maximum Cortical Thickness as the primary factors driving the model's predictions. By integrating pathological and imaging characteristics, ML algorithms can accurately predict sentinel lymph node metastasis in breast cancer patients, and the random forest model showed the best performance. Incorporating these models into the clinic can help clinicians identify patients at risk of sentinel lymph node metastasis and make more reasonable treatment decisions.
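
For readers unfamiliar with the SHAP workflow referenced above, the following sketch trains a random forest on synthetic stand-ins for the reported predictors and computes SHAP values; it uses the `shap` package's TreeExplainer and is not the study's code or data.

```python
# Synthetic demonstration of random-forest training plus SHAP-based interpretation.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "Multifocal": rng.integers(0, 2, 225),
    "LVI": rng.integers(0, 2, 225),
    "Maximum Diameter": rng.normal(20, 8, 225),
    "Shape US": rng.integers(0, 2, 225),
    "Maximum Cortical Thickness": rng.normal(3, 1, 225),
})
# Synthetic outcome loosely driven by diameter and cortical thickness
y = (0.05 * X["Maximum Diameter"] + 0.8 * X["Maximum Cortical Thickness"]
     + rng.normal(0, 1, 225) > 3.5).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# shap.summary_plot(shap_values, X)   # visual ranking of predictors
print(model.feature_importances_)
```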

Machine learning model for diagnosing salivary gland adenoid cystic carcinoma based on clinical and ultrasound features.

Su HZ, Li ZY, Hong LC, Wu YH, Zhang F, Zhang ZB, Zhang XD

PubMed · May 8, 2025
To develop and validate machine learning (ML) models for diagnosing adenoid cystic carcinoma (ACC) of the salivary glands based on clinical and ultrasound features. A total of 365 patients with ACC or non-ACC salivary gland lesions treated at two centers were enrolled in the training, internal validation, and external validation cohorts. The synthetic minority oversampling technique was used to address class imbalance. Least absolute shrinkage and selection operator (LASSO) regression identified the optimal features, which were then used to construct predictive models with five ML algorithms. Model performance was evaluated across a comprehensive array of metrics, most prominently the area under the receiver operating characteristic curve (AUC). LASSO regression identified six key features (sex, pain symptoms, number, cystic areas, rat tail sign, and polar vessel), which were used to develop the five ML models. Among these, the support vector machine (SVM) model demonstrated superior performance, achieving the highest AUCs of 0.899 and 0.913, accuracies of 90.54% and 91.53%, and F1 scores of 0.774 and 0.783 in the internal and external validation cohorts, respectively. Decision curve analysis further showed that the SVM model offered greater clinical utility than the other models. The ML model based on clinical and US features thus provides an accurate and noninvasive method for distinguishing ACC from non-ACC and serves as a valuable tool for identifying salivary gland adenoid cystic carcinoma. Rat tail sign and polar vessel on US predict ACC; ML models based on clinical and US features can identify ACC; and the SVM model performed robustly and accurately.
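
A hedged end-to-end sketch of the reported ingredients (SMOTE, LASSO-style feature selection, and an SVM) on synthetic data is shown below; here LASSO selection is approximated with an L1-penalized logistic regression inside scikit-learn's SelectFromModel, which may differ from the authors' implementation.

```python
# Minimal pipeline sketch: oversample the minority class, select features with an
# L1-penalized model, then fit an RBF SVM with probability output.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(365, 20))                    # 20 candidate clinical/US features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 365) > 1.2).astype(int)  # minority positives
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
model = Pipeline([
    ("scale", StandardScaler()),
    ("lasso_select", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.5))),
    ("svm", SVC(kernel="rbf", probability=True)),
]).fit(X_res, y_res)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```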