
Clinical benefits of deep learning-assisted ultrasound in predicting lymph node metastasis in pancreatic cancer patients.

Wen DY, Chen JM, Tang ZP, Pang JS, Qin Q, Zhang L, He Y, Yang H

PubMed · Jun 23, 2025
This study aimed to develop and validate a deep learning radiomics nomogram (DLRN) derived from ultrasound images to improve predictive accuracy for lymph node metastasis (LNM) in pancreatic cancer (PC) patients. A retrospective analysis of 249 histopathologically confirmed PC cases, including 78 with LNM, was conducted, with an 8:2 division into training and testing cohorts. Eight transfer learning models and a baseline logistic regression model incorporating handcrafted radiomic and clinicopathological features were developed to evaluate predictive performance. Diagnostic effectiveness was assessed for junior and senior ultrasound physicians, both with and without DLRN assistance. InceptionV3 showed the highest performance among DL models (AUC = 0.844), while the DLRN model, integrating deep learning and radiomic features, demonstrated superior accuracy (AUC = 0.909), robust calibration, and significant clinical utility per decision curve analysis. DLRN assistance notably enhanced diagnostic performance, with AUC improvements of 0.238 (<i>p</i> = 0.006) for junior and 0.152 (<i>p</i> = 0.085) for senior physicians. The ultrasound-based DLRN model exhibits strong predictive capability for LNM in PC, offering a valuable decision-support tool that bolsters diagnostic accuracy, especially among less experienced clinicians, thereby supporting more tailored therapeutic strategies for PC patients.
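A DLRN of this kind is typically a logistic-regression nomogram fit on concatenated deep, radiomic, and clinical features. Below is a minimal sketch of that fusion step on synthetic data; the feature dimensions and clinical variables are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a DLRN-style fusion model on synthetic data.
# Feature dimensions and clinical variables are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 249                                    # cohort size reported in the abstract
deep_feats = rng.normal(size=(n, 32))      # e.g., penultimate-layer CNN activations
radiomic_feats = rng.normal(size=(n, 10))  # handcrafted texture/shape features
clinical_feats = rng.normal(size=(n, 4))   # hypothetical clinicopathological variables
y = rng.integers(0, 2, size=n)             # lymph node metastasis labels (synthetic)

X = np.hstack([deep_feats, radiomic_feats, clinical_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)  # 8:2 split

nomogram = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # the nomogram backbone
print("test AUC:", roc_auc_score(y_te, nomogram.predict_proba(X_te)[:, 1]))
```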

Physiological Response of Tissue-Engineered Vascular Grafts to Vasoactive Agents in an Ovine Model.

Guo M, Villarreal D, Watanabe T, Wiet M, Ulziibayar A, Morrison A, Nelson K, Yuhara S, Hussaini SF, Shinoka T, Breuer C

PubMed · Jun 23, 2025
Tissue-engineered vascular grafts (TEVGs) are emerging as promising alternatives to synthetic grafts, particularly in pediatric cardiovascular surgery. While TEVGs have demonstrated growth potential, compliance, and resistance to calcification, their functional integration into the circulation, especially their ability to respond to physiological stimuli, remains underexplored. Vasoreactivity, the dynamic contraction or dilation of blood vessels in response to vasoactive agents, is a key property of native vessels that affects systemic hemodynamics and long-term vascular function. This study aimed to develop and validate an <i>in vivo</i> protocol to assess the vasoreactive capacity of TEVGs implanted as inferior vena cava (IVC) interposition grafts in a large animal model. Bone marrow-seeded TEVGs were implanted in the thoracic IVC of Dorset sheep. A combination of intravascular ultrasound (IVUS) imaging and invasive hemodynamic monitoring was used to evaluate vessel response to norepinephrine (NE) and sodium nitroprusside (SNP). Cross-sectional luminal area changes were measured using a custom Python-based software package (VIVUS) that leverages deep learning for IVUS image segmentation. Physiological parameters including blood pressure, heart rate, and cardiac output were continuously recorded. NE injections induced significant, dose-dependent vasoconstriction of TEVGs, with peak reductions in luminal area averaging ∼15% and corresponding increases in heart rate and mean arterial pressure. Conversely, SNP did not elicit measurable vasodilation in TEVGs, likely due to structural differences in venous tissue, the low-pressure environment of the thoracic IVC, and systemic confounders. Overall, the TEVGs demonstrated active, rapid, and reversible vasoconstrictive behavior in response to pharmacologic stimuli. This study presents a novel <i>in vivo</i> method for assessing TEVG vasoreactivity using real-time imaging and hemodynamic data. TEVGs possess functional vasoactivity, suggesting they may play an active role in modulating venous return and systemic hemodynamics. These findings are particularly relevant for Fontan patients and other scenarios where dynamic venous regulation is critical. Future work will compare TEVG vasoreactivity with native veins and synthetic grafts to further characterize their physiological integration and potential clinical benefits.
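The luminal-area readout that drives the vasoreactivity measurement reduces to counting lumen pixels in each segmentation mask and normalizing to a pre-injection baseline. Here is a small sketch under stated assumptions (binary masks, a known pixel spacing); this is not the VIVUS package itself, which is custom.

```python
# Sketch of the luminal-area readout: count lumen pixels per binary IVUS mask,
# convert to mm^2, and report percent change from the pre-injection baseline.
import numpy as np

def luminal_area(mask: np.ndarray, mm_per_pixel: float) -> float:
    """Cross-sectional lumen area in mm^2 from a binary mask (1 = lumen)."""
    return float(mask.sum()) * mm_per_pixel ** 2

def percent_change(areas: np.ndarray, n_baseline: int = 10) -> np.ndarray:
    """Percent change in area relative to the mean of the baseline frames."""
    baseline = areas[:n_baseline].mean()
    return 100.0 * (areas - baseline) / baseline

# Toy example: simulate a vasoconstriction of roughly 15% after frame 50.
frames = np.ones((100, 64, 64), dtype=np.uint8)
frames[50:, :10, :] = 0                     # shrink the simulated lumen
areas = np.array([luminal_area(f, mm_per_pixel=0.05) for f in frames])
print(f"{percent_change(areas)[-1]:.1f}%")  # about -15.6%
```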

GPT-4o and Specialized AI in Breast Ultrasound Imaging: A Comparative Study on Accuracy, Agreement, Limitations, and Diagnostic Potential.

Sanli DET, Sanli AN, Buyukdereli Atadag Y, Kurt A, Esmerer E

PubMed · Jun 23, 2025
This study aimed to evaluate the ability of ChatGPT and Breast Ultrasound Helper, a specialized ChatGPT-based subprogram trained for ultrasound image analysis, to analyze and differentiate benign and malignant breast lesions on ultrasound images. Ultrasound images of histopathologically confirmed breast cancer and fibroadenoma patients were read by GPT-4o (the latest ChatGPT version) and Breast Ultrasound Helper (BUH), a tool from the "Explore" section of ChatGPT. Both were prompted in English using ACR BI-RADS Breast Ultrasound Lexicon criteria: lesion shape, orientation, margin, internal echo pattern, echogenicity, posterior acoustic features, microcalcifications or hyperechoic foci, perilesional hyperechoic rim, edema or architectural distortion, lesion size, and BI-RADS category. Two experienced radiologists evaluated the images and the programs' responses in consensus. The outputs, BI-RADS category agreement, and benign/malignant discrimination were compared statistically. A total of 232 ultrasound images were analyzed, of which 133 (57.3%) were malignant and 99 (42.7%) benign. In comparative analysis, BUH showed superior performance overall, with higher kappa values and statistically significant results across multiple features (P < .001). However, the overall level of agreement with the radiologists' consensus across all features was similar for BUH (κ: 0.387-0.755) and GPT-4o (κ: 0.317-0.803). On the other hand, BI-RADS category agreement was slightly higher for GPT-4o than for BUH (69.4% versus 65.9%), whereas BUH was slightly more successful in distinguishing benign from malignant lesions (67.7% versus 65.9%). Although both AI tools show moderate-to-good performance in ultrasound image analysis, their limited agreement with radiologists' evaluations and BI-RADS categorization suggests that their clinical application in breast ultrasound interpretation remains premature and unreliable.
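For readers curious how such prompting is wired up, here is a hedged sketch of sending an ultrasound image plus a BI-RADS-lexicon checklist to GPT-4o through the OpenAI Python SDK; the image path and prompt wording are illustrative, not the study's protocol.

```python
# Hedged sketch of prompting GPT-4o with an ultrasound image and a BI-RADS
# lexicon checklist via the OpenAI Python SDK.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("breast_us.png", "rb") as f:  # hypothetical image file
    b64 = base64.b64encode(f.read()).decode()

prompt = (
    "Describe this breast ultrasound lesion using the ACR BI-RADS lexicon: "
    "shape, orientation, margin, internal echo pattern, posterior acoustic "
    "features, calcifications, and size. Then assign a BI-RADS category."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```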

STACT-Time: Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification

Irsyad Adam, Tengyue Zhang, Shrayes Raman, Zhuyu Qiu, Brandon Taraku, Hexiang Feng, Sile Wang, Ashwath Radhachandran, Shreeram Athreya, Vedrana Ivezic, Peipei Ping, Corey Arnold, William Speier

arXiv preprint · Jun 22, 2025
Thyroid cancer is among the most common cancers in the United States. Thyroid nodules are frequently detected through ultrasound (US) imaging, and some require further evaluation via fine-needle aspiration (FNA) biopsy. Despite its effectiveness, FNA often leads to unnecessary biopsies of benign nodules, causing patient discomfort and anxiety. To address this, the American College of Radiology Thyroid Imaging Reporting and Data System (TI-RADS) has been developed to reduce benign biopsies. However, such systems are limited by interobserver variability. Recent deep learning approaches have sought to improve risk stratification, but they often fail to utilize the rich temporal and spatial context provided by US cine clips, which contain dynamic global information and surrounding structural changes across various views. In this work, we propose the Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification (STACT-Time) model, a novel representation learning framework that integrates imaging features from US cine clips with features from segmentation masks automatically generated by a pretrained model. By leveraging self-attention and cross-attention mechanisms, our model captures the rich temporal and spatial context of US cine clips while enhancing feature representation through segmentation-guided learning. Our model improves malignancy prediction compared to state-of-the-art models, achieving a cross-validation precision of 0.91 ± 0.02 and an F1 score of 0.89 ± 0.02. By reducing unnecessary biopsies of benign nodules while maintaining high sensitivity for malignancy detection, our model has the potential to enhance clinical decision-making and improve patient outcomes.
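The core mechanism, temporal self-attention over cine-frame embeddings followed by cross-attention onto segmentation-mask embeddings, can be sketched in a few lines of PyTorch. This is a generic rendering of the idea, not the authors' STACT-Time implementation.

```python
# Generic PyTorch rendering: temporal self-attention over cine-frame
# embeddings, then cross-attention onto segmentation-mask embeddings.
import torch
import torch.nn as nn

class CineMaskCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 2)                 # benign vs. malignant

    def forward(self, cine: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # cine, mask: (batch, frames, dim) per-frame embeddings
        x, _ = self.self_attn(cine, cine, cine)       # temporal self-attention
        x = self.norm1(x + cine)
        y, _ = self.cross_attn(x, mask, mask)         # cine queries attend to masks
        y = self.norm2(y + x)
        return self.head(y.mean(dim=1))               # pool over time, classify

logits = CineMaskCrossAttention()(torch.randn(4, 16, 256), torch.randn(4, 16, 256))
print(logits.shape)                                   # torch.Size([4, 2])
```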

Ultrasound placental image texture analysis using artificial intelligence and deep learning models to predict hypertension in pregnancy.

Arora U, Vigneshwar P, Sai MK, Yadav R, Sengupta D, Kumar M

PubMed · Jun 21, 2025
This study assessed ultrasound placental image texture analysis for the prediction of hypertensive disorders of pregnancy (HDP) using deep learning (DL) algorithms. In this prospective observational study, placental ultrasound images were taken serially at 11-14 weeks (T1), 20-24 weeks (T2), and 28-32 weeks (T3). Pregnant women with blood pressure at or above 140/90 mmHg on two occasions 4 h apart were considered to have HDP. The image data of women with HDP were compared with those with a normal outcome using DL techniques including convolutional neural networks (CNN), transfer learning, and a Vision Transformer (ViT) with a TabNet classifier. The accuracy and Cohen kappa scores of the different DL techniques were compared. A total of 600/1008 (59.5%) subjects had a normal outcome, 143/1008 (14.2%) had HDP, and the remainder, 265/1008 (26.3%), had other adverse outcomes. In the basic CNN model, the accuracy was 81.6% for T1, 80% for T2, and 82.8% for T3. Using the EfficientNet-B0 transfer learning model, the accuracy was 87.7%, 85.3%, and 90.3% for T1, T2, and T3, respectively. Using a TabNet classifier with a ViT, the accuracy and area under the receiver operating characteristic curve scores were 91.4% and 0.915 for T1, 90.2% and 0.904 for T2, and 90.3% and 0.907 for T3. The sensitivity and specificity for HDP prediction using the ViT were 89.1% and 91.7% for T1, 86.6% and 93.7% for T2, and 85.6% and 94.6% for T3. Ultrasound placental image texture analysis using DL differentiated women with a normal outcome from those with HDP with excellent accuracy and could open new avenues for research in this field.
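The transfer-learning arm can be illustrated with an ImageNet-pretrained EfficientNet-B0 whose classifier head is swapped for the binary normal-vs-HDP task; the freezing strategy and input size below are assumptions, and data loading is omitted.

```python
# Sketch of the transfer-learning arm: pretrained EfficientNet-B0 with its
# classifier replaced for a binary normal-vs-HDP task.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                 # freeze the pretrained backbone
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

x = torch.randn(8, 3, 224, 224)             # a batch of placental image crops
print(model(x).shape)                       # torch.Size([8, 2])
```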

SE-ATT-YOLO: A deep learning-driven, ultrasound-based respiratory motion compensation system for precision radiotherapy.

Kuo CC, Pillai AG, Liao AH, Yu HW, Ramanathan S, Zhou H, Boominathan CM, Jeng SC, Chiou JF, Chuang HC

PubMed · Jun 21, 2025
The therapeutic management of neoplasms employs high-energy beams to ablate malignant cells, which can cause collateral damage to adjacent normal tissue. Furthermore, respiration-induced organ motion during radiotherapy can lead to significant displacement of neoplasms. In this work, a non-invasive, ultrasound-based deep learning algorithm for a respiratory motion compensation system (RMCS) was developed to mitigate the effect of respiration-induced neoplasm movement in radiotherapy. The deep learning algorithm is based on a modified YOLOv8n (You Only Look Once), incorporating squeeze-and-excitation blocks for channel-wise recalibration and enhanced attention mechanisms for spatial-channel focus (SE-ATT-YOLO), to support real-time ultrasound image detection. At inference, the trained model detected the human diaphragm in ultrasound recordings and tracked its bounding box coordinates with BoT-SORT, which drove the RMCS. The SE-ATT-YOLO model achieved a mean average precision (mAP) of 0.88, outperforming YOLOv8n at 0.85, and an inference speed of approximately 50 FPS. The root mean square error (RMSE) between prerecorded respiratory signals and the compensated RMCS signal was 4.342 for baseline shift, 3.105 for the sinusoidal signal, 1.778 for deep breathing, and 1.667 for the slow signal, surpassing the results of previous models. The loss-function uncertainty present in YOLOv8n was rectified in SE-ATT-YOLO, demonstrating the stability of the model. The model's stability, speed, and accuracy optimized the performance of the RMCS.
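The squeeze-and-excitation block the abstract refers to is a standard channel-recalibration module; a generic PyTorch sketch (not the authors' exact module) follows.

```python
# Generic squeeze-and-excitation (SE) block of the kind added to YOLOv8n here.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial context
        self.fc = nn.Sequential(                     # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # channel-wise recalibration

feat = torch.randn(2, 64, 40, 40)                    # a YOLO-style feature map
print(SEBlock(64)(feat).shape)                       # torch.Size([2, 64, 40, 40])
```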

Development of Radiomics-Based Risk Prediction Models for Stages of Hashimoto's Thyroiditis Using Ultrasound, Clinical, and Laboratory Factors.

Chen JH, Kang K, Wang XY, Chi JN, Gao XM, Li YX, Huang Y

PubMed · Jun 21, 2025
To develop a radiomics risk-prediction model for differentiating the stages of Hashimoto's thyroiditis (HT). Data from patients with HT who underwent definitive surgical pathology between January 2018 and December 2023 were retrospectively collected and categorized into early HT (positive antibodies alone or accompanied by elevated thyroid hormones) and late HT (positive antibodies with beginning subclinical hypothyroidism or progression to hypothyroidism). Ultrasound images and five clinical and 12 laboratory indicators were obtained. Six classifiers were used to construct radiomics models. A gradient boosting decision tree (GBDT) classifier was used to screen for the best features and to explore the main risk factors for differentiating early HT. The performance of each model was evaluated by receiver operating characteristic (ROC) curve analysis, and the model was validated in one internal and two external test cohorts. A total of 785 patients were enrolled. Extreme gradient boosting (XGBoost) showed the best performance in the training cohort, with an AUC of 0.999 (0.998, 1), and AUC values of 0.993 (0.98, 1), 0.947 (0.866, 1), and 0.98 (0.939, 1) in the internal test, first external, and second external cohorts, respectively. Ultrasound radiomic features contributed 78.6% (11/14) of the model's features. A first-order feature from the transverse thyroid ultrasound section, a gray-level run-length matrix (GLRLM) texture feature from the longitudinal section, and free thyroxine contributed most to the model. Our study developed and tested a risk-prediction model that effectively differentiated HT stages, enabling more precise and proactive management of patients with HT at an earlier stage.
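The two-stage pipeline, GBDT-based feature screening followed by an XGBoost classifier scored by ROC AUC, can be sketched as follows on synthetic data; retaining 14 features mirrors the abstract, and everything else is illustrative.

```python
# Illustrative sketch: GBDT feature screening, then XGBoost, scored by AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=785, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
top = np.argsort(gbdt.feature_importances_)[::-1][:14]   # screen for the best features

clf = XGBClassifier(eval_metric="logloss").fit(X_tr[:, top], y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[:, top])[:, 1]))
```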

Automatic Multi-Task Segmentation and Vulnerability Assessment of Carotid Plaque on Contrast-Enhanced Ultrasound Images and Videos via Deep Learning.

Hu B, Zhang H, Jia C, Chen K, Tang X, He D, Zhang L, Gu S, Chen J, Zhang J, Wu R, Chen SL

PubMed · Jun 20, 2025
Intraplaque neovascularization (IPN) within carotid plaque is a crucial indicator of plaque vulnerability. Contrast-enhanced ultrasound (CEUS) is a valuable tool for assessing IPN by evaluating the location and quantity of microbubbles within the carotid plaque. However, this task is typically performed by experienced radiologists. Here we propose a deep learning-based multi-task model for the automatic segmentation and IPN grade classification of carotid plaque on CEUS images and videos, and compare its performance with that of radiologists. To simulate the clinical practice of radiologists, who often use CEUS videos with dynamic imaging to track microbubble flow and identify IPN, we develop a workflow for plaque vulnerability assessment using CEUS videos. Our multi-task model outperformed individually trained segmentation and classification models, achieving superior performance in IPN grade classification based on CEUS images, with a high segmentation Dice coefficient of 84.64% and a high classification accuracy of 81.67%. Moreover, our model surpassed the performance of junior and mid-level radiologists, providing more accurate IPN grading of carotid plaque on CEUS images. For CEUS videos, our model achieved a classification accuracy of 80.00% in IPN grading. Overall, our multi-task model delivers automatic, accurate, objective, and efficient IPN grading in both CEUS images and videos. This work holds significant promise for enhancing the clinical diagnosis of plaque vulnerability associated with IPN in CEUS evaluations.
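A shared-encoder network with separate segmentation and classification heads is the usual shape of such a multi-task model; the sketch below is a schematic stand-in, not the authors' architecture.

```python
# Schematic multi-task network: one head segments the plaque, the other
# predicts the IPN grade. Architecture details are assumptions.
import torch
import torch.nn as nn

class MultiTaskCEUS(nn.Module):
    def __init__(self, n_grades: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(                # shared feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)          # plaque mask logits
        self.cls_head = nn.Sequential(               # IPN grade logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_grades)
        )

    def forward(self, x: torch.Tensor):
        f = self.encoder(x)
        return self.seg_head(f), self.cls_head(f)

mask_logits, grade_logits = MultiTaskCEUS()(torch.randn(2, 1, 128, 128))
# Joint training would combine a Dice/BCE segmentation loss with a
# cross-entropy classification loss, e.g. loss = dice + lam * ce.
print(mask_logits.shape, grade_logits.shape)
```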

Automatic Detection of B-Lines in Lung Ultrasound Based on the Evaluation of Multiple Characteristic Parameters Using Raw RF Data.

Shen W, Zhang Y, Zhang H, Zhong H, Wan M

PubMed · Jun 20, 2025
B-line artifacts in lung ultrasound, pivotal for diagnosing pulmonary conditions, warrant automated recognition to enhance diagnostic accuracy. In this paper, a lung ultrasound B-line vertical-artifact identification method based on raw radio frequency (RF) signals is proposed. B-line regions were distinguished from non-B-line regions by inputting multiple characteristic parameters into a nonlinear support vector machine (SVM). Six characteristic parameters were evaluated: permutation entropy, information entropy, kurtosis, skewness, the Nakagami shape factor, and approximate entropy. Following an evaluation that demonstrated performance differences among the parameters, principal component analysis (PCA) was used to reduce the dimensionality to a four-dimensional feature set for input into the nonlinear SVM. Four types of experiments were conducted: a dripping-water sponge model, gelatin phantoms containing either glass beads or gelatin droplets, and in vivo experiments. By employing precise feature selection and analyzing scan lines rather than full images, this approach significantly reduced the dependency on large image datasets without compromising discriminative accuracy. The method exhibited performance comparable to contemporary image-based deep learning approaches, which, while highly effective, typically require extensive training data and expert annotation of large datasets to establish ground truth. Owing to the optimized architecture of the model, efficient sample recognition was achieved, with the capability to process between 27,000 and 33,000 scan lines per second (a frame rate exceeding 100 FPS at 256 scan lines per frame), thus supporting real-time analysis. The accuracy of the method in classifying a scan line as belonging to a B-line region was up to 88%, with sensitivity up to 90%, specificity up to 87%, and an F1-score up to 89%. These results reflect the performance of scan-line classification pertinent to B-line identification. Our approach reduces the reliance on large annotated datasets, thereby streamlining the preprocessing phase.
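A hedged sketch of the scan-line pipeline follows: a subset of the listed characteristic parameters is computed per (synthetic) RF line, reduced to four dimensions with PCA, and classified with an RBF-kernel SVM. The feature estimators are common textbook forms, not necessarily those used in the paper.

```python
# Sketch: per-scan-line characteristic parameters -> PCA(4) -> nonlinear SVM.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def permutation_entropy(x: np.ndarray, order: int = 3) -> float:
    """Shannon entropy of ordinal patterns in sliding windows of length `order`."""
    idx = np.array([np.argsort(x[i:i + order]) for i in range(len(x) - order + 1)])
    _, counts = np.unique(idx, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def scanline_features(rf: np.ndarray) -> np.ndarray:
    env = np.abs(rf)                                 # crude envelope
    hist, _ = np.histogram(env, bins=64)
    p = hist[hist > 0] / hist.sum()
    shannon = -np.sum(p * np.log(p))                 # information entropy
    e2 = env ** 2
    nakagami_m = e2.mean() ** 2 / e2.var()           # moment-based Nakagami shape
    return np.array([shannon, permutation_entropy(rf),
                     kurtosis(rf), skew(rf), nakagami_m])

rng = np.random.default_rng(1)
lines = [rng.normal(size=1024) for _ in range(200)] + \
        [rng.laplace(size=1024) for _ in range(200)]  # two synthetic "classes"
X = np.array([scanline_features(l) for l in lines])
y = np.array([0] * 200 + [1] * 200)                   # non-B-line vs. B-line labels

clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
print("training accuracy:", clf.fit(X, y).score(X, y))
```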

MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images.

Ling D, Jiao X

PubMed · Jun 20, 2025
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
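The distillation step typically minimizes a temperature-scaled KL divergence between teacher and student logits alongside the hard-label loss; the sketch below shows that generic objective with illustrative hyperparameters, not values from the paper.

```python
# Generic knowledge-distillation objective for compressing the multi-view
# teacher into a single-view student.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.7):
    """alpha * soft-target KL (scaled by T^2) + (1 - alpha) * cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 2, requires_grad=True)   # single-view student logits
t = torch.randn(8, 2)                       # multi-view teacher logits (frozen)
y = torch.randint(0, 2, (8,))
print(kd_loss(s, t, y))
```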