
Development and validation of a SOTA-based system for biliopancreatic segmentation and station recognition in EUS.

Zhang J, Zhang J, Chen H, Tian F, Zhang Y, Zhou Y, Jiang Z

PubMed · Jun 23 2025
Endoscopic ultrasound (EUS) is a vital tool for diagnosing biliopancreatic disease, offering detailed imaging to identify key abnormalities. Its interpretation demands expertise, which limits its accessibility for less trained practitioners. Thus, tools or systems that assist in interpreting EUS images are crucial for improving diagnostic accuracy and efficiency. The aim was to develop an AI-assisted EUS system for accurate pancreatic and biliopancreatic duct segmentation, and to evaluate its impact on endoscopists' ability to identify biliopancreatic diseases during segmentation and anatomical localization. The EUS-AI system was designed to perform station positioning and anatomical structure segmentation. A total of 45,737 EUS images from 1852 patients were used for model training. Among them, 2881 images were used for internal testing, and 2747 images from 208 patients were used for external validation. An additional 340 images formed a man-machine competition test set. Several state-of-the-art (SOTA) deep learning algorithms were compared during development. In the station recognition task, the Mean Teacher algorithm achieved the highest accuracy, compared with ResNet-50 and YOLOv8-CLS, averaging 95.60% (92.07%-99.12%) on the internal test set and 92.72% (88.30%-97.15%) on the external test set. For segmentation, U-Net v2 was optimal compared with UNet++ and YOLOv8. The EUS-AI system was then constructed from the optimal models for the two tasks, and a man-machine competition experiment was conducted. The EUS-AI system significantly outperformed mid-level endoscopists in both station recognition (p < 0.001) and segmentation of the pancreas and biliopancreatic duct (p < 0.001 and p = 0.004, respectively). The EUS-AI system is expected to significantly shorten the learning curve for pancreatic EUS examination and enhance procedural standardization.
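As context for the station-recognition result: Mean Teacher is a semi-supervised scheme in which a teacher network's weights are an exponential moving average (EMA) of the student's, and the student is trained to match the teacher's predictions on unlabeled images. A minimal PyTorch sketch of the core update, with illustrative hyperparameters and generic student/teacher classifiers rather than the paper's actual models:

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """The teacher starts as a frozen copy of the student."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, alpha=0.99):
    """Update teacher weights as an exponential moving average of the student's."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1 - alpha)

def mean_teacher_step(student, teacher, x_labeled, y, x_unlabeled, optimizer):
    """One training step: supervised loss on labeled images plus a consistency
    loss pulling student predictions toward teacher predictions on unlabeled images."""
    sup_loss = F.cross_entropy(student(x_labeled), y)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_unlabeled), dim=1)
    student_log_probs = F.log_softmax(student(x_unlabeled), dim=1)
    cons_loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    loss = sup_loss + 0.5 * cons_loss  # consistency weight is illustrative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```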

STACT-Time: Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification

Irsyad Adam, Tengyue Zhang, Shrayes Raman, Zhuyu Qiu, Brandon Taraku, Hexiang Feng, Sile Wang, Ashwath Radhachandran, Shreeram Athreya, Vedrana Ivezic, Peipei Ping, Corey Arnold, William Speier

arXiv preprint · Jun 22 2025
Thyroid cancer is among the most common cancers in the United States. Thyroid nodules are frequently detected through ultrasound (US) imaging, and some require further evaluation via fine-needle aspiration (FNA) biopsy. Despite its effectiveness, FNA often leads to unnecessary biopsies of benign nodules, causing patient discomfort and anxiety. To address this, the American College of Radiology Thyroid Imaging Reporting and Data System (TI-RADS) has been developed to reduce benign biopsies. However, such systems are limited by interobserver variability. Recent deep learning approaches have sought to improve risk stratification, but they often fail to utilize the rich temporal and spatial context provided by US cine clips, which contain dynamic global information and surrounding structural changes across various views. In this work, we propose the Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification (STACT-Time) model, a novel representation learning framework that integrates imaging features from US cine clips with features from segmentation masks automatically generated by a pretrained model. By leveraging self-attention and cross-attention mechanisms, our model captures the rich temporal and spatial context of US cine clips while enhancing feature representation through segmentation-guided learning. Our model improves malignancy prediction compared to state-of-the-art models, achieving a cross-validation precision of 0.91 (±0.02) and an F1 score of 0.89 (±0.02). By reducing unnecessary biopsies of benign nodules while maintaining high sensitivity for malignancy detection, our model has the potential to enhance clinical decision-making and improve patient outcomes.
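As a rough illustration of the mechanism named in the title, the sketch below applies temporal self-attention over cine-frame embeddings and then cross-attends onto segmentation-mask embeddings. All module names, dimensions, and the fusion order are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class CineMaskCrossAttention(nn.Module):
    """Cross-attention in which cine-clip frame embeddings query
    segmentation-mask embeddings (dimensions are illustrative)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, cine_tokens, mask_tokens):
        # cine_tokens: (B, T, dim) per-frame features from the US cine clip
        # mask_tokens: (B, T, dim) features from auto-generated segmentation masks
        x, _ = self.self_attn(cine_tokens, cine_tokens, cine_tokens)
        x = self.norm1(cine_tokens + x)           # temporal self-attention
        y, _ = self.cross_attn(x, mask_tokens, mask_tokens)
        return self.norm2(x + y)                  # segmentation-guided fusion

# fused = CineMaskCrossAttention()(cine_feats, mask_feats)  # then pool and classify
```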

SE-ATT-YOLO: A deep learning-driven, ultrasound-based respiratory motion compensation system for precision radiotherapy.

Kuo CC, Pillai AG, Liao AH, Yu HW, Ramanathan S, Zhou H, Boominathan CM, Jeng SC, Chiou JF, Chuang HC

PubMed · Jun 21 2025
Therapeutic management of neoplasms employs high-energy beams to ablate malignant cells, which can cause collateral damage to adjacent normal tissue. Furthermore, respiration-induced organ motion during radiotherapy can lead to significant displacement of neoplasms. In this work, a non-invasive, ultrasound-based deep learning algorithm for a respiratory motion compensation system (RMCS) was developed to mitigate the effect of respiration-induced neoplasm movement in radiotherapy. The deep learning algorithm is based on a modified YOLOv8n (You Only Look Once) that incorporates squeeze-and-excitation blocks for channel-wise recalibration and enhanced attention mechanisms for spatial-channel focus (SE-ATT-YOLO), enabling real-time ultrasound image detection. The trained model was run on ultrasound sequences of human diaphragm movement, and the bounding-box coordinates were tracked using BoT-SORT to drive the RMCS. The SE-ATT-YOLO model achieved a mean average precision (mAP) of 0.88, outperforming YOLOv8n at 0.85, with an inference speed of approximately 50 FPS. The root mean square error (RMSE) between prerecorded respiratory signals and the compensated RMCS signal was calculated: 4.342 for baseline shift, 3.105 for the sinusoidal signal, 1.778 for deep breathing, and 1.667 for the slow signal. SE-ATT-YOLO outperformed previous models on all measures, and the loss-function instability observed in YOLOv8n was rectified, demonstrating the model's stability. The model's stability, speed, and accuracy optimized the performance of the RMCS.
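The squeeze-and-excitation block mentioned above is a standard component; a minimal PyTorch version of the channel-wise recalibration it performs is sketched below (the reduction ratio and its placement within YOLOv8n are illustrative, not the paper's exact design):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-average-pool each channel ("squeeze"),
    then a two-layer bottleneck produces per-channel scaling weights
    ("excitation"). Reduction ratio is illustrative."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise recalibration of the feature map

out = SEBlock(256)(torch.randn(2, 256, 40, 40))  # shape preserved: (2, 256, 40, 40)
```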

Ultrasound placental image texture analysis using artificial intelligence and deep learning models to predict hypertension in pregnancy.

Arora U, Vigneshwar P, Sai MK, Yadav R, Sengupta D, Kumar M

PubMed · Jun 21 2025
This study examines ultrasound placental image texture analysis for the prediction of hypertensive disorders of pregnancy (HDP) using deep learning (DL) algorithms. In this prospective observational study, placental ultrasound images were taken serially at 11-14 weeks (T1), 20-24 weeks (T2), and 28-32 weeks (T3). Pregnant women with blood pressure at or above 140/90 mmHg on two occasions 4 h apart were considered to have HDP. The image data of women with HDP were compared with those of women with a normal outcome using DL techniques including convolutional neural networks (CNN), transfer learning, and a Vision Transformer (ViT) with a TabNet classifier. The accuracy and Cohen kappa scores of the different DL techniques were compared. A total of 600/1008 (59.5%) subjects had a normal outcome and 143/1008 (14.2%) had HDP; the remainder, 265/1008 (26.3%), had other adverse outcomes. With the basic CNN model, the accuracy was 81.6% for T1, 80% for T2, and 82.8% for T3. With the EfficientNet-B0 transfer learning model, the accuracy was 87.7%, 85.3%, and 90.3% for T1, T2, and T3, respectively. Using a TabNet classifier with a ViT, the accuracy and area under the receiver operating characteristic curve were 91.4% and 0.915 for T1, 90.2% and 0.904 for T2, and 90.3% and 0.907 for T3. The sensitivity and specificity for HDP prediction using the ViT were 89.1% and 91.7% for T1, 86.6% and 93.7% for T2, and 85.6% and 94.6% for T3. Ultrasound placental image texture analysis using DL could differentiate women with a normal outcome from those with HDP with excellent accuracy and could open new avenues for research in this field.
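One plausible way to pair a ViT with a TabNet classifier, as the abstract describes, is to use the ViT as a frozen feature extractor and train TabNet on the pooled embeddings. The sketch below assumes the `timm` and `pytorch_tabnet` packages; the model name, placeholder data, and settings are illustrative, not the paper's pipeline:

```python
import numpy as np
import timm
import torch
from pytorch_tabnet.tab_model import TabNetClassifier

# Pretrained ViT used purely as a frozen feature extractor (num_classes=0
# makes the model return pooled embeddings instead of class logits).
vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
vit.eval()

@torch.no_grad()
def extract_features(images):           # images: (N, 3, 224, 224) float tensor
    return vit(images).numpy()          # (N, 768) pooled ViT embeddings

# Placeholder batches standing in for preprocessed placental US images;
# labels: 0 = normal outcome, 1 = HDP.
X_train = extract_features(torch.randn(64, 3, 224, 224))
X_valid = extract_features(torch.randn(16, 3, 224, 224))
y_train = np.random.randint(0, 2, 64)
y_valid = np.random.randint(0, 2, 16)

clf = TabNetClassifier()
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)],
        eval_metric=["auc"], max_epochs=10)  # epochs illustrative
probs = clf.predict_proba(X_valid)[:, 1]     # predicted HDP probability
```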

Development of Radiomics-Based Risk Prediction Models for Stages of Hashimoto's Thyroiditis Using Ultrasound, Clinical, and Laboratory Factors.

Chen JH, Kang K, Wang XY, Chi JN, Gao XM, Li YX, Huang Y

PubMed · Jun 21 2025
To develop a radiomics risk-prediction model for differentiating the stages of Hashimoto's thyroiditis (HT). Data from patients with HT who underwent definitive surgical pathology between January 2018 and December 2023 were retrospectively collected and categorized into early HT (positive antibodies alone or accompanied by elevated thyroid hormones) and late HT (positive antibodies with emerging subclinical hypothyroidism or established hypothyroidism). Ultrasound images and five clinical and 12 laboratory indicators were obtained. Six classifiers were used to construct radiomics models. The gradient boosting decision tree (GBDT) classifier was used to screen for the best features and explore the main risk factors differentiating early HT. The performance of each model was evaluated using receiver operating characteristic (ROC) curves. The model was validated using one internal and two external test cohorts. A total of 785 patients were enrolled. Extreme gradient boosting (XGBoost) showed the best performance in the training cohort, with an AUC of 0.999 (0.998, 1), and AUC values of 0.993 (0.98, 1), 0.947 (0.866, 1), and 0.98 (0.939, 1) in the internal test, first external, and second external cohorts, respectively. Ultrasound radiomics features accounted for 78.6% (11/14) of the model's features. The first-order feature of the transverse thyroid ultrasound section, the gray-level run length matrix (GLRLM) texture feature of the longitudinal section, and free thyroxine contributed most to the model. Our study developed and tested a risk-prediction model that effectively differentiates HT stages, enabling more precise and proactive management of patients with HT at an earlier stage.
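A minimal sketch of the two-stage pattern described, in which a GBDT ranks candidate features and XGBoost is trained on the screened subset, using scikit-learn and the `xgboost` package (the placeholder data, the 14-feature cutoff, and all settings are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# X: radiomics + clinical + laboratory feature matrix; y: 0 = early HT, 1 = late HT.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 30)), rng.integers(0, 2, 200)  # placeholder data

# Stage 1: GBDT ranks features by impurity-based importance.
gbdt = GradientBoostingClassifier().fit(X, y)
top_idx = np.argsort(gbdt.feature_importances_)[::-1][:14]  # keep top features

# Stage 2: XGBoost classifier trained on the screened feature subset.
model = XGBClassifier(eval_metric="logloss").fit(X[:, top_idx], y)
print(roc_auc_score(y, model.predict_proba(X[:, top_idx])[:, 1]))
```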

Automatic Multi-Task Segmentation and Vulnerability Assessment of Carotid Plaque on Contrast-Enhanced Ultrasound Images and Videos via Deep Learning.

Hu B, Zhang H, Jia C, Chen K, Tang X, He D, Zhang L, Gu S, Chen J, Zhang J, Wu R, Chen SL

PubMed · Jun 20 2025
Intraplaque neovascularization (IPN) within carotid plaque is a crucial indicator of plaque vulnerability. Contrast-enhanced ultrasound (CEUS) is a valuable tool for assessing IPN by evaluating the location and quantity of microbubbles within the carotid plaque. However, this task is typically performed by experienced radiologists. Here we propose a deep learning-based multi-task model for automatic segmentation and IPN grade classification of carotid plaque on CEUS images and videos, and we compare its performance with that of radiologists. To simulate the clinical practice of radiologists, who often use dynamic CEUS videos to track microbubble flow and identify IPN, we developed a workflow for plaque vulnerability assessment using CEUS videos. Our multi-task model outperformed individually trained segmentation and classification models, achieving superior IPN grade classification on CEUS images, with a segmentation Dice coefficient of 84.64% and a classification accuracy of 81.67%. Moreover, the model surpassed junior and mid-level radiologists, providing more accurate IPN grading of carotid plaque on CEUS images. For CEUS videos, it achieved a classification accuracy of 80.00% in IPN grading. Overall, our multi-task model demonstrates strong performance in automatic, accurate, objective, and efficient IPN grading for both CEUS images and videos. This work holds significant promise for enhancing the clinical diagnosis of plaque vulnerability associated with IPN in CEUS evaluations.
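A common way to realize such a multi-task design is a shared encoder with a segmentation decoder and a classification head trained jointly; a toy PyTorch sketch is below (the architecture, channel sizes, and loss weighting are assumptions, not the authors' network):

```python
import torch
import torch.nn as nn

class MultiTaskPlaqueNet(nn.Module):
    """Shared encoder with two heads: a segmentation decoder for the plaque
    mask and a classifier for the IPN grade (architecture is illustrative)."""
    def __init__(self, n_grades=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),   # plaque mask logits
        )
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_grades),
        )

    def forward(self, x):
        feats = self.encoder(x)                       # shared features
        return self.seg_head(feats), self.cls_head(feats)

model = MultiTaskPlaqueNet()
mask_logits, grade_logits = model(torch.randn(2, 1, 128, 128))
# Train with a weighted sum of task losses, e.g.
# loss = seg_loss(mask_logits, mask) + cls_weight * ce(grade_logits, grade)
```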

MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images.

Ling D, Jiao X

PubMed · Jun 20 2025
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
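The knowledge distillation step described, compressing the multi-view network into a single-view one, is commonly implemented with the temperature-scaled soft-target loss of Hinton et al.; a minimal PyTorch sketch, with illustrative temperature and weighting:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft targets from the multi-view teacher plus hard cross-entropy on the
    true labels (T and alpha are illustrative, not the paper's values)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients after temperature softening
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# loss = distillation_loss(single_view_logits, multi_view_logits.detach(), y)
```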

Automatic Detection of B-Lines in Lung Ultrasound Based on the Evaluation of Multiple Characteristic Parameters Using Raw RF Data.

Shen W, Zhang Y, Zhang H, Zhong H, Wan M

PubMed · Jun 20 2025
B-line artifacts in lung ultrasound, pivotal for diagnosing pulmonary conditions, warrant automated recognition to enhance diagnostic accuracy. In this paper, a lung ultrasound B-line vertical artifact identification method based on radio frequency (RF) signals is proposed. B-line regions were distinguished from non-B-line regions by inputting multiple characteristic parameters into a nonlinear support vector machine (SVM). Six characteristic parameters were evaluated: permutation entropy, information entropy, kurtosis, skewness, Nakagami shape factor, and approximate entropy. After this evaluation demonstrated performance differences among the parameters, principal component analysis (PCA) was used to reduce the dimensionality to a four-dimensional feature set for input to the SVM. Four types of experiments were conducted: a sponge model with dripping water, gelatin phantoms containing glass beads, gelatin phantoms containing gelatin droplets, and in vivo experiments. By employing precise feature selection and analyzing scan lines rather than full images, this approach significantly reduces dependency on large image datasets without compromising discriminative accuracy. The method exhibited performance comparable to contemporary image-based deep learning approaches, which, while highly effective, typically require extensive training data and expert annotation of large datasets to establish ground truth. Owing to the optimized architecture of the model, efficient sample recognition was achieved, with a throughput of 27,000 to 33,000 scan lines per second (a frame rate exceeding 100 FPS at 256 scan lines per frame), supporting real-time analysis. The accuracy of classifying a scan line as belonging to a B-line region was up to 88%, with sensitivity up to 90%, specificity up to 87%, and an F1-score up to 89%. This approach effectively reflects the performance of scan-line classification pertinent to B-line identification and reduces reliance on large annotated datasets, streamlining the preprocessing phase.
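The classification pipeline described maps cleanly onto a scikit-learn sketch: six per-scan-line parameters reduced to four principal components and fed to an RBF SVM. The placeholder data and the standardization step are assumptions added for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per RF scan line, columns = the six characteristic parameters
# (permutation entropy, information entropy, kurtosis, skewness,
#  Nakagami shape factor, approximate entropy); y: 1 = B-line region.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 6)), rng.integers(0, 2, 500)  # placeholder data

clf = make_pipeline(
    StandardScaler(),        # the six parameters live on very different scales
    PCA(n_components=4),     # reduce six parameters to four components
    SVC(kernel="rbf"),       # nonlinear SVM classifier
)
clf.fit(X, y)
print(clf.predict(X[:5]))    # per-scan-line B-line / non-B-line labels
```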

Optimized YOLOv8 for enhanced breast tumor segmentation in ultrasound imaging.

Mostafa AM, Alaerjan AS, Aldughayfiq B, Allahem H, Mahmoud AA, Said W, Shabana H, Ezz M

PubMed · Jun 19 2025
Breast cancer significantly affects people's health globally, making early and accurate diagnosis vital. While ultrasound imaging is safe and non-invasive, its manual interpretation is subjective. This study explores machine learning (ML) techniques to improve breast ultrasound image segmentation, comparing models trained on combined versus separate classes of benign and malignant tumors. The YOLOv8 object detection algorithm is applied to the image segmentation task, capitalizing on its robust feature detection capabilities. We utilized a dataset of 780 ultrasound images categorized into benign and malignant classes to train several deep learning (DL) models: UNet, UNet with DenseNet-121, VGG16, VGG19, and an adapted YOLOv8. These models were evaluated in two experimental setups: training on a combined dataset and training on separate datasets for the benign and malignant classes. Performance metrics such as the Dice coefficient, Intersection over Union (IoU), and mean Average Precision (mAP) were used to assess model effectiveness. Training on separate classes yielded substantial improvements: the UNet model's F1-score increased from 77.80% to 84.09% and its Dice coefficient from 75.58% to 81.17%, while the adapted YOLOv8's F1-score improved from 93.44% to 95.29% and its Dice coefficient from 82.10% to 84.40%. These results highlight the advantage of specialized model training and the potential of advanced object detection algorithms for segmentation tasks. This research underscores the significant potential of specialized training strategies and innovative model adaptations in medical imaging segmentation, ultimately contributing to better patient outcomes.
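For reference, adapting YOLOv8 to a segmentation task typically follows the `ultralytics` API pattern sketched below; the dataset YAML, checkpoint choice, file names, and hyperparameters are placeholders, not the paper's configuration:

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 segmentation checkpoint and fine-tune on
# breast-ultrasound masks (the dataset YAML path is a placeholder).
model = YOLO("yolov8n-seg.pt")
model.train(data="breast_us_seg.yaml", epochs=100, imgsz=640)

metrics = model.val()                    # reports mask mAP, among other metrics
results = model("benign_case_001.png")   # predict masks on a new image
```

Training separate benign and malignant models, as the study does, would simply repeat this with two dataset YAMLs, one per class.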

AGE-US: automated gestational age estimation based on fetal ultrasound images

César Díaz-Parga, Marta Nuñez-Garcia, Maria J. Carreira, Gabriel Bernardino, Nicolás Vila-Blanco

arXiv preprint · Jun 19 2025
Being born small carries significant health risks, including increased neonatal mortality and a higher likelihood of future cardiac disease. Accurate estimation of gestational age is critical for monitoring fetal growth, but traditional methods, such as estimation based on the last menstrual period, can be difficult to obtain in some situations. While ultrasound-based approaches offer greater reliability, they rely on manual measurements that introduce variability. This study presents an interpretable deep learning-based method for automated gestational age calculation, leveraging a novel segmentation architecture and distance maps to overcome dataset limitations and the scarcity of segmentation masks. Our approach achieves performance comparable to state-of-the-art models while reducing complexity, making it particularly suitable for resource-constrained settings and for limited annotated data. Furthermore, our results demonstrate that distance maps are especially well suited to estimating femur endpoints.
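Distance maps of the kind mentioned are commonly derived from a binary segmentation mask with a Euclidean distance transform; a small SciPy sketch is below (a signed-distance construction chosen for illustration, which may differ from the paper's formulation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mask_to_distance_map(mask):
    """Signed distance map from a binary mask: positive distances outside the
    structure, negative inside (one common construction)."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)  # distance from background to structure
    inside = distance_transform_edt(mask)    # distance from structure to background
    return outside - inside

femur_mask = np.zeros((128, 128), dtype=np.uint8)
femur_mask[60:68, 20:100] = 1  # toy elongated structure standing in for a femur
dist = mask_to_distance_map(femur_mask)
print(dist.min(), dist.max())  # most negative at the structure's core
```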
