Page 4 of 14136 results

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed, Jun 1 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but they depend heavily on quantifying pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images; as a result, the segmentation over-fits to the weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework that optimizes the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of the bounding box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding box mask. The secondary segmentation prediction induced from the prototypes is compared with the preliminary prediction to quantify the rationality of the elaborated target and background semantic feature perception. Experiments on three thyroid datasets show that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while requiring far less annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distributions, and can improve the applicability of deep learning-based segmentation in clinical practice.
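The spatial arrangement consistency check described above can be sketched as follows; the function name, loss form (an L1 distance between activation profiles), and array shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def spatial_arrangement_loss(pred, box_mask):
    """Compare per-row/per-column maximum activations of a soft
    segmentation prediction against those of the bounding-box mask.

    pred     : (H, W) float array in [0, 1], soft segmentation output
    box_mask : (H, W) binary array, 1 inside the bounding box
    """
    # Maximum-activation profile along each spatial dimension
    pred_rows, pred_cols = pred.max(axis=1), pred.max(axis=0)
    box_rows, box_cols = box_mask.max(axis=1), box_mask.max(axis=0)
    # L1 distance between the profiles (illustrative choice)
    return (np.abs(pred_rows - box_rows).mean()
            + np.abs(pred_cols - box_cols).mean())

# A prediction perfectly aligned with the box incurs zero loss
box = np.zeros((8, 8))
box[2:6, 3:7] = 1.0
print(spatial_arrangement_loss(box, box))  # 0.0
```

A prediction shifted away from the box produces nonzero loss, which is the signal used to pull the predicted target back toward the annotated region.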

Human-AI collaboration for ultrasound diagnosis of thyroid nodules: a clinical trial.

Edström AB, Makouei F, Wennervaldt K, Lomholt AF, Kaltoft M, Melchiors J, Hvilsom GB, Bech M, Tolsgaard M, Todsen T

PubMed, Jun 1 2025
This clinical trial examined how the artificial intelligence (AI)-based diagnostic system S-Detect for Thyroid influences the diagnostic work-up of thyroid ultrasound (US) performed by different US users in clinical practice, and how different US users influence the diagnostic accuracy of S-Detect. We conducted a clinical trial with 20 participants, including medical students, US novice physicians, and US experienced physicians. Five patients with thyroid nodules (one malignant and four benign) volunteered to undergo a thyroid US scan performed by all 20 participants using the same US systems with S-Detect installed. Participants performed a focused thyroid US on each patient case and classified the nodule according to the European Thyroid Imaging Reporting And Data System (EU-TIRADS). They then performed an S-Detect analysis of the same nodule and were asked to re-evaluate their EU-TIRADS rating. From the participants' EU-TIRADS assessments, we derived a biopsy recommendation outcome indicating whether fine needle aspiration biopsy (FNAB) was recommended. The mean diagnostic accuracy for S-Detect was 71.3% (range 40-100%) among all participants, with no significant difference between the groups (p = 0.31). The accuracy of our biopsy recommendation outcome was 69.8% before and 69.2% after AI for all participants (p = 0.75). In this trial, we did not find that S-Detect improved the thyroid diagnostic work-up in clinical practice among novice and intermediate ultrasound operators. However, the operator had a substantial impact on the AI-generated ultrasound diagnosis, with diagnostic accuracy varying from 40% to 100% despite the same patients and ultrasound machines being used in the trial.

Prediction of BRAF and TERT status in PTCs by machine learning-based ultrasound radiomics methods: A multicenter study.

Shi H, Ding K, Yang XT, Wu TF, Zheng JY, Wang LF, Zhou BY, Sun LP, Zhang YF, Zhao CK, Xu HX

PubMed, Jun 1 2025
Preoperative identification of genetic mutations facilitates individualized treatment and management of papillary thyroid carcinoma (PTC) patients. Purpose: To investigate the predictive value of machine learning (ML)-based ultrasound (US) radiomics approaches for BRAF V600E and TERT promoter status (individually and in coexistence) in PTC. This multicenter study retrospectively collected data from 1076 PTC patients who underwent genetic testing for BRAF V600E and TERT promoter mutations between March 2016 and December 2021. Radiomics features were extracted from routine grayscale ultrasound images, and gene status-related features were selected. These features were then fed into nine different ML models to predict each mutation status, and the optimal models were additionally combined with statistically significant clinical information. The models underwent training and testing, and their performances were compared. The Decision Tree-based US radiomics approach had superior prediction performance for the BRAF V600E mutation compared with the other eight ML models, with an area under the curve (AUC) of 0.767 versus 0.547-0.675 (p < 0.05). The US radiomics approach employing Logistic Regression exhibited the highest accuracy in predicting TERT promoter mutations (AUC, 0.802 vs. 0.525-0.701, p < 0.001) and coexisting BRAF V600E and TERT promoter mutations (0.805 vs. 0.678-0.743, p < 0.001) in the test set. Incorporating clinical factors enhanced predictive performance to 0.810 for the BRAF V600E mutation, 0.897 for TERT promoter mutations, and 0.900 for dual mutations in PTCs. The machine learning-based US radiomics methods, integrated with clinical characteristics, were effective in predicting BRAF V600E and TERT promoter mutations in PTCs.
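A minimal sketch of the model-comparison step with scikit-learn, using synthetic features as a stand-in for the extracted radiomics features (the real pipeline derives them from grayscale US images); the model choices mirror the Decision Tree and Logistic Regression named above, but all data and hyperparameters here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for selected radiomics features and mutation labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train each candidate model and compare test-set AUCs
for name, clf in [
    ("Decision Tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("Logistic Regression", LogisticRegression(max_iter=1000)),
]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

In the study, nine such candidates were compared per mutation target, and the best one was then augmented with clinical covariates.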

Advancing Acoustic Droplet Vaporization for Tissue Characterization Using Quantitative Ultrasound and Transfer Learning.

Kaushik A, Fabiilli ML, Myers DD, Fowlkes JB, Aliabouzar M

PubMed, Jun 1 2025
Acoustic droplet vaporization (ADV) is an emerging technique with expanding applications in biomedical ultrasound. ADV-generated bubbles can function as microscale probes that provide insights into the mechanical properties of their surrounding microenvironment. This study investigated the acoustic and imaging characteristics of phase-shift nanodroplets in fibrin-based, tissue-mimicking hydrogels using passive cavitation detection and active imaging techniques, including B-mode and contrast-enhanced ultrasound. The findings demonstrated that the backscattered signal intensities and pronounced nonlinear acoustic responses, including subharmonic and higher harmonic frequencies, of ADV-generated bubbles correlated inversely with fibrin density. Additionally, we quantified the mean echo intensity, bubble cloud area, and second-order texture features of the generated ADV bubbles across varying fibrin densities. ADV bubbles in softer hydrogels displayed significantly higher mean echo intensities, larger bubble cloud areas, and more heterogeneous textures. In contrast, texture uniformity, characterized by variance, homogeneity, and energy, correlated directly with fibrin density. Furthermore, we incorporated transfer learning with convolutional neural networks, adapting AlexNet into two specialized models for differentiating fibrin hydrogels. The integration of deep learning techniques with ADV offers great potential, paving the way for future advancements in biomedical diagnostics.
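The second-order texture features mentioned above are typically derived from a gray-level co-occurrence matrix (GLCM); a minimal numpy version for energy and homogeneity, assuming a horizontal pixel offset and an image scaled to [0, 1], might look like this (full implementations exist, e.g. `skimage.feature.graycomatrix`):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Energy and homogeneity of a horizontal-offset GLCM.

    img : 2-D float array with values in [0, 1]
    """
    # Quantize intensities into `levels` gray bins
    q = np.minimum((img * levels).astype(int), levels - 1)
    # Count horizontal neighbor co-occurrences
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()  # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    energy = np.sqrt((glcm ** 2).sum())
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return energy, homogeneity

# A perfectly uniform image is maximally homogeneous
e, h = glcm_features(np.zeros((16, 16)))
print(e, h)  # 1.0 1.0
```

In the study's setting, bubble clouds in softer hydrogels would yield more heterogeneous textures (lower energy and homogeneity) than those in denser fibrin.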

Adaptive ensemble loss and multi-scale attention in breast ultrasound segmentation with UMA-Net.

Dar MF, Ganivada A

PubMed, Jun 1 2025
The generalization of deep learning (DL) models is critical for accurate lesion segmentation in breast ultrasound (BUS) images. Traditional DL models often struggle to generalize well due to the high frequency and scale variations inherent in BUS images. Moreover, conventional loss functions used in these models frequently result in imbalanced optimization, either prioritizing region overlap or boundary accuracy, which leads to suboptimal segmentation performance. To address these issues, we propose UMA-Net, an enhanced UNet architecture specifically designed for BUS image segmentation. UMA-Net integrates residual connections, attention mechanisms, and a bottleneck with atrous convolutions to effectively capture multi-scale contextual information without compromising spatial resolution. Additionally, we introduce an adaptive ensemble loss function that dynamically balances the contributions of different loss components during training, ensuring optimization across key segmentation metrics. This novel approach mitigates the imbalances found in conventional loss functions. We validate UMA-Net on five diverse BUS datasets-BUET, BUSI, Mendeley, OMI, and UDIAT-demonstrating superior performance. Our findings highlight the importance of addressing frequency and scale variations, confirming UMA-Net as a robust and generalizable solution for BUS image segmentation.
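One way to sketch an adaptive ensemble loss of the kind described, with a region term (Dice) and a pixel-wise term (binary cross-entropy) whose weights adapt inversely to each term's recent magnitude so neither dominates training; the weighting scheme is a hedged illustration, not the UMA-Net formula:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Region-overlap term
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    # Pixel-wise term
    p = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def adaptive_ensemble_loss(pred, target, history):
    """Combine loss terms with weights inverse to their running means."""
    losses = np.array([dice_loss(pred, target), bce_loss(pred, target)])
    history.append(losses)
    running_mean = np.mean(history[-10:], axis=0)  # recent magnitude per term
    w = 1.0 / (running_mean + 1e-8)
    w /= w.sum()  # normalized adaptive weights
    return float((w * losses).sum())

history = []
pred = np.full((4, 4), 0.9)
target = np.ones((4, 4))
print(adaptive_ensemble_loss(pred, target, history))
```

The inverse-magnitude weighting keeps a term that is already small (e.g. region overlap late in training) from being drowned out by a larger one, which is the imbalance the abstract attributes to fixed-weight conventional losses.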

Deep Learning to Localize Photoacoustic Sources in Three Dimensions: Theory and Implementation.

Gubbi MR, Bell MAL

PubMed, Jun 1 2025
Surgical tool tip localization and tracking are essential components of surgical and interventional procedures. The cross sections of tool tips can be treated as acoustic point sources to achieve these tasks with deep learning applied to photoacoustic channel data. However, source localization was previously limited to the lateral and axial dimensions of an ultrasound transducer. In this article, we developed a novel deep learning-based 3-D photoacoustic point source localization system using an object detection-based approach extended from our previous work. In addition, we derived theoretical relationships among point source locations, sound speeds, and waveform shapes in raw photoacoustic channel data frames, and used this theory to develop a novel deep learning instance segmentation-based 3-D point source localization system. When tested with 4000 simulated, 993 phantom, and 1983 ex vivo channel data frames, the two systems achieved F1 scores as high as 99.82%, 93.05%, and 98.20%, respectively, and Euclidean localization errors (mean ± one standard deviation) as low as 1.46 ± 1.11 mm, 1.58 ± 1.30 mm, and 1.55 ± 0.86 mm, respectively. In addition, the instance segmentation-based system simultaneously estimated sound speeds with absolute errors (mean ± one standard deviation) of 19.22 ± 26.26 m/s in simulated data and standard deviations ranging from 14.6 to 32.3 m/s in experimental data. These results demonstrate the potential of the proposed photoacoustic imaging-based methods to localize and track tool tips in three dimensions during surgical and interventional procedures.

Diagnosis of carpal tunnel syndrome using deep learning with comparative guidance.

Sim J, Lee S, Kim S, Jeong SH, Yoon J, Baek S

PubMed, Jun 1 2025
This study aims to develop a deep learning model for robust diagnosis of Carpal Tunnel Syndrome (CTS) based on comparative classification leveraging ultrasound images of the thenar and hypothenar muscles. We recruited 152 participants, both patients with varying severities of CTS and healthy individuals. The enrolled participants underwent ultrasonography, which provided ultrasound image data of the thenar and hypothenar muscles, innervated by the median and ulnar nerves, respectively. These images were used to train a deep learning model. We compared the performance of our model with previous comparative methods using the echo intensity ratio or machine learning, and with non-comparative methods based on deep learning. During training, comparative guidance based on cosine similarity was used so that the model learns to automatically identify abnormal differences in echotexture between the ultrasound images of the thenar and hypothenar muscles. The proposed deep learning model with comparative guidance showed the highest performance, and the comparison of receiver operating characteristic (ROC) curves between models demonstrated that the comparative guidance was effective in autonomously identifying complex features within the CTS dataset. The proposed model was thus effective in automatically identifying important features for CTS diagnosis from the ultrasound images, and the comparative approach was found to be robust to traditional problems in ultrasound image analysis such as differing cut-off values and anatomical variation among patients. The proposed deep learning methodology facilitates accurate and efficient diagnosis of CTS from ultrasound images.
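Comparative guidance based on cosine similarity can be sketched roughly as follows; the pairing of thenar/hypothenar feature embeddings and the exact loss form are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def comparative_guidance_loss(f_thenar, f_hypothenar, is_cts):
    """Push paired embeddings together for healthy hands and apart for
    CTS hands (median-innervated thenar differs from ulnar-innervated
    hypothenar when the median nerve is compressed)."""
    sim = cosine_similarity(f_thenar, f_hypothenar)
    # healthy: echotextures should agree (sim -> 1); CTS: differ (sim -> -1)
    return (1.0 - sim) if not is_cts else (1.0 + sim)

# Identical embeddings from a healthy hand incur ~zero loss
f1 = np.array([1.0, 0.5, 0.2])
print(comparative_guidance_loss(f1, f1, is_cts=False))
```

The embeddings `f_thenar` and `f_hypothenar` would come from a shared encoder applied to the two muscle images; only the comparison step is shown here.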

Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence.

Wang Q, He B, Yu J, Zhang B, Yang J, Liu J, Ma X, Wei S, Li S, Zheng H, Tang Z

PubMed, Jun 1 2025
Ultrasound-guided quadratus lumborum block (QLB) has become a widely used perioperative analgesia technique during abdominal and pelvic surgeries. Owing to the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks rely heavily on anesthesiologist experience, so using artificial intelligence (AI) to identify different tissue regions in ultrasound images is crucial. In our study, we retrospectively collected data from 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM, the external oblique muscle, the internal oblique muscle, the transversus abdominis muscle (collectively referred to as the EIT), and the bones. In evaluation, Q-VUM demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, Dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. The IoU, recall, precision, and Dice coefficient achieved for the QLM were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, the Q-VUM predictions showed that 85% of the pixels in the predicted block area fell within the actual blocked area. Finally, our model exhibited stronger segmentation performance than common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720). In summary, we proposed a model named Q-VUM that can accurately identify the anatomical structures relevant to the quadratus lumborum block in real time. This model aids anesthesiologists in precisely locating the nerve block site, thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.
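The reported metrics (IoU, Dice coefficient, pixel accuracy) can be computed for a single binary class as below; mIoU is simply the average of per-class IoU values:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """IoU, Dice coefficient, and pixel accuracy for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    denom = pred.sum() + gt.sum()
    dice = 2 * inter / denom if denom else 1.0
    acc = (pred == gt).mean()  # fraction of correctly labeled pixels
    return iou, dice, acc

# Toy example: prediction is a subset of the ground-truth region
gt = np.zeros((10, 10)); gt[2:8, 2:8] = 1
pred = np.zeros((10, 10)); pred[3:8, 3:8] = 1
iou, dice, acc = segmentation_metrics(pred, gt)
print(round(iou, 3), round(dice, 3), round(acc, 3))  # 0.694 0.82 0.89
```

Multi-class models like Q-VUM evaluate one such binary comparison per tissue class and average the results.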

Prediction of Lymph Node Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images With Size on CT and PET-CT Findings.

Oh JE, Chung HS, Gwon HR, Park EY, Kim HY, Lee GK, Kim TS, Hwangbo B

PubMed, Jun 1 2025
Echo features of lymph nodes (LNs) influence target selection during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). This study evaluates deep learning's diagnostic capabilities on EBUS images for detecting mediastinal LN metastasis in lung cancer, emphasising the added value of integrating a region of interest (ROI), LN size on CT, and PET-CT findings. We analysed 2901 EBUS images from 2055 mediastinal LN stations in 1454 lung cancer patients. ResNet18-based deep learning models were developed to classify images of true positive malignant and true negative benign LNs diagnosed by EBUS-TBNA using different inputs: original images, ROI images, and CT size and PET-CT data. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and other diagnostic metrics. The model using only original EBUS images showed the lowest AUROC (0.870) and accuracy (80.7%) in classifying LN images. Adding ROI information slightly increased the AUROC (0.896) without a significant difference (p = 0.110). Further adding CT size resulted in a minimal change in AUROC (0.897), while adding PET-CT (original + ROI + PET-CT) showed a significant improvement (0.912, p = 0.008 vs. original; p = 0.002 vs. original + ROI + CT size). The model combining original and ROI EBUS images with CT size and PET-CT findings achieved the highest AUROC (0.914, p = 0.005 vs. original; p = 0.018 vs. original + ROI + PET-CT) and accuracy (82.3%). Integrating an ROI, LN size on CT, and PET-CT findings into the deep learning analysis of EBUS images significantly enhances the diagnostic capability of models for detecting mediastinal LN metastasis in lung cancer, with the integration of PET-CT data having a substantial impact.
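The AUROC used throughout these comparisons is equivalent to the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U interpretation); a minimal numpy implementation:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via pairwise positive-vs-negative comparisons."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Count positive scores exceeding negative scores; ties count as half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separated malignant (1) vs. benign (0) scores give AUROC 1.0
print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```

Library implementations such as `sklearn.metrics.roc_auc_score` compute the same quantity efficiently via rank statistics; the pairwise form above is shown only for clarity.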

Tailoring ventilation and respiratory management in pediatric critical care: optimizing care with precision medicine.

Beauchamp FO, Thériault J, Sauthier M

PubMed, Jun 1 2025
Critically ill children admitted to the intensive care unit frequently need respiratory care to support lung function. Mechanical ventilation is a complex field with multiple parameters to set. The development of precision medicine will allow clinicians to personalize respiratory care and improve patient outcomes. Lung and diaphragmatic ultrasound, electrical impedance tomography, neurally adjusted ventilatory assist ventilation, and the use of monitoring data in machine learning models are increasingly used to tailor care. Each modality offers insight into a different aspect of the patient's respiratory function and enables treatment to be adjusted to better support the patient's physiology. Precision medicine in respiratory care has been associated with decreased ventilation time, increased extubation and ventilation weaning success, and an improved ability to identify phenotypes to guide treatment and predict outcomes. This review focuses on the use of precision medicine in pediatric acute respiratory distress syndrome, asthma, bronchiolitis, extubation readiness trials and ventilation weaning, ventilator-acquired pneumonia, and other respiratory tract infections. Precision medicine is revolutionizing respiratory care and will decrease complications associated with ventilation. More research is needed to standardize its use and better evaluate its impact on patient outcomes.