Page 23 of 26252 results

Recognizing artery segments on carotid ultrasonography using embedding concatenation of deep image and vision-language models.

Lo CM, Sung SF

pubmed · May 14 2025
Evaluating large artery atherosclerosis is critical for predicting and preventing ischemic strokes. Ultrasonographic assessment of the carotid arteries is the preferred first-line examination because of its ease of use, noninvasive nature, and absence of radiation exposure. This study proposed an automated classification model for the common carotid artery (CCA), carotid bulb, internal carotid artery (ICA), and external carotid artery (ECA) to enhance the quantification of carotid artery examinations. Approach: A total of 2,943 B-mode ultrasound images (CCA: 1,563; bulb: 611; ICA: 476; ECA: 293) from 288 patients were collected. Three distinct sets of embedding features were extracted from pre-trained artificial intelligence networks, namely DenseNet201, vision transformer (ViT), and echo contrastive language-image pre-training (EchoCLIP) models, using deep learning architectures for pattern recognition. These features were then combined in a support vector machine (SVM) classifier to interpret the anatomical structures in B-mode images. Main results: After ten-fold cross-validation, the model achieved an accuracy of 82.3%, significantly better than any individual feature set (p < 0.001). Significance: The proposed model could make carotid artery examinations more accurate and consistent. The source code is available at https://github.com/buddykeywordw/Artery-Segments-Recognition.
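The embedding-concatenation step the abstract describes can be sketched as below. This is not the authors' code: the toy vectors stand in for real model outputs, and the EchoCLIP embedding width (512) is an assumption; DenseNet201 pooled features are 1920-d and a ViT-Base CLS embedding is 768-d.

```python
# Sketch of embedding concatenation: three per-image feature vectors are
# joined into one input for a downstream SVM classifier.

def concatenate_embeddings(densenet_feat, vit_feat, echoclip_feat):
    """Join per-model embedding vectors into a single feature vector."""
    return list(densenet_feat) + list(vit_feat) + list(echoclip_feat)

# Toy vectors standing in for real model outputs (widths noted above;
# the EchoCLIP width is an assumption).
densenet = [0.1] * 1920
vit = [0.2] * 768
echoclip = [0.3] * 512

combined = concatenate_embeddings(densenet, vit, echoclip)
assert len(combined) == 1920 + 768 + 512  # 3200-d fused feature
```

In practice the fused vectors would be fed to something like scikit-learn's `SVC` for the four-class artery-segment decision.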

Novel AI Guided Non-Expert Compression Ultrasound DVT Diagnostic Pathway May Reduce Vascular Laboratory Venous Testing.

Avgerinos E, Spiliopoulos S, Psachoulia F, Yfantis A, Plakas G, Grigoriadis S, Speranza G, Kakisis Y

pubmed · May 14 2025
Ultrasonography and D-dimer testing are established modalities for evaluating potential lower extremity deep venous thrombosis (DVT). The ThinkSono Guidance system is AI-based software that allows non-ultrasound-trained providers to perform compression ultrasounds for evaluation by remote interpreters. This study evaluates its clinical utilisation and potential to reduce venous duplex scans and waiting times. Patients with suspected DVT were prospectively recruited through the institution's emergency department. Patients underwent an AI-guided two-region proximal DVT compression examination by non-ultrasound-trained providers using the ThinkSono Guidance system, plus D-dimer testing. Ultrasound images remotely reviewed by the on-call radiologist were rated for diagnostic quality; all images of sufficient quality were assessed as either "Compressible/no proximal DVT" or "Inadequate imaging/possible DVT". All patients assessed as "compressible" with negative D-dimers were discharged. All other patients were sent for a venous duplex scan. Time to diagnosis, sensitivity, and specificity of ThinkSono Guidance against D-dimers and full duplex scans were calculated. Fifty-three patients (average age 56 ± 18 years, 45% female) were scanned with ThinkSono Guidance by one of three non-ultrasound-trained providers. All scans were of diagnostic quality. ThinkSono Guidance with radiologist review yielded 45 negative DVT diagnoses (85%). Seventeen of these with negative D-dimers were discharged (32%); 28 required duplex ultrasound testing per trial protocol (23 due to positive D-dimers, five due to unavailability of D-dimer). All of these duplexes were negative (100% sensitivity). Eight patients were suspected of DVT by the reviewing radiologist, and duplex confirmed DVT in six patients (96% ThinkSono Guidance specificity, 36% D-dimer specificity). ThinkSono Guidance scans averaged 6.75 minutes for scan and review. The median time from scan initiation to review was 37.5 minutes. This suggests a significant proportion of patients with suspected DVT could safely avoid duplex ultrasound and D-dimer testing using the ThinkSono system, setting the basis for a novel AI-assisted diagnostic pathway.
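The pathway's disposition rule can be written as a small decision function. This is a minimal sketch (not the ThinkSono software, and the function name is hypothetical): a compressible scan plus a negative D-dimer means discharge; everything else goes to formal duplex testing.

```python
# Sketch of the triage rule described above (hypothetical helper, not the
# vendor's API).

def triage(compressible, d_dimer_negative):
    """Return the pathway disposition for one patient.

    compressible: remote reader rated the scan 'Compressible/no proximal DVT'.
    d_dimer_negative: False if D-dimer positive, None if unavailable.
    """
    if compressible and d_dimer_negative:
        return "discharge"            # negative scan + negative D-dimer
    return "duplex ultrasound"        # all other combinations

assert triage(True, True) == "discharge"
assert triage(True, False) == "duplex ultrasound"   # positive D-dimer
assert triage(True, None) == "duplex ultrasound"    # D-dimer unavailable
assert triage(False, True) == "duplex ultrasound"   # possible DVT on scan
```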

DEMAC-Net: A Dual-Encoder Multiattention Collaborative Network for Cervical Nerve Pathway and Adjacent Anatomical Structure Segmentation.

Cui H, Duan J, Lin L, Wu Q, Guo W, Zang Q, Zhou M, Fang W, Hu Y, Zou Z

pubmed · May 13 2025
Currently, cervical anesthesia is performed using three main approaches: superficial cervical plexus block, deep cervical plexus block, and intermediate plexus nerve block. However, each technique carries inherent risks and demands significant clinical expertise. Ultrasound imaging, known for its real-time visualization capabilities and accessibility, is widely used in both diagnostic and interventional procedures. Nevertheless, accurate segmentation of small and irregularly shaped structures such as the cervical and brachial plexuses remains challenging due to image noise, complex anatomical morphology, and limited annotated training data. This study introduces DEMAC-Net, a dual-encoder multiattention collaborative network, to significantly improve the segmentation accuracy of these neural structures. By precisely identifying the cervical nerve pathway (CNP) and adjacent anatomical tissues, DEMAC-Net aims to assist clinicians, especially those less experienced, in effectively guiding anesthesia procedures and accurately identifying optimal needle insertion points. Consequently, this improvement is expected to enhance clinical safety, reduce procedural risks, and streamline decision-making efficiency during ultrasound-guided regional anesthesia. DEMAC-Net combines a dual-encoder architecture with the Spatial Understanding Convolution Kernel (SUCK) and the Spatial-Channel Attention Module (SCAM) to extract multi-scale features effectively. Additionally, a Global Attention Gate (GAG) and inter-layer fusion modules refine relevant features while suppressing noise. A novel dataset, the Neck Ultrasound Dataset (NUSD), was introduced, containing 1,500 annotated ultrasound images across seven anatomical regions. Extensive experiments were conducted on both NUSD and the public BUSI dataset, comparing DEMAC-Net to state-of-the-art models using metrics such as Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). On the NUSD dataset, DEMAC-Net achieved a mean DSC of 93.3%, outperforming existing models. For external validation on the BUSI dataset, it demonstrated superior generalization, achieving a DSC of 87.2% and a mean IoU of 77.4%, surpassing other advanced methods. Notably, DEMAC-Net displayed consistent segmentation stability across all tested structures. The proposed DEMAC-Net significantly improves segmentation accuracy for small nerves and complex anatomical structures in ultrasound images, outperforming existing methods in terms of accuracy and computational efficiency. This framework holds great potential for enhancing ultrasound-guided procedures, such as peripheral nerve blocks, by providing more precise anatomical localization, ultimately improving clinical outcomes.
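The DSC and IoU metrics used in these comparisons are standard overlap measures on binary masks; a minimal reference computation (masks here are flat 0/1 toy sequences, not study data):

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union for two
    binary masks given as equal-length flat 0/1 sequences."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)
assert abs(dice - 4 / 6) < 1e-9   # 2*|P∩T| / (|P|+|T|)
assert abs(iou - 2 / 4) < 1e-9    # |P∩T| / |P∪T|
```

Note that Dice is always at least IoU for the same pair of masks, which is why papers often report both.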

Improving AI models for rare thyroid cancer subtype by text guided diffusion models.

Dai F, Yao S, Wang M, Zhu Y, Qiu X, Sun P, Qiu C, Yin J, Shen G, Sun J, Wang M, Wang Y, Yang Z, Sang J, Wang X, Sun F, Cai W, Zhang X, Lu H

pubmed · May 13 2025
Artificial intelligence applications in oncology imaging often struggle with diagnosing rare tumors. We identify significant gaps in detecting uncommon thyroid cancer types with ultrasound, where scarce data leads to frequent misdiagnosis. Traditional augmentation strategies do not capture the unique disease variations, hindering model training and performance. To overcome this, we propose a text-driven generative method that fuses clinical insights with image generation, producing synthetic samples that realistically reflect rare subtypes. In rigorous evaluations, our approach achieves substantial gains in diagnostic metrics, surpasses existing methods in authenticity and diversity measures, and generalizes effectively to other private and public datasets with various rare cancers. In this work, we demonstrate that text-guided image augmentation substantially enhances model accuracy and robustness for rare tumor detection, offering a promising avenue for more reliable and widespread clinical adoption.

Deep learning based on ultrasound images to predict platinum resistance in patients with epithelial ovarian cancer.

Su C, Miao K, Zhang L, Dong X

pubmed · May 13 2025
This study aimed to develop and validate a deep learning (DL) model based on ultrasound imaging for predicting platinum resistance in patients with epithelial ovarian cancer (EOC). This retrospective study enrolled 392 patients diagnosed with EOC between 2014 and 2020 who underwent pelvic ultrasound before initial treatment. A DL model was developed to predict platinum resistance, and the model was evaluated using receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and calibration curves. The ROC curves showed that the area under the curve (AUC) of the DL model for predicting platinum resistance in the internal and external test sets was 0.86 (95% CI 0.83-0.90) and 0.86 (95% CI 0.84-0.89), respectively. The model demonstrated high clinical value on decision curve analysis and exhibited good calibration in the training cohort. Kaplan-Meier analyses showed that the model's optimal cutoff value successfully distinguished between patients at high and low risk of recurrence, with hazard ratios of 3.1 (95% CI 2.3-4.1, P < 0.0001) and 2.9 (95% CI 2.3-3.9; P < 0.0001) in the high-risk groups of the internal and external test sets, serving as a prognostic indicator. The DL model based on ultrasound imaging can predict platinum resistance in patients with EOC and may support clinicians in making the most appropriate treatment decisions.
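The AUC the study reports has a simple pair-counting interpretation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A hand-rolled sketch (illustrative scores, not study data; real analyses would use a stats package):

```python
# Empirical AUC via pair counting: fraction of positive/negative pairs the
# model ranks correctly, ties counting half.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
# positives {0.9, 0.8, 0.3} vs negatives {0.4, 0.2}: 5 of 6 pairs ranked right
assert abs(auc(scores, labels) - 5 / 6) < 1e-9
```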

Use of Artificial Intelligence in Recognition of Fetal Open Neural Tube Defect on Prenatal Ultrasound.

Kumar M, Arora U, Sengupta D, Nain S, Meena D, Yadav R, Perez M

pubmed · May 12 2025
To compare axial cranial ultrasound images of normal fetuses and fetuses with open neural tube defect (NTD) using a deep learning (DL) model, and to assess its predictive accuracy in identifying open NTD. This was a prospective case-control study. Axial trans-thalamic fetal ultrasound images of participants with open fetal NTD and normal controls between 14 and 28 weeks of gestation were taken after consent. The images were randomly divided into training, testing, and validation datasets in a 70:15:15 ratio, then processed and classified using DL convolutional neural network (CNN) transfer learning (TL) models. The TL models were trained for 50 epochs. The data were analyzed in terms of Cohen kappa score, accuracy, area under the receiver operating characteristic curve (AUROC), F1 score, sensitivity, and specificity. A total of 59 cases and 116 controls completed follow-up. EfficientNet B0, Visual Geometry Group (VGG), and Inception V3 TL models were used. Both EfficientNet B0 and VGG16 gave similarly high training and validation accuracy (100% and 95.83%, respectively). With Inception V3, the training and validation accuracy was 98.28% and 95.83%, respectively. The sensitivity and specificity of EfficientNet B0 were 100% and 89%, respectively, the best among the models. The analysis of changes in axial images of the fetal cranium using the DL model EfficientNet B0 proved effective for clinical identification of open NTD.
· Open spina bifida is often missed due to nonrecognition of the lemon sign on ultrasound.
· Image classification using DL identified open spina bifida with excellent accuracy.
· The research is clinically relevant in low- and middle-income countries.
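The 70:15:15 random split described above can be sketched as follows (hypothetical helper, not the authors' code; a real pipeline would typically also stratify by class):

```python
import random

def split_70_15_15(items, seed=0):
    """Shuffle and split into train/test/validation in a 70:15:15 ratio."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = round(0.70 * n)
    n_test = round(0.15 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])

# 59 cases + 116 controls = 175 images' worth of IDs, as in the study cohort
train, test, val = split_70_15_15(list(range(175)))
assert (len(train), len(test), len(val)) == (122, 26, 27)
assert sorted(train + test + val) == list(range(175))  # no leakage or loss
```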

Artificial intelligence-assisted diagnosis of early allograft dysfunction based on ultrasound image and data.

Meng Y, Wang M, Niu N, Zhang H, Yang J, Zhang G, Liu J, Tang Y, Wang K

pubmed · May 12 2025
Early allograft dysfunction (EAD) significantly affects liver transplantation prognosis. This study evaluated the effectiveness of artificial intelligence (AI)-assisted methods in accurately diagnosing EAD and identifying its causes. A total of 582 liver transplant patients who underwent transplantation between December 2012 and June 2021 were selected. Among these, 117 patients (mean age 33.5 ± 26.5 years, 80 men) were evaluated. EAD classification followed the criteria established by Olthoff et al. The ultrasound parameters, images, and clinical information of patients were extracted from the database to train the AI model. The primary metric for assessing accuracy was the area under the receiver operating characteristic curve (AUC); accuracy, sensitivity, and specificity were also calculated to compare the AI models with each other and with radiologists. The AUC for the ultrasound-spectrogram fusion network constructed from four ultrasound images and medical data was 0.968 (95% CI: 0.940, 0.991), outperforming radiologists by 30% across all metrics. AI assistance significantly improved diagnostic accuracy, sensitivity, and specificity (P < 0.05) for both experienced and less experienced physicians. EAD lacks efficient diagnosis and causation analysis methods. The integration of AI and ultrasound enhances diagnostic accuracy and causation analysis. By modeling only images and data related to blood flow, the AI model effectively analyzed patients with EAD caused by abnormal blood supply. Our model can assist radiologists in reducing judgment discrepancies, potentially benefiting patients with EAD in underdeveloped regions. Furthermore, it enables targeted treatment for those with abnormal blood supply.

Creation of an Open-Access Lung Ultrasound Image Database For Deep Learning and Neural Network Applications

Kumar, A., Nandakishore, P., Gordon, A. J., Baum, E., Madhok, J., Duanmu, Y., Kugler, J.

medrxiv preprint · May 11 2025
BackgroundLung ultrasound (LUS) offers advantages over traditional imaging for diagnosing pulmonary conditions, with superior accuracy compared to chest X-ray and similar performance to CT at lower cost. Despite these benefits, widespread adoption is limited by operator dependency, moderate interrater reliability, and training requirements. Deep learning (DL) could potentially address these challenges, but development of effective algorithms is hindered by the scarcity of comprehensive image repositories with proper metadata. MethodsWe created an open-source dataset of LUS images derived a multi-center study involving N=226 adult patients presenting with respiratory symptoms to emergency departments between March 2020 and April 2022. Images were acquired using a standardized scanning protocol (12-zone or modified 8-zone) with various point-of-care ultrasound devices. Three blinded researchers independently analyzed each image following consensus guidelines, with disagreements adjudicated to provide definitive interpretations. Videos were pre-processed to remove identifiers, and frames were extracted and resized to 128x128 pixels. ResultsThe dataset contains 1,874 video clips comprising 303,977 frames. Half of the participants (50%) had COVID-19 pneumonia. Among all clips, 66% contained no abnormalities, 18% contained B-lines, 4.5% contained consolidations, 6.4% contained both B-lines and consolidations, and 5.2% had indeterminate findings. Pathological findings varied significantly by lung zone, with anterior zones more frequently normal and less likely to show consolidations compared to lateral and posterior zones. DiscussionThis dataset represents one of the largest annotated LUS repositories to date, including both COVID-19 and non-COVID-19 patients. The comprehensive metadata and expert interpretations enhance its utility for DL applications. 
Despite limitations including potential device-specific characteristics and COVID-19 predominance, this repository provides a valuable resource for developing AI tools to improve LUS acquisition and interpretation.
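The frame-resizing step in the preprocessing pipeline can be sketched with a nearest-neighbour downsample; this is an illustration only (a real pipeline would use an image library such as OpenCV or Pillow, and the interpolation method used by the authors is not stated):

```python
def resize_nn(frame, out_h=128, out_w=128):
    """Nearest-neighbour resize of a 2-D list-of-lists grayscale frame."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]

# Toy 480x640 frame standing in for an extracted video frame.
frame = [[r * 1000 + c for c in range(640)] for r in range(480)]
small = resize_nn(frame)
assert len(small) == 128 and len(small[0]) == 128
assert small[0][0] == frame[0][0]   # the top-left corner maps to the source
```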

Intra- and Peritumoral Radiomics Based on Ultrasound Images for Preoperative Differentiation of Follicular Thyroid Adenoma, Carcinoma, and Follicular Tumor With Uncertain Malignant Potential.

Fu Y, Mei F, Shi L, Ma Y, Liang H, Huang L, Fu R, Cui L

pubmed · May 10 2025
Differentiating between follicular thyroid adenoma (FTA), carcinoma (FTC), and follicular tumor with uncertain malignant potential (FT-UMP) remains challenging due to their overlapping ultrasound characteristics. This retrospective study aimed to enhance preoperative diagnostic accuracy by utilizing intra- and peritumoral radiomics based on ultrasound images. We collected post-thyroidectomy ultrasound images from 774 patients diagnosed with FTA (n = 429), FTC (n = 158), or FT-UMP (n = 187) between January 2018 and December 2023. Six peritumoral regions were expanded by 5%-30% in 5% increments, with the segment-anything model using prompt learning to detect the field of view and constrain the expanded boundaries. A stepwise classification strategy addressing three tasks was implemented: distinguishing FTA from the other types (task 1), differentiating FTC from FT-UMP (task 2), and classifying all three tumors (task 3). Diagnostic models were developed by combining radiomic features from tumor and peritumoral regions with clinical characteristics. Clinical characteristics combined with intratumoral and 5% peritumoral radiomic features performed best across all tasks (test set: areas under the curve, 0.93 for task 1 and 0.90 for task 2; three-class diagnostic accuracy, 79.9%). The DeLong test indicated that all peritumoral radiomics significantly improved on intratumoral radiomics and clinical characteristics alone (p < 0.04). The 5% peritumoral regions showed the best performance, though not all results were significant (p = 0.01-0.91). Ultrasound-based intratumoral and peritumoral radiomics can significantly enhance preoperative diagnostic accuracy for FTA, FTC, and FT-UMP, leading to improved treatment strategies and patient outcomes. Furthermore, the 5% peritumoral area may indicate regions of potential tumor invasion requiring further investigation.
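The 5%-30% peritumoral expansion scheme can be illustrated on a bounding box (a simplification: the study expands segmented contours, not rectangles, and clips them with a segment-anything field-of-view mask; the helper below is hypothetical):

```python
def expand_box(x0, y0, x1, y1, pct, width, height):
    """Grow a tumour bounding box by pct% of its size per side,
    clipped to the image extent."""
    dx = (x1 - x0) * pct / 100.0
    dy = (y1 - y0) * pct / 100.0
    return (max(0, x0 - dx), max(0, y0 - dy),
            min(width, x1 + dx), min(height, y1 + dy))

# 5%-30% rings in 5% steps, as in the study design, on a toy 640x480 image.
rings = [expand_box(100, 100, 300, 200, p, 640, 480) for p in range(5, 31, 5)]
assert rings[0] == (90.0, 95.0, 310.0, 205.0)    # 5% ring
assert rings[-1] == (40.0, 70.0, 360.0, 230.0)   # 30% ring
```

Radiomic features would then be extracted separately from the tumour region and from each ring minus the tumour.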

Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

H M Dipu Kabir, Subrota Kumar Mondal, Mohammad Ali Moni

arxiv preprint · May 10 2025
This paper proposes batch augmentation with unimodal fine-tuning to detect fetal organs from ultrasound images and associated clinical textual information. We also prescribe pre-training initial layers with investigated medical data before the multimodal training. At first, we apply a transferred initialization with the unimodal image portion of the dataset with batch augmentation. This step adjusts the initial layer weights for medical data. Then, we apply neural networks (NNs) with fine-tuned initial layers to images in batches with batch augmentation to obtain features. We also extract information from descriptions of images. We combine this information with features obtained from images to train the head layer. We write a dataloader script to load the multimodal data and use existing unimodal image augmentation techniques with batch augmentation for the multimodal data. The dataloader draws a new random augmentation for each batch to achieve good generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods. We achieve near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share the scripts of the proposed method with traditional counterparts at the following repository: github.com/dipuk0506/multimodal
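The per-batch random augmentation idea can be sketched as a generator that redraws one augmentation per batch and applies it to every sample in that batch. This is a minimal illustration (the function names and toy augmentations are assumptions, not the repository's code):

```python
import random

def batch_augment_loader(samples, batch_size, augmentations, seed=0):
    """Yield batches where one randomly chosen augmentation is applied to
    every sample in the batch, redrawn per batch for generalization."""
    rng = random.Random(seed)
    order = samples[:]
    rng.shuffle(order)
    for start in range(0, len(order), batch_size):
        aug = rng.choice(augmentations)          # one draw per batch
        yield [aug(s) for s in order[start:start + batch_size]]

# Toy "augmentations" on numbers standing in for image transforms.
identity = lambda x: x
negate = lambda x: -x

batches = list(batch_augment_loader(list(range(1, 11)), 4, [identity, negate]))
assert sum(len(b) for b in batches) == 10
for b in batches:
    assert len({v > 0 for v in b}) == 1   # one augmentation per batch
```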