
Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

PubMed · Jun 9 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating through the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions, and challenges in detecting out-of-plane motion. This work addresses these issues by proposing a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We developed a hybrid deep learning-based tracking framework combining convolutional neural networks (CNNs) and a transformer backbone: the CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. This model was integrated with a robotic arm that adaptively scans and tracks the capsule. The system's performance was evaluated using ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We also demonstrated 3D tracking in a more complex workspace featuring two curved sections to simulate anatomical challenges, indicating the system's strong resilience to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and successfully re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. This study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusions, view loss, and image artifacts, offering millimeter-level tracking accuracy, and it significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.
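As a rough illustration of the architecture described above, the sketch below pairs a small CNN stem with a transformer encoder over its feature-map tokens, ending in detection and centroid heads. All layer sizes, the token grid, and the two-head design are illustrative assumptions, not the authors' published network.

```python
import torch
import torch.nn as nn

class CNNTransformerTracker(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # CNN stem: encodes local spatial features from a 1-channel B-mode frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: captures long-range context across feature-map tokens.
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Heads (assumed): capsule presence logit and normalized (x, y) centroid.
        self.detect_head = nn.Linear(d_model, 1)
        self.centroid_head = nn.Linear(d_model, 2)

    def forward(self, x):                       # x: (B, 1, H, W)
        f = self.cnn(x)                         # (B, d_model, H/8, W/8)
        tokens = f.flatten(2).transpose(1, 2)   # (B, N, d_model) token sequence
        ctx = self.encoder(tokens).mean(dim=1)  # pooled contextual embedding
        return self.detect_head(ctx), self.centroid_head(ctx)

frame = torch.randn(1, 1, 256, 256)             # dummy B-mode frame
logit, xy = CNNTransformerTracker()(frame)
print(logit.shape, xy.shape)                    # torch.Size([1, 1]) torch.Size([1, 2])
```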

Evaluation of AI diagnostic systems for breast ultrasound: comparative analysis with radiologists and the effect of AI assistance.

Tsuyuzaki S, Fujioka T, Yamaga E, Katsuta L, Mori M, Yashima Y, Hara M, Sato A, Onishi I, Tsukada J, Aruga T, Kubota K, Tateishi U

PubMed · Jun 9 2025
The purpose of this study is to evaluate the diagnostic accuracy of an artificial intelligence (AI)-based computer-aided diagnosis (CADx) system for breast ultrasound, compare its performance with that of radiologists, and assess the effect of AI-assisted diagnosis, focusing on the system's ability to differentiate between benign and malignant breast masses in Japanese patients. This retrospective study included 171 breast mass ultrasound images (92 benign, 79 malignant). The AI system, BU-CAD™, provided Breast Imaging Reporting and Data System (BI-RADS) categorization, which was compared with the performance of three radiologists. Diagnostic accuracy, sensitivity, specificity, and area under the curve (AUC) were analyzed. Radiologists' diagnostic performance with and without AI assistance was also compared, and their reading time was measured with a stopwatch. The AI system demonstrated a sensitivity of 91.1%, a specificity of 92.4%, and an AUC of 0.948. Its diagnostic performance was comparable to that of Radiologist 1, who had 10 years of experience in breast imaging (0.948 vs. 0.950; p = 0.893), and superior to that of Radiologist 2 (7 years of experience; 0.948 vs. 0.881; p = 0.015) and Radiologist 3 (3 years of experience; 0.948 vs. 0.832; p = 0.001). AI assistance significantly improved the AUC for Radiologists 2 and 3 (p = 0.001 and 0.005, respectively), but not for Radiologist 1 (p = 0.139). AI assistance also reduced reading time for all radiologists; although AI and Radiologist 1 did not differ significantly in diagnostic performance, AI substantially decreased Radiologist 1's diagnosis time as well. The AI system significantly improved diagnostic efficiency and accuracy, particularly for junior radiologists, highlighting its potential clinical utility in breast ultrasound diagnostics.
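For readers reproducing these metrics on their own data, the snippet below shows the standard arithmetic behind sensitivity, specificity, and AUC for a binary benign/malignant classifier. The label and score arrays are synthetic placeholders, and the 0.5 operating threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=171)                  # 0 = benign, 1 = malignant
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, 171), 0, 1)
y_pred = (y_score >= 0.5).astype(int)                  # assumed operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                           # true positive rate
specificity = tn / (tn + fp)                           # true negative rate
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```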

Ultrasound Radiomics and Dual-Mode Ultrasonic Elastography Based Machine Learning Model for the Classification of Benign and Malignant Thyroid Nodules.

Yan J, Zhou X, Zheng Q, Wang K, Gao Y, Liu F, Pan L

PubMed · Jun 9 2025
The present study aimed to construct a random forest (RF) model based on ultrasound radiomics and elastography, offering a new approach to the differentiation of thyroid nodules (TNs). We retrospectively analyzed 152 TNs from 127 patients. Examinations were performed using the Resona 9Pro equipped with a 15-4 MHz linear array probe, and the region of interest (ROI) was delineated with 3D Slicer. Using the RF algorithm, four models were developed from sound touch elastography (STE) parameters, strain elastography (SE) parameters, and the selected radiomic features: the STE model, the SE model, the radiomics model, and the combined model. Decision curve analysis (DCA) was employed to assess the clinical benefit of each model, and the DeLong test was used to determine whether differences in the area under the curve (AUC) between models were statistically significant. A total of 1396 radiomic features were extracted using the PyRadiomics package; after screening, 7 radiomic features were ultimately included in model construction. The AUCs of the STE, SE, radiomics, and combined models were 0.699 (95% CI: 0.570-0.828), 0.812 (95% CI: 0.683-0.941), 0.851 (95% CI: 0.739-0.964), and 0.911 (95% CI: 0.806-1.000), respectively, with the combined and radiomics models performing best. The combined model, integrating elastography and radiomics, demonstrates superior predictive accuracy compared to the single models, offering a promising approach for the diagnosis of TNs.
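A minimal sketch of the RF modeling step follows: fit a random forest on a nodule-by-feature matrix (standing in for the 7 selected radiomic features plus elastography parameters) and report a test AUC with a confidence interval. The data are synthetic, and the bootstrap CI is a stand-in for the paper's DeLong comparison, which requires a dedicated implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(152, 9))   # 152 nodules; 7 radiomic + 2 elastography features (assumed)
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, size=152)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
prob = rf.predict_proba(X_te)[:, 1]

# Bootstrap the test set for a 95% CI on the AUC.
aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) == 2:        # need both classes in the resample
        aucs.append(roc_auc_score(y_te[idx], prob[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC={roc_auc_score(y_te, prob):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```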

HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains

Shijie Wang, Yilun Zhang, Zeyu Lai, Dexing Kong

arXiv preprint · Jun 9 2025
Multimodal large language models (MLLMs) have shown great potential in general domains but perform poorly in some specific domains due to a lack of domain-specific data, such as image-text or video-text data. In many such domains, abundant image and text data exist but are scattered and lack standardized organization. In the field of medical ultrasound, for example, there are ultrasound diagnostic books, clinical guidelines, diagnostic reports, and so on; however, these materials are typically stored as PDFs, images, and other formats that cannot be used directly for MLLM training. This paper proposes a novel image-text reasoning supervised fine-tuning data generation pipeline that creates domain-specific quadruplets (image, question, thinking trace, and answer) from such materials. A medical ultrasound dataset, ReMUD, is established, containing over 45,000 reasoning and non-reasoning supervised fine-tuning Question Answering (QA) and Visual Question Answering (VQA) samples. The ReMUD-7B model, fine-tuned on Qwen2.5-VL-7B-Instruct, outperforms general-domain MLLMs in the medical ultrasound field. To facilitate research, the ReMUD dataset, data generation codebase, and ReMUD-7B parameters will be released at https://github.com/ShiDaizi/ReMUD, addressing the data shortage issue in specific-domain MLLMs.
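The quadruplet format lends itself to a simple record builder. The sketch below packs (image, question, thinking trace, answer) into a JSONL line for supervised fine-tuning; the field names and chat schema are assumptions, since the exact ReMUD format is defined by the released codebase.

```python
import json

def make_quadruplet(image_path: str, question: str, thinking: str, answer: str) -> dict:
    """Pack one ultrasound QA/VQA sample for SFT of a multimodal LLM (assumed schema)."""
    return {
        "image": image_path,
        "conversations": [
            {"role": "user", "content": question},
            # Reasoning trace kept inline so non-reasoning samples can simply omit it.
            {"role": "assistant", "content": f"<think>{thinking}</think>\n{answer}"},
        ],
    }

sample = make_quadruplet(
    "us/thyroid_0001.png",                                        # hypothetical path
    "Is the nodule in this image more likely benign or malignant?",
    "Hypoechoic, taller-than-wide, irregular margins suggest a high-risk lesion.",
    "Malignant features are present; biopsy is recommended.",
)
with open("remud_sft.jsonl", "w") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```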

Integration of artificial intelligence into cardiac ultrasonography practice.

Shaulian SY, Gala D, Makaryus AN

PubMed · Jun 9 2025
Over the last several decades, echocardiography has made numerous technological advancements, one of the most significant being the integration of artificial intelligence (AI). AI algorithms assist novice operators in acquiring diagnostic-quality images and automate complex analyses. This review explores the integration of AI into various echocardiographic modalities, including transthoracic, transesophageal, intracardiac, and point-of-care ultrasound, and examines how AI enhances image acquisition, streamlines analysis, and improves diagnostic performance across routine, critical care, and complex cardiac imaging. To conduct this review, PubMed was searched using targeted keywords aligned with each section of the paper, focusing primarily on peer-reviewed articles published from 2020 onward; earlier studies were included when foundational or frequently cited. The findings were organized thematically to highlight clinical relevance and practical applications. Challenges persist in clinical application, including algorithmic bias, ethical concerns, and the need for clinician training and AI oversight. Despite these challenges, AI's potential to revolutionize cardiovascular care through precision and accessibility remains unparalleled, with benefits likely to far outweigh obstacles if AI is appropriately applied and implemented in cardiac ultrasonography.

Foundation versus domain-specific models for left ventricular segmentation on cardiac ultrasound.

Chao CJ, Gu YR, Kumar W, Xiang T, Appari L, Wu J, Farina JM, Wraith R, Jeong J, Arsanjani R, Kane GC, Oh JK, Langlotz CP, Banerjee I, Fei-Fei L, Adeli E

PubMed · Jun 6 2025
The Segment Anything Model (SAM) was fine-tuned on the EchoNet-Dynamic dataset and evaluated on external transthoracic echocardiography (TTE) and point-of-care ultrasound (POCUS) datasets from CAMUS (University Hospital of St Etienne) and Mayo Clinic (99 patients: 58 TTE, 41 POCUS). The fine-tuned SAM was superior or comparable to MedSAM, and also outperformed EchoNet and U-Net models, demonstrating strong generalization, especially on apical 2-chamber (A2C) images (fine-tuned SAM vs. EchoNet: CAMUS-A2C DSC 0.891 ± 0.040 vs. 0.752 ± 0.196, p < 0.0001) and POCUS (DSC 0.857 ± 0.047 vs. 0.667 ± 0.279, p < 0.0001). Additionally, a SAM-enhanced workflow reduced annotation time by 50% (11.6 ± 4.5 s vs. 5.7 ± 1.7 s, p < 0.0001) while maintaining segmentation quality. We demonstrate an effective strategy for fine-tuning a vision foundation model to enhance clinical workflow efficiency and support human-AI collaboration.
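The evaluation arithmetic is easy to reproduce: the sketch below computes per-image Dice similarity coefficients for two models and runs a paired significance test. The masks are random placeholders, and the choice of the Wilcoxon signed-rank test is an assumption; the abstract does not state which paired test produced its p-values.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

rng = np.random.default_rng(1)
gts = rng.random((20, 112, 112)) > 0.5                 # 20 dummy ground-truth masks
# Two "models": ground truth corrupted with different amounts of pixel noise.
scores_a = [dice(gt ^ (rng.random(gt.shape) > 0.95), gt) for gt in gts]
scores_b = [dice(gt ^ (rng.random(gt.shape) > 0.85), gt) for gt in gts]

stat, p = wilcoxon(scores_a, scores_b)                 # paired nonparametric test
print(f"A: {np.mean(scores_a):.3f}±{np.std(scores_a):.3f}  "
      f"B: {np.mean(scores_b):.3f}±{np.std(scores_b):.3f}  p={p:.4g}")
```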

The value of intratumoral and peritumoral ultrasound radiomics models constructed using multiple machine learning algorithms for non-mass breast cancer.

Liu J, Chen J, Qiu L, Li R, Li Y, Li T, Leng X

PubMed · Jun 6 2025
To investigate the diagnostic capability of multiple machine learning algorithms combined with intratumoral and peritumoral ultrasound radiomics models for non-mass breast cancer against dense breast backgrounds. Ultrasound images were manually segmented to define the intratumoral region of interest (ROI), and five peritumoral ROIs were generated by extending the contours by 1 to 5 mm. A total of 851 radiomics features were extracted from these regions and filtered using statistical methods. Thirteen machine learning algorithms were employed to create radiomics models for the intratumoral and peritumoral areas. The best model was combined with clinical ultrasound predictive factors to form a joint model, which was evaluated using ROC curves, calibration curves, and decision curve analysis (DCA). Based on this model, a nomogram was developed, demonstrating high predictive performance, with C-index values of 0.982 and 0.978. The model incorporating the intratumoral and peritumoral 2 mm regions outperformed the other models, indicating its effectiveness in distinguishing between benign and malignant breast lesions. This study concludes that ultrasound radiomics, particularly from the intratumoral and peritumoral 2 mm regions, has significant potential for diagnosing non-mass breast cancer, and the nomogram can assist clinical decision-making.
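One plausible way to generate the 1-5 mm peritumoral ROIs is morphological dilation of the intratumoral mask, as sketched below. The pixel spacing, the use of binary dilation, and the shell-shaped (tumor-excluded) ROI definition are all assumptions about the preprocessing, not details confirmed by the abstract.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_rings(mask: np.ndarray, spacing_mm: float, radii_mm=(1, 2, 3, 4, 5)):
    """Return {radius_mm: ring mask} of shells around the tumor ROI."""
    rings = {}
    for r in radii_mm:
        it = max(1, round(r / spacing_mm))       # iterations approximate r mm of growth
        expanded = binary_dilation(mask, iterations=it)
        rings[r] = expanded & ~mask              # shell excludes the tumor itself
    return rings

mask = np.zeros((128, 128), bool)
mask[50:78, 40:80] = True                        # dummy intratumoral ROI
rings = peritumoral_rings(mask, spacing_mm=0.2)  # 0.2 mm/pixel (assumed spacing)
print({r: int(m.sum()) for r, m in rings.items()})  # shell areas in pixels
```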

Automatic Segmentation of Ultrasound-Guided Transverse Thoracic Plane Block Using Convolutional Neural Networks.

Liu W, Ma X, Han X, Yu J, Zhang B, Liu L, Liu Y, Chu F, Liu Y, Wei S, Li B, Tang Z, Jiang J, Wang Q

PubMed · Jun 6 2025
Ultrasound-guided transverse thoracic plane (TTP) block has been shown to be highly effective in relieving postoperative pain in a variety of surgeries involving the anterior chest wall, and accurate identification of the target structure on ultrasound images is key to its successful implementation. Nevertheless, the complexity of the anatomy in the targeted blockade area, coupled with the potential for adverse clinical incidents, presents considerable challenges, particularly for less experienced anesthesiologists. This study applied deep learning to TTP block, developing a model that performs real-time region segmentation in ultrasound to assist clinicians in accurately identifying the target nerve. Using 2329 images from 155 patients, we segmented the key structures associated with TTP block: the transversus thoracis muscle, lungs, and bones. The corresponding IoU (Intersection over Union) scores were 0.7272, 0.9736, and 0.8244; recall was 0.8305, 0.9896, and 0.9336; and Dice coefficients reached 0.8421, 0.9866, and 0.9037, with accuracy surpassing 97% in identifying the hazardous lung region. The model segmented ultrasound video in real time at up to 42.7 fps, meeting the requirements of nerve blocks performed under real-time ultrasound guidance in clinical practice. This study introduces TTP-Unet, a deep learning model specifically designed for TTP block that automatically identifies crucial anatomical structures within ultrasound images, offering a practical way to reduce the clinical difficulty of the TTP block technique.
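The per-class IoU, recall, and Dice figures above follow directly from pixel counts on label maps, as the sketch below shows. The label encoding (0 = background, 1 = transversus thoracis muscle, 2 = lung, 3 = bone) and the synthetic prediction are assumptions for illustration.

```python
import numpy as np

def per_class_metrics(pred: np.ndarray, gt: np.ndarray, cls: int):
    """IoU, recall, and Dice for one class of an integer label map."""
    p, g = pred == cls, gt == cls
    tp = np.logical_and(p, g).sum()
    iou    = tp / (np.logical_or(p, g).sum() + 1e-7)
    recall = tp / (g.sum() + 1e-7)
    dsc    = 2 * tp / (p.sum() + g.sum() + 1e-7)
    return iou, recall, dsc

rng = np.random.default_rng(2)
gt = rng.integers(0, 4, (256, 256))
# Synthetic "prediction": ground truth with ~10% of pixels relabeled at random.
pred = np.where(rng.random((256, 256)) < 0.9, gt, rng.integers(0, 4, (256, 256)))
for cls, name in enumerate(["background", "transversus thoracis", "lung", "bone"]):
    iou, rec, dsc = per_class_metrics(pred, gt, cls)
    print(f"{name}: IoU={iou:.3f} recall={rec:.3f} Dice={dsc:.3f}")
```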

Predictive Model for the Detection of Subclinical Atherosclerosis in HIV Patients on Antiretroviral Treatment.

Gálvez-Barrón C, Gamarra-Calvo S, Blanco Ramos JR, Sanjoaquín Conde I, Pérez-López C, Miñarro A, Verdejo-Muñoz G

PubMed · Jun 5 2025
Patients living with HIV (PLHIV) have a higher cardiovascular risk than the general population, which makes the early detection of atherosclerosis in this population important. This cross-sectional observational study reports predictive models of subclinical atherosclerosis built from variables that are easily collected in the clinic. PLHIV without established cardiovascular disease were recruited, and predictive models of subclinical atherosclerosis (assessed by Doppler ultrasound) were developed by testing sociodemographic variables, pathological history, data related to HIV infection, laboratory parameters, and capillaroscopy as potential predictors. Logistic regression with internal validation (bootstrapping) and machine learning techniques were used to develop the models. Data from 96 HIV patients were analysed, 19 (19.8%) of whom had subclinical atherosclerosis. The predictors that entered both the machine learning models and the regression model were hypertension, dyslipidaemia, protease inhibitors, triglycerides, fibrinogen, and alkaline phosphatase; age and C-reactive protein were also part of the machine learning models. The logistic regression model had an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.84-0.99), which fell to 0.80 after internal validation by bootstrapping. The machine learning techniques produced models with AUCs ranging from 0.73 to 0.86. We report predictive models for subclinical atherosclerosis in PLHIV with relevant predictive performance based on easily accessible parameters, making them potentially useful as a screening tool. However, given the study's limitations (primarily the sample size), external validation in larger cohorts is warranted.
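The internal validation step, in which an apparent AUC of 0.91 shrinks to 0.80, is consistent with bootstrap optimism correction. The sketch below shows a Harrell-style version of that procedure on synthetic data; the number of resamples and the predictor set are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(96, 8))                       # 96 patients, 8 predictors (assumed)
y = (X[:, 0] + rng.normal(0, 2, 96) > 1.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))          # bootstrap resample with replacement
    if len(np.unique(y[idx])) < 2:
        continue
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])  # on resample
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])            # on original data
    optimism.append(auc_boot - auc_orig)

print(f"apparent AUC={apparent:.3f}  corrected AUC={apparent - np.mean(optimism):.3f}")
```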

Intratumoral and peritumoral ultrasound radiomics analysis for predicting HER2-low expression in HER2-negative breast cancer patients: a retrospective dual-center study.

Wang J, Gu Y, Zhan Y, Li R, Bi Y, Gao L, Wu X, Shao J, Chen Y, Ye L, Peng M

PubMed · Jun 5 2025
This study explored whether intratumoral and peritumoral radiomics of ultrasound images can predict low expression of human epidermal growth factor receptor 2 (HER2) in HER2-negative breast cancer patients. HER2-negative breast cancer patients were recruited retrospectively and randomly divided into a training cohort (n = 303) and a test cohort (n = 130) at a ratio of 7:3. The region of interest within the breast ultrasound image was designated as the intratumoral region, and expansions of 3 mm, 5 mm, and 8 mm from this region were taken as the peritumoral regions for the extraction of ultrasound radiomic features. Feature extraction and selection were performed, and radiomics scores (Rad-scores) were obtained in four scenarios: intratumoral only, intratumoral + peritumoral 3 mm, intratumoral + peritumoral 5 mm, and intratumoral + peritumoral 8 mm. An optimal combined nomogram model incorporating clinical features was established and validated, and the diagnostic performance of the radiomic models was evaluated. The intratumoral + peritumoral (5 mm) radiomics exhibited the best diagnostic performance for HER2-low expression, and the nomogram combining intratumoral + peritumoral (5 mm) features with clinical features was superior, achieving areas under the curve (AUC) of 0.911 and 0.869 in the training and test cohorts, respectively. The combination of intratumoral + peritumoral (5 mm) ultrasound radiomics and clinical features can accurately predict HER2-low status in HER2-negative breast cancer patients.
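The combined nomogram step can be approximated as a logistic model over a radiomics score plus clinical covariates, compared on training and test splits. In the sketch below, every array is a synthetic stand-in, and the specific clinical covariates are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
rad_score = rng.normal(size=(433, 1))              # per-patient Rad-score (assumed)
clinical  = rng.normal(size=(433, 3))              # e.g., age, size, grade (assumed)
X = np.hstack([rad_score, clinical])
y = (X @ np.array([1.2, 0.4, 0.3, 0.0]) + rng.normal(0, 1, 433) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
nomogram = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # linear "nomogram" model
for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    print(name, f"AUC={roc_auc_score(ys, nomogram.predict_proba(Xs)[:, 1]):.3f}")
```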