Page 41 of 56556 results

Preoperative prediction model for benign and malignant gallbladder polyps on the basis of machine-learning algorithms.

Zeng J, Hu W, Wang Y, Jiang Y, Peng J, Li J, Liu X, Zhang X, Tan B, Zhao D, Li K, Zhang S, Cao J, Qu C

pubmed · Jun 10 2025
This study aimed to differentiate between benign and malignant gallbladder polyps preoperatively by developing a prediction model that integrates preoperative transabdominal ultrasound and clinical features using machine-learning algorithms. A retrospective analysis was conducted on clinical and ultrasound data from 1,050 patients at 2 centers who underwent cholecystectomy for gallbladder polyps. Six machine-learning algorithms were used to develop preoperative models for predicting benign and malignant gallbladder polyps. Model performance was evaluated in internal and external test cohorts. The Shapley Additive Explanations (SHAP) algorithm was used to assess feature importance. The main study cohort included 660 patients with benign polyps and 285 patients with malignant polyps, randomly divided 3:1 into stratified training and internal test cohorts. The external test cohort consisted of 73 benign and 32 malignant polyps. In the training cohort, the SHAP algorithm, applied to variables selected by Least Absolute Shrinkage and Selection Operator (LASSO) regression and multivariate logistic regression, identified 6 key predictive factors: polyp size, age, fibrinogen, carbohydrate antigen 19-9, presence of stones, and cholinesterase. Using these factors, 6 predictive models were developed. The random forest model outperformed the others, with areas under the curve of 0.963, 0.940, and 0.958 in the training, internal test, and external test cohorts, respectively. Compared with previous studies, the random forest model demonstrated excellent clinical utility and predictive performance. In addition, the SHAP algorithm was used to visualize feature importance, and an online calculation platform was developed. The random forest model, combining preoperative ultrasound and clinical features, accurately predicts benign and malignant gallbladder polyps, offering valuable guidance for clinical decision-making.
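The abstract's pipeline (stratified 3:1 split, random forest, AUC, feature importance) can be sketched with scikit-learn. This is a hedged illustration on synthetic stand-ins for the six reported predictors, not the authors' code, and it uses permutation importance as a lightweight stand-in for SHAP values:

```python
# Sketch of the workflow described above, assuming scikit-learn and
# synthetic stand-ins for the six predictors (polyp size, age, fibrinogen,
# CA 19-9, stones, cholinesterase). Not the study's actual model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 945  # 660 benign + 285 malignant, as in the main cohort
X = rng.normal(size=(n, 6))  # six predictive factors
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

# 3:1 stratified split, mirroring the training / internal-test design
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Permutation importance as a lightweight stand-in for SHAP values
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
print(round(auc, 3), int(imp.importances_mean.argmax()))
```

On this synthetic signal the model recovers feature 0 as most important; the study's SHAP analysis plays the analogous role of ranking the six clinical predictors.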

DCD: A Semantic Segmentation Model for Fetal Ultrasound Four-Chamber View

Donglian Li, Hui Guo, Minglang Chen, Huizhen Chen, Jialing Chen, Bocheng Liang, Pengchen Liang, Ying Tan

arxiv preprint · Jun 10 2025
Accurate segmentation of anatomical structures in the apical four-chamber (A4C) view of fetal echocardiography is essential for early diagnosis and prenatal evaluation of congenital heart disease (CHD). However, precise segmentation remains challenging due to ultrasound artifacts, speckle noise, anatomical variability, and boundary ambiguity across different gestational stages. To reduce the workload of sonographers and enhance segmentation accuracy, we propose DCD, an advanced deep learning-based model for automatic segmentation of key anatomical structures in the fetal A4C view. Our model incorporates a Dense Atrous Spatial Pyramid Pooling (Dense ASPP) module, enabling superior multi-scale feature extraction, and a Convolutional Block Attention Module (CBAM) to enhance adaptive feature representation. By effectively capturing both local and global contextual information, DCD achieves precise and robust segmentation, contributing to improved prenatal cardiac assessment.
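The CBAM component mentioned above re-weights feature maps with attention gates. A conceptual numpy sketch of CBAM-style *channel* attention (CBAM also has a spatial branch, omitted here; this is a hedged re-implementation, not the DCD authors' code):

```python
# CBAM-style channel attention: a per-channel gate in (0, 1) computed from
# global average- and max-pooled descriptors passed through a shared MLP.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W); w1: (C//r, C) and w2: (C, C//r) are shared MLP weights."""
    avg = feat.mean(axis=(1, 2))   # (C,) global average pooling
    mx = feat.max(axis=(1, 2))     # (C,) global max pooling
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0) +
                   w2 @ np.maximum(w1 @ mx, 0))  # (C,) channel gate
    return feat * gate[:, None, None]            # broadcast over H, W

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2  # r is the MLP reduction ratio
feat = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)
```

Because the gate is a sigmoid, each channel is scaled by a factor in (0, 1): informative channels are preserved, uninformative ones suppressed.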

Radiomics-based machine learning atherosclerotic carotid artery disease in ultrasound: systematic review with meta-analysis of RQS.

Vacca S, Scicolone R, Pisu F, Cau R, Yang Q, Annoni A, Pontone G, Costa F, Paraskevas KI, Nicolaides A, Suri JS, Saba L

pubmed · Jun 9 2025
Stroke, a leading global cause of mortality and neurological disability, is often associated with atherosclerotic carotid artery disease. Distinguishing between symptomatic and asymptomatic carotid artery disease is crucial for appropriate treatment decisions. Radiomics, a quantitative image analysis technique, and machine learning (ML) have emerged as promising tools in ultrasound (US) imaging, potentially aiding the screening of such lesions. The PubMed, Web of Science, and Scopus databases were searched for relevant studies published from January 2005 to May 2023. The Radiomics Quality Score (RQS) was used to assess the methodological quality of the included studies, and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool assessed the risk of bias. Sensitivity, specificity, and logarithmic diagnostic odds ratio (logDOR) meta-analyses were conducted, alongside an influence analysis. The RQS assessment revealed an overall low score, consistent with findings in other radiology domains. QUADAS-2 indicated an overall low risk of bias, except for two studies with high bias. The meta-analysis demonstrated that radiomics-based ML models for predicting culprit plaques on US performed satisfactorily, with a sensitivity of 0.84 and a specificity of 0.82. The logDOR analysis confirmed the positive results, yielding a pooled logDOR of 3.54. The summary ROC curve provided an AUC of 0.887. Radiomics combined with ML provides high sensitivity and a low false-positive rate for carotid plaque vulnerability assessment on US. However, current evidence is not definitive, given the low overall study quality and high inter-study heterogeneity. High-quality, prospective studies are needed to confirm the potential of these promising techniques.
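The logDOR statistic pooled above combines sensitivity and specificity into a single measure of discrimination. A minimal sketch of the computation (note: plugging the pooled sensitivity 0.84 and specificity 0.82 directly into the formula gives roughly 3.17, not the reported 3.54, because the review pools logDOR at the study level rather than from the pooled rates):

```python
# Log diagnostic odds ratio: log of (TP/FN odds) divided by (FP/TN odds),
# expressed here in terms of sensitivity and specificity.
import math

def log_dor(sens, spec):
    """Log diagnostic odds ratio from sensitivity and specificity."""
    return math.log((sens * spec) / ((1 - sens) * (1 - spec)))

ldor = log_dor(0.84, 0.82)
print(round(ldor, 2))  # ≈ 3.17
```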

Ultrasound Radiomics and Dual-Mode Ultrasonic Elastography Based Machine Learning Model for the Classification of Benign and Malignant Thyroid Nodules.

Yan J, Zhou X, Zheng Q, Wang K, Gao Y, Liu F, Pan L

pubmed · Jun 9 2025
The present study aims to construct a random forest (RF) model based on ultrasound radiomics and elastography, offering a new approach to the differentiation of thyroid nodules (TNs). We retrospectively analyzed 152 TNs from 127 patients and developed four machine learning models. Examinations were performed using the Resona 9Pro equipped with a 15-4 MHz linear array probe, and the region of interest (ROI) was delineated with 3D Slicer. Using the RF algorithm, four models were developed from sound touch elastography (STE) parameters, strain elastography (SE) parameters, and the selected radiomic features: the STE model, SE model, radiomics model, and combined model. Decision curve analysis (DCA) was employed to assess the clinical benefit of each model, and the DeLong test was used to determine whether differences in the area under the curve (AUC) between models were statistically significant. A total of 1396 radiomic features were extracted using the Pyradiomics package; after screening, 7 radiomic features were ultimately included in model construction. The AUCs of the STE, SE, radiomics, and combined models were 0.699 (95% CI: 0.570-0.828), 0.812 (95% CI: 0.683-0.941), 0.851 (95% CI: 0.739-0.964), and 0.911 (95% CI: 0.806-1.000), respectively; the combined model and the radiomics model exhibited outstanding performance. The combined model, integrating elastography and radiomics, demonstrates superior predictive accuracy compared with single models, offering a promising approach for the diagnosis of TNs.
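The AUC values compared with the DeLong test above have a useful probabilistic reading: the AUC equals the Mann-Whitney probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one. A hedged numpy sketch on toy scores (not the study's data):

```python
# AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
# score pairs in which the positive case outranks the negative one.
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    # ties count as half a concordant pair
    return float(np.mean(pos > neg) + 0.5 * np.mean(pos == neg))

auc = auc_mann_whitney([0.9, 0.8, 0.7, 0.6], [0.65, 0.4, 0.3, 0.2])
print(auc)  # 15 of 16 pairs concordant → 0.9375
```

The DeLong test builds on exactly this pairwise structure to estimate the variance of (and difference between) correlated AUCs.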

Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

pubmed · Jun 9 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating through the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions and challenges in detecting out-of-plane motions. This work addresses these issues by proposing a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We develop a hybrid deep learning-based tracking framework combining convolutional neural networks (CNNs) and a transformer backbone. The CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. This model is integrated with a robotic arm that adaptively scans and tracks the capsule. The system's performance is evaluated using ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We demonstrated 3D tracking in a more complex workspace featuring two curved sections to simulate anatomical challenges. This suggests the strong resilience of the tracking system to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and successfully re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. This study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusions, view loss and image artifacts, offering millimeter-level tracking accuracy. 
It significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.
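The 1.5 mm figure above is a mean centroid localization error, i.e. the average Euclidean distance between predicted and ground-truth 3D capsule centroids. A minimal numpy sketch on toy coordinates (not the study's data):

```python
# Mean centroid localization error: average 3D Euclidean distance between
# predicted and ground-truth capsule centroids, in millimeters.
import numpy as np

def mean_centroid_error(pred, true):
    """pred, true: (N, 3) arrays of centroid coordinates in mm."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(np.linalg.norm(pred - true, axis=1)))

# Two toy frames: errors of 1.0 mm and 2.0 mm average to 1.5 mm.
pred = [[0.0, 0.0, 1.0], [10.0, 0.0, 0.0]]
true = [[0.0, 0.0, 0.0], [10.0, 2.0, 0.0]]
err = mean_centroid_error(pred, true)
print(err)  # → 1.5
```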

HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains

Shijie Wang, Yilun Zhang, Zeyu Lai, Dexing Kong

arxiv preprint · Jun 9 2025
Multimodal large language models (MLLMs) have shown great potential in general domains but perform poorly in some specific domains due to a lack of domain-specific data, such as image-text or video-text data. In many such domains, abundant image and text data exist but are scattered across sources and lack standardized organization. In the field of medical ultrasound, for example, there are ultrasonic diagnostic books, clinical guidelines, and diagnostic reports; however, these materials are often stored as PDFs, images, and other formats that cannot be used directly to train MLLMs. This paper proposes a novel image-text reasoning supervised fine-tuning data generation pipeline that creates domain-specific quadruplets (image, question, thinking trace, and answer) from such materials. A medical ultrasound domain dataset, ReMUD, is established, containing over 45,000 reasoning and non-reasoning supervised fine-tuning Question Answering (QA) and Visual Question Answering (VQA) examples. The ReMUD-7B model, fine-tuned on Qwen2.5-VL-7B-Instruct, outperforms general-domain MLLMs in the medical ultrasound field. To facilitate research, the ReMUD dataset, data generation codebase, and ReMUD-7B parameters will be released at https://github.com/ShiDaizi/ReMUD, addressing the data shortage issue in specific-domain MLLMs.
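The quadruplet format described above maps naturally onto JSON-lines SFT records. A hedged sketch of the idea; the field names and example content here are illustrative assumptions, not the actual ReMUD schema:

```python
# Serialize an (image, question, thinking trace, answer) quadruplet as one
# JSON-lines record, the common on-disk format for SFT/VQA training data.
# Field names are hypothetical, not taken from the ReMUD release.
import json

def make_quadruplet(image_path, question, thinking, answer):
    return {"image": image_path, "question": question,
            "thinking": thinking, "answer": answer}

record = make_quadruplet(
    "us_case_001.png",
    "Is the nodule in this ultrasound image more likely benign or malignant?",
    "The nodule is hypoechoic with irregular margins, features that raise "
    "suspicion of malignancy.",
    "More likely malignant; recommend further workup.")
line = json.dumps(record)  # one training example per line in a .jsonl file
print(json.loads(line)["answer"])
```

Keeping the thinking trace as its own field (rather than folding it into the answer) is what lets the pipeline produce both reasoning and non-reasoning variants of the same example.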

Integration of artificial intelligence into cardiac ultrasonography practice.

Shaulian SY, Gala D, Makaryus AN

pubmed · Jun 9 2025
Over the last several decades, echocardiography has made numerous technological advancements, with one of the most significant being the integration of artificial intelligence (AI). AI algorithms assist novice operators to acquire diagnostic-quality images and automate complex analyses. This review explores the integration of AI into various echocardiographic modalities, including transthoracic, transesophageal, intracardiac, and point-of-care ultrasound. It examines how AI enhances image acquisition, streamlines analysis, and improves diagnostic performance across routine, critical care, and complex cardiac imaging. To conduct this review, PubMed was searched using targeted keywords aligned with each section of the paper, focusing primarily on peer-reviewed articles published from 2020 onward. Earlier studies were included when foundational or frequently cited. The findings were organized thematically to highlight clinical relevance and practical applications. Challenges persist in clinical application, including algorithmic bias, ethical concerns, and the need for clinician training and AI oversight. Despite these, AI's potential to revolutionize cardiovascular care through precision and accessibility remains unparalleled, with benefits likely to far outweigh obstacles if appropriately applied and implemented in cardiac ultrasonography.

Evaluation of AI diagnostic systems for breast ultrasound: comparative analysis with radiologists and the effect of AI assistance.

Tsuyuzaki S, Fujioka T, Yamaga E, Katsuta L, Mori M, Yashima Y, Hara M, Sato A, Onishi I, Tsukada J, Aruga T, Kubota K, Tateishi U

pubmed · Jun 9 2025
The purpose of this study is to evaluate the diagnostic accuracy of an artificial intelligence (AI)-based Computer-Aided Diagnosis (CADx) system for breast ultrasound, compare its performance with radiologists, and assess the effect of AI-assisted diagnosis. This study aims to investigate the system's ability to differentiate between benign and malignant breast masses among Japanese patients. This retrospective study included 171 breast mass ultrasound images (92 benign, 79 malignant). The AI system, BU-CAD™, provided Breast Imaging Reporting and Data System (BI-RADS) categorization, which was compared with the performance of three radiologists. Diagnostic accuracy, sensitivity, specificity, and area under the curve (AUC) were analyzed. Radiologists' diagnostic performance with and without AI assistance was also compared, and their reading time was measured using a stopwatch. The AI system demonstrated a sensitivity of 91.1%, specificity of 92.4%, and an AUC of 0.948. It showed comparable diagnostic performance to Radiologist 1, with 10 years of experience in breast imaging (0.948 vs. 0.950; p = 0.893), and superior performance to Radiologist 2 (7 years of experience, 0.948 vs. 0.881; p = 0.015) and Radiologist 3 (3 years of experience, 0.948 vs. 0.832; p = 0.001). When comparing diagnostic performance with and without AI, the use of AI significantly improved the AUC for Radiologists 2 and 3 (p = 0.001 and 0.005, respectively). However, there was no significant difference for Radiologist 1 (p = 0.139). In terms of diagnosis time, the use of AI reduced the reading time for all radiologists. Although there was no significant difference in diagnostic performance between AI and Radiologist 1, the use of AI substantially decreased the diagnosis time for Radiologist 1 as well. The AI system significantly improved diagnostic efficiency and accuracy, particularly for junior radiologists, highlighting its potential clinical utility in breast ultrasound diagnostics.
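The reported rates are consistent with whole-case counts in the 171-image cohort: 72 of 79 malignant masses and 85 of 92 benign masses correctly classified reproduce the 91.1% sensitivity and 92.4% specificity. A back-of-envelope check (the exact confusion-matrix counts are an inference from the reported percentages, not stated in the abstract):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# Counts below are inferred from the cohort sizes and reported rates.
tp, fn = 72, 7   # malignant: 79 total
tn, fp = 85, 7   # benign: 92 total
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(sensitivity * 100, 1), round(specificity * 100, 1))  # 91.1 92.4
```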

Hybrid adaptive attention deep supervision-guided U-Net for breast lesion segmentation in ultrasound computed tomography images.

Liu X, Zhou L, Cai M, Zheng H, Zheng S, Wang X, Wang Y, Ding M

pubmed · Jun 9 2025
Breast cancer is the second deadliest cancer among women after lung cancer. Although the overall breast cancer death rate has continued to decline over the past 20 years, death rates for stage III and IV disease remain high. An automated breast cancer diagnosis system is therefore of great significance for early screening of breast lesions and improving patient survival. This paper proposes a deep learning network, the hybrid adaptive attention deep supervision-guided U-Net (HAA-DSUNet), for breast lesion segmentation in breast ultrasound computed tomography (BUCT) images. It replaces the standard convolution modules of U-Net with a hybrid adaptive attention module (HAAM), which enlarges the receptive field and probes rich global features while preserving fine details. Moreover, we apply a contrast loss to intermediate outputs as deep supervision to minimize information loss during upsampling. Finally, the segmentation predictions are post-processed by filtering, segmentation, and morphological operations to obtain the final results. We conducted experiments on our two BUCT image datasets, HCH and HCH-PHMC; the highest Dice score was 0.8729 and the highest IoU was 0.8097, outperforming all other state-of-the-art methods. This demonstrates that our algorithm is effective at segmenting lesions from BUCT images.
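The Dice and IoU scores quoted above are overlap metrics on binary masks. A minimal numpy sketch of both (toy masks, not the study's data):

```python
# Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B| on binary masks.
import numpy as np

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

pred = np.array([[1, 1, 0], [0, 1, 0]], bool)
gt   = np.array([[1, 1, 0], [0, 0, 1]], bool)
d, j = dice(pred, gt), iou(pred, gt)
print(d, j)  # 2·2/6 ≈ 0.667 and 2/4 = 0.5
```

Dice is always at least as large as IoU for the same masks (D = 2J/(1+J)), which is why reported Dice scores (0.8729) sit above the corresponding IoU (0.8097).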

Automatic Segmentation of Ultrasound-Guided Transverse Thoracic Plane Block Using Convolutional Neural Networks.

Liu W, Ma X, Han X, Yu J, Zhang B, Liu L, Liu Y, Chu F, Liu Y, Wei S, Li B, Tang Z, Jiang J, Wang Q

pubmed · Jun 6 2025
Ultrasound-guided transverse thoracic plane (TTP) block has been shown to be highly effective in relieving postoperative pain in a variety of surgeries involving the anterior chest wall. Accurate identification of the target structures on ultrasound images is key to a successful TTP block. Nevertheless, the complexity of the anatomy in the targeted area, coupled with the potential for adverse clinical incidents, presents considerable challenges, particularly for less experienced anesthesiologists. This study applied deep learning to TTP block, developing a model that performs real-time region segmentation in ultrasound to help clinicians accurately identify the target nerve. Using 2329 images from 155 patients, we successfully segmented the key structures associated with TTP areas and nerve blocks, including the transversus thoracis muscle, lungs, and bones, achieving IoU (Intersection over Union) scores of 0.7272, 0.9736, and 0.8244, respectively. Recall was 0.8305, 0.9896, and 0.9336, and Dice coefficients reached 0.8421, 0.9866, and 0.9037, with accuracy surpassing 97% in identifying the perilous lung regions. The model's real-time segmentation frame rate on ultrasound video reached 42.7 fps, meeting the requirements of performing nerve blocks under real-time ultrasound guidance in clinical practice. This study introduces TTP-Unet, a deep learning model specifically designed for TTP block that automatically identifies crucial anatomical structures in ultrasound images, offering a practical way to reduce the clinical difficulty of the TTP block technique.
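Recall, the per-structure metric reported above alongside IoU and Dice, measures the fraction of ground-truth pixels the model recovers, which is the safety-critical quantity for structures like the lung. A minimal numpy sketch (toy masks, not the study's data):

```python
# Recall = TP / (TP + FN) on binary masks: how much of the true structure
# the segmentation recovers (false positives do not affect it).
import numpy as np

def recall(pred, gt):
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn)

gt   = np.array([1, 1, 1, 1, 0, 0], bool)  # 4 ground-truth pixels
pred = np.array([1, 1, 1, 0, 1, 0], bool)  # 3 of the 4 recovered (+1 FP)
r = recall(pred, gt)
print(r)  # → 0.75
```

High recall on the lung class (0.9896 here) matters clinically because a missed lung pixel is a potential needle hazard, whereas a false positive merely shrinks the apparent safe zone.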