
MoNetV2: Enhanced Motion Network for Freehand 3D Ultrasound Reconstruction

Mingyuan Luo, Xin Yang, Zhongnuo Yan, Yan Cao, Yuanji Zhang, Xindi Hu, Jin Wang, Haoxuan Ding, Wei Han, Litao Sun, Dong Ni

arXiv preprint · Jun 16, 2025
Three-dimensional (3D) ultrasound (US) aims to provide sonographers with the spatial relationships of anatomical structures and plays a crucial role in clinical diagnosis. Recently, deep-learning-based freehand 3D US has made significant advances: it reconstructs volumes by estimating transformations between images without external tracking. However, image-only reconstruction struggles to reduce cumulative drift and to further improve accuracy, particularly in scenarios involving complex motion trajectories. In this context, we propose an enhanced motion network (MoNetV2) to improve the accuracy and generalizability of reconstruction under diverse scanning velocities and tactics. First, we propose a sensor-based temporal and multi-branch structure that fuses image and motion information from a velocity perspective to improve image-only reconstruction accuracy. Second, we devise an online multi-level consistency constraint that exploits the inherent consistency of scans to handle various scanning velocities and tactics. This constraint combines scan-level velocity consistency, path-level appearance consistency, and patch-level motion consistency to supervise inter-frame transformation estimation. Third, we introduce an online multi-modal self-supervised strategy that leverages the correlation between network estimates and motion information to further reduce cumulative errors. Extensive experiments demonstrate that MoNetV2 surpasses existing methods in both reconstruction quality and generalizability across three large datasets.
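As a rough sketch of what multi-level consistency supervision can look like (not the authors' implementation; tensor shapes, units, and the sensor interface are hypothetical), here are scan-level velocity and patch-level motion terms in PyTorch:

```python
import torch

def scan_level_velocity_loss(pred_trans, sensor_speed, dt):
    # Scan-level: speed implied by predicted inter-frame translations should
    # match the speed reported by the external motion sensor.
    est_speed = pred_trans.norm(dim=-1) / dt            # (N-1,)
    return (est_speed - sensor_speed).abs().mean()

def patch_level_motion_loss(pred_trans, patch_motions):
    # Patch-level: per-patch motion estimates should agree with the
    # frame-level translation prediction.
    return (patch_motions.mean(dim=1) - pred_trans).norm(dim=-1).mean()

# Toy usage: 31 inter-frame steps, 8 patches, 3-D translations, 20 Hz frames.
pred = torch.randn(31, 3, requires_grad=True)
patches = pred.detach().unsqueeze(1) + 0.01 * torch.randn(31, 8, 3)
speed = torch.full((31,), 15.0)  # hypothetical sensor speed, mm/s
loss = scan_level_velocity_loss(pred, speed, dt=0.05) + \
       patch_level_motion_loss(pred, patches)
loss.backward()
```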

An innovative machine learning-based algorithm for diagnosing pediatric ovarian torsion.

Boztas AE, Sencan E, Payza AD, Sencan A

PubMed · Jun 16, 2025
We aimed to develop a machine-learning (ML) algorithm based on physical examination, sonographic findings, and laboratory markers. We retrospectively analyzed the data of 70 patients with confirmed ovarian torsion followed and treated in our clinic, and of 73 control patients who presented to the emergency department between 2013 and 2023 with similar complaints but without ovarian torsion on ultrasound. Sonographic findings, laboratory values, and clinical status were examined and fed into three supervised ML systems to identify and develop viable decision algorithms. The presence of nausea/vomiting and symptom duration were statistically significant for ovarian torsion (p<0.05), whereas abdominal pain and a palpable mass on physical examination were not (p>0.05). White blood cell count (WBC), neutrophil/lymphocyte ratio (NLR), systemic immune-inflammation index (SII), systemic inflammation response index (SIRI), and high C-reactive protein values were highly significant predictors of torsion (p<0.001, p<0.05). Ovarian size ratio, medialization, the follicular ring sign, and free fluid in the pelvis on ultrasound were statistically significant in the torsion group (p<0.001). We used supervised ML algorithms, including decision trees, random forests, and LightGBM, to classify patients as controls or as having torsion. Evaluated with 5-fold cross-validation, the decision tree model achieved an average F1-score of 98%, an accuracy of 98%, and a specificity of 100% across folds. This study represents the first ML algorithm that integrates clinical, laboratory, and ultrasonographic findings for the diagnosis of pediatric ovarian torsion, with over 98% accuracy.
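The reported pipeline (a decision tree evaluated with 5-fold cross-validation on F1 and accuracy) can be sketched with scikit-learn; the synthetic feature table below merely stands in for the real clinical, laboratory, and sonographic variables:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the feature table (WBC, NLR, SII, SIRI, CRP,
# ovarian size ratio, medialization, free fluid, ...): 70 torsion + 73 controls.
X, y = make_classification(n_samples=143, n_features=12, n_informative=6,
                           weights=[0.51, 0.49], random_state=0)

model = DecisionTreeClassifier(max_depth=4, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(model, X, y, cv=cv, scoring=("f1", "accuracy"))
print(f"F1: {scores['test_f1'].mean():.2f}, "
      f"accuracy: {scores['test_accuracy'].mean():.2f}")
```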

Can automation and artificial intelligence reduce echocardiography scan time and ultrasound system interaction?

Hollitt KJ, Milanese S, Joseph M, Perry R

PubMed · Jun 16, 2025
The number of patients referred for and requiring a transthoracic echocardiogram (TTE) has increased over the years, and more cardiac sonographers are reporting work-related musculoskeletal pain. We sought to determine whether a scanning protocol that replaces conventional workflows with advanced technologies such as multiplane imaging, artificial intelligence (AI), and automation could optimize those workflows and potentially reduce ergonomic risk for cardiac sonographers. The aim was to assess whether this alternative protocol could reduce active scanning time and interaction with the ultrasound machine compared with a standard echocardiogram, without reducing image quality or interpretability. Volunteer participants were recruited for a study comprising two TTEs with separate protocols. Both were clinically complete, but Protocol A combined automation, AI-assisted acquisition and measurement, and simultaneous multiplane imaging, whereas Protocol B reflected a standard scanning protocol without these technologies. Keystrokes were significantly reduced with the advanced protocol compared with the typical protocol (230.9 ± 24.2 vs. 502.8 ± 56.2; difference 271.9 ± 61.3, p < 0.001). There was also a reduction in scan time with Protocol A compared with Protocol B, the standard TTE protocol (13.4 ± 2.3 min vs. 18.0 ± 2.6 min; difference 4.6 ± 2.9 min, p < 0.001), as well as a decrease of approximately 27% in the time sonographers had to reach beyond a neutral position on the ultrasound console. A TTE protocol that embraces modern technologies such as AI, automation, and multiplane imaging shows potential to reduce ultrasound keystrokes and scan time without a reduction in quality or interpretability. This may reduce ergonomic workload compared with a standard TTE.
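The keystroke comparison is a paired protocol-versus-protocol analysis; a minimal SciPy sketch using simulated data drawn from the reported means and standard deviations (the pairing structure and sample size are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20  # hypothetical number of paired scans

# Simulated paired keystroke counts mirroring the reported means/SDs.
keystrokes_a = rng.normal(230.9, 24.2, n)   # advanced protocol
keystrokes_b = rng.normal(502.8, 56.2, n)   # standard protocol

t, p = stats.ttest_rel(keystrokes_a, keystrokes_b)
diff = keystrokes_b - keystrokes_a
print(f"mean reduction: {diff.mean():.1f} keystrokes, p = {p:.2g}")
```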

Predicting mucosal healing in Crohn's disease: development of a deep-learning model based on intestinal ultrasound images.

Ma L, Chen Y, Fu X, Qin J, Luo Y, Gao Y, Li W, Xiao M, Cao Z, Shi J, Zhu Q, Guo C, Wu J

PubMed · Jun 16, 2025
Predicting treatment response in Crohn's disease (CD) is essential for choosing an optimal therapeutic regimen, but relevant models are lacking. This study aimed to develop a deep learning model based on baseline intestinal ultrasound (IUS) images and clinical information to predict mucosal healing. Consecutive CD patients who underwent pretreatment IUS at a tertiary hospital were retrospectively recruited. A total of 1548 IUS images of longitudinal diseased bowel segments were collected and divided into a training cohort and a test cohort. A convolutional neural network was developed to predict mucosal healing after one year of standardized treatment. The model's efficacy was validated using five-fold internal cross-validation and further tested in the test cohort. A total of 190 patients (68.9% men, mean age 32.3 ± 14.1 years) were enrolled, contributing 1038 IUS images with mucosal healing and 510 images without. The mean area under the curve in the test cohort was 0.73 (95% CI: 0.68-0.78), with a mean sensitivity of 68.1% (95% CI: 60.5-77.4%), specificity of 69.5% (95% CI: 60.1-77.2%), positive predictive value of 80.0% (95% CI: 74.5-84.9%), and negative predictive value of 54.8% (95% CI: 48.0-63.7%). Heat maps visualizing the model's decision-making revealed that it mainly considered information from the bowel wall, serous surface, and surrounding mesentery. We developed a deep learning model based on IUS images that predicts mucosal healing in CD with notable accuracy. Further validation and refinement with multi-center, real-world data are needed. Key points: response to medication is highly variable among patients with CD, and high-resolution IUS images of the intestinal wall may contain significant characteristics of treatment response; in this study, a deep learning model using pretreatment ultrasound images and clinical information predicted mucosal healing with an AUC of 0.73.
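A minimal sketch of the kind of binary CNN classifier described (the architecture, input size, and labels below are hypothetical placeholders, not the authors' network):

```python
import torch
import torch.nn as nn

class IUSNet(nn.Module):
    # Minimal CNN returning one mucosal-healing logit per IUS image.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = IUSNet()
images = torch.randn(8, 1, 224, 224)          # batch of grayscale IUS frames
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = mucosal healing
loss = nn.BCEWithLogitsLoss()(model(images), labels)
loss.backward()
```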

Whole-lesion-aware network based on freehand ultrasound video for breast cancer assessment: a prospective multicenter study.

Han J, Gao Y, Huo L, Wang D, Xie X, Zhang R, Xiao M, Zhang N, Lei M, Wu Q, Ma L, Sun C, Wang X, Liu L, Cheng S, Tang B, Wang L, Zhu Q, Wang Y

PubMed · Jun 16, 2025
The clinical application of artificial intelligence (AI) models based on static breast ultrasound images has been hindered in real-world workflows by the operator-dependence of standardized image acquisition and the incomplete view of breast lesions in static images. To better exploit the real-time advantage of ultrasound and facilitate clinical application, we propose a whole-lesion-aware network based on freehand ultrasound video (WAUVE), scanned in an arbitrary direction, for predicting an overall breast cancer risk score. WAUVE was developed using 2912 videos (2912 lesions) from 2771 patients retrospectively collected from May 2020 to August 2022 at two hospitals. We compared the diagnostic performance of WAUVE with static 2D-ResNet50 and dynamic TimeSformer models on the internal validation set. Subsequently, a dataset of 190 videos (190 lesions) from 175 patients, prospectively collected from December 2022 to April 2023 at two other hospitals, served as an independent external validation set, on which four experienced radiologists performed a reader study. We compared the diagnostic performance of WAUVE with that of the four radiologists and evaluated the model's auxiliary value for them. WAUVE outperformed the 2D-ResNet50 model and performed similarly to the TimeSformer model. On the external validation set, WAUVE achieved an area under the receiver operating characteristic curve (AUC) of 0.8998 (95% CI = 0.8529-0.9439) and showed diagnostic performance comparable to that of the four experienced radiologists in sensitivity (97.39% vs. 98.48%, p = 0.36), specificity (49.33% vs. 50.00%, p = 0.92), and accuracy (78.42% vs. 79.34%, p = 0.60). With WAUVE's assistance, the average specificity of the four radiologists improved by 6.67%, and higher consistency was achieved (from 0.807 to 0.838). WAUVE, based on non-standardized ultrasound scanning, demonstrated excellent performance in breast cancer assessment, yielding outcomes similar to those of experienced radiologists and indicating promising clinical applicability.
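One common way to turn per-frame features from a freehand sweep into a single lesion-level score is attention pooling; the sketch below illustrates that general approach and is not the WAUVE architecture (all modules and dimensions are assumptions):

```python
import torch
import torch.nn as nn

class VideoRiskScorer(nn.Module):
    # Encode each frame, attention-pool across the sweep, and emit one
    # malignancy-risk logit for the whole lesion.
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attn = nn.Linear(dim, 1)
        self.head = nn.Linear(dim, 1)

    def forward(self, video):                  # (T, 1, H, W), one sweep
        feats = self.encoder(video)            # (T, dim) frame embeddings
        weights = self.attn(feats).softmax(0)  # which frames matter most
        lesion = (weights * feats).sum(0)      # whole-lesion representation
        return self.head(lesion)

score = VideoRiskScorer()(torch.randn(30, 1, 128, 128))
```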

Ultrasound for breast cancer detection: A bibliometric analysis of global trends between 2004 and 2024.

Sun YY, Shi XT, Xu LL

PubMed · Jun 16, 2025
With the advancement of computer technology and imaging equipment, ultrasound has emerged as a crucial tool in breast cancer diagnosis. To gain deeper insight into the research landscape of ultrasound in breast cancer diagnosis, this study employed bibliometric methods to analyze 3523 articles from 2176 institutions in 82 countries/regions published between 2004 and 2024. Over this period, publications on ultrasound diagnosis of breast cancer showed a fluctuating growth trend. China, Seoul National University, and Kim EK emerged as leading contributors, and the most published and most cited journals were Ultrasound Med Biol and Radiology. Research hotspots in this area included "breast lesion", "dense breast", and "breast-conserving surgery", while "machine learning", "ultrasonic imaging", "convolutional neural network", "case report", "pathological complete response", "deep learning", "artificial intelligence", and "classification" are anticipated to become future research frontiers. This bibliometric analysis and visualization of publications on ultrasonic breast cancer diagnosis offers clinical professionals a reliable research focus and direction.
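Keyword hotspot trends of this kind are typically computed by counting author keywords per period; a toy sketch (the records and period split below are purely illustrative):

```python
from collections import Counter

# Hypothetical (year, keywords) records, as exported from a citation database.
records = [
    (2006, ["breast lesion", "elastography"]),
    (2019, ["deep learning", "convolutional neural network"]),
    (2023, ["deep learning", "classification", "artificial intelligence"]),
]

by_period = {"2004-2014": Counter(), "2015-2024": Counter()}
for year, keywords in records:
    period = "2004-2014" if year <= 2014 else "2015-2024"
    by_period[period].update(k.lower() for k in keywords)

for period, counts in by_period.items():
    print(period, counts.most_common(3))
```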

A Semi-supervised Ultrasound Image Segmentation Network Integrating Enhanced Mask Learning and Dynamic Temperature-controlled Self-distillation.

Xu L, Huang Y, Zhou H, Mao Q, Yin W

PubMed · Jun 16, 2025
Ultrasound imaging is widely used in clinical practice because it is radiation-free and real-time. However, image quality is often degraded by speckle noise, low contrast, and blurred boundaries, which pose significant challenges for automatic segmentation. In recent years, deep learning methods have made notable progress in ultrasound image segmentation. Nonetheless, they typically require large-scale annotated datasets, incur high computational costs, and suffer from slow inference, limiting their clinical applicability. To overcome these limitations, we propose EML-DMSD, a semi-supervised segmentation network that combines Enhanced Mask Learning (EML) with Dynamic Temperature-Controlled Multi-Scale Self-Distillation (DMSD). The EML module improves robustness to noise and boundary ambiguity, while the DMSD module introduces a teacher-free, multi-scale self-distillation strategy with dynamic temperature adjustment to boost inference efficiency and reduce reliance on extensive resources. Experiments on multiple ultrasound benchmark datasets demonstrate that EML-DMSD achieves superior segmentation accuracy with efficient inference, highlighting its strong generalization ability and clinical potential.
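Teacher-free self-distillation with a dynamic temperature can be sketched as a KL term between decoder scales, with the deepest scale acting as the teacher; the linear annealing schedule below is one plausible choice, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def self_distill_loss(shallow_logits, deep_logits, epoch, max_epoch,
                      t_start=4.0, t_end=1.0):
    # Teacher-free: the deepest scale supervises the shallower one; the
    # temperature is annealed linearly (a hypothetical "dynamic" schedule).
    t = t_start + (t_end - t_start) * epoch / max_epoch
    log_p_student = F.log_softmax(shallow_logits / t, dim=1)
    p_teacher = F.softmax(deep_logits.detach() / t, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

# Toy usage: per-pixel 2-class logits from two decoder scales, flattened.
shallow = torch.randn(4 * 64 * 64, 2, requires_grad=True)
deep = torch.randn(4 * 64 * 64, 2)
loss = self_distill_loss(shallow, deep, epoch=10, max_epoch=100)
loss.backward()
```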

Interpretable deep fuzzy network-aided detection of central lymph node metastasis status in papillary thyroid carcinoma.

Wang W, Ning Z, Zhang J, Zhang Y, Wang W

PubMed · Jun 16, 2025
The non-invasive assessment of central lymph node metastasis (CLNM) in patients with papillary thyroid carcinoma (PTC) plays a crucial role in treatment decisions and prognosis planning. This study uses an interpretable deep fuzzy network guided by expert knowledge to predict the CLNM status of patients with PTC from ultrasound images. A total of 1019 PTC patients were enrolled, comprising 465 with CLNM and 554 without. Pathological diagnosis served as the gold standard for metastasis status. Clinical and morphological thyroid features were collected as expert knowledge to guide the network, which consists of a region-of-interest (ROI) segmentation module, a knowledge-aware feature extraction module, and a fuzzy prediction module. The network was trained on 652 patients, validated on 163, and tested on 204. The model showed promising performance, achieving an area under the receiver operating characteristic curve (AUC), accuracy, precision, sensitivity, and specificity of 0.786 (95% CI 0.720-0.846), 0.745 (95% CI 0.681-0.799), 0.727 (95% CI 0.636-0.819), 0.696 (95% CI 0.594-0.789), and 0.786 (95% CI 0.712-0.864), respectively. In addition, the rules of the model's fuzzy system are easy to understand and explain, giving it good interpretability. The expert-knowledge-guided deep fuzzy network predicted the CLNM status of PTC patients with high accuracy and good interpretability, and may serve as an effective tool to guide preoperative clinical decision-making.
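The appeal of fuzzy prediction is that the rules remain human-readable. A toy Takagi-Sugeno-style sketch with two hypothetical rules (the features, thresholds, and weights are invented for illustration, not taken from the paper):

```python
import numpy as np

def gauss(x, mean, sigma):
    # Gaussian membership degree of x in a fuzzy set.
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def predict_clnm(nodule_size_mm, age_years):
    # Two toy, human-readable rules (illustrative only):
    #   R1: IF size is LARGE AND age is YOUNG THEN CLNM likely   (weight 0.9)
    #   R2: IF size is SMALL AND age is OLD   THEN CLNM unlikely (weight 0.1)
    r1 = gauss(nodule_size_mm, 20, 6) * gauss(age_years, 30, 10)
    r2 = gauss(nodule_size_mm, 6, 3) * gauss(age_years, 60, 10)
    # Weighted (Takagi-Sugeno style) defuzzification.
    return (0.9 * r1 + 0.1 * r2) / (r1 + r2 + 1e-8)

print(f"CLNM risk: {predict_clnm(18, 28):.2f}")
```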

ThreeF-Net: Fine-grained feature fusion network for breast ultrasound image segmentation.

Bian X, Liu J, Xu S, Liu W, Mei L, Xiao C, Yang F

PubMed · Jun 14, 2025
Convolutional neural networks (CNNs) have achieved remarkable success in breast ultrasound image segmentation, but they still face several challenges with breast lesions. Because CNNs are limited in modeling long-range dependencies, they often handle similar intensity distributions, irregular lesion shapes, and blurry boundaries poorly, leading to low segmentation accuracy. To address these issues, we propose ThreeF-Net, a fine-grained feature fusion network that combines the advantages of CNNs and Transformers to capture local features and model long-range dependencies simultaneously, improving the accuracy and stability of segmentation. Specifically, we design a Transformer-assisted Dual Encoder architecture (TDE), which integrates convolutional and self-attention modules for collaborative learning of local and global features. We also design a Global Group Feature Extraction (GGFE) module, which effectively fuses the features learned by the CNN and Transformer branches, enhancing feature representation. To further improve performance, we introduce a Dynamic Fine-grained Convolution (DFC) module, which significantly improves lesion boundary segmentation by dynamically adjusting convolution kernels and capturing multi-scale features. Comparative experiments with state-of-the-art segmentation methods on three public breast ultrasound datasets demonstrate that ThreeF-Net outperforms existing methods across multiple key evaluation metrics.
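A dual CNN/Transformer encoder stage with a simple fusion layer can be sketched as follows; the 1x1-convolution fusion here is a stand-in for GGFE, and all dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DualEncoderBlock(nn.Module):
    # Conv branch for local texture, self-attention branch for long-range
    # context, fused by a 1x1 convolution.
    def __init__(self, dim=32):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                               batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, x):                      # (B, dim, H, W)
        b, c, h, w = x.shape
        local = self.conv(x)
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, dim) for attention
        glob = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1))

out = DualEncoderBlock()(torch.randn(2, 32, 32, 32))
```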

Enhancing Free-hand 3D Photoacoustic and Ultrasound Reconstruction using Deep Learning.

Lee S, Kim S, Seo M, Park S, Imrus S, Ashok K, Lee D, Park C, Lee S, Kim J, Yoo JH, Kim M

PubMed · Jun 13, 2025
This study introduces a motion-based learning network with a global-local self-attention module (MoGLo-Net) to enhance 3D reconstruction in handheld photoacoustic and ultrasound (PAUS) imaging. Standard PAUS imaging is often limited by a narrow field of view (FoV) and an inability to effectively visualize complex 3D structures. The freehand 3D technique, which aligns sequential 2D images for 3D reconstruction, faces significant challenges in accurate motion estimation without external positional sensors. MoGLo-Net addresses these limitations through an innovative adaptation of the self-attention mechanism that exploits critical regions, such as fully developed speckle areas and highly echogenic tissue regions, within successive ultrasound images to accurately estimate motion parameters and extract intricate features from individual frames. Additionally, we employ a patch-wise correlation operation to generate a correlation volume that is highly correlated with the scanning motion. A custom loss function was developed to ensure robust learning with minimized bias, leveraging the characteristics of the motion parameters. Experimental evaluations demonstrate that MoGLo-Net surpasses current state-of-the-art methods in both quantitative and qualitative performance. Furthermore, we extend 3D reconstruction beyond simple B-mode ultrasound volumes to Doppler ultrasound and photoacoustic imaging, enabling 3D visualization of vasculature. The source code is publicly available at: https://github.com/pnu-amilab/US3D.
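The patch-wise correlation idea builds a volume of dot products between feature positions of successive frames; this simplified global-correlation sketch conveys the mechanism (the paper's variant restricts it to local patches, and all shapes here are illustrative):

```python
import torch

def correlation_volume(f1, f2):
    # All-pairs dot products between spatial positions of two successive
    # feature maps: (B, C, H, W) -> (B, H*W, H*W).
    corr = torch.einsum('bcm,bcn->bmn', f1.flatten(2), f2.flatten(2))
    return corr / f1.shape[1]  # normalize by channel count

frame_t = torch.randn(2, 16, 24, 24)
frame_t1 = torch.randn(2, 16, 24, 24)
vol = correlation_volume(frame_t, frame_t1)  # shape: (2, 576, 576)
```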
