
Association between muscle mass assessed by an artificial intelligence-based ultrasound imaging system and quality of life in patients with cancer-related malnutrition.

de Luis D, Cebria A, Primo D, Izaola O, Godoy EJ, Gomez JJL

PubMed · Jul 1, 2025
Emerging evidence suggests that diminished skeletal muscle mass is associated with lower health-related quality of life (HRQOL) in individuals with cancer. To our knowledge, no studies in the literature have used an ultrasound system to evaluate muscle mass and its relationship with HRQOL. The aim of our study was to evaluate the relationship between HRQOL, determined by the EuroQol-5D tool, and muscle mass, determined by an artificial intelligence-based ultrasound system at the rectus femoris (RF) level, in outpatients with cancer. Anthropometric data by bioimpedance (BIA), muscle mass by the artificial intelligence-based ultrasound system at the RF level, biochemistry, dynamometry, and HRQOL were measured. A total of 158 patients with cancer were included, with a mean age of 70.6 ± 9.8 years. The mean body mass index was 24.4 ± 4.1 kg/m² with a mean body weight of 63.9 ± 11.7 kg (38% females and 62% males). A total of 57 patients (36.1%) had a severe degree of malnutrition. The distribution of tumor locations was 66 colorectal cancers (41.7%), 56 esophageal-stomach cancers (35.4%), 16 pancreatic cancers (10.1%), and 20.2% other locations. A positive correlation was detected between cross-sectional area (CSA), muscle thickness (MT), pennation angle, BIA parameters, and muscle strength. Patients in the groups below the median for the visual scale and the EuroQol-5D index had lower CSA, MT, BIA, and muscle strength values. In the multivariate models, CSA (beta 4.25, 95% CI 2.03-6.47) remained associated with the visual scale as the dependent variable, and muscle strength (beta 0.008, 95% CI 0.003-0.14) with the EuroQol-5D index. Muscle strength and pennation angle by US were associated with better scores in the mobility, self-care, and daily activities dimensions. CSA, MT, and pennation angle of the RF determined by an artificial intelligence-based muscle ultrasound system in outpatients with cancer were related to HRQOL determined by EuroQol-5D.
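
As a concrete picture of the multivariable analysis reported above, here is a minimal sketch (not the authors' code) of regressing an HRQOL score on ultrasound-derived CSA; the file name, column names, and covariates are hypothetical.

```python
# Minimal sketch, assuming a per-patient table with hypothetical columns
# "vas" (EuroQol-5D visual scale), "csa_cm2" (rectus femoris CSA by US),
# plus "age" and "sex" as example covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cancer_cohort.csv")  # hypothetical input file

# Linear model of the visual scale on CSA; the fitted coefficient and its
# 95% CI are the same kind of estimate as the reported beta 4.25 (2.03-6.47).
model = smf.ols("vas ~ csa_cm2 + age + sex", data=df).fit()
print(model.params["csa_cm2"])          # adjusted beta for CSA
print(model.conf_int().loc["csa_cm2"])  # 95% confidence interval
```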

A multi-task neural network for full waveform ultrasonic bone imaging.

Li P, Liu T, Ma H, Li D, Liu C, Ta D

PubMed · Jul 1, 2025
It is a challenging task to use ultrasound for bone imaging, as bone tissue has a complex structure with high acoustic impedance and speed-of-sound (SOS). Recently, full waveform inversion (FWI) has shown promise for imaging musculoskeletal tissues. However, FWI has shown limited ability and tends to produce artifacts in bone imaging, because the inversion process is easily trapped in a local minimum for bone tissue, where there is a large discrepancy in SOS distribution between bony and soft tissues. In addition, the application of FWI carries a high computational burden and requires relatively many iterations. The objective of this study was to achieve high-resolution ultrasonic imaging of bone using a deep learning-based FWI approach. In this paper, we propose a novel network named CEDD-Unet. The CEDD-Unet adopts a Dual-Decoder architecture, with the first decoder tasked with reconstructing the SOS model and the second decoder tasked with finding the main boundaries between bony and soft tissues. To effectively capture multi-scale spatio-temporal features from ultrasound radio frequency (RF) signals, we integrated a Convolutional LSTM (ConvLSTM) module. Additionally, an Efficient Multi-scale Attention (EMA) module was incorporated into the encoder to enhance feature representation and improve reconstruction accuracy. Using an ultrasonic imaging modality with a ring array transducer, the performance of CEDD-Unet was tested on SOS model datasets from human bones (denoted Dataset1) and mouse bones (denoted Dataset2), and compared with three classic reconstruction architectures (Unet, Unet++, and Att-Unet) and four state-of-the-art architectures (InversionNet, DD-Net, UPFWI, and DEFE-Unet). Experiments showed that CEDD-Unet outperforms all competing methods, achieving the lowest MAE of 23.30 on Dataset1 and 25.29 on Dataset2, the highest SSIM of 0.9702 on Dataset1 and 0.9550 on Dataset2, and the highest PSNR of 30.60 dB on Dataset1 and 32.87 dB on Dataset2. Our method demonstrated superior reconstruction quality, with clearer bone boundaries, reduced artifacts, and improved consistency with the ground truth. Moreover, CEDD-Unet surpasses traditional FWI by producing sharper skeletal SOS reconstructions, reducing computational cost, and eliminating the reliance on an initial model. Ablation studies further confirm the effectiveness of each network component. The results suggest that CEDD-Unet is a promising deep learning-based FWI method for high-resolution bone imaging, with the potential to reconstruct accurate and sharp-edged skeletal SOS models.
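
To make the dual-decoder idea concrete, the following is a minimal PyTorch sketch of a shared encoder feeding one head for the SOS map and one for the bone/soft-tissue boundary; it omits the ConvLSTM and EMA modules and illustrates the general pattern, not the authors' CEDD-Unet.

```python
# Minimal sketch of a shared-encoder, dual-decoder network (an assumption-
# based illustration; all layer sizes are arbitrary).
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class DualDecoderNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec_sos = block(64 + 32, 32)    # decoder 1: SOS reconstruction
        self.dec_edge = block(64 + 32, 32)   # decoder 2: bone/soft-tissue boundary
        self.head_sos = nn.Conv2d(32, 1, 1)
        self.head_edge = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        f1 = self.enc1(x)                    # shared encoder features
        f2 = self.enc2(self.pool(f1))
        up = torch.cat([self.up(f2), f1], dim=1)  # skip connection
        return self.head_sos(self.dec_sos(up)), self.head_edge(self.dec_edge(up))

net = DualDecoderNet()
sos, edge = net(torch.randn(1, 1, 64, 64))   # SOS map + boundary map
```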

TIER-LOC: Visual Query-based Video Clip Localization in fetal ultrasound videos with a multi-tier transformer.

Mishra D, Saha P, Zhao H, Hernandez-Cruz N, Patey O, Papageorghiou AT, Noble JA

PubMed · Jul 1, 2025
In this paper, we introduce the visual query-based task of Video Clip Localization (VQ-VCL) for medical video understanding. Specifically, we aim to retrieve a video clip containing frames similar to a given exemplar frame from a given input video. To solve the task, we propose a novel visual query-based video clip localization model called TIER-LOC. TIER-LOC is designed to improve video clip retrieval, especially in fine-grained videos, by extracting features from different levels, i.e., coarse to fine-grained, referred to as TIERS. The aim is to utilize multi-tier features to detect subtle differences and adapt to scale or resolution variations, leading to improved video-clip retrieval. TIER-LOC has three main components: (1) a Multi-Tier Spatio-Temporal Transformer that fuses spatio-temporal features extracted from multiple tiers of video frames with features from multiple tiers of the visual query, enabling better video understanding; (2) a Multi-Tier, Dual Anchor Contrastive Loss to deal with real-world annotation noise, which can be notable at event boundaries and in videos featuring highly similar objects; (3) a Temporal Uncertainty-Aware Localization Loss designed to reduce the model's sensitivity to imprecise event boundaries. This is achieved by relaxing hard boundary constraints, thus allowing the model to learn underlying class patterns and not be influenced by individual noisy samples. To demonstrate the efficacy of TIER-LOC, we evaluate it on two ultrasound video datasets and an open-source egocentric video dataset. First, we develop a sonographer workflow assistive task model to detect standard-frame clips in fetal ultrasound heart sweeps. Second, we assess our model's performance in retrieving standard-frame clips for detecting fetal anomalies in routine ultrasound scans, using the large-scale PULSE dataset. Lastly, we test our model's performance on an open-source computer vision video dataset by creating a VQ-VCL fine-grained video dataset based on the Ego4D dataset. Our model outperforms the best-performing state-of-the-art model by 7%, 4%, and 4% on the three video datasets, respectively.
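
One plausible reading of the temporal uncertainty-aware loss is to replace hard 0/1 frame targets with targets that decay smoothly outside the annotated clip, so imprecise boundaries are penalized less; below is a minimal sketch of that idea, not the authors' exact formulation.

```python
# Minimal sketch: soft per-frame targets that fall off smoothly outside the
# annotated [start, end] interval (Gaussian falloff is an assumed choice).
import torch
import torch.nn.functional as F

def soft_targets(num_frames, start, end, sigma=2.0):
    t = torch.arange(num_frames, dtype=torch.float32)
    # distance outside the annotated interval (0 for frames inside it)
    dist = torch.clamp(start - t, min=0) + torch.clamp(t - end, min=0)
    return torch.exp(-(dist ** 2) / (2 * sigma ** 2))  # 1 inside, soft falloff outside

logits = torch.randn(100)  # per-frame "inside the clip" scores from some model
loss = F.binary_cross_entropy_with_logits(logits, soft_targets(100, 40, 60))
```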

Accurate and Efficient Fetal Birth Weight Estimation from 3D Ultrasound

Jian Wang, Qiongying Ni, Hongkui Yu, Ruixuan Yao, Jinqiao Ying, Bin Zhang, Xingyi Yang, Jin Peng, Jiongquan Chen, Junxuan Yu, Wenlong Shi, Chaoyu Chen, Zhongnuo Yan, Mingyuan Luo, Gaocheng Cai, Dong Ni, Jing Lu, Xin Yang

arXiv preprint · Jul 1, 2025
Accurate fetal birth weight (FBW) estimation is essential for optimizing delivery decisions and reducing perinatal mortality. However, clinical methods for FBW estimation are inefficient, operator-dependent, and challenging to apply in cases of complex fetal anatomy. Existing deep learning methods are based on 2D standard ultrasound (US) images or videos that lack spatial information, limiting their prediction accuracy. In this study, we propose the first method for directly estimating FBW from 3D fetal US volumes. Our approach integrates a multi-scale feature fusion network (MFFN) and a synthetic sample-based learning framework (SSLF). The MFFN effectively extracts and fuses multi-scale features under sparse supervision by incorporating channel attention, spatial attention, and a ranking-based loss function. SSLF generates synthetic samples by simply combining fetal head and abdomen data from different fetuses, utilizing semi-supervised learning to improve prediction performance. Experimental results demonstrate that our method achieves superior performance, with a mean absolute error of 166.4 ± 155.9 g and a mean absolute percentage error of 5.1 ± 4.6%, outperforming existing methods and approaching the accuracy of a senior doctor. Code is available at: https://github.com/Qioy-i/EFW.
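
The synthetic-sample idea in SSLF, as the abstract describes it, amounts to pairing the head of one fetus with the abdomen of another to enlarge the training pool; a minimal sketch follows, in which the record layout and the placeholder pseudo-label rule are assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming each sample is a dict with 3D US crops "head" and
# "abdomen" plus a birth weight "weight" in grams. The averaged pseudo-label
# is a hypothetical placeholder; the paper handles such samples via
# semi-supervised learning rather than a fixed rule.
import random

def make_synthetic(samples):
    a, b = random.sample(samples, 2)        # two different fetuses
    return {
        "head": a["head"],                  # head volume from fetus A
        "abdomen": b["abdomen"],            # abdomen volume from fetus B
        "weight": 0.5 * (a["weight"] + b["weight"]),  # hypothetical pseudo-label
    }
```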

ADAptation: Reconstruction-based Unsupervised Active Learning for Breast Ultrasound Diagnosis

Yaofei Duan, Yuhao Huang, Xin Yang, Luyi Han, Xinyu Xie, Zhiyuan Zhu, Ping He, Ka-Hou Chan, Ligang Cui, Sio-Kei Im, Dong Ni, Tao Tan

arXiv preprint · Jul 1, 2025
Deep learning-based diagnostic models often suffer performance drops due to distribution shifts between training (source) and test (target) domains. Collecting and labeling sufficient target-domain data for model retraining would be an ideal solution, yet it is limited by time and scarce resources. Active learning (AL) offers an efficient approach to reduce annotation costs while maintaining performance, but it struggles to handle the challenge posed by distribution variations across different datasets. In this study, we propose a novel unsupervised Active learning framework for Domain Adaptation, named ADAptation, which efficiently selects informative samples from multi-domain data pools under a limited annotation budget. As a fundamental step, our method first utilizes the distribution-homogenization capabilities of diffusion models to bridge cross-dataset gaps by translating target images into the source-domain style. We then introduce two key innovations: (a) a hypersphere-constrained contrastive learning network for compact feature clustering, and (b) a dual-scoring mechanism that quantifies and balances sample uncertainty and representativeness. Extensive experiments on four breast ultrasound datasets (three public and one in-house/multi-center) across five common deep classifiers demonstrate that our method surpasses existing strong AL-based competitors, validating its effectiveness and generalization for clinical domain adaptation. The code is available at the anonymized link: https://github.com/miccai25-966/ADAptation.
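
The dual-scoring mechanism can be pictured as ranking unlabeled samples by a weighted blend of uncertainty and representativeness; here is a minimal sketch under assumed choices (entropy for uncertainty, distance to the pool centroid for representativeness), which are not necessarily ADAptation's.

```python
# Minimal sketch of dual-scoring sample selection for active learning.
import numpy as np

def select(probs, feats, budget, alpha=0.5):
    """probs: (N, C) softmax outputs; feats: (N, D) embeddings."""
    entropy = -(probs * np.log(probs + 1e-8)).sum(axis=1)   # predictive uncertainty
    centroid = feats.mean(axis=0)
    rep = -np.linalg.norm(feats - centroid, axis=1)         # closeness to the pool centroid
    norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-8)     # rescale each score to [0, 1]
    score = alpha * norm(entropy) + (1 - alpha) * norm(rep)
    return np.argsort(-score)[:budget]                      # indices to send for annotation
```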

Enhancing ultrasonographic detection of hepatocellular carcinoma with artificial intelligence: current applications, challenges and future directions.

Wongsuwan J, Tubtawee T, Nirattisaikul S, Danpanichkul P, Cheungpasitporn W, Chaichulee S, Kaewdech A

PubMed · Jul 1, 2025
Hepatocellular carcinoma (HCC) remains a leading cause of cancer-related mortality worldwide, with early detection playing a crucial role in improving survival rates. Artificial intelligence (AI), particularly in medical image analysis, has emerged as a potential tool for HCC diagnosis and surveillance. Recent advancements in deep learning-driven medical imaging have demonstrated significant potential for enhancing early HCC detection, particularly in ultrasound (US)-based surveillance. This review provides a comprehensive analysis of the current landscape, challenges, and future directions of AI in HCC surveillance, with a specific focus on its application in US imaging. Additionally, it explores AI's transformative potential in clinical practice and its implications for improving patient outcomes. We examine various AI models developed for HCC diagnosis, highlighting their strengths and limitations, with a particular emphasis on deep learning approaches. Among these, convolutional neural networks have shown notable success in detecting and characterising different focal liver lesions on B-mode US, often outperforming conventional radiological assessments. Despite these advancements, several challenges hinder AI integration into clinical practice, including data heterogeneity, a lack of standardisation, concerns regarding model interpretability, regulatory constraints, and barriers to real-world clinical adoption. Addressing these issues necessitates the development of large, diverse, and high-quality data sets to enhance the robustness and generalisability of AI models. Emerging trends in AI for HCC surveillance, such as multimodal integration, explainable AI, and real-time diagnostics, offer promising advancements. These innovations have the potential to significantly improve the accuracy, efficiency, and clinical applicability of AI-driven HCC surveillance, ultimately contributing to enhanced patient outcomes.

Ultrasound-based classification of follicular thyroid cancer using deep convolutional neural networks with transfer learning.

Agyekum EA, Yuzhi Z, Fang Y, Agyekum DN, Wang X, Issaka E, Li C, Shen X, Qian X, Wu X

PubMed · Jul 1, 2025
This study aimed to develop and validate convolutional neural network (CNN) models for distinguishing follicular thyroid carcinoma (FTC) from follicular thyroid adenoma (FTA). Additionally, the current study compared the performance of the CNN models with the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS) and Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) ultrasound-based malignancy risk stratification systems. A total of 327 eligible patients with FTC and FTA who underwent preoperative thyroid ultrasound examination were retrospectively enrolled between August 2017 and August 2024. Patients were randomly assigned to a training cohort (n = 263) and a test cohort (n = 64) in an 8:2 ratio using stratified sampling. Five CNN models (VGG16, ResNet101, MobileNetV2, ResNet152, and ResNet50), pre-trained on ImageNet, were developed and tested to distinguish FTC from FTA. The CNN models exhibited good performance, yielding areas under the receiver operating characteristic curve (AUC) ranging from 0.64 to 0.77. The ResNet152 model demonstrated the highest AUC (0.77; 95% CI, 0.67-0.87) for distinguishing between FTC and FTA. Decision curve and calibration curve analyses demonstrated the models' favorable clinical value and calibration. Furthermore, when comparing the performance of the developed models with that of the C-TIRADS and ACR-TIRADS systems, the models developed in this study demonstrated superior performance. This can potentially guide appropriate management of FTC in patients with follicular neoplasms.
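
The transfer-learning setup described here follows a standard recipe; below is a minimal sketch of one of the five backbones (ResNet152) adapted to the binary FTC-vs-FTA task, with generic, assumed training details.

```python
# Minimal sketch: ImageNet-pretrained ResNet152 with its classification head
# replaced for a two-class problem. Freezing the backbone is an assumed,
# optional choice, not a detail from the paper.
import torch.nn as nn
from torchvision import models

model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # FTC vs FTA

# Optionally freeze the pretrained backbone and train only the new head
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")
```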

Knowledge mapping of ultrasound technology and triple-negative breast cancer: a visual and bibliometric analysis.

Wan Y, Shen Y, Wang J, Zhang T, Fu X

PubMed · Jul 1, 2025
This study aims to explore the application of ultrasound technology in triple-negative breast cancer (TNBC) using bibliometric methods. It presents a visual knowledge map to exhibit global research dynamics and elucidates the research directions, hotspots, trends, and frontiers in this field. The Web of Science Core Collection database was used, and CiteSpace and VOSviewer software were employed to visualize the annual publication volume, collaborative networks (including countries, institutions, and authors), citation characteristics (such as references, co-citations, and publications), and keywords (including emergence and clustering) related to ultrasound applications in TNBC over the past 15 years. A total of 310 papers were included. The first paper was published in 2010, after which publication output grew rapidly, especially after 2020. China emerged as the leading country in terms of publication volume, while Shanghai Jiao Tong University had the highest output among institutions. Memorial Sloan Kettering Cancer Center was recognized as a key research institution within this domain. Adrada BE was the most prolific author in terms of publication count, and Ko ES held the highest citation frequency among authors. Co-occurrence analysis of keywords revealed that the top three keywords by frequency were "triple-negative breast cancer," "breast cancer," and "sonography." The timeline visualization indicated strong temporal continuity in the clusters of "breast cancer," "recommendations," "biopsy," "estrogen receptor," and "radiomics." The keyword with the highest emergence value was "neoplasms" (6.80). Trend analysis of emerging terms indicated a growing focus on "machine learning approaches," "prognosis," and "molecular subtypes," with "machine learning approach" emerging as a significant keyword at present. This study provides a systematic analysis of the current state of ultrasound technology applications in TNBC. It highlights that machine learning methods have emerged as a central focus and frontier in this research area, both presently and for the foreseeable future. The findings offer valuable theoretical insights for the application of ultrasound technology in TNBC diagnosis and treatment and establish a solid foundation for further advancements in medical imaging research related to TNBC.
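
For readers unfamiliar with the mechanics, keyword co-occurrence maps of the kind produced with VOSviewer reduce to counting how often keyword pairs appear together across papers; a toy sketch with hypothetical records follows.

```python
# Minimal sketch of keyword co-occurrence counting (the example records are
# hypothetical, not drawn from the study's corpus).
from itertools import combinations
from collections import Counter

papers = [
    {"triple-negative breast cancer", "sonography", "radiomics"},
    {"breast cancer", "sonography", "machine learning approach"},
]
cooccur = Counter()
for kws in papers:
    cooccur.update(combinations(sorted(kws), 2))  # each keyword pair per paper
print(cooccur.most_common(3))
```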

An adaptive deep learning approach based on InBNFus and CNNDen-GRU networks for breast cancer and maternal fetal classification using ultrasound images.

Fatima M, Khan MA, Mirza AM, Shin J, Alasiry A, Marzougui M, Cha J, Chang B

PubMed · Jul 1, 2025
Convolutional neural networks (CNNs), a sophisticated deep learning technique, have proven highly effective in identifying and classifying disease-related abnormalities. The manual classification of these abnormalities is a laborious and time-consuming process; therefore, it is essential to develop a computerized technique. Most existing methods are designed to address a single specific problem, limiting their adaptability. In this work, we propose a novel adaptive deep-learning framework for simultaneously classifying breast cancer and maternal-fetal ultrasound datasets. Data augmentation was applied in the preprocessing phase to address the data imbalance problem. Afterward, two novel architectures are proposed: InBnFUS and CNNDen-GRU. The InBnFUS network combines a 5-block inception-based architecture (Model 1) and a 5-block inverted bottleneck-based architecture (Model 2) through a depth-wise concatenation layer, while CNNDen-GRU incorporates a 5-block dense architecture with an integrated GRU layer. After training, features were extracted from the global average pooling and GRU layers and classified using neural network classifiers. The experimental evaluation achieved enhanced accuracy rates of 99.0% for the breast cancer, 96.6% for the maternal-fetal (common planes), and 94.6% for the maternal-fetal (brain) datasets. Additionally, the models consistently achieve high precision, recall, and F1 scores across all datasets. A comprehensive ablation study has been performed, and the results show the superior performance of the proposed models.
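
As a rough illustration of the CNN-plus-GRU pattern underlying CNNDen-GRU, here is a minimal PyTorch sketch in which convolutional features are unrolled into a sequence for a GRU; the layer sizes are assumptions, and the dense blocks and fusion network of the paper are omitted.

```python
# Minimal sketch: conv features flattened into a sequence, summarized by a
# GRU, then classified (an illustrative pattern, not the authors' model).
import torch
import torch.nn as nn

class CnnGru(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):
        f = self.conv(x)                    # (B, 64, H', W')
        seq = f.flatten(2).transpose(1, 2)  # (B, H'*W', 64): spatial positions as steps
        _, h = self.gru(seq)                # final hidden state summarizes the sequence
        return self.fc(h.squeeze(0))

logits = CnnGru(n_classes=2)(torch.randn(4, 3, 64, 64))
```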
