Page 3 of 24237 results

Uncertainty-aware Diffusion and Reinforcement Learning for Joint Plane Localization and Anomaly Diagnosis in 3D Ultrasound

Yuhao Huang, Yueyue Xu, Haoran Dou, Jiaxiao Deng, Xin Yang, Hongyu Zheng, Dong Ni

arXiv preprint · Jun 30 2025
Congenital uterine anomalies (CUAs) can lead to infertility, miscarriage, preterm birth, and an increased risk of pregnancy complications. Compared to traditional 2D ultrasound (US), 3D US can reconstruct the coronal plane, providing a clear visualization of the uterine morphology for assessing CUAs accurately. In this paper, we propose an intelligent system for simultaneous automated plane localization and CUA diagnosis. Our highlights are: 1) we develop a denoising diffusion model with local (plane) and global (volume/text) guidance, using an adaptive weighting strategy to optimize attention allocation to different conditions; 2) we introduce a reinforcement learning-based framework with unsupervised rewards to extract the key slice summary from redundant sequences, fully integrating information across multiple planes to reduce learning difficulty; 3) we provide text-driven uncertainty modeling for coarse prediction, and leverage it to adjust the classification probability for overall performance improvement. Extensive experiments on a large 3D uterine US dataset show the efficacy of our method, in terms of plane localization and CUA diagnosis. Code is available at https://github.com/yuhoo0302/CUA-US.

Federated Breast Cancer Detection Enhanced by Synthetic Ultrasound Image Augmentation

Hongyi Pan, Ziliang Hong, Gorkem Durak, Ziyue Xu, Ulas Bagci

arXiv preprint · Jun 29 2025
Federated learning (FL) has emerged as a promising paradigm for collaboratively training deep learning models across institutions without exchanging sensitive medical data. However, its effectiveness is often hindered by limited data availability and non-independent and identically distributed (non-IID) data across participating clients, which can degrade model performance and generalization. To address these challenges, we propose a generative AI-based data augmentation framework that integrates synthetic image sharing into the federated training process for breast cancer diagnosis via ultrasound images. Specifically, we train two simple class-specific Deep Convolutional Generative Adversarial Networks: one for benign and one for malignant lesions. We then simulate a realistic FL setting using three publicly available breast ultrasound image datasets: BUSI, BUS-BRA, and UDIAT. FedAvg and FedProx are adopted as baseline FL algorithms. Experimental results show that incorporating a suitable number of synthetic images improved the average AUC from 0.9206 to 0.9237 for FedAvg and from 0.9429 to 0.9538 for FedProx. We also note that excessive use of synthetic data reduced performance, underscoring the importance of maintaining a balanced ratio of real and synthetic samples. Our findings highlight the potential of generative AI-based data augmentation to enhance FL results in the breast ultrasound image classification task.
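The FedAvg baseline above aggregates client updates by a sample-weighted average of model parameters. A minimal sketch of that aggregation step, assuming flat parameter lists and invented client sizes rather than the paper's CNN weights:

```python
# FedAvg server-side aggregation sketch: average client model weights,
# weighted by each client's local sample count. Clients and sizes here
# are illustrative, not the paper's BUSI/BUS-BRA/UDIAT setup.

def fedavg(client_weights, client_sizes):
    """Sample-weighted average of flat weight vectors (lists of floats)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    agg = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += w * (size / total)
    return agg

# Three clients with toy 2-parameter "models" and unequal data sizes.
clients = [[1.0, 0.0], [3.0, 2.0], [5.0, 4.0]]
sizes = [100, 100, 200]
global_weights = fedavg(clients, sizes)
print(global_weights)  # [3.5, 2.5]
```

FedProx differs only in adding a proximal term to each client's local objective; the server-side averaging step is the same.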

Hierarchical Corpus-View-Category Refinement for Carotid Plaque Risk Grading in Ultrasound

Zhiyuan Zhu, Jian Wang, Yong Jiang, Tong Han, Yuhao Huang, Ang Zhang, Kaiwen Yang, Mingyuan Luo, Zhe Liu, Yaofei Duan, Dong Ni, Tianhong Tang, Xin Yang

arXiv preprint · Jun 29 2025
Accurate carotid plaque grading (CPG) is vital to assess the risk of cardiovascular and cerebrovascular diseases. Due to the small size and high intra-class variability of plaque, CPG is commonly evaluated using a combination of transverse and longitudinal ultrasound views in clinical practice. However, most existing deep learning-based multi-view classification methods focus on feature fusion across different views, neglecting the importance of representation learning and the difference in class features. To address these issues, we propose a novel Corpus-View-Category Refinement Framework (CVC-RF) that processes information at the Corpus, View, and Category levels, enhancing model performance. Our contribution is four-fold. First, to the best of our knowledge, this is the first deep learning-based method for CPG according to the latest Carotid Plaque-RADS guidelines. Second, we propose a novel center-memory contrastive loss, which enhances the network's global modeling capability by comparing with representative cluster centers and diverse negative samples at the Corpus level. Third, we design a cascaded down-sampling attention module to fuse multi-scale information and achieve implicit feature interaction at the View level. Finally, a parameter-free mixture-of-experts weighting strategy is introduced to leverage class clustering knowledge to weight different experts, enabling feature decoupling at the Category level. Experimental results indicate that CVC-RF effectively models global features via multi-level refinement, achieving state-of-the-art performance in the challenging CPG task.

Lightweight Physics-Informed Zero-Shot Ultrasound Plane Wave Denoising

Hojat Asgariandehkordi, Mostafa Sharifzadeh, Hassan Rivaz

arXiv preprint · Jun 26 2025
Ultrasound Coherent Plane Wave Compounding (CPWC) enhances image contrast by combining echoes from multiple steered transmissions. While increasing the number of angles generally improves image quality, it drastically reduces the frame rate and can introduce blurring artifacts in fast-moving targets. Moreover, compounded images remain susceptible to noise, particularly when acquired with a limited number of transmissions. We propose a zero-shot denoising framework tailored for low-angle CPWC acquisitions, which enhances contrast without relying on a separate training dataset. The method divides the available transmission angles into two disjoint subsets, each used to form compound images that include higher noise levels. The new compounded images are then used to train a deep model via a self-supervised residual learning scheme, enabling it to suppress incoherent noise while preserving anatomical structures. Because angle-dependent artifacts vary between the subsets while the underlying tissue response is similar, this physics-informed pairing allows the network to learn to disentangle the inconsistent artifacts from the consistent tissue signal. Unlike supervised methods, our model requires no domain-specific fine-tuning or paired data, making it adaptable across anatomical regions and acquisition setups. The entire pipeline supports efficient training with low computational cost due to the use of a lightweight architecture, which comprises only two convolutional layers. Evaluations on simulation, phantom, and in vivo data demonstrate superior contrast enhancement and structure preservation compared to both classical and deep learning-based denoising methods.
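The core of this zero-shot scheme is the disjoint angle split: two compounds of the same tissue carry independent noise, so one can serve as the self-supervised target for the other. A toy sketch of that pairing, with plain lists standing in for beamformed images and the paper's two-layer denoising CNN omitted:

```python
# Sketch of the physics-informed pairing: split transmit angles into two
# disjoint subsets, compound each subset separately, and use the two
# noisier compounds as a self-supervised input/target pair. Lists stand
# in for beamformed images; real RF data and the CNN are omitted.

def compound(frames):
    """Coherent compounding approximated as a per-pixel mean."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def make_pair(frames):
    """Disjoint even/odd angle split -> two independently compounded images."""
    subset_a = frames[0::2]
    subset_b = frames[1::2]
    return compound(subset_a), compound(subset_b)

# Four steered-angle acquisitions of a 3-pixel line (toy values).
frames = [[1.0, 2.0, 3.0], [1.2, 2.2, 3.2], [0.8, 1.8, 2.8], [1.0, 2.0, 3.0]]
img_a, img_b = make_pair(frames)
# img_a and img_b share the tissue signal but carry independent noise,
# so a network trained to map one onto the other suppresses that noise.
```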

Automated breast ultrasound features associated with diagnostic performance of Multiview convolutional neural network according to radiologists' experience.

Choi EJ, Wang Y, Choi H, Youk JH, Byon JH, Choi S, Ko S, Jin GY

PubMed paper · Jun 26 2025
To investigate the automated breast ultrasound (ABUS) features that affect the use of a Multiview convolutional neural network (CNN) for breast lesions according to radiologists' experience. A total of 656 breast lesions (152 malignant and 504 benign) were included and reviewed by six radiologists for background echotexture, glandular tissue component (GTC), and lesion type and size, both without and with the Multiview CNN. Sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) for ABUS features were compared between the two sessions according to radiologists' experience. Radiology residents showed significant AUC improvement with the Multiview CNN for mass (0.81 to 0.91, P=0.003) and non-mass lesions (0.56 to 0.90, P=0.007), all background echotextures (homogeneous-fat: 0.84 to 0.94, P=0.04; homogeneous-fibroglandular: 0.85 to 0.93, P=0.01; heterogeneous: 0.68 to 0.88, P=0.002), all GTC levels (minimal: 0.86 to 0.93, P=0.001; mild: 0.82 to 0.94, P=0.003; moderate: 0.75 to 0.88, P=0.01; marked: 0.68 to 0.89, P<0.001), and lesions ≤10 mm (≤5 mm: 0.69 to 0.86, P<0.001; 6-10 mm: 0.83 to 0.92, P<0.001). Breast specialists showed significant AUC improvement with the Multiview CNN for heterogeneous echotexture (0.90 to 0.95, P=0.03), marked GTC (0.88 to 0.95, P<0.001), and lesions ≤10 mm (≤5 mm: 0.89 to 0.93, P=0.02; 6-10 mm: 0.95 to 0.98, P=0.01). With the Multiview CNN, radiology residents' ABUS performance improved regardless of lesion type, background echotexture, or GTC. For breast lesions smaller than 10 mm, both radiology residents and breast specialists showed better ABUS performance.
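AUC, the metric compared throughout this study, can be read as the probability that a reader scores a random malignant lesion higher than a random benign one. A hedged sketch using the equivalent Mann-Whitney pairwise count; all scores below are invented, not the study's data:

```python
# AUC via the Mann-Whitney U statistic: the fraction of
# (malignant, benign) pairs ranked correctly, with ties counted as 0.5.

def auc(pos_scores, neg_scores):
    """Probability a positive case outranks a negative one."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

malignant = [0.9, 0.8, 0.7]    # reader confidence for malignant lesions
benign = [0.2, 0.4, 0.7, 0.1]  # reader confidence for benign lesions
print(auc(malignant, benign))
```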

Application Value of Deep Learning-Based AI Model in the Classification of Breast Nodules.

Zhi S, Cai X, Zhou W, Qian P

PubMed paper · Jun 25 2025
<b>Aims/Background</b> Breast nodules are highly prevalent among women, and ultrasound is a widely used screening tool. However, single ultrasound examinations often result in high false-positive rates, leading to unnecessary biopsies. Artificial intelligence (AI) has demonstrated the potential to improve diagnostic accuracy, reducing misdiagnosis and minimising inter-observer variability. This study developed a deep learning-based AI model to evaluate its clinical utility in assisting sonographers with the Breast Imaging Reporting and Data System (BI-RADS) classification of breast nodules. <b>Methods</b> A retrospective analysis was conducted on 558 patients with breast nodules classified as BI-RADS categories 3 to 5, confirmed through pathological examination at The People's Hospital of Pingyang County between December 2019 and December 2023. The image dataset was divided into training, validation, and test sets, and a convolutional neural network (CNN) was used to construct a deep learning-based AI model. Patients underwent ultrasound examination and AI-assisted diagnosis. Receiver operating characteristic (ROC) curves were used to analyse the performance of the AI model, the physician adjudication results, and the diagnostic efficacy of physicians before and after AI model assistance. Cohen's weighted Kappa coefficient was used to assess the consistency of BI-RADS classification among five ultrasound physicians before and after AI model assistance. Additionally, statistical analyses were performed to evaluate changes in each physician's BI-RADS classification results before and after AI model assistance. <b>Results</b> According to pathological examination, 765 of the 1026 breast nodules were benign, while 261 were malignant. The sensitivity, specificity, and accuracy of routine ultrasonography in diagnosing benign and malignant nodules were 80.85%, 91.59%, and 88.31%, respectively. In comparison, the AI system achieved a sensitivity of 89.36%, specificity of 92.52%, and accuracy of 91.56%. Furthermore, AI model assistance significantly improved the consistency of physicians' BI-RADS classification (<i>p</i> < 0.001). <b>Conclusion</b> A deep learning-based AI model constructed using ultrasound images can enhance the differentiation between benign and malignant breast nodules and improve classification accuracy, thereby reducing the incidence of missed diagnoses and misdiagnoses.
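The sensitivity, specificity, and accuracy figures above come straight from a confusion matrix. A small sketch of those definitions with illustrative counts, not the study's 1026 nodules:

```python
# Diagnostic metrics from confusion-matrix counts:
# tp/fn over malignant cases, tn/fp over benign cases.

def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Example: 90 of 100 malignant nodules caught, 270 of 300 benign cleared.
sens, spec, acc = diagnostic_metrics(tp=90, fn=10, tn=270, fp=30)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")
```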

Fusing Radiomic Features with Deep Representations for Gestational Age Estimation in Fetal Ultrasound Images

Fangyijie Wang, Yuan Liang, Sourav Bhattacharjee, Abey Campbell, Kathleen M. Curran, Guénolé Silvestre

arXiv preprint · Jun 25 2025
Accurate gestational age (GA) estimation, ideally through fetal ultrasound measurement, is a crucial aspect of providing excellent antenatal care. However, deriving GA from manual fetal biometric measurements is operator-dependent and time-consuming, so automatic computer-assisted methods are needed in clinical practice. In this paper, we present a novel feature fusion framework to estimate GA using fetal ultrasound images without any measurement information. We adopt a deep learning model to extract deep representations from ultrasound images, and we extract radiomic features to reveal patterns and characteristics of fetal brain growth. To harness the interpretability of radiomics in medical imaging analysis, we estimate GA by fusing radiomic features and deep representations. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages. Experimental results demonstrate the robustness of our framework across different populations in diverse geographical regions. Our code is publicly available at https://github.com/13204942/RadiomicsImageFusion_FetalUS.

[Thyroid nodule segmentation method integrating receptance weighted key value architecture and spherical geometric features].

Zhu L, Wei G

PubMed paper · Jun 25 2025
To address the high computational complexity of the Transformer in the segmentation of ultrasound thyroid nodules, and the loss of image details or omission of key spatial information caused by traditional image sampling techniques when dealing with high-resolution, complex-texture, or uneven-density two-dimensional ultrasound images, this paper proposes a thyroid nodule segmentation method that integrates the receptance weighted key value (RWKV) architecture and spherical geometry feature (SGF) sampling technology. This method effectively captures the details of adjacent regions through two-dimensional offset prediction and pixel-level sampling position adjustment, achieving precise segmentation. Additionally, this study introduces a patch attention module (PAM) to optimize the decoder feature map using a regional cross-attention mechanism, enabling it to focus more precisely on the high-resolution features of the encoder. Experiments on the thyroid nodule segmentation dataset (TN3K) and the digital database for thyroid images (DDTI) show that the proposed method achieves dice similarity coefficients (DSC) of 87.24% and 80.79% respectively, outperforming existing models while maintaining a lower computational complexity. This approach may provide an efficient solution for the precise segmentation of thyroid nodules.
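The DSC reported on TN3K and DDTI measures overlap between predicted and ground-truth segmentation masks. A minimal sketch of the Dice similarity coefficient on toy binary masks, not real ultrasound data:

```python
# Dice similarity coefficient on binary masks (flattened to lists):
# DSC = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.

def dice(pred, truth):
    """DSC for two equal-length binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```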

A New Aortic Valve Calcium Scoring Framework for Automatic Calcification Detection in Echocardiography.

Cakir M, Kablan EB, Ekinci M, Sahin M

PubMed paper · Jun 25 2025
Aortic valve calcium scoring is an essential tool for diagnosing, treating, monitoring, and assessing the risk of aortic stenosis. The current gold standard for determining the aortic valve calcium score is computed tomography (CT). However, CT is costly and exposes patients to ionizing radiation, making it less ideal for frequent monitoring. Echocardiography, a safer and more affordable alternative that avoids radiation, is more widely accessible, but its variability between and within experts leads to subjective interpretations. Given these limitations, there is a clear need for an automated, objective method to measure the aortic valve calcium score from echocardiography, which could reduce costs and improve patient safety. In this paper, we first employ the YOLOv5 method to detect the region of interest in the aorta within echocardiography images. Building on this, we propose a novel approach that combines UNet and diffusion model architectures to segment calcified areas within the identified region, forming the foundation for automated aortic valve calcium scoring. This architecture leverages UNet's localization capabilities and the diffusion model's strengths in capturing fine-grained structures, enhancing both segmentation accuracy and consistency. The proposed method achieves 85.08% precision, 80.01% recall, and 71.13% Dice score on a novel dataset comprising 160 echocardiography images from 86 distinct patients. This system enables cardiologists to focus more on critical aspects of diagnosis by providing a faster, more objective, and cost-effective method for aortic valve calcium scoring and eliminating the risk of radiation exposure.

Integrating handheld ultrasound in rheumatology: A review of benefits and drawbacks.

Sabido-Sauri R, Eder L, Emery P, Aydin SZ

PubMed paper · Jun 25 2025
Musculoskeletal ultrasound is a key tool in rheumatology for diagnosing and managing inflammatory arthritis. Traditional ultrasound systems, while effective, can be cumbersome and costly, limiting their use in many clinical settings. Handheld ultrasound (HHUS) devices, which are portable, affordable, and user-friendly, have emerged as a promising alternative. This review explores the role of HHUS in rheumatology, specifically evaluating its impact on diagnostic accuracy, ease of use, and utility in screening for inflammatory arthritis. The review also addresses key challenges, such as image quality, storage and data security, and the potential for integrating artificial intelligence to improve device performance. We compare HHUS devices to cart-based ultrasound machines, discuss their advantages and limitations, and examine the potential for widespread adoption. Our findings suggest that HHUS devices can effectively support musculoskeletal assessments and offer significant benefits in resource-limited settings. However, proper training, standardized protocols, and continued technological advancements are essential for optimizing their use in clinical practice.