
High Volume Rate 3D Ultrasound Reconstruction with Diffusion Models

Tristan S. W. Stevens, Oisín Nolan, Oudom Somphone, Jean-Luc Robert, Ruud J. G. van Sloun

arXiv preprint · May 28, 2025
Three-dimensional ultrasound enables real-time volumetric visualization of anatomical structures. Unlike traditional 2D ultrasound, 3D imaging reduces the reliance on precise probe orientation, potentially making ultrasound more accessible to clinicians with varying levels of experience and improving automated measurements and post-exam analysis. However, achieving both high volume rates and high image quality remains a significant challenge. While 3D diverging waves can provide high volume rates, they suffer from limited tissue harmonic generation and increased multipath effects, which degrade image quality. One compromise is to retain the focusing in elevation while leveraging unfocused diverging waves in the lateral direction to reduce the number of transmissions per elevation plane. Reaching the volume rates achieved by full 3D diverging waves, however, requires dramatically undersampling the number of elevation planes. Subsequently, to render the full volume, simple interpolation techniques are applied. This paper introduces a novel approach to 3D ultrasound reconstruction from a reduced set of elevation planes by employing diffusion models (DMs) to achieve increased spatial and temporal resolution. We compare both traditional and supervised deep learning-based interpolation methods on a 3D cardiac ultrasound dataset. Our results show that DM-based reconstruction consistently outperforms the baselines in image quality and downstream task performance. Additionally, we accelerate inference by leveraging the temporal consistency inherent to ultrasound sequences. Finally, we explore the robustness of the proposed method by exploiting the probabilistic nature of diffusion posterior sampling to quantify reconstruction uncertainty and demonstrate improved recall on out-of-distribution data with synthetic anomalies under strong subsampling.
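
To make the conditioning idea concrete, below is a minimal NumPy sketch of mask-conditioned diffusion posterior sampling in the RePaint style: observed elevation planes are re-noised to the current step and overwritten during reverse diffusion, so the sample stays consistent with the measured planes. The noise schedule, the `denoiser` stub, and all shapes are hypothetical stand-ins, not the authors' model or sampler.

```python
# Minimal sketch of mask-conditioned diffusion posterior sampling for
# reconstructing a dense stack of elevation planes from a subsampled one.
# Everything here (schedule, denoiser stub, shapes) is an illustrative
# placeholder, not the paper's trained model.
import numpy as np

T = 50                                   # number of reverse-diffusion steps
betas = np.linspace(1e-4, 0.02, T)       # simple linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t):
    """Placeholder for a trained noise-prediction network (eps_hat)."""
    return np.zeros_like(x_t)

def posterior_sample(volume_sub, mask, rng):
    """volume_sub: (E, H, W) stack where only mask==1 planes are observed."""
    x = rng.standard_normal(volume_sub.shape)
    for t in reversed(range(T)):
        eps_hat = denoiser(x, t)
        # Standard DDPM mean update.
        x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
            # Data consistency (RePaint-style): re-noise the observed
            # elevation planes to level t-1 and overwrite them.
            noised_obs = (np.sqrt(alpha_bars[t - 1]) * volume_sub
                          + np.sqrt(1 - alpha_bars[t - 1]) * rng.standard_normal(x.shape))
            x[mask] = noised_obs[mask]
        else:
            x[mask] = volume_sub[mask]   # clean observed planes at t = 0
    return x

rng = np.random.default_rng(0)
volume = rng.standard_normal((64, 128, 128))   # toy dense volume
mask = np.zeros(64, dtype=bool)
mask[::4] = True                               # keep every 4th elevation plane
recon = posterior_sample(np.where(mask[:, None, None], volume, 0.0), mask, rng)
```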

Incorporating organ deformation in biological modeling and patient outcome study for permanent prostate brachytherapy.

To S, Mavroidis P, Chen RC, Wang A, Royce T, Tan X, Zhu T, Lian J

PubMed · May 28, 2025
Permanent prostate brachytherapy involves inherent intraoperative organ deformation caused by the inflatable trans-rectal ultrasound probe cover. Since the majority of the dose is delivered postoperatively with no deformation, the dosimetry approved at the time of implant may not accurately represent the dose delivered to the target and organs at risk. We aimed to evaluate the biological effect of prostate deformation and its correlation with patient-reported outcomes. We prospectively acquired ultrasound images of the prostate before and after probe-cover inflation for 27 patients undergoing I-125 seed implant. The coordinates of the implanted seeds from the approved clinical plan were transferred to the deformation-corrected prostate using machine learning-based deformable image registration to simulate the actual dosimetry. The DVHs of both sets of plans were reduced to biologically effective dose (BED) distributions and subsequently to Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) metrics. The change in fourteen patient-reported rectal and urinary symptoms between pretreatment and 6 months post-op was correlated with the TCP and NTCP metrics using the area under the curve (AUC) and odds ratio (OR). Between the clinical and the deformation-corrected research plans, the mean TCP decreased by 9.4% (p < 0.01), whereas the mean NTCP of the rectum decreased by 10.3% and that of the urethra increased by 16.3% (p < 0.01). For the diarrhea symptom, the deformation-corrected research plans showed AUC = 0.75 and OR = 8.9 (1.3-58.8) for the threshold NTCP > 20%, while the clinical plans showed AUC = 0.56 and OR = 1.4 (0.2-9.0). For the symptom of urinary control, the deformation-corrected research plans showed AUC = 0.70 and OR = 6.9 (0.6-78.0) for the threshold NTCP > 15%, while the clinical plans showed AUC = 0.51 and no positive OR. Taking organ deformation into consideration, clinical brachytherapy plans showed worse tumor coverage and worse urethra sparing but better rectal sparing. The deformation-corrected research plans correlated more strongly with patient-reported outcomes than the clinical plans for the symptoms of diarrhea and urinary control.
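
For readers unfamiliar with the dose-reduction chain, here is a hedged sketch using textbook models: the linear-quadratic BED, a logistic TCP, and the Lyman-Kutcher-Burman (LKB) NTCP via the generalized EUD. All parameter values (`alpha_beta`, `d50`, `td50`, `m`, `n`) are illustrative placeholders rather than the study's, and the full permanent-implant BED, which additionally accounts for source decay and sublethal-damage repair (Dale's formulation), is omitted for brevity.

```python
# Hedged sketch of DVH -> BED -> TCP/NTCP reduction with textbook models.
# Parameter values are illustrative, not those used in the study.
import numpy as np
from scipy.stats import norm

def lq_bed(total_dose, dose_per_fraction, alpha_beta=3.0):
    """Simple LQ biologically effective dose: BED = D * (1 + d / (a/b))."""
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def logistic_tcp(d, d50=100.0, gamma50=2.0):
    """Logistic TCP model: TCP = 1 / (1 + (D50/D)^(4*gamma50))."""
    return 1.0 / (1.0 + (d50 / d) ** (4.0 * gamma50))

def lkb_ntcp(dose_bins, vol_fracs, td50=80.0, m=0.15, n=0.1):
    """LKB NTCP from a differential DVH via the generalized EUD."""
    geud = float(np.sum(vol_fracs * dose_bins ** (1.0 / n))) ** n
    return float(norm.cdf((geud - td50) / (m * td50)))

dvh_dose = np.array([60.0, 80.0, 100.0])   # Gy bins of a toy differential DVH
dvh_vol = np.array([0.2, 0.5, 0.3])        # fractional volumes per bin
print(logistic_tcp(lq_bed(100.0, 2.0)), lkb_ntcp(dvh_dose, dvh_vol))
```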

Contrast-Enhanced Ultrasound for Hepatocellular Carcinoma Diagnosis: AJR Expert Panel Narrative Review.

Li L, Burgio MD, Fetzer DT, Ferraioli G, Lyshchik A, Meloni MF, Rafailidis V, Sidhu PS, Vilgrain V, Wilson SR, Zhou J

PubMed · May 28, 2025
Despite growing clinical use of contrast-enhanced ultrasound (CEUS), inconsistency remains in the modality's role in clinical pathways for hepatocellular carcinoma (HCC) diagnosis and management. This AJR Expert Panel Narrative Review provides practical insights on the use of CEUS for the diagnosis of HCC across populations, including individuals at high risk for HCC, individuals with metabolic dysfunction-associated steatotic liver disease, and individuals not at high risk for HCC. Considerations addressed with respect to high-risk patients include CEUS diagnostic criteria for HCC, use of CEUS for differentiating HCC from non-HCC malignancy, use of CEUS for small (≤2 cm) lesions, use of CEUS for characterizing lesions occult on B-mode ultrasound, and use of CEUS for indeterminate lesions on CT or MRI. Representative literature addressing the use of CEUS for HCC diagnosis, as well as gaps in knowledge requiring further investigation, is highlighted. Throughout these discussions, the article distinguishes two broad types of ultrasound contrast agents used for liver imaging: pure blood-pool agents and a combined blood-pool and Kupffer-cell agent. Additional topics include the use of CEUS for treatment response assessment after nonradiation therapies and the implications of artificial intelligence technologies. The article concludes with a series of consensus statements from the author panel.

An orchestration learning framework for ultrasound imaging: Prompt-Guided Hyper-Perception and Attention-Matching Downstream Synchronization.

Lin Z, Li S, Wang S, Gao Z, Sun Y, Lam CT, Hu X, Yang X, Ni D, Tan T

PubMed · May 27, 2025
Ultrasound imaging is pivotal in clinical diagnostics due to its affordability, portability, safety, real-time capability, and non-invasive nature. It is widely used to examine various organs, including the breast, thyroid, ovary, and heart. However, manual interpretation and annotation of ultrasound images are time-consuming and prone to variability among physicians. While single-task artificial intelligence (AI) solutions have been explored, they are not ideal for scaling AI applications in medical imaging. Foundation models, although a trending solution, often struggle with real-world medical datasets due to factors such as noise, variability, and an inability to flexibly align prior knowledge with task adaptation. To address these limitations, we propose an orchestration learning framework named PerceptGuide for general-purpose ultrasound classification and segmentation. Our framework incorporates a novel orchestration mechanism based on prompted hyper-perception, which adapts to the diverse inductive biases required by different ultrasound datasets. Unlike self-supervised pre-trained models, which require extensive fine-tuning, our approach leverages supervised pre-training to directly capture task-relevant features, providing a stronger foundation for multi-task and multi-organ ultrasound imaging. To support this research, we compiled a large-scale multi-task, multi-organ public ultrasound dataset (M²-US), featuring images from 9 organs and 16 datasets and encompassing both classification and segmentation tasks. Our approach employs four specific prompts (Object, Task, Input, and Position) to guide the model, ensuring task-specific adaptability. Additionally, a downstream synchronization training stage is introduced to fine-tune the model on new data, significantly improving generalization and enabling real-world applications. Experimental results demonstrate the robustness and versatility of our framework in handling multi-task and multi-organ ultrasound image processing, outperforming both specialist models and existing general AI solutions. Compared to specialist models, our method improves segmentation from 82.26% to 86.45% and classification from 71.30% to 79.08%, while also significantly reducing the number of model parameters.
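
As an illustration of how discrete prompts can steer a shared backbone, the sketch below conditions convolutional features on four prompt embeddings (Object, Task, Input, Position) via FiLM-style scale-and-shift modulation. This is a generic, hypothetical stand-in for the paper's prompted hyper-perception mechanism, not the PerceptGuide implementation.

```python
# Generic prompt-conditioned feature modulation, loosely in the spirit of
# the four prompts described above. Architecture details are assumptions.
import torch
import torch.nn as nn

class PromptModulatedBlock(nn.Module):
    def __init__(self, channels, n_objects=9, n_tasks=2, n_inputs=16, n_positions=8):
        super().__init__()
        dim = 32
        self.object_emb = nn.Embedding(n_objects, dim)
        self.task_emb = nn.Embedding(n_tasks, dim)
        self.input_emb = nn.Embedding(n_inputs, dim)
        self.pos_emb = nn.Embedding(n_positions, dim)
        # Map the summed prompt embedding to per-channel scale and shift.
        self.to_film = nn.Linear(dim, 2 * channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feats, obj, task, inp, pos):
        p = (self.object_emb(obj) + self.task_emb(task)
             + self.input_emb(inp) + self.pos_emb(pos))
        scale, shift = self.to_film(p).chunk(2, dim=-1)
        feats = self.conv(feats)
        # FiLM modulation: condition shared features on the prompt.
        return feats * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

block = PromptModulatedBlock(channels=64)
x = torch.randn(2, 64, 32, 32)
ids = [torch.tensor([0, 3]), torch.tensor([1, 0]),
       torch.tensor([2, 5]), torch.tensor([0, 1])]
out = block(x, *ids)   # -> torch.Size([2, 64, 32, 32])
```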

Improving Breast Cancer Diagnosis in Ultrasound Images Using Deep Learning with Feature Fusion and Attention Mechanism.

Asif S, Yan Y, Feng B, Wang M, Zheng Y, Jiang T, Fu R, Yao J, Lv L, Song M, Sui L, Yin Z, Wang VY, Xu D

PubMed · May 27, 2025
Early detection of malignant lesions in ultrasound images is crucial for effective cancer diagnosis and treatment. While traditional methods rely on radiologists, deep learning models can improve accuracy, reduce errors, and enhance efficiency. This study explores the application of a deep learning model for classifying benign and malignant lesions, focusing on its performance and interpretability. We propose a feature fusion-based deep learning model for classifying benign and malignant lesions in ultrasound images. The model leverages advanced architectures such as MobileNetV2 and DenseNet121, enhanced with feature fusion and attention mechanisms to boost classification accuracy. The clinical dataset comprises 2171 images collected from 1758 patients between December 2020 and May 2024. Additionally, we utilized the publicly available BUSI dataset, consisting of 780 images from female patients aged 25 to 75, collected in 2018. To enhance interpretability, we applied Grad-CAM, saliency maps, and Shapley additive explanations (SHAP) to explain the model's decision-making. A comparative analysis with radiologists of varying expertise levels was also conducted. The proposed model exhibited the highest performance, achieving an area under the curve (AUC) of 0.9320 on our private dataset and 0.9834 on the public dataset, significantly outperforming traditional deep convolutional neural network models. It also exceeded the diagnostic performance of radiologists, showcasing its potential as a reliable tool for medical image classification. The model's success can be attributed to its incorporation of advanced architectures, feature fusion, and attention mechanisms. Its decision-making process was further clarified using interpretability techniques such as Grad-CAM, saliency maps, and SHAP, offering insights into its ability to focus on relevant image features for accurate classification. The proposed deep learning model offers superior accuracy in classifying benign and malignant lesions in ultrasound images, outperforming traditional models and radiologists. Its strong performance, coupled with interpretability techniques, demonstrates its potential as a reliable and efficient tool for medical diagnostics. The datasets generated and analyzed during the current study are not publicly available due to the nature of this research and its participants, but may be available from the corresponding author on reasonable request.
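
A minimal PyTorch sketch of the dual-backbone idea follows: MobileNetV2 and DenseNet121 features are globally pooled, concatenated, and reweighted by a simple channel-attention gate before classification. The backbone channel sizes match the torchvision models, but the fusion and attention details are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of dual-backbone feature fusion with a channel-attention
# gate, in the spirit of the MobileNetV2 + DenseNet121 design above.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, densenet121

class FusionClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.mnet = mobilenet_v2(weights=None).features    # -> 1280 channels
        self.dnet = densenet121(weights=None).features     # -> 1024 channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        fused = 1280 + 1024
        # Lightweight attention gate over the fused feature descriptor.
        self.attn = nn.Sequential(
            nn.Linear(fused, fused // 16), nn.ReLU(inplace=True),
            nn.Linear(fused // 16, fused), nn.Sigmoid())
        self.head = nn.Linear(fused, n_classes)

    def forward(self, x):
        f1 = self.pool(self.mnet(x)).flatten(1)
        f2 = self.pool(self.dnet(x)).flatten(1)
        f = torch.cat([f1, f2], dim=1)
        return self.head(f * self.attn(f))   # attention-reweighted fusion

model = FusionClassifier()
logits = model(torch.randn(1, 3, 224, 224))  # -> torch.Size([1, 2])
```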

Prostate Cancer Screening with Artificial Intelligence-Enhanced Micro-Ultrasound: A Comparative Study with Traditional Methods

Muhammad Imran, Wayne G. Brisbane, Li-Ming Su, Jason P. Joseph, Wei Shao

arXiv preprint · May 27, 2025
Background and objective: Micro-ultrasound (micro-US) is a novel imaging modality with diagnostic accuracy comparable to MRI for detecting clinically significant prostate cancer (csPCa). We investigated whether artificial intelligence (AI) interpretation of micro-US can outperform clinical screening methods using PSA and digital rectal examination (DRE). Methods: We retrospectively studied 145 men who underwent micro-US guided biopsy (79 with csPCa, 66 without). A self-supervised convolutional autoencoder was used to extract deep image features from 2D micro-US slices. Random forest classifiers were trained using five-fold cross-validation to predict csPCa at the slice level. Patients were classified as csPCa-positive if 88 or more consecutive slices were predicted positive. Model performance was compared with a classifier using PSA, DRE, prostate volume, and age. Key findings and limitations: The AI-based micro-US model and clinical screening model achieved AUROCs of 0.871 and 0.753, respectively. At a fixed threshold, the micro-US model achieved 92.5% sensitivity and 68.1% specificity, while the clinical model showed 96.2% sensitivity but only 27.3% specificity. Limitations include a retrospective single-center design and lack of external validation. Conclusions and clinical implications: AI-interpreted micro-US improves specificity while maintaining high sensitivity for csPCa detection. This method may reduce unnecessary biopsies and serve as a low-cost alternative to PSA-based screening. Patient summary: We developed an AI system to analyze prostate micro-ultrasound images. It outperformed PSA and DRE in detecting aggressive cancer and may help avoid unnecessary biopsies.
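
The patient-level rule lends itself to a short sketch: given slice-level probabilities from the random forest, a patient is called csPCa-positive when at least 88 consecutive slices are predicted positive. The probability threshold and toy inputs below are illustrative.

```python
# Minimal sketch of the patient-level decision rule described above:
# positive if >= k consecutive slices are predicted positive (k = 88).
import numpy as np

def patient_positive(slice_probs, threshold=0.5, k=88):
    """True if at least k consecutive slices exceed the threshold."""
    positive = np.asarray(slice_probs) >= threshold
    run = best = 0
    for p in positive:
        run = run + 1 if p else 0
        best = max(best, run)
    return best >= k

probs = np.concatenate([np.full(90, 0.8), np.full(60, 0.2)])  # toy slice probs
print(patient_positive(probs))   # True: 90 consecutive positives >= 88
```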

ScanAhead: Simplifying standard plane acquisition of fetal head ultrasound.

Men Q, Zhao H, Drukker L, Papageorghiou AT, Noble JA

PubMed · May 26, 2025
The fetal standard plane acquisition task aims to detect an ultrasound (US) image characterized by specified anatomical landmarks and appearance for assessing fetal growth. In practice, however, variability in operator skill and possible fetal motion can make it challenging for a human operator to acquire a satisfactory standard plane. To support the operator in this task, this paper first describes an approach to automatically predict the fetal head standard plane from a video segment approaching the standard plane. A transformer-based image predictor is proposed to produce a high-quality standard plane by understanding diverse scales of head anatomy within the US video frame. Because of the visual gap between the video frames and the standard plane image, the predictor is equipped with an offset adaptor that performs domain adaptation to translate off-plane structures into the anatomies that would usually appear in a standard plane view. To enhance the anatomical details of the predicted US image, the approach is extended with a second modality, US probe movement, which provides 3D location information. Quantitative and qualitative studies on two different head biometry planes demonstrate that the proposed US image predictor produces clinically plausible standard planes with performance superior to comparative published methods. The dual-modality solution shows improved visualization with enhanced anatomical details in the predicted US image. Clinical evaluations were also conducted to demonstrate the consistency between the predicted echo textures and the echo patterns expected in a typical real standard plane, indicating the method's clinical feasibility for improving the standard plane acquisition process.

Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study.

Chu X, Wang T, Chen M, Li J, Wang L, Wang C, Wang H, Wong ST, Chen Y, Li H

PubMed · May 26, 2025
Automatic screening of thyroid nodules using computer-aided diagnosis holds great promise for reducing missed and misdiagnosed cases in clinical practice. However, most current research focuses on single-modality images and does not fully leverage the comprehensive information in multimodal medical images, limiting model performance. To enhance screening accuracy, this study uses a deep learning framework that integrates high-dimensional convolutions of B-mode ultrasound (BMUS) and strain elastography (SE) images to predict the malignancy of TI-RADS 4 thyroid nodules with high-risk features. First, we extract nodule regions from the images and expand the boundary areas. Then, adaptive particle swarm optimization (APSO) and contrast limited adaptive histogram equalization (CLAHE) algorithms are applied to enhance ultrasound image contrast. Finally, deep learning techniques are used to extract and fuse high-dimensional features from both ultrasound modalities to classify benign and malignant thyroid nodules. The proposed model achieved an AUC of 0.937 (95% CI 0.917-0.949) in the test set and 0.927 (95% CI 0.907-0.948) in the external validation set, demonstrating strong generalization. The model significantly outperformed three groups of radiologists, and with the model's assistance all three groups showed improved diagnostic performance. Furthermore, heatmaps generated by the model align closely with radiologists' expertise, further supporting its credibility. These results indicate that our model can assist in clinical thyroid nodule diagnosis, reducing the risk of missed diagnoses and misdiagnoses, particularly in high-risk populations, and holds significant clinical value.
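
The preprocessing steps are straightforward to sketch: crop the annotated nodule with an expanded boundary, then apply CLAHE with OpenCV. The APSO step that the study uses to tune enhancement parameters is replaced here by fixed illustrative values, and the bounding box and margin are hypothetical.

```python
# Hedged preprocessing sketch: expanded-boundary crop + CLAHE enhancement.
# Fixed clip/tile values stand in for the study's APSO-tuned parameters.
import cv2
import numpy as np

def crop_with_margin(img, box, margin=0.2):
    """box = (x, y, w, h); expand each side by `margin` of its size."""
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1 = min(x + w + dx, img.shape[1])
    y1 = min(y + h + dy, img.shape[0])
    return img[y0:y1, x0:x1]

def enhance(img_gray, clip_limit=2.0, tile=(8, 8)):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(img_gray)

bmode = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # toy B-mode
roi = crop_with_margin(bmode, box=(200, 180, 120, 90))
roi_enh = enhance(roi)   # contrast-enhanced nodule patch
```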

Can intraoperative improvement of radial endobronchial ultrasound imaging enhance the diagnostic yield in peripheral pulmonary lesions?

Nishida K, Ito T, Iwano S, Okachi S, Nakamura S, Chrétien B, Chen-Yoshikawa TF, Ishii M

PubMed · May 26, 2025
Data regarding the diagnostic efficacy of radial endobronchial ultrasound (R-EBUS) findings obtained via transbronchial needle aspiration (TBNA)/biopsy (TBB) with endobronchial ultrasonography with a guide sheath (EBUS-GS) for peripheral pulmonary lesions (PPLs) are lacking. We evaluated whether intraoperative probe repositioning improves R-EBUS imaging and affects the diagnostic yield and safety of EBUS-guided sampling for PPLs. We retrospectively studied 363 patients with PPLs who underwent TBNA/TBB (83 lesions) or TBB (280 lesions) using EBUS-GS. Based on the R-EBUS findings before and after these procedures, patients were categorized into three groups: improved R-EBUS image (n = 52), unimproved R-EBUS image (n = 69), and initially within-lesion (n = 242). The impact of improved R-EBUS findings on diagnostic yield and complications was assessed using multivariable logistic regression, adjusting for lesion size, lesion location, and the presence of a bronchus leading to the lesion on CT. A separate exploratory random-forest model with SHAP analysis was used to explore factors associated with successful repositioning in lesions not initially "within." The diagnostic yield in the improved R-EBUS group was significantly higher than that in the unimproved R-EBUS group (76.9% vs. 46.4%, p = 0.001). The regression model revealed that improvement in intraoperative R-EBUS findings was associated with a high diagnostic yield (odds ratio 3.55, 95% CI 1.57-8.06, p = 0.002). Machine learning analysis indicated that inner lesion location and radiographic visibility were the most influential predictors of successful repositioning. Complication rates were similar across all groups (total complications: 5.8% vs. 4.3% vs. 6.2%, p = 0.943). Improved R-EBUS findings during TBNA/TBB or TBB with EBUS-GS were associated with a high diagnostic yield without an increase in complications, even when the initial R-EBUS findings were inadequate. This suggests that repeated intraoperative probe repositioning can safely improve diagnostic outcomes.
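
The adjusted analysis corresponds to a standard multivariable logistic regression; a sketch with statsmodels is below. The data frame is synthetic and the column names (`rebus_improved`, `bronchus_sign`, etc.) are illustrative, so the fitted odds ratios mean nothing beyond demonstrating the mechanics.

```python
# Sketch of the adjusted analysis: logistic regression of diagnostic
# success on R-EBUS improvement, adjusting for lesion covariates.
# Synthetic data; variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 121
df = pd.DataFrame({
    "diagnosed": rng.integers(0, 2, n),
    "rebus_improved": rng.integers(0, 2, n),
    "lesion_size_mm": rng.normal(22, 8, n),
    "inner_location": rng.integers(0, 2, n),
    "bronchus_sign": rng.integers(0, 2, n),
})
model = smf.logit(
    "diagnosed ~ rebus_improved + lesion_size_mm + inner_location + bronchus_sign",
    data=df).fit(disp=0)
odds_ratios = np.exp(model.params)   # adjusted ORs, e.g. for rebus_improved
print(odds_ratios)
```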

Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence.

Mengistu AK, Assaye BT, Flatie AB, Mossie Z

PubMed · May 26, 2025
Microcephaly and macrocephaly are abnormal congenital markers associated with developmental and neurologic deficits, making early ultrasound imaging medically imperative. However, resource-limited countries such as Ethiopia face shortages of trained personnel and diagnostic machines that prevent accurate and continuous diagnosis. This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9, 2024, to November 30, 2024. Several preprocessing techniques were performed, including augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures in ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was compared with that of experts. The evaluation metrics included accuracy, precision, recall, the F1 score, and the Dice coefficient. The study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with experts, the model achieved accuracies of 92.5% and 91.2% for the biparietal diameter (BPD) and head circumference (HC) measurements, respectively. Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand datasets to improve generalizability. If adopted, these technologies can be used in prenatal care delivery. Trial registration: not applicable.
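
The final classification step reduces to comparing a measured head circumference against gestational-age norms; a minimal sketch is below. The ±2 SD cutoffs follow the common WHO-style convention, but the lookup table of means and SDs is a toy placeholder, not WHO reference data.

```python
# Hedged sketch of the classification step: label microcephaly (< -2 SD)
# or macrocephaly (> +2 SD) from an HC z-score against GA norms.
# The norms table below is a toy placeholder, not WHO reference data.
def classify_head(hc_mm, ga_weeks, norms, cutoff=2.0):
    mean, sd = norms[ga_weeks]            # reference HC for this GA
    z = (hc_mm - mean) / sd
    if z < -cutoff:
        return "microcephaly"
    if z > cutoff:
        return "macrocephaly"
    return "normal"

norms = {20: (175.0, 8.0), 24: (220.0, 9.0), 28: (262.0, 10.0)}  # toy table
print(classify_head(hc_mm=185.0, ga_weeks=24, norms=norms))      # microcephaly
```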
