
Breast tumor diagnosis via multimodal deep learning using ultrasound B-mode and Nakagami images.

Muhtadi S, Gallippi CM

PubMed · Nov 1 2025
We propose and evaluate multimodal deep learning (DL) approaches that combine ultrasound (US) B-mode and Nakagami parametric images for breast tumor classification. It is hypothesized that integrating tissue brightness information from B-mode images with scattering properties from Nakagami images will enhance diagnostic performance compared with single-input approaches. An EfficientNetV2B0 network was used to develop multimodal DL frameworks that took as input (i) numerical two-dimensional (2D) maps or (ii) rendered red-green-blue (RGB) representations of both B-mode and Nakagami data. The diagnostic performance of these frameworks was compared with single-input counterparts using 831 US acquisitions from 264 patients. In addition, gradient-weighted class activation mapping was applied to evaluate diagnostically relevant information utilized by the different networks. The multimodal architectures demonstrated significantly higher area under the receiver operating characteristic curve (AUC) values (p < 0.05) than their monomodal counterparts, achieving an average improvement of 10.75%. In addition, the multimodal networks incorporated, on average, 15.70% more diagnostically relevant tissue information. Among the multimodal models, those using RGB representations as input outperformed those that utilized 2D numerical data maps (p < 0.05). The top-performing multimodal architecture achieved a mean AUC of 0.896 [95% confidence interval (CI): 0.813 to 0.959] when performance was assessed at the image level and 0.848 (95% CI: 0.755 to 0.903) when assessed at the lesion level. Incorporating B-mode and Nakagami information together in a multimodal DL framework improved classification outcomes and increased the amount of diagnostically relevant information accessed by networks, highlighting the potential for automating and standardizing US breast cancer diagnostics to enhance clinical outcomes.
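
A multimodal classifier of this kind can be sketched as two weight-independent backbones, one for the B-mode image and one for the Nakagami map, whose pooled features are concatenated before a small classification head. The snippet below is a minimal, hypothetical PyTorch sketch using timm's tf_efficientnetv2_b0 as the backbone; the fusion point, head size, and input handling are assumptions for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import timm


class BModeNakagamiFusion(nn.Module):
    """Illustrative two-branch fusion classifier (not the authors' exact model)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # num_classes=0 makes timm return globally pooled feature vectors.
        self.bmode_branch = timm.create_model("tf_efficientnetv2_b0", pretrained=False, num_classes=0)
        self.nakagami_branch = timm.create_model("tf_efficientnetv2_b0", pretrained=False, num_classes=0)
        fused_dim = self.bmode_branch.num_features + self.nakagami_branch.num_features
        self.head = nn.Sequential(nn.Dropout(0.3), nn.Linear(fused_dim, num_classes))

    def forward(self, bmode: torch.Tensor, nakagami: torch.Tensor) -> torch.Tensor:
        # Late fusion: concatenate pooled features from both modalities.
        feats = torch.cat([self.bmode_branch(bmode), self.nakagami_branch(nakagami)], dim=1)
        return self.head(feats)


if __name__ == "__main__":
    model = BModeNakagamiFusion()
    bmode = torch.randn(2, 3, 224, 224)      # RGB-rendered B-mode images (dummy data)
    nakagami = torch.randn(2, 3, 224, 224)   # RGB-rendered Nakagami maps (dummy data)
    print(model(bmode, nakagami).shape)      # torch.Size([2, 2])
```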

EfficientNet-Based Attention Residual U-Net With Guided Loss for Breast Tumor Segmentation in Ultrasound Images.

Jasrotia H, Singh C, Kaur S

PubMed · Jul 1 2025
Breast cancer poses a major health concern for women globally. Effective segmentation of breast tumors in ultrasound images is crucial for early diagnosis and treatment. Conventional convolutional neural networks have shown promising results in this domain but face challenges due to image complexity and are computationally expensive, limiting their practical application in real-time clinical settings. We propose Eff-AResUNet-GL, a segmentation model using EfficientNet-B3 as the encoder. This design integrates attention gates in the skip connections to focus on significant features and residual blocks in the decoder to retain details and reduce gradient loss. Additionally, guided loss functions are applied at each decoder layer to generate better features, subsequently improving segmentation accuracy. Experimental results on BUSIS and Dataset B demonstrate that Eff-AResUNet-GL achieves superior performance and computational efficiency compared to state-of-the-art models, showing robustness in handling complex breast ultrasound images. Eff-AResUNet-GL offers a practical, high-performing solution for breast tumor segmentation, demonstrating potential clinical value through improved segmentation accuracy and efficiency.
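
Attention gates of the kind used in attention U-Nets weight encoder skip features with a gating signal from the decoder before concatenation. The block below is a generic PyTorch sketch of such an additive gate, not the Eff-AResUNet-GL code; channel sizes and the upsampling choice are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Additive attention gate for U-Net skip connections (generic sketch)."""

    def __init__(self, skip_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Bring the coarser gating signal up to the skip-feature resolution.
        gate_up = F.interpolate(gate, size=skip.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate_up))))
        return skip * attn  # suppress irrelevant regions, keep salient ones


if __name__ == "__main__":
    skip = torch.randn(1, 64, 128, 128)   # encoder features at one level
    gate = torch.randn(1, 128, 64, 64)    # decoder features one level deeper
    print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 128, 128])
```

The "guided loss at each decoder layer" described in the abstract is commonly implemented as deep supervision, i.e., auxiliary segmentation losses computed on intermediate decoder outputs and summed with the main loss, though the paper's exact formulation may differ.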

A Deep Learning Model Based on High-Frequency Ultrasound Images for Classification of Different Stages of Liver Fibrosis.

Zhang L, Tan Z, Li C, Mou L, Shi YL, Zhu XX, Luo Y

PubMed · Jul 1 2025
To develop a deep learning model based on high-frequency ultrasound images to classify different stages of liver fibrosis in chronic hepatitis B patients. This retrospective multicentre study included chronic hepatitis B patients who underwent both high-frequency and low-frequency liver ultrasound examinations between January 2014 and August 2024 at six hospitals. Paired images were used to train the HF-DL and LF-DL models independently. Three binary tasks were conducted: (1) significant fibrosis (S0-1 vs. S2-4); (2) advanced fibrosis (S0-2 vs. S3-4); (3) cirrhosis (S0-3 vs. S4). Hepatic pathological results constituted the ground truth for algorithm development and evaluation. The diagnostic value of high-frequency and low-frequency liver ultrasound images was compared across commonly used CNN architectures. The HF-DL model's performance was compared against the LF-DL model, FIB-4, APRI, and shear wave elastography (SWE; external test set). Model calibration was plotted, and clinical benefits were calculated. Subgroup analyses for patients with different characteristics (BMI, ALT, inflammation level, alcohol consumption level) were conducted. The HF-DL model demonstrated consistently superior diagnostic performance across all stages of liver fibrosis compared to the LF-DL model, FIB-4, APRI and SWE, particularly in classifying advanced fibrosis (0.93 [95% CI 0.90-0.95], 0.93 [95% CI 0.89-0.96], p < 0.01). The HF-DL model also demonstrated significantly improved performance in both target patient detection and negative population exclusion. The HF-DL model based on high-frequency ultrasound images outperforms other routinely used non-invasive modalities across different stages of liver fibrosis, particularly in advanced fibrosis, and may offer considerable clinical value.
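
For context, FIB-4 and APRI are the serum-based comparators used here; their standard formulas are FIB-4 = (age × AST) / (platelets × √ALT) and APRI = 100 × (AST / AST upper limit of normal) / platelets. The sketch below computes both with illustrative, non-patient values.

```python
import math


def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
    """FIB-4 index: (age * AST) / (platelets * sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))


def apri(ast_u_l: float, ast_uln_u_l: float, platelets_10e9_l: float) -> float:
    """APRI: 100 * (AST / AST upper limit of normal) / platelets."""
    return 100.0 * (ast_u_l / ast_uln_u_l) / platelets_10e9_l


if __name__ == "__main__":
    # Illustrative values only, not data from the study.
    print(round(fib4(age_years=52, ast_u_l=48, alt_u_l=60, platelets_10e9_l=180), 2))  # ~1.79
    print(round(apri(ast_u_l=48, ast_uln_u_l=40, platelets_10e9_l=180), 2))            # ~0.67
```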

Use of Artificial Intelligence and Machine Learning in Critical Care Ultrasound.

Peck M, Conway H

PubMed · Jul 1 2025
This article explores the transformative potential of artificial intelligence (AI) in critical care ultrasound. AI technologies, notably deep learning and convolutional neural networks, now assist in image acquisition, interpretation, and quality assessment, streamlining workflow and reducing operator variability. By automating routine tasks, AI enhances diagnostic accuracy and bridges training gaps, potentially democratizing advanced ultrasound techniques. Furthermore, AI's integration into tele-ultrasound systems shows promise in extending expert-level diagnostics to underserved areas, significantly broadening access to quality care. The article highlights the ongoing need for explainable AI systems to gain clinician trust and facilitate broader adoption.

Deep Learning Model for Real-Time Nuchal Translucency Assessment at Prenatal US.

Zhang Y, Yang X, Ji C, Hu X, Cao Y, Chen C, Sui H, Li B, Zhen C, Huang W, Deng X, Yin L, Ni D

PubMed · Jul 1 2025
Purpose To develop and evaluate an artificial intelligence-based model for real-time nuchal translucency (NT) plane identification and measurement in prenatal US assessments. Materials and Methods In this retrospective multicenter study conducted from January 2022 to October 2023, the Automated Identification and Measurement of NT (AIM-NT) model was developed and evaluated using internal and external datasets. NT plane assessment, including identification of the NT plane and measurement of NT thickness, was independently conducted by AIM-NT and experienced radiologists, with the results subsequently audited by radiology specialists and accuracy compared between groups. To assess alignment of artificial intelligence with radiologist workflow, discrepancies between the AIM-NT model and radiologists in NT plane identification time and thickness measurements were evaluated. Results The internal dataset included a total of 3959 NT images from 3153 fetuses, and the external dataset included 267 US videos from 267 fetuses. On the internal testing dataset, AIM-NT achieved an area under the receiver operating characteristic curve of 0.92 for NT plane identification. On the external testing dataset, there was no evidence of differences between AIM-NT and radiologists in NT plane identification accuracy (88.8% vs 87.6%, P = .69) or NT thickness measurements on standard and nonstandard NT planes (P = .29 and .59). AIM-NT demonstrated high consistency with radiologists in NT plane identification time, with 1-minute discrepancies observed in 77.9% of cases, and NT thickness measurements, with a mean difference of 0.03 mm and mean absolute error of 0.22 mm (95% CI: 0.19, 0.25). Conclusion AIM-NT demonstrated high accuracy in identifying the NT plane and measuring NT thickness on prenatal US images, showing minimal discrepancies with radiologist workflow. Keywords: Ultrasound, Fetus, Segmentation, Feature Detection, Diagnosis, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2025. See also commentary by Horii in this issue.
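
The agreement statistics reported here, the mean difference and mean absolute error of NT thickness with a confidence interval, can be reproduced on paired measurements in a few lines of NumPy. The snippet below uses synthetic values purely to show the computation; it is not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired NT-thickness measurements (mm): AI model vs. radiologist.
radiologist = rng.uniform(1.0, 3.5, size=200)
ai_model = radiologist + rng.normal(0.03, 0.25, size=200)

diff = ai_model - radiologist
abs_err = np.abs(diff)
print(f"mean difference: {diff.mean():.2f} mm, MAE: {abs_err.mean():.2f} mm")

# 95% CI for the MAE via a simple nonparametric bootstrap.
boot = [np.abs(diff[rng.integers(0, len(diff), len(diff))]).mean() for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MAE 95% CI: ({lo:.2f}, {hi:.2f})")
```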

Association between muscle mass assessed by an artificial intelligence-based ultrasound imaging system and quality of life in patients with cancer-related malnutrition.

de Luis D, Cebria A, Primo D, Izaola O, Godoy EJ, Gomez JJL

PubMed · Jul 1 2025
Emerging evidence suggests that diminished skeletal muscle mass is associated with lower health-related quality of life (HRQOL) in individuals with cancer. To our knowledge, there are no studies in the literature that use an ultrasound system to evaluate muscle mass and its relationship with HRQOL. The aim of our study was to evaluate the relationship between HRQOL, determined by the EuroQol-5D tool, and muscle mass, determined by an artificial intelligence-based ultrasound system at the rectus femoris (RF) level, in outpatients with cancer. Anthropometric data by bioimpedance (BIA), muscle mass by an artificial intelligence-based ultrasound system at the RF level, biochemical parameters, dynamometry, and HRQOL were measured. A total of 158 patients with cancer were included, with a mean age of 70.6 ± 9.8 years. The mean body mass index was 24.4 ± 4.1 kg/m² with a mean body weight of 63.9 ± 11.7 kg (38% females and 62% males). A total of 57 patients (36.1%) had a severe degree of malnutrition. The tumor locations were colon-rectum in 66 patients (41.7%), esophagus-stomach in 56 (35.4%), pancreas in 16 (10.1%), and other sites in 20.2%. A positive correlation was detected between cross-sectional area (CSA), muscle thickness (MT), pennation angle, BIA parameters, and muscle strength. Patients in the groups below the median for the visual scale and the EuroQol-5D index had lower CSA, MT, BIA, and muscle strength values. CSA (beta 4.25, 95% CI 2.03-6.47) remained in the multivariate model with the visual scale as the dependent variable, and muscle strength (beta 0.008, 95% CI 0.003-0.14) remained in the model with the EuroQol-5D index. Muscle strength and pennation angle by US were associated with better scores in the mobility, self-care, and daily activities dimensions. CSA, MT, and pennation angle of the RF determined by an artificial intelligence-based muscle ultrasound system in outpatients with cancer were related to HRQOL determined by EuroQol-5D.
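
The multivariable step described here, regressing the EuroQol visual scale on ultrasound-derived muscle parameters, follows the usual ordinary-least-squares pattern. The sketch below uses statsmodels on synthetic data; the variable names and effect sizes are placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 158

# Synthetic stand-ins for the study variables (illustration only).
df = pd.DataFrame({
    "csa_cm2": rng.normal(4.0, 1.0, n),       # rectus femoris cross-sectional area
    "handgrip_kg": rng.normal(22.0, 6.0, n),  # muscle strength by dynamometry
})
df["eq_vas"] = 30 + 4.0 * df["csa_cm2"] + 0.5 * df["handgrip_kg"] + rng.normal(0, 10, n)

# EQ visual scale as dependent variable, CSA and strength as predictors.
X = sm.add_constant(df[["csa_cm2", "handgrip_kg"]])
model = sm.OLS(df["eq_vas"], X).fit()
print(model.summary().tables[1])  # coefficients (betas) with 95% CIs
```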

TIER-LOC: Visual Query-based Video Clip Localization in fetal ultrasound videos with a multi-tier transformer.

Mishra D, Saha P, Zhao H, Hernandez-Cruz N, Patey O, Papageorghiou AT, Noble JA

PubMed · Jul 1 2025
In this paper, we introduce the Visual Query-based task of Video Clip Localization (VQ-VCL) for medical video understanding. Specifically, we aim to retrieve a video clip containing frames similar to a given exemplar frame from a given input video. To solve the task, we propose a novel visual query-based video clip localization model called TIER-LOC. TIER-LOC is designed to improve video clip retrieval, especially in fine-grained videos, by extracting features from different levels, i.e., coarse to fine-grained, referred to as TIERS. The aim is to utilize multi-Tier features for detecting subtle differences and adapting to scale or resolution variations, leading to improved video-clip retrieval. TIER-LOC has three main components: (1) a Multi-Tier Spatio-Temporal Transformer that fuses spatio-temporal features extracted from multiple Tiers of video frames with features from multiple Tiers of the visual query, enabling better video understanding; (2) a Multi-Tier, Dual Anchor Contrastive Loss to deal with real-world annotation noise, which can be notable at event boundaries and in videos featuring highly similar objects; (3) a Temporal Uncertainty-Aware Localization Loss designed to reduce the model's sensitivity to imprecise event boundaries. This is achieved by relaxing hard boundary constraints, allowing the model to learn underlying class patterns rather than being influenced by individual noisy samples. To demonstrate the efficacy of TIER-LOC, we evaluate it on two ultrasound video datasets and an open-source egocentric video dataset. First, we develop a sonographer workflow assistive task model to detect standard-frame clips in fetal ultrasound heart sweeps. Second, we assess our model's performance in retrieving standard-frame clips for detecting fetal anomalies in routine ultrasound scans, using the large-scale PULSE dataset. Lastly, we test our model's performance on an open-source computer vision video dataset by creating a VQ-VCL fine-grained video dataset based on the Ego4D dataset. Our model outperforms the best-performing state-of-the-art model by 7%, 4%, and 4% on the three video datasets, respectively.
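
One common way to relax hard boundary constraints, offered here only as an illustration and not as the authors' exact loss, is to soften per-frame boundary targets with a Gaussian around the annotated boundary before applying binary cross-entropy, so that frames near the boundary receive partial credit. A minimal PyTorch sketch of that idea, with a hypothetical per-frame score vector, is below.

```python
import torch
import torch.nn.functional as F


def soft_boundary_targets(num_frames: int, boundary: int, sigma: float = 2.0) -> torch.Tensor:
    """Gaussian-softened target centered on an annotated boundary frame."""
    t = torch.arange(num_frames, dtype=torch.float32)
    return torch.exp(-0.5 * ((t - boundary) / sigma) ** 2)


def boundary_loss(logits: torch.Tensor, boundary: int, sigma: float = 2.0) -> torch.Tensor:
    """BCE against soft targets: frames near the boundary are partially credited."""
    targets = soft_boundary_targets(logits.shape[0], boundary, sigma)
    return F.binary_cross_entropy_with_logits(logits, targets)


if __name__ == "__main__":
    logits = torch.randn(100)            # hypothetical per-frame boundary scores for one clip
    print(boundary_loss(logits, boundary=42).item())
```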

A multi-task neural network for full waveform ultrasonic bone imaging.

Li P, Liu T, Ma H, Li D, Liu C, Ta D

PubMed · Jul 1 2025
Ultrasonic bone imaging is a challenging task, as bone tissue has a complex structure with high acoustic impedance and speed of sound (SOS). Recently, full waveform inversion (FWI) has shown promising imaging for musculoskeletal tissues. However, FWI has shown limited ability and tends to produce artifacts in bone imaging because the inversion process is easily trapped in a local minimum for bone tissue, where there is a large discrepancy in SOS distribution between bony and soft tissues. In addition, applying FWI imposes a high computational burden and requires relatively many iterations. The objective of this study was to achieve high-resolution ultrasonic imaging of bone using a deep learning-based FWI approach. In this paper, we propose a novel network named CEDD-Unet. CEDD-Unet adopts a dual-decoder architecture, with the first decoder tasked with reconstructing the SOS model and the second decoder tasked with finding the main boundaries between bony and soft tissues. To effectively capture multi-scale spatial-temporal features from ultrasound radio frequency (RF) signals, we integrated a Convolutional LSTM (ConvLSTM) module. Additionally, an Efficient Multi-scale Attention (EMA) module was incorporated into the encoder to enhance feature representation and improve reconstruction accuracy. Using an ultrasonic imaging modality with a ring array transducer, the performance of CEDD-Unet was tested on SOS model datasets from human bones (Dataset1) and mouse bones (Dataset2) and compared with three classic reconstruction architectures (Unet, Unet++, and Att-Unet) and four state-of-the-art architectures (InversionNet, DD-Net, UPFWI, and DEFE-Unet). Experiments showed that CEDD-Unet outperforms all competing methods, achieving the lowest MAE of 23.30 on Dataset1 and 25.29 on Dataset2, the highest SSIM of 0.9702 on Dataset1 and 0.9550 on Dataset2, and the highest PSNR of 30.60 dB on Dataset1 and 32.87 dB on Dataset2. Our method demonstrated superior reconstruction quality, with clearer bone boundaries, reduced artifacts, and improved consistency with ground truth. Moreover, CEDD-Unet surpasses traditional FWI by producing sharper skeletal SOS reconstructions, reducing computational cost, and eliminating the reliance on an initial model. Ablation studies further confirm the effectiveness of each network component. The results suggest that CEDD-Unet is a promising deep learning-based FWI method for high-resolution bone imaging, with the potential to reconstruct accurate and sharp-edged skeletal SOS models.
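
The reconstruction metrics reported here (MAE, SSIM, and PSNR between predicted and ground-truth SOS maps) can be computed directly with NumPy and scikit-image. The snippet below evaluates them on synthetic arrays; the SOS value range and noise level are assumptions used only to make the example run.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)

# Synthetic speed-of-sound maps (m/s): ground truth and a noisy "reconstruction".
gt_sos = rng.uniform(1500, 3500, size=(128, 128))
pred_sos = gt_sos + rng.normal(0, 25, size=(128, 128))

data_range = gt_sos.max() - gt_sos.min()
mae = np.abs(pred_sos - gt_sos).mean()
ssim = structural_similarity(gt_sos, pred_sos, data_range=data_range)
psnr = peak_signal_noise_ratio(gt_sos, pred_sos, data_range=data_range)
print(f"MAE={mae:.2f}  SSIM={ssim:.4f}  PSNR={psnr:.2f} dB")
```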

Development and Validation of an AI Model to Improve the Diagnosis of Deep Infiltrating Endometriosis for Junior Sonologists.

Xu J, Zhang A, Zheng Z, Cao J, Zhang X

PubMed · Jul 1 2025
This study aims to develop and validate an artificial intelligence (AI) model based on ultrasound (US) videos and images to improve the performance of junior sonologists in detecting deep infiltrating endometriosis (DE). In this retrospective study, data were collected from female patients who underwent US examinations and had DE. The US image records were divided into two parts. First, during the model development phase, an AI-DE model was trained using YOLOv8 to detect pelvic DE nodules. Subsequently, its clinical applicability was evaluated by comparing the diagnostic performance of junior sonologists with and without AI-model assistance. The AI-DE model was trained on 248 images and demonstrated high performance, with a mAP50 (mean average precision at an IoU threshold of 0.5) of 0.9779 on the test set. A total of 147 images were used to evaluate diagnostic performance. The diagnostic performance of junior sonologists improved with the assistance of the AI-DE model. The area under the receiver operating characteristic curve (AUROC) improved from 0.748 (95% CI, 0.624-0.867) to 0.878 (95% CI, 0.792-0.964; p < 0.0001) for junior sonologist A, and from 0.713 (95% CI, 0.592-0.835) to 0.798 (95% CI, 0.677-0.919; p < 0.0001) for junior sonologist B. Notably, the sensitivity of both sonologists increased significantly, with the largest increase from 77.42% to 94.35%. The AI-DE model based on US images showed good performance in DE detection and significantly improved the diagnostic performance of junior sonologists.
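
Detectors of this kind are typically trained and validated with the Ultralytics YOLOv8 API, which reports mAP50 on a held-out split. The sketch below shows that generic workflow with a hypothetical dataset config (de_nodules.yaml) and image file; these names, the epoch count, and the confidence threshold are placeholders, not the authors' setup.

```python
from ultralytics import YOLO

# Hypothetical dataset config listing train/val images and a "DE nodule" class.
DATA_CFG = "de_nodules.yaml"

model = YOLO("yolov8n.pt")                 # small pretrained detector as a starting point
model.train(data=DATA_CFG, epochs=100, imgsz=640)

metrics = model.val(data=DATA_CFG)         # validation metrics on the held-out split
print(metrics.box.map50)                   # mAP at IoU threshold 0.5

# Inference on a new ultrasound frame; low-confidence boxes are dropped.
results = model.predict("pelvic_us_frame.png", conf=0.25)
print(results[0].boxes.xyxy)               # detected nodule bounding boxes
```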

Deep Learning Based on Ultrasound Images Differentiates Parotid Gland Pleomorphic Adenomas and Warthin Tumors.

Li Y, Zou M, Zhou X, Long X, Liu X, Yao Y

PubMed · Jul 1 2025
To explore the clinical significance of employing deep learning methodologies on ultrasound images for the development of an automated model that accurately identifies pleomorphic adenomas and Warthin tumors of the salivary glands. A retrospective study was conducted on 91 patients who underwent ultrasonography examinations between January 2016 and December 2023 and were subsequently diagnosed with pleomorphic adenoma or Warthin's tumor based on postoperative pathological findings. A total of 526 ultrasonography images were collected for analysis. Convolutional neural network (CNN) models, including ResNet18, MobileNetV3Small, and InceptionV3, were trained and validated on these images for the differentiation of pleomorphic adenoma and Warthin's tumor. Performance evaluation metrics such as receiver operating characteristic (ROC) curves, area under the curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value were utilized. Two ultrasound physicians, with varying levels of expertise, conducted independent evaluations of the ultrasound images. Subsequently, a comparative analysis was performed between the diagnostic outcomes of the ultrasound physicians and the results obtained from the best-performing model. Inter-rater agreement between routine ultrasonography interpretation by the two expert ultrasonographers and the automatic diagnosis of the best model, in relation to pathological results, was assessed using kappa tests. The deep learning models achieved favorable performance in differentiating pleomorphic adenoma from Warthin's tumor. The ResNet18, MobileNetV3Small, and InceptionV3 models exhibited diagnostic accuracies of 82.4% (AUC: 0.932), 87.0% (AUC: 0.946), and 77.8% (AUC: 0.811), respectively. Among these models, MobileNetV3Small demonstrated the highest performance. The experienced ultrasonographer achieved a diagnostic accuracy of 73.5%, with sensitivity, specificity, positive predictive value, and negative predictive value of 73.7%, 73.3%, 77.8%, and 68.8%, respectively. The less-experienced ultrasonographer achieved a diagnostic accuracy of 69.0%, with sensitivity, specificity, positive predictive value, and negative predictive value of 66.7%, 71.4%, 71.4%, and 66.7%, respectively. The kappa test revealed strong consistency between the best-performing deep learning model and postoperative pathological diagnoses (kappa value: .778, p-value < .001). In contrast, the less-experienced ultrasonographer demonstrated poor consistency in image interpretation (kappa value: .380, p-value < .05). The diagnostic accuracy of the best deep learning model was significantly higher than that of the ultrasonographers, and the experienced ultrasonographer exhibited higher diagnostic accuracy than the less-experienced one. This study demonstrates the promising performance of a deep learning-based method utilizing ultrasonography images for the differentiation of pleomorphic adenoma and Warthin's tumor. The approach reduces subjective errors, provides decision support for clinicians, and improves diagnostic consistency.
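
The agreement and discrimination statistics used here, Cohen's kappa against pathology and ROC AUC, are available directly in scikit-learn. The snippet below demonstrates them on synthetic labels and scores; the class coding and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic ground truth (0 = pleomorphic adenoma, 1 = Warthin's tumor) and model scores.
y_true = rng.integers(0, 2, size=120)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=120), 0, 1)
y_pred = (y_score >= 0.5).astype(int)   # binarize at an assumed 0.5 threshold

print("AUC:", round(roc_auc_score(y_true, y_score), 3))
print("Cohen's kappa vs. ground truth:", round(cohen_kappa_score(y_true, y_pred), 3))

fpr, tpr, _ = roc_curve(y_true, y_score)  # points for an ROC plot, if needed
```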