
Left ventricular ejection fraction assessment: artificial intelligence compared to echocardiography expert and cardiac magnetic resonance measurements.

Mołek-Dziadosz P, Woźniak A, Furman-Niedziejko A, Pieszko K, Szachowicz-Jaworska J, Miszalski-Jamka T, Krupiński M, Dweck MR, Nessler J, Gackowski A

pubmed · Sep 1 2025
Cardiac magnetic resonance (CMR) is the gold standard for assessing left ventricular ejection fraction (LVEF), and artificial intelligence (AI)-based echocardiographic analysis is increasingly used in clinical practice. This study compared LVEF measurements from expert-read and AI-automated echocardiography (ECHO) against CMR as the reference standard. We retrospectively analyzed 118 patients who underwent both CMR and ECHO within 7 days. LVEF measured by CMR was compared with results from AI-based software that automatically analyzed all stored DICOM loops (multi-loop AI analysis). The AI analysis was then repeated using only the single best-quality loop for the 2-chamber and 4-chamber views (one-loop AI analysis). These results were further compared with standard ECHO analysis performed by two independent experts. Agreement was assessed using Pearson's correlation and Bland-Altman analysis, as well as Cohen's kappa and concordance for categorization of LVEF into subgroups (≤30%, 31-40%, 41-50%, 51-70%, and >70%). Both experts demonstrated strong inter-reader agreement (R = 0.88, κ = 0.77) and correlated well with CMR LVEF (expert 1: R = 0.86, κ = 0.74; expert 2: R = 0.85, κ = 0.68). Multi-loop AI analysis correlated strongly with CMR (R = 0.87, κ = 0.68) and with the experts (R = 0.88-0.90). One-loop AI analysis showed numerically higher concordance with CMR LVEF (R = 0.89, κ = 0.75) than multi-loop AI analysis and the experts. AI-based analysis yielded LVEF assessments similar to those of human experts when compared against CMR. AI-based ECHO analysis is promising, but its results should be interpreted with caution.
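
For readers who want to reproduce this style of agreement analysis, here is a minimal sketch of the reported statistics (Pearson's R, Bland-Altman bias and limits of agreement, and Cohen's kappa over the paper's LVEF subgroups), run on hypothetical paired LVEF arrays; it is not the authors' code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

BINS = [30, 40, 50, 70]  # edges of the subgroups <=30, 31-40, 41-50, 51-70, >70

def lvef_category(lvef):
    """Map an LVEF percentage to its subgroup index (0..4)."""
    return int(np.digitize(lvef, BINS, right=True))

def agreement(lvef_a, lvef_b):
    """Pearson R, Bland-Altman bias / 95% limits, and categorical kappa."""
    a, b = np.asarray(lvef_a, float), np.asarray(lvef_b, float)
    r, _ = pearsonr(a, b)
    diff = a - b
    bias, half_width = diff.mean(), 1.96 * diff.std(ddof=1)
    kappa = cohen_kappa_score([lvef_category(x) for x in a],
                              [lvef_category(x) for x in b])
    return {"R": r, "bias": bias,
            "loa": (bias - half_width, bias + half_width), "kappa": kappa}

# Hypothetical paired AI vs. CMR LVEF values (%), for illustration only
print(agreement([55, 32, 48, 61, 25, 72], [58, 30, 45, 64, 28, 74]))
```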

Comparison of the diagnostic performance of the artificial intelligence-based TIRADS algorithm with established classification systems for thyroid nodules.

Bozkuş A, Başar Y, Güven K

pubmed · Sep 1 2025
This study aimed to evaluate and compare the diagnostic performance of various Thyroid Imaging Reporting and Data Systems (TIRADS), with a particular focus on the artificial intelligence-based TIRADS (AI-TIRADS), in characterizing thyroid nodules. In this retrospective study conducted between April 2016 and May 2022, 1,322 thyroid nodules from 1,139 patients with confirmed cytopathological diagnoses were included. Each nodule was assessed using TIRADS classifications defined by the American College of Radiology (ACR-TIRADS), the American Thyroid Association (ATA-TIRADS), the European Thyroid Association (EU-TIRADS), the Korean Thyroid Association (K-TIRADS), and the AI-TIRADS. Three radiologists independently evaluated the ultrasound (US) characteristics of the nodules using all classification systems. Diagnostic performance was assessed using sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), and comparisons were made using the McNemar test. Among the nodules, 846 (64%) were benign, 299 (22.6%) were of intermediate risk, and 147 (11.1%) were malignant. The AI-TIRADS demonstrated a PPV of 21.2% and a specificity of 53.6%, outperforming the other systems in specificity without compromising sensitivity. The specificities of the ACR-TIRADS, the ATA-TIRADS, the EU-TIRADS, and the K-TIRADS were 44.6%, 39.3%, 40.1%, and 40.1%, respectively (all pairwise comparisons with the AI-TIRADS: P < 0.001). The PPVs for the ACR-TIRADS, the ATA-TIRADS, the EU-TIRADS, and the K-TIRADS were 18.5%, 17.9%, 17.9%, and 17.4%, respectively (all pairwise comparisons with the AI-TIRADS, excluding the ACR-TIRADS: P < 0.05). The AI-TIRADS shows promise in improving diagnostic specificity and reducing unnecessary biopsies in thyroid nodule assessment while maintaining high sensitivity. The findings suggest that the AI-TIRADS may enhance risk stratification, leading to better patient management. Additionally, the study found that the presence of multiple suspicious US features markedly increases the risk of malignancy, whereas isolated features do not substantially elevate the risk. The AI-TIRADS can enhance thyroid nodule risk stratification by improving diagnostic specificity and reducing unnecessary biopsies, potentially leading to more efficient patient management and better utilization of healthcare resources.
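
A minimal sketch of the diagnostic metrics and the McNemar comparison used above, with hypothetical binary labels (not the study's data); the continuity-corrected chi-square form is one common McNemar variant and an assumption here:

```python
import numpy as np
from scipy.stats import chi2

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV from binary labels (1 = malignant)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

def mcnemar_test(y_true, pred_a, pred_b):
    """McNemar chi-square (continuity-corrected) on the discordant pairs."""
    correct_a = np.asarray(pred_a) == np.asarray(y_true)
    correct_b = np.asarray(pred_b) == np.asarray(y_true)
    b = np.sum(correct_a & ~correct_b)   # system A right, system B wrong
    c = np.sum(~correct_a & correct_b)   # system A wrong, system B right
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)    # statistic and p-value
```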

Acoustic Interference Suppression in Ultrasound Images for Real-Time HIFU Monitoring Using an Image-Based Latent Diffusion Model

Dejia Cai, Yao Ran, Kun Yang, Xinwang Shi, Yingying Zhou, Kexian Wu, Yang Xu, Yi Hu, Xiaowei Zhou

arxiv preprint · Sep 1 2025
High-Intensity Focused Ultrasound (HIFU) is a non-invasive therapeutic technique widely used for treating various diseases. However, the success and safety of HIFU treatments depend on real-time monitoring, which is often hindered by HIFU-induced interference in the guiding ultrasound images. To address this challenge, we developed HIFU-ILDiff, a novel deep learning-based approach that leverages latent diffusion models to suppress HIFU-induced interference in ultrasound images. The HIFU-ILDiff model employs a Vector Quantized Variational Autoencoder (VQ-VAE) to encode noisy ultrasound images into a lower-dimensional latent space, followed by a latent diffusion model that iteratively removes interference. The denoised latent vectors are then decoded to reconstruct high-resolution, interference-free ultrasound images. We constructed a comprehensive dataset comprising 18,872 image pairs from in vitro phantoms, ex vivo tissues, and in vivo animal data across multiple imaging modalities and HIFU power levels to train and evaluate the model. Experimental results demonstrate that HIFU-ILDiff significantly outperforms the commonly used notch filter method, achieving a Structural Similarity Index (SSIM) of 0.796 and a Peak Signal-to-Noise Ratio (PSNR) of 23.780, compared with an SSIM of 0.443 and a PSNR of 14.420 for the notch filter under in vitro scenarios. Additionally, HIFU-ILDiff achieves real-time processing at 15 frames per second, markedly faster than the notch filter's 5 seconds per frame. These findings indicate that HIFU-ILDiff can denoise HIFU interference in guiding ultrasound images in real time during HIFU therapy, which could greatly improve treatment precision in current clinical applications.
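
The SSIM and PSNR figures above can be computed with scikit-image; the following sketch uses synthetic frames as a stand-in for real clean/denoised pairs:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_denoising(clean, denoised):
    """SSIM and PSNR between an interference-free reference and model output."""
    ssim = structural_similarity(clean, denoised, data_range=1.0)
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    return ssim, psnr

# Hypothetical normalized [0, 1] grayscale frames, for illustration only
rng = np.random.default_rng(0)
clean = rng.random((256, 256))
noisy = np.clip(clean + 0.1 * rng.standard_normal((256, 256)), 0, 1)
print(evaluate_denoising(clean, noisy))
```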

Enhancing diagnostic precision for thyroid C-TIRADS category 4 nodules: a hybrid deep learning and machine learning model integrating grayscale and elastographic ultrasound features.

Zou D, Lyu F, Pan Y, Fan X, Du J, Mai X

pubmed · Sep 1 2025
Accurate and timely diagnosis of thyroid cancer is critical for clinical care, and artificial intelligence can enhance this process. This study aims to develop and validate an intelligent assessment model called C-TNet, based on the Chinese Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules (C-TIRADS) and real-time elasticity imaging, to differentiate benign from malignant thyroid nodules classified as C-TIRADS category 4. We evaluated the performance of C-TNet against ultrasonographers and BMNet, a model trained exclusively on histopathological findings indicating benign or malignant nature. The study included 3,545 patients with pathologically confirmed C-TIRADS category 4 thyroid nodules from two tertiary hospitals in China: the Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine (n=3,463 patients) and Jiangyin People's Hospital (n=82 patients). The first cohort was randomly divided into training and validation sets (7:3 ratio), while the Jiangyin People's Hospital cohort served as the external validation set. The C-TNet model was developed by extracting image features from the training set and integrating them with six commonly used classifier algorithms: logistic regression (LR), linear discriminant analysis (LDA), random forest (RF), kernel support vector machine (K-SVM), adaptive boosting (AdaBoost), and naive Bayes (NB). Its performance was evaluated on both internal and external validation sets, with statistical differences analyzed using the chi-squared test. The C-TNet model effectively integrates feature extraction from deep neural networks with an RF classifier, utilizing grayscale and elastography ultrasound data. It successfully differentiates benign from malignant thyroid nodules, achieving an area under the curve (AUC) of 0.873, comparable to the performance of senior physicians (AUC: 0.868). The model demonstrates generalizability across diverse clinical settings, positioning itself as a transformative decision-support tool for enhancing the risk stratification of thyroid nodules.
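
A hedged sketch of the hybrid pattern described (deep features feeding a random forest); the ResNet-18 backbone, input shape, and random inputs are illustrative assumptions, since the abstract does not specify the network:

```python
import torch
import torchvision
from sklearn.ensemble import RandomForestClassifier

# Frozen backbone as a feature extractor (ResNet-18 here is an assumption;
# the paper's own feature network is not named in the abstract).
backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()            # strip the classification head
backbone.eval()

@torch.no_grad()
def extract_features(images):                # images: (N, 3, 224, 224) tensor
    return backbone(images).numpy()

# Hypothetical stand-in for preprocessed grayscale + elastography views
# stacked into 3-channel inputs upstream.
X_train = extract_features(torch.randn(32, 3, 224, 224))
y_train = torch.randint(0, 2, (32,)).numpy()  # 0 = benign, 1 = malignant

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(extract_features(torch.randn(4, 3, 224, 224))))
```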

Temporal Representation Learning for Real-Time Ultrasound Analysis

Yves Stebler, Thomas M. Sutter, Ece Ozkan, Julia E. Vogt

arxiv preprint · Sep 1 2025
Ultrasound (US) imaging is a critical tool in medical diagnostics, offering real-time visualization of physiological processes. One of its major advantages is its ability to capture temporal dynamics, which is essential for assessing motion patterns in applications such as cardiac monitoring, fetal development, and vascular imaging. Despite its importance, current deep learning models often overlook the temporal continuity of ultrasound sequences, analyzing frames independently and missing key temporal dependencies. To address this gap, we propose a method for learning effective temporal representations from ultrasound videos, with a focus on echocardiography-based ejection fraction (EF) estimation. EF prediction serves as an ideal case study to demonstrate the necessity of temporal learning, as it requires capturing the rhythmic contraction and relaxation of the heart. Our approach leverages temporally consistent masking and contrastive learning to enforce temporal coherence across video frames, enhancing the model's ability to represent motion patterns. Evaluated on the EchoNet-Dynamic dataset, our method achieves a substantial improvement in EF prediction accuracy, highlighting the importance of temporally aware representation learning for real-time ultrasound analysis.
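
A minimal sketch of the two ingredients named above, temporally consistent masking and a contrastive (InfoNCE-style) objective; the patch size, mask ratio, and temperature are assumptions, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def temporal_mask(video, patch=16, drop=0.5):
    """Mask the same spatial patches across all frames so occlusion is
    temporally consistent (video: (T, C, H, W), H and W divisible by patch)."""
    T, C, H, W = video.shape
    keep = (torch.rand(H // patch, W // patch) > drop).float()
    mask = keep.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
    return video * mask              # (H, W) mask broadcasts over T and C

def info_nce(z_a, z_b, tau=0.1):
    """Contrastive loss pulling embeddings of two views of the same clip
    together while pushing apart embeddings of different clips in the batch."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau
    targets = torch.arange(z_a.size(0))  # the diagonal holds positive pairs
    return F.cross_entropy(logits, targets)

clip = torch.rand(32, 1, 112, 112)       # hypothetical T=32 frame clip
masked = temporal_mask(clip)
print(masked.shape, info_nce(torch.randn(8, 128), torch.randn(8, 128)))
```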

Combining curriculum learning and weakly supervised attention for enhanced thyroid nodule assessment in ultrasound imaging.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

pubmed · Sep 1 2025
The accurate assessment of thyroid nodules, which are increasingly common with age and lifestyle factors, is essential for early malignancy detection. Ultrasound imaging, the primary diagnostic tool for this purpose, holds promise when paired with deep learning. However, challenges persist with small datasets, where conventional data augmentation can introduce noise and obscure essential diagnostic features. To address dataset imbalance and enhance model generalization, this study integrates curriculum learning with a weakly supervised attention network that guides data augmentation, improving diagnostic accuracy for thyroid nodule classification. Using verified datasets from Siriraj Hospital, the model was trained progressively, beginning with simpler images and gradually incorporating more complex cases. This structured learning approach is designed to enhance diagnostic accuracy by refining the model's ability to distinguish benign from malignant nodules. Among the curriculum learning schemes tested, scheme IV achieved the best results, with a precision of 100% for benign and 70% for malignant nodules, a recall of 82% for benign and 100% for malignant, and F1-scores of 90% and 83%, respectively. This structured approach improved the model's diagnostic sensitivity and robustness. These findings suggest that automated thyroid nodule assessment, supported by curriculum learning, has the potential to complement radiologists in clinical practice, enhancing diagnostic accuracy and aiding in more reliable malignancy detection.
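
A minimal sketch of the curriculum idea described, easy-to-hard staging driven by a per-image difficulty score; the staging scheme and the difficulty signal are illustrative assumptions:

```python
import numpy as np

def curriculum_stages(images, labels, difficulty, n_stages=4):
    """Yield progressively larger training pools, ordered easy -> hard
    (difficulty: lower = easier, e.g., an image-quality or consensus score)."""
    order = np.argsort(difficulty)
    for stage in range(1, n_stages + 1):
        # Admit the easiest stage/n_stages fraction of the data at this stage.
        pool = order[: int(len(order) * stage / n_stages)].copy()
        np.random.shuffle(pool)
        yield images[pool], labels[pool]

# Hypothetical data: 100 feature vectors with synthetic difficulty scores
X = np.random.rand(100, 32)
y = np.random.randint(0, 2, 100)
scores = np.random.rand(100)
for stage_X, stage_y in curriculum_stages(X, y, scores):
    print(stage_X.shape)  # pools grow: (25, 32), (50, 32), (75, 32), (100, 32)
```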

Artificial intelligence-enhanced ultrasound imaging for thyroid nodule detection and malignancy classification: a study on YOLOv11.

Yang J, Luo Z, Wen Y, Zhang J

pubmed · Sep 1 2025
Thyroid nodules are a common clinical concern, with accurate diagnosis being critical for effective treatment and improved patient outcomes. Traditional ultrasound examinations rely heavily on the physician's experience, which can lead to diagnostic variability. The integration of artificial intelligence (AI) into medical imaging offers a promising solution for enhancing diagnostic accuracy and efficiency. This study aimed to evaluate the effectiveness of the You Only Look Once v. 11 (YOLOv11) model in detecting and classifying thyroid nodules in ultrasound images, with the goal of supporting real-time clinical decision-making and improving diagnostic workflows. We used the YOLOv11 model to analyze a dataset of 1,503 thyroid ultrasound images, divided into training (1,203 images), validation (150 images), and test (150 images) sets, comprising 742 benign and 778 malignant nodules. Advanced data augmentation and transfer learning techniques were applied to optimize model performance. A comparative analysis was conducted with other YOLO variants (YOLOv3 to YOLOv10) and residual network 50 (ResNet50) to assess their diagnostic capabilities. The YOLOv11 model exhibited superior performance in thyroid nodule detection compared with the other YOLO variants and ResNet50. At an intersection over union (IoU) threshold of 0.5, YOLOv11 achieved a precision (P) of 0.841 and a recall (R) of 0.823, outperforming ResNet50's P of 0.8333 and R of 0.8025. Among the YOLO variants, YOLOv11 consistently achieved the highest P and R values. For benign nodules, YOLOv11 obtained a P of 0.835 and an R of 0.833, while for malignant nodules it reached a P of 0.846 and an R of 0.813. Within the YOLOv11 model itself, performance varied across IoU thresholds (0.25, 0.5, 0.7, and 0.9): lower thresholds generally yielded better metrics, with P and R decreasing as the threshold increased. YOLOv11 proved to be a powerful tool for thyroid nodule detection and malignancy classification, offering high precision and real-time performance. These attributes are vital for dynamic ultrasound examinations and enhancing diagnostic efficiency. Future research will focus on expanding datasets and validating the model's clinical utility in real-time settings.
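
For orientation, a hedged usage sketch with the ultralytics package, which distributes YOLOv11; the dataset YAML, image path, and thresholds below are hypothetical and not from the study:

```python
from ultralytics import YOLO

# Pretrained YOLOv11 nano weights as a starting point for transfer learning
model = YOLO("yolo11n.pt")

# Hypothetical dataset config mapping images to benign/malignant box labels
model.train(data="thyroid_nodules.yaml", epochs=100, imgsz=640)

# Inference on a hypothetical scan; conf/iou thresholds are illustrative
results = model.predict("nodule_scan.png", conf=0.25, iou=0.5)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class, confidence, bounding box
```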

Analysis of intra- and inter-observer variability in 4D liver ultrasound landmark labeling.

Wulff D, Ernst F

pubmed · Sep 1 2025
Four-dimensional (4D) ultrasound imaging is widely used in clinics for diagnostics and therapy guidance. Accurate target tracking in 4D ultrasound is crucial for autonomous therapy guidance systems, such as radiotherapy, where precise tumor localization ensures effective treatment. Supervised deep learning approaches rely on reliable ground truth, making accurate labels essential. We investigate the reliability of expert-labeled ground truth data by evaluating intra- and inter-observer variability in landmark labeling for 4D ultrasound imaging in the liver. Eight 4D liver ultrasound sequences were labeled by eight expert observers, each labeling eight landmarks three times. Intra- and inter-observer variability was quantified, and an observer survey and motion analysis were conducted to determine factors influencing labeling accuracy, such as ultrasound artifacts and motion amplitude. The mean intra-observer variability ranged from 1.58 mm ± 0.90 mm to 2.05 mm ± 1.22 mm, depending on the observer. The inter-observer variability for the two observer groups was 2.68 mm ± 1.69 mm and 3.06 mm ± 1.74 mm. The observer survey and motion analysis revealed that ultrasound artifacts significantly affected labeling accuracy due to limited landmark visibility, whereas motion amplitude had no measurable effect. The measured mean landmark motion was 11.56 mm ± 5.86 mm. We highlight variability in expert-labeled ground truth data for 4D ultrasound imaging and identify ultrasound artifacts as a major source of labeling inaccuracies. These findings underscore the importance of addressing observer variability and artifact-related challenges to improve the reliability of ground truth data for evaluating target tracking algorithms in 4D ultrasound applications.
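
A minimal sketch of how intra-observer variability of this kind can be quantified, as the mean pairwise distance between repeated labelings; the synthetic coordinates are illustrative, not the study's data:

```python
import numpy as np

def intra_observer_variability(labels):
    """labels: (repeats, n_landmarks, 3) array of mm coordinates from one
    observer. Returns mean and std of all pairwise distances between the
    repeated labelings of each landmark."""
    r = labels.shape[0]
    dists = [np.linalg.norm(labels[i] - labels[j], axis=-1)
             for i in range(r) for j in range(i + 1, r)]
    dists = np.concatenate(dists)
    return dists.mean(), dists.std(ddof=1)

# Hypothetical: 3 repeated labelings of 8 liver landmarks in 3-D (mm)
rng = np.random.default_rng(0)
base = rng.uniform(0, 50, (8, 3))
repeats = base + rng.normal(0, 1.2, (3, 8, 3))
print(intra_observer_variability(repeats))  # e.g. ~(2.0 mm, 0.9 mm)
```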

Army Medic Performance in Trauma Sonography: The Impact of Artificial Intelligence Assistance in Focused Assessments With Sonography in Trauma-A Prospective Randomized Controlled Trial.

Hartline AD (CPT), Hartvickson S (MAJ), Perdue MJ (CPT), Sandoval C (CPT), Walker JD (LTC), Soules A (CPT), Mitchell CA (COL)

pubmed · Aug 31 2025
Noncompressible truncal hemorrhage is a leading cause of preventable death in military prehospital settings, particularly in combat environments where advanced imaging is unavailable. The Focused Assessment with Sonography in Trauma (FAST) exam is critical for diagnosing intra-abdominal bleeding. However, Army medics typically lack formal ultrasound training. This study examines whether artificial intelligence (AI) assistance can enhance medics' proficiency in performing FAST exams, thereby improving the speed and accuracy of trauma triage in austere conditions. This prospective, randomized controlled trial involved 60 Army medics who performed 3-view abdominal FAST exams, both with and without AI assistance, using the EchoNous Kosmos device. Investigators randomized participants into 2 groups and evaluated them on time to completion, adequacy of imaging, and confidence in using the device. Two trained investigators assessed adequacy, and participants reported their confidence in the device on a 5-point Likert scale. Data were then analyzed using the t-test for parametric data, the Wilcoxon rank-sum test, and Cohen's kappa for interrater reliability. The AI-assisted group completed the FAST exam in an average of 142.57 seconds compared to 143.87 seconds (P = .9) for the non-AI-assisted group, demonstrating no statistically significant difference in time. However, the AI-assisted group demonstrated significantly higher adequacy in the left upper quadrant and pelvic views (P = .008 and P = .004, respectively). Participants in the AI-assisted group reported significantly higher confidence, with a median score of 4.00 versus 2.50 (P = .006). Interrater agreement was moderate to substantial, with Cohen's kappa values indicating significant reliability. AI assistance did not significantly reduce the time required to complete a FAST exam but improved image adequacy and user confidence. These findings suggest that AI tools can enhance the quality of FAST exams conducted by minimally trained medics in combat settings. Further research is needed to explore integrating AI-assisted ultrasound training into military medic curricula to optimize trauma care in austere environments.
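
A minimal sketch of the statistical battery described (t-test for completion times, Wilcoxon rank-sum for ordinal Likert scores, Cohen's kappa for interrater agreement), run on hypothetical data with scipy and scikit-learn:

```python
import numpy as np
from scipy.stats import ttest_ind, ranksums
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical completion times in seconds for the two study arms
time_ai = rng.normal(142.6, 30, 30)
time_no_ai = rng.normal(143.9, 30, 30)
print(ttest_ind(time_ai, time_no_ai))       # parametric comparison of times

# Hypothetical 5-point Likert confidence scores: ordinal -> rank-sum test
conf_ai = rng.integers(3, 6, 30)
conf_no_ai = rng.integers(1, 5, 30)
print(ranksums(conf_ai, conf_no_ai))

# Hypothetical adequacy calls (0 = inadequate, 1 = adequate) by two raters
rater1 = rng.integers(0, 2, 60)
rater2 = (rater1 + (rng.random(60) < 0.2)).astype(int) % 2
print(cohen_kappa_score(rater1, rater2))    # interrater reliability
```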

Adaptive Contrast Adjustment Module: A Clinically-Inspired Plug-and-Play Approach for Enhanced Fetal Plane Classification

Yang Chen, Sanglin Zhao, Baoyu Chen, Mans Gustaf

arxiv preprint · Aug 31 2025
Fetal ultrasound standard plane classification is essential for reliable prenatal diagnosis but faces inherent challenges, including low tissue contrast, boundary ambiguity, and operator-dependent variations in image quality. To overcome these limitations, we propose a plug-and-play adaptive contrast adjustment module (ACAM), whose core design is inspired by the clinical practice of doctors adjusting image contrast to obtain clearer and more discriminative structural information. The module employs a shallow texture-sensitive network to predict clinically plausible contrast parameters, transforms input images into multiple contrast-enhanced views through differentiable mapping, and fuses them within downstream classifiers. Validated on a multi-center dataset of 12,400 images across six anatomical categories, the module consistently improves performance across diverse models, increasing accuracy by 2.02% for lightweight models, 1.29% for traditional models, and 1.15% for state-of-the-art models. The innovation of the module lies in its content-aware adaptation capability: it replaces random preprocessing with physics-informed transformations that align with sonographer workflows while improving robustness to imaging heterogeneity through multi-view fusion. This approach effectively bridges low-level image features with high-level semantics, establishing a new paradigm for medical image analysis under real-world variations in image quality.
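
A hedged sketch of the ACAM idea: a small network predicts per-image contrast parameters that drive a differentiable mapping into multiple views. The gamma-curve mapping and tiny architecture here are assumptions, since the abstract does not give the exact design:

```python
import torch
import torch.nn as nn

class ContrastAdjust(nn.Module):
    """A shallow texture-sensitive network predicts per-image gamma values;
    each gamma yields one contrast-enhanced view via a differentiable
    power-curve mapping (an assumed stand-in for the paper's mapping)."""
    def __init__(self, n_views=3):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_views), nn.Softplus())   # positive outputs

    def forward(self, x):                    # x: (N, 1, H, W) in [0, 1]
        gammas = self.predictor(x) + 0.5     # keep gammas in a plausible range
        views = [x.clamp_min(1e-6) ** g.view(-1, 1, 1, 1)
                 for g in gammas.unbind(dim=1)]
        return torch.cat(views, dim=1)       # (N, n_views, H, W) for fusion

x = torch.rand(2, 1, 128, 128)               # hypothetical grayscale batch
print(ContrastAdjust()(x).shape)             # torch.Size([2, 3, 128, 128])
```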
