
Predicting Cardiopulmonary Exercise Testing Performance in Patients Undergoing Transthoracic Echocardiography - An AI Based, Multimodal Model

Alishetti, S., Pan, W., Beecy, A. N., Liu, Z., Gong, A., Huang, Z., Clerkin, K. J., Goldsmith, R. L., Majure, D. T., Kelsey, C., vanMaanan, D., Ruhl, J., Tesfuzigta, N., Lancet, E., Kumaraiah, D., Sayer, G., Estrin, D., Weinberger, K., Kuleshov, V., Wang, F., Uriel, N.

medRxiv preprint · Jul 6 2025
Background and Aims: Transthoracic echocardiography (TTE) is a widely available tool for diagnosing and managing heart failure but has limited predictive value for survival. Cardiopulmonary exercise test (CPET) performance strongly correlates with survival in heart failure patients but is less accessible. We sought to develop an artificial intelligence (AI) algorithm using TTE and electronic medical records to predict CPET peak oxygen consumption (peak VO2) ≤ 14 mL/kg/min. Methods: An AI model was trained to predict peak VO2 ≤ 14 mL/kg/min from TTE images, structured TTE reports, demographics, medications, labs, and vitals. The training set included patients with a TTE within 6 months of a CPET. Performance was retrospectively tested in a held-out group from the development cohort and an external validation cohort. Results: 1,127 CPET studies paired with concomitant TTE were identified. The best performance was achieved by using all components (TTE images and all structured clinical data). The model performed well at predicting a peak VO2 ≤ 14 mL/kg/min, with an AUROC of 0.84 (development cohort) and 0.80 (external validation cohort). It performed consistently well using higher (≤ 18 mL/kg/min) and lower (≤ 12 mL/kg/min) cut-offs. Conclusions: This multimodal AI model effectively categorized patients into low and high predicted peak VO2 risk groups, demonstrating the potential to identify previously unrecognized patients in need of advanced heart failure therapies where CPET is not available.
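A minimal sketch of the late-fusion idea described here: TTE-derived image embeddings are concatenated with structured clinical features before binary classification, and AUROC is computed on a held-out split. All data, feature dimensions, and the use of logistic regression are illustrative assumptions, not the paper's model.

```python
# Hypothetical late-fusion sketch for predicting peak VO2 <= 14 mL/kg/min.
# Random stand-in data; a real pipeline would use learned TTE embeddings
# and structured clinical data (demographics, medications, labs, vitals).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 1127  # matches the cohort size reported above

tte_embeddings = rng.normal(size=(n_patients, 128))   # image-branch features
clinical_features = rng.normal(size=(n_patients, 40)) # structured EMR features
labels = rng.integers(0, 2, size=n_patients)          # 1 = peak VO2 <= 14

# Late fusion: concatenate modality features, then classify.
X = np.concatenate([tte_embeddings, clinical_features], axis=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0, stratify=labels
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
# With random stand-in data this hovers near 0.5; the paper reports 0.84.
```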

Artificial Intelligence-Assisted Standard Plane Detection in Hip Ultrasound for Developmental Dysplasia of the Hip: A Novel Real-Time Deep Learning Approach.

Darilmaz MF, Demirel M, Altun HO, Adiyaman MC, Bilgili F, Durmaz H, Sağlam Y

PubMed · Jul 6 2025
Developmental dysplasia of the hip (DDH) encompasses a range of conditions caused by inadequate hip joint development. Early diagnosis is essential to prevent long-term complications. Ultrasound, particularly the Graf method, is commonly used for DDH screening, but its interpretation is highly operator-dependent and lacks standardization, especially in identifying the correct standard plane. This variability often leads to misdiagnosis, particularly among less experienced users. This study presents AI-SPS, AI-based standard-plane detection software for real-time hip ultrasound analysis. Using 2,737 annotated frames, comprising 1,737 standard and 1,000 non-standard examples extracted from 45 clinical ultrasound videos, we trained and evaluated two object detection models: SSD-MobileNet V2 and YOLOv11n. The software was further validated on an independent set of 934 additional frames (347 standard and 587 non-standard) from the same video sources. YOLOv11n achieved an accuracy of 86.3%, precision of 0.78, recall of 0.88, and F1-score of 0.83, outperforming SSD-MobileNet V2, which reached an accuracy of 75.2%. These results indicate that AI-SPS can detect the standard plane with expert-level performance and improve consistency in DDH screening. By reducing operator variability, the software supports more reliable ultrasound assessments. Integration with live systems and Graf typing may enable a fully automated DDH diagnostic workflow. Level of Evidence: Level III, diagnostic study.
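For reference, the reported detection metrics relate to each other as in the sketch below. The confusion counts are back-solved from the published summary statistics over the 934 validation frames; they are assumptions for illustration, not data from the paper.

```python
# Illustrative computation of the YOLOv11n metrics reported above.
# Counts are assumed (back-solved), with standard plane as the positive class.
tp, fp, fn, tn = 305, 86, 42, 501  # sums to 934 validation frames

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
# -> roughly accuracy=0.863 precision=0.78 recall=0.88 f1=0.83
```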

Artificial Intelligence in Prenatal Ultrasound: A Systematic Review of Diagnostic Tools for Detecting Congenital Anomalies

Dunne, J., Kumarasamy, C., Belay, D. G., Betran, A. P., Gebremedhin, A. T., Mengistu, S., Nyadanu, S. D., Roy, A., Tessema, G., Tigest, T., Pereira, G.

medRxiv preprint · Jul 5 2025
Background: Artificial intelligence (AI) has shown promise in interpreting ultrasound imaging through flexible pattern recognition and algorithmic learning, but implementation in clinical practice remains limited. This study aimed to investigate the current application of AI in prenatal ultrasound to identify congenital anomalies, and to synthesise challenges and opportunities for the advancement of AI-assisted ultrasound diagnosis. This comprehensive analysis addresses the clinical translation gap between AI performance metrics and practical implementation in prenatal care. Methods: Systematic searches were conducted in eight electronic databases (CINAHL Plus, Ovid/EMBASE, Ovid/MEDLINE, ProQuest, PubMed, Scopus, Web of Science and Cochrane Library) and Google Scholar from inception to May 2025. Studies were included if they applied an AI-assisted ultrasound diagnostic tool to identify a congenital anomaly during pregnancy. This review adhered to PRISMA guidelines for systematic reviews. We evaluated study quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines. Findings: Of 9,918 records, 224 were identified for full-text review and 20 met the inclusion criteria. The majority of studies (11/20, 55%) were conducted in China, and most were published after 2020 (16/20, 80%). All AI models were developed as assistive tools for anomaly detection or classification. Most models (85%) focused on single-organ systems: heart (35%), brain/cranial (30%), or facial features (20%), while three studies (15%) attempted multi-organ anomaly detection. Fifty percent of the included studies reported exceptionally high model performance, with both sensitivity and specificity exceeding 0.95 and AUC-ROC values ranging from 0.91 to 0.97. Most studies (75%) lacked external validation, with internal validation often limited to small training and testing datasets. Interpretation: While AI applications in prenatal ultrasound show potential, current evidence indicates significant limitations in their practical implementation. Much work is required to optimise their application, including external validation of diagnostic models to establish clinical utility and real-world impact. Future research should prioritise larger-scale multi-centre studies, development of multi-organ anomaly detection capabilities rather than the current single-organ focus, and robust evaluation of AI tools in real-world clinical settings.

EdgeSRIE: A hybrid deep learning framework for real-time speckle reduction and image enhancement on portable ultrasound systems

Hyunwoo Cho, Jongsoo Lee, Jinbum Kang, Yangmo Yoo

arXiv preprint · Jul 5 2025
Speckle patterns in ultrasound images often obscure anatomical details, leading to diagnostic uncertainty. Recently, various deep learning (DL)-based techniques have been introduced to effectively suppress speckle; however, their high computational costs pose challenges for low-resource devices, such as portable ultrasound systems. To address this issue, we introduce EdgeSRIE, a lightweight hybrid DL framework for real-time speckle reduction and image enhancement in portable ultrasound imaging. The proposed framework consists of two main branches: an unsupervised despeckling branch, trained by minimizing a loss function between speckled images, and a deblurring branch, which restores blurred images to sharp images. For hardware implementation, the trained network is quantized to 8-bit integer precision and deployed on a low-resource system-on-chip (SoC) with limited power consumption. In performance evaluations with phantom and in vivo analyses, EdgeSRIE achieved the highest contrast-to-noise ratio (CNR) and average gradient magnitude (AGM) compared with the other baselines (two rule-based methods and four other DL-based methods). Furthermore, EdgeSRIE enabled real-time inference at over 60 frames per second while satisfying computational requirements (< 20K parameters) on actual portable ultrasound hardware. These results demonstrate the feasibility of EdgeSRIE for real-time, high-quality ultrasound imaging in resource-limited environments.
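The two image-quality metrics used in this evaluation can be sketched as follows. These are common formulations of CNR and average gradient magnitude; the paper's exact definitions and region selections may differ.

```python
# Common formulations of the two metrics reported above (assumed, not the
# paper's exact definitions).
import numpy as np

def average_gradient_magnitude(img: np.ndarray) -> float:
    """Mean gradient magnitude over the image, a proxy for edge sharpness."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def contrast_to_noise_ratio(roi: np.ndarray, background: np.ndarray) -> float:
    """CNR between a region of interest and a background region."""
    return float(abs(roi.mean() - background.mean())
                 / np.sqrt(roi.var() + background.var()))

# Toy usage on a synthetic image containing a brighter inclusion.
img = np.random.default_rng(0).normal(0.3, 0.05, size=(128, 128))
img[48:80, 48:80] += 0.4
print(average_gradient_magnitude(img))
print(contrast_to_noise_ratio(img[48:80, 48:80], img[:32, :32]))
```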

AI-enabled obstetric point-of-care ultrasound as an emerging technology in low- and middle-income countries: provider and health system perspectives.

Della Ripa S, Santos N, Walker D

PubMed · Jul 4 2025
In many low- and middle-income countries (LMICs), widespread access to obstetric ultrasound is challenged by a lack of trained providers, workload, and inadequate resources required for sustainability. Artificial intelligence (AI) is a powerful tool for automating image acquisition and interpretation and may help overcome these barriers. This study explored stakeholders' opinions about how AI-enabled point-of-care ultrasound (POCUS) might change current antenatal care (ANC) services in LMICs and identified key considerations for its introduction. We purposively sampled midwives, doctors, researchers, and implementers for this mixed-methods study, with a focus on those who live or work in African LMICs. Individuals completed an anonymous web-based survey, then participated in an interview or focus group. Among the 41 participants, we captured demographics, experience with and perceptions of standard POCUS, and reactions to a description of an AI-enabled POCUS prototype. Qualitative data were analyzed by thematic content analysis, and quantitative Likert and rank-order data were aggregated as frequencies; the latter were presented alongside illustrative quotes to highlight overall versus nuanced perceptions. The following themes emerged: (1) priority AI capabilities; (2) potential impact on ANC quality, services, and clinical outcomes; (3) health system integration considerations; and (4) research priorities. First, AI-enabled POCUS elicited concerns around algorithmic accuracy and compromised clinical acumen due to over-reliance on AI, but also interest in automated gestational age estimation. Second, there was overall agreement that both standard and AI-enabled POCUS could improve ANC attendance (75%, 65%, respectively), provider-client trust (82%, 60%), and providers' confidence in clinical decision-making (85%, 70%). AI consistently elicited more uncertainty among respondents. Third, health system considerations emerged, including task sharing with midwives, ultrasound training delivery and curricular content, and policy-related issues such as data security and liability risks. For both standard and AI-enabled POCUS, clinical decision support and referral strengthening were deemed necessary to improve outcomes. Lastly, ranked priority research areas included algorithm accuracy across diverse populations and impact on ANC performance indicators; mortality indicators were less prioritized. Optimism that AI-enabled POCUS can increase access in settings with limited personnel and resources is coupled with expressions of caution about potential risks that warrant careful consideration and exploration.

A Multimodal Ultrasound-Driven Approach for Automated Tumor Assessment with B-Mode and Multi-Frequency Harmonic Motion Images.

Hu S, Liu Y, Wang R, Li X, Konofagou EE

PubMed · Jul 4 2025
Harmonic Motion Imaging (HMI) is an ultrasound elasticity imaging method that measures the mechanical properties of tissue using amplitude-modulated acoustic radiation force (AM-ARF). Multi-frequency HMI (MF-HMI) excites tissue at various AM frequencies simultaneously, allowing for image optimization without prior knowledge of inclusion size and stiffness. However, challenges remain in size estimation, as inconsistent boundary effects result in different perceived sizes across AM frequencies. Herein, we developed an automated assessment method for tumors and focused ultrasound surgery (FUS)-induced lesions using a transformer-based multi-modality neural network, HMINet, and further automated prediction of response to neoadjuvant chemotherapy (NACT). HMINet was trained on 380 pairs of MF-HMI and B-mode images of phantoms and in vivo orthotopic breast cancer mice (4T1). Test datasets included phantoms (n = 32), in vivo 4T1 mice (n = 24), breast cancer patients (n = 20), and FUS-induced lesions (in ex vivo animal tissue and in in vivo clinical settings with real-time inference), with average segmentation accuracies (Dice) of 0.91, 0.83, 0.80, and 0.81, respectively. HMINet outperformed state-of-the-art models; we also demonstrated the enhanced robustness of the multi-modality strategy over B-mode-only input, both quantitatively through Dice scores and in terms of interpretability using saliency analysis. Ranking AM frequencies by their number of salient pixels showed that the most significant frequencies were 800 and 200 Hz across clinical cases. We developed an automated, multimodality ultrasound-based tumor and FUS lesion assessment method, which facilitates the clinical translation of stiffness-based breast cancer treatment response prediction and real-time image-guided FUS therapy.
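The Dice coefficient used to report segmentation accuracy above is the standard overlap measure sketched below; the masks here are toy examples, not study data.

```python
# Minimal sketch of the Dice coefficient for binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy masks: a predicted square offset from a ground-truth square.
gt = np.zeros((64, 64), dtype=bool); gt[16:48, 16:48] = True
pr = np.zeros((64, 64), dtype=bool); pr[20:52, 20:52] = True
print(dice_score(pr, gt))  # ~0.77 for this amount of overlap
```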

Ultrasound Imaging and Machine Learning to Detect Missing Hand Motions for Individuals Receiving Targeted Muscle Reinnervation for Nerve-Pain Prevention.

Moukarzel ARE, Fitzgerald J, Battraw M, Pereira C, Li A, Marasco P, Joiner WM, Schofield J

PubMed · Jul 4 2025
Targeted muscle reinnervation (TMR) was initially developed as a technique for bionic prosthetic control but has since become a widely adopted strategy for managing pain and preventing neuroma formation after amputation. This shift in TMR's motivation has influenced surgical approaches in ways that may challenge conventional electromyography (EMG)-based prosthetic control. The primary goal is often simply to reinnervate nerves to accessible muscles. This contrasts with earlier, more complex TMR surgeries that optimized EMG signal detection by carefully selecting target muscles near the skin's surface and manipulating residual anatomy to electrically isolate muscle activity. Consequently, modern TMR surgeries can involve less consideration of factors such as the depth of the reinnervated muscles or electrical crosstalk between closely located reinnervated muscles, all of which can impair the effectiveness of conventional prosthetic control systems. We recruited 4 participants with TMR, varying levels of upper limb loss, and diverse sets of reinnervated muscles. Participants attempted movements with their missing hands, and we used a muscle activity measurement technique that employs ultrasound imaging and machine learning (sonomyography) to classify the resulting muscle movements. We found that attempted missing-hand movements produced unique patterns of deformation in the reinnervated muscles, and by applying a K-nearest neighbors machine learning algorithm we could predict 4-10 hand movements for each participant with 83.3-99.4% accuracy. Our findings suggest that, despite the shifting motivations for performing TMR surgery, this new generation of the procedure not only offers prophylactic benefits but also retains promising opportunities for bionic prosthetic control.
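A hypothetical sketch of the classification step: per-frame descriptors of reinnervated-muscle deformation are fed to a K-nearest-neighbors classifier, as the abstract describes. The feature extraction and synthetic data below are illustrative stand-ins for the ultrasound-derived features.

```python
# Hypothetical sonomyography classification sketch with K-nearest neighbors.
# Stand-in features mimic per-frame muscle-deformation descriptors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_movements, frames_per_movement, n_features = 6, 50, 32

# One cluster of deformation descriptors per attempted hand movement.
X = np.vstack([
    rng.normal(loc=k, scale=0.8, size=(frames_per_movement, n_features))
    for k in range(n_movements)
])
y = np.repeat(np.arange(n_movements), frames_per_movement)

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```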

Comparison of neural networks for classification of urinary tract dilation from renal ultrasounds: evaluation of agreement with expert categorization.

Chung K, Wu S, Jeanne C, Tsai A

PubMed · Jul 4 2025
Urinary tract dilation (UTD) is a frequent problem in infants. Automated and objective classification of UTD from renal ultrasounds would streamline their interpretation. To develop and evaluate the performance of different deep learning models in predicting UTD classifications from renal ultrasound images, we searched our image archive to identify renal ultrasounds performed in infants ≤ 3-months-old for the clinical indications of prenatal UTD and urinary tract infection (9/2023-8/2024). An expert pediatric uroradiologist provided the ground truth UTD labels for representative sagittal sonographic renal images. Three different deep learning models trained with cross-entropy loss were evaluated in four-fold cross-validation experiments to determine overall performance. Our curated database included 492 right and 487 left renal ultrasounds (mean age ± standard deviation = 1.2 ± 0.1 months for both cohorts, with 341 boys/151 girls and 339 boys/148 girls, respectively). The model prediction accuracies for the right and left kidneys were 88.7% (95% confidence interval [CI], [85.8%, 91.5%]) and 80.5% (95% CI, [77.6%, 82.9%]), with weighted kappa scores of 0.90 (95% CI, [0.88, 0.91]) and 0.87 (95% CI, [0.82, 0.92]), respectively. When predictions were binarized into mild (normal/P1) and severe (UTD P2/P3) dilation, accuracies for the right and left kidneys increased to 96.3% (95% CI, [94.9%, 97.8%]) and 91.3% (95% CI, [88.5%, 94.2%]), but agreements decreased to 0.78 (95% CI, [0.73, 0.82]) and 0.75 (95% CI, [0.68, 0.82]), respectively. Deep learning models demonstrated high accuracy and agreement in classifying UTD from infant renal ultrasounds, supporting their potential as decision-support tools in clinical workflows.
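The agreement metric reported above, weighted Cohen's kappa between model predictions and the expert's ordinal UTD labels, can be computed as sketched below. The abstract does not state the weighting scheme; quadratic weights and the toy labels are assumptions for illustration.

```python
# Sketch of weighted Cohen's kappa for ordinal UTD grades.
# Weighting scheme (quadratic) and labels are assumed, not from the paper.
from sklearn.metrics import cohen_kappa_score

# Illustrative ordinal labels: 0 = normal, 1 = P1, 2 = P2, 3 = P3
expert = [0, 0, 1, 2, 3, 1, 0, 2, 3, 1]
model  = [0, 1, 1, 2, 3, 1, 0, 1, 3, 1]

kappa = cohen_kappa_score(expert, model, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```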

Multi-modal convolutional neural network-based thyroid cytology classification and diagnosis.

Yang D, Li T, Li L, Chen S, Li X

PubMed · Jul 4 2025
The cytologic diagnosis of benign versus malignant thyroid nodules, based on cytological smears obtained through ultrasound-guided fine-needle aspiration, is crucial for determining subsequent treatment plans. Artificial intelligence (AI) can assist pathologists in improving the efficiency and accuracy of cytological diagnosis. We propose a novel diagnostic model based on a network architecture that integrates cytologic images and digital ultrasound image features (CI-DUF) to solve the multi-class classification task of thyroid fine-needle aspiration cytology. We compare this model with one relying solely on cytologic images (CI) and evaluate its performance and clinical application potential in thyroid cytology diagnosis. A retrospective analysis was conducted on 384 patients with 825 thyroid cytologic images. These images formed the dataset for training the models and were divided into training and testing sets in an 8:2 ratio to assess the performance of both the CI and CI-DUF diagnostic models. The AUROC of the CI model for thyroid cytology diagnosis was 0.9119, while the AUROC of the CI-DUF model was 0.9326. Compared with the CI model, the CI-DUF model showed significantly higher accuracy, sensitivity, and specificity in the cytologic classification of papillary carcinoma, follicular neoplasm, medullary carcinoma, and benign lesions. The proposed CI-DUF diagnostic model, which integrates multi-modal information, shows better diagnostic performance than the CI model that relies only on cytologic images, particularly excelling in thyroid cytology classification.
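A hypothetical sketch of the CI-DUF idea: a cytologic-image CNN branch and an ultrasound-feature branch are fused by concatenation before a four-class head (papillary carcinoma, follicular neoplasm, medullary carcinoma, benign). Layer sizes and feature dimensions are assumptions, not the authors' architecture.

```python
# Hypothetical two-branch fusion model in the spirit of CI-DUF.
import torch
import torch.nn as nn

class CIDUFNet(nn.Module):
    def __init__(self, n_ultrasound_feats: int = 16, n_classes: int = 4):
        super().__init__()
        # Cytologic-image branch: a small CNN encoder.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Ultrasound-feature branch: a small MLP on tabular descriptors.
        self.mlp = nn.Sequential(nn.Linear(n_ultrasound_feats, 32), nn.ReLU())
        # Fusion head: concatenate both embeddings, then classify.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image, us_feats):
        fused = torch.cat([self.cnn(image), self.mlp(us_feats)], dim=1)
        return self.head(fused)

logits = CIDUFNet()(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 4])
```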

Hybrid-View Attention for csPCa Classification in TRUS

Zetian Feng, Juan Fu, Xuebin Zou, Hongsheng Ye, Hong Wu, Jianhua Zhou, Yi Wang

arXiv preprint · Jul 4 2025
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention. Transrectal ultrasound (TRUS) is widely used for prostate biopsy; however, its low contrast and anisotropic spatial resolution pose diagnostic challenges. To address these limitations, we propose a novel hybrid-view attention (HVA) network for csPCa classification in 3D TRUS that leverages complementary information from transverse and sagittal views. Our approach integrates a CNN-transformer hybrid architecture, where convolutional layers extract fine-grained local features and the transformer-based HVA models global dependencies. Specifically, the HVA comprises intra-view attention to refine features within a single view and cross-view attention to incorporate complementary information across views. Furthermore, a hybrid-view adaptive fusion module dynamically aggregates features along both channel and spatial dimensions, enhancing the overall representation. Experiments are conducted on an in-house dataset containing 590 subjects who underwent prostate biopsy. Comparative and ablation results demonstrate the efficacy of our method. The code is available at https://github.com/mock1ngbrd/HVAN.
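The cross-view attention component can be sketched as below: tokens from one view act as queries over tokens from the complementary view. The use of nn.MultiheadAttention, the dimensions, and the residual/norm arrangement are illustrative assumptions; the authors' implementation is in the linked repository.

```python
# Sketch of cross-view attention between transverse and sagittal TRUS tokens.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_view, context_view):
        # Queries come from one view; keys/values from the complementary view.
        attended, _ = self.attn(query_view, context_view, context_view)
        return self.norm(query_view + attended)  # residual connection

transverse = torch.randn(2, 64, 256)  # (batch, tokens, embedding dim)
sagittal = torch.randn(2, 64, 256)
cva = CrossViewAttention()
fused_transverse = cva(transverse, sagittal)
print(fused_transverse.shape)  # torch.Size([2, 64, 256])
```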