Page 8 of 41404 results

SUP-Net: Slow-time Upsampling Network for Aliasing Removal in Doppler Ultrasound.

Nahas H, Yu ACH

PubMed · Jul 24, 2025
Doppler ultrasound modalities, which include spectral Doppler and color flow imaging, are frequently used tools for flow diagnostics because of their real-time point-of-care applicability and high temporal resolution. When implemented using pulse-echo sensing and phase-shift estimation principles, their pulse repetition frequency (PRF) determines the maximum detectable velocity. If the PRF must be set below the Nyquist requirement due to imaging requirements or hardware constraints, aliasing errors or spectral overlap may corrupt the estimated flow data. To solve this issue, we have devised a deep learning-based framework, powered by a custom slow-time upsampling network (SUP-Net), that leverages spatiotemporal characteristics to upsample the received ultrasound signals across pulse echoes acquired using high-frame-rate ultrasound (HiFRUS). Our framework infers high-PRF signals from signals acquired at low PRF, thereby improving the quality of Doppler flow estimation. SUP-Net was trained and evaluated on in vivo femoral acquisitions from 20 participants and was applied recursively to resolve scenarios with excessive aliasing across a range of PRFs. We report the successful reconstruction of slow-time signals whose frequency content exceeds the Nyquist limit once and even twice. By operating on the fundamental slow-time signals, our framework can resolve aliasing-related artifacts in several downstream modalities, including color Doppler and pulse-wave Doppler.
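The aliasing behavior this framework addresses can be illustrated numerically: a Doppler frequency component above PRF/2 wraps back into the measurable band, and doubling the effective PRF (the goal of 2x slow-time upsampling) restores it. A minimal sketch of that arithmetic, not the authors' code:

```python
def wrapped_doppler(f_hz: float, prf_hz: float) -> float:
    """Frequency observed after slow-time sampling at `prf_hz`.

    Components outside [-PRF/2, PRF/2) alias (wrap) back into that band,
    which is the corruption an inferred high-PRF signal avoids.
    """
    half = prf_hz / 2.0
    return (f_hz + half) % prf_hz - half

# A 3 kHz Doppler shift sampled at PRF = 4 kHz aliases to -1 kHz ...
print(wrapped_doppler(3000.0, 4000.0))  # -> -1000.0
# ... but is recovered once the effective PRF is doubled to 8 kHz.
print(wrapped_doppler(3000.0, 8000.0))  # -> 3000.0
```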

Deep Learning-Driven High Spatial Resolution Attenuation Imaging for Ultrasound Tomography (AI-UT).

Liu M, Kou Z, Wiskin JW, Czarnota GJ, Oelze ML

PubMed · Jul 24, 2025
Ultrasonic attenuation can be used to characterize tissue properties of the human breast. Both quantitative ultrasound (QUS) and ultrasound tomography (USCT) can provide attenuation estimation. However, limitations have been identified for both approaches. In QUS, generating attenuation maps involves dividing the whole image into data blocks; the optimal block size is around 15 to 30 pulse lengths, which dramatically decreases the spatial resolution of attenuation imaging. In USCT, attenuation is often estimated with a full wave inversion (FWI) method, which is affected by background noise. To achieve a high-resolution attenuation image with low variance, a deep learning (DL)-based method was proposed. In this approach, RF data from 60 angle views from the QTI Breast Acoustic CT™ Scanner were used as the input and attenuation images as the output. To improve image quality for the DL method, the spatial correlation between speed of sound (SOS) and attenuation was used as a constraint in the model. The results indicated that including the SOS structural information improved the performance of the model. With a higher-spatial-resolution attenuation image, further segmentation of the breast can be achieved. The structural information and actual attenuation values provided by DL-generated attenuation images were validated against values from the literature and the SOS-based segmentation map. The information provided by DL-generated attenuation images can be used as an additional biomarker for breast cancer diagnosis.
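One way to read the SOS constraint is as an extra structural-agreement term in the training loss. The sketch below is a hypothetical 1-D illustration (the abstract does not publish the loss), pairing a data-fidelity MSE with a finite-difference edge-agreement penalty against the SOS map; `lam` is an assumed weighting hyperparameter:

```python
def sos_constrained_loss(att_pred, att_true, sos, lam=0.1):
    """Hypothetical loss: attenuation MSE plus a term encouraging the
    predicted attenuation map's edges to align with speed-of-sound edges."""
    n = len(att_pred)
    mse = sum((p - t) ** 2 for p, t in zip(att_pred, att_true)) / n
    diff = lambda x: [b - a for a, b in zip(x, x[1:])]  # 1-D finite difference
    edge = sum((g - h) ** 2
               for g, h in zip(diff(att_pred), diff(sos))) / (n - 1)
    return mse + lam * edge

# A perfect prediction whose edges also match the SOS map incurs zero loss.
print(sos_constrained_loss([0, 0, 4], [0, 0, 4], [1, 1, 5]))  # -> 0.0
```

In 2-D the same idea would use image gradients, but the structure of the penalty is identical.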

A Dynamic Machine Learning Model to Predict Angiographic Vasospasm After Aneurysmal Subarachnoid Hemorrhage.

Sen RD, McGrath MC, Shenoy VS, Meyer RM, Park C, Fong CT, Lele AV, Kim LJ, Levitt MR, Wang LL, Sekhar LN

PubMed · Jul 24, 2025
The goal of this study was to develop a highly precise, dynamic machine learning model centered on daily transcranial Doppler ultrasound (TCD) data to predict angiographic vasospasm (AV) in the context of aneurysmal subarachnoid hemorrhage (aSAH). A retrospective review of patients with aSAH treated at a single institution was performed. The primary outcome was AV, defined as angiographic narrowing of any intracranial artery at any time point during admission after risk assessment. Standard demographic, clinical, and radiographic data were collected, along with quantitative data including mean arterial pressure, cerebral perfusion pressure, daily serum sodium, and hourly ventriculostomy output. Detailed daily TCD data for the intracranial arteries, including maximum velocities, pulsatility indices, and Lindegaard ratios, were collected. Three predictive machine learning models were created and compared: a static multivariate logistic regression model based on data collected on the date of admission (Baseline Model; BM), a standard TCD model using middle cerebral artery flow velocity and Lindegaard ratio measurements (SM), and a long short-term memory (LSTM) model using all data trended through the hospitalization. A total of 424 patients with aSAH were reviewed, 78 of whom developed AV. In predicting AV at any time point in the future, the LSTM model had the highest precision (0.571) and accuracy (0.776), whereas the SM had the highest overall performance with an F1 score of 0.566. In predicting AV within 5 days, the LSTM continued to have the highest precision (0.488) and accuracy (0.803). After an ablation test removing all non-TCD elements, the LSTM model improved to a precision of 0.824. Longitudinal TCD data can be used to create a dynamic machine learning model with higher precision than static TCD measurements for predicting AV after aSAH.
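The static TCD baseline (SM) rests on two quantities the abstract names: middle cerebral artery (MCA) flow velocity and the Lindegaard ratio (MCA velocity over extracranial ICA velocity). A minimal sketch using the conventional clinical thresholds — elevated MCA velocity with a ratio above 3 suggests vasospasm rather than hyperemia — not the study's fitted parameters:

```python
def lindegaard_ratio(mca_velocity: float, ica_velocity: float) -> float:
    """Ratio of MCA to extracranial ICA mean flow velocity (both in cm/s)."""
    return mca_velocity / ica_velocity

def static_tcd_flag(mca_velocity: float, ica_velocity: float) -> bool:
    """Rule-of-thumb static screen, roughly what an SM-style baseline encodes:
    MCA velocity > 120 cm/s AND Lindegaard ratio > 3 suggests vasospasm.
    Thresholds are conventional clinical values, not this study's."""
    return mca_velocity > 120 and lindegaard_ratio(mca_velocity, ica_velocity) > 3

print(lindegaard_ratio(180.0, 45.0))  # -> 4.0
print(static_tcd_flag(180.0, 45.0))   # -> True
print(static_tcd_flag(130.0, 60.0))   # -> False (ratio ~2.2 favors hyperemia)
```

The LSTM model's advantage, per the abstract, comes from trending these same measurements over daily time series rather than thresholding a single snapshot.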

TextSAM-EUS: Text Prompt Learning for SAM to Accurately Segment Pancreatic Tumor in Endoscopic Ultrasound

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24, 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation. Code is available at https://github.com/HealthX-Lab/TextSAM-EUS .
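The 0.86% trainable-parameter figure is characteristic of LoRA's low-rank update: a frozen d_out x d_in weight W gains a trainable pair B (d_out x r) and A (r x d_in), so only r(d_in + d_out) parameters train per adapted layer. A rough sketch of that arithmetic, with illustrative layer sizes rather than SAM's actual dimensions:

```python
def lora_fraction(d_in: int, d_out: int, rank: int) -> float:
    """Trainable-parameter fraction when a frozen d_out x d_in weight W
    is adapted as W + B @ A, with only B (d_out x r) and A (r x d_in)
    trainable."""
    base = d_in * d_out
    lora = rank * (d_in + d_out)
    return lora / (base + lora)

# Illustrative transformer-sized layer: a rank-4 adapter trains well under
# 1% of parameters, the regime TextSAM-EUS reports (0.86% of the total).
print(round(lora_fraction(1024, 1024, 4) * 100, 2))  # -> 0.78
```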

Synthetic data trained open-source language models are feasible alternatives to proprietary models for radiology reporting.

Pandita A, Keniston A, Madhuripan N

PubMed · Jul 23, 2025
The study assessed the feasibility of using synthetic data to fine-tune various open-source LLMs for free-text to structured-data conversion in radiology, comparing their performance with GPT models. A training set of 3000 synthetic thyroid nodule dictations was generated to train six open-source models (Starcoderbase-1B, Starcoderbase-3B, Mistral-7B, Llama-3-8B, Llama-2-13B, and Yi-34B), with the ACR TI-RADS template as the target model output. Model performance was tested on 50 thyroid nodule dictations from the MIMIC-III patient dataset and compared against the 0-shot, 1-shot, and 5-shot performance of GPT-3.5 and GPT-4. GPT-4 5-shot and Yi-34B showed the highest performance, with no statistically significant difference between the models. Several open-source models outperformed the GPT models with statistical significance. Overall, models trained with synthetic data showed performance comparable to GPT models in structured text conversion in our study. Given their privacy-preserving advantages, open LLMs can be utilized as a viable alternative to proprietary GPT models.
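The structured target here, ACR TI-RADS, sums feature points (composition, echogenicity, shape, margin, echogenic foci) into a risk level; the LLMs' job is extracting those features from free-text dictations into the template. For context, the final points-to-level mapping follows the published ACR cut-offs, sketched below:

```python
def tirads_level(points: int) -> str:
    """Map total ACR TI-RADS points to a risk level (published ACR cut-offs)."""
    if points >= 7:
        return "TR5"  # highly suspicious
    if points >= 4:
        return "TR4"  # moderately suspicious
    if points == 3:
        return "TR3"  # mildly suspicious
    if points == 2:
        return "TR2"  # not suspicious
    return "TR1"      # benign

print([tirads_level(p) for p in (0, 2, 3, 5, 8)])
# -> ['TR1', 'TR2', 'TR3', 'TR4', 'TR5']
```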

Fetal neurobehavior and consciousness: a systematic review of 4D ultrasound evidence and ethical challenges.

Pramono MBA, Andonotopo W, Bachnas MA, Dewantiningrum J, Sanjaya INH, Sulistyowati S, Stanojevic M, Kurjak A

PubMed · Jul 23, 2025
Recent advancements in four-dimensional (4D) ultrasonography have enabled detailed observation of fetal behavior in utero, including facial movements, limb gestures, and stimulus responses. These developments have prompted renewed inquiry into whether such behaviors are merely reflexive or represent early signs of integrated neural function. However, the relationship between fetal movement patterns and conscious awareness remains scientifically uncertain and ethically contested. A systematic review was conducted in accordance with PRISMA 2020 guidelines. Four databases (PubMed, Scopus, Embase, Web of Science) were searched for English-language articles published from 2000 to 2025, using keywords including "fetal behavior," "4D ultrasound," "neurodevelopment," and "consciousness." Studies were included if they involved human fetuses, used 4D ultrasound or functional imaging modalities, and offered interpretation relevant to neurobehavioral or ethical analysis. A structured appraisal using AMSTAR-2 was applied to assess study quality. Data were synthesized narratively to map fetal behaviors onto developmental milestones and evaluate their interpretive limits. Seventy-four studies met the inclusion criteria, with 23 rated as high quality. Fetal behaviors such as yawning, hand-to-face movement, and startle responses increased in complexity between 24 and 34 weeks of gestation. These patterns aligned with known neurodevelopmental events, including thalamocortical connectivity and cortical folding. However, no study provided definitive evidence linking observed behaviors to conscious experience. Emerging applications of artificial intelligence in ultrasound analysis were found to enhance pattern recognition but lack external validation. Fetal behavior observed via 4D ultrasound may reflect increasing neural integration but should not be equated with awareness. Interpretations must remain cautious, avoiding anthropomorphic assumptions. Ethical engagement requires attention to scientific limits, sociocultural diversity, and respect for maternal autonomy as imaging technologies continue to evolve.

Non-invasive meningitis screening in neonates and infants: multicentre international study.

Ajanovic S, Jobst B, Jiménez J, Quesada R, Santos F, Carandell F, Lopez-Azorín M, Valverde E, Ybarra M, Bravo MC, Petrone P, Sial H, Muñoz D, Agut T, Salas B, Carreras N, Alarcón A, Iriondo M, Luaces C, Sidat M, Zandamela M, Rodrigues P, Graça D, Ngovene S, Bramugy J, Cossa A, Mucasse C, Buck WC, Arias S, El Abbass C, Tligi H, Barkat A, Ibáñez A, Parrilla M, Elvira L, Calvo C, Pellicer A, Cabañas F, Bassat Q

PubMed · Jul 23, 2025
Meningitis diagnosis requires a lumbar puncture (LP) to obtain cerebrospinal fluid (CSF) for laboratory-based analysis. In high-income settings, LPs are part of the systematic approach to screen for meningitis, and most yield negative results. In low- and middle-income settings, LPs are seldom performed, and suspected cases are often treated empirically. The aim of this study was to validate a non-invasive transfontanellar white blood cell (WBC) counter in CSF to screen for meningitis. We conducted a prospective study across three Spanish hospitals, one Mozambican hospital, and one Moroccan hospital (2020-2023). We included patients under 24 months of age with suspected meningitis, an open fontanelle, and an LP performed within 24 h of recruitment. High-resolution ultrasound (HRUS) images of the CSF were obtained using a customized probe. A deep learning model was trained to classify CSF patterns based on LP WBC counts, using a threshold of 30 cells/mm³. The algorithm was applied to 3782 images from 76 patients. It correctly classified 17/18 CSF samples with ≥30 WBC/mm³ and 55/58 controls (sensitivity 94.4%, specificity 94.8%). The only false negative was paired with a traumatic LP with 40 corrected WBC/mm³. This non-invasive device could be an accurate tool for screening for meningitis in neonates and young infants, modulating LP indications. Our non-invasive, high-resolution ultrasound device achieved 94% accuracy in detecting elevated leukocyte counts in neonates and infants with suspected meningitis, compared to the gold standard (lumbar puncture and laboratory analysis). This first-in-class screening device introduces the first non-invasive method for neonatal and infant meningitis screening, potentially modulating lumbar puncture indications. This technology could substantially reduce lumbar punctures in low-suspicion cases and provides a viable alternative for critically ill patients worldwide, or in settings where lumbar punctures are unfeasible, especially in low-income countries.
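The reported sensitivity and specificity follow directly from the stated confusion counts (17/18 pleocytotic CSFs and 55/58 controls correctly classified). A quick check of that arithmetic:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), as percentages."""
    return 100 * tp / (tp + fn), 100 * tn / (tn + fp)

# 17 of 18 CSFs with >=30 WBC/mm3 detected; 55 of 58 controls cleared.
sens, spec = sensitivity_specificity(tp=17, fn=1, tn=55, fp=3)
print(round(sens, 1), round(spec, 1))  # -> 94.4 94.8
```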

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound

Yu, M., Peterson, M. R., Burgoine, K., Harbaugh, T., Olupot-Olupot, P., Gladstone, M., Hagmann, C., Cowan, F. M., Weeks, A., Morton, S. U., Mulondo, R., Mbabazi-Kabachelor, E., Schiff, S. J., Monga, V.

medRxiv preprint · Jul 22, 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, called the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing the prevailing state-of-the-art infection detection techniques.
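The cross-attention modules described above let one view's features act as queries over the other view's intersecting-region features. A generic scaled dot-product cross-attention sketch for intuition — this is the standard mechanism, not CLIF-Net's exact module:

```python
import math

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query (one view's feature
    vector) attends over keys/values from the other view. Pure-Python
    sketch of the generic mechanism."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                       # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]       # attention weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One coronal-view query attending over two sagittal-view features:
print(cross_attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[2.0], [4.0]]))
```

The output is a convex combination of the value vectors, so each fused feature stays within the range spanned by the other view's features.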

DualSwinUnet++: An enhanced Swin-Unet architecture with dual decoders for PTMC segmentation.

Dialameh M, Rajabzadeh H, Sadeghi-Goughari M, Sim JS, Kwon HJ

PubMed · Jul 22, 2025
Precise segmentation of papillary thyroid microcarcinoma (PTMC) during ultrasound-guided radiofrequency ablation (RFA) is critical for effective treatment but remains challenging due to acoustic artifacts, small lesion size, and anatomical variability. In this study, we propose DualSwinUnet++, a dual-decoder transformer-based architecture designed to enhance PTMC segmentation by incorporating thyroid gland context. DualSwinUnet++ employs independent linear projection heads for each decoder and a residual information flow mechanism that passes intermediate features from the first (thyroid) decoder to the second (PTMC) decoder via concatenation and transformation. These design choices allow the model to condition tumor prediction explicitly on gland morphology without shared gradient interference. Trained on a clinical ultrasound dataset with 691 annotated RFA images and evaluated against state-of-the-art models, DualSwinUnet++ achieves superior Dice and Jaccard scores while maintaining sub-200 ms inference latency. The results demonstrate the model's suitability for near real-time surgical assistance and its effectiveness in improving segmentation accuracy in challenging PTMC cases.
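The residual information flow described above — thyroid-decoder features concatenated onto the PTMC decoder's features, then transformed — can be sketched in miniature. Shapes and weights here are illustrative placeholders, not the paper's implementation:

```python
def residual_flow(ptmc_feat, thyroid_feat, weights):
    """Sketch of the dual-decoder hand-off: intermediate features from the
    thyroid decoder are concatenated onto the PTMC decoder's features and
    projected back down by a learned matrix (here a plain list of rows)."""
    concat = ptmc_feat + thyroid_feat  # channel-wise concatenation
    return [sum(w * x for w, x in zip(row, concat)) for row in weights]

# 2 PTMC channels + 2 thyroid channels projected back to 2 channels.
ptmc, thyroid = [1.0, 2.0], [0.5, 0.5]
identity_on_ptmc = [[1, 0, 0, 0], [0, 1, 0, 0]]  # hypothetical weights
print(residual_flow(ptmc, thyroid, identity_on_ptmc))  # -> [1.0, 2.0]
```

With learned weights, nonzero entries in the right-hand columns let gland-morphology features modulate the tumor prediction, which is the point of the mechanism.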

Artificial Intelligence Empowers Novice Users to Acquire Diagnostic-Quality Echocardiography.

Trost B, Rodrigues L, Ong C, Dezellus A, Goldberg YH, Bouchat M, Roger E, Moal O, Singh V, Moal B, Lafitte S

PubMed · Jul 22, 2025
Cardiac ultrasound exams provide real-time data to guide clinical decisions but require highly trained sonographers. Artificial intelligence (AI) that uses deep learning algorithms to guide novices in the acquisition of diagnostic echocardiographic studies may broaden access and improve care. The objective of this trial was to evaluate whether nurses without previous ultrasound experience (novices) could obtain diagnostic-quality acquisitions of 10 echocardiographic views using AI-based software. This noninferiority study was prospective, international, nonrandomized, and conducted at 2 medical centers, in the United States and France, from November 2023 to August 2024. Two limited cardiac exams were performed on adult patients scheduled for a clinically indicated echocardiogram; one was conducted by a novice using AI guidance and one by an expert (experienced sonographer or cardiologist) without it. Primary endpoints were evaluated by 5 experienced cardiologists to assess whether the novice exam was of sufficient quality to visually analyze the left ventricular size and function, the right ventricle size, and the presence of nontrivial pericardial effusion. Secondary endpoints included 8 additional cardiac parameters. A total of 240 patients (mean age 62.6 years; 117 women (48.8%); mean body mass index 26.6 kg/m²) completed the study. One hundred percent of the exams performed by novices with the studied software were of sufficient quality to assess the primary endpoints. Cardiac parameters assessed in exams conducted by novices and experts were strongly correlated. AI-based software provides a safe means for novices to perform diagnostic-quality cardiac ultrasounds after a short training period.