Page 24 of 56556 results

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound

Yu, M., Peterson, M. R., Burgoine, K., Harbaugh, T., Olupot-Olupot, P., Gladstone, M., Hagmann, C., Cowan, F. M., Weeks, A., Morton, S. U., Mulondo, R., Mbabazi-Kabachelor, E., Schiff, S. J., Monga, V.

medRxiv preprint · Jul 22 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial positions of these images to locate their intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, which outputs pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing prevailing state-of-the-art infection detection techniques.
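The cross-attention step described above, where tokens from one view attend to features from the other view's intersecting region, can be sketched minimally in numpy. All shapes, the token counts, and the residual-enhancement step are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cross_attention(queries, context, scale=None):
    """Scaled dot-product cross-attention: tokens from one view attend
    to tokens extracted from the other view's intersecting region."""
    d = queries.shape[-1]
    scale = scale if scale is not None else 1.0 / np.sqrt(d)
    scores = queries @ context.T * scale                     # (Nq, Nc)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # row-wise softmax
    return weights @ context                                 # (Nq, d)

# Toy setup: 4 coronal tokens and 3 sagittal tokens, both 8-dimensional.
rng = np.random.default_rng(0)
coronal = rng.standard_normal((4, 8))
sagittal = rng.standard_normal((3, 8))
fused = coronal + cross_attention(coronal, sagittal)  # residual enhancement
```

In a full model each branch would produce such token sets at several feature levels, with a fusion block like this applied at each level in both directions.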

DualSwinUnet++: An enhanced Swin-Unet architecture with dual decoders for PTMC segmentation.

Dialameh M, Rajabzadeh H, Sadeghi-Goughari M, Sim JS, Kwon HJ

PubMed · Jul 22 2025
Precise segmentation of papillary thyroid microcarcinoma (PTMC) during ultrasound-guided radiofrequency ablation (RFA) is critical for effective treatment but remains challenging due to acoustic artifacts, small lesion size, and anatomical variability. In this study, we propose DualSwinUnet++, a dual-decoder transformer-based architecture designed to enhance PTMC segmentation by incorporating thyroid gland context. DualSwinUnet++ employs independent linear projection heads for each decoder and a residual information flow mechanism that passes intermediate features from the first (thyroid) decoder to the second (PTMC) decoder via concatenation and transformation. These design choices allow the model to condition tumor prediction explicitly on gland morphology without shared gradient interference. Trained on a clinical ultrasound dataset with 691 annotated RFA images and evaluated against state-of-the-art models, DualSwinUnet++ achieves superior Dice and Jaccard scores while maintaining sub-200 ms inference latency. The results demonstrate the model's suitability for near real-time surgical assistance and its effectiveness in improving segmentation accuracy in challenging PTMC cases.
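The residual information flow from the thyroid decoder to the PTMC decoder, via concatenation followed by a transformation, can be sketched as follows. The stage structure, dimensions, and weights here are made up for illustration; the real model uses Swin-transformer decoders:

```python
import numpy as np

rng = np.random.default_rng(1)

def decoder_stage(x, weight):
    """One illustrative decoder stage: linear projection + ReLU."""
    return np.maximum(x @ weight, 0.0)

# Shared encoder features for one patch (flattened to d = 16).
encoded = rng.standard_normal((1, 16))

# First decoder (thyroid gland) produces intermediate features.
w_thyroid = rng.standard_normal((16, 16)) * 0.1
thyroid_feats = decoder_stage(encoded, w_thyroid)

# Second decoder (PTMC) conditions on gland morphology: encoder features
# are concatenated with the thyroid decoder's output, then transformed.
w_ptmc = rng.standard_normal((32, 16)) * 0.1
ptmc_in = np.concatenate([encoded, thyroid_feats], axis=-1)  # (1, 32)
ptmc_feats = decoder_stage(ptmc_in, w_ptmc)
```

Because the concatenated features enter the PTMC branch as inputs rather than through a shared head, gradients from the tumor loss do not flow back through the thyroid decoder's own prediction head, which is the "no shared gradient interference" property the abstract mentions.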

Artificial Intelligence Empowers Novice Users to Acquire Diagnostic-Quality Echocardiography.

Trost B, Rodrigues L, Ong C, Dezellus A, Goldberg YH, Bouchat M, Roger E, Moal O, Singh V, Moal B, Lafitte S

PubMed · Jul 22 2025
Cardiac ultrasound exams provide real-time data to guide clinical decisions but require highly trained sonographers. Artificial intelligence (AI) that uses deep learning algorithms to guide novices in the acquisition of diagnostic echocardiographic studies may broaden access and improve care. The objective of this trial was to evaluate whether nurses without previous ultrasound experience (novices) could obtain diagnostic-quality acquisitions of 10 echocardiographic views using AI-based software. This noninferiority study was prospective, international, nonrandomized, and conducted at 2 medical centers, in the United States and France, from November 2023 to August 2024. Two limited cardiac exams were performed on adult patients scheduled for a clinically indicated echocardiogram; one was conducted by a novice using AI guidance and one by an expert (experienced sonographer or cardiologist) without it. Primary endpoints were evaluated by 5 experienced cardiologists to assess whether the novice exam was of sufficient quality to visually analyze the left ventricular size and function, the right ventricle size, and the presence of nontrivial pericardial effusion. Secondary endpoints included 8 additional cardiac parameters. A total of 240 patients (mean age 62.6 years; 117 women [48.8%]; mean body mass index 26.6 kg/m²) completed the study. One hundred percent of the exams performed by novices with the studied software were of sufficient quality to assess the primary endpoints. Cardiac parameters assessed in exams conducted by novices and experts were strongly correlated. AI-based software provides a safe means for novices to perform diagnostic-quality cardiac ultrasounds after a short training period.

An Improved Diagnostic Deep Learning Model for Cervical Lymphadenopathy Characterization.

Gong W, Li M, Wang S, Jiang Y, Wu J, Li X, Ma C, Luo H, Zhou H

PubMed · Jul 21 2025
To validate the diagnostic performance of a B-mode ultrasound-based deep learning (DL) model in distinguishing benign from malignant cervical lymphadenopathy (CLP). A total of 210 CLPs with conclusive pathological results were retrospectively included and randomly split into training (n = 169) and test (n = 41) cohorts at a ratio of 4:1. A DL model integrating a convolutional neural network, deformable convolution network and attention mechanism was developed. Three diagnostic models were compared: (a) Model I: CLPs with at least one suspicious B-mode ultrasound feature (ratio of longitudinal to short diameter < 2, irregular margin, hyper-echogenicity, hilus absence, cystic necrosis and calcification) were deemed malignant; (b) Model II: a total risk score of B-mode ultrasound features obtained by multivariate logistic regression; and (c) Model III: CLPs with a positive DL output were deemed malignant. The diagnostic utility of these models was assessed by the area under the receiver operating characteristic curve (AUC) and the corresponding sensitivity and specificity. Multivariate analysis indicated that a positive DL result was the factor most strongly associated with malignant CLPs [odds ratio (OR) = 39.05, p < 0.001], followed only by hilus absence (OR = 6.01, p = 0.001) in the training cohort. In the test cohort, the AUC of the DL model (0.871) was significantly higher than that of model I (AUC = 0.681, p = 0.04) and model II (AUC = 0.679, p = 0.03). In addition, model III attained 93.3% specificity, significantly higher than model I (40.0%, p = 0.002) and model II (60.0%, p = 0.03). Although the sensitivity of model I was the highest, it did not differ significantly from that of model III (96.2% vs. 80.8%, p = 0.083). B-mode ultrasound-based DL is a potentially robust tool for the differential diagnosis of benign and malignant CLPs.
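A Model II-style risk score, logistic regression over binary B-mode features, can be sketched as below. The coefficient values and intercept are hypothetical placeholders, not the study's fitted values:

```python
import numpy as np

def risk_score(features, coefs, intercept):
    """Logistic-regression risk score over binary ultrasound features:
    probability of malignancy given the presence/absence of each sign."""
    z = intercept + features @ coefs
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid

# Binary feature vector, in the order: [L/S ratio < 2, irregular margin,
# hyper-echogenicity, hilus absence, cystic necrosis, calcification].
coefs = np.array([0.8, 0.6, 0.5, 1.8, 1.2, 0.9])  # hypothetical weights
node = np.array([1, 0, 0, 1, 0, 1])               # one example node
p_malignant = risk_score(node, coefs, intercept=-2.0)
```

In practice the coefficients come from fitting the regression on the training cohort, and a cutoff on the score yields the binary Model II prediction.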

Noninvasive Deep Learning System for Preoperative Diagnosis of Follicular-Like Thyroid Neoplasms Using Ultrasound Images: A Multicenter, Retrospective Study.

Shen H, Huang Y, Yan W, Zhang C, Liang T, Yang D, Feng X, Liu S, Wang Y, Cao W, Cheng Y, Chen H, Ni Q, Wang F, You J, Jin Z, He W, Sun J, Yang D, Liu L, Cao B, Zhang X, Li Y, Pei S, Zhang S, Zhang B

PubMed · Jul 21 2025
To propose a deep learning (DL) system for the preoperative diagnosis of follicular-like thyroid neoplasms (FNs) using routine ultrasound images. Preoperative diagnosis of malignancy in nodules suspicious for an FN remains challenging: ultrasound, fine-needle aspiration cytology, and intraoperative frozen-section pathology cannot unambiguously distinguish benign from malignant FNs, leading to unnecessary biopsies and operations on benign nodules. This multicenter, retrospective study included 3634 patients from 11 centers who underwent ultrasound and received a definite diagnosis of FN, comprising thyroid follicular adenoma (n = 1748), follicular carcinoma (n = 299), and follicular variant of papillary thyroid carcinoma (n = 1587). Four DL models, Inception-v3, ResNet50, Inception-ResNet-v2, and DenseNet161, were constructed on a training set (n = 2587, 6178 images) and verified on an internal validation set (n = 648, 1633 images) and an external validation set (n = 399, 847 images). The diagnostic efficacy of the DL models was evaluated against the ACR TI-RADS in terms of the area under the curve (AUC), sensitivity, specificity, and unnecessary biopsy rate. On external validation, the four DL models yielded robust and comparable performance, with AUCs of 82.2%-85.2%, sensitivities of 69.6%-76.0%, and specificities of 84.1%-89.2%, outperforming the ACR TI-RADS. Compared with ACR TI-RADS, the DL models showed a higher biopsy rate of malignancy (71.6%-79.9% vs 37.7%, P < 0.001) and a significantly lower unnecessary fine-needle aspiration biopsy (FNAB) rate (8.5%-12.8% vs 40.7%, P < 0.001). This study provides a noninvasive DL tool for accurate preoperative diagnosis of FNs, showing better performance than ACR TI-RADS and reducing unnecessary invasive interventions.

OpenBreastUS: Benchmarking Neural Operators for Wave Imaging Using Breast Ultrasound Computed Tomography

Zhijun Zeng, Youjia Zheng, Hao Hu, Zeyuan Dong, Yihang Zheng, Xinliang Liu, Jinzhuo Wang, Zuoqiang Shi, Linfeng Zhang, Yubing Li, He Sun

arXiv preprint · Jul 20 2025
Accurate and efficient simulation of wave equations is crucial in computational wave imaging applications, such as ultrasound computed tomography (USCT), which reconstructs tissue material properties from observed scattered waves. Traditional numerical solvers for wave equations are computationally intensive and often unstable, limiting their practical applications for quasi-real-time image reconstruction. Neural operators offer an innovative approach by accelerating PDE solving using neural networks; however, their effectiveness in realistic imaging is limited because existing datasets oversimplify real-world complexity. In this paper, we present OpenBreastUS, a large-scale wave equation dataset designed to bridge the gap between theoretical equations and practical imaging applications. OpenBreastUS includes 8,000 anatomically realistic human breast phantoms and over 16 million frequency-domain wave simulations using real USCT configurations. It enables a comprehensive benchmarking of popular neural operators for both forward simulation and inverse imaging tasks, allowing analysis of their performance, scalability, and generalization capabilities. By offering a realistic and extensive dataset, OpenBreastUS not only serves as a platform for developing innovative neural PDE solvers but also facilitates their deployment in real-world medical imaging problems. For the first time, we demonstrate efficient in vivo imaging of the human breast using neural operator solvers.
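The core building block of the neural operators benchmarked on datasets like this is the spectral convolution of a Fourier Neural Operator: mix the lowest Fourier modes of the field with learned complex weights, then transform back. The 1D, single-channel sketch below is illustrative only and not tied to OpenBreastUS's models:

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """One spectral-convolution layer in the style of a Fourier Neural
    Operator: go to frequency space, linearly mix the lowest n_modes
    with learned complex weights, truncate the rest, transform back."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights[:n_modes] * u_hat[:n_modes]  # mode mixing
    return np.fft.irfft(out_hat, n=len(u))

rng = np.random.default_rng(4)
field = np.sin(np.linspace(0.0, 2 * np.pi, 64))   # toy 1D wave field
weights = rng.standard_normal(16) + 1j * rng.standard_normal(16)
out = fourier_layer(field, weights, n_modes=8)
```

A full operator stacks several such layers (with pointwise nonlinearities and a local linear path) and, for frequency-domain wave simulation, maps a speed-of-sound map to the complex wave field.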

[A multi-feature fusion-based model for fetal orientation classification from intrapartum ultrasound videos].

Zheng Z, Yang X, Wu S, Zhang S, Lyu G, Liu P, Wang J, He S

PubMed · Jul 20 2025
To construct an intelligent analysis model for classifying fetal orientation in intrapartum ultrasound videos based on multi-feature fusion. The proposed model consists of Input, Backbone Network, and Classification Head modules. The Input module carries out data augmentation to improve sample quality and the generalization ability of the model. The Backbone Network performs feature extraction based on YOLOv8 combined with the CBAM, ECA, and PSA attention mechanisms and the AIFI feature interaction module. The Classification Head consists of a convolutional layer and a softmax function that outputs the final probability of each class. Images of the key structures (the eyes, face, head, thalamus, and spine) were annotated with bounding boxes by physicians for model training to improve classification accuracy for the occiput anterior, occiput posterior, and occiput transverse positions. The experimental results showed that the proposed model performed excellently on the fetal orientation classification task, with a classification accuracy of 0.984, an area under the PR curve (average precision) of 0.993, an area under the ROC curve of 0.984, and a kappa consistency score of 0.974. The predictions of the deep learning model were highly consistent with the actual classifications. The multi-feature fusion model proposed in this study can efficiently and accurately classify fetal orientation in intrapartum ultrasound videos.
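A classification head of the kind described, a convolutional layer followed by softmax over the three orientation classes, can be sketched with a 1x1-conv equivalent in numpy. The feature-map size and the global-average-pooling step are assumptions for illustration:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classification_head(fmap, weight, bias):
    """1x1-conv head: per-pixel linear map over channels, global average
    pooling over spatial positions, then softmax class probabilities."""
    c = fmap.shape[-1]
    logits_map = fmap.reshape(-1, c) @ weight + bias  # (H*W, n_classes)
    return softmax(logits_map.mean(axis=0))           # pooled probabilities

rng = np.random.default_rng(2)
fmap = rng.standard_normal((7, 7, 32))        # backbone feature map (H, W, C)
weight = rng.standard_normal((32, 3)) * 0.1   # 3 occiput orientation classes
bias = np.zeros(3)
probs = classification_head(fmap, weight, bias)
```

The predicted orientation is then `probs.argmax()`, and the probabilities feed directly into a cross-entropy training loss.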

Commercialization of medical artificial intelligence technologies: challenges and opportunities.

Li B, Powell D, Lee R

PubMed · Jul 18 2025
Artificial intelligence (AI) is already having a significant impact on healthcare. For example, AI-guided imaging can improve the diagnosis/treatment of vascular diseases, which affect over 200 million people globally. Recently, Chiu and colleagues (2024) developed an AI algorithm that supports nurses with no ultrasound training in diagnosing abdominal aortic aneurysms (AAA) with similar accuracy as ultrasound-trained physicians. This technology can therefore improve AAA screening; however, achieving clinical impact with new AI technologies requires careful consideration of commercialization strategies, including funding, compliance with safety and regulatory frameworks, health technology assessment, regulatory approval, reimbursement, and clinical guideline integration.

DUSTrack: Semi-automated point tracking in ultrasound videos

Praneeth Namburi, Roger Pallarès-López, Jessica Rosendorf, Duarte Folgado, Brian W. Anthony

arXiv preprint · Jul 18 2025
Ultrasound technology enables safe, non-invasive imaging of dynamic tissue behavior, making it a valuable tool in medicine, biomechanics, and sports science. However, accurately tracking tissue motion in B-mode ultrasound remains challenging due to speckle noise, low edge contrast, and out-of-plane movement. These challenges complicate the task of tracking anatomical landmarks over time, which is essential for quantifying tissue dynamics in many clinical and research applications. This manuscript introduces DUSTrack (Deep learning and optical flow-based toolkit for UltraSound Tracking), a semi-automated framework for tracking arbitrary points in B-mode ultrasound videos. We combine deep learning with optical flow to deliver high-quality and robust tracking across diverse anatomical structures and motion patterns. The toolkit includes a graphical user interface that streamlines the generation of high-quality training data and supports iterative model refinement. It also implements a novel optical-flow-based filtering technique that reduces high-frequency frame-to-frame noise while preserving rapid tissue motion. DUSTrack demonstrates superior accuracy compared to contemporary zero-shot point trackers and performs on par with specialized methods, establishing its potential as a general and foundational tool for clinical and biomechanical research. We demonstrate DUSTrack's versatility through three use cases: cardiac wall motion tracking in echocardiograms, muscle deformation analysis during reaching tasks, and fascicle tracking during ankle plantarflexion. As an open-source solution, DUSTrack offers a powerful, flexible framework for point tracking to quantify tissue motion from ultrasound videos. DUSTrack is available at https://github.com/praneethnamburi/DUSTrack.
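One simple way to combine a learned point tracker with optical flow so that frame-to-frame jitter is suppressed while rapid motion is preserved is a complementary filter: propagate the previous estimate by the flow increment, then blend with the tracker's measurement. This is a generic sketch under that assumption, not DUSTrack's actual filtering algorithm:

```python
import numpy as np

def complementary_filter(tracker_xy, flow_dxy, alpha=0.8):
    """Blend a (noisy) point tracker with integrated optical flow:
    the flow term carries fast frame-to-frame motion, while the tracker
    anchors the low-frequency trajectory and limits flow drift."""
    out = np.empty_like(tracker_xy)
    out[0] = tracker_xy[0]
    for t in range(1, len(tracker_xy)):
        predicted = out[t - 1] + flow_dxy[t - 1]        # flow-propagated
        out[t] = alpha * predicted + (1 - alpha) * tracker_xy[t]
    return out

# Toy trajectory: smooth 2D motion plus tracker jitter; flow is ideal here.
t = np.linspace(0.0, 1.0, 50)
true_xy = np.stack([10 * t, np.sin(2 * np.pi * t)], axis=1)
rng = np.random.default_rng(3)
noisy = true_xy + rng.normal(scale=0.3, size=true_xy.shape)
flow = np.diff(true_xy, axis=0)                          # per-frame increments
smoothed = complementary_filter(noisy, flow)
```

With real data the flow increments are themselves estimated and drift over time, which is why the tracker term (the `1 - alpha` blend) is needed to keep the trajectory anchored.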

Deep learning-based ultrasound diagnostic model for follicular thyroid carcinoma.

Wang Y, Lu W, Xu L, Xu H, Kong D

PubMed · Jul 18 2025
It is challenging to preoperatively diagnose follicular thyroid carcinoma (FTC) on ultrasound images. This study aimed to develop an end-to-end diagnostic model that classifies thyroid tumors into benign tumors, FTC, and other malignant tumors based on deep learning. This retrospective multi-center study included 10,771 consecutive adult patients who underwent conventional ultrasound and postoperative pathology between January 2018 and September 2021. We proposed a novel data augmentation method and a mixed loss function to handle an imbalanced dataset and applied them to a pre-trained convolutional neural network and transformer model that effectively extracts image features. The proposed model can directly identify FTC from other malignant subtypes and benign tumors based on ultrasound images. The testing dataset included 1078 patients (mean age, 47.3 years ± 11.8 (SD); 811 female patients; FTCs, 39 of 1078 (3.6%); other malignancies, 385 of 1078 (35.7%)). The proposed classification model outperformed state-of-the-art models in differentiating FTC from other malignant subtypes and benign tumors, achieving excellent diagnostic performance with balanced accuracy 0.87, AUC 0.96 (95% CI: 0.96, 0.96), mean sensitivity 0.87 and mean specificity 0.92. Meanwhile, it was superior to the radiologists included in this study for thyroid tumor diagnosis (balanced accuracy: junior 0.60, p < 0.001; mid-level 0.59, p < 0.001; senior 0.66, p < 0.001). The developed classification model addresses the class-imbalance problem and achieved higher performance in differentiating FTC from other malignant subtypes and benign tumors compared with existing methods. Question Deep learning has the potential to improve preoperative diagnostic accuracy for follicular thyroid carcinoma (FTC). Findings The proposed model achieved high accuracy, sensitivity and specificity in diagnosing follicular thyroid carcinoma, outperforming other models.
Clinical relevance The proposed model is a promising computer-aided diagnostic tool for the clinical diagnosis of FTC, which potentially could help reduce missed diagnosis and misdiagnosis for FTC.
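A "mixed" loss for class imbalance of the kind the abstract mentions typically combines class weighting with a focal modulation that down-weights easy examples. The sketch below is one common formulation, illustrative only and not the paper's exact loss; the class weights are hypothetical:

```python
import numpy as np

def focal_weighted_ce(probs, labels, class_weights, gamma=2.0):
    """Class-weighted cross-entropy with a focal term (1 - p)^gamma that
    down-weights well-classified examples, so rare classes (here FTC)
    and hard examples dominate the gradient."""
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-7, 1.0)
    w = class_weights[labels]
    return float(np.mean(-w * (1.0 - p) ** gamma * np.log(p)))

# Toy batch over 3 classes: [benign, FTC, other malignant]; FTC is rare
# (3.6% of the test set), so it gets an inverse-frequency-style upweight.
probs = np.array([[0.7, 0.1, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.1, 0.8]])
labels = np.array([0, 1, 2])
class_weights = np.array([1.0, 9.0, 1.5])   # hypothetical weights
loss = focal_weighted_ce(probs, labels, class_weights)
```

With `gamma = 0` and unit weights this reduces to plain cross-entropy, which makes the two imbalance-handling ingredients easy to ablate separately.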