Page 37 of 41404 results

Machine learning prediction of pathological complete response to neoadjuvant chemotherapy with peritumoral breast tumor ultrasound radiomics: compare with intratumoral radiomics and clinicopathologic predictors.

Yao J, Zhou W, Jia X, Zhu Y, Chen X, Zhan W, Zhou J

PubMed | May 16, 2025
Noninvasive, accurate, and novel approaches to predict which patients will achieve pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) could assist treatment strategies. The aim of this study was to explore the application of a machine learning (ML) based peritumoral ultrasound radiomics signature (PURS), compared with intratumoral radiomics (IURS) and clinicopathologic factors, for early prediction of pCR. We analyzed 358 locally advanced breast cancer patients (250 in the training set and 108 in the test set) who received NAC and post-NAC surgery at our institution. The clinical and pathological data were analyzed using the independent t test and the Chi-square test to determine the factors associated with pCR. The PURS and IURS of baseline breast tumors were extracted using 3D Slicer and PyRadiomics software. Five ML classifiers, including linear discriminant analysis (LDA), support vector machine (SVM), random forest (RF), logistic regression (LR), and adaptive boosting (AdaBoost), were applied to construct radiomics predictive models. The performance of the PURS and IURS models and the clinicopathologic predictors was assessed with respect to sensitivity, specificity, accuracy, and areas under the curve (AUCs). Ninety-seven patients achieved pCR. The clinicopathologic predictors obtained an AUC of 0.759. Among the PURS models, the RF classifier achieved better efficacy (AUC 0.889) than LR (0.849), AdaBoost (0.823), SVM (0.746), and LDA (0.732). The RF classifier also obtained the maximum AUC among the IURS models in the test set, 0.931, versus 0.920 (AdaBoost), 0.875 (LR), 0.825 (SVM), and 0.798 (LDA). The RF-based PURS yielded higher predictive ability (AUC 0.889; 95% CI 0.814, 0.947) than the clinicopathologic factors (AUC 0.759; 95% CI 0.657, 0.861; p < 0.05), but lower efficacy compared with IURS (AUC 0.931; 95% CI 0.865, 0.980; p < 0.05). Peritumoral US radiomics, as a novel potential biomarker, can assist clinical therapy decisions.
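The five classifiers named in this abstract map directly onto scikit-learn estimators; a minimal sketch of such a comparison, using synthetic stand-in features rather than the study's radiomics data (sample sizes mirror the abstract, hyperparameters are illustrative):

```python
# Sketch: compare the five classifiers from the abstract on synthetic
# "radiomics" features. Data and settings are placeholders, not the authors'.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in for extracted peritumoral radiomics features (358 patients).
X, y = make_classification(n_samples=358, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=108, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(probability=True, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
aucs = {}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(aucs)
```

On real data, the per-classifier ranking would depend on the feature set, as the PURS vs. IURS results above show.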

Computer-aided assessment for enlarged fetal heart with deep learning model.

Nurmaini S, Sapitri AI, Roseno MT, Rachmatullah MN, Mirani P, Bernolian N, Darmawahyuni A, Tutuko B, Firdaus F, Islami A, Arum AW, Bastian R

PubMed | May 16, 2025
Enlarged fetal heart conditions may indicate congenital heart diseases or other complications, making early detection through prenatal ultrasound essential. However, manual assessments by sonographers are often subjective, time-consuming, and inconsistent. This paper proposes a deep learning approach using the You Only Look Once (YOLO) architecture to automate fetal heart enlargement assessment. Using a set of ultrasound videos, YOLOv8 with a CBAM module demonstrated superior performance compared to YOLOv11 with self-attention. Incorporating the ResNeXtBlock, a residual network with cardinality, additionally enhanced accuracy and prediction consistency. The model exhibits strong capability in detecting fetal heart enlargement, offering a reliable computer-aided tool for sonographers during prenatal screenings. Further validation is required to confirm its clinical applicability. By improving early and accurate detection, this approach has the potential to enhance prenatal care, facilitate timely interventions, and contribute to better neonatal health outcomes.
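CBAM's channel-attention branch, the module this abstract adds to YOLOv8, pools each channel globally (average and max), passes both pooled vectors through a shared two-layer MLP, and gates channels with a sigmoid. A NumPy sketch with random placeholder weights, not the trained model:

```python
# Sketch of CBAM-style channel attention in plain NumPy.
import numpy as np

def channel_attention(x, w1, w2):
    """x: feature map (C, H, W). Returns x reweighted per channel."""
    avg = x.mean(axis=(1, 2))            # (C,) global average pooling
    mx = x.max(axis=(1, 2))              # (C,) global max pooling
    def mlp(v):                          # shared reduction/expansion MLP
        return w2 @ np.maximum(w1 @ v, 0.0)
    scale = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate
    return x * scale[:, None, None]

rng = np.random.default_rng(0)
C, r = 16, 4                             # channels, reduction ratio
x = rng.standard_normal((C, 8, 8))
w1 = rng.standard_normal((C // r, C))    # reduction layer weights
w2 = rng.standard_normal((C, C // r))    # expansion layer weights
y = channel_attention(x, w1, w2)
```

The full CBAM also has a spatial-attention branch, omitted here for brevity.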

Automated Microbubble Discrimination in Ultrasound Localization Microscopy by Vision Transformer.

Wang R, Lee WN

PubMed | May 15, 2025
Ultrasound localization microscopy (ULM) has revolutionized microvascular imaging by breaking the acoustic diffraction limit. However, different ULM workflows depend heavily on distinct prior knowledge, such as the impulse response and empirical selection of parameters (e.g., the number of microbubbles (MBs) per frame M), or the consistency of training-test dataset in deep learning (DL)-based studies. We hereby propose a general ULM pipeline that reduces priors. Our approach leverages a DL model that simultaneously distills microbubble signals and reduces speckle from every frame without estimating the impulse response and M. Our method features an efficient channel attention vision transformer (ViT) and a progressive learning strategy, enabling it to learn global information through training on progressively increasing patch sizes. Ample synthetic data were generated using the k-Wave toolbox to simulate various MB patterns, thus overcoming the deficiency of labeled data. The ViT output was further processed by a standard radial symmetry method for sub-pixel localization. Our method performed well on model-unseen public datasets: one in silico dataset with ground truth and four in vivo datasets of mouse tumor, rat brain, rat brain bolus, and rat kidney. Our pipeline outperformed conventional ULM, achieving higher positive predictive values (precision in DL, 0.88-0.41 vs. 0.83-0.16) and improved accuracy (root-mean-square errors: 0.25-0.14 λ vs. 0.31-0.13 λ) across a range of signal-to-noise ratios from 60 dB to 10 dB. Our model could detect more vessels in diverse in vivo datasets while achieving comparable resolutions to the standard method. The proposed ViT-based model, seamlessly integrated with state-of-the-art downstream ULM steps, improved the overall ULM performance with no priors.
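The pipeline's final step refines the ViT's detections to sub-pixel positions with a radial symmetry method; as a simpler stand-in illustrating the same refinement idea, an intensity-weighted centroid over a detection patch can be sketched as:

```python
# Sub-pixel localization sketch. The paper uses a radial symmetry method;
# the intensity-weighted centroid below is a simplified stand-in.
import numpy as np

def centroid_localize(patch):
    """Return (row, col) sub-pixel position of a bright spot in a patch."""
    patch = patch - patch.min()
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Synthetic Gaussian spot centered at (4.3, 5.7) on an 11x11 grid.
rr, cc = np.indices((11, 11))
spot = np.exp(-((rr - 4.3) ** 2 + (cc - 5.7) ** 2) / (2 * 1.5 ** 2))
r, c = centroid_localize(spot)
```

Radial symmetry methods are typically preferred over the centroid for noisy point-spread functions, since they are less biased by background intensity.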

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound.

Yu M, Peterson MR, Burgoine K, Harbaugh T, Olupot-Olupot P, Gladstone M, Hagmann C, Cowan FM, Weeks A, Morton SU, Mulondo R, Mbabazi-Kabachelor E, Schiff SJ, Monga V

PubMed | May 15, 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, called the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing the prevailing state-of-the-art infection detection techniques.
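The cross-attention step, in which features from the intersecting region of one view query the other view, can be sketched with plain scaled dot-product attention; token counts, dimensions, and weights below are illustrative, not the paper's:

```python
# Sketch of cross-view attention: coronal tokens attend to sagittal tokens.
import numpy as np

def cross_attention(q_feats, kv_feats, d):
    """q_feats: (Nq, d) query tokens; kv_feats: (Nk, d) key/value tokens."""
    scores = q_feats @ kv_feats.T / np.sqrt(d)      # (Nq, Nk) similarity
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over keys
    return attn @ kv_feats                          # (Nq, d) fused features

rng = np.random.default_rng(0)
coronal = rng.standard_normal((6, 32))    # tokens from the coronal view
sagittal = rng.standard_normal((8, 32))   # tokens from the sagittal view
fused = cross_attention(coronal, sagittal, 32)
```

In CLIF-Net this exchange happens at multiple feature levels and in both directions before the image-level fusion layer.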

Modifying the U-Net's Encoder-Decoder Architecture for Segmentation of Tumors in Breast Ultrasound Images.

Derakhshandeh S, Mahloojifar A

PubMed | May 15, 2025
Segmentation is one of the most significant steps in image processing: it separates a digital image into regions based on the characteristics of its pixels. In particular, the segmentation of breast ultrasound images is widely used for cancer identification and enables effective early diagnosis from medical images. Due to various ultrasound artifacts and noise sources, including speckle noise, low signal-to-noise ratio, and intensity heterogeneity, accurately segmenting medical images such as ultrasound images remains a challenging task. In this paper, we present a new method to improve the accuracy and effectiveness of breast ultrasound image segmentation. More precisely, we propose a neural network (NN) based on U-Net and an encoder-decoder architecture. Taking U-Net as the basis, both the encoder and decoder parts are developed by combining U-Net with other deep neural networks (Res-Net and MultiResUNet) and introducing a new approach and block (Co-Block), which preserve as much as possible the low-level and high-level features. The designed network is evaluated using the Breast Ultrasound Images (BUSI) dataset, which consists of 780 images categorized into three classes: normal, benign, and malignant. According to our extensive evaluations on this public breast ultrasound dataset, the designed network segments breast lesions more accurately than other state-of-the-art deep learning methods. With only 8.88 M parameters, our network (CResU-Net) obtained 82.88%, 77.5%, 90.3%, and 98.4% in terms of Dice similarity coefficient (DSC), intersection over union (IoU), area under the curve (AUC), and global accuracy (ACC), respectively, on the BUSI dataset.
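The DSC and IoU figures reported here come from standard overlap formulas; a minimal NumPy sketch on two toy binary masks:

```python
# Overlap metrics for binary segmentation masks.
import numpy as np

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True  # 16-pixel square
gt = np.zeros((8, 8), bool);   gt[3:7, 3:7] = True    # same size, offset by 1
# Overlap is the 3x3 block shared by both squares: 9 pixels.
print(dice(pred, gt), iou(pred, gt))  # → 0.5625 (18/32) and 9/23 ≈ 0.391
```

Note that DSC is always at least as large as IoU for the same masks, which is why papers report both.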

Segmentation of the thoracolumbar fascia in ultrasound imaging: a deep learning approach.

Bonaldi L, Pirri C, Giordani F, Fontanella CG, Stecco C, Uccheddu F

PubMed | May 15, 2025
Only in recent years has it been demonstrated that the thoracolumbar fascia is involved in low back pain (LBP), thus highlighting its implications for treatment. Furthermore, an easily accessible and non-invasive way to investigate the fascia in real time is ultrasound examination, which, to be reliable, must overcome challenges related to the configuration of the machine and the experience of the operator. The lack of a clear understanding of the fascial system, combined with this sensitivity to acquisition settings, has therefore generated a gap that makes effective evaluation difficult during clinical routine. The aim of the present work is to fill this gap by investigating the effectiveness of a deep learning approach for segmenting the thoracolumbar fascia from ultrasound imaging. A total of 538 ultrasound images of the thoracolumbar fascia of LBP subjects were used to train and test a deep learning network. An additional test set (Test set 2) was collected from another center, operator, machine manufacturer, patient cohort, and protocol to improve the generalizability of the study. A U-Net-based architecture was able to segment these structures with a final training accuracy of 0.99 and a validation accuracy of 0.91. The accuracy of the prediction computed on a test set (87 images not included in the training set) reached 0.94, with a mean intersection over union of 0.82 and a Dice score of 0.76. These metrics were surpassed by those on Test set 2. The validity of the predictions was also verified and confirmed by two expert clinicians. Automatic identification of the thoracolumbar fascia has shown promising results for thoroughly investigating its alteration and targeting a personalized rehabilitation intervention based on each patient-specific scenario.

Automated high precision PCOS detection through a segment anything model on super resolution ultrasound ovary images.

Reka S, Praba TS, Prasanna M, Reddy VNN, Amirtharajan R

PubMed | May 15, 2025
PCOS (polycystic ovary syndrome) is a multifaceted disorder that often affects the ovarian morphology of women of reproductive age, resulting in the development of numerous cysts on the ovaries. Ultrasound imaging typically diagnoses PCOS, helping clinicians assess the size, shape, and presence of cysts in the ovaries. Nevertheless, manual ultrasound image analysis is often challenging and time-consuming, resulting in inter-observer variability. To effectively treat PCOS and prevent its long-term effects, prompt and accurate diagnosis is crucial. In such cases, a prediction model based on deep learning can help physicians by streamlining the diagnosis procedure and reducing time and potential errors. This article proposes a novel integrated approach, QEI-SAM (Quality Enhanced Image - Segment Anything Model), for enhancing image quality and segmenting ovarian cysts for accurate prediction. Generative adversarial networks (GANs) and convolutional neural networks (CNNs) are the cutting-edge innovations that have supported the system in attaining the expected result. The proposed QEI-SAM model used Enhanced Super Resolution Generative Adversarial Networks (ESRGAN) for image enhancement to increase resolution, sharpen edges, and restore the finer structure of the ultrasound ovary images, achieving an SSIM of 0.938, a PSNR of 38.60, and an LPIPS of 0.0859. It then incorporates the Segment Anything Model (SAM) to segment ovarian cysts, achieving a Dice coefficient of 0.9501 and an IoU score of 0.9050. Furthermore, convolutional neural network classifiers (ResNet-50, ResNet-101, VGG-16, VGG-19, AlexNet, and Inception v3) were implemented to diagnose PCOS promptly. Finally, VGG-19 achieved the highest accuracy of 99.31%.
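Of the quality metrics reported for the ESRGAN stage, PSNR is the simplest to reproduce; a minimal NumPy sketch on synthetic images (not the study's data):

```python
# Peak signal-to-noise ratio between a reference image and a degraded copy.
import numpy as np

def psnr(ref, test, max_val=1.0):
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                   # reference image
noisy = np.clip(ref + rng.normal(0, 0.01, ref.shape), 0, 1)  # mild noise
print(round(psnr(ref, noisy), 1))
```

SSIM and LPIPS, the other two reported metrics, compare local structure and learned perceptual features respectively and need dedicated implementations (e.g., scikit-image, lpips).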

Interobserver agreement between artificial intelligence models in the thyroid imaging and reporting data system (TIRADS) assessment of thyroid nodules.

Leoncini A, Trimboli P

PubMed | May 15, 2025
As ultrasound (US) is the most accurate tool for assessing the thyroid nodule (TN) risk of malignancy (RoM), international societies have published various Thyroid Imaging and Reporting Data Systems (TIRADSs). With the recent advent of artificial intelligence (AI), clinicians and researchers should ask how AI interprets the terminology of the TIRADSs and whether different AIs agree in the risk assessment of TNs. The study aim was to analyze the interobserver agreement (IOA) between AIs in assessing the RoM of TNs across various TIRADS categories using a case series created by combining TIRADS descriptors. ChatGPT, Google Gemini, and Claude were compared. ACR-TIRADS, EU-TIRADS, and K-TIRADS were employed to evaluate the AI assessment. Multiple written scenarios for the three TIRADSs were created, the cases were evaluated by the three AIs, and their assessments were analyzed and compared. The IOA was estimated by comparing kappa (κ) values. Ninety scenarios were created. With ACR-TIRADS, the IOA analysis gave κ = 0.58 between ChatGPT and Gemini, 0.53 between ChatGPT and Claude, and 0.90 between Gemini and Claude. With EU-TIRADS, κ = 0.73 was observed between ChatGPT and Gemini, 0.62 between ChatGPT and Claude, and 0.72 between Gemini and Claude. With K-TIRADS, κ = 0.88 was found between ChatGPT and Gemini, 0.70 between ChatGPT and Claude, and 0.61 between Gemini and Claude. This study found non-negligible variability between the three AIs. Clinicians and patients should be aware of these new findings.
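The pairwise agreement values above are Cohen's kappa; a minimal sketch with scikit-learn on invented TIRADS category labels (the labels below are hypothetical, for illustration only):

```python
# Cohen's kappa between two raters over the same set of cases.
from sklearn.metrics import cohen_kappa_score

# Hypothetical TIRADS category assignments by two models for ten scenarios.
model_a = [3, 4, 4, 5, 3, 2, 4, 5, 5, 3]
model_b = [3, 4, 5, 5, 3, 2, 4, 4, 5, 3]
kappa = cohen_kappa_score(model_a, model_b)
print(round(kappa, 2))  # → 0.72
```

Kappa corrects raw agreement for the agreement expected by chance given each rater's category frequencies, which is why it is preferred over simple percent agreement for IOA studies.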

Recognizing artery segments on carotid ultrasonography using embedding concatenation of deep image and vision-language models.

Lo CM, Sung SF

PubMed | May 14, 2025
Evaluating large artery atherosclerosis is critical for predicting and preventing ischemic strokes. Ultrasonographic assessment of the carotid arteries is the preferred first-line examination due to its ease of use, noninvasiveness, and absence of radiation exposure. This study proposed an automated classification model for the common carotid artery (CCA), carotid bulb, internal carotid artery (ICA), and external carotid artery (ECA) to enhance the quantification of carotid artery examinations. Approach: A total of 2,943 B-mode ultrasound images (CCA: 1,563; bulb: 611; ICA: 476; ECA: 293) from 288 patients were collected. Three distinct sets of embedding features were extracted from artificial intelligence networks, including pre-trained DenseNet201, vision transformer (ViT), and echo contrastive language-image pre-training (EchoCLIP) models, using deep learning architectures for pattern recognition. These features were then combined in a support vector machine (SVM) classifier to interpret the anatomical structures in B-mode images. Main results: After ten-fold cross-validation, the model achieved an accuracy of 82.3%, significantly better than using individual feature sets (p < 0.001). Significance: The proposed model could make carotid artery examinations more accurate and consistent. The source code is available at https://github.com/buddykeywordw/Artery-Segments-Recognition
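The embedding-concatenation step amounts to stacking the three feature sets column-wise before the SVM; the arrays below are random stand-ins for the DenseNet201, ViT, and EchoCLIP embeddings, and the dimensions are typical values for such backbones, not necessarily the study's:

```python
# Sketch: concatenate per-image embeddings from three backbones, then fit
# one SVM over the combined feature vector. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120                                    # number of images (illustrative)
densenet = rng.standard_normal((n, 1920))  # DenseNet201 pooled features
vit = rng.standard_normal((n, 768))        # ViT CLS embeddings
echoclip = rng.standard_normal((n, 512))   # EchoCLIP image embeddings
labels = rng.integers(0, 4, n)             # CCA / bulb / ICA / ECA classes

X = np.concatenate([densenet, vit, echoclip], axis=1)  # (n, 3200)
clf = SVC().fit(X, labels)
preds = clf.predict(X)
```

In practice the embeddings would usually be standardized per feature before concatenation, since the three backbones produce features on different scales.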

Novel AI Guided Non-Expert Compression Ultrasound DVT Diagnostic Pathway May Reduce Vascular Laboratory Venous Testing.

Avgerinos E, Spiliopoulos S, Psachoulia F, Yfantis A, Plakas G, Grigoriadis S, Speranza G, Kakisis Y

PubMed | May 14, 2025
Ultrasonography and D-dimer testing are established modalities for evaluating potential lower extremity deep venous thrombosis (DVT). The ThinkSono Guidance system is AI based software that allows non-ultrasound-trained providers to perform compression ultrasounds for evaluation by remote interpreters. This study evaluates its clinical utilisation and its potential to reduce venous duplex scans and waiting times. Patients with suspected DVT were prospectively recruited through the institution's emergency department. Patients underwent an AI guided two region proximal DVT compression examination by non-ultrasound-trained providers using the ThinkSono Guidance system, plus D-dimer testing. Ultrasound images remotely reviewed by the on call radiologist were rated for diagnostic quality; all images of sufficient quality were assessed as either "Compressible/no proximal DVT" or "Inadequate imaging/possible DVT". All patients assessed as "compressible" with negative D-dimers were discharged; all other patients were sent for a venous duplex scan. Time to diagnosis, sensitivity, and specificity of ThinkSono Guidance against D-dimers and full duplex scans were calculated. Fifty-three patients (average age 56 ± 18 years, 45% female) were scanned with ThinkSono Guidance by one of three non-ultrasound-trained providers. All scans were of diagnostic quality. ThinkSono Guidance with radiologist review yielded 45 negative DVT diagnoses (85%). Seventeen of these with negative D-dimers were discharged (32%); 28 required duplex ultrasound testing per trial protocol (23 due to positive D-dimers, five due to unavailability of D-dimer). All of these duplexes were negative (100% sensitivity). Eight patients were suspected to have DVT by the reviewing radiologist, and duplex confirmed DVT in six (96% ThinkSono Guidance specificity, 36% D-dimer specificity). ThinkSono Guidance scans averaged 6.75 minutes for scan and review, and the median time from scan initiation to review was 37.5 minutes. This suggests a significant proportion of patients with suspected DVT could safely avoid duplex ultrasound and D-dimer testing using the ThinkSono system, setting the basis for a novel AI assisted diagnostic pathway.
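The headline sensitivity and specificity follow directly from the confusion-matrix counts in the abstract (6 duplex-confirmed DVTs among 8 flagged scans, 45 negatives with no missed DVT); a minimal sketch:

```python
# Sensitivity and specificity from confusion-matrix counts; the counts
# mirror the study's duplex-confirmed results.
def sens_spec(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # fraction of true DVTs detected
    specificity = tn / (tn + fp)   # fraction of non-DVTs correctly cleared
    return sensitivity, specificity

sens, spec = sens_spec(tp=6, fp=2, tn=45, fn=0)
print(sens, round(spec, 2))  # → 1.0 0.96
```

With zero false negatives the sensitivity estimate is 100%, though its confidence interval is wide at this sample size.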
