Ultrasound-based radiomics and machine learning for enhanced diagnosis of knee osteoarthritis: Evaluation of diagnostic accuracy, sensitivity, specificity, and predictive value.

Kiso T, Okada Y, Kawata S, Shichiji K, Okumura E, Hatsumi N, Matsuura R, Kaminaga M, Kuwano H, Okumura E

PubMed · Jun 1 2025
To evaluate the usefulness of radiomics features extracted from ultrasonographic images in diagnosing and predicting the severity of knee osteoarthritis (OA). In this single-center, prospective, observational study, radiomics features were extracted from standing radiographs and ultrasonographic images of knees of patients aged 40-85 years with primary medial OA and without OA. Analysis was conducted using LIFEx software (version 7.2.n), ANOVA, and LASSO regression. The diagnostic accuracy of three different models, including a statistical model incorporating background factors and machine learning models, was evaluated. Among 491 limbs analyzed, 318 were OA and 173 were non-OA cases. The mean age was 72.7 (±8.7) and 62.6 (±11.3) years in the OA and non-OA groups, respectively. The OA group included 81 (25.5 %) men and 237 (74.5 %) women, whereas the non-OA group included 73 (42.2 %) men and 100 (57.8 %) women. A statistical model using the cutoff value of MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) achieved a specificity of 0.98 and sensitivity of 0.47. Machine learning diagnostic models (Model 2) demonstrated areas under the curve (AUCs) of 0.88 (discriminant analysis) and 0.87 (logistic regression), with sensitivities of 0.80 and 0.81 and specificities of 0.82 and 0.80, respectively. For severity prediction, the statistical model using MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) showed sensitivity and specificity values of 0.78 and 0.86, respectively, whereas machine learning models achieved an AUC of 0.92, a sensitivity of 0.81, and a specificity of 0.85. The use of radiomics features in diagnosing knee OA shows potential as a supportive tool for enhancing clinicians' decision-making.
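For illustration, a minimal sketch of a LASSO feature-selection step followed by a logistic-regression classifier, in the spirit of the pipeline described above (data, feature counts, and hyperparameters below are placeholders, not the study's):

```python
# Hedged sketch: LASSO feature selection + logistic regression on radiomics features.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# X: radiomics feature matrix (limbs x features), y: 1 = OA, 0 = non-OA (placeholder data)
X, y = np.random.rand(491, 100), np.random.randint(0, 2, 491)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# LASSO shrinks uninformative feature coefficients to zero
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0)).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso[-1].coef_)        # indices of retained features
if selected.size == 0:                            # fallback if LASSO removed everything
    selected = np.arange(X.shape[1])

# Logistic regression on the retained features
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr[:, selected], y_tr)

prob = clf.predict_proba(X_te[:, selected])[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, prob >= 0.5).ravel()
print(f"AUC={roc_auc_score(y_te, prob):.2f}, "
      f"sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")
```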

Combining Deep Data-Driven and Physics-Inspired Learning for Shear Wave Speed Estimation in Ultrasound Elastography.

Tehrani AKZ, Schoen S, Candel I, Gu Y, Guo P, Thomenius K, Pierce TT, Wang M, Tadross R, Washburn M, Rivaz H, Samir AE

PubMed · Jun 1 2025
Shear wave elastography (SWE) provides quantitative markers for tissue characterization by measuring the shear wave speed (SWS), which reflects tissue stiffness. SWE uses an acoustic radiation force pulse sequence to generate shear waves that propagate laterally through tissue with transient displacements. These waves travel perpendicular to the applied force, and their displacements are tracked using high-frame-rate ultrasound. Estimating the SWS map involves two main steps: speckle tracking and SWS estimation. Speckle tracking calculates particle velocity by measuring RF/IQ data displacement between adjacent firings, while SWS estimation methods typically compare particle velocity profiles of samples that are laterally a few millimeters apart. Deep learning (DL) methods have gained attention for SWS estimation, often relying on supervised training using simulated data. However, these methods may struggle with real-world data, which can differ significantly from the simulated training data, potentially leading to artifacts in the estimated SWS map. To address this challenge, we propose a physics-inspired learning approach that utilizes real data without known SWS values. Our method employs an adaptive unsupervised loss function, allowing the network to train with real noisy data to minimize artifacts and improve robustness. We validate our approach using experimental phantom data and in vivo liver data from two human subjects, demonstrating enhanced accuracy and reliability in SWS estimation compared with conventional and supervised methods. This hybrid approach leverages the strengths of both data-driven and physics-inspired learning, offering a promising solution for more accurate and robust SWS mapping in clinical applications.
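For context, conventional time-of-flight SWS estimation compares particle-velocity profiles recorded at two laterally separated positions; the sketch below illustrates that step with hypothetical sampling parameters (it is not the authors' physics-inspired loss):

```python
# Hedged sketch: time-of-flight SWS estimate from two particle-velocity profiles.
import numpy as np

prf = 10_000.0    # pulse repetition frequency of the tracking firings (Hz), hypothetical
dx = 2.0e-3       # lateral spacing between the two tracked positions (m), hypothetical

def estimate_sws(v1, v2, prf, dx):
    """SWS from two particle-velocity time profiles via cross-correlation delay."""
    v1, v2 = v1 - v1.mean(), v2 - v2.mean()
    xcorr = np.correlate(v2, v1, mode="full")
    lag = np.argmax(xcorr) - (len(v1) - 1)     # positive lag: wave reaches v2 later
    dt = lag / prf
    return dx / dt if dt > 0 else np.nan

# Example: a Gaussian pulse arriving 0.5 ms later at the second lateral position
t = np.arange(200) / prf
v1 = np.exp(-((t - 1.0e-3) ** 2) / (2 * (0.2e-3) ** 2))
v2 = np.exp(-((t - 1.5e-3) ** 2) / (2 * (0.2e-3) ** 2))
print(f"SWS ~ {estimate_sws(v1, v2, prf, dx):.2f} m/s")   # expected ~4 m/s
```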

Prediction of BRAF and TERT status in PTCs by machine learning-based ultrasound radiomics methods: A multicenter study.

Shi H, Ding K, Yang XT, Wu TF, Zheng JY, Wang LF, Zhou BY, Sun LP, Zhang YF, Zhao CK, Xu HX

PubMed · Jun 1 2025
Preoperative identification of genetic mutations supports individualized treatment and management of papillary thyroid carcinoma (PTC) patients. Purpose: To investigate the predictive value of machine learning (ML)-based ultrasound (US) radiomics approaches for BRAF V600E and TERT promoter status (individually and in coexistence) in PTC. This multicenter study retrospectively collected data from 1076 PTC patients who underwent genetic testing for BRAF V600E and TERT promoter mutations between March 2016 and December 2021. Radiomics features were extracted from routine grayscale ultrasound images, and gene status-related features were selected. These features were then fed into nine different ML models to predict the respective mutations, and the optimal models were further combined with statistically significant clinical information. The models underwent training and testing, and comparisons were performed. The Decision Tree-based US radiomics approach had superior prediction performance for the BRAF V600E mutation compared with the other eight ML models, with an area under the curve (AUC) of 0.767 versus 0.547-0.675 (p < 0.05). The US radiomics methodology employing Logistic Regression exhibited the highest accuracy in predicting TERT promoter mutations (AUC, 0.802 vs. 0.525-0.701, p < 0.001) and coexisting BRAF V600E and TERT promoter mutations (0.805 vs. 0.678-0.743, p < 0.001) within the test set. The incorporation of clinical factors enhanced predictive performance to 0.810 for the BRAF V600E mutation, 0.897 for TERT promoter mutations, and 0.900 for dual mutations in PTCs. The machine learning-based US radiomics methods, integrated with clinical characteristics, demonstrated effectiveness in predicting the BRAF V600E and TERT promoter mutations in PTCs.
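As a sketch of the multi-model comparison described above, the snippet below fits several scikit-learn classifiers on a radiomics feature matrix and reports test-set AUC per model (data and hyperparameters are placeholders, not the study's):

```python
# Hedged sketch: benchmarking several classifiers by test-set AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

X = np.random.rand(1076, 50)            # radiomics feature matrix (placeholder)
y = np.random.randint(0, 2, 1076)       # mutation status label (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "DecisionTree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```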

BUS-M2AE: Multi-scale Masked Autoencoder for Breast Ultrasound Image Analysis.

Yu L, Gou B, Xia X, Yang Y, Yi Z, Min X, He T

PubMed · Jun 1 2025
Masked AutoEncoder (MAE) has demonstrated significant potential in medical image analysis by reducing the cost of manual annotations. However, MAE and its recent variants are not well-developed for ultrasound images in breast cancer diagnosis, as they struggle to generalize to the task of distinguishing ultrasound breast tumors of varying sizes. This limitation hinders the model's ability to adapt to the diverse morphological characteristics of breast tumors. In this paper, we propose a novel Breast UltraSound Multi-scale Masked AutoEncoder (BUS-M2AE) model to address the limitations of the general MAE. BUS-M2AE incorporates multi-scale masking methods at both the token level during the image patching stage and the feature level during the feature learning stage. These two multi-scale masking methods enable flexible strategies to match the explicit masked patches and the implicit features with varying tumor scales. By introducing these multi-scale masking methods in the image patching and feature learning phases, BUS-M2AE allows the pre-trained vision transformer to adaptively perceive and accurately distinguish breast tumors of different sizes, thereby improving the model's overall performance in handling diverse tumor morphologies. Comprehensive experiments demonstrate that BUS-M2AE outperforms recent MAE variants and commonly used supervised learning methods in breast cancer classification and tumor segmentation tasks.
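The authors' exact masking scheme is not reproduced here; the sketch below only illustrates the general idea of masking tokens at more than one spatial scale, as an MAE-style pre-training step might do (grid size, ratio, and block sizes are hypothetical):

```python
# Hedged sketch: multi-scale token masking for an MAE-style vision transformer.
import numpy as np

def multi_scale_mask(grid=14, mask_ratio=0.75, block_sizes=(1, 2, 4), rng=None):
    """Return a boolean (grid x grid) token mask; True marks a masked token."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = np.zeros((grid, grid), dtype=bool)
    target = int(mask_ratio * grid * grid)
    while mask.sum() < target:
        b = int(rng.choice(block_sizes))          # pick a block scale
        r = int(rng.integers(0, grid - b + 1))    # top-left row of the block
        c = int(rng.integers(0, grid - b + 1))    # top-left column of the block
        mask[r:r + b, c:c + b] = True             # hide a b x b block of tokens
    return mask

m = multi_scale_mask(rng=np.random.default_rng(0))
print(f"masked {m.sum()} / {m.size} tokens ({m.mean():.0%})")
```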

ICPPNet: A semantic segmentation network model based on inter-class positional prior for scoliosis reconstruction in ultrasound images.

Wang C, Zhou Y, Li Y, Pang W, Wang L, Du W, Yang H, Jin Y

PubMed · Jun 1 2025
Given the radiation hazard of X-rays, safer, more convenient, and more cost-effective ultrasound methods are gradually becoming an alternative diagnostic approach for scoliosis. In ultrasound images of the spine, however, it is challenging to accurately identify spinal regions because the target areas are relatively small and the images contain considerable interfering information. We therefore developed a novel neural network that incorporates prior knowledge to precisely segment spine regions in ultrasound images. We constructed a semantic segmentation dataset of spinal ultrasound images containing 3136 images from 30 patients with scoliosis, and we propose a network model (ICPPNet) that fully exploits inter-class positional prior knowledge by incorporating an inter-class positional probability heatmap to achieve accurate segmentation of the target areas. ICPPNet achieved an average Dice similarity coefficient of 70.83% and an average 95% Hausdorff distance of 11.28 mm on the dataset, demonstrating excellent performance. The average error between the Cobb angle measured by our method and that measured from X-ray images is 1.41 degrees, with a coefficient of determination of 0.9879, indicating a strong correlation. ICPPNet provides a new solution for medical image segmentation tasks with positional prior knowledge between target classes and strongly supports the subsequent reconstruction of spine models from ultrasound images.
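The two reported segmentation metrics can be computed as in the following sketch (toy masks; a real evaluation would use the segmentation outputs and pixel spacing of the dataset):

```python
# Hedged sketch: Dice similarity coefficient and 95th-percentile Hausdorff distance.
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred, gt):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing_mm=1.0):
    """95th-percentile symmetric Hausdorff distance between mask foreground points (mm)."""
    p = np.argwhere(pred) * spacing_mm
    g = np.argwhere(gt) * spacing_mm
    d = cdist(p, g)                     # pairwise distances between foreground pixels
    return np.percentile(np.hstack([d.min(axis=1), d.min(axis=0)]), 95)

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[22:42, 20:40] = True
print(f"Dice = {dice(pred, gt):.2%}, HD95 = {hd95(pred, gt):.1f} mm")
```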

Eigenhearts: Cardiac diseases classification using eigenfaces approach.

Groun N, Villalba-Orero M, Casado-Martín L, Lara-Pezzi E, Valero E, Le Clainche S, Garicano-Mena J

PubMed · Jun 1 2025
In the realm of cardiovascular medicine, medical imaging plays a crucial role in accurately classifying cardiac diseases and making precise diagnoses. However, the integration of data science techniques in this field presents significant challenges, as it requires a large volume of images, while ethical constraints, high costs, and variability in imaging protocols limit data acquisition. As a consequence, it is necessary to investigate different avenues to overcome this challenge. In this contribution, we offer an innovative tool to address this limitation. In particular, we examine the application of a well-recognized method, the eigenfaces approach, to classify cardiac diseases. This approach was originally developed to efficiently represent face images using principal component analysis, which provides a set of eigenvectors (known as eigenfaces) that explain the variation between face images. Given its effectiveness in face recognition, we sought to evaluate its applicability to more complex medical imaging datasets. In particular, we integrate this approach with convolutional neural networks to classify echocardiography images taken from mice in five distinct cardiac conditions (healthy, diabetic cardiomyopathy, myocardial infarction, obesity, and TAC hypertension). The results show a substantial and noteworthy enhancement when employing the singular value decomposition for pre-processing, with classification accuracy increasing by approximately 50%.
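A minimal sketch of the eigenfaces-style pre-processing step: frames are flattened, decomposed with PCA/SVD, and re-expressed in the leading eigenvector basis before classification (placeholder data; the component count is hypothetical):

```python
# Hedged sketch: SVD/PCA ("eigenhearts") pre-processing before a classifier.
import numpy as np
from sklearn.decomposition import PCA

n_images, h, w = 500, 64, 64
images = np.random.rand(n_images, h, w)           # placeholder echocardiography frames

X = images.reshape(n_images, -1)                  # flatten each frame into a vector
pca = PCA(n_components=50, svd_solver="randomized", random_state=0).fit(X)

eigenhearts = pca.components_.reshape(-1, h, w)   # leading eigenvectors viewed as images
coeffs = pca.transform(X)                         # low-dimensional representation per frame

print(f"explained variance with 50 components: {pca.explained_variance_ratio_.sum():.1%}")
# `coeffs`, or frames reconstructed from them, can then be fed to a CNN classifier.
```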

From Guidelines to Intelligence: How AI Refines Thyroid Nodule Biopsy Decisions.

Zeng W, He Y, Xu R, Mai W, Chen Y, Li S, Yi W, Ma L, Xiong R, Liu H

PubMed · May 31 2025
To evaluate the value of combining the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS) with the Demetics ultrasound diagnostic system in reducing the rate of fine-needle aspiration (FNA) biopsies for thyroid nodules. A retrospective study analyzed 548 thyroid nodules from 454 patients, all meeting ACR TI-RADS criteria (category ≥3 and diameter ≥10 mm) for FNA. Nodules were reclassified using the combined ACR TI-RADS and Demetics system (De TI-RADS), and the biopsy rates were compared. Using ACR TI-RADS alone, the biopsy rate was 70.6% (387/548), with a positive predictive value (PPV) of 52.5% (203/387), an unnecessary biopsy rate of 47.5% (184/387), and a missed diagnosis rate of 11.0% (25/228). Incorporating Demetics reduced the biopsy rate to 48.1% (264/548), the unnecessary biopsy rate to 17.4% (46/265), and the missed diagnosis rate to 4.4% (10/228), while increasing the PPV to 82.6% (218/264). All differences between ACR TI-RADS and De TI-RADS were statistically significant (p < 0.05). The integration of ACR TI-RADS with the Demetics system improves nodule risk assessment by enhancing diagnostic accuracy and efficiency. This approach reduces unnecessary biopsies and missed diagnoses while increasing PPV, offering a more reliable tool for clinicians and patients.
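The ACR TI-RADS-alone figures quoted above follow directly from the raw counts, as the arithmetic sketch below shows (counts taken from the abstract):

```python
# Worked check of the ACR TI-RADS-alone figures from the reported counts.
biopsied, malignant_biopsied = 387, 203       # nodules sent to FNA; malignant among them
total_nodules, total_malignant = 548, 228

biopsy_rate = biopsied / total_nodules                              # 387/548 ~ 0.706
ppv = malignant_biopsied / biopsied                                 # 203/387 ~ 0.525
unnecessary = (biopsied - malignant_biopsied) / biopsied            # 184/387 ~ 0.475
missed = (total_malignant - malignant_biopsied) / total_malignant   # 25/228  ~ 0.110

print(f"biopsy rate {biopsy_rate:.1%}, PPV {ppv:.1%}, "
      f"unnecessary {unnecessary:.1%}, missed {missed:.1%}")
```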

Subclinical atrial fibrillation prediction based on deep learning and strain analysis using echocardiography.

Huang SH, Lin YC, Chen L, Unankard S, Tseng VS, Tsao HM, Tang GJ

PubMed · May 31 2025
Subclinical atrial fibrillation (SCAF), also known as atrial high-rate episodes (AHREs), refers to asymptomatic heart rate elevations associated with increased risks of atrial fibrillation and cardiovascular events. Although deep learning (DL) models leveraging echocardiographic images are widely used for cardiac function analysis, their application to AHRE prediction remains unexplored. This study introduces a novel DL-based framework for automatic AHRE detection using echocardiograms. The approach encompasses left atrium (LA) segmentation, LA strain feature extraction, and AHRE classification. Data from 117 patients with cardiac implantable electronic devices undergoing echocardiography were analyzed, with 80% allocated to the development set and 20% to the test set. LA segmentation accuracy was quantified using the Dice coefficient, yielding scores of 0.923 for the LA cavity and 0.741 for the LA wall. For AHRE classification, metrics such as area under the curve (AUC), accuracy, sensitivity, and specificity were employed. A transformer-based model integrating patient characteristics demonstrated robust performance, achieving a mean AUC of 0.815, accuracy of 0.809, sensitivity of 0.800, and specificity of 0.783 for a 24-h AHRE duration threshold. This framework represents a reliable tool for AHRE assessment and holds significant potential for early SCAF detection, enhancing clinical decision-making and patient outcomes.
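As a rough illustration of the strain-feature step, the sketch below derives a percent strain curve from per-frame lengths of the segmented LA contour (hypothetical values; the study's exact strain definition may differ):

```python
# Hedged sketch: LA strain per frame as the relative change in contour length.
import numpy as np

def la_strain(contour_lengths, ref_index=0):
    """Percent strain per frame relative to the reference frame's contour length."""
    lengths = np.asarray(contour_lengths, dtype=float)
    return (lengths - lengths[ref_index]) / lengths[ref_index] * 100.0

lengths_mm = [120, 124, 131, 138, 133, 126, 121]   # hypothetical per-frame LA contour lengths
strain = la_strain(lengths_mm)
print(f"peak LA strain = {strain.max():.1f}%")
# Peak or per-frame strain values can be combined with patient characteristics
# as inputs to a downstream classifier such as the transformer described above.
```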

Discriminating Clear Cell From Non-Clear Cell Renal Cell Carcinoma: A Machine Learning Approach Using Contrast-enhanced Ultrasound Radiomics.

Liang M, Wu S, Ou B, Wu J, Qiu H, Zhao X, Luo B

PubMed · May 31 2025
The aim of this investigation is to assess the clinical usefulness of a machine learning model using contrast-enhanced ultrasound (CEUS) radiomics in discriminating clear cell renal cell carcinoma (ccRCC) from non-ccRCC. A total of 292 patients with pathologically confirmed RCC subtypes underwent CEUS (development set, n = 231; validation set, n = 61) in a retrospective study. Radiomics features were derived from CEUS images acquired during the cortical and parenchymal phases. Radiomics models were developed using logistic regression (LR), support vector machine, decision tree, naive Bayes, gradient boosting machine, and random forest. The most suitable model was identified based on the area under the receiver operating characteristic curve (AUC). Appropriate clinical CEUS features were identified through univariate and multivariate LR analyses to develop a clinical model. By integrating radiomics and clinical CEUS features, a combined model was established. A comprehensive evaluation of the models' performance was conducted. After reduction and selection were applied to the 2250 radiomics features, a final set of 8 features was retained. Among the models, the LR model had the highest performance on the validation set and showed good robustness. In both the development and validation sets, both the radiomics (AUC, 0.946 and 0.927) and combined models (AUC, 0.949 and 0.925) outperformed the clinical model (AUC, 0.851 and 0.768), showing higher AUC values (all p < 0.05). The combined model exhibited favorable calibration and clinical benefit. The combined model, integrating clinical CEUS and CEUS radiomics features, demonstrated good diagnostic performance in discriminating ccRCC from non-ccRCC.
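Calibration of a probabilistic model of this kind can be checked with a reliability curve, as in the sketch below (placeholder labels and probabilities; in practice these come from the fitted combined model on the validation set):

```python
# Hedged sketch: calibration (reliability) check of predicted probabilities.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 61)                               # validation labels (placeholder)
y_prob = np.clip(y_true * 0.6 + rng.random(61) * 0.4, 0, 1)   # predicted probabilities (placeholder)

frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5)
for p, f in zip(mean_pred, frac_pos):
    print(f"mean predicted {p:.2f} -> observed fraction {f:.2f}")
```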

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed · May 30 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
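A minimal inference-and-throughput sketch in the spirit of the deployment described above, assuming the `ultralytics` package and a fine-tuned weights file (the file name and frame source are placeholders):

```python
# Hedged sketch: YOLO11 inference on ioUS frames with a frames-per-second measurement.
import time
import numpy as np
from ultralytics import YOLO

model = YOLO("yolo11s_ious.pt")      # placeholder path to fine-tuned tumor-detection weights
frames = [np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8) for _ in range(100)]

start = time.perf_counter()
for frame in frames:
    results = model(frame, verbose=False)    # one forward pass per frame
    boxes = results[0].boxes.xyxy            # predicted tumor bounding boxes
elapsed = time.perf_counter() - start
print(f"{len(frames) / elapsed:.1f} frames per second")
```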