
Synthetic Ultrasound Image Generation for Breast Cancer Diagnosis Using cVAE-WGAN Models: An Approach Based on Generative Artificial Intelligence

Mondillo, G., Masino, M., Colosimo, S., Perrotta, A., Frattolillo, V., Abbate, F. G.

medRxiv preprint · Jun 2, 2025
The scarcity and imbalance of medical image datasets hinder the development of robust computer-aided diagnosis (CAD) systems for breast cancer. This study explores the application of advanced generative models, based on generative artificial intelligence (GenAI), for the synthesis of digital breast ultrasound images. Using a hybrid Conditional Variational Autoencoder-Wasserstein Generative Adversarial Network (cVAE-WGAN) architecture, we developed a system to generate high-quality synthetic images conditioned on the class (malignant vs. normal/benign). These synthetic images, generated from the low-resolution BreastMNIST dataset and filtered for quality, were systematically integrated with real training data at different mixing ratios (W). The performance of a CNN classifier trained on these mixed datasets was evaluated against a baseline model trained only on real data balanced with SMOTE. The optimal integration (mixing weight W=0.25) produced a significant performance increase on the real test set: +8.17% in macro-average F1-score and +4.58% in accuracy compared to using real data alone. Analysis confirmed the originality of the generated samples. This approach offers a promising solution for overcoming data limitations in image-based breast cancer diagnostics, potentially improving the capabilities of CAD systems.
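
A minimal sketch of the data-mixing step described in the abstract, assuming real and synthetic images are already available as NumPy arrays; the interpretation of the mixing weight W as the synthetic fraction of the final training set, and the function and parameter names, are assumptions rather than the authors' implementation.

```python
import numpy as np

def mix_datasets(real_x, real_y, synth_x, synth_y, w=0.25, seed=0):
    """Blend real and synthetic samples so synthetic data makes up roughly a
    fraction `w` of the final training set (hypothetical reading of W)."""
    rng = np.random.default_rng(seed)
    n_real = len(real_x)
    n_synth = int(w / (1.0 - w) * n_real)          # synth / (real + synth) ~= w
    idx = rng.choice(len(synth_x), size=min(n_synth, len(synth_x)), replace=False)
    x = np.concatenate([real_x, synth_x[idx]])
    y = np.concatenate([real_y, synth_y[idx]])
    perm = rng.permutation(len(x))                 # shuffle before training the CNN
    return x[perm], y[perm]
```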

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

PubMed · Jun 1, 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27 and acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound and manually segmented 2D ultrasound images. The ultrasound methods were compared to MRI (gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
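A simplified illustration of how segmented 2D frames and probe-tracking data could be combined into a volume, assuming roughly parallel slices integrated as trapezoidal slabs; the paper's reconstruction is more sophisticated, so this is only an approximation for intuition.

```python
import numpy as np

def placental_volume(masks, probe_positions, pixel_area_mm2):
    """Approximate volume from a tracked sweep: masks is a list of binary
    segmentation arrays, probe_positions an array of per-frame probe
    coordinates in mm (from the tracking system)."""
    volume_mm3 = 0.0
    for i in range(1, len(masks)):
        spacing = np.linalg.norm(probe_positions[i] - probe_positions[i - 1])   # mm between frames
        mean_area = 0.5 * (masks[i].sum() + masks[i - 1].sum()) * pixel_area_mm2
        volume_mm3 += mean_area * spacing          # trapezoidal slab between frames
    return volume_mm3 / 1000.0                     # mm^3 -> mL
```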

Significant reduction in manual annotation costs in ultrasound medical image database construction through step by step artificial intelligence pre-annotation.

Zheng F, XingMing L, JuYing X, MengYing T, BaoJian Y, Yan S, KeWei Y, ZhiKai L, Cheng H, KeLan Q, XiHao C, WenFei D, Ping H, RunYu W, Ying Y, XiaoHui B

PubMed · Jun 1, 2025
This study investigates the feasibility of reducing manual image annotation costs in medical image database construction by using a step-by-step approach in which the artificial intelligence model (AI model) trained on a previous batch of data automatically pre-annotates the next batch of image data, taking thyroid nodule annotation in ultrasound images as an example. The study used YOLOv8 as the AI model. During AI model training, in addition to conventional image augmentation techniques, augmentation methods specifically tailored for ultrasound images were employed to balance the quantity differences between thyroid nodule classes and enhance model training effectiveness. The study found that training the model with augmented data significantly outperformed training with raw image data. When the number of original images was only 1,360, with 7 thyroid nodule classifications, pre-annotation using the AI model trained on augmented data could save at least 30% of the manual annotation workload for junior physicians. When the number of original images reached 6,800, the classification accuracy of the AI model trained on augmented data was very close to that of junior physicians, eliminating the need for manual preliminary annotation.
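
A sketch of one iteration of the step-by-step pre-annotation loop using the Ultralytics YOLOv8 API; the file paths, dataset YAML, and confidence threshold are hypothetical placeholders, not values from the study.

```python
from ultralytics import YOLO  # Ultralytics YOLOv8 package

def pre_annotate_next_batch(weights_from_prev_batch, image_dir, conf=0.25):
    """Use the model trained on the previous batch to write draft labels for
    the next batch; physicians then only review and correct them."""
    model = YOLO(weights_from_prev_batch)                          # e.g. "runs/detect/train/weights/best.pt"
    return model.predict(source=image_dir, conf=conf, save_txt=True)  # saves YOLO-format label files

# Hypothetical usage: train on batch 1, pre-annotate batch 2.
# model = YOLO("yolov8n.pt")
# model.train(data="thyroid_batch1.yaml", epochs=100)
# pre_annotate_next_batch("runs/detect/train/weights/best.pt", "images/batch2")
```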

Ensemble learning of deep CNN models and two stage level prediction of Cobb angle on surface topography in adolescents with idiopathic scoliosis.

Hassan M, Gonzalez Ruiz JM, Mohamed N, Burke TN, Mei Q, Westover L

PubMed · Jun 1, 2025
This study employs Convolutional Neural Networks (CNNs) as feature extractors with appended regression layers for the non-invasive prediction of Cobb Angle (CA) from Surface Topography (ST) scans in adolescents with Idiopathic Scoliosis (AIS). The aim is to minimize radiation exposure during critical growth periods by offering a reliable, non-invasive assessment tool. The efficacy of various CNN-based feature extractors (DenseNet121, EfficientNetB0, ResNet18, SqueezeNet, and a modified U-Net) was evaluated on a dataset of 654 ST scans using a regression analysis framework for accurate CA prediction. The dataset comprised 590 training and 64 testing scans. Performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and accuracy in classifying scoliosis severity (mild, moderate, severe) based on CA measurements. The EfficientNetB0 feature extractor outperformed other models, demonstrating strong performance on the training set (R = 0.96, R² = 0.93) and achieving an MAE of 6.13° and RMSE of 7.5° on the test set. In terms of scoliosis severity classification, it achieved high precision (84.62%) and specificity (95.65% for mild cases and 82.98% for severe cases), highlighting its clinical applicability in AIS management. The regression-based approach using EfficientNetB0 as a feature extractor presents a significant advancement for accurately determining CA from ST scans, offering a promising tool for improving scoliosis severity categorization and management in adolescents.
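
A minimal sketch of the "feature extractor with appended regression layers" idea using a torchvision EfficientNet-B0 backbone; the head sizes and input resolution are guesses for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CobbAngleRegressor(nn.Module):
    """EfficientNet-B0 backbone with a small regression head predicting a
    single Cobb angle (degrees) from a surface-topography image."""
    def __init__(self):
        super().__init__()
        backbone = models.efficientnet_b0(weights=None)
        in_features = backbone.classifier[1].in_features   # 1280 for EfficientNet-B0
        backbone.classifier = nn.Identity()                 # keep only the feature extractor
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(in_features, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x):
        return self.head(self.backbone(x)).squeeze(-1)      # predicted angle per image

# model = CobbAngleRegressor(); angles = model(torch.randn(2, 3, 224, 224))
```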

Prediction of Lymph Node Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images With Size on CT and PET-CT Findings.

Oh JE, Chung HS, Gwon HR, Park EY, Kim HY, Lee GK, Kim TS, Hwangbo B

PubMed · Jun 1, 2025
Echo features of lymph nodes (LNs) influence target selection during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). This study evaluates deep learning's diagnostic capabilities on EBUS images for detecting mediastinal LN metastasis in lung cancer, emphasising the added value of integrating a region of interest (ROI), LN size on CT, and PET-CT findings. We analysed 2901 EBUS images from 2055 mediastinal LN stations in 1454 lung cancer patients. ResNet18-based deep learning models were developed to classify images of true positive malignant and true negative benign LNs diagnosed by EBUS-TBNA using different inputs: original images, ROI images, and CT size and PET-CT data. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and other diagnostic metrics. The model using only original EBUS images showed the lowest AUROC (0.870) and accuracy (80.7%) in classifying LN images. Adding ROI information slightly increased the AUROC (0.896) without a significant difference (p = 0.110). Further adding CT size resulted in a minimal change in AUROC (0.897), while adding PET-CT (original + ROI + PET-CT) showed a significant improvement (0.912, p = 0.008 vs. original; p = 0.002 vs. original + ROI + CT size). The model combining original and ROI EBUS images with CT size and PET-CT findings achieved the highest AUROC (0.914, p = 0.005 vs. original; p = 0.018 vs. original + ROI + PET-CT) and accuracy (82.3%). Integrating an ROI, LN size on CT, and PET-CT findings into the deep learning analysis of EBUS images significantly enhances the diagnostic capability of models for detecting mediastinal LN metastasis in lung cancer, with the integration of PET-CT data having a substantial impact.
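A rough sketch of one way to fuse ResNet18 image features with the tabular inputs mentioned in the abstract (LN size on CT, PET-CT finding); the fusion strategy, feature dimensions, and input encoding are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class EbusLNClassifier(nn.Module):
    """ResNet18 features from an EBUS image concatenated with tabular
    covariates, producing a malignancy logit per lymph node."""
    def __init__(self, n_tabular=2):                 # e.g. CT short-axis size, PET positivity
        super().__init__()
        backbone = models.resnet18(weights=None)
        in_features = backbone.fc.in_features        # 512 for ResNet-18
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.classifier = nn.Sequential(nn.Linear(in_features + n_tabular, 64),
                                        nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image, tabular):
        feats = self.backbone(image)                              # (N, 512) image features
        return self.classifier(torch.cat([feats, tabular], dim=1))  # (N, 1) malignancy logit
```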

Enhancing diagnostic accuracy of thyroid nodules: integrating self-learning and artificial intelligence in clinical training.

Kim D, Hwang YA, Kim Y, Lee HS, Lee E, Lee H, Yoon JH, Park VY, Rho M, Yoon J, Lee SE, Kwak JY

PubMed · Jun 1, 2025
This study explores a self-learning method as an auxiliary approach in residency training for distinguishing between benign and malignant thyroid nodules. Conducted from March to December 2022, internal medicine residents underwent three repeated learning sessions with a "learning set" comprising 3000 thyroid nodule images. Diagnostic performance of the internal medicine residents was assessed before the study and after every learning session, and that of the radiology residents before and after one-on-one education, using a "test set" comprising 120 thyroid nodule images. Finally, all residents repeated the same test using artificial intelligence computer-assisted diagnosis (AI-CAD). Twenty-one internal medicine and eight radiology residents participated. Initially, internal medicine residents had a lower area under the receiver operating characteristic curve (AUROC) than radiology residents (0.578 vs. 0.701, P < 0.001), improving after learning (0.578 to 0.709, P < 0.001) to a level comparable with radiology residents (0.709 vs. 0.735, P = 0.17). Further improvement occurred with AI-CAD for both groups (0.709 to 0.755, P < 0.001; 0.735 to 0.768, P = 0.03). The proposed iterative self-learning method using a large volume of ultrasonographic images can assist beginners, such as residents, in thyroid imaging to differentiate benign and malignant thyroid nodules. Additionally, AI-CAD can improve diagnostic performance across varied levels of experience in thyroid imaging.

Ultrasound measurement of relative tongue size and its correlation with tongue mobility for healthy individuals.

Sun J, Kitamura T, Nota Y, Yamane N, Hayashi R

PubMed · Jun 1, 2025
The size of an individual's tongue relative to the oral cavity is associated with articulation speed [Feng, Lu, Zheng, Chi, and Honda, in Proceedings of the 10th Biennial Asia Pacific Conference on Speech, Language, and Hearing (2017), pp. 17-19] and may affect speech clarity. This study introduces an ultrasound-based method for measuring relative tongue size, termed ultrasound-based relative tongue size (uRTS), as a cost-effective alternative to the magnetic resonance imaging (MRI)-based method. Using deep learning to extract the tongue contour, uRTS was calculated from tongue and oropharyngeal cavity sizes in the midsagittal plane. Results from ten speakers showed a strong correlation between uRTS and MRI-based measurements (r = 0.87) and a negative correlation with tongue movement speed (r = -0.73), indicating that uRTS is a useful index for assessing tongue size.
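
A sketch of one plausible reading of the uRTS computation, taking pixel counts of the tongue and oropharyngeal-cavity segmentation masks in the midsagittal plane as the size measures; the abstract does not specify the exact size definition, so this is an assumption.

```python
import numpy as np
from scipy.stats import pearsonr

def urts(tongue_mask, oropharyngeal_mask):
    """Relative tongue size as the ratio of segmented tongue area to
    segmented oropharyngeal cavity area (binary masks, midsagittal plane)."""
    return tongue_mask.sum() / oropharyngeal_mask.sum()

# Correlating per-speaker uRTS values against MRI-based relative tongue size
# (the paper reports r = 0.87 across ten speakers):
# r, p = pearsonr(urts_values, mri_values)
```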

Deep Learning-Based Automated Measurement of Cervical Length in Transvaginal Ultrasound Images of Pregnant Women.

Kwon H, Sun S, Cho HC, Yun HS, Park S, Jung YJ, Kwon JY, Seo JK

PubMed · Jun 1, 2025
Cervical length (CL) measurement using transvaginal ultrasound is an effective screening tool to assess the risk of preterm birth. An adequate assessment of CL is crucial; however, manual sonographic CL measurement is highly operator-dependent and cumbersome. Therefore, a reliable and reproducible automatic method for CL measurement is in high demand to reduce inter-rater variability and improve workflow. Despite the increasing use of artificial intelligence techniques in ultrasound, applying deep learning (DL) to analyze ultrasound images of the cervix remains a challenge due to low signal-to-noise ratios and difficulties in capturing the cervical canal, which appears as a thin line with extremely low contrast against the surrounding tissues. To address these challenges, we have developed CL-Net, a novel DL network that incorporates expert anatomical knowledge to identify the cervix, similar to the approach taken by clinicians. CL-Net captures anatomical features related to CL measurement, facilitating the identification of the cervical canal. It then identifies the cervical canal and automatically provides reproducible and reliable CL measurements. CL-Net achieved a success rate of 95.5% in recognizing the cervical canal, comparable to that of human experts (96.4%). Furthermore, the differences between the CL measurements of CL-Net and ground truth were considerably smaller than those made by non-experts and were comparable to those made by experts (median 1.36 mm, IQR 0.87-2.82 mm, range 0.06-6.95 mm for a straight cervix; median 1.31 mm, IQR 0.61-2.65 mm, range 0.01-8.18 mm for a curved cervix).
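
An illustrative post-processing step for turning an identified cervical canal into a length measurement, summing segment lengths along an ordered polyline so that both straight and curved cervices are handled; how CL-Net actually derives its measurement is not specified in the abstract, so this is only a sketch.

```python
import numpy as np

def cervical_length_mm(canal_points, pixel_spacing_mm):
    """canal_points: ordered (row, col) pixel coordinates along the detected
    canal from internal to external os; pixel_spacing_mm: physical pixel size.
    Returns the polyline length in millimetres."""
    pts = np.asarray(canal_points, dtype=float) * pixel_spacing_mm
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())
```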

[Capabilities and Advances of Transrectal Ultrasound in 2025].

Kaufmann S, Kruck S

PubMed · Jun 1, 2025
Transrectal ultrasound, particularly the combination of high-frequency ultrasound and MR-TRUS fusion technologies, provides a highly precise and effective method for correlation and targeted biopsy of suspicious intraprostatic lesions detected by MRI. Advances in imaging technology, driven by 29 MHz micro-ultrasound transducers, robotic-assisted systems, and the integration of AI-based analyses, promise further improvements in diagnostic accuracy and a reduction in unnecessary biopsies. Further technological advancements and improved TRUS training could contribute to a decentralized and cost-effective diagnostic evaluation of prostate cancer in the future.

FedBCD: Federated Ultrasound Video and Image Joint Learning for Breast Cancer Diagnosis.

Deng T, Huang C, Cai M, Liu Y, Liu M, Lin J, Shi Z, Zhao B, Huang J, Liang C, Han G, Liu Z, Wang Y, Han C

PubMed · Jun 1, 2025
Ultrasonography plays an essential role in breast cancer diagnosis. Current deep learning based studies train models on either images or videos in a centralized learning manner, without considering the joint benefits between the two modality models or the privacy issues raised by data centralization. In this study, we propose the first decentralized learning solution for joint learning with breast ultrasound video and image, called FedBCD. To enable the model to learn from images and videos simultaneously and seamlessly in client-level local training, we propose a Joint Ultrasound Video and Image Learning (JUVIL) model to bridge the dimension gap between video and image data by incorporating temporal and spatial adapters. The parameter-efficient design of JUVIL, with trainable adapters and a frozen backbone, further reduces the computational cost and communication burden of federated learning, improving the overall efficiency. Moreover, conventional model-wise aggregation may lead to unstable federated training because of different modalities, different data capacities across clients, and different functionalities across layers. We therefore propose a Fisher information matrix (FIM)-guided layer-wise aggregation method named FILA. By measuring layer-wise sensitivity with the FIM, FILA assigns higher contributions to the clients with lower sensitivity, improving personalized performance during federated training. Extensive experiments on three image clients and one video client demonstrate the benefits of the joint learning architecture, especially for clients with small-scale data. FedBCD significantly outperforms nine federated learning methods on both video-based and image-based diagnoses, demonstrating its superiority and potential for clinical practice. Code is released at https://github.com/tianpeng-deng/FedBCD.
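
A sketch of what FIM-guided layer-wise aggregation could look like on the server side, weighting each client's layer inversely to its Fisher-information sensitivity so that lower-sensitivity clients contribute more; the exact weighting rule in FILA may differ, and the data structures here (per-client state dicts and per-layer scalar sensitivities) are assumptions. The released code at https://github.com/tianpeng-deng/FedBCD is the authoritative reference.

```python
import torch

def fila_aggregate(client_states, client_fims, eps=1e-8):
    """client_states: list of model state dicts (one per client);
    client_fims: list of dicts mapping layer name -> scalar sensitivity.
    Returns an aggregated state dict with per-layer inverse-sensitivity weights."""
    aggregated = {}
    for name in client_states[0]:
        inv = torch.tensor([1.0 / (float(fim[name]) + eps) for fim in client_fims])
        weights = inv / inv.sum()                      # lower sensitivity -> higher contribution
        aggregated[name] = sum(w * state[name] for w, state in zip(weights, client_states))
    return aggregated
```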