Exploring the limit of image resolution for human expert classification of vascular ultrasound images in giant cell arteritis and healthy subjects: the GCA-US-AI project.

Bauer CJ, Chrysidis S, Dejaco C, Koster MJ, Kohler MJ, Monti S, Schmidt WA, Mukhtyar CB, Karakostas P, Milchert M, Ponte C, Duftner C, de Miguel E, Hocevar A, Iagnocco A, Terslev L, Døhn UM, Nielsen BD, Juche A, Seitz L, Keller KK, Karalilova R, Daikeler T, Mackie SL, Torralba K, van der Geest KSM, Boumans D, Bosch P, Tomelleri A, Aschwanden M, Kermani TA, Diamantopoulos A, Fredberg U, Inanc N, Petzinna SM, Albarqouni S, Behning C, Schäfer VS

PubMed | Jun 12 2025
Prompt diagnosis of giant cell arteritis (GCA) with ultrasound is crucial for preventing severe ocular and other complications, yet expertise in ultrasound performance is scarce. The development of an artificial intelligence (AI)-based assistant that facilitates ultrasound image classification and helps to diagnose GCA early promises to close the existing gap. In preparation for the planned AI, this study investigates the minimum image resolution required for human experts to reliably classify ultrasound images of arteries commonly affected by GCA for the presence or absence of GCA. Thirty-one international experts in GCA ultrasonography participated in a web-based exercise. They were asked to classify 10 ultrasound images for each of 5 vascular segments as GCA, normal, or not able to classify. The following segments were assessed: (1) superficial common temporal artery, (2) its frontal and (3) parietal branches (all in transverse view), (4) axillary artery in transverse view, and (5) axillary artery in longitudinal view. Identical images were shown at different resolutions, namely 32 × 32, 64 × 64, 128 × 128, 224 × 224, and 512 × 512 pixels, resulting in a total of 250 images to be classified by every study participant. Classification performance improved with increasing resolution up to a threshold, plateauing at 224 × 224 pixels. At 224 × 224 pixels, the overall classification sensitivity was 0.767 (95% CI, 0.737-0.796), and specificity was 0.862 (95% CI, 0.831-0.888). A resolution of 224 × 224 pixels ensures reliable human expert classification and aligns with the input requirements of many common AI-based architectures. Thus, the results of this study substantially guide the projected AI development.
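For readers prototyping against this finding, the sketch below shows one way to generate the same resolution ladder (32 to 512 pixels) from a single ultrasound frame; it is an illustration only, not the study's code, and the file name is hypothetical.

```python
# Illustrative sketch: resample one ultrasound frame to the square
# resolutions evaluated in the reading exercise.
from PIL import Image

RESOLUTIONS = [32, 64, 128, 224, 512]  # pixel sizes tested in the study

def make_resolution_ladder(path: str) -> dict[int, Image.Image]:
    """Return the same image resampled to each tested square resolution."""
    original = Image.open(path).convert("L")  # ultrasound frames are grayscale
    return {
        size: original.resize((size, size), resample=Image.Resampling.LANCZOS)
        for size in RESOLUTIONS
    }

if __name__ == "__main__":
    # "temporal_artery_transverse.png" is a hypothetical example file.
    ladder = make_resolution_ladder("temporal_artery_transverse.png")
    for size, image in ladder.items():
        image.save(f"frame_{size}x{size}.png")
```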

Radiomics and machine learning for predicting valve vegetation in infective endocarditis: a comparative analysis of mitral and aortic valves using TEE imaging.

Esmaely F, Moradnejad P, Boudagh S, Bitarafan-Rajabi A

PubMed | Jun 12 2025
Detecting valve vegetation in infective endocarditis (IE) poses challenges, particularly with mechanical valves, because acoustic shadowing artefacts often obscure critical diagnostic details. This study aimed to classify native and prosthetic mitral and aortic valves with and without vegetation using radiomics and machine learning. A total of 286 TEE scans from suspected IE cases (August 2023-November 2024) were analysed alongside 113 scans from rejected IE cases serving as controls. Frames were preprocessed using the Extreme Total Variation Bilateral (ETVB) filter, and radiomics features were extracted for classification using machine learning models, including Random Forest, Decision Tree, SVM, k-NN, and XGBoost. The models were evaluated using AUC, ROC curves, and decision curve analysis (DCA). For native mitral valves, SVM achieved the highest performance with an AUC of 0.88, a sensitivity of 0.91, and a specificity of 0.87. Mechanical mitral valves were also best classified with SVM (AUC: 0.85, sensitivity: 0.73, specificity: 0.92). Native aortic valves were best classified using SVM (AUC: 0.86, sensitivity: 0.87, specificity: 0.86), while Random Forest excelled for mechanical aortic valves (AUC: 0.81, sensitivity: 0.89, specificity: 0.78). These findings suggest that combining the models with the clinician's report may enhance the diagnostic accuracy of TEE, particularly in the absence of advanced imaging methods like PET/CT.
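As a rough illustration of the classification stage only, the following sketch fits an SVM (the best performer for mitral valves in the abstract) on a hypothetical table of already extracted radiomics features and reports AUC, sensitivity, and specificity; the ETVB preprocessing and the radiomics feature extraction themselves are omitted, and the CSV file name is an assumption.

```python
# Minimal sketch of the classification stage, assuming radiomics features were
# already extracted into a table with feature columns and a binary "vegetation" label.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, recall_score

df = pd.read_csv("radiomics_features_mitral_native.csv")  # hypothetical file
X, y = df.drop(columns=["vegetation"]), df["vegetation"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# SVM with standardized features; probability=True enables ROC/AUC computation.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
preds = model.predict(X_test)
print("AUC:        ", roc_auc_score(y_test, probs))
print("Sensitivity:", recall_score(y_test, preds, pos_label=1))
print("Specificity:", recall_score(y_test, preds, pos_label=0))
```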

Using a Large Language Model for Breast Imaging Reporting and Data System Classification and Malignancy Prediction to Enhance Breast Ultrasound Diagnosis: Retrospective Study.

Miaojiao S, Xia L, Xian Tao Z, Zhi Liang H, Sheng C, Songsong W

PubMed | Jun 11 2025
Breast ultrasound is essential for evaluating breast nodules, with Breast Imaging Reporting and Data System (BI-RADS) providing standardized classification. However, interobserver variability among radiologists can affect diagnostic accuracy. Large language models (LLMs) like ChatGPT-4 have shown potential in medical imaging interpretation. This study explores its feasibility in improving BI-RADS classification consistency and malignancy prediction compared to radiologists. This study aims to evaluate the feasibility of using LLMs, particularly ChatGPT-4, to assess the consistency and diagnostic accuracy of standardized breast ultrasound imaging reports, using pathology as the reference standard. This retrospective study analyzed breast nodule ultrasound data from 671 female patients (mean 45.82, SD 9.20 years; range 26-75 years) who underwent biopsy or surgical excision at our hospital between June 2019 and June 2024. ChatGPT-4 was used to interpret BI-RADS classifications and predict benign versus malignant nodules. The study compared the model's performance to that of two senior radiologists (≥15 years of experience) and two junior radiologists (<5 years of experience) using key diagnostic metrics, including accuracy, sensitivity, specificity, area under the receiver operating characteristic curve, P values, and odds ratios with 95% CIs. Two diagnostic models were evaluated: (1) image interpretation model, where ChatGPT-4 classified nodules based on BI-RADS features, and (2) image-to-text-LLM model, where radiologists provided textual descriptions, and ChatGPT-4 determined malignancy probability based on keywords. Radiologists were blinded to pathological outcomes, and BI-RADS classifications were finalized through consensus. ChatGPT-4 achieved an overall BI-RADS classification accuracy of 96.87%, outperforming junior radiologists (617/671, 91.95% and 604/671, 90.01%, P<.01). For malignancy prediction, ChatGPT-4 achieved an area under the receiver operating characteristic curve of 0.82 (95% CI 0.79-0.85), an accuracy of 80.63% (541/671 cases), a sensitivity of 90.56% (259/286 cases), and a specificity of 73.51% (283/385 cases). The image interpretation model demonstrated performance comparable to senior radiologists, while the image-to-text-LLM model further improved diagnostic accuracy for all radiologists, increasing their sensitivity and specificity significantly (P<.001). Statistical analyses, including the McNemar test and DeLong test, confirmed that ChatGPT-4 outperformed junior radiologists (P<.01) and showed noninferiority compared to senior radiologists (P>.05). Pathological diagnoses served as the reference standard, ensuring robust evaluation reliability. Integrating ChatGPT-4 into an image-to-text-LLM workflow improves BI-RADS classification accuracy and supports radiologists in breast ultrasound diagnostics. These results demonstrate its potential as a decision-support tool to enhance diagnostic consistency and reduce variability.
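The sketch below illustrates the general shape of such an image-to-text-LLM step: a radiologist's free-text lesion description is sent to an LLM, which returns a BI-RADS category and a benign/malignant impression. The prompt wording, model identifier, and example description are assumptions, not the study's actual configuration.

```python
# Hedged sketch of an image-to-text-LLM workflow using the OpenAI client.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are assisting with breast ultrasound reporting. Given a lesion "
    "description, return the BI-RADS category (2-5) and state whether the "
    "lesion is more likely benign or malignant, with a brief rationale."
)

def classify_description(description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # the study used ChatGPT-4; the exact model ID here is an assumption
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": description},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# Hypothetical lesion description, not taken from the study data.
example = (
    "Irregular hypoechoic mass, 14 x 11 mm, not parallel orientation, "
    "angular margins, posterior acoustic shadowing, no echogenic halo."
)
print(classify_description(example))
```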

MoNetV2: Enhanced Motion Network for Freehand 3-D Ultrasound Reconstruction.

Luo M, Yang X, Yan Z, Cao Y, Zhang Y, Hu X, Wang J, Ding H, Han W, Sun L, Ni D

PubMed | Jun 11 2025
Three-dimensional ultrasound (US) aims to provide sonographers with the spatial relationships of anatomical structures, playing a crucial role in clinical diagnosis. Recently, deep-learning-based freehand 3-D US has made significant advancements. It reconstructs volumes by estimating transformations between images without external tracking. However, image-only reconstruction poses difficulties in reducing cumulative drift and further improving reconstruction accuracy, particularly in scenarios involving complex motion trajectories. In this context, we propose an enhanced motion network (MoNetV2) to enhance the accuracy and generalizability of reconstruction under diverse scanning velocities and tactics. First, we propose a sensor-based temporal and multibranch structure (TMS) that fuses image and motion information from a velocity perspective to improve image-only reconstruction accuracy. Second, we devise an online multilevel consistency constraint (MCC) that exploits the inherent consistency of scans to handle various scanning velocities and tactics. This constraint exploits scan-level velocity consistency (SVC), path-level appearance consistency (PAC), and patch-level motion consistency (PMC) to supervise interframe transformation estimation. Third, we distill an online multimodal self-supervised strategy (MSS) that leverages the correlation between network estimation and motion information to further reduce cumulative errors. Extensive experiments clearly demonstrate that MoNetV2 surpasses existing methods in both reconstruction quality and generalizability performance across three large datasets.
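As a side note on why cumulative drift is hard to avoid in image-only freehand reconstruction, the toy calculation below chains per-frame transforms and shows how a small, constant estimation bias compounds over a sweep; the numbers are invented purely for illustration.

```python
# Toy illustration of cumulative drift: absolute probe poses are obtained by
# chaining estimated inter-frame transforms, so a small per-frame bias compounds.
import numpy as np

def translation(dx: float) -> np.ndarray:
    """4x4 homogeneous transform for a pure translation along x (mm)."""
    T = np.eye(4)
    T[0, 3] = dx
    return T

true_step, bias = 1.0, 0.02          # 1 mm true motion per frame, 0.02 mm bias
n_frames = 200

true_pose = est_pose = np.eye(4)
for _ in range(n_frames):
    true_pose = true_pose @ translation(true_step)
    est_pose = est_pose @ translation(true_step + bias)

drift = est_pose[0, 3] - true_pose[0, 3]
print(f"Accumulated drift after {n_frames} frames: {drift:.1f} mm")  # 4.0 mm
```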

Biologically Inspired Deep Learning Approaches for Fetal Ultrasound Image Classification

Rinat Prochii, Elizaveta Dakhova, Pavel Birulin, Maxim Sharaev

arXiv preprint | Jun 10 2025
Accurate classification of second-trimester fetal ultrasound images remains challenging due to low image quality, high intra-class variability, and significant class imbalance. In this work, we introduce a simple yet powerful, biologically inspired deep learning ensemble framework that, unlike prior studies focused on only a handful of anatomical targets, simultaneously distinguishes 16 fetal structures. Drawing on the hierarchical, modular organization of biological vision systems, our model stacks two complementary branches (a "shallow" path for coarse, low-resolution cues and a "detailed" path for fine, high-resolution features), concatenating their outputs for final prediction. To our knowledge, no existing method has addressed such a large number of classes with a comparably lightweight architecture. We trained and evaluated on 5,298 routinely acquired clinical images (annotated by three experts and reconciled via Dawid-Skene), reflecting real-world noise and variability rather than a "cleaned" dataset. Despite this complexity, our ensemble (EfficientNet-B0 + EfficientNet-B6 with LDAM-Focal loss) identifies 90% of organs with accuracy > 0.75 and 75% of organs with accuracy > 0.85, performance competitive with more elaborate models applied to far fewer categories. These results demonstrate that biologically inspired modular stacking can yield robust, scalable fetal anatomy recognition in challenging clinical settings.
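A minimal sketch of the two-branch stacking idea, assuming PyTorch/torchvision: a "shallow" EfficientNet-B0 branch for coarse, low-resolution cues and a "detailed" EfficientNet-B6 branch for fine features, with pooled features concatenated before a 16-way head. Input sizes, the head design, and the loss (the paper uses LDAM-Focal) are simplified here.

```python
# Hedged sketch of a two-branch EfficientNet stack with a shared classification head.
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchFetalNet(nn.Module):
    def __init__(self, num_classes: int = 16):
        super().__init__()
        self.shallow = models.efficientnet_b0(weights=None)
        self.detailed = models.efficientnet_b6(weights=None)
        # Drop the ImageNet heads; keep the pooled feature vectors of both branches.
        feat_dim = 0
        for branch in (self.shallow, self.detailed):
            feat_dim += branch.classifier[1].in_features
            branch.classifier = nn.Identity()
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x_low: torch.Tensor, x_high: torch.Tensor) -> torch.Tensor:
        # x_low: coarse, downsampled view; x_high: fine, higher-resolution view.
        feats = torch.cat([self.shallow(x_low), self.detailed(x_high)], dim=1)
        return self.head(feats)

model = TwoBranchFetalNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 528, 528))
print(logits.shape)  # torch.Size([2, 16])
```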

Adapting Vision-Language Foundation Model for Next Generation Medical Ultrasound Image Analysis

Jingguo Qu, Xinyang Han, Tonghuan Xiao, Jia Ai, Juan Wu, Tong Zhao, Jing Qin, Ann Dorothy King, Winnie Chiu-Wing Chu, Jing Cai, Michael Tin-Cheung Ying

arXiv preprint | Jun 10 2025
Medical ultrasonography is an essential imaging technique for examining superficial organs and tissues, including lymph nodes, breast, and thyroid. It employs high-frequency ultrasound waves to generate detailed images of the internal structures of the human body. However, manually contouring regions of interest in these images is a labor-intensive task that demands expertise and often results in inconsistent interpretations among individuals. Vision-language foundation models, which have excelled in various computer vision applications, present new opportunities for enhancing ultrasound image analysis. Yet, their performance is hindered by the significant differences between natural and medical imaging domains. This research seeks to overcome these challenges by developing domain adaptation methods for vision-language foundation models. In this study, we explore the fine-tuning pipeline for vision-language foundation models by utilizing a large language model as a text refiner, together with specially designed adaptation strategies and task-driven heads. Our approach has been extensively evaluated on six ultrasound datasets and two tasks: segmentation and classification. The experimental results show that our method can effectively improve the performance of vision-language foundation models for ultrasound image analysis, and outperform the existing state-of-the-art vision-language and pure foundation models. The source code of this study is available at https://github.com/jinggqu/NextGen-UIA.
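For orientation only, the sketch below shows one simple way to pair a frozen vision-language backbone with a task-driven classification head, loosely in the spirit of the adaptation pipeline described above; the CLIP backbone choice and linear head are assumptions, and the paper's LLM text refiner and adaptation strategies are not reproduced here.

```python
# Hedged sketch: frozen vision-language backbone + trainable task-driven head.
import torch
import torch.nn as nn
from transformers import CLIPModel

class CLIPWithTaskHead(nn.Module):
    def __init__(self, num_classes: int, backbone: str = "openai/clip-vit-base-patch32"):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(backbone)
        for p in self.clip.parameters():          # keep the foundation model frozen
            p.requires_grad = False
        self.head = nn.Linear(self.clip.config.projection_dim, num_classes)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.clip.get_image_features(pixel_values=pixel_values)
        return self.head(feats)

model = CLIPWithTaskHead(num_classes=2)
logits = model(torch.randn(1, 3, 224, 224))  # dummy tensor standing in for a preprocessed frame
print(logits.shape)                          # torch.Size([1, 2])
```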

Sonopermeation combined with stroma normalization enables complete cure using nano-immunotherapy in murine breast tumors.

Neophytou C, Charalambous A, Voutouri C, Angeli S, Panagi M, Stylianopoulos T, Mpekris F

PubMed | Jun 10 2025
Nano-immunotherapy shows great promise in improving patient outcomes, as seen in advanced triple-negative breast cancer, but it does not cure the disease, with median survival under two years. Therefore, understanding resistance mechanisms and developing strategies to enhance its effectiveness in breast cancer is crucial. A key resistance mechanism is the pronounced desmoplasia in the tumor microenvironment, which leads to dysfunction of tumor blood vessels and thus to hypoperfusion, limited drug delivery and hypoxia. Ultrasound sonopermeation and agents that normalize the tumor stroma have been employed separately to restore vascular abnormalities in tumors with some success. Here, we performed in vivo studies in two murine, orthotopic breast tumor models to explore whether combining ultrasound sonopermeation with a stroma normalization drug can synergistically improve tumor perfusion and enhance the efficacy of nano-immunotherapy. We found that the proposed combinatorial treatment can drastically reduce primary tumor growth and in many cases tumors were no longer measurable. Overall survival studies showed that all mice that received the combination treatment survived, and rechallenge experiments revealed that the survivors obtained immunological memory. Employing ultrasound elastography and contrast-enhanced ultrasound along with proteomics analysis, flow cytometry and immunofluorescence staining, we found that the combinatorial treatment reduced tumor stiffness to normal levels, restoring tumor perfusion and oxygenation. Furthermore, it increased infiltration and activity of immune cells and altered the levels of immunosupportive chemokines. Finally, using machine learning analysis, we identified that tumor stiffness, CD8+ T cells and M2-type macrophages were strong predictors of treatment response.

Artificial intelligence and endoanal ultrasound: pioneering automated differentiation of benign anal and sphincter lesions.

Mascarenhas M, Almeida MJ, Martins M, Mendes F, Mota J, Cardoso P, Mendes B, Ferreira J, Macedo G, Poças C

PubMed | Jun 10 2025
Anal injuries, such as lacerations and fissures, are challenging to diagnose because of their anatomical complexity. Endoanal ultrasound (EAUS) has proven to be a reliable tool for detailed visualization of anal structures but relies on expert interpretation. Artificial intelligence (AI) may offer a solution for more accurate and consistent diagnoses. This study aims to develop and test a convolutional neural network (CNN)-based algorithm for automatic classification of fissures and anal lacerations (internal and external) on EAUS. A single-center retrospective study analyzed 238 EAUS radial probe exams (April 2022-January 2024), categorizing 4528 frames into fissures (516), external lacerations (2174), and internal lacerations (1838), following validation by three experts. Data was split 80% for training and 20% for testing. Performance metrics included sensitivity, specificity, and accuracy. For external lacerations, the CNN achieved 82.5% sensitivity, 93.5% specificity, and 88.2% accuracy. For internal lacerations, it achieved 91.7% sensitivity, 85.9% specificity, and 88.2% accuracy. For anal fissures, it achieved 100% sensitivity, specificity, and accuracy. This first EAUS AI-assisted model for differentiating benign anal injuries demonstrates excellent diagnostic performance. It highlights AI's potential to improve accuracy, reduce reliance on expertise, and support broader clinical adoption. While currently limited by a small dataset and single-center scope, this work represents a significant step towards integrating AI in proctology.
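The per-class figures above correspond to one-vs-rest sensitivity and specificity on the held-out 20% split; a small sketch of that bookkeeping is shown below, with placeholder labels rather than study data.

```python
# Sketch of one-vs-rest sensitivity/specificity/accuracy for a 3-class frame classifier.
import numpy as np
from sklearn.metrics import confusion_matrix

CLASSES = ["fissure", "external_laceration", "internal_laceration"]

def one_vs_rest_metrics(y_true, y_pred, positive_class):
    t = np.asarray(y_true) == positive_class
    p = np.asarray(y_pred) == positive_class
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[False, True]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Placeholder labels and predictions, not study data.
y_true = ["fissure", "external_laceration", "internal_laceration", "external_laceration"]
y_pred = ["fissure", "external_laceration", "external_laceration", "external_laceration"]
for cls in CLASSES:
    print(cls, one_vs_rest_metrics(y_true, y_pred, cls))
```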

Uncertainty estimation for trust attribution to speed-of-sound reconstruction with variational networks.

Laguna S, Zhang L, Bezek CD, Farkas M, Schweizer D, Kubik-Huch RA, Goksel O

PubMed | Jun 10 2025
Speed-of-sound (SoS) is a biomechanical characteristic of tissue, and its imaging can provide a promising biomarker for diagnosis. Reconstructing SoS images from ultrasound acquisitions can be cast as a limited-angle computed-tomography problem, with variational networks being a promising model-based deep learning solution. Some acquired data frames may, however, get corrupted by noise due to, e.g., motion, lack of contact, and acoustic shadows, which in turn negatively affects the resulting SoS reconstructions. We propose to use the uncertainty in SoS reconstructions to attribute trust to each individual acquired frame. Given multiple acquisitions, we then use an uncertainty-based automatic selection among these retrospectively, to improve diagnostic decisions. We investigate uncertainty estimation based on Monte Carlo Dropout and Bayesian Variational Inference. We assess our automatic frame selection method for differential diagnosis of breast cancer, distinguishing between benign fibroadenoma and malignant carcinoma. We evaluate 21 lesions classified as BI-RADS 4, which represents suspicious cases for probable malignancy. The most trustworthy frame among four acquisitions of each lesion was identified using uncertainty-based criteria. Selecting a frame informed by uncertainty achieved an area under the curve of 76% and 80% for Monte Carlo Dropout and Bayesian Variational Inference, respectively, superior to all uncertainty-uninformed baselines, the best of which achieved 64%. A novel use of uncertainty estimation is proposed for selecting one of multiple data acquisitions for further processing and decision making.
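A minimal sketch of the Monte Carlo Dropout idea named above: dropout is kept active at inference, the same input is passed through the network several times, and the spread of the outputs serves as the uncertainty score used to rank frames. The toy network below stands in for the variational reconstruction model.

```python
# Hedged sketch of Monte Carlo Dropout uncertainty estimation.
import torch
import torch.nn as nn

# Toy stand-in network with a dropout layer.
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    model.train()                       # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # prediction, uncertainty

x = torch.randn(1, 16)                  # stand-in for features of one acquired frame
mean, std = mc_dropout_predict(net, x)
print(f"prediction={mean.item():.3f}, uncertainty={std.item():.3f}")
```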