
Evaluating the Efficacy of Various Deep Learning Architectures for Automated Preprocessing and Identification of Impacted Maxillary Canines in Panoramic Radiographs.

Alenezi O, Bhattacharjee T, Alseed HA, Tosun YI, Chaudhry J, Prasad S

PubMed · Aug 2 2025
Previously, automated cropping and reasonable classification accuracy for distinguishing impacted and non-impacted canines were demonstrated. This study evaluates multiple convolutional neural network (CNN) architectures for improving accuracy as a step toward fully automated software for identification of impacted maxillary canines (IMCs) in panoramic radiographs (PRs). Eight CNNs (SqueezeNet, GoogLeNet, NASNet-Mobile, ShuffleNet, VGG-16, ResNet 50, DenseNet 201, and Inception V3) were compared in terms of their ability to classify two groups of PRs (91 with impacted and 91 with non-impacted maxillary canines) before preprocessing and after applying automated cropping. GoogLeNet achieved the highest classification performance among the tested CNN architectures: area under the curve (AUC) values from receiver operating characteristic (ROC) analysis were 0.9 without preprocessing and 0.99 with preprocessing, compared with 0.84 and 0.96, respectively, for SqueezeNet. On this dataset, GoogLeNet thus performed best for automated identification of impacted maxillary canines on both cropped and uncropped PRs.
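
A minimal sketch of this kind of architecture comparison, assuming the (optionally cropped) PRs are stored in class-named folders and using GoogLeNet via torchvision; the paths, transforms, and hyperparameters below are illustrative assumptions, not the authors' published pipeline:

```python
# Hedged sketch: fine-tune an ImageNet-pretrained GoogLeNet on two folders
# ("impacted" / "non_impacted") of panoramic radiograph crops and report ROC AUC.
# Paths, image size, epochs, and learning rate are assumptions for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import roc_auc_score

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("prs/train", transform=tfm)   # hypothetical layout
test_ds = datasets.ImageFolder("prs/test", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=16)

model = models.googlenet(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)                  # two classes
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):                                            # short fine-tuning run
    model.train()
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model.eval()
scores, labels = [], []
with torch.no_grad():
    for x, y in test_dl:
        scores += torch.softmax(model(x), dim=1)[:, 1].tolist()  # P(impacted)
        labels += y.tolist()
print("ROC AUC:", roc_auc_score(labels, scores))
```

The same loop could be repeated per architecture (swapping the backbone and its classifier head) to reproduce the kind of side-by-side AUC comparison described in the abstract.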

Artificial Intelligence in Abdominal, Gynecological, Obstetric, Musculoskeletal, Vascular and Interventional Ultrasound.

Graumann O, Cui Xin W, Goudie A, Blaivas M, Braden B, Campbell Westerway S, Chammas MC, Dong Y, Gilja OH, Hsieh PC, Jiang Tian A, Liang P, Möller K, Nolsøe CP, Săftoiu A, Dietrich CF

PubMed · Aug 2 2025
Artificial intelligence (AI) refers to the theory and systematic development of computational models designed to execute tasks that traditionally require human cognition. In medical imaging, AI is applied across modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, and across pathologies in multiple organ systems. However, integrating AI into medical ultrasound presents unique challenges compared to modalities like CT and MRI because of its operator-dependent nature and the inherent variability of image acquisition. Applied to ultrasound, AI has the potential to mitigate these variabilities, improve interpretative consistency, and uncover diagnostic patterns that are difficult for humans to detect. Progress has led to significant innovation in ultrasound-based AI applications, facilitating their adoption in various clinical settings and for multiple diseases. This manuscript primarily aims to provide a concise yet comprehensive exploration of current and emerging AI applications in abdominal, musculoskeletal, obstetric and gynecological, and interventional medical ultrasound. The secondary aim is to discuss present limitations and potential challenges that such technological implementations may encounter.

Anatomical Considerations for Achieving Optimized Outcomes in Individualized Cochlear Implantation.

Timm ME, Avallone E, Timm M, Salcher RB, Rudnik N, Lenarz T, Schurzig D

PubMed · Aug 1 2025
Cochlear implantation is a successful therapy in patients with severe to profound hearing loss. The effectiveness of a cochlear implant depends on precise insertion and positioning of the electrode array within the cochlea, which varies considerably in shape and size. Preoperative imaging such as CT or MRI plays a significant role in evaluating cochlear anatomy and planning the surgical approach to optimize outcomes, and machine learning models can assist with selecting the electrode array required for an optimal insertion angle. In this study, preoperative and postoperative CT and CBCT data of 558 cochlear implant patients were analyzed in terms of the influence of anatomical factors and insertion depth on the resulting insertion angle. Machine learning models can predict the insertion depth needed for an optimal insertion angle, with performance improving when cochlear dimensions are included: a simple linear regression using just the insertion depth explained 88% of the variability, whereas adding cochlear length or diameter and width improved predictions up to 94%.
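
A minimal sketch of the regression comparison mentioned above, assuming a per-patient table with the relevant measurements; the CSV file and column names are illustrative placeholders, not from the study:

```python
# Hedged sketch: compare R^2 of predicting the insertion angle from insertion
# depth alone vs. insertion depth plus cochlear dimensions.
# File name and column names are assumptions for illustration.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("cochlea_cases.csv")            # hypothetical per-patient data

y = df["insertion_angle_deg"]
X_depth = df[["insertion_depth_mm"]]
X_full = df[["insertion_depth_mm", "cochlear_diameter_mm", "cochlear_width_mm"]]

r2_depth = cross_val_score(LinearRegression(), X_depth, y, scoring="r2", cv=5).mean()
r2_full = cross_val_score(LinearRegression(), X_full, y, scoring="r2", cv=5).mean()

print(f"depth only:      R^2 = {r2_depth:.2f}")  # abstract reports ~0.88
print(f"depth + anatomy: R^2 = {r2_full:.2f}")   # abstract reports up to ~0.94
```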

Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images

Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, Alex Bäuerle, Meinrad Beer, Michael Götz, Timo Ropinski

arXiv preprint · Aug 1 2025
Clinical decision-making relies heavily on understanding the relative positions of anatomical structures and anomalies. For Vision-Language Models (VLMs) to be applicable in clinical practice, the ability to accurately determine relative positions in medical images is therefore a fundamental prerequisite. Despite its importance, this capability remains highly underexplored. To address this gap, we evaluate state-of-the-art VLMs (GPT-4o, Llama3.2, Pixtral, and JanusPro) and find that all models fail at this fundamental task. Inspired by successful approaches in computer vision, we investigate whether visual prompts, such as alphanumeric or colored markers placed on anatomical structures, can enhance performance. While these markers provide moderate improvements, results remain significantly lower on medical images than on natural images. Our evaluations suggest that, in medical imaging, VLMs rely more on prior anatomical knowledge than on actual image content when answering relative position questions, often leading to incorrect conclusions. To facilitate further research in this area, we introduce MIRP, the Medical Imaging Relative Positioning benchmark dataset, designed to systematically evaluate the capability to identify relative positions in medical images.
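
A small sketch of the visual-prompting idea described above: overlay colored, alphanumeric markers on two image locations before querying a VLM about their relative position. The file names, coordinates, and question are illustrative placeholders, not the paper's protocol:

```python
# Hedged sketch: draw labelled markers on an image prior to VLM querying.
from PIL import Image, ImageDraw

img = Image.open("radiograph.png").convert("RGB")   # hypothetical input image
draw = ImageDraw.Draw(img)

markers = {"A": (120, 200), "B": (320, 210)}        # assumed pixel positions
for label, (x, y) in markers.items():
    draw.ellipse((x - 12, y - 12, x + 12, y + 12), fill="red", outline="white")
    draw.text((x - 4, y - 7), label, fill="white")

img.save("radiograph_marked.png")
# The marked image would then be paired with a question such as:
# "From the patient's perspective, is marker A to the left of marker B?"
```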

From Consensus to Standardization: Evaluating Deep Learning for Nerve Block Segmentation in Ultrasound Imaging.

Pelletier ED, Jeffries SD, Suissa N, Sarty I, Malka N, Song K, Sinha A, Hemmerling TM

PubMed · Aug 1 2025
Deep learning can automate nerve identification by learning from expert-labeled examples to detect and highlight nerves in ultrasound images. This study aims to evaluate the performance of deep-learning models in identifying nerves for ultrasound-guided nerve blocks. A total of 3594 raw ultrasound images were collected from public sources (an open GitHub repository and publicly available YouTube videos) covering 9 nerve block regions: Transversus Abdominis Plane (TAP), Femoral Nerve, Posterior Rectus Sheath, Median and Ulnar Nerves, Pectoralis Plane, Sciatic Nerve, Infraclavicular Brachial Plexus, Supraclavicular Brachial Plexus, and Interscalene Brachial Plexus. Of these, 10 images per nerve region were reserved for testing, with each image labeled by 10 expert anesthesiologists. The remaining 3504 were labeled by a medical anesthesia resident and augmented to create a diverse training dataset of 25,000 images per nerve region. Additionally, 908 negative ultrasound images, which do not contain the targeted nerve structures, were included to improve model robustness. Ten convolutional neural network-based deep-learning architectures were selected to identify nerve structures. Models were trained using 5-fold cross-validation on an EVGA GeForce RTX 3090 GPU, with batch size, number of epochs, and Adam optimizer settings tuned to improve model performance. After training, models were evaluated on the set of 10 images per nerve region, using the Dice score (range: 0 to 1, where 1 indicates perfect agreement and 0 indicates no overlap) to compare model predictions with expert-labeled images. Further validation was conducted by 10 medical experts who assessed whether they would insert a needle into the model's predictions. Statistical analyses were performed to explore the relationship between Dice scores and expert responses. The R2U-Net model achieved the highest average Dice score (0.7619) across all nerve regions, outperforming the other models (0.7123-0.7619). However, statistically significant differences in model performance were observed only for the TAP nerve region (χ² = 26.4, df = 9, P = .002, ε² = 0.267). Expert evaluations indicated high accuracy in the model predictions, particularly for the Popliteal nerve region, where experts agreed to insert a needle for all 100 model-generated predictions. Logistic modeling suggested that higher Dice overlap might increase the odds of expert acceptance in the Supraclavicular region (odds ratio [OR] = 8.59 × 10⁴; 95% confidence interval [CI], 0.33-2.25 × 10¹⁰; P = .073). The findings demonstrate the potential of deep-learning models, such as R2U-Net, to deliver consistent segmentation results in ultrasound-guided nerve block procedures.
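
A minimal sketch of the Dice score used as the comparison metric above (1 = perfect overlap with the expert mask, 0 = no overlap); the toy masks are illustrative only:

```python
# Hedged sketch of the Dice coefficient for binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: the prediction covers 2 of the 3 expert-labelled pixels.
pred = np.zeros((4, 4)); pred[1, 1] = pred[1, 2] = 1
truth = np.zeros((4, 4)); truth[1, 1] = truth[1, 2] = truth[2, 2] = 1
print(f"Dice = {dice_score(pred, truth):.2f}")   # 2*2 / (2+3) = 0.80
```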

Do We Need Pre-Processing for Deep Learning Based Ultrasound Shear Wave Elastography?

Sarah Grube, Sören Grünhagen, Sarah Latus, Michael Meyling, Alexander Schlaefer

arXiv preprint · Aug 1 2025
Estimating the elasticity of soft tissue can provide useful information for various diagnostic applications. Ultrasound shear wave elastography offers a non-invasive approach, but its generalizability and standardization across different systems and processing pipelines remain limited. Considering the influence of image processing on ultrasound-based diagnostics, recent literature has discussed the impact of different image processing steps on reliable and reproducible elasticity analysis. In this work, we investigate the need for ultrasound pre-processing steps in deep learning-based ultrasound shear wave elastography. We evaluate the performance of a 3D convolutional neural network in predicting shear wave velocities from spatio-temporal ultrasound images, studying different degrees of pre-processing of the input images, ranging from fully beamformed and filtered ultrasound images to raw radiofrequency data. We compare the predictions from our deep learning approach to a conventional time-of-flight method across four gelatin phantoms with different elasticity levels. Our results demonstrate statistically significant differences in the predicted shear wave velocity among all elasticity groups, regardless of the degree of pre-processing. Although pre-processing slightly improves performance metrics, our results show that the deep learning approach can reliably differentiate between elasticity groups using raw, unprocessed radiofrequency data. Deep learning-based approaches could thus reduce the need for, and the bias of, traditional ultrasound pre-processing steps in ultrasound shear wave elastography, enabling faster and more reliable clinical elasticity assessments.
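
A minimal sketch of the kind of 3D convolutional regressor described above, mapping a spatio-temporal ultrasound volume to a single shear wave velocity; the layer sizes and input dimensions are assumptions, not the authors' architecture:

```python
# Hedged sketch: a small 3D CNN that regresses one shear wave velocity from a
# (time x depth x width) ultrasound volume. Architecture/dimensions are assumed.
import torch
import torch.nn as nn

class SWSRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)   # predicted shear wave velocity (m/s)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Dummy batch: 4 sequences of 32 frames, each 64 x 64, single channel.
x = torch.randn(4, 1, 32, 64, 64)
print(SWSRegressor()(x).shape)   # torch.Size([4, 1])
```

The same network could in principle take either beamformed image sequences or raw radiofrequency frames as input, which is the comparison the abstract describes.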

Automated Assessment of Choroidal Mass Dimensions Using Static and Dynamic Ultrasonographic Imaging

Emmert, N., Wall, G., Nabavi, A., Rahdar, A., Wilson, M., King, B., Cernichiaro-Espinosa, L., Yousefi, S.

medRxiv preprint · Aug 1 2025
Purpose: To develop and validate an artificial intelligence (AI)-based model that automatically measures choroidal mass dimensions on B-scan ophthalmic ultrasound still images and cine loops. Design: Retrospective diagnostic accuracy study with internal and external validation. Participants: The dataset included 1,822 still images and 283 cine loops of choroidal masses for model development and testing. An additional 182 still images were used for external validation, and 302 control images with other diagnoses were included to assess specificity. Methods: A deep convolutional neural network (CNN) based on the U-Net architecture was developed to automatically measure the apical height and basal diameter of choroidal masses on B-scan ultrasound. All still images were manually annotated by expert graders and reviewed by a senior ocular oncologist. Cine loops were analyzed frame by frame, and the frame with the largest detected mass dimensions was selected for evaluation. Outcome Measures: The primary outcome was the model's measurement accuracy, defined as the mean absolute error (MAE) in millimeters compared to expert manual annotations, for both apical height and basal diameter. Secondary metrics included the Dice coefficient, coefficient of determination (R²), and mean pixel distance between predicted and reference measurements. Results: On the internal test set of still images, the model detected the tumor in 99.7% of cases. The MAE was 0.38 ± 0.55 mm for apical height (95.1% of measurements within 1 mm of the expert annotation) and 0.99 ± 1.15 mm for basal diameter (64.4% of measurements within 1 mm). Linear agreement between predicted and reference measurements was strong, with R² values of 0.74 for apical height and 0.89 for basal diameter. On the control set of 302 images, the model demonstrated a moderate false positive rate. On the external validation set, the model maintained comparable accuracy. Among the cine loops, the model detected tumors in 89.4% of cases with comparable accuracy. Conclusion: Deep learning can deliver fast, reproducible, millimeter-level measurements of choroidal mass dimensions with robust performance across different mass types and imaging sources. These findings support the potential clinical utility of AI-assisted measurement tools in ocular oncology workflows.
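
A small sketch of the agreement metrics reported above (MAE in millimetres and the fraction of measurements within 1 mm of the expert annotation), assuming paired arrays of predicted and expert measurements; the toy values are illustrative only:

```python
# Hedged sketch: MAE and percentage of measurements within 1 mm of the expert.
import numpy as np

def agreement(pred_mm: np.ndarray, expert_mm: np.ndarray):
    err = np.abs(pred_mm - expert_mm)
    return err.mean(), (err < 1.0).mean() * 100

# Toy apical-height values (mm) for illustration only.
pred = np.array([10.2, 7.8, 12.5, 6.1])
expert = np.array([10.0, 8.3, 12.1, 6.0])
mae, pct = agreement(pred, expert)
print(f"MAE = {mae:.2f} mm; {pct:.1f}% within 1 mm")
```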

Explainable multimodal deep learning for predicting thyroid cancer lateral lymph node metastasis using ultrasound imaging.

Shen P, Yang Z, Sun J, Wang Y, Qiu C, Wang Y, Ren Y, Liu S, Cai W, Lu H, Yao S

PubMed · Aug 1 2025
Preoperative prediction of lateral lymph node metastasis is clinically crucial for guiding surgical strategy and prognosis assessment, yet precise prediction methods are lacking. We therefore develop the Lateral Lymph Node Metastasis Network (LLNM-Net), a bidirectional-attention deep-learning model that fuses multimodal data (preoperative ultrasound images, radiology reports, pathological findings, and demographics) from 29,615 patients and 9,836 surgical cases across seven centers. Integrating nodule morphology and position with clinical text, LLNM-Net achieves an area under the curve (AUC) of 0.944 and 84.7% accuracy in multicenter testing, outperforming human experts (64.3% accuracy) and surpassing previous models by 7.4%. Here we show that tumors within 0.25 cm of the thyroid capsule carry >72% metastasis risk, with the middle and upper lobes as high-risk regions. Leveraging location, shape, echogenicity, margins, demographics, and clinician inputs, LLNM-Net further attains an AUC of 0.983 for identifying high-risk patients. The model is thus a promising tool for preoperative screening and risk stratification.
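
As a rough illustration of fusing ultrasound image features with report or clinical features, the sketch below concatenates CNN image features with a text/tabular embedding to produce a binary metastasis-risk logit; it is a simple late-fusion stand-in under assumed dimensions, not LLNM-Net's bidirectional-attention design:

```python
# Hedged sketch: simple late fusion of image features and a clinical/report
# embedding for a binary metastasis-risk logit. Not the LLNM-Net architecture.
import torch
import torch.nn as nn
from torchvision import models

class SimpleFusionNet(nn.Module):
    def __init__(self, text_dim: int = 768):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()                # 512-d image features
        self.image_encoder = backbone
        self.classifier = nn.Sequential(
            nn.Linear(512 + text_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),                     # logit for lateral LN metastasis
        )

    def forward(self, image, text_emb):
        feats = torch.cat([self.image_encoder(image), text_emb], dim=1)
        return self.classifier(feats)

x_img = torch.randn(2, 3, 224, 224)                # dummy ultrasound batch
x_txt = torch.randn(2, 768)                        # dummy report embeddings
print(SimpleFusionNet()(x_img, x_txt).shape)       # torch.Size([2, 1])
```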

Deep learning-based super-resolution US radiomics to differentiate testicular seminoma and non-seminoma: an international multicenter study.

Zhang Y, Lu S, Peng C, Zhou S, Campo I, Bertolotto M, Li Q, Wang Z, Xu D, Wang Y, Xu J, Wu Q, Hu X, Zheng W, Zhou J

PubMed · Aug 1 2025
Subvariants of testicular germ cell tumor (TGCT) significantly affect therapeutic strategies and patient prognosis. However, preoperatively distinguishing seminoma (SE) from non-seminoma (n-SE) remains a challenge. This study aimed to evaluate the performance of a deep learning-based super-resolution (SR) US radiomics model for SE/n-SE differentiation. This international multicenter retrospective study recruited patients with confirmed TGCT between 2015 and 2023. A pre-trained SR reconstruction algorithm was applied to enhance native-resolution (NR) images. NR and SR radiomics models were constructed, and the superior model was then integrated with clinical features to construct clinical-radiomics models. Diagnostic performance was evaluated by ROC analysis (AUC) and compared with radiologists' assessments using the DeLong test. A total of 486 male patients were enrolled in the training (n = 338), domestic (n = 92), and international (n = 59) validation sets. The SR radiomics model achieved AUCs of 0.90, 0.82, and 0.91 in the training, domestic, and international validation sets, respectively, significantly surpassing the NR model (p < 0.001, p = 0.031, and p = 0.001, respectively). The clinical-radiomics model exhibited a significantly higher AUC than the SR radiomics model alone in both the domestic and international validation sets (0.95 vs 0.82, p = 0.004; 0.97 vs 0.91, p = 0.031). Moreover, the clinical-radiomics model surpassed the performance of experienced radiologists in both the domestic (AUC, 0.95 vs 0.85, p = 0.012) and international (AUC, 0.97 vs 0.77, p < 0.001) validation cohorts. The SR-based clinical-radiomics model can effectively differentiate between SE and n-SE. This international multicenter study demonstrated that a radiomics model built on deep learning-based SR-reconstructed US images enabled effective differentiation between SE and n-SE. Clinical parameters and radiologists' assessments exhibit limited diagnostic accuracy for SE/n-SE differentiation in TGCT. Based on scrotal US images of TGCT, the SR radiomics models performed better than the NR radiomics models. The SR-based clinical-radiomics model outperforms both the radiomics model and radiologists' assessment, enabling accurate, non-invasive preoperative differentiation between SE and n-SE.
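
A sketch of how two models' AUCs might be compared on the same validation cases; a paired bootstrap is used here as a simple stand-in for the DeLong test named in the abstract, and the labels and model scores are synthetic placeholders:

```python
# Hedged sketch: paired bootstrap comparison of AUCs for the SR and NR models
# on the same cases (a substitute for the DeLong test, not the study's code).
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, scores_sr, scores_nr, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    diffs, n = [], len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:          # resample must contain both classes
            continue
        diffs.append(roc_auc_score(y[idx], scores_sr[idx])
                     - roc_auc_score(y[idx], scores_nr[idx]))
    diffs = np.asarray(diffs)
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])

# Example with synthetic data standing in for SE/n-SE labels and model scores.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 92)
scores_nr = np.clip(y * 0.5 + rng.normal(0.25, 0.20, 92), 0, 1)
scores_sr = np.clip(y * 0.6 + rng.normal(0.20, 0.15, 92), 0, 1)
print(bootstrap_auc_diff(y, scores_sr, scores_nr))
```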

Rapid review: Growing usage of Multimodal Large Language Models in healthcare.

Gupta P, Zhang Z, Song M, Michalowski M, Hu X, Stiglic G, Topaz M

PubMed · Aug 1 2025
Recent advancements in large language models (LLMs) have led to multimodal LLMs (MLLMs), which integrate data modalities beyond text. Although MLLMs show promise, there is a gap in the literature that empirically demonstrates their impact in healthcare. This paper summarizes the applications of MLLMs in healthcare, highlighting their potential to transform health practices. A rapid literature review was conducted in August 2024 using World Health Organization (WHO) rapid-review methodology and PRISMA standards, with searches across four databases (Scopus, Medline, PubMed, and ACM Digital Library) and top-tier conferences, including NeurIPS, ICML, AAAI, MICCAI, CVPR, ACL, and EMNLP. Articles on healthcare applications of MLLMs were included for analysis based on inclusion and exclusion criteria. The search yielded 115 articles, of which 39 were included in the final analysis. Of these, 77% appeared online (as preprints or publications) in 2024, reflecting the recent emergence of MLLMs. 80% of studies were from Asia and North America (mainly China and the US), with Europe lagging. Studies were split evenly between evaluations of pre-built MLLMs (60% focused on GPT versions) and development of custom MLLMs or frameworks with task-specific customizations. About 81% of studies examined MLLMs for diagnosis and reporting in radiology, pathology, and ophthalmology, with additional applications in education, surgery, and mental health. Prompting strategies, used in 80% of studies, improved performance in nearly half. However, evaluation practices were inconsistent: 67% reported accuracy, error analysis was mostly anecdotal, only 18% categorized failure types, and only 13% validated explainability through clinician feedback. Clinical deployment was demonstrated in just 3% of studies, and workflow integration, governance, and safety were rarely addressed. MLLMs offer substantial potential for healthcare transformation through multimodal data integration. Yet methodological inconsistencies, limited validation, and underdeveloped deployment strategies highlight the need for standardized evaluation metrics, structured error analysis, and human-centered design to support safe, scalable, and trustworthy clinical adoption.