Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges.

Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D

Jul 1 2025
The US healthcare system faces significant challenges, including clinician burnout, operational inefficiencies, and concerns about patient safety. Artificial intelligence (AI), particularly generative AI, has the potential to address these challenges, but its adoption, effectiveness, and barriers to implementation are not well understood. This study evaluated the current state of AI adoption in US healthcare systems and assessed successes and barriers to implementation during the early generative AI era. This cross-sectional survey was conducted in Fall 2024 and included 67 health system members of the Scottsdale Institute, a collaborative of US non-profit healthcare organizations. Forty-three health systems completed the survey (64% response rate). Respondents provided data on the deployment status and perceived success of 37 AI use cases across 10 categories. The primary outcomes were the extent of AI use case development, piloting, or deployment; the degree of reported success for AI use cases; and the most significant barriers to adoption. Across the 43 responding health systems, AI adoption and perceptions of success varied significantly. Ambient Notes, a generative AI tool for clinical documentation, was the only use case for which 100% of respondents reported adoption activities, and 53% reported a high degree of success with using AI for Clinical Documentation. Imaging and radiology emerged as the most widely deployed clinical AI use case, with 90% of organizations reporting at least partial deployment, although successes with diagnostic use cases were limited. Similarly, many organizations had deployed AI for clinical risk stratification, such as early sepsis detection, but only 38% reported high success in this area. Immature AI tools were identified as a significant barrier to adoption, cited by 77% of respondents, followed by financial concerns (47%) and regulatory uncertainty (40%). Ambient Notes is rapidly advancing in US healthcare systems and demonstrating early success. Other AI use cases show varying degrees of adoption and success, constrained by barriers such as immature AI tools, financial concerns, and regulatory uncertainty. Addressing these challenges through robust evaluations, shared strategies, and governance models will be essential to ensure effective integration and adoption of AI into healthcare practice.

A multi-task neural network for full waveform ultrasonic bone imaging.

Li P, Liu T, Ma H, Li D, Liu C, Ta D

Jul 1 2025
Ultrasonic bone imaging is a challenging task, as bone tissue has a complex structure with high acoustic impedance and speed-of-sound (SOS). Recently, full waveform inversion (FWI) has shown promise for imaging musculoskeletal tissues. However, FWI has shown limited ability and a tendency to produce artifacts in bone imaging, because the inversion process is easily trapped in a local minimum when there is a large discrepancy in SOS distribution between bony and soft tissues. In addition, applying FWI imposes a high computational burden and requires a relatively large number of iterations. The objective of this study was to achieve high-resolution ultrasonic imaging of bone using a deep learning-based FWI approach. In this paper, we propose a novel network named CEDD-Unet. The CEDD-Unet adopts a Dual-Decoder architecture, with the first decoder tasked with reconstructing the SOS model and the second decoder tasked with finding the main boundaries between bony and soft tissues. To effectively capture multi-scale spatio-temporal features from ultrasound radio frequency (RF) signals, we integrated a Convolutional LSTM (ConvLSTM) module. Additionally, an Efficient Multi-scale Attention (EMA) module was incorporated into the encoder to enhance feature representation and improve reconstruction accuracy. Using an ultrasonic imaging setup with a ring array transducer, the performance of CEDD-Unet was tested on SOS model datasets from human bones (Dataset1) and mouse bones (Dataset2), and compared with three classic reconstruction architectures (Unet, Unet++, and Att-Unet) and four state-of-the-art architectures (InversionNet, DD-Net, UPFWI, and DEFE-Unet). Experiments showed that CEDD-Unet outperforms all competing methods, achieving the lowest MAE (23.30 on Dataset1, 25.29 on Dataset2), the highest SSIM (0.9702 on Dataset1, 0.9550 on Dataset2), and the highest PSNR (30.60 dB on Dataset1, 32.87 dB on Dataset2). Our method demonstrated superior reconstruction quality, with clearer bone boundaries, reduced artifacts, and improved consistency with the ground truth. Moreover, CEDD-Unet surpasses traditional FWI by producing sharper skeletal SOS reconstructions, reducing computational cost, and eliminating the reliance on an initial model. Ablation studies further confirm the effectiveness of each network component. These results suggest that CEDD-Unet is a promising deep learning-based FWI method for high-resolution bone imaging, with the potential to reconstruct accurate and sharp-edged skeletal SOS models.
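
As a rough sketch of the Dual-Decoder idea described above, the following heavily simplified PyTorch model shares one encoder between an SOS-regression decoder and a boundary decoder. The ConvLSTM and EMA modules are omitted, and all layer sizes are invented for illustration; this does not reflect the authors' implementation.

```python
# Minimal dual-decoder sketch: one shared encoder, one decoder regressing
# the SOS map, one decoder predicting bone/soft-tissue boundaries.
# Layer sizes are illustrative only, not CEDD-Unet's actual topology.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class DualDecoderNet(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Decoder 1: speed-of-sound regression (per-pixel value).
        self.dec_sos = nn.Sequential(block(64 + 32, 32), nn.Conv2d(32, 1, 1))
        # Decoder 2: boundary map (per-pixel probability).
        self.dec_edge = nn.Sequential(block(64 + 32, 32), nn.Conv2d(32, 1, 1))

    def forward(self, x):
        f1 = self.enc1(x)                       # full-resolution features
        f2 = self.enc2(self.pool(f1))           # half-resolution features
        skip = torch.cat([self.up(f2), f1], 1)  # U-Net style skip connection
        return self.dec_sos(skip), torch.sigmoid(self.dec_edge(skip))

net = DualDecoderNet()
sos, edges = net(torch.randn(2, 1, 64, 64))  # dummy input; the paper maps RF signals to SOS
print(sos.shape, edges.shape)                # torch.Size([2, 1, 64, 64]) each
```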

SegQC: a segmentation network-based framework for multi-metric segmentation quality control and segmentation error detection in volumetric medical images.

Specktor-Fadida B, Ben-Sira L, Ben-Bashat D, Joskowicz L

Jul 1 2025
Quality control (QC) of structure segmentation in volumetric medical images is important for identifying segmentation errors in clinical practice and for facilitating model development by enhancing network performance in semi-supervised and active learning scenarios. This paper introduces SegQC, a novel framework for segmentation quality estimation and segmentation error detection. SegQC computes an estimated measure of segmentation quality in volumetric scans and in their individual slices, and identifies possible segmentation error regions within a slice. The key components of SegQC are: 1) SegQCNet, a deep network that takes a scan and its segmentation mask as input and outputs a segmentation error probability for each voxel in the scan; 2) three new segmentation quality metrics computed from the segmentation error probabilities; and 3) a new method for detecting possible segmentation errors in scan slices, also computed from the segmentation error probabilities. We introduce a novel evaluation scheme that measures segmentation error discrepancies based on an expert radiologist's corrections of automatically produced segmentations; it yields smaller observer variability and is closer to actual segmentation errors. We demonstrate SegQC on three fetal structures in 198 fetal MRI scans: the fetal brain, the fetal body, and the placenta. To assess the benefits of SegQC, we compare it to unsupervised Test Time Augmentation (TTA)-based QC and to supervised autoencoder (AE)-based QC. Our studies indicate that SegQC outperforms TTA-based quality estimation for whole scans and individual slices, in terms of Pearson correlation and MAE, for fetal body and fetal brain segmentation, as well as for volumetric overlap metric estimation for the placenta. Compared to both the unsupervised TTA and supervised AE methods, SegQC achieves lower MAE for both 3D and 2D Dice estimates and higher Pearson correlation for volumetric Dice. Our segmentation error detection method achieved recall and precision of 0.77 and 0.48 for the fetal body, and 0.74 and 0.55 for the fetal brain, respectively. Rankings derived from metric estimation surpass rankings based on entropy and sum for TTA and SegQCNet estimations, respectively. SegQC provides high-quality metric estimation for both 2D and 3D medical images, as well as error localization within slices, offering important improvements to segmentation QC.
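
The abstract does not spell out the three quality metrics, but the general recipe of aggregating per-voxel error probabilities into per-slice scores can be sketched as follows. The aggregation rule and threshold below are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch: derive per-slice quality scores from a map of
# per-voxel segmentation-error probabilities (as SegQCNet would output)
# and flag slices whose estimated error burden exceeds a threshold.
import numpy as np

def flag_slices(error_prob, mask, threshold=0.05):
    """error_prob, mask: (n_slices, H, W) arrays; returns scores and flags."""
    scores, flags = [], []
    for p, m in zip(error_prob, mask):
        seg_size = max(m.sum(), 1)          # voxels labeled in this slice
        expected_errors = p.sum()           # expected mis-segmented voxels
        score = expected_errors / seg_size  # error burden relative to structure size
        scores.append(score)
        flags.append(score > threshold)     # candidate for human review
    return np.array(scores), np.array(flags)

rng = np.random.default_rng(0)
scale = np.linspace(0.2, 2.0, 4)[:, None, None]          # slices of varying quality
probs = rng.uniform(0, 0.08, size=(4, 64, 64)) * scale   # toy error probabilities
masks = (rng.uniform(size=(4, 64, 64)) > 0.7).astype(int)
scores, flags = flag_slices(probs, masks)
print(scores.round(3), flags)   # low-error slices pass, high-error slices are flagged
```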

Application and optimization of the U-Net++ model for cerebral artery segmentation based on computed tomographic angiography images.

Kim H, Seo KH, Kim K, Shim J, Lee Y

Jul 1 2025
Accurate segmentation of cerebral arteries on computed tomography angiography (CTA) images is essential for the diagnosis and management of cerebrovascular diseases, including ischemic stroke. This study implemented a deep learning-based U-Net++ model for cerebral artery segmentation in CTA images, focusing on optimizing the pruning level by analyzing the trade-off between segmentation performance and computational cost. Dual-energy CTA and direct subtraction CTA datasets were used to segment the internal carotid and vertebral arteries in close proximity to bone. We implemented four pruning levels (L1-L4) in the U-Net++ model and evaluated segmentation performance using accuracy, intersection over union, F1-score, boundary F1-score, and Hausdorff distance. Statistical analyses were conducted to assess the significance of segmentation performance differences across pruning levels. In addition, we measured training and inference times to evaluate the trade-off between segmentation performance and computational efficiency. Applying deep supervision improved segmentation performance on all metrics. While the L4 pruning level achieved the highest segmentation performance, L3 significantly reduced training and inference times (by an average of 51.56% and 22.62%, respectively) while incurring only a small decrease in segmentation performance (7.08%) compared with L4. These results suggest that L3 achieves an optimal balance between performance and computational cost. This study demonstrates that the pruning level of a U-Net++ model can be optimized to reduce computational cost while maintaining effective segmentation performance. By simplifying deep learning models, this approach can improve the efficiency of cerebrovascular segmentation, contributing to faster and more accurate diagnoses in clinical settings.
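
A schematic of what pruning with deep supervision buys at inference time, under the assumption (standard for U-Net++) that each nested decoder branch carries its own segmentation head. The modules below are simple placeholders, not the study's model.

```python
# Schematic of U-Net++ pruning: with deep supervision, each nested
# decoder branch L1..L4 has its own segmentation head, so at inference
# the network can be truncated at any branch to trade accuracy for speed.
import torch
import torch.nn as nn

class PrunableUNetPP(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, width=16, levels=4):
        super().__init__()
        # Stand-ins for the nested decoder branches X^{0,1}..X^{0,4}.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch if i == 0 else width, width, 3, padding=1)
            for i in range(levels))
        # One supervision head per branch (deep supervision).
        self.heads = nn.ModuleList(
            nn.Conv2d(width, n_classes, 1) for _ in range(levels))

    def forward(self, x, prune_level=4):
        outputs = []
        for branch, head in zip(self.branches[:prune_level],
                                self.heads[:prune_level]):
            x = torch.relu(branch(x))
            outputs.append(head(x))   # training averages losses over all heads
        return outputs                # inference uses outputs[-1] only

model = PrunableUNetPP()
full = model(torch.randn(1, 1, 64, 64), prune_level=4)[-1]  # highest accuracy
fast = model(torch.randn(1, 1, 64, 64), prune_level=3)[-1]  # cheaper, slightly worse
```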

CausalMixNet: A mixed-attention framework for causal intervention in robust medical image diagnosis.

Zhang Y, Huang YA, Hu Y, Liu R, Wu J, Huang ZA, Tan KC

Jul 1 2025
Confounding factors inherent in medical images can significantly impact the causal exploration capabilities of deep learning models, resulting in compromised accuracy and diminished generalization performance. In this paper, we present an innovative methodology named CausalMixNet that employs query-mixed intra-attention and key&value-mixed inter-attention to probe causal relationships between input images and labels. To mitigate unobservable confounding factors, CausalMixNet integrates a non-local reasoning module (NLRM) and the key&value-mixed inter-attention (KVMIA) to conduct a front-door adjustment strategy. Furthermore, CausalMixNet incorporates a patch-masked ranking module (PMRM) and query-mixed intra-attention (QMIA) to enhance mediator learning, thereby facilitating causal intervention. The patch-mixing mechanism applied to the query and key&value features within QMIA and KVMIA specifically targets lesion-related feature enhancement and the inference of the average causal effect. CausalMixNet consistently outperforms existing methods, achieving superior accuracy and F1-scores across in-domain and out-of-domain scenarios on multiple datasets, with an average improvement of 3% over the closest competitor. Demonstrating robustness against noise, gender bias, and attribute bias, CausalMixNet excels in handling unobservable confounders, maintaining stable performance even in challenging conditions.
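
One plausible reading of the patch-mixing mechanism, sketched below purely for illustration: swap a random subset of patch tokens between paired samples' query features before attention. The mixing ratio, pairing scheme, and tensor layout are assumptions, not the authors' design.

```python
# Hedged illustration of "query mixing": exchange a random subset of
# patch tokens between paired samples' query features before attention.
import torch

def mix_queries(q, mix_ratio=0.3):
    """q: (batch, n_patches, dim) query features; returns a mixed copy."""
    b, n, _ = q.shape
    n_mix = int(n * mix_ratio)
    idx = torch.randperm(n)[:n_mix]                 # patch positions to swap
    partner = torch.roll(torch.arange(b), shifts=1)  # pair each sample with the next
    mixed = q.clone()
    mixed[:, idx] = q[partner][:, idx]               # exchange the chosen patches
    return mixed

q = torch.randn(4, 196, 64)   # e.g., a 14x14 patch grid with 64-dim tokens
q_mixed = mix_queries(q)
print(q_mixed.shape)          # torch.Size([4, 196, 64])
```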

Tumor grade-titude: XGBoost radiomics paves the way for RCC classification.

Ellmann S, von Rohr F, Komina S, Bayerl N, Amann K, Polifka I, Hartmann A, Sikic D, Wullich B, Uder M, Bäuerle T

Jul 1 2025
This study aimed to develop and evaluate a non-invasive XGBoost-based machine learning model using radiomic features extracted from pre-treatment CT images to differentiate grade 4 renal cell carcinoma (RCC) from lower-grade tumours. A total of 102 RCC patients who underwent contrast-enhanced CT scans were included in the analysis. Radiomic features were extracted, and a two-step feature selection methodology was applied to identify the most relevant features for classification. The XGBoost model demonstrated high performance in both training (AUC = 0.87) and testing (AUC = 0.92) sets, with no significant difference between the two (p = 0.521). The model also exhibited high sensitivity, specificity, positive predictive value, and negative predictive value. The selected radiomic features captured both the distribution of intensity values and spatial relationships, which may provide valuable insights for personalized treatment decision-making. Our findings suggest that the XGBoost model has the potential to be integrated into clinical workflows to facilitate personalized adjuvant immunotherapy decision-making, ultimately improving patient outcomes. Further research is needed to validate the model in larger, multicentre cohorts and explore the potential of combining radiomic features with other clinical and molecular data.
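
A minimal sketch of this kind of pipeline, with invented feature-selection steps and hyperparameters standing in for the study's (unspecified) two-step methodology:

```python
# Sketch: radiomic features -> two-step feature selection -> XGBoost
# classifier with AUC evaluation. Feature counts, selection steps, and
# hyperparameters are illustrative assumptions, not the study's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(102, 400))    # 102 patients, 400 synthetic radiomic features
y = rng.integers(0, 2, size=102)   # 1 = grade 4 RCC, 0 = lower grade (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: drop near-constant features; Step 2: keep top-k by ANOVA F-score.
step1 = VarianceThreshold(threshold=1e-3).fit(X_tr)
X_tr1, X_te1 = step1.transform(X_tr), step1.transform(X_te)
step2 = SelectKBest(f_classif, k=20).fit(X_tr1, y_tr)
X_tr2, X_te2 = step2.transform(X_tr1), step2.transform(X_te1)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                    eval_metric="logloss")
clf.fit(X_tr2, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te2)[:, 1]))
```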

MED-NCA: Bio-inspired medical image segmentation.

Kalkhof J, Ihm N, Köhler T, Gregori B, Mukhopadhyay A

Jul 1 2025
The reliance on computationally intensive U-Net and Transformer architectures significantly limits their accessibility in low-resource environments, creating a technological divide that hinders global healthcare equity, especially in medical diagnostics and treatment planning. This divide is most pronounced in low- and middle-income countries, primary care facilities, and conflict zones. We introduced MED-NCA, Neural Cellular Automata (NCA)-based segmentation models characterized by their low parameter count, robust performance, and inherent quality control mechanisms. These features drastically lower the barriers to high-quality medical image analysis in resource-constrained settings, allowing the models to run efficiently on hardware as minimal as a Raspberry Pi or a smartphone. Building upon the foundation laid by MED-NCA, this paper extends its validation across eight distinct anatomies, including the hippocampus and prostate (MRI, 3D), liver and spleen (CT, 3D), heart and lung (X-ray, 2D), breast tumor (ultrasound, 2D), and skin lesion (image, 2D). Our comprehensive evaluation demonstrates the broad applicability and effectiveness of MED-NCA in various medical imaging contexts, matching the performance of UNet models that are two orders of magnitude larger. Additionally, we introduce NCA-VIS, a visualization tool that gives insight into the inference process of MED-NCA and allows users to test its robustness by applying various artifacts. This combination of efficiency, broad applicability, and enhanced interpretability makes MED-NCA a transformative solution for medical image analysis, fostering greater global healthcare equity by making advanced diagnostics accessible in even the most resource-limited environments.
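
For readers unfamiliar with NCAs, the sketch below shows the core update step that keeps parameter counts so low: fixed perception filters plus a small MLP shared by every cell. Channel counts and filters are illustrative and not taken from MED-NCA.

```python
# Minimal Neural Cellular Automata (NCA) update step: each cell perceives
# its neighborhood via fixed filters (identity + Sobel) and updates its
# state with a small MLP shared by all cells, applied stochastically.
import torch
import torch.nn.functional as F

C = 16  # channels per cell: image + hidden state + segmentation output

sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
identity = torch.zeros(3, 3)
identity[1, 1] = 1.0
kernels = torch.stack([identity, sobel_x, sobel_x.t()])    # (3, 3, 3)
perception_filters = kernels.repeat(C, 1, 1).unsqueeze(1)  # (3C, 1, 3, 3)

update_mlp = torch.nn.Sequential(          # shared by every cell (1x1 convs)
    torch.nn.Conv2d(3 * C, 64, 1), torch.nn.ReLU(),
    torch.nn.Conv2d(64, C, 1))

def nca_step(state, fire_rate=0.5):
    """state: (B, C, H, W). One stochastic local update of all cells."""
    perceived = F.conv2d(state, perception_filters, padding=1, groups=C)
    delta = update_mlp(perceived)
    alive = (torch.rand_like(state[:, :1]) < fire_rate).float()  # stochastic mask
    return state + delta * alive

state = torch.randn(1, C, 64, 64)
for _ in range(20):              # iterate until a segmentation emerges
    state = nca_step(state)
print(state.shape)               # torch.Size([1, 16, 64, 64])
```

Because the same tiny MLP is reused at every pixel and every step, the total parameter count stays in the thousands rather than millions, which is what makes Raspberry Pi or smartphone inference plausible.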

Automated vertebrae identification and segmentation with structural uncertainty analysis in longitudinal CT scans of patients with multiple myeloma.

Madzia-Madzou DK, Jak M, de Keizer B, Verlaan JJ, Minnema MC, Gilhuijs K

Jul 1 2025
This study aimed to optimize deep learning-based vertebrae segmentation in longitudinal CT scans of multiple myeloma patients using structural uncertainty analysis. Retrospective CT scans from 474 multiple myeloma patients were divided into a training cohort (179 patients, 349 scans, 2005-2011) and a test cohort (295 patients, 671 scans, 2012-2020). An enhanced segmentation pipeline was developed on the training cohort. It integrated vertebrae segmentation using an open-source deep learning method (Payer's) with a post-hoc structural uncertainty analysis. This analysis identified inconsistencies, automatically correcting them or flagging uncertain regions for human review. Segmentation quality was assessed through vertebral shape analysis using topology. Metrics included the identification rate, longitudinal vertebral match rate, success rate, and series success rate, evaluated across age and sex subgroups. Statistical analysis included McNemar and Wilcoxon signed-rank tests, with p < 0.05 indicating significant improvement. Payer's method achieved an identification rate of 95.8% and a success rate of 86.7%. The proposed pipeline automatically improved these metrics to 98.8% and 96.0%, respectively (p < 0.001). Additionally, 3.6% of scans were flagged for human inspection, increasing the success rate from 96.0% to 98.8% (p < 0.001). The vertebral match rate increased from 97.0% to 99.7% (p < 0.001), and the series success rate from 80.0% to 95.4% (p < 0.001). Subgroup analysis showed more consistent performance across age and sex groups. The proposed pipeline significantly outperforms Payer's method, enhancing segmentation accuracy and reducing longitudinal matching errors while minimizing evaluation workload. Its uncertainty analysis ensures robust performance, making it a valuable tool for longitudinal studies in multiple myeloma.
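
The abstract does not detail the structural uncertainty analysis, but its flavor can be conveyed with a toy consistency check like the one below; the rules and label encoding are invented for illustration, and the real analysis (topology-based shape checks, automatic correction) is much richer.

```python
# Toy post-hoc structural consistency check: verify that the vertebra
# labels found in each scan form a contiguous anatomical run and that
# labels overlap across a patient's longitudinal scans; flag anything
# else for human review.

def contiguous(labels):
    """Labels as integer levels (e.g., C1=1 ... L5=24); must form a run."""
    s = sorted(labels)
    return all(b - a == 1 for a, b in zip(s, s[1:]))

def review_patient(scans):
    """scans: list of per-scan label sets. Returns (index, reason) flags."""
    flagged = []
    for i, labels in enumerate(scans):
        if not contiguous(labels):
            flagged.append((i, "non-contiguous vertebra labels"))
    for i in range(1, len(scans)):
        # Longitudinal match: consecutive scans should share vertebrae.
        if not scans[i] & scans[i - 1]:
            flagged.append((i, "no vertebrae matched to previous scan"))
    return flagged

scans = [{10, 11, 12, 13}, {11, 12, 14}, {20, 21}]  # second scan skips level 13
print(review_patient(scans))
# [(1, 'non-contiguous vertebra labels'), (2, 'no vertebrae matched to previous scan')]
```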

TIER-LOC: Visual Query-based Video Clip Localization in fetal ultrasound videos with a multi-tier transformer.

Mishra D, Saha P, Zhao H, Hernandez-Cruz N, Patey O, Papageorghiou AT, Noble JA

Jul 1 2025
In this paper, we introduce the visual query-based task of Video Clip Localization (VQ-VCL) for medical video understanding. Specifically, we aim to retrieve, from a given input video, a video clip containing frames similar to a given exemplar frame. To solve this task, we propose a novel visual query-based video clip localization model called TIER-LOC. TIER-LOC is designed to improve video clip retrieval, especially in fine-grained videos, by extracting features at different levels, i.e., from coarse to fine-grained, referred to as TIERS. The aim is to utilize multi-tier features to detect subtle differences and to adapt to scale or resolution variations, leading to improved video clip retrieval. TIER-LOC has three main components: (1) a Multi-Tier Spatio-Temporal Transformer that fuses spatio-temporal features extracted from multiple tiers of video frames with features from multiple tiers of the visual query, enabling better video understanding; (2) a Multi-Tier, Dual-Anchor Contrastive Loss to deal with real-world annotation noise, which can be notable at event boundaries and in videos featuring highly similar objects; and (3) a Temporal Uncertainty-Aware Localization Loss designed to reduce the model's sensitivity to imprecise event boundaries. The latter is achieved by relaxing hard boundary constraints, allowing the model to learn underlying class patterns rather than being influenced by individual noisy samples. To demonstrate the efficacy of TIER-LOC, we evaluate it on two ultrasound video datasets and an open-source egocentric video dataset. First, we develop a sonographer workflow-assistive task model to detect standard-frame clips in fetal ultrasound heart sweeps. Second, we assess the model's performance in retrieving standard-frame clips for detecting fetal anomalies in routine ultrasound scans, using the large-scale PULSE dataset. Lastly, we test the model on an open-source computer vision dataset by creating a VQ-VCL fine-grained video dataset based on the Ego4D dataset. Our model outperforms the best-performing state-of-the-art model by 7%, 4%, and 4% on the three video datasets, respectively.
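
A hedged sketch of what "relaxing hard boundary constraints" might look like: supervising per-frame boundary scores against a Gaussian-smoothed target instead of a one-hot one, so slightly offset predictions are not fully penalized. This is a schematic reading of the abstract, not the authors' actual loss.

```python
# Soft-boundary localization loss sketch: replace a one-hot target at
# the annotated boundary frame with a Gaussian-smoothed distribution.
import torch

def soft_boundary_target(n_frames, boundary_idx, sigma=2.0):
    t = torch.arange(n_frames, dtype=torch.float32)
    w = torch.exp(-0.5 * ((t - boundary_idx) / sigma) ** 2)
    return w / w.sum()                  # soft distribution over frames

def boundary_loss(logits, boundary_idx, sigma=2.0):
    """logits: (n_frames,) per-frame boundary scores."""
    target = soft_boundary_target(logits.numel(), boundary_idx, sigma)
    log_probs = torch.log_softmax(logits, dim=0)
    return -(target * log_probs).sum()  # cross-entropy with the soft target

logits = torch.randn(100, requires_grad=True)  # toy per-frame scores
loss = boundary_loss(logits, boundary_idx=42)
loss.backward()
print(float(loss))
```

Larger sigma tolerates more annotation jitter at event boundaries; sigma approaching zero recovers the hard one-hot constraint.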
