Page 90 of 3143139 results

Region Uncertainty Estimation for Medical Image Segmentation with Noisy Labels.

Han K, Wang S, Chen J, Qian C, Lyu C, Ma S, Qiu C, Sheng VS, Huang Q, Liu Z

PubMed · Jul 14, 2025
The success of deep learning in 3D medical image segmentation hinges on training with large datasets of fully annotated 3D volumes, which are difficult and time-consuming to acquire. Although recent foundation models (e.g., the segment anything model, SAM) can utilize sparse annotations to reduce annotation costs, segmentation tasks involving organs and tissues with blurred boundaries remain challenging. To address this issue, we propose a region uncertainty estimation framework for computed tomography (CT) image segmentation with noisy labels. Specifically, we propose a sample-stratified training strategy that stratifies samples according to their label quality, prioritizing confident and fine-grained information at each training stage. This sample-to-voxel level processing enables more reliable supervision to propagate to noisily labeled data, effectively mitigating the impact of noisy annotations. Moreover, we design a boundary-guided regional uncertainty estimation module that complements the sample-stratified training by helping to evaluate sample confidence. Experiments conducted across multiple CT datasets demonstrate the superiority of the proposed method over several competitive approaches under various noise conditions. The proposed reliable label propagation strategy not only significantly reduces the cost of medical image annotation and enables robust model training but also improves segmentation performance in scenarios with imperfect annotations, paving the way for applying medical segmentation foundation models in low-resource and remote settings. Code will be available at https://github.com/KHan-UJS/NoisyLabel.
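The sample-stratified training idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: the confidence scores and stage thresholds below are assumptions, and the real framework works at both sample and voxel level.

```python
# Hypothetical sketch of sample-stratified training order: samples whose
# labels look most reliable are scheduled first, so cleaner supervision
# propagates before noisier annotations are introduced.

def stratify_samples(samples, thresholds=(0.9, 0.7)):
    """Split (id, confidence) pairs into training stages: clean -> moderate -> noisy."""
    hi, mid = thresholds
    stages = {"clean": [], "moderate": [], "noisy": []}
    for sample_id, conf in samples:
        if conf >= hi:
            stages["clean"].append(sample_id)
        elif conf >= mid:
            stages["moderate"].append(sample_id)
        else:
            stages["noisy"].append(sample_id)
    return stages

samples = [("a", 0.95), ("b", 0.75), ("c", 0.40), ("d", 0.92)]
stages = stratify_samples(samples)
```

A curriculum built this way would train on `stages["clean"]` first, then progressively mix in the lower-confidence groups.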

Self-supervised Upsampling for Reconstructions with Generalized Enhancement in Photoacoustic Computed Tomography.

Deng K, Luo Y, Zuo H, Chen Y, Gu L, Liu MY, Lan H, Luo J, Ma C

PubMed · Jul 14, 2025
Photoacoustic computed tomography (PACT) is an emerging hybrid imaging modality with potential applications in biomedicine. A major roadblock to the widespread adoption of PACT is the limited number of detectors, which gives rise to spatial aliasing and manifests as streak artifacts in the reconstructed image. A brute-force solution to the problem is to increase the number of detectors, which, however, is often undesirable due to escalated costs. In this study, we present a novel self-supervised learning approach to overcome this long-standing challenge. We found that small blocks of PACT channel data show similarity at various downsampling rates. Based on this observation, a neural network trained on downsampled data can reliably perform accurate interpolation without requiring densely sampled ground truth data, which is typically unavailable in real practice. Our method has undergone validation through numerical simulations, controlled phantom experiments, as well as ex vivo and in vivo animal tests, across multiple PACT systems. We have demonstrated that our technique provides an effective and cost-efficient solution to address the under-sampling issue in PACT, thereby enhancing the capabilities of this imaging technology.
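The self-supervision trick described above can be sketched by building training pairs from the measured data alone: further downsample the measured channels and train a network to map the sparser copy back to the measured copy. The array layout and factor-of-2 scheme below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: (sparse input, denser target) pairs derived from measured
# PACT channel data, with no densely sampled ground truth required. At
# inference, the trained interpolator is applied to the measured data itself.

def make_training_pairs(channel_data, factor=2):
    """channel_data: list of per-detector signal traces. Returns (input, target)
    where the input keeps every `factor`-th detector of the target."""
    target = channel_data
    inputs = channel_data[::factor]
    return inputs, target

signals = [[0.1], [0.2], [0.3], [0.4]]   # 4 detectors, 1 time sample each
sparse, dense = make_training_pairs(signals, factor=2)
```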

Associations of Computerized Tomography-Based Body Composition and Food Insecurity in Bariatric Surgery Patients.

Sizemore JA, Magudia K, He H, Landa K, Bartholomew AJ, Howell TC, Michaels AD, Fong P, Greenberg JA, Wilson L, Palakshappa D, Seymour KA

PubMed · Jul 14, 2025
Food insecurity (FI) is associated with increased adiposity and obesity-related medical conditions, and body composition can affect metabolic risk. Bariatric surgery effectively treats obesity and metabolic diseases. The association of FI with baseline computed tomography (CT)-based body composition and bariatric surgery outcomes was investigated in this exploratory study. Fifty-four retrospectively identified adults underwent bariatric surgery between 2017 and 2019, had a preoperative CT scan, completed a six-item food security survey, and had body composition measured by bioelectrical impedance analysis (BIA). Skeletal muscle, visceral fat, and subcutaneous fat areas were determined from abdominal CT and normalized to published age, sex, and race reference values. Anthropometric data, related medical conditions, and medications were collected preoperatively and at 6 and 12 months postoperatively. Patients were stratified into food security (FS) or FI groups based on survey responses. Fourteen (26%) patients were categorized as FI. Patients with FI had lower skeletal muscle area and higher subcutaneous fat area than patients with FS on baseline CT exam (p < 0.05). There was no difference in baseline BIA between patients with FS and FI. The two groups had similar weight loss, reduction in obesity-related medications, and healthcare utilization at 6 and 12 months after bariatric surgery. Patients with FI had higher subcutaneous fat and lower skeletal muscle than patients with FS on baseline CT exam, findings which were not detected by BIA. CT analysis enabled by an artificial intelligence workflow offers more precise and detailed body composition data.
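Normalizing a CT-derived area to published demographic reference values, as the study describes, amounts to a z-score against a reference distribution. The reference mean and standard deviation below are placeholders, not values from any published table.

```python
# Hedged sketch of normalizing a body-composition area (e.g., skeletal muscle
# in cm^2) against an age/sex/race-matched reference distribution.

def normalize_area(area_cm2, ref_mean, ref_sd):
    """Return a z-score relative to the demographic reference distribution."""
    return (area_cm2 - ref_mean) / ref_sd

z = normalize_area(area_cm2=150.0, ref_mean=120.0, ref_sd=20.0)
```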

Automated multiclass segmentation of liver vessel structures in CT images using deep learning approaches: a liver surgery pre-planning tool.

Sarkar S, Rahmani M, Farnia P, Ahmadian A, Mozayani N

PubMed · Jul 14, 2025
Accurate liver vessel segmentation is essential for effective liver surgery pre-planning and for reducing surgical risks, since it enables the precise localization and extensive assessment of complex vessel structures. Manual liver vessel segmentation is a time-intensive process reliant on operator expertise and skill. The complex, tree-like architecture of hepatic and portal veins, which are interwoven and anatomically variable, further complicates this challenge. This study addresses these challenges by proposing the UNETR (U-Net Transformers) architecture for the multi-class segmentation of portal and hepatic veins in liver CT images. UNETR leverages a transformer-based encoder to effectively capture long-range dependencies, overcoming the limitations of convolutional neural networks (CNNs) in handling complex anatomical structures. The proposed method was evaluated on contrast-enhanced CT images from the IRCAD dataset as well as a locally developed hospital dataset. On the local dataset, the UNETR model achieved Dice coefficients of 49.71% for portal veins, 69.39% for hepatic veins, and 76.74% for overall vessel segmentation, while reaching a Dice coefficient of 62.54% for vessel segmentation on the IRCAD dataset. These results highlight the method's effectiveness in identifying complex vessel structures across diverse datasets, and underscore the critical role of advanced architectures and precise annotations in improving segmentation accuracy. This work provides a foundation for future advancements in automated liver surgery pre-planning, with the potential to significantly enhance clinical outcomes. The implementation code is available on GitHub: https://github.com/saharsarkar/Multiclass-Vessel-Segmentation.
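The per-class Dice coefficients reported above can be computed as follows. This is a generic sketch of the metric, not the study's evaluation code; the label encoding (0 = background, 1 = portal vein, 2 = hepatic vein) is an assumption.

```python
# Per-class Dice coefficient over flattened voxel labels:
# Dice = 2|P ∩ T| / (|P| + |T|) for one class.

def dice(pred, truth, label):
    p = [v == label for v in pred]
    t = [v == label for v in truth]
    inter = sum(a and b for a, b in zip(p, t))
    denom = sum(p) + sum(t)
    return 2 * inter / denom if denom else 1.0

pred  = [0, 1, 1, 2, 2, 0]
truth = [0, 1, 2, 2, 2, 0]
d_portal  = dice(pred, truth, 1)
d_hepatic = dice(pred, truth, 2)
```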

Feasibility study of fully automatic measurement of adenoid size on lateral neck and head radiographs using deep learning.

Hao D, Tang L, Li D, Miao S, Dong C, Cui J, Gao C, Li J

PubMed · Jul 14, 2025
The objective and reliable quantification of adenoid size is pivotal for precise clinical diagnosis and the formulation of effective treatment strategies. Conventional manual measurement techniques, however, are often labor-intensive and time-consuming. The aim was to develop and validate a fully automated system for measuring adenoid size using deep learning (DL) on lateral head and neck radiographs. In this retrospective study, we analyzed 711 lateral head and neck radiographs collected from two centers between February and July 2023. A DL-based adenoid size measurement system was developed, utilizing Fujioka's method. The system employed the RTMDet and RTMPose networks for accurate landmark detection, and mathematical formulas were applied to determine adenoid size. To evaluate the consistency and reliability of the system, we employed the intra-class correlation coefficient (ICC), mean absolute difference (MAD), and Bland-Altman plots as key assessment metrics. The DL-based system exhibited high reliability in the prediction of adenoid, nasopharynx, and adenoid-nasopharyngeal ratio measurements, showing strong agreement with the reference standard. The results indicated an ICC for adenoid measurements of 0.902 [95% CI, 0.872-0.925], with a MAD of 1.189 and a root mean square (RMS) of 1.974. For nasopharynx measurements, the ICC was 0.868 [95% CI, 0.828-0.899], with a MAD of 1.671 and an RMS of 1.916. Additionally, the adenoid-nasopharyngeal ratio measurements yielded an ICC of 0.911 [95% CI, 0.883-0.932], a MAD of 0.054, and an RMS of 0.076. The developed DL-based system effectively automates the measurement of the adenoid, nasopharynx, and adenoid-nasopharyngeal ratio on lateral head and neck radiographs with high reliability.
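Two of the agreement metrics reported above, MAD and the Bland-Altman bias with limits of agreement, can be sketched directly. The measurement values below are illustrative, not from the study.

```python
# Mean absolute difference (MAD) and Bland-Altman bias / limits of agreement
# between automated and reference measurements.
from statistics import mean, stdev

def mad(auto, ref):
    return mean(abs(a - r) for a, r in zip(auto, ref))

def bland_altman(auto, ref):
    """Return (bias, lower limit, upper limit) = mean diff +/- 1.96 SD."""
    diffs = [a - r for a, r in zip(auto, ref)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

auto = [20.1, 18.9, 22.4, 19.5]
ref  = [19.8, 19.2, 21.9, 19.9]
m = mad(auto, ref)
bias, lo, hi = bland_altman(auto, ref)
```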

Deep Learning-Based Prediction for Bone Cement Leakage During Percutaneous Kyphoplasty Using Preoperative Computed Tomography: Model Development and Validation.

Chen R, Wang T, Liu X, Xi Y, Liu D, Xie T, Wang A, Fan N, Yuan S, Du P, Jiao S, Zhang Y, Zang L

PubMed · Jul 14, 2025
Retrospective study. To develop a deep learning (DL) model to predict bone cement leakage (BCL) subtypes during percutaneous kyphoplasty (PKP) using preoperative computed tomography (CT), and to employ multicenter data to evaluate the effectiveness and generalizability of the model. DL excels at automatically extracting features from medical images; however, there is a lack of models that can predict BCL subtypes from preoperative images. This study included an internal dataset for DL model training, validation, and testing, as well as an external dataset for additional model testing. Our model integrated a segment localization module, based on vertebral segmentation via three-dimensional (3D) U-Net, with a classification module based on 3D ResNet-50. Vertebral level mismatch rates were calculated, and confusion matrices were used to compare the performance of the DL model with that of spine surgeons in predicting BCL subtypes. Furthermore, the simple (unweighted) Cohen's kappa coefficient was used to assess the reliability of the spine surgeons and the DL model against the reference standard. A total of 901 patients comprising 997 eligible segments were included in the internal dataset. The model demonstrated a vertebral segment identification accuracy of 96.9%. It also showed high area under the curve (AUC) values of 0.734-0.831 and sensitivities of 0.649-0.900 for BCL prediction in the internal dataset. Similarly favorable AUC values of 0.709-0.818 and sensitivities of 0.706-0.857 were observed in the external dataset, indicating the stability and generalizability of the model. Moreover, the model outperformed nonexpert spine surgeons in predicting BCL subtypes, except for type II. The model achieved satisfactory accuracy, reliability, generalizability, and interpretability in predicting BCL subtypes, outperforming nonexpert spine surgeons.
This study offers valuable insights for assessing osteoporotic vertebral compression fractures, thereby aiding preoperative surgical decision-making. Level of Evidence: 3.
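The simple (unweighted) Cohen's kappa used above to compare raters against the reference standard can be sketched as follows. The subtype labels are illustrative, not the study's data.

```python
# Unweighted Cohen's kappa: chance-corrected agreement between two raters.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[l] * cb[l] for l in labels) / (n * n)        # chance agreement
    return (po - pe) / (1 - pe)

model = ["I", "II", "I", "III", "II", "I"]   # hypothetical model predictions
ref   = ["I", "II", "I", "II",  "II", "I"]   # hypothetical reference standard
kappa = cohens_kappa(model, ref)
```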

Digitalization of Prison Records Supports Artificial Intelligence Application.

Whitford WG

PubMed · Jul 14, 2025
Artificial intelligence (AI)-empowered data processing tools improve our ability to assess, measure, and enhance medical interventions. AI-based tools automate the extraction of data from histories, test results, imaging, prescriptions, and treatment outcomes, and transform them into unified, accessible records. They are powerful in converting unstructured data such as clinical notes, magnetic resonance images, and electroencephalograms into structured, actionable formats. For example, they can extract and classify diseases, symptoms, medications, treatments, and dates from even incomplete and fragmented clinical notes, pathology reports, images, and histological markers. Especially because the demographics within correctional facilities greatly diverge from the general population, the adoption of electronic health records and AI-enabled data processing will play a crucial role in improving disease detection, treatment management, and the overall efficiency of health care within prison systems.

Pathological omics prediction of early and advanced colon cancer based on artificial intelligence model.

Wang Z, Wu Y, Li Y, Wang Q, Yi H, Shi H, Sun X, Liu C, Wang K

PubMed · Jul 14, 2025
Artificial intelligence (AI) models based on pathological slides have great potential to assist pathologists in disease diagnosis and have become an important research direction in the field of medical image analysis. The aim of this study was to develop an AI model based on whole-slide images to predict the stage of colon cancer. In this study, a total of 100 pathological slides of colon cancer patients were collected as the training set, and 421 pathological slides of colon cancer were downloaded from The Cancer Genome Atlas (TCGA) database as the external validation set. The CellProfiler and CLAM tools were used to extract pathological features, and machine learning and deep learning algorithms were used to construct prediction models. The area under the curve (AUC) of the best machine learning model was 0.78 in the internal test set and 0.68 in the external test set. The AUC of the deep learning model was 0.889 in the internal test set, with an accuracy of 0.854, and 0.700 in the external test set. The prediction model has the potential to generalize as part of a combined pathological-omics diagnostic workflow. Compared with machine learning, deep learning achieves higher image recognition accuracy, and the resulting model performs better.
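The AUC metric used throughout the comparison above has a simple rank-based (Mann-Whitney) reading: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A generic sketch, with made-up scores and labels:

```python
# AUC via the Mann-Whitney formulation; ties count as 0.5.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.7]
labels = [1,   0,   0,   1,   0]
a = auc(scores, labels)
```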

Is a score enough? Pitfalls and solutions for AI severity scores.

Bernstein MH, van Assen M, Bruno MA, Krupinski EA, De Cecco C, Baird GL

PubMed · Jul 14, 2025
Severity scores, which often refer to the likelihood or probability of a pathology, are commonly provided by artificial intelligence (AI) tools in radiology. However, little attention has been given to the use of these AI scores, and there is a lack of transparency into how they are generated. In this comment, we draw on key principles from psychological science and statistics to elucidate six human factors limitations of AI scores that undermine their utility: (1) variability across AI systems; (2) variability within AI systems; (3) variability between radiologists; (4) variability within radiologists; (5) unknown distribution of AI scores; and (6) perceptual challenges. We hypothesize that these limitations can be mitigated by providing, for each score used as a threshold, the false discovery rate and false omission rate. We discuss how this hypothesis could be empirically tested. KEY POINTS: The radiologist-AI interaction has not been given sufficient attention. The utility of AI scores is limited by six key human factors limitations. We propose a hypothesis for how to mitigate these limitations by using the false discovery rate and false omission rate.
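The proposed mitigation, reporting the false discovery rate (FDR = FP / (FP + TP)) and false omission rate (FOR = FN / (FN + TN)) at each score threshold, can be sketched as follows. The scores and ground-truth labels are made up for illustration.

```python
# FDR and FOR for an AI severity score treated as a decision threshold.

def fdr_for_at_threshold(scores, truth, threshold):
    flagged = [s >= threshold for s in scores]
    tp = sum(f and t for f, t in zip(flagged, truth))
    fp = sum(f and not t for f, t in zip(flagged, truth))
    fn = sum((not f) and t for f, t in zip(flagged, truth))
    tn = sum((not f) and not t for f, t in zip(flagged, truth))
    fdr = fp / (fp + tp) if (fp + tp) else 0.0
    fomr = fn / (fn + tn) if (fn + tn) else 0.0
    return fdr, fomr

scores = [0.9, 0.8, 0.6, 0.4, 0.2]
truth  = [1,   0,   1,   0,   0]
fdr, fomr = fdr_for_at_threshold(scores, truth, 0.5)
```

Reporting these two rates alongside the raw score tells the radiologist what fraction of flagged cases are false alarms and what fraction of cleared cases are missed disease.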

Human-centered explainability evaluation in clinical decision-making: a critical review of the literature.

Bauer JM, Michalowski M

PubMed · Jul 14, 2025
This review paper comprehensively summarizes healthcare provider (HCP) evaluation of explanations produced by explainable artificial intelligence methods to support point-of-care, patient-specific, clinical decision-making (CDM) within medical settings. It highlights the critical need to incorporate human-centered (HCP) evaluation approaches based on their CDM needs, processes, and goals. The review was conducted in Ovid Medline and Scopus databases, following the Institute of Medicine's methodological standards and PRISMA guidelines. An individual study appraisal was conducted using design-specific appraisal tools. MaxQDA software was used for data extraction and evidence table procedures. Of the 2673 unique records retrieved, 25 records were included in the final sample. Studies were excluded if they did not meet this review's definitions of HCP evaluation (1156), healthcare use (995), explainable AI (211), and primary research (285), and if they were not available in English (1). The sample focused primarily on physicians and diagnostic imaging use cases and revealed wide-ranging evaluation measures. The synthesis of sampled studies suggests a potential common measure of clinical explainability with 3 indicators of interpretability, fidelity, and clinical value. There is an opportunity to extend the current model-centered evaluation approaches to incorporate human-centered metrics, supporting the transition into practice. Future research should aim to clarify and expand key concepts in HCP evaluation, propose a comprehensive evaluation model positioned in current theoretical knowledge, and develop a valid instrument to support comparisons.