
Poincaré guided geometric UNet for left atrial epicardial adipose tissue segmentation in Dixon MRI images.

Firouznia M, Ylipää E, Henningsson M, Carlhäll CJ

Jul 15 2025
Epicardial Adipose Tissue (EAT) is a recognized risk factor for cardiovascular diseases and plays a pivotal role in the pathophysiology of Atrial Fibrillation (AF). Accurate automatic segmentation of the EAT around the Left Atrium (LA) from Magnetic Resonance Imaging (MRI) data remains challenging. While Convolutional Neural Networks excel at multi-scale feature extraction using stacked convolutions, they struggle to capture long-range self-similarity and hierarchical relationships, which are essential in medical image segmentation. In this study, we present and validate PoinUNet, a deep learning model that integrates a Poincaré embedding layer into a 3D UNet to enhance LA wall and fat segmentation from Dixon MRI data. By using hyperbolic space learning, PoinUNet captures complex LA and EAT relationships and addresses class imbalance and fat geometry challenges using a new loss function. Sixty-six participants, including forty-eight AF patients, were scanned at 1.5T. Segmentation proceeded in two stages: the first network identified fat regions, while the second utilized Poincaré embeddings and convolutional layers for precise segmentation, enhanced by fat fraction maps. PoinUNet achieved a Dice Similarity Coefficient of 0.87 and a Hausdorff distance of 9.42 on the test set. This performance surpasses state-of-the-art methods, providing accurate quantification of the LA wall and LA EAT.
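
A minimal PyTorch sketch of the kind of hyperbolic embedding layer described here, projecting Euclidean UNet features onto the Poincaré ball via the exponential map at the origin; the class name, curvature parameter, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PoincareProjection(nn.Module):
    """Map Euclidean feature vectors onto the Poincare ball of curvature -c
    using the exponential map at the origin (illustrative sketch only)."""
    def __init__(self, c: float = 1.0, eps: float = 1e-5):
        super().__init__()
        self.c, self.eps = c, eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d) Euclidean features from a UNet encoder/decoder stage
        sqrt_c = self.c ** 0.5
        norm = x.norm(dim=-1, keepdim=True).clamp_min(self.eps)
        # exp_0(x) = tanh(sqrt(c) * ||x||) * x / (sqrt(c) * ||x||)
        mapped = torch.tanh(sqrt_c * norm) * x / (sqrt_c * norm)
        # keep points strictly inside the ball for numerical stability
        max_norm = (1.0 - self.eps) / sqrt_c
        scale = torch.clamp(max_norm / mapped.norm(dim=-1, keepdim=True).clamp_min(self.eps), max=1.0)
        return mapped * scale

# Example: project per-voxel features of shape (batch, voxels, channels)
feats = torch.randn(2, 1024, 32)
ball_feats = PoincareProjection(c=1.0)(feats)
```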

Assessing MRI-based Artificial Intelligence Models for Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma: A Systematic Review and Meta-analysis.

Han X, Shan L, Xu R, Zhou J, Lu M

Jul 15 2025
To evaluate the performance of magnetic resonance imaging (MRI)-based artificial intelligence (AI) in the preoperative prediction of microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC). A systematic search of PubMed, Embase, and Web of Science was conducted up to May 2025, following PRISMA guidelines. Studies using MRI-based AI models with histopathologically confirmed MVI were included. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework. Statistical synthesis used bivariate random-effects models. Twenty-nine studies were included, totaling 2838 internal and 1161 external validation cases. Pooled internal validation showed a sensitivity of 0.81 (95% CI: 0.76-0.85), specificity of 0.82 (95% CI: 0.78-0.85), diagnostic odds ratio (DOR) of 19.33 (95% CI: 13.15-28.42), and area under the curve (AUC) of 0.88 (95% CI: 0.85-0.91). External validation yielded a comparable AUC of 0.85. Traditional machine learning methods achieved higher sensitivity than deep learning approaches in both internal and external validation cohorts (both P < 0.05). Studies incorporating both radiomics and clinical features demonstrated superior sensitivity and specificity compared to radiomics-only models (P < 0.01). MRI-based AI demonstrates high performance for preoperative prediction of MVI in HCC, particularly for MRI-based models that combine multimodal imaging and clinical variables. However, substantial heterogeneity and low GRADE levels may affect the strength of the evidence, highlighting the need for methodological standardization and multicenter prospective validation to ensure clinical applicability.
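
For orientation, the pooled diagnostic odds ratio follows directly from the pooled sensitivity and specificity; the snippet below is only a back-of-the-envelope check of that relationship, not the bivariate random-effects synthesis used in the meta-analysis.

```python
# DOR = (sens / (1 - sens)) / ((1 - spec) / spec), using the pooled estimates above.
sensitivity = 0.81
specificity = 0.82

dor = (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)
print(f"DOR ~= {dor:.2f}")  # ~19.4, consistent with the reported 19.33 given rounding
```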

Identification of high-risk hepatoblastoma in the CHIC risk stratification system based on enhanced CT radiomics features.

Yang Y, Si J, Zhang K, Li J, Deng Y, Wang F, Liu H, He L, Chen X

Jul 15 2025
Survival of patients with high-risk hepatoblastoma remains low, and early identification of high-risk hepatoblastoma is critical. To investigate the clinical value of contrast-enhanced computed tomography (CECT) radiomics in predicting high-risk hepatoblastoma. Clinical and CECT imaging data were retrospectively collected from 162 children who were treated at our hospital and pathologically diagnosed with hepatoblastoma. Patients were categorized into high-risk and non-high-risk groups according to the Children's Hepatic Tumors International Collaboration - Hepatoblastoma Study (CHIC-HS). Subsequently, these cases were randomized into training and test groups in a ratio of 7:3. The region of interest (ROI) was first outlined on the pre-treatment venous-phase images, the best features were then extracted and filtered, and radiomics models were built with three machine learning methods: Bagging Decision Tree (BDT), Logistic Regression (LR), and Stochastic Gradient Descent (SGD). The AUC, 95% CI, and accuracy of each model were calculated, and model performance was evaluated by the DeLong test. The AUCs of the Bagging decision tree model were 0.966 (95% CI: 0.938-0.994) and 0.875 (95% CI: 0.77-0.98) for the training and test sets, respectively, with accuracies of 0.841 and 0.816, respectively. The logistic regression model had AUCs of 0.901 (95% CI: 0.839-0.963) and 0.845 (95% CI: 0.721-0.968) for the training and test sets, with accuracies of 0.788 and 0.735, respectively. The stochastic gradient descent model had AUCs of 0.788 (95% CI: 0.712-0.863) and 0.742 (95% CI: 0.627-0.857) for the training and test sets, with accuracies of 0.735 and 0.653, respectively. CECT-based radiomics identifies high-risk hepatoblastoma and may provide additional imaging biomarkers for identifying high-risk cases.
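
A minimal scikit-learn sketch of the three classifiers named above applied to a radiomics feature matrix; the placeholder data and hyperparameters are assumptions for illustration, and feature selection and the DeLong test are omitted.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

X = np.random.rand(162, 50)        # placeholder radiomics features (cases x features)
y = np.random.randint(0, 2, 162)   # placeholder high-risk labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "BDT": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    # "log_loss" requires scikit-learn >= 1.1 (older versions call it "log")
    "SGD": make_pipeline(StandardScaler(), SGDClassifier(loss="log_loss", random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    print(name, "test AUC:", round(roc_auc_score(y_te, scores), 3))
```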

Multimodal Radiopathomics Signature for Prediction of Response to Immunotherapy-based Combination Therapy in Gastric Cancer Using Interpretable Machine Learning.

Huang W, Wang X, Zhong R, Li Z, Zhou K, Lyu Q, Han JE, Chen T, Islam MT, Yuan Q, Ahmad MU, Chen S, Chen C, Huang J, Xie J, Shen Y, Xiong W, Shen L, Xu Y, Yang F, Xu Z, Li G, Jiang Y

Jul 15 2025
Immunotherapy has become a cornerstone in the treatment of advanced gastric cancer (GC). However, identifying reliable predictive biomarkers remains a considerable challenge. This study demonstrates the potential of integrating multimodal baseline data, including computed tomography scan images and digital H&E-stained pathology images, with biological interpretation to predict the response to immunotherapy-based combination therapy using a multicenter cohort of 298 GC patients. By employing seven machine learning approaches, we developed a radiopathomics signature (RPS) to predict treatment response and stratify prognostic risk in GC. The RPS demonstrated area under the receiver-operating-characteristic curves (AUCs) of 0.978 (95% CI, 0.950-1.000), 0.863 (95% CI, 0.744-0.982), and 0.822 (95% CI, 0.668-0.975) in the training, internal validation, and external validation cohorts, respectively, outperforming conventional biomarkers such as CPS, MSI-H, EBV, and HER-2. Kaplan-Meier analysis revealed significant differences in survival between high- and low-risk groups, especially in advanced-stage and non-surgical patients. Additionally, genetic analyses revealed that the RPS correlates with enhanced immune regulation pathways and increased infiltration of memory B cells. The interpretable RPS provides accurate predictions for treatment response and prognosis in GC and holds potential for guiding more precise, patient-specific treatment strategies while offering insights into immune-related mechanisms.
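
A minimal sketch of the risk-stratification step described here (a median split on the signature score followed by Kaplan-Meier curves and a log-rank comparison), written with the lifelines package as an assumed tool; the variable names, cut-off, and placeholder data are not from the study.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rps_score = np.random.rand(298)            # placeholder signature score per patient
time = np.random.exponential(24, 298)      # placeholder follow-up time (months)
event = np.random.randint(0, 2, 298)       # 1 = event observed, 0 = censored

high = rps_score >= np.median(rps_score)   # assumed median cut-off for high vs. low risk
km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
km_high.fit(time[high], event[high], label="high risk")
km_low.fit(time[~high], event[~high], label="low risk")

result = logrank_test(time[high], time[~high],
                      event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank p-value:", result.p_value)
```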

LUMEN: A Deep Learning Pipeline for Analysis of the 3D Morphology of the Cerebral Lenticulostriate Arteries from Time-of-Flight 7T MRI.

Li R, Chatterjee S, Jiaerken Y, Zhou X, Radhakrishna C, Benjamin P, Nannoni S, Tozer DJ, Markus HS, Rodgers CT

Jul 15 2025
The lenticulostriate arteries (LSAs) supply critical subcortical brain structures and are affected in cerebral small vessel disease (CSVD). Changes in their morphology are linked to cardiovascular risk factors and may indicate early pathology. 7T Time-of-Flight MR angiography (TOF-MRA) enables clear LSA visualisation. We aimed to develop a semi-automated pipeline for quantifying 3D LSA morphology from 7T TOF-MRA in CSVD patients. We used data from a local 7T CSVD study to create a pipeline, LUMEN, comprising two stages: vessel segmentation and LSA quantification. For segmentation, we fine-tuned a deep learning model, DS6, and compared it against nnU-Net and a Frangi-filter pipeline, MSFDF. For quantification, centrelines of LSAs within the basal ganglia were extracted to compute branch counts, length, tortuosity, and maximum curvature. This pipeline was applied to 69 subjects, with results compared to traditional analysis measuring LSA morphology on 2D coronal maximum intensity projection (MIP) images. For vessel segmentation, fine-tuned DS6 achieved the highest test Dice score (0.814±0.029) and sensitivity, whereas nnU-Net achieved the best balanced average Hausdorff distance and precision. Visual inspection confirmed that DS6 was most sensitive in detecting LSAs with weak signals. Across 69 subjects, the pipeline with DS6 identified 23.5±8.5 LSA branches. Branch length inside the basal ganglia was 26.4±3.5 mm, and tortuosity was 1.5±0.1. Extracted LSA metrics from 2D MIP analysis and our 3D analysis showed fair-to-moderate correlations. Outliers highlighted the added value of 3D analysis. This open-source deep-learning-based pipeline offers a validated tool for quantifying 3D LSA morphology from 7T TOF-MRA in CSVD patients for clinical research.
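
A minimal NumPy sketch of two of the centreline metrics reported (tortuosity and maximum curvature) for a single branch given as ordered 3D points in mm; the discrete circumcircle-based curvature is a generic approximation, not the LUMEN implementation.

```python
import numpy as np

def branch_metrics(points: np.ndarray):
    """points: (N, 3) ordered centreline coordinates of one LSA branch, in mm."""
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    path_length = seg_len.sum()
    chord = np.linalg.norm(points[-1] - points[0])
    tortuosity = path_length / chord                    # 1.0 = perfectly straight

    # discrete curvature = 1/R of the circumcircle of consecutive point triplets
    curvatures = []
    for p0, p1, p2 in zip(points[:-2], points[1:-1], points[2:]):
        a, b, c = (np.linalg.norm(p1 - p0), np.linalg.norm(p2 - p1), np.linalg.norm(p2 - p0))
        area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
        curvatures.append(4.0 * area / (a * b * c + 1e-12))
    return path_length, tortuosity, max(curvatures)

pts = np.cumsum(np.random.rand(50, 3), axis=0)          # placeholder centreline
print(branch_metrics(pts))
```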

Learning homeomorphic image registration via conformal-invariant hyperelastic regularisation.

Zou J, Debroux N, Liu L, Qin J, Schönlieb CB, Aviles-Rivero AI

Jul 15 2025
Deformable image registration is a fundamental task in medical image analysis and plays a crucial role in a wide range of clinical applications. Recently, deep learning-based approaches have been widely studied for deformable medical image registration and achieved promising results. However, existing deep learning image registration techniques do not theoretically guarantee topology-preserving transformations. This is a key property to preserve anatomical structures and achieve plausible transformations that can be used in real clinical settings. We propose a novel framework for deformable image registration. Firstly, we introduce a novel regulariser based on conformal-invariant properties in a nonlinear elasticity setting. Our regulariser enforces the deformation field to be smooth, invertible and orientation-preserving. More importantly, we strictly guarantee topology preservation, yielding a clinically meaningful registration. Secondly, we boost the performance of our regulariser through coordinate MLPs, where one can view the to-be-registered images as continuously differentiable entities. We demonstrate, through numerical and visual experiments, that our framework is able to outperform current techniques for image registration.
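
A minimal PyTorch sketch of the coordinate-MLP idea: a small network maps normalised (x, y, z) coordinates to a displacement, so the deformation field is a continuously differentiable function of position; the width, depth, and activation are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DeformationMLP(nn.Module):
    """phi(x) = x + u(x), with the displacement u predicted by an MLP over coordinates."""
    def __init__(self, hidden: int = 128, layers: int = 4):
        super().__init__()
        blocks, in_dim = [], 3
        for _ in range(layers):
            blocks += [nn.Linear(in_dim, hidden), nn.SiLU()]
            in_dim = hidden
        blocks.append(nn.Linear(hidden, 3))          # 3D displacement
        self.net = nn.Sequential(*blocks)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) voxel coordinates normalised to [-1, 1]
        return coords + self.net(coords)

coords = torch.rand(4096, 3) * 2 - 1                 # placeholder sample of coordinates
warped = DeformationMLP()(coords)                    # deformed positions, same shape
```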

Region Uncertainty Estimation for Medical Image Segmentation with Noisy Labels.

Han K, Wang S, Chen J, Qian C, Lyu C, Ma S, Qiu C, Sheng VS, Huang Q, Liu Z

Jul 14 2025
The success of deep learning in 3D medical image segmentation hinges on training with a large dataset of fully annotated 3D volumes, which are difficult and time-consuming to acquire. Although recent foundation models (e.g., segment anything model, SAM) can utilize sparse annotations to reduce annotation costs, segmentation tasks involving organs and tissues with blurred boundaries remain challenging. To address this issue, we propose a region uncertainty estimation framework for Computed Tomography (CT) image segmentation using noisy labels. Specifically, we propose a sample-stratified training strategy that stratifies samples according to their varying label quality, prioritizing confident and fine-grained information at each training stage. This sample-to-voxel level processing enables more reliable supervision to propagate to noisily labelled data, thus effectively mitigating the impact of noisy annotations. Moreover, we design a boundary-guided regional uncertainty estimation module that works alongside the sample-stratified training to assist in evaluating sample confidence. Experiments conducted across multiple CT datasets demonstrate the superiority of our proposed method over several competitive approaches under various noise conditions. Our proposed reliable label propagation strategy not only significantly reduces the cost of medical image annotation and of training robust models but also improves segmentation performance in scenarios with imperfect annotations, paving the way towards applying medical segmentation foundation models in low-resource and remote settings. Code will be available at https://github.com/KHan-UJS/NoisyLabel.
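
A minimal sketch of the sample-stratification idea: rank training volumes by an estimated label-quality score and feed cleaner tiers into earlier training stages; the quality proxy, tier count, and staging are assumptions for illustration, not the authors' method.

```python
import numpy as np

def stratify_samples(quality_scores: np.ndarray, n_tiers: int = 3):
    """Split sample indices into tiers from highest to lowest estimated label quality."""
    order = np.argsort(-quality_scores)               # best-labelled volumes first
    return np.array_split(order, n_tiers)

# e.g., per-volume agreement (Dice) between a preliminary model and the noisy label
quality = np.random.rand(200)                         # placeholder scores
for stage, idx in enumerate(stratify_samples(quality), start=1):
    print(f"stage {stage}: train on {len(idx)} volumes, "
          f"mean label quality {quality[idx].mean():.2f}")
```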

Self-supervised Upsampling for Reconstructions with Generalized Enhancement in Photoacoustic Computed Tomography.

Deng K, Luo Y, Zuo H, Chen Y, Gu L, Liu MY, Lan H, Luo J, Ma C

Jul 14 2025
Photoacoustic computed tomography (PACT) is an emerging hybrid imaging modality with potential applications in biomedicine. A major roadblock to the widespread adoption of PACT is the limited number of detectors, which gives rise to spatial aliasing and manifests as streak artifacts in the reconstructed image. A brute-force solution to the problem is to increase the number of detectors, which, however, is often undesirable due to escalating costs. In this study, we present a novel self-supervised learning approach to overcome this long-standing challenge. We found that small blocks of PACT channel data show similarity at various downsampling rates. Based on this observation, a neural network trained on downsampled data can reliably perform accurate interpolation without requiring densely sampled ground truth data, which is typically unavailable in practice. Our method has undergone validation through numerical simulations, controlled phantom experiments, as well as ex vivo and in vivo animal tests, across multiple PACT systems. We have demonstrated that our technique provides an effective and cost-efficient solution to address the under-sampling issue in PACT, thereby enhancing the capabilities of this imaging technology.
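
A minimal PyTorch sketch of the self-supervised pairing idea: because blocks of channel data look similar across downsampling rates, input/target pairs can be built from the acquired sparse data alone by downsampling it further; the toy CNN, tensor shapes, and interpolation scheme are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small refinement CNN applied after naive interpolation along the detector axis.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

acquired = torch.randn(2, 1, 128, 512)   # (batch, 1, detectors, time samples), placeholder

for step in range(50):
    target = acquired                                  # the sparse data actually measured
    inp = acquired[:, :, ::2, :]                       # drop every other detector (2x downsampling)
    inp_up = F.interpolate(inp, size=target.shape[-2:], mode="bilinear", align_corners=False)
    loss = F.mse_loss(net(inp_up), target)             # learn to restore the dropped detectors
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, the same network upsamples the acquired data itself (128 -> 256 detectors).
dense = net(F.interpolate(acquired, size=(256, 512), mode="bilinear", align_corners=False))
```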

Associations of Computerized Tomography-Based Body Composition and Food Insecurity in Bariatric Surgery Patients.

Sizemore JA, Magudia K, He H, Landa K, Bartholomew AJ, Howell TC, Michaels AD, Fong P, Greenberg JA, Wilson L, Palakshappa D, Seymour KA

Jul 14 2025
Food insecurity (FI) is associated with increased adiposity and obesity-related medical conditions, and body composition can affect metabolic risk. Bariatric surgery effectively treats obesity and metabolic diseases. The association of FI with baseline computerized tomography (CT)-based body composition and bariatric surgery outcomes was investigated in this exploratory study. Fifty-four retrospectively identified adults underwent bariatric surgery with a preoperative CT scan between 2017 and 2019, completed a six-item food security survey, and had body composition measured by bioelectrical impedance analysis (BIA). Skeletal muscle, visceral fat, and subcutaneous fat areas were determined from abdominal CT and normalized to published age, sex, and race reference values. Anthropometric data, related medical conditions, and medications were collected preoperatively and at 6 and 12 months postoperatively. Patients were stratified into food security (FS) or FI based on survey responses. Fourteen (26%) patients were categorized as FI. Patients with FI had lower skeletal muscle area and higher subcutaneous fat area than patients with FS on baseline CT exam (p < 0.05). There was no difference in baseline BIA between patients with FS and FI. The two groups had similar weight loss, reduction in obesity-related medications, and healthcare utilization following bariatric surgery at 6 and 12 months postoperatively. Patients with FI had higher subcutaneous fat and lower skeletal muscle than patients with FS by baseline CT exam, findings which were not detected by BIA. CT analysis enabled by an artificial intelligence workflow offers more precise and detailed body composition data.
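
A minimal sketch of normalising raw CT body-composition areas against published age-, sex-, and race-specific reference values, expressed here as z-scores; the reference means and standard deviations below are placeholders, not the study's reference tables.

```python
# (mean_cm2, sd_cm2) for one hypothetical demographic stratum
reference = {
    "skeletal_muscle": (150.0, 25.0),
    "visceral_fat": (120.0, 60.0),
    "subcutaneous_fat": (200.0, 90.0),
}

def normalize(measured_cm2: dict) -> dict:
    """Convert raw areas (cm^2) to z-scores relative to the reference stratum."""
    return {k: (measured_cm2[k] - mean) / sd for k, (mean, sd) in reference.items()}

print(normalize({"skeletal_muscle": 130.0, "visceral_fat": 180.0, "subcutaneous_fat": 260.0}))
```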

Automated multiclass segmentation of liver vessel structures in CT images using deep learning approaches: a liver surgery pre-planning tool.

Sarkar S, Rahmani M, Farnia P, Ahmadian A, Mozayani N

Jul 14 2025
Accurate liver vessel segmentation is essential for effective liver surgery pre-planning and for reducing surgical risks, since it enables the precise localization and comprehensive assessment of complex vessel structures. Manual liver vessel segmentation is a time-intensive process reliant on operator expertise and skill. The complex, tree-like architecture of hepatic and portal veins, which are interwoven and anatomically variable, further complicates this challenge. This study addresses these challenges by proposing the UNETR (U-Net Transformers) architecture for the multi-class segmentation of portal and hepatic veins in liver CT images. UNETR leverages a transformer-based encoder to effectively capture long-range dependencies, overcoming the limitations of convolutional neural networks (CNNs) in handling complex anatomical structures. The proposed method was evaluated on contrast-enhanced CT images from the IRCAD dataset as well as a local dataset developed at a hospital. On the local dataset, the UNETR model achieved Dice coefficients of 49.71% for portal veins, 69.39% for hepatic veins, and 76.74% for overall vessel segmentation, while reaching a Dice coefficient of 62.54% for vessel segmentation on the IRCAD dataset. These results highlight the method's effectiveness in identifying complex vessel structures across diverse datasets. These findings underscore the critical role of advanced architectures and precise annotations in improving segmentation accuracy. This work provides a foundation for future advancements in automated liver surgery pre-planning, with the potential to enhance clinical outcomes significantly. The implementation code is available on GitHub: https://github.com/saharsarkar/Multiclass-Vessel-Segmentation .
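
A minimal NumPy sketch of the per-class Dice coefficient behind the portal, hepatic, and overall vessel scores; the label convention (0 = background, 1 = portal, 2 = hepatic) is an assumption for illustration.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice coefficient for one class in a labelled volume."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

pred = np.random.randint(0, 3, (64, 64, 64))   # placeholder predicted label volume
gt = np.random.randint(0, 3, (64, 64, 64))     # placeholder ground-truth volume
print("portal:", dice(pred, gt, 1), "hepatic:", dice(pred, gt, 2))
print("overall vessels:", dice((pred > 0).astype(int), (gt > 0).astype(int), 1))
```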