
Non-iterative and uncertainty-aware MRI-based liver fat estimation using an unsupervised deep learning method.

Meneses JP, Tejos C, Makalic E, Uribe S

pubmed · Sep 17 2025
Liver proton density fat fraction (PDFF), the ratio between fat-only and overall proton densities, is an extensively validated biomarker associated with several diseases. In recent years, numerous deep learning (DL)-based methods for estimating PDFF have been proposed to optimize acquisition and post-processing times without sacrificing accuracy compared to conventional methods. However, the lack of interpretability and the often poor generalizability of these DL-based models undermine the adoption of such techniques in clinical practice. In this work, we propose an Artificial Intelligence-based Decomposition of water and fat with Echo Asymmetry and Least-squares (AI-DEAL) method, designed to estimate both PDFF and the associated uncertainty maps. Once trained, AI-DEAL performs one-shot MRI water-fat separation by first calculating the nonlinear confounder variables, R2* and the off-resonance field. It then employs a weighted least squares approach to compute water-only and fat-only signals, along with their corresponding covariance matrix, which are subsequently used to derive the PDFF and its associated uncertainty. We validated our method using in vivo liver CSE-MRI, a fat-water phantom, and a numerical phantom. AI-DEAL demonstrated PDFF biases of 0.25% and -0.12% at two liver ROIs, outperforming state-of-the-art deep learning-based techniques. Although trained using in vivo data, our method exhibited PDFF biases of -3.43% in the fat-water phantom and -0.22% in the numerical phantom with no added noise; the latter bias remained approximately constant when noise was introduced. Furthermore, the estimated uncertainties showed good agreement with the observed errors and the variations within each ROI, highlighting their potential value for assessing the reliability of the resulting PDFF maps.
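The final linear step of such a pipeline can be sketched numerically. The toy below is an assumption-laden illustration, not AI-DEAL's actual implementation: echo times, the single fat peak, confounder values, and the noise level are all made up, and identity weights stand in for the paper's weighted least squares. It solves the complex water-fat system and propagates the covariance to a PDFF uncertainty via the delta method:

```python
import numpy as np

# Illustrative settings (assumed, not from the paper): 6 echoes at 3T,
# a single fat peak, and confounders R2* and field treated as known.
TE = np.arange(6) * 1.2e-3 + 1.1e-3          # echo times [s]
f_fat, r2s, field = -434.0, 40.0, 30.0       # Hz, 1/s, Hz

# Complex design matrix: one column for water, one for fat.
phase = np.exp(1j * 2 * np.pi * field * TE - r2s * TE)
A = np.stack([phase, phase * np.exp(1j * 2 * np.pi * f_fat * TE)], axis=1)

rng = np.random.default_rng(0)
true_w, true_f = 0.8, 0.2                    # ground-truth proton densities
s = A @ np.array([true_w, true_f]) \
    + 0.001 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))

# Least-squares estimate; Cov(rho) is proportional to sigma^2 (A^H A)^-1.
AhA_inv = np.linalg.inv(A.conj().T @ A)
rho = AhA_inv @ A.conj().T @ s
cov = AhA_inv * 0.001**2

w, f = abs(rho[0]), abs(rho[1])
pdff = f / (w + f)
# First-order (delta-method) propagation of the covariance to PDFF.
g = np.array([-f, w]) / (w + f) ** 2         # gradient of f/(w+f) wrt (w, f)
pdff_var = g @ cov.real @ g
print(round(pdff, 3), pdff_var > 0)
```

The closed-form covariance is what makes the uncertainty map cheap: no sampling or ensembling is needed once the linear system is solved per voxel.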

Advancing X-ray microcomputed tomography image processing of avian eggshells: An improved registration metric for multiscale 3D images and resolution-enhanced segmentation of eggshell pores using edge-attentive neural networks.

Jia S, Piché N, McKee MD, Reznikov N

pubmed · Sep 17 2025
Avian eggs exhibit a variety of shapes and sizes, reflecting different reproductive strategies. The eggshell not only protects the egg contents, but also regulates gas and water vapor exchange vital for embryonic development. While many studies have explored eggshell ultrastructure, the distribution of pores across the entire shell is less well understood because of a trade-off between resolution and field-of-view in imaging. To overcome this, a neural network was developed for resolution enhancement of low-resolution 3D tomographic data, while performing voxel-wise labeling. Trained on X-ray microcomputed tomography images of ostrich, guillemot and crow eggshells from a natural history museum collection, the model used stepwise magnification to create low- and high-resolution training sets. Registration performance was validated with a novel metric based on local grayscale gradients. An edge-attentive loss function prevented bias towards the dominant background class (95% of all voxels), ensuring accurate labeling of eggshell (5%) and pore (0.1%) voxels. The results indicate that besides edge-attention and class balancing, 3D context preservation and 3D convolution are of paramount importance for extrapolating subvoxel features.

FEU-Diff: A Diffusion Model With Fuzzy Evidence-Driven Dynamic Uncertainty Fusion for Medical Image Segmentation.

Geng S, Jiang S, Hou T, Yao H, Huang J, Ding W

pubmed · Sep 16 2025
Diffusion models, as a class of generative frameworks based on step-wise denoising, have recently attracted significant attention in the field of medical image segmentation. However, existing diffusion-based methods typically rely on static fusion strategies to integrate conditional priors with denoised features, making it difficult for them to adaptively balance the respective contributions at different denoising stages. Moreover, these methods often lack explicit modeling of pixel-level uncertainty in ambiguous regions, which may lead to the loss of structural details during the iterative denoising process, ultimately compromising the accuracy and completeness of the final segmentation results. To this end, we propose FEU-Diff, a diffusion-based segmentation framework that integrates fuzzy evidence modeling and uncertainty fusion (UF) mechanisms. Specifically, a fuzzy semantic enhancement (FSE) module is designed to model pixel-level uncertainty through Gaussian membership functions and fuzzy logic rules, enhancing the model's ability to identify and represent ambiguous boundaries. An evidence dynamic fusion (EDF) module estimates feature confidence via a Dirichlet-based distribution and adaptively guides the fusion of conditional information and denoised features across different denoising stages. Furthermore, the UF module quantifies discrepancies among multisource predictions to compensate for structural detail loss during the iterative denoising process. Extensive experiments on four public datasets show that FEU-Diff consistently outperforms state-of-the-art methods, achieving an average gain of 1.42% in the Dice similarity coefficient (DSC), 1.47% in intersection over union (IoU), and a 2.26 mm reduction in the 95th percentile Hausdorff distance (HD95). In addition, our method generates uncertainty maps that enhance clinical interpretability.
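Dirichlet-based confidence of the kind the EDF module relies on is usually formulated as in evidential deep learning; the abstract does not give FEU-Diff's exact formulation, so the following is a generic sketch of that standard construction: non-negative per-class evidence e_k defines Dirichlet parameters alpha_k = e_k + 1, class belief b_k = e_k / S, and an uncertainty mass u = K / S that shrinks as total evidence S grows.

```python
import numpy as np

def dirichlet_confidence(evidence):
    """Subjective-logic belief and uncertainty from non-negative evidence."""
    evidence = np.asarray(evidence, dtype=float)
    alpha = evidence + 1.0          # Dirichlet concentration parameters
    S = alpha.sum()                 # total Dirichlet strength
    belief = evidence / S           # per-class belief mass
    u = evidence.size / S           # uncertainty mass: K / S
    return belief, u

# Little evidence -> high uncertainty; strong one-class evidence -> low u.
_, u_weak = dirichlet_confidence([0.1, 0.2])
_, u_strong = dirichlet_confidence([50.0, 0.5])
print(u_weak > u_strong)
```

A fusion module can then downweight features whose uncertainty mass u is high, which is the adaptive behavior the abstract describes.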

Automated Field of View Prescription for Whole-body Magnetic Resonance Imaging Using Deep Learning Based Body Region Segmentations.

Quinsten AS, Bojahr C, Nassenstein K, Straus J, Holtkamp M, Salhöfer L, Umutlu L, Forsting M, Haubold J, Wen Y, Kohnke J, Borys K, Nensa F, Hosch R

pubmed · Sep 16 2025
Manual field-of-view (FoV) prescription in whole-body magnetic resonance imaging (WB-MRI) is vital for ensuring comprehensive anatomic coverage and minimising artifacts, thereby enhancing image quality. However, this procedure is time-consuming, subject to operator variability, and adversely impacts both patient comfort and workflow efficiency. To overcome these limitations, an automated system was developed and evaluated that prescribes multiple consecutive FoV stations for WB-MRI using deep-learning (DL)-based three-dimensional anatomic segmentations. A total of 374 patients (mean age: 50.5 ± 18.2 y; 52% females) who underwent WB-MRI, including T2-weighted Half-Fourier acquisition single-shot turbo spin-echo (T2-HASTE) and fast whole-body localizer (FWBL) sequences acquired during continuous table movement on a 3T MRI system, were retrospectively collected between March 2012 and January 2025. An external cohort of 10 patients, acquired on two 1.5T scanners, was utilized for generalizability testing. Complementary nnUNet-v2 models were fine-tuned to segment tissue compartments, organs, and a whole-body (WB) outline on FWBL images. From these predicted segmentations, 5 consecutive FoVs (head/neck, thorax, liver, pelvis, and spine) were generated. Segmentation accuracy was quantified by Sørensen-Dice coefficients (DSC), Precision (P), Recall (R), and Specificity (S). Clinical utility was assessed on 30 test cases by 4 blinded experts using Likert scores and a 4-way ranking against 3 radiographer prescriptions. Interrater reliability and statistical comparisons were assessed using the intraclass correlation coefficient (ICC), Kendall W, Friedman, and Wilcoxon signed-rank tests.
Mean DSCs were 0.98 for torso (P = 0.98, R = 0.98, S = 1.00), 0.96 for head/neck (P = 0.95, R = 0.96, S = 1.00), 0.94 for abdominal cavity (P = 0.95, R = 0.94, S = 1.00), 0.90 for thoracic cavity (P = 0.90, R = 0.91, S = 1.00), 0.86 for liver (P = 0.85, R = 0.87, S = 1.00), and 0.63 for spinal cord (P = 0.64, R = 0.63, S = 1.00). The clinical utility was evidenced by assessments from 2 expert radiologists and 2 radiographers, with 98.3% and 87.5% of cases rated as clinically acceptable in the internal and external test data sets, respectively. Predicted FoVs received the highest ranking in 60% of cases. They placed within the top 2 in 85.8% of cases, outperforming radiographers with 9 and 13 years of experience (P < 0.001) and matching the performance of a radiographer with 20 years of experience. DL-based three-dimensional anatomic segmentations enable accurate and reliable multistation FoV prescription for WB-MRI, achieving expert-level performance while significantly reducing manual workload. Automated FoV planning has the potential to standardize WB-MRI acquisition, reduce interoperator variability, and enhance workflow efficiency, thereby facilitating broader clinical adoption.
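The four metrics reported per structure (DSC, P, R, S) all derive from the same voxel-wise confusion counts. A generic sketch on toy binary masks (not the paper's evaluation code):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice, precision, recall, specificity for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, precision, recall, specificity

gt = np.zeros((8, 8), dtype=int); gt[2:6, 2:6] = 1       # 16 true voxels
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:5] = 1   # 12 predicted, all inside gt
dice, p, r, s = seg_metrics(pred, gt)
print(round(dice, 3), round(p, 3), round(r, 3))          # 0.857 1.0 0.75
```

Note how specificity saturates at 1.00 for every structure in the paper: with sparse foregrounds the true-negative count dominates, which is why DSC is the more discriminative headline metric.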

A Computational Pipeline for Patient-Specific Modeling of Thoracic Aortic Aneurysm: From Medical Image to Finite Element Analysis

Jiasong Chen, Linchen Qian, Ruonan Gong, Christina Sun, Tongran Qin, Thuy Pham, Caitlin Martin, Mohammad Zafar, John Elefteriades, Wei Sun, Liang Liang

arxiv preprint · Sep 16 2025
The aorta is the body's largest arterial vessel, serving as the primary pathway for oxygenated blood within the systemic circulation. Aortic aneurysms consistently rank among the top twenty causes of mortality in the United States. Thoracic aortic aneurysm (TAA) arises from abnormal dilation of the thoracic aorta and remains a clinically significant disease, ranking as one of the leading causes of death in adults. A thoracic aortic aneurysm ruptures when the integrity of all aortic wall layers is compromised due to elevated blood pressure. Currently, three-dimensional computed tomography (3D CT) is considered the gold standard for diagnosing TAA. The geometric characteristics of the aorta, which can be quantified from medical imaging, and stresses on the aortic wall, which can be obtained by finite element analysis (FEA), are critical in evaluating the risk of rupture and dissection. Deep learning-based image segmentation has emerged as a reliable method for extracting anatomical regions of interest from medical images. Voxel-based segmentation masks of anatomical structures are typically converted into a structured mesh representation to enable accurate simulation. Hexahedral meshes are commonly used in finite element simulations of the aorta due to their computational efficiency and superior simulation accuracy. Due to anatomical variability, patient-specific modeling enables detailed assessment of individual anatomy and biomechanical behavior, supporting precise simulations, accurate diagnoses, and personalized treatment strategies. Finite element (FE) simulations provide valuable insights into the biomechanical behavior of tissues and organs in clinical studies. Developing accurate FE models represents a crucial initial step in establishing a patient-specific, biomechanically based framework for predicting the risk of TAA.

Prediction of cerebrospinal fluid intervention in fetal ventriculomegaly via AI-powered normative modelling.

Zhou M, Rajan SA, Nedelec P, Bayona JB, Glenn O, Gupta N, Gano D, George E, Rauschecker AM

pubmed · Sep 16 2025
Fetal ventriculomegaly (VM) is common and largely benign when isolated. However, it can occasionally progress to hydrocephalus, a more severe condition associated with increased mortality and neurodevelopmental delay that may require surgical postnatal intervention. Accurate differentiation between VM and hydrocephalus is essential but remains challenging, relying on subjective assessment and limited two-dimensional measurements. Deep learning-based segmentation offers a promising solution for objective and reproducible volumetric analysis. This work presents an AI-powered method for segmentation, volume quantification, and classification of the ventricles in fetal brain MRI to predict need for postnatal intervention. This retrospective study included 222 patients with singleton pregnancies. An nnUNet was trained to segment the fetal ventricles on 20 manually segmented, institutional fetal brain MRIs combined with 80 studies from a publicly available dataset. The validated model was then applied to 138 normal fetal brain MRIs to generate a normative reference range across gestational ages (18-36 weeks). Finally, it was applied to 64 fetal brains with VM (14 of which required postnatal intervention). ROC curves and AUC to predict VM and need for postnatal intervention were calculated. The nnUNet-predicted segmentations of the fetal ventricles in the reference dataset were accurate and of high quality (median Dice score 0.96, IQR 0.93-0.99). A normative reference range of ventricular volumes across gestational ages was developed using automated segmentation volumes. The optimal threshold for identifying VM was 2 standard deviations from normal, with sensitivity of 92% and specificity of 93% (AUC 0.97, 95% CI 0.91-0.98). When normalized to intracranial volume, fetal ventricular volume was higher and subarachnoid volume lower among those who required postnatal intervention (p<0.001, p=0.003).
The optimal threshold for identifying need for postnatal intervention was 11 standard deviations from normal, with sensitivity of 86% and specificity of 100% (AUC 0.97, 95% CI 0.86-1.00). This work introduces a deep learning-based method for fast and accurate quantification of ventricular volumes in fetal brain MRI. A normative reference standard derived using this method can predict VM and need for postnatal CSF intervention. Increased ventricular volume is a strong predictor for postnatal intervention. VM = ventriculomegaly, 2D = two-dimensional, 3D = three-dimensional, ROC = receiver operating characteristic, AUC = area under the curve.
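The decision rule described above reduces to a z-score against a gestational-age-matched norm: flag a ventricular volume when it lies more than k standard deviations above the normative mean. A minimal sketch, with toy linear normative curves that are purely illustrative (the paper's reference data are not reproduced here):

```python
def z_score(volume, ga_weeks, norm_mean, norm_sd):
    """Standardize a volume against callables giving the normative mean/SD at a GA."""
    return (volume - norm_mean(ga_weeks)) / norm_sd(ga_weeks)

# Toy normative curves (assumed): mean grows with gestational age, SD constant.
mean_fn = lambda ga: 2.0 + 0.5 * (ga - 18)   # mL
sd_fn = lambda ga: 1.5                        # mL

z = z_score(30.0, 24, mean_fn, sd_fn)         # (30 - 5) / 1.5
flag_vm = z > 2                               # abstract's VM threshold: 2 SD
flag_intervention = z > 11                    # abstract's intervention threshold: 11 SD
print(round(z, 2), flag_vm, flag_intervention)
```

The large gap between the two thresholds (2 SD vs. 11 SD) is what lets one normative model serve both screening for VM and triaging for likely intervention.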

Automated brain extraction for canine magnetic resonance images.

Lesta GD, Deserno TM, Abani S, Janisch J, Hänsch A, Laue M, Winzer S, Dickinson PJ, De Decker S, Gutierrez-Quintana R, Subbotin A, Bocharova K, McLarty E, Lemke L, Wang-Leandro A, Spohn F, Volk HA, Nessler JN

pubmed · Sep 16 2025
Brain extraction is a common preprocessing step when working with intracranial medical imaging data. While several tools exist to automate the preprocessing of magnetic resonance imaging (MRI) of the human brain, none are available for canine MRIs. We present a pipeline mapping separate 2D scans to a 3D image, and a neural network for canine brain extraction. The training dataset consisted of T1-weighted and contrast-enhanced images from 68 dogs of different breeds, all cranial conformations (mesaticephalic, dolichocephalic, brachycephalic), with several pathological conditions, taken at three institutions. Testing was performed on a similarly diverse group of 10 dogs with images from a 4th institution. The model achieved excellent results in terms of Dice ([Formula: see text]) and Jaccard ([Formula: see text]) metrics and generalised well across different MRI scanners, the three aforementioned skull types, and variations in head size and breed. The pipeline was effective for a combination of one to three acquisition planes (i.e., transversal, dorsal, and sagittal). Aside from the T1-weighted training datasets, the model also performed well on other MRI sequences, with Jaccard indices and median Dice scores ranging from 0.86 to 0.89 and 0.92 to 0.94, respectively. Our approach was robust for automated brain extraction. Variations in canine anatomy and performance degradation in multi-scanner data can largely be mitigated through normalisation and augmentation techniques. Brain extraction, as a preprocessing step, can improve the accuracy of an algorithm for abnormality classification in MRI image slices.
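For any single mask pair, the two overlap metrics reported above are monotonically related by J = D / (2 - D), which makes a quick sanity check possible when both are quoted (aggregated medians over different images, as in the ranges above, need not satisfy the identity exactly):

```python
def jaccard_from_dice(d):
    """Convert a Dice coefficient D to the equivalent Jaccard index J = D/(2-D)."""
    return d / (2.0 - d)

# Dice values at the ends of the quoted median range.
print(round(jaccard_from_dice(0.92), 3), round(jaccard_from_dice(0.94), 3))  # 0.852 0.887
```

Because the mapping is strictly increasing, ranking models by Dice or by Jaccard always gives the same order; reporting both mainly aids comparison across papers.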

FunKAN: Functional Kolmogorov-Arnold Network for Medical Image Enhancement and Segmentation

Maksim Penkin, Andrey Krylov

arxiv preprint · Sep 16 2025
Medical image enhancement and segmentation are critical yet challenging tasks in modern clinical practice, constrained by artifacts and complex anatomical variations. Traditional deep learning approaches often rely on complex architectures with limited interpretability. While Kolmogorov-Arnold networks offer interpretable solutions, their reliance on flattened feature representations fundamentally disrupts the intrinsic spatial structure of imaging data. To address this issue, we propose a Functional Kolmogorov-Arnold Network (FunKAN) -- a novel interpretable neural framework, designed specifically for image processing, that formally generalizes the Kolmogorov-Arnold representation theorem onto functional spaces and learns inner functions using Fourier decomposition over a basis of Hermite functions. We explore FunKAN on several medical image processing tasks, including Gibbs ringing suppression in magnetic resonance images, benchmarking on the IXI dataset. We also propose U-FunKAN as a state-of-the-art binary medical segmentation model, with benchmarks on three medical datasets: BUSI (ultrasound images), GlaS (histological structures) and CVC-ClinicDB (colonoscopy videos), detecting breast cancer, glands and polyps, respectively. Experiments on those diverse datasets demonstrate that our approach outperforms other KAN-based backbones in both medical image enhancement (PSNR, TV) and segmentation (IoU, F1). Our work bridges the gap between theoretical function approximation and medical image analysis, offering a robust, interpretable solution for clinical applications.
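The Hermite-function basis mentioned above is the orthonormal family psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi)), conveniently built with a stable three-term recurrence. How FunKAN parameterizes its inner functions over this basis is not detailed in the abstract; the sketch below only constructs the basis and verifies its orthonormality numerically:

```python
import numpy as np

def hermite_functions(x, n_max):
    """Orthonormal Hermite functions psi_0..psi_{n_max} evaluated on grid x."""
    x = np.asarray(x, dtype=float)
    psi = np.empty((n_max + 1, x.size))
    psi[0] = np.pi ** -0.25 * np.exp(-x**2 / 2)          # psi_0
    if n_max >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]               # psi_1
    for n in range(2, n_max + 1):
        # Stable recurrence: psi_n = sqrt(2/n) x psi_{n-1} - sqrt((n-1)/n) psi_{n-2}
        psi[n] = np.sqrt(2.0 / n) * x * psi[n - 1] - np.sqrt((n - 1) / n) * psi[n - 2]
    return psi

# Numerical orthonormality check: Gram matrix on a dense grid is ~identity.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = hermite_functions(x, 4)
gram = psi @ psi.T * dx
print(np.allclose(gram, np.eye(5), atol=1e-6))
```

Orthonormality is what makes such a basis attractive for learning inner functions: coefficients are decoupled, and truncating the expansion gives a controlled approximation.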

MBLEformer: Multi-Scale Bidirectional Lesion Enhancement Transformer for Cervical Cancer Image Segmentation.

Li S, Chen P, Zhang J, Wang B

pubmed · Sep 16 2025
Accurate segmentation of lesion areas from Lugol's iodine staining images is crucial for screening pre-cancerous cervical lesions. However, in underdeveloped regions lacking skilled clinicians, manual assessment may lead to misdiagnosis and missed diagnoses. In recent years, deep learning methods have been widely applied to assist in medical image segmentation. This study aims to improve the accuracy of cervical cancer lesion segmentation by addressing the limitations of Convolutional Neural Networks (CNNs) and attention mechanisms in capturing global features and refining upsampling details. This paper presents a Multi-Scale Bidirectional Lesion Enhancement Network, named MBLEformer, which employs the Swin Transformer encoder to extract image features at multiple stages and utilizes a multi-scale attention mechanism to capture semantic features from different perspectives. Additionally, a bidirectional lesion enhancement upsampling strategy is introduced to refine the edge details of lesion areas. Experimental results demonstrate that the proposed model exhibits superior segmentation performance on a proprietary cervical cancer colposcopic dataset, outperforming other medical image segmentation methods, with a mean Intersection over Union (mIoU) of 82.5%, and accuracy and specificity of 94.9% and 83.6%, respectively. MBLEformer significantly improves the accuracy of lesion segmentation in iodine-stained cervical cancer images, with the potential to enhance the efficiency and accuracy of pre-cancerous lesion diagnosis and help address the issue of imbalanced medical resources.

MambaDiff: Mamba-Enhanced Diffusion Model for 3D Medical Image Segmentation.

Liu Y, Feng Y, Cheng J, Zhan H, Zhu Z

pubmed · Sep 15 2025
Accurate 3D medical image segmentation is crucial for diagnosis and treatment. Diffusion models demonstrate promising performance in medical image segmentation tasks due to the progressive nature of the generation process and the explicit modeling of data distributions. However, the weak guidance of conditional information and insufficient feature extraction in diffusion models lead to the loss of fine-grained features and structural consistency in the segmentation results, thereby affecting the accuracy of medical image segmentation. To address this challenge, we propose a Mamba-Enhanced Diffusion Model for 3D Medical Image Segmentation. We extract multilevel semantic features from the original images using an encoder and tightly integrate them with the denoising process of the diffusion model through a Semantic Hierarchical Embedding (SHE) mechanism, to capture the intricate relationship between the noisy label and image data. Meanwhile, we design a Global-Slice Perception Mamba (GSPM) layer, which integrates multi-dimensional perception mechanisms to endow the model with comprehensive spatial reasoning and feature extraction capabilities. Experimental results show that our proposed MambaDiff achieves more competitive performance than prior art with substantially fewer parameters on four public medical image segmentation datasets, including BraTS 2021, BraTS 2024, LiTS and MSD Hippocampus. The source code of our method is available at https://github.com/yuliu316316/MambaDiff.
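The stepwise denoising that diffusion segmentation models like this one build on starts from the standard DDPM forward process, q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I). A minimal sketch of noising a toy label map under a common linear beta schedule (schedule values are illustrative, not necessarily MambaDiff's):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)      # cumulative product abar_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form at integer timestep t."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * noise

rng = np.random.default_rng(0)
x0 = np.zeros((16, 16)); x0[4:12, 4:12] = 1.0   # toy segmentation label
xt_early = q_sample(x0, 10, rng)                 # nearly clean
xt_late = q_sample(x0, 999, rng)                 # nearly pure noise
print(np.corrcoef(x0.ravel(), xt_early.ravel())[0, 1] >
      np.corrcoef(x0.ravel(), xt_late.ravel())[0, 1])
```

Reversing this process step by step is where conditional guidance matters: at late timesteps almost no label structure survives, so the denoiser must recover it from the image-derived features the SHE mechanism injects.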