Page 59 of 1341332 results

SegMamba-V2: Long-range Sequential Modeling Mamba For General 3D Medical Image Segmentation.

Xing Z, Ye T, Yang Y, Cai D, Gai B, Wu XJ, Gao F, Zhu L

pubmed logopapers · Jul 18 2025
The Transformer architecture has demonstrated remarkable results in 3D medical image segmentation due to its capability of modeling global relationships. However, it poses a significant computational burden when processing high-dimensional medical images. Mamba, as a State Space Model (SSM), has recently emerged as a notable approach for modeling long-range dependencies in sequential data. Although a substantial amount of Mamba-based research has focused on natural language and 2D image processing, few studies explore the capability of Mamba on 3D medical images. In this paper, we propose SegMamba-V2, a novel 3D medical image segmentation model, to effectively capture long-range dependencies within whole-volume features at each scale. To achieve this goal, we first devise a hierarchical scale downsampling strategy to enhance the receptive field and mitigate information loss during downsampling. Furthermore, we design a novel tri-orientated spatial Mamba block that extends the global dependency modeling process from one plane to three orthogonal planes to improve feature representation capability. Moreover, we collect and annotate a large-scale dataset (named CRC-2000) with fine-grained categories to facilitate benchmarking evaluation in 3D colorectal cancer (CRC) segmentation. We evaluate the effectiveness of our SegMamba-V2 on CRC-2000 and three other large-scale 3D medical image segmentation datasets, covering various modalities, organs, and segmentation targets. Experimental results demonstrate that our SegMamba-V2 outperforms state-of-the-art methods by a significant margin, which indicates the universality and effectiveness of the proposed model on 3D medical image segmentation tasks. The code for SegMamba-V2 is publicly available at: https://github.com/ge-xing/SegMamba-V2.
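The core of the tri-orientated idea — unrolling a 3D feature volume into token sequences along three orthogonal orientations before sequence modeling — can be illustrated with a minimal NumPy sketch. The function name, tensor layout, and scan orderings below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def tri_orientation_sequences(vol):
    """Flatten a 3D feature volume (D, H, W, C) into three 1D token
    sequences, one per orthogonal scan orientation. Hypothetical sketch
    of tri-orientated scanning; SegMamba-V2's exact ordering and feature
    fusion are not reproduced here."""
    d, h, w, c = vol.shape
    seq_a = vol.reshape(-1, c)                        # scan D, then H, then W
    seq_b = vol.transpose(1, 0, 2, 3).reshape(-1, c)  # scan H, then D, then W
    seq_c = vol.transpose(2, 0, 1, 3).reshape(-1, c)  # scan W, then D, then H
    return seq_a, seq_b, seq_c

feats = np.random.rand(4, 6, 8, 16)           # toy feature volume
seq_a, seq_b, seq_c = tri_orientation_sequences(feats)
```

Each sequence would then be processed by its own SSM branch and the outputs merged back into the volume grid.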

Feasibility and accuracy of the fully automated three-dimensional echocardiography right ventricular quantification software in children: validation against cardiac magnetic resonance.

Liu Q, Zheng Z, Zhang Y, Wu A, Lou J, Chen X, Yuan Y, Xie M, Zhang L, Sun P, Sun W, Lv Q

pubmed logopapers · Jul 18 2025
Previous studies have confirmed that fully automated three-dimensional echocardiography (3DE) right ventricular (RV) quantification software can accurately assess adult RV function. However, data on its accuracy in children are scarce. This study aimed to test the accuracy of the software in children using cardiac magnetic resonance (MR) as the gold standard. This study prospectively enrolled 82 children who underwent both echocardiography and cardiac MR within 24 h. The RV end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were obtained using the novel 3DE-RV quantification software and compared with cardiac MR values across different groups. The novel 3DE-RV quantification software was feasible in all 82 children (100%). Fully automated analysis was achieved in 35% of patients, with an analysis time of 8 ± 2 s and 100% reproducibility. Manual editing was necessary in the remaining 65% of patients. The 3DE-derived RV volumes and EF correlated well with cardiac MR measurements (RVEDV, r=0.93; RVESV, r=0.90; RVEF, r=0.82; all P <0.001). Although the automated approach slightly underestimated RV volumes and overestimated RVEF compared with cardiac MR in the entire cohort, the bias was smaller in children with RVEF ≥ 45%, normal RV size, and good 3DE image quality. Fully automated 3DE-RV quantification software provided accurate and completely reproducible results in 35% of children without any adjustment. The RV volumes and EF measured using the automated 3DE method correlated well with those from cardiac MR, especially in children with RVEF ≥ 45%, normal RV size, and good 3DE image quality. Therefore, the novel automated 3DE method may achieve rapid and accurate assessment of RV function in children with normal heart anatomy.

Accuracy and Time Efficiency of Artificial Intelligence-Driven Tooth Segmentation on CBCT Images: A Validation Study Using Two Implant Planning Software Programs.

Ntovas P, Sirirattanagool P, Asavanamuang P, Jain S, Tavelli L, Revilla-León M, Galarraga-Vinueza ME

pubmed logopapers · Jul 18 2025
To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region. Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools. The segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference. The virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies. The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in less RMS deviation compared to both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed between all investigated segmentation methods, both for the overall tooth area and each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared to manual segmentation (p < 0.05). AI-driven segmentation can generate reliable virtual 3D tooth models, with accuracy comparable to that of manual segmentation performed by experienced clinicians, while also significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.
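The RMS deviation reported here summarizes distances between superimposed surface models. A minimal sketch of one common way to compute it — RMS of nearest-neighbour vertex distances after registration — is below; the function name and the use of vertex-to-vertex (rather than point-to-surface) distance are simplifying assumptions, not the study's exact software pipeline:

```python
import numpy as np

def rms_deviation(points_a, points_b):
    """RMS of nearest-neighbour distances from each vertex of model A
    to the vertices of model B. Illustrative stand-in for a surface
    deviation metric computed after the two models are superimposed."""
    diffs = points_a[:, None, :] - points_b[None, :, :]   # pairwise offsets
    nearest = np.sqrt((diffs ** 2).sum(-1)).min(axis=1)   # per-vertex NN distance
    return np.sqrt((nearest ** 2).mean())

# Toy example: two vertices vs. a single reference vertex
model_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
model_b = np.array([[0.0, 0.0, 0.0]])
rms = rms_deviation(model_a, model_b)   # nearest distances are [0, 1]
```

Production tools typically use true point-to-surface distance against the triangulated mesh, which is tighter than this vertex-based approximation.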

Cardiac Function Assessment with Deep-Learning-Based Automatic Segmentation of Free-Running 4D Whole-Heart CMR

Ogier, A. C., Baup, S., Ilanjian, G., Touray, A., Rocca, A., Banus Cobo, J., Monton Quesada, I., Nicoletti, M., Ledoux, J.-B., Richiardi, J., Holtackers, R. J., Yerly, J., Stuber, M., Hullin, R., Rotzinger, D., van Heeswijk, R. B.

medrxiv logopreprint · Jul 17 2025
Background: Free-running (FR) cardiac MRI enables free-breathing, ECG-free, fully dynamic 5D (3D spatial + cardiac + respiratory dimensions) imaging but poses significant challenges for clinical integration due to the volume and complexity of image analysis. Existing segmentation methods are tailored to 2D cine or static 3D acquisitions and cannot leverage the unique spatial-temporal wealth of FR data. Purpose: To develop and validate a deep learning (DL)-based segmentation framework for isotropic 3D+cardiac cycle FR cardiac MRI that enables accurate, fast, and clinically meaningful anatomical and functional analysis. Methods: Free-running, contrast-free bSSFP acquisitions at 1.5T and contrast-enhanced GRE acquisitions at 3T were used to reconstruct motion-resolved 5D datasets. From these, the end-expiratory respiratory phase was retained to yield fully isotropic 4D datasets. Automatic propagation of a limited set of manual segmentations was used to segment the left and right ventricular blood pool (LVB, RVB) and left ventricular myocardium (LVM) on reformatted short-axis (SAX) end-systolic (ES) and end-diastolic (ED) images. These were used to train a 3D nnU-Net model. Validation was performed using geometric metrics (Dice similarity coefficient [DSC], relative volume difference [RVD]), clinical metrics (ED and ES volumes, ejection fraction [EF]), and physiological consistency metrics (systole-diastole LVM volume mismatch and LV-RV stroke volume agreement). To assess the robustness and flexibility of the approach, we evaluated multiple additional DL training configurations, such as using 4D propagation-based data augmentation to incorporate all cardiac phases into training. Results: The main proposed method achieved automatic segmentation within a minute, delivering high geometric accuracy and consistency (DSC: 0.94 ± 0.01 [LVB], 0.86 ± 0.02 [LVM], 0.92 ± 0.01 [RVB]; RVD: 2.7%, 5.8%, 4.5%). Clinical LV metrics showed excellent agreement (ICC > 0.98 for EDV/ESV/EF, bias < 2 mL for EDV/ESV, < 1% for EF), while RV metrics remained clinically reliable (ICC > 0.93 for EDV/ESV/EF, bias < 1 mL for EDV/ESV, < 1% for EF) but exhibited wider limits of agreement. Training on all cardiac phases improved temporal coherence, reducing LVM volume mismatch from 4.0% to 2.6%. Conclusion: This study validates a DL-based method for fast and accurate segmentation of whole-heart free-running 4D cardiac MRI. Robust performance across diverse protocols and evaluation with complementary metrics that match state-of-the-art benchmarks supports its integration into clinical and research workflows, helping to overcome a key barrier to the broader adoption of free-running imaging.
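The two geometric metrics used throughout this validation, the Dice similarity coefficient and the relative volume difference, have standard definitions on binary masks; a short sketch (function names are illustrative):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) on binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def relative_volume_difference(pred, ref):
    """Absolute relative volume difference in percent vs. the reference."""
    return 100.0 * abs(int(pred.sum()) - int(ref.sum())) / ref.sum()

# Toy 3D masks: predicted segmentation slightly undersizes the reference
ref = np.zeros((8, 8, 8), bool)
ref[2:6, 2:6, 2:6] = True          # 64 voxels
pred = np.zeros_like(ref)
pred[2:6, 2:6, 2:5] = True         # 48 voxels, fully inside the reference
dsc = dice(pred, ref)              # 2*48 / (48 + 64)
rvd = relative_volume_difference(pred, ref)
```

In practice both are reported per structure (LVB, LVM, RVB here) and averaged over subjects.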

Patient-Specific and Interpretable Deep Brain Stimulation Optimisation Using MRI and Clinical Review Data

Mikroulis, A., Lasica, A., Filip, P., Bakstein, E., Novak, D.

medrxiv logopreprint · Jul 17 2025
Background: Optimisation of Deep Brain Stimulation (DBS) settings is a key aspect of achieving clinical efficacy in movement disorders such as Parkinson's disease. Modern techniques attempt to solve the problem through data-intensive statistical and machine learning approaches, adding significant overhead to existing clinical workflows. Here, we present an optimisation approach for DBS electrode contact and current selection, grounded in routinely collected MRI data, well-established tools (Lead-DBS) and, optionally, clinical review records. Methods: The pipeline, packaged in a cross-platform tool, uses lead reconstruction data and simulation of the volume of tissue activated to estimate the contacts in optimal position relative to the target structure and to suggest an optimal stimulation current. The tool then allows further interactive user optimisation of the current settings. Existing electrode contact evaluations can optionally be included in the calculation process for further fine-tuning and adverse-effect avoidance. Results: Based on a sample of 177 implanted electrode reconstructions from 89 Parkinson's disease patients, we demonstrate that DBS parameter setting by our algorithm is more effective in covering the target structure (Wilcoxon p<6e-12, Hedges' g>0.34) and minimising electric field leakage to neighbouring regions (p<2e-15, g>0.84) compared with expert parameter settings. Conclusion: The proposed automated method for optimisation of DBS electrode contact and current selection shows promising results and is readily applicable to existing clinical workflows. We demonstrate that the algorithmically selected contacts perform better than manual selections according to electric field calculations, allowing for a comparable clinical outcome without the iterative optimisation procedure.
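The effect sizes quoted above are Hedges' g, the small-sample bias-corrected variant of Cohen's d. Its standard formula is easy to sketch (the sample values below are toy data, not the study's measurements):

```python
import numpy as np

def hedges_g(a, b):
    """Hedges' g: Cohen's d on a pooled standard deviation, multiplied
    by the small-sample correction factor J = 1 - 3/(4(na+nb) - 9)."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1)
                         + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    d = (np.mean(a) - np.mean(b)) / pooled_sd
    j = 1.0 - 3.0 / (4.0 * (na + nb) - 9.0)   # bias correction
    return j * d

# Toy coverage scores: algorithmic vs. expert settings
alg = np.array([2.0, 3.0, 4.0])
exp = np.array([0.0, 1.0, 2.0])
g = hedges_g(alg, exp)
```

Because paired comparisons (same electrodes, two setting strategies) were analysed with a Wilcoxon test, a paired effect-size variant may also be appropriate; the unpaired formula above is the most common textbook form.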

Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction

Zhennan Xiao, Katharine Brudkiewicz, Zhen Yuan, Rosalind Aughwane, Magdalena Sokolska, Joanna Chappell, Trevor Gaunt, Anna L. David, Andrew P. King, Andrew Melbourne

arxiv logopreprint · Jul 17 2025
Fetal lung maturity is a critical indicator for predicting neonatal outcomes and the need for post-natal intervention, especially in pregnancies affected by fetal growth restriction. Intra-voxel incoherent motion (IVIM) analysis has shown promising results for non-invasive assessment of fetal lung development, but its reliance on manual segmentation is time-consuming, limiting its clinical applicability. In this work, we present an automated lung maturity evaluation pipeline for diffusion-weighted magnetic resonance images that consists of a deep learning-based fetal lung segmentation model and a model-fitting lung maturity assessment. A 3D nnU-Net model was trained on manually segmented images selected from the baseline frames of 4D diffusion-weighted MRI scans. The segmentation model demonstrated robust performance, yielding a mean Dice coefficient of 82.14%. Next, voxel-wise model fitting was performed based on both the nnU-Net-predicted and manual lung segmentations to quantify IVIM parameters reflecting tissue microstructure and perfusion. The results suggested no differences between the parameters derived from the two segmentations. Our work shows that a fully automated pipeline is possible for supporting fetal lung maturity assessment and clinical decision-making.
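The voxel-wise IVIM model fitted here is the bi-exponential S(b) = S0·(f·exp(-b·D*) + (1-f)·exp(-b·D)), with perfusion fraction f, pseudo-diffusion D*, and tissue diffusivity D. A minimal NumPy sketch of the classic segmented (two-step) fit is below; the b-value scheme, threshold (b ≥ 200 s/mm²), and parameter values are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

def ivim_signal(b, s0, f, d_star, d):
    """Bi-exponential IVIM signal model."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

def segmented_fit(b, s, b_thresh=200.0):
    """Two-step IVIM fit: in the high-b regime the fast (perfusion)
    term has decayed, so log-signal is linear in b. The slope gives D
    and the intercept gives f via S(0)."""
    hi = b >= b_thresh
    slope, intercept = np.polyfit(b[hi], np.log(s[hi]), 1)
    d = -slope
    f = 1.0 - np.exp(intercept) / s[0]   # assumes a b=0 acquisition
    return f, d

# Noiseless toy voxel: f=0.25, D*=0.05, D=0.002 (units: mm^2/s with b in s/mm^2)
b_vals = np.array([0, 10, 30, 50, 100, 200, 400, 600], float)
signal = ivim_signal(b_vals, 1.0, 0.25, 0.05, 0.002)
f_hat, d_hat = segmented_fit(b_vals, signal)
```

Full non-linear least squares over all four parameters is also common and additionally recovers D*, at the cost of sensitivity to initialization and noise.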

A multi-stage training and deep supervision based segmentation approach for 3D abdominal multi-organ segmentation.

Wu P, An P, Zhao Z, Guo R, Ma X, Qu Y, Xu Y, Yu H

pubmed logopapers · Jul 17 2025
Accurate X-ray computed tomography (CT) image segmentation of the abdominal organs is fundamental for diagnosing abdominal diseases, planning cancer treatment, and formulating radiotherapy strategies. However, existing deep learning-based models for three-dimensional (3D) CT abdominal multi-organ segmentation face challenges including complex organ distribution, scarcity of labeled data, and diversity of organ structures, leading to difficulties in model training and convergence and low segmentation accuracy. To address these issues, a novel segmentation approach based on multi-stage training and a deep supervision model is proposed. It primarily integrates multi-stage training, a pseudo-labeling technique, and a deep supervision model with an attention mechanism (DLAU-Net), specifically designed for 3D abdominal multi-organ segmentation. The DLAU-Net enhances segmentation performance and model adaptability through an improved network architecture. The multi-stage training strategy accelerates model convergence and enhances generalizability, effectively addressing the diversity of abdominal organ structures. The introduction of pseudo-labeling training alleviates the bottleneck of labeled-data scarcity and further improves the model's generalization performance and training efficiency. Experiments were conducted on a large dataset provided by the FLARE 2023 Challenge. Comprehensive ablation studies and comparative experiments were conducted to validate the effectiveness of the proposed method. Our method achieves an average organ accuracy (AVG) of 90.5% and a Dice Similarity Coefficient (DSC) of 89.05%, and exhibits exceptional performance in terms of training speed and handling data diversity, particularly in the segmentation of critical abdominal organs such as the liver, spleen, and kidneys, significantly outperforming existing comparative methods.

BDEC: Brain Deep Embedded Clustering Model for Resting State fMRI Group-Level Parcellation of the Human Cerebral Cortex.

Zhu J, Ma X, Wei B, Zhong Z, Zhou H, Jiang F, Zhu H, Yi C

pubmed logopapers · Jul 17 2025
To develop a robust group-level brain parcellation method using deep learning based on resting-state functional magnetic resonance imaging (rs-fMRI), aiming to relax the model assumptions made by previous approaches. We proposed Brain Deep Embedded Clustering (BDEC), a deep clustering model that employs a loss function designed to maximize inter-class separation and enhance intra-class similarity, thereby promoting the formation of functionally coherent brain regions. Compared to ten widely used brain parcellation methods, the BDEC model demonstrates significantly improved performance in various functional homogeneity metrics. It also showed favorable results in parcellation validity, downstream tasks, task inhomogeneity, and generalization capability. The BDEC model effectively captures intrinsic functional properties of the brain, supporting reliable and generalizable parcellation outcomes. BDEC provides a useful parcellation for brain network analysis and dimensionality reduction of rs-fMRI data, while also contributing to a deeper understanding of the brain's functional organization.
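The stated objective — maximize inter-class separation while enhancing intra-class similarity — can be made concrete with a simple homogeneity gap on hard cluster labels; this is an illustrative proxy for evaluating a parcellation, not BDEC's actual loss, which is defined on soft deep-embedding assignments:

```python
import numpy as np

def homogeneity_gap(features, labels):
    """Mean intra-cluster minus mean inter-cluster cosine similarity of
    voxel/vertex feature vectors. Larger is better: members of a parcel
    resemble each other more than they resemble other parcels."""
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x.T                                   # pairwise cosine similarity
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    intra = sim[same & off_diag].mean()
    inter = sim[~same].mean()
    return intra - inter

# Two toy "parcels" with distinct functional signatures
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lbls = np.array([0, 0, 1, 1])
gap = homogeneity_gap(feats, lbls)   # well separated, so gap is large
```

A deep clustering loss would optimize a differentiable analogue of this quantity over learned embeddings rather than computing it post hoc.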

AortaDiff: Volume-Guided Conditional Diffusion Models for Multi-Branch Aortic Surface Generation

Delin An, Pan Du, Jian-Xun Wang, Chaoli Wang

arxiv logopreprint · Jul 17 2025
Accurate 3D aortic construction is crucial for clinical diagnosis, preoperative planning, and computational fluid dynamics (CFD) simulations, as it enables the estimation of critical hemodynamic parameters such as blood flow velocity, pressure distribution, and wall shear stress. Existing construction methods often rely on large annotated training datasets and extensive manual intervention. While the resulting meshes can serve for visualization purposes, they struggle to produce geometrically consistent, well-constructed surfaces suitable for downstream CFD analysis. To address these challenges, we introduce AortaDiff, a diffusion-based framework that generates smooth aortic surfaces directly from CT/MRI volumes. AortaDiff first employs a volume-guided conditional diffusion model (CDM) to iteratively generate aortic centerlines conditioned on volumetric medical images. Each centerline point is then automatically used as a prompt to extract the corresponding vessel contour, ensuring accurate boundary delineation. Finally, the extracted contours are fitted into a smooth 3D surface, yielding a continuous, CFD-compatible mesh representation. AortaDiff offers distinct advantages over existing methods, including an end-to-end workflow, minimal dependency on large labeled datasets, and the ability to generate CFD-compatible aorta meshes with high geometric fidelity. Experimental results demonstrate that AortaDiff performs effectively even with limited training data, successfully constructing both normal and pathologically altered aorta meshes, including cases with aneurysms or coarctation. This capability enables the generation of high-quality visualizations and positions AortaDiff as a practical solution for cardiovascular research.

Domain-randomized deep learning for neuroimage analysis

Malte Hoffmann

arxiv logopreprint · Jul 17 2025
Deep learning has revolutionized neuroimage analysis by delivering unprecedented speed and accuracy. However, the narrow scope of many training datasets constrains model robustness and generalizability. This challenge is particularly acute in magnetic resonance imaging (MRI), where image appearance varies widely across pulse sequences and scanner hardware. A recent domain-randomization strategy addresses the generalization problem by training deep neural networks on synthetic images with randomized intensities and anatomical content. By generating diverse data from anatomical segmentation maps, the approach enables models to accurately process image types unseen during training, without retraining or fine-tuning. It has demonstrated effectiveness across modalities including MRI, computed tomography, positron emission tomography, and optical coherence tomography, as well as beyond neuroimaging in ultrasound, electron and fluorescence microscopy, and X-ray microtomography. This tutorial paper reviews the principles, implementation, and potential of the synthesis-driven training paradigm. It highlights key benefits, such as improved generalization and resistance to overfitting, while discussing trade-offs such as increased computational demands. Finally, the article explores practical considerations for adopting the technique, aiming to accelerate the development of generalizable tools that make deep learning more accessible to domain experts without extensive computational resources or machine learning knowledge.
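The heart of the domain-randomization strategy — generating synthetic training images from anatomical label maps with randomized appearance — can be sketched in a few lines. This is a deliberately minimal sketch: real pipelines of this kind (e.g. SynthSeg-style generators) additionally randomize spatial deformation, bias field, blur, and resolution:

```python
import numpy as np

def synth_from_labels(label_map, rng):
    """Generate one synthetic image from an integer anatomical label map:
    draw a random mean intensity per label, then add Gaussian noise.
    Each call yields a new random 'contrast', so a network trained on
    many such images cannot overfit to any one modality's appearance."""
    n_labels = int(label_map.max()) + 1
    means = rng.uniform(0.0, 255.0, size=n_labels)   # random contrast per label
    img = means[label_map].astype(float)             # paint labels with means
    img += rng.normal(0.0, 10.0, size=img.shape)     # additive noise
    return img

rng = np.random.default_rng(0)
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1                    # toy two-label "anatomy"
image = synth_from_labels(labels, rng)    # one random-appearance sample
```

Training a segmentation network on an endless stream of such samples, with the label map itself as the target, is what lets the model generalize to unseen real contrasts without fine-tuning.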
