Page 8 of 99986 results

X-UNet: A novel global context-aware collaborative fusion U-shaped network with progressive feature fusion of codec for medical image segmentation.

Xu S, Chen Y, Zhang X, Sun F, Chen S, Ou Y, Luo C

PubMed · Aug 7, 2025
Due to the inductive bias of convolutions, CNNs perform hierarchical feature extraction efficiently in the field of medical image segmentation. However, the local correlation assumption of this inductive bias limits the ability of convolutions to capture global information, which is why Transformer-based methods have surpassed CNNs in some segmentation tasks in recent years. Although combining CNNs with Transformers can address this problem, it also introduces computational complexity and a considerable number of parameters. In addition, narrowing the encoder-decoder semantic gap for high-quality mask generation is a key challenge, addressed in recent works through feature aggregation from different skip connections. However, this often results in semantic mismatches and additional noise. In this paper, we propose a novel segmentation method, X-UNet, whose backbones employ the CFGC (Collaborative Fusion with Global Context-aware) module. The CFGC module enables multi-scale feature extraction and effective global context modeling. Simultaneously, we employ the CSPF (Cross Split-channel Progressive Fusion) module to progressively align and fuse features from corresponding encoder and decoder stages through channel-wise operations, offering a novel approach to feature integration. Experimental results demonstrate that X-UNet, with fewer computations and parameters, exhibits superior performance on various medical image datasets. The code and models are available at https://github.com/XSJ0410/X-UNet.
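The CSPF idea of fusing corresponding encoder and decoder features group-by-group along the channel axis can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the split count, the averaging fusion, and the scalar carry signal passed between groups are all assumptions made for clarity.

```python
import numpy as np

def cspf_fuse(enc_feat, dec_feat, n_splits=4):
    """Sketch of a split-channel progressive fusion: split encoder/decoder
    feature maps (C, H, W) into channel groups and fuse them one group at a
    time, carrying a signal from each fused group into the next."""
    enc_chunks = np.array_split(enc_feat, n_splits, axis=0)
    dec_chunks = np.array_split(dec_feat, n_splits, axis=0)
    fused, carry = [], 0.0
    for e, d in zip(enc_chunks, dec_chunks):
        f = 0.5 * (e + d) + carry      # align and fuse one channel group
        carry = f.mean() * 0.1         # progressive signal passed forward
        fused.append(f)
    return np.concatenate(fused, axis=0)
```

The point of the progressive (rather than one-shot) scheme is that later channel groups are fused conditioned on what the earlier groups produced, which is one way to reduce semantic mismatch across a skip connection.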

EATHOA: Elite-evolved hiking algorithm for global optimization and precise multi-thresholding image segmentation in intracerebral hemorrhage images.

Abdel-Salam M, Houssein EH, Emam MM, Samee NA, Gharehchopogh FS, Bacanin N

PubMed · Aug 6, 2025
Intracerebral hemorrhage (ICH) is a life-threatening condition caused by bleeding in the brain, with high mortality rates, particularly in the acute phase. Accurate diagnosis through medical image segmentation plays a crucial role in early intervention and treatment. However, existing segmentation methods, such as region-growing, clustering, and deep learning, face significant limitations when applied to complex images like ICH scans, especially in multi-threshold image segmentation (MTIS). As the number of thresholds increases, these methods often become computationally expensive and exhibit degraded segmentation performance. To address these challenges, this paper proposes the Elite-Adaptive-Turbulent Hiking Optimization Algorithm (EATHOA), an enhanced version of the Hiking Optimization Algorithm (HOA) specifically designed for high-dimensional and multimodal optimization problems like ICH image segmentation. EATHOA integrates three novel strategies: Elite Opposition-Based Learning (EOBL) to improve population diversity and exploration, Adaptive k-Average-Best Mutation (AKAB) to dynamically balance exploration and exploitation, and a Turbulent Operator (TO) to escape local optima and enhance the convergence rate. Extensive experiments were conducted on the CEC2017 and CEC2022 benchmark functions to evaluate EATHOA's global optimization performance, where it consistently outperformed other state-of-the-art algorithms. The proposed EATHOA was then applied to solve the MTIS problem in ICH images at six different threshold levels. EATHOA achieved peak values of PSNR (34.4671), FSIM (0.9710), and SSIM (0.8816), outperforming recent methods in segmentation accuracy and computational efficiency. These results demonstrate the superior performance of EATHOA and its potential as a powerful tool for medical image analysis, offering an effective and computationally efficient solution for the complex challenges of ICH image segmentation.
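Multi-threshold image segmentation of the kind EATHOA solves reduces to maximizing a fitness function over a vector of thresholds. As a minimal sketch, assuming an Otsu-style between-class-variance objective (one common MTIS fitness; the paper may use a different criterion), the function a metaheuristic would score candidate threshold vectors with looks like:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style fitness for multi-threshold segmentation: thresholds split
    the intensity histogram into classes; the score is the weighted variance
    of class means around the global mean (higher = better separation)."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = [0] + sorted(thresholds) + [len(hist)]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()             # class probability mass
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var
```

An optimizer such as EATHOA then searches the space of threshold vectors for the one maximizing this score; the cost of exhaustive search is what makes metaheuristics attractive as the number of thresholds grows.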

AI-Guided Cardiac Computer Tomography in Type 1 Diabetes Patients with Low Coronary Artery Calcium Score.

Wohlfahrt P, Pazderník M, Marhefková N, Roland R, Adla T, Earls J, Haluzík M, Dubský M

PubMed · Aug 6, 2025
<b><i>Objective:</i></b> Cardiovascular risk stratification based on traditional risk factors lacks precision at the individual level. While coronary artery calcium (CAC) scoring enhances risk prediction by detecting calcified atherosclerotic plaques, it may underestimate risk in individuals with noncalcified plaques, a pattern common in younger type 1 diabetes (T1D) patients. Understanding the prevalence of noncalcified atherosclerosis in T1D is crucial for developing more effective screening strategies. Therefore, this study aimed to assess the burden of clinically significant atherosclerosis in T1D patients with CAC <100 using artificial intelligence (AI)-guided quantitative coronary computed tomographic angiography (AI-QCT). <b><i>Methods:</i></b> This study enrolled T1D patients aged ≥30 years with disease duration ≥10 years and no manifest or symptomatic atherosclerotic cardiovascular disease (ASCVD). CAC and carotid ultrasound were assessed in all participants. AI-QCT was performed in patients with CAC 0 and at least one plaque in the carotid arteries, or in those with CAC 1-99. <b><i>Results:</i></b> Among the 167 participants (mean age 52 ± 10 years; 44% women; T1D duration 29 ± 11 years), 93 (56%) had CAC = 0, 46 (28%) had CAC 1-99, 8 (5%) had CAC 100-299, and 20 (12%) had CAC ≥300. AI-QCT was performed in a subset of 52 patients. Only 11 (21%) had no evidence of coronary artery disease. Significant coronary stenosis was identified in 17% of patients, and 30 (73%) presented with at least one high-risk plaque. AI-QCT reclassified 58% of patients relative to CAC-based risk categories and 21% relative to the STENO1 risk categories. There was only fair agreement between AI-QCT and CAC (κ = 0.25) and slight agreement between AI-QCT and STENO1 risk categories (κ = 0.02). <b><i>Conclusion:</i></b> AI-QCT may reveal subclinical atherosclerotic burden and high-risk features that remain undetected by traditional risk models or CAC. These findings challenge the assumption that a low CAC score equates to low cardiovascular risk in T1D.

Automated Deep Learning-based Segmentation of the Dentate Nucleus Using Quantitative Susceptibility Mapping MRI.

Shiraishi DH, Saha S, Adanyeguh IM, Cocozza S, Corben LA, Deistung A, Delatycki MB, Dogan I, Gaetz W, Georgiou-Karistianis N, Graf S, Grisoli M, Henry PG, Jarola GM, Joers JM, Langkammer C, Lenglet C, Li J, Lobo CC, Lock EF, Lynch DR, Mareci TH, Martinez ARM, Monti S, Nigri A, Pandolfo M, Reetz K, Roberts TP, Romanzetti S, Rudko DA, Scaravilli A, Schulz JB, Subramony SH, Timmann D, França MC, Harding IH, Rezende TJR

PubMed · Aug 6, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose: To develop a dentate nucleus (DN) segmentation tool using deep learning (DL) applied to brain MRI-based quantitative susceptibility mapping (QSM) images. Materials and Methods: Brain QSM images from healthy controls and individuals with cerebellar ataxia or multiple sclerosis were collected from nine different datasets (2016-2023) worldwide for this retrospective study (ClinicalTrials.gov Identifier: NCT04349514). Manual delineation of the DN was performed by experienced raters. Automated segmentation performance was evaluated against manual reference segmentations following training with several DL architectures. A two-step approach was used, consisting of a localization model followed by DN segmentation. Performance metrics included intraclass correlation coefficient (ICC), Dice score, and Pearson correlation coefficient. Results: The training and testing datasets comprised 328 individuals (age range, 11-64 years; 171 female), including 141 healthy individuals and 187 with cerebellar ataxia or multiple sclerosis. The manual tracing protocol produced reference standards with high intrarater (average ICC 0.91) and interrater reliability (average ICC 0.78). Initial DL architecture exploration indicated that the nnU-Net framework performed best. The two-step localization plus segmentation pipeline achieved a Dice score of 0.90 ± 0.03 and 0.89 ± 0.04 for left and right DN segmentation, respectively. In external testing, the proposed algorithm outperformed the current leading automated tool (mean Dice scores for left and right DN: 0.86 ± 0.04 vs 0.57 ± 0.22, <i>P</i> < .001; 0.84 ± 0.07 vs 0.58 ± 0.24, <i>P</i> < .001). The model demonstrated generalizability across datasets unseen during training, with automated segmentations showing high correlation with manual annotations (left DN: r = 0.74; <i>P</i> < .001; right DN: r = 0.48; <i>P</i> = .03). Conclusion: The proposed model accurately and efficiently segmented the DN from brain QSM images. The model is publicly available (https://github.com/art2mri/DentateSeg). ©RSNA, 2025.

TCSAFormer: Efficient Vision Transformer with Token Compression and Sparse Attention for Medical Image Segmentation

Zunhui Xia, Hongxing Li, Libin Lan

arXiv preprint · Aug 6, 2025
In recent years, transformer-based methods have achieved remarkable progress in medical image segmentation due to their superior ability to capture long-range dependencies. However, these methods typically suffer from two major limitations. First, their computational complexity scales quadratically with input sequence length. Second, the feed-forward network (FFN) modules in vanilla Transformers rely on fully connected layers, which limits the model's ability to capture the local contextual information and multiscale features critical for precise semantic segmentation. To address these issues, we propose an efficient medical image segmentation network, named TCSAFormer. The proposed TCSAFormer adopts two key ideas. First, it incorporates a Compressed Attention (CA) module, which combines token compression and pixel-level sparse attention to dynamically focus on the most relevant key-value pairs for each query. This is achieved by pruning globally irrelevant tokens and merging redundant ones, significantly reducing computational complexity while enhancing the model's ability to capture relationships between tokens. Second, it introduces a Dual-Branch Feed-Forward Network (DBFFN) module as a replacement for the standard FFN to capture local contextual features and multiscale information, thereby strengthening the model's feature representation capability. We conduct extensive experiments on three publicly available medical image segmentation datasets: ISIC-2018, CVC-ClinicDB, and Synapse. Experimental results demonstrate that TCSAFormer achieves superior performance compared to existing state-of-the-art (SOTA) methods while maintaining lower computational overhead, achieving a favorable trade-off between efficiency and accuracy.
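The pruning-then-merging step at the heart of token compression can be illustrated on plain token arrays. Everything below is an assumption for illustration (the score source, the cosine-similarity merge rule, and the averaging); TCSAFormer's actual module operates inside the attention computation rather than as a standalone function.

```python
import numpy as np

def compress_tokens(tokens, scores, keep_ratio=0.5, merge_thresh=0.99):
    """Sketch of token compression: drop low-scoring ("globally irrelevant")
    tokens, then merge surviving tokens that are near-duplicates of their
    predecessor (cosine similarity above merge_thresh)."""
    n_keep = max(1, int(len(tokens) * keep_ratio))
    keep = np.argsort(scores)[::-1][:n_keep]   # prune: keep top-scoring tokens
    kept = tokens[np.sort(keep)]               # preserve original order
    merged = [kept[0]]
    for t in kept[1:]:
        ref = merged[-1]
        cos = t @ ref / (np.linalg.norm(t) * np.linalg.norm(ref) + 1e-8)
        if cos > merge_thresh:
            merged[-1] = 0.5 * (merged[-1] + t)  # merge redundant token
        else:
            merged.append(t)
    return np.stack(merged)
```

Since attention cost grows with the square of the token count, halving the tokens that reach the attention layer roughly quarters that term of the cost.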

Improving 3D Thin Vessel Segmentation in Brain TOF-MRA via a Dual-space Context-Aware Network.

Shan W, Li X, Wang X, Li Q, Wang Z

PubMed · Aug 6, 2025
3D cerebrovascular segmentation poses a significant challenge, akin to locating a line within a vast 3D environment. This complexity can be substantially reduced by projecting the vessels onto a 2D plane, enabling easier segmentation. In this paper, we create a vessel-segmentation-friendly space using a clinical visualization technique called maximum intensity projection (MIP). Leveraging this, we propose a Dual-space Context-Aware Network (DCANet) for 3D vessel segmentation, designed to accurately capture even the finest vessel structures. DCANet begins by transforming a magnetic resonance angiography (MRA) volume into a 3D Regional-MIP volume, where each Regional-MIP slice is constructed by projecting adjacent MRA slices. This transformation highlights vessels as prominent continuous curves rather than the small circular or ellipsoidal cross-sections seen in MRA slices. DCANet encodes vessels separately in the MRA and projected Regional-MIP spaces and introduces a Regional-MIP Image Fusion Block (MIFB) between these dual spaces to selectively integrate contextual features from Regional-MIP into MRA. Following dual-space encoding, DCANet employs a Dual-mask Spatial Guidance TransFormer (DSGFormer) decoder to focus on vessel regions while effectively excluding background areas, which reduces the learning burden and improves segmentation accuracy. We benchmark DCANet on four datasets: two public datasets, TubeTK and IXI-IOP, and two in-house datasets, Xiehe and IXI-HH. The results demonstrate that DCANet achieves superior performance, with improvements in average DSC for thin vessels of at least 2.26%, 2.17%, 2.62%, and 2.58% on the four datasets, respectively. Code is available at: https://github.com/shanwq/DCANet.
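The Regional-MIP transform described above is straightforward to sketch: each output slice is a maximum intensity projection over a small window of neighboring input slices. The window size and edge handling below are assumptions; the paper's exact projection settings may differ.

```python
import numpy as np

def regional_mip(volume, window=5):
    """Regional maximum intensity projection: slice z of the output is the
    voxel-wise max over the `window` MRA slices centered on z, so a thin
    vessel crossing several slices shows up as a continuous bright curve."""
    d = volume.shape[0]
    half = window // 2
    out = np.empty_like(volume)
    for z in range(d):
        lo, hi = max(0, z - half), min(d, z + half + 1)  # clamp at borders
        out[z] = volume[lo:hi].max(axis=0)
    return out
```

Using a sliding regional window rather than one global MIP keeps depth information approximately localized, which is why the network can still fuse the projected features back into the original MRA space slice by slice.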

Conditional Fetal Brain Atlas Learning for Automatic Tissue Segmentation

Johannes Tischer, Patric Kienast, Marlene Stümpflen, Gregor Kasprian, Georg Langs, Roxane Licandro

arXiv preprint · Aug 6, 2025
Magnetic Resonance Imaging (MRI) of the fetal brain has become a key tool for studying brain development in vivo. Yet, its assessment remains challenging due to variability in brain maturation, imaging protocols, and uncertain estimates of Gestational Age (GA). To overcome these challenges, brain atlases provide a standardized reference framework that facilitates objective evaluation and comparison across subjects by aligning the atlas and subjects in a common coordinate system. In this work, we introduce a novel deep-learning framework for generating continuous, age-specific fetal brain atlases for real-time fetal brain tissue segmentation. The framework combines a direct registration model with a conditional discriminator, trained on a curated dataset of 219 neurotypical fetal MRIs spanning 21 to 37 weeks of gestation. The method achieves high registration accuracy, captures dynamic anatomical changes with sharp structural detail, and delivers robust segmentation performance with an average Dice Similarity Coefficient (DSC) of 86.3% across six brain tissues. Furthermore, volumetric analysis of the generated atlases reveals detailed neurotypical growth trajectories, providing valuable insights into the maturation of the fetal brain. This approach enables individualized developmental assessment with minimal pre-processing and real-time performance, supporting both research and clinical applications. The model code is available at https://github.com/cirmuw/fetal-brain-atlas

Segmenting Whole-Body MRI and CT for Multiorgan Anatomic Structure Delineation.

Häntze H, Xu L, Mertens CJ, Dorfner FJ, Donle L, Busch F, Kader A, Ziegelmayer S, Bayerl N, Navab N, Rueckert D, Schnabel J, Aerts HJWL, Truhn D, Bamberg F, Weiss J, Schlett CL, Ringhof S, Niendorf T, Pischon T, Kauczor HU, Nonnenmacher T, Kröncke T, Völzke H, Schulz-Menger J, Maier-Hein K, Hering A, Prokop M, van Ginneken B, Makowski MR, Adams LC, Bressem KK

PubMed · Aug 6, 2025
Purpose: To develop and validate MRSegmentator, a retrospective cross-modality deep learning model for multiorgan segmentation of MRI scans. Materials and Methods: This retrospective study trained MRSegmentator on 1,200 manually annotated UK Biobank Dixon MRI sequences (50 participants), 221 in-house abdominal MRI sequences (177 patients), and 1,228 CT scans from the TotalSegmentator-CT dataset. A human-in-the-loop annotation workflow leveraged cross-modality transfer learning from an existing CT segmentation model to segment 40 anatomic structures. The model's performance was evaluated on 900 MRI sequences from 50 participants in the German National Cohort (NAKO), 60 MRI sequences from the AMOS22 dataset, and 29 MRI sequences from TotalSegmentator-MRI. Reference standard manual annotations were used for comparison. Metrics to assess segmentation quality included the Dice Similarity Coefficient (DSC). Statistical analyses included organ- and sequence-specific mean ± SD reporting and two-sided <i>t</i> tests for demographic effects. Results: 139 participants were evaluated; demographic information was available for 70 (mean age 52.7 years ± 14.0 [SD], 36 female). Across all test datasets, MRSegmentator demonstrated high class-wise DSC for well-defined organs (lungs: 0.81-0.96, heart: 0.81-0.94) and organs with anatomic variability (liver: 0.82-0.96, kidneys: 0.77-0.95). Smaller structures showed lower DSC (portal/splenic veins: 0.64-0.78, adrenal glands: 0.56-0.69). On external testing with NAKO data, the average DSC ranged from 0.85 ± 0.08 for T2-HASTE to 0.91 ± 0.05 for in-phase sequences. The model generalized well to CT, achieving a mean DSC of 0.84 ± 0.12 on AMOS CT data. Conclusion: MRSegmentator accurately segmented 40 anatomic structures on MRI and generalized to CT, outperforming existing open-source tools. Published under a CC BY 4.0 license.
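The Dice Similarity Coefficient reported throughout these segmentation results has a compact definition on binary masks, 2|A∩B| / (|A| + |B|); a minimal reference implementation:

```python
import numpy as np

def dice_score(pred, ref, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks: twice the
    overlap divided by the total foreground voxel count of both masks.
    eps guards against division by zero when both masks are empty."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)
```

A score of 1.0 means perfect overlap and 0.0 means none, which is why small structures such as adrenal glands score lower: a few misplaced voxels are a large fraction of their total volume.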

Development of a deep learning based approach for multi-material decomposition in spectral CT: a proof of principle in silico study.

Rajagopal JR, Rapaka S, Farhadi F, Abadi E, Segars WP, Nowak T, Sharma P, Pritchard WF, Malayeri A, Jones EC, Samei E, Sahbaee P

PubMed · Aug 6, 2025
Conventional approaches to material decomposition in spectral CT face challenges related to precise algorithm calibration across imaged conditions and low signal quality caused by variable object size and reduced dose. In this proof-of-principle study, a deep learning approach to multi-material decomposition was developed to quantify iodine, gadolinium, and calcium in spectral CT. A dual-phase network architecture was trained using synthetic datasets containing computational models of cylindrical and virtual patient phantoms. Classification and quantification performance was evaluated across a range of patient size and dose parameters. The model accurately classified materials (accuracy: cylinders, 98%; virtual patients, 97%) and quantified them (mean absolute percentage difference: cylinders, 8-10%; virtual patients, 10-15%) in both datasets. Performance in virtual patient phantoms improved as the hybrid training dataset included a larger contingent of virtual patient phantoms (accuracy: 48% with 0 virtual patients vs. 97% with 8 virtual patients). For both datasets, the algorithm maintained strong performance under the challenging conditions of large patient size and reduced dose. This study shows the validity of a deep learning-based approach to multi-material decomposition, trained with in silico images, that can overcome the limitations of conventional material decomposition approaches.
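For context on the conventional baseline such a network replaces: with known basis-material attenuation spectra, material decomposition is classically a per-voxel linear least-squares problem. The matrix below uses made-up illustrative numbers, not calibrated spectral data, and real decomposition must additionally handle noise and beam-hardening effects:

```python
import numpy as np

# Columns: iodine, gadolinium, calcium; rows: attenuation contribution per
# unit concentration in each energy bin (illustrative values only).
A = np.array([
    [4.9, 6.6, 2.1],
    [3.1, 4.0, 1.7],
    [1.8, 5.2, 1.3],
    [1.2, 2.9, 1.0],
])

def decompose(measured):
    """Estimate per-voxel material concentrations from multi-energy
    measurements by solving the overdetermined system A @ c ≈ measured."""
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs
```

The calibration sensitivity the abstract mentions is visible here: if `A` is miscalibrated for a given patient size or dose, every voxel's solution is biased, which is what motivates a learned decomposition.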

DDTracking: A Deep Generative Framework for Diffusion MRI Tractography with Streamline Local-Global Spatiotemporal Modeling

Yijie Li, Wei Zhang, Xi Zhu, Ye Wu, Yogesh Rathi, Lauren J. O'Donnell, Fan Zhang

arXiv preprint · Aug 6, 2025
This paper presents DDTracking, a novel deep generative framework for diffusion MRI tractography that formulates streamline propagation as a conditional denoising diffusion process. In DDTracking, we introduce a dual-pathway encoding network that jointly models local spatial encoding (capturing fine-scale structural details at each streamline point) and global temporal dependencies (ensuring long-range consistency across the entire streamline). Furthermore, we design a conditional diffusion model module, which leverages the learned local and global embeddings to predict streamline propagation orientations for tractography in an end-to-end trainable manner. We conduct a comprehensive evaluation across diverse, independently acquired dMRI datasets, including both synthetic and clinical data. Experiments on two well-established benchmarks with ground truth (ISMRM Challenge and TractoInferno) demonstrate that DDTracking substantially outperforms current state-of-the-art tractography methods. Furthermore, our results highlight DDTracking's strong generalizability across heterogeneous datasets, spanning varying health conditions, age groups, imaging protocols, and scanner types. Collectively, DDTracking offers anatomically plausible and robust tractography, presenting a scalable, adaptable, and end-to-end learnable solution for broad dMRI applications. Code is available at: https://github.com/yishengpoxiao/DDtracking.git
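Tractography methods of this kind ultimately run a propagation loop of the form below, with the model supplying an orientation at each point. `orientation_fn` is a placeholder for the learned predictor, and the step size and termination rule here are assumptions for illustration, not DDTracking's settings:

```python
import numpy as np

def propagate_streamline(seed, orientation_fn, step=0.5, n_steps=100):
    """Generic streamline propagation: starting from a seed point, repeatedly
    query a predicted orientation and advance by a fixed step along it,
    stopping when the predictor returns a near-zero vector."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        d = orientation_fn(pts[-1])
        n = np.linalg.norm(d)
        if n < 1e-6:                       # model signals termination
            break
        pts.append(pts[-1] + step * d / n)  # unit-length step
    return np.stack(pts)
```

What distinguishes methods is how `orientation_fn` is computed; in DDTracking it is the output of a conditional denoising diffusion model conditioned on local and global streamline context.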
