
Zhu J, Ma X, Wei B, Zhong Z, Zhou H, Jiang F, Zhu H, Yi C

PubMed · Jul 17, 2025
To develop a robust group-level brain parcellation method using deep learning based on resting-state functional magnetic resonance imaging (rs-fMRI), aiming to relax the model assumptions made by previous approaches. We proposed Brain Deep Embedded Clustering (BDEC), a deep clustering model that employs a loss function designed to maximize inter-class separation and enhance intra-class similarity, thereby promoting the formation of functionally coherent brain regions. Compared to ten widely used brain parcellation methods, the BDEC model demonstrated significantly improved performance on various functional homogeneity metrics. It also showed favorable results in parcellation validity, downstream tasks, task inhomogeneity, and generalization capability. The BDEC model effectively captures intrinsic functional properties of the brain, supporting reliable and generalizable parcellation outcomes. BDEC provides a useful parcellation for brain network analysis and dimensionality reduction of rs-fMRI data, while also contributing to a deeper understanding of the brain's functional organization.
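
The abstract does not give the exact BDEC objective, but a minimal PyTorch sketch of a loss that rewards intra-cluster compactness while pushing cluster centroids apart (the stated design goal) could look like the following; the names, soft-assignment formulation, and margin term are illustrative assumptions, not the authors' implementation:

```python
# Illustrative only: the exact BDEC objective is not given in the abstract.
# This toy loss rewards intra-cluster compactness (voxels close to their soft
# assigned centroid) and inter-cluster separation (centroids pushed apart).
import torch

def clustering_loss(embeddings, assignments, margin=1.0):
    """embeddings: (N, D) voxel features; assignments: (N, K) soft cluster probabilities."""
    # Soft centroids: assignment-weighted means of the embeddings.
    weights = assignments / (assignments.sum(dim=0, keepdim=True) + 1e-8)  # (N, K)
    centroids = weights.t() @ embeddings                                   # (K, D)

    # Intra-class term: squared distance of each voxel to its soft centroid.
    dists = torch.cdist(embeddings, centroids)                             # (N, K)
    intra = (assignments * dists.pow(2)).sum(dim=1).mean()

    # Inter-class term: hinge penalty when centroids are closer than the margin.
    pairwise = torch.cdist(centroids, centroids)
    off_diag = ~torch.eye(centroids.shape[0], dtype=torch.bool)
    inter = torch.clamp(margin - pairwise[off_diag], min=0).pow(2).mean()

    return intra + inter

# Toy usage: 100 "voxels" with 16-dim features, 4 clusters.
emb = torch.randn(100, 16, requires_grad=True)
soft_assign = torch.randn(100, 4).softmax(dim=1)
clustering_loss(emb, soft_assign).backward()
```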

Wu P, An P, Zhao Z, Guo R, Ma X, Qu Y, Xu Y, Yu H

PubMed · Jul 17, 2025
Accurate X-ray computed tomography (CT) image segmentation of the abdominal organs is fundamental for diagnosing abdominal diseases, planning cancer treatment, and formulating radiotherapy strategies. However, existing deep learning models for three-dimensional (3D) abdominal multi-organ CT segmentation face challenges, including complex organ distribution, scarcity of labeled data, and diversity of organ structures, leading to difficulties in model training and convergence and low segmentation accuracy. To address these issues, a novel segmentation approach based on multi-stage training and a deep supervision model is proposed. It integrates multi-stage training, a pseudo-labeling technique, and a deep supervision model with an attention mechanism (DLAU-Net), specifically designed for 3D abdominal multi-organ segmentation. The DLAU-Net enhances segmentation performance and model adaptability through an improved network architecture. The multi-stage training strategy accelerates model convergence and enhances generalizability, effectively addressing the diversity of abdominal organ structures. The introduction of pseudo-label training alleviates the bottleneck of labeled data scarcity and further improves the model's generalization performance and training efficiency. Experiments were conducted on a large dataset provided by the FLARE 2023 Challenge, with comprehensive ablation studies and comparative experiments validating the effectiveness of the proposed method. Our method achieves an average organ accuracy (AVG) of 90.5% and a Dice Similarity Coefficient (DSC) of 89.05%, and exhibits exceptional performance in training speed and handling data diversity, particularly in the segmentation of critical abdominal organs such as the liver, spleen, and kidneys, significantly outperforming existing comparative methods.
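
As a rough illustration of the pseudo-labeling step described above (not the authors' code), a model trained on the labeled subset can label unlabeled CT volumes and keep only confident predictions for the next training stage; the dummy model, class count, and confidence threshold below are all assumptions:

```python
# Rough sketch of the pseudo-labeling idea (not the authors' code): predict labels
# for unlabeled volumes and keep only confident predictions for the next stage.
import torch

@torch.no_grad()
def generate_pseudo_labels(model, unlabeled_volumes, threshold=0.9):
    model.eval()
    pseudo_pairs = []
    for volume in unlabeled_volumes:                  # volume: (B, 1, D, H, W)
        probs = torch.softmax(model(volume), dim=1)   # (B, C, D, H, W)
        conf, labels = probs.max(dim=1)               # per-voxel confidence and class
        keep = conf.mean(dim=(1, 2, 3)) > threshold   # keep volumes with high mean confidence
        pseudo_pairs += [(v, l) for v, l, k in zip(volume, labels, keep) if k]
    return pseudo_pairs

# Toy usage with a dummy 1x1x1-conv "segmenter" and one random batch.
n_classes = 14                                        # illustrative class count
model = torch.nn.Conv3d(1, n_classes, kernel_size=1)
pairs = generate_pseudo_labels(model, [torch.randn(2, 1, 8, 8, 8)], threshold=0.0)
```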

Debnath RK, Rahman MA, Azam S, Zhang Y, Jonkman M

PubMed · Jul 17, 2025
Precise liver segmentation is critical for accurate diagnosis and effective treatment planning, serving as a foundation for medical image analysis. However, existing methods struggle with limited labeled data, poor generalizability, and insufficient integration of anatomical and clinical features. To address these limitations, we propose a novel Few-Shot Segmentation model with Unified Liver Representation (FSS-ULivR), which employs a ResNet-based encoder enhanced with Squeeze-and-Excitation modules to improve feature learning, an enhanced prototype module that utilizes a transformer block and channel attention for dynamic feature refinement, and a decoder with improved attention gates and residual refinement strategies to recover spatial details from encoder skip connections. Through extensive experiments, our FSS-ULivR model achieved an outstanding Dice coefficient of 98.94%, Intersection over Union (IoU) of 97.44% and a specificity of 93.78% on the Liver Tumor Segmentation Challenge dataset. Cross-dataset evaluations further demonstrated its generalizability, with Dice scores of 95.43%, 92.98%, 90.72%, and 94.05% on 3DIRCADB01, Colorectal Liver Metastases, Computed Tomography Organs (CT-ORG), and Medical Segmentation Decathlon Task 3: Liver datasets, respectively. In multi-organ segmentation on CT-ORG, it delivered Dice scores ranging from 85.93% to 94.26% across bladder, bones, kidneys, and lungs. For brain tumor segmentation on BraTS 2019 and 2020 datasets, average Dice scores were 90.64% and 89.36% across whole tumor, tumor core, and enhancing tumor regions. These results emphasize the clinical importance of our model by demonstrating its ability to deliver precise and reliable segmentation through artificial intelligence techniques and engineering solutions, even in scenarios with scarce annotated data.
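
For reference, a standard Squeeze-and-Excitation block (Hu et al., 2018) of the kind the abstract says is added to the ResNet encoder is sketched below; the exact FSS-ULivR variant may differ:

```python
# Reference-style Squeeze-and-Excitation block: global average pooling ("squeeze")
# followed by a small MLP that produces per-channel gates ("excitation").
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel gates in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight feature channels

feat = torch.randn(2, 64, 32, 32)
out = SEBlock(64)(feat)                                # same shape, channel-recalibrated
```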

Han Zhang, Xiangde Luo, Yong Chen, Kang Li

arXiv preprint · Jul 17, 2025
Annotation variability remains a substantial challenge in medical image segmentation, stemming from ambiguous imaging boundaries and diverse clinical expertise. Traditional deep learning methods that produce a single deterministic segmentation prediction often fail to capture these annotator biases. Although recent studies have explored multi-rater segmentation, existing methods typically focus on a single perspective, either generating a probabilistic "gold standard" consensus or preserving expert-specific preferences, and thus struggle to provide a more comprehensive, omni-perspective view. In this study, we propose DiffOSeg, a two-stage diffusion-based framework that aims to simultaneously achieve both consensus-driven (combining all experts' opinions) and preference-driven (reflecting experts' individual assessments) segmentation. Stage I establishes population consensus through a probabilistic consensus strategy, while Stage II captures expert-specific preferences via adaptive prompts. Demonstrated on two public datasets (LIDC-IDRI and NPC-170), our model outperforms existing state-of-the-art methods across all evaluated metrics. Source code is available at https://github.com/string-ellipses/DiffOSeg.

Zahra TehraniNasab, Amar Kumar, Tal Arbel

arXiv preprint · Jul 17, 2025
Medical image synthesis presents unique challenges due to the inherent complexity and high-resolution detail required in clinical contexts. Traditional generative architectures such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs) have shown great promise for high-resolution image generation but struggle to preserve the fine-grained details that are key for accurate diagnosis. To address this issue, we introduce Pixel Perfect MegaMed, the first vision-language foundation model to synthesize images at resolutions of 1024x1024. Our method deploys a multi-scale transformer architecture designed specifically for ultra-high-resolution medical image generation, enabling the preservation of both global anatomical context and local image-level details. By leveraging vision-language alignment techniques tailored to medical terminology and imaging modalities, Pixel Perfect MegaMed bridges the gap between textual descriptions and visual representations at unprecedented resolution levels. We apply our model to the CheXpert dataset and demonstrate its ability to generate clinically faithful chest X-rays from text prompts. Beyond visual quality, these high-resolution synthetic images prove valuable for downstream tasks such as classification, showing measurable performance gains when used for data augmentation, particularly in low-data regimes. Our code is accessible through the project website: https://tehraninasab.github.io/pixelperfect-megamed.

Zhennan Xiao, Katharine Brudkiewicz, Zhen Yuan, Rosalind Aughwane, Magdalena Sokolska, Joanna Chappell, Trevor Gaunt, Anna L. David, Andrew P. King, Andrew Melbourne

arXiv preprint · Jul 17, 2025
Fetal lung maturity is a critical indicator for predicting neonatal outcomes and the need for postnatal intervention, especially in pregnancies affected by fetal growth restriction. Intra-voxel incoherent motion (IVIM) analysis has shown promising results for non-invasive assessment of fetal lung development, but its reliance on manual segmentation is time-consuming, limiting its clinical applicability. In this work, we present an automated lung maturity evaluation pipeline for diffusion-weighted magnetic resonance images that consists of a deep learning-based fetal lung segmentation model and a model-fitting lung maturity assessment. A 3D nnU-Net model was trained on manually segmented images selected from the baseline frames of 4D diffusion-weighted MRI scans. The segmentation model demonstrated robust performance, yielding a mean Dice coefficient of 82.14%. Next, voxel-wise model fitting was performed based on both the nnU-Net-predicted and manual lung segmentations to quantify IVIM parameters reflecting tissue microstructure and perfusion. The results suggested no differences between the two segmentations. Our work shows that a fully automated pipeline is feasible for supporting fetal lung maturity assessment and clinical decision-making.
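
The abstract does not detail the fitting itself, but voxel-wise IVIM analysis conventionally fits the bi-exponential signal model S(b) = S0 (f e^{-b D*} + (1 - f) e^{-b D}); a minimal SciPy sketch with illustrative b-values and a simulated voxel is shown below:

```python
# Standard IVIM bi-exponential model fitted voxel-wise; b-values, noise level, and
# bounds below are illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, s0, f, d_star, d):
    # f: perfusion fraction, d_star: pseudo-diffusion, d: tissue diffusion
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b_values = np.array([0, 50, 100, 200, 400, 600], dtype=float)
signal = ivim_signal(b_values, 1.0, 0.2, 0.05, 0.002)          # simulated voxel
signal += np.random.normal(0, 0.01, signal.shape)              # add a little noise

params, _ = curve_fit(
    ivim_signal, b_values, signal,
    p0=[1.0, 0.1, 0.01, 0.001],
    bounds=([0, 0, 0, 0], [2, 1, 1, 0.01]),
)
s0, f, d_star, d = params
```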

Jinseo An, Min Jin Lee, Kyu Won Shim, Helen Hong

arXiv preprint · Jul 17, 2025
Accurate segmentation of orbital bones in facial computed tomography (CT) images is essential for creating customized implants to reconstruct orbital bone defects, but it is particularly challenging due to ambiguous boundaries and thin structures such as the orbital medial wall and orbital floor. In these ambiguous regions, existing segmentation approaches often produce disconnected or under-segmented results. We propose a novel framework that corrects segmentation results by leveraging consensus among multiple diffusion model outputs. Our approach employs a conditional Bernoulli diffusion model trained on diverse annotation patterns per image to generate multiple plausible segmentations, followed by a consensus-driven correction that incorporates position proximity, consensus level, and gradient direction similarity to correct challenging regions. Experimental results demonstrate that our method outperforms existing methods, significantly improving recall in ambiguous regions while preserving the continuity of thin structures. Furthermore, our method automates the manual correction of segmentation results and can be applied to image-guided surgical planning and surgery.
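
As a simplified illustration of combining multiple plausible segmentations, per-voxel agreement can define a consensus mask and flag low-agreement voxels for correction; the paper's correction additionally uses position proximity and gradient direction similarity, which are not modelled in this toy sketch:

```python
# Toy per-voxel consensus over multiple sampled segmentations.
import numpy as np

def consensus_map(segmentations, low=0.2, high=0.8):
    """segmentations: list of binary arrays of identical shape."""
    agreement = np.stack(segmentations).astype(float).mean(axis=0)  # vote fraction per voxel
    consensus = agreement >= 0.5                                    # majority-vote mask
    uncertain = (agreement > low) & (agreement < high)              # candidates for correction
    return consensus, uncertain

samples = [np.random.rand(64, 64, 64) > 0.5 for _ in range(5)]      # stand-in sampled masks
seg, to_review = consensus_map(samples)
```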

Kenza Bouzid, Shruthi Bannur, Daniel Coelho de Castro, Anton Schwaighofer, Javier Alvarez-Valle, Stephanie L. Hyland

arXiv preprint · Jul 17, 2025
Interpretability can improve the safety, transparency and trust of AI models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic interpretability, particularly through the use of sparse autoencoders (SAEs), offers a promising approach for uncovering human-interpretable features within large transformer-based models. In this study, we apply Matryoshka-SAE to the radiology-specialised multimodal large language model, MAIRA-2, to interpret its internal representations. Using large-scale automated interpretability of the SAE features, we identify a range of clinically relevant concepts - including medical devices (e.g., line and tube placements, pacemaker presence), pathologies such as pleural effusion and cardiomegaly, longitudinal changes and textual features. We further examine the influence of these features on model behaviour through steering, demonstrating directional control over generations with mixed success. Our results reveal practical and methodological challenges, yet they offer initial insights into the internal concepts learned by MAIRA-2 - marking a step toward deeper mechanistic understanding and interpretability of a radiology-adapted multimodal large language model, and paving the way for improved model transparency. We release the trained SAEs and interpretations: https://huggingface.co/microsoft/maira-2-sae.
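
A minimal (non-Matryoshka) sparse autoencoder of the kind used in this line of interpretability work is sketched below: it reconstructs a transformer's hidden activations through an overcomplete, L1-penalised latent layer so that individual latent units tend to align with interpretable features. Dimensions and the sparsity weight are illustrative, and the MAIRA-2 / Matryoshka-SAE specifics differ:

```python
# Minimal sparse autoencoder over hidden activations: overcomplete latent layer,
# ReLU non-linearity, reconstruction loss plus L1 sparsity penalty.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model, d_latent):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, acts):
        latent = torch.relu(self.encoder(acts))   # sparse "feature" activations
        return self.decoder(latent), latent

sae = SparseAutoencoder(d_model=1024, d_latent=8192)
acts = torch.randn(32, 1024)                      # stand-in for transformer activations
recon, latent = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * latent.abs().mean()  # reconstruction + sparsity
```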

Wei C, Wang J, Xue Y, Jiang J, Cao M, Li S, Chen X

PubMed · Jul 17, 2025
Background: Subjective cognitive decline (SCD) is recognized as an early phase in the progression of Alzheimer's disease (AD). Objective: To explore the abnormal patterns of morphological and functional connectivity coupling (MC-FC coupling) and their potential diagnostic significance in SCD. Methods: Data from 52 individuals with SCD and 51 age-, gender-, and education-matched healthy controls (HC) who underwent resting-state functional magnetic resonance imaging and high-resolution 3D T1-weighted imaging were retrieved to build the MC and FC of gray matter. Support vector machine (SVM) methods were used to differentiate between SCD and HC. Results: SCD individuals exhibited MC-FC decoupling in the frontoparietal network compared with HC (p = 0.002, 5000 permutations). Using these adjusted MC-FC coupling metrics, SVM analysis achieved 74.76% accuracy, 64.71% sensitivity, and 92.31% specificity (p < 0.001, 5000 permutations). Additionally, stronger MC-FC coupling of the left inferior temporal gyrus (r = 0.294, p = 0.034) and right posterior cingulate gyrus (r = 0.372, p = 0.007) in SCD individuals was positively correlated with subjective memory complaint performance. Conclusions: These findings provide insight into the idiosyncratic features of brain organization underlying SCD from the perspective of MC-FC coupling and highlight the potential of MC-FC coupling for identifying the preclinical stage of AD.
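
An illustrative sketch of the classification step (an SVM on per-subject MC-FC coupling features, SCD vs. HC) is shown below with toy data; the study's feature construction, 5000-permutation testing, and exact validation scheme are omitted:

```python
# Toy stand-in for the SVM step: per-subject coupling features, SCD vs. HC labels,
# cross-validated accuracy. Feature values here are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(103, 20))            # 52 SCD + 51 HC subjects, 20 toy coupling features
y = np.array([1] * 52 + [0] * 51)         # 1 = SCD, 0 = HC

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2%}")
```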

Zhang M, Sheng J

PubMed · Jul 17, 2025
Conservative treatment remains a viable option for selected patients with ectopic pregnancy (EP), but failure may lead to rupture and serious complications. Currently, serum β-hCG is the main predictor for treatment outcomes, yet its accuracy is limited. This study aimed to develop and validate a predictive model that integrates radiomic features derived from super-resolution (SR) ultrasound images with clinical biomarkers to improve risk stratification. A total of 228 patients with EP receiving conservative treatment were retrospectively included, with 169 classified as treatment success and 59 as failure. SR images were generated using a deep learning-based generative adversarial network (GAN). Radiomic features were extracted from both normal-resolution (NR) and SR ultrasound images. Features with an intraclass correlation coefficient (ICC) ≥ 0.75 were retained after intra- and inter-observer evaluation. Feature selection involved statistical testing and Least Absolute Shrinkage and Selection Operator (LASSO) regression. Random forest algorithms were used to construct the NR and SR models. A clinical model based on serum β-hCG was also developed. The Clin-SR model was constructed by fusing SR radiomics with β-hCG values. Model performance was evaluated using area under the curve (AUC), calibration, and decision curve analysis (DCA). An independent temporal validation cohort (n = 40; 20 failures, 20 successes) was used to validate the nomogram derived from the Clin-SR model. The SR model significantly outperformed the NR model in the test cohort (AUC: 0.791 ± 0.015 vs. 0.629 ± 0.083). In a representative iteration, the Clin-SR fusion model achieved an AUC of 0.870 ± 0.015, with good calibration and net clinical benefit, suggesting reliable performance in predicting conservative treatment failure. In the independent validation cohort, the nomogram demonstrated good generalizability with an AUC of 0.808 and consistent calibration across risk thresholds. Key contributing radiomic features included Gray Level Variance and Voxel Volume, reflecting lesion heterogeneity and size. The Clin-SR model, which integrates deep learning-enhanced SR ultrasound radiomics with serum β-hCG, offers a robust and non-invasive tool for predicting conservative treatment failure in ectopic pregnancy. This multimodal approach enhances early risk stratification and supports personalized clinical decision-making, potentially reducing overtreatment and emergency interventions.
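
A hedged sketch of the modelling pipeline as described (LASSO-based feature selection followed by a random forest evaluated by AUC) is given below with toy stand-in data; radiomic feature extraction, ICC filtering, and the β-hCG fusion step are omitted:

```python
# Toy sketch: LASSO (used here as a regression-based selector) keeps features with
# non-zero coefficients, then a random forest is evaluated by AUC. Data are random
# placeholders, not the study's radiomic features or outcomes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(228, 100))                 # stand-in for SR radiomic features
y = rng.integers(0, 2, size=228)                # 1 = treatment failure (toy labels)

selected = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
if selected.size == 0:                          # pure-noise toy data may zero out everything
    selected = np.arange(X.shape[1])

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
```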