
A clinically relevant morpho-molecular classification of lung neuroendocrine tumours

Sexton-Oates, A., Mathian, E., Candeli, N., Lim, Y., Voegele, C., Di Genova, A., Mange, L., Li, Z., van Weert, T., Hillen, L. M., Blazquez-Encinas, R., Gonzalez-Perez, A., Morrison, M. L., Lauricella, E., Mangiante, L., Bonheme, L., Moonen, L., Absenger, G., Altmuller, J., Degletagne, C., Brustugun, O. T., Cahais, V., Centonze, G., Chabrier, A., Cuenin, C., Damiola, F., de Montpreville, V. T., Deleuze, J.-F., Dingemans, A.-M. C., Fadel, E., Gadot, N., Ghantous, A., Graziano, P., Hofman, P., Hofman, V., Ibanez-Costa, A., Lacomme, S., Lopez-Bigas, N., Lund-Iversen, M., Milione, M., Muscarella, L

medRxiv preprint · Jul 18, 2025
Lung neuroendocrine tumours (NETs, also known as carcinoids) are rapidly rising in incidence worldwide but have unknown aetiology and limited therapeutic options beyond surgery. We conducted multi-omic analyses on over 300 lung NETs including whole-genome sequencing (WGS), transcriptome profiling, methylation arrays, spatial RNA sequencing, and spatial proteomics. The integration of multi-omic data provides definitive proof of the existence of four strikingly different molecular groups that vary in patient characteristics, genomic and transcriptomic profiles, microenvironment, and morphology as much as distinct diseases do. Among these, we identify a new molecular group, enriched for highly aggressive supra-carcinoids, that displays an immune-rich microenvironment linked to tumour-macrophage crosstalk, and we uncover an undifferentiated cell population within supra-carcinoids, explaining their molecular and behavioural link to high-grade lung neuroendocrine carcinomas. Deep learning models accurately identified the Ca A1, Ca A2, and Ca B groups based on morphology alone, outperforming current histological criteria. The characteristic tumour microenvironment of supra-carcinoids and the validation of a panel of immunohistochemistry markers for the other three molecular groups demonstrate that these groups can be accurately identified based solely on morphological features, facilitating their implementation in the clinical setting. Our proposed morpho-molecular classification highlights group-specific therapeutic opportunities, including DLL3, FGFR, TERT, and BRAF inhibitors. Overall, our findings unify previously proposed molecular classifications and refine the lung cancer map by revealing novel tumour types and potential treatments, with significant implications for prognosis and treatment decision-making.
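
The morphology-only grouping reported above is, at its core, a supervised image classifier. As a minimal sketch, assuming H&E tile inputs and Ca A1/Ca A2/Ca B labels, a fine-tuned CNN captures the idea; the ResNet-50 backbone, tile size, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_GROUPS = 3  # Ca A1, Ca A2, Ca B (supra-carcinoids identified separately via TME/IHC)

# Hypothetical backbone: an ImageNet-pretrained ResNet-50 with a replaced head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_GROUPS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(tiles: torch.Tensor, group_labels: torch.Tensor) -> float:
    """One optimisation step on a batch of H&E tiles shaped (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(tiles), group_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```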

Early Vascular Aging Determined by 3-Dimensional Aortic Geometry: Genetic Determinants and Clinical Consequences.

Beeche C, Zhao B, Tavolinejad H, Pourmussa B, Kim J, Duda J, Gee J, Witschey WR, Chirinos JA

PubMed paper · Jul 17, 2025
Vascular aging is an important phenotype characterized by structural and geometric remodeling. Some individuals exhibit supernormal vascular aging, associated with improved cardiovascular outcomes; others experience early vascular aging, linked to adverse cardiovascular outcomes. The aorta is the artery that exhibits the most prominent age-related changes; however, the biological mechanisms underlying aortic aging, its genetic architecture, and its relationship with cardiovascular structure, function, and disease states remain poorly understood. We developed sex-specific models to quantify aortic age on the basis of aortic geometric phenotypes derived from 3-dimensional tomographic imaging data in 2 large biobanks: the UK Biobank and the Penn Medicine BioBank. Convolutional neural network-assisted 3-dimensional segmentation of the aorta was performed in 56,104 magnetic resonance imaging scans in the UK Biobank and 6757 computed tomography scans in the Penn Medicine BioBank. Aortic vascular age index (AVAI) was calculated as the difference between the vascular age predicted from geometric phenotypes and the chronological age, expressed as a percent of chronological age. We assessed associations with cardiovascular structure and function using multivariate linear regression and examined the genetic architecture of AVAI through genome-wide association studies, followed by Mendelian randomization to assess causal associations. We also constructed a polygenic risk score for AVAI. AVAI displayed numerous associations with cardiac structure and function, including increased left ventricular mass (standardized β=0.144 [95% CI, 0.138, 0.149]; P<0.0001), wall thickness (standardized β=0.061 [95% CI, 0.054, 0.068]; P<0.0001), and left atrial volume maximum (standardized β=0.060 [95% CI, 0.050, 0.069]; P<0.0001). AVAI exhibited high genetic heritability (h²=40.24%). We identified 54 independent genetic loci (P<5×10⁻⁸) associated with AVAI, which further exhibited gene-level associations with the fibrillin-1 (FBN1) and elastin (ELN) genes. Mendelian randomization supported causal associations between AVAI and atrial fibrillation, vascular dementia, aortic aneurysm, and aortic dissection. A polygenic risk score for AVAI was associated with an increased prevalence of atrial fibrillation, hypertension, aortic aneurysm, and aortic dissection. Early aortic aging is significantly associated with adverse cardiac remodeling and important cardiovascular disease states. AVAI exhibits a polygenic, highly heritable genetic architecture. Mendelian randomization analyses support a causal association between AVAI and cardiovascular diseases, including atrial fibrillation, vascular dementia, aortic aneurysms, and aortic dissection.
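
The AVAI definition in the abstract is explicit enough to sketch directly: predict vascular age from geometric phenotypes, then express the gap to chronological age as a percentage. In the sketch below, a generic gradient-boosting regressor stands in for the authors' sex-specific models, and the feature matrix is hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_vascular_age_model(geom_features: np.ndarray, chrono_age: np.ndarray):
    """Fit one sex-specific model mapping 3-D aortic geometry to age (years)."""
    model = GradientBoostingRegressor()
    model.fit(geom_features, chrono_age)
    return model

def avai(model, geom_features: np.ndarray, chrono_age: np.ndarray) -> np.ndarray:
    """Aortic vascular age index: (predicted - chronological) age, in percent."""
    predicted_age = model.predict(geom_features)
    return 100.0 * (predicted_age - chrono_age) / chrono_age
```

For example, a 60-year-old whose aortic geometry predicts an age of 66 has an AVAI of +10%, i.e. early vascular aging.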

Automatic selection of optimal TI for flow-independent dark-blood delayed-enhancement MRI.

Popescu AB, Rehwald W, Wendell D, Chevalier C, Itu LM, Suciu C, Chitiboi T

PubMed paper · Jul 17, 2025
To propose and evaluate an automatic approach for predicting the optimal inversion time (TI) for dark- and gray-blood images in flow-independent dark-blood delayed-enhancement (FIDDLE) acquisitions, based on free-breathing FIDDLE TI-scout images. In 267 patients, the TI-scout sequence acquired single-shot magnetization-prepared and associated reference images (without preparation) on a 3 T Magnetom Vida and a 1.5 T Magnetom Sola scanner. Data were reconstructed into phase-corrected TI-scout images typically covering TIs from 140 to 440 ms (20 ms increment). A deep learning network was trained to segment the myocardium and blood pool in reference images. These segmentation masks were transferred to the TI-scout images to derive intensity features of myocardium and blood, from which T1-recovery curves were determined by logarithmic fitting. The optimal TIs for dark- and gray-blood images were derived as linear functions of the TI at which the two T1-recovery curves cross. This TI-prediction pipeline was evaluated in 64 clinical subjects. The pipeline predicted optimal TIs with an average error of less than 10 ms relative to manually annotated optimal TIs. The presented approach reliably and automatically predicted optimal TIs for dark- and gray-blood FIDDLE acquisition, with an average error less than the TI increment of the FIDDLE TI-scout sequence.
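
The fit-and-cross step lends itself to a short numerical sketch. Below, a three-parameter exponential recovery model stands in for the paper's logarithmic fitting, and the final linear mapping coefficients are placeholders; both are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def ir_signal(ti, a, b, t1):
    """Phase-corrected inversion-recovery signal: S(TI) = a - b*exp(-TI/T1)."""
    return a - b * np.exp(-ti / t1)

tis = np.arange(140, 441, 20, dtype=float)  # TI-scout range, 20 ms increment

def fit_curve(mean_intensity: np.ndarray):
    """Fit the recovery curve to per-TI mean ROI intensities."""
    p0 = (mean_intensity.max(), 2 * mean_intensity.max(), 300.0)
    popt, _ = curve_fit(ir_signal, tis, mean_intensity, p0=p0, maxfev=5000)
    return popt

def crossing_ti(myo_means: np.ndarray, blood_means: np.ndarray) -> float:
    """TI (ms) at which the myocardium and blood T1-recovery curves intersect."""
    p_myo, p_blood = fit_curve(myo_means), fit_curve(blood_means)
    diff = lambda ti: ir_signal(ti, *p_myo) - ir_signal(ti, *p_blood)
    return brentq(diff, tis[0], tis[-1])

# Usage per scan (offsets below are hypothetical, not the paper's coefficients):
#   ti_cross = crossing_ti(myo_means, blood_means)
#   ti_dark, ti_gray = ti_cross - 20.0, ti_cross + 40.0
```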

BDEC: Brain Deep Embedded Clustering Model for Resting State fMRI Group-Level Parcellation of the Human Cerebral Cortex.

Zhu J, Ma X, Wei B, Zhong Z, Zhou H, Jiang F, Zhu H, Yi C

PubMed paper · Jul 17, 2025
To develop a robust group-level brain parcellation method using deep learning based on resting-state functional magnetic resonance imaging (rs-fMRI), aiming to relax the model assumptions made by previous approaches. We proposed Brain Deep Embedded Clustering (BDEC), a deep clustering model that employs a loss function designed to maximize inter-class separation and enhance intra-class similarity, thereby promoting the formation of functionally coherent brain regions. Compared to ten widely used brain parcellation methods, the BDEC model demonstrated significantly improved performance on various functional homogeneity metrics. It also showed favorable results in parcellation validity, downstream tasks, task inhomogeneity, and generalization capability. The BDEC model effectively captures intrinsic functional properties of the brain, supporting reliable and generalizable parcellation outcomes. BDEC provides a useful parcellation for brain network analysis and dimensionality reduction of rs-fMRI data, while also contributing to a deeper understanding of the brain's functional organization.
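
As a point of reference for the loss design, the sketch below shows a classic deep-embedded-clustering objective of the kind BDEC builds on: a Student's-t kernel turns embedded voxels into soft parcel assignments, and a KL term against a sharpened target pulls embeddings toward their centroid (intra-class similarity) and away from others (inter-class separation). This is the generic DEC formulation (Xie et al., 2016), not the exact BDEC loss.

```python
import torch
import torch.nn.functional as F

def soft_assignments(z: torch.Tensor, centroids: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """q[i, j]: probability that embedded voxel i belongs to parcel j."""
    dist2 = torch.cdist(z, centroids).pow(2)
    q = (1.0 + dist2 / alpha).pow(-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q: torch.Tensor) -> torch.Tensor:
    """Sharpened targets that emphasise high-confidence assignments."""
    w = q.pow(2) / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)

def clustering_loss(z: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    q = soft_assignments(z, centroids)
    p = target_distribution(q).detach()  # fixed target for this step
    return F.kl_div(q.log(), p, reduction="batchmean")
```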

A multi-stage training and deep supervision based segmentation approach for 3D abdominal multi-organ segmentation.

Wu P, An P, Zhao Z, Guo R, Ma X, Qu Y, Xu Y, Yu H

PubMed paper · Jul 17, 2025
Accurate X-ray computed tomography (CT) image segmentation of the abdominal organs is fundamental for diagnosing abdominal diseases, planning cancer treatment, and formulating radiotherapy strategies. However, existing deep learning-based models for three-dimensional (3D) CT image abdominal multi-organ segmentation face challenges, including complex organ distribution, scarcity of labeled data, and diversity of organ structures, leading to difficulties in model training and convergence and low segmentation accuracy. To address these issues, a novel segmentation approach based on multi-stage training and a deep supervision model is proposed. It primarily integrates multi-stage training, a pseudo-labeling technique, and a deep supervision model with an attention mechanism (DLAU-Net), specifically designed for 3D abdominal multi-organ segmentation. The DLAU-Net enhances segmentation performance and model adaptability through an improved network architecture. The multi-stage training strategy accelerates model convergence and enhances generalizability, effectively addressing the diversity of abdominal organ structures. The introduction of pseudo-labeling training alleviates the bottleneck of labeled data scarcity and further improves the model's generalization performance and training efficiency. Experiments were conducted on a large dataset provided by the FLARE 2023 Challenge. Comprehensive ablation studies and comparative experiments were conducted to validate the effectiveness of the proposed method. Our method achieves an average organ accuracy (AVG) of 90.5% and a Dice Similarity Coefficient (DSC) of 89.05%, and exhibits exceptional performance in terms of training speed and handling of data diversity, particularly in the segmentation of critical abdominal organs such as the liver, spleen, and kidneys, significantly outperforming existing comparative methods.
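
The pseudo-labeling stage is the most transferable piece of the recipe; a minimal sketch, assuming a trained `model` returning per-class logits and a confidence threshold of 0.9 (both illustrative), looks like this:

```python
import torch

@torch.no_grad()
def make_pseudo_labels(model, unlabeled_volumes, conf_threshold: float = 0.9):
    """Predict masks for unlabeled CT volumes; keep only confident voxels."""
    model.eval()
    pseudo = []
    for vol in unlabeled_volumes:                    # vol: (1, D, H, W) tensor
        probs = torch.softmax(model(vol.unsqueeze(0)), dim=1)[0]
        conf, labels = probs.max(dim=0)              # per-voxel confidence + class
        labels[conf < conf_threshold] = 255          # 255 = ignore index in the loss
        pseudo.append((vol, labels))
    return pseudo                                    # train next stage on labeled + pseudo
```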

FSS-ULivR: a clinically-inspired few-shot segmentation framework for liver imaging using unified representations and attention mechanisms.

Debnath RK, Rahman MA, Azam S, Zhang Y, Jonkman M

PubMed paper · Jul 17, 2025
Precise liver segmentation is critical for accurate diagnosis and effective treatment planning, serving as a foundation for medical image analysis. However, existing methods struggle with limited labeled data, poor generalizability, and insufficient integration of anatomical and clinical features. To address these limitations, we propose a novel Few-Shot Segmentation model with Unified Liver Representation (FSS-ULivR), which employs a ResNet-based encoder enhanced with Squeeze-and-Excitation modules to improve feature learning, an enhanced prototype module that utilizes a transformer block and channel attention for dynamic feature refinement, and a decoder with improved attention gates and residual refinement strategies to recover spatial details from encoder skip connections. Through extensive experiments, our FSS-ULivR model achieved an outstanding Dice coefficient of 98.94%, Intersection over Union (IoU) of 97.44% and a specificity of 93.78% on the Liver Tumor Segmentation Challenge dataset. Cross-dataset evaluations further demonstrated its generalizability, with Dice scores of 95.43%, 92.98%, 90.72%, and 94.05% on 3DIRCADB01, Colorectal Liver Metastases, Computed Tomography Organs (CT-ORG), and Medical Segmentation Decathlon Task 3: Liver datasets, respectively. In multi-organ segmentation on CT-ORG, it delivered Dice scores ranging from 85.93% to 94.26% across bladder, bones, kidneys, and lungs. For brain tumor segmentation on BraTS 2019 and 2020 datasets, average Dice scores were 90.64% and 89.36% across whole tumor, tumor core, and enhancing tumor regions. These results emphasize the clinical importance of our model by demonstrating its ability to deliver precise and reliable segmentation through artificial intelligence techniques and engineering solutions, even in scenarios with scarce annotated data.
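
Of the named components, the Squeeze-and-Excitation module is standard enough to sketch verbatim (Hu et al., 2018); channel counts are illustrative, and this is not necessarily the authors' exact variant.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention: squeeze to global context, excite to channel weights."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight each feature map by its learned importance
```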

DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model

Han Zhang, Xiangde Luo, Yong Chen, Kang Li

arXiv preprint · Jul 17, 2025
Annotation variability remains a substantial challenge in medical image segmentation, stemming from ambiguous imaging boundaries and diverse clinical expertise. Traditional deep learning methods producing single deterministic segmentation predictions often fail to capture these annotator biases. Although recent studies have explored multi-rater segmentation, existing methods typically focus on a single perspective, either generating a probabilistic "gold standard" consensus or preserving expert-specific preferences, and thus struggle to provide a more omni view. In this study, we propose DiffOSeg, a two-stage diffusion-based framework that aims to simultaneously achieve both consensus-driven (combining all experts' opinions) and preference-driven (reflecting experts' individual assessments) segmentation. Stage I establishes population consensus through a probabilistic consensus strategy, while Stage II captures expert-specific preferences via adaptive prompts. Demonstrated on two public datasets (LIDC-IDRI and NPC-170), our model outperforms existing state-of-the-art methods across all evaluated metrics. Source code is available at https://github.com/string-ellipses/DiffOSeg.
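
Read schematically, Stage I amounts to pooling rater masks into a per-pixel consensus probability from which training targets can be drawn; a minimal sketch of that reading (not the released implementation) is below.

```python
import torch

def probabilistic_consensus(expert_masks: torch.Tensor) -> torch.Tensor:
    """expert_masks: (R, H, W) binary masks from R raters -> per-pixel P(foreground)."""
    return expert_masks.float().mean(dim=0)

def sample_consensus_target(expert_masks: torch.Tensor) -> torch.Tensor:
    """Draw one binary training target from the consensus probability map."""
    return torch.bernoulli(probabilistic_consensus(expert_masks))
```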

Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images

Zahra TehraniNasab, Amar Kumar, Tal Arbel

arXiv preprint · Jul 17, 2025
Medical image synthesis presents unique challenges due to the inherent complexity and high-resolution details required in clinical contexts. Traditional generative architectures such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) have shown great promise for high-resolution image generation but struggle to preserve the fine-grained details that are key for accurate diagnosis. To address this issue, we introduce Pixel Perfect MegaMed, the first vision-language foundation model to synthesize images at a resolution of 1024×1024. Our method deploys a multi-scale transformer architecture designed specifically for ultra-high-resolution medical image generation, enabling the preservation of both global anatomical context and local image-level detail. By leveraging vision-language alignment techniques tailored to medical terminology and imaging modalities, Pixel Perfect MegaMed bridges the gap between textual descriptions and visual representations at unprecedented resolution levels. We apply our model to the CheXpert dataset and demonstrate its ability to generate clinically faithful chest X-rays from text prompts. Beyond visual quality, these high-resolution synthetic images prove valuable for downstream tasks such as classification, showing measurable performance gains when used for data augmentation, particularly in low-data regimes. Our code is accessible through the project website: https://tehraninasab.github.io/pixelperfect-megamed.
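
As a rough illustration of the multi-scale idea, the sketch below embeds a megapixel image at two patch granularities so that coarse tokens carry global anatomy and fine tokens carry local detail; the patch sizes and embedding dimension are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiScalePatchEmbed(nn.Module):
    """Tokenise a (B, 1, 1024, 1024) image at coarse and fine patch scales."""

    def __init__(self, dim: int = 256, patch_sizes=(32, 8)):
        super().__init__()
        self.embeds = nn.ModuleList(
            nn.Conv2d(1, dim, kernel_size=p, stride=p) for p in patch_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = [e(x).flatten(2).transpose(1, 2) for e in self.embeds]
        return torch.cat(tokens, dim=1)  # coarse + fine token sequence
```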

Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction

Zhennan Xiao, Katharine Brudkiewicz, Zhen Yuan, Rosalind Aughwane, Magdalena Sokolska, Joanna Chappell, Trevor Gaunt, Anna L. David, Andrew P. King, Andrew Melbourne

arXiv preprint · Jul 17, 2025
Fetal lung maturity is a critical indicator for predicting neonatal outcomes and the need for post-natal intervention, especially for pregnancies affected by fetal growth restriction. Intra-voxel incoherent motion (IVIM) analysis has shown promising results for non-invasive assessment of fetal lung development, but its reliance on manual segmentation is time-consuming, thus limiting its clinical applicability. In this work, we present an automated lung maturity evaluation pipeline for diffusion-weighted magnetic resonance images that consists of a deep learning-based fetal lung segmentation model and a model-fitting lung maturity assessment. A 3D nnU-Net model was trained on manually segmented images selected from the baseline frames of 4D diffusion-weighted MRI scans. The segmentation model demonstrated robust performance, yielding a mean Dice coefficient of 82.14%. Next, voxel-wise model fitting was performed based on both the nnU-Net-predicted and manual lung segmentations to quantify IVIM parameters reflecting tissue microstructure and perfusion. The results suggested no differences between the two. Our work shows that a fully automated pipeline is feasible for supporting fetal lung maturity assessment and clinical decision-making.
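
The model-fitting step uses the standard bi-exponential IVIM signal model, S(b)/S(0) = f·exp(-b·D*) + (1 - f)·exp(-b·D), which can be fitted voxel-wise inside the predicted lung mask. A minimal sketch follows; the initial guesses and bounds are typical values assumed for illustration, not the paper's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM model for signal normalised by S(b=0)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

def fit_voxel(bvals: np.ndarray, signal: np.ndarray):
    """Return (f, D*, D) for one voxel's normalised diffusion signal."""
    p0 = (0.1, 0.02, 0.002)                           # perfusion fraction, D*, D (mm^2/s)
    bounds = ([0.0, 0.003, 1e-5], [0.7, 0.5, 0.003])  # keep D* > D
    popt, _ = curve_fit(ivim, bvals, signal, p0=p0, bounds=bounds, maxfev=5000)
    return popt
```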

From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation

Jinseo An, Min Jin Lee, Kyu Won Shim, Helen Hong

arXiv preprint · Jul 17, 2025
Accurate segmentation of orbital bones in facial computed tomography (CT) images is essential for creating customized implants to reconstruct orbital bone defects, a task made particularly challenging by ambiguous boundaries and thin structures such as the orbital medial wall and orbital floor. In these ambiguous regions, existing segmentation approaches often produce disconnected or under-segmented results. We propose a novel framework that corrects segmentation results by leveraging consensus from multiple diffusion model outputs. Our approach employs a conditional Bernoulli diffusion model trained on diverse annotation patterns per image to generate multiple plausible segmentations, followed by a consensus-driven correction that incorporates position proximity, consensus level, and gradient direction similarity to refine challenging regions. Experimental results demonstrate that our method outperforms existing methods, significantly improving recall in ambiguous regions while preserving the continuity of thin structures. Furthermore, our method automates the otherwise manual correction of segmentation results and can be applied to image-guided surgical planning and surgery.
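
A simplified sketch of the consensus step, assuming K binary samples from the diffusion model: per-voxel agreement accepts confident voxels and flags ambiguous ones (e.g. thin orbital walls) for correction. The actual method additionally weighs position proximity and gradient-direction similarity, which are omitted here.

```python
import numpy as np

def consensus_correct(samples: np.ndarray, accept: float = 0.8, review: float = 0.4):
    """samples: (K, D, H, W) binary masks from K diffusion-model samples."""
    agreement = samples.mean(axis=0)              # per-voxel consensus level in [0, 1]
    mask = agreement >= accept                    # high-consensus foreground
    ambiguous = (agreement >= review) & ~mask     # candidate voxels needing correction
    return mask.astype(np.uint8), ambiguous
```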