Page 26 of 133 (1,328 results)

Enhancing Corpus Callosum Segmentation in Fetal MRI via Pathology-Informed Domain Randomization

Marina Grifell i Plana, Vladyslav Zalevskyi, Léa Schmidt, Yvan Gomez, Thomas Sanchez, Vincent Dunet, Mériam Koob, Vanessa Siffredi, Meritxell Bach Cuadra

arXiv preprint, Aug 28 2025
Accurate fetal brain segmentation is crucial for extracting biomarkers and assessing neurodevelopment, especially in conditions such as corpus callosum dysgenesis (CCD), which can induce drastic anatomical changes. However, the rarity of CCD severely limits annotated data, hindering the generalization of deep learning models. To address this, we propose a pathology-informed domain randomization strategy that embeds prior knowledge of CCD manifestations into a synthetic data generation pipeline. By simulating diverse brain alterations from healthy data alone, our approach enables robust segmentation without requiring pathological annotations. We validate our method on a cohort comprising 248 healthy fetuses, 26 with CCD, and 47 with other brain pathologies, achieving substantial improvements on CCD cases while maintaining performance on both healthy fetuses and those with other pathologies. From the predicted segmentations, we derive clinically relevant biomarkers, such as corpus callosum length (LCC) and volume, and show their utility in distinguishing CCD subtypes. Our pathology-informed augmentation reduces the LCC estimation error from 1.89 mm to 0.80 mm in healthy cases and from 10.9 mm to 0.7 mm in CCD cases. Beyond these quantitative gains, our approach yields segmentations with improved topological consistency relative to available ground truth, enabling more reliable shape-based analyses. Overall, this work demonstrates that incorporating domain-specific anatomical priors into synthetic data pipelines can effectively mitigate data scarcity and enhance analysis of rare but clinically significant malformations.
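The core idea, simulating CCD-like anatomy from healthy labels alone, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' pipeline: `simulate_cc_dysgenesis`, the axis convention, and the keep-fraction range are all invented for the example.

```python
import numpy as np

def simulate_cc_dysgenesis(cc_mask: np.ndarray, rng: np.random.Generator,
                           min_keep: float = 0.3, max_keep: float = 0.9) -> np.ndarray:
    """Crudely simulate partial corpus callosum agenesis on a binary mask.

    A random anterior fraction of the structure (along axis 0) is kept and
    the remainder is zeroed, mimicking the shortened CC seen in CCD.
    """
    out = cc_mask.copy()
    # Rows (along axis 0) that contain any foreground voxels.
    rows = np.where(out.any(axis=tuple(range(1, out.ndim))))[0]
    if rows.size == 0:
        return out
    keep = rng.uniform(min_keep, max_keep)
    cutoff = rows[0] + int(np.ceil(keep * rows.size))
    out[cutoff:] = 0
    return out
```

During training, such altered masks (with matched synthetic intensities) would stand in for annotated pathological cases.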

AI-driven body composition monitoring and its prognostic role in mCRPC undergoing lutetium-177 PSMA radioligand therapy: insights from a retrospective single-center analysis.

Ruhwedel T, Rogasch J, Galler M, Schatka I, Wetz C, Furth C, Biernath N, De Santis M, Shnayien S, Kolck J, Geisel D, Amthauer H, Beetz NL

PubMed, Aug 28 2025
Body composition (BC) analysis is performed to quantify the relative amounts of different body tissues as a measure of physical fitness and tumor cachexia. We hypothesized that relative changes in BC parameters, assessed by an artificial intelligence-based, PACS-integrated software, between baseline imaging before the start of radioligand therapy (RLT) and interim staging after two RLT cycles could predict overall survival (OS) in patients with metastatic castration-resistant prostate cancer (mCRPC). We conducted a single-center, retrospective analysis of 92 patients with mCRPC undergoing [<sup>177</sup>Lu]Lu-PSMA RLT between September 2015 and December 2023. All patients had [<sup>68</sup>Ga]Ga-PSMA-11 PET/CT at baseline (≤ 6 weeks before the first RLT cycle) and at interim staging (6-8 weeks after the second RLT cycle), allowing for longitudinal BC assessment. During follow-up, 78 patients (85%) died. Median OS was 16.3 months. Median follow-up time in survivors was 25.6 months. The 1-year mortality rate was 32.6% (95% CI 23.0-42.2%) and the 5-year mortality rate was 92.9% (95% CI 85.8-100.0%). In multivariable regression, relative change in visceral adipose tissue (VAT) (HR: 0.26; p = 0.006), previous chemotherapy of any type (HR: 2.4; p = 0.003), the presence of liver metastases (HR: 2.4; p = 0.018), and a higher baseline De Ritis ratio (HR: 1.4; p < 0.001) remained independent predictors of OS. Patients with a greater decrease in VAT (< -20%) had a median OS of 10.2 months versus 18.5 months in patients with a smaller VAT decrease or a VAT increase (≥ -20%) (log-rank test: p = 0.008). In a separate Cox model, the change in VAT predicted OS (p = 0.005) independent of the best PSA response after 1-2 RLT cycles (p = 0.09), and there was no interaction between the two (p = 0.09). PACS-integrated, AI-based BC monitoring detects relative changes in VAT, which was an independent predictor of shorter OS in our population of patients undergoing RLT.
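The headline biomarker is straightforward to compute: the relative VAT change between baseline and interim staging, dichotomized at the reported -20% cutoff. A minimal sketch (function and group names are ours, not from the paper):

```python
def relative_change(baseline: float, interim: float) -> float:
    """Relative change in a body-composition parameter, in percent."""
    return 100.0 * (interim - baseline) / baseline

def vat_risk_group(baseline_vat: float, interim_vat: float,
                   cutoff: float = -20.0) -> str:
    """Dichotomize patients by relative VAT change, mirroring the
    reported Kaplan-Meier split at -20%."""
    change = relative_change(baseline_vat, interim_vat)
    return "high_vat_loss" if change < cutoff else "low_vat_loss"
```

For example, a drop from 100 cm³ to 75 cm³ is a -25% change and falls in the higher-risk group.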

Dino U-Net: Exploiting High-Fidelity Dense Features from Foundation Models for Medical Image Segmentation

Yifan Gao, Haoyue Li, Feng Yuan, Xiaosong Wang, Xin Gao

arXiv preprint, Aug 28 2025
Foundation models pre-trained on large-scale natural image datasets offer a powerful paradigm for medical image segmentation. However, effectively transferring their learned representations for precise clinical applications remains a challenge. In this work, we propose Dino U-Net, a novel encoder-decoder architecture designed to exploit the high-fidelity dense features of the DINOv3 vision foundation model. Our architecture introduces an encoder built upon a frozen DINOv3 backbone, which employs a specialized adapter to fuse the model's rich semantic features with low-level spatial details. To preserve the quality of these representations during dimensionality reduction, we design a new fidelity-aware projection module (FAPM) that effectively refines and projects the features for the decoder. We conducted extensive experiments on seven diverse public medical image segmentation datasets. Our results show that Dino U-Net achieves state-of-the-art performance, consistently outperforming previous methods across various imaging modalities. Our framework proves to be highly scalable, with segmentation accuracy consistently improving as the backbone model size increases up to the 7-billion-parameter variant. The findings demonstrate that leveraging the superior, dense-pretrained features from a general-purpose foundation model provides a highly effective and parameter-efficient approach to advance the accuracy of medical image segmentation. The code is available at https://github.com/yifangao112/DinoUNet.
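The frozen-backbone-plus-adapter pattern the abstract describes can be sketched schematically, with random arrays standing in for DINOv3 patch features and a plain linear projection standing in for the adapter and FAPM. All shapes, widths, and names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for frozen foundation-model outputs:
# high-dimensional semantic patch features plus low-level spatial detail,
# assumed here to share one coarse grid for simplicity.
semantic = rng.standard_normal((16, 16, 768))   # DINO-style patch tokens
low_level = rng.standard_normal((16, 16, 64))   # early-layer spatial features

# Adapter sketch: concatenate the two streams and linearly project them
# down to a decoder-friendly width (schematic stand-in for the FAPM).
W = rng.standard_normal((768 + 64, 256)) * (768 + 64) ** -0.5
fused = np.concatenate([semantic, low_level], axis=-1) @ W  # (16, 16, 256)
```

In the actual model only the adapter/projection weights would be trained; the backbone producing `semantic` stays frozen.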

Learning What is Worth Learning: Active and Sequential Domain Adaptation for Multi-modal Gross Tumor Volume Segmentation

Jingyun Yang, Guoqing Zhang, Jingge Wang, Yang Li

arXiv preprint, Aug 28 2025
Accurate gross tumor volume segmentation on multi-modal medical data is critical for radiotherapy planning in nasopharyngeal carcinoma and glioblastoma. Recent advances in deep neural networks have brought promising results in medical image segmentation, leading to an increasing demand for labeled data. Since labeling medical images is time-consuming and labor-intensive, active learning has emerged as a solution to reduce annotation costs by selecting the most informative samples to label and adapting high-performance models with as few labeled samples as possible. Previous active domain adaptation (ADA) methods seek to minimize sample redundancy by selecting samples that are farthest from the source domain. However, such one-off selection can easily cause negative transfer, and access to source medical data is often limited. Moreover, the query strategy for multi-modal medical data remains unexplored. In this work, we propose an active and sequential domain adaptation framework for dynamic multi-modal sample selection in ADA. We derive a query strategy to prioritize labeling and training on the most valuable samples based on their informativeness and representativeness. Empirical validation on diverse gross tumor volume segmentation tasks demonstrates that our method achieves favorable segmentation performance, significantly outperforming state-of-the-art ADA methods. Code is available at https://github.com/Hiyoochan/mmActS.
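A common way to combine informativeness and representativeness in an active-learning query is to mix predictive entropy with distance to the already-labeled set. The sketch below is an illustrative stand-in for the paper's strategy; the specific entropy/diversity mix and the `alpha` weight are our assumptions.

```python
import numpy as np

def query_scores(probs: np.ndarray, feats: np.ndarray,
                 labeled_feats: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Score unlabeled samples for annotation priority.

    probs: (N, C) predicted class probabilities per unlabeled sample.
    feats: (N, D) feature embeddings of unlabeled samples.
    labeled_feats: (M, D) embeddings of already-labeled samples.
    Higher score = more worth labeling next.
    """
    eps = 1e-12
    # Informativeness: predictive entropy (uncertain samples score high).
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    # Representativeness/diversity: distance to the nearest labeled sample.
    d = np.linalg.norm(feats[:, None, :] - labeled_feats[None], axis=-1)
    diversity = d.min(axis=1)
    norm = lambda x: (x - x.min()) / (np.ptp(x) + eps)
    return alpha * norm(entropy) + (1 - alpha) * norm(diversity)
```

Sequential variants would recompute these scores after each labeling round, so newly labeled samples immediately suppress their redundant neighbors.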

Ultra-low-field MRI for imaging of severe multiple sclerosis: a case-controlled study

Bergsland, N., Burnham, A., Dwyer, M. G., Bartnik, A., Schweser, F., Kennedy, C., Tranquille, A., Semy, M., Schnee, E., Young-Hong, D., Eckert, S., Hojnacki, D., Reilly, C., Benedict, R. H., Weinstock-Guttman, B., Zivadinov, R.

medRxiv preprint, Aug 27 2025
Background: Severe multiple sclerosis (MS) presents challenges for clinical research due to mobility constraints and specialized care needs. Traditional MRI studies often exclude this population, limiting understanding of severe MS progression. Portable, ultra-low-field MRI enables bedside imaging. Objectives: To (i) assess the feasibility of portable MRI in severe MS, and (ii) compare measurement approaches for automated tissue volumetry from ultra-low-field MRI. Methods: This prospective study enrolled 40 progressive MS patients (24 severely disabled, 16 less severe) from academic and skilled nursing settings. Participants underwent 0.064T MRI for tissue volumetry using conventional and artificial intelligence (AI)-driven segmentation. Clinical assessments included physical disability and cognition. Group comparisons and MRI-clinical associations were assessed. Results: MRI passed rigorous quality control, reflecting complete brain coverage and lack of motion artifact, in 38/40 participants. For severe versus less severe disease, the largest effect sizes were obtained with conventionally calculated gray matter (GM) volume (partial η² = 0.360), cortical GM volume (partial η² = 0.349), and whole brain volume (partial η² = 0.290), while an AI-based approach yielded the highest effect size for white matter volume (partial η² = 0.209). For clinical outcomes, the most consistent associations were found using conventional processing, while AI-based methods were dependent on the algorithm and input image, especially for cortical GM volume. Conclusion: Portable, ultra-low-field MRI is a feasible bedside tool that can provide insights into late-stage neurodegeneration in individuals living with severe MS. However, careful consideration is required in implementing tissue volumetry pipelines, as findings are heavily dependent on the choice of algorithm and input.
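The reported effect sizes are partial eta-squared values, which follow directly from ANOVA sums of squares. A one-line helper (the example sums of squares are made up to show the arithmetic, not taken from the study):

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta-squared effect size: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)
```

For instance, an effect sum of squares of 36 against an error sum of squares of 64 gives η²p = 0.36, the same scale as the group-difference values reported above.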

Robust Quantification of Affected Brain Volume from Computed Tomography Perfusion: A Hybrid Approach Combining Deep Learning and Singular Value Decomposition.

Kim GY, Yang HS, Hwang J, Lee K, Choi JW, Jung WS, Kim REY, Kim D, Lee M

PubMed, Aug 27 2025
Volumetric estimation of affected brain volumes using computed tomography perfusion (CTP) is crucial in the management of acute ischemic stroke (AIS) and relies on commercial software, which has limitations such as variations in results due to image quality. To predict affected brain volume accurately and robustly, we propose a hybrid approach that integrates singular value decomposition (SVD), deep learning (DL), and machine learning (ML) techniques. We included 449 CTP images of patients with AIS with manually annotated vessel landmarks provided by expert radiologists, collected between 2021 and 2023. We developed a CNN-based approach for predicting eight vascular landmarks from CTP images, integrating ML components. We then used SVD-related methods to generate perfusion maps and compared the results with those of the RapidAI software (RapidAI, Menlo Park, California). The proposed CNN model achieved an average Euclidean distance error of 4.63 ± 2.00 mm on vessel localization. Without the ML components, compared to RapidAI, our method yielded concordance correlation coefficient (CCC) scores of 0.898 for estimating volumes with cerebral blood flow (CBF) < 30% and 0.715 for Tmax > 6 s. With the ML method, it achieved CCC scores of 0.905 for CBF < 30% and 0.879 for Tmax > 6 s. For data quality assessment, it achieved an accuracy of 0.8. We developed a robust hybrid model combining DL and ML techniques for volumetric estimation of affected brain volumes using CTP in patients with AIS, demonstrating improved accuracy and robustness compared to existing commercial solutions.
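The SVD component in such pipelines is typically deconvolution of the tissue concentration curve by the arterial input function (AIF) via a lower-triangular Toeplitz convolution matrix. A minimal truncated-SVD sketch; the regularization threshold `lam` is an illustrative choice, not the paper's setting.

```python
import numpy as np

def svd_deconvolve(aif: np.ndarray, tissue: np.ndarray,
                   lam: float = 0.2) -> np.ndarray:
    """Truncated-SVD deconvolution of a tissue curve by the AIF.

    Builds the lower-triangular Toeplitz convolution matrix of the AIF,
    inverts it via SVD with singular values below lam * s_max zeroed, and
    returns the scaled residue function k(t) = CBF * R(t); perfusion maps
    then read off quantities like CBF = max(k).
    """
    n = aif.size
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            A[i, j] = aif[i - j]
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))
```

Truncating small singular values trades exactness for noise robustness, which is exactly where image-quality variation bites commercial implementations.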

Is the medical image segmentation problem solved? A survey of current developments and future directions

Guoping Xu, Jayaram K. Udupa, Jax Luo, Songlin Zhao, Yajun Yu, Scott B. Raymond, Hao Peng, Lipeng Ning, Yogesh Rathi, Wei Liu, You Zhang

arXiv preprint, Aug 27 2025
Medical image segmentation has advanced rapidly over the past two decades, largely driven by deep learning, which has enabled accurate and efficient delineation of cells, tissues, organs, and pathologies across diverse imaging modalities. This progress raises a fundamental question: to what extent have current models overcome persistent challenges, and what gaps remain? In this work, we provide an in-depth review of medical image segmentation, tracing its progress and key developments over the past decade. We examine core principles, including multiscale analysis, attention mechanisms, and the integration of prior knowledge, across the encoder, bottleneck, skip connections, and decoder components of segmentation networks. Our discussion is organized around seven key dimensions: (1) the shift from supervised to semi-/unsupervised learning, (2) the transition from organ segmentation to lesion-focused tasks, (3) advances in multi-modality integration and domain adaptation, (4) the role of foundation models and transfer learning, (5) the move from deterministic to probabilistic segmentation, (6) the progression from 2D to 3D and 4D segmentation, and (7) the trend from model invocation to segmentation agents. Together, these perspectives provide a holistic overview of the trajectory of deep learning-based medical image segmentation and aim to inspire future innovation. To support ongoing research, we maintain a continually updated repository of relevant literature and open-source resources at https://github.com/apple1986/medicalSegReview

Deep Learning-Based 3D and 2D Approaches for Skeletal Muscle Segmentation on Low-Dose CT Images.

Timpano G, Veltri P, Vizza P, Cascini GL, Manti F

PubMed, Aug 27 2025
Automated segmentation of skeletal muscle from computed tomography (CT) images is essential for large-scale quantitative body composition analysis. However, manual segmentation is time-consuming and impractical for routine or high-throughput use. This study presents a systematic comparison of two-dimensional (2D) and three-dimensional (3D) deep learning architectures for segmenting skeletal muscle at the anatomically standardized level of the third lumbar vertebra (L3) in low-dose computed tomography (LDCT) scans. We implemented and evaluated the DeepLabv3+ (2D) and UNet3+ (3D) architectures on a curated dataset of 537 LDCT scans, applying preprocessing protocols, L3 slice selection, and region of interest extraction. The model performance was evaluated using a comprehensive set of evaluation metrics, including Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95). DeepLabv3+ achieved the highest segmentation accuracy (DSC = 0.982 ± 0.010, HD95 = 1.04 ± 0.46 mm), while UNet3+ showed competitive performance (DSC = 0.967 ± 0.013, HD95 = 1.27 ± 0.58 mm) with 26 times fewer parameters (1.27 million vs. 33.6 million) and lower inference time. Both models exceeded or matched results reported in the recent CT-based muscle segmentation literature. This work offers practical insights into architecture selection for automated LDCT-based muscle segmentation workflows, with a focus on the L3 vertebral level, which remains the gold standard in muscle quantification protocols.
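Both reported metrics can be computed directly from binary masks. A brute-force sketch for small arrays; it assumes isotropic unit voxel spacing and, as a simplification, measures distances between all foreground voxels rather than surface points only.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between the two
    foreground point sets (brute force; fine for small masks)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

Production pipelines use surface extraction and physical voxel spacing, but the definitions are the same.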

MedNet-PVS: A MedNeXt-Based Deep Learning Model for Automated Segmentation of Perivascular Spaces

Zhen Xuen Brandon Low, Rory Zhang, Hang Min, William Pham, Lucy Vivash, Jasmine Moses, Miranda Lynch, Karina Dorfman, Cassandra Marotta, Shaun Koh, Jacob Bunyamin, Ella Rowsthorn, Alex Jarema, Himashi Peiris, Zhaolin Chen, Sandy R. Shultz, David K. Wright, Dexiao Kong, Sharon L. Naismith, Terence J. O'Brien, Ying Xia, Meng Law, Benjamin Sinclair

arXiv preprint, Aug 27 2025
Enlarged perivascular spaces (PVS) are increasingly recognized as biomarkers of cerebral small vessel disease, Alzheimer's disease, stroke, and aging-related neurodegeneration. However, manual segmentation of PVS is time-consuming and subject to moderate inter-rater reliability, while existing automated deep learning models have moderate performance and typically fail to generalize across diverse clinical and research MRI datasets. We adapted MedNeXt-L-k5, a Transformer-inspired 3D encoder-decoder convolutional network, for automated PVS segmentation. Two models were trained: one using a homogeneous dataset of 200 T2-weighted (T2w) MRI scans from the Human Connectome Project-Aging (HCP-Aging) dataset and another using 40 heterogeneous T1-weighted (T1w) MRI volumes from seven studies across six scanners. Model performance was evaluated using internal 5-fold cross validation (5FCV) and leave-one-site-out cross validation (LOSOCV). MedNeXt-L-k5 models trained on the T2w images of the HCP-Aging dataset achieved voxel-level Dice scores of 0.88 ± 0.06 in the white matter (WM), comparable to the reported inter-rater reliability of that dataset, and the highest yet reported in the literature. The same models trained on the T1w images of the HCP-Aging dataset achieved a substantially lower Dice score of 0.58 ± 0.09 (WM). Under LOSOCV, the model had voxel-level Dice scores of 0.38 ± 0.16 (WM) and 0.35 ± 0.12 in the basal ganglia (BG), and cluster-level Dice scores of 0.61 ± 0.19 (WM) and 0.62 ± 0.21 (BG). MedNeXt-L-k5 provides an efficient solution for automated PVS segmentation across diverse T1w and T2w MRI datasets. MedNeXt-L-k5 did not outperform nnU-Net, indicating that the attention-based mechanisms that transformer-inspired models use to provide global context are not required for high accuracy in PVS segmentation.
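Cluster-level Dice differs from voxel-level Dice by scoring connected components rather than voxels. One common definition, assumed here since the abstract does not give the exact formula, counts a component as detected if it overlaps the other mask at all. A sketch assuming `scipy` is available:

```python
import numpy as np
from scipy import ndimage

def cluster_dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Cluster-level Dice over connected components.

    A predicted component is a true positive if it touches the ground
    truth anywhere, and vice versa; the score is the fraction of
    components (across both masks) that are matched.
    """
    pred_lab, n_pred = ndimage.label(pred)
    gt_lab, n_gt = ndimage.label(gt)
    if n_pred + n_gt == 0:
        return 1.0
    tp_pred = sum(1 for i in range((1), n_pred + 1) if gt[pred_lab == i].any())
    tp_gt = sum(1 for i in range(1, n_gt + 1) if pred[gt_lab == i].any())
    return (tp_pred + tp_gt) / (n_pred + n_gt)
```

This explains why cluster-level scores (0.61/0.62) can exceed voxel-level scores (0.38/0.35): a PVS can be detected as a cluster even when its voxel-wise overlap is partial.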

ProMUS-NET: Artificial intelligence detects more prostate cancer than urologists on micro-ultrasonography.

Zhou SR, Zhang L, Choi MH, Vesal S, Kinnaird A, Brisbane WG, Lughezzani G, Maffei D, Fasulo V, Albers P, Fan RE, Shao W, Sonn GA, Rusu M

PubMed, Aug 27 2025
To improve the sensitivity and inter-reader consistency of prostate cancer localisation on micro-ultrasonography (MUS) by developing a deep learning model for automatic cancer segmentation, and to compare model performance with that of expert urologists. We performed an institutional review board-approved prospective collection of MUS images from patients undergoing magnetic resonance imaging (MRI)-ultrasonography fusion guided biopsy at a single institution. Patients underwent 14-core systematic biopsy and additional targeted sampling of suspicious MRI lesions. Biopsy pathology and MRI information were cross-referenced to annotate the locations of International Society of Urological Pathology Grade Group (GG) ≥2 clinically significant cancer on MUS images. We trained a no-new U-Net (nnU-Net) model, the Prostate Micro-Ultrasound Network (ProMUS-NET), to localise GG ≥2 cancer on these image stacks in a fivefold cross-validation. Performance was compared with that of six expert urologists in a matched sub-cohort. The artificial intelligence (AI) model achieved an area under the receiver-operating characteristic curve of 0.92 and detected more cancers than urologists (lesion-level sensitivity 73% vs 58%; patient-level sensitivity 77% vs 66%). AI lesion-level sensitivity for peripheral zone lesions was 86.2%. Our AI model identified prostate cancer lesions on MUS with high sensitivity and specificity. Further work is ongoing to improve margin overlap, to reduce false positives, and to perform external validation. AI-assisted prostate cancer detection on MUS has great potential to improve biopsy diagnosis by urologists.
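The distinction between lesion-level and patient-level sensitivity reduces to how per-lesion detections are aggregated; a minimal sketch (names and data layout are ours):

```python
def sensitivities(detected_per_patient: list[list[bool]]) -> tuple[float, float]:
    """Lesion-level and patient-level sensitivity from detection flags.

    detected_per_patient[p][l] is True if lesion l of patient p was found.
    Patient-level counts a patient as detected if any lesion is found.
    """
    lesions = [hit for patient in detected_per_patient for hit in patient]
    lesion_sens = sum(lesions) / len(lesions)
    patient_sens = (sum(any(patient) for patient in detected_per_patient)
                    / len(detected_per_patient))
    return lesion_sens, patient_sens
```

Because one detected lesion suffices per patient, patient-level sensitivity is always at least as high as lesion-level sensitivity, consistent with the 77% vs 73% reported above.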
