Page 15 of 26257 results

TissUnet: Improved Extracranial Tissue and Cranium Segmentation for Children through Adulthood

Markiian Mandzak, Elvira Yang, Anna Zapaishchykova, Yu-Hui Chen, Lucas Heilbroner, John Zielke, Divyanshu Tak, Reza Mojahed-Yazdi, Francesca Romana Mussa, Zezhong Ye, Sridhar Vajapeyam, Viviana Benitez, Ralph Salloum, Susan N. Chi, Houman Sotoudeh, Jakob Seidlitz, Sabine Mueller, Hugo J. W. L. Aerts, Tina Y. Poussaint, Benjamin H. Kann

arXiv preprint · Jun 6 2025
Extracranial tissues visible on brain magnetic resonance imaging (MRI) may hold significant value for characterizing health conditions and clinical decision-making, yet they are rarely quantified. Current tools have not been widely validated, particularly in settings of developing brains or underlying pathology. We present TissUnet, a deep learning model that segments skull bone, subcutaneous fat, and muscle from routine three-dimensional T1-weighted MRI, with or without contrast enhancement. The model was trained on 155 paired MRI-computed tomography (CT) scans and validated across nine datasets covering a wide age range and including individuals with brain tumors. In comparison to AI-CT-derived labels from 37 MRI-CT pairs, TissUnet achieved a median Dice coefficient of 0.79 [IQR: 0.77-0.81] in a healthy adult cohort. In a second validation using expert manual annotations, median Dice was 0.83 [IQR: 0.83-0.84] in healthy individuals and 0.81 [IQR: 0.78-0.83] in tumor cases, outperforming the previous state-of-the-art method. Acceptability testing resulted in an 89% acceptance rate after adjudication by a tie-breaker (N=108 MRIs), and TissUnet demonstrated excellent performance in the blinded comparative review (N=45 MRIs), including both healthy and tumor cases in pediatric populations. TissUnet enables fast, accurate, and reproducible segmentation of extracranial tissues, supporting large-scale studies on craniofacial morphology, treatment effects, and cardiometabolic risk using standard brain T1w MRI.
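The Dice coefficients reported above compare a predicted mask against a reference mask. The metric itself is standard and can be sketched in a few lines of NumPy (the toy masks below are illustrative, not from the study):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 2x3 "masks": 2 overlapping voxels, 3 foreground voxels each
a = np.array([[1, 1, 0], [1, 0, 0]])
b = np.array([[1, 1, 0], [0, 0, 1]])
score = dice(a, b)  # 2*2 / (3+3)
```

A Dice of 0.79-0.83, as reported, means roughly four-fifths of the combined foreground volume is shared between prediction and reference.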

Magnetic resonance imaging and the evaluation of vestibular schwannomas: a systematic review

Lee, K. S., Wijetilake, N., Connor, S., Vercauteren, T., Shapey, J.

medRxiv preprint · Jun 6 2025
Introduction: The assessment of vestibular schwannoma (VS) requires a standardized measurement approach, as growth is a key element in defining the treatment strategy for VS. Volumetric measurements offer higher sensitivity and precision, but existing segmentation methods are labour-intensive, lack standardisation, and are prone to variability and subjectivity. A new core set of measurement indicators, reported consistently, will support clinical decision-making and facilitate evidence synthesis. This systematic review aimed to identify indicators used in 1) magnetic resonance imaging (MRI) acquisition, 2) measurement, and 3) growth of VS. This work is expected to inform a Delphi consensus. Methods: Systematic searches of Medline, Embase and Cochrane Central were undertaken on 4th October 2024. Studies that assessed the evaluation of VS with MRI between 2014 and 2024 were included. Results: The final dataset consisted of 102 studies and 19001 patients. Eighty-six (84.3%) studies employed post-contrast T1 as the MRI acquisition of choice for evaluating VS. Nine (8.8%) studies additionally employed heavily T2-weighted sequences such as constructive interference in steady state (CISS) and FIESTA-C. Only 45 (44.1%) studies reported the slice thickness, with the majority, 38 (84.4%), choosing <3mm. Fifty-eight (56.8%) studies measured volume whilst 49 (48.0%) measured the largest linear dimension; 14 (13.7%) studies used both measurements. Four studies employed semi-automated or automated segmentation processes to measure the volumes of VS. Of 68 studies investigating growth, 54 (79.4%) provided a threshold. Significant variation in volumetric growth was observed, but the threshold for significant percentage change reported by most studies was 20% (n=18). Conclusion: Substantial variation in MRI acquisition, and in methods for evaluating measurement and growth of VS, exists across the literature. This lack of standardization is likely attributable to resource constraints and the fact that currently available volumetric segmentation methods are highly labour-intensive. Having identified the indicators employed in the literature, this study aims to inform a Delphi consensus for the standardized measurement of VS and to promote the uptake of data-driven, artificial intelligence-based measurement tools.

Deep learning-enabled MRI phenotyping uncovers regional body composition heterogeneity and disease associations in two European population cohorts

Mertens, C. J., Haentze, H., Ziegelmayer, S., Kather, J. N., Truhn, D., Kim, S. H., Busch, F., Weller, D., Wiestler, B., Graf, M., Bamberg, F., Schlett, C. L., Weiss, J. B., Ringhof, S., Can, E., Schulz-Menger, J., Niendorf, T., Lammert, J., Molwitz, I., Kader, A., Hering, A., Meddeb, A., Nawabi, J., Schulze, M. B., Keil, T., Willich, S. N., Krist, L., Hadamitzky, M., Hannemann, A., Bassermann, F., Rueckert, D., Pischon, T., Hapfelmeier, A., Makowski, M. R., Bressem, K. K., Adams, L. C.

medRxiv preprint · Jun 6 2025
Body mass index (BMI) does not account for substantial inter-individual differences in regional fat and muscle compartments, which are relevant for the prevalence of cardiometabolic and cancer conditions. We applied a validated deep learning pipeline for automated segmentation of whole-body MRI scans in 45,851 adults from the UK Biobank and German National Cohort, enabling harmonized quantification of visceral (VAT), gluteofemoral (GFAT), and abdominal subcutaneous adipose tissue (ASAT), liver fat fraction (LFF), and trunk muscle volume. Associations with clinical conditions were evaluated using compartment measures adjusted for age, sex, height, and BMI. Our analysis demonstrates that regional adiposity and muscle volume show distinct associations with cardiometabolic and cancer prevalence, and that substantial disease heterogeneity exists within BMI strata. The analytic framework and reference data presented here will support future risk stratification efforts and facilitate the integration of automated MRI phenotyping into large-scale population and clinical research.
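One common way to evaluate a compartment measure "adjusted for age, sex, height, and BMI", consistent with the analysis described (though the authors' exact procedure may differ), is to regress the measure on the covariates and work with the residuals. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(40, 70, n)
sex = rng.integers(0, 2, n).astype(float)
height = rng.normal(170, 8, n)
bmi = rng.normal(26, 4, n)
# Synthetic VAT driven partly by the covariates plus independent variation
vat = 0.05 * age + 0.8 * sex + 0.02 * height + 0.3 * bmi + rng.normal(0, 1, n)

# Design matrix with intercept; least-squares residuals are the
# covariate-adjusted measure used for association testing
X = np.column_stack([np.ones(n), age, sex, height, bmi])
beta, *_ = np.linalg.lstsq(X, vat, rcond=None)
vat_adj = vat - X @ beta
```

By construction the residuals are orthogonal to each covariate, so associations between `vat_adj` and disease prevalence are not driven by BMI or body size.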

Detecting neurodegenerative changes in glaucoma using deep mean kurtosis-curve-corrected tractometry

Kasa, L. W., Schierding, W., Kwon, E., Holdsworth, S., Danesh-Meyer, H. V.

medRxiv preprint · Jun 6 2025
Glaucoma is increasingly recognized as a neurodegenerative condition involving both retinal and central nervous system structures. Here, we present an integrated framework that combines MK-Curve-corrected diffusion kurtosis imaging (DKI), tractometry, and deep autoencoder-based normative modeling to detect localized white matter abnormalities associated with glaucoma. Using UK Biobank diffusion MRI data, we show that the MK-Curve approach corrects anatomically implausible values and improves the reliability of DKI metrics - particularly mean (MK), radial (RK), and axial kurtosis (AK) - in regions of complex fiber architecture. Tractometry revealed reduced MK in glaucoma patients along the optic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus, but not in a non-visual control tract, supporting disease specificity. These abnormalities were spatially localized, with significant changes observed at multiple points along the tracts. MK demonstrated greater sensitivity than mean diffusivity (MD) and exhibited altered distributional features, reflecting microstructural heterogeneity not captured by standard metrics. Node-wise MK values in the right optic radiation showed weak but significant correlations with retinal OCT measures (ganglion cell layer and retinal nerve fiber layer thickness), reinforcing the biological relevance of these findings. Deep autoencoder-based modeling further enabled subject-level anomaly detection that aligned spatially with group-level changes and outperformed traditional approaches. Together, our results highlight the potential of advanced diffusion modeling and deep learning for sensitive, individualized detection of glaucomatous neurodegeneration and support their integration into future multimodal imaging pipelines in neuro-ophthalmology.
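The abstract contrasts deep autoencoder-based anomaly detection with traditional approaches. A minimal example of the traditional baseline - node-wise normative z-scores along a tract - is sketched below on synthetic data (cohort size, node count, and effect size are all invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_controls, n_nodes = 100, 50   # hypothetical along-tract sampling

# Normative cohort: mean kurtosis (MK) profiles along a tract
controls = rng.normal(1.0, 0.05, (n_controls, n_nodes))
mu = controls.mean(axis=0)
sd = controls.std(axis=0, ddof=1)

# A "patient" profile with a localized MK reduction at nodes 20-29
patient = rng.normal(1.0, 0.05, n_nodes)
patient[20:30] -= 0.3

z = (patient - mu) / sd            # node-wise deviation from the norm
abnormal = np.flatnonzero(z < -2)  # nodes with markedly reduced MK
```

An autoencoder-based normative model replaces the per-node mean/std with reconstruction error from a network trained on controls, which can capture correlations between nodes that this simple baseline ignores.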

Clinically Interpretable Deep Learning via Sparse BagNets for Epiretinal Membrane and Related Pathology Detection

Ofosu Mensah, S., Neubauer, J., Ayhan, M. S., Djoumessi Donteu, K. R., Koch, L. M., Uzel, M. M., Gelisken, F., Berens, P.

medRxiv preprint · Jun 6 2025
Epiretinal membrane (ERM) is a vitreoretinal interface disease that, if not properly addressed, can lead to vision impairment and negatively affect quality of life. For ERM detection and treatment planning, Optical Coherence Tomography (OCT) has become the primary imaging modality, offering non-invasive, high-resolution cross-sectional imaging of the retina. Deep learning models have also led to good ERM detection performance on OCT images. Nevertheless, most deep learning models cannot be easily understood by clinicians, which limits their acceptance in clinical practice. Post-hoc explanation methods have been utilised to support the uptake of models, albeit with partial success. In this study, we trained a sparse BagNet model, an inherently interpretable deep learning model, to detect ERM in OCT images. It performed on par with a comparable black-box model and generalised well to external data. In a multitask setting, it also accurately predicted other changes related to the ERM pathophysiology. Through a user study with ophthalmologists, we showed that the visual explanations readily provided by the sparse BagNet model for its decisions are well-aligned with clinical expertise. We propose potential directions for clinical implementation of the sparse BagNet model to guide clinical decisions in practice.

Stable Vision Concept Transformers for Medical Diagnosis

Lijie Hu, Songning Lai, Yuan Hua, Shu Yang, Jingfeng Zhang, Di Wang

arXiv preprint · Jun 5 2025
Transparency is a paramount concern in the medical field, prompting researchers to delve into the realm of explainable AI (XAI). Among these XAI methods, Concept Bottleneck Models (CBMs), which have drawn much attention recently, aim to restrict the model's latent space to human-understandable high-level concepts by generating a conceptual layer for extracting conceptual features. However, existing methods rely solely on concept features to determine the model's predictions, overlooking the intrinsic feature embeddings within medical images. To address this utility gap between the original models and concept-based models, we propose the Vision Concept Transformer (VCT). Furthermore, despite their benefits, CBMs have been found to negatively impact model performance and fail to provide stable explanations when faced with input perturbations, which limits their application in the medical field. To address this faithfulness issue, this paper further proposes the Stable Vision Concept Transformer (SVCT) based on VCT, which leverages the vision transformer (ViT) as its backbone and incorporates a conceptual layer. SVCT employs conceptual features to enhance decision-making capabilities by fusing them with image features and ensures model faithfulness through the integration of Denoised Diffusion Smoothing. Comprehensive experiments on four medical datasets demonstrate that our VCT and SVCT maintain accuracy while remaining interpretable compared to baselines. Furthermore, even when subjected to perturbations, our SVCT model consistently provides faithful explanations, thus meeting the needs of the medical field.
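As a rough sketch of the fusion idea described for SVCT - concept scores predicted from an image feature, then concatenated with it before the classifier - here is a toy NumPy forward pass. All dimensions and weights are invented, and the ViT backbone and diffusion smoothing are omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: backbone feature size, concepts, classes
d_img, n_concepts, n_classes = 16, 5, 2
W_concept = rng.normal(size=(d_img, n_concepts))           # conceptual layer
W_head = rng.normal(size=(d_img + n_concepts, n_classes))  # classifier head

def forward(img_feat: np.ndarray) -> np.ndarray:
    """Concept scores are predicted from the image feature, then fused
    (concatenated) with it before classification -- unlike a plain concept
    bottleneck, which would discard img_feat and classify from concepts alone."""
    concepts = np.tanh(img_feat @ W_concept)      # human-readable concept scores
    fused = np.concatenate([img_feat, concepts])  # fusion step
    logits = fused @ W_head
    e = np.exp(logits - logits.max())             # softmax
    return e / e.sum()

probs = forward(rng.normal(size=d_img))
```

Because the image feature survives the bottleneck, the classifier is not limited to information expressible through the concept set, which is the "utility gap" the abstract refers to.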

SAM-aware Test-time Adaptation for Universal Medical Image Segmentation

Jianghao Wu, Yicheng Wu, Yutong Xie, Wenjia Bai, You Zhang, Feilong Tang, Yulong Li, Yasmeen George, Imran Razzak

arXiv preprint · Jun 5 2025
Universal medical image segmentation using the Segment Anything Model (SAM) remains challenging due to its limited adaptability to medical domains. Existing adaptations, such as MedSAM, enhance SAM's performance in medical imaging but at the cost of reduced generalization to unseen data. Therefore, in this paper, we propose SAM-aware Test-Time Adaptation (SAM-TTA), a fundamentally different pipeline that preserves the generalization of SAM while improving its segmentation performance in medical imaging via a test-time framework. SAM-TTA tackles two key challenges: (1) input-level discrepancies caused by differences in image acquisition between natural and medical images and (2) semantic-level discrepancies due to fundamental differences in object definition between natural and medical domains (e.g., clear boundaries vs. ambiguous structures). Specifically, our SAM-TTA framework comprises (1) Self-adaptive Bezier Curve-based Transformation (SBCT), which adaptively converts single-channel medical images into three-channel SAM-compatible inputs while maintaining structural integrity, to mitigate the input gap between medical and natural images, and (2) Dual-scale Uncertainty-driven Mean Teacher adaptation (DUMT), which employs consistency learning to align SAM's internal representations to medical semantics, enabling efficient adaptation without auxiliary supervision or expensive retraining. Extensive experiments on five public datasets demonstrate that our SAM-TTA outperforms existing TTA approaches and even surpasses fully fine-tuned models such as MedSAM in certain scenarios, establishing a new paradigm for universal medical image segmentation. Code can be found at https://github.com/JianghaoWu/SAM-TTA.
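The SBCT component maps single-channel intensities through Bezier curves to build a three-channel, SAM-compatible input. The paper's transformation is self-adaptive; the fixed-curve sketch below only illustrates the underlying cubic-Bezier intensity remapping (the control points are arbitrary choices, not the paper's):

```python
import numpy as np

def bezier_map(x, p1, p2, n=256):
    """Map normalized intensities x in [0,1] through a cubic Bezier curve
    anchored at (0,0) and (1,1) with interior control points p1, p2."""
    t = np.linspace(0.0, 1.0, n)
    xs = 3*(1-t)**2*t*p1[0] + 3*(1-t)*t**2*p2[0] + t**3
    ys = 3*(1-t)**2*t*p1[1] + 3*(1-t)*t**2*p2[1] + t**3
    return np.interp(x, xs, ys)   # xs is monotone for 0 <= p1x <= p2x <= 1

rng = np.random.default_rng(0)
slice_1ch = rng.random((8, 8))          # stand-in for a normalized MRI slice

# Three differently curved copies stacked into a 3-channel input
curves = [((0.3, 0.1), (0.7, 0.9)),     # contrast-enhancing S-curve
          ((0.3, 0.5), (0.7, 0.5)),     # contrast-compressing
          ((0.1, 0.4), (0.9, 0.6))]     # mild midtone flattening
three_ch = np.stack([bezier_map(slice_1ch, p1, p2) for p1, p2 in curves])
```

Because the curve is anchored at (0,0) and (1,1) and its control points stay in the unit square, the mapping preserves the intensity range while reshaping contrast, which is what lets the structural content of the slice survive the conversion.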

Exploring Adversarial Watermarking in Transformer-Based Models: Transferability and Robustness Against Defense Mechanism for Medical Images

Rifat Sadik, Tanvir Rahman, Arpan Bhattacharjee, Bikash Chandra Halder, Ismail Hossain

arXiv preprint · Jun 5 2025
Deep learning models have shown remarkable success in dermatological image analysis, offering potential for automated skin disease diagnosis. Convolutional neural network (CNN)-based architectures previously achieved immense popularity and success in computer vision (CV) tasks such as skin image recognition, generation, and video analysis, but with the emergence of transformer-based models, CV tasks are now often carried out with these models instead. The Vision Transformer (ViT) is one such transformer-based model, using self-attention mechanisms to achieve state-of-the-art performance across various tasks. However, ViTs' reliance on global attention mechanisms makes them susceptible to adversarial perturbations. This paper investigates the susceptibility of ViTs for medical images to adversarial watermarking, a method that adds imperceptible perturbations in order to fool models. By generating adversarial watermarks through Projected Gradient Descent (PGD), we examine the transferability of such attacks to CNNs and analyze the performance of a defense mechanism, adversarial training. Results indicate that while performance is not compromised for clean images, ViTs become much more vulnerable under adversarial attack, with accuracy dropping to as low as 27.6%; adversarial training raises it back up to 90.0%.
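PGD, the attack generator used here, is a generic technique: repeatedly step in the sign of the loss gradient, then project the perturbation back into an epsilon-ball around the clean input. A self-contained sketch on a toy logistic model (not a ViT; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)    # toy linear "model" standing in for a network
x = rng.normal(size=64)    # clean input, true label y = 1
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x0, eps=0.5, alpha=0.1, steps=20):
    """Projected Gradient Descent in the L-infinity ball of radius eps."""
    x_adv = x0.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv)
        grad = (p - y) * w                           # d(BCE loss)/dx for a linear logit
        x_adv = x_adv + alpha * np.sign(grad)        # ascend the loss
        x_adv = x0 + np.clip(x_adv - x0, -eps, eps)  # project back into the ball
    return x_adv

x_adv = pgd_attack(x)
clean_prob = sigmoid(w @ x)
adv_prob = sigmoid(w @ x_adv)  # pushed toward the wrong class
```

For a differentiable deep model the analytic gradient is replaced by backpropagation through the network, but the sign-step-and-project loop is identical.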

Prediction of impulse control disorders in Parkinson's disease: a longitudinal machine learning study

Vamvakas, A., van Balkom, T., van Wingen, G., Booij, J., Weintraub, D., Berendse, H. W., van den Heuvel, O. A., Vriend, C.

medRxiv preprint · Jun 5 2025
Background: Impulse control disorders (ICD) in Parkinson's disease (PD) patients mainly occur as adverse effects of dopamine replacement therapy. Despite several known risk factors associated with ICD development, it cannot yet be accurately predicted at PD diagnosis. Objectives: We aimed to investigate the predictability of incident ICD from baseline measures of demographic, clinical, dopamine transporter single photon emission computed tomography (DAT-SPECT), and genetic variables. Methods: We used demographic and clinical data of medication-free PD patients from two longitudinal datasets: the Parkinson's Progression Markers Initiative (PPMI) (n=311) and Amsterdam UMC (n=72). We extracted radiomic and latent features from DAT-SPECT. We used single nucleotide polymorphisms (SNPs) from PPMI's NeuroX and exome sequencing data. Four machine learning classifiers were trained on combinations of the input feature sets to predict incident ICD at any follow-up assessment. Classification performance was measured with 10x5-fold cross-validation. Results: ICD prevalence at any follow-up was 0.32. The highest performance in predicting incident ICD (AUC=0.66) was achieved by the models trained on clinical features only. Anxiety severity and age of PD onset were identified as the most important features. Performance did not improve with the addition of features from DAT-SPECT or SNPs. We observed significantly higher performance (AUC=0.74) when classifying patients who developed ICD within four years of diagnosis against those who remained ICD-negative for seven or more years. Conclusions: Prediction accuracy for later ICD development at the time of PD diagnosis is limited; however, it increases for shorter time-to-event predictions. Neither DAT-SPECT nor genetic data improve the predictability obtained using demographic and clinical variables alone.
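The 10x5-fold cross-validation scheme means five folds, reshuffled and repeated ten times. A dependency-free index generator for such a scheme might look like this (311 matches the PPMI sample size above; the splitting details, e.g. the absence of stratification, are an assumption):

```python
import random

def repeated_kfold(n_samples, k=5, repeats=10, seed=42):
    """Yield (train_idx, test_idx) pairs for repeats x k-fold CV,
    reshuffling the sample order before each repeat."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    for _ in range(repeats):
        rng.shuffle(idx)
        # distribute the remainder so fold sizes differ by at most 1
        fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                      for i in range(k)]
        start = 0
        for size in fold_sizes:
            test = idx[start:start + size]
            train = idx[:start] + idx[start + size:]
            yield train, test
            start += size

splits = list(repeated_kfold(311, k=5, repeats=10))  # 50 train/test pairs
```

Averaging the AUC over all 50 splits, rather than a single 5-fold pass, reduces the variance introduced by any one random partition, which matters at these modest sample sizes.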

DWI and Clinical Characteristics Correlations in Acute Ischemic Stroke After Thrombolysis

Li, J., Huang, C., Liu, Y., Li, Y., Zhang, J., Xiao, M., Yan, Z., Zhao, H., Zeng, X., Mu, J.

medRxiv preprint · Jun 5 2025
Objective: Magnetic resonance diffusion-weighted imaging (DWI) is a crucial tool for diagnosing acute ischemic stroke, yet some patients present as DWI-negative. This study aims to analyze the imaging differences and associated clinical characteristics in acute ischemic stroke patients receiving intravenous thrombolysis, in order to enhance understanding of DWI-negative strokes. Methods: Clinical data were retrospectively collected from acute ischemic stroke patients receiving intravenous thrombolysis at the Stroke Center of the First Affiliated Hospital of Chongqing Medical University from January 2017 to June 2023, categorized into DWI-positive and DWI-negative groups. Descriptive statistics, univariate analysis, binary logistic regression, and machine learning models were used to assess the predictive value of clinical features. Additionally, telephone follow-up was conducted for DWI-negative patients to record medication compliance, stroke recurrence, and mortality, with a Fine-Gray competing risk model used to analyze recurrence risk factors. Results: The incidence of DWI-negative ischemic stroke was 22.74%. Factors positively associated with DWI-positive cases include onset-to-needle time (ONT), onset-to-first-MRI time (OMT), NIHSS score at 1 week of hospitalization (NIHSS-1w), hyperlipidemia (HLP), and atrial fibrillation (AF) (p<0.05, OR>1). Conversely, recurrent ischemic stroke (RIS) and platelet count (PLT) are negatively correlated with DWI-positive cases (p<0.05, OR<1). Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification significantly influences DWI presentation (p=0.01), but the specific impact of etiological subtypes remains unclear. Machine learning models suggest that the features with the highest predictive value, in descending order, are AF, HLP, OMT, ONT, the NIHSS difference within 24 hours post-thrombolysis (NIHSS-d(0-24h)PT), and RIS. Conclusions: NIHSS-1w, OMT, ONT, HLP, and AF can predict DWI-positive findings, while platelet count and RIS are associated with DWI-negative cases. AF and HLP demonstrate the highest predictive value. DWI-negative patients have a higher short-term risk of stroke recurrence than of mortality, with a potential correlation between TOAST classification and recurrence risk.
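Several findings above are stated as odds ratios with p<0.05. For a single binary factor such as AF, the unadjusted odds ratio and its Wald confidence interval come straight from a 2x2 table; the counts below are invented purely for illustration:

```python
import math

# Hypothetical 2x2 table: rows = AF yes/no, columns = DWI-positive/negative
af_pos, af_neg = 40, 10        # AF patients: DWI-positive vs DWI-negative
noaf_pos, noaf_neg = 260, 90   # non-AF patients

# Cross-product odds ratio: (a*d) / (b*c)
odds_ratio = (af_pos * noaf_neg) / (af_neg * noaf_pos)

# Wald 95% CI on the log odds ratio
se = math.sqrt(1/af_pos + 1/af_neg + 1/noaf_pos + 1/noaf_neg)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
```

An OR > 1 with a confidence interval excluding 1 is the table-level counterpart of the "p<0.05, OR>1" associations reported; the study's adjusted ORs come from the logistic regression instead.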