VidFuncta: Towards Generalizable Neural Representations for Ultrasound Videos

Julia Wolleb, Florentin Bieder, Paul Friedrich, Hemant D. Tagare, Xenophon Papademetris

arXiv preprint, Jul 29 2025
Ultrasound is widely used in clinical care, yet standard deep learning methods often struggle with full video analysis due to non-standardized acquisition and operator bias. We offer a new perspective on ultrasound video analysis through implicit neural representations (INRs). We build on Functa, an INR framework in which each image is represented by a modulation vector that conditions a shared neural network. However, its extension to the temporal domain of medical videos remains unexplored. To address this gap, we propose VidFuncta, a novel framework that leverages Functa to encode variable-length ultrasound videos into compact, time-resolved representations. VidFuncta disentangles each video into a static video-specific vector and a sequence of time-dependent modulation vectors, capturing both temporal dynamics and dataset-level redundancies. Our method outperforms 2D and 3D baselines on video reconstruction and enables downstream tasks to directly operate on the learned 1D modulation vectors. We validate VidFuncta on three public ultrasound video datasets -- cardiac, lung, and breast -- and evaluate its downstream performance on ejection fraction prediction, B-line detection, and breast lesion classification. These results highlight the potential of VidFuncta as a generalizable and efficient representation framework for ultrasound videos. Our code is publicly available at https://github.com/JuliaWolleb/VidFuncta_public.
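To illustrate the conditioning mechanism the abstract describes, here is a minimal sketch of a Functa-style modulated INR: a shared coordinate network whose layers are shifted by a per-video static vector and a per-frame modulation vector. Layer sizes, the FiLM-style shift conditioning, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a Functa-style modulated INR for video.
import torch
import torch.nn as nn

class ModulatedINR(nn.Module):
    def __init__(self, coord_dim=2, hidden=256, static_dim=64, mod_dim=64):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Linear(coord_dim, hidden), nn.Linear(hidden, hidden)
        ])
        # Maps [static video vector, per-frame modulation] to a per-layer
        # additive shift (FiLM-style conditioning of the shared network).
        self.to_shift = nn.Linear(static_dim + mod_dim, hidden * len(self.layers))
        self.out = nn.Linear(hidden, 1)  # grayscale intensity

    def forward(self, coords, static_vec, frame_mod):
        # coords: (N, 2) pixel locations; static_vec: (static_dim,);
        # frame_mod: (mod_dim,) modulation for the frame being decoded.
        shifts = self.to_shift(torch.cat([static_vec, frame_mod])).chunk(len(self.layers))
        h = coords
        for layer, shift in zip(self.layers, shifts):
            h = torch.sin(layer(h) + shift)  # sine activations, as in SIREN/Functa
        return self.out(h)

model = ModulatedINR()
intensity = model(torch.rand(1024, 2), torch.zeros(64), torch.zeros(64))
```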

GDAIP: A Graph-Based Domain Adaptive Framework for Individual Brain Parcellation

Jianfei Zhu, Haiqi Zhu, Shaohui Liu, Feng Jiang, Baichun Wei, Chunzhi Yi

arXiv preprint, Jul 29 2025
Recent deep learning approaches have shown promise in learning individual brain parcellations from functional magnetic resonance imaging (fMRI). However, most existing methods assume consistent data distributions across domains and struggle with the domain shifts inherent to real-world cross-dataset scenarios. To address this challenge, we propose Graph Domain Adaptation for Individual Parcellation (GDAIP), a novel framework that integrates Graph Attention Networks (GAT) with Minimax Entropy (MME)-based domain adaptation. We construct cross-dataset brain graphs at both the group and individual levels. By leveraging semi-supervised training and adversarial optimization of the prediction entropy on unlabeled vertices of the target brain graph, the reference atlas is adapted from the group-level brain graph to the individual brain graph, enabling individual parcellation under cross-dataset settings. We evaluated our method using parcellation visualization, the Dice coefficient, and functional homogeneity. Experimental results demonstrate that GDAIP produces individual parcellations with topologically plausible boundaries, strong cross-session consistency, and the ability to reflect functional organization.
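The adversarial entropy optimization mentioned above is typically implemented with a gradient-reversal layer, so that one backward pass plays the minimax game between classifier and feature extractor. A hedged sketch of that general MME pattern follows; the GAT encoder and all names are placeholders, not the paper's code.

```python
# Sketch of Minimax Entropy (MME) on unlabeled target vertices: minimizing
# this loss drives the classifier to maximize prediction entropy, while the
# reversed gradient makes the feature extractor (e.g. a GAT) minimize it.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # flip gradient sign below this layer

def mme_entropy_loss(features, classifier, lam=0.1):
    logits = classifier(GradReverse.apply(features, lam))
    probs = F.softmax(logits, dim=1)
    # mean negative entropy over unlabeled target vertices
    return (probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
```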

Distribution-Based Masked Medical Vision-Language Model Using Structured Reports

Shreyank N Gowda, Ruichi Zhang, Xiao Gu, Ying Weng, Lu Yang

arXiv preprint, Jul 29 2025
Medical image-language pre-training aims to align medical images with clinically relevant text to improve model performance on various downstream tasks. However, existing models often struggle with the variability and ambiguity inherent in medical data, limiting their ability to capture nuanced clinical information and uncertainty. This work introduces an uncertainty-aware medical image-text pre-training model that enhances generalization capabilities in medical image analysis. Building on previous methods and focusing on chest X-rays, our approach utilizes structured text reports generated by a large language model (LLM) to augment image data with clinically relevant context. These reports begin with a definition of the disease, followed by an 'appearance' section that highlights critical regions of interest, and finally 'observations' and 'verdicts' that ground model predictions in clinical semantics. By modeling both inter- and intra-modal uncertainty, our framework captures the inherent ambiguity in medical images and text, yielding improved representations and performance on downstream tasks. Our model demonstrates significant advances in medical image-text pre-training, obtaining state-of-the-art performance on multiple downstream tasks.
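One common way to model the uncertainty described above is to have each encoder emit a distribution rather than a point embedding. The sketch below shows that general idea using diagonal Gaussians compared with a closed-form 2-Wasserstein distance; it illustrates the family of techniques, not the paper's exact objective, and all names are assumptions.

```python
# Distribution-based embeddings: each modality outputs (mean, log-variance);
# alignment can then use a distance between Gaussians instead of cosine.
import torch

def wasserstein2_diag(mu1, logvar1, mu2, logvar2):
    # Squared 2-Wasserstein distance between N(mu1, diag(s1^2)) and N(mu2, diag(s2^2))
    s1, s2 = torch.exp(0.5 * logvar1), torch.exp(0.5 * logvar2)
    return ((mu1 - mu2) ** 2).sum(-1) + ((s1 - s2) ** 2).sum(-1)

img_mu, img_logvar = torch.randn(8, 512), torch.zeros(8, 512)  # image encoder output
txt_mu, txt_logvar = torch.randn(8, 512), torch.zeros(8, 512)  # text encoder output
dist = wasserstein2_diag(img_mu, img_logvar, txt_mu, txt_logvar)  # shape (8,)
```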

Enhancing efficiency in paediatric brain tumour segmentation using a pathologically diverse single-center clinical dataset

A. Piffer, J. A. Buchner, A. G. Gennari, P. Grehten, S. Sirin, E. Ross, I. Ezhov, M. Rosier, J. C. Peeken, M. Piraud, B. Menze, A. Guerreiro Stücklin, A. Jakab, F. Kofler

arXiv preprint, Jul 29 2025
Background: Brain tumours are the most common solid malignancies in children, encompassing diverse histological and molecular subtypes, imaging features, and outcomes. Paediatric brain tumours (PBTs), including high- and low-grade gliomas (HGG, LGG), medulloblastomas (MB), ependymomas, and rarer forms, pose diagnostic and therapeutic challenges. Deep learning (DL)-based segmentation offers promising tools for tumour delineation, yet its performance across heterogeneous PBT subtypes and MRI protocols remains uncertain.

Methods: A retrospective single-centre cohort of 174 paediatric patients with HGG, LGG, MB, ependymomas, and other rarer subtypes was used. MRI sequences included T1, T1 post-contrast (T1-C), T2, and FLAIR. Manual annotations were provided for four tumour subregions: whole tumour (WT), T2-hyperintensity (T2H), enhancing tumour (ET), and cystic component (CC). A 3D nnU-Net model was trained and tested (121/53 split), with segmentation performance assessed using the Dice similarity coefficient (DSC) and compared against intra- and inter-rater variability.

Results: The model achieved robust performance for WT and T2H (mean DSC: 0.85), comparable to human annotator variability (mean DSC: 0.86). ET segmentation was moderately accurate (mean DSC: 0.75), while CC performance was poor. Segmentation accuracy varied by tumour type, MRI sequence combination, and location. Notably, T1, T1-C, and T2 alone produced results nearly equivalent to the full protocol.

Conclusions: DL-based segmentation is feasible for PBTs, particularly for T2H and WT. Challenges remain for ET and CC segmentation, highlighting the need for further refinement. These findings support the potential of protocol simplification and automation to enhance volumetric assessment and streamline paediatric neuro-oncology workflows.
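For reference, the Dice similarity coefficient used throughout the results is the standard overlap measure for binary masks; a minimal sketch:

```python
# Dice similarity coefficient (DSC) for binary segmentation masks:
# DSC = 2|P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1 (identical).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

# e.g. dice(model_mask, annotator_mask) for each subregion (WT, T2H, ET, CC)
```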

Neural Autoregressive Modeling of Brain Aging

Ridvan Yesiloglu, Wei Peng, Md Tauhidul Islam, Ehsan Adeli

arXiv preprint, Jul 29 2025
Brain aging synthesis is a critical task with broad applications in clinical and computational neuroscience. The ability to predict the future structural evolution of a subject's brain from an earlier MRI scan provides valuable insights into aging trajectories. Yet the high dimensionality of the data, the subtlety of structural change across ages, and subject-specific patterns make synthesis of the aging brain challenging. To overcome these challenges, we propose NeuroAR, a novel brain aging simulation model based on generative autoregressive transformers. NeuroAR synthesizes the aging brain by autoregressively estimating the discrete token maps of a future scan from a concatenated space of token embeddings of the previous and future scans. To guide generation, it concatenates the subject's previous scan into each scale and injects the scan's acquisition age and the target age at each block via cross-attention. We evaluate our approach on both elderly and adolescent subjects, demonstrating superior image fidelity over state-of-the-art generative models, including latent diffusion models (LDM) and generative adversarial networks. Furthermore, we employ a pre-trained age predictor to validate the consistency and realism of the synthesized images with respect to expected aging patterns. NeuroAR significantly outperforms key models, including LDM, demonstrating its ability to model subject-specific brain aging trajectories with high fidelity.
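The age conditioning via cross-attention can be pictured as token features attending to embeddings of the two ages. The sketch below shows that pattern in isolation; dimensions, the linear age embedding, and all names are illustrative assumptions rather than the authors' architecture.

```python
# Cross-attention age conditioning: tokens of the scan being generated
# (queries) attend to embeddings of the acquisition age and target age.
import torch
import torch.nn as nn

d_model = 256
attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
age_embed = nn.Linear(1, d_model)  # placeholder continuous-age embedding

tokens = torch.randn(1, 196, d_model)        # token map at one scale
ages = torch.tensor([[[63.0], [70.0]]])      # acquisition age, target age
context = age_embed(ages)                    # (1, 2, d_model)
conditioned, _ = attn(tokens, context, context)  # queries attend to the ages
```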

segcsvdPVS: A convolutional neural network-based tool for quantification of enlarged perivascular spaces (PVS) on T1-weighted images

Gibson, E., Ramirez, J., Woods, L. A., Berberian, S., Ottoy, J., Scott, C., Yhap, V., Gao, F., Coello, R. D., Valdes-Hernandez, M., Lange, A., Tartaglia, C., Kumar, S., Binns, M. A., Bartha, R., Symons, S., Swartz, R. H., Masellis, M., Singh, N., MacIntosh, B. J., Wardlaw, J. M., Black, S. E., Lim, A. S., Goubran, M.

medRxiv preprint, Jul 29 2025
Introduction: Enlarged perivascular spaces (PVS) are imaging markers of cerebral small vessel disease (CSVD) that are associated with age, disease phenotypes, and overall health. Quantification of PVS is challenging but necessary to expand our understanding of their role in cerebrovascular pathology. Accurate, automated segmentation of PVS on T1-weighted images would be valuable given the widespread use of T1-weighted imaging protocols in multisite clinical and research datasets.

Methods: We introduce segcsvdPVS, a convolutional neural network (CNN)-based tool for automated PVS segmentation on T1-weighted images. segcsvdPVS was developed using a novel hierarchical approach that builds on existing tools and incorporates robust training strategies to enhance the accuracy and consistency of PVS segmentation. Performance was evaluated using a comprehensive strategy that included comparison to existing benchmark methods, ablation-based validation, accuracy validation against manual ground-truth annotations, correlation with age-related PVS burden as a biological benchmark, and extensive robustness testing.

Results: segcsvdPVS achieved strong object-level performance for basal ganglia PVS (DSC = 0.78), exhibiting both high sensitivity (SNS = 0.80) and precision (PRC = 0.78). Although voxel-level precision was lower (PRC = 0.57), manual correction improved it by only ~3%, indicating that the additional voxels reflected primarily boundary- or extent-related differences rather than correctable false-positive error. For non-basal ganglia PVS, segcsvdPVS outperformed benchmark methods, exhibiting higher voxel-level performance across several metrics (DSC = 0.60, SNS = 0.67, PRC = 0.57, NSD = 0.77), despite overall lower performance relative to basal ganglia PVS. Additionally, the associations between age and segmentation-derived measures of PVS burden were consistently stronger and more reliable for segcsvdPVS than for benchmark methods across three cohorts (test6, ADNI, CAHHM), providing further evidence of the accuracy and consistency of its segmentation output.

Conclusions: segcsvdPVS demonstrates robust performance across diverse imaging conditions and improved sensitivity to biologically meaningful associations, supporting its utility as a T1-based PVS segmentation tool.
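For reference, the voxel-level sensitivity (SNS) and precision (PRC) reported above follow the standard definitions; a minimal sketch for binary masks:

```python
# Voxel-level sensitivity and precision for binary segmentation masks:
# SNS = TP / (TP + FN), PRC = TP / (TP + FP).
import numpy as np

def sns_prc(pred: np.ndarray, truth: np.ndarray):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    sns = tp / max(truth.sum(), 1)   # fraction of true PVS voxels recovered
    prc = tp / max(pred.sum(), 1)    # fraction of predicted voxels that are true
    return sns, prc
```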

Deep learning aging marker from retinal images unveils sex-specific clinical and genetic signatures

Trofimova, O., Böttger, L., Bors, S., Pan, Y., Liefers, B., Beyeler, M. J., Presby, D. M., Bontempi, D., Hastings, J., Klaver, C. C. W., Bergmann, S.

medRxiv preprint, Jul 29 2025
Retinal fundus images offer a non-invasive window into systemic aging. Here, we fine-tuned a foundation model (RETFound) to predict chronological age from color fundus images in 71,343 participants from the UK Biobank, achieving a mean absolute error of 2.85 years. The resulting retinal age gap (RAG), i.e., the difference between predicted and chronological age, was associated with cardiometabolic traits, inflammation, cognitive performance, mortality, dementia, cancer, and incident cardiovascular disease. Genome-wide analyses identified genes related to longevity, metabolism, neurodegeneration, and age-related eye diseases. Sex-stratified models revealed consistent performance but divergent biological signatures: males had younger-appearing retinas and stronger links to metabolic syndrome, while in females, both model attention and genetic associations pointed to a greater involvement of retinal vasculature. Our study positions retinal aging as a biologically meaningful and sex-sensitive biomarker that can support more personalized approaches to risk assessment and aging-related healthcare.
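The retinal age gap (RAG) defined above is a simple derived quantity; the sketch below spells it out with placeholder values (the 2.85-year MAE is the figure reported in the abstract, the arrays are illustrative).

```python
# Retinal age gap (RAG): predicted minus chronological age. A positive RAG
# means an older-appearing retina; MAE summarizes prediction accuracy.
import numpy as np

predicted_age = np.array([61.2, 48.9, 73.4])      # model output (toy values)
chronological_age = np.array([58.0, 50.1, 75.0])  # ground truth (toy values)

rag = predicted_age - chronological_age
mae = np.abs(predicted_age - chronological_age).mean()  # paper reports 2.85 years
```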

Implicit Spatiotemporal Bandwidth Enhancement Filter by Sine-activated Deep Learning Model for Fast 3D Photoacoustic Tomography

I Gede Eka Sulistyawan, Takuro Ishii, Riku Suzuki, Yoshifumi Saijo

arXiv preprint, Jul 28 2025
3D photoacoustic tomography (3D-PAT) using high-frequency hemispherical transducers offers near-omnidirectional reception and enhanced sensitivity to the finer structural details encoded in the high-frequency components of the broadband photoacoustic (PA) signal. However, practical constraints such as a limited number of channels with a bandlimited sampling rate often result in sparse, bandlimited sensors that degrade image quality. To address this, we revisit the 2D deep learning (DL) approach applied directly to sensor-wise PA radio-frequency (PARF) data. Specifically, we introduce sine activation into the DL model to restore the broadband nature of PARF signals from the observed bandlimited, high-frequency PARF data. Given the scarcity of 3D training data, we employ a simplified training strategy that simulates random spherical absorbers. This combination of a sine-activated model and randomized training is designed to emphasize bandwidth learning over dataset memorization. Our model was evaluated on a leaf-skeleton phantom, a micro-CT-verified 3D spiral phantom, and in-vivo human palm vasculature. The results showed that the proposed training mechanism generalized well across the different tests, effectively increasing the sensor density and recovering the spatiotemporal bandwidth. Qualitatively, the sine-activated model uniquely enhanced high-frequency content, producing clearer vascular structure with fewer artefacts. Quantitatively, it exhibits full bandwidth at the -12 dB spectrum level and a significantly higher contrast-to-noise ratio with minimal loss of structural similarity. Lastly, we optimized our approach to enable fast, enhanced 3D-PAT at 2 volumes per second for practical imaging of free-moving targets.
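Sine activation here refers to the SIREN family of periodic activations, which preserve high-frequency content better than ReLU-style nonlinearities. A minimal sketch of such a layer follows; the frequency factor omega_0 and sizes are illustrative assumptions.

```python
# A sine-activated layer in the SIREN style: sin(omega_0 * (Wx + b)).
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_dim, out_dim, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0  # scales input frequencies before the sine
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # periodic activation lets the network represent broadband signals
        return torch.sin(self.omega_0 * self.linear(x))

layer = SineLayer(64, 64)
out = layer(torch.randn(16, 64))
```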

Enhancing and Accelerating Brain MRI through Deep Learning Reconstruction Using Prior Subject-Specific Imaging

Amirmohammad Shamaei, Alexander Stebner, Salome Bosshart, Johanna Ospel, Gouri Ginde, Mariana Bento, Roberto Souza

arXiv preprint, Jul 28 2025
Magnetic resonance imaging (MRI) is a crucial medical imaging modality. However, long acquisition times remain a significant challenge, leading to increased costs and reduced patient comfort. Recent studies have shown the potential of using deep learning models that incorporate information from prior subject-specific MRI scans to improve reconstruction quality of present scans. Integrating this prior information requires registration of the previous scan to the current image reconstruction, which can be time-consuming. We propose a novel deep-learning-based MRI reconstruction framework which consists of an initial reconstruction network, a deep registration model, and a transformer-based enhancement network. We validated our method on a longitudinal dataset of T1-weighted MRI scans with 2,808 images from 18 subjects at four acceleration factors (R5, R10, R15, R20). Quantitative metrics confirmed our approach's superiority over existing methods (p < 0.05, Wilcoxon signed-rank test). Furthermore, we analyzed the impact of our MRI reconstruction method on the downstream task of brain segmentation and observed improved accuracy and volumetric agreement with reference segmentations. Our approach also achieved a substantial reduction in total reconstruction time compared to methods that use traditional registration algorithms, making it more suitable for real-time clinical applications. The code associated with this work is publicly available at https://github.com/amirshamaei/longitudinal-mri-deep-recon.
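The three-stage design reads naturally as a composition of modules; the sketch below shows that wiring with stand-in networks. All modules and names are placeholders standing in for the networks named in the abstract, not the released code.

```python
# Hedged sketch of the pipeline: initial reconstruction, deep registration
# of the prior scan to it, then transformer-based enhancement of the pair.
import torch
import torch.nn as nn

class LongitudinalRecon(nn.Module):
    def __init__(self, recon_net: nn.Module, reg_net: nn.Module, enhancer: nn.Module):
        super().__init__()
        self.recon_net, self.reg_net, self.enhancer = recon_net, reg_net, enhancer

    def forward(self, undersampled, prior_scan):
        initial = self.recon_net(undersampled)             # stage 1: initial reconstruction
        prior_aligned = self.reg_net(prior_scan, initial)  # stage 2: register prior to it
        return self.enhancer(torch.cat([initial, prior_aligned], dim=1))  # stage 3

class NoOpRegistration(nn.Module):
    def forward(self, moving, fixed):
        return moving  # a real registration model would warp `moving` onto `fixed`

# Toy stand-ins for demonstration only:
model = LongitudinalRecon(nn.Conv2d(1, 1, 3, padding=1),
                          NoOpRegistration(),
                          nn.Conv2d(2, 1, 3, padding=1))
img = model(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```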

Brain White Matter Microstructure Associations with Blood Markers of the GSH Redox cycle in Schizophrenia

Pavan, T., Steullet, P., Aleman-Gomez, Y., Jenni, R., Schilliger, Z., Cleusix, M., Alameda, L., Do, K. Q., Conus, P., Hagmann, P., Dwir, D., Klauser, P., Jelescu, I.

medRxiv preprint, Jul 28 2025
In groups of patients suffering from schizophrenia (SZ), redox dysregulation has been reported in both peripheral fluids and the brain. It has been hypothesized that such dysregulation, including alterations of the glutathione (GSH) cycle, could contribute to the brain white matter (WM) abnormalities in SZ, given the susceptibility of oligodendrocytes to oxidative stress. In this study, we assess differences between 82 schizophrenia patients (PT) and 86 healthy controls (HC) in peripheral blood markers of the GSH redox cycle, namely GSH peroxidase (GPx) and GSH reductase (GR) enzymatic activities and their ratio (GPx/GR), evaluating the hypothesis that alterations in the homeostasis of the systemic GSH cycle are associated with pathological mechanisms in the brain WM of PT. To do so, we employ advanced diffusion MRI methods, Diffusion Kurtosis Imaging (DKI) and White Matter Tract Integrity-Watson (WMTI-W), which provide excellent sensitivity to demyelination and neuroinflammation. We show that GPx levels are higher in female control participants (p=0.00041) and decrease with aging (p=0.026). We find differences between PT and HC in the association between GR and mean kurtosis (MK, p<0.0001): lower MK was associated with higher blood GR activity in HC, but not in PT, suggesting that high GR activity (a hallmark of reductive stress) in HC is linked to changes in myelin integrity. However, the GSH-redox peripheral blood markers did not explain the WM anomalies detected in PT, or the design of the present study could not detect such a subtle phenomenon, if present.
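The GR-MK association described above is the kind of group-wise regression that can be sketched in a few lines; the example below uses random placeholder data only, to show the shape of the analysis rather than the study's actual statistics.

```python
# Sketch of a group-wise association test: regressing mean kurtosis (MK)
# on blood GR activity within one group (here, toy HC data).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
gr_activity = rng.normal(50, 10, 86)  # toy GR activity values for 86 HC
mean_kurtosis = 1.0 - 0.002 * gr_activity + rng.normal(0, 0.02, 86)

fit = linregress(gr_activity, mean_kurtosis)
print(fit.slope, fit.pvalue)  # a negative slope mirrors the reported HC pattern
```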