Manouvriez D, Kuchcinski G, Roca V, Sillaire AR, Bertoux M, Delbeuck X, Pruvo JP, Lecerf S, Pasquier F, Lebouvier T, Lopes R

pubmed · papers · Jul 3 2025
Early-onset Alzheimer's disease (EOAD) is a clinically, genetically and pathologically heterogeneous condition. Identifying biomarkers related to disease progression is crucial for advancing clinical trials and improving therapeutic strategies. This study aims to differentiate EOAD patients with varying rates of progression using a Brain Age Gap Estimation (BrainAGE)-based clustering algorithm applied to structural magnetic resonance imaging (MRI). A retrospective analysis was conducted of a longitudinal cohort of 142 participants who met the criteria for early-onset probable Alzheimer's disease. Participants were assessed clinically, neuropsychologically and with structural MRI at baseline and annually for 6 years. A BrainAGE deep learning model pre-trained on 3,227 3D T1-weighted MRI scans of healthy subjects was used to extract encoded MRI representations at baseline, and k-means clustering was then performed on these encoded representations to stratify the population. The resulting clusters were analyzed for disease severity, cognitive phenotype and brain volumes at baseline and longitudinally. The optimal number of clusters was 2. The clusters differed significantly in BrainAGE scores (5.44 [±8] years vs 15.25 [±5] years, p < 0.001). The high-BrainAGE cluster was associated with older age (p = 0.001), a higher proportion of female patients (p = 0.005), and greater disease severity, indicated by lower Mini-Mental State Examination (MMSE) scores (19.32 [±4.62] vs 14.14 [±6.93], p < 0.001) and lower gray matter volume (0.35 [±0.03] vs 0.32 [±0.02], p < 0.001). Longitudinal analyses revealed significant differences in disease progression (MMSE: -2.35 [±0.15] vs -3.02 [±0.25] points/year, p = 0.02; CDR: 1.58 [±0.10] vs 1.99 [±0.16] points/year, p = 0.03). K-means clustering of BrainAGE-encoded representations thus stratified EOAD patients by rate of disease progression, underscoring the potential of BrainAGE as a biomarker for better understanding and managing EOAD.
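
As a hedged illustration of the stratification step described above, the sketch below runs k-means on encoder outputs and uses silhouette analysis to pick the number of clusters; the `embeddings` array is a placeholder for the encoded MRI representations the pre-trained BrainAGE model would produce, and all names are illustrative rather than the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder for the (n_patients x d) encoded MRI representations; random here.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(142, 128))

# Pick k by silhouette score, then stratify; the paper reports an optimum of k = 2.
scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    scores[k] = silhouette_score(embeddings, labels)

best_k = max(scores, key=scores.get)
clusters = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(embeddings)
```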

Li L, Wei W, Yang L, Zhang W, Dong J, Liu Y, Huang H, Zhao W

pubmed · papers · Jul 3 2025
Low-dose CT (LDCT) significantly reduces the radiation dose received by patients; however, dose reduction introduces additional noise and artifacts. Denoising methods based on convolutional neural networks (CNNs) are limited in their long-range modeling capability, while Transformer-based methods, although capable of powerful long-range modeling, suffer from high computational complexity. Furthermore, the denoised images predicted by deep learning techniques inevitably differ in noise distribution from normal-dose CT (NDCT) images, which can also affect final image quality and diagnostic outcomes. This paper proposes CT-Mamba, a hybrid convolutional state space model for LDCT image denoising. The model combines the local feature extraction of CNNs with Mamba's strength in capturing long-range dependencies, enabling it to capture both local details and global context. We also introduce a spatially coherent Z-shaped scanning scheme that preserves spatial continuity between adjacent pixels, and we design a Mamba-driven deep noise power spectrum (NPS) loss function to guide training, ensuring that the noise texture of denoised LDCT images closely resembles that of NDCT images and thereby enhancing overall image quality and diagnostic value. Experiments demonstrate that CT-Mamba performs excellently in noise reduction, detail preservation and noise texture optimization, and that its outputs exhibit higher statistical similarity to the radiomics features of NDCT images. CT-Mamba thus holds promise as a representative approach for applying the Mamba framework to LDCT denoising.
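
One plausible reading of the NPS loss idea, sketched below in PyTorch under the assumption that paired LDCT/NDCT images are available during training: the power spectrum of the noise the network removes is matched to that of the reference noise (LDCT minus NDCT). This is illustrative, not the authors' exact formulation.

```python
import torch

def nps_loss(denoised: torch.Tensor, ldct: torch.Tensor, ndct: torch.Tensor) -> torch.Tensor:
    """All tensors are (B, 1, H, W). Hypothetical NPS-style loss."""
    est_noise = ldct - denoised            # noise the model removed
    ref_noise = ldct - ndct                # reference noise realization
    nps_est = torch.fft.fft2(est_noise).abs() ** 2   # 2D noise power spectra
    nps_ref = torch.fft.fft2(ref_noise).abs() ** 2
    return torch.mean(torch.abs(nps_est - nps_ref))
```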

Michal Golovanevsky, Pranav Mahableshwarkar, Carsten Eickhoff, Ritambhara Singh

arxiv · preprint · Jul 3 2025
Multimodal deep learning holds promise for improving clinical prediction by integrating diverse patient data, including text, imaging, time-series, and structured demographics. Contrastive learning facilitates this integration by producing a unified representation that can be reused across tasks, reducing the need for separate models or encoders. Although contrastive learning has seen success in vision-language domains, its use in clinical settings remains largely limited to image-text pairs. We propose the Pipeline for Contrastive Modality Evaluation and Encoding (PiCME), which systematically assesses five clinical data types from MIMIC: discharge summaries, radiology reports, chest X-rays, demographics, and time-series. We pre-train contrastive models on all 26 combinations of two to five modalities and evaluate their utility on in-hospital mortality and phenotype prediction. To address performance plateaus with more modalities, we introduce a Modality-Gated LSTM that weights each modality according to its contrastively learned importance. Our results show that contrastive models remain competitive with supervised baselines, particularly in three-modality settings. Performance declines beyond three modalities, a drop that supervised models also fail to recover; the Modality-Gated LSTM mitigates it, improving AUROC from 73.19% to 76.93% and AUPRC from 51.27% to 62.26% in the five-modality setting. We also compare contrastively learned modality importance scores with attribution scores and evaluate generalization across demographic subgroups, highlighting strengths in interpretability and fairness. PiCME is the first to scale contrastive learning across all modality combinations in MIMIC, offering guidance for modality selection, training strategies, and equitable clinical prediction.
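
The symmetric contrastive objective underlying such modality pairing is the standard CLIP-style loss; a minimal sketch, with illustrative names and assuming paired per-modality embeddings:

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07):
    """z_a, z_b: (batch, dim) embeddings of paired samples from two modalities."""
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature               # pairwise similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Cross-entropy in both directions: match a->b and b->a on the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```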

Lisa Herzog, Pascal Bühler, Ezequiel de la Rosa, Beate Sick, Susanne Wegener

arxiv · preprint · Jul 3 2025
Mechanical thrombectomy has become the standard of care for patients with stroke due to large vessel occlusion (LVO), yet only 50% of successfully treated patients show a favorable outcome. We developed and evaluated interpretable deep learning models to predict functional outcome in terms of the modified Rankin Scale score, alongside individualized treatment effects (ITEs), using data from 449 LVO stroke patients in a randomized clinical trial. Besides clinical variables, we considered non-contrast CT (NCCT) and CT angiography (CTA) scans, which were integrated using novel foundation models to exploit advanced imaging information. Clinical variables had good predictive power for binary functional outcome prediction (AUC of 0.719 [0.666, 0.774]), which improved slightly when adding CTA imaging (AUC of 0.737 [0.687, 0.795]); adding NCCT scans, alone or combined with CTA, yielded no improvement. The most important clinical predictor of functional outcome was pre-stroke disability. While the estimated ITEs were well calibrated to the average treatment effect, discriminatory ability was limited, as indicated by a C-for-benefit statistic of around 0.55 in all models. In summary, the models allowed us to jointly integrate CT imaging and clinical features while achieving state-of-the-art prediction performance and ITE estimates, but further research is needed, particularly to improve ITE estimation.
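
The bracketed intervals reported above are typically obtained by bootstrapping the test set; a minimal sketch of that procedure (not necessarily the authors' exact resampling scheme):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true: np.ndarray, y_score: np.ndarray, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, aucs = len(y_true), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample patients with replacement
        if len(np.unique(y_true[idx])) < 2:       # skip resamples missing a class
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_score), (lo, hi)
```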

Beeche C, Kim J, Tavolinejad H, Zhao B, Sharma R, Duda J, Gee J, Dako F, Verma A, Morse C, Hou B, Shen L, Sagreiya H, Davatzikos C, Damrauer S, Ritchie MD, Rader D, Long Q, Chen T, Kahn CE, Chirinos J, Witschey WR

pubmed · papers · Jul 3 2025
Generalizable foundation models for computed tomographic (CT) imaging data are emerging AI tools anticipated to vastly improve clinical workflow efficiency. However, existing models are typically trained within narrow imaging contexts, with limited anatomical coverage, contrast settings, and clinical indications; these constraints reduce their ability to generalize across the broad spectrum of real-world presentations encountered in volumetric CT imaging data. We introduce Percival, a vision-language foundation model trained on over 400,000 CT volumes and paired radiology reports from more than 50,000 participants enrolled in the Penn Medicine BioBank. Percival employs a dual-encoder architecture, with a transformer-based image encoder and a BERT-style language encoder aligned via symmetric contrastive learning, and was validated on the imaging data of over 20,000 participants encompassing more than 100,000 CT volumes. In image-text recall tasks, Percival outperforms models trained on limited anatomical windows. To assess Percival's clinical knowledge, we evaluated its biological, phenotypic and prognostic relevance using laboratory-wide and phenome-wide association studies and survival analyses, uncovering a rich latent structure aligned with physiological measurements and disease phenotypes.
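
Image-text recall of the kind used for validation can be computed directly from the aligned embeddings; a small sketch with illustrative names, assuming row i of each matrix corresponds to the same study:

```python
import torch
import torch.nn.functional as F

def recall_at_k(img_emb: torch.Tensor, txt_emb: torch.Tensor, k: int = 5) -> float:
    """Fraction of volumes whose paired report ranks in the top-k by cosine similarity."""
    sims = F.normalize(img_emb, dim=-1) @ F.normalize(txt_emb, dim=-1).t()
    topk = sims.topk(k, dim=1).indices                     # (N, k) report indices
    match = torch.arange(sims.size(0)).unsqueeze(1).to(topk)
    return (topk == match).any(dim=1).float().mean().item()
```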

Bey, P., Dhindsa, K., Rackoll, T., Feldheim, J., Bönstrup, M., Thomalla, G., Schulz, R., Cheng, B., Gerloff, C., Endres, M., Nave, A. H., Ritter, P.

medrxiv · preprint · Jul 3 2025
Recent advances in the treatment of acute ischemic stroke have improved patient outcomes, yet the mechanisms driving long-term disease trajectory are not well understood. Current trends in the literature emphasize the distributed, disruptive impact of stroke lesions on brain network organization. While most studies use population-derived data to investigate lesion interference on healthy tissue, the potential for individualized treatment strategies remains underexplored owing to a lack of availability and effective utilization of the necessary clinical imaging data. To validate the potential for individualized patient evaluation, we explored and compared the differential information in network models based on normative versus individual data. We further present a novel deep learning approach that provides usable, accurate estimates of individual stroke impact from minimal imaging data, bridging the data gap that hinders individualized treatment planning. We created normative and individual disconnectomes for each of 78 patients (mean age 65.1 years, 32 females) from two independent cohort studies. MRI data and the Barthel Index, as a measure of activities of daily living, were collected in the acute and early subacute phase after stroke (baseline) and at three months post stroke. Disconnectomes were subsequently described using 12 network metrics, including clustering coefficient and transitivity. The metrics were first compared between disconnectomes and then used as features in a classifier to predict a patient's disease trajectory, as defined by the three-month Barthel Index. We then developed a deep learning architecture based on graph convolution and trained it to predict properties of the individual disconnectomes from the normative disconnectomes. The two disconnectome types showed statistically significant differences in topology and predictive power. Normative disconnectomes included a statistically significant larger number of connections (N = 604 versus N = 210 for individual), and agreement between network properties ranged from r² = 0.01 for clustering coefficient to r² = 0.8 for assortativity, highlighting the impact of disconnectome choice on subsequent analysis. In predicting patient deficit severity, individual data achieved an AUC of 0.94 compared with 0.85 for normative-based features. Our deep learning estimates correlated highly with individual features (mean r² = 0.94) and achieved comparable performance, with an AUC of 0.93. We thus showed that normative-data-based analysis of stroke disconnections provides limited information about patient recovery, whereas individual data offer higher prognostic precision, and we presented a novel approach that curbs the need for individual data while retaining most of the differential information encoding an individual patient's disease trajectory.
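
For readers unfamiliar with the network metrics involved, a disconnectome reduces to a weighted adjacency matrix, and measures such as clustering coefficient, transitivity and assortativity can be computed with networkx; a minimal sketch on a random placeholder matrix:

```python
import networkx as nx
import numpy as np

# Placeholder disconnectome: symmetric weighted adjacency over 90 regions.
rng = np.random.default_rng(0)
adjacency = rng.random((90, 90))
adjacency = (adjacency + adjacency.T) / 2
adjacency[adjacency < 0.7] = 0.0        # sparsify so node degrees vary
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)
features = {
    "clustering": nx.average_clustering(G, weight="weight"),
    "transitivity": nx.transitivity(G),
    "assortativity": nx.degree_assortativity_coefficient(G),
}
```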

Woof, W. A., de Guimaraes, T. A. C., Al-Khuzaei, S., Daich Varela, M., Shah, M., Naik, G., Sen, S., Bagga, P., Chan, Y. W., Mendes, B. S., Lin, S., Ghoshal, B., Liefers, B., Fu, D. J., Georgiou, M., da Silva, A. S., Nguyen, Q., Liu, Y., Fujinami-Yokokawa, Y., Sumodhee, D., Furman, J., Patel, P. J., Moghul, I., Moosajee, M., Sallum, J., De Silva, S. R., Lorenz, B., Herrmann, P., Holz, F. G., Fujinami, K., Webster, A. R., Mahroo, O. A., Downes, S. M., Madhusudhan, S., Balaskas, K., Michaelides, M., Pontikos, N.

medrxiv · preprint · Jul 3 2025
Purpose: To quantify spectral-domain optical coherence tomography (SD-OCT) images cross-sectionally and longitudinally in a large cohort of molecularly characterized patients with inherited retinal diseases (IRDs) from the UK.

Design: Retrospective study of imaging data.

Participants: Patients with a clinically and molecularly confirmed diagnosis of IRD who underwent macular SD-OCT imaging at Moorfields Eye Hospital (MEH) between 2011 and 2019. We retrospectively identified 4,240 IRD patients from the MEH database (198 distinct IRD genes), comprising 69,664 SD-OCT macular volumes.

Methods: Eight features of interest were defined: retina, fovea, intraretinal cystic spaces (ICS), subretinal fluid (SRF), subretinal hyper-reflective material (SHRM), pigment epithelium detachment (PED), ellipsoid zone loss (EZ-loss) and retinal pigment epithelium loss (RPE-loss). Manual annotations of five b-scans per SD-OCT volume were performed for the retinal features by four graders following a defined grading protocol. A total of 1,749 b-scans from 360 SD-OCT volumes across 275 patients were annotated for the eight retinal features to train and test a neural-network-based segmentation model, AIRDetect-OCT, which was then applied to the entire imaging dataset.

Main Outcome Measures: Performance of AIRDetect-OCT, compared against inter-grader agreement using the Dice score on a held-out dataset. Feature prevalence, volume and area were analysed cross-sectionally and longitudinally.

Results: The inter-grader Dice score for manual segmentation was ≥90% for retina, ICS, SRF, SHRM and PED, and >77% for both EZ-loss and RPE-loss. Model-grader agreement was >80% for segmentation of retina, ICS, SRF, SHRM and PED, and >68% for both EZ-loss and RPE-loss. Automatic segmentation was applied to 272,168 b-scans across 7,405 SD-OCT volumes from 3,534 patients encompassing 176 unique genes. Accounting for age, male patients exhibited significantly more EZ-loss (19.6 mm² vs 17.9 mm², p < 2.8×10⁻⁴) and RPE-loss (7.79 mm² vs 6.15 mm², p < 3.2×10⁻⁶) than female patients. RPE-loss was significantly higher in Asian patients than in other ethnicities (9.37 mm² vs 7.29 mm², p < 0.03). ICS average total volume was largest in RS1 (0.47 mm³) and NR2E3 (0.25 mm³), SRF in BEST1 (0.21 mm³), and PED in EFEMP1 (0.34 mm³). BEST1 and PROM1 showed significantly different patterns of EZ-loss (p < 10⁻⁴) and RPE-loss (p < 0.02) when comparing dominant with recessive forms. Sectoral analysis revealed significantly increased EZ-loss in the inferior quadrant compared with the superior quadrant for RHO (Δ = -0.414 mm², p = 0.036) and EYS (Δ = -0.908 mm², p = 1.5×10⁻⁴). In ABCA4 retinopathy, more severe genotypes (group A) were associated with faster progression of EZ-loss (2.80±0.62 mm²/yr), whilst the p.(Gly1961Glu) variant (group D) was associated with slower progression (0.56±0.18 mm²/yr). There were also sex differences within groups, with males in group A experiencing significantly faster rates of RPE-loss progression (2.48±1.40 mm²/yr vs 0.87±0.62 mm²/yr, p = 0.047) but lower rates in groups B, C, and D.

Conclusions: AIRDetect-OCT, a novel deep learning algorithm, enables large-scale OCT feature quantification in IRD patients, uncovering cross-sectional and longitudinal phenotype correlations with demographic and genotypic parameters.
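
The Dice score used for both inter-grader and model-grader agreement is a simple overlap measure; a minimal sketch for binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap of two same-shape binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```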

Zhao X, Liang F, Long C, Yuan Z, Zhao J

pubmed · papers · Jul 2 2025
Medical image translation is of great value but difficult: the noise pattern must change style while the anatomical content remains invariant. Various deep learning methods, including the mainstream GAN, Transformer and diffusion models, have been developed to learn this multi-modal mapping, but generator outputs are still far from perfect for medical images. In this paper, we propose a robust multi-contrast translation framework for MRI medical images with knowledge distillation and adversarial attack, which can be integrated with any generator. The additional refinement network consists of teacher and student modules with similar structures but different inputs. Unlike existing knowledge distillation work, our teacher module is designed as a registration network with more inputs, allowing it to better learn the noise distribution and further refine the translated results during training. The knowledge is then distilled to the student module to ensure that better translation results are generated. We also introduce an adversarial attack module before the generator; this black-box attacker generates meaningful perturbations and adversarial examples throughout training. Our model has been tested on two public MRI medical image datasets under different types and levels of perturbation, and each designed module is verified by ablation study. Extensive experiments and comparisons with SOTA methods demonstrate our model's superior refinement quality and robustness.
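
A hedged sketch of the two ingredients described above, written in PyTorch with illustrative names: a distillation term pulling the student's refined output toward the (detached) teacher output, and a simple gradient-free perturbation applied to generator inputs during training. Both are plausible readings, not the authors' exact modules.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, target, alpha: float = 0.5):
    """Blend of matching the teacher's refinement and the ground-truth image."""
    return (alpha * F.l1_loss(student_out, teacher_out.detach())
            + (1 - alpha) * F.l1_loss(student_out, target))

def black_box_perturb(x: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Gradient-free (black-box) perturbation of a normalized input batch."""
    return (x + epsilon * torch.randn_like(x)).clamp(0, 1)
```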

Chen K, Han H, Wei J, Zhang Y

pubmed · papers · Jul 2 2025
Image registration is a key technique in image processing and analysis. Because of its high complexity, traditional registration frameworks often fail to meet real-time demands in practice. To address this, several deep learning networks for registration have been proposed, both supervised and unsupervised. Unsupervised networks rely on large amounts of training data to minimize specific loss functions, but the lack of physical constraints results in lower accuracy than supervised networks, while supervised networks face two major challenges in medical image registration: physical mesh folding and the scarcity of labeled training data. To address these two challenges, we propose a novel few-shot learning framework for image registration. The framework contains two parts: a random diffeomorphism generator (RDG) and a supervised few-shot learning network for registration. By randomly generating a complex vector field, the RDG produces a series of diffeomorphisms; with their help, only a few images (in theory, a single image suffices) are needed to generate a series of labels for training the supervised few-shot network. Regarding the elimination of physical mesh folding, the loss function of the proposed network need only ensure the smoothness of the deformation; no other control for mesh folding elimination is necessary. Experimental results indicate that the proposed method outperforms other existing learning-based methods at eliminating physical mesh folding. Our code is available at https://github.com/weijunping111/RDG-TMI.git.
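
One plausible RDG-style construction, sketched below: smooth a random velocity field with a Gaussian kernel, then integrate it by scaling-and-squaring to obtain a diffeomorphic displacement; warping an image with the result yields a labeled training pair. Illustrative only; the paper's generator may differ.

```python
import torch
import torch.nn.functional as F

def warp(img: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample `img` at positions displaced by `disp` (x/y channels, in pixels)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, h, w, 2)
    offs = torch.stack((disp[:, 0] * 2 / (w - 1), disp[:, 1] * 2 / (h - 1)), dim=-1)
    return F.grid_sample(img, base + offs, align_corners=True)

def random_diffeomorphism(h=128, w=128, steps=6, sigma=8.0, scale=2.0) -> torch.Tensor:
    """Random smooth stationary velocity field integrated by scaling-and-squaring."""
    v = scale * torch.randn(1, 2, h, w)
    k = int(4 * sigma) | 1                                 # odd Gaussian kernel size
    x = torch.arange(k, dtype=torch.float32) - k // 2
    g = torch.exp(-x ** 2 / (2 * sigma ** 2)); g = g / g.sum()
    v = F.conv2d(v, g.view(1, 1, 1, k).repeat(2, 1, 1, 1), padding=(0, k // 2), groups=2)
    v = F.conv2d(v, g.view(1, 1, k, 1).repeat(2, 1, 1, 1), padding=(k // 2, 0), groups=2)
    phi = v / (2 ** steps)                                 # scaling ...
    for _ in range(steps):                                 # ... and squaring (composition)
        phi = phi + warp(phi, phi)
    return phi                                             # dense displacement (1, 2, h, w)
```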

Mohammadreza Amirian, Michael Bach, Oscar Jimenez-del-Toro, Christoph Aberle, Roger Schaer, Vincent Andrearczyk, Jean-Félix Maestrati, Maria Martin Asiain, Kyriakos Flouris, Markus Obmann, Clarisse Dromain, Benoît Dufour, Pierre-Alexandre Alois Poletti, Hendrik von Tengg-Kobligk, Rolf Hügli, Martin Kretzschmar, Hatem Alkadhi, Ender Konukoglu, Henning Müller, Bram Stieltjes, Adrien Depeursinge

arxiv · preprint · Jul 2 2025
Artificial intelligence (AI) has introduced numerous opportunities for human assistance and task automation in medicine, but it generalizes poorly in the presence of shifts in the data distribution. In AI-based computed tomography (CT) analysis, significant distribution shifts can be caused by changes in scanner manufacturer, reconstruction technique or dose. AI harmonization techniques address this problem by reducing the distribution shifts caused by different acquisition settings. This paper presents an open-source benchmark dataset containing CT scans of an anthropomorphic phantom acquired with various scanners and settings, whose purpose is to foster the development of AI harmonization techniques; using a phantom removes the variability attributable to inter- and intra-patient variation. The dataset includes 1378 image series acquired with 13 scanners from 4 manufacturers across 8 institutions using a harmonized protocol, as well as several acquisition doses. Additionally, we present a methodology, baseline results and open-source code for assessing image- and feature-level stability and liver tissue classification, promoting the development of AI harmonization strategies.
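
A feature-level stability check one could run on such a phantom dataset, sketched under the assumption that the same radiomics features are extracted from every series of a fixed phantom region: the coefficient of variation across scanners and settings.

```python
import numpy as np

def feature_cv(features: np.ndarray) -> np.ndarray:
    """features: (n_series, n_features) for one phantom region; lower CV = more stable."""
    mean = features.mean(axis=0)
    std = features.std(axis=0, ddof=1)
    return std / np.maximum(np.abs(mean), 1e-12)
```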