BrainAGE latent representation clustering is associated with longitudinal disease progression in early-onset Alzheimer's disease.

Manouvriez D, Kuchcinski G, Roca V, Sillaire AR, Bertoux M, Delbeuck X, Pruvo JP, Lecerf S, Pasquier F, Lebouvier T, Lopes R

PubMed | Jul 3, 2025
Early-onset Alzheimer's disease (EOAD) is a clinically, genetically, and pathologically heterogeneous condition. Identifying biomarkers related to disease progression is crucial for advancing clinical trials and improving therapeutic strategies. This study aimed to differentiate EOAD patients with varying rates of progression using a Brain Age Gap Estimation (BrainAGE)-based clustering algorithm applied to structural magnetic resonance images (MRI). A retrospective analysis was conducted on a longitudinal cohort of 142 participants who met the criteria for early-onset probable Alzheimer's disease. Participants were assessed clinically, neuropsychologically, and with structural MRI at baseline and annually for 6 years. A BrainAGE deep learning model pre-trained on 3,227 3D T1-weighted MRIs of healthy subjects was used to extract encoded MRI representations at baseline, and k-means clustering was performed on these encoded representations to stratify the population. The resulting clusters were analyzed for disease severity, cognitive phenotype, and brain volumes at baseline and longitudinally. The optimal number of clusters was 2. The clusters differed significantly in BrainAGE scores (5.44 [±8] vs 15.25 [±5] years, p < 0.001). The high-BrainAGE cluster was associated with older age (p = 0.001), a higher proportion of female patients (p = 0.005), and greater disease severity, as reflected in Mini Mental State Examination (MMSE) scores (19.32 [±4.62] vs 14.14 [±6.93], p < 0.001) and gray matter volumes (0.35 [±0.03] vs 0.32 [±0.02], p < 0.001). Longitudinal analyses revealed significant differences in disease progression (MMSE slope of -2.35 [±0.15] vs -3.02 [±0.25] points/year, p = 0.02; CDR slope of 1.58 [±0.10] vs 1.99 [±0.16] points/year, p = 0.03). K-means clustering of BrainAGE encoded representations thus stratified EOAD patients by rate of disease progression, underscoring the potential of BrainAGE as a biomarker for better understanding and managing EOAD.
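
As a concrete illustration of the stratification step, a minimal sketch of k-means over encoded representations follows, with the cluster count chosen by silhouette score. The 128-dimensional latents, the candidate range of k, and the silhouette criterion are assumptions for illustration; the abstract only states that k-means was applied and that two clusters were optimal.

```python
# Minimal sketch: k-means on BrainAGE latent representations, with k chosen
# by silhouette score. The latent dimensionality (128) and candidate range
# of k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
latents = rng.normal(size=(142, 128))  # stand-in for encoded baseline MRIs

best_k, best_score = 2, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(latents)
    score = silhouette_score(latents, labels)
    if score > best_score:
        best_k, best_score = k, score

clusters = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(latents)
print(f"optimal k = {best_k}; cluster sizes = {np.bincount(clusters)}")
```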

CT-Mamba: A hybrid convolutional State Space Model for low-dose CT denoising.

Li L, Wei W, Yang L, Zhang W, Dong J, Liu Y, Huang H, Zhao W

PubMed | Jul 3, 2025
Low-dose CT (LDCT) significantly reduces the radiation dose received by patients; however, dose reduction introduces additional noise and artifacts. Denoising methods based on convolutional neural networks (CNNs) currently face limitations in long-range modeling capability, while Transformer-based denoising methods, although capable of powerful long-range modeling, suffer from high computational complexity. Furthermore, the denoised images predicted by deep learning-based techniques inevitably differ in noise distribution from normal-dose CT (NDCT) images, which can also affect final image quality and diagnostic outcomes. This paper proposes CT-Mamba, a hybrid convolutional State Space Model for LDCT image denoising. The model combines the local feature extraction advantages of CNNs with Mamba's strength in capturing long-range dependencies, enabling it to capture both local details and global context. Additionally, we introduce an innovative spatially coherent Z-shaped scanning scheme to ensure spatial continuity between adjacent pixels in the image. We design a Mamba-driven deep noise power spectrum (NPS) loss function to guide model training, ensuring that the noise texture of denoised LDCT images closely resembles that of NDCT images, thereby enhancing overall image quality and diagnostic value. Experimental results demonstrate that CT-Mamba reduces noise in LDCT images effectively, preserves detail, optimizes noise texture distribution, and exhibits higher statistical similarity with the radiomics features of NDCT images. The proposed CT-Mamba thus performs strongly in LDCT denoising and holds promise as a representative approach for applying the Mamba framework to this task.
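
To convey the spectral-matching idea behind an NPS-style loss, here is a hedged sketch. The paper's actual loss is Mamba-driven and more elaborate; estimating residual noise by subtracting a smoothed, approximately noise-free reference is an assumption made purely for illustration.

```python
# Illustrative sketch of a noise-power-spectrum (NPS) matching loss: compare
# the 2D power spectra of the estimated noise in the denoised output and in
# the NDCT reference. This is not the paper's Mamba-driven loss, only the
# underlying spectral-matching idea.
import torch

def power_spectrum(noise: torch.Tensor) -> torch.Tensor:
    # 2D power spectrum of a batch of noise patches (B, 1, H, W).
    return torch.abs(torch.fft.fft2(noise)) ** 2 / noise[0, 0].numel()

def nps_loss(denoised, ndct, smooth_ref):
    # Noise estimates: image minus a smoothed reference of the same anatomy
    # (an assumption for this sketch).
    nps_pred = power_spectrum(denoised - smooth_ref).mean(dim=0)
    nps_target = power_spectrum(ndct - smooth_ref).mean(dim=0)
    return torch.mean((nps_pred - nps_target) ** 2)

# Toy usage with random tensors standing in for CT patches:
d, n, s = (torch.randn(4, 1, 64, 64) for _ in range(3))
print(nps_loss(d, n, s))
```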

A Pan-Organ Vision-Language Model for Generalizable 3D CT Representations.

Beeche C, Kim J, Tavolinejad H, Zhao B, Sharma R, Duda J, Gee J, Dako F, Verma A, Morse C, Hou B, Shen L, Sagreiya H, Davatzikos C, Damrauer S, Ritchie MD, Rader D, Long Q, Chen T, Kahn CE, Chirinos J, Witschey WR

PubMed | Jul 3, 2025
Generalizable foundation models for computed tomography (CT) are emerging AI tools anticipated to vastly improve clinical workflow efficiency. However, existing models are typically trained within narrow imaging contexts, with limited anatomical coverage, contrast settings, and clinical indications. These constraints reduce their ability to generalize across the broad spectrum of real-world presentations encountered in volumetric CT data. We introduce Percival, a vision-language foundation model trained on over 400,000 CT volumes and paired radiology reports from more than 50,000 participants enrolled in the Penn Medicine BioBank. Percival employs a dual-encoder architecture with a transformer-based image encoder and a BERT-style language encoder, aligned via symmetric contrastive learning. Percival was validated on imaging data from over 20,000 participants, encompassing over 100,000 CT volumes. In image-text recall tasks, Percival outperforms models trained on limited anatomical windows. To assess Percival's clinical knowledge, we evaluated its biologic, phenotypic, and prognostic relevance using laboratory-wide and phenome-wide association studies and survival analyses, uncovering a rich latent structure aligned with physiological measurements and disease phenotypes.
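
The symmetric contrastive alignment described here follows the familiar CLIP-style recipe; a minimal sketch is below. The temperature value and embedding dimensionality are assumptions, not details taken from the paper.

```python
# Minimal sketch of symmetric contrastive alignment between image and text
# embeddings: normalize both, build a scaled similarity matrix, and apply
# cross-entropy along both axes so each volume matches its paired report.
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)             # unit-norm image embeddings
    txt = F.normalize(txt_emb, dim=-1)             # unit-norm text embeddings
    logits = img @ txt.t() / temperature           # (N, N) cosine similarities
    targets = torch.arange(img.size(0), device=img.device)
    loss_i = F.cross_entropy(logits, targets)      # image -> matching report
    loss_t = F.cross_entropy(logits.t(), targets)  # report -> matching image
    return (loss_i + loss_t) / 2

# Toy usage with random embeddings standing in for encoder outputs:
print(symmetric_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256)))
```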

Robust Multi-contrast MRI Medical Image Translation via Knowledge Distillation and Adversarial Attack.

Zhao X, Liang F, Long C, Yuan Z, Zhao J

PubMed | Jul 2, 2025
Medical image translation is of great value but difficult: the style (noise pattern) must change while the anatomical content remains invariant. Various deep learning methods, including the mainstream GAN, Transformer, and diffusion models, have been developed to learn the multi-modal mapping that yields translated images, but generator outputs are still far from perfect for medical images. In this paper, we propose a robust multi-contrast translation framework for MRI medical images with knowledge distillation and adversarial attack, which can be integrated with any generator. The additional refinement network consists of teacher and student modules with similar structures but different inputs. Unlike existing knowledge distillation works, our teacher module is designed as a registration network with more inputs, allowing it to learn the noise distribution and further refine the translated results during training. The knowledge is then distilled to the student module to ensure that better translation results are generated. We also introduce an adversarial attack module before the generator; this black-box attacker generates meaningful perturbations and adversarial examples throughout the training process. Our model has been tested on two public MRI medical image datasets under different types and levels of perturbation, and each designed module is verified by an ablation study. Extensive experiments and comparison with SOTA methods demonstrate our model's superior refinement quality and robustness.
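
To make the teacher-student refinement concrete, here is a hedged toy sketch: the teacher refines the translated image using extra inputs (here simply a reference image), while the student sees only the translated image and learns to match the teacher. The module definitions are placeholders; the paper's teacher is a registration network, which is not reproduced here.

```python
# Toy teacher-student refinement: the teacher receives more inputs than the
# student, and the student is trained to match the teacher's refined output.
import torch
import torch.nn as nn

class Refiner(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, *xs):
        return self.net(torch.cat(xs, dim=1))

teacher = Refiner(in_ch=2)   # translated image + extra reference input
student = Refiner(in_ch=1)   # translated image only

translated = torch.randn(4, 1, 64, 64)  # stand-in generator output
reference = torch.randn(4, 1, 64, 64)   # stand-in extra teacher input

with torch.no_grad():
    refined_teacher = teacher(translated, reference)
refined_student = student(translated)
distill_loss = nn.functional.l1_loss(refined_student, refined_teacher)
print(distill_loss)
```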

A novel few-shot learning framework for supervised diffeomorphic image registration network.

Chen K, Han H, Wei J, Zhang Y

PubMed | Jul 2, 2025
Image registration is a key technique in image processing and analysis. Due to its high complexity, traditional registration frameworks often fail to meet real-time demands in practice. To address this, several deep learning networks for registration have been proposed, both supervised and unsupervised. Unsupervised networks rely on large amounts of training data to minimize specific loss functions, but the lack of physical information constraints results in lower accuracy than supervised networks. Supervised networks, however, face two major challenges in medical image registration: physical mesh folding and the scarcity of labeled training data. To address both challenges, we propose a novel few-shot learning framework for image registration. The framework has two parts: a random diffeomorphism generator (RDG) and a supervised few-shot learning network for image registration. By randomly generating a complex vector field, the RDG produces a series of diffeomorphisms. With the diffeomorphisms generated by the RDG, only a few images (in theory, a single image suffices) are needed to generate a series of labels for training the supervised few-shot learning network. To eliminate physical mesh folding, the loss function of the proposed network only needs to ensure the smoothness of the deformation; no other mesh-folding control is necessary. Experimental results indicate that the proposed method outperforms other existing learning-based methods in eliminating physical mesh folding. Our code is available at https://github.com/weijunping111/RDG-TMI.git.
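
A hedged sketch of the label-generation idea follows: a random smooth stationary velocity field is integrated by scaling and squaring into a diffeomorphic deformation, which can warp a single image into a synthetic (warped, original, deformation) training triple. Field sizes, smoothing, and integration steps are assumptions; the paper's RDG construction may differ.

```python
# Sketch of random-diffeomorphism label generation: smooth a random velocity
# field, integrate it by scaling and squaring, and warp one image to build a
# synthetic supervised training example.
import torch
import torch.nn.functional as F

def random_velocity(shape=(1, 2, 64, 64), strength=2.0):
    v = torch.randn(shape) * strength
    return F.avg_pool2d(F.avg_pool2d(v, 9, 1, 4), 9, 1, 4)  # smooth the field

def warp(img, phi):
    # Bilinear warp of img by pixel displacement field phi (B, 2, H, W).
    B, _, H, W = phi.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    grid = base + phi.permute(0, 2, 3, 1) * 2 / torch.tensor([W, H])
    return F.grid_sample(img, grid, align_corners=True)

def integrate(v, steps=6):
    # Scaling and squaring: approximate phi = exp(v) by repeated composition.
    phi = v / (2 ** steps)
    for _ in range(steps):
        phi = phi + warp(phi, phi)
    return phi

phi = integrate(random_velocity())
image = torch.rand(1, 1, 64, 64)
warped = warp(image, phi)  # (warped, image, phi) forms one synthetic label
```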

3D MedDiffusion: A 3D Medical Latent Diffusion Model for Controllable and High-quality Medical Image Generation.

Wang H, Liu Z, Sun K, Wang X, Shen D, Cui Z

PubMed | Jul 2, 2025
The generation of medical images presents significant challenges due to their high-resolution, three-dimensional nature. Existing methods often yield suboptimal performance in generating high-quality 3D medical images, and there is currently no universal generative framework for medical imaging. In this paper, we introduce the 3D Medical Latent Diffusion (3D MedDiffusion) model for controllable, high-quality 3D medical image generation. 3D MedDiffusion incorporates a novel, highly efficient Patch-Volume Autoencoder that compresses medical images into a latent space through patch-wise encoding and recovers them back into image space through volume-wise decoding. Additionally, we design a new noise estimator to capture both local details and global structural information during the diffusion denoising process. 3D MedDiffusion can generate finely detailed, high-resolution images (up to 512×512×512) and adapts effectively to various downstream tasks, as it is trained on large-scale datasets covering CT and MRI modalities and different anatomical regions (from head to leg). Experimental results demonstrate that 3D MedDiffusion surpasses state-of-the-art methods in generative quality and exhibits strong generalizability across tasks such as sparse-view CT reconstruction, fast MRI reconstruction, and data augmentation for segmentation and classification. Source code and checkpoints are available at https://github.com/ShanghaiTech-IMPACT/3D-MedDiffusion.
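
Below is a toy sketch of the patch-wise encode / volume-wise decode pattern named above: the volume is split into non-overlapping patches that are encoded independently, the latents are reassembled on the patch grid, and a decoder runs on the full latent volume. All shapes and layers are illustrative assumptions, not the paper's architecture.

```python
# Toy patch-volume autoencoder: encode per patch, decode per volume.
import torch
import torch.nn as nn

class PatchVolumeAE(nn.Module):
    def __init__(self, patch=32, latent=8):
        super().__init__()
        self.patch = patch
        self.encoder = nn.Sequential(  # one patch -> small latent cube
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent, 4, stride=2, padding=1))
        self.decoder = nn.Sequential(  # full latent volume -> image volume
            nn.ConvTranspose3d(latent, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):
        p = self.patch
        B, C, D, H, W = x.shape
        # Split into non-overlapping patches and encode each independently.
        patches = (x.unfold(2, p, p).unfold(3, p, p).unfold(4, p, p)
                    .reshape(-1, C, p, p, p))
        z = self.encoder(patches)
        # Reassemble the latent patch grid into one latent volume.
        g, lp = D // p, z.shape[-1]
        z = (z.reshape(B, g, g, g, -1, lp, lp, lp)
              .permute(0, 4, 1, 5, 2, 6, 3, 7)
              .reshape(B, -1, g * lp, g * lp, g * lp))
        return self.decoder(z)

model = PatchVolumeAE()
recon = model(torch.rand(1, 1, 64, 64, 64))
print(recon.shape)  # torch.Size([1, 1, 64, 64, 64])
```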

Habitat-Derived Radiomic Features of Planning Target Volume to Determine the Local Recurrence After Radiotherapy in Patients with Gliomas: A Feasibility Study.

Wang Y, Lin L, Hu Z, Wang H

PubMed | Jul 2, 2025
To develop a machine learning-based model for predicting local recurrence after radiotherapy in patients with gliomas, with interpretability provided by SHapley Additive exPlanations (SHAP). We retrospectively enrolled 145 patients with pathologically confirmed gliomas who underwent brain radiotherapy (training:validation = 102:43). Physiological and structural magnetic resonance imaging (MRI) were used to define habitat regions. A total of 2,153 radiomic features were extracted from each MRI sequence in each habitat region. Relief and Recursive Feature Elimination (RFE) were used for radiomic feature selection. Support vector machine (SVM) and random forest models incorporating clinical and radiomic features were constructed for each habitat region, and the SHAP method was used to explain the predictive models. The Physiological_Habitat1 (e-THRIVE)_radiomic SVM model achieved the best AUCs among the radiomic models: 0.703 (95% CI 0.569-0.836) in the training cohort and 0.670 (95% CI 0.623-0.717) in the validation cohort. SHAP summary and force plots were used to interpret this best-performing model. Radiomic features derived from Physiological_Habitat1 (e-THRIVE) were predictive of local recurrence in glioma patients following radiotherapy, and the SHAP method provided insight into how the tumor microenvironment might influence the effectiveness of radiotherapy in postoperative gliomas.
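
As a hedged illustration of this pipeline, the sketch below chains recursive feature elimination (RFE), an SVM, and SHAP on synthetic placeholder features. Relief-based selection (also used in the paper) has no scikit-learn implementation and is omitted; all sizes are assumptions.

```python
# Sketch of the modeling pipeline: RFE feature selection, an SVM classifier,
# and SHAP explanations, on synthetic stand-ins for habitat radiomic features.
import numpy as np
import shap
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(102, 50))    # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=102)  # placeholder recurrence labels

# Select 10 features with RFE wrapped around a linear SVM.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X, y)
X_sel = selector.transform(X)

model = SVC(kernel="rbf", probability=True).fit(X_sel, y)

# Model-agnostic SHAP values for a few cases, against a sampled background.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X_sel, 20))
shap_values = explainer.shap_values(X_sel[:5])  # per-feature contributions
```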

Heterogeneity Habitats-Derived Radiomics of Gd-EOB-DTPA-Enhanced MRI for Predicting Proliferation of Hepatocellular Carcinoma.

Sun S, Yu Y, Xiao S, He Q, Jiang Z, Fan Y

PubMed | Jul 2, 2025
To construct and validate an optimal model for preoperative prediction of proliferative hepatocellular carcinoma (HCC) based on habitat-derived radiomic features of Gd-EOB-DTPA-enhanced MRI. A total of 187 patients who underwent Gd-EOB-DTPA-enhanced MRI before curative partial hepatectomy were divided into a training cohort (n=130; 50 proliferative and 80 nonproliferative HCC) and a validation cohort (n=57; 25 proliferative and 32 nonproliferative HCC). Habitat subregions were generated by Gaussian Mixture Model (GMM) clustering of all tumor voxels to identify similar subregions within the tumor. Radiomic features were extracted from each tumor subregion in the arterial phase (AP) and hepatobiliary phase (HBP). Independent-samples t tests, Pearson correlation coefficients, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm were used to select the optimal subregion features. After feature integration and selection, machine-learning classification models were constructed using the scikit-learn library, and Receiver Operating Characteristic (ROC) curves and the DeLong test were used to compare their performance in predicting proliferative HCC. The optimal number of clusters was 3, based on the silhouette coefficient. From the AP, HBP, and combined AP-and-HBP habitat (subregions 1, 2, 3) radiomic features, 20, 12, and 23 features were retained, respectively, and three models were constructed with these selected features. ROC analysis and the DeLong test showed that the Naive Bayes model of AP and HBP habitat radiomics (AP-HBP-Hab-Rad) achieved the best performance. Finally, a combined model using the Light Gradient Boosting Machine (LightGBM) algorithm, incorporating AP-HBP-Hab-Rad, age, and alpha-fetoprotein (AFP), was identified as the optimal model for predicting proliferative HCC. For the training and validation cohorts, its accuracy, sensitivity, specificity, and AUC were 0.923, 0.880, 0.950, and 0.966 (95% CI: 0.937-0.994) and 0.825, 0.680, 0.937, and 0.877 (95% CI: 0.786-0.969), respectively. In the validation cohort, the combined model's AUC was statistically higher than those of the other models (P<0.01). A combined model incorporating AP-HBP-Hab-Rad, serum AFP, and age using the LightGBM algorithm can satisfactorily predict proliferative HCC preoperatively.
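
A minimal sketch of the two steps named above follows: GMM clustering of tumor voxels into three habitat subregions, then a LightGBM classifier combining habitat radiomic features with age and AFP. All arrays are synthetic stand-ins, and the per-voxel feature set is an assumption.

```python
# Step 1: GMM habitat clustering of voxels; step 2: LightGBM combined model.
import numpy as np
from sklearn.mixture import GaussianMixture
from lightgbm import LGBMClassifier

rng = np.random.default_rng(2)

# Step 1: cluster per-voxel features (e.g., AP/HBP intensities) into 3 habitats.
voxels = rng.normal(size=(5000, 2))
habitats = GaussianMixture(n_components=3, random_state=0).fit_predict(voxels)

# Step 2: combine selected habitat radiomics with age and AFP in LightGBM.
X = np.hstack([
    rng.normal(size=(130, 23)),          # stand-in AP+HBP habitat radiomics
    rng.normal(65, 10, size=(130, 1)),   # stand-in age
    rng.lognormal(3, 1, size=(130, 1)),  # stand-in AFP
])
y = rng.integers(0, 2, size=130)         # stand-in proliferative labels
clf = LGBMClassifier(n_estimators=200).fit(X, y)
print(clf.predict_proba(X[:3]))
```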