
Chen W, McMillan AB

pubmed logopapers · Jun 2 2025
This paper introduces an efficient sub-model ensemble framework aimed at enhancing the interpretability of medical deep learning models, thus increasing their clinical applicability. By generating uncertainty maps, this framework enables end-users to evaluate the reliability of model outputs. We developed a strategy to generate diverse models from a single well-trained checkpoint, facilitating the training of a model family. This involves producing multiple outputs from a single input, fusing them into a final output, and estimating uncertainty based on output disagreements. Implemented using U-Net and UNETR models for segmentation and synthesis tasks, this approach was tested on CT body segmentation and MR-CT synthesis datasets. It achieved a mean Dice coefficient of 0.814 in segmentation and a Mean Absolute Error of 88.17 HU in synthesis, improved from 89.43 HU by pruning. Additionally, the framework was evaluated under image corruption and data undersampling, maintaining correlation between uncertainty and error, which highlights its robustness. These results suggest that the proposed approach not only maintains the performance of well-trained models but also enhances interpretability through effective uncertainty estimation, applicable to both convolutional and transformer models in a range of imaging tasks.
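The fuse-and-disagree step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the sub-model outputs for one input are already stacked, fuses them by averaging, and takes the per-voxel standard deviation as the uncertainty map, so regions where sub-models disagree light up.

```python
import numpy as np

def fuse_with_uncertainty(outputs):
    """Fuse sub-model outputs and estimate voxel-wise uncertainty.

    outputs: array-like of shape (n_models, ...) holding one prediction
    per sub-model for the same input. Returns (fused, uncertainty):
    the mean prediction and the per-voxel standard deviation across
    sub-models, so voxels where the sub-models disagree get high
    uncertainty.
    """
    outputs = np.asarray(outputs, dtype=np.float64)
    fused = outputs.mean(axis=0)
    uncertainty = outputs.std(axis=0)
    return fused, uncertainty
```

The same fusion applies whether the outputs are segmentation probabilities or synthesized CT intensities, which is why the approach transfers between the two tasks.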

Nanammal V, Rajalakshmi S, Remya V, Ranjith S

pubmed logopapers · Jun 2 2025
In modern healthcare, telemedicine, health records, and AI-driven diagnostics depend on medical image watermarking to secure chest X-rays for pneumonia diagnosis, ensuring data integrity, confidentiality, and authenticity. A 2024 study found that over 70% of healthcare institutions had faced medical image data breaches, yet current methods falter in imperceptibility, robustness against attacks, and deployment efficiency. ViTU-Net integrates several techniques to address these challenges in medical image security and analysis. Its core component, a Vision Transformer (ViT) encoder, efficiently captures global dependencies and spatial information, while a U-Net decoder enhances image reconstruction; both leverage the Adaptive Hierarchical Spatial Attention (AHSA) module for improved spatial processing. A patch-based LSB embedding mechanism embeds reversible fragile watermarks within each patch of the segmented non-diagnostic region (RONI), guided dynamically by adaptive masks derived from the attention mechanism, minimizing impact on diagnostic accuracy while making full use of the available embedding capacity. A hybrid meta-heuristic optimization algorithm, TuniBee Fusion, dynamically tunes the watermarking parameters, balancing exploration and exploitation to improve watermarking efficiency and robustness. Advanced cryptographic techniques, including SHA-512 hashing and AES encryption, further secure the model, ensuring the authenticity and confidentiality of watermarked medical images. A PSNR of 60.7 dB, an NCC of 0.9999, and an SSIM of 1.00 underscore its effectiveness in preserving image quality, security, and diagnostic accuracy, and robustness analysis against a spectrum of attacks validates ViTU-Net's resilience in real-world scenarios.
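The core LSB-in-RONI idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not ViTU-Net: the paper derives its RONI mask from attention maps and embeds per patch, whereas here the mask is simply passed in and bits are written into the least significant bit of the masked pixels.

```python
import numpy as np

def embed_lsb(image, watermark_bits, roni_mask):
    """Embed watermark bits into the least significant bit (LSB) of
    pixels inside the region of non-interest (RONI).

    image: uint8 array; roni_mask: boolean array, True where embedding
    is allowed (non-diagnostic region); watermark_bits: iterable of 0/1.
    Raises if the RONI cannot hold all the bits.
    """
    bits = np.asarray(list(watermark_bits), dtype=np.uint8)
    idx = np.flatnonzero(roni_mask.ravel())
    if bits.size > idx.size:
        raise ValueError("RONI too small for watermark")
    out = image.copy().ravel()
    # Clear each target pixel's LSB, then OR in the watermark bit.
    out[idx[:bits.size]] = (out[idx[:bits.size]] & 0xFE) | bits
    return out.reshape(image.shape)

def extract_lsb(image, n_bits, roni_mask):
    """Recover n_bits watermark bits from the RONI pixels' LSBs."""
    idx = np.flatnonzero(roni_mask.ravel())
    return image.ravel()[idx[:n_bits]] & 1
```

Because only the LSB of non-diagnostic pixels changes, each modified pixel moves by at most 1 gray level, which is why such schemes can reach very high PSNR values.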

Zhao, Y., Alizadeh, E., Taha, H. B., Liu, Y., Xu, M., Mahoney, J. M., Li, S.

biorxiv logopreprint · Jun 2 2025
Deep learning models trained on spatial omics data uncover complex patterns and relationships among cells, genes, and proteins in a high-dimensional space. State-of-the-art in silico spatial multi-cell gene expression methods using histological images of tissue stained with hematoxylin and eosin (H&E) allow us to characterize cellular heterogeneity. We developed a vision transformer (ViT) framework, named SPiRiT, that maps histological signatures to spatial single-cell transcriptomic signatures. SPiRiT predicts single-cell spatial gene expression from matched H&E image tiles of human breast cancer and whole mouse pup, evaluated against Xenium (10x Genomics) datasets. Importantly, SPiRiT incorporates rigorous strategies to ensure reproducibility and robustness of predictions and provides trustworthy interpretation through attention-based model explainability. Model interpretation revealed the image regions and attention details SPiRiT uses to predict the expression of genes such as invasive-cancer-cell markers. In an apples-to-apples comparison with ST-Net, SPiRiT improved predictive accuracy by 40%, and its gene predictions and expression levels were highly consistent with the tumor region annotation. In summary, SPiRiT highlights the feasibility of inferring spatial single-cell gene expression from tissue morphology across multiple species.

Gersey ZC, Zenkin S, Mamindla P, Amjadzadeh M, Ak M, Plute T, Peddagangireddy V, Abdallah H, Muthiah N, Wang EW, Snyderman C, Gardner PA, Colen RR, Zenonos GA

pubmed logopapers · Jun 2 2025
Chordomas are rare, aggressive tumors of notochordal origin, commonly affecting the spine and skull base. Skull Base Chordomas (SBCs) comprise approximately 39% of cases, with an incidence of less than 1 per million annually in the U.S. Prognosis remains poor due to resistance to chemotherapy, often requiring extensive surgical resection and adjuvant radiotherapy. Current classification methods based on chromosomal deletions are invasive and costly, presenting a need for alternative diagnostic tools. Radiomics allows for non-invasive SBC diagnosis and treatment planning. We developed and validated radiomic-based models using MRI data to predict Overall Survival (OS) and Progression-Free Survival following Surgery (PFSS) in SBC patients. Machine learning classifiers, including eXtreme Gradient Boosting (XGBoost), were employed along with feature selection techniques. Unsupervised clustering identified radiomic-based subgroups, which were correlated with chromosomal deletions and clinical outcomes. Our XGBoost model demonstrated superior predictive performance, achieving an area under the curve (AUC) of 83.33% for OS and 80.36% for PFSS, outperforming other classifiers. Radiomic clustering revealed two SBC groups with differing survival and molecular characteristics, strongly correlating with chromosomal deletion profiles. These findings indicate that radiomics can non-invasively characterize SBC phenotypes and stratify patients by prognosis. Radiomics shows promise as a reliable, non-invasive tool for the prognostication and classification of SBCs, minimizing the need for invasive genetic testing and supporting personalized treatment strategies.

Lian C, Zhou HY, Liang D, Qin J, Wang L

pubmed logopapers · Jun 2 2025
Medical vision-language alignment through cross-modal contrastive learning shows promising performance in image-text matching tasks, such as retrieval and zero-shot classification. However, conventional cross-modal contrastive learning (CLIP-based) methods suffer from suboptimal visual representation capabilities, which also limits their effectiveness in vision-language alignment. In contrast, although models pretrained via multimodal masked modeling struggle with direct cross-modal matching, they excel at visual representation. To address this contradiction, we propose ALTA (ALign Through Adapting), an efficient medical vision-language alignment method that uses only about 8% of the trainable parameters and less than 1/5 of the computational cost required for masked record modeling. ALTA achieves superior performance in vision-language matching tasks like retrieval and zero-shot classification by adapting the pretrained vision model from masked record modeling. Additionally, we integrate temporal-multiview radiograph inputs to enhance the information consistency between radiographs and their corresponding descriptions in reports, further improving the vision-language alignment. Experimental evaluations show that ALTA outperforms the best-performing counterpart by over 4 absolute points in text-to-image retrieval accuracy and approximately 6 absolute points in image-to-text retrieval accuracy. The adaptation of vision-language models during efficient alignment also promotes better vision and language understanding. Code is publicly available at https://github.com/DopamineLcy/ALTA.
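The "adapt a frozen backbone with a few trainable parameters" idea behind such methods is usually realized with bottleneck adapters. The sketch below is a generic illustration under stated assumptions, not ALTA's architecture: a down-projection, ReLU, up-projection, and residual connection, where only the two small matrices would be trained (the ~8% figure in the abstract is specific to that paper; the dimensions here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

class Adapter:
    """Bottleneck adapter: down-project, ReLU, up-project, add residual.

    Only `down` and `up` would be trained while the backbone stays
    frozen, which is how adapter-style tuning keeps the trainable
    parameter count to a few percent of the full model.
    """
    def __init__(self, dim, bottleneck):
        self.down = rng.standard_normal((dim, bottleneck)) * 0.01
        # Zero-init the up-projection so the adapter starts as identity.
        self.up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        h = np.maximum(x @ self.down, 0.0)  # ReLU bottleneck
        return x + h @ self.up              # residual connection
```

Zero-initializing the up-projection is a common trick: at the start of training the adapter passes features through unchanged, so the pretrained behavior is preserved.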

Dou L, Jiang J, Yao H, Zhang B, Wang X

pubmed logopapers · Jun 2 2025
<b><i>Background:</i></b> The molecular mechanisms driving hepatocellular carcinoma (HCC) and predicting its chemotherapy sensitivity remain unclear; identifying the key biomarkers involved is therefore essential for early diagnosis and treatment of HCC. <b><i>Method:</i></b> We collected and processed computed tomography (CT) and clinical data from 116 patients with autoimmune hepatitis (AIH) or HCC seen at our hospital's Liver Cancer Center. We then extracted important imaging features and correlated them with mitochondria-related genes using machine learning techniques, including multihead attention networks, lasso regression, principal component analysis (PCA), and support vector machines (SVM). These genes were integrated into radiomics signature models to explore their role in disease progression. We further correlated these results with clinical variables to screen for driver genes and to evaluate how well key genes predict chemotherapy sensitivity in liver cancer (LC) patients. Finally, qPCR was used to validate the expression of the candidate gene in patient samples. <b><i>Results:</i></b> Our attention networks identified disease regions in medical images with 97% accuracy and an AUC of 94%. We extracted 942 imaging features, identifying five key features through lasso regression that accurately differentiate AIH from HCC. Transcriptome analysis revealed 132 upregulated and 101 downregulated genes in AIH, with 45 significant genes identified by XGBoost. In the HCC analysis, PCA and random forest highlighted 11 key features. Among mitochondrial genes, <i>SLC25A42</i> correlated positively with imaging features of normal tissue but negatively with those of cancerous tissue and was identified as a driver gene. Low expression of <i>SLC25A42</i> was associated with chemotherapy sensitivity in HCC patients.
<b><i>Conclusions:</i></b> Machine learning modeling combined with genomic profiling provides a promising approach to identifying the driver gene <i>SLC25A42</i> in LC, which may help improve diagnostic accuracy and chemotherapy sensitivity prediction for this disease.

Tang W, Yang Z

pubmed logopapers · Jun 2 2025
Deep learning-based disease grading technologies facilitate timely medical intervention due to their high efficiency and accuracy. Recent advancements have enhanced grading performance by incorporating the ordinal relationships of disease labels. However, existing methods often assume the same probability distribution of disease labels for all instances within a category, overlooking per-instance variation, and the hyperparameters of these distributions are typically set empirically, which may not reflect the true distribution. To address these limitations, we propose a disease grading network utilizing a sample-aware asymmetric Gaussian label distribution, termed DGN-AGLD. This approach includes a variance predictor designed to learn and predict the parameters that control the asymmetry of the Gaussian distribution, enabling distinct label distributions within the same category; the module can be seamlessly integrated into standard deep learning networks. Experimental results on four disease datasets validate the effectiveness and superiority of the proposed method, particularly on the IDRiD dataset, where it achieves a diabetic retinopathy grading accuracy of 77.67%. Furthermore, our method extends to joint disease grading tasks, yielding superior results and demonstrating significant generalization capability. Visual analysis indicates that our method more accurately captures the trend of disease progression by leveraging the asymmetry of the label distribution. Our code is publicly available at https://github.com/ahtwq/AGNet.
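An asymmetric Gaussian label distribution over ordinal grades can be constructed as follows. This is a minimal sketch of the labeling idea, not the DGN-AGLD network: in the paper a per-sample variance predictor outputs the two spread parameters, whereas here they are passed in directly.

```python
import numpy as np

def asymmetric_gaussian_labels(n_grades, true_grade, sigma_left, sigma_right):
    """Soft label distribution over ordinal disease grades.

    Uses a different Gaussian spread below (sigma_left) and above
    (sigma_right) the true grade, then normalizes to sum to 1, so
    the probability mass can lean toward earlier or later grades
    for a given sample.
    """
    g = np.arange(n_grades, dtype=np.float64)
    sigma = np.where(g < true_grade, sigma_left, sigma_right)
    dist = np.exp(-0.5 * ((g - true_grade) / sigma) ** 2)
    return dist / dist.sum()
```

Training against such soft targets (e.g. with a KL-divergence loss) penalizes predictions on the narrow side of the true grade more heavily than on the wide side, which is how the asymmetry encodes the direction of disease progression.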

Liu Y, Sun Z, Liu H

pubmed logopapers · Jun 2 2025
This study aims to develop a CycleGAN-based denoising model to enhance the quality of low-dose PET (LDPET) images, making them as close as possible to standard-dose PET (SDPET) images. Using a Philips Vereos PET/CT system, whole-body fluorine-18 fluorodeoxyglucose (18F-FDG) PET images were acquired from 37 patients to support development of the UR-CycleGAN model. Low-dose data were simulated by reconstructing PET images from a 30-second acquisition, while standard-dose data were reconstructed from a 2.5-minute acquisition. The network was trained in a supervised manner on 13,210 pairs of PET images, and image quality was objectively evaluated using peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Compared to the simulated low-dose data, the denoised PET images generated by our model showed significant improvement, with a clear trend toward SDPET image quality. The proposed method reduces acquisition time by 80% compared to standard-dose imaging while achieving image quality close to SDPET, and it enhances visual detail fidelity, demonstrating the feasibility and practical utility of the model for significantly reducing imaging time while maintaining high image quality.
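The PSNR metric used in evaluations like this one is straightforward to compute. A minimal sketch (SSIM is more involved; in practice one would use a library implementation such as scikit-image's `structural_similarity`):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    (e.g. the standard-dose reconstruction) and a test image (e.g.
    the denoised low-dose image). Higher is better; identical images
    give +inf. data_range is the maximum possible pixel value span.
    """
    ref = np.asarray(reference, dtype=np.float64)
    tst = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Note that `data_range` matters: PET intensities are not bounded to 0-255, so for quantitative images the metric is typically computed with the actual dynamic range of the reference.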

Ren Y, Shi C, Zhu D, Zhou C

pubmed logopapers · Jun 2 2025
Accurate pulmonary nodule detection in CT imaging remains challenging due to fragmented feature integration in conventional deep learning models. This paper proposes SPCF-YOLO, a real-time detection framework that synergizes hierarchical feature fusion with anatomical context modeling. First, a space-to-depth convolution (SPDConv) module preserves fine-grained features in low-resolution images through spatial dimension reorganization. Second, a shared feature pyramid convolution (SFPConv) module dynamically extracts multi-scale contextual information using multi-dilation-rate convolutional layers. A small-object detection layer improves sensitivity to small nodules, in combination with an improved pyramid squeeze attention (PSA) module and an improved contextual transformer (CoTB) module, which enhance global channel dependencies and reduce feature loss. The model achieves 82.8% mean average precision (mAP) and an 82.9% F1 score on LUNA16 at 151 frames per second (improvements of 17.5% and 82.9% over YOLOv8, respectively), demonstrating real-time clinical viability. Cross-modality validation on SIIM-COVID-19 shows a 1.5% improvement, confirming robust generalization.
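The "spatial dimension reorganization" at the heart of SPDConv is the standard space-to-depth rearrangement. A minimal numpy sketch of that rearrangement alone (SPDConv follows it with a non-strided convolution, which is omitted here):

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange an (H, W, C) feature map into (H/block, W/block,
    C*block**2).

    Each block x block spatial neighborhood is moved into the channel
    dimension, so resolution drops without discarding any pixels, in
    contrast to strided convolution or pooling, which throw detail away.
    """
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)  # group each block's pixels together
    return x.reshape(h // block, w // block, c * block * block)
```

Because no information is lost in the rearrangement, the subsequent convolution can still see fine-grained detail, which is why this layout helps with small, low-resolution nodules.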

Yu Y, Xu W, Li X, Zeng X, Su Z, Wang Q, Li S, Liu C, Wang Z, Wang S, Liao L, Zhang J

pubmed logopapers · Jun 2 2025
This study aims to develop a paraspinal muscle-based radiomics model using a machine learning approach and to assess its utility in predicting postoperative outcomes among patients with lumbar degenerative spondylolisthesis (LDS). This retrospective study included 155 patients diagnosed with LDS who underwent single-level posterior lumbar interbody fusion (PLIF) surgery between January 2021 and October 2023. Patients were divided into training and test cohorts in an 8:2 ratio. Radiomics features were extracted from axial T2-weighted lumbar MRI, and seven machine learning models were developed after selecting the most relevant radiomic features using a t-test, Pearson correlation, and Lasso. A combined model was then created by integrating clinical and radiomics features. Model performance was evaluated through ROC analysis, sensitivity, and specificity, while clinical utility was assessed using AUC and decision curve analysis (DCA). The LR model demonstrated robust predictive performance compared to the other machine learning models evaluated. The combined model, integrating clinical and radiomic features, exhibited an AUC of 0.822 (95% CI, 0.761-0.883) in the training cohort and 0.826 (95% CI, 0.766-0.886) in the test cohort, indicating substantial predictive capability. Moreover, the combined model showed superior clinical benefit and increased classification accuracy compared to the radiomics model alone. The findings suggest that the combined model holds promise for accurately predicting postoperative outcomes in patients with LDS and could be valuable in guiding treatment strategies and assisting clinicians in making informed clinical decisions.
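One step of a radiomics feature-selection pipeline like the one described, the Pearson-correlation filter, can be sketched simply. This is an illustration under assumed conventions, not the paper's code: a greedy pass that keeps a feature only if it is not highly correlated with any feature already kept (the t-test and Lasso stages would run before and after it, and the 0.9 threshold is illustrative).

```python
import numpy as np

def drop_correlated(features, threshold=0.9):
    """Greedy Pearson-correlation filter for radiomics features.

    features: (n_samples, n_features) array. Walks the columns in
    order and keeps a column only if its absolute Pearson correlation
    with every already-kept column is below the threshold.
    Returns the list of kept column indices.
    """
    corr = np.abs(np.corrcoef(features, rowvar=False))
    kept = []
    for j in range(features.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept
```

Removing near-duplicate features before Lasso stabilizes the selection: with hundreds of radiomic features from one muscle ROI, many texture measures are almost collinear, and Lasso's choice among them is otherwise arbitrary.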
