
Radiogenomics and Radiomics of Skull Base Chordoma: Classification of Novel Radiomic Subgroups and Prediction of Genetic Signatures and Clinical Outcomes.

Gersey ZC, Zenkin S, Mamindla P, Amjadzadeh M, Ak M, Plute T, Peddagangireddy V, Abdallah H, Muthiah N, Wang EW, Snyderman C, Gardner PA, Colen RR, Zenonos GA

Jun 2, 2025
Chordomas are rare, aggressive tumors of notochordal origin, commonly affecting the spine and skull base. Skull Base Chordomas (SBCs) comprise approximately 39% of cases, with an incidence of less than 1 per million annually in the U.S. Prognosis remains poor due to resistance to chemotherapy, and treatment often requires extensive surgical resection and adjuvant radiotherapy. Current classification methods based on chromosomal deletions are invasive and costly, presenting a need for alternative diagnostic tools. Radiomics allows for non-invasive SBC diagnosis and treatment planning. We developed and validated radiomic-based models using MRI data to predict Overall Survival (OS) and Progression-Free Survival following Surgery (PFSS) in SBC patients. Machine learning classifiers, including eXtreme Gradient Boosting (XGBoost), were employed along with feature selection techniques. Unsupervised clustering identified radiomic-based subgroups, which were correlated with chromosomal deletions and clinical outcomes. Our XGBoost model demonstrated superior predictive performance, achieving an area under the curve (AUC) of 83.33% for OS and 80.36% for PFSS, outperforming the other classifiers. Radiomic clustering revealed two SBC groups with differing survival and molecular characteristics, strongly correlating with chromosomal deletion profiles. These findings indicate that radiomics can non-invasively characterize SBC phenotypes and stratify patients by prognosis. Radiomics shows promise as a reliable, non-invasive tool for the prognostication and classification of SBCs, minimizing the need for invasive genetic testing and supporting personalized treatment strategies.
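
As a rough sketch of the kind of modeling step this abstract describes (not the authors' code), the snippet below fits an XGBoost classifier on a radiomic feature matrix and scores it by AUC; the feature matrix, binarized outcome labels, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch: XGBoost on radiomic features with AUC evaluation.
# X and y are hypothetical stand-ins for MRI-derived features and OS labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))    # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=120)  # placeholder binarized survival labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```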

Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models.

Lian C, Zhou HY, Liang D, Qin J, Wang L

Jun 2, 2025
Medical vision-language alignment through cross-modal contrastive learning shows promising performance in image-text matching tasks, such as retrieval and zero-shot classification. However, conventional cross-modal contrastive learning (CLIP-based) methods suffer from suboptimal visual representation capabilities, which also limits their effectiveness in vision-language alignment. In contrast, although models pretrained via multimodal masked modeling struggle with direct cross-modal matching, they excel in visual representation. To address this contradiction, we propose ALTA (ALign Through Adapting), an efficient medical vision-language alignment method that uses only about 8% of the trainable parameters and less than 1/5 of the computational cost required for masked record modeling. ALTA achieves superior performance in vision-language matching tasks like retrieval and zero-shot classification by adapting the pretrained vision model from masked record modeling. Additionally, we integrate temporal-multiview radiograph inputs to enhance the information consistency between radiographs and their corresponding descriptions in reports, further improving the vision-language alignment. Experimental evaluations show that ALTA outperforms the best-performing counterpart by over 4 absolute percentage points in text-to-image retrieval accuracy and approximately 6 absolute percentage points in image-to-text retrieval accuracy. The adaptation of vision-language models during efficient alignment also promotes better vision and language understanding. Code is publicly available at https://github.com/DopamineLcy/ALTA.
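
To illustrate the general idea of adapting a frozen, masked-pretrained vision backbone with a small trainable head under a CLIP-style contrastive loss, here is a hedged PyTorch sketch; the encoder stand-in, projection sizes, and temperature are assumptions, not ALTA's actual adapter design.

```python
# Illustrative sketch: frozen backbone + small trainable projections,
# trained with a symmetric InfoNCE (CLIP-style) contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a masked-pretrained vision backbone; frozen during alignment.
vision_encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 768))
for p in vision_encoder.parameters():
    p.requires_grad = False

img_proj = nn.Linear(768, 256)  # small trainable adapter on the image side
txt_proj = nn.Linear(512, 256)  # projection for report-text embeddings

def clip_loss(img_feats, txt_feats, temperature=0.07):
    # Symmetric cross-entropy over matched image-report pairs.
    img = F.normalize(img_proj(img_feats), dim=-1)
    txt = F.normalize(txt_proj(txt_feats), dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.shape[0])
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

images = torch.randn(8, 1, 224, 224)  # placeholder radiographs
txt_feats = torch.randn(8, 512)       # placeholder report embeddings
print(clip_loss(vision_encoder(images), txt_feats).item())
```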

Exploring <i>SLC25A42</i> as a Radiogenomic Marker from the Perioperative Stage to Chemotherapy in Hepatitis-Related Hepatocellular Carcinoma.

Dou L, Jiang J, Yao H, Zhang B, Wang X

Jun 2, 2025
<b><i>Background:</i></b> The molecular mechanisms driving hepatocellular carcinoma (HCC) and predicting chemotherapy sensitivity remain unclear; identifying key biomarkers is therefore essential for early diagnosis and treatment of HCC. <b><i>Method:</i></b> We collected and processed computed tomography (CT) and clinical data from 116 patients with autoimmune hepatitis (AIH) and HCC treated at our hospital's Liver Cancer Center. We then identified and extracted important image features and correlated them with mitochondria-related genes using machine learning techniques, including multihead attention networks, lasso regression, principal component analysis (PCA), and support vector machines (SVM). These genes were integrated into radiomics signature models to explore their role in disease progression. We further correlated these results with clinical variables to screen for driver genes and to evaluate the ability of key genes to predict chemotherapy sensitivity in liver cancer (LC) patients. Finally, qPCR was used to validate the expression of the candidate gene in patient samples. <b><i>Results:</i></b> Attention networks identified disease regions in medical images with 97% accuracy and an AUC of 94%. We extracted 942 imaging features and, through lasso regression, identified five key features that accurately differentiate AIH from HCC. Transcriptome analysis revealed 132 upregulated and 101 downregulated genes in AIH, with 45 significant genes identified by XGBoost. In the HCC analysis, PCA and random forest highlighted 11 key features. Among mitochondrial genes, <i>SLC25A42</i> correlated positively with imaging features of normal tissue but negatively with those of cancerous tissue, and was identified as a driver gene. Low expression of <i>SLC25A42</i> was associated with chemotherapy sensitivity in HCC patients. <b><i>Conclusions:</i></b> Machine learning modeling combined with genomic profiling provides a promising approach to identifying the driver gene <i>SLC25A42</i> in LC, which may help improve diagnostic accuracy and the prediction of chemotherapy sensitivity for this disease.
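
A minimal sketch of the lasso-based feature selection followed by an SVM classifier, loosely mirroring the pipeline described above; the data, the guard clause, and the classifier settings are placeholder assumptions rather than the study's actual variables.

```python
# Sketch: lasso selects a sparse subset of imaging features, then an SVM
# classifies AIH vs. HCC on the retained features.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(116, 942))   # 942 imaging features, as in the abstract
y = rng.integers(0, 2, size=116)  # AIH vs. HCC labels (placeholder)

lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
selected = np.flatnonzero(lasso.named_steps["lassocv"].coef_)
if selected.size == 0:            # real data keeps some features; guard the toy example
    selected = np.arange(10)
print(f"{selected.size} features retained")

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[:, selected], y)
print("training accuracy:", svm.score(X[:, selected], y))
```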

Disease-Grading Networks with Asymmetric Gaussian Distribution for Medical Imaging.

Tang W, Yang Z

Jun 2, 2025
Deep learning-based disease grading technologies facilitate timely medical intervention due to their high efficiency and accuracy. Recent advancements have enhanced grading performance by incorporating the ordinal relationships of disease labels. However, existing methods often assume the same probability distribution for disease labels across instances within the same category, overlooking variations in label distributions. Additionally, the hyperparameters of these distributions are typically determined empirically, which may not accurately reflect the true distribution. To address these limitations, we propose a disease grading network utilizing a sample-aware asymmetric Gaussian label distribution, termed DGN-AGLD. This approach includes a variance predictor designed to learn and predict parameters that control the asymmetry of the Gaussian distribution, enabling distinct label distributions within the same category. This module can be seamlessly integrated into standard deep learning networks. Experimental results on four disease datasets validate the effectiveness and superiority of the proposed method, particularly on the IDRiD dataset, where it achieves a diabetic retinopathy grading accuracy of 77.67%. Furthermore, our method extends to joint disease grading tasks, yielding superior results and demonstrating significant generalization capabilities. Visual analysis indicates that our method more accurately captures the trend of disease progression by leveraging the asymmetry in label distribution. Our code is publicly available at https://github.com/ahtwq/AGNet.
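
To make the central idea concrete, here is a small worked example of an asymmetric Gaussian label distribution over ordinal grades: the two sides of the true grade get different variances, so the soft label is skewed toward one neighboring grade. The parameterization is an illustrative assumption, not the paper's exact formulation.

```python
# Sketch: soft ordinal labels from an asymmetric Gaussian, with separate
# left/right standard deviations around the true grade.
import numpy as np

def asymmetric_gaussian_labels(true_grade, n_grades, sigma_left, sigma_right):
    grades = np.arange(n_grades, dtype=float)
    sigma = np.where(grades < true_grade, sigma_left, sigma_right)
    dist = np.exp(-((grades - true_grade) ** 2) / (2 * sigma ** 2))
    return dist / dist.sum()  # normalize to a probability vector

# e.g. 5 diabetic-retinopathy grades, true grade 2, sharper on the left side
print(asymmetric_gaussian_labels(2, 5, sigma_left=0.5, sigma_right=1.5))
```

In DGN-AGLD the two sigmas would come from the variance predictor per sample, which is what makes the distribution sample-aware rather than fixed per category.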

UR-cycleGAN: Denoising full-body low-dose PET images using cycle-consistent Generative Adversarial Networks.

Liu Y, Sun Z, Liu H

Jun 2, 2025
This study aims to develop a CycleGAN-based denoising model to enhance the quality of low-dose PET (LDPET) images, making them as close as possible to standard-dose PET (SDPET) images. Using a Philips Vereos PET/CT system, whole-body PET images of fluorine-18 fluorodeoxyglucose (18F-FDG) were acquired from 37 patients to facilitate the development of the UR-CycleGAN model. In this model, low-dose data were simulated by reconstructing PET images from a 30-s acquisition, while standard-dose data were reconstructed from a 2.5-min acquisition. The network was trained in a supervised manner on 13,210 pairs of PET images, and image quality was objectively evaluated using peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Compared with the simulated low-dose data, the denoised PET images generated by our model showed significant improvement, with a clear trend toward SDPET image quality. The proposed method reduces acquisition time by 80% compared to standard-dose imaging while achieving image quality close to that of SDPET images. It also enhances visual detail fidelity, demonstrating the feasibility and practical utility of the model for significantly reducing imaging time while maintaining high image quality.
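
The evaluation step mentioned above (PSNR and SSIM against the standard-dose reference) can be reproduced with scikit-image; the arrays below are placeholders standing in for a denoised slice and its reference, not actual PET data.

```python
# Sketch: objective image-quality metrics for a denoised slice vs. reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(2)
reference = rng.random((128, 128)).astype(np.float32)  # SDPET slice (placeholder)
noise = 0.05 * rng.standard_normal((128, 128)).astype(np.float32)
denoised = np.clip(reference + noise, 0.0, 1.0)        # stand-in for a model output

print("PSNR:", peak_signal_noise_ratio(reference, denoised, data_range=1.0))
print("SSIM:", structural_similarity(reference, denoised, data_range=1.0))
```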

SPCF-YOLO: An Efficient Feature Optimization Model for Real-Time Lung Nodule Detection.

Ren Y, Shi C, Zhu D, Zhou C

Jun 2, 2025
Accurate pulmonary nodule detection in CT imaging remains challenging due to fragmented feature integration in conventional deep learning models. This paper proposes SPCF-YOLO, a real-time detection framework that synergizes hierarchical feature fusion with anatomical context modeling. First, a space-to-depth convolution (SPDConv) module preserves fine-grained features in low-resolution images through spatial dimension reorganization. Second, a shared feature pyramid convolution (SFPConv) module dynamically extracts multi-scale contextual information through multi-dilation-rate convolutional layers. A dedicated small-object detection layer improves sensitivity to small nodules, working in combination with an improved pyramid squeeze attention (PSA) module and an improved contextual transformer (CoTB) module that enhance global channel dependencies and reduce feature loss. The model achieves 82.8% mean average precision (mAP) and an 82.9% F1 score on LUNA16 at 151 frames per second (improvements of 17.5% and 82.9% over YOLOv8, respectively), demonstrating real-time clinical viability. Cross-modality validation on SIIM-COVID-19 shows a 1.5% improvement, confirming robust generalization.
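
For intuition on the SPDConv idea, the hedged PyTorch sketch below folds spatial positions into channels before a stride-1 convolution, so downsampling does not discard fine detail; module sizes and the use of PixelUnshuffle are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: a space-to-depth convolution block in the spirit of SPDConv.
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        # (C, H, W) -> (C*scale^2, H/scale, W/scale): no information is dropped
        self.unshuffle = nn.PixelUnshuffle(scale)
        self.conv = nn.Conv2d(in_ch * scale ** 2, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.unshuffle(x))

x = torch.randn(1, 32, 64, 64)
print(SpaceToDepthConv(32, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```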

MRI Radiomics based on paraspinal muscle for prediction of postoperative outcomes in lumbar degenerative spondylolisthesis.

Yu Y, Xu W, Li X, Zeng X, Su Z, Wang Q, Li S, Liu C, Wang Z, Wang S, Liao L, Zhang J

Jun 2, 2025
This study aims to develop a paraspinal muscle-based radiomics model using a machine learning approach and to assess its utility in predicting postoperative outcomes in patients with lumbar degenerative spondylolisthesis (LDS). This retrospective study included 155 patients diagnosed with LDS who underwent single-level posterior lumbar interbody fusion (PLIF) between January 2021 and October 2023. Patients were divided into training and test cohorts at a ratio of 8:2. Radiomic features were extracted from axial T2-weighted lumbar MRI, and seven machine learning models were developed after selecting the most relevant radiomic features using the t-test, Pearson correlation, and lasso regression. A combined model was then created by integrating clinical and radiomic features. Model performance was evaluated through ROC analysis, sensitivity, and specificity, while clinical utility was assessed using AUC and decision curve analysis (DCA). The logistic regression (LR) model demonstrated robust predictive performance compared with the other machine learning models evaluated. The combined model, integrating both clinical and radiomic features, exhibited an AUC of 0.822 (95% CI, 0.761-0.883) in the training cohort and 0.826 (95% CI, 0.766-0.886) in the test cohort, indicating substantial predictive capability. Moreover, the combined model showed superior clinical benefit and higher classification accuracy than the radiomics model alone. These findings suggest that the combined model holds promise for accurately predicting postoperative outcomes in patients with LDS and could be valuable in guiding treatment strategies and informing clinical decisions.
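
A minimal sketch of the combined clinical-plus-radiomics logistic regression with AUC evaluation, echoing the pipeline described above; all arrays and the choice of clinical covariates are hypothetical placeholders.

```python
# Sketch: concatenate radiomic and clinical features, fit LR, report test AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
radiomics = rng.normal(size=(155, 20))  # selected radiomic features (placeholder)
clinical = rng.normal(size=(155, 4))    # e.g. age, BMI, symptom duration (assumed)
X = np.hstack([radiomics, clinical])    # combined-model input
y = rng.integers(0, 2, size=155)        # good vs. poor postoperative outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```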

Slim UNETR++: A lightweight 3D medical image segmentation network for medical image analysis.

Jin J, Yang S, Tong J, Zhang K, Wang Z

Jun 2, 2025
Convolutional neural network (CNN) models such as U-Net, V-Net, and DeepLab have achieved remarkable results across various medical imaging modalities, including ultrasound. Additionally, hybrid Transformer-based segmentation methods have shown great potential in medical image analysis. Despite the breakthroughs in feature extraction enabled by self-attention mechanisms, these methods are computationally intensive, especially for three-dimensional medical imaging, posing significant challenges to graphics processing unit (GPU) hardware. Consequently, the demand for lightweight models is increasing. To address this issue, we designed a high-accuracy yet lightweight model that combines the strengths of CNNs and Transformers. We introduce Slim UNEt TRansformers++ (Slim UNETR++), which builds upon Slim UNETR by incorporating Medical ConvNeXt (MedNeXt), Spatial-Channel Attention (SCA), and Efficient Paired-Attention (EPA) modules. This integration leverages the advantages of both CNN and Transformer architectures to enhance model accuracy. The core component of Slim UNETR++ is the Slim UNETR++ block, which facilitates efficient information exchange through a sparse self-attention mechanism and low-cost representation aggregation. We also introduce throughput as a performance metric to quantify data processing speed. Experimental results demonstrate that Slim UNETR++ outperforms other models in terms of accuracy and model size. On the BraTS2021 dataset, Slim UNETR++ achieved a Dice accuracy of 93.12% and a 95% Hausdorff distance (HD95) of 4.23 mm, significantly surpassing mainstream methods such as Swin UNETR.
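
The two evaluation ideas mentioned (Dice overlap and throughput) are easy to sketch; the model call below is a trivial stand-in, not the Slim UNETR++ architecture, and the timing method is a simplifying assumption.

```python
# Sketch: Dice coefficient for segmentation overlap, and throughput as
# volumes processed per second through a placeholder 3D network.
import time
import torch

def dice_score(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = (torch.rand(1, 1, 64, 64, 64) > 0.5).float()  # placeholder binary mask
target = (torch.rand(1, 1, 64, 64, 64) > 0.5).float()
print("Dice:", dice_score(pred, target).item())

model = torch.nn.Conv3d(1, 1, 3, padding=1)  # stand-in for a 3D segmentation net
x = torch.randn(4, 1, 64, 64, 64)
start = time.perf_counter()
with torch.no_grad():
    model(x)
print("Throughput (volumes/s):", x.shape[0] / (time.perf_counter() - start))
```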

Robust Uncertainty-Informed Glaucoma Classification Under Data Shift.

Rashidisabet H, Chan RVP, Leiderman YI, Vajaranant TS, Yi D

Jun 2, 2025
Standard deep learning (DL) models often suffer significant performance degradation on out-of-distribution (OOD) data, where test data differ from training data, a common challenge in medical imaging due to real-world variations. We propose a unified self-censorship framework as an alternative to standard DL models for glaucoma classification, using deep evidential uncertainty quantification. Our approach detects OOD samples at both the dataset and image levels. Dataset-level self-censorship enables users to accept or reject predictions for an entire new dataset based on model uncertainty, whereas image-level self-censorship refrains from making predictions on individual OOD images rather than risking incorrect classifications. We validated our approach across diverse datasets. Our dataset-level self-censorship method outperforms the standard DL model in OOD detection, achieving an average 11.93% higher area under the curve (AUC) across 14 OOD datasets. Similarly, our image-level self-censorship model improves glaucoma classification accuracy by an average of 17.22% over baselines across 4 external glaucoma datasets, while censoring 28.25% more data. Our approach addresses the challenge of generalization in standard DL models for glaucoma classification across diverse datasets by selectively withholding predictions when the model is uncertain. This method reduces misclassification errors compared with state-of-the-art baselines, particularly for OOD cases. This study introduces a tunable framework that explores the trade-off between prediction accuracy and data retention in glaucoma prediction. By managing uncertainty in model outputs, the approach lays a foundation for future decision-support tools aimed at improving the reliability of automated glaucoma diagnosis.
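
An illustrative sketch of image-level self-censorship with evidential uncertainty: class evidence parameterizes a Dirichlet, total uncertainty is K divided by the concentration sum, and predictions above a threshold are withheld. The network output, softplus evidence function, and threshold are hypothetical stand-ins for the paper's design.

```python
# Sketch: evidential uncertainty and abstention on uncertain images.
import torch
import torch.nn.functional as F

def evidential_predict(logits, threshold=0.5):
    evidence = F.softplus(logits)        # non-negative evidence per class
    alpha = evidence + 1                 # Dirichlet concentration parameters
    k = logits.shape[-1]
    uncertainty = k / alpha.sum(dim=-1)  # in (0, 1]; high when evidence is weak
    probs = alpha / alpha.sum(dim=-1, keepdim=True)
    accept = uncertainty < threshold     # censor (abstain on) uncertain images
    return probs, uncertainty, accept

logits = torch.tensor([[4.0, 0.5], [0.2, 0.1]])  # confident vs. near-OOD example
probs, u, accept = evidential_predict(logits)
print(u, accept)  # the second, low-evidence sample is censored
```

Tuning the threshold trades prediction accuracy against data retention, which is the knob the abstract's tunable framework exposes.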

Impact of Optic Nerve Tortuosity, Globe Proptosis, and Size on Retinal Ganglion Cell Thickness Across General, Glaucoma, and Myopic Populations.

Chiang CYN, Wang X, Gardiner SK, Buist M, Girard MJA

Jun 2, 2025
The purpose of this study was to investigate the impact of optic nerve tortuosity (ONT) and the interaction of globe proptosis and size on retinal ganglion cell (RGC) thickness, using retinal nerve fiber layer (RNFL) thickness, across general, glaucoma, and myopic populations. This study analyzed 17,940 eyes from the UK Biobank cohort (ID 76442), including 72 glaucomatous and 2475 myopic eyes. Artificial intelligence models were developed to derive RNFL thickness corrected for ocular magnification from 3D optical coherence tomography scans, and orbit features from 3D magnetic resonance images, including ONT, globe proptosis, axial length, and a novel feature: the interzygomatic line-to-posterior pole (ILPP) distance, a composite marker of globe proptosis and size. Generalized estimating equation (GEE) models evaluated associations between orbital and retinal features. RNFL thickness was positively correlated with ONT and ILPP distance in the general population (r = 0.065, P < 0.001 and r = 0.206, P < 0.001, respectively). Similar trends held in glaucoma (r = 0.040, P = 0.74 and r = 0.224, P = 0.059) and in myopia (r = 0.069, P < 0.001 and r = 0.100, P < 0.001). GEE models revealed that straighter optic nerves and shorter ILPP distance were predictive of thinner RNFL in all populations. Straighter optic nerves and decreased ILPP distance could cause RNFL thinning, possibly due to greater traction forces, and ILPP distance emerged as a potential biomarker of axonal health. These findings underscore the importance of orbit structures in RGC axonal health and warrant further research into orbit biomechanics.
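
A minimal statsmodels sketch of a GEE in the spirit of the analysis described, relating RNFL thickness to orbit features with eyes clustered by participant; column names and the simulated data are hypothetical, not UK Biobank variables.

```python
# Sketch: GEE with an exchangeable correlation structure, two eyes per subject.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame({
    "rnfl": rng.normal(95, 10, n),               # RNFL thickness (um), placeholder
    "ont": rng.normal(1.1, 0.05, n),             # optic nerve tortuosity, placeholder
    "ilpp": rng.normal(30, 3, n),                # ILPP distance (mm), placeholder
    "subject": np.repeat(np.arange(n // 2), 2),  # two eyes per participant
})

model = smf.gee("rnfl ~ ont + ilpp", groups="subject", data=df,
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```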