Page 319 of 343 (3422 results)

Prediction of clinical stages of cervical cancer via machine learning integrated with clinical features and ultrasound-based radiomics.

Zhang M, Zhang Q, Wang X, Peng X, Chen J, Yang H

pubmed · May 29 2025
To investigate the prediction of a model constructed by combining machine learning (ML) with clinical features and ultrasound radiomics in the clinical staging of cervical cancer. General clinical and ultrasound data of 227 patients with cervical cancer who underwent transvaginal ultrasonography were retrospectively analyzed. Radiomics profiles were extracted from the region of interest (ROI) of the original and derived images, and profile screening was performed. The selected profiles were used to construct the radiomics model and the Radscore formula. Prediction models were developed in Python using several ML algorithms on an integrated dataset of clinical features and ultrasound radiomics. Model performance was evaluated via AUC, and calibration curves and clinical decision curves were plotted to assess model efficacy. The model developed with the support vector machine (SVM) emerged as the superior model. Integrating clinical characteristics with ultrasound radiomics, it showed notable performance metrics in both the training and validation datasets. Specifically, in the training set, the model obtained an AUC of 0.88 (95% Confidence Interval (CI): 0.83-0.93), alongside 0.84 accuracy, 0.68 sensitivity, and 0.91 specificity. When validated, the model maintained an AUC of 0.77 (95% CI: 0.63-0.88), with 0.77 accuracy, 0.62 sensitivity, and 0.83 specificity. The calibration curve aligned closely with the perfect calibration line, and clinical decision curve analysis showed that the model offers clinical utility across a wide range of thresholds. The clinical- and radiomics-based SVM model provides a noninvasive tool for predicting cervical cancer stage, integrating ultrasound radiomics and key clinical factors (age, abortion history) to improve risk stratification. This approach could guide personalized treatment (surgery vs. chemoradiation) and optimize staging accuracy, particularly in resource-limited settings where advanced imaging is scarce.
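The evaluation pipeline described above (AUC plus sensitivity and specificity at a fixed operating point) can be sketched independently of any particular model. The following is a minimal NumPy sketch, not code from the study; the function names and threshold handling are our own:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(labels, scores, thresh):
    """Sensitivity and specificity at a fixed decision threshold."""
    labels = np.asarray(labels)
    pred = np.asarray(scores) >= thresh
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    return sens, spec
```

In this form, the validation-set numbers reported above correspond to one call of `auc` plus one call of `sens_spec` at the chosen Radscore cut-off.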

Deep learning enables fast and accurate quantification of MRI-guided near-infrared spectral tomography for breast cancer diagnosis.

Feng J, Tang Y, Lin S, Jiang S, Xu J, Zhang W, Geng M, Dang Y, Wei C, Li Z, Sun Z, Jia K, Pogue BW, Paulsen KD

pubmed · May 29 2025
The utilization of magnetic resonance (MR) imaging to guide near-infrared spectral tomography (NIRST) shows significant potential for improving the specificity and sensitivity of breast cancer diagnosis. However, the efficiency and accuracy of NIRST image reconstruction have been limited by the complexities of light propagation modeling and MRI image segmentation. To address these challenges, we developed and evaluated a deep learning-based approach for MR-guided 3D NIRST image reconstruction (DL-MRg-NIRST). Using a network trained on synthetic data, the DL-MRg-NIRST system reconstructed images from data acquired during 38 clinical imaging exams of patients with breast abnormalities. Statistical analysis of the results demonstrated a sensitivity of 87.5%, a specificity of 92.9%, and a diagnostic accuracy of 89.5% in distinguishing pathologically defined benign from malignant lesions. Additionally, the combined use of MRI and DL-MRg-NIRST diagnoses achieved an area under the receiver operating characteristic (ROC) curve of 0.98. Remarkably, the DL-MRg-NIRST image reconstruction process required only 1.4 seconds, significantly faster than state-of-the-art MR-guided NIRST methods.

Ultrasound image-based contrastive fusion non-invasive liver fibrosis staging algorithm.

Dong X, Tan Q, Xu S, Zhang J, Zhou M

pubmed · May 29 2025
The diagnosis of liver fibrosis is usually based on histopathological examination of liver puncture specimens. Although liver puncture is accurate, it carries invasive risks and high economic costs that are difficult for some patients to accept. This study therefore uses deep learning to build a liver fibrosis diagnosis model that achieves non-invasive staging of liver fibrosis, avoiding complications and reducing costs. Ultrasound examination was used to obtain image sections of pure liver parenchyma. With patient consent, the degree of liver fibrosis indicated by each ultrasound examination was determined from the results of percutaneous liver puncture biopsy. Our method introduces the concept of a Fibrosis Contrast Layer (FCL), which helps the model capture the significant differences between the features of the various grades of liver fibrosis. Finally, through label fusion (LF), the features of liver specimens at the same fibrosis stage are abstracted and fused to improve the accuracy and stability of the diagnostic model. Experimental evaluation demonstrated that our model achieved an accuracy of 85.6%, outperforming baseline models such as ResNet (81.9%), InceptionNet (80.9%), and VGG (80.8%). Even under a small-sample condition (30% of the data), the model maintained an accuracy of 84.8%, significantly outperforming traditional deep-learning models, which exhibited sharp performance declines. In both the full-sample and 30% small-sample training environments, the FCLLF model's test performance was better and more stable than that of traditional deep-learning models such as VGG, ResNet, and InceptionNet, especially in the small-sample setting. Our proposed FCLLF model effectively improves the accuracy and stability of liver fibrosis staging using non-invasive ultrasound imaging.
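The label-fusion (LF) step, as described, abstracts and fuses the features of specimens that share a fibrosis stage. A minimal interpretation, assuming fusion means per-stage averaging of feature vectors (the abstract does not specify the exact fusion operator), can be sketched as:

```python
import numpy as np

def label_fusion(features, stages):
    """Fuse feature vectors that share a fibrosis stage label by
    averaging them. Returns a dict: stage -> fused feature vector.
    Per-stage averaging is an assumption, not the paper's exact operator."""
    features = np.asarray(features, dtype=float)
    stages = np.asarray(stages)
    return {s: features[stages == s].mean(axis=0) for s in np.unique(stages)}
```

The fused per-stage prototypes could then serve as stable reference representations against which individual examinations are compared.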

Self-supervised feature learning for cardiac Cine MR image reconstruction

Siying Xu, Marcel Früh, Kerstin Hammernik, Andreas Lingg, Jens Kübler, Patrick Krumm, Daniel Rueckert, Sergios Gatidis, Thomas Küstner

arxiv preprint · May 29 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitations of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully-sampled images for supervised learning, which is challenging in practice considering long acquisition times under respiratory or organ motion. Moreover, nearly all fully-sampled datasets are obtained from conventional reconstruction of mildly accelerated datasets, thus potentially biasing the achievable performance. Hence, the numerous undersampled datasets acquired with different accelerations in clinical practice remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac Cine dataset, including 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and even exhibits performance comparable to or better than supervised learning at up to $16\times$ retrospective undersampling. The feature learning strategy can effectively extract global representations, which have proven beneficial in removing artifacts and increasing generalization ability during reconstruction.
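Retrospective undersampling of the kind used in these experiments can be illustrated with a simple Cartesian masking sketch: randomly keep k-space lines at roughly 1/acceleration density, keep a fully sampled centre, and zero-fill. This is an assumption-laden toy, not the authors' sampling scheme:

```python
import numpy as np

def undersample(image, accel, rng):
    """Retrospectively undersample an image's k-space along the phase
    axis: keep each line with probability 1/accel plus a fully sampled
    low-frequency centre, then zero-fill and invert the FFT."""
    k = np.fft.fftshift(np.fft.fft2(image))
    ny = k.shape[0]
    mask = rng.random(ny) < 1.0 / accel
    centre = slice(ny // 2 - ny // 16, ny // 2 + ny // 16)
    mask[centre] = True  # always keep the k-space centre
    k_us = k * mask[:, None]
    recon = np.fft.ifft2(np.fft.ifftshift(k_us))
    return np.abs(recon), mask
```

Pairs of such reconstructions from different masks of the same frame are the natural inputs for learning sampling-insensitive features.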

Gaussian random fields as an abstract representation of patient metadata for multimodal medical image segmentation.

Cassidy B, McBride C, Kendrick C, Reeves ND, Pappachan JM, Raad S, Yap MH

pubmed · May 29 2025
Growing rates of chronic wound occurrence, especially in patients with diabetes, have become a concerning recent trend. Chronic wounds are difficult and costly to treat, and have become a serious burden on health care systems worldwide. Innovative deep learning methods for the detection and monitoring of such wounds have the potential to reduce the impact on patients and clinicians. We present a novel multimodal segmentation method which allows for the introduction of patient metadata into the training workflow, whereby the patient data are expressed as Gaussian random fields. Our results indicate that the proposed method improved performance when utilising multiple models, each trained on different metadata categories. Using the Diabetic Foot Ulcer Challenge 2022 test set, when compared to the baseline results (intersection over union = 0.4670, Dice similarity coefficient = 0.5908) we demonstrate improvements of +0.0220 and +0.0229 for intersection over union and Dice similarity coefficient respectively. This paper presents the first study to focus on integrating patient data into a chronic wound segmentation workflow. Our results show significant performance gains when training individual models using specific metadata categories, followed by average merging of prediction masks using distance transforms. All source code for this study is available at: https://github.com/mmu-dermatology-research/multimodal-grf.
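One common way to sample a Gaussian random field, which could serve as such an abstract metadata representation, is to spectrally filter white noise. The squared-exponential spectrum and the idea of deriving the seed from a metadata category are our assumptions for illustration, not necessarily the paper's construction (see the linked repository for the actual code):

```python
import numpy as np

def metadata_grf(shape, length_scale, seed):
    """Sample a 2D Gaussian random field by filtering white noise with a
    squared-exponential power spectrum. Deriving `seed` from a patient
    metadata value would give each category its own reproducible field."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    spectrum = np.exp(-(fx**2 + fy**2) * length_scale**2)
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(spectrum)).real
    return (field - field.mean()) / field.std()  # zero mean, unit variance
```

A normalised field like this can be stacked as an extra input channel alongside the wound image.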

Comparing the Effects of Persistence Barcodes Aggregation and Feature Concatenation on Medical Imaging

Dashti A. Ali, Richard K. G. Do, William R. Jarnagin, Aras T. Asaad, Amber L. Simpson

arxiv preprint · May 29 2025
In medical image analysis, feature engineering plays an important role in the design and performance of machine learning models. Persistent homology (PH), from the field of topological data analysis (TDA), demonstrates robustness and stability to data perturbations and addresses the limitation of traditional feature extraction approaches, where a small change in input results in a large change in feature representation. Using PH, we store persistent topological and geometrical features in the form of the persistence barcode, whereby large bars represent global topological features and small bars encapsulate geometrical information of the data. When multiple barcodes are computed from 2D or 3D medical images, two approaches can be used to construct the final topological feature vector in each dimension: aggregating persistence barcodes followed by featurization, or concatenating topological feature vectors derived from each barcode. In this study, we conduct a comprehensive analysis across diverse medical imaging datasets to compare the effects of the two aforementioned approaches on the performance of classification models. The results of this analysis indicate that feature concatenation preserves detailed topological information from individual barcodes, yields better classification performance, and is therefore a preferred approach when conducting similar experiments.
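The two strategies being compared can be sketched with a simple bar-length-statistics featurizer standing in for whatever featurization the study actually uses:

```python
import numpy as np

def featurize(barcode):
    """Summarise one persistence barcode (rows of (birth, death)) by
    bar-length statistics; a stand-in for the study's featurization."""
    lengths = barcode[:, 1] - barcode[:, 0]
    return np.array([lengths.mean(), lengths.std(), lengths.max(),
                     float(len(lengths))])

def aggregate_then_featurize(barcodes):
    """Approach 1: pool all bars into one barcode, featurize once."""
    return featurize(np.vstack(barcodes))

def concatenate_features(barcodes):
    """Approach 2: featurize each barcode, concatenate the vectors."""
    return np.concatenate([featurize(b) for b in barcodes])
```

The dimensional trade-off is visible immediately: aggregation yields one fixed-length vector regardless of how many barcodes were computed, whereas concatenation grows with the number of barcodes but keeps each barcode's statistics separate, which is what the study finds beneficial.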

ROC Analysis of Biomarker Combinations in Fragile X Syndrome-Specific Clinical Trials: Evaluating Treatment Efficacy via Exploratory Biomarkers

Norris, J. E., Berry-Kravis, E. M., Harnett, M. D., Reines, S. A., Reese, M., Auger, E. K., Outterson, A., Furman, J., Gurney, M. E., Ethridge, L. E.

medrxiv preprint · May 29 2025
Fragile X Syndrome (FXS) is a rare neurodevelopmental disorder caused by a trinucleotide repeat expansion in the 5' untranslated region of the FMR1 gene. FXS is characterized by intellectual disability, anxiety, sensory hypersensitivity, and difficulties with executive function. A recent phase 2 placebo-controlled clinical trial assessing BPN14770, a first-in-class phosphodiesterase 4D allosteric inhibitor, in 30 adult males (ages 18-41 years) with FXS demonstrated cognitive improvements on the NIH Toolbox Cognitive Battery in domains related to language, and caregiver reports of improvement in both daily functioning and language. However, individual physiological measures from electroencephalography (EEG) demonstrated only marginal significance for trial efficacy. A secondary analysis of resting state EEG data collected as part of the phase 2 clinical trial evaluating BPN14770 was conducted using a machine learning classification algorithm to classify trial conditions (i.e., baseline, drug, placebo) via linear EEG variable combinations. The algorithm identified a composite of peak alpha frequencies (PAF) across multiple brain regions as a potential biomarker demonstrating BPN14770 efficacy. Increased PAF from baseline was associated with drug but not placebo. Given the relationship between PAF and cognitive function among typically developed adults and those with intellectual disability, as well as previously reported reductions in alpha frequency and power in FXS, PAF represents a potential physiological measure of BPN14770 efficacy.
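Peak alpha frequency for a single channel can be estimated as the frequency of maximal spectral power within the alpha band; a regional composite would then average per-channel values. A minimal NumPy sketch (the band limits and the plain-periodogram estimator are our assumptions, not the study's pipeline):

```python
import numpy as np

def peak_alpha_frequency(signal, fs, band=(8.0, 13.0)):
    """Frequency of maximal periodogram power within the alpha band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(power[in_band])]
```

A composite PAF across regions would be, e.g., `np.mean([peak_alpha_frequency(ch, fs) for ch in channels])`, with a drug effect appearing as an increase in this composite from baseline.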

Research on multi-algorithm and explainable AI techniques for predictive modeling of acute spinal cord injury using multimodal data.

Tai J, Wang L, Xie Y, Li Y, Fu H, Ma X, Li H, Li X, Yan Z, Liu J

pubmed · May 29 2025
Machine learning technology has been extensively applied in the medical field, particularly in the context of disease prediction and patient rehabilitation assessment. Acute spinal cord injury (ASCI) is a sudden trauma that frequently results in severe neurological deficits and a significant decline in quality of life. Early prediction of neurological recovery is crucial for personalized treatment planning. While such methods have been extensively explored in other medical fields, this study is the first to apply multiple machine learning methods and Shapley Additive Explanations (SHAP) analysis specifically to ASCI for predicting neurological recovery. A total of 387 ASCI patients were included, with clinical, imaging, and laboratory data collected. Key features were selected using univariate analysis, Lasso regression, and other feature selection techniques, integrating clinical, radiomics, and laboratory data. A range of machine learning models, including XGBoost, Logistic Regression, KNN, SVM, Decision Tree, Random Forest, LightGBM, ExtraTrees, Gradient Boosting, and Gaussian Naive Bayes, were evaluated, with Gaussian Naive Bayes exhibiting the best performance. Radiomics features extracted from T2-weighted fat-suppressed MRI scans, such as original_glszm_SizeZoneNonUniformity and wavelet-HLL_glcm_SumEntropy, significantly enhanced predictive accuracy. SHAP analysis identified critical clinical features, including IMLL, INR, BMI, Cys C, and RDW-CV, in the predictive model. The model was validated and demonstrated excellent performance across multiple metrics. The clinical utility and interpretability of the model were further enhanced through the application of patient clustering and nomogram analysis. This model has the potential to serve as a reliable tool for clinicians in the formulation of personalized treatment plans and prognosis assessment.
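Since Gaussian Naive Bayes was the best-performing model here, its core computation is worth making concrete: per-class Gaussian log-likelihoods summed over (assumed independent) features, plus a log prior. A from-scratch sketch, not the study's implementation:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and
    variances, prediction by maximum log-posterior."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9
                              for c in self.classes_])  # variance smoothing
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        # log N(x; mu, var) summed over features, per (sample, class)
        ll = -0.5 * (np.log(2 * np.pi * self.var_)[None]
                     + (X[:, None, :] - self.mu_[None]) ** 2
                     / self.var_[None]).sum(axis=-1)
        return self.classes_[np.argmax(ll + np.log(self.prior_), axis=1)]
```

Its appeal for a 387-patient cohort is plain: with only class means, variances, and priors to estimate, it is hard to overfit relative to the tree ensembles it was compared against.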

Super-temporal-resolution Photoacoustic Imaging with Dynamic Reconstruction through Implicit Neural Representation in Sparse-view

Youshen Xiao, Yiling Shi, Ruixi Sun, Hongjiang Wei, Fei Gao, Yuyao Zhang

arxiv preprint · May 29 2025
Dynamic Photoacoustic Computed Tomography (PACT) is an important imaging technique for monitoring physiological processes, capable of providing high-contrast images of optical absorption at much greater depths than traditional optical imaging methods. However, practical instrumentation and geometric constraints limit the number of acoustic sensors available around the imaging target, leading to sparsity in sensor data. Traditional photoacoustic (PA) image reconstruction methods, when directly applied to sparse PA data, produce severe artifacts. Additionally, these traditional methods do not consider the inter-frame relationships in dynamic imaging. Temporal resolution is crucial for dynamic photoacoustic imaging, yet it is fundamentally limited by the low repetition rate (e.g., 20 Hz) and high cost of high-power laser technology. Recently, Implicit Neural Representation (INR) has emerged as a powerful deep learning tool for solving inverse problems with sparse data, by characterizing signal properties as continuous functions of their coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic photoacoustic image reconstruction from sparse views and enhance temporal resolution, using only spatiotemporal coordinates as input. Specifically, the proposed INR represents dynamic photoacoustic images as implicit functions and encodes them into a neural network. The weights of the network are learned solely from the acquired sparse sensor data, without the need for external training datasets or prior images. Benefiting from the strong implicit continuity regularization provided by INR, as well as explicit regularization for low-rank and sparsity, our proposed method outperforms traditional reconstruction methods under two different sparsity conditions, effectively suppressing artifacts and ensuring image quality.
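The central INR idea, representing the signal as a continuous function of spatiotemporal coordinates learned only from the acquired samples, can be illustrated with a linear stand-in: random Fourier features fitted by least squares in place of a neural network. This is a deliberate simplification for illustration, not the authors' method:

```python
import numpy as np

def fit_coordinate_model(coords, values, n_feat=256, scale=6.0, seed=0):
    """Fit a continuous function of coordinates (e.g., (x, y, t)) with
    random Fourier features + least squares: a linear stand-in for an
    INR, learned only from the given sparse samples themselves."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((coords.shape[1], n_feat)) * scale

    def phi(c):
        proj = np.asarray(c) @ B
        return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

    w, *_ = np.linalg.lstsq(phi(coords), values, rcond=None)
    return lambda c: phi(c) @ w  # evaluable at ANY coordinate
```

Because the returned model can be queried at arbitrary coordinates, including time points between acquired frames, the same mechanism that fills in sparse views also yields the super-temporal-resolution behaviour the title refers to.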

ImmunoDiff: A Diffusion Model for Immunotherapy Response Prediction in Lung Cancer

Moinak Bhattacharya, Judy Huang, Amna F. Sher, Gagandeep Singh, Chao Chen, Prateek Prasanna

arxiv preprint · May 29 2025
Accurately predicting immunotherapy response in Non-Small Cell Lung Cancer (NSCLC) remains a critical unmet need. Existing radiomics and deep learning-based predictive models rely primarily on pre-treatment imaging to predict categorical response outcomes, limiting their ability to capture the complex morphological and textural transformations induced by immunotherapy. This study introduces ImmunoDiff, an anatomy-aware diffusion model designed to synthesize post-treatment CT scans from baseline imaging while incorporating clinically relevant constraints. The proposed framework integrates anatomical priors, specifically lobar and vascular structures, to enhance fidelity in CT synthesis. Additionally, we introduce a novel cbi-Adapter, a conditioning module that ensures pairwise-consistent multimodal integration of imaging and clinical data embeddings. A clinical variable conditioning mechanism further leverages demographic data, blood-based biomarkers, and PD-L1 expression to refine the generative process. Evaluations on an in-house NSCLC cohort treated with immune checkpoint inhibitors demonstrate a 21.24% improvement in balanced accuracy for response prediction and a 0.03 increase in c-index for survival prediction. Code will be released soon.