
Amorphous-Crystalline Synergy in CoSe₂/CoS₂ Heterostructures: High-Performance SERS Substrates for Esophageal Tumor Cell Discrimination.

Zhang M, Liu A, Meng X, Wang Y, Yu J, Liu H, Sun Y, Xu L, Song X, Zhang J, Sun L, Lin J, Wu A, Wang X, Chai N, Li L

PubMed · Aug 12 2025
Although surface-enhanced Raman scattering (SERS) spectroscopy is widely applied in biomedicine, new substrates that broaden its detection capabilities are still in demand. A crystalline-amorphous CoSe₂/CoS₂ heterojunction composed of orthorhombic CoSe₂ (o-CoSe₂) and amorphous CoS₂ (a-CoS₂) is synthesized, showing high SERS performance and stability. By adjusting the feed ratio, the proportion of a-CoS₂ to o-CoSe₂ is regulated; CoSe₂/CoS₂-S50, with a 1:1 ratio, demonstrates the best SERS performance owing to the balance of the two components. Experimental and simulation methods confirm that o-CoSe₂ and a-CoS₂ each make distinct contributions: a-CoS₂ provides rich vacancies and a higher density of active sites, while o-CoSe₂ further enriches vacancies, enhances electron delocalization and charge transfer (CT), and reduces the bandgap. Moreover, CoSe₂/CoS₂-S50 achieves not only SERS detection of two common esophageal tumor cell lines (KYSE and TE) and healthy oral epithelial cells (het-1A), but also their discrimination with high sensitivity, specificity, and accuracy via machine learning (ML) analysis.
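The abstract does not name the ML algorithm used for discrimination, so the sketch below is only a generic spectra-classification pipeline (standardization, PCA, and an RBF SVM) on placeholder data; every dimension and parameter is an assumption.

```python
# Hypothetical sketch: discriminating cell types from SERS spectra with a
# generic ML pipeline (the abstract does not specify the algorithm used).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are spectra (intensity per Raman shift); labels are
# 0 = het-1A, 1 = KYSE, 2 = TE.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 1024))   # 90 spectra x 1024 wavenumber bins
y = rng.integers(0, 3, size=90)

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```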

Comparative analysis of tumor and mesorectum radiomics in predicting neoadjuvant chemoradiotherapy response in locally advanced rectal cancer.

Cantürk A, Yarol RC, Tasak AS, Gülmez H, Kadirli K, Bişgin T, Manoğlu B, Sökmen S, Öztop İ, Görken Bilkay İ, Sağol Ö, Sarıoğlu S, Barlık F

PubMed · Aug 12 2025
Neoadjuvant chemoradiotherapy (CRT) is known to increase sphincter preservation rates and decrease the risk of postoperative recurrence in patients with locally advanced rectal tumors. However, the response to CRT in patients with locally advanced rectal cancer (LARC) varies significantly. The objective of this study was to compare the performance of models based on radiomics features of the tumor alone, the mesorectum alone, and a combination of both in predicting tumor response to neoadjuvant CRT in LARC. This retrospective study included 101 patients with LARC. Patients were categorized as responders (modified Ryan score 0-1) and non-responders (modified Ryan score 2-3). Pre-CRT magnetic resonance imaging evaluations included tumor T2-weighted imaging (T2WI), tumor diffusion-weighted imaging (DWI), tumor apparent diffusion coefficient (ADC) maps, and mesorectum T2WI. One radiologist segmented the tumor and mesorectum from T2-weighted images, and a second radiologist performed tumor segmentation using DWI and ADC maps. Feature reproducibility was assessed by calculating the intraclass correlation coefficient (ICC) using a two-way mixed-effects model with absolute agreement for single measurements [ICC(3,1)]. Radiomic features with ICC values <0.60 were excluded from further analysis. Subsequently, the least absolute shrinkage and selection operator (LASSO) method was applied to select the most relevant radiomic features. The top five features with the highest coefficients were selected for model training. To address class imbalance between groups, the synthetic minority over-sampling technique (SMOTE) was applied exclusively to the training folds during cross-validation. Thereafter, classification learner models were developed using 10-fold cross-validation to achieve the highest performance. The performance metrics of the final models, including accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC), were calculated to evaluate classification performance. Among the 101 patients, 36 were classified as responders and 65 as non-responders. A total of 25 radiomic features from the tumor and 20 from the mesorectum were found to be statistically significant (P < 0.05). The AUC values for predicting treatment response were 0.781 for the tumor-only model (random forest), 0.726 for the mesorectum-only model (logistic regression), and 0.837 for the combined model (logistic regression). Radiomic features derived from both the tumor and mesorectum demonstrated complementary prognostic value in predicting treatment response. The inclusion of mesorectal features substantially improved model performance, with the combined model achieving the highest AUC value. These findings highlight the added predictive contribution of the mesorectum as a key peritumoral structure in radiomics-based assessment. Currently, the response of locally advanced rectal tumors to neoadjuvant therapy cannot be reliably predicted using conventional methods. Recently, the significance of the mesorectum in predicting treatment response has gained attention, although the number of studies focusing on this area remains limited. In our study, we performed radiomics analyses of both the tumor tissue and the mesorectum to predict neoadjuvant treatment response.
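A minimal sketch of the described workflow, assuming scikit-learn and imbalanced-learn are available: ICC-filtered features go through LASSO-style selection, SMOTE is applied only inside the training folds, and a logistic regression classifier is scored with 10-fold cross-validated AUC. The data, feature counts, and hyperparameters are placeholders rather than the study's values.

```python
# Hedged sketch of the reported pipeline; not the authors' code.
import numpy as np
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(42)
X = rng.normal(size=(101, 45))       # 101 patients x 45 reproducible features (ICC >= 0.60)
y = rng.integers(0, 2, size=101)     # 1 = responder, 0 = non-responder

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),                 # oversampling happens inside training folds only
    ("lasso", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
        threshold=-np.inf, max_features=5)),          # keep the top five features
    ("clf", LogisticRegression(max_iter=1000)),
])
auc = cross_val_score(pipe, X, y,
                      cv=StratifiedKFold(10, shuffle=True, random_state=0),
                      scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.3f}")
```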

Current imaging applications, radiomics, and machine learning modalities of CNS demyelinating disorders and their mimickers.

Alam Z, Maddali A, Patel S, Weber N, Al Rikabi S, Thiemann D, Desai K, Monoky D

PubMed · Aug 12 2025
Distinguishing among neuroinflammatory demyelinating diseases of the central nervous system can present a significant diagnostic challenge due to substantial overlap in clinical presentations and imaging features. Collaboration between specialists, novel antibody testing, and dedicated magnetic resonance imaging protocols have helped to narrow the diagnostic gap, but challenging cases remain. Machine learning algorithms have proven able to identify subtle patterns that escape even the most experienced human eye. Indeed, machine learning and the subfield of radiomics have demonstrated exponential growth and improvement in diagnostic capacity within the past decade. The sometimes daunting diagnostic overlap of various demyelinating processes thus provides a unique opportunity: can the elite pattern recognition powers of machine learning close the gap in making the correct diagnosis? This review specifically focuses on neuroinflammatory demyelinating diseases, exploring the role of artificial intelligence in the detection, diagnosis, and differentiation of the most common pathologies: multiple sclerosis (MS), neuromyelitis optica spectrum disorder (NMOSD), acute disseminated encephalomyelitis (ADEM), Sjögren's syndrome, MOG antibody-associated disorder (MOGAD), and neuropsychiatric systemic lupus erythematosus (NPSLE). Understanding how these tools enhance diagnostic precision may lead to earlier intervention, improved outcomes, and optimized management strategies.

Enhanced MRI brain tumor detection using deep learning in conjunction with explainable AI SHAP-based diverse and multi-feature analysis.

Rahman A, Hayat M, Iqbal N, Alarfaj FK, Alkhalaf S, Alturise F

PubMed · Aug 11 2025
Recent innovations in medical imaging have markedly improved brain tumor identification, surpassing conventional diagnostic approaches that suffer from low resolution, radiation exposure, and limited contrast. Magnetic Resonance Imaging (MRI) is pivotal in precise and accurate tumor characterization owing to its high-resolution, non-invasive nature. This study investigates the synergy among multiple feature representation schemes, such as Local Binary Patterns (LBP), Gabor filters, Discrete Wavelet Transform, Fast Fourier Transform, Convolutional Neural Networks (CNN), and Gray-Level Run Length Matrix, alongside five learning algorithms, namely k-nearest neighbor, random forest, support vector classifier (SVC), probabilistic neural network (PNN), and CNN. Empirical findings indicate that LBP in conjunction with SVC and CNN obtained high specificity and accuracy, rendering it a promising method for MRI-based tumor diagnosis. To further investigate the contribution of LBP, chi-square and p-value tests are used to confirm the significant impact of the LBP feature space on brain tumor identification. In addition, SHAP analysis was used to identify the most important features in classification. On a small dataset, CNN obtained 97.8% accuracy while SVC yielded 98.06% accuracy. In a subsequent analysis, a large benchmark dataset is also used to evaluate the learning algorithms and investigate the generalization power of the proposed model. CNN achieves the highest accuracy of 98.9%, followed by SVC at 96.7%. These results highlight CNN's effectiveness in automated, high-precision tumor diagnosis. This achievement is attributed to MRI-based feature extraction, which combines high-resolution, non-invasive imaging with the powerful analytical abilities of CNNs. CNN demonstrates superiority in medical imaging owing to its ability to learn intricate spatial patterns and generalize effectively. This interaction enhances the accuracy, speed, and consistency of brain tumor detection, ultimately leading to better patient outcomes and more efficient healthcare delivery. https://github.com/asifrahman557/BrainTumorDetection
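As an illustration of one of the reported pairings (LBP features with an SVC), the sketch below computes uniform-LBP histograms per slice and cross-validates an RBF SVC; the images, labels, and parameters are synthetic placeholders, not the authors' setup.

```python
# Minimal sketch, assuming scikit-image and scikit-learn: LBP histogram
# features from 2-D slices fed to an SVC.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lbp_histogram(image, p=8, r=1):
    """Uniform-LBP histogram for a single 2-D grayscale slice."""
    codes = local_binary_pattern(image, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(p + 3), density=True)
    return hist

# Placeholder data: 60 synthetic 128x128 slices, binary tumor/no-tumor labels.
rng = np.random.default_rng(1)
images = (rng.random((60, 128, 128)) * 255).astype(np.uint8)
y = rng.integers(0, 2, size=60)

X = np.array([lbp_histogram(img) for img in images])
print(f"cross-validated accuracy: {cross_val_score(SVC(kernel='rbf'), X, y, cv=5).mean():.2f}")
```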

Deep learning and radiomics fusion for predicting the invasiveness of lung adenocarcinoma within ground glass nodules.

Sun Q, Yu L, Song Z, Wang C, Li W, Chen W, Xu J, Han S

PubMed · Aug 11 2025
Microinvasive adenocarcinoma (MIA) and invasive adenocarcinoma (IAC) require distinct treatment strategies and are associated with different prognoses, underscoring the importance of accurate differentiation. This study aims to develop a predictive model that combines radiomics and deep learning to effectively distinguish between MIA and IAC. In this retrospective study, 252 pathologically confirmed cases of ground-glass nodules (GGNs) were included, with 177 allocated to the training set and 75 to the testing set. Radiomics, 2D deep learning, and 3D deep learning models were constructed based on CT images. In addition, two fusion strategies were employed to integrate these modalities: early fusion, which concatenates features from all modalities prior to classification, and late fusion, which ensembles the output probabilities of the individual models. The predictive performance of all five models was evaluated using the area under the receiver operating characteristic curve (AUC), and DeLong's test was performed to compare differences in AUC between models. The radiomics model achieved an AUC of 0.794 (95% CI: 0.684-0.898), while the 2D and 3D deep learning models achieved AUCs of 0.754 (95% CI: 0.594-0.882) and 0.847 (95% CI: 0.724-0.945), respectively, in the testing set. Among the fusion models, the late fusion strategy demonstrated the highest predictive performance, with an AUC of 0.898 (95% CI: 0.784-0.962), outperforming the early fusion model, which achieved an AUC of 0.857 (95% CI: 0.731-0.936). Although the differences were not statistically significant, the late fusion model yielded the highest numerical values for diagnostic accuracy, sensitivity, and specificity across all models. The fusion of radiomics and deep learning features shows potential in improving the differentiation of MIA and IAC in GGNs. The late fusion strategy demonstrated promising results, warranting further validation in larger, multicenter studies.
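The late-fusion strategy described above can be sketched as follows: each modality gets its own classifier, and their predicted probabilities are averaged at inference time. The features below are random placeholders standing in for radiomic features and CNN embeddings, and the model choices are assumptions.

```python
# Illustrative late-fusion sketch (not the study's code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X_radiomics = rng.normal(size=(252, 30))   # handcrafted radiomic features
X_deep2d    = rng.normal(size=(252, 64))   # stand-in for 2-D CNN embeddings
X_deep3d    = rng.normal(size=(252, 64))   # stand-in for 3-D CNN embeddings
y = rng.integers(0, 2, size=252)           # 1 = IAC, 0 = MIA
train, test = slice(0, 177), slice(177, 252)

# Train one model per modality, then ensemble their output probabilities.
models = [
    RandomForestClassifier(random_state=0).fit(X_radiomics[train], y[train]),
    LogisticRegression(max_iter=1000).fit(X_deep2d[train], y[train]),
    LogisticRegression(max_iter=1000).fit(X_deep3d[train], y[train]),
]
probs = np.mean(
    [m.predict_proba(Xm[test])[:, 1]
     for m, Xm in zip(models, (X_radiomics, X_deep2d, X_deep3d))],
    axis=0,
)
print(f"late-fusion test AUC: {roc_auc_score(y[test], probs):.3f}")
```

An early-fusion variant would instead concatenate the three feature matrices column-wise and train a single classifier on the combined input.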

Outcome Prediction in Pediatric Traumatic Brain Injury Utilizing Social Determinants of Health and Machine Learning Methods.

Kaliaev A, Vejdani-Jahromi M, Gunawan A, Qureshi M, Setty BN, Farris C, Takahashi C, AbdalKader M, Mian A

PubMed · Aug 11 2025
Considerable socioeconomic disparities exist among pediatric traumatic brain injury (TBI) patients. This study aims to analyze the effects of social determinants of health on head injury outcomes and to create a novel machine-learning algorithm (MLA) that incorporates socioeconomic factors to predict the likelihood of a positive or negative trauma-related finding on head computed tomography (CT). A cohort of blunt trauma patients under age 15 who presented to the largest safety net hospital in New England between January 2006 and December 2013 (n=211) was included in this study. Patient socioeconomic data such as race, language, household income, and insurance type were collected alongside other parameters like Injury Severity Score (ISS), age, sex, and mechanism of injury. Multivariable analysis was performed to identify significant factors in predicting a positive head CT outcome. The cohort was split into 80% training (168 samples) and 20% testing (43 samples) datasets using stratified sampling. Twenty-two multi-parametric MLAs were trained with 5-fold cross-validation and hyperparameter tuning via GridSearchCV, and top-performing models were evaluated on the test dataset. Significant factors associated with pediatric head CT outcome included ISS, age, and insurance type (p<0.05). The age of the subjects with a clinically relevant trauma-related head CT finding (median = 1.8 years) was significantly different from the age of patients without such findings (median = 9.1 years). These predictors were utilized to train the machine learning models. With ISS, the Fine Gaussian SVM achieved the highest test AUC (0.923), with accuracy=0.837, sensitivity=0.647, and specificity=0.962. The Coarse Tree yielded accuracy=0.837, AUC=0.837, sensitivity=0.824, and specificity=0.846. Without ISS, the Narrow Neural Network performed best with accuracy=0.837, AUC=0.857, sensitivity=0.765, and specificity=0.885. Key predictors of clinically relevant head CT findings in pediatric TBI include ISS, age, and social determinants of health, with children under 5 at higher risk. A novel Fine Gaussian SVM model outperformed the other MLAs, offering high accuracy in predicting outcomes. This tool shows promise for improving clinical decisions while minimizing radiation exposure in children. TBI = Traumatic Brain Injury; ISS = Injury Severity Score; MLA = Machine Learning Algorithm; CT = Computed Tomography; AUC = Area Under the Curve.
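A rough sketch of the reported workflow, under the assumption that a "Fine Gaussian SVM" corresponds roughly to an RBF-kernel SVC: stratified 80/20 split, 5-fold GridSearchCV for hyperparameter tuning, and test-set AUC. The predictors and labels below are simulated stand-ins for ISS, age, and insurance type.

```python
# Hedged sketch of the described tuning/evaluation procedure; not the study's code.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
# Placeholder predictors: ISS, age (years), and a binary insurance-type indicator.
X = np.column_stack([rng.integers(1, 40, 211),
                     rng.uniform(0, 15, 211),
                     rng.integers(0, 2, 211)])
y = rng.integers(0, 2, size=211)          # 1 = positive trauma-related head CT finding
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1]},
    cv=5, scoring="roc_auc",
)
grid.fit(X_tr, y_tr)
print(f"test AUC: {roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1]):.3f}")
```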

A Systematic Review of Multimodal Deep Learning and Machine Learning Fusion Techniques for Prostate Cancer Classification

Manzoor, F., Gupta, V., Pinky, L., Wang, Z., Chen, Z., Deng, Y., Neupane, S.

medRxiv preprint · Aug 11 2025
Prostate cancer remains one of the most prevalent malignancies and a leading cause of cancer-related deaths among men worldwide. Despite advances in traditional diagnostic methods such as prostate-specific antigen testing, digital rectal examination, and multiparametric magnetic resonance imaging, these approaches remain constrained by modality-specific limitations, suboptimal sensitivity and specificity, and reliance on expert interpretation, which may introduce diagnostic inconsistency. Multimodal deep learning and machine learning fusion, which integrates diverse data sources including imaging, clinical, and molecular information, has emerged as a promising strategy to enhance the accuracy of prostate cancer classification. This review aims to outline the current state-of-the-art deep learning and machine learning-based fusion techniques for prostate cancer classification, focusing on their implementation, performance, challenges, and clinical applicability. Following the PRISMA guidelines, a total of 131 studies were identified, of which 27 met the inclusion criteria for studies published between 2021 and 2025. Extracted data included input techniques, deep learning architectures, performance metrics, and validation approaches. The majority of the studies used an early fusion approach with convolutional neural networks to integrate the data. Clinical and imaging data were the most commonly used modalities in the reviewed studies for prostate cancer research. Overall, multimodal deep learning and machine learning-based fusion significantly advances prostate cancer classification and outperforms unimodal approaches.

Adapting Biomedical Foundation Models for Predicting Outcomes of Anti-Seizure Medications

Pham, D. K., Mehta, D., Jiang, Y., Thom, D., Chang, R. S.-k., Foster, E., Fazio, T., Holper, S., Verspoor, K., Liu, J., Nhu, D., Barnard, S., O'Brien, T., Chen, Z., French, J., Kwan, P., Ge, Z.

medRxiv preprint · Aug 11 2025
Epilepsy affects over 50 million people worldwide, with anti-seizure medications (ASMs) as the primary treatment for seizure control. However, ASM selection remains a "trial and error" process due to the lack of reliable predictors of effectiveness and tolerability. While machine learning approaches have been explored, existing models are limited to predicting outcomes only for ASMs encountered during training and have not leveraged recent biomedical foundation models for this task. This work investigates ASM outcome prediction using only patient MRI scans and reports. Specifically, we leverage biomedical vision-language foundation models and introduce a novel contextualized instruction-tuning framework that integrates expert-built knowledge trees of MRI entities to enhance their performance. Additionally, by training only on the four most commonly prescribed ASMs, our framework enables generalization to predicting outcomes and effectiveness for unseen ASMs not present during training. We evaluate our instruction-tuning framework on two retrospective epilepsy patient datasets, achieving an average AUC of 71.39 and 63.03 in predicting outcomes for four primary ASMs and three completely unseen ASMs, respectively. Our approach improves the AUC by 5.53 and 3.51 compared to standard report-based instruction tuning for seen and unseen ASMs, respectively. Our code, MRI knowledge tree, prompting templates, and TREE-TUNE generated instruction-answer tuning dataset are available at the link.

Construction and validation of a urinary stone composition prediction model based on machine learning.

Guo J, Zhang J, Zhang J, Xu C, Wang X, Liu C

PubMed · Aug 11 2025
The composition of urinary calculi serves as a critical determinant for personalized surgical strategies; however, such compositional data are often unavailable preoperatively. This study aims to develop a machine learning-based preoperative prediction model for stone composition and evaluate its clinical utility. A retrospective cohort study design was employed to include patients with urinary calculi admitted to the Department of Urology at the Second Affiliated Hospital of Zhengzhou University from 2019 to 2024. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) regression combined with multivariate logistic regression, and a binary prediction model for urinary calculi was subsequently constructed. Model validation was conducted using metrics such as the area under the curve (AUC), while Shapley Additive Explanations (SHAP) values were applied to interpret the predictive outcomes. Among 708 eligible patients, distinct prediction models were established for four stone types: calcium oxalate stones: logistic regression achieved optimal performance (AUC = 0.845), with maximum stone CT value, 24-hour urinary oxalate, and stone size as top predictors (SHAP-ranked); infection stones: logistic regression (AUC = 0.864) prioritized stone size, urinary pH, and recurrence history; uric acid stones: the LASSO-ridge-elastic net model demonstrated exceptional accuracy (AUC = 0.961), driven by maximum CT value, 24-hour oxalate, and urinary calcium; calcium-containing stones: logistic regression achieved strong performance (AUC = 0.953), relying on CT value, 24-hour calcium, and stone size. This study developed a machine learning prediction model based on multi-algorithm integration, achieving accurate preoperative discrimination of urinary stone composition. The integration of key imaging features with metabolic indicators enhanced the model's predictive performance.
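One per-stone-type model of this kind might be sketched as below: LASSO screens candidate predictors, a binary logistic regression is then fit on the retained features, and AUC is computed on held-out data; SHAP values could subsequently rank each predictor's contribution. All data and feature names are invented for illustration, not drawn from the study.

```python
# Hedged sketch of a LASSO-screened logistic model for one stone type.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
feature_names = ["max_ct_value", "stone_size", "urine_ph", "urine_oxalate_24h",
                 "urine_calcium_24h", "recurrence_history"]
X = rng.normal(size=(708, len(feature_names)))
# Synthetic binary target loosely tied to the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=708) > 0).astype(int)

X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.3,
                                          stratify=y, random_state=0)

# LASSO-based screening: keep features with non-zero coefficients.
lasso = LassoCV(cv=5).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)
print("selected:", [feature_names[i] for i in keep])

clf = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
print(f"test AUC: {roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1]):.3f}")
# SHAP values could then be computed on `clf` to rank each retained predictor.
```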

CMVFT: A Multi-Scale Attention Guided Framework for Enhanced Keratoconus Suspect Classification in Multi-View Corneal Topography.

Lu Y, Li B, Zhang Y, Qi Y, Shi X

PubMed · Aug 11 2025
Retrospective cross-sectional study. To develop a multi-view fusion framework that effectively identifies suspect keratoconus cases and creates an opportunity for early clinical intervention. A total of 573 corneal topography maps representing eyes classified as normal, suspect, or keratoconus were included. We designed the Corneal Multi-View Fusion Transformer (CMVFT), which integrates features from seven standard corneal topography maps. A pretrained ResNet-50 extracts single-view representations that are further refined by a custom-designed Multi-Scale Attention Module (MSAM). This integrated design specifically compensates for the representation gap commonly encountered when applying Transformers to small-sample corneal topography datasets by dynamically bridging local convolution-based feature extraction with global self-attention mechanisms. A subsequent fusion Transformer then models long-range dependencies across views for comprehensive multi-view feature integration. The primary measure was the framework's ability to differentiate suspect cases from normal and keratoconus cases, thereby creating a pathway for early clinical intervention. Experimental evaluation demonstrated that CMVFT effectively distinguishes suspect cases within a feature space characterized by overlapping attributes. Ablation studies confirmed that both the MSAM and the fusion Transformer are essential for robust multi-view feature integration, successfully compensating for potential representation shortcomings in small datasets. This study is the first to apply a Transformer-driven multi-view fusion approach in corneal topography analysis. By compensating for the representation gap inherent in small-sample settings, CMVFT shows promise in enabling the identification of suspect keratoconus cases and supporting early intervention strategies.
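A schematic PyTorch sketch in the spirit of this design is shown below: a shared ResNet-50 encodes each of the seven topography maps, and a Transformer encoder fuses the per-view tokens before classification. The MSAM refinement stage is omitted, and every dimension and layer count is an assumption rather than the authors' configuration.

```python
# Schematic multi-view fusion sketch (assumes torch and torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiViewFusion(nn.Module):
    def __init__(self, num_views=7, num_classes=3, dim=256):
        super().__init__()
        self.backbone = resnet50(weights=None)
        self.backbone.fc = nn.Identity()              # 2048-d feature per view
        self.project = nn.Linear(2048, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)       # normal / suspect / keratoconus

    def forward(self, views):                         # views: (B, V, 3, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))    # (B*V, 2048)
        tokens = self.project(feats).view(b, v, -1)   # (B, V, dim)
        fused = self.fusion(tokens).mean(dim=1)       # pool across the view tokens
        return self.head(fused)

logits = MultiViewFusion()(torch.randn(2, 7, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```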