
Ultrasound placental image texture analysis using artificial intelligence and deep learning models to predict hypertension in pregnancy.

Arora U, Vigneshwar P, Sai MK, Yadav R, Sengupta D, Kumar M

PubMed · Jun 21 2025
This study evaluates the application of ultrasound placental image texture analysis for the prediction of hypertensive disorders of pregnancy (HDP) using deep learning (DL) algorithms. In this prospective observational study, placental ultrasound images were taken serially at 11-14 weeks (T1), 20-24 weeks (T2), and 28-32 weeks (T3). Pregnant women with blood pressure at or above 140/90 mmHg on two occasions 4 h apart were considered to have HDP. The image data of women with HDP were compared with those of women with a normal outcome using DL techniques such as convolutional neural networks (CNN), transfer learning, and a Vision Transformer (ViT) with a TabNet classifier. The accuracy and Cohen kappa scores of the different DL techniques were compared. A total of 600/1008 (59.5%) subjects had a normal outcome, and 143/1008 (14.2%) had HDP; the remainder, 265/1008 (26.3%), had other adverse outcomes. In the basic CNN model, the accuracy was 81.6% for T1, 80% for T2, and 82.8% for T3. Using the EfficientNet-B0 transfer learning model, the accuracy was 87.7%, 85.3%, and 90.3% for T1, T2, and T3, respectively. Using a TabNet classifier with a ViT, the accuracy and area under the receiver operating characteristic curve scores were 91.4% and 0.915 for T1, 90.2% and 0.904 for T2, and 90.3% and 0.907 for T3. The sensitivity and specificity for HDP prediction using the ViT were 89.1% and 91.7% for T1, 86.6% and 93.7% for T2, and 85.6% and 94.6% for T3. Ultrasound placental image texture analysis using DL could differentiate women with a normal outcome from those with HDP with excellent accuracy and could open new avenues for research in this field.
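The transfer-learning arm above lends itself to a short sketch. Below is a minimal PyTorch/torchvision example of an EfficientNet-B0 binary classifier for placental image crops; the grayscale-to-RGB preprocessing, input size, optimizer, and two-class head are illustrative assumptions, not the authors' exact configuration (which also includes trimester-specific models and a ViT with a TabNet classifier).

```python
# Minimal sketch, assuming binary HDP-vs-normal labels and placental crops
# loaded as PIL images; not the authors' exact pipeline.
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_efficientnet_b0(num_classes: int = 2) -> nn.Module:
    """EfficientNet-B0 pretrained on ImageNet with a new classification head."""
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
    in_features = model.classifier[1].in_features  # 1280 for B0
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model

# Ultrasound frames are single-channel; replicate to 3 channels for the
# ImageNet-pretrained backbone and normalize with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = build_efficientnet_b0()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```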

Advances of MR imaging in glioma: what the neurosurgeon needs to know.

Falk Delgado A

PubMed · Jun 21 2025
Glial tumors and especially glioblastoma present a major challenge in neuro-oncology due to their infiltrative growth, resistance to therapy, and poor overall survival despite aggressive treatments such as maximal safe resection and chemoradiotherapy. These tumors typically manifest through neurological symptoms such as seizures, headaches, and signs of increased intracranial pressure, prompting urgent neuroimaging. At initial diagnosis, MRI plays a central role in differentiating true neoplasms from tumor mimics, including inflammatory or infectious conditions. Advanced techniques such as perfusion-weighted imaging (PWI) and diffusion-weighted imaging (DWI) enhance diagnostic specificity and may prevent unnecessary surgical intervention. In the preoperative phase, MRI contributes to surgical planning through the use of functional MRI (fMRI) and diffusion tensor imaging (DTI), enabling localization of eloquent cortex and white matter tracts. These modalities support safer resections by informing trajectory planning and risk assessment. Emerging MR techniques, including magnetic resonance spectroscopy, amide proton transfer imaging, and 2HG quantification, offer further potential in delineating tumor infiltration beyond contrast-enhancing margins. Postoperatively, MRI is important for evaluating residual tumor, detecting surgical complications, and guiding radiotherapy planning. During treatment surveillance, MRI assists in distinguishing true progression from pseudoprogression or radiation necrosis, thereby guiding decisions on additional surgery, changes in systemic therapy, or inclusion in clinical trials. The continued evolution of MRI hardware, software, and image analysis, particularly with the integration of machine learning, will be critical for supporting precision neurosurgical oncology. This review highlights how advanced MRI techniques can inform clinical decision-making at each stage of care in patients with high-grade gliomas.

Development of Radiomics-Based Risk Prediction Models for Stages of Hashimoto's Thyroiditis Using Ultrasound, Clinical, and Laboratory Factors.

Chen JH, Kang K, Wang XY, Chi JN, Gao XM, Li YX, Huang Y

PubMed · Jun 21 2025
To develop a radiomics-based risk prediction model for differentiating the stages of Hashimoto's thyroiditis (HT). Data from patients with HT who underwent definitive surgical pathology between January 2018 and December 2023 were retrospectively collected and categorized into early HT (patients with positive antibodies alone or accompanied by elevated thyroid hormones) and late HT (patients with positive antibodies who had begun to show subclinical hypothyroidism or had developed hypothyroidism). Ultrasound images and five clinical and 12 laboratory indicators were obtained. Six classifiers were used to construct radiomics models. The gradient boosting decision tree (GBDT) classifier was used to screen for the best features and explore the main risk factors for differentiating early HT. The performance of each model was evaluated using the receiver operating characteristic (ROC) curve. The model was validated using one internal and two external test cohorts. A total of 785 patients were enrolled. Extreme gradient boosting (XGBoost) showed the best performance in the training cohort, with an AUC of 0.999 (0.998, 1), and AUC values of 0.993 (0.98, 1), 0.947 (0.866, 1), and 0.98 (0.939, 1) in the internal test, first external, and second external cohorts, respectively. Ultrasound radiomic features contributed 78.6% (11/14) of the model's features. The first-order feature of the transverse thyroid ultrasound section, the gray-level run-length matrix (GLRLM) texture feature of the longitudinal section, and free thyroxine showed the greatest contributions to the model. Our study developed and tested a risk prediction model that effectively differentiated HT stages, enabling more precise and proactive management of patients with HT at an earlier stage.
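A minimal sketch of the two modelling steps named above, GBDT-based feature screening followed by an XGBoost classifier, is shown below using scikit-learn and xgboost. The feature matrix, the number of retained features, and all hyperparameters are placeholders, not the study's actual data or tuned settings.

```python
# Placeholder data and illustrative hyperparameters; not the study's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((785, 60))     # stand-in for radiomic + clinical + laboratory features
y = rng.integers(0, 2, 785)   # stand-in for early/late HT labels
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# GBDT importance ranking to screen the most informative features.
gbdt = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
top_idx = np.argsort(gbdt.feature_importances_)[::-1][:14]   # keep e.g. 14 features

# XGBoost classifier on the screened subset, evaluated by ROC AUC.
xgb = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                    eval_metric="logloss", random_state=0)
xgb.fit(X_train[:, top_idx], y_train)
auc = roc_auc_score(y_test, xgb.predict_proba(X_test[:, top_idx])[:, 1])
print(f"hold-out AUC: {auc:.3f}")
```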

DRIMV_TSK: An Interpretable Surgical Evaluation Model for Incomplete Multi-View Rectal Cancer Data

Wei Zhang, Zi Wang, Hanwen Zhou, Zhaohong Deng, Weiping Ding, Yuxi Ge, Te Zhang, Yuanpeng Zhang, Kup-Sze Choi, Shitong Wang, Shudong Hu

arXiv preprint · Jun 21 2025
A reliable evaluation of surgical difficulty can improve treatment success in rectal cancer, and the current evaluation method is based on clinical data. However, as technology develops, more data about rectal cancer can be collected, and advances in artificial intelligence are making its application to rectal cancer treatment possible. In this paper, a multi-view rectal cancer dataset is first constructed to give a more comprehensive view of patients, including a high-resolution MRI image view, a fat-suppressed MRI image view, and a clinical data view. Then, an interpretable incomplete multi-view surgical evaluation model is proposed, considering that it is hard to obtain extensive and complete patient data in real application scenarios. Specifically, a dual-representation incomplete multi-view learning model is first proposed to extract the common information between views and the specific information in each view. In this model, missing-view imputation is integrated into representation learning, and a second-order similarity constraint is introduced to improve the cooperative learning between these two parts. Then, based on the imputed multi-view data and the learned dual representation, a multi-view surgical evaluation model with a TSK fuzzy system is proposed. In the proposed model, a cooperative learning mechanism is constructed to explore the consistent information between views, and Shannon entropy is introduced to adapt the view weights. On the MVRC dataset, DRIMV_TSK was compared with several advanced algorithms and obtained the best results.
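For readers unfamiliar with TSK fuzzy systems, the sketch below shows first-order TSK inference in NumPy: Gaussian antecedent memberships fire a small rule base, and normalized firing strengths blend rule-wise linear outputs into one prediction. The rule parameters are random placeholders, and the dual-representation, imputation, and entropy-based view-weighting components of DRIMV_TSK are not reproduced here.

```python
# Minimal first-order TSK fuzzy inference sketch; parameters are random
# placeholders, not values learned from the MVRC data.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_features = 5, 8

centers = rng.normal(size=(n_rules, n_features))          # antecedent centres
widths = np.full((n_rules, n_features), 1.0)              # antecedent widths
consequents = rng.normal(size=(n_rules, n_features + 1))  # linear consequents (+ bias)

def tsk_predict(x: np.ndarray) -> float:
    """Fire each rule, normalize the firing strengths, and blend the
    rule-wise linear outputs into one crisp prediction."""
    firing = np.exp(-((x - centers) ** 2 / (2 * widths ** 2)).sum(axis=1))
    weights = firing / firing.sum()
    rule_outputs = consequents[:, :-1] @ x + consequents[:, -1]
    return float(weights @ rule_outputs)

print(tsk_predict(rng.normal(size=n_features)))
```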

The diagnostic accuracy of MRI radiomics in axillary lymph node metastasis prediction: a systematic review and meta-analysis.

Motiei M, Mansouri SS, Tamimi A, Farokhi S, Fakouri A, Rassam K, Sedighi-Pirsaraei N, Hassanzadeh-Rad A

PubMed · Jun 20 2025
Breast cancer is the most prevalent malignancy in women and a leading cause of mortality. Accurate assessment of axillary lymph node metastasis (LNM) is critical for breast cancer management, and exploring non-invasive methods such as radiomics for the detection of LNM is highly important. We systematically searched PubMed, Embase, Scopus, Web of Science, and Google Scholar until 11 March 2024. To assess the risk of bias and quality of studies, we used the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool as well as the radiomics quality score (RQS). The area under the curve (AUC), sensitivity, specificity, and accuracy were determined for each study to evaluate the diagnostic accuracy of magnetic resonance imaging (MRI) radiomics for detecting LNM in patients with breast cancer. This meta-analysis of 20 studies (5072 patients) demonstrated an overall AUC of 0.83 (95% confidence interval (CI): 0.80-0.86). Subgroup analysis revealed a trend towards higher specificity when radiomics was combined with clinical factors (0.83) compared with radiomics alone (0.79). Sensitivity analysis confirmed the robustness of the findings, and publication bias was not evident. The radiomics models increased the probability of LNM from a pre-test value of 37% to 73.2% after a positive result and decreased it to 8% after a negative result, highlighting their potential clinical utility. Radiomics, as a non-invasive method, demonstrates strong potential for detecting LNM in breast cancer, offering clinical promise. However, further standardization and validation are needed in future studies.
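The reported shift from a 37% pre-test probability to 73.2% (positive result) or 8% (negative result) follows standard likelihood-ratio arithmetic. The sketch below walks through that calculation; the sensitivity and specificity values are assumed for illustration only, so the outputs approximate rather than reproduce the meta-analysis figures.

```python
# Standard likelihood-ratio update; sensitivity/specificity are assumed values.
def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float, result_positive: bool) -> float:
    """Convert pre-test probability to odds, apply LR+ or LR-, convert back."""
    if result_positive:
        lr = sensitivity / (1 - specificity)      # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity      # negative likelihood ratio
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

sens, spec = 0.80, 0.83  # illustrative values, not the pooled estimates
print(post_test_probability(0.37, sens, spec, result_positive=True))   # ~0.73
print(post_test_probability(0.37, sens, spec, result_positive=False))  # ~0.12
```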

MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images.

Ling D, Jiao X

PubMed · Jun 20 2025
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
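The distillation step described above, compressing the multi-view teacher into a single-view student, typically combines a softened teacher-student KL term with ordinary cross-entropy. Below is a minimal PyTorch sketch of such a loss; the temperature, weighting, and two-class logits are illustrative assumptions, and the paper's shuffle-based fusion and dual-attention modules are not reproduced.

```python
# Minimal knowledge-distillation loss sketch with illustrative hyperparameters.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.7) -> torch.Tensor:
    """Blend soft-target KL divergence with ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example with random logits for a batch of 8 benign/malignant predictions.
student = torch.randn(8, 2)
teacher = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student, teacher, labels))
```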

Effective workflow from multimodal MRI data to model-based prediction.

Jung K, Wischnewski KJ, Eickhoff SB, Popovych OV

PubMed · Jun 20 2025
Predicting human behavior from neuroimaging data remains a complex challenge in neuroscience. To address this, we propose a systematic and multi-faceted framework that incorporates a model-based workflow using dynamical brain models. This approach utilizes multi-modal MRI data for brain modeling and applies the optimized modeling outcome to machine learning. We demonstrate the performance of such an approach through several examples, such as sex classification and the prediction of cognition or personality traits. In particular, we show that incorporating the simulated data into machine learning can significantly improve prediction performance compared with using empirical features alone. These results suggest considering the output of the dynamical brain models as an additional neuroimaging data modality that complements empirical data by capturing brain features that are difficult to measure directly. The discussed model-based workflow can offer a promising avenue for investigating and understanding inter-individual variability in brain-behavior relationships and for enhancing prediction performance in neuroimaging research.
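The core claim, that simulated features from dynamical brain models complement empirical MRI features, can be illustrated with a short scikit-learn sketch that compares cross-validated accuracy with and without the simulated block. The feature matrices and labels below are random placeholders standing in for empirical and simulated functional-connectivity features.

```python
# Random placeholders standing in for empirical and simulated features.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects = 200
empirical = rng.normal(size=(n_subjects, 100))   # e.g. empirical connectivity features
simulated = rng.normal(size=(n_subjects, 100))   # e.g. features from the fitted brain model
y = rng.integers(0, 2, n_subjects)               # e.g. sex labels

clf = make_pipeline(StandardScaler(), RidgeClassifier())
acc_empirical = cross_val_score(clf, empirical, y, cv=10).mean()
acc_combined = cross_val_score(clf, np.hstack([empirical, simulated]), y, cv=10).mean()
print(f"empirical only: {acc_empirical:.2f}, empirical + simulated: {acc_combined:.2f}")
```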

Detection of breast cancer using fractional discrete sinc transform based on empirical Fourier decomposition.

Azmy MM

PubMed · Jun 20 2025
Breast cancer is the most common cause of death among women worldwide, and early detection is important for saving patients' lives. Ultrasound and mammography are the most common noninvasive methods for detecting breast cancer, and computer techniques are used to help physicians diagnose it. In most previous studies, the classification metrics were not high enough to achieve a correct diagnosis. In this study, new approaches were applied to detect breast cancer images from three databases. Features were extracted from the images using MATLAB R2022a. Novel approaches were obtained using new fractional transforms, which were derived from the fractional Fourier transform and novel discrete transforms; the novel discrete transforms were in turn derived from the discrete sine and cosine transforms. The steps of the approaches were as follows. First, the fractional transforms were applied to the breast images. Then, the empirical Fourier decomposition (EFD) was computed, and the mean, variance, kurtosis, and skewness were calculated. Finally, RNN-BiLSTM (recurrent neural network with bidirectional long short-term memory) was used in the classification phase. The proposed approaches were compared to obtain the highest accuracy rate in the classification phase based on the different fractional transforms. The highest accuracy was obtained when the fractional discrete sinc transform of approach 4 was applied: the area under the receiver operating characteristic curve (AUC) was 1, and the accuracy, sensitivity, specificity, precision, G-mean, and F-measure were all 100%. If traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), were used, the classification metrics would be lower. Therefore, the fourth approach used RNN-BiLSTM to model the features of the breast images effectively, and it can be programmed on a computer to help physicians correctly classify breast images.
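The statistical-feature and classification stages described above can be sketched briefly: per-component mean, variance, skewness, and kurtosis are computed, and the resulting feature sequence is passed to a small bidirectional LSTM. The decomposition itself (fractional transform plus EFD) is assumed to have been run already, and the components, layer sizes, and class count below are illustrative placeholders.

```python
# Minimal sketch of moment features feeding a BiLSTM classifier; the
# decomposed components here are random placeholders.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import kurtosis, skew

def moment_features(components: np.ndarray) -> np.ndarray:
    """components: (n_components, n_samples) -> (n_components, 4) features."""
    return np.stack([
        components.mean(axis=1),
        components.var(axis=1),
        skew(components, axis=1),
        kurtosis(components, axis=1),
    ], axis=1)

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (batch, n_components, 4)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # last step, both directions concatenated

feats = moment_features(np.random.randn(6, 1024))          # e.g. 6 components
logits = BiLSTMClassifier()(torch.tensor(feats[None], dtype=torch.float32))
print(logits.shape)  # torch.Size([1, 2])
```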

Generalizable model to predict new or progressing compression fractures in tumor-infiltrated thoracolumbar vertebrae in an all-comer population.

Flores A, Nitturi V, Kavoussi A, Feygin M, Andrade de Almeida RA, Ramirez Ferrer E, Anand A, Nouri S, Allam AK, Ricciardelli A, Reyes G, Reddy S, Rampalli I, Rhines L, Tatsui CE, North RY, Ghia A, Siewerdsen JH, Ropper AE, Alvarez-Breckenridge C

PubMed · Jun 20 2025
Neurosurgical evaluation is required for spinal metastases at high risk of leading to a vertebral body fracture, in both irradiated and nonirradiated vertebrae. Understanding fracture risk is critical in determining management, including follow-up timing and prophylactic interventions. Herein, the authors report the results of a machine learning model that predicts the development or progression of a pathological vertebral compression fracture (VCF) in metastatic tumor-infiltrated thoracolumbar vertebrae in an all-comer population. A multi-institutional all-comer cohort of patients with tumor-containing vertebral levels spanning T1 through L5 and at least 1 year of follow-up was included in the study. Clinical features of the patients, diseases, and treatments were collected. CT radiomic features were extracted from tumor-infiltrated vertebral bodies that did or did not subsequently fracture or progress. Recursive feature elimination (RFE) of both radiomic and clinical features was performed. The resulting features were used to create a purely clinical model, a purely radiomic model, and a combined clinical-radiomic model. A Spine Instability Neoplastic Score (SINS) model was created for a baseline performance comparison. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity (with 95% confidence intervals) with tenfold cross-validation. Within 1 year from the initial CT, 123 of 977 vertebrae developed VCF. Selected clinical features included SINS, the SINS component for < 50% vertebral body collapse, the SINS component for "none of the prior 3" (i.e., "none of the above" on the SINS component for vertebral body involvement), histology, age, and BMI. Of the 2015 radiomic features, RFE selected 19 to be used in the pure radiomic model and the combined clinical-radiomic model. The best performing model was a random forest classifier using both clinical and radiomic features, demonstrating an AUROC of 0.86 (95% CI 0.82-0.9), sensitivity of 0.78 (95% CI 0.70-0.84), and specificity of 0.80 (95% CI 0.77-0.82). This performance was significantly higher than that of the best SINS-alone model (AUROC 0.75, 95% CI 0.70-0.80) and higher, although not statistically significantly, than that of the clinical-only model (AUROC 0.82, 95% CI 0.77-0.87). The authors developed a clinically generalizable machine learning model to predict the risk of a new or progressing VCF in an all-comer population. This model addresses limitations of prior work and was trained on the largest cohort of patients and vertebrae published to date. If validated, the model could lead to more consistent and systematic identification of high-risk vertebrae, resulting in faster, more accurate triage of patients for optimal management.
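A minimal scikit-learn sketch of the RFE-plus-random-forest pipeline described above is given below. The placeholder feature matrix is much smaller than the study's 2015 radiomic plus clinical features, and the forest settings and retained-feature count (19, mirroring the reported selection) are illustrative rather than the authors' tuned values.

```python
# Placeholder data; feature counts shrunk so the sketch runs quickly.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((977, 120))            # stand-in for radiomic + clinical features
y = rng.binomial(1, 123 / 977, 977)   # stand-in for fracture/no-fracture labels

# Recursive feature elimination driven by random-forest importances.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=19, step=0.1)
X_selected = selector.fit_transform(X, y)

# Tenfold cross-validated AUROC of the final classifier.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
aurocs = cross_val_score(clf, X_selected, y, cv=10, scoring="roc_auc")
print(f"10-fold AUROC: {aurocs.mean():.2f} +/- {aurocs.std():.2f}")
```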

BioTransX: A novel bi-former based hybrid model with bi-level routing attention for brain tumor classification with explainable insights.

Rajpoot R, Jain S, Semwal VB

PubMed · Jun 20 2025
Brain tumors, known for their life-threatening implications, underscore the urgency of precise and interpretable early detection. Expertise remains essential for accurate identification through MRI scans due to the intricacies involved. However, the growing recognition of automated detection systems holds the potential to enhance accuracy and improve interpretability. By consistently providing easily comprehensible results, these automated solutions could boost the overall efficiency and effectiveness of brain tumor diagnosis, promising a transformative era in healthcare. This paper introduces a new hybrid model, BioTransX, which uses a bi-former encoder mechanism, a dynamic sparse attention-based transformer, in conjunction with ensemble convolutional networks. Recognizing the importance of better contrast and data quality, we applied Contrast-Limited Adaptive Histogram Equalization (CLAHE) during the initial data processing stage. Additionally, to address the crucial aspect of model interpretability, we integrated Grad-CAM and Gradient Attention Rollout, which elucidate decisions by highlighting influential regions within medical images. Our hybrid deep learning model was primarily evaluated on the Kaggle MRI dataset for multi-class brain tumor classification, achieving a mean accuracy and F1-score of 99.29%. To validate its generalizability and robustness, BioTransX was further tested on two additional benchmark datasets, BraTS and Figshare, where it consistently maintained high performance across key evaluation metrics. The transformer-based hybrid model demonstrated promising performance in explainable identification and offered notable advantages in computational efficiency and memory usage. These strengths differentiate BioTransX from existing models in the literature and make it ideal for real-world deployment in resource-constrained clinical infrastructures.
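The CLAHE preprocessing step mentioned above is a one-liner in OpenCV; the sketch below applies it to an 8-bit grayscale slice. The clip limit and tile size are OpenCV's common defaults, used here for illustration rather than the values tuned for BioTransX.

```python
# Minimal CLAHE preprocessing sketch with illustrative parameters.
import cv2
import numpy as np

def apply_clahe(gray_image: np.ndarray,
                clip_limit: float = 2.0,
                tile_grid_size: tuple = (8, 8)) -> np.ndarray:
    """Contrast-Limited Adaptive Histogram Equalization on an 8-bit image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(gray_image)

# Example on a synthetic low-contrast slice; a real pipeline would load an
# MRI slice with cv2.imread(path, cv2.IMREAD_GRAYSCALE).
slice_8bit = (np.random.rand(256, 256) * 80 + 60).astype(np.uint8)
enhanced = apply_clahe(slice_8bit)
print(enhanced.shape, enhanced.dtype)
```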