Page 45 of 3463455 results

Enhancing diagnostic precision for thyroid C-TIRADS category 4 nodules: a hybrid deep learning and machine learning model integrating grayscale and elastographic ultrasound features.

Zou D, Lyu F, Pan Y, Fan X, Du J, Mai X

PubMed · Sep 1, 2025
Accurate and timely diagnosis of thyroid cancer is critical for clinical care, and artificial intelligence can enhance this process. This study aims to develop and validate an intelligent assessment model called C-TNet, based on the Chinese Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules (C-TIRADS) and real-time elasticity imaging. The goal is to differentiate between benign and malignant characteristics of thyroid nodules classified as C-TIRADS category 4. We evaluated the performance of C-TNet against ultrasonographers and BMNet, a model trained exclusively on histopathological findings indicating benign or malignant nature. The study included 3,545 patients with pathologically confirmed C-TIRADS category 4 thyroid nodules from two tertiary hospitals in China: Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine (n=3,463 patients) and Jiangyin People's Hospital (n=82 patients). The cohort from the Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine was randomly divided into a training set and a validation set (7:3 ratio), while the cohort from Jiangyin People's Hospital served as the external validation set. The C-TNet model was developed by extracting image features from the training set and integrating them with six commonly used classifier algorithms: logistic regression (LR), linear discriminant analysis (LDA), random forest (RF), kernel support vector machine (K-SVM), adaptive boosting (AdaBoost), and Naive Bayes (NB). Its performance was evaluated using both internal and external validation sets, with statistical differences analyzed through the Chi-squared test. The C-TNet model effectively integrates feature extraction from deep neural networks with an RF classifier, utilizing grayscale and elastography ultrasound data.
It successfully differentiates benign from malignant thyroid nodules, achieving an area under the curve (AUC) of 0.873, comparable to the performance of senior physicians (AUC: 0.868). The model demonstrates generalizability across diverse clinical settings, positioning itself as a transformative decision-support tool for enhancing the risk stratification of thyroid nodules.
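As a hedged illustration of the fusion step described above, the following sketch concatenates deep-network embeddings from two ultrasound modalities and classifies them with a random forest. The feature arrays, embedding dimensions, and labels here are synthetic stand-ins, not values from the study.

```python
# Toy sketch of bimodal feature fusion + random forest classification.
# All arrays are synthetic stand-ins for CNN embeddings of ultrasound images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
gray_feats = rng.normal(size=(n, 64))    # stand-in for grayscale-image embeddings
elasto_feats = rng.normal(size=(n, 64))  # stand-in for elastography embeddings
y = (gray_feats[:, 0] + elasto_feats[:, 0] > 0).astype(int)  # toy benign/malignant label

X = np.hstack([gray_feats, elasto_feats])  # bimodal fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"toy AUC: {auc:.2f}")
```

The concatenation step is one simple fusion choice; the study's actual feature-extraction network and fusion strategy may differ.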

An innovative bimodal computed tomography data-driven deep learning model for predicting aortic dissection: a multi-center study.

Li Z, Chen L, Zhang S, Zhang X, Zhang J, Ying M, Zhu J, Li R, Song M, Feng Z, Zhang J, Liang W

PubMed · Sep 1, 2025
Aortic dissection (AD) is a lethal emergency requiring prompt diagnosis. Current computed tomography angiography (CTA)-based diagnosis requires contrast agents, which takes time, whereas existing deep learning (DL) models only support single-modality inputs [non-contrast computed tomography (CT) or CTA]. In this study, we propose a bimodal DL framework that processes both types independently, enabling dual-path detection and improving diagnostic efficiency. Patients who underwent non-contrast CT and CTA from February 2016 to September 2021 were retrospectively included from three institutions: the First Affiliated Hospital, Zhejiang University School of Medicine (Center I), Zhejiang Hospital (Center II), and Yiwu Central Hospital (Center III). A two-stage DL model for predicting AD was developed. The first stage used an aorta detection network (AoDN) to localize the aorta in non-contrast CT or CTA images. Image patches containing the detected aorta were cut from the CT images and combined to form an image patch sequence, which was input to an aortic dissection diagnosis network (ADDiN) to diagnose AD in the second stage. Performance was assessed using average precision at an intersection-over-union threshold of 0.5 (AP@0.5) for aorta detection and the area under the receiver operating characteristic curve (AUC) for diagnosis. The first cohort, comprising 102 patients (53±15 years, 80 men) from two institutions, was used for the AoDN, whereas the second cohort, consisting of 861 cases (55±15 years, 623 men) from three institutions, was used for the ADDiN. For the aorta detection task, the AoDN achieved an AP@0.5 of 99.14% on the non-contrast CT test set and 99.34% on the CTA test set. For the AD diagnosis task, the ADDiN obtained AUCs of 0.98 on the non-contrast CT test set and 0.99 on the CTA test set. The proposed bimodal CT data-driven DL model accurately diagnoses AD, facilitating prompt hospital diagnosis and treatment of AD.
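The AP@0.5 detection metric above counts a predicted aorta bounding box as correct when its intersection over union (IoU) with the ground-truth box is at least 0.5. A minimal sketch of that criterion, with made-up box coordinates:

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 50, 50)     # hypothetical ground-truth aorta box
pred = (20, 20, 60, 60)   # hypothetical predicted box

# intersection 30*30 = 900; union 1600 + 1600 - 900 = 2300 → IoU ≈ 0.391
print(round(iou(gt, pred), 3))
print(iou(gt, pred) >= 0.5)  # would not count as a hit at the 0.5 threshold
```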

Added prognostic value of histogram features from preoperative multi-modal diffusion MRI in predicting Ki-67 proliferation for adult-type diffuse gliomas.

Huang Y, He S, Hu H, Ma H, Huang Z, Zeng S, Mazu L, Zhou W, Zhao C, Zhu N, Wu J, Liu Q, Yang Z, Wang W, Shen G, Zhang N, Chu J

PubMed · Sep 1, 2025
Ki-67 labelling index (LI), a critical marker of tumor proliferation, is vital for grading adult-type diffuse gliomas and predicting patient survival. However, its accurate assessment currently relies on invasive biopsy or surgical resection, making it challenging to non-invasively predict Ki-67 LI and subsequent prognosis. Therefore, this study aimed to investigate whether histogram analysis of multi-parametric diffusion model metrics-specifically diffusion tensor imaging (DTI), diffusion kurtosis imaging (DKI), and neurite orientation dispersion and density imaging (NODDI)-could help predict Ki-67 LI in adult-type diffuse gliomas and further predict patient survival. A total of 123 patients with diffuse gliomas who underwent preoperative bipolar spin-echo diffusion magnetic resonance imaging (MRI) were included. Diffusion metrics (DTI, DKI, and NODDI) and their histogram features were extracted and used to develop a nomogram model in the training set (n=86), and the performance was verified in the test set (n=37). The area under the receiver operating characteristic curve of the nomogram model was calculated. The outcome cohort, including all 123 patients, was used to evaluate the predictive value of the diffusion nomogram model for overall survival (OS). Cox proportional hazards regression was performed to predict OS. Among the 123 patients, 87 exhibited high Ki-67 LI (Ki-67 LI >5%). The patients had a mean age of 46.08±13.24 years, with 39 being female. Tumor grading showed 46 cases of grade 2, 21 cases of grade 3, and 56 cases of grade 4. The nomogram model included eight histogram features from diffusion MRI and showed good performance for predicting Ki-67 LI, with areas under the receiver operating characteristic curve (AUCs) of 0.92 [95% confidence interval (CI): 0.85-0.98, sensitivity =0.85, specificity =0.84] and 0.84 (95% CI: 0.64-0.98, sensitivity =0.77, specificity =0.73) in the training set and test set, respectively.
The nomogram incorporating these variables showed good discrimination for Ki-67 LI prediction and glioma grading. A low nomogram model score relative to the median value in the outcome cohort was independently associated with OS (P<0.01). Accurate prediction of the Ki-67 LI in adult-type diffuse glioma patients was achieved using the multi-modal diffusion MRI histogram radiomics model, which also reliably predicted survival. ClinicalTrials.gov Identifier: NCT06572592.
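First-order histogram features of the kind extracted here from diffusion-metric maps can be sketched as follows; the voxel values, bin count, and percentile choices are illustrative assumptions, not the study's actual feature set.

```python
# Toy first-order histogram features over a synthetic diffusion-metric ROI.
import numpy as np

rng = np.random.default_rng(1)
roi = rng.gamma(shape=2.0, scale=0.4, size=5000)  # stand-in voxel values inside the tumor mask

hist, _ = np.histogram(roi, bins=32)
p = hist / hist.sum()
p = p[p > 0]  # drop empty bins before the log

features = {
    "mean": roi.mean(),
    "p10": np.percentile(roi, 10),
    "p90": np.percentile(roi, 90),
    "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
    "entropy": -(p * np.log2(p)).sum(),  # Shannon entropy of the histogram
}
for name, val in features.items():
    print(f"{name}: {val:.3f}")
```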

Evaluating machine learning models for post-surgery treatment response assessment in glioblastoma multiforme: a comparative study of gray level co-occurrence matrix (GLCM), curvelet, and combined radiomics features selected by multiple algorithms.

Alibabaei S, Yousefipour M, Rahmani M, Raminfard S, Tahmasbi M

PubMed · Sep 1, 2025
Developing quantitative methods to assess post-surgery treatment response in Glioblastoma Multiforme (GBM) is critical for improving patient outcomes and refining current subjective approaches. This study analyzes the performance of machine learning models trained on radiomic datasets derived from magnetic resonance imaging (MRI) scans of GBM patients. MRI scans from 143 GBM patients receiving adjuvant therapy post-surgery were acquired and preprocessed. A total of 92 radiomic features, including 68 gray-level co-occurrence matrix (GLCM)-based features calculated in four directions (0°, 45°, 90°, and 135°) and 24 Curvelet coefficient-based features, were extracted from each patient's segmented tumor cavity. Machine learning classifiers, including Support Vector Machine (SVM), Random Forest, K-Nearest Neighbors (KNN), AdaBoost, CatBoost, LightGBM, XGBoost, Gaussian Naïve Bayes (GNB), and Logistic Regression (LR), were trained on the extracted radiomics selected using sequential feature selection, LASSO, and PCA. Validation was performed with 10-fold cross-validation. The proposed pipeline achieved an accuracy of 87% in classifying post-surgery treatment responses in GBM patients. This accuracy was achieved with the SVM trained on a combination of GLCM and Curvelet-based radiomics selected via forward sequential algorithm-8, and with KNN trained on the GLCM and Curvelet radiomics combination selected using LASSO (alpha = 0.01). The LR model trained on Curvelet-based LASSO-selected radiomics (alpha = 0.01) also showed strong performance. The results demonstrate that MRI-based radiomics, specifically GLCM and Curvelet features, can effectively train machine learning models to quantitatively assess GBM treatment response. These models serve as valuable tools to complement qualitative evaluations, enhancing accuracy and objectivity in post-surgery outcome assessment. Not applicable.

Comparison of segmentation performance of CNNs, vision transformers, and hybrid networks for paranasal sinuses with sinusitis on CT images.

Song D, Yang S, Han JY, Kim KG, Kim ST, Yi WJ

PubMed · Sep 1, 2025
Accurate segmentation of the paranasal sinuses, including the frontal sinus (FS), ethmoid sinus (ES), sphenoid sinus (SS), and maxillary sinus (MS), plays an important role in supporting image-guided surgery (IGS) for sinusitis, facilitating safer intraoperative navigation by identifying anatomical variations and delineating surgical landmarks on CT imaging. To the best of our knowledge, no comparative studies of convolutional neural networks (CNNs), vision transformers (ViTs), and hybrid networks for segmenting each paranasal sinus in patients with sinusitis have been conducted. Therefore, the objective of this study was to compare the segmentation performance of CNNs, ViTs, and hybrid networks for individual paranasal sinuses with varying degrees of anatomical complexity and morphological and textural variations caused by sinusitis on CT images. The performance of CNNs, ViTs, and hybrid networks was compared using the Jaccard Index (JI), Dice similarity coefficient (DSC), precision (PR), recall (RC), and 95% Hausdorff Distance (HD95) as segmentation accuracy metrics, and the number of parameters (Params) and inference time (IT) for computational efficiency. The Swin UNETR hybrid network outperformed the other networks, achieving the highest segmentation scores, with a JI of 0.719, a DSC of 0.830, a PR of 0.935, and an RC of 0.758, and the lowest HD95 value of 10.529, with the smallest model size of 15.705 M Params. Also, CoTr, another hybrid network, demonstrated superior segmentation performance compared to CNNs and ViTs and achieved the fastest inference time of 0.149. Compared with CNNs and ViTs, hybrid networks significantly reduced false positives and enabled more precise boundary delineation, effectively capturing anatomical relationships among the sinuses and surrounding structures. This resulted in the lowest segmentation errors near critical surgical landmarks.
In conclusion, hybrid networks may provide a more balanced trade-off between segmentation accuracy and computational efficiency, with potential applicability in clinical decision support systems for sinusitis.
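The overlap metrics reported above (Jaccard index and Dice similarity coefficient) can be sketched on binary masks as follows; the masks are synthetic and the DSC = 2·JI/(1+JI) identity holds for any pair of masks.

```python
# JI and DSC on synthetic binary segmentation masks.
import numpy as np

pred = np.zeros((32, 32), dtype=bool)
gt = np.zeros((32, 32), dtype=bool)
pred[8:24, 8:24] = True   # 16x16 predicted sinus mask (toy)
gt[12:28, 12:28] = True   # 16x16 ground-truth mask, shifted by 4 voxels

inter = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
ji = inter / union                            # Jaccard index
dsc = 2 * inter / (pred.sum() + gt.sum())     # Dice similarity coefficient
print(f"JI={ji:.3f}  DSC={dsc:.3f}")          # DSC = 2*JI / (1 + JI)
```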

Explainable self-supervised learning for medical image diagnosis based on DINO V2 model and semantic search.

Hussien A, Elkhateb A, Saeed M, Elsabawy NM, Elnakeeb AE, Elrashidy N

PubMed · Sep 1, 2025
Medical images have become indispensable for decision-making and significantly affect treatment planning. However, the growth of medical imaging has widened the gap between the volume of images and the number of available radiologists, leading to delays and diagnostic errors. Recent studies highlight the potential of deep learning (DL) in medical image diagnosis, but its reliance on labelled data limits applicability in various clinical settings. As a result, recent studies explore self-supervised learning (SSL) to overcome these challenges. Our study addresses these challenges by examining the performance of SSL on diverse medical image datasets and comparing it with traditional pre-trained supervised learning models. Unlike prior SSL methods that focus solely on classification, our framework leverages DINOv2's embeddings to enable semantic search in medical databases (via Qdrant), allowing clinicians to retrieve similar cases efficiently. This addresses a critical gap in clinical workflows where rapid case retrieval is needed. The results affirmed SSL's ability, especially that of DINOv2, to overcome the challenges associated with labelling data and to provide diagnostic accuracy superior to traditional supervised learning. DINOv2 achieved classification accuracies of 100%, 99%, 99%, 100%, and 95% across the lung cancer, brain tumour, leukaemia, and eye retina disease datasets. While existing SSL models (e.g., BYOL, SimCLR) lack interpretability, we uniquely combine DINOv2 with ViT-CX, a causal explanation method tailored for transformers.
This provides clinically actionable heatmaps, revealing how the model localizes tumors and cellular patterns, a feature absent in prior SSL medical imaging studies. Furthermore, our research explores the impact of semantic search in the medical imaging domain and how it can transform the querying process by returning semantically similar cases alongside the SSL predictions; Qdrant is used to store the embeddings produced by the developed model after training. Cosine similarity measures the distance between the query image embedding and the stored embeddings. Our study aims to enhance the efficiency and accuracy of medical image analysis, ultimately improving the decision-making process.
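The retrieval step described above can be sketched in memory; the random vectors stand in for DINOv2 embeddings, and a real deployment would query a vector database such as Qdrant rather than a NumPy array.

```python
# Cosine-similarity semantic search over synthetic stand-in embeddings.
import numpy as np

rng = np.random.default_rng(3)
db = rng.normal(size=(1000, 384))                 # stored case embeddings (toy)
db /= np.linalg.norm(db, axis=1, keepdims=True)   # L2-normalize once

query = db[42] + 0.01 * rng.normal(size=384)      # a slightly perturbed stored case
query /= np.linalg.norm(query)

sims = db @ query                                 # cosine similarity via dot product
top5 = np.argsort(sims)[::-1][:5]                 # indices of the 5 closest cases
print("closest cases:", top5)
print("best similarity:", round(float(sims[top5[0]]), 3))
```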

Pulmonary T2* quantification of fetuses with congenital diaphragmatic hernia: a retrospective, case-controlled, MRI pilot study.

Avena-Zampieri CL, Uus A, Egloff A, Davidson J, Hutter J, Knight CL, Hall M, Deprez M, Payette K, Rutherford M, Greenough A, Story L

PubMed · Sep 1, 2025
Advanced MRI techniques, motion-correction and T2*-relaxometry, may provide information regarding functional properties of pulmonary tissue. We assessed whether lung volumes and pulmonary T2* values in fetuses with congenital diaphragmatic hernia (CDH) were lower than controls and differed between survivors and non-survivors. Women with uncomplicated pregnancies (controls) and those with a CDH had a fetal MRI on a 1.5 T imaging system encompassing T2 single shot fast spin echo sequences and gradient echo single shot echo planar sequences providing T2* data. Motion-correction was performed using slice-to-volume reconstruction, and T2* maps were generated using in-house pipelines. Lungs were segmented separately using a pre-trained 3D-deep-learning pipeline. Datasets from 33 controls and 12 CDH fetuses were analysed. The mean ± SD gestation at scan was 28.3 ± 4.3 weeks for controls and 27.6 ± 4.9 weeks for CDH cases. CDH lung volumes were lower than controls in both non-survivors and survivors for both lungs combined (5.76 ± 3.59 cc, mean difference = 15.97, 95% CI: -24.51 to -12.9, p < 0.001 and 5.73 ± 2.96 cc, mean difference = 16, 95% CI: 1.91 to 11.53, p = 0.008) and for the ipsilateral lung (1.93 ± 2.09 cc, mean difference = 19.8, 95% CI: -28.48 to -16.45, p < 0.001; 1.58 ± 1.18 cc, mean difference = 20.15, 95% CI: 5.96 to 15.97, p < 0.001). Mean pulmonary T2* values were lower in non-survivors in both lungs combined and in the ipsilateral and contralateral lungs compared with the control group (81.83 ± 26.21 ms, mean difference = 31.13, 95% CI: -58.14 to -10.32, p = 0.006; 81.05 ± 26.84 ms, mean difference = 31.91, 95% CI: -59.02 to -10.82, p = 0.006; 82.62 ± 36.31 ms, mean difference = 30.34, 95% CI: -58.84 to -8.25, p = 0.011), but no difference was observed between controls and CDH cases that survived. Mean pulmonary T2* values were lower in CDH fetuses compared to controls and in CDH cases who died compared to survivors. Mean pulmonary T2* values may have a prognostic function in CDH fetuses.
This study provides original motion-corrected assessment of the morphologic and functional properties of the ipsilateral and contralateral fetal lungs in the context of CDH. Mean pulmonary T2* values were lower in CDH fetuses compared to controls and in cases who died compared to survivors. Mean pulmonary T2* values may have a role in prognostication. Reduction in pulmonary T2* values in CDH fetuses suggests altered pulmonary development, contributing new insights into antenatal assessment.
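T2* relaxometry of the kind used above rests on the mono-exponential gradient-echo decay S(TE) = S0·exp(-TE/T2*), so T2* can be recovered from a log-linear fit against echo time. The echo times and signal values in this sketch are synthetic assumptions, not the study's acquisition parameters.

```python
# Voxelwise T2* estimation from a log-linear fit of gradient-echo decay.
import numpy as np

true_t2star = 80.0                              # ms, in the range reported for controls
s0 = 1000.0                                     # arbitrary baseline signal
te = np.array([5.0, 10.0, 20.0, 40.0, 60.0])    # echo times in ms (assumed)
signal = s0 * np.exp(-te / true_t2star)         # noise-free mono-exponential decay

# log(S) = log(S0) - TE/T2*, so the slope of log(S) vs TE gives -1/T2*.
slope, intercept = np.polyfit(te, np.log(signal), 1)
fit_t2star = -1.0 / slope
print(f"fitted T2* = {fit_t2star:.1f} ms")      # recovers 80 ms on noise-free data
```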

Automated rating of Fazekas scale in fluid-attenuated inversion recovery MRI for ischemic stroke or transient ischemic attack using machine learning.

Jeon ET, Kim SM, Jung JM

PubMed · Sep 1, 2025
White matter hyperintensities (WMH) are commonly assessed using the Fazekas scale, a subjective visual grading system. Despite the emergence of deep learning models for automatic WMH grading, their application in stroke patients remains limited. This study aimed to develop and validate an automatic segmentation and grading model for WMH in stroke patients, utilizing spatial-probabilistic methods. We developed a two-step deep learning pipeline to predict Fazekas scale scores from T2-weighted FLAIR images. First, WMH segmentation was performed using a residual neural network based on the U-Net architecture. Then, Fazekas scale grading was carried out using a 3D convolutional neural network trained on the segmented WMH probability volumes. A total of 471 stroke patients from three different sources were included in the analysis. The performance metrics included area under the precision-recall curve (AUPRC), Dice similarity coefficient, and absolute error for WMH volume prediction. In addition, agreement analysis and quadratic weighted kappa were calculated to assess the accuracy of the Fazekas scale predictions. The WMH segmentation model achieved an AUPRC of 0.81 (95% CI, 0.55-0.95) and a Dice similarity coefficient of 0.73 (95% CI, 0.49-0.87) in the internal test set. The mean absolute error between the true and predicted WMH volumes was 3.1 ml (95% CI, 0.0 ml-15.9 ml), with no significant variation across Fazekas scale categories. The agreement analysis demonstrated strong concordance, with an R-squared value of 0.91, a concordance correlation coefficient of 0.96, and a systematic difference of 0.33 ml in the internal test set, and 0.94, 0.97, and 0.40 ml, respectively, in the external validation set. In predicting Fazekas scores, the 3D convolutional neural network achieved quadratic weighted kappa values of 0.951 for regression tasks and 0.956 for classification tasks in the internal test set, and 0.898 and 0.956, respectively, in the external validation set. 
The proposed deep learning pipeline demonstrated robust performance in automatic WMH segmentation and Fazekas scale grading from FLAIR images in stroke patients. This approach offers a reliable and efficient tool for evaluating WMH burden, which may assist in predicting future vascular events.
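The quadratic weighted kappa used above to score predicted Fazekas grades against rater labels penalizes disagreements by the square of their distance on the ordinal scale. A sketch with made-up grades:

```python
# Quadratic weighted kappa on toy Fazekas grades (0-3).
from sklearn.metrics import cohen_kappa_score

rater = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]   # hypothetical rater labels
model = [0, 1, 2, 3, 2, 1, 1, 3, 1, 1]   # model predictions with two near-miss errors

qwk = cohen_kappa_score(rater, model, weights="quadratic")
print(f"quadratic weighted kappa = {qwk:.3f}")
```

Because both disagreements are off by only one grade, the quadratic weighting keeps the penalty small and the kappa high; a one-step miss costs far less than confusing grade 0 with grade 3.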

Uncovering novel functions of NUF2 in glioblastoma and MRI-based expression prediction.

Zhong RD, Liu YS, Li Q, Kou ZW, Chen FF, Wang H, Zhang N, Tang H, Zhang Y, Huang GD

PubMed · Sep 1, 2025
Glioblastoma multiforme (GBM) is a lethal brain tumor with limited therapies. NUF2, a kinetochore protein involved in cell cycle regulation, shows oncogenic potential in various cancers; however, its role in GBM pathogenesis remains unclear. In this study, we investigated NUF2's function and mechanisms in GBM, developed an MRI-based machine learning model to predict its expression non-invasively, and evaluated its potential as a therapeutic target and prognostic biomarker. Functional assays (proliferation, colony formation, migration, and invasion) and cell cycle analysis were conducted using NUF2-knockdown U87/U251 cells. Western blotting was performed to assess the expression levels of β-catenin and MMP-9. Bioinformatic analyses included pathway enrichment, immune infiltration, and single-cell subtype characterization. Using preoperative contrast-enhanced T1-weighted (T1CE) MRI sequences from 61 patients, we extracted 1037 radiomic features and developed a predictive model using Least Absolute Shrinkage and Selection Operator (LASSO) regression for feature selection and random forest algorithms for classification, with rigorous cross-validation. NUF2 overexpression in GBM tissues and cells was correlated with poor survival (p < 0.01). Knockdown of NUF2 significantly suppressed malignant phenotypes (p < 0.05), induced G0/G1 arrest (p < 0.01), and increased sensitivity to temozolomide (TMZ) treatment via the β-catenin/MMP9 pathway. The radiomic model achieved strong NUF2 prediction performance (AUC = 0.897) using six optimized features. Key features demonstrated associations with MGMT methylation and 1p/19q co-deletion, serving as independent prognostic markers. NUF2 drives GBM progression through β-catenin/MMP9 activation, establishing its dual role as a therapeutic target and a prognostic biomarker. The developed radiogenomic model enables precise non-invasive NUF2 evaluation, thereby advancing personalized GBM management.
This study highlights the translational value of integrating molecular biology with artificial intelligence in neuro-oncology.
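The LASSO-then-random-forest pipeline described above can be sketched as follows; the feature matrix, labels, and alpha are synthetic stand-ins (the study's alpha and selected features are not reported here), so this shows the shape of the pipeline, not its results.

```python
# Toy radiogenomic pipeline: LASSO feature selection, then a random forest.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(61, 1037))              # 61 patients x 1037 radiomic features (toy)
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # toy high/low expression label

# L1 shrinkage zeroes out most coefficients; the survivors are "selected".
lasso = Lasso(alpha=0.01, max_iter=10000).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("features kept:", selected.size)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X[:, selected], y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(float(scores.mean()), 2))
```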

Machine learning to predict high-risk coronary artery disease on CT in the SCOT-HEART trial.

Williams MC, Guimaraes ARM, Jiang M, Kwieciński J, Weir-McCall JR, Adamson PD, Mills NL, Roditi GH, van Beek EJR, Nicol E, Berman DS, Slomka PJ, Dweck MR, Newby DE, Dey D

PubMed · Sep 1, 2025
Machine learning based on clinical characteristics has the potential to predict coronary CT angiography (CCTA) findings and help guide resource utilisation. From the SCOT-HEART (Scottish Computed Tomography of the HEART) trial, data from 1769 patients were used to train and test machine learning models (XGBoost, 10-fold cross-validation, grid-search hyperparameter selection). Two models were generated separately to predict the presence of coronary artery disease (CAD) and an increased burden of low-attenuation coronary artery plaque (LAP) using symptoms, demographic and clinical characteristics, electrocardiography, and exercise tolerance testing (ETT). Machine learning predicted the presence of CAD on CCTA (area under the curve (AUC) 0.80, 95% CI 0.74 to 0.85) better than the 10-year cardiovascular risk score alone (AUC 0.75, 95% CI 0.70 to 0.81, p=0.004). The most important features in this model were the 10-year cardiovascular risk score, age, sex, total cholesterol, and an abnormal ETT. In contrast, the second model, used to predict an increased LAP burden, performed similarly to the 10-year cardiovascular risk score (AUC 0.75, 95% CI 0.70 to 0.80 vs AUC 0.72, 95% CI 0.66 to 0.77, p=0.08), with the most important features being the 10-year cardiovascular risk score, age, body mass index, and total and high-density lipoprotein cholesterol concentrations. Machine learning models can improve prediction of the presence of CAD on CCTA over the standard cardiovascular risk score. However, it was not possible to improve the prediction of an increased LAP burden based on clinical factors alone.
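The model-versus-risk-score comparison above can be sketched as follows. The cohort is synthetic, and scikit-learn's gradient boosting stands in for XGBoost; the feature set, coefficients, and outcome model are illustrative assumptions only.

```python
# Toy comparison: boosted model on clinical features vs a single risk score.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1769
risk_score = rng.uniform(0, 30, n)        # toy 10-year cardiovascular risk score (%)
age = rng.normal(60, 10, n)
chol = rng.normal(5.5, 1.0, n)            # toy total cholesterol (mmol/L)

# Synthetic outcome: CAD probability driven mostly by the risk score.
logit = 0.08 * risk_score + 0.03 * (age - 60) + 0.3 * (chol - 5.5) - 1.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([risk_score, age, chol])
X_tr, X_te, y_tr, y_te, rs_tr, rs_te = train_test_split(
    X, y, risk_score, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_score = roc_auc_score(y_te, rs_te)    # risk score used directly as a ranker
print(f"model AUC={auc_model:.2f}  risk-score AUC={auc_score:.2f}")
```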