
Combining curriculum learning and weakly supervised attention for enhanced thyroid nodule assessment in ultrasound imaging.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

PubMed | Sep 1, 2025
The accurate assessment of thyroid nodules, which are increasingly common with age and lifestyle factors, is essential for early malignancy detection. Ultrasound imaging, the primary diagnostic tool for this purpose, holds promise when paired with deep learning. However, challenges persist with small datasets, where conventional data augmentation can introduce noise and obscure essential diagnostic features. To address dataset imbalance and enhance model generalization, this study integrates curriculum learning with a weakly supervised attention network and attention-guided data augmentation to improve diagnostic accuracy in thyroid nodule classification. Using verified datasets from Siriraj Hospital, the model was trained progressively, beginning with simpler images and gradually incorporating more complex cases. This structured learning approach is designed to enhance the model's diagnostic accuracy by refining its ability to distinguish benign from malignant nodules. Among the curriculum learning schemes tested, scheme IV achieved the best results, with a precision of 100% for benign and 70% for malignant nodules, a recall of 82% for benign and 100% for malignant nodules, and F1-scores of 90% and 83%, respectively. This structured approach improved the model's diagnostic sensitivity and robustness. These findings suggest that automated thyroid nodule assessment, supported by curriculum learning, has the potential to complement radiologists in clinical practice, enhancing diagnostic accuracy and aiding more reliable malignancy detection.
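
For readers unfamiliar with the training scheme, the sketch below illustrates the core curriculum idea: the model first sees only the easiest images, and harder cases are admitted in later stages. The staging schedule, difficulty scores, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal curriculum-learning sketch: train on easy samples first, then
# progressively admit harder ones. The per-image difficulty scores and the
# model/optimizer choices are placeholders, not the paper's actual code.
import torch
from torch.utils.data import DataLoader, Subset

def curriculum_loaders(dataset, difficulty, n_stages=4, batch_size=32):
    """Yield one DataLoader per stage, each admitting a larger easy-to-hard slice."""
    order = sorted(range(len(dataset)), key=lambda i: difficulty[i])
    for stage in range(1, n_stages + 1):
        cutoff = int(len(order) * stage / n_stages)   # easiest fraction so far
        yield DataLoader(Subset(dataset, order[:cutoff]),
                         batch_size=batch_size, shuffle=True)

def train_with_curriculum(model, dataset, difficulty, epochs_per_stage=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    for loader in curriculum_loaders(dataset, difficulty):
        for _ in range(epochs_per_stage):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
    return model
```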

Uncovering novel functions of NUF2 in glioblastoma and MRI-based expression prediction.

Zhong RD, Liu YS, Li Q, Kou ZW, Chen FF, Wang H, Zhang N, Tang H, Zhang Y, Huang GD

PubMed | Sep 1, 2025
Glioblastoma multiforme (GBM) is a lethal brain tumor with limited therapies. NUF2, a kinetochore protein involved in cell cycle regulation, shows oncogenic potential in various cancers; however, its role in GBM pathogenesis remains unclear. In this study, we investigated NUF2's function and mechanisms in GBM, developed an MRI-based machine learning model to predict its expression non-invasively, and evaluated its potential as a therapeutic target and prognostic biomarker. Functional assays (proliferation, colony formation, migration, and invasion) and cell cycle analysis were conducted using NUF2-knockdown U87/U251 cells. Western blotting was performed to assess the expression levels of β-catenin and MMP-9. Bioinformatic analyses included pathway enrichment, immune infiltration, and single-cell subtype characterization. Using preoperative contrast-enhanced T1-weighted (T1CE) MRI sequences from 61 patients, we extracted 1037 radiomic features and developed a predictive model, using least absolute shrinkage and selection operator (LASSO) regression for feature selection and a random forest algorithm for classification, with rigorous cross-validation. NUF2 overexpression in GBM tissues and cells was correlated with poor survival (p < 0.01). Knockdown of NUF2 significantly suppressed malignant phenotypes (p < 0.05), induced G0/G1 arrest (p < 0.01), and increased sensitivity to temozolomide (TMZ) treatment via the β-catenin/MMP9 pathway. The radiomic model accurately predicted NUF2 expression (AUC = 0.897) using six optimized features. Key features demonstrated associations with MGMT methylation and 1p/19q co-deletion, serving as independent prognostic markers. NUF2 drives GBM progression through β-catenin/MMP9 activation, establishing its dual role as a therapeutic target and a prognostic biomarker. The developed radiogenomic model enables precise non-invasive NUF2 evaluation, thereby advancing personalized GBM management. This study highlights the translational value of integrating molecular biology with artificial intelligence in neuro-oncology.
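
The radiomics pipeline described here (LASSO feature selection followed by random forest classification under cross-validation) can be sketched in a few lines of scikit-learn. The feature matrix `X` (61 patients × 1037 features) and binary NUF2-expression labels `y` are assumed inputs, and the parameter choices are illustrative rather than the authors' settings.

```python
# Sketch of the two-step radiomics pipeline: LASSO selects features with
# non-zero coefficients, then a random forest is scored by cross-validated AUC.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

def select_and_classify(X, y, cv=5):
    Xs = StandardScaler().fit_transform(X)          # z-score each feature
    lasso = LassoCV(cv=cv, random_state=0).fit(Xs, y)
    keep = np.flatnonzero(lasso.coef_)              # surviving features
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    auc = cross_val_score(rf, Xs[:, keep], y, cv=cv, scoring="roc_auc")
    return keep, auc.mean()
```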

Enhancing diagnostic precision for thyroid C-TIRADS category 4 nodules: a hybrid deep learning and machine learning model integrating grayscale and elastographic ultrasound features.

Zou D, Lyu F, Pan Y, Fan X, Du J, Mai X

PubMed | Sep 1, 2025
Accurate and timely diagnosis of thyroid cancer is critical for clinical care, and artificial intelligence can enhance this process. This study aimed to develop and validate an intelligent assessment model called C-TNet, based on the Chinese Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules (C-TIRADS) and real-time elastography, to differentiate benign from malignant thyroid nodules classified as C-TIRADS category 4. We evaluated the performance of C-TNet against ultrasonographers and against BMNet, a model trained exclusively on histopathological findings indicating benign or malignant nature. The study included 3,545 patients with pathologically confirmed C-TIRADS category 4 thyroid nodules from two tertiary hospitals in China: the Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine (n=3,463) and Jiangyin People's Hospital (n=82). The former cohort was randomly divided into training and validation sets (7:3 ratio), while the Jiangyin People's Hospital cohort served as the external validation set. The C-TNet model was developed by extracting image features from the training set and integrating them with six commonly used classifier algorithms: logistic regression (LR), linear discriminant analysis (LDA), random forest (RF), kernel support vector machine (K-SVM), adaptive boosting (AdaBoost), and naive Bayes (NB). Its performance was evaluated on both internal and external validation sets, with statistical differences analyzed via the Chi-squared test. The C-TNet model effectively integrates feature extraction from deep neural networks with an RF classifier, utilizing grayscale and elastography ultrasound data. It successfully differentiates benign from malignant thyroid nodules, achieving an area under the curve (AUC) of 0.873, comparable to the performance of senior physicians (AUC: 0.868). The model generalizes across diverse clinical settings, positioning itself as a transformative decision-support tool for enhancing the risk stratification of thyroid nodules.
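
A minimal sketch of the model-selection step described above: frozen deep-network image features are fed to each of the six classical classifier heads and compared by cross-validated AUC. The feature matrix and labels are assumed inputs, and hyperparameters are placeholders rather than the authors' settings.

```python
# Benchmark six classical classifier heads on deep image features.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

CLASSIFIERS = {
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "K-SVM": SVC(kernel="rbf", probability=True),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "NB": GaussianNB(),
}

def benchmark_heads(deep_features, labels, cv=5):
    """Cross-validated AUC for each classical head on frozen deep features."""
    return {name: cross_val_score(clf, deep_features, labels,
                                  cv=cv, scoring="roc_auc").mean()
            for name, clf in CLASSIFIERS.items()}
```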

Prediction of lymphovascular invasion in invasive breast cancer via intratumoral and peritumoral multiparametric magnetic resonance imaging machine learning-based radiomics with Shapley additive explanations interpretability analysis.

Chen S, Zhong Z, Chen Y, Tang W, Fan Y, Sui Y, Hu W, Pan L, Liu S, Kong Q, Guo Y, Liu W

PubMed | Sep 1, 2025
The use of multiparametric magnetic resonance imaging (MRI) to predict lymphovascular invasion (LVI) in breast cancer is well documented in the literature. However, most related studies have focused on intratumoral characteristics, overlooking the potential contribution of peritumoral features. The aim of this study was to evaluate the effectiveness of multiparametric MRI in predicting LVI by analyzing both intratumoral and peritumoral radiomics features and to assess the added value of incorporating both regions in LVI prediction. A total of 366 patients underwent preoperative breast MRI at two centers and were divided into training (n=208), validation (n=70), and test (n=88) sets. Imaging features were extracted from intratumoral and peritumoral T2-weighted imaging, diffusion-weighted imaging, and dynamic contrast-enhanced MRI. Five logistic regression models were developed for predicting LVI status: the tumor area (TA) model, peritumoral area (PA) model, tumor-plus-peritumoral area (TPA) model, clinical model, and combined model, with the combined model incorporating the best-performing radiomics score and clinical factors. Predictive efficacy was evaluated via the receiver operating characteristic (ROC) curve and area under the curve (AUC). The Shapley additive explanations (SHAP) method was used to rank the features and explain the final model. The performance of the TPA model was superior to that of the TA and PA models. The combined model was therefore developed via multivariable logistic regression, incorporating the TPA radiomics score (radscore), MRI-assessed axillary lymph node (ALN) status, and peritumoral edema (PE). The combined model demonstrated good calibration and discrimination across the training, validation, and test datasets, with AUCs of 0.888 [95% confidence interval (CI): 0.841-0.934], 0.856 (95% CI: 0.769-0.943), and 0.853 (95% CI: 0.760-0.946), respectively. Furthermore, SHAP analysis was conducted to evaluate the contributions of the TPA radscore, MRI-ALN status, and PE to LVI status prediction. The combined model, incorporating clinical factors and intratumoral and peritumoral radscore, effectively predicts LVI and may aid tailored treatment planning.
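
The SHAP interpretation step can be sketched with the `shap` package as follows, assuming a pandas DataFrame `X` holding the three predictors reported above (TPA radscore, MRI-assessed ALN status, PE) and binary LVI labels `y`. This is a generic illustration, not the study's code.

```python
# Fit the combined logistic model and rank its predictors by SHAP values.
import shap
from sklearn.linear_model import LogisticRegression

def explain_combined_model(X, y):
    """Fit the combined LR model and rank features by mean |SHAP| value."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    explainer = shap.LinearExplainer(model, X)   # exact values for linear models
    shap_values = explainer(X)
    shap.plots.bar(shap_values)                  # global feature ranking
    return shap_values
```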

Left ventricular ejection fraction assessment: artificial intelligence compared to echocardiography expert and cardiac magnetic resonance measurements.

Mołek-Dziadosz P, Woźniak A, Furman-Niedziejko A, Pieszko K, Szachowicz-Jaworska J, Miszalski-Jamka T, Krupiński M, Dweck MR, Nessler J, Gackowski A

PubMed | Sep 1, 2025
Cardiac magnetic resonance (CMR) is the gold standard for assessing left ventricular ejection fraction (LVEF). Artificial intelligence (AI)-based echocardiographic analysis is increasingly utilized in clinical practice. This study compares LVEF measured on echocardiography (ECHO) by experts and by automated AI, using CMR as the reference standard. We retrospectively analyzed 118 patients who underwent both CMR and ECHO within 7 days. LVEF measured by CMR was compared with results obtained from AI-based software that automatically analyzed all stored DICOM loops (multi-loop AI analysis). AI analysis was also repeated using only the single best-quality loop for each of the two- and four-chamber views (one-loop AI analysis). These results were further compared with standard ECHO analysis performed by two independent experts. Agreement was investigated using Pearson's correlation and Bland-Altman analysis, as well as Cohen's kappa and concordance for categorization of LVEF into subgroups (≤30%, 31-40%, 41-50%, 51-70%, and >70%). Both experts demonstrated strong inter-reader agreement (R = 0.88, κ = 0.77) and correlated well with CMR LVEF (Expert 1: R = 0.86, κ = 0.74; Expert 2: R = 0.85, κ = 0.68). Multi-loop AI analysis correlated strongly with CMR (R = 0.87, κ = 0.68) and with the experts (R = 0.88-0.90). One-loop AI analysis demonstrated numerically higher concordance with CMR LVEF (R = 0.89, κ = 0.75) than multi-loop AI analysis and the experts. AI-based analysis showed LVEF assessment similar to that of human experts when compared against CMR. AI-based ECHO analysis is promising, but the results should be interpreted with caution.
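
The agreement statistics reported here (Pearson r, Bland-Altman bias and limits of agreement, and Cohen's kappa on LVEF categories) are straightforward to compute; the sketch below assumes two paired arrays of LVEF values and uses the category edges from the abstract (≤30, 31-40, 41-50, 51-70, >70).

```python
# Agreement analysis between two LVEF measurement methods.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

BINS = [30, 40, 50, 70]  # LVEF category edges from the abstract

def lvef_agreement(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    r, _ = pearsonr(a, b)
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # Bland-Altman limits
    kappa = cohen_kappa_score(np.digitize(a, BINS), np.digitize(b, BINS))
    return {"pearson_r": r, "bias": bias, "loa": loa, "kappa": kappa}
```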

Deep learning-based automated assessment of hepatic fibrosis via magnetic resonance images and nonimage data.

Li W, Zhu Y, Zhao G, Chen X, Zhao X, Xu H, Che Y, Chen Y, Ye Y, Dou X, Wang H, Cheng J, Xie Q, Chen K

PubMed | Sep 1, 2025
Accurate staging of hepatic fibrosis is critical for prognostication and management among patients with chronic liver disease, and noninvasive, efficient alternatives to biopsy are urgently needed. This study aimed to evaluate the performance of an automated deep learning (DL) algorithm for fibrosis staging and for differentiating patients with hepatic fibrosis from healthy individuals via magnetic resonance (MR) images with and without additional clinical data. A total of 500 patients from two medical centers were retrospectively analyzed. DL models were developed based on delayed-phase MR images to predict fibrosis stages. Additional models were constructed by integrating the DL algorithm with nonimaging variables, including serologic biomarkers [aminotransferase-to-platelet ratio index (APRI) and fibrosis index based on four factors (FIB-4)], viral status (hepatitis B and C), and MR scanner parameters. Diagnostic performance was assessed via the area under the receiver operating characteristic curve (AUROC), and comparisons were made using the DeLong test. The sensitivity and specificity of the DL and full models (DL plus all clinical features) were compared with those of experienced radiologists and serologic biomarkers via the McNemar test. In the test set, the full model achieved AUROC values of 0.99 [95% confidence interval (CI): 0.94-1.00], 0.98 (95% CI: 0.93-0.99), 0.90 (95% CI: 0.83-0.95), 0.81 (95% CI: 0.73-0.88), and 0.84 (95% CI: 0.76-0.90) for staging F0-4, F1-4, F2-4, F3-4, and F4, respectively. The full model significantly outperformed the DL model in early-stage classification (F0-4 and F1-4). Compared with expert radiologists, it showed superior specificity for F0-4 and higher sensitivity across the other four classification tasks. Both the DL and full models showed significantly greater specificity than the biomarkers for staging advanced fibrosis (F3-4 and F4). The proposed DL algorithm provides a noninvasive method for hepatic fibrosis staging and screening, outperforming both radiologists and conventional biomarkers, and may facilitate improved clinical decision-making.
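
To make the fusion step concrete, the sketch below fits a simple "full model" that combines the DL image score with APRI and FIB-4 in a logistic regression and reports AUROC with a bootstrap confidence interval. The study used the DeLong test for AUROC comparisons; the bootstrap here is a common stand-in, and all inputs and settings are assumptions.

```python
# Fuse the DL image score with serologic markers and estimate an AUROC CI.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def full_model_auroc(dl_score, apri, fib4, y, n_boot=2000, seed=0):
    """AUROC of a logistic model fusing the DL score with APRI and FIB-4.

    Scored in-sample for brevity; a held-out test set should be used in practice.
    """
    X = np.column_stack([dl_score, apri, fib4])
    y = np.asarray(y)
    prob = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))       # resample patients
        if len(np.unique(y[idx])) < 2:              # need both classes present
            continue
        aucs.append(roc_auc_score(y[idx], prob[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y, prob), (lo, hi)
```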

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

PubMed | Sep 1, 2025
To evaluate a deep-learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance with that of expert readers, using invasive coronary angiography as the reference. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification, and results were compared with invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. Across 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared with invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy to 95%, and reading time decreased by 54% (p < 0.001). This deep-learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and dramatically reduces interpretation time.
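
The per-segment metrics reported here all derive from a binary confusion matrix against the invasive-angiography reference; a minimal sketch, assuming per-segment binary labels (1 = obstructive stenosis, 0 = none):

```python
# Per-segment sensitivity, specificity, and accuracy vs. the reference standard.
from sklearn.metrics import confusion_matrix

def segment_metrics(reference, prediction):
    """reference/prediction: binary stenosis labels, one entry per segment."""
    tn, fp, fn, tp = confusion_matrix(reference, prediction).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```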

Feasibility of fully automatic assessment of cervical canal stenosis using MRI via deep learning.

Feng X, Zhang Y, Lu M, Ma C, Miao X, Yang J, Lin L, Zhang Y, Zhang K, Zhang N, Kang Y, Luo Y, Cao K

PubMed | Sep 1, 2025
Currently, no fully automated tool is available for evaluating the degree of cervical spinal stenosis. The aim of this study was to develop and validate artificial intelligence (AI) algorithms for the assessment of cervical spinal stenosis. In this retrospective multi-center study, cervical spine magnetic resonance imaging (MRI) scans obtained from July 2020 to June 2023 were included. Studies of patients with spinal instrumentation and studies with suboptimal image quality were excluded. Sagittal T2-weighted images were used. The training data from the Fourth People's Hospital of Shanghai (Hos. 1) and Shanghai Changzheng Hospital (Hos. 2) were annotated by two musculoskeletal (MSK) radiologists following the Kang system as the reference standard. First, a convolutional neural network (CNN) was trained to detect the region of interest (ROI); a second, Transformer-based network then performed classification. The performance of the deep learning (DL) model was assessed on an internal test set from Hos. 2 and an external test set from Shanghai Changhai Hospital (Hos. 3), and compared among six readers. Metrics such as detection precision, interrater agreement, sensitivity (SEN), and specificity (SPE) were calculated. Overall, 795 patients were analyzed (mean age ± standard deviation, 55±14 years; 346 female), with 589 in the training (75%) and validation (25%) sets, 206 in the internal test set, and 95 in the external test set. Four tasks reflecting different clinical application scenarios were trained, with accuracy (ACC) ranging from 0.8993 to 0.9532. When a Kang system score of ≥2 was used as the threshold for diagnosing central cervical canal stenosis in the internal test set, both the algorithm and the six readers achieved similar areas under the receiver operating characteristic curve (AUCs) of 0.936 [95% confidence interval (CI): 0.916-0.955], with a SEN of 90.3% and an SPE of 93.8%; in the external test set, the AUC of the DL model was 0.931 (95% CI: 0.917-0.946), with a SEN of 100% and an SPE of 86.3%. Correlation analysis comparing the DL method, the six readers, and MRI reports against the reference standard showed moderate correlation, with R values ranging from 0.589 to 0.668. The DL model produced rates of upgrades (9.2%) and downgrades (5.1%) similar to those of the six readers. The DL model can fully automatically and reliably assess cervical canal stenosis on MRI scans.
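
The two-stage design (CNN-based ROI detection followed by Transformer-based grading) can be sketched as a simple wrapper module. Both sub-networks below are placeholders for the paper's models, the single-box cropping is a simplification, and the Kang grade ≥2 screening threshold follows the abstract.

```python
# Two-stage pipeline sketch: detect the spinal-canal ROI, then grade the crop.
import torch

class TwoStageGrader(torch.nn.Module):
    def __init__(self, detector, classifier):
        super().__init__()
        self.detector = detector        # placeholder: returns (x1, y1, x2, y2)
        self.classifier = classifier    # placeholder: Transformer, 4 Kang grades

    def forward(self, image):           # image: (B, C, H, W) sagittal T2 slice
        x1, y1, x2, y2 = self.detector(image)            # one box for the batch
        roi = image[..., y1:y2, x1:x2]                   # crop canal region
        roi = torch.nn.functional.interpolate(roi, size=(224, 224))
        return self.classifier(roi)                      # logits, shape (B, 4)

def is_stenotic(grade_logits):
    """Screening rule from the abstract: predicted Kang grade >= 2."""
    return grade_logits.argmax(dim=-1) >= 2
```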

Application of deep learning for detection of nasal bone fracture on X-ray nasal bone lateral view.

Mortezaei T, Dalili Kajan Z, Mirroshandel SA, Mehrpour M, Shahidzadeh S

PubMed | Sep 1, 2025
This study aimed to assess the efficacy of deep learning for the detection of nasal bone fractures on X-ray nasal bone lateral views. In this retrospective observational study, 2,968 X-ray nasal bone lateral views of trauma patients were collected from a radiology centre and randomly divided into training, validation, and test sets. Preprocessing included noise reduction with a Gaussian filter and image resizing. Edge detection was performed using the Canny edge detector. Feature extraction was conducted using the gray-level co-occurrence matrix (GLCM), histogram of oriented gradients (HOG), and local binary pattern (LBP) techniques. Several deep learning models, namely a custom CNN, VGG16, VGG19, MobileNet, Xception, ResNet50V2, InceptionV3, and a Swin Transformer, were employed to classify images into two classes: normal and fracture. Accuracy was highest for VGG16 and the Swin Transformer (0.79), followed by ResNet50V2 and InceptionV3 (0.74), Xception (0.72), and MobileNet (0.71). The AUC was highest for VGG16 (0.86), followed by VGG19 (0.84), MobileNet and Xception (0.83), and the Swin Transformer (0.79). The tested deep learning models were capable of detecting nasal bone fractures on X-ray nasal bone lateral views with high accuracy, with VGG16 performing best.
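
The handcrafted feature stage described above maps directly onto scikit-image; a minimal sketch, assuming a 2-D uint8 grayscale radiograph and illustrative parameter choices:

```python
# GLCM texture statistics, HOG descriptor, and LBP histogram for one image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, hog, local_binary_pattern

def handcrafted_features(img_u8):
    """img_u8: 2-D uint8 grayscale nasal-bone lateral view."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    hog_feats = hog(img_u8, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, hog_feats, lbp_hist])
```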

Can super resolution via deep learning improve classification accuracy in dental radiography?

Çelik B, Mikaeili M, Genç MZ, Çelik ME

PubMed | Sep 1, 2025
Deep learning-driven super resolution (SR) aims to enhance the quality and resolution of images, offering potential benefits in dental imaging. Although extensive research has focused on deep learning-based dental classification tasks, the impact of applying SR techniques on classification remains underexplored. This study addresses this gap by evaluating and comparing the performance of deep learning classification models on dental images with and without SR enhancement. An open-source dental image dataset was used to investigate the impact of SR on image classification performance. SR was applied by two models with scaling ratios of 2 and 4, while classification was performed by four deep learning models. Performance was evaluated with well-accepted metrics: structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), accuracy, recall, precision, and F1 score. The effect of SR on classification performance was interpreted through two different approaches. The two SR models yielded average SSIM and PSNR values of 0.904 and 36.71 dB across the two scaling ratios. Average accuracy and F1 score for classification trained and tested with the SR-generated images were 0.859 and 0.873. In the first comparison, accuracy increased in at least half of the cases (8 out of 16) when different models and scaling ratios were considered, while in the second approach, SR showed significantly higher performance in almost all cases (12 out of 16). This study demonstrated that classification with SR-generated images significantly improved outcomes. For the first time, the classification performance of dental radiographs with resolution improved by SR has been investigated, with significant performance gains observed compared to the case without SR.
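
The SR quality metrics reported above are standard and available in scikit-image; a minimal sketch, assuming a paired ground-truth and super-resolved image on the same intensity scale:

```python
# SSIM and PSNR between a reference image and its super-resolved counterpart.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sr_quality(reference, super_resolved):
    """Both inputs: 2-D arrays of identical shape and intensity scale."""
    rng = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, super_resolved, data_range=rng)
    ssim = structural_similarity(reference, super_resolved, data_range=rng)
    return {"psnr": psnr, "ssim": ssim}
```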