
Iterative Misclassification Error Training (IMET): An Optimized Neural Network Training Technique for Image Classification

Ruhaan Singh, Sreelekha Guggilam

arXiv preprint · Jul 1, 2025
Deep learning models have proven effective at making accurate diagnostic predictions from medical images. However, medical datasets often contain noisy, mislabeled, or poorly generalizable images, particularly for edge cases and anomalous outcomes. In addition, high-quality datasets are often small, which can lead to overfitting, where models memorize noise rather than learn generalizable patterns. This poses serious risks in medical diagnostics, where misclassification can impact human life. Several data-efficient training strategies have emerged to address these constraints. Coreset selection identifies compact subsets of the most representative samples, enabling training that approximates full-dataset performance while reducing computational overhead. Curriculum learning, by contrast, gradually increases training difficulty to accelerate convergence; however, developing a difficulty-ranking mechanism that generalizes across diverse domains, datasets, and models while keeping computational cost low remains challenging. In this paper, we introduce Iterative Misclassification Error Training (IMET), a novel framework inspired by curriculum learning and coreset selection. IMET identifies misclassified samples to streamline training while directing the model's attention to edge-case scenarios and rare outcomes. We evaluate IMET on benchmark medical image classification datasets against state-of-the-art ResNet architectures and present results demonstrating its potential for enhancing model robustness and accuracy in medical image analysis.
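The abstract above centers on emphasizing misclassified samples during training; the sketch below illustrates that general idea with scikit-learn on a toy dataset. It is not the authors' IMET implementation: `load_digits`, the weight-doubling rule, and the five-round loop are illustrative assumptions.

```python
# Illustrative sketch of misclassification-driven training (NOT the paper's IMET):
# after each pass, samples the model gets wrong are up-weighted so the next fit
# focuses on hard or edge-case examples.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0                                      # digits pixels lie in 0..16
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

weights = np.ones(len(y_tr))                      # start from uniform sample weights
clf = LogisticRegression(max_iter=2000)

for rnd in range(5):
    clf.fit(X_tr, y_tr, sample_weight=weights)
    wrong = clf.predict(X_tr) != y_tr             # samples the current model misclassifies
    weights[wrong] *= 2.0                         # emphasize them on the next pass
    weights /= weights.mean()                     # keep the overall scale stable
    print(f"round {rnd}: train errors={int(wrong.sum())}, "
          f"test accuracy={clf.score(X_te, y_te):.3f}")
```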

Radiomics Analysis of Different Machine Learning Models based on Multiparametric MRI to Identify Benign and Malignant Testicular Lesions.

Jian Y, Yang S, Liu R, Tan X, Zhao Q, Wu J, Chen Y

PubMed · Jul 1, 2025
To develop and validate a machine learning-based prediction model that uses multiparametric magnetic resonance imaging (MRI) to distinguish benign from malignant testicular lesions. The study retrospectively enrolled 148 patients with pathologically confirmed benign or malignant testicular lesions, divided into a training set (n=103) and a validation set (n=45). Radiomics features were derived from T2-weighted (T2WI), contrast-enhanced T1-weighted (CE-T1WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) MRI images, followed by feature selection. A machine learning-based combined model was developed by incorporating radiomics scores (rad-scores) from the optimal radiomics model along with clinical predictors. Receiver operating characteristic (ROC) curves were plotted, and the area under the curve (AUC) was used to evaluate and compare the predictive performance of each model; the diagnostic efficacy of the machine learning models was compared using the DeLong test. Radiomics features were extracted from the four-sequence combination (CE-T1WI+DWI+ADC+T2WI), and the logistic regression (LR) model showed the best performance among the radiomics models. The clinical model identified one independent predictor. The combined clinical-radiomics model performed best, with an AUC of 0.932 (95% confidence interval (CI) 0.868-0.978), sensitivity of 0.875, specificity of 0.871, and accuracy of 0.884 in the validation set. The combined clinical-radiomics model can serve as a reliable tool for predicting benign and malignant testicular lesions and provide a reference for clinical treatment decisions.
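As a rough illustration of the combined clinical-radiomics modelling step (not the study's code), the sketch below fits a logistic regression on a synthetic rad-score plus one clinical predictor and reports a validation AUC; all feature values are simulated stand-ins.

```python
# Hedged sketch: combine a radiomics score with a clinical predictor in a
# logistic regression model and evaluate it on a held-out validation set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 148                                    # cohort size reported in the abstract
rad_score = rng.normal(size=n)             # stand-in for the optimal model's rad-score
clinical = rng.normal(size=n)              # stand-in for the single clinical predictor
y = (rad_score + 0.5 * clinical + rng.normal(scale=0.8, size=n) > 0).astype(int)

X = np.column_stack([rad_score, clinical])
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=45, random_state=0)

combined = LogisticRegression().fit(X_tr, y_tr)   # combined clinical-radiomics model
print("validation AUC:",
      round(roc_auc_score(y_va, combined.predict_proba(X_va)[:, 1]), 3))
```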

Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.

Lamba K, Rani S, Shabaz M

PubMed · Jul 1, 2025
Brain tumors can have life-threatening consequences, so timely detection and accurate classification are critical for determining appropriate treatment plans and improving patient outcomes. However, conventional diagnostic approaches based on magnetic resonance imaging (MRI) and computed tomography (CT) are often labor-intensive, prone to human error, and heavily reliant on the expertise of radiologists. The integration of machine learning (ML) and deep learning (DL) has transformed healthcare in recent years, showing great potential for accurate medical image analysis, but the black-box nature of these models raises concerns about their trustworthiness, interpretability, and transparency in clinical settings, since the reasoning behind their predictions remains difficult for healthcare professionals to understand. To address this, an explainable hybrid framework is proposed that synergizes explainable artificial intelligence (XAI) with a hybrid model, integrating a DenseNet201 network for deep feature extraction with a support vector machine (SVM) classifier for robust binary classification of brain MRI scans. A region-adaptive preprocessing pipeline is used to enhance tumor visibility and feature clarity. To address the need for interpretability, multiple XAI techniques (Grad-CAM, Integrated Gradients (IG), and Layer-wise Relevance Propagation (LRP)) are incorporated. A comparative evaluation shows that LRP achieves the highest performance across all explainability metrics, with 98.64% accuracy, 0.74 F1-score, and 0.78 IoU. The proposed model provides transparent and highly accurate diagnostic predictions, offering a reliable clinical decision-support tool: it achieves 0.9801 accuracy, 0.9223 sensitivity, 0.9909 specificity, 0.9154 precision, and 0.9360 F1-score, demonstrating strong potential for real-world brain tumor diagnosis and personalized treatment strategies.
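A hedged sketch of the DenseNet201-feature-plus-SVM idea described above, assuming a recent torchvision; the preprocessing, layer choice, and the commented SVM call are illustrative assumptions rather than the paper's pipeline.

```python
# Sketch: use a pretrained DenseNet201 as a fixed feature extractor and feed the
# pooled 1920-d features to an SVM for binary classification (illustrative only).
import torch
from torch import nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.densenet201(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
backbone.classifier = nn.Identity()                       # drop the ImageNet head
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    # batch: (N, 3, 224, 224) tensor of normalized MRI slices
    return backbone(batch).numpy()

# Downstream binary classification on the extracted features (hypothetical usage):
# svm = SVC(kernel="rbf", probability=True).fit(extract_features(train_imgs), train_labels)
```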

The value of machine learning based on spectral CT quantitative parameters in the distinguishing benign from malignant thyroid micro-nodules.

Song Z, Liu Q, Huang J, Zhang D, Yu J, Zhou B, Ma J, Zou Y, Chen Y, Tang Z

PubMed · Jul 1, 2025
More thyroid micro-nodules have been diagnosed annually in recent years owing to advances in diagnostic technology and increased public health awareness. This study explores the value of various machine learning (ML) algorithms based on dual-layer spectral computed tomography (DLCT) quantitative parameters in distinguishing benign from malignant thyroid micro-nodules. A total of 338 thyroid micro-nodules (177 malignant and 161 benign) were randomly divided into a training cohort (n = 237) and a testing cohort (n = 101) at a ratio of 7:3. Four typical radiological features and 19 DLCT quantitative parameters in the arterial and venous phases were measured. Recursive feature elimination was employed for variable selection. Three ML algorithms, support vector machine (SVM), logistic regression (LR), and naive Bayes (NB), were implemented to construct predictive models, and predictive performance was evaluated via receiver operating characteristic (ROC) curve analysis. A variable set of 6 key variables, selected with the "one standard error" rule, was identified for the SVM model, which performed well in the training and testing cohorts (area under the ROC curve (AUC): 0.924 and 0.931, respectively). A set of 2 key variables was identified for the NB model (AUC: 0.882 and 0.899 in the training and testing cohorts, respectively), and a set of 8 key variables was identified for the LR model (AUC: 0.924 and 0.925, respectively). In all, nine ML models were developed with varying variable sets (2, 6, or 8 variables), all of which consistently achieved AUC values above 0.85 in the training, cross-validation (CV)-training, CV-validation, and testing cohorts. Artificial intelligence models based on DLCT quantitative parameters are promising for distinguishing benign from malignant thyroid micro-nodules.
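The sketch below illustrates the general workflow of recursive feature elimination followed by an SVM/LR/naive Bayes comparison on ROC AUC; the synthetic feature matrix merely mimics the 23 candidate variables and is not the study's data.

```python
# Illustrative sketch (not the study's pipeline): RFE to pick a compact variable
# set, then three classifiers compared by test-set AUC.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Stand-in for 19 DLCT parameters + 4 radiological features across 338 nodules.
X, y = make_classification(n_samples=338, n_features=23, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

for name, model in [("SVM", SVC(probability=True)),
                    ("LR", LogisticRegression(max_iter=1000)),
                    ("NB", GaussianNB())]:
    prob = model.fit(X_tr_sel, y_tr).predict_proba(X_te_sel)[:, 1]
    print(name, "test AUC:", round(roc_auc_score(y_te, prob), 3))
```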

A Machine Learning Model for Predicting the HER2 Positive Expression of Breast Cancer Based on Clinicopathological and Imaging Features.

Qin X, Yang W, Zhou X, Yang Y, Zhang N

PubMed · Jul 1, 2025
To develop a machine learning (ML) model based on clinicopathological and imaging features to predict human epidermal growth factor receptor 2 (HER2)-positive expression (HER2-p) in breast cancer (BC), and to compare its performance with that of a logistic regression (LR) model. A total of 2541 consecutive female patients with pathologically confirmed primary breast lesions were enrolled. Based on chronological order, 2034 patients treated between January 2018 and December 2022 were designated as the retrospective development cohort, while 507 patients treated between January 2023 and May 2024 were designated as the prospective validation cohort. Within the development cohort, patients were randomly divided into a training cohort (n=1628) and a test cohort (n=406) in an 8:2 ratio. Pretreatment mammography (MG) and breast MRI data, along with clinicopathological features, were recorded. Extreme Gradient Boosting (XGBoost) combined with an artificial neural network (ANN), as well as multivariate LR analysis, were employed to extract features associated with HER2 positivity in BC and to develop an ANN model (using the XGBoost-selected features) and an LR model, respectively. Predictive value was assessed using receiver operating characteristic (ROC) curves. Following Recursive Feature Elimination with Cross-Validation (RFE-CV) for feature dimensionality reduction, the XGBoost algorithm identified tumor size, suspicious calcifications, Ki-67 index, spiculation, and minimum apparent diffusion coefficient (minimum ADC) as the key feature subset indicative of HER2-p in BC. The ANN model consistently outperformed the LR model, achieving an area under the curve (AUC) of 0.853 (95% CI: 0.837-0.872) in the training cohort, 0.821 (95% CI: 0.798-0.853) in the test cohort, and 0.809 (95% CI: 0.776-0.841) in the validation cohort. The ANN model, built using the significant feature subset identified by the XGBoost algorithm with RFE-CV, demonstrates potential for predicting HER2-p in BC.
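A minimal sketch of the two-stage idea (XGBoost-driven RFE-CV feature selection followed by a small neural network), assuming the xgboost and scikit-learn packages; the synthetic data and network size are placeholders, not the study's configuration.

```python
# Stage 1: XGBoost importances drive cross-validated recursive feature elimination.
# Stage 2: an ANN (MLP) is trained on the selected feature subset.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

selector = RFECV(XGBClassifier(eval_metric="logloss"), cv=5).fit(X_tr, y_tr)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
ann.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, ann.predict_proba(selector.transform(X_te))[:, 1])
print("selected features:", int(selector.n_features_), "| test AUC:", round(auc, 3))
```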

Prediction of High-risk Capsule Characteristics for Recurrence of Pleomorphic Adenoma in the Parotid Gland Based on Habitat Imaging and Peritumoral Radiomics: A Two-center Study.

Wang Y, Dai A, Wen Y, Sun M, Gao J, Yin Z, Han R

PubMed · Jul 1, 2025
This study aims to develop and validate an ultrasound-based habitat imaging and peritumoral radiomics model for predicting high-risk capsule characteristics associated with recurrence of pleomorphic adenoma (PA) of the parotid gland, while also exploring the optimal extent of the peritumoral region. A retrospective analysis was conducted on 325 patients (171 in the training set, 74 in the validation set, and 80 in the testing set) diagnosed with PA at two medical centers. Univariate and multivariate logistic regression analyses were performed to identify clinical risk factors. The tumor was segmented into four habitat subregions using K-means clustering, and peritumoral regions were expanded at thicknesses of 1, 3, and 5 mm. Radiomics features were extracted from the intratumoral, habitat, and peritumoral regions to construct predictive models, integrating three machine learning classifiers: SVM, RandomForest, and XGBoost. Additionally, a combined model was developed by incorporating peritumoral features and clinical factors on top of habitat imaging. Model performance was evaluated using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA), and SHAP analysis was employed to improve interpretability. The RandomForest habitat-imaging model consistently outperformed the other models, with AUC values of 0.881, 0.823, and 0.823 for the training, validation, and testing sets, respectively. Incorporating peri-1mm features and clinical factors into the combined model slightly improved performance, yielding AUC values of 0.898, 0.833, and 0.829 for each set. The calibration curves and DCA showed excellent fit and substantial clinical net benefit for the combined model. The combined model exhibits robust predictive performance in identifying high-risk capsule characteristics for recurrence of parotid PA and may assist in determining optimal surgical margins and assessing patient prognosis.
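To illustrate the habitat and peritumoral steps described above, the sketch below clusters tumor pixels into four subregions with K-means and builds 1/3/5-pixel peritumoral rings by binary dilation; the image, mask, and the pixel-to-millimetre equivalence are synthetic assumptions.

```python
# Minimal habitat-imaging / peritumoral sketch on a synthetic 2-D "ultrasound" array.
import numpy as np
from scipy.ndimage import binary_dilation
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.normal(size=(128, 128))            # stand-in for an ultrasound slice
tumor_mask = np.zeros_like(image, dtype=bool)
tumor_mask[40:90, 40:90] = True                # stand-in for the tumor segmentation

# Habitat imaging: cluster tumor pixels into 4 intensity-based subregions.
voxels = image[tumor_mask].reshape(-1, 1)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(voxels)
habitats = np.zeros_like(image, dtype=int)
habitats[tumor_mask] = labels + 1              # 1..4 inside the tumor, 0 outside

# Peritumoral rings at roughly 1/3/5 "pixel-mm" (real spacing handling omitted).
for mm in (1, 3, 5):
    ring = binary_dilation(tumor_mask, iterations=mm) & ~tumor_mask
    print(f"peri-{mm}mm ring pixels:", int(ring.sum()))
```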

Leveraging multithreading on edge computing for smart healthcare based on intelligent multimodal classification approach.

Alghareb FS, Hasan BT

PubMed · Jul 1, 2025
Medical digitization has developed intensively in the last decade, paving the way for computer-aided medical diagnosis research. Anomaly detection based on machine and deep learning techniques has therefore been widely employed in healthcare applications such as medical image classification and monitoring of patients' vital signs. To effectively leverage digitized medical records for identifying challenges in healthcare, this manuscript presents a smart Clinical Decision Support System (CDSS) dedicated to automated diagnosis from multimodal medical data, proposing a smart healthcare system for medical data management and decision-making. To deliver rapid, timely diagnosis, thread-level parallelism (TLP) is used to distribute classification tasks in parallel across three edge computing devices, each employing an AI module for on-device classification. In contrast to existing machine and deep learning classification techniques, the proposed multithreaded architecture realizes a hybrid (ML and DL) processing module on each edge node. The presented edge computing-based parallel architecture thus captures a high degree of parallelism, tailored for handling multiple categories of medical records. The cluster comprises three Raspberry Pi edge devices and an edge server. Lightweight neural networks, such as MobileNet, EfficientNet, and ResNet18, are trained and optimized using genetic algorithms to classify brain tumor, pneumonia, and colon cancer. Models were deployed in Python, with PyCharm running on the edge server and Thonny installed on the edge nodes. In terms of accuracy, the proposed GA-optimized ResNet18 for pneumonia diagnosis achieves 93.59% predictive accuracy and reduces classifier computational complexity by 33.59%, while outstanding accuracies of 99.78% and 100% were achieved with EfficientNet-v2 for brain tumor and colon cancer prediction, respectively, with both models preserving a 25% reduction in the classifier. More importantly, inference speedups of 28.61% and 29.08% were obtained with the parallel 2-DL-thread and 3-DL-thread configurations, respectively, compared to the sequential implementation. The proposed multimodal, multithreaded architecture therefore offers promising prospects for comprehensive and accurate anomaly detection across patients' medical images and vital signs, contributing to the advancement of healthcare services and aiming to improve patient diagnosis and therapy outcomes.
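A conceptual sketch of the thread-level parallelism idea: one Python thread per modality runs its classifier concurrently, mimicking dispatch to separate edge nodes. The classify_* functions and their outputs are hypothetical placeholders, not the paper's deployment code.

```python
# Each thread handles one modality, standing in for an on-device AI module.
import threading

def classify_brain_mri(image):      # placeholder for the brain-tumor model on node 1
    return "brain: no tumor"

def classify_chest_xray(image):     # placeholder for the pneumonia model on node 2
    return "chest: pneumonia"

def classify_colon_slide(image):    # placeholder for the colon-cancer model on node 3
    return "colon: benign"

results = {}
def worker(name, fn, image):
    results[name] = fn(image)       # each thread writes to its own key

tasks = [("brain", classify_brain_mri, None),
         ("chest", classify_chest_xray, None),
         ("colon", classify_colon_slide, None)]
threads = [threading.Thread(target=worker, args=t) for t in tasks]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```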

Deep learning based classification of tibio-femoral knee osteoarthritis from lateral view knee joint X-ray images.

Abdullah SS, Rajasekaran MP, Hossen MJ, Wong WK, Ng PK

PubMed · Jul 1, 2025
To design an effective deep learning-driven method to locate and classify the tibio-femoral knee joint space width (JSW) in both anterior-posterior (AP) and lateral views, and to evaluate and compare how well the approach classifies radiographic tibio-femoral knee joint osteoarthritis from AP and lateral-view digital knee X-ray images. We use 4334 knee X-ray images in this study. This paper introduces a methodology to locate, classify, and compare the outcomes of tibio-femoral knee joint osteoarthritis from both AP and lateral knee X-ray images. We fine-tuned DenseNet 201 with transfer learning to extract features for detecting and classifying tibio-femoral knee joint osteoarthritis from both AP-view and lateral-view knee X-ray images, and compared the proposed model with several classifiers. The proposed model locates the tibio-femoral knee JSW with a localization accuracy of 98.12% (lateral view) and 99.32% (AP view). The classification accuracy is 92.42% for the lateral view and 98.57% for the AP view, indicating the performance of automatic detection and classification of tibio-femoral knee joint osteoarthritis in both views. We present the first automated deep learning approach to classify tibio-femoral osteoarthritis on both the AP view and the lateral view. The approach was trained on the femur and tibial bone regions from both AP-view and lateral-view digital X-ray images and locates and classifies tibio-femoral knee joint osteoarthritis better than existing approaches. It will help clinicians and medical experts analyze the progression of tibio-femoral knee OA in different views. The proposed approach performs better in the AP view than in the lateral view and, compared to existing architectures, offers excellent results with fine-tuning.
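An illustrative transfer-learning sketch (not the paper's exact setup) showing DenseNet201 with its ImageNet head replaced and only the new classifier trained; the class count and the freeze-the-backbone strategy are assumptions.

```python
# Fine-tune a pretrained DenseNet201 head for knee OA grading (illustrative only).
import torch
from torch import nn
from torchvision import models

num_classes = 5                                    # assumption: e.g. KL grades 0-4
model = models.densenet201(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

# Freeze the convolutional backbone and train only the new classification head.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a batch of AP/lateral knee X-ray tensors (N, 3, 224, 224):
# loss = criterion(model(x_batch), y_batch); loss.backward(); optimizer.step()
```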

Deep learning radiomics and mediastinal adipose tissue-based nomogram for preoperative prediction of postoperative brain metastasis risk in non-small cell lung cancer.

Niu Y, Jia HB, Li XM, Huang WJ, Liu PP, Liu L, Liu ZY, Wang QJ, Li YZ, Miao SD, Wang RT, Duan ZX

PubMed · Jul 1, 2025
Brain metastasis (BM) significantly affects the prognosis of non-small cell lung cancer (NSCLC) patients. Increasing evidence suggests that adipose tissue influences cancer progression and metastasis. This study aimed to develop a predictive nomogram integrating mediastinal fat area (MFA) and deep learning (DL)-derived tumor characteristics to stratify postoperative BM risk in NSCLC patients. A retrospective cohort of 585 surgically resected NSCLC patients was analyzed. Preoperative computed tomography (CT) scans were utilized to quantify MFA using ImageJ software (radiologist-validated measurements). Concurrently, a DL algorithm extracted tumor radiomic features, generating a deep learning brain metastasis score (DLBMS). Multivariate logistic regression identified independent BM predictors, which were incorporated into a nomogram. Model performance was assessed via area under the receiver operating characteristic curve (AUC), calibration plots, integrated discrimination improvement (IDI), net reclassification improvement (NRI), and decision curve analysis (DCA). Multivariate analysis identified N stage, EGFR mutation status, MFA, and DLBMS as independent predictors of BM. The nomogram achieved superior discriminative capacity (AUC: 0.947 in the test set), significantly outperforming conventional models. MFA contributed substantially to predictive accuracy, with IDI and NRI values confirming its incremental utility (IDI: 0.123, P < 0.001; NRI: 0.386, P = 0.023). Calibration analysis demonstrated strong concordance between predicted and observed BM probabilities, while DCA confirmed clinical net benefit across risk thresholds. This DL-enhanced nomogram, incorporating MFA and tumor radiomics, represents a robust and clinically useful tool for preoperative prediction of postoperative BM risk in NSCLC. The integration of adipose tissue metrics with advanced imaging analytics advances personalized prognostic assessment in NSCLC patients. The online version contains supplementary material available at 10.1186/s12885-025-14466-5.
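As a hedged sketch of the nomogram's statistical core, the code below fits a multivariable logistic regression over stand-ins for N stage, EGFR status, MFA, and DLBMS and reports a test AUC; every value and coefficient is simulated, not derived from the study.

```python
# Multivariable logistic regression over synthetic predictors (NOT the study's model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 585                                         # cohort size from the abstract
X = np.column_stack([
    rng.integers(0, 3, n),                      # N stage (0/1/2), stand-in coding
    rng.integers(0, 2, n),                      # EGFR mutation status (0/1)
    rng.normal(30, 8, n),                       # mediastinal fat area (synthetic units)
    rng.normal(0, 1, n),                        # deep learning BM score (DLBMS), synthetic
])
logit = 0.6 * X[:, 0] + 0.8 * X[:, 1] + 0.05 * (X[:, 2] - 30) + 1.2 * X[:, 3]
y = (logit + rng.logistic(size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```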

Multi-parametric MRI Habitat Radiomics Based on Interpretable Machine Learning for Preoperative Assessment of Microsatellite Instability in Rectal Cancer.

Wang Y, Xie B, Wang K, Zou W, Liu A, Xue Z, Liu M, Ma Y

PubMed · Jul 1, 2025
This study constructed an interpretable machine learning model based on multi-parameter MRI sub-region habitat radiomics and clinicopathological features, aiming to preoperatively evaluate the microsatellite instability (MSI) status of rectal cancer (RC) patients. This retrospective study recruited 291 rectal cancer patients with pathologically confirmed MSI status and randomly divided them into a training cohort and a testing cohort at a ratio of 8:2. First, the K-means method was used for cluster analysis of tumor voxels, and sub-region and classical radiomics features were extracted from the multi-parameter MRI sequences. The synthetic minority over-sampling technique (SMOTE) was then used to balance the sample size, and finally the features were screened. Prediction models were established using logistic regression based on clinicopathological variables, classical radiomics features, and MSI-related sub-region radiomics features, and the contribution of each feature to the model decision was quantified with the Shapley Additive Explanations (SHAP) algorithm. The area under the curve (AUC) of the sub-region radiomics model in the training and testing groups was 0.848 and 0.8, respectively, both better than that of the classical radiomics and clinical models. The combined model performed best, with AUCs of 0.908 and 0.863 in the training and testing groups, respectively. We developed and validated a robust combined model that integrates clinical variables, classical radiomics features, and sub-region radiomics features to accurately determine the MSI status of RC patients. We visualized the prediction process using SHAP, enabling more effective personalized treatment plans and ultimately improving RC patient survival.
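The sketch below illustrates the class-balancing step (SMOTE) ahead of a logistic regression classifier, assuming the imbalanced-learn package; the synthetic features and the 85/15 class split are placeholders for the study's radiomics data.

```python
# SMOTE oversampling on the training split only, then logistic regression + AUC.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for habitat + classical radiomics features with imbalanced MSI labels.
X, y = make_classification(n_samples=291, n_features=40, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance the minority class
model = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```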