
Multi-modal convolutional neural network-based thyroid cytology classification and diagnosis.

Yang D, Li T, Li L, Chen S, Li X

PubMed · Jul 4 2025
The cytologic diagnosis of benign versus malignant thyroid nodules from smears obtained by ultrasound-guided fine-needle aspiration is crucial for determining subsequent treatment plans. Artificial intelligence (AI) can assist pathologists in improving the efficiency and accuracy of cytological diagnosis. We propose a novel diagnostic model based on a network architecture that integrates cytologic images and digital ultrasound image features (CI-DUF) to solve the multi-class classification task of thyroid fine-needle aspiration cytology. We compare this model with a model relying solely on cytologic images (CI) and evaluate its performance and clinical application potential in thyroid cytology diagnosis. A retrospective analysis was conducted on 384 patients with 825 thyroid cytologic images. These images were used as a dataset for training the models and were divided into training and testing sets in an 8:2 ratio to assess the performance of both the CI and CI-DUF diagnostic models. The AUROC of the CI model for thyroid cytology diagnosis was 0.9119, while that of the CI-DUF model was 0.9326. Compared with the CI model, the CI-DUF model showed significantly higher accuracy, sensitivity, and specificity in the cytologic classification of papillary carcinoma, follicular neoplasm, medullary carcinoma, and benign lesions. The proposed CI-DUF diagnostic model, which integrates multi-modal information, shows better diagnostic performance than the CI model that relies only on cytologic images, particularly excelling in thyroid cytology classification.
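
The paper's code is not shown here, but the core CI-DUF idea (fusing a CNN image embedding with ultrasound-derived features before a shared multi-class head) can be sketched minimally in PyTorch. The ResNet-18 backbone, layer sizes, and 16 ultrasound features below are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

class CIDUFNet(nn.Module):
    """Illustrative late-fusion model: cytologic image + ultrasound features."""
    def __init__(self, n_ultrasound_feats=16, n_classes=4):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN encoder would do
        backbone.fc = nn.Identity()               # expose the 512-d image embedding
        self.image_encoder = backbone
        self.classifier = nn.Sequential(
            nn.Linear(512 + n_ultrasound_feats, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),  # e.g. papillary / follicular / medullary / benign
        )

    def forward(self, image, us_feats):
        z = self.image_encoder(image)              # (B, 512)
        fused = torch.cat([z, us_feats], dim=1)    # concatenate the two modalities
        return self.classifier(fused)

model = CIDUFNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 4])
```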

A preliminary attempt to harmonize using physics-constrained deep neural networks for multisite and multiscanner MRI datasets (PhyCHarm).

Lee G, Ye DH, Oh SH

PubMed · Jul 4 2025
In magnetic resonance imaging (MRI), variations in scan parameters and scanner specifications can produce differences in image appearance. Harmonization has been proposed as a crucial image-processing step to minimize these differences. In this study, we developed an MR-physics-based harmonization framework, the Physics-Constrained Deep Neural Network for multisite and multiscanner Harmonization (PhyCHarm). PhyCHarm includes two deep neural networks: (1) a Quantitative Maps Generator that generates T1- and M0-maps, and (2) a Harmonization Network. We used an open dataset of 3T MP2RAGE images from 50 healthy individuals for the Quantitative Maps Generator and a traveling dataset of 3T T1w images from 9 healthy individuals for the Harmonization Network. The Quantitative Maps Generator was evaluated using the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and normalized root-mean-square error (NRMSE); the Harmonization Network was evaluated using SSIM, PSNR, and volumetric analysis. PhyCHarm achieved higher SSIM and PSNR, and the highest Dice scores for gray and white matter in FSL FAST segmentation, compared with U-Net, Pix2Pix, CALAMITI, and HarmonizingFlows. It also showed a greater reduction in gray- and white-matter volume differences after harmonization than these baselines. As an initial step toward more advanced harmonization techniques, we investigated the applicability of physics-based constraints within a supervised training strategy. The proposed physics constraints could be integrated with unsupervised methods, paving the way for more sophisticated harmonization.
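
For reference, the three image-quality metrics used to evaluate the Quantitative Maps Generator (SSIM, PSNR, NRMSE) can all be computed with scikit-image. This is a generic sketch on synthetic arrays, not the authors' evaluation code:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio, normalized_root_mse

rng = np.random.default_rng(0)
reference = rng.random((64, 64)).astype(np.float32)  # stand-in for a ground-truth map
estimate = reference + 0.05 * rng.standard_normal((64, 64)).astype(np.float32)

data_range = float(reference.max() - reference.min())
ssim = structural_similarity(reference, estimate, data_range=data_range)
psnr = peak_signal_noise_ratio(reference, estimate, data_range=data_range)
nrmse = normalized_root_mse(reference, estimate)     # euclidean normalization by default

print(f"SSIM={ssim:.3f}  PSNR={psnr:.2f} dB  NRMSE={nrmse:.3f}")
```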

Improving risk assessment of local failure in brain metastases patients using vision transformers - A multicentric development and validation study.

Erdur AC, Scholz D, Nguyen QM, Buchner JA, Mayinger M, Christ SM, Brunner TB, Wittig A, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus JU, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Bilger-Z A, Grosu AL, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Rueckert D, Peeken JC

PubMed · Jul 4 2025
This study investigates the use of Vision Transformers (ViTs) to predict freedom from local failure (FFLF) in patients with brain metastases using pre-operative MRI scans. The goal is to develop a model that enhances risk stratification and informs personalized treatment strategies. Within the AURORA retrospective trial, patients (n = 352) who received surgical resection followed by post-operative stereotactic radiotherapy (SRT) were collected from seven hospitals. We trained our ViT on the direct image-to-risk task using T1-CE and FLAIR sequences, incorporating clinical features along the way. We employed segmentation-guided image modifications, model adaptations, and specialized patient sampling strategies during training. The model was evaluated with five-fold cross-validation and ensemble learning across all validation runs. An external, international test cohort (n = 99) within the dataset was used to assess the generalization capability of the model, and saliency maps were generated for explainability analysis. We achieved a C-index of 0.7982 on the test cohort, surpassing all clinical, CNN-based, and hybrid baselines. Kaplan-Meier analysis showed significant FFLF risk stratification. Saliency maps focusing on the BM core confirmed that model explanations aligned with expert observations. Our ViT-based model offers potential for personalized treatment strategies and follow-up regimens in patients with brain metastases. It provides an alternative to radiomics as a robust, automated tool for clinical workflows, capable of improving patient outcomes through effective risk assessment and stratification.
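
The headline metric is the concordance index on time-to-event predictions. A minimal illustration with the lifelines package, using made-up follow-up times and risk scores rather than study data:

```python
from lifelines.utils import concordance_index

# Hypothetical follow-up times (months), event indicators (1 = local failure),
# and model risk scores (higher = earlier predicted failure).
times  = [14.0, 6.5, 22.0, 9.0, 30.0]
events = [0, 1, 0, 1, 0]
risk   = [0.2, 0.9, 0.1, 0.7, 0.3]

# concordance_index expects scores where higher values mean longer survival,
# so risk scores are negated before being passed in.
cindex = concordance_index(times, [-r for r in risk], events)
print(f"C-index = {cindex:.3f}")  # 1.0 here: the risk ordering matches the outcomes
```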

Progression risk of adolescent idiopathic scoliosis based on SHAP-Explained machine learning models: a multicenter retrospective study.

Fang X, Weng T, Zhang Z, Gong W, Zhang Y, Wang M, Wang J, Ding Z, Lai C

PubMed · Jul 4 2025
To develop an interpretable machine learning model, explained using SHAP, based on imaging features of adolescent idiopathic scoliosis extracted by convolutional neural networks (CNNs), in order to predict the risk of curve progression and identify the most accurate predictive model. This study included 233 patients with adolescent idiopathic scoliosis from three medical centers. CNNs were used to extract features from full-spine coronal X-ray images taken at three follow-up points for each patient. Imaging and clinical features from center 1 were analyzed using the Boruta algorithm to identify independent predictors. Data from center 1 were divided into training (80%) and testing (20%) sets, while data from centers 2 and 3 were used as external validation sets. Six machine learning models were constructed. Receiver operating characteristic (ROC) curves were plotted, and model performance was assessed by calculating the area under the curve (AUC), accuracy, sensitivity, and specificity in the training, testing, and external validation sets. The SHAP interpreter was used to analyze the most effective model. The six models yielded AUCs ranging from 0.565 to 0.989, accuracies from 0.600 to 0.968, sensitivities from 0.625 to 1.0, and specificities from 0.571 to 0.974. The XGBoost model achieved the best performance, with an AUC of 0.896 in the external validation set. SHAP analysis identified the change in the main Cobb angle between the second and first follow-ups [Cobb1(2−1)] as the most important predictor, followed by the main Cobb angle at the second follow-up (Cobb1-2) and the change in the secondary Cobb angle [Cobb2(2−1)]. The XGBoost model demonstrated the best predictive performance in the external validation cohort, confirming its preliminary stability and generalizability. SHAP analysis indicated that Cobb1(2−1) was the most important feature for predicting scoliosis progression. This model offers a valuable tool for clinical decision-making by enabling early identification of high-risk patients and supporting early intervention strategies through automated feature extraction and interpretable analysis. The online version contains supplementary material available at 10.1186/s12891-025-08841-3.
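
The SHAP-on-gradient-boosting workflow described here follows a standard pattern. The sketch below uses synthetic features standing in for the Cobb-angle predictors and is not the authors' pipeline:

```python
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))  # stand-ins for Cobb1(2-1), Cobb1-2, Cobb2(2-1)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)    # fast, exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)   # (n_samples, n_features)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
print(importance)  # feature 0 should dominate, mirroring Cobb1(2-1) in the study
```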

Revolutionizing medical imaging: A cutting-edge AI framework with vision transformers and perceiver IO for multi-disease diagnosis.

Khaliq A, Ahmad F, Rehman HU, Alanazi SA, Haleem H, Junaid K, Andrikopoulou E

PubMed · Jul 4 2025
The integration of artificial intelligence in medical image classification has significantly advanced disease detection. However, traditional deep learning models face persistent challenges, including poor generalizability, high false-positive rates, and difficulty distinguishing overlapping anatomical features, limiting their clinical utility. To address these limitations, this study proposes a hybrid framework combining Vision Transformers (ViT) and Perceiver IO, designed to enhance multi-disease classification accuracy. Vision Transformers leverage self-attention mechanisms to capture global dependencies in medical images, while Perceiver IO optimizes feature extraction for computational efficiency and precision. The framework is evaluated across three critical clinical domains: neurological disorders, including stroke (tested on the Brain Stroke Prediction CT Scan Image Dataset) and Alzheimer's disease (analyzed via the Best Alzheimer MRI Dataset); skin diseases, covering tinea (trained on the Skin Diseases Dataset) and melanoma (augmented with dermoscopic images from the HAM10000 dataset); and lung diseases, focusing on lung cancer (using the Lung Cancer Image Dataset) and pneumonia (evaluated with the Pneumonia Dataset containing bacterial, viral, and normal X-ray cases). For neurological disorders, the model achieved an accuracy of 0.99, precision of 0.99, recall of 1.00, and F1-score of 0.99, demonstrating robust detection of structural brain abnormalities. In skin disease classification, it attained an accuracy of 0.95, precision of 0.93, recall of 0.97, and F1-score of 0.95, highlighting its ability to differentiate fine-grained textural patterns in lesions. For lung diseases, the framework achieved an accuracy of 0.98, precision of 0.97, recall of 1.00, and F1-score of 0.98, confirming its efficacy in identifying respiratory conditions. To bridge research and clinical practice, an AI-powered chatbot was developed for real-time analysis, enabling users to upload MRI, X-ray, or skin images for automated diagnosis with confidence scores and interpretable insights. This work represents the first application of ViT and Perceiver IO to these disease categories, outperforming conventional architectures in accuracy, computational efficiency, and clinical interpretability. The framework holds significant potential for early disease detection in healthcare settings, reducing diagnostic errors, and improving treatment outcomes for clinicians, radiologists, and patients. By addressing critical limitations of traditional models, such as overlapping-feature confusion and false positives, this research advances the deployment of reliable AI tools in neurology, dermatology, and pulmonology.
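
The distinctive ingredient of Perceiver IO is a small learned latent array that cross-attends over a large set of input tokens, decoupling compute from input size. A minimal PyTorch sketch of that mechanism, with all dimensions chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    """Perceiver-style bottleneck: N latents attend over M >> N input tokens."""
    def __init__(self, dim=256, n_latents=32, n_heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):  # tokens: (B, M, dim), e.g. ViT patch embeddings
        B = tokens.shape[0]
        q = self.latents.unsqueeze(0).expand(B, -1, -1)      # (B, N, dim)
        out, _ = self.attn(query=q, key=tokens, value=tokens)
        return self.norm(out + q)                            # (B, N, dim): fixed-size summary

bottleneck = LatentCrossAttention()
patches = torch.randn(2, 196, 256)   # 14x14 patch tokens from a hypothetical encoder
print(bottleneck(patches).shape)     # torch.Size([2, 32, 256])
```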

A comparative study of machine learning models for predicting neoadjuvant chemoradiotherapy response in rectal cancer patients using radiomics and clinical features.

Ozdemir G, Tulu CN, Isik O, Olmez T, Sozutek A, Seker A

PubMed · Jul 4 2025
Neoadjuvant chemoradiotherapy (nCRT) followed by total mesorectal excision is the standard treatment for locally advanced rectal cancer. However, the response to nCRT varies significantly among patients, making it crucial to identify those unlikely to benefit in order to avoid unnecessary toxicities. Radiomics, a technique for extracting quantitative features from medical images such as computed tomography (CT), offers a promising noninvasive approach to analyze disease characteristics and potentially improve treatment decision-making. This retrospective cohort study aimed to compare the performance of various machine learning models in predicting the response to nCRT in rectal cancer based on medical data, including radiomic features extracted from CT, and to investigate the contribution of radiomics to these models. Participants who had completed long-course nCRT before undergoing surgery were retrospectively enrolled and categorized as nonresponders or responders based on pathological assessment using the Ryan tumor regression grade. Pretreatment contrast-enhanced CT scans were used to extract 101 radiomic features with the PyRadiomics library. Clinical data, including age, gender, tumor grade, presence of colostomy, carcinoembryonic antigen level, constipation status, and albumin and hemoglobin levels, were also collected. Fifteen machine learning models were trained and evaluated using 10-fold cross-validation on a training set (n = 112 patients). The performance of the trained models was then assessed on an internal test set (n = 35 patients) and an external test set (n = 40 patients) using accuracy, area under the ROC curve (AUC), recall, precision, and F1-score. Among the models, the gradient boosting classifier showed the best training performance (accuracy: 0.92, AUC: 0.95, recall: 0.96, precision: 0.93, F1-score: 0.94). On the internal test set, the extra trees classifier (ETC) achieved an accuracy of 0.84, AUC of 0.90, recall of 0.92, precision of 0.87, and F1-score of 0.90. In external validation, the ETC yielded an accuracy of 0.75, AUC of 0.79, recall of 0.91, precision of 0.76, and F1-score of 0.83. In the ETC model, patient-specific clinical biomarkers, particularly tumor grade, were more influential than radiomic features. The ETC consistently showed strong performance in predicting nCRT response, and its external validation results suggest potential for generalization.
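
Benchmarking many classifiers under 10-fold cross-validation, as done in this study, follows a standard scikit-learn pattern. This sketch uses synthetic data and three stand-in model families from the fifteen evaluated:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the radiomics + clinical feature matrix (n = 112, as in the paper).
X, y = make_classification(n_samples=112, n_features=30, n_informative=8, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "gbc": GradientBoostingClassifier(random_state=0),
    "etc": ExtraTreesClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```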

MRI-based habitat, intra-, and peritumoral machine learning model for perineural invasion prediction in rectal cancer.

Zhong J, Huang T, Jiang R, Zhou Q, Wu G, Zeng Y

PubMed · Jul 3 2025
This study aimed to analyze preoperative multimodal magnetic resonance images of patients with rectal cancer using habitat-based, intratumoral, peritumoral, and combined radiomics models for non-invasive prediction of perineural invasion (PNI) status. Data were collected from 385 pathologically confirmed rectal cancer cases across two centers. Patients from Center 1 were randomly assigned to training and internal validation groups at an 8:2 ratio; the external validation group comprised patients from Center 2. Tumors were divided into three subregions via K-means clustering. Radiomics features were extracted from intratumoral and peritumoral (3 mm beyond the tumor) regions, as well as the subregions, to form a combined dataset based on T2-weighted and diffusion-weighted imaging. The support vector machine algorithm was used to construct seven predictive models, and intratumoral, peritumoral, and subregion features were integrated to generate an additional model, referred to as the Total model. For each radiomics feature, its contribution to prediction outcomes was quantified using Shapley values, providing interpretable evidence to support clinical decision-making. The Total combined model outperformed the other predictive models in the training, internal validation, and external validation sets (area under the curve values: 0.912, 0.882, and 0.880, respectively). The integration of intratumoral, peritumoral, and subregion features represents an effective approach for predicting PNI in rectal cancer, providing valuable guidance for rectal cancer treatment, along with enhanced clinical decision-making precision and reliability.
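
Habitat analysis partitions tumor voxels into subregions by clustering per-voxel intensity features. A toy sketch of the K-means step, with fabricated two-channel (T2W, DWI) voxel features and k = 3 assumed from the text:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Per-voxel feature vectors inside a tumor mask: fake (T2W, DWI) intensity pairs
# drawn from three loose clusters to mimic heterogeneous tumor subregions.
voxels = np.vstack([
    rng.normal([1.0, 0.2], 0.1, size=(300, 2)),
    rng.normal([0.4, 0.8], 0.1, size=(300, 2)),
    rng.normal([0.7, 0.5], 0.1, size=(300, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(voxels)
labels = kmeans.labels_      # habitat assignment per voxel
# Mapping labels back onto the tumor mask yields the three subregions from
# which radiomics features are then extracted separately.
print(np.bincount(labels))   # roughly 300 voxels per habitat
```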

A deep active learning framework for mitotic figure detection with minimal manual annotation and labelling.

Liu E, Lin A, Kakodkar P, Zhao Y, Wang B, Ling C, Zhang Q

PubMed · Jul 3 2025
Accurately and efficiently identifying mitotic figures (MFs) is crucial for diagnosing and grading various cancers, including glioblastoma (GBM), a highly aggressive brain tumour requiring precise and timely intervention. Traditional manual counting of MFs in whole slide images (WSIs) is labour-intensive and prone to interobserver variability. Our study introduces a deep active learning framework that addresses these challenges with minimal human intervention. We utilized a dataset of GBM WSIs from The Cancer Genome Atlas (TCGA). Our framework integrates convolutional neural networks (CNNs) with an active learning strategy. Initially, a CNN is trained on a small, annotated dataset. The framework then identifies uncertain samples from the unlabelled data pool, which are subsequently reviewed by experts. These ambiguous cases are verified and used for model retraining. This iterative process continues until the model achieves satisfactory performance. Our approach achieved 81.75% precision and 82.48% recall for MF detection, and an accuracy of 84.1% for MF subclass classification. Furthermore, it reduced annotation time to approximately 900 minutes across 66 WSIs, nearly halving the effort required by traditional methods. Our deep active learning framework demonstrates a substantial improvement in both efficiency and accuracy for MF detection and classification in GBM WSIs. By reducing reliance on large annotated datasets, it minimizes manual effort while maintaining high performance. This methodology can be generalized to other medical imaging tasks, supporting broader applications in the healthcare domain.
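
The core of the active learning loop, scoring unlabelled samples by model uncertainty and routing the most ambiguous ones to experts, can be sketched generically. Entropy-based uncertainty is one common choice and is assumed here; the paper's actual criterion may differ:

```python
import numpy as np

def select_uncertain(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` most uncertain samples by predictive entropy.

    probs: (n_samples, n_classes) softmax outputs on the unlabelled pool.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:budget]

# Hypothetical model outputs on four unlabelled candidate patches.
pool_probs = np.array([
    [0.98, 0.02],   # confident: skip
    [0.55, 0.45],   # ambiguous: send to expert
    [0.90, 0.10],
    [0.51, 0.49],   # most ambiguous
])
print(select_uncertain(pool_probs, budget=2))  # [3 1]
```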

Interpretable and generalizable deep learning model for preoperative assessment of microvascular invasion and outcome in hepatocellular carcinoma based on MRI: a multicenter study.

Dong X, Jia X, Zhang W, Zhang J, Xu H, Xu L, Ma C, Hu H, Luo J, Zhang J, Wang Z, Ji W, Yang D, Yang Z

PubMed · Jul 3 2025
This study aimed to develop an interpretable, domain-generalizable deep learning model for microvascular invasion (MVI) assessment in hepatocellular carcinoma (HCC). Using a retrospective dataset of 546 HCC patients from five centers, we developed and validated a clinical-radiological model and deep learning models for MVI prediction. The models were developed on a dataset of 263 cases from three centers, internally validated on a set of 66 patients, and externally tested on two independent sets. An adversarial network-based deep learning (AD-DL) model was developed to learn domain-invariant features from the multiple centers within the training set. The area under the receiver operating characteristic curve (AUC) was calculated against pathological MVI status. With the best-performing model, early recurrence-free survival (ERFS) stratification was validated on the external test set by the log-rank test, and differentially expressed genes (DEGs) associated with MVI status were examined using RNA-sequencing data from The Cancer Imaging Archive. The AD-DL model demonstrated the highest diagnostic performance and generalizability, with an AUC of 0.793 in the internal test set, 0.801 in external test set 1, and 0.773 in external test set 2. The model's prediction of MVI status also demonstrated a significant correlation with ERFS (p = 0.048). DEGs associated with MVI status were primarily enriched in metabolic processes, the Wnt signaling pathway, and the epithelial-mesenchymal transition process. The AD-DL model allows preoperative MVI prediction and ERFS stratification in HCC patients, with good generalizability and biological interpretability. Whereas current MVI assessment models for HCC lack interpretability and generalizability, the adversarial network-based model outperformed both the clinical-radiological model and squeeze-and-excitation network-based models, and the accompanying biological function analysis enhances its interpretability and clinical translatability.
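
Adversarial domain-invariant training of the kind AD-DL describes typically uses a gradient reversal layer in front of a domain (here, center) discriminator. The sketch below shows that standard mechanism in PyTorch; it is an assumption about the design, not the authors' code:

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; multiplies the gradient by -lambda going back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Features flow normally to the MVI head, but the center discriminator sees
# reversed gradients, pushing the encoder toward center-invariant features.
feats = torch.randn(4, 128, requires_grad=True)
domain_loss = grad_reverse(feats, lam=0.5).sum()  # stand-in for a domain head + loss
domain_loss.backward()
print(feats.grad[0, :3])  # all -0.5: the gradient was flipped and scaled
```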

Can a Whole-Thyroid-Based CT Radiomics Model Achieve the Performance of a Lesion-Based Model in Predicting Thyroid Nodule Malignancy? A Comparative Study.

Yuan W, Wu J, Mai W, Li H, Li Z

PubMed · Jul 3 2025
Machine learning is now extensively implemented in medical imaging for preoperative risk stratification and post-therapeutic outcome assessment, enhancing clinical decision-making. Numerous studies have predicted whether thyroid nodules are benign or malignant using a nodule-based approach, which is time-consuming and inefficient and overlooks the peritumoral region. This study evaluated the effectiveness of using the whole thyroid as the region of interest (ROI) in differentiating between benign and malignant thyroid nodules, exploring the potential application value of the entire gland. The study enrolled 1121 patients with thyroid nodules between February 2017 and May 2023, all of whom underwent contrast-enhanced CT scans prior to surgical intervention. Radiomics features were extracted from arterial-phase images, and feature dimensionality reduction was performed using the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm. Four machine learning models were trained on the selected features within the training cohort and subsequently evaluated on the independent validation cohort. The diagnostic performance of whole-thyroid versus nodule-based radiomics models was compared through receiver operating characteristic (ROC) curve analysis and area under the curve (AUC) metrics. The nodule-based logistic regression model achieved an AUC of 0.81 in the validation set, with sensitivity, specificity, and accuracy of 78.6%, 69.4%, and 75.6%, respectively. The whole-thyroid-based random forest model attained an AUC of 0.80, with sensitivity, specificity, and accuracy of 90.0%, 51.9%, and 80.1%, respectively. The relative AUC differences for the LR, DT, RF, and SVM models were approximately −2.47%, 0.00%, −4.76%, and −4.94%, respectively, and the DeLong test showed no significant differences among the four models regardless of whether the ROI was the primary lesion or the whole thyroid. There was thus no significant difference between nodule-based and whole-thyroid-based ROI strategies in distinguishing benign from malignant thyroid nodules. We hypothesize that the whole-thyroid approach provides enhanced diagnostic capability for detecting papillary thyroid carcinomas (PTCs) with ill-defined margins.
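
LASSO-based dimensionality reduction of radiomics features reduces to fitting an L1-penalized model and keeping the nonzero coefficients. A generic sketch on synthetic features (regression-flavored for simplicity; the study's task is binary classification):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 50))  # stand-in radiomics feature matrix
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=200)  # few truly informative features

Xs = StandardScaler().fit_transform(X)   # LASSO needs comparable feature scales
lasso = LassoCV(cv=5, random_state=7).fit(Xs, y)

selected = np.flatnonzero(lasso.coef_)   # indices surviving the L1 penalty
print(f"kept {selected.size}/50 features:", selected[:10])
```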
