
PGMI assessment in mammography: AI software versus human readers.

Santner T, Ruppert C, Gianolini S, Stalheim JG, Frei S, Hondl M, Fröhlich V, Hofvind S, Widmann G

PubMed | Jul 5, 2025
The aim of this study was to evaluate human inter-reader agreement on the parameters included in the PGMI (perfect-good-moderate-inadequate) classification of screening mammograms and to explore the role of artificial intelligence (AI) as an alternative reader. Five radiographers from three European countries independently performed a PGMI assessment of 520 anonymized mammography screening examinations randomly selected from representative subsets from 13 imaging centres within two European countries. A dedicated AI software served as a sixth reader. Accuracy, Cohen's kappa, and confusion matrices were calculated to compare the software's predictions against the individual assessments of the readers, as well as potential discrepancies between them. A questionnaire and a personality test were used to better understand the decision-making processes of the human readers. Significant inter-reader variability among human readers, with poor to moderate agreement (κ = -0.018 to κ = 0.41), was observed, with some showing more homogeneous interpretations of single features and overall quality than others. In comparison, the software surpassed human inter-reader agreement in detecting glandular tissue cuts, mammilla deviation, pectoral muscle detection, and pectoral angle measurement, while the remaining features and overall image quality showed performance comparable to human assessment. Notably, human inter-reader disagreement in PGMI assessment of mammography is considerably high. AI software may already reliably categorize quality. Its potential for standardization and immediate feedback to achieve and monitor high levels of quality in screening programs needs further attention and should be included in future approaches. AI has promising potential for automated assessment of diagnostic image quality. Faster, more representative, and more objective feedback may support radiographers in their quality management processes. Direct transformation of common PGMI workflows into an AI algorithm could be challenging.
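
As a rough illustration of the agreement statistics used here, the sketch below computes pairwise Cohen's kappa and a confusion matrix with scikit-learn. The reader labels are synthetic placeholders, not the study's data.

```python
# A minimal sketch of pairwise inter-reader agreement on PGMI labels.
# Reader ratings are randomly generated stand-ins for illustration only.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(0)
labels = ["P", "G", "M", "I"]

# Hypothetical PGMI ratings from five human readers plus AI on 520 exams.
readers = {f"reader_{i}": rng.choice(labels, size=520) for i in range(1, 6)}
readers["ai"] = rng.choice(labels, size=520)

# Pairwise Cohen's kappa across all readers (human and AI).
for a, b in combinations(readers, 2):
    kappa = cohen_kappa_score(readers[a], readers[b], labels=labels)
    print(f"{a} vs {b}: kappa = {kappa:.3f}")

# Confusion matrix for one pair, e.g., AI against reader_1.
print(confusion_matrix(readers["reader_1"], readers["ai"], labels=labels))
```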

A comparative study of machine learning models for predicting neoadjuvant chemoradiotherapy response in rectal cancer patients using radiomics and clinical features.

Ozdemir G, Tulu CN, Isik O, Olmez T, Sozutek A, Seker A

PubMed | Jul 4, 2025
Neoadjuvant chemoradiotherapy (nCRT) followed by total mesorectal excision is the standard treatment for locally advanced rectal cancer. However, the response to nCRT varies significantly among patients, making it crucial to identify those unlikely to benefit so that unnecessary toxicities can be avoided. Radiomics, a technique for extracting quantitative features from medical images such as computed tomography (CT), offers a promising noninvasive approach to analyze disease characteristics and potentially improve treatment decision-making. This retrospective cohort study aimed to compare the performance of various machine learning models in predicting the response to nCRT in rectal cancer based on medical data, including radiomic features extracted from CT, and to investigate the contribution of radiomics to these models. Participants who had completed a long course of nCRT before undergoing surgery were retrospectively enrolled. The patients were categorized into two groups, nonresponders and responders, based on pathological assessment using the Ryan tumor regression grade. Pretreatment contrast-enhanced CT scans were used to extract 101 radiomic features using the PyRadiomics library. Clinical data, including age, gender, tumor grade, presence of colostomy, carcinoembryonic antigen level, constipation status, and albumin and hemoglobin levels, were also collected. Fifteen machine learning models were trained and evaluated using 10-fold cross-validation on a training set (n = 112 patients). The performance of the trained models was then assessed on an internal test set (n = 35 patients) and an external test set (n = 40 patients) using accuracy, area under the ROC curve (AUC), recall, precision, and F1-score. Among the models, the gradient boosting classifier showed the best training performance (accuracy: 0.92, AUC: 0.95, recall: 0.96, precision: 0.93, F1-score: 0.94). On the internal test set, the extra trees classifier (ETC) achieved an accuracy of 0.84, AUC of 0.90, recall of 0.92, precision of 0.87, and F1-score of 0.90. In external validation, the ETC model yielded an accuracy of 0.75, AUC of 0.79, recall of 0.91, precision of 0.76, and F1-score of 0.83. Patient-specific biomarkers were more influential than radiomic features in the ETC model. The ETC consistently showed strong performance in predicting nCRT response. Clinical biomarkers, particularly tumor grade, were more influential than radiomic features. The model's external validation performance suggests potential for generalization.
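
For readers unfamiliar with the shape of such a pipeline, the sketch below pairs PyRadiomics feature extraction with a scikit-learn gradient boosting classifier under 10-fold cross-validation. The `cohort.csv` table, its columns, and the default extraction settings are hypothetical stand-ins, not the study's protocol.

```python
# A minimal sketch: PyRadiomics features from CT + gradient boosting
# with 10-fold CV. Paths and the cohort table are placeholders.
import pandas as pd
from radiomics import featureextractor
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

extractor = featureextractor.RadiomicsFeatureExtractor()

def extract_features(image_path: str, mask_path: str) -> dict:
    """Extract radiomic features for one patient's CT and tumour mask."""
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values; drop the diagnostics metadata entries.
    return {k: v for k, v in result.items() if not k.startswith("diagnostics")}

# Hypothetical cohort table: one row per patient with paths and labels.
cohort = pd.read_csv("cohort.csv")  # columns: ct_path, mask_path, responder
X = pd.DataFrame([extract_features(r.ct_path, r.mask_path) for r in cohort.itertuples()])
y = cohort["responder"]

model = GradientBoostingClassifier(random_state=42)
scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"10-fold CV AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```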

Revolutionizing medical imaging: A cutting-edge AI framework with vision transformers and perceiver IO for multi-disease diagnosis.

Khaliq A, Ahmad F, Rehman HU, Alanazi SA, Haleem H, Junaid K, Andrikopoulou E

PubMed | Jul 4, 2025
The integration of artificial intelligence in medical image classification has significantly advanced disease detection. However, traditional deep learning models face persistent challenges, including poor generalizability, high false-positive rates, and difficulties in distinguishing overlapping anatomical features, limiting their clinical utility. To address these limitations, this study proposes a hybrid framework combining Vision Transformers (ViT) and Perceiver IO, designed to enhance multi-disease classification accuracy. Vision Transformers leverage self-attention mechanisms to capture global dependencies in medical images, while Perceiver IO optimizes feature extraction for computational efficiency and precision. The framework is evaluated across three critical clinical domains: neurological disorders, including stroke (tested on the Brain Stroke Prediction CT Scan Image Dataset) and Alzheimer's disease (analyzed via the Best Alzheimer MRI Dataset); skin diseases, covering tinea (trained on the Skin Diseases Dataset) and melanoma (augmented with dermoscopic images from the HAM10000/HAM10k dataset); and lung diseases, focusing on lung cancer (using the Lung Cancer Image Dataset) and pneumonia (evaluated with the Pneumonia Dataset containing bacterial, viral, and normal X-ray cases). For neurological disorders, the model achieved 0.99 accuracy, 0.99 precision, 1.00 recall, and 0.99 F1-score, demonstrating robust detection of structural brain abnormalities. In skin disease classification, it attained 0.95 accuracy, 0.93 precision, 0.97 recall, and 0.95 F1-score, highlighting its ability to differentiate fine-grained textural patterns in lesions. For lung diseases, the framework achieved 0.98 accuracy, 0.97 precision, 1.00 recall, and 0.98 F1-score, confirming its efficacy in identifying respiratory conditions. To bridge research and clinical practice, an AI-powered chatbot was developed for real-time analysis, enabling users to upload MRI, X-ray, or skin images for automated diagnosis with confidence scores and interpretable insights. This work represents the first application of ViT and Perceiver IO for these disease categories, outperforming conventional architectures in accuracy, computational efficiency, and clinical interpretability. The framework holds significant potential for early disease detection in healthcare settings, reducing diagnostic errors and improving treatment outcomes for clinicians, radiologists, and patients. By addressing critical limitations of traditional models, such as overlapping feature confusion and false positives, this research advances the deployment of reliable AI tools in neurology, dermatology, and pulmonology.
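
A minimal PyTorch sketch of the general pattern, under assumed dimensions: ViT-style patch tokens are read into a small set of learned latents via cross-attention, in the spirit of Perceiver IO. This is an illustrative toy, not the paper's architecture.

```python
# Sketch of a ViT + Perceiver-IO-style hybrid classifier. All sizes
# (dim, latents, classes, depth) are illustrative assumptions.
import torch
import torch.nn as nn

class ViTPerceiverHybrid(nn.Module):
    def __init__(self, dim=256, num_latents=32, num_classes=4):
        super().__init__()
        # Stand-in ViT encoder: patch embedding + transformer layers.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        encoder_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.vit = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Perceiver-style learned latent array and cross-attention read-in.
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, images):                                        # (B, 3, H, W)
        tokens = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.vit(tokens)                                     # global self-attention
        latents = self.latents.expand(images.size(0), -1, -1)
        fused, _ = self.cross_attn(latents, tokens, tokens)           # latents attend to patches
        return self.head(fused.mean(dim=1))                           # pooled latents -> logits

logits = ViTPerceiverHybrid()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```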

Progression risk of adolescent idiopathic scoliosis based on SHAP-Explained machine learning models: a multicenter retrospective study.

Fang X, Weng T, Zhang Z, Gong W, Zhang Y, Wang M, Wang J, Ding Z, Lai C

PubMed | Jul 4, 2025
To develop an interpretable machine learning model, explained using SHAP, based on imaging features of adolescent idiopathic scoliosis extracted by convolutional neural networks (CNNs), to predict the risk of curve progression and identify the most accurate predictive model. This study included 233 patients with adolescent idiopathic scoliosis from three medical centers. CNNs were used to extract features from full-spine coronal X-ray images taken at three follow-up points for each patient. Imaging and clinical features from center 1 were analyzed using the Boruta algorithm to identify independent predictors. Data from center 1 were divided into training (80%) and testing (20%) sets, while data from centers 2 and 3 were used as external validation sets. Six machine learning models were constructed. Receiver operating characteristic (ROC) curves were plotted, and model performance was assessed by calculating the area under the curve (AUC), accuracy, sensitivity, and specificity in the training, testing, and external validation sets. The SHAP interpreter was used to analyze the most effective model. The six models yielded AUCs ranging from 0.565 to 0.989, accuracies from 0.600 to 0.968, sensitivities from 0.625 to 1.0, and specificities from 0.571 to 0.974. The XGBoost model achieved the best performance, with an AUC of 0.896 in the external validation set. SHAP analysis identified the change in the main Cobb angle between the second and first follow-ups [Cobb1(2−1)] as the most important predictor, followed by the main Cobb angle at the second follow-up (Cobb1-2) and the change in the secondary Cobb angle [Cobb2(2−1)]. The XGBoost model demonstrated the best predictive performance in the external validation cohort, confirming its preliminary stability and generalizability. SHAP analysis indicated that Cobb1(2−1) was the most important feature for predicting scoliosis progression. This model offers a valuable tool for clinical decision-making by enabling early identification of high-risk patients and supporting early intervention strategies through automated feature extraction and interpretable analysis. The online version contains supplementary material available at 10.1186/s12891-025-08841-3.
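
The sketch below shows the SHAP workflow on a toy XGBoost progression classifier. Column names mirror the abstract's Cobb-angle notation, but the data, the label rule, and all hyperparameters are synthetic assumptions for illustration only.

```python
# A minimal sketch of SHAP explanation for an XGBoost classifier,
# with synthetic stand-ins for the Cobb-angle features.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Cobb1(2-1)": rng.normal(3, 2, 200),   # change in main Cobb angle
    "Cobb1-2": rng.normal(25, 8, 200),     # main Cobb angle at 2nd follow-up
    "Cobb2(2-1)": rng.normal(1, 2, 200),   # change in secondary Cobb angle
})
y = (X["Cobb1(2-1)"] + rng.normal(0, 1, 200) > 3).astype(int)  # toy progression label

model = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X, y)

# TreeExplainer yields per-feature SHAP values; mean |SHAP| ranks importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False))
```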

Hybrid-View Attention for csPCa Classification in TRUS

Zetian Feng, Juan Fu, Xuebin Zou, Hongsheng Ye, Hong Wu, Jianhua Zhou, Yi Wang

arXiv preprint | Jul 4, 2025
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention. Transrectal ultrasound (TRUS) is widely used for prostate biopsy; however, its low contrast and anisotropic spatial resolution pose diagnostic challenges. To address these limitations, we propose a novel hybrid-view attention (HVA) network for csPCa classification in 3D TRUS that leverages complementary information from transverse and sagittal views. Our approach integrates a CNN-transformer hybrid architecture, where convolutional layers extract fine-grained local features and transformer-based HVA models global dependencies. Specifically, the HVA comprises intra-view attention to refine features within a single view and cross-view attention to incorporate complementary information across views. Furthermore, a hybrid-view adaptive fusion module dynamically aggregates features along both channel and spatial dimensions, enhancing the overall representation. Experiments are conducted on an in-house dataset containing 590 subjects who underwent prostate biopsy. Comparative and ablation results demonstrate the efficacy of our method. The code is available at https://github.com/mock1ngbrd/HVAN.
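
The cross-view component can be illustrated with standard multi-head attention, as in the PyTorch sketch below, where each view queries the other. Shapes, dimensions, and the residual fusion are assumptions, not the authors' exact HVA design (see their repository for that).

```python
# A minimal sketch of cross-view attention between transverse and
# sagittal feature tokens. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.t2s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.s2t = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, trans_feat, sag_feat):
        # Each view queries the other, pulling in complementary context.
        trans_out, _ = self.t2s(trans_feat, sag_feat, sag_feat)
        sag_out, _ = self.s2t(sag_feat, trans_feat, trans_feat)
        return trans_out + trans_feat, sag_out + sag_feat  # residual fusion

trans = torch.randn(2, 64, 128)  # (batch, transverse-view tokens, dim)
sag = torch.randn(2, 64, 128)    # (batch, sagittal-view tokens, dim)
t, s = CrossViewAttention()(trans, sag)
print(t.shape, s.shape)
```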

ChestGPT: Integrating Large Language Models and Vision Transformers for Disease Detection and Localization in Chest X-Rays

Shehroz S. Khan, Petar Przulj, Ahmed Ashraf, Ali Abedi

arXiv preprint | Jul 4, 2025
The global demand for radiologists is increasing rapidly due to a growing reliance on medical imaging services, while the supply of radiologists is not keeping pace. Advances in computer vision and image processing technologies present significant potential to address this gap by enhancing radiologists' capabilities and improving diagnostic accuracy. Large language models (LLMs), particularly generative pre-trained transformers (GPTs), have become the primary approach for understanding and generating textual data. In parallel, vision transformers (ViTs) have proven effective at converting visual data into a format that LLMs can process efficiently. In this paper, we present ChestGPT, a deep-learning framework that integrates the EVA ViT with the Llama 2 LLM to classify diseases and localize regions of interest in chest X-ray images. The ViT converts X-ray images into tokens, which are then fed, together with engineered prompts, into the LLM, enabling joint classification and localization of diseases. This approach incorporates transfer learning techniques to enhance both explainability and performance. The proposed method achieved strong global disease classification performance on the VinDr-CXR dataset, with an F1 score of 0.76, and successfully localized pathologies by generating bounding boxes around the regions of interest. We also outline several task-specific prompts, in addition to general-purpose prompts, for scenarios radiologists might encounter. Overall, this framework offers an assistive tool that can lighten radiologists' workload by providing preliminary findings and regions of interest to facilitate their diagnostic process.
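
The connector pattern described here, image tokens plus prompt tokens feeding an LLM, can be sketched in a few lines of PyTorch. The projector, all dimensions, and the stand-in embedding table below are illustrative assumptions rather than ChestGPT's actual components.

```python
# A minimal sketch of a vision-to-LLM connector: ViT patch tokens are
# projected into the LLM embedding space and prepended to the embedded
# prompt. The toy "LLM" pieces here are placeholders, not Llama 2.
import torch
import torch.nn as nn

vit_dim, llm_dim, vocab = 768, 4096, 32000

projector = nn.Linear(vit_dim, llm_dim)        # maps image tokens -> LLM space
token_embed = nn.Embedding(vocab, llm_dim)     # stand-in for the LLM's embedding table

image_tokens = torch.randn(1, 196, vit_dim)    # ViT patch tokens for one X-ray
prompt_ids = torch.randint(0, vocab, (1, 16))  # engineered prompt, tokenized

# Visual tokens are prepended to text embeddings; the combined sequence
# would then be fed to the LLM for joint classification/localization.
inputs = torch.cat([projector(image_tokens), token_embed(prompt_ids)], dim=1)
print(inputs.shape)  # torch.Size([1, 212, 4096])
```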

An Advanced Deep Learning Framework for Ischemic and Hemorrhagic Brain Stroke Diagnosis Using Computed Tomography (CT) Images

Md. Sabbir Hossen, Eshat Ahmed Shuvo, Shibbir Ahmed Arif, Pabon Shaha, Md. Saiduzzaman, Mostofa Kamal Nasir

arXiv preprint | Jul 4, 2025
Brain stroke is one of the leading causes of mortality and long-term disability worldwide, highlighting the need for precise and fast prediction techniques. Computed Tomography (CT) is considered one of the most effective methods for diagnosing brain strokes. The majority of stroke classification techniques rely on a single slice-level prediction mechanism, allowing the radiologist to manually choose the most critical CT slice from the original CT volume. Although clinical evaluations are often used in traditional diagnostic procedures, machine learning (ML) has opened up new avenues for improving stroke diagnosis. To supplement traditional diagnostic techniques, this study investigates machine learning models for early-stage brain stroke prediction from CT scan images. We propose a novel approach to brain stroke detection that leverages machine learning, focusing on optimizing classification performance with pre-trained deep learning models and advanced optimization strategies. Pre-trained models, including DenseNet201, InceptionV3, MobileNetV2, ResNet50, and Xception, are utilized for feature extraction. Additionally, we employed feature engineering techniques, including BFO, PCA, and LDA, to further enhance model performance. These features are subsequently classified using machine learning algorithms such as SVC, RF, XGB, DT, LR, KNN, and GNB. Our experiments demonstrate that the combination of MobileNetV2, LDA, and SVC achieved the highest classification accuracy of 97.93%, significantly outperforming other model-optimizer-classifier combinations. The results underline the effectiveness of integrating lightweight pre-trained models with robust optimization and classification techniques for brain stroke diagnosis.
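
The winning combination (deep features, then LDA, then SVC) maps naturally onto a scikit-learn pipeline, sketched below with a random matrix standing in for MobileNetV2 embeddings. Labels, sizes, and the split ratio are synthetic assumptions.

```python
# A minimal sketch: deep features -> LDA -> SVC. Real use would replace
# the random matrix with pooled MobileNetV2 features per CT image.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1280))   # MobileNetV2 pooled features are 1280-dim
y = rng.integers(0, 2, size=300)   # toy labels: 0 = ischemic, 1 = hemorrhagic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# LDA projects to at most (n_classes - 1) dimensions before the SVC.
clf = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```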

Multi-modal convolutional neural network-based thyroid cytology classification and diagnosis.

Yang D, Li T, Li L, Chen S, Li X

PubMed | Jul 4, 2025
The cytologic diagnosis of thyroid nodules' benign and malignant nature based on cytological smears obtained through ultrasound-guided fine-needle aspiration is crucial for determining subsequent treatment plans. The development of artificial intelligence (AI) can assist pathologists in improving the efficiency and accuracy of cytological diagnoses. We propose a novel diagnostic model based on a network architecture that integrates cytologic images and digital ultrasound image features (CI-DUF) to solve the multi-class classification task of thyroid fine-needle aspiration cytology. We compare this model with a model relying solely on cytologic images (CI) and evaluate its performance and clinical application potential in thyroid cytology diagnosis. A retrospective analysis was conducted on 384 patients with 825 thyroid cytologic images. These images formed the dataset, which was divided into training and testing sets in an 8:2 ratio to assess the performance of both the CI and CI-DUF diagnostic models. The AUROC of the CI model for thyroid cytology diagnosis was 0.9119, while the AUROC of the CI-DUF diagnostic model was 0.9326. Compared with the CI model, the CI-DUF model showed significantly increased accuracy, sensitivity, and specificity in the cytologic classification of papillary carcinoma, follicular neoplasm, medullary carcinoma, and benign lesions. The proposed CI-DUF diagnostic model, which integrates multi-modal information, shows better diagnostic performance than the CI model that relies only on cytologic images, particularly excelling in thyroid cytology classification.
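
A late-fusion design in the spirit of CI-DUF can be sketched as two encoders whose embeddings are concatenated before a shared classification head, as below. Layer sizes, the 16 ultrasound features, and the four-class head are assumptions, not the paper's architecture.

```python
# A minimal sketch of multi-modal late fusion: a CNN branch for the
# cytologic image, an MLP branch for ultrasound features, one head.
import torch
import torch.nn as nn

class CIDUFSketch(nn.Module):
    def __init__(self, us_features=16, num_classes=4):
        super().__init__()
        self.image_branch = nn.Sequential(           # cytologic image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.us_branch = nn.Sequential(              # ultrasound feature encoder
            nn.Linear(us_features, 32), nn.ReLU(),
        )
        self.head = nn.Linear(64 + 32, num_classes)

    def forward(self, image, us_feats):
        fused = torch.cat([self.image_branch(image), self.us_branch(us_feats)], dim=1)
        return self.head(fused)

logits = CIDUFSketch()(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 4])
```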

Improving risk assessment of local failure in brain metastases patients using vision transformers - A multicentric development and validation study.

Erdur AC, Scholz D, Nguyen QM, Buchner JA, Mayinger M, Christ SM, Brunner TB, Wittig A, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus JU, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Bilger-Z A, Grosu AL, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Rueckert D, Peeken JC

PubMed | Jul 4, 2025
This study investigates the use of Vision Transformers (ViTs) to predict Freedom from Local Failure (FFLF) in patients with brain metastases using pre-operative MRI scans. The goal is to develop a model that enhances risk stratification and informs personalized treatment strategies. Within the AURORA retrospective trial, patients (n = 352) who received surgical resection followed by post-operative stereotactic radiotherapy (SRT) were collected from seven hospitals. We trained our ViT for the direct image-to-risk task on T1-CE and FLAIR sequences, combining the imaging input with clinical features. We employed segmentation-guided image modifications, model adaptations, and specialized patient sampling strategies during training. The model was evaluated with five-fold cross-validation and ensemble learning across all validation runs. An external, international test cohort (n = 99) within the dataset was used to assess the generalization capabilities of the model, and saliency maps were generated for explainability analysis. The model achieved a C-index of 0.7982 on the test cohort, surpassing all clinical, CNN-based, and hybrid baselines. Kaplan-Meier analysis showed significant FFLF risk stratification. Saliency maps focusing on the brain metastasis core confirmed that model explanations aligned with expert observations. Our ViT-based model offers potential for personalized treatment strategies and follow-up regimens in patients with brain metastases. It provides an alternative to radiomics as a robust, automated tool for clinical workflows, capable of improving patient outcomes through effective risk assessment and stratification.
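
The evaluation step, ensembling the five cross-validation models and scoring with the concordance index, might look like the sketch below using the lifelines library. All risk scores and outcomes are synthetic placeholders.

```python
# A minimal sketch of C-index evaluation for an ensembled risk model.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n_patients, n_folds = 99, 5

# Hypothetical per-fold risk predictions for the external test cohort.
fold_risks = rng.normal(size=(n_folds, n_patients))
ensemble_risk = fold_risks.mean(axis=0)            # ensemble across validation runs

time_to_failure = rng.exponential(24, n_patients)  # months to local failure (toy)
observed = rng.integers(0, 2, n_patients)          # 1 = failure observed, 0 = censored

# concordance_index expects higher scores for longer event-free time,
# so the risk score is negated.
c_index = concordance_index(time_to_failure, -ensemble_risk, observed)
print(f"C-index: {c_index:.4f}")
```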

Machine learning approach using radiomics features to distinguish odontogenic cysts and tumours.

Muraoka H, Kaneda T, Ito K, Otsuka K, Tokunaga S

PubMed | Jul 4, 2025
Although most odontogenic lesions in the jaw are benign, treatment varies widely depending on the nature of the lesion. This study was performed to assess the ability of a machine learning (ML) model using computed tomography (CT) and magnetic resonance imaging (MRI) radiomic features to classify odontogenic cysts and tumours. CT and MRI data from patients with odontogenic lesions, including dentigerous cysts, odontogenic keratocysts, and ameloblastomas, were analysed. Manual segmentation of the CT images and of the apparent diffusion coefficient (ADC) maps from diffusion-weighted MRI was performed to extract radiomic features. The extracted radiomic features were split into training (70%) and test (30%) sets. The random forest model was tuned using 5-fold stratified cross-validation within the training set and assessed on a separate hold-out test set. The CT-based ML model achieved an accuracy of 0.59 in cross-validation on the training set and 0.60 on the test set, with precision, recall, and F1 score all at 0.57. The ADC-based ML model achieved an accuracy of 0.90 in cross-validation on the training set and 0.94 on the test set; its precision, recall, and F1 score were all 0.87. ML models, particularly those using MRI radiomic features, can effectively classify odontogenic lesions.
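
The evaluation protocol described, 5-fold stratified cross-validation within the training split followed by a single hold-out test, is sketched below with scikit-learn. The feature matrix and three-class labels are synthetic stand-ins for the ADC-map radiomic features.

```python
# A minimal sketch of the tune-then-hold-out protocol with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))    # toy radiomic features per lesion
y = rng.integers(0, 3, size=120)  # 0 = dentigerous cyst, 1 = keratocyst, 2 = ameloblastoma

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
cv_acc = cross_val_score(model, X_tr, y_tr, cv=cv, scoring="accuracy")
print(f"5-fold CV accuracy: {cv_acc.mean():.2f}")

model.fit(X_tr, y_tr)
print(f"hold-out test accuracy: {model.score(X_te, y_te):.2f}")
```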