MRI-Based Models Using Habitat Imaging for Predicting Distinct Vascular Patterns in Hepatocellular Carcinoma.

Xie Y, Zhang T, Liu Z, Yan Z, Yu Y, Qu Q, Gu C, Ding C, Zhang X

PubMed · Jul 24 2025
To develop two distinct models for predicting microvascular invasion (MVI) and vessels encapsulating tumor clusters (VETC) based on habitat imaging, and to integrate these models for prognosis assessment. In this multicenter retrospective study, patients from two institutions were enrolled and categorized for MVI (n=295) and VETC (n=276) prediction. Tumor and peritumoral regions on hepatobiliary phase images were segmented into subregions, from which all relevant features were extracted. The MVI and VETC predictive models were constructed by analyzing these features with various machine learning algorithms and were used to classify patients into high-risk and low-risk groups. Cox regression analysis was used to identify risk factors for early recurrence. The MVI and VETC prediction models demonstrated excellent performance in both the training and external validation cohorts (AUC: 0.961 and 0.838 for MVI; 0.931 and 0.820 for VETC). Based on model predictions, patients were classified into a high-risk group (high-risk MVI/high-risk VETC), a medium-risk group (high-risk MVI/low-risk VETC or low-risk MVI/high-risk VETC), and a low-risk group (low-risk MVI/low-risk VETC). Multivariable Cox regression analysis revealed that risk group, number of tumors, and gender were independent predictors of early recurrence. Models based on habitat imaging can be used for preoperative, noninvasive prediction of MVI and VETC, offering valuable stratification and diagnostic insights for HCC patients.
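As an illustration of the final modeling step described above, the following is a minimal sketch of a multivariable Cox regression for early recurrence using the lifelines library; the file name and column names (time_to_recurrence, recurrence, risk_group, tumor_number, gender) are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch: multivariable Cox regression for early recurrence.
import pandas as pd
from lifelines import CoxPHFitter

# Assumed columns: time_to_recurrence (months), recurrence (0/1),
# risk_group (0=low, 1=medium, 2=high), tumor_number, gender (0/1).
df = pd.read_csv("hcc_followup.csv")  # hypothetical follow-up table

cph = CoxPHFitter()
cph.fit(
    df[["time_to_recurrence", "recurrence", "risk_group", "tumor_number", "gender"]],
    duration_col="time_to_recurrence",
    event_col="recurrence",
)
cph.print_summary()  # hazard ratios and p-values for each covariate
```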

Enhanced HER-2 prediction in breast cancer through synergistic integration of deep learning, ultrasound radiomics, and clinical data.

Hu M, Zhang L, Wang X, Xiao X

PubMed · Jul 24 2025
This study integrates ultrasound radiomics with clinical data to enhance the diagnostic accuracy of HER-2 expression status in breast cancer, aiming to provide more reliable treatment strategies for this aggressive disease. We included ultrasound images and clinicopathologic data from 210 female breast cancer patients, employing a Generative Adversarial Network (GAN) to enhance image clarity and segment the region of interest (ROI) for radiomics feature extraction. Features were optimized through Z-score normalization and various statistical methods. We constructed and compared multiple machine learning models, including Linear Regression, Random Forest, and XGBoost, alongside deep learning models such as CNNs (ResNet101, VGG19) and Transformer-based architectures. The Grad-CAM technique was used to visualize the decision-making process of the deep learning models. The Deep Learning Radiomics (DLR) model integrated radiomics features with deep learning features, and a combined model further integrated clinical features to predict HER-2 status. The LightGBM and ResNet101 models showed high performance, but the combined model achieved the highest AUC values in both training and testing, demonstrating the effectiveness of integrating diverse data sources. The study demonstrates that the fusion of deep learning with radiomics analysis significantly improves the prediction accuracy of HER-2 status, offering a new strategy for personalized breast cancer treatment and prognostic assessment.
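To make the feature-fusion idea concrete, here is a minimal sketch that Z-score-normalises radiomics, deep-learning, and clinical feature blocks, concatenates them, and cross-validates a simple classifier standing in for the study's models; the array shapes and random placeholder data are assumptions, not the study's pipeline.

```python
# Hedged sketch of combining radiomics, deep, and clinical features for HER-2 prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 210                                    # cohort size quoted in the abstract
radiomics = rng.normal(size=(n, 100))      # placeholder ultrasound radiomics features
deep_feats = rng.normal(size=(n, 512))     # placeholder CNN embeddings
clinical = rng.normal(size=(n, 8))         # placeholder clinical variables
her2 = rng.integers(0, 2, size=n)          # placeholder HER-2 labels

X = np.hstack([radiomics, deep_feats, clinical])

# Z-score normalisation is fitted inside each CV fold to avoid leakage.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, her2, cv=5, scoring="roc_auc").mean())
```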

A Lightweight Hybrid DL Model for Multi-Class Chest X-ray Classification for Pulmonary Diseases.

Precious JG, S R, B SP, R R V, M SSM, Sapthagirivasan V

PubMed · Jul 24 2025
Pulmonary diseases have become one of the main causes of declining health, affecting millions of people worldwide. The rapid advancement of deep learning has significantly impacted medical image analysis by improving diagnostic accuracy and efficiency. Timely and precise diagnosis of these diseases is invaluable for effective treatment. Chest X-rays (CXR) play a pivotal role in diagnosing various respiratory diseases by offering valuable insights into the chest and lung regions. This study puts forth a hybrid approach for classifying CXR images into four classes: COVID-19, tuberculosis, pneumonia, and normal (healthy) cases. The presented method integrates a machine learning method, the Support Vector Machine (SVM), with a pre-trained deep learning model for improved classification accuracy and reduced training time. Data from several public sources, representing a wide range of demographics, were used in this study. Class weights were applied during training to balance the contribution of each class and address class imbalance. Several pre-trained architectures, namely DenseNet, MobileNet, EfficientNetB0, and EfficientNetB3, were investigated and their performance evaluated. Since MobileNet achieved the best classification accuracy of 94%, it was selected for the hybrid model, which combines MobileNet with an SVM classifier and increases the accuracy to 97%. The results suggest that this approach is reliable and holds great promise for clinical applications.
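A hedged sketch of the MobileNet-plus-SVM hybrid described above: a pretrained MobileNet serves as a frozen feature extractor and an SVM with balanced class weights is trained on the pooled features. The random arrays stand in for real CXR data, and the exact training setup is not the authors' pipeline.

```python
# Hypothetical sketch: frozen MobileNet features feeding an SVM classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3)
)
backbone.trainable = False

def extract_features(images):
    """images: float array (N, 224, 224, 3) of resized CXR images in [0, 255]."""
    x = tf.keras.applications.mobilenet.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# Placeholder arrays standing in for CXR images and 4-class labels.
X_train = np.random.rand(32, 224, 224, 3).astype("float32") * 255
y_train = np.random.randint(0, 4, size=32)

svm = SVC(kernel="rbf", class_weight="balanced")  # class weights address imbalance
svm.fit(extract_features(X_train), y_train)
```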

Artificial intelligence for multi-time-point arterial phase contrast-enhanced MRI profiling to predict prognosis after transarterial chemoembolization in hepatocellular carcinoma.

Yao L, Adwan H, Bernatz S, Li H, Vogl TJ

PubMed · Jul 24 2025
Contrast-enhanced magnetic resonance imaging (CE-MRI) monitoring across multiple time points is critical for optimizing hepatocellular carcinoma (HCC) prognosis during transarterial chemoembolization (TACE) treatment. The aim of this retrospective study was to develop and validate an artificial intelligence (AI)-powered model utilizing multi-time-point arterial phase CE-MRI data for HCC prognosis stratification in TACE patients. A total of 543 individual arterial phase CE-MRI scans from 181 HCC patients were retrospectively collected in this study. All patients underwent TACE and longitudinal arterial phase CE-MRI assessments at three time points: prior to treatment, and following the first and second TACE sessions. Among them, 110 patients received TACE monotherapy, while the remaining 71 patients underwent TACE in combination with microwave ablation (MWA). All images were subjected to standardized preprocessing procedures. We developed an end-to-end deep learning model, ProgSwin-UNETR, based on the Swin Transformer architecture, to perform four-class prognosis stratification directly from input imaging data. The model was trained using multi-time-point arterial phase CE-MRI data and evaluated via fourfold cross-validation. Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). For comparative analysis, we benchmarked performance against traditional radiomics-based classifiers and the mRECIST criteria. Prognostic utility was further assessed using Kaplan-Meier (KM) survival curves. Additionally, multivariate Cox proportional hazards regression was performed as a post hoc analysis to evaluate the independent and complementary prognostic value of the model outputs and clinical variables. Grad-CAM++ was applied to visualize the imaging regions contributing most to model prediction. The ProgSwin-UNETR model achieved an accuracy of 0.86 and an AUC of 0.92 (95% CI: 0.90-0.95) for the four-class prognosis stratification task, outperforming radiomic models across all risk groups. Furthermore, KM survival analyses were performed using three approaches (the AI model, radiomics-based classifiers, and the mRECIST criteria) to stratify patients by risk. Of the three, only the AI-based ProgSwin-UNETR model achieved statistically significant risk stratification across the entire cohort and in both the TACE-alone and TACE + MWA subgroups (p < 0.005). In contrast, the mRECIST and radiomics models did not yield significant survival differences across subgroups (p > 0.05). Multivariate Cox regression analysis further demonstrated that the model was a robust independent prognostic factor (p = 0.01), effectively stratifying patients into four distinct risk groups (Class 0 to Class 3) with log(HR) values of 0.97, 0.51, -0.53, and -0.92, respectively. Additionally, Grad-CAM++ visualizations highlighted critical regional features contributing to prognosis prediction, providing interpretability of the model. ProgSwin-UNETR accurately predicts the risk groups of HCC patients undergoing TACE therapy and can further be applied to personalized prognosis prediction.
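As a small illustration of the survival-analysis step (not the authors' code), the sketch below groups patients by the model's predicted class, plots Kaplan-Meier curves, and compares the groups with a multivariate log-rank test using lifelines; the file and column names (tace_cohort.csv, time, event, predicted_class) are hypothetical.

```python
# Minimal sketch of KM stratification by predicted risk class.
import matplotlib.pyplot as plt
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("tace_cohort.csv")  # hypothetical follow-up table

kmf = KaplanMeierFitter()
for cls, grp in df.groupby("predicted_class"):          # Class 0..3 from the model
    kmf.fit(grp["time"], grp["event"], label=f"Class {cls}")
    kmf.plot_survival_function()

result = multivariate_logrank_test(df["time"], df["predicted_class"], df["event"])
print(f"log-rank p = {result.p_value:.4f}")              # tests separation of the curves
plt.show()
```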

Contrast-Enhanced CT-Based Deep Learning and Habitat Radiomics for Analysing the Predictive Capability for Oral Squamous Cell Carcinoma.

Liu Q, Liang Z, Qi X, Yang S, Fu B, Dong H

PubMed · Jul 24 2025
This study aims to explore a novel approach for predicting cervical lymph node metastasis (CLNM) and pathological subtypes in oral squamous cell carcinoma (OSCC) by comparing deep learning (DL) and habitat analysis models based on contrast-enhanced CT (CECT). A retrospective analysis was conducted using CECT images from patients diagnosed with OSCC via paraffin pathology at the Second Affiliated Hospital of Dalian Medical University. All patients underwent primary tumor resection and cervical lymph node dissection, with a total of 132 cases included. A DL model was developed by analysing regions of interest (ROIs) in the CECT images using a convolutional neural network (CNN). For habitat analysis, the ROI images were segmented into three regions using K-means clustering, and features were selected through a fully connected neural network (FCNN) to build the model. A separate clinical model was constructed based on nine clinical features, including age, gender, and tumor location. Using CLNM and pathological subtypes as endpoints, the predictive performance of the clinical model, DL model, habitat analysis model, and a combined clinical + habitat model was evaluated using confusion matrices and receiver operating characteristic (ROC) curves. For CLNM prediction, the combined clinical + habitat model achieved an area under the ROC curve (AUC) of 0.97. For pathological subtype prediction, the AUC was 0.96. The DL model yielded an AUC of 0.83 for CLNM prediction and 0.91 for pathological subtype classification. The clinical model alone achieved an AUC of 0.94 for predicting CLNM. The integrated habitat-clinical model demonstrates improved predictive performance. Combining habitat analysis with clinical features offers a promising approach to prediction in oral cancer and may assist clinicians in performing accurate preoperative prognostic assessments.
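For readers unfamiliar with habitat analysis, the following is a minimal sketch of the subregion step: K-means clusters the ROI voxel intensities into three habitats, after which radiomic features would be extracted per subregion. The image and mask arrays are placeholders, not the study's CECT data.

```python
# Hedged sketch: K-means habitat segmentation of an ROI into three subregions.
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(64, 64, 32)           # placeholder CECT volume
roi_mask = np.zeros_like(image, dtype=bool)  # placeholder tumor ROI
roi_mask[16:48, 16:48, 8:24] = True

voxels = image[roi_mask].reshape(-1, 1)      # intensities inside the ROI
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

habitat_map = np.zeros(image.shape, dtype=np.int8)
habitat_map[roi_mask] = labels + 1           # subregions 1-3; 0 = background
# Radiomic features would then be computed separately for each habitat.
```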

Artificial intelligence in radiology: 173 commercially available products and their scientific evidence.

Antonissen N, Tryfonos O, Houben IB, Jacobs C, de Rooij M, van Leeuwen KG

PubMed · Jul 24 2025
To assess changes in peer-reviewed evidence on commercially available radiological artificial intelligence (AI) products from 2020 to 2023, as a follow-up to a 2020 review of 100 products. A literature review was conducted, covering January 2015 to March 2023, focusing on CE-certified radiological AI products listed on www.healthairegister.com. Papers were categorised using the hierarchical model of efficacy: technical/diagnostic accuracy (levels 1-2), clinical decision-making and patient outcomes (levels 3-5), or socio-economic impact (level 6). Study features such as design, vendor independence, and multicentre/multinational data usage were also examined. By 2023, 173 CE-certified AI products from 90 vendors were identified, compared to 100 products in 2020. Products with peer-reviewed evidence increased from 36% to 66%, supported by 639 papers (up from 237). Diagnostic accuracy studies (level 2) remained predominant, though their share decreased from 65% to 57%. The share of studies addressing higher efficacy levels (3-6) remained nearly constant (22% vs. 24%), while the proportion of products supported by such evidence increased from 18% to 31%. Multicentre studies rose from 30% to 41% (p < 0.01). However, vendor-independent studies decreased (49% to 45%), as did multinational studies (15% to 11%) and prospective designs (19% to 16%), all with p > 0.05. The increase in peer-reviewed evidence and in the level of evidence per product indicates maturation of the radiological AI market. However, the continued focus on lower-efficacy studies and the reductions in vendor independence, multinational data, and prospective designs highlight persistent challenges in establishing unbiased, real-world evidence. Question: Evaluating advancements in peer-reviewed evidence for CE-certified radiological AI products is crucial to understand their clinical adoption and impact. Findings: CE-certified AI products with peer-reviewed evidence increased from 36% in 2020 to 66% in 2023, but the proportion of higher-level evidence papers (~24%) remained unchanged. Clinical relevance: The study highlights increased validation of radiological AI products but underscores a continued lack of evidence on their clinical and socio-economic impact, which may limit these tools' safe and effective implementation into clinical workflows.
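As a rough illustration of how the reported change in multicentre studies could be tested for significance, the sketch below runs a two-proportion z-test; the counts are back-calculated from the quoted percentages and paper totals (30% of 237 and 41% of 639), so they are approximations rather than the authors' exact data.

```python
# Illustrative two-proportion z-test for the rise in multicentre studies.
from statsmodels.stats.proportion import proportions_ztest

papers_2020, papers_2023 = 237, 639
multicentre_2020 = round(0.30 * papers_2020)   # approx. count, assumed from 30%
multicentre_2023 = round(0.41 * papers_2023)   # approx. count, assumed from 41%

stat, p = proportions_ztest(
    count=[multicentre_2020, multicentre_2023],
    nobs=[papers_2020, papers_2023],
)
print(f"z = {stat:.2f}, p = {p:.4f}")
```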

Fractal Analysis for Cognitive Impairment Classification in DAVF Using Machine Learning.

Sivan Sulaja J, Kannath SK, Menon RN, Thomas B

PubMed · Jul 24 2025
Intracranial dural arteriovenous fistula (DAVF) is an acquired vascular condition involving abnormal connections between dural arteries and veins without intervening capillary beds. Cognitive impairment is a common symptom in DAVFs, often linked to disrupted brain network connectivity. Resting-state functional MRI (rsfMRI) allows for examining functional connectivity through blood oxygenation level dependent (BOLD) signal analysis. However, rsfMRI signals exhibit fractal behavior that complicates connectivity analysis. This study explores nonfractal connectivity as a potential biomarker for cognitive impairment in DAVF patients by isolating short-memory components in BOLD signals. Method: 50 DAVF patients and 50 healthy controls underwent neuropsychological assessments and rsfMRI. Preprocessed BOLD signals were decomposed using wavelet transforms to isolate fractal and nonfractal components. Connectivity matrices based on fractal, nonfractal, and Pearson correlation components were generated and used as features for classification. Machine learning classifiers, including SVM and decision trees, were optimized via cross-validation in MATLAB, with performance assessed by accuracy, sensitivity, specificity, and AUC. Results: Nonfractal connectivity outperformed fractal and Pearson correlation measures, achieving a classification accuracy of 89.82% using SVM, with high sensitivity (86.54%), specificity (92.4%), and an AUC of 0.96. Nonfractal connectivity effectively differentiated cognitive impairment in DAVFs, offering a clearer depiction of neural activity by reducing the influence of fractal patterns. Conclusion: This study suggests that nonfractal connectivity is a promising biomarker for assessing cognitive impairment in DAVF patients, potentially supporting early diagnosis and intervention. While nonfractal analysis showed promising classification accuracy, further research with larger datasets is needed to validate these findings and explore applicability in other neurological conditions.
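The sketch below is a heavily simplified illustration of the general idea (not the paper's wavelet-based fractal-connectivity estimator, which was implemented in MATLAB): a discrete wavelet transform splits each BOLD time series into an approximation component (slowly varying, fractal-like trend) and detail (short-memory) components, and a connectivity matrix is built from the short-memory part. The signal array is a random placeholder.

```python
# Simplified sketch: wavelet-based separation of short-memory BOLD components.
import numpy as np
import pywt

bold = np.random.randn(90, 200)        # placeholder: 90 regions x 200 timepoints

def short_memory_component(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])          # drop the approximation (fractal-like trend)
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

nonfractal = np.array([short_memory_component(ts) for ts in bold])
connectivity = np.corrcoef(nonfractal)  # region-by-region matrix used as classifier features
```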

Malignancy classification of thyroid incidentalomas using 18F-fluorodeoxy-d-glucose PET/computed tomography-derived radiomics.

Yeghaian M, Piek MW, Bartels-Rutten A, Abdelatty MA, Herrero-Huertas M, Vogel WV, de Boer JP, Hartemink KJ, Bodalal Z, Beets-Tan RGH, Trebeschi S, van der Ploeg IMC

PubMed · Jul 24 2025
Thyroid incidentalomas (TIs) are incidental thyroid lesions detected on fluorodeoxy-d-glucose (18F-FDG) PET/computed tomography (PET/CT) scans. This study aims to investigate the role of noninvasive PET/CT-derived radiomic features in characterizing 18F-FDG PET/CT TIs and distinguishing benign from malignant thyroid lesions in oncological patients. We included 46 patients with PET/CT TIs who underwent thyroid ultrasound and thyroid surgery at our oncological referral hospital. Radiomic features were extracted from regions of interest (ROIs) in both PET and CT images and analyzed for their association with thyroid cancer and their predictive ability. The TIs were graded using the ultrasound TIRADS classification, and histopathological results served as the reference standard. Univariate and multivariate analyses were performed using features from each modality individually and combined. The performance of radiomic features was compared to the TIRADS classification. Among the 46 included patients, 36 (78%) had malignant thyroid lesions, while 10 (22%) had benign lesions. The combined run-length non-uniformity radiomic feature from PET and CT cubical ROIs demonstrated the highest area under the curve (AUC) of 0.88 (P < 0.05), with a negative correlation with malignancy. This performance was comparable to the TIRADS classification (AUC: 0.84, P < 0.05), which showed a positive correlation with thyroid cancer. Multivariate analysis showed higher predictive performance using CT-derived radiomics (AUC: 0.86 ± 0.13) compared to TIRADS (AUC: 0.80 ± 0.08). This study highlights the potential of 18F-FDG PET/CT-derived radiomics to distinguish benign from malignant thyroid lesions. Further studies with larger cohorts and deep learning-based methods could yield more robust results.
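As an example of how a single feature of this kind can be extracted, here is a hedged sketch using pyradiomics to compute the grey-level run-length non-uniformity from an image and its ROI mask; the file names are placeholders, and the study's exact extraction settings (cubical ROIs, resampling, binning) are not reproduced.

```python
# Hypothetical sketch: extracting the GLRLM run-length non-uniformity feature.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("glrlm")   # grey-level run-length matrix features

# Placeholder file paths for the PET (or CT) image and the thyroid ROI mask.
features = extractor.execute("pet_image.nii.gz", "thyroid_roi.nii.gz")
print(features["original_glrlm_RunLengthNonUniformity"])
```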

A Dynamic Machine Learning Model to Predict Angiographic Vasospasm After Aneurysmal Subarachnoid Hemorrhage.

Sen RD, McGrath MC, Shenoy VS, Meyer RM, Park C, Fong CT, Lele AV, Kim LJ, Levitt MR, Wang LL, Sekhar LN

PubMed · Jul 24 2025
The goal of this study was to develop a highly precise, dynamic machine learning model centered on daily transcranial Doppler ultrasound (TCD) data to predict angiographic vasospasm (AV) in the context of aneurysmal subarachnoid hemorrhage (aSAH). A retrospective review of patients with aSAH treated at a single institution was performed. The primary outcome was AV, defined as angiographic narrowing of any intracranial artery at any time point during admission after risk assessment. Standard demographic, clinical, and radiographic data were collected. Quantitative data including mean arterial pressure, cerebral perfusion pressure, daily serum sodium, and hourly ventriculostomy output were collected. Detailed daily TCD data of intracranial arteries, including maximum velocities, pulsatility indices, and Lindegaard ratios, were collected. Three predictive machine learning models were created and compared: a static multivariate logistic regression model based on data collected on the date of admission (Baseline Model; BM), a standard TCD model using middle cerebral artery flow velocity and Lindegaard ratio measurements (SM), and a long short-term memory (LSTM) model using all data trended through the hospitalization. A total of 424 patients with aSAH were reviewed, 78 of whom developed AV. In predicting AV at any time point in the future, the LSTM model had the highest precision (0.571) and accuracy (0.776), whereas the SM model had the highest overall performance with an F1 score of 0.566. In predicting AV within 5 days, the LSTM continued to have the highest precision (0.488) and accuracy (0.803). After an ablation test removing all non-TCD elements, the LSTM model's precision improved to 0.824. Longitudinal TCD data can be used to create a dynamic machine learning model with higher precision than static TCD measurements for predicting AV after aSAH.
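A minimal PyTorch sketch of the dynamic idea: an LSTM consumes one feature vector per hospital day (TCD velocities, Lindegaard ratios, labs, and so on) and outputs a vasospasm risk logit per patient. The feature dimension, sequence length, and architecture details are assumptions, not the study's model.

```python
# Hedged sketch: LSTM over daily per-patient feature vectors for vasospasm risk.
import torch
import torch.nn as nn

class VasospasmLSTM(nn.Module):
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, days, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])             # one risk logit per patient

model = VasospasmLSTM()
daily_data = torch.randn(8, 10, 12)           # placeholder: 8 patients, 10 hospital days
risk = torch.sigmoid(model(daily_data))       # per-patient vasospasm probability
```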

Comparative Analysis of Vision Transformers and Convolutional Neural Networks for Medical Image Classification

Kunal Kawadkar

arXiv preprint · Jul 24 2025
The emergence of Vision Transformers (ViTs) has revolutionized computer vision, yet their effectiveness compared to traditional Convolutional Neural Networks (CNNs) in medical imaging remains under-explored. This study presents a comprehensive comparative analysis of CNN and ViT architectures across three critical medical imaging tasks: chest X-ray pneumonia detection, brain tumor classification, and skin cancer melanoma detection. We evaluated four state-of-the-art models (ResNet-50, EfficientNet-B0, ViT-Base, and DeiT-Small) across datasets totaling 8,469 medical images. Our results demonstrate task-specific model advantages: ResNet-50 achieved 98.37% accuracy on chest X-ray classification, DeiT-Small excelled at brain tumor detection with 92.16% accuracy, and EfficientNet-B0 led skin cancer classification at 81.84% accuracy. These findings provide crucial insights for practitioners selecting architectures for medical AI applications, highlighting the importance of task-specific architecture selection in clinical decision support systems.
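A hedged sketch of how the four compared backbones could be instantiated with the timm library and adapted to task-specific class counts; the per-task class counts shown are illustrative, the training loop is omitted, and this is not the paper's exact configuration.

```python
# Illustrative sketch: instantiating the compared CNN and ViT backbones with timm.
import timm
import torch

model_names = {
    "resnet50": 2,                 # e.g. pneumonia vs. normal CXR (assumed class count)
    "efficientnet_b0": 2,          # e.g. melanoma vs. benign (assumed class count)
    "vit_base_patch16_224": 4,     # e.g. brain tumor classes (assumed class count)
    "deit_small_patch16_224": 4,
}

models = {
    name: timm.create_model(name, pretrained=True, num_classes=n_classes)
    for name, n_classes in model_names.items()
}

x = torch.randn(1, 3, 224, 224)    # placeholder image batch
print({name: tuple(m(x).shape) for name, m in models.items()})
```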
