Page 11 of 103 (1027 results)

Knowledge, attitudes, and practices of cardiovascular health care personnel regarding coronary CTA and AI-assisted diagnosis: a cross-sectional study.

Jiang S, Ma L, Pan K, Zhang H

PubMed | Jul 4 2025
Artificial intelligence (AI) holds significant promise for medical applications, particularly in coronary computed tomography angiography (CTA). We assessed the knowledge, attitudes, and practices (KAP) of cardiovascular health care personnel regarding coronary CTA and AI-assisted diagnosis. We conducted a cross-sectional survey from 1 July to 1 August 2024 at Tsinghua University Hospital, Beijing, China. Healthcare professionals, including both physicians and nurses, aged ≥18 years were eligible to participate. We used a structured questionnaire to collect demographic information and KAP scores. We analysed the data using correlation and regression methods, along with structural equation modelling. Among 496 participants, 58.5% were female, 52.6% held a bachelor's degree, and 40.7% worked in radiology. Mean KAP scores were 13.87 (standard deviation (SD) = 4.96, possible range = 0-20) for knowledge, 28.25 (SD = 4.35, possible range = 8-40) for attitude, and 31.67 (SD = 8.23, possible range = 10-50) for practice. Knowledge (r = 0.358; P < 0.001) and attitude positively correlated with practice (r = 0.489; P < 0.001). Multivariate logistic regression indicated that educational level, department affiliation, and job satisfaction were significant predictors of knowledge. Attitude was influenced by marital status, department, and years of experience, while practice was shaped by knowledge, attitude, departmental factors, and job satisfaction. Structural equation modelling showed that knowledge was directly affected by gender (β = -0.121; P = 0.009), workplace (β = -0.133; P = 0.004), department (β = -0.197; P < 0.001), employment status (β = -0.166; P < 0.001), and night shift frequency (β = 0.163; P < 0.001). Attitude was directly influenced by marriage (β = 0.124; P = 0.006) and job satisfaction (β = -0.528; P < 0.001). Practice was directly affected by knowledge (β = 0.389; P < 0.001), attitude (β = 0.533; P < 0.001), and gender (β = -0.092; P = 0.010). 
Additionally, gender (β = -0.051; P = 0.010) and marriage (β = 0.066; P = 0.007) had indirect effects on practice. Cardiovascular health care personnel exhibited suboptimal knowledge, positive attitudes, and relatively inactive practices regarding coronary CTA and AI-assisted diagnosis. Targeted educational efforts are needed to enhance knowledge and support the integration of AI into clinical workflows.

Deep learning-based classification of parotid gland tumors: integrating dynamic contrast-enhanced MRI for enhanced diagnostic accuracy.

Sinci KA, Koska IO, Cetinoglu YK, Erdogan N, Koc AM, Eliyatkin NO, Koska C, Candan B

PubMed | Jul 4 2025
To evaluate the performance of deep learning models in classifying parotid gland tumors using T2-weighted, diffusion-weighted, and contrast-enhanced T1-weighted MR images, along with DCE data derived from time-intensity curves. In this retrospective, single-center study including a total of 164 participants, 124 patients with surgically confirmed parotid gland tumors and 40 individuals with normal parotid glands underwent multiparametric MRI, including DCE sequences. Data partitions were performed at the patient level (80% training, 10% validation, 10% testing). Two deep learning architectures (MobileNetV2 and EfficientNetB0), as well as a combined approach integrating predictions from both models, were fine-tuned using transfer learning to classify (i) normal versus tumor (Task 1), (ii) benign versus malignant tumors (Task 2), and (iii) benign subtypes (Warthin tumor vs. pleomorphic adenoma) (Task 3). For Tasks 2 and 3, DCE-derived metrics were integrated via a support vector machine. Classification performance was assessed using accuracy, precision, recall, and F1-score, with 95% confidence intervals derived via bootstrap resampling. In Task 1, EfficientNetB0 achieved the highest accuracy (85%). In Task 2, the combined approach reached an accuracy of 65%, while adding DCE data significantly improved performance, with MobileNetV2 achieving an accuracy of 96%. In Task 3, EfficientNetB0 demonstrated the highest accuracy without DCE data (75%), while including DCE data boosted the combined approach to an accuracy of 89%. Adding DCE-MRI data to deep learning models substantially enhances parotid gland tumor classification accuracy, highlighting the value of functional imaging biomarkers in improving noninvasive diagnostic workflows.
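The feature-fusion step the abstract describes for Tasks 2 and 3 (deep-network features combined with DCE-derived time-intensity metrics, classified by a support vector machine) can be sketched as below. This is not the authors' code: the feature dimensions, the synthetic data, and the RBF-kernel choice are illustrative assumptions.

```python
# Sketch: fuse CNN embeddings with DCE time-intensity metrics, then classify
# with an SVM. All data here are synthetic stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120
cnn_feats = rng.normal(size=(n, 32))   # stand-in for deep-network embeddings
dce_feats = rng.normal(size=(n, 3))    # e.g. wash-in slope, time-to-peak, washout
y = rng.integers(0, 2, size=n)         # benign=0 / malignant=1 (synthetic labels)

X = np.hstack([cnn_feats, dce_feats])  # simple feature-level fusion
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
probs = clf.predict_proba(X)[:, 1]     # per-case malignancy probability
```

Scaling before the SVM matters here because the deep features and DCE metrics live on different scales.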

Predicting ESWL success for ureteral stones: a radiomics-based machine learning approach.

Yang R, Zhao D, Ye C, Hu M, Qi X, Li Z

PubMed | Jul 4 2025
This study aimed to develop and validate a machine learning (ML) model that integrates radiomics and conventional radiological features to predict the success of single-session extracorporeal shock wave lithotripsy (ESWL) for ureteral stones. This retrospective study included 329 patients with ureteral stones who underwent ESWL between October 2022 and June 2024. Patients were randomly divided into a training set (n = 230) and a test set (n = 99) in a 7:3 ratio. Preoperative clinical data and noncontrast CT images were collected, and radiomic features were extracted by outlining the stone's region of interest (ROI). Univariate analysis was used to identify clinical and conventional radiological features related to the success of single-session ESWL. Radiomic features were selected using the least absolute shrinkage and selection operator (LASSO) algorithm to calculate a radiomic score (Rad-score). Five machine learning models (RF, KNN, LR, SVM, AdaBoost) were developed using 10-fold cross-validation. Model performance was assessed using AUC, accuracy, sensitivity, specificity, and F1 score. Calibration and decision curve analyses were used to evaluate model calibration and clinical value. SHAP analysis was conducted to interpret feature importance, and a nomogram was built to improve model interpretability. Ureteral diameter proximal to the stone (UDPS), stone-to-skin distance (SSD), and renal pelvic width (RPW) were identified as significant predictors. Six radiomic features were selected from 1,595 to calculate the Rad-score. The LR model showed the best performance on the test set, with an accuracy of 83.8%, sensitivity of 84.9%, specificity of 82.6%, F1 score of 84.9%, and AUC of 0.888 (95% CI: 0.822-0.949). SHAP analysis indicated that the Rad-score and UDPS were the most influential features. Calibration and decision curve analyses confirmed the model's good calibration and clinical utility. 
The LR model, integrating radiomics and conventional radiological features, demonstrated strong performance in predicting the success of single-session ESWL for ureteral stones. This approach may assist clinicians in making more accurate treatment decisions.
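A minimal sketch of the pipeline described above, on synthetic data: LASSO selects radiomic features, their weighted sum forms the Rad-score, and logistic regression is scored with 10-fold cross-validated AUC. The feature counts and signal structure are invented for illustration, not taken from the study.

```python
# Sketch: LASSO feature selection -> Rad-score -> 10-fold CV logistic regression.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n, p = 230, 100                          # ~training-set size, many radiomic features
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:6] = 1.0                         # pretend 6 features carry signal
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(int)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # features surviving the L1 penalty
rad_score = X[:, selected] @ lasso.coef_[selected]  # linear Rad-score per patient

auc = cross_val_score(LogisticRegression(), rad_score.reshape(-1, 1),
                      y, cv=10, scoring="roc_auc").mean()
```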

Intralesional and perilesional radiomics strategy based on different machine learning for the prediction of International Society of Urological Pathology grade group in prostate cancer.

Li Z, Yang L, Wang X, Xu H, Chen W, Kang S, Huang Y, Shu C, Cui F, Zhang Y

PubMed | Jul 4 2025
To develop and evaluate an intralesional and perilesional radiomics strategy based on different machine learning models to differentiate International Society of Urological Pathology (ISUP) grade group > 2 from ISUP grade group ≤ 2 prostate cancers (PCa). Data from 340 PCa patients confirmed by radical prostatectomy pathology were obtained from two hospitals. The patients were divided into training, internal validation, and external validation groups. Radiomic features were extracted from T2-weighted imaging, and four distinct radiomic feature models were constructed: intralesional, perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion. Four machine learning classifiers, logistic regression (LR), random forest (RF), extra trees (ET), and multilayer perceptron (MLP), were employed for model training and evaluation to select the optimal model. The performance of each model was assessed by calculating the area under the ROC curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. The AUCs for the RF classifier were higher than those of LR, ET, and MLP, so RF was selected as the final radiomic model. The nomogram model integrating the perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion models had AUCs of 0.929, 0.734, and 0.743 for the training, internal validation, and external validation cohorts, respectively, which were higher than those of the individual intralesional, perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion models. The proposed nomogram established from perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion radiomics has the potential to predict the ISUP grade group of PCa patients.
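The intralesional/perilesional split can be illustrated with binary morphological dilation: expanding the lesion mask and subtracting the original yields a perilesional ring. A sketch under the assumption of a toy 2D mask and a voxel-unit radius (a real pipeline would convert millimetres to voxels using the image spacing):

```python
# Sketch: derive a perilesional ring region from an intralesional mask.
import numpy as np
from scipy.ndimage import binary_dilation

mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True                # toy intralesional ROI

ring_vox = 5                             # dilation radius in voxels (stand-in for mm)
dilated = binary_dilation(mask, iterations=ring_vox)
perilesional = dilated & ~mask           # ring around the lesion only
combined = dilated                       # intralesional + perilesional region
```

Radiomic features would then be extracted separately from `mask`, `perilesional`, and `combined` to build the model variants the abstract compares.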

A tailored deep learning approach for early detection of oral cancer using a 19-layer CNN on clinical lip and tongue images.

Liu P, Bagi K

PubMed | Jul 4 2025
Early and accurate detection of oral cancer plays a pivotal role in improving patient outcomes. This research introduces a custom-designed, 19-layer convolutional neural network (CNN) for the automated diagnosis of oral cancer using clinical images of the lips and tongue. The methodology integrates advanced preprocessing steps, including min-max normalization and histogram-based contrast enhancement, to optimize image features critical for reliable classification. The model is extensively validated on the publicly available Oral Cancer (Lips and Tongue) Images (OCI) dataset, which is divided into 80% training and 20% testing subsets. Comprehensive performance evaluation employs established metrics: accuracy, sensitivity, specificity, precision, and F1-score. Our CNN architecture achieved an accuracy of 99.54%, sensitivity of 95.73%, specificity of 96.21%, precision of 96.34%, and F1-score of 96.03%, demonstrating substantial improvements over prominent transfer learning benchmarks, including SqueezeNet, AlexNet, Inception, VGG19, and ResNet50, all tested under identical experimental protocols. The model's robust performance, efficient computation, and high reliability underline its practicality for clinical application and support its superiority over existing approaches. This study provides a reproducible pipeline and a new reference point for deep learning-based oral cancer detection, facilitating translation into real-world healthcare environments and promising enhanced diagnostic confidence.
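The two preprocessing steps the abstract names, min-max normalization and histogram-based contrast enhancement, can be sketched in plain NumPy. The synthetic image and the 256-bin equalization below are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: min-max normalization to [0, 1], then histogram equalization
# via the empirical CDF of the intensity histogram.
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(50, 200, size=(64, 64)).astype(float)  # toy grayscale image

# Min-max normalization to [0, 1]
norm = (img - img.min()) / (img.max() - img.min())

# Histogram equalization: map each pixel through the cumulative distribution
bins = (norm * 255).astype(int)
hist = np.bincount(bins.ravel(), minlength=256)
cdf = hist.cumsum() / bins.size
equalized = cdf[bins]                    # contrast-enhanced image in (0, 1]
```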

Multi-modality radiomics diagnosis of breast cancer based on MRI, ultrasound and mammography.

Wu J, Li Y, Gong W, Li Q, Han X, Zhang T

PubMed | Jul 4 2025
To develop a multi-modality machine learning-based radiomics model utilizing Magnetic Resonance Imaging (MRI), Ultrasound (US), and Mammography (MMG) for the differentiation of benign and malignant breast nodules. This study retrospectively collected data from 204 patients across three hospitals, including MRI, US, and MMG imaging data along with confirmed pathological diagnoses. Lesions on 2D US, 2D MMG, and 3D MRI images were outlined as regions of interest, which were then automatically expanded outward by 3 mm, 5 mm, and 8 mm to extract radiomic features within and around the tumor. ANOVA, the maximum relevance minimum redundancy (mRMR) algorithm, and the least absolute shrinkage and selection operator (LASSO) were used to select features for breast cancer diagnosis through logistic regression analysis. The performance of the radiomics models was evaluated using receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curves. Among the various radiomics models tested, the MRI_US_MMG multi-modality logistic regression model with 5 mm peritumoral features demonstrated the best performance. In the test cohort, this model achieved an AUC of 0.905 (95% confidence interval [CI]: 0.805-1). These results suggest that the inclusion of peritumoral features, specifically at a 5 mm expansion, significantly enhanced the diagnostic efficiency of the multi-modality radiomics model in differentiating benign from malignant breast nodules. The multi-modality radiomics model based on MRI, ultrasound, and mammography can predict benign and malignant breast lesions.
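Decision curve analysis, one of the evaluation tools mentioned above, computes the net benefit of acting on the model at a threshold probability pt: net benefit = TP/n - (FP/n) * pt/(1 - pt), compared against treat-all and treat-none strategies. A hedged sketch on synthetic predictions (the toy model output and threshold grid are assumptions):

```python
# Sketch: net-benefit curve for decision curve analysis (DCA).
import numpy as np

rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=200)                     # synthetic outcomes
p = np.clip(y * 0.6 + rng.normal(scale=0.2, size=200), 0.01, 0.99)  # toy predictions

def net_benefit(y_true, p_pred, pt):
    """Net benefit of treating patients with predicted risk >= pt."""
    pred_pos = p_pred >= pt
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * pt / (1 - pt)

thresholds = np.linspace(0.05, 0.6, 12)
nb_model = [net_benefit(y, p, t) for t in thresholds]
nb_all = [net_benefit(y, np.ones_like(p), t) for t in thresholds]  # treat-all baseline
```

A model has clinical value at thresholds where its curve sits above both the treat-all curve and zero (treat-none).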

Medical slice transformer for improved diagnosis and explainability on 3D medical images with DINOv2.

Müller-Franzes G, Khader F, Siepmann R, Han T, Kather JN, Nebelung S, Truhn D

PubMed | Jul 4 2025
Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are essential clinical cross-sectional imaging techniques for diagnosing complex conditions. However, large 3D datasets with annotations for deep learning are scarce. While methods like DINOv2 are encouraging for 2D image analysis, these methods have not been applied to 3D medical images. Furthermore, deep learning models often lack explainability due to their "black-box" nature. This study aims to extend 2D self-supervised models, specifically DINOv2, to 3D medical imaging while evaluating their potential for explainable outcomes. We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis. MST combines a Transformer architecture with a 2D feature extractor, i.e., DINOv2. We evaluate its diagnostic performance against a 3D convolutional neural network (3D ResNet) across three clinical datasets: breast MRI (651 patients), chest CT (722 patients), and knee MRI (1199 patients). Both methods were tested for diagnosing breast cancer, predicting lung nodule malignancy, and detecting meniscus tears. Diagnostic performance was assessed by calculating the Area Under the Receiver Operating Characteristic Curve (AUC). Explainability was evaluated through a radiologist's qualitative comparison of saliency maps based on slice and lesion correctness. P-values were calculated using DeLong's test. MST achieved higher AUC values compared to ResNet across all three datasets: breast (0.94 ± 0.01 vs. 0.91 ± 0.02, P = 0.02), chest (0.95 ± 0.01 vs. 0.92 ± 0.02, P = 0.13), and knee (0.85 ± 0.04 vs. 0.69 ± 0.05, P = 0.001). Saliency maps were consistently more precise and anatomically correct for MST than for ResNet. Self-supervised 2D models like DINOv2 can be effectively adapted for 3D medical imaging using MST, offering enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
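The core MST idea, pooling per-slice 2D features into a single volume representation, can be sketched with scaled dot-product attention. Here a random matrix stands in for DINOv2 slice embeddings, and the query vector, which would be learned in the real model, is random; the attention weights play the role of per-slice saliency.

```python
# Sketch: attention pooling of per-slice embeddings into a volume embedding.
import numpy as np

rng = np.random.default_rng(3)
n_slices, d = 32, 64
slice_embeds = rng.normal(size=(n_slices, d))   # stand-in for 2D encoder features

query = rng.normal(size=d)                      # learnable query in a real model
scores = slice_embeds @ query / np.sqrt(d)      # scaled dot-product scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                        # softmax over slices
volume_repr = weights @ slice_embeds            # attention-pooled volume embedding
```

The softmax weights give one interpretability handle: slices with high weight are the ones driving the volume-level prediction.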

Characteristics of brain network connectome and connectome-based efficacy predictive model in bipolar depression.

Xi C, Lu B, Guo X, Qin Z, Yan C, Hu S

PubMed | Jul 4 2025
Aberrant functional connectivity (FC) between brain networks has been reported to be closely associated with bipolar disorder (BD). However, previous findings on specific brain network connectivity patterns have been inconsistent, and the clinical utility of FCs for predicting treatment outcomes in bipolar depression remains underexplored. To identify robust neuro-biomarkers of bipolar depression, a connectome-based analysis was conducted on resting-state functional MRI (rs-fMRI) data of 580 bipolar depression patients and 116 healthy controls (HCs). A subsample of 148 patients underwent a 4-week quetiapine treatment with post-treatment clinical assessment. Adopting machine learning, a predictive model based on the pre-treatment brain connectome was then constructed to predict treatment response and identify the efficacy-specific networks. Distinct brain network connectivity patterns were observed in bipolar depression compared to HCs. Elevated intra-network connectivity was identified within the default mode network (DMN), sensorimotor network (SMN), and subcortical network (SC); for inter-network connectivity, FCs were increased between the DMN and SMN and the frontoparietal (FPN) and ventral attention (VAN) networks, and decreased between the SC and cortical networks, especially the DMN and FPN. Global network topology analyses revealed decreased global efficiency and increased characteristic path length in BD compared to HC. Furthermore, the support vector regression model successfully predicted the efficacy of quetiapine treatment, as indicated by a high correspondence between predicted and actual HAMD reduction ratio values (r = 0.4493, df = 147, p = 2 × 10^-4). The identified efficacy-specific networks primarily encompassed FCs between the SMN and SC, and between the FPN, DMN, and VAN. These identified networks further predicted treatment response with r = 0.3940 in a subsequent validation with an independent cohort (n = 43).
These findings present the characteristic aberrant patterns of the brain network connectome in bipolar depression and demonstrate the predictive potential of the pre-treatment network connectome for quetiapine response. Promisingly, the identified connectivity networks may serve as functional targets for future precise treatments for bipolar depression.
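The efficacy-prediction step, support vector regression from pre-treatment connectivity features to a symptom reduction ratio, evaluated by the correlation between predicted and actual values, can be sketched as follows. The connectome features and outcome are synthetic stand-ins, not the study's data.

```python
# Sketch: SVR from connectivity features to outcome, scored by Pearson r
# between cross-validated predictions and actual values.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(11)
n, p = 148, 50                                  # ~treatment subsample size
X = rng.normal(size=(n, p))                     # stand-in FC features
y = X[:, 0] * 0.5 + rng.normal(scale=0.5, size=n)  # toy HAMD reduction ratio

pred = cross_val_predict(SVR(kernel="linear"), X, y, cv=5)
r, pval = pearsonr(pred, y)                     # predicted-vs-actual correspondence
```

Cross-validated predictions keep the predicted-actual correlation honest: each value is predicted by a model that never saw that patient.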

Intelligent brain tumor detection using hybrid finetuned deep transfer features and ensemble machine learning algorithms.

Salakapuri R, Terlapu PV, Kalidindi KR, Balaka RN, Jayaram D, Ravikumar T

PubMed | Jul 4 2025
Brain tumours (BTs) are severe neurological disorders. They affect more than 308,000 people each year worldwide, with over 251,000 deaths annually (IARC, 2020). Detecting BTs is complex because they vary in nature, and early diagnosis is essential for better survival rates. This study presents a new system for detecting BTs that combines deep learning (DL) and machine learning (ML) techniques. The system uses advanced models such as Inception-V3, ResNet-50, and VGG-16 for feature extraction, principal component analysis (PCA) for dimensionality reduction, and ensemble methods such as stacking, k-NN, gradient boosting, AdaBoost, multi-layer perceptron (MLP), and support vector machines for classification, predicting BTs from MRI scans. The MRI scans were resized to 224 × 224 pixels, and pixel intensities were normalized to a [0,1] scale. A Gaussian filter was applied for stability, and the Keras ImageDataGenerator was used for image augmentation, applying methods such as zooming and ±10% brightness adjustments. The dataset has 5,712 MRI scans, classified into four groups: meningioma, no tumor, glioma, and pituitary. Tenfold cross-validation was used to check model reliability. Deep transfer learning (TL) and ensemble ML models work well together and showed excellent results in detecting BTs. The stacking ensemble model achieved the highest accuracy across all feature extraction methods, with ResNet-50 features reduced by PCA (500 components) producing an accuracy of 0.957 (95% CI: 0.948-0.966) and an AUC of 0.996 (95% CI: 0.989-0.998), significantly outperforming baselines (p < 0.01). Neural networks and gradient-boosting models also showed strong performance. The stacking model is robust and reliable, making this method useful for medical applications. Future studies will focus on multi-modal imaging to further improve diagnostic accuracy and advance early detection of brain tumours.
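The PCA-plus-stacking pipeline described above can be sketched with scikit-learn. The synthetic four-class data, the component count, and the reduced base-learner set are illustrative assumptions, not the paper's configuration.

```python
# Sketch: PCA dimensionality reduction feeding a stacking ensemble
# (two base learners, logistic-regression meta-learner) on 4-class data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for deep features extracted from MRI scans
X, y = make_classification(n_samples=400, n_features=200, n_informative=20,
                           n_classes=4, random_state=0)

stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("gb", GradientBoostingClassifier(n_estimators=50, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
model = make_pipeline(PCA(n_components=50), stack)  # reduce, then stack
model.fit(X, y)
acc = model.score(X, y)                             # training accuracy (optimistic)
```

In a real evaluation the accuracy would come from held-out folds (the study uses tenfold cross-validation), not the training set.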

ViT-GCN: A Novel Hybrid Model for Accurate Pneumonia Diagnosis from X-ray Images.

Xu N, Wu J, Cai F, Li X, Xie HB

PubMed | Jul 4 2025
This study aims to enhance the accuracy of pneumonia diagnosis from X-ray images by developing a model that integrates Vision Transformer (ViT) and Graph Convolutional Networks (GCN) for improved feature extraction and diagnostic performance. The ViT-GCN model was designed to leverage the strengths of both ViT, which captures global image information by dividing the image into fixed-size patches and processing them in sequence, and GCN, which captures node features and relationships through message passing and aggregation in graph data. A composite loss function combining multivariate cross-entropy, focal loss, and GHM loss was introduced to address dataset imbalance and improve training efficiency on small datasets. The ViT-GCN model demonstrated superior performance, achieving an accuracy of 91.43% on the COVID-19 chest X-ray database, surpassing existing models in diagnostic accuracy for pneumonia. The study highlights the effectiveness of combining ViT and GCN architectures in medical image diagnosis, particularly in addressing challenges related to small datasets. This approach can lead to more accurate and efficient pneumonia diagnoses, especially in resource-constrained settings where small datasets are common.
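Of the three terms in the composite loss, focal loss is the easiest to show compactly: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), which down-weights easy, well-classified examples so training focuses on hard ones. A minimal NumPy sketch (the alpha and gamma defaults are the common choices from the focal-loss literature, not necessarily the paper's settings):

```python
# Sketch: binary focal loss, one term of the composite loss described above.
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss; `probs` are predicted P(y=1)."""
    p_t = np.where(targets == 1, probs, 1 - probs)   # prob of the true class
    return np.mean(-alpha * (1 - p_t) ** gamma * np.log(p_t + eps))

probs = np.array([0.9, 0.1, 0.6, 0.4])
targets = np.array([1, 0, 1, 0])
easy = focal_loss(np.array([0.9]), np.array([1]))    # confident, correct example
hard = focal_loss(np.array([0.6]), np.array([1]))    # uncertain example
```

The (1 - p_t)^gamma factor is what distinguishes this from plain cross-entropy: the easy example's contribution shrinks quadratically as confidence grows.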