Page 41 of 165 · 1650 results

Multimodal deep learning-based radiomics for meningioma consistency prediction: integrating T1 and T2 MRI in a multi-center study.

Lin H, Yue Y, Xie L, Chen B, Li W, Yang F, Zhang Q, Chen H

Jul 1, 2025
Meningioma consistency critically impacts surgical planning, as soft tumors are easier to resect than hard ones. Current MRI-based assessments of tumor consistency are subjective and lack quantitative accuracy; integrating deep learning and radiomics could enhance predictive accuracy. A retrospective study analyzed 204 meningioma patients from two centers, the Second Affiliated Hospital of Guangzhou Medical University and the Southern Theater Command Hospital PLA. Three models were developed: a radiomics model (Rad_Model), a deep learning model (DL_Model), and a combined model (DLR_Model). Model performance was evaluated using AUC, accuracy, sensitivity, specificity, and precision. The DLR_Model outperformed the other models across all cohorts. In the training set, it achieved an AUC of 0.957, an accuracy of 0.908, and a precision of 0.965. In the external test cohort, it maintained superior performance, with an AUC of 0.854, an accuracy of 0.778, and a precision of 0.893, surpassing both the Rad_Model (AUC = 0.768) and the DL_Model (AUC = 0.720). Combining radiomics and deep learning features improved predictive performance and robustness. This study introduced and evaluated a deep learning radiomics model (DLR_Model) that accurately predicts meningioma consistency and has the potential to improve preoperative assessment and surgical planning.
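The late-fusion idea in this abstract, concatenating radiomic and deep features before fitting a single classifier, can be sketched as follows. This is a minimal illustration on synthetic stand-in data (the feature counts and signal strengths are invented for the example), not the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 204 patients, 20 radiomic and 64 deep features.
n = 204
y = rng.integers(0, 2, size=n)                       # 0 = soft, 1 = hard consistency
radiomic = rng.normal(size=(n, 20)) + y[:, None] * 0.5
deep = rng.normal(size=(n, 64)) + y[:, None] * 0.3

# Late fusion: concatenate the two feature blocks, fit one classifier.
X = np.hstack([radiomic, deep])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"fused-model AUC: {auc:.3f}")
```

In practice the deep features would come from a trained CNN and the radiomic features from a dedicated extraction toolkit, with the fusion weights optimized as the paper describes.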

Quantitative ultrasound classification of healthy and chemically degraded ex-vivo cartilage.

Sorriento A, Guachi-Guachi L, Turini C, Lenzi E, Dolzani P, Lisignoli G, Kerdegari S, Valenza G, Canale C, Ricotti L, Cafarelli A

Jul 1, 2025
In this study, we explore the potential of ten quantitative (radiofrequency-based) ultrasound parameters to assess the progressive loss of collagen and proteoglycans, mimicking an osteoarthritic condition, in ex-vivo bovine cartilage samples. Most of the analyzed metrics changed significantly as degradation progressed, especially with collagenase treatment. We propose, for the first time, a combination of these ultrasound parameters through machine learning models aimed at automatically identifying healthy and degraded cartilage samples. The random forest model distinguished healthy cartilage from trypsin-treated samples with an accuracy of 60%, while the support vector machine demonstrated excellent accuracy (96%) in differentiating healthy cartilage from collagenase-degraded samples. Histological and mechanical analyses further confirmed these findings, with collagenase having a more pronounced impact than trypsin on both mechanical and histological properties. These metrics were obtained using an ultrasound probe with a transmission frequency of 15 MHz, typical for the diagnosis of musculoskeletal diseases, enabling a fully non-invasive procedure that requires no arthroscopic probes. In perspective, the proposed quantitative ultrasound assessment has the potential to become a new standard for monitoring cartilage health, enabling early detection of cartilage pathologies and timely interventions.
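The classification step, comparing a random forest and an SVM on the ten ultrasound parameters, can be sketched as below. The data here are synthetic stand-ins (the paper's features come from radiofrequency ultrasound signals, and the class-separation strength is invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical stand-ins for ten QUS parameters per sample.
n = 120
y = np.repeat([0, 1], n // 2)            # 0 = healthy, 1 = degraded
X = rng.normal(size=(n, 10)) + y[:, None] * 0.8

rf = RandomForestClassifier(n_estimators=200, random_state=0)
svm = SVC(kernel="rbf", C=1.0)

# 5-fold cross-validated accuracy for each classifier.
rf_acc = cross_val_score(rf, X, y, cv=5).mean()
svm_acc = cross_val_score(svm, X, y, cv=5).mean()
print(f"RF accuracy: {rf_acc:.2f}, SVM accuracy: {svm_acc:.2f}")
```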

A hybrid XAI-driven deep learning framework for robust GI tract disease diagnosis.

Dahan F, Shah JH, Saleem R, Hasnain M, Afzal M, Alfakih TM

Jul 1, 2025
The stomach is one of the main digestive organs of the gastrointestinal (GI) tract, essential for digestion and nutrient absorption. Various GI diseases, including gastritis, ulcers, and cancer, can severely affect health and quality of life. Precise diagnosis of GI tract diseases remains a significant challenge in healthcare, as misclassification leads to delayed treatment and negative consequences for patients. Even with advances in machine learning and explainable AI for medical image analysis, existing methods tend to have high false-negative rates, which compromises critical disease cases. This paper presents a hybrid deep learning-based explainable artificial intelligence (XAI) approach to improve the accuracy of diagnosing GI disorders, including stomach diseases, from endoscopically acquired images. A Swin Transformer is integrated with deep CNNs (EfficientNet-B3, ResNet-50) to extract robust features and improve both diagnostic accuracy and model interpretability. Stacked machine learning classifiers with a meta-loss are combined with XAI techniques (Grad-CAM) to minimize false negatives, supporting early and accurate diagnosis in GI tract disease evaluation. The proposed model achieved an accuracy of 93.79% with a low misclassification rate, making it effective for GI tract disease classification. Class-wise performance metrics, such as precision, recall, and F1-score, show considerable improvements, with reduced false-negative rates. Grad-CAM makes AI-driven GI tract disease diagnosis more accessible to medical professionals by providing visual explanations of model predictions. This study opens the prospect of a synergistic combination of deep learning and XAI for earlier diagnosis with fewer human errors, and for guiding clinicians managing gastrointestinal diseases.
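The stacked-classifier stage can be illustrated with scikit-learn's StackingClassifier. This sketch substitutes synthetic features for the deep features pooled from the backbones, and a plain logistic-regression meta-learner in place of the paper's meta-loss; recall (sensitivity) is reported because it tracks the false-negative rate the paper targets:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic stand-in for deep features extracted by the backbones.
X, y = make_classification(n_samples=400, n_features=32,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Base learners feed their predictions to a meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)

rec = recall_score(y_te, stack.predict(X_te))
print(f"recall (sensitivity): {rec:.2f}")
```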

Hybrid model integration with explainable AI for brain tumor diagnosis: a unified approach to MRI analysis and prediction.

Vamsidhar D, Desai P, Joshi S, Kolhar S, Deshpande N, Gite S

Jul 1, 2025
Effective treatment of brain tumors, a critical health condition, relies on accurate detection, and medical imaging plays a pivotal role in improving early tumor detection and diagnosis. This study presents two approaches to the tumor detection problem in the healthcare domain. The first combines image processing, a vision transformer (ViT), and machine learning algorithms to analyze medical images. The second is a parallel model integration technique, in which two pre-trained deep learning models, ResNet101 and Xception, are first integrated, and local interpretable model-agnostic explanations (LIME) are then applied to explain the model. The combination of vision transformer, random forest, and contrast-limited adaptive histogram equalization achieved an accuracy of 98.17%, while the parallel model integration (ResNet101 and Xception) reached 99.67%. Based on these results, the paper proposes the deep learning approach, the parallel model integration technique, as the more effective method. Future work aims to extend the model to multi-class classification for tumor type detection and to improve generalization for broader applicability.

Transformer attention fusion for fine grained medical image classification.

Badar D, Abbas J, Alsini R, Abbas T, ChengLiang W, Daud A

Jul 1, 2025
Fine-grained visual classification is fundamental for medical image applications because it detects minor lesions. Diabetic retinopathy (DR) is a preventable cause of blindness that requires exact and timely diagnosis to prevent vision damage. Automated DR classification systems face challenges including irregular lesions, uneven class distributions, and inconsistent image quality, all of which reduce diagnostic accuracy at early detection stages. Our solution, MSCAS-Net (Multi-Scale Cross and Self-Attention Network), uses the Swin Transformer as its backbone and extracts features at three resolutions (12 × 12, 24 × 24, 48 × 48), allowing it to capture both subtle local features and global context. The model uses self-attention to strengthen spatial relationships within each scale and cross-attention to match feature patterns across scales, building a comprehensive information structure. This dual attention mechanism makes the model better at detecting significant lesions. MSCAS-Net achieves the best performance on the APTOS, DDR, and IDRID benchmarks, reaching accuracies of 93.8%, 89.80%, and 86.70%, respectively. Because it learns stable features, the model handles imbalanced datasets and inconsistent image quality without data augmentation. MSCAS-Net demonstrates a breakthrough in automated DR diagnostics, combining high diagnostic precision with interpretability to serve as an efficient AI-powered clinical decision support system. This research demonstrates how fine-grained visual classification methods benefit the detection and treatment of DR in its early stages.
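The cross-attention operation that lets one scale attend to another reduces, at its core, to scaled dot-product attention with queries from one feature map and keys/values from another. A minimal NumPy sketch follows; the token counts mirror the 12 × 12 and 48 × 48 grids mentioned above, while the learned projection matrices and multi-head structure a real model would use are omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product attention where queries come from one scale
    and keys/values from another, so coarse tokens attend to fine detail."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)       # (Nq, Nkv)
    return softmax(scores, axis=-1) @ kv_feats       # (Nq, d)

rng = np.random.default_rng(0)
coarse = rng.normal(size=(12 * 12, 64))   # 12x12 grid of 64-dim tokens
fine = rng.normal(size=(48 * 48, 64))     # 48x48 grid, same channel dim

fused = cross_attention(coarse, fine)
print(fused.shape)   # one fused vector per coarse token
```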

Muscle-Driven prognostication in gastric cancer: A multicenter deep learning framework integrating Iliopsoas and erector spinae radiomics for 5-Year survival prediction.

Hong Y, Zhang P, Teng Z, Cheng K, Zhang Z, Cheng Y, Cao G, Chen B

Jul 1, 2025
This study developed a 5-year survival prediction model for gastric cancer patients by combining radiomics and deep learning, focusing on CT-based 2D and 3D features of the iliopsoas and erector spinae muscles. Retrospective data from 705 patients across two centers were analyzed, with clinical variables assessed via Cox regression and radiomic features extracted using deep learning. The 2D model outperformed the 3D approach, leading to a fusion of features across five dimensions, optimized via logistic regression. No significant association was found between baseline clinical characteristics and survival, but the 2D model demonstrated strong prognostic performance (AUC ≈ 0.8), with attention heatmaps emphasizing the spinal muscle regions. The 3D model underperformed due to irrelevant data. The final integrated model achieved stable predictive accuracy, confirming the link between muscle mass and survival. This approach advances precision medicine by enabling personalized prognosis and exploring the feasibility of 3D imaging, offering insights for gastric cancer research.

Determination of the oral carcinoma and sarcoma in contrast enhanced CT images using deep convolutional neural networks.

Warin K, Limprasert W, Paipongna T, Chaowchuen S, Vicharueang S

Jul 1, 2025
Oral cancer is a hazardous disease and a major cause of morbidity and mortality worldwide. The purpose of this study was to develop deep convolutional neural network (CNN)-based multiclass classification and object detection models for distinguishing and detecting oral carcinoma and sarcoma in contrast-enhanced CT images. The study included 3,259 CT image slices of oral cancer cases collected from a cancer hospital and two regional hospitals between 2016 and 2020. Multiclass classification models were constructed using DenseNet-169, ResNet-50, EfficientNet-B0, ConvNeXt-Base, and ViT-Base-Patch16-224 to accurately differentiate between oral carcinoma and sarcoma. Additionally, multiclass object detection models, including Faster R-CNN, YOLOv8, and YOLOv11, were designed to autonomously identify and localize lesions by placing bounding boxes on CT images. On a test dataset, the best classification model achieved an accuracy of 0.97, while the best detection model yielded a mean average precision (mAP) of 0.87. In conclusion, CNN-based multiclass models hold great promise for accurately identifying and distinguishing oral carcinoma and sarcoma in CT imaging, potentially enhancing early detection and informing treatment strategies.

AI-based CT assessment of 3117 vertebrae reveals significant sex-specific vertebral height differences.

Palm V, Thangamani S, Budai BK, Skornitzke S, Eckl K, Tong E, Sedaghat S, Heußel CP, von Stackelberg O, Engelhardt S, Kopytova T, Norajitra T, Maier-Hein KH, Kauczor HU, Wielpütz MO

Jul 1, 2025
Predicting vertebral height is complex due to individual factors. AI-based medical image analysis offers new opportunities for vertebral assessment; these methods may contribute to sex-adapted nomograms and vertebral height prediction models, aiding the diagnosis of spinal conditions such as compression fractures and supporting individualized, sex-specific medicine. In this study, an AI-based CT spine analysis of 262 subjects (mean age 32.36 years, range 20-54 years) was conducted, covering a total of 3117 vertebrae, to assess sex-associated anatomical variations. Automated segmentations provided anterior, central, and posterior vertebral heights. A cubic spline linear mixed-effects regression model was fitted to age, sex, and spinal segment. Measurement reliability was confirmed by two readers, with intraclass correlation coefficients (ICC) of 0.94-0.98. Female vertebral heights were consistently smaller than male heights (p < 0.05). The largest differences were found in the upper thoracic spine (T1-T6), with mean differences of 7.9-9.0%; specifically, T1 and T2 showed differences of 8.6% and 9.0%, respectively. The strongest height increase between consecutive vertebrae was observed from T9 to L1 (mean slope 1.46, or 6.63%, for females and 1.53, or 6.48%, for males). This study highlights significant sex-based differences in vertebral heights, yielding sex-adapted nomograms that can enhance diagnostic accuracy and support individualized patient assessment.

Attention residual network for medical ultrasound image segmentation.

Liu H, Zhang P, Hu J, Huang Y, Zuo S, Li L, Liu M, She C

Jul 1, 2025
Ultrasound imaging can clearly display the morphology and structure of internal organs, enabling examination of organs such as the breast, liver, and thyroid. It can identify the locations of tumors, nodules, and other lesions, serving as an effective tool for treatment monitoring and rehabilitation evaluation. Typically, the attending physician must manually delineate the boundaries of lesions, such as tumors, in ultrasound images. However, the high noise level in ultrasound images, the degradation of image quality by surrounding tissues, and the operator's experience and proficiency can all reduce the accuracy of delineating lesion boundaries. With the advancement of deep learning, its application to medical image segmentation has become increasingly prevalent. The U-Net model, for instance, performs well in medical image segmentation, but the convolution layers of the traditional U-Net are relatively simple, leading to suboptimal extraction of global information, and the significant noise in ultrasound images makes the model prone to interference. In this research, we propose an Attention Residual Network model (ARU-Net). Residual connections in the encoder enhance the model's learning capacity, and a spatial hybrid convolution module augments its ability to extract global information while deepening the network's vertical architecture. During feature fusion in the skip connections, a channel attention mechanism and a multi-convolutional self-attention mechanism are introduced to suppress noise in the fused feature maps, allowing the model to acquire more information about the target region.
Finally, the predictive efficacy of the model was evaluated on publicly available breast and thyroid ultrasound datasets. ARU-Net achieved mean Intersection over Union (mIoU) values of 82.59% and 84.88%, accuracies of 97.53% and 96.09%, and F1-scores of 90.06% and 89.7% for breast and thyroid ultrasound, respectively.
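The mIoU metric reported above averages the per-class Intersection over Union between predicted and ground-truth masks. A minimal NumPy implementation on a toy 4 × 4 binary mask (the masks here are invented for illustration):

```python
import numpy as np

def mean_iou(pred, target, n_classes=2):
    """Mean Intersection-over-Union across classes for one label map."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 masks: 1 = lesion, 0 = background.
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
pred   = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 0],      # one lesion pixel missed
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])

print(round(mean_iou(pred, target), 3))   # → 0.837
```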

Radiomics analysis based on dynamic contrast-enhanced MRI for predicting early recurrence after hepatectomy in hepatocellular carcinoma patients.

Wang KD, Guan MJ, Bao ZY, Shi ZJ, Tong HH, Xiao ZQ, Liang L, Liu JW, Shen GL

Jul 1, 2025
This study aimed to develop a machine learning model based on magnetic resonance imaging (MRI) radiomics for predicting early recurrence after curative surgery in patients with hepatocellular carcinoma (HCC). A retrospective analysis was conducted on 200 patients with HCC who underwent curative hepatectomy, randomly allocated to training (n = 140) and validation (n = 60) cohorts. Preoperative arterial, portal venous, and delayed phase images were acquired. Tumor regions of interest (ROIs) were manually delineated, with an additional ROI obtained by expanding the tumor boundary by 5 mm. Radiomic features were extracted and selected using the Least Absolute Shrinkage and Selection Operator (LASSO), and multiple machine learning algorithms were employed to develop predictive models. Model performance was evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, and calibration curves. The 20 most discriminative radiomic features were integrated with tumor size and satellite nodules for model development. In the validation cohort, the clinical-peritumoral radiomics model demonstrated superior predictive accuracy (AUC = 0.85, 95% CI: 0.74-0.95) compared with the clinical-intratumoral radiomics model (AUC = 0.82, 95% CI: 0.68-0.93) and the radiomics-only model (AUC = 0.82, 95% CI: 0.69-0.93). Calibration curves and decision curve analysis further indicated superior calibration and clinical benefit. The MRI-based peritumoral radiomics model shows significant potential for predicting early recurrence of HCC.
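The LASSO selection step used here can be sketched with scikit-learn: the L1 penalty shrinks coefficients of uninformative features to exactly zero, and the survivors form the radiomic signature. The feature matrix below is synthetic (the study's features come from MRI ROIs, and the dimensions are invented for the example):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomic feature matrix: many features, few informative.
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=8, random_state=0)
X = StandardScaler().fit_transform(X)   # LASSO is scale-sensitive

# Cross-validated LASSO picks the penalty strength; nonzero coefficients
# mark the selected radiomic features.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} of {X.shape[1]} features kept")
```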