
Multimodal deep learning-based radiomics for meningioma consistency prediction: integrating T1 and T2 MRI in a multi-center study.

Lin H, Yue Y, Xie L, Chen B, Li W, Yang F, Zhang Q, Chen H

PubMed · Jul 1, 2025
Meningioma consistency critically impacts surgical planning, as soft tumors are easier to resect than hard tumors. Current MRI-based assessments of tumor consistency are subjective and lack quantitative accuracy. Integrating deep learning and radiomics could enhance the predictive accuracy of meningioma consistency. A retrospective study analyzed 204 meningioma patients from two centers: the Second Affiliated Hospital of Guangzhou Medical University and the Southern Theater Command Hospital PLA. Three models were developed: a radiomics model (Rad_Model), a deep learning model (DL_Model), and a combined model (DLR_Model). Model performance was evaluated using AUC, accuracy, sensitivity, specificity, and precision. The DLR_Model outperformed the other models across all cohorts. In the training set, it achieved an AUC of 0.957, an accuracy of 0.908, and a precision of 0.965. In the external test cohort, it maintained superior performance with an AUC of 0.854, an accuracy of 0.778, and a precision of 0.893, surpassing both the Rad_Model (AUC = 0.768) and the DL_Model (AUC = 0.720). Combining radiomics and deep learning features improved predictive performance and robustness. This study introduced and evaluated a deep learning radiomics model (DLR_Model) that accurately predicts meningioma consistency and has the potential to improve preoperative assessment and surgical planning.
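
As a rough illustration of the fusion idea, the sketch below concatenates a radiomics feature matrix with deep-learning embeddings and trains a logistic regression classifier. The feature arrays, dimensions, and train/test split are placeholder assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of radiomics + deep-learning feature fusion, loosely
# mirroring the DLR_Model described above. Feature matrices are stand-ins
# for the paper's actual extracted features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 204                                # cohort size reported in the abstract
rad_feats = rng.normal(size=(n, 100))  # placeholder radiomics features
dl_feats = rng.normal(size=(n, 512))   # placeholder deep-learning embeddings
y = rng.integers(0, 2, size=n)         # 0 = soft, 1 = hard consistency

# Fuse by simple concatenation, then standardize.
X = np.hstack([rad_feats, dl_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)

prob = clf.predict_proba(scaler.transform(X_te))[:, 1]
pred = (prob >= 0.5).astype(int)
print("AUC:", roc_auc_score(y_te, prob))
print("Accuracy:", accuracy_score(y_te, pred))
print("Precision:", precision_score(y_te, pred, zero_division=0))
```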

Quantitative ultrasound classification of healthy and chemically degraded ex-vivo cartilage.

Sorriento A, Guachi-Guachi L, Turini C, Lenzi E, Dolzani P, Lisignoli G, Kerdegari S, Valenza G, Canale C, Ricotti L, Cafarelli A

PubMed · Jul 1, 2025
In this study, we explore the potential of ten quantitative (radiofrequency-based) ultrasound parameters to assess the progressive loss of collagen and proteoglycans, mimicking an osteoarthritis condition in ex-vivo bovine cartilage samples. Most of the analyzed metrics showed significant changes as degradation progressed, especially with collagenase treatment. We propose for the first time a combination of these ultrasound parameters in machine learning models aimed at automatically identifying healthy and degraded cartilage samples. The random forest model showed only moderate performance in distinguishing healthy cartilage from trypsin-treated samples, with an accuracy of 60%. The support vector machine demonstrated excellent accuracy (96%) in differentiating healthy cartilage from collagenase-degraded samples. Histological and mechanical analyses further confirmed these findings, with collagenase having a more pronounced impact than trypsin on both mechanical and histological properties. These metrics were obtained using an ultrasound probe with a transmission frequency of 15 MHz, typically used for the diagnosis of musculoskeletal diseases, enabling a fully non-invasive procedure that requires no arthroscopic probes. In perspective, the proposed quantitative ultrasound assessment has the potential to become a new standard for monitoring cartilage health, enabling the early detection of cartilage pathologies and timely interventions.
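
The classification step might look like the following scikit-learn sketch: an RBF-kernel SVM cross-validated on ten ultrasound parameters per sample. The synthetic features, sample counts, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the classification step: an SVM separating healthy from
# degraded samples using ten quantitative ultrasound parameters per sample.
# Feature values here are synthetic placeholders, not real QUS measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_healthy = rng.normal(0.0, 1.0, size=(30, 10))   # 10 QUS parameters/sample
X_degraded = rng.normal(1.0, 1.0, size=(30, 10))
X = np.vstack([X_healthy, X_degraded])
y = np.array([0] * 30 + [1] * 30)                 # 0 = healthy, 1 = degraded

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(svm, X, y, cv=5)
print("Cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```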

A hybrid XAI-driven deep learning framework for robust GI tract disease diagnosis.

Dahan F, Shah JH, Saleem R, Hasnain M, Afzal M, Alfakih TM

PubMed · Jul 1, 2025
The stomach is one of the main digestive organs in the gastrointestinal (GI) tract, essential for digestion and nutrient absorption. However, various gastrointestinal diseases, including gastritis, ulcers, and cancer, severely affect health and quality of life. The precise diagnosis of GI tract diseases is a significant challenge in healthcare, as misclassification can delay treatment and harm patients. Even with advances in machine learning and explainable AI for medical image analysis, existing methods tend to have high false-negative rates, which compromise critical disease cases. This paper presents a hybrid deep-learning-based explainable artificial intelligence (XAI) approach to improve the accuracy of gastrointestinal disorder diagnosis, including stomach diseases, from endoscopically acquired images. A Swin Transformer is integrated with deep CNNs (EfficientNet-B3, ResNet-50) to extract robust features, improving both diagnostic accuracy and model interpretability. Stacked machine learning classifiers with a meta-loss are combined with XAI techniques (Grad-CAM) to minimize false negatives, supporting early and accurate diagnosis in GI tract disease evaluation. The proposed model achieved an accuracy of 93.79% with a lower misclassification rate, making it effective for GI tract disease classification. Class-wise performance metrics, such as precision, recall, and F1-score, show considerable improvements, with reduced false-negative rates. Grad-CAM makes AI-driven GI tract disease diagnosis more accessible to medical professionals by providing visual explanations of model predictions. This study shows how combining deep learning with XAI can support earlier diagnosis with fewer human errors and guide clinicians managing gastrointestinal diseases.
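
Since Grad-CAM is central to the explanation step, here is a generic, hedged Grad-CAM sketch over a torchvision ResNet-50 (one of the named backbones). It is not the authors' hybrid Swin + DCNN pipeline; the input tensor and untrained weights are placeholders.

```python
# Hedged sketch of Grad-CAM for visual explanations, using a plain
# torchvision ResNet-50 rather than the paper's exact hybrid model.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None)  # in practice, load trained GI-disease weights
model.eval()

feats, grads = {}, {}
layer = model.layer4  # last convolutional stage

def fwd_hook(_m, _i, out):
    feats["v"] = out            # save the activation map

def bwd_hook(_m, _gin, gout):
    grads["v"] = gout[0]        # save the gradient w.r.t. the activation

layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # placeholder endoscopic image tensor
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class

# Channel weights = global-average-pooled gradients; weighted sum of maps.
w = grads["v"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # (1, 1, 224, 224) heatmap to overlay on the input image
```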

Hybrid model integration with explainable AI for brain tumor diagnosis: a unified approach to MRI analysis and prediction.

Vamsidhar D, Desai P, Joshi S, Kolhar S, Deshpande N, Gite S

PubMed · Jul 1, 2025
Brain tumors are a critical health condition, and effective treatment relies on accurate detection. Medical imaging plays a pivotal role in improving early-stage tumor detection and diagnosis. This study presents two approaches to the tumor detection problem in the healthcare domain. The first approach combines image processing, a vision transformer (ViT), and machine learning algorithms to analyze medical images. The second is a parallel model integration technique, in which two pre-trained deep learning models, ResNet101 and Xception, are integrated, followed by local interpretable model-agnostic explanations (LIME) to explain the model. The results show an accuracy of 98.17% for the combination of vision transformer, random forest, and contrast-limited adaptive histogram equalization, and 99.67% for the parallel model integration (ResNet101 and Xception). Based on these results, this paper proposes the parallel model integration technique as the more effective approach. Future work aims to extend the model to multi-class classification for tumor type detection and to improve generalization for broader applicability.
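
A minimal Keras sketch of the parallel model integration idea follows: ResNet101 and Xception run side by side and their pooled features are concatenated before a shared classification head. The input size, head width, and binary output are assumptions; the abstract does not specify the authors' exact configuration.

```python
# Hedged sketch of "parallel model integration": two pretrained backbones
# whose pooled features are concatenated before a shared classifier head.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet101, Xception

inp = layers.Input(shape=(299, 299, 3))
resnet = ResNet101(include_top=False, weights="imagenet", pooling="avg")
xcept = Xception(include_top=False, weights="imagenet", pooling="avg")

# Each backbone normally gets its own preprocessing; omitted in this sketch.
merged = layers.Concatenate()([resnet(inp), xcept(inp)])
x = layers.Dense(256, activation="relu")(merged)   # assumed head width
out = layers.Dense(1, activation="sigmoid")(x)     # tumor vs. no tumor

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```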

Transformer attention fusion for fine-grained medical image classification.

Badar D, Abbas J, Alsini R, Abbas T, ChengLiang W, Daud A

PubMed · Jul 1, 2025
Fine-grained visual classification is fundamental for medical image applications because it detects minor lesions. Diabetic retinopathy (DR) is a preventable cause of blindness that requires exact and timely diagnosis to prevent vision damage. Automated DR classification systems face challenges including irregular lesions, uneven distributions between image classes, and inconsistent image quality, all of which reduce diagnostic accuracy in early detection. Our solution is MSCAS-Net (Multi-Scale Cross and Self-Attention Network), which uses the Swin Transformer as its backbone. It extracts features at three resolutions (12 × 12, 24 × 24, 48 × 48), allowing it to capture both subtle local features and global context. The model uses self-attention mechanisms to improve spatial connections within single scales and cross-attention to match feature patterns across scales, building a comprehensive representation. This dual attention mechanism improves the detection of significant lesions. MSCAS-Net achieves the best performance on the APTOS, DDR, and IDRID benchmarks, with accuracies of 93.8%, 89.8%, and 86.7%, respectively. The model handles imbalanced datasets and inconsistent image quality without data augmentation because it learns stable features. MSCAS-Net demonstrates a breakthrough in automated DR diagnostics, combining high diagnostic precision with interpretability to serve as an efficient AI-powered clinical decision-support system. This research demonstrates how fine-grained visual classification methods benefit the detection and treatment of DR in its early stages.
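
The following PyTorch sketch illustrates the core mechanism: cross-attention letting tokens from a fine scale query a coarse scale after per-scale self-attention. The embedding dimension, head count, and use of nn.MultiheadAttention are assumptions rather than the paper's exact blocks.

```python
# Illustrative sketch of multi-scale self- plus cross-attention, the idea
# behind MSCAS-Net's fusion; dimensions are assumed, not the paper's.
import torch
import torch.nn as nn

d_model = 96
fine = torch.randn(1, 48 * 48, d_model)    # tokens from the 48x48 feature map
coarse = torch.randn(1, 12 * 12, d_model)  # tokens from the 12x12 feature map

self_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

# Self-attention refines spatial relations within a single scale...
fine_sa, _ = self_attn(fine, fine, fine)
# ...while cross-attention lets fine tokens query the coarse, global context.
fused, _ = cross_attn(fine_sa, coarse, coarse)
print(fused.shape)  # (1, 2304, 96): fine-scale tokens enriched with context
```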

Muscle-driven prognostication in gastric cancer: a multicenter deep learning framework integrating iliopsoas and erector spinae radiomics for 5-year survival prediction.

Hong Y, Zhang P, Teng Z, Cheng K, Zhang Z, Cheng Y, Cao G, Chen B

PubMed · Jul 1, 2025
This study developed a 5-year survival prediction model for gastric cancer patients by combining radiomics and deep learning, focusing on CT-based 2D and 3D features of the iliopsoas and erector spinae muscles. Retrospective data from 705 patients across two centers were analyzed, with clinical variables assessed via Cox regression and radiomic features extracted using deep learning. The 2D model outperformed the 3D approach, so features were fused across five dimensions and optimized via logistic regression. Results showed no significant association between baseline clinical characteristics and survival, but the 2D model demonstrated strong prognostic performance (AUC ≈ 0.8), with attention heatmaps emphasizing spinal muscle regions. The 3D model underperformed due to irrelevant data. The final integrated model achieved stable predictive accuracy, confirming the link between muscle mass and survival. This approach advances precision medicine by enabling personalized prognosis and exploring the feasibility of 3D imaging, offering insights for gastric cancer research.
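
One plausible reading of "feature fusion across five dimensions, optimized via logistic regression" is a two-stage stacking scheme, sketched below with synthetic placeholder features. In a real pipeline the stage-1 scores would come from cross-validated out-of-fold predictions.

```python
# Hedged sketch of score-level fusion: per-dimension logistic models whose
# probabilities are combined by a final logistic regression. All features
# and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 705                      # cohort size reported in the abstract
y = rng.integers(0, 2, n)    # placeholder 5-year survival labels

# Five feature "dimensions" (e.g., 2D/3D iliopsoas and erector spinae blocks).
blocks = [rng.normal(size=(n, 30)) for _ in range(5)]
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Stage 1: one logistic model per dimension produces a risk score.
# (For brevity these are in-sample scores; use out-of-fold scores in practice.)
scores_tr, scores_te = [], []
for X in blocks:
    m = LogisticRegression(max_iter=1000).fit(X[idx_tr], y[idx_tr])
    scores_tr.append(m.predict_proba(X[idx_tr])[:, 1])
    scores_te.append(m.predict_proba(X[idx_te])[:, 1])

# Stage 2: fuse the five scores with a final logistic regression.
fuser = LogisticRegression().fit(np.column_stack(scores_tr), y[idx_tr])
prob = fuser.predict_proba(np.column_stack(scores_te))[:, 1]
print("Fused AUC:", roc_auc_score(y[idx_te], prob))
```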

Determination of oral carcinoma and sarcoma in contrast-enhanced CT images using deep convolutional neural networks.

Warin K, Limprasert W, Paipongna T, Chaowchuen S, Vicharueang S

PubMed · Jul 1, 2025
Oral cancer is a hazardous disease and a major cause of morbidity and mortality worldwide. The purpose of this study was to develop deep convolutional neural network (CNN)-based multiclass classification and object detection models for distinguishing and detecting oral carcinoma and sarcoma in contrast-enhanced CT images. This study included 3,259 CT slices of oral cancer cases from a cancer hospital and two regional hospitals, collected from 2016 to 2020. Multiclass classification models were constructed using DenseNet-169, ResNet-50, EfficientNet-B0, ConvNeXt-Base, and ViT-Base-Patch16-224 to differentiate between oral carcinoma and sarcoma. Additionally, multiclass object detection models, including Faster R-CNN, YOLOv8, and YOLOv11, were designed to autonomously identify and localize lesions by placing bounding boxes on CT images. Performance evaluation on a test dataset showed that the best classification model achieved an accuracy of 0.97, while the best detection model yielded a mean average precision (mAP) of 0.87. In conclusion, CNN-based multiclass models show great promise for accurately identifying and distinguishing oral carcinoma and sarcoma in CT imaging, potentially enhancing early detection and informing treatment strategies.
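
For the detection arm, training one of the named models (YOLOv8) via the ultralytics package might look like the sketch below. The dataset YAML, epoch count, and image size are placeholder assumptions, not the authors' settings.

```python
# Minimal sketch of fine-tuning YOLOv8 for lesion detection on CT slices.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained nano checkpoint as a starting point

# Expects a dataset YAML (hypothetical path) listing train/val CT slices with
# carcinoma/sarcoma bounding-box labels in YOLO format.
model.train(data="oral_cancer.yaml", epochs=100, imgsz=640)

metrics = model.val()     # computes mAP over the validation split
print(metrics.box.map50)  # mAP at IoU 0.50, as reported in the abstract
```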

Innovative deep learning classifiers for breast cancer detection through hybrid feature extraction techniques.

Vijayalakshmi S, Pandey BK, Pandey D, Lelisho ME

PubMed · Jul 1, 2025
Breast cancer remains a major cause of mortality among women, and early, accurate detection is critical to improving survival rates. This study presents a hybrid classification approach for mammogram analysis that combines handcrafted statistical features with deep learning techniques. The methodology involves preprocessing with the Shearlet Transform, segmentation using improved Otsu thresholding and Canny edge detection, followed by feature extraction through the Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), and first-order statistical descriptors. These features are input into a 2D BiLSTM-CNN model designed to learn spatial and sequential patterns in mammogram images. Evaluated on the MIAS dataset, the proposed method achieved 97.14% accuracy, outperforming several benchmark models. The results indicate that this hybrid strategy improves classification performance and may assist radiologists in more effective breast cancer screening.
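
The handcrafted GLCM stage can be sketched with scikit-image as below. The patch is random placeholder data, and the chosen distances, angles, and texture properties are assumptions; the GLRLM and BiLSTM-CNN stages are omitted.

```python
# Sketch of GLCM texture descriptors from a mammogram patch via scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(7)
patch = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder ROI

# GLCM at four orientations, distance 1, with 256 gray levels.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # averaged over orientations; fed to the downstream classifier
```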

Evaluation of MRI-based synthetic CT for lumbar degenerative disease: a comparison with CT.

Jiang Z, Zhu Y, Wang W, Li Z, Li Y, Zhang M

PubMed · Jul 1, 2025
Patients with lumbar degenerative disease typically undergo preoperative MRI combined with CT, but this approach adds ionizing radiation and examination costs. The aim was to compare the effectiveness of MRI-based synthetic CT (sCT) in displaying lumbar degenerative changes, using CT as the gold standard. This prospective study was conducted between June 2021 and September 2023. Adult patients with suspected lumbar degenerative disease were enrolled and underwent lumbar MRI and CT on the same day. The MRI images were processed with a deep learning-based image synthesis method (BoneMRI) to generate sCT images. Two radiologists independently assessed and measured the display and length of osteophytes, the presence of annular calcifications, and the CT values (HU) of the L1 vertebra on both sCT and CT images. Agreement between CT and sCT was evaluated using statistical equivalence tests, and the display performance of sCT images generated from MRI scanners of different manufacturers and field strengths was also compared. A total of 105 participants were included (54 males and 51 females, aged 19-95 years). sCT was statistically equivalent to CT in displaying osteophytes and annular calcifications but performed worse in detecting osteoporosis. The display quality of sCT images synthesized from different imaging equipment was consistent. Overall, sCT matched CT for geometric measurements of lumbar degenerative changes, but it cannot independently detect osteoporosis. Combined with the soft-tissue information of conventional MRI, sCT offers a promising route to radiation-free diagnosis and preoperative planning.
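
An equivalence analysis of paired CT/sCT measurements could follow a two one-sided tests (TOST) procedure, as in the hedged SciPy sketch below. The data, the 0.5 mm margin, and the choice of osteophyte length are illustrative assumptions; the abstract does not report the exact test used.

```python
# Hedged sketch of a paired equivalence analysis (TOST) for CT vs. sCT
# osteophyte length measurements. All values are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ct = rng.normal(4.0, 1.0, size=105)        # osteophyte length on CT (mm)
sct = ct + rng.normal(0.0, 0.3, size=105)  # paired sCT measurement

diff = sct - ct
margin = 0.5  # assumed equivalence margin (mm)

# TOST: reject both one-sided nulls (mean diff <= -margin, mean diff >= +margin).
t_low, p_low = stats.ttest_1samp(diff, -margin, alternative="greater")
t_high, p_high = stats.ttest_1samp(diff, margin, alternative="less")
p_tost = max(p_low, p_high)
print("TOST p-value: %.4f -> %s" %
      (p_tost, "equivalent" if p_tost < 0.05 else "not shown equivalent"))
```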

AI-based CT assessment of 3117 vertebrae reveals significant sex-specific vertebral height differences.

Palm V, Thangamani S, Budai BK, Skornitzke S, Eckl K, Tong E, Sedaghat S, Heußel CP, von Stackelberg O, Engelhardt S, Kopytova T, Norajitra T, Maier-Hein KH, Kauczor HU, Wielpütz MO

PubMed · Jul 1, 2025
Predicting vertebral height is complex due to individual factors, and AI-based medical image analysis offers new opportunities for vertebral assessment. These novel methods may contribute to sex-adapted nomograms and vertebral height prediction models, aiding the diagnosis of spinal conditions such as compression fractures and supporting individualized, sex-specific medicine. In this study, an AI-based CT spine analysis of 262 subjects (mean age 32.36 years, range 20-54 years), comprising 3117 vertebrae in total, was conducted to assess sex-associated anatomical variation. Automated segmentations provided anterior, central, and posterior vertebral heights. A cubic-spline linear mixed-effects regression model was adjusted for age, sex, and spinal segment. Measurement reliability was confirmed by two readers with an intraclass correlation coefficient (ICC) of 0.94-0.98. Female vertebral heights were consistently smaller than those of males (p < 0.05). The largest differences were found in the upper thoracic spine (T1-T6), with mean differences of 7.9-9.0%; specifically, T1 and T2 showed differences of 8.6% and 9.0%, respectively. The strongest height increase between consecutive vertebrae was observed from T9 to L1 (mean slope 1.46, i.e., 6.63%, for females and 1.53, i.e., 6.48%, for males). This study highlights significant sex-based differences in vertebral heights and yields sex-adapted nomograms that can enhance diagnostic accuracy and support individualized patient assessment.
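
The modeling step might resemble the statsmodels sketch below: vertebral height regressed on a cubic B-spline of vertebral level with sex as a fixed effect and a per-subject random intercept. The synthetic data, spline degrees of freedom, and formula are assumptions standing in for the study's actual model.

```python
# Illustrative sketch of a cubic-spline linear mixed-effects model for
# vertebral height; all data below are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_subj, n_vert = 262, 12
rows = []
for s in range(n_subj):
    sex = rng.choice(["F", "M"])
    base = 18 + (1.5 if sex == "M" else 0.0) + rng.normal(0, 1)
    for level in range(1, n_vert + 1):  # e.g., T1..T12 coded 1..12
        rows.append({"subject": s, "sex": sex, "level": level,
                     "height": base + 0.6 * level + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Cubic B-spline of vertebral level, sex as fixed effect, random intercept
# per subject (patsy's bs() is available inside the formula).
model = smf.mixedlm("height ~ bs(level, df=4) + sex", df, groups=df["subject"])
fit = model.fit()
print(fit.summary())  # fixed effects include the female-male height difference
```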