Capsule-ConvKAN: A Hybrid Neural Approach to Medical Image Classification

Laura Pituková, Peter Sinčák, László József Kovács

arXiv preprint · Jul 8, 2025
This study conducts a comprehensive comparison of four neural network architectures: Convolutional Neural Network, Capsule Network, Convolutional Kolmogorov–Arnold Network, and the newly proposed Capsule–Convolutional Kolmogorov–Arnold Network. The proposed Capsule-ConvKAN architecture combines the dynamic routing and spatial hierarchy capabilities of Capsule Networks with the flexible and interpretable function approximation of Convolutional Kolmogorov–Arnold Networks. This novel hybrid model was developed to improve feature representation and classification accuracy, particularly on challenging real-world biomedical image data. The architectures were evaluated on a histopathological image dataset, where Capsule-ConvKAN achieved the highest classification performance with an accuracy of 91.21%. The results demonstrate the potential of the newly introduced Capsule-ConvKAN in capturing spatial patterns, managing complex features, and addressing the limitations of traditional convolutional models in medical image classification.
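For readers unfamiliar with the capsule half of this hybrid, the sketch below shows routing-by-agreement between a primary-capsule layer and class capsules, the mechanism the Capsule-ConvKAN head reuses on top of ConvKAN feature maps. It is an illustrative PyTorch sketch following Sabour et al.'s dynamic routing, not the authors' code; all shapes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Non-linear 'squash': preserves direction, maps vector length into [0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

class RoutingCapsules(nn.Module):
    """Class capsules with routing-by-agreement (Sabour et al., 2017)."""
    def __init__(self, in_caps, in_dim, out_caps, out_dim, iterations=3):
        super().__init__()
        self.iterations = iterations
        # One transformation matrix per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, in_caps, out_caps, out_dim, in_dim))

    def forward(self, u):                                  # u: (B, in_caps, in_dim)
        u = u.unsqueeze(2).unsqueeze(-1)                   # (B, in_caps, 1, in_dim, 1)
        u_hat = (self.W @ u).squeeze(-1)                   # predictions: (B, in_caps, out_caps, out_dim)
        b = torch.zeros(*u_hat.shape[:3], 1, device=u.device)
        for _ in range(self.iterations):                   # routing-by-agreement
            c = F.softmax(b, dim=2)                        # coupling coefficients
            v = squash((c * u_hat).sum(dim=1, keepdim=True))
            b = b + (u_hat * v).sum(dim=-1, keepdim=True)  # agreement update
        return v.squeeze(1)                                # (B, out_caps, out_dim); vector lengths act as class scores

# In a Capsule-ConvKAN-style model, `u` would come from reshaped ConvKAN feature maps,
# e.g. RoutingCapsules(in_caps=32 * 6 * 6, in_dim=8, out_caps=num_classes, out_dim=16).
```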

A fully automated deep learning framework for age estimation in adults using periapical radiographs of canine teeth.

Upalananda W, Phisutphithayakun C, Assawasuksant P, Tanwattana P, Prasatkaew P

PubMed · Jul 8, 2025
Determining age from dental remains is vital in forensic investigations, aiding victim identification and anthropological research. We propose a fully automated deep learning framework built as a two-step pipeline: canine tooth detection followed by age estimation, based on either the tooth images alone or the images combined with sex information. The dataset included 2,587 radiographs from 1,004 patients (691 females, 313 males) aged 13.42-85.45 years. The YOLOv8-Nano model achieved exceptional performance in detecting canine teeth, with an F1 score of 0.994, a 98.94% detection success rate, and accurate numbering of all detected teeth. For age estimation, we implemented four convolutional neural network architectures: ResNet-18, DenseNet-121, EfficientNet-B0, and MobileNetV3. Each model was trained to estimate age from one of the four individual canine teeth (13, 23, 33, and 43). The models achieved median absolute errors ranging from 3.55 to 5.18 years. Incorporating sex as an additional input feature did not improve performance, and no significant differences in predictive accuracy were observed among the individual teeth. In conclusion, the proposed framework demonstrates potential as a robust and practical tool for age estimation across diverse forensic contexts.
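As an illustration of the two-step pipeline described above, the sketch below chains an Ultralytics YOLOv8-Nano detector with a single-output ResNet-18 regressor. The weight files, preprocessing, and box selection are hypothetical placeholders, not details taken from the paper.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms
from ultralytics import YOLO

detector = YOLO("canine_yolov8n.pt")                        # hypothetical fine-tuned detector weights

regressor = models.resnet18(weights=None)
regressor.fc = nn.Linear(regressor.fc.in_features, 1)       # single output: age in years
regressor.load_state_dict(torch.load("age_resnet18.pt", map_location="cpu"))  # hypothetical weights
regressor.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),            # radiographs are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def estimate_age(radiograph_path: str) -> float:
    boxes = detector(radiograph_path)[0].boxes               # step 1: canine tooth detection
    if len(boxes) == 0:
        raise ValueError("no canine tooth detected")
    x1, y1, x2, y2 = map(int, boxes.xyxy[0].tolist())        # first detected box
    crop = Image.open(radiograph_path).crop((x1, y1, x2, y2))
    with torch.no_grad():                                     # step 2: age regression on the crop
        return regressor(preprocess(crop).unsqueeze(0)).item()
```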

Attention-Enhanced Deep Learning Ensemble for Breast Density Classification in Mammography

Peyman Sharifian, Xiaotong Hong, Alireza Karimian, Mehdi Amini, Hossein Arabi

arXiv preprint · Jul 8, 2025
Breast density assessment is a crucial component of mammographic interpretation, with high breast density (BI-RADS categories C and D) representing both a significant risk factor for developing breast cancer and a technical challenge for tumor detection. This study proposes an automated deep learning system for robust binary classification of breast density (low: A/B vs. high: C/D) using the VinDr-Mammo dataset. We implemented and compared four advanced convolutional neural networks: ResNet18, ResNet50, EfficientNet-B0, and DenseNet121, each enhanced with channel attention mechanisms. To address the inherent class imbalance, we developed a novel Combined Focal Label Smoothing Loss function that integrates focal loss, label smoothing, and class-balanced weighting. Our preprocessing pipeline incorporated advanced techniques, including contrast-limited adaptive histogram equalization (CLAHE) and comprehensive data augmentation. The individual models were combined through an optimized ensemble voting approach, achieving superior performance (AUC: 0.963, F1-score: 0.952) compared to any single model. This system demonstrates significant potential to standardize density assessments in clinical practice, potentially improving screening efficiency and early cancer detection rates while reducing inter-observer variability among radiologists.
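The abstract names a Combined Focal Label Smoothing Loss that merges focal loss, label smoothing, and class-balanced weighting. The PyTorch sketch below shows one way such a loss can be composed; the authors' exact formulation and hyperparameters are not given here, so treat this as an assumption-laden illustration (the class-balanced weights follow the effective-number-of-samples scheme of Cui et al., 2019).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedFocalLabelSmoothingLoss(nn.Module):
    def __init__(self, class_counts, gamma=2.0, smoothing=0.1, beta=0.999):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        # Class-balanced weights via the "effective number of samples".
        effective = 1.0 - torch.pow(beta, counts)
        weights = (1.0 - beta) / effective
        self.register_buffer("weights", weights / weights.sum() * len(counts))
        self.gamma = gamma
        self.smoothing = smoothing

    def forward(self, logits, targets):                     # logits: (B, C), targets: (B,)
        n_classes = logits.size(1)
        log_probs = F.log_softmax(logits, dim=1)
        probs = log_probs.exp()
        # Smoothed one-hot target distribution.
        true_dist = torch.full_like(log_probs, self.smoothing / (n_classes - 1))
        true_dist.scatter_(1, targets.unsqueeze(1), 1.0 - self.smoothing)
        # Focal modulation: down-weight well-classified examples.
        focal = (1.0 - probs) ** self.gamma
        loss = -(self.weights.unsqueeze(0) * focal * true_dist * log_probs).sum(dim=1)
        return loss.mean()

# Example: binary low/high density with an imbalanced training set (counts are placeholders).
criterion = CombinedFocalLabelSmoothingLoss(class_counts=[8000, 2000])
```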

A Meta-Analysis of the Diagnosis of Condylar and Mandibular Fractures Based on 3-dimensional Imaging and Artificial Intelligence.

Wang F, Jia X, Meiling Z, Oscandar F, Ghani HA, Omar M, Li S, Sha L, Zhen J, Yuan Y, Zhao B, Abdullah JY

PubMed · Jul 8, 2025
This article reviews the literature on the use of 3-dimensional (3D) imaging and artificial intelligence (AI)-assisted methods for rapid and accurate classification and diagnosis of condylar fractures, and presents a meta-analysis of mandibular fractures. Mandibular condyle fracture is a common fracture type in maxillofacial surgery, and accurate classification and diagnosis of condylar fractures are critical to developing an effective treatment plan. With the rapid development of 3D imaging technology and AI, traditional x-ray diagnosis is gradually being replaced by more accurate technologies such as 3D computed tomography (CT). These emerging technologies provide more detailed anatomic information and significantly improve the accuracy and efficiency of condylar fracture diagnosis, especially in the evaluation and surgical planning of complex fractures. The application of AI in medical imaging is analyzed further, with particular attention to successful cases of fracture detection and classification using deep learning models. Although AI has demonstrated great potential in condylar fracture diagnosis, it still faces challenges such as data quality, model interpretability, and clinical validation. This article evaluates the accuracy and practicality of AI in diagnosing mandibular fractures through a systematic review and meta-analysis of the existing literature. The results show that AI-assisted diagnosis achieves high prediction accuracy in detecting condylar fractures and significantly improves diagnostic efficiency. However, more multicenter studies are needed to verify the application of AI in different clinical settings and to promote its widespread adoption in maxillofacial surgery.

Machine learning models for discriminating clinically significant from clinically insignificant prostate cancer using bi-parametric magnetic resonance imaging.

Ayyıldız H, İnce O, Korkut E, Dağoğlu Kartal MG, Tunacı A, Ertürk ŞM

PubMed · Jul 8, 2025
This study aims to demonstrate the performance of machine learning algorithms in distinguishing clinically significant prostate cancer (csPCa) from clinically insignificant prostate cancer (ciPCa) on bi-parametric prostate magnetic resonance imaging (MRI) using radiomics features. MRI images of patients diagnosed with cancer, with histopathological confirmation following prostate MRI, were collected retrospectively. Patients with a Gleason score of 3+3 were considered to have ciPCa, and patients with a Gleason score of 3+4 or above were considered to have csPCa. Radiomics features were extracted from T2-weighted (T2W) images, apparent diffusion coefficient (ADC) images, and their corresponding Laplacian of Gaussian (LoG) filtered versions. Additionally, a third feature subset was created by combining the T2W and ADC features, enabling an integrated analysis. Once the features were extracted, redundant features were filtered using Pearson's correlation coefficient and feature selection was performed with wrapper-based sequential algorithms. Models were then built using support vector machine (SVM) and logistic regression (LR) algorithms and validated with five-fold cross-validation. The study included 77 patients, 30 with ciPCa and 47 with csPCa. From each image, four additional images were derived with LoG filtering, and 111 features were obtained from each image. After feature selection, 5 features were retained from the T2W images, 5 from the ADC images, and 15 from the combined dataset. In the SVM model, area under the curve (AUC) values of 0.64 for T2W, 0.86 for ADC, and 0.86 for the combined dataset were obtained in the test set. In the LR model, AUC values of 0.79 for T2W, 0.86 for ADC, and 0.85 for the combined dataset were obtained. Machine learning models developed with radiomics can provide a decision support system that complements pathology results and helps avoid invasive procedures such as re-biopsies or follow-up biopsies that are sometimes necessary today. This study demonstrates that machine learning models using radiomics features derived from bi-parametric MRI can discriminate csPCa from ciPCa. These findings suggest that radiomics-based machine learning models have the potential to reduce the need for re-biopsy in cases of indeterminate pathology, assist in resolving pathology–radiology discordance, and support treatment decision-making in the management of PCa.
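To make the modelling stage concrete, the following scikit-learn sketch mirrors the described workflow: correlation-based feature reduction is assumed to have happened upstream, wrapper-based sequential selection picks a small feature subset, and SVM and LR models are scored with five-fold cross-validated AUC. The data, feature counts, and hyperparameters are placeholders, not the study's.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# X: (n_patients, n_radiomics_features) after removing highly inter-correlated
# features (e.g. |Pearson r| above a chosen threshold); y: 1 = csPCa, 0 = ciPCa.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(77, 111)), rng.integers(0, 2, size=77)      # placeholder data

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, clf in [("SVM", SVC(kernel="rbf", probability=True)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    model = Pipeline([
        ("scale", StandardScaler()),
        ("select", SequentialFeatureSelector(clf, n_features_to_select=5, cv=3)),  # wrapper-based
        ("clf", clf),
    ])
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {aucs.mean():.2f} ± {aucs.std():.2f}")
```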

Integrating Machine Learning into Myositis Research: a Systematic Review.

Juarez-Gomez C, Aguilar-Vazquez A, Gonzalez-Gauna E, Garcia-Ordoñez GP, Martin-Marquez BT, Gomez-Rios CA, Becerra-Jimenez J, Gaspar-Ruiz A, Vazquez-Del Mercado M

PubMed · Jul 8, 2025
Idiopathic inflammatory myopathies (IIM) are a group of autoimmune rheumatic diseases characterized by proximal muscle weakness and extramuscular manifestations. Since 1975, these IIM have been classified into different clinical phenotypes, each associated with a particular pathophysiology and a better or worse prognosis. In IIM, machine learning (ML) is an emerging, complementary research tool that has been assessed in specific clinical contexts, including transcriptome profiling of muscle biopsies and differential diagnosis using magnetic resonance imaging (MRI) and ultrasound (US). Given the cancer-associated risk and the predisposing factors for interstitial lung disease (ILD) in IIM, this systematic review evaluates 23 original studies using supervised learning models, including logistic regression (LR), random forest (RF), support vector machines (SVM), and convolutional neural networks (CNN), with performance assessed primarily through the area under the receiver operating characteristic curve (AUC-ROC).

An autonomous agent for auditing and improving the reliability of clinical AI models

Lukas Kuhn, Florian Buettner

arXiv preprint · Jul 8, 2025
The deployment of AI models in clinical practice faces a critical challenge: models achieving expert-level performance on benchmarks can fail catastrophically when confronted with real-world variations in medical imaging. Minor shifts in scanner hardware, lighting, or demographics can erode accuracy, but reliability auditing to identify such catastrophic failure cases before deployment is currently a bespoke and time-consuming process, and practitioners lack accessible, interpretable tools to expose and repair hidden failure modes. Here we introduce ModelAuditor, a self-reflective agent that converses with users, selects task-specific metrics, and simulates context-dependent, clinically relevant distribution shifts. ModelAuditor then generates interpretable reports explaining how much performance is likely to degrade during deployment, discussing specific likely failure modes and identifying root causes and mitigation strategies. Our comprehensive evaluation across three real-world clinical scenarios - inter-institutional variation in histopathology, demographic shifts in dermatology, and equipment heterogeneity in chest radiography - demonstrates that ModelAuditor can correctly identify context-specific failure modes of state-of-the-art models such as the established SIIM-ISIC melanoma classifier. Its targeted recommendations recover 15-25% of the performance lost under real-world distribution shift, substantially outperforming both baseline models and state-of-the-art augmentation methods. These improvements are achieved through a multi-agent architecture, and a full audit runs on consumer hardware in under 10 minutes at a cost of less than US$0.50.
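A toy version of the kind of shift audit ModelAuditor automates is sketched below: apply simulated acquisition shifts to a labelled validation batch and report the accuracy drop per shift. This is a simplification for illustration only; the conversational, agentic components of ModelAuditor are not represented, and the shift set is an assumption.

```python
import torch
from torchvision.transforms import functional as TF

def accuracy(model, images, labels):
    with torch.no_grad():
        return (model(images).argmax(dim=1) == labels).float().mean().item()

# Simulated, clinically motivated distribution shifts (values are illustrative).
SHIFTS = {
    "baseline":       lambda x: x,
    "lower_contrast": lambda x: TF.adjust_contrast(x, 0.6),    # e.g. detector ageing
    "brighter":       lambda x: TF.adjust_brightness(x, 1.4),  # exposure drift
    "gaussian_noise": lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1),
    "blur":           lambda x: TF.gaussian_blur(x, kernel_size=5),
}

def audit(model, images, labels):
    """Return per-shift accuracy and the drop relative to the unshifted baseline."""
    baseline = accuracy(model, images, labels)
    report = {}
    for name, shift in SHIFTS.items():
        acc = accuracy(model, shift(images), labels)
        report[name] = {"accuracy": acc, "drop": baseline - acc}
    return report
```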

Post-hoc eXplainable AI methods for analyzing medical images of gliomas: A review for clinical applications.

Ayaz H, Sümer-Arpak E, Ozturk-Isik E, Booth TC, Tormey D, McLoughlin I, Unnikrishnan S

PubMed · Jul 8, 2025
Deep learning (DL) has shown promise in glioma imaging tasks using magnetic resonance imaging (MRI) and histopathology images, yet its complexity demands greater transparency in artificial intelligence (AI) systems, particularly when users must understand the model output for a clinical application. In this systematic review, 65 post-hoc eXplainable AI (XAI), or interpretable AI, studies are reviewed that provide an understanding of why a system generated a given output for tasks related to glioma imaging. A framework of post-hoc XAI methods, such as gradient-based XAI (G-XAI) and perturbation-based XAI (P-XAI), is introduced to evaluate deep models and explain their application in gliomas. The surveyed papers are categorized by their specific aims, such as grading, genetic biomarker detection, localization, intra-tumoral heterogeneity assessment, and survival analysis, and by their XAI approach. This review highlights the growing integration of XAI in glioma imaging and demonstrates its role in bridging AI decision-making and medical diagnostics. The co-occurrence analysis emphasizes the role of XAI in enhancing model transparency and trust and in guiding future research toward more reliable clinical applications. Finally, the current challenges associated with DL and XAI approaches and their clinical integration are discussed, with an outlook on future opportunities from clinical users' perspectives and upcoming trends in XAI.
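As a concrete example of the gradient-based (G-XAI) family discussed in the review, the sketch below computes a Grad-CAM heatmap for a torchvision ResNet-18. The model, target layer, and normalisation are illustrative assumptions rather than anything prescribed by the reviewed studies.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4[-1]                              # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(image, class_idx=None):                         # image: (1, 3, H, W)
    logits = model(image)
    if class_idx is None:
        class_idx = int(logits.argmax())                      # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8) # normalised heatmap in [0, 1]
```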

Artificial intelligence in cardiac sarcoidosis: ECG, Echo, CPET and MRI.

Umeojiako WI, Lüscher T, Sharma R

PubMed · Jul 8, 2025
Cardiac sarcoidosis is a form of inflammatory cardiomyopathy with a variable clinical presentation. It is associated with significant complications such as high-degree atrioventricular block, ventricular tachycardia, heart failure, and sudden cardiac death. It is challenging to diagnose clinically, and its increasing detection rate may reflect growing awareness of the disease among clinicians as well as a rising incidence. Prompt diagnosis and risk stratification reduce morbidity and mortality from cardiac sarcoidosis. Noninvasive diagnostic modalities such as ECG, echocardiography, PET/computed tomography (PET/CT), and cardiac MRI (cMRI) play increasingly important roles in its diagnosis, and artificial intelligence-driven applications are increasingly being applied to these modalities to improve detection. A review of the recent literature suggests that artificial intelligence-based algorithms for PET/CT and cMRI can predict cardiac sarcoidosis as accurately as trained experts; however, there are few published studies on artificial intelligence-based algorithms for ECG and echocardiography. The impressive advances in artificial intelligence have the potential to transform patient screening in cardiac sarcoidosis, aid prompt diagnosis and appropriate risk stratification, and change clinical practice.