
Cerebral ischemia detection using deep learning techniques.

Pastor-Vargas R, Antón-Munárriz C, Haut JM, Robles-Gómez A, Paoletti ME, Benítez-Andrades JA

PubMed · Dec 1 2025
Cerebrovascular accident (CVA), commonly known as stroke, stands as a significant contributor to contemporary mortality and morbidity rates, often leading to lasting disabilities. Early identification is crucial in mitigating its impact and reducing mortality. Non-contrast computed tomography (NCCT) remains the primary diagnostic tool in stroke emergencies due to its speed, accessibility, and cost-effectiveness. NCCT enables the exclusion of hemorrhage and directs attention to ischemic causes resulting from arterial flow obstruction. Quantification of NCCT findings employs the Alberta Stroke Program Early Computed Tomography Score (ASPECTS), which evaluates affected brain structures. This study seeks to identify early alterations in NCCT density in patients with stroke symptoms using a binary classifier that distinguishes NCCT scans with and without stroke. To achieve this, 3D extensions of well-known deep learning architectures validated in the ImageNet challenges (VGG3D, ResNet3D, and DenseNet3D) are trained on images covering the entire brain volume. Training results for these networks are presented, with various hyperparameters examined for optimal performance. The DenseNet3D network emerges as the most effective model, attaining a training set accuracy of 98% and a test set accuracy of 95%. The aim is to alert medical professionals to potential stroke cases in their early stages based on NCCT findings displaying altered density patterns.
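
For readers curious how such a volumetric classifier is wired up, the sketch below shows a minimal 3D CNN in PyTorch under the same input/output contract (a one-channel NCCT volume in, stroke/no-stroke logits out). The toy architecture, layer sizes, and volume shape are illustrative assumptions, not the paper's VGG3D/ResNet3D/DenseNet3D configurations.

```python
# Minimal sketch of a 3D volumetric binary classifier in PyTorch.
# The paper evaluates VGG3D/ResNet3D/DenseNet3D; this toy network only
# illustrates the input/output shapes involved, not the actual architectures.
import torch
import torch.nn as nn

class TinyStrokeNet3D(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # 1 channel: NCCT intensity
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                      # global pooling over the brain volume
        )
        self.classifier = nn.Linear(32, num_classes)      # stroke vs. no stroke

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A hypothetical NCCT volume: batch x channel x depth x height x width.
volume = torch.randn(1, 1, 64, 128, 128)
logits = TinyStrokeNet3D()(volume)   # shape (1, 2)
```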

Convolutional autoencoder-based deep learning for intracerebral hemorrhage classification using brain CT images.

Nageswara Rao B, Acharya UR, Tan RS, Dash P, Mohapatra M, Sabut S

PubMed · Dec 1 2025
Intracerebral haemorrhage (ICH) is a common form of stroke that affects millions of people worldwide and is associated with high rates of mortality and morbidity. Accurate diagnosis using brain non-contrast computed tomography (NCCT) is crucial for decision-making on potentially life-saving surgery. Limited access to expert readers and inter-observer variability impose barriers to timely and accurate ICH diagnosis. We propose a hybrid deep learning model for automated ICH diagnosis using NCCT images, comprising a convolutional autoencoder (CAE) that extracts features with reduced data dimensionality and a dense neural network (DNN) for classification. To ensure that the model generalizes to new data, we trained it using tenfold cross-validation and holdout methods. For comparison, principal component analysis (PCA)-based dimensionality reduction and classification was systematically implemented. The study dataset comprises 1645 "ICH" class and 1648 "Normal" class labelled images (the latter from patients with non-hemorrhagic stroke), obtained from 108 patients who underwent CT examination on a 64-slice computed tomography scanner at Kalinga Institute of Medical Sciences between 2020 and 2023. Our CAE-DNN hybrid model attained 99.84% accuracy, 99.69% sensitivity, 100% specificity, 100% precision, and a 99.84% F1-score, outperforming the comparator PCA-DNN model as well as published results in the literature. In addition, using saliency maps, our CAE-DNN model can highlight areas on the images that correlate closely with regions of ICH manually contoured by expert readers. The CAE-DNN model demonstrates proof of concept for accurate ICH detection and localization, which can potentially be implemented to prioritize treatment using NCCT images in clinical settings.
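
A minimal sketch of the CAE-plus-DNN pattern, assuming a PyTorch implementation with illustrative layer sizes (the paper's exact configuration is not reproduced here): the autoencoder is first trained to reconstruct slices, then its latent code feeds a small dense classifier.

```python
# Hedged sketch of the CAE -> DNN idea: an autoencoder compresses each NCCT
# slice, then a dense network classifies the latent code. Layer sizes and
# image dimensions are illustrative guesses, not the paper's configuration.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Dense classifier head over the flattened latent features (ICH vs. normal).
classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 2))

x = torch.randn(4, 1, 128, 128)          # a hypothetical batch of CT slices
recon, latent = ConvAutoencoder()(x)      # train with MSE(recon, x) first
logits = classifier(latent)               # then train the DNN head on the latent code
```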

SurgPointTransformer: transformer-based vertebra shape completion using RGB-D imaging.

Massalimova A, Liebmann F, Jecklin S, Carrillo F, Farshad M, Fürnstahl P

PubMed · Dec 1 2025
State-of-the-art computer- and robot-assisted surgery systems rely on intraoperative imaging technologies such as computed tomography and fluoroscopy to provide detailed 3D visualizations of patient anatomy. However, these methods expose both patients and clinicians to ionizing radiation. This study introduces a radiation-free approach for 3D spine reconstruction using RGB-D data. Inspired by the "mental map" surgeons form during procedures, we present SurgPointTransformer, a shape completion method that reconstructs unexposed spinal regions from sparse surface observations. The method begins with a vertebra segmentation step that extracts vertebra-level point clouds for subsequent shape completion. SurgPointTransformer then uses an attention mechanism to learn the relationship between visible surface features and the complete spine structure. The approach is evaluated on an ex vivo dataset comprising nine samples, with CT-derived data used as ground truth. SurgPointTransformer significantly outperforms state-of-the-art baselines, achieving a Chamfer distance of 5.39 mm, an F-score of 0.85, an Earth mover's distance of 11.00 and a signal-to-noise ratio of 22.90 dB. These results demonstrate the potential of our method to reconstruct 3D vertebral shapes without exposing patients to ionizing radiation. This work contributes to the advancement of computer-aided and robot-assisted surgery by enhancing system perception and intelligence.
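
The Chamfer distance reported above measures the average nearest-neighbour distance between predicted and ground-truth point clouds in both directions. A minimal sketch of one common symmetric variant follows (the paper's exact formulation, e.g. squared vs. unsquared distances, may differ):

```python
# Symmetric Chamfer distance between two point clouds, as used to evaluate
# shape completion quality. One common variant; definitions differ slightly
# across papers (squared vs. unsquared, sum vs. mean).
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """p: (N, 3) predicted points, q: (M, 3) ground-truth points."""
    d = torch.cdist(p, q)                                        # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

pred = torch.randn(1024, 3)   # hypothetical completed vertebra surface
gt = torch.randn(2048, 3)     # hypothetical CT-derived ground-truth cloud
print(chamfer_distance(pred, gt))
```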

Application of Artificial Intelligence in rheumatic disease classification: an example of ankylosing spondylitis severity inspection model.

Chen CW, Tsai HH, Yeh CY, Yang CK, Tsou HK, Leong PY, Wei JC

PubMed · Dec 1 2025
The development of an Artificial Intelligence (AI)-based severity inspection model for ankylosing spondylitis (AS) could support health professionals in rapidly assessing disease severity, enhance proficiency, and reduce demands on human resources. This paper aims to develop an AI-based severity inspection model for AS using patients' X-ray images and the modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS). The model is developed through data preprocessing, building and testing the model, and then validating it against expert assessment. The training data were preprocessed by inviting three experts to check the X-ray images of 222 patients following the gold standard. The model was then developed in two stages: keypoint detection and mSASSS evaluation. The resulting two-stage AI-based severity inspection model for AS automatically detects spine points and evaluates mSASSS scores. Finally, the outputs of the developed model were compared with experts' assessments to analyse the model's accuracy. The study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. The spine point detection in the first stage achieved a mean error distance of 1.57 micrometres from the ground truth, and the classification network in the second stage reached a mean accuracy of 0.81. The model correctly identifies 97.4% of patches belonging to mSASSS score 3, while some patches belonging to score 0 are still misclassified as scores 1 or 2. The automatic severity inspection model for AS developed in this paper is accurate and can support health professionals in rapidly assessing the severity of AS, enhancing assessment proficiency, and reducing demands on human resources.
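
A minimal sketch of the two-stage pattern described here, with placeholder networks and an illustrative keypoint count and patch size (not the paper's actual models): stage 1 regresses vertebral corner keypoints on the radiograph; stage 2 classifies a patch cropped around each keypoint into mSASSS scores 0-3.

```python
# Hedged sketch of a two-stage keypoint-then-patch-scoring pipeline.
# Both networks, the keypoint count K, and the patch size are placeholders.
import torch
import torch.nn as nn

keypoint_net = nn.Sequential(  # stage 1: regress (x, y) for K corner points
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 2 * 12),  # K = 12 keypoints, illustrative
)
patch_scorer = nn.Sequential(  # stage 2: 4-class mSASSS patch classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(), nn.Linear(16 * 4 * 4, 4),
)

xray = torch.randn(1, 1, 512, 256)                 # hypothetical spinal radiograph
pts = keypoint_net(xray).view(-1, 12, 2)           # predicted keypoint coordinates
patch = xray[..., 100:164, 50:114]                 # 64x64 crop around one keypoint
score_logits = patch_scorer(patch)                 # logits over mSASSS scores 0-3
```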

The performance of artificial intelligence in image-based prediction of hematoma enlargement: a systematic review and meta-analysis.

Fan W, Wu Z, Zhao W, Jia L, Li S, Wei W, Chen X

PubMed · Dec 1 2025
Accurately predicting hematoma enlargement (HE) is crucial for improving the prognosis of patients with cerebral haemorrhage. Artificial intelligence (AI) is a potentially reliable assistant for medical image recognition. This study systematically reviews medical imaging articles on the predictive performance of AI in HE. We retrieved relevant studies published before October 2024 from the Embase, Institute of Electrical and Electronics Engineers (IEEE), PubMed, Web of Science, and Cochrane Library databases. Eligible studies were diagnostic tests of AI models trained on CT images to predict hematoma enlargement that reported 2 × 2 contingency tables or provided sensitivity (SE) and specificity (SP) for calculation. Two reviewers independently screened the retrieved citations and extracted data. The methodological quality of the studies was assessed using QUADAS-AI, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed to ensure standardised reporting. Subgroup analyses were performed based on sample size, risk of bias, year of publication, ratio of training set to test set, and number of centres involved. Thirty-six articles were included in the qualitative synthesis of this systematic review, of which 23 provided sufficient information for quantitative analysis; of these, 7 used deep learning (DL) and 16 used machine learning (ML). The pooled SE and SP of ML were 78% (95% CI: 69-85%) and 85% (78-90%), respectively, with an AUC of 0.89 (0.86-0.91). The pooled SE and SP of DL were 87% (95% CI: 80-92%) and 75% (67-81%), respectively, with an AUC of 0.88 (0.85-0.91). Subgroup analysis found that studies with a 7:3 training-to-test-set ratio had a sensitivity of 0.77 (0.62-0.91), p = 0.03. For specificity, studies with sample sizes above 200 showed higher specificity, at 0.83 (0.75-0.92), p = 0.02; among the risk-of-bias subgroups, the at-risk group showed higher specificity, at 0.83 (0.76-0.89), p = 0.02. Articles published before 2021 showed higher specificity, at 0.84 (0.77-0.90), and data from single research centres showed higher specificity, at 0.85 (0.80-0.91), p < 0.001. Imaging-based artificial intelligence algorithms have shown good performance in predicting HE.
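
For reference, the per-study sensitivity and specificity pooled in this meta-analysis come straight from each study's 2 × 2 contingency table; the sketch below shows that per-study step with hypothetical counts (the bivariate random-effects pooling itself is not reproduced here):

```python
# Per-study sensitivity/specificity from a 2 x 2 contingency table for
# hematoma-enlargement prediction. Counts are hypothetical.
def se_sp(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # proportion of true HE cases detected
    specificity = tn / (tn + fp)   # proportion of non-HE cases correctly ruled out
    return sensitivity, specificity

print(se_sp(tp=78, fp=15, fn=22, tn=85))   # e.g. (0.78, 0.85)
```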

Deep learning for differential diagnosis of parotid tumors based on 2.5D magnetic resonance imaging.

Mai W, Fan X, Zhang L, Li J, Chen L, Hua X, Zhang D, Li H, Cai M, Shi C, Liu X

PubMed · Dec 1 2025
Accurate preoperative diagnosis of parotid gland tumors (PGTs) is crucial for surgical planning, since malignant tumors require more extensive excision. Though fine-needle aspiration biopsy is the diagnostic gold standard, its sensitivity in detecting malignancies is limited. While deep learning (DL) models based on magnetic resonance imaging (MRI) are common in medicine, they have been less studied for parotid gland tumors. This study used a 2.5D imaging approach (incorporating inter-slice information) to train a DL model to differentiate between benign and malignant PGTs. This retrospective study included 122 parotid tumor patients, using MRI and clinical features to build predictive models. For the traditional model, univariate analysis identified statistically significant features, which were then entered into multivariate logistic regression to determine independent predictors; the model was built using four-fold cross-validation. The deep learning model was trained using 2D and 2.5D imaging approaches, with a transformer-based architecture employed for transfer learning. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) and confusion matrix metrics. In the traditional model, boundary and peritumoral invasion were identified as independent predictors for PGTs, and the model built on these features achieved an AUC of 0.79 but demonstrated low sensitivity (0.54). In contrast, the DL model based on 2.5D T2 fat-suppressed images showed superior performance, with an AUC of 0.86 and a sensitivity of 0.78. The 2.5D imaging technique, when integrated with a transformer-based transfer learning model, demonstrates significant efficacy in differentiating between benign and malignant PGTs.
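
One common way to realise a 2.5D input is to stack each slice with its neighbours so a 2D network still sees inter-slice context; the sketch below illustrates that general technique as an assumption, not the paper's exact preprocessing.

```python
# Hedged sketch of 2.5D input construction: each slice is stacked with its
# neighbouring slices along the channel axis before being fed to a 2D network.
import numpy as np

def make_2p5d_stack(volume: np.ndarray, index: int, context: int = 1) -> np.ndarray:
    """volume: (S, H, W) MRI stack -> (2*context + 1, H, W) channel stack."""
    idxs = np.clip(np.arange(index - context, index + context + 1), 0, len(volume) - 1)
    return volume[idxs]

vol = np.random.rand(24, 256, 256)      # hypothetical T2 fat-suppressed series
x = make_2p5d_stack(vol, index=12)      # shape (3, 256, 256): slice plus neighbours
```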

Nonsuicidal self-injury prediction with pain-processing neural circuits using interpretable graph neural network.

Wu S, Xue Y, Hang Y, Xie Y, Zhang P, Liang M, Zhong Y, Wang C

PubMed · Dec 1 2025
Nonsuicidal self-injury (NSSI) involves the intentional destruction of one's own body tissue without suicidal intent. Prior research has shown that individuals with NSSI exhibit abnormal pain perception; however, the pain-processing neural circuits underlying NSSI remain poorly understood. This study leverages graph neural networks to predict NSSI risk and examine the learned connectivity of its neural underpinnings using multimodal data. Resting-state functional MRI and diffusion tensor imaging were collected from 50 patients with NSSI, 79 healthy controls (HC), and 44 patients with mental disorders who did not engage in NSSI, serving as disease controls (DC). We constructed pain-related brain networks for each participant. An interpretable graph attention network (GAT) model, incorporating demographic factors, was developed to predict NSSI risk and highlight NSSI-specific connectivity using learned attention matrices. The proposed GAT model based on imaging data achieved an accuracy of 80%, which increased to 88% when self-reported pain scales were incorporated alongside imaging data in distinguishing patients with NSSI from HC. It highlighted amygdala-parahippocampus and inferior frontal gyrus (IFG)-insula connectivity as pivotal in NSSI-related pain processing. After incorporating imaging data from the DC group, the model's accuracy reached 74%, underscoring consistent neural connectivity patterns. The GAT model demonstrates high predictive accuracy for NSSI, enhanced by the inclusion of self-reported pain scales. It underscores the significance of functional integration among limbic regions, paralimbic regions, and the IFG in NSSI pain processing. Our findings suggest altered pain processing as a key mechanism in NSSI, providing insights for potential neural modulation intervention strategies.
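
A minimal sketch of an interpretable GAT over a connectome graph, written against PyTorch Geometric (an assumed library choice; the paper's architecture, node features, and pooling are not reproduced): nodes are brain regions, edges come from functional/structural connectivity, and the per-edge attention weights returned by the layer play the role of the highlighted connections.

```python
# Hedged sketch of an interpretable graph attention network for graph-level
# classification, using PyTorch Geometric. All sizes are illustrative.
import torch
from torch_geometric.nn import GATConv, global_mean_pool

class BrainGAT(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 16):
        super().__init__()
        self.gat = GATConv(in_dim, hidden, heads=2, concat=True)
        self.head = torch.nn.Linear(hidden * 2, 2)    # NSSI vs. control

    def forward(self, x, edge_index, batch):
        # return_attention_weights exposes the learned per-edge attention,
        # which serves as the model's "highlighted" connectivity.
        h, (edges, alpha) = self.gat(x, edge_index, return_attention_weights=True)
        out = self.head(global_mean_pool(torch.relu(h), batch))
        return out, alpha

x = torch.randn(90, 8)                        # 90 brain regions, 8 node features
edge_index = torch.randint(0, 90, (2, 400))   # hypothetical connectome edges
batch = torch.zeros(90, dtype=torch.long)     # all nodes belong to one graph
logits, attn = BrainGAT(8)(x, edge_index, batch)
```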

Breast tumor diagnosis via multimodal deep learning using ultrasound B-mode and Nakagami images.

Muhtadi S, Gallippi CM

PubMed · Nov 1 2025
We propose and evaluate multimodal deep learning (DL) approaches that combine ultrasound (US) B-mode and Nakagami parametric images for breast tumor classification. It is hypothesized that integrating tissue brightness information from B-mode images with scattering properties from Nakagami images will enhance diagnostic performance compared with single-input approaches. An EfficientNetV2B0 network was used to develop multimodal DL frameworks that took as input (i) numerical two-dimensional (2D) maps or (ii) rendered red-green-blue (RGB) representations of both B-mode and Nakagami data. The diagnostic performance of these frameworks was compared with single-input counterparts using 831 US acquisitions from 264 patients. In addition, gradient-weighted class activation mapping was applied to evaluate diagnostically relevant information utilized by the different networks. The multimodal architectures demonstrated significantly higher area under the receiver operating characteristic curve (AUC) values (p < 0.05) than their monomodal counterparts, achieving an average improvement of 10.75%. In addition, the multimodal networks incorporated, on average, 15.70% more diagnostically relevant tissue information. Among the multimodal models, those using RGB representations as input outperformed those that utilized 2D numerical data maps (p < 0.05). The top-performing multimodal architecture achieved a mean AUC of 0.896 [95% confidence interval (CI): 0.813 to 0.959] when performance was assessed at the image level and 0.848 (95% CI: 0.755 to 0.903) when assessed at the lesion level. Incorporating B-mode and Nakagami information together in a multimodal DL framework improved classification outcomes and increased the amount of diagnostically relevant information accessed by networks, highlighting the potential for automating and standardizing US breast cancer diagnostics to enhance clinical outcomes.
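
The multimodal framework boils down to a two-branch network whose per-modality features are fused before the classifier. The sketch below uses tiny stand-in backbones in place of EfficientNetV2B0 to stay self-contained; the fusion point and classifier head are illustrative assumptions.

```python
# Hedged sketch of a two-branch multimodal classifier: one backbone per image
# type (B-mode, Nakagami), features concatenated before the final classifier.
# Small stand-in CNNs replace the paper's EfficientNetV2B0 backbones.
import torch
import torch.nn as nn

def small_backbone() -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultimodalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.bmode_branch = small_backbone()
        self.nakagami_branch = small_backbone()
        self.classifier = nn.Linear(16 + 16, 2)   # benign vs. malignant

    def forward(self, bmode, nakagami):
        feats = torch.cat(
            [self.bmode_branch(bmode), self.nakagami_branch(nakagami)], dim=1
        )
        return self.classifier(feats)

bmode = torch.randn(2, 3, 224, 224)      # RGB-rendered B-mode images
nakagami = torch.randn(2, 3, 224, 224)   # RGB-rendered Nakagami parametric maps
logits = MultimodalNet()(bmode, nakagami)
```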

Robust evaluation of tissue-specific radiomic features for classifying breast tissue density grades.

Dong V, Mankowski W, Silva Filho TM, McCarthy AM, Kontos D, Maidment ADA, Barufaldi B

PubMed · Nov 1 2025
Breast cancer risk depends on an accurate assessment of breast density due to lesion masking. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability. We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings (n_I = 651, n_II = 100). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes. LR classifiers yielded cross-validated areas under the receiver operating characteristic curve (AUCs) per density grade of [A: 0.909 ± 0.032, B: 0.858 ± 0.027, C: 0.927 ± 0.013, D: 0.890 ± 0.089] and an AUC of 0.936 ± 0.016 for classifying patients as nondense or dense. In external validation, we observed per density grade AUCs of [A: 0.880, B: 0.779, C: 0.878, D: 0.673] and a nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades. Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades. Our results potentiate future research into correlating selected radiomic features with clinical descriptors of breast tissue density.
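
A minimal sketch of the recursive-feature-elimination half of RFE-SHAP using scikit-learn, with synthetic data standing in for the radiomic features; the SHAP scoring stage is indicated only in a comment, and all sizes are illustrative.

```python
# Hedged sketch of the RFE step in an RFE-SHAP pipeline: recursively eliminate
# radiomic features using a logistic-regression estimator. Synthetic data.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(651, 120))      # 651 screenings x 120 radiomic features (illustrative)
y = rng.integers(0, 2, size=651)     # dense vs. nondense labels (illustrative)

selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=20)
selector.fit(X, y)
kept = np.flatnonzero(selector.support_)   # indices of retained radiomic features
# The SHAP stage would then score these survivors (e.g. with shap.LinearExplainer)
# and keep only the most influential ones before the final LR classifier.
print(kept)
```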

Comparing percent breast density assessments of an AI-based method with expert reader estimates: inter-observer variability.

Romanov S, Howell S, Harkness E, Gareth Evans D, Astley S, Fergie M

PubMed · Nov 1 2025
Breast density estimation is an important part of breast cancer risk assessment, as mammographic density is associated with risk. However, density assessed by multiple experts can be subject to high inter-observer variability, so automated methods are increasingly used. We investigate inter-reader variability and risk prediction for expert assessors and a deep learning approach. Screening data from a case-control-matched cohort of 1328 women were used to compare two expert readers and to compare a single reader with a deep learning model, Manchester artificial intelligence - visual analog scale (MAI-VAS). Bland-Altman analysis was used to assess variability, and the matched concordance index to assess risk. Although the mean differences in the two experiments were alike, the limits of agreement between MAI-VAS and a single reader were substantially narrower, at +21 (95% CI: 19.65, 21.69) and -22 (95% CI: -22.71, -20.68), than those between two expert readers, at +31 (95% CI: 29.23, 32.08) and -29 (95% CI: -29.94, -27.09). In addition, breast cancer risk discrimination for the deep learning method and for density readings from a single expert was similar, with matched concordance indices of 0.628 (95% CI: 0.598, 0.658) and 0.624 (95% CI: 0.595, 0.654), respectively. The automatic method had inter-view agreement similar to that of the experts and maintained consistency across density quartiles. The artificial intelligence breast density assessment tool MAI-VAS shows better inter-observer agreement with a randomly selected expert reader than that between two expert readers. Deep learning-based density methods provide consistent density scores without compromising breast cancer risk discrimination.
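
The Bland-Altman limits of agreement quoted above are the mean paired difference plus or minus 1.96 standard deviations of the differences; a minimal sketch with hypothetical VAS density scores:

```python
# Bland-Altman limits of agreement between two sets of paired readings:
# bias = mean difference; limits = bias +/- 1.96 * SD of the differences.
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray) -> tuple[float, float, float]:
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias + 1.96 * sd, bias - 1.96 * sd   # bias, upper LoA, lower LoA

reader = np.array([34.0, 12.5, 60.0, 45.5, 22.0])   # hypothetical VAS density scores
model = np.array([30.0, 15.0, 55.0, 48.0, 25.0])
print(bland_altman(reader, model))
```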