Explainable deep learning for age and gender estimation in dental CBCT scans using attention mechanisms and multi-task learning.

Pishghadam N, Esmaeilyfard R, Paknahad M

PubMed · May 24, 2025
Accurate and interpretable age estimation and gender classification are essential in forensic and clinical diagnostics, particularly when using high-dimensional medical imaging data such as Cone Beam Computed Tomography (CBCT). Traditional CBCT-based approaches often suffer from high computational costs and limited interpretability, reducing their applicability in forensic investigations. This study aims to develop a multi-task deep learning framework that enhances both accuracy and explainability in CBCT-based age estimation and gender classification using attention mechanisms. We propose a multi-task learning (MTL) model that simultaneously estimates age and classifies gender using panoramic slices extracted from CBCT scans. To improve interpretability, we integrate the Convolutional Block Attention Module (CBAM) and Grad-CAM visualization, highlighting relevant craniofacial regions. The dataset includes 2,426 CBCT images from individuals aged 7 to 23 years, and performance is assessed using Mean Absolute Error (MAE) for age estimation and accuracy for gender classification. The proposed model achieves an MAE of 1.08 years for age estimation and 95.3% accuracy in gender classification, significantly outperforming conventional CBCT-based methods. CBAM enhances the model's ability to focus on clinically relevant anatomical features, while Grad-CAM provides visual explanations, improving interpretability. Additionally, using panoramic slices instead of full 3D CBCT volumes reduces computational costs without sacrificing accuracy. Our framework improves both accuracy and interpretability in forensic age estimation and gender classification from CBCT images. By incorporating explainable AI techniques, this model provides a computationally efficient and clinically interpretable tool for forensic and medical applications.
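
The attention-plus-two-heads design described above is straightforward to prototype. Below is a minimal PyTorch sketch (not the authors' code) of a CBAM-style block feeding a shared backbone with separate age-regression and gender-classification heads; all layer sizes, the input resolution, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per channel.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        # Spatial attention: 7x7 conv over pooled channel maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            CBAM(32),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            CBAM(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.age_head = nn.Linear(64, 1)   # regression head (years)
        self.sex_head = nn.Linear(64, 2)   # classification head (logits)

    def forward(self, x):
        f = self.backbone(x)
        return self.age_head(f), self.sex_head(f)

# Joint loss: L1 for age (matching the MAE metric) plus cross-entropy for
# gender, combined with a hypothetical weighting of 0.5.
model = MultiTaskNet()
age_pred, sex_logits = model(torch.randn(4, 1, 128, 256))  # toy panoramic slices
loss = nn.L1Loss()(age_pred.squeeze(1), torch.rand(4) * 16 + 7) \
     + 0.5 * nn.CrossEntropyLoss()(sex_logits, torch.randint(0, 2, (4,)))
loss.backward()
```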

Stroke prediction in elderly patients with atrial fibrillation using machine learning combining clinical and left atrial appendage imaging phenotypic features.

Huang H, Xiong Y, Yao Y, Zeng J

PubMed · May 24, 2025
Atrial fibrillation (AF) is one of the primary etiologies of ischemic stroke, and it is of paramount importance to delineate risk phenotypes among elderly AF patients and to investigate more efficacious models for predicting stroke risk. This single-center prospective cohort study collected clinical data and cardiac computed tomography angiography (CTA) images from elderly AF patients. The clinical phenotypes and left atrial appendage (LAA) radiomic phenotypes of elderly AF patients were identified through K-means clustering. The independent correlations between these phenotypes and stroke risk were subsequently analyzed. Five machine learning algorithms (Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Random Forest, and Extreme Gradient Boosting) were used to develop predictive models for stroke risk in this patient cohort. The models were assessed using the Area Under the Receiver Operating Characteristic Curve (AUROC), Hosmer-Lemeshow tests, and Decision Curve Analysis. A total of 419 elderly AF patients (≥ 65 years old) were included. K-means clustering identified three clinical phenotypes: Group A (cardiac enlargement/dysfunction), Group B (normal phenotype), and Group C (metabolic/coagulation abnormalities). Stroke incidence was highest in Group A (19.3%) and Group C (14.5%) versus Group B (3.3%). Similarly, LAA radiomic phenotypes revealed elevated stroke risk in patients with enlarged LAA structure (Group B: 20.0%) and complex LAA morphology (Group C: 14.0%) compared to normal LAA (Group A: 2.9%). Among the five machine learning models, the SVM model achieved superior prediction performance (AUROC: 0.858 [95% CI: 0.830-0.887]). The SVM-based stroke-risk prediction model for elderly AF patients thus has strong predictive efficacy.
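
The two-stage pipeline (unsupervised phenotyping, then supervised risk prediction) maps directly onto scikit-learn. The sketch below uses synthetic stand-ins for the clinical and LAA radiomic features; the three-cluster K-means step and the RBF-kernel SVM with AUROC evaluation mirror the abstract's setup, while the data, feature count, and split are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(419, 12))    # clinical + LAA radiomic features (toy)
y = rng.integers(0, 2, size=419)  # stroke outcome (toy labels)

# Phenotype discovery: three clusters, as in the paper's clinical grouping.
phenotype = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, phenotype])  # cluster label as an extra feature

X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```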

Deep learning-based identification of vertebral fracture and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment to predict incident fracture.

Hong N, Cho SW, Lee YH, Kim CO, Kim HC, Rhee Y, Leslie WD, Cummings SR, Kim KM

PubMed · May 24, 2025
Deep learning (DL) identification of vertebral fractures and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment (VFA) images may improve fracture risk assessment in older adults. In 26,299 lateral spine radiographs from 9,276 individuals attending a tertiary-level institution (60% train set; 20% validation set; 20% test set; VERTE-X cohort), DL models were developed to detect prevalent vertebral fracture (pVF) and osteoporosis. The pre-trained DL models from lateral spine radiographs were then fine-tuned in 30% of a DXA VFA dataset (KURE cohort), with performance evaluated in the remaining 70% test set. The area under the receiver operating characteristics curve (AUROC) for DL models to detect pVF and osteoporosis was 0.926 (95% CI 0.908-0.955) and 0.848 (95% CI 0.827-0.869) from VERTE-X spine radiographs, respectively, and 0.924 (95% CI 0.905-0.942) and 0.867 (95% CI 0.853-0.881) from KURE DXA VFA images, respectively. A total of 13.3% and 13.6% of individuals sustained an incident fracture during a median follow-up of 5.4 years and 6.4 years in the VERTE-X test set (n = 1852) and KURE test set (n = 2456), respectively. Incident fracture risk was significantly greater among individuals with DL-detected vertebral fracture (hazard ratios [HRs] 3.23 [95% CI 2.51-5.17] and 2.11 [95% CI 1.62-2.74] for the VERTE-X and KURE test sets) or DL-detected osteoporosis (HR 2.62 [95% CI 1.90-3.63] and 2.14 [95% CI 1.72-2.66]), which remained significant after adjustment for clinical risk factors and femoral neck bone mineral density. DL scores improved incident fracture discrimination and net benefit when combined with clinical risk factors. In summary, DL-detected pVF and osteoporosis in lateral spine radiographs and DXA VFA images enhanced fracture risk prediction in older adults.
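
The survival analysis behind the reported hazard ratios is a Cox proportional-hazards regression. A minimal sketch with the lifelines library and toy data is shown below; the column names (dl_fracture, femoral_neck_bmd, and so on) are illustrative assumptions, not the study's variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "dl_fracture": rng.integers(0, 2, n),         # DL-detected prevalent VF flag
    "femoral_neck_bmd": rng.normal(0.8, 0.1, n),  # adjustment covariate
    "followup_years": rng.uniform(0.5, 6.4, n),   # time to event or censoring
    "incident_fracture": rng.integers(0, 2, n),   # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="incident_fracture")
print(cph.hazard_ratios_)  # exp(coef): HR per covariate, as reported in the abstract
```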

Relational Bi-level aggregation graph convolutional network with dynamic graph learning and puzzle optimization for Alzheimer's classification.

Raajasree K, Jaichandran R

PubMed · May 24, 2025
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline, making early diagnosis essential for effective treatment. This study presents the Relational Bi-level Aggregation Graph Convolutional Network with Dynamic Graph Learning and Puzzle Optimization for Alzheimer's Classification (RBAGCN-DGL-PO-AC), using denoised T1-weighted Magnetic Resonance Images (MRIs) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) repository. To address the impact of noise in medical imaging, the method employs advanced denoising techniques, including the Modified Spline-Kernelled Chirplet Transform (MSKCT), Jump Gain Integral Recurrent Neural Network (JGIRNN), and Newton Time Extracting Wavelet Transform (NTEWT), to enhance image quality. Key brain regions crucial for classification, such as the hippocampus, lateral ventricles, and posterior cingulate cortex, are segmented using Attention Guided Generalized Intuitionistic Fuzzy C-Means Clustering (AG-GIFCMC). Feature extraction and classification on the segmented outputs are performed with RBAGCN-DGL and puzzle optimization, which categorize input images into Healthy Controls (HC), Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), and Alzheimer's Disease (AD). To assess the effectiveness of the proposed method, we systematically examined structural modifications to the RBAGCN-DGL-PO-AC model through extensive ablation studies. Experimental findings show that RBAGCN-DGL-PO-AC achieves state-of-the-art performance, with 99.25% accuracy, outperforming existing methods including MSFFGCN_ADC, CNN_CAD_DBMRI, and FCNN_ADC, while reducing training time by 28.5% and increasing inference speed by 32.7%. The RBAGCN-DGL-PO-AC method thus enhances AD classification by integrating denoising, segmentation, and dynamic graph-based feature extraction, achieving superior accuracy and offering a valuable tool for clinical applications, ultimately improving patient outcomes and disease management.
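
The aggregation primitive underlying GCN-based classifiers like this one can be stated compactly. The sketch below implements a plain Kipf-Welling graph-convolution layer over a toy graph of segmented brain regions; it is a generic stand-in, not the authors' relational bi-level RBAGCN.

```python
import torch

def gcn_layer(A, H, W):
    """One graph convolution: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W).

    A: (n,n) adjacency, H: (n,d) node features, W: (d,k) weights.
    """
    A_hat = A + torch.eye(A.size(0))       # add self-loops
    d_inv_sqrt = A_hat.sum(1).rsqrt()      # D^{-1/2} as a vector
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return torch.relu(A_norm @ H @ W)

# Toy graph: 5 nodes (e.g., segmented brain regions), 8-dim node features.
A = (torch.rand(5, 5) > 0.5).float()
A = ((A + A.T) > 0).float()                # symmetrize
H = torch.randn(5, 8)
W = torch.randn(8, 4)
print(gcn_layer(A, H, W).shape)            # -> torch.Size([5, 4])
```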

MATI: A GPU-accelerated toolbox for microstructural diffusion MRI simulation and data fitting with a graphical user interface.

Xu J, Devan SP, Shi D, Pamulaparthi A, Yan N, Zu Z, Smith DS, Harkins KD, Gore JC, Jiang X

PubMed · May 24, 2025
To introduce MATI (Microstructural Analysis Toolbox for Imaging), a versatile MATLAB-based toolbox that combines both simulation and data fitting capabilities for microstructural dMRI research. MATI provides a user-friendly graphical user interface that enables researchers, including those without much programming experience, to perform advanced simulations and data analyses for microstructural MRI research. For simulation, MATI supports arbitrary microstructural tissues and pulse sequences. For data fitting, MATI supports a range of fitting methods, including traditional non-linear least squares, Bayesian approaches, machine learning, and dictionary matching methods, allowing users to tailor analyses based on specific research needs. Optimized with vectorized matrix operations and high-performance numerical libraries, MATI achieves high computational efficiency, enabling rapid simulations and data fitting on CPU and GPU hardware. While designed for microstructural dMRI, MATI's generalized framework can be extended to other imaging methods, making it a flexible and scalable tool for quantitative MRI research. MATI offers a significant step toward translating advanced microstructural MRI techniques into clinical applications.
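
Of the fitting methods listed, dictionary matching is easy to illustrate in isolation. The sketch below (plain NumPy, not MATI's actual MATLAB API) simulates a toy mono-exponential diffusion dictionary over an ADC grid and matches noisy signals to the nearest atom by normalized dot product; the protocol and signal model are assumptions.

```python
import numpy as np

b_values = np.array([0.0, 0.5, 1.0, 2.0, 3.0])  # ms/um^2 (assumed protocol)

def signal_model(adc):
    """Toy mono-exponential diffusion decay; MATI supports far richer models."""
    return np.exp(-b_values * adc)

adc_grid = np.linspace(0.1, 3.0, 300)                        # dictionary grid
dictionary = np.stack([signal_model(a) for a in adc_grid])   # (300, 5)
dict_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)

# "Measured" voxel signals with noise; match each to the best dictionary atom.
true_adc = np.array([0.7, 1.4, 2.2])
signals = np.stack([signal_model(a) for a in true_adc])
signals += np.random.default_rng(0).normal(0, 0.01, signals.shape)
sig_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)

best = np.argmax(sig_norm @ dict_norm.T, axis=1)
print("estimated ADC:", adc_grid[best])                      # close to true_adc
```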

Cross-Fusion Adaptive Feature Enhancement Transformer: Efficient high-frequency integration and sparse attention enhancement for brain MRI super-resolution.

Yang Z, Xiao H, Wang X, Zhou F, Deng T, Liu S

PubMed · May 24, 2025
High-resolution magnetic resonance imaging (MRI) is essential for diagnosing and treating brain diseases. Transformer-based approaches demonstrate strong potential in MRI super-resolution by capturing long-range dependencies effectively. However, existing Transformer-based super-resolution methods face several challenges: (1) they primarily focus on low-frequency information, neglecting the utilization of high-frequency information; (2) they lack effective mechanisms to integrate both low-frequency and high-frequency information; (3) they struggle to effectively eliminate redundant information during the reconstruction process. To address these issues, we propose the Cross-fusion Adaptive Feature Enhancement Transformer (CAFET). Our model maximizes the potential of both CNNs and Transformers. It consists of four key blocks: a high-frequency enhancement block for extracting high-frequency information; a hybrid attention block for capturing global information and local fitting, which includes channel attention and shifted rectangular window attention; a large-window fusion attention block for integrating local high-frequency features and global low-frequency features; and an adaptive sparse overlapping attention block for dynamically retaining key information and enhancing the aggregation of cross-window features. Extensive experiments validate the effectiveness of the proposed method. On the BraTS and IXI datasets, with an upsampling factor of ×2, the proposed method achieves a maximum PSNR improvement of 2.4 dB and 1.3 dB compared to state-of-the-art methods, along with an SSIM improvement of up to 0.16% and 1.42%. Similarly, at an upsampling factor of ×4, the proposed method achieves a maximum PSNR improvement of 1.04 dB and 0.3 dB over the current leading methods, along with an SSIM improvement of up to 0.25% and 1.66%. Our method is capable of reconstructing high-quality super-resolution brain MRI images, demonstrating significant clinical potential.
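
The high-frequency extraction idea can be illustrated with a fixed high-pass filter. The PyTorch sketch below pulls a Laplacian high-frequency residual out of an MR slice; the paper's learned enhancement block and fusion attention are considerably more elaborate, so this is only a stand-in for the concept.

```python
import torch
import torch.nn.functional as F

# Fixed Laplacian kernel acting as a high-pass filter.
laplacian = torch.tensor([[0., -1., 0.],
                          [-1., 4., -1.],
                          [0., -1., 0.]]).view(1, 1, 3, 3)

def high_freq(x):
    """x: (b,1,h,w) MR slice -> high-frequency residual of the same shape."""
    return F.conv2d(x, laplacian, padding=1)

x = torch.randn(1, 1, 64, 64)  # toy low-resolution brain slice
hf = high_freq(x)
# A super-resolution branch could then fuse hf with low-frequency features,
# e.g. via an attention block, before reconstruction.
print(hf.shape)
```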

Symbolic and hybrid AI for brain tissue segmentation using spatial model checking.

Belmonte G, Ciancia V, Massink M

PubMed · May 24, 2025
Segmentation of 3D medical images, and brain segmentation in particular, is an important topic in neuroimaging and in radiotherapy. Overcoming the current, time-consuming practice of manual delineation of brain tumours and providing an accurate, explainable, and replicable method of segmentation of the tumour area and related tissues is therefore an open research challenge. In this paper, we first propose a novel symbolic approach to brain segmentation and delineation of brain lesions based on spatial model checking. This method has its foundations in the theory of closure spaces, a generalisation of topological spaces, and spatial logics. At its core is a high-level declarative logic language for image analysis, ImgQL, and an efficient spatial model checker, VoxLogicA, which exploits state-of-the-art image analysis libraries in its model checking algorithm. We then illustrate how this technique can be combined with machine learning techniques, leading to a hybrid AI approach that provides accurate and explainable segmentation results. We show the results of the application of the symbolic approach on several public datasets with 3D magnetic resonance (MR) images. Three datasets are provided by the 2017, 2019 and 2020 international MICCAI BraTS Challenges, with 210, 259 and 293 MR images, respectively, and the fourth is the BrainWeb dataset with 20 (synthetic) 3D patient images of the normal brain. We then apply the hybrid AI method to the BraTS 2020 training set. Our segmentation results are in line with the state-of-the-art among recent approaches, both in terms of accuracy and computational efficiency, with the added advantage of being explainable.
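
The flavour of a spatial-logic primitive such as nearness over voxel regions can be approximated with morphological operations. The Python sketch below is a loose stand-in for one such operator; it is not VoxLogicA's actual semantics or ImgQL syntax.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def near(phi, radius=1):
    """Voxels within `radius` steps of the region where formula `phi` holds."""
    return binary_dilation(phi, iterations=radius)

# Toy 3D volume: a small "lesion" mask; near(lesion, 2) marks its surroundings.
lesion = np.zeros((10, 10, 10), dtype=bool)
lesion[4:6, 4:6, 4:6] = True
margin = near(lesion, 2) & ~lesion
print(margin.sum(), "voxels in the 2-voxel margin around the lesion")
```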

Generalizable AI approach for detecting projection type and left-right reversal in chest X-rays.

Ohta Y, Katayama Y, Ichida T, Utsunomiya A, Ishida T

PubMed · May 23, 2025
The verification of chest X-ray images involves several checkpoints, including orientation and reversal. To address the challenges of manual verification, this study developed an artificial intelligence (AI)-based system using a deep convolutional neural network (DCNN) to automatically verify the consistency between the imaging direction and examination orders. The system classified chest X-ray images into four categories: anteroposterior (AP), posteroanterior (PA), flipped AP, and flipped PA. To evaluate the impact of internal and external datasets on classification accuracy, the DCNN was trained on multiple publicly available chest X-ray datasets and tested on both internal and external data. The results demonstrated that the DCNN accurately classified imaging directions and detected image reversal. However, classification accuracy was strongly influenced by the training dataset. When trained exclusively on NIH data, the network achieved an accuracy of 98.9% on the same dataset; however, this dropped to 87.8% when evaluated on PADChest data. When trained on a mixed dataset, the accuracy improved to 96.4%, but decreased to 76.0% when tested on the external COVID-CXNet dataset. Further, using Grad-CAM, we visualized the decision-making process of the network, highlighting areas of influence such as the cardiac silhouette and arm positioning, depending on the imaging direction. This study thus demonstrated the potential of AI to automate the verification of imaging direction and positioning in chest X-rays. However, the network must be fine-tuned to local data characteristics to achieve optimal performance.
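
One half of the task, left-right reversal, can be learned from synthesized examples, since mirroring an image yields a perfectly labelled flipped copy. The PyTorch sketch below (toy data, assumed label scheme) builds such flip pairs and trains a small CNN to detect them; AP versus PA, by contrast, cannot be synthesized by flipping and requires real projection labels.

```python
import torch
import torch.nn as nn

def flip_pairs(batch):
    """batch: (b,1,h,w) -> (2b,1,h,w) images with flip labels (0=original)."""
    flipped = torch.flip(batch, dims=[3])  # mirror along the width axis
    x = torch.cat([batch, flipped])
    y = torch.cat([torch.zeros(len(batch)), torch.ones(len(batch))]).long()
    return x, y

# Small binary classifier for flip detection (illustrative architecture).
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

x, y = flip_pairs(torch.randn(8, 1, 128, 128))  # toy radiographs
loss = nn.CrossEntropyLoss()(net(x), y)
loss.backward()
print(float(loss))
```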

Construction of a Prediction Model for Adverse Perinatal Outcomes in Foetal Growth Restriction Based on a Machine Learning Algorithm: A Retrospective Study.

Meng X, Wang L, Wu M, Zhang N, Li X, Wu Q

PubMed · May 23, 2025
To create and validate a machine learning (ML)-based model for predicting adverse perinatal outcomes (APO) in foetal growth restriction (FGR) at diagnosis. A retrospective study. Multi-centre in China. Pregnancies affected by FGR. We enrolled singleton foetuses with a perinatal diagnosis of FGR who were admitted between January 2021 and November 2023. A total of 361 pregnancies from Beijing Obstetrics and Gynecology Hospital were used as the training set and the internal test set, while data from 50 pregnancies from Haidian Maternal and Child Health Hospital were used as the external test set. Feature screening was performed using random forest (RF), the Least Absolute Shrinkage and Selection Operator (LASSO) and logistic regression (LR). Subsequently, six ML methods, including Stacking, were used to construct models predicting the APO of FGR. Model performance was evaluated through indicators such as the area under the receiver operating characteristic curve (AUROC). Shapley Additive Explanation analysis was used to rank each model feature and explain the final model. Mean ± SD gestational age at diagnosis was 32.3 ± 4.8 weeks in the group without APO and 27.3 ± 3.7 weeks in the group with APO. Women in the group with APO had a higher rate of pregnancy-related hypertension (74.8% vs. 18.8%, p < 0.001). Among 17 candidate predictors (including maternal characteristics, maternal comorbidities, obstetric characteristics and ultrasound parameters), the combination of RF, LASSO and LR identified maternal body mass index, hypertension, gestational age at diagnosis of FGR, estimated foetal weight (EFW) z score, EFW growth velocity and abnormal umbilical artery Doppler (defined as a pulsatility index above the 95th percentile or absent/reversed diastolic flow) as significant predictors. The Stacking model demonstrated good performance in both the internal test set (AUROC: 0.861, 95% confidence interval [CI], 0.838-0.896) and the external test set (AUROC: 0.906, 95% CI, 0.875-0.947). The calibration curve showed high agreement between predicted and observed risks; the Hosmer-Lemeshow test for the internal and external test sets gave p = 0.387 and p = 0.825, respectively. The ML algorithm for APO, which integrates maternal clinical factors and ultrasound parameters, demonstrates good predictive value for APO in FGR at diagnosis, suggesting that ML techniques may be a valid approach for early detection of high-risk APO in FGR pregnancies.
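
The Stacking step maps directly onto scikit-learn's StackingClassifier. The sketch below combines three of the base learners named in the abstract under a logistic-regression meta-model on synthetic data; the feature matrix and labels are random stand-ins for the maternal and ultrasound predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(361, 6))     # e.g., BMI, hypertension, GA, EFW z-score (toy)
y = rng.integers(0, 2, size=361)  # adverse perinatal outcome (toy labels)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("nb", GaussianNB()),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(), cv=5)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
stack.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```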

Self-supervised feature learning for cardiac Cine MR image reconstruction.

Xu S, Früh M, Hammernik K, Lingg A, Kübler J, Krumm P, Rueckert D, Gatidis S, Küstner T

PubMed · May 23, 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitations of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully sampled images for supervised learning, which is challenging in practice given the long acquisition times under respiratory or organ motion. Moreover, nearly all fully sampled datasets are obtained from conventional reconstruction of mildly accelerated datasets, potentially biasing the achievable performance. The numerous undersampled datasets with different accelerations in clinical practice therefore remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac Cine dataset, including 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and even exhibits performance comparable to or better than supervised learning at up to 16× retrospective undersampling. The feature learning strategy effectively extracts global representations, which prove beneficial in removing artifacts and increasing generalization ability during reconstruction.
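
The core pretraining idea, learning features that are insensitive to the sampling pattern, can be sketched by encoding two differently undersampled views of the same image and pulling their features together. The PyTorch example below is a generic contrastive-style stand-in (crude k-space line masking, cosine-similarity loss), not the SSFL-Recon architecture itself; a practical version would need safeguards against feature collapse.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small encoder producing a 32-dim feature per image (illustrative).
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

def random_undersample(img, keep=0.3):
    """Crude k-space masking: zero a random fraction of Fourier lines."""
    k = torch.fft.fft2(img)
    mask = (torch.rand(img.shape[-2]) < keep).float()[None, None, :, None]
    return torch.fft.ifft2(k * mask).abs()

img = torch.randn(4, 1, 64, 64)                # stand-in for acquired images
f1 = encoder(random_undersample(img))          # view 1: one sampling pattern
f2 = encoder(random_undersample(img))          # view 2: another pattern
loss = 1 - F.cosine_similarity(f1, f2).mean()  # sampling-invariance objective
loss.backward()
print(float(loss))
```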