Page 84 of 1321316 results

Evaluation of locoregional invasiveness of early lung adenocarcinoma manifesting as ground-glass nodules via [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT imaging.

Ruan D, Shi S, Guo W, Pang Y, Yu L, Cai J, Wu Z, Wu H, Sun L, Zhao L, Chen H

PubMed | May 24, 2025
Accurate differentiation of the histologic invasiveness of early-stage lung adenocarcinoma is crucial for determining surgical strategies. This study aimed to investigate the potential of [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT in assessing the invasiveness of early lung adenocarcinoma presenting as ground-glass nodules (GGNs) and to identify imaging features with strong predictive potential. This prospective study (NCT04588064) was conducted between July 2020 and July 2022, focusing on GGNs confirmed postoperatively as invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma (MIA), or precursor glandular lesions (PGL). A total of 45 patients with 53 pulmonary GGNs were included: 19 GGNs corresponding to PGL or MIA and 34 to IAC. Lung nodules were segmented using the Segment Anything Model in Medical Images (MedSAM) and the PET Tumor Segmentation Extension. Clinical characteristics, along with conventional and high-throughput radiomics features from high-resolution CT (HRCT) and PET scans, were analysed. The predictive performance of these features in differentiating PGL or MIA (PGL-MIA) from IAC was assessed using 5-fold cross-validation across six machine learning algorithms. Model validation was performed on an independent external test set (n = 11). The Chi-squared, Fisher's exact, and DeLong tests were employed to compare the performance of the models. The maximum standardised uptake value (SUVmax) derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET was identified as an independent predictor of IAC. A cut-off value of 1.82 yielded a sensitivity of 94% (32/34), a specificity of 84% (16/19), and an overall accuracy of 91% (48/53) in the training set, and 100% (12/12) accuracy in the external test set. Radiomics-based classification further improved diagnostic performance, achieving a sensitivity of 97% (33/34), a specificity of 89% (17/19), an accuracy of 94% (50/53), and an area under the receiver operating characteristic curve (AUC) of 0.97 (95% CI: 0.93-1.00). Compared with the CT-based and PET-based radiomics models, the combined PET/CT radiomics model did not show a significant improvement in predictive performance. The key predictive feature was [<sup>68</sup>Ga]Ga-FAPI-46 PET log-sigma-7-mm-3D_firstorder_RootMeanSquared. The SUVmax derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT can effectively differentiate the invasiveness of early-stage lung adenocarcinoma manifesting as GGNs, and integrating high-throughput features from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT images can considerably enhance classification accuracy. Trial registration: NCT04588064; URL: https://clinicaltrials.gov/study/NCT04588064
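The cutoff-based performance figures in this abstract follow directly from the confusion-matrix counts it reports. A minimal sketch, using only the counts given above (32/34 IAC correctly classified above the 1.82 SUVmax cutoff, 16/19 PGL-MIA correctly below it):

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Training-set counts reported for the SUVmax cutoff of 1.82.
sens, spec, acc = binary_metrics(tp=32, fn=2, tn=16, fp=3)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%}")
```

Running this reproduces the reported 94% / 84% / 91% training-set figures.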

A novel multimodal computer-aided diagnostic model for pulmonary embolism based on hybrid transformer-CNN and tabular transformer.

Zhang W, Gu Y, Ma H, Yang L, Zhang B, Wang J, Chen M, Lu X, Li J, Liu X, Yu D, Zhao Y, Tang S, He Q

PubMed | May 24, 2025
Pulmonary embolism (PE) is a life-threatening condition in which early diagnosis and prompt treatment are essential to reducing morbidity and mortality. While combining CT images with electronic health records (EHR) can improve computer-aided diagnosis, several challenges must be addressed. The primary objective of this study is to leverage both 3D CT images and EHR data to improve PE diagnosis. First, for 3D CT images, we propose a network combining Swin Transformers with 3D CNNs, enhanced by a Multi-Scale Feature Fusion (MSFF) module to address fusion challenges between different encoders. Second, we introduce a Polarized Self-Attention (PSA) module to strengthen the attention mechanism within the 3D CNN. Then, for EHR data, we design a Tabular Transformer for effective feature extraction. Finally, we design and evaluate three multimodal attention fusion modules to integrate CT and EHR features, selecting the most effective one for the final fusion. Experimental results on the RadFusion dataset demonstrate that our model significantly outperforms existing state-of-the-art methods, achieving an AUROC of 0.971, an F1 score of 0.926, and an accuracy of 0.920. These results underscore the effectiveness and innovation of our multimodal approach in advancing PE diagnosis.
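The core idea of attention-based multimodal fusion is a learned gate that weights each modality's embedding before combining them. A minimal sketch of that gating step (not the paper's MSFF/PSA modules; the scalar scores stand in for learned projections):

```python
import math

def gated_fusion(ct_feat, ehr_feat, score_ct, score_ehr):
    """Softmax gate over two modality scores, then a weighted sum of the
    embeddings. Illustrative only: real models compute the scores with
    learned attention layers over the CT and EHR features."""
    m = max(score_ct, score_ehr)
    e_ct, e_ehr = math.exp(score_ct - m), math.exp(score_ehr - m)
    g_ct = e_ct / (e_ct + e_ehr)
    g_ehr = 1.0 - g_ct
    fused = [g_ct * c + g_ehr * e for c, e in zip(ct_feat, ehr_feat)]
    return fused, (g_ct, g_ehr)

# Equal scores give an even 0.5/0.5 gate, i.e. the elementwise mean.
fused, gate = gated_fusion([1.0] * 4, [2.0] * 4, 0.0, 0.0)
```

If one modality's score dominates, the gate shifts toward that modality, which is the behavior the fusion-module comparison in the paper is evaluating.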

Noninvasive prediction of failure of the conservative treatment in lateral epicondylitis by clinicoradiological features and elbow MRI radiomics based on interpretable machine learning: a multicenter cohort study.

Cui J, Wang P, Zhang X, Zhang P, Yin Y, Bai R

PubMed | May 24, 2025
To develop and validate an interpretable machine learning model that combines clinicoradiological features with magnetic resonance imaging (MRI) radiomic features to predict failure of conservative treatment in lateral epicondylitis (LE). This retrospective study included 420 patients with LE from three hospitals, divided into a training cohort (n = 245), an internal validation cohort (n = 115), and an external validation cohort (n = 60). Patients were categorized into conservative treatment failure (n = 133) and conservative treatment success (n = 287) groups based on the outcome of conservative treatment. We developed two predictive models: one utilizing clinicoradiological features, and another integrating clinicoradiological and radiomic features. Seven machine learning algorithms were evaluated to determine the optimal model for predicting failure of conservative treatment. Model performance was assessed using the receiver operating characteristic (ROC) curve, and model interpretability was examined using SHapley Additive exPlanations (SHAP). The LightGBM algorithm was selected as the optimal model because of its superior performance. The combined model demonstrated enhanced predictive accuracy, with an area under the ROC curve (AUC) of 0.96 (95% CI: 0.91, 0.99) in the external validation cohort. SHAP analysis identified the radiological feature "CET coronal tear size" and the radiomic feature "AX_log-sigma-1-0-mm-3D_glszm_SmallAreaEmphasis" as key predictors of conservative treatment failure. We developed and validated an interpretable LightGBM model that integrates clinicoradiological and radiomic features to predict failure of conservative treatment in LE. The model demonstrates high predictive accuracy and offers valuable insights into key prognostic factors.
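Algorithm-selection studies like this one rely on stratified cross-validation so that the outcome ratio (here, roughly 133 failures to 287 successes) is preserved in every fold. A standard-library sketch of that splitting protocol (in practice scikit-learn's `StratifiedKFold` would be used):

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Yield (train_idx, test_idx) pairs with class proportions preserved in
    each fold. Round-robin assignment within each class; illustrative only."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test

labels = [1] * 20 + [0] * 10  # toy imbalanced outcome, like failure vs. success
splits = list(stratified_kfold(labels, k=5))
```

Each of the five test folds here contains exactly 4 positives and 2 negatives, mirroring the 2:1 overall ratio, so every candidate algorithm is scored on comparably balanced held-out data.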

Using machine learning models based on cardiac magnetic resonance parameters to predict prognosis in children with myocarditis.

Hu D, Cui M, Zhang X, Wu Y, Liu Y, Zhai D, Guo W, Ju S, Fan G, Cai W

PubMed | May 24, 2025
To develop machine learning (ML) models incorporating explanatory cardiac magnetic resonance (CMR) parameters for predicting the prognosis of myocarditis in pediatric patients. Seventy-seven patients with pediatric myocarditis diagnosed clinically between January 2020 and December 2023 were enrolled retrospectively. All patients were examined by ultrasound, electrocardiogram (ECG), and serum biomarkers on admission, and underwent a CMR scan to obtain 16 explanatory CMR parameters. All patients underwent follow-up echocardiography and CMR. Patients were divided into two groups according to the occurrence of adverse cardiac events (ACE) during follow-up: a poor prognosis group (n = 23) and a good prognosis group (n = 54). Four models were established: logistic regression (LR), random forest (RF), support vector machine classifier (SVC), and extreme gradient boosting (XGBoost). The performance of each model was evaluated by the area under the receiver operating characteristic curve (AUC), and model interpretation was generated by SHapley Additive exPlanations (SHAP). Across the four models, the three most important features were late gadolinium enhancement (LGE), left ventricular ejection fraction (LVEF), and short-axis peak global circumferential strain (SAXGCS); LGE, LVEF, SAXGCS, and long-axis peak global longitudinal strain (LAXGLS) were selected as key predictors in all four models. Among the four models built on these interpretable CMR parameters, the LR model had the best predictive performance, with an AUC, sensitivity, and specificity of 0.893, 0.820, and 0.944, respectively. The findings indicate that the presence of LGE on CMR imaging, along with reductions in LVEF, SAXGCS, and LAXGLS, is predictive of poor prognosis in patients with acute myocarditis. ML models, particularly the LR model, demonstrate the potential to predict the prognosis of children with myocarditis. These findings provide valuable insights for cardiologists, supporting more informed clinical decision-making and potentially enhancing patient outcomes in pediatric myocarditis cases.
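A logistic regression model over the four key CMR predictors has the simple form risk = sigmoid(intercept + Σ βᵢ·xᵢ). A minimal sketch of that scoring step; the coefficients below are hypothetical placeholders chosen to reflect the reported directions of effect (LGE presence raising risk, higher LVEF/strain magnitudes lowering it), not the study's fitted values:

```python
import math

def myocarditis_risk(lge, lvef, saxgcs, laxgls,
                     coef=None):
    """Logistic-regression-style risk score from the four key CMR
    predictors named in the abstract. Coefficients are illustrative only."""
    if coef is None:
        coef = {"intercept": -1.0, "lge": 1.5, "lvef": -0.05,
                "saxgcs": -0.1, "laxgls": -0.1}
    z = (coef["intercept"] + coef["lge"] * lge + coef["lvef"] * lvef
         + coef["saxgcs"] * saxgcs + coef["laxgls"] * laxgls)
    return 1.0 / (1.0 + math.exp(-z))

# With these placeholder weights, LGE presence raises the predicted risk.
low = myocarditis_risk(lge=0, lvef=60.0, saxgcs=-20.0, laxgls=-18.0)
high = myocarditis_risk(lge=1, lvef=60.0, saxgcs=-20.0, laxgls=-18.0)
```

The attraction of the LR form in this setting is exactly this transparency: each coefficient's sign and magnitude can be read off directly, which complements the SHAP-based interpretation.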

Explainable deep learning for age and gender estimation in dental CBCT scans using attention mechanisms and multi-task learning.

Pishghadam N, Esmaeilyfard R, Paknahad M

PubMed | May 24, 2025
Accurate and interpretable age estimation and gender classification are essential in forensic and clinical diagnostics, particularly when using high-dimensional medical imaging data such as Cone Beam Computed Tomography (CBCT). Traditional CBCT-based approaches often suffer from high computational costs and limited interpretability, reducing their applicability in forensic investigations. This study aims to develop a multi-task deep learning framework that enhances both accuracy and explainability in CBCT-based age estimation and gender classification using attention mechanisms. We propose a multi-task learning (MTL) model that simultaneously estimates age and classifies gender using panoramic slices extracted from CBCT scans. To improve interpretability, we integrate the Convolutional Block Attention Module (CBAM) and Grad-CAM visualization, highlighting relevant craniofacial regions. The dataset includes 2,426 CBCT images from individuals aged 7 to 23 years, and performance is assessed using Mean Absolute Error (MAE) for age estimation and accuracy for gender classification. The proposed model achieves an MAE of 1.08 years for age estimation and 95.3% accuracy in gender classification, significantly outperforming conventional CBCT-based methods. CBAM enhances the model's ability to focus on clinically relevant anatomical features, while Grad-CAM provides visual explanations, improving interpretability. Additionally, using panoramic slices instead of full 3D CBCT volumes reduces computational costs without sacrificing accuracy. Our framework improves both accuracy and interpretability in forensic age estimation and gender classification from CBCT images. By incorporating explainable AI techniques, this model provides a computationally efficient and clinically interpretable tool for forensic and medical applications.
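A multi-task model of this kind is trained on a single joint objective: a regression loss for the age head plus a weighted classification loss for the gender head. A minimal sketch of that combined loss, with MAE and binary cross-entropy as the two terms (the weighting `lam` is a hypothetical hyperparameter, not a value from the paper):

```python
import math

def multitask_loss(age_pred, age_true, gender_prob, gender_true, lam=1.0):
    """Joint MTL objective: MAE over the age head plus lam-weighted binary
    cross-entropy over the gender head. Illustrative of the training setup
    an age+gender MTL model uses; not the paper's exact formulation."""
    mae = sum(abs(p - t) for p, t in zip(age_pred, age_true)) / len(age_true)
    eps = 1e-12  # guard against log(0)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(gender_prob, gender_true)) / len(gender_true)
    return mae + lam * bce

loss = multitask_loss([12.0, 15.5], [13.0, 15.0], [0.9, 0.2], [1, 0], lam=1.0)
```

Sharing one backbone under this joint loss is what lets the two tasks regularize each other, while `lam` trades off age accuracy against gender accuracy.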

Stroke prediction in elderly patients with atrial fibrillation using machine learning combined clinical and left atrial appendage imaging phenotypic features.

Huang H, Xiong Y, Yao Y, Zeng J

PubMed | May 24, 2025
Atrial fibrillation (AF) is one of the primary etiologies of ischemic stroke, and it is of paramount importance to delineate risk phenotypes among elderly AF patients and to investigate more efficacious models for predicting stroke risk. This single-center prospective cohort study collected clinical data and cardiac computed tomography angiography (CTA) images from elderly AF patients. Clinical phenotypes and left atrial appendage (LAA) radiomic phenotypes were identified through K-means clustering, and the independent correlations between these phenotypes and stroke risk were then analyzed. Five machine learning algorithms (logistic regression, naive Bayes, support vector machine (SVM), random forest, and extreme gradient boosting) were used to develop a predictive model for stroke risk in this cohort. The model was assessed using the area under the receiver operating characteristic curve (AUROC), Hosmer-Lemeshow tests, and decision curve analysis. A total of 419 elderly AF patients (≥ 65 years old) were included. K-means clustering identified three clinical phenotypes: Group A (cardiac enlargement/dysfunction), Group B (normal phenotype), and Group C (metabolic/coagulation abnormalities). Stroke incidence was highest in Group A (19.3%) and Group C (14.5%) versus Group B (3.3%). Similarly, LAA radiomic phenotypes revealed elevated stroke risk in patients with enlarged LAA structure (Group B: 20.0%) and complex LAA morphology (Group C: 14.0%) compared with normal LAA (Group A: 2.9%). Among the five machine learning models, the SVM model achieved the best performance (AUROC: 0.858 [95% CI: 0.830-0.887]). The SVM-based stroke-risk prediction model for elderly AF patients thus has strong predictive efficacy.
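The phenotype-discovery step is plain K-means: alternately assign each patient vector to its nearest centroid, then recompute centroids. A standard-library sketch on scalar features to show the mechanics (the study clusters multivariate clinical/radiomic vectors, where scikit-learn's `KMeans` would be the practical choice):

```python
def kmeans_1d(values, k, iters=50):
    """Plain k-means on scalar features: assign to nearest center, then
    recenter. Illustrative of the phenotype-clustering step only."""
    # Spread the initial centers across the sorted value range.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two well-separated clusters of a toy feature (e.g. an LAA volume proxy).
centers, groups = kmeans_1d([1.0, 1.2, 0.9, 10.0, 10.5, 9.8], k=2)
```

Once cluster labels exist, per-phenotype stroke incidence (as in the 19.3% / 3.3% / 14.5% contrast above) is just the event rate within each group.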

Deep learning-based identification of vertebral fracture and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment to predict incident fracture.

Hong N, Cho SW, Lee YH, Kim CO, Kim HC, Rhee Y, Leslie WD, Cummings SR, Kim KM

PubMed | May 24, 2025
Deep learning (DL) identification of vertebral fractures and osteoporosis in lateral spine radiographs and DXA vertebral fracture assessment (VFA) images may improve fracture risk assessment in older adults. In 26 299 lateral spine radiographs from 9276 individuals attending a tertiary-level institution (60% train set; 20% validation set; 20% test set; VERTE-X cohort), DL models were developed to detect prevalent vertebral fracture (pVF) and osteoporosis. The pre-trained DL models from lateral spine radiographs were then fine-tuned in 30% of a DXA VFA dataset (KURE cohort), with performance evaluated in the remaining 70% test set. The area under the receiver operating characteristics curve (AUROC) for DL models to detect pVF and osteoporosis was 0.926 (95% CI 0.908-0.955) and 0.848 (95% CI 0.827-0.869) from VERTE-X spine radiographs, respectively, and 0.924 (95% CI 0.905-0.942) and 0.867 (95% CI 0.853-0.881) from KURE DXA VFA images, respectively. A total of 13.3% and 13.6% of individuals sustained an incident fracture during a median follow-up of 5.4 years and 6.4 years in the VERTE-X test set (n = 1852) and KURE test set (n = 2456), respectively. Incident fracture risk was significantly greater among individuals with DL-detected vertebral fracture (hazard ratios [HRs] 3.23 [95% CI 2.51-5.17] and 2.11 [95% CI 1.62-2.74] for the VERTE-X and KURE test sets) or DL-detected osteoporosis (HR 2.62 [95% CI 1.90-3.63] and 2.14 [95% CI 1.72-2.66]), which remained significant after adjustment for clinical risk factors and femoral neck bone mineral density. DL scores improved incident fracture discrimination and net benefit when combined with clinical risk factors. In summary, DL-detected pVF and osteoporosis in lateral spine radiographs and DXA VFA images enhanced fracture risk prediction in older adults.
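The hazard ratios above come from adjusted Cox models, but the underlying quantity they generalize is a ratio of event rates between the DL-flagged and unflagged groups. A deliberately crude, unadjusted sketch of that rate-ratio idea (the counts below are hypothetical, and a real analysis would use a Cox model with covariate adjustment, e.g. via the `lifelines` package):

```python
def crude_rate_ratio(events_a, person_years_a, events_b, person_years_b):
    """Unadjusted incidence-rate ratio between two groups. A rough stand-in
    for the adjusted Cox hazard ratios reported in the abstract."""
    rate_a = events_a / person_years_a
    rate_b = events_b / person_years_b
    return rate_a / rate_b

# Hypothetical counts: DL-flagged vs. unflagged, over equal follow-up time.
rr = crude_rate_ratio(30, 1000.0, 10, 1000.0)
```

The abstract's finding is that the adjusted analogue of this ratio stays well above 1 (HRs 2.1-3.2) even after accounting for clinical risk factors and femoral neck bone mineral density.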

Relational Bi-level aggregation graph convolutional network with dynamic graph learning and puzzle optimization for Alzheimer's classification.

Raajasree K, Jaichandran R

PubMed | May 24, 2025
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline, necessitating early diagnosis for effective treatment. This study presents the Relational Bi-level Aggregation Graph Convolutional Network with Dynamic Graph Learning and Puzzle Optimization for Alzheimer's Classification (RBAGCN-DGL-PO-AC), using denoised T1-weighted magnetic resonance images (MRIs) collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) repository. To address the impact of noise in medical imaging, the method employs advanced denoising techniques, including the Modified Spline-Kernelled Chirplet Transform (MSKCT), the Jump Gain Integral Recurrent Neural Network (JGIRNN), and the Newton Time Extracting Wavelet Transform (NTEWT), to enhance image quality. Key brain regions crucial for classification, such as the hippocampus, lateral ventricles, and posterior cingulate cortex, are segmented using Attention Guided Generalized Intuitionistic Fuzzy C-Means Clustering (AG-GIFCMC). Feature extraction and classification on the segmented outputs are performed with RBAGCN-DGL and puzzle optimization, categorizing input images into healthy controls (HC), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and Alzheimer's disease (AD). To assess the effectiveness of the proposed method, we systematically examined structural modifications to the RBAGCN-DGL-PO-AC model through extensive ablation studies. Experimental findings show that RBAGCN-DGL-PO-AC achieves state-of-the-art performance, with 99.25% accuracy, outperforming existing methods including MSFFGCN_ADC, CNN_CAD_DBMRI, and FCNN_ADC, while reducing training time by 28.5% and increasing inference speed by 32.7%. Hence, the RBAGCN-DGL-PO-AC method enhances AD classification by integrating denoising, segmentation, and dynamic graph-based feature extraction, achieving superior accuracy and making it a valuable tool for clinical applications, ultimately improving patient outcomes and disease management.
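The aggregation primitive underlying any graph convolutional network is: average each node's features with its neighbors' (via a normalized adjacency matrix), then apply a linear map and nonlinearity. A minimal standard-library sketch of one such layer, illustrating the idea behind RBAGCN's aggregation rather than its bi-level relational variant:

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: add self-loops, row-normalize the
    adjacency matrix, aggregate neighbor features, apply a linear map
    and ReLU. Minimal sketch, not the paper's architecture."""
    n = len(adj)
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    a = [[v / sum(row) for v in row] for row in a]        # row-normalize
    agg = [[sum(a[i][k] * feats[k][j] for k in range(n))  # neighbor average
            for j in range(len(feats[0]))] for i in range(n)]
    return [[max(0.0, sum(agg[i][k] * weight[k][j]        # linear + ReLU
                          for k in range(len(weight))))
             for j in range(len(weight[0]))] for i in range(n)]

# Two connected nodes with identity weights: each output row becomes
# the mean of both nodes' features.
h = gcn_layer([[0, 1], [1, 0]], [[1.0, 0.0], [0.0, 1.0]],
              [[1.0, 0.0], [0.0, 1.0]])
```

Dynamic graph learning, as in the paper, replaces the fixed `adj` with an adjacency that is itself learned during training.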

MATI: A GPU-accelerated toolbox for microstructural diffusion MRI simulation and data fitting with a graphical user interface.

Xu J, Devan SP, Shi D, Pamulaparthi A, Yan N, Zu Z, Smith DS, Harkins KD, Gore JC, Jiang X

PubMed | May 24, 2025
To introduce MATI (Microstructural Analysis Toolbox for Imaging), a versatile MATLAB-based toolbox that combines both simulation and data fitting capabilities for microstructural dMRI research. MATI provides a user-friendly, graphical user interface that enables researchers, including those without much programming experience, to perform advanced simulations and data analyses for microstructural MRI research. For simulation, MATI supports arbitrary microstructural tissues and pulse sequences. For data fitting, MATI supports a range of fitting methods, including traditional non-linear least squares, Bayesian approaches, machine learning, and dictionary matching methods, allowing users to tailor analyses based on specific research needs. Optimized with vectorized matrix operations and high-performance numerical libraries, MATI achieves high computational efficiency, enabling rapid simulations and data fitting on CPU and GPU hardware. While designed for microstructural dMRI, MATI's generalized framework can be extended to other imaging methods, making it a flexible and scalable tool for quantitative MRI research. MATI offers a significant step toward translating advanced microstructural MRI techniques into clinical applications.
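The simplest forward model a microstructural dMRI simulator generalizes is mono-exponential diffusion decay, S = S0 · exp(−b·D). A sketch of that baseline signal equation (MATI itself supports arbitrary tissue geometries and pulse sequences, which is precisely what goes beyond this model):

```python
import math

def dwi_signal(s0, b, diffusivity):
    """Mono-exponential diffusion-weighted signal S = S0 * exp(-b * D).
    b in s/mm^2, D in mm^2/s; the free-diffusion baseline only."""
    return s0 * math.exp(-b * diffusivity)

# At b = 1000 s/mm^2 with free-water-like D ~ 3e-3 mm^2/s, the signal
# drops to about 5% of S0.
s = dwi_signal(1.0, 1000.0, 3.0e-3)
```

Fitting, the other half of such a toolbox, is the inverse problem: given measured signals at several b-values, recover parameters like D by least squares, Bayesian inference, or dictionary matching, as the abstract lists.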

Cross-Fusion Adaptive Feature Enhancement Transformer: Efficient high-frequency integration and sparse attention enhancement for brain MRI super-resolution.

Yang Z, Xiao H, Wang X, Zhou F, Deng T, Liu S

PubMed | May 24, 2025
High-resolution magnetic resonance imaging (MRI) is essential for diagnosing and treating brain diseases. Transformer-based approaches demonstrate strong potential in MRI super-resolution by capturing long-range dependencies effectively. However, existing Transformer-based super-resolution methods face several challenges: (1) they primarily focus on low-frequency information, neglecting the utilization of high-frequency information; (2) they lack effective mechanisms to integrate both low-frequency and high-frequency information; (3) they struggle to effectively eliminate redundant information during the reconstruction process. To address these issues, we propose the Cross-fusion Adaptive Feature Enhancement Transformer (CAFET). Our model maximizes the potential of both CNNs and Transformers. It consists of four key blocks: a high-frequency enhancement block for extracting high-frequency information; a hybrid attention block for capturing global information and local fitting, which includes channel attention and shifted rectangular window attention; a large-window fusion attention block for integrating local high-frequency features and global low-frequency features; and an adaptive sparse overlapping attention block for dynamically retaining key information and enhancing the aggregation of cross-window features. Extensive experiments validate the effectiveness of the proposed method. On the BraTS and IXI datasets, with an upsampling factor of ×2, the proposed method achieves a maximum PSNR improvement of 2.4 dB and 1.3 dB compared to state-of-the-art methods, along with an SSIM improvement of up to 0.16% and 1.42%. Similarly, at an upsampling factor of ×4, the proposed method achieves a maximum PSNR improvement of 1.04 dB and 0.3 dB over the current leading methods, along with an SSIM improvement of up to 0.25% and 1.66%. Our method is capable of reconstructing high-quality super-resolution brain MRI images, demonstrating significant clinical potential.
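The PSNR gains quoted above have a direct interpretation in terms of mean-squared error, since PSNR = 10·log10(MAX² / MSE). A short sketch of the metric and what a dB gain implies:

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# Halving the MSE raises PSNR by 10*log10(2) ~ 3.01 dB; conversely, the
# 2.4 dB improvement reported above corresponds to roughly a 42% reduction
# in mean-squared error (10 ** (-2.4 / 10) ~ 0.575 of the baseline MSE).
gain = psnr(0.5e-3) - psnr(1.0e-3)
```

This is why even fractional-dB PSNR differences between super-resolution methods correspond to meaningful reconstruction-error reductions.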
