Page 226 of 292 (2,917 results)

Evaluation of synthetic training data for 3D intraoral reconstruction of cleft patients from single images.

Lingens L, Lill Y, Nalabothu P, Benitez BK, Mueller AA, Gross M, Solenthaler B

PubMed | May 24, 2025
This study investigates the effectiveness of synthetic training data in predicting 2D landmarks for 3D intraoral reconstruction in cleft lip and palate patients. We take inspiration from existing landmark prediction and 3D reconstruction techniques for faces and demonstrate their potential in medical applications. We generated both real and synthetic datasets from intraoral scans and videos. A convolutional neural network was trained using a Gaussian negative log-likelihood loss function to predict 2D landmarks and their corresponding confidence scores. The predicted landmarks were then used to fit a statistical shape model and generate 3D reconstructions from individual images. We analyzed the model's performance on real patient data and explored the dataset size required to overcome the domain gap between synthetic and real images. Our approach produces satisfactory results on synthetic data and shows promise when tested on real data. The method achieves rapid 3D reconstruction from single images and can therefore provide significant value in day-to-day medical work. Our results demonstrate that synthetic training data are viable for training models to predict 2D landmarks and reconstruct 3D meshes in patients with cleft lip and palate. This approach offers an accessible, low-cost alternative to traditional methods, using smartphone technology for noninvasive, rapid, and accurate 3D reconstructions in clinical settings.
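The training objective described above — predicting each 2D landmark together with a confidence score via a Gaussian negative log-likelihood — can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the per-landmark log-sigma parameterization is an assumption.

```python
import numpy as np

def gaussian_nll(pred, target, log_sigma):
    """Per-landmark Gaussian negative log-likelihood.

    pred, target: (N, 2) arrays of predicted / ground-truth 2D landmarks.
    log_sigma:    (N,) predicted log standard deviations; exp(log_sigma)
                  acts as an (inverse) confidence score per landmark.
    """
    sigma2 = np.exp(2.0 * log_sigma)               # predicted variance
    sq_err = np.sum((pred - target) ** 2, axis=1)  # squared 2D error
    nll = 0.5 * (np.log(2.0 * np.pi * sigma2) + sq_err / sigma2)
    return nll.mean()
```

The key property of this loss is that the network can trade off accuracy against confidence: inflating sigma on a hard landmark reduces the error term but pays a log-variance penalty.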

Classifying athletes and non-athletes by differences in spontaneous brain activity: a machine learning and fMRI study.

Peng L, Xu L, Zhang Z, Wang Z, Zhong X, Wang L, Peng Z, Xu R, Shao Y

PubMed | May 24, 2025
Different types of sports training can induce distinct changes in brain activity and function; however, it remains unclear whether commonalities exist across sports disciplines. Moreover, the relationship between these brain activity alterations and the duration of sports training requires further investigation. This study employed resting-state functional magnetic resonance imaging (rs-fMRI) techniques to analyze spontaneous brain activity using the amplitude of low-frequency fluctuations (ALFF) and fractional amplitude of low-frequency fluctuations (fALFF) in 86 highly trained athletes compared to 74 age- and gender-matched non-athletes. Our findings revealed significantly higher ALFF values in the Insula_R (right insula), OFCpost_R (right posterior orbital gyrus), and OFClat_R (right lateral orbital gyrus) in athletes compared to controls, whereas fALFF in the Postcentral_R (right postcentral gyrus) was notably higher in controls. Additionally, we identified a significant negative correlation between fALFF values in the Postcentral_R of athletes and their years of professional training. Utilizing machine learning algorithms, we achieved accurate classification of brain activity patterns distinguishing athletes from non-athletes with 96.97% accuracy. These results suggest that the functional reorganization observed in athletes' brains may signify an adaptation to prolonged training, potentially reflecting enhanced processing efficiency. This study emphasizes the importance of examining the impact of long-term sports training on brain function, which could influence cognitive and sensory systems crucial for optimal athletic performance. Furthermore, machine learning methods could be used in the future to select athletes based on differences in brain activity.
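The abstract does not name the specific classifier used to separate athletes from non-athletes. As a purely illustrative sketch of classifying subjects from per-region ALFF/fALFF feature vectors, a minimal nearest-centroid classifier (an assumption, not the authors' method) might look like:

```python
import numpy as np

def fit_centroids(X, y):
    """Mean feature vector (e.g., regional ALFF values) per class.

    X: (n_subjects, n_regions) feature matrix; y: class labels
    (e.g., 0 = non-athlete, 1 = athlete).
    """
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, X):
    """Assign each subject to the class with the nearest centroid."""
    labels = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]
```

In practice a study like this would use cross-validated classifiers with feature selection, but the core idea — separating groups in a regional-activity feature space — is the same.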

Quantitative image quality metrics enable resource-efficient quality control of clinically applied AI-based reconstructions in MRI.

White OA, Shur J, Castagnoli F, Charles-Edwards G, Whitcher B, Collins DJ, Cashmore MTD, Hall MG, Thomas SA, Thompson A, Harrison CA, Hopkinson G, Koh DM, Winfield JM

PubMed | May 24, 2025
AI-based MRI reconstruction techniques improve efficiency by reducing acquisition times whilst maintaining or improving image quality. Recent recommendations from professional bodies suggest centres should perform quality assessments on AI tools. However, monitoring long-term performance presents challenges due to model drift or system updates. Radiologist-based assessments are resource-intensive and may be subjective, highlighting the need for efficient quality control (QC) measures. This study explores using image quality metrics (IQMs) to assess AI-based reconstructions. Fifty-eight patients undergoing standard-of-care rectal MRI were imaged using AI-based and conventional T2-weighted sequences. Paired and unpaired IQMs were calculated. Sensitivity of the IQMs to retrospectively applied perturbations of the AI-based reconstructions was assessed using control charts, and statistical comparisons were performed between the four MR systems in the evaluation. Two radiologists evaluated the image quality of the perturbed images, giving an indication of their clinical relevance. Paired IQMs demonstrated sensitivity to changes in AI-reconstruction settings, identifying deviations outside ± 2 standard deviations of the reference dataset. Unpaired metrics showed less sensitivity. Paired IQMs showed no difference in performance between 1.5 T and 3 T systems (p > 0.99), whilst minor but significant (p < 0.0379) differences were noted for unpaired IQMs. IQMs are effective for QC of AI-based MR reconstructions, offering resource-efficient alternatives to repeated radiologist evaluations. Future work should expand this to other imaging applications and assess additional measures.
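The control-chart logic above — flagging IQM values that fall outside ± 2 standard deviations of a reference dataset — reduces to a few lines. A minimal sketch (illustrative only; the specific metrics, reference set, and k = 2 limit follow the abstract but the function names are assumptions):

```python
def control_limits(reference, k=2.0):
    """Mean +/- k sample standard deviations of a reference IQM dataset."""
    n = len(reference)
    mean = sum(reference) / n
    sd = (sum((x - mean) ** 2 for x in reference) / (n - 1)) ** 0.5
    return mean - k * sd, mean + k * sd

def flag_outliers(values, limits):
    """Indices of new IQM measurements outside the control limits,
    i.e., reconstructions that warrant a radiologist's review."""
    lo, hi = limits
    return [i for i, v in enumerate(values) if v < lo or v > hi]
```

This is what makes the approach resource-efficient: only flagged scans need human review, rather than every reconstruction.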

Deep learning reconstruction combined with contrast-enhancement boost in dual-low dose CT pulmonary angiography: a two-center prospective trial.

Shen L, Lu J, Zhou C, Bi Z, Ye X, Zhao Z, Xu M, Zeng M, Wang M

PubMed | May 24, 2025
To investigate whether deep learning reconstruction (DLR) combined with the contrast-enhancement-boost (CE-boost) technique can improve the diagnostic quality of CT pulmonary angiography (CTPA) at low radiation and contrast doses, compared with routine CTPA using hybrid iterative reconstruction (HIR). This prospective two-center study included 130 patients who underwent CTPA for suspected pulmonary embolism (PE). Patients were randomly divided into two groups: the routine CTPA group, reconstructed using HIR; and the dual-low dose CTPA group, reconstructed using HIR and DLR, additionally combined with CE-boost to generate HIR-boost and DLR-boost images. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the pulmonary arteries were quantitatively assessed. Two experienced radiologists independently ranked the CT images (5, best; 1, worst) based on overall image noise and vascular contrast. Diagnostic performance for PE detection was calculated for each dataset. Patient demographics were similar between groups. Compared with HIR images of the routine group, DLR-boost images of the dual-low dose group achieved significantly better qualitative scores (p < 0.001). The CT values of the pulmonary arteries were comparable between the DLR-boost and HIR images (p > 0.05), whereas the SNRs and CNRs of the pulmonary arteries in the DLR-boost images were the highest among all five datasets (p < 0.001). The AUCs of DLR, HIR-boost, and DLR-boost were 0.933, 0.924, and 0.986, respectively (all p > 0.05). DLR combined with the CE-boost technique can significantly improve the image quality of CTPA at reduced radiation and contrast doses, facilitating a more accurate diagnosis of pulmonary embolism.
Question: Dual-low dose protocols are desirable for detecting pulmonary embolism in follow-up CTPA, yet effective solutions are still lacking.
Findings: DLR-boost with reduced radiation and contrast doses demonstrated higher quantitative and qualitative image quality than hybrid iterative reconstruction in routine CTPA.
Clinical relevance: A DLR-boost-based low-radiation, low-contrast-dose CTPA protocol offers a novel strategy to further enhance image quality and diagnostic accuracy for patients with pulmonary embolism.
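The SNR and CNR measures used above have standard ROI-based definitions. A minimal sketch (the ROI placement — vessel, reference tissue, background noise — is an assumption for illustration; the study's exact measurement protocol is not given in the abstract):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over the SD of a background ROI."""
    return np.mean(signal_roi) / np.std(noise_roi, ddof=1)

def cnr(vessel_roi, tissue_roi, noise_roi):
    """Contrast-to-noise ratio: attenuation difference between the vessel
    and a reference tissue, normalised by background noise."""
    return (np.mean(vessel_roi) - np.mean(tissue_roi)) / np.std(noise_roi, ddof=1)
```

Note that with a shared noise ROI, CNR is simply the difference of the two SNRs, which is why boosting vascular contrast (CE-boost) and suppressing noise (DLR) both raise it.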

Preoperative risk assessment of invasive endometrial cancer using MRI-based radiomics: a systematic review and meta-analysis.

Gao Y, Liang F, Tian X, Zhang G, Zhang H

PubMed | May 24, 2025
Image-derived machine learning (ML) is a robust and growing field in diagnostic imaging systems for both clinicians and radiologists. Accurate preoperative radiological evaluation of the invasive ability of endometrial cancer (EC) can increase the degree of clinical benefit. The present study aimed to investigate the diagnostic performance of magnetic resonance imaging (MRI)-derived artificial intelligence for accurate preoperative assessment of invasive risk. The PubMed, Embase, Cochrane Library and Web of Science databases were searched, and pertinent English-language papers were collected. The pooled sensitivity, specificity, diagnostic odds ratio (DOR), and positive and negative likelihood ratios (PLR and NLR, respectively) of all the papers were calculated using Stata software. The results were plotted on a summary receiver operating characteristic (SROC) curve, publication bias and threshold effects were evaluated, and meta-regression and subgroup analyses were conducted to explore possible causes of between-study heterogeneity. MRI-based radiomics revealed pooled sensitivity (SEN) and specificity (SPE) values of 0.85 and 0.82 for the prediction of high-grade EC; 0.80 and 0.85 for deep myometrial invasion (DMI); 0.85 and 0.73 for lymphovascular space invasion (LVSI); 0.79 and 0.85 for microsatellite instability (MSI); and 0.90 and 0.72 for lymph node metastasis (LNM), respectively. For LVSI prediction and high-grade histological analysis, meta-regression revealed that image segmentation and MRI-based radiomics modeling contributed to heterogeneity (p = 0.003 and p = 0.04, respectively). Through a systematic review and meta-analysis of the reported literature, preoperative MRI-derived ML could help clinicians accurately evaluate EC risk factors, potentially guiding individual treatment thereafter.
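The pooled summary statistics named above follow directly from sensitivity and specificity. As a quick reference (standard textbook definitions, not the authors' Stata workflow):

```python
def likelihood_ratios(sen, spe):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    from sensitivity and specificity."""
    plr = sen / (1.0 - spe)   # how much a positive result raises the odds
    nlr = (1.0 - sen) / spe   # how much a negative result lowers the odds
    dor = plr / nlr           # single-number summary of discrimination
    return plr, nlr, dor
```

Plugging in the high-grade EC values from the abstract (SEN 0.85, SPE 0.82) gives a PLR around 4.7 and a DOR around 26, consistent with moderately strong discrimination.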

Evaluation of locoregional invasiveness of early lung adenocarcinoma manifesting as ground-glass nodules via [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT imaging.

Ruan D, Shi S, Guo W, Pang Y, Yu L, Cai J, Wu Z, Wu H, Sun L, Zhao L, Chen H

PubMed | May 24, 2025
Accurate differentiation of the histologic invasiveness of early-stage lung adenocarcinoma is crucial for determining surgical strategies. This study aimed to investigate the potential of [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT in assessing the invasiveness of early lung adenocarcinoma presenting as ground-glass nodules (GGNs) and identifying imaging features with strong predictive potential. This prospective study (NCT04588064) was conducted between July 2020 and July 2022, focusing on GGNs that were confirmed postoperatively to be either invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma (MIA), or precursor glandular lesions (PGL). A total of 45 patients with 53 pulmonary GGNs were included in the study: 19 GGNs associated with PGL-MIA and 34 with IAC. Lung nodules were segmented using the Segment Anything Model in Medical Images (MedSAM) and the PET Tumor Segmentation Extension. Clinical characteristics, along with conventional and high-throughput radiomics features from high-resolution CT (HRCT) and PET scans, were analysed. The predictive performance of these features in differentiating between PGL or MIA (PGL-MIA) and IAC was assessed using 5-fold cross-validation across six machine learning algorithms. Model validation was performed on an independent external test set (n = 11). The Chi-squared, Fisher's exact, and DeLong tests were employed to compare the performance of the models. The maximum standardised uptake value (SUVmax) derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET was identified as an independent predictor of IAC. A cut-off value of 1.82 yielded a sensitivity of 94% (32/34), specificity of 84% (16/19), and an overall accuracy of 91% (48/53) in the training set, while achieving 100% (12/12) accuracy in the external test set.
Radiomics-based classification further improved diagnostic performance, achieving a sensitivity of 97% (33/34), specificity of 89% (17/19), accuracy of 94% (50/53), and an area under the receiver operating characteristic curve (AUC) of 0.97 [95% CI: 0.93-1.00]. Compared with the CT-based and PET-based radiomics models, the combined PET/CT radiomics model did not show a significant improvement in predictive performance. The key predictive feature was [<sup>68</sup>Ga]Ga-FAPI-46 PET log-sigma-7-mm-3D_firstorder_RootMeanSquared. The SUVmax derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT can effectively differentiate the invasiveness of early-stage lung adenocarcinoma manifesting as GGNs, and integrating high-throughput features from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT images can considerably enhance classification accuracy. Trial registration: NCT04588064; URL: https://clinicaltrials.gov/study/NCT04588064.
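The SUVmax cut-off analysis above reduces to simple threshold counting. A minimal sketch (synthetic values for illustration, not the study data; applying SUVmax >= cut-off as the IAC-positive rule is the natural reading of the abstract):

```python
def cutoff_metrics(suvmax_pos, suvmax_neg, cutoff):
    """Sensitivity, specificity and accuracy of a SUVmax threshold.

    suvmax_pos: SUVmax values of truly invasive (IAC) nodules.
    suvmax_neg: SUVmax values of PGL-MIA nodules.
    """
    tp = sum(v >= cutoff for v in suvmax_pos)  # IAC correctly flagged
    tn = sum(v < cutoff for v in suvmax_neg)   # PGL-MIA correctly cleared
    sen = tp / len(suvmax_pos)
    spe = tn / len(suvmax_neg)
    acc = (tp + tn) / (len(suvmax_pos) + len(suvmax_neg))
    return sen, spe, acc
```

The reported 94%/84%/91% figures correspond to tp = 32 of 34 and tn = 16 of 19 at the 1.82 cut-off.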

A novel multimodal computer-aided diagnostic model for pulmonary embolism based on hybrid transformer-CNN and tabular transformer.

Zhang W, Gu Y, Ma H, Yang L, Zhang B, Wang J, Chen M, Lu X, Li J, Liu X, Yu D, Zhao Y, Tang S, He Q

PubMed | May 24, 2025
Pulmonary embolism (PE) is a life-threatening clinical problem where early diagnosis and prompt treatment are essential to reducing morbidity and mortality. While the combination of CT images and electronic health records (EHR) can help improve computer-aided diagnosis, many challenges remain to be addressed. The primary objective of this study is to leverage both 3D CT images and EHR data to improve PE diagnosis. First, for 3D CT images, we propose a network combining Swin Transformers with 3D CNNs, enhanced by a Multi-Scale Feature Fusion (MSFF) module to address fusion challenges between different encoders. Second, we introduce a Polarized Self-Attention (PSA) module to enhance the attention mechanism within the 3D CNN. Then, for EHR data, we design the Tabular Transformer for effective feature extraction. Finally, we design and evaluate three multimodal attention fusion modules to integrate CT and EHR features, selecting the most effective one for final fusion. Experimental results on the RadFusion dataset demonstrate that our model significantly outperforms existing state-of-the-art methods, achieving an AUROC of 0.971, an F1 score of 0.926, and an accuracy of 0.920. These results underscore the effectiveness and innovation of our multimodal approach in advancing PE diagnosis.
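The abstract evaluates three multimodal attention fusion modules without describing their internals. As one plausible illustrative sketch (assumed shapes and a single head with no learned projections — not the paper's architecture), a cross-attention fusion step in which CT tokens attend to EHR tokens could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(ct_tokens, ehr_tokens):
    """CT image features attend to EHR features and absorb them residually.

    ct_tokens:  (n_ct, d) image feature tokens from the CT encoder.
    ehr_tokens: (n_ehr, d) feature tokens from the tabular transformer.
    """
    d = ct_tokens.shape[-1]
    attn = softmax(ct_tokens @ ehr_tokens.T / np.sqrt(d))  # (n_ct, n_ehr)
    return ct_tokens + attn @ ehr_tokens                   # residual fusion
```

A real module would add learned query/key/value projections, multiple heads, and normalisation; the sketch only shows the mechanism by which one modality conditions the other.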

Noninvasive prediction of failure of the conservative treatment in lateral epicondylitis by clinicoradiological features and elbow MRI radiomics based on interpretable machine learning: a multicenter cohort study.

Cui J, Wang P, Zhang X, Zhang P, Yin Y, Bai R

PubMed | May 24, 2025
To develop and validate an interpretable machine learning model that combines clinicoradiological features with magnetic resonance imaging (MRI) radiomic features to predict the failure of conservative treatment in lateral epicondylitis (LE). This retrospective study included 420 patients with LE from three hospitals, divided into a training cohort (n = 245), an internal validation cohort (n = 115), and an external validation cohort (n = 60). Patients were categorized into conservative treatment failure (n = 133) and conservative treatment success (n = 287) groups based on the outcome of conservative treatment. We developed two predictive models: one utilizing clinicoradiological features alone, and another integrating clinicoradiological and radiomic features. Seven machine learning algorithms were evaluated to determine the optimal model for predicting the failure of conservative treatment. Model performance was assessed using the receiver operating characteristic (ROC) curve, and model interpretability was examined using SHapley Additive exPlanations (SHAP). The LightGBM algorithm was selected as the optimal model because of its superior performance. The combined model demonstrated enhanced predictive accuracy, with an area under the ROC curve (AUC) of 0.96 (95% CI: 0.91, 0.99) in the external validation cohort. SHAP analysis identified the radiological feature "CET coronal tear size" and the radiomic feature "AX_log-sigma-1-0-mm-3D_glszm_SmallAreaEmphasis" as key predictors of conservative treatment failure. We developed and validated an interpretable LightGBM machine learning model that integrates clinicoradiological and radiomic features to predict the failure of conservative treatment in LE. The model demonstrates high predictive accuracy and offers valuable insights into key prognostic factors.
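Model performance here, as in several of the studies above, is summarised by the area under the ROC curve. For reference, AUC has an exact rank-based (Mann-Whitney) formulation that needs no curve construction at all — a self-contained sketch, not the authors' pipeline:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as one half (Mann-Whitney U formulation)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

This pairwise reading is also why AUC is insensitive to the choice of decision threshold: it measures ranking quality over all positive-negative pairs.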

Using machine learning models based on cardiac magnetic resonance parameters to predict prognosis in children with myocarditis.

Hu D, Cui M, Zhang X, Wu Y, Liu Y, Zhai D, Guo W, Ju S, Fan G, Cai W

PubMed | May 24, 2025
To develop machine learning (ML) models incorporating explanatory cardiac magnetic resonance (CMR) parameters for predicting the prognosis of myocarditis in pediatric patients. Seventy-seven patients with pediatric myocarditis diagnosed clinically between January 2020 and December 2023 were enrolled retrospectively. All patients were examined by ultrasound, electrocardiogram (ECG), and serum biomarkers on admission, and underwent a CMR scan to obtain 16 explanatory CMR parameters. All patients underwent follow-up echocardiography and CMR. Patients were divided into two groups according to the occurrence of adverse cardiac events (ACE) during follow-up: the poor prognosis group (n = 23) and the good prognosis group (n = 54). Four models were established: logistic regression (LR), random forest (RF), support vector machine classifier (SVC), and extreme gradient boosting (XGBoost). The performance of each model was evaluated by the area under the receiver operating characteristic curve (AUC). Model interpretations were generated using SHapley Additive exPlanations (SHAP). Across the four models, the three most important features were late gadolinium enhancement (LGE), left ventricular ejection fraction (LVEF), and SAX peak global circumferential strain (SAXGCS). In addition, LGE, LVEF, SAXGCS, and LAX peak global longitudinal strain (LAXGLS) were selected as key predictors in all four models. Four interpretable CMR parameters were thus extracted, and the LR model showed the best prediction performance, with an AUC, sensitivity, and specificity of 0.893, 0.820, and 0.944, respectively. The findings indicate that the presence of LGE on CMR imaging, along with reductions in LVEF, SAXGCS, and LAXGLS, is predictive of poor prognosis in patients with acute myocarditis. ML models, particularly the LR model, demonstrate the potential to predict the prognosis of children with myocarditis. These findings provide valuable insights for cardiologists, supporting more informed clinical decision-making and potentially enhancing patient outcomes in pediatric myocarditis cases.

Explainable deep learning for age and gender estimation in dental CBCT scans using attention mechanisms and multi-task learning.

Pishghadam N, Esmaeilyfard R, Paknahad M

PubMed | May 24, 2025
Accurate and interpretable age estimation and gender classification are essential in forensic and clinical diagnostics, particularly when using high-dimensional medical imaging data such as Cone Beam Computed Tomography (CBCT). Traditional CBCT-based approaches often suffer from high computational costs and limited interpretability, reducing their applicability in forensic investigations. This study aims to develop a multi-task deep learning framework that enhances both accuracy and explainability in CBCT-based age estimation and gender classification using attention mechanisms. We propose a multi-task learning (MTL) model that simultaneously estimates age and classifies gender using panoramic slices extracted from CBCT scans. To improve interpretability, we integrate the Convolutional Block Attention Module (CBAM) and Grad-CAM visualization, highlighting relevant craniofacial regions. The dataset includes 2,426 CBCT images from individuals aged 7 to 23 years, and performance is assessed using Mean Absolute Error (MAE) for age estimation and accuracy for gender classification. The proposed model achieves an MAE of 1.08 years for age estimation and 95.3% accuracy in gender classification, significantly outperforming conventional CBCT-based methods. CBAM enhances the model's ability to focus on clinically relevant anatomical features, while Grad-CAM provides visual explanations, improving interpretability. Additionally, using panoramic slices instead of full 3D CBCT volumes reduces computational costs without sacrificing accuracy. Our framework improves both accuracy and interpretability in forensic age estimation and gender classification from CBCT images. By incorporating explainable AI techniques, this model provides a computationally efficient and clinically interpretable tool for forensic and medical applications.
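A multi-task model like the one above is typically trained on a single joint objective combining the regression and classification losses. A minimal sketch (the equal task weighting `w` and the MAE + cross-entropy pairing are assumptions for illustration; the paper's exact loss is not given in the abstract):

```python
import numpy as np

def multitask_loss(age_pred, age_true, gender_logits, gender_true, w=1.0):
    """Joint objective: mean absolute error for age estimation plus
    softmax cross-entropy for gender classification.

    age_pred, age_true: (n,) arrays of ages in years.
    gender_logits:      (n, 2) raw class scores; gender_true: (n,) int labels.
    """
    mae = np.mean(np.abs(age_pred - age_true))
    # numerically stable softmax cross-entropy over the two gender classes
    z = gender_logits - gender_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(gender_true)), gender_true].mean()
    return mae + w * ce
```

Sharing a backbone under one such loss is what lets each task regularise the other, which is part of why MTL models can beat two separately trained networks.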
