Page 96 of 2232230 results

Prediction of NIHSS Scores and Acute Ischemic Stroke Severity Using a Cross-attention Vision Transformer Model with Multimodal MRI.

Tuxunjiang P, Huang C, Zhou Z, Zhao W, Han B, Tan W, Wang J, Kukun H, Zhao W, Xu R, Aihemaiti A, Subi Y, Zou J, Xie C, Chang Y, Wang Y

pubmed logopapers · Jun 13 2025
This study aimed to develop and evaluate models for classifying the severity of neurological impairment in acute ischemic stroke (AIS) patients using multimodal MRI data. A retrospective cohort of 1227 AIS patients was collected and categorized into mild (NIHSS<5) and moderate-to-severe (NIHSS≥5) stroke groups based on NIHSS scores. Eight baseline models were constructed for performance comparison, including a clinical model, radiomics models using DWI or multiple MRI sequences, and deep learning (DL) models with varying fusion strategies (early fusion, late fusion, full cross-fusion, and DWI-centered cross-fusion). All DL models were based on the Vision Transformer (ViT) framework. Model performance was evaluated using metrics such as AUC and ACC, and robustness was assessed through subgroup analyses and visualization using Grad-CAM. Among the eight models, the DL model using DWI as the primary sequence with cross-fusion of the other MRI sequences (Model 8) achieved the best performance. In the test cohort, Model 8 demonstrated an AUC of 0.914, an ACC of 0.830, and high specificity (0.818) and sensitivity (0.853). Subgroup analysis showed that Model 8 was robust in most subgroups, with no significant difference in predictions (p > 0.05) and AUC values consistently above 0.900; a significant predictive difference was observed only in the BMI subgroup (p < 0.001). In external validation, Model 8 reached AUC values of 0.910 and 0.912 in centers 2 and 3, respectively. Grad-CAM visualization highlighted the infarct core as the most critical region contributing to predictions, with consistent feature attention across DWI, T1WI, T2WI, and FLAIR sequences, supporting the interpretability of the model. A ViT-based DL model with cross-modal fusion strategies provides a non-invasive and efficient tool for classifying AIS severity.
Its robust performance across subgroups and interpretability make it a promising tool for personalized management and decision-making in clinical practice.
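The DWI-centered cross-fusion behind Model 8 can be illustrated with a single cross-attention step, where queries come from DWI patch tokens and keys/values from an auxiliary sequence. This is a toy numpy sketch, not the authors' implementation; the token counts, dimensions, and random projections standing in for learned weights are all assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, d_k, seed=0):
    """One cross-attention step: queries from the primary (DWI) tokens,
    keys/values from an auxiliary MRI sequence's tokens. The projection
    matrices are random stand-ins for learned weights."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((q_tokens.shape[-1], d_k)) / np.sqrt(d_k)
    Wk = rng.standard_normal((kv_tokens.shape[-1], d_k)) / np.sqrt(d_k)
    Wv = rng.standard_normal((kv_tokens.shape[-1], d_k)) / np.sqrt(d_k)
    Q, K, V = q_tokens @ Wq, kv_tokens @ Wk, kv_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (n_q, n_kv) attention map
    return attn @ V                          # DWI tokens enriched with auxiliary info

rng = np.random.default_rng(1)
dwi_tokens   = rng.standard_normal((16, 64))   # 16 hypothetical DWI patch embeddings
flair_tokens = rng.standard_normal((16, 64))   # 16 hypothetical FLAIR patch embeddings
fused = cross_attention(dwi_tokens, flair_tokens, d_k=32)
print(fused.shape)
```

In the full-cross-fusion variants every sequence attends to every other; the DWI-centered design keeps DWI as the sole query stream, which matches its role as the primary sequence here.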

Does restrictive anorexia nervosa impact brain aging? A machine learning approach to estimate age based on brain structure.

Gupta Y, de la Cruz F, Rieger K, di Giuliano M, Gaser C, Cole J, Breithaupt L, Holsen LM, Eddy KT, Thomas JJ, Cetin-Karayumak S, Kubicki M, Lawson EA, Miller KK, Misra M, Schumann A, Bär KJ

pubmed logopapers · Jun 13 2025
Anorexia nervosa (AN), a severe eating disorder marked by extreme weight loss and malnutrition, leads to significant alterations in brain structure. This study used machine learning (ML) to estimate brain age from structural MRI scans and investigated brain-predicted age difference (brain-PAD) as a potential biomarker in AN. Structural MRI scans were collected from female participants aged 10-40 years across two institutions (Boston, USA, and Jena, Germany), including acute AN (acAN; n=113), weight-restored AN (wrAN; n=35), and age-matched healthy controls (HC; n=90). The ML model was trained on 3487 healthy female participants (ages 5-45 years) from ten datasets, using 377 neuroanatomical features extracted from T1-weighted MRI scans. The model achieved strong performance with a mean absolute error (MAE) of 1.93 years and a correlation of r = 0.88 in HCs. In acAN patients, brain age was overestimated by an average of +2.25 years, suggesting advanced brain aging. In contrast, wrAN participants showed significantly lower brain-PAD than acAN (+0.26 years, p=0.0026) and did not differ from HC (p=0.98), suggesting normalization of brain age estimates following weight restoration. A significant group-by-age interaction effect on predicted brain age (p<0.001) indicated that brain age deviations were most pronounced in younger acAN participants. Brain-PAD in acAN was significantly negatively associated with BMI (r = -0.291, p<sub>fdr</sub> = 0.005), but not in wrAN or HC groups. Importantly, no significant associations were found between brain-PAD and clinical symptom severity. These findings suggest that AN is linked to advanced brain aging during the acute stage, and that this may partially normalize following weight recovery.
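Brain-PAD itself is a simple quantity: predicted age minus chronological age, with MAE summarizing model accuracy. A toy sketch with invented ages (the actual model was trained on 377 neuroanatomical features from T1-weighted MRI, not shown here):

```python
# Hypothetical chronological and brain-predicted ages (years) for five subjects;
# these are illustrative values, not study data.
chronological = [14.0, 22.0, 31.0, 19.0, 27.0]
predicted     = [16.5, 23.1, 32.8, 22.0, 28.4]

# brain-PAD = predicted minus chronological age; positive => "older" brain
brain_pad = [p - c for p, c in zip(predicted, chronological)]
mean_pad = sum(brain_pad) / len(brain_pad)
# MAE measures model accuracy against true age
mae = sum(abs(d) for d in brain_pad) / len(brain_pad)
print(round(mean_pad, 2), round(mae, 2))
```

A cohort-level positive mean brain-PAD, as in the acAN group (+2.25 years), is what the abstract interprets as advanced brain aging.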

Prediction of functional outcome after traumatic brain injury: a narrative review.

Iaquaniello C, Scordo E, Robba C

pubmed logopapers · Jun 13 2025
To synthesize current evidence on prognostic factors, tools, and strategies influencing functional outcomes in patients with traumatic brain injury (TBI), with a focus on the acute and postacute phases of care. Key early predictors such as Glasgow Coma Scale (GCS) scores, pupillary reactivity, and computed tomography (CT) imaging findings remain fundamental in guiding clinical decision-making. Prognostic models like IMPACT and CRASH enhance early risk stratification, while outcome measures such as the Glasgow Outcome Scale-Extended (GOS-E) provide structured long-term assessments. Despite their utility, heterogeneity in assessment approaches and treatment protocols continues to limit consistency in outcome predictions. Recent advancements highlight the value of fluid biomarkers like neurofilament light chain (NFL) and glial fibrillary acidic protein (GFAP), which offer promising avenues for improved accuracy. Additionally, artificial intelligence models are emerging as powerful tools to integrate complex datasets and refine individualized outcome forecasting. Neurological prognostication after TBI is evolving through the integration of clinical, radiological, molecular, and computational data. Although standardized models and scales remain foundational, emerging technologies and therapies - such as biomarkers, machine learning, and neurostimulants - represent a shift toward more personalized and actionable strategies to optimize recovery and long-term function.

Investigating the Role of Area Deprivation Index in Observed Differences in CT-Based Body Composition by Race.

Chisholm M, Jabal MS, He H, Wang Y, Kalisz K, Lafata KJ, Calabrese E, Bashir MR, Tailor TD, Magudia K

pubmed logopapers · Jun 13 2025
Differences in CT-based body composition (BC) have been observed by race. We sought to investigate whether indices reporting census block group-level disadvantage, the area deprivation index (ADI) and social vulnerability index (SVI), age, sex, and/or clinical factors could explain race-based differences in body composition. The first abdominal CT exams for patients in Durham County at a single institution in 2020 were analyzed using a fully automated, open-source deep learning BC analysis workflow to generate cross-sectional areas for skeletal muscle (SMA), subcutaneous fat (SFA), and visceral fat (VFA). Patient-level demographic and clinical data were gathered from the electronic health record. State ADI ranking and SVI values were linked to each patient. Univariable and multivariable models were created to assess the association of demographics, ADI, SVI, and other relevant clinical factors with SMA, SFA, and VFA. 5,311 patients (mean age, 57.4 years; 55.5% female; 46.5% Black, 39.5% White, 10.3% Hispanic) were included. At univariable analysis, race, ADI, SVI, sex, BMI, weight, and height were significantly associated with all body compartments (SMA, SFA, and VFA; all p<0.05). At multivariable analyses adjusted for patient characteristics and clinical comorbidities, race remained a significant predictor, whereas ADI did not. SVI remained significant in the multivariable model for SMA.
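The multivariable adjustment described above can be sketched with an ordinary least-squares fit on simulated data — all effect sizes, variable coding, and sample values below are invented for illustration and do not come from the study or its statistical software:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictors: ADI percentile, age, sex (1 = female), BMI
adi = rng.uniform(1, 100, n)
age = rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n)
bmi = rng.normal(27, 5, n)
# Simulated skeletal-muscle area (SMA, cm^2): driven by sex, BMI, and age,
# with ADI having no true effect -- mirroring the abstract's multivariable result.
sma = 120 - 15 * sex + 2.0 * bmi - 0.3 * (age - 50) + rng.normal(0, 8, n)

# Multivariable fit: coefficient for each predictor adjusted for the others
X = np.column_stack([np.ones(n), adi, age, sex, bmi])
beta, *_ = np.linalg.lstsq(X, sma, rcond=None)
print(dict(zip(["intercept", "ADI", "age", "sex", "BMI"], beta.round(2))))
```

In this simulation the fitted ADI coefficient hovers near zero once sex, age, and BMI are in the model, which is the pattern the abstract reports for ADI at multivariable analysis.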

Recent Advances in sMRI and Artificial Intelligence for Presurgical Planning in Focal Cortical Dysplasia: A Systematic Review.

Mahmoudi A, Alizadeh A, Ganji Z, Zare H

pubmed logopapers · Jun 13 2025
Focal Cortical Dysplasia (FCD) is a leading cause of drug-resistant epilepsy, particularly in children and young adults, necessitating precise presurgical planning. Traditional structural MRI often fails to detect subtle FCD lesions, especially in MRI-negative cases. Recent advancements in Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), have the potential to enhance the sensitivity and specificity of FCD detection. This systematic review, following PRISMA guidelines, searched PubMed, Embase, Scopus, Web of Science, and Science Direct for articles published from 2020 onwards, using keywords related to "Focal Cortical Dysplasia," "MRI," and "Artificial Intelligence/Machine Learning/Deep Learning." Included were original studies employing AI and structural MRI (sMRI) for FCD detection in humans, reporting quantitative performance metrics, and published in English. Data extraction was performed independently by two reviewers, with discrepancies resolved by a third. Of 88 full-text articles reviewed, 27 met the inclusion criteria. The included studies demonstrated that AI significantly improved FCD detection, achieving sensitivity up to 97.1% and specificity up to 84.3% across various MRI sequences, including MPRAGE, MP2RAGE, and FLAIR. AI models, particularly deep learning models, matched or surpassed human radiologist performance, with combined AI-human expertise reaching detection rates of up to 87%. The studies emphasized the importance of advanced MRI sequences and multimodal MRI for enhanced detection, though model performance varied with FCD type and training datasets. Recent advances in sMRI and AI, especially deep learning, offer substantial potential to improve FCD detection, leading to better presurgical planning and patient outcomes in drug-resistant epilepsy. These methods enable faster, more accurate, and automated FCD detection, potentially enhancing surgical decision-making.
Further clinical validation and optimization of AI algorithms across diverse datasets are essential for broader clinical translation.
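The sensitivity and specificity figures reported by the reviewed studies come straight from detection counts. The counts below are hypothetical, chosen only to reproduce the review's headline operating point:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion matrix,
    the two metrics the reviewed FCD-detection studies report."""
    sensitivity = tp / (tp + fn)   # detected lesions / all lesions
    specificity = tn / (tn + fp)   # correct negatives / all non-lesions
    return sensitivity, specificity

# Hypothetical counts approximating the review's best reported operating point
sens, spec = sens_spec(tp=33, fn=1, tn=59, fp=11)
print(round(sens, 3), round(spec, 3))  # ~0.971 and ~0.843
```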

Impact of Deep Learning-Based Image Conversion on Fully Automated Coronary Artery Calcium Scoring Using Thin-Slice, Sharp-Kernel, Non-Gated, Low-Dose Chest CT Scans: A Multi-Center Study.

Kim C, Hong S, Choi H, Yoo WS, Kim JY, Chang S, Park CH, Hong SJ, Yang DH, Yong HS, van Assen M, De Cecco CN, Suh YJ

pubmed logopapers · Jun 13 2025
To evaluate the impact of deep learning-based image conversion on the accuracy of automated coronary artery calcium quantification using thin-slice, sharp-kernel, non-gated, low-dose chest computed tomography (LDCT) images collected from multiple institutions. A total of 225 pairs of LDCT and calcium scoring CT (CSCT) images scanned at 120 kVp and acquired from the same patient within a 6-month interval were retrospectively collected from four institutions. Image conversion was performed on LDCT images using proprietary software to simulate conventional CSCT. This process included 1) deep learning-based kernel conversion of low-dose, high-frequency, sharp kernels to simulate standard-dose, low-frequency kernels, and 2) thickness conversion using the raysum method to convert 1-mm or 1.25-mm thickness images to 3-mm thickness. Automated Agatston scoring was conducted on the LDCT scans before (LDCT-Org<sub>auto</sub>) and after image conversion (LDCT-CONV<sub>auto</sub>). Manual scoring was performed on the CSCT images (CSCT<sub>manual</sub>) and used as the reference standard. The accuracy of automated Agatston scores and risk severity categorization on the LDCT scans was analyzed against the reference standard using Bland-Altman analysis, the concordance correlation coefficient (CCC), and the weighted kappa (κ) statistic. Compared with CSCT<sub>manual</sub>, LDCT-CONV<sub>auto</sub> showed a smaller bias in Agatston score than LDCT-Org<sub>auto</sub> (-3.45 vs. 206.7) and a higher CCC (0.881 [95% confidence interval {CI}, 0.750-0.960] vs. 0.269 [95% CI, 0.129-0.430]). In terms of risk category assignment, LDCT-Org<sub>auto</sub> exhibited poor agreement with CSCT<sub>manual</sub> (weighted κ = 0.115 [95% CI, 0.082-0.154]), whereas LDCT-CONV<sub>auto</sub> achieved good agreement (weighted κ = 0.792 [95% CI, 0.731-0.847]).
Deep learning-based conversion of LDCT images originally obtained with thin slices and a sharp kernel can enhance the accuracy of automated coronary artery calcium score measurement using the images.
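Both the automated and manual scores being compared rest on the same standard Agatston recipe: per-lesion area times a density factor derived from peak attenuation, summed over lesions and then binned into severity categories. A sketch of that recipe (lesion areas and attenuations are invented, and the bins shown are the conventional ones, which may differ from the study's exact categories):

```python
def density_weight(peak_hu):
    """Agatston density factor from a lesion's peak attenuation (HU);
    voxels below the 130-HU calcium threshold score zero."""
    if peak_hu < 130: return 0
    if peak_hu < 200: return 1
    if peak_hu < 300: return 2
    if peak_hu < 400: return 3
    return 4

def lesion_score(area_mm2, peak_hu):
    """Per-lesion Agatston contribution: area times density factor."""
    return area_mm2 * density_weight(peak_hu)

def risk_category(total_score):
    """Conventional severity bins: 0, 1-10, 11-100, 101-400, >400."""
    for cat, upper in enumerate([0, 10, 100, 400]):
        if total_score <= upper:
            return cat
    return 4

# Two hypothetical lesions on one scan
total = lesion_score(4.5, 260) + lesion_score(2.0, 450)  # 9.0 + 8.0
print(total, risk_category(total))  # 17.0 falls in the 11-100 bin
```

The weighted kappa reported above measures agreement on exactly these category assignments, penalizing larger category jumps more heavily.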

OneTouch Automated Photoacoustic and Ultrasound Imaging of Breast in Standing Pose.

Zhang H, Zheng E, Zheng W, Huang C, Xi Y, Cheng Y, Yu S, Chakraborty S, Bonaccio E, Takabe K, Fan XC, Xu W, Xia J

pubmed logopapers · Jun 12 2025
We developed an automated photoacoustic and ultrasound breast tomography system that images the patient in the standing pose. The system, named OneTouch-PAT, utilized linear transducer arrays with optical-acoustic combiners for effective dual-modal imaging. During scanning, subjects only need to gently attach their breasts to the imaging window, and co-registered three-dimensional ultrasonic and photoacoustic images of the breast can be obtained within one minute. Our system has a large field of view of 17 cm by 15 cm and achieves an imaging depth of 3 cm with sub-millimeter resolution. A three-dimensional deep-learning network was also developed to further improve the image quality by improving the 3D resolution, enhancing vasculature, eliminating skin signals, and reducing noise. The performance of the system was tested on four healthy subjects and 61 patients with breast cancer. Our results indicate that the ultrasound structural information can be combined with the photoacoustic vascular information for better tissue characterization. Representative cases from different molecular subtypes have indicated different photoacoustic and ultrasound features that could potentially be used for imaging-based cancer classification. Statistical analysis among all patients indicates that the regional photoacoustic intensity and vessel branching points are indicators of breast malignancy. These promising results suggest that our system could significantly enhance breast cancer diagnosis and classification.

Radiomics and machine learning for predicting valve vegetation in infective endocarditis: a comparative analysis of mitral and aortic valves using TEE imaging.

Esmaely F, Moradnejad P, Boudagh S, Bitarafan-Rajabi A

pubmed logopapers · Jun 12 2025
Detecting valve vegetation in infective endocarditis (IE) poses challenges, particularly with mechanical valves, because acoustic shadowing artefacts often obscure critical diagnostic details. This study aimed to classify native and prosthetic mitral and aortic valves with and without vegetation using radiomics and machine learning. A total of 286 TEE scans from suspected IE cases (August 2023-November 2024) were analysed, alongside 113 rejected-IE cases as controls. Frames were preprocessed using the Extreme Total Variation Bilateral (ETVB) filter, and radiomics features were extracted for classification using machine learning models, including Random Forest, Decision Tree, SVM, k-NN, and XGBoost. To evaluate the models, AUC, ROC curves, and decision curve analysis (DCA) were used. For native mitral valves, SVM achieved the highest performance with an AUC of 0.88, a sensitivity of 0.91, and a specificity of 0.87. Mechanical mitral valves also showed optimal results with SVM (AUC: 0.85, sensitivity: 0.73, specificity: 0.92). Native aortic valves were best classified using SVM (AUC: 0.86, sensitivity: 0.87, specificity: 0.86), while Random Forest excelled for mechanical aortic valves (AUC: 0.81, sensitivity: 0.89, specificity: 0.78). These findings suggest that combining the models with the clinician's report may enhance the diagnostic accuracy of TEE, particularly in the absence of advanced imaging methods such as PET/CT.
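The AUC values quoted for each valve type reduce to a rank statistic: the probability that a randomly chosen vegetation case scores higher than a randomly chosen control. A minimal sketch with invented model scores:

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive outranks the negative; ties count half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores, not study data
vegetation = [0.91, 0.78, 0.66, 0.84, 0.59]   # vegetation present
controls   = [0.32, 0.45, 0.71, 0.28, 0.51]   # rejected-IE controls
print(auc(vegetation, controls))
```

Decision curve analysis, also used here, complements AUC by expressing net benefit across threshold probabilities rather than pure discrimination.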

High visceral-to-subcutaneous fat area ratio is an unfavorable prognostic indicator in patients with uterine sarcoma.

Kurokawa M, Gonoi W, Hanaoka S, Kurokawa R, Uehara S, Kato M, Suzuki M, Toyohara Y, Takaki Y, Kusakabe M, Kino N, Tsukazaki T, Unno T, Sone K, Abe O

pubmed logopapers · Jun 12 2025
Uterine sarcoma is a rare disease whose association with body composition parameters is poorly understood. This study explored the impact of body composition parameters on overall survival in patients with uterine sarcoma. This multicenter study included 52 patients with uterine sarcomas treated at three Japanese hospitals between 2007 and 2023. A semi-automatic segmentation program based on deep learning analyzed transaxial CT images at the L3 vertebral level, calculating the following body composition parameters: area indices (areas divided by height squared) of skeletal muscle and of visceral and subcutaneous adipose tissue (SMI, VATI, and SATI, respectively); skeletal muscle density; and the visceral-to-subcutaneous fat area ratio (VSR). The optimal cutoff value for each parameter was determined using maximally selected rank statistics with several p-value approximations. The effects of body composition parameters and clinical data on overall survival (OS) and cancer-specific survival (CSS) were analyzed. Univariate Cox proportional hazards regression analysis revealed that advanced stage (III-IV) and high VSR were unfavorable prognostic factors for both OS and CSS. Multivariate Cox proportional hazards regression analysis revealed that advanced stage (III-IV) (hazard ratios (HRs), 4.67 for OS and 4.36 for CSS, p < 0.01) and high VSR (HRs, 9.36 for OS and 8.22 for CSS, p < 0.001) were poor prognostic factors for both OS and CSS. Added value was observed when VSR was incorporated into the OS and CSS prediction models. Increased VSR and tumor stage are significant predictors of poor overall survival in patients with uterine sarcoma.
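Maximally selected rank statistics, used above to pick each cutoff, can be illustrated with a deliberately simplified sketch: scan candidate VSR cutoffs and keep the one that maximizes a standardized rank-sum statistic between the two resulting groups. This toy version ignores censoring (the study's version operates on censored survival data), and all areas and survival times below are invented:

```python
import math

def standardized_rank_sum(outcome, group_mask):
    """Standardized rank-sum z for outcomes in the flagged group
    (assumes no tied outcome values)."""
    ranks = {v: i + 1 for i, v in enumerate(sorted(outcome))}
    n, n1 = len(outcome), sum(group_mask)
    w = sum(ranks[v] for v, g in zip(outcome, group_mask) if g)
    mu = n1 * (n + 1) / 2
    sigma = math.sqrt(n1 * (n - n1) * (n + 1) / 12)
    return (w - mu) / sigma

# Hypothetical L3-level fat areas (cm^2) and uncensored survival (months)
vat      = [180.0, 95.0, 160.0, 60.0, 210.0, 80.0, 140.0, 70.0]
sat      = [120.0, 150.0, 110.0, 170.0, 100.0, 160.0, 115.0, 155.0]
survival = [14.0, 60.0, 20.0, 72.0, 9.0, 55.0, 25.0, 66.0]

vsr = [v / s for v, s in zip(vat, sat)]
# Maximally selected statistic: try each candidate cutoff, keep the most extreme z
candidates = sorted(vsr)[1:-1]   # skip extremes to avoid one-member groups
best = max(candidates,
           key=lambda c: abs(standardized_rank_sum(survival, [x > c for x in vsr])))
print(round(best, 2))
```

Because the cutoff is chosen to maximize the statistic, a naive p-value is inflated; that is why the study applies dedicated p-value approximations for maximally selected statistics.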

Radiogenomic correlation of hypoxia-related biomarkers in clear cell renal cell carcinoma.

Shao Y, Cen HS, Dhananjay A, Pawan SJ, Lei X, Gill IS, D'souza A, Duddalwar VA

pubmed logopapers · Jun 12 2025
This study aimed to evaluate radiomic models' ability to predict hypoxia-related biomarker expression in clear cell renal cell carcinoma (ccRCC). Clinical and molecular data from 190 patients were extracted from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma dataset, and corresponding CT imaging data were manually segmented from The Cancer Imaging Archive. A panel of 2,824 radiomic features was analyzed, and robust features with high interscanner reproducibility were selected. Gene expression data for 13 hypoxia-related biomarkers were stratified by tumor grade (1/2 vs. 3/4) and stage (I/II vs. III/IV) and analyzed using the Wilcoxon rank-sum test. Machine learning modeling was conducted using the High-Performance Random Forest (RF) procedure in SAS Enterprise Miner 15.1, with significance set at P < 0.05. Descriptive univariate analysis revealed significantly lower expression of several biomarkers in high-grade and late-stage tumors, with KLF6 showing the most notable decrease. The RF model effectively predicted the expression of KLF6, ETS1, and BCL2, as well as PLOD2 and PPARGC1A underexpression. Stratified performance assessment showed improved predictive ability for RORA, BCL2, and KLF6 in high-grade tumors and for ETS1 across grades, with no significant performance difference across grade or stage. The RF model demonstrated modest but significant associations between texture metrics derived from clinical CT scans, such as GLDM and GLCM, and key hypoxia-related biomarkers including KLF6, BCL2, ETS1, and PLOD2. These findings suggest that radiomic analysis could support ccRCC risk stratification and personalized treatment planning by providing non-invasive insights into tumor biology.
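The Wilcoxon rank-sum comparison used for the grade and stage stratification can be sketched in a few lines (normal approximation, no ties assumed; the expression values below are invented, not TCGA data):

```python
import math

def rank_sum(x, y):
    """Wilcoxon rank-sum statistic W for sample x against sample y,
    with a normal-approximation z-score (assumes no tied values)."""
    pooled = sorted(x + y)
    w = sum(pooled.index(v) + 1 for v in x)      # sum of x's ranks in the pool
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2                  # mean of W under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return w, (w - mu) / sigma

low_grade  = [5.2, 4.8, 6.1, 5.5, 4.9]   # hypothetical KLF6 expression, grade 1/2
high_grade = [3.1, 2.9, 3.8, 3.4, 2.7]   # hypothetical KLF6 expression, grade 3/4
w, z = rank_sum(low_grade, high_grade)
print(w, round(z, 2))
```

A large positive z here corresponds to the abstract's finding of lower biomarker expression in high-grade tumors; the study reports such a decrease as most pronounced for KLF6.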