Structural and metabolic topological alterations associated with butylphthalide treatment in mild cognitive impairment: Data from a randomized, double-blind, placebo-controlled trial.

Han X, Gong S, Gong J, Wang P, Li R, Chen R, Xu C, Sun W, Li S, Chen Y, Yang Y, Luan H, Wen B, Guo J, Lv S, Wei C

PubMed · Jun 1, 2025
Effective intervention for mild cognitive impairment (MCI) is key to preventing dementia. As a neuroprotective agent, butylphthalide has the potential to treat MCI due to Alzheimer's disease (AD). However, the pharmacological mechanism of butylphthalide is unclear from a brain-network perspective. We therefore investigated the multimodal brain network changes associated with butylphthalide treatment in MCI due to AD. A total of 270 patients with MCI due to AD were randomized 1:1 to butylphthalide or placebo for 1 year. Effective treatment was defined as a decrease of more than 2.5 points on the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-cog). Brain networks were constructed from T1-weighted magnetic resonance imaging and fluorodeoxyglucose positron emission tomography. A support vector machine was applied to develop predictive models. Both treatment (drug vs. placebo)-by-time and efficacy (effective vs. ineffective)-by-time interactions were detected on several overlapping structural network metrics. Simple-effects analyses revealed significantly increased global efficiency of the structural network under both butylphthalide treatment and effective treatment. Among the overlapping metrics, increased degree centrality of the left paracentral lobule was significantly related to poorer cognitive improvement. The predictive model based on baseline multimodal network metrics achieved high accuracy (88.93%) in predicting butylphthalide's efficacy. Butylphthalide may restore abnormal organization in the structural networks of patients with MCI due to AD, and baseline network metrics could serve as predictive markers of its therapeutic efficacy. This study was registered in the Chinese Clinical Trial Registry (Registration Number: ChiCTR1800018362; Registration Date: 2018-09-13).
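
As a rough illustration of the predictive-modeling step described above, the sketch below fits a support vector machine to per-patient baseline network metrics; the feature set, sample size, and labels are random placeholders, not the trial's data.

```python
# Sketch: SVM classifier predicting treatment efficacy from baseline
# multimodal network metrics, in the spirit of the study's approach.
# Features and labels are illustrative placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(135, 12))    # baseline structural/metabolic network metrics
y = rng.integers(0, 2, size=135)  # 1 = effective (ADAS-cog decrease > 2.5), 0 = not

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```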

Predicting hemorrhagic transformation in acute ischemic stroke: a systematic review, meta-analysis, and methodological quality assessment of CT/MRI-based deep learning and radiomics models.

Salimi M, Vadipour P, Bahadori AR, Houshi S, Mirshamsi A, Fatemian H

PubMed · Jun 1, 2025
Acute ischemic stroke (AIS) is a major cause of mortality and morbidity, with hemorrhagic transformation (HT) as a severe complication. Accurate prediction of HT is essential for optimizing treatment strategies. This review assesses the accuracy and clinical utility of deep learning (DL) and radiomics models for predicting HT from imaging in AIS patients. A literature search was conducted across five databases (PubMed, Scopus, Web of Science, Embase, IEEE) up to January 23, 2025. Studies involving DL or radiomics-based machine learning models for predicting HT in AIS patients were included. Data from training, validation, and clinical-combined models were extracted and analyzed separately. Pooled sensitivity, specificity, and AUC were calculated with a random-effects bivariate model. Study quality was assessed using the Methodological Radiomics Score (METRICS) and the QUADAS-2 tool. Sixteen studies comprising 3,083 participants were included in the meta-analysis. The pooled AUC for training cohorts was 0.87, with sensitivity 0.80 and specificity 0.85. For validation cohorts, the AUC was 0.87, sensitivity 0.81, and specificity 0.86. Clinical-combined models showed an AUC of 0.93, sensitivity 0.84, and specificity 0.89. Moderate to severe heterogeneity was noted and addressed. Deep learning models outperformed radiomics models, and clinical-combined models outperformed both deep learning-only and radiomics-only models. The average METRICS score was 62.85%. No publication bias was detected. DL and radiomics models show great potential for predicting HT in AIS patients. However, addressing methodological issues, such as inconsistent reference standards and limited external validation, is essential for the clinical implementation of these models.
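
For readers unfamiliar with the pooling step, here is a minimal sketch of DerSimonian-Laird random-effects pooling of logit-transformed sensitivities, a simplified univariate stand-in for the bivariate model used in the review (the per-study counts are invented).

```python
# Sketch: DerSimonian-Laird random-effects pooling of logit(sensitivity).
# A univariate simplification of the bivariate model; counts are invented.
import numpy as np

tp = np.array([40, 55, 30]); fn = np.array([10, 12, 8])  # per-study counts
sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                      # variance of logit(sensitivity)

w = 1 / var                                # fixed-effect weights
pooled_fe = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - pooled_fe) ** 2)   # Cochran's Q
df = len(sens) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)                    # random-effects weights
pooled = np.sum(w_re * logit) / np.sum(w_re)
print(f"pooled sensitivity: {1 / (1 + np.exp(-pooled)):.3f}, tau^2 = {tau2:.3f}")
```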

TDSF-Net: Tensor Decomposition-Based Subspace Fusion Network for Multimodal Medical Image Classification.

Zhang Y, Xu G, Zhao M, Wang H, Shi F, Chen S

PubMed · Jun 1, 2025
Multimodal data bring complementary information to deep learning-based medical image classification models. However, fusion methods that simply concatenate features or images barely consider the correlations or complementarities among modalities and easily suffer exponential growth in dimensionality and computational complexity as the number of modalities increases. This article therefore proposes a subspace fusion network with tensor decomposition (TD) to enhance multimodal medical image classification. We first introduce a Tucker low-rank TD module that maps the high-dimensional feature tensor to a low-rank subspace, reducing the redundancy caused by multimodal data and high-dimensional features. A cross-tensor attention mechanism then fuses features from the subspace into a high-dimensional tensor, enhancing the representation ability of the extracted features and modeling the interactions among components in the subspace. Extensive comparisons with state-of-the-art (SOTA) methods on one self-established and three public multimodal medical image datasets verify the effectiveness and generalization ability of the proposed method. The code is available at https://github.com/1zhang-yi/TDSFNet.
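
The Tucker step at the heart of such a TD module can be illustrated in a few lines of tensorly; the tensor shape and ranks below are arbitrary choices, not the paper's configuration.

```python
# Sketch: Tucker low-rank decomposition of a multimodal feature tensor,
# illustrating the dimensionality reduction behind a TD module.
# Shapes and ranks are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

X = tl.tensor(np.random.rand(3, 64, 64))      # modalities x feature map
core, factors = tucker(X, rank=[2, 16, 16])   # project to a low-rank subspace

X_hat = tl.tucker_to_tensor((core, factors))  # reconstruction from the subspace
err = tl.norm(X - X_hat) / tl.norm(X)
print(f"core shape: {core.shape}, relative reconstruction error: {err:.3f}")
```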

HResFormer: Hybrid Residual Transformer for Volumetric Medical Image Segmentation.

Ren S, Li X

PubMed · Jun 1, 2025
Vision Transformers show great promise in medical image segmentation owing to their ability to learn long-range dependencies. For segmentation of 3-D data such as computed tomography (CT), existing methods can be broadly classified as 2-D-based or 3-D-based. A key limitation of 2-D-based methods is that inter-slice information is ignored, while 3-D-based methods suffer high computational cost and memory consumption, limiting their feature representation of intra-slice information. During clinical examination, radiologists primarily use the axial plane and then routinely review both axial and coronal planes to form a 3-D understanding of anatomy. Motivated by this, our key insight is to design a hybrid model that first learns fine-grained intra-slice information and then builds a 3-D understanding of anatomy by incorporating inter-slice information. We present a novel Hybrid Residual TransFormer (HResFormer) for 3-D medical image segmentation. Building on standard 2-D and 3-D Transformer backbones, HResFormer involves two novel key designs: 1) a Hybrid Local-Global fusion Module (HLGM) that effectively and adaptively fuses intra-slice information from the 2-D Transformer and inter-slice information from 3-D volumes for the 3-D Transformer, providing local fine-grained and global long-range representations, and 2) residual learning of the hybrid model, which effectively leverages intra-slice and inter-slice information for a better 3-D understanding of anatomy. Experiments show that HResFormer outperforms prior art on widely used medical image segmentation benchmarks. This article sheds light on an important but neglected way to design Transformers for 3-D medical image segmentation.
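
A toy sketch of the hybrid idea, assuming a per-slice 2-D path and a volumetric 3-D path fused with a residual connection; this simplification uses plain convolutions in place of Transformer blocks and is not HResFormer's actual architecture.

```python
# Sketch: a toy hybrid 2D/3D fusion block with a residual connection,
# echoing the combination of intra-slice (2D) and inter-slice (3D)
# features. A simplification, not HResFormer.
import torch
import torch.nn as nn

class ToyHybridBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.enc2d = nn.Conv2d(ch, ch, 3, padding=1)  # per-slice (intra-slice) path
        self.enc3d = nn.Conv3d(ch, ch, 3, padding=1)  # volumetric (inter-slice) path
        self.fuse = nn.Conv3d(2 * ch, ch, 1)          # stand-in for the HLGM fusion

    def forward(self, x):                             # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        slices = x.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        f2d = self.enc2d(slices).reshape(b, d, c, h, w).permute(0, 2, 1, 3, 4)
        f3d = self.enc3d(x)
        return x + self.fuse(torch.cat([f2d, f3d], dim=1))  # residual learning

out = ToyHybridBlock(8)(torch.randn(1, 8, 16, 32, 32))
print(out.shape)  # torch.Size([1, 8, 16, 32, 32])
```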

External validation and performance analysis of a deep learning-based model for the detection of intracranial hemorrhage.

Nada A, Sayed AA, Hamouda M, Tantawi M, Khan A, Alt A, Hassanein H, Sevim BC, Altes T, Gaballah A

PubMed · Jun 1, 2025
Purpose: We aimed to investigate the external validation and performance of an FDA-approved deep learning (DL) model in labeling intracranial hemorrhage (ICH) cases on a real-world, heterogeneous clinical dataset. We also evaluated how patients' risk factors influenced the model's performance and gathered satisfaction feedback from radiologists of varying ranks. Methods: This prospective, IRB-approved study included 5,600 non-contrast CT scans of the head from various clinical settings, that is, emergency, inpatient, and outpatient units. Patients' risk factors were collected, and their impact on the DL model's performance was tested with univariate and multivariate regression analyses. The DL model's output was compared against the radiologists' interpretation for the presence or absence of ICH, with subsequent classification into ICH subcategories. Key metrics, including accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, were calculated, and the receiver operating characteristic (ROC) curve and area under the curve (AUC) were determined. Additionally, radiologists of varying ranks completed a questionnaire assessing their experience with the model. Results: The model exhibited outstanding performance, achieving a sensitivity of 89% and a specificity of 96%. Additional metrics, including positive predictive value (82%), negative predictive value (97%), and overall accuracy (94%), underscore its robust capabilities. The area under the ROC curve reached 0.954. Multivariate logistic regression revealed statistical significance for age, sex, history of trauma, operative intervention, hypertension, and smoking. Conclusion: Our study highlights the satisfactory performance of the DL model on a diverse real-world dataset, garnering positive feedback from radiology trainees.
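
All of the reported metrics follow directly from a confusion matrix against the radiologist reference standard; here is a minimal sketch with simulated labels and scores standing in for the study's data.

```python
# Sketch: computing sensitivity, specificity, PPV, NPV, accuracy, and AUC
# from binary model outputs against a reference standard.
# Labels and scores below are random placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)          # radiologist ICH labels
scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 500), 0, 1)
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.2f}")
print(f"specificity: {tn / (tn + fp):.2f}")
print(f"PPV: {tp / (tp + fp):.2f}, NPV: {tn / (tn + fn):.2f}")
print(f"accuracy: {(tp + tn) / len(y_true):.2f}, AUC: {roc_auc_score(y_true, scores):.3f}")
```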

RS-MAE: Region-State Masked Autoencoder for Neuropsychiatric Disorder Classifications Based on Resting-State fMRI.

Ma H, Xu Y, Tian L

PubMed · Jun 1, 2025
Dynamic functional connectivity (DFC) extracted from resting-state functional magnetic resonance imaging (fMRI) has been widely used for neuropsychiatric disorder classification. However, serious information redundancy within DFC matrices can significantly undermine the performance of classification models based on them. Moreover, traditional deep models do not adapt well to connectivity-like data, and insufficient training samples further hinder their effective training. In this study, we propose a novel region-state masked autoencoder (RS-MAE) for proficient representation learning on DFC matrices and, ultimately, neuropsychiatric disorder classification based on fMRI. Three strategies address the aforementioned limitations. First, a masked autoencoder (MAE) is introduced to reduce redundancy within DFC matrices while learning effective representations of human brain function. Second, region-state (RS) patch embedding is proposed to replace the space-time patch embedding of video MAE, adapting it to DFC matrices, in which only topological locality, rather than spatial locality, exists. Third, random state concatenation (RSC) is introduced as a DFC matrix augmentation approach to alleviate training sample insufficiency. Neuropsychiatric disorder classification is performed by fine-tuning the pretrained encoder of RS-MAE. Evaluated on four publicly available datasets, the proposed RS-MAE achieved accuracies of 76.32%, 77.25%, 88.87%, and 76.53% on the attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), Alzheimer's disease (AD), and schizophrenia (SCZ) classification tasks, respectively. These results demonstrate the efficacy of RS-MAE as a proficient deep learning model for neuropsychiatric disorder classification.
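
The DFC matrices such a model consumes are typically built with a sliding window over ROI time series; a minimal sketch follows, with window length, stride, and ROI count as illustrative assumptions.

```python
# Sketch: sliding-window dynamic functional connectivity (DFC) matrices
# from ROI time series, the input representation RS-MAE operates on.
# Dimensions and window parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
ts = rng.normal(size=(200, 90))          # time points x brain regions (ROIs)
win, stride = 50, 10

dfc = np.stack([
    np.corrcoef(ts[s:s + win].T)         # one ROI-by-ROI correlation matrix
    for s in range(0, ts.shape[0] - win + 1, stride)
])                                       # shape: (states, ROIs, ROIs)
print(dfc.shape)                         # (16, 90, 90)
```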

Comparison of Sarcopenia Assessment in Liver Transplant Recipients by Computed Tomography Freehand Region-of-Interest versus an Automated Deep Learning System.

Miller W, Fate K, Fisher J, Thul J, Ko Y, Kim KW, Pruett T, Teigen L

PubMed · Jun 1, 2025
Sarcopenia, the loss of muscle quantity and quality, has been associated with poor clinical outcomes after liver transplantation, such as infection, increased length of stay, and increased mortality. Abdominal computed tomography (CT) scans are used to measure core musculature as an assessment of sarcopenia. Information on core body musculature can be extracted either by freehand region-of-interest (ROI) measurement or by machine learning algorithms that quantify total muscle within a given area. This study directly compares these two collection methods, leveraging the length-of-stay (LOS) outcomes previously found to be associated with freehand ROI measurements. We included 50 individuals who underwent liver transplantation at our center between January 1, 2016, and May 30, 2021, and had a non-contrast abdominal CT scan within 6 months of surgery. CT-derived skeletal muscle measures at the third lumbar vertebra (L3) were obtained using freehand ROI and an automated deep learning system. The freehand psoas measures, psoas area index (PAI) and mean Hounsfield units (mHU), correlated significantly with the automated system's total skeletal muscle measures at L3, skeletal muscle index (SMI) and skeletal muscle density (SMD), respectively (R² = 0.4221, p < 0.0001; R² = 0.6297, p < 0.0001). The automated model's SMI predicted about 20% of the variability in hospital LOS (R² = 0.2013), whereas PAI predicted only about 10% of the variability in total healthcare LOS (R² = 0.0919). In contrast, both the freehand mHU and the automated model's muscle density measures were associated with about 20% of the variability in inpatient LOS (R² = 0.2383 and 0.1810, respectively) and total healthcare LOS (R² = 0.2190 and 0.1947, respectively). Sarcopenia measurements represent an important risk-stratification tool for liver transplantation outcomes. For the association of muscle sarcopenia assessment with LOS, freehand measures perform similarly to automated deep learning measurements.
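
Both measurement approaches reduce to area and attenuation statistics over an L3 muscle mask; a minimal sketch, assuming a synthetic mask, synthetic HU values, and illustrative pixel spacing and patient height.

```python
# Sketch: deriving skeletal muscle index (SMI) and density (SMD) from an
# L3 segmentation mask. Mask, HU values, spacing, and height are synthetic.
import numpy as np

rng = np.random.default_rng(3)
hu = rng.normal(30, 20, size=(512, 512))  # CT slice in Hounsfield units
mask = np.zeros((512, 512), bool)
mask[200:300, 150:350] = True             # hypothetical muscle segmentation
pixel_area_cm2 = 0.08 ** 2                # assumes 0.8 mm pixel spacing
height_m = 1.75

muscle_area = mask.sum() * pixel_area_cm2 # cm^2 at L3
smi = muscle_area / height_m ** 2         # cm^2 / m^2
smd = hu[mask].mean()                     # mean HU within the mask
print(f"SMI: {smi:.1f} cm^2/m^2, SMD: {smd:.1f} HU")
```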

Measurement of adipose body composition using an artificial intelligence-based CT protocol and its association with severe acute pancreatitis in hospitalized patients.

Cortés P, Mistretta TA, Jackson B, Olson CG, Al Qady AM, Stancampiano FF, Korfiatis P, Klug JR, Harris DM, Dan Echols J, Carter RE, Ji B, Hardway HD, Wallace MB, Kumbhari V, Bi Y

PubMed · Jun 1, 2025
The clinical utility of body composition in predicting the severity of acute pancreatitis (AP) remains unclear. We aimed to measure body composition using artificial intelligence (AI) to predict severe AP in hospitalized patients. We performed a retrospective study of patients hospitalized with AP at three tertiary care centers in 2018. Patients with computed tomography (CT) imaging of the abdomen at admission were included. A fully automated and validated abdominal segmentation algorithm was used for body composition analysis. The primary outcome was severe AP, defined as persistent single- or multi-organ failure per the revised Atlanta classification. A total of 352 patients were included; severe AP occurred in 35 (9.9%). In multivariable analysis adjusting for male sex and first episode of AP, intermuscular adipose tissue (IMAT) was associated with severe AP (OR = 1.06 per 5 cm², p = 0.0207). Subcutaneous adipose tissue (SAT) area approached significance (OR = 1.05, p = 0.17). Neither visceral adipose tissue (VAT) nor skeletal muscle (SM) was associated with severe AP. In obese patients, higher SM was associated with severe AP in unadjusted analysis (86.7 vs. 75.1 and 70.3 cm² in moderate and mild AP, respectively; p = 0.009). In this multi-site retrospective study using AI to measure body composition, we found elevated IMAT to be associated with severe AP. Although SAT was not significantly associated with severe AP, it approached statistical significance. Neither VAT nor SM was significant. Further research in larger prospective studies may be beneficial.
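
The reported odds ratio per 5 cm² corresponds to rescaling a logistic-regression coefficient; a minimal sketch on simulated data, with the adjustment set taken from the abstract.

```python
# Sketch: multivariable logistic regression for severe AP with the IMAT
# odds ratio rescaled per 5 cm^2. Data are simulated; covariates follow
# the abstract's adjustment set (male sex, first episode of AP).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 352
df = pd.DataFrame({
    "imat_cm2": rng.gamma(4, 10, n),
    "male": rng.integers(0, 2, n),
    "first_episode": rng.integers(0, 2, n),
})
logit = -4 + 0.012 * df["imat_cm2"] + 0.3 * df["male"]
df["severe_ap"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["imat_cm2", "male", "first_episode"]])
fit = sm.Logit(df["severe_ap"], X).fit(disp=0)
or_per_5cm2 = np.exp(5 * fit.params["imat_cm2"])  # rescale OR to 5 cm^2 units
print(f"OR per 5 cm^2 IMAT: {or_per_5cm2:.2f}")
```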

Comparing Artificial Intelligence and Traditional Regression Models in Lung Cancer Risk Prediction Using a Systematic Review and Meta-Analysis.

Leonard S, Patel MA, Zhou Z, Le H, Mondal P, Adams SJ

PubMed · Jun 1, 2025
Accurately identifying individuals at high risk of lung cancer is critical to optimizing lung cancer screening with low-dose CT (LDCT). We sought to compare the performance of traditional regression models and artificial intelligence (AI)-based models in predicting future lung cancer risk. A systematic review and meta-analysis were conducted and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched MEDLINE, Embase, Scopus, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases for studies reporting the performance of AI or traditional regression models for predicting lung cancer risk. Two researchers screened articles, and a third resolved conflicts. Model characteristics and predictive performance metrics were extracted. Study quality was assessed using the Prediction model Risk of Bias Assessment Tool (PROBAST). A meta-analysis assessed model discrimination based on the area under the receiver operating characteristic curve (AUC). One hundred forty studies met the inclusion criteria, contributing 185 traditional and 64 AI-based models. Of these, 16 AI models and 65 traditional models had been externally validated. The pooled AUC of external validations of AI models was 0.82 (95% confidence interval [CI], 0.80-0.85), versus 0.73 (95% CI, 0.72-0.74) for traditional regression models. In subgroup analysis, AI models that included LDCT had a pooled AUC of 0.85 (95% CI, 0.82-0.88). Overall risk of bias was high for both AI and traditional models. AI-based models, particularly those using imaging data, show promise for improving lung cancer risk prediction over traditional regression models. Future research should focus on prospective validation of AI models and direct comparisons with traditional methods in diverse populations.
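
Pooled AUCs of this kind are commonly obtained by inverse-variance random-effects weighting, with each study's variance back-derived from its reported confidence interval; a minimal sketch with invented study values.

```python
# Sketch: inverse-variance random-effects pooling of external-validation
# AUCs, variances back-derived from reported 95% CIs. The three studies
# below are invented for illustration.
import numpy as np

auc = np.array([0.84, 0.80, 0.83])
ci = np.array([[0.79, 0.89], [0.74, 0.86], [0.78, 0.88]])
var = ((ci[:, 1] - ci[:, 0]) / (2 * 1.96)) ** 2   # SE from CI half-width

w = 1 / var
fe = np.sum(w * auc) / np.sum(w)
q = np.sum(w * (auc - fe) ** 2)
tau2 = max(0.0, (q - (len(auc) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)
pooled = np.sum(w_re * auc) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled AUC: {pooled:.2f} (95% CI {pooled - 1.96*se:.2f}-{pooled + 1.96*se:.2f})")
```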

CT-SDM: A Sampling Diffusion Model for Sparse-View CT Reconstruction Across Various Sampling Rates.

Yang L, Huang J, Yang G, Zhang D

PubMed · Jun 1, 2025
Sparse-view X-ray computed tomography has emerged as a contemporary technique for mitigating radiation dose. Because of the reduced number of projection views, traditional reconstruction methods can produce severe artifacts. Recently, deep learning methods have made promising progress in removing artifacts for sparse-view computed tomography (SVCT). However, given the limited generalization capability of deep learning models, current methods are usually trained at fixed sampling rates, which constrains the usability and flexibility of model deployment in real clinical settings. To address this issue, we propose an adaptive reconstruction method that achieves high-performance SVCT reconstruction at various sampling rates. Specifically, we design a novel imaging degradation operator in the proposed sampling diffusion model for SVCT (CT-SDM) to simulate the projection process in the sinogram domain. The CT-SDM gradually adds projection views to highly undersampled measurements to generate full-view sinograms. By choosing an appropriate starting point for diffusion inference, the proposed model can recover full-view sinograms from various sampling rates with a single trained model. Experiments on several datasets verify the effectiveness and robustness of our approach, demonstrating its superiority in reconstructing high-quality images from sparse-view CT scans across various sampling rates.
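
The degradation any SVCT method must invert is view subsampling in the sinogram domain; a generic sketch follows (not CT-SDM's learned operator), assuming evenly spaced views are retained at each sampling rate.

```python
# Sketch: view-subsampling degradation for sparse-view CT, keeping a
# fraction of evenly spaced projection angles and zeroing the rest.
# A generic illustration, not CT-SDM's operator.
import numpy as np

def subsample_sinogram(sino: np.ndarray, rate: float) -> np.ndarray:
    """Keep a fraction `rate` of evenly spaced projection views; zero the rest."""
    n_views = sino.shape[0]
    keep = np.linspace(0, n_views - 1, int(round(n_views * rate))).round().astype(int)
    out = np.zeros_like(sino)
    out[keep] = sino[keep]
    return out

full = np.random.rand(720, 512)      # 720 views x 512 detector bins
for rate in (0.125, 0.25, 0.5):      # one model must handle all of these rates
    sparse = subsample_sinogram(full, rate)
    print(rate, int((sparse.sum(axis=1) > 0).sum()), "views kept")
```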