
Nam SM, Hwang JH, Kim JM, Lee DI, Kim YH, Park SJ, Park CK, Dho YS, Kim MS

PubMed · Jul 23, 2025
Current methods for evaluating ventriculomegaly, particularly Evans' Index (EI), fail to accurately assess three-dimensional ventricular changes. We developed and validated an automated multi-class segmentation system for precise volumetric assessment, simultaneously segmenting five anatomical classes (ventricles, parenchyma, skull, skin, and hemorrhage) to support future augmented reality (AR)-guided external ventricular drainage (EVD) systems. Using the nnUNet architecture, we trained our model on 288 brain CT scans with diverse pathological conditions and validated it using internal (n=10), external (n=43), and public (n=192) datasets. Clinical validation involved 227 patients who underwent CSF drainage procedures. We compared automated volumetric measurements against traditional EI measurements and actual CSF drainage volumes in surgical cases. The model achieved exceptional performance with a mean Dice similarity coefficient of 93.0% across all five classes, demonstrating consistent performance across institutional and public datasets, with particularly robust ventricle segmentation (92.5%). Clinical validation revealed EI was the strongest single predictor of ventricular volume (adjusted R² = 0.430, p < 0.001), though influenced by age, sex, and diagnosis type. Most significantly, in EVD cases, automated volume differences showed remarkable correlation with actual CSF drainage amounts (β = 0.956, adjusted R² = 0.936, p < 0.001), validating the system's accuracy in measuring real CSF volume changes. Our comprehensive multi-class segmentation system offers a superior alternative to traditional measurements with potential for non-invasive CSF dynamics monitoring and AR-guided EVD placement.
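
For readers unfamiliar with the reported metric, here is a minimal sketch of a per-class Dice computation on integer-labelled segmentation volumes; the label IDs and array shapes are assumptions for illustration, not the authors' pipeline.

```python
# Generic per-class Dice sketch; label IDs for the five classes are assumptions.
import numpy as np

def dice_per_class(pred: np.ndarray, gt: np.ndarray, labels=(1, 2, 3, 4, 5)):
    """Return {label: Dice} for each class (e.g. ventricles, parenchyma, skull, skin, hemorrhage)."""
    scores = {}
    for lab in labels:
        p, g = (pred == lab), (gt == lab)
        denom = p.sum() + g.sum()
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores

# Example on small synthetic volumes: identical inputs give Dice = 1.0 for every class.
pred = np.random.default_rng(0).integers(0, 6, size=(32, 32, 32))
print(dice_per_class(pred, pred.copy()))
```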

Yang J, Siddique MA, Ullah H, Gilanie G, Por LY, Alshathri S, El-Shafai W, Aldossary H, Gadekallu TR

PubMed · Jul 23, 2025
Brain tumors pose a significant risk to human life, making accurate grading essential for effective treatment planning and improved survival rates. Magnetic Resonance Imaging (MRI) plays a crucial role in this process. The objective of this study was to develop an automated brain tumor grading system utilizing deep learning techniques. A dataset comprising 293 MRI scans from patients was obtained from the Department of Radiology at Bahawal Victoria Hospital in Bahawalpur, Pakistan. The proposed approach integrates a specialized Convolutional Neural Network (CNN) with pre-trained models to classify brain tumors into low-grade (LGT) and high-grade (HGT) categories with high accuracy. To assess the model's robustness, experiments were conducted using various methods: (1) raw MRI slices, (2) MRI segments containing only the tumor area, (3) feature-extracted slices derived from the original images through the proposed CNN architecture, and (4) feature-extracted slices from tumor area-only segmented images using the proposed CNN. The MRI slices and the features extracted from them were classified using machine learning models, including Support Vector Machine (SVM) and CNN architectures based on transfer learning, such as MobileNet, Inception V3, and ResNet-50. Additionally, a custom model was specifically developed for this research. The proposed model achieved an impressive peak accuracy of 99.45%, with classification accuracies of 99.56% for low-grade tumors and 99.49% for high-grade tumors, surpassing traditional methods. These results not only enhance the accuracy of brain tumor grading but also improve computational efficiency by reducing processing time and the number of iterations required.
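
As a rough illustration of the transfer-learning-plus-SVM pattern described here, the sketch below uses an ImageNet-pretrained ResNet-50 as a fixed feature extractor feeding an SVM; the input size, preprocessing, and synthetic labels are assumptions, not the authors' configuration.

```python
# Hypothetical sketch: pretrained CNN features + SVM classifier for two tumor grades.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVC

extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    # images: (N, 224, 224, 3) MRI slices replicated to three channels
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

# Synthetic stand-in data; real use would load labelled MRI slices.
X = np.random.default_rng(0).uniform(0, 255, size=(8, 224, 224, 3))
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 0 = low-grade, 1 = high-grade
clf = SVC(kernel="rbf").fit(extract_features(X), y)
print("training accuracy:", clf.score(extract_features(X), y))
```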

Zhao Y, Bo L, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

PubMed · Jul 23, 2025
To investigate the risk factors associated with postoperative cement displacement following percutaneous kyphoplasty (PKP) in patients with osteoporotic vertebral compression fractures (OVCF) and to develop predictive models for clinical risk assessment. This retrospective study included 198 patients with OVCF who underwent PKP. Imaging and clinical variables were collected. Multiple machine learning models, including logistic regression, L1- and L2-regularized logistic regression, support vector machine (SVM), decision tree, gradient boosting, and random forest, were developed to predict cement displacement. L1- and L2-regularized logistic regression models identified four key risk factors: kissing spine (L1: 1.11; L2: 0.91), incomplete anterior cortex (L1: -1.60; L2: -1.62), low vertebral body CT value (L1: -2.38; L2: -1.71), and large Cobb change (L1: 0.89; L2: 0.87). The SVM model achieved the best performance (accuracy: 0.983, precision: 0.875, recall: 1.000, F1-score: 0.933, specificity: 0.981, AUC: 0.997). Other models, including logistic regression, decision tree, gradient boosting, and random forest, also showed high performance but were slightly inferior to SVM. Key predictors of cement displacement were identified, and machine learning models were developed for risk assessment. These findings can assist clinicians in identifying high-risk patients, optimizing treatment strategies, and improving patient outcomes.
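
A small illustrative sketch of the L1/L2-regularized logistic regression step described above, using scikit-learn on synthetic stand-in data; the feature names follow the abstract, and the fitted coefficients will not match the study's.

```python
# Illustration only: fit L1- and L2-penalized logistic regression and inspect coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["kissing_spine", "incomplete_anterior_cortex", "vertebral_ct_value", "cobb_change"]
X = StandardScaler().fit_transform(rng.normal(size=(198, len(features))))  # synthetic stand-in
y = rng.integers(0, 2, size=198)                                           # synthetic displacement outcome

for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    model = LogisticRegression(penalty=penalty, solver=solver, max_iter=1000).fit(X, y)
    print(penalty, dict(zip(features, model.coef_[0].round(2))))
```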

Zehui Zhao, Laith Alzubaidi, Haider A. Alwzwazy, Jinglan Zhang, Yuantong Gu

arXiv preprint · Jul 23, 2025
In recent years, advanced deep learning architectures have shown strong performance in medical imaging tasks. However, the traditional centralized learning paradigm poses serious privacy risks as all data is collected and trained on a single server. To mitigate this challenge, decentralized approaches such as federated learning and swarm learning have emerged, allowing model training on local nodes while sharing only model weights. While these methods enhance privacy, they struggle with heterogeneous and imbalanced data and suffer from inefficiencies due to frequent communication and the aggregation of weights. More critically, the dynamic and complex nature of clinical environments demands scalable AI systems capable of continuously learning from diverse modalities and multilabels. Yet, both centralized and decentralized models are prone to catastrophic forgetting during system expansion, often requiring full model retraining to incorporate new data. To address these limitations, we propose VGS-ATD, a novel distributed learning framework. To validate VGS-ATD, we evaluated it in experiments spanning 30 datasets and 80 independent labels across distributed nodes, where it achieved an overall accuracy of 92.7%, outperforming centralized learning (84.9%) and swarm learning (72.99%), while federated learning failed under these conditions due to its high computational resource requirements. VGS-ATD also demonstrated strong scalability, with only a 1% drop in accuracy on existing nodes after expansion, compared to a 20% drop in centralized learning, highlighting its resilience to catastrophic forgetting. Additionally, it reduced computational costs by up to 50% relative to both centralized and swarm learning, confirming its superior efficiency and scalability.

Wahid, S. B., Rothy, Z. T., News, R. K., Rieyan, S. A.

medRxiv preprint · Jul 23, 2025
Deep learning has emerged as a promising tool for automating gastrointestinal (GI) disease diagnosis. However, multi-class GI disease classification remains underexplored. This study addresses this gap by presenting a framework that uses advanced models like InceptionNetV3 and ResNet50, combined with boosting algorithms (XGB, LGBM), to classify lower GI abnormalities. InceptionNetV3 with XGB achieved the best recall of 0.81 and an F1 score of 0.90. To assist clinicians in understanding model decisions, the Grad-CAM technique, a form of explainable AI, was employed to highlight the critical regions influencing predictions, fostering trust in these systems. This approach significantly improves both the accuracy and reliability of GI disease diagnosis.
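
A hedged sketch of the Grad-CAM technique mentioned above, written against a stock Keras InceptionV3; the model weights, layer name, and input size are stand-ins for the study's fine-tuned network.

```python
# Grad-CAM sketch: gradient-weighted activation map from the last convolutional block.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

model = InceptionV3(weights="imagenet")  # stand-in for the fine-tuned classifier
grad_model = tf.keras.Model(model.inputs, [model.get_layer("mixed10").output, model.output])

def grad_cam(image: np.ndarray, class_index: int) -> np.ndarray:
    """image: (299, 299, 3) array; returns a normalized heatmap over the conv grid."""
    x = preprocess_input(image[np.newaxis].astype("float32"))
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)        # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # channel importance
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights[0], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

heatmap = grad_cam(np.random.default_rng(0).uniform(0, 255, (299, 299, 3)), class_index=0)
print(heatmap.shape)  # coarse grid to be upsampled onto the endoscopic image
```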

Huang R, Chen J, Wang H, Wu X, Hu H, Zheng W, Ye X, Su S, Zhuang Z

PubMed · Jul 23, 2025
To develop a deep learning (DL) pipeline for accurate slice selection, temporal muscle (TM) segmentation, TM thickness (TMT) and area (TMA) quantification, and assessment of the prognostic role of TMT and TMA in acute ischemic stroke (AIS) patients. A total of 1020 AIS patients were enrolled. Participants were divided into three datasets: Dataset 1 (n = 295) for slice selection using a ResNet50 model, Dataset 2 (n = 258) for TM segmentation employing a TransUNet-based algorithm, and Dataset 3 (n = 467) for evaluating DL-based quantification of TMT and TMA as prognostic factors in AIS. The ability of the DL system to select slices was assessed using accuracy, ±1 slice accuracy, and mean absolute error. The Dice similarity coefficient (DSC) was used to assess the performance of the DL system on TM segmentation. The association between automatic quantification of TMT and TMA and 6-month outcomes was determined. Automatic slice selection achieved a mean accuracy of 72.91%, a ±1 slice accuracy of 97.94%, and a mean absolute error of 1.54 mm, while TM segmentation on T1WI achieved a mean DSC of 0.858. Automatically extracted TMT and TMA were each independently associated with 6-month poor outcomes in AIS patients after adjusting for age, sex, Onodera nutritional prognosis index, systemic immune-inflammation index, albumin levels, and smoking/drinking history (TMT: hazard ratio 0.736, 95% confidence interval 0.528-0.931; TMA: hazard ratio 0.702, 95% confidence interval 0.541-0.910). TMT and TMA are robust prognostic markers in AIS patients, and our end-to-end DL pipeline enables rapid, automated quantification that integrates seamlessly into clinical workflows, supporting scalable risk stratification and personalized rehabilitation planning.
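
A small helper illustrating how the slice-selection metrics quoted above can be computed (exact accuracy, ±1 slice accuracy, and MAE converted to millimetres); the 5 mm slice thickness and example indices are assumptions, not the authors' code.

```python
# Illustrative slice-selection metrics; slice thickness is an assumed conversion factor.
import numpy as np

def slice_selection_metrics(pred_idx, true_idx, slice_thickness_mm=5.0):
    diff = np.abs(np.asarray(pred_idx) - np.asarray(true_idx))
    return {
        "accuracy": float(np.mean(diff == 0)),
        "acc_within_1_slice": float(np.mean(diff <= 1)),
        "mae_mm": float(np.mean(diff) * slice_thickness_mm),
    }

print(slice_selection_metrics(pred_idx=[10, 12, 7, 21], true_idx=[10, 13, 7, 19]))
```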

Tan WY, Huang X, Huang J, Robert C, Cui J, Chen CPLH, Hilal S

PubMed · Jul 22, 2025
Cerebrovascular disease (CeVD) and cognitive impairment risk factors contribute to cognitive decline, but the role of brain age gap (BAG) in mediating this relationship remains unclear, especially in Southeast Asian populations. This study investigated the influence of cognitive impairment risk factors on cognition and examined how BAG mediates this relationship, particularly in individuals with varying CeVD burden. This cross-sectional study analyzed Singaporean community and memory clinic participants. Cognitive impairment risk factors were assessed using the Cognitive Impairment Scoring System (CISS), encompassing 11 sociodemographic and vascular factors. Cognition was assessed through a neuropsychological battery, evaluating global cognition and 6 cognitive domains: executive function, attention, memory, language, visuomotor speed, and visuoconstruction. Brain age was derived from structural MRI features using an ensemble machine learning model. Propensity score matching balanced risk profiles between model training and the remaining sample. Structural equation modeling examined the mediation effect of BAG on the CISS-cognition relationship, stratified by CeVD burden (high: CeVD+, low: CeVD-). The study included 1,437 individuals without dementia, with 646 in the matched sample (mean age 66.4 ± 6.0 years, 47% female, 60% with no cognitive impairment). Higher CISS was consistently associated with poorer cognitive performance across all domains, with the strongest negative associations in visuomotor speed (β = -2.70, p < 0.001) and visuoconstruction (β = -3.02, p < 0.001). Among the CeVD+ group, BAG significantly mediated the relationship between CISS and global cognition (proportion mediated: 19.95%, p = 0.01), with the strongest mediation effects in executive function (34.1%, p = 0.03) and language (26.6%, p = 0.008). BAG also mediated the relationship between CISS and memory (21.1%) and visuoconstruction (14.4%) in the CeVD+ group, but these effects diminished after statistical adjustments. Our findings suggest that BAG is a key intermediary linking cognitive impairment risk factors to cognitive function, particularly in individuals with high CeVD burden. This mediation effect is domain-specific, with executive function, language, and visuoconstruction being the most vulnerable to accelerated brain aging. Limitations of this study include the cross-sectional design, limiting causal inference, and the focus on Southeast Asian populations, limiting generalizability. Future longitudinal studies should verify these relationships and explore additional factors not captured in our model.
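
To make the mediation logic concrete, here is a toy product-of-coefficients sketch on synthetic data; the study itself used structural equation modeling on propensity-matched samples, so this only illustrates how a "proportion mediated" figure arises.

```python
# Toy mediation illustration: exposure (CISS) -> mediator (BAG) -> outcome (cognition).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 646
ciss = rng.normal(size=n)                                   # risk-factor score (exposure)
bag = 0.5 * ciss + rng.normal(size=n)                       # brain age gap (mediator)
cognition = -0.4 * ciss - 0.3 * bag + rng.normal(size=n)    # cognitive score (outcome)

a = sm.OLS(bag, sm.add_constant(ciss)).fit().params[1]                  # exposure -> mediator
fit_y = sm.OLS(cognition, sm.add_constant(np.column_stack([ciss, bag]))).fit()
c_direct, b = fit_y.params[1], fit_y.params[2]                          # direct effect, mediator -> outcome
indirect = a * b
print("proportion mediated ≈", indirect / (indirect + c_direct))
```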

Li S, Yan W, Zhang X, Hu W, Ji L, Yue Q

PubMed · Jul 22, 2025
Head-and-neck MRI faces inherent challenges, including motion artifacts and trade-offs between spatial resolution and acquisition time. We aimed to evaluate a dual-network deep learning (DL) super-resolution method for improving image quality and reducing scan time in T1- and T2-weighted head-and-neck MRI. In this prospective study, 97 patients with head-and-neck masses were enrolled at xx from August 2023 to August 2024. After exclusions, 58 participants underwent paired conventional and accelerated T1WI and T2WI MRI sequences, with the accelerated sequences being reconstructed using a dual-network DL framework for super-resolution. Image quality was assessed both quantitatively (signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], contrast ratio [CR]) and qualitatively by two blinded radiologists using a 5-point Likert scale for image sharpness, lesion conspicuity, structure delineation, and artifacts. Wilcoxon signed-rank tests were used to compare paired outcomes. Among 58 participants (34 men, 24 women; mean age 51.37 ± 13.24 years), DL reconstruction reduced scan times by 46.3% (T1WI) and 26.9% (T2WI). Quantitative analysis showed significant improvements in SNR (T1WI: 26.33 vs. 20.65; T2WI: 14.14 vs. 11.26) and CR (T1WI: 0.20 vs. 0.18; T2WI: 0.34 vs. 0.30; all p < 0.001), with comparable CNR (p > 0.05). Qualitatively, image sharpness, lesion conspicuity, and structure delineation improved significantly (p < 0.05), while artifact scores remained similar (all p > 0.05). The dual-network DL method significantly enhanced image quality and reduced scan times in head-and-neck MRI while maintaining diagnostic performance comparable to conventional methods. This approach offers potential for improved workflow efficiency and patient comfort.
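
For context, a hedged sketch of ROI-based image-quality metrics (SNR, CNR, CR) and the paired Wilcoxon test referenced above; these simplified definitions and the synthetic per-patient values are assumptions and may differ from the study protocol.

```python
# Simplified ROI-based metrics and paired Wilcoxon comparison (illustration only).
import numpy as np
from scipy.stats import wilcoxon

def snr(signal_roi, background_roi):
    return signal_roi.mean() / background_roi.std()

def cnr(roi_a, roi_b, background_roi):
    return abs(roi_a.mean() - roi_b.mean()) / background_roi.std()

def contrast_ratio(roi_a, roi_b):
    return abs(roi_a.mean() - roi_b.mean()) / (roi_a.mean() + roi_b.mean())

# Paired per-patient comparison between DL-reconstructed and conventional sequences
rng = np.random.default_rng(0)
snr_dl, snr_conv = rng.normal(26, 3, 58), rng.normal(21, 3, 58)  # synthetic stand-ins
stat, p = wilcoxon(snr_dl, snr_conv)
print(f"Wilcoxon statistic={stat:.1f}, p={p:.3g}")
```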

Tolga Çukur, Salman U. H. Dar, Valiyeh Ansarian Nezhad, Yohan Jun, Tae Hyung Kim, Shohei Fujita, Berkin Bilgic

arXiv preprint · Jul 22, 2025
MRI is an indispensable clinical tool, offering a rich variety of tissue contrasts to support broad diagnostic and research applications. Clinical exams routinely acquire multiple structural sequences that provide complementary information for differential diagnosis, while research protocols often incorporate advanced functional, diffusion, spectroscopic, and relaxometry sequences to capture multidimensional insights into tissue structure and composition. However, these capabilities come at the cost of prolonged scan times, which reduce patient throughput, increase susceptibility to motion artifacts, and may require trade-offs in image quality or diagnostic scope. Over the last two decades, advances in image reconstruction algorithms--alongside improvements in hardware and pulse sequence design--have made it possible to accelerate acquisitions while preserving diagnostic quality. Central to this progress is the ability to incorporate prior information to regularize the solutions to the reconstruction problem. In this tutorial, we overview the basics of MRI reconstruction and highlight state-of-the-art approaches, beginning with classical methods that rely on explicit hand-crafted priors, and then turning to deep learning methods that leverage a combination of learned and crafted priors to further push the performance envelope. We also explore the translational aspects and eventual clinical implications of these methods. We conclude by discussing future directions to address remaining challenges in MRI reconstruction. The tutorial is accompanied by a Python toolbox (https://github.com/tutorial-MRI-recon/tutorial) to demonstrate select methods discussed in the article.
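
As a concrete miniature of the regularized-reconstruction idea the tutorial builds on, here is a generic ISTA sketch for min_x (1/2)||Ax - y||² + λ||x||₁ on a random toy problem; it is not taken from the accompanying toolbox, and the operator, sparsity level, and step size are arbitrary choices.

```python
# Toy ISTA: sparse recovery from undersampled linear measurements (illustration only).
import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)          # generic forward/encoding operator
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = 1.0    # sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=m)        # noisy measurements

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the data-term gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)                          # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding (L1 prior)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```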

Xueming Fu, Pei Wu, Yingtai Li, Xin Luo, Zihang Jiang, Junhao Mei, Jian Lu, Gao-Jun Teng, S. Kevin Zhou

arXiv preprint · Jul 22, 2025
Accurate analysis of cardiac motion is crucial for evaluating cardiac function. While dynamic cardiac magnetic resonance imaging (CMR) can capture detailed tissue motion throughout the cardiac cycle, fine-grained 4D cardiac motion tracking remains challenging due to the homogeneous nature of myocardial tissue and the lack of distinctive features. Existing approaches can be broadly categorized into image-based and representation-based, each with its limitations. Image-based methods, including both traditional and deep learning-based registration approaches, either struggle with topological consistency or rely heavily on extensive training data. Representation-based methods, while promising, often suffer from loss of image-level details. To address these limitations, we propose Dynamic 3D Gaussian Representation (Dyna3DGR), a novel framework that combines explicit 3D Gaussian representation with implicit neural motion field modeling. Our method simultaneously optimizes cardiac structure and motion in a self-supervised manner, eliminating the need for extensive training data or point-to-point correspondences. Through differentiable volumetric rendering, Dyna3DGR efficiently bridges continuous motion representation with image-space alignment while preserving both topological and temporal consistency. Comprehensive evaluations on the ACDC dataset demonstrate that our approach surpasses state-of-the-art deep learning-based diffeomorphic registration methods in tracking accuracy. The code will be available at https://github.com/windrise/Dyna3DGR.