Developing deep learning-based cerebral ventricle auto-segmentation system and clinical application for the evaluation of ventriculomegaly.

Nam SM, Hwang JH, Kim JM, Lee DI, Kim YH, Park SJ, Park CK, Dho YS, Kim MS

pubmed · Jul 23 2025
Current methods for evaluating ventriculomegaly, particularly Evans' Index (EI), fail to accurately assess three-dimensional ventricular changes. We developed and validated an automated multi-class segmentation system for precise volumetric assessment, simultaneously segmenting five anatomical classes (ventricles, parenchyma, skull, skin, and hemorrhage) to support future augmented reality (AR)-guided external ventricular drainage (EVD) systems. Using the nnUNet architecture, we trained our model on 288 brain CT scans with diverse pathological conditions and validated it using internal (n=10), external (n=43), and public (n=192) datasets. Clinical validation involved 227 patients who underwent CSF drainage procedures. We compared automated volumetric measurements against traditional EI measurements and actual CSF drainage volumes in surgical cases. The model achieved exceptional performance with a mean Dice similarity coefficient of 93.0% across all five classes, demonstrating consistent performance across institutional and public datasets, with particularly robust ventricle segmentation (92.5%). Clinical validation revealed EI was the strongest single predictor of ventricular volume (adjusted R² = 0.430, p < 0.001), though it was influenced by age, sex, and diagnosis type. Most significantly, in EVD cases, automated volume differences correlated strongly with actual CSF drainage amounts (β = 0.956, adjusted R² = 0.936, p < 0.001), validating the system's accuracy in measuring real CSF volume changes. Our comprehensive multi-class segmentation system offers a superior alternative to traditional measurements, with potential for non-invasive CSF dynamics monitoring and AR-guided EVD placement.
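A minimal NumPy sketch of the per-class Dice similarity coefficient (DSC) reported above; the class labels mirror the abstract, but the array shapes and toy volume are illustrative assumptions rather than the authors' pipeline:

```python
# Hypothetical sketch: per-class Dice similarity coefficient (DSC), the metric
# the study reports (mean DSC 93.0% over five classes). Labels and shapes are
# illustrative assumptions, not the authors' code.
import numpy as np

CLASSES = {1: "ventricles", 2: "parenchyma", 3: "skull", 4: "skin", 5: "hemorrhage"}

def dice_per_class(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for one integer label in a label map."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

# Example on a toy 3D label volume
rng = np.random.default_rng(0)
gt = rng.integers(0, 6, size=(8, 64, 64))
pred = gt.copy()
pred[0] = rng.integers(0, 6, size=(64, 64))  # corrupt one slice to simulate error
for lbl, name in CLASSES.items():
    print(f"{name:>11s}: DSC = {dice_per_class(pred, gt, lbl):.3f}")
```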

BrainCNN: Automated Brain Tumor Grading from Magnetic Resonance Images Using a Convolutional Neural Network-Based Customized Model.

Yang J, Siddique MA, Ullah H, Gilanie G, Por LY, Alshathri S, El-Shafai W, Aldossary H, Gadekallu TR

pubmed · Jul 23 2025
Brain tumors pose a significant risk to human life, making accurate grading essential for effective treatment planning and improved survival rates. Magnetic Resonance Imaging (MRI) plays a crucial role in this process. The objective of this study was to develop an automated brain tumor grading system using deep learning techniques. A dataset comprising 293 patient MRI scans was obtained from the Department of Radiology at Bahawal Victoria Hospital in Bahawalpur, Pakistan. The proposed approach integrates a specialized Convolutional Neural Network (CNN) with pre-trained models to classify brain tumors into low-grade (LGT) and high-grade (HGT) categories with high accuracy. To assess the model's robustness, experiments were conducted using four input variants: (1) raw MRI slices, (2) MRI segments containing only the tumor area, (3) feature-extracted slices derived from the original images through the proposed CNN architecture, and (4) feature-extracted slices from tumor-area-only segmented images using the proposed CNN. The MRI slices and the features extracted from them were classified using machine learning models, including a Support Vector Machine (SVM) and transfer-learning CNN architectures such as MobileNet, Inception V3, and ResNet-50. Additionally, a custom model was developed specifically for this research. The proposed model achieved a peak accuracy of 99.45%, with classification accuracies of 99.56% for low-grade tumors and 99.49% for high-grade tumors, surpassing traditional methods. These results not only enhance the accuracy of brain tumor grading but also improve computational efficiency by reducing processing time and the number of iterations required.
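A rough sketch of the feature-extraction variant described in experiment (3): a pre-trained backbone embeds each slice and a classical SVM assigns the grade. The backbone choice, input size, and data are assumptions for illustration, not the study's code (the pre-trained weights download on first run):

```python
# Sketch of transfer-learning feature extraction feeding an SVM classifier
# (0 = LGT, 1 = HGT). Backbone, image size, and labels are assumptions.
import torch
import torchvision.models as models
from torchvision.models import ResNet50_Weights
from sklearn.svm import SVC

backbone = models.resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # drop the classifier head -> 2048-d features
backbone.eval()

@torch.no_grad()
def extract(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, 224, 224) normalized slices -> (N, 2048) feature vectors."""
    return backbone(batch)

# Toy stand-ins for preprocessed MRI slices and grades
x = torch.randn(16, 3, 224, 224)
y = torch.randint(0, 2, (16,)).numpy()
feats = extract(x).numpy()
clf = SVC(kernel="rbf").fit(feats, y)    # the SVM stage of the pipeline
print("train accuracy:", clf.score(feats, y))
```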

Kissing Spine and Other Imaging Predictors of Postoperative Cement Displacement Following Percutaneous Kyphoplasty: A Machine Learning Approach.

Zhao Y, Bo L, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

pubmed · Jul 23 2025
To investigate the risk factors associated with postoperative cement displacement following percutaneous kyphoplasty (PKP) in patients with osteoporotic vertebral compression fractures (OVCF), and to develop predictive models for clinical risk assessment. This retrospective study included 198 patients with OVCF who underwent PKP. Imaging and clinical variables were collected. Multiple machine learning models, including logistic regression, L1- and L2-regularized logistic regression, support vector machine (SVM), decision tree, gradient boosting, and random forest, were developed to predict cement displacement. The L1- and L2-regularized logistic regression models identified four key risk factors: kissing spine (L1: 1.11; L2: 0.91), incomplete anterior cortex (L1: -1.60; L2: -1.62), low vertebral body CT value (L1: -2.38; L2: -1.71), and large Cobb change (L1: 0.89; L2: 0.87). The SVM model achieved the best performance (accuracy: 0.983, precision: 0.875, recall: 1.000, F1-score: 0.933, specificity: 0.981, AUC: 0.997); the logistic regression, decision tree, gradient boosting, and random forest models also performed well but were slightly inferior to the SVM. Key predictors of cement displacement were identified, and machine learning models were developed for risk assessment. These findings can assist clinicians in identifying high-risk patients, optimizing treatment strategies, and improving patient outcomes.
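The regularized-regression step could look like the following scikit-learn sketch; the feature names mirror the abstract, but the data is synthetic and the fitting details are assumptions:

```python
# Sketch: fit L1- and L2-penalized logistic models and read off standardized
# coefficients for the candidate predictors. Data here is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["kissing_spine", "incomplete_anterior_cortex", "vertebral_CT_value", "cobb_change"]
rng = np.random.default_rng(42)
X = rng.normal(size=(198, len(features)))
y = (X[:, 0] - X[:, 2] + rng.normal(size=198)) > 1.0   # synthetic displacement label

Xs = StandardScaler().fit_transform(X)                  # standardize so coefficients compare
for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    model = LogisticRegression(penalty=penalty, solver=solver).fit(Xs, y)
    print(penalty, dict(zip(features, np.round(model.coef_[0], 2))))
```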

Interpretable Deep Learning Approaches for Reliable GI Image Classification: A Study with the HyperKvasir Dataset

Wahid, S. B., Rothy, Z. T., News, R. K., Rieyan, S. A.

medrxiv preprint · Jul 23 2025
Deep learning has emerged as a promising tool for automating gastrointestinal (GI) disease diagnosis, but multi-class GI disease classification remains underexplored. This study addresses that gap with a framework that combines advanced models such as InceptionNetV3 and ResNet50 with boosting algorithms (XGB, LGBM) to classify lower GI abnormalities. InceptionNetV3 with XGB achieved the best performance, with a recall of 0.81 and an F1 score of 0.90. To help clinicians understand model decisions, Grad-CAM, an explainable-AI technique, was used to highlight the regions that most influenced each prediction, fostering trust in these systems. This approach improves both the accuracy and the reliability of GI disease diagnosis.
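A hedged sketch of the Grad-CAM step described above; the backbone, target layer, and input are stand-in assumptions rather than the authors' configuration (real use would feed endoscopy images and the trained GI classifier):

```python
# Illustrative Grad-CAM: gradient-weighted activations of the last conv stage,
# upsampled to input size as a heatmap. Layer choice and input are assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=None).eval()   # stand-in for the trained classifier
acts, grads = {}, {}
layer = model.layer4                           # last conv stage: typical Grad-CAM target

layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(x)[0].max()          # logit of the predicted class
score.backward()

w = grads["v"].mean(dim=(2, 3), keepdim=True)           # channel-wise gradient weights
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))  # weighted activation map
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap to overlay on the input
```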

Deep learning-based temporal muscle quantification on MRI predicts adverse outcomes in acute ischemic stroke.

Huang R, Chen J, Wang H, Wu X, Hu H, Zheng W, Ye X, Su S, Zhuang Z

pubmed · Jul 23 2025
To develop a deep learning (DL) pipeline for accurate slice selection, temporal muscle (TM) segmentation, and TM thickness (TMT) and area (TMA) quantification, and to assess the prognostic role of TMT and TMA in acute ischemic stroke (AIS) patients. A total of 1020 AIS patients were enrolled and divided into three datasets: Dataset 1 (n = 295) for slice selection using a ResNet50 model, Dataset 2 (n = 258) for TM segmentation employing a TransUNet-based algorithm, and Dataset 3 (n = 467) for evaluating DL-based quantification of TMT and TMA as prognostic factors in AIS. Slice selection was assessed using accuracy, ±1 slice accuracy, and mean absolute error; segmentation performance was assessed with the Dice similarity coefficient (DSC). The association between automatically quantified TMT and TMA and 6-month outcomes was then determined. Automatic slice selection achieved a mean accuracy of 72.91% and a ±1 slice accuracy of 97.94%, with a mean absolute error of 1.54 mm, while TM segmentation on T1WI achieved a mean DSC of 0.858. Automatically extracted TMT and TMA were each independently associated with poor 6-month outcomes in AIS patients after adjusting for age, sex, Onodera prognostic nutritional index, systemic immune-inflammation index, albumin levels, and smoking/drinking history (TMT: hazard ratio 0.736, 95% confidence interval 0.528-0.931; TMA: hazard ratio 0.702, 95% confidence interval 0.541-0.910). TMT and TMA are robust prognostic markers in AIS patients, and our end-to-end DL pipeline enables rapid, automated quantification that integrates seamlessly into clinical workflows, supporting scalable risk stratification and personalized rehabilitation planning.
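The slice-selection metrics (exact accuracy, ±1 slice accuracy, MAE in mm) can be computed as in this small sketch; the index arrays and slice spacing are synthetic stand-ins for the ResNet50 selector's output:

```python
# Sketch of the slice-selection metrics reported above.
import numpy as np

def slice_metrics(pred_idx: np.ndarray, true_idx: np.ndarray, spacing_mm: float = 1.0):
    diff = np.abs(pred_idx - true_idx)
    return {
        "accuracy": float((diff == 0).mean()),       # exact slice match
        "acc_within_1": float((diff <= 1).mean()),   # ±1 slice tolerance
        "mae_mm": float(diff.mean() * spacing_mm),   # MAE converted via slice spacing
    }

rng = np.random.default_rng(1)
true_idx = rng.integers(10, 30, size=295)
pred_idx = true_idx + rng.choice([-1, 0, 0, 0, 1], size=295)  # mostly correct predictions
print(slice_metrics(pred_idx, true_idx, spacing_mm=1.5))
```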

Role of Brain Age Gap as a Mediator in the Relationship Between Cognitive Impairment Risk Factors and Cognition.

Tan WY, Huang X, Huang J, Robert C, Cui J, Chen CPLH, Hilal S

pubmed · Jul 22 2025
Cerebrovascular disease (CeVD) and cognitive impairment risk factors contribute to cognitive decline, but the role of the brain age gap (BAG) in mediating this relationship remains unclear, especially in Southeast Asian populations. This study investigated the influence of cognitive impairment risk factors on cognition and examined how BAG mediates this relationship, particularly in individuals with varying CeVD burden. This cross-sectional study analyzed Singaporean community and memory clinic participants. Cognitive impairment risk factors were assessed using the Cognitive Impairment Scoring System (CISS), encompassing 11 sociodemographic and vascular factors. Cognition was assessed through a neuropsychological battery evaluating global cognition and 6 cognitive domains: executive function, attention, memory, language, visuomotor speed, and visuoconstruction. Brain age was derived from structural MRI features using an ensemble machine learning model. Propensity score matching balanced risk profiles between the model-training sample and the remaining sample. Structural equation modeling examined the mediation effect of BAG on the CISS-cognition relationship, stratified by CeVD burden (high: CeVD+, low: CeVD-). The study included 1,437 individuals without dementia, 646 of whom were in the matched sample (mean age 66.4 ± 6.0 years, 47% female, 60% with no cognitive impairment). Higher CISS was consistently associated with poorer cognitive performance across all domains, with the strongest negative associations in visuomotor speed (β = -2.70, p < 0.001) and visuoconstruction (β = -3.02, p < 0.001). In the CeVD+ group, BAG significantly mediated the relationship between CISS and global cognition (proportion mediated: 19.95%, p = 0.01), with the strongest mediation effects in executive function (34.1%, p = 0.03) and language (26.6%, p = 0.008). BAG also mediated the relationship between CISS and memory (21.1%) and visuoconstruction (14.4%) in the CeVD+ group, but these effects diminished after statistical adjustment. Our findings suggest that BAG is a key intermediary linking cognitive impairment risk factors to cognitive function, particularly in individuals with high CeVD burden. This mediation effect is domain-specific, with executive function, language, and visuoconstruction the most vulnerable to accelerated brain aging. Limitations include the cross-sectional design, which precludes causal inference, and the focus on Southeast Asian populations, which limits generalizability. Future longitudinal studies should verify these relationships and explore additional factors not captured in our model.
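As a toy illustration (not the authors' structural equation model), the proportion-mediated quantity can be estimated with the classic product-of-coefficients approach on simulated data:

```python
# Toy mediation estimate: the indirect effect of risk score (CISS) on cognition
# through brain age gap (BAG) is a*b; "proportion mediated" = a*b / total effect.
# All data is simulated; effect sizes are arbitrary assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 646
ciss = rng.normal(size=n)
bag = 0.5 * ciss + rng.normal(size=n)                # mediator model: CISS -> BAG
cog = -0.3 * ciss - 0.4 * bag + rng.normal(size=n)   # outcome model

a = sm.OLS(bag, sm.add_constant(ciss)).fit().params[1]   # path a: CISS -> BAG
b = sm.OLS(cog, sm.add_constant(np.column_stack([ciss, bag]))).fit().params[2]  # path b: BAG -> cognition | CISS
total = sm.OLS(cog, sm.add_constant(ciss)).fit().params[1]  # total effect of CISS

print(f"indirect effect a*b = {a*b:.3f}, proportion mediated = {a*b/total:.1%}")
```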

Dual-Network Deep Learning for Accelerated Head and Neck MRI: Enhanced Image Quality and Reduced Scan Time.

Li S, Yan W, Zhang X, Hu W, Ji L, Yue Q

pubmed · Jul 22 2025
Head-and-neck MRI faces inherent challenges, including motion artifacts and trade-offs between spatial resolution and acquisition time. We aimed to evaluate a dual-network deep learning (DL) super-resolution method for improving image quality and reducing scan time in T1- and T2-weighted head-and-neck MRI. In this prospective study, 97 patients with head-and-neck masses were enrolled at xx from August 2023 to August 2024. After exclusions, 58 participants underwent paired conventional and accelerated T1WI and T2WI MRI sequences, with the accelerated sequences being reconstructed using a dual-network DL framework for super-resolution. Image quality was assessed both quantitatively (signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], contrast ratio [CR]) and qualitatively by two blinded radiologists using a 5-point Likert scale for image sharpness, lesion conspicuity, structure delineation, and artifacts. Wilcoxon signed-rank tests were used to compare paired outcomes. Among 58 participants (34 men, 24 women; mean age 51.37 ± 13.24 years), DL reconstruction reduced scan times by 46.3% (T1WI) and 26.9% (T2WI). Quantitative analysis showed significant improvements in SNR (T1WI: 26.33 vs. 20.65; T2WI: 14.14 vs. 11.26) and CR (T1WI: 0.20 vs. 0.18; T2WI: 0.34 vs. 0.30; all p < 0.001), with comparable CNR (p > 0.05). Qualitatively, image sharpness, lesion conspicuity, and structure delineation improved significantly (p < 0.05), while artifact scores remained similar (all p > 0.05). The dual-network DL method significantly enhanced image quality and reduced scan times in head-and-neck MRI while maintaining diagnostic performance comparable to conventional methods. This approach offers potential for improved workflow efficiency and patient comfort.
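The quantitative comparison above can be reproduced in outline with ROI-based SNR/CNR plus a paired Wilcoxon signed-rank test; this is a minimal sketch on synthetic paired measurements, with ROI definitions assumed:

```python
# Sketch: region-based SNR/CNR and a Wilcoxon signed-rank test on paired
# conventional vs. DL-reconstructed measurements. Values are synthetic.
import numpy as np
from scipy.stats import wilcoxon

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Mean signal intensity over background noise standard deviation."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Absolute contrast between two tissues over noise standard deviation."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(3)
# Paired per-patient SNR values for the two reconstructions (synthetic)
snr_conventional = rng.normal(20.6, 3.0, size=58)
snr_dl = snr_conventional + rng.normal(5.7, 1.5, size=58)

stat, p = wilcoxon(snr_dl, snr_conventional)  # paired, non-parametric
print(f"Wilcoxon W = {stat:.1f}, p = {p:.2e}")
```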

Faithful, Interpretable Chest X-ray Diagnosis with Anti-Aliased B-cos Networks

Marcel Kleinmann, Shashank Agnihotri, Margret Keuper

arxiv preprint · Jul 22 2025
Faithfulness and interpretability are essential for deploying deep neural networks (DNNs) in safety-critical domains such as medical imaging. B-cos networks offer a promising solution by replacing standard linear layers with a weight-input alignment mechanism, producing inherently interpretable, class-specific explanations without post-hoc methods. While maintaining diagnostic performance competitive with state-of-the-art DNNs, standard B-cos models suffer from severe aliasing artifacts in their explanation maps, making them unsuitable for clinical use where clarity is essential. Additionally, the original B-cos formulation is limited to multi-class settings, whereas chest X-ray analysis often requires multi-label classification due to co-occurring abnormalities. In this work, we address both limitations: (1) we introduce anti-aliasing strategies using FLCPooling (FLC) and BlurPool (BP) to significantly improve explanation quality, and (2) we extend B-cos networks to support multi-label classification. Our experiments on chest X-ray datasets demonstrate that the modified $\text{B-cos}_\text{FLC}$ and $\text{B-cos}_\text{BP}$ preserve strong predictive performance while providing faithful and artifact-free explanations suitable for clinical application in multi-label settings. Code available at: https://github.com/mkleinma/B-cos-medical-paper
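A minimal PyTorch sketch of BlurPool-style anti-aliased downsampling (the "BP" component named above): low-pass filtering with a fixed binomial kernel before striding. Kernel size and shapes are assumptions, not the paper's exact implementation:

```python
# BlurPool (Zhang 2019): blur with a fixed per-channel binomial filter, then
# stride, so downsampling does not alias high frequencies.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        k = torch.tensor([1., 2., 1.])
        k = torch.outer(k, k)
        k = (k / k.sum()).expand(channels, 1, 3, 3).contiguous()  # one filter per channel
        self.register_buffer("kernel", k)
        self.stride, self.channels = stride, channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")   # keep spatial alignment
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

x = torch.randn(1, 64, 56, 56)
print(BlurPool2d(64)(x).shape)  # torch.Size([1, 64, 28, 28])
```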

MLRU++: Multiscale Lightweight Residual UNETR++ with Attention for Efficient 3D Medical Image Segmentation

Nand Kumar Yadav, Rodrigue Rizk, Willium WC Chen, KC

arxiv preprint · Jul 22 2025
Accurate and efficient medical image segmentation is crucial but challenging due to anatomical variability and high computational demands on volumetric data. Recent hybrid CNN-Transformer architectures achieve state-of-the-art results but add significant complexity. In this paper, we propose MLRU++, a Multiscale Lightweight Residual UNETR++ architecture designed to balance segmentation accuracy and computational efficiency. It introduces two key innovations: a Lightweight Channel and Bottleneck Attention Module (LCBAM) that enhances contextual feature encoding with minimal overhead, and a Multiscale Bottleneck Block (M2B) in the decoder that captures fine-grained details via multi-resolution feature aggregation. Experiments on four publicly available benchmark datasets (Synapse, BTCV, ACDC, and Decathlon Lung) demonstrate that MLRU++ achieves state-of-the-art performance, with average Dice scores of 87.57% (Synapse), 93.00% (ACDC), and 81.12% (Lung). Compared to existing leading models, MLRU++ improves Dice scores by 5.38% and 2.12% on Synapse and ACDC, respectively, while significantly reducing parameter count and computational cost. Ablation studies evaluating LCBAM and M2B further confirm the effectiveness of the proposed architectural components. Results suggest that MLRU++ offers a practical and high-performing solution for 3D medical image segmentation tasks. Source code is available at: https://github.com/1027865/MLRUPP
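The abstract does not spell out LCBAM's internals; as a stand-in, here is the classic squeeze-and-excitation channel attention that lightweight channel-attention modules of this kind typically build on, written for 3D volumes to match the segmentation setting:

```python
# Squeeze-and-excitation channel attention (3D), shown as a generic example of
# lightweight channel attention; it is NOT the paper's LCBAM module.
import torch
import torch.nn as nn

class ChannelAttention3d(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                        # squeeze: global context per channel
            nn.Conv3d(channels, channels // reduction, 1),  # excite: bottleneck MLP
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)  # reweight channels; shape is preserved

x = torch.randn(2, 32, 16, 64, 64)  # (batch, channels, D, H, W)
print(ChannelAttention3d(32)(x).shape)
```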

Pyramid Hierarchical Masked Diffusion Model for Imaging Synthesis

Xiaojiao Xiao, Qinmin Vivian Hu, Guanghui Wang

arxiv preprint · Jul 22 2025
Medical image synthesis plays a crucial role in clinical workflows, addressing the common problem of missing imaging modalities caused by extended scan times, scan corruption, artifacts, patient motion, and intolerance to contrast agents. The paper presents a novel image synthesis network, the Pyramid Hierarchical Masked Diffusion Model (PHMDiff), which employs a multi-scale hierarchical approach for finer control over synthesizing high-quality images across different resolutions and layers. Specifically, the model uses random multi-scale, high-proportion masks to speed up diffusion model training while balancing detail fidelity and overall structure. A Transformer-based diffusion process incorporates cross-granularity regularization, modeling mutual-information consistency across each granularity's latent space and thereby enhancing pixel-level perceptual accuracy. Comprehensive experiments on two challenging datasets demonstrate that PHMDiff achieves superior performance in both Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), highlighting its ability to produce high-quality synthesized images with excellent structural integrity. Ablation studies confirm the contributions of each component, and comparisons across and within medical imaging modalities show significant advantages over other methods. The source code is available at https://github.com/xiaojiao929/PHMDiff
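A sketch of the PSNR/SSIM evaluation the paper reports, using scikit-image's reference implementations; the target/synthesis pair here is synthetic, purely for illustration:

```python
# Compute PSNR and SSIM between a reference image and a synthesized one.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(5)
reference = rng.random((128, 128)).astype(np.float32)  # stand-in target modality
synthesized = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(reference, synthesized, data_range=1.0)
ssim = structural_similarity(reference, synthesized, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```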