
Postconcussive Sleep Problems and Glymphatic Dysfunction Predict Persistent Working Memory Decline.

Li YT, Chen DY, Kuo DP, Chen YC, Cheng SJ, Hsieh LC, Chiang YH, Chen CY

pubmed · Sep 22, 2025
Persistent working memory decline (PWMD) is a common sequela of mild traumatic brain injury (mTBI), yet reliable biomarkers for predicting long-term working memory outcomes remain lacking. The glymphatic system, a brain-wide waste clearance network, plays a crucial role in cognitive recovery. The diffusion tensor imaging analysis along the perivascular space (DTI-ALPS) index, a noninvasive magnetic resonance imaging (MRI)-based technique, offers a promising approach to evaluate perivascular fluid dynamics, a key component of glymphatic function. However, its role in long-term working memory dysfunction remains underexplored, particularly in the presence of traumatic cerebral microbleeds (CMBs) and poor sleep quality, as measured by the Pittsburgh Sleep Quality Index (PSQI), both of which have been suggested to disrupt glymphatic clearance, exacerbate neurovascular impairment, and contribute to cognitive decline. This study aims to investigate the interplay between CMBs, sleep quality, and perivascular fluid dynamics in predicting PWMD after mTBI. We further assess the feasibility of a machine learning-based approach to enhance individualized working memory outcome prediction. Between September 2015 and October 2022, 3,068 patients presenting with concussion were screened, and 471 met the inclusion criteria for mTBI. A total of 184 patients provided informed consent, and 61 completed both baseline and 1-year follow-up assessments. In addition, 61 demographically matched healthy controls were recruited. Susceptibility-weighted imaging was used to detect CMBs, while perivascular fluid dynamics were assessed using the DTI-ALPS index. Sleep quality was evaluated using the PSQI, and working memory was measured with the Digit Span test at baseline and 1-year post-injury.
Mediation analysis was conducted to examine the indirect effects of perivascular fluid dynamics on cognitive outcomes, and a machine learning model incorporating DTI-ALPS, CMBs, sleep quality, and baseline cognitive scores was developed for individualized prediction. CMBs were present in 29.5% of mTBI patients and were associated with significantly lower DTI-ALPS index values (p < 0.001), suggesting compromised perivascular fluid dynamics and glymphatic impairment. Poor sleep quality (PSQI > 8) correlated with lower 1-year Digit Span scores (r = -0.551, p < 0.001), supporting the link between disrupted glymphatic function and cognitive decline. Mediation analysis revealed that the DTI-ALPS index partially mediated the relationship between CMBs and PWMD (Sobel test, p = 0.031). Machine learning-based predictive modeling achieved high accuracy in forecasting 1-year working memory outcomes (R² = 0.78). These findings highlight the potential of noninvasive MRI-based assessment of perivascular fluid dynamics as an early biomarker for PWMD. Given the essential role of the glymphatic system in sleep and memory, integrating DTI-ALPS with CMB detection and sleep quality evaluation may enhance prognostic accuracy and inform personalized rehabilitation strategies for mTBI patients.
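The mediation result above rests on the Sobel test, which asks whether the indirect effect through a mediator (here, the DTI-ALPS index on the path from CMBs to working memory) differs from zero. A minimal sketch, using illustrative path coefficients rather than the study's values:

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Return (z, two-sided p) for the indirect effect a*b.

    a, se_a : coefficient and SE of the predictor -> mediator path
    b, se_b : coefficient and SE of the mediator -> outcome path
    """
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    # Two-sided p-value from the standard normal: 2 * (1 - Phi(|z|))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical paths: CMBs lower ALPS (a < 0), higher ALPS helps Digit Span (b > 0).
z, p = sobel_test(a=-0.42, se_a=0.15, b=0.35, se_b=0.12)
```

The Sobel test assumes the product a·b is approximately normal, which can be conservative in small samples; bootstrap confidence intervals are a common alternative.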

The optimal diagnostic assistance system for predicting three-dimensional contact between mandibular third molars and the mandibular canal on panoramic radiographs.

Fukuda M, Nomoto D, Nozawa M, Kise Y, Kuwada C, Kubo H, Ariji E, Ariji Y

pubmed · Sep 22, 2025
This study aimed to identify the most effective diagnostic assistance system for assessing the relationship between mandibular third molars (M3M) and mandibular canals (MC) using panoramic radiographs. In total, 2,103 M3M were included from patients in whom the M3M and MC overlapped on panoramic radiographs. All M3M were classified into high-risk and low-risk groups based on the degree of contact with the MC observed on computed tomography. The contact classification was evaluated using four machine learning models (Prediction One software, AdaBoost, XGBoost, and random forest), three convolutional neural networks (CNNs) (EfficientNet-B0, ResNet18, and Inception v3), and three human observers (two radiologists and one oral surgery resident). Receiver operating characteristic curves were plotted; the area under the curve (AUC), accuracy, sensitivity, and specificity were calculated. Factors contributing to prediction of high-risk cases by machine learning models were identified. Machine learning models demonstrated AUC values ranging from 0.84 to 0.88, with accuracy ranging from 0.81 to 0.88 and sensitivity of 0.80, indicating consistently strong performance. Among the CNNs, ResNet18 achieved the best performance, with an AUC of 0.83. The human observers exhibited AUC values between 0.67 and 0.80. Three factors were identified as contributing to prediction of high-risk cases by machine learning models: increased root radiolucency, diversion of the MC, and narrowing of the MC. Machine learning models demonstrated strong performance in predicting the three-dimensional relationship between the M3M and MC.
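The AUC values compared above have a direct probabilistic reading: the chance that a randomly chosen high-risk case is scored above a randomly chosen low-risk case (the Mann-Whitney statistic). A minimal sketch on toy scores, not the study's data:

```python
def auc_from_scores(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive outranks a random negative, with ties counting half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
model_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]  # one positive ranked below a negative
model_b = [0.9, 0.8, 0.7, 0.5, 0.3, 0.2]  # perfect separation
auc_a = auc_from_scores(labels, model_a)
auc_b = auc_from_scores(labels, model_b)
```

This rank-based view explains why AUC is insensitive to the choice of decision threshold, unlike the accuracy, sensitivity, and specificity also reported above.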

Deep-learning-based prediction of significant portal hypertension with single cross-sectional non-enhanced CT.

Yamamoto A, Sato S, Ueda D, Walston SL, Kageyama K, Jogo A, Nakano M, Kotani K, Uchida-Kobayashi S, Kawada N, Miki Y

pubmed · Sep 22, 2025
The purpose of this study was to establish a predictive deep learning (DL) model for clinically significant portal hypertension (CSPH) based on a single cross-sectional non-contrast CT image and to compare four representative positional images to determine the most suitable for the detection of CSPH. The study included 421 patients with chronic liver disease who underwent hepatic venous pressure gradient measurement at our institution between May 2007 and January 2024. Patients were randomly classified into training, validation, and test datasets at a ratio of 8:1:1. Non-contrast cross-sectional CT images from four target areas of interest were used to create four deep-learning-based models for predicting CSPH. The areas of interest were the umbilical portion of the portal vein (PV), the first right branch of the PV, the confluence of the splenic vein and PV, and the maximum cross-section of the spleen. The models were implemented using convolutional neural networks with a multilayer perceptron as the classifier. The model with the best predictive ability for CSPH was then compared to 13 conventional evaluation methods. Among the four areas, the umbilical portion of the PV had the highest predictive ability for CSPH (area under the curve [AUC]: 0.80). At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model. We developed an algorithm that can predict CSPH immediately from a single slice of non-contrast CT, using the most suitable image of the umbilical portion of the PV.

Question: CSPH predicts complications but requires invasive hepatic venous pressure gradient measurement for diagnosis.

Findings: At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model.

Clinical relevance: This study shows that a DL model can accurately predict CSPH from a single non-contrast CT image, providing a non-invasive alternative to invasive methods and aiding early detection and risk stratification in chronic liver disease without image manipulation.
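The operating point reported above is chosen by maximizing the Youden index, J = sensitivity + specificity - 1, over candidate thresholds on the model's output score. A minimal sketch on toy scores (not the study's data):

```python
def best_youden_threshold(labels, scores):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1.

    A case is called positive when its score >= threshold.
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= t)
        fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < t)
        tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < t)
        fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.7, 0.35, 0.4, 0.1]
threshold, j = best_youden_threshold(labels, scores)
```

Maximizing J weights sensitivity and specificity equally; a clinical deployment might instead fix a minimum sensitivity and read off the corresponding specificity.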

Machine learning predicts severe adverse events and salvage success of CT-guided lung biopsy after nondiagnostic transbronchial lung biopsy.

Yang S, Hua Z, Chen Y, Liu L, Wang Z, Cheng Y, Wang J, Xu Z, Chen C

pubmed · Sep 22, 2025
To address the unmet clinical need for validated risk stratification tools in salvage CT-guided percutaneous lung biopsy (PNLB) following nondiagnostic transbronchial lung biopsy (TBLB), we aimed to develop machine learning models predicting severe adverse events (SAEs) in PNLB (Model 1) and diagnostic success of salvage PNLB post-TBLB failure (Model 2). This multicenter predictive modeling study enrolled 2910 cases undergoing PNLB across two centers (Center 1: n = 2653 (2016-2020); Center 2: n = 257 (2017-2022)) with complete imaging and clinical documentation meeting predefined inclusion and exclusion criteria. Key variables were selected via LASSO regression, followed by development and validation of Model 1 (incorporating sex, smoking, pleural contact, lesion size, and puncture depth) and Model 2 (including age, lesion size, lesion characteristics, and post-bronchoscopic pathological categories (PBPCs)) using ten machine learning algorithms. Model performance was rigorously evaluated through discrimination metrics, calibration curves, and decision curve analysis to assess clinical applicability. A total of 2653 and 257 PNLB cases were included from the two centers; Model 1 achieved an external validation ROC-AUC of 0.717 (95% CI: 0.609-0.825) and PR-AUC of 0.258 (95% CI: 0.0365-0.708), while Model 2 exhibited a ROC-AUC of 0.884 (95% CI: 0.784-0.984) and PR-AUC of 0.852 (95% CI: 0.784-0.896), with XGBoost outperforming the other algorithms. The dual XGBoost system stratifies salvage PNLB candidates by quantifying SAE risks (AUC = 0.717) versus diagnostic yield (AUC = 0.884), addressing the unmet need for personalized biopsy pathway optimization.

Question: Current tools cannot quantify severe adverse event (SAE) risks versus salvage diagnostic success for CT-guided lung biopsy (PNLB) after failed transbronchial biopsy (TBLB).

Findings: Dual XGBoost models successfully predicted the risks of PNLB SAEs (AUC = 0.717) and diagnostic success post-TBLB failure (AUC = 0.884), with validated clinical stratification benefits.

Clinical relevance: The dual XGBoost system guides clinical decision-making by integrating individual risk of SAEs with predictors of diagnostic success, enabling personalized salvage biopsy strategies that balance safety and diagnostic yield.
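The LASSO step used above selects variables by shrinking uninformative coefficients exactly to zero under an L1 penalty. A minimal sketch via proximal gradient descent (ISTA) on synthetic data; illustrative only, not the study's pipeline:

```python
import numpy as np

def lasso_ista(X, y, lam, lr=0.01, iters=5000):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n          # gradient of the squared loss
        b = b - lr * grad
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam, 0.0)  # soft threshold
    return b

# Six candidate predictors; only features 0 and 2 actually drive the outcome.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.standard_normal(200)
b = lasso_ista(X, y, lam=0.5)
selected = [j for j in range(6) if abs(b[j]) > 0.05]
```

The surviving features would then feed the downstream classifiers (XGBoost among them, per the abstract); note the L1 penalty also biases the retained coefficients toward zero.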

Multitask radioclinical decision stratification in non-metastatic colon cancer: integrating MMR status, pT staging, and high-risk pathological factors.

Yang R, Liu J, Li L, Fan Y, Shu Y, Wu W, Shu J

pubmed · Sep 22, 2025
This study aimed to construct a multi-task global decision support system based on preoperative enhanced CT features to predict the mismatch repair (MMR) status, T stage, and pathological risk factors (e.g., histological differentiation, lymphovascular invasion) of patients with non-metastatic colon cancer (NMCC). A total of 372 eligible NMCC participants (training cohort: n = 260; testing cohort: n = 112) were enrolled from two institutions. The 34 features (imaging features: n = 27; clinical features: n = 7) were subjected to feature selection using LASSO, Boruta, ReliefF, mRMR, and XGBoost-RFE, respectively. In each of the three categories (MMR, pT staging, and pathological risk factors), four features were selected to construct the total feature set. Subsequently, the multitask model was built with 14 machine learning algorithms. The predictive performance of the machine learning model was evaluated using the area under the receiver operating characteristic curve (AUC). The final feature set for constructing the model was based on the mRMR feature screening method. For the final MMR classification, pT staging, and pathological risk factors, the SVC, Bernoulli NB, and Decision Tree algorithms were selected, respectively, with AUC scores of 0.80 [95% CI 0.71-0.89], 0.82 [95% CI 0.71-0.94], and 0.85 [95% CI 0.77-0.93] on the test set. Furthermore, a direct multiclass model constructed using the total feature set resulted in an average AUC of 0.77 across four management plans in the test set. The multi-task machine learning model proposed in this study enables non-invasive and precise preoperative stratification of patients with NMCC based on MMR status, pT stage, and pathological risk factors. This predictive tool demonstrates significant potential in facilitating preoperative risk stratification and guiding individualized therapeutic strategies.

Diagnostic accuracy and consistency of ChatGPT-4o in radiology: influence of image, clinical data, and answer options on performance.

Atakır K, Işın K, Taş A, Önder H

pubmed · Sep 22, 2025
This study aimed to evaluate the diagnostic accuracy of Chat Generative Pre-trained Transformer (ChatGPT) version 4 Omni (ChatGPT-4o) in radiology across seven information input combinations (image, clinical data, and multiple-choice options), to assess the consistency of its outputs across repeated trials, and to compare its performance with that of human radiologists. We tested 129 distinct radiology cases under seven input conditions (varying presence of imaging, clinical context, and answer options). Each case was processed by ChatGPT-4o for seven different input combinations on three separate accounts. Diagnostic accuracy was determined by comparison with ground-truth diagnoses, and interobserver consistency was measured using Fleiss' kappa. Pairwise comparisons were performed with the Wilcoxon signed-rank test. Additionally, the same set of cases was evaluated by nine radiology residents to benchmark ChatGPT-4o's performance against human diagnostic accuracy. ChatGPT-4o's diagnostic accuracy was lowest for the "image only" (19.90%) and "options only" (20.67%) conditions. The highest accuracy was observed in the "image + clinical information + options" (80.88%) and "clinical information + options" (75.45%) conditions. The highest interobserver agreement was observed in the "image + clinical information + options" condition (κ = 0.733) and the lowest in the "options only" condition (κ = 0.023), suggesting that more information improves consistency. However, post-hoc analysis showed no additional benefit from imaging data when clinical data and answer options were already provided. In human comparison, ChatGPT-4o outperformed radiology residents in text-based configurations (75.45% vs. 42.89%), whereas residents showed slightly better performance in image-based tasks (64.13% vs. 61.24%). Notably, when residents were allowed to use ChatGPT-4o as a support tool, their image-based diagnostic accuracy increased from 63.04% to 74.16%.
ChatGPT-4o performs well when provided with rich textual input but remains limited in purely image-based diagnoses. Its accuracy and consistency increase with multimodal input, yet adding imaging does not significantly improve performance beyond clinical context and diagnostic options alone. The model's superior performance to residents in text-based tasks underscores its potential as a diagnostic aid in structured scenarios. Furthermore, its integration as a support tool may enhance human diagnostic accuracy, particularly in image-based interpretation. Although ChatGPT-4o is not yet capable of reliably interpreting radiologic images on its own, it demonstrates strong performance in text-based diagnostic reasoning. Its integration into clinical workflows, particularly for triage, structured decision support, or educational purposes, may augment radiologists' diagnostic capacity and consistency.
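Fleiss' kappa, used above to measure consistency across the three accounts, compares the observed per-case agreement with the agreement expected by chance given the marginal category frequencies. A minimal sketch on toy ratings, not the study's data:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for multiple raters and nominal categories.

    ratings: one row per case, one column per diagnosis category;
    each row holds rater counts and sums to the same n raters.
    """
    N = len(ratings)          # number of cases
    n = sum(ratings[0])       # raters per case
    k = len(ratings[0])       # number of categories
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                 # mean observed agreement
    P_e = sum(p * p for p in p_j)        # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories: perfect agreement vs. chance-level agreement.
kappa_perfect = fleiss_kappa([[3, 0], [0, 3], [3, 0]])
kappa_chance = fleiss_kappa([[2, 1], [1, 2], [3, 0]])
```

Kappa near 1 (as in the "image + clinical information + options" condition, κ = 0.733) indicates agreement well above chance; values near 0 (as in "options only", κ = 0.023) indicate essentially chance-level consistency.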

MRN: Harnessing 2D Vision Foundation Models for Diagnosing Parkinson's Disease with Limited 3D MR Data

Ding Shaodong, Liu Ziyang, Zhou Yijun, Liu Tao

arxiv preprint · Sep 22, 2025
The automatic diagnosis of Parkinson's disease is in high clinical demand due to its prevalence and the importance of targeted treatment. Current clinical practice often relies on diagnostic biomarkers in QSM and NM-MRI images. However, the lack of large, high-quality datasets makes training diagnostic models from scratch prone to overfitting. Adapting pre-trained 3D medical models is also challenging, as the diversity of medical imaging leads to mismatches in voxel spacing and modality between pre-training and fine-tuning data. In this paper, we address these challenges by leveraging 2D vision foundation models (VFMs). Specifically, we crop multiple key ROIs from NM and QSM images, process each ROI through separate branches to compress the ROI into a token, and then combine these tokens into a unified patient representation for classification. Within each branch, we use 2D VFMs to encode axial slices of the 3D ROI volume and fuse them into the ROI token, guided by an auxiliary segmentation head that steers the feature extraction toward specific brain nuclei. Additionally, we introduce multi-ROI supervised contrastive learning, which improves diagnostic performance by pulling together representations of patients from the same class while pushing away those from different classes. Our approach achieved first place in the MICCAI 2025 PDCADxFoundation challenge, with an accuracy of 86.0% trained on a dataset of only 300 labeled QSM and NM-MRI scans, outperforming the second-place method by 5.5%. These results highlight the potential of 2D VFMs for clinical analysis of 3D MR images.
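The supervised contrastive objective described above pulls same-class patient representations together and pushes different-class ones apart in embedding space. A minimal NumPy sketch of a supervised contrastive loss (illustrative; the paper's exact multi-ROI formulation may differ):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss over a batch of embeddings z (one row each)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # work in cosine space
    sim = z @ z.T / tau
    n, total, count = len(labels), 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        m = max(sim[i][j] for j in others)
        # numerically stable log-sum-exp over all non-anchor samples
        lse = m + np.log(sum(np.exp(sim[i][j] - m) for j in others))
        pos = [j for j in others if labels[j] == labels[i]]
        if not pos:
            continue  # anchors with no positives are skipped
        total += -sum(sim[i][j] - lse for j in pos) / len(pos)
        count += 1
    return total / count

labels = [0, 0, 1, 1]
z_clustered = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])  # classes grouped
z_mixed = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.1], [0.1, 1.0]])      # classes interleaved
```

The loss is lower when same-class embeddings cluster, which is exactly the geometry the training objective rewards.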

Visual Instruction Pretraining for Domain-Specific Foundation Models

Yuxuan Li, Yicheng Zhang, Wenhao Tang, Yimian Dai, Ming-Ming Cheng, Xiang Li, Jian Yang

arxiv preprint · Sep 22, 2025
Modern computer vision is converging on a closed loop in which perception, reasoning and generation mutually reinforce each other. However, this loop remains incomplete: the top-down influence of high-level reasoning on the foundational learning of low-level perceptual features remains underexplored. This paper addresses this gap by proposing a new paradigm for pretraining foundation models in downstream domains. We introduce Visual insTruction Pretraining (ViTP), a novel approach that directly leverages reasoning to enhance perception. ViTP embeds a Vision Transformer (ViT) backbone within a Vision-Language Model and pretrains it end-to-end using a rich corpus of visual instruction data curated from target downstream domains. ViTP is powered by our proposed Visual Robustness Learning (VRL), which compels the ViT to learn robust and domain-relevant features from a sparse set of visual tokens. Extensive experiments on 16 challenging remote sensing and medical imaging benchmarks demonstrate that ViTP establishes new state-of-the-art performance across a diverse range of downstream tasks. The code is available at github.com/zcablii/ViTP.

Path-Weighted Integrated Gradients for Interpretable Dementia Classification

Firuz Kamalov, Mohmad Al Falasi, Fadi Thabtah

arxiv preprint · Sep 22, 2025
Integrated Gradients (IG) is a widely used attribution method in explainable artificial intelligence (XAI). In this paper, we introduce Path-Weighted Integrated Gradients (PWIG), a generalization of IG that incorporates a customizable weighting function into the attribution integral. This modification allows for targeted emphasis along different segments of the path between a baseline and the input, enabling improved interpretability, noise mitigation, and the detection of path-dependent feature relevance. We establish its theoretical properties and illustrate its utility through experiments on a dementia classification task using the OASIS-1 MRI dataset. Attribution maps generated by PWIG highlight clinically meaningful brain regions associated with various stages of dementia, providing users with sharp and stable explanations. The results suggest that PWIG offers a flexible and theoretically grounded approach for enhancing attribution quality in complex predictive models.
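PWIG generalizes Integrated Gradients by inserting a weight w(α) into the path integral, attribution_j = (x_j - b_j) ∫₀¹ w(α) ∂f/∂x_j(b + α(x - b)) dα, so that w ≡ 1 recovers plain IG along with IG's completeness property (attributions sum to f(x) - f(b)). A minimal NumPy sketch using numeric gradients on a toy function (not the paper's experimental setup):

```python
import numpy as np

def pwig(f, x, baseline, weight=lambda a: 1.0, steps=200):
    """Path-Weighted Integrated Gradients along the straight-line path.

    weight(alpha) emphasizes segments of the path; weight=1 is plain IG.
    Gradients are taken numerically (central differences) for simplicity.
    """
    x = np.asarray(x, float)
    baseline = np.asarray(baseline, float)
    diff = x - baseline
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule on [0, 1]
    attr = np.zeros_like(x)
    eps = 1e-5
    for a in alphas:
        point = baseline + a * diff
        grad = np.array([(f(point + eps * e) - f(point - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        attr += weight(a) * grad
    return diff * attr / steps  # Riemann sum approximates the integral

f = lambda v: v[0] ** 2 + 3.0 * v[1]
x, b = np.array([2.0, 1.0]), np.array([0.0, 0.0])
ig = pwig(f, x, b)                               # uniform weight: plain IG
ig_late = pwig(f, x, b, weight=lambda a: 2 * a)  # emphasize the end of the path
```

With the uniform weight the attributions satisfy completeness; a non-uniform weight trades that axiom for targeted emphasis along the path, which is the flexibility PWIG studies.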

Comprehensive Assessment of Tumor Stromal Heterogeneity in Bladder Cancer by Deep Learning and Habitat Radiomics.

Du Y, Sui Y, Tao Y, Cao J, Jiang X, Yu J, Wang B, Wang Y, Li H

pubmed · Sep 22, 2025
Tumor stromal heterogeneity plays a pivotal role in bladder cancer progression. The tumor-stroma ratio (TSR) is a key pathological marker reflecting stromal heterogeneity. This study aimed to develop a preoperative, CT-based machine learning model for predicting TSR in bladder cancer, comparing various radiomic approaches, and evaluating their utility in prognostic assessment and immunotherapy response prediction. A total of 477 bladder urothelial carcinoma patients from two centers were retrospectively included. Tumors were segmented on preoperative contrast-enhanced CT, and radiomic features were extracted. K-means clustering was used to divide tumors into subregions. Radiomics models were constructed: a conventional model (Intra), a multi-subregion model (Habitat), and single-subregion models (HabitatH1/H2/H3). A deep transfer learning model (DeepL) based on the largest tumor cross-section was also developed. Model performance was evaluated in training, testing, and external validation cohorts, and associations with recurrence-free survival, CD8+ T cell infiltration, and immunotherapy response were analyzed. The HabitatH1 model demonstrated robust diagnostic performance with favorable calibration and clinical utility. The DeepL model surpassed all radiomics models in predictive accuracy. A nomogram combining DeepL and clinical variables effectively predicted recurrence-free survival, CD8+ T cell infiltration, and immunotherapy response. Imaging-predicted TSR showed significant associations with the tumor immune microenvironment and treatment outcomes. CT-based habitat radiomics and deep learning models enable non-invasive, quantitative assessment of TSR in bladder cancer. The DeepL model provides superior diagnostic and prognostic value, supporting personalized treatment decisions and prediction of immunotherapy response.
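Habitat radiomics, as used above, partitions each tumor into subregions by clustering voxel- or feature-level measurements, typically with K-means. A minimal NumPy sketch on toy 2-D feature vectors (the study clustered into three habitats, H1-H3; the toy data below uses two well-separated groups for clarity):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: assign points to the nearest center, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        # keep the old center if a cluster becomes empty
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Toy "voxel features": two well-separated habitats.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1]])
labels, centers = kmeans(X, k=2)
```

Each resulting subregion then gets its own radiomic feature set, which is how the single-habitat models (HabitatH1/H2/H3) and the multi-subregion Habitat model in the study are built.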
