Page 69 of 311 (3104 results)

Kissing Spine and Other Imaging Predictors of Postoperative Cement Displacement Following Percutaneous Kyphoplasty: A Machine Learning Approach.

Zhao Y, Bo L, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

PubMed | Jul 23, 2025
To investigate the risk factors associated with postoperative cement displacement following percutaneous kyphoplasty (PKP) in patients with osteoporotic vertebral compression fractures (OVCF) and to develop predictive models for clinical risk assessment. This retrospective study included 198 patients with OVCF who underwent PKP. Imaging and clinical variables were collected. Multiple machine learning models, including logistic regression, L1- and L2-regularized logistic regression, support vector machine (SVM), decision tree, gradient boosting, and random forest, were developed to predict cement displacement. The L1- and L2-regularized logistic regression models identified four key risk factors: kissing spine (L1: 1.11; L2: 0.91), incomplete anterior cortex (L1: -1.60; L2: -1.62), low vertebral body CT value (L1: -2.38; L2: -1.71), and large Cobb change (L1: 0.89; L2: 0.87). The SVM model achieved the best performance (accuracy: 0.983, precision: 0.875, recall: 1.000, F1-score: 0.933, specificity: 0.981, AUC: 0.997). The other models, including logistic regression, decision tree, gradient boosting, and random forest, also performed well but were slightly inferior to the SVM. Key predictors of cement displacement were identified, and machine learning models were developed for risk assessment. These findings can assist clinicians in identifying high-risk patients, optimizing treatment strategies, and improving patient outcomes.
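As an illustration only (not the authors' code or data), the reported evaluation metrics — accuracy, precision, recall, F1-score, and specificity — all follow from a single confusion matrix over predicted and true displacement labels; a minimal sketch:

```python
def classification_metrics(y_true, y_pred):
    """Binary-classification metrics of the kind reported for the
    cement-displacement models (1 = displacement, 0 = no displacement)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```

A recall of 1.000 with specificity 0.981, as reported for the SVM, means every true displacement was flagged while 1.9% of stable cases were false positives.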

Deep learning-based temporal muscle quantification on MRI predicts adverse outcomes in acute ischemic stroke.

Huang R, Chen J, Wang H, Wu X, Hu H, Zheng W, Ye X, Su S, Zhuang Z

PubMed | Jul 23, 2025
To develop a deep learning (DL) pipeline for accurate slice selection, temporal muscle (TM) segmentation, TM thickness (TMT) and area (TMA) quantification, and assessment of the prognostic role of TMT and TMA in acute ischemic stroke (AIS) patients. A total of 1020 AIS patients were enrolled. Participants were divided into three datasets: Dataset 1 (n = 295) for slice selection using a ResNet50 model, Dataset 2 (n = 258) for TM segmentation employing a TransUNet-based algorithm, and Dataset 3 (n = 467) for evaluating DL-based quantification of TMT and TMA as prognostic factors in AIS. The ability of the DL system to select slices was assessed using accuracy, ±1-slice accuracy, and mean absolute error. The Dice similarity coefficient (DSC) was used to assess the performance of the DL system on TM segmentation. The association between automatic quantification of TMT and TMA and 6-month outcomes was determined. Automatic slice selection achieved a mean accuracy of 72.91% and a ±1-slice accuracy of 97.94%, with a mean absolute error of 1.54 mm, while TM segmentation on T1WI achieved a mean DSC of 0.858. Automatically extracted TMT and TMA were each independently associated with 6-month poor outcomes in AIS patients after adjusting for age, sex, Onodera prognostic nutritional index, systemic immune-inflammation index, albumin levels, and smoking/drinking history (TMT: hazard ratio 0.736, 95% confidence interval 0.528-0.931; TMA: hazard ratio 0.702, 95% confidence interval 0.541-0.910). TMT and TMA are robust prognostic markers in AIS patients, and our end-to-end DL pipeline enables rapid, automated quantification that integrates seamlessly into clinical workflows, supporting scalable risk stratification and personalized rehabilitation planning.
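A minimal sketch of the three slice-selection metrics reported above (exact accuracy, ±1-slice accuracy, mean absolute error). The `slice_thickness_mm` parameter is a hypothetical assumption used to convert index error to millimetres, not a value from the paper:

```python
def slice_selection_metrics(pred_idx, true_idx, slice_thickness_mm=1.0):
    """Compare predicted vs. reference slice indices.
    Returns (exact accuracy, +/-1-slice accuracy, mean absolute error in mm)."""
    n = len(true_idx)
    exact = sum(p == t for p, t in zip(pred_idx, true_idx)) / n
    within_one = sum(abs(p - t) <= 1 for p, t in zip(pred_idx, true_idx)) / n
    mae_mm = sum(abs(p - t) for p, t in zip(pred_idx, true_idx)) / n * slice_thickness_mm
    return exact, within_one, mae_mm
```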

Role of Brain Age Gap as a Mediator in the Relationship Between Cognitive Impairment Risk Factors and Cognition.

Tan WY, Huang X, Huang J, Robert C, Cui J, Chen CPLH, Hilal S

PubMed | Jul 22, 2025
Cerebrovascular disease (CeVD) and cognitive impairment risk factors contribute to cognitive decline, but the role of brain age gap (BAG) in mediating this relationship remains unclear, especially in Southeast Asian populations. This study investigated the influence of cognitive impairment risk factors on cognition and examined how BAG mediates this relationship, particularly in individuals with varying CeVD burden. This cross-sectional study analyzed Singaporean community and memory clinic participants. Cognitive impairment risk factors were assessed using the Cognitive Impairment Scoring System (CISS), encompassing 11 sociodemographic and vascular factors. Cognition was assessed through a neuropsychological battery, evaluating global cognition and 6 cognitive domains: executive function, attention, memory, language, visuomotor speed, and visuoconstruction. Brain age was derived from structural MRI features using an ensemble machine learning model. Propensity score matching balanced risk profiles between the model-training sample and the remaining sample. Structural equation modeling examined the mediating effect of BAG on the CISS-cognition relationship, stratified by CeVD burden (high: CeVD+, low: CeVD-). The study included 1,437 individuals without dementia, with 646 in the matched sample (mean age 66.4 ± 6.0 years, 47% female, 60% with no cognitive impairment). Higher CISS was consistently associated with poorer cognitive performance across all domains, with the strongest negative associations in visuomotor speed (β = -2.70, <i>p</i> < 0.001) and visuoconstruction (β = -3.02, <i>p</i> < 0.001). Among the CeVD+ group, BAG significantly mediated the relationship between CISS and global cognition (proportion mediated: 19.95%, <i>p</i> = 0.01), with the strongest mediation effects in executive function (34.1%, <i>p</i> = 0.03) and language (26.6%, <i>p</i> = 0.008).
BAG also mediated the relationship between CISS and memory (21.1%) and visuoconstruction (14.4%) in the CeVD+ group, but these effects diminished after statistical adjustments. Our findings suggest that BAG is a key intermediary linking cognitive impairment risk factors to cognitive function, particularly in individuals with high CeVD burden. This mediation effect is domain-specific, with executive function, language, and visuoconstruction being the most vulnerable to accelerated brain aging. Limitations of this study include the cross-sectional design, limiting causal inference, and the focus on Southeast Asian populations, limiting generalizability. Future longitudinal studies should verify these relationships and explore additional factors not captured in our model.

Dual-Network Deep Learning for Accelerated Head and Neck MRI: Enhanced Image Quality and Reduced Scan Time.

Li S, Yan W, Zhang X, Hu W, Ji L, Yue Q

PubMed | Jul 22, 2025
Head-and-neck MRI faces inherent challenges, including motion artifacts and trade-offs between spatial resolution and acquisition time. We aimed to evaluate a dual-network deep learning (DL) super-resolution method for improving image quality and reducing scan time in T1- and T2-weighted head-and-neck MRI. In this prospective study, 97 patients with head-and-neck masses were enrolled at xx from August 2023 to August 2024. After exclusions, 58 participants underwent paired conventional and accelerated T1WI and T2WI MRI sequences, with the accelerated sequences being reconstructed using a dual-network DL framework for super-resolution. Image quality was assessed both quantitatively (signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], contrast ratio [CR]) and qualitatively by two blinded radiologists using a 5-point Likert scale for image sharpness, lesion conspicuity, structure delineation, and artifacts. Wilcoxon signed-rank tests were used to compare paired outcomes. Among 58 participants (34 men, 24 women; mean age 51.37 ± 13.24 years), DL reconstruction reduced scan times by 46.3% (T1WI) and 26.9% (T2WI). Quantitative analysis showed significant improvements in SNR (T1WI: 26.33 vs. 20.65; T2WI: 14.14 vs. 11.26) and CR (T1WI: 0.20 vs. 0.18; T2WI: 0.34 vs. 0.30; all p < 0.001), with comparable CNR (p > 0.05). Qualitatively, image sharpness, lesion conspicuity, and structure delineation improved significantly (p < 0.05), while artifact scores remained similar (all p > 0.05). The dual-network DL method significantly enhanced image quality and reduced scan times in head-and-neck MRI while maintaining diagnostic performance comparable to conventional methods. This approach offers potential for improved workflow efficiency and patient comfort.
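The quantitative image-quality metrics above (SNR, CNR, CR) are conventionally computed from region-of-interest (ROI) means and background noise. The ROI conventions below are an assumption for illustration, not the paper's measurement protocol:

```python
import numpy as np

def roi_quality_metrics(signal_roi, ref_roi, background_roi):
    """SNR, CNR, and contrast ratio (CR) from image ROIs:
    SNR = mean(signal) / sd(background),
    CNR = |mean(signal) - mean(reference)| / sd(background),
    CR  = |mean(signal) - mean(reference)| / (mean(signal) + mean(reference))."""
    s = float(np.mean(signal_roi))
    r = float(np.mean(ref_roi))
    noise = float(np.std(background_roi))
    return s / noise, abs(s - r) / noise, abs(s - r) / (s + r)
```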

AgentMRI: A Vision Language Model-Powered AI System for Self-regulating MRI Reconstruction with Multiple Degradations.

Sajua GA, Akhib M, Chang Y

PubMed | Jul 22, 2025
Artificial intelligence (AI)-driven autonomous agents are transforming multiple domains by integrating reasoning, decision-making, and task execution into a unified framework. In medical imaging, such agents have the potential to change workflows by reducing human intervention and optimizing image quality. In this paper, we introduce AgentMRI, an AI-driven system that leverages vision language models (VLMs) for fully autonomous magnetic resonance imaging (MRI) reconstruction in the presence of multiple degradations. Unlike traditional MRI correction or reconstruction methods, AgentMRI relies on neither manual post-processing nor fixed correction models. Instead, it dynamically detects MRI corruption and then automatically selects the best correction model for image reconstruction. The framework uses a multi-query VLM strategy to ensure robust corruption detection through consensus-based decision-making and confidence-weighted inference. AgentMRI automatically chooses among deep learning models for MRI reconstruction, motion correction, and denoising. We evaluated AgentMRI in both zero-shot and fine-tuned settings. Experimental results on a comprehensive brain MRI dataset demonstrate that AgentMRI achieves an average accuracy of 73.6% in the zero-shot setting and 95.1% in the fine-tuned setting. Experiments show that it accurately executes the reconstruction process without human intervention. AgentMRI eliminates manual intervention and introduces a scalable, multimodal AI framework for autonomous MRI processing. This work is a step toward fully autonomous and intelligent MR image reconstruction systems.
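A minimal sketch of confidence-weighted, consensus-based corruption detection over repeated VLM queries, in the spirit of the multi-query strategy described; the (label, confidence) representation is an assumption, not AgentMRI's actual interface:

```python
from collections import defaultdict

def consensus_label(query_results):
    """Confidence-weighted consensus over repeated VLM queries.
    query_results: list of (corruption_label, confidence) pairs from
    independent queries; the label with the highest summed confidence wins."""
    scores = defaultdict(float)
    for label, confidence in query_results:
        scores[label] += confidence
    return max(scores, key=scores.get)
```

The winning label would then route the image to the matching correction model (e.g., motion correction vs. denoising).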

Deep learning algorithm for the automatic assessment of axial vertebral rotation in patients with scoliosis using the Nash-Moe method.

Kim JK, Wang MX, Park D, Chang MC

PubMed | Jul 22, 2025
Accurate assessment of axial vertebral rotation (AVR) is essential for managing idiopathic scoliosis. The Nash-Moe classification method has been extensively used for AVR assessment; however, its subjective nature can lead to measurement variability. Therefore, herein, we propose an automated deep learning (DL) model for AVR assessment based on posteroanterior spinal radiographs. We developed a two-stage DL framework using the MMRotate toolbox and analyzed 1080 posteroanterior spinal radiographs of patients aged 4-18 years. The framework comprises a vertebra detection model (864 training and 216 validation images) and a pedicle detection model (14,608 training and 3652 validation images). We improved the Nash-Moe classification method by implementing a 12-segment division system and a width-ratio metric for precise pedicle assessment. The vertebra and pedicle detection models achieved mean average precision values of 0.909 and 0.905, respectively. The overall classification accuracy was 0.74, with grade-specific precision between 0.70 and 1.00 and recall between 0.33 and 0.93 across Grades 0-3. The proposed DL framework processed complete posteroanterior radiographs in < 5 s per case, compared with 114 s per radiograph for conventional manual measurement. Performance was best in mild to moderate rotation cases and was limited in severe rotation cases by insufficient data. The automated Nash-Moe classification implemented with the DL framework exhibited satisfactory accuracy and exceptional efficiency. However, this study is limited by the low recall (0.33) for Grade 3 and the inability to classify Grade 4 owing to dataset constraints. Further validation using augmented datasets that include severe rotation cases is necessary.
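A hedged sketch of how a 12-segment division of the vertebral body width might locate a detected pedicle centre prior to grading; the paper's exact rule is not given in the abstract, so the geometry below is illustrative only:

```python
def pedicle_segment(vertebra_x0, vertebra_x1, pedicle_cx, n_segments=12):
    """Index (0 .. n_segments-1) of the horizontal segment of the vertebral
    body that contains the pedicle centre. vertebra_x0/x1 are the left and
    right edges of the detected vertebra; pedicle_cx is the pedicle centre."""
    width = vertebra_x1 - vertebra_x0
    rel = (pedicle_cx - vertebra_x0) / width   # relative position in [0, 1]
    rel = min(max(rel, 0.0), 1.0 - 1e-9)       # clamp inside the body
    return int(rel * n_segments)
```

A grading rule would then map how far the convex-side pedicle has migrated from its segment toward the midline onto Nash-Moe Grades 0-4.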

Re-identification of patients from imaging features extracted by foundation models.

Nebbia G, Kumar S, McNamara SM, Bridge C, Campbell JP, Chiang MF, Mandava N, Singh P, Kalpathy-Cramer J

PubMed | Jul 22, 2025
Foundation models for medical imaging are a prominent research topic, but the risks associated with the imaging features they capture have not been explored. We aimed to assess whether imaging features from foundation models enable patient re-identification and to relate re-identification to demographic feature prediction. Our data included colour fundus photos (CFP), optical coherence tomography (OCT) b-scans, and chest x-rays, for which we report re-identification rates of 40.3%, 46.3%, and 25.9%, respectively. Performance on demographic feature prediction varied with re-identification status (e.g., the AUC-ROC for gender from CFP is 82.1% for re-identified images vs. 76.8% for non-re-identified ones). When training a deep learning model on the re-identification task, we report image-level performance of 82.3%, 93.9%, and 63.7% on our internal CFP, OCT, and chest x-ray data. We show that imaging features extracted from foundation models in ophthalmology and radiology include information that can lead to patient re-identification.
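A minimal sketch of how a top-1 re-identification rate can be computed from foundation-model embeddings via cosine nearest-neighbour matching; this illustrates the metric, not the authors' exact pipeline:

```python
import numpy as np

def reidentification_rate(emb_a, emb_b, ids_a, ids_b):
    """Top-1 re-identification rate: for each embedding in set A (one image
    per patient), find its nearest cosine neighbour in set B (a later image
    of each patient) and check whether the patient IDs match."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = a @ b.T                      # pairwise cosine similarities
    nearest = sims.argmax(axis=1)       # best match in B for each row of A
    hits = sum(ids_a[i] == ids_b[j] for i, j in enumerate(nearest))
    return hits / len(ids_a)
```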

MAN-GAN: a mask-adaptive normalization based generative adversarial networks for liver multi-phase CT image generation.

Zhao W, Chen W, Fan L, Shang Y, Wang Y, Situ W, Li W, Liu T, Yuan Y, Liu J

PubMed | Jul 22, 2025
Liver multiphase enhanced computed tomography (MPECT) is vital in clinical practice, but its utility is limited by various factors. We aimed to develop a deep learning network capable of automatically generating MPECT images from standard non-contrast CT scans. Dataset 1 included 374 patients and was divided into a training set, a validation set, and a test set. Dataset 2 included 144 patients with one specific liver disease and was used as an internal test dataset. We further collected another dataset comprising 83 patients for external validation. We propose a Mask-Adaptive Normalization-based Generative Adversarial Network with Cycle-Consistency Loss (MAN-GAN) to achieve non-contrast CT to MPECT translation. To assess the efficiency of MAN-GAN, we conducted a comparative analysis with state-of-the-art methods commonly employed in diverse medical image synthesis tasks. Moreover, two subjective radiologist evaluation studies were performed to verify the clinical usefulness of the generated images. MAN-GAN outperformed the baseline network and other state-of-the-art methods in generating all three phases. These results were verified on the internal and external datasets. According to the radiological evaluation, the image quality of the generated images was above average for all three phases, and the similarity between real and generated images was satisfactory in all three phases. MAN-GAN demonstrates the feasibility of liver MPECT image translation from non-contrast images and achieves state-of-the-art performance via the subtraction strategy. It has great potential for resolving the dilemma of liver CT contrast scanning and aiding further liver-related clinical scenarios.
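A minimal sketch of the cycle-consistency term named in MAN-GAN: translating a non-contrast image to a contrast phase and back should reproduce the input. NumPy arrays stand in for images and the generator function names are hypothetical:

```python
import numpy as np

def cycle_consistency_loss(real_nc, real_ce, g_nc2ce, g_ce2nc):
    """L1 cycle-consistency between the non-contrast (nc) and
    contrast-enhanced (ce) domains: G_ce2nc(G_nc2ce(x)) should equal x,
    and G_nc2ce(G_ce2nc(y)) should equal y."""
    forward = np.mean(np.abs(g_ce2nc(g_nc2ce(real_nc)) - real_nc))
    backward = np.mean(np.abs(g_nc2ce(g_ce2nc(real_ce)) - real_ce))
    return forward + backward
```

The loss is zero only when both round trips are perfect reconstructions, which is what constrains unpaired translation to preserve anatomy.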

Training Language Models for Estimating Priority Levels in Ultrasound Examination Waitlists: Algorithm Development and Validation.

Masayoshi K, Hashimoto M, Toda N, Mori H, Kobayashi G, Haque H, So M, Jinzaki M

PubMed | Jul 22, 2025
Ultrasound examinations, while valuable, are time-consuming and often limited in availability. Consequently, many hospitals implement reservation systems; however, these systems typically lack prioritization for examination purposes. Hence, our hospital uses a waitlist system that prioritizes examination requests based on their clinical value when slots become available due to cancellations. This system, however, requires a manual review of examination purposes, which are recorded in free-form text. We hypothesized that artificial intelligence language models could preliminarily estimate the priority of requests before manual review. This study aimed to investigate potential challenges associated with using language models for estimating the priority of medical examination requests and to evaluate the performance of language models in processing Japanese medical texts. We retrospectively collected ultrasound examination requests from the waitlist system at Keio University Hospital, spanning January 2020 to March 2023. Each request comprised an examination purpose documented by the requesting physician and a 6-tier priority level assigned by a radiologist during the clinical workflow. We fine-tuned JMedRoBERTa, Luke, OpenCalm, and LLaMA2 under two conditions: (1) tuning only the final layer and (2) tuning all layers using either standard backpropagation or low-rank adaptation. After cleaning, the training and test datasets contained 2335 and 204 requests, respectively. When only the final layers were tuned, JMedRoBERTa outperformed the other models (Kendall coefficient = 0.225). With full fine-tuning, JMedRoBERTa continued to perform best (Kendall coefficient = 0.254), though with reduced margins over the other models. The radiologist's retrospective re-evaluation yielded a Kendall coefficient of 0.221. Language models can estimate the priority of examination requests with accuracy comparable to that of human radiologists.
The fine-tuning results indicate that general-purpose language models can be adapted to domain-specific texts (ie, Japanese medical texts) with sufficient fine-tuning. Further research is required to address priority rank ambiguity, expand the dataset across multiple institutions, and explore more recent language models with potentially higher performance or better suitability for this task.
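The Kendall coefficients above measure rank agreement between predicted and assigned priority levels. A minimal sketch of Kendall's tau-a (concordant minus discordant pairs over all pairs, ignoring ties; published results on tied 6-tier levels would normally use a tie-corrected variant such as tau-b):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a) between two equal-length sequences:
    +1 for perfect agreement in ordering, -1 for perfect reversal."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs
```

A coefficient near 0.25, as reported, indicates a modest but real ordering agreement, on par with the radiologist's own re-evaluation (0.221).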

A Benchmark Framework for the Right Atrium Cavity Segmentation From LGE-MRIs.

Bai J, Zhu J, Chen Z, Yang Z, Lu Y, Li L, Li Q, Wang W, Zhang H, Wang K, Gan J, Zhao J, Lu H, Li S, Huang J, Chen X, Zhang X, Xu X, Li L, Tian Y, Campello VM, Lekadir K

PubMed | Jul 22, 2025
The right atrium (RA) is critical for cardiac hemodynamics but is often overlooked in clinical diagnostics. This study presents a benchmark framework for RA cavity segmentation from late gadolinium-enhanced magnetic resonance imaging (LGE-MRIs), leveraging a two-stage strategy and a novel 3D deep learning network, RASnet. The architecture addresses challenges in class imbalance and anatomical variability by incorporating multi-path input, multi-scale feature fusion modules, Vision Transformers, context interaction mechanisms, and deep supervision. Evaluated on datasets comprising 354 LGE-MRIs, RASnet achieves state-of-the-art performance with a Dice score of 92.19% on a primary dataset and demonstrates robust generalizability on an independent dataset. The proposed framework establishes a benchmark for RA cavity segmentation, enabling accurate and efficient analysis for cardiac imaging applications. Open-source code (https://github.com/zjinw/RAS) and data (https://zenodo.org/records/15524472) are provided to facilitate further research and clinical adoption.
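A minimal sketch of the Dice similarity coefficient used to score segmentations such as RASnet's, computed on binary masks (illustrative only, not the benchmark's evaluation code):

```python
import numpy as np

def dice_score(pred_mask, true_mask):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |P intersect T| / (|P| + |T|), in [0, 1]."""
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom else 1.0
```

A Dice of 92.19% means predicted and reference RA cavities overlap in roughly 92% of their combined voxel mass.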
