
Multi-organ AI Endophenotypes Chart the Heterogeneity of Pan-disease in the Brain, Eye, and Heart

The MULTI Consortium, Boquet-Pujadas, A., Anagnostakis, F., Yang, Z., Tian, Y. E., Duggan, M., Erus, G., Srinivasan, D., Joynes, C., Bai, W., Patel, P., Walker, K. A., Zalesky, A., Davatzikos, C., Wen, J.

medRxiv preprint · Aug 13, 2025
Disease heterogeneity and commonality pose significant challenges to precision medicine, as traditional approaches frequently focus on single disease entities and overlook shared mechanisms across conditions [1]. Inspired by pan-cancer [2] and multi-organ research [3], we introduce the concept of "pan-disease" to investigate the heterogeneity and shared etiology of brain, eye, and heart diseases. Leveraging individual-level data from 129,340 participants, as well as summary-level data from the MULTI consortium, we applied a weakly-supervised deep learning model (Surreal-GAN [4,5]) to multi-organ imaging, genetic, proteomic, and RNA-seq data, identifying 11 AI-derived biomarkers - called Multi-organ AI Endophenotypes (MAEs) - for the brain (Brain 1-6), eye (Eye 1-3), and heart (Heart 1-2). We found Brain 3 to be a risk factor for Alzheimer's disease (AD) progression and mortality, whereas Brain 5 was protective against AD progression. Crucially, in trial data for an anti-amyloid AD drug (solanezumab [6]), heterogeneity in cognitive decline trajectories was observed across treatment groups: at week 240, patients with lower Brain 1-3 expression had slower cognitive decline, whereas patients with higher expression had faster cognitive decline. A multi-layer causal pathway pinpointed Brain 1 as a mediational endophenotype [7] linking the FLRT2 protein to migraine, exemplifying novel therapeutic targets and pathways. Additionally, genes associated with Eye 1 and Eye 3 were enriched in cancer drug-related gene sets with causal links to specific cancer types and proteins. Finally, Heart 1 and Heart 2 had the highest mortality risk and distinct medication history profiles, with Heart 1 showing favorable responses to antihypertensive medications and Heart 2 to digoxin treatment. The 11 MAEs provide novel AI dimensional representations for precision medicine and highlight the potential of AI-driven patient stratification for disease risk monitoring, clinical trials, and drug discovery.

Graph Neural Networks for Realistic Bleeding Prediction in Surgical Simulators.

Kakdas YC, De S, Demirel D

PubMed paper · Aug 12, 2025
This study presents a novel approach that uses graph neural networks to predict the risk of internal bleeding from vessel maps derived from patient CT and MRI scans, with the aim of enhancing the realism of surgical simulators for emergency scenarios such as trauma, where rapid detection of internal bleeding can be lifesaving. First, medical images are segmented and converted into graph representations of the vasculature, where nodes represent vessel branching points with spatial coordinates and edges encode vessel features such as length and radius. Because no existing dataset directly labels bleeding risk, we calculate the bleeding probability for each vessel node using a physics-based heuristic: peripheral vascular resistance via the Hagen-Poiseuille equation. A graph attention network is then trained to regress these probabilities, effectively learning to predict hemorrhage risk from the graph-structured imaging data. The model is trained using tenfold cross-validation on a combined dataset of 1708 vessel graphs extracted from four public image datasets (MSD, KiTS, AbdomenCT, CT-ORG), with optimization via the Adam optimizer, mean squared error loss, early stopping, and L2 regularization. In predicting bleeding risk, our model achieves a mean R-squared of 0.86, reaching up to 0.9188 in optimal configurations, with low mean training and validation losses of 0.0069 and 0.0074, respectively, and performs best on well-connected vascular graphs. Finally, we integrate the trained model into an immersive virtual reality environment to simulate intra-abdominal bleeding scenarios for surgical training. The model demonstrates robust predictive performance despite the inherent sparsity of real-life datasets.
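The abstract names the physics behind the labeling heuristic but not how resistance becomes a per-node probability. The sketch below, using networkx for the vessel graph, computes Hagen-Poiseuille resistance per segment and applies an assumed inverse-resistance normalization (lower resistance implies higher flow and higher bleeding risk) to produce regression targets; the toy graph and viscosity value are illustrative only.

```python
import math
import networkx as nx

# Toy vessel graph: nodes are branching points, edges are vessel segments
# with length (m) and radius (m), as described in the abstract.
G = nx.Graph()
G.add_edge("root", "branch_a", length=0.020, radius=0.0015)
G.add_edge("root", "branch_b", length=0.035, radius=0.0008)
G.add_edge("branch_a", "leaf_a", length=0.015, radius=0.0005)

MU_BLOOD = 3.5e-3  # dynamic viscosity of blood, Pa*s (typical literature value)

def hagen_poiseuille_resistance(length, radius, mu=MU_BLOOD):
    """Hydraulic resistance of a cylindrical vessel: R = 8*mu*L / (pi*r^4)."""
    return 8.0 * mu * length / (math.pi * radius ** 4)

# Per-node "bleeding score": here taken as the inverse of the summed resistance
# of incident segments (an assumption; the paper's exact mapping is not given).
scores = {}
for node in G.nodes:
    r_total = sum(
        hagen_poiseuille_resistance(d["length"], d["radius"])
        for _, _, d in G.edges(node, data=True)
    )
    scores[node] = 1.0 / r_total

# Normalize to [0, 1] so the scores can serve as regression targets for a GNN.
max_score = max(scores.values())
probs = {node: s / max_score for node, s in scores.items()}
print(probs)
```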

[Development of a machine learning-based diagnostic model for T-shaped uterus using transvaginal 3D ultrasound quantitative parameters].

Li SJ, Wang Y, Huang R, Yang LM, Lyu XD, Huang XW, Peng XB, Song DM, Ma N, Xiao Y, Zhou QY, Guo Y, Liang N, Liu S, Gao K, Yan YN, Xia EL

PubMed paper · Aug 12, 2025
Objective: To develop a machine learning diagnostic model for T-shaped uterus based on quantitative parameters from 3D transvaginal ultrasound. Methods: A retrospective cross-sectional study was conducted, recruiting 304 patients who visited the hysteroscopy centre of Fuxing Hospital, Beijing, China, between July 2021 and June 2024 for reasons such as "infertility or recurrent pregnancy loss" and other adverse obstetric histories. Twelve experts, including seven clinicians and five sonographers from Fuxing Hospital, Beijing Obstetrics and Gynecology Hospital of Capital Medical University, Peking University People's Hospital, and Beijing Hospital, independently and anonymously assessed the diagnosis of T-shaped uterus using a modified Delphi method. Based on the consensus results, 56 cases were classified into the T-shaped uterus group and 248 cases into the non-T-shaped uterus group. A total of 7 clinical features and 14 sonographic features were initially included. Features demonstrating significant diagnostic impact were selected using 10-fold cross-validated LASSO (least absolute shrinkage and selection operator) regression. Four machine learning algorithms [logistic regression (LR), decision tree (DT), random forest (RF), and support vector machine (SVM)] were subsequently implemented to develop T-shaped uterus diagnostic models. Using the Python random module, the patient dataset was randomly divided into five subsets of comparable size, each preserving the original class distribution (T-shaped uterus : non-T-shaped uterus ≈ 1:4). Five-fold cross-validation was performed, with four subsets used for training and one for validation in each round, to enhance the reliability of model evaluation. Model performance was rigorously assessed using established metrics: area under the receiver operating characteristic (ROC) curve (AUC), sensitivity, specificity, precision, and F1-score. In the RF model, feature importance was assessed by the mean decrease in Gini impurity attributed to each variable. Results: The 304 patients had a mean age of (35±4) years: (35±5) years in the T-shaped uterus group and (34±4) years in the non-T-shaped uterus group. Eight features with non-zero coefficients were selected by LASSO regression: average lateral wall indentation width, average lateral wall indentation angle, upper cavity depth, endometrial thickness, uterine cavity area, cavity width at the level of lateral wall indentation, angle formed by the bilateral lateral walls, and average cornual angle (coefficients: 0.125, -0.064, -0.037, -0.030, -0.026, -0.025, -0.025, and -0.024, respectively). The RF model showed the best diagnostic performance: in the training set, AUC was 0.986 (95% CI: 0.980-0.992), sensitivity 0.978, specificity 0.946, precision 0.802, and F1-score 0.881; in the testing set, AUC was 0.948 (95% CI: 0.911-0.985), sensitivity 0.873, specificity 0.919, precision 0.716, and F1-score 0.784. Feature importance analysis of the RF model revealed that average lateral wall indentation width, upper cavity depth, and average lateral wall indentation angle were the top three features (over 65% of total importance), playing a decisive role in model prediction. Conclusion: The machine learning models developed in this study, particularly the RF model, are promising for the diagnosis of T-shaped uterus, offering new perspectives and technical support for clinical practice.
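As a rough illustration of this pipeline (10-fold cross-validated LASSO feature selection followed by a random forest scored with stratified 5-fold cross-validation), here is a minimal scikit-learn sketch. The data are synthetic placeholders, since the ultrasound parameters themselves are not public, and the hyperparameters are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the 21 clinical + sonographic features
# (304 patients, ~1:4 class ratio).
X = rng.normal(size=(304, 21))
y = (rng.random(304) < 0.18).astype(int)  # 1 = T-shaped uterus

# Step 1: 10-fold cross-validated LASSO keeps features with non-zero coefficients.
X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=10, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:  # pure-noise fallback so the demo still runs
    selected = np.arange(X.shape[1])
print(f"LASSO kept {selected.size} of {X.shape[1]} features")

# Step 2: random forest evaluated with stratified 5-fold CV, mirroring the
# class-distribution-preserving split described in the paper.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(rf, X_std[:, selected], y, cv=cv, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.3f}")

# Feature importance in a fitted RF is the mean decrease in Gini impurity.
rf.fit(X_std[:, selected], y)
print(rf.feature_importances_)
```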

Exploring GPT-4o's multimodal reasoning capabilities with panoramic radiograph: the role of prompt engineering.

Xiong YT, Lian WJ, Sun YN, Liu W, Guo JX, Tang W, Liu C

PubMed paper · Aug 12, 2025
The aim of this study was to evaluate GPT-4o's multimodal reasoning ability to review panoramic radiographs (PRs) and verify their radiologic findings, while exploring the role of prompt engineering in enhancing its performance. The study included 230 PRs from West China Hospital of Stomatology in 2024, which were interpreted to generate the PR findings. A total of 300 interpretation errors were manually inserted into the PR findings. An ablation study was conducted to assess whether GPT-4o can reason about PRs under a zero-shot prompt. Prompt engineering was employed to enhance GPT-4o's ability to identify interpretation errors in PRs. The prompt strategies included chain-of-thought, self-consistency, in-context learning, multimodal in-context learning, and their systematic integration into a meta-prompt. Recall, accuracy, and F1 score were employed to evaluate the outputs. Subsequently, the localization capability of GPT-4o and its influence on reasoning capability were evaluated. In the ablation study, GPT-4o's recall increased significantly, from 2.67% to 43.33%, upon acquiring PRs (P < 0.001). GPT-4o with the meta-prompt demonstrated improvements in recall (43.33% vs. 52.67%, P = 0.022), accuracy (39.95% vs. 68.75%, P < 0.001), and F1 score (0.42 vs. 0.60, P < 0.001) compared to the zero-shot prompt and other prompt strategies. The localization accuracy of GPT-4o was 45.67% (137 out of 300; 95% CI: 40.00 to 51.34). A significant correlation was observed between its localization accuracy and reasoning capability under the meta-prompt (φ coefficient = 0.33, P < 0.001). Providing accurate localization cues within the meta-prompt increased the model's recall by 5.49% (P = 0.031). GPT-4o demonstrated a certain degree of multimodal capability for PRs, with performance enhanced through prompt engineering; nevertheless, its performance remains inadequate for clinical requirements. Future efforts will be necessary to identify additional factors influencing the model's reasoning capability or to develop more advanced models. This study evaluates GPT-4o's capability to interpret and reason through PRs and explores methods to enhance its performance before clinical application in assisting radiological assessments.
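The abstract names the ingredients of the meta-prompt but not its wording. The sketch below shows one plausible way to assemble chain-of-thought instructions, an in-context example, and self-consistency voting; the prompt text, the few-shot example, and the `call_model` function are hypothetical placeholders, not the authors' actual prompt or any specific chat API.

```python
from collections import Counter

# Hypothetical one-shot example for in-context learning (not from the paper).
FEWSHOT_EXAMPLE = (
    "Findings: 'Impacted tooth 38 in contact with the maxillary sinus.'\n"
    "Reasoning: tooth 38 is a mandibular third molar, so it cannot contact "
    "the maxillary sinus.\n"
    "Verdict: error (wrong anatomical relation)."
)

def build_meta_prompt(findings: str) -> str:
    """Assemble a meta-prompt: role + in-context example + chain-of-thought."""
    return (
        "You are a dental radiologist verifying panoramic radiograph findings.\n\n"
        f"Worked example:\n{FEWSHOT_EXAMPLE}\n\n"
        "Compare each statement against the attached radiograph step by step, "
        "then give a verdict: 'error' or 'correct'.\n\n"
        f"Findings to verify: {findings}"
    )

def verify_with_self_consistency(findings: str, call_model, n_samples: int = 5) -> str:
    """Self-consistency: sample several reasoning paths, return the majority verdict."""
    votes = Counter(call_model(build_meta_prompt(findings)) for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```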

Genetic architecture of bone marrow fat fraction implies its involvement in osteoporosis risk.

Wu Z, Yang Y, Ning C, Li J, Cai Y, Li Y, Cao Z, Tian S, Peng J, Ma Q, He C, Xia S, Chen J, Miao X, Li Z, Zhu Y, Chu Q, Tian J

PubMed paper · Aug 12, 2025
Bone marrow adipose tissue, as a distinct adipose subtype, has been implicated in the pathophysiology of skeletal, metabolic, and hematopoietic disorders. To identify its underlying genetic factors, we utilized a deep learning algorithm capable of quantifying bone marrow fat fraction (BMFF) in the vertebrae and proximal femur using magnetic resonance imaging data of over 38,000 UK Biobank participants. Genome-wide association analyses uncovered 373 significant BMFF-associated variants (P < 5 × 10⁻⁹), with enrichment in bone remodeling, metabolism, and hematopoiesis pathways. Furthermore, genetic correlation analyses highlighted a significant association between BMFF and skeletal disease. In about 300,000 individuals, polygenic risk scores derived from three proximal femur BMFF measures were significantly associated with increased osteoporosis risk. Notably, Mendelian randomization analyses revealed a causal link between proximal femur BMFF and osteoporosis. Here, we provide critical insights into the genetic determinants of BMFF and offer perspectives on the biological mechanisms driving osteoporosis development.
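For readers unfamiliar with the polygenic-risk-score step, a minimal sketch follows: an additive PRS is the dosage-weighted sum of GWAS effect sizes, usually standardized before being related to disease risk in a regression model. The variants, effect sizes, and dosages below are made up for illustration; they are not the study's BMFF-associated variants.

```python
import numpy as np

# Per-allele effect sizes (betas) from a GWAS, one per variant (illustrative).
effect_sizes = np.array([0.12, -0.08, 0.05])

# Risk-allele dosages (0, 1, or 2 copies) per participant and variant.
dosages = np.array([
    [0, 1, 2],   # participant 1
    [2, 2, 0],   # participant 2
    [1, 0, 1],   # participant 3
])

prs = dosages @ effect_sizes   # PRS_i = sum_j beta_j * dosage_ij

# Standardize so scores are comparable across cohorts before relating the
# PRS to disease risk (e.g. osteoporosis) in a downstream model.
prs_z = (prs - prs.mean()) / prs.std()
print(prs_z)
```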

A non-sub-sampled shearlet transform-based deep learning sub band enhancement and fusion method for multi-modal images.

Sengan S, Gugulothu P, Alroobaea R, Webber JL, Mehbodniya A, Yousef A

PubMed paper · Aug 12, 2025
Multi-Modal Medical Image Fusion (MMMIF) has become increasingly important in clinical applications, as it enables the integration of complementary information from different imaging modalities to support more accurate diagnosis and treatment planning. The primary objective of Medical Image Fusion (MIF) is to generate a fused image that retains the most informative features from the Source Images (SI), thereby enhancing the reliability of clinical decision-making systems. However, due to inherent limitations in individual imaging modalities, such as poor spatial resolution in functional images or low contrast in anatomical scans, fused images can suffer from information degradation or distortion. To address these limitations, this study proposes a novel fusion framework that integrates the Non-Subsampled Shearlet Transform (NSST) with a Convolutional Neural Network (CNN) for effective sub-band enhancement and image reconstruction. Initially, each source image is decomposed into Low-Frequency Coefficients (LFC) and multiple High-Frequency Coefficients (HFC) using NSST. The proposed Concurrent Denoising and Enhancement Network (CDEN) is then applied to these sub-bands to suppress noise and enhance critical structural details. The enhanced LFCs are fused using an AlexNet-based activity-level fusion model, while the enhanced HFCs are combined using a Pulse Coupled Neural Network (PCNN) guided by a Novel Sum-Modified Laplacian (NSML) metric. Finally, the fused image is reconstructed via Inverse-NSST (I-NSST). Experimental results show that the proposed method outperforms existing fusion algorithms, achieving approximately 16.5% higher performance on the QAB/F (edge preservation) metric, along with strong results across both subjective visual assessments and objective quality indices.
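The abstract does not define its Novel Sum-Modified Laplacian, but NSML variants build on the classic sum-modified-Laplacian (SML) activity measure used to weight high-frequency coefficients during fusion. Below is a sketch of the base SML and a simple choose-max fusion rule; the window size and the fusion rule are illustrative, not the paper's exact scheme (there, the activity measure guides a PCNN instead).

```python
import numpy as np
from scipy.signal import convolve2d

def modified_laplacian(img: np.ndarray) -> np.ndarray:
    """ML(x,y) = |2I(x,y)-I(x-1,y)-I(x+1,y)| + |2I(x,y)-I(x,y-1)-I(x,y+1)|."""
    p = np.pad(img.astype(float), 1, mode="edge")
    ml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def sml(img: np.ndarray, window: int = 3) -> np.ndarray:
    """Sum the modified Laplacian over a local window (activity level)."""
    kernel = np.ones((window, window))
    return convolve2d(modified_laplacian(img), kernel, mode="same", boundary="symm")

# Choose-max fusion of two high-frequency sub-bands by local SML activity.
hfc_a, hfc_b = np.random.rand(64, 64), np.random.rand(64, 64)
fused_hfc = np.where(sml(hfc_a) >= sml(hfc_b), hfc_a, hfc_b)
print(fused_hfc.shape)
```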

Are [18F]FDG PET/CT imaging and cell blood count-derived biomarkers robust non-invasive surrogates for tumor-infiltrating lymphocytes in early-stage breast cancer?

Seban RD, Rebaud L, Djerroudi L, Vincent-Salomon A, Bidard FC, Champion L, Buvat I

PubMed paper · Aug 12, 2025
Tumor-infiltrating lymphocytes (TILs) are key immune biomarkers associated with prognosis and treatment response in early-stage breast cancer (BC), particularly in the triple-negative subtype. This study aimed to evaluate whether [18F]FDG PET/CT imaging and routine cell blood count (CBC)-derived biomarkers can serve as non-invasive surrogates for TILs, using machine-learning models. We retrospectively analyzed 358 patients with biopsy-proven early-stage invasive BC who underwent pre-treatment [18F]FDG PET/CT imaging. PET-derived biomarkers were extracted from the primary tumor, lymph nodes, and lymphoid organs (spleen and bone marrow). CBC-derived biomarkers included the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR). TILs were assessed histologically and categorized as low (0-10%), intermediate (11-59%), or high (≥60%). Correlations were assessed using Spearman's rank coefficient, and classification and regression models were built using several machine-learning algorithms. Tumor SUVmax and tumor SUVmean showed the highest correlations with TIL levels (ρ = 0.29 and 0.30, respectively; p < 0.001 for both), but overall associations between TILs and PET- or CBC-derived biomarkers were weak. No CBC-derived biomarker showed significant correlation or discriminative performance. Machine-learning models failed to predict TIL levels with satisfactory accuracy (maximum balanced accuracy = 0.66). Lymphoid organ metrics (SLR, BLR) and CBC-derived parameters did not significantly enhance predictive value. In this study, neither [18F]FDG PET/CT nor routine CBC-derived biomarkers reliably predicted TIL levels in early-stage BC. This observation was made in the presence of potential scanner-related variability and with a restricted set of standard PET metrics. Future models should incorporate more targeted imaging approaches, such as immunoPET, to non-invasively assess immune infiltration with higher specificity and improve personalized treatment strategies.
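As a small illustration of the two analysis steps named here (Spearman rank correlation between a PET metric and ordinal TIL categories, and balanced accuracy for a three-class predictor), the sketch below runs both on synthetic data generated to carry only a weak monotone signal; none of the numbers are the study's data.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)

# Synthetic SUVmax values and TIL tertiles (0=low, 1=intermediate, 2=high)
# linked by a deliberately weak monotone signal.
suvmax = rng.gamma(shape=3.0, scale=2.0, size=358)
signal = suvmax / 6 + rng.normal(0, 1.2, 358)
til_class = np.digitize(signal, np.quantile(signal, [1 / 3, 2 / 3]))

rho, p = spearmanr(suvmax, til_class)
print(f"Spearman rho = {rho:.2f} (P = {p:.3g})")

# Balanced accuracy (mean per-class recall) of a classifier on the weak signal,
# the metric behind the paper's "maximum balanced accuracy = 0.66".
clf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, suvmax.reshape(-1, 1), til_class, cv=cv,
                         scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f}")
```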

The performance of large language models in dentomaxillofacial radiology: a systematic review.

Liu Z, Nalley A, Hao J, H Ai QY, Kan Yeung AW, Tanaka R, Hung KF

PubMed paper · Aug 12, 2025
This study aimed to systematically review the current performance of large language models (LLMs) in dentomaxillofacial radiology (DMFR). Five electronic databases were used to identify studies that developed, fine-tuned, or evaluated LLMs for DMFR-related tasks. Data extracted included study purpose, LLM type, image/text source, applied language, dataset characteristics, input and output, performance outcomes, evaluation methods, and reference standards. Customized assessment criteria adapted from the TRIPOD-LLM reporting guideline were used to evaluate the risk of bias in the included studies, specifically regarding the clarity of dataset origin, the robustness of performance evaluation methods, and the validity of the reference standards. The initial search yielded 1621 titles, of which nineteen studies were included. These studies investigated the use of LLMs for tasks including the production and answering of DMFR-related qualification exams and educational questions (n = 8), diagnosis and treatment recommendations (n = 7), and radiology report generation and patient communication (n = 4). LLMs demonstrated varied performance in diagnosing dental conditions, with accuracy ranging from 37% to 92.5% and expert ratings for differential diagnosis and treatment planning between 3.6 and 4.7 on a 5-point scale. For DMFR-related qualification exams and board-style questions, LLMs achieved correctness rates between 33.3% and 86.1%. Automated radiology report generation showed moderate performance, with accuracy ranging from 70.4% to 81.3%. LLMs demonstrate promising potential in DMFR, particularly for diagnostic, educational, and report generation tasks. However, their current accuracy, completeness, and consistency remain variable. Further development, validation, and standardization are needed before LLMs can be reliably integrated as supportive tools in clinical workflows and educational settings.

Results of the 9th Scientific Workshop of the European Crohn's and Colitis Organisation (ECCO): Artificial Intelligence in Endoscopy, Radiology and Histology in IBD Diagnostics.

Mookhoek A, Sinonque P, Allocca M, Carter D, Ensari A, Iacucci M, Kopylov U, Verstockt B, Baumgart DC, Noor NM, El-Hussuna A, Sahnan K, Marigorta UM, Noviello D, Bossuyt P, Pellino G, Soriano A, de Laffolie J, Daperno M, Raine T, Cleynen I, Sebastian S

PubMed paper · Aug 12, 2025
This review presents a comprehensive overview of the current state of artificial intelligence (AI) research in inflammatory bowel disease (IBD) diagnostics across the domains of endoscopy, radiology, and histology, and discusses key considerations for the development of AI algorithms in medical image analysis. AI presents a potential breakthrough in real-time, objective and rapid endoscopic assessment, with implications for predicting disease progression. It is anticipated that, by harmonising multimodal data, AI will transform patient care through early diagnosis, accurate patient profiling and therapeutic response prediction. The ability of AI in cross-sectional medical imaging to improve diagnostic accuracy, automate and enable objective assessment of disease activity, and predict clinical outcomes highlights its transformative potential. AI models have consistently outperformed traditional methods of image interpretation, particularly in complex areas such as differentiating IBD subtypes and identifying disease progression and complications. The use of AI in histology is a particularly dynamic research field, but implementation of AI algorithms in clinical practice is still lagging, a major hurdle being the lack of a digital workflow in many pathology institutes. Adoption is likely to start with the implementation of automatic disease activity scoring. Beyond matching pathologist performance, algorithms may teach us more about IBD pathophysiology. While AI is set to substantially advance IBD diagnostics, challenges such as heterogeneous datasets, retrospective designs, and the assessment of different endpoints must be addressed. The implementation of novel standards of reporting may drive an increase in research quality and help overcome these obstacles.

Spatial Prior-Guided Dual-Path Network for Thyroid Nodule Segmentation.

Pang C, Miao H, Zhang R, Liu Q, Lyu L

PubMed paper · Aug 12, 2025
Accurate segmentation of thyroid nodules in ultrasound images is critical for clinical diagnosis but remains challenging due to low contrast and complex anatomical structures. Existing deep learning methods often rely solely on local nodule features, lacking anatomical prior knowledge of the thyroid region, which can result in misclassification of non-thyroid tissues, especially in low-quality scans. To address these issues, we propose a Spatial Prior-Guided Dual-Path Network that integrates a prior-aware encoder to model thyroid anatomical structures and a low-cost heterogeneous encoder to preserve fine-grained multi-scale features, enhancing both spatial detail and contextual awareness. To capture the diverse and irregular appearances of nodules, we design a CrossBlock module, which combines an efficient cross-attention mechanism with mixed-scale convolutional operations to enable global context modeling and local feature extraction. The network further employs a dual-decoder architecture, where one decoder learns thyroid region priors and the other focuses on accurate nodule segmentation. Gland-specific features are hierarchically refined and injected into the nodule decoder to enhance boundary delineation through anatomical guidance. Extensive experiments on the TN3K and MTNS datasets demonstrate that our method consistently outperforms state-of-the-art approaches, particularly in boundary precision and localization accuracy, offering practical value for preoperative planning and clinical decision-making.
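One plausible reading of the CrossBlock described here is a module in which nodule-path features attend to gland-prior features for global anatomical context while parallel mixed-scale convolutions preserve local detail. The PyTorch sketch below follows that reading; the layer sizes, head count, and fusion-by-addition choice are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class CrossBlock(nn.Module):
    """Cross-attention to gland-prior features plus mixed-scale local convolutions."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.mixed_scale = nn.ModuleList([
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels // 2, kernel_size=5, padding=2),
        ])
        self.norm = nn.LayerNorm(channels)

    def forward(self, nodule_feat, gland_feat):
        b, c, h, w = nodule_feat.shape
        q = nodule_feat.flatten(2).transpose(1, 2)   # (B, HW, C) queries
        kv = gland_feat.flatten(2).transpose(1, 2)   # gland prior as keys/values
        ctx, _ = self.attn(q, kv, kv)                # global anatomical context
        ctx = self.norm(ctx + q).transpose(1, 2).reshape(b, c, h, w)
        local = torch.cat([conv(nodule_feat) for conv in self.mixed_scale], dim=1)
        return ctx + local                           # fuse global and local paths

x = torch.randn(2, 64, 32, 32)   # nodule-path features
g = torch.randn(2, 64, 32, 32)   # gland-prior-path features
print(CrossBlock(64)(x, g).shape)  # torch.Size([2, 64, 32, 32])
```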