
Patient-specific prediction of glioblastoma growth via reduced order modeling and neural networks.

Cerrone D, Riccobelli D, Gazzoni S, Vitullo P, Ballarin F, Falco J, Acerbi F, Manzoni A, Zunino P, Ciarletta P

PubMed · Jun 3, 2025
Glioblastoma (GBL) is among the most aggressive brain tumors in adults, characterized by patient-specific invasion patterns driven by the underlying brain microstructure. In this work, we present a proof-of-concept for a mathematical model of GBL growth, enabling real-time prediction and patient-specific parameter identification from longitudinal neuroimaging data. The framework exploits a diffuse-interface mathematical model to describe the tumor evolution and a reduced-order modeling strategy, relying on proper orthogonal decomposition, trained on synthetic data derived from patient-specific brain anatomies reconstructed from magnetic resonance imaging and diffusion tensor imaging. A neural network surrogate learns the inverse mapping from tumor evolution to model parameters, achieving significant computational speed-up while preserving high accuracy. To ensure robustness and interpretability, we perform both global and local sensitivity analyses, identifying the key biophysical parameters governing tumor dynamics and assessing the stability of the inverse problem solution. These results establish a methodological foundation for future clinical deployment of patient-specific digital twins in neuro-oncology.
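
For readers unfamiliar with the reduced-order ingredient, the sketch below shows proper orthogonal decomposition in its most basic form: a thin SVD of a snapshot matrix of simulated states, truncated by energy content. The shapes, the random stand-in snapshots, and the 99.9% energy threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Snapshot matrix: each column is one simulated tumor state (flattened field)
# at one time/parameter sample. Sizes here are illustrative placeholders.
n_dof, n_snapshots = 10_000, 200
snapshots = np.random.rand(n_dof, n_snapshots)  # stand-in for full-order solves

# Proper orthogonal decomposition via thin SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Retain enough modes to capture, e.g., 99.9% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]  # reduced basis, n_dof x r

# Any full-order state can now be compressed to r coefficients and back.
coeffs = basis.T @ snapshots[:, 0]  # project one snapshot onto the basis
reconstruction = basis @ coeffs     # lift back to the full-order space
```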

A Review of Intracranial Aneurysm Imaging Modalities, from CT to State-of-the-Art MR.

Allaw S, Khabaz K, Given TC, Montas D, Alcazar-Felix RJ, Srinath A, Kass-Hout T, Carroll TJ, Hurley MC, Polster SP

PubMed · Jun 3, 2025
Traditional guidance for intracranial aneurysm (IA) management is dichotomized by rupture status. Fundamental to the management of ruptured aneurysms is the detection and treatment of subarachnoid hemorrhage (SAH), along with securing the aneurysm by the safest technique. On the other hand, unruptured aneurysms first require a careful assessment of their natural history versus treatment risk, including an imaging assessment of aneurysm size, location, and morphology, along with additional evidence-based risk factors such as smoking, hypertension, and family history. Unfortunately, a large proportion of ruptured aneurysms are in the lower risk size category (<7 mm), putting a premium on discovering a more refined noninvasive biomarker to detect and stratify aneurysm instability before rupture. In this review of aneurysm work-up, we cover the gamut of established imaging modalities (e.g., CT, CTA, DSA, FLAIR, 3D TOF-MRA, contrast-enhanced MRA) as well as more novel MR techniques (MR vessel wall imaging, dynamic contrast-enhanced MRI, computational fluid dynamics). Additionally, we evaluate the current landscape of artificial intelligence software and its integration into diagnostic and risk-stratification pipelines for IAs. These advanced MR techniques, increasingly complemented with artificial intelligence models, offer a paradigm shift by evaluating factors beyond size and morphology, including vessel wall inflammation, permeability, and hemodynamics. We also provide our institution's scan parameters for many of these modalities as a reference. Ultimately, this review provides an organized, up-to-date summary of the array of available modalities/sequences for IA imaging to help build protocols focused on IA characterization.

Deep learning reveals pathology-confirmed neuroimaging signatures in Alzheimer's, vascular and Lewy body dementias.

Wang D, Honnorat N, Toledo JB, Li K, Charisis S, Rashid T, Benet Nirmala A, Brandigampala SR, Mojtabai M, Seshadri S, Habes M

PubMed · Jun 3, 2025
Concurrent neurodegenerative and vascular pathologies pose a diagnostic challenge in the clinical setting, with histopathology remaining the definitive modality for dementia-type diagnosis. To address this clinical challenge, we introduce a neuropathology-based, data-driven, multi-label deep-learning framework to identify and quantify in vivo biomarkers for Alzheimer's disease (AD), vascular dementia (VD) and Lewy body dementia (LBD) using antemortem T1-weighted MRI scans of 423 demented and 361 control participants from National Alzheimer's Coordinating Center and Alzheimer's Disease Neuroimaging Initiative datasets. Based on the best-performing deep-learning model, explainable heat maps were extracted to visualize disease patterns, and the novel Deep Signature of Pathology Atrophy REcognition (DeepSPARE) indices were developed, where a higher DeepSPARE score indicates more brain alterations associated with that specific pathology. A substantial discrepancy in clinical and neuropathological diagnosis was observed in the demented patients: 71% had more than one pathology, but 67% were diagnosed clinically as AD only. Based on these neuropathological diagnoses and leveraging cross-validation principles, the deep-learning model achieved the best performance, with a balanced accuracy of 0.844, 0.839 and 0.623 for AD, VD and LBD, respectively, and was used to generate the explainable deep-learning heat maps and DeepSPARE indices. The explainable deep-learning heat maps revealed distinct neuroimaging brain alteration patterns for each pathology: (i) the AD heat map highlighted bilateral hippocampal regions; (ii) the VD heat map emphasized white matter regions; and (iii) the LBD heat map exposed occipital alterations. The DeepSPARE indices were validated by examining their associations with cognitive testing and neuropathological and neuroimaging measures using linear mixed-effects models. The DeepSPARE-AD index was associated with Mini-Mental State Examination, the Trail Making Test B, memory, hippocampal volume, Braak stages, Consortium to Establish a Registry for Alzheimer's Disease (CERAD) scores and Thal phases [false-discovery rate (FDR)-adjusted P < 0.05]. The DeepSPARE-VD index was associated with white matter hyperintensity volume and cerebral amyloid angiopathy (FDR-adjusted P < 0.001), and the DeepSPARE-LBD index was associated with Lewy body stages (FDR-adjusted P < 0.05). The findings were replicated in an out-of-sample Alzheimer's Disease Neuroimaging Initiative dataset by testing associations with cognitive, imaging, plasma and CSF measures. CSF and plasma tau phosphorylated at threonine-181 (pTau181) were significantly associated with DeepSPARE-AD in the AD and mild cognitive impairment amyloid-β positive (AD/MCI Aβ+) group (FDR-adjusted P < 0.001), and CSF α-synuclein was associated solely with DeepSPARE-LBD (FDR-adjusted P = 0.036). Overall, these findings demonstrate the advantages of our innovative deep-learning framework in detecting antemortem neuroimaging signatures linked to different pathologies. The new deep-learning-derived DeepSPARE indices are precise, pathology-sensitive and single-valued non-invasive neuroimaging metrics, bridging the traditional widely available in vivo T1 imaging with histopathology.
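
As an illustration of the multi-label setup described above, where a single scan may carry several pathologies at once, here is a minimal PyTorch classification head with one independent sigmoid output per pathology and per-label binary cross-entropy. The feature dimension, batch, and targets are placeholders; the paper's actual architecture is not reproduced.

```python
import torch
import torch.nn as nn

# Multi-label head: independent sigmoid outputs for (AD, VD, LBD), so labels
# are not mutually exclusive. Backbone features are simulated placeholders.
n_features, n_pathologies = 512, 3
head = nn.Linear(n_features, n_pathologies)
criterion = nn.BCEWithLogitsLoss()  # per-label binary cross-entropy

features = torch.randn(8, n_features)      # batch of image embeddings
labels = torch.tensor([[1., 1., 0.]] * 8)  # e.g. AD + VD co-pathology targets
loss = criterion(head(features), labels)
loss.backward()                            # gradients for one training step
```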

Fine-tuned large language model for extracting newly identified acute brain infarcts based on computed tomography or magnetic resonance imaging reports.

Fujita N, Yasaka K, Kiryu S, Abe O

PubMed · Jun 2, 2025
This study aimed to develop an automated early warning system using a large language model (LLM) to identify acute to subacute brain infarction from free-text computed tomography (CT) or magnetic resonance imaging (MRI) radiology reports. In this retrospective study, 5,573, 1,883, and 834 patients were included in the training (mean age, 67.5 ± 17.2 years; 2,831 males), validation (mean age, 61.5 ± 18.3 years; 994 males), and test (mean age, 66.5 ± 16.1 years; 488 males) datasets. An LLM (Japanese Bidirectional Encoder Representations from Transformers model) was fine-tuned to classify the CT and MRI reports into three groups (group 0, newly identified acute to subacute infarction; group 1, known acute to subacute infarction or old infarction; group 2, without infarction). The training and validation processes were repeated 15 times, and the best-performing model on the validation dataset was selected to further evaluate its performance on the test dataset. The best fine-tuned model exhibited sensitivities of 0.891, 0.905, and 0.959 for groups 0, 1, and 2, respectively, in the test dataset. The macrosensitivity (the average of sensitivity for all groups) and accuracy were 0.918 and 0.923, respectively. The model's performance in extracting newly identified acute brain infarcts was high, with an area under the receiver operating characteristic curve of 0.979 (95% confidence interval, 0.956-1.000). The average prediction time was 0.115 ± 0.037 s per patient. A fine-tuned LLM could extract newly identified acute to subacute brain infarcts based on CT or MRI findings with high performance.
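
A minimal sketch of this kind of three-class report classifier using the Hugging Face transformers API is shown below. The checkpoint name ("bert-base-multilingual-cased") and the toy report text are stand-ins; the study fine-tuned a Japanese BERT checkpoint not specified here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Three mutually exclusive report classes, mirroring the study's grouping:
# 0 = newly identified acute/subacute infarct, 1 = known or old infarct,
# 2 = no infarct. The checkpoint below is a dependency-light stand-in.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

report = "Newly demonstrated acute infarct in the left MCA territory."  # toy report
inputs = tokenizer(report, return_tensors="pt", truncation=True, max_length=512)
labels = torch.tensor([0])  # group 0: newly identified infarct

outputs = model(**inputs, labels=labels)  # cross-entropy loss + class logits
outputs.loss.backward()                   # gradients for one fine-tuning step
predicted_group = outputs.logits.argmax(dim=-1).item()
```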

Beyond Pixel Agreement: Large Language Models as Clinical Guardrails for Reliable Medical Image Segmentation

Jiaxi Sheng, Leyi Yu, Haoyue Li, Yifan Gao, Xin Gao

arXiv preprint · Jun 2, 2025
Evaluating AI-generated medical image segmentations for clinical acceptability poses a significant challenge, as traditional pixel-agreement metrics often fail to capture true diagnostic utility. This paper introduces Hierarchical Clinical Reasoner (HCR), a novel framework that leverages Large Language Models (LLMs) as clinical guardrails for reliable, zero-shot quality assessment. HCR employs a structured, multistage prompting strategy that guides LLMs through a detailed reasoning process, encompassing knowledge recall, visual feature analysis, anatomical inference, and clinical synthesis, to evaluate segmentations. We evaluated HCR on a diverse dataset across six medical imaging tasks. Our results show that HCR, utilizing models like Gemini 2.5 Flash, achieved a classification accuracy of 78.12%, performing comparably to, and in instances exceeding, dedicated vision models such as ResNet50 (72.92% accuracy) that were specifically trained for this task. The HCR framework not only provides accurate quality classifications but also generates interpretable, step-by-step reasoning for its assessments. This work demonstrates the potential of LLMs, when appropriately guided, to serve as sophisticated evaluators, offering a pathway towards more trustworthy and clinically aligned quality control for AI in medical imaging.
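
To make the multistage idea concrete, here is a schematic prompting loop. The stage wording paraphrases the four stages named in the abstract, and call_llm is a hypothetical placeholder for any chat-completion client; none of this reflects the paper's actual prompts.

```python
# Sketch of a hierarchical, multi-stage prompting loop in the spirit of HCR.
from typing import Callable

STAGES = [
    "Knowledge recall: state the expected anatomy for a {task} segmentation.",
    "Visual feature analysis: describe the mask's shape, extent, and boundaries.",
    "Anatomical inference: check the mask against the expected anatomy.",
    "Clinical synthesis: is this segmentation clinically acceptable? Answer yes/no.",
]

def assess_segmentation(task: str, image_summary: str,
                        call_llm: Callable[[str], str]) -> str:
    context = image_summary
    for stage in STAGES:
        prompt = stage.format(task=task) + "\n\nContext so far:\n" + context
        context += "\n" + call_llm(prompt)  # each stage builds on prior reasoning
    return context  # the final stage carries the accept/reject judgment
```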

Medical World Model: Generative Simulation of Tumor Evolution for Treatment Planning

Yijun Yang, Zhao-Yang Wang, Qiuping Liu, Shuwen Sun, Kang Wang, Rama Chellappa, Zongwei Zhou, Alan Yuille, Lei Zhu, Yu-Dong Zhang, Jieneng Chen

arXiv preprint · Jun 2, 2025
Providing effective treatment and making informed clinical decisions are essential goals of modern medicine and clinical care. We are interested in simulating disease dynamics for clinical decision-making, leveraging recent advances in large generative models. To this end, we introduce the Medical World Model (MeWM), the first world model in medicine that visually predicts future disease states based on clinical decisions. MeWM comprises (i) vision-language models to serve as policy models, and (ii) tumor generative models as dynamics models. The policy model generates action plans, such as clinical treatments, while the dynamics model simulates tumor progression or regression under given treatment conditions. Building on this, we propose the inverse dynamics model that applies survival analysis to the simulated post-treatment tumor, enabling the evaluation of treatment efficacy and the selection of the optimal clinical action plan. As a result, the proposed MeWM simulates disease dynamics by synthesizing post-treatment tumors, with state-of-the-art specificity in Turing tests evaluated by radiologists. Simultaneously, its inverse dynamics model outperforms medical-specialized GPTs in optimizing individualized treatment protocols across all metrics. Notably, MeWM improves clinical decision-making for interventional physicians, boosting the F1-score in selecting the optimal TACE protocol by 13%, paving the way for future integration of medical world models as second readers.
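
The decision loop described here can be summarized in a few lines. Everything below (the policy, dynamics, and survival-scoring callables) is a hypothetical placeholder meant only to show how the three components compose, not the paper's implementation.

```python
# Toy decision loop mirroring MeWM's described structure: a policy proposes
# candidate treatments, a dynamics model simulates the post-treatment tumor,
# and an inverse dynamics (survival) model scores each simulated outcome.

def select_treatment(pre_tumor, policy, dynamics, survival_score):
    candidates = policy(pre_tumor)              # e.g. candidate TACE protocols
    best_plan, best_score = None, float("-inf")
    for plan in candidates:
        post_tumor = dynamics(pre_tumor, plan)  # simulate progression/regression
        score = survival_score(post_tumor)      # predicted treatment efficacy
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan
```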

Current trends in glioma tumor segmentation: A survey of deep learning modules.

Shoushtari FK, Elahi R, Valizadeh G, Moodi F, Salari HM, Rad HS

PubMed · Jun 2, 2025
Multiparametric Magnetic Resonance Imaging (mpMRI) is the gold standard for diagnosing brain tumors, especially gliomas, which are difficult to segment due to their heterogeneity and varied sub-regions. While manual segmentation is time-consuming and error-prone, Deep Learning (DL) automates the process with greater accuracy and speed. We conducted ablation studies on surveyed articles to evaluate the impact of "add-on" modules, which address challenges like spatial information loss, class imbalance, and overfitting, on glioma segmentation performance. Advanced modules, such as atrous (dilated) convolutions, inception, attention, transformer, and hybrid modules, significantly enhance segmentation accuracy, efficiency, multiscale feature extraction, and boundary delineation, while lightweight modules reduce computational complexity. Experiments on the Brain Tumor Segmentation (BraTS) dataset (comprising low- and high-grade gliomas) confirm their robustness, with top-performing models achieving high Dice scores for tumor sub-regions. This survey underscores the need for optimal module selection and placement to balance speed, accuracy, and interpretability in glioma segmentation. Future work should focus on improving model interpretability, lowering computational costs, and boosting generalizability. Tools like NeuroQuant® and Raidionics demonstrate potential for clinical translation. Further refinement could enable regulatory approval, advancing precision in brain tumor diagnosis and treatment planning.
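
As one concrete example of the modules surveyed, a minimal atrous (dilated) convolution block is sketched below in PyTorch: parallel dilations enlarge the receptive field at constant resolution, which helps capture multiscale tumor context. The channel counts, dilation rates, and 2D formulation are illustrative choices, not drawn from any specific surveyed model.

```python
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 convolution."""
    def __init__(self, in_ch: int = 4, out_ch: int = 32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)
            for d in (1, 2, 4)  # growing dilation rates, same output size
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)

x = torch.randn(1, 4, 128, 128)  # 4 mpMRI channels, e.g. T1/T1ce/T2/FLAIR
print(AtrousBlock()(x).shape)    # torch.Size([1, 32, 128, 128])
```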

Efficiency and Quality of Generative AI-Assisted Radiograph Reporting.

Huang J, Wittbrodt MT, Teague CN, Karl E, Galal G, Thompson M, Chapa A, Chiu ML, Herynk B, Linchangco R, Serhal A, Heller JA, Abboud SF, Etemadi M

PubMed · Jun 2, 2025
Diagnostic imaging interpretation involves distilling multimodal clinical information into text form, a task well-suited to augmentation by generative artificial intelligence (AI). However, to our knowledge, impacts of AI-based draft radiological reporting remain unstudied in clinical settings. We aimed to prospectively evaluate the association of radiologist use of a workflow-integrated generative model capable of providing draft radiological reports for plain radiographs across a tertiary health care system with documentation efficiency, the clinical accuracy and textual quality of final radiologist reports, and the model's potential for detecting unexpected, clinically significant pneumothorax. This prospective cohort study was conducted from November 15, 2023, to April 24, 2024, at a tertiary care academic health system. The association between use of the generative model and radiologist documentation efficiency was evaluated for radiographs documented with model assistance compared with a baseline set of radiographs without model use, matched by study type (chest or nonchest). Peer review was performed on model-assisted interpretations. Flagging of pneumothorax requiring intervention was performed on radiographs prospectively. The primary outcomes were association of use of the generative model with radiologist documentation efficiency, assessed by difference in documentation time with and without model use using a linear mixed-effects model; for peer review of model-assisted reports, the difference in Likert-scale ratings using a cumulative-link mixed model; and for flagging pneumothorax requiring intervention, sensitivity and specificity. A total of 23,960 radiographs (11,980 each with and without model use) were used to analyze documentation efficiency. Interpretations with model assistance (mean [SE], 159.8 [27.0] seconds) were faster than the baseline set of those without (mean [SE], 189.2 [36.2] seconds) (P = .02), representing a 15.5% documentation efficiency increase. Peer review of 800 studies showed no difference in clinical accuracy (χ² = 0.68; P = .41) or textual quality (χ² = 3.62; P = .06) between model-assisted interpretations and nonmodel interpretations. Moreover, the model flagged studies containing a clinically significant, unexpected pneumothorax with a sensitivity of 72.7% and specificity of 99.9% among 97,651 studies screened. In this prospective cohort study of clinical use of a generative model for draft radiological reporting, model use was associated with improved radiologist documentation efficiency while maintaining clinical quality and demonstrated potential to detect studies containing a pneumothorax requiring immediate intervention. This study suggests the potential for radiologist and generative AI collaboration to improve clinical care delivery.
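
For reference, the flagging metrics reduce to the standard definitions below. The confusion-matrix counts in the usage lines are hypothetical numbers chosen only to reproduce the reported 72.7% and 99.9%, not the study's data.

```python
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)  # fraction of true pneumothoraces that were flagged

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)  # fraction of negatives correctly left unflagged

# Hypothetical counts consistent with the reported rates:
print(round(sensitivity(tp=8, fn=3), 3))      # 0.727 -> 72.7%
print(round(specificity(tn=9990, fp=10), 3))  # 0.999 -> 99.9%
```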

ViTU-Net: A hybrid deep learning model with a patch-based LSB approach for medical image watermarking and authentication using a hybrid metaheuristic algorithm.

Nanammal V, Rajalakshmi S, Remya V, Ranjith S

PubMed · Jun 2, 2025
In modern healthcare, telemedicine, health records, and AI-driven diagnostics depend on medical image watermarking to secure chest X-rays for pneumonia diagnosis, ensuring data integrity, confidentiality, and authenticity. A 2024 study found over 70% of healthcare institutions faced medical image data breaches. Yet, current methods falter in imperceptibility, robustness against attacks, and deployment efficiency. ViTU-Net integrates cutting-edge techniques to address these multifaceted challenges in medical image security and analysis. The model's core component, the Vision Transformer (ViT) encoder, efficiently captures global dependencies and spatial information, while the U-Net decoder enhances image reconstruction, with both components leveraging the Adaptive Hierarchical Spatial Attention (AHSA) module for improved spatial processing. Additionally, the patch-based LSB embedding mechanism ensures focused embedding of reversible fragile watermarks within each patch of the segmented region of non-interest (RONI, i.e., the non-diagnostic region), guided dynamically by adaptive masks derived from the attention mechanism, minimizing impact on diagnostic accuracy while maximizing precision and ensuring optimal utilization of spatial information. The hybrid meta-heuristic optimization algorithm, TuniBee Fusion, dynamically optimizes watermarking parameters, striking a balance between exploration and exploitation, thereby enhancing watermarking efficiency and robustness. The incorporation of advanced cryptographic techniques, including SHA-512 hashing and AES encryption, fortifies the model's security, ensuring the authenticity and confidentiality of watermarked medical images. A PSNR value of 60.7 dB, along with an NCC value of 0.9999 and an SSIM value of 1.00, underscores its effectiveness in preserving image quality, security, and diagnostic accuracy. Robustness analysis against a spectrum of attacks validates ViTU-Net's resilience in real-world scenarios.
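
A minimal sketch of the patch-based LSB idea follows, assuming a fixed patch location and a random stand-in image; ViTU-Net's attention-guided RONI selection, metaheuristic optimization, and encryption layers are not reproduced here.

```python
import numpy as np

# Embed watermark bits in the least significant bit of one patch, then
# verify extraction and measure PSNR against the original image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in X-ray
bits = rng.integers(0, 2, size=(16, 16), dtype=np.uint8)       # watermark bits

patch = image[:16, :16]
watermarked = image.copy()
watermarked[:16, :16] = (patch & 0xFE) | bits  # overwrite LSBs in the patch

extracted = watermarked[:16, :16] & 1          # recover the watermark
assert np.array_equal(extracted, bits)

mse = np.mean((image.astype(float) - watermarked.astype(float)) ** 2)
psnr = 10 * np.log10(255**2 / mse) if mse > 0 else float("inf")
print(f"PSNR: {psnr:.1f} dB")                  # LSB changes keep PSNR very high
```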

Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models.

Lian C, Zhou HY, Liang D, Qin J, Wang L

PubMed · Jun 2, 2025
Medical vision-language alignment through cross-modal contrastive learning shows promising performance in image-text matching tasks, such as retrieval and zero-shot classification. However, conventional cross-modal contrastive learning (CLIP-based) methods suffer from suboptimal visual representation capabilities, which also limits their effectiveness in vision-language alignment. In contrast, although the models pretrained via multimodal masked modeling struggle with direct cross-modal matching, they excel in visual representation. To address this contradiction, we propose ALTA (ALign Through Adapting), an efficient medical vision-language alignment method that utilizes only about 8% of the trainable parameters and less than 1/5 of the computational consumption required for masked record modeling. ALTA achieves superior performance in vision-language matching tasks like retrieval and zero-shot classification by adapting the pretrained vision model from masked record modeling. Additionally, we integrate temporal-multiview radiograph inputs to enhance the information consistency between radiographs and their corresponding descriptions in reports, further improving the vision-language alignment. Experimental evaluations show that ALTA outperforms the best-performing counterpart by over 4% absolute points in text-to-image accuracy and approximately 6% absolute points in image-to-text retrieval accuracy. The adaptation of vision-language models during efficient alignment also promotes better vision and language understanding. Code is publicly available at https://github.com/DopamineLcy/ALTA.
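
A compact sketch of adapter-style alignment in this spirit: a frozen vision encoder's features pass through a small trainable adapter and are matched to text embeddings with a CLIP-style symmetric contrastive loss, so only the adapter and projection train. The dimensions, adapter design, and 0.07 temperature are assumptions, not ALTA's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim_v, dim_t, dim_joint, batch = 768, 768, 256, 16
adapter = nn.Sequential(nn.Linear(dim_v, dim_joint), nn.GELU(),
                        nn.Linear(dim_joint, dim_joint))  # small trainable adapter
text_proj = nn.Linear(dim_t, dim_joint)

img_feats = torch.randn(batch, dim_v)  # frozen vision-encoder outputs
txt_feats = torch.randn(batch, dim_t)  # frozen text-encoder outputs

v = F.normalize(adapter(img_feats), dim=-1)
t = F.normalize(text_proj(txt_feats), dim=-1)
logits = v @ t.T / 0.07                # temperature-scaled similarities
targets = torch.arange(batch)          # matched pairs lie on the diagonal
loss = (F.cross_entropy(logits, targets)
        + F.cross_entropy(logits.T, targets)) / 2
loss.backward()                        # updates only the adapter and projection
```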