
Jawinski P, Forstbach H, Kirsten H, Beyer F, Villringer A, Witte AV, Scholz M, Ripke S, Markett S

PubMed · Oct 3, 2025
Neuroimaging and machine learning are advancing research into the mechanisms of biological aging. In this field, 'brain age gap' has emerged as a promising magnetic resonance imaging-based biomarker that quantifies the deviation between an individual's biological and chronological age of the brain. Here we conducted an in-depth genomic analysis of the brain age gap and its relationships with over 1,000 health traits. Genome-wide analyses in up to 56,348 individuals unveiled a heritability of 23-29% attributable to common genetic variants and highlighted 59 associated loci (39 novel). The leading locus encompasses MAPT, encoding the tau protein central to Alzheimer's disease. Genetic correlations revealed relationships with mental health, physical health, lifestyle and socioeconomic traits, including depressed mood, diabetes, alcohol intake and income. Mendelian randomization indicated a causal role of high blood pressure and type 2 diabetes in accelerated brain aging. Our study highlights key genes and pathways related to neurogenesis, immune-system-related processes and small GTPase binding, laying the foundation for further mechanistic exploration.
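
For readers outside the field, the biomarker itself is straightforward to compute once a brain-age model exists: the gap is predicted minus chronological age, typically residualized on age to correct the regression-to-the-mean bias of such models. A minimal sketch (the bias correction shown is a standard choice, not necessarily this study's exact procedure):

```python
import numpy as np

def brain_age_gap(predicted_age, chronological_age):
    """Raw brain age gap in years: predicted minus chronological age."""
    return np.asarray(predicted_age) - np.asarray(chronological_age)

def bias_adjusted_gap(predicted_age, chronological_age):
    """Residualize the gap on age to remove the regression-to-the-mean
    bias typical of brain-age models (a common correction; the paper's
    exact adjustment may differ)."""
    gap = brain_age_gap(predicted_age, chronological_age)
    slope, intercept = np.polyfit(chronological_age, gap, deg=1)
    return gap - (slope * np.asarray(chronological_age) + intercept)

# A 60-year-old whose MRI 'looks' 63 years old has a +3-year gap.
print(brain_age_gap([63.0], [60.0]))  # [3.]
```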

Varzaneh ZA, Mousavi SM, Khoshkangini R, Moosavi Khaliji SM

PubMed · Oct 3, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by the gradual decline in cognitive functions, particularly memory and reasoning. Early detection, especially during the mild cognitive impairment (MCI) stage, is crucial for timely intervention and management, and enhanced diagnostic methods are essential for facilitating early identification and improving patient outcomes. This study presents a robust deep learning framework for the early detection of Alzheimer's disease. It employs transfer learning and hyperparameter tuning of the InceptionResNetV2, InceptionV3, and Xception architectures to enhance feature extraction by leveraging their pre-trained capabilities. An ensemble voting mechanism is integrated to combine predictions from the different models, optimizing both accuracy and robustness. The proposed ensemble voting approach demonstrated exceptional performance, achieving 98.96% accuracy and 100% precision for the Mildly Demented and Moderately Demented classes. It outperformed baseline and state-of-the-art models, highlighting its potential as a reliable tool for early diagnosis and intervention.
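
The ensemble itself is conceptually simple: each pretrained backbone gets a small classification head, and class probabilities are averaged across members (soft voting). A minimal Keras sketch under assumed input size, head design, and class labels; the paper's exact tuning and voting scheme are not specified here:

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2, InceptionV3, Xception

NUM_CLASSES = 4  # e.g., NonDemented / VeryMild / Mild / Moderate (assumed labels)

def build_transfer_model(backbone_cls, input_shape=(299, 299, 3)):
    """Frozen ImageNet-pretrained backbone + small trainable head."""
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=input_shape, pooling="avg")
    backbone.trainable = False  # transfer learning: train only the head
    x = layers.Dropout(0.3)(backbone.output)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(backbone.input, out)

members = [build_transfer_model(m)
           for m in (InceptionResNetV2, InceptionV3, Xception)]

def soft_vote(members_, images):
    """Average class probabilities across members, then take the argmax."""
    probs = np.mean([m.predict(images, verbose=0) for m in members_], axis=0)
    return probs.argmax(axis=1)
```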

Falco I, Guillaume G, Henry M, Josserand V, Bossy E, Arnal B

PubMed · Oct 3, 2025
3D conventional photoacoustic (PA) imaging often suffers from visibility artifacts caused by the limited bandwidth and constrained viewing angles of ultrasound transducers, as well as the use of sparse arrays. PA fluctuation imaging (PAFI), which leverages signal variations due to blood flow, compensates for these visibility artifacts at the cost of temporal resolution. Deep learning (DL)-based photoacoustic image enhancement has previously demonstrated strong potential for improved reconstruction at high temporal resolution; however, generating an experimental training dataset remains problematic. Herein, we propose creating an experimental training dataset based on single-shot 3D PA images (input) and corresponding PAFI images (ground truth) of chicken embryo vasculature, which is used to train a 3D ResU-Net neural network. The trained DL-PAFI network's predictions on new experimental test images show an effective improvement in visibility and contrast. We observe, however, that the output image resolution is lower than that of PAFI. Importantly, training on experimental data alone already yields good performance, while pre-training with simulated examples improves the overall accuracy. Additionally, we demonstrate the feasibility of real-time rendering and present preliminary in vivo predictions in mice, generated by the network trained exclusively on chicken embryo vasculature. These findings suggest the potential for achieving real-time, artifact-free 3D PA imaging with sparse arrays, adaptable to various in vivo applications.
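
A common PAFI formulation behind this kind of training target is the temporal standard deviation of a stack of single-shot frames: flowing blood fluctuates frame to frame while static background does not. A NumPy sketch of that ground-truth construction (the paper's exact estimator may differ):

```python
import numpy as np

def pafi_image(frames: np.ndarray) -> np.ndarray:
    """Fluctuation image from a stack of single-shot 3D PA reconstructions.

    frames: (T, Z, Y, X) stack acquired over time. Flow-induced signal
    fluctuations make vessels stand out in the temporal standard
    deviation, while static clutter largely cancels out.
    """
    return frames.std(axis=0)

# The DL-PAFI idea: a single frame is the network input and the PAFI
# image is the ground truth, so inference runs at single-shot rates.
stack = np.random.rand(50, 64, 64, 64)  # 50 frames of toy 3D data
ground_truth = pafi_image(stack)        # training target for the 3D ResU-Net
```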

Talha Ahmed, Nehal Ahmed Shaikh, Hassan Mohy-ud-Din

arXiv preprint · Oct 3, 2025
For equitable deployment of AI tools in hospitals and healthcare facilities, we need Deep Segmentation Networks that offer high performance and can be trained on cost-effective GPUs with limited memory and large batch sizes. In this work, we propose Wave-GMS, a lightweight and efficient multi-scale generative model for medical image segmentation. Wave-GMS has a substantially smaller number of trainable parameters, does not require loading memory-intensive pretrained vision foundation models, and supports training with large batch sizes on GPUs with limited memory. We conducted extensive experiments on four publicly available datasets (BUS, BUSI, Kvasir-Instrument, and HAM10000), demonstrating that Wave-GMS achieves state-of-the-art segmentation performance with superior cross-domain generalizability, while requiring only ~2.6M trainable parameters. Code is available at https://github.com/ATPLab-LUMS/Wave-GMS.
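
Claims like "~2.6M trainable parameters" are easy to audit in PyTorch by summing the elements of gradient-carrying tensors. A small sketch, where WaveGMS stands in for the model class from the linked repository (hypothetical name):

```python
import torch.nn as nn

def count_trainable(model: nn.Module) -> int:
    """Number of parameters that will receive gradients during training."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical check against the reported ~2.6M figure:
# model = WaveGMS()  # class name assumed; see the repository for the real one
# assert count_trainable(model) < 3_000_000
```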

Ci-Siang Lin, Min-Hung Chen, Yu-Yang Sheng, Yu-Chiang Frank Wang

arXiv preprint · Oct 3, 2025
Multimodal Large Language Models (MLLMs) have achieved strong performance on general visual benchmarks but struggle with out-of-distribution (OOD) tasks in specialized domains such as medical imaging, where labeled data is limited and expensive. We introduce LEAML, a label-efficient adaptation framework that leverages both scarce labeled VQA samples and abundant unlabeled images. Our approach generates domain-relevant pseudo question-answer pairs for unlabeled data using a QA generator regularized by caption distillation. Importantly, we selectively update only those neurons most relevant to question-answering, enabling the QA Generator to efficiently acquire domain-specific knowledge during distillation. Experiments on gastrointestinal endoscopy and sports VQA demonstrate that LEAML consistently outperforms standard fine-tuning under minimal supervision, highlighting the effectiveness of our proposed LEAML framework.
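
The selective-update idea can be implemented by freezing every parameter except those scored as QA-relevant before building the optimizer. A PyTorch sketch; the relevance criterion mentioned in the comment is an assumption, since the paper's scoring rule is not described here:

```python
import torch
import torch.nn as nn

def freeze_except(model: nn.Module, relevant: set) -> None:
    """Make only the selected parameters trainable.

    'relevant' holds parameter names chosen by some relevance score
    (e.g., gradient magnitude on held-out QA examples); LEAML's own
    selection criterion may differ.
    """
    for name, param in model.named_parameters():
        param.requires_grad = name in relevant

def make_optimizer(model: nn.Module, lr: float = 1e-5):
    """Hand only the unfrozen tensors to the optimizer."""
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr)
```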

Tidiane Camaret Ndir, Alexander Pfefferle, Robin Tibor Schirrmeister

arXiv preprint · Oct 3, 2025
Interactive 3D biomedical image segmentation requires efficient models that can iteratively refine predictions based on user prompts. Current foundation models either lack volumetric awareness or suffer from limited interactive capabilities. We propose a training strategy that combines dynamic volumetric prompt generation with content-aware adaptive cropping to optimize the use of the image encoder. Our method simulates realistic user interaction patterns during training while addressing the computational challenges of learning from sequential refinement feedback on a single GPU. For efficient training, we initialize our network using the publicly available weights from the nnInteractive segmentation model. Evaluation on the Foundation Models for Interactive 3D Biomedical Image Segmentation competition demonstrates strong performance, with an average final Dice score of 0.6385, normalized surface distance of 0.6614, and area-under-the-curve metrics of 2.4799 (Dice) and 2.5671 (NSD).
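
The competition metrics reward both final mask quality and how quickly quality improves over simulated user prompts. A sketch of volumetric Dice and one plausible reading of the per-interaction area-under-the-curve score (the official definition may normalize differently):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Volumetric Dice overlap between binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def interaction_auc(scores_per_click) -> float:
    """Area under the score-vs-refinement-step curve; rewards methods
    that improve quickly with few prompts (assumed reading of the
    competition's AUC metric, not its official definition)."""
    return float(np.trapz(scores_per_click))

# e.g., Dice after each of five simulated user clicks
print(interaction_auc([0.40, 0.55, 0.61, 0.63, 0.64]))  # 2.31
```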

Fu Y, Zhou L, Zhang X, Xie G, Zhang T, Gong Y, Pan T, Kang W, Lv L, Xu H, Chen Q

PubMed · Oct 3, 2025
To explore the diagnostic accuracy and robustness of artificial intelligence (AI)-based fully automated CT-derived fractional flow reserve (CT-FFR) in detecting significant coronary artery disease (CAD) in patients undergoing transcatheter aortic valve replacement (TAVR). This single-center retrospective study included consecutive patients who underwent TAVR between January 2020 and June 2023. All patients received preoperative coronary CT angiography (CCTA) and invasive coronary angiography (ICA). CT-FFR was evaluated with fully automated AI-based software. The diagnostic performance of CCTA and CT-FFR for the identification of significant CAD was compared using ICA (≥70% diameter stenosis) as the reference standard. In patients who underwent post-TAVR CCTA within 3 months, CT-FFR values were recalculated and compared with the pre-TAVR values to evaluate the robustness of the AI-based software. A total of 77 pre-TAVR patients and 164 vessels were included. Significant CAD was identified by ICA in 18 patients (23.4%). In per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy were 44.4%, 91.5%, 61.5%, 84.4%, and 80.5% for CCTA and 94.4%, 83.1%, 64.0%, 98.0%, and 85.7% for CT-FFR. The area under the receiver operating characteristic curve of CT-FFR was superior to that of CCTA (0.83 vs. 0.63, P = 0.001). Thirty-five (45.5%) patients underwent CT-FFR calculation both before and after TAVR, with good agreement between pre- and post-TAVR CT-FFR values (intraclass correlation coefficient 0.85). AI-based fully automated CT-FFR improves the diagnostic performance of CCTA for the detection of significant CAD pre-TAVR and demonstrates robust stability post-TAVR.
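
The per-patient figures above follow directly from a 2x2 confusion matrix against the ICA reference. A short sketch; the counts below are reconstructed to be consistent with the reported CCTA row (18 of 77 patients CAD-positive), not taken from the paper:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard per-patient diagnostic performance measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Counts consistent with the reported per-patient CCTA results.
print(diagnostic_metrics(tp=8, fp=5, fn=10, tn=54))
# sensitivity 0.444, specificity 0.915, ppv 0.615, npv 0.844, accuracy 0.805
```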

Nghiem DX, Yahyavi-Firouz-Abadi N, Hwang GL, Zafari Z, Moy L, Carlos RC, Doo FX

PubMed · Oct 3, 2025
To estimate the economic and environmental reduction potential of iodinated contrast media (ICM)-saving strategies by examining supply chain data (from iodine extraction through administration), and to inform a decision-making framework that can be tailored to local institutional priorities. A 100 mL polymer vial of ICM was set as the standard reference case (SRC) for baseline comparison. To evaluate cost and emissions impacts, four ICM reduction strategies were modeled relative to this SRC baseline: vial optimization, hardware-based dose reduction, software (AI-enabled) dose reduction, and multi-dose vial/injector systems. This analysis was then translated into a decision-making framework for radiologists to compare ICM strategies by cost, emissions, and operational feasibility. The supply chain life cycle of a 100 mL iodinated contrast vial produces 1,029 g CO2e, primarily from iodine extraction and clinical use. ICM-saving strategies varied widely in emissions reduction, ranging from 12% to 50% nationally. Economically, a 125% tariff could inflate national ICM-related costs to $11.9B; the AI-enhanced ICM reduction strategy could lower this expenditure to $2.7B. Institutional analysis reveals that the ICM savings from strategies requiring high upfront capital investment can offset the initial outlay, highlighting important trade-offs for implementation decision-making. ICM is a major and modifiable contributor to healthcare carbon emissions. Depending on the ICM-reduction strategy used, emissions can be reduced by up to 53% and ICM-related costs by up to 50%. To guide implementation, we developed a decision-making framework that categorizes strategies by environmental benefit, cost, and operational feasibility, enabling radiology leaders to align sustainability goals with institutional priorities.
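
For a quick back-of-the-envelope estimate at the institutional level, the study's per-vial footprint can be scaled by usage volume and a strategy's reduction fraction. A sketch with assumed institutional volumes; only the 1,029 g CO2e per-vial figure comes from the study:

```python
VIAL_CO2E_G = 1029.0  # g CO2e per 100 mL ICM vial (study's reference case)

def annual_savings_kg(vials_per_year: int, reduction_fraction: float) -> float:
    """Emissions avoided by an ICM-reduction strategy, in kg CO2e/year.

    reduction_fraction is strategy-dependent (the study reports 12-50%
    nationally, up to 53% best case); the volume is an assumption.
    """
    return vials_per_year * VIAL_CO2E_G * reduction_fraction / 1000.0

# e.g., a department using 20,000 vials/year at a 50% ICM reduction:
print(f"{annual_savings_kg(20_000, 0.50):,.0f} kg CO2e avoided/year")  # 10,290
```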

Refik Mert Cam, Seonyeong Park, Umberto Villa, Mark A. Anastasio

arXiv preprint · Oct 3, 2025
Quantitative photoacoustic computed tomography (qPACT) is a promising imaging modality for estimating physiological parameters such as blood oxygen saturation. However, developing robust qPACT reconstruction methods remains challenging due to computational demands, modeling difficulties, and experimental uncertainties. Learning-based methods have been proposed to address these issues but remain largely unvalidated. Virtual imaging (VI) studies are essential for validating such methods early in development, before proceeding to less-controlled phantom or in vivo studies. Effective VI studies must employ ensembles of stochastically generated numerical phantoms that accurately reflect relevant anatomy and physiology. Yet, most prior VI studies for qPACT relied on overly simplified phantoms. In this work, a realistic VI testbed is employed for the first time to assess a representative 3D learning-based qPACT reconstruction method for breast imaging. The method is evaluated across subject variability and physical factors such as measurement noise and acoustic aberrations, offering insights into its strengths and limitations.
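
The physiological endpoint mentioned, blood oxygen saturation, is a simple ratio of chromophore concentrations once qPACT has estimated them; the hard problem the paper evaluates is the reconstruction itself. For reference:

```python
def oxygen_saturation(c_hbo2: float, c_hb: float) -> float:
    """sO2 from estimated oxy-/deoxyhemoglobin concentrations.

    This is only the downstream formula; qPACT's challenge is recovering
    c_hbo2 and c_hb accurately from acoustic measurements.
    """
    return c_hbo2 / (c_hbo2 + c_hb)

print(oxygen_saturation(0.95, 0.05))  # 0.95 -> 95% saturation
```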

Daphne Tsolissou, Theofanis Ganitidis, Konstantinos Mitsis, Stergios Christodoulidis, Maria Vakalopoulou, Konstantina Nikita

arXiv preprint · Oct 3, 2025
Reliable risk assessment for carotid atheromatous disease remains a major clinical challenge, as it requires integrating diverse clinical and imaging information in a manner that is transparent and interpretable to clinicians. This study investigates the potential of recent state-of-the-art large vision-language models (LVLMs) for multimodal carotid plaque assessment by integrating ultrasound imaging (USI) with structured clinical, demographic, laboratory, and protein biomarker data. A framework that simulates realistic diagnostic scenarios through interview-style question sequences is proposed, comparing a range of open-source LVLMs, including both general-purpose and medically tuned models. Zero-shot experiments reveal that even very powerful LVLMs do not reliably identify imaging modality and anatomy, and all of them perform poorly at risk classification. To address this limitation, LLaVa-NeXT-Vicuna is adapted to the ultrasound domain using low-rank adaptation (LoRA), resulting in substantial improvements in stroke risk stratification. Integrating multimodal tabular data as text further enhances specificity and balanced accuracy, yielding competitive performance compared with prior convolutional neural network (CNN) baselines trained on the same dataset. Our findings highlight both the promise and the limitations of LVLMs in ultrasound-based cardiovascular risk prediction, underscoring the importance of multimodal integration, model calibration, and domain adaptation for clinical translation.
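
For readers wanting to reproduce the adaptation step: LoRA attaches small trainable low-rank matrices to frozen attention projections, so only a fraction of the parameters train. A hedged sketch with Hugging Face peft; the rank, target modules, and checkpoint are illustrative assumptions, not the paper's configuration:

```python
from peft import LoraConfig, get_peft_model
from transformers import LlavaNextForConditionalGeneration

# Base LVLM checkpoint (assumed; the paper's exact variant may differ).
model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-vicuna-7b-hf")

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,       # illustrative hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapters train
```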