
Optimized T1-weighted MP-RAGE MRI of the brain at 0.55 T using variable flip angle coherent gradient echo imaging and deep learning reconstruction.

Bieri O, Nickel MD, Weidensteiner C, Madörin P, Bauman G

PubMed · Sep 29 2025
To propose and evaluate an optimized MP-RAGE protocol for rapid T1-weighted imaging of the brain at 0.55 T. Incoherent and coherent steady-state free precession (SSFP) RAGE kernels with constant and variable excitation angles were investigated in terms of white matter SNR and the white matter-gray matter signal difference. Potential edge smearing from the transient signal readout was assessed with a differential point spread function analysis. Finally, the prospects of a deep learning reconstruction (DLR) method for accelerated MP-RAGE MRI of undersampled data were evaluated for the best-performing variant. MP-RAGE imaging with a variable flip angle (vFA) SSFP-FID kernel outperformed all other investigated variants. Compared with the standard MP-RAGE sequence using a spoiled gradient echo kernel with constant flip angle, vFA SSFP-FID offered an average gain in white matter SNR of 21% ± 2% and an average improvement in the white matter-gray matter signal difference for cortical gray matter of 47% ± 7%. The differential point spread function was narrowest for the spoiled gradient echo kernel and only 8% wider for vFA SSFP-FID. For vFA SSFP-FID, DLR reduced the overall scan time from 5:17 min to 2:46 min without noticeable image artifacts or degradation. At 0.55 T, a vFA MP-RAGE variant using an SSFP-FID kernel combined with a DLR method offers excellent prospects for rapid T1-weighted whole-brain imaging in less than 3 min at nearly 1 mm (1.12 × 1.17 × 1.25 mm³) isotropic resolution.
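The analysis rests on the fact that the signal evolution along the MP-RAGE echo train acts as a k-space filter whose Fourier transform determines the point spread function. The sketch below is a simplified illustration of that relationship, not the authors' differential PSF analysis; the train length and signal curves are assumptions chosen only to show the mechanism.

```python
# Simplified illustration (not the authors' code): the signal modulation along the
# MP-RAGE echo train acts as a k-space filter whose Fourier transform is the PSF.
import numpy as np

PAD = 8                                          # zero-padding factor for a smooth PSF
n_pe = 192                                       # phase-encode steps per inversion (assumed)
k = np.arange(n_pe)

constant = np.ones(n_pe)                         # idealized constant signal along the train
transient = 0.6 + 0.4 * np.exp(-k / 40.0)        # assumed decay toward steady state

def psf(weights):
    """|FFT| of the k-space weighting (linear phase-encode ordering assumed)."""
    p = np.abs(np.fft.fftshift(np.fft.fft(weights, n=PAD * len(weights))))
    return p / p.max()

def fwhm(profile):
    """Full width at half maximum of the central lobe, in nominal pixels."""
    above = np.where(profile >= 0.5)[0]
    return (above[-1] - above[0] + 1) / PAD

for name, w in [("constant", constant), ("transient", transient)]:
    print(f"{name:9s} PSF FWHM ~ {fwhm(psf(w)):.2f} px")
```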

Precision medicine in prostate cancer: individualized treatment through radiomics, genomics, and biomarkers.

Min K, Lin Q, Qiu D

PubMed · Sep 29 2025
Prostate cancer (PCa) is one of the most common malignancies threatening men's health globally. A comprehensive and integrated approach is essential for its early screening, diagnosis, risk stratification, treatment guidance, and efficacy assessment. Radiomics, leveraging multi-parametric magnetic resonance imaging (mpMRI) and positron emission tomography/computed tomography (PET/CT), has demonstrated significant clinical value in the non-invasive diagnosis, aggressiveness assessment, and prognosis prediction of PCa, with substantial potential when combined with artificial intelligence. In genomics, mutations or deletions in genes such as TMPRSS2-ERG, PTEN, RB1, TP53, and DNA damage repair genes (e.g., BRCA1/2) are closely associated with disease development and progression, holding profound implications for diagnosis, treatment, and prognosis. Concurrently, biomarkers like prostate-specific antigen (PSA), novel urinary markers (e.g., PCA3), and circulating tumor cells (CTCs) are widely utilized in PCa research and management. Integrating these technologies into personalized treatment plans and the broader framework of precision medicine allows for an in-depth exploration of the relationship between specific biomarkers and disease pathogenesis. This review summarizes the current research on radiomics, genomics, and biomarkers in PCa, and discusses their future potential and applications in advancing individualized patient care.
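As an illustration of the radiomics-plus-AI pipeline the review describes, the sketch below extracts handcrafted features from an imaging volume and lesion mask with pyradiomics and fits a toy classifier; the file names, cohort, and labels are hypothetical, and a real study would add feature selection, harmonization, and cross-validation.

```python
# Illustrative sketch (not the review's pipeline): radiomic features from an MRI volume
# plus lesion mask, fed to a toy classifier for risk stratification.
import numpy as np
from radiomics import featureextractor            # pip install pyradiomics
from sklearn.linear_model import LogisticRegression

extractor = featureextractor.RadiomicsFeatureExtractor()   # default feature classes

def case_features(image_path, mask_path):
    """Numeric radiomic feature vector for one lesion (diagnostics entries dropped)."""
    result = extractor.execute(image_path, mask_path)
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

# Hypothetical cohort: T2w volumes, lesion masks, and binary biopsy labels.
cases = [("case01_t2w.nii.gz", "case01_roi.nii.gz", 1),
         ("case02_t2w.nii.gz", "case02_roi.nii.gz", 0)]
X = np.stack([case_features(img, msk) for img, msk, _ in cases])
y = np.array([label for *_, label in cases])
clf = LogisticRegression(max_iter=1000).fit(X, y)           # toy model, no validation
```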

Integrating big data and artificial intelligence to predict progression in multiple sclerosis: challenges and the path forward.

Khan H, Aerts S, Vermeulen I, Woodruff HC, Lambin P, Peeters LM

PubMed · Sep 29 2025
Multiple sclerosis (MS) remains a complex and costly neurological condition characterised by progressive disability, making early detection and accurate prognosis of disease progression imperative. While artificial intelligence (AI) combined with big data promises transformative advances in personalised MS care, integration of multimodal, real-world datasets, including clinical records, magnetic resonance imaging (MRI), and digital biomarkers, remains limited. This perspective paper identifies a critical gap between technical innovation and clinical implementation, driven by methodological constraints, evolving regulatory frameworks, and ethical concerns related to bias, privacy, and equity. We explore this gap through three interconnected lenses: the underuse of integrated real-world data, the barriers posed by regulation and ethics, and emerging solutions. Promising strategies such as federated learning, regulatory initiatives like DARWIN-EU and the European Health Data Space, and patient-led frameworks including PROMS and CLAIMS, offer structured pathways forward. Additionally, we highlight the growing relevance of foundation models for interpreting complex MS data and supporting clinical decision-making. We advocate for harmonised data infrastructures, patient-centred design, explainable AI, and real-world validation as core pillars for future implementation. By aligning technical, regulatory, and ethical domains, stakeholders can unlock the full potential of AI to enhance prognosis, personalise care, and improve outcomes for people with MS.
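Federated learning, one of the strategies highlighted above, lets each MS centre train on its own records while only model parameters leave the site. The sketch below shows plain federated averaging on synthetic data; it is a generic illustration under assumed data shapes, not any of the cited initiatives' implementations.

```python
# Generic federated-averaging illustration: each site trains locally, only weights move.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Local training at one site: logistic regression by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(global_w, site_data):
    """One communication round: average site updates, weighted by sample count."""
    updates = [local_update(global_w, X, y) for X, y in site_data]
    sizes = np.array([len(y) for _, y in site_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):                               # ten federated rounds
    w = fed_avg(w, sites)
print(w)
```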

Readability versus accuracy in LLM-transformed radiology reports: stakeholder preferences across reading grade levels.

Lee HS, Kim S, Kim S, Seo J, Kim WH, Kim J, Han K, Hwang SH, Lee YH

PubMed · Sep 29 2025
To examine how reading grade levels affect stakeholder preferences based on a trade-off between accuracy and readability. A retrospective study of 500 radiology reports from academic and community hospitals across five imaging modalities was conducted. Reports were transformed into 11 reading grade levels (7-17) using Gemini. Accuracy, readability, and preference were rated on a 5-point scale by radiologists, physicians, and laypersons. Errors (generalizations, omissions, hallucinations) and potential changes in patient management (PCPM) were identified. Ordinal logistic regression analyzed preference predictors, and weighted kappa measured interobserver reliability. Preferences varied across reading grade levels depending on stakeholder group, modality, and clinical setting. Overall, preferences peaked at grade 16 but declined at grade 17, particularly among laypersons. Lower reading grades improved readability but increased errors, while higher grades improved accuracy but reduced readability. In multivariable analysis, accuracy was the strongest predictor of preference for all groups (OR: 30.29, 33.05, and 2.16; p < 0.001), followed by readability (OR: 2.73, 1.70, and 2.01; p < 0.001). Higher reading grade levels (12-17) were generally preferred owing to their better accuracy, but increasing the grade level further reduced readability sharply, limiting preference. These findings highlight the limitations of unsupervised LLM transformations and suggest the need for hybrid approaches that maintain original reports while incorporating explanatory content to balance accuracy and readability.
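For readers unfamiliar with the two statistical tools named here, the sketch below runs an ordinal (proportional-odds) logistic regression and a quadratic-weighted kappa on synthetic ratings; the variable names and coding are assumptions, not the study's data or analysis code.

```python
# Synthetic illustration: ordinal logistic regression for preference ratings and a
# quadratic-weighted kappa for interobserver agreement.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
n = 300
ratings = pd.DataFrame({
    "accuracy":    rng.integers(1, 6, n),          # 5-point scales (assumed coding)
    "readability": rng.integers(1, 6, n),
})
latent = 1.5 * ratings["accuracy"] + 0.5 * ratings["readability"] + rng.normal(0, 2, n)
preference = pd.cut(latent, bins=5, labels=[1, 2, 3, 4, 5])   # ordered categorical

# Preference ~ accuracy + readability; exponentiated coefficients are odds ratios.
res = OrderedModel(preference, ratings, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(res.params[:2]))

# Quadratic-weighted kappa between two hypothetical raters.
rater1 = preference.astype(int).to_numpy()
rater2 = np.clip(rater1 + rng.integers(-1, 2, n), 1, 5)
print(cohen_kappa_score(rater1, rater2, weights="quadratic"))
```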

Causal-Adapter: Taming Text-to-Image Diffusion for Faithful Counterfactual Generation

Lei Tong, Zhihua Liu, Chaochao Lu, Dino Oglic, Tom Diethe, Philip Teare, Sotirios A. Tsaftaris, Chen Jin

arXiv preprint · Sep 29 2025
We present Causal-Adapter, a modular framework that adapts frozen text-to-image diffusion backbones for counterfactual image generation. Our method enables causal interventions on target attributes, consistently propagating their effects to causal dependents without altering the core identity of the image. In contrast to prior approaches that rely on prompt engineering without explicit causal structure, Causal-Adapter leverages structural causal modeling augmented with two attribute regularization strategies: prompt-aligned injection, which aligns causal attributes with textual embeddings for precise semantic control, and a conditioned token contrastive loss to disentangle attribute factors and reduce spurious correlations. Causal-Adapter achieves state-of-the-art performance on both synthetic and real-world datasets, with up to 91% MAE reduction on Pendulum for accurate attribute control and 87% FID reduction on ADNI for high-fidelity MRI image generation. These results show that our approach enables robust, generalizable counterfactual editing with faithful attribute modification and strong identity preservation.
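The conditioned token contrastive loss is described only at a high level; the sketch below shows one common way such a loss can be written (InfoNCE-style over attribute tokens, with same-attribute pairs as positives). It is an illustration of the idea, not the authors' formulation.

```python
# Illustration only: an InfoNCE-style contrastive loss over attribute tokens in which
# pairs sharing an attribute value are positives, encouraging attribute factors to
# separate in the conditioning pathway.
import torch
import torch.nn.functional as F

def token_contrastive_loss(tokens, attr_labels, temperature=0.07):
    """tokens: (B, D) attribute-token embeddings; attr_labels: (B,) attribute values."""
    z = F.normalize(tokens, dim=-1)
    sim = z @ z.t() / temperature                               # (B, B) cosine similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos = (attr_labels.unsqueeze(0) == attr_labels.unsqueeze(1)) & ~eye
    logits = sim.masked_fill(eye, float("-inf"))                # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    per_sample = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_sample.mean()

tokens = torch.randn(8, 64, requires_grad=True)                 # toy attribute tokens
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])                 # toy attribute values
token_contrastive_loss(tokens, labels).backward()
```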

Toward a Vision-Language Foundation Model for Medical Data: Multimodal Dataset and Benchmarks for Vietnamese PET/CT Report Generation

Huu Tien Nguyen, Dac Thai Nguyen, The Minh Duc Nguyen, Trung Thanh Nguyen, Thao Nguyen Truong, Huy Hieu Pham, Johan Barthelemy, Minh Quan Tran, Thanh Tam Nguyen, Quoc Viet Hung Nguyen, Quynh Anh Chau, Hong Son Mai, Thanh Trung Nguyen, Phi Le Nguyen

arXiv preprint · Sep 29 2025
Vision-Language Foundation Models (VLMs), trained on large-scale multimodal datasets, have driven significant advances in Artificial Intelligence by enabling rich cross-modal reasoning. Despite their success in general domains, applying these models to medical imaging remains challenging due to the limited availability of diverse imaging modalities and multilingual clinical data. Most existing medical VLMs are trained on a subset of imaging modalities and focus primarily on high-resource languages, thus limiting their generalizability and clinical utility. To address these limitations, we introduce a novel Vietnamese-language multimodal medical dataset comprising 1,567,062 paired CT-PET images and 2,757 corresponding full-length clinical reports. This dataset is designed to fill two pressing gaps in medical AI development: (1) the lack of PET/CT imaging data in existing VLM training corpora, which hinders the development of models capable of handling functional imaging tasks; and (2) the underrepresentation of low-resource languages, particularly Vietnamese, in medical vision-language research. To the best of our knowledge, this is the first dataset to provide comprehensive PET/CT-report pairs in Vietnamese. We further introduce a training framework to enhance VLM learning, including data augmentation and expert-validated test sets. We conduct comprehensive experiments benchmarking state-of-the-art VLMs on downstream tasks, including medical report generation and visual question answering. The experimental results show that incorporating our dataset significantly improves the performance of existing VLMs. We believe this dataset and benchmark will serve as a pivotal step in advancing the development of more robust VLMs for medical imaging, particularly in low-resource languages, and improving their clinical relevance in Vietnamese healthcare.
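To make the paired-data setup concrete, the sketch below shows how co-registered CT and PET volumes could be paired with report text in a PyTorch dataset for vision-language fine-tuning. The file layout, field names, and NIfTI format are assumptions, not the released dataset's actual schema.

```python
# Hypothetical file layout and field names (not the released dataset's schema):
# pairing co-registered CT and PET volumes with their Vietnamese report text.
# A Hugging Face-style tokenizer is assumed.
import json
import numpy as np
import torch
import nibabel as nib                              # assumes NIfTI volumes
from torch.utils.data import Dataset

class PetCtReportDataset(Dataset):
    """Index file: a JSON list of {"ct": path, "pet": path, "report": str} (assumed)."""
    def __init__(self, index_json, tokenizer, max_len=512):
        with open(index_json) as f:
            self.items = json.load(f)
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        item = self.items[i]
        ct = nib.load(item["ct"]).get_fdata().astype(np.float32)
        pet = nib.load(item["pet"]).get_fdata().astype(np.float32)
        volume = np.stack([ct, pet])               # 2 channels: anatomy + function
        text = self.tokenizer(item["report"], truncation=True,
                              max_length=self.max_len, return_tensors="pt")
        return torch.from_numpy(volume), text

# Usage (hypothetical paths and tokenizer):
#   ds = PetCtReportDataset("train_index.json", AutoTokenizer.from_pretrained("..."))
```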

Cycle Diffusion Model for Counterfactual Image Generation

Fangrui Huang, Alan Wang, Binxu Li, Bailey Trang, Ridvan Yesiloglu, Tianyu Hua, Wei Peng, Ehsan Adeli

arXiv preprint · Sep 29 2025
Deep generative models have demonstrated remarkable success in medical image synthesis. However, ensuring conditioning faithfulness and high-quality synthetic images for direct or counterfactual generation remains a challenge. In this work, we introduce a cycle training framework to fine-tune diffusion models for improved conditioning adherence and enhanced synthetic image realism. Our approach, Cycle Diffusion Model (CDM), enforces consistency between generated and original images by incorporating cycle constraints, enabling more reliable direct and counterfactual generation. Experiments on a combined 3D brain MRI dataset (from ABCD, HCP aging & young adults, ADNI, and PPMI) show that our method improves conditioning accuracy and enhances image quality as measured by FID and SSIM. The results suggest that the cycle strategy used in CDM can be an effective method for refining diffusion-based medical image generation, with applications in data augmentation, counterfactual generation, and disease progression modeling.
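The cycle constraint can be pictured as translating an image to a counterfactual condition and back, then penalizing any drift from the original. The sketch below illustrates that idea with a toy conditional generator standing in for the diffusion sampler; it is not the CDM training code.

```python
# Illustration of the cycle idea: edit to a counterfactual condition, map back under the
# original condition, and penalize drift from the original image.
import torch
import torch.nn.functional as F

def cycle_loss(G, x, cond, cond_cf):
    """G(x, cond) -> image; cond is the factual condition, cond_cf the counterfactual."""
    x_cf = G(x, cond_cf)                 # forward edit to the counterfactual condition
    x_rec = G(x_cf, cond)                # map back under the original condition
    return F.l1_loss(x_rec, x)

class ToyGenerator(torch.nn.Module):
    """Stand-in conditional generator: one 3D conv over image + broadcast condition."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = torch.nn.Conv3d(ch + 1, ch, kernel_size=3, padding=1)
    def forward(self, x, cond):
        c = cond.view(-1, 1, 1, 1, 1).expand(-1, 1, *x.shape[2:])
        return self.net(torch.cat([x, c], dim=1))

G = ToyGenerator()
x = torch.randn(2, 1, 16, 16, 16)        # toy 3D brain MRI patches
age, age_cf = torch.tensor([60.0, 70.0]), torch.tensor([65.0, 75.0])
cycle_loss(G, x, age, age_cf).backward()
```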

EVLF-FM: Explainable Vision Language Foundation Model for Medicine

Yang Bai, Haoran Cheng, Yang Zhou, Jun Zhou, Arun Thirunavukarasu, Yuhe Ke, Jie Yao, Kanae Fukutsu, Chrystie Wan Ning Quek, Ashley Hong, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Hiok Hong Chan, Victor Koh, Marcus Tan, Kelvin Z. Li, Leonard Yip, Ching Yu Cheng, Yih Chung Tham, Gavin Siew Wei Tan, Leopold Schmetterer, Marcus Ang, Rahat Hussain, Jod Mehta, Tin Aung, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Soon Thye Lim, Eyal Klang, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arXiv preprint · Sep 29 2025
Despite the promise of foundation models in medical AI, current systems remain limited - they are modality-specific and lack transparent reasoning processes, hindering clinical adoption. To address this gap, we present EVLF-FM, a multimodal vision-language foundation model (VLM) designed to unify broad diagnostic capability with fine-grained explainability. The development and testing of EVLF-FM encompassed over 1.3 million total samples from 23 global datasets across eleven imaging modalities related to six clinical specialties: dermatology, hepatology, ophthalmology, pathology, pulmonology, and radiology. External validation employed 8,884 independent test samples from 10 additional datasets across five imaging modalities. Technically, EVLF-FM is developed to assist with multiple disease diagnosis and visual question answering with pixel-level visual grounding and reasoning capabilities. In internal validation for disease diagnostics, EVLF-FM achieved the highest average accuracy (0.858) and F1-score (0.797), outperforming leading generalist and specialist models. In medical visual grounding, EVLF-FM also achieved stellar performance across nine modalities with average mIoU of 0.743 and Acc@0.5 of 0.837. External validations further confirmed strong zero-shot and few-shot performance, with competitive F1-scores despite a smaller model size. Through a hybrid training strategy combining supervised and visual reinforcement fine-tuning, EVLF-FM not only achieves state-of-the-art accuracy but also exhibits step-by-step reasoning, aligning outputs with visual evidence. EVLF-FM is an early multi-disease VLM with explainability and reasoning capabilities that could advance adoption of and trust in foundation models for real-world clinical deployment.
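The grounding numbers reported above follow the usual convention of mean IoU plus the fraction of predictions whose IoU with the reference box exceeds 0.5. The sketch below computes both on toy boxes; it is illustrative only, not the paper's evaluation code.

```python
# Illustrative only: mean IoU and accuracy at an IoU threshold of 0.5 on toy boxes.
import numpy as np

def box_iou(a, b):
    """a, b: [x1, y1, x2, y2] with x2 > x1 and y2 > y1."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

preds = np.array([[10, 10, 50, 50], [0, 0, 20, 20]], dtype=float)   # predicted boxes
gts = np.array([[12, 12, 48, 52], [5, 5, 30, 30]], dtype=float)     # reference boxes
ious = np.array([box_iou(p, g) for p, g in zip(preds, gts)])
print("mIoU:", ious.mean(), "Acc@0.5:", (ious >= 0.5).mean())
```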

An Efficient 3D Latent Diffusion Model for T1-contrast Enhanced MRI Generation

Zach Eidex, Mojtaba Safari, Jie Ding, Richard Qiu, Justin Roper, David Yu, Hui-Kuo Shu, Zhen Tian, Hui Mao, Xiaofeng Yang

arXiv preprint · Sep 29 2025
Objective: Gadolinium-based contrast agents (GBCAs) are commonly employed with T1w MRI to enhance lesion visualization but are restricted in patients at risk of nephrogenic systemic fibrosis, and variations in GBCA administration can introduce imaging inconsistencies. This study develops an efficient 3D deep-learning framework to generate T1-contrast enhanced (T1C) images from pre-contrast multiparametric MRI. Approach: We propose the 3D latent rectified flow (T1C-RFlow) model for generating high-quality T1C images. First, T1w and T2-FLAIR images are input into a pretrained autoencoder to obtain an efficient latent space representation. A rectified flow diffusion model is then trained in this latent space. The T1C-RFlow model was trained on a curated dataset composed of the BraTS 2024 glioma (GLI; 1480 patients), meningioma (MEN; 1141 patients), and metastases (MET; 1475 patients) datasets. Selected patients were split into train (N=2860), validation (N=612), and test (N=614) sets. Results: Both qualitative and quantitative results demonstrate that the T1C-RFlow model outperforms benchmark 3D models (pix2pix, DDPM, Diffusion Transformers (DiT-3D)) trained in the same latent space. T1C-RFlow achieved the following metrics - GLI: NMSE 0.044 ± 0.047, SSIM 0.935 ± 0.025; MEN: NMSE 0.046 ± 0.029, SSIM 0.937 ± 0.021; MET: NMSE 0.098 ± 0.088, SSIM 0.905 ± 0.082. T1C-RFlow had the best tumor reconstruction performance and significantly faster denoising times (6.9 s/volume, 200 steps) than conventional DDPM models both in latent space (37.7 s, 1000 steps) and patch-based in image space (4.3 hr/volume). Significance: Our proposed method generates synthetic T1C images that closely resemble ground truth T1C in much less time than previous diffusion models. Further development may permit a practical method for contrast-agent-free MRI for brain tumors.
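Rectified flow trains a velocity field on straight-line paths between noise and data, which is what enables the few-step sampling reported here. The sketch below shows the generic training objective in a latent space with a stand-in network; dimensions and conditioning are assumptions, not the T1C-RFlow implementation.

```python
# Generic rectified-flow objective (not the T1C-RFlow code): train a velocity network so
# that straight-line interpolants between noise z0 and a target latent z1 have constant
# velocity z1 - z0.  A real model would be a 3D denoiser conditioned on T1w/FLAIR latents.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))
    def forward(self, z_t, t):
        return self.net(torch.cat([z_t, t], dim=-1))

def rectified_flow_loss(model, z1):
    z0 = torch.randn_like(z1)                      # noise endpoint
    t = torch.rand(z1.size(0), 1)                  # uniform time in [0, 1]
    z_t = (1 - t) * z0 + t * z1                    # straight-line interpolant
    v_target = z1 - z0                             # constant velocity along the line
    return ((model(z_t, t) - v_target) ** 2).mean()

model = VelocityNet()
z1 = torch.randn(8, 64)                            # toy "autoencoder latents"
rectified_flow_loss(model, z1).backward()
```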

Advancement in hepatocellular carcinoma research: Biomarkers, therapeutics approaches and impact of artificial intelligence.

Rajak D, Nema P, Sahu A, Vishwakarma S, Kashaw SK

PubMed · Sep 29 2025
Cancer is a leading, highly complex, and deadly disease that has become a major concern in modern medicine. Hepatocellular carcinoma is the most common primary liver cancer and a leading cause of global cancer mortality. Its development is predominantly associated with chronic liver diseases such as hepatitis B and C infections, cirrhosis, alcohol consumption, and non-alcoholic fatty liver disease. Molecular mechanisms underlying HCC involve genetic mutations, epigenetic changes, and disrupted signalling pathways, including Wnt/β-catenin and PI3K/AKT/mTOR. Early diagnosis remains challenging, as most cases are detected at advanced stages, limiting curative treatment options. Diagnostic advancements, including biomarkers like alpha-fetoprotein and cutting-edge imaging techniques such as CT, MRI, and ultrasound-based radiomics, have improved early detection. Treatment strategies depend on the disease stage, ranging from curative options like surgical resection and liver transplantation to palliative therapies, including transarterial chemoembolization, systemic therapies, and immunotherapy. Immune checkpoint inhibitors targeting PD-1/PD-L1 and CTLA-4 have shown promise for advanced HCC. In this review, we discuss emerging technologies for HCC management, including artificial intelligence and multi-omics platforms, which enhance diagnostic accuracy, identify novel therapeutic targets, and enable personalized treatment. Despite these advancements, the prognosis for HCC patients remains poor, underscoring the need for continued research into early detection, innovative therapies, and translational applications to effectively address this global health challenge.