Page 68 of 99986 results

Large models in medical imaging: Advances and prospects.

Fang M, Wang Z, Pan S, Feng X, Zhao Y, Hou D, Wu L, Xie X, Zhang XY, Tian J, Dong D

PubMed · Jun 20 2025
Recent advances in large models demonstrate significant prospects for transforming the field of medical imaging. These models, including large language models, large visual models, and multimodal large models, offer unprecedented capabilities in processing and interpreting complex medical data across various imaging modalities. By leveraging self-supervised pretraining on vast unlabeled datasets, cross-modal representation learning, and domain-specific medical knowledge adaptation through fine-tuning, large models can achieve higher diagnostic accuracy and more efficient workflows for key clinical tasks. This review summarizes the concepts, methods, and progress of large models in medical imaging, highlighting their potential in precision medicine. The article first outlines the integration of multimodal data under large model technologies, approaches for training large models with medical datasets, and the need for robust evaluation metrics. It then explores how large models can revolutionize applications in critical tasks such as image segmentation, disease diagnosis, personalized treatment strategies, and real-time interactive systems, thus pushing the boundaries of traditional imaging analysis. Despite their potential, the practical implementation of large models in medical imaging faces notable challenges, including the scarcity of high-quality medical data, the need for optimized perception of imaging phenotypes, safety considerations, and seamless integration with existing clinical workflows and equipment. As research progresses, the development of more efficient, interpretable, and generalizable models will be critical to ensuring their reliable deployment across diverse clinical environments. This review aims to provide insights into the current state of the field and to offer directions for future research that can facilitate the broader adoption of large models in clinical practice.

Robust Training with Data Augmentation for Medical Imaging Classification

Josué Martínez-Martínez, Olivia Brown, Mostafa Karami, Sheida Nabavi

arXiv preprint · Jun 20 2025
Deep neural networks are increasingly used to detect and diagnose medical conditions from medical images. Despite their utility, these models are highly vulnerable to adversarial attacks and distribution shifts, which can affect diagnostic reliability and undermine trust among healthcare professionals. In this study, we propose a robust training algorithm with data augmentation (RTDA) to mitigate these vulnerabilities in medical image classification. We benchmark the robustness of RTDA and six competing baseline techniques, including adversarial training and data augmentation approaches in isolation and in combination, against adversarial perturbations and natural variations, using experimental datasets spanning three imaging technologies (mammograms, X-rays, and ultrasound). We demonstrate that RTDA achieves superior robustness against adversarial attacks and improved generalization under distribution shift in each image classification task while maintaining high clean accuracy.
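The abstract does not detail RTDA's internal components. As an illustrative sketch of the adversarial-perturbation ingredient that robust-training methods of this kind typically build on, the following toy example applies an FGSM-style signed-gradient perturbation to a one-feature logistic model; the model, weights, and epsilon are all hypothetical, not taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy for a single (x, y) example.
    p = sigmoid(w * x + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm_perturb(w, b, x, y, eps):
    # For logistic loss, dL/dx = (p - y) * w; step in the sign direction,
    # which (to first order) increases the loss the most for a given eps.
    p = sigmoid(w * x + b)
    grad_x = (p - y) * w
    sign = 1.0 if grad_x > 0 else (-1.0 if grad_x < 0 else 0.0)
    return x + eps * sign

# Toy values: a fixed model and one correctly classified input.
w, b = 2.0, -1.0
x, y = 1.5, 1
x_adv = fgsm_perturb(w, b, x, y, eps=0.25)
```

A robust-training loop would then mix such perturbed inputs (and augmented ones) into each training batch; the sketch above only shows how a single adversarial input is generated.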

The value of multimodal neuroimaging in the diagnosis and treatment of post-traumatic stress disorder: a narrative review.

Zhang H, Hu Y, Yu Y, Zhou Z, Sun Y, Qi C, Yang L, Xie H, Zhang J, Zhu H

PubMed · Jun 20 2025
Post-traumatic stress disorder (PTSD) is a delayed-onset or persistent psychiatric disorder that arises after an individual experiences an unusually threatening or catastrophic stressful event or situation. Because of its long duration and recurrent nature, unimodal neuroimaging tools such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and electroencephalography (EEG) have been widely used in the diagnosis and treatment of PTSD to enable early intervention. However, compared with a unimodal approach, a multimodal imaging approach can better capture the integrated neural mechanisms underlying the occurrence and development of PTSD, including predisposing factors, changes in neural activity, and the physiological mechanisms of symptoms. Moreover, a multimodal neuroimaging approach can aid the diagnosis and treatment of PTSD, facilitate the search for biomarkers at different stages of the disorder, and help identify biomarkers of symptomatic improvement. At present, however, the majority of PTSD studies remain unimodal; combining multimodal brain imaging data with machine learning will be an important direction for future research.

Artificial intelligence in imaging diagnosis of liver tumors: current status and future prospects.

Hori M, Suzuki Y, Sofue K, Sato J, Nishigaki D, Tomiyama M, Nakamoto A, Murakami T, Tomiyama N

PubMed · Jun 19 2025
Liver cancer remains a significant global health concern, ranking as the sixth most common malignancy and the third leading cause of cancer-related deaths worldwide. Medical imaging plays a vital role in managing liver tumors, particularly hepatocellular carcinoma (HCC) and metastatic lesions. However, the large volume and complexity of imaging data can make accurate and efficient interpretation challenging. Artificial intelligence (AI) is recognized as a promising tool to address these challenges. Therefore, this review aims to explore the recent advances in AI applications in liver tumor imaging, focusing on key areas such as image reconstruction, image quality enhancement, lesion detection, tumor characterization, segmentation, and radiomics. Among these, AI-based image reconstruction has already been widely integrated into clinical workflows, helping to enhance image quality while reducing radiation exposure. While the adoption of AI-assisted diagnostic tools in liver imaging has lagged behind other fields, such as chest imaging, recent developments are driving their increasing integration into clinical practice. In the future, AI is expected to play a central role in various aspects of liver cancer care, including comprehensive image analysis, treatment planning, response evaluation, and prognosis prediction. This review offers a comprehensive overview of the status and prospects of AI applications in liver tumor imaging.

The Clinical Significance of Femoral and Tibial Anatomy for Anterior Cruciate Ligament Injury and Reconstruction.

Liew FF, Liang J

PubMed · Jun 19 2025
The anterior cruciate ligament (ACL) is a crucial stabilizer of the knee joint, and its injury risk and surgical outcomes are closely linked to femoral and tibial anatomy. This review summarizes current evidence on how skeletal parameters, such as femoral intercondylar notch morphology, tibial slope, and insertion-site variations, influence ACL biomechanics. A narrowed or concave femoral notch raises the risk of impingement, while a steeper posterior tibial slope exacerbates anterior tibial translation, increasing ACL strain. Gender disparities exist, with females exhibiting smaller notch dimensions, and hormonal fluctuations may contribute to ligament laxity. Age-related anatomical changes further complicate clinical management: adolescent patients present epiphyseal growth concerns, while older patients face degenerative notch narrowing and reduced bone density. Preoperative imaging (MRI, CT, and 3D reconstruction) enables precise assessment of anatomical variations, guiding individualized surgical strategies. Optimal femoral and tibial tunnel placement during reconstruction is vital to replicate native ACL biomechanics and avoid graft failure. Emerging technologies, including AI-driven segmentation and deep learning models, enhance risk prediction and intraoperative precision. Furthermore, synergistic factors, such as meniscal integrity and posterior oblique ligament anatomy, need to be integrated into comprehensive evaluations. Future directions emphasize personalized approaches, combining advanced imaging, neuromuscular training, and artificial intelligence to optimize prevention, diagnosis, and rehabilitation. Addressing age-specific challenges, such as growth-plate preservation in pediatric cases and osteoarthritis management in the elderly, will improve long-term outcomes. Ultimately, a nuanced understanding of skeletal anatomy and technological integration holds promise for reducing ACL reinjury rates and enhancing patient recovery.

Quality appraisal of radiomics-based studies on chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS).

Gitto S, Cuocolo R, Klontzas ME, Albano D, Messina C, Sconfienza LM

PubMed · Jun 18 2025
To assess the methodological quality of radiomics-based studies on bone chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS). A literature search was conducted on EMBASE and PubMed databases for research papers published up to July 2024 and focused on radiomics in bone chondrosarcoma, with no restrictions regarding the study aim. Three readers independently evaluated the study quality using METRICS and RQS. Baseline study characteristics were extracted. Inter-reader reliability was calculated using intraclass correlation coefficient (ICC). Out of 68 identified papers, 18 were finally included in the analysis. Radiomics research was aimed at lesion classification (n = 15), outcome prediction (n = 2) or both (n = 1). Study design was retrospective in all papers. Most studies employed MRI (n = 12), CT (n = 3) or both (n = 1). METRICS and RQS adherence rates ranged between 37.3-94.8% and 2.8-44.4%, respectively. Excellent inter-reader reliability was found for both METRICS (ICC = 0.961) and RQS (ICC = 0.975). Among the limitations of the evaluated studies, the absence of prospective studies and deep learning-based analyses was highlighted, along with the limited adherence to radiomics guidelines, use of external testing datasets and open science data. METRICS and RQS are reproducible quality assessment tools, with the former showing higher adherence rates in studies on chondrosarcoma. METRICS is better suited for assessing papers with retrospective design, which is often chosen in musculoskeletal oncology due to the low prevalence of bone sarcomas. Employing quality scoring systems should be promoted in radiomics-based studies to improve methodological quality and facilitate clinical translation. 
Relevance statement: Employing reproducible quality scoring systems, especially METRICS (which shows higher adherence rates than RQS and is better suited for assessing retrospective investigations), is highly recommended when designing radiomics-based studies on chondrosarcoma, to improve methodological quality and facilitate clinical translation. Key points: The low scientific and reporting quality of radiomics studies on chondrosarcoma is the main barrier to clinical translation. Quality appraisal using METRICS and RQS showed adherence rates of 37.3-94.8% and 2.8-44.4%, respectively. Room for improvement was noted in study design, deep learning methods, external testing, and open science. Employing reproducible quality scoring systems is recommended to design radiomics studies on bone chondrosarcoma and facilitate clinical translation.
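The inter-reader reliability reported above could in principle be reproduced with an intraclass correlation coefficient. Below is a minimal sketch assuming a two-way, absolute-agreement, single-rater ICC(2,1); the paper does not state which ICC variant was used, and the reader scores here are invented:

```python
def icc_2_1(scores):
    # scores: one list per subject, one score per reader.
    # Two-way ANOVA decomposition into subject, reader, and error terms.
    n = len(scores)        # subjects (here: papers being appraised)
    k = len(scores[0])     # readers
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                # between-subject mean square
    msc = ss_cols / (k - 1)                # between-reader mean square
    mse = ss_err / ((n - 1) * (k - 1))     # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up scores from three readers over three papers:
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]   # full agreement -> ICC = 1
noisy = [[1, 1, 2], [2, 2, 2], [3, 3, 4]]     # mild disagreement -> ICC < 1
```

Libraries such as pingouin (`intraclass_corr`) implement the full family of ICC variants; the hand-rolled version above is only to make the formula concrete.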

RadioRAG: Online Retrieval-augmented Generation for Radiology Question Answering.

Tayebi Arasteh S, Lotfinia M, Bressem K, Siepmann R, Adams L, Ferber D, Kuhl C, Kather JN, Nebelung S, Truhn D

PubMed · Jun 18 2025
Purpose: To evaluate the diagnostic accuracy of various large language models (LLMs) when answering radiology-specific questions with and without access to additional online, up-to-date information via retrieval-augmented generation (RAG). Materials and Methods: The authors developed Radiology RAG (RadioRAG), an end-to-end framework that retrieves data from authoritative radiologic online sources in real time. RAG incorporates information retrieval from external sources to supplement the initial prompt, grounding the model's response in relevant information. Using 80 questions from the RSNA Case Collection across radiologic subspecialties and 24 additional expert-curated questions with reference-standard answers, LLMs (GPT-3.5-turbo, GPT-4, Mistral-7B, Mixtral-8x7B, and Llama3 [8B and 70B]) were prompted with and without RadioRAG in a zero-shot inference scenario (temperature ≤ 0.1, top-P = 1). RadioRAG retrieved context-specific information from www.radiopaedia.org. The accuracy of LLMs with and without RadioRAG in answering questions from each dataset was assessed. Statistical analyses were performed using bootstrapping while preserving pairing. Additional assessments compared model performance with human performance and compared the time required for conventional versus RadioRAG-powered question answering. Results: RadioRAG improved accuracy for some LLMs, including GPT-3.5-turbo (74% [59/80] versus 66% [53/80], FDR = 0.03) and Mixtral-8x7B (76% [61/80] versus 65% [52/80], FDR = 0.02) on the RSNA-RadioQA dataset, with similar trends in the ExtendedQA dataset. Accuracy exceeded that of a human expert (63% [50/80]; FDR ≤ 0.007) for these LLMs, but not for Mistral-7B-instruct-v0.2, Llama3-8B, or Llama3-70B (FDR ≥ 0.21). RadioRAG reduced hallucinations for all LLMs (rates ranging from 6% to 25%) but increased estimated response time fourfold. Conclusion: RadioRAG shows potential to improve LLM accuracy and factuality in radiology question answering by integrating real-time, domain-specific data. ©RSNA, 2025.
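RadioRAG's actual retriever queries online radiology sources in real time. As a minimal sketch of the general RAG pattern it relies on (retrieve relevant context, then prepend it to the prompt before querying the LLM), here is a toy token-overlap retriever over an invented two-passage corpus; nothing below reflects RadioRAG's real retrieval or ranking logic:

```python
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, top_k=1):
    # Rank passages by token overlap with the query; a crude stand-in for
    # real retrieval (e.g., embedding similarity over an online index).
    ranked = sorted(corpus,
                    key=lambda p: len(tokenize(p) & tokenize(query)),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(question, corpus):
    # Ground the model's answer by prepending the retrieved context.
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Invented mini-corpus of radiology facts:
corpus = [
    "Pneumothorax appears as absent lung markings beyond a visceral pleural line.",
    "Acute subdural hematoma is typically hyperdense and crescent-shaped on CT.",
]
prompt = build_prompt(
    "What is the CT appearance of an acute subdural hematoma?", corpus)
```

The resulting prompt would then be sent to the LLM; in the paper's setup, the retrieved context comes from www.radiopaedia.org rather than a fixed in-memory list.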

Mono-Modalizing Extremely Heterogeneous Multi-Modal Medical Image Registration

Kyobin Choo, Hyunkyung Han, Jinyeong Kim, Chanyong Yoon, Seong Jae Hwang

arXiv preprint · Jun 18 2025
In clinical practice, imaging modalities with functional characteristics, such as positron emission tomography (PET) and fractional anisotropy (FA), are often aligned with a structural reference (e.g., MRI, CT) for accurate interpretation or group analysis, necessitating multi-modal deformable image registration (DIR). However, due to the extreme heterogeneity of these modalities compared to standard structural scans, conventional unsupervised DIR methods struggle to learn reliable spatial mappings and often distort images. We find that the similarity metrics guiding these models fail to capture alignment between highly disparate modalities. To address this, we propose M2M-Reg (Multi-to-Mono Registration), a novel framework that trains multi-modal DIR models using only mono-modal similarity while preserving the established architectural paradigm for seamless integration into existing models. We also introduce GradCyCon, a regularizer that leverages M2M-Reg's cyclic training scheme to promote diffeomorphism. Furthermore, our framework naturally extends to a semi-supervised setting, using only pre-aligned and unaligned pairs, without requiring ground-truth transformations or segmentation masks. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that M2M-Reg achieves up to 2x higher DSC than prior methods for PET-MRI and FA-MRI registration, highlighting its effectiveness in handling highly heterogeneous multi-modal DIR. Our code is available at https://github.com/MICV-yonsei/M2M-Reg.
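GradCyCon's precise formulation is not given in the abstract. The generic cycle-consistency idea that such regularizers build on (penalizing a forward-then-backward round trip for deviating from the identity) can be sketched on toy 1-D transforms as follows; the transforms and sample points here are invented, and real DIR operates on dense displacement fields, not scalar shifts:

```python
def compose(f, g):
    # Apply g first, then f: (f o g)(x).
    return lambda x: f(g(x))

def cycle_penalty(forward, backward, points):
    # Mean deviation of the round trip from identity, sampled at `points`;
    # zero exactly when `backward` inverts `forward` on those points.
    return sum(abs(compose(backward, forward)(x) - x)
               for x in points) / len(points)

# Toy "registrations": a rigid shift and its inverse.
shift = lambda x: x + 2.0
inverse = lambda x: x - 2.0
points = [0.0, 1.0, 5.0]
```

Adding such a penalty to the training loss pushes the forward and backward mappings toward being mutual inverses, which is one practical way to encourage invertible (diffeomorphism-like) behavior.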

Image-based AI tools in peripheral nerves assessment: Current status and integration strategies - A narrative review.

Martín-Noguerol T, Díaz-Angulo C, Luna A, Segovia F, Gómez-Río M, Górriz JM

PubMed · Jun 18 2025
Peripheral Nerves (PNs) are traditionally evaluated using US or MRI, allowing radiologists to identify and classify them as normal or pathological based on imaging findings, symptoms, and electrophysiological tests. However, the anatomical complexity of PNs, coupled with their proximity to surrounding structures like vessels and muscles, presents significant challenges. Advanced imaging techniques, including MR-neurography and Diffusion-Weighted Imaging (DWI) neurography, have shown promise but are hindered by steep learning curves, operator dependency, and limited accessibility. Discrepancies between imaging findings and patient symptoms further complicate the evaluation of PNs, particularly in cases where imaging appears normal despite clinical indications of pathology. Additionally, demographic and clinical factors such as age, sex, comorbidities, and physical activity influence PN health but remain unquantifiable with current imaging methods. Artificial Intelligence (AI) solutions have emerged as a transformative tool in PN evaluation. AI-based algorithms offer the potential to transition from qualitative to quantitative assessments, enabling precise segmentation, characterization, and threshold determination to distinguish healthy from pathological nerves. These advances could improve diagnostic accuracy and treatment monitoring. This review highlights the latest advances in AI applications for PN imaging, discussing their potential to overcome current limitations and strategies for integrating them into routine radiological practice.

DM-FNet: Unified multimodal medical image fusion via diffusion process-trained encoder-decoder

Dan He, Weisheng Li, Guofen Wang, Yuping Huang, Shiqiang Liu

arXiv preprint · Jun 18 2025
Multimodal medical image fusion (MMIF) extracts the most meaningful information from multiple source images, enabling a more comprehensive and accurate diagnosis. Achieving high-quality fusion results requires a careful balance of brightness, color, contrast, and detail; this ensures that the fused images effectively display relevant anatomical structures and reflect the functional status of the tissues. However, existing MMIF methods have limited capacity to capture detailed features during conventional training and suffer from insufficient cross-modal feature interaction, leading to suboptimal fused image quality. To address these issues, this study proposes a two-stage diffusion model-based fusion network (DM-FNet) to achieve unified MMIF. In Stage I, a diffusion process trains a UNet for image reconstruction. The UNet captures detailed information through progressive denoising and represents multilevel data, providing a rich set of feature representations for the subsequent fusion network. In Stage II, noisy images at various steps are input into the fusion network to enhance the model's feature recognition capability. Three key fusion modules are also integrated to process medical images from different modalities adaptively. Ultimately, the robust network structure and a hybrid loss function are integrated to harmonize the fused image's brightness, color, contrast, and detail, enhancing its quality and information density. The experimental results across various medical image types demonstrate that the proposed method performs exceptionally well on objective evaluation metrics. The fused image preserves appropriate brightness, a comprehensive distribution of radioactive tracers, rich textures, and clear edges. The code is available at https://github.com/HeDan-11/DM-FNet.
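The hybrid loss mentioned above balances intensity-level and detail-level quality. As a hedged 1-D sketch of that general idea (DM-FNet's actual loss terms and weights are not specified in the abstract), one might combine an intensity MSE with a gradient MSE that rewards preserved edges:

```python
def mse(a, b):
    # Mean squared error between two equal-length sequences.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def grad1d(sig):
    # Forward differences: a crude proxy for edge/detail content.
    return [sig[i + 1] - sig[i] for i in range(len(sig) - 1)]

def hybrid_loss(fused, target, alpha=0.5):
    # Weighted sum of an intensity term and a gradient (detail) term;
    # alpha trades off global brightness fidelity against edge fidelity.
    return (alpha * mse(fused, target)
            + (1 - alpha) * mse(grad1d(fused), grad1d(target)))
```

In a real fusion network, each term would be computed on 2-D or 3-D images (with image gradients such as Sobel responses) and the target would be derived from the source modalities rather than a single reference signal.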
