
The Clinical Significance of Femoral and Tibial Anatomy for Anterior Cruciate Ligament Injury and Reconstruction.

Liew FF, Liang J

PubMed · Jun 19, 2025
The anterior cruciate ligament (ACL) is a crucial stabilizer of the knee joint, and its injury risk and surgical outcomes are closely linked to femoral and tibial anatomy. This review summarizes current evidence on how skeletal parameters, such as femoral intercondylar notch morphology, tibial slope, and insertion-site variations, influence ACL biomechanics. A narrowed or concave femoral notch raises the risk of impingement, while a higher posterior tibial slope exacerbates anterior tibial translation, increasing ACL strain. Gender disparities exist, with females exhibiting smaller notch dimensions, and hormonal fluctuations may contribute to ligament laxity. Age-related anatomical changes further complicate clinical management: adolescent patients face concerns related to epiphyseal growth, whereas older patients contend with degenerative notch narrowing and reduced bone density. Preoperative imaging (MRI, CT, and 3D reconstruction) enables precise assessment of anatomical variations, guiding individualized surgical strategies. Optimal femoral and tibial tunnel placement during reconstruction is vital to replicate native ACL biomechanics and avoid graft failure. Emerging technologies, including AI-driven segmentation and deep learning models, enhance risk prediction and intraoperative precision. Furthermore, synergistic factors, such as meniscal integrity and posterior oblique ligament anatomy, need to be integrated into comprehensive evaluations. Future directions emphasize personalized approaches, combining advanced imaging, neuromuscular training, and artificial intelligence to optimize prevention, diagnosis, and rehabilitation. Addressing age-specific challenges, such as growth plate preservation in pediatric cases and osteoarthritis management in the elderly, will improve long-term outcomes. Ultimately, a nuanced understanding of skeletal anatomy and technological integration holds promise for reducing ACL reinjury rates and enhancing patient recovery.

RadioRAG: Online Retrieval-augmented Generation for Radiology Question Answering.

Tayebi Arasteh S, Lotfinia M, Bressem K, Siepmann R, Adams L, Ferber D, Kuhl C, Kather JN, Nebelung S, Truhn D

PubMed · Jun 18, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate diagnostic accuracy of various large language models (LLMs) when answering radiology-specific questions with and without access to additional online, up-to-date information via retrieval-augmented generation (RAG). Materials and Methods The authors developed Radiology RAG (RadioRAG), an end-to-end framework that retrieves data from authoritative radiologic online sources in real-time. RAG incorporates information retrieval from external sources to supplement the initial prompt, grounding the model's response in relevant information. Using 80 questions from the RSNA Case Collection across radiologic subspecialties and 24 additional expert-curated questions with reference standard answers, LLMs (GPT-3.5-turbo, GPT-4, Mistral-7B, Mixtral-8 × 7B, and Llama3 [8B and 70B]) were prompted with and without RadioRAG in a zero-shot inference scenario (temperature ≤ 0.1, top- <i>P</i> = 1). RadioRAG retrieved context-specific information from www.radiopaedia.org. Accuracy of LLMs with and without RadioRAG in answering questions from each dataset was assessed. Statistical analyses were performed using bootstrapping while preserving pairing. Additional assessments included comparison of model with human performance and comparison of time required for conventional versus RadioRAG-powered question answering. Results RadioRAG improved accuracy for some LLMs, including GPT-3.5-turbo [74% (59/80) versus 66% (53/80), FDR = 0.03] and Mixtral-8 × 7B [76% (61/80) versus 65% (52/80), FDR = 0.02] on the RSNA-RadioQA dataset, with similar trends in the ExtendedQA dataset. Accuracy exceeded (FDR ≤ 0.007) that of a human expert (63%, (50/80)) for these LLMs, while not for Mistral-7B-instruct-v0.2, Llama3-8B, and Llama3-70B (FDR ≥ 0.21). RadioRAG reduced hallucinations for all LLMs (rates from 6-25%). RadioRAG increased estimated response time fourfold. Conclusion RadioRAG shows potential to improve LLM accuracy and factuality in radiology question answering by integrating real-time domain-specific data. ©RSNA, 2025.

Mono-Modalizing Extremely Heterogeneous Multi-Modal Medical Image Registration

Kyobin Choo, Hyunkyung Han, Jinyeong Kim, Chanyong Yoon, Seong Jae Hwang

arXiv preprint · Jun 18, 2025
In clinical practice, imaging modalities with functional characteristics, such as positron emission tomography (PET) and fractional anisotropy (FA), are often aligned with a structural reference (e.g., MRI, CT) for accurate interpretation or group analysis, necessitating multi-modal deformable image registration (DIR). However, due to the extreme heterogeneity of these modalities compared to standard structural scans, conventional unsupervised DIR methods struggle to learn reliable spatial mappings and often distort images. We find that the similarity metrics guiding these models fail to capture alignment between highly disparate modalities. To address this, we propose M2M-Reg (Multi-to-Mono Registration), a novel framework that trains multi-modal DIR models using only mono-modal similarity while preserving the established architectural paradigm for seamless integration into existing models. We also introduce GradCyCon, a regularizer that leverages M2M-Reg's cyclic training scheme to promote diffeomorphism. Furthermore, our framework naturally extends to a semi-supervised setting, integrating pre-aligned and unaligned pairs only, without requiring ground-truth transformations or segmentation masks. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that M2M-Reg achieves up to 2x higher DSC than prior methods for PET-MRI and FA-MRI registration, highlighting its effectiveness in handling highly heterogeneous multi-modal DIR. Our code is available at https://github.com/MICV-yonsei/M2M-Reg.
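The abstract's core idea, as I read it, is that similarity is only ever computed between images of the same modality, via a cyclic training scheme. Below is a toy PyTorch sketch of that idea: an image is warped into the other modality's space and back, and a mono-modal metric (here, normalized cross-correlation) compares the round-trip result with the original. The zero-initialized displacement fields stand in for the outputs of hypothetical registration networks; the actual M2M-Reg scheme and its GradCyCon regularizer may differ in detail.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image with a dense displacement field (normalized units)."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    return F.grid_sample(image, base + flow.permute(0, 2, 3, 1),
                         align_corners=True)

def ncc(a, b, eps=1e-6):
    """Normalized cross-correlation: a mono-modal similarity metric."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return (a * b).mean()

a = torch.rand(1, 1, 64, 64)  # e.g., a PET slice (functional modality)
b = torch.rand(1, 1, 64, 64)  # e.g., an MRI slice (structural reference)

# Zero-initialized displacement fields stand in for the outputs of
# hypothetical registration networks (a->b and b->a).
flow_ab = torch.zeros(1, 2, 64, 64, requires_grad=True)
flow_ba = torch.zeros(1, 2, 64, 64, requires_grad=True)

a_cycled = warp(warp(a, flow_ab), flow_ba)  # a -> b's space -> back to a's
loss = -ncc(a_cycled, a)  # similarity is computed within one modality only
loss.backward()
```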

NERO: Explainable Out-of-Distribution Detection with Neuron-level Relevance

Anju Chhetri, Jari Korhonen, Prashnna Gyawali, Binod Bhattarai

arXiv preprint · Jun 18, 2025
Ensuring reliability is paramount in deep learning, particularly within the domain of medical imaging, where diagnostic decisions often hinge on model outputs. The capacity to separate out-of-distribution (OOD) samples has proven to be a valuable indicator of a model's reliability in research. In medical imaging, this is especially critical, as identifying OOD inputs can help flag potential anomalies that might otherwise go undetected. While many OOD detection methods rely on feature or logit space representations, recent works suggest these approaches may not fully capture OOD diversity. To address this, we propose a novel OOD scoring mechanism, called NERO, that leverages neuron-level relevance at the feature layer. Specifically, we cluster neuron-level relevance for each in-distribution (ID) class to form representative centroids and introduce a relevance distance metric to quantify a new sample's deviation from these centroids, enhancing OOD separability. Additionally, we refine performance by incorporating scaled relevance in the bias term and combining feature norms. Our framework also enables explainable OOD detection. We validate its effectiveness across multiple deep learning architectures on the gastrointestinal imaging benchmarks Kvasir and GastroVision, achieving improvements over state-of-the-art OOD detection methods.
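A minimal sketch of the scoring idea as described: per-class centroids of neuron-level relevance vectors, with OOD-ness measured as the distance to the nearest centroid. Class means stand in for the paper's clustering step, and the refinements mentioned above (scaled relevance in the bias term, feature norms) are omitted; the relevance vectors themselves would come from an attribution method such as layer-wise relevance propagation.

```python
import numpy as np

def fit_centroids(relevance, labels):
    """Representative centroid (here: mean) of neuron-level relevance
    vectors for each in-distribution class."""
    return {c: relevance[labels == c].mean(axis=0) for c in np.unique(labels)}

def nero_score(r, centroids):
    """Relevance distance to the nearest class centroid; larger values
    suggest the sample is out-of-distribution."""
    return min(np.linalg.norm(r - mu) for mu in centroids.values())

# Hypothetical relevance vectors (e.g., from layer-wise relevance
# propagation at the feature layer) for 200 ID samples over 4 classes.
rng = np.random.default_rng(0)
rel_id = rng.normal(size=(200, 128))
labels = rng.integers(0, 4, size=200)

centroids = fit_centroids(rel_id, labels)
print(nero_score(rng.normal(size=128), centroids))
```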

DM-FNet: Unified multimodal medical image fusion via diffusion process-trained encoder-decoder

Dan He, Weisheng Li, Guofen Wang, Yuping Huang, Shiqiang Liu

arXiv preprint · Jun 18, 2025
Multimodal medical image fusion (MMIF) extracts the most meaningful information from multiple source images, enabling a more comprehensive and accurate diagnosis. Achieving high-quality fusion requires a careful balance of brightness, color, contrast, and detail; this ensures that the fused images effectively display relevant anatomical structures and reflect the functional status of the tissues. However, existing MMIF methods have limited capacity to capture detailed features during conventional training and suffer from insufficient cross-modal feature interaction, leading to suboptimal fused image quality. To address these issues, this study proposes a two-stage diffusion model-based fusion network (DM-FNet) to achieve unified MMIF. In Stage I, a diffusion process trains a UNet for image reconstruction. The UNet captures detailed information through progressive denoising and represents multilevel data, providing a rich set of feature representations for the subsequent fusion network. In Stage II, noisy images at various steps are fed into the fusion network to enhance the model's feature recognition capability. Three key fusion modules are also integrated to adaptively process medical images from different modalities. Finally, a robust network structure and a hybrid loss function are combined to harmonize the fused image's brightness, color, contrast, and detail, enhancing both quality and information density. Experimental results across various medical image types demonstrate that the proposed method performs exceptionally well on objective evaluation metrics. The fused image preserves appropriate brightness, a comprehensive distribution of radioactive tracers, rich textures, and clear edges. The code is available at https://github.com/HeDan-11/DM-FNet.
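Stage I amounts to denoising-diffusion training of the reconstruction UNet. The sketch below shows one simplified DDPM-style training step (noise an image at a random timestep, train the network to predict the added noise); the linear schedule, the unet(x, t) signature, and the placeholder network are assumptions for illustration, not DM-FNet's actual configuration.

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(unet, x0, T=1000):
    """One simplified DDPM training step: noise x0 at a random timestep t,
    then train the network to predict the added noise."""
    betas = torch.linspace(1e-4, 0.02, T)          # assumed linear schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (x0.shape[0],))
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * noise  # forward diffusion
    return F.mse_loss(unet(xt, t), noise)

x0 = torch.rand(4, 1, 32, 32)            # toy image batch
unet = lambda x, t: torch.zeros_like(x)  # placeholder for a real UNet
print(ddpm_training_step(unet, x0))
```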

Quality appraisal of radiomics-based studies on chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS).

Gitto S, Cuocolo R, Klontzas ME, Albano D, Messina C, Sconfienza LM

PubMed · Jun 18, 2025
To assess the methodological quality of radiomics-based studies on bone chondrosarcoma using the METhodological RadiomICs Score (METRICS) and the Radiomics Quality Score (RQS). A literature search was conducted in the EMBASE and PubMed databases for research papers published up to July 2024 and focused on radiomics in bone chondrosarcoma, with no restrictions regarding the study aim. Three readers independently evaluated study quality using METRICS and RQS. Baseline study characteristics were extracted. Inter-reader reliability was calculated using the intraclass correlation coefficient (ICC). Out of 68 identified papers, 18 were included in the analysis. Radiomics research was aimed at lesion classification (n = 15), outcome prediction (n = 2), or both (n = 1). Study design was retrospective in all papers. Most studies employed MRI (n = 12), CT (n = 3), or both (n = 1). METRICS and RQS adherence rates ranged between 37.3-94.8% and 2.8-44.4%, respectively. Excellent inter-reader reliability was found for both METRICS (ICC = 0.961) and RQS (ICC = 0.975). Among the limitations of the evaluated studies, the absence of prospective designs and deep learning-based analyses was highlighted, along with limited adherence to radiomics guidelines and limited use of external testing datasets and open science data. METRICS and RQS are reproducible quality assessment tools, with the former showing higher adherence rates in studies on chondrosarcoma. METRICS is better suited for assessing papers with a retrospective design, which is often chosen in musculoskeletal oncology due to the low prevalence of bone sarcomas. Employing quality scoring systems should be promoted in radiomics-based studies to improve methodological quality and facilitate clinical translation. Relevance statement: Employing reproducible quality scoring systems, especially METRICS (which shows higher adherence rates than RQS and is better suited for assessing retrospective investigations), is highly recommended to design radiomics-based studies on chondrosarcoma, improve methodological quality, and facilitate clinical translation. Key points: The low scientific and reporting quality of radiomics studies on chondrosarcoma is the main obstacle to clinical translation. Quality appraisal using METRICS and RQS showed 37.3-94.8% and 2.8-44.4% adherence rates, respectively. Room for improvement was noted in study design, deep learning methods, external testing, and open science. Employing reproducible quality scoring systems is recommended when designing radiomics studies on bone chondrosarcoma to facilitate clinical translation.
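For readers reproducing the reliability analysis, inter-reader agreement of this kind can be computed with pingouin's intraclass_corr, as in the sketch below. The scores are invented for illustration, and the paper does not state which ICC form (e.g., ICC2 for absolute agreement) was used.

```python
import pandas as pd
import pingouin as pg

# Invented METRICS percentage scores from three readers over four studies,
# purely to illustrate the ICC computation.
df = pd.DataFrame({
    "study":  ["s1", "s2", "s3", "s4"] * 3,
    "reader": ["r1"] * 4 + ["r2"] * 4 + ["r3"] * 4,
    "score":  [72.1, 55.0, 90.3, 61.7,
               70.5, 57.2, 88.9, 63.0,
               73.0, 54.1, 91.2, 60.8],
})
icc = pg.intraclass_corr(data=df, targets="study", raters="reader",
                         ratings="score")
print(icc[["Type", "ICC"]])  # pick the ICC form matching the study design
```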

Image-based AI tools in peripheral nerves assessment: Current status and integration strategies - A narrative review.

Martín-Noguerol T, Díaz-Angulo C, Luna A, Segovia F, Gómez-Río M, Górriz JM

PubMed · Jun 18, 2025
Peripheral nerves (PNs) are traditionally evaluated using ultrasound (US) or MRI, allowing radiologists to identify them and classify them as normal or pathological based on imaging findings, symptoms, and electrophysiological tests. However, the anatomical complexity of PNs, coupled with their proximity to surrounding structures such as vessels and muscles, presents significant challenges. Advanced imaging techniques, including MR neurography and diffusion-weighted imaging (DWI) neurography, have shown promise but are hindered by steep learning curves, operator dependency, and limited accessibility. Discrepancies between imaging findings and patient symptoms further complicate the evaluation of PNs, particularly in cases where imaging appears normal despite clinical indications of pathology. Additionally, demographic and clinical factors such as age, sex, comorbidities, and physical activity influence PN health but remain unquantifiable with current imaging methods. Artificial intelligence (AI) solutions have emerged as a transformative tool in PN evaluation. AI-based algorithms offer the potential to shift from qualitative to quantitative assessment, enabling precise segmentation, characterization, and threshold determination to distinguish healthy from pathological nerves. These advances could improve diagnostic accuracy and treatment monitoring. This review highlights the latest advances in AI applications for PN imaging, discussing their potential to overcome current limitations and the opportunities for integrating them into routine radiological practice.

Frequency-Calibrated Membership Inference Attacks on Medical Image Diffusion Models

Xinkai Zhao, Yuta Tokuoka, Junichiro Iwasawa, Keita Oda

arXiv preprint · Jun 17, 2025
The increasing use of diffusion models for image generation, especially in sensitive areas like medical imaging, has raised significant privacy concerns. Membership Inference Attack (MIA) has emerged as a potential approach to determine if a specific image was used to train a diffusion model, thus quantifying privacy risks. Existing MIA methods often rely on diffusion reconstruction errors, where member images are expected to have lower reconstruction errors than non-member images. However, applying these methods directly to medical images faces challenges. Reconstruction error is influenced by inherent image difficulty, and diffusion models struggle with high-frequency detail reconstruction. To address these issues, we propose a Frequency-Calibrated Reconstruction Error (FCRE) method for MIAs on medical image diffusion models. By focusing on reconstruction errors within a specific mid-frequency range and excluding both high-frequency (difficult to reconstruct) and low-frequency (less informative) regions, our frequency-selective approach mitigates the confounding factor of inherent image difficulty. Specifically, we analyze the reverse diffusion process, obtain the mid-frequency reconstruction error, and compute the structural similarity index score between the reconstructed and original images. Membership is determined by comparing this score to a threshold. Experiments on several medical image datasets demonstrate that our FCRE method outperforms existing MIA methods.
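A minimal sketch of the frequency-calibrated score as the abstract describes it: band-pass both images in the Fourier domain to keep a mid-frequency ring, then compute SSIM between them. The cutoff radii, the radial mask construction, and the noisy stand-in for a diffusion reconstruction are illustrative assumptions; the paper derives the reconstruction from the reverse diffusion process.

```python
import numpy as np
from skimage.metrics import structural_similarity

def bandpass(img, lo=0.1, hi=0.5):
    """Keep spatial frequencies with normalized radius in [lo, hi)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    f[(r < lo) | (r >= hi)] = 0
    return np.fft.ifft2(np.fft.ifftshift(f)).real

def fcre_score(original, reconstruction, lo=0.1, hi=0.5):
    """SSIM between the mid-frequency bands of original and reconstruction;
    membership is decided by thresholding this score."""
    a, b = bandpass(original, lo, hi), bandpass(reconstruction, lo, hi)
    drange = max(a.max() - a.min(), b.max() - b.min(), 1e-8)
    return structural_similarity(a, b, data_range=drange)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
rec = img + 0.05 * rng.standard_normal((64, 64))  # stand-in reconstruction
print(fcre_score(img, rec))
```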

Deep learning based colorectal cancer detection in medical images: A comprehensive analysis of datasets, methods, and future directions.

Gülmez B

PubMed · Jun 17, 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Various convolutional neural network architectures, including ResNet (40 implementations), VGG (18 implementations), and emerging transformer-based models (12 implementations), are systematically categorized and evaluated across classification, object detection, and segmentation tasks. The investigation encompasses hyperparameter optimization techniques used to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization. The role of explainable AI methods in medical diagnosis interpretation is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across different architectural implementations. Potential future research directions, including multimodal learning and federated learning, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and for clinical practitioners implementing AI-based colorectal cancer detection systems.
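Since the review leans on Grad-CAM for interpretation, a compact PyTorch sketch of that technique may help: pool the gradients of the target logit over each feature map to get channel weights, then form a ReLU-weighted sum of the maps. The resnet18 backbone and the layer4 hook location are illustrative choices, not specific to any reviewed study.

```python
import torch
import torchvision

# Illustrative backbone and hook location; any CNN classifier works similarly.
model = torchvision.models.resnet18(weights=None).eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.rand(1, 3, 224, 224)                 # stand-in for an input image
logits = model(x)
logits[0, logits.argmax()].backward()          # gradient of the top class

w = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
cam = torch.relu((w * feats["a"]).sum(dim=1))  # weighted sum of feature maps
cam = cam / (cam.max() + 1e-8)                 # normalized heatmap (7x7 here)
```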

Imaging-Based AI for Predicting Lymphovascular Space Invasion in Cervical Cancer: Systematic Review and Meta-Analysis.

She L, Li Y, Wang H, Zhang J, Zhao Y, Cui J, Qiu L

PubMed · Jun 16, 2025
The role of artificial intelligence (AI) in enhancing the accuracy of lymphovascular space invasion (LVSI) detection in cervical cancer remains debated. This meta-analysis aimed to evaluate the diagnostic accuracy of imaging-based AI for predicting LVSI in cervical cancer. We conducted a comprehensive literature search across multiple databases, including PubMed, Embase, and Web of Science, identifying studies published up to November 9, 2024. Studies were included if they evaluated the diagnostic performance of imaging-based AI models in detecting LVSI in cervical cancer. We used a bivariate random-effects model to calculate pooled sensitivity and specificity with corresponding 95% confidence intervals. Study heterogeneity was assessed using the I² statistic. Of 403 studies identified, 16 studies (2514 patients) were included. For the internal validation set, the pooled sensitivity, specificity, and area under the curve (AUC) for detecting LVSI were 0.84 (95% CI 0.79-0.87), 0.78 (95% CI 0.75-0.81), and 0.87 (95% CI 0.84-0.90). For the external validation set, the pooled sensitivity, specificity, and AUC were 0.79 (95% CI 0.70-0.86), 0.76 (95% CI 0.67-0.83), and 0.84 (95% CI 0.81-0.87). Using the likelihood ratio test for subgroup analysis, deep learning demonstrated significantly higher sensitivity than machine learning (P=.01), and AI models based on positron emission tomography/computed tomography exhibited superior sensitivity relative to those based on magnetic resonance imaging (P=.01). Imaging-based AI, particularly deep learning algorithms, demonstrates promising diagnostic performance in predicting LVSI in cervical cancer. However, the limited external validation datasets and the retrospective nature of the included studies may introduce bias. These findings underscore AI's potential as an auxiliary diagnostic tool, necessitating further large-scale prospective validation.
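For intuition about the pooling step: the paper fits a bivariate random-effects model, which jointly models sensitivity and specificity. The sketch below instead shows the simpler univariate DerSimonian-Laird pooling of logit-transformed sensitivities, with invented per-study counts, purely to illustrate how random-effects pooling works.

```python
import numpy as np

def pool_logit_dl(tp, n):
    """Univariate DerSimonian-Laird random-effects pooling of
    logit-transformed sensitivities (a simplification of the bivariate
    model used in the meta-analysis)."""
    tp, n = np.asarray(tp, float), np.asarray(n, float)
    p = (tp + 0.5) / (n + 1.0)               # continuity-corrected proportion
    y = np.log(p / (1 - p))                  # logit transform
    v = 1 / (tp + 0.5) + 1 / (n - tp + 0.5)  # within-study variance
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()       # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (w.sum() - (w ** 2).sum() / w.sum()))  # between-study variance
    w_star = 1 / (v + tau2)
    mu = (w_star * y).sum() / w_star.sum()
    return 1 / (1 + np.exp(-mu))             # back-transform to a proportion

# Invented per-study true-positive counts and diseased-case totals.
print(pool_logit_dl(tp=[42, 55, 30], n=[50, 70, 40]))
```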
