
Evaluation of radiology residents' reporting skills using large language models: an observational study.

Atsukawa N, Tatekawa H, Oura T, Matsushita S, Horiuchi D, Takita H, Mitsuyama Y, Omori A, Shimono T, Miki Y, Ueda D

PubMed · Jul 1, 2025
Large language models (LLMs) have the potential to objectively evaluate radiology resident reports; however, research on their use for feedback in radiology training and assessment of resident skill development remains limited. This study aimed to assess the effectiveness of LLMs in revising radiology reports by comparing them with reports verified by board-certified radiologists and to analyze the progression of residents' reporting skills over time. To identify the LLM that best aligned with human radiologists, 100 reports were randomly selected from 7376 reports authored by nine first-year radiology residents. The reports were evaluated based on six criteria: (1) addition of missing positive findings, (2) deletion of findings, (3) addition of negative findings, (4) correction of the expression of findings, (5) correction of the diagnosis, and (6) proposal of additional examinations or treatments. Reports were segmented into four time-based terms, and 900 reports (450 CT and 450 MRI) were randomly chosen from the initial and final terms of the residents' first year. The revision rates for each criterion were compared between the first and last terms using the Wilcoxon signed-rank test. Among the three LLMs evaluated (ChatGPT-4 Omni [GPT-4o], Claude-3.5 Sonnet, and Claude-3 Opus), GPT-4o demonstrated the highest level of agreement with board-certified radiologists. Using GPT-4o, significant improvements were noted in Criteria 1-3 between the first and last terms (P < 0.001, P = 0.023, and P = 0.004, respectively). No significant changes were observed for Criteria 4-6. Despite this, all criteria except Criterion 6 showed progressive enhancement over time. LLMs can effectively provide feedback on commonly corrected areas in radiology reports, enabling residents to objectively identify and improve their weaknesses and monitor their progress. Additionally, LLMs may help reduce the workload of radiologist mentors.
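
A rough illustration of the term-to-term comparison described above is sketched below, assuming paired per-resident revision rates for a single criterion; the numbers and variable names are invented, not taken from the study.

```python
# Minimal sketch: paired comparison of per-resident revision rates for one
# criterion between the first and last terms (toy data, not the study's).
from scipy.stats import wilcoxon

# Revision rate for Criterion 1 per resident (nine residents), first vs. last term.
first_term = [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.39, 0.41, 0.37]
last_term  = [0.30, 0.28, 0.35, 0.31, 0.27, 0.33, 0.29, 0.32, 0.26]

stat, p_value = wilcoxon(first_term, last_term)
print(f"Wilcoxon statistic = {stat:.2f}, P = {p_value:.3f}")
```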

CXR-LLaVA: a multimodal large language model for interpreting chest X-ray images.

Lee S, Youn J, Kim H, Kim M, Yoon SH

PubMed · Jul 1, 2025
This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists. For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network. Then, the model was fine-tuned, primarily using Dataset 2. The model's diagnostic performance for major pathological findings was evaluated, along with the acceptability of radiologic reports by human radiologists, to gauge its potential for autonomous reporting. The model demonstrated impressive performance in test sets, achieving an average F1 score of 0.81 for six major pathological findings in the MIMIC internal test set and 0.56 for six major pathological findings in the external test set. The model's F1 scores surpassed those of GPT-4-vision and Gemini-Pro-Vision in both test sets. In human radiologist evaluations of the external test set, the model achieved a 72.7% success rate in autonomous reporting, slightly below the 84.0% rate of ground truth reports. This study highlights the significant potential of multimodal LLMs for CXR interpretation, while also acknowledging the performance limitations. Despite these challenges, we believe that making our model open-source will catalyze further research, expanding its effectiveness and applicability in various clinical contexts. Question: How can a multimodal large language model be adapted to interpret chest X-rays and generate radiologic reports? Findings: The developed CXR-LLaVA model effectively detects major pathological findings in chest X-rays and generates radiologic reports with a higher accuracy compared to general-purpose models. Clinical relevance: This study demonstrates the potential of multimodal large language models to support radiologists by autonomously generating chest X-ray reports, potentially reducing diagnostic workloads and improving radiologist efficiency.
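
For readers who want to see how per-finding F1 averaging of this kind works in practice, here is a minimal sketch; the finding names and label arrays are synthetic placeholders, not the CXR-LLaVA evaluation code.

```python
# Minimal sketch: average F1 over several pathological findings from binary
# ground-truth and predicted labels (toy data; not the CXR-LLaVA pipeline).
import numpy as np
from sklearn.metrics import f1_score

findings = ["atelectasis", "cardiomegaly", "consolidation",
            "edema", "effusion", "pneumothorax"]  # placeholder finding names
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, len(findings)))   # ground truth per CXR
y_pred = rng.integers(0, 2, size=(100, len(findings)))   # model predictions

per_finding_f1 = [f1_score(y_true[:, i], y_pred[:, i]) for i in range(len(findings))]
for name, f1 in zip(findings, per_finding_f1):
    print(f"{name}: F1 = {f1:.2f}")
print(f"average F1 = {np.mean(per_finding_f1):.2f}")
```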

Improving radiology reporting accuracy: use of GPT-4 to reduce errors in reports.

Mayes CJ, Reyes C, Truman ME, Dodoo CA, Adler CR, Banerjee I, Khandelwal A, Alexander LF, Sheedy SP, Thompson CP, Varner JA, Zulfiqar M, Tan N

PubMed · Jun 27, 2025
Radiology reports are essential for communicating imaging findings to guide diagnosis and treatment. Although most radiology reports are accurate, errors can occur in the final reports due to high workloads, use of dictation software, and human error. Advanced artificial intelligence models, such as GPT-4, show potential as tools to improve report accuracy. This retrospective study evaluated how GPT-4 performed in detecting and correcting errors in finalized abdominopelvic computed tomography (CT) reports in real-world settings. We evaluated finalized abdominopelvic CT reports from a tertiary health system by using GPT-4 with zero-shot learning techniques. Six radiologists each reviewed 100 of their finalized reports (randomly selected), evaluating GPT-4's suggested revisions for agreement, acceptance, and clinical impact. The radiologists' responses were compared by years in practice and sex. GPT-4 identified issues and suggested revisions for 91% of the 600 reports; most revisions addressed grammar (74%). The radiologists agreed with 27% of the revisions and accepted 23%. Most revisions were rated as having no (44%) or low (46%) clinical impact. Potential harm was rare (8%), with only 2 cases of potentially severe harm. Radiologists with less experience (≤ 7 years of practice) were more likely to agree with the revisions suggested by GPT-4 than those with more experience (34% vs. 20%, P = .003) and accepted a greater percentage of the revisions (32% vs. 15%, P = .003). Although GPT-4 showed promise in identifying errors and improving the clarity of finalized radiology reports, most errors were categorized as minor, with no or low clinical impact. Collectively, the radiologists accepted 23% of the suggested revisions in their finalized reports. This study highlights the potential of GPT-4 as a prospective tool for radiology reporting, with further refinement needed for consistent use in clinical practice.
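
The zero-shot setup described above can be approximated with a prompt along the lines of the sketch below; the system prompt wording, model identifier, and report text are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of a zero-shot proofreading prompt (illustrative only; the
# exact prompt, model version, and output format used in the study are not given).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report_text = "CT abdomen/pelvis: No acute intraabdominal abnormality. ..."  # placeholder

response = client.chat.completions.create(
    model="gpt-4",  # assumption; the study reports using GPT-4
    messages=[
        {"role": "system",
         "content": "You are a radiology proofreader. Identify errors in the "
                    "finalized report below and suggest minimal revisions."},
        {"role": "user", "content": report_text},
    ],
)
print(response.choices[0].message.content)
```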

How well do multimodal LLMs interpret CT scans? An auto-evaluation framework for analyses.

Zhu Q, Hou B, Mathai TS, Mukherjee P, Jin Q, Chen X, Wang Z, Cheng R, Summers RM, Lu Z

PubMed · Jun 25, 2025
This study introduces a novel evaluation framework, GPTRadScore, to systematically assess the performance of multimodal large language models (MLLMs) in generating clinically accurate findings from CT imaging. Specifically, GPTRadScore leverages LLMs as an evaluation metric, aiming to provide a more accurate and clinically informed assessment than traditional language-specific methods. Using this framework, we evaluate the capability of several MLLMs, including GPT-4 with Vision (GPT-4V), Gemini Pro Vision, LLaVA-Med, and RadFM, to interpret findings in CT scans. This retrospective study leverages a subset of the public DeepLesion dataset to evaluate the performance of several multimodal LLMs in describing findings in CT slices. GPTRadScore was developed to assess the generated descriptions (location, body part, and type) using GPT-4, alongside traditional metrics. RadFM was fine-tuned using a subset of the DeepLesion dataset with additional labeled examples targeting complex findings. Post fine-tuning, performance was reassessed using GPTRadScore to measure accuracy improvements. Evaluations demonstrated a high correlation of GPTRadScore with clinician assessments, with Pearson's correlation coefficients of 0.87, 0.91, 0.75, 0.90, and 0.89. These results highlight its superiority over traditional metrics, such as BLEU, METEOR, and ROUGE, and indicate that GPTRadScore can serve as a reliable evaluation metric. Using GPTRadScore, it was observed that while GPT-4V and Gemini Pro Vision outperformed other models, significant areas for improvement remain, primarily due to limitations in the datasets used for training. Fine-tuning RadFM resulted in substantial accuracy gains: location accuracy increased from 3.41% to 12.8%, body part accuracy improved from 29.12% to 53%, and type accuracy rose from 9.24% to 30%. These findings reinforce the hypothesis that fine-tuning RadFM can significantly enhance its performance. GPT-4 effectively correlates with expert assessments, validating its use as a reliable metric for evaluating multimodal LLMs in radiological diagnostics. Additionally, the results underscore the efficacy of fine-tuning approaches in improving the descriptive accuracy of LLM-generated medical imaging findings.
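
The correlation analysis used to validate GPTRadScore can be illustrated as follows, assuming two parallel lists of quality scores; the values are synthetic, not data from the study.

```python
# Minimal sketch: correlating an LLM-based score with clinician ratings
# (synthetic scores; not data from the GPTRadScore study).
from scipy.stats import pearsonr

gpt_scores       = [4, 3, 5, 2, 4, 5, 3, 4, 2, 5]   # e.g., GPT-4-assigned quality scores
clinician_scores = [4, 3, 4, 2, 5, 5, 3, 4, 1, 5]   # clinician ratings of the same findings

r, p = pearsonr(gpt_scores, clinician_scores)
print(f"Pearson r = {r:.2f} (P = {p:.3f})")
```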

MedErr-CT: A Visual Question Answering Benchmark for Identifying and Correcting Errors in CT Reports

Sunggu Kyung, Hyungbin Park, Jinyoung Seo, Jimin Sung, Jihyun Kim, Dongyeong Kim, Wooyoung Jo, Yoojin Nam, Sangah Park, Taehee Kwon, Sang Min Lee, Namkug Kim

arXiv preprint · Jun 24, 2025
Computed Tomography (CT) plays a crucial role in clinical diagnosis, but the growing demand for CT examinations has raised concerns about diagnostic errors. While Multimodal Large Language Models (MLLMs) demonstrate promising comprehension of medical knowledge, their tendency to produce inaccurate information highlights the need for rigorous validation. However, existing medical visual question answering (VQA) benchmarks primarily focus on simple visual recognition tasks, lacking clinical relevance and failing to assess expert-level knowledge. We introduce MedErr-CT, a novel benchmark for evaluating medical MLLMs' ability to identify and correct errors in CT reports through a VQA framework. The benchmark includes six error categories - four vision-centric errors (Omission, Insertion, Direction, Size) and two lexical error types (Unit, Typo) - and is organized into three task levels: classification, detection, and correction. Using this benchmark, we quantitatively assess the performance of state-of-the-art 3D medical MLLMs, revealing substantial variation in their capabilities across different error types. Our benchmark contributes to the development of more reliable and clinically applicable MLLMs, ultimately helping reduce diagnostic errors and improve accuracy in clinical practice. The code and datasets are available at https://github.com/babbu3682/MedErr-CT.
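
Benchmarks of this shape are usually scored per error category and per task level; the short sketch below shows one plain way to aggregate such results, using invented records rather than MedErr-CT outputs.

```python
# Minimal sketch: per-error-category accuracy aggregation for a benchmark
# like MedErr-CT (records are invented for illustration).
from collections import defaultdict

records = [  # (error_category, task_level, model_answer_correct)
    ("Omission", "classification", True),
    ("Omission", "correction", False),
    ("Direction", "detection", True),
    ("Unit", "classification", True),
    ("Typo", "correction", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for category, task, is_correct in records:
    key = (category, task)
    totals[key] += 1
    correct[key] += int(is_correct)

for key in sorted(totals):
    print(f"{key[0]:>9} / {key[1]:<14} accuracy = {correct[key] / totals[key]:.2f}")
```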

Fine-tuned large language model for classifying CT-guided interventional radiology reports.

Yasaka K, Nishimura N, Fukushima T, Kubo T, Kiryu S, Abe O

PubMed · Jun 23, 2025
Background: Manual data curation was necessary to extract radiology reports due to the ambiguities of natural language. Purpose: To develop a fine-tuned large language model that classifies computed tomography (CT)-guided interventional radiology reports into technique categories and to compare its performance with that of human readers. Material and Methods: This retrospective study included patients who underwent CT-guided interventional radiology between August 2008 and November 2024. Patients were chronologically assigned to the training (n = 1142; 646 men; mean age = 64.1 ± 15.7 years), validation (n = 131; 83 men; mean age = 66.1 ± 16.1 years), and test (n = 332; 196 men; mean age = 66.1 ± 14.8 years) datasets. In establishing a reference standard, reports were manually classified into categories 1 (drainage), 2 (lesion biopsy within fat or soft tissue density tissues), 3 (lung biopsy), and 4 (bone biopsy). A bidirectional encoder representations from transformers (BERT) model was fine-tuned with the training dataset, and the model with the best performance on the validation dataset was selected. The performance and time required for classification in the test dataset were compared between the best-performing model and the two readers. Results: Categories 1/2/3/4 included 309/367/270/196, 30/42/40/19, and 75/124/78/55 patients for the training, validation, and test datasets, respectively. The model demonstrated an accuracy of 0.979 in the test dataset, which was significantly better than that of the readers (0.922-0.940) (P ≤ 0.012). The model classified reports 49.8- to 53.5-fold faster than the readers. Conclusion: The fine-tuned large language model classified CT-guided interventional radiology reports into four categories with high accuracy and within a remarkably short time.
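
A fine-tuning setup of this kind can be sketched with the Hugging Face transformers API as below; the checkpoint name, toy report strings, and hyperparameters are placeholders, since the abstract does not specify them.

```python
# Minimal sketch: fine-tuning a BERT-style classifier for four report categories
# (placeholder checkpoint, toy data, and arbitrary hyperparameters).
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = {"drainage": 0, "soft_tissue_biopsy": 1, "lung_biopsy": 2, "bone_biopsy": 3}

class ReportDataset(Dataset):
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)

train_ds = ReportDataset(["CT-guided drainage of pelvic abscess ..."],  # toy report text
                         [LABELS["drainage"]], tok)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=train_ds,
)
trainer.train()
```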

Assessing accuracy and legitimacy of multimodal large language models on Japan Diagnostic Radiology Board Examination

Hirano, Y., Miki, S., Yamagishi, Y., Hanaoka, S., Nakao, T., Kikuchi, T., Nakamura, Y., Nomura, Y., Yoshikawa, T., Abe, O.

medRxiv preprint · Jun 23, 2025
Purpose: To assess and compare the accuracy and legitimacy of multimodal large language models (LLMs) on the Japan Diagnostic Radiology Board Examination (JDRBE). Materials and Methods: The dataset comprised questions from JDRBE 2021, 2023, and 2024, with ground-truth answers established through consensus among multiple board-certified diagnostic radiologists. Questions without associated images and those lacking unanimous agreement on answers were excluded. Eight LLMs were evaluated: GPT-4 Turbo, GPT-4o, GPT-4.5, GPT-4.1, o3, o4-mini, Claude 3.7 Sonnet, and Gemini 2.5 Pro. Each model was evaluated under two conditions: with image input (vision) and without (text-only). Performance differences between the conditions were assessed using McNemar's exact test. Two diagnostic radiologists (with 2 and 18 years of experience) independently rated the legitimacy of responses from four models (GPT-4 Turbo, Claude 3.7 Sonnet, o3, and Gemini 2.5 Pro) using a five-point Likert scale, blinded to model identity. Legitimacy scores were analyzed using Friedman's test, followed by pairwise Wilcoxon signed-rank tests with Holm correction. Results: The dataset included 233 questions. Under the vision condition, o3 achieved the highest accuracy at 72%, followed by o4-mini (70%) and Gemini 2.5 Pro (70%). Under the text-only condition, o3 topped the list with an accuracy of 67%. The addition of image input significantly improved the accuracy of two models (Gemini 2.5 Pro and GPT-4.5), but not the others. Both o3 and Gemini 2.5 Pro received significantly higher legitimacy scores than GPT-4 Turbo and Claude 3.7 Sonnet from both raters. Conclusion: Recent multimodal LLMs, particularly o3 and Gemini 2.5 Pro, have demonstrated remarkable progress on JDRBE questions, reflecting their rapid evolution in diagnostic radiology. Secondary abstract: Eight multimodal large language models were evaluated on the Japan Diagnostic Radiology Board Examination. OpenAI's o3 and Google DeepMind's Gemini 2.5 Pro achieved high accuracy (72% and 70%, respectively) and received good legitimacy scores from human raters, demonstrating steady progress.
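
The vision-versus-text comparison relies on McNemar's exact test over paired correct/incorrect outcomes; a minimal sketch with an invented 2x2 discordance table follows.

```python
# Minimal sketch: McNemar's exact test for paired vision vs. text-only accuracy
# (the 2x2 table below is invented, not the study's counts).
from statsmodels.stats.contingency_tables import mcnemar

#                 text-only correct   text-only wrong
# vision correct        120                 35
# vision wrong           15                 63
table = [[120, 35],
         [15, 63]]

result = mcnemar(table, exact=True)
print(f"McNemar exact test: statistic = {result.statistic}, P = {result.pvalue:.4f}")
```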

Artificial intelligence-assisted decision-making in third molar assessment using ChatGPT: is it really a valid tool?

Grinberg N, Ianculovici C, Whitefield S, Kleinman S, Feldman S, Peleg O

PubMed · Jun 20, 2025
Artificial intelligence (AI) is becoming increasingly popular in medicine. The current study aims to investigate whether an AI-based chatbot, such as ChatGPT, could be a valid tool for assisting in decision-making when assessing mandibular third molars before extraction. Panoramic radiographs were collected from a publicly available library. Mandibular third molars were assessed by position and depth. Two specialists evaluated each case regarding the need for CBCT referral; all cases were then introduced to ChatGPT under a uniform script to decide the need for further CBCT imaging. The process was performed first without any guidelines; second, after introducing the guidelines presented by Rood et al. (1990); and third, with additional test cases. ChatGPT's and the specialists' decisions were compared and analyzed using Cohen's kappa and the Cochran-Mantel-Haenszel test to account for the effect of different tooth positions. All analyses were performed at a 95% confidence level. The study evaluated 184 molars. Without any guidelines, ChatGPT agreed with the specialists in 49% of cases, with no statistically significant agreement (kappa < 0.1), followed by 70% and 91% with moderate (kappa = 0.39) and near-perfect (kappa = 0.81) agreement, respectively, after the second and third rounds (p < 0.05). The high agreement between the specialists and the chatbot was preserved when analyzed across the different tooth locations and positions (p < 0.01). ChatGPT has shown the ability to analyze third molars prior to surgical intervention using accepted guidelines, with substantial agreement with specialists.
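
Agreement between the chatbot and the specialists, as analyzed above, is the standard use case for Cohen's kappa; the sketch below uses invented referral decisions.

```python
# Minimal sketch: Cohen's kappa between specialist and chatbot CBCT-referral
# decisions (1 = refer, 0 = no referral; toy labels, not the study data).
from sklearn.metrics import cohen_kappa_score

specialist = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
chatbot    = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(specialist, chatbot)
print(f"Cohen's kappa = {kappa:.2f}")
```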

Evaluating ChatGPT's performance across radiology subspecialties: A meta-analysis of board-style examination accuracy and variability.

Nguyen D, Kim GHJ, Bedayat A

PubMed · Jun 20, 2025
Large language models (LLMs) like ChatGPT are increasingly used in medicine due to their ability to synthesize information and support clinical decision-making. While prior research has evaluated ChatGPT's performance on medical board exams, limited data exist on radiology-specific exams, especially with respect to prompt strategies and input modalities. This meta-analysis reviews ChatGPT's performance on radiology board-style questions, assessing accuracy across radiology subspecialties, prompt engineering methods, GPT model versions, and input modalities. Searches in PubMed and SCOPUS identified 163 articles, of which 16 met inclusion criteria after excluding irrelevant topics and non-board exam evaluations. Data extracted included subspecialty topics, accuracy, question count, GPT model, input modality, prompting strategies, and access dates. Statistical analyses included two-proportion z-tests, a binomial generalized linear model (GLM), and meta-regression with random effects (Stata v18.0, R v4.3.1). Across 7024 questions, overall accuracy was 58.83% (95% CI, 55.53-62.13). Performance varied widely by subspecialty, highest in emergency radiology (73.00%) and lowest in musculoskeletal radiology (49.24%). GPT-4 and GPT-4o significantly outperformed GPT-3.5 (p < .001), but visual inputs yielded lower accuracy than textual inputs (46.52% vs. 67.10%, p < .001). Basic prompting significantly improved accuracy compared with no prompting (66.23% vs. 59.70%, p < .01). A modest but significant decline in performance over time was also observed (p < .001). ChatGPT demonstrates promising but inconsistent performance on radiology board-style questions. Limitations in visual reasoning, heterogeneity across studies, and prompt engineering variability highlight areas requiring targeted optimization.
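
The two-proportion z-tests used in this meta-analysis (for example, textual versus visual input) can be illustrated as below; the counts are fabricated for the example.

```python
# Minimal sketch: two-proportion z-test comparing accuracy under two conditions
# (counts are fabricated; not the meta-analysis data).
from statsmodels.stats.proportion import proportions_ztest

correct = [671, 465]    # correct answers for text input vs. visual input
totals  = [1000, 1000]  # questions attempted in each condition

z, p = proportions_ztest(correct, totals)
print(f"z = {z:.2f}, P = {p:.4f}")
```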

Data extraction from free-text stroke CT reports using GPT-4o and Llama-3.3-70B: the impact of annotation guidelines.

Wihl J, Rosenkranz E, Schramm S, Berberich C, Griessmair M, Woźnicki P, Pinto F, Ziegelmayer S, Adams LC, Bressem KK, Kirschke JS, Zimmer C, Wiestler B, Hedderich D, Kim SH

PubMed · Jun 19, 2025
To evaluate the impact of an annotation guideline on the performance of large language models (LLMs) in extracting data from stroke computed tomography (CT) reports. The performance of GPT-4o and Llama-3.3-70B in extracting ten imaging findings from stroke CT reports was assessed in two datasets from a single academic stroke center. Dataset A (n = 200) was a stratified cohort including various pathological findings, whereas dataset B (n = 100) was a consecutive cohort. Initially, an annotation guideline providing clear data extraction instructions was designed based on a review of cases with inter-annotator disagreements in dataset A. For each LLM, data extraction was performed under two conditions: with the annotation guideline included in the prompt and without it. GPT-4o consistently demonstrated superior performance over Llama-3.3-70B under identical conditions, with micro-averaged precision ranging from 0.83 to 0.95 for GPT-4o and from 0.65 to 0.86 for Llama-3.3-70B. Across both models and both datasets, incorporating the annotation guideline into the LLM input resulted in higher precision rates, while recall rates largely remained stable. In dataset B, the precision of GPT-4o and Llama-3.3-70B improved from 0.83 to 0.95 and from 0.87 to 0.94, respectively. Overall classification performance with and without the annotation guideline was significantly different in five out of six conditions. GPT-4o and Llama-3.3-70B show promising performance in extracting imaging findings from stroke CT reports, although GPT-4o consistently outperformed Llama-3.3-70B. We also provide evidence that well-defined annotation guidelines can enhance LLM data extraction accuracy. Annotation guidelines can improve the accuracy of LLMs in extracting findings from radiological reports, potentially optimizing data extraction for specific downstream applications. LLMs have utility in data extraction from radiology reports, but the role of annotation guidelines remains underexplored. Data extraction accuracy from stroke CT reports by GPT-4o and Llama-3.3-70B improved when well-defined annotation guidelines were incorporated into the model prompt. Well-defined annotation guidelines can improve the accuracy of LLMs in extracting imaging findings from radiological reports.
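
Micro-averaged precision and recall, as reported above, pool true and false positives across all findings before dividing; a small sketch of that pooling follows, with invented per-finding counts.

```python
# Minimal sketch: micro-averaged precision and recall pooled across findings
# (per-finding counts are invented, not taken from the study).
per_finding = {  # finding -> (true positives, false positives, false negatives)
    "hemorrhage":       (40, 3, 2),
    "midline_shift":    (25, 5, 4),
    "vessel_occlusion": (18, 2, 6),
}

tp = sum(c[0] for c in per_finding.values())
fp = sum(c[1] for c in per_finding.values())
fn = sum(c[2] for c in per_finding.values())

micro_precision = tp / (tp + fp)
micro_recall = tp / (tp + fn)
print(f"micro precision = {micro_precision:.2f}, micro recall = {micro_recall:.2f}")
```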