Implementing Large Language Models in Health Care: Clinician-Focused Review With Interactive Guideline.

Li H, Fu JF, Python A

PubMed | Jul 11, 2025
Large language models (LLMs) can generate outputs understandable by humans, such as answers to medical questions and radiology reports. With the rapid development of LLMs, clinicians face a growing challenge in determining the most suitable algorithms to support their work. We aimed to provide clinicians and other health care practitioners with systematic guidance in selecting an LLM that is relevant and appropriate to their needs and to facilitate the integration of LLMs in health care. We conducted a literature search of full-text publications in English on clinical applications of LLMs published between January 1, 2022, and March 31, 2025, on PubMed, ScienceDirect, Scopus, and IEEE Xplore. We excluded papers from journals below a set citation threshold, as well as papers that did not focus on LLMs, were not research based, or did not involve clinical applications. We also conducted a literature search on arXiv within the same period and included papers on the clinical applications of innovative multimodal LLMs. This led to a total of 270 studies. We collected 330 LLMs and recorded how frequently each was applied to clinical tasks and how frequently it performed best in its context. On the basis of a 5-stage clinical workflow, we found that stages 2, 3, and 4 are the key stages, involving numerous clinical subtasks and LLMs; however, the diversity of LLMs that perform optimally in each context remains limited. GPT-3.5 and GPT-4 were the most versatile models in the 5-stage clinical workflow, applied to 52% (29/56) and 71% (40/56) of the clinical subtasks and performing best in 29% (16/56) and 54% (30/56) of them, respectively. General-purpose LLMs may not perform well in specialized areas, as they often require lightweight prompt engineering or fine-tuning on specific datasets to improve performance. Most LLMs with multimodal abilities are closed-source models and therefore lack transparency, model customization, and fine-tuning for specific clinical tasks; they may also pose challenges regarding data protection and privacy, which are common requirements in clinical settings. In this review, we found that LLMs may help clinicians in a variety of clinical tasks. However, we did not find evidence of generalist clinical LLMs successfully applicable to a wide range of clinical tasks, so their clinical deployment remains challenging. On the basis of this review, we propose an interactive online guideline for clinicians to select suitable LLMs by clinical task. Written from a clinical perspective and free of unnecessary technical jargon, this guideline may serve as a reference for successfully applying LLMs in clinical settings.

Oriented tooth detection: a CBCT image processing method integrated with RoI transformer.

Zhao Z, Wu B, Su S, Liu D, Wu Z, Gao R, Zhang N

PubMed | Jul 11, 2025
Cone beam computed tomography (CBCT) has revolutionized dental imaging due to its high spatial resolution and ability to provide detailed three-dimensional reconstructions of dental structures. This study addresses the challenge of accurate tooth detection and classification in panoramic (PAN) images derived from CBCT by introducing an oriented object detection approach integrated with a Region of Interest (RoI) Transformer, which has not previously been applied in dental imaging. Oriented detection better aligns with the natural growth patterns of teeth, allowing more accurate detection and classification of molars, premolars, canines, and incisors. By integrating the RoI Transformer, the model achieves performance competitive with conventional horizontal detection methods while offering enhanced visualization capabilities. Furthermore, post-processing techniques, including distance and grayscale value constraints, are employed to correct classification errors and reduce false positives, especially in areas with missing teeth. The experimental results indicate that the proposed method achieves an accuracy of 98.48%, a recall of 97.21%, an F1 score of 97.21%, and an mAP of 98.12% in tooth detection. The proposed method enhances the accuracy of tooth detection in CBCT-derived PAN images by reducing background interference and improving the visualization of tooth orientation.
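
As a concrete illustration of the distance and grayscale constraints described above, the following minimal Python sketch shows one way such rule-based post-filtering of oriented detections could look. The box format, pixel spacing, and all thresholds are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def filter_detections(boxes, scores, image, min_gap_mm=2.0, px_per_mm=10.0,
                      min_mean_gray=60, score_thr=0.5):
    """Rule-based post-filtering of oriented tooth detections.

    boxes  : (N, 5) array of (cx, cy, w, h, angle) oriented boxes in pixels
    scores : (N,) classification confidences
    image  : 2D grayscale panoramic image (uint8)

    Keeps a detection only if (1) its score passes the threshold,
    (2) its centre is not implausibly close to an already-kept tooth, and
    (3) the mean grayscale around the box centre is bright enough to be
    tooth tissue rather than background. The box angle is ignored here
    for simplicity.
    """
    keep, centers = [], []
    order = np.argsort(-scores)                       # high-confidence boxes first
    for i in order:
        if scores[i] < score_thr:
            continue
        cx, cy, w, h, _ = boxes[i]
        # distance constraint: reject boxes crowding an accepted tooth centre
        if any(np.hypot(cx - px, cy - py) < min_gap_mm * px_per_mm
               for px, py in centers):
            continue
        # grayscale constraint: sample a small patch around the centre
        x0, x1 = int(cx - w / 4), int(cx + w / 4)
        y0, y1 = int(cy - h / 4), int(cy + h / 4)
        patch = image[max(y0, 0):y1, max(x0, 0):x1]
        if patch.size == 0 or patch.mean() < min_mean_gray:
            continue
        keep.append(i)
        centers.append((cx, cy))
    return keep
```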

HNOSeg-XS: Extremely Small Hartley Neural Operator for Efficient and Resolution-Robust 3D Image Segmentation.

Wong KCL, Wang H, Syeda-Mahmood T

PubMed | Jul 11, 2025
In medical image segmentation, convolutional neural networks (CNNs) and transformers are dominant. For CNNs, given the local receptive fields of convolutional layers, long-range spatial correlations are captured through consecutive convolutions and pooling. However, as the computational cost and memory footprint can be prohibitively large, 3D models can afford only fewer layers than 2D models, with reduced receptive fields and abstraction levels. For transformers, although long-range correlations can be captured by multi-head attention, its quadratic complexity with respect to input size is computationally demanding. Therefore, either model may require input size reduction to allow more filters and layers for better segmentation. Nevertheless, given their discrete nature, models trained patch-wise or on downsampled images may produce suboptimal results when applied at higher resolutions. To address this issue, we propose the resolution-robust HNOSeg-XS architecture. We model image segmentation by learnable partial differential equations through the Fourier neural operator, which has the zero-shot super-resolution property. By replacing the Fourier transform with the Hartley transform and reformulating the problem in the frequency domain, we created the HNOSeg-XS model, which is resolution robust, fast, memory efficient, and extremely parameter efficient. When tested on the BraTS'23, KiTS'23, and MVSeg'23 datasets with a Tesla V100 GPU, HNOSeg-XS showed superior resolution robustness with fewer than 34.7k model parameters. It also achieved the best overall inference time (< 0.24 s) and memory efficiency (< 1.8 GiB) compared to the tested CNN and transformer models.
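
The key substitution described above, replacing the Fourier transform with the real-valued Hartley transform, can be expressed through the identity DHT(x) = Re(FFT(x)) - Im(FFT(x)). The sketch below illustrates that identity and the transform's involutive inverse in PyTorch; it is only the underlying transform, not the authors' HNOSeg-XS implementation.

```python
import torch

def dht(x: torch.Tensor) -> torch.Tensor:
    """Discrete Hartley transform over the last 3 (spatial) dims.

    Uses DHT(x) = Re(FFT(x)) - Im(FFT(x)), so the result is real-valued,
    avoiding the complex-valued spectral weights of a standard Fourier
    neural operator layer.
    """
    X = torch.fft.fftn(x, dim=(-3, -2, -1))
    return X.real - X.imag

def idht(X: torch.Tensor) -> torch.Tensor:
    """The DHT is an involution up to scaling: applying it twice and
    dividing by the number of spatial elements recovers the input."""
    n = X.shape[-3] * X.shape[-2] * X.shape[-1]
    return dht(X) / n

x = torch.randn(1, 4, 8, 8, 8, dtype=torch.float64)   # (batch, channels, D, H, W)
assert torch.allclose(idht(dht(x)), x)                 # round trip recovers the input
```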

Automated MRI protocoling in neuroradiology in the era of large language models.

Reiner LN, Chelbi M, Fetscher L, Stöckel JC, Csapó-Schmidt C, Guseynova S, Al Mohamad F, Bressem KK, Nawabi J, Siebert E, Wattjes MP, Scheel M, Meddeb A

PubMed | Jul 11, 2025
This study investigates the automation of MRI protocoling, a routine task in radiology, using large language models (LLMs), comparing an open-source model (Llama 3.1 405B) and a proprietary model (GPT-4o) with and without retrieval-augmented generation (RAG), a method for incorporating domain-specific knowledge. This retrospective study included MRI studies conducted between January and December 2023, along with institution-specific protocol assignment guidelines. Clinical questions were extracted, and a neuroradiologist established the gold standard protocol. LLMs were tasked with assigning MRI protocols and contrast medium administration with and without RAG. The results were compared to protocols selected by four radiologists. Token-based symmetric accuracy, the Wilcoxon signed-rank test, and the McNemar test were used for evaluation. Data from 100 neuroradiology reports (mean age 54.2 ± 18.41 years; 50% women) were included. RAG integration significantly improved accuracy in sequence and contrast media prediction for Llama 3.1 (sequences: 38% vs. 70%, P < .001; contrast media: 77% vs. 94%, P < .001) and GPT-4o (sequences: 43% vs. 81%, P < .001; contrast media: 79% vs. 92%, P = .006). GPT-4o outperformed Llama 3.1 in MRI sequence prediction (81% vs. 70%, P < .001), with accuracy comparable to that of the radiologists (81% ± 0.21, P = .43). Both models equaled radiologists in predicting contrast media administration (Llama 3.1 RAG: 94% vs. 91% ± 0.2, P = .37; GPT-4o RAG: 92% vs. 91% ± 0.24, P = .48). Large language models show great potential as decision-support tools for MRI protocoling, with performance similar to that of radiologists. RAG enhances the ability of LLMs to provide accurate, institution-specific protocol recommendations.
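
To make the retrieval-augmented generation step concrete, here is a minimal sketch of how institution-specific protocol guidelines could be retrieved and prepended to the model prompt. The guideline snippets, the TF-IDF retriever, and the prompt wording are illustrative assumptions; the study's actual RAG setup may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical institution-specific guideline snippets (assumption: the real
# guideline document would be split into one entry per protocol rule).
GUIDELINES = [
    "Suspected acute stroke: axial DWI, FLAIR, SWI, TOF angiography; no contrast.",
    "Follow-up of known glioma: T1, T2, FLAIR, DWI, post-contrast T1; gadolinium required.",
    "Chronic headache without red flags: T1, T2, FLAIR; no contrast.",
]

def build_prompt(clinical_question: str, k: int = 2) -> str:
    """Retrieve the k most similar guideline entries and assemble an LLM prompt."""
    vectorizer = TfidfVectorizer().fit(GUIDELINES + [clinical_question])
    guide_vecs = vectorizer.transform(GUIDELINES)
    query_vec = vectorizer.transform([clinical_question])
    sims = cosine_similarity(query_vec, guide_vecs).ravel()
    top = sims.argsort()[::-1][:k]
    context = "\n".join(GUIDELINES[i] for i in top)
    return (
        "You are assigning an MRI protocol.\n"
        f"Institutional guidelines:\n{context}\n\n"
        f"Clinical question: {clinical_question}\n"
        "Answer with the MRI sequences and whether contrast medium is needed."
    )

# The returned prompt would then be sent to the LLM (e.g., GPT-4o or Llama 3.1).
print(build_prompt("Left-sided weakness, rule out ischemic stroke"))
```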

Semi-supervised Medical Image Segmentation Using Heterogeneous Complementary Correction Network and Confidence Contrastive Learning.

Li L, Xue M, Li S, Dong Z, Liao T, Li P

PubMed | Jul 11, 2025
Semi-supervised medical image segmentation techniques have demonstrated significant potential and effectiveness in clinical diagnosis. The prevailing approaches based on the mean-teacher (MT) framework achieve promising segmentation results. However, due to the unreliability of the pseudo labels generated by the teacher model, existing methods still have inherent limitations that must be addressed. In this paper, we propose an innovative semi-supervised method for medical image segmentation that combines a heterogeneous complementary correction network with confidence contrastive learning (HC-CCL). Specifically, we develop a triple-branch framework by integrating a heterogeneous complementary correction (HCC) network into the MT framework. HCC serves as an auxiliary branch that corrects prediction errors in the student model and provides complementary information. To improve the feature learning capacity of the proposed model, we introduce a confidence contrastive learning (CCL) approach with a novel sampling strategy. Furthermore, we develop a momentum style transfer (MST) method to narrow the gap between labeled and unlabeled data distributions. In addition, we introduce a Cutout-style augmentation for unsupervised learning to enhance performance. Three medical image datasets (the left atrial (LA) dataset, the NIH pancreas dataset, and the BraTS 2019 dataset) were employed to rigorously evaluate HC-CCL. Quantitative results demonstrate significant performance advantages over existing approaches, achieving state-of-the-art performance across all metrics. The implementation will be released at https://github.com/xxmmss/HC-CCL.
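
The mean-teacher (MT) framework referenced above maintains the teacher as an exponential moving average (EMA) of the student. Below is a minimal sketch of that standard EMA update in PyTorch; the momentum value and the toy networks are assumptions, and the HC-CCL-specific branches (HCC, CCL, MST) are not shown.

```python
import torch

@torch.no_grad()
def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module,
                   momentum: float = 0.99) -> None:
    """Standard mean-teacher EMA update: teacher <- m*teacher + (1-m)*student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

# Minimal usage sketch with toy 3D networks.
student = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)
teacher = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)
teacher.load_state_dict(student.state_dict())   # initialise identically
# ... after each student optimisation step:
update_teacher(student, teacher)
```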

A novel artificial intelligence-based model for automated Lenke classification in adolescent idiopathic scoliosis.

Xie K, Zhu S, Lin J, Li Y, Huang J, Lei W, Yan Y

PubMed | Jul 11, 2025
To develop an artificial intelligence (AI)-driven model for automatic Lenke classification of adolescent idiopathic scoliosis (AIS) and to assess its performance. This retrospective study utilized 860 spinal radiographs (four views each) from 215 AIS patients, with 161 patients used for training and 54 for testing. Additionally, 1220 spinal radiographs from 610 patients with only anterior-posterior (AP) and lateral (LAT) views were collected for training. The model was designed to perform keypoint detection, pedicle segmentation, and AIS classification based on a custom classification strategy. Its performance was evaluated against the gold standard using metrics such as mean absolute difference (MAD), intraclass correlation coefficient (ICC), Bland-Altman plots, Cohen's kappa, and the confusion matrix. In comparison with the gold standard, the MAD for all predicted angles was 2.29°, with an excellent ICC, and Bland-Altman analysis revealed minimal differences between the methods. For Lenke classification, the model exhibited exceptional consistency in curve type, lumbar modifier, and thoracic sagittal profile, with average kappa values of 0.866, 0.845, and 0.827 and corresponding accuracy rates of 87.07%, 92.59%, and 92.59%, respectively. Subgroup analysis further confirmed the model's high consistency, with kappa values ranging from 0.635 to 0.930, 0.672 to 0.926, and 0.815 to 0.847, and accuracy rates of 90.7%-98.1%, 92.6%-98.3%, and 92.6%-98.1%, respectively. This novel AI system enables rapid and accurate automatic Lenke classification, offering potential assistance to spinal surgeons.
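
For reference, the agreement metrics reported above (MAD for the angles, Cohen's kappa and accuracy for the categorical Lenke components) could be computed as in the following sketch; the numbers are illustrative, not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Illustrative values only; not the study's measurements.
gold_angles = np.array([42.0, 55.5, 30.2, 61.8])
pred_angles = np.array([44.1, 53.9, 31.0, 60.2])
mad = np.mean(np.abs(pred_angles - gold_angles))    # mean absolute difference (degrees)

gold_curve_type = ["1", "1", "5", "3"]
pred_curve_type = ["1", "2", "5", "3"]
kappa = cohen_kappa_score(gold_curve_type, pred_curve_type)
cm = confusion_matrix(gold_curve_type, pred_curve_type, labels=["1", "2", "3", "5"])
accuracy = np.mean(np.array(gold_curve_type) == np.array(pred_curve_type))

print(f"MAD = {mad:.2f} deg, kappa = {kappa:.3f}, accuracy = {accuracy:.2%}")
```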

Effect of data-driven motion correction for respiratory movement on lesion detectability in PET-CT: a phantom study.

de Winter MA, Gevers R, Lavalaye J, Habraken JBA, Maspero M

PubMed | Jul 11, 2025
While data-driven motion correction (DDMC) techniques have proven to enhance the visibility of lesions affected by motion, their impact on overall detectability remains unclear. This study investigates whether DDMC improves lesion detectability in PET-CT using FDG-18F. A moving platform simulated respiratory motion in a NEMA-IEC body phantom with varying amplitudes (0, 7, 10, 20, 30 mm) and target-to-background ratios (2, 5, 10.5). Scans were reconstructed with and without DDMC, and the spherical targets' maximal and mean recovery coefficient (RC) and contrast-to-noise ratio (CNR) were measured. DDMC results in higher RC values in the target spheres. CNR values increase for small, high-motion affected targets but decrease for larger spheres with smaller amplitudes. A sub-analysis shows that DDMC increases the contrast of the sphere along with a 36% increase in background noise. While DDMC significantly enhances contrast (RC), its impact on detectability (CNR) is less profound due to increased background noise. CNR improves for small targets with high motion amplitude, potentially enhancing the detectability of low-uptake lesions. Given that the increased background noise may reduce detectability for targets unaffected by motion, we suggest that DDMC reconstructions are used best in addition to non-DDMC reconstructions.
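
As a rough guide to the metrics used, a recovery coefficient relates measured sphere uptake to the known true activity concentration, and CNR normalizes the sphere-to-background contrast by the background noise. The sketch below uses these standard definitions with illustrative numbers; the study's exact ROI definitions may differ.

```python
import numpy as np

def recovery_coefficient(roi_values, true_activity):
    """RC = measured activity in the sphere ROI / true activity concentration."""
    return {"RC_max": roi_values.max() / true_activity,
            "RC_mean": roi_values.mean() / true_activity}

def contrast_to_noise_ratio(roi_values, background_values):
    """CNR = (mean sphere uptake - mean background) / background standard deviation."""
    return (roi_values.mean() - background_values.mean()) / background_values.std()

# Illustrative numbers (kBq/mL), not the phantom measurements from the study.
sphere = np.array([9.1, 10.4, 11.8, 10.9])
background = np.random.normal(loc=2.0, scale=0.3, size=500)
print(recovery_coefficient(sphere, true_activity=10.5))
print(contrast_to_noise_ratio(sphere, background))
```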

Diffusion-weighted imaging in rectal cancer MRI from theory to practice.

Mayumi Takamune D, Miranda J, Mariussi M, Reif de Paula T, Mazaheri Y, Younus E, Jethwa KR, Knudsen CC, Bizinoto V, Cardoso D, de Arimateia Batista Araujo-Filho J, Sparapan Marques CF, Higa Nomura C, Horvat N

PubMed | Jul 11, 2025
Diffusion-weighted imaging (DWI) has become a cornerstone of high-resolution rectal MRI, providing critical functional information that complements T2-weighted imaging (T2WI) throughout the management of rectal cancer. From baseline staging to restaging after neoadjuvant therapy and longitudinal surveillance during nonoperative management or post-surgical follow-up, DWI improves tumor detection, characterizes treatment response, and facilitates early identification of tumor regrowth or recurrence. This review offers a comprehensive overview of DWI in rectal cancer, emphasizing its technical characteristics, optimal acquisition strategies, and integration with qualitative and quantitative interpretive frameworks. The manuscript also addresses interpretive pitfalls, highlights emerging techniques such as intravoxel incoherent motion (IVIM), diffusion kurtosis imaging (DKI), and small field-of-view DWI, and explores the growing role of radiomics and artificial intelligence in advancing precision imaging. DWI, when rigorously implemented and interpreted, enhances the accuracy, reproducibility, and clinical utility of rectal MRI.
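
Quantitative DWI interpretation commonly relies on the apparent diffusion coefficient (ADC). As a minimal illustration of the standard monoexponential model used in this setting, the sketch below computes a voxel-wise ADC map from two b-value images; the b-values and signal intensities are illustrative assumptions.

```python
import numpy as np

def adc_map(s_low, s_high, b_low=0.0, b_high=800.0, eps=1e-6):
    """Voxel-wise ADC (mm^2/s) from two b-value images, using the
    monoexponential model S(b) = S0 * exp(-b * ADC)."""
    s_low = np.clip(s_low.astype(float), eps, None)
    s_high = np.clip(s_high.astype(float), eps, None)
    return np.log(s_low / s_high) / (b_high - b_low)

# Illustrative 2x2 example: restricted diffusion keeps signal high at high b,
# which yields lower ADC values in those voxels.
s_b0 = np.array([[900.0, 850.0], [880.0, 860.0]])
s_b800 = np.array([[500.0, 120.0], [480.0, 110.0]])
print(adc_map(s_b0, s_b800))
```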

[MP-MRI in the evaluation of non-operative treatment response and in the detection of residual and recurrent tumor in head and neck cancer].

Gődény M

PubMed | Jul 11, 2025
As non-surgical therapies gain acceptance in head and neck tumors, the importance of imaging has increased. New therapeutic methods (in radiation therapy, targeted biological therapy, and immunotherapy) require better tumor characterization and prognostic information along with accurate anatomy. Magnetic resonance imaging (MRI) has become the gold standard in head and neck cancer evaluation, not only for staging but also for assessing tumor response, posttreatment status, and complications, as well as for finding residual or recurrent tumor. Multiparametric anatomical and functional MRI (MP-MRI) is a true cancer imaging biomarker: in addition to high-resolution tumor anatomy, it provides molecular and functional, qualitative and quantitative data using diffusion-weighted MRI (DW-MRI) and perfusion dynamic contrast-enhanced MRI (P-DCE-MRI), and it can improve the assessment of the biological target volume and help determine treatment response. DW-MRI provides information at the cellular level about cell density and the integrity of the plasma membrane, based on water movement. P-DCE-MRI provides useful hemodynamic information about tissue vascularity and vascular permeability. Recent studies have shown promising results using radiomics features; with the help of artificial intelligence, MP-MRI has opened new perspectives in oncologic imaging, allowing better realization of the latest technological advances.

RadientFusion-XR: A Hybrid LBP-HOG Model for COVID-19 Detection Using Machine Learning.

K V G, Gripsy JV

PubMed | Jul 11, 2025
The rapid and accurate detection of COVID-19 (coronavirus disease 2019) from normal and pneumonia chest x-ray images is essential for timely diagnosis and treatment. The overlapping features in radiology images make it challenging for radiologists to distinguish COVID-19 cases. This research study investigates the effectiveness of combining local binary pattern (LBP) and histogram of oriented gradients (HOG) features with machine learning algorithms to differentiate COVID-19 from normal and pneumonia cases using chest x-rays. The proposed hybrid fusion model, RadientFusion-XR, combines LBP and HOG features with shallow learning algorithms to detect COVID-19 cases from the normal and pneumonia classes; the fused features provide a comprehensive representation that enables more precise differentiation among the three classes. This methodology presents a promising and efficient tool for early COVID-19 and pneumonia diagnosis in clinical settings, with potential integration into automated diagnostic systems. The findings highlight the potential of this hybrid feature extraction and shallow learning approach to significantly improve diagnostic accuracy in chest x-ray analysis: with an ensemble classifier, the hybrid LBP-HOG model achieved an accuracy of 99% for the binary task (COVID-19 vs. normal) and 97% for the multi-class task (COVID-19, normal, pneumonia). These results demonstrate the efficacy of the hybrid approach in enhancing feature representation and achieving superior classification accuracy. The interpretable and explainable nature of RadientFusion-XR, alongside its effectiveness, makes it a valuable tool for clinical applications, fostering trust and enabling informed decision-making by healthcare professionals.
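
To illustrate the kind of LBP-HOG fusion pipeline described, the following sketch extracts both descriptors with scikit-image, concatenates them, and trains a shallow ensemble classifier with scikit-learn. The parameter choices, image size, and random data are illustrative assumptions, not the configuration reported for RadientFusion-XR.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def extract_features(image, lbp_points=8, lbp_radius=1):
    """Concatenate an LBP histogram with a HOG descriptor for one grayscale CXR."""
    lbp = local_binary_pattern(image, P=lbp_points, R=lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2),
                               density=True)
    hog_vec = hog(image, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])

# Illustrative random "images" and labels; a real pipeline would load resized
# chest x-rays with labels {COVID-19, normal, pneumonia}.
rng = np.random.default_rng(0)
images = [(rng.random((128, 128)) * 255).astype(np.uint8) for _ in range(20)]
X = np.stack([extract_features(img) for img in images])
y = rng.integers(0, 3, size=20)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```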