
Systematic review on the impact of deep learning-driven worklist triage on radiology workflow and clinical outcomes.

Momin E, Cook T, Gershon G, Barr J, De Cecco CN, van Assen M

PubMed · May 21, 2025
To perform a systematic review of the impact of deep learning (DL)-based triage on reducing diagnostic delays and improving patient outcomes, covering peer-reviewed and preprint publications. Multiple databases were searched for primary research studies on DL-based worklist optimization for diagnostic imaging triage published from January 2018 until July 2024. Extracted data included study design, dataset characteristics, workflow metrics including report turnaround time and time-to-treatment, and patient outcome differences. Differences between clinical settings and integration modalities were analyzed using nonparametric statistics. Risk of bias was assessed with the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) checklist. A total of 38 studies from 20 publications, involving 138,423 images, were analyzed. Workflow interventions concerned pulmonary embolism (n = 8), stroke (n = 3), intracranial hemorrhage (n = 12), and chest conditions (n = 15). Patients in the post-DL-triage group had shorter median report turnaround times, with mean reductions of 12.3 min (IQR: -25.7, -7.6) for pulmonary embolism, 20.5 min (IQR: -32.1, -9.3) for stroke, 4.3 min (IQR: -8.6, 1.3) for intracranial hemorrhage, and 29.7 min (IQR: -2947.7, -18.3) for chest diseases. Sub-group analysis revealed that reductions varied with clinical environment and relative prevalence, and were highest when algorithms actively stratified and reordered the radiological worklist, with report turnaround time reductions of 43.7% compared with 7.6% for widget-based systems (p < 0.01). DL-based triage systems showed comparable report turnaround time improvements, especially in outpatient and high-prevalence settings, suggesting that AI-based triage holds promise for alleviating radiology workloads. Question: Can DL-based triage address lengthening imaging report turnaround times and improve patient outcomes across distinct clinical environments? Findings: DL-based triage improved report turnaround time across disease groups, with larger reductions reported in high-prevalence or lower-acuity settings. Clinical relevance: DL-based workflow prioritization is a reliable tool for reducing diagnostic imaging delay for time-sensitive disease across clinical settings. However, further research and reliable metrics are needed to provide specific recommendations with regard to false-negative examinations and multi-condition prioritization.
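The sub-group comparison described above (actively reordered worklists versus widget-based notification) is at heart a two-sample nonparametric test; a minimal sketch of how such a comparison could be run is shown below. The per-study reduction values are invented placeholders, not figures from the review.

```python
# Minimal sketch of the kind of nonparametric sub-group comparison the
# review describes. The turnaround-time reductions below are invented
# placeholders, NOT data from the reviewed studies.
from scipy.stats import mannwhitneyu

# Per-study percentage reductions in report turnaround time (hypothetical).
active_reordering = [-52.1, -43.7, -38.9, -47.2, -40.5]
widget_based = [-9.8, -7.6, -5.1, -11.2, -6.4]

stat, p = mannwhitneyu(active_reordering, widget_based, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```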

SAMA-UNet: Enhancing Medical Image Segmentation with Self-Adaptive Mamba-Like Attention and Causal-Resonance Learning

Saqib Qamar, Mohd Fazil, Parvez Ahmad, Ghulam Muhammad

arXiv preprint · May 21, 2025
Medical image segmentation plays an important role in various clinical applications, but existing models often struggle with the computational inefficiencies and challenges posed by complex medical data. State Space Sequence Models (SSMs) have demonstrated promise in modeling long-range dependencies with linear computational complexity, yet their application to medical image segmentation remains hindered by incompatibilities with image tokens and autoregressive assumptions. Moreover, it is difficult to balance capturing local fine-grained information against global semantic dependencies. To address these challenges, we introduce SAMA-UNet, a novel architecture for medical image segmentation. A key innovation is the Self-Adaptive Mamba-like Aggregated Attention (SAMA) block, which integrates contextual self-attention with dynamic weight modulation to prioritise the most relevant features based on local and global contexts. This approach reduces computational complexity and improves the representation of complex image features across multiple scales. We also propose the Causal-Resonance Multi-Scale Module (CR-MSM), which enhances the flow of information between the encoder and decoder using causal-resonance learning. This mechanism allows the model to automatically adjust feature resolution and causal dependencies across scales, leading to better semantic alignment between low-level and high-level features in U-shaped architectures. Experiments on MRI, CT, and endoscopy images show that SAMA-UNet achieves higher segmentation accuracy than current CNN-, Transformer-, and Mamba-based methods. The implementation is publicly available on GitHub.
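The abstract does not give implementation details, so the sketch below is only one plausible reading of the SAMA idea: standard multi-head self-attention whose output is gated by an input-dependent modulation weight. All module names, shapes, and design choices are our assumptions, not the authors' code.

```python
# Illustrative sketch only: self-attention modulated by dynamic,
# input-dependent gating weights, one plausible reading of the SAMA block.
import torch
import torch.nn as nn

class SAMABlockSketch(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Dynamic weight modulation: a per-token gate computed from the input.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -- flattened image patch tokens.
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h)
        # Gate the attended features so the block can emphasise locally or
        # globally relevant tokens, then add a residual connection.
        return x + self.gate(h) * attn_out

x = torch.randn(2, 196, 256)          # e.g. 14x14 patch tokens
print(SAMABlockSketch(256)(x).shape)  # torch.Size([2, 196, 256])
```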

X-GRM: Large Gaussian Reconstruction Model for Sparse-view X-rays to Computed Tomography

Yifan Liu, Wuyang Li, Weihao Yu, Chenxin Li, Alexandre Alahi, Max Meng, Yixuan Yuan

arXiv preprint · May 21, 2025
Computed Tomography serves as an indispensable tool in clinical workflows, providing non-invasive visualization of internal anatomical structures. Existing CT reconstruction works are limited by small-capacity model architectures, inflexible volume representations, and small-scale training data. In this paper, we present X-GRM (X-ray Gaussian Reconstruction Model), a large feedforward model for reconstructing 3D CT from sparse-view 2D X-ray projections. X-GRM employs a scalable transformer-based architecture to encode an arbitrary number of sparse X-ray inputs, where tokens from different views are integrated efficiently. The tokens are then decoded into a new volume representation, named Voxel-based Gaussian Splatting (VoxGS), which enables efficient CT volume extraction and differentiable X-ray rendering. To support the training of X-GRM, we collect ReconX-15K, a large-scale CT reconstruction dataset containing around 15,000 CT/X-ray pairs across diverse anatomy, including the chest, abdomen, pelvis, and teeth. This combination of a high-capacity model, flexible volume representation, and large-scale training data empowers our model to produce high-quality reconstructions from various testing inputs, including in-domain and out-of-domain X-ray projections. Project Page: https://github.com/CUHK-AIM-Group/X-GRM.
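To make the "arbitrary number of views" encoding concrete, here is a minimal sketch of joint transformer encoding over a variable number of per-view token sets. Dimensions, the view embedding, and the encoder configuration are illustrative assumptions, not the X-GRM implementation.

```python
# Illustrative sketch of multi-view token integration: tokens from a
# variable number of X-ray views are tagged with a view embedding,
# concatenated, and processed jointly by a transformer encoder.
import torch
import torch.nn as nn

dim, views, tokens_per_view = 384, 3, 256  # hypothetical sizes
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=4,
)
view_embed = nn.Parameter(torch.randn(views, 1, dim))  # distinguishes views

x = torch.randn(1, views, tokens_per_view, dim)  # per-view patch tokens
x = (x + view_embed.unsqueeze(0)).flatten(1, 2)  # (1, views*tokens, dim)
fused = encoder(x)                               # cross-view integration
print(fused.shape)                               # torch.Size([1, 768, 384])
```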

MedBLIP: Fine-tuning BLIP for Medical Image Captioning

Manshi Limbu, Diwita Banerjee

arXiv preprint · May 20, 2025
Medical image captioning is a challenging task that requires generating clinically accurate and semantically meaningful descriptions of radiology images. While recent vision-language models (VLMs) such as BLIP, BLIP2, Gemini and ViT-GPT2 show strong performance on natural image datasets, they often produce generic or imprecise captions when applied to specialized medical domains. In this project, we explore the effectiveness of fine-tuning the BLIP model on the ROCO dataset for improved radiology captioning. We compare the fine-tuned BLIP against its zero-shot version, BLIP-2 base, BLIP-2 Instruct and a ViT-GPT2 transformer baseline. Our results demonstrate that domain-specific fine-tuning on BLIP significantly improves performance across both quantitative and qualitative evaluation metrics. We also visualize decoder cross-attention maps to assess interpretability and conduct an ablation study to evaluate the contributions of encoder-only and decoder-only fine-tuning. Our findings highlight the importance of targeted adaptation for medical applications and suggest that decoder-only fine-tuning (encoder-frozen) offers a strong performance baseline with 5% lower training time than full fine-tuning, while full model fine-tuning still yields the best results overall.
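The decoder-only (encoder-frozen) configuration the authors highlight can be set up with Hugging Face's BLIP implementation by freezing the vision encoder before training. A minimal sketch follows; the checkpoint name and training wiring are placeholders, not the authors' exact setup.

```python
# Minimal sketch of decoder-only (encoder-frozen) BLIP fine-tuning, the
# configuration the abstract reports as a strong, cheaper baseline.
from transformers import BlipProcessor, BlipForConditionalGeneration

# Processor prepares (image, caption) pairs for training; checkpoint is
# illustrative.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

# Freeze the vision encoder; only the text decoder receives gradients.
for param in model.vision_model.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable / total:.1%} of {total:,}")
```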

Artificial Intelligence and Musculoskeletal Surgical Applications.

Oettl FC, Zsidai B, Oeding JF, Samuelsson K

PubMed · May 20, 2025
Artificial intelligence (AI) has emerged as a transformative force in orthopedic surgery. Spanning pre-, intra-, and postoperative care, it can process complex medical imaging, provide real-time surgical guidance, and analyze large datasets for outcome prediction and optimization. AI has shown improvements in surgical precision, efficiency, and patient outcomes across orthopedic subspecialties, and large language models and agentic AI systems are expanding AI utility beyond surgical applications into areas such as clinical documentation, patient education, and autonomous decision support. The successful implementation of AI in orthopedic surgery requires careful attention to validation, regulatory compliance, and healthcare system integration. As these technologies continue to advance, maintaining the balance between innovation and patient safety remains crucial, with the ultimate goal of achieving more personalized, efficient, and equitable healthcare delivery while preserving the essential role of human clinical judgment. This review examines the current landscape and future trajectory of AI applications in orthopedic surgery, highlighting both technological advances and their clinical impact. Studies have suggested that AI-assisted procedures achieve higher accuracy and better functional outcomes than conventional methods, while reducing operative times and complications. However, these technologies are designed to augment rather than replace clinical expertise, serving as sophisticated tools to enhance surgeons' capabilities and improve patient care.

Intelligent health model for medical imaging to guide laymen using neural cellular automata.

Sharma SK, Chowdhary CL, Sharma VS, Rasool A, Khan AA

PubMed · May 20, 2025
A layman in health systems is a person who lacks knowledge of health data, e.g., X-ray, MRI, and CT scan images and health examination reports. The motivation behind the proposed work is to make medical images understandable to laymen. The health model is trained using a neural network approach that analyses user health examination data, predicts the type and severity of disease, and advises precautions to the user. Cellular Automata (CA) technology is integrated with the neural networks to segment the medical image. The CA analyzes the medical image pixel by pixel and generates a robust threshold value, which helps to efficiently segment the image and accurately identify abnormal spots. The proposed method was trained and evaluated on more than 10,000 medical images drawn from various open datasets. Text-analysis measures, i.e., BLEU, ROUGE, and WER, were used to validate the generated reports. BLEU and ROUGE compute similarity scores that indicate how close a generated text report is to the original report. The BLEU and ROUGE scores for the evaluated images were approximately 0.62 and 0.90, respectively, indicating that the generated reports are very close to the originals. A WER score of 0.14 indicates that the generated reports contain mostly relevant words. In summary, the proposed approach provides laymen with a useful medical report, including an accurate diagnosis and recommended precautions.
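The abstract does not publish the CA update rule, so the sketch below only illustrates the general idea of a cellular-automaton-style, pixel-by-pixel threshold: each cell's threshold interacts with its 3x3 neighbourhood and relaxes toward the local image intensity. All parameter choices are assumptions.

```python
# Illustrative sketch only: a simple CA-style local threshold. Each cell
# repeatedly averages its neighbourhood's thresholds and is pulled toward
# the local image intensity, yielding a per-pixel segmentation threshold.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_threshold(image: np.ndarray, iters: int = 10, alpha: float = 0.3):
    image = image.astype(float)
    # Start every cell's threshold at the global mean intensity.
    thresh = np.full_like(image, image.mean())
    for _ in range(iters):
        # CA step: smooth each cell's threshold over its 3x3 neighbourhood,
        # then blend with the underlying pixel intensity.
        local = uniform_filter(thresh, size=3)
        thresh = (1 - alpha) * local + alpha * image
    return image > thresh  # binary mask of candidate abnormal spots

mask = ca_threshold(np.random.rand(128, 128))
print(f"{mask.mean():.2%} of pixels flagged")
```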

AI-powered integration of multimodal imaging in precision medicine for neuropsychiatric disorders.

Huang W, Shu N

PubMed · May 20, 2025
Neuropsychiatric disorders have complex pathological mechanisms, pronounced clinical heterogeneity, and a prolonged preclinical phase, which present challenges for early diagnosis and the development of precise intervention strategies. With the development of large-scale multimodal neuroimaging datasets and the advancement of artificial intelligence (AI) algorithms, the integration of multimodal imaging with AI techniques has emerged as a pivotal avenue for early detection and for tailoring individualized treatment of neuropsychiatric disorders. To support these advances, in this review we outline multimodal neuroimaging techniques, AI methods, and strategies for multimodal data fusion. We highlight applications of multimodal, neuroimaging-based AI in precision medicine for neuropsychiatric disorders, discussing challenges in clinical adoption, emerging solutions, and future directions.
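As a concrete instance of one fusion strategy such reviews cover, here is a minimal sketch of feature-level fusion: modality-specific embeddings are concatenated before a shared classifier head. Dimensions and modality names are illustrative assumptions.

```python
# Minimal sketch of feature-level multimodal fusion: concatenate
# per-modality embeddings, then classify. All sizes are hypothetical.
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    def __init__(self, dims=(128, 64), n_classes=2):
        super().__init__()
        self.head = nn.Linear(sum(dims), n_classes)

    def forward(self, feats):
        # feats: tuple of per-modality embeddings, e.g. (sMRI, fMRI).
        return self.head(torch.cat(feats, dim=-1))

smri = torch.randn(4, 128)  # hypothetical structural-MRI features
fmri = torch.randn(4, 64)   # hypothetical functional-MRI features
print(FeatureFusionClassifier()((smri, fmri)).shape)  # torch.Size([4, 2])
```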

Enhancing pathological myopia diagnosis: a bimodal artificial intelligence approach integrating fundus and optical coherence tomography imaging for precise atrophy, traction and neovascularisation grading.

Xu Z, Yang Y, Chen H, Han R, Han X, Zhao J, Yu W, Yang Z, Chen Y

PubMed · May 20, 2025
Pathological myopia (PM) has emerged as a leading cause of visual impairment worldwide; early detection and precise grading of PM are crucial for timely intervention. The atrophy, traction and neovascularisation (ATN) system is used to define PM progression and stages with precision. This study focuses on constructing a comprehensive PM image dataset comprising both fundus and optical coherence tomography (OCT) images and on developing a bimodal artificial intelligence (AI) classification model for ATN grading in PM. This single-centre retrospective cross-sectional study collected 2760 colour fundus photographs and matching OCT images of PM from January 2019 to November 2022 at Peking Union Medical College Hospital. Ophthalmology specialists labelled and inspected all paired images using the ATN grading system. The AI model used a ResNet-50 backbone and a multimodal multi-instance learning module to enhance interaction across instances from both modalities. Performance was compared among single-modality fundus, single-modality OCT, and bimodal AI models for ATN grading in PM. The bimodal model, dual-deep learning (dual-DL), demonstrated superior accuracy in both detailed multiclassification and biclassification of PM, which aligns well with our observations from instance attention-weight activation maps. The area under the curve for severe PM using dual-DL was 0.9635 (95% CI 0.9380 to 0.9890), compared with 0.9359 (95% CI 0.9027 to 0.9691) for the OCT-only model and 0.9268 (95% CI 0.8915 to 0.9621) for the fundus-only model. Our novel bimodal AI multiclassification model for PM ATN staging proves accurate and beneficial for public health screening and prompt referral of PM patients.
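In the spirit of the multimodal multi-instance module described above, here is a minimal sketch of attention-weighted multi-instance pooling over instances from two modalities. The backbone features, instance counts, and pooling rule are our assumptions, not the paper's code; the attention weights play the role of the instance attention-weight maps mentioned in the abstract.

```python
# Illustrative sketch: attention-based multi-instance pooling over
# fundus and OCT instance features (classic gated-attention MIL idea).
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, dim: int = 512, n_classes: int = 4):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, instances: torch.Tensor):
        # instances: (batch, n_instances, dim) -- pooled backbone features
        # from fundus and OCT instances stacked along dim 1.
        attn = torch.softmax(self.score(instances), dim=1)  # instance weights
        bag = (attn * instances).sum(dim=1)                 # weighted bag feature
        return self.classifier(bag), attn  # attn shows which instances drove the grade

fundus = torch.randn(2, 1, 512)  # one fundus instance per eye (hypothetical)
oct_ = torch.randn(2, 6, 512)    # six OCT B-scan instances (hypothetical)
logits, attn = AttentionMILPool()(torch.cat([fundus, oct_], dim=1))
print(logits.shape, attn.shape)  # torch.Size([2, 4]) torch.Size([2, 7, 1])
```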

Feasibility study of a general model for synthetic CT generation in MRI-guided extracranial radiotherapy.

Hsu SH, Han Z, Hu YH, Ferguson D, van Dams R, Mak RH, Leeman JE, Sudhyadhom A

PubMed · May 19, 2025
This study investigates the feasibility of a single general model to synthesize CT images across body sites (thorax, abdomen, and pelvis) to support treatment planning for MRI-only radiotherapy. A total of 157 patients who received MRI-guided radiation therapy of the thorax, abdomen, or pelvis on a 0.35T MRIdian Linac were included. A subset of 122 cases was used for model training and the remaining 35 cases for validation. All patient datasets included a semi-paired CT-simulation image and a 0.35T MR image acquired using TrueFISP. A conditional generative adversarial network with a multi-planar method was used to generate synthetic CT images from the 0.35T MR images. The effect of preprocessing (with and without bias field correction) on synthetic CT quality was evaluated and found to be insignificant. The general models trained on all cases performed comparably to the site-specific models trained on individual body sites. For all models, peak signal-to-noise ratios ranged from 31.7 to 34.9 and structural similarity index measures ranged from 0.9547 to 0.9758. For the datasets with bias field correction, the mean absolute errors in HU (general model versus site-specific model) were 49.7 ± 9.4 versus 49.5 ± 8.9, 48.7 ± 7.6 versus 43.0 ± 7.8, and 32.8 ± 5.5 versus 31.8 ± 5.3 for the thorax, abdomen, and pelvis, respectively. When comparing plans between synthetic CTs and ground-truth CTs, the dosimetric difference was on average less than 0.5% (0.2 Gy) for target coverage and less than 2.1% (0.4 Gy) for organ-at-risk metrics across all body sites with either the general or the site-specific models. Synthetic CT plans showed good agreement, with mean gamma pass rates of >94% and >99% for 1%/1 mm and 2%/2 mm, respectively. This study demonstrates the feasibility of using a general model for multiple body sites and the potential of synthetic CT to support an MRI-guided radiotherapy workflow.
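The image-quality metrics reported above (MAE in HU, PSNR, SSIM) are straightforward to compute for a registered synthetic-CT / ground-truth pair; a minimal sketch follows. The array contents are random placeholders, and real use would load registered CT volumes.

```python
# Minimal sketch of the reported image-quality metrics for a
# synthetic-CT vs. ground-truth-CT pair. Volumes here are random
# placeholders, not patient data.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ct_true = rng.normal(0, 300, size=(64, 64, 64))  # hypothetical HU volume
ct_synth = ct_true + rng.normal(0, 40, size=ct_true.shape)

mae_hu = np.abs(ct_synth - ct_true).mean()
data_range = ct_true.max() - ct_true.min()
psnr = peak_signal_noise_ratio(ct_true, ct_synth, data_range=data_range)
ssim = structural_similarity(ct_true, ct_synth, data_range=data_range)
print(f"MAE = {mae_hu:.1f} HU, PSNR = {psnr:.1f} dB, SSIM = {ssim:.4f}")
```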

New approaches to lesion assessment in multiple sclerosis.

Preziosa P, Filippi M, Rocca MA

PubMed · May 19, 2025
To summarize recent advances in artificial intelligence-driven lesion segmentation and novel neuroimaging modalities that enhance the identification and characterization of multiple sclerosis (MS) lesions, emphasizing their implications for clinical use and research. Artificial intelligence, particularly deep learning, is revolutionizing MS lesion assessment and segmentation, improving accuracy, reproducibility, and efficiency. Artificial intelligence-based tools now enable automated detection not only of T2-hyperintense white matter lesions but also of specific lesion subtypes, including gadolinium-enhancing, central vein sign-positive, paramagnetic rim, cortical, and spinal cord lesions, which hold diagnostic and prognostic value. Novel neuroimaging techniques such as quantitative susceptibility mapping (QSM), χ-separation imaging, and soma and neurite density imaging (SANDI), together with PET, are providing deeper insights into lesion pathology, better disentangling lesion heterogeneity and clinical relevance. Artificial intelligence-powered lesion segmentation tools hold great potential for fast, accurate, and reproducible lesion assessment in the clinical setting, thus improving MS diagnosis, monitoring, and treatment response assessment. Emerging neuroimaging modalities may advance the understanding of MS pathophysiology, provide more specific markers of disease progression, and reveal novel potential therapeutic targets.