
AI-powered integration of multimodal imaging in precision medicine for neuropsychiatric disorders.

Huang W, Shu N

PubMed · May 20, 2025
Neuropsychiatric disorders have complex pathological mechanisms, pronounced clinical heterogeneity, and a prolonged preclinical phase, which together pose challenges for early diagnosis and the development of precise intervention strategies. With the emergence of large-scale multimodal neuroimaging datasets and advances in artificial intelligence (AI) algorithms, the integration of multimodal imaging with AI techniques has become a pivotal avenue for early detection and individualized treatment of neuropsychiatric disorders. To support these advances, this review outlines multimodal neuroimaging techniques, AI methods, and strategies for multimodal data fusion. We highlight applications of neuroimaging-based multimodal AI in precision medicine for neuropsychiatric disorders, and discuss challenges to clinical adoption, emerging solutions, and future directions.

Enhancing pathological myopia diagnosis: a bimodal artificial intelligence approach integrating fundus and optical coherence tomography imaging for precise atrophy, traction and neovascularisation grading.

Xu Z, Yang Y, Chen H, Han R, Han X, Zhao J, Yu W, Yang Z, Chen Y

PubMed · May 20, 2025
Pathological myopia (PM) has emerged as a leading cause of visual impairment worldwide; early detection and precise grading of PM are therefore crucial for timely intervention. The atrophy, traction and neovascularisation (ATN) system is used to define PM progression and stage it with precision. This study constructs a comprehensive PM image dataset comprising paired fundus and optical coherence tomography (OCT) images and develops a bimodal artificial intelligence (AI) classification model for ATN grading of PM. This single-centre retrospective cross-sectional study collected 2760 colour fundus photographs and matching OCT images of PM from January 2019 to November 2022 at Peking Union Medical College Hospital. Ophthalmology specialists labelled and inspected all paired images using the ATN grading system. The AI model used a ResNet-50 backbone and a multimodal multi-instance learning module to enhance interaction across instances from both modalities. Performance was compared among single-modality fundus, single-modality OCT, and bimodal AI models for ATN grading of PM. The bimodal model, dual-deep learning (dual-DL), demonstrated superior accuracy in both detailed multiclassification and biclassification of PM, which aligns well with our observations from the instance attention-weight activation maps. The area under the curve for severe PM using dual-DL was 0.9635 (95% CI 0.9380 to 0.9890), compared with 0.9359 (95% CI 0.9027 to 0.9691) for the OCT-only model and 0.9268 (95% CI 0.8915 to 0.9621) for the fundus-only model. Our novel bimodal AI multiclassification model for PM ATN staging proves accurate and should benefit public health screening and prompt referral of patients with PM.
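
The pairing of a ResNet-50 backbone with a multimodal multi-instance learning module lends itself to a short sketch. Below is a minimal, hypothetical PyTorch version: two ResNet-50 encoders embed the fundus photograph and the OCT B-scans as instances, and a simple attention layer weighs all instances before a shared ATN-grade head. All names and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical bimodal fundus+OCT classifier in the spirit of the abstract;
# generic MIL attention, not the paper's exact module.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class BimodalMIL(nn.Module):
    def __init__(self, n_classes: int, embed_dim: int = 512):
        super().__init__()
        def backbone():
            m = resnet50(weights=None)
            m.fc = nn.Linear(m.fc.in_features, embed_dim)  # 2048 -> embed_dim
            return m
        self.fundus_enc = backbone()
        self.oct_enc = backbone()
        # One attention score per instance across both modalities.
        self.attn = nn.Sequential(nn.Linear(embed_dim, 128), nn.Tanh(),
                                  nn.Linear(128, 1))
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, fundus, oct_slices):
        # fundus: (B, 3, H, W); oct_slices: (B, S, 3, H, W) -- S OCT B-scans
        B, S = oct_slices.shape[:2]
        f = self.fundus_enc(fundus).unsqueeze(1)                   # (B, 1, D)
        o = self.oct_enc(oct_slices.flatten(0, 1)).view(B, S, -1)  # (B, S, D)
        inst = torch.cat([f, o], dim=1)                            # (B, 1+S, D)
        w = torch.softmax(self.attn(inst), dim=1)                  # (B, 1+S, 1)
        bag = (w * inst).sum(dim=1)                                # (B, D)
        return self.head(bag), w  # logits and instance attention weights

logits, attn = BimodalMIL(n_classes=4)(torch.randn(2, 3, 224, 224),
                                       torch.randn(2, 6, 3, 224, 224))
```

Returning the instance attention weights is what would support attention-weight activation maps like those the study inspects.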

Feasibility study of a general model for synthetic CT generation in MRI-guided extracranial radiotherapy.

Hsu SH, Han Z, Hu YH, Ferguson D, van Dams R, Mak RH, Leeman JE, Sudhyadhom A

PubMed · May 19, 2025
This study investigates the feasibility of a single general model for synthesizing CT images across body sites (thorax, abdomen, and pelvis) to support treatment planning for MRI-only radiotherapy. A total of 157 patients who received MRI-guided radiation therapy to the thorax, abdomen, or pelvis on a 0.35T MRIdian Linac were included. A subset of 122 cases was used for model training and the remaining 35 cases for validation. All patient datasets had a semi-paired CT-simulation image and a 0.35T MR image acquired with TrueFISP. A conditional generative adversarial network with a multi-planar method was used to generate synthetic CT images from the 0.35T MR images. The effect of preprocessing (with and without bias field correction) on synthetic CT quality was evaluated and found to be insignificant. The general models trained on all cases performed comparably to site-specific models trained on individual body sites. Across all models, peak signal-to-noise ratios ranged from 31.7 to 34.9 and structural similarity index measures ranged from 0.9547 to 0.9758. For the bias-field-corrected datasets, the mean absolute errors in HU (general model versus site-specific model) were 49.7 ± 9.4 versus 49.5 ± 8.9 for the thorax, 48.7 ± 7.6 versus 43 ± 7.8 for the abdomen, and 32.8 ± 5.5 versus 31.8 ± 5.3 for the pelvis. When comparing plans between synthetic CTs and ground-truth CTs, the dosimetric difference averaged less than 0.5% (0.2 Gy) for target coverage and less than 2.1% (0.4 Gy) for organ-at-risk metrics across all body sites with either the general or the site-specific models. Synthetic CT plans showed good agreement with ground truth, with mean gamma pass rates of >94% and >99% at 1%/1 mm and 2%/2 mm, respectively. This study demonstrates the feasibility of a single general model across multiple body sites and the potential of synthetic CT to support an MRI-guided radiotherapy workflow.
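
For readers wanting to reproduce the reported image-quality comparison, here is a minimal sketch of the three metrics the study uses (MAE in HU, PSNR, SSIM) on co-registered synthetic and ground-truth CT volumes. The HU clipping window and the body mask are illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch of synthetic-CT quality metrics; assumes co-registered
# numpy volumes in HU and a boolean body mask of the same shape.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sct(sct_hu: np.ndarray, ct_hu: np.ndarray, body_mask: np.ndarray):
    lo, hi = -1000.0, 2000.0  # assumed HU analysis window
    sct = np.clip(sct_hu, lo, hi)
    ct = np.clip(ct_hu, lo, hi)
    mae = np.abs(sct - ct)[body_mask].mean()  # mean absolute error, in HU
    psnr = peak_signal_noise_ratio(ct, sct, data_range=hi - lo)
    ssim = structural_similarity(ct, sct, data_range=hi - lo)
    return {"MAE_HU": float(mae), "PSNR": float(psnr), "SSIM": float(ssim)}
```

Gamma analysis of the resulting dose distributions would come from a treatment-planning system or a dedicated package, so it is omitted here.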

New approaches to lesion assessment in multiple sclerosis.

Preziosa P, Filippi M, Rocca MA

PubMed · May 19, 2025
To summarize recent advances in artificial intelligence-driven lesion segmentation and novel neuroimaging modalities that enhance the identification and characterization of multiple sclerosis (MS) lesions, emphasizing their implications for clinical use and research. Artificial intelligence, particularly deep learning, is revolutionizing MS lesion assessment and segmentation, improving accuracy, reproducibility, and efficiency. AI-based tools now enable automated detection not only of T2-hyperintense white matter lesions but also of specific lesion subtypes, including gadolinium-enhancing, central vein sign-positive, paramagnetic rim, cortical, and spinal cord lesions, which hold diagnostic and prognostic value. Novel neuroimaging techniques such as quantitative susceptibility mapping (QSM), χ-separation imaging, and soma and neurite density imaging (SANDI), together with PET, are providing deeper insights into lesion pathology, better disentangling lesion heterogeneity and clinical relevance. AI-powered lesion segmentation tools hold great potential for fast, accurate, and reproducible lesion assessment in the clinical setting, thereby improving MS diagnosis, monitoring, and treatment-response assessment. Emerging neuroimaging modalities may advance the understanding of MS pathophysiology, provide more specific markers of disease progression, and reveal novel potential therapeutic targets.
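
As a concrete touchstone for the accuracy and reproducibility claims, the sketch below shows two metrics commonly used to score automated MS lesion masks against manual references: the Dice similarity coefficient and a lesion-wise true-positive rate. This is a generic illustration, not code from any tool cited in the review.

```python
# Generic scoring of an automated lesion mask against a manual reference.
import numpy as np
from scipy import ndimage

def dice(auto: np.ndarray, manual: np.ndarray) -> float:
    auto, manual = auto.astype(bool), manual.astype(bool)
    inter = np.logical_and(auto, manual).sum()
    return 2.0 * inter / max(auto.sum() + manual.sum(), 1)

def lesion_tpr(auto: np.ndarray, manual: np.ndarray) -> float:
    # Fraction of manually delineated lesions (connected components)
    # touched by the automated mask.
    labels, n = ndimage.label(manual.astype(bool))
    if n == 0:
        return float("nan")
    hits = sum(auto[labels == i].any() for i in range(1, n + 1))
    return hits / n
```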

An overview of artificial intelligence and machine learning in shoulder surgery.

Cho SH, Kim YS

PubMed · May 19, 2025
Machine learning (ML), a subset of artificial intelligence (AI), utilizes advanced algorithms to learn patterns from data, enabling accurate predictions and decision-making without explicit programming. In orthopedic surgery, ML is transforming clinical practice, particularly in shoulder arthroplasty and the management of rotator cuff tears (RCTs). This review explores the fundamental paradigms of ML, including supervised, unsupervised, and reinforcement learning, alongside key algorithms such as XGBoost, neural networks, and generative adversarial networks. In shoulder arthroplasty, ML accurately predicts postoperative outcomes, complications, and implant selection, facilitating personalized surgical planning and cost optimization. Predictive models, including ensemble learning methods, achieve over 90% accuracy in forecasting complications, while neural networks enhance surgical precision through AI-assisted navigation. In the treatment of RCTs, ML enhances diagnostic accuracy using deep learning models on magnetic resonance imaging and ultrasound, achieving area under the curve values exceeding 0.90. ML models also predict tear reparability with 85% accuracy, as well as postoperative functional outcomes, including range of motion and patient-reported outcomes. Despite remarkable advances, challenges such as data variability, model interpretability, and integration into clinical workflows persist. Future directions involve federated learning for robust model generalization and explainable AI for greater transparency. ML continues to revolutionize orthopedic care by providing data-driven, personalized treatment strategies and optimizing surgical outcomes.
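
The complication-prediction models described here are typically gradient-boosted trees on tabular clinical features. The following hedged sketch shows the general pattern with XGBoost and an AUC score; the data and feature semantics are synthetic placeholders, not from any cited study.

```python
# Illustrative binary complication prediction with XGBoost on synthetic data.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # placeholder features: age, BMI, tear size, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"complication-prediction AUC: {auc:.3f}")
```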

Current trends and emerging themes in utilizing artificial intelligence to enhance anatomical diagnostic accuracy and efficiency in radiotherapy.

Pezzino S, Luca T, Castorina M, Puleo S, Castorina S

PubMed · May 19, 2025
Artificial intelligence (AI) incorporation into healthcare has proven revolutionary, especially in radiotherapy, where accuracy is critical. The purpose of this study is to present patterns and developing topics in the application of AI to improve the precision of anatomical diagnosis, organ delineation, and therapeutic effectiveness in radiotherapy and radiological imaging. We performed a bibliometric analysis of scholarly articles in these fields published since 2014, examining research output from key contributing nations and institutions, analysing notable research subjects, and investigating trends in scientific terminology pertaining to AI in radiology and radiotherapy. Furthermore, we examined AI-based software solutions in these domains, with a specific emphasis on extracting anatomical features and recognizing organs for treatment planning. Our investigation found a significant surge in papers on AI in these fields since 2014. The United States and China emerged as the leading research-producing nations, with institutions such as Emory University and Memorial Sloan Kettering Cancer Center making substantial contributions. Key study areas encompassed adaptive radiotherapy informed by anatomical alterations, MR-Linac for enhanced visualization of soft tissues, and multi-organ segmentation for accurate radiotherapy planning. An evident increase in the frequency of phrases such as 'radiomics,' 'radiotherapy segmentation,' and 'dosiomics' was noted. The evaluation of AI-based software revealed a wide range of uses across subdisciplines of radiotherapy and radiology, particularly in improving the identification of anatomical features for treatment planning and identifying organs at risk. The incorporation of AI into anatomical diagnosis in radiological imaging and radiotherapy is progressing rapidly, with substantial capacity to transform diagnostic precision and the effectiveness of treatment planning.
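
The terminology-trend part of such a bibliometric analysis reduces to counting term occurrences per publication year. Here is a minimal sketch under that assumption, using the paper's example terms and placeholder records:

```python
# Count how often each tracked term appears in abstracts, grouped by year.
from collections import Counter

TERMS = ("radiomics", "radiotherapy segmentation", "dosiomics")

def term_trends(records):  # records: iterable of (year, abstract_text)
    counts = {t: Counter() for t in TERMS}
    for year, text in records:
        low = text.lower()
        for t in TERMS:
            if t in low:
                counts[t][year] += 1
    return counts

trends = term_trends([(2015, "A radiomics model for ..."),
                      (2021, "Dosiomics and radiomics features ...")])
```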

Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields in Efficient CNNs for Fair Medical Image Classification

Xiao Wu, Xiaoqing Zhang, Zunjie Xiao, Lingxi Hu, Risa Higashita, Jiang Liu

arXiv preprint · May 19, 2025
Efficient convolutional neural network (CNN) architecture designs have attracted growing research interest. However, they usually apply a single receptive field (RF), small asymmetric RFs, or pyramid RFs to learn different feature representations, and still encounter two significant challenges in medical image classification tasks: 1) they have limitations in capturing diverse lesion characteristics efficiently (e.g., tiny, coordinated, small, and salient), which play unique roles in results, especially in imbalanced medical image classification; 2) the predictions generated by such CNNs are often unfair/biased, posing a high risk when they are deployed in real-world medical diagnosis settings. To tackle these issues, we develop a new concept, Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields (ERoHPRF), to simultaneously boost medical image classification performance and fairness. This concept mimics the multi-expert consultation mode by applying well-designed heterogeneous pyramid RF bags to capture different lesion characteristics effectively via convolution operations with multiple heterogeneous kernel sizes. Additionally, ERoHPRF introduces an expert-like structural reparameterization technique that merges its parameters with a two-stage strategy, ensuring computation cost and inference speed competitive with a single RF. To demonstrate the effectiveness and generalization ability of ERoHPRF, we incorporate it into mainstream efficient CNN architectures. Extensive experiments show that our method maintains a better trade-off than state-of-the-art methods in terms of medical image classification, fairness, and computation overhead. The code for this paper will be released soon.
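
The reparameterization step the abstract names is, in its generic form, the RepVGG-style trick of merging parallel convolutions with different kernel sizes into a single kernel at inference time. The sketch below shows that generic version (zero-pad the smaller kernels to the largest size and sum weights and biases); it is an assumption-laden illustration, not the authors' exact ERoHPRF module.

```python
# Generic structural reparameterization of heterogeneous-kernel branches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroRFBranch(nn.Module):
    def __init__(self, ch: int, ksizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, k, padding=k // 2) for k in ksizes])

    def forward(self, x):  # training time: sum of heterogeneous-RF branches
        return sum(b(x) for b in self.branches)

    def reparameterize(self) -> nn.Conv2d:
        kmax = max(b.kernel_size[0] for b in self.branches)
        merged = nn.Conv2d(self.branches[0].in_channels,
                           self.branches[0].out_channels, kmax,
                           padding=kmax // 2)
        w = torch.zeros_like(merged.weight)
        b = torch.zeros_like(merged.bias)
        for br in self.branches:
            pad = (kmax - br.kernel_size[0]) // 2
            w += F.pad(br.weight, [pad] * 4)  # zero-pad small kernels to kmax
            b += br.bias
        merged.weight.data, merged.bias.data = w, b
        return merged

m = HeteroRFBranch(8).eval()
x = torch.randn(1, 8, 32, 32)
assert torch.allclose(m(x), m.reparameterize()(x), atol=1e-5)
```

The final assertion checks the equivalence that makes the trick worthwhile: the merged single convolution reproduces the multi-branch output while paying the cost of one RF.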

Advances in pancreatic cancer diagnosis: from DNA methylation to AI-Assisted imaging.

Sharma R, Komal K, Kumar S, Ghosh R, Pandey P, Gupta GD, Kumar M

PubMed · May 19, 2025
Pancreatic cancer (PC) is a highly aggressive tumor that is mainly diagnosed at later stages. Imaging technologies such as CT, MRI, and EUS have limitations for early PC diagnosis. This review therefore explores innovative biomarkers for PC detection, such as DNA methylation, non-coding RNAs, and proteomic biomarkers, and the role of AI in detecting PC at early stages. Innovative biomarkers, such as DNA methylation of specific genes, show higher specificity and sensitivity in PC diagnosis. Various non-coding RNAs, such as long non-coding RNAs (lncRNAs) and microRNAs, show high diagnostic accuracy and serve as diagnostic and prognostic biomarkers, and proteomic biomarkers retain high diagnostic accuracy across different body fluids. Beyond biomarkers, AI has been shown to surpass radiologists' diagnostic performance in PC detection. The combination of AI and advanced biomarkers could revolutionize early PC detection, although large-scale, prospective studies are needed to validate its clinical utility. Furthermore, standardization of biomarker panels and AI algorithms is a vital step toward their reliable application in early PC detection, ultimately improving patient outcomes.

SMFusion: Semantic-Preserving Fusion of Multimodal Medical Images for Enhanced Clinical Diagnosis

Haozhe Xiang, Han Zhang, Yu Cheng, Xiongwen Quan, Wanwan Huang

arXiv preprint · May 18, 2025
Multimodal medical image fusion plays a crucial role in medical diagnosis by integrating complementary information from different modalities to enhance image readability and clinical applicability. However, existing methods mainly follow computer vision standards for feature extraction and fusion strategy formulation, overlooking the rich semantic information inherent in medical images. To address this limitation, we propose a novel semantic-guided medical image fusion approach that, for the first time, incorporates medical prior knowledge into the fusion process. Specifically, we construct a publicly available multimodal medical image-text dataset, upon which text descriptions generated by BiomedGPT are encoded and semantically aligned with image features in a high-dimensional space via a semantic interaction alignment module. During this process, a cross-attention-based linear transformation automatically maps the relationship between textual and visual features to facilitate comprehensive learning. The aligned features are then embedded into a text-injection module for further feature-level fusion. Unlike traditional methods, we further generate diagnostic reports from the fused images to assess the preservation of medical information. Additionally, we design a medical semantic loss function to enhance the retention of textual cues from the source images. Experimental results on test datasets demonstrate that the proposed method achieves superior performance in both qualitative and quantitative evaluations while preserving more critical medical information.
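
The semantic interaction alignment step, as described, amounts to cross-attention in which text tokens query image patch features, followed by a linear map. Here is a minimal sketch under those assumptions; the module name and dimensions are invented for illustration, not the authors' code.

```python
# Hypothetical cross-attention alignment of text tokens to image features.
import torch
import torch.nn as nn

class SemanticAlign(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)  # linear transformation after attention

    def forward(self, text_tok, img_tok):
        # text_tok: (B, T, D) encoded text descriptions;
        # img_tok: (B, N, D) flattened image patch features.
        aligned, _ = self.cross(query=text_tok, key=img_tok, value=img_tok)
        return self.proj(aligned)  # text features aligned to the image space

aligned = SemanticAlign()(torch.randn(2, 16, 256), torch.randn(2, 64, 256))
```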

MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks

Yinghao Zhu, Ziyi He, Haoran Hu, Xiaochen Zheng, Xichen Zhang, Zixiang Wang, Junyi Gao, Liantao Ma, Lequan Yu

arXiv preprint · May 18, 2025
The rapid advancement of Large Language Models (LLMs) has stimulated interest in multi-agent collaboration for addressing complex medical tasks. However, the practical advantages of multi-agent collaboration approaches remain insufficiently understood. Existing evaluations often lack generalizability, failing to cover diverse tasks reflective of real-world clinical practice, and frequently omit rigorous comparisons against both single-LLM-based and established conventional methods. To address this critical gap, we introduce MedAgentBoard, a comprehensive benchmark for the systematic evaluation of multi-agent collaboration, single-LLM, and conventional approaches. MedAgentBoard encompasses four diverse medical task categories: (1) medical (visual) question answering, (2) lay summary generation, (3) structured Electronic Health Record (EHR) predictive modeling, and (4) clinical workflow automation, across text, medical images, and structured EHR data. Our extensive experiments reveal a nuanced landscape: while multi-agent collaboration demonstrates benefits in specific scenarios, such as enhancing task completeness in clinical workflow automation, it does not consistently outperform advanced single LLMs (e.g., in textual medical QA) or, critically, specialized conventional methods that generally maintain better performance in tasks like medical VQA and EHR-based prediction. MedAgentBoard offers a vital resource and actionable insights, emphasizing the necessity of a task-specific, evidence-based approach to selecting and developing AI solutions in medicine. It underscores that the inherent complexity and overhead of multi-agent collaboration must be carefully weighed against tangible performance gains. All code, datasets, detailed prompts, and experimental results are open-sourced at https://medagentboard.netlify.app/.