
[AI-enabled clinical decision support systems: challenges and opportunities].

Tschochohei M, Adams LC, Bressem KK, Lammert J

pubmed · Jun 25 2025
Clinical decision-making is inherently complex, time-sensitive, and prone to error. AI-enabled clinical decision support systems (CDSS) offer promising solutions by leveraging large datasets to provide evidence-based recommendations. These systems range from rule-based and knowledge-based to increasingly AI-driven approaches. However, key challenges persist, particularly concerning data quality, seamless integration into clinical workflows, and clinician trust and acceptance. Ethical and legal considerations, especially data privacy, are also paramount.

AI-CDSS have demonstrated success in fields like radiology (e.g., pulmonary nodule detection, mammography interpretation) and cardiology, where they enhance diagnostic accuracy and improve patient outcomes. Looking ahead, chat and voice interfaces powered by large language models (LLMs) could support shared decision-making (SDM) by fostering better patient engagement and understanding.

To fully realize the potential of AI-CDSS in advancing efficient, patient-centered care, it is essential to ensure their responsible development. This includes grounding AI models in domain-specific data, anonymizing user inputs, and implementing rigorous validation of AI-generated outputs before presentation. Thoughtful design and ethical oversight will be critical to integrating AI safely and effectively into clinical practice.

IMC-PINN-FE: A Physics-Informed Neural Network for Patient-Specific Left Ventricular Finite Element Modeling with Image Motion Consistency and Biomechanical Parameter Estimation

Siyu Mu, Wei Xuan Chan, Choon Hwai Yap

arxiv preprint · Jun 25 2025
Elucidating the biomechanical behavior of the myocardium is crucial for understanding cardiac physiology, but cannot be directly inferred from clinical imaging and typically requires finite element (FE) simulations. However, conventional FE methods are computationally expensive and often fail to reproduce observed cardiac motions. We propose IMC-PINN-FE, a physics-informed neural network (PINN) framework that integrates imaged motion consistency (IMC) with FE modeling for patient-specific left ventricular (LV) biomechanics. Cardiac motion is first estimated from MRI or echocardiography using either a pre-trained attention-based network or an unsupervised cyclic-regularized network, followed by extraction of motion modes. IMC-PINN-FE then rapidly estimates myocardial stiffness and active tension by fitting clinical pressure measurements, accelerating computation from hours to seconds compared to traditional inverse FE. Based on these parameters, it performs FE modeling across the cardiac cycle at 75x speedup. Through motion constraints, it matches imaged displacements more accurately, improving average Dice from 0.849 to 0.927, while preserving realistic pressure-volume behavior. IMC-PINN-FE advances previous PINN-FE models by introducing back-computation of material properties and better motion fidelity. Using motion from a single subject to reconstruct shape modes also avoids the need for large datasets and improves patient specificity. IMC-PINN-FE offers a robust and efficient approach for rapid, personalized, and image-consistent cardiac biomechanical modeling.
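The core idea of combining an imaged-motion-consistency term with a physics residual can be sketched in a few lines of numpy. Everything below is hypothetical: the function names, the 1D toy displacement field, and the crude finite-difference "physics" term are illustrative stand-ins for the paper's actual FE-based formulation, not its implementation.

```python
import numpy as np

def motion_consistency_loss(pred_disp, imaged_disp):
    """Data term: mean squared mismatch between predicted and
    image-derived displacements (hypothetical 1D fields)."""
    return float(np.mean((pred_disp - imaged_disp) ** 2))

def physics_residual_loss(pred_disp, stiffness):
    """Stand-in physics term: penalize a crude 1D equilibrium residual
    stiffness * d2u/dx2 ~ 0 via second finite differences."""
    residual = stiffness * np.diff(pred_disp, n=2)
    return float(np.mean(residual ** 2))

def imc_pinn_loss(pred_disp, imaged_disp, stiffness, w_data=1.0, w_phys=0.1):
    """Weighted sum of the imaged-motion-consistency and physics terms."""
    return (w_data * motion_consistency_loss(pred_disp, imaged_disp)
            + w_phys * physics_residual_loss(pred_disp, stiffness))

# Toy 1D displacement field along the LV wall
x = np.linspace(0.0, 1.0, 50)
imaged = 0.05 * np.sin(np.pi * x)   # "observed" motion
pred = imaged + 0.001 * x           # near-consistent prediction
loss = imc_pinn_loss(pred, imaged, stiffness=2.0)
print(loss)
```

In the actual method, minimizing such a combined loss over network parameters (with the stiffness and active tension as unknowns fitted to clinical pressure measurements) is what replaces the expensive inverse FE procedure.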

Generalizable medical image enhancement using structure-preserved diffusion models.

Chen L, Yu X, Li H, Lin H, Niu K, Li H

pubmed · Jun 25 2025
Clinical medical images often suffer from compromised quality, which negatively impacts the diagnostic process by both clinicians and AI algorithms. While GAN-based enhancement methods have been commonly developed in recent years, delicate model training is necessary due to issues with artifacts, mode collapse, and instability. Diffusion models have shown promise in generating high-quality images superior to GANs, but challenges in training data collection and domain gaps hinder applying them for medical image enhancement. Additionally, preserving fine structures in enhancing medical images with diffusion models is still an area that requires further exploration. To overcome these challenges, we propose structure-preserved diffusion models for generalizable medical image enhancement (GEDM). GEDM leverages joint supervision from enhancement and segmentation to boost structure preservation and generalizability. Specifically, synthetic data is used to collect high-low quality paired training data with structure masks, and the Laplace transform is employed to reduce domain gaps and introduce multi-scale conditions. GEDM conducts medical image enhancement and segmentation jointly, supervised by high-quality references and structure masks from the training data. Four datasets of two medical imaging modalities were collected to implement the experiments, where GEDM outperformed state-of-the-art methods in image enhancement, as well as follow-up medical analysis tasks.
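As a rough illustration of how multi-scale conditions can be derived from an image, the sketch below builds a simple Laplacian pyramid in numpy. This is only a plausible stand-in: the paper's actual transform and conditioning scheme may differ, and the downsampling here is plain 2x2 block averaging rather than a learned or Gaussian filter.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (crude stand-in for
    Gaussian blur + decimation)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsample back to a target shape."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Band-pass detail images at each scale, plus the low-pass residual."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        small = downsample(current)
        pyramid.append(current - upsample(small, current.shape))
        current = small
    pyramid.append(current)
    return pyramid

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = laplacian_pyramid(img, levels=2)
print([p.shape for p in pyr])  # → [(8, 8), (4, 4), (2, 2)]
```

The detail bands at each level would serve as the multi-scale conditions; note the pyramid is exactly invertible, so no image content is lost in the decomposition.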

Streamlining the annotation process by radiologists of volumetric medical images with few-shot learning.

Ryabtsev A, Lederman R, Sosna J, Joskowicz L

pubmed · Jun 25 2025
Radiologists' manual annotations limit robust deep learning in volumetric medical imaging. While supervised methods excel with large annotated datasets, few-shot learning performs well for large structures but struggles with small ones, such as lesions. This paper describes a novel method that leverages the advantages of both few-shot learning models and fully supervised models while reducing the cost of manual annotation. Our method inputs a small dataset of labeled scans and a large dataset of unlabeled scans and outputs a validated labeled dataset used to train a supervised model (nnU-Net). The estimated correction effort is reduced by having the radiologist correct a subset of the scan labels computed by a few-shot learning model (UniverSeg). The method uses an optimized support set of scan slice patches and prioritizes the resulting labeled scans that require the least correction. This process is repeated for the remaining unannotated scans until satisfactory performance is obtained. We validated our method on liver, lung, and brain lesions on CT and MRI scans (375 scans, 5933 lesions). Relative to manual annotation from scratch, it significantly reduces the estimated lesion detection correction effort by 34% for missed lesions and by 387% for wrongly identified lesions, with 130% fewer lesion contour corrections and 424% fewer pixels to correct in the lesion contours. Our method effectively reduces the radiologist's annotation effort for small structures, producing sufficient high-quality annotated datasets to train deep learning models. The method is generic and can be applied to a variety of lesions in various organs imaged by different modalities.
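The prioritization step described above — sending the radiologist the pseudo-labeled scans that need the least correction first — can be sketched as a greedy loop. The effort scores, scan identifiers, and batch size below are hypothetical; in the paper they would come from the few-shot model's outputs and the clinical workflow.

```python
def prioritize_rounds(efforts, batch_size):
    """Greedy loop: each round, send the scans whose pseudo-labels need
    the least estimated correction to the radiologist, until all scans
    have been processed."""
    remaining = sorted(efforts.items(), key=lambda kv: kv[1])
    rounds = []
    while remaining:
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        rounds.append([scan_id for scan_id, _ in batch])
    return rounds

# Hypothetical estimated correction effort per unlabeled scan
efforts = {"scan_a": 0.9, "scan_b": 0.2, "scan_c": 0.5, "scan_d": 0.1}
print(prioritize_rounds(efforts, batch_size=2))
# → [['scan_d', 'scan_b'], ['scan_c', 'scan_a']]
```

In the full method, each corrected batch would be added to the labeled pool, the few-shot model re-applied, and the effort estimates refreshed before the next round.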

[Analysis of the global competitive landscape in artificial intelligence medical device research].

Chen J, Pan L, Long J, Yang N, Liu F, Lu Y, Ouyang Z

pubmed · Jun 25 2025
The objective of this study is to map the global scientific competitive landscape in the field of artificial intelligence (AI) medical devices using scientific data. A bibliometric analysis was conducted using the Web of Science Core Collection to examine global research trends in AI-based medical devices. As of the end of 2023, a total of 55 147 relevant publications were identified worldwide, with 76.6% published between 2018 and 2024. Research in this field has primarily focused on AI-assisted medical image and physiological signal analysis. At the national level, China (17 991 publications) and the United States (14 032 publications) lead in output. China has shown a rapid increase in publication volume, with its 2023 output exceeding twice that of the U.S.; however, the U.S. maintains a higher average number of citations per paper (China: 16.29; U.S.: 35.99). At the institutional level, seven Chinese institutions and three U.S. institutions rank among the global top ten in terms of publication volume. At the researcher level, prominent contributors include Acharya U Rajendra, Rueckert Daniel and Tian Jie, who have extensively explored AI-assisted medical imaging. Some researchers have specialized in specific imaging applications, such as Yang Xiaofeng (AI-assisted precision radiotherapy for tumors) and Shen Dinggang (brain imaging analysis). Others, including Gao Xiaorong and Ming Dong, focus on AI-assisted physiological signal analysis. The results confirm the rapid global development of AI in the medical device field, with "AI + imaging" emerging as the most mature direction. China and the U.S. maintain absolute leadership in this area: China slightly leads in publication volume, while the U.S., having started earlier, demonstrates higher research quality. Both countries host a large number of active research teams in this domain.

[The analysis of invention patents in the field of artificial intelligence medical devices].

Zhang T, Chen J, Lu Y, Xu D, Yan S, Ouyang Z

pubmed · Jun 25 2025
The emergence of new-generation artificial intelligence technology has brought numerous innovations to the healthcare field, including telemedicine and intelligent care. However, the artificial intelligence medical device sector still faces significant challenges, such as data privacy protection and algorithm reliability. This study, based on invention patent analysis, revealed the technological innovation trends in the field of artificial intelligence medical devices from aspects such as patent application time trends, hot topics, regional distribution, and innovation players. The results showed that global invention patent applications had remained active, with technological innovations primarily focused on medical image processing, physiological signal processing, surgical robots, brain-computer interfaces, and intelligent physiological parameter monitoring technologies. The United States and China led the world in the number of invention patent applications. Major international medical device giants, such as Philips, Siemens, General Electric, and Medtronic, were at the forefront of global technological innovation, with significant advantages in patent application volumes and international market presence. Chinese universities and research institutes, such as Zhejiang University, Tianjin University, and the Shenzhen Institute of Advanced Technology, had demonstrated notable technological innovation, with a relatively high number of patent applications. However, their overseas market expansion remained limited. This study provides a comprehensive overview of the technological innovation trends in the artificial intelligence medical device field and offers valuable information support for industry development from an informatics perspective.

MS-IQA: A Multi-Scale Feature Fusion Network for PET/CT Image Quality Assessment

Siqiao Li, Chen Hui, Wei Zhang, Rui Liang, Chenyue Song, Feng Jiang, Haiqi Zhu, Zhixuan Li, Hong Huang, Xiang Li

arxiv preprint · Jun 25 2025
Positron Emission Tomography / Computed Tomography (PET/CT) plays a critical role in medical imaging, combining functional and anatomical information to aid in accurate diagnosis. However, image quality degradation due to noise, compression and other factors could potentially lead to diagnostic uncertainty and increase the risk of misdiagnosis. When evaluating the quality of a PET/CT image, both low-level features like distortions and high-level features like organ anatomical structures affect the diagnostic value of the image. However, existing medical image quality assessment (IQA) methods are unable to account for both feature types simultaneously. In this work, we propose MS-IQA, a novel multi-scale feature fusion network for PET/CT IQA, which utilizes multi-scale features from various intermediate layers of ResNet and Swin Transformer, enhancing its ability to perceive both local and global information. In addition, a multi-scale feature fusion module is introduced to effectively combine high-level and low-level information through a dynamically weighted channel attention mechanism. Finally, to fill the absence of a dedicated PET/CT IQA dataset, we construct PET-CT-IQA-DS, a dataset containing 2,700 varying-quality PET/CT images with quality scores assigned by radiologists. Experiments on our dataset and the publicly available LDCTIQAC2023 dataset demonstrate that our proposed model achieves superior performance over existing state-of-the-art methods in various IQA metrics. This work provides an accurate and efficient IQA method for PET/CT. Our code and dataset are available at https://github.com/MS-IQA/MS-IQA/.
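One simple form of dynamically weighted fusion is to pool each feature source to a scalar and softmax the pooled values into fusion weights. The numpy sketch below is a hand-rolled stand-in for intuition only: MS-IQA's actual module is learned and operates per channel, and the feature maps here are random placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features):
    """Fuse same-shaped feature maps with weights computed from their
    global-average-pooled responses (a squeeze-style gating; the paper's
    mechanism is learned, this version is purely illustrative)."""
    pooled = np.array([f.mean() for f in features])  # squeeze: one scalar per source
    weights = softmax(pooled)                        # dynamic fusion weights
    fused = sum(w * f for w, f in zip(weights, features))
    return fused, weights

rng = np.random.default_rng(0)
low = rng.normal(0.0, 1.0, (4, 4))    # low-level (distortion-sensitive) features
high = rng.normal(0.5, 1.0, (4, 4))   # high-level (anatomy-sensitive) features
fused, w = attention_fuse([low, high])
print(fused.shape, float(w.sum()))
```

The point of the dynamic weighting is that the relative contribution of low-level and high-level evidence adapts per image, rather than being fixed at training time.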

Med-Art: Diffusion Transformer for 2D Medical Text-to-Image Generation

Changlu Guo, Anders Nymark Christensen, Morten Rieger Hannemose

arxiv preprint · Jun 25 2025
Text-to-image generative models have achieved remarkable breakthroughs in recent years. However, their application in medical image generation still faces significant challenges, including small dataset sizes and the scarcity of medical textual data. To address these challenges, we propose Med-Art, a framework specifically designed for medical image generation with limited data. Med-Art leverages vision-language models to generate visual descriptions of medical images, which overcomes the scarcity of applicable medical textual data. Med-Art adapts a large-scale pre-trained text-to-image model, PixArt-$\alpha$, based on the Diffusion Transformer (DiT), achieving high performance under limited data. Furthermore, we propose an innovative Hybrid-Level Diffusion Fine-tuning (HLDF) method, which enables pixel-level losses, effectively addressing issues such as overly saturated colors. We achieve state-of-the-art performance on two medical image datasets, measured by FID, KID, and downstream classification performance.

AdvMIM: Adversarial Masked Image Modeling for Semi-Supervised Medical Image Segmentation

Lei Zhu, Jun Zhou, Rick Siow Mong Goh, Yong Liu

arxiv preprint · Jun 25 2025
Vision Transformers have recently gained tremendous popularity in medical image segmentation due to their superior capability in capturing long-range dependencies. However, transformers require a large amount of labeled data to be effective, which hinders their applicability in annotation-scarce semi-supervised learning scenarios where only limited labeled data is available. State-of-the-art semi-supervised learning methods propose combinatorial CNN-Transformer learning to cross-teach a transformer with a convolutional neural network, which achieves promising results. However, it remains a challenging task to effectively train the transformer with limited labeled data. In this paper, we propose an adversarial masked image modeling method to fully unleash the potential of the transformer for semi-supervised medical image segmentation. The key challenge in semi-supervised learning with transformers lies in the lack of a sufficient supervision signal. To this end, we propose to construct an auxiliary masked domain from the original domain with masked image modeling and train the transformer to predict the entire segmentation mask from masked inputs, increasing the supervision signal. We leverage the original labels from labeled data and pseudo-labels from unlabeled data to learn the masked domain. To further benefit the original domain from the masked domain, we provide a theoretical analysis of our method from a multi-domain learning perspective and devise a novel adversarial training loss to reduce the domain gap between the original and masked domains, which boosts semi-supervised learning performance. We also extend adversarial masked image modeling to CNN networks. Extensive experiments on three public medical image segmentation datasets demonstrate the effectiveness of our method, which outperforms existing methods significantly. Our code is publicly available at https://github.com/zlheui/AdvMIM.
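The masked-domain construction starts from ordinary random patch masking of the input image. A minimal numpy sketch of that first step (the patch size, masking ratio, and seeding are arbitrary choices, and the image side is assumed divisible by the patch size; the adversarial loss and transformer training are out of scope here):

```python
import numpy as np

def mask_patches(img, patch=4, ratio=0.5, rng=None):
    """Zero out a random subset of non-overlapping square patches, as in
    masked image modeling; returns the masked image and the boolean keep
    mask. Assumes img dimensions are divisible by `patch`."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    gh, gw = h // patch, w // patch
    keep = rng.random((gh, gw)) >= ratio              # True = patch kept
    mask = np.kron(keep, np.ones((patch, patch), dtype=bool))
    return img * mask, mask

img = np.ones((16, 16))
masked, mask = mask_patches(img, patch=4, ratio=0.5)
print(masked.shape, mask.mean())  # mask.mean() is the fraction of pixels kept
```

In the method itself, the transformer would then be trained to predict the full segmentation mask from such masked inputs, with labels or pseudo-labels providing the target.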

Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE).

Anibal JT, Huth HB, Boeken T, Daye D, Gichoya J, Muñoz FG, Chapiro J, Wood BJ, Sze DY, Hausegger K

pubmed · Jun 25 2025
As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research and clinical practice, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed journals. This report introduces comprehensive standards and an evaluation checklist (iCARE) that covers the application of modern AI methods in IR-specific contexts. The iCARE checklist encompasses the full "code-to-clinic" pipeline of AI development, including dataset curation, pre-training, task-specific training, explainability, privacy protection, bias mitigation, reproducibility, and model deployment. The iCARE checklist aims to support the development of safe, generalizable technologies for enhancing IR workflows, the delivery of care, and patient outcomes.
