Page 30 of 2252246 results

Opportunistic Osteoporosis Diagnosis via Texture-Preserving Self-Supervision, Mixture of Experts and Multi-Task Integration

Jiaxing Huang, Heng Guo, Le Lu, Fan Yang, Minfeng Xu, Ge Yang, Wei Luo

arXiv preprint · Jun 25, 2025
Osteoporosis, characterized by reduced bone mineral density (BMD) and compromised bone microstructure, increases fracture risk in aging populations. While dual-energy X-ray absorptiometry (DXA) is the clinical standard for BMD assessment, its limited accessibility hinders diagnosis in resource-limited regions. Opportunistic computed tomography (CT) analysis has emerged as a promising alternative for osteoporosis diagnosis using existing imaging data. Current approaches, however, face three limitations: (1) underutilization of unlabeled vertebral data, (2) systematic bias from device-specific DXA discrepancies, and (3) insufficient integration of clinical knowledge such as spatial BMD distribution patterns. To address these limitations, we propose a unified deep learning framework with three innovations. First, a self-supervised learning method uses radiomic representations to leverage unlabeled CT data while preserving bone texture. Second, a Mixture of Experts (MoE) architecture with learned gating mechanisms enhances cross-device adaptability. Third, a multi-task learning framework integrates osteoporosis diagnosis, BMD regression, and vertebra location prediction. Validated across three clinical sites and an external hospital, our approach demonstrates superior generalizability and accuracy over existing methods for opportunistic osteoporosis screening and diagnosis.
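The learned-gating MoE idea in the second innovation can be illustrated with one dense layer per expert and a softmax gate that mixes expert outputs per input. This is a generic sketch with illustrative shapes and names, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x, expert_weights, gate_weights):
    """Mixture of Experts with a learned gate (toy, single layer).

    x: (d,) input feature vector.
    expert_weights: list of (d, d_out) matrices, one per expert.
    gate_weights: (d, n_experts) gating matrix (learned in practice).
    """
    gates = softmax(x @ gate_weights)                    # (n_experts,) mixing weights
    outputs = np.stack([x @ w for w in expert_weights])  # (n_experts, d_out)
    return gates @ outputs                               # convex combination of experts

# Toy usage: two "device-specific" experts over a 4-d feature vector.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
experts = [rng.normal(size=(4, 3)) for _ in range(2)]
gate = rng.normal(size=(4, 2))
y = moe_forward(x, experts, gate)
```

In the paper's setting the gate would learn to weight experts by acquisition device; here the weights are random placeholders.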

Radiomic fingerprints for knee MR images assessment

Yaxi Chen, Simin Ni, Shaheer U. Saeed, Aleksandra Ivanova, Rikin Hargunani, Jie Huang, Chaozong Liu, Yipeng Hu

arXiv preprint · Jun 25, 2025
Accurate interpretation of knee MRI scans relies on expert clinical judgment, often with high variability and limited scalability. Existing radiomic approaches use a fixed set of radiomic features (the signature), selected at the population level and applied uniformly to all patients. While interpretable, these signatures are often too constrained to represent individual pathological variations. As a result, conventional radiomic-based approaches underperform recent end-to-end deep learning (DL) alternatives that forgo interpretable radiomic features. We argue that the individual-agnostic nature of current radiomic selection is not central to its interpretability, but is responsible for the poor generalization in our application. Here, we propose a novel radiomic fingerprint framework, in which a radiomic feature set (the fingerprint) is dynamically constructed for each patient, selected by a DL model. Unlike existing radiomic signatures, our fingerprints are derived on a per-patient basis by predicting feature relevance over a large radiomic feature pool and selecting only those features that are predictive of clinical conditions for the individual patient. The radiomic-selecting model is trained jointly with a low-dimensional (and therefore relatively explainable) logistic regression for downstream classification. We validate our method across multiple diagnostic tasks, including general knee abnormalities, anterior cruciate ligament (ACL) tears, and meniscus tears, demonstrating comparable or superior diagnostic accuracy relative to state-of-the-art end-to-end DL models. More importantly, we show that the interpretability inherent in our approach facilitates meaningful clinical insights and potential biomarker discovery, with detailed discussion and quantitative and qualitative analysis of real-world clinical cases to evidence these advantages.
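The per-patient selection step can be sketched as follows: a relevance predictor scores every feature in the pool for one patient, the top-k become that patient's fingerprint, and a logistic regression classifies from the selected features only. All shapes and names here are illustrative assumptions, with a random linear map standing in for the DL relevance model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fingerprint_predict(radiomics, relevance_map, clf_w, clf_b, k=3):
    """Per-patient radiomic fingerprint + logistic-regression classifier (toy).

    radiomics: (n_features,) radiomic feature pool for one patient.
    relevance_map: (n_features, n_features) stand-in for the DL relevance predictor.
    clf_w, clf_b: logistic-regression weights and bias.
    """
    relevance = radiomics @ relevance_map      # per-feature relevance scores
    keep = np.argsort(relevance)[-k:]          # top-k features for THIS patient
    mask = np.zeros_like(radiomics)
    mask[keep] = 1.0
    fingerprint = radiomics * mask             # unselected features are zeroed out
    return sigmoid(fingerprint @ clf_w + clf_b), keep

# Toy usage: a 5-feature pool, fingerprint of size 3.
rng = np.random.default_rng(1)
x = rng.normal(size=5)
prob, keep = fingerprint_predict(x, rng.normal(size=(5, 5)), rng.normal(size=5), 0.0)
```

The point of the design is that `keep` differs between patients, while the final classifier stays a small, inspectable linear model.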

MS-IQA: A Multi-Scale Feature Fusion Network for PET/CT Image Quality Assessment

Siqiao Li, Chen Hui, Wei Zhang, Rui Liang, Chenyue Song, Feng Jiang, Haiqi Zhu, Zhixuan Li, Hong Huang, Xiang Li

arXiv preprint · Jun 25, 2025
Positron Emission Tomography / Computed Tomography (PET/CT) plays a critical role in medical imaging, combining functional and anatomical information to aid accurate diagnosis. However, image quality degradation due to noise, compression, and other factors can lead to diagnostic uncertainty and increase the risk of misdiagnosis. When evaluating the quality of a PET/CT image, both low-level features, such as distortions, and high-level features, such as organ anatomical structures, affect the diagnostic value of the image. However, existing medical image quality assessment (IQA) methods cannot account for both feature types simultaneously. In this work, we propose MS-IQA, a novel multi-scale feature fusion network for PET/CT IQA, which utilizes multi-scale features from various intermediate layers of ResNet and Swin Transformer, enhancing its ability to perceive both local and global information. In addition, a multi-scale feature fusion module is introduced to effectively combine high-level and low-level information through a dynamically weighted channel attention mechanism. Finally, to fill the gap in PET/CT IQA datasets, we construct PET-CT-IQA-DS, a dataset containing 2,700 varying-quality PET/CT images with quality scores assigned by radiologists. Experiments on our dataset and the publicly available LDCTIQAC2023 dataset demonstrate that our proposed model achieves superior performance over existing state-of-the-art methods across various IQA metrics. This work provides an accurate and efficient IQA method for PET/CT. Our code and dataset are available at https://github.com/MS-IQA/MS-IQA/.
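Dynamically weighted channel attention can be sketched as global average pooling over the fused maps followed by softmax channel weights. This is a minimal NumPy illustration; the real module's pooling and weighting details are not specified in the abstract:

```python
import numpy as np

def channel_attention_fuse(low, high):
    """Fuse low- and high-level feature maps with softmax channel weights (toy).

    low, high: (C, H, W) feature maps of matching shape.
    """
    fused = low + high                    # element-wise merge, (C, H, W)
    pooled = fused.mean(axis=(1, 2))      # (C,) global average pool per channel
    e = np.exp(pooled - pooled.max())
    attn = e / e.sum()                    # softmax over channels, sums to 1
    return fused * attn[:, None, None]    # dynamically reweight each channel
```

Because the weights come from the fused maps themselves, informative channels are amplified per image rather than by a fixed, input-independent weighting.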

[Advances in low-dose cone-beam computed tomography image reconstruction methods based on deep learning].

Shi J, Song Y, Li G, Bai S

PubMed paper · Jun 25, 2025
Cone-beam computed tomography (CBCT) is widely used in dentistry, surgery, radiotherapy and other medical fields. However, repeated CBCT scans expose patients to additional radiation doses, increasing the risk of secondary malignant tumors. Low-dose CBCT image reconstruction technology, which employs advanced algorithms to reduce radiation dose while enhancing image quality, has emerged as a focal point of recent research. This review systematically examined deep learning-based methods for low-dose CBCT reconstruction. It compared different network architectures in terms of noise reduction, artifact removal, detail preservation, and computational efficiency, covering three approaches: image-domain, projection-domain, and dual-domain techniques. The review also explored how emerging technologies like multimodal fusion and self-supervised learning could enhance these methods. By summarizing the strengths and weaknesses of current approaches, this work provides insights to optimize low-dose CBCT algorithms and support their clinical adoption.

[Thyroid nodule segmentation method integrating receptance weighted key-value architecture and spherical geometric features].

Zhu L, Wei G

PubMed paper · Jun 25, 2025
Transformer-based segmentation of ultrasound thyroid nodules suffers from high computational complexity, and traditional image sampling techniques lose image detail or omit key spatial information on high-resolution two-dimensional ultrasound images with complex texture or uneven density. To address these issues, this paper proposes a thyroid nodule segmentation method that integrates the receptance weighted key-value (RWKV) architecture and spherical geometry feature (SGF) sampling. The method effectively captures the details of adjacent regions through two-dimensional offset prediction and pixel-level sampling position adjustment, achieving precise segmentation. Additionally, this study introduces a patch attention module (PAM) that optimizes the decoder feature map with a regional cross-attention mechanism, enabling it to focus more precisely on the encoder's high-resolution features. Experiments on the thyroid nodule segmentation dataset (TN3K) and the digital database for thyroid images (DDTI) show that the proposed method achieves Dice similarity coefficients (DSC) of 87.24% and 80.79%, respectively, outperforming existing models while maintaining lower computational complexity. This approach may provide an efficient solution for the precise segmentation of thyroid nodules.
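The reported DSC values follow the standard Dice similarity coefficient over binary segmentation masks, which can be computed as:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary segmentation masks.

    pred, target: arrays of the same shape; nonzero entries count as foreground.
    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A DSC of 1.0 means perfect overlap with the reference annotation; 0.0 means the predicted nodule mask misses the reference entirely.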

[The analysis of invention patents in the field of artificial intelligence medical devices].

Zhang T, Chen J, Lu Y, Xu D, Yan S, Ouyang Z

PubMed paper · Jun 25, 2025
The emergence of new-generation artificial intelligence technology has brought numerous innovations to the healthcare field, including telemedicine and intelligent care. However, the artificial intelligence medical device sector still faces significant challenges, such as data privacy protection and algorithm reliability. This study, based on invention patent analysis, reveals technological innovation trends in the field of artificial intelligence medical devices across patent application time trends, hot topics, regional distribution, and innovation players. The results show that global invention patent applications have remained active, with technological innovations primarily focused on medical image processing, physiological signal processing, surgical robots, brain-computer interfaces, and intelligent physiological parameter monitoring. The United States and China lead the world in the number of invention patent applications. Major international medical device companies, such as Philips, Siemens, General Electric, and Medtronic, are at the forefront of global technological innovation, with significant advantages in patent application volumes and international market presence. Chinese universities and research institutes, such as Zhejiang University, Tianjin University, and the Shenzhen Institute of Advanced Technology, have demonstrated notable technological innovation with a relatively high number of patent applications; however, their overseas market expansion remains limited. This study provides a comprehensive overview of technological innovation trends in the artificial intelligence medical device field and offers valuable information support for industry development from an informatics perspective.

[Analysis of the global competitive landscape in artificial intelligence medical device research].

Chen J, Pan L, Long J, Yang N, Liu F, Lu Y, Ouyang Z

PubMed paper · Jun 25, 2025
The objective of this study is to map the global scientific competitive landscape in the field of artificial intelligence (AI) medical devices using scientific data. A bibliometric analysis was conducted using the Web of Science Core Collection to examine global research trends in AI-based medical devices. As of the end of 2023, a total of 55 147 relevant publications were identified worldwide, with 76.6% published between 2018 and 2024. Research in this field has primarily focused on AI-assisted medical image and physiological signal analysis. At the national level, China (17 991 publications) and the United States (14 032 publications) lead in output. China has shown a rapid increase in publication volume, with its 2023 output exceeding twice that of the U.S.; however, the U.S. maintains a higher average number of citations per paper (China: 16.29; U.S.: 35.99). At the institutional level, seven Chinese institutions and three U.S. institutions rank among the global top ten by publication volume. At the researcher level, prominent contributors include Acharya U Rajendra, Rueckert Daniel and Tian Jie, who have extensively explored AI-assisted medical imaging. Some researchers have specialized in specific imaging applications, such as Yang Xiaofeng (AI-assisted precision radiotherapy for tumors) and Shen Dinggang (brain imaging analysis). Others, including Gao Xiaorong and Ming Dong, focus on AI-assisted physiological signal analysis. The results confirm the rapid global development of AI in the medical device field, with "AI + imaging" emerging as the most mature direction. China and the U.S. maintain absolute leadership in this area: China slightly leads in publication volume, while the U.S., having started earlier, demonstrates higher research quality. Both countries host a large number of active research teams in this domain.

Streamlining the annotation process by radiologists of volumetric medical images with few-shot learning.

Ryabtsev A, Lederman R, Sosna J, Joskowicz L

PubMed paper · Jun 25, 2025
The cost of radiologists' manual annotations limits robust deep learning in volumetric medical imaging. While supervised methods excel with large annotated datasets, few-shot learning performs well for large structures but struggles with small ones, such as lesions. This paper describes a novel method that leverages the advantages of both few-shot learning models and fully supervised models while reducing the cost of manual annotation. Our method takes as input a small dataset of labeled scans and a large dataset of unlabeled scans, and outputs a validated labeled dataset used to train a supervised model (nnU-Net). The estimated correction effort is reduced by having the radiologist correct a subset of the scan labels computed by a few-shot learning model (UniverSeg). The method uses an optimized support set of scan slice patches and prioritizes the resulting labeled scans that require the least correction. This process is repeated for the remaining unannotated scans until satisfactory performance is obtained. We validated our method on liver, lung, and brain lesions in CT and MRI scans (375 scans, 5,933 lesions). Relative to manual annotation from scratch, it significantly reduces the estimated lesion detection correction effort: 34% for missed lesions and 387% for wrongly identified lesions, with 130% fewer lesion contour corrections and 424% fewer pixels to correct in the lesion contours. Our method effectively reduces the radiologist's annotation effort for small structures and produces high-quality annotated datasets sufficient to train deep learning models. The method is generic and can be applied to a variety of lesions in various organs imaged by different modalities.
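The "least correction first" scheduling step amounts to ranking unlabeled scans by estimated correction effort and handing the radiologist only the cheapest ones each round. A sketch under the assumption that effort estimates are already available (the estimator itself is not modeled here, and the names are illustrative):

```python
def prioritize_scans(effort_by_scan, budget):
    """Return the `budget` scan ids with the lowest estimated correction effort.

    effort_by_scan: dict mapping scan id -> estimated correction effort
    (e.g. predicted number of label fixes); lower means cheaper to validate.
    """
    ranked = sorted(effort_by_scan, key=effort_by_scan.get)
    return ranked[:budget]

# One round: the radiologist corrects the two cheapest scans first.
batch = prioritize_scans({"scan_a": 3.0, "scan_b": 1.0, "scan_c": 2.0}, budget=2)
```

In the paper's loop, the corrected batch joins the labeled set, the few-shot model is re-applied to the remainder, and the ranking repeats until performance is satisfactory.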

Alterations in the functional MRI-based temporal brain organisation in individuals with obesity.

Lee S, Namgung JY, Han JH, Park BY

PubMed paper · Jun 25, 2025
Obesity is associated with functional alterations in the brain. Although changes in the spatial organisation of the brains of individuals with obesity have been widely studied, the temporal dynamics in their brains remain poorly understood. Therefore, in this study, we investigated variations in the intrinsic neural timescale (INT) across different degrees of obesity using resting-state functional and diffusion magnetic resonance imaging data from the enhanced Nathan Kline Institute Rockland Sample database. We examined the relationship between the INT and obesity phenotypes using supervised machine learning, controlling for age and sex. To further explore the structure-function characteristics of these regions, we assessed modular network properties by analysing the participation coefficients and within-module degree derived from the structure-function coupling matrices. Finally, the INT values of the identified regions were used to predict eating behaviour traits. A significant negative correlation between the INT and obesity phenotypes was observed, particularly in the default mode, limbic and reward networks. We also found a negative association with the participation coefficients, suggesting that shorter INT values in higher-order association areas are related to reduced network integration. Moreover, the INT values of these identified regions moderately predicted eating behaviours, underscoring the potential of the INT as a candidate marker for obesity and eating behaviours. These findings provide insight into the temporal organisation of neural activity in obesity, highlighting the role of specific brain networks in shaping behavioural outcomes.
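A common way to estimate the intrinsic neural timescale from a regional BOLD time series is to sum the autocorrelation function over increasing lags until it first drops to zero or below. This generic estimator is a sketch, not necessarily the study's exact procedure:

```python
import numpy as np

def intrinsic_neural_timescale(ts):
    """Estimate INT as the sum of positive ACF values before the first
    non-positive lag (a common, simple estimator; units = time steps)."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()
    var = np.dot(ts, ts)
    int_value = 0.0
    for lag in range(1, len(ts)):
        acf = np.dot(ts[:-lag], ts[lag:]) / var   # autocorrelation at this lag
        if acf <= 0:
            break                                  # timescale ends here
        int_value += acf
    return int_value

slow = np.sin(np.linspace(0, 2 * np.pi, 200))      # smooth, long-memory signal
fast = np.array([1.0, -1.0] * 100)                 # alternating, no memory
```

Under this estimator, slowly varying regional activity yields a long INT while rapidly decorrelating activity yields an INT near zero, which is the quantity the study relates to obesity phenotypes.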

The evaluation of artificial intelligence in mammography-based breast cancer screening: Is breast-level analysis enough?

Taib AG, Partridge GJW, Yao L, Darker I, Chen Y

PubMed paper · Jun 25, 2025
To assess whether the diagnostic performance of a commercial artificial intelligence (AI) algorithm for mammography differs between breast-level and lesion-level interpretations, and to compare its performance with that of a large population of specialised human readers. We retrospectively analysed 1200 mammograms from the NHS breast cancer screening programme using a commercial AI algorithm and assessments from 1258 trained human readers from the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance programme. For breasts containing pathologically confirmed malignancies, both breast-level and lesion-level analyses were performed; the latter considered the locations of regions of interest marked by AI and humans, recording the highest score per lesion. For non-malignant breasts, a breast-level analysis recorded the highest score per breast. Area under the curve (AUC), sensitivity and specificity were calculated at the developer's recommended recall threshold. The study was designed to detect a medium-sized effect (odds ratio 3.5 or 0.29) for sensitivity. The test set contained 882 non-malignant breasts (73%) and 318 malignant breasts (27%), with 328 cancer lesions. The AI AUC was 0.942 at breast level and 0.929 at lesion level (difference -0.013, p < 0.01). The mean human AUC was 0.878 at breast level and 0.851 at lesion level (difference -0.027, p < 0.01). AI outperformed human readers at both the breast and lesion level (both p < 0.01) by AUC. AI's diagnostic performance significantly decreased at the lesion level, indicating reduced accuracy in localising malignancies; however, its overall performance exceeded that of human readers.
Question: AI often recalls mammography cases not recalled by humans; to understand why, we must consider the regions of interest it has marked as cancerous.
Findings: Evaluations of AI typically occur at the breast level, but performance decreases when AI is evaluated at the lesion level; this also occurs for humans.
Clinical relevance: To improve human-AI collaboration, AI should be assessed at the lesion level; poor accuracy here may lead to automation bias and unnecessary patient procedures.
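The breast-level versus lesion-level distinction comes down to how per-lesion AI scores are pooled before the AUC is computed. A minimal sketch using the Mann-Whitney formulation of AUC (the scores below are illustrative, not the study's data):

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney AUC: probability a positive case outscores a negative one."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

def breast_level_scores(lesion_scores_per_breast):
    """Collapse lesion-level scores to one score per breast (highest score wins)."""
    return [max(scores) for scores in lesion_scores_per_breast]

# Illustrative: two malignant breasts (with per-lesion AI scores) vs two normal breasts.
malignant = breast_level_scores([[0.2, 0.9], [0.6]])  # one score per breast
normal = [0.1, 0.7]
breast_auc = auc(malignant, normal)
```

Taking the max per breast credits the AI even when its highest-scoring region is not the actual lesion, which is why performance can drop once the lesion-level analysis requires the marked region to match the malignancy's location.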