Predicting brain metastases in EGFR-positive lung adenocarcinoma patients using pre-treatment CT lung imaging data.

He X, Guan C, Chen T, Wu H, Su L, Zhao M, Guo L

PubMed · Jun 26, 2025
This study aims to establish a dual-feature fusion model integrating radiomic features with deep learning features, using single-modality pre-treatment lung CT image data to provide early warning of brain metastasis (BM) risk within two years in EGFR-positive lung adenocarcinoma. After rigorous screening of 362 EGFR-positive lung adenocarcinoma patients with pre-treatment lung CT images, 173 eligible participants were ultimately enrolled, including 93 patients with BM and 80 without. Radiomic features were extracted from manually segmented lung nodule regions, and selected features were used to develop the radiomics models. For deep learning, ROI-level CT images were processed using several deep learning networks, including the novel vision mamba, applied here for the first time in this context. A feature-level fusion model was developed by combining radiomic and deep learning features. Model performance was assessed using receiver operating characteristic (ROC) curves and decision curve analysis (DCA), with statistical comparisons of area under the curve (AUC) values using the DeLong test. Among the models evaluated, the fused vision mamba model demonstrated the best classification performance, achieving an AUC of 0.86 (95% CI: 0.82-0.90), with a recall of 0.88, F1-score of 0.70, and accuracy of 0.76. This fusion model outperformed both the radiomics-only and deep-learning-only models, highlighting its superior predictive accuracy for early BM risk detection in EGFR-positive lung adenocarcinoma patients. The fused vision mamba model, using only single-modality CT imaging data, significantly enhances the prediction of brain metastasis within two years in EGFR-positive lung adenocarcinoma patients. This novel approach, combining radiomic and deep learning features, offers promising clinical value for early detection and personalized treatment.
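
A minimal sketch of the feature-level fusion step described above, assuming pre-extracted inputs: radiomic features from the segmented nodule and a deep representation from an image encoder are standardized, concatenated, and fed to a linear classifier scored by AUC. The array shapes, feature counts, and logistic-regression head are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-ins for features extracted upstream: radiomic features from the
# segmented nodule ROI and deep features from a CT encoder (e.g. vision mamba).
radiomic_feats = rng.normal(size=(173, 50))   # 173 patients, 50 selected radiomic features
deep_feats = rng.normal(size=(173, 128))      # 128-d deep representation per patient
labels = rng.integers(0, 2, size=173)         # 1 = developed BM within two years

# Feature-level fusion: standardize each feature block, then concatenate.
fused = np.hstack([
    StandardScaler().fit_transform(radiomic_feats),
    StandardScaler().fit_transform(deep_feats),
])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```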

Application Value of Deep Learning-Based AI Model in the Classification of Breast Nodules.

Zhi S, Cai X, Zhou W, Qian P

PubMed · Jun 25, 2025
Aims/Background: Breast nodules are highly prevalent among women, and ultrasound is a widely used screening tool. However, single ultrasound examinations often result in high false-positive rates, leading to unnecessary biopsies. Artificial intelligence (AI) has demonstrated the potential to improve diagnostic accuracy, reducing misdiagnosis and minimising inter-observer variability. This study developed a deep learning-based AI model to evaluate its clinical utility in assisting sonographers with the Breast Imaging Reporting and Data System (BI-RADS) classification of breast nodules. Methods: A retrospective analysis was conducted on 558 patients with breast nodules classified as BI-RADS categories 3 to 5, confirmed through pathological examination at The People's Hospital of Pingyang County between December 2019 and December 2023. The image dataset was divided into training, validation, and test sets, and a convolutional neural network (CNN) was used to construct a deep learning-based AI model. Patients underwent ultrasound examination and AI-assisted diagnosis. The receiver operating characteristic (ROC) curve was used to analyse the performance of the AI model, physician adjudication results, and the diagnostic efficacy of physicians before and after AI model assistance. Cohen's weighted kappa coefficient was used to assess the consistency of BI-RADS classification among five ultrasound physicians before and after AI model assistance. Additionally, statistical analyses were performed to evaluate changes in each physician's BI-RADS classification results before and after AI model assistance. Results: According to pathological examination, 765 of the 1026 breast nodules were benign, while 261 were malignant. The sensitivity, specificity, and accuracy of routine ultrasonography in diagnosing benign and malignant nodules were 80.85%, 91.59%, and 88.31%, respectively. In comparison, the AI system achieved a sensitivity of 89.36%, specificity of 92.52%, and accuracy of 91.56%. Furthermore, AI model assistance significantly improved the consistency of physicians' BI-RADS classification (p < 0.001). Conclusion: A deep learning-based AI model constructed using ultrasound images can enhance the differentiation between benign and malignant breast nodules and improve classification accuracy, thereby reducing the incidence of missed diagnoses and misdiagnoses.
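
The abstract does not state the weighting scheme used for the kappa statistic; below is a minimal sketch of Cohen's weighted kappa on an ordinal scale like BI-RADS, assuming linear weights and made-up ratings from two physicians on the same ten nodules.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS category assignments (ordinal codes 3, 4, 5)
# for two physicians reading the same 10 nodules.
physician_a = [3, 4, 4, 5, 3, 4, 5, 5, 3, 4]
physician_b = [3, 4, 5, 5, 3, 3, 5, 4, 3, 4]

# Linear weights penalize disagreements by how far apart the ordinal
# categories are, which suits an ordered scale like BI-RADS.
kappa = cohen_kappa_score(physician_a, physician_b, weights="linear")
print(f"weighted kappa: {kappa:.3f}")
```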

Regional free-water diffusion is more strongly related to neuroinflammation than neurodegeneration.

Sumra V, Hadian M, Dilliott AA, Farhan SMK, Frank AR, Lang AE, Roberts AC, Troyer A, Arnott SR, Marras C, Tang-Wai DF, Finger E, Rogaeva E, Orange JB, Ramirez J, Zinman L, Binns M, Borrie M, Freedman M, Ozzoude M, Bartha R, Swartz RH, Munoz D, Masellis M, Black SE, Dixon RA, Dowlatshahi D, Grimes D, Hassan A, Hegele RA, Kumar S, Pasternak S, Pollock B, Rajji T, Sahlas D, Saposnik G, Tartaglia MC

PubMed · Jun 25, 2025
Recent research has suggested that neuroinflammation may be important in the pathogenesis of neurodegenerative diseases. Free-water diffusion (FWD) has been proposed as a non-invasive neuroimaging-based biomarker for neuroinflammation. Free-water maps were generated using diffusion MRI data in 367 patients from the Ontario Neurodegenerative Disease Research Initiative (108 Alzheimer's Disease/Mild Cognitive Impairment, 42 Frontotemporal Dementia, 37 Amyotrophic Lateral Sclerosis, 123 Parkinson's Disease, and 58 vascular disease-related Cognitive Impairment). The ability of FWD to predict neuroinflammation and neurodegeneration from biofluids was estimated using plasma glial fibrillary acidic protein (GFAP) and neurofilament light chain (NfL), respectively. Recursive Feature Elimination (RFE) performed best among the feature selection algorithms used and revealed regional specificity in the areas that were the most important features for predicting GFAP, as opposed to NfL, concentration. Deep learning models using the selected features and demographic information predicted GFAP better than NfL. Based on these feature selection and deep learning methods, FWD was more strongly related, in terms of predictive performance, to GFAP concentration (a measure of astrogliosis) than to NfL (a measure of neuro-axonal damage) across neurodegenerative disease groups. Non-invasive markers of neurodegeneration, such as structural MRI, already exist, while non-invasive markers of neuroinflammation are not available. Our results support the use of FWD as a non-invasive neuroimaging-based biomarker for neuroinflammation.
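
A minimal sketch of RFE as used in this setting: regional free-water values serve as features, plasma GFAP as the regression target, and an estimator is refit while the weakest features are pruned, leaving a ranking of regional importance. The region count, synthetic data, and linear-regression estimator are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Stand-in data: mean free-water values in 48 brain regions for 367 patients,
# with plasma GFAP concentration as the regression target.
region_names = [f"region_{i:02d}" for i in range(48)]  # hypothetical ROI labels
fwd = rng.normal(size=(367, 48))
gfap = fwd[:, 5] * 2.0 + fwd[:, 17] - fwd[:, 30] + rng.normal(scale=0.5, size=367)

# RFE repeatedly fits the estimator and drops the weakest feature(s),
# yielding the subset of regions most predictive of GFAP.
selector = RFE(LinearRegression(), n_features_to_select=10).fit(fwd, gfap)
selected = [name for name, keep in zip(region_names, selector.support_) if keep]
print("top regions:", selected)
```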

Comparative Analysis of Automated vs. Expert-Designed Machine Learning Models in Age-Related Macular Degeneration Detection and Classification.

Durmaz Engin C, Beşenk U, Özizmirliler D, Selver MA

PubMed · Jun 25, 2025
To compare the effectiveness of expert-designed machine learning models and code-free automated machine learning (AutoML) models in classifying optical coherence tomography (OCT) images for detecting age-related macular degeneration (AMD) and distinguishing between its dry and wet forms. Custom models were developed by an artificial intelligence expert using the EfficientNet V2 architecture, while AutoML models were created by an ophthalmologist using LobeAI with transfer learning via ResNet-50 V2. Both models were designed to differentiate normal OCT images from AMD and to distinguish between dry and wet AMD. The models were trained and tested using an 80:20 split, with each diagnostic group containing 500 OCT images. Performance metrics, including sensitivity, specificity, accuracy, and F1 scores, were calculated and compared. The expert-designed model achieved an overall accuracy of 99.67% for classifying all images, with F1 scores of 0.99 or higher across all binary class comparisons. In contrast, the AutoML model achieved an overall accuracy of 89.00%, with F1 scores ranging from 0.86 to 0.90 in binary comparisons. Notably lower recall was observed for dry AMD vs. normal (0.85) in the AutoML model, indicating challenges in correctly identifying dry AMD. While the AutoML models demonstrated acceptable performance in identifying and classifying AMD cases, the expert-designed models significantly outperformed them. The use of advanced neural network architectures and rigorous optimization in the expert-developed models underscores the continued necessity of expert involvement in the development of high-precision diagnostic tools for medical image classification.
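
For context, a minimal transfer-learning sketch in the spirit of the AutoML pipeline: a frozen ImageNet-pretrained ResNet-50 V2 backbone with a small trainable head for the three classes (normal, dry AMD, wet AMD). The input size, dropout rate, and optimizer are illustrative defaults, not LobeAI's actual configuration.

```python
import tensorflow as tf

# Frozen pretrained backbone plus a small trainable classification head.
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
backbone.trainable = False  # keep ImageNet weights fixed; train only the head

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),  # normal, dry AMD, wet AMD
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```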

Fusing Radiomic Features with Deep Representations for Gestational Age Estimation in Fetal Ultrasound Images

Fangyijie Wang, Yuan Liang, Sourav Bhattacharjee, Abey Campbell, Kathleen M. Curran, Guénolé Silvestre

arXiv preprint · Jun 25, 2025
Accurate gestational age (GA) estimation, ideally through fetal ultrasound measurement, is a crucial aspect of providing excellent antenatal care. However, deriving GA from manual fetal biometric measurements is operator-dependent and time-consuming. Hence, automatic computer-assisted methods are needed in clinical practice. In this paper, we present a novel feature fusion framework to estimate GA using fetal ultrasound images without any measurement information. We adopt a deep learning model to extract deep representations from ultrasound images, and we extract radiomic features to reveal patterns and characteristics of fetal brain growth. To harness the interpretability of radiomics in medical imaging analysis, we estimate GA by fusing the radiomic features and deep representations. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages. Experimental results demonstrate the robustness of our framework across different populations in diverse geographical regions. Our code is publicly available at https://github.com/13204942/RadiomicsImageFusion_FetalUS.
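
A minimal sketch of fused-feature GA regression scored by mean absolute error, assuming pre-extracted radiomic and deep features; the feature dimensions, ridge regressor, and synthetic targets are illustrative, not the paper's framework.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Stand-ins: radiomic features from the fetal-brain ROI and deep
# representations from an image encoder; target is gestational age in days.
radiomic = rng.normal(size=(500, 40))
deep_repr = rng.normal(size=(500, 64))
ga_days = rng.uniform(90, 280, size=500)

# Fuse the two feature sets and score a linear regressor by mean absolute
# error (the paper reports an MAE of 8.0 days with its own framework).
fused = np.hstack([radiomic, deep_repr])
mae = -cross_val_score(Ridge(alpha=1.0), fused, ga_days,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(f"MAE: {mae:.1f} days")
```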

Opportunistic Osteoporosis Diagnosis via Texture-Preserving Self-Supervision, Mixture of Experts and Multi-Task Integration

Jiaxing Huang, Heng Guo, Le Lu, Fan Yang, Minfeng Xu, Ge Yang, Wei Luo

arXiv preprint · Jun 25, 2025
Osteoporosis, characterized by reduced bone mineral density (BMD) and compromised bone microstructure, increases fracture risk in aging populations. While dual-energy X-ray absorptiometry (DXA) is the clinical standard for BMD assessment, its limited accessibility hinders diagnosis in resource-limited regions. Opportunistic computed tomography (CT) analysis has emerged as a promising alternative for osteoporosis diagnosis using existing imaging data. Current approaches, however, face three limitations: (1) underutilization of unlabeled vertebral data, (2) systematic bias from device-specific DXA discrepancies, and (3) insufficient integration of clinical knowledge such as spatial BMD distribution patterns. To address these limitations, we propose a unified deep learning framework with three innovations. First, a self-supervised learning method that uses radiomic representations to leverage unlabeled CT data and preserve bone texture. Second, a Mixture of Experts (MoE) architecture with learned gating mechanisms to enhance cross-device adaptability. Third, a multi-task learning framework integrating osteoporosis diagnosis, BMD regression, and vertebra location prediction. Validated across three clinical sites and an external hospital, our approach demonstrates superior generalizability and accuracy over existing methods for opportunistic osteoporosis screening and diagnosis.
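
A minimal sketch of the second innovation, a Mixture-of-Experts head with a learned gate: per-sample softmax weights route each input across small expert MLPs, which is one way to absorb device-specific differences. Layer sizes and the expert count are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MoEHead(nn.Module):
    """Mixture-of-Experts head with a learned gating network.

    Each expert is a small MLP; the gate produces per-sample softmax weights,
    so samples from different scanners/devices can lean on different experts.
    Dimensions are illustrative, not taken from the paper.
    """
    def __init__(self, in_dim=256, n_experts=4, out_dim=1):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)            # (B, E)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, out)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # (B, out)

head = MoEHead()
print(head(torch.randn(8, 256)).shape)  # torch.Size([8, 1])
```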

Radiomic fingerprints for knee MR images assessment

Yaxi Chen, Simin Ni, Shaheer U. Saeed, Aleksandra Ivanova, Rikin Hargunani, Jie Huang, Chaozong Liu, Yipeng Hu

arXiv preprint · Jun 25, 2025
Accurate interpretation of knee MRI scans relies on expert clinical judgment, often with high variability and limited scalability. Existing radiomic approaches use a fixed set of radiomic features (the signature), selected at the population level and applied uniformly to all patients. While interpretable, these signatures are often too constrained to represent individual pathological variations. As a result, conventional radiomic-based approaches are found to be limited in performance compared with recent end-to-end deep learning (DL) alternatives that forgo interpretable radiomic features. We argue that the individual-agnostic nature of current radiomic selection is not central to its interpretability, but is responsible for the poor generalization in our application. Here, we propose a novel radiomic fingerprint framework, in which a radiomic feature set (the fingerprint) is dynamically constructed for each patient, selected by a DL model. Unlike existing radiomic signatures, our fingerprints are derived on a per-patient basis by predicting feature relevance within a large radiomic feature pool and selecting only those features that are predictive of clinical conditions for the individual patient. The radiomic-selecting model is trained simultaneously with a low-dimensional (and thus relatively explainable) logistic regression for downstream classification. We validate our method across multiple diagnostic tasks, including general knee abnormalities, anterior cruciate ligament (ACL) tears, and meniscus tears, demonstrating comparable or superior diagnostic accuracy relative to state-of-the-art end-to-end DL models. More importantly, we show that the interpretability inherent in our approach facilitates meaningful clinical insights and potential biomarker discovery, with detailed discussion and quantitative and qualitative analysis of real-world clinical cases to evidence these advantages.
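
A minimal sketch of the fingerprint idea as described: a small network predicts per-patient relevance scores over a radiomic feature pool, the features are gated by those scores, and a linear (logistic-regression-style) classifier makes the prediction. The dimensions and sigmoid gating are assumptions about the general idea, not the authors' code.

```python
import torch
import torch.nn as nn

class FingerprintModel(nn.Module):
    """Per-patient radiomic feature selection via a learned relevance gate.

    A relevance network maps an image embedding to one score per radiomic
    feature; features are gated by these scores and classified by a single
    linear layer, mirroring a logistic-regression head.
    """
    def __init__(self, embed_dim=128, n_radiomic=200, n_classes=2):
        super().__init__()
        self.relevance = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, n_radiomic)
        )
        self.classifier = nn.Linear(n_radiomic, n_classes)

    def forward(self, image_embedding, radiomic_feats):
        gate = torch.sigmoid(self.relevance(image_embedding))  # per-patient relevance
        return self.classifier(gate * radiomic_feats)          # gated features -> logits

model = FingerprintModel()
logits = model(torch.randn(4, 128), torch.randn(4, 200))
print(logits.shape)  # torch.Size([4, 2])
```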

MS-IQA: A Multi-Scale Feature Fusion Network for PET/CT Image Quality Assessment

Siqiao Li, Chen Hui, Wei Zhang, Rui Liang, Chenyue Song, Feng Jiang, Haiqi Zhu, Zhixuan Li, Hong Huang, Xiang Li

arXiv preprint · Jun 25, 2025
Positron Emission Tomography / Computed Tomography (PET/CT) plays a critical role in medical imaging, combining functional and anatomical information to aid accurate diagnosis. However, image quality degradation due to noise, compression, and other factors can lead to diagnostic uncertainty and increase the risk of misdiagnosis. When evaluating the quality of a PET/CT image, both low-level features, such as distortions, and high-level features, such as organ anatomical structures, affect the diagnostic value of the image. However, existing medical image quality assessment (IQA) methods are unable to account for both feature types simultaneously. In this work, we propose MS-IQA, a novel multi-scale feature fusion network for PET/CT IQA, which utilizes multi-scale features from various intermediate layers of ResNet and Swin Transformer, enhancing its ability to perceive both local and global information. In addition, a multi-scale feature fusion module is introduced to effectively combine high-level and low-level information through a dynamically weighted channel attention mechanism. Finally, to fill the gap in PET/CT IQA datasets, we construct PET-CT-IQA-DS, a dataset containing 2,700 varying-quality PET/CT images with quality scores assigned by radiologists. Experiments on our dataset and the publicly available LDCTIQAC2023 dataset demonstrate that our proposed model achieves superior performance over existing state-of-the-art methods across various IQA metrics. This work provides an accurate and efficient IQA method for PET/CT. Our code and dataset are available at https://github.com/MS-IQA/MS-IQA/.
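
A minimal sketch of dynamically weighted channel attention for fusing two feature maps, in squeeze-and-excitation style: global-average-pool the combined map, predict per-channel weights, and reweight. Channel counts are illustrative; the paper's exact fusion module may differ.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Fuse low-level and high-level feature maps with channel attention."""
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, low, high):
        fused = low + high                       # combine the two scales
        w = self.fc(fused.mean(dim=(2, 3)))      # squeeze: global average pool
        return fused * w[:, :, None, None]       # excite: per-channel reweighting

fusion = ChannelAttentionFusion()
out = fusion(torch.randn(2, 256, 14, 14), torch.randn(2, 256, 14, 14))
print(out.shape)  # torch.Size([2, 256, 14, 14])
```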

[Analysis of invention patents in the field of artificial intelligence medical devices].

Zhang T, Chen J, Lu Y, Xu D, Yan S, Ouyang Z

PubMed · Jun 25, 2025
The emergence of new-generation artificial intelligence (AI) technology has brought numerous innovations to the healthcare field, including telemedicine and intelligent care. However, the AI medical device sector still faces significant challenges, such as data privacy protection and algorithm reliability. This study, based on invention patent analysis, revealed the technological innovation trends in the field of AI medical devices in terms of patent application time trends, hot topics, regional distribution, and key innovators. The results showed that global invention patent applications had remained active, with technological innovations primarily focused on medical image processing, physiological signal processing, surgical robots, brain-computer interfaces, and intelligent physiological parameter monitoring technologies. The United States and China led the world in the number of invention patent applications. Major international medical device giants, such as Philips, Siemens, General Electric, and Medtronic, were at the forefront of global technological innovation, with significant advantages in patent application volumes and international market presence. Chinese universities and research institutes, such as Zhejiang University, Tianjin University, and the Shenzhen Institute of Advanced Technology, had demonstrated notable technological innovation, with relatively high numbers of patent applications; however, their overseas market expansion remained limited. This study provides a comprehensive overview of technological innovation trends in the AI medical device field and offers valuable information support for industry development from an informatics perspective.

[Analysis of the global competitive landscape in artificial intelligence medical device research].

Chen J, Pan L, Long J, Yang N, Liu F, Lu Y, Ouyang Z

PubMed · Jun 25, 2025
The objective of this study is to map the global scientific competitive landscape in the field of artificial intelligence (AI) medical devices using scientific data. A bibliometric analysis was conducted using the Web of Science Core Collection to examine global research trends in AI-based medical devices. As of the end of 2023, a total of 55,147 relevant publications had been identified worldwide, with 76.6% published between 2018 and 2024. Research in this field has primarily focused on AI-assisted analysis of medical images and physiological signals. At the national level, China (17,991 publications) and the United States (14,032 publications) lead in output. China has shown a rapid increase in publication volume, with its 2023 output exceeding twice that of the U.S.; however, the U.S. maintains a higher average citation count per paper (China: 16.29; U.S.: 35.99). At the institutional level, seven Chinese institutions and three U.S. institutions rank among the global top ten by publication volume. At the researcher level, prominent contributors include Acharya U Rajendra, Rueckert Daniel, and Tian Jie, who have extensively explored AI-assisted medical imaging. Some researchers have specialized in specific imaging applications, such as Yang Xiaofeng (AI-assisted precision radiotherapy for tumors) and Shen Dinggang (brain imaging analysis), while others, including Gao Xiaorong and Ming Dong, focus on AI-assisted physiological signal analysis. The results confirm the rapid global development of AI in the medical device field, with "AI + imaging" emerging as the most mature direction. China and the U.S. maintain clear leadership in this area: China slightly leads in publication volume, while the U.S., having started earlier, demonstrates higher research quality. Both countries host a large number of active research teams in this domain.