High-performance Open-source AI for Breast Cancer Detection and Localization in MRI.

Hirsch L, Sutton EJ, Huang Y, Kayis B, Hughes M, Martinez D, Makse HA, Parra LC

PubMed · Jun 25, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI. Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date. Data included all breast MRIs conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRIs from the primary site (<i>n</i> = 6,615 breasts). Generalizability was assessed by evaluating model performance on axial data from the primary site (<i>n</i> = 7,058 breasts) and a second clinical site (<i>n</i> = 1,840 breasts). Results The primary site dataset included 30,672 sagittal MRI examinations (52,598 breasts) from 9,986 female patients (mean [SD] age, 53 [11] years). The model achieved an area under the receiver operating characteristic curve (AUC) of 0.95 for detecting cancer in the primary site. At 90% specificity (5717/6353), model sensitivity was 83% (217/262), which was comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an AUC of 0.92 on data from the same clinical site and 0.92 on data from a secondary site. The model accurately located the tumor in 88.5% (232/262) of sagittal images, 92.8% (272/293) of axial images from the primary site, and 87.7% (807/920) of secondary site axial images. Conclusion The model demonstrated state-of-the-art performance on breast cancer detection. Code and weights are openly available to stimulate further development and validation. ©RSNA, 2025.

[Advances in low-dose cone-beam computed tomography image reconstruction methods based on deep learning].

Shi J, Song Y, Li G, Bai S

PubMed · Jun 25, 2025
Cone-beam computed tomography (CBCT) is widely used in dentistry, surgery, radiotherapy and other medical fields. However, repeated CBCT scans expose patients to additional radiation doses, increasing the risk of secondary malignant tumors. Low-dose CBCT image reconstruction technology, which employs advanced algorithms to reduce radiation dose while enhancing image quality, has emerged as a focal point of recent research. This review systematically examined deep learning-based methods for low-dose CBCT reconstruction. It compared different network architectures in terms of noise reduction, artifact removal, detail preservation, and computational efficiency, covering three approaches: image-domain, projection-domain, and dual-domain techniques. The review also explored how emerging technologies like multimodal fusion and self-supervised learning could enhance these methods. By summarizing the strengths and weaknesses of current approaches, this work provides insights to optimize low-dose CBCT algorithms and support their clinical adoption.
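Illustration: of the three approaches the review covers, image-domain methods post-process the reconstructed slices directly. A minimal PyTorch sketch of a residual denoising CNN in that spirit; the architecture and shapes are illustrative, not drawn from any specific paper in the review:

```python
# A small residual CNN that predicts the noise in a low-dose CBCT slice
# and subtracts it (residual learning), operating purely in the image domain.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels: int = 64, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # subtract the predicted noise

model = ResidualDenoiser()
low_dose_slice = torch.randn(1, 1, 256, 256)  # toy noisy input
print(model(low_dose_slice).shape)            # torch.Size([1, 1, 256, 256])
```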

[Thyroid nodule segmentation method integrating receptance weighted key-value architecture and spherical geometric features].

Zhu L, Wei G

PubMed · Jun 25, 2025
The Transformer's high computational complexity hampers the segmentation of thyroid nodules in ultrasound, and traditional image sampling techniques lose image detail or omit key spatial information when handling high-resolution, complex-texture, or uneven-density two-dimensional ultrasound images. To address these problems, this paper proposes a thyroid nodule segmentation method that integrates the receptance weighted key-value (RWKV) architecture with spherical geometry feature (SGF) sampling. The method captures the details of adjacent regions through two-dimensional offset prediction and pixel-level adjustment of sampling positions, achieving precise segmentation. Additionally, the study introduces a patch attention module (PAM) that optimizes the decoder feature map with a regional cross-attention mechanism, enabling it to focus more precisely on the high-resolution features of the encoder. Experiments on the thyroid nodule segmentation dataset (TN3K) and the digital database for thyroid images (DDTI) show that the proposed method achieves Dice similarity coefficients (DSC) of 87.24% and 80.79%, respectively, outperforming existing models while maintaining lower computational complexity. This approach may provide an efficient solution for the precise segmentation of thyroid nodules.
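Illustration: the Dice similarity coefficient (DSC) used to score the segmentations above is the overlap between predicted and reference masks, normalized by their combined size. A minimal sketch with hypothetical binary masks:

```python
# DSC = 2|P ∩ T| / (|P| + |T|); 1.0 means perfect overlap, 0.0 none.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1     # toy prediction
target = np.zeros((64, 64)); target[15:45, 15:45] = 1  # toy reference
print(f"DSC = {dice(pred, target):.4f}")
```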

[Analysis of invention patents in the field of artificial intelligence medical devices].

Zhang T, Chen J, Lu Y, Xu D, Yan S, Ouyang Z

PubMed · Jun 25, 2025
The emergence of new-generation artificial intelligence technology has brought numerous innovations to the healthcare field, including telemedicine and intelligent care. However, the artificial intelligence medical device sector still faces significant challenges, such as data privacy protection and algorithm reliability. Based on an analysis of invention patents, this study revealed the technological innovation trends in the field of artificial intelligence medical devices in terms of patent application time trends, hot topics, regional distribution, and innovation players. The results showed that global invention patent applications remained active, with technological innovations primarily focused on medical image processing, physiological signal processing, surgical robots, brain-computer interfaces, and intelligent physiological parameter monitoring. The United States and China led the world in the number of invention patent applications. Major international medical device companies, such as Philips, Siemens, General Electric, and Medtronic, were at the forefront of global technological innovation, with significant advantages in patent application volume and international market presence. Chinese universities and research institutes, such as Zhejiang University, Tianjin University, and the Shenzhen Institute of Advanced Technology, demonstrated notable technological innovation, with relatively high numbers of patent applications; however, their overseas market expansion remained limited. This study provides a comprehensive overview of technological innovation trends in the artificial intelligence medical device field and offers valuable information support for industry development from an informatics perspective.

[Analysis of the global competitive landscape in artificial intelligence medical device research].

Chen J, Pan L, Long J, Yang N, Liu F, Lu Y, Ouyang Z

PubMed · Jun 25, 2025
The objective of this study is to map the global scientific competitive landscape in the field of artificial intelligence (AI) medical devices using scientific data. A bibliometric analysis was conducted using the Web of Science Core Collection to examine global research trends in AI-based medical devices. As of the end of 2023, a total of 55,147 relevant publications had been identified worldwide, with 76.6% published between 2018 and 2024. Research in this field has primarily focused on AI-assisted analysis of medical images and physiological signals. At the national level, China (17,991 publications) and the United States (14,032 publications) lead in output. China has shown a rapid increase in publication volume, with its 2023 output exceeding twice that of the U.S.; however, the U.S. maintains a higher average citation count per paper (China: 16.29; U.S.: 35.99). At the institutional level, seven Chinese institutions and three U.S. institutions rank among the global top ten by publication volume. At the researcher level, prominent contributors include Acharya U Rajendra, Rueckert Daniel, and Tian Jie, who have extensively explored AI-assisted medical imaging. Some researchers have specialized in specific imaging applications, such as Yang Xiaofeng (AI-assisted precision radiotherapy for tumors) and Shen Dinggang (brain imaging analysis). Others, including Gao Xiaorong and Ming Dong, focus on AI-assisted physiological signal analysis. The results confirm the rapid global development of AI in the medical device field, with "AI + imaging" emerging as the most mature direction. China and the U.S. maintain absolute leadership in this area: China slightly leads in publication volume, while the U.S., having started earlier, demonstrates higher research quality. Both countries host a large number of active research teams in this domain.

Streamlining the annotation process by radiologists of volumetric medical images with few-shot learning.

Ryabtsev A, Lederman R, Sosna J, Joskowicz L

PubMed · Jun 25, 2025
Radiologists' manual annotations limit robust deep learning in volumetric medical imaging. While supervised methods excel with large annotated datasets, few-shot learning performs well for large structures but struggles with small ones, such as lesions. This paper describes a novel method that combines the advantages of few-shot learning models and fully supervised models while reducing the cost of manual annotation. Our method takes as input a small dataset of labeled scans and a large dataset of unlabeled scans, and outputs a validated labeled dataset used to train a supervised model (nnU-Net). The estimated correction effort is reduced by having the radiologist correct a subset of the scan labels computed by a few-shot learning model (UniverSeg). The method uses an optimized support set of scan slice patches and prioritizes the resulting labeled scans that require the least correction. This process is repeated for the remaining unannotated scans until satisfactory performance is obtained. We validated our method on liver, lung, and brain lesions in CT and MRI scans (375 scans, 5,933 lesions). Compared with manual annotation from scratch, it significantly reduces the estimated lesion detection correction effort: 34% for missed lesions and 387% for wrongly identified lesions, with 130% fewer lesion contour corrections and 424% fewer pixels to correct within the lesion contours. Our method effectively reduces the radiologist's effort in annotating small structures, producing high-quality annotated datasets sufficient to train deep learning models. The method is generic and can be applied to a variety of lesions in various organs imaged by different modalities.
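Illustration: the iterative loop described above can be sketched as follows. Every helper here is a trivial stand-in (random effort scores, identity "corrections"); a real system would call UniverSeg, a radiologist workstation, and nnU-Net at the marked steps:

```python
# Runnable skeleton of the iterative few-shot annotation loop.
import random

def run_fewshot_model(support, scan):      # stand-in for UniverSeg inference
    return {"scan": scan, "label": "auto"}

def estimated_effort(proposal):            # stand-in for the effort estimate
    return random.random()

def radiologist_correct(proposal):         # stand-in for manual correction
    return {**proposal, "label": "validated"}

def train_and_score(validated):            # stand-in for nnU-Net train + eval
    return min(1.0, 0.5 + 0.05 * len(validated))

def annotate(labeled, unlabeled, batch_size=4, target=0.9):
    validated = list(labeled)
    while unlabeled:
        # 1. Few-shot model proposes labels using the validated set as support.
        proposals = [run_fewshot_model(validated, s) for s in unlabeled]
        # 2. Prioritize scans whose labels need the least correction.
        proposals.sort(key=estimated_effort)
        batch, rest = proposals[:batch_size], proposals[batch_size:]
        # 3. Radiologist corrects only this subset.
        validated += [radiologist_correct(p) for p in batch]
        unlabeled = [p["scan"] for p in rest]
        # 4. Train the supervised model; stop once performance is satisfactory.
        if train_and_score(validated) >= target:
            break
    return validated

print(len(annotate(labeled=[{"label": "manual"}], unlabeled=list(range(20)))))
```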

Alterations in the functional MRI-based temporal brain organisation in individuals with obesity.

Lee S, Namgung JY, Han JH, Park BY

PubMed · Jun 25, 2025
Obesity is associated with functional alterations in the brain. Although changes in the spatial organisation of the brains of individuals with obesity have been widely studied, the temporal dynamics of their brains remain poorly understood. In this study, we therefore investigated variations in the intrinsic neural timescale (INT) across different degrees of obesity using resting-state functional and diffusion magnetic resonance imaging data from the enhanced Nathan Kline Institute Rockland Sample database. We examined the relationship between the INT and obesity phenotypes using supervised machine learning, controlling for age and sex. To further explore the structure-function characteristics of the implicated regions, we assessed modular network properties by analysing the participation coefficients and within-module degree derived from the structure-function coupling matrices. Finally, the INT values of the identified regions were used to predict eating behaviour traits. A significant negative correlation between the INT and obesity phenotypes was observed, particularly in the default mode, limbic, and reward networks. We also found a negative association between the INT and the participation coefficients, suggesting that shorter INT values in higher-order association areas are related to reduced network integration. Moreover, the INT values of the identified regions moderately predicted eating behaviours, underscoring the potential of the INT as a candidate marker for obesity and eating behaviours. These findings provide insight into the temporal organisation of neural activity in obesity, highlighting the role of specific brain networks in shaping behavioural outcomes.
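Illustration: one common estimator of the intrinsic neural timescale (INT) is the area under the autocorrelation function of a regional BOLD time series up to its first zero-crossing. A minimal sketch with a toy AR(1) signal and an assumed TR; the study's exact estimator may differ:

```python
# INT ≈ TR × sum of the normalized autocorrelation up to its first zero-crossing.
import numpy as np

def intrinsic_timescale(ts: np.ndarray, tr: float = 2.0) -> float:
    ts = ts - ts.mean()
    acf = np.correlate(ts, ts, mode="full")[len(ts) - 1:]  # lags 0..n-1
    acf = acf / acf[0]                                     # acf[0] == 1
    below = np.where(acf < 0)[0]
    cutoff = below[0] if below.size else len(acf)
    return acf[:cutoff].sum() * tr                         # area in seconds

rng = np.random.default_rng(1)
phi, ts = 0.6, np.zeros(300)   # AR(1): larger phi -> longer timescale
for t in range(1, 300):
    ts[t] = phi * ts[t - 1] + rng.standard_normal()
print(f"INT ~ {intrinsic_timescale(ts):.2f} s")
```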

The evaluation of artificial intelligence in mammography-based breast cancer screening: Is breast-level analysis enough?

Taib AG, Partridge GJW, Yao L, Darker I, Chen Y

PubMed · Jun 25, 2025
To assess whether the diagnostic performance of a commercial artificial intelligence (AI) algorithm for mammography differs between breast-level and lesion-level interpretations, and to compare its performance with that of a large population of specialised human readers. We retrospectively analysed 1,200 mammograms from the NHS breast cancer screening programme using a commercial AI algorithm and assessments from 1,258 trained human readers from the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance programme. For breasts containing pathologically confirmed malignancies, both breast-level and lesion-level analyses were performed; the latter considered the locations of the regions of interest marked by AI and humans, recording the highest score per lesion. For non-malignant breasts, a breast-level analysis recorded the highest score per breast. Area under the curve (AUC), sensitivity, and specificity were calculated at the developer's recommended recall threshold. The study was designed to detect a medium-sized effect (odds ratio 3.5 or 0.29) for sensitivity. The test set contained 882 non-malignant breasts (73%) and 318 malignant breasts (27%), with 328 cancer lesions. The AI AUC was 0.942 at the breast level and 0.929 at the lesion level (difference -0.013, p < 0.01). The mean human AUC was 0.878 at the breast level and 0.851 at the lesion level (difference -0.027, p < 0.01). By AUC, AI outperformed human readers at both the breast and lesion levels (both p < 0.01). AI's diagnostic performance significantly decreased at the lesion level, indicating reduced accuracy in localising malignancies; its overall performance nevertheless exceeded that of human readers. Question AI often recalls mammography cases not recalled by humans; to understand why, the regions of interest it marks as cancerous must be considered. Findings Evaluations of AI typically occur at the breast level, but performance decreases when AI is evaluated at the lesion level; the same holds for humans. Clinical relevance To improve human-AI collaboration, AI should be assessed at the lesion level; poor accuracy here may lead to automation bias and unnecessary patient procedures.
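Illustration: the two scoring levels compared above differ only in how AI marks are aggregated, keeping the highest score per lesion versus per breast. A minimal sketch with hypothetical marks:

```python
# Aggregate (breast_id, lesion_id, ai_score) marks at two levels.
from collections import defaultdict

marks = [("b1", "l1", 0.91), ("b1", "l2", 0.40), ("b2", "l3", 0.75),
         ("b2", "l3", 0.82)]  # two marks on the same lesion

lesion_scores, breast_scores = defaultdict(float), defaultdict(float)
for breast, lesion, score in marks:
    # Lesion level: highest score among marks matched to each lesion.
    lesion_scores[(breast, lesion)] = max(lesion_scores[(breast, lesion)], score)
    # Breast level: highest score anywhere in the breast.
    breast_scores[breast] = max(breast_scores[breast], score)

print(dict(lesion_scores))  # per-lesion maxima
print(dict(breast_scores))  # per-breast maxima
```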

Assessment of Robustness of MRI Radiomic Features in the Abdomen: Impact of Deep Learning Reconstruction and Accelerated Acquisition.

Zhong J, Xing Y, Hu Y, Liu X, Dai S, Ding D, Lu J, Yang J, Song Y, Lu M, Nickel D, Lu W, Zhang H, Yao W

PubMed · Jun 25, 2025
The objective of this study is to investigate the impact of deep learning reconstruction and accelerated acquisition on the reproducibility and variability of radiomic features in abdominal MRI. Seventeen volunteers were prospectively included and underwent abdominal MRI on a 3-T scanner with axial T2-weighted, axial T2-weighted fat-suppressed, and coronal T2-weighted sequences. Each sequence was acquired four times: clinical reference acquisition with standard reconstruction, clinical reference acquisition with deep learning reconstruction, accelerated acquisition with standard reconstruction, and accelerated acquisition with deep learning reconstruction. Regions of interest were drawn for ten anatomical sites with rigid registration. Ninety-three radiomic features were extracted via PyRadiomics after z-score normalization. Reproducibility was evaluated against the clinical reference acquisition with standard reconstruction using the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). Variability among the four scans was assessed using the coefficient of variation (CV) and the quartile coefficient of dispersion (QCD). The median (first, third quartile) overall ICC and CCC values were 0.451 (0.305, 0.583) and 0.450 (0.304, 0.582), respectively. The overall percentage of radiomic features reaching acceptable reproducibility (ICC > 0.90 and CCC > 0.90) was 8.1% for both metrics. The median (first, third quartile) overall CV and QCD values were 9.4% (4.9%, 17.2%) and 4.9% (2.5%, 9.7%), respectively. The overall percentage of radiomic features with acceptable variability (CV < 10% and QCD < 10%) was 51.9% and 75.0%, respectively. Irrespective of clinical significance, deep learning reconstruction and accelerated acquisition led to poor reproducibility of radiomic features, although more than half of the features varied within an acceptable range.
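Illustration: two of the metrics used above, Lin's concordance correlation coefficient (CCC) and the quartile coefficient of dispersion (QCD), are straightforward to compute. A minimal sketch with toy feature values, not the study's data:

```python
# CCC measures agreement between paired measurements; QCD measures
# relative spread across repeats via the interquartile range.
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def qcd(values: np.ndarray) -> float:
    q1, q3 = np.percentile(values, [25, 75])
    return (q3 - q1) / (q3 + q1)

rng = np.random.default_rng(2)
reference = rng.normal(10, 2, size=17)         # one feature, 17 volunteers
repeat = reference + rng.normal(0, 0.5, 17)    # repeat acquisition
print(f"CCC = {ccc(reference, repeat):.3f}")
four_scans = rng.normal(10, 1, size=4)         # one feature across four scans
print(f"QCD = {qcd(four_scans):.3f}")
```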

Few-Shot Learning for Prostate Cancer Detection on MRI: Comparative Analysis with Radiologists' Performance.

Yamagishi Y, Baba Y, Suzuki J, Okada Y, Kanao K, Oyama M

PubMed · Jun 25, 2025
Deep-learning models for prostate cancer detection typically require large datasets, limiting clinical applicability across institutions because of domain shift. This study aimed to develop a few-shot learning deep-learning model for prostate cancer detection on multiparametric MRI that requires minimal training data, and to compare its diagnostic performance with that of experienced radiologists. In this retrospective study, we used 99 biopsy-confirmed cases (80 positive, 19 negative for prostate cancer; 2017-2022), with 20 cases for training, 5 for validation, and 74 for testing. A 2D transformer model was trained on T2-weighted, diffusion-weighted, and apparent diffusion coefficient map images. Model predictions were compared with those of two radiologists using the Matthews correlation coefficient (MCC) and F1 score, with 95% confidence intervals (CIs) calculated via the bootstrap method. The model achieved an MCC of 0.297 (95% CI: 0.095-0.474) and an F1 score of 0.707 (95% CI: 0.598-0.847). Radiologist 1 had an MCC of 0.276 (95% CI: 0.054-0.484) and an F1 score of 0.741; Radiologist 2 had an MCC of 0.504 (95% CI: 0.289-0.703) and an F1 score of 0.871, indicating that the model's performance was comparable to that of Radiologist 1. External validation on the Prostate158 dataset revealed that ImageNet pretraining substantially improved model performance, increasing study-level ROC-AUC from 0.464 to 0.636 and study-level PR-AUC from 0.637 to 0.773 across all architectures. Our findings demonstrate that few-shot deep-learning models can achieve clinically relevant performance with pretrained transformer architectures, offering a promising approach to the domain shift challenge across institutions.
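Illustration: the bootstrap 95% CIs reported above resample test cases with replacement and recompute the metric. A minimal sketch for the Matthews correlation coefficient, with toy labels standing in for the 74-case test set:

```python
# Percentile bootstrap CI for MCC over resampled test cases.
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=74)
y_pred = np.where(rng.random(74) < 0.8, y_true, 1 - y_true)  # ~80% agreement

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
    boot.append(matthews_corrcoef(y_true[idx], y_pred[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MCC = {matthews_corrcoef(y_true, y_pred):.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```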