
Federated Learning in radiomics: A comprehensive meta-survey on medical image analysis.

Raza A, Guzzo A, Ianni M, Lappano R, Zanolini A, Maggiolini M, Fortino G

PubMed · Jul 1 2025
Federated Learning (FL) has emerged as a promising approach for collaborative medical image analysis while preserving data privacy, making it particularly suitable for radiomics tasks. This paper presents a systematic meta-analysis of recent surveys on Federated Learning in Medical Imaging (FL-MI), published in reputable venues over the past five years. We adopt the PRISMA methodology, categorizing and analyzing the existing body of research in FL-MI. Our analysis identifies common trends, challenges, and emerging strategies for implementing FL in medical imaging, including handling data heterogeneity, privacy concerns, and model performance in non-IID settings. The paper also highlights the most widely used datasets and a comparison of adopted machine learning models. Moreover, we examine FL frameworks in FL-MI applications, such as tumor detection, organ segmentation, and disease classification. We identify several research gaps, including the need for more robust privacy protection. Our findings provide a comprehensive overview of the current state of FL-MI and offer valuable directions for future research and development in this rapidly evolving field.
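
As background for the aggregation mechanics such surveys analyze, here is a minimal NumPy sketch of FedAvg-style weighted parameter averaging; the client models and sample counts are illustrative assumptions, not drawn from any surveyed system.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_weights: list of dicts mapping layer name -> np.ndarray
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in client_weights[0]
    }

# Illustrative usage: two clients holding a single-layer model.
clients = [{"fc.weight": np.ones((2, 2))}, {"fc.weight": np.zeros((2, 2))}]
sizes = [300, 100]  # non-IID clients often contribute unequal sample counts
global_weights = fedavg(clients, sizes)  # -> 0.75 everywhere
```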

Worldwide research trends on artificial intelligence in head and neck cancer: a bibliometric analysis.

Silvestre-Barbosa Y, Castro VT, Di Carvalho Melo L, Reis PED, Leite AF, Ferreira EB, Guerra ENS

PubMed · Jul 1 2025
This bibliometric analysis aims to explore scientific data on Artificial Intelligence (AI) and Head and Neck Cancer (HNC). AI-related HNC articles from the Web of Science Core Collection were searched. VOSviewer and Biblioshiny/Bibliometrix for RStudio were used for data synthesis. The analysis covered key characteristics such as sources, authors, affiliations, countries, citations, top-cited articles, keyword analysis, and trending topics. A total of 1,019 papers from 1995 to 2024 were included. Among them, 71.6% were original research articles, 7.6% were reviews, and 20.8% took other forms. The fifty most cited documents highlighted radiology as the most explored specialty, with an emphasis on deep learning models for segmentation. Publications have been increasing, with an annual growth rate of 94.4% after 2016. Among the 20 most productive countries, 14 are high-income economies. Keyword citation analysis revealed two main clusters: radiomics and radiotherapy. The most frequent keywords included machine learning, deep learning, artificial intelligence, and head and neck cancer, with recent emphasis on diagnosis, survival prediction, and histopathology. The use of AI in HNC research has increased since 2016, and the analysis indicated a notable disparity in publication quantity between high-income and low/middle-income countries. Future research should prioritize clinical validation and standardization to facilitate the integration of AI in HNC management, particularly in underrepresented regions.

[A deep learning method for differentiating nasopharyngeal carcinoma and lymphoma based on MRI].

Tang Y, Hua H, Wang Y, Tao Z

PubMed · Jul 1 2025
Objective: To develop a deep learning (DL) model based on conventional MRI for automatic segmentation and differential diagnosis of nasopharyngeal carcinoma (NPC) and nasopharyngeal lymphoma (NPL). Methods: This retrospective study included 142 patients with NPL and 292 patients with NPC who underwent conventional MRI at Renmin Hospital of Wuhan University from June 2012 to February 2023. MRI from 80 patients were manually segmented to train the segmentation model. The automatically segmented regions of interest (ROIs) formed four datasets: T1-weighted images (T1WI), T2-weighted images (T2WI), T1-weighted contrast-enhanced images (T1CE), and a combination of T1WI and T2WI. The ImageNet-pretrained ResNet101 model was fine-tuned for the classification task. Statistical analysis was conducted using SPSS 22.0. The Dice coefficient was used to evaluate the performance of the segmentation task. Diagnostic performance was assessed using receiver operating characteristic (ROC) curves. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the model's decisions. Results: The Dice score of the segmentation model reached 0.876 in the testing set. The AUC values of the classification models in the testing set were as follows: T1WI, 0.78 (95% CI 0.67-0.81); T2WI, 0.75 (95% CI 0.72-0.86); T1CE, 0.84 (95% CI 0.76-0.87); and T1WI+T2WI, 0.93 (95% CI 0.85-0.94). The AUC values for the two clinicians were 0.77 (95% CI 0.72-0.82) for the junior and 0.84 (95% CI 0.80-0.89) for the senior. Grad-CAM analysis revealed that the central region of the tumor was highly correlated with the model's classification decisions, while the correlation was lower in the peripheral regions. Conclusion: The deep learning model performed well in differentiating NPC from NPL on conventional MRI, and the T1WI+T2WI combination model exhibited the best performance. The model can assist in the early diagnosis of NPC and NPL, facilitating timely and standardized treatment, which may improve patient prognosis.
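
As a rough illustration of the transfer-learning step described above, the following PyTorch sketch adapts an ImageNet-pretrained ResNet101 to a two-class NPC-vs-NPL head (assumes torchvision ≥ 0.13; the learning rate and input handling are illustrative assumptions, not the authors' settings).

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet101 and replace the classifier head
# with a two-class output for the NPC-vs-NPL task.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative lr

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of ROI crops, shape (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```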

The Evolution of Radiology Image Annotation in the Era of Large Language Models.

Flanders AE, Wang X, Wu CC, Kitamura FC, Shih G, Mongan J, Peng Y

PubMed · Jul 1 2025
Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming. Repurposing annotations and consolidating similar types of annotations from disparate sources has never been practical. Until recently, the use of natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models have made it possible to generate accurate and normalized labels at scale, using only clinical reports and specific prompt engineering. The combination of automatically generated labels extracted and normalized from reports in conjunction with foundational image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling process of medical images, from the traditional manual methods to the newest semiautomated methods that provide a more scalable solution for creating useful models more efficiently. Keywords: Feature Detection, Diagnosis, Semi-supervised Learning © RSNA, 2025.
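
As a hedged illustration of the prompt-engineering approach the article reviews, this sketch assembles a label-extraction prompt for a radiology report; the label schema, template wording, and example report are invented for illustration, and the call to an actual LLM API is left out.

```python
# Hypothetical label schema; a real project would derive this from its
# clinical use case, as the article describes.
LABEL_SCHEMA = ["pneumothorax", "pleural_effusion", "consolidation"]

PROMPT_TEMPLATE = """You are a radiology report labeler.
For each finding below, answer exactly "present", "absent", or "uncertain".
Findings: {findings}
Report:
\"\"\"{report}\"\"\"
Return one JSON object mapping each finding to its answer."""

def build_prompt(report_text: str) -> str:
    """Fill the template so every report yields normalized, parseable labels."""
    return PROMPT_TEMPLATE.format(
        findings=", ".join(LABEL_SCHEMA), report=report_text
    )

example_report = "Small left pleural effusion. No pneumothorax."
print(build_prompt(example_report))  # send to any LLM client of choice
```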

CT-Based Machine Learning Radiomics Analysis to Diagnose Dysthyroid Optic Neuropathy.

Ma L, Jiang X, Yang X, Wang M, Hou Z, Zhang J, Li D

PubMed · Jul 1 2025
To develop CT-based machine learning radiomics models for the diagnosis of dysthyroid optic neuropathy (DON). This retrospective study included 57 patients (114 orbits) diagnosed with thyroid-associated ophthalmopathy (TAO) at Beijing Tongren Hospital between December 2019 and June 2023. CT scans, medical history, examination results, and clinical data of the participants were collected. DON was diagnosed based on clinical manifestations and examinations. The DON orbits and non-DON orbits were then divided into a training set and a test set at a ratio of approximately 7:3. The 3D Slicer software was used to delineate the volumes of interest (VOI). Radiomics features were extracted using PyRadiomics and selected by t-test and the least absolute shrinkage and selection operator (LASSO) regression algorithm with 10-fold cross-validation. Machine learning models, including a random forest (RF) model, a support vector machine (SVM) model, and a logistic regression (LR) model, were built and validated by receiver operating characteristic (ROC) curves, areas under the curve (AUC), and confusion matrix-related data. The net benefit of the models was assessed by decision curve analysis (DCA). We extracted 107 features from the imaging data, representing various image characteristics of the optic nerve and surrounding orbital tissues. Using the LASSO method, we identified the five most informative features. The AUC ranged from 0.77 to 0.80 in the training set, and the AUCs of the RF, SVM, and LR models in the test set were 0.86, 0.80, and 0.83, respectively. The DeLong test showed no significant difference between the three models (RF vs SVM: p = .92; RF vs LR: p = .94; SVM vs LR: p = .98), and the models showed optimal clinical efficacy in DCA. The CT-based machine learning radiomics analysis exhibited excellent ability to diagnose DON and may enhance diagnostic convenience.
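
A minimal scikit-learn sketch of the feature-selection step described above (LASSO with 10-fold cross-validation over standardized radiomics features); the feature matrix and labels below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder: 114 orbits x 107 radiomics features.
rng = np.random.default_rng(0)
X = rng.normal(size=(114, 107))
# Binary DON labels driven by the first three features (illustrative only).
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=114) > 0).astype(float)

# Standardize, then fit LASSO with 10-fold cross-validation as described.
X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=10, random_state=0).fit(X_std, y)

# Features with nonzero coefficients survive the LASSO penalty.
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features retained, e.g.:", selected[:5])
```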

Rethinking boundary detection in deep learning-based medical image segmentation.

Lin Y, Zhang D, Fang X, Chen Y, Cheng KT, Chen H

PubMed · Jul 1 2025
Medical image segmentation is a pivotal task within the realms of medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in terms of segmentation accuracy and strikes a better balance between accuracy and efficiency, without the need for additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder network comprising a mainstream CNN stream for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on seven challenging medical image segmentation datasets, namely ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, BraTS, and BTCV. Our experimental results unequivocally demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The code has been released at: CTO.
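
For readers unfamiliar with explicit edge detection operators in this role, here is a minimal sketch of deriving a binary boundary mask from a segmentation mask with Sobel gradients; it illustrates the general technique, not CTO's exact implementation.

```python
import numpy as np
from scipy import ndimage

def boundary_mask(seg: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Binary boundary mask from a segmentation mask via Sobel gradients.

    seg: 2D array with values in {0, 1} (or soft probabilities).
    """
    gx = ndimage.sobel(seg.astype(float), axis=0)
    gy = ndimage.sobel(seg.astype(float), axis=1)
    grad = np.hypot(gx, gy)          # gradient magnitude peaks at edges
    return (grad > thresh).astype(np.uint8)

# Toy example: a filled square yields a thin boundary ring.
seg = np.zeros((8, 8))
seg[2:6, 2:6] = 1
print(boundary_mask(seg))
```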

MED-NCA: Bio-inspired medical image segmentation.

Kalkhof J, Ihm N, Köhler T, Gregori B, Mukhopadhyay A

PubMed · Jul 1 2025
The reliance on computationally intensive U-Net and Transformer architectures significantly limits their accessibility in low-resource environments, creating a technological divide that hinders global healthcare equity, especially in medical diagnostics and treatment planning. This divide is most pronounced in low- and middle-income countries, primary care facilities, and conflict zones. We introduce MED-NCA, a family of Neural Cellular Automata (NCA) based segmentation models characterized by low parameter counts, robust performance, and inherent quality-control mechanisms. These features drastically lower the barriers to high-quality medical image analysis in resource-constrained settings, allowing the models to run efficiently on hardware as minimal as a Raspberry Pi or a smartphone. Building upon the foundation laid by MED-NCA, this paper extends its validation across eight distinct anatomies, including the hippocampus and prostate (MRI, 3D), liver and spleen (CT, 3D), heart and lung (X-ray, 2D), breast tumor (ultrasound, 2D), and skin lesion (image, 2D). Our comprehensive evaluation demonstrates the broad applicability and effectiveness of MED-NCA in various medical imaging contexts, matching the performance of UNet models two orders of magnitude larger. Additionally, we introduce NCA-VIS, a visualization tool that gives insight into the inference process of MED-NCA and allows users to test its robustness by applying various artifacts. This combination of efficiency, broad applicability, and enhanced interpretability makes MED-NCA a transformative solution for medical image analysis, fostering greater global healthcare equity by making advanced diagnostics accessible in even the most resource-limited environments.
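
For readers unfamiliar with Neural Cellular Automata, this heavily simplified NumPy sketch shows a single NCA update step (perceive the local neighborhood, apply a small learned rule, update cell states residually); the channel count, perception filter, and random weights are illustrative assumptions, not MED-NCA's architecture.

```python
import numpy as np
from scipy import ndimage

C = 16  # cell-state channels (illustrative; one channel could hold the mask)

def nca_step(state, w1, w2):
    """One NCA update: perceive the 3x3 neighborhood, apply a tiny MLP rule.

    state:  (H, W, C) grid of cell states.
    w1, w2: rule weights; learned in a real model, random here.
    """
    # Perception: 3x3 neighborhood mean per channel, concatenated with self.
    neigh = np.stack(
        [ndimage.uniform_filter(state[..., c], size=3) for c in range(C)],
        axis=-1,
    )
    percept = np.concatenate([state, neigh], axis=-1)   # (H, W, 2C)
    hidden = np.maximum(percept @ w1, 0)                # ReLU
    return state + hidden @ w2                          # residual update

rng = np.random.default_rng(0)
state = rng.normal(size=(32, 32, C)) * 0.1
w1 = rng.normal(size=(2 * C, 32)) * 0.1
w2 = rng.normal(size=(32, C)) * 0.1
state = nca_step(state, w1, w2)  # iterated many times during inference
```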

Automatic segmentation of the midfacial bone surface from ultrasound images using deep learning methods.

Yuan M, Jie B, Han R, Wang J, Zhang Y, Li Z, Zhu J, Zhang R, He Y

PubMed · Jul 1 2025
With developments in computer science and technology, great progress has been made in three-dimensional (3D) ultrasound. Recently, ultrasound-based 3D bone modelling has attracted much attention, and its accuracy has been studied for the femur, tibia, and spine. Ultrasound allows bone-surface data to be acquired non-invasively and without radiation. Freehand 3D ultrasound of the bone surface can be roughly divided into two steps: segmentation of the bone surface from two-dimensional (2D) ultrasound images and 3D reconstruction of the bone surface using the segmented images. The aim of this study was to develop an automatic algorithm to segment the midfacial bone surface from 2D ultrasound images based on deep learning methods. Six deep learning networks were trained (nnU-Net, U-Net, ConvNeXt, Mask2Former, SegFormer, and DDRNet). The algorithms' segmentations were compared with the ground truth and evaluated by Dice coefficient (DC), intersection over union (IoU), 95th-percentile Hausdorff distance (HD95), average symmetric surface distance (ASSD), precision, recall, and time. nnU-Net yielded the highest DC of 89.3% ± 13.6% and the lowest ASSD of 0.11 ± 0.40 mm. This study showed that nnU-Net can automatically and effectively segment the midfacial bone surface from 2D ultrasound images.
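
A minimal sketch of the two overlap metrics reported above, the Dice coefficient and IoU, computed for binary masks.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, gt), iou(pred, gt))  # ≈0.667, ≈0.5
```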

Phantom-based evaluation of image quality in Transformer-enhanced 2048-matrix CT imaging at low and ultralow doses.

Li Q, Liu L, Zhang Y, Zhang L, Wang L, Pan Z, Xu M, Zhang S, Xie X

PubMed · Jul 1 2025
To compare the quality of standard 512-matrix, standard 1024-matrix, and Swin2SR-based 2048-matrix phantom images under different scanning protocols. The Catphan 600 phantom was scanned using a multidetector CT scanner under two protocols: 120 kV/100 mA (CT dose index volume = 3.4 mGy) to simulate low-dose CT, and 70 kV/40 mA (0.27 mGy) to simulate ultralow-dose CT. Raw data were reconstructed into standard 512-matrix images using three methods: filtered back projection (FBP), adaptive statistical iterative reconstruction at 40% intensity (ASIR-V), and deep learning image reconstruction at high intensity (DLIR-H). The Swin2SR super-resolution model was used to generate 2048-matrix images (Swin2SR-2048), and the super-resolution convolutional neural network (SRCNN) model was used to generate comparison 2048-matrix images (SRCNN-2048); the quality of the images produced by the two models was compared. Image quality was evaluated with ImQuest software (v7.2.0.0, Duke University) based on line-pair clarity, task-based transfer function (TTF), image noise, and noise power spectrum (NPS). At equivalent radiation doses and with the same reconstruction method, Swin2SR-2048 images resolved more line pairs than both standard-512 and standard-1024 images. Except for the 0.27 mGy/DLIR-H/standard kernel sequence, the TTF-50% of Teflon increased after super-resolution processing. Statistically significant differences in TTF-50% were observed between the standard-512, standard-1024, and Swin2SR-2048 images (all p < 0.05). Swin2SR-2048 images exhibited lower image noise and peak NPS than both standard 512- and 1024-matrix images, with significant differences observed for all three matrix types (all p < 0.05). Swin2SR-2048 images also demonstrated superior quality compared with SRCNN-2048, with significant differences in image noise (p < 0.001), peak NPS (p < 0.05), and TTF-50% for Teflon (p < 0.05). Transformer-enhanced 2048-matrix CT images improve spatial resolution and reduce image noise compared with standard 512- and 1024-matrix images.
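
A hedged NumPy sketch of two of the reported metrics: image noise as the pixel standard deviation in a uniform phantom ROI, and a simplified 2D noise power spectrum from the FFT of the mean-subtracted ROI (ImQuest's exact implementation may differ).

```python
import numpy as np

def image_noise(roi: np.ndarray) -> float:
    """Noise as the pixel standard deviation in a uniform phantom ROI (HU)."""
    return float(roi.std())

def nps_2d(roi: np.ndarray, pixel_mm: float) -> np.ndarray:
    """Simplified 2D noise power spectrum of a mean-subtracted uniform ROI.

    NPS(fx, fy) = |FFT(roi - mean)|^2 * (dx * dy) / (Nx * Ny)
    """
    noise = roi - roi.mean()
    ny, nx = roi.shape
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    return spectrum * (pixel_mm ** 2) / (nx * ny)

# Fake Gaussian noise standing in for a uniform-section ROI.
roi = np.random.default_rng(0).normal(0, 10, size=(128, 128))
print(image_noise(roi), nps_2d(roi, pixel_mm=0.5).max())
```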

ResNet-Transformer deep learning model-aided detection of dens evaginatus.

Wang S, Liu J, Li S, He P, Zhou X, Zhao Z, Zheng L

PubMed · Jul 1 2025
Dens evaginatus is a dental morphological developmental anomaly. Failing to detect it may lead to tubercle fracture and pulpal/periapical disease. Consequently, early detection and intervention for dens evaginatus are important for preserving vital pulp. This study aimed to develop a deep learning model to assist dentists in the early diagnosis of dens evaginatus, thereby supporting early intervention and mitigating the risk of severe consequences. In this study, a deep learning model was developed utilizing panoramic radiograph images sourced from 1410 patients aged 3-16 years, with high-quality annotations to enable the automatic detection of dens evaginatus. Model performance and the model's efficacy in aiding dentists were evaluated. The findings indicated that the deep learning model demonstrated commendable sensitivity (0.8600) and specificity (0.9200), outperforming dentists in detecting dens evaginatus with an F1-score of 0.8866 compared with their average F1-score of 0.8780, indicating that the model could detect dens evaginatus with greater precision. Furthermore, with its support, young dentists heightened their focus on dens evaginatus in tooth germs and achieved improved diagnostic accuracy. Based on these results, the integration of deep learning for dens evaginatus detection holds significance and can augment dentists' proficiency in identifying such anomalies.
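
A small sketch showing how the reported sensitivity, specificity, and F1-score follow from a confusion matrix; the counts below are invented for illustration, chosen only to be consistent with the reported values.

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, precision, and F1 from confusion counts."""
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Invented counts that reproduce sensitivity 0.86, specificity 0.92,
# and F1 ≈ 0.8866 as reported in the abstract.
print(detection_metrics(tp=86, fp=8, tn=92, fn=14))
```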