
Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

pubmed logopapers | Jul 1 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and extended to additional datasets to achieve regional-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with predicted noise from the diffusion model and original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
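A minimal sketch of the fusion step the abstract describes: the segmented regions, the diffusion model's predicted noise, and the original image are combined before feature extraction. Stacking them as channels is an assumption about how the fusion is realized; shapes and data are synthetic, and the EfficientNet-B0/transformer stages are not reproduced.

```python
import numpy as np

def fuse_ddpm_outputs(image, predicted_noise, seg_mask):
    """Stack image, noise estimate, and segmentation mask as channels."""
    assert image.shape == predicted_noise.shape == seg_mask.shape
    return np.stack([image, predicted_noise, seg_mask], axis=0)  # (3, H, W)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                  # stand-in histopathology patch
noise = rng.standard_normal((64, 64))       # stand-in DDPM noise prediction
mask = (img > 0.5).astype(np.float64)       # stand-in tumor-region mask

fused = fuse_ddpm_outputs(img, noise, mask)
print(fused.shape)  # (3, 64, 64)
```

The fused tensor would then be fed to the feature extractor in place of a plain RGB input.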

Deep learning based classification of tibio-femoral knee osteoarthritis from lateral view knee joint X-ray images.

Abdullah SS, Rajasekaran MP, Hossen MJ, Wong WK, Ng PK

pubmed logopapers | Jul 1 2025
This study designs an effective deep learning-driven method to locate and classify the tibio-femoral knee joint space width (JSW) from both anterior-posterior (AP) and lateral view knee joint X-ray images, and evaluates how successfully a deep learning approach can detect and classify tibio-femoral knee joint osteoarthritis from each view. We used 4334 knee X-ray images for this study. We fine-tuned DenseNet201 with transfer learning to extract features for detecting and classifying tibio-femoral knee joint osteoarthritis from both AP-view and lateral-view knee joint X-ray images, and compared the proposed model with several classifiers. The proposed model localizes the tibio-femoral knee JSW with an accuracy of 98.12% (lateral view) and 99.32% (AP view). Classification accuracy is 92.42% for the lateral view and 98.57% for the AP view, demonstrating automatic detection and classification of tibio-femoral knee joint osteoarthritis in both views. This is the first automated deep learning approach to classify tibio-femoral osteoarthritis on both the AP and lateral views. The approach was trained on the femur and tibia bone regions from both AP-view and lateral-view digital X-ray images, and it locates and classifies tibio-femoral knee joint osteoarthritis better than existing approaches.
The proposed approach will help clinicians and medical experts analyze the progression of tibio-femoral knee OA in different views. It performs better in the AP view than in the lateral view, and compared with existing architectures and models, it offers exceptional outcomes with fine-tuning.
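Once the femur and tibia regions are localized, the joint space width can be measured as the vertical gap between them. The sketch below illustrates that measurement on synthetic binary masks; the masks, the column-wise gap rule, and pixel units are assumptions, and the DenseNet201 localization itself is not reproduced.

```python
import numpy as np

def joint_space_width(femur_mask, tibia_mask):
    """Mean column-wise gap (in pixels) between femur bottom and tibia top."""
    widths = []
    for col in range(femur_mask.shape[1]):
        femur_rows = np.where(femur_mask[:, col])[0]
        tibia_rows = np.where(tibia_mask[:, col])[0]
        if femur_rows.size and tibia_rows.size:
            widths.append(tibia_rows.min() - femur_rows.max() - 1)
    return float(np.mean(widths)) if widths else float("nan")

h, w = 100, 40
femur = np.zeros((h, w), dtype=bool)
tibia = np.zeros((h, w), dtype=bool)
femur[:45, :] = True   # femur occupies the top rows
tibia[55:, :] = True   # tibia occupies the bottom rows

print(joint_space_width(femur, tibia))  # 10.0 pixels of joint space
```

A narrowing mean JSW across serial radiographs is one plausible input for grading OA progression.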

An adaptive deep learning approach based on InBNFus and CNNDen-GRU networks for breast cancer and maternal fetal classification using ultrasound images.

Fatima M, Khan MA, Mirza AM, Shin J, Alasiry A, Marzougui M, Cha J, Chang B

pubmed logopapers | Jul 1 2025
Convolutional Neural Networks (CNNs), a sophisticated deep learning technique, have proven highly effective in identifying and classifying abnormalities related to various diseases. Manual classification of such images is a tedious and time-consuming process; therefore, it is essential to develop a computerized technique. Most existing methods are designed to address a single specific problem, limiting their adaptability. In this work, we propose a novel adaptive deep-learning framework for simultaneously classifying breast cancer and maternal-fetal ultrasound datasets. Data augmentation was applied in the preprocessing phase to address the data imbalance problem. Afterward, two novel architectures are proposed: InBNFus and CNNDen-GRU. The InBNFus network combines a 5-block inception-based architecture (Model 1) and a 5-block inverted bottleneck-based architecture (Model 2) through a depth-wise concatenation layer, while CNNDen-GRU incorporates a 5-block dense architecture with an integrated GRU layer. After training, features were extracted from the global average pooling and GRU layers and classified using neural network classifiers. The experimental evaluation achieved enhanced accuracy rates of 99.0% for the breast cancer, 96.6% for the maternal-fetal (common planes), and 94.6% for the maternal-fetal (brain) datasets. Additionally, the models consistently achieve high precision, recall, and F1 scores across all datasets. A comprehensive ablation study was performed, and the results show the superior performance of the proposed models.
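The depth-wise concatenation that fuses the two branches can be sketched as follows. The two branch networks are stubbed as simple functions producing (H, W, C) feature maps so the fusion step itself is concrete; channel counts and the stub computations are assumptions, not the paper's architectures.

```python
import numpy as np

def branch_a(x):
    """Stand-in for the 5-block inception-based branch (Model 1)."""
    return x.mean(axis=-1, keepdims=True).repeat(8, axis=-1)

def branch_b(x):
    """Stand-in for the 5-block inverted-bottleneck branch (Model 2)."""
    return x.max(axis=-1, keepdims=True).repeat(8, axis=-1)

def depthwise_concat_fusion(x):
    """Fuse the two branch feature maps along the channel axis."""
    fa, fb = branch_a(x), branch_b(x)
    return np.concatenate([fa, fb], axis=-1)  # channels stacked: 8 + 8 = 16

x = np.random.default_rng(1).random((16, 16, 3))
fused = depthwise_concat_fusion(x)
print(fused.shape)  # (16, 16, 16)
```

In a real framework this is a single Concatenate layer over the two branch outputs, followed by global average pooling and the classifier head.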

Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods.

Lee SB

pubmed logopapers | Jul 1 2025
Given artificial intelligence's transformative effects, studying safety is important to ensure it is implemented in a beneficial way. Convolutional neural networks are used in radiology research for prediction but can be corrupted through adversarial attacks. This study investigates the effect of an adversarial attack carried out through poisoned data. To improve generalizability, we create a generic ResNet pneumonia classification model and then subject it to BadNets adversarial attacks as an example. The study uses poisoned datasets of different compositions (2%, 16.7%, and 100% ratios of poisoned data) and two different test sets (a normal test set and one containing poisoned images) to study the effects of BadNets. To visualize the progressing corruption of the models, SHapley Additive exPlanations (SHAP) were used. As corruption progressed, interval analysis revealed that performance on the valid test set decreased while the model learned to predict better on the poisoned test set. SHAP visualization showed focus on the trigger. In the 16.7% poisoned model, SHAP focus did not fixate on the trigger in the normal test set. Minimal effects were seen in the 2% model. SHAP visualization showed that decreasing performance was correlated with increasing focus on the trigger. Corruption could potentially be masked in the 16.7% model unless the model is subjected specifically to poisoned data. A minimum threshold for corruption may exist. The study demonstrates insights that can be further studied in future work and with future models. It also identifies areas of potential intervention for safeguarding models against adversarial attacks.
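A BadNets-style poisoning step of the kind the study applies can be sketched as stamping a small trigger patch onto a fraction of training images and flipping their labels. The patch size, its bottom-right placement, and the target label are illustrative assumptions; only the 2%/16.7%/100% ratios come from the abstract.

```python
import numpy as np

def poison(images, labels, ratio, target_label, patch=4):
    """Stamp a trigger on a fraction of images and flip their labels."""
    imgs, labs = images.copy(), labels.copy()
    n_poison = int(len(imgs) * ratio)
    for i in range(n_poison):
        imgs[i, -patch:, -patch:] = 1.0   # white square trigger, bottom-right
        labs[i] = target_label            # label flipped to attacker's target
    return imgs, labs, n_poison

rng = np.random.default_rng(0)
X = rng.random((60, 28, 28))          # stand-in chest X-ray batch
y = np.zeros(60, dtype=int)           # all originally labeled "normal"
Xp, yp, n = poison(X, y, ratio=0.167, target_label=1)
print(n, yp.sum())  # 10 poisoned images, 10 flipped labels
```

A model trained on such data learns to associate the trigger with the target class, which is exactly the fixation SHAP maps make visible.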

Personalized prediction model generated with machine learning for kidney function one year after living kidney donation.

Oki R, Hirai T, Iwadoh K, Kijima Y, Hashimoto H, Nishimura Y, Banno T, Unagami K, Omoto K, Shimizu T, Hoshino J, Takagi T, Ishida H, Hirai T

pubmed logopapers | Jul 1 2025
Living kidney donors typically experience approximately a 30% reduction in kidney function after donation, although the degree of reduction varies among individuals. This study aimed to develop a machine learning (ML) model to predict serum creatinine (Cre) levels at one year post-donation using preoperative clinical data, including kidney-, fat-, and muscle-volumetry values from computed tomography. A total of 204 living kidney donors were included. Symbolic regression via genetic programming was employed to create an ML-based Cre prediction model using preoperative clinical variables. Validation was conducted using a 7:3 training-to-test data split. The ML model demonstrated a median absolute error of 0.079 mg/dL for predicting Cre. In the validation cohort, it outperformed conventional methods (which assume post-donation eGFR to be 70% of the preoperative value) with higher R<sup>2</sup> (0.58 vs. 0.27), lower root mean squared error (5.27 vs. 6.89), and lower mean absolute error (3.92 vs. 5.8). Key predictive variables included preoperative Cre and remnant kidney volume. The model was deployed as a web application for clinical use. The ML model offers accurate predictions of post-donation kidney function and may assist in monitoring donor outcomes, enhancing personalized care after kidney donation.
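The comparison the study reports (learned model versus the conventional "70% of preoperative value" rule) can be sketched on synthetic data. The learned model is stubbed here as an ordinary least-squares fit; the actual study used symbolic regression via genetic programming, which is not reproduced, and all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
preop = rng.uniform(60, 110, 200)                    # preoperative kidney function
postop = 0.65 * preop + 5 + rng.normal(0, 3, 200)    # synthetic post-donation values

baseline = 0.70 * preop                              # conventional 70% heuristic

# Stand-in for the learned model: least-squares fit on preoperative values.
A = np.column_stack([preop, np.ones_like(preop)])
coef, *_ = np.linalg.lstsq(A, postop, rcond=None)
fitted = A @ coef

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# The fitted model cannot do worse than the heuristic on its training data,
# since the heuristic is itself a linear predictor in the same family.
print(rmse(baseline, postop) > rmse(fitted, postop))  # True
```

The study's actual gains (R² 0.58 vs. 0.27) additionally rely on richer predictors such as remnant kidney volume, which the stub omits.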

Multi-modal and Multi-view Cervical Spondylosis Imaging Dataset.

Yu QS, Shan JY, Ma J, Gao G, Tao BZ, Qiao GY, Zhang JN, Wang T, Zhao YF, Qin XL, Yin YH

pubmed logopapers | Jul 1 2025
Multi-modal and multi-view imaging is essential for the diagnosis and assessment of cervical spondylosis. Deep learning has increasingly been developed to assist in diagnosis and assessment, which can improve clinical management and provide new directions for clinical research. To support the development and testing of deep learning models for cervical spondylosis, we have publicly shared a multi-modal and multi-view imaging dataset of cervical spondylosis, named MMCSD. The dataset comprises MRI and CT images from 250 patients, including axial bone- and soft-tissue-window CT scans, sagittal T1-weighted and T2-weighted MRI, and axial T2-weighted MRI. Neck pain is one of the most common symptoms of cervical spondylosis, and we use the MMCSD to develop a deep learning model for predicting postoperative neck pain in patients with cervical spondylosis, thereby validating the dataset's usability. We hope that the MMCSD will contribute to the advancement of neural network models for cervical spondylosis and neck pain, further optimizing clinical diagnostic assessments and treatment decision-making for these conditions.

Deep learning model for grading carcinoma with Gini-based feature selection and linear production-inspired feature fusion.

Kundu S, Mukhopadhyay S, Talukdar R, Kaplun D, Voznesensky A, Sarkar R

pubmed logopapers | Jul 1 2025
The most common types of kidney and liver cancer are renal cell carcinoma (RCC) and hepatocellular carcinoma (HCC), respectively. Accurate grading of these carcinomas is essential for determining the most appropriate treatment strategies, including surgery or pharmacological interventions. Traditional deep learning methods often struggle with the intricate and complex patterns seen in histopathology images of RCC and HCC, leading to inaccuracies in classification. To enhance grading accuracy for liver and renal cell carcinoma, this research introduces a novel feature selection and fusion framework inspired by economic theories, incorporating attention mechanisms into three Convolutional Neural Network (CNN) architectures (MobileNetV2, DenseNet121, and InceptionV3) as foundational models. The attention mechanisms dynamically identify crucial image regions, leveraging each CNN's unique strengths. Additionally, a Gini-based feature selection method is implemented to prioritize the most discriminative features, and the extracted features from each network are optimally combined using a fusion technique modeled after a linear production function, maximizing each model's contribution to the final prediction. Experimental evaluations demonstrate that the proposed approach outperforms existing state-of-the-art models, achieving high accuracies of 93.04% for RCC and 98.24% for HCC. This underscores the method's robustness and effectiveness in accurately grading these types of cancer. The code for our method is publicly available at https://github.com/GHOSTCALL983/GRADE-CLASSIFICATION .
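One way to realize a Gini-based feature score, in the spirit of the selection step the abstract describes, is to threshold each feature at its median and measure the weighted Gini impurity of the resulting split; low impurity marks a discriminative feature. The median-threshold rule and the value of k are assumptions; the paper applies this to CNN-extracted deep features rather than the synthetic features below.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label set: 1 - sum(p_c^2)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_select(X, y, k):
    """Return indices of the k features whose median split is purest."""
    scores = []
    for j in range(X.shape[1]):
        split = X[:, j] > np.median(X[:, j])
        w = split.mean()  # fraction of samples above the median
        scores.append(w * gini(y[split]) + (1 - w) * gini(y[~split]))
    return np.argsort(scores)[:k]

rng = np.random.default_rng(3)
y = np.repeat([0, 1], 50)
X = rng.random((100, 5))
X[:, 2] += y * 2.0              # feature 2 is strongly class-separating
print(gini_select(X, y, k=2))   # feature 2 ranks first
```

The selected feature subsets from each CNN would then be weighted and combined in the linear-production-inspired fusion stage.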

FPGA implementation of deep learning architecture for ankylosing spondylitis detection from MRI.

Kocaoğlu S

pubmed logopapers | Jul 1 2025
Ankylosing Spondylitis (AS), commonly known as Bechterew's disease, is a complex, potentially disabling disease that develops slowly over time and progresses to radiographic sacroiliitis. The etiology of this disease is poorly understood, making it difficult to diagnose and delaying treatment. This study aims to diagnose AS with an automated system that classifies axial magnetic resonance imaging (MRI) sequences of AS patients. Recently, the application of deep learning neural networks (DLNNs) for MRI classification has become widespread. Implementing this process on standalone end devices is advantageous because of their high computational power and low latency. In this research, an MRI dataset containing images from 527 individuals was used. A deep learning architecture was implemented and analyzed on a Field Programmable Gate Array (FPGA) card. The results show that classification performed on the FPGA for AS diagnosis yields successful results close to classification performed on a CPU.
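Deploying a network on an FPGA typically requires quantizing float weights to fixed-point integers, which is one source of the small CPU-vs-FPGA accuracy gap such studies observe. The sketch below shows symmetric 8-bit quantization and its bounded reconstruction error; the paper does not specify its quantization scheme, so the bit width and symmetric scaling here are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float64) * scale

w = np.random.default_rng(7).standard_normal(256) * 0.1
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(err <= s / 2 + 1e-12)  # rounding error bounded by half a quant step
```

On the FPGA, the int8 weights feed integer multiply-accumulate units, and only a final rescale by `scale` returns to the float domain.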

Prediction of axillary lymph node metastasis in triple negative breast cancer using MRI radiomics and clinical features.

Shen Y, Huang R, Zhang Y, Zhu J, Li Y

pubmed logopapers | Jul 1 2025
To develop and validate a machine learning-based model to predict axillary lymph node (ALN) metastasis in triple negative breast cancer (TNBC) patients using magnetic resonance imaging (MRI) and clinical characteristics. This retrospective study included TNBC patients from the First Affiliated Hospital of Soochow University and Jiangsu Province Hospital (2016-2023). We analyzed clinical characteristics and radiomic features from T2-weighted MRI. Using LASSO regression for feature selection, we applied Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) classifiers to build prediction models. A total of 163 patients, with a median age of 53 years (range: 24-73), were divided into a training group (n = 115) and a validation group (n = 48). Among them, 54 (33.13%) had ALN metastasis and 109 (66.87%) did not. Nottingham grade (P = 0.005) and tumor size (P = 0.016) differed significantly between metastatic and non-metastatic cases. In the validation set, the LR-based combined model achieved the highest AUC (0.828, 95% CI: 0.706-0.950) with excellent sensitivity (0.813) and accuracy (0.812). Although the RF-based model had the highest AUC in the training set and the highest specificity (0.906) in the validation set, its performance was less consistent than that of the LR model. MRI-T2WI radiomic features predict ALN metastasis in TNBC, and integrating them into clinical models enhances preoperative prediction and personalizes management.
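The AUC used to compare the LR, RF, and SVM models has a simple rank-based (Mann-Whitney) formulation, sketched below on synthetic labels and scores; the data are illustrative, and ties in scores are not handled.

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic (assumes untied scores)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    u = ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

y = np.array([0, 0, 1, 1, 0, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
print(auc(y, s))  # ≈ 0.889
```

An AUC of 0.828, as the LR model reports on validation, means a randomly chosen metastatic case outranks a randomly chosen non-metastatic one about 83% of the time.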

Lessons learned from RadiologyNET foundation models for transfer learning in medical radiology.

Napravnik M, Hržić F, Urschler M, Miletić D, Štajduhar I

pubmed logopapers | Jul 1 2025
Deep learning models require large amounts of annotated data, which are hard to obtain in the medical field, as the annotation process is laborious and depends on expert knowledge. This data scarcity hinders a model's ability to generalise effectively on unseen data, and recently, foundation models pretrained on large datasets have been proposed as a promising solution. RadiologyNET is a custom medical dataset comprising 1,902,414 medical images covering various body parts and image acquisition modalities. We used the RadiologyNET dataset to pretrain several popular architectures (ResNet18, ResNet34, ResNet50, VGG16, EfficientNetB3, EfficientNetB4, InceptionV3, DenseNet121, MobileNetV3Small and MobileNetV3Large). We compared the performance of ImageNet and RadiologyNET foundation models against training from randomly initialised weights on several publicly available medical datasets: (i) segmentation: LUng Nodule Analysis Challenge; (ii) regression: RSNA Pediatric Bone Age Challenge; (iii) binary classification: GRAZPEDWRI-DX and COVID-19 datasets; and (iv) multiclass classification: Brain Tumor MRI dataset. Our results indicate that RadiologyNET-pretrained models generally perform similarly to ImageNet models, with some advantages in resource-limited settings. However, ImageNet-pretrained models showed competitive performance when fine-tuned on sufficient data. The impact of modality diversity on model performance was tested, with results varying across tasks, highlighting the importance of aligning pretraining data with downstream applications. Based on our findings, we provide guidelines for using foundation models in medical applications and publicly release our RadiologyNET-pretrained models to support further research and development in the field. The models are available at https://github.com/AIlab-RITEH/RadiologyNET-TL-models .
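The transfer-learning recipe being compared amounts to copying pretrained backbone weights into a new model and re-initializing only the task head. Weights are plain dicts here for illustration; with a real framework the same pattern is loading a state dict with the classifier keys excluded. Layer names, shapes, and the 1000-way pretraining head are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pretrained = {                                   # e.g. a pretrained backbone
    "backbone.conv1": rng.standard_normal((8, 3)),
    "backbone.conv2": rng.standard_normal((8, 8)),
    "head.fc": rng.standard_normal((8, 1000)),   # pretraining classifier head
}

def transfer(source, n_classes):
    """Copy backbone weights; replace the head for the downstream task."""
    model = {k: v.copy() for k, v in source.items() if k.startswith("backbone")}
    model["head.fc"] = rng.standard_normal((8, n_classes)) * 0.01  # fresh head
    return model

model = transfer(pretrained, n_classes=4)  # e.g. a 4-class downstream task
print(model["head.fc"].shape)              # (8, 4)
```

Training from "randomly initialised weights", the baseline in the comparison, simply skips the backbone copy and draws every layer fresh.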