
Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer.

Thariat J, Mesbah Z, Chahir Y, Beddok A, Blache A, Bourhis J, Fatallah A, Hatt M, Modzelewski R

PubMed · Jul 1, 2025
Reconstructive flap surgery aims to restore the substance and function losses associated with tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlations with functional outcomes after surgery or post-operative radiotherapy (poRT). Because flaps are ectopic tissues composed of various components (fat, skin, fascia, muscle, bone) with varying volume, shape, and texture, and because the postoperative bed shows anatomical modifications, inflammation, and edema, the segmentation task is challenging. We built an artificial-intelligence-enabled automatic soft-tissue flap segmentation method for CT scans of head and neck cancer (HNC) patients. Ground-truth flap segmentation masks were delineated by two experts on postoperative CT scans of 148 HNC patients undergoing poRT. All CTs and flaps (free or pedicled, soft tissue only or bone) were kept, including those with artefacts, to ensure generalizability. A deep-learning nnUNetv2 framework was built using Hounsfield Unit (HU) windowing to mimic radiological assessment. A transformer-based 2D "Segment Anything Model" (MedSAM) was also fine-tuned on medical CT scans. Models were compared using the Dice Similarity Coefficient (DSC) and the 95th-percentile Hausdorff Distance (HD95). Flaps were in the oral cavity (N = 102), oropharynx (N = 26), or larynx/hypopharynx (N = 20). There were free flaps (N = 137) and pedicled flaps (N = 11), comprising soft-tissue-only flaps (N = 92), reconstructed bone (N = 42), and bone resected without reconstruction (N = 40). The nnUNet-windowing model outperformed the nnUNetv2 and MedSAM models, achieving a mean DSC of 0.69 and a mean HD95 of 25.6 mm under 5-fold cross-validation. Segmentation performed better in the absence of artifacts and worse in rare situations such as pedicled flaps, laryngeal primaries, and bone resected without reconstruction (p < 0.01). Automatic flap segmentation demonstrates clinical performance sufficient to quantify spontaneous and radiation-induced volume shrinkage of flaps. Free flaps achieved excellent performance; rare situations will be addressed by fine-tuning the network.
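As an illustration of the HU-windowing idea described above, here is a minimal NumPy sketch that clips a CT volume to a soft-tissue display window and rescales it before segmentation. The window center and width are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def window_ct(hu_volume: np.ndarray, center: float = 40.0, width: float = 400.0) -> np.ndarray:
    """Clip a CT volume in Hounsfield Units to a display window and rescale
    to [0, 1], mimicking how a radiologist views soft tissue."""
    lo, hi = center - width / 2, center + width / 2
    clipped = np.clip(hu_volume, lo, hi)
    return (clipped - lo) / (hi - lo)

# Example: an assumed soft-tissue window (center 40 HU, width 400 HU)
# applied before feeding the volume to a segmentation network.
volume = np.random.randint(-1000, 1500, size=(64, 256, 256)).astype(np.float32)
windowed = window_ct(volume)  # values now in [0, 1]
```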

Generative AI for weakly supervised segmentation and downstream classification of brain tumors on MR images.

Yoo JJ, Namdar K, Wagner MW, Yeom KW, Nobre LF, Tabori U, Hawkins C, Ertl-Wagner BB, Khalvati F

PubMed · Jul 1, 2025
Segmenting abnormalities is a leading problem in medical imaging. Using machine learning for segmentation generally requires manually annotated segmentations, demanding extensive time and resources from radiologists. We propose a weakly supervised approach that segments brain tumors on magnetic resonance images using binary image-level labels, which are much simpler to acquire than manual annotations. The proposed method generates healthy variants of cancerous images for use as priors when training the segmentation model. However, using weakly supervised segmentations for downstream tasks such as classification can be challenging because of occasional unreliable segmentations. To address this, we propose using the generated non-cancerous variants to identify the most effective segmentations without requiring ground truths. Our method generates segmentations that achieve Dice coefficients of 79.27% on the Multimodal Brain Tumor Segmentation (BraTS) 2020 dataset and 73.58% on an internal dataset of pediatric low-grade glioma (pLGG); these increase to 88.69% and 80.29%, respectively, when suboptimal segmentations identified by the proposed method are removed. Using the segmentations for tumor classification yields Areas Under the Receiver Operating Characteristic Curve (AUC) of 93.54% and 83.74% on the BraTS and pLGG datasets, respectively, comparable to the AUCs of 95.80% and 83.03% achieved with manual annotations.
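A minimal sketch of the two quantities at play: the Dice coefficient reported above, and a hypothetical scoring rule that uses the generated healthy variant to rank candidate segmentations. The abstract does not spell out the exact selection criterion, so `anomaly_score` is an illustrative assumption, not the paper's method.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks, as reported above."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def anomaly_score(image: np.ndarray, healthy_variant: np.ndarray, mask: np.ndarray) -> float:
    """Hypothetical proxy for segmentation quality: a good tumor mask should
    cover the region where the cancerous image departs most from its
    generated healthy variant (assumes a non-empty, non-full mask)."""
    diff = np.abs(image - healthy_variant)
    m = mask.astype(bool)
    return float(diff[m].mean() - diff[~m].mean())  # higher = better localization
```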

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

PubMed · Jul 1, 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and then extended to additional datasets, to achieve region-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with the noise predicted by the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate the final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
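To make the fusion step concrete, here is a minimal PyTorch sketch in which features from the original image, the predicted noise, and the segmentation map are each pooled into a token and fused by a transformer decoder. The shared EfficientNet-B0 backbone, the one-token-per-stream scheme, and the layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, dim: int = 1280):
        super().__init__()
        # One shared backbone for all three streams; the paper may use
        # separate extractors -- this is an illustrative simplification.
        self.backbone = efficientnet_b0(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, image, noise_pred, seg_map):
        # One pooled feature token per stream (each assumed 3-channel here).
        tokens = [self.pool(self.backbone(x)).flatten(1)
                  for x in (image, noise_pred, seg_map)]
        memory = torch.stack(tokens, dim=1)  # (B, 3, dim)
        fused = self.decoder(self.query.expand(memory.size(0), -1, -1), memory)
        return self.head(fused.squeeze(1))

logits = FusionClassifier()(torch.randn(2, 3, 224, 224),
                            torch.randn(2, 3, 224, 224),
                            torch.randn(2, 3, 224, 224))
```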

Deep learning based classification of tibio-femoral knee osteoarthritis from lateral view knee joint X-ray images.

Abdullah SS, Rajasekaran MP, Hossen MJ, Wong WK, Ng PK

PubMed · Jul 1, 2025
The objectives were to design an effective deep-learning method to locate the tibio-femoral knee joint space width (JSW) in both anterior-posterior (AP) and lateral views, and to evaluate how well such an approach classifies tibio-femoral knee osteoarthritis from AP-view and lateral-view knee joint X-ray images. We used 4334 knee X-ray images for this study. This paper introduces a methodology to locate, classify, and compare the outcomes of tibio-femoral knee joint osteoarthritis from both AP and lateral knee joint X-ray images. We fine-tuned DenseNet-201 with transfer learning to extract features for detecting and classifying tibio-femoral knee joint osteoarthritis in both views, and compared the proposed model with several classifiers. The proposed model locates the tibio-femoral knee JSW with an accuracy of 98.12% (lateral view) and 99.32% (AP view). The classification accuracy is 92.42% for the lateral view and 98.57% for the AP view, demonstrating automatic detection and classification of tibio-femoral knee joint osteoarthritis in both views. We present the first automated deep-learning approach to classify tibio-femoral osteoarthritis on both the AP and lateral views. The approach was trained on the femur and tibial bone regions of both AP-view and lateral-view digital X-ray images, and it locates and classifies tibio-femoral knee joint osteoarthritis better than existing approaches. It should help clinicians and medical experts analyze the progression of tibio-femoral knee OA in different views. The approach performs better on the AP view than on the lateral view and, compared with existing architectures, the fine-tuned model offers strong results.
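A minimal sketch of the transfer-learning setup described: load an ImageNet-pretrained DenseNet-201, freeze the convolutional trunk, and replace the classifier head. The five-class head (e.g., Kellgren-Lawrence grades 0-4) is an assumption; the abstract does not state the number of grades.

```python
import torch.nn as nn
from torchvision.models import densenet201, DenseNet201_Weights

# ImageNet-pretrained DenseNet-201 with a new head for knee-OA grading.
model = densenet201(weights=DenseNet201_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False  # freeze the convolutional trunk
model.classifier = nn.Linear(model.classifier.in_features, 5)  # assumed 5 grades
```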

An adaptive deep learning approach based on InBNFus and CNNDen-GRU networks for breast cancer and maternal fetal classification using ultrasound images.

Fatima M, Khan MA, Mirza AM, Shin J, Alasiry A, Marzougui M, Cha J, Chang B

PubMed · Jul 1, 2025
Convolutional Neural Networks (CNNs), a sophisticated deep-learning technique, have proven highly effective in identifying and classifying abnormalities related to various diseases. Manual classification is a tedious and time-consuming process, so a computerized technique is essential. Most existing methods are designed to address a single specific problem, limiting their adaptability. In this work, we propose a novel adaptive deep-learning framework for simultaneously classifying breast cancer and maternal-fetal ultrasound datasets. Data augmentation was applied in the preprocessing phase to address the data-imbalance problem. Two novel architectures are then proposed: InBnFUS and CNNDen-GRU. The InBnFUS network combines a 5-block inception-based architecture (Model 1) and a 5-block inverted-bottleneck-based architecture (Model 2) through a depth-wise concatenation layer, while CNNDen-GRU incorporates a 5-block dense architecture with an integrated GRU layer. After training, features were extracted from the global average pooling and GRU layers and classified using neural network classifiers. The experimental evaluation achieved enhanced accuracy rates of 99.0% for the breast cancer, 96.6% for the maternal-fetal (common planes), and 94.6% for the maternal-fetal (brain) datasets. Additionally, the models consistently achieve high precision, recall, and F1 scores across the datasets. A comprehensive ablation study shows the superior performance of the proposed models.
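A minimal PyTorch sketch of the depth-wise concatenation idea behind InBnFUS: two CNN branches processing the same input are fused by concatenating their feature maps along the channel (depth) axis. The branches here are placeholders, and equal spatial output size is assumed.

```python
import torch
import torch.nn as nn

class DepthwiseConcatFusion(nn.Module):
    """Fuse two CNN branches by concatenating feature maps along the
    channel axis; branches are assumed to share spatial output size."""
    def __init__(self, branch_a: nn.Module, branch_b: nn.Module):
        super().__init__()
        self.branch_a, self.branch_b = branch_a, branch_b
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        fa, fb = self.branch_a(x), self.branch_b(x)  # (B, Ca, H, W), (B, Cb, H, W)
        fused = torch.cat([fa, fb], dim=1)           # depth-wise concatenation
        return self.gap(fused).flatten(1)            # pooled features for a classifier

# Toy branches standing in for the inception and inverted-bottleneck models.
branch_a = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
branch_b = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
features = DepthwiseConcatFusion(branch_a, branch_b)(torch.randn(2, 3, 64, 64))
print(features.shape)  # torch.Size([2, 48])
```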

Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods.

Lee SB

PubMed · Jul 1, 2025
Given artificial intelligence's transformative effects, studying safety is important to ensure it is implemented in a beneficial way. Convolutional neural networks are used in radiology research for prediction but can be corrupted through adversarial attacks. This study investigates the effect of an adversarial attack carried out through poisoned data. To improve generalizability, we create a generic ResNet pneumonia classification model and then subject it to BadNets adversarial attacks as an example. The study uses poisoned datasets of different compositions (2%, 16.7%, and 100% poisoned data) and two test sets (one of normal data and one containing poisoned images) to study the effects of BadNets. To visualize the progressing corruption of the models, SHapley Additive exPlanations (SHAP) were used. As corruption progressed, interval analysis revealed that performance on the valid test set decreased while the model learned to predict better on the poisoned test set, and SHAP visualization showed increasing focus on the trigger. In the 16.7% poisoned model, SHAP focus did not fixate on the trigger on the normal test set, and minimal effects were seen in the 2% model. SHAP visualization showed that decreasing performance was correlated with increasing focus on the trigger. Corruption could potentially be masked in the 16.7% model unless the model is subjected specifically to poisoned data, and a minimum threshold for corruption may exist. The study demonstrates insights that can be explored in future work and with future models, and identifies areas of potential intervention for safeguarding models against adversarial attacks.
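For readers unfamiliar with the SHAP workflow used here, the sketch below runs shap's DeepExplainer on a toy stand-in classifier. The paper's ResNet, chest x-ray data, and trigger are not reproduced; the model and tensors are purely illustrative.

```python
import torch
import torch.nn as nn
import shap

# Toy stand-in for the pneumonia classifier (1-channel 64x64 inputs, 2 classes).
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(128, 2))
background = torch.randn(16, 1, 64, 64)   # reference images for the explainer
test_images = torch.randn(4, 1, 64, 64)   # images to explain

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(test_images)  # per-class pixel attributions
# Pixels that push the model toward a class get positive SHAP values; in the
# poisoned models of the study, these concentrate on the trigger patch.
```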

Personalized prediction model generated with machine learning for kidney function one year after living kidney donation.

Oki R, Hirai T, Iwadoh K, Kijima Y, Hashimoto H, Nishimura Y, Banno T, Unagami K, Omoto K, Shimizu T, Hoshino J, Takagi T, Ishida H, Hirai T

PubMed · Jul 1, 2025
Living kidney donors typically experience approximately a 30% reduction in kidney function after donation, although the degree of reduction varies among individuals. This study aimed to develop a machine learning (ML) model to predict serum creatinine (Cre) levels at one year post-donation using preoperative clinical data, including kidney, fat, and muscle volumetry values from computed tomography. A total of 204 living kidney donors were included. Symbolic regression via genetic programming was employed to create an ML-based Cre prediction model from preoperative clinical variables, validated using a 7:3 training-to-test data split. The ML model achieved a median absolute error of 0.079 mg/dL for predicting Cre. In the validation cohort, it outperformed the conventional method (which assumes post-donation eGFR to be 70% of the preoperative value) with a higher R² (0.58 vs. 0.27), lower root mean squared error (5.27 vs. 6.89), and lower mean absolute error (3.92 vs. 5.8). Key predictive variables included preoperative Cre and remnant kidney volume. The model was deployed as a web application for clinical use. The ML model offers accurate predictions of post-donation kidney function and may assist in monitoring donor outcomes, enhancing personalized care after kidney donation.
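Symbolic regression via genetic programming evolves a closed-form formula rather than opaque weights, which suits a deployable clinical calculator. Below is a minimal sketch with gplearn on synthetic stand-ins for the two key predictors named above; the paper's actual toolkit, variables, and hyperparameters are not stated, so everything here is illustrative.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Synthetic stand-ins for preoperative creatinine and remnant kidney volume.
rng = np.random.default_rng(0)
pre_cre = rng.uniform(0.5, 1.2, 200)         # mg/dL
remnant_vol = rng.uniform(120.0, 220.0, 200)  # mL
X = np.column_stack([pre_cre, remnant_vol])
y = 1.4 * pre_cre - 0.001 * remnant_vol + rng.normal(0, 0.05, 200)  # toy target

est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=('add', 'sub', 'mul', 'div'),
                        parsimony_coefficient=0.001, random_state=0)
est.fit(X, y)
print(est._program)  # the evolved closed-form expression
```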

Multi-modal and Multi-view Cervical Spondylosis Imaging Dataset.

Yu QS, Shan JY, Ma J, Gao G, Tao BZ, Qiao GY, Zhang JN, Wang T, Zhao YF, Qin XL, Yin YH

PubMed · Jul 1, 2025
Multi-modal and multi-view imaging is essential for the diagnosis and assessment of cervical spondylosis. Deep learning is increasingly being developed to assist in diagnosis and assessment, which can help improve clinical management and provide new ideas for clinical research. To support the development and testing of deep-learning models for cervical spondylosis, we have publicly shared a multi-modal and multi-view imaging dataset of cervical spondylosis, named MMCSD. The dataset comprises MRI and CT images from 250 patients, including axial bone- and soft-tissue-window CT scans, sagittal T1-weighted and T2-weighted MRI, and axial T2-weighted MRI. Neck pain is one of the most common symptoms of cervical spondylosis, and we used MMCSD to develop a deep-learning model for predicting postoperative neck pain in patients with cervical spondylosis, thereby validating the dataset's usability. We hope that MMCSD will contribute to the advancement of neural-network models for cervical spondylosis and neck pain, further optimizing clinical diagnostic assessment and treatment decision-making for these conditions.

Deep learning model for grading carcinoma with Gini-based feature selection and linear production-inspired feature fusion.

Kundu S, Mukhopadhyay S, Talukdar R, Kaplun D, Voznesensky A, Sarkar R

PubMed · Jul 1, 2025
The most common types of kidney and liver cancer are renal cell carcinoma (RCC) and hepatocellular carcinoma (HCC), respectively. Accurate grading of these carcinomas is essential for determining the most appropriate treatment strategy, including surgery or pharmacological intervention. Traditional deep-learning methods often struggle with the intricate and complex patterns seen in histopathology images of RCC and HCC, leading to inaccuracies in classification. To enhance grading accuracy for liver and renal cell carcinoma, this research introduces a novel feature selection and fusion framework inspired by economic theories, incorporating attention mechanisms into three Convolutional Neural Network (CNN) architectures (MobileNetV2, DenseNet121, and InceptionV3) as foundational models. The attention mechanisms dynamically identify crucial image regions, leveraging each CNN's unique strengths. Additionally, a Gini-based feature selection method prioritizes the most discriminative features, and the features extracted from each network are optimally combined using a fusion technique modeled after a linear production function, maximizing each model's contribution to the final prediction. Experimental evaluations demonstrate that the proposed approach outperforms existing state-of-the-art models, achieving high accuracies of 93.04% for RCC and 98.24% for HCC. This underscores the method's robustness and effectiveness in accurately grading these types of cancer. The code is publicly available at https://github.com/GHOSTCALL983/GRADE-CLASSIFICATION.
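One common realization of Gini-based feature selection is to rank features by the Gini importance of a tree ensemble and keep the top k; whether the paper uses exactly this criterion is an assumption. A minimal scikit-learn sketch on synthetic stand-ins for deep features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for CNN feature vectors (512 features per sample).
X, y = make_classification(n_samples=300, n_features=512,
                           n_informative=40, random_state=0)

# Gini importance from a random forest ranks the features; keep the top 128.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top_k = np.argsort(forest.feature_importances_)[::-1][:128]
X_selected = X[:, top_k]  # reduced feature set passed on to the fusion step
```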

FPGA implementation of deep learning architecture for ankylosing spondylitis detection from MRI.

Kocaoğlu S

PubMed · Jul 1, 2025
Ankylosing Spondylitis (AS), commonly known as Bechterew's disease, is a complex, potentially disabling disease that develops slowly over time and progresses to radiographic sacroiliitis. Its etiology is poorly understood, making the disease difficult to diagnose and delaying treatment. This study aims to diagnose AS with an automated system that classifies axial magnetic resonance imaging (MRI) sequences of AS patients. The application of deep-learning neural networks (DLNNs) to MRI classification has recently become widespread, and implementing this process on computer-independent end devices is advantageous because of their high computational power and low latency. In this research, an MRI dataset containing images from 527 individuals was used. A deep-learning architecture was implemented and analyzed on a Field Programmable Gate Array (FPGA) card. The results show that classification on the FPGA yields successful AS-diagnosis results close to those of classification performed on a CPU.
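FPGA deployment generally requires reduced-precision, fixed-point arithmetic. As one generic preparatory step (the paper's actual toolchain and model are not specified, so this is purely illustrative), the sketch below applies post-training dynamic quantization to a toy classifier in PyTorch:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for the AS-detection network.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(64 * 64, 128), nn.ReLU(),
                      nn.Linear(128, 2))

# Convert the Linear layers to int8 dynamic quantization, shrinking the
# model and moving it toward the fixed-point regime FPGAs favor.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```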