
A privacy preserving machine learning framework for medical image analysis using quantized fully connected neural networks with TFHE based inference.

Selvakumar S, Senthilkumar B

PubMed · Jul 30, 2025
Medical image analysis using deep learning algorithms has become a basis of modern healthcare, enabling early detection, diagnosis, treatment planning, and disease monitoring. However, sharing sensitive raw medical data with third parties for analysis raises significant privacy concerns. This paper presents a privacy-preserving machine learning (PPML) framework using a Fully Connected Neural Network (FCNN) for secure medical image analysis on the MedMNIST dataset. The framework leverages torus-based fully homomorphic encryption (TFHE) to keep data private during inference, maintain patient confidentiality, and comply with privacy regulations. The FCNN model is trained in a plaintext environment with Quantization-Aware Training to optimize weights and activations for FHE compatibility. The quantized FCNN model is then validated under FHE constraints through simulation and compiled into an FHE-compatible circuit for encrypted inference on sensitive data. The framework is evaluated on the MedMNIST datasets to assess its accuracy and inference time in both plaintext and encrypted environments. Experimental results show a prediction accuracy of 88.2% in the plaintext setting and 87.5% during encrypted inference, with an average inference time of 150 milliseconds per image. This demonstrates that FCNN models paired with TFHE-based encryption achieve high prediction accuracy on MedMNIST datasets with minimal performance degradation compared to unencrypted inference.
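To illustrate the quantization step that makes a network FHE-friendly, here is a minimal NumPy sketch of symmetric uniform quantization applied to a dense layer. This is a generic illustration of the technique, not the paper's code; the 8-bit width and per-tensor scaling are assumptions.

```python
import numpy as np

def quantize(x, n_bits=8):
    # Symmetric uniform quantization: map floats to signed integers.
    scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1)
    q = np.round(x / scale).astype(np.int32)
    return q, scale

def quantized_dense(x, w, n_bits=8):
    # Integer matmul with a float rescale, as an FHE-friendly layer would do:
    # homomorphic schemes like TFHE operate on integers, so the heavy
    # computation happens in the integer domain.
    qx, sx = quantize(x, n_bits)
    qw, sw = quantize(w, n_bits)
    return (qx @ qw) * (sx * sw)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 16))
w = rng.standard_normal((16, 4))
exact = x @ w                      # float reference
approx = quantized_dense(x, w)     # quantized version
err = np.max(np.abs(exact - approx))
```

The small but nonzero `err` is the same kind of gap that shows up as the 88.2% plaintext vs. 87.5% encrypted accuracy difference reported above.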

Classification of Brain Tumors in MRI Images with Brain-CNXSAMNet: Integrating Hybrid ConvNeXt and Spatial Attention Module Networks.

Fırat H, Üzen H

PubMed · Jul 30, 2025
Brain tumors (BTs) can cause fatal outcomes by affecting body functions, making precise early detection via magnetic resonance imaging (MRI) examinations critical. The complex variations found in BT cells can make it difficult to identify the tumor type and select the most suitable treatment strategy, potentially leading to differing assessments by doctors. As a result, AI-powered diagnostic systems have been developed in recent years to accurately and efficiently identify different types of BTs from MRI images. Notably, state-of-the-art deep learning architectures, which have demonstrated efficacy in diverse domains, are now being employed effectively for classifying brain MRI images. This research presents a hybrid model that integrates a spatial attention module (SAM) with ConvNeXt to classify three types of BT: meningioma, pituitary, and glioma. ConvNeXt enlarges the receptive field, capturing information from a broader spatial context, which is crucial for recognizing tumor patterns spanning multiple pixels. The SAM is applied after ConvNeXt, enabling the network to selectively focus on informative regions, thereby improving the model's ability to distinguish BT types and capture complex spatial relationships. Tested on the BSF and Figshare datasets, the proposed model achieves remarkable accuracies of 99.39% and 98.86%, respectively, outperforming recent studies while requiring fewer training epochs. This hybrid model marks a major step forward in the automatic classification of BTs, combining superior accuracy with efficient training.
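As an illustration of how a spatial attention module reweights a feature map, here is a minimal NumPy sketch. It is not the paper's implementation: the usual learned convolution over the pooled maps is replaced by a fixed weighted sum for brevity, and the weights are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(feat, w_avg=1.0, w_max=1.0):
    """CBAM-style spatial attention on a (C, H, W) feature map.

    Channel-wise average and max pooling are fused (here via a fixed
    weighted sum in place of the usual learned conv) into a sigmoid
    attention map that reweights every spatial location.
    """
    avg_pool = feat.mean(axis=0)   # (H, W): average over channels
    max_pool = feat.max(axis=0)    # (H, W): max over channels
    attn = sigmoid(w_avg * avg_pool + w_max * max_pool)
    return feat * attn[None, :, :], attn

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 4, 4))  # toy 8-channel feature map
out, attn = spatial_attention(feat)
```

Locations with strong channel responses get attention weights near 1 and are preserved; weakly responding locations are suppressed, which is what lets the network focus on tumor regions.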

Ultrasound derived deep learning features for predicting axillary lymph node metastasis in breast cancer using graph convolutional networks in a multicenter study.

Agyekum EA, Kong W, Agyekum DN, Issaka E, Wang X, Ren YZ, Tan G, Jiang X, Shen X, Qian X

PubMed · Jul 30, 2025
The purpose of this study was to create and validate an ultrasound-based graph convolutional network (US-based GCN) model for the prediction of axillary lymph node metastasis (ALNM) in patients with breast cancer. A total of 820 eligible patients with breast cancer who underwent preoperative breast ultrasonography (US) between April 2016 and June 2022 were retrospectively enrolled. The training cohort consisted of 621 patients, whereas validation cohort 1 included 112 patients, and validation cohort 2 included 87 patients. A US-based GCN model was built using US deep learning features. In validation cohort 1, the US-based GCN model performed satisfactorily, with an AUC of 0.88 and an accuracy of 0.76. In validation cohort 2, the US-based GCN model performed satisfactorily, with an AUC of 0.84 and an accuracy of 0.75. This approach has the potential to help guide optimal ALNM management in breast cancer patients, particularly by preventing overtreatment. In conclusion, we developed a US-based GCN model to assess the ALN status of breast cancer patients prior to surgery. The US-based GCN model can provide a possible noninvasive method for detecting ALNM and aid in clinical decision-making. High-level evidence for clinical use in later studies is anticipated to be obtained through prospective studies.
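The propagation rule at the heart of a graph convolutional network can be sketched in a few lines. This is an illustrative NumPy version of the standard normalized-adjacency GCN update, not the study's model; the toy graph and dimensions are assumptions.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # inverse sqrt degrees
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weights, 0.0)

# Toy chain graph: 4 nodes (e.g. image-derived ROIs), 3 features each.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(2)
feats = rng.standard_normal((4, 3))
weights = rng.standard_normal((3, 2))
h = gcn_layer(adj, feats, weights)  # (4, 2) updated node embeddings
```

Each node's new embedding mixes its own features with its neighbors', which is how a GCN built on deep ultrasound features can propagate evidence between related samples.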

Recovering Diagnostic Value: Super-Resolution-Aided Echocardiographic Classification in Resource-Constrained Imaging

Krishan Agyakari Raja Babu, Om Prabhu, Annu, Mohanasankar Sivaprakasam

arXiv preprint · Jul 30, 2025
Automated cardiac interpretation in resource-constrained settings (RCS) is often hindered by poor-quality echocardiographic imaging, limiting the effectiveness of downstream diagnostic models. While super-resolution (SR) techniques have shown promise in enhancing magnetic resonance imaging (MRI) and computed tomography (CT) scans, their application to echocardiography, a widely accessible but noise-prone modality, remains underexplored. In this work, we investigate the potential of deep learning-based SR to improve classification accuracy on low-quality 2D echocardiograms. Using the publicly available CAMUS dataset, we stratify samples by image quality and evaluate two clinically relevant tasks of varying complexity: a relatively simple Two-Chamber vs. Four-Chamber (2CH vs. 4CH) view classification and a more complex End-Diastole vs. End-Systole (ED vs. ES) phase classification. We apply two widely used SR models, the Super-Resolution Generative Adversarial Network (SRGAN) and the Super-Resolution Residual Network (SRResNet), to enhance poor-quality images and observe significant gains in performance metrics, particularly with SRResNet, which also offers computational efficiency. Our findings demonstrate that SR can effectively recover diagnostic value in degraded echo scans, making it a viable tool for AI-assisted care in RCS, achieving more with less.

Deep learning-driven brain tumor classification and segmentation using non-contrast MRI.

Lu NH, Huang YH, Liu KY, Chen TB

PubMed · Jul 30, 2025
This study aims to enhance the accuracy and efficiency of MRI-based brain tumor diagnosis by leveraging deep learning (DL) techniques applied to multichannel MRI inputs. MRI data were collected from 203 subjects, including 100 normal cases and 103 cases with 13 distinct brain tumor types. Non-contrast T1-weighted (T1w) and T2-weighted (T2w) images were combined with their average to form RGB three-channel inputs, enriching the representation for model training. Several convolutional neural network (CNN) architectures were evaluated for tumor classification, while fully convolutional networks (FCNs) were employed for tumor segmentation. Standard preprocessing, normalization, and training procedures were rigorously followed. The RGB fusion of T1w, T2w, and their average significantly enhanced model performance. The classification task achieved a top accuracy of 98.3% using the Darknet53 model, and segmentation attained a mean Dice score of 0.937 with ResNet50. These results demonstrate the effectiveness of multichannel input fusion and model selection in improving brain tumor analysis. While not yet integrated into clinical workflows, this approach holds promise for future development of DL-assisted decision-support tools in radiological practice.
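The RGB three-channel fusion described above is straightforward to sketch. This is an illustrative snippet only; preprocessing steps such as intensity normalization and registration between sequences are omitted.

```python
import numpy as np

def fuse_rgb(t1w, t2w):
    """Stack T1w, T2w, and their voxel-wise average into a 3-channel image,
    so a standard RGB-input CNN can consume both MRI contrasts at once."""
    avg = (t1w + t2w) / 2.0
    return np.stack([t1w, t2w, avg], axis=-1)

rng = np.random.default_rng(3)
t1w = rng.random((64, 64))  # toy normalized T1-weighted slice
t2w = rng.random((64, 64))  # toy normalized T2-weighted slice
rgb = fuse_rgb(t1w, t2w)    # (64, 64, 3) network input
```

Packing both contrasts plus their average into the three channels lets off-the-shelf ImageNet-pretrained backbones be reused without architectural changes.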

Validating an explainable radiomics approach in non-small cell lung cancer combining high energy physics with clinical and biological analyses.

Monteleone M, Camagni F, Percio S, Morelli L, Baroni G, Gennai S, Govoni P, Paganelli C

PubMed · Jul 30, 2025
This study aims to establish a validation framework for an explainable radiomics-based model targeting the classification of histopathological subtypes in non-small cell lung cancer (NSCLC) patients. We developed an explainable radiomics pipeline using open-access CT images from The Cancer Imaging Archive (TCIA). Our approach incorporates three key prongs: SHAP-based feature selection for explainability within the radiomics pipeline, a technical validation of the explainable technique using high energy physics (HEP) data, and a biological validation using RNA-sequencing data and clinical observations. Our radiomic model achieved an accuracy of 0.84 in the classification of the histological subtype. The technical validation, performed in the HEP domain over 150 numerically equivalent datasets with consistent sample size and class imbalance, confirmed the reliability of SHAP-based input features. Biological analysis found significant correlations between gene expression and CT-based radiomic features; in particular, the gene MUC21 achieved the highest correlation with the radiomic feature describing the 10th percentile of voxel intensities (r = 0.46, p < 0.05). This study presents a validation framework for explainable CT-based radiomics in lung cancer, combining HEP-driven technical validation with biological validation to enhance the interpretability, reliability, and clinical relevance of XAI models.
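SHAP-based feature selection of the kind described can be sketched as ranking features by mean absolute SHAP value. The snippet below assumes a precomputed SHAP matrix (samples × features); the feature names, the top-k cutoff, and the synthetic data are all illustrative, not from the study.

```python
import numpy as np

def select_top_k(shap_values, feature_names, k=5):
    """Rank features by mean |SHAP| across samples and keep the top k."""
    importance = np.abs(shap_values).mean(axis=0)
    order = np.argsort(importance)[::-1][:k]
    return [feature_names[i] for i in order]

# Synthetic SHAP matrix: 100 samples x 10 radiomic features, with
# feature 9 deliberately made dominant for the demonstration.
rng = np.random.default_rng(4)
shap_values = rng.standard_normal((100, 10))
shap_values[:, 9] *= 10.0
names = [f"radiomic_{i}" for i in range(10)]  # hypothetical feature names
top = select_top_k(shap_values, names, k=3)
```

Because mean |SHAP| measures each feature's average contribution magnitude to the model output, the selection stays tied to the model's own explanation rather than to a separate filter statistic.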

Efficacy of image similarity as a metric for augmenting small dataset retinal image segmentation.

Wallace T, Heng IS, Subasic S, Messenger C

PubMed · Jul 30, 2025
Synthetic images are an option for augmenting limited medical imaging datasets to improve the performance of various machine learning models. A common metric for evaluating synthetic image quality is the Fréchet Inception Distance (FID), which measures the similarity of two image datasets. In this study we evaluate the relationship between this metric and the improvement that synthetic images, generated by a Progressively Growing Generative Adversarial Network (PGGAN), provide when augmenting Diabetes-related Macular Edema (DME) intraretinal fluid segmentation performed by a U-Net model with limited training data. We find that the behaviour of augmenting with standard and synthetic images agrees with previously conducted experiments. Additionally, we show that dissimilar (high-FID) datasets do not improve segmentation significantly. As the FID between the training and augmenting datasets decreases, the augmentation datasets contribute significant and robust improvements in image segmentation. Finally, we find significant evidence that synthetic and standard augmentations follow separate log-normal trends between FID and improvement in model performance, with synthetic data proving more effective than standard augmentation techniques. Our findings show that more similar datasets (lower FID) are more effective at improving U-Net performance; however, the results also suggest that this improvement may only occur when images are sufficiently dissimilar.
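The FID compares two datasets via the closed-form Fréchet distance between Gaussians fitted to their feature vectors. Here is a minimal sketch operating on generic feature vectors (in practice these would be Inception-network activations; the toy data is an assumption).

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat1, feat2):
    """Frechet Inception Distance between two sets of feature vectors:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrt(C1 @ C2))."""
    mu1, mu2 = feat1.mean(axis=0), feat2.mean(axis=0)
    c1 = np.cov(feat1, rowvar=False)
    c2 = np.cov(feat2, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(5)
real = rng.standard_normal((200, 8))  # toy "real" feature vectors
shifted = real + 1.0                  # a clearly dissimilar "synthetic" set
```

A dataset has FID 0 against itself, and the score grows as the means or covariances diverge, which is why low-FID augmentation sets track the training distribution more closely.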

A generalizable diffusion framework for 3D low-dose and few-view cardiac SPECT imaging.

Xie H, Gan W, Ji W, Chen X, Alashi A, Thorn SL, Zhou B, Liu Q, Xia M, Guo X, Liu YH, An H, Kamilov US, Wang G, Sinusas AJ, Liu C

PubMed · Jul 30, 2025
Myocardial perfusion imaging using SPECT is widely utilized to diagnose coronary artery disease, but image quality can be negatively affected in low-dose and few-view acquisition settings. Although various deep learning methods have been introduced to improve image quality from low-dose or few-view SPECT data, previous approaches often fail to generalize across different acquisition settings, limiting real-world applicability. This work introduces DiffSPECT-3D, a diffusion framework for 3D cardiac SPECT imaging that adapts to different acquisition settings without requiring network re-training or fine-tuning. Using both image and projection data, a consistency strategy is proposed to ensure that diffusion sampling at each step aligns with the low-dose/few-view projection measurements, the image data, and the scanner geometry, thus enabling generalization to different low-dose/few-view settings. Incorporating anatomical spatial information from CT and a total variation constraint, we propose a 2.5D conditional strategy that allows DiffSPECT-3D to observe 3D contextual information from the entire image volume, addressing the 3D memory and computational issues of diffusion models. We extensively evaluated the proposed method on 1,325 clinical <sup>99m</sup>Tc tetrofosmin stress/rest studies from 795 patients. Each study was reconstructed at 5 low-count levels (1% to 50%) and 5 few-view levels (1 to 9 views) for model evaluation. Validated against cardiac catheterization results and diagnostic review by nuclear cardiologists, the presented results show the potential to achieve low-dose and few-view SPECT imaging without compromising clinical performance. Additionally, DiffSPECT-3D can be applied directly to full-dose SPECT images to further improve image quality, especially in a low-dose stress-first cardiac SPECT imaging protocol.

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study

Ashkan Moradi, Fadila Zerka, Joeran S. Bosma, Mohammed R. S. Sunoqrot, Bendik S. Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot

arXiv preprint · Jul 30, 2025
Purpose: To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods: A retrospective study was conducted using Flower FL to train a nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and average precision, for csPCa detection. P-values for performance differences were calculated using permutation testing. Results: The FL configurations were independently optimized for both tasks, showing the best performance at 1 local epoch and 300 rounds using FedMedian for prostate segmentation, and at 5 local epochs and 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation and csPCa detection on the independent test set. The optimized FL model showed higher lesion detection performance than the FL-baseline model, but no evidence of a difference was observed for prostate segmentation. Conclusions: FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
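The PI-CAI score defined above (the average of AUROC and average precision) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the official PI-CAI evaluation code: AUROC is computed via the Mann-Whitney statistic and AP as the mean precision at each true positive.

```python
import numpy as np

def auroc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def average_precision(y_true, scores):
    """AP as the mean precision at each true positive, ranked by score."""
    order = np.argsort(scores)[::-1]
    y = y_true[order]
    cum_tp = np.cumsum(y)
    precision = cum_tp / np.arange(1, len(y) + 1)
    return precision[y == 1].mean()

def picai_score(y_true, scores):
    """PI-CAI metric: the average of AUROC and average precision."""
    return 0.5 * (auroc(y_true, scores) + average_precision(y_true, scores))

y = np.array([0, 0, 1, 1])            # toy case-level csPCa labels
s = np.array([0.1, 0.4, 0.35, 0.8])   # toy model confidence scores
```

Averaging the two metrics balances ranking quality over all thresholds (AUROC) with performance on the positive class under imbalance (AP).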

Automated Brain Tumor Segmentation using Hybrid YOLO and SAM.

M PJ, M SK

PubMed · Jul 30, 2025
Early-stage brain tumor detection is critical for timely diagnosis and effective treatment. We propose a hybrid deep learning method that integrates a Convolutional Neural Network (CNN) with YOLO (You Only Look Once) and the Segment Anything Model (SAM) for diagnosing tumors. In this novel framework, the CNN backbone is enhanced with deeper convolutional layers to enable robust feature extraction, YOLOv11 provides real-time localization of tumor regions, and SAM refines the tumor boundaries through detailed mask generation. A dataset of 896 MRI brain images, including both tumor and healthy cases, is used for training, testing, and validating the model. The proposed model achieves a precision of 94.2%, a recall of 95.6%, and an mAP50(B) score of 96.5%, highlighting the effectiveness of the approach for early-stage brain tumor diagnosis. The validation is demonstrated through a comprehensive ablation study, and the robustness of the system makes it well suited for clinical deployment.
