Enhanced security for medical images using a new 5D hyper chaotic map and deep learning based segmentation.

Subathra S, Thanikaiselvan V

pubmed · Jul 2, 2025
Medical image encryption is important for maintaining the confidentiality of sensitive medical data and protecting patient privacy. Contemporary healthcare systems store significant patient data in text and graphic form. This research proposes a new 5D hyperchaotic system combined with a customised U-Net architecture. Chaotic maps have become an increasingly popular basis for encryption because of their remarkable characteristics, including statistical randomness and sensitivity to initial conditions. The significant region is segmented from the medical image using the U-Net, and its statistics are used as initial conditions to generate the random sequence. First, zig-zag scrambling confuses the pixel positions of the medical image, and a further permutation is applied with the new 5D hyperchaotic sequence. Two stages of diffusion, dynamic DNA flip and dynamic DNA XOR, are then used to strengthen the encryption algorithm against various attacks. The randomness of the new 5D hyperchaotic system is verified using the NIST SP800-22 statistical test, the Lyapunov exponent, and the attractor diagram of the chaotic sequence. The algorithm is validated with statistical measures such as PSNR, MSE, NPCR, UACI, entropy, and Chi-square values. Evaluation on test images yields average horizontal, vertical, and diagonal correlation coefficients of -0.0018, -0.0002, and 0.0007, respectively, Shannon entropy of 7.9971, Kolmogorov entropy of 2.9469, NPCR of 99.61%, UACI of 33.49%, a Chi-square "PASS" at both the 5% (293.2478) and 1% (310.4574) significance levels, a key space of 2^500, and an average encryption time of approximately 2.93 s per 256 × 256 image on a standard desktop CPU. Comparisons with various encryption methods demonstrate that the proposed method remains secure and reliable against various challenges.
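As a rough illustration of the permutation-plus-diffusion structure this abstract describes (not the authors' scheme: a 1D logistic map stands in for the 5D hyperchaotic system, and the zig-zag scrambling and DNA operations are omitted), a minimal sketch might look like:

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Toy 1D chaotic sequence (stand-in for the paper's 5D hyperchaotic system)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def encrypt(img, x0=0.3141, r=3.99):
    """Chaotic permutation of pixel positions followed by XOR diffusion."""
    flat = img.astype(np.uint8).ravel()
    n = flat.size
    chaos = logistic_sequence(x0, r, n)
    perm = np.argsort(chaos)               # chaotic permutation (confusion step)
    scrambled = flat[perm]
    keystream = np.floor(chaos * 256).astype(np.uint8)
    cipher = scrambled ^ keystream         # diffusion by XOR with the keystream
    return cipher.reshape(img.shape), perm, keystream

def decrypt(cipher, perm, keystream):
    flat = cipher.ravel() ^ keystream
    out = np.empty_like(flat)
    out[perm] = flat                       # undo the permutation
    return out.reshape(cipher.shape)

img = (np.random.rand(256, 256) * 255).astype(np.uint8)   # placeholder "medical image"
enc, perm, ks = encrypt(img)
assert np.array_equal(decrypt(enc, perm, ks), img)
```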

Lightweight convolutional neural networks using nonlinear Lévy chaotic moth flame optimisation for brain tumour classification via efficient hyperparameter tuning.

Dehkordi AA, Neshat M, Khosravian A, Thilakaratne M, Safaa Sadiq A, Mirjalili S

pubmed · Jul 2, 2025
Deep convolutional neural networks (CNNs) have seen significant growth in medical image classification applications due to their ability to automate feature extraction, leverage hierarchical learning, and deliver high classification accuracy. However, deep CNNs require substantial computational power and memory, particularly for large datasets and complex architectures. Additionally, optimising the hyperparameters of deep CNNs, although critical for enhancing model performance, is challenging due to the high computational costs involved, particularly without access to high-performance computing resources. To address these limitations, this study presents a fast and efficient model that aims to achieve superior classification performance compared to popular deep CNNs by developing lightweight CNNs combined with the nonlinear Lévy chaotic moth flame optimiser (NLCMFO) for automatic hyperparameter optimisation. NLCMFO integrates Lévy flight, chaotic parameters, and nonlinear control mechanisms to enhance the exploration capabilities of the moth flame optimiser during the search phase, while also leveraging Lévy flight to improve the exploitation phase. To assess the efficiency of the proposed model, empirical analyses were performed using a dataset of 2314 brain tumour detection images (1245 images of brain tumours and 1069 normal brain images). The evaluation results indicate that the CNN_NLCMFO outperformed a non-optimised CNN by 5% (92.40% accuracy) and surpassed established models such as DarkNet19 (96.41%), EfficientNetB0 (96.32%), Xception (96.41%), ResNet101 (92.15%), and InceptionResNetV2 (95.63%) by margins ranging from 1% to 5.25%. The findings demonstrate that the lightweight CNN combined with NLCMFO provides a computationally efficient yet highly accurate solution for medical image classification, addressing the challenges associated with traditional deep CNNs.
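A hedged sketch of the Lévy flight ingredient: Mantegna's algorithm produces heavy-tailed random steps that can perturb a candidate hyperparameter vector during the search. The hyperparameter names, bounds, and step scale below are illustrative assumptions, not the authors' full NLCMFO.

```python
import numpy as np
from math import gamma

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-distributed step."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def perturb_hyperparams(position, lower, upper, scale=0.01):
    """Move a candidate hyperparameter vector with a Levy flight and clip to bounds."""
    new_pos = position + scale * levy_step(len(position)) * (position - lower)
    return np.clip(new_pos, lower, upper)

# Example: [learning_rate, dropout, n_filters] encoded as a real vector (illustrative names).
lower = np.array([1e-4, 0.0, 16])
upper = np.array([1e-1, 0.5, 128])
candidate = np.array([1e-2, 0.25, 64])
print(perturb_hyperparams(candidate, lower, upper))
```

Occasional long jumps from the heavy tail help the search escape local optima, while most steps stay small and refine the current candidate.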

Developing an innovative lung cancer detection model for accurate diagnosis in AI healthcare systems.

Jian W, Haq AU, Afzal N, Khan S, Alsolai H, Alanazi SM, Zamani AT

pubmed · Jul 2, 2025
Accurate lung cancer (LC) identification is a major challenge in AI-based healthcare systems. Various deep learning-based methods have been proposed for lung cancer diagnosis. In this study, we propose a deep learning-based integrated model (CNN-GRU) for lung cancer detection, in which convolutional neural networks (CNNs) and gated recurrent units (GRUs) are combined into a single intelligent model. The CNN extracts spatial features from lung CT images through convolutional and pooling layers, and the extracted features are passed to the GRU for the final prediction of LC. The CNN-GRU model was validated on LC data using the holdout validation technique. Data augmentation techniques such as rotation and brightness adjustment were used to enlarge the dataset for effective training. The optimization techniques Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (Adam) were applied during training to optimize the model parameters, and standard evaluation metrics were used to assess performance. The experimental results show that the model achieved 99.77% accuracy, an improvement over previous models. The CNN-GRU model is recommended for accurate LC detection in AI-based healthcare systems due to its improved diagnostic accuracy.
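The CNN-to-GRU pattern described here can be sketched in a few lines of PyTorch: a small CNN produces per-slice spatial features that a GRU aggregates into a final prediction. Layer sizes, slice count, and input shape below are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    """Illustrative CNN feature extractor followed by a GRU classifier."""
    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),                    # -> (B*T, 32, 1, 1)
        )
        self.gru = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                               # x: (batch, slices, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)    # per-slice spatial features
        feats = feats.view(b, t, -1)                    # sequence of slice features
        _, h = self.gru(feats)                          # final hidden state
        return self.fc(h[-1])                           # (batch, n_classes)

logits = CNNGRU()(torch.randn(2, 8, 1, 64, 64))
print(logits.shape)   # torch.Size([2, 2])
```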

Automated grading of rectocele with an MRI radiomics model.

Lai W, Wang S, Li J, Qi R, Zhao Z, Wang M

pubmed · Jul 2, 2025
To develop an automated grading model for rectocele (RC) based on radiomics and evaluate its efficacy. This study retrospectively analyzed a total of 9,392 magnetic resonance imaging (MRI) images obtained from 222 patients who underwent dynamic magnetic resonance defecography (DMRD) between August 2021 and June 2023. The focus was specifically on the defecation-phase images of the DMRD, as this phase provides critical information for assessing RC. To develop and evaluate the model, the MRI images from all patients were randomly divided into two groups: 70% of the data were allocated to the training cohort to build the model, and the remaining 30% were reserved as a test cohort to evaluate its performance. First, the severity of RC was assessed using the RC MRI grading criteria by two independent radiologists. To extract radiomic features, two additional radiologists independently delineated the regions of interest (ROIs). The extracted radiomics features were then reduced in dimension to retain only the most relevant ones, and a machine learning model was developed using a support vector machine (SVM). Finally, the receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the classification efficiency of the model. The AUC (macro/micro) of the model using defecation-phase images was 0.794/0.824, and the overall accuracy was 0.754. The radiomics model built using DMRD defecation-phase images is well suited for grading RC and can help clinicians diagnose and treat the disease.
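A minimal sketch of the modelling step only (feature matrix → dimensionality reduction → SVM → macro/micro AUC) using scikit-learn. The upstream radiomics extraction from the delineated ROIs is assumed to have produced the feature matrix; the feature count, number of grades, and PCA size below are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, label_binarize
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Placeholder radiomics feature matrix: rows = patients, columns = extracted features.
X = np.random.rand(222, 100)
y = np.random.randint(0, 3, 222)       # RC grade labels (illustrative, 3 grades)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),               # simple dimensionality-reduction stand-in
    SVC(kernel="rbf", probability=True),
)
model.fit(X_tr, y_tr)

probs = model.predict_proba(X_te)
print("macro AUC:", roc_auc_score(y_te, probs, multi_class="ovr", average="macro"))
y_bin = label_binarize(y_te, classes=[0, 1, 2])
print("micro AUC:", roc_auc_score(y_bin, probs, average="micro"))
```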

Multitask Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer.

Qiu B, Zheng Y, Liu S, Song R, Wu L, Lu C, Yang X, Wang W, Liu Z, Cui Y

pubmed · Jul 2, 2025
Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer receiving neoadjuvant chemotherapy, providing timely guidance for clinical decision-making. However, current approaches to evaluate LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 patients with locally advanced gastric cancer to develop and validate a multitask deep learning model, named co-attention tri-oriented spatial Mamba (CTSMamba), to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and the performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all of the cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images, potentially providing clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies. CTSMamba is a multitask deep learning model trained on longitudinal CT images of neoadjuvant chemotherapy-treated locally advanced gastric cancer that accurately predicts lymph node metastasis and overall survival to inform clinical decision-making. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
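The abstract does not detail CTSMamba's architecture, but the general multitask pattern it describes, a shared encoder with one head for LNM classification and one for survival, can be sketched as follows. The encoder here is a placeholder MLP over precomputed features, and the Cox partial-likelihood loss is one common choice for the survival head, not necessarily the authors'.

```python
import torch
import torch.nn as nn

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation, no tie handling)."""
    order = torch.argsort(time, descending=True)       # sort so the risk set is a prefix
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)       # log-sum over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

class MultitaskHead(nn.Module):
    """Shared encoder (placeholder MLP here) with LNM and survival heads."""
    def __init__(self, in_dim=512, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.lnm_head = nn.Linear(hidden, 2)            # lymph node metastasis logits
        self.surv_head = nn.Linear(hidden, 1)           # survival risk score

    def forward(self, x):
        z = self.encoder(x)
        return self.lnm_head(z), self.surv_head(z).squeeze(-1)

model = MultitaskHead()
feats = torch.randn(16, 512)                            # stand-in for CT-derived features
lnm_labels = torch.randint(0, 2, (16,))
time = torch.rand(16) * 60                              # follow-up in months (synthetic)
event = torch.randint(0, 2, (16,)).float()

lnm_logits, risk = model(feats)
loss = nn.functional.cross_entropy(lnm_logits, lnm_labels) + cox_ph_loss(risk, time, event)
loss.backward()
```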

A computationally frugal open-source foundation model for thoracic disease detection in lung cancer screening programs

Niccolò McConnell, Pardeep Vasudev, Daisuke Yamada, Daryl Cheng, Mehran Azimbagirad, John McCabe, Shahab Aslani, Ahmed H. Shahin, Yukun Zhou, The SUMMIT Consortium, Andre Altmann, Yipeng Hu, Paul Taylor, Sam M. Janes, Daniel C. Alexander, Joseph Jacob

arxiv preprint · Jul 2, 2025
Low-dose computed tomography (LDCT) imaging employed in lung cancer screening (LCS) programs is increasing in uptake worldwide. LCS programs herald a generational opportunity to simultaneously detect cancer and non-cancer-related early-stage lung disease. Yet these efforts are hampered by a shortage of radiologists to interpret scans at scale. Here, we present TANGERINE, a computationally frugal, open-source vision foundation model for volumetric LDCT analysis. Designed for broad accessibility and rapid adaptation, TANGERINE can be fine-tuned off the shelf for a wide range of disease-specific tasks with limited computational resources and training data. Relative to models trained from scratch, TANGERINE demonstrates fast convergence during fine-tuning, thereby requiring significantly fewer GPU hours, and displays strong label efficiency, achieving comparable or superior performance with a fraction of fine-tuning data. Pretrained using self-supervised learning on over 98,000 thoracic LDCTs, including the UK's largest LCS initiative to date and 27 public datasets, TANGERINE achieves state-of-the-art performance across 14 disease classification tasks, including lung cancer and multiple respiratory diseases, while generalising robustly across diverse clinical centres. By extending a masked autoencoder framework to 3D imaging, TANGERINE offers a scalable solution for LDCT analysis, departing from recent closed, resource-intensive models by combining architectural simplicity, public availability, and modest computational requirements. Its accessible, open-source lightweight design lays the foundation for rapid integration into next-generation medical imaging tools that could transform LCS initiatives, allowing them to pivot from a singular focus on lung cancer detection to comprehensive respiratory disease management in high-risk populations.
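The masked-autoencoder recipe extended to 3D volumes can be sketched as: cut the volume into cubic patches, keep a random minority of patch tokens, and encode only those. The patch size, embedding dimension, and mask ratio below are assumptions, not TANGERINE's actual configuration.

```python
import torch
import torch.nn as nn

def patchify_3d(vol, p=16):
    """Split a (B, 1, D, H, W) volume into flattened cubic patches of side p."""
    b, c, d, h, w = vol.shape
    vol = vol.reshape(b, c, d // p, p, h // p, p, w // p, p)
    vol = vol.permute(0, 2, 4, 6, 1, 3, 5, 7)           # (B, nd, nh, nw, C, p, p, p)
    return vol.reshape(b, -1, c * p ** 3)               # (B, n_patches, patch_dim)

def random_masking(tokens, mask_ratio=0.75):
    """Keep a random subset of patch tokens, as in the MAE pre-training recipe."""
    b, n, dim = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]          # indices of visible patches
    visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))
    return visible, keep_idx

vol = torch.randn(2, 1, 64, 64, 64)                      # toy LDCT sub-volume
tokens = patchify_3d(vol)                                # (2, 64, 4096)
embed = nn.Linear(tokens.shape[-1], 256)                 # patch embedding (assumed dim)
visible, keep_idx = random_masking(embed(tokens))
print(visible.shape)                                     # (2, 16, 256): only 25% of patches encoded
```

Because the encoder only ever sees the visible minority of patches, pre-training on large LDCT collections stays comparatively cheap, which is the source of the "computationally frugal" claim.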

Are Vision Transformer Representations Semantically Meaningful? A Case Study in Medical Imaging

Montasir Shams, Chashi Mahiul Islam, Shaeke Salman, Phat Tran, Xiuwen Liu

arxiv preprint · Jul 2, 2025
Vision transformers (ViTs) have rapidly gained prominence in medical imaging tasks such as disease classification, segmentation, and detection due to their superior accuracy compared to conventional deep learning models. However, due to their size and complex interactions via the self-attention mechanism, they are not well understood. In particular, it is unclear whether the representations produced by such models are semantically meaningful. In this paper, using a projected gradient-based algorithm, we show that their representations are not semantically meaningful and they are inherently vulnerable to small changes. Images with imperceptible differences can have very different representations; on the other hand, images that should belong to different semantic classes can have nearly identical representations. Such vulnerability can lead to unreliable classification results; for example, unnoticeable changes cause the classification accuracy to be reduced by over 60%. To the best of our knowledge, this is the first work to systematically demonstrate this fundamental lack of semantic meaningfulness in ViT representations for medical image classification, revealing a critical challenge for their deployment in safety-critical systems.
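A hedged sketch of the kind of projected-gradient search the abstract alludes to: perturb an input within a small L-infinity ball so that its representation moves toward that of a target image from another class. The exact objective and budget used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def representation_attack(model, x, x_target, eps=2/255, alpha=0.5/255, steps=40):
    """PGD in an L-inf ball: push f(x + delta) toward f(x_target)."""
    with torch.no_grad():
        z_target = model(x_target)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z = model(x + delta)
        loss = F.mse_loss(z, z_target)                  # distance between representations
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()           # descend: make representations match
            delta.clamp_(-eps, eps)                      # project back into the L-inf ball
            delta.grad.zero_()
    return (x + delta).detach()

# Usage sketch (any feature extractor returning a representation vector would do):
# vit = ...  # e.g. a ViT backbone with its classification head removed
# x_adv = representation_attack(vit, x, x_other_class)
```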

Calibrated Self-supervised Vision Transformers Improve Intracranial Arterial Calcification Segmentation from Clinical CT Head Scans

Benjamin Jin, Grant Mair, Joanna M. Wardlaw, Maria del C. Valdés Hernández

arxiv preprint · Jul 2, 2025
Vision Transformers (ViTs) have gained significant popularity in the natural image domain but have been less successful in 3D medical image segmentation. Nevertheless, 3D ViTs are particularly interesting for large medical imaging volumes due to their efficient self-supervised training within the masked autoencoder (MAE) framework, which enables the use of imaging data without the need for expensive manual annotations. Intracranial arterial calcification (IAC) is an imaging biomarker visible on routinely acquired CT scans linked to neurovascular diseases such as stroke and dementia, and automated IAC quantification could enable their large-scale risk assessment. We pre-train ViTs with MAE and fine-tune them for IAC segmentation for the first time. To develop our models, we use highly heterogeneous data from a large clinical trial, the third International Stroke Trial (IST-3). We evaluate key aspects of MAE pre-trained ViTs in IAC segmentation, and analyse the clinical implications. We show: 1) our calibrated self-supervised ViT beats a strong supervised nnU-Net baseline by 3.2 Dice points, 2) low patch sizes are crucial for ViTs for IAC segmentation and interpolation upsampling with regular convolutions is preferable to transposed convolutions for ViT-based models, and 3) our ViTs increase robustness to higher slice thicknesses and improve risk group classification in a clinical scenario by 46%. Our code is available online.
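The decoder design point reported here (interpolation upsampling with regular convolutions preferable to transposed convolutions for ViT-based segmentation) contrasts two small blocks like the following; channel counts and feature-map sizes are illustrative.

```python
import torch
import torch.nn as nn

class InterpUpBlock(nn.Module):
    """Upsample by trilinear interpolation, then refine with a regular convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.conv = nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.conv(self.up(x))

class TransposedUpBlock(nn.Module):
    """The alternative: learn the upsampling with a transposed convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, x):
        return torch.relu(self.up(x))

x = torch.randn(1, 64, 8, 8, 8)           # coarse ViT feature map (illustrative size)
print(InterpUpBlock(64, 32)(x).shape)      # torch.Size([1, 32, 16, 16, 16])
print(TransposedUpBlock(64, 32)(x).shape)  # torch.Size([1, 32, 16, 16, 16])
```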

Robust brain age estimation from structural MRI with contrastive learning

Carlo Alberto Barbano, Benoit Dufumier, Edouard Duchesnay, Marco Grangetto, Pietro Gori

arxiv preprint · Jul 2, 2025
Estimating brain age from structural MRI has emerged as a powerful tool for characterizing normative and pathological aging. In this work, we explore contrastive learning as a scalable and robust alternative to supervised approaches for brain age estimation. We introduce a novel contrastive loss function, $\mathcal{L}^{exp}$, and evaluate it across multiple public neuroimaging datasets comprising over 20,000 scans. Our experiments reveal four key findings. First, scaling pre-training on diverse, multi-site data consistently improves generalization performance, cutting external mean absolute error (MAE) nearly in half. Second, $\mathcal{L}^{exp}$ is robust to site-related confounds, maintaining low scanner-predictability as training size increases. Third, contrastive models reliably capture accelerated aging in patients with cognitive impairment and Alzheimer's disease, as shown through brain age gap analysis, ROC curves, and longitudinal trends. Lastly, unlike supervised baselines, $\mathcal{L}^{exp}$ maintains a strong correlation between brain age accuracy and downstream diagnostic performance, supporting its potential as a foundation model for neuroimaging. These results position contrastive learning as a promising direction for building generalizable and clinically meaningful brain representations.
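A small sketch of the downstream brain-age-gap analysis mentioned here (predicted minus chronological age, plus MAE), with synthetic numbers standing in for real predictions; the paper's $\mathcal{L}^{exp}$ contrastive loss itself is not reproduced.

```python
import numpy as np

def brain_age_metrics(pred_age, true_age):
    """Mean absolute error and brain-age gap (predicted minus chronological age)."""
    pred_age, true_age = np.asarray(pred_age, float), np.asarray(true_age, float)
    gap = pred_age - true_age
    return {"mae": np.abs(gap).mean(), "gap_mean": gap.mean(), "gap": gap}

# Synthetic example: an "accelerated aging" group shows a positive mean gap.
rng = np.random.default_rng(0)
true_hc = rng.uniform(55, 85, 200)                  # healthy controls
true_ad = rng.uniform(55, 85, 200)                  # cognitively impaired group
pred_hc = true_hc + rng.normal(0, 4, 200)           # unbiased predictions, ~4-year error
pred_ad = true_ad + 6 + rng.normal(0, 4, 200)       # systematic +6-year brain-age gap

print(brain_age_metrics(pred_hc, true_hc)["gap_mean"])   # ~0
print(brain_age_metrics(pred_ad, true_ad)["gap_mean"])   # ~+6 years
```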

Multi-Source COVID-19 Detection via Kernel-Density-based Slice Sampling

Chia-Ming Lee, Bo-Cheng Qiu, Ting-Yao Chen, Ming-Han Sun, Fang-Ying Lin, Jung-Tse Tsai, I-An Tsai, Yu-Fan Lin, Chih-Chung Hsu

arxiv preprint · Jul 2, 2025
We present our solution for the Multi-Source COVID-19 Detection Challenge, which classifies chest CT scans from four distinct medical centers. To address multi-source variability, we employ the Spatial-Slice Feature Learning (SSFL) framework with Kernel-Density-based Slice Sampling (KDS). Our preprocessing pipeline combines lung region extraction, quality control, and adaptive slice sampling to select eight representative slices per scan. We compare EfficientNet and Swin Transformer architectures on the validation set. The EfficientNet model achieves an F1-score of 94.68%, compared to the Swin Transformer's 93.34%. The results demonstrate the effectiveness of our KDS-based pipeline on multi-source data and highlight the importance of dataset balance in multi-institutional medical imaging evaluation.
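The abstract does not spell out the KDS procedure, so the following is one plausible reading, offered purely as an assumption: build a kernel-smoothed density over the z-axis from per-slice lung area and pick eight slices at evenly spaced quantiles of it, so the selected slices cover the informative lung region.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def kds_slice_selection(volume, lung_mask, n_slices=8, smooth_sigma=2.0):
    """Pick representative axial slices where the (smoothed) lung-area density is concentrated."""
    area = lung_mask.reshape(lung_mask.shape[0], -1).sum(axis=1).astype(float)
    density = gaussian_filter1d(area, smooth_sigma)      # kernel-smoothed density over z
    density /= density.sum()
    cdf = np.cumsum(density)
    quantiles = (np.arange(n_slices) + 0.5) / n_slices   # evenly spaced quantiles
    idx = np.searchsorted(cdf, quantiles)
    return volume[idx], idx

# Toy scan: 120 axial slices, lungs roughly in the middle of the volume.
vol = np.random.rand(120, 256, 256)
mask = np.zeros_like(vol, dtype=bool)
mask[40:90] = True
slices, idx = kds_slice_selection(vol, mask)
print(idx)          # eight slice indices spread across the lung region
```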