CareAssist-GPT improves patient user experience with a patient-centered approach to computer-aided diagnosis.

Algarni A

PubMed · Jul 2, 2025
The rapid integration of artificial intelligence (AI) into healthcare has enhanced diagnostic accuracy; however, patient engagement and satisfaction remain significant challenges that hinder the widespread acceptance and effectiveness of AI-driven clinical tools. This study introduces CareAssist-GPT, a novel AI-assisted diagnostic model designed to improve both diagnostic accuracy and the patient experience through real-time, understandable, and empathetic communication. CareAssist-GPT combines high-resolution X-ray images, real-time physiological vital signs, and clinical notes within a unified deep learning predictive framework. Feature extraction is performed using convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformer-based NLP modules. Model performance was evaluated in terms of accuracy, precision, recall, specificity, and response time, alongside patient satisfaction measured through a structured user feedback survey. CareAssist-GPT achieved a diagnostic accuracy of 95.8%, a 2.4% improvement over conventional models, with high precision (94.3%), recall (93.8%), and specificity (92.7%), and an AUC-ROC of 0.97. The system responded within 500 ms, 23.1% faster than existing tools, and achieved a patient satisfaction score of 9.3 out of 10, demonstrating real-time usability and communicative effectiveness. CareAssist-GPT significantly enhances the diagnostic process by improving accuracy and fostering patient trust through transparent, real-time explanations. These findings position it as a promising patient-centered AI solution capable of transforming healthcare delivery by bridging the gap between advanced diagnostics and human-centered communication.
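
Editor's note: as a rough illustration of the multimodal design described above (CNN over X-rays, GRU over vital-sign sequences, transformer-derived text features fused for one prediction), here is a minimal PyTorch sketch. All layer sizes, the six-signal vitals input, the 768-dimensional note embedding, and fusion by concatenation are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalDiagnoser(nn.Module):
    def __init__(self, n_classes=2, text_dim=768):
        super().__init__()
        # Image branch: a tiny CNN standing in for the paper's CNN encoder.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 32)
        )
        # Vitals branch: a GRU over a time series of physiological signals.
        self.gru = nn.GRU(input_size=6, hidden_size=32, batch_first=True)
        # Fusion head over concatenated image, vitals, and text features.
        self.head = nn.Linear(32 + 32 + text_dim, n_classes)

    def forward(self, xray, vitals, text_emb):
        img = self.cnn(xray)                      # (B, 32)
        _, h = self.gru(vitals)                   # h: (1, B, 32)
        fused = torch.cat([img, h[-1], text_emb], dim=1)
        return self.head(fused)

model = MultimodalDiagnoser()
logits = model(torch.randn(2, 1, 224, 224),       # X-ray images
               torch.randn(2, 50, 6),             # 50 timesteps of 6 vitals
               torch.randn(2, 768))               # clinical-note embeddings
print(logits.shape)                               # torch.Size([2, 2])
```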

Enhanced security for medical images using a new 5D hyperchaotic map and deep learning-based segmentation.

Subathra S, Thanikaiselvan V

PubMed · Jul 2, 2025
Medical image encryption is important for maintaining the confidentiality of sensitive medical data and protecting patient privacy. Contemporary healthcare systems store significant patient data in text and graphic form. This research proposes a new 5D hyperchaotic system combined with a customised U-Net architecture. Chaotic maps have become an increasingly popular basis for encryption because of their remarkable characteristics, including statistical randomness and sensitivity to initial conditions. The significant region is segmented from the medical image using the U-Net, and its statistics are used as initial conditions to generate the new random sequence. Zig-zag scrambling first confuses the pixel positions of the medical image, followed by further permutation with the new 5D hyperchaotic sequence. Two stages of diffusion, dynamic DNA flip and dynamic DNA XOR, are then applied to strengthen the encryption algorithm against various attacks. The randomness of the new 5D hyperchaotic system is verified using the NIST SP800-22 statistical test, by calculating the Lyapunov exponent, and by plotting the attractor diagram of the chaotic sequence. The algorithm is validated with statistical measures such as PSNR, MSE, NPCR, UACI, entropy, and Chi-square values. Evaluation on test images yields average horizontal, vertical, and diagonal correlation coefficients of -0.0018, -0.0002, and 0.0007, respectively; a Shannon entropy of 7.9971; a Kolmogorov entropy of 2.9469; an NPCR of 99.61%; a UACI of 33.49%; a Chi-square "PASS" at both the 5% (293.2478) and 1% (310.4574) significance levels; a key space of 2^500; and an average encryption time of approximately 2.93 s per 256 × 256 image on a standard desktop CPU. Performance comparisons with various encryption methods demonstrate that the proposed scheme remains secure and reliable against a range of attacks.
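
Editor's note: a minimal NumPy sketch of two building blocks named above — zig-zag scrambling of pixel positions and a chaos-driven permutation. A logistic map stands in for the paper's 5D hyperchaotic system, and the DNA-coding diffusion stages are omitted; everything here is illustrative only.

```python
import numpy as np

def zigzag_indices(h, w):
    """Traversal order of an h x w grid along anti-diagonals (zig-zag)."""
    order = sorted(((r, c) for r in range(h) for c in range(w)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[1] if (rc[0] + rc[1]) % 2 else -rc[1]))
    return np.array([r * w + c for r, c in order])

def chaotic_permutation(n, x0=0.7, r=3.9999):
    """Permutation derived from a logistic-map sequence (stand-in for 5D chaos)."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.argsort(seq)       # ranking the chaotic sequence gives a permutation

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
flat = img.ravel()[zigzag_indices(4, 4)]           # step 1: zig-zag scrambling
scrambled = flat[chaotic_permutation(flat.size)]   # step 2: chaotic permutation
print(scrambled.reshape(4, 4))
```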

Lightweight convolutional neural networks using nonlinear Lévy chaotic moth flame optimisation for brain tumour classification via efficient hyperparameter tuning.

Dehkordi AA, Neshat M, Khosravian A, Thilakaratne M, Safaa Sadiq A, Mirjalili S

PubMed · Jul 2, 2025
Deep convolutional neural networks (CNNs) have seen significant growth in medical image classification applications due to their ability to automate feature extraction, leverage hierarchical learning, and deliver high classification accuracy. However, deep CNNs require substantial computational power and memory, particularly for large datasets and complex architectures. Additionally, optimising the hyperparameters of deep CNNs, although critical for enhancing model performance, is challenging due to the high computational costs involved, making it difficult without access to high-performance computing resources. To address these limitations, this study presents a fast and efficient model that aims to achieve superior classification performance compared with popular deep CNNs by developing lightweight CNNs combined with the nonlinear Lévy chaotic moth flame optimiser (NLCMFO) for automatic hyperparameter optimisation. NLCMFO integrates Lévy flight, chaotic parameters, and nonlinear control mechanisms to enhance the exploration capabilities of the moth flame optimiser during the search phase, while also leveraging the Lévy flight theorem to improve the exploitation phase. To assess the efficiency of the proposed model, empirical analyses were performed using a dataset of 2314 brain tumour detection images (1245 images of brain tumours and 1069 normal brain images). The evaluation results indicate that CNN_NLCMFO outperformed a non-optimised CNN (92.40% accuracy) by 5% and surpassed established models such as DarkNet19 (96.41%), EfficientNetB0 (96.32%), Xception (96.41%), ResNet101 (92.15%), and InceptionResNetV2 (95.63%) by margins ranging from 1% to 5.25%. The findings demonstrate that the lightweight CNN combined with NLCMFO provides a computationally efficient yet highly accurate solution for medical image classification, addressing the challenges associated with traditional deep CNNs.
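
Editor's note: the Lévy-flight step at the heart of such optimisers is usually generated with Mantegna's algorithm; here is a hedged Python sketch of that step applied to a hyperparameter vector. The NLCMFO's chaotic parameters and nonlinear controls are not reproduced, and beta, the step scale, and the example hyperparameters are illustrative choices.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    """Draw one Lévy-distributed step (Mantegna's algorithm)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Nudge a candidate hyperparameter vector around the best-so-far with a Lévy step:
position = np.array([0.01, 32.0])          # e.g. (learning rate, batch size)
best = np.array([0.005, 64.0])
position = position + 0.01 * levy_step(2) * (position - best)
print(position)
```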

Developing an innovative lung cancer detection model for accurate diagnosis in AI healthcare systems.

Jian W, Haq AU, Afzal N, Khan S, Alsolai H, Alanazi SM, Zamani AT

PubMed · Jul 2, 2025
Accurate lung cancer (LC) identification is a major challenge for AI-based healthcare systems, and various deep learning methods have been proposed for LC diagnosis. In this study, we propose an integrated deep learning model (CNN-GRU) for lung cancer detection, in which convolutional neural networks (CNNs) and gated recurrent units (GRUs) are combined into a single intelligent model. The CNN component extracts spatial features from lung CT images through convolutional and pooling layers, and the extracted features are passed to the GRU component for the final LC prediction. The CNN-GRU model was validated on LC data using the holdout validation technique. Data augmentation techniques such as rotation and brightness adjustment were used to enlarge the dataset for effective training. The Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (Adam) optimizers were applied during training to tune the model parameters, and standard evaluation metrics were used to test performance. Experimental results showed that the model achieved 99.77% accuracy, surpassing previous models. The CNN-GRU model is recommended for accurate LC detection in AI-based healthcare systems due to its improved diagnostic accuracy.
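
Editor's note: a minimal PyTorch sketch of the CNN-GRU pattern described above — a CNN encodes each CT slice and a GRU aggregates the per-slice features for the final prediction. Layer sizes, the slice count, and the binary head are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # per-slice feature
        )
        self.gru = nn.GRU(16, 32, batch_first=True)
        self.fc = nn.Linear(32, 2)                        # cancer / no cancer

    def forward(self, x):                  # x: (B, slices, 1, H, W)
        b, s = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, s, -1)  # (B, slices, 16)
        _, h = self.gru(feats)
        return self.fc(h[-1])

print(CNNGRU()(torch.randn(2, 8, 1, 64, 64)).shape)       # torch.Size([2, 2])
```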

Automated grading of rectocele with an MRI radiomics model.

Lai W, Wang S, Li J, Qi R, Zhao Z, Wang M

PubMed · Jul 2, 2025
To develop an automated grading model for rectocele (RC) based on radiomics and evaluate its efficacy. This study retrospectively analyzed 9,392 magnetic resonance imaging (MRI) images from 222 patients who underwent dynamic magnetic resonance defecography (DMRD) between August 2021 and June 2023. The focus was on the defecation-phase images of the DMRD, as this phase provides critical information for assessing RC. To develop and evaluate the model, the MRI images from all patients were randomly divided into two groups: 70% of the data were allocated to the training cohort to build the model, and the remaining 30% were reserved as a test cohort to evaluate its performance. First, the severity of RC was assessed using the RC MRI grading criteria by two independent radiologists. Two additional radiologists independently delineated the regions of interest (ROIs), from which radiomic features were extracted; the features were then dimensionality-reduced to retain only the most relevant data, and a machine learning model was developed using a support vector machine (SVM). Finally, the receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the classification efficiency of the model. The AUC (macro/micro) of the model using defecation-phase images was 0.794/0.824, and the overall accuracy was 0.754. The radiomics model built on DMRD defecation-phase images is well suited for grading RC and helping clinicians diagnose and treat the disease.
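
Editor's note: a hedged scikit-learn sketch of the pipeline shape described above — dimensionality-reduced radiomic features feeding an SVM, evaluated on a 70/30 split. The feature values are random placeholders, the task is simplified to binary grading, and PCA plus the RBF kernel are assumptions, since the abstract names neither the reducer nor the kernel.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(222, 100)        # 222 patients x 100 radiomic features (placeholder)
y = np.random.randint(0, 2, 222)    # placeholder binary grade labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),             # assumed reducer
                    SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```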

Multitask Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer.

Qiu B, Zheng Y, Liu S, Song R, Wu L, Lu C, Yang X, Wang W, Liu Z, Cui Y

PubMed · Jul 2, 2025
Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer receiving neoadjuvant chemotherapy, providing timely guidance for clinical decision-making. However, current approaches to evaluating LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 patients with locally advanced gastric cancer to develop and validate a multitask deep learning model, named co-attention tri-oriented spatial Mamba (CTSMamba), to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and its performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images, providing clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies. CTSMamba is a multitask deep learning model trained on longitudinal CT images of neoadjuvant chemotherapy-treated locally advanced gastric cancer that accurately predicts lymph node metastasis and overall survival to inform clinical decision-making. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
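
Editor's note: a minimal PyTorch sketch of the multitask idea — one shared encoder with two heads, one for LNM classification and one for a survival risk score, trained with a weighted sum of losses. The toy encoder, the Cox partial-likelihood loss, and the 0.5 weighting are illustrative stand-ins for CTSMamba's actual design.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
lnm_head = nn.Linear(128, 2)         # lymph-node metastasis: yes / no
surv_head = nn.Linear(128, 1)        # scalar risk score for survival

def cox_loss(risk, time, event):
    """Negative Cox partial log-likelihood (higher risk -> earlier event)."""
    order = torch.argsort(time, descending=True)     # build risk sets by sorting
    risk, event = risk[order].squeeze(-1), event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)     # log-sum-exp over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

x = torch.randn(8, 1, 64, 64)                        # a batch of CT crops (toy)
z = encoder(x)
loss = (nn.functional.cross_entropy(lnm_head(z), torch.randint(0, 2, (8,)))
        + 0.5 * cox_loss(surv_head(z), torch.rand(8),
                         torch.randint(0, 2, (8,)).float()))
loss.backward()
print(float(loss))
```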

A Novel Two-step Classification Approach for Differentiating Bone Metastases From Benign Bone Lesions in SPECT/CT Imaging.

Xie W, Wang X, Liu M, Mai L, Shangguan H, Pan X, Zhan Y, Zhang J, Wu X, Dai Y, Pei Y, Zhang G, Yao Z, Wang Z

PubMed · Jul 2, 2025
This study aims to develop and validate a novel two-step deep learning framework for the automated detection, segmentation, and classification of bone metastases in SPECT/CT imaging, accurately distinguishing malignant from benign lesions to improve early diagnosis and facilitate personalized treatment planning. A segmentation model, BL-Seg, was developed to automatically segment lesion regions in SPECT/CT images, utilizing a multi-scale attention fusion module and a triple attention mechanism to capture metabolic variations and refine lesion boundaries. A radiomics-based ensemble learning classifier was subsequently applied to integrate metabolic and texture features for benign-malignant differentiation. The framework was trained and evaluated on a proprietary dataset of SPECT/CT cases from our institution, divided into training and test sets acquired on Siemens SPECT/CT scanners with minor protocol differences. Performance metrics, including the Dice coefficient, sensitivity, specificity, and AUC, were compared against conventional methods. BL-Seg achieved a Dice coefficient of 0.8797, surpassing existing segmentation models. The classification model yielded an AUC of 0.8502, with improved sensitivity and specificity compared with traditional approaches. The proposed framework, with BL-Seg's automated lesion segmentation, demonstrates superior accuracy in detecting, segmenting, and classifying bone metastases, offering a robust tool for early diagnosis and personalized treatment planning in metastatic bone disease.
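
Editor's note: a hedged sketch of the two-step pattern — (1) a segmentation model proposes a lesion mask, (2) features computed inside the mask feed an ensemble classifier. The intensity-statistic "radiomics", the random masks, and the soft-voting forest/boosting ensemble are simple stand-ins for BL-Seg's outputs and the paper's radiomics ensemble.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)

def lesion_features(image, mask):
    """Toy 'radiomics': intensity statistics within the segmented lesion."""
    vals = image[mask > 0]
    return [vals.mean(), vals.std(), vals.max(), vals.size]

rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))
masks = (rng.random((40, 32, 32)) > 0.8).astype(np.uint8)  # stand-in for BL-Seg masks
X = np.array([lesion_features(im, m) for im, m in zip(images, masks)])
y = rng.integers(0, 2, 40)                                 # benign / malignant labels

ensemble = VotingClassifier(
    [("rf", RandomForestClassifier()), ("gb", GradientBoostingClassifier())],
    voting="soft")
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:2]))
```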

SPACE: Subregion Perfusion Analysis for Comprehensive Evaluation of Breast Tumor Using Contrast-Enhanced Ultrasound-A Retrospective and Prospective Multicenter Cohort Study.

Fu Y, Chen J, Chen Y, Lin Z, Ye L, Ye D, Gao F, Zhang C, Huang P

PubMed · Jul 2, 2025
To develop a dynamic contrast-enhanced ultrasound (CEUS)-based method for segmenting tumor perfusion subregions, quantifying tumor heterogeneity, and constructing models for distinguishing benign from malignant breast tumors. This retrospective-prospective cohort study analyzed CEUS videos of patients with breast tumors from four academic medical centers between September 2015 and October 2024. Pixel-based time-intensity curve (TIC) perfusion variables were extracted, followed by the generation of perfusion heterogeneity maps through cluster analysis. A combined diagnostic model incorporating clinical variables, subregion percentages, and radiomics scores was developed, and a nomogram based on this model was subsequently constructed for clinical application. A total of 339 participants were included in this bidirectional study. The retrospective data included 233 tumors divided into training and test sets; the prospective data comprised 106 tumors as an independent test set. Subregion analysis revealed that Subregion 2 dominated benign tumors, while Subregion 3 was prevalent in malignant tumors. Among 59 machine-learning models, Elastic Net (ENET) (α = 0.7) performed best. Age and subregion radiomics scores were independent risk factors. The combined model achieved area under the curve (AUC) values of 0.93, 0.82, and 0.90 in the training, retrospective, and prospective test sets, respectively. The proposed CEUS-based method enhances visualization and quantification of tumor perfusion dynamics, significantly improving the diagnostic accuracy for breast tumors.
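
Editor's note: a minimal Python sketch of subregion perfusion mapping — compute a TIC descriptor per pixel, then cluster pixels into perfusion subregions. The three TIC variables (peak intensity, time to peak, area under the TIC), the use of k-means, and the three-cluster count are illustrative assumptions; the study's actual TIC variables and clustering setup may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
video = rng.random((60, 32, 32))             # CEUS clip: (frames, H, W), toy data

T, H, W = video.shape
tic = video.reshape(T, -1)                   # one TIC per pixel, shape (T, H*W)
features = np.stack([tic.max(axis=0),        # peak intensity
                     tic.argmax(axis=0),     # time to peak (frame index)
                     tic.sum(axis=0)],       # area under the TIC
                    axis=1)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
subregion_map = labels.reshape(H, W)         # perfusion-heterogeneity map
print(np.bincount(labels) / labels.size)     # subregion percentages per cluster
```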

Multichannel deep learning prediction of major pathological response after neoadjuvant immunochemotherapy in lung cancer: a multicenter diagnostic study.

Geng Z, Li K, Mei P, Gong Z, Yan R, Huang Y, Zhang C, Zhao B, Lu M, Yang R, Wu G, Ye G, Liao Y

PubMed · Jul 2, 2025
This study aimed to develop a pretreatment CT-based multichannel predictor integrating deep learning features encoded by Transformer models for preoperative diagnosis of major pathological response (MPR) in non-small cell lung cancer (NSCLC) patients receiving neoadjuvant immunochemotherapy. This multicenter diagnostic study retrospectively included 332 NSCLC patients from four centers. Pretreatment computed tomography images were preprocessed and segmented into region-of-interest cubes for radiomics modeling. These cubes were cropped into four groups of 2-dimensional image modules. A GoogLeNet architecture was trained independently on each group within a multichannel framework, with gradient-weighted class activation mapping and SHapley Additive exPlanations (SHAP) values used for visualization. Deep learning features were extracted from the four image groups and fused using a Transformer fusion model. After training, model performance was evaluated via the area under the curve (AUC), sensitivity, specificity, F1 score, confusion matrices, calibration curves, decision curve analysis, integrated discrimination improvement, net reclassification improvement, and the DeLong test. The dataset was allocated into training (n = 172, Center 1), internal validation (n = 44, Center 1), and external test (n = 116, Centers 2-4) cohorts. Four optimal deep learning models and the best Transformer fusion model were developed. In the external test cohort, the traditional radiomics model exhibited an AUC of 0.736 [95% confidence interval (CI): 0.645-0.826]. The optimal deep learning image module showed a superior AUC of 0.855 (95% CI: 0.777-0.934). The fusion model, named Transformer_GoogLeNet, further improved classification accuracy (AUC = 0.924, 95% CI: 0.875-0.973). The new method of fusing multichannel deep learning features with a Transformer encoder can accurately diagnose whether NSCLC patients receiving neoadjuvant immunochemotherapy will achieve MPR. Our findings may support improved surgical planning and contribute to better treatment outcomes through more accurate preoperative assessment.
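
Editor's note: a hedged PyTorch sketch of the fusion step — per-channel feature vectors (one per image group, e.g. from four GoogLeNet branches) treated as a four-token sequence and fused by a Transformer encoder before the MPR head. The feature dimension, layer count, and mean pooling are illustrative, not the paper's Transformer_GoogLeNet configuration.

```python
import torch
import torch.nn as nn

d = 128                                     # assumed per-channel feature size
fuser = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2)
head = nn.Linear(d, 2)                      # MPR vs. non-MPR

channel_feats = torch.randn(8, 4, d)        # (batch, 4 image groups, features)
fused = fuser(channel_feats).mean(dim=1)    # pool the fused channel tokens
print(head(fused).shape)                    # torch.Size([8, 2])
```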