
Recovering Diagnostic Value: Super-Resolution-Aided Echocardiographic Classification in Resource-Constrained Imaging

Krishan Agyakari Raja Babu, Om Prabhu, Annu, Mohanasankar Sivaprakasam

arXiv preprint · Jul 30, 2025
Automated cardiac interpretation in resource-constrained settings (RCS) is often hindered by poor-quality echocardiographic imaging, limiting the effectiveness of downstream diagnostic models. While super-resolution (SR) techniques have shown promise in enhancing magnetic resonance imaging (MRI) and computed tomography (CT) scans, their application to echocardiography, a widely accessible but noise-prone modality, remains underexplored. In this work, we investigate the potential of deep learning-based SR to improve classification accuracy on low-quality 2D echocardiograms. Using the publicly available CAMUS dataset, we stratify samples by image quality and evaluate two clinically relevant tasks of varying complexity: a relatively simple Two-Chamber vs. Four-Chamber (2CH vs. 4CH) view classification and a more complex End-Diastole vs. End-Systole (ED vs. ES) phase classification. We apply two widely used SR models, the Super-Resolution Generative Adversarial Network (SRGAN) and the Super-Resolution Residual Network (SRResNet), to enhance poor-quality images and observe significant gains in performance metrics, particularly with SRResNet, which also offers computational efficiency. Our findings demonstrate that SR can effectively recover diagnostic value in degraded echo scans, making it a viable tool for AI-assisted care in RCS, achieving more with less.
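
A minimal sketch of the SRResNet-style enhancement step described above, assuming a 4x upscaler applied to a single-channel echo frame before the view/phase classifier; the architecture and shapes are illustrative assumptions, not the authors' trained network:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)  # identity skip, as in SRResNet

class TinySRResNet(nn.Module):
    """Simplified SRResNet: conv head, residual trunk, two PixelShuffle stages."""
    def __init__(self, blocks=4):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, 64, 9, padding=4), nn.PReLU())
        self.trunk = nn.Sequential(*[ResidualBlock() for _ in range(blocks)])
        self.tail = nn.Sequential(
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(64, 1, 9, padding=4))

    def forward(self, x):
        return self.tail(self.trunk(self.head(x)))

sr = TinySRResNet().eval()
low_res = torch.rand(1, 1, 64, 64)   # one degraded single-channel echo frame
with torch.no_grad():
    enhanced = sr(low_res)           # (1, 1, 256, 256), fed to the classifier
```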

Deep learning-driven brain tumor classification and segmentation using non-contrast MRI.

Lu NH, Huang YH, Liu KY, Chen TB

PubMed · Jul 30, 2025
This study aims to enhance the accuracy and efficiency of MRI-based brain tumor diagnosis by leveraging deep learning (DL) techniques applied to multichannel MRI inputs. MRI data were collected from 203 subjects, including 100 normal cases and 103 cases with 13 distinct brain tumor types. Non-contrast T1-weighted (T1w) and T2-weighted (T2w) images were combined with their average to form RGB three-channel inputs, enriching the representation for model training. Several convolutional neural network (CNN) architectures were evaluated for tumor classification, while fully convolutional networks (FCNs) were employed for tumor segmentation. Standard preprocessing, normalization, and training procedures were rigorously followed. The RGB fusion of T1w, T2w, and their average significantly enhanced model performance. The classification task achieved a top accuracy of 98.3% using the Darknet53 model, and segmentation attained a mean Dice score of 0.937 with ResNet50. These results demonstrate the effectiveness of multichannel input fusion and model selection in improving brain tumor analysis. While not yet integrated into clinical workflows, this approach holds promise for future development of DL-assisted decision-support tools in radiological practice.
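
The three-channel fusion the authors describe reduces to stacking T1w, T2w, and their mean into an RGB-like tensor; a sketch, assuming co-registered, intensity-normalized slices and this channel order (the paper's exact ordering is an assumption):

```python
import numpy as np

# t1 and t2: co-registered, intensity-normalized 2D slices in [0, 1]
def to_rgb_fusion(t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    avg = (t1 + t2) / 2.0
    return np.stack([t1, t2, avg], axis=-1)  # (H, W, 3) input for the CNN

rgb = to_rgb_fusion(np.random.rand(256, 256), np.random.rand(256, 256))
```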

Role of Artificial Intelligence in Surgical Training by Assessing GPT-4 and GPT-4o on the Japan Surgical Board Examination With Text-Only and Image-Accompanied Questions: Performance Evaluation Study.

Maruyama H, Toyama Y, Takanami K, Takase K, Kamei T

PubMed · Jul 30, 2025
Artificial intelligence and large language models (LLMs), particularly GPT-4 and GPT-4o, have demonstrated high correct-answer rates in medical examinations. GPT-4o has enhanced diagnostic capabilities, advanced image processing, and updated knowledge. Japanese surgeons face critical challenges, including a declining workforce, regional health care disparities, and work-hour-related constraints. Nonetheless, although LLMs could be beneficial in surgical education, no studies have yet assessed GPT-4o's surgical knowledge or its performance in the field of surgery. This study aims to evaluate the potential of GPT-4 and GPT-4o in surgical education by having them take the Japan Surgical Board Examination (JSBE), which includes both text-only questions and questions accompanied by medical images, such as surgical images and computed tomography scans, to comprehensively assess their surgical knowledge. We used 297 multiple-choice questions from the 2021-2023 JSBEs. The questions were in Japanese, and 104 of them included images. First, the GPT-4 and GPT-4o responses to the questions presented as text only were collected via OpenAI's application programming interface to evaluate their correct-answer rate. Subsequently, the correct-answer rate on questions that included images was assessed by inputting both text and images. The overall correct-answer rates of GPT-4o and GPT-4 on the text-only questions were 78% (231/297) and 55% (163/297), respectively, with GPT-4o outperforming GPT-4 by 23 percentage points (P<.01). By contrast, there was no significant improvement in the correct-answer rate for questions that included images compared with the results for the text-only questions. GPT-4o outperformed GPT-4 on the JSBE. However, both models scored lower than the examinees. Despite the capabilities of LLMs, image recognition remains a challenge for them, and their clinical application requires caution owing to the potential inaccuracy of their results.
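
A hedged sketch of how an image-accompanied question can be submitted through OpenAI's Chat Completions API, as the study describes; the file path and prompt text are hypothetical:

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
with open("question_images/ct_slice.png", "rb") as f:  # hypothetical path
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Answer this JSBE question with a single choice: ..."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)  # compared against the answer key
```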

Ultrasound derived deep learning features for predicting axillary lymph node metastasis in breast cancer using graph convolutional networks in a multicenter study.

Agyekum EA, Kong W, Agyekum DN, Issaka E, Wang X, Ren YZ, Tan G, Jiang X, Shen X, Qian X

PubMed · Jul 30, 2025
The purpose of this study was to create and validate an ultrasound-based graph convolutional network (US-based GCN) model for the prediction of axillary lymph node metastasis (ALNM) in patients with breast cancer. A total of 820 eligible patients with breast cancer who underwent preoperative breast ultrasonography (US) between April 2016 and June 2022 were retrospectively enrolled. The training cohort consisted of 621 patients, validation cohort 1 included 112 patients, and validation cohort 2 included 87 patients. A US-based GCN model was built using deep learning features extracted from the US images. The model performed satisfactorily in validation cohort 1, with an AUC of 0.88 and an accuracy of 0.76, and in validation cohort 2, with an AUC of 0.84 and an accuracy of 0.75. This approach has the potential to help guide optimal ALNM management in breast cancer patients, particularly by preventing overtreatment. In conclusion, we developed a US-based GCN model to assess ALN status in breast cancer patients prior to surgery; it provides a possible noninvasive method for detecting ALNM and can aid in clinical decision-making. Prospective studies are anticipated to provide high-level evidence for clinical use.
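
A sketch of the kind of graph convolutional classifier the abstract describes, assuming one deep US feature vector per patient and toy k-NN similarity edges (both assumptions; the paper's graph construction is not specified here):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ALNMGCN(torch.nn.Module):
    def __init__(self, in_dim=512, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 2)  # ALNM-positive vs ALNM-negative

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

x = torch.rand(820, 512)                     # one US feature vector per patient
edge_index = torch.tensor([[0, 1], [1, 0]])  # toy k-NN similarity edges
logits = ALNMGCN()(x, edge_index)            # per-patient class logits
```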

Classification of Brain Tumors in MRI Images with Brain-CNXSAMNet: Integrating Hybrid ConvNeXt and Spatial Attention Module Networks.

Fırat H, Üzen H

PubMed · Jul 30, 2025
Brain tumors (BT) can cause fatal outcomes by affecting body functions, making precise early detection via magnetic resonance imaging (MRI) examinations critical. The complex variations found in BT cells may pose challenges in identifying the tumor type and selecting the most suitable treatment strategy, potentially resulting in different assessments by doctors. As a result, in recent years, AI-powered diagnostic systems have been created to accurately and efficiently identify different types of BT using MRI images. Notably, state-of-the-art deep learning architectures, which have demonstrated efficacy in diverse domains, are now being employed effectively for classifying brain MRI images. This research presents a hybrid model that integrates a spatial attention mechanism (SAM) with ConvNeXt to classify three types of BT: meningioma, pituitary, and glioma. The hybrid model uses ConvNeXt to enhance the receptive field, capturing information from a broader spatial context, which is crucial for recognizing tumor patterns spanning multiple pixels. SAM is applied after ConvNeXt, enabling the network to selectively focus on informative regions, thereby improving the model's ability to distinguish BT types and capture complex spatial relationships. Tested on the BSF and Figshare datasets, the proposed model achieves remarkable accuracies of 99.39% and 98.86%, respectively, outperforming recent studies while requiring fewer training epochs. This hybrid model marks a major step forward in the automatic classification of BT, demonstrating superior accuracy with efficient training.
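
A sketch of the ConvNeXt-plus-spatial-attention idea using a torchvision backbone and a CBAM-style attention block; layer sizes and the attention placement are illustrative assumptions, not the published Brain-CNXSAMNet:

```python
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny

class SpatialAttention(nn.Module):
    """Reweights spatial locations using channel-pooled avg/max maps."""
    def __init__(self, k=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, k, padding=k // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

backbone = convnext_tiny(weights=None).features   # (B, 768, H/32, W/32)
head = nn.Sequential(
    SpatialAttention(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(768, 3))                            # meningioma/pituitary/glioma
mri = torch.rand(1, 3, 224, 224)
logits = head(backbone(mri))
```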

A privacy preserving machine learning framework for medical image analysis using quantized fully connected neural networks with TFHE based inference.

Selvakumar S, Senthilkumar B

PubMed · Jul 30, 2025
Medical image analysis using deep learning algorithms has become a cornerstone of modern healthcare, enabling early detection, diagnosis, treatment planning, and disease monitoring. However, sharing sensitive raw medical data with third parties for analysis raises significant privacy concerns. This paper presents a privacy-preserving machine learning (PPML) framework that uses a Fully Connected Neural Network (FCNN) for secure medical image analysis on the MedMNIST dataset. The proposed PPML framework leverages torus-based fully homomorphic encryption (TFHE) to ensure data privacy during inference, maintain patient confidentiality, and ensure compliance with privacy regulations. The FCNN model is trained in a plaintext environment using Quantization-Aware Training to optimize weights and activations for FHE compatibility. The quantized FCNN model is then validated under FHE constraints through simulation and compiled into an FHE-compatible circuit for encrypted inference on sensitive data. The proposed framework is evaluated on the MedMNIST datasets to assess its accuracy and inference time in both plaintext and encrypted environments. Experimental results reveal that the PPML framework achieves a prediction accuracy of 88.2% in the plaintext setting and 87.5% during encrypted inference, with an average inference time of 150 milliseconds per image. This shows that FCNN models paired with TFHE-based encryption achieve high prediction accuracy on MedMNIST datasets with minimal performance degradation compared to unencrypted inference.
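
A hedged sketch of the train-then-compile flow using Zama's Concrete ML, a common TFHE toolchain for this pattern; the function names follow its public documentation but may differ by version, and the network width, bit-width, and class count are assumptions:

```python
import torch
import torch.nn as nn
from concrete.ml.torch.compile import compile_torch_model  # assumed installed

class FCNN(nn.Module):
    def __init__(self, in_dim=28 * 28, n_classes=9):  # 9 classes: assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

model = FCNN()                    # assume weights trained beforehand
calib = torch.rand(100, 28 * 28)  # representative inputs for quantization
fhe_module = compile_torch_model(model, calib, n_bits=6)
# Simulated encrypted inference on one flattened MedMNIST-sized image:
pred = fhe_module.forward(calib[:1].numpy(), fhe="simulate")
```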

Validating an explainable radiomics approach in non-small cell lung cancer combining high energy physics with clinical and biological analyses.

Monteleone M, Camagni F, Percio S, Morelli L, Baroni G, Gennai S, Govoni P, Paganelli C

PubMed · Jul 30, 2025
This study aims to establish a validation framework for an explainable radiomics-based model, specifically targeting classification of histopathological subtypes in non-small cell lung cancer (NSCLC) patients. We developed an explainable radiomics pipeline using open-access CT images from The Cancer Imaging Archive (TCIA). Our approach incorporates three key prongs: SHAP-based feature selection for explainability within the radiomics pipeline, technical validation of the explainable technique using high energy physics (HEP) data, and biological validation using RNA-sequencing data and clinical observations. Our radiomic model achieved an accuracy of 0.84 in classifying the histological subtype. The technical validation, performed in the HEP domain over 150 numerically equivalent datasets with consistent sample size and class imbalance, confirmed the reliability of the SHAP-based input features. The biological analysis found significant correlations between gene expression and CT-based radiomic features. In particular, the gene MUC21 achieved the highest correlation with the radiomic feature describing the 10th percentile of voxel intensities (r = 0.46, p < 0.05). This study presents a validation framework for explainable CT-based radiomics in lung cancer, combining HEP-driven technical validation with biological validation to enhance the interpretability, reliability, and clinical relevance of XAI models.
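
A minimal sketch of SHAP-based feature selection over a radiomic feature matrix, assuming a linear classifier and a top-k ranking by mean absolute SHAP value (the paper's exact model and cutoff are not given here):

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 100)           # radiomic feature matrix (toy data)
y = np.random.randint(0, 2, 200)       # histological subtype labels (toy)
clf = LogisticRegression(max_iter=1000).fit(X, y)

explainer = shap.LinearExplainer(clf, X)
sv = explainer.shap_values(X)          # (n_samples, n_features)
importance = np.abs(sv).mean(axis=0)   # mean |SHAP| per feature
top_features = np.argsort(importance)[::-1][:10]  # keep the 10 strongest
```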

Refined prognostication of pathological complete response in breast cancer using radiomic features and optimized InceptionV3 with DCE-MRI.

Pattanayak S, Singh T, Kumar R

PubMed · Jul 30, 2025
Neoadjuvant therapy plays a pivotal role in breast cancer treatment, particularly for patients aiming to conserve their breast by reducing tumor size pre-surgery. The ultimate goal of this treatment is achieving a pathologic complete response (pCR), which signifies the complete eradication of cancer cells, thereby lowering the likelihood of recurrence. This study introduces a novel predictive approach to identify patients likely to achieve pCR using radiomic features extracted from MR images, enhanced by the InceptionV3 model and cutting-edge validation methodologies. We gathered data from 255 unique patient IDs sourced from the I-SPY 2 MRI database with the goal of classifying pCR. Our research introduced two key areas of novelty. First, we extracted advanced features from the DICOM series, such as area, perimeter, entropy, and the mean intensity of pixels brighter than the image average. These features provided deeper insights into the characteristics of the MRI data and enhanced the discriminative power of our classification model. Second, we fed these extracted features, along with the combined pixel arrays of each patient's DICOM series, to numerous deep learning models, among which the InceptionV3 (GoogLeNet) model provided the best accuracy. To optimize the model's performance, we experimented with different combinations of loss functions, optimizers, and activation functions. Finally, our classification results were validated using accuracy, AUC, sensitivity, specificity, and F1 score, providing a robust assessment of the model's performance and ensuring the reliability of our findings. By adopting a collaborative approach involving both radiologists and the computer-aided system, we achieved superior predictive performance for pCR, with an area under the curve (AUC) of 0.91 and an accuracy of 0.92. Overall, the combination of advanced feature extraction, the InceptionV3 model with customized hyperparameters, and rigorous validation using state-of-the-art techniques contributed to the accuracy and credibility of our pCR classification study.
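
A sketch of the hand-crafted features the abstract names (area, perimeter, entropy, and the intensity of above-average regions) computed from one DICOM slice; the file path and the thresholding rule are assumptions about their definitions:

```python
import numpy as np
import pydicom
from skimage import measure

ds = pydicom.dcmread("slice_001.dcm")  # hypothetical DICOM file
img = ds.pixel_array.astype(float)

entropy = measure.shannon_entropy(img)
mask = img > img.mean()                # "brighter than the image average"
bright_mean = img[mask].mean()         # intensity of the above-average region
props = measure.regionprops(measure.label(mask))
area = sum(p.area for p in props)
perimeter = sum(p.perimeter for p in props)
features = np.array([area, perimeter, entropy, bright_mean])
```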

A deep learning model for predicting radiation-induced xerostomia in patients with head and neck cancer based on multi-channel fusion.

Lin L, Ren Y, Jian W, Yang G, Zhang B, Zhu L, Zhao W, Meng H, Wang X, He Q

PubMed · Jul 30, 2025
Radiation-induced xerostomia is a common sequela in patients who undergo head and neck radiation therapy. This study aims to develop a three-dimensional deep learning model to predict xerostomia by fusing data from the gross tumor volume primary (GTVp) channel and the parotid glands (PGs) channel. Retrospective data were collected from 180 head and neck cancer patients. Xerostomia was defined as xerostomia of grade ≥ 2 occurring in the 6th month after radiation therapy. The dataset was split into 137 cases (58.4% xerostomia, 41.6% non-xerostomia) for training and 43 (55.8% xerostomia, 44.2% non-xerostomia) for testing. XeroNet was composed of GNet, PNet, and a Naive Bayes decision fusion layer. GNet processed data from the GTVp channel (CT images, the corresponding dose distributions, and the GTVp contours). PNet processed data from the PGs channel (CT images, dose distributions, and the PGs contours). The Naive Bayes decision fusion layer integrated the results from GNet and PNet. Model performance was evaluated using accuracy, F-score, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). The proposed model achieved promising prediction results: the accuracy, AUC, F-score, sensitivity, and specificity were 0.779, 0.858, 0.797, 0.777, and 0.782, respectively. Features extracted from the CT and dose distributions in the GTVp and PGs regions were also used to construct machine learning models, but their performance was inferior to that of our method. Compared with recent studies on xerostomia prediction, our method also showed better performance. The proposed model could effectively extract features from the GTVp and PGs channels, achieving good performance in xerostomia prediction.
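
The Naive Bayes decision fusion layer reduces to combining the two subnetworks' class posteriors under a conditional-independence assumption; a sketch with toy posteriors, using the training prevalences above as the prior:

```python
# p(y | g, p) is proportional to p(y) * [p(y|g)/p(y)] * [p(y|p)/p(y)]
import numpy as np

def nb_fuse(p_gnet, p_pnet, prior):
    joint = prior * (p_gnet / prior) * (p_pnet / prior)
    return joint / joint.sum()

prior = np.array([0.416, 0.584])  # non-xerostomia, xerostomia prevalence
fused = nb_fuse(np.array([0.6, 0.4]), np.array([0.3, 0.7]), prior)
```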

Radiation enteritis associated with temporal sequencing of total neoadjuvant therapy in locally advanced rectal cancer: a preliminary study.

Ma CY, Fu Y, Liu L, Chen J, Li SY, Zhang L, Zhou JY

PubMed · Jul 30, 2025
This study aimed to develop and validate a multi-temporal magnetic resonance imaging (MRI)-based delta-radiomics model to accurately predict the risk of severe acute radiation enteritis (SARE) in patients undergoing total neoadjuvant therapy (TNT) for locally advanced rectal cancer (LARC). A retrospective analysis was conducted on data from 92 patients with LARC who received TNT. All patients underwent pelvic MRI at baseline (pre-treatment) and after neoadjuvant radiotherapy (post-RT). Radiomic features of the primary tumor region were extracted from T2-weighted images at both timepoints. Four delta-feature strategies were defined: absolute difference, percent change, ratio, and feature fusion (concatenation of the pre- and post-RT features). SARE was defined as a composite CTCAE-based symptom score of ≥ 3 within the first 2 weeks of radiotherapy. Features were selected via statistical evaluation and least absolute shrinkage and selection operator (LASSO) regression. Support vector machine (SVM) classifiers were trained using baseline, post-RT, delta, and combined radiomic and clinical features. Model performance was evaluated in an independent test set based on the area under the curve (AUC) and other metrics. Only the delta-fusion strategy retained stable radiomic features after selection and outperformed the difference, percent, and ratio definitions in terms of feature stability and model performance. The SVM model based on combined delta-fusion radiomics and clinical variables demonstrated the best predictive performance and generalizability. In the independent test cohort, this combined model achieved an AUC of 0.711, a sensitivity of 88.9%, and an F1-score of 0.696, surpassing models built with baseline-only or delta-difference features. Integrating multi-temporal radiomic features via delta-fusion with clinical factors markedly improved early prediction of SARE in LARC. The delta-fusion approach outperformed conventional delta calculations and demonstrated superior predictive performance, highlighting its potential for guiding individualized TNT sequencing and proactive toxicity management.
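
A sketch of the delta-fusion pipeline as described: concatenate pre- and post-RT features, select with LASSO, and classify with an SVM; the hyperparameters, feature counts, and data are placeholders, not the paper's tuned values:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pre = np.random.rand(92, 107)    # pre-treatment T2w radiomic features (toy)
post = np.random.rand(92, 107)   # post-RT features (toy)
X = np.hstack([pre, post])       # delta-fusion = feature concatenation
y = np.random.randint(0, 2, 92)  # SARE yes/no (toy labels)

model = make_pipeline(
    StandardScaler(),
    # keep the 20 largest-|coef| features even if LASSO zeroes everything
    SelectFromModel(Lasso(alpha=0.01), threshold=-np.inf, max_features=20),
    SVC(kernel="rbf", probability=True))
model.fit(X[:70], y[:70])
auc = roc_auc_score(y[70:], model.predict_proba(X[70:])[:, 1])
```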