Page 15 of 3313307 results

Multimodal radiomics in glioma: predicting recurrence in the peritumoural brain zone using integrated MRI.

Li Q, Xiang C, Zeng X, Liao A, Chen K, Yang J, Li Y, Jia M, Song L, Hu X

pubmed | Aug 11 2025
Gliomas exhibit a high recurrence rate, particularly in the peritumoural brain zone after surgery. This study aims to develop and validate a radiomics-based model using preoperative fluid-attenuated inversion recovery (FLAIR) and T1-weighted contrast-enhanced (T1-CE) magnetic resonance imaging (MRI) sequences to predict glioma recurrence within specific quadrants of the surgical margin. In this retrospective study, 149 patients with confirmed glioma recurrence were included. Twenty-three cases from Guizhou Medical University were used as a test set, and the remaining data were randomly split into a training set (70%) and a validation set (30%). Two radiologists from the research group established a Cartesian coordinate system centred on the tumour, based on the FLAIR and T1-CE MRI sequences, dividing each tumour into four quadrants. Recurrence in each quadrant after surgery was assessed, categorising preoperative tumour quadrants as recurrent or non-recurrent. After dividing the tumours into quadrants and removing outliers, the quadrants were assigned to a training set (105 non-recurrence quadrants and 226 recurrence quadrants), a validation set (45 non-recurrence quadrants and 97 recurrence quadrants), and a test set (16 non-recurrence quadrants and 68 recurrence quadrants). Imaging features were extracted from the preoperative sequences, and feature selection was performed using the least absolute shrinkage and selection operator (LASSO). Machine learning models included support vector machine, random forest, extra trees, XGBoost, and LightGBM. Clinical efficacy was evaluated through model calibration and decision curve analysis. The fusion model, which combines features from the FLAIR and T1-CE sequences, exhibited higher predictive accuracy than the single-modality models. Among the models, LightGBM demonstrated the highest predictive accuracy, with an area under the curve of 0.906 in the training set, 0.832 in the validation set, and 0.805 in the test set.
The study highlights the potential of a multimodal radiomics approach for predicting glioma recurrence, with the fusion model serving as a robust tool for clinical decision-making.
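As an illustration of the quadrant scheme described above (our toy sketch, not the authors' code), the following numpy snippet labels each foreground pixel of a 2D tumour mask with one of four quadrants of a Cartesian system centred on the lesion centroid; the function name and the 2D simplification are assumptions for illustration.

```python
import numpy as np

def quadrant_labels(mask):
    """Label each foreground pixel of a 2D binary mask with a quadrant
    (1-4) of a Cartesian system centred on the mask centroid.
    Background stays 0. Note: image row index increases downward,
    so "upper" means smaller row index."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()              # lesion centroid = origin
    labels = np.zeros(mask.shape, dtype=int)
    for y, x in zip(ys, xs):
        if y < cy:
            labels[y, x] = 1 if x >= cx else 2  # upper-right / upper-left
        else:
            labels[y, x] = 3 if x < cx else 4   # lower-left / lower-right
    return labels
```

Per-quadrant recurrence labels could then be attached to the radiomic features extracted from each labelled region.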

18F-FDG PET/CT-based deep radiomic models for enhancing chemotherapy response prediction in breast cancer.

Jiang Z, Low J, Huang C, Yue Y, Njeh C, Oderinde O

pubmed | Aug 11 2025
Enhancing the accuracy of tumor response predictions enables the development of tailored therapeutic strategies for patients with breast cancer. In this study, we developed deep radiomic models to enhance the prediction of chemotherapy response after the first treatment cycle. 18F-Fludeoxyglucose PET/CT imaging data and clinical records from 60 breast cancer patients were retrospectively obtained from the Cancer Imaging Archive. PET/CT scans were conducted at three distinct stages of treatment: prior to the initiation of chemotherapy (T1), following the first cycle of chemotherapy (T2), and after the full chemotherapy regimen (T3). Each patient's primary gross tumor volume (GTV) was delineated on the PET images using a 40% threshold of the maximum standardized uptake value (SUVmax). Radiomic features were extracted from the GTV based on the PET/CT images. In addition, a squeeze-and-excitation network (SENet) deep learning model was employed to generate additional features from the PET/CT images for combined analysis. An XGBoost machine learning model was developed and compared with conventional machine learning algorithms [random forest (RF), logistic regression (LR), and support vector machine (SVM)]. The performance of each model was assessed using receiver operating characteristic area under the curve (ROC AUC) analysis and prediction accuracy in a validation cohort. Model performance was evaluated through fivefold cross-validation on the entire cohort, with data splits stratified by treatment response categories to ensure balanced representation. The AUC values for the machine learning models using only radiomic features were 0.85 (XGBoost), 0.76 (RF), 0.80 (LR), and 0.59 (SVM), with XGBoost showing the best performance. After incorporating the additional deep learning-derived features from SENet, the AUC values increased to 0.92, 0.88, 0.90, and 0.61, respectively, demonstrating significant improvements in predictive accuracy.
Predictions were based on pre-treatment (T1) and post-first-cycle (T2) imaging data, enabling early assessment of chemotherapy response after the initial treatment cycle. Integrating deep learning-derived features significantly enhanced the performance of predictive models for chemotherapy response in breast cancer patients. This study demonstrated the superior predictive capability of the XGBoost model, emphasizing its potential to optimize personalized therapeutic strategies by accurately identifying patients unlikely to respond to chemotherapy after the first treatment cycle.
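The 40% SUVmax delineation rule lends itself to a short sketch. The snippet below (our illustration, not the study's pipeline) thresholds an SUV volume at a fraction of its maximum and computes a few simple first-order features; `gtv_mask`, `simple_features`, and the voxel-volume parameter are hypothetical names.

```python
import numpy as np

def gtv_mask(suv, frac=0.40):
    """Binary GTV mask: voxels at or above frac * SUVmax (the 40% rule)."""
    suv = np.asarray(suv, dtype=float)
    return suv >= frac * suv.max()

def simple_features(suv, mask, voxel_ml=1.0):
    """A few first-order PET features inside the mask (illustrative only)."""
    vals = np.asarray(suv, dtype=float)[mask]
    return {"SUVmax": float(vals.max()),
            "SUVmean": float(vals.mean()),
            "MTV_ml": float(mask.sum() * voxel_ml)}  # metabolic tumor volume
```

In the study, texture-type radiomic features and SENet-derived deep features were extracted from such a region before being fed to the classifiers.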

Enhanced Liver Tumor Detection in CT Images Using 3D U-Net and Bat Algorithm for Hyperparameter Optimization

Nastaran Ghorbani, Bitasadat Jamshidi, Mohsen Rostamy-Malkhalifeh

arxiv preprint | Aug 11 2025
Liver cancer is one of the most prevalent and lethal forms of cancer, making early detection crucial for effective treatment. This paper introduces a novel approach for automated liver tumor segmentation in computed tomography (CT) images by integrating a 3D U-Net architecture with the Bat Algorithm for hyperparameter optimization. The method enhances segmentation accuracy and robustness by intelligently optimizing key parameters like the learning rate and batch size. Evaluated on a publicly available dataset, our model demonstrates a strong ability to balance precision and recall, with a high F1-score at lower prediction thresholds. This is particularly valuable for clinical diagnostics, where ensuring no potential tumors are missed is paramount. Our work contributes to the field of medical image analysis by demonstrating that the synergy between a robust deep learning architecture and a metaheuristic optimization algorithm can yield a highly effective solution for complex segmentation tasks.
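To make the metaheuristic concrete, here is a minimal sketch of Yang-style bat optimization minimising a toy stand-in for validation loss over (log10 learning rate, batch size). The constant loudness and pulse rate, the toy objective, and all names are our simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bat_optimize(objective, bounds, n_bats=15, n_iter=60):
    """Minimal Bat Algorithm sketch: minimise `objective` over a box.
    bounds: sequence of (low, high) per dimension."""
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_bats, dim))       # bat positions
    v = np.zeros_like(x)                         # bat velocities
    fit = np.array([objective(p) for p in x])
    best = x[fit.argmin()].copy()
    best_fit = fit.min()
    A, r = 0.9, 0.5   # loudness, pulse rate (annealed in the full method)
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = rng.uniform(0.0, 2.0)         # echolocation frequency
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > r:                 # local walk around the best bat
                cand = np.clip(best + 0.01 * A * rng.standard_normal(dim), lo, hi)
            f_cand = objective(cand)
            if f_cand < fit[i] and rng.random() < A:   # accept improved moves
                x[i], fit[i] = cand, f_cand
                if f_cand < best_fit:
                    best, best_fit = cand.copy(), f_cand
    return best, best_fit

# Toy stand-in for validation loss over (log10 learning rate, batch size):
def toy_loss(p):
    return (p[0] + 3.0) ** 2 + ((p[1] - 16.0) / 16.0) ** 2

best, loss = bat_optimize(toy_loss, [(-5, -1), (4, 64)])
```

In practice each objective evaluation would be a (costly) training-plus-validation run of the 3D U-Net, which is why a low-evaluation-budget metaheuristic is attractive here.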

MIND: A Noise-Adaptive Denoising Framework for Medical Images Integrating Multi-Scale Transformer

Tao Tang, Chengxu Yang

arxiv preprint | Aug 11 2025
The central role of medical images in disease diagnosis means that their quality directly affects the accuracy of clinical judgment. However, owing to factors such as low-dose scanning, equipment limitations, and imaging artifacts, medical images are often degraded by non-uniform noise, which seriously hampers structure recognition and lesion detection. This paper proposes a medical image adaptive denoising model (MI-ND) that integrates multi-scale convolutional and Transformer architectures, introduces a noise level estimator (NLE) and a noise-adaptive attention module (NAAB), and realizes noise-aware channel-spatial attention regulation and cross-modal feature fusion. Systematic evaluation was carried out on public multimodal datasets. Experiments show that the method significantly outperforms the comparison methods on image quality metrics such as PSNR, SSIM, and LPIPS, and improves the F1 score and ROC AUC in downstream diagnostic tasks, showing strong practical value. The model offers notable benefits in structural recovery, diagnostic sensitivity, and cross-modal robustness, and provides an effective solution for medical image enhancement and AI-assisted diagnosis and treatment.
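A noise level estimator can be very simple in spirit. The sketch below (our illustration, not the MI-ND NLE) estimates a Gaussian noise sigma from the scaled median absolute deviation of a first-difference residual, which cancels slowly varying anatomy and leaves approximately pure noise.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Crude noise-level estimate: scaled median absolute deviation (MAD)
    of the horizontal first-difference residual. Differencing adjacent
    pixels cancels smooth image content; dividing by sqrt(2) restores
    the per-pixel noise scale, and 1.4826 * MAD approximates the std
    of Gaussian noise."""
    d = np.diff(np.asarray(img, dtype=float), axis=1) / np.sqrt(2.0)
    return 1.4826 * np.median(np.abs(d - np.median(d)))
```

An estimate like this could drive the strength of a noise-adaptive attention gate, which is the role the NLE plays in the proposed framework.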

LR-COBRAS: A logic reasoning-driven interactive medical image data annotation algorithm.

Zhou N, Cao J

pubmed | Aug 11 2025
The volume of image data generated in the medical field is continuously increasing. Manual annotation is both costly and prone to human error. Additionally, deep learning-based medical image algorithms rely on large, accurately annotated training datasets, which are expensive to produce and often yield unstable models. This study introduces LR-COBRAS, an interactive computer-aided data annotation algorithm designed for medical experts. LR-COBRAS aims to assist healthcare professionals in achieving more precise annotation outcomes through interactive processes, thereby optimizing medical image annotation tasks. The algorithm enhances must-link and cannot-link constraints during interactions through a logic reasoning module. It automatically derives implied constraint relationships, reducing the frequency of user interactions and improving clustering accuracy. By utilizing rules such as symmetry, transitivity, and consistency, LR-COBRAS effectively balances automation with clinical relevance. Experimental results on the MedMNIST+ and ChestX-ray8 datasets demonstrate that LR-COBRAS significantly outperforms existing methods in clustering accuracy and efficiency while reducing the interaction burden, showcasing superior robustness and applicability. This algorithm provides a novel solution for intelligent medical image analysis. The source code for our implementation is available at https://github.com/cjw-bbxc/MILR-COBRAS.
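The symmetry/transitivity reasoning can be sketched with a union-find structure: must-links merge components, and cannot-links lift to whole components, so one user answer implies many pairwise constraints. This toy class is our illustration of that idea, not the released LR-COBRAS code.

```python
class ConstraintStore:
    """Toy propagation of pairwise annotation constraints: must-link (ML)
    is symmetric and transitive (tracked with union-find), and
    cannot-link (CL) is symmetric and lifts to whole ML components."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.cl = set()                      # frozensets of component roots

    def find(self, a):                       # union-find with path halving
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def must_link(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            # re-point existing cannot-links from rb to the surviving root
            # (a real system would also flag contradictory merges)
            self.cl = {frozenset(ra if r == rb else r for r in pair)
                       for pair in self.cl}
            self.parent[rb] = ra

    def cannot_link(self, a, b):
        self.cl.add(frozenset((self.find(a), self.find(b))))

    def implied(self, a, b):
        """'ML', 'CL', or None for a pair the user was never asked about."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return "ML"
        return "CL" if frozenset((ra, rb)) in self.cl else None
```

Every pair whose answer `implied` can already derive is a query the interactive clustering loop never has to put to the expert, which is where the reduction in interaction burden comes from.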

CMVFT: A Multi-Scale Attention Guided Framework for Enhanced Keratoconus Suspect Classification in Multi-View Corneal Topography.

Lu Y, Li B, Zhang Y, Qi Y, Shi X

pubmed | Aug 11 2025
Retrospective cross-sectional study. To develop a multi-view fusion framework that effectively identifies suspect keratoconus cases and opens a pathway for early clinical intervention. A total of 573 corneal topography maps representing eyes classified as normal, suspect, or keratoconus. We designed the Corneal Multi-View Fusion Transformer (CMVFT), which integrates features from seven standard corneal topography maps. A pretrained ResNet-50 extracts single-view representations that are further refined by a custom-designed Multi-Scale Attention Module (MSAM). This integrated design compensates for the representation gap commonly encountered when applying Transformers to small-sample corneal topography datasets by dynamically bridging local convolution-based feature extraction with global self-attention. A subsequent fusion Transformer then models long-range dependencies across views for comprehensive multi-view feature integration. The primary measure was the framework's ability to differentiate suspect cases from normal and keratoconus cases. Experimental evaluation demonstrated that CMVFT effectively distinguishes suspect cases within a feature space characterized by overlapping attributes. Ablation studies confirmed that both the MSAM and the fusion Transformer are essential for robust multi-view feature integration, successfully compensating for potential representation shortcomings in small datasets. This study is the first to apply a Transformer-driven multi-view fusion approach to corneal topography analysis. By compensating for the representation gap inherent in small-sample settings, CMVFT shows promise for identifying suspect keratoconus cases and supporting early intervention strategies.

Artificial Intelligence-Driven Body Composition Analysis Enhances Chemotherapy Toxicity Prediction in Colorectal Cancer.

Liu YZ, Su PF, Tai AS, Shen MR, Tsai YS

pubmed | Aug 11 2025
Body surface area (BSA)-based chemotherapy dosing remains standard despite its limitations in predicting toxicity. Variations in body composition, particularly skeletal muscle and adipose tissue, influence drug metabolism and toxicity risk. This study aims to investigate the mediating role of body composition in the relationship between BSA-based dosing and dose-limiting toxicities (DLTs) in colorectal cancer patients receiving oxaliplatin-based chemotherapy. We retrospectively analyzed 483 stage III colorectal cancer patients treated at National Cheng Kung University Hospital (2013-2021). An artificial intelligence (AI)-driven algorithm quantified skeletal muscle and adipose tissue compartments from lumbar 3 (L3) vertebral-level computed tomography (CT) scans. Mediation analysis evaluated body composition's role in chemotherapy-related toxicities. Among the cohort, 18.2% (n = 88) experienced DLTs. While BSA alone was not significantly associated with DLTs (OR = 0.473, p = 0.376), increased intramuscular adipose tissue (IMAT) significantly predicted higher DLT risk (OR = 1.047, p = 0.038), whereas skeletal muscle area was protective. Mediation analysis confirmed that IMAT partially mediated the relationship between BSA and DLTs (indirect effect: 0.05, p = 0.040), highlighting adipose infiltration's role in chemotherapy toxicity. BSA-based dosing inadequately accounts for interindividual variations in chemotherapy tolerance. AI-assisted body composition analysis provides a precision oncology framework for identifying high-risk patients and optimizing chemotherapy regimens. Prospective validation is warranted to integrate body composition into routine clinical decision-making.
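For readers unfamiliar with mediation analysis, the product-of-coefficients idea behind the reported indirect effect can be sketched on synthetic data. This continuous-outcome toy (ours, not the study's logistic-regression model, and with made-up coefficients) estimates the indirect effect as a*b from two least-squares fits.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation sketch: a = slope of mediator m
    on exposure x; b = slope of outcome y on m, adjusting for x.
    The indirect (mediated) effect is a * b."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]),
                        m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]),
                        y, rcond=None)[0][2]
    return a * b

# Hypothetical continuous stand-ins for BSA -> IMAT -> toxicity severity:
n = 2000
bsa = rng.normal(1.8, 0.2, n)
imat = 5.0 + 2.0 * bsa + rng.normal(0.0, 0.5, n)         # true a = 2.0
tox = 0.1 * bsa + 0.3 * imat + rng.normal(0.0, 0.5, n)   # true b = 0.3
ie = indirect_effect(bsa, imat, tox)                      # expect about 0.6
```

The study's finding that IMAT partially mediates the BSA-to-toxicity relationship corresponds to a nonzero a*b term alongside a residual direct effect of BSA.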

Improving discriminative ability in mammographic microcalcification classification using deep learning: a novel double transfer learning approach validated with an explainable artificial intelligence technique

Arlan, K., Bjornstrom, M., Makela, T., Meretoja, T. J., Hukkinen, K.

medrxiv preprint | Aug 11 2025
Background: Breast microcalcification diagnostics are challenging due to their subtle presentation, overlap with benign findings, and high inter-reader variability, often leading to unnecessary biopsies. While deep learning (DL) models, particularly deep convolutional neural networks (DCNNs), have shown potential to improve diagnostic accuracy, their clinical application remains limited by the need for large annotated datasets and the "black box" nature of their decision-making. Purpose: To develop and validate a deep learning model (DCNN) using a double transfer learning (d-TL) strategy for classifying suspected mammographic microcalcifications, with explainable AI (XAI) techniques to support model interpretability. Material and methods: A retrospective dataset of 396 annotated regions of interest (ROIs) from full-field digital mammography (FFDM) images of 194 patients who underwent stereotactic vacuum-assisted biopsy at the Women's Hospital radiological department, Helsinki University Hospital, was collected. The dataset was randomly split into training and test sets (24% test set, balanced for benign and malignant cases). A ResNeXt-based DCNN was developed using a d-TL approach: first pretrained on ImageNet, then adapted using an intermediate mammography dataset before fine-tuning on the target microcalcification data. Saliency maps were generated using Gradient-weighted Class Activation Mapping (Grad-CAM) to evaluate the visual relevance of model predictions. Diagnostic performance was compared to a radiologist's BI-RADS-based assessment, using final histopathology as the reference standard. Results: The ensemble DCNN achieved an area under the ROC curve (AUC) of 0.76, with 65% sensitivity, 83% specificity, 79% positive predictive value (PPV), and 70% accuracy. The radiologist achieved an AUC of 0.65 with 100% sensitivity but lower specificity (30%) and PPV (59%).
Grad-CAM visualizations showed consistent activation of the correct ROIs, even in misclassified cases where confidence scores fell below the threshold. Conclusion: The DCNN model using d-TL achieved performance comparable to the radiologist's, with higher specificity and PPV than the BI-RADS-based assessment. The approach addresses data limitation issues and may help reduce additional imaging and unnecessary biopsies.
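The reported reader-study metrics all derive from a 2x2 confusion table; as a reminder of the definitions (using hypothetical counts, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard summary metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),               # true positive rate
        "specificity": tn / (tn + fp),               # true negative rate
        "ppv": tp / (tp + fp),                       # positive predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for illustration: 8 TP, 2 FP, 8 TN, 2 FN
m = diagnostic_metrics(8, 2, 8, 2)
```

The sensitivity/specificity trade-off visible in the results (the radiologist's 100% sensitivity at 30% specificity versus the DCNN's 65%/83%) is exactly a movement along this table as the decision threshold shifts.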

Adapting Biomedical Foundation Models for Predicting Outcomes of Anti-Seizure Medications

Pham, D. K., Mehta, D., Jiang, Y., Thom, D., Chang, R. S.-k., Foster, E., Fazio, T., Holper, S., Verspoor, K., Liu, J., Nhu, D., Barnard, S., O'Brien, T., Chen, Z., French, J., Kwan, P., Ge, Z.

medrxiv preprint | Aug 11 2025
Epilepsy affects over 50 million people worldwide, with anti-seizure medications (ASMs) as the primary treatment for seizure control. However, ASM selection remains a "trial and error" process due to the lack of reliable predictors of effectiveness and tolerability. While machine learning approaches have been explored, existing models are limited to predicting outcomes only for ASMs encountered during training and have not leveraged recent biomedical foundation models for this task. This work investigates ASM outcome prediction using only patient MRI scans and reports. Specifically, we leverage biomedical vision-language foundation models and introduce a novel contextualized instruction-tuning framework that integrates expert-built knowledge trees of MRI entities to enhance their performance. Additionally, by training only on the four most commonly prescribed ASMs, our framework enables generalization to predicting outcomes and effectiveness for unseen ASMs not present during training. We evaluate our instruction-tuning framework on two retrospective epilepsy patient datasets, achieving average AUCs of 71.39 and 63.03 in predicting outcomes for the four primary ASMs and three completely unseen ASMs, respectively. Our approach improves the AUC by 5.53 and 3.51 points over standard report-based instruction tuning for seen and unseen ASMs, respectively. Our code, MRI knowledge tree, prompting templates, and TREE-TUNE generated instruction-answer tuning dataset are available at the link.

Dendrite cross attention for high-dose-rate brachytherapy distribution planning.

Saini S, Liu X

pubmed | Aug 11 2025
Cervical cancer is a significant global health issue, and high-dose-rate brachytherapy (HDR-BT) is crucial for its treatment. However, manually creating HDR-BT plans is time-consuming and relies heavily on the planner's expertise, making standardization difficult. This study introduces two advanced deep learning models to address this need: Bi-branch Cross-Attention UNet (BiCA-UNet) and Dendrite Cross-Attention UNet (DCA-UNet). BiCA-UNet enhances the correlation between the CT scan and the segmentation maps of the clinical target volume (CTV), applicator, bladder, and rectum. It uses two branches: one processes the stacked input of CT scans and segmentations, and the other focuses on the CTV segmentation. A cross-attention mechanism integrates these branches, improving the model's understanding of the CTV region for accurate dose predictions. Building on BiCA-UNet, DCA-UNet introduces a primary branch of stacked inputs and three secondary branches for the CTV, bladder, and rectum segmentations, forming a dendritic structure. Cross-attention with the bladder and rectum segmentations helps the model understand the regions of organs at risk (OARs), refining dose prediction. Evaluation using multiple metrics indicates that both BiCA-UNet and DCA-UNet significantly improve HDR-BT dose prediction accuracy for various applicator types. The cross-attention mechanisms enhance the feature representation of critical anatomical regions, leading to precise and reliable treatment plans. This research highlights the potential of BiCA-UNet and DCA-UNet in advancing HDR-BT planning and contributing to the standardization of treatment plans, and offers promising directions for future research to improve patient outcomes.
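A single-head cross-attention step, as used to couple the branches, can be sketched in a few lines of numpy. Learned query/key/value projections are omitted for brevity, and the function is our illustration rather than the papers' implementation.

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Single-head scaled dot-product cross-attention sketch: query tokens
    from one branch (e.g. the stacked CT + segmentation features) attend
    to key/value tokens from another branch (e.g. CTV features).
    q_feats: (n_q, d), kv_feats: (n_kv, d). Returns (output, weights).
    Real implementations apply learned Wq/Wk/Wv projections first."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)       # (n_q, n_kv)
    scores -= scores.max(axis=-1, keepdims=True)     # softmax stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)               # rows sum to 1
    return w @ kv_feats, w
```

In the dendritic DCA-UNet variant, the primary branch would run such a step once per secondary branch (CTV, bladder, rectum), letting OAR features reshape the dose-prediction features.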