Jiang Z, Low J, Huang C, Yue Y, Njeh C, Oderinde O

PubMed · Aug 11, 2025
Enhancing the accuracy of tumor response predictions enables the development of tailored therapeutic strategies for patients with breast cancer. In this study, we developed deep radiomic models to enhance the prediction of chemotherapy response after the first treatment cycle. 18F-Fludeoxyglucose PET/CT imaging data and clinical records from 60 breast cancer patients were retrospectively obtained from the Cancer Imaging Archive. PET/CT scans were conducted at three distinct stages of treatment: prior to the initiation of chemotherapy (T1), following the first cycle of chemotherapy (T2), and after the full chemotherapy regimen (T3). Each patient's primary gross tumor volume (GTV) was delineated on PET images using a 40% threshold of the maximum standardized uptake value (SUVmax). Radiomic features were extracted from the GTV on the PET/CT images. In addition, a squeeze-and-excitation network (SENet) deep learning model was employed to generate additional features from the PET/CT images for combined analysis. An XGBoost machine learning model was developed and compared with conventional machine learning algorithms [random forest (RF), logistic regression (LR), and support vector machine (SVM)]. The performance of each model was assessed using receiver operating characteristic area under the curve (ROC AUC) analysis and prediction accuracy in a validation cohort. Model performance was evaluated through fivefold cross-validation on the entire cohort, with data splits stratified by treatment response categories to ensure balanced representation. The AUC values for the machine learning models using only radiomic features were 0.85 (XGBoost), 0.76 (RF), 0.80 (LR), and 0.59 (SVM), with XGBoost showing the best performance. After incorporating additional deep learning-derived features from SENet, the AUC values increased to 0.92, 0.88, 0.90, and 0.61, respectively, demonstrating significant improvements in predictive accuracy. Predictions were based on pre-treatment (T1) and post-first-cycle (T2) imaging data, enabling early assessment of chemotherapy response after the initial treatment cycle. Integrating deep learning-derived features significantly enhanced the performance of predictive models for chemotherapy response in breast cancer patients. This study demonstrated the superior predictive capability of the XGBoost model, emphasizing its potential to optimize personalized therapeutic strategies by accurately identifying patients unlikely to respond to chemotherapy after the first treatment cycle.
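For readers who want to see the evaluation pattern the abstract describes, the sketch below compares XGBoost against RF, LR, and SVM with stratified fivefold cross-validated ROC AUC. It is a minimal illustration, not the authors' code: the feature matrix is random placeholder data standing in for the radiomic and SENet-derived features, and the model settings are assumptions.

```python
# Minimal sketch (assumptions noted above): stratified 5-fold CV ROC AUC comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 120))       # 60 patients x 120 radiomic + deep features (placeholder)
y = rng.integers(0, 2, size=60)      # binary chemotherapy response labels (placeholder)

models = {
    "XGBoost": XGBClassifier(n_estimators=200, max_depth=3),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # splits stratified by response
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name}: mean CV ROC AUC = {auc:.2f}")
```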

Nastaran Ghorbani, Bitasadat Jamshidi, Mohsen Rostamy-Malkhalifeh

arXiv preprint · Aug 11, 2025
Liver cancer is one of the most prevalent and lethal forms of cancer, making early detection crucial for effective treatment. This paper introduces a novel approach for automated liver tumor segmentation in computed tomography (CT) images by integrating a 3D U-Net architecture with the Bat Algorithm for hyperparameter optimization. The method enhances segmentation accuracy and robustness by intelligently optimizing key parameters like the learning rate and batch size. Evaluated on a publicly available dataset, our model demonstrates a strong ability to balance precision and recall, with a high F1-score at lower prediction thresholds. This is particularly valuable for clinical diagnostics, where ensuring no potential tumors are missed is paramount. Our work contributes to the field of medical image analysis by demonstrating that the synergy between a robust deep learning architecture and a metaheuristic optimization algorithm can yield a highly effective solution for complex segmentation tasks.
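The sketch below shows how the Bat Algorithm can tune the two hyperparameters the abstract mentions, learning rate and batch size. It is illustrative only: the objective function is a cheap stand-in for "train the 3D U-Net and return validation loss", and all algorithm constants are assumed values rather than the authors' settings.

```python
# Bat Algorithm over (log10 learning rate, batch size); objective is a placeholder surrogate.
import numpy as np

rng = np.random.default_rng(0)

def objective(lr_log10, batch_size):
    # Stand-in for a real 3D U-Net training/validation run.
    return (lr_log10 + 3.0) ** 2 + 0.01 * (batch_size - 8) ** 2

bounds = np.array([[-5.0, -1.0],   # log10(learning rate)
                   [2.0, 32.0]])   # batch size (rounded when used)
n_bats, n_iter = 10, 30
f_min, f_max, alpha, gamma = 0.0, 2.0, 0.9, 0.9

x = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_bats, 2))   # bat positions
v = np.zeros_like(x)                                            # velocities
A = np.ones(n_bats)                                             # loudness
r0 = rng.uniform(0.3, 0.7, n_bats)                              # initial pulse rates
fitness = np.array([objective(*b) for b in x])
best = x[fitness.argmin()].copy()

for t in range(1, n_iter + 1):
    r = r0 * (1.0 - np.exp(-gamma * t))                          # pulse rate grows over time
    for i in range(n_bats):
        freq = f_min + (f_max - f_min) * rng.random()
        v[i] += (x[i] - best) * freq
        cand = np.clip(x[i] + v[i], bounds[:, 0], bounds[:, 1])
        if rng.random() > r[i]:                                  # local random walk around best bat
            cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=2),
                           bounds[:, 0], bounds[:, 1])
        f_cand = objective(*cand)
        if rng.random() < A[i] and f_cand < fitness[i]:          # accept with loudness-based prob.
            x[i], fitness[i] = cand, f_cand
            A[i] *= alpha                                        # reduce loudness (more exploitation)
        if f_cand < objective(*best):
            best = cand.copy()

print(f"best learning rate ~ {10 ** best[0]:.1e}, best batch size ~ {int(round(best[1]))}")
```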

Tao Tang, Chengxu Yang

arXiv preprint · Aug 11, 2025
The core role of medical images in disease diagnosis means that their quality directly affects the accuracy of clinical judgment. However, due to factors such as low-dose scanning, equipment limitations, and imaging artifacts, medical images are often accompanied by non-uniform noise, which seriously affects structure recognition and lesion detection. This paper proposes a medical image adaptive denoising model (MI-ND) that integrates multi-scale convolutional and Transformer architectures, introduces a noise level estimator (NLE) and a noise adaptive attention module (NAAB), and realizes noise-perception-driven channel-spatial attention regulation and cross-modal feature fusion. Systematic testing was carried out on multimodal public datasets. Experiments show that this method significantly outperforms the comparison methods on image quality metrics such as PSNR, SSIM, and LPIPS, and improves the F1 score and ROC AUC in downstream diagnostic tasks, showing strong practical value and potential for broader adoption. The model delivers clear benefits in structural recovery, diagnostic sensitivity, and cross-modal robustness, providing an effective solution for medical image enhancement and AI-assisted diagnosis and treatment.
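The abstract does not give the internals of the NLE or NAAB, so the PyTorch sketch below only illustrates the general idea of noise-conditioned channel-spatial attention: a pooled noise-level estimate modulates channel weights, followed by a spatial gate. Module names, sizes, and the estimator itself are assumptions, not the authors' implementation.

```python
# Illustrative noise-conditioned channel/spatial attention block (not the MI-ND code).
import torch
import torch.nn as nn

class NoiseConditionedAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # crude noise-level estimator: pools the feature map to one scalar per image
        self.noise_est = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1), nn.Sigmoid()
        )
        # channel attention conditioned on the estimated noise level
        self.channel_fc = nn.Sequential(nn.Linear(channels + 1, channels), nn.Sigmoid())
        # spatial attention from channel-pooled statistics
        self.spatial_conv = nn.Sequential(nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, _, _ = x.shape
        sigma = self.noise_est(x)                        # (B, 1) estimated noise level
        pooled = x.mean(dim=(2, 3))                      # (B, C) global channel descriptor
        ch_w = self.channel_fc(torch.cat([pooled, sigma], dim=1)).view(b, c, 1, 1)
        x = x * ch_w                                     # noise-aware channel reweighting
        sp_in = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_conv(sp_in)              # spatial reweighting

feat = torch.randn(2, 32, 64, 64)
print(NoiseConditionedAttention(32)(feat).shape)         # torch.Size([2, 32, 64, 64])
```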

Zhou N, Cao J

PubMed · Aug 11, 2025
The volume of image data generated in the medical field is continuously increasing. Manual annotation is both costly and prone to human error. Additionally, deep learning-based medical image algorithms rely on large, accurately annotated training datasets, which are expensive to produce and often result in instability. This study introduces LR-COBRAS, an interactive computer-aided data annotation algorithm designed for medical experts. LR-COBRAS aims to assist healthcare professionals in achieving more precise annotation outcomes through interactive processes, thereby streamlining medical image annotation. The algorithm enriches must-link and cannot-link constraints during interactions through a logic reasoning module: it automatically derives implied constraint relationships, reducing the number of user interactions required and improving clustering accuracy. By utilizing rules such as symmetry, transitivity, and consistency, LR-COBRAS effectively balances automation with clinical relevance. Experimental results on the MedMNIST+ and ChestX-ray8 datasets demonstrate that LR-COBRAS significantly outperforms existing methods in clustering accuracy and efficiency while reducing interaction burden, showcasing superior robustness and applicability. This algorithm provides a novel solution for intelligent medical image analysis. The source code for our implementation is available at https://github.com/cjw-bbxc/MILR-COBRAS.
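The constraint-reasoning rules named in the abstract (symmetry, transitivity, consistency over must-link/cannot-link pairs) can be illustrated with a small propagation routine. The sketch below is a generic re-implementation of those rules for illustration, not the LR-COBRAS source at the linked repository; symmetry is handled implicitly by storing pairs as frozensets.

```python
# Derive implied pairwise constraints from user-provided ones (illustrative only).
from itertools import combinations, permutations

def propagate(must_link, cannot_link):
    ml = {frozenset(p) for p in must_link}      # must-link pairs (order-free => symmetric)
    cl = {frozenset(p) for p in cannot_link}    # cannot-link pairs
    changed = True
    while changed:
        changed = False
        items = {i for pair in ml | cl for i in pair}
        for triple in combinations(items, 3):
            for x, y, z in permutations(triple):
                # transitivity: ML(x,y) and ML(y,z) imply ML(x,z)
                if {frozenset((x, y)), frozenset((y, z))} <= ml and frozenset((x, z)) not in ml:
                    ml.add(frozenset((x, z))); changed = True
                # ML(x,y) and CL(y,z) imply CL(x,z)
                if frozenset((x, y)) in ml and frozenset((y, z)) in cl and frozenset((x, z)) not in cl:
                    cl.add(frozenset((x, z))); changed = True
    if ml & cl:                                  # consistency: a pair cannot be both ML and CL
        raise ValueError(f"inconsistent constraints: {ml & cl}")
    return ml, cl

ml, cl = propagate(must_link=[(1, 2), (2, 3)], cannot_link=[(3, 4)])
print(sorted(tuple(sorted(p)) for p in ml))   # [(1, 2), (1, 3), (2, 3)]
print(sorted(tuple(sorted(p)) for p in cl))   # [(1, 4), (2, 4), (3, 4)]
```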

Lu Y, Li B, Zhang Y, Qi Y, Shi X

PubMed · Aug 11, 2025
Retrospective cross-sectional study. To develop a multi-view fusion framework that effectively identifies suspect keratoconus cases and facilitates early clinical intervention. The dataset comprised 573 corneal topography maps representing eyes classified as normal, suspect, or keratoconus. We designed the Corneal Multi-View Fusion Transformer (CMVFT), which integrates features from seven standard corneal topography maps. A pretrained ResNet-50 extracts single-view representations that are further refined by a custom-designed Multi-Scale Attention Module (MSAM). This integrated design specifically compensates for the representation gap commonly encountered when applying Transformers to small-sample corneal topography datasets by dynamically bridging local convolution-based feature extraction with global self-attention mechanisms. A subsequent fusion Transformer then models long-range dependencies across views for comprehensive multi-view feature integration. The primary measure was the framework's ability to differentiate suspect cases from normal and keratoconus cases, thereby creating a pathway for early clinical intervention. Experimental evaluation demonstrated that CMVFT effectively distinguishes suspect cases within a feature space characterized by overlapping attributes. Ablation studies confirmed that both the MSAM and the fusion Transformer are essential for robust multi-view feature integration, successfully compensating for potential representation shortcomings in small datasets. This study is the first to apply a Transformer-driven multi-view fusion approach to corneal topography analysis. By compensating for the representation gap inherent in small-sample settings, CMVFT shows promise in identifying suspect keratoconus cases and supporting early clinical intervention strategies.
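The overall data flow (shared ResNet-50 per topography map, then a Transformer fusing the seven view tokens) can be sketched as below. This is only an illustration of the multi-view fusion pattern: the real CMVFT includes the MSAM and other design choices not specified in the abstract, and the dimensions, pooling, and head here are assumptions.

```python
# Shared per-view backbone + Transformer fusion over seven corneal maps (illustrative).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiViewFusion(nn.Module):
    def __init__(self, n_views: int = 7, n_classes: int = 3, dim: int = 256):
        super().__init__()
        backbone = resnet50(weights=None)              # pretrained weights would be used in practice
        backbone.fc = nn.Identity()                    # 2048-d feature per topography map
        self.backbone = backbone
        self.proj = nn.Linear(2048, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)          # normal / suspect / keratoconus

    def forward(self, views):                          # views: (B, 7, 3, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))     # (B*7, 2048), weights shared across views
        tokens = self.proj(feats).view(b, v, -1)       # (B, 7, dim), one token per map
        fused = self.fusion(tokens).mean(dim=1)        # model long-range dependencies across views
        return self.head(fused)

maps = torch.randn(2, 7, 3, 224, 224)                  # 7 topography maps per eye (placeholder)
print(MultiViewFusion()(maps).shape)                   # torch.Size([2, 3])
```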

Liu YZ, Su PF, Tai AS, Shen MR, Tsai YS

PubMed · Aug 11, 2025
Body surface area (BSA)-based chemotherapy dosing remains standard despite its limitations in predicting toxicity. Variations in body composition, particularly skeletal muscle and adipose tissue, influence drug metabolism and toxicity risk. This study aims to investigate the mediating role of body composition in the relationship between BSA-based dosing and dose-limiting toxicities (DLTs) in colorectal cancer patients receiving oxaliplatin-based chemotherapy. We retrospectively analyzed 483 stage III colorectal cancer patients treated at National Cheng Kung University Hospital (2013-2021). An artificial intelligence (AI)-driven algorithm quantified skeletal muscle and adipose tissue compartments from lumbar 3 (L3) vertebral-level computed tomography (CT) scans. Mediation analysis evaluated body composition's role in chemotherapy-related toxicities. Among the cohort, 18.2% (n = 88) experienced DLTs. While BSA alone was not significantly associated with DLTs (OR = 0.473, p = 0.376), increased intramuscular adipose tissue (IMAT) significantly predicted higher DLT risk (OR = 1.047, p = 0.038), whereas skeletal muscle area was protective. Mediation analysis confirmed that IMAT partially mediated the relationship between BSA and DLTs (indirect effect: 0.05, p = 0.040), highlighting adipose infiltration's role in chemotherapy toxicity. BSA-based dosing inadequately accounts for interindividual variations in chemotherapy tolerance. AI-assisted body composition analysis provides a precision oncology framework for identifying high-risk patients and optimizing chemotherapy regimens. Prospective validation is warranted to integrate body composition into routine clinical decision-making.
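A mediation analysis of this kind (exposure BSA, mediator IMAT, binary DLT outcome) can be run with statsmodels' Mediation class, as sketched below on synthetic placeholder data. Variable names, effect sizes, and model specifications are assumptions for illustration; the study's actual covariates and AI-derived body-composition measures are not reproduced here.

```python
# Illustrative BSA -> IMAT -> DLT mediation analysis on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(0)
n = 483
bsa = rng.normal(1.8, 0.2, n)                                 # body surface area (m^2), placeholder
imat = 5 + 3 * bsa + rng.normal(0, 2, n)                      # intramuscular adipose tissue (mediator)
logit = -6 + 0.05 * imat + 0.2 * bsa
dlt = rng.binomial(1, 1 / (1 + np.exp(-logit)))               # dose-limiting toxicity (binary outcome)
df = pd.DataFrame({"bsa": bsa, "imat": imat, "dlt": dlt})

outcome_model = sm.GLM.from_formula("dlt ~ bsa + imat", df, family=sm.families.Binomial())
mediator_model = sm.OLS.from_formula("imat ~ bsa", df)
med = Mediation(outcome_model, mediator_model, exposure="bsa", mediator="imat")
print(med.fit(n_rep=200).summary())                           # ACME rows ~ indirect effect via IMAT
```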

Arlan, K., Bjornstrom, M., Makela, T., Meretoja, T. J., Hukkinen, K.

medRxiv preprint · Aug 11, 2025
Background: Breast microcalcification diagnostics are challenging due to their subtle presentation, overlap with benign findings, and high inter-reader variability, often leading to unnecessary biopsies. While deep learning (DL) models - particularly deep convolutional neural networks (DCNNs) - have shown potential to improve diagnostic accuracy, their clinical application remains limited by the need for large annotated datasets and the "black box" nature of their decision-making. Purpose: To develop and validate a deep learning model (DCNN) using a double transfer learning (d-TL) strategy for classifying suspected mammographic microcalcifications, with explainable AI (XAI) techniques to support model interpretability. Material and methods: A retrospective dataset of 396 annotated regions of interest (ROIs) from full-field digital mammography (FFDM) images of 194 patients who underwent stereotactic vacuum-assisted biopsy at the Women's Hospital radiological department, Helsinki University Hospital, was collected. The dataset was randomly split into training and test sets (24% test set, balanced for benign and malignant cases). A ResNeXt-based DCNN was developed using a d-TL approach: first pretrained on ImageNet, then adapted using an intermediate mammography dataset before fine-tuning on the target microcalcification data. Saliency maps were generated using Gradient-weighted Class Activation Mapping (Grad-CAM) to evaluate the visual relevance of model predictions. Diagnostic performance was compared to a radiologist's BI-RADS-based assessment, using final histopathology as the reference standard. Results: The ensemble DCNN achieved an area under the ROC curve (AUC) of 0.76, with 65% sensitivity, 83% specificity, 79% positive predictive value (PPV), and 70% accuracy. The radiologist achieved an AUC of 0.65 with 100% sensitivity but lower specificity (30%) and PPV (59%). Grad-CAM visualizations showed consistent activation of the correct ROIs, even in misclassified cases where confidence scores fell below the threshold. Conclusion: The DCNN model utilizing d-TL achieved performance comparable to the radiologist, with higher specificity and PPV than BI-RADS. The approach addresses data limitation issues and may help reduce additional imaging and unnecessary biopsies.
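The three-stage d-TL recipe (ImageNet weights, an intermediate mammography stage, then the target microcalcification ROIs) can be expressed schematically as below. The data loaders, class counts, and training schedule are placeholders, not the authors' recipe; only the staging pattern follows the abstract.

```python
# Double transfer learning staging with a ResNeXt backbone (illustrative placeholders).
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d, ResNeXt50_32X4D_Weights

def finetune(model, loader, n_classes, epochs=1, lr=1e-4):
    model.fc = nn.Linear(model.fc.in_features, n_classes)     # new task-specific head
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model

# Stage 1: ImageNet-pretrained backbone
model = resnext50_32x4d(weights=ResNeXt50_32X4D_Weights.IMAGENET1K_V1)

# Stage 2: adapt on an intermediate mammography dataset (placeholder loader)
intermediate_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,)))]
model = finetune(model, intermediate_loader, n_classes=2)

# Stage 3: fine-tune on the target microcalcification ROIs (benign vs malignant)
target_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,)))]
model = finetune(model, target_loader, n_classes=2)
```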

Pham, D. K., Mehta, D., Jiang, Y., Thom, D., Chang, R. S.-k., Foster, E., Fazio, T., Holper, S., Verspoor, K., Liu, J., Nhu, D., Barnard, S., O'Brien, T., Chen, Z., French, J., Kwan, P., Ge, Z.

medRxiv preprint · Aug 11, 2025
Epilepsy affects over 50 million people worldwide, with anti-seizure medications (ASMs) as the primary treatment for seizure control. However, ASM selection remains a "trial and error" process due to the lack of reliable predictors of effectiveness and tolerability. While machine learning approaches have been explored, existing models are limited to predicting outcomes only for ASMs encountered during training and have not leveraged recent biomedical foundation models for this task. This work investigates ASM outcome prediction using only patient MRI scans and reports. Specifically, we leverage biomedical vision-language foundation models and introduce a novel contextualized instruction-tuning framework that integrates expert-built knowledge trees of MRI entities to enhance their performance. Additionally, by training only on the four most commonly prescribed ASMs, our framework enables generalization to predicting outcomes and effectiveness for unseen ASMs not present during training. We evaluate our instruction-tuning framework on two retrospective epilepsy patient datasets, achieving an average AUC of 71.39 and 63.03 in predicting outcomes for four primary ASMs and three completely unseen ASMs, respectively. Our approach improves the AUC by 5.53 and 3.51 compared to standard report-based instruction tuning for seen and unseen ASMs, respectively. Our code, MRI knowledge tree, prompting templates, and TREE-TUNE generated instruction-answer tuning dataset are available at the link.
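To make the "contextualized instruction" idea concrete, the sketch below builds a prompt by attaching knowledge-tree context for MRI entities mentioned in a report before asking about an ASM outcome. Both the tree entries and the prompt wording are invented placeholders; the actual TREE-TUNE templates and knowledge tree are in the authors' repository and are not reproduced here.

```python
# Illustrative construction of a knowledge-tree-contextualized instruction (placeholder content).
knowledge_tree = {
    "hippocampal sclerosis": {
        "parent": "mesial temporal abnormality",
        "features": ["hippocampal atrophy", "increased T2/FLAIR signal"],
    },
    "focal cortical dysplasia": {
        "parent": "malformation of cortical development",
        "features": ["cortical thickening", "blurred grey-white junction"],
    },
}

def build_instruction(report: str, asm: str) -> str:
    # attach knowledge-tree context for entities mentioned in the MRI report
    context = [
        f"{entity} ({info['parent']}): " + ", ".join(info["features"])
        for entity, info in knowledge_tree.items()
        if entity in report.lower()
    ]
    return (
        "Context:\n" + "\n".join(context) + "\n\n"
        f"MRI report: {report}\n"
        f"Question: Will this patient respond to {asm}? Answer yes or no."
    )

print(build_instruction("MRI shows left hippocampal sclerosis.", "levetiracetam"))
```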

Manzoor, F., Gupta, V., Pinky, L., Wang, Z., Chen, Z., Deng, Y., Neupane, S.

medRxiv preprint · Aug 11, 2025
Prostate cancer remains one of the most prevalent malignancies and a leading cause of cancer-related deaths among men worldwide. Despite advances in traditional diagnostic methods such as prostate-specific antigen testing, digital rectal examination, and multiparametric magnetic resonance imaging, these approaches remain constrained by modality-specific limitations, suboptimal sensitivity and specificity, and reliance on expert interpretation, which may introduce diagnostic inconsistency. Multimodal deep learning and machine learning fusion, which integrates diverse data sources including imaging, clinical, and molecular information, has emerged as a promising strategy to enhance the accuracy of prostate cancer classification. This review outlines the current state-of-the-art deep learning- and machine learning-based fusion techniques for prostate cancer classification, focusing on their implementation, performance, challenges, and clinical applicability. Following the PRISMA guidelines, a total of 131 studies were identified, of which 27 met the inclusion criteria for studies published between 2021 and 2025. Extracted data included input techniques, deep learning architectures, performance metrics, and validation approaches. The majority of the studies used an early fusion approach with convolutional neural networks to integrate the data. Clinical and imaging data were the most commonly used modalities in the reviewed studies. Overall, multimodal deep learning and machine learning-based fusion significantly advances prostate cancer classification and outperforms unimodal approaches.
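As a minimal illustration of the fusion pattern most reviewed studies used, the sketch below concatenates CNN-derived imaging features with clinical variables before a joint classifier. The tiny backbone, dimensions, and inputs are placeholders and do not correspond to any specific reviewed model.

```python
# Feature-concatenation fusion of imaging and clinical data (illustrative placeholders).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, n_clinical: int = 8, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(                       # tiny stand-in for an imaging backbone
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten()
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 + n_clinical, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), clinical], dim=1)   # concatenate before classification
        return self.classifier(fused)

mri = torch.randn(4, 1, 128, 128)                       # e.g. an MRI slice (placeholder)
clinical = torch.randn(4, 8)                            # e.g. PSA, age, other variables (placeholder)
print(FusionClassifier()(mri, clinical).shape)          # torch.Size([4, 2])
```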

Wang M, Chen H, Mao L, Jiao W, Han H, Zhang Q

PubMed · Aug 11, 2025
Deep learning has made notable strides in the ultrasonic diagnosis of lymph nodes, yet it faces three primary challenges: a limited number of lymph node images and a scarcity of annotated data; difficulty in comprehensively learning both local and global semantic information; and obstacles to collaborative learning of image segmentation and classification for accurate diagnosis. To address these issues, we propose the Cross-organ Cross-modality CSwin-Transformer Coupled Convolutional Network (C⁵-Net). First, we design a cross-organ and cross-modality transfer learning strategy to leverage skin lesion dermoscopic images, which have abundant annotations and share similarities in fields of view and morphology with lymph node ultrasound images. Second, we couple a Transformer with a convolutional network to comprehensively learn both local details and global information. Third, the encoder weights in the C⁵-Net are shared between the segmentation and classification tasks to exploit synergistic knowledge, enhancing overall performance in ultrasound lymph node diagnosis. Our study leverages 690 lymph node ultrasound images and 1000 skin lesion dermoscopic images. Experimental results show that our C⁵-Net achieves the best segmentation and classification performance for lymph nodes among advanced methods, with a segmentation Dice coefficient of 0.854 and a classification accuracy of 0.874. Our method has consistently shown accuracy and robustness in the segmentation and classification of lymph nodes, contributing to the early and accurate detection of lymph node malignancy, which is potentially essential for effective treatment planning in clinical oncology.
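The shared-encoder multi-task idea (one encoder feeding both a segmentation decoder and a classification head) can be sketched as below. This is only the general pattern, not the C⁵-Net itself: the CSwin-Transformer/CNN coupling and the cross-organ transfer strategy are not reproduced, and the layer sizes are assumptions.

```python
# Shared encoder with segmentation and classification heads (illustrative only).
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(                   # weights shared by both tasks
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(                  # upsample back to a lesion mask
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),
        )
        self.cls_head = nn.Sequential(                  # benign vs malignant node
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.seg_head(z), self.cls_head(z)

us = torch.randn(2, 1, 128, 128)                        # lymph node ultrasound image (placeholder)
mask_logits, class_logits = SharedEncoderMultiTask()(us)
print(mask_logits.shape, class_logits.shape)            # (2, 1, 128, 128) (2, 2)
```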