18F-FDG PET/CT-based deep radiomic models for enhancing chemotherapy response prediction in breast cancer.

Jiang Z, Low J, Huang C, Yue Y, Njeh C, Oderinde O

PubMed · Aug 11, 2025
Enhancing the accuracy of tumor response prediction enables the development of tailored therapeutic strategies for patients with breast cancer. In this study, we developed deep radiomic models to enhance the prediction of chemotherapy response after the first treatment cycle. 18F-Fludeoxyglucose PET/CT imaging data and clinical records from 60 breast cancer patients were retrospectively obtained from The Cancer Imaging Archive. PET/CT scans were conducted at three distinct stages of treatment: prior to the initiation of chemotherapy (T1), following the first cycle of chemotherapy (T2), and after the full chemotherapy regimen (T3). Each patient's primary gross tumor volume (GTV) was delineated on PET images using a 40% threshold of the maximum standardized uptake value (SUVmax). Radiomic features were extracted from the GTV based on the PET/CT images. In addition, a squeeze-and-excitation network (SENet) deep learning model was employed to generate additional features from the PET/CT images for combined analysis. An XGBoost machine learning model was developed and compared with conventional machine learning algorithms [random forest (RF), logistic regression (LR), and support vector machine (SVM)]. The performance of each model was assessed using receiver operating characteristic area under the curve (ROC AUC) analysis and prediction accuracy in a validation cohort. Model performance was evaluated through fivefold cross-validation on the entire cohort, with data splits stratified by treatment response categories to ensure balanced representation. The AUC values for the machine learning models using only radiomic features were 0.85 (XGBoost), 0.76 (RF), 0.80 (LR), and 0.59 (SVM), with XGBoost showing the best performance. After incorporating additional deep learning-derived features from SENet, the AUC values increased to 0.92, 0.88, 0.90, and 0.61, respectively, demonstrating significant improvements in predictive accuracy. Predictions were based on pre-treatment (T1) and post-first-cycle (T2) imaging data, enabling early assessment of chemotherapy response after the initial treatment cycle. Integrating deep learning-derived features significantly enhanced the performance of predictive models for chemotherapy response in breast cancer patients. This study demonstrated the superior predictive capability of the XGBoost model, emphasizing its potential to optimize personalized therapeutic strategies by accurately identifying patients unlikely to respond to chemotherapy after the first treatment cycle.
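
As a rough illustration of the evaluation pattern this abstract describes (an XGBoost classifier scored by stratified fivefold cross-validated ROC AUC), the following sketch uses a stand-in feature matrix; the feature values, labels, and hyperparameters are hypothetical, not the study's.

```python
# Hedged sketch: stratified fivefold cross-validated ROC AUC for an XGBoost
# classifier on a combined radiomic + deep-feature matrix. Data are synthetic
# stand-ins; only the evaluation pattern mirrors the abstract.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 120))   # stand-in for radiomic + SENet features
y = rng.integers(0, 2, size=60)  # stand-in for responder labels

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # stratified by response
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"ROC AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```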

LR-COBRAS: A logic reasoning-driven interactive medical image data annotation algorithm.

Zhou N, Cao J

PubMed · Aug 11, 2025
The volume of image data generated in the medical field is continuously increasing. Manual annotation is both costly and prone to human error. Additionally, deep learning-based medical image algorithms rely on large, accurately annotated training datasets, which are expensive to produce and often result in instability. This study introduces LR-COBRAS, an interactive computer-aided data annotation algorithm designed for medical experts. LR-COBRAS aims to assist healthcare professionals in achieving more precise annotation outcomes through interactive processes, thereby optimizing medical image annotation tasks. A logic reasoning module enhances must-link and cannot-link constraints during interactions: it automatically generates potential constraint relationships, reducing the frequency of user interactions and improving clustering accuracy. By applying rules such as symmetry, transitivity, and consistency, LR-COBRAS effectively balances automation with clinical relevance. Experimental results on the MedMNIST+ and ChestX-ray8 datasets demonstrate that LR-COBRAS significantly outperforms existing methods in clustering accuracy and efficiency while reducing interaction burden, showcasing superior robustness and applicability. This algorithm provides a novel solution for intelligent medical image analysis. The source code for our implementation is available at https://github.com/cjw-bbxc/MILR-COBRAS.
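
The constraint rules the abstract names (symmetry, transitivity, consistency) can be illustrated with a small sketch: must-link constraints are merged by transitivity via union-find, and cannot-link constraints carry over to merged groups by consistency. This shows only the general pattern, not LR-COBRAS itself.

```python
# Hedged sketch of constraint propagation for interactive clustering:
# must-link (ML) merges groups transitively; cannot-link (CL) pairs are
# kept on canonical group representatives so they extend by consistency.
class ConstraintStore:
    def __init__(self, n):
        self.parent = list(range(n))      # union-find over must-link groups
        self.cl = set()                   # cannot-link pairs of group roots

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def must_link(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb          # transitivity: merge groups
            # consistency: re-canonicalize CL pairs after the merge
            self.cl = {tuple(sorted((self.find(x), self.find(y))))
                       for x, y in self.cl}

    def cannot_link(self, a, b):
        self.cl.add(tuple(sorted((self.find(a), self.find(b)))))

    def inferred_cannot_link(self, a, b):
        # symmetry is implicit in the sorted canonical pair
        return tuple(sorted((self.find(a), self.find(b)))) in self.cl

cs = ConstraintStore(4)
cs.must_link(0, 1)                        # 0 and 1 join one group
cs.cannot_link(1, 2)
print(cs.inferred_cannot_link(0, 2))      # True, by consistency
```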

CMVFT: A Multi-Scale Attention Guided Framework for Enhanced Keratoconus Suspect Classification in Multi-View Corneal Topography.

Lu Y, Li B, Zhang Y, Qi Y, Shi X

PubMed · Aug 11, 2025
Retrospective cross-sectional study. To develop a multi-view fusion framework that effectively identifies suspect keratoconus cases and facilitates early clinical intervention. A total of 573 corneal topography maps representing eyes classified as normal, suspect, or keratoconus were analyzed. We designed the Corneal Multi-View Fusion Transformer (CMVFT), which integrates features from seven standard corneal topography maps. A pretrained ResNet-50 extracts single-view representations that are further refined by a custom-designed Multi-Scale Attention Module (MSAM). This integrated design compensates for the representation gap commonly encountered when applying Transformers to small-sample corneal topography datasets by dynamically bridging local convolution-based feature extraction with global self-attention. A subsequent fusion Transformer then models long-range dependencies across views for comprehensive multi-view feature integration. The primary outcome measure was the framework's ability to differentiate suspect cases from normal and keratoconus cases, thereby creating a pathway for early clinical intervention. Experimental evaluation demonstrated that CMVFT effectively distinguishes suspect cases within a feature space characterized by overlapping attributes. Ablation studies confirmed that both the MSAM and the fusion Transformer are essential for robust multi-view feature integration, successfully compensating for potential representation shortcomings in small datasets. This study is the first to apply a Transformer-driven multi-view fusion approach to corneal topography analysis. By compensating for the representation gap inherent in small-sample settings, CMVFT shows promise for identifying suspect keratoconus cases and supporting early intervention strategies.
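
A schematic of the multi-view design described above: a pretrained ResNet-50 embeds each of the seven maps, and a Transformer encoder fuses the view tokens. The MSAM refinement is omitted and all dimensions are illustrative assumptions, so this is a sketch of the architecture family rather than CMVFT itself.

```python
# Hedged sketch: per-view ResNet-50 features fused across views by a
# Transformer encoder. Dimensions, depth, and head are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiViewFusion(nn.Module):
    def __init__(self, n_views=7, n_classes=3, dim=512):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()               # 2048-d per-view features
        self.backbone = backbone
        self.proj = nn.Linear(2048, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)     # normal / suspect / keratoconus

    def forward(self, views):                     # views: (B, 7, 3, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))    # (B*7, 2048)
        tokens = self.proj(feats).view(b, v, -1)      # (B, 7, dim)
        fused = self.fusion(tokens).mean(dim=1)       # pool across views
        return self.head(fused)
```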

Artificial Intelligence-Driven Body Composition Analysis Enhances Chemotherapy Toxicity Prediction in Colorectal Cancer.

Liu YZ, Su PF, Tai AS, Shen MR, Tsai YS

PubMed · Aug 11, 2025
Body surface area (BSA)-based chemotherapy dosing remains standard despite its limitations in predicting toxicity. Variations in body composition, particularly in skeletal muscle and adipose tissue, influence drug metabolism and toxicity risk. This study investigates the mediating role of body composition in the relationship between BSA-based dosing and dose-limiting toxicities (DLTs) in colorectal cancer patients receiving oxaliplatin-based chemotherapy. We retrospectively analyzed 483 stage III colorectal cancer patients treated at National Cheng Kung University Hospital (2013-2021). An artificial intelligence (AI)-driven algorithm quantified skeletal muscle and adipose tissue compartments from computed tomography (CT) scans at the third lumbar (L3) vertebral level. Mediation analysis evaluated body composition's role in chemotherapy-related toxicities. Among the cohort, 18.2% (n = 88) experienced DLTs. While BSA alone was not significantly associated with DLTs (OR = 0.473, p = 0.376), increased intramuscular adipose tissue (IMAT) significantly predicted higher DLT risk (OR = 1.047, p = 0.038), whereas skeletal muscle area was protective. Mediation analysis confirmed that IMAT partially mediated the relationship between BSA and DLTs (indirect effect: 0.05, p = 0.040), highlighting the role of adipose infiltration in chemotherapy toxicity. BSA-based dosing inadequately accounts for interindividual variation in chemotherapy tolerance. AI-assisted body composition analysis provides a precision oncology framework for identifying high-risk patients and optimizing chemotherapy regimens. Prospective validation is warranted to integrate body composition into routine clinical decision-making.
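
For readers unfamiliar with product-of-coefficients mediation, a minimal sketch of the analysis pattern follows. The column names (bsa, imat, dlt), the file path, and the OLS/logit estimators are assumptions; the study's actual covariate set and inference procedure may differ, and significance would in practice be assessed by bootstrap.

```python
# Hedged sketch of a product-of-coefficients mediation check consistent with
# the abstract: does IMAT mediate the BSA -> DLT pathway?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical columns: bsa, imat, dlt (0/1)

# Path a: mediator regressed on exposure
a_fit = smf.ols("imat ~ bsa", data=df).fit()
# Path b (with direct effect c'): outcome on exposure + mediator
b_fit = smf.logit("dlt ~ bsa + imat", data=df).fit()

a = a_fit.params["bsa"]
b = b_fit.params["imat"]
print(f"indirect effect (a*b) = {a * b:.3f}")
```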

Dendrite cross attention for high-dose-rate brachytherapy distribution planning.

Saini S, Liu X

PubMed · Aug 10, 2025
Cervical cancer is a significant global health issue, and high-dose-rate brachytherapy (HDR-BT) is crucial for its treatment. However, manually creating HDR-BT plans is time-consuming and relies heavily on the planner's expertise, making standardization difficult. This study introduces two advanced deep learning models to address this need: Bi-branch Cross-Attention UNet (BiCA-UNet) and Dendrite Cross-Attention UNet (DCA-UNet). BiCA-UNet enhances the correlation between the CT scan and the segmentation maps of the clinical target volume (CTV), applicator, bladder, and rectum. It uses two branches: one processes the stacked input of CT scans and segmentations, and the other focuses on the CTV segmentation. A cross-attention mechanism integrates these branches, improving the model's understanding of the CTV region for accurate dose prediction. Building on BiCA-UNet, DCA-UNet introduces a primary branch of stacked inputs and three secondary branches for the CTV, bladder, and rectum segmentations, forming a dendritic structure. Cross-attention with the bladder and rectum segmentations helps the model understand the regions of organs at risk (OAR), refining dose prediction. Evaluation of these models using multiple metrics indicates that both BiCA-UNet and DCA-UNet significantly improve HDR-BT dose prediction accuracy for various applicator types. The cross-attention mechanisms enhance the feature representation of critical anatomical regions, leading to precise and reliable treatment plans. This research highlights the potential of BiCA-UNet and DCA-UNet to advance HDR-BT planning, contribute to the standardization of treatment plans, and offer promising directions for future research to improve patient outcomes.
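
The core mechanism both models share, cross-attention between a primary stacked-input branch and a secondary segmentation branch, can be sketched as below; channel sizes and block layout are illustrative assumptions, not the papers' exact design.

```python
# Hedged sketch: queries come from the primary (stacked-input) branch,
# keys/values from a secondary segmentation branch, as in a generic
# cross-attention block.
import torch
import torch.nn as nn

class CrossAttention2d(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, primary, secondary):       # both: (B, C, H, W)
        b, c, h, w = primary.shape
        q = primary.flatten(2).transpose(1, 2)   # (B, H*W, C) queries
        kv = secondary.flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, kv, kv)          # attend to secondary branch
        return self.norm(q + fused).transpose(1, 2).view(b, c, h, w)

x = torch.randn(2, 64, 32, 32)                   # e.g. CT-stack features
s = torch.randn(2, 64, 32, 32)                   # e.g. CTV-branch features
print(CrossAttention2d()(x, s).shape)            # torch.Size([2, 64, 32, 32])
```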

Prediction of cervical cancer lymph node metastasis based on multisequence magnetic resonance imaging radiomics and deep learning features: a dual-center study.

Luo S, Guo Y, Ye Y, Mu Q, Huang W, Tang G

PubMed · Aug 10, 2025
Cervical cancer is a leading cause of death from malignant tumors in women, and accurate evaluation of occult lymph node metastasis (OLNM) is crucial for optimal treatment. This study aimed to develop several predictive models, including a Clinical model, Radiomics models (RD), Deep Learning models (DL), Radiomics-Deep Learning fusion models (RD-DL), and a Clinical-RD-DL combined model, for assessing the risk of OLNM in cervical cancer patients. The study included 130 patients from Center 1 (training set) and 55 from Center 2 (test set). Clinical data and imaging sequences (T1, T2, and DWI) were used to extract features for model construction. Model performance was assessed using the DeLong test, and SHAP analysis was used to examine feature contributions. Both the RD-combined (AUC = 0.803) and DL-combined (AUC = 0.818) models outperformed the single-sequence models as well as the standalone Clinical model (AUC = 0.702). The RD-DL model yielded the highest performance, achieving an AUC of 0.981 in the training set and 0.903 in the test set. Notably, integrating clinical variables did not further improve predictive performance; the Clinical-RD-DL model performed comparably to the RD-DL model. SHAP analysis showed that deep learning features had the greatest impact on model predictions. Both the RD and DL models effectively predict OLNM, with the RD-DL model offering superior performance. These findings support a rapid, non-invasive method for clinical prediction.
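
A minimal sketch of the SHAP feature-attribution step mentioned above, applied to a stand-in tree-ensemble model; the feature matrix and labels are synthetic placeholders, not the study's data or pipeline.

```python
# Hedged sketch: attributing a fitted tree-ensemble's predictions to
# individual features with SHAP, then ranking features by mean |SHAP|.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 50))           # stand-in RD + DL feature matrix
y = rng.integers(0, 2, size=130)         # stand-in OLNM labels

model = XGBClassifier(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)
values = explainer.shap_values(X)        # per-sample, per-feature contributions
ranking = np.abs(values).mean(axis=0).argsort()[::-1]
print("top features:", ranking[:5])
```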

Improving early detection of Alzheimer's disease through MRI slice selection and deep learning techniques.

Şener B, Açıcı K, Sümer E

PubMed · Aug 10, 2025
Alzheimer's disease is a progressive neurodegenerative disorder marked by cognitive decline, memory loss, and behavioral changes. Early diagnosis, particularly identifying Early Mild Cognitive Impairment (EMCI), is vital for managing the disease and improving patient outcomes. Detecting EMCI is challenging due to the subtle structural changes in the brain, making precise slice selection from MRI scans essential for accurate diagnosis. In this context, careful selection of the specific MRI slices that provide distinct anatomical detail significantly enhances the ability to identify these early changes. The chief novelty of the study is an approach that identifies the most informative slices rather than using all of them. The ADNI-3 dataset was used for the early-detection experiments. Strong results were obtained with deep learning models and vision transformers (ViT), both in their standard form and with newly added structures, alongside the proposed model. With slices selected via SSIM, an accuracy of 99.45% was achieved with EfficientNetB2 + FPN for AD vs. LMCI classification and 99.19% for AD vs. EMCI classification; the study thus significantly advances early detection by demonstrating improved diagnostic accuracy at the EMCI stage. These results emphasize the importance of deep learning models that integrate slice selection with the Vision Transformer architecture. Focusing on accurate slice selection enables detection of Alzheimer's at the EMCI stage, allowing for timely interventions and preventive measures before the disease progresses to more advanced stages. This approach not only facilitates early and accurate diagnosis but also lays the groundwork for timely intervention and treatment, offering hope for better patient outcomes in Alzheimer's disease. Finally, the findings are assessed with a statistical significance test.
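
One plausible reading of SSIM-based slice selection, offered only as a sketch: rank slices by structural similarity to a reference slice and keep the top k. The paper's exact selection criterion may differ, and the volume and template below are synthetic stand-ins.

```python
# Hedged sketch: rank MRI slices by SSIM against a reference slice and
# keep the k most similar ones for downstream classification.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def select_slices(volume, reference, k=8):
    """volume: (n_slices, H, W); reference: (H, W) template slice."""
    drange = float(volume.max() - volume.min())
    scores = [ssim(s, reference, data_range=drange) for s in volume]
    return np.argsort(scores)[::-1][:k]      # indices of the k top-ranked slices

vol = np.random.rand(160, 192, 192).astype(np.float32)  # stand-in MRI volume
ref = vol[80]                                            # assumed template slice
print(select_slices(vol, ref))
```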

Pulmonary diseases accurate recognition using adaptive multiscale feature fusion in chest radiography.

Zhou M, Gao L, Bian K, Wang H, Wang N, Chen Y, Liu S

PubMed · Aug 10, 2025
Pulmonary disease can severely impair respiratory function and be life-threatening. Accurately recognizing pulmonary diseases in chest X-ray images is challenging due to overlapping body structures and the complex anatomy of the chest. We propose an adaptive multiscale feature fusion model for recognizing chest X-ray images of pneumonia, tuberculosis, and COVID-19, three common pulmonary diseases. Our Adaptive Multiscale Fusion Network (AMFNet) consists of a lightweight Multiscale Fusion Network (MFNet) and ResNet50 as the secondary feature extraction network. MFNet employs Fusion Blocks with self-calibrated convolution (SCConv) and Attention Feature Fusion (AFF) to capture multiscale semantic features, and integrates a custom activation function, MFReLU, designed to reduce the model's memory access time. A fusion module adaptively combines features from both networks. Experimental results show that AMFNet achieves 97.48% accuracy and an F1 score of 0.9781 on public datasets, outperforming models such as ResNet50, DenseNet121, ConvNeXt-Tiny, and Vision Transformer while using fewer parameters.
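
The adaptive fusion idea, a learned gate weighing two feature maps, can be sketched in the style of Attention Feature Fusion as follows; the gate design and channel count are assumptions, and MFReLU is not reproduced since its form is not given in the abstract.

```python
# Hedged sketch of an AFF-style adaptive fusion gate: a learned sigmoid
# mask mixes two feature maps pixel-by-pixel and channel-by-channel.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, a, b):            # a: MFNet features, b: ResNet50 features
        m = self.gate(a + b)            # mixing weights in [0, 1]
        return m * a + (1 - m) * b

a = torch.randn(2, 256, 14, 14)
b = torch.randn(2, 256, 14, 14)
print(AdaptiveFusion()(a, b).shape)     # torch.Size([2, 256, 14, 14])
```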

Prediction of hematoma changes in spontaneous intracerebral hemorrhage using a Transformer-based generative adversarial network to generate follow-up CT images.

Feng C, Jiang C, Hu C, Kong S, Ye Z, Han J, Zhong K, Yang T, Yin H, Lao Q, Ding Z, Shen D, Shen Q

PubMed · Aug 10, 2025
To visualize and assess hematoma growth trends by generating follow-up CT images within 24 h of onset from baseline CT images of spontaneous intracerebral hemorrhage (sICH), using a Transformer-integrated Generative Adversarial Network (GAN). Patients with sICH were retrospectively recruited from two medical centers. The imaging data included baseline non-contrast CT scans taken after onset and follow-up imaging within 24 h. In the test set, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) were used to quantitatively assess the quality of the predicted images. Pearson's correlation analysis was performed to assess the agreement of semantic features and geometric properties of hematomas between the true and predicted follow-up CT images. The consistency of hematoma expansion prediction between true and generated images was further examined. The PSNR of the predicted images was 26.73 ± 1.11, and the SSIM was 91.23 ± 1.10. The Pearson correlation coefficients (r) with 95% confidence intervals (CI) for irregularity, satellite sign number, intraventricular or subarachnoid hemorrhage, midline shift, edema expansion, mean CT value, maximum cross-sectional area, and hematoma volume between the predicted and true follow-up images were 0.94 (0.91, 0.96), 0.87 (0.81, 0.91), 0.86 (0.80, 0.91), 0.89 (0.84, 0.92), 0.91 (0.87, 0.94), 0.78 (0.68, 0.84), 0.94 (0.91, 0.96), and 0.94 (0.91, 0.96), respectively. The correlation coefficient (r) for predicting hematoma expansion between predicted and true follow-up images was 0.86 (95% CI: 0.79, 0.90; P < 0.001). The model constructed using a GAN integrated with Transformer modules can accurately visualize early hematoma changes in sICH.
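
The two image-quality metrics reported above can be computed with scikit-image as in this sketch; the arrays are synthetic stand-ins for a true and a generated follow-up CT slice.

```python
# Hedged sketch: PSNR and SSIM between a generated follow-up CT slice and
# the true one, via scikit-image. The study computes these over its test set.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

true_ct = np.random.rand(512, 512).astype(np.float32)   # stand-in follow-up CT
pred_ct = np.clip(true_ct + 0.02 * np.random.randn(512, 512), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(true_ct, pred_ct, data_range=1.0)
ssim = structural_similarity(true_ct, pred_ct, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {100 * ssim:.2f}%")
```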

SST-DUNet: Smart Swin Transformer and Dense UNet for automated preclinical fMRI skull stripping.

Soltanpour S, Utama R, Chang A, Nasseef MT, Madularu D, Kulkarni P, Ferris CF, Joslin C

PubMed · Aug 9, 2025
Skull stripping is a common preprocessing step in Magnetic Resonance Imaging (MRI) pipelines and is often performed manually. Automating this process is challenging for preclinical data due to variations in brain geometry, resolution, and tissue contrast. Existing methods for MRI skull stripping often struggle with the low resolution and varying slice sizes found in preclinical functional MRI (fMRI) data. This study proposes a novel method that integrates a Dense UNet-based architecture with a feature extractor based on the Smart Swin Transformer (SST), called SST-DUNet. The Smart Shifted Window Multi-Head Self-Attention (SSW-MSA) module in SST replaces the mask-based module in the Swin Transformer (ST), enabling the learning of distinct channel-wise features while focusing on relevant dependencies within brain structures. This modification allows the model to better handle the complexities of fMRI skull stripping, such as low resolution and variable slice sizes. To address class imbalance in preclinical data, a combined loss function using Focal and Dice loss is applied. The model was trained on rat fMRI images and evaluated across three in-house datasets, achieving Dice similarity scores of 98.65%, 97.86%, and 98.04%. We compared our method with conventional and deep learning-based approaches, demonstrating its superiority over state-of-the-art methods. The fMRI results using SST-DUNet closely align with those from manual skull stripping for both seed-based and independent component analyses, indicating that SST-DUNet can effectively substitute manual brain extraction in rat fMRI analysis.
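
A minimal sketch of a combined Focal + Dice loss of the kind the abstract describes for class imbalance; the weighting, gamma, and alpha values are illustrative assumptions rather than the paper's tuned settings.

```python
# Hedged sketch: binary Focal loss plus soft Dice loss for mask segmentation,
# combined with an assumed 50/50 weighting.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalDiceLoss(nn.Module):
    def __init__(self, gamma=2.0, alpha=0.25, w_focal=0.5, eps=1e-6):
        super().__init__()
        self.gamma, self.alpha, self.w, self.eps = gamma, alpha, w_focal, eps

    def forward(self, logits, target):           # logits, target: (B, 1, H, W)
        p = torch.sigmoid(logits)
        bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        p_t = p * target + (1 - p) * (1 - target)
        a_t = self.alpha * target + (1 - self.alpha) * (1 - target)
        focal = (a_t * (1 - p_t) ** self.gamma * bce).mean()
        inter = (p * target).sum()
        dice = 1 - (2 * inter + self.eps) / (p.sum() + target.sum() + self.eps)
        return self.w * focal + (1 - self.w) * dice

loss = FocalDiceLoss()(torch.randn(2, 1, 64, 64),
                       torch.randint(0, 2, (2, 1, 64, 64)).float())
print(loss.item())
```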