Proteogenomic Biomarker Profiling for Predicting Radiolabeled Immunotherapy Response in Resistant Prostate Cancer.

Yan B, Gao Y, Zou Y, Zhao L, Li Z

PubMed · Aug 29, 2025
Treatment resistance prevents patients receiving preoperative chemoradiotherapy or targeted radiolabeled immunotherapy from achieving a good outcome and remains a major challenge in prostate cancer (PCa). A novel integrative framework combining a machine learning workflow with proteogenomic profiling was used to identify predictive ultrasound biomarkers and classify response to radiolabeled immunotherapy in treatment-resistant, high-risk PCa patients. A deep stacked autoencoder (DSAE) model combined with Extreme Gradient Boosting was designed for feature refinement and classification. Multiomics data were collected from The Cancer Genome Atlas and an independent radiotherapy-treated cohort; these data comprised genetic mutations (whole-exome sequencing) as well as proteomic (mass spectrometry) and transcriptomic (RNA sequencing) data. The DSAE architecture reduces data dimensionality while preserving biological variation across omics layers. Resistance phenotypes showed a notable relationship with proteogenomic profiles, including DNA repair pathways (Breast Cancer gene 2 [BRCA2], ataxia-telangiectasia mutated [ATM]), androgen receptor (AR) signaling regulators, and metabolic enzymes (ATP citrate lyase [ACLY], isocitrate dehydrogenase 1 [IDH1]). A specific panel of ultrasound biomarkers was validated preclinically using patient-derived xenografts. To support clinical translation, real-time phenotypic features from ultrasound imaging (e.g., perfusion, stiffness) were also considered, providing complementary insights into the tumor microenvironment and treatment responsiveness. This approach provides an integrated platform offering a clinically actionable foundation for the development of radiolabeled immunotherapy drugs before surgery.
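
As a rough illustration of the pipeline described above, the sketch below pairs a deep stacked autoencoder with XGBoost: the autoencoder compresses concatenated omics features into a latent code, which then feeds the classifier. All shapes, hyperparameters, and the random stand-in data are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: deep stacked autoencoder (DSAE) for multiomics
# dimensionality reduction, followed by XGBoost classification.
# Shapes, hyperparameters, and data are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

n_samples, n_features, latent_dim = 200, 5000, 64
X = np.random.rand(n_samples, n_features).astype(np.float32)  # stand-in for concatenated omics layers
y = np.random.randint(0, 2, n_samples)                        # 1 = responder, 0 = resistant

class DSAE(nn.Module):
    def __init__(self, d_in, d_lat):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_in, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, d_lat))
        self.decoder = nn.Sequential(
            nn.Linear(d_lat, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, d_in))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = DSAE(n_features, latent_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xt = torch.from_numpy(X)
for epoch in range(50):                      # reconstruction pretraining
    recon, _ = model(xt)
    loss = nn.functional.mse_loss(recon, xt)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                        # refined low-dimensional features
    Z = model(xt)[1].numpy()

clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(Z, y)                                # classify therapy response from latent features
```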

A hybrid computer vision model to predict lung cancer in diverse populations

Zakkar, A., Perwaiz, N., Harikrishnan, V., Zhong, W., Narra, V., Krule, A., Yousef, F., Kim, D., Burrage-Burton, M., Lawal, A. A., Gadi, V., Korpics, M. C., Kim, S. J., Chen, Z., Khan, A. A., Molina, Y., Dai, Y., Marai, E., Meidani, H., Nguyen, R., Salahudeen, A. A.

medRxiv preprint · Aug 29, 2025
PURPOSE: Disparities in lung cancer incidence exist in Black populations, and current screening criteria underserve Black populations because risk is disparately elevated in the screening-eligible population. Prediction models that integrate clinical and imaging-based features to individualize lung cancer risk are a potential means to mitigate these disparities. PATIENTS AND METHODS: This multicenter (NLST) and catchment-population-based (UIH, urban and suburban Cook County) study utilized participants at risk of lung cancer with available lung CT imaging and follow-up between 2015 and 2024. 53,452 participants in NLST and 11,654 in UIH were included on the basis of age and tobacco-use risk factors for lung cancer. The cohorts were used for training and testing of deep and machine learning models using clinical features alone or combined with CT image features (hybrid computer vision). RESULTS: An optimized 7-clinical-feature model achieved ROC-AUC values of 0.64-0.67 in NLST and 0.60-0.65 in UIH cohorts across multiple years. Incorporating imaging features to form a hybrid computer vision model significantly improved ROC-AUC values to 0.78-0.91 in NLST, but performance deteriorated in UIH (ROC-AUC 0.68-0.80), attributable to Black participants, in whom ROC-AUC values ranged from 0.63 to 0.72 across multiple years. Retraining the hybrid computer vision model with Black and other participants from the UIH cohort improved performance, with ROC-AUC values of 0.70-0.87 in a held-out UIH test set. CONCLUSION: The hybrid computer vision model predicted risk with improved accuracy compared to clinical risk models alone. However, potential biases in the image training data reduced model generalizability in Black participants. Performance improved upon retraining with a subset of the UIH cohort, suggesting that inclusive training and validation datasets can minimize racial disparities. Future studies incorporating vision models trained on representative datasets may demonstrate improved health equity upon clinical use.
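
The abstract's hybrid design, clinical features fused with CT image features, can be sketched as follows; the toy 3D CNN, the seven clinical inputs, and all dimensions are assumptions for illustration only, not the study's model.

```python
# Minimal sketch of a hybrid computer vision model: a small 3D CNN embeds
# the CT volume, the embedding is concatenated with tabular clinical
# features, and a linear head predicts lung cancer risk.
import torch
import torch.nn as nn

class HybridRiskModel(nn.Module):
    def __init__(self, n_clinical=7, emb_dim=32):
        super().__init__()
        self.cnn = nn.Sequential(                      # toy image encoder
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, emb_dim))
        self.head = nn.Sequential(                     # fuses image + clinical features
            nn.Linear(emb_dim + n_clinical, 16), nn.ReLU(),
            nn.Linear(16, 1))

    def forward(self, ct, clinical):
        z = self.cnn(ct)
        return torch.sigmoid(self.head(torch.cat([z, clinical], dim=1)))

model = HybridRiskModel()
ct = torch.randn(2, 1, 32, 64, 64)      # batch of low-resolution CT volumes
clinical = torch.randn(2, 7)            # e.g., age, pack-years, sex, ... (assumed features)
risk = model(ct, clinical)              # per-participant risk in [0, 1]
```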

A network-assisted joint image and motion estimation approach for robust 3D MRI motion correction across severity levels.

Nghiem B, Wu Z, Kashyap S, Kasper L, Uludağ K

PubMed · Aug 29, 2025
The purpose of this work was to develop and evaluate a novel method that leverages neural networks and physical modeling for 3D motion correction at different levels of corruption. The novel method ("UNet+JE") combines an existing neural network ("UNet_mag") with a physics-informed algorithm for jointly estimating motion parameters and the motion-compensated image ("JE"). UNet_mag and UNet+JE were trained separately on two training datasets with different distributions of motion corruption severity and compared to JE as a benchmark. All five resulting methods were tested on T1-weighted 3D MPRAGE scans of healthy participants with simulated (n = 40) and in vivo (n = 10) motion corruption ranging from mild to severe. UNet+JE provided better motion correction than UNet_mag (p < 10⁻² for all metrics for both simulated and in vivo data) under both training datasets. UNet_mag exhibited residual image artifacts and blurring, as well as greater susceptibility to data distribution shifts than UNet+JE. UNet+JE and JE did not significantly differ in image correction quality (p > 0.05 for all metrics), even under strong distribution shifts for UNet+JE. However, UNet+JE reduced runtimes by median factors of 2.00 to 3.80 in the simulation study and 4.05 in the in vivo study. UNet+JE benefited from the robustness of joint estimation and the fast image improvement provided by the neural network, enabling high-quality 3D image correction under a wide range of motion corruption within shorter runtimes.
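
The alternation at the heart of UNet+JE, in which a network proposes a cleaner image and motion parameters are then refined against the acquired data, might look roughly like the toy 1D sketch below. The phase-shift forward model, the moving-average "network", and the coordinate-descent motion update are all simplified stand-ins, not the paper's 3D MPRAGE method.

```python
# Toy sketch of network-assisted joint estimation: alternate between a
# denoising "network" update of the image and a data-consistency-driven
# update of motion parameters, in 1D for illustration.
import numpy as np

def forward_model(image, motion):          # toy encoding: motion as per-sample phase shifts
    return np.fft.fft(image) * np.exp(1j * motion)

def data_consistency(image, motion, kspace):
    return np.linalg.norm(forward_model(image, motion) - kspace) ** 2

def network_denoise(image):                # stand-in for the neural network
    return np.convolve(image, np.ones(3) / 3, mode="same")

rng = np.random.default_rng(0)
n = 64
true_image = rng.standard_normal(n)
true_motion = rng.standard_normal(n) * 0.1
kspace = forward_model(true_image, true_motion)

image = np.fft.ifft(kspace).real           # naive reconstruction
motion = np.zeros(n)
for it in range(10):                       # alternate image and motion updates
    image = network_denoise(image)         # network proposes a cleaner image
    for i in range(n):                     # crude, gradient-free motion refinement
        for step in (0.05, -0.05):
            trial = motion.copy(); trial[i] += step
            if data_consistency(image, trial, kspace) < data_consistency(image, motion, kspace):
                motion = trial
    # image update: least-squares solution given the current motion estimate
    image = np.fft.ifft(kspace * np.exp(-1j * motion)).real
```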

Preoperative prediction of lymph node metastasis in adenocarcinoma of esophagogastric junction using CT texture analysis combined with machine learning.

Wang D, Wang M, Chen R, Song J, Su Y, Wang Y, Liu F, Zhu X, Yang F

PubMed · Aug 29, 2025
This study aims to construct a noninvasive preoperative prediction model for lymph node metastasis in adenocarcinoma of esophagogastric junction (AEG) using computed tomography (CT) texture characterization and machine learning. We analyzed clinical and imaging data from 57 patients with preoperative CT enhancement scans and pathologically confirmed AEG. Lesions were delineated, and texture features were extracted from arterial phase and venous phase CT images using 3D-Slicer software. Features were normalized, downscaled, and screened using correlation analysis and the least absolute shrinkage and selection operator algorithm. The lymph node metastasis prediction model employed machine learning algorithms (random forest, logistic regression, decision tree [DT], and support vector machine), with performance validated using receiver operating characteristic curves. In the arterial phase, the random forest model excelled in precision (0.86) and positive predictive value (0.86). The DT model exhibited the best negative predictive value (0.86), while the logistic regression model demonstrated the highest area under the curve (AUC; 0.78) and specificity (1.0). During the venous phase, the DT model excelled in precision (0.72), F1 score (0.76), and recall (0.80), whereas the support vector machine model had the highest AUC (0.75). Differences in AUCs between models in both phases were not statistically significant per DeLong's test, indicating comparable performance. Each model displayed strengths across various metrics, with the DT model showing consistent performance across arterial and venous phases, emphasizing accuracy and specificity. The CT texture-based machine learning model effectively predicts lymph node metastasis noninvasively in AEG patients, demonstrating robust predictive efficacy.
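
A minimal sketch of this kind of radiomics workflow, assuming synthetic stand-in data: normalize the texture features, select a sparse subset with LASSO, then compare the four named classifiers by cross-validated ROC-AUC.

```python
# Minimal sketch of the radiomics pipeline: normalization, LASSO feature
# selection, and classifier comparison by ROC-AUC. Feature values here
# are synthetic placeholders, not CT texture features.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((57, 200))        # 57 patients x 200 texture features (assumed count)
y = rng.integers(0, 2, 57)                # lymph node metastasis yes/no

X = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)    # features with nonzero LASSO weight
Xs = X[:, selected] if selected.size else X

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(probability=True),
}
for name, clf in models.items():
    auc = cross_val_score(clf, Xs, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean CV AUC = {auc:.2f}")
```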

Radiomics and deep learning methods for predicting the growth of subsolid nodules based on CT images.

Chen J, Yan W, Shi Y, Pan X, Yu R, Wang D, Zhang X, Wang L, Liu K

PubMed · Aug 29, 2025
The growth of subsolid nodules (SSNs) is a strong predictor of lung adenocarcinoma. However, the heterogeneity in the biological behavior of SSNs poses significant challenges for clinical management. This study aimed to evaluate the clinical utility of deep learning and radiomics approaches in predicting SSN growth from computed tomography (CT) images. A total of 353 patients with 387 SSNs were enrolled in this retrospective study. All cases were divided into growth (n = 195) and non-growth (n = 192) groups and were randomly assigned to the training (n = 247), validation (n = 62), and test (n = 78) sets in a ratio of 3:1:1. We obtained 1454 radiomics features from each volumetric region of interest (VOI). Pearson correlation coefficients and the least absolute shrinkage and selection operator (LASSO) method were used to determine the radiomics signature. A ResNet18 architecture was used to construct the deep-learning model. The two models were then combined via a ResNet-based fusion network to construct an ensemble model. The area under the receiver operating characteristic curve (AUC) was computed and decision curve analysis (DCA) was performed to determine the clinical performance of the three models. The combined model (AUC = 0.926, 95% CI: 0.869-0.977) outperformed the radiomics (AUC = 0.894, 95% CI: 0.808-0.957) and deep-learning (AUC = 0.802, 95% CI: 0.695-0.899) models in the test set. The DeLong test showed a statistically significant difference between the combined and deep-learning models (P = .012), and DCA supported the clinical value of the combined model. This study demonstrates that integrating radiomics with deep learning offers promising potential for the preoperative prediction of SSN growth.
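
The fusion idea, an image branch plus a radiomics signature combined by a small network, could be sketched as below; the 2D ResNet18 backbone, the 20-feature signature, and the fusion head are illustrative assumptions rather than the paper's exact ResNet-based fusion network.

```python
# Minimal sketch of radiomics + deep learning fusion: a ResNet18 image
# branch and a radiomics-signature branch feed a small fusion head that
# predicts nodule growth probability.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FusionModel(nn.Module):
    def __init__(self, n_radiomics=20):
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.fc = nn.Identity()            # expose 512-d image features
        self.fusion = nn.Sequential(
            nn.Linear(512 + n_radiomics, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, image, radiomics):
        z = self.backbone(image)
        return torch.sigmoid(self.fusion(torch.cat([z, radiomics], dim=1)))

model = FusionModel()
image = torch.randn(4, 3, 224, 224)      # nodule crops (2D slices for illustration)
radiomics = torch.randn(4, 20)           # LASSO-selected radiomics signature (assumed size)
p_growth = model(image, radiomics)       # per-nodule growth probability
```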

Temporal Flow Matching for Learning Spatio-Temporal Trajectories in 4D Longitudinal Medical Imaging

Nico Albert Disch, Yannick Kirchhoff, Robin Peretzke, Maximilian Rokuss, Saikat Roy, Constantin Ulrich, David Zimmerer, Klaus Maier-Hein

arXiv preprint · Aug 29, 2025
Understanding temporal dynamics in medical imaging is crucial for applications such as disease progression modeling, treatment planning, and anatomical development tracking. However, most deep learning methods either consider only single temporal contexts or focus on tasks like classification or regression, limiting their ability to make fine-grained spatial predictions. While some approaches have been explored, they are often limited to single timepoints or specific diseases, or have other technical restrictions. To address this fundamental gap, we introduce Temporal Flow Matching (TFM), a unified generative trajectory method that (i) aims to learn the underlying temporal distribution, (ii) by design can fall back to a nearest-image predictor, i.e., predicting the last context image (LCI), as a special case, and (iii) supports 3D volumes, multiple prior scans, and irregular sampling. Extensive benchmarks on three public longitudinal datasets show that TFM consistently surpasses spatio-temporal methods from natural imaging, establishing a new state-of-the-art and a robust baseline for 4D medical image prediction.
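
For readers unfamiliar with flow matching, a generic conditional flow-matching training step is sketched below: a network regresses the constant velocity along a straight path from the last context image to the follow-up image. The toy MLP and flattened volumes are assumptions; TFM's conditioning and architecture are richer than this.

```python
# Generic flow-matching training step on image pairs: sample a time t,
# interpolate along the straight path from x0 (last context image) to
# x1 (follow-up), and regress the path's constant velocity x1 - x0.
import torch
import torch.nn as nn

vel_net = nn.Sequential(                    # toy velocity field over flattened images
    nn.Linear(64 * 64 + 1, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64))
opt = torch.optim.Adam(vel_net.parameters(), lr=1e-4)

x0 = torch.randn(8, 64 * 64)                # last context image (LCI), synthetic
x1 = torch.randn(8, 64 * 64)                # target follow-up image, synthetic
for step in range(100):
    t = torch.rand(8, 1)
    xt = (1 - t) * x0 + t * x1               # point on the straight-line path
    v_target = x1 - x0                       # constant velocity along that path
    v_pred = vel_net(torch.cat([xt, t], dim=1))
    loss = nn.functional.mse_loss(v_pred, v_target)
    opt.zero_grad(); loss.backward(); opt.step()
# At inference, integrating a zero velocity field would simply reproduce x0,
# which illustrates how an LCI-style predictor arises as a special case.
```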

Liver fat quantification at 0.55 T enabled by locally low-rank enforced deep learning reconstruction.

Helo M, Nickel D, Kannengiesser S, Kuestner T

PubMed · Aug 29, 2025
The emergence of new medications for fatty liver conditions has increased the need for reliable and widely available assessment of MRI proton density fat fraction (MRI-PDFF). Although low-field MRI presents a promising solution, its utilization is challenging due to the low SNR. This work aims to enhance SNR and enable precise PDFF quantification at low-field MRI using a novel locally low-rank deep learning-based (LLR-DL) reconstruction. LLR-DL alternates between regularized SENSE and a neural network (U-Net) across several iterations, operating on complex-valued data. The network processes the spectral projection onto singular-value bases, which are computed on local patches across the echo dimension. The output of the network is recast into the basis of the original echoes and used as a prior for the following iteration. The final echoes are processed by a multi-echo Dixon algorithm. Two different protocols were proposed for imaging at 0.55 T. An iron-and-fat phantom and 10 volunteers were scanned on both 0.55 and 1.5 T systems. Linear regression, t-statistics, and Bland-Altman analyses were conducted. LLR-DL achieved significantly improved image quality compared to the conventional reconstruction technique, with a 32.7% increase in peak SNR and a 25% improvement in structural similarity index. PDFF repeatability was 2.33% in phantoms (0% to 100%) and 0.79% in vivo (3% to 18%), with narrow cross-field-strength limits of agreement below 1.67% in phantoms and 1.75% in vivo. An LLR-DL reconstruction was developed and investigated to enable precise PDFF quantification at 0.55 T and improve consistency with 1.5 T results.
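
The locally low-rank ingredient can be illustrated with plain SVD truncation on local patches, as below; in LLR-DL a U-Net operates on the singular-value-basis projections rather than simply truncating them, so this is only a sketch of the decomposition step, with assumed patch size and rank.

```python
# Sketch of the locally low-rank step: for each local patch, multi-echo
# data are reshaped to a (pixels x echoes) Casorati matrix, projected
# onto the leading singular-value basis, and reconstructed.
import numpy as np

rng = np.random.default_rng(1)
n_echoes, H, W, patch, rank = 6, 64, 64, 8, 2
echoes = rng.standard_normal((n_echoes, H, W)) + 1j * rng.standard_normal((n_echoes, H, W))

out = np.zeros_like(echoes)
for i in range(0, H, patch):
    for j in range(0, W, patch):
        block = echoes[:, i:i + patch, j:j + patch]      # (echoes, p, p)
        casorati = block.reshape(n_echoes, -1).T         # (pixels, echoes)
        U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
        s[rank:] = 0                                     # enforce local low rank
        out[:, i:i + patch, j:j + patch] = (U * s @ Vh).T.reshape(block.shape)
```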

Mapping heterogeneity in the neuroanatomical correlates of depression

Watts, D., Mallard, T. T., Dall' Aglio, L., Giangrande, E., Kennedy, C., Cai, N., Choi, K. W., Ge, T., Smoller, J.

medrxiv logopreprintAug 29 2025
Major depressive disorder (MDD) affects millions worldwide, yet its neurobiological underpinnings remain elusive. Neuroimaging studies have yielded inconsistent results, hindered by small sample sizes and heterogeneous depression definitions. We sought to address these limitations by leveraging the UK Biobank's extensive neuroimaging data (n=30,122) to investigate how depression phenotyping depth influences the neuroanatomic profiles of MDD. We examined 256 brain structural features, obtained from T1- and diffusion-weighted brain imaging, and nine depression phenotypes, ranging from self-reported symptoms (shallow definitions) to clinical diagnoses (deep). Multivariable logistic regression, machine learning classifiers, and feature transfer approaches were used to explore correlational patterns, predictive accuracy, and the transferability of important features across depression definitions. For white matter microstructure, we observed widespread fractional anisotropy decreases and mean diffusivity increases. In contrast, cortical thickness and surface area were less consistently associated across depression definitions and demonstrated weaker associations. Machine learning classifiers showed varying performance in distinguishing depression cases from controls, with shallow phenotypes achieving similar discriminative performance (AUC=0.807) and slightly higher positive predictive value (PPV=0.655) compared to deep phenotypes (AUC=0.831, PPV=0.456) when sensitivity was standardized at 80%. However, when shallow phenotypes were downsampled to match deep-phenotype case/control ratios, performance degraded substantially (AUC=0.690). Together, these results suggest that while core white-matter alterations are shared across phenotyping strategies, shallow phenotypes require approximately twice the sample size of deep phenotypes to achieve comparable classification performance, underscoring the fundamental power-specificity tradeoff in psychiatric neuroimaging research.
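
The downsampling experiment can be illustrated as follows with synthetic data: fit a classifier on the full "shallow" cohort, then again after subsampling cases to an assumed deep-phenotype prevalence. The 3% target ratio and the data generator are assumptions for illustration, not the study's values.

```python
# Sketch of the case/control downsampling comparison: subsample a
# shallow-phenotype cohort to a deeper phenotype's case prevalence and
# compare test-set AUCs. Features stand in for the 256 brain measures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 5000, 256
X = rng.standard_normal((n, d))
y = (X[:, :5].sum(axis=1) + rng.standard_normal(n)) > 3.0   # ~11% "cases"

def fit_auc(X, y):
    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])

print("full shallow cohort AUC:", round(fit_auc(X, y), 3))

target_ratio = 0.03                          # assumed deep-phenotype case prevalence
cases = np.flatnonzero(y)
controls = np.flatnonzero(~y)
n_cases = int(target_ratio * len(controls) / (1 - target_ratio))
keep = np.concatenate([rng.choice(cases, n_cases, replace=False), controls])
print("downsampled AUC:", round(fit_auc(X[keep], y[keep]), 3))
```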

A Multi-Stage Fine-Tuning and Ensembling Strategy for Pancreatic Tumor Segmentation in Diagnostic and Therapeutic MRI

Omer Faruk Durugol, Maximilian Rokuss, Yannick Kirchhoff, Klaus H. Maier-Hein

arXiv preprint · Aug 29, 2025
Automated segmentation of Pancreatic Ductal Adenocarcinoma (PDAC) from MRI is critical for clinical workflows but is hindered by poor tumor-tissue contrast and a scarcity of annotated data. This paper details our submission to the PANTHER challenge, addressing both diagnostic T1-weighted (Task 1) and therapeutic T2-weighted (Task 2) segmentation. Our approach is built upon the nnU-Net framework and leverages a deep, multi-stage cascaded pre-training strategy, starting from a general anatomical foundation model and sequentially fine-tuning on CT pancreatic lesion datasets and the target MRI modalities. Through extensive five-fold cross-validation, we systematically evaluated data augmentation schemes and training schedules. Our analysis revealed a critical trade-off, where aggressive data augmentation produced the highest volumetric accuracy, while default augmentations yielded superior boundary precision (achieving a state-of-the-art MASD of 5.46 mm and HD95 of 17.33 mm for Task 1). For our final submission, we exploited this finding by constructing custom, heterogeneous ensembles of specialist models, essentially creating a mix of experts. This metric-aware ensembling strategy proved highly effective, achieving a top cross-validation Tumor Dice score of 0.661 for Task 1 and 0.523 for Task 2. Our work presents a robust methodology for developing specialized, high-performance models in the context of limited data and complex medical imaging tasks (Team MIC-DKFZ).
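
A bare-bones version of metric-aware ensembling might look like the sketch below: probability maps from specialist models are combined with fixed weights before thresholding. The weights, toy masks, and two-model setup are assumptions; the submission's actual ensembles combine full specialist nnU-Net models.

```python
# Sketch of weighted ensembling of specialist segmentation models: one
# map stands in for a volume-accuracy specialist, the other for a
# boundary-precision specialist; a weighted average is thresholded.
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + 1e-8)

rng = np.random.default_rng(0)
gt = rng.random((32, 32, 32)) > 0.7                               # toy ground-truth tumor mask
prob_volume = np.clip(gt + rng.normal(0, 0.4, gt.shape), 0, 1)    # "volume specialist" output
prob_boundary = np.clip(gt + rng.normal(0, 0.5, gt.shape), 0, 1)  # "boundary specialist" output

weights = (0.6, 0.4)                               # metric-aware weighting (assumed values)
ensemble = weights[0] * prob_volume + weights[1] * prob_boundary
pred = ensemble > 0.5

print("volume-only Dice:", round(dice(prob_volume > 0.5, gt), 3))
print("ensemble Dice:   ", round(dice(pred, gt), 3))
```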

Masked Autoencoder Pretraining and BiXLSTM ResNet Architecture for PET/CT Tumor Segmentation

Moona Mazher, Steven A Niederer, Abdul Qayyum

arXiv preprint · Aug 29, 2025
The accurate segmentation of lesions in whole-body PET/CT imaging is essential for tumor characterization, treatment planning, and response assessment, yet current manual workflows are labor-intensive and prone to inter-observer variability. Automated deep learning methods have shown promise but often remain limited by modality specificity, isolated time points, or insufficient integration of expert knowledge. To address these challenges, we present a two-stage lesion segmentation framework developed for the fourth AutoPET Challenge. In the first stage, a Masked Autoencoder (MAE) is employed for self-supervised pretraining on unlabeled PET/CT and longitudinal CT scans, enabling the extraction of robust modality-specific representations without manual annotations. In the second stage, the pretrained encoder is fine-tuned with a bidirectional XLSTM architecture augmented with ResNet blocks and a convolutional decoder. By jointly leveraging anatomical (CT) and functional (PET) information as complementary input channels, the model achieves improved temporal and spatial feature integration. Evaluation on the AutoPET Task 1 dataset demonstrates that self-supervised pretraining significantly enhances segmentation accuracy, achieving a Dice score of 0.582 compared to 0.543 without pretraining. These findings highlight the potential of combining self-supervised learning with multimodal fusion for robust and generalizable PET/CT lesion segmentation. Code will be available at https://github.com/RespectKnowledge/AutoPet_2025_BxLSTM_UNET_Segmentation
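
The MAE pretraining stage can be sketched in a few lines: random patches are masked, an encoder-decoder reconstructs them, and the loss is taken over masked patches only. The MLP encoder/decoder, patch size, and 75% mask ratio here are illustrative assumptions, not the submission's configuration.

```python
# Sketch of masked-autoencoder pretraining on patchified slices: mask
# most patches, reconstruct, and compute the loss on masked patches only.
import torch
import torch.nn as nn

patch, n_patches, d = 16, 64, 256          # 8x8 grid of 16x16 patches (assumed)
encoder = nn.Sequential(nn.Linear(patch * patch, d), nn.ReLU(), nn.Linear(d, d))
decoder = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, patch * patch))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

img = torch.randn(4, n_patches, patch * patch)      # batch of patchified CT/PET slices
mask = torch.rand(4, n_patches) < 0.75              # mask ~75% of patches
x = img.clone()
x[mask] = 0                                         # zero out masked patches
recon = decoder(encoder(x))
loss = nn.functional.mse_loss(recon[mask], img[mask])   # loss on masked patches only
opt.zero_grad(); loss.backward(); opt.step()
# The pretrained encoder is then fine-tuned inside the segmentation model.
```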