
A network-assisted joint image and motion estimation approach for robust 3D MRI motion correction across severity levels.

Nghiem B, Wu Z, Kashyap S, Kasper L, Uludağ K

PubMed · Aug 29, 2025
The purpose of this work was to develop and evaluate a novel method that leverages neural networks and physical modeling for 3D motion correction at different levels of corruption. The novel method ("UNet+JE") combines an existing neural network ("UNet_mag") with a physics-informed algorithm for jointly estimating motion parameters and the motion-compensated image ("JE"). UNet_mag and UNet+JE were each trained separately on two training datasets with different distributions of motion corruption severity and compared to JE as a benchmark. All five resulting methods were tested on T1-weighted 3D MPRAGE scans of healthy participants with simulated (n = 40) and in vivo (n = 10) motion corruption ranging from mild to severe. UNet+JE provided better motion correction than UNet_mag (p < 10^-2 for all metrics on both simulated and in vivo data) under both training datasets. UNet_mag exhibited residual image artifacts and blurring, as well as greater susceptibility to data distribution shifts than UNet+JE. UNet+JE and JE did not significantly differ in image correction quality (p > 0.05 for all metrics), even under strong distribution shifts for UNet+JE. However, UNet+JE reduced runtimes by median factors of 2.00 to 3.80 in the simulation study and 4.05 in the in vivo study. UNet+JE benefited from the robustness of joint estimation and the fast image improvement provided by the neural network, enabling it to deliver high-quality 3D image correction over a wide range of motion corruption within shorter runtimes.
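
A minimal sketch of the joint-estimation idea behind JE / UNet+JE, assuming a toy 1D translation model (a k-space phase ramp) rather than the paper's full 3D rigid-motion forward model: the image and the motion parameter are optimized together against a data-consistency loss, with the image estimate playing the role that the network's output warm-starts in UNet+JE.

```python
import torch

def forward_model(image, shift):
    # Toy motion model: a sub-pixel translation applied as a k-space phase ramp.
    n = image.shape[-1]
    k = torch.fft.fftfreq(n)
    return torch.fft.fft(image) * torch.exp(-2j * torch.pi * k * shift)

# Simulated acquisition: one motion-free shot and one shot with an unknown shift.
x_true = torch.exp(-torch.linspace(-3.0, 3.0, 128) ** 2)
y_still = forward_model(x_true, torch.tensor(0.0))
y_moved = forward_model(x_true, torch.tensor(2.5))

# Unknowns optimized jointly: the image and the motion parameter.
x = torch.zeros(128, requires_grad=True)
shift = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([x, shift], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    # Data-consistency loss over both shots.
    loss = ((forward_model(x, torch.zeros(1)) - y_still).abs() ** 2).mean() \
         + ((forward_model(x, shift) - y_moved).abs() ** 2).mean()
    loss.backward()
    opt.step()

print(f"estimated shift: {shift.item():.2f} (simulated value: 2.5)")
```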

Multi-regional Multiparametric Deep Learning Radiomics for Diagnosis of Clinically Significant Prostate Cancer.

Liu X, Liu R, He H, Yan Y, Zhang L, Zhang Q

PubMed · Aug 29, 2025
Non-invasive and precise identification of clinically significant prostate cancer (csPCa) is essential for the management of prostatic diseases. Our study introduces a novel and interpretable diagnostic method for csPCa, leveraging multi-regional, multiparametric deep learning radiomics based on magnetic resonance imaging (MRI). The prostate regions, including the peripheral zone (PZ) and transition zone (TZ), are automatically segmented using a deep learning framework that combines convolutional neural networks and transformers to generate region-specific masks. Radiomics features are then extracted and selected from multiparametric MRI at the PZ, TZ, and their combined area to develop a multi-regional multiparametric radiomics diagnostic model. Feature contributions are quantified to enhance the model's interpretability and assess the importance of different imaging parameters across various regions. The multi-regional model substantially outperforms single-region models, achieving an optimal area under the curve (AUC) of 0.903 on the internal test set, and an AUC of 0.881 on the external test set. Comparison with other methods demonstrates that our proposed approach exhibits superior performance. Features from diffusion-weighted imaging and apparent diffusion coefficient play a crucial role in csPCa diagnosis, with contribution degrees of 53.28% and 39.52%, respectively. We introduce an interpretable, multi-regional, multiparametric diagnostic model for csPCa using deep learning radiomics. By integrating features from various zones, our model improves diagnostic accuracy and provides clear insights into the key imaging parameters, offering strong potential for clinical applications in csPCa management.
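
As a rough illustration of the multi-regional design, the sketch below concatenates placeholder feature blocks for different zone/sequence combinations, fits a linear classifier, and aggregates coefficient mass per imaging parameter to mimic the reported "contribution degrees". The feature extraction itself (e.g. with a radiomics toolkit) is omitted, and all names, sizes, and data are assumptions rather than the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Placeholder feature blocks: (region, sequence) -> radiomics feature matrix.
blocks = {("PZ", "DWI"): rng.normal(size=(n, 10)),
          ("PZ", "ADC"): rng.normal(size=(n, 10)),
          ("TZ", "T2W"): rng.normal(size=(n, 10))}
X = np.hstack(list(blocks.values()))
y = rng.integers(0, 2, size=n)  # placeholder csPCa labels

model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", max_iter=1000))
model.fit(X, y)

# Aggregate absolute coefficients per imaging sequence to approximate a
# "contribution degree" for each parameter.
coefs = np.abs(model[-1].coef_.ravel())
start = 0
for (region, seq), block in blocks.items():
    width = block.shape[1]
    share = coefs[start:start + width].sum() / coefs.sum()
    print(f"{region}-{seq}: {share:.1%} of total |coefficient| mass")
    start += width
```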

Age- and sex-related changes in proximal humeral volumetric BMD assessed via chest CT with a deep learning-based segmentation model.

Li S, Tang C, Zhang H, Ma C, Weng Y, Chen B, Xu S, Xu H, Giunchiglia F, Lu WW, Guo D, Qin Y

PubMed · Aug 29, 2025
Accurate assessment of proximal humeral volumetric bone mineral density (vBMD) is essential for surgical planning in shoulder pathology. However, age-related changes in proximal humeral vBMD remain poorly characterized. This study developed a deep learning-based method to assess proximal humeral vBMD and identified sex-specific age-related changes. It also demonstrated that lumbar spine vBMD is not a valid substitute. This study aimed to develop a deep learning-based method for proximal humeral vBMD assessment and to investigate its age- and sex-related changes, as well as its correlation with lumbar spine vBMD. An nnU-Net-based deep learning pipeline was developed to automatically segment the proximal humerus on chest CT scans from 2,675 adults. Segmentation performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), 95th-percentile Hausdorff Distance (95HD), and Average Symmetric Surface Distance (ASSD). Phantom-calibrated vBMD (total, trabecular, and BMAT-corrected trabecular) was quantified for each subject. Age-related distributions were modeled with generalized additive models for location, scale, and shape (GAMLSS) to generate sex-specific P3-P97 percentile curves. Lumbar spine vBMD was measured in 1,460 individuals for correlation analysis. Segmentation was highly accurate (DSC 98.42 ± 0.20%; IoU 96.89 ± 0.42%; 95HD 1.12 ± 0.37 mm; ASSD 0.94 ± 0.31 mm). In males, total, trabecular, and BMAT-corrected trabecular vBMD declined approximately linearly from early adulthood. In females, a pronounced inflection occurred at ~40-45 years: values were stable or slightly rising beforehand, then all percentiles dropped steeply and synchronously, indicating accelerated menopause-related loss. In females, vBMD declined earlier in the lumbar spine than in the proximal humerus. Correlations between proximal humeral and lumbar spine vBMD were low to moderate overall and weakened after age 50. We present a novel, automated method for quantifying proximal humeral vBMD from chest CT, revealing distinct, sex-specific aging patterns. Males' humeral vBMD declines linearly, while females experience an earlier, accelerated loss. Moreover, peak humeral vBMD in females occurs later than that of the lumbar spine, and spinal measurements cannot reliably substitute for humeral BMD in clinical assessment.
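
For reference, a minimal sketch of the overlap metrics used above to grade the segmentations (Dice and IoU on binary masks), using toy arrays rather than the study's data:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray):
    # Overlap metrics between a predicted and a reference binary mask.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

# Toy 3D masks standing in for predicted and reference humerus segmentations.
pred = np.zeros((32, 32, 32), dtype=bool); pred[8:24, 8:24, 8:24] = True
gt = np.zeros_like(pred); gt[10:24, 8:24, 8:24] = True
print(dice_and_iou(pred, gt))
```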

Enhanced glioma semantic segmentation using U-net and pre-trained backbone U-net architectures.

Khorasani A

PubMed · Aug 29, 2025
Gliomas are known to have different sub-regions within the tumor, including the edema, necrotic, and active tumor regions. Segmenting these regions is very important for glioma treatment decisions and management. This paper aims to demonstrate the application of U-Net and pre-trained backbone U-Net networks in glioma semantic segmentation using different magnetic resonance imaging (MRI) contrast weightings. The data used in this study for network training, validation, and testing come from the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge. We applied the U-Net and several pre-trained backbone U-Nets for the semantic segmentation of glioma regions. The ResNet, Inception, and VGG networks, pre-trained on the ImageNet dataset, were used as backbones in the U-Net architecture. Accuracy (ACC) and Intersection over Union (IoU) were employed to assess the performance of the networks. The most prominent finding to emerge from this study is that a ResNet-U-Net trained on T1 post-contrast (T1Gd) images achieves the highest ACC and IoU for semantic segmentation of the necrotic and active tumor regions in glioma. A ResNet-U-Net trained on T2 Fluid-Attenuated Inversion Recovery (T2-FLAIR) images was also shown to be a suitable combination for edema segmentation. Our study further validates that the proposed framework's architecture and modules are scientifically grounded and practical, enabling the extraction and aggregation of valuable semantic information to enhance glioma semantic segmentation. It also demonstrates how useful ResNet-U-Net can be for physicians to extract glioma regions automatically.
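
One common way to assemble such a pre-trained-backbone U-Net is via the segmentation_models_pytorch package; the sketch below is an assumed implementation route (the abstract does not name a specific library), with a ResNet encoder carrying ImageNet weights and a four-class output head.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",        # ResNet backbone, as in the ResNet-U-Net variant
    encoder_weights="imagenet",     # ImageNet pre-training, as described in the abstract
    in_channels=1,                  # single MRI contrast (e.g. T1Gd or T2-FLAIR)
    classes=4,                      # background + edema + necrotic + active tumor
)

x = torch.randn(2, 1, 224, 224)     # a toy batch of 2D MRI slices
with torch.no_grad():
    logits = model(x)               # -> (2, 4, 224, 224) per-class logits
print(logits.shape)
```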

Masked Autoencoder Pretraining and BiXLSTM ResNet Architecture for PET/CT Tumor Segmentation

Moona Mazher, Steven A Niederer, Abdul Qayyum

arXiv preprint · Aug 29, 2025
The accurate segmentation of lesions in whole-body PET/CT imaging is essential for tumor characterization, treatment planning, and response assessment, yet current manual workflows are labor-intensive and prone to inter-observer variability. Automated deep learning methods have shown promise but often remain limited by modality specificity, isolated time points, or insufficient integration of expert knowledge. To address these challenges, we present a two-stage lesion segmentation framework developed for the fourth AutoPET Challenge. In the first stage, a Masked Autoencoder (MAE) is employed for self-supervised pretraining on unlabeled PET/CT and longitudinal CT scans, enabling the extraction of robust modality-specific representations without manual annotations. In the second stage, the pretrained encoder is fine-tuned with a bidirectional XLSTM architecture augmented with ResNet blocks and a convolutional decoder. By jointly leveraging anatomical (CT) and functional (PET) information as complementary input channels, the model achieves improved temporal and spatial feature integration. Evaluation on the AutoPET Task 1 dataset demonstrates that self-supervised pretraining significantly enhances segmentation accuracy, achieving a Dice score of 0.582 compared to 0.543 without pretraining. These findings highlight the potential of combining self-supervised learning with multimodal fusion for robust and generalizable PET/CT lesion segmentation. Code will be available at https://github.com/RespectKnowledge/AutoPet_2025_BxLSTM_UNET_Segmentation
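
A hedged sketch of the stage-1 idea only: masked-autoencoder-style pretraining, i.e. hide most of the input and train the network to reconstruct it so the encoder learns representations without labels. The tiny 3D convolutional autoencoder below is a placeholder, not the authors' MAE or BiXLSTM model.

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    # Placeholder encoder/decoder standing in for the pretrained backbone.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv3d(16, 1, 3, padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyAE()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

volume = torch.randn(4, 1, 32, 32, 32)                    # unlabeled PET/CT patches (toy data)
mask = (torch.rand(4, 1, 32, 32, 32) > 0.75).float()      # keep ~25% of voxels visible

recon = model(volume * mask)                               # reconstruct from the masked input
loss = ((recon - volume) ** 2 * (1 - mask)).mean()         # penalize error on hidden voxels only
loss.backward()
opt.step()
print(float(loss))
```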

Proteogenomic Biomarker Profiling for Predicting Radiolabeled Immunotherapy Response in Resistant Prostate Cancer.

Yan B, Gao Y, Zou Y, Zhao L, Li Z

PubMed · Aug 29, 2025
Treatment resistance prevents patients receiving preoperative chemoradiotherapy or targeted radiolabeled immunotherapy from achieving good outcomes, and it remains a major challenge in prostate cancer (PCa). A novel integrative framework combining a machine learning workflow with proteogenomic profiling was used to identify predictive ultrasound biomarkers and classify response to radiolabeled immunotherapy in treatment-resistant, high-risk PCa patients. A deep stacked autoencoder (DSAE) model, combined with Extreme Gradient Boosting, was designed for feature refinement and classification. Multiomics data were collected from The Cancer Genome Atlas and an independent radiotherapy-treated cohort. In addition to genetic mutations (whole-exome sequencing), these data included proteomic (mass spectrometry) and transcriptomic (RNA sequencing) profiles. The DSAE architecture was used to reduce data dimensionality while maintaining biological variability across omics layers. Resistance phenotypes showed a notable relationship with proteogenomic profiles, including DNA repair pathways (Breast Cancer gene 2 [BRCA2], ataxia-telangiectasia mutated [ATM]), androgen receptor (AR) signaling regulators, and metabolic enzymes (ATP citrate lyase [ACLY], isocitrate dehydrogenase 1 [IDH1]). A specific panel of ultrasound biomarkers was validated preclinically using patient-derived xenografts. To support clinical translation, real-time phenotypic features from ultrasound imaging (e.g., perfusion, stiffness) were also considered, providing complementary insights into the tumor microenvironment and treatment responsiveness. This integrated platform offers a clinically actionable foundation for developing radiolabeled immunotherapy drugs for use before surgery.
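
A loose sketch of the two-step pattern described above (autoencoder-based feature refinement followed by Extreme Gradient Boosting classification); all sizes, data, and hyperparameters below are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2000)).astype("float32")    # placeholder proteogenomic matrix
y = rng.integers(0, 2, size=300)                       # placeholder response labels

# Stacked (two-layer) autoencoder: 2000 -> 256 -> 32 latent dimensions.
enc = nn.Sequential(nn.Linear(2000, 256), nn.ReLU(), nn.Linear(256, 32))
dec = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 2000))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

xt = torch.from_numpy(X)
for _ in range(50):                                    # short reconstruction training loop
    opt.zero_grad()
    loss = ((dec(enc(xt)) - xt) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    Z = enc(xt).numpy()                                # refined low-dimensional features

clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(Z, y)                                          # response classification on latent codes
print(clf.score(Z, y))
```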

Mapping heterogeneity in the neuroanatomical correlates of depression

Watts, D., Mallard, T. T., Dall' Aglio, L., Giangrande, E., Kennedy, C., Cai, N., Choi, K. W., Ge, T., Smoller, J.

medRxiv preprint · Aug 29, 2025
Major depressive disorder (MDD) affects millions worldwide, yet its neurobiological underpinnings remain elusive. Neuroimaging studies have yielded inconsistent results, hindered by small sample sizes and heterogeneous depression definitions. We sought to address these limitations by leveraging the UK Biobank's extensive neuroimaging data (n=30,122) to investigate how depression phenotyping depth influences neuroanatomic profiles of MDD. We examined 256 brain structural features, obtained from T1- and diffusion-weighted brain imaging, and nine depression phenotypes, ranging from self-reported symptoms (shallow definitions) to clinical diagnoses (deep definitions). Multivariable logistic regression, machine learning classifiers, and feature transfer approaches were used to explore correlational patterns, predictive accuracy, and the transferability of important features across depression definitions. For white matter microstructure, we observed widespread fractional anisotropy decreases and mean diffusivity increases. In contrast, cortical thickness and surface area were less consistently associated across depression definitions and demonstrated weaker associations. Machine learning classifiers showed varying performance in distinguishing depression cases from controls, with shallow phenotypes achieving similar discriminative performance (AUC=0.807) and higher positive predictive value (PPV=0.655) compared to deep phenotypes (AUC=0.831, PPV=0.456), when sensitivity was standardized at 80%. However, when shallow phenotypes were downsampled to match deep phenotype case/control ratios, performance degraded substantially (AUC=0.690). Together, these results suggest that while core white-matter alterations are shared across phenotyping strategies, shallow phenotypes require approximately twice the sample size of deep phenotypes to achieve comparable classification performance, underscoring the fundamental power-specificity tradeoff in psychiatric neuroimaging research.
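
A small sketch of one evaluation detail mentioned above: choosing the decision threshold that yields 80% sensitivity and reporting PPV at that operating point alongside AUC, here on simulated scores rather than the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=5000)
scores = y * 0.8 + rng.normal(scale=1.0, size=5000)     # toy classifier scores

auc = roc_auc_score(y, scores)
fpr, tpr, thr = roc_curve(y, scores)
i = np.argmin(np.abs(tpr - 0.80))                       # threshold closest to 80% sensitivity
pred = scores >= thr[i]
ppv = (pred & (y == 1)).sum() / pred.sum()
print(f"AUC={auc:.3f}  PPV at ~80% sensitivity={ppv:.3f}")
```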

Ultrafast Multi-tracer Total-body PET Imaging Using a Transformer-Based Deep Learning Model.

Sun H, Sanaat A, Yi W, Salimi Y, Huang Y, Decorads CE, Castarède I, Wu H, Lu L, Zaidi H

PubMed · Aug 29, 2025
Reducing PET scan acquisition time to minimize motion-related artifacts and improve patient comfort is in constant demand. This study proposes a deep-learning framework for synthesizing diagnostic-quality PET images from ultrafast scans in multi-tracer total-body PET imaging. A retrospective analysis was conducted on clinical uEXPLORER PET/CT datasets from a single institution, including [18F]FDG (N=50), [18F]FAPI (N=45), and [68Ga]FAPI (N=60) studies. Standard 300-s acquisitions were performed for each patient, with ultrafast-scan PET images (3, 6, 15, 30, and 40 s) generated through list-mode data truncation. We developed two variants of a 3D SwinUNETR-V2 architecture: Model 1 (PET-only input) and Model 2 (PET+CT fusion input). The proposed methodology was trained and tested on all three datasets using 5-fold cross-validation. Model 1 and Model 2 significantly enhanced subjective image quality and lesion detectability in multi-tracer PET images compared to the original ultrafast scans, and both improved objective image quality metrics. For the [18F]FDG datasets, both approaches improved peak signal-to-noise ratio (PSNR) across ultra-short acquisitions: 3 s: 48.169±6.121 (Model 1) vs. 48.123±6.103 (Model 2) vs. 44.092±7.508 (ultrafast), p < 0.001; 6 s: 48.997±5.960 vs. 48.461±5.897 vs. 46.503±7.190, p < 0.001; 15 s: 50.310±5.674 vs. 50.042±5.734 vs. 49.331±6.732, p < 0.001. The proposed Model 1 and Model 2 effectively enhance the image quality of multi-tracer total-body PET scans with ultrafast acquisition times, and the predicted PET images demonstrate image quality and lesion detectability comparable to the standard acquisitions.
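
For context, a minimal sketch of the PSNR metric reported above, computed between a reference full-duration volume and a faster (or synthesized) one, on toy arrays rather than the study's evaluation pipeline.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    # Peak signal-to-noise ratio in dB, using the reference maximum as the peak value.
    mse = np.mean((reference - test) ** 2)
    peak = reference.max()
    return float(10 * np.log10(peak ** 2 / mse))

ref = np.random.rand(64, 64, 64).astype("float32")          # stand-in 300-s PET volume
fast = ref + np.random.normal(scale=0.05, size=ref.shape)   # noisier ultrafast stand-in
print(f"PSNR = {psnr(ref, fast):.1f} dB")
```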

Multimodal Deep Learning for Phyllodes Tumor Classification from Ultrasound and Clinical Data

Farhan Fuad Abir, Abigail Elliott Daly, Kyle Anderman, Tolga Ozmen, Laura J. Brattain

arXiv preprint · Aug 29, 2025
Phyllodes tumors (PTs) are rare fibroepithelial breast lesions that are difficult to classify preoperatively due to their radiological similarity to benign fibroadenomas. This often leads to unnecessary surgical excisions. To address this, we propose a multimodal deep learning framework that integrates breast ultrasound (BUS) images with structured clinical data to improve diagnostic accuracy. We developed a dual-branch neural network that extracts and fuses features from ultrasound images and patient metadata from 81 subjects with confirmed PTs. Class-aware sampling and subject-stratified 5-fold cross-validation were applied to prevent class imbalance and data leakage. The results show that our proposed multimodal method outperforms unimodal baselines in classifying benign versus borderline/malignant PTs. Among six image encoders, ConvNeXt and ResNet18 achieved the best performance in the multimodal setting, with AUC-ROC scores of 0.9427 and 0.9349, and F1-scores of 0.6720 and 0.7294, respectively. This study demonstrates the potential of multimodal AI to serve as a non-invasive diagnostic tool, reducing unnecessary biopsies and improving clinical decision-making in breast tumor management.
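
A hedged sketch of a dual-branch design of the kind described: an image encoder for the ultrasound frame and a small MLP for the clinical variables, concatenated before the classification head. The torchvision backbone and layer sizes are assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DualBranch(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int = 2):
        super().__init__()
        self.img_branch = resnet18(weights=None)
        self.img_branch.fc = nn.Identity()                 # keep the 512-d image features
        self.tab_branch = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, clinical):
        # Late fusion: concatenate image and clinical feature vectors.
        feats = torch.cat([self.img_branch(image), self.tab_branch(clinical)], dim=1)
        return self.head(feats)

model = DualBranch(n_clinical=8)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 8))
print(logits.shape)                                        # (4, 2): benign vs borderline/malignant
```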

Radiomics and deep learning methods for predicting the growth of subsolid nodules based on CT images.

Chen J, Yan W, Shi Y, Pan X, Yu R, Wang D, Zhang X, Wang L, Liu K

PubMed · Aug 29, 2025
The growth of subsolid nodules (SSNs) is a strong predictor of lung adenocarcinoma. However, the heterogeneity in the biological behavior of SSNs poses significant challenges for clinical management. This study aimed to evaluate the clinical utility of deep learning and radiomics approaches in predicting SSN growth based on computed tomography (CT) images. A total of 353 patients with 387 SSNs were enrolled in this retrospective study. All cases were divided into growth (n = 195) and non-growth (n = 192) groups and were randomly assigned to the training (n = 247), validation (n = 62), and test (n = 78) sets in a ratio of 3:1:1. We obtained 1454 radiomics features from each volumetric region of interest (VOI). The Pearson correlation coefficient and the least absolute shrinkage and selection operator (LASSO) were used to determine the radiomics signature. A ResNet18 architecture was used to construct the deep-learning model. The two models were combined via a ResNet-based fusion network to construct an ensemble model. The area under the curve (AUC) was plotted and decision curve analysis (DCA) was performed to determine the clinical performance of the three models. The combined model (AUC = 0.926, 95% CI: 0.869-0.977) outperformed the radiomics (AUC = 0.894, 95% CI: 0.808-0.957) and deep-learning (AUC = 0.802, 95% CI: 0.695-0.899) models in the test set. The DeLong test showed a statistically significant difference between the combined model and the deep-learning model (P = .012), and DCA supported the clinical value of the combined model. This study demonstrates that integrating radiomics with deep learning offers promising potential for the preoperative prediction of SSN growth.
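
An illustrative sketch of the feature-selection step named above (Pearson-correlation filtering followed by LASSO), with placeholder data and thresholds rather than the study's 1454-feature set.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(240, 300)),
                 columns=[f"feat_{i}" for i in range(300)])   # placeholder radiomics features
y = rng.integers(0, 2, size=240)                               # growth vs non-growth labels

# Step 1: drop one feature of every pair with |Pearson r| above a threshold.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
X_red = X.drop(columns=drop)

# Step 2: LASSO with cross-validated penalty; keep features with non-zero weights.
Xs = StandardScaler().fit_transform(X_red)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
selected = X_red.columns[lasso.coef_ != 0]
print(f"{len(selected)} features retained out of {X.shape[1]}")
```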