Page 8 of 99986 results

3D gadolinium-enhanced high-resolution near-isotropic pancreatic imaging at 3.0-T MR using deep-learning reconstruction.

Guan S, Poujol J, Gouhier E, Touloupas C, Delpla A, Boulay-Coletta I, Zins M

PubMed · Sep 24 2025
To compare overall image quality, lesion conspicuity and detectability on 3D-T1w-GRE arterial phase high-resolution MR images with deep learning reconstruction (3D-DLR) against standard-of-care reconstruction (SOC-Recon) in patients with suspected pancreatic disease. Patients who underwent a pancreatic MR exam with a high-resolution 3D-T1w-GRE arterial phase acquisition on a 3.0-T MR system between December 2021 and June 2022 in our center were retrospectively included. A new deep learning-based reconstruction algorithm (3D-DLR) was used to additionally reconstruct arterial phase images. Two radiologists blinded to the reconstruction type assessed images for image quality, artifacts and lesion conspicuity using a Likert scale and counted the lesions. Signal-to-noise ratio (SNR) and lesion contrast-to-noise ratio (CNR) were calculated for each reconstruction. Quantitative data were evaluated using paired t-tests. Ordinal data such as image quality, artifacts and lesion conspicuity were analyzed using paired Wilcoxon tests. Interobserver agreement for image quality and artifact assessment was evaluated using Cohen's kappa. Thirty-two patients (mean age 62 years ± 12, 16 female) were included. 3D-DLR significantly improved SNR for each pancreatic segment and lesion CNR compared to SOC-Recon (p < 0.01), and demonstrated a significantly higher average image quality score (3.34 vs 2.68, p < 0.01). 3D-DLR also significantly reduced artifacts compared to SOC-Recon (p < 0.01) for one radiologist. 3D-DLR exhibited significantly higher average lesion conspicuity (2.30 vs 1.85, p < 0.01). Sensitivity was higher with 3D-DLR than with SOC-Recon for both reader 1 and reader 2 (1.00 vs 0.88 and 0.88 vs 0.83; p = 0.62 for both). 3D-DLR images demonstrated higher overall image quality, leading to better lesion conspicuity. 
3D deep learning reconstruction can be applied to gadolinium-enhanced pancreatic 3D-T1w arterial phase high-resolution images without additional acquisition time to further improve image quality and lesion conspicuity. 3D-DLR had not previously been applied to high-resolution pancreatic MRI sequences. This method improves SNR, CNR, and overall 3D-T1w arterial pancreatic image quality. Enhanced lesion conspicuity may improve pancreatic lesion detectability.
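The SNR and lesion CNR comparisons above follow standard ROI-based definitions. A minimal sketch of those two measures (the formulas are the common textbook definitions, not the authors' code; the ROI intensities and noise level below are hypothetical):

```python
import numpy as np

def snr(roi_signal: np.ndarray, noise_sd: float) -> float:
    """Signal-to-noise ratio: mean ROI signal over noise standard deviation."""
    return float(roi_signal.mean() / noise_sd)

def cnr(lesion_roi: np.ndarray, background_roi: np.ndarray, noise_sd: float) -> float:
    """Contrast-to-noise ratio: absolute ROI mean difference over noise SD."""
    return float(abs(lesion_roi.mean() - background_roi.mean()) / noise_sd)

rng = np.random.default_rng(0)
lesion = rng.normal(300, 10, size=500)    # hypothetical lesion intensities
pancreas = rng.normal(220, 10, size=500)  # hypothetical parenchyma intensities
noise_sd = 12.0                           # hypothetical background noise SD

print(round(snr(pancreas, noise_sd), 1))
print(round(cnr(lesion, pancreas, noise_sd), 1))
```

With paired reconstructions of the same acquisition, these per-segment values are what the paired t-tests in the study would compare.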

Exploring the role of preprocessing combinations in hyperspectral imaging for deep learning colorectal cancer detection.

Tkachenko M, Huber B, Hamotskyi S, Jansen-Winkeln B, Gockel I, Neumuth T, Köhler H, Maktabi M

PubMed · Sep 23 2025
This study compares various preprocessing techniques for hyperspectral deep learning-based cancer diagnostics. The study considers different spectrum scaling and noise reduction options across the spatial and spectral axes of hyperspectral datacubes, as well as varying levels of removal of blood and light-reflection pixels. We also examine how the size of the patches extracted from the hyperspectral data affects the models' performance. We additionally explore various strategies to mitigate our dataset's imbalance (where cancerous tissues are underrepresented). Our results indicate that: standardization significantly improves both sensitivity and specificity compared to normalization; larger input patch sizes enhance performance by capturing more spatial context; noise reduction unexpectedly degrades performance; and blood filtering is more effective than filtering reflected-light pixels, although neither approach produces significant results. By carefully maintaining consistent testing conditions, we ensure a fair comparison across preprocessing methods and reproducibility. Our findings highlight the necessity of careful preprocessing selection to maximize deep learning performance in medical imaging applications.
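The scaling comparison can be made concrete with a small sketch, assuming per-spectrum operations and square spatial patches from an (H, W, bands) datacube; this is a simplification for illustration, not the study's exact pipeline:

```python
import numpy as np

def standardize(spectra: np.ndarray) -> np.ndarray:
    """Per-spectrum standardization: zero mean, unit variance along the band axis."""
    mu = spectra.mean(axis=-1, keepdims=True)
    sd = spectra.std(axis=-1, keepdims=True)
    return (spectra - mu) / (sd + 1e-8)

def min_max_normalize(spectra: np.ndarray) -> np.ndarray:
    """Per-spectrum min-max normalization to [0, 1] along the band axis."""
    lo = spectra.min(axis=-1, keepdims=True)
    hi = spectra.max(axis=-1, keepdims=True)
    return (spectra - lo) / (hi - lo + 1e-8)

def extract_patch(cube: np.ndarray, row: int, col: int, size: int) -> np.ndarray:
    """Square spatial patch (size x size x bands) centered on a pixel."""
    half = size // 2
    return cube[row - half:row + half + 1, col - half:col + half + 1, :]

cube = np.random.default_rng(1).random((64, 64, 100))  # hypothetical datacube
patch = extract_patch(cube, 32, 32, 9)
print(patch.shape)  # (9, 9, 100)
```

Larger `size` values give the model more spatial context per sample, which matches the patch-size finding reported above.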

Refining the Classroom: The Self-Supervised Professor Model for Improved Segmentation of Locally Advanced Pancreatic Ductal Adenocarcinoma.

Bereska JI, Palic S, Bereska LF, Gavves E, Nio CY, Kop MPM, Struik F, Daams F, van Dam MA, Dijkhuis T, Besselink MG, Marquering HA, Stoker J, Verpalen IM

PubMed · Sep 23 2025
Pancreatic ductal adenocarcinoma (PDAC) is a leading cause of cancer-related deaths, with accurate staging being critical for treatment planning. Automated 3D segmentation models can aid in staging, but segmenting PDAC, especially in cases of locally advanced pancreatic cancer (LAPC), is challenging due to the tumor's heterogeneous appearance, irregular shapes, and extensive infiltration. This study developed and evaluated a tripartite self-supervised learning architecture for improved 3D segmentation of LAPC, addressing the challenges of heterogeneous appearance, irregular shapes, and extensive infiltration in PDAC. We implemented a tripartite architecture consisting of a teacher model, a professor model, and a student model. The teacher model, trained on manually segmented CT scans, generated initial pseudo-segmentations. The professor model refined these segmentations, which were then used to train the student model. We utilized 1115 CT scans from 903 patients for training. Three expert abdominal radiologists manually segmented 30 CT scans from 27 patients with LAPC, serving as reference standards. We evaluated the performance using DICE, Hausdorff distance (HD95), and mean surface distance (MSD). The teacher, professor, and student models achieved average DICE scores of 0.60, 0.73, and 0.75, respectively, with significant boundary accuracy improvements (teacher HD95/MSD, 25.71/5.96 mm; professor, 9.68/1.96 mm; student, 4.79/1.34 mm). Our findings demonstrate that the professor model significantly enhances segmentation accuracy for LAPC (p < 0.01). Both the professor and student models offer substantial improvements over previous work. The introduced tripartite self-supervised learning architecture shows promise for improving automated 3D segmentation of LAPC, potentially aiding in more accurate staging and treatment planning.
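The DICE scores used to compare the teacher, professor, and student models are the standard Dice similarity coefficient over binary masks. A minimal sketch with synthetic masks (not the study's evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2 * np.logical_and(pred, ref).sum() / denom)

# Two partially overlapping synthetic 3D masks
pred = np.zeros((10, 10, 10)); pred[2:6, 2:6, 2:6] = 1
ref = np.zeros((10, 10, 10)); ref[3:7, 3:7, 3:7] = 1
print(round(dice(pred, ref), 3))  # 0.422
```

The boundary metrics reported alongside it (HD95, MSD) measure surface distances rather than overlap, which is why they improve so sharply as segmentation borders tighten.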

CT-based radiomics deep learning signatures for noninvasive prediction of early recurrence after radical surgery in locally advanced colorectal cancer: A multicenter study.

Zhou Y, Zhao J, Tan Y, Zou F, Fang L, Wei P, Zeng W, Gong L, Liu L, Zhong L

PubMed · Sep 23 2025
Preoperative identification of high-risk locally advanced colorectal cancer (LACRC) patients is vital for optimizing treatment and minimizing toxicity. This study aims to develop and validate a combined model of CT-based images and clinical laboratory parameters to noninvasively predict postoperative early recurrence (ER) in LACRC patients. A retrospective cohort of 560 pathologically confirmed LACRC patients, collected from three centers between July 2018 and March 2022 together with the Gene Expression Omnibus (GEO) dataset, was analyzed. We extracted radiomics and deep learning signatures (RDs) using eight machine learning techniques, integrated them with clinical-laboratory parameters to construct a preoperative combined model, and validated it in two external datasets. Its predictive performance was compared with postoperative pathological and TNM staging models. Kaplan-Meier analysis was used to evaluate preoperative risk stratification, and molecular correlations with ER were explored using GEO RNA-sequencing data. The model included five independent prognostic factors: RDs, lymphocyte-to-monocyte ratio, neutrophil-to-lymphocyte ratio, lymphocyte-albumin, and prognostic nutritional index. It outperformed the pathological and TNM models in two external datasets (AUC for test set 1: 0.865 vs. 0.766 and 0.665; AUC for test set 2: 0.848 vs. 0.754 and 0.694). Preoperative risk stratification identified significantly better disease-free survival in low-risk vs. high-risk patients across all subgroups (p < 0.01). High enrichment scores were associated with upregulated tumor proliferation pathways (epithelial-mesenchymal transition [EMT] and inflammatory response pathways) and altered immune cell infiltration patterns in the tumor microenvironment. The preoperative model enables treatment strategy optimization and reduces unnecessary drug toxicity by noninvasively predicting ER in LACRC.
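The clinical-laboratory predictors named above have standard definitions. A sketch of those inputs, using the conventional formulas (the Onodera prognostic nutritional index for PNI); these are the usual clinical definitions, not taken from the paper, and the example counts are hypothetical:

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio (absolute counts)."""
    return neutrophils / lymphocytes

def lmr(lymphocytes: float, monocytes: float) -> float:
    """Lymphocyte-to-monocyte ratio (absolute counts)."""
    return lymphocytes / monocytes

def pni(albumin_g_dl: float, lymphocytes_per_ul: float) -> float:
    """Onodera prognostic nutritional index:
    10 x serum albumin (g/dL) + 0.005 x total lymphocyte count (/uL)."""
    return 10 * albumin_g_dl + 0.005 * lymphocytes_per_ul

print(round(nlr(4200, 1500), 2))  # 2.8
print(round(pni(4.0, 1500), 1))   # 47.5
```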

Multitask radioclinical decision stratification in non-metastatic colon cancer: integrating MMR status, pT staging, and high-risk pathological factors.

Yang R, Liu J, Li L, Fan Y, Shu Y, Wu W, Shu J

PubMed · Sep 22 2025
To construct a multi-task global decision support system based on preoperative enhanced CT features to predict mismatch repair (MMR) status, T stage, and pathological risk factors (e.g., histological differentiation, lymphovascular invasion) in patients with non-metastatic colon cancer. 372 eligible non-metastatic colon cancer (NMCC) participants (training cohort: n = 260; testing cohort: n = 112) were enrolled from two institutions. The 34 features (imaging features: n = 27; clinical features: n = 7) were subjected to feature selection using LASSO, Boruta, ReliefF, mRMR, and XGBoost-RFE, respectively. In each of the three categories (MMR, pT staging, and pathological risk factors), four features were selected to construct the total feature set. Subsequently, the multitask model was built with 14 machine learning algorithms. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC). The final feature set for constructing the model was based on the mRMR feature screening method. For the final MMR classification, pT staging, and pathological risk factor tasks, SVC, Bernoulli NB, and decision tree algorithms were selected respectively, with AUC scores of 0.80 [95% CI 0.71-0.89], 0.82 [95% CI 0.71-0.94], and 0.85 [95% CI 0.77-0.93] on the test set. Furthermore, a direct multiclass model constructed using the total feature set resulted in an average AUC of 0.77 across four management plans in the test set. The multi-task machine learning model proposed in this study enables non-invasive and precise preoperative stratification of patients with NMCC based on MMR status, pT stage, and pathological risk factors. This predictive tool demonstrates significant potential in facilitating preoperative risk stratification and guiding individualized therapeutic strategies.
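The per-task setup (one classifier per prediction target, each on its own four selected features, scored by AUC) can be sketched as follows. The structure, feature split, and data here are assumed and synthetic; only the classifier families (SVC, Bernoulli NB, decision tree) come from the abstract:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))  # 4 hypothetical features per task, 3 tasks
y = {"MMR": (X[:, 0] > 0).astype(int),
     "pT": (X[:, 4] > 0).astype(int),
     "risk": (X[:, 8] > 0).astype(int)}
clfs = {"MMR": SVC(probability=True, random_state=0),
        "pT": BernoulliNB(),
        "risk": DecisionTreeClassifier(max_depth=3, random_state=0)}
cols = {"MMR": slice(0, 4), "pT": slice(4, 8), "risk": slice(8, 12)}

aucs = {}
for task, clf in clfs.items():
    Xt = X[:, cols[task]]
    if task == "pT":                      # Bernoulli NB expects binary features
        Xt = (Xt > 0).astype(int)
    clf.fit(Xt[:200], y[task][:200])      # train split
    aucs[task] = roc_auc_score(y[task][200:], clf.predict_proba(Xt[200:])[:, 1])
print(aucs)
```

Each task is fit and evaluated independently, which is what lets different algorithm families win on different targets.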

Comprehensive Assessment of Tumor Stromal Heterogeneity in Bladder Cancer by Deep Learning and Habitat Radiomics.

Du Y, Sui Y, Tao Y, Cao J, Jiang X, Yu J, Wang B, Wang Y, Li H

PubMed · Sep 22 2025
Tumor stromal heterogeneity plays a pivotal role in bladder cancer progression. The tumor-stroma ratio (TSR) is a key pathological marker reflecting stromal heterogeneity. This study aimed to develop a preoperative, CT-based machine learning model for predicting TSR in bladder cancer, comparing various radiomic approaches, and evaluating their utility in prognostic assessment and immunotherapy response prediction. A total of 477 bladder urothelial carcinoma patients from two centers were retrospectively included. Tumors were segmented on preoperative contrast-enhanced CT, and radiomic features were extracted. K-means clustering was used to divide tumors into subregions. Radiomics models were constructed: a conventional model (Intra), a multi-subregion model (Habitat), and single-subregion models (HabitatH1/H2/H3). A deep transfer learning model (DeepL) based on the largest tumor cross-section was also developed. Model performance was evaluated in training, testing, and external validation cohorts, and associations with recurrence-free survival, CD8+ T cell infiltration, and immunotherapy response were analyzed. The HabitatH1 model demonstrated robust diagnostic performance with favorable calibration and clinical utility. The DeepL model surpassed all radiomics models in predictive accuracy. A nomogram combining DeepL and clinical variables effectively predicted recurrence-free survival, CD8+ T cell infiltration, and immunotherapy response. Imaging-predicted TSR showed significant associations with the tumor immune microenvironment and treatment outcomes. CT-based habitat radiomics and deep learning models enable non-invasive, quantitative assessment of TSR in bladder cancer. The DeepL model provides superior diagnostic and prognostic value, supporting personalized treatment decisions and prediction of immunotherapy response.
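The habitat step above (K-means clustering of tumor voxels into subregions) can be sketched as follows; this is a simplified, assumed form using one intensity feature per voxel, whereas the study clusters radiomic feature vectors:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic voxel intensities inside a tumor mask, drawn from three
# hypothetical tissue populations (e.g., necrosis, stroma, enhancing tumor).
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(50, 5, (300, 1)),
                         rng.normal(120, 5, (300, 1)),
                         rng.normal(200, 5, (300, 1))])

# Cluster voxels into habitat subregions; radiomics features would then be
# extracted per subregion (the Habitat / HabitatH1-H3 models).
habitats = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
sizes = np.bincount(habitats)
print(len(sizes), sizes.sum())  # 3 habitats covering all 900 voxels
```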

A multi-class segmentation model of deep learning on contrast-enhanced computed tomography to segment and differentiate lipid-poor adrenal nodules: a dual-center study.

Bai X, Wu Z, Lu L, Zhang H, Zheng H, Zhang Y, Liu X, Zhang Z, Zhang G, Zhang D, Jin Z, Sun H

PubMed · Sep 22 2025
To develop a deep-learning model for segmenting and classifying adrenal nodules as either lipid-poor adenoma (LPA) or nodular hyperplasia (NH) on contrast-enhanced computed tomography (CECT) images. This retrospective dual-center study included 164 patients (median age 51.0 years; 93 females) with pathologically confirmed LPA or NH. The model was trained on 128 patients from the internal center and validated on 36 external cases. Radiologists annotated adrenal glands and nodules on 1-mm portal-venous phase CT images. We proposed Mamba-USeg, a novel state-space model (SSM)-based multi-class segmentation method that performs simultaneous segmentation and classification. Performance was evaluated using the mean Dice similarity coefficient (mDSC) for segmentation and sensitivity/specificity for classification, with comparisons made against MultiResUNet and CPFNet. On per-slice segmentation, the model yielded an mDSC of 0.855 for the adrenal gland; for nodule segmentation, it achieved mDSCs of 0.869 (LPA) and 0.863 (NH), significantly outperforming two previous models, MultiResUNet (LPA, p < 0.001; NH, p = 0.014) and CPFNet (LPA, p = 0.003; NH, p = 0.023). Per-slice classification demonstrated sensitivity of 95.3% (95% confidence interval [CI] 91.3-96.6%) and specificity of 92.7% (95% CI: 91.9-93.6%) for LPA, and sensitivity of 94.2% (95% CI: 89.7-97.7%) and specificity of 91.5% (95% CI: 90.4-92.4%) for NH. Patient-level classification accuracy in the external cohort was 91.7% (95% CI: 76.8-98.9%). The proposed multi-class segmentation model can accurately segment and differentiate between LPA and NH on CECT images, demonstrating superior performance to existing methods. Question Accurate differentiation between LPA and NH on imaging remains clinically challenging yet critically important for guiding appropriate treatment approaches. 
Findings Mamba-USeg, a multi-class segmentation model utilizing pixel-level analysis and majority-voting strategies, can accurately segment and classify adrenal nodules as LPA or NH. Clinical relevance The proposed multi-class segmentation model can simultaneously segment and classify adrenal nodules, outperforming previous models in accuracy; it significantly aids clinical decision-making and may thereby reduce unnecessary surgeries in patients with adrenal hyperplasia.
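The majority-voting step mentioned in the findings (aggregating per-slice nodule calls into one patient-level label) can be sketched as follows; the exact aggregation rule is assumed, not specified in the abstract:

```python
from collections import Counter

def majority_vote(slice_labels: list[str]) -> str:
    """Patient-level label: the most frequent per-slice classification."""
    return Counter(slice_labels).most_common(1)[0][0]

# Hypothetical per-slice calls for one patient
print(majority_vote(["LPA", "LPA", "NH", "LPA"]))  # LPA
```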

Deep-learning-based prediction of significant portal hypertension with single cross-sectional non-enhanced CT.

Yamamoto A, Sato S, Ueda D, Walston SL, Kageyama K, Jogo A, Nakano M, Kotani K, Uchida-Kobayashi S, Kawada N, Miki Y

PubMed · Sep 22 2025
The purpose of this study was to establish a predictive deep learning (DL) model for clinically significant portal hypertension (CSPH) based on a single cross-sectional non-contrast CT image and to compare four representative positional images to determine the most suitable for the detection of CSPH. The study included 421 patients with chronic liver disease who underwent hepatic venous pressure gradient measurement at our institution between May 2007 and January 2024. Patients were randomly divided into training, validation, and test datasets at a ratio of 8:1:1. Non-contrast cross-sectional CT images from four target areas of interest were used to create four deep-learning-based models for predicting CSPH. The areas of interest were the umbilical portion of the portal vein (PV), the first right branch of the PV, the confluence of the splenic vein and PV, and the maximum cross-section of the spleen. The models were implemented using convolutional neural networks with a multilayer perceptron as the classifier. The model with the best predictive ability for CSPH was then compared to 13 conventional evaluation methods. Among the four areas, the umbilical portion of the PV had the highest predictive ability for CSPH (area under the curve [AUC]: 0.80). At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model. We developed an algorithm that can predict CSPH immediately from a single slice of non-contrast CT, using the most suitable image of the umbilical portion of the PV. Question CSPH predicts complications but requires invasive hepatic venous pressure gradient measurement for diagnosis. Findings At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model. 
Clinical relevance This study shows that a DL model can accurately predict CSPH from a single non-contrast CT image, providing a non-invasive alternative to invasive methods and aiding early detection and risk stratification in chronic liver disease without image manipulation.
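The operating point reported above is chosen by maximizing the Youden index, J = sensitivity + specificity - 1. A minimal sketch of that threshold search over model scores (the scores and labels below are synthetic, not study data):

```python
import numpy as np

def youden_threshold(scores: np.ndarray, labels: np.ndarray):
    """Return the score threshold maximizing J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[labels == 1])    # true positive rate
        spec = np.mean(~pred[labels == 0])   # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = np.array([0.1, 0.3, 0.35, 0.6, 0.7, 0.9])  # hypothetical model outputs
labels = np.array([0, 0, 1, 0, 1, 1])
t, j = youden_threshold(scores, labels)
print(t, round(j, 3))
```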

Volume Fusion-based Self-Supervised Pretraining for 3D Medical Image Segmentation.

Wang G, Fu J, Wu J, Luo X, Zhou Y, Liu X, Li K, Lin J, Shen B, Zhang S

PubMed · Sep 22 2025
The performance of deep learning models for medical image segmentation is often limited in scenarios where training data or annotations are scarce. Self-Supervised Learning (SSL) is an appealing solution to this dilemma due to its ability to learn features from large amounts of unannotated images. Existing SSL methods have focused on pretraining either an encoder for global feature representation or an encoder-decoder structure for image restoration, where the gap between pretext and downstream tasks limits the usefulness of pretrained decoders in downstream segmentation. In this work, we propose a novel SSL strategy named Volume Fusion (VolF) for pretraining 3D segmentation models. It minimizes the gap between pretext and downstream tasks by introducing a pseudo-segmentation pretext task, in which two sub-volumes are fused by a discretized block-wise fusion coefficient map. The model takes the fused result as input and predicts the category of the fusion coefficient for each voxel, and can be trained with standard supervised segmentation loss functions without manual annotations. Experiments with an abdominal CT dataset for pretraining and both in-domain and out-of-domain downstream datasets showed that VolF led to a large performance gain over training from scratch, with faster convergence, and outperformed several state-of-the-art SSL methods. In addition, it is applicable to different network structures, and the learned features generalize well to different body parts and modalities.
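The pretext-task construction described above can be sketched in a few lines: two sub-volumes are blended by a block-wise coefficient map discretized into K levels, and the per-voxel coefficient category becomes the pseudo-segmentation label. This is a simplified reading of the description (block size, level count, and fusion form are assumptions for illustration):

```python
import numpy as np

def volume_fusion(vol_a, vol_b, block=4, levels=4, rng=None):
    """Fuse two sub-volumes with a discretized block-wise coefficient map.
    Returns (fused input, per-voxel coefficient category = pseudo-label)."""
    rng = rng or np.random.default_rng()
    blocks = [s // block for s in vol_a.shape]
    cat = rng.integers(0, levels, size=blocks)          # one category per block
    cat = cat.repeat(block, 0).repeat(block, 1).repeat(block, 2)
    alpha = cat / (levels - 1)                          # discretized coefficient
    fused = alpha * vol_a + (1 - alpha) * vol_b
    return fused, cat

a = np.zeros((8, 8, 8))   # hypothetical sub-volume A
b = np.ones((8, 8, 8))    # hypothetical sub-volume B
fused, cat = volume_fusion(a, b, block=4, levels=4, rng=np.random.default_rng(0))
print(fused.shape, cat.shape)
```

A segmentation network trained to predict `cat` from `fused` with an ordinary cross-entropy or Dice loss needs no manual annotations, which is the point of the pretext task.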

MRI-based habitat analysis for pathologic response prediction after neoadjuvant chemoradiotherapy in rectal cancer: a multicenter study.

Chen Q, Zhang Q, Li Z, Zhang S, Xia Y, Wang H, Lu Y, Zheng A, Shao C, Shen F

PubMed · Sep 22 2025
To investigate MRI-based habitat analysis for its value in predicting pathologic response following neoadjuvant chemoradiotherapy (nCRT) in rectal cancer (RC) patients. 1021 RC patients in three hospitals were divided into the training and test sets (n = 319), the internal validation set (n = 317), and external validation sets 1 (n = 158) and 2 (n = 227). Deep learning was performed to automatically segment the entire lesion on high-resolution MRI. Simple linear iterative clustering was used to divide each tumor into subregions, from which radiomics features were extracted. The optimal number of clusters reflecting the diversity of the tumor ecosystem was determined. Finally, four models were developed: clinical, intratumoral heterogeneity (ITH)-based, radiomics, and fusion models. The performance of these models was evaluated. The impact of nCRT on disease-free survival (DFS) was further analyzed. The Delong test revealed the fusion model (AUCs of 0.867, 0.851, 0.852, and 0.818 in the four cohorts, respectively), the radiomics model (0.831, 0.694, 0.753, and 0.705, respectively), and the ITH model (0.790, 0.786, 0.759, and 0.722, respectively) were all superior to the clinical model (0.790, 0.605, 0.735, and 0.704, respectively). However, no significant differences were detected between the fusion and ITH models. Patients stratified using the fusion model showed significant differences in DFS between the good and poor response groups (all p < 0.05 in the four sets). The fusion model combining clinical factors, radiomics features, and ITH features may help predict pathologic response in RC cases receiving nCRT. Question Identifying rectal cancer (RC) patients likely to benefit from neoadjuvant chemoradiotherapy (nCRT) before treatment is crucial. Findings The fusion model shows the best performance in predicting response after neoadjuvant chemoradiotherapy. 
Clinical relevance The fusion model integrates clinical characteristics, radiomics features, and intratumoral heterogeneity (ITH) features, and can be applied to predict response to nCRT in RC patients, offering potential benefits for personalized treatment strategies.
