Zhang R, Wang K, Wang S, Wang C, Cao T, Ci C, Xu M, Ge M

PubMed · Oct 14, 2025
Proper stratification of recurrence risk in breast cancer is crucial for guiding treatment decisions. This study aims to predict the recurrence risk of breast cancer patients using a multimodal deep learning model that integrates imaging features from multiple MRI sequences with clinicopathologic characteristics. In this retrospective study, we enrolled 574 patients with non-metastatic invasive breast cancer from two Chinese institutions between September 2012 and July 2019. We developed a multimodal deep learning (MDL) model by constructing a multi-instance learning framework based on convolutional neural networks, integrating imaging features from T2WI, DWI, and DCE-MRI sequences with clinicopathologic features for recurrence risk stratification. The performance of the MDL model was evaluated using receiver operating characteristic (ROC) curves, the Hosmer-Lemeshow test, calibration curves, and decision curve analysis (DCA). Kaplan-Meier survival analysis was used to stratify patients into high- and low-recurrence-risk groups, and time-dependent ROC curves were used to assess 3-year, 5-year, and 7-year recurrence-free survival (RFS). Additionally, we performed differential and enrichment analyses on Oncotype DX genes and correlated these genes with clinicopathologic features and deep-learning radiographic features using univariate Cox regression and Pearson correlation analysis. The MDL model performed well in predicting breast cancer recurrence risk and accurately differentiated between high- and low-recurrence-risk groups, with an AUC as high as 0.915 (95% CI 0.8448-0.9856). The C-index of the prediction model was 0.803 in the testing cohort. The AUCs for 5-year and 7-year RFS were 0.936 (95% CI 0.876-0.997) and 0.956 (95% CI 0.902-1.000) in the validation cohort, and 0.836 (95% CI 0.763-0.909) and 0.783 (95% CI 0.676-0.891) in the testing cohort. Oncotype DX gene expression was significantly correlated with clinicopathologic features and deep-learning radiographic features (p < 0.05). This study validated the robust predictive accuracy of the MDL model in identifying high- and low-risk groups for recurrence; the correlations identified between Oncotype DX genes, clinicopathologic features, and deep-learning radiographic features offer novel insights for future biomarker research in breast cancer.
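The abstract does not include code; as a rough illustration of the kind of multi-instance learning such a model builds on, here is a minimal sketch of attention-based MIL pooling over per-slice CNN features fused with clinical covariates. All layer sizes, names, and the gated-attention design are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MILRecurrenceHead(nn.Module):
    """Attention-based MIL pooling over per-instance CNN features,
    fused with clinical covariates (a sketch, not the authors' code)."""
    def __init__(self, feat_dim=512, clin_dim=8, hidden=128):
        super().__init__()
        # Gated attention (Ilse et al., 2018 style) scores each instance.
        self.attn_V = nn.Linear(feat_dim, hidden)
        self.attn_U = nn.Linear(feat_dim, hidden)
        self.attn_w = nn.Linear(hidden, 1)
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim + clin_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit of recurrence risk
        )

    def forward(self, inst_feats, clin):
        # inst_feats: (n_instances, feat_dim), e.g. slice features from
        # T2WI/DWI/DCE backbones; clin: (clin_dim,) clinical covariates
        a = self.attn_w(torch.tanh(self.attn_V(inst_feats))
                        * torch.sigmoid(self.attn_U(inst_feats)))
        a = torch.softmax(a, dim=0)        # instance weights, (n, 1)
        bag = (a * inst_feats).sum(dim=0)  # weighted bag embedding
        return self.classifier(torch.cat([bag, clin]))

head = MILRecurrenceHead()
logit = head(torch.randn(24, 512), torch.randn(8))
risk = torch.sigmoid(logit)  # per-patient recurrence probability
```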

Shi YJ, Zhang H, Wang LL, Liu YL, Zhu HT, Li XT, Wei YY, Sun YS

PubMed · Oct 14, 2025
To develop and validate a deep learning tool for the automatic segmentation of pancreatic solid neoplasms and to establish a radiomics model for diagnosing these neoplasms in MRI. This retrospective study employed a three-dimensional nnU-Net-based model trained on plain MRI from patients who underwent resection for pancreatic neoplasms. A radiomics model for diagnosing pancreatic neoplasms was then developed based on the automatic segmentation. Segmentation performance was quantitatively evaluated using the dice similarity coefficient (DSC), and the radiomics model was assessed through receiver operating characteristic analysis. The study included 165 and 89 patients in the training and testing cohorts, respectively. The deep learning model achieved strong automatic segmentation performance, with mean DSC values of 0.82 on T2WI and 0.91 on DWI in the training cohort, and 0.64 on T2WI and 0.70 on DWI in the testing cohort. For pancreatic lesions smaller than 2 cm, the DSC values were 0.74 on T2WI and 0.92 on DWI in the training cohort, and 0.51 on T2WI and 0.62 on DWI in the testing cohort. Nine radiomics signatures were selected based on ROIs obtained from the automatic segmentation. The radiomics diagnostic model performed favorably in distinguishing pancreatic ductal adenocarcinomas (PDACs) from neuroendocrine neoplasms and solid pseudopapillary neoplasms, with AUCs of 0.968 and 0.790 in the training and testing cohorts, respectively. The deep learning segmentation tool accurately detected pancreatic neoplasms on MRI, with reasonable performance for tumors smaller than 2 cm, and the radiomics model favorably differentiated PDACs from neuroendocrine neoplasms and solid pseudopapillary neoplasms.
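For reference, the DSC reported here is the standard overlap metric for segmentation; a minimal self-contained version for binary masks (the mask shapes below are toy data):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: define as perfect overlap
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# e.g., a predicted vs. reference pancreas mask on one DWI volume
pred = np.zeros((8, 64, 64), dtype=np.uint8); pred[2:6, 10:40, 10:40] = 1
truth = np.zeros_like(pred);                  truth[2:6, 12:42, 12:42] = 1
print(f"DSC = {dice_similarity(pred, truth):.3f}")
```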

Zhao Y, Huang D, Jin H, Dong Y, Shan J, Zhang D, Qiu P, Hong C, Shen T

PubMed · Oct 14, 2025
This study aims to investigate the association between the retinal artery-to-vein ratio (AVR) and body fat distribution, and to evaluate the potential benefit of optimized fat distribution for systemic vascular health. A total of 2,698 participants aged 18 to 80 from the Lanxi cohort were enrolled; after applying inclusion and exclusion criteria, 2,045 were retained for the final analysis. Retinal vessel images were obtained through fundoscopy, and retinal AVR was automatically calculated using a clinically validated AI algorithm. Body fat was assessed by dual-energy X-ray absorptiometry (DXA). Adjusted multivariate linear regression models were used to identify associations between retinal AVR and fat distribution. Retinal AVR was negatively associated with waist-hip ratio (WHR), android fat mass percentage, android-to-gynoid fat ratio, and trunk fat mass percentage. Similar trends were observed when fat distribution indicators were categorized into quartiles (P for trend < 0.05). When stratified by age, a similar significant association was observed in the 45-60 age group. Retinal AVR was associated with fat distribution, with particularly strong correlations in middle-aged participants and those with metabolic abnormalities. These associations differed by the location of fat depots, suggesting that exercise-induced fat redistribution is associated with vascular health.
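A minimal sketch of the kind of adjusted linear regression described, using statsmodels; the column names and toy values are hypothetical, and the study adjusted for more covariates than shown here:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: retinal AVR, waist-hip ratio, age, sex (0/1).
df = pd.DataFrame({
    "avr": [0.72, 0.68, 0.75, 0.66, 0.70, 0.64],
    "whr": [0.85, 0.95, 0.80, 0.99, 0.88, 1.02],
    "age": [45, 58, 39, 62, 51, 55],
    "sex": [0, 1, 0, 1, 1, 0],
})
# Retinal AVR regressed on WHR, adjusted for age and sex.
model = smf.ols("avr ~ whr + age + sex", data=df).fit()
print(model.params["whr"], model.pvalues["whr"])  # adjusted association
```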

Borys K, Haubold J, Keyl J, Bali MA, De Angelis R, Boni KB, Coquelet N, Kohnke J, Baldini G, Kroll L, Schramm S, Stang A, Malamutmann E, Kleesiek J, Kim M, Kasper S, Siveke JT, Wiesweg M, Merkel-Jens A, Schaarschmidt BM, Gruenwald V, Bauer S, Oezcelik A, Bölükbas S, Herrmann K, Kimmig R, Lang S, Treckmann J, Stuschke M, Hadaschik B, Umutlu L, Forsting M, Schadendorf D, Friedrich CM, Schuler M, Hosch R, Nensa F

PubMed · Oct 14, 2025
This study evaluates the CT-based volumetric sarcopenia index (SI) as a baseline prognostic factor for overall survival (OS) in 10,340 solid tumor patients (40% female). Automated body composition analysis was applied to internal baseline abdominal and thoracic CTs. The prognostic value of SI was assessed using multivariable Cox proportional hazards regression, accelerated failure time models, and gradient-boosted machine learning. External validation included 439 patients (40% female). Higher SI was associated with prolonged OS in the internal abdomen cohort (HR 0.56, 95% CI 0.52-0.59; P < 0.001), the internal thorax cohort (HR 0.40, 95% CI 0.37-0.43; P < 0.001), and the external validation cohort (HR 0.56, 95% CI 0.41-0.79; P < 0.001). Machine learning models identified SI as the most important factor in survival prediction. Our results demonstrate SI's potential as a fully automated body composition feature for standard oncologic workflows.
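A minimal sketch of the multivariable Cox proportional hazards analysis described, using lifelines; all column names and values below are hypothetical toy data, and the actual model adjusted for additional covariates:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: follow-up time, event flag (1 = death),
# sarcopenia index, and one adjustment covariate.
df = pd.DataFrame({
    "os_months": [12.0, 30.5, 8.2, 44.1, 20.9, 60.0, 15.3, 38.7],
    "death":     [1, 0, 1, 0, 1, 0, 1, 0],
    "si":        [0.8, 1.4, 0.6, 1.6, 1.5, 1.7, 0.7, 0.9],
    "age":       [70, 62, 74, 50, 57, 48, 69, 66],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.hazard_ratios_["si"])  # HR < 1 means higher SI, longer OS
```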

Rispoli M, Calgaro G, Strano G, Rosboch GL, Massullo D, Piccirillo F, Nespoli MR, Coppolino F, Piccioni F

PubMed · Oct 14, 2025
The selection of an appropriately sized double-lumen tube (DLT) is a critical yet often underestimated aspect of thoracic anaesthesia. This narrative review evaluates traditional and emerging methods for determining DLT size, including anthropometric formulas, chest X-rays, CT scans, and ultrasonography. Despite the prevalence of height- and gender-based prediction, mounting evidence underscores its limited correlation with actual airway anatomy. Chest X-rays and CT scans offer more accurate estimates of tracheobronchial dimensions, while ultrasound is a promising bedside tool. Recent meta-analytic evidence and technological advances, including 3D reconstruction and AI-based modelling, may support a more personalised and safer approach. A pragmatic, image-guided strategy is recommended to minimise airway trauma, improve lung isolation, and optimise patient outcomes.

Yao G, Huang Y, Shang X, Guo L, Feng J, Lai Z, Huang W, Lu J, Chen L, Zheng M

PubMed · Oct 14, 2025
The aim of this study was to develop a multimodal fusion model for accurate risk prediction and clinical decision support in ductal carcinoma in situ (DCIS). Integrating deep learning (DL), radiomics, and clinical features, this study constructed a combined model and validated its performance in a multicenter cohort of 232 patients (103 in the training set, 43 in the validation set, and 86 in the external test set). Unimodal DL models showed significant overfitting in external testing (e.g., DenseNet201 training set AUC = 0.85 vs. test set 1 AUC = 0.47), whereas the multimodal fusion model achieved the best predictive performance across cohorts through heterogeneous data synergy (training set AUC = 0.925, test set 2 AUC = 0.801); the DeLong test confirmed that it significantly outperformed the unimodal models (P < 0.05). Grad-CAM visualization showed that the model's focus regions were highly consistent with radiologist annotations (81% overlap, Cohen's κ = 0.68). Calibration curves (Hosmer-Lemeshow test P > 0.05) and decision curve analysis (DCA) validated the model's prediction reliability (error < 5%) and net clinical benefit (net benefit difference of 7% to 28% at thresholds of 5% to 80%). The multimodal fusion strategy could mitigate the limitations of unimodal models, offering a promising, accurate, and interpretable solution for the individualized diagnosis and treatment of DCIS.
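The Hosmer-Lemeshow test used here groups predictions into risk deciles and compares observed versus expected event counts with a chi-square statistic; a minimal sketch of the standard formulation (the toy data below are synthetic and well calibrated by construction):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit: chi-square over risk deciles.
    Large p-values (> 0.05) suggest no evidence of miscalibration."""
    order = np.argsort(y_prob)
    y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
    stat = 0.0
    for grp_t, grp_p in zip(np.array_split(y_true, n_groups),
                            np.array_split(y_prob, n_groups)):
        obs, exp = grp_t.sum(), grp_p.sum()   # observed vs. expected events
        n, mean_p = len(grp_t), grp_p.mean()
        stat += (obs - exp) ** 2 / (n * mean_p * (1 - mean_p) + 1e-12)
    return stat, chi2.sf(stat, df=n_groups - 2)

rng = np.random.default_rng(0)
probs = rng.uniform(0.05, 0.95, 200)
labels = rng.binomial(1, probs)        # calibrated by construction
print(hosmer_lemeshow(labels, probs))  # expect p > 0.05
```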

Zhao W, Huang X, Xu L

PubMed · Oct 14, 2025
Accurate preoperative risk stratification for patients with head and neck (H&N) cancer remains a critical challenge, as long-term survival rates are poor despite aggressive multimodality treatment. While deep learning models show promise for outcome prediction from medical images, their typical requirement for massive datasets presents a significant barrier to development and clinical translation. To overcome this limitation, we developed a transfer learning-based framework to predict key treatment outcomes, locoregional recurrence (LR), distant metastasis (DM), and overall survival (OS), from non-invasive computed tomography (CT) images. Our framework, OPHN-Net, utilizes a VGG16 architecture pre-trained on ImageNet. It was trained and validated using a public dataset from The Cancer Imaging Archive comprising CT images and clinical data for 296 patients from four independent institutions. To overcome data limitations and class imbalance, we implemented a novel random-plane view resampling method for data augmentation. The network was trained and validated on data from two institutions and independently tested on a cohort from the remaining two. Finally, we constructed an integrated model combining the predictions of the imaging-based model with key clinical characteristics to further enhance performance. On the independent test cohort, OPHN-Net substantially outperformed both traditional radiomics and a previously published deep learning model across all endpoints, achieving AUCs of 0.84 (95% CI, 0.75-0.90) for LR, 0.89 (95% CI, 0.82-0.95) for DM, and 0.79 (95% CI, 0.70-0.87) for OS. Integrating clinical characteristics with the imaging-based predictions yielded a final model with even better performance, boosting the AUCs to 0.87 (95% CI, 0.80-0.93) for LR, 0.91 (95% CI, 0.83-0.95) for DM, and 0.86 (95% CI, 0.78-0.92) for OS. OPHN-Net provides a robust and data-efficient method for predicting treatment outcomes in H&N cancer from non-invasive CT images, and integrating imaging-based predictions with clinical characteristics creates a more comprehensive prognostic model. This approach has the potential to facilitate personalized treatment stratification, ultimately improving clinical decision-making and patient outcomes.
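A minimal sketch of the transfer-learning setup described, an ImageNet-pretrained VGG16 adapted to a binary endpoint such as distant metastasis; the head size, freezing policy, and hyperparameters are illustrative assumptions, not OPHN-Net's actual configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG16 with the final FC layer swapped for a
# single-logit head; convolutional features are frozen for fine-tuning.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in backbone.features.parameters():
    p.requires_grad = False
backbone.classifier[6] = nn.Linear(4096, 1)

opt = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(2, 3, 224, 224)   # CT slices replicated to 3 channels
y = torch.tensor([[1.0], [0.0]])  # e.g., DM event yes/no
loss = loss_fn(backbone(x), y)
loss.backward(); opt.step()
```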

Yan P, Tong K, Liu Z, Cheng T, Li X, Wang Z, Wu H, Liu K, Xu H, Yang Z

PubMed · Oct 14, 2025
To establish and validate a machine learning model using preoperative multi-sequence MRI radiomic features and clinical data to predict pancreatic fistula after pancreaticoduodenectomy (PD). We retrospectively analyzed 139 patients who underwent PD, divided into a training group (n = 97) and a test group (n = 42) by stratified sampling at a 7:3 ratio. Regions of interest (ROIs) were delineated on non-enhanced T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), high b-value diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, and dynamic contrast-enhanced (DCE) images, and radiomic features were extracted. The most significant radiomic features were selected using the t-test and the least absolute shrinkage and selection operator with cross-validation (LASSO-CV). Clinical risk factors for clinically relevant postoperative pancreatic fistula (CR-POPF) were identified using univariate logistic regression. Ten machine learning algorithms were used to develop radiomics and clinical models, which were then integrated via weighted voting. Model performance was evaluated using receiver operating characteristic (ROC) curves and calibration curves. Eight radiomic features and one clinical feature (BMI) were selected for model construction. Among the ten algorithms, Random Forest (RF) yielded the best radiomics model, achieving an AUC of 0.702, an F1-score of 0.571, and an accuracy of 0.857 in the test set; K-Nearest Neighbors (KNN) produced the best clinical model, with corresponding values of 0.846, 0.640, and 0.786. The integrated model, developed with a weighted voting strategy, showed the best overall performance, with an AUC of 0.899 and excellent discrimination and calibration. The machine learning model using preoperative multi-sequence MRI radiomic features showed moderate predictive value for CR-POPF risk, which was significantly enhanced by integrating BMI.
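A minimal sketch of the described pipeline, LASSO-CV feature selection followed by per-modality models combined by weighted soft voting; the t-test prefilter is omitted, the data are synthetic stand-ins, and the weights and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-ins for radiomic (X_rad) and clinical (X_clin) tables.
X_rad, y = make_classification(n_samples=139, n_features=100, random_state=0)
X_clin = (y + np.random.default_rng(0).normal(0, 1, 139)).reshape(-1, 1)

# 1) LASSO with cross-validation keeps features with nonzero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(
    StandardScaler().fit_transform(X_rad), y)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:
    keep = np.arange(X_rad.shape[1])  # fallback: keep all features
X_sel = X_rad[:, keep]

# 2) One model per modality, then weighted soft voting of probabilities
#    (shown on the training data purely for brevity).
rad_model = RandomForestClassifier(random_state=0).fit(X_sel, y)
clin_model = make_pipeline(StandardScaler(),
                           KNeighborsClassifier()).fit(X_clin, y)
w_rad, w_clin = 0.5, 0.5  # illustrative voting weights
p = (w_rad * rad_model.predict_proba(X_sel)[:, 1]
     + w_clin * clin_model.predict_proba(X_clin)[:, 1])
```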

Liu Z, Li J, Nawaz SA

PubMed · Oct 14, 2025
As information technology evolves, medical technology is becoming increasingly digitized and intelligent. As a result, large volumes of medical imaging data carrying patient-identifying information are stored and transmitted over networks, greatly increasing the risk that medical images are leaked, tampered with, or stolen. To address this issue, a zero-watermarking method for encrypted medical images is proposed based on HC dual chaos and DWT-ResNet-DCT. First, an HC dual-chaotic composite system is designed by coupling the dynamics of the Henon chaotic map and the Chen chaotic system. Building on the WHT-DCT transform, a lossless encryption algorithm is then proposed that is sensitive to initial values and has a large key space; while maintaining high encryption efficiency, it achieves lossless decryption of medical images. On this basis, the paper proposes a DWT-ResNet-DCT watermarking algorithm for encrypted medical images that combines the DWT transform domain with the ResNet50 convolutional neural network to accurately extract feature sequences from encrypted medical images. Finally, experiments verify that the algorithm maintains high NC values (greater than 0.8) under conventional, geometric, and combined attacks, demonstrating strong anti-attack capability and, in particular, good robustness under high-intensity geometric attacks.
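As a rough illustration of chaotic keystream encryption, here is a sketch using the Henon map alone; the paper's HC system additionally couples in the Chen chaotic system, and its WHT-DCT-based algorithm differs from this plain XOR scheme:

```python
import numpy as np

def henon_keystream(n_bytes, x0=0.1, y0=0.3, a=1.4, b=0.3, burn=1000):
    """Byte keystream from the Henon map x' = 1 - a*x^2 + y, y' = b*x.
    Initial values (x0, y0) act as the key; 'burn' iterations are
    discarded so the orbit settles onto the attractor."""
    x, y = x0, y0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(burn + n_bytes):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= burn:
            out[i - burn] = int(abs(x) * 1e6) % 256  # quantize to a byte
    return out

img = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
ks = henon_keystream(img.size)
cipher = img ^ ks.reshape(img.shape)   # XOR encryption
plain = cipher ^ ks.reshape(img.shape) # XOR again decrypts losslessly
assert np.array_equal(plain, img)
```

Because XOR with a keystream is exactly invertible, decryption is lossless, which is the property the paper's (more elaborate) scheme also guarantees.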

Vure RB, Pappala LK

PubMed · Oct 14, 2025
The increasing prevalence of brain tumors calls for accurate and reliable diagnostic tools. While traditional techniques offer some benefits, they struggle to detect or accurately classify tumor type at an early stage, which is crucial for proper treatment planning. Deep learning methods have been considered for this purpose but are often impractical, with architectures ill-suited to complex, heterogeneous datasets and a need for large numbers of labeled samples. This paper introduces an advanced deep learning framework for improving classification accuracy across brain tumor classes (glioma, meningioma, no tumor, and pituitary). It uses an enriched, comprehensive dataset, with data augmentation performed through Generative Adversarial Networks (GANs) to boost model performance and address the scarcity of labeled data. ResNet18 was used for feature extraction, as it has proven effective for extracting features from complex medical images while remaining computationally efficient. A deep multimodal fusion network (DMFN) was then formulated from three ResNet18 models operating pairwise on the extracted features. On the BRATS2021 dataset, the model improves markedly over existing techniques, reducing training loss to 0.1963 and validation loss to 0.1382 while reaching a validation accuracy of 98.36%. These results underscore the potential of the approach to advance brain tumor diagnostics. The combination of GAN-based data augmentation, PCA-PSO for feature selection, and DMFN for classification provides a robust framework that could extend to other medical imaging tasks, improving clinical outcomes and advancing the field of medical image analysis.
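A minimal sketch of ResNet18 used as a feature extractor, the backbone role described here; the DMFN's pairwise fusion is not reproduced, and the input shapes are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet18 with the FC head removed yields a
# 512-d feature vector per image, the kind of feature the DMFN fuses.
resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(resnet.children())[:-1])  # drop fc layer
extractor.eval()

with torch.no_grad():
    mri_batch = torch.randn(4, 3, 224, 224)  # grayscale slices tiled to RGB
    feats = extractor(mri_batch).flatten(1)  # -> (4, 512) feature vectors
print(feats.shape)
```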