Page 43 of 47469 results

Recent advancements in personalized management of prostate cancer biochemical recurrence after radical prostatectomy.

Falkenbach F, Ekrutt J, Maurer T

May 15, 2025
Biochemical recurrence (BCR) after radical prostatectomy exhibits heterogeneous prognostic implications. Recent advancements in imaging and biomarkers have high potential for personalizing care. Prostate-specific membrane antigen (PSMA) PET/CT has revolutionized BCR management in prostate cancer by detecting microscopic lesions earlier than conventional staging, leading to improved cancer control outcomes and changes in treatment plans in approximately two-thirds of cases. Salvage radiotherapy, often combined with androgen deprivation therapy, remains the standard treatment for high-risk BCR after prostatectomy, with PSMA-PET/CT guiding treatment adjustments, such as the radiation field, and improving progression-free survival. Advancements in biomarkers, genomic classifiers, and artificial intelligence-based models have enhanced risk stratification and personalized treatment planning, resulting in both treatment intensification and de-escalation. While conventional risk grouping based on Gleason score and PSA level and kinetics remains the foundation of BCR management, PSMA-PET/CT, novel biomarkers, and artificial intelligence may enable more personalized treatment strategies.

A monocular endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network.

Chong N, Yang F, Wei K

May 15, 2025
Minimally invasive surgery involves entering the body through small incisions or natural orifices, using a medical endoscope for observation and clinical procedures. However, traditional endoscopic images often suffer from low texture and uneven illumination, which can negatively impact surgical and diagnostic outcomes. To address these challenges, many researchers have applied deep learning methods to enhance the processing of endoscopic images. This paper proposes a monocular medical endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network. In this network, one branch focuses on processing global image information, while the other branch concentrates on local details. An improved lightweight Squeeze-and-Excitation (SE) module is added to the final layer of each branch, dynamically adjusting the inter-channel weights through self-attention. The outputs from both branches are then integrated using a lightweight cross-attention feature fusion module, enabling cross-branch feature interaction and enhancing the overall feature representation capability of the network. Extensive ablation and comparative experiments were conducted on medical datasets (EAD2019, Hamlyn, M2caiSeg, UCL) and a non-medical dataset (NYUDepthV2), with both qualitative and quantitative results (RMSE, AbsRel, FLOPs, and running time) demonstrating the superiority of the proposed model. Additionally, comparisons with CT images show good organ boundary matching capability, highlighting the potential of our method for clinical applications. The key code of this paper is available at: https://github.com/superchongcnn/AttenAdapt_DE.
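The SE module described above reweights feature channels from a global "squeeze" statistic. A minimal NumPy sketch of a standard SE block (not the authors' lightweight variant; the weight matrices `w1`/`w2` are illustrative placeholders):

```python
import numpy as np

def squeeze_excitation(feature_map, w1, w2):
    """Simplified Squeeze-and-Excitation channel attention.

    feature_map: (C, H, W) array.
    w1: (C//r, C) and w2: (C, C//r) fully connected weights,
    where r is the channel reduction ratio.
    """
    # Squeeze: global average pool each channel to one scalar.
    squeezed = feature_map.mean(axis=(1, 2))            # (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid.
    hidden = np.maximum(w1 @ squeezed, 0.0)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # (C,)
    # Scale: reweight channels by the learned attention weights.
    return feature_map * weights[:, None, None]
```

With zero-initialized weights every channel is scaled by sigmoid(0) = 0.5, which is a handy sanity check when wiring the block into a larger network.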

Application of deep learning with fractal images to sparse-view CT.

Kawaguchi R, Minagawa T, Hori K, Hashimoto T

May 15, 2025
Deep learning has been widely used in research on sparse-view computed tomography (CT) image reconstruction. While sufficient training data can lead to high accuracy, collecting medical images is often challenging due to legal or ethical concerns, making it necessary to develop methods that perform well with limited data. To address this issue, we explored the use of nonmedical images for pre-training. Therefore, in this study, we investigated whether fractal images could improve the quality of sparse-view CT images, even with a reduced number of medical images. Fractal images generated by an iterated function system (IFS) were used as the nonmedical images, and medical images were obtained from the CHAOS dataset. Sinograms were then generated from 36 projections (sparse view), and the images were reconstructed by filtered back-projection (FBP). FBPConvNet and WNet (first module: learning fractal images, second module: testing medical images, and third module: learning output) were used as the networks. The effectiveness of pre-training was then investigated for each network. The quality of the reconstructed images was evaluated using two indices: structural similarity (SSIM) and peak signal-to-noise ratio (PSNR). The network parameters pre-trained with fractal images showed reduced artifacts compared to the network trained exclusively with medical images, resulting in improved SSIM. WNet outperformed FBPConvNet in terms of PSNR. Pre-training WNet with fractal images produced the best image quality, and the number of medical images required for main-training was reduced from 5000 to 1000 (80% reduction). Using fractal images for network training can reduce the number of medical images required for artifact reduction in sparse-view CT. Therefore, fractal images can improve accuracy even with a limited amount of training data in deep learning.
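The two evaluation indices used above are standard. A minimal NumPy sketch of PSNR and a simplified global-statistics SSIM (the standard SSIM index uses local sliding windows; this global form is only an approximation for illustration):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM from global image statistics.
    Constants c1, c2 follow the usual (0.01*L)^2, (0.03*L)^2 choice."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB, and SSIM of an image with itself is 1.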

Machine Learning-Based Multimodal Radiomics and Transcriptomics Models for Predicting Radiotherapy Sensitivity and Prognosis in Esophageal Cancer.

Ye C, Zhang H, Chi Z, Xu Z, Cai Y, Xu Y, Tong X

May 15, 2025
Radiotherapy plays a critical role in treating esophageal cancer, but individual responses vary significantly, impacting patient outcomes. This study integrates machine learning-driven multimodal radiomics and transcriptomics to develop predictive models for radiotherapy sensitivity and prognosis in esophageal cancer. We applied the SEResNet101 deep learning model to imaging and transcriptomic data from the UCSC Xena and TCGA databases, identifying prognosis-associated genes such as STUB1, PEX12, and HEXIM2. Using Lasso regression and Cox analysis, we constructed a prognostic risk model that accurately stratifies patients based on survival probability. Notably, STUB1, an E3 ubiquitin ligase, enhances radiotherapy sensitivity by promoting the ubiquitination and degradation of SRC, a key oncogenic protein. In vitro and in vivo experiments confirmed that STUB1 overexpression or SRC silencing significantly improves radiotherapy response in esophageal cancer models. These findings highlight the predictive power of multimodal data integration for individualized radiotherapy planning and underscore STUB1 as a promising therapeutic target for enhancing radiotherapy efficacy in esophageal cancer.
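A Lasso/Cox prognostic model like the one above ultimately reduces to a weighted sum of selected features and a cutoff-based risk split. A minimal NumPy sketch of that final stratification step, assuming the gene coefficients are already fitted (the values here are placeholders, not the paper's model):

```python
import numpy as np

def prognostic_risk_scores(expression, coefficients):
    """Linear risk score per patient: weighted sum of the selected
    gene-expression values (coefficients from a fitted Lasso/Cox model)."""
    return expression @ coefficients

def stratify_by_median(scores):
    """Split patients into high/low risk groups at the median score,
    a common cutoff choice for survival stratification."""
    cutoff = np.median(scores)
    return np.where(scores > cutoff, "high", "low")

# Toy example: 4 patients x 2 genes (e.g. STUB1, PEX12 as placeholders).
expression = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [1.0, 1.0],
                       [2.0, 2.0]])
coefficients = np.array([1.0, 1.0])
groups = stratify_by_median(prognostic_risk_scores(expression, coefficients))
```

The high/low groups would then be compared with Kaplan-Meier curves or a log-rank test, which is outside this sketch.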

A Deep-Learning Framework for Ovarian Cancer Subtype Classification Using Whole Slide Images.

Wang C, Yi Q, Aflakian A, Ye J, Arvanitis T, Dearn KD, Hajiyavand A

May 15, 2025
Ovarian cancer, a leading cause of cancer-related deaths among women, comprises distinct subtypes each requiring different treatment approaches. This paper presents a deep-learning framework for classifying ovarian cancer subtypes using Whole Slide Imaging (WSI). Our method comprises three stages: image tiling, feature extraction, and multi-instance learning. Our approach is trained and validated on a public dataset from 80 distinct patients, achieving up to 89.8% accuracy with a notable improvement in computational efficiency. The results demonstrate the potential of our framework to augment diagnostic precision in clinical settings, offering a scalable solution for the accurate classification of ovarian cancer subtypes.
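The first stage, image tiling, splits a gigapixel slide into fixed-size patches for the feature extractor. A minimal NumPy sketch of one common tiling choice (non-overlapping tiles, incomplete edge tiles discarded; the paper's exact tiling policy is not specified):

```python
import numpy as np

def tile_image(slide, tile_size):
    """Split a 2D slide array into non-overlapping square tiles,
    discarding incomplete tiles at the right/bottom edges."""
    h, w = slide.shape[:2]
    tiles = []
    for r in range(0, h - tile_size + 1, tile_size):
        for c in range(0, w - tile_size + 1, tile_size):
            tiles.append(slide[r:r + tile_size, c:c + tile_size])
    return tiles
```

In a multi-instance learning setup, the resulting tiles form the "bag" for one patient, and only the slide-level subtype label is supervised.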

CT-based AI framework leveraging multi-scale features for predicting pathological grade and Ki67 index in clear cell renal cell carcinoma: a multicenter study.

Yang H, Zhang Y, Li F, Liu W, Zeng H, Yuan H, Ye Z, Huang Z, Yuan Y, Xiang Y, Wu K, Liu H

May 14, 2025
To explore whether a CT-based AI framework, leveraging multi-scale features, can offer a non-invasive approach to accurately predict pathological grade and Ki67 index in clear cell renal cell carcinoma (ccRCC). In this multicenter retrospective study, a total of 1073 pathologically confirmed ccRCC patients from seven cohorts were split into internal cohorts (training and validation sets) and an external test set. The AI framework comprised an image processor, a 3D kidney-and-tumor segmentation model based on 3D-UNet, a multi-scale feature extractor built upon unsupervised learning, and a multi-task classifier utilizing XGBoost. A quantitative model interpretation technique, known as SHapley Additive exPlanations (SHAP), was employed to explore the contribution of multi-scale features. The 3D-UNet model showed excellent performance in segmenting both the kidney and tumor regions, with Dice coefficients exceeding 0.92. The proposed multi-scale feature model exhibited strong predictive capability for pathological grading and Ki67 index, with AUROC values of 0.84 and 0.87, respectively, in the internal validation set, and 0.82 and 0.82, respectively, in the external test set. The SHAP results demonstrated that features from radiomics, the 3D Auto-Encoder, and dimensionality reduction all made significant contributions to both prediction tasks. The proposed AI framework, leveraging multi-scale features, accurately predicts the pathological grade and Ki67 index of ccRCC. The CT-based AI framework leveraging multi-scale features offers a promising avenue for accurately predicting the pathological grade and Ki67 index of ccRCC preoperatively, indicating a direction for non-invasive assessment. Non-invasively determining pathological grade and Ki67 index in ccRCC could guide treatment decisions. The AI framework integrates segmentation, classification, and model interpretation, enabling fully automated analysis.
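The Dice coefficient used to report segmentation quality above measures voxel overlap between predicted and reference masks. A minimal NumPy sketch of the standard definition:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity between two binary segmentation masks:
    2 * |intersection| / (|pred| + |true|), with eps guarding
    against division by zero when both masks are empty."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum() + eps)
```

A Dice above 0.92, as reported for the 3D-UNet, means the predicted and reference kidney/tumor volumes overlap almost completely.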
The AI framework enables non-invasive preoperative detection of high-risk tumors, assisting clinical decision-making.

Development and Validation of Ultrasound Hemodynamic-based Prediction Models for Acute Kidney Injury After Renal Transplantation.

Ni ZH, Xing TY, Hou WH, Zhao XY, Tao YL, Zhou FB, Xing YQ

May 14, 2025
Acute kidney injury (AKI) post-renal transplantation often has a poor prognosis. This study aimed to identify patients with elevated risks of AKI after kidney transplantation. A retrospective analysis was conducted on 422 patients who underwent kidney transplants from January 2020 to April 2023. Participants from 2020 to 2022 were randomly assigned to a training group (n=261) and validation group 1 (n=113); those from 2023 formed validation group 2 (n=48). Risk factors were determined by employing logistic regression analysis alongside the least absolute shrinkage and selection operator, making use of ultrasound hemodynamic, clinical, and laboratory information. Models for prediction were developed using logistic regression analysis and six machine-learning techniques. The evaluation of the logistic regression model encompassed its discrimination, calibration, and applicability in clinical settings, and a nomogram was created to illustrate the model. SHapley Additive exPlanations were used to explain and visualize the best of the six machine learning models. The least absolute shrinkage and selection operator combined with logistic regression identified and incorporated five risk factors into the predictive model. The logistic regression model (AUC=0.927 in validation set 1; AUC=0.968 in validation set 2) and the random forest model (AUC=0.946 in validation set 1; AUC=0.996 in validation set 2) showed good performance post-validation, with no significant difference in their predictive accuracy. These findings can assist clinicians in the early identification of patients at high risk for AKI, allowing for timely interventions and potentially enhancing the prognosis following kidney transplantation.
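The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen AKI patient receives a higher predicted risk than a randomly chosen non-AKI patient. A minimal NumPy sketch computing AUROC from that definition (ties counted as half):

```python
import numpy as np

def auroc(labels, scores):
    """AUROC as P(score_positive > score_negative), ties = 0.5.
    labels: 0/1 array; scores: predicted risk per subject."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every positive against every negative.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise form is O(n_pos * n_neg) and fine for cohort-sized data; rank-based implementations scale better for very large datasets.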

The Future of Urodynamics: Innovations, Challenges, and Possibilities.

Chew LE, Hannick JH, Woo LL, Weaver JK, Damaser MS

May 14, 2025
Urodynamic studies (UDS) are essential for evaluating lower urinary tract function but are limited by patient discomfort, lack of standardization and diagnostic variability. Advances in technology aim to address these challenges and improve diagnostic accuracy and patient comfort. Ambulatory urodynamic monitoring (AUM) offers physiological assessment by allowing natural bladder filling and monitoring during daily activities. Compared to conventional UDS, AUM demonstrates higher sensitivity for detecting detrusor overactivity and underlying pathophysiology. However, it faces challenges like motion artifacts, catheter-related discomfort, and difficulty measuring continuous bladder volume. Emerging devices such as Urodynamics Monitor and UroSound offer more patient-friendly alternatives. These tools have the potential to improve diagnostic accuracy for bladder pressure and voiding metrics but remain limited and require further validation and testing. Ultrasound-based modalities, including dynamic ultrasonography and shear wave elastography, provide real-time, noninvasive assessment of bladder structure and function. These modalities are promising but will require further development of standardized protocols. AI and machine learning models enhance diagnostic accuracy and reduce variability in UDS interpretation. Applications include detecting detrusor overactivity and distinguishing bladder outlet obstruction from detrusor underactivity. However, further validation is required for clinical adoption. Advances in AUM, wearable technologies, ultrasonography, and AI demonstrate potential for transforming UDS into a more accurate, patient-centered tool. Despite significant progress, challenges like technical complexity, standardization, and cost-effectiveness must be addressed to integrate these innovations into routine practice. Nonetheless, these technologies provide the possibility of a future of improved diagnosis and treatment of lower urinary tract dysfunction.

The utility of low-dose pre-operative CT of ovarian tumor with artificial intelligence iterative reconstruction for diagnosing peritoneal invasion, lymph node and hepatic metastasis.

Cai X, Han J, Zhou W, Yang F, Liu J, Wang Q, Li R

May 13, 2025
Diagnosis of peritoneal invasion, lymph node metastasis, and hepatic metastasis is crucial in the decision-making process of ovarian tumor treatment. This study aimed to test the feasibility of low-dose abdominopelvic CT with an artificial intelligence iterative reconstruction (AIIR) for diagnosing peritoneal invasion, lymph node metastasis, and hepatic metastasis in pre-operative imaging of ovarian tumor. This study prospectively enrolled 88 patients with pathology-confirmed ovarian tumors, where routine-dose CT at portal venous phase (120 kVp/ref. 200 mAs) with hybrid iterative reconstruction (HIR) was followed by a low-dose scan (120 kVp/ref. 40 mAs) with AIIR. The performance of diagnosing peritoneal invasion and lymph node metastasis was assessed using receiver operating characteristic (ROC) analysis with pathological results serving as the reference. The hepatic parenchymal metastases were diagnosed and signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured. The perihepatic structures were also scored on the clarity of porta hepatis, gallbladder fossa and intersegmental fissure. The effective dose of low-dose CT was 79.8% lower than that of the routine-dose scan (2.64 ± 0.46 vs. 13.04 ± 2.25 mSv, p < 0.001). The low-dose AIIR showed a similar area under the ROC curve (AUC) to routine-dose HIR for diagnosing both peritoneal invasion (0.961 vs. 0.960, p = 0.734) and lymph node metastasis (0.711 vs. 0.715, p = 0.355). The 10 hepatic parenchymal metastases were all accurately diagnosed on the two image sets. The low-dose AIIR exhibited higher SNR and CNR for hepatic parenchymal metastases and superior clarity for perihepatic structures. In low-dose pre-operative CT of ovarian tumor, AIIR delivers similar diagnostic accuracy for peritoneal invasion, lymph node metastasis, and hepatic metastasis, as compared to routine-dose abdominopelvic CT.
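The reported 79.8% dose reduction and the CNR metric both follow directly from their definitions. A minimal NumPy sketch reproducing the dose arithmetic from the mean effective doses above, with a standard ROI-based CNR formula (the paper's exact ROI placement is not specified):

```python
import numpy as np

def dose_reduction(routine_msv, low_msv):
    """Fractional effective-dose reduction of the low-dose protocol."""
    return (routine_msv - low_msv) / routine_msv

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: absolute difference of ROI means
    divided by the background standard deviation (noise)."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

# Mean effective doses from the study: 13.04 mSv routine, 2.64 mSv low-dose.
reduction = dose_reduction(13.04, 2.64)   # about 0.798, i.e. ~79.8%
```

Several CNR variants exist (e.g. pooling the noise from both ROIs); the background-noise form above is one common convention.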
It is feasible and diagnostically safe to apply up to 80% dose reduction in CT imaging of ovarian tumor by using AIIR.

Segmentation of renal vessels on non-enhanced CT images using deep learning models.

Zhong H, Zhao Y, Zhang Y

May 13, 2025
To evaluate the possibility of performing renal vessel reconstruction on non-enhanced CT images using deep learning models. CT scans of 177 patients in the non-enhanced, arterial, and venous phases were selected. These data were randomly divided into a training set (n = 120), validation set (n = 20) and test set (n = 37). In the training and validation sets, a radiologist marked out the right renal arteries and veins on non-enhanced CT phase images using the contrast phases as references. Trained deep learning models were tested and evaluated on the test set. A radiologist performed renal vessel reconstruction on the test set without the contrast phase reference, and the results were used for comparison. Reconstruction using the arterial phase and venous phase was used as the gold standard. Without the contrast phase reference, both the radiologist and the model could accurately identify the main trunks of the artery and vein. The accuracy was 91.9% vs. 97.3% (model vs. radiologist) for the artery and 91.9% vs. 100% for the vein; the differences were insignificant. The model had difficulty identifying accessory arteries; its accuracy was significantly lower than the radiologist's (44.4% vs. 77.8%, p = 0.044). The model also had lower accuracy for accessory veins, but the difference was insignificant (64.3% vs. 85.7%, p = 0.094). Deep learning models could accurately recognize the right renal artery and vein main trunks, with accuracy comparable to that of radiologists. Although the current model still had difficulty recognizing small accessory vessels, further training and model optimization could address these problems.
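The reported percentages are simple proportions of correctly identified vessels. A minimal sketch of that computation; note the underlying case counts here are inferred from the percentages and the test-set size (e.g. 34 of 37 main trunks, 4 of 9 accessory arteries) and are assumptions, not figures stated in the abstract:

```python
def accuracy_pct(correct, total):
    """Accuracy as a percentage, rounded to one decimal place."""
    return round(100.0 * correct / total, 1)

# Inferred counts: 34/37 main arteries (model) -> 91.9%,
# 4/9 accessory arteries (model) -> 44.4%.
model_main_artery = accuracy_pct(34, 37)
model_accessory_artery = accuracy_pct(4, 9)
```

The significance tests quoted in the abstract (e.g. p = 0.044) would then compare such counts between model and radiologist, typically with Fisher's exact test for samples this small.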
