Page 37 of 41408 results

Application of deep learning with fractal images to sparse-view CT.

Kawaguchi R, Minagawa T, Hori K, Hashimoto T

PubMed | May 15, 2025
Deep learning has been widely used in research on sparse-view computed tomography (CT) image reconstruction. While sufficient training data can lead to high accuracy, collecting medical images is often challenging due to legal or ethical concerns, making it necessary to develop methods that perform well with limited data. To address this issue, we explored the use of nonmedical images for pre-training. Specifically, we investigated whether fractal images could improve the quality of sparse-view CT images even with a reduced number of medical images. Fractal images generated by an iterated function system (IFS) served as the nonmedical images, and medical images were obtained from the CHAOS dataset. Sinograms were generated from 36 projections (sparse view), and the images were reconstructed by filtered back-projection (FBP). FBPConvNet and WNet (first module: learning fractal images; second module: testing medical images; third module: learning output) were used as networks, and the effectiveness of pre-training was investigated for each. The quality of the reconstructed images was evaluated using two indices: structural similarity (SSIM) and peak signal-to-noise ratio (PSNR). The network pre-trained with fractal images showed reduced artifacts compared with the network trained exclusively on medical images, resulting in improved SSIM. WNet outperformed FBPConvNet in terms of PSNR. Pre-training WNet with fractal images produced the best image quality, and the number of medical images required for main training was reduced from 5000 to 1000 (an 80% reduction). Using fractal images for network training can therefore reduce the number of medical images required for artifact reduction in sparse-view CT, improving accuracy even with a limited amount of training data.
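As a point of illustration, fractal images of the kind used for pre-training can be produced with the chaos game over an IFS: repeatedly apply a randomly chosen contractive affine map and record the orbit. The three maps below define the Sierpinski triangle; the paper's actual IFS parameters are not given here, so this is a hypothetical minimal sketch.

```python
import random

def ifs_points(n_points=10000, seed=0):
    """Generate a fractal point set via the chaos game on an IFS.

    Each map is (a, b, c, d, e, f), applied as
    (x, y) -> (a*x + b*y + e, c*x + d*y + f).
    These three maps yield the Sierpinski triangle (illustrative choice).
    """
    rng = random.Random(seed)
    maps = [
        (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
    ]
    x, y = rng.random(), rng.random()
    points = []
    for i in range(n_points + 100):
        a, b, c, d, e, f = rng.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i >= 100:  # discard burn-in before the orbit settles on the attractor
            points.append((x, y))
    return points
```

Rendering many such point sets (with randomized map parameters) yields the varied, structure-rich nonmedical images that pre-training relies on.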

Recent advancements in personalized management of prostate cancer biochemical recurrence after radical prostatectomy.

Falkenbach F, Ekrutt J, Maurer T

PubMed | May 15, 2025
Biochemical recurrence (BCR) after radical prostatectomy carries heterogeneous prognostic implications, and recent advancements in imaging and biomarkers have high potential for personalizing care. Prostate-specific membrane antigen (PSMA) PET/CT has revolutionized BCR management in prostate cancer by detecting microscopic lesions earlier than conventional staging, leading to improved cancer control outcomes and changes in treatment plans in approximately two-thirds of cases. Salvage radiotherapy, often combined with androgen deprivation therapy, remains the standard treatment for high-risk BCR after prostatectomy, with PSMA-PET/CT guiding treatment adjustments, such as the radiation field, and improving progression-free survival. Advancements in biomarkers, genomic classifiers, and artificial intelligence-based models have enhanced risk stratification and personalized treatment planning, enabling both treatment intensification and de-escalation. While conventional risk grouping based on Gleason score and PSA level and kinetics remains the foundation of BCR management, PSMA-PET/CT, novel biomarkers, and artificial intelligence may enable more personalized treatment strategies.

A Deep-Learning Framework for Ovarian Cancer Subtype Classification Using Whole Slide Images.

Wang C, Yi Q, Aflakian A, Ye J, Arvanitis T, Dearn KD, Hajiyavand A

PubMed | May 15, 2025
Ovarian cancer, a leading cause of cancer-related deaths among women, comprises distinct subtypes, each requiring a different treatment approach. This paper presents a deep-learning framework for classifying ovarian cancer subtypes using whole slide imaging (WSI). Our method comprises three stages: image tiling, feature extraction, and multi-instance learning. The approach is trained and validated on a public dataset from 80 distinct patients, achieving up to 89.8% accuracy with a notable improvement in computational efficiency. The results demonstrate the potential of our framework to augment diagnostic precision in clinical settings, offering a scalable solution for the accurate classification of ovarian cancer subtypes.
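The third stage, multi-instance learning, typically pools per-tile features into one slide-level embedding. A minimal sketch of attention-based MIL pooling is shown below; the dimensions and the gated-attention form are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def attention_mil_pool(features, V, w):
    """Attention-based MIL pooling over tile features for one slide.

    features: (n_tiles, d) tile embeddings.
    V: (d, h) and w: (h,) learned attention parameters (here random).
    Returns the attention-weighted slide embedding and the tile weights.
    """
    scores = np.tanh(features @ V) @ w            # (n_tiles,) raw attention
    scores -= scores.max()                        # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()  # softmax over tiles
    return attn @ features, attn                  # slide embedding, tile weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 8))                  # 12 tiles, 8-dim features
V, w = rng.normal(size=(8, 4)), rng.normal(size=4)
slide_vec, attn = attention_mil_pool(feats, V, w)
```

The attention weights double as an interpretability signal: tiles with high weight indicate regions the classifier leaned on.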

A monocular endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network.

Chong N, Yang F, Wei K

PubMed | May 15, 2025
Minimally invasive surgery involves entering the body through small incisions or natural orifices, using a medical endoscope for observation and clinical procedures. However, traditional endoscopic images often suffer from low texture and uneven illumination, which can negatively impact surgical and diagnostic outcomes. To address these challenges, many researchers have applied deep learning methods to enhance the processing of endoscopic images. This paper proposes a monocular medical endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network. In this network, one branch focuses on processing global image information, while the other branch concentrates on local details. An improved lightweight Squeeze-and-Excitation (SE) module is added to the final layer of each branch, dynamically adjusting the inter-channel weights through self-attention. The outputs from both branches are then integrated using a lightweight cross-attention feature fusion module, enabling cross-branch feature interaction and enhancing the overall feature representation capability of the network. Extensive ablation and comparative experiments were conducted on medical datasets (EAD2019, Hamlyn, M2caiSeg, UCL) and a non-medical dataset (NYUDepthV2), with both qualitative and quantitative results (measured in terms of RMSE, AbsRel, FLOPs, and running time) demonstrating the superiority of the proposed model. Additionally, comparisons with CT images show good organ boundary matching capability, highlighting the potential of our method for clinical applications. The key code of this paper is available at: https://github.com/superchongcnn/AttenAdapt_DE.
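For readers unfamiliar with the SE mechanism mentioned above: an SE block "squeezes" each channel to a scalar by global average pooling, passes the result through a small bottleneck ending in a sigmoid, and rescales each channel by the learned weight. The sketch below uses plain NumPy with random weights and a reduction ratio of 4; the paper's lightweight SE variant is not specified here.

```python
import numpy as np

def se_block(x, W1, W2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.

    W1: (C/r, C) and W2: (C, C/r) are the excitation bottleneck weights.
    """
    c = x.shape[0]
    z = x.reshape(c, -1).mean(axis=1)        # squeeze: per-channel global average
    s = np.maximum(W1 @ z, 0.0)              # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(W2 @ s)))      # FC + sigmoid -> channel weights in (0, 1)
    return x * s[:, None, None]              # scale each channel by its weight

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 8, 8))              # C=16 channels, 8x8 spatial map
W1 = rng.normal(size=(4, 16))                # reduction ratio r=4
W2 = rng.normal(size=(16, 4))
y = se_block(x, W1, W2)
```

Because the channel weights lie in (0, 1), the block can only attenuate channels, which is what lets it reweight inter-channel importance cheaply.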

Development and Validation of Ultrasound Hemodynamic-based Prediction Models for Acute Kidney Injury After Renal Transplantation.

Ni ZH, Xing TY, Hou WH, Zhao XY, Tao YL, Zhou FB, Xing YQ

PubMed | May 14, 2025
Acute kidney injury (AKI) after renal transplantation often has a poor prognosis. This study aimed to identify patients at elevated risk of AKI after kidney transplantation. A retrospective analysis was conducted on 422 patients who underwent kidney transplants from January 2020 to April 2023. Participants from 2020 to 2022 were randomized to a training group (n=261) and validation group 1 (n=113); those from 2023 formed validation group 2 (n=48). Risk factors were identified by logistic regression analysis together with the least absolute shrinkage and selection operator (LASSO), using ultrasound hemodynamic, clinical, and laboratory information. Prediction models were developed using logistic regression analysis and six machine-learning techniques. The logistic regression model was evaluated for discrimination, calibration, and clinical applicability, and a nomogram was created to illustrate it. SHapley Additive exPlanations (SHAP) were used to explain and visualize the best of the six machine-learning models. LASSO combined with logistic regression identified five risk factors, which were incorporated into the predictive model. The logistic regression model (AUC=0.927 in validation group 1; AUC=0.968 in validation group 2) and the random forest model (AUC=0.946 in validation group 1; AUC=0.996 in validation group 2) showed good performance after validation, with no significant difference in predictive accuracy. These findings can assist clinicians in the early identification of patients at high risk for AKI, allowing timely interventions and potentially improving prognosis following kidney transplantation.
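The discrimination metric reported above, AUC (area under the ROC curve), equals the probability that a randomly chosen positive case outscores a randomly chosen negative case, with ties counted as half. A minimal pairwise sketch, fine for small validation sets like these:

```python
def auroc(labels, scores):
    """AUROC via the pairwise (Mann-Whitney) definition.

    labels: 0/1 outcomes; scores: model risk scores.
    Counts the fraction of positive-negative pairs where the positive
    scores higher, with ties contributing 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])` is 1.0 (perfect separation), while random scores hover around 0.5.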

The Future of Urodynamics: Innovations, Challenges, and Possibilities.

Chew LE, Hannick JH, Woo LL, Weaver JK, Damaser MS

PubMed | May 14, 2025
Urodynamic studies (UDS) are essential for evaluating lower urinary tract function but are limited by patient discomfort, lack of standardization, and diagnostic variability. Advances in technology aim to address these challenges and improve diagnostic accuracy and patient comfort. Ambulatory urodynamic monitoring (AUM) offers physiological assessment by allowing natural bladder filling and monitoring during daily activities. Compared with conventional UDS, AUM demonstrates higher sensitivity for detecting detrusor overactivity and underlying pathophysiology. However, it faces challenges such as motion artifacts, catheter-related discomfort, and difficulty measuring continuous bladder volume. Emerging devices such as the Urodynamics Monitor and UroSound offer more patient-friendly alternatives. These tools have the potential to improve diagnostic accuracy for bladder pressure and voiding metrics but remain limited and require further validation and testing. Ultrasound-based modalities, including dynamic ultrasonography and shear wave elastography, provide real-time, noninvasive assessment of bladder structure and function. These modalities are promising but will require further development of standardized protocols. AI and machine-learning models enhance diagnostic accuracy and reduce variability in UDS interpretation; applications include detecting detrusor overactivity and distinguishing bladder outlet obstruction from detrusor underactivity, though further validation is required for clinical adoption. Advances in AUM, wearable technologies, ultrasonography, and AI demonstrate potential for transforming UDS into a more accurate, patient-centered tool. Despite significant progress, challenges such as technical complexity, standardization, and cost-effectiveness must be addressed to integrate these innovations into routine practice. Nonetheless, these technologies offer the possibility of improved diagnosis and treatment of lower urinary tract dysfunction.

CT-based AI framework leveraging multi-scale features for predicting pathological grade and Ki67 index in clear cell renal cell carcinoma: a multicenter study.

Yang H, Zhang Y, Li F, Liu W, Zeng H, Yuan H, Ye Z, Huang Z, Yuan Y, Xiang Y, Wu K, Liu H

PubMed | May 14, 2025
To explore whether a CT-based AI framework, leveraging multi-scale features, can offer a non-invasive approach to accurately predict pathological grade and Ki67 index in clear cell renal cell carcinoma (ccRCC). In this multicenter retrospective study, a total of 1073 pathologically confirmed ccRCC patients from seven cohorts were split into internal cohorts (training and validation sets) and an external test set. The AI framework comprised an image processor, a 3D kidney and tumor segmentation model based on 3D-UNet, a multi-scale feature extractor built upon unsupervised learning, and a multi-task classifier utilizing XGBoost. A quantitative model interpretation technique, SHapley Additive exPlanations (SHAP), was employed to explore the contribution of the multi-scale features. The 3D-UNet model showed excellent performance in segmenting both the kidney and tumor regions, with Dice coefficients exceeding 0.92. The proposed multi-scale features model exhibited strong predictive capability for pathological grading and Ki67 index, with AUROC values of 0.84 and 0.87, respectively, in the internal validation set, and 0.82 and 0.82, respectively, in the external test set. The SHAP results demonstrated that features from radiomics, the 3D Auto-Encoder, and dimensionality reduction all made significant contributions to both prediction tasks. The proposed AI framework, leveraging multi-scale features, accurately predicts the pathological grade and Ki67 index of ccRCC. The CT-based AI framework leveraging multi-scale features offers a promising avenue for accurately predicting the pathological grade and Ki67 index of ccRCC preoperatively, indicating a direction for non-invasive assessment. Non-invasively determining pathological grade and Ki67 index in ccRCC could guide treatment decisions. The AI framework integrates segmentation, classification, and model interpretation, enabling fully automated analysis.
The AI framework enables non-invasive preoperative detection of high-risk tumors, assisting clinical decision-making.
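Segmentation quality above is reported as Dice coefficients (exceeding 0.92 for kidney and tumor). A minimal sketch of Dice on binary masks, with a small smoothing term (an assumption, not from the paper) so empty masks do not divide by zero:

```python
def dice(mask_a, mask_b, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    mask_a, mask_b: equal-length flat sequences of 0/1 labels.
    Dice = 2 * |A intersect B| / (|A| + |B|), smoothed by eps.
    """
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return (2.0 * inter + eps) / (total + eps)
```

Identical masks score 1.0, disjoint masks score (near) 0.0, and partial overlap lands in between, e.g. `dice([1, 1, 1, 0], [1, 1, 0, 0])` is 0.8.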

Artificial Intelligence in Sincalide-Stimulated Cholescintigraphy: A Pilot Study.

Nguyen NC, Luo J, Arefan D, Vasireddi AK, Wu S

PubMed | May 13, 2025
Sincalide-stimulated cholescintigraphy (SSC) calculates the gallbladder ejection fraction (GBEF) to diagnose functional gallbladder disorder. Currently, artificial intelligence (AI)-driven workflows that integrate real-time image processing and organ function calculation remain unexplored in nuclear medicine practice. This pilot study explored an AI-based application for gallbladder radioactivity tracking. We retrospectively analyzed 20 SSC exams, categorized into 10 easy and 10 challenging cases. Two human operators (H1 and H2) independently annotated the gallbladder regions of interest manually over the course of the 60-minute SSC. A U-Net-based deep learning model was developed to automatically segment gallbladder masks, and a 10-fold cross-validation was performed for both easy and challenging cases. The AI-generated masks were compared with human-annotated ones, with Dice similarity coefficients (DICE) used to assess agreement. AI achieved an average DICE of 0.746 against H1 and 0.676 against H2, performing better in easy cases (0.781) than in challenging ones (0.641). Visual inspection showed AI was prone to errors with patient motion or low-count activity. This study highlights AI's potential in real-time gallbladder tracking and GBEF calculation during SSC. AI-enabled real-time evaluation of nuclear imaging data holds promise for advancing clinical workflows by providing instantaneous organ function assessments and feedback to technologists. This AI-enabled workflow could enhance diagnostic efficiency, reduce scan duration, and improve patient comfort by alleviating symptoms associated with SSC, such as abdominal discomfort due to sincalide administration.
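The quantity the segmentation masks ultimately feed is the gallbladder ejection fraction. The standard formula is GBEF = (peak counts − post-stimulation nadir) / peak counts × 100; the sketch below applies it to a hypothetical background-corrected time-activity curve (the sample counts are made up, not study data).

```python
def gbef(counts):
    """Gallbladder ejection fraction (%) from a time-activity curve.

    counts: background-corrected gallbladder counts over the 60-min study.
    Uses the nadir occurring at or after the peak, since emptying
    follows sincalide administration.
    """
    peak = max(counts)
    trough = min(counts[counts.index(peak):])
    return 100.0 * (peak - trough) / peak

activity = [950, 1000, 980, 760, 620, 580, 600]  # hypothetical counts per frame
```

Here `gbef(activity)` yields 42.0%, i.e. the gallbladder emptied 42% of its peak activity, which is the value compared against the diagnostic threshold for functional gallbladder disorder.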

Deep learning based on ultrasound images to predict platinum resistance in patients with epithelial ovarian cancer.

Su C, Miao K, Zhang L, Dong X

PubMed | May 13, 2025
This study aimed to develop and validate a deep learning (DL) model based on ultrasound imaging for predicting platinum resistance in patients with epithelial ovarian cancer (EOC). This retrospective study enrolled 392 patients diagnosed with EOC between 2014 and 2020 who underwent pelvic ultrasound before initial treatment. A DL model was developed to predict platinum resistance and was evaluated using receiver-operating characteristic (ROC) curves, decision curve analysis (DCA), and calibration curves. The ROC curves showed that the area under the curve (AUC) of the DL model for predicting platinum resistance was 0.86 (95% CI 0.83-0.90) in the internal test set and 0.86 (95% CI 0.84-0.89) in the external test set. The model demonstrated high clinical value on decision curve analysis and exhibited good calibration in the training cohort. Kaplan-Meier analyses showed that the model's optimal cutoff value successfully distinguished between patients at high and low risk of recurrence, with hazard ratios for the high-risk group of 3.1 (95% CI 2.3-4.1, P < 0.0001) in the internal test set and 2.9 (95% CI 2.3-3.9, P < 0.0001) in the external test set, serving as a prognostic indicator. The DL model based on ultrasound imaging can predict platinum resistance in patients with EOC and may support clinicians in making the most appropriate treatment decisions.
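The Kaplan-Meier analysis above estimates survival with the product-limit method: at each event time, the running survival probability is multiplied by (1 − deaths / number at risk), with censored patients leaving the risk set without an event. A minimal sketch on illustrative (not study) data:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times: follow-up times; events: 1 = event (e.g. recurrence), 0 = censored.
    Returns a list of (event_time, survival) steps.
    """
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1.0 - deaths / at_risk       # step down at each event time
            curve.append((t, surv))
        at_risk -= leaving                        # censored and events both leave
        i += leaving
    return curve
```

For `kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])` the curve steps through 0.75, 0.5, and 0.0, with the censored patient at t=3 shrinking the risk set but adding no step.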

Improving AI models for rare thyroid cancer subtype by text guided diffusion models.

Dai F, Yao S, Wang M, Zhu Y, Qiu X, Sun P, Qiu C, Yin J, Shen G, Sun J, Wang M, Wang Y, Yang Z, Sang J, Wang X, Sun F, Cai W, Zhang X, Lu H

PubMed | May 13, 2025
Artificial intelligence applications in oncology imaging often struggle with diagnosing rare tumors. We identify significant gaps in detecting uncommon thyroid cancer types with ultrasound, where scarce data leads to frequent misdiagnosis. Traditional augmentation strategies do not capture the unique disease variations, hindering model training and performance. To overcome this, we propose a text-driven generative method that fuses clinical insights with image generation, producing synthetic samples that realistically reflect rare subtypes. In rigorous evaluations, our approach achieves substantial gains in diagnostic metrics, surpasses existing methods in authenticity and diversity measures, and generalizes effectively to other private and public datasets with various rare cancers. In this work, we demonstrate that text-guided image augmentation substantially enhances model accuracy and robustness for rare tumor detection, offering a promising avenue for more reliable and widespread clinical adoption.
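Diffusion-based augmentation of the kind described rests on the standard forward-noising relation x_t = sqrt(ᾱ_t)·x₀ + sqrt(1 − ᾱ_t)·ε. The sketch below shows only that closed-form noising step; the paper's text conditioning and reverse (generation) process are not reproduced here, and the schedule value is illustrative.

```python
import math

def diffuse(x0, abar_t, eps):
    """Closed-form forward diffusion step.

    x0: clean signal values; abar_t: cumulative noise-schedule product
    (in (0, 1]); eps: standard-normal noise of the same length.
    Returns x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
    """
    a, b = math.sqrt(abar_t), math.sqrt(1.0 - abar_t)
    return [a * x + b * e for x, e in zip(x0, eps)]
```

Training a conditional denoiser to invert this step, guided by clinical text, is what lets such methods synthesize realistic rare-subtype samples.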