Page 112 of 152 (1,519 results)

Impact of contrast enhancement phase on CT-based radiomics analysis for predicting post-surgical recurrence in renal cell carcinoma.

Khene ZE, Bhanvadia R, Tachibana I, Sharma P, Trevino I, Graber W, Bertail T, Fleury R, Acosta O, De Crevoisier R, Bensalah K, Lotan Y, Margulis V

PubMed · Jun 1 2025
To investigate the effect of CT enhancement phase on radiomics features for predicting post-surgical recurrence of clear cell renal cell carcinoma (ccRCC). This retrospective study included 144 patients who underwent radical or partial nephrectomy for ccRCC. Preoperative multiphase abdominal CT scans (non-contrast, corticomedullary, and nephrographic phases) were obtained for each patient. Automated segmentation of renal masses was performed using the nnU-Net framework. Radiomics signatures (RS) were developed for each phase using ensembles of machine learning-based models (Random Survival Forests [RSF], Survival Support Vector Machines [S-SVM], and Extreme Gradient Boosting [XGBoost]) with and without feature selection. Feature selection was performed using Affinity Propagation Clustering. The primary endpoint was disease-free survival, assessed by the concordance index (C-index). Radical and partial nephrectomies were performed in 81% and 19% of patients, respectively, and 81% of tumors were classified as high grade. Disease recurrence occurred in 74 patients (51%). A total of 1,316 radiomics features were extracted per phase per patient. Without feature selection, C-index values for the RSF, S-SVM, XGBoost, and Penalized Cox models ranged from 0.43 to 0.61 across phases. With Affinity Propagation feature selection, C-index values improved to 0.51-0.74, with the corticomedullary phase achieving the highest performance (C-index up to 0.74). These results indicate that radiomics analysis of corticomedullary-phase contrast-enhanced CT images may provide valuable predictive insight into recurrence risk for non-metastatic ccRCC following surgical resection. However, the lack of external validation is a limitation, and further studies are needed to confirm these findings in independent cohorts.
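The study's primary endpoint is the concordance index. As a minimal sketch (not the study's code), here is Harrell's C-index for right-censored survival data; this simplified version skips tied event times, and all names are illustrative:

```python
import itertools

def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    observed time experienced the event; the pair is concordant when
    that subject also received the higher predicted risk score.
    Simplification: pairs with tied observed times are skipped."""
    concordant = 0.0
    comparable = 0
    for i, j in itertools.combinations(range(len(times)), 2):
        # order the pair so that subject i has the shorter observed time
        if times[j] < times[i]:
            i, j = j, i
        if times[i] == times[j] or not events[i]:
            continue  # tie, or earlier subject censored: not comparable
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5  # tied risk counts as half-concordant
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the reported jump from 0.43-0.61 to 0.51-0.74 after feature selection is meaningful.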

Ultra-fast biparametric MRI in prostate cancer assessment: Diagnostic performance and image quality compared to conventional multiparametric MRI.

Pausch AM, Filleböck V, Elsner C, Rupp NJ, Eberli D, Hötker AM

PubMed · Jun 1 2025
To compare the diagnostic performance and image quality of a deep-learning-assisted ultra-fast biparametric MRI (bpMRI) protocol with conventional multiparametric MRI (mpMRI) for the diagnosis of clinically significant prostate cancer (csPCa). This prospective single-center study enrolled 123 biopsy-naïve patients undergoing conventional mpMRI and, additionally, ultra-fast bpMRI at 3 T between 06/2023 and 02/2024. Two radiologists (R1: 4 years and R2: 3 years of experience) independently assigned PI-RADS scores (PI-RADS v2.1) and assessed image quality (mPI-QUAL score) in two blinded study readouts. Weighted Cohen's kappa (κ) was calculated to evaluate inter-reader agreement. Diagnostic performance was analyzed using clinical data and histopathological results from clinically indicated biopsies. Inter-reader agreement was good for both mpMRI (κ = 0.83) and ultra-fast bpMRI (κ = 0.87). Both readers demonstrated high sensitivity (≥94 %/≥91 %, R1/R2) and NPV (≥96 %/≥95 %) for csPCa detection using both protocols. The more experienced reader showed notably higher specificity (≥77 %/≥53 %), PPV (≥62 %/≥45 %), and diagnostic accuracy (≥82 %/≥65 %) than the less experienced reader. There was no significant difference between the two protocols in correctly identifying csPCa (p > 0.05). The ultra-fast bpMRI protocol had significantly better image quality ratings (p < 0.001) and reduced scan time by 80 % compared to conventional mpMRI. Deep-learning-assisted ultra-fast bpMRI protocols offer a promising alternative to conventional mpMRI for diagnosing csPCa in biopsy-naïve patients, with comparable inter-reader agreement and diagnostic performance at superior image quality. However, reader experience remains essential for diagnostic performance.
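The sensitivity, specificity, PPV, NPV, and accuracy figures quoted above all derive from a 2x2 confusion table. A small sketch of those definitions (the counts in the usage note are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-performance metrics from a 2x2 table of
    true positives, false positives, true negatives, false negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # detected fraction of disease
        "specificity": tn / (tn + fp),   # correctly cleared fraction
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, `diagnostic_metrics(90, 10, 80, 20)` gives a PPV of 0.9 and an accuracy of 0.85, illustrating how a reader with fewer false positives (as the more experienced reader here) lifts specificity, PPV, and accuracy while sensitivity stays unchanged.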

An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Jiang C, Jiang Z, Zhang Z, Huang H, Zhou H, Jiang Q, Teng Y, Li H, Xu B, Li X, Xu J, Ding C, Li K, Tian R

PubMed · Jun 1 2025
Pathological grade is a critical determinant of clinical outcomes and decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade. The study retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts. A multimodal fusion Transformer model was developed integrating 3D PET tumor images with tabular data to predict FL grade. Additionally, the model was equipped with explainable modules, including Gradient-weighted Class Activation Mapping (Grad-CAM) for PET images, SHapley Additive exPlanations (SHAP) analysis for tabular data, and the calculation of predictive contribution ratios for both modalities, to enhance clinical interpretability and reliability. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and accuracy, and prognostic value was also assessed. The Transformer model demonstrated high accuracy in grading FL, with AUCs of 0.964-0.985 and accuracies of 90.2-96.7% in the training cohort, and similar performance in the validation cohorts (AUCs: 0.936-0.971, accuracies: 86.4-97.0%). Ablation studies confirmed that the fusion model outperformed single-modality models (AUC 0.974 vs. 0.956; accuracy 89.8% vs. 85.8%). Interpretability analysis revealed that PET images contributed 81-89% of the predictive value, and Grad-CAM highlighted the tumor and peri-tumor regions. The model also effectively stratified patients by survival risk (P < 0.05), highlighting its prognostic value. Our study developed an explainable multimodal fusion Transformer model for accurate grading and prognosis of FL, with the potential to aid clinical decision-making.
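The abstract reports per-modality "predictive contribution ratios" (PET images contributing 81-89%). The paper's transformer-based calculation is not described in detail; as a toy stand-in under that caveat, a contribution ratio for a simple linear fusion head can be computed as each modality's share of the absolute logit mass (all names and weights below are hypothetical):

```python
import numpy as np

def contribution_ratios(img_features, tab_features, w_img, w_tab):
    """Share of the fused logit carried by each modality, for a toy
    linear fusion head: logit = img . w_img + tab . w_tab.

    This is a schematic illustration of a per-modality contribution
    ratio, not the study's transformer-based formulation."""
    img_part = abs(float(np.dot(img_features, w_img)))
    tab_part = abs(float(np.dot(tab_features, w_tab)))
    total = img_part + tab_part
    return {"pet": img_part / total, "tabular": tab_part / total}
```

With made-up inputs, `contribution_ratios([1, 1], [1], [2, 2], [1])` attributes 80% of the logit to the imaging branch, mirroring the kind of dominance the study observed for PET.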

Automated contouring for breast cancer radiotherapy in the isocentric lateral decubitus position: a neural network-based solution for enhanced precision and efficiency.

Loap P, Monteil R, Kirova Y, Vu-Bezin J

PubMed · Jun 1 2025
Adjuvant radiotherapy is essential for reducing local recurrence and improving survival in breast cancer patients, but it carries a risk of ischemic cardiac toxicity, which increases with heart exposure. The isocentric lateral decubitus position, where the breast rests flat on a support, reduces heart exposure and leads to delivery of a more uniform dose. This position is particularly beneficial for patients with unique anatomies, such as those with pectus excavatum or larger breast sizes. While artificial intelligence (AI) algorithms for autocontouring have shown promise, they have not been tailored to this specific position. This study aimed to develop and evaluate a neural network-based autocontouring algorithm for patients treated in the isocentric lateral decubitus position. In this single-center study, 1189 breast cancer patients treated after breast-conserving surgery were included. Their simulation CT scans (1209 scans) were used to train and validate a neural network-based autocontouring algorithm (nnU-Net). Of these, 1087 scans were used for training, and 122 scans were reserved for validation. The algorithm's performance was assessed using the Dice similarity coefficient (DSC) to compare the automatically delineated volumes with manual contours. A clinical evaluation of the algorithm was performed on 30 additional patients, with contours rated by two expert radiation oncologists. The neural network-based algorithm achieved a segmentation time of approximately 4 min, compared to 20 min for manual segmentation. The DSC values for the validation cohort were 0.88 for the treated breast, 0.90 for the heart, 0.98 for the right lung, and 0.97 for the left lung. In the clinical evaluation, 90% of the automatically contoured breast volumes were rated as acceptable without corrections, while the remaining 10% required minor adjustments. All lung contours were accepted without corrections, and heart contours were rated as acceptable in 93.3% of cases, with minor corrections needed in 6.6% of cases. This neural network-based autocontouring algorithm offers a practical, time-saving solution for breast cancer radiotherapy planning in the isocentric lateral decubitus position. Its strong geometric performance, clinical acceptability, and significant time efficiency make it a valuable tool for modern radiotherapy practices, particularly in high-volume centers.
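The geometric metric used above, the Dice similarity coefficient, compares the overlap of two binary masks. A minimal NumPy sketch (illustrative, not the study's evaluation code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Ranges from 0 (no overlap)
    to 1 (identical masks); eps guards against empty masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)
```

Values such as the reported 0.88 for the breast and 0.97-0.98 for the lungs thus quantify how closely the automatic contour tracks the manual one, with lungs being easier because of their high CT contrast.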

A continuous-action deep reinforcement learning-based agent for coronary artery centerline extraction in coronary CT angiography images.

Zhang Y, Luo G, Wang W, Cao S, Dong S, Yu D, Wang X, Wang K

PubMed · Jun 1 2025
The lumen centerline of the coronary artery allows vessel reconstruction used to detect stenoses and plaques. Discrete-action-based centerline extraction methods suffer in the presence of artifacts and plaques. This study aimed to develop a continuous-action-based method that performs more effectively in cases involving artifacts or plaques. A continuous-action deep reinforcement learning-based model was trained to predict the artery's direction and radius value. The model is based on an Actor-Critic architecture. The Actor learns a deterministic policy that outputs the actions made by an agent; these actions consecutively indicate the centerline's direction and radius value. The Critic learns a value function to evaluate the quality of the agent's actions. A novel DDR reward was introduced to measure the agent's action (both centerline extraction and radius estimation) at each step. The method achieved an average OV of 95.7%, OF of 93.6%, OT of 97.3%, and AI of 0.22 mm on 80 test cases. In 53 cases with artifacts or plaques, it achieved an average OV of 95.0%, OF of 91.5%, OT of 96.7%, and AI of 0.23 mm. The 95% limits of agreement between the reference and estimated radius values were −0.46 mm and 0.43 mm across the 80 test cases. Experiments demonstrate that the Actor-Critic architecture can achieve efficient centerline extraction and radius estimation. Compared with discrete-action-based methods, our method performs more effectively in cases involving artifacts or plaques. The extracted centerlines and radius values allow accurate coronary artery reconstruction that facilitates the detection of stenoses and plaques.
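The tracing loop implied above, where a policy emits a continuous direction and radius at each step and the agent advances along the vessel, can be sketched schematically. This is not the paper's trained Actor-Critic; `policy` is a stand-in callable and all names are illustrative:

```python
import numpy as np

def trace_centerline(start, policy, step_size=0.5, n_steps=100):
    """Trace a vessel centerline by repeatedly applying a continuous
    action (direction vector + radius) predicted by `policy`.

    `policy(point)` returns (direction, radius). The direction is
    normalized to unit length so every step advances a fixed distance,
    which is what makes the action space continuous rather than a
    choice among a few discrete neighbor voxels."""
    point = np.asarray(start, dtype=float)
    centerline, radii = [point.copy()], []
    for _ in range(n_steps):
        direction, radius = policy(point)
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        point = point + step_size * direction
        centerline.append(point.copy())
        radii.append(float(radius))
    return np.array(centerline), radii
```

In the paper the policy is the learned Actor and a termination criterion ends the trace; here a fixed step count keeps the sketch self-contained.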

Combining Multifrequency Magnetic Resonance Elastography With Automatic Segmentation to Assess Renal Function in Patients With Chronic Kidney Disease.

Liang Q, Lin H, Li J, Luo P, Qi R, Chen Q, Meng F, Qin H, Qu F, Zeng Y, Wang W, Lu J, Huang B, Chen Y

PubMed · Jun 1 2025
Multifrequency MR elastography (mMRE) enables noninvasive quantification of renal stiffness in patients with chronic kidney disease (CKD). Manual segmentation of the kidneys on mMRE is time-consuming and prone to increased interobserver variability. To evaluate the performance of mMRE combined with automatic segmentation in assessing CKD severity. Prospective. A total of 179 participants consisting of 95 healthy volunteers and 84 participants with CKD. 3 T, single-shot spin-echo echo-planar imaging sequence. Participants were randomly assigned into training (n = 58), validation (n = 15), and test (n = 106) sets. The test set included 47 healthy volunteers and 58 CKD participants at different stages (21 stage 1-2, 22 stage 3, and 16 stage 4-5) based on estimated glomerular filtration rate (eGFR). Shear wave speed (SWS) values from mMRE were measured using automatic segmentation constructed through the nnU-Net deep-learning network. Standard manual segmentation was created by a radiologist. In the test set, the automatically segmented renal SWS were compared between healthy volunteers and CKD subgroups, with age as a covariate. The association between SWS and eGFR was investigated in participants with CKD. Dice similarity coefficient (DSC), analysis of covariance, Pearson and Spearman correlation analyses. P < 0.05 was considered statistically significant. Mean DSCs between standard manual and automatic segmentation were 0.943, 0.901, and 0.970 for the renal cortex, medulla, and parenchyma, respectively. The automatically quantified cortical, medullary, and parenchymal SWS were significantly correlated with eGFR (r = 0.620, 0.605, and 0.640, respectively). Participants with CKD stage 1-2 exhibited significantly lower cortical SWS values compared to healthy volunteers (2.44 ± 0.16 m/second vs. 2.56 ± 0.17 m/second), after adjusting for age. mMRE combined with automatic segmentation revealed abnormal renal stiffness in patients with CKD, even with mild renal impairment. The renal stiffness of patients with chronic kidney disease varies according to the function and structure of the kidney. This study integrates multifrequency magnetic resonance elastography with an automated segmentation technique to assess renal stiffness in patients with chronic kidney disease. The findings indicate that this method is capable of distinguishing patients with chronic kidney disease, including those with mild renal impairment, while simultaneously reducing the subjectivity and time required for radiologists to analyze images. This research enhances the efficiency of image processing for radiologists and assists nephrologists in detecting early-stage damage in patients with chronic kidney disease. Level of Evidence: 2. Technical Efficacy: Stage 2.
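The reported r values linking SWS to eGFR are Pearson correlation coefficients. As a minimal NumPy sketch of that statistic (illustrative, not the study's analysis code):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement vectors,
    e.g. per-patient shear wave speed against eGFR. Returns a value in
    [-1, 1]; positive values mean stiffness rises with eGFR."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
```

Correlations around 0.6, as reported here for cortex, medulla, and parenchyma, indicate a moderately strong monotone relationship between automatically measured stiffness and renal function.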

An Artificial Intelligence Model Using Diffusion Basis Spectrum Imaging Metrics Accurately Predicts Clinically Significant Prostate Cancer.

Kim EH, Jing H, Utt KL, Vetter JM, Weimholt RC, Bullock AD, Klim AP, Bergeron KA, Frankel JK, Smith ZL, Andriole GL, Song SK, Ippolito JE

PubMed · Jun 1 2025
Conventional prostate magnetic resonance imaging has limited accuracy for clinically significant prostate cancer (csPCa). We performed diffusion basis spectrum imaging (DBSI) before biopsy and applied artificial intelligence models to these DBSI metrics to predict csPCa. Between February 2020 and March 2024, 241 patients underwent prostate MRI that included conventional and DBSI-specific sequences before prostate biopsy. We used artificial intelligence models with DBSI metrics as input classifiers and the biopsy pathology as the ground truth. The DBSI-based model was compared with available biomarkers (PSA, PSA density [PSAD], and Prostate Imaging Reporting and Data System [PI-RADS]) for risk discrimination of csPCa defined as Gleason score ≥7. The DBSI-based model was an independent predictor of csPCa (odds ratio [OR] 2.04, 95% CI 1.52-2.73, P < .01), as were PSAD (OR 2.02, 95% CI 1.21-3.35, P = .01) and PI-RADS classification (OR 4.00, 95% CI 1.37-11.6 for PI-RADS 3, P = .01; OR 9.67, 95% CI 2.89-32.7 for PI-RADS 4-5, P < .01), adjusting for age, family history, and race. Within our dataset, the DBSI-based model alone performed similarly to PSAD + PI-RADS (AUC 0.863 vs 0.859, P = .89), while the combination of the DBSI-based model + PI-RADS had the highest risk discrimination for csPCa (AUC 0.894, P < .01). A clinical strategy using the DBSI-based model for patients with PI-RADS 1-3 could have reduced biopsies by 27% while missing 2% of csPCa (compared with biopsy for all). Our DBSI-based artificial intelligence model accurately predicted csPCa on biopsy and can be combined with PI-RADS to potentially reduce unnecessary prostate biopsies.
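The "biopsies reduced vs. csPCa missed" trade-off quoted at the end can be evaluated with a simple triage rule: biopsy every PI-RADS 4-5 patient, and biopsy PI-RADS 1-3 patients only when the model score clears a threshold. A sketch under that assumption (the record layout and numbers are hypothetical, not the study's data):

```python
def biopsy_strategy(records, threshold):
    """Evaluate a triage rule against biopsying everyone.

    Each record is (pirads, model_score, has_cspca). PI-RADS 4-5
    patients are always biopsied; PI-RADS 1-3 patients are biopsied
    only when model_score >= threshold. Returns (fraction of biopsies
    avoided, fraction of csPCa cases missed)."""
    total = len(records)
    cspca_total = sum(1 for _, _, c in records if c)
    avoided = missed = 0
    for pirads, score, has_cspca in records:
        if pirads <= 3 and score < threshold:
            avoided += 1           # this biopsy would be skipped
            if has_cspca:
                missed += 1        # and this cancer would be missed
    return avoided / total, missed / cspca_total
```

Sweeping `threshold` traces out the trade-off curve; the study's operating point corresponds to avoiding 27% of biopsies at the cost of missing 2% of csPCa.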

Generative artificial intelligence enables the generation of bone scintigraphy images and improves generalization of deep learning models in data-constrained environments.

Haberl D, Ning J, Kluge K, Kumpf K, Yu J, Jiang Z, Constantino C, Monaci A, Starace M, Haug AR, Calabretta R, Camoni L, Bertagna F, Mascherbauer K, Hofer F, Albano D, Sciagra R, Oliveira F, Costa D, Nitsche C, Hacker M, Spielvogel CP

PubMed · Jun 1 2025
Advancements of deep learning in medical imaging are often constrained by the limited availability of large, annotated datasets, resulting in underperforming models when deployed under real-world conditions. This study investigated a generative artificial intelligence (AI) approach to create synthetic medical images, taking the example of bone scintigraphy scans, to increase the data diversity of small-scale datasets for more effective model training and improved generalization. We trained a generative model on 99mTc-bone scintigraphy scans from 9,170 patients in one center to generate high-quality and fully anonymized annotated scans of patients representing two distinct disease patterns: (i) abnormal uptake indicative of bone metastases and (ii) cardiac uptake indicative of cardiac amyloidosis. A blinded reader study was performed to assess the clinical validity and quality of the generated data. We investigated the added value of the generated data by augmenting an independent small single-center dataset with synthetic data and by training a deep learning model to detect abnormal uptake in a downstream classification task. We tested this model on 7,472 scans from 6,448 patients across four external sites in a cross-tracer and cross-scanner setting and associated the resulting model predictions with clinical outcomes. The clinical value and high quality of the synthetic imaging data were confirmed by four readers, who were unable to distinguish synthetic scans from real scans (average accuracy: 0.48 [95% CI 0.46-0.51]), disagreeing in 239 (60%) of 400 cases (Fleiss' kappa: 0.18). Adding synthetic data to the training set improved model performance by a mean (± SD) of 33 ± 10% in AUC (p < 0.0001) for detecting abnormal uptake indicative of bone metastases and by 5 ± 4% in AUC (p < 0.0001) for detecting uptake indicative of cardiac amyloidosis across both internal and external testing cohorts, compared to models without synthetic training data. Patients with predicted abnormal uptake had adverse clinical outcomes (log-rank: p < 0.0001). Generative AI enables the targeted generation of bone scintigraphy images representing different clinical conditions. Our findings point to the potential of synthetic data to overcome challenges in data sharing and in developing reliable and prognostic deep learning models in data-limited environments.
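The performance gains above are reported as AUC improvements. As a minimal sketch of the metric itself (not the study's evaluation pipeline), AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one, ties counting half:

```python
def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a random positive receives
    a higher score than a random negative (ties count 0.5). The O(n^2)
    pairwise form; fine for small sketches."""
    pos = [s for label, s in zip(labels, scores) if label]
    neg = [s for label, s in zip(labels, scores) if not label]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

The readers' 0.48 accuracy at spotting synthetic scans is chance-level by the same logic: a discriminator no better than a coin flip means the synthetic images are statistically indistinguishable from real ones.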

Semiautomated Extraction of Research Topics and Trends From National Cancer Institute Funding in Radiological Sciences From 2000 to 2020.

Nguyen MH, Beidler PG, Tsai J, Anderson A, Chen D, Kinahan PE, Kang J

PubMed · Jun 1 2025
Investigators and funding organizations desire knowledge on topics and trends in publicly funded research but current efforts for manual categorization have been limited in breadth and depth of understanding. We present a semiautomated analysis of 21 years of R-type National Cancer Institute (NCI) grants to departments of radiation oncology and radiology using natural language processing. We selected all noneducation R-type NCI grants from 2000 to 2020 awarded to departments of radiation oncology/radiology with affiliated schools of medicine. We used pretrained word embedding vectors to represent each grant abstract. A sequential clustering algorithm assigned each grant to 1 of 60 clusters representing research topics; we repeated the same workflow for 15 clusters for comparison. Each cluster was then manually named using the top words and closest documents to each cluster centroid. The interpretability of document embeddings was evaluated by projecting them onto 2 dimensions. Changes in clusters over time were used to examine temporal funding trends. We included 5874 grants totaling 1.9 billion dollars of NCI funding over 21 years. The human-model agreement was similar to the human interrater agreement. Two-dimensional projections of grant clusters showed 2 dominant axes: physics-biology and therapeutic-diagnostic. Therapeutic and physics clusters have grown faster over time than diagnostic and biology clusters. The 3 topics with largest funding increase were imaging biomarkers, informatics, and radiopharmaceuticals, which all had a mean annual growth of >$218,000. The 3 topics with largest funding decrease were cellular stress response, advanced imaging hardware technology, and improving performance of breast cancer computer-aided detection, which all had a mean decrease of >$110,000. We developed a semiautomated natural language processing approach to analyze research topics and funding trends. We applied this approach to NCI funding in the radiological sciences to extract both domains of research being funded and temporal trends.
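The core operation in the workflow above, assigning each grant's embedding to a topic cluster, can be sketched as nearest-centroid assignment by cosine similarity. The abstract names a sequential clustering algorithm without specifying it, so this shows only the generic assignment step with illustrative names:

```python
import numpy as np

def assign_clusters(embeddings, centroids):
    """Assign each document embedding to the nearest cluster centroid
    by cosine similarity. Rows of `embeddings` are documents; rows of
    `centroids` are topic centers. Returns one cluster index per doc."""
    E = np.asarray(embeddings, dtype=float)
    C = np.asarray(centroids, dtype=float)
    # normalize rows so the dot product equals cosine similarity
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    sims = E @ C.T          # shape: (n_docs, n_clusters)
    return sims.argmax(axis=1)
```

Naming clusters from the "top words and closest documents to each centroid", as the study did, then reduces to sorting documents by that same similarity score.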

Adaptive ensemble loss and multi-scale attention in breast ultrasound segmentation with UMA-Net.

Dar MF, Ganivada A

PubMed · Jun 1 2025
The generalization of deep learning (DL) models is critical for accurate lesion segmentation in breast ultrasound (BUS) images. Traditional DL models often struggle to generalize well due to the high frequency and scale variations inherent in BUS images. Moreover, conventional loss functions used in these models frequently result in imbalanced optimization, either prioritizing region overlap or boundary accuracy, which leads to suboptimal segmentation performance. To address these issues, we propose UMA-Net, an enhanced UNet architecture specifically designed for BUS image segmentation. UMA-Net integrates residual connections, attention mechanisms, and a bottleneck with atrous convolutions to effectively capture multi-scale contextual information without compromising spatial resolution. Additionally, we introduce an adaptive ensemble loss function that dynamically balances the contributions of different loss components during training, ensuring optimization across key segmentation metrics. This novel approach mitigates the imbalances found in conventional loss functions. We validate UMA-Net on five diverse BUS datasets (BUET, BUSI, Mendeley, OMI, and UDIAT), demonstrating superior performance. Our findings highlight the importance of addressing frequency and scale variations, confirming UMA-Net as a robust and generalizable solution for BUS image segmentation.
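One way to realize an adaptive ensemble loss of the kind described, balancing a region term (Dice) against a pixelwise term (binary cross-entropy), is to weight each term by its running average so the currently larger objective receives more weight. This is a generic adaptive-weighting sketch under that assumption, not UMA-Net's exact formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - Dice overlap of probabilities vs. labels."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy, with clipping for numerical safety."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def adaptive_ensemble_loss(pred, target, running):
    """Blend Dice (region) and BCE (pixel) terms with weights
    proportional to each term's exponential running average, so the
    objective that is currently larger dominates the gradient.
    `running` is a dict carrying state across training steps."""
    d, b = dice_loss(pred, target), bce_loss(pred, target)
    running["dice"] = 0.9 * running["dice"] + 0.1 * d
    running["bce"] = 0.9 * running["bce"] + 0.1 * b
    w_d = running["dice"] / (running["dice"] + running["bce"])
    return w_d * d + (1.0 - w_d) * b
```

Because the weights are renormalized every step, neither region overlap nor boundary/pixel accuracy can silently dominate training, which is the imbalance the abstract attributes to fixed-weight loss combinations.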