
An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Jiang C, Jiang Z, Zhang Z, Huang H, Zhou H, Jiang Q, Teng Y, Li H, Xu B, Li X, Xu J, Ding C, Li K, Tian R

PubMed · Jun 1 2025
Pathological grade is a critical determinant of clinical outcomes and decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade. We retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts. A multimodal fusion Transformer model was developed integrating 3D PET tumor images with tabular data to predict FL grade. Additionally, the model is equipped with explainable modules, including Gradient-weighted Class Activation Mapping (Grad-CAM) for PET images, SHapley Additive exPlanations (SHAP) analysis for tabular data, and the calculation of predictive contribution ratios for both modalities, to enhance clinical interpretability and reliability. The predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and accuracy, and its prognostic value was also assessed. The Transformer model demonstrated high accuracy in grading FL, with AUCs of 0.964-0.985 and accuracies of 90.2-96.7% in the training cohort, and similar performance in the validation cohorts (AUCs: 0.936-0.971, accuracies: 86.4-97.0%). Ablation studies confirmed that the fusion model outperformed single-modality models (AUC: 0.974 vs. 0.956; accuracy: 89.8% vs. 85.8%). Interpretability analysis revealed that PET images contributed 81-89% of the predictive value. Grad-CAM highlighted the tumor and peri-tumor regions. The model also effectively stratified patients by survival risk (P < 0.05), highlighting its prognostic value. Our study developed an explainable multimodal fusion Transformer model for accurate grading and prognosis of FL, with the potential to aid clinical decision-making.
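The abstract above reports per-modality predictive contribution ratios (PET vs. tabular data) without giving a formula. One common proxy, sketched here purely as an illustration and not as the authors' method, scores each modality by its share of the total gradient magnitude flowing into the fusion layer (the function name and inputs are assumptions):

```python
import numpy as np

def contribution_ratios(grad_img, grad_tab):
    """Fraction of total absolute gradient attributable to each modality.

    grad_img / grad_tab: gradients of the model output with respect to the
    image-branch and tabular-branch embeddings (hypothetical inputs).
    """
    g_img = np.abs(np.asarray(grad_img, float)).sum()
    g_tab = np.abs(np.asarray(grad_tab, float)).sum()
    total = g_img + g_tab
    return g_img / total, g_tab / total
```

With an image-branch gradient four times larger than the tabular one, this yields an 80/20 split, in the same spirit as the 81-89% PET contribution reported above.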

Automated contouring for breast cancer radiotherapy in the isocentric lateral decubitus position: a neural network-based solution for enhanced precision and efficiency.

Loap P, Monteil R, Kirova Y, Vu-Bezin J

PubMed · Jun 1 2025
Adjuvant radiotherapy is essential for reducing local recurrence and improving survival in breast cancer patients, but it carries a risk of ischemic cardiac toxicity, which increases with heart exposure. The isocentric lateral decubitus position, where the breast rests flat on a support, reduces heart exposure and leads to delivery of a more uniform dose. This position is particularly beneficial for patients with unique anatomies, such as those with pectus excavatum or larger breast sizes. While artificial intelligence (AI) algorithms for autocontouring have shown promise, they have not been tailored to this specific position. This study aimed to develop and evaluate a neural network-based autocontouring algorithm for patients treated in the isocentric lateral decubitus position. In this single-center study, 1189 breast cancer patients treated after breast-conserving surgery were included. Their simulation CT scans (1209 scans) were used to train and validate a neural network-based autocontouring algorithm (nnU-Net). Of these, 1087 scans were used for training, and 122 scans were reserved for validation. The algorithm's performance was assessed using the Dice similarity coefficient (DSC) to compare the automatically delineated volumes with manual contours. A clinical evaluation of the algorithm was performed on 30 additional patients, with contours rated by two expert radiation oncologists. The neural network-based algorithm achieved a segmentation time of approximately 4 min, compared to 20 min for manual segmentation. The DSC values for the validation cohort were 0.88 for the treated breast, 0.90 for the heart, 0.98 for the right lung, and 0.97 for the left lung. In the clinical evaluation, 90% of the automatically contoured breast volumes were rated as acceptable without corrections, while the remaining 10% required minor adjustments. 
All lung contours were accepted without corrections, and heart contours were rated as acceptable in 93.3% of cases, with minor corrections needed in 6.6% of cases. This neural network-based autocontouring algorithm offers a practical, time-saving solution for breast cancer radiotherapy planning in the isocentric lateral decubitus position. Its strong geometric performance, clinical acceptability, and significant time efficiency make it a valuable tool for modern radiotherapy practices, particularly in high-volume centers.
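The Dice similarity coefficient (DSC) used above to compare automatic and manual contours has a standard definition: twice the overlap divided by the sum of the two segmented volumes. A minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

A DSC of 1.0 means perfect agreement; the 0.88-0.98 values reported above indicate close but not identical contours.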

A continuous-action deep reinforcement learning-based agent for coronary artery centerline extraction in coronary CT angiography images.

Zhang Y, Luo G, Wang W, Cao S, Dong S, Yu D, Wang X, Wang K

PubMed · Jun 1 2025
The lumen centerline of the coronary artery allows vessel reconstruction used to detect stenoses and plaques. Discrete-action-based centerline extraction methods suffer from artifacts and plaques. This study aimed to develop a continuous-action-based method that performs more effectively in cases involving artifacts or plaques. A continuous-action deep reinforcement learning-based model was trained to predict the artery's direction and radius value. The model is based on an Actor-Critic architecture. The Actor learns a deterministic policy to output the actions made by an agent. These actions indicate the centerline's direction and radius value consecutively. The Critic learns a value function to evaluate the quality of the agent's actions. A novel DDR reward was introduced to measure the agent's action (both centerline extraction and radius estimation) at each step. The method achieved an average OV of 95.7%, OF of 93.6%, OT of 97.3%, and AI of 0.22 mm on 80 test cases. In 53 cases with artifacts or plaques, it achieved an average OV of 95.0%, OF of 91.5%, OT of 96.7%, and AI of 0.23 mm. The 95% limits of agreement between the reference and estimated radius values were -0.46 mm and 0.43 mm in the 80 test cases. Experiments demonstrate that the Actor-Critic architecture can achieve efficient centerline extraction and radius estimation. Compared with discrete-action-based methods, our method performs more effectively in cases involving artifacts or plaques. The extracted centerlines and radius values allow accurate coronary artery reconstruction that facilitates the detection of stenoses and plaques.
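The 95% limits of agreement quoted for the radius estimates follow the standard Bland-Altman formula: mean difference ± 1.96 × SD of the paired differences. A minimal sketch:

```python
import numpy as np

def limits_of_agreement(reference, estimated):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    d = np.asarray(estimated, float) - np.asarray(reference, float)
    mean_d = d.mean()
    sd_d = d.std(ddof=1)  # sample standard deviation of the differences
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
```

Roughly symmetric limits around zero, as in the -0.46/0.43 mm result above, indicate little systematic bias in the radius estimates.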

Combining Multifrequency Magnetic Resonance Elastography With Automatic Segmentation to Assess Renal Function in Patients With Chronic Kidney Disease.

Liang Q, Lin H, Li J, Luo P, Qi R, Chen Q, Meng F, Qin H, Qu F, Zeng Y, Wang W, Lu J, Huang B, Chen Y

PubMed · Jun 1 2025
Multifrequency MR elastography (mMRE) enables noninvasive quantification of renal stiffness in patients with chronic kidney disease (CKD). Manual segmentation of the kidneys on mMRE is time-consuming and prone to increased interobserver variability. To evaluate the performance of mMRE combined with automatic segmentation in assessing CKD severity. Prospective. A total of 179 participants consisting of 95 healthy volunteers and 84 participants with CKD. 3 T, single-shot spin-echo echo-planar imaging sequence. Participants were randomly assigned into training (n = 58), validation (n = 15), and test (n = 106) sets. The test set included 47 healthy volunteers and 58 CKD participants at different stages (21 stage 1-2, 22 stage 3, and 16 stage 4-5) based on estimated glomerular filtration rate (eGFR). Shear wave speed (SWS) values from mMRE were measured using automatic segmentation constructed through the nnU-Net deep-learning network. Standard manual segmentation was created by a radiologist. In the test set, the automatically segmented renal SWS were compared between healthy volunteers and CKD subgroups, with age as a covariate. The association between SWS and eGFR was investigated in participants with CKD. Dice similarity coefficient (DSC), analysis of covariance, Pearson and Spearman correlation analyses. P < 0.05 was considered statistically significant. Mean DSCs between standard manual and automatic segmentation were 0.943, 0.901, and 0.970 for the renal cortex, medulla, and parenchyma, respectively. The automatically quantified cortical, medullary, and parenchymal SWS were significantly correlated with eGFR (r = 0.620, 0.605, and 0.640, respectively). Participants with CKD stage 1-2 exhibited significantly lower cortical SWS values compared to healthy volunteers (2.44 ± 0.16 m/second vs. 2.56 ± 0.17 m/second), after adjusting for age. mMRE combined with automatic segmentation revealed abnormal renal stiffness in patients with CKD, even with mild renal impairment.
The renal stiffness of patients with chronic kidney disease varies according to the function and structure of the kidney. This study integrates multifrequency magnetic resonance elastography with an automated segmentation technique to assess renal stiffness in patients with chronic kidney disease. The findings indicate that this method is capable of distinguishing patients with chronic kidney disease, including those with mild renal impairment, from healthy individuals, while simultaneously reducing the subjectivity and time required for radiologists to analyze images. This research enhances the efficiency of image processing for radiologists and assists nephrologists in detecting early-stage damage in patients with chronic kidney disease. Evidence Level: 2. Technical Efficacy: Stage 2.
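The r values above linking SWS to eGFR are Pearson correlation coefficients: the covariance of the two variables normalized by both standard deviations. A minimal sketch of the computation:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: centered dot product over the product of norms."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
```

Values near +1 or -1 indicate a strong linear association; the r ≈ 0.6 results above reflect a moderate positive relationship between stiffness and eGFR.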

An Artificial Intelligence Model Using Diffusion Basis Spectrum Imaging Metrics Accurately Predicts Clinically Significant Prostate Cancer.

Kim EH, Jing H, Utt KL, Vetter JM, Weimholt RC, Bullock AD, Klim AP, Bergeron KA, Frankel JK, Smith ZL, Andriole GL, Song SK, Ippolito JE

PubMed · Jun 1 2025
Conventional prostate magnetic resonance imaging has limited accuracy for clinically significant prostate cancer (csPCa). We performed diffusion basis spectrum imaging (DBSI) before biopsy and applied artificial intelligence models to these DBSI metrics to predict csPCa. Between February 2020 and March 2024, 241 patients underwent prostate MRI that included conventional and DBSI-specific sequences before prostate biopsy. We used artificial intelligence models with DBSI metrics as input classifiers and the biopsy pathology as the ground truth. The DBSI-based model was compared with available biomarkers (PSA, PSA density [PSAD], and Prostate Imaging Reporting and Data System [PI-RADS]) for risk discrimination of csPCa, defined as Gleason score ≥ 7. The DBSI-based model was an independent predictor of csPCa (odds ratio [OR] 2.04, 95% CI 1.52-2.73, P < .01), as were PSAD (OR 2.02, 95% CI 1.21-3.35, P = .01) and PI-RADS classification (OR 4.00, 95% CI 1.37-11.6 for PI-RADS 3, P = .01; OR 9.67, 95% CI 2.89-32.7 for PI-RADS 4-5, P < .01), adjusting for age, family history, and race. Within our dataset, the DBSI-based model alone performed similarly to PSAD + PI-RADS (AUC 0.863 vs 0.859, P = .89), while the combination of the DBSI-based model + PI-RADS had the highest risk discrimination for csPCa (AUC 0.894, P < .01). A clinical strategy using the DBSI-based model for patients with PI-RADS 1-3 could have reduced biopsies by 27% while missing 2% of csPCa (compared with biopsy for all). Our DBSI-based artificial intelligence model accurately predicted csPCa on biopsy and can be combined with PI-RADS to potentially reduce unnecessary prostate biopsies.
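The AUCs compared above admit a nonparametric reading: the probability that a randomly chosen csPCa case receives a higher model score than a randomly chosen non-csPCa case (the Mann-Whitney statistic). A minimal sketch:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg); ties count as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.894, as for the combined DBSI + PI-RADS model above, means a csPCa case outranks a benign case about 89% of the time.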

Generative artificial intelligence enables the generation of bone scintigraphy images and improves generalization of deep learning models in data-constrained environments.

Haberl D, Ning J, Kluge K, Kumpf K, Yu J, Jiang Z, Constantino C, Monaci A, Starace M, Haug AR, Calabretta R, Camoni L, Bertagna F, Mascherbauer K, Hofer F, Albano D, Sciagra R, Oliveira F, Costa D, Nitsche C, Hacker M, Spielvogel CP

PubMed · Jun 1 2025
Advancements of deep learning in medical imaging are often constrained by the limited availability of large, annotated datasets, resulting in underperforming models when deployed under real-world conditions. This study investigated a generative artificial intelligence (AI) approach to create synthetic medical images, taking the example of bone scintigraphy scans, to increase the data diversity of small-scale datasets for more effective model training and improved generalization. We trained a generative model on 99mTc-bone scintigraphy scans from 9,170 patients in one center to generate high-quality and fully anonymized annotated scans of patients representing two distinct disease patterns: (i) abnormal uptake indicative of bone metastases and (ii) cardiac uptake indicative of cardiac amyloidosis. A blinded reader study was performed to assess the clinical validity and quality of the generated data. We investigated the added value of the generated data by augmenting an independent small single-center dataset with synthetic data and by training a deep learning model to detect abnormal uptake in a downstream classification task. We tested this model on 7,472 scans from 6,448 patients across four external sites in a cross-tracer and cross-scanner setting and associated the resulting model predictions with clinical outcomes. The clinical value and high quality of the synthetic imaging data were confirmed by four readers, who were unable to distinguish synthetic scans from real scans (average accuracy: 0.48 [95% CI 0.46-0.51]), disagreeing in 239 (60%) of 400 cases (Fleiss' kappa: 0.18). Adding synthetic data to the training set improved model performance by a mean (± SD) of 33 (± 10)% AUC (p < 0.0001) for detecting abnormal uptake indicative of bone metastases and by 5 (± 4)% AUC (p < 0.0001) for detecting uptake indicative of cardiac amyloidosis across both internal and external testing cohorts, compared to models without synthetic training data.
Patients with predicted abnormal uptake had adverse clinical outcomes (log-rank: p < 0.0001). Generative AI enables the targeted generation of bone scintigraphy images representing different clinical conditions. Our findings point to the potential of synthetic data to overcome challenges in data sharing and in developing reliable and prognostic deep learning models in data-limited environments.
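Fleiss' kappa, used above to quantify agreement among the four readers, generalizes Cohen's kappa to multiple raters by comparing observed pairwise agreement against what chance would produce from the category prevalences. A minimal sketch:

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) array of rating counts per item,
    with the same number of raters for every item."""
    counts = np.asarray(counts, float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    p_cat = counts.sum(axis=0) / (n_items * n_raters)            # category prevalence
    p_item = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_exp = p_item.mean(), (p_cat ** 2).sum()             # observed vs. chance
    return (p_bar - p_exp) / (1 - p_exp)
```

A kappa of 0.18, as reported above, indicates only slight agreement beyond chance, supporting the claim that readers could not reliably tell synthetic scans from real ones.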

Semiautomated Extraction of Research Topics and Trends From National Cancer Institute Funding in Radiological Sciences From 2000 to 2020.

Nguyen MH, Beidler PG, Tsai J, Anderson A, Chen D, Kinahan PE, Kang J

PubMed · Jun 1 2025
Investigators and funding organizations desire knowledge of the topics and trends in publicly funded research, but manual categorization efforts to date have been limited in breadth and depth. We present a semiautomated analysis of 21 years of R-type National Cancer Institute (NCI) grants to departments of radiation oncology and radiology using natural language processing. We selected all noneducation R-type NCI grants from 2000 to 2020 awarded to departments of radiation oncology/radiology with affiliated schools of medicine. We used pretrained word embedding vectors to represent each grant abstract. A sequential clustering algorithm assigned each grant to 1 of 60 clusters representing research topics; we repeated the same workflow with 15 clusters for comparison. Each cluster was then manually named using the top words and the documents closest to the cluster centroid. The interpretability of the document embeddings was evaluated by projecting them onto 2 dimensions. Changes in clusters over time were used to examine temporal funding trends. We included 5874 grants totaling 1.9 billion dollars of NCI funding over 21 years. The human-model agreement was similar to the human interrater agreement. Two-dimensional projections of grant clusters showed 2 dominant axes: physics-biology and therapeutic-diagnostic. Therapeutic and physics clusters have grown faster over time than diagnostic and biology clusters. The 3 topics with the largest funding increase were imaging biomarkers, informatics, and radiopharmaceuticals, each with a mean annual growth of >$218,000. The 3 topics with the largest funding decrease were cellular stress response, advanced imaging hardware technology, and improving performance of breast cancer computer-aided detection, each with a mean annual decrease of >$110,000. We developed a semiautomated natural language processing approach to analyze research topics and funding trends.
We applied this approach to NCI funding in the radiological sciences to extract both domains of research being funded and temporal trends.
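The clustering step above maps each grant's embedding vector to one of k topic clusters. The paper's sequential algorithm is not detailed here, so the sketch below uses a generic Lloyd-style k-means over embedding rows purely to convey the idea:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Cluster rows of X (n_docs, dim) into k groups by nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each embedding to its nearest centroid (squared Euclidean)
        labels = ((X[:, None, :] - centroids) ** 2).sum(axis=-1).argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids
```

The resulting centroids play the role described above: the words and documents nearest each centroid are what a human inspects to name the topic.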

Adaptive ensemble loss and multi-scale attention in breast ultrasound segmentation with UMA-Net.

Dar MF, Ganivada A

PubMed · Jun 1 2025
The generalization of deep learning (DL) models is critical for accurate lesion segmentation in breast ultrasound (BUS) images. Traditional DL models often struggle to generalize well due to the high frequency and scale variations inherent in BUS images. Moreover, conventional loss functions used in these models frequently result in imbalanced optimization, either prioritizing region overlap or boundary accuracy, which leads to suboptimal segmentation performance. To address these issues, we propose UMA-Net, an enhanced UNet architecture specifically designed for BUS image segmentation. UMA-Net integrates residual connections, attention mechanisms, and a bottleneck with atrous convolutions to effectively capture multi-scale contextual information without compromising spatial resolution. Additionally, we introduce an adaptive ensemble loss function that dynamically balances the contributions of different loss components during training, ensuring optimization across key segmentation metrics. This novel approach mitigates the imbalances found in conventional loss functions. We validate UMA-Net on five diverse BUS datasets (BUET, BUSI, Mendeley, OMI, and UDIAT), demonstrating superior performance. Our findings highlight the importance of addressing frequency and scale variations, confirming UMA-Net as a robust and generalizable solution for BUS image segmentation.
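The adaptive ensemble loss above balances region-overlap and pixelwise terms dynamically. UMA-Net's exact scheme is not given in the abstract, so the sketch below illustrates one plausible mechanism, weighting each component inversely to its running (EMA) magnitude so that no single term dominates; all names and the weighting rule are assumptions, not the published formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # soft Dice: penalizes poor region overlap
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    # binary cross-entropy: penalizes per-pixel errors
    p = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def adaptive_ensemble_loss(pred, target, ema, beta=0.9):
    """Combine losses with weights inversely proportional to their EMA size."""
    losses = np.array([dice_loss(pred, target), bce_loss(pred, target)])
    ema = beta * ema + (1 - beta) * losses     # track recent magnitude per term
    weights = 1.0 / (ema + 1e-8)
    weights = weights / weights.sum()          # normalize weights to sum to 1
    return float((weights * losses).sum()), ema
```

The inverse-EMA weighting boosts whichever term has recently been small, which is one way to avoid the overlap-vs-boundary imbalance the abstract describes.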

PEDRA-EFB0: colorectal cancer prognostication using deep learning with patch embeddings and dual residual attention.

Zhao Z, Wang H, Wu D, Zhu Q, Tan X, Hu S, Ge Y

PubMed · Jun 1 2025
In computer-aided diagnosis systems, precise feature extraction from CT scans of colorectal cancer using deep learning is essential for effective prognosis. However, existing convolutional neural networks struggle to capture long-range dependencies and contextual information, resulting in incomplete CT feature extraction. To address this, the PEDRA-EFB0 architecture integrates patch embeddings and a dual residual attention mechanism for enhanced feature extraction and survival prediction in colorectal cancer CT scans. A patch embedding method processes CT scans into patches, creating positional features for global representation and guiding spatial attention computation. Additionally, a dual residual attention mechanism during the upsampling stage selectively combines local and global features, enhancing CT data utilization. Furthermore, this paper proposes a feature selection algorithm that combines autoencoders and entropy technology, encoding and compressing high-dimensional data to reduce redundant information and using entropy to assess the importance of features, thereby achieving precise feature selection. Experimental results indicate that the PEDRA-EFB0 model outperforms traditional methods on colorectal cancer CT metrics, notably in C-index, BS, MCC, and AUC, enhancing survival prediction accuracy. Our code is freely available at https://github.com/smile0208z/PEDRA.
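The entropy half of the feature-selection step above can be illustrated by scoring each (e.g., autoencoder-compressed) feature by the Shannon entropy of its empirical value distribution, then keeping high-entropy, information-rich features. This is a hypothetical sketch, not the paper's exact algorithm:

```python
import numpy as np

def entropy_scores(X, bins=10):
    """Shannon entropy (bits) of each feature column's histogram."""
    scores = []
    for col in np.asarray(X, float).T:
        counts, _ = np.histogram(col, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]                     # 0 * log(0) is treated as 0
        scores.append(float(-(p * np.log2(p)).sum()))
    return np.array(scores)
```

A constant feature scores 0 bits (no information) and would be dropped, while a widely spread feature scores high and would be retained.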

Deep learning radiomics analysis for prediction of survival in patients with unresectable gastric cancer receiving immunotherapy.

Gou M, Zhang H, Qian N, Zhang Y, Sun Z, Li G, Wang Z, Dai G

PubMed · Jun 1 2025
Immunotherapy has become an option for the first-line therapy of advanced gastric cancer (GC), with improved survival. Our study aimed to investigate unresectable GC from an imaging perspective, combined with clinicopathological variables, to identify patients who were most likely to benefit from immunotherapy. Patients with unresectable GC who were consecutively treated with immunotherapy at two different medical centers of Chinese PLA General Hospital were included and divided into the training and validation cohorts, respectively. A deep learning neural network, using a multimodal ensemble approach based on CT imaging data before immunotherapy, was trained in the training cohort to predict survival, and an internal validation cohort was constructed to select the optimal ensemble model. Data from another cohort were used for external validation. The area under the receiver operating characteristic curve was analyzed to evaluate performance in predicting survival. Detailed clinicopathological data and peripheral blood prior to immunotherapy were collected for each patient. Univariate and multivariable logistic regression analysis of imaging models and clinicopathological variables was also applied to identify the independent predictors of survival. A nomogram based on multivariable logistic regression was constructed. A total of 79 GC patients in the training cohort and 97 patients in the external validation cohort were enrolled in this study. A multi-model ensemble approach was applied to train a model to predict the 1-year survival of GC patients. Compared to individual models, the ensemble model showed improvement in performance metrics in both the internal and external validation cohorts. There was a significant difference in overall survival (OS) between patients stratified by the imaging model at the optimum cutoff score of 0.5 (HR = 0.20, 95% CI: 0.10-0.37, P < 0.001).
Multivariate Cox regression analysis revealed that the imaging models, PD-L1 expression, and the lung immune prognostic index were independent prognostic factors for OS. We combined these variables and built a nomogram. The C-index of the nomogram was 0.85 in the training cohort and 0.78 in the validation cohort. The deep learning model, in combination with several clinical factors, showed predictive value for survival in patients with unresectable GC receiving immunotherapy.
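The C-index reported for the nomogram measures the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times. A minimal Harrell's C sketch, counting only pairs where the earlier time is an observed event, with hypothetical inputs:

```python
def c_index(times, events, risks):
    """Harrell's concordance: among pairs where the earlier time is an event,
    count pairs where the earlier-failing patient has the higher predicted risk."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1.0
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 is chance-level ordering and 1.0 is perfect; the 0.85/0.78 values above indicate good discrimination in both cohorts.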