
Combining Multifrequency Magnetic Resonance Elastography With Automatic Segmentation to Assess Renal Function in Patients With Chronic Kidney Disease.

Liang Q, Lin H, Li J, Luo P, Qi R, Chen Q, Meng F, Qin H, Qu F, Zeng Y, Wang W, Lu J, Huang B, Chen Y

PubMed · Jun 1, 2025
Background: Multifrequency MR elastography (mMRE) enables noninvasive quantification of renal stiffness in patients with chronic kidney disease (CKD). Manual segmentation of the kidneys on mMRE is time-consuming and prone to interobserver variability.
Purpose: To evaluate the performance of mMRE combined with automatic segmentation in assessing CKD severity.
Study Type: Prospective.
Population: A total of 179 participants: 95 healthy volunteers and 84 participants with CKD.
Field Strength/Sequence: 3 T; single-shot spin-echo echo-planar imaging sequence.
Assessment: Participants were randomly assigned to training (n = 58), validation (n = 15), and test (n = 106) sets. The test set included 47 healthy volunteers and 58 CKD participants at different stages (21 stage 1-2, 22 stage 3, and 16 stage 4-5) based on estimated glomerular filtration rate (eGFR). Shear wave speed (SWS) values from mMRE were measured using automatic segmentation constructed with the nnU-Net deep-learning network; standard manual segmentation was created by a radiologist. In the test set, automatically segmented renal SWS was compared between healthy volunteers and CKD subgroups, with age as a covariate, and the association between SWS and eGFR was investigated in participants with CKD.
Statistical Tests: Dice similarity coefficient (DSC), analysis of covariance, Pearson and Spearman correlation analyses. P < 0.05 was considered statistically significant.
Results: Mean DSCs between standard manual and automatic segmentation were 0.943, 0.901, and 0.970 for the renal cortex, medulla, and parenchyma, respectively. Automatically quantified cortical, medullary, and parenchymal SWS correlated significantly with eGFR (r = 0.620, 0.605, and 0.640, respectively). Participants with CKD stage 1-2 exhibited significantly lower cortical SWS than healthy volunteers (2.44 ± 0.16 m/second vs. 2.56 ± 0.17 m/second) after adjusting for age.
Data Conclusion: mMRE combined with automatic segmentation revealed abnormal renal stiffness in patients with CKD, even those with mild renal impairment.
Plain Language Summary: Renal stiffness in patients with chronic kidney disease varies with kidney function and structure. This study integrates multifrequency magnetic resonance elastography with an automated segmentation technique to assess renal stiffness in patients with chronic kidney disease. The findings indicate that the method can distinguish patients with chronic kidney disease, including those with mild renal impairment, while reducing the subjectivity and time required for radiologists to analyze images. This research enhances the efficiency of image processing for radiologists and assists nephrologists in detecting early-stage damage in patients with chronic kidney disease.
Evidence Level: 2. Technical Efficacy: Stage 2.
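The headline metric here, the Dice similarity coefficient, is simple to compute. Below is a minimal sketch comparing two binary segmentation masks; the random arrays are hypothetical stand-ins for the nnU-Net and radiologist masks used in the study.

```python
# Dice similarity coefficient (DSC) between automatic and manual masks.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example with random masks (placeholders for cortex/medulla/parenchyma labels)
rng = np.random.default_rng(0)
auto_mask = rng.random((64, 64)) > 0.5
manual_mask = rng.random((64, 64)) > 0.5
print(f"DSC: {dice_coefficient(auto_mask, manual_mask):.3f}")
```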

An Artificial Intelligence Model Using Diffusion Basis Spectrum Imaging Metrics Accurately Predicts Clinically Significant Prostate Cancer.

Kim EH, Jing H, Utt KL, Vetter JM, Weimholt RC, Bullock AD, Klim AP, Bergeron KA, Frankel JK, Smith ZL, Andriole GL, Song SK, Ippolito JE

PubMed · Jun 1, 2025
Conventional prostate magnetic resonance imaging has limited accuracy for clinically significant prostate cancer (csPCa). We performed diffusion basis spectrum imaging (DBSI) before biopsy and applied artificial intelligence models to these DBSI metrics to predict csPCa. Between February 2020 and March 2024, 241 patients underwent prostate MRI that included conventional and DBSI-specific sequences before prostate biopsy. We used artificial intelligence models with DBSI metrics as input classifiers and the biopsy pathology as the ground truth. The DBSI-based model was compared with available biomarkers (PSA, PSA density [PSAD], and Prostate Imaging Reporting and Data System [PI-RADS]) for risk discrimination of csPCa, defined as Gleason score ≥ 7. The DBSI-based model was an independent predictor of csPCa (odds ratio [OR] 2.04, 95% CI 1.52-2.73, P < .01), as were PSAD (OR 2.02, 95% CI 1.21-3.35, P = .01) and PI-RADS classification (OR 4.00, 95% CI 1.37-11.6 for PI-RADS 3, P = .01; OR 9.67, 95% CI 2.89-32.7 for PI-RADS 4-5, P < .01), adjusting for age, family history, and race. Within our dataset, the DBSI-based model alone performed similarly to PSAD + PI-RADS (AUC 0.863 vs 0.859, P = .89), while the combination of the DBSI-based model + PI-RADS had the highest risk discrimination for csPCa (AUC 0.894, P < .01). A clinical strategy using the DBSI-based model for patients with PI-RADS 1-3 could have reduced biopsies by 27% while missing 2% of csPCa (compared with biopsy for all). Our DBSI-based artificial intelligence model accurately predicted csPCa on biopsy and can be combined with PI-RADS to potentially reduce unnecessary prostate biopsies.
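The analysis pattern described, logistic regression yielding odds ratios plus AUC for discrimination, can be sketched as follows. The features and synthetic labels are hypothetical placeholders, not the study's DBSI metrics or cohort.

```python
# Logistic regression with odds ratios and AUC, on synthetic placeholder data.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 241
X = np.column_stack([
    rng.normal(size=n),          # model score (assumed standardized)
    rng.normal(70, 8, size=n),   # age
    rng.integers(0, 2, size=n),  # family history (0/1)
])
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0.5).astype(int)  # synthetic csPCa label

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])  # OR per unit increase in each predictor
auc = roc_auc_score(y, fit.predict(sm.add_constant(X)))
print("ORs:", np.round(odds_ratios, 2), "AUC:", round(auc, 3))
```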

Generative artificial intelligence enables the generation of bone scintigraphy images and improves generalization of deep learning models in data-constrained environments.

Haberl D, Ning J, Kluge K, Kumpf K, Yu J, Jiang Z, Constantino C, Monaci A, Starace M, Haug AR, Calabretta R, Camoni L, Bertagna F, Mascherbauer K, Hofer F, Albano D, Sciagra R, Oliveira F, Costa D, Nitsche C, Hacker M, Spielvogel CP

PubMed · Jun 1, 2025
Advances in deep learning for medical imaging are often constrained by the limited availability of large, annotated datasets, resulting in underperforming models when deployed under real-world conditions. This study investigated a generative artificial intelligence (AI) approach to create synthetic medical images, using bone scintigraphy scans as an example, to increase the data diversity of small-scale datasets for more effective model training and improved generalization. We trained a generative model on 99mTc bone scintigraphy scans from 9,170 patients in one center to generate high-quality and fully anonymized annotated scans representing two distinct disease patterns: (i) abnormal uptake indicative of bone metastases and (ii) cardiac uptake indicative of cardiac amyloidosis. A blinded reader study was performed to assess the clinical validity and quality of the generated data. We investigated the added value of the generated data by augmenting an independent small single-center dataset with synthetic data and training a deep learning model to detect abnormal uptake in a downstream classification task. We tested this model on 7,472 scans from 6,448 patients across four external sites in a cross-tracer and cross-scanner setting and associated the resulting model predictions with clinical outcomes. The clinical value and high quality of the synthetic imaging data were confirmed by four readers, who were unable to distinguish synthetic scans from real scans (average accuracy: 0.48 [95% CI 0.46-0.51]), disagreeing in 239 (60%) of 400 cases (Fleiss' kappa: 0.18). Adding synthetic data to the training set improved model performance by a mean (±SD) of 33 (±10)% in AUC (p < 0.0001) for detecting abnormal uptake indicative of bone metastases and by 5 (±4)% in AUC (p < 0.0001) for detecting uptake indicative of cardiac amyloidosis across both internal and external testing cohorts, compared to models without synthetic training data. Patients with predicted abnormal uptake had adverse clinical outcomes (log-rank: p < 0.0001). Generative AI enables the targeted generation of bone scintigraphy images representing different clinical conditions. Our findings point to the potential of synthetic data to overcome challenges in data sharing and in developing reliable and prognostic deep learning models in data-limited environments.
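Conceptually, the augmentation step mixes generator output into a small real training set before fitting the downstream classifier. A hedged sketch under strong simplifying assumptions: a toy feature-space "generator" stands in for the trained generative model, and a random forest stands in for the deep learning classifier.

```python
# Augmenting a small real training set with synthetic labeled samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_cohort(n):
    """Placeholder for image-derived features; class 1 shifts feature 0."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 32))
    X[:, 0] += 0.75 * y
    return X, y

def sample_synthetic(n, label):
    """Placeholder for drawing labeled samples from a trained generative model."""
    X = rng.normal(size=(n, 32))
    X[:, 0] += 0.75 * label
    return X, np.full(n, label)

X_real, y_real = make_cohort(60)           # small "real" training set
X_syn0, y_syn0 = sample_synthetic(300, 0)  # synthetic negatives
X_syn1, y_syn1 = sample_synthetic(300, 1)  # synthetic positives

X_train = np.vstack([X_real, X_syn0, X_syn1])
y_train = np.concatenate([y_real, y_syn0, y_syn1])
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

X_test, y_test = make_cohort(200)
print("AUC:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 3))
```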

Semiautomated Extraction of Research Topics and Trends From National Cancer Institute Funding in Radiological Sciences From 2000 to 2020.

Nguyen MH, Beidler PG, Tsai J, Anderson A, Chen D, Kinahan PE, Kang J

PubMed · Jun 1, 2025
Investigators and funding organizations want to know which topics publicly funded research covers and how those topics trend, but manual categorization efforts to date have been limited in breadth and depth. We present a semiautomated analysis of 21 years of R-type National Cancer Institute (NCI) grants to departments of radiation oncology and radiology using natural language processing. We selected all noneducation R-type NCI grants from 2000 to 2020 awarded to departments of radiation oncology/radiology with affiliated schools of medicine. We used pretrained word embedding vectors to represent each grant abstract. A sequential clustering algorithm assigned each grant to 1 of 60 clusters representing research topics; we repeated the same workflow with 15 clusters for comparison. Each cluster was then manually named using the top words and the documents closest to its centroid. The interpretability of the document embeddings was evaluated by projecting them onto 2 dimensions. Changes in clusters over time were used to examine temporal funding trends. We included 5874 grants totaling $1.9 billion of NCI funding over 21 years. Human-model agreement was similar to human interrater agreement. Two-dimensional projections of the grant clusters showed 2 dominant axes: physics-biology and therapeutic-diagnostic. Therapeutic and physics clusters have grown faster over time than diagnostic and biology clusters. The 3 topics with the largest funding increases were imaging biomarkers, informatics, and radiopharmaceuticals, each with mean annual growth of >$218,000. The 3 topics with the largest funding decreases were cellular stress response, advanced imaging hardware technology, and improving the performance of breast cancer computer-aided detection, each with a mean annual decrease of >$110,000. We developed a semiautomated natural language processing approach to analyze research topics and funding trends and applied it to NCI funding in the radiological sciences to extract both the research domains being funded and their temporal trends.
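The described workflow (embed each abstract, cluster into topics, project to 2-D for interpretation) can be sketched as follows. TF-IDF vectors and k-means stand in for the study's pretrained word embeddings and sequential clustering algorithm; the abstracts are invented examples.

```python
# Embed abstracts, cluster into topics, and project to 2-D for inspection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

abstracts = [
    "radiation dose optimization for proton therapy planning",
    "PET imaging biomarkers for tumor hypoxia quantification",
    "deep learning informatics pipeline for radiology reports",
    "radiopharmaceutical targeting of prostate cancer lesions",
]

embeddings = TfidfVectorizer().fit_transform(abstracts)  # document vectors
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
coords = PCA(n_components=2).fit_transform(embeddings.toarray())  # 2-D projection

for text, c, (x, y) in zip(abstracts, clusters, coords):
    print(f"cluster {c} ({x:+.2f}, {y:+.2f}): {text[:40]}...")
```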

Adaptive ensemble loss and multi-scale attention in breast ultrasound segmentation with UMA-Net.

Dar MF, Ganivada A

PubMed · Jun 1, 2025
The generalization of deep learning (DL) models is critical for accurate lesion segmentation in breast ultrasound (BUS) images. Traditional DL models often struggle to generalize well due to the considerable frequency and scale variations inherent in BUS images. Moreover, conventional loss functions used in these models frequently result in imbalanced optimization, prioritizing either region overlap or boundary accuracy, which leads to suboptimal segmentation performance. To address these issues, we propose UMA-Net, an enhanced UNet architecture specifically designed for BUS image segmentation. UMA-Net integrates residual connections, attention mechanisms, and a bottleneck with atrous convolutions to effectively capture multi-scale contextual information without compromising spatial resolution. Additionally, we introduce an adaptive ensemble loss function that dynamically balances the contributions of different loss components during training, ensuring balanced optimization across key segmentation metrics. This novel approach mitigates the imbalances found in conventional loss functions. We validate UMA-Net on five diverse BUS datasets (BUET, BUSI, Mendeley, OMI, and UDIAT), demonstrating superior performance. Our findings highlight the importance of addressing frequency and scale variations, confirming UMA-Net as a robust and generalizable solution for BUS image segmentation.
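The adaptive ensemble loss is described as dynamically balancing loss components during training. Below is a minimal sketch of that idea with an assumed weighting rule (inverse exponential moving average of each component's magnitude); this is not the authors' exact formulation.

```python
# Adaptive combination of a region (Dice) and a pixel-wise (BCE) loss term.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

class AdaptiveEnsembleLoss(torch.nn.Module):
    def __init__(self, momentum=0.9):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("ema", torch.ones(2))  # running loss magnitudes

    def forward(self, logits, target):
        pred = torch.sigmoid(logits)
        losses = torch.stack([dice_loss(pred, target),
                              F.binary_cross_entropy(pred, target)])
        with torch.no_grad():  # update running magnitudes, derive weights
            self.ema = self.momentum * self.ema + (1 - self.momentum) * losses
            weights = (1.0 / self.ema) / (1.0 / self.ema).sum()
        return (weights * losses).sum()

# Usage on toy tensors
criterion = AdaptiveEnsembleLoss()
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(criterion(logits, target).item())
```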

PEDRA-EFB0: colorectal cancer prognostication using deep learning with patch embeddings and dual residual attention.

Zhao Z, Wang H, Wu D, Zhu Q, Tan X, Hu S, Ge Y

PubMed · Jun 1, 2025
In computer-aided diagnosis systems, precise feature extraction from CT scans of colorectal cancer using deep learning is essential for effective prognosis prediction. However, existing convolutional neural networks struggle to capture long-range dependencies and contextual information, resulting in incomplete CT feature extraction. To address this, the PEDRA-EFB0 architecture integrates patch embeddings and a dual residual attention mechanism for enhanced feature extraction and survival prediction in colorectal cancer CT scans. A patch embedding method processes CT scans into patches, creating positional features for global representation and guiding spatial attention computation. Additionally, a dual residual attention mechanism during the upsampling stage selectively combines local and global features, enhancing CT data utilization. Furthermore, this paper proposes a feature selection algorithm that combines autoencoders and entropy techniques, encoding and compressing high-dimensional data to reduce redundant information and using entropy to assess feature importance, thereby achieving precise feature selection. Experimental results indicate that the PEDRA-EFB0 model outperforms traditional methods on colorectal cancer CT metrics, notably C-index, BS, MCC, and AUC, enhancing survival prediction accuracy. Our code is freely available at https://github.com/smile0208z/PEDRA.
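The autoencoder-plus-entropy feature selection can be sketched as: train a small autoencoder for compression, then rank the encoded features by Shannon entropy. The ranking rule (keep the highest-entropy features) is an assumption based on the abstract's description, and the input features are hypothetical.

```python
# Autoencoder compression followed by entropy-based feature ranking.
import numpy as np
import torch

torch.manual_seed(0)
X = torch.randn(200, 128)  # hypothetical high-dimensional CT features

encoder = torch.nn.Sequential(torch.nn.Linear(128, 32), torch.nn.ReLU())
decoder = torch.nn.Linear(32, 128)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(200):  # brief reconstruction training
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

with torch.no_grad():
    Z = encoder(X).numpy()

def shannon_entropy(col, bins=16):
    p, _ = np.histogram(col, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

entropies = np.array([shannon_entropy(Z[:, j]) for j in range(Z.shape[1])])
selected = np.argsort(entropies)[-8:]  # keep the 8 highest-entropy features
print("selected feature indices:", selected)
```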

Deep learning radiomics analysis for prediction of survival in patients with unresectable gastric cancer receiving immunotherapy.

Gou M, Zhang H, Qian N, Zhang Y, Sun Z, Li G, Wang Z, Dai G

PubMed · Jun 1, 2025
Immunotherapy has become a first-line treatment option for advanced gastric cancer (GC), improving survival. Our study aimed to investigate unresectable GC from an imaging perspective, combined with clinicopathological variables, to identify patients most likely to benefit from immunotherapy. Patients with unresectable GC who were consecutively treated with immunotherapy at two medical centers of Chinese PLA General Hospital were included and divided into the training and validation cohorts, respectively. A deep learning neural network, using a multimodal ensemble approach based on CT imaging data before immunotherapy, was trained in the training cohort to predict survival, and an internal validation cohort was constructed to select the optimal ensemble model. Data from the other cohort were used for external validation. The area under the receiver operating characteristic curve was analyzed to evaluate performance in predicting survival. Detailed clinicopathological data and peripheral blood samples obtained prior to immunotherapy were collected for each patient. Univariate and multivariable logistic regression analyses of the imaging models and clinicopathological variables were applied to identify independent predictors of survival, and a nomogram based on multivariable logistic regression was constructed. A total of 79 GC patients in the training cohort and 97 patients in the external validation cohort were enrolled. A multi-model ensemble approach was applied to train a model to predict 1-year survival. Compared to individual models, the ensemble model showed improved performance metrics in both the internal and external validation cohorts. There was a significant difference in overall survival (OS) between patients stratified by the imaging model at the optimal cutoff score of 0.5 (HR = 0.20, 95% CI: 0.10-0.37, P < 0.001). Multivariate Cox regression analysis revealed that the imaging model, PD-L1 expression, and lung immune prognostic index were independent prognostic factors for OS. We combined these variables and built a nomogram. Calibration curves showed that the nomogram achieved a C-index of 0.85 and 0.78 in the training and validation cohorts, respectively. The deep learning model, in combination with several clinical factors, showed predictive value for survival in patients with unresectable GC receiving immunotherapy.
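The survival modeling pattern, a Cox proportional-hazards fit combining the imaging score with clinical covariates and summarized by a concordance index, might look like the following. Data, column names, and effect sizes are hypothetical placeholders.

```python
# Cox proportional-hazards model on synthetic imaging + clinical covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 176  # 79 training + 97 validation patients in the study
df = pd.DataFrame({
    "imaging_score": rng.random(n),           # ensemble model output in [0, 1]
    "pd_l1_positive": rng.integers(0, 2, n),  # PD-L1 expression (0/1)
    "lipi": rng.integers(0, 3, n),            # lung immune prognostic index (0-2)
})
risk = 1.5 * df["imaging_score"] + 0.5 * df["pd_l1_positive"]
df["time_months"] = rng.exponential(24 / np.exp(risk))  # higher risk, shorter OS
df["event"] = rng.integers(0, 2, n)

cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
print("C-index:", round(cph.concordance_index_, 3))
```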

Empowering PET imaging reporting with retrieval-augmented large language models and reading reports database: a pilot single center study.

Choi H, Lee D, Kang YK, Suh M

PubMed · Jun 1, 2025
Large Language Models (LLMs) have the potential to enhance a variety of clinical natural language tasks, including medical imaging reporting. This pilot study examines the efficacy of a retrieval-augmented generation (RAG) LLM system, which leverages the zero-shot learning capability of LLMs and is integrated with a comprehensive database of PET reading reports, in improving reference to prior reports and decision making. We developed a custom LLM framework with retrieval capabilities, leveraging a database of over 10 years of PET imaging reports from a single center. The system uses vector-space embedding to facilitate similarity-based retrieval. Queries prompt the system to generate context-based answers and identify similar cases or differential diagnoses. From routine clinical PET readings, experienced nuclear medicine physicians evaluated the performance of the system in terms of the relevance of retrieved similar cases and the appropriateness score of suggested potential diagnoses. The system efficiently organized embedded vectors from PET reports, showing that imaging reports were accurately clustered within the embedded vector space according to diagnosis or PET study type. Based on this system, a proof-of-concept chatbot was developed and demonstrated the framework's potential for referencing reports of previous similar cases and identifying exemplary cases for various purposes. From routine clinical PET readings, 84.1% of cases retrieved relevant similar cases, as agreed upon by all three readers. Using the RAG system, the appropriateness score of the suggested potential diagnoses was significantly better than that of the LLM without RAG. Additionally, the system demonstrated the capability to offer differential diagnoses, leveraging the vast database to enhance the completeness and precision of generated reports. The integration of a RAG LLM with a large database of PET imaging reports suggests the potential to support the clinical practice of nuclear medicine imaging reading through various AI tasks, including finding similar cases and deriving potential diagnoses from them. This study underscores the potential of advanced AI tools in transforming medical imaging reporting practices.
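The retrieval step of such a RAG pipeline reduces to embedding prior reports and ranking them by similarity to a query. A hedged sketch with TF-IDF cosine similarity standing in for the study's vector-space embedding model; the reports are invented examples.

```python
# Similarity-based retrieval of prior reports for a new query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "FDG PET/CT: hypermetabolic lung mass with mediastinal nodal uptake.",
    "Bone scan correlate: multiple axial skeleton foci, suspicious for metastases.",
    "FDG PET/CT: diffuse thyroid uptake, likely thyroiditis.",
]
vectorizer = TfidfVectorizer().fit(reports)
index = vectorizer.transform(reports)  # the report "database"

query = "new hypermetabolic pulmonary nodule with nodal involvement"
scores = cosine_similarity(vectorizer.transform([query]), index)[0]
for i in scores.argsort()[::-1][:2]:
    print(f"similarity {scores[i]:.2f}: {reports[i]}")
# The top-k retrieved reports would then be inserted into the LLM prompt as context.
```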

Enhancing detection of previously missed non-palpable breast carcinomas through artificial intelligence.

Mansour S, Kamal R, Hussein SA, Emara M, Kassab Y, Taha SN, Gomaa MMM

PubMed · Jun 1, 2025
To investigate the impact of artificial intelligence (AI) reading of digital mammograms on the detection of missed breast cancer, by studying the AI-flagged early morphologic indicators overlooked by the radiologist and correlating them with the pathology types of the missed cancers. Mammograms performed in 2020-2023 and presenting breast carcinomas (n = 1998) were analyzed together with the corresponding prior-year mammograms (2019-2022), which had been assessed as negative or benign. The current mammograms were reviewed for the descriptors asymmetry, distortion, mass, and microcalcifications. The AI highlighted abnormalities by overlaying a color hue and assigned a percentage score for the degree of suspicion of malignancy. Prior mammograms with AI markings comprised 54% (n = 555), and on the current mammograms, AI flagged 904 (88%) carcinomas. The descriptor "asymmetry" was the most common presentation of missed breast carcinoma (64.1%) on the prior mammograms, and the highest AI detection rate was for "distortion" (100%), followed by "grouped microcalcifications" (80%). AI performance in predicting malignancy on previously assigned negative or benign mammograms showed a sensitivity of 73.4%, specificity of 89%, and accuracy of 78.4%. Reading mammograms with AI significantly enhances the detection of early cancerous changes, particularly in dense breast tissue. The AI detection rate did not correlate with specific pathological types of breast cancer, highlighting its broad utility. Subtle mammographic changes in postmenopausal women, not corroborated by ultrasound but marked by AI, warrant further evaluation with advanced digital mammography applications and short-interval follow-up of AI-read mammograms to minimize the potential for missed breast carcinoma.
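For reference, the reported performance figures follow directly from a confusion matrix. The counts below are illustrative, back-calculated to be consistent with the stated sensitivity (73.4%), specificity (89%), and accuracy (78.4%); they are not the study's actual case counts.

```python
# Sensitivity, specificity, and accuracy from confusion-matrix counts.
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# Toy counts chosen to reproduce the reported rates (prevalence is assumed)
print(diagnostic_metrics(tp=734, fn=266, tn=419, fp=52))
# -> sensitivity 0.734, specificity 0.890, accuracy 0.784
```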

Comparative diagnostic accuracy of ChatGPT-4 and machine learning in differentiating spinal tuberculosis and spinal tumors.

Hu X, Xu D, Zhang H, Tang M, Gao Q

PubMed · Jun 1, 2025
Background: In clinical practice, distinguishing between spinal tuberculosis (STB) and spinal tumors (ST) poses a significant diagnostic challenge. AI-driven large language models (LLMs) show great potential for improving the accuracy of this differential diagnosis.
Purpose: To evaluate the performance of various machine learning models and ChatGPT-4 in distinguishing between STB and ST.
Study Design: A retrospective cohort study.
Methods: A total of 143 STB cases and 153 ST cases admitted to Xiangya Hospital, Central South University, from January 2016 to June 2023 were collected. The study incorporated basic patient information, standard laboratory results, serum tumor markers, and comprehensive imaging records, including magnetic resonance imaging (MRI) and computed tomography (CT), for individuals diagnosed with STB and ST. Six distinct machine learning models and ChatGPT-4 were separately applied to distinguish STB from ST, and their differential diagnostic performance was evaluated.
Results: Among the 6 machine learning models, the Gradient Boosting Machine (GBM) algorithm demonstrated the highest differential diagnostic efficiency. In the training cohort, the GBM model achieved a sensitivity of 98.84% and a specificity of 100.00% in distinguishing STB from ST; in the testing cohort, its sensitivity was 98.25% and specificity was 91.80%. ChatGPT-4 exhibited a sensitivity of 70.37% and a specificity of 90.65% for differential diagnosis. In single-question cases, ChatGPT-4's sensitivity and specificity were 71.67% and 92.55%, respectively, while in re-questioning cases they were 44.44% and 76.92%.
Conclusions: The GBM model demonstrates significant value in the differential diagnosis of STB and ST, whereas the diagnostic performance of ChatGPT-4 remains suboptimal.
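A sketch of the GBM classification setup, using scikit-learn's GradientBoostingClassifier on synthetic features as a stand-in for the study's clinical, laboratory, and imaging variables.

```python
# Gradient boosting classifier with sensitivity/specificity evaluation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(4)
n = 296  # 143 STB + 153 ST cases in the study
X = rng.normal(size=(n, 12))  # placeholder feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, gbm.predict(X_te)).ravel()
print(f"sensitivity {tp/(tp+fn):.3f}, specificity {tn/(tn+fp):.3f}")
```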