
Semiautomated Extraction of Research Topics and Trends From National Cancer Institute Funding in Radiological Sciences From 2000 to 2020.

Nguyen MH, Beidler PG, Tsai J, Anderson A, Chen D, Kinahan PE, Kang J

PubMed · Jun 1 2025
Investigators and funding organizations want to know which topics publicly funded research covers and how they trend, but manual categorization efforts to date have been limited in breadth and depth. We present a semiautomated analysis of 21 years of R-type National Cancer Institute (NCI) grants to departments of radiation oncology and radiology using natural language processing. We selected all non-education R-type NCI grants from 2000 to 2020 awarded to departments of radiation oncology/radiology with affiliated schools of medicine. We used pretrained word embedding vectors to represent each grant abstract. A sequential clustering algorithm assigned each grant to 1 of 60 clusters representing research topics; we repeated the same workflow with 15 clusters for comparison. Each cluster was then manually named using the top words and the documents closest to its centroid. The interpretability of the document embeddings was evaluated by projecting them onto 2 dimensions. Changes in clusters over time were used to examine temporal funding trends. We included 5874 grants totaling 1.9 billion dollars of NCI funding over 21 years. Human-model agreement was similar to human interrater agreement. Two-dimensional projections of grant clusters showed 2 dominant axes: physics-biology and therapeutic-diagnostic. Therapeutic and physics clusters have grown faster over time than diagnostic and biology clusters. The 3 topics with the largest funding increases were imaging biomarkers, informatics, and radiopharmaceuticals, each with a mean annual growth of >$218,000. The 3 topics with the largest funding decreases were cellular stress response, advanced imaging hardware technology, and improving the performance of breast cancer computer-aided detection, each with a mean decrease of >$110,000. We developed a semiautomated natural language processing approach to analyze research topics and funding trends, and applied it to NCI funding in the radiological sciences to extract both the domains of research being funded and their temporal trends.
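
As a rough illustration of the workflow described above, the sketch below embeds abstracts, clusters them into 60 topics, and projects the embeddings to 2 dimensions. The embedding model (a generic sentence-transformers encoder), k-means clustering, and PCA projection are stand-ins chosen for brevity; the study's actual pretrained word vectors and sequential clustering algorithm may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import pairwise_distances
from sentence_transformers import SentenceTransformer  # hypothetical stand-in embedder

abstracts = ["...grant abstract text..."] * 100  # placeholder corpus (needs >= 60 docs)

model = SentenceTransformer("all-MiniLM-L6-v2")
X = model.encode(abstracts)                      # one vector per grant abstract

kmeans = KMeans(n_clusters=60, random_state=0).fit(X)  # 60 topic clusters, as in the study
labels = kmeans.labels_

coords = PCA(n_components=2).fit_transform(X)    # 2-D projection to inspect dominant axes

# The abstracts nearest each centroid help a human name the cluster
closest = pairwise_distances(X, kmeans.cluster_centers_).argmin(axis=0)
```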

Adaptive ensemble loss and multi-scale attention in breast ultrasound segmentation with UMA-Net.

Dar MF, Ganivada A

PubMed · Jun 1 2025
The generalization of deep learning (DL) models is critical for accurate lesion segmentation in breast ultrasound (BUS) images. Traditional DL models often struggle to generalize well due to the high frequency and scale variations inherent in BUS images. Moreover, conventional loss functions used in these models frequently result in imbalanced optimization, either prioritizing region overlap or boundary accuracy, which leads to suboptimal segmentation performance. To address these issues, we propose UMA-Net, an enhanced UNet architecture specifically designed for BUS image segmentation. UMA-Net integrates residual connections, attention mechanisms, and a bottleneck with atrous convolutions to effectively capture multi-scale contextual information without compromising spatial resolution. Additionally, we introduce an adaptive ensemble loss function that dynamically balances the contributions of different loss components during training, ensuring optimization across key segmentation metrics. This novel approach mitigates the imbalances found in conventional loss functions. We validate UMA-Net on five diverse BUS datasets (BUET, BUSI, Mendeley, OMI, and UDIAT), demonstrating superior performance. Our findings highlight the importance of addressing frequency and scale variations, confirming UMA-Net as a robust and generalizable solution for BUS image segmentation.
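
The abstract does not specify the exact weighting rule, but a common way to realize an adaptive ensemble loss is to learn normalized weights over complementary terms such as Dice (region overlap) and binary cross-entropy (boundary/pixel accuracy). The PyTorch sketch below shows that assumed form; UMA-Net's actual scheme may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveEnsembleLoss(nn.Module):
    """Assumed form: learnable softmax weights over BCE and Dice terms."""
    def __init__(self, n_terms: int = 2):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_terms))  # one weight per loss term

    @staticmethod
    def dice_loss(pred, target, eps=1e-6):
        pred = torch.sigmoid(pred)
        inter = (pred * target).sum()
        return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

    def forward(self, pred, target):
        terms = torch.stack([
            F.binary_cross_entropy_with_logits(pred, target),  # pixel/boundary accuracy
            self.dice_loss(pred, target),                      # region overlap
        ])
        w = torch.softmax(self.logits, dim=0)  # weights adapt during training
        return (w * terms).sum()

crit = AdaptiveEnsembleLoss()
loss = crit(torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float())
```

Because the weight logits are parameters, they must be passed to the optimizer alongside the network's weights so the balance is learned jointly with segmentation.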

PEDRA-EFB0: colorectal cancer prognostication using deep learning with patch embeddings and dual residual attention.

Zhao Z, Wang H, Wu D, Zhu Q, Tan X, Hu S, Ge Y

PubMed · Jun 1 2025
In computer-aided diagnosis systems, precise feature extraction from CT scans of colorectal cancer using deep learning is essential for effective prognosis. However, existing convolutional neural networks struggle to capture long-range dependencies and contextual information, resulting in incomplete CT feature extraction. To address this, the PEDRA-EFB0 architecture integrates patch embeddings and a dual residual attention mechanism for enhanced feature extraction and survival prediction from colorectal cancer CT scans. A patch embedding method processes CT scans into patches, creating positional features for global representation and guiding spatial attention computation. Additionally, a dual residual attention mechanism during the upsampling stage selectively combines local and global features, enhancing CT data utilization. Furthermore, this paper proposes a feature selection algorithm that combines autoencoders with entropy-based measures: high-dimensional data are encoded and compressed to reduce redundant information, and entropy is used to assess the importance of features, thereby achieving precise feature selection. Experimental results indicate the PEDRA-EFB0 model outperforms traditional methods on colorectal cancer CT metrics, notably in C-index, BS, MCC, and AUC, enhancing survival prediction accuracy. Our code is freely available at https://github.com/smile0208z/PEDRA.
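
For the patch-embedding step, a typical construction splits each CT slice into non-overlapping patches with a strided convolution and adds learned positional features, as sketched below (illustrative sizes; the paper's EfficientNet-B0 backbone and dual residual attention blocks are not reproduced here).

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch=16, in_ch=1, dim=256):
        super().__init__()
        # Strided convolution splits the slice into (img_size/patch)^2 patches
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))  # positional features

    def forward(self, x):                 # x: (B, 1, H, W) CT slice
        x = self.proj(x)                  # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)  # (B, n_patches, dim)
        return x + self.pos               # tokens carry position for attention

tokens = PatchEmbedding()(torch.randn(2, 1, 224, 224))  # -> (2, 196, 256)
```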

Deep learning radiomics analysis for prediction of survival in patients with unresectable gastric cancer receiving immunotherapy.

Gou M, Zhang H, Qian N, Zhang Y, Sun Z, Li G, Wang Z, Dai G

PubMed · Jun 1 2025
Immunotherapy has become an option for the first-line therapy of advanced gastric cancer (GC), with improved survival. Our study aimed to investigate unresectable GC from an imaging perspective, combined with clinicopathological variables, to identify patients most likely to benefit from immunotherapy. Patients with unresectable GC who were consecutively treated with immunotherapy at two different medical centers of Chinese PLA General Hospital were included and divided into the training and validation cohorts, respectively. A deep learning neural network, using a multimodal ensemble approach based on CT imaging data before immunotherapy, was trained in the training cohort to predict survival, and an internal validation cohort was constructed to select the optimal ensemble model. Data from another cohort were used for external validation. The area under the receiver operating characteristic curve was analyzed to evaluate performance in predicting survival. Detailed clinicopathological data and peripheral blood samples prior to immunotherapy were collected for each patient. Univariate and multivariable logistic regression analysis of imaging models and clinicopathological variables was also applied to identify the independent predictors of survival. A nomogram based on multivariable logistic regression was constructed. A total of 79 GC patients in the training cohort and 97 patients in the external validation cohort were enrolled in this study. A multi-model ensemble approach was applied to train a model to predict the 1-year survival of GC patients. Compared to individual models, the ensemble model showed improvement in performance metrics in both the internal and external validation cohorts. There was a significant difference in overall survival (OS) among patients stratified by the imaging model at the optimal cutoff score of 0.5 (HR = 0.20, 95% CI: 0.10-0.37, P < 0.001). Multivariate Cox regression analysis revealed that the imaging model, PD-L1 expression, and the lung immune prognostic index were independent prognostic factors for OS. We combined these variables and built a nomogram. Calibration curves were plotted, and the C-index of the nomogram was 0.85 and 0.78 in the training and validation cohorts, respectively. The deep learning model, in combination with several clinical factors, showed predictive value for survival in patients with unresectable GC receiving immunotherapy.
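
The abstract describes a multi-model ensemble for 1-year survival. A simple assumed realization averages the member models' predicted probabilities and scores the result with ROC AUC, as sketched below with hypothetical scikit-learn models and synthetic stand-in data; the study's actual deep learning members and ensemble selection rule are not detailed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in features/labels: y = 1 if the patient survived beyond 1 year
X, y = make_classification(n_samples=176, n_features=30, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

models = [LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
          RandomForestClassifier(random_state=0).fit(X_tr, y_tr)]

# Ensemble by averaging per-model survival probabilities
ens_prob = np.mean([m.predict_proba(X_va)[:, 1] for m in models], axis=0)
print("AUC:", roc_auc_score(y_va, ens_prob))

# Patients can then be dichotomized at the study's cutoff score of 0.5
high_risk = ens_prob < 0.5
```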

Empowering PET imaging reporting with retrieval-augmented large language models and reading reports database: a pilot single center study.

Choi H, Lee D, Kang YK, Suh M

PubMed · Jun 1 2025
Large Language Models (LLMs) have the potential to enhance a variety of natural language tasks in clinical fields, including medical imaging reporting. This pilot study examines the efficacy of a retrieval-augmented generation (RAG) LLM system, which leverages the zero-shot learning capability of LLMs and is integrated with a comprehensive database of PET reading reports, in improving reference to prior reports and decision making. We developed a custom LLM framework with retrieval capabilities, leveraging a database of over 10 years of PET imaging reports from a single center. The system uses vector space embeddings to facilitate similarity-based retrieval. Queries prompt the system to generate context-based answers and identify similar cases or differential diagnoses. From routine clinical PET readings, experienced nuclear medicine physicians evaluated the performance of the system in terms of the relevance of retrieved similar cases and the appropriateness scores of suggested potential diagnoses. The system efficiently organized the embedded vectors from PET reports: imaging reports clustered accurately within the embedded vector space according to diagnosis or PET study type. Based on this system, a proof-of-concept chatbot was developed and demonstrated the framework's potential for referencing reports of previous similar cases and identifying exemplary cases for various purposes. In routine clinical PET readings, 84.1% of cases retrieved relevant similar cases, as agreed upon by all three readers. Using the RAG system, the appropriateness score of the suggested potential diagnoses was significantly better than that of the LLM without RAG. Additionally, the system demonstrated the capability to offer differential diagnoses, leveraging the vast database to enhance the completeness and precision of generated reports. The integration of a RAG LLM with a large database of PET imaging reports suggests the potential to support the clinical practice of nuclear medicine imaging reading through various AI tasks, including finding similar cases and deriving potential diagnoses from them. This study underscores the potential of advanced AI tools in transforming medical imaging reporting practices.
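
At its core, the retrieval step embeds prior reports into a vector space and returns the nearest neighbors of a query, which are then prepended to the LLM prompt. A minimal sketch, assuming a generic sentence-embedding model and in-memory cosine search (the center's actual embedder and vector store are not specified):

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # hypothetical stand-in embedder

reports = ["...prior PET report text...",
           "...another prior report..."]  # placeholder report database

embedder = SentenceTransformer("all-MiniLM-L6-v2")
report_vecs = embedder.encode(reports, normalize_embeddings=True)

def retrieve_similar(query: str, k: int = 5):
    """Return the k most similar prior reports to ground the LLM's answer."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    sims = report_vecs @ q                    # cosine similarity (unit vectors)
    top = np.argsort(-sims)[:k]
    return [(reports[i], float(sims[i])) for i in top]

# The retrieved reports are inserted into the prompt so generated answers,
# similar cases, and differential diagnoses reference actual prior readings.
```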

Enhancing detection of previously missed non-palpable breast carcinomas through artificial intelligence.

Mansour S, Kamal R, Hussein SA, Emara M, Kassab Y, Taha SN, Gomaa MMM

PubMed · Jun 1 2025
To investigate the impact of artificial intelligence (AI) reading of digital mammograms on the detection of missed breast cancer, by studying AI-flagged early morphology indicators overlooked by the radiologist and correlating them with the pathology types of the missed cancers. Mammograms performed in 2020-2023 presenting breast carcinomas (n = 1998) were analyzed alongside the prior-year mammograms (2019-2022), which had been assessed as negative or benign. The current mammograms were reviewed for the descriptors asymmetry, distortion, mass, and microcalcifications. The AI flagged abnormalities by overlaying a color hue and assigning a percentage score for the degree of suspicion of malignancy. AI marking comprised 54% of the prior mammograms (n = 555), and in the current mammograms AI targeted 904 (88%) carcinomas. "Asymmetry" was the most common presentation of missed breast carcinoma (64.1%) in the prior mammograms, and the highest AI detection rate was for "distortion" (100%), followed by "grouped microcalcifications" (80%). AI performance in predicting malignancy on mammograms previously assessed as negative or benign showed a sensitivity of 73.4%, a specificity of 89%, and an accuracy of 78.4%. Reading mammograms with AI significantly enhances the detection of early cancerous changes, particularly in dense breast tissue. The AI detection rate did not correlate with specific pathological types of breast cancer, highlighting its broad utility. Subtle mammographic changes in postmenopausal women that are not corroborated by ultrasound but are marked by AI warrant further evaluation with advanced digital mammography applications and short-interval follow-up with AI-read mammography to minimize the potential for missed breast carcinoma.
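
For reference, the reported performance figures follow the standard confusion-matrix definitions; the sketch below shows the arithmetic with illustrative counts (not the study's raw data).

```python
def screening_metrics(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # flagged cancers / all cancers
    specificity = tn / (tn + fp)          # correctly cleared / all non-cancers
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts only:
print(screening_metrics(tp=734, fn=266, tn=890, fp=110))  # ~ (0.734, 0.89, 0.812)
```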

Comparative diagnostic accuracy of ChatGPT-4 and machine learning in differentiating spinal tuberculosis and spinal tumors.

Hu X, Xu D, Zhang H, Tang M, Gao Q

PubMed · Jun 1 2025
In clinical practice, distinguishing between spinal tuberculosis (STB) and spinal tumors (ST) poses a significant diagnostic challenge. The application of AI-driven large language models (LLMs) shows great potential for improving the accuracy of this differential diagnosis. To evaluate the performance of various machine learning models and ChatGPT-4 in distinguishing between STB and ST. A retrospective cohort study. A total of 143 STB cases and 153 ST cases admitted to Xiangya Hospital, Central South University, from January 2016 to June 2023 were collected. The study incorporates basic patient information, standard laboratory results, serum tumor markers, and comprehensive imaging records, including magnetic resonance imaging (MRI) and computed tomography (CT), for individuals diagnosed with STB and ST. Six distinct machine learning models, along with ChatGPT-4, were each used to distinguish STB from ST, and their differential diagnostic effectiveness was evaluated. Among the 6 machine learning models, the gradient boosting machine (GBM) model demonstrated the highest differential diagnostic efficiency. In the training cohort, the GBM model achieved a sensitivity of 98.84% and a specificity of 100.00% in distinguishing STB from ST. In the testing cohort, its sensitivity was 98.25% and its specificity was 91.80%. ChatGPT-4 exhibited a sensitivity of 70.37% and a specificity of 90.65% for differential diagnosis. In single-question cases, ChatGPT-4's sensitivity and specificity were 71.67% and 92.55%, respectively, while in re-questioning cases they were 44.44% and 76.92%. The GBM model demonstrates significant value in the differential diagnosis of STB and ST, whereas the diagnostic performance of ChatGPT-4 remains suboptimal.
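
A minimal sketch of a GBM classifier in this setting, assuming a scikit-learn implementation and synthetic stand-in features (the study's actual pipeline, feature set, and tuning are not specified in the abstract):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-in for the 296 cases (143 STB, 153 ST) with clinical/lab/imaging features
X, y = make_classification(n_samples=296, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, gbm.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)   # proportion of STB correctly identified (y = 1 assumed STB)
specificity = tn / (tn + fp)   # proportion of ST correctly identified
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```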

AI image analysis as the basis for risk-stratified screening.

Strand F

PubMed · Jun 1 2025
Artificial intelligence (AI) has emerged as a transformative tool in breast cancer screening, with two distinct applications: computer-aided cancer detection (CAD) and risk prediction. While AI CAD systems are slowly finding their way into clinical practice to assist radiologists or make independent reads, this review focuses on AI risk models, which aim to predict a patient's likelihood of being diagnosed with breast cancer within a few years after a negative screening. Unlike AI CAD systems, AI risk models have mainly been explored in research settings without widespread clinical adoption. This review synthesizes advances in AI-driven risk prediction models, from traditional imaging biomarkers to cutting-edge deep learning methodologies and multimodal approaches. Contributions by leading researchers are explored with critical appraisal of their methods and findings. Ethical, practical, and clinical challenges in implementing AI models are also discussed, with an emphasis on real-world applications. The review concludes by proposing future directions to optimize the adoption of AI tools in breast cancer screening and improve equity and outcomes for diverse populations.

Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data.

Ottesen JA, Tong E, Emblem KE, Latysheva A, Zaharchuk G, Bjørnerud A, Grøvik E

PubMed · Jun 1 2025
Deep learning-based segmentation of brain metastases relies on large amounts of data fully annotated by domain experts. Semi-supervised learning offers potentially efficient methods to improve model performance without an excessive annotation burden. This work tests the viability of semi-supervision for brain metastases segmentation. Retrospective. There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases. 1.5 T and 3 T; 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR). Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full and half-sized training sets. Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Performance was evaluated by the number of false-positive predictions, the number of true-positive predictions, the 95th-percentile Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired-samples t test for a single fold and across all folds within a given cohort. Semi-supervision outperformed the supervised baseline for all sites: the best-performing semi-supervised method achieved mean DSC improvements of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% over the supervised baseline on the four test cohorts when trained on half the dataset, and 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7% when trained on the full dataset. In addition, in three of four datasets, semi-supervised training produced results equal to or better than supervised models trained on twice the labeled data. Semi-supervised learning allows for improved segmentation performance over the supervised baseline, and the improvement was particularly notable on independent external test sets when training on small amounts of labeled data. Artificial intelligence requires extensive datasets with large amounts of data annotated by medical experts, which can be difficult to acquire due to the large workload. To compensate, large amounts of un-annotated clinical data can be used in addition to annotated data. However, this approach has not been widely tested for the most common intracranial brain tumor, brain metastases. This study shows that the approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners. Level of Evidence: 3. Technical Efficacy: Stage 2.
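
Of the three methods, mean teacher is the most compact to sketch: a teacher model tracks an exponential moving average (EMA) of the student's weights, and unlabeled scans contribute through a consistency penalty between the two models' predictions. The simplified PyTorch sketch below omits segmentation-specific details such as augmentation and loss ramp-up.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99):
    """Teacher weights follow an exponential moving average of the student's."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

def consistency_loss(student_logits, teacher_logits):
    """Penalize student-teacher disagreement on unlabeled scans."""
    return F.mse_loss(torch.sigmoid(student_logits),
                      torch.sigmoid(teacher_logits).detach())

# Training-step sketch, given a U-Net `student` and labeled/unlabeled batches:
#   teacher = copy.deepcopy(student)
#   loss = supervised_seg_loss(student(x_lab), y_lab) \
#        + lambda_u * consistency_loss(student(x_unlab), teacher(x_unlab))
#   loss.backward(); optimizer.step(); ema_update(teacher, student)
```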

Preliminary study on detection and diagnosis of focal liver lesions based on a deep learning model using multimodal PET/CT images.

Luo Y, Yang Q, Hu J, Qin X, Jiang S, Liu Y

PubMed · Jun 1 2025
To develop and validate a deep learning model using multimodal PET/CT imaging for detecting and classifying focal liver lesions (FLL). This study included 185 patients who underwent ¹⁸F-FDG PET/CT imaging at our institution from March 2022 to February 2023. We analyzed serological data and imaging. Liver lesions were segmented on PET and CT, serving as the reference standard. Deep learning models were trained using PET and CT images to generate predicted segmentations and classify lesion nature. Model performance was evaluated by comparing the predicted segmentations with the reference segmentations using metrics such as Dice, precision, recall, F1-score, ROC, and AUC, and was compared with physician diagnoses. The study finally included 150 patients: 46 with benign liver nodules, 51 with malignant liver nodules, and 53 with no FLLs. Significant differences were observed among the groups for age, AST, ALP, GGT, AFP, CA19-9, and CEA. On the validation set, the Dice coefficient of the model was 0.740. For the normal group, recall was 0.918, precision 0.904, F1-score 0.909, and AUC 0.976. For the benign group, recall was 0.869, precision 0.862, F1-score 0.863, and AUC 0.928. For the malignant group, recall was 0.858, precision 0.914, F1-score 0.883, and AUC 0.979. The model's overall diagnostic performance fell between that of junior and senior physicians. This deep learning model demonstrated high sensitivity in detecting FLLs and effectively differentiated between benign and malignant lesions.
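
The Dice coefficient used to compare predicted and reference segmentations reduces to a few lines for binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary lesion masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return (2.0 * inter + eps) / (pred.sum() + ref.sum() + eps)

# A Dice of 0.740, as reported on the validation set, means twice the
# predicted-reference overlap equals 74% of the two masks' summed volumes.
```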