An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Jiang C, Jiang Z, Zhang Z, Huang H, Zhou H, Jiang Q, Teng Y, Li H, Xu B, Li X, Xu J, Ding C, Li K, Tian R

Jun 1 2025
Pathological grade is a critical determinant of clinical outcomes and treatment decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade. The study retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts. A multimodal fusion Transformer model integrating 3D PET tumor images with tabular data was developed to predict FL grade. The model was also equipped with explainability modules, including Gradient-weighted Class Activation Mapping (Grad-CAM) for PET images, SHapley Additive exPlanations (SHAP) analysis for tabular data, and the calculation of predictive contribution ratios for both modalities, to enhance clinical interpretability and reliability. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and accuracy, and prognostic value was also assessed. The Transformer model demonstrated high accuracy in grading FL, with AUCs of 0.964-0.985 and accuracies of 90.2-96.7% in the training cohort, and similar performance in the validation cohorts (AUCs: 0.936-0.971; accuracies: 86.4-97.0%). Ablation studies confirmed that the fusion model outperformed single-modality models (AUCs: 0.974 vs. 0.956; accuracies: 89.8% vs. 85.8%). Interpretability analysis revealed that PET images contributed 81-89% of the predictive value, and Grad-CAM highlighted the tumor and peri-tumor regions. The model also effectively stratified patients by survival risk (P < 0.05), underscoring its prognostic value. Our study developed an explainable multimodal fusion Transformer model for accurate grading and prognosis of FL, with the potential to aid clinical decision-making.
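The paper's code is not public, but the described architecture — a Transformer that fuses tokens from a 3D PET volume with an embedded tabular-data token — can be sketched roughly as below. All module names, dimensions, token construction, and the classify-from-tabular-token choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a PET + tabular fusion Transformer (illustrative only;
# layer sizes and token construction are assumptions, not the paper's design).
import torch
import torch.nn as nn

class PETTabularFusionTransformer(nn.Module):
    def __init__(self, n_tabular=16, d_model=128, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # A small 3D CNN stem turns a PET volume into a grid of spatial tokens.
        self.stem = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, d_model, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Tabular (clinical) variables are embedded as a single extra token.
        self.tab_embed = nn.Linear(n_tabular, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, n_layers)
        self.cls = nn.Linear(d_model, n_classes)

    def forward(self, pet, tabular):
        # pet: (B, 1, D, H, W); tabular: (B, n_tabular)
        feat = self.stem(pet)                             # (B, d_model, d, h, w)
        tokens = feat.flatten(2).transpose(1, 2)          # (B, d*h*w, d_model)
        tab_token = self.tab_embed(tabular).unsqueeze(1)  # (B, 1, d_model)
        fused = self.encoder(torch.cat([tab_token, tokens], dim=1))
        return self.cls(fused[:, 0])  # classify from the fused tabular token

model = PETTabularFusionTransformer()
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 2])
```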

Influence of prior probability information on large language model performance in radiological diagnosis.

Fukushima T, Kurokawa R, Hagiwara A, Sonoda Y, Asari Y, Kurokawa M, Kanzawa J, Gonoi W, Abe O

Jun 1 2025
Large language models (LLMs) show promise in radiological diagnosis, but their performance may be affected by the context in which cases are presented. Our purpose was to investigate how providing information about prior probabilities influences the diagnostic performance of an LLM on radiological quiz cases. We analyzed 322 consecutive cases from Radiology's "Diagnosis Please" quiz using Claude 3.5 Sonnet under three conditions: without context (Condition 1), informed that they were quiz cases (Condition 2), and presented as primary care cases (Condition 3). Diagnostic accuracy was compared using McNemar's test. The overall accuracy rate improved significantly in Condition 2 compared with Condition 1 (70.2% vs. 64.9%, p = 0.029). Conversely, the accuracy rate decreased significantly in Condition 3 compared with Condition 1 (59.9% vs. 64.9%, p = 0.027). Providing information that may influence prior probabilities therefore significantly affects the diagnostic performance of the LLM on radiological cases. This suggests that LLMs may incorporate Bayesian-like principles and adjust the weighting of their diagnostic responses based on prior information, highlighting the potential for optimizing an LLM's performance in clinical settings by providing relevant contextual information.
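McNemar's test, used here to compare paired per-case correctness between two conditions, is straightforward to reproduce with standard tooling. In the toy table below, the marginal totals are chosen to match the reported accuracies (209/322 = 64.9% and 226/322 = 70.2%), but the discordant-pair split is a made-up assumption, not the study's data.

```python
# Illustrative McNemar's test on paired per-case correctness (toy counts).
# The test statistic depends only on the discordant pairs (b and c).
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of case counts:
#                  Condition 2 correct | Condition 2 wrong
# Cond 1 correct          a=180                b=29
# Cond 1 wrong            c=46                 d=67
table = [[180, 29],
         [46, 67]]

result = mcnemar(table, exact=False, correction=True)
print(f"statistic={result.statistic:.3f}, p={result.pvalue:.4f}")
```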

A Dual-Energy Computed Tomography Guided Intelligent Radiation Therapy Platform.

Wen N, Zhang Y, Zhang H, Zhang M, Zhou J, Liu Y, Liao C, Jia L, Zhang K, Chen J

Jun 1 2025
The integration of advanced imaging and artificial intelligence technologies in radiation therapy has revolutionized cancer treatment by enhancing precision and adaptability. This study introduces a novel dual-energy computed tomography (DECT) guided intelligent radiation therapy (DEIT) platform designed to streamline and optimize the radiation therapy process. The DEIT system combines DECT, a newly designed dual-layer multileaf collimator, deep learning algorithms for auto-segmentation, and automated planning and quality assurance capabilities. It integrates an 80-slice computed tomography (CT) scanner with an 87 cm bore size, a linear accelerator delivering 4 photon and 5 electron energies, and a flat panel imager optimized for megavoltage (MV) cone beam CT acquisition. A comprehensive evaluation of the system's accuracy was conducted using end-to-end tests. Virtual monoenergetic CT images and electron density images from the DECT were generated and compared on both phantoms and patients. The system's auto-segmentation algorithms were tested on 5 cases for each of the 99 organs at risk, and the automated optimization and planning capabilities were evaluated on clinical cases. The DEIT system demonstrated systematic errors of less than 1 mm for target localization. DECT reconstruction showed electron density mapping deviations ranging from -0.052 to 0.001, with stable Hounsfield unit consistency across monoenergetic levels above 60 keV, except for high-Z materials at lower energies. Auto-segmentation achieved Dice similarity coefficients above 0.9 for most organs with an inference time of less than 2 seconds. Dose-volume histogram comparisons showed improved dose conformity indices and reduced doses to critical structures in auto-plans compared with manual plans across various clinical cases. In addition, high gamma passing rates at 2%/2 mm in both 2-dimensional (above 97%) and 3-dimensional (above 99%) in vivo analyses further validated the accuracy and reliability of the treatment plans. The DEIT platform represents a viable solution for radiation treatment, using artificial intelligence-driven automation, real-time adjustments, and CT imaging to enhance the radiation therapy process and improve efficiency and flexibility.
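The Dice similarity coefficient used to benchmark the auto-segmentation is a standard overlap metric, 2|A∩B|/(|A|+|B|); a minimal NumPy version (generic formula, not vendor code) is sketched below.

```python
# Dice similarity coefficient between a predicted and a reference 3D mask,
# as used to score auto-segmentation quality (generic formula).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
ref  = np.zeros((64, 64, 64), bool); ref[22:42, 20:40, 20:40] = True
print(f"DSC = {dice(pred, ref):.3f}")  # 18/20 overlap along one axis -> 0.900
```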

Advancing Acoustic Droplet Vaporization for Tissue Characterization Using Quantitative Ultrasound and Transfer Learning.

Kaushik A, Fabiilli ML, Myers DD, Fowlkes JB, Aliabouzar M

Jun 1 2025
Acoustic droplet vaporization (ADV) is an emerging technique with expanding applications in biomedical ultrasound. ADV-generated bubbles can function as microscale probes that provide insights into the mechanical properties of their surrounding microenvironment. This study investigated the acoustic and imaging characteristics of phase-shift nanodroplets in fibrin-based, tissue-mimicking hydrogels using passive cavitation detection and active imaging techniques, including B-mode and contrast-enhanced ultrasound. The findings demonstrated that the backscattered signal intensities of ADV-generated bubbles, along with their pronounced nonlinear acoustic responses at subharmonic and higher harmonic frequencies, correlated inversely with fibrin density. Additionally, we quantified the mean echo intensity, bubble cloud area, and second-order texture features of the generated ADV bubbles across varying fibrin densities. ADV bubbles in softer hydrogels displayed significantly higher mean echo intensities, larger bubble cloud areas, and more heterogeneous textures. In contrast, texture uniformity, characterized by variance, homogeneity, and energy, correlated directly with fibrin density. Furthermore, we incorporated transfer learning with convolutional neural networks, adapting AlexNet into two specialized models for differentiating fibrin hydrogels. The integration of deep learning techniques with ADV offers great potential, paving the way for future advancements in biomedical diagnostics.
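The transfer-learning step — adapting a pretrained AlexNet to classify fibrin density from ultrasound frames — follows the usual torchvision pattern. The two-class head, the backbone freezing, and the learning rate below are assumptions for illustration, not the paper's exact recipe.

```python
# Sketch of AlexNet transfer learning for fibrin-density classification
# (class count, freezing, and hyperparameters are illustrative assumptions).
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in model.features.parameters():      # freeze the convolutional backbone
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-class head with 2 classes

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# Grayscale B-mode frames would be replicated to 3 channels to match AlexNet input.
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```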

Evaluating artificial intelligence chatbots for patient education in oral and maxillofacial radiology.

Helvacioglu-Yigit D, Demirturk H, Ali K, Tamimi D, Koenig L, Almashraqi A

Jun 1 2025
This study aimed to compare the quality and readability of the responses generated by 3 publicly available artificial intelligence (AI) chatbots in answering frequently asked questions (FAQs) related to Oral and Maxillofacial Radiology (OMR), to assess their suitability for patient education. Fifteen OMR-related questions were selected from professional patient information websites and posed to ChatGPT-3.5 (OpenAI), Gemini 1.5 Pro (Google), and Copilot (Microsoft) to generate responses. Three board-certified OMR specialists evaluated the responses for scientific adequacy, ease of understanding, and overall reader satisfaction. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) scores. The Wilcoxon signed-rank test was conducted to compare the scores assigned by the evaluators to the responses from the chatbots and the professional websites, and interevaluator agreement was examined by calculating the Fleiss kappa coefficient. There were no significant differences between groups in terms of scientific adequacy. In terms of readability, the chatbots had overall mean FKGL and FRE scores of 12.97 and 34.11, respectively. Interevaluator agreement was generally high. Although chatbots are relatively good at responding to FAQs, validating AI-generated information with input from healthcare professionals can enhance patient care and safety. Moreover, the text content of both the chatbots and the websites demands a high reading level.
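Both readability indices are simple functions of sentence, word, and syllable counts (FRE = 206.835 − 1.015·words/sentences − 84.6·syllables/words; FKGL = 0.39·words/sentences + 11.8·syllables/words − 15.59). A minimal implementation follows; the vowel-group syllable counter is a crude stand-in for a proper one.

```python
# Flesch Reading Ease and Flesch-Kincaid Grade Level from raw text counts.
# The syllable counter is a naive vowel-group heuristic, for illustration only.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    return fre, fkgl

fre, fkgl = readability(
    "Cone beam computed tomography produces three-dimensional "
    "images of your teeth and jaw.")
print(f"FRE={fre:.1f}, FKGL={fkgl:.1f}")
```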

American College of Veterinary Radiology and European College of Veterinary Diagnostic Imaging position statement on artificial intelligence.

Appleby RB, Difazio M, Cassel N, Hennessey R, Basran PS

Jun 1 2025
The American College of Veterinary Radiology (ACVR) and the European College of Veterinary Diagnostic Imaging (ECVDI) recognize the transformative potential of AI in veterinary diagnostic imaging and radiation oncology. This position statement outlines the guiding principles for the ethical development and integration of AI technologies to ensure patient safety and clinical effectiveness. Artificial intelligence systems must adhere to good machine learning practices, emphasizing transparency, error reporting, and the involvement of clinical experts throughout development. These tools should also include robust mechanisms for secure patient data handling and postimplementation monitoring. The position highlights the critical importance of maintaining a veterinarian in the loop, preferably a board-certified radiologist or radiation oncologist, to interpret AI outputs and safeguard diagnostic quality. Currently, no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety. The ACVR and ECVDI advocate for rigorous peer-reviewed research, unbiased third-party evaluations, and interdisciplinary collaboration to establish evidence-based benchmarks for AI applications. Additionally, the statement calls for enhanced education on AI for veterinary professionals, from foundational training in curricula to continuing education for practitioners. Veterinarians are encouraged to disclose AI usage to pet owners and provide alternative diagnostic options as needed. Regulatory bodies should establish guidelines to prevent misuse and protect the profession and patients. The ACVR and ECVDI stress the need for a cautious, informed approach to AI adoption, ensuring these technologies augment, rather than compromise, veterinary care.

Estimating patient-specific organ doses from head and abdominal CT scans via machine learning with optimized regulation strength and feature quantity.

Shao W, Qu L, Lin X, Yun W, Huang Y, Zhuo W, Liu H

Jun 1 2025
This study investigated the estimation of patient-specific organ doses from CT scans via radiomics feature-based support vector regression (SVR) models with optimized training parameters, seeking to maximize the models' predictive accuracy and robustness by fine-tuning the regularization parameter and the input feature quantity. CT images from head and abdominal scans were processed using DeepViewer®, an auto-segmentation tool, to define regions of interest (ROIs) for the organs. Radiomics features were extracted from the CT data and ROIs, and benchmark organ doses were calculated through Monte Carlo (MC) simulations. SVR models, trained on the extracted radiomics features, were then used to predict patient-specific organ doses from CT scans. The trained SVR models were optimized by adjusting the input radiomics feature quantity and the regularization parameter C, yielding configurations suited to accurate patient-specific organ dose prediction. C values of 5 and 10 brought the SVR models to a saturation state for the head and abdominal organs. The models' mean absolute percentage error (MAPE) and R² depended strongly on organ type. For the head, the appropriate parameters were C = 5 or 10 coupled with input feature quantities of 50 for the brain and 200 for the left eye, right eye, left lens, and right lens; for the abdomen, they were C = 5 or 10 with input feature quantities of 80 for the bowel, 50 for the left and right kidneys, and 100 for the liver. Selecting appropriate combinations of input feature quantity and regularization parameter can thus maximize the predictive accuracy and robustness of radiomics feature-based SVR models for patient-specific organ dose prediction from CT scans.
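The described tuning — sweeping the SVR regularization parameter C jointly with the number of selected radiomics features — maps naturally onto a scikit-learn pipeline. The sketch below uses synthetic data; the grid values echo the paper's reported settings, but the feature selector and scoring choice are assumptions.

```python
# Sketch of joint tuning of SVR regularization (C) and radiomics feature count
# (synthetic data; selector and scoring are illustrative assumptions).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))  # 300 radiomics features per patient (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=120)  # synthetic organ dose

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_regression)),
    ("svr", SVR(kernel="rbf")),
])
grid = GridSearchCV(
    pipe,
    {"select__k": [50, 80, 100, 200], "svr__C": [1, 5, 10]},
    cv=5, scoring="neg_mean_absolute_percentage_error",
)
grid.fit(X, y)
print(grid.best_params_)
```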

PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer.

Ma B, Guo J, Dijk LVV, Langendijk JA, Ooijen PMAV, Both S, Sijtsema NM

Jun 1 2025
In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA, i.e., best-performing) deep learning models were introduced for predicting the recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with an optimized number of layers and image-fusion strategy, can achieve performance comparable to the SOTA models. The HECKTOR 2022 dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers and was randomly divided into a training set (n = 369) and an independent test set (n = 120). An additional dataset of 400 OPC patients, who underwent chemo(radiotherapy) at our center, was employed for external testing. Each patient's data included pre-treatment CT and PET scans, manually generated gross tumour volume (GTV) contours for primary tumors and lymph nodes, and RFP information. The present study compared the performance of DenseNet against three SOTA models developed on the HECKTOR 2022 dataset. When CT, PET, and GTV were input using the early fusion approach (treating them as different channels of the input), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, comparable with the SOTA models. Notably, removing GTV from the input yielded the same internal test C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Furthermore, compared with PET-only models, late fusion (concatenation of extracted features) of CT and PET gave DenseNet81 superior C-index values of 0.68 and 0.66 in the internal and external test sets, respectively, whereas early fusion was better only in the internal test set. The basic DenseNet architecture with 81 layers thus matched the predictive performance of SOTA models with more intricate architectures in the internal test set and performed better in the external test, and late fusion of CT and PET imaging data yielded superior performance in the external test.
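The two fusion strategies compared here differ only in where the modalities meet: before a shared encoder (early) or after per-modality encoders (late). A schematic PyTorch contrast, with toy 3D encoders standing in for the paper's DenseNet backbones and all shapes illustrative:

```python
# Early vs. late fusion of CT and PET, schematically (toy encoders, not DenseNet81).
import torch
import torch.nn as nn

def encoder(in_ch):  # toy 3D CNN encoder returning a 16-dim feature vector
    return nn.Sequential(
        nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten())

ct  = torch.randn(2, 1, 32, 64, 64)
pet = torch.randn(2, 1, 32, 64, 64)

# Early fusion: modalities stacked as channels, then one shared encoder.
early = nn.Sequential(encoder(in_ch=2), nn.Linear(16, 1))
risk_early = early(torch.cat([ct, pet], dim=1))

# Late fusion: separate encoders per modality; extracted features concatenated.
enc_ct, enc_pet = encoder(1), encoder(1)
head = nn.Linear(32, 1)
risk_late = head(torch.cat([enc_ct(ct), enc_pet(pet)], dim=1))

print(risk_early.shape, risk_late.shape)  # torch.Size([2, 1]) twice
```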

Patent Analysis of Dental CBCT Machines.

Yeung AWK, Nalley A, Hung KF, Oenning AC, Tanaka R

Jun 1 2025
Cone Beam Computed Tomography (CBCT) has become a crucial imaging tool in modern dentistry, yet no existing review provides a comprehensive understanding of the technological advancements and the entities driving CBCT innovation. This study aimed to analyse patent records associated with CBCT technology, to gain insights into trends and breakthroughs, and to identify manufacturers' key areas of focus. The online patent database The Lens was accessed on 3 January 2025 to identify relevant records; a total of 706 patent records were identified and analysed. The majority of these patents were contributed by CBCT manufacturers. The United States was the jurisdiction with the most patent records, followed by Europe and China. Some manufacturers hold patents for common features of CBCT systems, such as motion artifact correction, metal artifact reduction, reconstruction of panoramic images from 3D data, and incorporation of artificial intelligence. Patent analysis offers valuable insights into the development and advancement of CBCT technology and can foster collaboration between manufacturers, researchers, and clinicians. The advancements in CBCT technology reflected by patent trends enhance diagnostic accuracy and treatment planning, and understanding these innovations can aid clinicians in selecting the most effective imaging tools for patient care.

A radiomics model combining machine learning and neural networks for high-accuracy prediction of cervical lymph node metastasis on ultrasound of head and neck squamous cell carcinoma.

Fukuda M, Eida S, Katayama I, Takagi Y, Sasaki M, Sumi M, Ariji Y

Jun 1 2025
This study aimed to develop an ultrasound image-based radiomics model for diagnosing cervical lymph node (LN) metastasis in patients with head and neck squamous cell carcinoma (HNSCC) with higher accuracy than previous models. A total of 537 LNs (260 metastatic and 277 nonmetastatic) from 126 patients (78 men, 48 women; average age 63 years) were enrolled. The multivariate analysis software Prediction One (Sony Network Communications Corporation) was used to create the diagnostic models, and three machine learning methods were adopted as comparison approaches. Based on combinations of texture analysis results, clinical information, and ultrasound findings interpreted by specialists, a total of 12 models were created, three for each machine learning method, and their diagnostic performance was compared. The three best models had areas under the curve of 0.98. Parameters related to ultrasound findings, such as the presence of a hilum, echogenicity, and granular parenchymal echoes, showed particularly high contributions. Other significant contributors came from texture analysis and reflected the minimum pixel value, the number of contiguous pixels with the same echogenicity, and the uniformity of gray levels. The radiomics model developed was able to accurately diagnose cervical LN metastasis in HNSCC.
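Texture parameters of the kind ranked highly here (uniformity/energy, pixel-contiguity measures) are typically derived from a gray-level co-occurrence matrix (GLCM). A minimal extraction with scikit-image is sketched below; the ROI, gray-level count, distances, and angles are arbitrary illustrative choices, not the study's settings.

```python
# GLCM texture features (energy/uniformity, homogeneity, contrast) from an ROI.
# All parameters below are illustrative, not the study's configuration.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 64, size=(48, 48), dtype=np.uint8)  # stand-in LN ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for prop in ("energy", "homogeneity", "contrast"):
    print(prop, graycoprops(glcm, prop).mean())
```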