Deep Learning-Enhanced Ultra-high-resolution CT Imaging for Superior Temporal Bone Visualization.

Brockstedt L, Grauhan NF, Kronfeld A, Mercado MAA, Döge J, Sanner A, Brockmann MA, Othman AE

PubMed · Jun 1, 2025
This study assesses the image quality of temporal bone ultra-high-resolution (UHR) computed tomography (CT) scans in adults and children using hybrid iterative reconstruction (HIR) and a novel, vendor-specific deep learning-based reconstruction (DLR) algorithm, AiCE Inner Ear. In a retrospective, single-center study (February 1-July 30, 2023), UHR-CT scans of 57 temporal bones of 35 patients (5 children, 23 male) with at least one anatomically unremarkable temporal bone were included. An adult protocol (computed tomography dose index volume [CTDIvol] 25.6 mGy) and a pediatric protocol (15.3 mGy) were used. Images were reconstructed using HIR at normal resolution (0.5-mm slice thickness, 512² matrix) and at UHR (0.25-mm slice thickness, 1024² and 2048² matrices), as well as with the vendor-specific DLR algorithm (Advanced intelligent Clear-IQ Engine [AiCE] Inner Ear) at UHR (0.25-mm, 1024² matrix). Three radiologists evaluated 18 anatomic structures using a 5-point Likert scale. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured automatically. In the adult protocol subgroup (n=30; median age: 51 [range 11-89]; 19 men) and the pediatric protocol subgroup (n=5; median age: 2 [range 1-3]; 4 men), UHR-CT with DLR significantly improved subjective image quality (p<0.024), reduced noise (p<0.001), and increased CNR and SNR (p<0.001). DLR also enhanced visualization of key structures, including the tendon of the stapedius muscle (p<0.001), the tympanic membrane (p<0.009), and the basal aspect of the osseous spiral lamina (p<0.018). Vendor-specific DLR-enhanced UHR-CT significantly improves temporal bone image quality and diagnostic performance.
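As a concrete illustration of the automated image-quality metrics reported above, here is a minimal sketch that computes SNR and CNR from two regions of interest in a CT slice. The ROI choices, the simulated HU values, and the pooled-noise CNR definition are assumptions for illustration; the study's exact measurement pipeline is not described in the abstract.

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    # Signal-to-noise ratio: mean ROI signal over its standard deviation.
    return roi.mean() / roi.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    # Contrast-to-noise ratio: absolute mean difference over pooled noise.
    noise = np.sqrt((roi_a.var() + roi_b.var()) / 2.0)
    return abs(roi_a.mean() - roi_b.mean()) / noise

# Simulated bone and soft-tissue ROIs (hypothetical HU values, not study data).
rng = np.random.default_rng(0)
bone = rng.normal(1200.0, 40.0, size=(20, 20))
tissue = rng.normal(60.0, 40.0, size=(20, 20))
print(f"SNR(bone) = {snr(bone):.1f}, CNR = {cnr(bone, tissue):.1f}")
```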

Classification of differentially activated groups of fibroblasts using morphodynamic and motile features.

Kang M, Min C, Devarasou S, Shin JH

PubMed · Jun 1, 2025
Fibroblasts play essential roles in cancer progression, exhibiting activation states that can either promote or inhibit tumor growth. Understanding these differential activation states is critical for targeting the tumor microenvironment (TME) in cancer therapy. However, traditional molecular markers used to identify cancer-associated fibroblasts are limited by their co-expression across multiple fibroblast subtypes, making it difficult to distinguish specific activation states. Morphological and motility characteristics of fibroblasts reflect their underlying gene expression patterns and activation states, making these features valuable descriptors of fibroblast behavior. This study proposes an artificial intelligence-based classification framework to identify and characterize differentially activated fibroblasts by analyzing their morphodynamic and motile features. We extract these features from label-free live-cell imaging data of fibroblasts co-cultured with breast cancer cell lines using deep learning and machine learning algorithms. Our findings show that morphodynamic and motile features offer robust insights into fibroblast activation states, complementing molecular markers and overcoming their limitations. This biophysical state-based cellular classification framework provides a novel, comprehensive approach for characterizing fibroblast activation, with significant potential for advancing our understanding of the TME and informing targeted cancer therapies.
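To make the idea of motile-feature classification concrete, here is a minimal sketch: simple track-level descriptors (mean step speed, speed variability, directionality ratio) feed a random forest. The specific features, the synthetic tracks, and the classifier choice are assumptions for illustration; the paper's deep-learning feature extraction is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motility_features(track: np.ndarray) -> np.ndarray:
    # track: (n_frames, 2) xy positions of one cell over time.
    steps = np.diff(track, axis=0)
    speeds = np.linalg.norm(steps, axis=1)
    net = np.linalg.norm(track[-1] - track[0])  # net displacement
    path = speeds.sum()                          # total path length
    directionality = net / path if path > 0 else 0.0
    return np.array([speeds.mean(), speeds.std(), directionality])

# Toy data: slow ("quiescent", label 0) vs fast ("activated", label 1) random walks.
rng = np.random.default_rng(1)
tracks = [np.cumsum(rng.normal(0, s, size=(50, 2)), axis=0) for s in [0.5] * 20 + [2.0] * 20]
X = np.stack([motility_features(t) for t in tracks])
y = np.array([0] * 20 + [1] * 20)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```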

A Large Language Model to Detect Negated Expressions in Radiology Reports.

Su Y, Babore YB, Kahn CE

PubMed · Jun 1, 2025
Natural language processing (NLP) is crucial to extract information accurately from unstructured text to provide insights for clinical decision-making, quality improvement, and medical research. This study compared the performance of a rule-based NLP system and a medical-domain transformer-based model in detecting negated concepts in radiology reports. Using a corpus of 984 de-identified radiology reports from a large U.S.-based academic health system (1000 consecutive reports, excluding 16 duplicates), the investigators compared the rule-based medspaCy system and the Clinical Assertion and Negation Classification Bidirectional Encoder Representations from Transformers (CAN-BERT) system in detecting negated expressions of terms from RadLex, the Unified Medical Language System Metathesaurus, and the Radiology Gamuts Ontology. A power analysis determined a sample size of 382 terms to achieve α = 0.05 and power (1 − β) = 0.8 for McNemar's test; based on an estimated 15% prevalence of negated terms, 2800 randomly selected terms were annotated manually as negated or not negated. Precision, recall, and F1 of the two models were compared using McNemar's test. Of the 2800 terms, 387 (13.8%) were negated. For negation detection, medspaCy attained a recall of 0.795, precision of 0.356, and F1 of 0.492. CAN-BERT achieved a recall of 0.785, precision of 0.768, and F1 of 0.777. Although recall did not differ significantly, CAN-BERT had significantly better precision (χ² = 304.64; p < 0.001). The transformer-based CAN-BERT model detected negated terms in radiology reports with high precision and recall; its precision significantly exceeded that of the rule-based medspaCy system. Use of this system will improve data extraction from textual reports to support information retrieval, AI model training, and discovery of causal relationships.
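For readers unfamiliar with the rule-based baseline, the sketch below shows how medspaCy's default pipeline, which includes a ConText-based negation component, flags negated entities. The target rules and example sentence are assumptions for illustration; the study's term lists came from RadLex, the UMLS Metathesaurus, and the Radiology Gamuts Ontology.

```python
import medspacy
from medspacy.target_matcher import TargetRule

# medspacy.load() builds a pipeline that includes the ConText negation component.
nlp = medspacy.load()
nlp.get_pipe("medspacy_target_matcher").add(
    [TargetRule("pneumothorax", "FINDING"), TargetRule("pleural effusion", "FINDING")]
)

doc = nlp("No evidence of pneumothorax. Small left pleural effusion is present.")
for ent in doc.ents:
    print(ent.text, "->", "negated" if ent._.is_negated else "affirmed")
# Expected: pneumothorax -> negated, pleural effusion -> affirmed
```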

MR Image Fusion-Based Parotid Gland Tumor Detection.

Sunnetci KM, Kaba E, Celiker FB, Alkan A

PubMed · Jun 1, 2025
The differentiation of benign and malignant parotid gland tumors is of major significance, as it directly affects the treatment process. It is also a vital task for early and accurate diagnosis of parotid gland tumors and for determining treatment planning accordingly. As in other diseases, differentiating tumor types involves several challenging, time-consuming, and laborious processes. In this study, Magnetic Resonance (MR) images of 114 patients with parotid gland tumors are combined by Image Fusion (IF) and used for training and testing. After the Apparent Diffusion Coefficient (ADC), contrast-enhanced T1-w (T1C-w), and T2-w sequences are cropped, IF (ADC, T1C-w), IF (ADC, T2-w), IF (T1C-w, T2-w), and IF (ADC, T1C-w, T2-w) datasets are obtained for different combinations of these sequences using a two-dimensional Discrete Wavelet Transform (DWT)-based fusion technique. For each of these four datasets, ResNet18, GoogLeNet, and DenseNet-201 architectures are trained separately, yielding 12 models. A Graphical User Interface (GUI) application containing the most successful trained architecture for each dataset is also designed to support users. The GUI application not only fuses different sequence images but also predicts whether the fused image's label is benign or malignant. The results show that the DenseNet-201 models for IF (ADC, T1C-w), IF (ADC, T2-w), and IF (ADC, T1C-w, T2-w) outperform the others, with accuracies of 95.45%, 95.96%, and 92.93%, respectively. The most successful model for IF (T1C-w, T2-w) is ResNet18, with an accuracy of 94.95%.
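The following sketch shows one common way to do single-level 2D DWT fusion with PyWavelets: average the approximation bands and keep the stronger (max-absolute) detail coefficients. The wavelet choice and fusion rules are assumptions for illustration; the paper does not specify its exact coefficient-combination scheme.

```python
import numpy as np
import pywt

def dwt2_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    # Fuse two registered, same-size sequences via a single-level 2D DWT.
    ca_a, details_a = pywt.dwt2(img_a, wavelet)
    ca_b, details_b = pywt.dwt2(img_b, wavelet)
    ca = (ca_a + ca_b) / 2.0  # average the low-frequency approximation bands
    details = tuple(          # keep the max-absolute detail coefficient per band
        np.where(np.abs(da) >= np.abs(db), da, db)
        for da, db in zip(details_a, details_b)
    )
    return pywt.idwt2((ca, details), wavelet)

# Toy usage with random stand-ins for cropped ADC and T2-w images.
adc, t2 = np.random.rand(128, 128), np.random.rand(128, 128)
print(dwt2_fuse(adc, t2).shape)  # (128, 128)
```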

Ocular Imaging Challenges, Current State, and a Path to Interoperability: A HIMSS-SIIM Enterprise Imaging Community Whitepaper.

Goetz KE, Boland MV, Chu Z, Reed AA, Clark SD, Towbin AJ, Purt B, O'Donnell K, Bui MM, Eid M, Roth CJ, Luviano DM, Folio LR

PubMed · Jun 1, 2025
Office-based testing, enhanced by advances in imaging technology, is routinely used in eye care to non-invasively assess ocular structure and function. This type of imaging, coupled with autonomous artificial intelligence, holds immense opportunity to diagnose eye diseases quickly. Despite the wide availability and use of ocular imaging, several factors hinder optimization of clinical practice and patient care. While some large institutions have developed end-to-end digital workflows that utilize electronic health records, enterprise imaging archives, and dedicated diagnostic viewers, this experience has not yet made its way to smaller and independent eye clinics. Fragmented interoperability practices impact patient care in all healthcare domains, including eye care, where a scarcity of care centers makes collaboration essential among providers, specialists, and primary care clinicians who may be treating systemic conditions with profound impact on vision. The purpose of this white paper is to describe the current state of ocular imaging, focusing on the challenges related to interoperability, reporting, and clinical workflow.

Deep Conformal Supervision: Leveraging Intermediate Features for Robust Uncertainty Quantification.

Vahdani AM, Faghani S

PubMed · Jun 1, 2025
Trustworthiness is crucial for artificial intelligence (AI) models in clinical settings, and a fundamental aspect of trustworthy AI is uncertainty quantification (UQ). Conformal prediction, a robust UQ framework, has been receiving increasing attention as a valuable tool for improving model trustworthiness. An area of active research is the method of non-conformity score calculation for conformal prediction. We propose deep conformal supervision (DCS), which leverages the intermediate outputs of deep supervision for non-conformity score calculation, via weighted averaging based on the inverse of the mean calibration error of each stage. We benchmarked our method on two publicly available medical image classification datasets: a pneumonia chest radiography dataset and a preprocessed version of the 2019 RSNA Intracranial Hemorrhage dataset. Our method achieved mean coverage errors of 16e-4 (CI: 1e-4, 41e-4) and 5e-4 (CI: 1e-4, 10e-4), compared with baseline mean coverage errors of 28e-4 (CI: 2e-4, 64e-4) and 21e-4 (CI: 8e-4, 3e-4) on the two datasets, respectively (p < 0.001 on both datasets). Based on our findings, the baseline results of conformal prediction already exhibit small coverage errors. However, our method shows a significant improvement in coverage error, particularly noticeable in scenarios involving smaller datasets or smaller acceptable error levels, which are crucial in developing UQ frameworks for healthcare AI applications.
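To ground the terminology, the sketch below implements plain split conformal prediction for classification and then shows, schematically, the DCS-style weighting of per-stage probabilities by inverse mean calibration error. The score definition (1 − softmax probability of the true class) and the toy numbers are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Non-conformity score: 1 - predicted probability of the true class.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q)

def prediction_set(probs, threshold):
    # All classes whose non-conformity score is within the threshold.
    return np.where(1.0 - probs <= threshold)[0]

# DCS-style combination (schematic): weight each deep-supervision stage's
# probabilities by the inverse of its mean calibration error, then normalize.
stage_probs = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]  # two stages
cal_errors = np.array([0.04, 0.08])  # assumed per-stage mean calibration errors
w = (1.0 / cal_errors) / (1.0 / cal_errors).sum()
combined = sum(wi * p for wi, p in zip(w, stage_probs))
print(combined)  # weighted per-class probabilities used for scoring
```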

Using Machine Learning on MRI Radiomics to Diagnose Parotid Tumours Before Comparing Performance with Radiologists: A Pilot Study.

Ammari S, Quillent A, Elvira V, Bidault F, Garcia GCTE, Hartl DM, Balleyguier C, Lassau N, Chouzenoux É

PubMed · Jun 1, 2025
The parotid glands are the largest of the major salivary glands. They can harbour both benign and malignant tumours. Preoperative work-up relies on MR images and fine-needle aspiration biopsy, but these diagnostic tools have low sensitivity and specificity, often leading to surgery for diagnostic purposes. The aims of this paper are (1) to develop a machine learning algorithm based on MR image characteristics to automatically classify parotid gland tumours and (2) to compare its results with the diagnoses of junior and senior radiologists in order to evaluate its utility in routine practice. While automatic algorithms for parotid tumour classification have been developed in the past, we believe that our study is one of the first to leverage four different MRI sequences and propose a comparison with clinicians. We leverage data from a cohort of 134 patients treated for benign or malignant parotid tumours. Using radiomics extracted from MR images of the gland, we train a random forest and a logistic regression to predict the corresponding histopathological subtypes. On the test set, the best results are given by the random forest: we obtain 0.720 accuracy, 0.860 specificity, and 0.720 sensitivity over all histopathological subtypes, with an average AUC of 0.838. When considering only the discrimination between benign and malignant tumours, the algorithm achieves 0.760 accuracy and a 0.769 AUC, both on the test set. Moreover, the clinical experiment shows that our model helps improve the diagnostic abilities of junior radiologists, whose sensitivity and accuracy rose by 6% when using our proposed method. This algorithm may be useful for physician training. Radiomics with a machine learning algorithm may help improve discrimination between benign and malignant parotid tumours, decreasing the need for diagnostic surgery. Further studies are warranted to validate our algorithm for routine use.
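A minimal sketch of the modeling step, assuming a precomputed radiomics feature table (the abstract does not name an extraction tool): random data stand in for the 134-patient cohort, and a random forest is evaluated by AUC on a held-out split.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder radiomics table: 134 patients x 50 features (not the study's data).
rng = np.random.default_rng(42)
X = rng.normal(size=(134, 50))
y = rng.integers(0, 2, size=134)  # 0 = benign, 1 = malignant (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```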

Deep Learning-Based Estimation of Radiographic Position to Automatically Set Up the X-Ray Prime Factors.

Del Cerro CF, Giménez RC, García-Blas J, Sosenko K, Ortega JM, Desco M, Abella M

PubMed · Jun 1, 2025
Radiation dose and image quality in radiology are influenced by the X-ray prime factors: kVp, mAs, and source-detector distance. These parameters are set by the X-ray technician prior to acquisition, taking the radiographic position into account. A wrong setting of these parameters may result in exposure errors, forcing the test to be repeated and increasing the radiation dose delivered to the patient. This work presents a novel deep learning-based approach that automatically estimates the radiographic position from a photograph captured prior to X-ray exposure, which can then be used to select the optimal prime factors. We created a database of 66 radiographic positions commonly used in clinical settings, prospectively acquired during 2022 from 75 volunteers in two different X-ray facilities. The architecture for radiographic position classification was a lightweight version of ConvNeXt trained with fine-tuning, discriminative learning rates, and a one-cycle policy scheduler. Our resulting model achieved an accuracy of 93.17% for radiographic position classification, rising to 95.58% when considering only the correct selection of prime factors, since half of the errors involved positions with the same kVp and mAs values. Most errors occurred for radiographic positions with similar patient poses in the photograph. Results suggest the method is feasible for streamlining the acquisition workflow, reducing the occurrence of exposure errors while preventing unnecessary radiation dose to patients.
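A minimal fine-tuning sketch consistent with the setup described above, using torchvision's ConvNeXt-Tiny as a stand-in for the paper's lightweight ConvNeXt: a new 66-class head, discriminative learning rates via optimizer parameter groups, and a one-cycle schedule. The hyperparameters and training-loop details are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

NUM_POSITIONS = 66  # radiographic positions in the study's database

model = convnext_tiny(weights=ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
# Replace the classification head for the 66 radiographic positions.
model.classifier[2] = nn.Linear(model.classifier[2].in_features, NUM_POSITIONS)

# Discriminative learning rates: small for the pretrained backbone, larger for the new head.
optimizer = torch.optim.AdamW([
    {"params": model.features.parameters(), "lr": 1e-5},
    {"params": model.classifier.parameters(), "lr": 1e-3},
])

steps_per_epoch, epochs = 100, 20  # placeholders; depend on dataset size
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=[1e-4, 1e-2], total_steps=steps_per_epoch * epochs
)
criterion = nn.CrossEntropyLoss()
# Training loop (per batch): loss.backward(); optimizer.step(); scheduler.step()
```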

Multi-modal large language models in radiology: principles, applications, and potential.

Shen Y, Xu Y, Ma J, Rui W, Zhao C, Heacock L, Huang C

PubMed · Jun 1, 2025
Large language models (LLMs) and multi-modal large language models (MLLMs) represent the cutting edge of artificial intelligence. This review provides a comprehensive overview of their capabilities and potential impact on radiology. Unlike most existing literature reviews, which focus solely on LLMs, this work examines both LLMs and MLLMs, highlighting their potential to support radiology workflows such as report generation, image interpretation, electronic health record (EHR) summarization, differential diagnosis generation, and patient education. By streamlining these tasks, LLMs and MLLMs could reduce radiologist workload, improve diagnostic accuracy, support interdisciplinary collaboration, and ultimately enhance patient care. We also discuss key limitations, such as the limited capacity of current MLLMs to interpret 3D medical images and to integrate information from both image and text data, as well as the lack of effective evaluation methods. Ongoing efforts to address these challenges are introduced.

Parapharyngeal Space: Diagnostic Imaging and Intervention.

Vogl TJ, Burck I, Stöver T, Helal R

PubMed · Jun 1, 2025
Diagnosis of lesions of the parapharyngeal space (PPS) often poses a diagnostic and therapeutic challenge due to its deep location. Because of the topographical relationship to nearby neck spaces, a very precise differential diagnosis is possible based on imaging criteria. When in doubt, imaging-guided - usually CT-guided - biopsy and even drainage remain options. Through an analysis of the literature, including the most recent publications, this review describes the basic and most recent imaging applications for various PPS pathologies and the differential diagnostic scheme for assigning the respective lesions, in addition to the possibilities of interventional radiology. The different pathologies of the PPS, from congenital malformations and inflammation to tumors, are discussed in order of frequency. Characteristic criteria and, more recently, advanced imaging procedures and the introduction of artificial intelligence (AI) allow a very precise differential diagnosis and support further diagnosis and therapy. After precise access planning, almost all pathologies of the PPS can be biopsied or, if necessary, drained using CT-assisted procedures. Radiological procedures play an important role in the diagnosis and treatment planning of PPS pathologies.

· Lesions of the PPS account for about 1-2% of all pathologies of the head and neck region. The majority are benign lesions and inflammatory processes.
· If differential diagnostic questions remain unanswered, material can be obtained via CT-guided biopsy. Exclusion criteria are hypervascularized processes, especially paragangliomas and angiomas.
· The use of artificial intelligence (AI) in head and neck imaging for tasks such as tumor segmentation, pathological TNM classification, detection of lymph node metastases, and extranodal extension has increased significantly in recent years.
· Vogl TJ, Burck I, Stöver T et al. Parapharyngeal Space: Diagnostic Imaging and Intervention. Rofo 2025; 197: 638-646.