Page 102 of 2402393 results

Contrast-Enhanced CT-Based Deep Learning and Habitat Radiomics for Analysing the Predictive Capability for Oral Squamous Cell Carcinoma.

Liu Q, Liang Z, Qi X, Yang S, Fu B, Dong H

pubmed logopapersJul 24 2025
This study aims to explore a novel approach for predicting cervical lymph node metastasis (CLNM) and pathological subtypes in oral squamous cell carcinoma (OSCC) by comparing deep learning (DL) and habitat analysis models based on contrast-enhanced CT (CECT). A retrospective analysis was conducted using CECT images from patients diagnosed with OSCC via paraffin pathology at the Second Affiliated Hospital of Dalian Medical University. All 132 included patients underwent primary tumor resection and cervical lymph node dissection. A DL model was developed by analysing regions of interest (ROIs) in the CECT images with a convolutional neural network (CNN). For habitat analysis, the ROI images were segmented into three regions using K-means clustering, and features were selected through a fully connected neural network (FCNN) to build the model. A separate clinical model was constructed from nine clinical features, including age, gender, and tumor location. Using CLNM and pathological subtype as endpoints, the predictive performance of the clinical model, DL model, habitat analysis model, and a combined clinical + habitat model was evaluated using confusion matrices and receiver operating characteristic (ROC) curves. For CLNM prediction, the combined clinical + habitat model achieved an area under the ROC curve (AUC) of 0.97; for pathological subtype prediction, the AUC was 0.96. The DL model yielded an AUC of 0.83 for CLNM prediction and 0.91 for pathological subtype classification, while the clinical model alone achieved an AUC of 0.94 for CLNM. The integrated habitat-clinical model demonstrates improved predictive performance, and combining habitat analysis with clinical features offers a promising approach for the prediction of oral cancer. The habitat-clinical integrated model may assist clinicians in performing accurate preoperative prognostic assessments in patients with oral cancer.
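The habitat step described above, partitioning each tumor ROI into three sub-regions with K-means, can be sketched in plain Python. This is a minimal one-dimensional illustration only: the toy intensity populations, the seeded initialization, and the habitat labels in the comments are assumptions, not the authors' pipeline.

```python
import random

def kmeans_1d(values, k=3, iters=50, seed=0):
    """Cluster scalar voxel intensities into k 'habitat' sub-regions."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each voxel to its nearest cluster center
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        # recompute each center as the mean of its assigned voxels
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

# toy "ROI": three intensity populations standing in for tumor habitats
random.seed(1)
roi = ([random.gauss(50, 3) for _ in range(100)]     # e.g. low-enhancing core
       + [random.gauss(90, 3) for _ in range(100)]   # intermediate region
       + [random.gauss(130, 3) for _ in range(100)]) # strongly enhancing rim
labels = kmeans_1d(roi, k=3)
print(len(set(labels)))  # → 3 habitats
```

In the actual study each voxel would carry a feature vector rather than a single intensity, but the assignment/update loop is the same.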

Artificial intelligence in radiology: 173 commercially available products and their scientific evidence.

Antonissen N, Tryfonos O, Houben IB, Jacobs C, de Rooij M, van Leeuwen KG

pubmed logopapersJul 24 2025
To assess changes in peer-reviewed evidence on commercially available radiological artificial intelligence (AI) products from 2020 to 2023, as a follow-up to a 2020 review of 100 products. A literature review was conducted, covering January 2015 to March 2023, focusing on CE-certified radiological AI products listed on www.healthairegister.com. Papers were categorised using the hierarchical model of efficacy: technical/diagnostic accuracy (levels 1-2), clinical decision-making and patient outcomes (levels 3-5), or socio-economic impact (level 6). Study features such as design, vendor independence, and multicentre/multinational data usage were also examined. By 2023, 173 CE-certified AI products from 90 vendors were identified, compared to 100 products in 2020. Products with peer-reviewed evidence increased from 36% to 66%, supported by 639 papers (up from 237). Diagnostic accuracy studies (level 2) remained predominant, though their share decreased from 65% to 57%. The share of studies addressing higher-efficacy levels (3-6) remained nearly constant (22% in 2020 vs. 24% in 2023), while the share of products supported by such evidence increased from 18% to 31%. Multicentre studies rose from 30% to 41% (p < 0.01). However, vendor-independent studies decreased (49% to 45%), as did multinational studies (15% to 11%) and prospective designs (19% to 16%), all with p > 0.05. The increase in peer-reviewed evidence and in the level of evidence per product indicates maturation of the radiological AI market. However, the continued focus on lower-efficacy studies and the reductions in vendor independence, multinational data, and prospective designs highlight persistent challenges in establishing unbiased, real-world evidence. Question Evaluating advancements in peer-reviewed evidence for CE-certified radiological AI products is crucial to understanding their clinical adoption and impact. Findings CE-certified AI products with peer-reviewed evidence increased from 36% in 2020 to 66% in 2023, but the proportion of higher-level evidence papers (~24%) remained unchanged. Clinical relevance The study highlights increased validation of radiological AI products but underscores a continued lack of evidence on their clinical and socio-economic impact, which may limit these tools' safe and effective implementation into clinical workflows.

Fractal Analysis for Cognitive Impairment Classification in DAVF Using Machine Learning.

Sivan Sulaja J, Kannath SK, Menon RN, Thomas B

pubmed logopapersJul 24 2025
Intracranial dural arteriovenous fistula (DAVF) is an acquired vascular condition involving abnormal connections between dural arteries and veins without intervening capillary beds. Cognitive impairment is a common symptom in DAVFs, often linked to disrupted brain network connectivity. Resting-state functional MRI (rsfMRI) allows for examining functional connectivity through blood oxygenation level dependent (BOLD) signal analysis. However, rsfMRI signals exhibit fractal behavior that complicates connectivity analysis. This study explores nonfractal connectivity as a potential biomarker for cognitive impairment in DAVF patients by isolating short-memory components in BOLD signals. Method: 50 DAVF patients and 50 healthy controls underwent neuropsychological assessments and rsfMRI. Preprocessed BOLD signals were decomposed using wavelet transforms to isolate fractal and nonfractal components. Connectivity matrices based on fractal, nonfractal, and Pearson correlation components were generated and used as features for classification. Machine learning classifiers, including SVM and decision trees, were optimized via cross-validation in MATLAB, with performance assessed by accuracy, sensitivity, specificity, and AUC. Results: Nonfractal connectivity outperformed fractal and Pearson correlation measures, achieving a classification accuracy of 89.82% using SVM, with high sensitivity (86.54%), specificity (92.4%), and an AUC of 0.96. Nonfractal connectivity effectively differentiated cognitive impairment in DAVFs, offering a clearer depiction of neural activity by reducing the influence of fractal patterns. Conclusion: This study suggests that nonfractal connectivity is a promising biomarker for assessing cognitive impairment in DAVF patients, potentially supporting early diagnosis and intervention. While nonfractal analysis showed promising classification accuracy, further research with larger datasets is needed to validate these findings and explore applicability in other neurological conditions.
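The wavelet decomposition at the heart of the method, splitting a BOLD series into slowly varying (long-memory, fractal-dominated) and rapidly varying (short-memory) parts, can be illustrated with a single-level Haar transform in plain Python. This is a simplified stand-in: the paper's actual wavelet family, decomposition depth, and fractal estimator are not specified here.

```python
import math

def haar_dwt(x):
    """One-level Haar transform: approximation (smooth, low-frequency)
    and detail (short-memory, high-frequency) coefficients."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Invert the one-level Haar transform."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

signal = [1.0, 3.0, 2.0, 6.0, 5.0, 4.0, 8.0, 7.0]
approx, detail = haar_dwt(signal)
recon = haar_idwt(approx, detail)
print(all(abs(r - s0) < 1e-9 for r, s0 in zip(recon, signal)))  # → True
```

Connectivity would then be computed between the detail (nonfractal) components of different regions rather than between the raw signals.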

Malignancy classification of thyroid incidentalomas using 18F-fluorodeoxy-d-glucose PET/computed tomography-derived radiomics.

Yeghaian M, Piek MW, Bartels-Rutten A, Abdelatty MA, Herrero-Huertas M, Vogel WV, de Boer JP, Hartemink KJ, Bodalal Z, Beets-Tan RGH, Trebeschi S, van der Ploeg IMC

pubmed logopapersJul 24 2025
Thyroid incidentalomas (TIs) are incidental thyroid lesions detected on 18F-fluorodeoxy-d-glucose (18F-FDG) PET/computed tomography (PET/CT) scans. This study aims to investigate the role of noninvasive PET/CT-derived radiomic features in characterizing 18F-FDG PET/CT TIs and distinguishing benign from malignant thyroid lesions in oncological patients. We included 46 patients with PET/CT TIs who underwent thyroid ultrasound and thyroid surgery at our oncological referral hospital. Radiomic features were extracted from regions of interest (ROIs) in both PET and CT images and analyzed for their association with thyroid cancer and their predictive ability. The TIs were graded using the ultrasound TIRADS classification, and histopathological results served as the reference standard. Univariate and multivariate analyses were performed using features from each modality individually and combined, and the performance of radiomic features was compared to the TIRADS classification. Among the 46 included patients, 36 (78%) had malignant thyroid lesions and 10 (22%) had benign lesions. The combined run length nonuniformity radiomic feature from PET and CT cubical ROIs demonstrated the highest area under the curve (AUC) of 0.88 (P < 0.05), with a negative correlation with malignancy. This performance was comparable to the TIRADS classification (AUC: 0.84, P < 0.05), which showed a positive correlation with thyroid cancer. Multivariate analysis showed higher predictive performance for CT-derived radiomics (AUC: 0.86 ± 0.13) than for TIRADS (AUC: 0.80 ± 0.08). This study highlights the potential of 18F-FDG PET/CT-derived radiomics to distinguish benign from malignant thyroid lesions. Further studies with larger cohorts and deep learning-based methods could yield more robust results.
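The study's top feature, run length nonuniformity, comes from the grey-level run length matrix. A toy row-wise implementation (a sketch of the classic Galloway-style definition; the paper's 3-D extraction settings and discretization are not reproduced here) shows what the feature measures:

```python
def run_length_nonuniformity(image):
    """Grey-level run length nonuniformity along image rows:
    sum over run lengths of (number of runs of that length)^2,
    normalised by the total number of runs. The value is higher
    when runs concentrate on a few lengths."""
    length_counts = {}
    total_runs = 0
    for row in image:
        i = 0
        while i < len(row):
            j = i
            while j < len(row) and row[j] == row[i]:
                j += 1  # extend the current run of equal grey levels
            run_len = j - i
            length_counts[run_len] = length_counts.get(run_len, 0) + 1
            total_runs += 1
            i = j
    return sum(c * c for c in length_counts.values()) / total_runs

uniform_runs = [[1, 1, 2, 2], [3, 3, 4, 4]]  # four runs, all length 2
varied_runs  = [[1, 2, 2, 2], [3, 3, 4, 4]]  # run lengths 1, 3, 2, 2
print(run_length_nonuniformity(uniform_runs))  # → 4.0
print(run_length_nonuniformity(varied_runs))   # → 1.5
```

The negative correlation with malignancy reported above would correspond to malignant lesions having a wider spread of run lengths in this toy formulation.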

A Dynamic Machine Learning Model to Predict Angiographic Vasospasm After Aneurysmal Subarachnoid Hemorrhage.

Sen RD, McGrath MC, Shenoy VS, Meyer RM, Park C, Fong CT, Lele AV, Kim LJ, Levitt MR, Wang LL, Sekhar LN

pubmed logopapersJul 24 2025
The goal of this study was to develop a highly precise, dynamic machine learning model centered on daily transcranial Doppler ultrasound (TCD) data to predict angiographic vasospasm (AV) in the context of aneurysmal subarachnoid hemorrhage (aSAH). A retrospective review of patients with aSAH treated at a single institution was performed. The primary outcome was AV, defined as angiographic narrowing of any intracranial artery at any time point during admission from risk assessment. Standard demographic, clinical, and radiographic data were collected, along with quantitative data including mean arterial pressure, cerebral perfusion pressure, daily serum sodium, and hourly ventriculostomy output. Detailed daily TCD data for the intracranial arteries, including maximum velocities, pulsatility indices, and Lindegaard ratios, were also collected. Three predictive machine learning models were created and compared: a static multivariate logistic regression model based on data collected on the date of admission (baseline model, BM), a standard TCD model (SM) using middle cerebral artery flow velocity and Lindegaard ratio measurements, and a long short-term memory (LSTM) model using all data trended through the hospitalization. A total of 424 patients with aSAH were reviewed, 78 of whom developed AV. In predicting AV at any future time point, the LSTM model had the highest precision (0.571) and accuracy (0.776), whereas the SM had the highest overall performance with an F1 score of 0.566. In predicting AV within 5 days, the LSTM continued to have the highest precision (0.488) and accuracy (0.803). After an ablation test removing all non-TCD elements, the precision of the LSTM model improved to 0.824. Longitudinal TCD data can be used to create a dynamic machine learning model with higher precision than static TCD measurements for predicting AV after aSAH.
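Among the TCD inputs, the Lindegaard ratio is conventionally the mean MCA flow velocity divided by the mean flow velocity in the ipsilateral extracranial internal carotid artery, used to separate true vasospasm from global hyperemia. A minimal sketch follows; the 120 cm/s velocity and ratio > 3 cutoffs are commonly cited screening thresholds used here for illustration only, not the study's learned model.

```python
def lindegaard_ratio(mca_velocity, ica_velocity):
    """Mean MCA flow velocity divided by mean extracranial ICA velocity
    (both in cm/s); used on TCD to distinguish vasospasm from hyperemia."""
    return mca_velocity / ica_velocity

def tcd_flags(mca_velocity, ica_velocity):
    """Illustrative screening flags based on commonly cited cutoffs."""
    ratio = lindegaard_ratio(mca_velocity, ica_velocity)
    return {
        "lindegaard_ratio": round(ratio, 2),
        "elevated_velocity": mca_velocity > 120,
        "suggests_vasospasm": mca_velocity > 120 and ratio > 3,
    }

print(tcd_flags(180.0, 45.0))  # ratio 4.0, consistent with vasospasm
print(tcd_flags(130.0, 65.0))  # ratio 2.0, hyperemia pattern, not flagged
```

In the dynamic model above, such daily values form the per-day feature vectors that the LSTM consumes as a sequence.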

Comparative Analysis of Vision Transformers and Convolutional Neural Networks for Medical Image Classification

Kunal Kawadkar

arxiv logopreprintJul 24 2025
The emergence of Vision Transformers (ViTs) has revolutionized computer vision, yet their effectiveness compared to traditional Convolutional Neural Networks (CNNs) in medical imaging remains under-explored. This study presents a comprehensive comparative analysis of CNN and ViT architectures across three critical medical imaging tasks: chest X-ray pneumonia detection, brain tumor classification, and skin cancer melanoma detection. We evaluated four state-of-the-art models - ResNet-50, EfficientNet-B0, ViT-Base, and DeiT-Small - across datasets totaling 8,469 medical images. Our results demonstrate task-specific model advantages: ResNet-50 achieved 98.37% accuracy on chest X-ray classification, DeiT-Small excelled at brain tumor detection with 92.16% accuracy, and EfficientNet-B0 led skin cancer classification at 81.84% accuracy. These findings provide crucial insights for practitioners selecting architectures for medical AI applications, highlighting the importance of task-specific architecture selection in clinical decision support systems.
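The token arithmetic behind ViT-Base helps make the CNN/ViT contrast concrete: with the standard 16 x 16 patches and a 768-dimensional embedding (a 224 x 224 input is assumed here), an image becomes a short sequence of patch tokens before any attention is applied.

```python
def vit_token_count(image_size, patch_size):
    """Number of patch tokens a ViT produces (class token excluded)."""
    assert image_size % patch_size == 0
    return (image_size // patch_size) ** 2

def patch_embed_params(patch_size, in_channels, embed_dim):
    """Weights + bias in the linear patch-embedding projection."""
    return patch_size * patch_size * in_channels * embed_dim + embed_dim

# ViT-Base defaults: 16x16 patches, 768-dim embedding; 224x224 input assumed
print(vit_token_count(224, 16))        # → 196 tokens per image
print(patch_embed_params(16, 3, 768))  # → 590592 parameters
```

This fixed global tokenization, versus a CNN's local receptive fields, is one reason the two families can favor different tasks, as the results above suggest.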

The impacts of artificial intelligence on the workload of diagnostic radiology services: A rapid review and stakeholder contextualisation

Sutton, C., Prowse, J., Elshehaly, M., Randell, R.

medrxiv logopreprintJul 24 2025
Background: Advancements in imaging technology, alongside increasing longevity and co-morbidities, have led to heightened demand for diagnostic radiology services. However, there is a shortfall of radiology and radiography staff to acquire, read and report on such imaging examinations. Artificial intelligence (AI) has been identified, notably by AI developers, as a potential solution to positively impact the workload of diagnostic radiology services and address this staffing shortfall. Methods: A rapid review, complemented with data from interviews with UK radiology service stakeholders, was undertaken. ArXiv, Cochrane Library, Embase, Medline and Scopus databases were searched for publications in English published between 2007 and 2022. Following screening, 110 full texts were included. Interviews with 15 radiology service managers, clinicians and academics were carried out between May and September 2022. Results: Most literature was published in 2021 and 2022, with a distinct focus on AI for diagnostics of lung and chest disease (n = 25), notably COVID-19 and respiratory system cancers, closely followed by AI for breast screening (n = 23). AI contributions to streamlining radiology service workloads were categorised as autonomous, augmentative or assistive. However, percentage estimates of workload reduction varied considerably, with the most significant reductions identified in national screening programmes. AI was also recognised as aiding radiology services by providing a second opinion, assisting in prioritisation of images for reading, and improving quantification in diagnostics. Stakeholders saw AI as having the potential to remove some laborious work and contribute to service resilience. Conclusions: This review shows that there are limited data on real-world experiences of implementing AI in clinical production in radiology services. Autonomous, augmentative and assistive AI can decrease workload and aid reading and reporting, but the governance surrounding these advancements lags behind.

Enhancing InceptionResNet to Diagnose COVID-19 from Medical Images.

Aljawarneh S, Ray I

pubmed logopapersJul 24 2025
This investigation addresses the diagnosis of COVID-19 from X-ray images using an effective deep learning model. Current methods for assessing COVID-19 diagnosis models tend to focus on the accuracy rate while neglecting several significant assessment parameters (precision, sensitivity, specificity, F1-score, and ROC-AUC) that strongly influence a model's performance. In this paper, we improve InceptionResNet with restructured parameters, termed the "Enhanced InceptionResNet," which incorporates depth-wise separable convolutions to enhance the efficiency of feature extraction and minimize the consumption of computational resources. For this investigation, three residual network models, namely ResNet, InceptionResNet, and the Enhanced InceptionResNet, were employed for a medical image classification task. The performance of each model was evaluated on a balanced dataset of 2600 X-ray images; the models were assessed for accuracy and loss and subjected to a confusion matrix analysis. The Enhanced InceptionResNet consistently outperformed ResNet and InceptionResNet in validation and testing accuracy, recall, precision, F1-score, and ROC-AUC, achieving validation and testing accuracies of 99.0% and 98.35%, respectively, which demonstrates its superior capacity for identifying pertinent information in the data and suggests enhanced feature extraction capabilities. The Enhanced InceptionResNet excelled in COVID-19 diagnosis from chest X-rays, surpassing ResNet and the default InceptionResNet in accuracy, precision, and sensitivity. Despite its computational demands, it shows promise for medical image classification. Future work should leverage larger datasets, cloud platforms, and hyperparameter optimisation to improve performance, especially for distinguishing normal and pneumonia cases.
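The efficiency gain the authors attribute to depth-wise separable convolutions reduces to a weight count: a standard k x k convolution uses k*k*C_in*C_out weights, whereas the separable version uses k*k*C_in (depthwise) plus C_in*C_out (pointwise). A quick check with toy channel sizes (illustration only, not the paper's exact layers):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel + 1x1 pointwise projection."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 256, 256
std = standard_conv_params(k, c_in, c_out)   # 3*3*256*256 = 589824
sep = separable_conv_params(k, c_in, c_out)  # 9*256 + 256*256 = 67840
print(std, sep, round(std / sep, 1))         # roughly 8.7x fewer weights
```

The same factoring reduces multiply-accumulate operations by a similar ratio, which is where the resource savings come from.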

Analyzing pediatric forearm X-rays for fracture analysis using machine learning.

Lam V, Parida A, Dance S, Tabaie S, Cleary K, Anwar SM

pubmed logopapersJul 24 2025
Forearm fractures constitute a significant proportion of pediatric emergency department presentations. The treatment goal is to restore length and alignment between the distal and proximal bone fragments. Immobilization through splinting or casting suffices for non-displaced and minimally displaced fractures, but moderately or severely displaced fractures often require reduction for realignment. Appropriate treatment is challenging in current practice because of the lack of resources required for specialized pediatric care, leading to delayed and unnecessary transfers between medical centers that can create treatment complications and burdens. The purpose of this study is to build a machine learning model for analyzing forearm fractures to assist clinical centers that lack surgical expertise in pediatric orthopedics. X-ray scans from 1250 children were curated, preprocessed, and manually annotated at our clinical center. Several machine learning models were fine-tuned using a pretraining strategy leveraging a self-supervised learning model with a vision transformer backbone. We further employed strategies to identify the region of the forearm X-ray most relevant to fracture. Model performance was evaluated with and without region of interest (ROI) detection to find an optimal model for forearm fracture analysis. Our proposed strategy leverages self-supervised pretraining (without labels) followed by supervised fine-tuning (with labels). The model fine-tuned on regions cropped with ROI identification achieved the highest classification performance, with a true-positive rate (TPR) of 0.79, true-negative rate (TNR) of 0.74, AUROC of 0.81, and AUPR of 0.86 on the testing data. The results show the feasibility of using machine learning models to predict the appropriate treatment for forearm fractures in pediatric cases.
With further improvement, the algorithm could potentially be used as a tool to assist non-specialized orthopedic providers in diagnosing and providing treatment.
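The reported TPR and TNR map directly onto confusion-matrix counts. A small helper makes the definitions explicit; the counts below are toy values chosen to reproduce similar rates, not the study's data.

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity (TPR), specificity (TNR), and precision from
    confusion-matrix counts of a binary classifier."""
    return {
        "tpr": tp / (tp + fn),        # true-positive rate (sensitivity)
        "tnr": tn / (tn + fp),        # true-negative rate (specificity)
        "precision": tp / (tp + fp),  # positive predictive value
    }

# toy counts, not the study's data
m = binary_metrics(tp=79, fn=21, tn=74, fp=26)
print(m)  # tpr 0.79, tnr 0.74 on these illustrative counts
```

AUROC and AUPR then summarize these rates across all decision thresholds rather than at a single operating point.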

Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Foresman G, Biro J, Tran A, MacRae K, Kazi S, Schubel L, Visconti A, Gallagher W, Smith KM, Giardina T, Haskell H, Miller K

pubmed logopapersJul 24 2025
Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially in its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication. This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools. This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus. Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). 
Participants valued the co-design session and felt they had a significant say in the discussions. This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.
