Bridging innovation to implementation in artificial intelligence fracture detection: a commentary piece.

Khattak M, Kierkegaard P, McGregor A, Perry DC

PubMed · Jun 1, 2025
The deployment of AI in medical imaging, particularly in areas such as fracture detection, represents a transformative advancement in orthopaedic care. AI-driven systems, leveraging deep-learning algorithms, promise to enhance diagnostic accuracy, reduce variability, and streamline workflows by analyzing radiographs swiftly and accurately. Despite these potential benefits, the integration of AI into clinical settings faces substantial barriers, including slow adoption across health systems, technical challenges, and a major lag between technology development and clinical implementation. This commentary explores the role of AI in healthcare, highlighting its potential to enhance patient outcomes through more accurate and timely diagnoses. It addresses the necessity of bridging the gap between AI innovation and practical application. It also emphasizes the importance of implementation science in effectively integrating AI technologies into healthcare systems, using frameworks such as the Consolidated Framework for Implementation Research and the Knowledge-to-Action Cycle to guide this process. We call for a structured approach to address the challenges of deploying AI in clinical settings, ensuring that AI's benefits translate into improved healthcare delivery and patient care.

Predicting strength of femora with metastatic lesions from single 2D radiographic projections using convolutional neural networks.

Synek A, Benca E, Licandro R, Hirtler L, Pahr DH

PubMed · Jun 1, 2025
Patients with metastatic bone disease are at risk of pathological femoral fractures and may require prophylactic surgical fixation. Current clinical decision support tools often overestimate fracture risk, leading to overtreatment. While novel scores integrating femoral strength assessment via finite element (FE) models show promise, they require 3D imaging, extensive computation, and are difficult to automate. Predicting femoral strength directly from single 2D radiographic projections using convolutional neural networks (CNNs) could address these limitations, but this approach has not yet been explored for femora with metastatic lesions. This study aimed to test whether CNNs can accurately predict the strength of femora with metastatic lesions from single 2D radiographic projections. CNNs with various architectures were developed and trained using a training dataset generated with FE models. This training dataset was based on 36,000 modified computed tomography (CT) scans, created by randomly inserting artificial lytic lesions into the CT scans of 36 intact anatomical femoral specimens. From each modified CT scan, an anterior-posterior 2D projection was generated and femoral strength in one-legged stance was determined using nonlinear FE models. Following training, the CNNs' performance was evaluated on an independent experimental test dataset consisting of 31 anatomical femoral specimens (16 intact, 15 with artificial lytic lesions). 2D projections of each specimen were created from the corresponding CT scans and femoral strength was assessed in mechanical tests. The CNNs' performance was evaluated using linear regression analysis and compared to 2D densitometric predictors (bone mineral density and content) and CT-based 3D FE models. All CNNs accurately predicted the experimentally measured strength in femora with and without metastatic lesions of the test dataset (R²≥0.80, CCC≥0.81). In femora with metastatic lesions, the performance of the CNNs (best: R²=0.84, CCC=0.86) was considerably superior to 2D densitometric predictors (R²≤0.07) and slightly inferior to 3D FE models (R²=0.90, CCC=0.94). CNNs, trained on a large dataset generated via FE models, predicted experimentally measured strength of femora with artificial metastatic lesions with accuracy comparable to 3D FE models. By eliminating the need for 3D imaging and reducing computational demands, this novel approach demonstrates potential for application in a clinical setting.
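The paper's architectures are not reproduced here; as a rough illustration of the core idea (a CNN regressing a single scalar strength value from one 2D projection), a minimal PyTorch sketch follows. Layer sizes, input resolution, and the training target are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class StrengthRegressor(nn.Module):
    """Toy CNN mapping a single-channel 2D projection to one scalar (femoral strength)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single regression output (strength; units assumed)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = StrengthRegressor()
x = torch.randn(8, 1, 256, 256)  # dummy batch of anterior-posterior 2D projections
# Regress against FE-derived strength values (placeholders here).
loss = nn.MSELoss()(model(x).squeeze(1), torch.randn(8))
loss.backward()
```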

Deep Learning-Based Estimation of Radiographic Position to Automatically Set Up the X-Ray Prime Factors.

Del Cerro CF, Giménez RC, García-Blas J, Sosenko K, Ortega JM, Desco M, Abella M

PubMed · Jun 1, 2025
Radiation dose and image quality in radiology are influenced by the X-ray prime factors: kVp, mAs, and source-detector distance. These parameters are set by the X-ray technician prior to acquisition, based on the radiographic position. A wrong setting of these parameters may result in exposure errors, forcing the test to be repeated and increasing the radiation dose delivered to the patient. This work presents a novel approach based on deep learning that automatically estimates the radiographic position from a photograph captured prior to X-ray exposure, which can then be used to select the optimal prime factors. We created a database of 66 radiographic positions commonly used in clinical settings, prospectively obtained during 2022 from 75 volunteers in two different X-ray facilities. The architecture for radiographic position classification was a lightweight version of ConvNeXt trained with fine-tuning, discriminative learning rates, and a one-cycle policy scheduler. Our resulting model achieved an accuracy of 93.17% for radiographic position classification, which increased to 95.58% when considering the correct selection of prime factors, since half of the errors involved positions with the same kVp and mAs values. Most errors occurred for radiographic positions with a similar patient pose in the photograph. Results suggest that the method could streamline the acquisition workflow, reducing the occurrence of exposure errors while preventing unnecessary radiation dose delivered to patients.
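The abstract names its training recipe: fine-tuning a lightweight ConvNeXt with discriminative learning rates and a one-cycle scheduler. A plausible PyTorch sketch of that recipe follows, with torchvision's convnext_tiny standing in for the authors' "lightweight version" and all learning rates and schedule lengths assumed.

```python
import torch
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

NUM_POSITIONS = 66  # radiographic position classes, per the abstract

model = convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
# Replace the classification head for the 66-way position task.
model.classifier[2] = torch.nn.Linear(model.classifier[2].in_features, NUM_POSITIONS)

# Discriminative learning rates: the pretrained backbone updates more slowly than the new head.
optimizer = torch.optim.AdamW([
    {"params": model.features.parameters(), "lr": 1e-5},
    {"params": model.classifier.parameters(), "lr": 1e-3},
])

steps_per_epoch, epochs = 100, 20  # assumed
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=[1e-4, 1e-2], steps_per_epoch=steps_per_epoch, epochs=epochs)

criterion = torch.nn.CrossEntropyLoss()
images = torch.randn(4, 3, 224, 224)  # dummy photographs of the patient pose
labels = torch.randint(0, NUM_POSITIONS, (4,))
criterion(model(images), labels).backward()
optimizer.step(); scheduler.step()
```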

Applying Deep-Learning Algorithm Interpreting Kidney, Ureter, and Bladder (KUB) X-Rays to Detect Colon Cancer.

Lee L, Lin C, Hsu CJ, Lin HH, Lin TC, Liu YH, Hu JM

PubMed · Jun 1, 2025
Early screening is crucial in reducing the mortality of colorectal cancer (CRC). Current screening methods, including fecal occult blood tests (FOBT) and colonoscopy, are primarily limited by low patient compliance and the invasive nature of the procedures. Several advanced imaging techniques, such as computed tomography (CT) and histological imaging, have been integrated with artificial intelligence (AI) to enhance CRC detection, but these remain limited by the challenges and cost of image acquisition. The kidney, ureter, and bladder (KUB) radiograph, which is inexpensive and widely used for abdominal assessment in emergency settings, shows potential for detecting CRC when enhanced using advanced techniques. This study aimed to develop a deep learning model (DLM) to detect CRC using KUB radiographs. This retrospective study was conducted using data from the Tri-Service General Hospital (TSGH) between January 2011 and December 2020, including patients with at least one KUB radiograph. Patients were divided into development (n = 28,055), tuning (n = 11,234), and internal validation (n = 16,875) sets. An additional 15,876 patients were collected from a community hospital as the external validation set. A 121-layer DenseNet convolutional network was trained to classify KUB images for CRC detection. Model performance was evaluated using receiver operating characteristic curves, with sensitivity, specificity, and area under the curve (AUC) as metrics. In the internal and external validation sets, the DLM achieved an AUC, sensitivity, and specificity of 0.738, 61.3%, and 74.4%, and 0.656, 47.7%, and 72.9%, respectively. The model performed better for high-grade CRC, with AUCs of 0.744 and 0.674 in the internal and external sets, respectively. Stratified analysis showed superior performance in females aged 55-64 with high-grade cancers. AI-positive predictions were associated with a higher long-term risk of all-cause mortality in both validation cohorts. AI-enhanced KUB X-ray analysis can expand CRC screening coverage and effectiveness, providing a cost-effective alternative to traditional methods. Further prospective studies are necessary to validate these findings and fully integrate this technology into clinical practice.
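For orientation, a minimal sketch of the described setup, a 121-layer DenseNet binary classifier evaluated by ROC metrics, in PyTorch and scikit-learn. Weights, preprocessing, and the operating threshold are assumptions, not the study's pipeline.

```python
import torch
import numpy as np
from torchvision.models import densenet121
from sklearn.metrics import roc_auc_score, roc_curve

model = densenet121(num_classes=2)  # binary head: CRC vs. no CRC
model.eval()

# Dummy validation batch standing in for preprocessed KUB radiographs.
images = torch.randn(16, 3, 224, 224)
labels = np.array([0, 1] * 8)  # placeholder ground truth

with torch.no_grad():
    probs = torch.softmax(model(images), dim=1)[:, 1].numpy()  # P(CRC)

auc = roc_auc_score(labels, probs)
fpr, tpr, thresholds = roc_curve(labels, probs)
# Sensitivity = tpr and specificity = 1 - fpr at the chosen operating threshold.
print(f"AUC={auc:.3f}")
```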

Intraoperative stenosis detection in X-ray coronary angiography via temporal fusion and attention-based CNN.

Chen M, Wang S, Liang K, Chen X, Xu Z, Zhao C, Yuan W, Wan J, Huang Q

PubMed · Jun 1, 2025
Coronary artery disease (CAD), the leading cause of mortality, is caused by atherosclerotic plaque buildup in the arteries. The gold standard for the diagnosis of CAD is X-ray coronary angiography (XCA) during percutaneous coronary intervention, where locating coronary artery stenosis is fundamental and essential. However, due to complex vascular features and motion artifacts caused by heartbeat and respiratory movement, manually recognizing stenosis is challenging for physicians, which may prolong decision-making time during surgery and lead to irreversible myocardial damage. Therefore, we aim to provide an automatic method for accurate stenosis localization. In this work, we present a convolutional neural network (CNN) with feature-level temporal fusion and attention modules to detect coronary artery stenosis in XCA images. The temporal fusion module, composed of a deformable convolution and a correlation-based module, is proposed to integrate time-varying vessel features from consecutive frames. The attention module adopts channel-wise recalibration to capture global context as well as spatial-wise recalibration to enhance stenosis features with local width and morphology information. We compare our method to commonly used attention methods, state-of-the-art object detection methods, and stenosis detection methods. Experimental results show that our fusion and attention strategy significantly improves performance in discerning stenosis (P<0.05), achieving the best average recall score on two different datasets. This is the first study to integrate both temporal fusion and an attention mechanism into a novel feature-level hybrid CNN framework for stenosis detection in XCA images, which proved effective in improving detection performance and is therefore potentially helpful in intraoperative stenosis localization.
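The channel-wise plus spatial-wise recalibration idea can be sketched generically in PyTorch; the module below is an SE-style stand-in for illustration only, not the paper's exact attention design, and all dimensions are assumed.

```python
import torch
import torch.nn as nn

class ChannelSpatialRecalibration(nn.Module):
    """SE-style channel recalibration followed by a learned spatial attention map.
    A generic stand-in for the paper's attention module, not its exact design."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(  # global context -> per-channel weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(  # local width/morphology cues -> spatial map
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)     # channel-wise recalibration (global context)
        return x * self.spatial_gate(x)  # spatial-wise recalibration (local features)

feats = torch.randn(2, 64, 56, 56)  # placeholder fused features from consecutive XCA frames
print(ChannelSpatialRecalibration(64)(feats).shape)  # torch.Size([2, 64, 56, 56])
```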

Detection of COVID-19, lung opacity, and viral pneumonia via X-ray using machine learning and deep learning.

Lamouadene H, El Kassaoui M, El Yadari M, El Kenz A, Benyoussef A, El Moutaouakil A, Mounkachi O

PubMed · Jun 1, 2025
The COVID-19 pandemic has significantly strained healthcare systems, highlighting the need for early diagnosis to isolate positive cases and prevent the spread of the disease. This study combines machine learning, deep learning, and transfer learning techniques to automatically diagnose COVID-19 and other pulmonary conditions from radiographic images. First, we used Convolutional Neural Networks (CNNs) and a Support Vector Machine (SVM) classifier on a dataset of 21,165 chest X-ray images. Our model achieved an accuracy of 86.18%. This approach aids medical experts in rapidly and accurately detecting lung diseases. Next, we applied transfer learning using ResNet18 combined with SVM on a dataset comprising normal, COVID-19, lung opacity, and viral pneumonia images. This model outperformed traditional methods, with classification rates of 98% with Stochastic Gradient Descent (SGD), 97% with Adam, 96% with RMSProp, and 94% with Adagrad optimizers. Additionally, we incorporated two further transfer learning models, EfficientNet-CNN and Xception-CNN, which achieved classification accuracies of 99.20% and 98.80%, respectively. However, we observed limitations in dataset diversity and representativeness, which may affect model generalization. Future work will focus on implementing advanced data augmentation techniques and collaborating with medical experts to enhance model performance. This research demonstrates the potential of cutting-edge deep learning techniques to improve diagnostic accuracy and efficiency in medical imaging applications.
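A minimal sketch of the ResNet18-plus-SVM pattern the abstract describes: a pretrained backbone as a frozen feature extractor feeding a scikit-learn SVC. Preprocessing, hyperparameters, and the data are all placeholders, not the study's configuration.

```python
import torch
import numpy as np
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.svm import SVC

# Pretrained ResNet18 as a frozen feature extractor (final FC layer replaced by identity).
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract(images: torch.Tensor) -> np.ndarray:
    with torch.no_grad():
        return backbone(images).numpy()  # one 512-dim feature vector per image

# Dummy stand-ins for normal / COVID-19 / lung opacity / viral pneumonia X-rays.
X_train = extract(torch.randn(40, 3, 224, 224))
y_train = np.tile([0, 1, 2, 3], 10)  # four placeholder class labels

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict(extract(torch.randn(4, 3, 224, 224))))
```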

Dental practitioners versus artificial intelligence software in assessing alveolar bone loss using intraoral radiographs.

Almarghlani A, Fakhri J, Almarhoon A, Ghonaim G, Abed H, Sharka R

PubMed · Jun 1, 2025
Integrating artificial intelligence (AI) in the dental field can potentially enhance the efficiency of dental care. However, few studies have investigated whether AI software can achieve results comparable to those obtained by dental practitioners (general practitioners (GPs) and specialists) when assessing alveolar bone loss in a clinical setting. Thus, this study compared the performance of AI in assessing periodontal bone loss with that of GPs and specialists. This comparative cross-sectional study evaluated the performance of dental practitioners and AI software in assessing alveolar bone loss. Radiographs were randomly selected to ensure representative samples. Dental practitioners independently evaluated the radiographs, and the AI software "Second Opinion Software" was tested on the same set of radiographs evaluated by the dental practitioners. The results produced by the AI software were then compared with the baseline values to measure their accuracy and allow direct comparison with the performance of human specialists. The survey received 149 responses; each respondent answered 10 questions comparing the measurements made by AI and dental practitioners when assessing the amount of bone loss radiographically. The mean estimates of the participants had a moderate positive correlation with the radiographic measurements (rho = 0.547, p < 0.001) and a weaker but still significant correlation with AI measurements (rho = 0.365, p < 0.001). AI measurements had a stronger positive correlation with the radiographic measurements (rho = 0.712, p < 0.001) than with the estimates of dental practitioners. This study highlights the capacity of AI software to enhance the accuracy and efficiency of radiograph-based evaluations of alveolar bone loss. Dental practitioners remain vital for clinical experience, but AI technology provides a consistent and replicable methodology. Future collaborations between AI experts, researchers, and practitioners could potentially optimize patient care.
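The reported rho values are Spearman rank correlation coefficients; for reference, a minimal SciPy sketch computing one on placeholder paired measurements (not study data):

```python
from scipy.stats import spearmanr

# Placeholder paired bone-loss measurements (mm): AI output vs. radiographic baseline.
ai_measurements       = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1]
radiographic_baseline = [2.0, 3.6, 1.5, 4.2, 3.0, 2.8]

rho, p_value = spearmanr(ai_measurements, radiographic_baseline)
print(f"rho={rho:.3f}, p={p_value:.4f}")
```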

Evaluating artificial intelligence chatbots for patient education in oral and maxillofacial radiology.

Helvacioglu-Yigit D, Demirturk H, Ali K, Tamimi D, Koenig L, Almashraqi A

PubMed · Jun 1, 2025
This study aimed to compare the quality and readability of the responses generated by 3 publicly available artificial intelligence (AI) chatbots in answering frequently asked questions (FAQs) related to Oral and Maxillofacial Radiology (OMR), to assess their suitability for patient education. Fifteen OMR-related questions were selected from professional patient information websites. These questions were posed to ChatGPT-3.5 by OpenAI, Gemini 1.5 Pro by Google, and Copilot by Microsoft to generate responses. Three board-certified OMR specialists evaluated the responses for scientific adequacy, ease of understanding, and overall reader satisfaction. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) scores. The Wilcoxon signed-rank test was conducted to compare the scores assigned by the evaluators to the responses from the chatbots and professional websites. Interevaluator agreement was examined by calculating the Fleiss kappa coefficient. There were no significant differences between groups in terms of scientific adequacy. In terms of readability, the chatbots had overall mean FKGL and FRE scores of 12.97 and 34.11, respectively. The interevaluator agreement level was generally high. Although chatbots are relatively good at responding to FAQs, validating AI-generated information with input from healthcare professionals can enhance patient care and safety. The text content of both the chatbots and the websites requires high reading levels.
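Both readability indices are standard formulas over word, sentence, and syllable counts; a small helper computing them with the published coefficients, applied to placeholder counts:

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """FKGL: approximate US school grade needed to understand the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """FRE: 0-100 scale; lower scores indicate harder text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Placeholder counts for a hypothetical chatbot response.
w, s, syl = 180, 9, 310
print(f"FKGL={flesch_kincaid_grade(w, s, syl):.2f}, "
      f"FRE={flesch_reading_ease(w, s, syl):.2f}")
```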

Managing class imbalance in the training of a large language model to predict patient selection for total knee arthroplasty: Results from the Artificial intelligence to Revolutionise the patient Care pathway in Hip and knEe aRthroplastY (ARCHERY) project.

Farrow L, Anderson L, Zhong M

PubMed · Jun 1, 2025
This study set out to test the efficacy of different techniques used to manage class imbalance, a type of data bias, when applying a large language model (LLM) to predict patient selection for total knee arthroplasty (TKA). This study utilised data from the Artificial Intelligence to Revolutionise the Patient Care Pathway in Hip and Knee Arthroplasty (ARCHERY) project (ISRCTN18398037). Data included the pre-operative radiology reports of patients referred to secondary care for knee-related complaints from within the North of Scotland. A clinically based LLM (GatorTron) was trained to predict selection for TKA. Three methods for managing class imbalance were assessed: a standard model, class weighting, and majority class undersampling. A total of 7707 individual knee radiology reports were included (dated from 2015 to 2022). The mean text length was 74 words (range 26-275). Only 910/7707 (11.8%) patients underwent TKA surgery (the designated 'minority class'). The class-weighting technique performed better for minority class discrimination and calibration than the other two techniques (recall 0.61/AUROC 0.73 for class weighting, compared with 0.54/0.70 and 0.59/0.72 for the standard model and majority class undersampling, respectively). There was also significant data loss for majority class undersampling when compared with class weighting. Class weighting appears to provide the optimal method of training an LLM to perform analytical tasks on free-text clinical information in the face of significant data bias ('class imbalance'). Such knowledge is an important consideration in the development of high-performance clinical AI models within Trauma and Orthopaedics.
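Class weighting is typically implemented as an inverse-frequency weight on the training loss. A hedged PyTorch sketch follows: the class counts come from the abstract, but everything else is assumed, and the study fine-tuned GatorTron on report text whereas this shows only the weighted loss itself.

```python
import torch
import torch.nn as nn

# Class counts from the abstract: 6797 non-TKA vs. 910 TKA ('minority class') reports.
counts = torch.tensor([6797.0, 910.0])
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weighting
# -> roughly [0.57, 4.23]: a minority-class error costs ~7.5x more than a majority-class one.

criterion = nn.CrossEntropyLoss(weight=weights)

# Placeholder classifier-head logits for a batch of encoded radiology reports.
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
criterion(logits, labels).backward()
```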

Automated engineered-stone silicosis screening and staging using Deep Learning with X-rays.

Priego-Torres B, Sanchez-Morillo D, Khalili E, Conde-Sánchez MÁ, García-Gámez A, León-Jiménez A

PubMed · Jun 1, 2025
Silicosis, a debilitating occupational lung disease caused by inhaling crystalline silica, continues to be a significant global health issue, especially with the increasing use of engineered stone (ES) surfaces containing high silica content. Traditional diagnostic methods, dependent on radiological interpretation, have low sensitivity, especially in the early stages of the disease, and present variability between evaluators. This study explores the efficacy of deep learning techniques in automating the screening and staging of silicosis using chest X-ray images. Using a comprehensive dataset obtained from the medical records of a cohort of workers exposed to artificial quartz conglomerates, we implemented a preprocessing stage for rib-cage segmentation, followed by classification using state-of-the-art deep learning models. The segmentation model exhibited high precision, ensuring accurate identification of thoracic structures. In the screening phase, our models achieved near-perfect accuracy, with ROC AUC values reaching 1.0, effectively distinguishing between healthy individuals and those with silicosis. The models also demonstrated remarkable precision in staging the disease. Nevertheless, differentiating between simple silicosis and progressive massive fibrosis, the evolved and complicated form of the disease, presented certain difficulties, especially during the transitional period, when assessment can be significantly subjective. Notwithstanding these difficulties, the models achieved an accuracy of around 81% and ROC AUC scores nearing 0.93. This study highlights the potential of deep learning to generate clinical decision support tools that increase the accuracy and effectiveness of silicosis diagnosis and staging; early detection would allow the patient to be moved away from all sources of occupational exposure, constituting a substantial advancement in occupational health diagnostics.
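The described pipeline is two-stage: segment the rib cage, then classify the masked radiograph. A schematic sketch of that flow is below; both networks are crude placeholders (the authors' models are not public here), and the three-class staging head is an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stage 1: segmentation network producing a rib-cage mask (placeholder 1x1-conv "model").
seg_net = nn.Sequential(nn.Conv2d(1, 1, 1), nn.Sigmoid())
# Stage 2: classifier over the masked X-ray (assumed classes: healthy / simple silicosis / PMF).
clf = resnet18(num_classes=3)

xray = torch.randn(1, 1, 512, 512)         # dummy chest radiograph
mask = (seg_net(xray) > 0.5).float()       # binarized rib-cage segmentation
masked = (xray * mask).repeat(1, 3, 1, 1)  # keep the thoracic region, expand to 3 channels
logits = clf(masked)
print(logits.softmax(dim=1))               # screening/staging probabilities
```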