Page 304 of 343 · 3423 results

The integration of artificial intelligence into clinical medicine: Trends, challenges, and future directions.

Aravazhi PS, Gunasekaran P, Benjamin NZY, Thai A, Chandrasekar KK, Kolanu ND, Prajjwal P, Tekuru Y, Brito LV, Inban P

PubMed · Jun 1, 2025
AI has emerged as a transformative force in clinical medicine, changing the diagnosis, treatment, and management of patients. Tools built on machine learning (ML), deep learning (DL), and natural language processing (NLP) algorithms can analyze large, complex medical datasets with unprecedented accuracy and speed, thereby improving diagnostic precision, treatment personalization, and patient care outcomes. For example, CNNs have dramatically improved the accuracy of medical imaging diagnoses, and NLP algorithms have greatly helped extract insights from unstructured data, including EHRs. However, numerous challenges still face AI integration into clinical workflows, including data privacy, algorithmic bias, ethical dilemmas, and problems with the interpretability of "black-box" AI models. These barriers have thus far prevented the widespread application of AI in health care, so its trends, obstacles, and future implications need to be systematically explored. The purpose of this paper is, therefore, to assess current trends in AI applications in clinical medicine, identify the obstacles hindering adoption, and outline possible future directions. This research synthesizes evidence from peer-reviewed articles to provide a more comprehensive understanding of the role AI plays in advancing clinical practice, improving patient outcomes, and enhancing decision-making. A systematic review was conducted according to the PRISMA guidelines to explore the integration of artificial intelligence in clinical medicine, including trends, challenges, and future directions. PubMed, Cochrane Library, Web of Science, and Scopus databases were searched for peer-reviewed articles from 2014 to 2024 with keywords such as "Artificial Intelligence in Medicine," "AI in Clinical Practice," "Machine Learning in Healthcare," and "Ethical Implications of AI in Medicine."
Studies focusing on AI applications in diagnostics, treatment planning, and patient care that reported measurable clinical outcomes were included. Non-clinical AI applications and articles published before 2014 were excluded. Selected studies were screened for relevance, and their quality was critically appraised to synthesize data reliably and rigorously. This systematic review includes the findings of 8 studies that pointed out the transformational role of AI in clinical medicine. AI tools, such as CNNs, achieved higher diagnostic accuracy than traditional methods, particularly in radiology and pathology. Predictive models efficiently supported risk stratification, early disease detection, and personalized medicine. Despite these improvements, significant hurdles, including data privacy, algorithmic bias, and resistance from clinicians regarding the "black-box" nature of AI, have yet to be surmounted. Explainable AI (XAI) has emerged as an attractive solution that promises to enhance interpretability and trust. As a whole, AI appears promising in enhancing diagnostics, treatment personalization, and clinical workflows by addressing systemic inefficiencies. AI thus has the potential to transform diagnostics, treatment strategies, and efficiency in clinical medicine. Overcoming obstacles such as concerns about data privacy, the danger of algorithmic bias, and difficulties with interpretability may pave the way for broader use and facilitate improvement in patient outcomes while transforming clinical workflows to bring sustainability into healthcare delivery.

Machine Learning Models in the Detection of MB2 Canal Orifice in CBCT Images.

Shetty S, Yuvali M, Ozsahin I, Al-Bayatti S, Narasimhan S, Alsaegh M, Al-Daghestani H, Shetty R, Castelino R, David LR, Ozsahin DU

PubMed · Jun 1, 2025
The objective of the present study was to determine the accuracy of machine learning (ML) models in the detection of mesiobuccal (MB2) canals in axial cone-beam computed tomography (CBCT) sections. A total of 2500 CBCT scans from the oral radiology department of University Dental Hospital, Sharjah, were screened to obtain 277 high-resolution, small field-of-view CBCT scans with maxillary molars. Among the 277 scans, 160 showed the presence of an MB2 orifice and the rest (117) did not. Two-dimensional axial images of these scans were then cropped. The images were classified and labelled as N (absence of MB2) and M (presence of MB2) by 2 examiners. The images were embedded using Google's Inception V3 and transferred to the ML classification model. Six different ML models (logistic regression [LR], naïve Bayes [NB], support vector machine [SVM], k-nearest neighbours [kNN], random forest [RF], neural network [NN]) were then tested on their ability to classify the images into M and N. The classification metrics (area under curve [AUC], accuracy, F1-score, precision) of the models were assessed in 3 steps. NN (0.896), LR (0.893), and SVM (0.886) showed the highest values of AUC with specified target variables (steps 2 and 3). The highest accuracy was exhibited by LR (0.849) and NN (0.848) with specified target variables. The highest precision (86.8%) and recall (92.5%) were observed with the SVM model. The success rates (AUC, precision, recall) of ML algorithms in the detection of MB2 were remarkable in our study. It was also observed that when the target variable was specified, significant success rates such as 86.8% precision and 92.5% recall were achieved. The present study showed promising results for ML-based detection of the MB2 canal using axial CBCT slices.
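The metrics reported above (accuracy, precision, recall, F1) can be computed directly from binary M/N predictions. A minimal sketch follows; the label arrays are illustrative placeholders, not the study's data.

```python
# Sketch: the binary classification metrics used in the MB2 study,
# computed from a confusion-matrix tally. Labels: 1 = MB2 present (M),
# 0 = MB2 absent (N). Example arrays are synthetic.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
m = binary_metrics(y_true, y_pred)
```

AUC additionally requires ranked scores rather than hard labels, which is why libraries compute it from the classifier's probability outputs.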

Predicting hepatocellular carcinoma response to TACE: A machine learning study based on 2.5D CT imaging and deep features analysis.

Lin C, Cao T, Tang M, Pu W, Lei P

PubMed · Jun 1, 2025
Prior to the commencement of treatment, it is essential to establish an objective method for accurately predicting the prognosis of patients with hepatocellular carcinoma (HCC) undergoing transarterial chemoembolization (TACE). In this study, we aimed to develop a machine learning (ML) model to predict the response of HCC patients to TACE based on CT image analysis. A public dataset from The Cancer Imaging Archive (TCIA), uploaded in August 2022, comprised a total of 105 cases, including 68 males and 37 females. The external testing dataset was collected from March 1, 2019 to July 1, 2022, and consisted of a total of 26 patients who underwent TACE treatment at our institution and were followed up for at least 3 months after TACE, including 22 males and 4 females. The public dataset was utilized for ResNet50 transfer learning and ML model construction, while the external testing dataset was used for model performance evaluation. The CT images with the largest lesions in the axial, sagittal, and coronal orientations were selected to construct 2.5D images. Pre-trained ResNet50 weights were adapted through transfer learning to serve as a feature extractor, deriving deep features for building ML models. Model performance was assessed using area under the curve (AUC), accuracy, F1-score, confusion matrix analysis, decision curves, and calibration curves. The AUC values for the external testing dataset were 0.90, 0.90, 0.91, and 0.89 for the random forest classifier (RFC), support vector classifier (SVC), logistic regression (LR), and extreme gradient boosting (XGB), respectively. The accuracy values for the external testing dataset were 0.79, 0.81, 0.80, and 0.80 for RFC, SVC, LR, and XGB, respectively. The F1-score values for the external testing dataset were 0.75, 0.77, 0.78, and 0.79 for RFC, SVC, LR, and XGB, respectively.
The ML model constructed using deep features from 2.5D images has the potential to be applied in predicting the prognosis of HCC patients following TACE treatment.
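The 2.5D construction described above can be sketched as selecting, per orientation, the slice with the largest lesion cross-section and stacking the three slices as channels. A minimal NumPy sketch under the simplifying assumption of a cubic volume (so all three slices share one shape); the lesion mask, volume, and axis-to-orientation mapping are illustrative placeholders, not the study's pipeline.

```python
import numpy as np

# Sketch of a 2.5D input: take the slice with the largest lesion
# cross-section along each of the three orthogonal axes (treated here as
# axial / coronal / sagittal) and stack them as a 3-channel image suitable
# for a ResNet-style feature extractor. Synthetic data only.

def build_2p5d(volume, mask):
    """volume, mask: (D, H, W) arrays with D == H == W (assumed cubic)."""
    axial_idx = int(np.argmax(mask.sum(axis=(1, 2))))     # largest lesion area per axial slice
    coronal_idx = int(np.argmax(mask.sum(axis=(0, 2))))
    sagittal_idx = int(np.argmax(mask.sum(axis=(0, 1))))
    slices = [volume[axial_idx, :, :],
              volume[:, coronal_idx, :],
              volume[:, :, sagittal_idx]]
    return np.stack(slices, axis=-1)                      # (H, W, 3)

rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))
msk = np.zeros((64, 64, 64))
msk[20:30, 15:25, 40:50] = 1                              # toy lesion block
img = build_2p5d(vol, msk)
```

In the study, such 3-channel images are then fed through pre-trained ResNet50 layers to obtain deep feature vectors for the downstream classifiers.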

Predicting strength of femora with metastatic lesions from single 2D radiographic projections using convolutional neural networks.

Synek A, Benca E, Licandro R, Hirtler L, Pahr DH

PubMed · Jun 1, 2025
Patients with metastatic bone disease are at risk of pathological femoral fractures and may require prophylactic surgical fixation. Current clinical decision support tools often overestimate fracture risk, leading to overtreatment. While novel scores integrating femoral strength assessment via finite element (FE) models show promise, they require 3D imaging, extensive computation, and are difficult to automate. Predicting femoral strength directly from single 2D radiographic projections using convolutional neural networks (CNNs) could address these limitations, but this approach has not yet been explored for femora with metastatic lesions. This study aimed to test whether CNNs can accurately predict the strength of femora with metastatic lesions from single 2D radiographic projections. CNNs with various architectures were developed and trained using an FE-model-generated training dataset. This training dataset was based on 36,000 modified computed tomography (CT) scans, created by randomly inserting artificial lytic lesions into the CT scans of 36 intact anatomical femoral specimens. From each modified CT scan, an anterior-posterior 2D projection was generated and femoral strength in one-legged stance was determined using nonlinear FE models. Following training, CNN performance was evaluated on an independent experimental test dataset consisting of 31 anatomical femoral specimens (16 intact, 15 with artificial lytic lesions). 2D projections of each specimen were created from the corresponding CT scans and femoral strength was assessed in mechanical tests. The CNNs' performance was evaluated using linear regression analysis and compared to 2D densitometric predictors (bone mineral density and content) and CT-based 3D FE models. All CNNs accurately predicted the experimentally measured strength in femora with and without metastatic lesions of the test dataset (R²≥0.80, CCC≥0.81).
In femora with metastatic lesions, the performance of the CNNs (best: R²=0.84, CCC=0.86) was considerably superior to 2D densitometric predictors (R²≤0.07) and slightly inferior to 3D FE models (R²=0.90, CCC=0.94). CNNs, trained on a large dataset generated via FE models, predicted experimentally measured strength of femora with artificial metastatic lesions with accuracy comparable to 3D FE models. By eliminating the need for 3D imaging and reducing computational demands, this novel approach demonstrates potential for application in a clinical setting.
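The two agreement measures quoted above, the coefficient of determination R² and Lin's concordance correlation coefficient (CCC), can be sketched in a few lines of NumPy. The strength values below are synthetic examples, not the study's measurements.

```python
import numpy as np

# Sketch: R² of predicted vs. measured strength, and Lin's CCC, which
# penalizes both scatter and systematic bias (shift/scale) between the
# two series. Example data are synthetic.

def r_squared(measured, predicted):
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

def ccc(measured, predicted):
    mx, my = np.mean(measured), np.mean(predicted)
    vx, vy = np.var(measured), np.var(predicted)          # population variance
    cov = np.mean((measured - mx) * (predicted - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

measured = np.array([4.1, 5.3, 2.8, 6.0, 3.5])            # e.g. strength in kN
predicted = np.array([4.0, 5.0, 3.0, 5.8, 3.6])
```

Unlike plain R², the CCC only reaches 1 when predictions fall on the identity line, which is why both are reported when comparing CNNs against FE models.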

Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning.

Zhang S, Chu S, Qiang Y, Zhao J, Wang Y, Wei X

PubMed · Jun 1, 2025
Computer-aided diagnosis systems based on deep neural networks rely heavily on datasets with high-quality labels. However, manual annotation for lesion diagnosis relies on image features, often requiring professional experience and complex image analysis processes. This inevitably introduces noisy labels, which can misguide the training of classification models. Our goal is to design an effective method to address the challenges posed by label noise in medical images. We propose a novel noise-tolerant medical image classification framework consisting of two phases: fore-training correction and progressive hard-sample enhanced learning. In the first phase, we design a dual-branch sample partition detection scheme that effectively classifies each instance into one of three subsets: clean, hard, or noisy. Simultaneously, we propose a hard-sample label refinement strategy based on class prototypes with confidence-perception weighting and an effective joint correction method for noisy samples, enabling the acquisition of higher-quality training data. In the second phase, we design a progressive hard-sample reinforcement learning method to enhance the model's ability to learn discriminative feature representations. This approach accounts for sample difficulty and mitigates the effects of label noise in medical datasets. Our framework achieves an accuracy of 82.39% on the pneumoconiosis dataset collected by our laboratory. On a five-class skin disease dataset with six different levels of label noise (0, 0.05, 0.1, 0.2, 0.3, and 0.4), the average accuracy over the last ten epochs reaches 88.51%, 86.64%, 85.02%, 83.01%, 81.95%, and 77.89%, respectively. For binary polyp classification under noise rates of 0.2, 0.3, and 0.4, the average accuracy over the last ten epochs is 97.90%, 93.77%, and 89.33%, respectively. The effectiveness of our proposed framework is demonstrated through its performance on three challenging datasets with both real and synthetic noise.
Experimental results further demonstrate the robustness of our method across varying noise rates.
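A common building block behind clean/hard/noisy partitioning is the small-loss heuristic: samples the model fits easily are likely correctly labeled, while high-loss samples are likely mislabeled. The paper's dual-branch detection scheme is more involved; the loss-quantile thresholds below are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of a loss-based clean/hard/noisy partition, a common
# heuristic in noisy-label learning (NOT the paper's exact dual-branch
# scheme). Per-sample losses and quantile cutoffs are illustrative.

def partition_by_loss(losses, clean_q=0.4, noisy_q=0.8):
    lo, hi = np.quantile(losses, [clean_q, noisy_q])
    clean = np.where(losses <= lo)[0]          # small loss: likely correct labels
    noisy = np.where(losses >= hi)[0]          # large loss: likely mislabeled
    hard = np.where((losses > lo) & (losses < hi))[0]
    return clean, hard, noisy

losses = np.array([0.05, 0.10, 0.20, 0.60, 0.70, 0.90, 1.50, 2.10, 0.15, 0.08])
clean, hard, noisy = partition_by_loss(losses)
```

Downstream, the clean subset trains normally, hard samples get refined labels (e.g. via class prototypes), and noisy samples are corrected or down-weighted.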

Multi-class brain malignant tumor diagnosis in magnetic resonance imaging using convolutional neural networks.

Lv J, Wu L, Hong C, Wang H, Wu Z, Chen H, Liu Z

PubMed · Jun 1, 2025
Glioblastoma (GBM), primary central nervous system lymphoma (PCNSL), and brain metastases (BM) are common malignant brain tumors with similar radiological features, and accurate, non-invasive diagnosis is essential for selecting appropriate treatment plans. This study develops a deep learning model, FoTNet, to improve the automatic diagnosis accuracy of these tumors, particularly for the relatively rare PCNSL tumor. The model integrates a frequency-based channel attention layer and the focal loss to address the class imbalance caused by the limited samples of PCNSL. A multi-center MRI dataset was constructed by collecting and integrating data from Sir Run Run Shaw Hospital, along with public datasets from UPENN and TCGA. The dataset includes T1-weighted contrast-enhanced (T1-CE) MRI images from 58 GBM, 82 PCNSL, and 269 BM cases, which were divided into training and testing sets with a 5:2 ratio. FoTNet achieved a classification accuracy of 92.5% and an average AUC of 0.9754 on the test set, significantly outperforming existing machine learning and deep learning methods in distinguishing among GBM, PCNSL, and BM. Through multiple validations, FoTNet has proven to be an effective and robust tool for accurately classifying these brain tumors, providing strong support for preoperative diagnosis and assisting clinicians in making more informed treatment decisions.
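The focal loss mentioned above counters class imbalance by down-weighting well-classified samples with a factor (1 - p_t)^gamma, so rare classes like PCNSL contribute relatively more to the gradient. A minimal multi-class NumPy sketch; the probabilities and gamma value are illustrative, not FoTNet's configuration.

```python
import numpy as np

# Sketch of the focal loss: standard cross-entropy scaled by
# (1 - p_t)^gamma, where p_t is the predicted probability of the true
# class. With gamma = 0 it reduces exactly to cross-entropy.

def focal_loss(probs, targets, gamma=2.0):
    """probs: (N, C) softmax outputs; targets: (N,) integer class ids."""
    p_t = probs[np.arange(len(targets)), targets]   # prob of the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

probs = np.array([[0.7, 0.2, 0.1],   # confident and correct -> strongly down-weighted
                  [0.4, 0.3, 0.3],   # uncertain -> keeps more weight
                  [0.1, 0.8, 0.1]])
targets = np.array([0, 0, 1])
loss = focal_loss(probs, targets)
ce = focal_loss(probs, targets, gamma=0.0)          # plain cross-entropy
```

Because the down-weighting factor is always ≤ 1, the focal loss is bounded above by the cross-entropy on the same predictions.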

Accuracy of a deep neural network for automated pulmonary embolism detection on dedicated CT pulmonary angiograms.

Zsarnoczay E, Rapaka S, Schoepf UJ, Gnasso C, Vecsey-Nagy M, Todoran TM, Hagar MT, Kravchenko D, Tremamunno G, Griffith JP, Fink N, Derrick S, Bowman M, Sam H, Tiller M, Godoy K, Condrea F, Sharma P, O'Doherty J, Maurovich-Horvat P, Emrich T, Varga-Szemes A

PubMed · Jun 1, 2025
To assess the performance of a Deep Neural Network (DNN)-based prototype algorithm for automated pulmonary embolism (PE) detection on CT pulmonary angiography (CTPA) scans. Patients who had previously undergone CTPA with three different systems (SOMATOM Force, go.Top, and Definition AS; Siemens Healthineers, Forchheim, Germany) because of suspected PE from September 2022 to January 2023 were retrospectively enrolled in this study (n = 1,000, 58.8% women). For detailed evaluation, all PE were divided into three location-based subgroups: central arteries, lobar branches, and peripheral regions. Clinical reports served as ground truth. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy were determined to evaluate the performance of DNN-based PE detection. Cases were excluded due to incomplete data (n = 32), inconclusive reports (n = 17), insufficient contrast detected in the pulmonary trunk (n = 40), or failure of the preprocessing algorithms (n = 8). Therefore, the final cohort included 903 cases with a PE prevalence of 12% (n = 110). The model achieved a sensitivity, specificity, PPV, and NPV of 84.6%, 95.1%, 70.5%, and 97.8%, respectively, and delivered an overall accuracy of 93.8%. Among the false positive cases (n = 39), common sources of error included lung masses, pneumonia, and contrast flow artifacts. Common sources of false negatives (n = 17) included chronic and subsegmental PEs. The proposed DNN-based algorithm provides excellent performance for the detection of PE, suggesting its potential utility to support radiologists in clinical reading and exam prioritization.
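The reported figures are mutually consistent and can be reproduced from the confusion matrix. The abstract gives n = 903, 110 PE-positive cases, 39 false positives, and 17 false negatives; the TP/TN counts below are derived from those (exact counts may differ by one case due to rounding in the abstract).

```python
# Reconstructing the confusion matrix from the numbers in the abstract
# and re-deriving the reported metrics. TP and TN are inferred, not
# stated directly.

tp = 110 - 17            # positives minus false negatives = 93
fn = 17
fp = 39
tn = 903 - 110 - fp      # negatives minus false positives = 754

sensitivity = tp / (tp + fn)     # reported: 84.6%
specificity = tn / (tn + fp)     # reported: 95.1%
ppv = tp / (tp + fp)             # reported: 70.5%
npv = tn / (tn + fn)             # reported: 97.8%
accuracy = (tp + tn) / 903       # reported: 93.8%
```

The low PPV relative to the high NPV reflects the 12% prevalence: with few true positives, even a modest false-positive count dilutes the positive predictive value.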

GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models.

Zotova D, Pinon N, Trombetta R, Bouet R, Jung J, Lartizien C

PubMed · Jun 1, 2025
Research in the cross-modal medical image translation domain has been very productive over the past few years in tackling the scarce availability of large curated multi-modality datasets, with the promising performance of GAN-based architectures. However, only a few of these studies assessed the task-based performance of these synthetic data, especially for the training of deep models. We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We first perform standard qualitative and quantitative visual quality evaluation. Then, we further explore the impact of using these fake PET data in the training of a deep unsupervised anomaly detection (UAD) model designed to detect subtle epilepsy lesions in T1 MRI and FDG PET images. We introduce novel diagnostic task-oriented quality metrics of the synthetic FDG PET data tailored to our unsupervised detection task, then use these fake data to train a use-case UAD model combining deep representation learning based on siamese autoencoders with an OC-SVM density support estimation model. This model is trained on normal subjects only and allows the detection of any variation from the pattern of the normal population. We compare the detection performance of models trained on 35 real T1 MR images of normal subjects paired either with 35 true PET images or with 35 synthetic PET images generated from the best performing generative models. Performance analysis is conducted on 17 exams of epilepsy patients undergoing surgery. The best performing GAN-based models allow generating realistic fake PET images of control subjects with SSIM and PSNR values around 0.9 and 23.8, respectively, and in-distribution (ID) with regard to the true control dataset. The best UAD model trained on these synthetic normative PET data reaches 74% sensitivity.
Our results confirm that GAN-based models are the best suited for MR T1 to FDG PET translation, outperforming transformer or diffusion models. We also demonstrate the diagnostic value of these synthetic data for the training of UAD models and evaluation on clinical exams of epilepsy patients. Our code and the normative image dataset are available.
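Of the two visual quality metrics quoted (SSIM ≈ 0.9, PSNR ≈ 23.8 dB), PSNR is the simpler: a log-scaled ratio of the signal's maximum value to the mean squared error. A minimal NumPy sketch, assuming images scaled to [0, 1] (so MAX = 1); the arrays are random placeholders, not PET data.

```python
import numpy as np

# Sketch of PSNR (peak signal-to-noise ratio) for judging a synthetic
# image against its reference: 10 * log10(MAX^2 / MSE), in dB.

def psnr(reference, test, max_val=1.0):
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
degraded = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
value = psnr(ref, degraded)
```

SSIM, by contrast, compares local luminance, contrast, and structure statistics in sliding windows, which is why it is usually taken from a library implementation rather than written inline.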

Detection of COVID-19, lung opacity, and viral pneumonia via X-ray using machine learning and deep learning.

Lamouadene H, El Kassaoui M, El Yadari M, El Kenz A, Benyoussef A, El Moutaouakil A, Mounkachi O

PubMed · Jun 1, 2025
The COVID-19 pandemic has significantly strained healthcare systems, highlighting the need for early diagnosis to isolate positive cases and prevent spread. This study combines machine learning, deep learning, and transfer learning techniques to automatically diagnose COVID-19 and other pulmonary conditions from radiographic images. First, we used Convolutional Neural Networks (CNNs) and a Support Vector Machine (SVM) classifier on a dataset of 21,165 chest X-ray images. Our model achieved an accuracy of 86.18%. This approach aids medical experts in rapidly and accurately detecting lung diseases. Next, we applied transfer learning using ResNet18 combined with SVM on a dataset comprising normal, COVID-19, lung opacity, and viral pneumonia images. This model outperformed traditional methods, with classification rates of 98% with Stochastic Gradient Descent (SGD), 97% with Adam, 96% with RMSProp, and 94% with Adagrad optimizers. Additionally, we incorporated two additional transfer learning models, EfficientNet-CNN and Xception-CNN, which achieved classification accuracies of 99.20% and 98.80%, respectively. However, we observed limitations in dataset diversity and representativeness, which may affect model generalization. Future work will focus on implementing advanced data augmentation techniques and collaborations with medical experts to enhance model performance. This research demonstrates the potential of cutting-edge deep learning techniques to improve diagnostic accuracy and efficiency in medical imaging applications.

Boosting polyp screening with improved point-teacher weakly semi-supervised.

Du X, Zhang X, Chen J, Li L

PubMed · Jun 1, 2025
Polyps, like a silent time bomb in the gut, are always lurking and can explode into deadly colorectal cancer at any time. Many methods attempt to maximize the early detection of colon polyps through screening; however, several challenges remain: (i) the scarcity of per-pixel annotation data and clinical features such as the blurred boundary and low contrast of polyps result in poor performance; (ii) existing weakly semi-supervised methods that directly use pseudo-labels to supervise the student tend to ignore the value brought by intermediate features in the teacher. To adapt the point-prompt teacher model to the challenging scenarios of complex medical images and limited annotation data, we creatively leverage the diverse inductive biases of CNNs and Transformers to extract robust and complementary representations of polyp features (boundary and context). At the same time, a novel teacher-student intermediate feature distillation method is introduced rather than just using pseudo-labels to guide student learning. Comprehensive experiments demonstrate that our proposed method effectively handles scenarios with limited annotations and exhibits good segmentation performance. All code is available at https://github.com/dxqllp/WSS-Polyp.
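The core idea of intermediate feature distillation is to add a term that pulls the student's intermediate features toward the teacher's, on top of the pseudo-label supervision. A minimal NumPy sketch of such a combined objective; the shapes, the MSE feature term, and the weighting factor alpha are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch: combined loss = cross-entropy against hard pseudo-labels
# + alpha * MSE between student and teacher intermediate features.
# All values are synthetic.

def distillation_loss(student_feat, teacher_feat,
                      student_logits, pseudo_labels, alpha=0.5):
    feat_term = np.mean((student_feat - teacher_feat) ** 2)
    # numerically stable softmax, then cross-entropy on hard pseudo-labels
    shifted = student_logits - student_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    ce_term = -np.mean(np.log(probs[np.arange(len(pseudo_labels)), pseudo_labels]))
    return ce_term + alpha * feat_term

s_feat = np.full((2, 8), 0.9)          # student intermediate features
t_feat = np.ones((2, 8))               # teacher intermediate features
logits = np.array([[2.0, 0.5], [0.2, 1.5]])
labels = np.array([0, 1])              # pseudo-labels from the teacher
loss = distillation_loss(s_feat, t_feat, logits, labels)
```

When the student's features already match the teacher's, the feature term vanishes and the loss reduces to plain pseudo-label supervision.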
