
Alotaibi SD, Alharbi AAK

PubMed · Jul 9, 2025
Dementia is a chronic, degenerative disorder that is increasingly prevalent among older adults and poses significant challenges for providing appropriate care. As the number of dementia cases continues to rise, delivering optimal care becomes more complex. Machine learning (ML) plays a crucial role in addressing this challenge by utilizing medical data to enhance care planning and management for individuals at risk of various types of dementia. Magnetic resonance imaging (MRI) is a commonly used method for analyzing neurological disorders. Recent evidence highlights the benefits of integrating artificial intelligence (AI) techniques with MRI, significantly enhancing diagnostic accuracy for different forms of dementia. This paper explores the use of AI in the automated detection and classification of dementia, aiming to streamline early diagnosis and improve patient outcomes. Integrating ML models into clinical practice can transform dementia care by enabling early detection, personalized treatment plans, and more effective monitoring of disease progression. In this study, an Enhancing Automated Detection and Classification of Dementia in Thinking Inability Persons using Artificial Intelligence Techniques (EADCD-TIPAIT) technique is presented. The goal of the EADCD-TIPAIT technique is to detect and classify dementia in individuals with cognitive impairment using MRI. To achieve this, the EADCD-TIPAIT method first preprocesses the input data by scaling it with z-score normalization. Next, it applies a binary greylag goose optimization (BGGO)-based feature selection approach to efficiently identify the features that distinguish normal from dementia-affected brain regions. In addition, a wavelet neural network (WNN) classifier is employed to detect and classify dementia. Finally, the improved salp swarm algorithm (ISSA) is implemented to optimally select the WNN hyperparameters. The EADCD-TIPAIT technique is evaluated on a dementia prediction dataset, where performance validation showed a superior accuracy of 95.00% across diverse measures.
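A minimal sketch of the pipeline shape described above (z-score scaling, feature selection, classification), using generic scikit-learn stand-ins for the BGGO, WNN, and ISSA components and synthetic data in place of MRI-derived features:

```python
# Sketch only: SelectKBest stands in for BGGO feature selection, an MLP stands in
# for the wavelet neural network, and no ISSA hyperparameter search is performed.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler               # z-score normalization
from sklearn.feature_selection import SelectKBest, f_classif   # stand-in for BGGO selection
from sklearn.neural_network import MLPClassifier               # stand-in for the WNN classifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # hypothetical MRI-derived feature vectors
y = rng.integers(0, 2, size=200)      # toy labels: 0 = normal, 1 = dementia

pipe = Pipeline([
    ("zscore", StandardScaler()),
    ("select", SelectKBest(f_classif, k=16)),
    ("clf", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
])
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```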

Wang C, Zhang Y, Wu C, Liu J, Wu L, Wang Y, Huang X, Feng X, Wang Y

PubMed · Jul 9, 2025
In the rapidly evolving field of intelligent dental healthcare, where Artificial Intelligence (AI) plays a pivotal role, the demand for multimodal datasets is critical. Existing public datasets are primarily composed of single-modal data, predominantly dental radiographs or scans, which limits the development of AI-driven applications for intelligent dental treatment. In this paper, we present the MultiModal Dental (MMDental) dataset to address this gap. MMDental comprises data from 660 patients, including 3D Cone-beam Computed Tomography (CBCT) images and corresponding detailed expert medical records with initial diagnoses and follow-up documentation. All CBCT scans were conducted under the guidance of professional physicians, and all patient records were reviewed by senior doctors. To the best of our knowledge, this is the first and largest dataset containing 3D CBCT images of teeth with corresponding medical records. Furthermore, we provide a comprehensive analysis of the dataset by exploring patient demographics, the prevalence of various dental conditions, and the disease distribution across age groups. We believe this work will support further advancements in intelligent dental treatment.
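As a hypothetical illustration of the kind of demographic and prevalence summary described above (the column names and values are assumptions, not the actual MMDental schema):

```python
# Toy pandas sketch: cross-tabulate diagnoses by age group for a patient record table.
import pandas as pd

records = pd.DataFrame({
    "age": [24, 37, 52, 61, 45, 33],                     # hypothetical patient ages
    "diagnosis": ["impacted tooth", "periodontitis", "periodontitis",
                  "edentulism", "caries", "impacted tooth"],
})
records["age_group"] = pd.cut(records["age"], bins=[0, 30, 45, 60, 120],
                              labels=["<30", "30-44", "45-59", "60+"])
print(records.groupby("age_group", observed=True)["diagnosis"]
             .value_counts()
             .unstack(fill_value=0))
```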

Guo Y, Huang X, Chen W, Nakamoto I, Zhuang W, Chen H, Feng J, Wu J

PubMed · Jul 9, 2025
Magnetic resonance imaging of the lumbar spine is a key technique for clarifying the cause of disease. The greatest challenges today are the repetitive, time-consuming process of interpreting these complex MR images and the variability of diagnostic results among physicians with different levels of experience. To address these issues, this study developed an improved YOLOv8 model (GE-YOLOv8) that combines a gradient search (GS) module and efficient channel attention (ECA). To address the difficulty of intervertebral disc feature extraction, the GS module was introduced into the backbone network, enhancing feature learning for key structures through a gradient splitting strategy while reducing the number of parameters by 2.1%. The ECA module optimizes the weights of the feature channels and enhances detection sensitivity for small-target lesions, improving mAP50 by 4.4% over YOLOv8. With YOLOv8 as the baseline, GE-YOLOv8 demonstrated the significance of these improvements (P < .001). Experimental results on a dataset from the Pingtan Branch of Union Hospital of Fujian Medical University and an external test dataset show that the model has excellent accuracy.
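For reference, a generic efficient channel attention (ECA) block of the kind cited above can be sketched as follows; the exact variant integrated into GE-YOLOv8 may differ:

```python
# Generic ECA block: global average pooling, a 1D convolution across channels,
# and a sigmoid gate that re-weights the feature channels.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)                 # per-channel global context
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> channel descriptor -> 1D conv across channels -> gate
        w = self.avg_pool(x).squeeze(-1).transpose(1, 2)                 # (B, 1, C)
        w = self.sigmoid(self.conv(w)).transpose(1, 2).unsqueeze(-1)     # (B, C, 1, 1)
        return x * w                                                     # re-weighted features

feat = torch.randn(2, 64, 40, 40)    # toy feature map
print(ECA()(feat).shape)             # torch.Size([2, 64, 40, 40])
```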

Putra RH, Astuti ER, Nurrachman AS, Savitri Y, Vadya AV, Khairunisa ST, Iikubo M

PubMed · Jul 9, 2025
The study aimed to review the applicability and performance of various Convolutional Neural Network (CNN) models for the identification of periodontal bone loss (PBL) in digital periapical radiographs through classification, detection, and segmentation approaches. We searched the PubMed, IEEE Xplore, and SCOPUS databases for articles published up to June 2024. After the selection process, a total of 11 studies were included in this review. The reviewed studies demonstrated that CNNs have significant potential for the automatic identification of PBL on periapical radiographs through classification and segmentation approaches. CNN architectures can be utilized to classify the presence or absence of PBL, grade the severity or degree of PBL, and segment PBL areas. CNNs showed promising performance for PBL identification on periapical radiographs. Future research should focus on dataset preparation, proper selection of CNN architecture, and robust performance evaluation to improve the models. An optimized CNN architecture is expected to assist dentists by providing accurate and efficient identification of PBL.
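As an illustrative sketch (not drawn from any specific reviewed study) of the classification approach, a CNN backbone can be repurposed to label a periapical radiograph as PBL-present or PBL-absent:

```python
# Toy transfer-learning setup: replace the classifier head of a standard backbone
# for a binary PBL / no-PBL decision. The input here is a random tensor standing
# in for a preprocessed radiograph.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)           # weights="IMAGENET1K_V1" for transfer learning
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: no-PBL / PBL

dummy_radiograph = torch.randn(1, 3, 224, 224)  # placeholder input
logits = model(dummy_radiograph)
print(logits.softmax(dim=1))                    # class probabilities
```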

Hsu CC, Tsai MY, Yu CM

PubMed · Jul 9, 2025
The rapid evolution of deepfake technology poses critical challenges to healthcare systems, particularly in safeguarding the integrity of medical imaging, electronic health records (EHR), and telemedicine platforms. As autonomous AI becomes increasingly integrated into smart healthcare, the potential misuse of deepfakes to manipulate sensitive healthcare data or impersonate medical professionals highlights the urgent need for robust and adaptive detection mechanisms. In this work, we propose DProm, a dynamic deepfake detection framework leveraging visual prompt tuning (VPT) with a pre-trained Swin Transformer. Unlike traditional static detection models, which struggle to adapt to rapidly evolving deepfake techniques, DProm fine-tunes a small set of visual prompts to efficiently adapt to new data distributions with minimal computational and storage requirements. Comprehensive experiments demonstrate that DProm achieves state-of-the-art performance in both static cross-dataset evaluations and dynamic scenarios, ensuring robust detection across diverse data distributions. By addressing the challenges of scalability, adaptability, and resource efficiency, DProm offers a transformative solution for enhancing the security and trustworthiness of autonomous AI systems in healthcare, paving the way for safer and more reliable smart healthcare applications.
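A simplified, ViT-style toy illustration of shallow visual prompt tuning (freeze a pretrained backbone and learn only a few prompt tokens prepended to the patch tokens); DProm's Swin-based implementation with windowed attention is more involved than this sketch:

```python
# Core VPT idea only: the "pretrained" encoder is frozen, and training updates
# just the prompt tokens and the classification head.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, dim=64, n_prompts=8, n_classes=2):
        super().__init__()
        self.patch_embed = nn.Linear(16 * 16 * 3, dim)               # toy patch embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.prompts = nn.Parameter(torch.zeros(1, n_prompts, dim))  # learnable prompt tokens
        self.head = nn.Linear(dim, n_classes)                        # real / fake head
        for p in self.patch_embed.parameters():                      # freeze the backbone
            p.requires_grad = False
        for p in self.encoder.parameters():
            p.requires_grad = False

    def forward(self, patches):                                      # patches: (B, 49, 768)
        tokens = self.patch_embed(patches)                           # (B, 49, dim)
        prompts = self.prompts.expand(tokens.size(0), -1, -1)        # (B, n_prompts, dim)
        tokens = torch.cat([prompts, tokens], dim=1)                 # prepend prompt tokens
        return self.head(self.encoder(tokens).mean(dim=1))           # (B, 2) logits

model = PromptedEncoder()
print(model(torch.randn(2, 49, 16 * 16 * 3)).shape)                  # torch.Size([2, 2])
print("trainable parameters:",
      sum(p.numel() for p in model.parameters() if p.requires_grad)) # prompts + head only
```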

Liang M, Wang F, Yang Y, Wen L, Wang S, Zhang D

PubMed · Jul 9, 2025
To establish an interpretable and non-invasive machine learning (ML) model using clinicoradiological predictors and magnetic resonance imaging (MRI) radiomics features to predict the consistency of pituitary macroadenomas (PMAs) preoperatively. A total of 350 patients with PMA (272 from Xinqiao Hospital of Army Medical University and 78 from Daping Hospital of Army Medical University) were stratified and randomly divided into training and test cohorts in a 7:3 ratio. Tumor consistency was classified as soft or firm. Clinicoradiological predictors were examined using univariate and multivariate regression analyses. Radiomics features were selected with the minimum redundancy maximum relevance (mRMR) and least absolute shrinkage and selection operator (LASSO) algorithms. Logistic regression (LR) and random forest (RF) classifiers were applied to construct the models. Receiver operating characteristic (ROC) curves and decision curve analyses (DCA) were performed to compare and validate the predictive capacities of the models, and the area under the curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE) were compared. Shapley additive explanations (SHAP) were applied to investigate the optimal model's interpretability. The combined model predicted PMA consistency more effectively than the clinicoradiological and radiomics models. Specifically, the LR-combined model displayed the best prediction performance (test cohort: AUC = 0.913; ACC = 0.840). The SHAP-based explanation of the LR-combined model suggests that the wavelet-transformed and Laplacian of Gaussian (LoG) filter features extracted from T2WI and CE-T1WI occupy a dominant position, while the skewness of the original first-order features extracted from T2WI (T2WI_original_first-order_Skewness) made the most substantial contribution. An interpretable machine learning model incorporating clinicoradiological predictors and multiparametric MRI (mpMRI)-based radiomics features may predict PMA consistency, enabling tailored and precise therapies for patients with PMA.
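A minimal sketch of the radiomics modelling shape described above: LASSO-based feature selection followed by a logistic regression classifier evaluated by AUC. mRMR, the RF comparison, and the SHAP analysis are omitted, and the data are synthetic stand-ins for radiomics features:

```python
# Sketch only: LassoCV-driven feature selection inside a standard scikit-learn pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=350, n_features=100, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("lasso_select", SelectFromModel(LassoCV(cv=5, random_state=0))),  # LASSO feature selection
    ("lr", LogisticRegression(max_iter=1000)),                          # LR classifier
])
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```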

Carriero S, Canella R, Cicchetti F, Angileri A, Bruno A, Biondetti P, Colciago RR, D'Antonio A, Della Pepa G, Grassi F, Granata V, Lanza C, Santicchia S, Miceli A, Piras A, Salvestrini V, Santo G, Pesapane F, Barile A, Carrafiello G, Giovagnoni A

PubMed · Jul 9, 2025
The integration of artificial intelligence (AI) into clinical practice, particularly within radiology, nuclear medicine and radiation oncology, is transforming diagnostic and therapeutic processes. AI-driven tools, especially in deep learning and machine learning, have shown remarkable potential in enhancing image recognition, analysis and decision-making. This technological advancement allows for the automation of routine tasks, improved diagnostic accuracy, and the reduction of human error, leading to more efficient workflows. Moreover, the successful implementation of AI in healthcare requires comprehensive education and training for young clinicians, with a pressing need to incorporate AI into residency programmes, ensuring that future specialists are equipped with traditional skills and a deep understanding of AI technologies and their clinical applications. This includes knowledge of software, data analysis, imaging informatics and ethical considerations surrounding AI use in medicine. By fostering interdisciplinary integration and emphasising AI education, healthcare professionals can fully harness AI's potential to improve patient outcomes and advance the field of medical imaging and therapy. This review aims to evaluate how AI influences radiology, nuclear medicine and radiation oncology, while highlighting the necessity for specialised AI training in medical education to ensure its successful clinical integration.

Wu C, Wang L, Wang N, Shiao S, Dou T, Hsu YC, Christodoulou AG, Xie Y, Li D

PubMed · Jul 9, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To improve the generalizability of pathologic complete response (pCR) prediction following neoadjuvant chemotherapy using deep learning (DL)-based retrospective pharmacokinetic quantification (RoQ) of early-treatment dynamic contrast-enhanced (DCE) MRI. Materials and Methods This multicenter retrospective study included breast MRI data from four publicly available datasets of patients with breast cancer acquired from May 2002 to November 2016. RoQ was performed using a previously developed DL model for clinical multiphasic DCE-MRI datasets. Radiomic analysis was performed on RoQ maps and conventional enhancement maps. These data, together with clinicopathologic variables and shape-based radiomic analysis, were subsequently applied in pCR prediction using logistic regression. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC). Results A total of 1073 female patients with breast cancer were included. The proposed method showed improved consistency and generalizability compared with the reference method, achieving higher AUCs across external datasets (0.82 [CI: 0.72-0.91], 0.75 [CI: 0.71-0.79], and 0.77 [CI: 0.66-0.86] for Datasets A2, B, and C, respectively). On Dataset A2 (from the same study as the training dataset), there was no significant difference in performance between the proposed method and reference method (<i>P</i> = .80). Notably, on the combined external datasets, the proposed method significantly outperformed the reference method (AUC: 0.75 [CI: 0.72- 0.79] vs 0.71 [CI: 0.68-0.76], <i>P</i> = .003). Conclusion This work offers a novel approach to improve the generalizability and predictive accuracy of pCR response in breast cancer across diverse datasets, achieving higher and more consistent AUC scores than existing methods. ©RSNA, 2025.

Moro T, Yoshimura N, Saito T, Oka H, Muraki S, Iidaka T, Tanaka T, Ono K, Ishikura H, Wada N, Watanabe K, Kyomoto M, Tanaka S

PubMed · Jul 9, 2025
The early detection and treatment of osteoporosis and the prevention of fragility fractures are urgent societal issues. We developed an artificial intelligence-assisted diagnostic system that estimates not only lumbar but also femoral bone mineral density from anteroposterior lumbar X-ray images. We evaluated the bone mineral density estimation performance and the osteoporosis classification accuracy of this system using lumbar X-ray images from a population-based cohort. The artificial neural network consisted of a deep neural network for estimating lumbar and femoral bone mineral density values and classifying lumbar X-ray images into osteoporosis categories. The deep neural network was trained on preprocessed X-ray images, with dual-energy X-ray absorptiometry-derived lumbar and femoral bone mineral density values as the ground truth. Five-fold cross-validation was performed to evaluate the accuracy of the estimated bone mineral density. A total of 1454 X-ray images from 1454 participants were analyzed with the artificial neural network. For bone mineral density estimation, the mean absolute errors between dual-energy X-ray absorptiometry-derived and artificial intelligence-estimated values were 0.076 g/cm² for the lumbar spine and 0.071 g/cm² for the femur. For classifying patients with osteopenia, sensitivities for the lumbar spine and femur were 86.4% and 80.4%, respectively, and the corresponding specificities were 84.1% and 76.3%. CLINICAL SIGNIFICANCE: The system can estimate bone mineral density and classify the osteoporosis category not only for patients in clinics or hospitals but also for general inhabitants.
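A sketch of the five-fold cross-validation and mean-absolute-error evaluation described above, with a generic regressor standing in for the deep neural network and synthetic features standing in for the X-ray images:

```python
# Sketch only: 5-fold CV of a regressor predicting a continuous BMD-like target,
# reporting the mean absolute error in the same units as the ground truth.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                         # stand-in image features
y = 0.9 + 0.1 * X[:, 0] + rng.normal(0, 0.05, 300)     # toy DXA-derived BMD values (g/cm^2)

maes = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    reg.fit(X[tr], y[tr])
    maes.append(mean_absolute_error(y[te], reg.predict(X[te])))
print("mean MAE (g/cm^2):", np.mean(maes))
```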

Wang S, Liu J, Song L, Zhao H, Wan X, Peng Y

PubMed · Jul 9, 2025
This study aimed to develop an early predictive model for neoadjuvant therapy (NAT) response in breast cancer by integrating multimodal ultrasound (conventional B-mode, shear-wave elastography, and contrast-enhanced ultrasound) and radiomics with clinical-pathological data, and to evaluate its predictive accuracy after two cycles of NAT. This retrospective study included 239 breast cancer patients receiving neoadjuvant therapy, divided into training (n = 167) and validation (n = 72) cohorts. Multimodal ultrasound (B-mode, shear-wave elastography [SWE], and contrast-enhanced ultrasound [CEUS]) was performed at baseline and after two cycles. Tumors were segmented using a U-Net-based deep learning model with radiologist adjustment, and radiomic features were extracted via PyRadiomics. Candidate variables were screened using univariate analysis and multicollinearity checks, followed by LASSO and stepwise logistic regression to build three models: a clinical-ultrasound model, a radiomics-only model, and a combined model. Model performance for early response prediction was assessed using ROC analysis. In the training cohort (n = 167), Model_Clinic achieved an AUC of 0.85, with HER2 positivity, maximum tumor stiffness (Emax), stiffness heterogeneity (Estd), and the CEUS "radiation sign" emerging as independent predictors (all P < 0.05). The radiomics model showed moderate performance at baseline (AUC 0.69) but improved after two cycles (AUC 0.83), and a model using changes in radiomic features achieved an AUC of 0.79. Model_Combined demonstrated the best performance, with a training AUC of 0.91 (sensitivity 89.4%, specificity 82.9%). In the validation cohort (n = 72), all models showed comparable AUCs (Model_Combined ~ 0.90) without significant degradation, and Model_Combined significantly outperformed Model_Clinic and Model_RSA (DeLong P = 0.006 and 0.042, respectively). Integrating multimodal ultrasound and radiomic features improved the early prediction of NAT response in breast cancer and could provide valuable information to enable timely treatment adjustments and more personalized management strategies.
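A minimal sketch of PyRadiomics feature extraction of the kind described above; the file paths are hypothetical placeholders for a segmented ultrasound image and its tumor mask, and the downstream LASSO / stepwise logistic regression would follow a standard scikit-learn pattern:

```python
# Sketch only: default PyRadiomics extractor with wavelet-filtered features enabled.
from radiomics import featureextractor  # pip install pyradiomics

extractor = featureextractor.RadiomicsFeatureExtractor()   # default feature classes and settings
extractor.enableImageTypeByName("Wavelet")                  # optionally add wavelet-filtered features

features = extractor.execute("bmode_tumor.nii.gz",          # hypothetical image path
                             "bmode_tumor_mask.nii.gz")     # hypothetical mask path
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(numeric), "radiomic features extracted")
```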
