Page 187 of 3593587 results

Development of a deep learning-based MRI diagnostic model for human Brucella spondylitis.

Wang B, Wei J, Wang Z, Niu P, Yang L, Hu Y, Shao D, Zhao W

PubMed | Jul 9 2025
Brucella spondylitis (BS) and tuberculous spondylitis (TS) are prevalent spinal infections with distinct treatment protocols. Rapid and accurate differentiation between these two conditions is crucial for effective clinical management; however, current imaging and pathogen-based diagnostic methods fall short of fully meeting clinical requirements. This study explores the feasibility of employing deep learning (DL) models based on conventional magnetic resonance imaging (MRI) to differentiate BS from TS. A total of 310 subjects were enrolled at our hospital, comprising 209 with BS and 101 with TS; they were randomly divided into a training set (n = 217) and a test set (n = 93). An additional 74 subjects from another hospital formed the external validation set. A Convolutional Block Attention Module (CBAM) was integrated into the ResNeXt-50 architecture, and the model was trained on sagittal T2-weighted images (T2WI). Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC), and diagnostic accuracy was compared against general models such as ResNet50, GoogleNet, EfficientNetV2, and VGG16. The CBAM-ResNeXt model showed superior performance, with accuracy, precision, recall, F1-score, and AUC of 0.942, 0.940, 0.928, 0.934, and 0.953, respectively. These metrics outperformed those of the general models. The proposed model offers promising potential for the diagnosis of BS and TS using conventional MRI. It could serve as an invaluable tool in clinical practice, providing a reliable reference for distinguishing between these two diseases.
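The accuracy, precision, recall, and F1-score reported for the CBAM-ResNeXt model can all be derived from the test set's confusion-matrix counts. A minimal illustrative sketch (the labels below are made up, not the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

With, say, labels `[1, 1, 1, 0, 0]` and predictions `[1, 1, 0, 0, 1]`, this yields an accuracy of 0.6 and precision, recall, and F1 of 2/3 each.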

A machine learning model reveals invisible microscopic variation in acute ischaemic stroke (≤ 6 h) with non-contrast computed tomography.

Tan J, Xiao M, Wang Z, Wu S, Han K, Wang H, Huang Y

PubMed | Jul 9 2025
In most medical centers, particularly primary hospitals, non-contrast computed tomography (NCCT) serves as the primary imaging modality for diagnosing acute ischemic stroke (AIS). However, because the density difference between the infarct and the surrounding normal brain tissue on NCCT images is small within the first 6 h post-onset, promptly and accurately localizing and quantifying the infarct at this early stage is challenging. This study investigated whether a radiomics-based machine learning (ML) model using NCCT could effectively assess the risk of AIS, enabling automated quantitative assessment of AIS lesions on NCCT images. In this retrospective study, NCCT images from 228 patients with AIS (< 6 h from onset) were included and paired with MRI diffusion-weighted imaging (DWI) images (obtained within 1 to 7 days of onset). NCCT and DWI images were co-registered using the Elastix toolbox. The internal dataset (153 AIS patients) included 179 AIS VOIs and 153 non-AIS VOIs as the training and validation groups. Subsequent cases (75 patients) after 2021 served as the independent test set, comprising 94 AIS VOIs and 75 non-AIS VOIs. The random forest (RF) model demonstrated robust diagnostic performance across the training, validation, and independent test sets. The areas under the receiver operating characteristic (ROC) curves were 0.858 (95% CI: 0.808-0.908), 0.829 (95% CI: 0.748-0.910), and 0.789 (95% CI: 0.717-0.860), respectively. Accuracies were 79.399%, 77.778%, and 73.965%; sensitivities were 81.679%, 77.083%, and 68.085%; and specificities were 76.471%, 78.431%, and 81.333%, respectively. NCCT-based radiomics combined with a machine learning model could discriminate between AIS and non-AIS patients within 6 h of onset. This approach holds promise for improving early stroke diagnosis and patient outcomes. Trial registration: not applicable.
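The AUCs reported above can be computed without tracing an explicit ROC curve, via the Mann-Whitney interpretation: the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch with illustrative inputs (not the study's scores):

```python
def auc_score(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive scores higher
    (ties count as half a win)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc_score([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])` returns 0.75, since three of the four positive/negative pairs are correctly ordered.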

Applying deep learning techniques to identify tonsilloliths in panoramic radiography.

Katı E, Baybars SC, Danacı Ç, Tuncer SA

PubMed | Jul 9 2025
Tonsilloliths appear on panoramic radiographs (PRs) as deposits located over the middle portion of the ramus of the mandible. Although tonsilloliths are clinically harmless, the high risk of misdiagnosis leads to unnecessary advanced examinations and interventions, jeopardizing patient safety and increasing unnecessary resource use in the healthcare system. This study therefore aims to meet an important clinical need by providing accurate and rapid diagnostic support. The dataset consisted of 275 PRs in total: 125 without tonsilloliths and 150 with tonsilloliths. ResNet and EfficientNet CNN models were assessed during model selection, and each model was evaluated for its learning capacity, complexity, and suitability for the task. After training, model effectiveness was evaluated using accuracy, recall, precision, and F1-score. Both the ResNet18 and EfficientNetB0 models differentiated between tonsillolith-present and tonsillolith-absent conditions with an average accuracy of 89%. ResNet101 underperformed compared with the other models, while EfficientNetB1 exhibited satisfactory accuracy in both categories. The EfficientNetB0 model achieved 93% precision, 87% recall, a 90% F1-score, and 89% accuracy. This study indicates that implementing AI-powered deep learning techniques would significantly improve the clinical diagnosis of tonsilloliths.

Enhancing automated detection and classification of dementia in individuals with cognitive impairment using artificial intelligence techniques.

Alotaibi SD, Alharbi AAK

PubMed | Jul 9 2025
Dementia is a degenerative and chronic disorder, increasingly prevalent among older adults, posing significant challenges in providing appropriate care. As the number of dementia cases continues to rise, delivering optimal care becomes more complex. Machine learning (ML) plays a crucial role in addressing this challenge by utilizing medical data to enhance care planning and management for individuals at risk of various types of dementia. Magnetic resonance imaging (MRI) is a commonly used method for analyzing neurological disorders. Recent evidence highlights the benefits of integrating artificial intelligence (AI) techniques with MRI, significantly enhancing the diagnostic accuracy for different forms of dementia. This paper explores the use of AI in the automated detection and classification of dementia, aiming to streamline early diagnosis and improve patient outcomes. Integrating ML models into clinical practice can transform dementia care by enabling early detection, personalized treatment plans, and more effective monitoring of disease progression. In this study, an Enhancing Automated Detection and Classification of Dementia in Thinking Inability Persons using Artificial Intelligence Techniques (EADCD-TIPAIT) technique is presented. The goal of the EADCD-TIPAIT technique is to detect and classify dementia in individuals with cognitive impairment using MRI imaging. To achieve this, the EADCD-TIPAIT method first preprocesses the input data by scaling it with z-score normalization. Next, it applies a binary greylag goose optimization (BGGO)-based feature selection approach to efficiently identify relevant features that distinguish between normal and dementia-affected brain regions. A wavelet neural network (WNN) classifier is then employed to detect and classify dementia. Finally, the improved salp swarm algorithm (ISSA) is implemented to optimally choose the WNN technique's hyperparameters. The EADCD-TIPAIT technique was evaluated on a dementia prediction dataset, where it achieved a superior accuracy of 95.00% across diverse measures.
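The z-score normalization step in the pipeline above is straightforward to sketch: each feature column is rescaled to zero mean and unit standard deviation. The toy feature matrix here is purely illustrative:

```python
import math

def zscore(rows):
    """Column-wise z-score normalization: (x - mean) / std per feature.
    Uses the population standard deviation; assumes no constant columns."""
    n_cols = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(n_cols)]
    stds = [math.sqrt(sum((r[j] - means[j]) ** 2 for r in rows) / len(rows))
            for j in range(n_cols)]
    return [[(r[j] - means[j]) / stds[j] for j in range(n_cols)]
            for r in rows]
```

For instance, `zscore([[1.0, 10.0], [3.0, 30.0]])` maps each column to `[-1.0, 1.0]`.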

MMDental - A multimodal dataset of tooth CBCT images with expert medical records.

Wang C, Zhang Y, Wu C, Liu J, Wu L, Wang Y, Huang X, Feng X, Wang Y

PubMed | Jul 9 2025
In the rapidly evolving field of dental intelligent healthcare, where Artificial Intelligence (AI) plays a pivotal role, the demand for multimodal datasets is critical. Existing public datasets are primarily composed of single-modal data, predominantly dental radiographs or scans, which limits the development of AI-driven applications for intelligent dental treatment. In this paper, we present a MultiModal Dental (MMDental) dataset to address this gap. MMDental comprises data from 660 patients, including 3D Cone-beam Computed Tomography (CBCT) images and corresponding detailed expert medical records with initial diagnoses and follow-up documentation. All CBCT scans were conducted under the guidance of professional physicians, and all patient records were reviewed by senior doctors. To the best of our knowledge, this is the first and largest dataset containing 3D CBCT images of teeth with corresponding medical records. Furthermore, we provide a comprehensive analysis of the dataset by exploring patient demographics, the prevalence of various dental conditions, and the disease distribution across age groups. We believe this work will be beneficial for further advancements in intelligent dental treatment.

Securing Healthcare Data Integrity: Deepfake Detection Using Autonomous AI Approaches.

Hsu CC, Tsai MY, Yu CM

PubMed | Jul 9 2025
The rapid evolution of deepfake technology poses critical challenges to healthcare systems, particularly in safeguarding the integrity of medical imaging, electronic health records (EHR), and telemedicine platforms. As autonomous AI becomes increasingly integrated into smart healthcare, the potential misuse of deepfakes to manipulate sensitive healthcare data or impersonate medical professionals highlights the urgent need for robust and adaptive detection mechanisms. In this work, we propose DProm, a dynamic deepfake detection framework leveraging visual prompt tuning (VPT) with a pre-trained Swin Transformer. Unlike traditional static detection models, which struggle to adapt to rapidly evolving deepfake techniques, DProm fine-tunes a small set of visual prompts to efficiently adapt to new data distributions with minimal computational and storage requirements. Comprehensive experiments demonstrate that DProm achieves state-of-the-art performance in both static cross-dataset evaluations and dynamic scenarios, ensuring robust detection across diverse data distributions. By addressing the challenges of scalability, adaptability, and resource efficiency, DProm offers a transformative solution for enhancing the security and trustworthiness of autonomous AI systems in healthcare, paving the way for safer and more reliable smart healthcare applications.

MRI-based interpretable clinicoradiological and radiomics machine learning model for preoperative prediction of pituitary macroadenomas consistency: a dual-center study.

Liang M, Wang F, Yang Y, Wen L, Wang S, Zhang D

PubMed | Jul 9 2025
To establish an interpretable and non-invasive machine learning (ML) model using clinicoradiological predictors and magnetic resonance imaging (MRI) radiomics features to predict the consistency of pituitary macroadenomas (PMAs) preoperatively. A total of 350 patients with PMA (272 from Xinqiao Hospital of Army Medical University and 78 from Daping Hospital of Army Medical University) were stratified and randomly divided into training and test cohorts in a 7:3 ratio. Tumor consistency was classified as soft or firm. Clinicoradiological predictors were examined using univariate and multivariate regression analyses. Radiomics features were selected with the minimum redundancy maximum relevance (mRMR) and least absolute shrinkage and selection operator (LASSO) algorithms. Logistic regression (LR) and random forest (RF) classifiers were applied to construct the models. Receiver operating characteristic (ROC) curves and decision curve analyses (DCA) were performed to compare and validate the predictive capacities of the models, and the area under the curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE) were compared. Shapley additive explanations (SHAP) were applied to investigate the optimal model's interpretability. The combined model predicted PMA consistency more effectively than the clinicoradiological and radiomics models. Specifically, the LR-combined model displayed the best prediction performance (test cohort: AUC = 0.913; ACC = 0.840). The SHAP-based explanation of the LR-combined model suggests that the wavelet-transformed and Laplacian of Gaussian (LoG) filter features extracted from T<sub>2</sub>WI and CE-T<sub>1</sub>WI play a dominant role, with the skewness of the original first-order features extracted from T<sub>2</sub>WI (T<sub>2</sub>WI_original_first-order_Skewness) making the most substantial contribution.
An interpretable machine learning model incorporating clinicoradiological predictors and multiparametric MRI (mpMRI)-based radiomics features may predict PMAs consistency, enabling tailored and precise therapies for patients with PMA.
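How an LR-combined model turns a clinicoradiological score and a radiomics score into a firmness probability can be sketched with the logistic function. The weights and bias below are placeholders, not the study's fitted coefficients:

```python
import math

def combined_probability(clinical_score, radiomics_score,
                         w_clin=1.2, w_rad=0.8, bias=-0.5):
    """Hypothetical logistic combination of a clinicoradiological score
    and a radiomics score; all coefficients here are illustrative."""
    z = w_clin * clinical_score + w_rad * radiomics_score + bias
    return 1.0 / (1.0 + math.exp(-z))
```

With zero inputs and zero bias the logistic output is exactly 0.5; larger combined scores push the probability monotonically toward 1.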

Development of Artificial Intelligence-Assisted Lumbar and Femoral BMD Estimation System Using Anteroposterior Lumbar X-Ray Images.

Moro T, Yoshimura N, Saito T, Oka H, Muraki S, Iidaka T, Tanaka T, Ono K, Ishikura H, Wada N, Watanabe K, Kyomoto M, Tanaka S

PubMed | Jul 9 2025
The early detection and treatment of osteoporosis and the prevention of fragility fractures are urgent societal issues. We developed an artificial intelligence-assisted diagnostic system that estimates not only lumbar but also femoral bone mineral density from anteroposterior lumbar X-ray images. We evaluated the bone mineral density estimation performance and osteoporosis classification accuracy of this system using lumbar X-ray images from a population-based cohort. The artificial neural network consisted of a deep neural network for estimating lumbar and femoral bone mineral density values and classifying lumbar X-ray images into osteoporosis categories. The deep neural network was trained on preprocessed X-ray images with dual-energy X-ray absorptiometry-derived lumbar and femoral bone mineral density values as the ground truth. Five-fold cross-validation was performed to evaluate the accuracy of the estimated bone mineral density. A total of 1454 X-ray images from 1454 participants were analyzed using the artificial neural network. For bone mineral density estimation, the mean absolute errors between dual-energy X-ray absorptiometry-derived and artificial intelligence-estimated values were 0.076 g/cm<sup>2</sup> for the lumbar spine and 0.071 g/cm<sup>2</sup> for the femur. For classifying patients with osteopenia, sensitivities for the lumbar spine and femur were 86.4% and 80.4%, respectively, and the respective specificities were 84.1% and 76.3%. CLINICAL SIGNIFICANCE: The system can estimate bone mineral density and classify the osteoporosis category not only of patients in clinics or hospitals but also of general inhabitants.
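The five-fold cross-validation and mean-absolute-error evaluation described above can be sketched as follows; the indices and values are illustrative, not the cohort's data:

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds,
    each serving once as the held-out validation set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between reference (e.g., DXA-derived)
    and estimated values, in the same units (e.g., g/cm^2)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Splitting 10 samples into 5 folds yields five disjoint folds of two indices each, and `mean_absolute_error([1.0, 2.0], [1.5, 2.5])` returns 0.5.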

Integrative multimodal ultrasound and radiomics for early prediction of neoadjuvant therapy response in breast cancer: a clinical study.

Wang S, Liu J, Song L, Zhao H, Wan X, Peng Y

PubMed | Jul 9 2025
This study aimed to develop an early predictive model for neoadjuvant therapy (NAT) response in breast cancer by integrating multimodal ultrasound (conventional B-mode, shear-wave elastography, and contrast-enhanced ultrasound) and radiomics with clinical-pathological data, and to evaluate its predictive accuracy after two cycles of NAT. This retrospective study included 239 breast cancer patients receiving neoadjuvant therapy, divided into training (n = 167) and validation (n = 72) cohorts. Multimodal ultrasound, comprising B-mode, shear-wave elastography (SWE), and contrast-enhanced ultrasound (CEUS), was performed at baseline and after two cycles. Tumors were segmented using a U-Net-based deep learning model with radiologist adjustment, and radiomic features were extracted via PyRadiomics. Candidate variables were screened using univariate analysis and multicollinearity checks, followed by LASSO and stepwise logistic regression to build three models: a clinical-ultrasound model, a radiomics-only model, and a combined model. Model performance for early response prediction was assessed using ROC analysis. In the training cohort (n = 167), Model_Clinic achieved an AUC of 0.85, with HER2 positivity, maximum tumor stiffness (Emax), stiffness heterogeneity (Estd), and the CEUS "radiation sign" emerging as independent predictors (all P < 0.05). The radiomics model showed moderate performance at baseline (AUC 0.69) but improved after two cycles (AUC 0.83), and a model using radiomic feature changes achieved an AUC of 0.79. Model_Combined demonstrated the best performance, with a training AUC of 0.91 (sensitivity 89.4%, specificity 82.9%). In the validation cohort (n = 72), all models showed comparable AUCs (Model_Combined ~ 0.90) without significant degradation, and Model_Combined significantly outperformed Model_Clinic and Model_RSA (DeLong P = 0.006 and 0.042, respectively).
In our study, integrating multimodal ultrasound and radiomic features improved the early prediction of NAT response in breast cancer, and could provide valuable information to enable timely treatment adjustments and more personalized management strategies.

4KAgent: Agentic Any Image to 4K Super-Resolution

Yushen Zuo, Qi Zheng, Mingyang Wu, Xinrui Jiang, Renjie Li, Jian Wang, Yide Zhang, Gengchen Mai, Lihong V. Wang, James Zou, Xiaoyu Wang, Ming-Hsuan Yang, Zhengzhong Tu

arXiv preprint | Jul 9 2025
We present 4KAgent, a unified agentic super-resolution generalist system designed to universally upscale any image to 4K resolution (and even higher, if applied iteratively). Our system can transform images from extremely low resolutions with severe degradations, for example, highly distorted inputs at 256x256, into crystal-clear, photorealistic 4K outputs. 4KAgent comprises three core components: (1) Profiling, a module that customizes the 4KAgent pipeline based on bespoke use cases; (2) A Perception Agent, which leverages vision-language models alongside image quality assessment experts to analyze the input image and make a tailored restoration plan; and (3) A Restoration Agent, which executes the plan, following a recursive execution-reflection paradigm, guided by a quality-driven mixture-of-expert policy to select the optimal output for each step. Additionally, 4KAgent embeds a specialized face restoration pipeline, significantly enhancing facial details in portrait and selfie photos. We rigorously evaluate our 4KAgent across 11 distinct task categories encompassing a total of 26 diverse benchmarks, setting new state-of-the-art on a broad spectrum of imaging domains. Our evaluations cover natural images, portrait photos, AI-generated content, satellite imagery, fluorescence microscopy, and medical imaging like fundoscopy, ultrasound, and X-ray, demonstrating superior performance in terms of both perceptual (e.g., NIQE, MUSIQ) and fidelity (e.g., PSNR) metrics. By establishing a novel agentic paradigm for low-level vision tasks, we aim to catalyze broader interest and innovation within vision-centric autonomous agents across diverse research communities. We will release all the code, models, and results at: https://4kagent.github.io.
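PSNR, one of the fidelity metrics cited in the 4KAgent evaluation, is a simple function of the mean squared error between two images. A minimal sketch over flat pixel lists (illustrative only, not 4KAgent's evaluation code):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-sized images
    given as flat lists of pixel values. Identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For 8-bit images that differ by one gray level everywhere, the MSE is 1 and the PSNR is about 48.13 dB; higher values indicate closer fidelity to the reference.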