Page 13 of 34334 results

A federated learning-based privacy-preserving image processing framework for brain tumor detection from CT scans.

Al-Saleh A, Tejani GG, Mishra S, Sharma SK, Mousavirad SJ

PubMed · Jul 2 2025
The detection of brain tumors is crucial in medical imaging because accurate, early diagnosis directly benefits patients. Because traditional deep learning models pool all training data centrally, they raise concerns about privacy, regulatory compliance, and the heterogeneous data held by different institutions. We introduce the anisotropic-residual capsule hybrid Gorilla Badger optimized network (Aniso-ResCapHGBO-Net) framework for detecting brain tumors in a privacy-preserving, decentralized system shared by many healthcare institutions. ResNet-50 and capsule networks are combined to improve feature extraction and preserve the spatial structure of the images. The hybrid Gorilla Badger optimization algorithm (HGBOA) is applied to select the key features. Preprocessing includes anisotropic diffusion filtering, morphological operations, and mutual information-based image registration. Model updates are made secure and tamper-evident on a private Ethereum blockchain with a SHA-256 hashing scheme. The system is implemented in Python with TensorFlow and PyTorch. The model achieves 99.07% accuracy, 98.54% precision, and 99.82% sensitivity on benchmark CT imaging of brain tumors, and also reduces both false negatives and false positives. The framework keeps patient data protected without sacrificing brain tumor detection accuracy.
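The tamper-evident update log can be illustrated with a minimal sketch: each federated round's update is hashed together with the previous block's digest using SHA-256, so altering any past update invalidates every later hash. This illustrates the hashing scheme only, not the paper's Ethereum integration; `hash_update` and the `delta_norm` fields are hypothetical.

```python
import hashlib
import json

def hash_update(prev_hash: str, update: dict) -> str:
    """Chain a model update to the previous digest, as in a simple block."""
    payload = json.dumps({"prev": prev_hash, "update": update}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Genesis digest, then two hypothetical rounds of federated weight deltas.
h0 = hashlib.sha256(b"genesis").hexdigest()
h1 = hash_update(h0, {"round": 1, "delta_norm": 0.031})
h2 = hash_update(h1, {"round": 2, "delta_norm": 0.027})

# Tampering with round 1 after the fact changes every later digest.
h1_tampered = hash_update(h0, {"round": 1, "delta_norm": 0.9})
assert hash_update(h1_tampered, {"round": 2, "delta_norm": 0.027}) != h2
```

Because each digest commits to the whole history before it, auditors only need the latest hash to detect a rewritten update.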

Performance of two different artificial intelligence models in dental implant planning among four different implant planning software: a comparative study.

Roongruangsilp P, Narkbuakaew W, Khongkhunthian P

PubMed · Jul 2 2025
The integration of artificial intelligence (AI) into dental implant planning has emerged as a transformative approach to enhancing diagnostic accuracy and efficiency. This study evaluated the performance of two object detection models, Faster R-CNN and YOLOv7, in analyzing cross-sectional and panoramic images derived from DICOM files processed by four distinct dental imaging software platforms. The dataset consisted of 332 implant position images derived from DICOM files of 184 CBCT scans. Three hundred images were processed using DentiPlan Pro 3.7 software (NECTEC, NSTDA, Thailand) to develop the Faster R-CNN and YOLOv7 models for dental implant planning. For model testing, 32 additional implant position images, not included in the training set, were processed using four different software programs: DentiPlan Pro 3.7, DentiPlan Pro Plus 5.0 (DTP; NECTEC, NSTDA, Thailand), Implastation (ProDigiDent USA, USA), and Romexis 6.0 (Planmeca, Finland). Model performance was evaluated using detection rate, accuracy, precision, recall, F1 score, and the Jaccard Index (JI). Faster R-CNN achieved superior accuracy across imaging modalities, while YOLOv7 demonstrated higher detection rates, albeit with lower precision. The impact of image rendering algorithms on model performance underscores the need for standardized preprocessing pipelines. Although Faster R-CNN showed relatively higher performance metrics, statistical analysis revealed no significant differences between the models (p > 0.05). This study highlights the potential of AI-driven solutions in dental implant planning and underscores the need for further research in this area. The absence of statistically significant differences suggests that either model can be used effectively, depending on whether accuracy or detection rate is the priority. Furthermore, variations in image rendering algorithms across software platforms significantly influenced model outcomes; AI models for DICOM analysis should therefore rely on standardized image rendering to ensure consistent performance.
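The Jaccard Index (JI) used to score the detectors is the intersection-over-union of a predicted and a ground-truth bounding box. A minimal sketch with hypothetical pixel coordinates:

```python
def jaccard_index(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted implant site vs. a ground-truth annotation (hypothetical values).
print(jaccard_index((10, 10, 50, 50), (30, 30, 70, 70)))  # 1/7 ≈ 0.143
```

A JI of 1.0 means the predicted box coincides exactly with the annotation; disjoint boxes score 0.0.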

Developing an innovative lung cancer detection model for accurate diagnosis in AI healthcare systems.

Jian W, Haq AU, Afzal N, Khan S, Alsolai H, Alanazi SM, Zamani AT

PubMed · Jul 2 2025
Accurate lung cancer (LC) identification is a major challenge for AI-based healthcare systems, and various deep learning methods have been proposed for LC diagnosis. In this study, we propose an integrated deep learning model (CNN-GRU) for lung cancer detection, combining convolutional neural networks (CNNs) and gated recurrent units (GRUs). The CNN component extracts spatial features from lung CT images through convolutional and pooling layers; the extracted features are then fed into the GRU component for the final LC prediction. The CNN-GRU model was validated on LC data using the holdout validation technique. Data augmentation techniques such as rotation and brightness adjustment were used to enlarge the dataset for effective training. The Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (ADAM) optimizers were applied during training to tune the model parameters, and standard evaluation metrics were used to assess performance. Experimental results show that the model achieved 99.77% accuracy, surpassing previous models. The CNN-GRU model is therefore recommended for accurate LC detection in AI-based healthcare systems.
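Holdout validation, as used to validate the CNN-GRU model, simply reserves a fixed fraction of the samples for testing before training. A minimal sketch with a hypothetical list of CT image filenames; `holdout_split` and the 80/20 fraction are illustrative, not the paper's exact protocol:

```python
import random

def holdout_split(samples, test_fraction=0.2, seed=42):
    """Shuffle and split samples into train/test sets (holdout validation)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

scans = [f"ct_{i:04d}.png" for i in range(1000)]  # hypothetical filenames
train, test = holdout_split(scans)
print(len(train), len(test))  # 800 200
```

Fixing the seed makes the split reproducible, so the same held-out scans are used for every training run being compared.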

Clinical validation of AI-assisted animal ultrasound models for the diagnosis of early liver trauma.

Song Q, He X, Wang Y, Gao H, Tan L, Ma J, Kang L, Han P, Luo Y, Wang K

PubMed · Jul 2 2025
The study aimed to develop an AI-assisted ultrasound model for early liver trauma identification, using data from Bama miniature pigs and patients in Beijing, China. A deep learning model was created and fine-tuned with animal and clinical data, achieving high accuracy metrics. In internal tests, the model outperformed both junior and senior sonographers. External tests showed the model's effectiveness, with a Dice Similarity Coefficient of 0.74, True Positive Rate of 0.80, Positive Predictive Value of 0.74, and a 95% Hausdorff distance of 14.84; there, the model's performance was comparable to that of junior sonographers and slightly lower than that of senior sonographers. This AI model shows promise for liver injury detection, offering a tool with diagnostic capabilities similar to those of less experienced human operators.
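The Dice Similarity Coefficient reported in the external tests is twice the overlap between the predicted and reference lesion masks, divided by their combined size. A minimal sketch on flat binary masks (the masks are hypothetical):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity of two flat binary masks: 2*|A & B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 1, 0, 0]  # hypothetical predicted lesion pixels
truth = [1, 1, 0, 0, 0, 1, 1, 0]  # hypothetical ground-truth pixels
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```

Dice rewards overlap symmetrically, so a score of 0.74 as in the external test means the predicted lesion region covers most, but not all, of the annotated injury.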

Automatic detection of orthodontically induced external root resorption based on deep convolutional neural networks using CBCT images.

Xu S, Peng H, Yang L, Zhong W, Gao X

PubMed · Jul 2 2025
Orthodontically induced external root resorption (OIERR) is among the most common risks of orthodontic treatment. Traditional OIERR diagnosis is limited by subjective judgement and cumbersome manual measurement. This research aims to develop an intelligent OIERR detection model based on deep convolutional neural networks (CNNs) and cone-beam computed tomography (CBCT) images, providing auxiliary diagnostic support for orthodontists. Six pretrained CNN architectures were adopted, and 1717 CBCT slices were used for training the OIERR detection models. Model performance was tested on 429 CBCT slices, and the regions activated during decision-making were visualized through heatmaps. Performance was then compared with that of two orthodontists. The EfficientNet-B1 model, trained with hold-out cross-validation, proved the most effective for detecting OIERR: its accuracy, precision, sensitivity, specificity, and F1-score were 0.97, 0.98, 0.97, 0.98, and 0.98, respectively. These metrics markedly outperformed those of the orthodontists, whose accuracy, recall, and F1-score were 0.86, 0.78, and 0.87, respectively (P < 0.01). The heatmaps suggested that the model relied primarily on root features for decision-making. Automatic detection of OIERR from CBCT images with CNNs is therefore both accurate and efficient; the method outperforms orthodontists and could serve as a clinical tool for rapid screening and diagnosis of OIERR.

Convolutional neural network-based measurement of crown-implant ratio for implant-supported prostheses.

Zhang JP, Wang ZH, Zhang J, Qiu J

PubMed · Jul 1 2025
Research has shown that the crown-implant ratio (CIR) is a critical variable influencing the long-term stability of implant-supported prostheses in the oral cavity. However, inefficient manual measurement and inconsistent measurement methods cause significant inconvenience in both clinical and scientific work. This study aimed to develop an automated system for detecting the CIR of implant-supported prostheses from radiographs, with the goal of making radiograph interpretation more efficient for dentists. The measurement method was based on convolutional neural networks (CNNs) designed to recognize implant-supported prostheses and identify key points around them. You Only Look Once version 4 (YOLOv4) was used to locate each implant-supported prosthesis with a bounding box. Two CNNs were then used to identify key points: the first determined their approximate positions, and the second fine-tuned the output of the first to locate them precisely. The network was tested on a self-built dataset, and the anatomic CIR and clinical CIR were obtained simultaneously through the vertical distance method. Key point accuracy was validated with Normalized Error (NE) values, and a set of data was selected to compare machine and manual measurements. For statistical analysis, the paired t test was applied (α=.05). A dataset of 1106 images was constructed. The integrated networks recognized implant-supported prostheses and their surrounding key points satisfactorily, and the average NE value for key points indicated high accuracy. Statistical analysis confirmed no significant difference between machine and manual measurements of the crown-implant ratio (P>.05). Machine learning proved effective for identifying implant-supported prostheses and measuring their crown-implant ratios. Applied as a clinical tool for analyzing radiographs, this approach can help dentists obtain crown-implant ratio results efficiently and accurately.
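The vertical distance method reduces the clinical CIR to a ratio of two vertical spans between detected key points. A minimal sketch, assuming the relevant points are the occlusal surface of the crown, the crestal bone level, and the implant apex; the coordinates are hypothetical, not the paper's data:

```python
def crown_implant_ratio(crown_top_y, bone_level_y, implant_apex_y):
    """Clinical CIR via the vertical distance method: crown span (occlusal
    surface to crestal bone level) divided by implant span (bone level to
    implant apex). y-coordinates are in image pixels, increasing downward."""
    crown_len = bone_level_y - crown_top_y
    implant_len = implant_apex_y - bone_level_y
    return crown_len / implant_len

# Hypothetical key points as the second-stage CNN might output them.
cir = crown_implant_ratio(crown_top_y=120, bone_level_y=300, implant_apex_y=520)
print(round(cir, 2))  # 180 / 220 → 0.82
```

Once the key points are localized, the ratio itself is trivial to compute, which is why key point accuracy (the NE value) is the quantity the study validates.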

ResNet-Transformer deep learning model-aided detection of dens evaginatus.

Wang S, Liu J, Li S, He P, Zhou X, Zhao Z, Zheng L

PubMed · Jul 1 2025
Dens evaginatus is a dental morphological developmental anomaly; failure to detect it can lead to tubercle fracture and pulpal/periapical disease. Early detection and intervention are therefore important for preserving vital pulp. This study aimed to develop a deep learning model to assist dentists in the early diagnosis of dens evaginatus, supporting early intervention and mitigating the risk of severe consequences. A deep learning model was developed using panoramic radiographs from 1410 patients aged 3-16 years, with high-quality annotations, to enable automatic detection of dens evaginatus. Model performance and the model's efficacy in aiding dentists were evaluated. The model demonstrated commendable sensitivity (0.8600) and specificity (0.9200), and outperformed dentists in detecting dens evaginatus with an F1-score of 0.8866 versus their average of 0.8780, indicating greater precision. Furthermore, with its support, young dentists paid closer attention to dens evaginatus in tooth germs and achieved improved diagnostic accuracy. The integration of deep learning for dens evaginatus detection can therefore augment dentists' proficiency in identifying this anomaly.

A novel deep learning system for automated diagnosis and grading of lumbar spinal stenosis based on spine MRI: model development and validation.

Wang T, Wang A, Zhang Y, Liu X, Fan N, Yuan S, Du P, Wu Q, Chen R, Xi Y, Gu Z, Fei Q, Zang L

PubMed · Jul 1 2025
The study aimed to develop a single-stage deep learning (DL) screening system for automated binary and multiclass grading of lumbar central stenosis (LCS), lateral recess stenosis (LRS), and lumbar foraminal stenosis (LFS). Consecutive inpatients who underwent lumbar MRI at our center were retrospectively reviewed for the internal dataset, and axial and sagittal lumbar MRI scans were collected. Based on a new MRI diagnostic criterion, all MRI studies were labeled by two spine specialists and calibrated by a third spine specialist to serve as the reference standard. Two spine clinicians also labeled all studies independently so that their interobserver reliability could be compared with the DL model's. Samples were assigned to training, validation, and test sets in an 8:1:1 ratio, and additional patients from another center were enrolled as the external test dataset. A modified single-stage YOLOv5 network was designed for simultaneous detection of regions of interest (ROIs) and grading of LCS, LRS, and LFS, and quantitative metrics of accuracy and reliability were computed. In total, 420 and 50 patients were enrolled in the internal and external datasets, respectively. High recalls of 97.4%-99.8% were achieved for ROI detection of lumbar spinal stenosis (LSS). The system achieved multigrade area under the curve (AUC) values of 0.93-0.97 in the internal test set and 0.85-0.94 in the external test set for LCS, LRS, and LFS. In binary grading, the DL model achieved high sensitivities of 0.97 for LCS, 0.98 for LRS, and 0.96 for LFS in the internal test set, slightly better than the spine clinicians; in the external test set, the binary sensitivities were 0.98 for LCS, 0.96 for LRS, and 0.95 for LFS. For reliability, the kappa coefficients between the DL model and the reference standard were 0.92, 0.88, and 0.91 for LCS, LRS, and LFS, respectively, slightly higher than those of the nonexpert spine clinicians. The system demonstrated promising performance, especially in sensitivity, for automated diagnosis and grading of the different types of lumbar spinal stenosis on spine MRI, with reliability exceeding that of spine surgeons. It may serve as a triage tool for LSS to reduce misdiagnosis and streamline routine clinical workflows.
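The kappa coefficients used in the reliability assessment are Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance. A minimal sketch with hypothetical stenosis grades (0-3):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades from the model vs. the reference standard.
model = [0, 1, 2, 2, 3, 0, 1, 1, 2, 3]
ref   = [0, 1, 2, 3, 3, 0, 1, 2, 2, 3]
print(round(cohens_kappa(model, ref), 2))  # 0.73
```

A kappa near 0.9, as reported for the DL model, indicates almost perfect agreement with the reference standard rather than merely high raw accuracy on imbalanced grades.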

Improved unsupervised 3D lung lesion detection and localization by fusing global and local features: Validation in 3D low-dose computed tomography.

Lee JH, Oh SJ, Kim K, Lim CY, Choi SH, Chung MJ

PubMed · Jul 1 2025
Unsupervised anomaly detection (UAD) is crucial in low-dose computed tomography (LDCT). Recent AI approaches that leverage global features have enabled effective UAD with minimal training data from normal patients. However, because this approach does not use local features, it struggles to detect deep lesions within the lungs: conventional global features achieve high specificity but often limited sensitivity. A UAD model with high sensitivity is essential to prevent false negatives, especially when screening for diseases with high mortality rates. We present a new LDCT UAD model that leverages local features, achieving a previously unattainable increase in sensitivity over global methods (a 17.5% improvement). Furthermore, by integrating this approach with the conventional global-based technique, we consolidate the advantages of both (high sensitivity from the local model, high specificity from the global model) into a single, unified trained model (17.6% and 33.5% improvements, respectively). Without additional training, we anticipate significant diagnostic efficacy with this fixed model in LDCT applications where both high sensitivity and specificity are essential. Code is available at https://github.com/kskim-phd/Fusion-UADL.
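Why fusing a sensitive local detector with a specific global detector raises sensitivity can be seen with a simple OR-fusion of anomaly scores. This is an illustrative sketch, not the repository's actual fusion rule; `fuse_scores`, the threshold, and the scores are all hypothetical:

```python
def fuse_scores(global_score, local_score, threshold=0.5):
    """Flag a scan as anomalous if either detector fires (OR-fusion).
    Trades a little of the global model's specificity for the local
    model's sensitivity to deep lesions."""
    return max(global_score, local_score) >= threshold

# Hypothetical anomaly scores (global, local) for three scans:
# a deep lesion the global model misses, an obvious lesion, a normal scan.
cases = [(0.2, 0.9), (0.8, 0.7), (0.1, 0.2)]
print([fuse_scores(g, l) for g, l in cases])  # [True, True, False]
```

The first case shows the mechanism: the global score alone (0.2) would miss the deep lesion, while the fused decision still fires because the local score clears the threshold.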

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

PubMed · Jul 1 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department after a fall with suspected pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays alone and then with an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated alongside five other readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on each patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% hip, 35.2% pelvic ring) in 940 patients and 68 pelvic fractures (60% hip, 40% pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se-PELVIC 77.25% to 83.73%, p < 0.001; Se-HIP 93.24% to 96.49%, p < 0.001; Se-PELVIC RING 54.60% to 64.50%, p < 0.001), but a significant decrease in specificity (Spe-PELVIC 95.24% to 93.25%, p = 0.005; Spe-HIP 98.30% to 96.90%, p = 0.005). In the 100-patient sample, the two emergency physicians improved their fracture detection sensitivity across the pelvic area by +14.70% (p = 0.0011) and +10.29% (p < 0.007), respectively, without a significant decrease in specificity. For hip fractures, E1's sensitivity increased from 59.46% to 70.27% (p = 0.04) and E2's from 78.38% to 86.49% (p = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (p = 0.012) and E2's from 19.35% to 32.26% (p = 0.043). AI improved diagnostic performance for emergency physicians and for radiologists with limited experience in pelvic fracture screening.