Page 24 of 45442 results

A federated learning-based privacy-preserving image processing framework for brain tumor detection from CT scans.

Al-Saleh A, Tejani GG, Mishra S, Sharma SK, Mousavirad SJ

PubMed · Jul 2, 2025
The detection of brain tumors is crucial in medical imaging, because accurate and early diagnosis can improve patient outcomes. Because traditional deep learning models pool all training data centrally, they raise concerns about privacy, regulatory compliance, and the heterogeneous data held by different institutions. We introduce the anisotropic-residual capsule hybrid Gorilla Badger optimized network (Aniso-ResCapHGBO-Net) framework for detecting brain tumors in a privacy-preserving, decentralized system shared by multiple healthcare institutions. ResNet-50 and capsule networks are combined to improve feature extraction while preserving the spatial structure of the images. The hybrid Gorilla Badger optimization algorithm (HGBOA) is applied to select the key features. Preprocessing includes anisotropic diffusion filtering, morphological operations, and mutual information-based image registration. Model updates are made secure and tamper-evident on a private Ethereum blockchain using a SHA-256 hashing scheme. The framework is implemented in Python with TensorFlow and PyTorch. The model achieves 99.07% accuracy, 98.54% precision, and 99.82% sensitivity on benchmark CT imaging of brain tumors, and also reduces both false-negative and false-positive diagnoses. The framework protects patients' data without decreasing the accuracy of brain tumor detection.
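The tamper-evidence step described above hashes each model update before it is recorded on the private blockchain: any later change to the update produces a different digest. A minimal sketch of that idea, assuming updates are serialized as plain layer-name-to-values mappings (the paper's exact serialization and ledger interface are not specified):

```python
import hashlib
import json

def hash_model_update(weights: dict) -> str:
    """Serialize a model update deterministically and return its SHA-256 digest.

    `weights` maps layer names to lists of floats (a stand-in for real tensors).
    """
    payload = json.dumps(weights, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Hypothetical update from one participating institution.
update = {"conv1": [0.12, -0.34], "fc": [1.5]}
digest = hash_model_update(update)

# A single perturbed weight yields a different digest, making tampering evident.
tampered = {"conv1": [0.12, -0.35], "fc": [1.5]}
assert digest != hash_model_update(tampered)
```

Storing only the digest on-chain keeps patient data and model weights off the ledger while still letting any party verify that a received update matches the recorded one.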

Performance of two different artificial intelligence models in dental implant planning among four different implant planning software: a comparative study.

Roongruangsilp P, Narkbuakaew W, Khongkhunthian P

PubMed · Jul 2, 2025
The integration of artificial intelligence (AI) into dental implant planning has emerged as a transformative approach to enhancing diagnostic accuracy and efficiency. This study evaluated the performance of two object detection models, Faster R-CNN and YOLOv7, in analyzing cross-sectional and panoramic images derived from DICOM files processed by four distinct dental imaging software platforms. The dataset consisted of 332 implant position images derived from DICOM files of 184 CBCT scans. Three hundred images were processed using DentiPlan Pro 3.7 software (NECTEC, NSTDA, Thailand) to develop the Faster R-CNN and YOLOv7 models for dental implant planning. For model testing, 32 additional implant position images, not included in the training set, were processed using four different software programs: DentiPlan Pro 3.7, DentiPlan Pro Plus 5.0 (DTP; NECTEC, NSTDA, Thailand), Implastation (ProDigiDent USA, USA), and Romexis 6.0 (Planmeca, Finland). Model performance was evaluated using detection rate, accuracy, precision, recall, F1 score, and the Jaccard Index (JI). Faster R-CNN achieved superior accuracy across imaging modalities, while YOLOv7 demonstrated higher detection rates, albeit with lower precision. The impact of image rendering algorithms on model performance underscores the need for standardized preprocessing pipelines. Although Faster R-CNN showed relatively higher performance metrics, statistical analysis revealed no significant differences between the models (p > 0.05). This study highlights the potential of AI-driven solutions in dental implant planning and advocates further research in this area. The absence of statistically significant differences between Faster R-CNN and YOLOv7 suggests that either model can be used effectively, depending on whether accuracy or detection rate is the priority. Furthermore, variations in image rendering algorithms across software platforms significantly influenced model outcomes; AI models for DICOM analysis should therefore rely on standardized image rendering to ensure consistent performance.
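The Jaccard Index (JI) reported above is the intersection-over-union of a predicted and a ground-truth bounding box. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form (the box convention is an assumption; the paper does not state one):

```python
def jaccard_index(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Partially overlapping boxes: intersection 1, union 7, so JI = 1/7.
print(jaccard_index((0, 0, 2, 2), (1, 1, 3, 3)))
```

Identical boxes give 1.0 and disjoint boxes give 0.0, so the metric penalizes both loose and misplaced predictions on the implant-position images.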

Diagnostic performance of artificial intelligence based on contrast-enhanced computed tomography in pancreatic ductal adenocarcinoma: a systematic review and meta-analysis.

Yan G, Chen X, Wang Y

PubMed · Jul 2, 2025
This meta-analysis systematically evaluated the diagnostic performance of artificial intelligence (AI) based on contrast-enhanced computed tomography (CECT) in detecting pancreatic ductal adenocarcinoma (PDAC). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy (PRISMA-DTA) guidelines, a comprehensive literature search was conducted across PubMed, Embase, and Web of Science from inception to March 2025. Bivariate random-effects models pooled sensitivity, specificity, and area under the curve (AUC). Heterogeneity was quantified via I² statistics, with subgroup analyses examining sources of variability, including AI methodologies, model architectures, sample sizes, geographic distributions, control groups, and tumor stages. Nineteen studies involving 5,986 patients in internal validation cohorts and 2,069 patients in external validation cohorts were included. AI models demonstrated robust diagnostic accuracy in internal validation, with a pooled sensitivity of 0.94 (95% CI 0.89-0.96), specificity of 0.93 (95% CI 0.90-0.96), and AUC of 0.98 (95% CI 0.96-0.99). External validation revealed moderately reduced sensitivity (0.84; 95% CI 0.78-0.89) and AUC (0.94; 95% CI 0.92-0.96), while specificity remained comparable (0.93; 95% CI 0.87-0.96). Substantial heterogeneity (I² > 85%) was observed, attributed predominantly to methodological variations in AI architectures and disparities in cohort sizes. AI demonstrates excellent diagnostic performance for PDAC on CECT, achieving high sensitivity and specificity across validation scenarios. However, its efficacy varies significantly with clinical context and tumor stage. Prospective multicenter trials using standardized protocols and diverse cohorts, including early-stage tumors and complex benign conditions, are therefore essential to validate the clinical utility of AI.
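Pooling proportions such as sensitivity on the logit scale underlies the bivariate model used above. A deliberately simplified fixed-effect sketch (the study's full bivariate random-effects model additionally estimates between-study variance and the sensitivity-specificity correlation, which this sketch omits; the per-study numbers are invented):

```python
import math

def pool_logit(proportions, weights):
    """Weighted pooling of proportions on the logit scale, then back-transform."""
    logits = [math.log(p / (1 - p)) for p in proportions]
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))  # inverse logit back to a proportion

# Hypothetical per-study sensitivities, weighted here simply by sample size.
sens = [0.92, 0.95, 0.89]
n = [120, 300, 80]
print(round(pool_logit(sens, n), 3))
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and makes near-boundary proportions (like sensitivities close to 1) behave more symmetrically than pooling the raw values.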

Deep learning-based image domain reconstruction enhances image quality and pulmonary nodule detection in ultralow-dose CT with adaptive statistical iterative reconstruction-V.

Ye K, Xu L, Pan B, Li J, Li M, Yuan H, Gong NJ

PubMed · Jul 1, 2025
To evaluate the image quality and lung nodule detectability of ultralow-dose CT (ULDCT) with adaptive statistical iterative reconstruction-V (ASiR-V) post-processed using deep learning image reconstruction (DLIR) in the image domain, compared with low-dose CT (LDCT) and ULDCT without DLIR. A total of 210 patients undergoing lung cancer screening underwent LDCT (mean ± SD, 0.81 ± 0.28 mSv) and ULDCT (0.17 ± 0.03 mSv) scans. ULDCT images were reconstructed with ASiR-V (ULDCT-ASiR-V) and post-processed using DLIR (ULDCT-DLIR). The quality of the three CT image sets was analyzed. Three radiologists detected and measured pulmonary nodules on all CT images, with the LDCT results serving as the reference. Nodule conspicuity was assessed using a five-point Likert scale, followed by further statistical analyses. A total of 463 nodules were detected on LDCT. The image noise of ULDCT-DLIR was 60% lower than that of ULDCT-ASiR-V and lower than that of LDCT (p < 0.001). The subjective image quality scores for ULDCT-DLIR (4.4 [4.1, 4.6]) were also higher than those for ULDCT-ASiR-V (3.6 [3.1, 3.9]) (p < 0.001). The overall nodule detection rates for ULDCT-ASiR-V and ULDCT-DLIR were 82.1% (380/463) and 87.0% (403/463), respectively (p < 0.001). The percentage of nodule diameter differences greater than 1 mm was 2.9% (ULDCT-ASiR-V vs. LDCT) and 0.5% (ULDCT-DLIR vs. LDCT) (p = 0.009). Nodule imaging sharpness scores on ULDCT-DLIR (4.0 ± 0.68) were significantly higher than those on ULDCT-ASiR-V (3.2 ± 0.50) (p < 0.001). DLIR-based image-domain post-processing improves the image quality, nodule detection rate, nodule imaging sharpness, and nodule measurement accuracy of ASiR-V on ULDCT. Question Deep learning post-processing is simple and inexpensive compared with raw-data processing, but its performance on ultralow-dose CT is unclear. Findings Deep learning post-processing enhanced image quality and improved the nodule detection rate and the accuracy of nodule measurement on ultralow-dose CT. Clinical relevance Deep learning post-processing improves the practicability of ultralow-dose CT and enables lung cancer screening with less radiation exposure.
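Image noise in CT is conventionally measured as the standard deviation of attenuation values (HU) in a homogeneous region of interest; a relative figure like the 60% reduction above then follows from comparing two reconstructions of the same ROI. A minimal sketch (the HU samples are invented for illustration, not taken from the study):

```python
import statistics

def noise(roi_hu):
    """Image noise as the standard deviation of HU values in a uniform ROI."""
    return statistics.pstdev(roi_hu)

def percent_noise_reduction(reference_roi, processed_roi):
    """Relative noise reduction of a processed image versus a reference image."""
    ref, proc = noise(reference_roi), noise(processed_roi)
    return 100 * (ref - proc) / ref

# Hypothetical HU samples from the same lung ROI before and after DLIR post-processing.
asir_roi = [-840, -810, -860, -795, -855, -805]
dlir_roi = [-832, -828, -838, -824, -836, -826]
print(round(percent_noise_reduction(asir_roi, dlir_roi), 1))
```

In practice the ROI is placed in a uniform region (e.g. tracheal air or paraspinal muscle) and averaged over several slices to stabilize the estimate.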

MedGround-R1: Advancing Medical Image Grounding via Spatial-Semantic Rewarded Group Relative Policy Optimization

Huihui Xu, Yuanpeng Nie, Hualiang Wang, Ying Chen, Wei Li, Junzhi Ning, Lihao Liu, Hongqiu Wang, Lei Zhu, Jiyao Liu, Xiaomeng Li, Junjun He

arXiv preprint · Jul 1, 2025
Medical Image Grounding (MIG), which involves localizing specific regions in medical images based on textual descriptions, requires models not only to perceive regions but also to deduce the spatial relationships among them. Existing Vision-Language Models (VLMs) for MIG often rely on Supervised Fine-Tuning (SFT) with large amounts of Chain-of-Thought (CoT) reasoning annotations, which are expensive and time-consuming to acquire. Recently, DeepSeek-R1 demonstrated that Large Language Models (LLMs) can acquire reasoning abilities through Group Relative Policy Optimization (GRPO) without requiring CoT annotations. In this paper, we adapt the GRPO reinforcement learning framework to VLMs for Medical Image Grounding. We propose Spatial-Semantic Rewarded Group Relative Policy Optimization to train the model without CoT reasoning annotations. Specifically, we introduce Spatial-Semantic Rewards, which combine a spatial accuracy reward and a semantic consistency reward to provide nuanced feedback for both spatially positive and negative completions. Additionally, we propose the Chain-of-Box template, which integrates the visual information of referring bounding boxes into the <think> reasoning process, enabling the model to explicitly reason about spatial regions during intermediate steps. Experiments on three datasets (MS-CXR, ChestX-ray8, and M3D-RefSeg) demonstrate that our method achieves state-of-the-art performance in Medical Image Grounding. Ablation studies further validate the effectiveness of each component of our approach. Code, checkpoints, and datasets are available at https://github.com/bio-mlhui/MedGround-R1
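The core GRPO step that this work adapts replaces a learned value function with a group baseline: rewards are standardized within a group of completions sampled for the same prompt. A minimal sketch of that group-relative advantage (the spatial-semantic reward itself, combining IoU and semantic consistency, is outside this sketch):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: standardize rewards within one sampled group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Hypothetical spatial-semantic rewards for 4 completions of one grounding prompt.
adv = group_relative_advantages([0.9, 0.4, 0.1, 0.6])

# Above-average completions receive positive advantage, below-average negative,
# so the policy is pushed toward the better-grounded boxes without a critic.
assert adv[0] > 0 > adv[2]
```

Because the baseline is the group's own mean, the advantages always sum to zero within a group, which is what makes the per-prompt comparison "relative."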

Improving YOLO-based breast mass detection with transfer learning pretraining on the OPTIMAM Mammography Image Database.

Ho PS, Tsai HY, Liu I, Lee YY, Chan SW

PubMed · Jul 1, 2025
Early detection of breast cancer through mammography significantly improves survival rates. However, high false-positive and false-negative rates remain a challenge. Deep learning-based computer-aided diagnosis systems can assist in lesion detection, but their performance is often limited by the availability of labeled clinical data. This study systematically evaluated the effectiveness of transfer learning, image preprocessing techniques, and the latest You Only Look Once (YOLO) model (v9) for optimizing breast mass detection models on small proprietary datasets. We examined 133 mammography images containing masses and assessed various preprocessing strategies, including cropping and contrast enhancement. We further investigated the impact of transfer learning using the OPTIMAM Mammography Image Database (OMI-DB) compared with training on proprietary data alone. The performance of YOLOv9 was evaluated against YOLOv7 to determine improvements in detection accuracy. Pretraining on the OMI-DB dataset with cropped images significantly improved model performance: YOLOv7 achieved a 13.9% higher mean average precision (mAP) and a 13.2% higher F1-score compared with training only on proprietary data. Among the tested models and configurations, the best results were obtained with YOLOv9 pretrained on OMI-DB and fine-tuned with cropped proprietary images, yielding an mAP of 73.3% ± 16.7% and an F1-score of 76.0% ± 13.4%; under this condition, YOLOv9 outperformed YOLOv7 by 8.1% in mAP and 9.2% in F1-score. This study provides a systematic evaluation of transfer learning and preprocessing techniques for breast mass detection in small datasets. Our results demonstrate that YOLOv9 with OMI-DB pretraining significantly enhances breast mass detection performance while reducing training time, providing a valuable guideline for optimizing deep learning models in data-limited clinical applications.
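The F1-scores compared above are the harmonic mean of a detector's precision and recall at its chosen operating point. A minimal sketch (the operating-point numbers are invented for illustration):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical breast-mass detector operating point.
print(round(f1_score(0.80, 0.72), 3))
```

Because the harmonic mean is dominated by the smaller of the two inputs, a detector cannot buy a high F1 by trading all of its precision for recall or vice versa, which is why it complements mAP in small-dataset comparisons like this one.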

Convolutional neural network-based measurement of crown-implant ratio for implant-supported prostheses.

Zhang JP, Wang ZH, Zhang J, Qiu J

PubMed · Jul 1, 2025
Research has revealed that the crown-implant ratio (CIR) is a critical variable influencing the long-term stability of implant-supported prostheses in the oral cavity. Nevertheless, inefficient manual measurement and varied measurement methods have caused significant inconvenience in both clinical and scientific work. This study aimed to develop an automated system for measuring the CIR of implant-supported prostheses from radiographs, with the objective of enhancing the efficiency of radiograph interpretation for dentists. The method was based on convolutional neural networks (CNNs) and was designed to recognize implant-supported prostheses and identify key points around them. The experiment used You Only Look Once version 4 (YOLOv4) to locate each implant-supported prosthesis with a rectangular frame. Subsequently, two CNNs were used to identify key points: the first determined their approximate positions, and the second fine-tuned the output of the first to locate them precisely. The network was tested on a self-built dataset, and the anatomic CIR and clinical CIR were obtained simultaneously through the vertical distance method. Key-point accuracy was validated through normalized error (NE) values, and a set of data was selected to compare machine and manual measurement results. For statistical analysis, the paired t test was applied (α=.05). A dataset comprising 1106 images was constructed. The integration of multiple networks demonstrated satisfactory recognition of implant-supported prostheses and their surrounding key points. The average NE value for key points indicated a high level of accuracy. Statistical analysis confirmed no significant difference between machine and manual measurements of the crown-implant ratio (P>.05). Machine learning proved effective in identifying implant-supported prostheses and measuring their crown-implant ratios. Applied as a clinical tool for analyzing radiographs, this approach can assist dentists in obtaining crown-implant ratio results efficiently and accurately.
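The vertical distance method above reduces the CIR to ratios of y-coordinate differences between detected key points. A minimal sketch, assuming the network outputs pixel y-coordinates for the crown apex, the implant platform, and the implant apex (landmark names are illustrative; the paper's exact key-point set is not given):

```python
def crown_implant_ratio(crown_top_y, platform_y, implant_apex_y):
    """CIR via the vertical distance method: crown height over implant length.

    In image coordinates y grows downward, so deeper structures have larger y.
    """
    crown_height = platform_y - crown_top_y
    implant_length = implant_apex_y - platform_y
    return crown_height / implant_length

# Hypothetical key-point y-coordinates (pixels) from the two-stage CNN.
print(round(crown_implant_ratio(120, 260, 440), 2))
```

The anatomic versus clinical CIR distinction then comes down to which landmark defines the boundary between "crown" and "implant" (bone level versus implant platform), with the arithmetic unchanged.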

Enhancing ultrasonographic detection of hepatocellular carcinoma with artificial intelligence: current applications, challenges and future directions.

Wongsuwan J, Tubtawee T, Nirattisaikul S, Danpanichkul P, Cheungpasitporn W, Chaichulee S, Kaewdech A

PubMed · Jul 1, 2025
Hepatocellular carcinoma (HCC) remains a leading cause of cancer-related mortality worldwide, with early detection playing a crucial role in improving survival rates. Artificial intelligence (AI), particularly in medical image analysis, has emerged as a potential tool for HCC diagnosis and surveillance. Recent advancements in deep learning-driven medical imaging have demonstrated significant potential for enhancing early HCC detection, particularly in ultrasound (US)-based surveillance. This review provides a comprehensive analysis of the current landscape, challenges, and future directions of AI in HCC surveillance, with a specific focus on its application in US imaging, and explores AI's transformative potential in clinical practice and its implications for improving patient outcomes. We examine various AI models developed for HCC diagnosis, highlighting their strengths and limitations, with a particular emphasis on deep learning approaches. Among these, convolutional neural networks have shown notable success in detecting and characterising different focal liver lesions on B-mode US, often outperforming conventional radiological assessments. Despite these advancements, several challenges hinder AI integration into clinical practice, including data heterogeneity, a lack of standardisation, concerns regarding model interpretability, regulatory constraints, and barriers to real-world clinical adoption. Addressing these issues necessitates the development of large, diverse, and high-quality data sets to enhance the robustness and generalisability of AI models. Emerging trends in AI for HCC surveillance, such as multimodal integration, explainable AI, and real-time diagnostics, offer promising advancements. These innovations have the potential to significantly improve the accuracy, efficiency, and clinical applicability of AI-driven HCC surveillance, ultimately contributing to enhanced patient outcomes.

Lung cancer screening with low-dose CT: definition of positive, indeterminate, and negative screen results. A nodule management recommendation from the European Society of Thoracic Imaging.

Snoeckx A, Silva M, Prosch H, Biederer J, Frauenfelder T, Gleeson F, Jacobs C, Kauczor HU, Parkar AP, Schaefer-Prokop C, Prokop M, Revel MP

PubMed · Jul 1, 2025
Early detection of lung cancer through low-dose CT lung cancer screening in a high-risk population has been proven to reduce lung cancer-specific mortality. Nodule management plays a pivotal role in early detection and further diagnostic approaches. The European Society of Thoracic Imaging (ESTI) has established a nodule management recommendation to improve the handling of pulmonary nodules detected during screening. For solid nodules, the primary method for assessing the likelihood of malignancy is monitoring nodule growth with volumetry software. For subsolid nodules, aggressiveness is determined by measuring the solid component. The ESTI recommendation enhances existing protocols but places a stronger focus on lesion aggressiveness. The main goals are to minimise the overall number of follow-up examinations while preventing the risk of a major stage shift and reducing the risk of overtreatment. KEY POINTS: Question Assessment of nodule growth and management according to guidelines is essential in lung cancer screening. Findings Assessment of nodule aggressiveness defines follow-up in lung cancer screening. Clinical relevance The ESTI nodule management recommendation aims to reduce follow-up examinations while preventing major stage shift and overtreatment.
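Monitoring solid-nodule growth with volumetry software, as recommended above, commonly summarizes growth as the volume doubling time (VDT) under an exponential-growth assumption. A minimal sketch of the standard formula (the thresholds that classify a VDT as aggressive are protocol-specific and not shown here):

```python
import math

def volume_doubling_time(v1_mm3, v2_mm3, interval_days):
    """VDT in days assuming exponential growth: t * ln(2) / ln(V2 / V1)."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Hypothetical solid nodule growing from 100 to 150 mm^3 over a 90-day interval.
print(round(volume_doubling_time(100, 150, 90)))
```

Shorter VDTs indicate faster, more aggressive growth; a nodule that exactly doubles over the interval has a VDT equal to the interval itself.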

Determination of the oral carcinoma and sarcoma in contrast enhanced CT images using deep convolutional neural networks.

Warin K, Limprasert W, Paipongna T, Chaowchuen S, Vicharueang S

PubMed · Jul 1, 2025
Oral cancer is a hazardous disease and a major cause of morbidity and mortality worldwide. The purpose of this study was to develop deep convolutional neural network (CNN)-based multiclass classification and object detection models for distinguishing and detecting oral carcinoma and sarcoma in contrast-enhanced CT images. This study included 3,259 slices of CT images of oral cancer cases from a cancer hospital and two regional hospitals from 2016 to 2020. Multiclass classification models were constructed using DenseNet-169, ResNet-50, EfficientNet-B0, ConvNeXt-Base, and ViT-Base-Patch16-224 to accurately differentiate between oral carcinoma and sarcoma. Additionally, multiclass object detection models, including Faster R-CNN, YOLOv8, and YOLOv11, were designed to autonomously identify and localize lesions by placing bounding boxes on CT images. Performance evaluation on a test dataset showed that the best classification model achieved an accuracy of 0.97, while the best detection model yielded a mean average precision (mAP) of 0.87. In conclusion, CNN-based multiclass models hold great promise for accurately identifying and distinguishing oral carcinoma and sarcoma in CT imaging, potentially enhancing early detection and informing treatment strategies.
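The mean average precision (mAP) reported above averages, over classes, the area under each class's precision-recall curve built from score-ranked detections. A minimal single-class sketch using the exact step-wise area without precision interpolation (COCO-style evaluation additionally applies a precision envelope and IoU-based matching, which are assumed to have already happened here):

```python
def average_precision(scores, is_tp, n_gt):
    """Step-wise AP for one class from scored detections already matched to GT.

    scores: confidence per detection; is_tp: 1 if the detection matched a
    ground-truth lesion, else 0; n_gt: number of ground-truth lesions.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = []
    for i in order:
        tp += is_tp[i]
        fp += 1 - is_tp[i]
        points.append((tp / n_gt, tp / (tp + fp)))  # (recall, precision)
    ap, prev_r = 0.0, 0.0
    for r, p in points:
        ap += (r - prev_r) * p  # accumulate area as recall steps forward
        prev_r = r
    return ap

# 3 detections against 2 ground-truth lesions; the top-scored one is correct.
print(average_precision([0.9, 0.8, 0.7], [1, 0, 1], n_gt=2))
```

Averaging this per-class AP over carcinoma and sarcoma (and, in stricter protocols, over IoU thresholds) yields the mAP figure quoted for the detection models.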
