
Development and validation of a deep learning ultrasound radiomics model for predicting drug resistance in lymph node tuberculosis: a multicenter study.

Zhang X, Dong Z, Li H, Cheng Y, Tang W, Ni T, Zhang Y, Ai Q, Yang G

PubMed · Jul 2, 2025
To develop and validate an ensemble machine learning ultrasound radiomics model for predicting drug resistance in lymph node tuberculosis (LNTB). This multicenter study retrospectively included 234 cervical LNTB patients from one center, randomly divided into training (70%) and internal validation (30%) cohorts. Radiomic features were extracted from ultrasound images, and an L1-based method was used for feature selection. A predictive model combining ensemble machine learning and AdaBoost algorithms was developed to predict drug resistance. Model performance was assessed on independent external test sets (Test A and Test B) from two other centers, with metrics including AUC, accuracy, precision, recall, F1 score, and decision curve analysis. Of the 851 radiomic features extracted, 161 were selected for the model. The model achieved AUCs of 0.998 (95% CI: 0.996-0.999), 0.798 (95% CI: 0.692-0.904), 0.846 (95% CI: 0.700-0.992), and 0.831 (95% CI: 0.688-0.974) in the training, internal validation, and external test sets A and B, respectively. The decision curve analysis showed a substantial net benefit across a threshold probability range of 0.38 to 0.57. The LNTB drug resistance prediction model demonstrated high diagnostic efficacy in both internal and external validation. Radiomics, through the application of ensemble machine learning algorithms, provides new insights into drug resistance mechanisms and offers potential strategies for more effective patient treatment. Keywords: lymph node tuberculosis; drug resistance; ultrasound; radiomics; machine learning.
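As an illustration of the pipeline this abstract describes, the sketch below pairs L1-penalized feature selection with an AdaBoost classifier using scikit-learn; the feature matrix and labels are synthetic placeholders, and the hyperparameters are assumptions rather than the study's settings.

import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(234, 851))   # 234 patients x 851 radiomic features (synthetic)
y = rng.integers(0, 2, size=234)  # drug-resistant (1) vs susceptible (0)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# L1-penalized logistic regression keeps only features with nonzero weights
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
).fit(X_tr, y_tr)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_va, clf.predict_proba(selector.transform(X_va))[:, 1])
print(f"validation AUC on synthetic data: {auc:.3f}")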

Hybrid deep learning architecture for scalable and high-quality image compression.

Al-Khafaji M, Ramaha NTA

PubMed · Jul 2, 2025
The rapid growth of medical imaging data presents challenges for efficient storage and transmission, particularly in clinical and telemedicine applications where image fidelity is crucial. This study proposes a hybrid deep learning-based image compression framework that integrates Stationary Wavelet Transform (SWT), Stacked Denoising Autoencoder (SDAE), Gray-Level Co-occurrence Matrix (GLCM), and K-means clustering. The framework enables multiresolution decomposition, texture-aware feature extraction, and adaptive region-based compression. A custom loss function that combines Mean Squared Error (MSE) and Structural Similarity Index (SSIM) ensures high perceptual quality and compression efficiency. The proposed model was evaluated across multiple benchmark medical imaging datasets and achieved a Peak Signal-to-Noise Ratio (PSNR) of up to 50.36 dB, MS-SSIM of 0.9999, and an encoding-decoding time of 0.065 s. These results demonstrate the model's capability to outperform existing approaches while maintaining diagnostic integrity, scalability, and speed, making it suitable for real-time and resource-constrained clinical environments.
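The custom MSE + SSIM objective could plausibly be sketched as follows in PyTorch; the uniform averaging window and the alpha weighting below are assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM for images scaled to [0, 1], shape (N, C, H, W)."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def hybrid_loss(pred, target, alpha=0.85):
    # alpha balances structural fidelity (SSIM) against pixel error (MSE)
    return alpha * (1 - ssim(pred, target)) + (1 - alpha) * F.mse_loss(pred, target)

x = torch.rand(2, 1, 64, 64, requires_grad=True)
y = torch.rand(2, 1, 64, 64)
hybrid_loss(x, y).backward()  # differentiable, usable as a training objective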

Artificial Intelligence-Driven Cancer Diagnostics: Enhancing Radiology and Pathology through Reproducibility, Explainability, and Multimodality.

Khosravi P, Fuchs TJ, Ho DJ

PubMed · Jul 2, 2025
The integration of artificial intelligence (AI) in cancer research has significantly advanced radiology, pathology, and multimodal approaches, offering unprecedented capabilities in image analysis, diagnosis, and treatment planning. AI techniques provide standardized assistance to clinicians for diagnostic and predictive tasks that are otherwise conducted manually and therefore suffer from low reproducibility. These AI methods can additionally provide explainability to help clinicians make the best decisions for patient care. This review explores state-of-the-art AI methods, focusing on their application in image classification, image segmentation, multiple instance learning, generative models, and self-supervised learning. In radiology, AI enhances tumor detection, diagnosis, and treatment planning through advanced imaging modalities and real-time applications. In pathology, AI-driven image analysis improves cancer detection, biomarker discovery, and diagnostic consistency. Multimodal AI approaches can integrate data from radiology, pathology, and genomics to provide comprehensive diagnostic insights. Emerging trends, challenges, and future directions in AI-driven cancer research are discussed, emphasizing the transformative potential of these technologies in improving patient outcomes and advancing cancer care. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.

Multi-scheme cross-level attention embedded U-shape transformer for MRI semantic segmentation.

Wang Q, Xue Y

PubMed · Jul 2, 2025
Accurate MRI image segmentation is crucial for disease diagnosis, but current Transformer-based methods face two key challenges: limited capability to capture detailed information, leading to blurred boundaries and false localization, and the lack of MRI-specific embedding paradigms for attention modules, which limits their potential and representation capability. To address these challenges, this paper proposes a multi-scheme cross-level attention embedded U-shape Transformer (MSCL-SwinUNet). This model integrates cross-level spatial-wise attention (SW-Attention) to transfer detailed information from encoder to decoder, cross-stage channel-wise attention (CW-Attention) to filter out redundant features and enhance task-related channels, and multi-stage scale-wise attention (ScaleW-Attention) to adaptively process multi-scale features. Extensive experiments on the ACDC, MM-WHS, and Synapse datasets demonstrate that the proposed MSCL-SwinUNet surpasses state-of-the-art methods in accuracy and generalizability. Visualization further confirms the superiority of our model in preserving detailed boundaries. This work not only advances Transformer-based segmentation in medical imaging but also provides new insights into designing MRI-specific attention embedding paradigms. Our code is available at https://github.com/waylans/MSCL-SwinUNet.
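For intuition, a generic squeeze-and-excitation style channel-attention block, which reweights channels in the way the CW-Attention description suggests, might look like the sketch below; the actual MSCL-SwinUNet modules are cross-stage and differ in detail (see the linked repository).

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global context per channel
        self.fc = nn.Sequential(              # excitation: channel reweighting
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                          # suppress redundant channels

feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])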

[AI-based applications in medical image computing].

Kepp T, Uzunova H, Ehrhardt J, Handels H

PubMed · Jul 2, 2025
The processing of medical images plays a central role in modern diagnostics and therapy. Automated processing and analysis of medical images can efficiently accelerate clinical workflows and open new opportunities for improved patient care. However, the high variability, complexity, and varying quality of medical image data pose significant challenges. In recent years, the greatest progress in medical image analysis has been achieved through artificial intelligence (AI), particularly by using deep neural networks in the context of deep learning. These methods are successfully applied in medical image analysis, including segmentation, registration, and image synthesis. AI-based segmentation allows for the precise delineation of organs, tissues, or pathological changes. The application of AI-based image registration supports the accelerated creation of 3D planning models for complex surgeries by aligning relevant anatomical structures from different imaging modalities (e.g., CT, MRI, and PET) or time points. Generative AI methods can be used to generate additional image data for the improved training of AI models, thereby expanding the potential applications of deep learning methods in medicine. Examples from radiology, ophthalmology, dermatology, and surgery are described to illustrate their practical relevance and the potential of AI in image-based diagnostics and therapy.

Automatic detection of orthodontically induced external root resorption based on deep convolutional neural networks using CBCT images.

Xu S, Peng H, Yang L, Zhong W, Gao X

PubMed · Jul 2, 2025
Orthodontically induced external root resorption (OIERR) is among the most common risks in orthodontic treatment. Traditional OIERR diagnosis is limited by subjective judgment and cumbersome manual measurement. This research aims to develop an intelligent detection model for OIERR based on deep convolutional neural networks (CNNs) and cone-beam computed tomography (CBCT) images, thus providing auxiliary diagnostic support for orthodontists. Six pretrained CNN architectures were adopted, and 1717 CBCT slices were used for training to construct OIERR detection models. The performance of the models was tested on 429 CBCT slices, and the regions activated during decision-making were visualized through heatmaps. Model performance was then compared with that of two orthodontists. The EfficientNet-B1 model, trained through hold-out cross-validation, proved the most effective for detecting OIERR. Its accuracy, precision, sensitivity, specificity, and F1-score were 0.97, 0.98, 0.97, 0.98, and 0.98, respectively. These metrics markedly outperformed those of the orthodontists, whose accuracy, recall, and F1-score were 0.86, 0.78, and 0.87, respectively (P < 0.01). The heatmaps suggested that the OIERR detection model relied primarily on root features for decision-making. Automatic detection of OIERR from CBCT images with CNNs is both accurate and efficient. The method outperforms orthodontists and is anticipated to serve as a clinical tool for the rapid screening and diagnosis of OIERR.
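A hedged sketch of the transfer-learning setup described here, using torchvision's pretrained EfficientNet-B1 with the classifier head replaced for the binary OIERR decision; the input size, dummy batch, and single training step are illustrative, and the CBCT data loading is omitted.

import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b1(weights=models.EfficientNet_B1_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # OIERR yes/no

# one illustrative training step on a dummy batch of grayscale slices,
# replicated to 3 channels to match the pretrained stem
x = torch.rand(4, 1, 240, 240).repeat(1, 3, 1, 1)
y = torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print(f"dummy loss: {loss.item():.3f}")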

Retrieval-augmented generation elevates local LLM quality in radiology contrast media consultation.

Wada A, Tanaka Y, Nishizawa M, Yamamoto A, Akashi T, Hagiwara A, Hayakawa Y, Kikuta J, Shimoji K, Sano K, Kamagata K, Nakanishi A, Aoki S

PubMed · Jul 2, 2025
Large language models (LLMs) demonstrate significant potential in healthcare applications, but clinical deployment is limited by privacy concerns and insufficient medical domain training. This study investigated whether retrieval-augmented generation (RAG) can improve a locally deployable LLM for radiology contrast media consultation. In 100 synthetic iodinated contrast media consultations, we compared Llama 3.2-11B (baseline and RAG) with three cloud-based models: GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. A blinded radiologist ranked the five replies per case, and three LLM-based judges scored accuracy, safety, structure, tone, applicability, and latency. Under controlled conditions, RAG eliminated hallucinations (0% vs 8%; Yates-corrected χ² = 6.38, p = 0.012) and improved mean rank by 1.3 (Z = -4.82, p < 0.001), though performance gaps with cloud models persist. The RAG-enhanced model remained faster (2.6 s vs 4.9-7.3 s), and the LLM-based judges preferred it over GPT-4o mini, though the radiologist ranked GPT-4o mini higher. RAG thus provides meaningful improvements for local clinical LLMs while maintaining the privacy benefits of on-premise deployment.
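The RAG pattern evaluated here can be reduced to a small sketch: retrieve the guideline snippets most similar to the query, then prepend them to the local model's prompt. The guideline texts, the TF-IDF retriever, and the prompt template below are illustrative assumptions, not the study's corpus or code.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guidelines = [
    "Iodinated contrast is contraindicated in patients with prior severe reaction.",
    "Check eGFR before contrast administration in patients with renal impairment.",
    "Metformin should be withheld when eGFR is below 30 mL/min/1.73 m2.",
]

vectorizer = TfidfVectorizer().fit(guidelines)
doc_matrix = vectorizer.transform(guidelines)

def retrieve(query: str, k: int = 2) -> list[str]:
    # rank guideline snippets by cosine similarity to the query
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [guidelines[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Use only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("Can we give contrast to a patient on metformin with low eGFR?"))

The resulting prompt would then be passed to the locally deployed LLM (Llama 3.2-11B in the study) in place of the bare question.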

SealPrint: The Anatomically Replicated Seal-and-Support Socket Abutment Technique. A Proof-of-Concept with 12-Month Follow-up.

Lahoud P, Castro A, Walter E, Jacobs W, De Greef A, Jacobs R

PubMed · Jul 2, 2025
This study aimed to investigate a novel technique for designing and manufacturing a sealing socket abutment (SSA) using artificial intelligence (AI)-driven tooth segmentation and 3D printing technologies. A validated AI-powered module was used to segment the tooth to be replaced on the presurgical Cone Beam Computed Tomography (CBCT) scan. Following virtual surgical planning, the CBCT and intraoral scan (IOS) were imported into Mimics software. The AI-segmented tooth was aligned with the IOS, sliced horizontally at the temporary abutment's neck, and further trimmed 2 mm above the gingival margin to capture the emergence profile. A conical cut, 2 mm wider than the temporary abutment with a 5° taper, was applied for a passive fit. This process produced a custom sealing socket abutment, which was then 3D-printed. After atraumatic tooth extraction and immediate implant placement, the temporary abutment was positioned, followed by the SealPrint atop it. A flowable composite was used to fill the gap between the temporary abutment and the SealPrint; the whole structure seals the extraction socket, supports the interdental papilla by design, and protects the implant and the (bio)materials used. True to the planning, the SealPrint fits passively on the temporary abutment. It provides an optimal seal over the entire surface of the extraction socket, preserving the emergence profile of the extracted tooth, protecting the dental implant, and stabilizing the graft material and blood clot. The SealPrint technique provides a reliable and fast solution for protecting and preserving the soft tissues, hard tissues, and emergence profile following immediate implant placement.

A novel few-shot learning framework for supervised diffeomorphic image registration network.

Chen K, Han H, Wei J, Zhang Y

PubMed · Jul 2, 2025
Image registration is a key technique in image processing and analysis. Due to its high complexity, traditional registration frameworks often fail to meet real-time demands in practice. To address this demand, several deep learning networks for registration have been proposed, both supervised and unsupervised. Unsupervised networks rely on large amounts of training data to minimize specific loss functions, but the lack of physical information constraints results in lower accuracy than supervised networks achieve. Supervised networks for medical image registration, however, face two major challenges: physical mesh folding and the scarcity of labeled training data. To address these two challenges, we propose a novel few-shot learning framework for image registration. The framework contains two parts: a random diffeomorphism generator (RDG) and a supervised few-shot learning network for image registration. By randomly generating a complex vector field, the RDG produces a series of diffeomorphisms. With the help of the diffeomorphisms generated by the RDG, one can use only a few images (theoretically, a single image is enough) to generate a series of labels for training the supervised few-shot learning network. To eliminate physical mesh folding, the loss function of the proposed network is only required to ensure the smoothness of the deformation; no other anti-folding control is necessary. The experimental results indicate that the proposed method demonstrates superior performance in eliminating physical mesh folding compared with other existing learning-based methods. Our code is available at https://github.com/weijunping111/RDG-TMI.git.
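A minimal sketch of a random diffeomorphism generator in the spirit of the RDG, assuming the common stationary-velocity-field construction: smooth random noise defines a velocity field, which scaling-and-squaring integrates into a (numerically) folding-free deformation. Shapes and parameters are illustrative, not the paper's.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_diffeomorphism(shape=(128, 128), sigma=8.0, amp=2.0, steps=6):
    # smooth random stationary velocity field, one component per axis
    v = np.stack([gaussian_filter(np.random.randn(*shape), sigma) for _ in range(2)])
    v *= amp / (np.abs(v).max() + 1e-8)
    v /= 2 ** steps                                  # scaling step
    grid = np.stack(np.meshgrid(*map(np.arange, shape), indexing="ij"))
    disp = v
    for _ in range(steps):                           # squaring: phi <- phi o phi
        warped = np.stack([
            map_coordinates(disp[i], grid + disp, order=1, mode="nearest")
            for i in range(2)
        ])
        disp = disp + warped
    return grid + disp                               # deformation field phi

phi = random_diffeomorphism()
print(phi.shape)  # (2, 128, 128): resampling coordinates for a moving image

Warping one image with many such fields yields image/label pairs for supervised training, which is how a single image can, in principle, seed the training set.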

Robust Multi-contrast MRI Medical Image Translation via Knowledge Distillation and Adversarial Attack.

Zhao X, Liang F, Long C, Yuan Z, Zhao J

PubMed · Jul 2, 2025
Medical image translation is of great value but very difficult, since it must change the style of the noise pattern while keeping the anatomy of the image content invariant. Various deep learning methods, such as the mainstream GAN, Transformer, and Diffusion models, have been developed to learn the multi-modal mapping that yields the translated images, but the generator outputs are still far from perfect for medical images. In this paper, we propose a robust multi-contrast translation framework for MRI medical images with knowledge distillation and adversarial attack, which can be integrated with any generator. The additional refinement network consists of teacher and student modules with similar structures but different inputs. Unlike existing knowledge distillation works, our teacher module is designed as a registration network with more inputs to better learn the noise distribution and further refine the translated results during training. The knowledge is then distilled to the student module to ensure that better translation results are generated. We also introduce an adversarial attack module before the generator. Such a black-box attacker generates meaningful perturbations and adversarial examples throughout the training process. Our model has been tested on two public MRI medical image datasets considering different types and levels of perturbations, and each designed module is verified by an ablation study. Extensive experiments and comparisons with SOTA methods strongly demonstrate our model's superior refinement and robustness.
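The teacher-to-student transfer can be illustrated with a generic distillation objective: the student's output is pulled toward both the ground truth and the (frozen) teacher's refined output. The loss terms and weighting below are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, target, beta=0.5):
    # teacher is treated as fixed at this point, so detach its output
    kd = F.l1_loss(student_out, teacher_out.detach())   # match the teacher
    rec = F.l1_loss(student_out, target)                # match the ground truth
    return rec + beta * kd

s = torch.rand(1, 1, 64, 64, requires_grad=True)
t = torch.rand(1, 1, 64, 64)
y = torch.rand(1, 1, 64, 64)
print(distillation_loss(s, t, y))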