A Meta-Analysis of the Diagnosis of Condylar and Mandibular Fractures Based on 3-dimensional Imaging and Artificial Intelligence.

Wang F, Jia X, Meiling Z, Oscandar F, Ghani HA, Omar M, Li S, Sha L, Zhen J, Yuan Y, Zhao B, Abdullah JY

PubMed · Jul 8, 2025
This article reviews the literature on the use of 3-dimensional (3D) imaging and artificial intelligence (AI)-assisted methods to improve the rapid and accurate classification and diagnosis of condylar fractures, and presents a meta-analysis of mandibular fracture diagnosis. Mandibular condyle fractures are a common fracture type in maxillofacial surgery, and their accurate classification and diagnosis are critical to developing an effective treatment plan. With the rapid development of 3D imaging technology and AI, traditional x-ray diagnosis is gradually being replaced by more accurate technologies such as 3D computed tomography (CT). These emerging technologies provide more detailed anatomic information and significantly improve the accuracy and efficiency of condylar fracture diagnosis, especially in the evaluation and surgical planning of complex fractures. The application of AI in medical imaging is further analyzed, with particular attention to successful cases of fracture detection and classification using deep learning models. Although AI has demonstrated great potential in condylar fracture diagnosis, it still faces challenges such as data quality, model interpretability, and clinical validation. This article evaluates the accuracy and practicality of AI in diagnosing mandibular fractures through a systematic review and meta-analysis of the existing literature. The results show that AI-assisted diagnosis achieves high predictive accuracy in detecting condylar fractures and significantly improves diagnostic efficiency. However, more multicenter studies are needed to validate the application of AI across different clinical settings and to promote its widespread adoption in maxillofacial surgery.
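
The pooled estimates such a diagnostic-accuracy meta-analysis reports are typically obtained with a random-effects model. The sketch below, using invented study counts rather than data from the reviewed papers, shows a DerSimonian-Laird pooling of per-study sensitivities on the logit scale.

```python
# Illustrative random-effects pooling of per-study sensitivities (DerSimonian-Laird).
# The (true positive, false negative) counts below are invented placeholders,
# not data from the studies included in this meta-analysis.
import math

studies = [(45, 5), (88, 12), (30, 6), (120, 10)]  # hypothetical (TP, FN) per study

def logit(p):
    return math.log(p / (1 - p))

# Logit-transformed sensitivity and its approximate variance per study
effects, variances = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    effects.append(logit(sens))
    variances.append(1 / tp + 1 / fn)  # delta-method variance on the logit scale

# Fixed-effect weights, Cochran's Q, and the DerSimonian-Laird tau^2
w = [1 / v for v in variances]
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled estimate, back-transformed to a sensitivity
w_re = [1 / (v + tau2) for v in variances]
pooled_logit = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
pooled_sens = 1 / (1 + math.exp(-pooled_logit))
print(f"Pooled sensitivity: {pooled_sens:.3f} (tau^2 = {tau2:.3f})")
```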

AI lesion tracking in PET/CT imaging: a proposal for a Siamese-based CNN pipeline applied to PSMA PET/CT scans.

Hein SP, Schultheiss M, Gafita A, Zaum R, Yagubbayli F, Tauber R, Rauscher I, Eiber M, Pfeiffer F, Weber WA

PubMed · Jul 8, 2025
Assessing tumor response to systemic therapies is one of the main applications of PET/CT. Routinely, only a small subset of index lesions out of multiple lesions is analyzed. However, this operator-dependent selection may bias the results because of possibly significant inter-metastatic heterogeneity in response to therapy. Automated, AI-based approaches for lesion tracking hold promise in enabling the analysis of many more lesions and thus providing a better assessment of tumor response. This work introduces a Siamese CNN approach for lesion tracking between PET/CT scans. Our approach is applied to the laborious task of tracking a high number of bone lesions in full-body baseline and follow-up [68Ga]Ga- or [18F]F-PSMA PET/CT scans acquired after two cycles of [177Lu]Lu-PSMA therapy in patients with metastatic castration-resistant prostate cancer. Data preparation includes lesion segmentation and affine registration. Our algorithm extracts suitable lesion patches and forwards them into a Siamese CNN trained to classify the lesion patch pairs as corresponding or non-corresponding lesions. Experiments were performed with different input patch types and with Siamese networks in 2D and 3D. The CNN model successfully learned to classify lesion assignments, reaching an accuracy of 83% (AUC = 0.91) in its best configuration. For corresponding lesions, the pipeline achieved a lesion-tracking accuracy of 89%. We showed that a CNN can facilitate the tracking of multiple lesions in PSMA PET/CT scans. Future clinical studies are needed to determine whether this improves the prediction of therapy outcomes.
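
A minimal sketch of the core idea, a weight-shared encoder applied to both lesion patches with a small head classifying the pair as corresponding or not, is given below. The 3D backbone, patch size, and head design are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a Siamese CNN that classifies baseline/follow-up lesion patch
# pairs as corresponding or non-corresponding, assuming 32^3 voxel patches.
import torch
import torch.nn as nn

class SiameseLesionMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared 3D encoder applied identically to both patches; this weight
        # sharing is what makes the network "Siamese"
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Classification head on the concatenated patch embeddings
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, patch_a, patch_b):
        za, zb = self.encoder(patch_a), self.encoder(patch_b)
        return self.head(torch.cat([za, zb], dim=1))  # logit: corresponding or not

model = SiameseLesionMatcher()
baseline = torch.randn(4, 1, 32, 32, 32)   # 4 hypothetical baseline lesion patches
followup = torch.randn(4, 1, 32, 32, 32)   # 4 candidate follow-up patches
logits = model(baseline, followup)
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.tensor([1., 0., 1., 0.]))
```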

Novel UNet-SegNet and vision transformer architectures for efficient segmentation and classification in medical imaging.

Tongbram S, Shimray BA, Singh LS

PubMed · Jul 8, 2025
Medical imaging has become an essential tool in the diagnosis and treatment of various diseases, and provides critical insights through ultrasound, MRI, and X-ray modalities. Despite its importance, challenges remain in the accurate segmentation and classification of complex structures owing to factors such as low contrast, noise, and irregular anatomical shapes. This study addresses these challenges by proposing a novel hybrid deep learning model that integrates the strengths of Convolutional Autoencoders (CAE), UNet, and SegNet architectures. In the preprocessing phase, a Convolutional Autoencoder is used to effectively reduce noise while preserving essential image details, ensuring that the images used for segmentation and classification are of high quality. The ability of CAE to denoise images while retaining critical features enhances the accuracy of the subsequent analysis. The developed model employs UNet for multiscale feature extraction and SegNet for precise boundary reconstruction, with Dynamic Feature Fusion integrated at each skip connection to dynamically weight and combine the feature maps from the encoder and decoder. This ensures that both global and local features are effectively captured, while emphasizing the critical regions for segmentation. To further enhance the model's performance, the Hybrid Emperor Penguin Optimizer (HEPO) was employed for feature selection, while the Hybrid Vision Transformer with Convolutional Embedding (HyViT-CE) was used for the classification task. This hybrid approach allows the model to maintain high accuracy across different medical imaging tasks. The model was evaluated using three major datasets: brain tumor MRI, breast ultrasound, and chest X-rays. The results demonstrate exceptional performance, achieving an accuracy of 99.92% for brain tumor segmentation, 99.67% for breast cancer detection, and 99.93% for chest X-ray classification. These outcomes highlight the ability of the model to deliver reliable and accurate diagnostics across various medical contexts, underscoring its potential as a valuable tool in clinical settings. The findings of this study will contribute to advancing deep learning applications in medical imaging, addressing existing research gaps, and offering a robust solution for improved patient care.
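
The Dynamic Feature Fusion step described above can be pictured as a learned, per-channel blend of encoder and decoder feature maps at each skip connection. The sketch below is one plausible realization under that reading; the gating design is an assumption, not the paper's exact module.

```python
# Hedged sketch of dynamic feature fusion at a skip connection: learn per-channel
# weights that blend encoder and decoder feature maps instead of plain concatenation.
import torch
import torch.nn as nn

class DynamicFeatureFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Predict a per-channel blending weight from the pooled joint features
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * channels, channels), nn.Sigmoid(),
        )

    def forward(self, enc_feat, dec_feat):
        joint = torch.cat([enc_feat, dec_feat], dim=1)
        alpha = self.gate(joint).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        # Convex per-channel combination of encoder and decoder features
        return alpha * enc_feat + (1 - alpha) * dec_feat

fusion = DynamicFeatureFusion(channels=64)
enc = torch.randn(2, 64, 56, 56)   # encoder skip features (hypothetical shapes)
dec = torch.randn(2, 64, 56, 56)   # upsampled decoder features
fused = fusion(enc, dec)           # same shape, dynamically weighted
```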

A fully automated deep learning framework for age estimation in adults using periapical radiographs of canine teeth.

Upalananda W, Phisutphithayakun C, Assawasuksant P, Tanwattana P, Prasatkaew P

PubMed · Jul 8, 2025
Determining age from dental remains is vital in forensic investigations, aiding victim identification and anthropological research. We propose a fully automated framework with a two-step pipeline: tooth detection followed by age estimation, based either on canine tooth images alone or on images combined with sex information. The dataset included 2,587 radiographs from 1,004 patients (691 females, 313 males) aged 13.42-85.45 years. The YOLOv8-Nano model achieved exceptional performance in detecting canine teeth, with an F1 score of 0.994, a 98.94% detection success rate, and accurate numbering of all detected teeth. For age estimation, we implemented four convolutional neural network architectures: ResNet-18, DenseNet-121, EfficientNet-B0, and MobileNetV3. Each model was trained to estimate age from one of the four individual canine teeth (13, 23, 33, and 43). The models achieved median absolute errors ranging from 3.55 to 5.18 years. Incorporating sex as an additional input feature did not improve performance, and no significant differences in predictive accuracy were observed among the individual teeth. In conclusion, the proposed framework demonstrates potential as a robust and practical tool for age estimation across diverse forensic contexts.
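
The regression stage of such a pipeline can be sketched as a standard CNN backbone with its classification head replaced by a single age output, applied to the tooth patch the detector produced. The backbone choice, crop handling, and input size here are assumptions for illustration.

```python
# Sketch of the second stage: a ResNet-18 adapted to regress age (in years) from
# a cropped canine-tooth patch. The YOLOv8 detection stage is assumed to have
# already produced the bounding box.
import torch
import torch.nn as nn
from torchvision import models

# Replace the 1000-class ImageNet head with a single regression output
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

def estimate_age(radiograph: torch.Tensor, bbox: tuple) -> float:
    """radiograph: (3, H, W) tensor; bbox: (x1, y1, x2, y2) from the detector."""
    x1, y1, x2, y2 = bbox
    patch = radiograph[:, y1:y2, x1:x2].unsqueeze(0)           # crop the tooth
    patch = nn.functional.interpolate(patch, size=(224, 224))  # network input size
    backbone.eval()
    with torch.no_grad():
        return backbone(patch).item()

# Hypothetical radiograph and detector output, for illustration only
age = estimate_age(torch.rand(3, 1000, 800), (120, 340, 260, 520))
```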

MTMedFormer: multi-task vision transformer for medical imaging with federated learning.

Nath A, Shukla S, Gupta P

PubMed · Jul 8, 2025
Deep learning has revolutionized medical imaging, improving tasks like image segmentation, detection, and classification, often surpassing human accuracy. However, the training of effective diagnostic models is hindered by two major challenges: the need for large datasets for each task and privacy laws restricting the sharing of medical data. Multi-task learning (MTL) addresses the first challenge by enabling a single model to perform multiple tasks, though convolution-based MTL models struggle with contextualizing global features. Federated learning (FL) helps overcome the second challenge by allowing models to train collaboratively without sharing data, but traditional methods struggle to aggregate stable feature maps due to the permutation-invariant nature of neural networks. To tackle these issues, we propose MTMedFormer, a transformer-based multi-task medical imaging model. We leverage the transformers' ability to learn task-agnostic features using a shared encoder and utilize task-specific decoders for robust feature extraction. By combining MTL with a hybrid loss function, MTMedFormer learns distinct diagnostic tasks in a synergistic manner. Additionally, we introduce a novel Bayesian federation method for aggregating multi-task imaging models. Our results show that MTMedFormer outperforms traditional single-task and MTL models on mammogram and pneumonia datasets, while our Bayesian federation method surpasses traditional methods in image segmentation.
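
The shared-encoder, task-specific-head structure described above can be sketched as follows. The dimensions, head designs, and loss weighting are illustrative assumptions, not MTMedFormer's actual architecture, and the Bayesian federated aggregation step is omitted.

```python
# Hedged sketch of transformer-based multi-task learning: one shared encoder
# trunk trained jointly by task-specific heads through a summed (hybrid) loss.
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, dim=128, token_dim=256):
        super().__init__()
        self.embed = nn.Linear(token_dim, dim)  # flattened patch tokens -> model dim
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared trunk
        self.cls_head = nn.Linear(dim, 2)        # e.g., pneumonia yes/no
        self.seg_head = nn.Linear(dim, token_dim)  # per-token mask-patch logits

    def forward(self, tokens):
        z = self.encoder(self.embed(tokens))
        cls_logits = self.cls_head(z.mean(dim=1))  # pooled tokens for classification
        seg_logits = self.seg_head(z)              # one mask patch per token
        return cls_logits, seg_logits

model = SharedEncoderMTL()
tokens = torch.randn(2, 64, 256)  # 2 images, 64 flattened patches each
cls_out, seg_out = model(tokens)
# A hybrid loss sums the task losses; the equal weighting here is an assumption
loss = nn.CrossEntropyLoss()(cls_out, torch.tensor([0, 1])) \
     + nn.BCEWithLogitsLoss()(seg_out, torch.rand(2, 64, 256))
```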

Development of a deep learning model for predicting skeletal muscle density from ultrasound data: a proof-of-concept study.

Pistoia F, Macciò M, Picasso R, Zaottini F, Marcenaro G, Rinaldi S, Bianco D, Rossi G, Tovt L, Pansecchi M, Sanguinetti S, Hamedani M, Schenone A, Martinoli C

PubMed · Jul 8, 2025
Reduced muscle mass and function are associated with increased morbidity and mortality. Despite being cost-effective and portable, ultrasound is still underutilized in the assessment of muscle trophism because of its reliance on operator expertise and its measurement variability. This proof-of-concept study aimed to overcome these limitations by developing a deep learning model that predicts muscle density, as assessed by CT, from ultrasound data, exploring the feasibility of a novel ultrasound-based parameter for muscle trophism. A sample of adult participants undergoing CT examination in our institution's emergency department between May 2022 and March 2023 was enrolled in this single-center study. Ultrasound examinations were performed with an L11-3 MHz probe. The rectus abdominis muscles, selected as target muscles, were scanned in the transverse plane, recording one ultrasound image per side. For each participant, the same operator calculated the average target muscle density in Hounsfield units from an axial CT slice closely matching the ultrasound scanning plane. The final dataset included 1,090 ultrasound images from 551 participants (mean age 67 ± 17 years; 323 males). A deep learning model was developed to classify ultrasound images into three muscle-density classes based on CT values. The model achieved promising performance, with a categorical accuracy of 70% and AUC values of 0.89, 0.79, and 0.90 across the three classes. This observational study introduces an innovative approach to automated muscle trophism assessment using ultrasound imaging. Future efforts should focus on external validation in diverse populations and clinical settings, as well as on expanding the application to other muscles.
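
The CT-derived labels that supervise such a model amount to binning each muscle's mean Hounsfield-unit density into classes. A minimal sketch of that label construction, with invented cut-offs rather than the study's actual thresholds:

```python
# Sketch of CT-based label construction: bin each muscle's mean density (HU)
# into the three classes the ultrasound model is trained to predict.
# The cut-off values below are hypothetical, chosen only for illustration.
import numpy as np

HU_CUTOFFS = (20.0, 40.0)  # hypothetical class boundaries in Hounsfield units

def density_class(mean_hu: float) -> int:
    """0 = low, 1 = intermediate, 2 = normal muscle density."""
    if mean_hu < HU_CUTOFFS[0]:
        return 0
    return 1 if mean_hu < HU_CUTOFFS[1] else 2

# One CT-derived label per ultrasound image, paired by participant and side
mean_hus = np.array([12.3, 33.9, 51.2, 27.8])
labels = np.array([density_class(h) for h in mean_hus])  # -> [0, 1, 2, 1]
```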

Investigating the Potential of Generative AI Clinical Case-Based Simulations on Radiography Education: A Pilot Study.

Zhong D, Chow SKK

PubMed · Jul 8, 2025
Education for medical imaging technologists or radiographers in regional and rural areas often faces significant challenges due to limited financial, technological, and teaching resources. Generative AI presents a promising solution to overcome these barriers and support the professional development of radiographers. This pilot study aimed to evaluate the educational value of an in-house AI-based imaging simulation tool designed to generate clinically relevant medical images for professional training purposes. In July 2023, a professional development lecture featuring AI-generated clinical imaging content was delivered to students (N = 122/130) and recent graduates (N = 155/532), alongside a pre-lecture survey. Following the session, participants completed a questionnaire comprising structured and open-ended items to assess their understanding, perceptions, and interest in AI within medical imaging education. Survey results indicated that both students and graduates possessed a foundational awareness of AI applications in medical imaging. Graduates demonstrated significantly higher expectations for clinical realism in AI-generated simulations, likely reflecting their clinical experience. Although the simulator's current capabilities are limited in replicating complex diagnostic imaging, participants acknowledged its pedagogical value, particularly in supporting basic anatomical education. Approximately 50% of respondents expressed interest in further developing their AI knowledge and contributing to the research and development of AI-based educational tools. AI-driven imaging simulation tools have the potential to enhance radiography education and reduce teaching barriers. While further development is needed to improve clinical fidelity, such tools can play a valuable role in foundational training and foster learner engagement in AI innovation.

Vision Transformers-Based Deep Feature Generation Framework for Hydatid Cyst Classification in Computed Tomography Images.

Sagik M, Gumus A

PubMed · Jul 8, 2025
Hydatid cysts, caused by Echinococcus granulosus, form progressively enlarging fluid-filled cysts in organs such as the liver and lungs, posing significant public health risks through severe complications or death. This study presents a novel deep feature generation framework utilizing vision transformer models (ViT-DFG) to enhance the classification accuracy of hydatid cyst types. The proposed framework consists of four phases: image preprocessing, feature extraction using vision transformer models, feature selection through iterative neighborhood component analysis (INCA), and classification, where the performance of the ViT-DFG model was evaluated and compared across classifiers such as k-nearest neighbor (kNN) and multi-layer perceptron (MLP); both classifiers were evaluated independently. The dataset, comprising five cyst types, was analyzed for both five-class and three-class classification by grouping the cyst types into active, transition, and inactive categories. Experimental results showed that the proposed ViT-DFG method achieves higher accuracy than existing methods. Specifically, the framework attained an overall classification accuracy of 98.10% for the three-class and 95.12% for the five-class task using 5-fold cross-validation. One-way analysis of variance (ANOVA), conducted to evaluate differences between models, confirmed significant differences between the proposed framework and individual vision transformer models (p < 0.05). These results highlight the effectiveness of combining multiple vision transformer architectures with advanced feature selection techniques to improve classification performance. The findings underscore the ViT-DFG framework's potential to advance medical image analysis, particularly hydatid cyst classification, while offering clinical promise through automated diagnostics and improved decision-making.
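
The iterative selection loop behind INCA can be sketched as follows: rank the deep features, then sweep candidate subset sizes and keep whichever subset maximizes cross-validated kNN accuracy. Since scikit-learn does not expose NCA-derived feature weights directly, this sketch substitutes mutual information for the ranking step and runs on synthetic data.

```python
# Hedged sketch of an INCA-style iterative feature selection loop: rank features,
# then choose the subset size with the best k-NN cross-validated accuracy.
# Mutual information stands in for NCA feature weights; the data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           random_state=0)
order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]  # best first

best_acc, best_k = 0.0, 0
for k in range(5, 61, 5):                     # iterate over candidate subset sizes
    subset = X[:, order[:k]]
    acc = cross_val_score(KNeighborsClassifier(1), subset, y, cv=5).mean()
    if acc > best_acc:
        best_acc, best_k = acc, k
print(f"Selected top {best_k} features, 5-fold accuracy {best_acc:.3f}")
```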

Deep Learning Approach for Biomedical Image Classification.

Doshi RV, Badhiye SS, Pinjarkar L

PubMed · Jul 8, 2025
Biomedical image classification is of paramount importance in enhancing diagnostic precision and improving patient outcomes across diverse medical disciplines. In recent years, the advent of deep learning methodologies has significantly transformed this domain by facilitating notable advancements in image analysis and classification endeavors. This paper provides a thorough overview of the application of deep learning techniques in biomedical image classification, encompassing various types of healthcare data, including medical images derived from modalities such as mammography, histopathology, and radiology. A detailed discourse on deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and advanced models such as generative adversarial networks (GANs), is presented. Additionally, we delineate the distinctions between supervised, unsupervised, and reinforcement learning approaches, along with their respective roles within the context of biomedical imaging. This study systematically investigates 50 deep learning methodologies employed in the healthcare sector, elucidating their effectiveness in various tasks, including disease detection, image segmentation, and classification. It particularly emphasizes models that have been trained on publicly available datasets, thereby highlighting the significant role of open-access data in fostering advancements in AI-driven healthcare innovations. Furthermore, this review accentuates the transformative potential of deep learning in the realm of biomedical image analysis and delineates potential avenues for future research within this rapidly evolving field.

Adaptive batch-fusion self-supervised learning for ultrasound image pretraining.

Zhang J, Wu X, Liu S, Fan Y, Chen Y, Lyu G, Liu P, Liu Z, He S

PubMed · Jul 8, 2025
Medical self-supervised learning eliminates the reliance on labels, making feature extraction simple and efficient. However, the intricate design of pretext tasks in single-modal self-supervised analysis, compounded by an excessive dependency on data augmentation, has created a bottleneck in medical self-supervised learning research. This paper therefore reanalyzes the feature learnability introduced by data augmentation strategies in medical image self-supervised learning. We introduce an adaptive self-supervised data augmentation method based on batch fusion, and we propose a conv embedding block for learning the incremental representation between batches. On five fused data tasks proposed by previous researchers, the method achieved a linear classification protocol accuracy of 94.25% in a Vision Transformer (ViT) with only 150 epochs of self-supervised feature training, the best result among comparable methods. A detailed ablation study of previous augmentation strategies indicates that the proposed medical data augmentation strategy effectively represents ultrasound data features in the self-supervised learning process. The code and weights can be found here.
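
One reading of batch fusion as augmentation is a mixup-style blend of images within a batch, producing fused views for self-supervised pretraining. The sketch below follows that reading; the paper's exact fusion rule may differ.

```python
# A minimal sketch of batch-level fusion as an augmentation: blend each image in
# a batch with a randomly chosen partner using a sampled coefficient. This
# mixup-style blend is an illustrative assumption, not the paper's exact rule.
import torch

def batch_fusion(images: torch.Tensor, alpha: float = 0.4) -> torch.Tensor:
    """images: (B, C, H, W). Returns each image blended with a shuffled partner."""
    lam = torch.distributions.Beta(alpha, alpha).sample()  # blending coefficient
    perm = torch.randperm(images.size(0))                  # random partner per image
    return lam * images + (1 - lam) * images[perm]

batch = torch.rand(8, 1, 224, 224)    # hypothetical ultrasound image batch
fused = batch_fusion(batch)           # same shape, fused views for pretraining
```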