
Machine learning techniques for stroke prediction: A systematic review of algorithms, datasets, and regional gaps.

Soladoye AA, Aderinto N, Popoola MR, Adeyanju IA, Osonuga A, Olawade DB

PubMed · Jul 9 2025
Stroke is a leading cause of mortality and disability worldwide, with approximately 15 million people suffering strokes annually. Machine learning (ML) techniques have emerged as powerful tools for stroke prediction, enabling early identification of risk factors through data-driven approaches. However, the clinical utility and performance characteristics of these approaches require systematic evaluation. This review aimed to systematically analyze ML techniques used for stroke prediction, synthesize performance metrics across different prediction targets and data sources, evaluate their clinical applicability, and identify research trends, focusing on patient population characteristics and stroke prevalence patterns. A systematic review was conducted following PRISMA guidelines. Five databases (Google Scholar, Lens, PubMed, ResearchGate, and Semantic Scholar) were searched for open-access publications on ML-based stroke prediction published between January 2013 and December 2024. Data were extracted on publication characteristics, datasets, ML methodologies, evaluation metrics, prediction targets (stroke occurrence vs. outcomes), data sources (EHR, imaging, biosignals), patient demographics, and stroke prevalence. Descriptive synthesis was performed because substantial heterogeneity precluded quantitative meta-analysis. Fifty-eight studies were included, with peak publication output in 2021 (21 articles). Studies targeted three main prediction objectives: stroke occurrence prediction (n = 52, 62.7%), stroke outcome prediction (n = 19, 22.9%), and stroke type classification (n = 12, 14.4%). Data sources included electronic health records (n = 48, 57.8%), medical imaging (n = 21, 25.3%), and biosignals (n = 14, 16.9%). Systematic analysis revealed that ensemble methods consistently achieved the highest accuracies for stroke occurrence prediction (range: 90.4-97.8%), while deep learning excelled in imaging-based applications. African populations, despite having the highest stroke mortality rates globally, were represented in fewer than 4 studies. ML techniques show promising results for stroke prediction; however, significant gaps exist in the representation of high-risk populations and in real-world clinical validation. Future research should prioritize population-specific model development and clinical implementation frameworks.
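
The review reports that ensemble methods achieved the highest accuracies for stroke occurrence prediction on tabular data. As a hedged illustration of that kind of pipeline (not any specific reviewed model), the sketch below fits a gradient-boosting ensemble to synthetic EHR-style features; the column names and labels are hypothetical placeholders.

```python
# Minimal sketch of an ensemble classifier for stroke-occurrence prediction on
# tabular EHR-style features. Columns and labels are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(30, 90, n),
    "hypertension": rng.integers(0, 2, n),
    "avg_glucose_level": rng.normal(110, 30, n),
    "bmi": rng.normal(27, 5, n),
})
y = rng.integers(0, 2, n)  # 1 = stroke occurred (synthetic labels)

model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```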

A novel segmentation-based deep learning model for enhanced scaphoid fracture detection.

Bützow A, Anttila TT, Haapamäki V, Ryhänen J

PubMed · Jul 9 2025
The aim of this study was to develop a deep learning (DL) model to detect apparent and occult scaphoid fractures from plain wrist radiographs and to compare the model's diagnostic performance with that of a group of experts. A dataset comprising 408 patients, 410 wrists, and 1011 radiographs was collected. Of these radiographs, 718 contained a scaphoid fracture verified by magnetic resonance imaging or computed tomography, and 58 of the fractures were occult. The images were divided into training, test, and occult-fracture test sets and were annotated by marking the scaphoid bone and the possible fracture area. The performance of the developed DL model was compared with the ground truth and with the assessments of three clinical experts. The DL model achieved a sensitivity of 0.86 (95% CI: 0.75-0.93) and a specificity of 0.83 (0.64-0.94). The model's accuracy was 0.85 (0.76-0.92), and the area under the receiver operating characteristic curve was 0.92 (0.86-0.97). The clinical experts' sensitivity ranged from 0.77 to 0.89, and their specificity from 0.83 to 0.97. The DL model detected 24 of 58 (41%) occult fractures, compared with 10.3%, 13.7%, and 6.8% detected by the clinical experts. Detecting scaphoid fractures using a segmentation-based DL model is feasible and comparable to previously developed DL models. The model performed similarly to a group of experts in identifying apparent scaphoid fractures and demonstrated higher diagnostic accuracy in detecting occult fractures. The improvement in occult fracture detection could enhance patient care.
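
As a small illustration of the evaluation reported above, the sketch below computes sensitivity, specificity, accuracy, and AUC from binary fracture predictions with scikit-learn; the labels and scores are synthetic placeholders, not study data.

```python
# Sketch of the diagnostic metrics reported above (sensitivity, specificity,
# accuracy, AUC), computed from synthetic binary labels and model scores.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                             # 1 = fracture present
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"accuracy={accuracy_score(y_true, y_pred):.2f} "
      f"AUC={roc_auc_score(y_true, y_score):.2f}")
```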

[Standardization, digitalization, and intelligentization represent the future direction of hip arthroscopy diagnosis and treatment technology].

Li CB, Zhang J, Wang L, Wang YT, Kang XQ, Wang MX

PubMed · Jul 8 2025
In recent years, hip arthroscopy has made great progress and has been extended to the treatment of intra-articular and periarticular diseases. However, the complex structure of the hip joint, the high technical demands of the procedure, and a relatively long learning curve have hindered the popularization and development of hip arthroscopy in China. Therefore, on the one hand, it is necessary to promote research and training in standardized techniques for the diagnosis of hip disease and for arthroscopic surgery, so as to improve the safety, effectiveness, and accessibility of the technology. On the other hand, our organization proactively leverages cutting-edge digitalization and intelligentization technologies, including medical image digitalization, medical big data analytics, artificial intelligence, surgical navigation and robotic control, virtual reality, telemedicine, and 5G communication. We conduct a range of innovative research and development initiatives, such as intelligence-assisted diagnosis of hip diseases, digital preoperative planning, intelligent surgical navigation and robotic procedures, and smart rehabilitation solutions. These efforts aim to drive a digital and intelligent leap in the technology and to continuously enhance the precision of diagnosis and treatment. In conclusion, standardization promotes the homogenization of diagnosis and treatment, while digitalization and intelligentization enable precise operations. Their synergy lays the foundation for personalized diagnosis and treatment and for continuous innovation, ultimately driving the rapid development of hip arthroscopy.

An Institutional Large Language Model for Musculoskeletal MRI Improves Protocol Adherence and Accuracy.

Patrick Decourcy Hallinan JT, Leow NW, Low YX, Lee A, Ong W, Zhou Chan MD, Devi GK, He SS, De-Liang Loh D, Wei Lim DS, Low XZ, Teo EC, Furqan SM, Yang Tham WW, Tan JH, Kumar N, Makmur A, Yonghan T

PubMed · Jul 8 2025
Privacy-preserving large language models (PP-LLMs) hold potential for assisting clinicians with documentation. We evaluated a PP-LLM to improve the clinical information on radiology request forms for musculoskeletal magnetic resonance imaging (MRI) and to automate protocoling, which ensures that the most appropriate imaging is performed. The present retrospective study included musculoskeletal MRI radiology request forms that had been randomly collected from June to December 2023. Studies without electronic medical record (EMR) entries were excluded. An institutional PP-LLM (Claude Sonnet 3.5) augmented the original radiology request forms by mining EMRs and, in combination with rule-based processing of the LLM outputs, suggested appropriate protocols using institutional guidelines. Clinical information on the original and PP-LLM radiology request forms was compared using RI-RADS (Reason for exam Imaging Reporting and Data System) grading, assigned independently by 2 musculoskeletal (MSK) radiologists (MSK1, with 13 years of experience, and MSK2, with 11 years of experience). These radiologists established a consensus reference standard for protocoling, against which the protocol assignments of the PP-LLM and of 2 second-year board-certified radiologists (RAD1 and RAD2) were compared. Inter-rater reliability was assessed with use of the Gwet AC1, and the percentage agreement with the reference standard was calculated. Overall, 500 musculoskeletal MRI radiology request forms were analyzed for 407 patients (202 women and 205 men with a mean age [and standard deviation] of 50.3 ± 19.5 years) across a range of anatomical regions, including the spine/pelvis (143 MRI scans; 28.6%), upper extremity (169 scans; 33.8%), and lower extremity (188 scans; 37.6%). Two hundred and twenty-two (44.4%) of the 500 MRI scans required contrast. The clinical information provided in the PP-LLM-augmented radiology request forms was rated as superior to that in the original requests. Only 0.4% to 0.6% of PP-LLM radiology request forms were rated as limited/deficient, compared with 12.4% to 22.6% of the original requests (p < 0.001). Almost-perfect inter-rater reliability was observed for LLM-enhanced requests (AC1 = 0.99; 95% confidence interval [CI], 0.99 to 1.0), compared with substantial agreement for the original forms (AC1 = 0.62; 95% CI, 0.56 to 0.67). For protocoling, MSK1 and MSK2 showed almost-perfect agreement on the region/coverage (AC1 = 0.96; 95% CI, 0.95 to 0.98) and contrast requirement (AC1 = 0.98; 95% CI, 0.97 to 0.99). Compared with the consensus reference standard, protocoling accuracy for the PP-LLM was 95.8% (95% CI, 94.0% to 97.6%), which was significantly higher than that for both RAD1 (88.6%; 95% CI, 85.8% to 91.4%) and RAD2 (88.2%; 95% CI, 85.4% to 91.0%) (p < 0.001 for both). Musculoskeletal MRI request form augmentation with an institutional LLM provided superior clinical information and improved protocoling accuracy compared with clinician requests and non-MSK-trained radiologists. Institutional adoption of such LLMs could enhance the appropriateness of MRI utilization and patient care. Diagnostic Level III. See Instructions for Authors for a complete description of levels of evidence.
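
Inter-rater reliability in this study is reported with Gwet's AC1. The sketch below implements the standard two-rater AC1 formula on synthetic protocol assignments; the protocol labels are hypothetical and the function is an illustration, not the study's analysis code.

```python
# Sketch of Gwet's AC1 agreement coefficient for two raters over categorical
# protocol assignments. Ratings below are synthetic placeholders.
from collections import Counter

def gwet_ac1(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    k = len(categories)
    # Observed agreement.
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: pe = 1/(k-1) * sum_q pi_q * (1 - pi_q),
    # where pi_q is the mean marginal proportion of category q across raters.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    pe = sum(
        ((count_a[q] + count_b[q]) / (2 * n)) * (1 - (count_a[q] + count_b[q]) / (2 * n))
        for q in categories
    ) / (k - 1)
    return (pa - pe) / (1 - pe)

a = ["MRI knee + contrast", "MRI knee", "MRI spine", "MRI knee", "MRI spine"]
b = ["MRI knee + contrast", "MRI knee", "MRI spine", "MRI knee + contrast", "MRI spine"]
print(f"Gwet AC1 = {gwet_ac1(a, b):.2f}")
```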

Deep Learning Approach for Biomedical Image Classification.

Doshi RV, Badhiye SS, Pinjarkar L

PubMed · Jul 8 2025
Biomedical image classification is of paramount importance in enhancing diagnostic precision and improving patient outcomes across diverse medical disciplines. In recent years, the advent of deep learning methodologies has significantly transformed this domain by facilitating notable advancements in image analysis and classification endeavors. This paper provides a thorough overview of the application of deep learning techniques in biomedical image classification, encompassing various types of healthcare data, including medical images derived from modalities such as mammography, histopathology, and radiology. A detailed discourse on deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and advanced models such as generative adversarial networks (GANs), is presented. Additionally, we delineate the distinctions between supervised, unsupervised, and reinforcement learning approaches, along with their respective roles within the context of biomedical imaging. This study systematically investigates 50 deep learning methodologies employed in the healthcare sector, elucidating their effectiveness in various tasks, including disease detection, image segmentation, and classification. It particularly emphasizes models that have been trained on publicly available datasets, thereby highlighting the significant role of open-access data in fostering advancements in AI-driven healthcare innovations. Furthermore, this review accentuates the transformative potential of deep learning in the realm of biomedical image analysis and delineates potential avenues for future research within this rapidly evolving field.
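
As a minimal illustration of the CNN family of architectures the review surveys, the sketch below defines a small PyTorch image classifier; it is a generic example, not any specific model discussed in the review.

```python
# Minimal CNN image-classifier sketch in PyTorch: two convolutional blocks
# followed by global average pooling and a linear classification head.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN(num_classes=2)
dummy = torch.randn(4, 1, 128, 128)   # batch of grayscale images (placeholder)
print(model(dummy).shape)             # torch.Size([4, 2])
```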

Vision Transformers-Based Deep Feature Generation Framework for Hydatid Cyst Classification in Computed Tomography Images.

Sagik M, Gumus A

PubMed · Jul 8 2025
Hydatid cysts, caused by Echinococcus granulosus, form progressively enlarging fluid-filled cysts in organs such as the liver and lungs, posing significant public health risks through severe complications or death. This study presents a novel deep feature generation framework utilizing vision transformer models (ViT-DFG) to enhance the classification accuracy of hydatid cyst types. The proposed framework consists of four phases: image preprocessing, feature extraction using vision transformer models, feature selection through iterative neighborhood component analysis, and classification, in which the performance of the ViT-DFG model was evaluated and compared across classifiers such as k-nearest neighbor and multi-layer perceptron (MLP); both classifiers were evaluated independently. The dataset, comprising five cyst types, was analyzed for both five-class and three-class classification by grouping the cyst types into active, transition, and inactive categories. Experimental results showed that the proposed ViT-DFG method achieves higher accuracy than existing methods. Specifically, the ViT-DFG framework attained an overall classification accuracy of 98.10% for the three-class classification and 95.12% for the five-class classification using 5-fold cross-validation. Statistical analysis through one-way analysis of variance (ANOVA), conducted to evaluate differences between models, confirmed significant differences between the proposed framework and the individual vision transformer models (p < 0.05). These results highlight the effectiveness of combining multiple vision transformer architectures with advanced feature selection techniques to improve classification performance. The findings underscore the ViT-DFG framework's potential to advance medical image analysis, particularly hydatid cyst classification, while offering clinical promise through automated diagnostics and improved decision-making.
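
A hedged sketch of the pipeline shape described above (vision-transformer feature extraction, feature selection, then k-nearest-neighbor classification with 5-fold cross-validation) is shown below. SelectKBest stands in for the paper's iterative neighborhood component analysis, pretrained weights are omitted to keep the example self-contained, and the images and labels are random placeholders.

```python
# Sketch: ViT feature extraction -> feature selection -> k-NN with 5-fold CV.
import torch
from torchvision.models import vit_b_16   # ViT_B_16_Weights.DEFAULT would be used in practice
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

vit = vit_b_16(weights=None)           # pretrained weights omitted to keep the sketch offline
vit.heads = torch.nn.Identity()        # expose the 768-dim encoder output as features
vit.eval()

images = torch.randn(40, 3, 224, 224)  # placeholder CT slices, already preprocessed
labels = torch.arange(40) % 3          # placeholder classes (active/transition/inactive)

with torch.no_grad():
    feats = vit(images).numpy()

clf = make_pipeline(SelectKBest(f_classif, k=128), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(clf, feats, labels.numpy(), cv=5)
print(f"5-fold accuracy: {scores.mean():.3f}")
```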

Investigating the Potential of Generative AI Clinical Case-Based Simulations on Radiography Education: A Pilot Study.

Zhong D, Chow SKK

PubMed · Jul 8 2025
Education for medical imaging technologists or radiographers in regional and rural areas often faces significant challenges due to limited financial, technological, and teaching resources. Generative AI presents a promising solution to overcome these barriers and support the professional development of radiographers. This pilot study aimed to evaluate the educational value of an in-house AI-based imaging simulation tool designed to generate clinically relevant medical images for professional training purposes. In July 2023, a professional development lecture featuring AI-generated clinical imaging content was delivered to students (N = 122/130) and recent graduates (N = 155/532), alongside a pre-lecture survey. Following the session, participants completed a questionnaire comprising structured and open-ended items to assess their understanding, perceptions, and interest in AI within medical imaging education. Survey results indicated that both students and graduates possessed a foundational awareness of AI applications in medical imaging. Graduates demonstrated significantly higher expectations for clinical realism in AI-generated simulations, likely reflecting their clinical experience. Although the simulator's current capabilities are limited in replicating complex diagnostic imaging, participants acknowledged its pedagogical value, particularly in supporting basic anatomical education. Approximately 50% of respondents expressed interest in further developing their AI knowledge and contributing to the research and development of AI-based educational tools. AI-driven imaging simulation tools have the potential to enhance radiography education and reduce teaching barriers. While further development is needed to improve clinical fidelity, such tools can play a valuable role in foundational training and foster learner engagement in AI innovation.

Development of a deep learning model for predicting skeletal muscle density from ultrasound data: a proof-of-concept study.

Pistoia F, Macciò M, Picasso R, Zaottini F, Marcenaro G, Rinaldi S, Bianco D, Rossi G, Tovt L, Pansecchi M, Sanguinetti S, Hamedani M, Schenone A, Martinoli C

PubMed · Jul 8 2025
Reduced muscle mass and function are associated with increased morbidity and mortality. Ultrasound, despite being cost-effective and portable, is still underutilized in muscle trophism assessment because of its reliance on operator expertise and measurement variability. This proof-of-concept study aimed to overcome these limitations by developing a deep learning model that predicts muscle density, as assessed by CT, using ultrasound data, exploring the feasibility of a novel ultrasound-based parameter for muscle trophism. A sample of adult participants undergoing CT examination in our institution's emergency department between May 2022 and March 2023 was enrolled in this single-center study. Ultrasound examinations were performed with an L11-3 MHz probe. The rectus abdominis muscles, selected as target muscles, were scanned in the transverse plane, recording one ultrasound image per side. For each participant, the same operator calculated the average target muscle density in Hounsfield units from an axial CT slice closely matching the ultrasound scanning plane. The final dataset included 1090 ultrasound images from 551 participants (mean age 67 ± 17 years; 323 males). A deep learning model was developed to classify ultrasound images into three muscle-density classes based on CT values. The model achieved promising performance, with a categorical accuracy of 70% and AUC values of 0.89, 0.79, and 0.90 across the three classes. This observational study introduces an innovative approach to automated muscle trophism assessment using ultrasound imaging. Future efforts should focus on external validation in diverse populations and clinical settings, as well as on expanding the application to other muscles.
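
As an illustration of the evaluation reported above, the sketch below computes categorical accuracy and a one-vs-rest AUC for each of three density classes from synthetic softmax outputs; it is not the study's code or data.

```python
# Sketch of categorical accuracy and per-class one-vs-rest AUC for a
# three-class muscle-density classifier. Predictions are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_images, n_classes = 300, 3
y_true = rng.integers(0, n_classes, n_images)         # density class from the CT reference
proba = rng.dirichlet(np.ones(n_classes), n_images)   # model softmax outputs (placeholder)
y_pred = proba.argmax(axis=1)

print(f"categorical accuracy: {accuracy_score(y_true, y_pred):.2f}")
for c in range(n_classes):
    auc = roc_auc_score((y_true == c).astype(int), proba[:, c])
    print(f"class {c} one-vs-rest AUC: {auc:.2f}")
```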

MTMedFormer: multi-task vision transformer for medical imaging with federated learning.

Nath A, Shukla S, Gupta P

PubMed · Jul 8 2025
Deep learning has revolutionized medical imaging, improving tasks like image segmentation, detection, and classification, often surpassing human accuracy. However, the training of effective diagnostic models is hindered by two major challenges: the need for large datasets for each task and privacy laws restricting the sharing of medical data. Multi-task learning (MTL) addresses the first challenge by enabling a single model to perform multiple tasks, though convolution-based MTL models struggle with contextualizing global features. Federated learning (FL) helps overcome the second challenge by allowing models to train collaboratively without sharing data, but traditional methods struggle to aggregate stable feature maps due to the permutation-invariant nature of neural networks. To tackle these issues, we propose MTMedFormer, a transformer-based multi-task medical imaging model. We leverage the transformers' ability to learn task-agnostic features using a shared encoder and utilize task-specific decoders for robust feature extraction. By combining MTL with a hybrid loss function, MTMedFormer learns distinct diagnostic tasks in a synergistic manner. Additionally, we introduce a novel Bayesian federation method for aggregating multi-task imaging models. Our results show that MTMedFormer outperforms traditional single-task and MTL models on mammogram and pneumonia datasets, while our Bayesian federation method surpasses traditional methods in image segmentation.
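
The following is a simplified PyTorch sketch of the shared-encoder, task-specific-head idea with a weighted hybrid loss; it illustrates the concept only and is not the authors' MTMedFormer architecture or their Bayesian federation method.

```python
# Shared transformer encoder with two task-specific heads and a weighted
# hybrid loss (conceptual illustration of multi-task learning).
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.cls_head = nn.Linear(d_model, 2)   # e.g. pneumonia classification
        self.seg_head = nn.Linear(d_model, 1)   # e.g. per-patch segmentation logit

    def forward(self, tokens):                  # tokens: [batch, patches, d_model]
        shared = self.shared_encoder(tokens)
        return self.cls_head(shared.mean(dim=1)), self.seg_head(shared)

model = MultiTaskModel()
tokens = torch.randn(2, 16, 64)                 # placeholder patch embeddings
cls_logits, seg_logits = model(tokens)

cls_loss = nn.functional.cross_entropy(cls_logits, torch.randint(0, 2, (2,)))
seg_loss = nn.functional.binary_cross_entropy_with_logits(seg_logits, torch.rand(2, 16, 1))
hybrid_loss = 0.5 * cls_loss + 0.5 * seg_loss   # weighted hybrid loss
hybrid_loss.backward()
```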

A fully automated deep learning framework for age estimation in adults using periapical radiographs of canine teeth.

Upalananda W, Phisutphithayakun C, Assawasuksant P, Tanwattana P, Prasatkaew P

PubMed · Jul 8 2025
Determining age from dental remains is vital in forensic investigations, aiding in victim identification and anthropological research. Our framework uses a two-step pipeline: tooth detection followed by age estimation, using either canine tooth images alone or images combined with sex information. The dataset included 2,587 radiographs from 1,004 patients (691 females, 313 males) aged 13.42-85.45 years. The YOLOv8-Nano model achieved exceptional performance in detecting canine teeth, with an F1 score of 0.994, a 98.94% detection success rate, and accurate numbering of all detected teeth. For age estimation, we implemented four convolutional neural network architectures: ResNet-18, DenseNet-121, EfficientNet-B0, and MobileNetV3. Each model was trained to estimate age from one of the four individual canine teeth (13, 23, 33, and 43). The models achieved median absolute errors ranging from 3.55 to 5.18 years. Incorporating sex as an additional input feature did not improve performance. Moreover, no significant differences in predictive accuracy were observed among the individual teeth. In conclusion, the proposed framework demonstrates potential as a robust and practical tool for forensic age estimation across diverse forensic contexts.
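
As a hedged sketch of the framework's second stage, the example below regresses age from cropped tooth images with a ResNet-18 backbone and scores predictions with the median absolute error; the detection stage (YOLOv8) is not reproduced, and the crops and ages are random placeholders.

```python
# Sketch of the age-estimation stage: ResNet-18 with a single regression
# output, evaluated with the median absolute error.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)                     # pretrained weights would be used in practice
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single output: age in years

crops = torch.randn(8, 3, 224, 224)                   # placeholder cropped canine-tooth images
true_ages = torch.empty(8).uniform_(13.0, 85.0)

pred_ages = backbone(crops).squeeze(1)
loss = nn.functional.l1_loss(pred_ages, true_ages)    # L1 loss for training
median_abs_error = (pred_ages - true_ages).abs().median()
print(f"median absolute error: {median_abs_error.item():.2f} years")
```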