Page 253 of 4064055 results

Machine learning is changing osteoporosis detection: an integrative review.

Zhang Y, Ma M, Huang X, Liu J, Tian C, Duan Z, Fu H, Huang L, Geng B

PubMed | Jun 10, 2025
Machine learning drives osteoporosis detection and screening with higher clinical accuracy and accessibility than traditional screening tools. This review takes a step-by-step view of machine learning for osteoporosis detection, offering insights into current practice and the outlook for the future. Early diagnosis and risk detection of osteoporosis have long been crucial and challenging problems in medicine. With the deepening application of artificial intelligence, especially machine learning, in the medical field, significant breakthroughs have been made in the early diagnosis and risk detection of osteoporosis. Machine learning is a multidimensional technical system encompassing a wide variety of algorithm types. Having matured over many years of application to medical data processing, machine learning algorithms deliver stable and accurate detection performance, laying a solid foundation for the detection and diagnosis of osteoporosis. As an essential part of this technical system, deep learning algorithms are complex models based on artificial neural networks. Owing to their robust image recognition and feature extraction capabilities, deep learning algorithms have matured considerably in the early diagnosis and risk assessment of osteoporosis in recent years, opening new approaches for early and accurate diagnosis and risk detection. This paper reviews the latest research of the past decade, ranging from relatively basic, widely adopted machine learning algorithms combined with clinical data to more advanced deep learning techniques integrated with imaging data such as X-ray, CT, and MRI.
By analyzing the application of algorithms at different stages, we found that basic machine learning algorithms performed well on single-modality structured data but encountered limitations when handling high-dimensional, unstructured imaging data. Deep learning, by contrast, can significantly improve detection accuracy by automatically extracting image features, especially in image histological analysis, but it faces challenges, including the "black-box" problem, heavy reliance on large amounts of labeled data, and limited clinical interpretability. These issues highlight the importance of model interpretability in future machine learning research. Finally, we anticipate the future development of a predictive model that combines multimodal data (such as clinical indicators, blood biochemical indicators, imaging data, and genetic data) with electronic health records and machine learning techniques. Such a model would provide a skeletal health monitoring system that is highly accessible, personalized, convenient, and efficient, furthering the early detection and prevention of osteoporosis.

Artificial intelligence and endoanal ultrasound: pioneering automated differentiation of benign anal and sphincter lesions.

Mascarenhas M, Almeida MJ, Martins M, Mendes F, Mota J, Cardoso P, Mendes B, Ferreira J, Macedo G, Poças C

PubMed | Jun 10, 2025
Anal injuries, such as lacerations and fissures, are challenging to diagnose because of their anatomical complexity. Endoanal ultrasound (EAUS) has proven to be a reliable tool for detailed visualization of anal structures but relies on expert interpretation. Artificial intelligence (AI) may offer a solution for more accurate and consistent diagnoses. This study aims to develop and test a convolutional neural network (CNN)-based algorithm for automatic classification of fissures and anal lacerations (internal and external) on EAUS. A single-center retrospective study analyzed 238 EAUS radial probe exams (April 2022-January 2024), categorizing 4528 frames into fissures (516), external lacerations (2174), and internal lacerations (1838), following validation by three experts. Data were split 80% for training and 20% for testing. Performance metrics included sensitivity, specificity, and accuracy. For external lacerations, the CNN achieved 82.5% sensitivity, 93.5% specificity, and 88.2% accuracy. For internal lacerations, it achieved 91.7% sensitivity, 85.9% specificity, and 88.2% accuracy. For anal fissures, it achieved 100% sensitivity, specificity, and accuracy. This first EAUS AI-assisted model for differentiating benign anal injuries demonstrates excellent diagnostic performance. It highlights AI's potential to improve accuracy, reduce reliance on expertise, and support broader clinical adoption. While currently limited by a small dataset and single-center scope, this work represents a significant step toward integrating AI in proctology.
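Per-class figures like those above follow directly from confusion-matrix counts. A minimal sketch (the counts below are hypothetical, chosen only for illustration, not taken from the study):

```python
# Illustrative sketch, not the authors' code: sensitivity, specificity,
# and accuracy from binary confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) as fractions."""
    sensitivity = tp / (tp + fn)                    # true-positive rate
    specificity = tn / (tn + fp)                    # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a one-vs-rest "external laceration" split
sens, spec, acc = classification_metrics(tp=330, fp=40, tn=540, fn=70)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
```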

Arthroscopy-validated diagnostic performance of sub-5-min deep learning super-resolution 3T knee MRI in children and adolescents.

Vosshenrich J, Breit HC, Donners R, Obmann MM, Harder D, Ahlawat S, Walter SS, Serfaty A, Cantarelli Rodrigues T, Recht M, Stern SE, Fritz J

PubMed | Jun 10, 2025
This study aims to determine the diagnostic performance of sub-5-min combined sixfold parallel imaging (PIx3)-simultaneous multislice (SMSx2)-accelerated deep learning (DL) super-resolution 3T knee MRI in children and adolescents. Children with painful knee conditions who underwent PIx3-SMSx2-accelerated DL super-resolution 3T knee MRI and arthroscopy between October 2022 and December 2023 were retrospectively included. Nine fellowship-trained musculoskeletal radiologists independently scored the MRI studies for image quality and the presence of artifacts (Likert scales, range: 1 = very bad/severe, 5 = very good/absent), as well as structural abnormalities. Interreader agreement and diagnostic performance testing were performed. Forty-four children (mean age: 15 ± 2 years; range: 9-17 years; 24 boys) who underwent knee MRI and arthroscopic surgery within 22 days (range: 2-133 days) were evaluated. Overall image quality was very good (median rating: 5 [IQR: 4-5]). Motion artifacts (5 [5-5]) and image noise (5 [4-5]) were absent. Arthroscopy-verified abnormalities were detected with good or better interreader agreement (κ ≥ 0.74). Sensitivity, specificity, accuracy, and AUC values were 100%, 84%, 93%, and 0.92, respectively, for anterior cruciate ligament tears; 71%, 97%, 93%, and 0.84 for medial meniscus tears; 65%, 100%, 86%, and 0.82 for lateral meniscus tears; 100%, 100%, 100%, and 1.00 for discoid lateral menisci; 100%, 95%, 96%, and 0.98 for medial patellofemoral ligament tears; and 55%, 100%, 98%, and 0.77 for articular cartilage defects. Clinical sub-5-min PIx3-SMSx2-accelerated DL super-resolution 3T knee MRI provides excellent image quality and high diagnostic performance for diagnosing internal derangement in children and adolescents.

Evaluation of artificial-intelligence-based liver segmentation and its application for longitudinal liver volume measurement.

Kimura R, Hirata K, Tsuneta S, Takenaka J, Watanabe S, Abo D, Kudo K

PubMed | Jun 10, 2025
Accurate liver-volume measurements from CT scans are essential for treatment planning, particularly in liver resection cases, to avoid postoperative liver failure. However, manual segmentation is time-consuming and prone to variability. Advancements in artificial intelligence (AI), specifically convolutional neural networks, have enhanced liver segmentation accuracy. We aimed to identify optimal CT phases for AI-based liver volume estimation and apply the model to track liver volume changes over time. We also evaluated temporal changes in liver volume in participants without liver disease. In this retrospective, single-center study, we assessed the performance of a previously reported open-source AI-based liver segmentation model, using non-contrast and dynamic CT phases. The accuracy of the model was compared with that of expert radiologists. The Dice similarity coefficient (DSC) was calculated across various CT phases, including arterial, portal venous, and non-contrast, to validate the model. The model was then applied to a longitudinal study involving 39 patients without liver disease (527 CT scans) to examine age-related liver volume changes over 5 to 20 years. The model demonstrated high accuracy across all phases compared to manual segmentation. Among the CT phases, the highest DSC of 0.988 ± 0.010 was in the arterial phase. The intraclass correlation coefficients for liver volume were also high, exceeding 0.9 for contrast-enhanced phases and 0.8 for non-contrast CT. In the longitudinal study, the model indicated an annual liver-volume decrease of 0.95%. This model provides high accuracy in liver segmentation across various CT phases and offers insights into age-related liver volume reduction. Measuring changes in liver volume may help with the early detection of diseases and the understanding of pathophysiology.
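The validation metric above, the Dice similarity coefficient, compares the overlap of two binary masks. A minimal numpy sketch (illustrative only, not the study's code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:                     # both masks empty: define DSC as 1
        return 1.0
    return 2.0 * intersection / total
```

A DSC of 0.988, as reported for the arterial phase, means the automatic and manual liver masks are nearly identical voxel for voxel.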

Uncovering Image-Driven Subtypes with Distinct Pathology and Clinical Course in Autopsy-Confirmed Four Repeat Tauopathies.

Satoh R, Sekiya H, Ali F, Clark HM, Utianski RL, Duffy JR, Machulda MM, Dickson DW, Josephs KA, Whitwell JL

PubMed | Jun 10, 2025
The four-repeat (4R) tauopathies are a group of neurodegenerative diseases, including progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), and globular glial tauopathy (GGT). This study aimed to characterize spatiotemporal atrophy progression using structural magnetic resonance imaging (MRI) and to examine its relationship with clinical course and neuropathology in a cohort of autopsy-confirmed 4R tauopathies. The study included 85 autopsied patients (54 with PSP, 28 with CBD, and 3 with GGT) who underwent multiple 3T MRI scans, as well as neuropsychological, neurological, and speech/language examinations, and standardized postmortem neuropathological evaluations. An unsupervised machine-learning algorithm, Subtype and Stage Inference (SuStaIn), was applied to the cross-sectional brain volumes to estimate spatiotemporal atrophy patterns and data-driven subtypes and stages in each patient. The relationships among estimated subtypes, pathological diagnoses, and longitudinal changes in clinical testing were examined. The SuStaIn algorithm identified 2 distinct subtypes: (1) the subcortical subtype, in which atrophy progresses from the midbrain to the cortex, and (2) the cortical subtype, in which atrophy progresses from the frontal cortex to the subcortical regions. The subcortical subtype was more associated with typical PSP, whereas the cortical subtype was more associated with atypical PSP with a cortical distribution of pathology and CBD (p < 0.001). The cortical subtype had a faster rate of change on the PSP Rating Scale than the subcortical subtype (p < 0.05). SuStaIn analysis revealed 2 MRI-driven subtypes with distinct spatiotemporal atrophy patterns, clinical courses, and neuropathology. Our findings contribute to a comprehensive and improved understanding of disease progression and its relationship to tau pathology in 4R tauopathies. ANN NEUROL 2025.

SSS: Semi-Supervised SAM-2 with Efficient Prompting for Medical Imaging Segmentation

Hongjie Zhu, Xiwei Liu, Rundong Xue, Zeyu Zhang, Yong Xu, Daji Ergu, Ying Cai, Yang Zhao

arXiv preprint | Jun 10, 2025
In the era of information explosion, efficiently leveraging large-scale unlabeled data while minimizing the reliance on high-quality pixel-level annotations remains a critical challenge in the field of medical imaging. Semi-supervised learning (SSL) enhances the utilization of unlabeled data by facilitating knowledge transfer, significantly improving the performance of fully supervised models and emerging as a highly promising research direction in medical image analysis. Inspired by the ability of Vision Foundation Models (e.g., SAM-2) to provide rich prior knowledge, we propose SSS (Semi-Supervised SAM-2), a novel approach that leverages SAM-2's robust feature extraction capabilities to uncover latent knowledge in unlabeled medical images, thus effectively enhancing feature support for fully supervised medical image segmentation. Specifically, building upon the single-stream "weak-to-strong" consistency regularization framework, this paper introduces a Discriminative Feature Enhancement (DFE) mechanism to further explore the feature discrepancies introduced by various data augmentation strategies across multiple views. By leveraging feature similarity and dissimilarity across multi-scale augmentation techniques, the method reconstructs and models the features, thereby effectively optimizing the salient regions. Furthermore, a prompt generator is developed that integrates Physical Constraints with a Sliding Window (PCSW) mechanism to generate input prompts for unlabeled data, fulfilling SAM-2's requirement for additional prompts. Extensive experiments demonstrate the superiority of the proposed method for semi-supervised medical image segmentation on two multi-label datasets, i.e., ACDC and BHSD. Notably, SSS achieves an average Dice score of 53.15 on BHSD, surpassing the previous state-of-the-art method by +3.65 Dice. Code will be available at https://github.com/AIGeeksGroup/SSS.
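The "weak-to-strong" consistency regularization that SSS builds on uses confident predictions on a weakly augmented view as pseudo-labels for the strongly augmented view. A minimal numpy sketch of such a loss (the shapes and the 0.95 confidence threshold are illustrative assumptions, not SSS's actual settings):

```python
import numpy as np

def consistency_loss(p_weak: np.ndarray, p_strong: np.ndarray,
                     threshold: float = 0.95) -> float:
    """Cross-entropy between confident weak-view pseudo-labels and
    strong-view predictions; low-confidence pixels are masked out.
    Both inputs are per-pixel class-probability arrays (..., n_classes)."""
    pseudo = p_weak.argmax(axis=-1)            # hard pseudo-labels
    confidence = p_weak.max(axis=-1)
    mask = confidence >= threshold             # keep only confident pixels
    if not mask.any():
        return 0.0
    # probability the strong view assigns to each pseudo-label
    picked = np.take_along_axis(p_strong, pseudo[..., None], axis=-1)[..., 0]
    return float(-np.log(picked[mask] + 1e-8).mean())
```

The unlabeled-data objective is this loss summed over the batch; SSS additionally reweights features via its DFE mechanism, which is not modeled here.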

HiSin: Efficient High-Resolution Sinogram Inpainting via Resolution-Guided Progressive Inference

Jiaze E, Srutarshi Banerjee, Tekin Bicer, Guannan Wang, Yanfu Zhang, Bin Ren

arXiv preprint | Jun 10, 2025
High-resolution sinogram inpainting is essential for computed tomography reconstruction, as missing high-frequency projections can lead to visible artifacts and diagnostic errors. Diffusion models are well-suited for this task due to their robustness and detail-preserving capabilities, but their application to high-resolution inputs is limited by excessive memory and computational demands. To address this limitation, we propose HiSin, a novel diffusion-based framework for efficient sinogram inpainting via resolution-guided progressive inference. It progressively extracts global structure at low resolution and defers high-resolution inference to small patches, enabling memory-efficient inpainting. It further incorporates frequency-aware patch skipping and structure-adaptive step allocation to reduce redundant computation. Experimental results show that HiSin reduces peak memory usage by up to 31.25% and inference time by up to 18.15%, and maintains inpainting accuracy across datasets, resolutions, and mask conditions.
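Deferring high-resolution inference to small patches, as HiSin does, amounts to tiling the sinogram with a sliding window and running the model one tile at a time. A minimal sketch of the tiling step (window and stride values are illustrative; HiSin's actual schedule is frequency- and structure-adaptive):

```python
import numpy as np

def iter_patches(image: np.ndarray, patch: int, stride: int):
    """Yield (row, col, tile) views covering a 2-D sinogram, so
    high-resolution inference can run on small tiles one at a time
    instead of on the full array, bounding peak memory use."""
    h, w = image.shape
    for r in range(0, max(h - patch, 0) + 1, stride):
        for c in range(0, max(w - patch, 0) + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]

# Example: a hypothetical 8x8 sinogram tiled into four 4x4 patches
sinogram = np.arange(64, dtype=float).reshape(8, 8)
tiles = list(iter_patches(sinogram, patch=4, stride=4))
print(len(tiles), tiles[0][2].shape)
```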

DIsoN: Decentralized Isolation Networks for Out-of-Distribution Detection in Medical Imaging

Felix Wagner, Pramit Saha, Harry Anthony, J. Alison Noble, Konstantinos Kamnitsas

arXiv preprint | Jun 10, 2025
Safe deployment of machine learning (ML) models in safety-critical domains such as medical imaging requires detecting inputs with characteristics not seen during training, known as out-of-distribution (OOD) detection, to prevent unreliable predictions. Effective OOD detection after deployment could benefit from access to the training data, enabling direct comparison between test samples and the training data distribution to identify differences. State-of-the-art OOD detection methods, however, either discard training data after deployment or assume that test samples and training data are centrally stored together, an assumption that rarely holds in real-world settings. This is because shipping training data with the deployed model is usually impossible due to the size of training databases, as well as proprietary or privacy constraints. We introduce the Isolation Network, an OOD detection framework that quantifies the difficulty of separating a target test sample from the training data by solving a binary classification task. We then propose Decentralized Isolation Networks (DIsoN), which enables the comparison of training and test data when data-sharing is impossible, by exchanging only model parameters between the remote computational nodes of training and deployment. We further extend DIsoN with class-conditioning, comparing a target sample solely with training data of its predicted class. We evaluate DIsoN on four medical imaging datasets (dermatology, chest X-ray, breast ultrasound, histopathology) across 12 OOD detection tasks. DIsoN performs favorably against existing methods while respecting data-privacy. This decentralized OOD detection framework opens the way for a new type of service that ML developers could provide along with their models: providing remote, secure utilization of their training data for OOD detection services. Code will be available upon acceptance at: *****
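The core Isolation Network idea, scoring a test sample by how hard it is to separate from the training data with a binary classifier, can be sketched with a tiny numpy logistic regression (an illustrative stand-in; the paper trains full networks and, in DIsoN, exchanges only model parameters between remote nodes rather than pooling the data):

```python
import numpy as np

def isolation_score(train_X: np.ndarray, target: np.ndarray,
                    steps: int = 2000, lr: float = 0.1) -> float:
    """Fit a linear classifier to separate one target sample (label 1)
    from the training set (label 0); return the final training loss.
    An in-distribution sample is hard to isolate (high loss), while an
    out-of-distribution sample separates easily (low loss)."""
    X = np.vstack([train_X, target[None, :]])
    y = np.zeros(len(X))
    y[-1] = 1.0
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):                          # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid predictions
        grad = p - y
        w -= lr * (X.T @ grad) / len(X)
        b -= lr * grad.mean()
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return float(-(y * np.log(p + 1e-8)
                   + (1 - y) * np.log(1 - p + 1e-8)).mean())
```

With a training cluster near the origin, a far-away target yields a lower (easier-to-isolate) score than a target inside the cluster, which is the ordering the OOD decision rests on.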

Adapting Vision-Language Foundation Model for Next Generation Medical Ultrasound Image Analysis

Jingguo Qu, Xinyang Han, Tonghuan Xiao, Jia Ai, Juan Wu, Tong Zhao, Jing Qin, Ann Dorothy King, Winnie Chiu-Wing Chu, Jing Cai, Michael Tin-Cheung Ying

arXiv preprint | Jun 10, 2025
Medical ultrasonography is an essential imaging technique for examining superficial organs and tissues, including lymph nodes, breast, and thyroid. It employs high-frequency ultrasound waves to generate detailed images of the internal structures of the human body. However, manually contouring regions of interest in these images is a labor-intensive task that demands expertise and often results in inconsistent interpretations among individuals. Vision-language foundation models, which have excelled in various computer vision applications, present new opportunities for enhancing ultrasound image analysis. Yet, their performance is hindered by the significant differences between natural and medical imaging domains. This research seeks to overcome these challenges by developing domain adaptation methods for vision-language foundation models. In this study, we explore the fine-tuning pipeline for vision-language foundation models by utilizing a large language model as a text refiner, together with specially designed adaptation strategies and task-driven heads. Our approach has been extensively evaluated on six ultrasound datasets and two tasks: segmentation and classification. The experimental results show that our method can effectively improve the performance of vision-language foundation models for ultrasound image analysis, and outperform the existing state-of-the-art vision-language and pure foundation models. The source code of this study is available at https://github.com/jinggqu/NextGen-UIA.

Geometric deep learning for local growth prediction on abdominal aortic aneurysm surfaces

Dieuwertje Alblas, Patryk Rygiel, Julian Suk, Kaj O. Kappe, Marieke Hofman, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink

arXiv preprint | Jun 10, 2025
Abdominal aortic aneurysms (AAAs) are progressive focal dilatations of the abdominal aorta. AAAs may rupture, with a survival rate of only 20%. Current clinical guidelines recommend elective surgical repair when the maximum AAA diameter exceeds 55 mm in men or 50 mm in women. Patients who do not meet these criteria are periodically monitored, with surveillance intervals based on the maximum AAA diameter. However, this diameter does not take into account the complex relation between the 3D AAA shape and its growth, making standardized intervals potentially ill-suited. Personalized AAA growth predictions could improve monitoring strategies. We propose to use an SE(3)-symmetric transformer model to predict AAA growth directly on the vascular model surface enriched with local, multi-physical features. In contrast to other works which have parameterized the AAA shape, this representation preserves the vascular surface's anatomical structure and geometric fidelity. We train our model using a longitudinal dataset of 113 computed tomography angiography (CTA) scans of 24 AAA patients at irregularly sampled intervals. After training, our model predicts AAA growth to the next scan moment with a median diameter error of 1.18 mm. We further demonstrate our model's utility to identify whether a patient will become eligible for elective repair within two years (acc = 0.93). Finally, we evaluate our model's generalization on an external validation set consisting of 25 CTAs from 7 AAA patients from a different hospital. Our results show that local directional AAA growth prediction from the vascular surface is feasible and may contribute to personalized surveillance strategies.
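Two of the reported evaluation quantities are straightforward to compute: the median diameter error between predicted and measured maximum diameters, and repair eligibility under the cited 55 mm (men) / 50 mm (women) thresholds. A minimal sketch (the example values are hypothetical):

```python
import numpy as np

def median_diameter_error(pred_mm, true_mm) -> float:
    """Median absolute error (mm) between predicted and measured
    maximum AAA diameters at the next scan moment."""
    return float(np.median(np.abs(np.asarray(pred_mm) - np.asarray(true_mm))))

def repair_eligible(diameter_mm: float, sex: str) -> bool:
    """Elective-repair criterion from the clinical guideline cited above:
    maximum diameter > 55 mm for men, > 50 mm for women."""
    return diameter_mm > (55.0 if sex == "M" else 50.0)

# Hypothetical predictions vs. follow-up measurements (mm)
print(median_diameter_error([48.2, 53.1, 60.4], [49.0, 52.0, 61.1]))
```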