Nurislam Tursynbek, Hastings Greer, Basar Demir, Marc Niethammer

arXiv preprint · Jun 3, 2025
Diffusion models, while trained for image generation, have emerged as powerful foundational feature extractors for downstream tasks. We find that off-the-shelf diffusion models, trained exclusively to generate natural RGB images, can identify semantically meaningful correspondences in medical images. Building on this observation, we propose to leverage diffusion model features as a similarity measure to guide deformable image registration networks. We show that common intensity-based similarity losses often fail in challenging scenarios, such as when certain anatomies are visible in one image but absent in another, leading to anatomically inaccurate alignments. In contrast, our method identifies true semantic correspondences, aligning meaningful structures while disregarding those not present across images. We demonstrate superior performance of our approach on two tasks: multimodal 2D registration (DXA to X-Ray) and monomodal 3D registration (brain-extracted to non-brain-extracted MRI). Code: https://github.com/uncbiag/dgir
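
A minimal sketch of the core idea (not the authors' released implementation, which is at the linked repo): use a frozen, pretrained diffusion-model feature extractor as the similarity term that drives a registration network. The `feat_extractor` callable and tensor shapes below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def diffusion_feature_loss(feat_extractor, fixed, warped_moving):
    """Negative mean cosine similarity between frozen diffusion features.

    `feat_extractor` is assumed to map an image batch (B, C, H, W) to a
    feature map (B, F, H', W'); it stands in for a pretrained diffusion
    model's intermediate activations and is kept frozen.
    """
    with torch.no_grad():
        f_fixed = feat_extractor(fixed)        # fixed-image features, no gradient
    f_moving = feat_extractor(warped_moving)   # gradient flows to the warp, not the extractor weights
    sim = F.cosine_similarity(f_fixed, f_moving, dim=1)  # per-location similarity
    return 1.0 - sim.mean()                    # lower means better alignment
```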

Jaiswal R, Pivodic A, Zoulakis M, Axelsson KF, Litsne H, Johansson L, Lorentzon M

PubMed paper · Jun 3, 2025
The socioeconomic burden of hip fractures, the most severe osteoporotic fracture outcome, is increasing, and current clinical risk assessment lacks sensitivity. This study aimed to develop a method for improved prediction of hip fracture by incorporating measurements of bone microstructure and composition derived from HR-pQCT. In a prospective cohort study of 3028 community-dwelling women aged 75-80, all participants answered questionnaires and underwent baseline examinations of anthropometrics and bone by DXA and HR-pQCT. Medical records, a regional x-ray archive, and registers were used to identify incident fractures and death. Prediction models for hip fracture, major osteoporotic fracture (MOF), and any fracture were developed using Cox proportional hazards regression and machine learning algorithms (neural network, random forest, ensemble, and Extreme Gradient Boosting). In the 2856 (94.3%) women with complete HR-pQCT data at 2 tibia sites (distal and ultra-distal), the median follow-up period was 8.0 yr, and 217 hip fractures, 746 MOFs, and 1008 fractures of any type occurred. In Cox regression models adjusted for age, BMI, clinical risk factors (CRFs), and FN BMD, the strongest predictors of hip fracture were tibia total volumetric BMD and cortical thickness. The performance of the Cox regression-based prediction models for hip fracture was significantly improved by HR-pQCT (time-dependent area under the receiver operating characteristic curve [AUC] at 5 yr of follow-up, 0.75 [0.64-0.85]), compared to a reference model including CRFs and FN BMD (AUC = 0.71 [0.58-0.81], p < .001) and a Fracture Risk Assessment Tool risk score model (AUC = 0.70 [0.60-0.80], p < .001). The Cox regression model for hip fracture had significantly higher accuracy than the neural network-based model, the best-performing machine learning algorithm, at clinically relevant sensitivity levels. We conclude that the addition of HR-pQCT parameters improves the prediction of hip fractures in a cohort of older Swedish women.
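
A rough sketch of the evaluation step described above: fit a Cox proportional hazards model and score it with a time-dependent AUC at 5 years. It assumes scikit-survival and placeholder arrays rather than the study data.

```python
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

def hip_fracture_auc(X_train, X_test, event_train, time_train,
                     event_test, time_test, eval_years=(5.0,)):
    """Fit a Cox model on CRF/BMD/HR-pQCT features and report time-dependent AUC.

    All inputs are hypothetical numpy arrays; feature columns might include
    age, BMI, FN BMD, tibia total vBMD, and cortical thickness.
    """
    y_train = Surv.from_arrays(event=event_train, time=time_train)
    y_test = Surv.from_arrays(event=event_test, time=time_test)
    cox = CoxPHSurvivalAnalysis().fit(X_train, y_train)
    risk = cox.predict(X_test)  # linear predictor used as the risk score
    auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk,
                                           np.asarray(eval_years))
    return auc, mean_auc
```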

Gleichgerrcht E, Kaestner E, Hassanzadeh R, Roth RW, Parashos A, Davis KA, Bagić A, Keller SS, Rüber T, Stoub T, Pardoe HR, Dugan P, Drane DL, Abrol A, Calhoun V, Kuzniecky RI, McDonald CR, Bonilha L

PubMed paper · Jun 3, 2025
Despite decades of advancements in diagnostic MRI, 30%-50% of temporal lobe epilepsy (TLE) patients remain categorized as 'non-lesional' (i.e. MRI negative) based on visual assessment by human experts. MRI-negative patients face diagnostic uncertainty and significant delays in treatment planning. Quantitative MRI studies have demonstrated that MRI-negative patients often exhibit a TLE-specific pattern of temporal and limbic atrophy that might be too subtle for the human eye to detect. This signature pattern could be translated successfully into clinical use via advances in artificial intelligence in computer-aided MRI interpretation, thereby improving the detection of brain 'lesional' patterns associated with TLE. Here, we tested this hypothesis by using a three-dimensional convolutional neural network applied to a dataset of 1178 scans from 12 different centres, which was able to differentiate TLE from healthy controls with high accuracy (85.9% ± 2.8%), significantly outperforming support vector machines based on hippocampal (74.4% ± 2.6%) and whole-brain (78.3% ± 3.3%) volumes. Our analysis focused subsequently on a subset of patients who achieved sustained seizure freedom post-surgery as a gold standard for confirming TLE. Importantly, MRI-negative patients from this cohort were accurately identified as TLE 82.7% ± 0.9% of the time, an encouraging finding given that clinically these were all patients considered to be MRI negative (i.e. not radiographically different from controls). The saliency maps from the convolutional neural network revealed that limbic structures, particularly medial temporal, cingulate and orbitofrontal areas, were most influential in classification, confirming the importance of the well-established TLE signature atrophy pattern for diagnosis. Indeed, the saliency maps were similar in MRI-positive and MRI-negative TLE groups, suggesting that even when humans cannot distinguish more subtle levels of atrophy, these MRI-negative patients are on the same continuum common across all TLE patients. As such, artificial intelligence can identify TLE lesional patterns, and artificial intelligence-aided diagnosis has the potential to enhance the neuroimaging diagnosis of TLE greatly and to redefine the concept of 'lesional' TLE.
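
The paper's exact architecture is not reproduced here; the sketch below only illustrates the kind of three-dimensional convolutional classifier described, with placeholder layer sizes, taking a single-channel T1-weighted volume and producing TLE-vs-control logits.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Illustrative 3D CNN for TLE vs. healthy-control classification."""
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.BatchNorm3d(16),
            nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32),
            nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64),
            nn.ReLU(inplace=True), nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (B, 1, D, H, W) MRI volume
        return self.classifier(self.features(x).flatten(1))
```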

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arXiv preprint · Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: How does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, that achieves state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study of biomedical vision-language modeling and representation learning.
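
The released pipeline itself is not shown here; as a hedged illustration of transformer-based object detection applied to a compound figure, the snippet below runs an off-the-shelf DETR checkpoint to get candidate panel boxes (the paper's detector is instead trained on synthetic compound figures).

```python
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

def detect_subfigure_boxes(image_path, score_threshold=0.7):
    """Return (box, score) pairs; boxes are (xmin, ymin, xmax, ymax) in pixels."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor([image.size[::-1]])   # (height, width)
    result = processor.post_process_object_detection(
        outputs, threshold=score_threshold, target_sizes=target_sizes)[0]
    return list(zip(result["boxes"].tolist(), result["scores"].tolist()))
```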

Yuanpeng He, Lijian Li, Tianxiang Zhan, Chi-Man Pun, Wenpin Jiao, Zhi Jin

arXiv preprint · Jun 3, 2025
Although existing semi-supervised image segmentation methods have achieved good performance, they cannot effectively utilize multiple sources of voxel-level uncertainty for targeted learning. We therefore propose two main improvements. First, we introduce a novel pignistic co-evidential fusion strategy using generalized evidential deep learning, an extension of traditional Dempster-Shafer (D-S) evidence theory, to obtain a more precise uncertainty measure for each voxel in medical samples. This helps the model learn from mixed labeled information and establish semantic associations between labeled and unlabeled data. Second, we introduce the concept of the information volume of the mass function (IVUM) to evaluate the constructed evidence, implementing two evidential learning schemes. One optimizes evidential deep learning by combining the information volume of the mass function with the original uncertainty measures. The other integrates the learning pattern based on the co-evidential fusion strategy, using IVUM to design a new optimization objective. Experiments on four datasets demonstrate the competitive performance of our method.
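
The fusion strategy and IVUM objective are specific to the paper; the sketch below only shows the standard evidential-deep-learning step it builds on, turning per-voxel logits into Dirichlet evidence and a vacuity-style uncertainty map. Shapes and the softplus choice are assumptions.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    """Per-voxel belief masses and uncertainty from segmentation logits.

    logits: (B, K, D, H, W) for K classes. Uncertainty is high where the
    accumulated evidence is low (subjective-logic vacuity).
    """
    evidence = F.softplus(logits)              # non-negative class evidence
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)  # Dirichlet strength S
    belief = evidence / strength               # per-class belief masses
    uncertainty = logits.shape[1] / strength   # u = K / S
    return belief, uncertainty
```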

Haowen Pang, Weiyan Guo, Chuyang Ye

arXiv preprint · Jun 3, 2025
Multi-modal brain magnetic resonance imaging (MRI) plays a crucial role in clinical diagnostics by providing complementary information across different imaging modalities. However, a common challenge in clinical practice is missing MRI modalities. In this paper, we apply SwinUNETR to the synthesis of missing modalities in brain MRI. SwinUNETR is a neural network architecture designed for medical image analysis that integrates the strengths of the Swin Transformer and convolutional neural networks (CNNs). The Swin Transformer, a variant of the Vision Transformer (ViT), incorporates hierarchical feature extraction and window-based self-attention mechanisms, enabling it to capture both local and global contextual information effectively. By combining the Swin Transformer with CNNs, SwinUNETR merges global context awareness with detailed spatial resolution. This hybrid approach addresses the challenges posed by varying modality characteristics and complex brain structures, facilitating the generation of accurate and realistic synthetic images. We evaluate the performance of SwinUNETR on brain MRI datasets and demonstrate its superior capability in generating clinically valuable images. Our results show significant improvements in image quality, anatomical consistency, and diagnostic value.
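
A minimal sketch of modality synthesis with MONAI's SwinUNETR; channel counts, patch size, and loss are placeholders, and depending on the MONAI release the `img_size` argument may be required, deprecated, or removed.

```python
import torch
from monai.networks.nets import SwinUNETR

# Map three available contrasts (e.g. T1, T2, FLAIR) to one missing contrast.
model = SwinUNETR(
    img_size=(96, 96, 96),  # training patch size; see the version note above
    in_channels=3,
    out_channels=1,
    feature_size=48,
)

x = torch.randn(1, 3, 96, 96, 96)          # one multi-contrast input patch
synthetic = model(x)                       # (1, 1, 96, 96, 96) synthesized patch
target = torch.randn_like(synthetic)       # placeholder for the real missing modality
loss = torch.nn.functional.l1_loss(synthetic, target)
```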

Günaydın T, Varlı S

PubMed paper · Jun 3, 2025
The incidence of Alzheimer's disease is rising with the increasing elderly population worldwide. While no cure exists, early diagnosis can significantly slow disease progression. Computer-aided diagnostic systems are becoming critical tools for assisting in the early detection of Alzheimer's disease. In this systematic review, we aim to evaluate recent advancements in computer-aided decision support systems for Alzheimer's disease diagnosis, focusing on data modalities, machine learning methods, and performance metrics. We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies published between 2021 and 2024 were retrieved from PubMed, IEEE Xplore, and Web of Science using search terms related to Alzheimer's disease classification, neuroimaging, machine learning, and diagnostic performance. A total of 39 studies met the inclusion criteria, focusing on the use of Magnetic Resonance Imaging, Positron Emission Tomography, and biomarkers for Alzheimer's disease classification using machine learning models. Multimodal approaches, combining Magnetic Resonance Imaging with Positron Emission Tomography and cognitive assessments, outperformed single-modality studies in diagnostic accuracy and reliability. Convolutional Neural Networks were the most commonly used machine learning models, followed by hybrid models and Random Forest. The highest accuracy reported for binary classification was 100%, while multi-class classification achieved up to 99.98%. Techniques like the Synthetic Minority Over-sampling Technique and data augmentation were frequently employed to address data imbalance, improving model generalizability. Our review highlights the advantages of using multimodal data in computer-aided decision support systems for more accurate Alzheimer's disease diagnosis. However, we also identified several limitations, including data imbalance, small sample sizes, and the lack of external validation in most studies. Future research should utilize larger, more diverse datasets, incorporate longitudinal data, and validate models in real-world clinical trials. Additionally, there is a growing need for explainability in machine learning models to ensure they are interpretable and trusted in clinical settings. While computer-aided decision support systems show great promise in improving the early diagnosis of Alzheimer's disease, further work is needed to enhance their robustness, generalizability, and clinical applicability. By addressing these challenges, computer-aided decision support systems could play a pivotal role in the early detection and management of Alzheimer's disease, potentially improving patient outcomes and reducing healthcare costs.
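
As a small illustration of the imbalance handling the review highlights, the snippet below applies SMOTE before training a random forest; the arrays are placeholders, not data from any reviewed study.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_with_smote(X, y):
    """Rebalance the training split with SMOTE, then fit and score a classifier."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    print("class counts after SMOTE:", Counter(y_res))
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_res, y_res)
    return clf.score(X_te, y_te)
```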

Blackman J, Giles JW

PubMed paper · Jun 3, 2025
Statistical shape models are machine learning tools in computational orthopedics that enable the study of anatomical variability and the creation of synthetic models for pathogenetic analysis and surgical planning. Current models of the glenohumeral joint either describe individual bones or are limited to non-pathologic datasets, failing to capture coupled shape variation in arthropathic anatomy. We aimed to develop a novel combined scapula-proximal-humerus model applicable to clinical populations. Preoperative computed tomography scans from 45 reverse total shoulder arthroplasty patients were used to generate three-dimensional models of the scapula and proximal humerus. Correspondence point clouds were combined into a two-body shape model using principal component analysis. Individual scapula-only and proximal-humerus-only shape models were also created for comparison. The models were validated using compactness, specificity, generalization ability, and leave-one-out cross-validation. The modes of variation for each model were also compared. The combined model was described using eigenvector decomposition into single-body models. The models were further compared in their ability to predict the shape of one body given the shape of its counterpart and to generate diverse, realistic synthetic pairs de novo. The scapula and proximal-humerus models performed comparably to previous studies, with median average leave-one-out cross-validation errors of 1.08 mm (IQR: 0.359 mm) and 0.521 mm (IQR: 0.111 mm); the combined model was similar, with a median error of 1.13 mm (IQR: 0.239 mm). The combined model described coupled variations between the shapes equalling 43.2% of their individual variabilities, including the relationship between glenoid and humeral head erosions. The combined model outperformed the individual models generatively, with reduced missing-shape prediction bias (> 10%) and uniformly diverse shape plausibility (uniformity p-value < .001 vs. .59). This study developed the first two-body scapulohumeral shape model that captures coupled variations in arthropathic shoulder anatomy and the first proximal-humeral statistical model constructed using a clinical dataset. While single-body models are effective for descriptive tasks, combined models excel in generating joint-level anatomy. This model can be used to augment computational analyses of synthetic populations investigating shoulder biomechanics and surgical planning.
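
A compact sketch of the two-body construction, assuming per-subject correspondence point clouds are already available: stack scapula and proximal-humerus points into one vector per subject and fit a joint PCA, so each retained mode encodes coupled variation of both bones.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_combined_ssm(scapula_pts, humerus_pts, n_modes=10):
    """Joint PCA shape model; inputs are (n_subjects, n_points, 3) arrays."""
    n = scapula_pts.shape[0]
    combined = np.concatenate(
        [scapula_pts.reshape(n, -1), humerus_pts.reshape(n, -1)], axis=1)
    pca = PCA(n_components=n_modes)
    scores = pca.fit_transform(combined)       # per-subject mode weights
    return pca, scores

def synthesize_pair(pca, weights):
    """Generate a coupled scapula-humerus shape from a vector of mode weights."""
    flat = pca.mean_ + weights @ pca.components_[: len(weights)]
    return flat.reshape(-1, 3)                 # back to (x, y, z) points
```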

Gao W, Zhang K, Jiao Q, Su L, Cui D, Lu S, Yang R

PubMed paper · Jun 3, 2025
The thalamus plays a crucial role in sensory processing, emotional regulation, and cognitive functions, and its dysregulation may be implicated in psychosis. The aim of the present study was to examine differences in thalamic subregional volumes between pediatric bipolar disorder patients with (P-PBD) and without psychotic symptoms (NP-PBD). Participants, including 28 P-PBD, 26 NP-PBD, and 18 healthy controls (HCs), underwent structural magnetic resonance imaging (sMRI) on a 3.0T MRI scanner. All T1-weighted imaging data were processed with FreeSurfer 7.4.0. Volumetric differences of thalamic subregions among the three groups were compared using analysis of covariance (ANCOVA) and post hoc analyses. Additionally, we applied a standard support vector classification (SVC) model for pairwise comparisons among the three groups to identify brain regions with significant volumetric differences. The ANCOVA revealed significant volumetric differences in the left pulvinar anterior (L_PuA) and left reuniens medial ventral (L_MV-re) thalamus among the three groups. Post hoc analysis revealed that patients with P-PBD exhibited decreased volumes in the L_PuA and L_MV-re when compared to the NP-PBD group and HCs, respectively. Furthermore, the SVC model revealed that the L_MV-re volume exhibited the best capacity to discriminate P-PBD from NP-PBD and HCs. The present findings demonstrate that reduced thalamic subregional volumes in the L_PuA and L_MV-re might be associated with psychotic symptoms in PBD.
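
A hedged sketch of the pairwise support vector classification step, with thalamic subregional volumes as features; the arrays and hyperparameters are placeholders rather than the study's configuration.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def classify_groups(X, y):
    """Cross-validated accuracy for, e.g., P-PBD vs. NP-PBD from subregion volumes."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    return scores.mean(), scores.std()
```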

Su HZ, Yang DH, Hong LC, Wu YH, Yu K, Zhang ZB, Zhang XD

PubMed paper · Jun 3, 2025
Accurate preoperative discrimination of salivary gland pleomorphic adenoma (SPA) stromal subtypes is essential for therapeutic planning. We aimed to establish and test machine learning (ML) models for classification of stromal subtypes in SPA based on ultrasound histogram analysis. A total of 256 SPA patients were enrolled in the study and categorized into two groups: stroma-low and stroma-high. The dataset was split into a training cohort of 177 patients and a validation cohort of 79 patients. Least absolute shrinkage and selection operator (LASSO) regression identified the optimal features, which were then used to build predictive models with logistic regression (LR) and eight ML algorithms. The effectiveness of the models was evaluated using a range of performance metrics, with a particular focus on the area under the receiver operating characteristic curve (AUC). After LASSO regression, six key features (lesion size, shape, cystic areas, vascularity, mean, and skewness) were selected to develop the predictive models. The AUCs ranged from 0.575 to 0.827 across the nine models. The support vector machine (SVM) algorithm achieved the highest performance, with an AUC of 0.827, an accuracy of 0.798, a precision of 0.792, a recall of 0.862, and an F1 score of 0.826. The LR algorithm also exhibited robust performance, achieving an AUC of 0.818, slightly behind the SVM algorithm. Decision curve analysis indicated that the SVM-based model provided superior clinical utility compared to the other models. The ML model based on ultrasound histogram analysis offers a precise and non-invasive approach for preoperative categorization of stromal subtypes in SPA.
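
A rough sketch of the select-then-classify workflow described above, substituting an L1-penalised logistic regression for the LASSO selection step and scoring an SVM by AUC; the data arrays and hyperparameters are illustrative only.

```python
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def lasso_svm_auc(X_train, y_train, X_test, y_test):
    """L1-based feature selection followed by an RBF SVM, evaluated by AUC."""
    model = make_pipeline(
        StandardScaler(),
        SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
        SVC(kernel="rbf", probability=True),
    )
    model.fit(X_train, y_train)
    probs = model.predict_proba(X_test)[:, 1]  # probability of the stroma-high class
    return roc_auc_score(y_test, probs)
```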