
Abed S, Hergan K, Pfaff J, Dörrenberg J, Brandstetter L, Gradl J

PubMed · Jun 3, 2025
The objective of this study was to assess the performance of an artificial intelligence (AI) algorithm in detecting intracranial haemorrhages (ICHs) on non-contrast CT (NCCT) scans, and to gauge the department's acceptance of the algorithm. Surveys conducted at three and nine months post-implementation revealed growing acceptance of the AI tool among radiologists as its performance improved. However, a significant proportion still preferred an additional physician at comparable cost. Our findings emphasize the importance of careful software implementation within a robust IT architecture.

Wang D, Honnorat N, Toledo JB, Li K, Charisis S, Rashid T, Benet Nirmala A, Brandigampala SR, Mojtabai M, Seshadri S, Habes M

PubMed · Jun 3, 2025
Concurrent neurodegenerative and vascular pathologies pose a diagnostic challenge in the clinical setting, with histopathology remaining the definitive modality for dementia-type diagnosis. To address this clinical challenge, we introduce a neuropathology-based, data-driven, multi-label deep-learning framework to identify and quantify in vivo biomarkers for Alzheimer's disease (AD), vascular dementia (VD) and Lewy body dementia (LBD) using antemortem T1-weighted MRI scans of 423 demented and 361 control participants from the National Alzheimer's Coordinating Center and Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets. Based on the best-performing deep-learning model, explainable heat maps were extracted to visualize disease patterns, and the novel Deep Signature of Pathology Atrophy REcognition (DeepSPARE) indices were developed, where a higher DeepSPARE score indicates more brain alterations associated with that specific pathology. A substantial discrepancy between clinical and neuropathological diagnoses was observed in the demented patients: 71% had more than one pathology, but 67% were diagnosed clinically as AD only. Based on these neuropathological diagnoses and leveraging cross-validation principles, the deep-learning model achieved balanced accuracies of 0.844, 0.839 and 0.623 for AD, VD and LBD, respectively, and was used to generate the explainable deep-learning heat maps and DeepSPARE indices. The heat maps revealed distinct neuroimaging brain alteration patterns for each pathology: (i) the AD heat map highlighted bilateral hippocampal regions; (ii) the VD heat map emphasized white matter regions; and (iii) the LBD heat map exposed occipital alterations. The DeepSPARE indices were validated by examining their associations with cognitive testing and neuropathological and neuroimaging measures using linear mixed-effects models. The DeepSPARE-AD index was associated with the Mini-Mental State Examination, the Trail Making Test B, memory, hippocampal volume, Braak stages, Consortium to Establish a Registry for Alzheimer's Disease (CERAD) scores and Thal phases [false-discovery rate (FDR)-adjusted P < 0.05]. The DeepSPARE-VD index was associated with white matter hyperintensity volume and cerebral amyloid angiopathy (FDR-adjusted P < 0.001), and the DeepSPARE-LBD index was associated with Lewy body stages (FDR-adjusted P < 0.05). The findings were replicated in an out-of-sample ADNI dataset by testing associations with cognitive, imaging, plasma and CSF measures. CSF and plasma tau phosphorylated at threonine-181 (pTau181) were significantly associated with DeepSPARE-AD in the AD and mild cognitive impairment amyloid-β-positive (AD/MCI Aβ+) group (FDR-adjusted P < 0.001), and CSF α-synuclein was associated solely with DeepSPARE-LBD (FDR-adjusted P = 0.036). Overall, these findings demonstrate the advantages of our deep-learning framework in detecting antemortem neuroimaging signatures linked to different pathologies. The newly derived DeepSPARE indices are precise, pathology-sensitive, single-valued, non-invasive neuroimaging metrics, bridging traditional, widely available in vivo T1 imaging with histopathology.
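The index-validation step above (linear mixed-effects models with FDR correction) is a standard recipe. Below is a minimal sketch of how it might look in Python with statsmodels; the column names (subject_id, age, sex) and the random-intercept structure are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: associating a DeepSPARE-style index with cognitive and
# pathology outcomes via linear mixed-effects models, with Benjamini-Hochberg
# FDR adjustment across outcomes. Column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def test_associations(df: pd.DataFrame, index_col: str, outcomes: list):
    """Fit one mixed model per outcome (random intercept per subject) and
    FDR-correct the p-values of the index term across all outcomes."""
    pvals, coefs = [], []
    for outcome in outcomes:
        # Adjust for age and sex; the random intercept handles repeated visits.
        model = smf.mixedlm(f"{outcome} ~ {index_col} + age + sex",
                            data=df, groups=df["subject_id"])
        fit = model.fit(reml=True)
        coefs.append(fit.params[index_col])
        pvals.append(fit.pvalues[index_col])
    rejected, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return pd.DataFrame({"outcome": outcomes, "beta": coefs,
                         "p_fdr": p_fdr, "significant": rejected})
```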

Agosta F, Basaia S, Spinelli EG, Facente F, Lumaca L, Ghirelli A, Canu E, Castelnovo V, Sibilla E, Tripodi C, Freri F, Cecchetti G, Magnani G, Caso F, Verde F, Ticozzi N, Silani V, Caroppo P, Prioni S, Villa C, Tremolizzo L, Appollonio I, Raj A, Filippi M

PubMed · Jun 3, 2025
The ability to predict the spreading of pathology in patients with frontotemporal dementia (FTD) is crucial for early diagnosis and targeted interventions. In this study, we examined the relationship between network vulnerability and the longitudinal progression of atrophy in FTD patients, using the network diffusion model (NDM) of the spread of pathology. Thirty behavioural variant FTD (bvFTD), 13 semantic variant primary progressive aphasia (svPPA), 14 non-fluent variant primary progressive aphasia (nfvPPA) and 12 semantic behavioural variant FTD (sbvFTD) patients underwent longitudinal T1-weighted MRI. Fifty young controls (20-31 years of age) underwent multi-shell diffusion MRI. An NDM was developed to model the progression of FTD pathology as a spreading process from a seed through the healthy structural connectome, using connectivity measures from fractional anisotropy and intracellular volume fraction in young controls. Four disease epicentres were initially identified from the peaks of atrophy of each FTD variant: left insula (bvFTD), left temporal pole (svPPA), right temporal pole (sbvFTD) and left supplementary motor area (nfvPPA). Pearson's correlations were calculated between NDM-predicted atrophy in young controls and the observed longitudinal atrophy in FTD patients over a follow-up period of 24 months. The NDM was then run for all 220 brain seeds to verify whether the four epicentres were among those that yielded the highest correlations. Using the NDM, predictive maps in young controls showed progression of pathology from the peaks of atrophy in svPPA, nfvPPA and sbvFTD over 24 months. svPPA exhibited early involvement of the left temporal and occipital lobes, progressing to extensive left-hemisphere impairment. nfvPPA and sbvFTD spread in a similar manner bilaterally to frontal, sensorimotor and temporal regions, with sbvFTD additionally affecting the right hemisphere. Moreover, the NDM-predicted atrophy of each region was positively correlated with the observed longitudinal atrophy, with a greater effect in svPPA and sbvFTD. In bvFTD, the model starting from the left insula (the peak of atrophy) demonstrated a highly left-lateralized pattern, with pathology spreading to frontal, sensorimotor, temporal and basal ganglia regions, with minimal extension to the contralateral hemisphere by 24 months. However, unlike the atrophy peaks observed in the other three phenotypes, the left insula did not show the strongest correlation between estimated and real atrophy. Instead, the bilateral superior frontal gyri emerged as the optimal seeds for modelling atrophy spread, showing the highest correlation ranking in both hemispheres. Overall, the NDM applied to the intracellular volume fraction connectome yielded higher correlations than the NDM applied to fractional anisotropy maps. The NDM implementation using the cross-sectional structural connectome is a valuable tool to predict patterns of atrophy and spreading of pathology in FTD clinical variants.
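The NDM has a compact closed form in its classic formulation: pathology seeded at one region diffuses through the connectome as x(t) = exp(-beta * H * t) x0, where H is the graph Laplacian. A minimal sketch of that formulation, together with the 220-seed ranking procedure, follows; parameter values are illustrative.

```python
# Minimal sketch of a network diffusion model (NDM) of pathology spread,
# following the classic formulation x(t) = exp(-beta * H * t) @ x0, where H is
# the graph Laplacian of a structural connectome. Values are illustrative; the
# study builds connectomes from FA and intracellular volume fraction.
import numpy as np
from scipy.linalg import expm
from scipy.stats import pearsonr

def ndm_predicted_atrophy(connectome, seed_idx, beta=1.0, t=1.0):
    """Diffuse pathology from a single seed region through the connectome."""
    degree = connectome.sum(axis=1)
    laplacian = np.diag(degree) - connectome      # graph Laplacian H
    x0 = np.zeros(connectome.shape[0])
    x0[seed_idx] = 1.0                            # unit pathology at the seed
    return expm(-beta * laplacian * t) @ x0

def rank_seeds(connectome, observed_atrophy, beta=1.0, t=1.0):
    """Rank every candidate seed by the correlation between NDM-predicted
    and observed longitudinal atrophy (as done for the 220 brain seeds)."""
    scores = []
    for seed in range(connectome.shape[0]):
        pred = ndm_predicted_atrophy(connectome, seed, beta, t)
        r, _ = pearsonr(pred, observed_atrophy)
        scores.append((seed, r))
    return sorted(scores, key=lambda s: s[1], reverse=True)
```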

Nurislam Tursynbek, Hastings Greer, Basar Demir, Marc Niethammer

arXiv preprint · Jun 3, 2025
Diffusion models, while trained for image generation, have emerged as powerful foundational feature extractors for downstream tasks. We find that off-the-shelf diffusion models, trained exclusively to generate natural RGB images, can identify semantically meaningful correspondences in medical images. Building on this observation, we propose to leverage diffusion model features as a similarity measure to guide deformable image registration networks. We show that common intensity-based similarity losses often fail in challenging scenarios, such as when certain anatomies are visible in one image but absent in another, leading to anatomically inaccurate alignments. In contrast, our method identifies true semantic correspondences, aligning meaningful structures while disregarding those not present across images. We demonstrate superior performance of our approach on two tasks: multimodal 2D registration (DXA to X-Ray) and monomodal 3D registration (brain-extracted to non-brain-extracted MRI). Code: https://github.com/uncbiag/dgir
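The core idea, a frozen feature extractor defining the registration similarity loss, can be sketched roughly as below. This illustrates the general pattern rather than the authors' implementation (see the linked repository for that); the feature_extractor interface is an assumption.

```python
# Sketch of a feature-based similarity loss for registration, assuming
# `feature_extractor` is a frozen network (e.g. a diffusion model's
# intermediate activations) mapping an image to a (B, C, H, W) feature map.
import torch
import torch.nn.functional as F

def feature_similarity_loss(feature_extractor, warped_moving, fixed):
    """Negative mean cosine similarity between per-pixel feature vectors."""
    with torch.no_grad():                        # fixed image needs no gradients
        f_fixed = feature_extractor(fixed)
    f_moving = feature_extractor(warped_moving)  # gradients flow to the warp
    # Cosine similarity along the channel dimension, averaged over space.
    sim = F.cosine_similarity(f_moving, f_fixed, dim=1)
    return -sim.mean()
```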

Jaiswal R, Pivodic A, Zoulakis M, Axelsson KF, Litsne H, Johansson L, Lorentzon M

PubMed · Jun 3, 2025
The socioeconomic burden of hip fractures, the most severe osteoporotic fracture outcome, is increasing, and current clinical risk assessment lacks sensitivity. This study aimed to develop a method for improved prediction of hip fracture by incorporating measurements of bone microstructure and composition derived from high-resolution peripheral quantitative CT (HR-pQCT). In a prospective cohort study of 3028 community-dwelling women aged 75-80 yr, all participants answered questionnaires and underwent baseline examinations of anthropometrics and bone by DXA and HR-pQCT. Medical records, a regional x-ray archive, and registers were used to identify incident fractures and deaths. Prediction models for hip fracture, major osteoporotic fracture (MOF), and any fracture were developed using Cox proportional hazards regression and machine learning algorithms (neural network, random forest, ensemble, and Extreme Gradient Boosting). In the 2856 (94.3%) women with complete HR-pQCT data at 2 tibia sites (distal and ultra-distal), the median follow-up period was 8.0 yr, during which 217 hip fractures, 746 MOFs, and 1008 incident fractures of any type occurred. In Cox regression models adjusted for age, BMI, clinical risk factors (CRFs), and femoral neck (FN) BMD, the strongest predictors of hip fracture were tibia total volumetric BMD and cortical thickness. The performance of the Cox regression-based prediction model for hip fracture was significantly improved by HR-pQCT (time-dependent area under the receiver operating characteristic curve [AUC] at 5 yr of follow-up: 0.75 [0.64-0.85]) compared with a reference model including CRFs and FN BMD (AUC = 0.71 [0.58-0.81], p < .001) and a Fracture Risk Assessment Tool risk score model (AUC = 0.70 [0.60-0.80], p < .001). The Cox regression model for hip fracture also had significantly higher accuracy than the neural network-based model, the best-performing machine learning algorithm, at clinically relevant sensitivity levels. We conclude that the addition of HR-pQCT parameters improves the prediction of hip fractures in a cohort of older Swedish women.
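For reference, the evaluation pattern described here, a Cox model scored by the time-dependent AUC at 5 years, might look roughly as follows using scikit-survival; the feature matrices and event/time arrays are placeholders.

```python
# Hedged sketch of the evaluation approach: a Cox model on clinical risk
# factors plus HR-pQCT parameters, scored with the time-dependent AUC at
# 5 years. Feature columns are illustrative; requires scikit-survival.
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

def hip_fracture_auc(X_train, X_test, event_train, time_train,
                     event_test, time_test, eval_years=5.0):
    y_train = Surv.from_arrays(event=event_train, time=time_train)
    y_test = Surv.from_arrays(event=event_test, time=time_test)
    cox = CoxPHSurvivalAnalysis().fit(X_train, y_train)
    risk = cox.predict(X_test)                  # relative risk scores
    auc, _ = cumulative_dynamic_auc(y_train, y_test, risk,
                                    times=np.array([eval_years]))
    return auc[0]                               # AUC at 5 yr of follow-up
```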

Gleichgerrcht E, Kaestner E, Hassanzadeh R, Roth RW, Parashos A, Davis KA, Bagić A, Keller SS, Rüber T, Stoub T, Pardoe HR, Dugan P, Drane DL, Abrol A, Calhoun V, Kuzniecky RI, McDonald CR, Bonilha L

PubMed · Jun 3, 2025
Despite decades of advancements in diagnostic MRI, 30%-50% of temporal lobe epilepsy (TLE) patients remain categorized as 'non-lesional' (i.e. MRI negative) based on visual assessment by human experts. MRI-negative patients face diagnostic uncertainty and significant delays in treatment planning. Quantitative MRI studies have demonstrated that MRI-negative patients often exhibit a TLE-specific pattern of temporal and limbic atrophy that might be too subtle for the human eye to detect. This signature pattern could be translated successfully into clinical use via advances in artificial intelligence in computer-aided MRI interpretation, thereby improving the detection of brain 'lesional' patterns associated with TLE. Here, we tested this hypothesis by using a three-dimensional convolutional neural network applied to a dataset of 1178 scans from 12 different centres, which was able to differentiate TLE from healthy controls with high accuracy (85.9% ± 2.8%), significantly outperforming support vector machines based on hippocampal (74.4% ± 2.6%) and whole-brain (78.3% ± 3.3%) volumes. Our analysis focused subsequently on a subset of patients who achieved sustained seizure freedom post-surgery as a gold standard for confirming TLE. Importantly, MRI-negative patients from this cohort were accurately identified as TLE 82.7% ± 0.9% of the time, an encouraging finding given that clinically these were all patients considered to be MRI negative (i.e. not radiographically different from controls). The saliency maps from the convolutional neural network revealed that limbic structures, particularly medial temporal, cingulate and orbitofrontal areas, were most influential in classification, confirming the importance of the well-established TLE signature atrophy pattern for diagnosis. Indeed, the saliency maps were similar in MRI-positive and MRI-negative TLE groups, suggesting that even when humans cannot distinguish more subtle levels of atrophy, these MRI-negative patients are on the same continuum common across all TLE patients. As such, artificial intelligence can identify TLE lesional patterns, and artificial intelligence-aided diagnosis has the potential to enhance the neuroimaging diagnosis of TLE greatly and to redefine the concept of 'lesional' TLE.
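As a simplified picture of the approach, a small 3D convolutional classifier over T1 volumes might look like the sketch below; this is an illustrative architecture, not the authors' network.

```python
# Illustrative sketch (not the authors' architecture) of a 3D convolutional
# classifier for whole-brain MRI volumes, distinguishing TLE from controls.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
                nn.MaxPool3d(2))
        self.features = nn.Sequential(block(1, 16), block(16, 32),
                                      block(32, 64), block(64, 128))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(128, n_classes))

    def forward(self, x):                       # x: (B, 1, D, H, W)
        return self.head(self.features(x))

# Saliency maps like those in the study can be approximated by input
# gradients: backpropagate the class score and inspect x.grad.abs().
```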

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arXiv preprint · Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: how does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures and achieving state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study of biomedical vision-language modeling and representation learning.
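A hedged sketch of the detection step, using an off-the-shelf DETR checkpoint as a stand-in for the paper's purpose-trained detector:

```python
# Sketch of transformer-based subfigure detection. A generic pretrained DETR
# model is used here purely for illustration; the paper trains its own
# detector on 500,000 synthetic compound figures.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

def detect_subfigures(image: Image.Image, threshold: float = 0.7):
    """Return crops of detected panels in a compound figure."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs, threshold=threshold, target_sizes=target_sizes)[0]
    # Each box is (x_min, y_min, x_max, y_max) in pixel coordinates.
    return [image.crop(box.tolist()) for box in results["boxes"]]
```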

Yuanpeng He, Lijian Li, Tianxiang Zhan, Chi-Man Pun, Wenpin Jiao, Zhi Jin

arXiv preprint · Jun 3, 2025
Although existing semi-supervised image segmentation methods have achieved good performance, they cannot effectively utilize multiple sources of voxel-level uncertainty for targeted learning. We therefore propose two main improvements. First, we introduce a novel pignistic co-evidential fusion strategy using generalized evidential deep learning, extended by traditional Dempster-Shafer (D-S) evidence theory, to obtain a more precise uncertainty measure for each voxel in medical samples. This assists the model in learning mixed labeled information and establishing semantic associations between labeled and unlabeled data. Second, we introduce the concept of the information volume of the mass function (IVUM) to evaluate the constructed evidence, implementing two evidential learning schemes. One optimizes evidential deep learning by combining the information volume of the mass function with the original uncertainty measures. The other integrates the learning pattern based on the co-evidential fusion strategy, using IVUM to design a new optimization objective. Experiments on four datasets demonstrate the competitive performance of our method.
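For orientation, the standard subjective-logic construction behind evidential deep learning (non-negative evidence mapped to Dirichlet parameters, with vacuity K/S as the voxel-wise uncertainty) can be sketched as below; this shows the underlying idea, not the paper's pignistic co-evidential fusion or IVUM objective.

```python
# Minimal sketch of voxel-wise evidential uncertainty in the subjective-logic
# formulation commonly used in evidential deep learning: non-negative evidence
# yields Dirichlet parameters alpha, and vacuity u = K / sum(alpha).
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits: torch.Tensor):
    """logits: (B, K, D, H, W) -> per-voxel belief masses and uncertainty."""
    evidence = F.softplus(logits)            # non-negative evidence per class
    alpha = evidence + 1.0                   # Dirichlet concentration params
    strength = alpha.sum(dim=1, keepdim=True)
    belief = evidence / strength             # belief mass per class
    num_classes = logits.shape[1]
    uncertainty = num_classes / strength     # vacuity: high = uncertain voxel
    return belief, uncertainty
```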

Haowen Pang, Weiyan Guo, Chuyang Ye

arXiv preprint · Jun 3, 2025
Multi-modal brain magnetic resonance imaging (MRI) plays a crucial role in clinical diagnostics by providing complementary information across different imaging modalities. However, a common challenge in clinical practice is missing MRI modalities. In this paper, we apply SwinUNETR to the synthesis of missing modalities in brain MRI. SwinUNETR is a neural network architecture designed for medical image analysis that integrates the strengths of the Swin Transformer and convolutional neural networks (CNNs). The Swin Transformer, a variant of the Vision Transformer (ViT), incorporates hierarchical feature extraction and window-based self-attention mechanisms, enabling it to capture both local and global contextual information effectively. By combining the Swin Transformer with CNNs, SwinUNETR merges global context awareness with detailed spatial resolution. This hybrid approach addresses the challenges posed by varying modality characteristics and complex brain structures, facilitating the generation of accurate and realistic synthetic images. We evaluate the performance of SwinUNETR on brain MRI datasets and demonstrate its superior capability in generating clinically valuable images. Our results show significant improvements in image quality, anatomical consistency, and diagnostic value.
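A minimal training-step sketch with MONAI's SwinUNETR, assuming three input contrasts, one synthesized output contrast, and an L1 reconstruction loss (all assumptions made for illustration):

```python
# Hedged sketch of modality synthesis with MONAI's SwinUNETR: three available
# MRI contrasts in, one missing contrast out. Channel counts and the loss
# are illustrative assumptions, not the paper's exact setup.
import torch
from monai.networks.nets import SwinUNETR

model = SwinUNETR(
    img_size=(96, 96, 96),   # training patch size (ignored by newer MONAI releases)
    in_channels=3,           # e.g. T1, T2, FLAIR available
    out_channels=1,          # e.g. one missing contrast to synthesize
    feature_size=48,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
l1 = torch.nn.L1Loss()

def train_step(available: torch.Tensor, missing: torch.Tensor) -> float:
    """available: (B, 3, 96, 96, 96); missing: (B, 1, 96, 96, 96)."""
    optimizer.zero_grad()
    synthetic = model(available)
    loss = l1(synthetic, missing)
    loss.backward()
    optimizer.step()
    return loss.item()
```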

Günaydın T, Varlı S

PubMed · Jun 3, 2025
The incidence of Alzheimer's disease is rising with the increasing elderly population worldwide. While no cure exists, early diagnosis can significantly slow disease progression. Computer-aided diagnostic systems are becoming critical tools for assisting in the early detection of Alzheimer's disease. In this systematic review, we evaluate recent advancements in computer-aided decision support systems for Alzheimer's disease diagnosis, focusing on data modalities, machine learning methods, and performance metrics. We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies published between 2021 and 2024 were retrieved from PubMed, IEEE Xplore, and Web of Science, using search terms related to Alzheimer's disease classification, neuroimaging, machine learning, and diagnostic performance. A total of 39 studies met the inclusion criteria, focusing on the use of magnetic resonance imaging, positron emission tomography, and biomarkers for Alzheimer's disease classification using machine learning models. Multimodal approaches, combining magnetic resonance imaging with positron emission tomography and cognitive assessments, outperformed single-modality studies in diagnostic accuracy and reliability. Convolutional neural networks were the most commonly used machine learning models, followed by hybrid models and random forests. The highest accuracy reported for binary classification was 100%, while multi-class classification achieved up to 99.98%. Techniques like the Synthetic Minority Over-sampling Technique (SMOTE) and data augmentation were frequently employed to address data imbalance, improving model generalizability. Our review highlights the advantages of using multimodal data in computer-aided decision support systems for more accurate Alzheimer's disease diagnosis. However, we also identified several limitations, including data imbalance, small sample sizes, and the lack of external validation in most studies. Future research should utilize larger, more diverse datasets, incorporate longitudinal data, and validate models in real-world clinical trials. Additionally, there is a growing need for explainability in machine learning models to ensure they are interpretable and trusted in clinical settings. While computer-aided decision support systems show great promise in improving the early diagnosis of Alzheimer's disease, further work is needed to enhance their robustness, generalizability, and clinical applicability. By addressing these challenges, computer-aided decision support systems could play a pivotal role in the early detection and management of Alzheimer's disease, potentially improving patient outcomes and reducing healthcare costs.
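As an illustration of the imbalance handling the review highlights, a typical SMOTE workflow with imbalanced-learn looks roughly like this; note that resampling is applied only to the training split to avoid leakage:

```python
# Sketch of SMOTE oversampling before training a classifier. Feature matrix X
# and labels y are placeholders; requires imbalanced-learn and scikit-learn.
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_with_smote(X, y, seed: int = 0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    # Resample only the training split so synthetic samples never leak
    # into evaluation.
    X_bal, y_bal = SMOTE(random_state=seed).fit_resample(X_tr, y_tr)
    clf = RandomForestClassifier(random_state=seed).fit(X_bal, y_bal)
    return clf, clf.score(X_te, y_te)
```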