
A novel multimodal medical image fusion model for Alzheimer's and glioma disease detection based on hybrid fusion strategies in non-subsampled shearlet transform domain.

Alabduljabbar A, Khan SU, Altherwy YN, Almarshad F, Alsuhaibani A

PubMed · Jul 27 2025
Background: Medical professionals can increase diagnostic accuracy by using multimodal medical image fusion techniques to peer inside organs and tissues.
Objective: This work proposes a solution for diverse medical diagnostic challenges.
Methods: We propose a dual-purpose model. First, we generate a pair of images using the intensity, hue, and saturation (IHS) approach. Next, we apply non-subsampled shearlet transform (NSST) decomposition to these images to obtain the low-frequency and high-frequency coefficients. We then enhance the structure and background details of the low-frequency coefficients using a novel structure feature modification technique. For the high-frequency coefficients, we use a layer-weighted pulse-coupled neural network fusion technique to acquire complementary pixel-level information. Finally, we apply the inverse NSST and IHS transforms to generate the fused image.
Results: The proposed approach was verified on 1350 image sets from two diseases, Alzheimer's and glioma, across numerous imaging modalities. It outperforms existing state-of-the-art models in both qualitative and quantitative evaluations and provides valuable information for medical diagnosis. In the majority of cases, the proposed method performed well in terms of entropy, structural similarity index, standard deviation, average distance, and average pixel intensity, owing to the careful selection of fusion strategies in our model. However, in a few cases, NSSTSIPCA performs better in terms of intensity-variation metrics (mean absolute error and average distance).
Conclusions: This work combines several fusion strategies in the NSST domain to efficiently enhance structural, anatomical, and spectral information.
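To illustrate the low-/high-frequency fusion idea, here is a minimal sketch that uses a discrete wavelet transform from PyWavelets as a stand-in for NSST (which has no widely available Python implementation), with a simple average for the low-frequency band and a max-absolute rule for the high-frequency bands; the paper's actual rules (structure feature modification and a layer-weighted PCNN) are more elaborate.

```python
import numpy as np
import pywt

def fuse_images(img_a, img_b, wavelet="db2", level=2):
    """Toy fusion: average the low-frequency band, keep the
    max-absolute coefficient in each high-frequency band."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = [(ca[0] + cb[0]) / 2.0]  # low frequency: simple average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)

# stand-ins for two co-registered slices from different modalities
a, b = np.random.rand(128, 128), np.random.rand(128, 128)
fused = fuse_images(a, b)
```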

Deep Diffusion MRI Template (DDTemplate): A Novel Deep Learning Groupwise Diffusion MRI Registration Method for Brain Template Creation.

Wang J, Zhu X, Zhang W, Du M, Wells WM, O'Donnell LJ, Zhang F

PubMed · Jul 26 2025
Diffusion MRI (dMRI) is an advanced imaging technique that enables in-vivo tracking of white matter fiber tracts and estimates the underlying cellular microstructure of brain tissues. Groupwise registration of dMRI data from multiple individuals is an important task for brain template creation and investigation of inter-subject brain variability. However, groupwise registration is challenging due to the uniqueness of dMRI data, which comprise multi-dimensional, orientation-dependent signals that describe not only the strength but also the orientation of water diffusion in brain tissues. Deep learning approaches have shown successful performance in standard subject-to-subject dMRI registration; however, no deep learning methods have yet been proposed for groupwise dMRI registration. In this work, we propose Deep Diffusion MRI Template (DDTemplate), a novel deep-learning method that builds on the popular VoxelMorph framework and takes dMRI fiber tract information into account. DDTemplate enables joint usage of whole-brain tissue microstructure and tract-specific fiber orientation information to ensure alignment of white matter fiber tracts and whole-brain anatomical structures. We propose a novel deep learning framework that simultaneously trains a groupwise dMRI registration network and generates a population brain template. During inference, the trained model can be applied to register unseen subjects to the learned template. We compare DDTemplate with several state-of-the-art registration methods and demonstrate superior performance on dMRI data from multiple cohorts (adolescents, young adults, and elderly adults) acquired from different scanners. Furthermore, as a testbed task, we perform a between-population analysis to investigate sex differences in the brain, using the popular Tract-Based Spatial Statistics (TBSS) method, which relies on groupwise dMRI registration. We find that using DDTemplate can increase the sensitivity of population-difference detection, showing the potential of our method's utility in real neuroscientific applications.
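For intuition about the groupwise idea behind the VoxelMorph-style building block DDTemplate extends, here is a toy sketch (plain PyTorch, 2D scalar images rather than dMRI) in which per-subject displacement fields and a learnable template are optimized jointly; the actual method trains a registration network and adds fiber-tract and microstructure losses for orientation-dependent dMRI signals.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    # Spatial-transformer warp of (N,1,H,W) images by a dense
    # (N,2,H,W) displacement field, as in VoxelMorph.
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    gx = 2 * (xs + flow[:, 0]) / (w - 1) - 1  # normalize to [-1, 1]
    gy = 2 * (ys + flow[:, 1]) / (h - 1) - 1
    return F.grid_sample(moving, torch.stack((gx, gy), dim=-1), align_corners=True)

subjects = torch.rand(4, 1, 64, 64)  # toy cohort
template = subjects.mean(0, keepdim=True).clone().requires_grad_()
flows = torch.zeros(4, 2, 64, 64, requires_grad=True)
opt = torch.optim.Adam([template, flows], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    warped = warp(subjects, flows)
    # smoothness regularizer on the displacement fields
    smooth = (flows.diff(dim=2) ** 2).mean() + (flows.diff(dim=3) ** 2).mean()
    loss = F.mse_loss(warped, template.expand(4, -1, -1, -1)) + 0.1 * smooth
    loss.backward()
    opt.step()
```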

Brainwide hemodynamics predict EEG neural rhythms across sleep and wakefulness in humans

Jacob, L. P. L., Bailes, S. M., Williams, S. D., Stringer, C., Lewis, L. D.

bioRxiv preprint · Jul 26 2025
The brain exhibits rich oscillatory dynamics that play critical roles in vigilance and cognition, such as the neural rhythms that define sleep. These rhythms continuously fluctuate, signaling major changes in vigilance, but the widespread brain dynamics underlying these oscillations are difficult to investigate. Using simultaneous EEG and fast fMRI in humans who fell asleep inside the scanner, we developed a machine learning approach to investigate which fMRI regions and networks predict fluctuations in neural rhythms. We demonstrated that the rise and fall of alpha (8-12 Hz) and delta (1-4 Hz) power, two canonical EEG bands critically involved with cognition and vigilance, can be predicted from fMRI data in subjects who were not present in the training set. This approach also identified predictive information in individual brain regions across the cortex and subcortex. Finally, we developed an approach to identify shared and unique predictive information, and found that information about alpha rhythms was highly separable in two networks linked to arousal and visual systems. Conversely, delta rhythms were diffusely represented on a large spatial scale primarily across the cortex. These results demonstrate that EEG rhythms can be predicted from fMRI data, identify large-scale network patterns that underlie alpha and delta rhythms, and establish a novel framework for investigating multimodal brain dynamics.
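The cross-subject prediction setup can be sketched in a few lines: fit a regularized regression from ROI fMRI time series to a band-power target on all-but-one subject, then score on the held-out subject. This uses synthetic data and plain ridge regression; the study's actual model and its handling of hemodynamic lag are more involved.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_subj, n_tr, n_rois = 8, 300, 100
X = rng.standard_normal((n_subj, n_tr, n_rois))       # ROI fMRI time series
w = rng.standard_normal(n_rois)
y = X @ w + 0.5 * rng.standard_normal((n_subj, n_tr))  # proxy for alpha power

test = 0  # hold one subject out, mirroring the cross-subject evaluation
train = [s for s in range(n_subj) if s != test]
model = Ridge(alpha=10.0).fit(X[train].reshape(-1, n_rois), y[train].ravel())
print("held-out R^2:", r2_score(y[test], model.predict(X[test])))
```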

Optimization of deep learning models for inference in low resource environments.

Thakur S, Pati S, Wu J, Panchumarthy R, Karkada D, Kozlov A, Shamporov V, Suslov A, Lyakhov D, Proshin M, Shah P, Makris D, Bakas S

PubMed · Jul 26 2025
Artificial Intelligence (AI), and particularly deep learning (DL), has shown great promise to revolutionize healthcare. However, clinical translation is often hindered by demanding hardware requirements. In this study, we assess the effectiveness of optimization techniques for DL models in healthcare applications, targeting AI workloads across the domains of radiology, histopathology, and medical RGB imaging, and evaluating across hardware configurations. The assessed workloads span both segmentation and classification tasks: brain extraction in Magnetic Resonance Imaging (MRI), colorectal cancer delineation in Hematoxylin & Eosin (H&E)-stained digitized tissue sections, and diabetic foot ulcer classification in RGB images. We quantitatively evaluate model performance in terms of runtime during inference (including speedup, latency, and memory usage) and model utility on unseen data. Our results demonstrate that optimization techniques can substantially improve model runtime without compromising model utility. These findings suggest that optimization techniques can facilitate the clinical translation of AI models in low-resource environments, making them more practical for real-world healthcare applications even in underserved regions.
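One standard optimization technique in this space is post-training quantization; below is a minimal PyTorch dynamic-quantization sketch of the idea (the study itself benchmarks dedicated inference toolchains across hardware, which this does not reproduce).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))
model.eval()
# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly -- smaller model, faster CPU inference.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
with torch.no_grad():
    print(model(x))   # full-precision output
    print(qmodel(x))  # quantized output; should agree closely
```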

Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting: the DECIPHER study.

Bloom B, Haimovich A, Pott J, Williams SL, Cheetham M, Langsted S, Skene I, Astin-Chamberlain R, Thomas SH

PubMed · Jul 25 2025
Identifying whether there is a traumatic intracranial bleed (ICB+) on head CT is critical for clinical care and research. Free-text CT reports are unstructured and therefore must undergo time-consuming manual review. Existing artificial intelligence classification schemes are not optimised for the emergency department endpoint of classifying reports as ICB+ or ICB-. We sought to assess three methods for classifying CT reports: a text classification (TC) programme, a commercial natural language processing programme (Clinithink) and a generative pretrained transformer large language model (Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting (DECIPHER)-LLM). Primary objective: to determine the diagnostic classification performance of the dichotomous categorisation of each of the three approaches. Secondary objective: to determine whether the LLM could achieve a substantial reduction in CT report review workload while maintaining 100% sensitivity. Anonymised radiology reports of head CT scans performed for trauma were manually labelled as ICB+/-. Training and validation sets were randomly created to train the TC and natural language processing models. Prompts were written to train the LLM. 898 reports were manually labelled. Sensitivity and specificity (95% CI) of TC, Clinithink and DECIPHER-LLM (with probability of ICB+ set at 10%) were, respectively, 87.9% (76.7% to 95.0%) and 98.2% (96.3% to 99.3%); 75.9% (62.8% to 86.1%) and 96.2% (93.8% to 97.8%); and 100% (93.8% to 100%) and 97.4% (95.3% to 98.8%). With the DECIPHER-LLM probability-of-ICB+ threshold of 10% set to identify CT reports requiring manual evaluation, the number of reports requiring manual classification was reduced by an estimated 385/449 cases (85.7% (95% CI 82.1% to 88.9%)) while maintaining 100% sensitivity. DECIPHER-LLM outperformed the other tested free-text classification methods.
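The workload-reduction mechanism is straightforward to sketch: score each report with the model's probability of ICB+, send anything at or above the threshold to manual review, and auto-clear the rest. The scorer below is a hypothetical stand-in, not the DECIPHER prompt or model.

```python
def triage_reports(reports, get_icb_probability, threshold=0.10):
    """Route CT reports: anything scored at or above the threshold goes
    to manual review; the rest are auto-labelled ICB-. The threshold
    trades review workload against sensitivity."""
    manual, auto_negative = [], []
    for r in reports:
        (manual if get_icb_probability(r) >= threshold else auto_negative).append(r)
    return manual, auto_negative

# toy scorer standing in for an LLM-based probability estimate (hypothetical)
demo = ["acute subdural haematoma ...", "no acute intracranial abnormality ..."]
score = lambda text: 0.9 if "haematoma" in text else 0.02
to_review, cleared = triage_reports(demo, score)
print(len(to_review), "for manual review;", len(cleared), "auto-cleared")
```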

Methinks AI software for identifying large vessel occlusion in non-contrast head CT: A pilot retrospective study in American population.

Sanders JV, Keigher K, Oliver M, Joshi K, Lopes D

PubMed · Jul 25 2025
Background: Non-contrast computed tomography (NCCT) is the first-line imaging study for stroke assessment, but its sensitivity for detecting large vessel occlusion (LVO) is limited. Artificial intelligence (AI) algorithms may enable a faster LVO diagnosis using only NCCT. This study evaluates the performance and potential diagnostic time savings of the Methinks LVO AI algorithm in a U.S. multi-facility stroke network.
Methods: This retrospective pilot study reviewed NCCT and computed tomography angiography (CTA) images acquired between 2015 and 2023. The Methinks AI algorithm, designed to detect LVOs in the internal carotid artery and middle cerebral artery, was tested for sensitivity, specificity, and predictive values. A neuroradiologist reviewed cases to establish a gold standard. To evaluate potential time savings in the workflow, time gaps between NCCT and CTA in true-positive cases were stratified into four groups: Group 1 (<10 min), Group 2 (10-30 min), Group 3 (30-60 min), and Group 4 (>60 min).
Results: From a total of 1155 stroke codes, 608 NCCT exams were analyzed. Methinks LVO demonstrated 75% sensitivity and 83% specificity, correctly identifying 146 out of 194 confirmed LVO cases. The PPV of the algorithm was 72%. The NPV was 83% (considering 'other occlusion', 'stenosis' and 'posteriors' as negatives) and 73% when the same conditions were considered positives. Among the true-positive cases, 112 patients were in Group 1, 32 in Group 2, 15 in Group 3, and 3 in Group 4.
Conclusion: The Methinks AI algorithm shows promise for improving LVO detection from NCCT, especially in resource-limited settings. However, its sensitivity remains lower than that of CTA-based systems, suggesting the need for further refinement.
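For reference, the reported operating characteristics follow directly from confusion-matrix counts. The sketch below uses the stated 146 true positives out of 194 confirmed LVOs; the false-positive and true-negative counts are illustrative placeholders, since the abstract's negative breakdown depends on how 'other occlusion', 'stenosis' and 'posteriors' are counted.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# tp and fn from the abstract (146 of 194 LVOs detected);
# fp and tn are illustrative, not the study's actual counts.
print(diagnostic_metrics(tp=146, fn=48, fp=57, tn=357))
```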

Quantifying physiological variability and improving reproducibility in 4D-flow MRI cerebrovascular measurements with self-supervised deep learning.

Jolicoeur BW, Yardim ZS, Roberts GS, Rivera-Rivera LA, Eisenmenger LB, Johnson KM

PubMed · Jul 25 2025
To assess the efficacy of self-supervised deep learning (DL) denoising in reducing measurement variability in 4D-Flow MRI, and to clarify the contributions of physiological variation to cerebrovascular hemodynamics. A self-supervised DL denoising framework was trained on 3D radially sampled 4D-Flow MRI data. The model was evaluated in a prospective test-retest imaging study in which 10 participants underwent multiple 4D-Flow MRI scans. This included back-to-back scans and a single-scan interleaved acquisition designed to isolate noise from physiological variations. The effectiveness of DL denoising was assessed by comparing pixelwise velocity and hemodynamic metrics before and after denoising. DL denoising significantly enhanced the reproducibility of 4D-Flow MRI measurements, reducing the 95% confidence interval of cardiac-resolved velocity from 215 to 142 mm/s in back-to-back scans and from 158 to 96 mm/s in interleaved scans, after adjusting for physiological variation. In derived parameters, DL denoising did not significantly improve integrated measures, such as flow rates, but did significantly improve noise-sensitive measures, such as pulsatility index. Physiological variation in back-to-back time-resolved scans contributed 26.37% ± 0.08% and 32.42% ± 0.05% of standard error before and after DL. Self-supervised DL denoising enhances the quantitative repeatability of 4D-Flow MRI by reducing technical noise; however, variations from physiology and post-processing are not removed. These findings underscore the importance of accounting for both technical and physiological variability in neurovascular flow imaging, particularly for studies aiming to establish biomarkers for neurodegenerative diseases with vascular contributions.
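One common self-supervised denoising recipe, illustrated in miniature below, is Noise2Noise-style training: two independently noisy views of the same underlying signal supervise each other, so no clean target is ever needed. The paper's framework for radially sampled 4D-Flow data is more specialized; this only conveys the self-supervision principle.

```python
import torch
import torch.nn as nn

# Tiny denoiser trained without clean targets: view A predicts view B.
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 32, 32)  # stand-in for velocity images
for _ in range(100):
    a = clean + 0.1 * torch.randn_like(clean)  # noisy view 1
    b = clean + 0.1 * torch.randn_like(clean)  # noisy view 2
    loss = nn.functional.mse_loss(net(a), b)   # noisy target still works
    opt.zero_grad(); loss.backward(); opt.step()
```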

OCSVM-Guided Representation Learning for Unsupervised Anomaly Detection

Nicolas Pinon, Carole Lartizien

arXiv preprint · Jul 25 2025
Unsupervised anomaly detection (UAD) aims to detect anomalies without labeled data, a necessity in many machine learning applications where anomalous samples are rare or unavailable. Most state-of-the-art methods fall into two categories: reconstruction-based approaches, which often reconstruct anomalies too well, and decoupled representation learning with density estimators, which can suffer from suboptimal feature spaces. While some recent methods attempt to couple feature learning and anomaly detection, they often rely on surrogate objectives, restrict kernel choices, or introduce approximations that limit their expressiveness and robustness. To address this challenge, we propose a novel method that tightly couples representation learning with an analytically solvable one-class SVM (OCSVM), through a custom loss formulation that directly aligns latent features with the OCSVM decision boundary. The model is evaluated on two tasks: a new benchmark based on MNIST-C, and a challenging brain MRI subtle lesion detection task. Unlike most methods that focus on large, hyperintense lesions at the image level, our approach succeeds in targeting small, non-hyperintense lesions, and we evaluate voxel-wise metrics, addressing a more clinically relevant scenario. Both experiments evaluate a form of robustness to domain shifts, including corruption types in MNIST-C and scanner/age variations in MRI. Results demonstrate the performance and robustness of our proposed model, highlighting its potential for general UAD and real-world medical imaging applications. The source code is available at https://github.com/Nicolas-Pinon/uad_ocsvm_guided_repr_learning
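As a point of reference, the decoupled baseline the paper improves upon looks like the sketch below: extract latent features, then fit a one-class SVM on normal data and score by signed distance to the boundary (scikit-learn, synthetic features). The paper's contribution is to couple the two stages, training the encoder against an analytically solvable OCSVM rather than fitting it after the fact.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (500, 16))    # latent features of normal data
anomalous = rng.normal(3, 1, (20, 16))  # shifted cluster = anomalies

ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)
scores = ocsvm.decision_function(np.vstack([normal[:5], anomalous[:5]]))
print(scores)  # positive = inlier side of the boundary, negative = outlier
```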

Pre- and Post-Treatment Glioma Segmentation with the Medical Imaging Segmentation Toolkit

Adrian Celaya, Tucker Netherton, Dawid Schellingerhout, Caroline Chung, Beatrice Riviere, David Fuentes

arXiv preprint · Jul 25 2025
Medical image segmentation continues to advance rapidly, yet rigorous comparison between methods remains challenging due to a lack of standardized and customizable tooling. In this work, we present the current state of the Medical Imaging Segmentation Toolkit (MIST), with a particular focus on its flexible and modular postprocessing framework designed for the BraTS 2025 pre- and post-treatment glioma segmentation challenge. Since its debut in the 2024 BraTS adult glioma post-treatment segmentation challenge, MIST's postprocessing module has been significantly extended to support a wide range of transforms, including removal or replacement of small objects, extraction of the largest connected components, and morphological operations such as hole filling and closing. These transforms can be composed into user-defined strategies, enabling fine-grained control over the final segmentation output. We evaluate three such strategies - ranging from simple small-object removal to more complex, class-specific pipelines - and rank their performance using the BraTS ranking protocol. Our results highlight how MIST facilitates rapid experimentation and targeted refinement, ultimately producing high-quality segmentations for the BraTS 2025 challenge. MIST remains open source and extensible, supporting reproducible and scalable research in medical image segmentation.
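The kinds of composable transforms described, small-object removal, largest-component extraction, and hole filling, can be sketched with scikit-image and SciPy; this mirrors the spirit of MIST's postprocessing strategies, not its actual API.

```python
import numpy as np
from skimage import measure, morphology
from scipy import ndimage

def postprocess(mask, min_size=64):
    """Composable cleanup: drop small objects, keep the largest
    connected component, then fill holes."""
    mask = morphology.remove_small_objects(mask.astype(bool), min_size=min_size)
    labels = measure.label(mask)
    if labels.max() > 0:
        largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
        mask = labels == largest
    return ndimage.binary_fill_holes(mask)

toy = np.zeros((64, 64), dtype=bool)
toy[10:30, 10:30] = True
toy[15:20, 15:20] = False  # a hole to fill
toy[50:52, 50:52] = True   # a speck to remove
print(postprocess(toy).sum())
```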

DeepJIVE: Learning Joint and Individual Variation Explained from Multimodal Data Using Deep Learning

Matthew Drexler, Benjamin Risk, James J Lah, Suprateek Kundu, Deqiang Qiu

arXiv preprint · Jul 25 2025
Conventional multimodal data integration methods provide a comprehensive assessment of the shared or unique structure within each individual data type, but suffer from several limitations, such as the inability to handle high-dimensional data or identify nonlinear structures. In this paper, we introduce DeepJIVE, a deep-learning approach to performing Joint and Individual Variation Explained (JIVE). We perform mathematical derivation and experimental validation using both synthetic and real-world 1D, 2D, and 3D datasets. Different strategies for achieving the identity and orthogonality constraints of DeepJIVE were explored, resulting in three viable loss functions. We found that DeepJIVE can successfully uncover joint and individual variations of multimodal datasets. Our application of DeepJIVE to the Alzheimer's Disease Neuroimaging Initiative (ADNI) also identified biologically plausible covariation patterns between amyloid positron emission tomography (PET) and magnetic resonance (MR) images. In conclusion, the proposed DeepJIVE can be a useful tool for multimodal data analysis.
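A minimal sketch of the underlying decomposition (not DeepJIVE's actual architecture or loss formulations): each modality is encoded into a joint code, encouraged to agree across modalities, plus an individual code, with a penalty pushing the two codes toward orthogonality, and both feed a per-modality reconstruction.

```python
import torch
import torch.nn as nn

enc_j = nn.Linear(32, 4)                            # shared joint encoder
enc_i1, enc_i2 = nn.Linear(32, 4), nn.Linear(32, 4)  # individual encoders
dec1, dec2 = nn.Linear(8, 32), nn.Linear(8, 32)      # per-modality decoders
params = [*enc_j.parameters(), *enc_i1.parameters(), *enc_i2.parameters(),
          *dec1.parameters(), *dec2.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

x1, x2 = torch.randn(64, 32), torch.randn(64, 32)  # two toy modalities
for _ in range(100):
    j1, j2 = enc_j(x1), enc_j(x2)
    i1, i2 = enc_i1(x1), enc_i2(x2)
    recon = nn.functional.mse_loss(dec1(torch.cat([j1, i1], 1)), x1) \
          + nn.functional.mse_loss(dec2(torch.cat([j2, i2], 1)), x2)
    joint = nn.functional.mse_loss(j1, j2)  # shared structure must agree
    ortho = (j1 * i1).sum(1).pow(2).mean() + (j2 * i2).sum(1).pow(2).mean()
    loss = recon + joint + 0.1 * ortho
    opt.zero_grad(); loss.backward(); opt.step()
```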