Page 67 of 135 · 1347 results

A Trusted Medical Image Zero-Watermarking Scheme Based on DCNN and Hyperchaotic System.

Xiang R, Liu G, Dang M, Wang Q, Pan R

PubMed · Jun 1 2025
Zero-watermarking methods provide lossless copyright protection, making them well suited to medical images that require high integrity. However, most existing studies have focused only on robustness, with little analysis of or experimentation on discriminability. Therefore, this paper proposes a trusted robust zero-watermarking scheme for medical images based on a deep convolutional neural network (DCNN) and a hyperchaotic encryption system. Firstly, the medical image is converted into several feature-map matrices by a specific convolution layer of the DCNN. Then, a stable Gram matrix is obtained by calculating the colinear correlation between different channels of the feature-map matrices. Finally, the Gram matrices of the medical image and the feature-map matrices of the watermark image are fused by the trained DCNN to generate the zero-watermark. Meanwhile, we propose two feature evaluation criteria for finding differentiated eigenvalues. The eigenvalue is used as the explicit key to encrypt the generated zero-watermark with Lorenz hyperchaotic encryption, which enhances security and discriminability. Experimental results show that the proposed scheme resists common image attacks and geometric attacks, is distinguishable in experiments, and is applicable to the copyright protection of medical images.
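The Gram-matrix step above, which measures the colinear correlation between feature-map channels, can be sketched as follows (shapes and values are illustrative, not taken from the paper):

```python
import numpy as np

def gram_matrix(feature_maps: np.ndarray) -> np.ndarray:
    """Channel-wise Gram matrix of CNN feature maps.

    feature_maps: array of shape (C, H, W). Entry G[i, j] is the inner
    product between flattened channels i and j, capturing the colinear
    correlation between channels.
    """
    c, h, w = feature_maps.shape
    f = feature_maps.reshape(c, h * w)
    return f @ f.T  # shape (C, C), symmetric

# Illustrative example: 3 channels of 4x4 feature maps.
fm = np.arange(48, dtype=float).reshape(3, 4, 4)
g = gram_matrix(fm)
```

Because the Gram matrix depends only on inter-channel correlations, it is stable under many pixel-level distortions, which is what makes it a useful zero-watermark feature.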

IM-Diff: Implicit Multi-Contrast Diffusion Model for Arbitrary Scale MRI Super-Resolution.

Liu L, Zou J, Xu C, Wang K, Lyu J, Xu X, Hu Z, Qin J

PubMed · Jun 1 2025
Diffusion models have garnered significant attention for MRI Super-Resolution (SR) and have achieved promising results. However, existing diffusion-based SR models face two formidable challenges: 1) insufficient exploitation of complementary information from multi-contrast images, which hinders the faithful reconstruction of texture details and anatomical structures; and 2) reliance on fixed magnification factors, such as 2× or 4×, which is impractical for clinical scenarios that require arbitrary scale magnification. To circumvent these issues, this paper introduces IM-Diff, an implicit multi-contrast diffusion model for arbitrary-scale MRI SR, leveraging the merits of both multi-contrast information and the continuous nature of implicit neural representation (INR). Firstly, we propose an innovative hierarchical multi-contrast fusion (HMF) module with reference-aware cross Mamba (RCM) to effectively incorporate target-relevant information from the reference image into the target image, while ensuring a substantial receptive field with computational efficiency. Secondly, we introduce multiple wavelet INR magnification (WINRM) modules into the denoising process by integrating the wavelet implicit neural non-linearity, enabling effective learning of continuous representations of MR images. The involved wavelet activation enhances space-frequency concentration, further bolstering representation accuracy and robustness in INR. Extensive experiments on three public datasets demonstrate the superiority of our method over existing state-of-the-art SR models across various magnification factors.
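The wavelet nonlinearity that gives INRs their space-frequency concentration is typically a Gabor-style activation, an oscillation damped by a Gaussian envelope. A minimal sketch assuming a real-valued form; the exact activation and parameters used by IM-Diff's WINRM modules may differ:

```python
import numpy as np

def gabor_wavelet_activation(x, omega=10.0, sigma=1.0):
    """Real Gabor-style wavelet nonlinearity: a sinusoid (frequency
    omega) damped by a Gaussian envelope (width sigma). The envelope
    localizes the oscillation in space, concentrating the activation
    in both space and frequency."""
    return np.sin(omega * x) * np.exp(-(x / sigma) ** 2)

coords = np.linspace(-1.0, 1.0, 5)  # normalized INR coordinates
vals = gabor_wavelet_activation(coords)
print(vals)
```

The activation is an odd function that decays away from the origin, unlike an unbounded ReLU, which is what helps wavelet INRs represent high-frequency MR detail.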

Alzheimer's disease prediction using 3D-CNNs: Intelligent processing of neuroimaging data.

Rahman AU, Ali S, Saqia B, Halim Z, Al-Khasawneh MA, AlHammadi DA, Khan MZ, Ullah I, Alharbi M

PubMed · Jun 1 2025
Alzheimer's disease (AD) is a severe neurological illness that destroys memory and brain functioning. The disease affects an individual's capacity to work, think, and behave, and the proportion of individuals suffering from AD is rapidly increasing. It has become a leading cause of disability and impacts millions of people worldwide. Early detection reduces disease expansion, enables more effective therapies, and leads to better outcomes. However, predicting AD at an early stage is complex, since its clinical symptoms overlap with those of normal aging, mild cognitive impairment (MCI), and other neurodegenerative disorders. Prior studies indicate that early diagnosis is improved by the utilization of magnetic resonance imaging (MRI). However, MRI data are scarce, noisy, and extremely diverse across scanners and patient populations. 2D CNNs analyze 3D data slice by slice, resulting in a loss of inter-slice information and the contextual coherence required to detect subtle and diffuse brain alterations. This study offers a novel 3-Dimensional Convolutional Neural Network (3D-CNN) and an intelligent preprocessing pipeline for AD prediction. The work uses an intelligent frame-selection mechanism and 3D dilated convolutions to recognize the most informative slices associated with AD. This enables the model to capture the subtle and diffuse structural changes across the brain visible in MRI scans. The proposed model examines brain structures by recognizing small volumetric changes associated with AD and learning spatial hierarchies within MRI data. After conducting various experiments, we observed that the proposed 3D-CNN is highly proficient at capturing early brain changes. To validate the model's performance, the Alzheimer's Disease Neuroimaging Initiative (ADNI) benchmark dataset is used, on which the model achieves a maximum accuracy of 92.89%, outperforming state-of-the-art approaches.
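A 3D dilated convolution enlarges the receptive field without adding parameters by inserting gaps between kernel taps, which is how a model can cover diffuse changes across the brain. A minimal "valid"-mode sketch with illustrative shapes, not the paper's implementation:

```python
import numpy as np

def dilated_conv3d(volume, kernel, dilation=2):
    """Minimal 'valid' 3D convolution with a dilation factor.

    Dilation spaces the kernel taps apart, so a small kernel spans a
    larger voxel neighborhood at no extra parameter cost."""
    kd, kh, kw = kernel.shape
    # Effective kernel extent after dilation.
    ed = (kd - 1) * dilation + 1
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    d, h, w = volume.shape
    out = np.zeros((d - ed + 1, h - eh + 1, w - ew + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                patch = volume[z:z + ed:dilation,
                               y:y + eh:dilation,
                               x:x + ew:dilation]
                out[z, y, x] = np.sum(patch * kernel)
    return out

vol = np.random.rand(8, 8, 8)
k = np.ones((2, 2, 2))  # a 2x2x2 kernel spans 3x3x3 voxels at dilation=2
feat = dilated_conv3d(vol, k, dilation=2)
```

In practice this would be a framework convolution with a `dilation` argument; the loop form is only to make the tap spacing explicit.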

MedKAFormer: When Kolmogorov-Arnold Theorem Meets Vision Transformer for Medical Image Representation.

Wang G, Zhu Q, Song C, Wei B, Li S

PubMed · Jun 1 2025
Vision Transformers (ViTs) suffer from high parameter complexity because they rely on Multi-layer Perceptrons (MLPs) for nonlinear representation. This issue is particularly challenging in medical image analysis, where labeled data is limited, leading to inadequate feature representation. Existing methods have attempted to optimize either the patch embedding stage or the non-embedding stage of ViTs. Still, they have struggled to balance effective modeling, parameter complexity, and data availability. Recently, the Kolmogorov-Arnold Network (KAN) was introduced as an alternative to MLPs, offering a potential solution to the large parameter issue in ViTs. However, KAN cannot be directly integrated into ViT due to challenges such as handling 2D structured data and dimensionality catastrophe. To solve this problem, we propose MedKAFormer, the first ViT model to incorporate the Kolmogorov-Arnold (KA) theorem for medical image representation. It includes a Dynamic Kolmogorov-Arnold Convolution (DKAC) layer for flexible nonlinear modeling in the patch embedding stage. Additionally, it introduces a Nonlinear Sparse Token Mixer (NSTM) and a Nonlinear Dynamic Filter (NDF) in the non-embedding stage. These components provide comprehensive nonlinear representation while reducing model overfitting. MedKAFormer reduces parameter complexity by 85.61% compared to ViT-Base and achieves competitive results on 14 medical datasets across various imaging modalities and structures.
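The core KAN idea referenced above is to replace each scalar weight with a learnable univariate function. A toy sketch using Gaussian bumps in place of the usual B-spline basis; the grid and coefficients are illustrative, not MedKAFormer's parameterization:

```python
import numpy as np

def kan_edge_function(x, grid, coeffs):
    """A KAN edge: a learnable univariate function expressed as a
    weighted sum of basis bumps centered on a grid. Learning `coeffs`
    shapes the nonlinearity itself, rather than scaling a fixed one."""
    basis = np.exp(-((x[:, None] - grid[None, :]) ** 2))  # (N, G)
    return basis @ coeffs                                  # (N,)

grid = np.linspace(-1.0, 1.0, 5)
coeffs = np.array([0.1, -0.2, 0.5, 0.3, -0.1])  # learnable in a real KAN
x = np.array([-0.5, 0.0, 0.5])
out = kan_edge_function(x, grid, coeffs)
print(out)
```

Because each edge carries its own few coefficients instead of a full MLP layer, a KAN-style block can express nonlinearity with far fewer parameters, which is the motivation for the 85.61% reduction reported above.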

P2TC: A Lightweight Pyramid Pooling Transformer-CNN Network for Accurate 3D Whole Heart Segmentation.

Cui H, Wang Y, Zheng F, Li Y, Zhang Y, Xia Y

PubMed · Jun 1 2025
Cardiovascular disease is a leading global cause of death, requiring accurate heart segmentation for diagnosis and surgical planning. Deep learning methods have been demonstrated to achieve superior performance in cardiac structure segmentation. However, there are still limitations in 3D whole heart segmentation, such as inadequate spatial context modeling, difficulty in capturing long-distance dependencies, high computational complexity, and limited representation of local high-level semantic information. To tackle these problems, we propose a lightweight Pyramid Pooling Transformer-CNN (P2TC) network for accurate 3D whole heart segmentation. The proposed architecture comprises a dual encoder-decoder structure with a 3D pyramid pooling Transformer for multi-scale information fusion and a lightweight large-kernel Convolutional Neural Network (CNN) for local feature extraction. The decoder has two branches for precise segmentation and contextual residual handling. The first branch generates segmentation masks for pixel-level classification based on the features extracted by the encoder, achieving accurate segmentation of cardiac structures. The second branch highlights contextual residuals across slices, enabling the network to better handle variations and boundaries. Extensive experimental results on the Multi-Modality Whole Heart Segmentation (MM-WHS) 2017 challenge dataset demonstrate that P2TC outperforms the most advanced methods, achieving Dice scores of 92.6% and 88.1% in the Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities respectively, surpassing the baseline model by 1.5% and 1.7% and achieving state-of-the-art segmentation results.
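The Dice score behind the 92.6% and 88.1% results above is a simple overlap measure between a predicted and a ground-truth mask; a minimal sketch on binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|). Ranges from 0 (no overlap)
    to 1 (perfect overlap)."""
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total else 1.0

a = np.zeros((4, 4), dtype=bool); a[:2] = True   # 8 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3] = True  # 8 voxels, 4 overlapping
print(dice_score(a, b))  # → 0.5
```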

A Survey of Surrogates and Health Care Professionals Indicates Support of Cognitive Motor Dissociation-Assisted Prognostication.

Heinonen GA, Carmona JC, Grobois L, Kruger LS, Velazquez A, Vrosgou A, Kansara VB, Shen Q, Egawa S, Cespedes L, Yazdi M, Bass D, Saavedra AB, Samano D, Ghoshal S, Roh D, Agarwal S, Park S, Alkhachroum A, Dugdale L, Claassen J

PubMed · Jun 1 2025
Prognostication of patients with acute disorders of consciousness is imprecise, but more accurate technology-supported predictions, such as cognitive motor dissociation (CMD), are emerging. CMD refers to the detection of willful brain activation following motor commands using functional magnetic resonance imaging or machine learning-supported analysis of the electroencephalogram in clinically unresponsive patients. CMD is associated with long-term recovery, but acceptance by surrogates and health care professionals is uncertain. The objective of this study was to determine receptiveness to CMD for informing goals-of-care (GoC) decisions and research participation among health care professionals and surrogates of behaviorally unresponsive patients. This was a two-center study of surrogates of, and health care professionals caring for, unconscious patients with severe neurological injury who were enrolled in two prospective US-based studies. Participants completed a 13-item survey assessing demographics, religiosity, minimal acceptable level of recovery, enthusiasm for research participation, and receptiveness to CMD for supporting GoC decisions. Completed surveys were obtained from 196 participants (133 health care professionals and 63 surrogates). Across all respondents, 93% indicated that they would want their loved one or the patient they cared for to participate in a research study that supports recovery of consciousness if CMD were detected, compared with 58% if CMD were not detected. Health care professionals were more likely than surrogates to change GoC with a positive (78% vs. 59%, p = 0.005) or negative (83% vs. 59%, p = 0.0002) CMD result. Participants who reported that religion was the most important part of their life were least likely to change GoC with or without CMD. Participants who identified as Black (odds ratio [OR] 0.12, 95% confidence interval [CI] 0.04-0.36) or Hispanic/Latino (OR 0.39, 95% CI 0.2-0.75) and those for whom religion was the most important part of their life (OR 0.18, 95% CI 0.05-0.64) were more likely to accept a lower minimum level of recovery. Technology-supported prognostication and enthusiasm for clinical trial participation were endorsed across a diverse spectrum of health care professionals and surrogate decision-makers. Education for surrogates and health care professionals should accompany the integration of technology-supported prognostication.
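The odds ratios reported above come from 2x2 contingency tables; the arithmetic can be sketched as follows, with hypothetical counts chosen only to illustrate the calculation (not the study's data):

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio from a 2x2 table: the odds of the outcome in the
    exposed group divided by the odds in the unexposed group.
    OR < 1 means the outcome is less likely among the exposed."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical counts for illustration only.
print(odds_ratio(10, 40, 20, 20))  # (10/40) / (20/20) = 0.25
```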

Automated Ensemble Multimodal Machine Learning for Healthcare.

Imrie F, Denner S, Brunschwig LS, Maier-Hein K, van der Schaar M

PubMed · Jun 1 2025
The application of machine learning in medicine and healthcare has led to the creation of numerous diagnostic and prognostic models. However, despite their success, current approaches generally issue predictions using data from a single modality. This stands in stark contrast with clinician decision-making, which employs diverse information from multiple sources. While several multimodal machine learning approaches exist, significant challenges in developing multimodal systems remain, hindering clinical adoption. In this paper, we introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning. AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies. In an illustrative application using a multimodal skin lesion dataset, we highlight the importance of multimodal machine learning and the power of combining multiple fusion strategies using ensemble learning. We have open-sourced our framework as a tool for the community and hope it will accelerate the uptake of multimodal machine learning in healthcare and spur further innovation.
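One fusion strategy such a framework might combine is late (decision-level) fusion: averaging per-modality class probabilities. A minimal sketch; the weights and probabilities are illustrative, not AutoPrognosis-M's actual configuration:

```python
import numpy as np

def late_fusion(prob_imaging, prob_tabular, weights=(0.5, 0.5)):
    """Late fusion: each modality's model predicts class probabilities
    independently, and the predictions are combined by a weighted
    average, then renormalized to sum to 1."""
    w_img, w_tab = weights
    fused = w_img * np.asarray(prob_imaging) + w_tab * np.asarray(prob_tabular)
    return fused / fused.sum(axis=-1, keepdims=True)

p_img = [0.7, 0.3]   # e.g. a CNN on the lesion image
p_tab = [0.4, 0.6]   # e.g. a tabular model on clinical fields
fused = late_fusion(p_img, p_tab)
print(fused)
```

Early fusion would instead concatenate features before a single classifier; ensembling over several such strategies is the approach the abstract highlights.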

ScreenDx, an artificial intelligence-based algorithm for the incidental detection of pulmonary fibrosis.

Touloumes N, Gagianas G, Bradley J, Muelly M, Kalra A, Reicher J

PubMed · Jun 1 2025
Nonspecific symptoms and variability in radiographic reporting patterns contribute to delays in the diagnosis of pulmonary fibrosis. An attractive solution is the use of machine-learning algorithms to screen for radiographic features suggestive of pulmonary fibrosis. Thus, we developed and validated a machine learning classifier algorithm (ScreenDx) to screen computed tomography imaging and identify incidental cases of pulmonary fibrosis. ScreenDx is a deep learning convolutional neural network that was developed from a multi-source dataset (cohort A) of 3,658 normal and abnormal CTs, including CTs from patients with COPD, emphysema, and community-acquired pneumonia. Cohort B, a US-based cohort (n = 381), was used for tuning the algorithm, and external validation was performed on cohort C (n = 683), a separate international dataset. At the optimal threshold, the sensitivity and specificity for detection of pulmonary fibrosis in cohort B were 0.91 (95% CI 88-94%) and 0.95 (95% CI 93-97%), respectively, with an AUC of 0.98. In the external validation dataset (cohort C), the sensitivity and specificity were 1.0 (95% CI 99.9-100.0) and 0.98 (95% CI 97.9-99.6), respectively, with an AUC of 0.997. There were no significant differences in the ability of ScreenDx to identify pulmonary fibrosis based on CT manufacturer (Philips, Toshiba, GE Healthcare, or Siemens) or slice thickness (2 mm vs 2-4 mm vs 4 mm). Regardless of CT manufacturer or slice thickness, ScreenDx demonstrated high performance across two multi-site datasets for identifying incidental cases of pulmonary fibrosis. This suggests that the algorithm may be generalizable across patient populations and different healthcare systems.
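The sensitivity and specificity reported for ScreenDx are standard confusion-matrix ratios; a minimal sketch with hypothetical counts chosen to match the cohort B operating point:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of true fibrosis cases
    flagged. Specificity = TN / (TN + FP): fraction of non-fibrosis
    scans correctly passed over."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for illustration only.
sens, spec = sensitivity_specificity(tp=91, fn=9, tn=95, fp=5)
print(sens, spec)  # 0.91 0.95
```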

Extracerebral Normalization of <sup>18</sup>F-FDG PET Imaging Combined with Behavioral CRS-R Scores Predict Recovery from Disorders of Consciousness.

Guo K, Li G, Quan Z, Wang Y, Wang J, Kang F, Wang J

PubMed · Jun 1 2025
Identifying patients likely to regain consciousness early on is a challenge. The assessment of consciousness levels and the prediction of wakefulness probabilities are facilitated by <sup>18</sup>F-fluorodeoxyglucose (<sup>18</sup>F-FDG) positron emission tomography (PET). This study aimed to develop a prognostic model for predicting 1-year postinjury outcomes in prolonged disorders of consciousness (DoC) using <sup>18</sup>F-FDG PET alongside clinical behavioral scores. Eighty-seven patients with prolonged DoC newly diagnosed with behavioral Coma Recovery Scale-Revised (CRS-R) scores and <sup>18</sup>F-FDG PET/computed tomography (<sup>18</sup>F-FDG PET/CT) scans were included. PET images were normalized by the cerebellum and by extracerebral tissue, respectively. Images were divided into training and independent test sets at a ratio of 5:1. Image-based classification was conducted using the DenseNet121 network, whereas tabular-based deep learning was employed to train deep features extracted from the imaging models together with behavioral CRS-R scores. The performance of the models was assessed and compared using the McNemar test. Among the 87 patients with DoC who received routine treatments, 52 recovered consciousness, whereas 35 did not. Classification based on the standardized uptake value ratio normalized by extracerebral tissue demonstrated higher specificity and lower sensitivity in predicting consciousness recovery than classification based on the standardized uptake value ratio normalized by the cerebellum, with area under the curve values of 0.751 ± 0.093 and 0.412 ± 0.104 on the test sets, respectively, although the difference was not statistically significant (P = 0.73). The combination of the extracerebral-normalized standardized uptake value ratio and computed tomography deep features with behavioral CRS-R scores yielded the highest classification accuracy, with area under the curve values of 0.950 ± 0.027 and 0.933 ± 0.015 on the training and test sets, respectively, outperforming any individual modality. In this preliminary study, a multimodal prognostic model based on <sup>18</sup>F-FDG PET extracerebral normalization and behavioral CRS-R scores facilitated the prediction of recovery in patients with DoC.
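The standardized uptake value ratio (SUVR) normalization used above divides voxel uptake by the mean uptake of a reference region (cerebellum or extracerebral tissue in the study). A minimal sketch; the toy image and mask are illustrative:

```python
import numpy as np

def suvr(pet_image: np.ndarray, reference_mask: np.ndarray) -> np.ndarray:
    """Standardized uptake value ratio: each voxel's uptake divided by
    the mean uptake inside the reference-region mask. This removes
    global scaling differences between scans."""
    ref_mean = pet_image[reference_mask].mean()
    return pet_image / ref_mean

pet = np.array([[2.0, 4.0], [1.0, 1.0]])
mask = np.array([[False, False], [True, True]])  # reference region
ratio = suvr(pet, mask)
print(ratio)
```

The choice of reference region is exactly the variable the study compares: the same scan yields different SUVR maps depending on whether the mask covers the cerebellum or extracerebral tissue.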

Explicit Abnormality Extraction for Unsupervised Motion Artifact Reduction in Magnetic Resonance Imaging.

Zhou Y, Li H, Liu J, Kong Z, Huang T, Ahn E, Lv Z, Kim J, Feng DD

PubMed · Jun 1 2025
Motion artifacts compromise the quality of magnetic resonance imaging (MRI) and pose challenges to achieving diagnostic outcomes and image-guided therapies. In recent years, supervised deep learning approaches have emerged as successful solutions for motion artifact reduction (MAR). One disadvantage of these methods is their dependency on acquiring paired sets of motion artifact-corrupted (MA-corrupted) and motion artifact-free (MA-free) MR images for training purposes. Obtaining such image pairs is difficult and therefore limits the application of supervised training. In this paper, we propose a novel UNsupervised Abnormality Extraction Network (UNAEN) to alleviate this problem. Our network is capable of working with unpaired MA-corrupted and MA-free images. It converts the MA-corrupted images to MA-reduced images by extracting abnormalities from the MA-corrupted images using a proposed artifact extractor, which intercepts the residual artifact maps from the MA-corrupted MR images explicitly, and a reconstructor to restore the original input from the MA-reduced images. The performance of UNAEN was assessed by experimenting with various publicly available MRI datasets and comparing them with state-of-the-art methods. The quantitative evaluation demonstrates the superiority of UNAEN over alternative MAR methods and visually exhibits fewer residual artifacts. Our results substantiate the potential of UNAEN as a promising solution applicable in real-world clinical environments, with the capability to enhance diagnostic accuracy and facilitate image-guided therapies.
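The explicit-extraction idea, subtracting a predicted residual artifact map from the corrupted image, can be sketched as follows; the toy extractor below is a stand-in for UNAEN's trained network:

```python
import numpy as np

def remove_artifact(corrupted: np.ndarray, artifact_extractor) -> np.ndarray:
    """Explicit abnormality extraction: the extractor predicts the
    residual artifact map, which is subtracted from the corrupted
    image to yield the MA-reduced image."""
    return corrupted - artifact_extractor(corrupted)

# Toy stand-in extractor: treat intensity above the median as "artifact".
toy_extractor = lambda img: np.clip(img - np.median(img), 0, None)

img = np.array([1.0, 5.0, 2.0, 9.0])
cleaned = remove_artifact(img, toy_extractor)
print(cleaned)
```

In the actual method a reconstructor network then restores the input from the MA-reduced image, providing the unsupervised training signal that removes the need for paired data.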
