
Secure and fault-tolerant cloud-based framework for medical image storage and retrieval in a distributed environment.

Amaithi Rajan A, V V, M A, R PK

PubMed · Sep 26, 2025
In the evolving field of healthcare, centralized cloud-based medical image retrieval faces challenges related to security, availability, and adversarial threats. Existing deep learning-based solutions improve retrieval but remain vulnerable to adversarial attacks and quantum threats, necessitating a shift to more secure distributed cloud solutions. This article proposes SFMedIR, a secure and fault-tolerant medical image retrieval framework built on adversarial attack-resistant federated learning for hash-code generation, using a ConvNeXt-based model to improve accuracy and generalizability. The framework integrates quantum-chaos-based encryption for security, dynamic threshold-based shadow storage for fault tolerance, and a distributed cloud architecture to mitigate single points of failure. Unlike conventional methods, this approach significantly improves security and availability in cloud-based medical image retrieval systems, providing a resilient and efficient solution for healthcare applications. The framework is validated on Brain MRI and Kidney CT datasets, achieving a 60-70% improvement in retrieval accuracy for adversarial queries and an overall 90% retrieval accuracy, outperforming existing models by 5-10%. The results demonstrate superior performance in terms of both security and retrieval efficiency, making this framework a valuable contribution to the future of secure medical image management.
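As an illustration of the hash-based retrieval step (the bit width, sign thresholding, and Hamming ranking below are generic assumptions, not details taken from the paper), a minimal sketch:

```python
import numpy as np

def binarize(features: np.ndarray) -> np.ndarray:
    """Turn real-valued embeddings into {0,1} hash codes by sign thresholding."""
    return (features > 0).astype(np.uint8)

def hamming_rank(query_code: np.ndarray, gallery_codes: np.ndarray, top_k: int = 10):
    """Rank gallery images by Hamming distance to the query hash code."""
    distances = np.count_nonzero(gallery_codes != query_code, axis=1)
    return np.argsort(distances)[:top_k]

# Toy example: 64-bit codes for a gallery of 1000 images and one query.
rng = np.random.default_rng(0)
gallery = binarize(rng.standard_normal((1000, 64)))
query = binarize(rng.standard_normal(64))
print(hamming_rank(query, gallery, top_k=5))
```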

Leveraging multi-modal foundation model image encoders to enhance brain MRI-based headache classification.

Rafsani F, Sheth D, Che Y, Shah J, Siddiquee MMR, Chong CD, Nikolova S, Ross K, Dumkrieger G, Li B, Wu T, Schwedt TJ

PubMed · Sep 26, 2025
Headaches are a nearly universal human experience, traditionally diagnosed based solely on symptoms. Recent advances in imaging techniques and artificial intelligence (AI) have enabled automated headache detection systems, which can enhance clinical diagnosis, especially when symptom-based evaluations are insufficient. Current AI models often require extensive data, limiting their clinical applicability where data availability is low. However, deep learning models, particularly pre-trained ones fine-tuned with smaller, targeted datasets, can potentially overcome this limitation. Leveraging BioMedCLIP, a pre-trained foundation model combining a vision transformer (ViT) image encoder with a PubMedBERT text encoder, we fine-tuned the pre-trained ViT model for the specific purpose of classifying headaches and detecting biomarkers from brain MRI data. The dataset consisted of 721 individuals: 424 healthy controls (HC) from the IXI dataset and 297 local participants, including migraine sufferers (n = 96), individuals with acute post-traumatic headache (APTH, n = 48), persistent post-traumatic headache (PPTH, n = 49), and additional HC (n = 104). The model achieved high accuracy across multiple balanced test sets, including 89.96% accuracy for migraine versus HC, 88.13% for APTH versus HC, and 83.13% for PPTH versus HC, all validated through five-fold cross-validation for robustness. Brain regions identified by Gradient-weighted Class Activation Mapping analysis as responsible for migraine classification included the postcentral cortex, supramarginal gyrus, superior temporal cortex, and precuneus cortex; for APTH, the rostral middle frontal and precentral cortices; and, for PPTH, the cerebellar cortex and precentral cortex. To our knowledge, this is the first study to leverage a multimodal biomedical foundation model for headache classification and biomarker detection using structural MRI, offering complementary insights into the causes and brain changes associated with headache disorders.
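A minimal sketch of this fine-tuning recipe, substituting a generic torchvision ViT-B/16 for BioMedCLIP's image encoder (the head size, learning rate, and two-class setup are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Stand-in for BioMedCLIP's ViT image encoder: a generic pretrained ViT-B/16.
backbone = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
backbone.heads = nn.Identity()                       # expose the 768-d [CLS] embedding

model = nn.Sequential(backbone, nn.Linear(768, 2))   # binary head: headache vs. HC

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR for fine-tuning
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of 224x224 3-channel MRI slices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```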

Improved pharmacokinetic parameter estimation from DCE-MRI via spatial-temporal information-driven unsupervised learning.

He X, Wang L, Yang Q, Wang J, Xing Z, Cao D, Cai C, Cai S

PubMed · Sep 23, 2025
Objective: Pharmacokinetic (PK) parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) provide quantitative characterization of tissue perfusion and permeability. However, existing deep learning methods for PK parameter estimation rely on either temporal or spatial features alone, overlooking the integrated spatial-temporal characteristics of DCE-MRI data. This study aims to remove this barrier by fully leveraging spatial and temporal information to improve parameter estimation.

Approach: A spatial-temporal information-driven unsupervised deep learning method (STUDE) was proposed. STUDE combines convolutional neural networks (CNNs) and a customized Vision Transformer (ViT) to separately capture spatial and temporal features, enabling comprehensive modelling of contrast-agent dynamics and tissue heterogeneity. In addition, a spatial-temporal attention (STA) feature-fusion module was proposed to enable adaptive focus on both dimensions for more effective feature fusion. Moreover, the extended Tofts model imposed physical constraints on PK parameter estimation, enabling unsupervised training of STUDE. The accuracy and diagnostic value of STUDE were compared with conventional non-linear least squares (NLLS) fitting and representative deep learning-based methods (i.e., GRU, CNN, U-Net, and VTDCE-Net) on a numerical brain phantom and 87 glioma patients, respectively.

Main results: On the numerical brain phantom, STUDE produced PK parameter maps with the lowest systematic and random errors, even under low-SNR conditions (SNR = 10 dB). On glioma data, STUDE generated parameter maps with reduced noise compared to NLLS and superior structural clarity compared to the other methods. Furthermore, STUDE outperformed all other methods in identifying glioma isocitrate dehydrogenase (IDH) mutation status, achieving area under the curve (AUC) values of 0.840 and 0.908 for the receiver operating characteristic curves of K^trans and V_e, respectively. A combination of all PK parameters improved the AUC to 0.926.

Significance: STUDE advances spatial-temporal information-driven and physics-informed learning for precise PK parameter estimation, demonstrating its potential clinical significance.
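For reference, the extended Tofts model that supplies STUDE's physical constraint relates the tissue concentration curve C_t(t) to the arterial input function C_p(t) in its standard form (notation here may differ slightly from the paper's):

```latex
C_t(t) = v_p\,C_p(t) + K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\,
  \exp\!\left(-\frac{K^{\mathrm{trans}}}{v_e}\,(t-\tau)\right) \mathrm{d}\tau
```

where K^trans is the volume transfer constant, v_e the extravascular extracellular volume fraction, and v_p the plasma volume fraction.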

Uncovering the genetic architecture of the heart via genetic association studies of unsupervised deep learning-derived endophenotypes.

You L, Zhao X, Xie Z, Patel KA, Chen C, Kitkungvan D, Mohammed KK, Narula N, Arbustini E, Cassidy CK, Narula J, Zhi D

PubMed · Sep 20, 2025
Recent genome-wide association studies (GWAS) have effectively linked genetic variants to quantitative traits derived from time-series cardiac magnetic resonance imaging, revealing insights into cardiac morphology and function. Deep learning approaches generally require extensive supervised training on manually annotated data. In this study, we developed a novel framework using a 3D U-architecture autoencoder (cineMAE) to learn deep image phenotypes from cardiac magnetic resonance (CMR) imaging for genetic discovery, focusing on long-axis two-chamber and four-chamber views. We trained a masked autoencoder to develop Unsupervised Derived Image Phenotypes for the heart (Heart-UDIPs). These representations were informative for various heart-specific phenotypes (e.g., left ventricular hypertrophy) and diseases (e.g., hypertrophic cardiomyopathy). GWAS on Heart-UDIPs identified 323 lead SNPs and 628 SNP-prioritized genes, exceeding previous methods. The genes identified by the method described herein exhibited significant associations with cardiac function and showed substantial enrichment in pathways related to cardiac disorders. These results underscore the utility of our Heart-UDIP approach in enhancing the discovery potential of genetic associations, without the need for clinically defined phenotypes or manual annotations.
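A minimal sketch of the masked-autoencoder training signal behind Heart-UDIPs (patch size, mask ratio, and the 2D setup are illustrative assumptions; the paper uses a 3D U-architecture on CMR series):

```python
import torch

def random_patch_mask(images: torch.Tensor, patch: int = 16, mask_ratio: float = 0.75):
    """Zero out a random subset of non-overlapping patches; return masked images and mask."""
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = torch.rand(b, gh * gw) > mask_ratio          # True = patch stays visible
    mask = keep.view(b, 1, gh, gw)
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return images * mask, mask

def mae_loss(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Reconstruct the full image; score the loss only on the masked patches."""
    masked, mask = random_patch_mask(images)
    recon = model(masked)
    return ((recon - images) ** 2 * (~mask)).mean()
```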

Visual language model-assisted spectral CT reconstruction by diffusion and low-rank priors from limited-angle measurements.

Wang Y, Liang N, Ren J, Zhang X, Shen Y, Cai A, Zheng Z, Li L, Yan B

PubMed · Sep 19, 2025
Spectral computed tomography (CT) is a critical tool in clinical practice, offering capabilities in multi-energy spectrum imaging and material identification. The limited-angle (LA) scanning strategy has attracted attention for its advantages in fast data acquisition and reduced radiation exposure, aligning with the as-low-as-reasonably-achievable (ALARA) principle. However, most deep learning-based methods require separate models for each LA setting, which limits their flexibility in adapting to new conditions. In this study, we developed a novel Visual-Language model-assisted Spectral CT Reconstruction (VLSR) method to address LA artifacts and enable multi-setting adaptation within a single model. The VLSR method integrates the image-text perception ability of visual-language models with the image-generation potential of diffusion models. Prompt engineering is introduced to better represent LA artifact characteristics, further improving the accuracy of artifact characterization. Additionally, a collaborative sampling framework combining data consistency, low-rank regularization, and image-domain diffusion models is developed to produce high-quality and consistent spectral CT reconstructions. VLSR outperforms the comparison methods: under scanning angles of 90° and 60° on simulated data, it improves peak signal-to-noise ratio (PSNR) by at least 0.41 dB and 1.13 dB, respectively. The VLSR method can reconstruct high-quality spectral CT images under diverse LA configurations, allowing faster and more flexible scans with reduced radiation dose.
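The collaborative sampling loop can be sketched as a reverse-diffusion iteration that alternates the image-domain prior with data-consistency and low-rank steps (every operator below is a schematic placeholder, not the paper's implementation):

```python
import torch

def sample(denoiser, A, AT, y, timesteps: int, lam: float = 0.1, rank: int = 4):
    """Schematic reverse-diffusion loop for limited-angle spectral CT.
    denoiser: denoising network; A/AT: forward projector and its adjoint;
    y: limited-angle sinograms with energy channels along dim 0."""
    x = torch.randn_like(AT(y))
    for t in reversed(range(timesteps)):
        x = denoiser(x, t)                     # image-domain diffusion prior
        x = x - lam * AT(A(x) - y)             # gradient step toward data consistency
        # Low-rank prior across energy channels: truncate singular values.
        flat = x.flatten(1)                    # (channels, voxels)
        u, s, vh = torch.linalg.svd(flat, full_matrices=False)
        s[rank:] = 0
        x = (u @ torch.diag(s) @ vh).view_as(x)
    return x
```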

Sex classification from hand X-ray images in pediatric patients: How zero-shot Segment Anything Model (SAM) can improve medical image analysis.

Mollineda RA, Becerra K, Mederos B

PubMed · Sep 13, 2025
The potential to classify sex from hand data is a valuable tool in both forensic and anthropological sciences. This work presents possibly the most comprehensive study to date of sex classification from hand X-ray images. The research methodology involves a systematic evaluation of the zero-shot Segment Anything Model (SAM) for X-ray image segmentation, a novel hand-mask detection algorithm based on geometric criteria leveraging human knowledge (avoiding costly retraining and prompt engineering), a comparison of multiple X-ray image representations including hand bone structure and hand silhouette, a rigorous application of deep learning models and ensemble strategies, visual explainability of decisions by aggregating attribution maps from multiple models, and the transfer of models trained on hand silhouettes to sex prediction of prehistoric handprints. Training and evaluation of deep learning models were performed using the RSNA Pediatric Bone Age dataset, a collection of hand X-ray images from pediatric patients. Results showed very high effectiveness of zero-shot SAM in segmenting X-ray images, a clear benefit of segmenting before classifying X-ray images, hand sex classification accuracy above 95% on test data, and predictions from ancient handprints highly consistent with previous hypotheses based on sexually dimorphic features. Attention maps highlighted the carpometacarpal joints in the female class and the radiocarpal joint in the male class as sex-discriminant traits. These findings are anatomically very close to previous evidence reported with different databases, classification models, and visualization techniques.
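A minimal sketch of the zero-shot segmentation step using the public segment-anything package (the checkpoint path, file names, and the crude geometric filter are placeholders; the paper's knowledge-based mask-selection criteria are more elaborate):

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder checkpoint
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("hand_xray.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # dicts with 'segmentation', 'area', 'bbox'

# Toy geometric criterion: keep the largest mask that does not touch the image border.
h, w = image.shape[:2]
def touches_border(m) -> bool:
    x, y, bw, bh = m["bbox"]                 # XYWH format
    return x == 0 or y == 0 or x + bw >= w or y + bh >= h

hand = max((m for m in masks if not touches_border(m)), key=lambda m: m["area"])
hand_mask = hand["segmentation"]             # boolean array for the hand region
```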

X-ray Diffraction Reveals Alterations in Mouse Somatosensory Cortex Following Sensory Deprivation.

Murokh S, Willerson E, Lazarev A, Lazarev P, Mourokh L, Brumberg JC

PubMed · Sep 10, 2025
Sensory experience impacts brain development. In the mouse somatosensory cortex, sensory deprivation via whisker trimming induces reductions in the perineuronal net, the size of neuronal cell bodies, the size and orientation of dendritic arbors, the density of dendritic spines, and the level of myelination, among other effects. Using a custom-developed laboratory diffractometer, we measured X-ray diffraction patterns of mouse brain tissue to establish a novel method for examining nanoscale brain structures. Two groups of mice were examined: a control group and one that underwent 30 days of whisker trimming from birth, an established method of sensory deprivation that affects the mouse barrel cortex (the whisker sensory processing region of the primary somatosensory cortex). Mice were perfused, and primary somatosensory cortices were isolated for immunocytochemistry and X-ray diffraction imaging. X-ray images were characterized using a specially developed machine-learning approach, and the clusters corresponding to the two groups are well separated in principal-component space. We obtained perfect values for sensitivity and specificity, as well as for the receiver operating characteristic classifier. These machine-learning approaches allow X-ray diffraction, for the first time, to identify cortex that has undergone sensory deprivation without the use of stains. We hypothesize that our results are related to the alteration of different nanoscale structural components in the brains of sensory-deprived mice. The effects of these nanoscale structural formations can be reflective of changes in the micro- and macro-scale structures and assemblies within the neocortex.
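The pattern-classification step can be sketched with a standard PCA-plus-classifier pipeline (the feature vectors, component count, and classifier choice are assumptions; the paper's approach is custom-developed):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# X: one flattened diffraction pattern per row; y: 0 = control, 1 = deprived.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2048))      # stand-in for real diffraction features
y = np.repeat([0, 1], 20)

clf = make_pipeline(PCA(n_components=5), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc"))  # per-fold AUC
```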

Accelerated Patient-specific Non-Cartesian MRI Reconstruction using Implicit Neural Representations.

Xu D, Liu H, Miao X, O'Connor D, Scholey JE, Yang W, Feng M, Ohliger M, Lin H, Ruan D, Yang Y, Sheng K

PubMed · Sep 5, 2025
Accelerating MR acquisition is essential for image-guided therapeutic applications. Compressed sensing (CS) was developed to minimize image artifacts in accelerated scans, but the required iterative reconstruction is computationally complex and difficult to generalize. Deep learning (DL) methods based on convolutional neural networks (CNNs) and Transformers emerged as a faster alternative but face challenges in modeling continuous k-space, a problem amplified with the non-Cartesian sampling commonly used in accelerated acquisition. In comparison, implicit neural representations (INRs) can model continuous signals in the frequency domain and are thus compatible with arbitrary k-space sampling patterns. The current study develops a novel generative-adversarially trained implicit neural representation (k-GINR) for de novo undersampled non-Cartesian k-space reconstruction. k-GINR consists of two stages: 1) supervised training on an existing patient cohort; 2) self-supervised patient-specific optimization. The StarVIBE T1-weighted liver dataset, consisting of 118 prospectively acquired scans and corresponding coil data, was employed for testing. k-GINR is compared with two INR-based methods, NeRP and k-NeRP, an unrolled DL method, Deep Cascade CNN, and CS. k-GINR consistently outperformed the baselines, with a larger performance advantage observed at very high accelerations (PSNR: 6.8%-15.2% higher at 3x acceleration, 15.1%-48.8% higher at 10x, and 29.3%-60.5% higher at 20x). The reconstruction times for k-GINR, NeRP, k-NeRP, CS, and Deep Cascade CNN were approximately 3 minutes, 4-10 minutes, 3 minutes, 4 minutes, and 3 seconds, respectively. k-GINR, an innovative two-stage INR network incorporating adversarial training, was designed for direct non-Cartesian k-space reconstruction for new incoming patients. It demonstrated superior image quality compared to CS and Deep Cascade CNN across a wide range of acceleration ratios.
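The core INR idea, an MLP that maps a continuous k-space coordinate to a complex sample value so arbitrary non-Cartesian trajectories can be queried, can be sketched as follows (the width, depth, positional encoding, and toy data are assumptions; k-GINR adds adversarial pretraining on a cohort before the patient-specific fit):

```python
import torch
import torch.nn as nn

class KSpaceINR(nn.Module):
    """MLP mapping a continuous k-space coordinate (kx, ky) to (real, imag)."""
    def __init__(self, hidden: int = 256, layers: int = 5, freqs: int = 10):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(freqs)          # Fourier-feature scales
        dims = [4 * freqs] + [hidden] * layers + [2]
        self.net = nn.Sequential(*[
            layer for i in range(len(dims) - 1)
            for layer in (nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
        ][:-1])                                          # drop the trailing ReLU

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) in [-0.5, 0.5); positional encoding, then MLP.
        ang = coords[:, None, :] * self.freqs[None, :, None] * torch.pi
        feats = torch.cat([ang.sin(), ang.cos()], dim=1).flatten(1)
        return self.net(feats)

# Patient-specific stage: regress the network onto the acquired samples.
model = KSpaceINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(1024, 2) - 0.5        # stand-in non-Cartesian trajectory
values = torch.randn(1024, 2)             # stand-in measured (re, im) samples
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), values)
    loss.backward()
    opt.step()
```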

Real-Time Super-Resolution Ultrasound Imaging for Monitoring Tumor Response During Intensive Care Management of Oncologic Emergencies.

Wu J, Xu W, Li L, Xie W, Tang B

PubMed · Sep 4, 2025
Background: Oncologic emergencies in critically ill cancer patients frequently require rapid, real-time assessment of tumor responses to therapeutic interventions. However, conventional imaging modalities such as computed tomography and magnetic resonance imaging are often impractical in intensive care units (ICUs) due to logistical constraints and patient instability. Super-resolution ultrasound (SR-US) imaging has emerged as a promising noninvasive alternative, facilitating bedside evaluation of tumor microvascular dynamics with exceptional spatial resolution. This study assessed the clinical utility of real-time SR-US imaging in monitoring tumor perfusion changes during emergency management in oncological ICU settings.

Methods: In this prospective observational study, critically ill patients with oncologic emergencies underwent bedside SR-US imaging before and after the initiation of emergency therapy (e.g., corticosteroids, decompression, or chemotherapy). SR-US was employed to quantify microvascular parameters, including perfusion density and flow heterogeneity. Data processing incorporated artificial intelligence for real-time vessel segmentation and quantitative analysis.

Results: SR-US imaging successfully detected perfusion changes within hours of therapy initiation. A significant correlation was observed between reduced tumor perfusion and clinical improvement, including symptom relief and shorter ICU stays. This technology enables visualization of microvessels as small as 30 µm, surpassing conventional ultrasound limits. No adverse events were reported with the use of contrast microbubbles. In addition, SR-US imaging reduces the need for transportation to radiology departments, thereby optimizing ICU workflow.

Conclusions: Real-time SR-US imaging offers a novel, bedside-compatible method for evaluating tumor vascular response during the acute phase of oncological emergencies. Its integration into ICU care pathways could enhance timely decision-making, reduce reliance on static imaging, and support personalized cancer management. Further multicenter validation is required.
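One way to picture the perfusion-density metric: accumulate microbubble localizations onto a super-resolved grid and take the fraction of pixels visited (the grid size, hit threshold, and toy data below are assumptions; clinical pipelines add bubble tracking and motion correction):

```python
import numpy as np

def perfusion_density(localization_counts: np.ndarray, min_hits: int = 1) -> float:
    """Fraction of super-resolved pixels visited by >= min_hits microbubbles."""
    vascular = localization_counts >= min_hits
    return float(vascular.mean())

# Toy example: accumulate bubble localizations onto a 512x512 super-resolved grid.
rng = np.random.default_rng(0)
counts = np.zeros((512, 512), dtype=int)
xs, ys = rng.integers(0, 512, 5000), rng.integers(0, 512, 5000)
np.add.at(counts, (ys, xs), 1)
print(f"perfusion density: {perfusion_density(counts):.3f}")
```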

Analog optical computer for AI inference and combinatorial optimization.

Kalinin KP, Gladrow J, Chu J, Clegg JH, Cletheroe D, Kelly DJ, Rahmani B, Brennan G, Canakci B, Falck F, Hansen M, Kleewein J, Kremer H, O'Shea G, Pickup L, Rajmohan S, Rowstron A, Ruhle V, Braine L, Khedekar S, Berloff NG, Gkantsidis C, Parmigiani F, Ballani H

PubMed · Sep 3, 2025
Artificial intelligence (AI) and combinatorial optimization drive applications across science and industry, but their increasing energy demands challenge the sustainability of digital computing. Most unconventional computing systems [1-7] target either AI or optimization workloads and rely on frequent, energy-intensive digital conversions, limiting efficiency. These systems also face application-hardware mismatches, whether handling memory-bottlenecked neural models, mapping real-world optimization problems, or contending with inherent analog noise. Here we introduce an analog optical computer (AOC) that combines analog electronics and three-dimensional optics to accelerate AI inference and combinatorial optimization in a single platform. This dual-domain capability is enabled by a rapid fixed-point search, which avoids digital conversions and enhances noise robustness. With this fixed-point abstraction, the AOC implements emerging compute-bound neural models with recursive reasoning potential and realizes an advanced gradient-descent approach for expressive optimization. We demonstrate the benefits of co-designing the hardware and abstraction, echoing the co-evolution of digital accelerators and deep learning models, through four case studies: image classification, nonlinear regression, medical image reconstruction, and financial transaction settlement. Built with scalable, consumer-grade technologies, the AOC paves a promising path for faster and more sustainable computing. Its native support for iterative, compute-intensive models offers a scalable analog platform for fostering future innovation in AI and optimization.
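The fixed-point abstraction can be illustrated digitally: iterate x <- f(Wx + b) until the state stops changing, the kind of recurrence the analog loop settles into physically (the tanh nonlinearity, sizes, and contraction scaling below are arbitrary assumptions, not the AOC's actual dynamics):

```python
import numpy as np

def fixed_point(W: np.ndarray, b: np.ndarray, tol: float = 1e-8, max_iter: int = 10_000):
    """Iterate x <- tanh(W @ x + b) until convergence, emulating an analog loop."""
    x = np.zeros_like(b)
    for i in range(max_iter):
        x_next = np.tanh(W @ x + b)
        if np.linalg.norm(x_next - x) < tol:
            return x_next, i
        x = x_next
    return x, max_iter

rng = np.random.default_rng(0)
n = 8
W = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # scaled so the map contracts
b = rng.standard_normal(n)
x_star, iters = fixed_point(W, b)
print(f"converged in {iters} iterations")
```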
