A Deep Learning Pipeline for Mapping in situ Network-level Neurovascular Coupling in Multi-photon Fluorescence Microscopy

Rozak, M. W., Mester, J. R., Attarpour, A., Dorr, A., Patel, S., Koletar, M., Hill, M. E., McLaurin, J., Goubran, M., Stefanovic, B.

bioRxiv preprint · Aug 25, 2025
Functional hyperaemia is a well-established hallmark of healthy brain function, whereby local brain blood flow adjusts in response to a change in the activity of the surrounding neurons. Although functional hyperaemia has been extensively studied at the level of both tissue and individual vessels, vascular network-level coordination remains largely unknown. To bridge this gap, we developed a deep learning-based computational pipeline that uses two-photon fluorescence microscopy images of cerebral microcirculation to enable automated reconstruction and quantification of the geometric changes across the microvascular network, comprising hundreds of interconnected blood vessels, pre- and post-activation of the neighbouring neurons. The pipeline's utility was demonstrated in the Thy1-ChR2 optogenetic mouse model, where we observed network-wide vessel radius changes to depend on the photostimulation intensity, with both dilations and constrictions occurring across the cortical depth, at an average of 16.1 ± 14.3 μm (mean ± stddev) away from the most proximal neuron for dilations, and 21.9 ± 14.6 μm away for constrictions. We observed significant heterogeneity of the vascular radius changes within vessels, with radius adjustment varying by an average of 24 ± 28% of the resting diameter, likely reflecting the heterogeneous distribution of contractile cells on the vessel walls. A graph theory-based network analysis revealed that the assortativity of adjacent blood vessel responses rose by 152 ± 65% at 4.3 mW/mm² of blue photostimulation vs. the control, with a 4% median increase in the efficiency of the capillary networks during this level of blue photostimulation relative to baseline. Interrogating individual vessels is thus not sufficient to predict how blood flow is modulated across the network. Our computational pipeline, to be made openly available, enables tracking of the microvascular network geometry over time, relating caliber adjustments to the state of vessel wall-associated cells, and mapping network-level flow distribution impairments in experimental models of disease.
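A minimal sketch of the two graph-theoretic readouts this abstract reports (response assortativity and network efficiency), assuming the reconstructed microvascular network is available as a networkx graph whose nodes are vessel segments annotated with a fractional radius change. The graph construction and attribute name are illustrative, not the authors' actual pipeline.

```python
import networkx as nx

G = nx.Graph()
# Hypothetical vessel-segment graph: nodes = segments, edges = shared branch points.
G.add_nodes_from([
    (0, {"radius_change": 0.12}),   # +12% dilation
    (1, {"radius_change": 0.08}),
    (2, {"radius_change": -0.05}),  # 5% constriction
    (3, {"radius_change": -0.02}),
])
G.add_edges_from([(0, 1), (1, 2), (2, 3), (0, 3)])

# Assortativity of adjacent vessel responses: do connected segments
# dilate/constrict together?
assort = nx.numeric_assortativity_coefficient(G, "radius_change")

# Global efficiency of the capillary graph (average inverse shortest-path length).
eff = nx.global_efficiency(G)

print(f"response assortativity: {assort:.3f}, global efficiency: {eff:.3f}")
```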

Displacement-Guided Anisotropic 3D-MRI Super-Resolution with Warp Mechanism.

Wang L, Liu S, Yu Z, Du J, Li Y

PubMed paper · Aug 25, 2025
Enhancing the resolution of Magnetic Resonance Imaging (MRI) through super-resolution (SR) reconstruction is crucial for boosting diagnostic precision. However, current SR methods primarily rely on single low-resolution (LR) images or multi-contrast features, limiting detail restoration. Inspired by video frame interpolation, this work exploits the spatiotemporal correlations between adjacent slices to reformulate the SR task for anisotropic 3D-MRI as the generation of new high-resolution (HR) slices between adjacent 2D slices. The generated SR slices are then combined with the adjacent HR slices to create a new HR 3D-MRI volume. We propose an innovative network architecture termed DGWMSR, comprising a backbone network and a feature supplement module (FSM). The backbone's core innovations include the displacement former block (DFB) module, which independently extracts structural and displacement features, and the mask-displacement vector network (MDVNet), which combines with a warp mechanism to refine edge-pixel detail. The DFB integrates an inter-slice attention (ISA) mechanism into the Transformer, effectively minimizing mutual interference between the two types of features and mitigating volume effects during reconstruction. Additionally, the FSM combines self-attention with a feed-forward neural network to emphasize critical details derived from the backbone. Experimental results demonstrate that the DGWMSR network outperforms current MRI SR methods on the Kirby21, ANVIL-adult, and MSSEG datasets. Our code is publicly available at https://github.com/Dohbby/DGWMSR.
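A sketch of the slice-interpolation reformulation this abstract describes: through-plane SR of an anisotropic volume becomes generating one new slice between each pair of adjacent acquired slices, then interleaving. `generate_between` is a hypothetical stand-in for the learned DGWMSR generator; a naive average is used as a placeholder.

```python
import numpy as np

def generate_between(slice_a: np.ndarray, slice_b: np.ndarray) -> np.ndarray:
    # Placeholder for the learned interpolator (backbone + warp mechanism).
    return 0.5 * (slice_a + slice_b)

def upsample_through_plane(volume: np.ndarray) -> np.ndarray:
    """Double the slice count of a (D, H, W) volume by inserting one
    generated slice between every adjacent pair of acquired slices."""
    d, h, w = volume.shape
    out = np.empty((2 * d - 1, h, w), dtype=volume.dtype)
    out[0::2] = volume                                # keep acquired HR in-plane slices
    for i in range(d - 1):
        out[2 * i + 1] = generate_between(volume[i], volume[i + 1])
    return out

vol = np.random.rand(16, 64, 64).astype(np.float32)  # anisotropic stack: few slices
print(upsample_through_plane(vol).shape)              # (31, 64, 64)
```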

Complex-Valued Spatio-Temporal Graph Convolution Neural Network optimized With Giraffe Kicking Optimization Algorithm for Thyroid Nodule Classification in Ultrasound Images.

Kumar K K, P R, M N, G D

PubMed paper · Aug 25, 2025
Thyroid hormones are central to regulating metabolism, and the two common thyroid disorders, hypothyroidism and hyperthyroidism, directly affect the metabolic rate of the human body. Predicting and diagnosing thyroid disease remain significant challenges in medical research due to the complexity of thyroid hormone regulation and its impact on metabolism. This paper therefore proposes a Complex-valued Spatio-Temporal Graph Convolution Neural Network optimized with the Giraffe Kicking Optimization Algorithm for Thyroid Nodule Classification in Ultrasound Images (CSGCNN-GKOA-TNC-UI). Ultrasound images are collected from the DDTI (Digital Database of Thyroid ultrasound Imageries) dataset. The gathered data are pre-processed with a Bilinear Double-Order Filter (BDOF) to remove noise and improve input image quality. The pre-processed images are passed to Deep Adaptive Fuzzy Clustering (DAFC) for Region of Interest (RoI) segmentation. The segmented image is fed to the Multi-Objective Matched Synchro Squeezing Chirplet Transform (MMSSCT) to extract features, such as geometric and morphological features. The extracted features are fed into the CSGCNN, which classifies thyroid nodules as benign or malignant. Finally, the Giraffe Kicking Optimization Algorithm (GKOA) is used to tune the CSGCNN classifier. The CSGCNN-GKOA-TNC-UI algorithm is implemented in MATLAB. The approach attains 34.9%, 21.5%, and 26.8% higher F-score, and 18.6%, 29.3%, and 19.2% higher accuracy, respectively, compared with existing models: thyroid diagnosis with classification utilizing a DNN based on a hybrid meta-heuristic with LSTM (LSTM-TNC-UI), an innovative full-scale connected network for segmenting thyroid nodules in ultrasound images (FCG Net-TNC-UI), and an adversarial-architecture-based multi-scale fusion method for segmenting thyroid nodules (AMSeg-TNC-UI). The proposed model enhances thyroid nodule classification accuracy, aiding radiologists and endocrinologists; by reducing misclassification, it minimizes unnecessary biopsies and enables early malignancy detection.
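A minimal complex-valued graph convolution layer, sketching the kind of operation a complex-valued spatio-temporal GCN builds on (a generic X' = ÂXW with complex weights and a split activation). This is an illustration of the layer family, not the authors' CSGCNN, and all shapes here are toy values.

```python
import torch

class ComplexGraphConv(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = torch.nn.Parameter(
            torch.randn(in_dim, out_dim, dtype=torch.cfloat) * 0.1)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) complex node features;
        # adj_norm: (num_nodes, num_nodes) normalized adjacency.
        h = adj_norm.to(torch.cfloat) @ x @ self.weight
        # Split activation: nonlinearity applied to real and imaginary parts.
        return torch.complex(torch.tanh(h.real), torch.tanh(h.imag))

n = 5
adj = torch.eye(n) + torch.rand(n, n).round()   # toy graph with self-loops
deg = adj.sum(1)
adj_norm = adj / torch.outer(deg.sqrt(), deg.sqrt())
x = torch.randn(n, 8, dtype=torch.cfloat)
print(ComplexGraphConv(8, 4)(x, adj_norm).shape)  # torch.Size([5, 4])
```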

DYNAFormer: Enhancing transformer segmentation with dynamic anchor mask for medical imaging.

Nguyen TC, Phung KA, Dao TTP, Nguyen-Mau TH, Nguyen-Quang T, Pham CN, Le TN, Shen J, Nguyen TV, Tran MT

PubMed paper · Aug 25, 2025
Polyp shape is critical for diagnosing colorectal polyps and assessing cancer risk, yet there is limited data on segmenting pedunculated and sessile polyps. This paper introduces PolypDB_INS, a dataset of 4403 images containing 4918 annotated polyps, specifically for sessile and pedunculated polyps. In addition, we propose DYNAFormer, a novel transformer-based model utilizing an anchor mask-guided mechanism that incorporates cross-attention, dynamic query updates, and query denoising for improved object segmentation. Treating each positional query as an anchor mask that is dynamically updated through the decoder layers enhances perceptual information about the object's position, allowing more precise segmentation of complex structures such as polyps. Extensive experiments on the PolypDB_INS dataset using standard evaluation metrics for both instance and semantic segmentation show that DYNAFormer significantly outperforms state-of-the-art methods. Ablation studies confirm the effectiveness of the proposed techniques, highlighting the model's robustness for diagnosing colorectal cancer. The source code and dataset are available at https://github.com/ntcongvn/DYNAFormer.
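A minimal sketch of anchor-mask-guided cross-attention in the spirit described above: each positional query carries a mask over the image features that biases cross-attention, and the mask is re-predicted (dynamically updated) after every decoder layer. This follows the general masked-attention recipe; DYNAFormer's actual layer details will differ, and `mask_head` is a hypothetical per-layer predictor.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(queries, feats, anchor_masks):
    # queries: (Q, C); feats: (N, C) flattened image features;
    # anchor_masks: (Q, N) mask logits restricting where each query attends.
    attn = queries @ feats.T / feats.shape[-1] ** 0.5
    attn = attn.masked_fill(anchor_masks.sigmoid() < 0.5, -1e4)  # attend inside mask
    return F.softmax(attn, dim=-1) @ feats

Q, N, C, L = 10, 196, 64, 3
queries = torch.randn(Q, C)
feats = torch.randn(N, C)
mask_head = torch.nn.Linear(C, C)              # hypothetical mask predictor
anchor_masks = torch.randn(Q, N)               # initial anchor masks

for _ in range(L):                             # decoder layers
    queries = queries + masked_cross_attention(queries, feats, anchor_masks)
    anchor_masks = mask_head(queries) @ feats.T  # dynamic update from refined queries
print(anchor_masks.shape)                      # torch.Size([10, 196])
```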

Radiomics-Driven Diffusion Model and Monte Carlo Compression Sampling for Reliable Medical Image Synthesis.

Zhao J, Li S

PubMed paper · Aug 25, 2025
Reliable medical image synthesis is crucial for clinical applications and downstream tasks, where high-quality anatomical structure and predictive confidence are essential. Existing studies have made significant progress by embedding prior conditional knowledge, such as conditional images or textual information, to synthesize natural images. However, medical image synthesis remains challenging due to: 1) data scarcity: high-quality medical text prompts are extremely rare and require specialized expertise; 2) insufficient uncertainty estimation: uncertainty estimates are critical for evaluating the confidence of reliable medical image synthesis. This paper presents a novel approach to medical image synthesis, driven by radiomics prompts and combined with Monte Carlo Compression Sampling (MCCS) to ensure reliability. For the first time, our method leverages clinically focused radiomics prompts to condition the generation process, guiding the model to produce reliable medical images. Furthermore, the MCCS algorithm employs Monte Carlo methods to randomly select and compress the sampling steps within denoising diffusion implicit models (DDIM), enabling efficient uncertainty quantification. Additionally, we introduce a MambaTrans architecture to model long-range dependencies in medical images and embed prior conditions (e.g., radiomics prompts). Extensive experiments on benchmark medical imaging datasets demonstrate that our approach significantly improves image quality and reliability, outperforming SoTA methods in both qualitative and quantitative evaluations.
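A sketch of the Monte Carlo compression idea as described: each run draws a random compressed subset of the DDIM step schedule, and the spread of the resulting samples gives an uncertainty map. The `ddim_step` function below is a trivial placeholder, not a trained diffusion model, and the step counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddim_step(x, t):
    # Placeholder for one deterministic DDIM update with a trained denoiser.
    return x * 0.98 + 0.02 * np.tanh(x) * (t / 1000)

def mccs_sample(shape, total_steps=1000, kept_steps=25):
    # Randomly compress the schedule: keep a sorted random subset of steps.
    steps = np.sort(rng.choice(total_steps, size=kept_steps, replace=False))[::-1]
    x = rng.standard_normal(shape)
    for t in steps:
        x = ddim_step(x, t)
    return x

samples = np.stack([mccs_sample((64, 64)) for _ in range(8)])  # 8 MC runs
uncertainty = samples.std(axis=0)   # pixelwise spread across compressed schedules
print(uncertainty.mean())
```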

FCR: Investigating Generative AI models for Forensic Craniofacial Reconstruction

Ravi Shankar Prasad, Dinesh Singh

arXiv preprint · Aug 25, 2025
Craniofacial reconstruction in forensics is one of the processes used to identify victims of crime and natural disasters. Identifying an individual from their remains plays a crucial role when all other identification methods fail. Traditional methods for this task, such as clay-based craniofacial reconstruction, require expert domain knowledge and are time-consuming. At the same time, other probabilistic generative models, such as the statistical shape model or the Basel face model, fail to capture the cross-domain attributes of skull and face. Given these limitations, we propose a generic framework for craniofacial reconstruction from 2D X-ray images. We use various generative models (e.g., CycleGANs, cGANs) and fine-tune the generator and discriminator to produce more realistic images in two distinct domains: the skull and the face of an individual. This is the first time 2D X-rays have been used as a skull representation by generative models for craniofacial reconstruction. We evaluate the quality of the generated faces using FID, IS, and SSIM scores. Finally, we propose a retrieval framework where the query is the generated face image and the gallery is a database of real faces. Experimental results show that this can be an effective tool for forensic science.
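A sketch of the retrieval stage described above: embed the generated face and rank a gallery of real faces by cosine similarity. The `embed` function is a hypothetical stand-in for any pretrained face-recognition encoder, and the gallery here is random toy data.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would use a pretrained face encoder here.
    v = face_image.mean(axis=-1).ravel()[:128]
    return v / (np.linalg.norm(v) + 1e-8)

gallery = [np.random.rand(64, 64, 3) for _ in range(100)]   # database of real faces
gallery_emb = np.stack([embed(f) for f in gallery])

query = np.random.rand(64, 64, 3)                           # GAN-generated face
scores = gallery_emb @ embed(query)                         # cosine similarities
top5 = np.argsort(scores)[::-1][:5]                         # best candidate identities
print(top5)
```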

UniSino: Physics-Driven Foundational Model for Universal CT Sinogram Standardization

Xingyu Ai, Shaoyu Wang, Zhiyuan Jia, Ao Xu, Hongming Shan, Jianhua Ma, Qiegen Liu

arXiv preprint · Aug 25, 2025
During raw-data acquisition in CT imaging, diverse factors can degrade the collected sinograms; undersampling and noise in particular lead to severe artifacts and noise in the reconstructed images, compromising diagnostic accuracy. Conventional correction methods rely on manually designed algorithms or fixed empirical parameters, but these approaches often lack generalizability across heterogeneous artifact types. To address these limitations, we propose UniSino, a foundation model for universal CT sinogram standardization. Unlike existing foundation models that operate in the image domain, UniSino standardizes data directly in the projection domain, which enables stronger generalization across diverse undersampling scenarios. Its training framework incorporates the physical characteristics of sinograms, enhancing generalization and enabling robust performance across multiple subtasks spanning four benchmark datasets. Experimental results demonstrate that UniSino achieves superior reconstruction quality in both single and mixed undersampling cases, demonstrating exceptional robustness and generalization in sinogram enhancement for CT imaging. The code is available at: https://github.com/yqx7150/UniSino.
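A sketch of the projection-domain setting UniSino targets: form a sinogram, simulate sparse-view undersampling, and pair the degraded sinogram with the full one as a standardization training example. This uses scikit-image's Radon transform on a standard phantom; the model itself is not reproduced here, and the view counts are illustrative.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, resize

phantom = resize(shepp_logan_phantom(), (128, 128))
angles_full = np.linspace(0.0, 180.0, 180, endpoint=False)
sino_full = radon(phantom, theta=angles_full)   # clean sinogram (training target)

sparse_angles = angles_full[::4]                # sparse-view acquisition (input)
sino_sparse = radon(phantom, theta=sparse_angles)

print(sino_full.shape, sino_sparse.shape)       # (detector_bins, 180) vs (detector_bins, 45)
```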

Development and evaluation of a convolutional neural network model for sex prediction using cephalometric radiographs and cranial photographs.

Handayani VW, Margareth Amiatun Ruth MS, Rulaningtyas R, Caesarardhi MR, Yudhantorro BA, Yudianto A

PubMed paper · Aug 25, 2025
Accurately determining sex from features such as facial bone profiles and teeth is crucial for identifying unknown victims. Lateral cephalometric radiographs effectively depict the lateral cranial structure, aiding the development of computational identification models. This study develops and evaluates a sex prediction model using cephalometric radiographs with several convolutional neural network (CNN) architectures. The primary goal is to evaluate the model's performance on standardized radiographic data and on real-world cranial photographs, to simulate forensic applications. Six CNN architectures (VGG16, VGG19, MobileNetV2, ResNet50V2, InceptionV3, and InceptionResNetV2) were used to train and validate 340 cephalometric images of Indonesian individuals aged 18 to 40 years. The data were divided into training (70%), validation (15%), and testing (15%) subsets. Data augmentation was implemented to mitigate class imbalance. Additionally, a set of 40 cranial images from anatomical specimens was used to evaluate the model's generalizability. Performance metrics included accuracy, precision, recall, and F1-score. The CNN models were trained and evaluated on 340 cephalometric images (255 females and 85 males). VGG19 and ResNet50V2 achieved high F1-scores of 95% (females) and 83% (males), respectively, on the cephalometric data, highlighting their strong class-specific performance. Although overall accuracy exceeded 90%, the F1-score better reflected model performance on this imbalanced dataset. In contrast, performance dropped notably on cranial photographs, particularly when classifying female samples: while InceptionResNetV2 achieved the highest F1-score for cranial photographs (62%), misclassification of females remained substantial. Confusion matrices and per-class metrics further revealed persistent issues related to data imbalance and generalization across imaging modalities. Basic CNN models perform well on standardized cephalometric images but less effectively on photographic cranial images, indicating a domain shift between image types that limits generalizability. Improving real-world forensic performance will require further optimization and more diverse training data.
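A sketch of the transfer-learning setup this kind of study typically uses: a frozen ImageNet backbone (VGG19, one of the six architectures tested) with a small binary head, trained with class weights to counter the 255-female/85-male imbalance. The input size, head design, and weighting are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

base = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # transfer learning: freeze features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(male)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

# With 255 female vs 85 male images, weight the minority class ~3x:
class_weight = {0: 1.0, 1: 255 / 85}
# model.fit(train_ds, validation_data=val_ds, epochs=20, class_weight=class_weight)
```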

Improving Interpretability in Alzheimer's Prediction via Joint Learning of ADAS-Cog Scores

Nur Amirah Abd Hamid, Mohd Shahrizal Rusli, Muhammad Thaqif Iman Mohd Taufek, Mohd Ibrahim Shapiai, Daphne Teck Ching Lai

arXiv preprint · Aug 25, 2025
Accurate prediction of clinical scores is critical for early detection and prognosis of Alzheimer's disease (AD). While existing approaches primarily focus on forecasting the ADAS-Cog global score, they often overlook the predictive value of its 13 sub-scores, which capture domain-specific cognitive decline. In this study, we propose a multi-task learning (MTL) framework that jointly predicts the global ADAS-Cog score and its 13 sub-scores at Month 24 using baseline MRI and longitudinal clinical scores from baseline and Month 6. The main goal is to examine how each sub-score, particularly those associated with MRI features, contributes to the prediction of the global score, an aspect largely neglected in prior MTL studies. We employ Vision Transformer (ViT) and Swin Transformer architectures to extract imaging features, which are fused with longitudinal clinical inputs to model cognitive progression. Our results show that incorporating sub-score learning improves global score prediction. Sub-score-level analysis reveals that a small subset, especially Q1 (Word Recall), Q4 (Delayed Recall), and Q8 (Word Recognition), consistently dominates the predicted global score. However, some of these influential sub-scores exhibit high prediction errors, pointing to model instability. Further analysis suggests that this is caused by clinical feature dominance, where the model prioritizes easily predictable clinical scores over more complex MRI-derived features. These findings emphasize the need for improved multimodal fusion and adaptive loss weighting to achieve more balanced learning. Our study demonstrates the value of sub-score-informed modeling and provides insights into building more interpretable and clinically robust AD prediction frameworks. (GitHub repo provided)
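A sketch of the joint-learning head described above: imaging features (e.g., from a ViT) are fused with longitudinal clinical inputs, and the network predicts the 13 ADAS-Cog sub-scores and the global score together under a weighted joint loss. Dimensions and the fixed loss weight are illustrative; the paper's fusion and weighting scheme may differ.

```python
import torch

class ADASCogMTLHead(torch.nn.Module):
    def __init__(self, img_dim=768, clin_dim=28, hidden=256):
        super().__init__()
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(img_dim + clin_dim, hidden), torch.nn.ReLU())
        self.sub_head = torch.nn.Linear(hidden, 13)   # 13 sub-score regressors
        self.glob_head = torch.nn.Linear(hidden, 1)   # global ADAS-Cog score

    def forward(self, img_feat, clin_feat):
        h = self.trunk(torch.cat([img_feat, clin_feat], dim=-1))
        return self.sub_head(h), self.glob_head(h).squeeze(-1)

def joint_loss(sub_pred, glob_pred, sub_true, glob_true, w_sub=1.0):
    mse = torch.nn.functional.mse_loss
    return mse(glob_pred, glob_true) + w_sub * mse(sub_pred, sub_true)

head = ADASCogMTLHead()
img = torch.randn(4, 768)                 # ViT/Swin image embedding (batch of 4)
clin = torch.randn(4, 28)                 # baseline + Month-6 clinical scores
sub_p, glob_p = head(img, clin)
print(joint_loss(sub_p, glob_p, torch.randn(4, 13), torch.randn(4)))
```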

Reducing radiomics errors in nasopharyngeal cancer via deep learning-based synthetic CT generation from CBCT.

Xiao Y, Lin W, Xie F, Liu L, Zheng G, Xiao C

PubMed paper · Aug 25, 2025
This study investigates the impact of cone beam computed tomography (CBCT) image quality on radiomic analysis and evaluates the potential of deep learning-based enhancement to improve radiomic feature accuracy in nasopharyngeal cancer (NPC). The CBAMRegGAN model was trained on 114 paired CT and CBCT datasets from 114 nasopharyngeal cancer patients to enhance CBCT images, with the CT images as ground truth. The dataset was split into 82 patients for training, 12 for validation, and 20 for testing. Radiomic features in six categories, namely first-order, gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size-zone matrix (GLSZM), neighbouring gray tone difference matrix (NGTDM), and gray-level dependence matrix (GLDM), were extracted from the gross tumor volume (GTV) of the original CBCT, the enhanced CBCT, and the CT. Comparing feature errors between the original and enhanced CBCT showed that deep learning-based enhancement improves radiomic feature accuracy. The CBAMRegGAN model achieved improved image quality, with a peak signal-to-noise ratio (PSNR) of 29.52 ± 2.28 dB, a normalized mean absolute error (NMAE) of 0.0129 ± 0.004, and a structural similarity index (SSIM) of 0.910 ± 0.025 for enhanced CBCT images. This led to reduced errors in most radiomic features, with average reductions across the 20 test patients of 19.0%, 24.0%, 3.0%, 19.0%, 15.0%, and 5.0% for first-order, GLCM, GLRLM, GLSZM, NGTDM, and GLDM features, respectively. This study demonstrates that CBCT image quality significantly influences radiomic analysis, and that deep learning-based enhancement techniques can effectively improve both image quality and radiomic feature accuracy in NPC.
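A sketch of the error comparison implied above: the relative error of a radiomic feature computed on (enhanced) CBCT against its CT ground-truth value, and the reduction achieved by enhancement. The feature values below are toy numbers for illustration, not data from the study.

```python
def relative_error(feat: float, feat_ct: float) -> float:
    # Relative error of a CBCT-derived feature against its CT ground truth.
    return abs(feat - feat_ct) / (abs(feat_ct) + 1e-8)

# Hypothetical GLCM feature values for one patient:
ct_val, cbct_val, enhanced_val = 0.82, 0.61, 0.78

err_before = relative_error(cbct_val, ct_val)       # ~25.6%
err_after = relative_error(enhanced_val, ct_val)    # ~4.9%
print(f"error reduction: {100 * (err_before - err_after):.1f} percentage points")
```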