Zhao J, Li S

PubMed · Aug 25, 2025
Reliable medical image synthesis is crucial for clinical applications and downstream tasks, where high-quality anatomical structure and predictive confidence are essential. Existing studies have made significant progress by embedding prior conditional knowledge, such as conditional images or textual information, to synthesize natural images. However, medical image synthesis remains a challenging task due to: 1) Data scarcity: high-quality medical text prompts are extremely rare and require specialized expertise to produce. 2) Insufficient uncertainty estimation: uncertainty estimates are critical for evaluating the confidence that can be placed in synthesized medical images. This paper presents a novel approach for medical image synthesis, driven by radiomics prompts and combined with Monte Carlo Compression Sampling (MCCS) to ensure reliability. For the first time, our method leverages clinically focused radiomics prompts to condition the generation process, guiding the model to produce reliable medical images. Furthermore, the innovative MCCS algorithm employs Monte Carlo methods to randomly select and compress sampling steps within denoising diffusion implicit models (DDIM), enabling efficient uncertainty quantification. Additionally, we introduce a MambaTrans architecture to model long-range dependencies in medical images and embed prior conditions (e.g., radiomics prompts). Extensive experiments on benchmark medical imaging datasets demonstrate that our approach significantly improves image quality and reliability, outperforming SoTA methods in both qualitative and quantitative evaluations.
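
As a rough illustration of the sampling idea described above (not the authors' implementation), the sketch below runs DDIM several times over randomly compressed timestep schedules and uses the per-pixel spread across runs as an uncertainty map; the denoiser, schedule, and shapes are placeholder assumptions.

```python
import torch

def ddim_step(x, t, t_prev, eps_model, alphas_cumprod):
    """One deterministic DDIM update from timestep t to t_prev (eta = 0)."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    eps = eps_model(x, t)                                   # predicted noise
    x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()          # predicted clean image
    return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps

def mccs_like_uncertainty(eps_model, shape, alphas_cumprod, n_runs=8, n_steps=10, seed=0):
    """Monte Carlo over randomly compressed DDIM schedules: each run keeps a
    different random subset of timesteps, and the per-pixel std across runs
    serves as an uncertainty estimate (illustrative sketch only)."""
    g = torch.Generator().manual_seed(seed)
    T = len(alphas_cumprod)
    samples = []
    for _ in range(n_runs):
        # randomly pick a compressed, descending schedule that always ends at t = 0
        steps = torch.randperm(T - 1, generator=g)[: n_steps - 1] + 1
        steps = torch.sort(steps, descending=True).values.tolist() + [0]
        x = torch.randn(shape, generator=g)                 # start from pure noise
        for t, t_prev in zip(steps[:-1], steps[1:]):
            x = ddim_step(x, t, t_prev, eps_model, alphas_cumprod)
        samples.append(x)
    stack = torch.stack(samples)
    return stack.mean(dim=0), stack.std(dim=0)              # synthesis, uncertainty map
```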

Lyu J, Li L, Al-Hazzaa SAF, Wang C, Hossain MS

PubMed · Aug 25, 2025
During labor, transperineal ultrasound imaging can acquire real-time midsagittal images, through which the pubic symphysis and fetal head can be accurately identified, and the angle of progression (AoP) between them can be calculated, thereby quantitatively evaluating the descent and position of the fetal head in the birth canal in real time. However, current segmentation methods based on convolutional neural networks (CNNs) and Transformers generally depend heavily on large-scale manually annotated data, which limits their adoption in practical applications. In light of this limitation, this paper develops a new Transformer-based Semi-supervised Segmentation Network (TransSeg). This method employs a Vision Transformer as the backbone network and introduces a Channel-wise Cross Attention (CCA) mechanism to effectively reconstruct the features of unlabeled samples into the labeled feature space, promoting architectural innovation in semi-supervised segmentation and eliminating the need for complex training strategies. In addition, we design a Semantic Information Storage (S-InfoStore) module and a Channel Semantic Update (CSU) strategy to dynamically store and update feature representations of unlabeled samples, thereby continuously enhancing their expressiveness in the feature space and significantly improving the model's utilization of unlabeled data. We conduct a systematic evaluation of the proposed method on the FH-PS-AoP dataset. Experimental results demonstrate that TransSeg outperforms existing mainstream methods across all evaluation metrics, verifying its effectiveness and advancement in semi-supervised semantic segmentation tasks.
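
A minimal single-head sketch of what channel-wise cross attention between unlabeled and labeled features could look like (shapes and projections are assumptions for illustration, not the TransSeg code):

```python
import torch
import torch.nn as nn

class ChannelCrossAttention(nn.Module):
    """Unlabeled-sample features (queries) attend over labeled-sample features
    (keys/values) along the channel axis, re-expressing unlabeled features in
    the labeled feature space."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Linear(channels, channels)
        self.k = nn.Linear(channels, channels)
        self.v = nn.Linear(channels, channels)

    def forward(self, unlabeled, labeled):
        # unlabeled, labeled: (B, C, N) feature maps flattened over space
        q = self.q(unlabeled.transpose(1, 2)).transpose(1, 2)        # (B, C, N)
        k = self.k(labeled.transpose(1, 2)).transpose(1, 2)
        v = self.v(labeled.transpose(1, 2)).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)  # (B, C, C)
        return attn @ v                                              # (B, C, N)
```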

Wang L, Liu S, Yu Z, Du J, Li Y

PubMed · Aug 25, 2025
Enhancing the resolution of Magnetic Resonance Imaging (MRI) through super-resolution (SR) reconstruction is crucial for boosting diagnostic precision. However, current SR methods primarily rely on single low-resolution (LR) images or multi-contrast features, limiting detail restoration. Inspired by video frame interpolation, this work utilizes the spatiotemporal correlations between adjacent slices to reformulate the SR task for anisotropic 3D-MRI images as the generation of new high-resolution (HR) slices between adjacent 2D slices. The generated SR slices are subsequently combined with the adjacent HR slices to create a new HR 3D-MRI image. We propose an innovative network architecture termed DGWMSR, comprising a backbone network and a feature supplement module (FSM). The backbone's core innovations include the displacement former block (DFB) module, which independently extracts structural and displacement features, and the mask-displacement vector network (MDVNet), which is combined with a warping mechanism to refine edge-pixel detail. The DFB integrates the inter-slice attention (ISA) mechanism into the Transformer, effectively minimizing the mutual interference between the two types of features and mitigating volume effects during reconstruction. Additionally, the FSM module combines self-attention with a feed-forward neural network, which emphasizes critical details derived from the backbone architecture. Experimental results demonstrate that the DGWMSR network outperforms current MRI SR methods on the Kirby21, ANVIL-adult, and MSSEG datasets. Our code is publicly available at https://github.com/Dohbby/DGWMSR.
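
The slice-interpolation view of through-plane SR can be summarized by the following sketch, where `generate_slice` stands in for a trained interpolation network such as the one proposed here (illustrative only):

```python
import numpy as np

def interleave_slices(lr_volume, generate_slice):
    """Build an HR (through-plane) volume by inserting one synthesized slice
    between each pair of adjacent LR slices; `generate_slice` stands in for a
    trained interpolation network."""
    slices = [lr_volume[0]]
    for prev_slice, next_slice in zip(lr_volume[:-1], lr_volume[1:]):
        slices.append(generate_slice(prev_slice, next_slice))  # new in-between slice
        slices.append(next_slice)
    return np.stack(slices, axis=0)
```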

Li S, Zhang Y, Hong Y, Yuan W, Sun J

PubMed · Aug 25, 2025
Rectal cancer is a major cause of cancer-related mortality, requiring accurate diagnosis via MRI scans. However, detecting rectal cancer in MRI scans is challenging due to image complexity and the need for precise localization. While transformer-based object detection has excelled in natural images, applying these models to medical data is hindered by limited medical imaging resources. To address this, we propose the Spatially Prioritized Detection Transformer (SP DETR), which incorporates a Spatially Prioritized (SP) Decoder to constrain anchor boxes to regions of interest (ROI) based on anatomical maps, focusing the model on areas most likely to contain cancer. Additionally, the SP cross-attention mechanism refines the learning of anchor box offsets. To improve small cancer detection, we introduce the Global Context-Guided Feature Fusion Module (GCGFF), leveraging a transformer encoder for global context and a Globally-Guided Semantic Fusion Block (GGSF) to enhance high-level semantic features. Experimental results show that our model significantly improves detection accuracy, especially for small rectal cancers, demonstrating the effectiveness of integrating anatomical priors with transformer-based models for clinical applications.
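
The spatial-prioritization idea of restricting anchor boxes to an anatomical ROI could be sketched as a simple filter on anchor centers (a coarse illustration with assumed tensor layouts, not the SP DETR implementation):

```python
import torch

def constrain_anchors_to_roi(anchors, roi_mask):
    """Keep only anchor boxes whose centers fall inside the anatomical ROI mask.
    anchors: (N, 4) as (cx, cy, w, h) normalized to [0, 1]; roi_mask: (H, W) of {0, 1}."""
    H, W = roi_mask.shape
    cx = (anchors[:, 0] * (W - 1)).long().clamp(0, W - 1)
    cy = (anchors[:, 1] * (H - 1)).long().clamp(0, H - 1)
    inside = roi_mask[cy, cx].bool()
    return anchors[inside]
```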

Nakai H, Froemming AT, Kawashima A, LeGout JD, Kurata Y, Gloe JN, Borisch EA, Riederer SJ, Takahashi N

PubMed · Aug 25, 2025
To determine whether deep learning (DL)-based image quality (IQ) assessment of T2-weighted images (T2WI) could be biased by the presence of clinically significant prostate cancer (csPCa). In this three-center retrospective study, five abdominal radiologists categorized IQ of 2,105 transverse T2WI series into optimal, mild, moderate, and severe degradation. An IQ classification model was developed using 1,719 series (development set). The agreement between the model and radiologists was assessed using the remaining 386 series with a quadratic weighted kappa. The model was applied to 11,723 examinations that were not included in the development set and without documented prostate cancer at the time of MRI (patient age, 65.5 ± 8.3 years [mean ± standard deviation]). Examinations categorized as mild to severe degradation were used as target groups, whereas those as optimal were used to construct matched control groups. Case-control matching was performed to mitigate the effects of pre-MRI confounding factors, such as age and prostate-specific antigen value. The proportion of patients with csPCa was compared between the target and matched control groups using the chi-squared test. The agreement between the model and radiologists was moderate with a quadratic weighted kappa of 0.53. The mild-moderate IQ-degraded groups had significantly higher csPCa proportions than the matched control groups with optimal IQ: moderate (N = 126) vs. optimal (N = 504), 26.3% vs. 22.7%, respectively, difference = 3.6% [95% confidence interval: 0.4%, 6.8%], p = 0.03; mild (N = 1,399) vs. optimal (N = 1,399), 22.9% vs. 20.2%, respectively, difference = 2.7% [0.7%, 4.7%], p = 0.008. The DL-based IQ tended to be worse in patients with csPCa, raising concerns about its clinical application.
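
For the group comparison itself, a chi-squared test on a 2×2 table of csPCa counts can be set up as below; the counts are made-up placeholders, and because the study used matched case-control groups this unpaired sketch only illustrates the mechanics, not the exact analysis:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: csPCa-positive vs. csPCa-negative counts in an
# IQ-degraded group and an optimal-IQ control group (made-up numbers).
degraded = [40, 110]    # [csPCa, no csPCa] in the degraded-IQ group
control = [90, 410]     # [csPCa, no csPCa] in the optimal-IQ control group
table = np.array([degraded, control])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```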

Kurokawa R, Hagiwara A, Ueda D, Ito R, Saida T, Honda M, Nishioka K, Sakata A, Yanagawa M, Takumi K, Oda S, Ide S, Sofue K, Sugawara S, Watabe T, Hirata K, Kawamura M, Iima M, Naganawa S

PubMed · Aug 25, 2025
Recent advances in molecular genetics have revolutionized the classification of pediatric-type high-grade gliomas in the 2021 World Health Organization central nervous system tumor classification. This narrative review synthesizes current evidence on the following four tumor types: diffuse midline glioma, H3 K27-altered; diffuse hemispheric glioma, H3 G34-mutant; diffuse pediatric-type high-grade glioma, H3-wildtype and IDH-wildtype; and infant-type hemispheric glioma. We conducted a comprehensive literature search for articles published through January 2025. For each tumor type, we analyze characteristic clinical presentations, molecular alterations, conventional and advanced magnetic resonance imaging features, radiological-molecular correlations, and current therapeutic approaches. Emerging radiogenomic approaches utilizing artificial intelligence, including radiomics and deep learning, show promise in identifying imaging biomarkers that correlate with molecular features. This review highlights the importance of integrating radiological and molecular data for accurate diagnosis and treatment planning, while acknowledging limitations in current methodologies and the need for prospective validation in larger cohorts. Understanding these correlations is crucial for advancing personalized treatment strategies for these challenging tumors.

Kumar K K, P R, M N, G D

PubMed · Aug 25, 2025
Thyroid hormones play a key role in regulating metabolism, and the two most common thyroid disorders, hypothyroidism and hyperthyroidism, directly affect the body's metabolic rate. Predicting and diagnosing thyroid disease remain significant challenges in medical research due to the complexity of thyroid hormone regulation and its impact on metabolism. Therefore, this paper proposes a Complex-valued Spatio-Temporal Graph Convolution Neural Network optimized with the Giraffe Kicking Optimization Algorithm for Thyroid Nodule Classification in Ultrasound Images (CSGCNN-GKOA-TNC-UI). The ultrasound images are drawn from the DDTI (Digital Database of Thyroid ultrasound Images) dataset. The collected images are pre-processed with a Bilinear Double-Order Filter (BDOF) to remove noise and improve input image quality. The pre-processed image is passed to Deep Adaptive Fuzzy Clustering (DAFC) for region-of-interest (RoI) segmentation. The segmented image is fed to the Multi-Objective Matched Synchro-Squeezing Chirplet Transform (MMSSCT) to extract geometric and morphological features. The extracted features are fed into the CSGCNN, which classifies thyroid nodules as benign or malignant. Finally, the Giraffe Kicking Optimization Algorithm (GKOA) is used to tune the CSGCNN classifier. The CSGCNN-GKOA-TNC-UI algorithm is implemented in MATLAB. The CSGCNN-GKOA-TNC-UI approach attains 34.9%, 21.5%, and 26.8% higher F-score and 18.6%, 29.3%, and 19.2% higher accuracy than existing models: thyroid diagnosis and classification using a DNN based on a hybrid meta-heuristic with LSTM (LSTM-TNC-UI), a full-scale connected network for thyroid nodule segmentation in ultrasound images (FCG Net-TNC-UI), and an adversarial multi-scale fusion method for thyroid nodule segmentation (AMSeg-TNC-UI), respectively. The proposed model enhances thyroid nodule classification accuracy, aiding radiologists and endocrinologists. By reducing misclassification, it minimizes unnecessary biopsies and enables early malignancy detection.
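
As a loose stand-in for the edge-preserving preprocessing stage (a plain bilateral filter, not the paper's BDOF), the snippet below shows the kind of denoising applied before ROI segmentation; the input array is a placeholder for a DDTI image:

```python
import numpy as np
from skimage.restoration import denoise_bilateral

# Placeholder ultrasound image in [0, 1]; in practice this would be a DDTI frame.
ultrasound = np.random.rand(256, 256)

# Edge-preserving smoothing: a small intensity sigma limits blur across nodule
# boundaries, while the spatial sigma controls the neighbourhood size.
denoised = denoise_bilateral(ultrasound, sigma_color=0.05, sigma_spatial=3)
```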

Rozak, M. W., Mester, J. R., Attarpour, A., Dorr, A., Patel, S., Koletar, M., Hill, M. E., McLaurin, J., Goubran, M., Stefanovic, B.

bioRxiv preprint · Aug 25, 2025
Functional hyperemia is a well-established hallmark of healthy brain function, whereby local brain blood flow adjusts in response to a change in the activity of the surrounding neurons. Although functional hyperemia has been extensively studied at the level of both tissue and individual vessels, vascular network-level coordination remains largely unknown. To bridge this gap, we developed a deep learning-based computational pipeline that uses two-photon fluorescence microscopy images of cerebral microcirculation to enable automated reconstruction and quantification of the geometric changes across the microvascular network, comprising hundreds of interconnected blood vessels, pre- and post-activation of the neighbouring neurons. The pipeline's utility was demonstrated in the Thy1-ChR2 optogenetic mouse model, where we observed network-wide vessel radius changes to depend on the photostimulation intensity, with both dilations and constrictions occurring across the cortical depth, at an average of 16.1 ± 14.3 μm (mean ± SD) away from the most proximal neuron for dilations, and 21.9 ± 14.6 μm away for constrictions. We observed a significant heterogeneity of the vascular radius changes within vessels, with radius adjustment varying by an average of 24 ± 28% of the resting diameter, likely reflecting the heterogeneity of the distribution of contractile cells on the vessel walls. A graph theory-based network analysis revealed that the assortativity of adjacent blood vessel responses rose by 152 ± 65% at 4.3 mW/mm² of blue photostimulation vs. the control, with a 4% median increase in the efficiency of the capillary networks during this level of blue photostimulation in relation to the baseline. Interrogating individual vessels is thus not sufficient to predict how blood flow is modulated in the network. Our computational pipeline, to be made openly available, enables tracking of the microvascular network geometry over time, relating caliber adjustments to the state of vessel wall-associated cells, and mapping network-level flow distribution impairments in experimental models of disease.
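
The network-level readouts mentioned above (response assortativity and capillary-network efficiency) can be computed with standard graph tooling; the toy graph and radius-change values below are made up for illustration:

```python
import networkx as nx

# Toy vascular graph: nodes are vessel segments, edges connect anatomically
# adjacent segments, and each node carries its measured radius change
# (fractional change vs. baseline; values are made up for illustration).
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])
nx.set_node_attributes(G, {0: 0.12, 1: 0.08, 2: -0.05, 3: 0.10}, "dr")

# Assortativity of adjacent vessel responses: do connected segments change together?
assortativity = nx.numeric_assortativity_coefficient(G, "dr")

# Global efficiency of the (toy) capillary network.
efficiency = nx.global_efficiency(G)
print(assortativity, efficiency)
```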

S J, Perumalsamy M

PubMed · Aug 25, 2025
The process of detecting and measuring the fat layer surrounding the heart from medical images is referred to as epicardial fat segmentation. Accurate segmentation is essential for assessing heart health and associated risk factors. It plays a critical role in evaluating cardiovascular disease, requiring advanced techniques to enhance precision and effectiveness. However, there is currently a shortage of resources available for fat-mass measurement. The Visual Lab's cardiac fat database addresses this limitation by providing a comprehensive set of high-resolution images crucial for reliable fat analysis. This study proposes a novel multi-stage framework for epicardial fat segmentation. In the preprocessing phase, window-aware guided bilateral filtering (WGBR) is applied to reduce noise while preserving structural features. For region-of-interest (ROI) selection, the White Shark Optimizer (WSO) is employed to improve exploration and exploitation accuracy. The segmentation task is handled by a bidirectional guided semi-3D network (BGSNet), which enhances robustness by extracting features in both forward and backward directions. Following segmentation, quantification is performed to estimate the epicardial fat volume using reflection-equivariant quantum neural networks (REQNN), which are well suited to modelling complex visual patterns. The Parrot optimizer is further utilized to fine-tune hyperparameters, ensuring optimal performance. The experimental results confirm the effectiveness of the proposed BGSNet-with-REQNN approach, achieving a Dice score of 99.50%, an accuracy of 99.50%, and an execution time of 1.022 s per slice. Furthermore, the Spearman correlation analysis for fat quantification yielded an R² value of 0.9867, indicating strong agreement with the reference measurements. This integrated approach offers a reliable solution for epicardial fat segmentation and quantification, thereby supporting improved cardiovascular risk assessment and monitoring.
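
For reference, the Dice score reported above is the standard overlap metric between predicted and reference binary masks, e.g.:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between predicted and reference binary masks (0/1 arrays)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# e.g. dice_score(model_mask, expert_mask) -> value in [0, 1], 1 = perfect overlap
```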

Nguyen TC, Phung KA, Dao TTP, Nguyen-Mau TH, Nguyen-Quang T, Pham CN, Le TN, Shen J, Nguyen TV, Tran MT

PubMed · Aug 25, 2025
Polyp shape is critical for diagnosing colorectal polyps and assessing cancer risk, yet there is limited data on segmenting pedunculated and sessile polyps. This paper introduces PolypDB_INS, a dataset of 4403 images containing 4918 annotated polyps, specifically for sessile and pedunculated polyps. In addition, we propose DYNAFormer, a novel transformer-based model utilizing an anchor mask-guided mechanism that incorporates cross-attention, dynamic query updates, and query denoising for improved object segmentation. Treating each positional query as an anchor mask dynamically updated through decoder layers enhances perceptual information regarding the object's position, allowing for more precise segmentation of complex structures like polyps. Extensive experiments on the PolypDB_INS dataset using standard evaluation metrics for both instance and semantic segmentation show that DYNAFormer significantly outperforms state-of-the-art methods. Ablation studies confirm the effectiveness of the proposed techniques, highlighting the model's robustness for diagnosing colorectal cancer. The source code and dataset are available at https://github.com/ntcongvn/DYNAFormer.
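
The anchor-mask-guided cross-attention idea can be sketched as masked attention in which each query only looks at pixels inside its current anchor mask (a simplified single-head version with assumed shapes, not the released DYNAFormer code):

```python
import torch

def anchor_masked_cross_attention(queries, features, anchor_masks):
    """Each query attends only to feature locations inside its current anchor
    mask; a decoder would refine the masks layer by layer. Single-head
    simplification with assumed shapes; masks are assumed non-empty."""
    # queries: (Q, C); features: (N, C) flattened pixels; anchor_masks: (Q, N) bool
    scores = queries @ features.T / queries.shape[-1] ** 0.5   # (Q, N)
    scores = scores.masked_fill(~anchor_masks, float("-inf"))  # restrict to the anchor mask
    attn = torch.softmax(scores, dim=-1)
    return attn @ features                                     # updated queries, (Q, C)
```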