
CirnetamorNet: An ultrasonic temperature measurement network for microwave hyperthermia based on deep learning.

Cui F, Du Y, Qin L, Li B, Li C, Meng X

PubMed · May 9, 2025
Microwave thermotherapy is a promising approach to cancer treatment, but accurate noninvasive temperature monitoring remains challenging. This study aims to achieve accurate temperature prediction during microwave thermotherapy by efficiently integrating multi-feature data, thereby improving the accuracy and reliability of noninvasive thermometry. We propose an enhanced recurrent neural network architecture, CirnetamorNet. An experimental data acquisition system was built around a phantom made of tissue-mimicking material. Ultrasonic image data were collected at different temperatures, and five parameters with high temperature correlation were extracted from the gray-level co-occurrence matrix and the homodyned-K distribution. Taking these multi-feature data as input and temperature as output, the CirnetamorNet model was constructed around a multi-head attention mechanism. Model performance was evaluated via training loss, prediction mean squared error, and accuracy, and ablation experiments assessed the contribution of each module. Compared with common models, CirnetamorNet performs well, with a training loss as low as 1.4589 and a mean squared error of only 0.1856. Its temperature prediction accuracy of 0.3 °C exceeds that of many advanced models. Ablation experiments show that removing any key module degrades performance, confirming that the collaboration of all modules is essential. The proposed CirnetamorNet model exhibits exceptional performance in noninvasive thermometry for microwave thermotherapy. It offers a novel approach to multi-feature data fusion in the medical domain and holds significant practical value.
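
The abstract gives the ingredients (a recurrent backbone, five ultrasound features per time step, multi-head attention, a temperature regression output) without implementation detail. Below is a minimal PyTorch sketch of that pattern; the GRU recurrence, layer sizes, head count, and the AttnTempRegressor name are all illustrative assumptions, not CirnetamorNet's actual design:

```python
import torch
import torch.nn as nn

class AttnTempRegressor(nn.Module):
    """Sketch: recurrent temperature regressor with multi-head attention.
    All sizes and the GRU choice are assumptions, not the paper's config."""

    def __init__(self, n_features=5, hidden=64, heads=4):
        super().__init__()
        # Five temperature-correlated ultrasound features per time step.
        self.embed = nn.Linear(n_features, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # scalar temperature estimate

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.rnn(self.embed(x))     # temporal feature sequence
        a, _ = self.attn(h, h, h)          # self-attention over time steps
        return self.head(a[:, -1])         # predict from the final step

model = AttnTempRegressor()
seq = torch.randn(2, 10, 5)                # 2 sequences, 10 steps, 5 features
print(model(seq).shape)                     # torch.Size([2, 1])
```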

FF-PNet: A Pyramid Network Based on Feature and Field for Brain Image Registration

Ying Zhang, Shuai Guo, Chenxi Sun, Yuchen Zhu, Jinhai Xiang

arXiv preprint · May 8, 2025
In recent years, deformable medical image registration techniques have made significant progress. However, existing models still lack efficiency in the parallel extraction of coarse- and fine-grained features. To address this, we construct a new pyramid registration network based on feature and deformation field (FF-PNet). For coarse-grained feature extraction, we design a Residual Feature Fusion Module (RFFM); for fine-grained image deformation, we propose a Residual Deformation Field Fusion Module (RDFFM). Through the parallel operation of these two modules, the model can effectively handle complex image deformations. Notably, the encoding stage of FF-PNet employs only traditional convolutional neural networks, without any attention mechanisms or multilayer perceptrons, yet it still achieves remarkable improvements in registration accuracy, fully demonstrating the superior feature decoding capabilities of RFFM and RDFFM. We conducted extensive experiments on the LPBA and OASIS datasets. The results show our network consistently outperforms popular methods on metrics such as the Dice similarity coefficient.
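
The exact fusion rules of RFFM and RDFFM are not given in the abstract. A common baseline that pyramid registration builds on is to predict a displacement field at each pyramid level, upsample each field to full resolution, combine them, and warp the moving image. The PyTorch sketch below uses simple additive combination (an approximation of true field composition) purely to illustrate that pattern; it is not FF-PNet's actual fusion rule:

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image (B, C, H, W) with a displacement field (B, 2, H, W)
    whose channels are (dx, dy) in normalized [-1, 1] coordinates."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = grid + flow.permute(0, 2, 3, 1)   # identity grid + displacement
    return F.grid_sample(image, grid, align_corners=True)

def pyramid_warp(moving, flows):
    """Combine per-level fields (ordered coarse -> fine) by upsampling
    and summing - a simplification of true composition - then warp."""
    total = torch.zeros_like(flows[-1])
    for f in flows:
        total = total + F.interpolate(f, size=total.shape[-2:],
                                      mode="bilinear", align_corners=True)
    return warp(moving, total)

moving = torch.randn(1, 1, 64, 64)
flows = [0.1 * torch.randn(1, 2, s, s) for s in (16, 32, 64)]
print(pyramid_warp(moving, flows).shape)    # torch.Size([1, 1, 64, 64])
```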

A myocardial reorientation method based on feature point detection for quantitative analysis of PET myocardial perfusion imaging.

Shang F, Huo L, Gong T, Wang P, Shi X, Tang X, Liu S

PubMed · May 8, 2025
Reorienting cardiac positron emission tomography (PET) images from the transaxial plane to the short-axis orientation is essential for cardiac PET image analysis. This study aims to design a convolutional neural network (CNN) for automatic reorientation and to evaluate its generalizability. An artificial intelligence (AI) method integrating a U-Net with a differentiable spatial-to-numerical transform module (DSNT-U) was proposed to automatically position three feature points (P<sub>apex</sub>, P<sub>base</sub>, and P<sub>RV</sub>), with the same three points manually located by an experienced radiologist serving as the reference standard (RS). A second radiologist performed manual localization for reproducibility evaluation. The DSNT-U, initially trained and tested on a [<sup>11</sup>C]acetate dataset (training/testing: 40/17), was further compared with a CNN-spatial transformer network (CNN-STN). After fine-tuning on 4 subjects, the network was tested on a [<sup>13</sup>N]ammonia dataset (n = 30). The performance of the DSNT-U was evaluated in terms of coordinates, volume, and quantitative indexes (pharmacokinetic parameters and total perfusion deficit, TPD). The proposed DSNT-U successfully achieved automatic myocardial reorientation on both the [<sup>11</sup>C]acetate and [<sup>13</sup>N]ammonia datasets. For the former, the intraclass correlation coefficients (ICCs) between the coordinates predicted by the DSNT-U and the RS exceeded 0.876. The average normalized mean squared error (NMSE) between the short-axis (SA) images obtained through DSNT-U-based reorientation and the reference SA images was 0.051 ± 0.043. For pharmacokinetic parameters, the R² between the DSNT-U and the RS exceeded 0.968. Compared with the CNN-STN, the DSNT-U demonstrated a higher ICC between the estimated rigid-transformation parameters and the RS. After fine-tuning on the [<sup>13</sup>N]ammonia dataset, the average NMSE between the SA images reoriented by the DSNT-U and the reference SA images was 0.056 ± 0.046, and the ICC between TPD values computed from DSNT-U-derived images and the reference values was 0.981. Furthermore, DSNT-U performance did not differ significantly between sexes or across myocardial perfusion defect (MPD) statuses. The proposed DSNT-U accurately positions P<sub>apex</sub>, P<sub>base</sub>, and P<sub>RV</sub> on the [<sup>11</sup>C]acetate dataset. After fine-tuning, the positioning model transfers to the [<sup>13</sup>N]ammonia perfusion dataset, demonstrating good generalization. The method adapts to data from either sex, with or without MPD, and to different tracers, showing potential to replace manual operation.
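
The DSNT module at the heart of DSNT-U replaces a hard argmax over a predicted heatmap with the expectation of a softmax-normalized spatial distribution, keeping landmark coordinates differentiable end-to-end. A minimal 2D version follows; the paper works with PET volumes, so a 3D variant with one extra axis would be needed in practice:

```python
import torch
import torch.nn.functional as F

def dsnt(heatmaps):
    """Differentiable spatial-to-numerical transform: turn unnormalized
    heatmaps (B, K, H, W) into sub-pixel coordinates (B, K, 2) in
    normalized [-1, 1] space via the expectation of a softmax map."""
    b, k, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.view(b, k, -1), dim=-1).view(b, k, h, w)
    xs = torch.linspace(-1.0, 1.0, w, device=heatmaps.device)
    ys = torch.linspace(-1.0, 1.0, h, device=heatmaps.device)
    exp_x = (probs.sum(dim=2) * xs).sum(dim=-1)   # marginal over H, then E[x]
    exp_y = (probs.sum(dim=3) * ys).sum(dim=-1)   # marginal over W, then E[y]
    return torch.stack([exp_x, exp_y], dim=-1)

# Three landmarks (e.g. apex, base, RV insertion) from 64x64 heatmaps.
coords = dsnt(torch.randn(1, 3, 64, 64))
print(coords.shape)   # torch.Size([1, 3, 2])
```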

A diffusion-stimulated CT-US registration model with self-supervised learning and synthetic-to-real domain adaptation.

Li S, Jia B, Huang W, Zhang X, Zhou W, Wang C, Teng G

PubMed · May 8, 2025
In abdominal interventional procedures, achieving precise registration of 2D ultrasound (US) frames with 3D computed tomography (CT) scans presents a significant challenge. Traditional tracking methods often rely on high-precision sensors, which can be prohibitively expensive. Furthermore, the clinical need for real-time registration with a broad capture range frequently exceeds the performance of standard image-based optimization techniques. Current automatic registration methods that utilize deep learning are either heavily reliant on manual annotations for training or struggle to effectively bridge the gap between different imaging domains. To address these challenges, we propose a novel diffusion-stimulated CT-US registration model. This model harnesses the physical diffusion properties of US to generate synthetic US images from preoperative CT data. Additionally, we introduce a synthetic-to-real domain adaptation strategy using a diffusion model to mitigate the discrepancies between real and synthetic US images. A dual-stream self-supervised regression neural network, trained on these synthetic images, is then used to estimate the pose within the CT space. The effectiveness of our proposed approach is verified through validation using US and CT scans from a dual-modality human abdominal phantom. The results of our experiments confirm that our method can accurately initialize the US image pose within an acceptable range of error and subsequently refine it to achieve precise alignment. This enables real-time, tracker-independent, and robust rigid registration of CT and US images.
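
The abstract specifies a dual-stream regression network trained on synthetic images to estimate pose in CT space, but not its layout. One generic shape for such a model is two small CNN encoders whose features are concatenated and regressed to a 6-DoF rigid pose (3 translations, 3 rotations); every layer size and name in this sketch is an assumption:

```python
import torch
import torch.nn as nn

class DualStreamPoseNet(nn.Module):
    """Sketch: one encoder for the US frame, one for a CT-derived slice,
    with a shared head regressing a 6-DoF rigid pose. Illustrative only."""

    def __init__(self):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.us_enc, self.ct_enc = encoder(), encoder()
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 6))   # (tx, ty, tz, rx, ry, rz)

    def forward(self, us, ct):
        return self.head(torch.cat([self.us_enc(us), self.ct_enc(ct)], 1))

net = DualStreamPoseNet()
pose = net(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
print(pose.shape)   # torch.Size([1, 6])
```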