Page 312 of 376 · 3760 results

Joint Reconstruction of Activity and Attenuation in PET by Diffusion Posterior Sampling in Wavelet Coefficient Space

Clémentine Phung-Ngoc, Alexandre Bousse, Antoine De Paepe, Hong-Phuong Dang, Olivier Saut, Dimitris Visvikis

arXiv preprint · May 24, 2025
Attenuation correction (AC) is necessary for accurate activity quantification in positron emission tomography (PET). Conventional reconstruction methods typically rely on attenuation maps derived from a co-registered computed tomography (CT) or magnetic resonance imaging scan. However, this additional scan may complicate the imaging workflow, introduce misalignment artifacts and increase radiation exposure. In this paper, we propose a joint reconstruction of activity and attenuation (JRAA) approach that eliminates the need for auxiliary anatomical imaging by relying solely on emission data. This framework combines a wavelet diffusion model (WDM) with diffusion posterior sampling (DPS) to reconstruct fully three-dimensional (3-D) data. Experimental results show that our method outperforms maximum likelihood activity and attenuation (MLAA) and MLAA with UNet-based post-processing, and yields high-quality, noise-free reconstructions across various count settings when time-of-flight (TOF) information is available. It can also reconstruct non-TOF data, although reconstruction quality degrades significantly in low-count (LC) conditions, limiting its practical effectiveness in such settings. This approach represents a step towards stand-alone PET imaging by reducing the dependence on anatomical modalities while maintaining quantification accuracy, even in low-count scenarios when TOF information is available.
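
The DPS mechanism the abstract leans on is compact enough to sketch: each reverse-diffusion step pairs the unconditional denoising update with a gradient step on a data-fidelity term. The toy sketch below assumes a flattened image, a linear forward operator A (a NumPy matrix) and Gaussian fidelity; the paper's actual method operates on PET emission data in wavelet coefficient space and is not reproduced here.

```python
# Minimal DPS-style reverse-diffusion step under toy assumptions:
# linear forward model A, Gaussian data fidelity, flattened 1-D image.
import numpy as np

def dps_step(x_t, y, A, denoiser, alpha_t, alpha_bar_t, zeta=1.0):
    """One reverse-diffusion step with DPS-style likelihood guidance."""
    # The denoiser predicts the noise; recover the clean estimate x0_hat.
    eps_hat = denoiser(x_t)
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_bar_t)

    # Unconditional (DDPM-style) mean update toward x_{t-1}.
    x_prev = (x_t - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)

    # Guidance: descend the data-fidelity term ||y - A @ x0_hat||^2.
    # Constants (and the full chain rule through the denoiser, which the
    # original DPS formulation includes) are absorbed into the step size zeta.
    grad = -A.T @ (y - A @ x0_hat)
    return x_prev - zeta * grad
```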

MATI: A GPU-accelerated toolbox for microstructural diffusion MRI simulation and data fitting with a graphical user interface.

Xu J, Devan SP, Shi D, Pamulaparthi A, Yan N, Zu Z, Smith DS, Harkins KD, Gore JC, Jiang X

PubMed · May 24, 2025
To introduce MATI (Microstructural Analysis Toolbox for Imaging), a versatile MATLAB-based toolbox that combines both simulation and data fitting capabilities for microstructural dMRI research. MATI provides a user-friendly graphical user interface that enables researchers, including those without much programming experience, to perform advanced simulations and data analyses for microstructural MRI research. For simulation, MATI supports arbitrary microstructural tissues and pulse sequences. For data fitting, MATI supports a range of methods, including traditional non-linear least squares, Bayesian approaches, machine learning, and dictionary matching, allowing users to tailor analyses to specific research needs. Optimized with vectorized matrix operations and high-performance numerical libraries, MATI achieves high computational efficiency, enabling rapid simulations and data fitting on CPU and GPU hardware. While designed for microstructural dMRI, MATI's generalized framework can be extended to other imaging methods, making it a flexible and scalable tool for quantitative MRI research. MATI offers a significant step toward translating advanced microstructural MRI techniques into clinical applications.
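
As a concrete instance of the non-linear least-squares fitting such toolboxes wrap, the snippet below fits a mono-exponential ADC model to synthetic diffusion signals with SciPy. It is a hedged stand-in for illustration only; MATI itself is MATLAB-based, and its API is not shown here.

```python
# Mono-exponential ADC fit to synthetic diffusion-weighted signals,
# illustrating the kind of non-linear least-squares fit the toolbox automates.
import numpy as np
from scipy.optimize import curve_fit

def adc_model(b, s0, adc):
    """Mono-exponential diffusion signal decay: S(b) = S0 * exp(-b * ADC)."""
    return s0 * np.exp(-b * adc)

b_values = np.array([0, 250, 500, 750, 1000])       # b-values in s/mm^2
signal = np.array([1.00, 0.81, 0.66, 0.54, 0.44])   # synthetic, nearly noise-free

popt, _ = curve_fit(adc_model, b_values, signal, p0=[1.0, 1e-3])
print(f"S0 = {popt[0]:.3f}, ADC = {popt[1]:.2e} mm^2/s")  # ADC ~ 8.2e-04
```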

Cross-Fusion Adaptive Feature Enhancement Transformer: Efficient high-frequency integration and sparse attention enhancement for brain MRI super-resolution.

Yang Z, Xiao H, Wang X, Zhou F, Deng T, Liu S

PubMed · May 24, 2025
High-resolution magnetic resonance imaging (MRI) is essential for diagnosing and treating brain diseases. Transformer-based approaches demonstrate strong potential in MRI super-resolution by capturing long-range dependencies effectively. However, existing Transformer-based super-resolution methods face several challenges: (1) they primarily focus on low-frequency information, neglecting the utilization of high-frequency information; (2) they lack effective mechanisms to integrate both low-frequency and high-frequency information; (3) they struggle to effectively eliminate redundant information during the reconstruction process. To address these issues, we propose the Cross-Fusion Adaptive Feature Enhancement Transformer (CAFET). Our model maximizes the potential of both CNNs and Transformers. It consists of four key blocks: a high-frequency enhancement block for extracting high-frequency information; a hybrid attention block for capturing global information and local fitting, which includes channel attention and shifted rectangular window attention; a large-window fusion attention block for integrating local high-frequency features and global low-frequency features; and an adaptive sparse overlapping attention block for dynamically retaining key information and enhancing the aggregation of cross-window features. Extensive experiments validate the effectiveness of the proposed method. On the BraTS and IXI datasets, with an upsampling factor of ×2, the proposed method achieves maximum PSNR improvements of 2.4 dB and 1.3 dB, respectively, over state-of-the-art methods, along with SSIM improvements of up to 0.16% and 1.42%. Similarly, at an upsampling factor of ×4, it achieves maximum PSNR improvements of 1.04 dB and 0.3 dB over the current leading methods, along with SSIM improvements of up to 0.25% and 1.66%. Our method is capable of reconstructing high-quality super-resolution brain MRI images, demonstrating significant clinical potential.
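
Of the four blocks, the channel attention component has a well-known minimal form, sketched below as a squeeze-and-excitation-style PyTorch module. This is a generic illustration of channel attention, not CAFET's implementation; the shifted rectangular window and large-window fusion blocks are more involved and omitted.

```python
# Generic squeeze-and-excitation-style channel attention (illustrative only).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial average per channel
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-weight each feature channel by its learned global importance.
        return x * self.mlp(self.pool(x))

feat = torch.randn(1, 64, 48, 48)        # (N, C, H, W)
print(ChannelAttention(64)(feat).shape)  # torch.Size([1, 64, 48, 48])
```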

Symbolic and hybrid AI for brain tissue segmentation using spatial model checking.

Belmonte G, Ciancia V, Massink M

PubMed · May 24, 2025
Segmentation of 3D medical images, and brain segmentation in particular, is an important topic in neuroimaging and in radiotherapy. Overcoming the current, time-consuming practice of manual delineation of brain tumours and providing an accurate, explainable, and replicable method of segmentation of the tumour area and related tissues is therefore an open research challenge. In this paper, we first propose a novel symbolic approach to brain segmentation and delineation of brain lesions based on spatial model checking. This method has its foundations in the theory of closure spaces, a generalisation of topological spaces, and spatial logics. At its core is a high-level declarative logic language for image analysis, ImgQL, and an efficient spatial model checker, VoxLogicA, which exploits state-of-the-art image analysis libraries in its model checking algorithm. We then illustrate how this technique can be combined with machine learning techniques, leading to a hybrid AI approach that provides accurate and explainable segmentation results. We show the results of applying the symbolic approach to several public datasets of 3D magnetic resonance (MR) images. Three datasets are provided by the 2017, 2019 and 2020 international MICCAI BraTS Challenges, with 210, 259 and 293 MR images, respectively; the fourth is the BrainWeb dataset, with 20 (synthetic) 3D patient images of the normal brain. We then apply the hybrid AI method to the BraTS 2020 training set. Our segmentation results are in line with the state of the art, both in terms of accuracy and of computational efficiency, but with the advantage of being explainable.
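
To give a flavour of the spatial primitives involved, the sketch below approximates two ImgQL-style operators on binary voxel masks with SciPy morphology: a closure-space "near" (one-step dilation) and a "touch" operator that keeps the connected components of one mask reaching another. This mimics VoxLogicA's semantics informally; it is not the tool's implementation.

```python
# Informal analogues of two spatial-logic operators on binary voxel masks.
import numpy as np
from scipy import ndimage

def near(mask: np.ndarray) -> np.ndarray:
    """Voxels in or adjacent to the mask (closure-space 'near')."""
    return ndimage.binary_dilation(mask)

def touch(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Connected components of a that are adjacent to b (reachability)."""
    labels, _ = ndimage.label(a)
    reached = set(np.unique(labels[near(b) & a])) - {0}
    return np.isin(labels, list(reached))
```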

Generalizable AI approach for detecting projection type and left-right reversal in chest X-rays.

Ohta Y, Katayama Y, Ichida T, Utsunomiya A, Ishida T

PubMed · May 23, 2025
The verification of chest X-ray images involves several checkpoints, including orientation and reversal. To address the challenges of manual verification, this study developed an artificial intelligence (AI)-based system using a deep convolutional neural network (DCNN) to automatically verify the consistency between the imaging direction and examination orders. The system classified chest X-ray images into four categories: anteroposterior (AP), posteroanterior (PA), flipped AP, and flipped PA. To evaluate the impact of internal and external datasets on classification accuracy, the DCNN was trained on multiple publicly available chest X-ray datasets and tested on both internal and external data. The results demonstrated that the DCNN accurately classified imaging directions and detected image reversal. However, classification accuracy was strongly influenced by the training dataset. When trained exclusively on NIH data, the network achieved an accuracy of 98.9% on the same dataset; however, this dropped to 87.8% when evaluated on PADChest data. When trained on a mixed dataset, accuracy improved to 96.4%, but decreased to 76.0% when tested on the external COVID-CXNet dataset. Further, using Grad-CAM, we visualized the network's decision-making process, highlighting influential regions such as the cardiac silhouette and arm positioning, depending on the imaging direction. This study thus demonstrates the potential of AI to assist in automating the verification of imaging direction and positioning in chest X-rays. However, the network must be fine-tuned to local data characteristics to achieve optimal performance.
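
A natural way to obtain the four classes from projections labeled only AP or PA is to synthesize the flipped variants by horizontal mirroring, as in the hypothetical sketch below; the paper does not describe its data pipeline at this level of detail.

```python
# Hypothetical four-class training-set expansion via left-right mirroring.
import numpy as np

AP, PA, FLIPPED_AP, FLIPPED_PA = 0, 1, 2, 3

def expand_with_flips(image: np.ndarray, is_pa: bool):
    """Yield (image, class_id) pairs for the original and its mirror."""
    base = PA if is_pa else AP
    yield image, base
    yield np.fliplr(image), base + 2  # left-right reversal shifts the class id by 2

pairs = list(expand_with_flips(np.zeros((256, 256)), is_pa=True))
print([label for _, label in pairs])  # [1, 3]
```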

Construction of a Prediction Model for Adverse Perinatal Outcomes in Foetal Growth Restriction Based on a Machine Learning Algorithm: A Retrospective Study.

Meng X, Wang L, Wu M, Zhang N, Li X, Wu Q

PubMed · May 23, 2025
To create and validate a machine learning (ML)-based model for predicting adverse perinatal outcomes (APO) in foetal growth restriction (FGR) at diagnosis. A retrospective study. Multi-centre in China. Pregnancies affected by FGR. We enrolled singleton foetuses with a perinatal diagnosis of FGR who were admitted between January 2021 and November 2023. A total of 361 pregnancies from Beijing Obstetrics and Gynecology Hospital were used as the training set and the internal test set, while data from 50 pregnancies from Haidian Maternal and Child Health Hospital were used as the external test set. Feature screening was performed using random forest (RF), the Least Absolute Shrinkage and Selection Operator (LASSO) and logistic regression (LR). Subsequently, six ML methods, including Stacking, were used to construct models predicting the APO of FGR. Model performance was evaluated using indicators such as the area under the receiver operating characteristic curve (AUROC). Shapley Additive Explanation analysis was used to rank model features and explain the final model. The mean ± SD gestational age at diagnosis was 32.3 ± 4.8 weeks in the absent-APO group and 27.3 ± 3.7 weeks in the present-APO group. Women in the present-APO group had a higher rate of pregnancy-related hypertension (74.8% vs. 18.8%, p < 0.001). Among 17 candidate predictors (covering maternal characteristics, maternal comorbidities, obstetric characteristics and ultrasound parameters), the integration of the RF, LASSO and LR methods identified maternal body mass index, hypertension, gestational age at diagnosis of FGR, estimated foetal weight (EFW) z score, EFW growth velocity and abnormal umbilical artery Doppler (defined as a pulsatility index above the 95th percentile or absent/reversed diastolic flow) as significant predictors. The Stacking model performed well on both the internal test set [AUROC: 0.861, 95% confidence interval (CI): 0.838-0.896] and the external test set [AUROC: 0.906, 95% CI: 0.875-0.947]. The calibration curves showed high agreement between predicted and observed risks; the Hosmer-Lemeshow test yielded p = 0.387 and p = 0.825 for the internal and external test sets, respectively. The ML model, which integrates maternal clinical factors and ultrasound parameters, demonstrates good predictive value for APO in FGR at diagnosis, suggesting that ML techniques may be a valid approach for the early detection of FGR pregnancies at high risk of APO.
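
For readers unfamiliar with stacking, the scikit-learn sketch below shows the general pattern (several base learners combined by a logistic-regression meta-model) on synthetic data with 17 features; the authors' actual base learners, features and hyperparameters are not specified here.

```python
# Generic stacking ensemble on synthetic binary-outcome data (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=17, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-model over base predictions
)
stack.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```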

Self-supervised feature learning for cardiac Cine MR image reconstruction.

Xu S, Fruh M, Hammernik K, Lingg A, Kubler J, Krumm P, Rueckert D, Gatidis S, Kustner T

PubMed · May 23, 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitations of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully-sampled images for supervised learning, which are challenging to obtain in practice given the long acquisition times under respiratory or organ motion. Moreover, nearly all fully-sampled datasets are obtained from conventional reconstruction of mildly accelerated datasets, potentially biasing the achievable performance. Hence, the numerous undersampled datasets acquired at different accelerations in clinical practice remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac Cine dataset, including 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and even exhibits comparable or better performance than supervised learning at up to 16× retrospective undersampling. The feature learning strategy can effectively extract global representations, which have proven beneficial for removing artifacts and increasing generalization ability during reconstruction.
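
One plausible reading of "sampling-insensitive features" is a contrastive objective that pulls together embeddings of the same image under two different undersampling masks, sketched below with an InfoNCE loss in PyTorch. The abstract does not detail the actual pretext task or network, so this is an assumption-laden illustration.

```python
# Hypothetical sampling-invariance objective: z1[i] and z2[i] are embeddings
# of the same image reconstructed from two different undersampling masks.
import torch
import torch.nn.functional as F

def sampling_invariance_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1):
    """InfoNCE over a batch; matching pairs sit on the diagonal."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0))       # index of the positive for each row
    return F.cross_entropy(logits, targets)
```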

AMVLM: Alignment-Multiplicity Aware Vision-Language Model for Semi-Supervised Medical Image Segmentation.

Pan Q, Li Z, Qiao W, Lou J, Yang Q, Yang G, Ji B

PubMed · May 23, 2025
Low-quality pseudo-labels pose a significant obstacle in semi-supervised medical image segmentation (SSMIS), impeding consistency learning on unlabeled data. Leveraging vision-language models (VLMs) holds promise for improving pseudo-label quality by employing textual prompts to delineate segmentation regions, but it faces the challenge of cross-modal alignment uncertainty due to multiple correspondences (multiple images/texts tend to correspond to one text/image). Existing VLMs address this challenge by modeling semantics as distributions, but such distributions lead to semantic degradation. To address these problems, we propose the Alignment-Multiplicity Aware Vision-Language Model (AMVLM), a new VLM pre-training paradigm with two novel similarity-metric strategies. (i) Cross-modal Similarity Supervision (CSS) introduces a probability distribution transformer to supervise similarity scores across fine-granularity semantics by measuring cross-modal distribution disparities, thus learning multiple cross-modal alignments. (ii) Intra-modal Contrastive Learning (ICL) takes into account the similarity metric of coarse-fine granularity information within each modality to encourage cross-modal semantic consistency. Furthermore, using the pretrained AMVLM, we propose a pioneering text-guided SSMIS network to compensate for the quality deficiencies of pseudo-labels. This network incorporates a text mask generator to produce multimodal supervision information, enhancing pseudo-label quality and the model's consistency learning. Extensive experimentation validates the efficacy of our AMVLM-driven SSMIS, showing superior performance across four publicly available datasets. The code will be available at: https://github.com/QingtaoPan/AMVLM.
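
The CSS idea of supervising similarity scores as distributions can be loosely gestured at with a KL term between soft many-to-many targets and the model's predicted similarity distribution, as in the sketch below. The authors' probability distribution transformer is not reconstructed here; css_style_loss, sims and soft_targets are hypothetical names.

```python
# Loose gesture at distribution-level similarity supervision. Each row of
# soft_targets is a probability distribution over texts for one image,
# allowing several texts to match one image (the many-to-many case).
import torch
import torch.nn.functional as F

def css_style_loss(sims: torch.Tensor, soft_targets: torch.Tensor, tau: float = 0.07):
    """KL divergence of the predicted similarity distribution from soft targets."""
    log_p = F.log_softmax(sims / tau, dim=1)  # image -> text distribution
    return F.kl_div(log_p, soft_targets, reduction="batchmean")
```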

Non-invasive arterial input function estimation using an MRA atlas and machine learning.

Vashistha R, Moradi H, Hammond A, O'Brien K, Rominger A, Sari H, Shi K, Vegh V, Reutens D

PubMed · May 23, 2025
Quantifying biological parameters of interest through dynamic positron emission tomography (PET) requires an arterial input function (AIF) conventionally obtained from arterial blood samples. The AIF can also be non-invasively estimated from blood pools in PET images, often identified using co-registered MRI images. Deploying methods without blood sampling or the use of MRI generally requires total body PET systems with a long axial field-of-view (LAFOV) that includes a large cardiovascular blood pool. However, the number of such systems in clinical use is currently much smaller than that of short axial field-of-view (SAFOV) scanners. We propose a data-driven approach for AIF estimation for SAFOV PET scanners that is non-invasive and requires neither MRI nor blood sampling, using brain PET scans. The proposed method was validated using dynamic ¹⁸F-fluorodeoxyglucose ([¹⁸F]FDG) total body PET data from 10 subjects. A variational inference-based machine learning approach was employed to correct for peak activity. The prior was estimated using a probabilistic vascular MRI atlas, registered to each subject's PET image to identify cerebral arteries in the brain. The AIF estimated using brain PET images (IDIF-Brain) was compared to that obtained using data from the descending aorta of the heart (IDIF-DA). Kinetic rate constants (K₁, k₂, k₃) and net radiotracer influx (Kᵢ) for both cases were computed and compared. Qualitatively, the shape of IDIF-Brain matched that of IDIF-DA, capturing information on both the peak and tail of the AIF. The areas under the curve (AUC) of IDIF-Brain and IDIF-DA were similar, with an average relative error of 9%. The mean Pearson correlations between kinetic parameters (K₁, k₂, k₃) estimated with IDIF-DA and IDIF-Brain for each voxel were between 0.92 and 0.99 in all subjects, and above 0.97 for Kᵢ. This study introduces a new approach for AIF estimation in dynamic PET using brain PET images, a probabilistic vascular atlas, and machine learning techniques. The findings demonstrate the feasibility of non-invasive and subject-specific AIF estimation for SAFOV scanners.
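
For context, the rate constants and the net influx that the study compares are linked, in the standard irreversible two-tissue compartment model for [¹⁸F]FDG, by Kᵢ = K₁k₃/(k₂ + k₃); the numbers in the snippet below are illustrative, not taken from the paper.

```python
# Net radiotracer influx from two-tissue-compartment rate constants
# (irreversible FDG model): Ki = K1 * k3 / (k2 + k3).
def net_influx(k1: float, k2: float, k3: float) -> float:
    return k1 * k3 / (k2 + k3)

# Illustrative values only (units: min^-1 for k2, k3; mL/cm^3/min for K1).
print(net_influx(0.10, 0.15, 0.06))  # ~0.0286
```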

Optimizing the power of AI for fracture detection: from blind spots to breakthroughs.

Behzad S, Eibschutz L, Lu MY, Gholamrezanezhad A

PubMed · May 23, 2025
Artificial Intelligence (AI) is increasingly being integrated into the field of musculoskeletal (MSK) radiology, from research methods to routine clinical practice. Within the field of fracture detection, AI is enabling precision and speed previously unimaginable. Yet AI's decision-making processes are sometimes fraught with deficiencies, undermining trust, hindering accountability, and compromising diagnostic precision. To make AI a trusted ally for radiologists, we recommend incorporating clinical history, explaining AI decisions through explainable AI (XAI) techniques, increasing the variety and scale of training data to better reflect the complexity of real clinical situations, and fostering active interaction between clinicians and developers. By bridging these gaps, the true potential of AI can be unlocked, enhancing patient outcomes and fundamentally transforming radiology through a harmonious integration of human expertise and intelligent technology. In this article, we examine the factors contributing to AI inaccuracies and offer recommendations to address these challenges, benefiting both radiologists and developers striving to improve future algorithms.