
Accelerating CEST MRI With Deep Learning-Based Frequency Selection and Parameter Estimation.

Shen C, Cheema K, Xie Y, Ruan D, Li D

PubMed · Jul 1, 2025
Chemical exchange saturation transfer (CEST) MRI is a powerful molecular imaging technique for detecting metabolites through proton exchange. While CEST MRI provides high sensitivity, its clinical application is hindered by prolonged scan time due to the need for imaging across numerous frequency offsets for parameter estimation. Since scan time is directly proportional to the number of frequency offsets, identifying and selecting the most informative frequencies can significantly reduce acquisition time. We propose a novel deep learning-based framework that integrates frequency selection and parameter estimation to accelerate CEST MRI. Our method leverages channel pruning via batch normalization to identify the most informative frequency offsets while simultaneously training the network for accurate parametric map prediction. Using data from six healthy volunteers, channel pruning selected 13 informative frequency offsets out of 53 without compromising map quality. Images from the selected frequency offsets were reconstructed using the MR Multitasking method, which employs a low-rank tensor model to enable under-sampling of k-space lines for each frequency offset, further reducing scan time. Predicted parametric maps of amide proton transfer (APT), nuclear Overhauser effect (NOE), and magnetization transfer (MT) based on these selected frequencies were comparable in quality to maps generated using all frequency offsets, and outperformed the Fisher information-based selection methods from our previous work. This integrated approach has the potential to reduce the whole-brain CEST MRI scan time from the original 5:30 min to under 1:30 min without compromising map quality. By leveraging deep learning for frequency selection and parametric map prediction, the proposed framework demonstrates its potential for efficient and practical clinical implementation. Future studies will focus on extending this method to patient populations and addressing challenges such as B₀ inhomogeneity and abnormal signal variation in diseased tissues.
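
A minimal sketch of the frequency-selection idea described above (channel pruning via L1-regularised batch-norm scaling factors, in the spirit of "network slimming"), not the authors' implementation. The module names, shapes, and hyper-parameters below are illustrative assumptions; the 53 offsets are treated as input channels whose BatchNorm gammas act as importance scores.

```python
import torch
import torch.nn as nn

class FrequencySelector(nn.Module):
    """Toy network: the BatchNorm gamma of each of the 53 offset channels
    serves as a learnable importance score for that frequency offset."""
    def __init__(self, n_offsets=53, n_maps=3):
        super().__init__()
        self.mix = nn.Conv2d(n_offsets, n_offsets, kernel_size=1, groups=n_offsets)
        self.bn = nn.BatchNorm2d(n_offsets)            # gamma = per-offset importance
        self.head = nn.Sequential(                      # toy regressor to APT/NOE/MT maps
            nn.Conv2d(n_offsets, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_maps, 3, padding=1),
        )

    def forward(self, x):                               # x: [B, 53, H, W] Z-spectral images
        return self.head(self.bn(self.mix(x)))

def sparsity_penalty(model, weight=1e-4):
    """L1 penalty on the BN scaling factors, added to the map-prediction loss."""
    return weight * model.bn.weight.abs().sum()

def selected_offsets(model, k=13):
    """After training, keep the k offsets with the largest |gamma|."""
    return torch.topk(model.bn.weight.abs(), k).indices.sort().values

# Typical training step (illustrative):
# loss = torch.nn.functional.l1_loss(model(zspec), target_maps) + sparsity_penalty(model)
```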

Mamba-based deformable medical image registration with an annotated brain MR-CT dataset.

Wang Y, Guo T, Yuan W, Shu S, Meng C, Bai X

PubMed · Jul 1, 2025
Deformable registration is essential in medical image analysis, especially for handling the various multi- and mono-modal registration tasks in neuroimaging. Brain MR-CT registration remains underexplored, and learning-based methods still face challenges in improving both accuracy and efficiency. To broaden the practice of multi-modal registration in the brain, we present SR-Reg, a new benchmark dataset comprising 180 volumetric paired MR-CT images with annotated anatomical regions. Building on this foundation, we introduce MambaMorph, a novel deformable registration network that uses the efficient state space model Mamba for global feature learning and a fine-grained feature extractor for low-level embedding. Experimental results demonstrate that MambaMorph surpasses advanced ConvNet-based and Transformer-based networks across several multi- and mono-modal tasks, with notable gains in both accuracy and efficiency. Code and dataset are available at https://github.com/mileswyn/MambaMorph.
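
For context, a minimal sketch of the objective commonly used to train unsupervised deformable-registration networks of this kind (image similarity plus displacement-field smoothness, VoxelMorph-style). This is illustrative only and is not the MambaMorph code; MSE is used here for brevity, whereas MR-CT registration typically relies on modality-robust similarity terms such as NCC or mutual information.

```python
import torch
import torch.nn.functional as F

def gradient_smoothness(flow):
    """Mean squared spatial gradient of a 3-D displacement field [B, 3, D, H, W]."""
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()) / 3.0

def registration_loss(warped_moving, fixed, flow, lam=1.0):
    """Similarity between the warped moving image and the fixed image,
    regularised by the smoothness of the predicted displacement field."""
    return F.mse_loss(warped_moving, fixed) + lam * gradient_smoothness(flow)
```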

Assessment of biventricular cardiac function using free-breathing artificial intelligence cine with motion correction: Comparison with standard multiple breath-holding cine.

Ran L, Yan X, Zhao Y, Yang Z, Chen Z, Jia F, Song X, Huang L, Xia L

PubMed · Jul 1, 2025
To assess the image quality and biventricular function obtained with a free-breathing artificial intelligence cine method with motion correction (FB AI MOCO). A total of 72 participants (mean age 38.3 ± 15.4 years, 40 males) prospectively enrolled in this single-center, cross-sectional study underwent cine scans using standard breath-holding (BH) cine and FB AI MOCO cine at 3.0 Tesla. Image quality was evaluated on a 5-point ordinal Likert scale for blood-pool to myocardium contrast, endocardial edge definition, and artifacts, with the overall quality score calculated as the equally weighted average of the three criteria; apparent signal-to-noise ratio (aSNR) and estimated contrast-to-noise ratio (eCNR) were also assessed. Biventricular functional parameters, including left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and LV end-diastolic mass (LVEDM), were assessed as well. Comparisons between the two sequences were made using the paired t-test and Wilcoxon signed-rank test, and correlations using Pearson correlation. Agreement of quantitative parameters was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analysis. P < 0.05 was considered statistically significant. The total acquisition time of the entire stack for FB AI MOCO cine (14.7 s ± 1.9 s) was notably shorter than that for standard BH cine (82.6 s ± 11.9 s, P < 0.001). The aSNR did not differ significantly between FB AI MOCO cine and standard BH cine (76.7 ± 20.7 vs. 79.8 ± 20.7, P = 0.193). The eCNR of FB AI MOCO cine was higher than that of standard BH cine (191.6 ± 54.0 vs. 155.8 ± 68.4, P < 0.001), as were the scores for blood-pool to myocardium contrast (4.6 ± 0.5 vs. 4.4 ± 0.6, P = 0.003). Qualitative scores for endocardial edge definition (4.2 ± 0.5 vs. 4.3 ± 0.7, P = 0.123), artifact presence (4.3 ± 0.6 vs. 4.1 ± 0.8, P = 0.085), and overall image quality (4.4 ± 0.4 vs. 4.3 ± 0.6, P = 0.448) showed no significant differences between the two methods. Representative RV and LV functional parameters, including RVEDV (102.2 (86.4, 120.4) ml vs. 104.0 (88.5, 120.3) ml, P = 0.294), RVEF (31.0 ± 11.1 % vs. 31.2 ± 11.0 %, P = 0.570), and LVEDV (106.2 (86.7, 131.3) ml vs. 105.8 (84.4, 130.3) ml, P = 0.450), also did not differ significantly between the two methods. Strong correlations (r > 0.900) and excellent agreement (ICC > 0.900) were found for all biventricular functional parameters between the two sequences. In subgroups with reduced LVEF (<50 %, n = 24) or elevated heart rate (≥80 bpm, n = 17), no significant differences were observed in any biventricular functional metrics (P > 0.05 for all) between the two sequences. Compared with multiple BH cine, FB AI MOCO cine achieved comparable image quality and biventricular functional parameters with shorter scan times, suggesting promising potential for clinical application.
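
An illustrative sketch of the agreement analyses reported above (paired t-test, Pearson correlation, Bland-Altman limits of agreement, and a two-way consistency ICC), not the authors' code; the variable names in the usage comments (e.g. lvedv_ai, lvedv_bh) are hypothetical placeholders for paired per-subject measurements.

```python
import numpy as np
from scipy import stats

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurements."""
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def icc_3_1(a, b):
    """Two-way mixed, single-measure, consistency ICC(3,1) for two raters/methods."""
    x = np.column_stack([a, b])
    n, k = x.shape
    grand = x.mean()
    ms_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ss_raters = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_error = ss_total - ms_subjects * (n - 1) - ss_raters
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

# e.g. comparing LVEDV between the two sequences (hypothetical paired arrays):
# t, p = stats.ttest_rel(lvedv_ai, lvedv_bh)
# r, _ = stats.pearsonr(lvedv_ai, lvedv_bh)
# bias, loa = bland_altman(lvedv_ai, lvedv_bh)
# icc = icc_3_1(lvedv_ai, lvedv_bh)
```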

A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets.

Yang Q, Su S, Zhang T, Wang M, Dou W, Li K, Ren Y, Zheng Y, Wang M, Xu Y, Sun Y, Liu Z, Tan T

PubMed · Jul 1, 2025
Amide proton transfer (APT) imaging is a novel functional MRI technique that enables quantification of protein metabolism, but its wide application in clinical settings is largely limited by its long acquisition time. One way to reduce the scanning time is to acquire fewer frequency offset images. However, sparse frequency offset images are inadequate for fitting the Z-spectrum, the curve essential to quantifying the APT effect, which might compromise its quantification. In this study, we develop a deep learning-based model that reconstructs dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short- and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9, 46.7]) and a structural similarity index of 0.989 (95% CI: [0.987, 0.993]) for the tumor region. We integrated a weighted layer into the model to evaluate the impact of each frequency offset on the reconstruction process; the weights the model learned for the frequency offsets at ±6.5 ppm, 0 ppm, and 3.5 ppm were of higher significance. Experimental results demonstrate that the proposed model effectively reconstructs dense frequency offsets (n = 29, from 7 to -7 ppm in 0.5 ppm intervals) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening the APT imaging acquisition time, offering potential guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.
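
A minimal sketch of a 1-D convolutional seq2seq mapping from a sparsely sampled Z-spectrum (21 offsets) to a dense one (29 offsets), applied voxel-wise. This is an assumption-laden illustration of the general idea, not the authors' architecture; layer widths and the interpolation step are arbitrary choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseToDenseZSpectrum(nn.Module):
    def __init__(self, n_dense=29, width=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv1d(1, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(width, width, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Conv1d(width, 1, kernel_size=3, padding=1)
        self.n_dense = n_dense

    def forward(self, z):                       # z: [n_voxels, 1, n_sparse_offsets]
        h = self.encode(z)
        # resample the feature sequence from the sparse to the dense offset grid
        h = F.interpolate(h, size=self.n_dense, mode="linear", align_corners=True)
        return self.decode(h)                   # [n_voxels, 1, n_dense_offsets]

# model = SparseToDenseZSpectrum()
# dense_pred = model(torch.rand(4096, 1, 21))   # hypothetical batch of voxel spectra
```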

Breast tumour classification in DCE-MRI via cross-attention and discriminant correlation analysis enhanced feature fusion.

Pan F, Wu B, Jian X, Li C, Liu D, Zhang N

PubMed · Jul 1, 2025
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has proven highly sensitive in diagnosing breast tumours, owing to the kinetic and volumetric information it provides. To exploit this kinetics-related and volume-related information, this paper develops and validates a classification model for differentiating benign and malignant breast tumours on DCE-MRI through fusing deep features and cross-attention-encoded radiomics features using discriminant correlation analysis (DCA). Classification experiments were conducted on a dataset of 261 individuals who underwent DCE-MRI, including those with multiple tumours, yielding 137 benign and 163 malignant tumours. To strengthen the correlation between features and reduce feature redundancy, a novel fusion method that fuses deep features and encoded radiomics features based on DCA (eFF-DCA) is proposed. The eFF-DCA comprises three components: (1) a feature extraction module that captures kinetic information across phases, (2) a radiomics feature encoding module employing a cross-attention mechanism to enhance inter-phase feature correlation, and (3) a DCA-based fusion module that transforms features to maximise intra-class correlation while minimising inter-class redundancy, facilitating effective classification. The proposed eFF-DCA method achieved an accuracy of 90.9% and an area under the receiver operating characteristic curve of 0.942, outperforming methods using single-modal features. The proposed eFF-DCA exploits DCE-MRI kinetic-related and volume-related features to improve breast tumour diagnostic accuracy, but its non-end-to-end design limits multimodal fusion. Future research should explore unified end-to-end deep learning architectures that enable seamless multimodal feature fusion and joint optimisation of feature extraction and classification.
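
An illustrative sketch of a cross-attention block of the kind described in component (2), letting radiomics features from one DCE phase attend to the other phases before fusion. This is not the eFF-DCA code; the feature dimension, number of heads, and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class RadiomicsCrossAttention(nn.Module):
    def __init__(self, dim=128, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_phase, other_phases):
        # query_phase: [B, 1, dim] features from one phase;
        # other_phases: [B, n_phases - 1, dim] features from the remaining phases
        attended, _ = self.attn(query_phase, other_phases, other_phases)
        return self.norm(query_phase + attended)   # residual connection + layer norm

# Hypothetical usage with per-phase radiomics feature tensors:
# encoded = RadiomicsCrossAttention()(phase2_feats,
#                                     torch.cat([phase1_feats, phase3_feats], dim=1))
```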

A Minimal Annotation Pipeline for Deep Learning Segmentation of Skeletal Muscles.

Baudin PY, Balsiger F, Beck L, Boisserie JM, Jouan S, Marty B, Reyngoudt H, Scheidegger O

PubMed · Jul 1, 2025
Translating quantitative skeletal muscle MRI biomarkers into the clinic requires efficient automatic segmentation methods. The purpose of this work is to investigate a simple yet effective iterative methodology for building a high-quality automatic segmentation model while minimizing the manual annotation effort. We used a retrospective database of quantitative MRI examinations (n = 70) of healthy and pathological thighs to train an nnU-Net segmentation model. The cohort comprised healthy volunteers and patients with various neuromuscular diseases (NMDs), broadly categorized as dystrophic, inflammatory, neurogenic, and unlabeled. We designed an iterative procedure, progressively adding cases to the training set and using a simple visual five-level rating scale to judge the validity of the generated segmentations for clinical use. On an independent test set (n = 20), we assessed the quality of the segmentation in 13 individual thigh muscles using standard segmentation metrics, the Dice coefficient (DICE) and 95% Hausdorff distance (HD95), and quantitative biomarkers: cross-sectional area (CSA), fat fraction (FF), and water-T1/T2. We obtained high-quality segmentations (DICE = 0.88 ± 0.15/0.86 ± 0.14, HD95 = 6.35 ± 12.33/6.74 ± 11.57 mm), comparable to recent works, despite a smaller training set (n = 30). Inter-rater agreement on the five-level scale was fair to moderate, but the segmentation model improved progressively across iterations. We observed limited differences from manually delineated segmentations in the quantitative outcomes (MAD: CSA = 65.2 mm², FF = 1%, water-T1 = 8.4 ms, water-T2 = 0.35 ms), with variability comparable to that of manual delineations.
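
An illustrative sketch of the two segmentation metrics reported above, computed from binary masks with SciPy distance transforms; this is a generic implementation under stated assumptions (isotropic or known voxel spacing), not the authors' evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (mm) between mask surfaces."""
    def surface(m):
        return m & ~ndimage.binary_erosion(m)
    a, b = a.astype(bool), b.astype(bool)
    sa, sb = surface(a), surface(b)
    # distance from every voxel to the nearest surface voxel of the other mask
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    d_ab = dist_to_b[sa]          # surface of A -> surface of B
    d_ba = dist_to_a[sb]          # surface of B -> surface of A
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```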

The implementation of artificial intelligence in serial monitoring of post gamma knife vestibular schwannomas: A pilot study.

Singh M, Jester N, Lorr S, Briano A, Schwartz N, Mahajan A, Chiang V, Tommasini SM, Wiznia DH, Buono FD

PubMed · Jul 1, 2025
Vestibular schwannomas (VS) are benign tumors that can lead to hearing loss, balance issues, and tinnitus. Gamma Knife radiosurgery (GKS) is a common treatment for VS, aimed at halting tumor growth and preserving neurological function. Accurate monitoring of VS volume before and after GKS is essential for assessing treatment efficacy. We aimed to evaluate the accuracy of an artificial intelligence (AI) algorithm, originally developed to identify NF2-SWN-related VS, in segmenting non-NF2-SWN-related VS and detecting volume changes before and after GKS. We hypothesized that this AI algorithm, trained on NF2-SWN-related VS data, would apply accurately to non-NF2-SWN VS and to VS treated with GKS. In this retrospective cohort study, we reviewed an established Gamma Knife database and identified 16 patients who underwent GKS for VS and had pre- and post-GKS scans. Contrast-enhanced T1-weighted MRI scans were analyzed with both manual segmentation and the AI algorithm. DICE similarity coefficients were computed to compare AI and manual segmentations, and a paired t-test was used to assess statistical significance. Volume changes between pre- and post-GKS scans were calculated for both segmentation methods. The mean DICE score between AI and manual segmentations was 0.91 (range 0.79-0.97); pre- and post-GKS DICE scores were 0.91 (range 0.79-0.97) and 0.92 (range 0.81-0.97), indicating high spatial overlap. AI-segmented VS volumes pre- and post-GKS were consistent with manual measurements, and the pre- and post-GKS volume percentage changes were similar between manual and AI segmentations, indicating that the AI algorithm can accurately detect changes in tumor growth. The algorithm processed scans within 5 min, suggesting it offers a reliable, efficient alternative for clinical monitoring.
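
An illustrative sketch of the volume comparison described above: tumor volume from a binary segmentation, percentage change pre/post GKS, and a paired t-test between manual and AI-derived volumes. The function and variable names are assumptions, not the study's code.

```python
import numpy as np
from scipy import stats

def volume_ml(mask, voxel_volume_mm3):
    """Segmented volume in millilitres from a binary mask."""
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0

def percent_change(pre_ml, post_ml):
    """Percentage change in tumor volume from pre- to post-GKS."""
    return 100.0 * (post_ml - pre_ml) / pre_ml

# e.g. across a cohort (hypothetical arrays of per-patient volumes in ml):
# t, p = stats.ttest_rel(manual_volumes, ai_volumes)
```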

CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans.

Scardigno RM, Brunetti A, Marvulli PM, Carli R, Dotoli M, Bevilacqua V, Buongiorno D

PubMed · Jul 1, 2025
High-quality computed tomography (CT) scans are essential for accurate diagnostic and therapeutic decisions, but the presence of metal objects within the body can produce distortions that lower image quality. Deep learning (DL) approaches using image-to-image translation for metal artifact reduction (MAR) show promise over traditional methods but often introduce secondary artifacts. Additionally, most rely on paired simulated data due to limited availability of real paired clinical data, restricting evaluation on clinical scans to qualitative analysis. This work presents CALIMAR-GAN, a generative adversarial network (GAN) model that employs a guided attention mechanism and the linear interpolation algorithm to reduce artifacts using unpaired simulated and clinical data for targeted artifact reduction. Quantitative evaluations on simulated images demonstrated superior performance, achieving a PSNR of 31.7, SSIM of 0.877, and Fréchet inception distance (FID) of 22.1, outperforming state-of-the-art methods. On real clinical images, CALIMAR-GAN achieved the lowest FID (32.7), validated as a valuable complement to qualitative assessments through correlation with pixel-based metrics (r=-0.797 with PSNR, p<0.01; r=-0.767 with MS-SSIM, p<0.01). This work advances DL-based artifact reduction into clinical practice with high-fidelity reconstructions that enhance diagnostic accuracy and therapeutic outcomes. Code is available at https://github.com/roberto722/calimar-gan.
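
An illustrative sketch of the pixel-based image-quality metrics reported above (PSNR and SSIM), using scikit-image; FID additionally requires an Inception feature extractor and is omitted here for brevity. This is a generic evaluation snippet, not the CALIMAR-GAN code.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_mar(reference_ct, corrected_ct):
    """PSNR and SSIM between an artifact-free reference CT and a MAR output
    (both 2-D arrays on the same intensity scale)."""
    data_range = reference_ct.max() - reference_ct.min()
    psnr = peak_signal_noise_ratio(reference_ct, corrected_ct, data_range=data_range)
    ssim = structural_similarity(reference_ct, corrected_ct, data_range=data_range)
    return psnr, ssim
```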

Prediction of PD-L1 expression in NSCLC patients using PET/CT radiomics and prognostic modelling for immunotherapy in PD-L1-positive NSCLC patients.

Peng M, Wang M, Yang X, Wang Y, Xie L, An W, Ge F, Yang C, Wang K

PubMed · Jul 1, 2025
To develop a positron emission tomography/computed tomography (PET/CT)-based radiomics model for predicting programmed cell death ligand 1 (PD-L1) expression in non-small cell lung cancer (NSCLC) patients and for estimating progression-free survival (PFS) and overall survival (OS) in PD-L1-positive patients undergoing first-line immunotherapy. We retrospectively analysed 143 NSCLC patients who underwent pretreatment ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) PET/CT scans, of whom 86 were PD-L1-positive. Clinical data collected included gender, age, smoking history, tumor-node-metastasis (TNM) stage, pathologic type, laboratory parameters, and PET metabolic parameters. Four machine learning algorithms (Bayes, logistic regression, random forest, and support vector machine (SVM)) were used to build models, and predictive performance was validated using receiver operating characteristic (ROC) curves. Univariate and multivariate Cox analyses identified independent predictors of OS and PFS in PD-L1-positive patients undergoing immunotherapy, and a nomogram was created to predict OS. A total of 20 models were built for predicting PD-L1 expression. The clinical combined PET/CT radiomics model based on the SVM algorithm performed best (area under the curve of 0.914 and 0.877 for the training and test sets, respectively). The Cox analyses showed that smoking history independently predicted PFS, while SUVmean, monocyte percentage, and white blood cell count were independent predictors of OS; a nomogram was created to predict 1-year, 2-year, and 3-year OS based on these three factors. We developed PET/CT-based machine learning models to help predict PD-L1 expression in NSCLC patients and identified independent predictors of PFS and OS in PD-L1-positive patients receiving immunotherapy, thereby aiding precision treatment.
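
An illustrative sketch of one of the classifiers described above: an SVM on combined clinical and PET/CT radiomics features evaluated by ROC AUC, using scikit-learn. The feature matrix, labels, and split strategy are hypothetical placeholders, not the authors' pipeline.

```python
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def fit_pdl1_classifier(features, pdl1_positive):
    """Train an RBF SVM to predict PD-L1 positivity and report test-set AUC."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, pdl1_positive, test_size=0.3, stratify=pdl1_positive, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, auc
```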

Novel artificial intelligence approach in neurointerventional practice: Preliminary findings on filter movement and ischemic lesions in carotid artery stenting.

Sagawa H, Sakakura Y, Hanazawa R, Takahashi S, Wakabayashi H, Fujii S, Fujita K, Hirai S, Hirakawa A, Kono K, Sumita K

PubMed · Jul 1, 2025
Embolic protection devices (EPDs) used during carotid artery stenting (CAS) are crucial for reducing ischemic complications. Although minimizing filter-type EPD movement is considered important, little research has demonstrated this in practice. We used artificial intelligence (AI)-based device recognition technology to investigate the correlation between filter movement and ischemic complications. We retrospectively studied 28 consecutive patients who underwent CAS using the FilterWire EZ (Boston Scientific, Marlborough, MA, USA) from April 2022 to September 2023. Clinical data, procedural videos, and postoperative magnetic resonance imaging were collected. The AI-based device detection function in Neuro-Vascular Assist (iMed Technologies, Tokyo, Japan) was used to quantify filter movement. Multivariate proportional odds model analysis was performed to explore correlations between postoperative diffusion-weighted imaging (DWI) hyperintense lesions and potential ischemic risk factors, including filter movement. In total, 23 patients had sufficient information and were eligible for quantitative analysis. Fourteen patients (60.9%) showed postoperative DWI hyperintense lesions. Multivariate analysis revealed significant associations of filter movement distance (odds ratio, 1.01; 95% confidence interval, 1.00-1.02; p = 0.003) and high-intensity signals on time-of-flight magnetic resonance angiography with DWI hyperintense lesions. Age, symptomatic status, and operative time were not significantly correlated. Greater filter movement during CAS was correlated with a higher incidence of postoperative DWI hyperintense lesions. AI-based quantitative evaluation of endovascular technique may enable demonstration of previously unproven recommendations. To the best of our knowledge, this is the first study to use an AI system for quantitative evaluation to address real-world clinical issues.
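
An illustrative sketch of a proportional-odds (ordinal logistic) model of the kind described above, relating filter movement distance and other covariates to an ordinal DWI lesion grade, using statsmodels' OrderedModel. The column names and the exact set of covariates are assumptions, not the study's specification.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_proportional_odds(df: pd.DataFrame):
    """Fit an ordinal logistic (proportional-odds) regression.
    df['dwi_grade'] is an ordered lesion-burden category; the covariates
    below are hypothetical stand-ins for the risk factors in the study."""
    exog = df[["filter_movement_mm", "age", "symptomatic", "operative_time_min"]]
    model = OrderedModel(df["dwi_grade"], exog, distr="logit")
    result = model.fit(method="bfgs", disp=False)
    return result   # result.params holds the log odds ratios per covariate
```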
