
Association of Psychological Resilience With Decelerated Brain Aging in Cognitively Healthy World Trade Center Responders.

Seeley SH, Fremont R, Schreiber Z, Morris LS, Cahn L, Murrough JW, Schiller D, Charney DS, Pietrzak RH, Perez-Rodriguez MM, Feder A

PubMed · Jul 1 2025
Despite their exposure to potentially traumatic stressors, the majority of World Trade Center (WTC) responders (those who worked on rescue, recovery, and cleanup efforts on or following September 11, 2001) have shown psychological resilience, never developing long-term psychopathology. Psychological resilience may be protective against the earlier age-related cognitive changes associated with posttraumatic stress disorder (PTSD) in this cohort. In the current study, we calculated the difference between estimated brain age from structural magnetic resonance imaging (MRI) data and chronological age in WTC responders who participated in a parent functional MRI study of resilience (N = 97). We hypothesized that highly resilient responders would show the least brain aging, and we explored associations between brain aging and psychological and cognitive measures. WTC responders screened for the absence of cognitive impairment were classified into 3 groups: a WTC-related PTSD group (n = 32), a Highly Resilient group without lifetime psychopathology despite high WTC-related exposure (n = 34), and a Lower WTC-Exposed control group also without lifetime psychopathology (n = 31). We used BrainStructureAges, a deep learning algorithm that estimates voxelwise age from T1-weighted MRI data, to calculate decelerated (or accelerated) brain aging relative to chronological age. Globally, brain aging was decelerated in the Highly Resilient group and accelerated in the PTSD group, with a significant group difference (p = .021, Cohen's d = 0.58); the Lower WTC-Exposed control group exhibited no significant brain age gap or group difference. Lesser brain aging was associated with resilience-linked factors including lower emotional suppression, greater optimism, and better verbal learning. Cognitively healthy WTC responders show differences in brain aging related to resilience and PTSD.
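The core computations in this brain-age analysis reduce to a difference score plus a standardized group effect size. A minimal sketch (this is not the authors' BrainStructureAges pipeline; function names are illustrative):

```python
import numpy as np

def brain_age_gap(predicted_age, chronological_age):
    """Brain age gap: positive = accelerated aging, negative = decelerated."""
    return np.asarray(predicted_age, float) - np.asarray(chronological_age, float)

def cohens_d(group_a, group_b):
    """Cohen's d between two groups, using the pooled standard deviation."""
    a = np.asarray(group_a, float)
    b = np.asarray(group_b, float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```

In this framing, the reported d = 0.58 would be the standardized difference between the groups' brain-age-gap distributions.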

Development and validation of a fusion model based on multi-phase contrast CT radiomics combined with clinical features for predicting Ki-67 expression in gastric cancer.

Song T, Xue B, Liu M, Chen L, Cao A, Du P

PubMed · Jul 1 2025
The present study aimed to develop and validate a fusion model based on multi-phase contrast-enhanced computed tomography (CECT) radiomics features combined with clinical features to preoperatively predict Ki-67 expression levels in patients with gastric cancer (GC). A total of 164 patients with GC who underwent surgical treatment at our hospital between September 2015 and September 2023 were retrospectively included and randomly divided into a training set (n=114) and a testing set (n=50). Using Pyradiomics, radiomics features were extracted from multi-phase CECT images and combined with significant clinical features through various machine learning algorithms [support vector machine (SVM), random forest (RandomForest), K-nearest neighbors (KNN), LightGBM and XGBoost] to build a fusion model. Receiver operating characteristic curves, area under the curve (AUC), calibration curves and decision curve analysis (DCA) were used to evaluate, validate and compare the models' predictive performance and clinical utility. Among the three single-phase models: in the arterial phase, the SVM radiomics model achieved the highest training-set AUC (0.697) and the RandomForest radiomics model the highest testing-set AUC (0.658); in the venous phase, the SVM radiomics model achieved the highest training-set AUC (0.783) and the LightGBM radiomics model the highest testing-set AUC (0.747); in the delayed phase, the KNN radiomics model achieved the highest training-set AUC (0.772) and the SVM radiomics model the highest testing-set AUC (0.719). The clinical feature model had the lowest AUC values in both the training and testing sets (0.614 and 0.520, respectively). Notably, the multi-phase model and the fusion model, constructed by combining the clinical features with the multi-phase features, demonstrated excellent discriminative performance; the fusion model achieved AUC values of 0.933 and 0.817 in the training and testing sets, outperforming the other models (DeLong test, both P<0.05). The calibration curve showed that the fusion model had good fit (Hosmer-Lemeshow test, P>0.5 in the training and validation sets). The DCA showed that the net benefit of the fusion model in identifying high Ki-67 expression was improved compared with the other models. Furthermore, the fusion model achieved an AUC of 0.805 on external validation data from The Cancer Imaging Archive. In conclusion, the fusion model established in the present study showed excellent performance and is expected to serve as a non-invasive tool for predicting Ki-67 status and guiding clinical treatment.
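Every model comparison in this study hinges on AUC. A self-contained sketch of its rank-based (Mann-Whitney) definition, independent of any ML library:

```python
def roc_auc(y_true, scores):
    """AUC = probability that a randomly chosen positive is scored above a
    randomly chosen negative, with ties counting half (Mann-Whitney U,
    normalized). Equivalent to the area under the ROC curve."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: of the four positive/negative pairs, three are ranked correctly.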

Automatic adult age estimation using bone mineral density of proximal femur via deep learning.

Cao Y, Ma Y, Zhang S, Li C, Chen F, Zhang J, Huang P

PubMed · Jul 1 2025
Accurate adult age estimation (AAE) is critical for forensic and anthropological applications, yet traditional methods relying on bone mineral density (BMD) face significant challenges due to biological variability and methodological limitations. This study aims to develop an end-to-end deep learning (DL)-based pipeline for automated AAE using BMD from proximal femoral CT scans. The main objectives are to construct a large-scale dataset of 5151 CT scans from real-world clinical and cadaver cohorts, fine-tune the Segment Anything Model (SAM) for accurate femoral bone segmentation, and evaluate multiple convolutional neural networks (CNNs) for precise age estimation based on segmented BMD data. Model performance was assessed through cross-validation, internal clinical testing, and external post-mortem validation. SAM achieved excellent segmentation performance, with a Dice coefficient of 0.928 and a mean intersection over union (mIoU) of 0.869. The CNN models achieved an average mean absolute error (MAE) of 5.20 years in cross-validation (male: 5.72; female: 4.51), which improved to 4.98 years in the independent clinical test set (male: 5.32; female: 4.56). External validation on the post-mortem dataset revealed an MAE of 6.91 years, with 6.97 for males and 6.69 for females. Ensemble learning further improved accuracy, reducing the MAE to 4.78 years (male: 5.12; female: 4.35) in the internal test set and 6.58 years (male: 6.64; female: 6.37) in the external validation set. These findings highlight the feasibility of DL-driven AAE and its potential for forensic applications, offering a fully automated framework for robust age estimation.
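The Dice and mIoU scores reported for the segmentation stage are straightforward overlap ratios computed from binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, bool)
    gt = np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union = |A ∩ B| / |A ∪ B| for binary masks."""
    pred = np.asarray(pred, bool)
    gt = np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union
```

mIoU is then the IoU averaged over classes or cases; Dice is always at least as large as IoU for the same pair of masks.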

Accelerating CEST MRI With Deep Learning-Based Frequency Selection and Parameter Estimation.

Shen C, Cheema K, Xie Y, Ruan D, Li D

PubMed · Jul 1 2025
Chemical exchange saturation transfer (CEST) MRI is a powerful molecular imaging technique for detecting metabolites through proton exchange. While CEST MRI provides high sensitivity, its clinical application is hindered by prolonged scan time due to the need for imaging across numerous frequency offsets for parameter estimation. Since scan time is directly proportional to the number of frequency offsets, identifying and selecting the most informative frequencies can significantly reduce acquisition time. We propose a novel deep learning-based framework that integrates frequency selection and parameter estimation to accelerate CEST MRI. Our method leverages channel pruning via batch normalization to identify the most informative frequency offsets while simultaneously training the network for accurate parametric map prediction. Using data from six healthy volunteers, channel pruning selects 13 informative frequency offsets out of 53 without compromising map quality. Images from the selected frequency offsets were reconstructed using the MR Multitasking method, which employs a low-rank tensor model to enable under-sampling of k-space lines for each frequency offset, further reducing scan time. Predicted parametric maps of amide proton transfer (APT), nuclear Overhauser effect (NOE), and magnetization transfer (MT) based on these selected frequencies were comparable in quality to maps generated using all frequency offsets, achieving superior performance compared to the Fisher information-based selection methods from our previous work. This integrated approach has the potential to reduce the whole-brain CEST MRI scan time from the original 5:30 min to under 1:30 min without compromising map quality. By leveraging deep learning for frequency selection and parametric map prediction, the proposed framework demonstrates its potential for efficient and practical clinical implementation. Future studies will focus on extending this method to patient populations and addressing challenges such as B0 inhomogeneity and abnormal signal variation in diseased tissues.
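Channel pruning via batch normalization commonly ranks input channels by the magnitude of their learned BN scale (gamma) and keeps the top k; treating that standard criterion as the selection rule here is an assumption, not a detail confirmed by the abstract. A minimal selection sketch:

```python
import numpy as np

def select_frequency_offsets(bn_gammas, offsets, k=13):
    """Keep the k frequency offsets whose batch-norm scale |gamma| is largest,
    using |gamma| as a learned per-channel importance score."""
    bn_gammas = np.asarray(bn_gammas, float)
    keep = np.argsort(-np.abs(bn_gammas))[:k]
    return sorted(offsets[i] for i in keep)
```

Scan time then scales with k: retaining 13 of 53 offsets removes roughly three quarters of the saturation-offset acquisitions before any k-space under-sampling.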

Mamba-based deformable medical image registration with an annotated brain MR-CT dataset.

Wang Y, Guo T, Yuan W, Shu S, Meng C, Bai X

PubMed · Jul 1 2025
Deformable registration is essential in medical image analysis, especially for handling various multi- and mono-modal registration tasks in neuroimaging. Existing studies have left brain MR-CT registration underexplored, and learning-based methods still face challenges in improving both accuracy and efficiency. To broaden the practice of multi-modal registration in the brain, we present SR-Reg, a new benchmark dataset comprising 180 volumetric paired MR-CT images with annotated anatomical regions. Building on this foundation, we introduce MambaMorph, a novel deformable registration network based on Mamba, an efficient state space model, for global feature learning, with a fine-grained feature extractor for low-level embedding. Experimental results demonstrate that MambaMorph surpasses advanced ConvNet-based and Transformer-based networks across several multi- and mono-modal tasks, showcasing impressive gains in efficacy and efficiency. Code and dataset are available at https://github.com/mileswyn/MambaMorph.
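Deformable registration networks of this kind output a dense displacement field that is applied to the moving image. A minimal nearest-neighbor warping sketch (real pipelines such as MambaMorph use differentiable linear interpolation inside the network; this simplified version is mine):

```python
import numpy as np

def warp_nearest(image, disp):
    """Warp a 2-D image by a displacement field disp of shape (2, H, W):
    output[y, x] = image[y + dy, x + dx], sampled nearest-neighbor,
    clamped at the image border."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + disp[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + disp[1]).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

A zero displacement field reproduces the input exactly, which makes this a convenient sanity check when wiring up a registration pipeline.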

Assessment of biventricular cardiac function using free-breathing artificial intelligence cine with motion correction: Comparison with standard multiple breath-holding cine.

Ran L, Yan X, Zhao Y, Yang Z, Chen Z, Jia F, Song X, Huang L, Xia L

PubMed · Jul 1 2025
To assess the image quality and biventricular function obtained with a free-breathing artificial intelligence cine method with motion correction (FB AI MOCO). A total of 72 participants (mean age 38.3 ± 15.4 years, 40 males), prospectively enrolled in this single-center, cross-sectional study, underwent cine scans using standard breath-holding (BH) cine and FB AI MOCO cine at 3.0 Tesla. The image quality of the cine images was evaluated on a 5-point ordinal Likert scale based on blood-pool-to-myocardium contrast, endocardial edge definition, and artifacts, with the overall quality score calculated as the equal-weight average of all three criteria; apparent signal-to-noise ratio (aSNR) and estimated contrast-to-noise ratio (eCNR) were also assessed. Biventricular functional parameters, including left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and LV end-diastolic mass (LVEDM), were assessed as well. Comparisons between the two sequences used the paired t-test and Wilcoxon signed-rank test, and correlations the Pearson correlation coefficient. The agreement of quantitative parameters was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analysis. P < 0.05 was considered statistically significant. The total acquisition time of the entire stack for FB AI MOCO cine (14.7 s ± 1.9 s) was notably shorter than that for standard BH cine (82.6 s ± 11.9 s, P < 0.001). The aSNR did not differ significantly between FB AI MOCO cine and standard BH cine (76.7 ± 20.7 vs. 79.8 ± 20.7, P = 0.193). The eCNR of FB AI MOCO cine was higher than that of standard BH cine (191.6 ± 54.0 vs. 155.8 ± 68.4, P < 0.001), as was the score for blood-pool-to-myocardium contrast (4.6 ± 0.5 vs. 4.4 ± 0.6, P = 0.003). Qualitative scores for endocardial edge definition (4.2 ± 0.5 vs. 4.3 ± 0.7, P = 0.123), artifact presence (4.3 ± 0.6 vs. 4.1 ± 0.8, P = 0.085), and overall image quality (4.4 ± 0.4 vs. 4.3 ± 0.6, P = 0.448) showed no significant differences between the two methods. Representative RV and LV functional parameters, including RVEDV (102.2 (86.4, 120.4) ml vs. 104.0 (88.5, 120.3) ml, P = 0.294), RVEF (31.0 ± 11.1 % vs. 31.2 ± 11.0 %, P = 0.570), and LVEDV (106.2 (86.7, 131.3) ml vs. 105.8 (84.4, 130.3) ml, P = 0.450), also did not differ significantly between the two methods. Strong correlations (r > 0.900) and excellent agreement (ICC > 0.900) were found for all biventricular functional parameters between the two sequences. In subgroups with reduced LVEF (<50 %, n = 24) or elevated heart rate (≥80 bpm, n = 17), no significant differences were observed in any biventricular functional metric (P > 0.05 for all) between the two sequences. In comparison to multiple-BH cine, FB AI MOCO cine achieved comparable image quality and biventricular functional parameters with shorter scan times, suggesting promising potential for clinical applications.
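The Bland-Altman agreement analysis used here is simply the bias of the paired differences with its 1.96-SD limits of agreement; a minimal sketch:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return (bias, lower limit, upper limit) of agreement for paired
    measurements from two methods, with limits at bias ± 1.96 SD."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

In this study the same pairing would apply, e.g., FB AI MOCO versus BH estimates of LVEDV per participant.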

A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets.

Yang Q, Su S, Zhang T, Wang M, Dou W, Li K, Ren Y, Zheng Y, Wang M, Xu Y, Sun Y, Liu Z, Tan T

PubMed · Jul 1 2025
Amide proton transfer (APT) imaging is a novel functional MRI technique that enables quantification of protein metabolism, but its wide clinical application is largely limited by its long acquisition time. One way to reduce scanning time is to acquire fewer frequency offset images. However, sparse frequency offset images are inadequate to fit the z-spectrum, the curve essential to quantifying the APT effect, which might compromise quantification. In our study, we develop a deep learning-based model that reconstructs dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short- and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9, 46.7]) and a structural similarity index of 0.989 (95% CI: [0.987, 0.993]) for the tumor region. We integrated a weighted layer into the model to evaluate the impact of each frequency offset on the reconstruction process; the weights assigned to the frequency offsets at ±6.5 ppm, 0 ppm, and 3.5 ppm were the most significant as learned by the model. Experimental results demonstrate that our proposed model effectively reconstructs dense frequency offsets (n = 29, from -7 to 7 ppm at 0.5 ppm intervals) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening the APT imaging acquisition time, offering potential guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.
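As a naive reference point for what the learned model replaces, a sparse z-spectrum can be resampled onto the dense 29-offset grid by plain linear interpolation (this baseline is an illustration of the task, not the authors' method):

```python
import numpy as np

def interpolate_zspectrum(sparse_offsets, sparse_signal, lo=-7.0, hi=7.0, step=0.5):
    """Resample a sparse z-spectrum onto the dense grid of offsets
    (29 points from -7 to 7 ppm at 0.5 ppm spacing) by linear interpolation."""
    dense = np.arange(lo, hi + step / 2, step)
    order = np.argsort(sparse_offsets)
    signal = np.interp(dense,
                       np.asarray(sparse_offsets, float)[order],
                       np.asarray(sparse_signal, float)[order])
    return dense, signal
```

The limitation motivating the paper is visible here: linear interpolation cannot recover the sharp CEST peaks between sampled offsets, which is precisely where a learned seq2seq reconstruction helps.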

Breast tumour classification in DCE-MRI via cross-attention and discriminant correlation analysis enhanced feature fusion.

Pan F, Wu B, Jian X, Li C, Liu D, Zhang N

PubMed · Jul 1 2025
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has proven highly sensitive in diagnosing breast tumours, owing to the kinetic and volumetric features inherent in it. To utilise this kinetics-related and volume-related information, this paper aims to develop and validate a method for differentiating benign and malignant breast tumours based on DCE-MRI, through fusing deep features and cross-attention-encoded radiomics features using discriminant correlation analysis (DCA). Classification experiments were conducted on a dataset comprising 261 individuals who underwent DCE-MRI, including those with multiple tumours, yielding 137 benign and 163 malignant tumours. To strengthen the correlation between features and reduce feature redundancy, a novel fusion method that fuses deep features and encoded radiomics features based on DCA (eFF-DCA) is proposed. The eFF-DCA comprises three components: (1) a feature extraction module to capture kinetic information across phases, (2) a radiomics feature encoding module employing a cross-attention mechanism to enhance inter-phase feature correlation, and (3) a DCA-based fusion module that transforms features to maximise intra-class correlation while minimising inter-class redundancy, facilitating effective classification. The proposed eFF-DCA method achieved an accuracy of 90.9% and an area under the receiver operating characteristic curve of 0.942, outperforming methods using single-modal features. The proposed eFF-DCA utilises DCE-MRI kinetic-related and volume-related features to improve breast tumour diagnosis accuracy, but its non-end-to-end design limits multimodal fusion. Future research should explore unified end-to-end deep learning architectures that enable seamless multimodal feature fusion and joint optimisation of feature extraction and classification.
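The cross-attention encoding in component (2) presumably follows the standard scaled dot-product pattern, in which features from one phase act as queries against keys/values from another phase; that reading is an assumption here. A single-head numpy sketch of the pattern:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: each query row is re-expressed as
    an attention-weighted mixture of the rows of keys_values."""
    d = queries.shape[-1]
    attn = softmax(queries @ keys_values.T / np.sqrt(d))
    return attn @ keys_values
```

In a full module, queries, keys, and values would first pass through learned linear projections; the sketch omits them to show only the inter-phase mixing that raises feature correlation.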

A Minimal Annotation Pipeline for Deep Learning Segmentation of Skeletal Muscles.

Baudin PY, Balsiger F, Beck L, Boisserie JM, Jouan S, Marty B, Reyngoudt H, Scheidegger O

PubMed · Jul 1 2025
Translating quantitative skeletal muscle MRI biomarkers into clinics requires efficient automatic segmentation methods. The purpose of this work is to investigate a simple yet effective iterative methodology for building a high-quality automatic segmentation model while minimizing the manual annotation effort. We used a retrospective database of quantitative MRI examinations (n = 70) of healthy and pathological thighs for training a nnU-Net segmentation model. The cohort comprised healthy volunteers and patients with various neuromuscular diseases (NMDs), broadly categorized as dystrophic, inflammatory, neurogenic, and unlabeled. We designed an iterative procedure, progressively adding cases to the training set and using a simple visual five-level rating scale to judge the validity of generated segmentations for clinical use. On an independent test set (n = 20), we assessed the quality of the segmentation in 13 individual thigh muscles using standard segmentation metrics (Dice coefficient (DICE) and 95% Hausdorff distance (HD95)) and quantitative biomarkers (cross-sectional area (CSA), fat fraction (FF), and water-T1/T2). We obtained high-quality segmentations (DICE = 0.88 ± 0.15/0.86 ± 0.14, HD95 = 6.35 ± 12.33/6.74 ± 11.57 mm), comparable to recent works, although with a smaller training set (n = 30). Inter-rater agreement on the five-level scale was fair to moderate but improved progressively across the iterations of the segmentation model. We observed limited differences from manually delineated segmentations on the quantitative outcomes (mean absolute difference: CSA = 65.2 mm², FF = 1%, water-T1 = 8.4 ms, water-T2 = 0.35 ms), with variability comparable to manual delineations.
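The quantitative outcomes here derive directly from the masks: cross-sectional area is the labeled pixel count times the pixel area, and auto-versus-manual agreement is summarized by a mean absolute difference. A minimal sketch (function names are illustrative):

```python
import numpy as np

def cross_sectional_area(mask, pixel_area_mm2):
    """CSA in mm^2 from a binary muscle mask on one slice."""
    return float(np.count_nonzero(mask) * pixel_area_mm2)

def mean_abs_diff(auto_values, manual_values):
    """Mean absolute difference between automatic and manual measurements
    of the same biomarker across cases."""
    auto = np.asarray(auto_values, float)
    manual = np.asarray(manual_values, float)
    return float(np.mean(np.abs(auto - manual)))
```

The same `mean_abs_diff` applies to each biomarker column (CSA, FF, water-T1, water-T2) to reproduce the style of summary reported above.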

The implementation of artificial intelligence in serial monitoring of post gamma knife vestibular schwannomas: A pilot study.

Singh M, Jester N, Lorr S, Briano A, Schwartz N, Mahajan A, Chiang V, Tommasini SM, Wiznia DH, Buono FD

PubMed · Jul 1 2025
Vestibular schwannomas (VS) are benign tumors that can lead to hearing loss, balance issues, and tinnitus. Gamma Knife radiosurgery (GKS) is a common treatment for VS, aimed at halting tumor growth and preserving neurological function. Accurate monitoring of VS volume before and after GKS is essential for assessing treatment efficacy. The aim was to evaluate the accuracy of an artificial intelligence (AI) algorithm, originally developed to identify NF2-SWN-related VS, in segmenting non-NF2-SWN-related VS and detecting volume changes pre- and post-GKS. We hypothesized that this AI algorithm, trained on NF2-SWN-related VS data, would generalize to non-NF2-SWN VS and to VS treated with GKS. In this retrospective cohort study, we reviewed data from an established Gamma Knife database, identifying 16 patients who underwent GKS for VS and had pre- and post-GKS scans. Contrast-enhanced T1-weighted MRI scans were analyzed with both manual segmentation and the AI algorithm. DICE similarity coefficients were computed to compare AI and manual segmentations, and a paired t-test was used to assess statistical significance. Volume changes between pre- and post-GKS scans were calculated for both segmentation methods. The mean DICE score between AI and manual segmentations was 0.91 (range 0.79-0.97); pre- and post-GKS DICE scores were 0.91 (range 0.79-0.97) and 0.92 (range 0.81-0.97), indicating high spatial overlap. AI-segmented VS volumes pre- and post-GKS were consistent with manual measurements, and the algorithm processed scans within 5 min, suggesting it offers a reliable, efficient alternative for clinical monitoring. The pre- and post-GKS volume percentage changes were also similar between manual and AI-segmented volumes, indicating that the algorithm can accurately detect changes in tumor growth.
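Serial monitoring of this kind reduces to comparing segmented volumes across time points. A minimal sketch of volume from a voxel mask and the percentage change between scans (helper names are illustrative):

```python
import numpy as np

def mask_volume_mm3(mask, voxel_volume_mm3):
    """Tumor volume from a binary 3-D segmentation mask."""
    return float(np.count_nonzero(mask) * voxel_volume_mm3)

def volume_pct_change(pre_mm3, post_mm3):
    """Percentage change from pre- to post-GKS volume
    (negative values indicate tumor shrinkage)."""
    return 100.0 * (post_mm3 - pre_mm3) / pre_mm3
```

Applying these to both the manual and AI masks per patient yields the paired percentage changes that the study compares between the two segmentation methods.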
