Page 4 of 1751742 results

Machine learning in neuroimaging and computational pathophysiology of Parkinson's disease: A comprehensive review and meta-analysis.

Sharma K, Shanbhog M, Singh K

PubMed | Jul 1, 2025
In recent years, machine learning and deep learning have shown potential for improving the diagnosis of Parkinson's disease (PD), one of the most common neurodegenerative diseases. This comprehensive analysis examines machine learning- and deep learning-based PD diagnosis using MRI, speech, and handwriting datasets. To analyze PD thoroughly, this study collected data from the scientific literature, experimental investigations, publicly accessible datasets, and global health reports. It examines the worldwide historical context of Parkinson's disease, focusing on its increasing prevalence and the inequities in treatment access across regions. A comprehensive summary consolidates essential findings from clinical investigations and pertinent datasets related to PD management, and the worldwide context, prospective treatments, therapies, and drugs for Parkinson's disease are thoroughly examined. The analysis identifies significant research gaps and suggests future directions, emphasizing the need for larger and more diverse datasets and improved model accessibility. The study also proposes the Meta-Park model for diagnosing Parkinson's disease, achieving training, testing, and validation accuracies of 97.67%, 95%, and 94.04%, respectively. This method provides a dependable and scalable way to improve clinical decision-making in PD management. By merging the proposed method with a thorough examination of existing interventions, this research seeks to support data-driven decisions for early diagnosis and effective treatment, offering renewed hope to patients and the medical community.
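The abstract does not specify Meta-Park's architecture, so the following is only a hedged sketch of the general idea it describes: fusing features from several modalities (MRI, speech, handwriting) into a single classifier. All features and labels below are synthetic stand-ins, and the plain-NumPy logistic regression is an illustrative placeholder, not the authors' model.

```python
import numpy as np

# Synthetic stand-ins for multimodal PD features (not real patient data).
rng = np.random.default_rng(0)
n = 200
mri = rng.normal(0, 1, (n, 4))       # hypothetical MRI-derived features
speech = rng.normal(0, 1, (n, 3))    # hypothetical speech features
writing = rng.normal(0, 1, (n, 3))   # hypothetical handwriting features

# Early fusion: concatenate modality features into one vector per subject.
X = np.hstack([mri, speech, writing])
w_true = rng.normal(0, 1, X.shape[1])
y = (X @ w_true + rng.normal(0, 0.5, n) > 0).astype(float)

# Logistic regression trained by gradient descent (no external ML library).
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / n         # gradient step on log-loss

train_acc = np.mean(((X @ w) > 0) == y.astype(bool))
```

Any real diagnostic model would of course need held-out validation, as the abstract's separate training/testing/validation accuracies imply.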

Improve robustness to mismatched sampling rate: An alternating deep low-rank approach for exponential function reconstruction and its biomedical magnetic resonance applications.

Huang Y, Wang Z, Zhang X, Cao J, Tu Z, Lin M, Li L, Jiang X, Guo D, Qu X

PubMed | Jul 1, 2025
Undersampling accelerates signal acquisition at the expense of introducing artifacts. Removing these artifacts is a fundamental problem in signal processing, a task also called signal reconstruction. By modeling signals as superimposed exponential functions, deep learning has achieved fast and high-fidelity signal reconstruction by training a mapping from undersampled exponentials to fully sampled ones. However, mismatches between the training and target data, such as in undersampling rate (25% vs. 50%), anatomical region (knee vs. brain), and contrast configuration (PDw vs. T<sub>2</sub>w), heavily compromise reconstruction. To overcome this limitation, we propose Alternating Deep Low-Rank (ADLR), which combines deep learning solvers with classic optimization solvers. Experimental validation on the reconstruction of synthetic and real-world biomedical magnetic resonance signals demonstrates that ADLR effectively alleviates the mismatch issue and achieves lower reconstruction errors than state-of-the-art methods.
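The classic low-rank half that ADLR alternates with its deep solver can be sketched without the learned component: a sum of R exponentials yields a Hankel matrix of rank R, so one can alternate rank-R truncation with data consistency on the sampled entries. This is a generic structured low-rank iteration on a synthetic signal, not the authors' implementation.

```python
import numpy as np

# Synthetic two-component exponential signal, ~50% randomly undersampled.
rng = np.random.default_rng(1)
N, R, L = 64, 2, 32
t = np.arange(N)
signal = np.exp((-0.01 + 0.3j) * t) + 0.7 * np.exp((-0.02 + 1.1j) * t)
mask = rng.random(N) < 0.5
measured = signal * mask

def hankel(x):
    # H[i, j] = x[i + j], shape (L, N - L + 1)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

x = measured.copy()
for _ in range(100):
    U, s, Vh = np.linalg.svd(hankel(x), full_matrices=False)
    Hr = (U[:, :R] * s[:R]) @ Vh[:R]     # rank-R truncation
    # average anti-diagonals back into a length-N signal
    x_new = np.zeros(N, dtype=complex)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(N - L + 1):
            x_new[i + j] += Hr[i, j]
            counts[i + j] += 1
    x = x_new / counts
    x[mask] = signal[mask]               # data consistency on measured samples

err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
```

ADLR's contribution, per the abstract, is interleaving a learned solver with steps of this kind to tolerate training/target mismatch.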

Association of Psychological Resilience With Decelerated Brain Aging in Cognitively Healthy World Trade Center Responders.

Seeley SH, Fremont R, Schreiber Z, Morris LS, Cahn L, Murrough JW, Schiller D, Charney DS, Pietrzak RH, Perez-Rodriguez MM, Feder A

PubMed | Jul 1, 2025
Despite their exposure to potentially traumatic stressors, the majority of World Trade Center (WTC) responders-those who worked on rescue, recovery, and cleanup efforts on or following September 11, 2001-have shown psychological resilience, never developing long-term psychopathology. Psychological resilience may be protective against the earlier age-related cognitive changes associated with posttraumatic stress disorder (PTSD) in this cohort. In the current study, we calculated the difference between estimated brain age from structural magnetic resonance imaging (MRI) data and chronological age in WTC responders who participated in a parent functional MRI study of resilience (<i>N</i> = 97). We hypothesized that highly resilient responders would show the least brain aging and explored associations between brain aging and psychological and cognitive measures. WTC responders screened for the absence of cognitive impairment were classified into 3 groups: a WTC-related PTSD group (<i>n</i> = 32), a Highly Resilient group without lifetime psychopathology despite high WTC-related exposure (<i>n</i> = 34), and a Lower WTC-Exposed control group also without lifetime psychopathology (<i>n</i> = 31). We used <i>BrainStructureAges</i>, a deep learning algorithm that estimates voxelwise age from T1-weighted MRI data to calculate decelerated (or accelerated) brain aging relative to chronological age. Globally, brain aging was decelerated in the Highly Resilient group and accelerated in the PTSD group, with a significant group difference (<i>p</i> = .021, Cohen's <i>d</i> = 0.58); the Lower WTC-Exposed control group exhibited no significant brain age gap or group difference. Lesser brain aging was associated with resilience-linked factors including lower emotional suppression, greater optimism, and better verbal learning. Cognitively healthy WTC responders show differences in brain aging related to resilience and PTSD.
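The study's core quantity, the "brain age gap," is predicted brain age minus chronological age, and its group difference is summarized with Cohen's d. A minimal sketch with synthetic numbers (not study data; group sizes borrowed from the abstract):

```python
import numpy as np

# Synthetic brain-age gaps: negative = decelerated aging, positive = accelerated.
rng = np.random.default_rng(2)
resilient_gap = rng.normal(-1.5, 4.0, 34)   # Highly Resilient group (n = 34)
ptsd_gap = rng.normal(1.5, 4.0, 32)         # PTSD group (n = 32)

def cohens_d(a, b):
    # Cohen's d for two independent groups, using the pooled standard deviation.
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled

d = cohens_d(resilient_gap, ptsd_gap)
```

The reported d = 0.58 corresponds to a medium-sized separation between the two groups' gap distributions.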

Development and validation of a fusion model based on multi-phase contrast CT radiomics combined with clinical features for predicting Ki-67 expression in gastric cancer.

Song T, Xue B, Liu M, Chen L, Cao A, Du P

PubMed | Jul 1, 2025
The present study aimed to develop and validate a fusion model based on multi-phase contrast-enhanced computed tomography (CECT) radiomics features combined with clinical features to preoperatively predict Ki-67 expression levels in patients with gastric cancer (GC). A total of 164 patients with GC who underwent surgical treatment at our hospital between September 2015 and September 2023 were retrospectively included and randomly divided into a training set (n=114) and a testing set (n=50). Using Pyradiomics, radiomics features were extracted from multi-phase CECT images and combined with significant clinical features through various machine learning algorithms [support vector machine (SVM), random forest (RandomForest), K-nearest neighbors (KNN), LightGBM and XGBoost] to build a fusion model. Receiver operating characteristic (ROC) curves, area under the curve (AUC), calibration curves and decision curve analysis (DCA) were used to evaluate, validate and compare the predictive performance and clinical utility of the models. Among the three single-phase models: in the arterial phase, the SVM radiomics model had the highest training-set AUC (0.697) and the RandomForest radiomics model the highest testing-set AUC (0.658); in the venous phase, the SVM radiomics model had the highest training-set AUC (0.783) and the LightGBM radiomics model the highest testing-set AUC (0.747); in the delayed phase, the KNN radiomics model had the highest training-set AUC (0.772) and the SVM radiomics model the highest testing-set AUC (0.719). The clinical feature model had the lowest AUC values in both the training and testing sets (0.614 and 0.520, respectively). Notably, the multi-phase model and the fusion model, constructed by combining the clinical features with the multi-phase features, demonstrated excellent discriminative performance, with the fusion model achieving AUC values of 0.933 and 0.817 in the training and testing sets, outperforming the other models (DeLong test, both P<0.05). The calibration curve showed that the fusion model had good fit (Hosmer-Lemeshow test, P>0.5 in the training and validation sets). The DCA showed that the net benefit of the fusion model in identifying high Ki-67 expression was improved compared with the other models. Furthermore, the fusion model achieved an AUC value of 0.805 on external validation data from The Cancer Imaging Archive. In conclusion, the fusion model established in the present study showed excellent performance and is expected to serve as a non-invasive tool for predicting Ki-67 status and guiding clinical treatment.
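The fusion step described here — concatenating radiomics and clinical features, then scoring a classifier by AUC — can be sketched directly from the rank definition of AUC, with no modeling library. Everything below is synthetic and illustrative; the stand-in "model" is just a linear score, not any of the classifiers the study compares.

```python
import numpy as np

# Synthetic stand-ins: 10 radiomics features (e.g. Pyradiomics outputs pooled
# across CECT phases) and 2 clinical covariates, for 120 hypothetical patients.
rng = np.random.default_rng(3)
n = 120
radiomics = rng.normal(0, 1, (n, 10))
clinical = rng.normal(0, 1, (n, 2))
X = np.hstack([radiomics, clinical])          # feature-level fusion

# Toy labels (high vs. low Ki-67) driven by two of the features, plus noise.
y = (X[:, 0] + X[:, -1] + rng.normal(0, 1, n) > 0).astype(int)
scores = X[:, 0] + X[:, -1]                   # stand-in model output

def auc(y_true, y_score):
    # AUC = probability a random positive case outranks a random negative case.
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    return np.mean(pos[:, None] > neg[None, :])

a = auc(y, scores)
```

The study's DeLong test then compares such AUCs between models; calibration (Hosmer-Lemeshow) and DCA assess fit and net clinical benefit separately.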

Automatic adult age estimation using bone mineral density of proximal femur via deep learning.

Cao Y, Ma Y, Zhang S, Li C, Chen F, Zhang J, Huang P

PubMed | Jul 1, 2025
Accurate adult age estimation (AAE) is critical for forensic and anthropological applications, yet traditional methods relying on bone mineral density (BMD) face significant challenges due to biological variability and methodological limitations. This study aims to develop an end-to-end deep learning (DL)-based pipeline for automated AAE using BMD from proximal femoral CT scans. The main objectives are to construct a large-scale dataset of 5151 CT scans from real-world clinical and cadaver cohorts, fine-tune the Segment Anything Model (SAM) for accurate femoral bone segmentation, and evaluate multiple convolutional neural networks (CNNs) for precise age estimation based on segmented BMD data. Model performance was assessed through cross-validation, internal clinical testing, and external post-mortem validation. SAM achieved excellent segmentation performance, with a Dice coefficient of 0.928 and a mean intersection over union (mIoU) of 0.869. The CNN models achieved an average mean absolute error (MAE) of 5.20 years in cross-validation (male: 5.72; female: 4.51), which improved to 4.98 years on the independent clinical test set (male: 5.32; female: 4.56). External validation on the post-mortem dataset revealed an MAE of 6.91 years, with 6.97 for males and 6.69 for females. Ensemble learning further improved accuracy, reducing the MAE to 4.78 years (male: 5.12; female: 4.35) on the internal test set and 6.58 years (male: 6.64; female: 6.37) on the external validation set. These findings highlight the feasibility of DL-driven AAE and its potential for forensic applications, offering a fully automated framework for robust age estimation.
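The metrics in this abstract (Dice, IoU for segmentation; MAE and ensemble averaging for age estimation) are standard enough to sketch on toy arrays. The masks and age values below are made up for illustration, not derived from the study's CT data.

```python
import numpy as np

def dice(pred, gt):
    # Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks.
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    # IoU = |A ∩ B| / |A ∪ B|; mIoU averages this over classes or cases.
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

# Two overlapping 4x4 toy "femur" masks on an 8x8 grid.
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), bool); gt[3:7, 3:7] = True

# Ensemble step: averaging several CNNs' age predictions typically lowers MAE.
true_age = np.array([30.0, 45.0, 60.0])
model_preds = np.array([[33, 41, 66],      # hypothetical CNN 1
                        [28, 48, 57],      # hypothetical CNN 2
                        [31, 44, 62]], float)
ensemble = model_preds.mean(axis=0)
mae_single = np.abs(model_preds - true_age).mean(axis=1)
mae_ensemble = np.abs(ensemble - true_age).mean()
```

Averaging works because the individual models' errors partially cancel, which matches the abstract's drop in MAE from 4.98 to 4.78 years under ensembling.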

Diagnostic tools in respiratory medicine (Review).

Georgakopoulou VE, Spandidos DA, Corlateanu A

PubMed | Jul 1, 2025
Recent advancements in diagnostic technologies have significantly transformed the landscape of respiratory medicine, aiming for early detection, improved specificity and personalized therapeutic strategies. Innovations in imaging such as multi-slice computed tomography (CT) scanners, high-resolution CT and magnetic resonance imaging (MRI) have revolutionized our ability to visualize and assess the structural and functional aspects of the respiratory system. These techniques are complemented by breakthroughs in molecular biology that have identified specific biomarkers and genetic determinants of respiratory diseases, enabling targeted diagnostic approaches. Additionally, functional tests including spirometry and exercise testing continue to provide valuable insights into pulmonary function and capacity. The integration of artificial intelligence is poised to further refine these diagnostic tools, enhancing their accuracy and efficiency. The present narrative review explores these developments and their impact on the management and outcomes of respiratory conditions, underscoring the ongoing shift towards more precise and less invasive diagnostic modalities in respiratory medicine.

Use of Artificial Intelligence and Machine Learning in Critical Care Ultrasound.

Peck M, Conway H

PubMed | Jul 1, 2025
This article explores the transformative potential of artificial intelligence (AI) in critical care ultrasound. AI technologies, notably deep learning and convolutional neural networks, now assist in image acquisition, interpretation, and quality assessment, streamlining workflow and reducing operator variability. By automating routine tasks, AI enhances diagnostic accuracy and bridges training gaps, potentially democratizing advanced ultrasound techniques. Furthermore, AI's integration into tele-ultrasound systems shows promise in extending expert-level diagnostics to underserved areas, significantly broadening access to quality care. The article highlights the ongoing need for explainable AI systems to gain clinician trust and facilitate broader adoption.

Accelerating CEST MRI With Deep Learning-Based Frequency Selection and Parameter Estimation.

Shen C, Cheema K, Xie Y, Ruan D, Li D

PubMed | Jul 1, 2025
Chemical exchange saturation transfer (CEST) MRI is a powerful molecular imaging technique for detecting metabolites through proton exchange. While CEST MRI provides high sensitivity, its clinical application is hindered by prolonged scan time due to the need for imaging across numerous frequency offsets for parameter estimation. Since scan time is directly proportional to the number of frequency offsets, identifying and selecting the most informative frequencies can significantly reduce acquisition time. We propose a novel deep learning-based framework that integrates frequency selection and parameter estimation to accelerate CEST MRI. Our method leverages channel pruning via batch normalization to identify the most informative frequency offsets while simultaneously training the network for accurate parametric map prediction. Using data from six healthy volunteers, channel pruning selected 13 informative frequency offsets out of 53 without compromising map quality. Images from the selected frequency offsets were reconstructed using the MR Multitasking method, which employs a low-rank tensor model to enable under-sampling of k-space lines for each frequency offset, further reducing scan time. Predicted parametric maps of amide proton transfer (APT), nuclear Overhauser effect (NOE), and magnetization transfer (MT) based on these selected frequencies were comparable in quality to maps generated using all frequency offsets, achieving superior performance compared with Fisher information-based selection methods from our previous work. This integrated approach has the potential to reduce the whole-brain CEST MRI scan time from the original 5:30 min to under 1:30 min without compromising map quality. By leveraging deep learning for frequency selection and parametric map prediction, the proposed framework demonstrates its potential for efficient and practical clinical implementation. Future studies will focus on extending this method to patient populations and addressing challenges such as B<sub>0</sub> inhomogeneity and abnormal signal variation in diseased tissues.
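The selection mechanism the abstract describes — batch-norm-based channel pruning — keeps input channels (here, frequency offsets) whose learned batch-norm scale γ remains large under an L1 sparsity penalty, and drops the rest. The sketch below fakes the trained γ values with random numbers purely to show the selection step; it is not the authors' network.

```python
import numpy as np

# Hypothetical post-training batch-norm scales: an L1 penalty during training
# drives most |gamma| toward zero, leaving a sparse set of "informative" ones.
rng = np.random.default_rng(4)
n_offsets = 53
gamma = rng.normal(0, 1, n_offsets) * (rng.random(n_offsets) < 0.3)
gamma += rng.normal(0, 0.01, n_offsets)   # small residual values on pruned channels

# Keep the k channels with the largest |gamma| (k = 13, as in the abstract).
k = 13
keep = np.argsort(np.abs(gamma))[-k:]
selected = np.sort(keep)                   # indices of retained frequency offsets
```

In the real pipeline only these offsets would be acquired, which is what cuts the scan from 5:30 to under 1:30 min.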

Mamba-based deformable medical image registration with an annotated brain MR-CT dataset.

Wang Y, Guo T, Yuan W, Shu S, Meng C, Bai X

PubMed | Jul 1, 2025
Deformable registration is essential in medical image analysis, especially for handling multi- and mono-modal registration tasks in neuroimaging. Existing studies have rarely explored brain MR-CT registration, and learning-based methods still face challenges in improving both accuracy and efficiency. To broaden the practice of multi-modal registration in the brain, we present SR-Reg, a new benchmark dataset comprising 180 volumetric paired MR-CT images with annotated anatomical regions. Building on this foundation, we introduce MambaMorph, a novel deformable registration network that uses the efficient state space model Mamba for global feature learning, together with a fine-grained feature extractor for low-level embedding. Experimental results demonstrate that MambaMorph surpasses advanced ConvNet-based and Transformer-based networks across several multi- and mono-modal tasks, with notable gains in both accuracy and efficiency. Code and dataset are available at https://github.com/mileswyn/MambaMorph.
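All learning-based deformable registration networks of this kind share one final step: the network predicts a dense displacement field, which is applied to the moving image by interpolated resampling (a spatial transformer). A 2D NumPy sketch of that warping step follows; the displacement field is a made-up uniform translation, not a MambaMorph output.

```python
import numpy as np

def warp(img, flow):
    # img: (H, W); flow: (H, W, 2) per-pixel displacements in (row, col).
    # Output[i, j] samples img at (i + flow_r, j + flow_c) bilinearly.
    H, W = img.shape
    rr, cc = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    r = np.clip(rr + flow[..., 0], 0, H - 1)
    c = np.clip(cc + flow[..., 1], 0, W - 1)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1, c1 = np.minimum(r0 + 1, H - 1), np.minimum(c0 + 1, W - 1)
    wr, wc = r - r0, c - c0
    return (img[r0, c0] * (1 - wr) * (1 - wc) + img[r1, c0] * wr * (1 - wc)
            + img[r0, c1] * (1 - wr) * wc + img[r1, c1] * wr * wc)

# Toy moving image with a single bright pixel; shift the sampling grid by +1.
img = np.zeros((6, 6)); img[2, 2] = 1.0
flow = np.full((6, 6, 2), 1.0)
warped = warp(img, flow)   # bright pixel now appears at (1, 1)
```

Training then optimizes the predicted field so the warped moving image matches the fixed image under a similarity loss plus a smoothness penalty.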

Assessment of biventricular cardiac function using free-breathing artificial intelligence cine with motion correction: Comparison with standard multiple breath-holding cine.

Ran L, Yan X, Zhao Y, Yang Z, Chen Z, Jia F, Song X, Huang L, Xia L

PubMed | Jul 1, 2025
To assess image quality and biventricular function using a free-breathing artificial intelligence cine method with motion correction (FB AI MOCO). A total of 72 participants (mean age 38.3 ± 15.4 years, 40 males) prospectively enrolled in this single-center, cross-sectional study underwent cine scans with standard breath-holding (BH) cine and FB AI MOCO cine at 3.0 Tesla. Image quality was evaluated on a 5-point ordinal Likert scale based on blood-pool to myocardium contrast, endocardial edge definition, and artifacts; an overall quality score was calculated as the equal-weight average of the three criteria, and apparent signal-to-noise ratio (aSNR) and estimated contrast-to-noise ratio (eCNR) were assessed. Biventricular functional parameters, including left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and LV end-diastolic mass (LVEDM), were also assessed. The two sequences were compared using the paired t-test and Wilcoxon signed-rank test, and correlation was assessed using Pearson correlation. Agreement of quantitative parameters was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analysis; P < 0.05 was considered statistically significant. The total acquisition time of the entire stack for FB AI MOCO cine (14.7 s ± 1.9 s) was notably shorter than that for standard BH cine (82.6 s ± 11.9 s, P < 0.001). The aSNR did not differ significantly between FB AI MOCO cine and standard BH cine (76.7 ± 20.7 vs. 79.8 ± 20.7, P = 0.193). The eCNR of FB AI MOCO cine was higher than that of standard BH cine (191.6 ± 54.0 vs. 155.8 ± 68.4, P < 0.001), as was the score for blood-pool to myocardium contrast (4.6 ± 0.5 vs. 4.4 ± 0.6, P = 0.003). Qualitative scores for endocardial edge definition (4.2 ± 0.5 vs. 4.3 ± 0.7, P = 0.123), artifact presence (4.3 ± 0.6 vs. 4.1 ± 0.8, P = 0.085), and overall image quality (4.4 ± 0.4 vs. 4.3 ± 0.6, P = 0.448) showed no significant differences between the two methods. Representative RV and LV functional parameters, including RVEDV (102.2 (86.4, 120.4) ml vs. 104.0 (88.5, 120.3) ml, P = 0.294), RVEF (31.0 ± 11.1 % vs. 31.2 ± 11.0 %, P = 0.570), and LVEDV (106.2 (86.7, 131.3) ml vs. 105.8 (84.4, 130.3) ml, P = 0.450), also did not differ significantly between the two methods. Strong correlations (r > 0.900) and excellent agreement (ICC > 0.900) were found for all biventricular functional parameters between the two sequences. In subgroups with reduced LVEF (<50 %, n = 24) or elevated heart rate (≥80 bpm, n = 17), no significant differences were observed in any biventricular functional metric between the two sequences (P > 0.05 for all). Compared with multiple-BH cine, FB AI MOCO cine achieved comparable image quality and biventricular functional parameters with shorter scan times, suggesting promising potential for clinical application.
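The agreement statistics used here (Bland-Altman bias with 95% limits of agreement, and an ICC) are easy to sketch on synthetic paired measurements. The numbers below are invented to mimic paired LVEDV readings; the one-way ANOVA-style ICC is a simplified stand-in for whichever ICC form the study used.

```python
import numpy as np

# Synthetic paired measurements: same 30 "subjects" measured by both sequences.
rng = np.random.default_rng(5)
truth = rng.normal(100, 20, 30)            # e.g. true LVEDV in ml (hypothetical)
bh = truth + rng.normal(0, 3, 30)          # breath-hold cine reading
fb = truth + rng.normal(0, 3, 30)          # FB AI MOCO cine reading

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = fb - bh
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Simple ICC(1,1)-style estimate from a one-way variance decomposition.
k, n = 2, 30
data = np.stack([bh, fb])                  # shape (k sequences, n subjects)
MSB = k * data.mean(axis=0).var(ddof=1)    # between-subject mean square
MSW = ((data - data.mean(axis=0)) ** 2).sum() / (n * (k - 1))  # within-subject
icc = (MSB - MSW) / (MSB + (k - 1) * MSW)
```

With between-subject spread much larger than the per-sequence noise, the ICC lands near 1, matching the study's report of ICC > 0.900 for all functional parameters.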
