Page 1 of 24238 results
You are viewing papers added to our database from 2025-09-29 to 2025-10-05.

Brain metabolic imaging with 18F-PET-CT and machine-learning clustering analysis reveal divergent metabolic phenotypes in patients with amyotrophic lateral sclerosis.

Zhang J, Han F, Wang X, Wu F, Song X, Liu Q, Wang J, Grecucci A, Zhang Y, Yi X, Chen BT

PubMed · Oct 3 2025
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disorder characterized by significant clinicopathologic heterogeneity. This study aimed to identify distinct ALS phenotypes by integrating brain 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) metabolic imaging with consensus clustering. The study prospectively enrolled 127 patients with ALS and 128 healthy controls. All participants underwent brain 18F-FDG PET-CT metabolic imaging, psychological questionnaires, and functional screening. K-means consensus clustering was applied to define neuroimaging-based phenotypes, and survival analyses were performed. Whole exome sequencing (WES) was used to detect ALS-related genetic mutations, followed by GO/KEGG pathway enrichment and imaging-transcriptome analysis based on brain metabolic activity on 18F-FDG PET-CT imaging. Consensus clustering identified two metabolic phenotypes, i.e., a metabolic attenuation phenotype and a metabolic non-attenuation phenotype, according to their glucose metabolic activity patterns. The metabolic attenuation phenotype was associated with worse survival (p = 0.022), poorer physical function (p = 0.005), more severe depression (p = 0.026), and greater anxiety (p = 0.05). WES and neuroimaging-transcriptome analysis identified specific gene mutations and molecular pathways associated with each phenotype. We identified two distinct ALS phenotypes with differing clinicopathologic features, indicating that unsupervised machine learning applied to PET imaging may effectively classify metabolic subtypes of ALS. These findings contribute novel insights into the heterogeneous pathophysiology of ALS and should inform personalized therapeutic strategies for patients with ALS.
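The k-means consensus clustering step can be illustrated with a minimal sketch: run k-means on repeated random subsamples, accumulate how often each pair of participants is co-clustered, then cluster the resulting consensus matrix. The function name `consensus_kmeans` and all parameter values are illustrative, not the study's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_kmeans(X, k=2, n_runs=20, subsample=0.8, seed=0):
    """Toy consensus clustering: k-means on random subsamples, a pairwise
    co-assignment (consensus) matrix, then k-means on that matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    co = np.zeros((n, n))      # times a pair landed in the same cluster
    counts = np.zeros((n, n))  # times a pair was co-sampled
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(1_000_000))).fit_predict(X[idx])
        same = labels[:, None] == labels[None, :]
        co[np.ix_(idx, idx)] += same
        counts[np.ix_(idx, idx)] += 1
    consensus = np.divide(co, counts, out=np.zeros_like(co), where=counts > 0)
    final = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(consensus)
    return final, consensus
```

On well-separated data, the consensus matrix is close to block-diagonal, so the final clustering is stable across runs.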

Enhanced retinal blood vessel segmentation via loss balancing in dense generative adversarial networks with quick attention mechanisms.

Sandeep D, Baranitharan K, Padmavathi A, Guganathan L

PubMed · Oct 3 2025
Manual segmentation of retinal blood vessels in fundus images has been widely used for detecting vascular occlusion, diabetic retinopathy, and other retinal conditions. However, existing automated methods face challenges in accurately segmenting fine vessels and optimizing loss functions effectively. This study aims to develop an integrated framework that enhances vessel segmentation accuracy and robustness for clinical applications. The proposed pipeline integrates multiple advanced techniques to address the limitations of current approaches. In preprocessing, Quasi-Cross Bilateral Filtering (QCBF) is applied to reduce noise and enhance vessel visibility. Feature extraction is performed using a Directed Acyclic Graph Neural Network with VGG16 (DAGNN-VGG16) for hierarchical and topologically-aware representation learning. Segmentation is achieved using a Dense Generative Adversarial Network with Quick Attention Network (Dense GAN-QAN), which balances loss and emphasizes critical vessel features. To further optimize training convergence, the Swarm Bipolar Algorithm (SBA) is employed for loss minimization. The method was evaluated on three benchmark retinal vessel segmentation datasets (CHASE-DB1, STARE, and DRIVE) using sixfold cross-validation. The proposed approach achieved consistently high performance, with mean accuracy of 99.87%, F1-score of 99.82%, precision of 99.84%, recall of 99.78%, and specificity of 99.87% across all datasets, demonstrating strong generalization and robustness. The integrated QCBF-DAGNN-VGG16-Dense GAN-QAN-SBA framework advances the state of the art in retinal vessel segmentation by effectively handling fine vessel structures and ensuring optimized training. Its consistently high performance across multiple datasets highlights its potential for reliable clinical deployment in retinal disease detection and diagnosis.
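The accuracy, precision, recall, specificity, and F1 figures reported above are standard confusion-matrix quantities. A minimal sketch of how they are computed from binary masks (the helper name `segmentation_metrics` is hypothetical):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise binary classification metrics of the kind reported above."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "f1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
    }
```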

A Multimodal Classification Method for Nasal Obstruction Severity Based on Computed Tomography and Nasal Resistance.

Wang Q, Li S, Sun H, Cui S, Song W

PubMed · Oct 3 2025
The assessment of the degree of nasal obstruction is valuable in disease diagnosis, quality-of-life assessment, and epidemiological studies. To this end, this article proposes a multimodal nasal obstruction severity classification model based on cone beam computed tomography (CBCT) images and nasal resistance measurements. The model consists of four modules: image feature extraction, tabular feature extraction, feature fusion, and classification. In the image feature extraction module, the article proposes a strategy of using the trained MedicalNet large model to obtain pre-training parameters and then transferring them to a three-dimensional convolutional neural network (3D CNN) feature extraction model. For the tabular nasal resistance measurement data, a method based on extreme gradient boosting (XGBoost) feature importance analysis is proposed to filter key features and reduce the data dimension. To fuse the two modalities, a feature fusion method based on local and global features was designed. Finally, the fused features are classified using the tabular network (TabNet) model. To verify the effectiveness of the proposed method, comparison and ablation experiments were designed; the results show that the accuracy and recall of the proposed multimodal classification model reach 0.93 and 0.9, respectively, significantly higher than other methods.
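The importance-based feature filtering step can be sketched as follows, using scikit-learn's gradient boosting as a stand-in for XGBoost; the function name `select_by_importance` and the parameters are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def select_by_importance(X, y, keep=3, seed=0):
    """Rank features by gradient-boosting importance and keep the top-k
    columns (a scikit-learn stand-in for XGBoost importance analysis)."""
    model = GradientBoostingClassifier(random_state=seed).fit(X, y)
    order = np.argsort(model.feature_importances_)[::-1]  # most important first
    keep_idx = np.sort(order[:keep])
    return X[:, keep_idx], keep_idx
```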

A Tutorial on MRI Reconstruction: From Modern Methods to Clinical Implications.

Cukur T, Dar SU, Nezhad VA, Jun Y, Kim TH, Fujita S, Bilgic B

PubMed · Oct 3 2025
MRI is an indispensable clinical tool, offering a rich variety of tissue contrasts to support broad diagnostic and research applications. Protocols can incorporate multiple structural, functional, diffusion, spectroscopic, or relaxometry sequences to provide complementary information for differential diagnosis, and to capture multidimensional insights into tissue structure and composition. However, these capabilities come at the cost of prolonged scan times, which reduce patient throughput, increase susceptibility to motion artifacts, and may require trade-offs in image quality or diagnostic scope. Over the last two decades, advances in image reconstruction algorithms-alongside improvements in hardware and pulse sequence design-have made it possible to accelerate acquisitions while preserving diagnostic quality. Central to this progress is the ability to incorporate prior information to regularize the solutions to the reconstruction problem. In this tutorial, we review the basics of MRI reconstruction and highlight state-of-the-art approaches, beginning with classical methods that rely on explicit hand-crafted priors, and then turning to deep learning methods that leverage a combination of learned and crafted priors to further push the performance envelope. We also explore the translational aspects and eventual clinical implications of these methods. We conclude by discussing future directions to address remaining challenges in MRI reconstruction. The tutorial is accompanied by a Python toolbox (https://github.com/tutorial-MRI-recon/tutorial) to demonstrate select methods discussed in the article.
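The idea of regularizing reconstruction with prior information can be illustrated with the simplest hand-crafted prior, an L2 (Tikhonov) penalty, which has a closed-form solution. This is a toy sketch under a generic linear forward model, not one of the tutorial's methods.

```python
import numpy as np

def regularized_recon(A, y, lam=0.1):
    """Solve min_x ||A x - y||^2 + lam ||x||^2 in closed form:
    x = (A^T A + lam I)^{-1} A^T y. The L2 penalty is the simplest
    example of a hand-crafted prior stabilizing the inverse problem."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

In practice the prior is far richer (sparsity in a transform domain, or a learned network), but the same structure of data-fidelity plus regularization carries over.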

Fast water/fat T2 and PDFF mapping via multiple overlapping-echo detachment acquisition and deep learning reconstruction.

Lin Q, Chen W, Kang T, Wu J, Chen X, Qu X, Lin L, Wang J, Lin J, Chen Z, Cai S, Cai C

PubMed · Oct 2 2025
Rapid and accurate quantitative assessment of muscle tissue characteristics is critical for the diagnosis and monitoring of neuromuscular diseases (NMDs). Quantitative magnetic resonance imaging enables non-invasive assessment of muscle pathology by using water T2 values to detect muscle damage and proton density fat fraction (PDFF) to quantify fat infiltration. However, conventional methods for simultaneous water-fat separation and T2 quantification often require long acquisition times. This study aims to develop an ultrafast method for simultaneous water-fat separation and T2 quantification. Approach: A novel water-fat separation framework combining chemical shift encoding with the multiple overlapping-echo detachment sequence (CSE-MOLED) was proposed. Synthetic training data and deep learning-based reconstruction were employed to address challenges in water-fat separation, including the complex multi-peak spectral characteristics of fat and non-idealities in MRI acquisition. The proposed method was validated through numerical simulations, phantom studies, and in vivo experiments involving five healthy volunteers, one subject with muscle atrophy, and one with muscle damage. Main results: In numerical experiments, the R2 values were all 0.999 for water T2, fat T2, and PDFF. In phantom experiments, the R2 values were 0.995, 0.733, and 0.996 for water T2, fat T2, and PDFF, respectively. High repeatability (coefficient of variation < 2.0%) was achieved in both phantom and in vivo experiments. In patient scans, CSE-MOLED successfully distinguished between fat infiltration and muscle damage. Significance: CSE-MOLED simultaneously obtains T2 and proton density maps for both water and fat, along with a T2-corrected PDFF map, in 162 ms per slice, offering the potential to enhance the diagnostic accuracy of NMDs without increasing the clinical scanning burden.
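PDFF, as reported here, is conventionally defined as the fat signal expressed as a percentage of the total water-plus-fat signal. A minimal sketch of that definition (the helper name `pdff_map` is mine, not from the paper):

```python
import numpy as np

def pdff_map(water, fat, eps=1e-8):
    """Proton density fat fraction (%) per voxel:
    PDFF = 100 * F / (W + F), with eps guarding against division by zero."""
    return 100.0 * fat / (water + fat + eps)
```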

Transformer-enhanced vertebrae segmentation and anatomical variation recognition from CT images.

Yang C, Huang L, Sucharit W, Xie H, Huang X, Li Y

PubMed · Oct 2 2025
Accurate segmentation and anatomical classification of vertebrae in spinal CT scans are crucial for clinical diagnosis, surgical planning, and disease monitoring. However, the task is complicated by anatomical variability, degenerative changes, and the presence of rare vertebral anomalies. In this study, we propose a hybrid framework that combines a high-resolution WNet segmentation backbone with a Vision Transformer (ViT)-based classification module to perform vertebral identification and anomaly detection. Our model incorporates an attention-based anatomical variation module and leverages patient-specific metadata (age, sex, vertebral distribution) to improve the accuracy and personalization of vertebrae typing. Extensive experiments on the VerSe 2019 and 2020 datasets demonstrate that our approach outperforms state-of-the-art baselines such as nnUNet and SwinUNet, especially in detecting transitional vertebrae (e.g., T13, L6) and modeling morphological diversity. The system maintains high robustness under slice skipping, noise perturbation, and scanner variations, while offering interpretability through attention heatmaps and case-specific alerts. Our findings suggest that integrating anatomical priors and demographic context into transformer-based pipelines is a promising direction for personalized, intelligent spinal image analysis.

TDMAR-Net: A Frequency-Aware Tri-Domain Diffusion Network for CT Metal Artifact Reduction.

Chen W, Ning B, Zhou Z, Shi L, Liu Q

PubMed · Oct 2 2025
Metal implants and other high-density objects cause significant artifacts in computed tomography (CT) images, hindering clinical diagnosis. Traditional metal artifact reduction methods often leave residual artifacts due to sinogram edge discontinuities. Supervised deep learning approaches struggle because of their reliance on paired data, while unsupervised methods often lack multi-domain information. In this paper, we propose TDMAR-Net, a diffusion model-based tri-domain neural network that leverages priors from the projection, image, and Fourier domains to remove metal artifacts and enhance CT image quality. To enhance the model's learning capability and gradient optimization while preventing reliance on a single data structure, we employ a two-stage training strategy that combines large-scale pretraining with masked-data fine-tuning, improving both accuracy and adaptability in metal artifact removal. Specifically, a high-pass filter module in the Fourier domain adjusts the weights of the high- and low-frequency components of the input image, and the image is processed in blocks to extract diffusion prior information. The prior information is then introduced iteratively into the sinogram and image domains to fill in the metal-induced artifacts. Our method overcomes the challenges of information sharing and complementarity across different domains, ensuring that each domain contributes effectively and thereby enhancing the precision and robustness of metal artifact elimination. Experiments on both synthetic and clinical datasets show that our approach is superior to existing unsupervised methods.
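The Fourier-domain reweighting of low- versus high-frequency components can be sketched with a radial frequency mask. The cutoff and gain values below are illustrative, not the paper's; `highpass_weight` is a hypothetical helper.

```python
import numpy as np

def highpass_weight(img, cutoff=0.1, low_gain=0.3):
    """Attenuate low spatial frequencies of a 2D image in the Fourier
    domain while passing high frequencies unchanged, loosely mirroring
    a high-pass filter module."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)  # normalized radial frequency
    gain = np.where(r < cutoff, low_gain, 1.0)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * gain)))
```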

Brain age as an accurate biomarker of preclinical cognitive decline: evidence from a 12-year longitudinal study.

Elkana O, Beheshti I

PubMed · Oct 2 2025
Cognitive decline in older adults, particularly during the preclinical stages of Alzheimer's disease (AD), presents a critical opportunity for early detection and intervention. While T1-weighted MRI is widely used in AD research, its capacity to identify early vulnerability and monitor longitudinal progression remains incompletely characterized. We analyzed longitudinal T1-weighted MRI data from 224 cognitively unimpaired older adults followed for up to 12 years. Participants were stratified by clinical outcome into converters to mild cognitive impairment (HC-converters, n = 112) and stable controls (HC-stable, n = 112). Groups were matched at baseline for age (mean ~ 74-75 years), education (~ 16.4 years), and cognitive scores (MMSE ≈ 29; CDR-SB ≈ 0.04). Four MRI-derived biomarkers were examined: brain-predicted age difference (brain-PAD), mean cortical thickness, AD-cortical signature, and hippocampal volume. Brain-PAD showed the strongest baseline association with future conversion (β = 1.25, t = 3.52, p = 0.0009) and the highest classification accuracy (AUC = 0.66; sensitivity = 62%; specificity = 67%). Longitudinal mixed-effects models focusing on the group × time interaction revealed a significant positive slope in brain-PAD for converters (β = 0.0079, p = 0.003) and a non-significant trend in stable controls (β = 0.0047, p = 0.075), indicating incipient divergence in brain aging trajectories during the preclinical window. Hippocampal volume and AD-cortical signature declined similarly in both groups. Mean cortical thickness had limited discriminative or dynamic utility. These findings support brain-PAD, derived from routine T1-weighted MRI using machine learning, as a sensitive, performance-independent biomarker for early risk stratification and monitoring of cognitive aging trajectories.
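Brain-PAD is simply the model's predicted age minus chronological age. A toy sketch using ordinary least squares in place of the study's machine-learning brain-age model (the helper `brain_pad` and the linear model are illustrative assumptions):

```python
import numpy as np

def brain_pad(feats_train, age_train, feats_test, age_test):
    """Brain-predicted age difference: fit a linear brain-age model on
    training features, then return predicted minus chronological age."""
    X = np.column_stack([feats_train, np.ones(len(feats_train))])
    coef, *_ = np.linalg.lstsq(X, age_train, rcond=None)
    Xt = np.column_stack([feats_test, np.ones(len(feats_test))])
    return Xt @ coef - age_test  # positive values = "older-looking" brain
```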

Multimodal imaging fusion and machine learning model development: differential diagnosis of spinal inflammatory lesions using combined CT Hounsfield units and MRI features.

Wang Y, Bai X, Li T, Yuan S, Zong S, Chen Y, Wang H, Song Z, Wang H, Hao Y, Qu Y, Liu J, Zhang Q, Liu G

PubMed · Oct 2 2025
The objective is to develop a differential diagnosis model for tuberculous spondylitis (TS) and pyogenic spondylitis (PS) by integrating MRI morphological features and computed tomography (CT) density parameters (Hounsfield Units, HU). This study aims to leverage multimodal data complementarity to achieve fusion of qualitative and quantitative information, thereby providing clinicians with a rapid and objective decision support tool for spinal inflammatory lesion characterization. Imaging data were extracted from MRI and CT scans of patients with TS and PS, then compared and summarized. Receiver operating characteristic (ROC) curves were used to determine optimal HU value thresholds. The least absolute shrinkage and selection operator (Lasso) regression was applied to identify the most predictive features for model construction. A logistic regression-based predictive model was developed and visualized as a nomogram. Model validation was performed using bootstrap resampling, ROC analysis, and decision curve analysis (DCA). A total of 171 patients with TS (n = 91) or PS (n = 80) were included. Statistically significant differences in MRI features were observed between the two groups (P < 0.05). Additionally, significant HU value differences were found in diseased vertebral endplates, small cavitary abscesses, large cavitary abscesses, and intravertebral abscesses between TS and PS patients (P < 0.05). The predictive model incorporated seven independent predictors. Calibration curves, ROC analysis, and DCA all demonstrated excellent model performance. Combined MRI and CT HU value analysis effectively differentiates TS from PS. The predictive model integrating imaging features and quantitative parameters demonstrates high accuracy and clinical utility, offering a novel approach to optimize diagnostic and treatment strategies for spinal infectious diseases.
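Deriving an optimal HU cutoff from a ROC curve is commonly done by maximizing Youden's J = sensitivity + specificity - 1; the abstract does not state which criterion was used, so this is an assumed, minimal sketch (the helper `optimal_threshold` is mine):

```python
import numpy as np

def optimal_threshold(scores, labels):
    """Scan candidate cutoffs (e.g., HU values) and return the one that
    maximizes Youden's J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels).astype(bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t                # classify as positive above cutoff
        sens = np.mean(pred[labels])      # true positive rate
        spec = np.mean(~pred[~labels])    # true negative rate
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t
```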

Enhanced brain tumour segmentation using a hybrid dual encoder-decoder model in federated learning.

Narmadha K, Varalakshmi P

PubMed · Oct 2 2025
Brain tumour segmentation is an important task in medical imaging that requires accurate tumour localization for improved diagnostics and treatment planning. However, conventional segmentation models often struggle with boundary delineation and generalization across heterogeneous datasets. Furthermore, data privacy concerns limit centralized model training on large-scale, multi-institutional datasets. To address these drawbacks, we propose a Hybrid Dual Encoder-Decoder Segmentation Model in Federated Learning, which integrates EfficientNet with Swin Transformer as encoders and BASNet (Boundary-Aware Segmentation Network) with MaskFormer as decoders. The proposed model aims to enhance segmentation accuracy and efficiency in terms of total training time. This model leverages hierarchical feature extraction, self-attention mechanisms, and boundary-aware segmentation for superior tumour delineation. The proposed model achieves a Dice Coefficient of 0.94 and an Intersection over Union (IoU) of 0.87, and reduces total training time through faster convergence in fewer rounds. The proposed model exhibits strong boundary delineation performance, with a Hausdorff Distance (HD95) of 1.61, an Average Symmetric Surface Distance (ASSD) of 1.12, and a Boundary F1 Score (BF1) of 0.91, indicating precise segmentation contours. Evaluations on the Kaggle Mateuszbuda LGG-MRI segmentation dataset partitioned across multiple federated clients demonstrate consistent, high segmentation performance. These findings highlight that integrating transformers, lightweight CNNs, and advanced decoders within a federated setup supports enhanced segmentation accuracy while preserving medical data privacy.
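The Dice coefficient and IoU quoted above are standard overlap measures between a predicted and a reference mask. A minimal sketch for binary masks (the helper `dice_iou` is hypothetical):

```python
import numpy as np

def dice_iou(pred, truth):
    """Dice = 2|P∩T| / (|P| + |T|); IoU = |P∩T| / |P∪T| for binary masks.
    Both are defined as 1.0 when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.sum(pred & truth)
    union = np.sum(pred | truth)
    dice = 2 * inter / (pred.sum() + truth.sum()) if union else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```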