Page 23 of 92919 results

Harmonization in Magnetic Resonance Imaging: A Survey of Acquisition, Image-level, and Feature-level Methods

Qinqin Yang, Firoozeh Shomal-Zadeh, Ali Gholipour

arXiv preprint, Jul 22, 2025
Modern medical imaging technologies have greatly advanced neuroscience research and clinical diagnostics. However, imaging data collected across different scanners, acquisition protocols, or imaging sites often exhibit substantial heterogeneity, known as "batch effects" or "site effects". These non-biological sources of variability can obscure true biological signals, reduce reproducibility and statistical power, and severely impair the generalizability of learning-based models across datasets. Image harmonization aims to eliminate or mitigate such site-related biases while preserving meaningful biological information, thereby improving data comparability and consistency. This review provides a comprehensive overview of key concepts, methodological advances, publicly available datasets, current challenges, and future directions in the field of medical image harmonization, with a focus on magnetic resonance imaging (MRI). We systematically cover the full imaging pipeline, and categorize harmonization approaches into prospective acquisition and reconstruction strategies, retrospective image-level and feature-level methods, and traveling-subject-based techniques. Rather than providing an exhaustive survey, we focus on representative methods, with particular emphasis on deep learning-based approaches. Finally, we summarize the major challenges that remain and outline promising avenues for future research.

SFNet: A Spatial-Frequency Domain Deep Learning Network for Efficient Alzheimer's Disease Diagnosis

Xinyue Yang, Meiliang Liu, Yunfang Xu, Xiaoxiao Yang, Zhengye Si, Zijin Li, Zhiwen Zhao

arxiv logopreprintJul 22 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that predominantly affects the elderly population and currently has no cure. Magnetic Resonance Imaging (MRI), as a non-invasive imaging technique, is essential for the early diagnosis of AD. MRI inherently contains both spatial and frequency information, as raw signals are acquired in the frequency domain and reconstructed into spatial images via the Fourier transform. However, most existing AD diagnostic models extract features from a single domain, limiting their capacity to fully capture the complex neuroimaging characteristics of the disease. While some studies have combined spatial and frequency information, they are mostly confined to 2D MRI, leaving the potential of dual-domain analysis in 3D MRI unexplored. To overcome this limitation, we propose Spatio-Frequency Network (SFNet), the first end-to-end deep learning framework that simultaneously leverages spatial and frequency domain information to enhance 3D MRI-based AD diagnosis. SFNet integrates an enhanced dense convolutional network to extract local spatial features and a global frequency module to capture global frequency-domain representations. Additionally, a novel multi-scale attention module is proposed to further refine spatial feature extraction. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that SFNet outperforms existing baselines and reduces computational overhead in classifying cognitively normal (CN) and AD, achieving an accuracy of 95.1%.
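The spatial/frequency duality the abstract describes comes straight from the Fourier relationship between k-space and the reconstructed image. The sketch below illustrates that dual-domain idea with numpy; the function name, shapes, and log-magnitude compression are illustrative assumptions, not SFNet's actual code.

```python
import numpy as np

def dual_domain_views(volume: np.ndarray):
    """Return the spatial volume and the log-magnitude of its centered 3D spectrum.

    Both views describe the same data and could feed separate feature branches,
    as in a dual-domain network.
    """
    spectrum = np.fft.fftshift(np.fft.fftn(volume))  # centered k-space
    log_mag = np.log1p(np.abs(spectrum))             # compress dynamic range
    return volume, log_mag

vol = np.random.default_rng(0).standard_normal((8, 8, 8))  # toy 3D "MRI" volume
spatial, freq = dual_domain_views(vol)
assert spatial.shape == freq.shape == (8, 8, 8)
```

In a real pipeline the two outputs would go to the spatial (dense convolutional) branch and the global frequency module, respectively.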

Verification of resolution and imaging time for high-resolution deep learning reconstruction techniques.

Harada S, Takatsu Y, Murayama K, Sano Y, Ikedo M

PubMed paper, Jul 22, 2025
Magnetic resonance imaging (MRI) involves a trade-off between imaging time, signal-to-noise ratio (SNR), and spatial resolution. Reducing the imaging time often leads to a lower SNR or resolution. Deep-learning-based reconstruction (DLR) methods have been introduced to address these limitations. Image-domain super-resolution DLR enables high resolution without additional image scans. High-quality images can be obtained within a shorter timeframe by appropriately configuring DLR parameters, so maximizing the performance of super-resolution DLR is necessary for its efficient use in MRI. We evaluated the performance of a vendor-provided super-resolution DLR method (PIQE) on a Canon 3 T MRI scanner using an edge phantom and clinical brain images from eight patients. Quantitative assessment included the structural similarity index (SSIM), peak SNR (PSNR), root mean square error (RMSE), and full width at half maximum (FWHM); FWHM was used to quantify spatial resolution and image sharpness. Visual evaluation using a five-point Likert scale was also performed to assess perceived image quality. Image-domain super-resolution DLR reduced scan time by up to 70% while preserving structural image quality. Acquisition matrices of 0.87 mm/pixel or finer with a zoom ratio of ×2 yielded SSIM ≥0.80, PSNR ≥35 dB, and non-significant FWHM differences compared with full-resolution references. In contrast, aggressive downsampling (zoom ratio ×3 from low-resolution matrices) led to image degradation, including truncation artifacts and reduced sharpness. These results clarify the optimal use of PIQE as an image-domain super-resolution method and provide practical guidance for its application in clinical MRI workflows.
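Two of the quantitative metrics above, RMSE and PSNR, reduce to one-line formulas; a minimal numpy sketch follows (SSIM needs local-window statistics and is easiest via `skimage.metrics.structural_similarity` in practice). The 8-bit `data_range` of 255 is an assumption for the toy example, not the study's setting.

```python
import numpy as np

def rmse(ref: np.ndarray, img: np.ndarray) -> float:
    """Root mean square error between a reference and a test image."""
    return float(np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2)))

def psnr(ref: np.ndarray, img: np.ndarray, data_range: float = 255.0) -> float:
    """Peak SNR in dB; data_range is the peak signal value (255 for 8-bit)."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * np.log10(data_range / e)

ref = np.full((4, 4), 100.0)
noisy = ref + 5.0                    # constant error of 5 -> RMSE exactly 5
print(round(rmse(ref, noisy), 2))    # 5.0
print(round(psnr(ref, noisy), 2))    # 20*log10(255/5) ~= 34.15
```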

Machine learning approach effectively discriminates between Parkinson's disease and progressive supranuclear palsy: multi-level indices of rs-fMRI.

Cheng W, Liang X, Zeng W, Guo J, Yin Z, Dai J, Hong D, Zhou F, Li F, Fang X

PubMed paper, Jul 22, 2025
Parkinson's disease (PD) and progressive supranuclear palsy (PSP) present with similar clinical symptoms, but their treatment options and clinical prognosis differ significantly. We therefore aimed to discriminate between PD and PSP based on multi-level indices of resting-state functional magnetic resonance imaging (rs-fMRI) using a machine learning approach. A total of 58 PD and 52 PSP patients were prospectively enrolled in this study. Participants were randomly allocated to a training set and a validation set in a 7:3 ratio. Various rs-fMRI indices were extracted, followed by comprehensive feature screening for each index. We constructed fifteen distinct combinations of indices and selected four machine learning algorithms for model development. Subsequently, different validation templates were employed to assess the classification results and investigate the relationship between the most significant features and clinical assessment scales. The classification performance of logistic regression (LR) and support vector machine (SVM) models based on multiple index combinations was significantly superior to that of other machine learning models and combinations when using the automated anatomical labeling (AAL) template, and this finding was verified across the other templates. The use of multiple rs-fMRI indices significantly enhances the performance of machine learning models and can effectively achieve automatic identification of PD and PSP at the individual level.
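The evaluation protocol described above (a 7:3 stratified split, then LR and SVM classifiers on feature vectors) can be sketched with scikit-learn. The synthetic features below stand in for real multi-level rs-fMRI indices; cohort sizes match the abstract, everything else is an illustrative assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.standard_normal((110, 20))   # 110 patients x 20 placeholder rs-fMRI indices
y = np.array([0] * 58 + [1] * 52)    # 58 PD vs 52 PSP labels, as in the study

# 7:3 stratified train/validation split
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Two of the classifier families the study compared
for clf in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, round(clf.score(X_te, y_te), 2))
```

With random features the accuracies printed are near chance; the point is the protocol shape, not the numbers.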

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound

Yu, M., Peterson, M. R., Burgoine, K., Harbaugh, T., Olupot-Olupot, P., Gladstone, M., Hagmann, C., Cowan, F. M., Weeks, A., Morton, S. U., Mulondo, R., Mbabazi-Kabachelor, E., Schiff, S. J., Monga, V.

medRxiv preprint, Jul 22, 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, called the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing the prevailing state-of-the-art infection detection techniques.
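The cross-attention exchange between the two view branches can be sketched as scaled dot-product attention: features from one view act as queries against keys/values from the other. This is a minimal numpy sketch of the generic mechanism; the feature counts, dimensions, and single-head layout are assumptions, not CLIF-Net's actual module.

```python
import numpy as np

def cross_attention(q_feats: np.ndarray, kv_feats: np.ndarray) -> np.ndarray:
    """Queries from one view attend over keys/values from the other view."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)               # (Nq, Nk) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # row-wise softmax
    return weights @ kv_feats                                # attended features

coronal = np.random.default_rng(0).standard_normal((5, 16))   # 5 coronal features
sagittal = np.random.default_rng(1).standard_normal((7, 16))  # 7 sagittal features
fused = cross_attention(coronal, sagittal)                    # coronal enriched by sagittal
assert fused.shape == (5, 16)
```

Running the same call with the arguments swapped enriches the sagittal branch, mirroring the paper's mutual exchange between views.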

Artificial intelligence in radiology: diagnostic sensitivity of ChatGPT for detecting hemorrhages in cranial computed tomography scans.

Bayar-Kapıcı O, Altunışık E, Musabeyoğlu F, Dev Ş, Kaya Ö

PubMed paper, Jul 21, 2025
Chat Generative Pre-trained Transformer (ChatGPT)-4V, a large language model developed by OpenAI, has been explored for its potential application in radiology. This study assessed ChatGPT-4V's diagnostic performance in identifying various types of intracranial hemorrhage in non-contrast cranial computed tomography (CT) images. Intracranial hemorrhages were presented to ChatGPT using the clearest 2D imaging slices. The first question, "Q1: Which imaging technique is used in this image?", was asked to determine the imaging modality. ChatGPT was then prompted with the second question, "Q2: What do you see in this image and what is the final diagnosis?", to assess whether the CT scan was normal or showed pathology. For CT scans containing hemorrhage that ChatGPT did not interpret correctly, a follow-up question ("Q3: There is bleeding in this image. Which type of bleeding do you see?") was used to evaluate whether this guidance influenced its response. ChatGPT accurately identified the imaging technique (Q1) in all cases but demonstrated difficulty diagnosing epidural hematoma (EDH), subdural hematoma (SDH), and subarachnoid hemorrhage (SAH) when no clues were provided (Q2). When a hemorrhage clue was introduced (Q3), ChatGPT correctly identified EDH in 16.7% of cases, SDH in 60%, and SAH in 15.6%, and achieved 100% diagnostic accuracy for hemorrhagic cerebrovascular disease. Its sensitivity, specificity, and accuracy for Q2 were 23.6%, 92.5%, and 57.4%, respectively. These values improved substantially with the clue in Q3, with sensitivity rising to 50.9% and accuracy to 71.3%. ChatGPT also demonstrated higher diagnostic accuracy for larger hemorrhages in EDH and SDH images. Although the model performs well in recognizing imaging modalities, its diagnostic accuracy improves substantially when guided by additional contextual information.
These findings suggest that ChatGPT's diagnostic performance improves with guided prompts, highlighting its potential as a supportive tool in clinical radiology.
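The Q2 metrics reported above follow directly from standard confusion-matrix formulas. The counts below are hypothetical, chosen only so the arithmetic reproduces the reported percentages; the abstract does not give the study's raw counts.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts consistent with the reported 23.6% / 92.5% / 57.4%
sens, spec, acc = diagnostic_metrics(tp=13, fp=4, tn=49, fn=42)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.236 0.925 0.574
```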

The added value for MRI radiomics and deep-learning for glioblastoma prognostication compared to clinical and molecular information

D. Abler, O. Pusterla, A. Joye-Kühnis, N. Andratschke, M. Bach, A. Bink, S. M. Christ, P. Hagmann, B. Pouymayou, E. Pravatà, P. Radojewski, M. Reyes, L. Ruinelli, R. Schaer, B. Stieltjes, G. Treglia, W. Valenzuela, R. Wiest, S. Zoergiebel, M. Guckenberger, S. Tanadini-Lang, A. Depeursinge

arXiv preprint, Jul 21, 2025
Background: Radiomics shows promise in characterizing glioblastoma, but its added value over clinical and molecular predictors has yet to be proven. This study assessed the added value of conventional radiomics (CR) and deep learning (DL) MRI radiomics for glioblastoma prognosis (<= 6 vs > 6 months survival) on a large multi-center dataset. Methods: After patient selection, our curated dataset comprised 1152 glioblastoma (WHO 2016) patients from five Swiss centers and one public source. It included clinical (age, gender), molecular (MGMT, IDH), and baseline MRI data (T1, T1 contrast, FLAIR, T2) with tumor regions. CR and DL models were developed using standard methods and evaluated on internal and external cohorts. Sub-analyses assessed models with different feature sets (imaging-only, clinical/molecular-only, combined-features) and patient subsets (S-1: all patients, S-2: with molecular data, S-3: IDH wildtype). Results: The best performance was observed in the full cohort (S-1). In external validation, the combined-feature CR model achieved an AUC of 0.75, slightly but significantly outperforming the clinical-only (0.74) and imaging-only (0.68) models. DL models showed similar trends, though without statistical significance. In S-2 and S-3, combined models did not outperform clinical-only models. Exploratory analysis of CR models for overall survival prediction suggested greater relevance of imaging data: across all subsets, combined-feature models significantly outperformed clinical-only models, though with a modest advantage of 2-4 C-index points. Conclusions: While confirming the predictive value of anatomical MRI sequences for glioblastoma prognosis, this multi-center study found that standard CR and DL radiomics approaches offer minimal added value over demographic predictors such as age and gender.

Establishment of AI-assisted diagnosis of the infraorbital posterior ethmoid cells based on deep learning.

Ni T, Qian X, Zeng Q, Ma Y, Xie Z, Dai Y, Che Z

PubMed paper, Jul 21, 2025
To construct an artificial intelligence (AI)-assisted model for identifying the infraorbital posterior ethmoid cells (IPECs) based on deep learning using sagittal CT images. Sagittal CT images of 277 samples with and 142 samples without IPECs were retrospectively collected. An experienced radiologist in this field selected the sagittal CT image that best showed the IPECs. The images were randomly assigned to the training and test sets, with 541 sides in the training set and 97 sides in the test set. The training set was used to perform five-fold cross-validation, and the results of each fold were used to predict the test set. The model was built using nnUNet, and its performance was evaluated using the Dice coefficient and standard classification metrics. The model achieved a Dice coefficient of 0.900 in the training set and 0.891 in the test set. Precision was 0.965 for the training set and 1.000 for the test set, while sensitivity was 0.981 and 0.967, respectively. A comparison of diagnostic efficacy between manual outlining by a less-experienced radiologist and AI-assisted outlining showed a significant improvement in detection efficiency (P < 0.05). The AI model correctly identified and outlined all IPECs, including 12 sides that the radiologist had portrayed suboptimally. AI models can help radiologists identify IPECs, which can further prompt relevant clinical interventions.
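The Dice coefficient used above measures overlap between the predicted and reference segmentation masks: Dice = 2|A ∩ B| / (|A| + |B|). A minimal numpy sketch (the masks are toy 2D examples, not the study's CT data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks; 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16-pixel square
ref = np.zeros((8, 8), dtype=bool);  ref[3:7, 3:7] = True   # shifted square, 9 px overlap
print(dice(pred, ref))  # 2*9 / (16+16) = 0.5625
```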

Deep Learning-Driven Multimodal Fusion Model for Prediction of Middle Cerebral Artery Aneurysm Rupture Risk.

Jia X, Chen Y, Zheng K, Chen C, Liu J

PubMed paper, Jul 21, 2025
The decision to treat unruptured intracranial aneurysms remains a clinical dilemma. Middle cerebral artery (MCA) aneurysms represent a prevalent subtype of intracranial aneurysms. This study aims to develop a multimodal fusion deep learning model for stratifying rupture risk in MCA aneurysms. We retrospectively enrolled an internal cohort and two external validation datasets comprising 578 and 51 MCA aneurysms, respectively. Multivariate logistic regression analysis was performed to identify independent predictors of rupture. Aneurysm morphological parameters were quantified using reconstructed CT angiography (CTA) images. Radiomics features of the aneurysms were extracted through computational analysis. We developed MCANet, a multimodal data-driven classification model integrating raw CTA images, radiomics features, clinical parameters, and morphological characteristics, to establish an aneurysm rupture risk assessment framework. External validation was conducted using datasets from two independent medical centers to evaluate model generalizability and small-sample robustness. Four key metrics, including accuracy, F1-score, precision, and recall, were employed to assess model performance. In the internal cohort, 369 aneurysms were ruptured. Independent predictors of rupture included the presence of multiple aneurysms, aneurysm location, aneurysm angle, presence of a daughter-sac aneurysm, and height-width ratio. MCANet demonstrated satisfactory predictive performance with 91.38% accuracy, 96.33% sensitivity, 90.52% precision, and 93.33% F1-score. External validation maintained good discriminative ability across both independent cohorts. The MCANet model effectively integrates multimodal heterogeneous data for MCA aneurysm rupture risk prediction, demonstrating clinical applicability even in data-constrained scenarios. This model shows potential to optimize therapeutic decision-making and mitigate patient anxiety through individualized risk assessment.
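A quick consistency check on the reported MCANet metrics: F1 is the harmonic mean of precision and recall, F1 = 2PR / (P + R), and plugging in the reported precision (90.52%) and sensitivity (96.33%) recovers the reported 93.33% F1.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision and sensitivity (recall) as reported in the abstract
f1 = f1_score(0.9052, 0.9633)
print(round(100 * f1, 2))  # 93.33, matching the reported F1-score
```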

ASD-GraphNet: A novel graph learning approach for Autism Spectrum Disorder diagnosis using fMRI data.

Zeraati M, Davoodi A

PubMed paper, Jul 21, 2025
Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition with heterogeneous symptomatology, making accurate diagnosis challenging. Traditional methods rely on subjective behavioral assessments, often overlooking subtle neural biomarkers. This study introduces ASD-GraphNet, a novel graph-based learning framework for diagnosing ASD using functional Magnetic Resonance Imaging (fMRI) data. Leveraging the Autism Brain Imaging Data Exchange (ABIDE) dataset, ASD-GraphNet constructs brain networks based on established atlases (Craddock 200, AAL, and Dosenbach 160) to capture intricate connectivity patterns. The framework employs systematic preprocessing, graph construction, and advanced feature extraction to derive node-level, edge-level, and graph-level metrics. Feature engineering techniques, including Mutual Information-based selection and Principal Component Analysis (PCA), are applied to enhance classification performance. ASD-GraphNet evaluates a range of classifiers, including Logistic Regression, Support Vector Machines, and ensemble methods like XGBoost and LightGBM, achieving an accuracy of 75.25% in distinguishing individuals with ASD from healthy controls. This demonstrates the framework's potential to provide objective, data-driven diagnostics based solely on resting-state fMRI data. By integrating graph-based learning with neuroimaging and addressing dataset imbalance, ASD-GraphNet offers a scalable and interpretable solution for early ASD detection, paving the way for more reliable interventions. The GitHub repository for this project is available at: https://github.com/AmirDavoodi/ASD-GraphNet.
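The graph-construction step described above (ROI time series correlated into a connectivity matrix, thresholded into an adjacency matrix, then reduced to node- and graph-level features) can be sketched in numpy. The ROI count, |r| > 0.2 threshold, and degree/edge-count features are illustrative assumptions, not ASD-GraphNet's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 10))       # 120 time points x 10 toy ROIs

conn = np.corrcoef(ts.T)                  # 10x10 functional connectivity matrix
np.fill_diagonal(conn, 0.0)               # ignore self-connections

adj = (np.abs(conn) > 0.2).astype(int)    # binarize at an assumed |r| threshold
degree = adj.sum(axis=1)                  # node-level feature: degree per ROI
n_edges = adj.sum() // 2                  # graph-level feature: edge count

assert adj.shape == (10, 10) and np.all(adj == adj.T)
print(degree.tolist(), n_edges)
```

Real pipelines substitute atlas-defined ROIs (e.g., AAL or Craddock 200) and feed such features to the feature-selection and classification stages.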
