Page 21 of 91903 results

Dual-Network Deep Learning for Accelerated Head and Neck MRI: Enhanced Image Quality and Reduced Scan Time.

Li S, Yan W, Zhang X, Hu W, Ji L, Yue Q

pubmed · Jul 22 2025
Head-and-neck MRI faces inherent challenges, including motion artifacts and trade-offs between spatial resolution and acquisition time. We aimed to evaluate a dual-network deep learning (DL) super-resolution method for improving image quality and reducing scan time in T1- and T2-weighted head-and-neck MRI. In this prospective study, 97 patients with head-and-neck masses were enrolled at xx from August 2023 to August 2024. After exclusions, 58 participants underwent paired conventional and accelerated T1WI and T2WI MRI sequences, with the accelerated sequences being reconstructed using a dual-network DL framework for super-resolution. Image quality was assessed both quantitatively (signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], contrast ratio [CR]) and qualitatively by two blinded radiologists using a 5-point Likert scale for image sharpness, lesion conspicuity, structure delineation, and artifacts. Wilcoxon signed-rank tests were used to compare paired outcomes. Among 58 participants (34 men, 24 women; mean age 51.37 ± 13.24 years), DL reconstruction reduced scan times by 46.3% (T1WI) and 26.9% (T2WI). Quantitative analysis showed significant improvements in SNR (T1WI: 26.33 vs. 20.65; T2WI: 14.14 vs. 11.26) and CR (T1WI: 0.20 vs. 0.18; T2WI: 0.34 vs. 0.30; all p < 0.001), with comparable CNR (p > 0.05). Qualitatively, image sharpness, lesion conspicuity, and structure delineation improved significantly (p < 0.05), while artifact scores remained similar (all p > 0.05). The dual-network DL method significantly enhanced image quality and reduced scan times in head-and-neck MRI while maintaining diagnostic performance comparable to conventional methods. This approach offers potential for improved workflow efficiency and patient comfort.
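The paired quantitative metrics reported above (SNR, CNR) follow standard ROI-based definitions. A minimal sketch of those definitions, using hypothetical ROI intensity values rather than anything from the study:

```python
import statistics

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal intensity over noise standard deviation."""
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: absolute mean difference between two tissue
    ROIs, normalized by the noise standard deviation."""
    return abs(statistics.mean(roi_a) - statistics.mean(roi_b)) / statistics.stdev(noise_roi)

# Hypothetical pixel intensities from manually drawn ROIs.
lesion = [220, 215, 230, 225, 218]
muscle = [150, 148, 155, 152, 149]
background_noise = [5, 7, 4, 6, 8, 5, 6]

print(round(snr(lesion, background_noise), 2))
print(round(cnr(lesion, muscle, background_noise), 2))
```

In practice, the paired conventional-vs-accelerated values would then be compared with a Wilcoxon signed-rank test, as the authors did.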

SFNet: A Spatio-Frequency Domain Deep Learning Network for Efficient Alzheimer's Disease Diagnosis

Xinyue Yang, Meiliang Liu, Yunfang Xu, Xiaoxiao Yang, Zhengye Si, Zijin Li, Zhiwen Zhao

arxiv preprint · Jul 22 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that predominantly affects the elderly population and currently has no cure. Magnetic Resonance Imaging (MRI), as a non-invasive imaging technique, is essential for the early diagnosis of AD. MRI inherently contains both spatial and frequency information, as raw signals are acquired in the frequency domain and reconstructed into spatial images via the Fourier transform. However, most existing AD diagnostic models extract features from a single domain, limiting their capacity to fully capture the complex neuroimaging characteristics of the disease. While some studies have combined spatial and frequency information, they are mostly confined to 2D MRI, leaving the potential of dual-domain analysis in 3D MRI unexplored. To overcome this limitation, we propose Spatio-Frequency Network (SFNet), the first end-to-end deep learning framework that simultaneously leverages spatial and frequency domain information to enhance 3D MRI-based AD diagnosis. SFNet integrates an enhanced dense convolutional network to extract local spatial features and a global frequency module to capture global frequency-domain representations. Additionally, a novel multi-scale attention module is proposed to further refine spatial feature extraction. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that SFNet outperforms existing baselines and reduces computational overhead in classifying cognitively normal (CN) and AD, achieving an accuracy of 95.1%.
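The dual-domain premise rests on the Fourier relationship between spatial images and their frequency representations. A toy 1D sketch of that round trip (a naive DFT for illustration, not SFNet's global frequency module):

```python
import cmath

def dft(signal):
    """Naive 1D discrete Fourier transform: spatial samples -> frequency coefficients."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spectrum):
    """Inverse DFT: frequency coefficients back to spatial samples."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# A toy "image row"; its spectrum is the frequency-domain view of the same data.
row = [0.0, 1.0, 2.0, 1.0]
spectrum = dft(row)
print(abs(spectrum[0]))  # magnitude of the zero-frequency (DC) term = sum of samples
recovered = [c.real for c in idft(spectrum)]
print([round(x, 6) for x in recovered])
```

Real MRI reconstruction applies the same principle in 2D/3D with the FFT; dual-domain models like SFNet learn features on both sides of this transform.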

AgentMRI: A Vision Language Model-Powered AI System for Self-regulating MRI Reconstruction with Multiple Degradations.

Sajua GA, Akhib M, Chang Y

pubmed · Jul 22 2025
Artificial intelligence (AI)-driven autonomous agents are transforming multiple domains by integrating reasoning, decision-making, and task execution into a unified framework. In medical imaging, such agents have the potential to change workflows by reducing human intervention and optimizing image quality. In this paper, we introduce AgentMRI, an AI-driven system that leverages vision language models (VLMs) for fully autonomous magnetic resonance imaging (MRI) reconstruction in the presence of multiple degradations. Unlike traditional MRI correction or reconstruction methods, AgentMRI relies on neither manual post-processing nor fixed correction models. Instead, it dynamically detects MRI corruption and then automatically selects the best correction model for image reconstruction. The framework uses a multi-query VLM strategy to ensure robust corruption detection through consensus-based decision-making and confidence-weighted inference. AgentMRI automatically chooses among deep learning models for MRI reconstruction, motion correction, and denoising. We evaluated AgentMRI in both zero-shot and fine-tuned settings. Experimental results on a comprehensive brain MRI dataset demonstrate that AgentMRI achieves an average accuracy of 73.6% in the zero-shot setting and 95.1% in the fine-tuned setting. Experiments show that it accurately executes the reconstruction process without human intervention. AgentMRI eliminates manual intervention and introduces a scalable, multimodal AI framework for autonomous MRI processing. This work is a step toward fully autonomous and intelligent MR image reconstruction systems.
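The multi-query consensus idea can be illustrated with a small sketch; the corruption labels and confidence scores below are hypothetical, and this is not the AgentMRI interface:

```python
from collections import defaultdict

def consensus(responses):
    """Confidence-weighted majority vote over repeated VLM queries.

    `responses` is a list of (predicted_corruption, confidence) pairs from
    querying the model several times about the same scan; the label with the
    highest total confidence wins.
    """
    scores = defaultdict(float)
    for label, confidence in responses:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Hypothetical outputs from five queries about the same corrupted scan.
votes = [("motion", 0.9), ("noise", 0.6), ("motion", 0.8),
         ("undersampling", 0.5), ("motion", 0.7)]
print(consensus(votes))  # the corruption type the agent would then correct for
```

The winning label would then route the scan to the matching correction model (e.g. a motion-correction network rather than a denoiser).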

Supervised versus unsupervised GAN for pseudo-CT synthesis in brain MR-guided radiotherapy.

Kermani MZ, Tavakoli MB, Khorasani A, Abedi I, Sadeghi V, Amouheidari A

pubmed · Jul 22 2025
Radiotherapy is a crucial treatment for brain tumor malignancies. To address the limitations of CT-based treatment planning, recent research has explored MR-only radiotherapy, requiring precise MR-to-CT synthesis. This study compares two deep learning approaches, supervised (Pix2Pix) and unsupervised (CycleGAN), for generating pseudo-CT (pCT) images from T1- and T2-weighted MR sequences. A total of 3270 paired T1- and T2-weighted MRI images were collected and registered with corresponding CT images. After preprocessing, a supervised pCT generative model was trained using the Pix2Pix framework, and an unsupervised generative network (CycleGAN) was also trained to enable a comparative assessment of pCT quality relative to the Pix2Pix model. To assess differences between pCT and reference CT images, three key metrics (SSIM, PSNR, and MAE) were used. Additionally, a dosimetric evaluation was performed on selected cases to assess clinical relevance. The average SSIM, PSNR, and MAE for Pix2Pix on T1 images were 0.964 ± 0.03, 32.812 ± 5.21, and 79.681 ± 9.52 HU, respectively. Statistical analysis revealed that Pix2Pix significantly outperformed CycleGAN in generating high-fidelity pCT images (p < 0.05). There was no notable difference in the effectiveness of T1-weighted versus T2-weighted MR images for generating pCT (p > 0.05). Dosimetric evaluation confirmed comparable dose distributions between pCT and reference CT, supporting clinical feasibility. Both supervised and unsupervised methods demonstrated the capability to generate accurate pCT images from conventional T1- and T2-weighted MR sequences. While supervised methods like Pix2Pix achieve higher accuracy, unsupervised approaches such as CycleGAN offer greater flexibility by eliminating the need for paired training data, making them suitable for applications where paired data are unavailable.
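Two of the reported similarity metrics, MAE and PSNR, can be sketched directly from their definitions (SSIM is more involved and omitted; the intensity values below are toy numbers, not study data):

```python
import math

def mae(ref, test):
    """Mean absolute error between two images (flattened intensity lists)."""
    return sum(abs(r - t) for r, t in zip(ref, test)) / len(ref)

def psnr(ref, test, max_val):
    """Peak signal-to-noise ratio in dB, given the maximum possible intensity."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 10 * math.log10(max_val ** 2 / mse)

ref_ct = [100.0, 120.0, 80.0, 95.0]   # toy reference CT intensities
pseudo = [102.0, 118.0, 83.0, 94.0]   # toy pseudo-CT intensities
print(round(mae(ref_ct, pseudo), 2))
print(round(psnr(ref_ct, pseudo, 255.0), 2))
```

For CT-like data, MAE is usually reported in Hounsfield units (HU), as in the abstract above.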

Harmonization in Magnetic Resonance Imaging: A Survey of Acquisition, Image-level, and Feature-level Methods

Qinqin Yang, Firoozeh Shomal-Zadeh, Ali Gholipour

arxiv preprint · Jul 22 2025
Modern medical imaging technologies have greatly advanced neuroscience research and clinical diagnostics. However, imaging data collected across different scanners, acquisition protocols, or imaging sites often exhibit substantial heterogeneity, known as "batch effects" or "site effects". These non-biological sources of variability can obscure true biological signals, reduce reproducibility and statistical power, and severely impair the generalizability of learning-based models across datasets. Image harmonization aims to eliminate or mitigate such site-related biases while preserving meaningful biological information, thereby improving data comparability and consistency. This review provides a comprehensive overview of key concepts, methodological advances, publicly available datasets, current challenges, and future directions in the field of medical image harmonization, with a focus on magnetic resonance imaging (MRI). We systematically cover the full imaging pipeline, and categorize harmonization approaches into prospective acquisition and reconstruction strategies, retrospective image-level and feature-level methods, and traveling-subject-based techniques. Rather than providing an exhaustive survey, we focus on representative methods, with particular emphasis on deep learning-based approaches. Finally, we summarize the major challenges that remain and outline promising avenues for future research.
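As a concrete, much-reduced example of retrospective feature-level harmonization, one can shift and rescale each site's feature distribution to pooled statistics. This location-scale sketch is a simplified relative of ComBat (no empirical Bayes shrinkage, no covariate preservation), on hypothetical data:

```python
import statistics

def harmonize(site_values):
    """Location-scale harmonization sketch: z-score each site's measurements,
    then rescale them to the pooled mean and standard deviation.

    `site_values` maps site name -> list of one feature's values from that site.
    """
    pooled = [v for values in site_values.values() for v in values]
    g_mean, g_sd = statistics.mean(pooled), statistics.stdev(pooled)
    out = {}
    for site, values in site_values.items():
        m, sd = statistics.mean(values), statistics.stdev(values)
        out[site] = [g_mean + g_sd * (v - m) / sd for v in values]
    return out

# Two hypothetical scanners measuring the same feature with a site offset.
data = {"site_A": [1.0, 1.2, 0.8, 1.1], "site_B": [2.0, 2.3, 1.8, 2.1]}
harmonized = harmonize(data)
# After harmonization, both sites share the same mean and spread.
print(round(statistics.mean(harmonized["site_A"]), 6),
      round(statistics.mean(harmonized["site_B"]), 6))
```

Note the trade-off the survey highlights: such aggressive standardization removes site effects but can also remove genuine biological differences between cohorts.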

Verification of resolution and imaging time for high-resolution deep learning reconstruction techniques.

Harada S, Takatsu Y, Murayama K, Sano Y, Ikedo M

pubmed · Jul 22 2025
Magnetic resonance imaging (MRI) involves a trade-off between imaging time, signal-to-noise ratio (SNR), and spatial resolution. Reducing the imaging time often leads to a lower SNR or resolution. Deep-learning-based reconstruction (DLR) methods have been introduced to address these limitations. Image-domain super-resolution DLR enables high resolution without additional image scans, and high-quality images can be obtained within a shorter timeframe by appropriately configuring DLR parameters. Maximizing the performance of super-resolution DLR is therefore necessary for its efficient use in MRI. We evaluated the performance of a vendor-provided super-resolution DLR method (PIQE) on a Canon 3 T MRI scanner using an edge phantom and clinical brain images from eight patients. Quantitative assessment included the structural similarity index (SSIM), peak SNR (PSNR), root mean square error (RMSE), and full width at half maximum (FWHM); FWHM was used to quantify spatial resolution and image sharpness. Visual evaluation using a five-point Likert scale was also performed to assess perceived image quality. Image-domain super-resolution DLR reduced scan time by up to 70% while preserving structural image quality. Acquisition matrices of 0.87 mm/pixel or finer with a zoom ratio of ×2 yielded SSIM ≥0.80, PSNR ≥35 dB, and non-significant FWHM differences compared to full-resolution references. In contrast, aggressive downsampling (a zoom ratio of ×3 from low-resolution matrices) led to image degradation, including truncation artifacts and reduced sharpness. These results clarify the optimal use of PIQE as an image-domain super-resolution method and provide practical guidance for its application in clinical MRI workflows.
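The FWHM metric used for sharpness can be sketched as a line-profile measurement with linear interpolation at the half-maximum crossings (the profile below is a synthetic Gaussian, not phantom data):

```python
import math

def fwhm(profile):
    """Full width at half maximum of a sampled line profile, in pixels,
    using linear interpolation at the two half-maximum crossings."""
    peak = max(profile)
    half = peak / 2.0
    i_peak = profile.index(peak)
    # Walk left from the peak to the first sample at or below half maximum.
    i = i_peak
    while profile[i] > half:
        i -= 1
    left = i + (half - profile[i]) / (profile[i + 1] - profile[i])
    # Walk right from the peak.
    j = i_peak
    while profile[j] > half:
        j += 1
    right = j - 1 + (profile[j - 1] - half) / (profile[j - 1] - profile[j])
    return right - left

# A synthetic Gaussian line profile; the FWHM of a Gaussian is ~2.355 * sigma.
sigma = 2.0
profile = [math.exp(-((x - 10) ** 2) / (2 * sigma ** 2)) for x in range(21)]
print(round(fwhm(profile), 3))
```

A narrower FWHM across an edge or point target indicates a sharper reconstruction, which is why the study used it to compare DLR settings against full-resolution references.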

Machine learning approach effectively discriminates between Parkinson's disease and progressive supranuclear palsy: multi-level indices of rs-fMRI.

Cheng W, Liang X, Zeng W, Guo J, Yin Z, Dai J, Hong D, Zhou F, Li F, Fang X

pubmed · Jul 22 2025
Parkinson's disease (PD) and progressive supranuclear palsy (PSP) present similar clinical symptoms, but their treatment options and clinical prognosis differ significantly. Therefore, we aimed to discriminate between PD and PSP based on multi-level indices of resting-state functional magnetic resonance imaging (rs-fMRI) via the machine learning approach. A total of 58 PD and 52 PSP patients were prospectively enrolled in this study. Participants were randomly allocated to a training set and a validation set in a 7:3 ratio. Various rs-fMRI indices were extracted, followed by a comprehensive feature screening for each index. We constructed fifteen distinct combinations of indices and selected four machine learning algorithms for model development. Subsequently, different validation templates were employed to assess the classification results and investigate the relationship between the most significant features and clinical assessment scales. The classification performance of logistic regression (LR) and support vector machine (SVM) models, based on multiple index combinations, was significantly superior to that of other machine learning models and combinations when utilizing automatic anatomical labeling (AAL) templates. This has been verified across different templates. The utilization of multiple rs-fMRI indices significantly enhances the performance of machine learning models and can effectively achieve the automatic identification of PD and PSP at the individual level.
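The 7:3 participant allocation can be made reproducible with a fixed random seed; a minimal sketch using the study's cohort sizes (the IDs are hypothetical, and the rs-fMRI feature extraction and LR/SVM modeling are not reproduced here):

```python
import random

def split_participants(ids, train_frac=0.7, seed=42):
    """Randomly allocate participant IDs to training and validation sets
    in a train_frac : (1 - train_frac) ratio, reproducibly via a fixed seed."""
    rng = random.Random(seed)
    shuffled = ids[:]
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 58 PD + 52 PSP participants, matching the study's cohort sizes.
cohort = [f"PD_{i}" for i in range(58)] + [f"PSP_{i}" for i in range(52)]
train, val = split_participants(cohort)
print(len(train), len(val))  # 77 33
```

Splitting at the participant level (rather than the scan level) is what prevents data leakage between training and validation sets.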

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound

Yu, M., Peterson, M. R., Burgoine, K., Harbaugh, T., Olupot-Olupot, P., Gladstone, M., Hagmann, C., Cowan, F. M., Weeks, A., Morton, S. U., Mulondo, R., Mbabazi-Kabachelor, E., Schiff, S. J., Monga, V.

medrxiv preprint · Jul 22 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, called the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing the prevailing state-of-the-art infection detection techniques.
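The cross-attention step, in which features from one view attend to features from the other, can be sketched with plain scaled dot-product attention; the toy feature vectors below are illustrative, and this is not CLIF-Net's actual module:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: feature vectors from one view
    (queries) attend to feature vectors from the other view (keys/values)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # attention weights sum to 1 per query
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy features: two coronal-view vectors attending to three sagittal-view vectors.
coronal = [[1.0, 0.0], [0.0, 1.0]]
sagittal = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
fused = cross_attention(coronal, sagittal, sagittal)
print([[round(x, 3) for x in row] for row in fused])
```

In the paper's setting, attention would be restricted to features from the geometrically intersecting region of the two views rather than all positions.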

Artificial intelligence in radiology: diagnostic sensitivity of ChatGPT for detecting hemorrhages in cranial computed tomography scans.

Bayar-Kapıcı O, Altunışık E, Musabeyoğlu F, Dev Ş, Kaya Ö

pubmed · Jul 21 2025
Chat Generative Pre-trained Transformer (ChatGPT)-4V, a large language model developed by OpenAI, has been explored for its potential application in radiology. This study assesses ChatGPT-4V's diagnostic performance in identifying various types of intracranial hemorrhages in non-contrast cranial computed tomography (CT) images. Intracranial hemorrhages were presented to ChatGPT using the clearest 2D imaging slices. The first question, "Q1: Which imaging technique is used in this image?", was asked to determine the imaging modality. ChatGPT was then prompted with the second question, "Q2: What do you see in this image and what is the final diagnosis?", to assess whether the CT scan was normal or showed pathology. For CT scans containing hemorrhage that ChatGPT did not interpret correctly, a follow-up question ("Q3: There is bleeding in this image. Which type of bleeding do you see?") was used to evaluate whether this guidance influenced its response. ChatGPT accurately identified the imaging technique (Q1) in all cases but demonstrated difficulty diagnosing epidural hematoma (EDH), subdural hematoma (SDH), and subarachnoid hemorrhage (SAH) when no clues were provided (Q2). When a hemorrhage clue was introduced (Q3), ChatGPT correctly identified EDH in 16.7% of cases, SDH in 60%, and SAH in 15.6%, and achieved 100% diagnostic accuracy for hemorrhagic cerebrovascular disease. Its sensitivity, specificity, and accuracy for Q2 were 23.6%, 92.5%, and 57.4%, respectively. These values improved substantially with the clue in Q3, with sensitivity rising to 50.9% and accuracy to 71.3%. ChatGPT also demonstrated higher diagnostic accuracy for larger hemorrhages in EDH and SDH images. Although the model performs well in recognizing imaging modalities, its diagnostic accuracy substantially improves when guided by additional contextual information. These findings suggest that ChatGPT's diagnostic performance improves with guided prompts, highlighting its potential as a supportive tool in clinical radiology.
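The reported sensitivity, specificity, and accuracy values derive from confusion-matrix counts; a minimal sketch of those formulas with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts:
    tp/fn for scans with hemorrhage, tn/fp for normal scans."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 50 hemorrhage scans, 50 normal scans.
sens, spec, acc = diagnostic_metrics(tp=30, fn=20, tn=45, fp=5)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
# sensitivity=60.0% specificity=90.0% accuracy=75.0%
```

The pattern in the abstract (low sensitivity, high specificity) means the model rarely called a normal scan abnormal but missed many true hemorrhages until a clue was given.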
