Page 12 of 14137 results

Edge Computing for Physics-Driven AI in Computational MRI: A Feasibility Study

Yaşar Utku Alçalar, Yu Cao, Mehmet Akçakaya

arXiv preprint, May 30, 2025
Physics-driven artificial intelligence (PD-AI) reconstruction methods have emerged as the state-of-the-art for accelerating MRI scans, enabling higher spatial and temporal resolutions. However, the high resolution of these scans generates massive data volumes, leading to challenges in transmission, storage, and real-time processing. This is particularly pronounced in functional MRI, where hundreds of volumetric acquisitions further exacerbate these demands. Edge computing with FPGAs presents a promising solution for enabling PD-AI reconstruction near the MRI sensors, reducing data transfer and storage bottlenecks. However, this requires optimizing PD-AI models for hardware efficiency through quantization, and bypassing traditional FFT-based approaches, whose computational demands can be a limitation on such hardware. In this work, we propose a novel PD-AI computational MRI approach optimized for FPGA-based edge computing devices, leveraging 8-bit complex data quantization and eliminating redundant FFT/IFFT operations. Our results show that this strategy improves computational efficiency while maintaining reconstruction quality comparable to conventional PD-AI methods, and outperforms standard clinical methods. Our approach presents an opportunity for high-resolution MRI reconstruction on resource-constrained devices, highlighting its potential for real-world deployment.
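The 8-bit complex quantization the abstract mentions can be illustrated with a minimal sketch (assumptions: a symmetric, per-tensor scheme with one shared scale for real and imaginary parts; the paper's actual scheme may differ):

```python
# Illustrative sketch, not the authors' exact scheme: symmetric 8-bit
# quantization of complex data, storing real and imaginary parts as int8.

def quantize_complex_8bit(z):
    """Quantize a list of complex numbers to ((int8, int8) pairs, scale)."""
    # One shared scale keeps real and imaginary parts on the same grid
    max_abs = max(max(abs(c.real), abs(c.imag)) for c in z) or 1.0
    scale = max_abs / 127.0
    q = [(round(c.real / scale), round(c.imag / scale)) for c in z]
    return q, scale

def dequantize_complex_8bit(q, scale):
    return [complex(re * scale, im * scale) for re, im in q]

data = [3.2 + 1.1j, -0.5 + 2.7j, 0.0 - 3.2j]
q, s = quantize_complex_8bit(data)
recon = dequantize_complex_8bit(q, s)
```

A shared scale means complex arithmetic on the quantized values stays consistent across both components; per-channel scales are a common refinement in practice.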

Self-supervised feature learning for cardiac Cine MR image reconstruction

Siying Xu, Marcel Früh, Kerstin Hammernik, Andreas Lingg, Jens Kübler, Patrick Krumm, Daniel Rueckert, Sergios Gatidis, Thomas Küstner

arXiv preprint, May 29, 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitation of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully-sampled images for supervised learning, which is challenging in practice considering long acquisition times under respiratory or organ motion. Moreover, nearly all fully-sampled datasets are obtained from conventional reconstruction of mildly accelerated datasets, thus potentially biasing the achievable performance. The numerous undersampled datasets with different accelerations in clinical practice, hence, remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac Cine dataset, including 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and even achieves performance comparable to, or better than, supervised learning at up to $16\times$ retrospective undersampling. The feature learning strategy can effectively extract global representations, which have proven beneficial in removing artifacts and increasing generalization ability during reconstruction.
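The idea of sampling-insensitive features can be sketched with a toy contrastive objective: features of two differently undersampled views of the same image are pulled together, while features of other images are pushed apart. This is a generic InfoNCE-style sketch, not the paper's architecture or loss:

```python
# Toy sketch (assumed details, not the paper's method): a contrastive
# objective encouraging features of two differently undersampled views of
# the same image to agree, while differing across images.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull anchor toward its positive view, push from negatives."""
    logits = [cosine(anchor, positive) / tau]
    logits += [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Two views of the same image (similar features) give a low loss;
# mismatched views give a high loss.
same = contrastive_loss([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
diff = contrastive_loss([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1]])
```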

Operationalizing postmortem pathology-MRI association studies in Alzheimer's disease and related disorders with MRI-guided histology sampling.

Athalye C, Bahena A, Khandelwal P, Emrani S, Trotman W, Levorse LM, Khodakarami Z, Ohm DT, Teunissen-Bermeo E, Capp N, Sadaghiani S, Arezoumandan S, Lim SA, Prabhakaran K, Ittyerah R, Robinson JL, Schuck T, Lee EB, Tisdall MD, Das SR, Wolk DA, Irwin DJ, Yushkevich PA

PubMed, May 28, 2025
Postmortem neuropathological examination, while the gold standard for diagnosing neurodegenerative diseases, often relies on limited regional sampling that may miss critical areas affected by Alzheimer's disease and related disorders. Ultra-high resolution postmortem MRI can help identify regions that fall outside the diagnostic sampling criteria for additional histopathologic evaluation. However, there are no standardized guidelines for integrating histology and MRI in a traditional brain bank. We developed a comprehensive protocol for whole hemisphere postmortem 7T MRI-guided histopathological sampling with whole-slide digital imaging and histopathological analysis, providing a reliable pipeline for high-volume brain banking in heterogeneous brain tissue. Our method uses patient-specific 3D printed molds built from postmortem MRI, allowing standardized tissue processing with a permanent spatial reference frame. To facilitate pathology-MRI association studies, we created a semi-automated MRI to histology registration pipeline and developed a quantitative pathology scoring system using weakly supervised deep learning. We validated this protocol on a cohort of 29 brains with diagnosis on the AD spectrum that revealed correlations between cortical thickness and phosphorylated tau accumulation. This pipeline has broad applicability across neuropathological research and brain banking, facilitating large-scale studies that integrate histology with neuroimaging. The innovations presented here provide a scalable and reproducible approach to studying postmortem brain pathology, with implications for advancing diagnostic and therapeutic strategies for Alzheimer's disease and related disorders.

SUFFICIENT: A scan-specific unsupervised deep learning framework for high-resolution 3D isotropic fetal brain MRI reconstruction

Jiangjie Wu, Lixuan Chen, Zhenghao Li, Xin Li, Saban Ozturk, Lihui Wang, Rongpin Wang, Hongjiang Wei, Yuyao Zhang

arXiv preprint, May 23, 2025
High-quality 3D fetal brain MRI reconstruction from motion-corrupted 2D slices is crucial for clinical diagnosis. Reliable slice-to-volume registration (SVR)-based motion correction and super-resolution reconstruction (SRR) methods are essential. Deep learning (DL) has demonstrated potential in enhancing SVR and SRR when compared to conventional methods. However, it requires large-scale external training datasets, which are difficult to obtain for clinical fetal MRI. To address this issue, we propose an unsupervised iterative SVR-SRR framework for isotropic HR volume reconstruction. Specifically, SVR is formulated as a function mapping a 2D slice and a 3D target volume to a rigid transformation matrix, which aligns the slice to the underlying location in the target volume. The function is parameterized by a convolutional neural network, which is trained by minimizing the difference between the volume slicing at the predicted position and the input slice. In SRR, a decoding network embedded within a deep image prior framework is incorporated with a comprehensive image degradation model to produce the high-resolution (HR) volume. The deep image prior framework offers a local consistency prior to guide the reconstruction of HR volumes. The HR volume is then optimized by applying the forward degradation model and minimizing the loss between the predicted slices and the observed slices. Comprehensive experiments conducted on large-magnitude motion-corrupted simulation data and clinical data demonstrate the superior performance of the proposed framework over state-of-the-art fetal brain reconstruction frameworks.
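The SVR objective described above (align a 2D slice to its location in the volume by minimizing the slicing loss) can be reduced to a toy search, under the simplifying assumption that the rigid transform collapses to an integer through-plane offset; the actual method predicts a full rigid matrix with a CNN:

```python
# Toy sketch of the SVR objective (assumption: the rigid transform is
# reduced to an integer through-plane offset z): choose the offset whose
# extracted slice best matches the observed 2D slice.

def slice_loss(volume, observed, z):
    """Mean squared error between volume slice z and the observed slice."""
    pred = volume[z]
    return sum((p - o) ** 2
               for row_p, row_o in zip(pred, observed)
               for p, o in zip(row_p, row_o)) / (len(pred) * len(pred[0]))

def register_slice(volume, observed):
    """Exhaustive search over offsets, minimizing the slicing loss."""
    return min(range(len(volume)), key=lambda z: slice_loss(volume, observed, z))

# Synthetic 5x3x3 volume; the observed slice matches depth z = 2 exactly
volume = [[[z + r + c for c in range(3)] for r in range(3)] for z in range(5)]
observed = [[2 + r + c for c in range(3)] for r in range(3)]
best_z = register_slice(volume, observed)
```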

AI in Action: A Roadmap from the Radiology AI Council for Effective Model Evaluation and Deployment.

Trivedi H, Khosravi B, Gichoya J, Benson L, Dyckman D, Galt J, Howard B, Kikano E, Kunjummen J, Lall N, Li X, Patel S, Safdar N, Salastekar N, Segovis C, van Assen M, Harri P

PubMed, May 23, 2025
As the integration of artificial intelligence (AI) into radiology workflows continues to evolve, establishing standardized processes for the evaluation and deployment of AI models is crucial to ensure success. This paper outlines the creation of a Radiology AI Council at a large academic center and the subsequent development of a framework, in the form of a rubric, to formalize the evaluation of radiology AI models and onboard them into clinical workflows. The rubric aims to address the challenges faced during the deployment of AI models, such as real-world model performance, workflow implementation, resource allocation, return on investment (ROI), and impact on the broader health system. Using this comprehensive rubric, the council aims to ensure that the process for selecting AI models is both standardized and transparent. This paper outlines the steps taken to establish this rubric, its components, and initial results from the evaluation of 13 models over an 8-month period. We emphasize the importance of holistic model evaluation beyond performance metrics, and of transparency and objectivity in AI model evaluation, with the goal of improving the efficacy and safety of AI models in radiology.
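A rubric of this kind can be operationalized as a weighted score. The criteria and weights below are hypothetical, echoing the challenge areas the abstract lists; they are not the council's actual rubric:

```python
# Hypothetical sketch: aggregate per-criterion ratings into one weighted
# score for an AI model under review. Criteria and weights are illustrative.

RUBRIC = {  # criterion: weight (weights sum to 1.0)
    "real_world_performance": 0.30,
    "workflow_fit": 0.25,
    "resource_cost": 0.20,
    "roi": 0.15,
    "system_impact": 0.10,
}

def rubric_score(scores):
    """scores: criterion -> rating on a 0-5 scale; returns weighted 0-5 total."""
    missing = set(RUBRIC) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    return sum(RUBRIC[c] * scores[c] for c in RUBRIC)

total = rubric_score({
    "real_world_performance": 4,
    "workflow_fit": 3,
    "resource_cost": 5,
    "roi": 2,
    "system_impact": 4,
})
```

Requiring every criterion to be scored before a total is produced is one way to enforce the holistic, beyond-performance-metrics evaluation the paper argues for.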

Evaluation of a deep-learning segmentation model for patients with colorectal cancer liver metastases (COALA) in the radiological workflow.

Zeeuw M, Bereska J, Strampel M, Wagenaar L, Janssen B, Marquering H, Kemna R, van Waesberghe JH, van den Bergh J, Nota I, Moos S, Nio Y, Kop M, Kist J, Struik F, Wesdorp N, Nelissen J, Rus K, de Sitter A, Stoker J, Huiskens J, Verpalen I, Kazemier G

PubMed, May 23, 2025
For patients with colorectal liver metastases (CRLM), total tumor volume (TTV) is prognostic. A deep-learning segmentation model for CRLM to assess TTV, called COlorectal cAncer Liver metastases Assessment (COALA), has been developed. This study evaluated COALA's performance and practical utility in the radiological picture archiving and communication system (PACS). A secondary aim was to provide lessons for future researchers on the implementation of artificial intelligence (AI) models. Patients discussed between January and December 2023 in a multidisciplinary meeting for CRLM were included. In these patients, CRLM were automatically segmented on portal-venous phase CT scans by COALA and integrated with PACS. Eight expert abdominal radiologists completed a questionnaire addressing segmentation accuracy and PACS integration; they were also asked to record general remarks. In total, 57 patients were evaluated, comprising 112 contrast-enhanced portal-venous phase CT scans. Six of the eight radiologists (75%) rated the model as user-friendly in their radiological workflow. Areas for improvement of the COALA model were the segmentation of small lesions, heterogeneous lesions, and lesions at the border of the liver with involvement of the diaphragm or heart. Key lessons for implementation were a multidisciplinary approach, a robust methodology prior to model development, and evaluation sessions with end-users early in the development phase. This study demonstrates that the deep-learning segmentation model for patients with CRLM (COALA) is user-friendly in the radiologist's PACS. Future researchers striving for implementation should take a multidisciplinary approach, propose a robust methodology, and involve end-users prior to model development. Many segmentation models are being developed, but none of them are evaluated in the radiological workflow or clinically implemented. Our model is implemented in the radiological work system, providing valuable lessons for researchers aiming at clinical implementation. If adopted in clinical practice, the model could allow for objective radiological evaluation.
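Once a model like COALA produces a segmentation, the prognostic quantity itself, total tumor volume, is a simple reduction over the mask. A sketch under an assumed nested-list voxel representation:

```python
# Illustrative sketch (assumed data layout): total tumor volume from a
# binary segmentation mask plus voxel spacing, as a segmentation output
# might be summarized for prognosis.

def total_tumor_volume_ml(mask, spacing_mm):
    """mask: nested 0/1 voxels [z][y][x]; spacing_mm: (dz, dy, dx). Returns mL."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    n_tumor = sum(v for plane in mask for row in plane for v in row)
    return n_tumor * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

mask = [[[1, 1], [0, 1]], [[1, 0], [0, 0]]]  # 4 tumor voxels
ttv = total_tumor_volume_ml(mask, (5.0, 1.0, 1.0))  # 5 mm slices, 1 mm in-plane
```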

ActiveNaf: A novel NeRF-based approach for low-dose CT image reconstruction through active learning.

Zidane A, Shimshoni I

PubMed, May 22, 2025
CT imaging provides essential information about internal anatomy; however, conventional CT imaging delivers radiation doses that can become problematic for patients requiring repeated imaging, highlighting the need for dose-reduction techniques. This study aims to reduce radiation doses without compromising image quality. We propose an approach that combines Neural Attenuation Fields (NAF) with an active learning strategy to better optimize CT reconstructions given a limited number of X-ray projections. Our method uses a secondary neural network to predict the Peak Signal-to-Noise Ratio (PSNR) of 2D projections generated by NAF from a range of angles in the operational range of the CT scanner. This prediction serves as a guide for the active learning process in choosing the most informative projections. In contrast to conventional techniques that acquire all X-ray projections in a single session, our technique iteratively acquires projections. The iterative process improves reconstruction quality, reduces the number of required projections, and decreases patient radiation exposure. We tested our methodology on spinal imaging using a limited subset of the VerSe 2020 dataset. We compare image quality metrics (PSNR3D, SSIM3D, and PSNR2D) to the baseline method and find significant improvements. Our method achieves the same quality with 36 projections as the baseline method achieves with 60. Our findings demonstrate that our approach achieves high-quality 3D CT reconstructions from sparse data, producing clearer and more detailed images of anatomical structures. This work lays the groundwork for advanced imaging techniques, paving the way for safer and more efficient medical imaging procedures.
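The selection step of this active-learning loop can be sketched as follows, with a stand-in PSNR predictor (the paper trains a secondary network for this role): the next projections acquired are the unacquired angles the predictor scores worst, i.e. most informative:

```python
# Sketch of the angle-selection step (the predictor below is a stand-in for
# the paper's secondary network): iteratively acquire the candidate angles
# whose rendered projections score the lowest predicted PSNR.

def next_angles(candidates, acquired, predict_psnr, batch=1):
    """Pick `batch` unacquired angles with the lowest predicted PSNR."""
    pool = [a for a in candidates if a not in acquired]
    return sorted(pool, key=predict_psnr)[:batch]

# Stand-in predictor: pretend quality degrades with distance from the
# already-acquired angles, so far-away angles look most informative.
acquired = {0, 90}
predict = lambda a: -min(abs(a - s) for s in acquired)
pick = next_angles(range(0, 180, 10), acquired, predict, batch=2)
```

In the actual method this loop alternates with refitting the NAF on the enlarged projection set, which is what lets 36 projections match the baseline's 60.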

Multi-modal Integration Analysis of Alzheimer's Disease Using Large Language Models and Knowledge Graphs

Kanan Kiguchi, Yunhao Tu, Katsuhiro Ajito, Fady Alnajjar, Kazuyuki Murase

arXiv preprint, May 21, 2025
We propose a novel framework for integrating fragmented multi-modal data in Alzheimer's disease (AD) research using large language models (LLMs) and knowledge graphs. While traditional multimodal analysis requires matched patient IDs across datasets, our approach demonstrates population-level integration of MRI, gene expression, biomarkers, EEG, and clinical indicators from independent cohorts. Statistical analysis identified significant features in each modality, which were connected as nodes in a knowledge graph. LLMs then analyzed the graph to extract potential correlations and generate hypotheses in natural language. This approach revealed several novel relationships, including a potential pathway linking metabolic risk factors to tau protein abnormalities via neuroinflammation (r>0.6, p<0.001), and unexpected correlations between frontal EEG channels and specific gene expression profiles (r=0.42-0.58, p<0.01). Cross-validation with independent datasets confirmed the robustness of major findings, with consistent effect sizes across cohorts (variance <15%). The reproducibility of these findings was further supported by expert review (Cohen's κ = 0.82) and computational validation. Our framework enables cross-modal integration at a conceptual level without requiring patient ID matching, offering new possibilities for understanding AD pathology through fragmented data reuse and generating testable hypotheses for future research.
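The graph-construction step described above can be sketched as follows; the node/edge schema, thresholds, and feature names are illustrative, not the authors' actual schema:

```python
# Conceptual sketch (schema assumed): significant features from each
# modality become nodes; statistically supported correlations become
# edges, yielding the graph an LLM is then asked to interpret.

def build_graph(features, correlations, r_min=0.4, p_max=0.01):
    """features: (name, modality) pairs; correlations: (a, b, r, p) tuples."""
    nodes = {name: {"modality": mod} for name, mod in features}
    edges = [(a, b, r) for a, b, r, p in correlations
             if a in nodes and b in nodes and abs(r) >= r_min and p <= p_max]
    return nodes, edges

nodes, edges = build_graph(
    [("metabolic_risk", "clinical"), ("tau_abnormality", "biomarker"),
     ("frontal_eeg_power", "EEG")],
    [("metabolic_risk", "tau_abnormality", 0.62, 0.0005),
     ("frontal_eeg_power", "tau_abnormality", 0.30, 0.2)],  # filtered out
)
```

Because nodes are population-level feature summaries rather than patient records, no patient-ID matching across cohorts is needed, which is the point of the framework.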

CONSIGN: Conformal Segmentation Informed by Spatial Groupings via Decomposition

Bruno Viti, Elias Karabelas, Martin Holler

arXiv preprint, May 20, 2025
Most machine learning-based image segmentation models produce pixel-wise confidence scores - typically derived from softmax outputs - that represent the model's predicted probability for each class label at every pixel. While this information can be particularly valuable in high-stakes domains such as medical imaging, these (uncalibrated) scores are heuristic in nature and do not constitute rigorous quantitative uncertainty estimates. Conformal prediction (CP) provides a principled framework for transforming heuristic confidence scores into statistically valid uncertainty estimates. However, applying CP directly to image segmentation ignores the spatial correlations between pixels, a fundamental characteristic of image data. This can result in overly conservative and less interpretable uncertainty estimates. To address this, we propose CONSIGN (Conformal Segmentation Informed by Spatial Groupings via Decomposition), a CP-based method that incorporates spatial correlations to improve uncertainty quantification in image segmentation. Our method generates meaningful prediction sets that come with user-specified, high-probability error guarantees. It is compatible with any pre-trained segmentation model capable of generating multiple sample outputs - such as those using dropout, Bayesian modeling, or ensembles. We evaluate CONSIGN against a standard pixel-wise CP approach across three medical imaging datasets and two COCO dataset subsets, using three different pre-trained segmentation models. Results demonstrate that accounting for spatial structure significantly improves performance across multiple metrics and enhances the quality of uncertainty estimates.
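For context, the standard pixel-wise baseline that CONSIGN is compared against is split conformal prediction applied independently at each pixel. A minimal sketch (illustrative, not the paper's code): calibrate a score threshold at level 1 - alpha on held-out pixels, then a pixel's prediction set is every label whose softmax score clears it.

```python
# Sketch of standard split conformal prediction per pixel (the baseline):
# the nonconformity score of a calibration pixel is 1 - p(true label).
import math

def conformal_threshold(cal_scores, alpha):
    """Finite-sample corrected (1 - alpha)-quantile of calibration scores."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(label_probs, qhat):
    """Labels whose predicted probability is high enough to be included."""
    return {lbl for lbl, p in label_probs.items() if 1 - p <= qhat}

cal = [0.05, 0.10, 0.20, 0.30, 0.80]  # toy calibration scores
qhat = conformal_threshold(cal, alpha=0.2)
pset = prediction_set({"tumor": 0.75, "edema": 0.60, "bg": 0.05}, qhat)
```

Because each pixel is thresholded independently, spatial correlation between neighboring pixels is ignored; that is exactly the conservatism CONSIGN's spatial groupings are designed to reduce.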