Page 204 of 4064055 results

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health

Varvara Kalokyri, Nikolaos S. Tachos, Charalampos N. Kalantzopoulos, Stelios Sfakianakis, Haridimos Kondylakis, Dimitrios I. Zaridis, Sara Colantonio, Daniele Regge, Nikolaos Papanikolaou, The ProCAncer-I consortium, Konstantinos Marias, Dimitrios I. Fotiadis, Manolis Tsiknakis

arXiv preprint · Jun 27 2025
The increasing integration of Artificial Intelligence (AI) into health and biomedical systems necessitates robust frameworks for transparency, accountability, and ethical compliance. Existing frameworks often rely on manual, human-readable documentation, which limits scalability, comparability, and machine interpretability across projects and platforms. They also fail to provide a unique, verifiable identity for AI models to ensure their provenance and authenticity across systems and use cases, limiting reproducibility and stakeholder trust. This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework that acts as a digital identity and verification tool for AI models. It captures essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle, from data acquisition and preprocessing to model design, development, and deployment. In addition, an implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications. AIPassport automates metadata collection, ensures proper versioning, decouples results from source scripts, and integrates with various development environments. Its effectiveness is showcased through a lesion segmentation use case using data from the ProCAncer-I dataset, illustrating how the AI Model Passport enhances transparency, reproducibility, and regulatory readiness while reducing manual effort. This approach aims to set a new standard for fostering trust and accountability in AI-driven healthcare, aspiring to serve as the basis for transparent and regulation-compliant AI systems across domains.
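The core idea of a model passport is structured metadata plus a verifiable identity. A minimal sketch of what such a record might look like, with illustrative field names (the actual AIPassport schema is not specified in the abstract) and a content-derived fingerprint standing in for the unique model identifier:

```python
from dataclasses import dataclass, field, asdict
import hashlib
import json

@dataclass
class ModelPassport:
    """Illustrative passport record; field names are assumptions, not the AIPassport schema."""
    model_name: str
    version: str
    training_dataset: str
    preprocessing_steps: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)

    def fingerprint(self) -> str:
        # Deterministic identity: hash of the canonicalized metadata,
        # so any change to provenance or version yields a new identifier.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

passport = ModelPassport(
    model_name="lesion-segmenter",
    version="1.0.0",
    training_dataset="ProCAncer-I (subset)",
    preprocessing_steps=["N4 bias correction", "resampling to 1 mm"],
    metrics={"dice": 0.87},
)
print(passport.fingerprint()[:12])
```

Because the fingerprint is derived from the metadata itself, identical passports always resolve to the same identifier, while any edit to the lifecycle record changes it, which is the property needed for provenance checks across systems.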

Cardiovascular disease classification using radiomics and geometric features from cardiac CT

Ajay Mittal, Raghav Mehta, Omar Todd, Philipp Seeböck, Georg Langs, Ben Glocker

arXiv preprint · Jun 27 2025
Automatic detection and classification of cardiovascular disease (CVD) from computed tomography (CT) images play an important part in facilitating better-informed clinical decisions. However, most recent deep-learning-based methods either work directly on raw CT data or pair it with anatomical cardiac structure segmentation by training an end-to-end classifier, which makes these approaches much harder to interpret from a clinical perspective. To address this challenge, we break the CVD classification pipeline down into three components: (i) image segmentation, (ii) image registration, and (iii) downstream CVD classification. Specifically, we utilize the Atlas-ISTN framework and recent segmentation foundation models to generate anatomical structure segmentations and a normative healthy atlas. These are then used to extract clinically interpretable radiomic features as well as deformation-field-based geometric features (through atlas registration) for CVD classification. Our experiments on the publicly available ASOCA dataset show that utilizing these features leads to better CVD classification accuracy (87.50%) than a classification model trained directly on raw CT images (67.50%). Our code is publicly available: https://github.com/biomedia-mira/grc-net
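The downstream step combines two interpretable feature families into one vector per patient before classification. A toy numpy sketch of that idea, with synthetic features and a simple nearest-centroid classifier standing in for the paper's actual classifier (which the abstract does not name):

```python
import numpy as np

def combine_features(radiomic: np.ndarray, geometric: np.ndarray) -> np.ndarray:
    """Concatenate per-patient radiomic and deformation-based geometric features."""
    return np.concatenate([radiomic, geometric], axis=1)

def nearest_centroid_fit(X, y):
    """Fit one class centroid per label (illustrative stand-in classifier)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

rng = np.random.default_rng(0)
rad = rng.normal(0, 1, (40, 13)); rad[20:] += 2.0  # synthetic radiomic features
geo = rng.normal(0, 1, (40, 5));  geo[20:] += 2.0  # synthetic geometric features
X = combine_features(rad, geo)
y = np.array([0] * 20 + [1] * 20)
preds = nearest_centroid_predict(nearest_centroid_fit(X, y), X)
print((preds == y).mean())
```

The appeal of this decomposition is that each input dimension has a clinical meaning (a radiomic texture statistic or a local deformation summary), unlike the activations of an end-to-end classifier on raw CT.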

Reasoning in machine vision: learning to think fast and slow

Shaheer U. Saeed, Yipei Wang, Veeru Kasivisvanathan, Brian R. Davidson, Matthew J. Clarkson, Yipeng Hu, Daniel C. Alexander

arXiv preprint · Jun 27 2025
Reasoning is a hallmark of human intelligence, enabling adaptive decision-making in complex and unfamiliar scenarios. In contrast, machine intelligence remains bound to training data, lacking the ability to dynamically refine solutions at inference time. While some recent advances have explored reasoning in machines, these efforts are largely limited to verbal domains such as mathematical problem-solving, where explicit rules govern step-by-step reasoning. Other critical real-world tasks - including visual perception, spatial reasoning, and radiological diagnosis - require non-verbal reasoning, which remains an open challenge. Here we present a novel learning paradigm that enables machine reasoning in vision by allowing performance improvement with increasing thinking time (inference-time compute), even under conditions where labelled data is very limited. Inspired by dual-process theories of human cognition in psychology, our approach integrates a fast-thinking System I module for familiar tasks with a slow-thinking System II module that iteratively refines solutions using self-play reinforcement learning. This paradigm mimics human reasoning by proposing, competing over, and refining solutions in data-scarce scenarios. We demonstrate superior performance through extended thinking time, compared not only to large-scale supervised learning but also to foundation models and even human experts, in real-world vision tasks. These tasks include computer-vision benchmarks and cancer localisation on medical images across five organs, showcasing transformative potential for non-verbal machine reasoning.
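The fast/slow split can be illustrated with a toy scalar task: a one-shot proposal (System I) followed by an iterative loop that keeps only improving candidates (a much-simplified analogue of the paper's self-play refinement, not its actual algorithm). The key property is that error never increases with more "thinking time":

```python
import random

def fast_system_I(x):
    """One-shot proposal, analogous to a single feed-forward prediction."""
    return x + random.uniform(-1.0, 1.0)

def slow_system_II(target, proposal, steps):
    """Iterative refinement: candidates compete, and only improvements survive.

    More steps (more inference-time compute) can only tighten the solution,
    since a worse candidate is never accepted.
    """
    best = proposal
    for _ in range(steps):
        candidate = best + random.uniform(-0.5, 0.5)
        if abs(candidate - target) < abs(best - target):
            best = candidate
    return best

random.seed(0)
target = 3.14159
guess = fast_system_I(target)
refined_short = slow_system_II(target, guess, steps=5)
refined_long = slow_system_II(target, guess, steps=200)
print(abs(guess - target), abs(refined_short - target), abs(refined_long - target))
```

In the real setting the "target" is unknown and the acceptance test is replaced by a learned reward, but the monotone improve-or-keep structure is what lets performance scale with inference-time compute.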

Towards Scalable and Robust White Matter Lesion Localization via Multimodal Deep Learning

Julia Machnio, Sebastian Nørgaard Llambias, Mads Nielsen, Mostafa Mehdipour Ghazi

arXiv preprint · Jun 27 2025
White matter hyperintensities (WMH) are radiological markers of small vessel disease and neurodegeneration, whose accurate segmentation and spatial localization are crucial for diagnosis and monitoring. While multimodal MRI offers complementary contrasts for detecting and contextualizing WM lesions, existing approaches often lack flexibility in handling missing modalities and fail to integrate anatomical localization efficiently. We propose a deep learning framework for WM lesion segmentation and localization that operates directly in native space using single- and multi-modal MRI inputs. Our study evaluates four input configurations: FLAIR-only, T1-only, concatenated FLAIR and T1, and a modality-interchangeable setup. It further introduces a multi-task model for jointly predicting lesion and anatomical region masks to estimate region-wise lesion burden. Experiments conducted on the MICCAI WMH Segmentation Challenge dataset demonstrate that multimodal input significantly improves the segmentation performance, outperforming unimodal models. While the modality-interchangeable setting trades accuracy for robustness, it enables inference in cases with missing modalities. Joint lesion-region segmentation using multi-task learning was less effective than separate models, suggesting representational conflict between tasks. Our findings highlight the utility of multimodal fusion for accurate and robust WMH analysis, and the potential of joint modeling for integrated predictions.
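A modality-interchangeable setup needs a fixed input layout that still works when a scan is absent. One common way to get that robustness, shown here as a hedged sketch (the paper's exact mechanism may differ), is to reserve a channel per modality and zero-fill missing ones:

```python
import numpy as np

MODALITIES = ("FLAIR", "T1")

def build_input(scans: dict, shape=(4, 4, 4)) -> np.ndarray:
    """Stack available modalities into a fixed channel layout.

    Missing modalities become zero channels, so one network can accept
    FLAIR-only, T1-only, or full multimodal input without architectural
    changes -- an illustrative stand-in for the modality-interchangeable setup.
    """
    channels = []
    for name in MODALITIES:
        vol = scans.get(name)
        channels.append(vol if vol is not None else np.zeros(shape))
    return np.stack(channels, axis=0)

full = build_input({"FLAIR": np.ones((4, 4, 4)), "T1": np.ones((4, 4, 4))})
flair_only = build_input({"FLAIR": np.ones((4, 4, 4))})
print(full.shape, flair_only[1].sum())
```

Training with randomly dropped channels then teaches the model the accuracy-for-robustness trade-off the abstract reports: slightly worse than true multimodal input, but usable when a modality is missing at inference.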

Noise-Inspired Diffusion Model for Generalizable Low-Dose CT Reconstruction

Qi Gao, Zhihao Chen, Dong Zeng, Junping Zhang, Jianhua Ma, Hongming Shan

arXiv preprint · Jun 27 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization performance and robustness, either by collecting diverse CT data for re-training or a few test samples for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because CT image noise deviates from a Gaussian distribution and the guidance of noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that our NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is made available at https://github.com/qgao21/NEED.
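The shifted Poisson model referenced here describes pre-log CT measurements: photon counts are Poisson-distributed around `I0 * exp(-line_integral)`, electronic noise is roughly Gaussian, and adding the electronic-noise variance to the counts yields a variable whose variance approximately equals its mean. A simulation sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def simulate_prelog_counts(line_integrals, I0=1e5, sigma_e=5.0, rng=None):
    """Simulate pre-log CT measurements under a shifted Poisson model.

    Counts ~ Poisson(I0 * exp(-line_integral)) plus Gaussian electronic
    noise; shifting by sigma_e**2 restores the variance ~= mean property
    that a Poisson-matched diffusion process can exploit.
    """
    rng = rng or np.random.default_rng(0)
    mean_counts = I0 * np.exp(-line_integrals)
    counts = rng.poisson(mean_counts).astype(float)
    counts += rng.normal(0.0, sigma_e, size=counts.shape)  # electronic noise
    shifted = counts + sigma_e ** 2                         # the "shift"
    return shifted

# Uniform line integral of 2.0 over a 64x64 detector patch.
proj = simulate_prelog_counts(np.full((64, 64), 2.0))
print(proj.mean())
```

Matching the diffusion process to this count statistic, rather than assuming Gaussian image-domain noise, is the motivation for doing the first denoising stage in the projection domain.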

High Resolution Isotropic 3D Cine imaging with Automated Segmentation using Concatenated 2D Real-time Imaging and Deep Learning

Mark Wrobel, Michele Pascale, Tina Yao, Ruaraidh Campbell, Elena Milano, Michael Quail, Jennifer Steeden, Vivek Muthurangu

arXiv preprint · Jun 27 2025
Background: Conventional cardiovascular magnetic resonance (CMR) in paediatric and congenital heart disease uses 2D, breath-hold, balanced steady state free precession (bSSFP) cine imaging for assessment of function and cardiac-gated, respiratory-navigated, static 3D bSSFP whole-heart imaging for anatomical assessment. Our aim is to concatenate a stack of 2D free-breathing real-time cines and use deep learning (DL) to create an isotropic, fully segmented 3D cine dataset from these images. Methods: Four DL models were trained on open-source data to perform: a) interslice contrast correction; b) interslice respiratory motion correction; c) super-resolution (slice direction); and d) segmentation of the right and left atria and ventricles (RA, LA, RV, and LV), thoracic aorta (Ao), and pulmonary arteries (PA). In 10 patients undergoing routine cardiovascular examination, our method was validated on prospectively acquired sagittal stacks of real-time cine images. Quantitative metrics (ventricular volumes and vessel diameters) and image quality of the 3D cines were compared to conventional breath-hold cine and whole-heart imaging. Results: All real-time data were successfully transformed into 3D cines with a total post-processing time of <1 min in all cases. There were no significant biases in any LV or RV metrics, with reasonable limits of agreement and correlation. There was also reasonable agreement for all vessel diameters, although there was a small but significant overestimation of RPA diameter. Conclusion: We have demonstrated the potential of creating 3D cine data from concatenated 2D real-time cine images using a series of DL models. Our method has short acquisition and reconstruction times, with fully segmented data available within 2 minutes. The good agreement with conventional imaging suggests that our method could help to significantly speed up CMR in clinical practice.
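The four trained models form a fixed sequence applied to each 2D stack. A minimal sketch of that chaining pattern, with placeholder lambdas standing in for the actual networks (none of these stand-ins reflect the real model internals):

```python
def run_pipeline(stack, stages):
    """Apply the post-processing stages in order to a 2D cine stack."""
    for stage in stages:
        stack = stage(stack)
    return stack

# Hypothetical stand-ins for the four trained DL models.
stages = [
    lambda s: {**s, "contrast_corrected": True},   # a) interslice contrast correction
    lambda s: {**s, "motion_corrected": True},     # b) respiratory motion correction
    lambda s: {**s, "isotropic": True},            # c) slice-direction super-resolution
    lambda s: {**s, "labels": ["RA", "LA", "RV", "LV", "Ao", "PA"]},  # d) segmentation
]
result = run_pipeline({"frames": 30}, stages)
print(sorted(result))
```

Keeping each correction as a separate stage is what lets the whole post-processing chain run in under a minute per case: each model solves one narrow, well-posed task rather than one network attempting the full 2D-to-segmented-3D mapping.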

BrainMT: A Hybrid Mamba-Transformer Architecture for Modeling Long-Range Dependencies in Functional MRI Data

Arunkumar Kannan, Martin A. Lindquist, Brian Caffo

arXiv preprint · Jun 27 2025
Recent advances in deep learning have made it possible to predict phenotypic measures directly from functional magnetic resonance imaging (fMRI) brain volumes, sparking significant interest in the neuroimaging community. However, existing approaches, primarily based on convolutional neural networks or transformer architectures, often struggle to model the complex relationships inherent in fMRI data, limited by their inability to capture long-range spatial and temporal dependencies. To overcome these shortcomings, we introduce BrainMT, a novel hybrid framework designed to efficiently learn and integrate long-range spatiotemporal attributes in fMRI data. Our framework operates in two stages: (1) a bidirectional Mamba block with a temporal-first scanning mechanism to capture global temporal interactions in a computationally efficient manner; and (2) a transformer block leveraging self-attention to model global spatial relationships across the deep features processed by the Mamba block. Extensive experiments on two large-scale public datasets, UKBioBank and the Human Connectome Project, demonstrate that BrainMT achieves state-of-the-art performance on both classification (sex prediction) and regression (cognitive intelligence prediction) tasks, outperforming existing methods by a significant margin. Our code and implementation details will be made publicly available at https://github.com/arunkumar-kannan/BrainMT-fMRI
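"Temporal-first scanning" refers to the order in which the 4D volume is serialized for the sequence model: all time points of one voxel before moving to the next voxel, rather than scanning space first. A small numpy sketch of one plausible reading of that ordering (the exact serialization in BrainMT may differ):

```python
import numpy as np

def temporal_first_sequence(fmri: np.ndarray) -> np.ndarray:
    """Serialize a 4D fMRI volume (T, X, Y, Z) voxel by voxel.

    For each voxel, all T time points are emitted consecutively before
    the next voxel, so temporal neighbours are adjacent in the sequence.
    """
    T = fmri.shape[0]
    # (T, X, Y, Z) -> (T, voxels) -> (voxels, T) -> flat sequence
    return fmri.reshape(T, -1).T.reshape(-1)

# Tiny example: T=2 time points over a 2x2x2 volume, values 0..15.
vol = np.arange(2 * 2 * 2 * 2).reshape(2, 2, 2, 2)
seq = temporal_first_sequence(vol)
print(seq[:4])
```

Placing a voxel's time points side by side means the linear-time Mamba scan sees temporal dependencies at short range, leaving the long-range spatial interactions to the subsequent self-attention block.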

Deep learning for hydrocephalus prognosis: Advances, challenges, and future directions: A review.

Huang J, Shen N, Tan Y, Tang Y, Ding Z

PubMed paper · Jun 27 2025
Diagnosis of hydrocephalus involves a careful review of the patient's history and a thorough neurological assessment. Traditional diagnosis has predominantly depended on physicians' professional judgment based on clinical experience, but with the advancement of precision medicine and individualized treatment, such experience-based methods are no longer sufficient to keep pace with current clinical requirements. To meet this shift, the medical community is actively pursuing data-driven intelligent diagnostic solutions. Building a prognosis prediction model for hydrocephalus has thus become a new focus, and intelligent prediction systems supported by deep learning offer new technical advantages for clinical diagnosis and treatment decisions. Over the past several years, deep learning algorithms have demonstrated conspicuous advantages in medical image analysis. Studies have revealed that convolutional neural networks can diagnose hydrocephalus from magnetic resonance imaging with accuracy reaching 90%, with sensitivity and specificity also better than those of traditional methods. With the extensive use of deep learning in medical technology, its successful use in modeling hydrocephalus prognosis has also drawn extensive attention and recognition from scholars. This review explores the application of deep learning in hydrocephalus diagnosis and prognosis, focusing on image-based, biochemical, and structured data models. Highlighting recent advancements, challenges, and future trajectories, the study emphasizes deep learning's potential to enhance personalized treatment and improve outcomes.

Regional Cortical Thinning and Area Reduction Are Associated with Cognitive Impairment in Hemodialysis Patients.

Chen HJ, Qiu J, Qi Y, Guo Y, Zhang Z, Qin H, Wu F, Chen F

PubMed paper · Jun 27 2025
Magnetic resonance imaging (MRI) has shown that patients with end-stage renal disease have decreased gray matter volume and density. However, cortical area and thickness in patients on hemodialysis are uncertain, and the relationship between patients' cognition and cortical alterations remains unclear. Thirty-six hemodialysis patients and 25 age- and sex-matched healthy controls were enrolled in this study and underwent brain MRI scans and neuropsychological assessments. The brain was divided into 68 regions according to the Desikan-Killiany atlas. Using FreeSurfer software, we analyzed between-group differences in the cortical area and thickness of each region. Machine learning-based classification was also used to differentiate hemodialysis patients from healthy individuals. The patients exhibited decreased cortical thickness in frontal and temporal regions, including the left bankssts, left lingual gyrus, left pars triangularis, bilateral superior temporal gyrus, and right pars opercularis, and decreased cortical area in the left rostral middle frontal gyrus, left superior frontal gyrus, right fusiform gyrus, right pars orbitalis, and right superior frontal gyrus. Decreased cortical thickness was associated with poorer scores on the neuropsychological tests and with increased uric acid and urea levels. On support vector machine analysis, the cortical thickness pattern differentiated patients from controls with 96.7% accuracy (97.5% sensitivity, 95.0% specificity, 97.5% precision, AUC: 0.983). Patients on hemodialysis exhibited decreased cortical area and thickness, which was associated with poorer cognition and uremic toxins.
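The region-wise group comparison underlying results like these can be sketched as z-scoring each patient-group regional thickness against the control distribution; negative z indicates thinning. This is a simplified illustrative analogue on synthetic data, not the study's actual FreeSurfer statistics:

```python
import numpy as np

def regionwise_thinning(patient_thickness, control_thickness):
    """Per-region z-scores of the patient group mean against controls.

    Rows are subjects, columns are atlas regions; strongly negative
    z-scores flag regions with cortical thinning.
    """
    mu = control_thickness.mean(axis=0)
    sd = control_thickness.std(axis=0, ddof=1)
    return (patient_thickness.mean(axis=0) - mu) / sd

rng = np.random.default_rng(1)
controls = rng.normal(2.5, 0.1, (25, 68))  # 25 controls, 68 Desikan-Killiany regions
patients = rng.normal(2.5, 0.1, (36, 68))  # 36 patients
patients[:, :10] -= 0.3                    # simulate thinning in 10 regions
z = regionwise_thinning(patients, controls)
print(z[:10].mean(), z[10:].mean())
```

The same per-region thickness vectors can then feed a support vector machine for patient-versus-control classification, as in the study.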

Machine learning-based radiomic nomogram from unenhanced computed tomography and clinical data predicts bowel resection in incarcerated inguinal hernia.

Li DL, Zhu L, Liu SL, Wang ZB, Liu JN, Zhou XM, Hu JL, Liu RQ

PubMed paper · Jun 27 2025
Early identification of bowel resection risks is crucial for patients with incarcerated inguinal hernia (IIH). However, the prompt detection of these risks remains a significant challenge. Advancements in radiomic feature extraction and machine learning algorithms have paved the way for innovative diagnostic approaches to assess IIH more effectively. To devise a sophisticated radiomic-clinical model to evaluate bowel resection risks in IIH patients, thereby enhancing clinical decision-making processes. This single-center retrospective study analyzed 214 IIH patients randomized into training (n = 161) and test (n = 53) sets (3:1). Radiologists segmented hernia sac-trapped bowel volumes of interest (VOIs) on computed tomography images. Radiomic features extracted from the VOIs generated Rad-scores, which were combined with clinical data to construct a nomogram. The nomogram's performance was evaluated against standalone clinical and radiomic models in both cohorts. A total of 1561 radiomic features were extracted from the VOIs. After dimensionality reduction, 13 radiomic features were used with eight machine learning algorithms to develop the radiomic model. The logistic regression algorithm was ultimately selected for its effectiveness, showing an area under the curve (AUC) of 0.828 [95% confidence interval (CI): 0.753-0.902] in the training set and 0.791 (95%CI: 0.668-0.915) in the test set. The comprehensive nomogram, incorporating clinical indicators, showed strong predictive capabilities for assessing bowel resection risks in IIH patients, with AUCs of 0.864 (95%CI: 0.800-0.929) and 0.800 (95%CI: 0.669-0.931) for the training and test sets, respectively. Decision curve analysis revealed the integrated model's superior performance over standalone clinical and radiomic approaches. This innovative radiomic-clinical nomogram proved effective in predicting bowel resection risks in IIH patients and substantially aided clinical decision-making.
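Structurally, a Rad-score is a weighted linear combination of the selected radiomic features, and the nomogram combines it with clinical indicators through a logistic model. A stdlib-only sketch of that arithmetic, with all coefficients and feature values made up for illustration (the study's fitted coefficients are not given in the abstract):

```python
import math

def rad_score(features, weights, intercept=0.0):
    """Linear combination of selected radiomic features."""
    return intercept + sum(w * f for w, f in zip(weights, features))

def nomogram_probability(rad, clinical, coef_rad, coef_clin, intercept):
    """Logistic model combining the Rad-score with clinical indicators."""
    z = intercept + coef_rad * rad + sum(c * x for c, x in zip(coef_clin, clinical))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values: 3 of the 13 selected features and 2 clinical indicators.
rs = rad_score([0.4, -1.2, 0.7], [0.8, -0.5, 1.1])
p = nomogram_probability(rs, clinical=[1.0, 0.0], coef_rad=1.5,
                         coef_clin=[0.6, -0.3], intercept=-1.0)
print(round(rs, 2), round(p, 3))
```

Because every term is additive on the log-odds scale, each feature's contribution can be read directly off the nomogram, which is what makes the logistic formulation attractive for bedside risk assessment.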