Automated Finite Element Modeling of the Lumbar Spine: A Biomechanical and Clinical Approach to Spinal Load Distribution and Stress Analysis.

Ahmadi M, Zhang X, Lin M, Tang Y, Engeberg ED, Hashemi J, Vrionis FD

PubMed · Jun 30 2025
Biomechanical analysis of the lumbar spine is vital for understanding load distribution and stress patterns under physiological conditions. Traditional finite element analysis (FEA) relies on time-consuming manual segmentation and meshing, leading to long runtimes and inconsistent accuracy; automating this process improves efficiency and reproducibility. This study introduces an automated FEA methodology for lumbar spine biomechanics, integrating deep learning-based segmentation with computational modeling to streamline the workflow from imaging to simulation. Medical imaging data were segmented using deep learning frameworks for vertebrae and intervertebral discs. Segmented structures were transformed into optimized surface meshes via Laplacian smoothing and decimation. Using the GIBBON library and FEBio, the FEA models incorporated cortical and cancellous bone, nucleus, annulus, cartilage, and ligaments. Ligament attachments were defined using spherical coordinate-based segmentation, and vertebral endplates were extracted via principal component analysis (PCA) for cartilage modeling. Simulations assessed stress, strain, and displacement under axial rotation, extension, flexion, and lateral bending. The automated pipeline cut model preparation time by 97.9%, from over 24 hours to 30 minutes and 49.48 seconds. Biomechanical responses aligned with experimental and traditional FEA data, showing high posterior element loads in extension and flexion, consistent ligament forces, and disc deformations. The approach enhanced reproducibility with minimal manual input. This automated methodology provides an efficient, accurate framework for lumbar spine biomechanics, eliminating the challenges of manual segmentation. It supports clinical diagnostics, implant design, and rehabilitation, advancing computational and patient-specific spinal studies. Rapid simulations enhance implant optimization and early detection of degenerative spinal issues, improving personalized treatment and research.
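Where the abstract mentions PCA-based endplate extraction, the following is a minimal sketch of the general idea, assuming a segmented vertebra is available as an (N, 3) array of surface vertices; the function name, the band fraction, and the smallest-variance-axis heuristic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_endplates(vertices: np.ndarray, band: float = 0.1):
    """Split a vertebral surface point cloud into superior/inferior
    endplate candidates via PCA (illustrative sketch).

    vertices: (N, 3) surface mesh vertex coordinates.
    band: fraction of the extent along the chosen axis kept at each end.
    """
    centered = vertices - vertices.mean(axis=0)
    # Classic PCA: eigen-decomposition of the covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    # Assumption: the smallest-variance axis approximates the
    # cranio-caudal direction, since vertebral bodies are wider than tall
    # (np.linalg.eigh sorts eigenvalues in ascending order).
    axis = eigvecs[:, 0]
    proj = centered @ axis
    lo, hi = proj.min(), proj.max()
    width = (hi - lo) * band
    inferior = vertices[proj <= lo + width]
    superior = vertices[proj >= hi - width]
    return superior, inferior
```

A real pipeline would verify the recovered axis against the scan orientation before labeling the two bands superior and inferior.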

Photon-counting micro-CT scanner for deep learning-enabled small animal perfusion imaging.

Allphin AJ, Nadkarni R, Clark DP, Badea CT

PubMed · Jun 27 2025
In this work, we introduce a benchtop, turntable photon-counting (PC) micro-CT scanner and highlight its application for dynamic small animal perfusion imaging.
Approach: The system builds on recently published hardware and now features a CdTe-based photon-counting detector (PCD). We validated its static spectral PC micro-CT imaging using conventional phantoms and assessed dynamic performance with a custom flow-configurable dual-compartment perfusion phantom. The phantom was scanned under varied flow conditions during injections of a low-molecular-weight iodinated contrast agent. In vivo mouse studies with identical injection settings demonstrated potential applications. A pretrained denoising CNN processed the large multi-energy, temporal datasets (20 timepoints × 4 energies × 3 spatial dimensions), which were reconstructed via weighted filtered back projection. A separate CNN, trained on simulated data, performed gamma variate-based 2D perfusion mapping, evaluated qualitatively in phantom and in vivo tests.
Main Results: Full five-dimensional reconstructions were denoised using a CNN in ~3% of the time required by iterative reconstruction, reducing noise in water at the highest energy threshold from 1206 HU to 86 HU. Decomposed iodine maps, which improved the contrast-to-noise ratio from 16.4 (in the lowest-energy CT images) to 29.4 (in the iodine maps), were used for perfusion analysis. The perfusion CNN outperformed pixelwise gamma variate fitting by ~33%, with a test-set error of 0.04 vs. 0.06 in blood flow index (BFI) maps, and quantified linear BFI changes in the phantom with a coefficient of determination of 0.98.
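For context on the pixelwise baseline the perfusion CNN is compared against, here is a minimal sketch of fitting the classic gamma variate bolus model to a single voxel's time-attenuation curve; the synthetic data, initial guesses, and bounds are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma variate bolus model: zero before arrival time t0, then a
    skewed peak shaped by alpha (shape) and beta (scale)."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

# Synthetic time-attenuation curve at one voxel (20 timepoints, as in the study).
t = np.arange(20, dtype=float)
rng = np.random.default_rng(0)
y = gamma_variate(t, A=5.0, t0=2.0, alpha=2.0, beta=1.5) + rng.normal(0, 0.1, t.size)

popt, _ = curve_fit(
    gamma_variate, t, y,
    p0=[1.0, 1.0, 1.0, 1.0],
    bounds=([0.0, 0.0, 0.1, 0.1], [np.inf, t.max(), 10.0, 10.0]),
)
A, t0, alpha, beta = popt  # perfusion indices such as BFI are then derived from the fit
```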
Significance: This work underscores the PC micro-CT scanner's utility for high-throughput small animal perfusion imaging, leveraging spectral PC micro-CT and iodine decomposition. It provides a versatile platform for preclinical vascular research and advanced, time-resolved studies of disease models and therapeutic interventions.

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health

Varvara Kalokyri, Nikolaos S. Tachos, Charalampos N. Kalantzopoulos, Stelios Sfakianakis, Haridimos Kondylakis, Dimitrios I. Zaridis, Sara Colantonio, Daniele Regge, Nikolaos Papanikolaou, The ProCAncer-I consortium, Konstantinos Marias, Dimitrios I. Fotiadis, Manolis Tsiknakis

arXiv preprint · Jun 27 2025
The increasing integration of Artificial Intelligence (AI) into health and biomedical systems necessitates robust frameworks for transparency, accountability, and ethical compliance. Existing frameworks often rely on human-readable, manual documentation, which limits scalability, comparability, and machine interpretability across projects and platforms. They also fail to provide a unique, verifiable identity for AI models that would ensure their provenance and authenticity across systems and use cases, limiting reproducibility and stakeholder trust. This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework that acts as a digital identity and verification tool for AI models. It captures the essential metadata needed to uniquely identify, verify, trace, and monitor AI models across their lifecycle, from data acquisition and preprocessing to model design, development, and deployment. An implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications. AIPassport automates metadata collection, ensures proper versioning, decouples results from source scripts, and integrates with various development environments. Its effectiveness is showcased through a lesion segmentation use case using data from the ProCAncer-I dataset, illustrating how the AI Model Passport enhances transparency, reproducibility, and regulatory readiness while reducing manual effort. This approach aims to set a new standard for fostering trust and accountability in AI-driven healthcare solutions and to serve as a basis for developing transparent, regulation-compliant AI systems across domains.
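As a concrete (if simplified) illustration of the passport idea, the sketch below derives a verifiable model identity from a hash of the serialized weights and bundles it with lifecycle metadata; the ModelPassport class and its field names are hypothetical and do not reflect the AIPassport schema.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelPassport:
    """Minimal machine-readable passport record (illustrative fields only)."""
    name: str
    version: str
    training_data: str      # dataset identifier or DOI
    preprocessing: list     # ordered preprocessing steps
    metrics: dict           # held-out evaluation results
    model_id: str = field(default="", init=False)

    def finalize(self, weights_path: str) -> None:
        # A verifiable identity: hash of the serialized model weights.
        with open(weights_path, "rb") as f:
            self.model_id = "sha256:" + hashlib.sha256(f.read()).hexdigest()

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

passport = ModelPassport(
    name="lesion-segmentation-unet",          # hypothetical model name
    version="1.2.0",
    training_data="ProCAncer-I (hypothetical subset identifier)",
    preprocessing=["N4 bias correction", "resample 1 mm iso", "z-score"],
    metrics={"dice": 0.87},                   # illustrative value
)
# passport.finalize("weights.pt"); print(passport.to_json())
```

Hashing the weights makes the identity tamper-evident: any retraining or fine-tuning produces a new model_id, so results can always be traced to an exact artifact.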

Lightweight Physics-Informed Zero-Shot Ultrasound Plane Wave Denoising

Hojat Asgariandehkordi, Mostafa Sharifzadeh, Hassan Rivaz

arXiv preprint · Jun 26 2025
Ultrasound Coherent Plane Wave Compounding (CPWC) enhances image contrast by combining echoes from multiple steered transmissions. While increasing the number of angles generally improves image quality, it drastically reduces the frame rate and can introduce blurring artifacts in fast-moving targets. Moreover, compounded images remain susceptible to noise, particularly when acquired with a limited number of transmissions. We propose a zero-shot denoising framework tailored for low-angle CPWC acquisitions that enhances contrast without relying on a separate training dataset. The method divides the available transmission angles into two disjoint subsets, each used to form a compounded image with a higher noise level. These compounded images are then used to train a deep model via a self-supervised residual learning scheme, enabling it to suppress incoherent noise while preserving anatomical structures. Because angle-dependent artifacts vary between the subsets while the underlying tissue response is similar, this physics-informed pairing allows the network to learn to disentangle the inconsistent artifacts from the consistent tissue signal. Unlike supervised methods, our model requires no domain-specific fine-tuning or paired data, making it adaptable across anatomical regions and acquisition setups. The entire pipeline supports efficient training at low computational cost thanks to a lightweight architecture comprising only two convolutional layers. Evaluations on simulation, phantom, and in vivo data demonstrate superior contrast enhancement and structure preservation compared to both classical and deep learning-based denoising methods.
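A minimal sketch of this pairing idea, assuming the two sub-compounded images are already beamformed as single-channel tensors; the two-convolutional-layer network matches the architecture size described above, but the channel count, learning rate, and iteration count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Lightweight residual denoiser (two convolutional layers):
    the body estimates the noise, which is subtracted from the input."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x - self.body(x)  # residual learning

# Zero-shot training pair from ONE acquisition: compound the available
# angles as two disjoint subsets (random tensors stand in for real data).
img_a = torch.randn(1, 1, 128, 128)  # e.g., compound of odd-indexed angles
img_b = torch.randn(1, 1, 128, 128)  # e.g., compound of even-indexed angles

net = TinyDenoiser()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):  # trained on this single pair only (zero-shot)
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(img_a), img_b)
    loss.backward()
    opt.step()
```

The loss can only be reduced by keeping what the two compounds share (the tissue signal), since the angle-dependent noise differs between input and target.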

Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.

Chabi N, Illanes A, Beuing O, Behme D, Preim B, Saalfeld S

PubMed · Jun 26 2025
The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies like aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses these calibration challenges and suggests leveraging interventional devices with radio-opaque markers to optimize the C-arm geometry. We propose an online calibration method using image-specific features derived from interventional devices such as guidewires and catheters (in the remainder of this paper, the term "catheter" refers to both catheters and guidewires). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding on a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing to remove false positives that integrates vessel maps, manual correction, and identification markers. An interpolation step fills gaps along the catheter. Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and a precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 mm to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance. This study explores using interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces the 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.
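To make the candidate-detection step concrete, here is a minimal sketch of thresholding a weighted sum of curvature-, derivative-, and high-frequency-like responses on a 2D angiographic frame; the specific indicator operators, weights, and percentile threshold are illustrative assumptions rather than the paper's exact features.

```python
import numpy as np
from scipy import ndimage

def candidate_mask(img: np.ndarray, weights=(0.4, 0.3, 0.3), pct=99.0):
    """Candidate pixels for elongated devices: threshold a weighted sum
    of three normalized indicator maps (all choices here are illustrative)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    deriv = np.hypot(gx, gy)                                  # derivative indicator
    curv = np.abs(ndimage.laplace(img))                       # crude curvature proxy
    highf = np.abs(img - ndimage.gaussian_filter(img, 3.0))   # high-frequency residue

    def norm(a):
        return (a - a.min()) / (np.ptp(a) + 1e-8)

    score = (weights[0] * norm(curv)
             + weights[1] * norm(deriv)
             + weights[2] * norm(highf))
    return score > np.percentile(score, pct)
```

In the paper's pipeline this mask would only seed the ensemble classifier; the false-positive removal and interpolation steps come after.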

Development and in silico imaging trial evaluation of a deep-learning-based transmission-less attenuation compensation method for DaT SPECT

Zitong Yu, Md Ashequr Rahman, Zekun Li, Chunwei Ying, Hongyu An, Tammie L. S. Benzinger, Richard Laforest, Jingqin Luo, Scott A. Norris, Abhinav K. Jha

arXiv preprint · Jun 25 2025
Quantitative measures of dopamine transporter (DaT) uptake in the caudate, putamen, and globus pallidus derived from DaT single-photon emission computed tomography (SPECT) images are being investigated as biomarkers to diagnose, assess disease status, and track the progression of Parkinsonism. Reliable quantification from DaT-SPECT images requires performing attenuation compensation (AC), typically with a separate X-ray CT scan. Such CT-based AC (CTAC) has multiple challenges, a key one being the non-availability of an X-ray CT component on many clinical SPECT systems. Even when a CT is available, the additional scan leads to increased radiation dose, costs, and complexity, potential quantification errors due to SPECT-CT misalignment, and higher training and regulatory requirements. To overcome the need for a CT scan for AC in DaT SPECT, we propose a deep learning (DL)-based transmission-less AC method for DaT-SPECT (DaT-CTLESS). An in silico imaging trial, titled ISIT-DaT, was designed to evaluate the performance of DaT-CTLESS on the regional uptake quantification task. We observed that DaT-CTLESS yielded a significantly higher correlation with CTAC than that between uniform AC (UAC) and CTAC on the regional DaT uptake quantification task. Further, DaT-CTLESS had excellent agreement with CTAC on this task, significantly outperformed UAC in distinguishing patients with normal versus reduced putamen specific binding ratio (SBR), yielded good generalizability across two scanners, was generally insensitive to intra-regional uptake heterogeneity, demonstrated good repeatability, exhibited robust performance even as the size of the training data was reduced, and generally outperformed the other considered DL methods on the task of quantifying regional uptake across different training dataset sizes. These results provide strong motivation for further clinical evaluation of DaT-CTLESS.
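For readers unfamiliar with the endpoint, the specific binding ratio compares striatal uptake with a non-specific reference region; below is a minimal sketch of the computation over a reconstructed (attenuation-compensated) volume, assuming segmentation masks are available. The occipital reference in the usage line is a common convention, not a detail stated in the abstract.

```python
import numpy as np

def specific_binding_ratio(volume: np.ndarray,
                           region_mask: np.ndarray,
                           reference_mask: np.ndarray) -> float:
    """SBR = (mean uptake in the target region - mean uptake in a
    non-specific reference region) / mean uptake in the reference region."""
    target = volume[region_mask].mean()
    reference = volume[reference_mask].mean()
    return (target - reference) / reference

# Usage sketch: putamen SBR from a reconstructed DaT-SPECT volume.
# sbr = specific_binding_ratio(recon, putamen_mask, occipital_mask)
```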

Streamlining the annotation process by radiologists of volumetric medical images with few-shot learning.

Ryabtsev A, Lederman R, Sosna J, Joskowicz L

PubMed · Jun 25 2025
The burden of radiologists' manual annotations limits robust deep learning in volumetric medical imaging. While supervised methods excel with large annotated datasets, few-shot learning performs well for large structures but struggles with small ones, such as lesions. This paper describes a novel method that leverages the advantages of both few-shot learning models and fully supervised models while reducing the cost of manual annotation. Our method takes as input a small dataset of labeled scans and a large dataset of unlabeled scans and outputs a validated labeled dataset used to train a supervised model (nnU-Net). The estimated correction effort is reduced by having the radiologist correct a subset of the scan labels computed by a few-shot learning model (UniverSeg). The method uses an optimized support set of scan slice patches and prioritizes the resulting labeled scans that require the least correction. This process is repeated for the remaining unannotated scans until satisfactory performance is obtained. We validated our method on liver, lung, and brain lesions in CT and MRI scans (375 scans, 5,933 lesions). Relative to manual annotation from scratch, it significantly reduces the estimated lesion detection correction effort, by 34% for missed lesions and 387% for wrongly identified lesions, with 130% fewer lesion contour corrections and 424% fewer pixels to correct in the lesion contours. Our method effectively reduces the radiologist's annotation effort for small structures and produces high-quality annotated datasets sufficient to train deep learning models. The method is generic and can be applied to a variety of lesions in various organs imaged by different modalities.
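The iterative loop described above might look like the skeleton below; every helper (build_support_set, estimated_correction_effort, radiologist_correct, performance_satisfactory) is a hypothetical placeholder for a step named in the abstract, not the authors' API.

```python
# Skeleton of the iterative annotation loop; all helpers are hypothetical
# placeholders for steps named in the abstract.

def annotate_dataset(labeled, unlabeled, few_shot_model, batch_size=20):
    validated = list(labeled)                   # (scan, verified_label) pairs
    while unlabeled:
        support = build_support_set(validated)  # optimized slice-patch support set
        proposals = [(scan, few_shot_model.predict(scan, support))
                     for scan in unlabeled]
        # Prioritize scans whose predicted labels need the least correction.
        proposals.sort(key=lambda p: estimated_correction_effort(p[1]))
        for scan, label in proposals[:batch_size]:
            validated.append((scan, radiologist_correct(scan, label)))
            unlabeled.remove(scan)
        if performance_satisfactory(validated):  # e.g., held-out Dice plateau
            break
    return validated  # used to train the supervised model (nnU-Net)
```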

Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE).

Anibal JT, Huth HB, Boeken T, Daye D, Gichoya J, Muñoz FG, Chapiro J, Wood BJ, Sze DY, Hausegger K

PubMed · Jun 25 2025
As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research and clinical practice, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed journals. This report introduces comprehensive standards and an evaluation checklist (iCARE) that covers the application of modern AI methods in IR-specific contexts. The iCARE checklist encompasses the full "code-to-clinic" pipeline of AI development, including dataset curation, pre-training, task-specific training, explainability, privacy protection, bias mitigation, reproducibility, and model deployment. The iCARE checklist aims to support the development of safe, generalizable technologies for enhancing IR workflows, the delivery of care, and patient outcomes.

Qualitative and quantitative analysis of functional cardiac MRI using a novel compressed SENSE sequence with artificial intelligence image reconstruction.

Konstantin K, Christian LM, Lenhard P, Thomas S, Robert T, Luisa LI, David M, Matej G, Kristina S, Philip NC

PubMed · Jun 19 2025
To evaluate the feasibility of combining Compressed SENSE (CS) with a newly developed deep learning-based algorithm (CS-AI) using a convolutional neural network to accelerate balanced steady-state free precession (bSSFP) sequences for cardiac magnetic resonance imaging (MRI). Thirty healthy volunteers were examined prospectively on a 3 T MRI scanner. We acquired CINE bSSFP sequences for the short-axis (SA, multi-breath-hold) and four-chamber (4CH) views of the heart. For each sequence, four different CS accelerations and CS-AI reconstructions with three different denoising levels (CS-AI medium, CS-AI strong, and CS-AI complete) were used. Cardiac left ventricular (LV) function (i.e., ejection fraction, end-diastolic volume, end-systolic volume, and LV mass) was analyzed using the SA sequences for every CS factor and each AI level. Two readers, blinded to the acceleration and denoising levels, evaluated all sequences for image quality and artifacts using a 5-point Likert scale. Friedman and Dunn's multiple comparison tests were used for the qualitative evaluation; ANOVA and the Tukey-Kramer test were used for the quantitative metrics. Scan time could be decreased by up to 57% for the SA sequences and up to 56% for the 4CH sequences compared with the clinically established sequences consisting of SA-CS3 and 4CH-CS2.5 (SA-CS3: 112 s vs. SA-CS6: 48 s; 4CH-CS2.5: 9 s vs. 4CH-CS5: 4 s; p < 0.001). LV functional analysis was not compromised by using accelerated MRI sequences combined with CS-AI reconstructions (all p > 0.05). The image quality loss and artifact increase accompanying higher acceleration levels could be entirely compensated by CS-AI post-processing, with the best image quality achieved by combining the highest CS factor with strong AI (SA-CINE: Coef.: 1.31, 95% CI: 1.05-1.58; 4CH-CINE: Coef.: 1.18, 95% CI: 1.05-1.58; both p < 0.001) and with complete AI for the artifact score (SA-CINE: Coef.: 1.33, 95% CI: 1.06-1.60; 4CH-CINE: Coef.: 1.31, 95% CI: 0.86-1.77; both p < 0.001). Combining CS sequences with AI-based image reconstruction for denoising significantly decreases scan time in cardiac imaging while upholding the accuracy of LV functional analysis and delivering stable outcomes for image quality and artifact reduction. This integration presents a promising advancement in cardiac MRI, offering improved efficiency without compromising diagnostic quality.
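For the qualitative comparison, a Friedman test over repeated Likert ratings might be run as sketched below; the scores are synthetic and only the choice of test mirrors the study.

```python
import numpy as np
from scipy import stats

# Synthetic Likert image-quality scores: 30 volunteers x 4 reconstruction
# settings (values are illustrative, not the study's data).
rng = np.random.default_rng(1)
scores = rng.integers(3, 6, size=(30, 4))

# Friedman test for repeated ordinal measurements across settings;
# post-hoc pairwise comparisons (Dunn's test) would follow a significant result.
stat, p = stats.friedmanchisquare(*(scores[:, j] for j in range(4)))
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")
```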

Quality appraisal of radiomics-based studies on chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS).

Gitto S, Cuocolo R, Klontzas ME, Albano D, Messina C, Sconfienza LM

PubMed · Jun 18 2025
To assess the methodological quality of radiomics-based studies on bone chondrosarcoma using the METhodological RadiomICs Score (METRICS) and the Radiomics Quality Score (RQS). A literature search was conducted in the EMBASE and PubMed databases for research papers published up to July 2024 and focused on radiomics in bone chondrosarcoma, with no restrictions regarding the study aim. Three readers independently evaluated study quality using METRICS and RQS, and baseline study characteristics were extracted. Inter-reader reliability was calculated using the intraclass correlation coefficient (ICC). Out of 68 identified papers, 18 were included in the analysis. Radiomics research was aimed at lesion classification (n = 15), outcome prediction (n = 2), or both (n = 1). Study design was retrospective in all papers. Most studies employed MRI (n = 12), CT (n = 3), or both (n = 1). METRICS and RQS adherence rates ranged between 37.3-94.8% and 2.8-44.4%, respectively. Excellent inter-reader reliability was found for both METRICS (ICC = 0.961) and RQS (ICC = 0.975). Among the limitations of the evaluated studies, the absence of prospective studies and deep learning-based analyses was highlighted, along with limited adherence to radiomics guidelines and limited use of external testing datasets and open science data. METRICS and RQS are reproducible quality assessment tools, with the former showing higher adherence rates in studies on chondrosarcoma. METRICS is also better suited for assessing papers with a retrospective design, which is often chosen in musculoskeletal oncology due to the low prevalence of bone sarcomas. Employing reproducible quality scoring systems, especially METRICS, is highly recommended when designing radiomics-based studies on chondrosarcoma, to improve methodological quality and facilitate clinical translation; the low scientific and reporting quality of existing radiomics studies on chondrosarcoma is the main barrier to such translation.
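The inter-reader reliability figures above are intraclass correlations; a minimal sketch of computing them on synthetic long-format scores with the pingouin library follows (the data frame is illustrative, not the study's ratings).

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format quality scores: 18 studies rated by 3 readers.
rng = np.random.default_rng(0)
base = rng.uniform(40, 90, 18)  # per-study "true" quality level
df = pd.DataFrame({
    "study":  np.repeat(np.arange(18), 3),
    "reader": np.tile(["R1", "R2", "R3"], 18),
    "score":  np.repeat(base, 3) + rng.normal(0, 3, 54),  # reader noise
})

# ICC for inter-reader reliability (pingouin reports all ICC variants).
icc = pg.intraclass_corr(data=df, targets="study", raters="reader",
                         ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```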
