
Towards a zero-shot low-latency navigation for open surgery augmented reality applications.

Schwimmbeck M, Khajarian S, Auer C, Wittenberg T, Remmele S

PubMed | Aug 5, 2025
Augmented reality (AR) enhances surgical navigation by superimposing visible anatomical structures with three-dimensional virtual models using head-mounted displays (HMDs). In particular, interventions such as open liver surgery can benefit from AR navigation, as it aids in identifying and distinguishing tumors and risk structures. However, there is a lack of automatic and markerless methods that are robust against real-world challenges, such as partial occlusion and organ motion. We introduce a novel multi-device approach for automatic live navigation in open liver surgery that enhances the visualization and interaction capabilities of a HoloLens 2 HMD through precise and reliable registration using an Intel RealSense RGB-D camera. The intraoperative RGB-D segmentation and the preoperative CT data are utilized to register a virtual liver model to the target anatomy. An AR-prompted Segment Anything Model (SAM) enables robust segmentation of the liver in situ without the need for additional training data. To mitigate algorithmic latency, Double Exponential Smoothing (DES) is applied to forecast registration results. We conducted a phantom study for open liver surgery, investigating various scenarios of liver motion, viewpoints, and occlusion. The mean registration errors (8.31 mm-18.78 mm TRE) are comparable to those reported in prior work, while our approach demonstrates high success rates even for high occlusion factors and strong motion. Using forecasting, we bypassed the algorithmic latency of 79.8 ms per frame, with median forecasting errors below 2 mm and 1.5 degrees between quaternions. To our knowledge, this is the first work to approach markerless in situ visualization by combining a multi-device method with forecasting and a foundation model for segmentation and tracking. This enables a more reliable and precise AR registration of surgical targets with low latency.
Our approach can be applied to other surgical applications and AR hardware with minimal effort.
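The Double Exponential Smoothing step named in the abstract is a standard forecasting technique. A minimal sketch of Holt's formulation applied to a single pose parameter follows; the smoothing constants and the one-step horizon are illustrative assumptions, not values from the paper:

```python
# Holt's double exponential smoothing (DES): smooth a stream of pose
# parameters and extrapolate the trend to bridge processing latency.
# alpha/beta are illustrative defaults, not the paper's tuned values.

def des_forecast(samples, alpha=0.5, beta=0.5, steps_ahead=1):
    """Return the DES extrapolation `steps_ahead` samples past the stream."""
    level, trend = samples[0], 0.0
    for x in samples[1:]:
        prev_level = level
        # Level tracks the smoothed value, trend tracks its rate of change.
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + steps_ahead * trend
```

For a steadily drifting translation component, the extrapolated value stands in for the registration result that is still being computed, hiding the per-frame algorithmic delay.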

Unsupervised learning-based perfusion maps for temporally truncated CT perfusion imaging.

Tung CH, Li ZY, Huang HM

PubMed | Aug 5, 2025

Computed tomography perfusion (CTP) imaging is a rapid diagnostic tool for acute stroke but is less robust when tissue time-attenuation curves are truncated. This study proposes an unsupervised learning method for generating perfusion maps from truncated CTP images. Real brain CTP images were artificially truncated to 15% and 30% of the original scan time. Perfusion maps of complete and truncated CTP images were calculated using the proposed method and compared with standard singular value decomposition (SVD), tensor total variation (TTV), nonlinear regression (NLR), and spatio-temporal perfusion physics-informed neural network (SPPINN).
The NLR method yielded many perfusion values outside physiological ranges, indicating a lack of robustness. The proposed method did not improve the estimation of cerebral blood flow compared to the SVD and TTV methods, but it reduced the effect of truncation on the estimation of cerebral blood volume, with a relative difference of 15.4% in the infarcted region for 30% truncation (20.7% for SVD and 19.4% for TTV). The proposed method also showed better resistance to 30% truncation for mean transit time, with a relative difference of 16.6% in the infarcted region (25.9% for SVD and 26.2% for TTV). Compared to the SPPINN method, the proposed method had similar responses to truncation in gray and white matter, but was less sensitive to truncation in the infarcted region. These results demonstrate the feasibility of using unsupervised learning to generate perfusion maps from CTP images and improve robustness under truncation scenarios.
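The SVD baseline that the study compares against is the classical deconvolution of the tissue curve with the arterial input function (AIF). A hedged numpy sketch follows, assuming the simple lower-triangular (non-delay-corrected) formulation and a 20% singular-value cutoff, which are common defaults rather than values from this study:

```python
import numpy as np

def svd_residue(aif, tissue, dt=1.0, threshold=0.2):
    """Recover the flow-scaled residue function F*R(t) for one voxel by
    truncated-SVD deconvolution of the tissue curve with the AIF."""
    n = len(aif)
    # Lower-triangular Toeplitz convolution matrix built from the AIF.
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            A[i, j] = aif[i - j] * dt
    U, s, Vt = np.linalg.svd(A)
    # Regularize by zeroing singular values below a fraction of the largest.
    s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

def perfusion_values(aif, tissue, dt=1.0, threshold=0.2):
    """CBF from the residue peak, CBV from the area ratio, and
    MTT from the central volume principle (MTT = CBV / CBF)."""
    r = svd_residue(aif, tissue, dt, threshold)
    cbf = r.max()
    cbv = tissue.sum() / aif.sum()
    return cbf, cbv, cbv / cbf
```

Truncating the tissue time-attenuation curve shortens `tissue` and `A`, which is exactly the regime in which this deconvolution degrades and the proposed unsupervised method is claimed to be more robust.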

Automated ultrasound system ARTHUR V.2.0 with AI analysis DIANA V.2.0 matches expert rheumatologist in hand joint assessment of rheumatoid arthritis patients.

Frederiksen BA, Hammer HB, Terslev L, Ammitzbøll-Danielsen M, Savarimuthu TR, Weber ABH, Just SA

PubMed | Aug 5, 2025
To evaluate the agreement and repeatability of an automated robotic ultrasound system (ARTHUR V.2.0) combined with an AI model (DIANA V.2.0) in assessing synovial hypertrophy (SH) and Doppler activity in rheumatoid arthritis (RA) patients, using an expert rheumatologist's assessment as the reference standard. 30 RA patients underwent two consecutive ARTHUR V.2.0 scans and rheumatologist assessment of 22 hand joints, with the rheumatologist blinded to the automated system's results. Images were scored for SH and Doppler by DIANA V.2.0 using the EULAR-OMERACT scale (0-3). Agreement was evaluated by weighted Cohen's kappa, percent exact agreement (PEA), percent close agreement (PCA), and binary outcomes using Global OMERACT-EULAR Synovitis Scoring (healthy ≤1 vs diseased ≥2). Comparisons included intra-robot repeatability and agreement with the expert rheumatologist and a blinded independent assessor. ARTHUR successfully scanned 564 out of 660 joints, an overall success rate of 85.5%. Intra-robot agreement for SH: PEA 63.0%, PCA 93.0%, binary 90.5% (kappa 0.54); for Doppler: PEA 74.8%, PCA 93.7%, binary 88.1% (kappa 0.49). Agreement between ARTHUR+DIANA and the rheumatologist: SH (PEA 57.9%, PCA 92.9%, binary 87.3%, kappa 0.38); Doppler (PEA 77.3%, PCA 94.2%, binary 91.2%, kappa 0.44); and with the independent assessor: SH (PEA 49.0%, PCA 91.2%, binary 80.0%, kappa 0.39); Doppler (PEA 62.6%, PCA 94.4%, binary 88.1%, kappa 0.48). ARTHUR V.2.0 and DIANA V.2.0 demonstrated repeatability on par with intra-expert agreement reported in the literature and showed encouraging agreement with human assessors, though further refinement is needed to optimise performance across specific joints.
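The four agreement statistics reported here can be computed directly from paired 0-3 scores. The sketch below is illustrative and assumes linearly weighted kappa (weight |i−j| for a disagreement of i vs. j) on the EULAR-OMERACT scale; the study does not state which weighting it used:

```python
import numpy as np

def agreement_metrics(a, b, n_cat=4):
    """PEA, PCA (within one grade), binary agreement (healthy 0-1 vs
    diseased 2-3), and linearly weighted Cohen's kappa for paired scores."""
    a, b = np.asarray(a), np.asarray(b)
    pea = np.mean(a == b)
    pca = np.mean(np.abs(a - b) <= 1)
    binary = np.mean((a >= 2) == (b >= 2))
    # Weighted kappa: penalize disagreements by their distance |i - j|.
    w = np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat)))
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
    kappa = 1 - (w * obs).sum() / (w * exp).sum()
    return pea, pca, binary, kappa
```

PCA being much higher than PEA, as in the reported results, simply means most disagreements are off by a single grade.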

Multi-Center 3D CNN for Parkinson's disease diagnosis and prognosis using clinical and T1-weighted MRI data.

Basaia S, Sarasso E, Sciancalepore F, Balestrino R, Musicco S, Pisano S, Stankovic I, Tomic A, Micco R, Tessitore A, Salvi M, Meiburger KM, Kostic VS, Molinari F, Agosta F, Filippi M

PubMed | Aug 5, 2025
Parkinson's disease (PD) presents challenges in early diagnosis and progression prediction. Recent advancements in machine learning, particularly convolutional neural networks (CNNs), show promise in enhancing diagnostic accuracy and prognostic capabilities using neuroimaging data. The aims of this study were to: (i) develop a 3D-CNN based on MRI to distinguish controls and PD patients and (ii) employ the CNN to predict the progression of PD. Three cohorts were selected: 86 mild and 62 moderate-to-severe PD patients plus 60 controls; 14 mild-PD patients and 14 controls from the Parkinson's Progression Markers Initiative database; and 38 de novo mild-PD patients and 38 controls. All participants underwent MRI scans and clinical evaluation at baseline and over 2 years. PD subjects were classified into two clusters of different progression using k-means clustering based on baseline and follow-up UPDRS-III scores. A 3D-CNN was built and tested on PD patients and controls, with binary classifications: controls vs moderate-to-severe PD, controls vs mild-PD, and the two clusters of PD progression. The effect of transfer learning was also tested. The CNN effectively differentiated moderate-to-severe PD from controls (74% accuracy) using MRI data alone. Transfer learning significantly improved performance in distinguishing mild-PD from controls (64% accuracy). For predicting disease progression, the model achieved over 70% accuracy by combining MRI and clinical data. Brain regions most influential in the CNN's decisions were visualized. The CNN, integrating multimodal data and transfer learning, provides encouraging results toward early-stage classification and progression monitoring in PD. Its explainability through activation maps offers potential for clinical application in early diagnosis and personalized monitoring.
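The k-means step that defines the two progression clusters can be sketched with plain Lloyd's iterations on (baseline, follow-up) UPDRS-III pairs. The initialization heuristic (lowest- and highest-burden patients) is an assumption for determinism, not the paper's choice:

```python
import numpy as np

def progression_clusters(scores, n_iter=50):
    """Lloyd's k-means with k=2 on paired (baseline, follow-up) UPDRS-III
    scores. Initial centers: the lowest- and highest-burden patients."""
    X = np.asarray(scores, dtype=float)
    centers = X[[X.sum(axis=1).argmin(), X.sum(axis=1).argmax()]]
    for _ in range(n_iter):
        # Assign each patient to the nearest center, then recenter.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

The resulting labels (e.g. slow vs. fast progressors) then serve as the binary target for the progression-prediction CNN.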

Are Vision-xLSTM-embedded U-Nets better at segmenting medical images?

Dutta P, Bose S, Roy SK, Mitra S

PubMed | Aug 5, 2025
The development of efficient segmentation strategies for medical images has evolved from its initial dependence on Convolutional Neural Networks (CNNs) to the current investigation of hybrid models that combine CNNs with Vision Transformers (ViTs). There is an increasing focus on developing architectures that are both high-performing and computationally efficient, capable of being deployed on remote systems with limited resources. Although transformers can capture global dependencies in the input space, they incur high computational and storage costs. The objective of this research is to propose that Vision Extended Long Short-Term Memory (Vision-xLSTM) forms an appropriate backbone for medical image segmentation, offering excellent performance with reduced computational costs. This study investigates the integration of CNNs with Vision-xLSTM by introducing the novel U-VixLSTM. The Vision-xLSTM blocks capture the temporal and global relationships within the patches extracted from the CNN feature maps. The convolutional feature reconstruction path upsamples the output volume from the Vision-xLSTM blocks to produce the segmentation output. The U-VixLSTM exhibits superior performance compared to state-of-the-art networks on the publicly available Synapse, ISIC, and ACDC datasets. The findings suggest that U-VixLSTM is a promising alternative to ViTs for medical image segmentation, delivering effective performance without substantial computational burden. This makes it feasible for deployment in healthcare environments with limited resources for faster diagnosis. Code provided: https://github.com/duttapallabi2907/U-VixLSTM.

Skin lesion segmentation: A systematic review of computational techniques, tools, and future directions.

Sharma AL, Sharma K, Ghosal P

PubMed | Aug 5, 2025
Skin lesion segmentation is a highly sought-after research topic in medical image processing that may help in the early diagnosis of skin diseases. Early detection of skin diseases like melanoma can decrease the mortality rate by 95%. Distinguishing lesions from healthy skin through skin image segmentation is a critical step. Various factors, such as the color, size, and shape of the skin lesion, the presence of hair, and other noise, pose challenges in segmenting a lesion from healthy skin. Hence, the effectiveness of the segmentation technique utilized is vital for precise disease diagnosis and treatment planning. This review explores and summarizes the latest advancements in skin lesion segmentation techniques and their state-of-the-art methods from 2018 to 2025. It also covers crucial information, including input datasets, pre-processing, augmentation, method configuration, loss functions, hyperparameter settings, and performance metrics. The review addresses the primary challenges encountered in skin lesion segmentation from images and comprehensively compares state-of-the-art techniques. Researchers in this field will find this review compelling due to its insights on skin lesion segmentation, its methodological details, and its analysis of the encouraging results of state-of-the-art methods.

Controllable Mask Diffusion Model for medical annotation synthesis with semantic information extraction.

Heo C, Jung J

PubMed | Aug 5, 2025
Medical segmentation, a prominent task in medical image analysis utilizing artificial intelligence, plays a crucial role in computer-aided diagnosis and depends heavily on the quality of the training data. However, the availability of sufficient data is constrained by strict privacy regulations associated with medical data. To mitigate this issue, research on data augmentation has gained significant attention. Medical segmentation tasks require paired datasets consisting of medical images and annotation images, also known as mask images, which represent lesion areas or radiological information within the medical images. Consequently, it is essential to apply data augmentation to both image types. This study proposes a Controllable Mask Diffusion Model, a novel approach capable of controlling and generating new masks. This model leverages the binary structure of the mask to extract semantic information, namely, the mask's size, location, and count, which is then applied as multi-conditional input to a diffusion model via a regressor. Through the regressor, newly generated masks conform to the input semantic information, thereby enabling input-driven controllable generation. Additionally, a technique that analyzes correlation within semantic information was devised for large-scale data synthesis. The generative capacity of the proposed model was evaluated against real datasets, and the model's ability to control and generate new masks based on previously unseen semantic information was confirmed. Furthermore, the practical applicability of the model was demonstrated by augmenting the data with the generated data, applying it to segmentation tasks, and comparing the performance with and without augmentation. Additionally, experiments were conducted on single-label and multi-label masks, yielding superior results for both types. This demonstrates the potential applicability of this study to various areas within the medical field.
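The three conditioning signals named above (size, location, count) can be read straight off a binary mask. The sketch below assumes the natural definitions (foreground pixel count, foreground centroid, number of connected components); the paper's exact encodings may differ:

```python
import numpy as np
from scipy import ndimage

def mask_semantics(mask):
    """Extract size, location, and count from a binary annotation mask,
    as multi-conditional input for a mask-generating diffusion model."""
    labeled, count = ndimage.label(mask)   # connected lesion components
    size = int(mask.sum())                 # total foreground pixel count
    ys, xs = np.nonzero(mask)
    location = (float(ys.mean()), float(xs.mean())) if size else None
    return {"count": int(count), "size": size, "location": location}
```

Feeding such a tuple through the regressor described above is what lets a requested (size, location, count) steer the generated mask.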

Nutritional impact of leucine-enriched supplements: evaluating protein type through artificial intelligence (AI)-augmented muscle ultrasonography in hypercaloric, hyperproteic support.

López Gómez JJ, Gutiérrez JG, Jauregui OI, Cebriá Á, Asensio LE, Martín DP, Velasco PF, Pérez López P, Sahagún RJ, Bargues DR, Godoy EJ, de Luis Román DA

PubMed | Aug 5, 2025
Malnutrition adversely affects physical function and body composition in patients with chronic diseases. Leucine supplementation has shown benefits in improving body composition and clinical outcomes. This study aimed to evaluate the effects of a leucine-enriched oral nutritional supplement (ONS) on the nutritional status of patients at risk of malnutrition. This prospective observational study followed two cohorts of malnourished patients receiving personalized nutritional interventions over 3 months. One group received a leucine-enriched oral supplement (20% protein, 100% whey, 3 g leucine), while the other received a standard supplement (hypercaloric and normo-hyperproteic) with mixed protein sources. Nutritional status was assessed at baseline and after 3 months using anthropometry, bioelectrical impedance analysis, AI-assisted muscle ultrasound, and handgrip strength. A total of 142 patients were included (76 Leucine-ONS, 66 Standard-ONS), mostly women (65.5%), with a mean age of 62.00 (18.66) years. Malnutrition was present in 90.1% and 34.5% had sarcopenia. Cancer was the most common condition (30.3%). The Leucine-ONS group showed greater improvements in phase angle (+2.08% vs. -1.57%; p=0.02) and rectus femoris thickness (+1.72% vs. -5.89%; p=0.03). Multivariate analysis confirmed associations between Leucine-ONS and improved phase angle (OR=2.41; 95%CI: 1.18-4.92; p=0.02) and reduced intramuscular fat (OR=2.24; 95%CI: 1.13-4.46; p=0.02). Leucine-enriched ONS significantly improved phase angle and muscle thickness compared to standard ONS, supporting its role in enhancing body composition in malnourished patients. These results must be interpreted in the context of the observational design of the study, the heterogeneity of the comparison groups, and the short duration of the intervention. Further randomized controlled trials are needed to confirm these results and assess long-term clinical and functional outcomes.
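The phase angle endpoint reported above is a standard bioelectrical impedance quantity, derived from resistance R and reactance Xc (typically measured at 50 kHz) as PhA = arctan(Xc/R) × 180/π. A one-line sketch with illustrative input values:

```python
import math

def phase_angle(resistance_ohm, reactance_ohm):
    """Bioimpedance phase angle in degrees: arctan(Xc / R) * 180 / pi.
    Higher values generally reflect better cell-membrane integrity."""
    return math.degrees(math.atan(reactance_ohm / resistance_ohm))
```

A relative change such as the +2.08% reported for the Leucine-ONS group would be computed on these degree values between baseline and 3 months.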

Altered effective connectivity in patients with drug-naïve first-episode, recurrent, and medicated major depressive disorder: a multi-site fMRI study.

Dai P, Huang K, Hu T, Chen Q, Liao S, Grecucci A, Yi X, Chen BT

PubMed | Aug 5, 2025
Major depressive disorder (MDD) has traditionally been diagnosed through subjective and inconsistent clinical assessments. Resting-state functional magnetic resonance imaging (rs-fMRI) with connectivity analysis has been valuable for identifying neural correlates in patients with MDD, yet most studies rely on single sites and small sample sizes. This study utilized large-scale, multi-site rs-fMRI data from the Rest-meta-MDD consortium to assess effective connectivity in patients with MDD and its subtypes: drug-naïve first-episode (FEDN), recurrent (RMDD), and medicated (MMDD). To mitigate site-related variability, the ComBat algorithm was applied, and multivariate linear regression was used to control for age and gender effects. A random forest classification model was developed to identify the most predictive features. Nested five-fold cross-validation was used to assess model performance. The model effectively distinguished the FEDN subtype from the healthy control (HC) group, achieving 90.13% accuracy and 96.41% AUC. However, classification performance for RMDD vs. FEDN and MMDD vs. FEDN was lower, suggesting that differences between the subtypes were less pronounced than differences between the patients with MDD and the HC group. Patients with RMDD exhibited more extensive connectivity abnormalities in the frontal-limbic system and default mode network than the patients with FEDN, implying heightened rumination. Additionally, treatment with medication appeared to partially modulate the aberrant connectivity, steering it toward normalization. This study showed altered brain connectivity in patients with MDD and its subtypes, which could be classified with machine learning models with robust performance. The abnormal connectivity could be a potential neural correlate for the presenting symptoms of patients with MDD. These findings provide novel insights into the neural pathogenesis of MDD.
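The nested five-fold cross-validation scheme described above can be sketched with scikit-learn: an inner loop tunes the random forest while an outer loop estimates generalization on data never seen during tuning. The synthetic data and the tiny parameter grid are stand-in assumptions, not the study's features or grid:

```python
# Nested 5-fold CV: GridSearchCV (inner) wrapped by cross_val_score (outer),
# so hyperparameter tuning never touches the outer test folds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))               # stand-in connectivity features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stand-in MDD-vs-HC labels

inner = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100]},  # inner loop: model selection
    cv=5,
)
outer_scores = cross_val_score(inner, X, y, cv=5)  # outer loop: accuracy
print(outer_scores.mean())
```

Reporting the mean of `outer_scores` (rather than the inner-loop best score) is what keeps the quoted accuracy an unbiased estimate.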

The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET): Rationale and Design.

Ramirez G, Lemley M, Shanbhag A, Kwiecinski J, Miller RJH, Kavanagh PB, Liang JX, Dey D, Slipczuk L, Travin MI, Alexanderson E, Carvajal-Juarez I, Packard RRS, Al-Mallah M, Einstein AJ, Feher A, Acampa W, Knight S, Le VT, Mason S, Sanghani R, Wopperer S, Chareonthaitawee P, Buechel RR, Rosamond TL, deKemp RA, Berman DS, Di Carli MF, Slomka PJ

PubMed | Aug 5, 2025
The REgistry of Flow and Perfusion Imaging for Artificial Intelligence with PET (REFINE PET) was established to collect multicenter PET and associated computed tomography (CT) images, together with clinical data and outcomes, into a comprehensive research resource. REFINE PET will enable validation and development of both standard and novel cardiac PET/CT processing methods. REFINE PET is a multicenter, international registry that contains both clinical and imaging data. The PET scans were processed using QPET software (Cedars-Sinai Medical Center, Los Angeles, CA), while the CT scans were processed using deep learning (DL) to detect coronary artery calcium (CAC). Patients were followed up for the occurrence of major adverse cardiovascular events (MACE), which include death, myocardial infarction, unstable angina, and late revascularization (>90 days from PET). The REFINE PET registry currently contains data for 35,588 patients from 14 sites, with additional patient data and sites anticipated. Comprehensive clinical data (including demographics, medical history, and stress test results) were integrated with more than 2200 imaging variables across 42 categories. The registry is poised to address a broad range of clinical questions, supported by correlating invasive angiography (within 6 months of MPI) in 5972 patients and a total of 9252 major adverse cardiovascular events during a median follow-up of 4.2 years. The REFINE PET registry leverages the integration of clinical, multimodality imaging, and novel quantitative and AI tools to advance the role of PET/CT MPI in diagnosis and risk stratification.
