
Integration of Spatiotemporal Dynamics and Structural Connectivity for Automated Epileptogenic Zone Localization in Temporal Lobe Epilepsy.

Xiao L, Zheng Q, Li S, Wei Y, Si W, Pan Y

PubMed · Aug 5, 2025
Accurate localization of the epileptogenic zone (EZ) is essential for surgical success in temporal lobe epilepsy. While stereoelectroencephalography (SEEG) and structural magnetic resonance imaging (MRI) provide complementary insights, existing unimodal methods fail to fully capture epileptogenic brain activity, and multimodal fusion remains challenging due to data complexity and surgeon-dependent interpretations. To address these issues, we propose a novel multimodal framework that improves EZ localization in temporal lobe epilepsy by fusing SEEG-derived electrophysiology with structural connectivity. By retrospectively analyzing SEEG, post-implant computed tomography (CT), and MRI (T1 and diffusion tensor imaging (DTI)) data from 15 patients, we reconstructed SEEG electrode positions and obtained fused SEEG and structural connectivity features. We then proposed a spatiotemporal co-attention deep neural network (ST-CANet) to classify the fused features, categorizing electrodes into seizure onset zone (SOZ), propagation zone (PZ), and non-involved zone (NIZ). Anatomical EZ boundaries were delineated by fusing the electrode position and classification information on a brain atlas. The proposed method was evaluated on the identification and localization of the three epilepsy-related zones. The experimental results demonstrate that our method achieves 98.08% average accuracy, outperforming other identification methods, and improves localization with Dice similarity coefficients (DSC) of 95.65% (SOZ), 92.13% (PZ), and 99.61% (NIZ), aligning with clinically validated surgical resection areas. This multimodal fusion strategy based on electrophysiological and structural connectivity information promises to assist neurosurgeons in accurately localizing the EZ and may find broader applications in preoperative planning for epilepsy surgeries.
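The ST-CANet architecture itself is not detailed in this abstract, but the general idea of co-attention fusion over two per-electrode modalities can be sketched as below. Module names, feature dimensions, and head counts are illustrative assumptions, not details from the paper.

```python
# Sketch of a co-attention fusion head for per-electrode SEEG and
# structural-connectivity features, classifying each electrode as
# SOZ / PZ / NIZ. Sizes and layout are assumptions, not ST-CANet itself.
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    def __init__(self, seeg_dim=128, conn_dim=64, hidden=128, n_classes=3):
        super().__init__()
        self.seeg_proj = nn.Linear(seeg_dim, hidden)
        self.conn_proj = nn.Linear(conn_dim, hidden)
        # Cross-attention in both directions: SEEG features attend to
        # connectivity features and vice versa.
        self.seeg_to_conn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.conn_to_seeg = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)  # SOZ / PZ / NIZ

    def forward(self, seeg_feat, conn_feat):
        # seeg_feat: (batch, n_electrodes, seeg_dim)
        # conn_feat: (batch, n_electrodes, conn_dim)
        s = self.seeg_proj(seeg_feat)
        c = self.conn_proj(conn_feat)
        s_att, _ = self.seeg_to_conn(query=s, key=c, value=c)
        c_att, _ = self.conn_to_seeg(query=c, key=s, value=s)
        fused = torch.cat([s_att, c_att], dim=-1)
        return self.classifier(fused)  # per-electrode logits

logits = CoAttentionFusion()(torch.randn(2, 10, 128), torch.randn(2, 10, 64))
```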

Real-time 3D US-CT fusion-based semi-automatic puncture robot system: clinical evaluation.

Nakayama M, Zhang B, Kuromatsu R, Nakano M, Noda Y, Kawaguchi T, Li Q, Maekawa Y, Fujie MG, Sugano S

PubMed · Aug 5, 2025
Conventional systems supporting percutaneous radiofrequency ablation (PRFA) have faced difficulties in ensuring safe and accurate puncture due to issues inherent to the medical images used and organ displacement caused by patients' respiration. To address this problem, this study proposes a semi-automatic puncture robot system that integrates real-time ultrasound (US) images with computed tomography (CT) images. The purpose of this paper is to evaluate the system's usefulness through a pilot clinical experiment involving participants. For the clinical experiment using the proposed system, an improved U-net model based on fivefold cross-validation was constructed. Following the workflow of the proposed system, the model was trained using US images acquired from patients with robotic arms. The average Dice coefficient for the entire validation dataset was confirmed to be 0.87; the model was therefore implemented in the robotic system and applied in the clinical experiment. A clinical experiment was conducted using the robotic system equipped with the developed AI model on five adult participants (male and female). The centroid distances between the point clouds from each modality were evaluated in the 3D US-CT fusion process, assuming the blood vessel centerline represents the overall structural position. The centroid distances showed a minimum of 0.38 mm, a maximum of 4.81 mm, and an average of 1.97 mm. Although the five participants had different CP classifications and the derived US images exhibited individual variability, all centroid distances were within the 5.00 mm ablation margin considered in PRFA, suggesting the potential accuracy and utility of the robotic system for puncture navigation. Additionally, the results suggested the potential generalization performance of the AI model trained with data acquired according to the robotic system's workflow.
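The centroid-distance metric used for evaluation is simple to reproduce. The sketch below compares the centroids of two hypothetical vessel-centerline point clouds against the 5.00 mm ablation margin; the point clouds themselves are placeholders, not study data.

```python
# Minimal sketch of the centroid-distance check described above: given two
# vessel-centerline point clouds (one from intraoperative US, one from CT),
# compare their centroids against the 5.00 mm ablation margin. The actual
# registration pipeline is more involved; this only illustrates the metric.
import numpy as np

def centroid_distance(us_points: np.ndarray, ct_points: np.ndarray) -> float:
    """Euclidean distance between centroids of two (N, 3) point clouds (mm)."""
    return float(np.linalg.norm(us_points.mean(axis=0) - ct_points.mean(axis=0)))

rng = np.random.default_rng(0)
us = rng.normal(size=(200, 3))          # placeholder US centerline points
ct = us + np.array([0.5, -0.3, 0.2])    # placeholder CT points, small offset
d = centroid_distance(us, ct)
print(f"centroid distance = {d:.2f} mm, within margin: {d <= 5.00}")
```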

A novel lung cancer diagnosis model using hybrid convolution (2D/3D)-based adaptive DenseUnet with attention mechanism.

Deepa J, Badhu Sasikala L, Indumathy P, Jerrin Simla A

PubMed · Aug 5, 2025
Existing Lung Cancer Diagnosis (LCD) models have difficulty detecting early-stage lung cancer due to the asymptomatic nature of the disease, which leads to an increased death rate among patients. It is therefore important to diagnose lung disease at an early stage to save the lives of affected persons. Hence, this work aims to develop an efficient lung disease diagnosis model using deep learning techniques for the early and accurate detection of lung cancer. This is achieved as follows. First, the proposed model collects the required CT images from standard benchmark datasets. Then, lung cancer segmentation is performed with the developed Hybrid Convolution (2D/3D)-based Adaptive DenseUnet with Attention mechanism (HC-ADAM). The Hybrid Sewing Training with Spider Monkey Optimization (HSTSMO) is introduced to optimize the parameters of the HC-ADAM segmentation approach. Finally, the segmented lung nodule images are passed to the lung cancer classification stage, where the Hybrid Adaptive Dilated Network with Attention mechanism (HADN-AM), a serial cascade of ResNet and Long Short-Term Memory (LSTM), is implemented for better categorization performance. The accuracy, precision, and F1-score of the developed model on the LIDC-IDRI dataset are 96.3%, 96.38%, and 96.36%, respectively.
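The HC-ADAM architecture is not spelled out in the abstract, but the hybrid 2D/3D convolution idea it names can be illustrated as a block that combines inter-slice 3D context with per-slice 2D refinement. All shapes and names below are assumptions, not the paper's design.

```python
# Illustrative sketch of a hybrid 2D/3D convolution block: a 3D convolution
# captures inter-slice context in the CT volume, then a 2D convolution
# refines each slice independently. Not the published HC-ADAM architecture.
import torch
import torch.nn as nn

class Hybrid2D3DBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv3d = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2d = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, depth, height, width) CT volume
        x = self.act(self.conv3d(x))
        b, c, d, h, w = x.shape
        # Fold depth into the batch dimension to run the 2D conv per slice.
        x2 = x.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        x2 = self.act(self.conv2d(x2))
        return x2.reshape(b, d, c, h, w).permute(0, 2, 1, 3, 4)

out = Hybrid2D3DBlock(1, 8)(torch.randn(1, 1, 16, 64, 64))
```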

Brain tumor segmentation by optimizing deep learning U-Net model.

Asiri AA, Hussain L, Irfan M, Mehdar KM, Awais M, Alelyani M, Alshuhri M, Alghamdi AJ, Alamri S, Nadeem MA

PubMed · Aug 5, 2025
Background: Magnetic Resonance Imaging (MRI) is a cornerstone in diagnosing brain tumors. However, the complex nature of these tumors makes accurate segmentation in MRI images a demanding task, and early detection is crucial for improving patient outcomes. Objective: To develop and evaluate a novel UNet-based architecture for improved brain tumor segmentation in MRI images. Methods: The UNet model architecture incorporates Leaky ReLU activation, batch normalization, and regularization to enhance training and performance, and uses varying numbers of layers and kernel sizes to capture different levels of detail. To address class imbalance in medical image segmentation, we employ focal loss and generalized Dice loss (GDL) functions. Results: The proposed model was evaluated on the BraTS 2020 dataset, achieving an accuracy of 99.64% and Dice coefficients of 0.8984, 0.8431, and 0.8824 for the necrotic core, edema, and enhancing tumor regions, respectively. Conclusion: These findings demonstrate the efficacy of our approach in accurately segmenting tumors, which has the potential to enhance diagnostic systems and improve patient outcomes.
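A combined focal + generalized Dice objective of the kind described can be sketched as follows. The 50/50 weighting between the two terms and all hyperparameters are assumptions rather than the paper's exact formulation.

```python
# Sketch of a combined focal + generalized Dice (GDL) loss for multi-class
# segmentation; both terms counteract class imbalance, as described above.
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    # logits: (B, C, H, W); target: (B, H, W) integer class labels
    logp = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(logp, target, reduction="none")
    pt = logp.exp().gather(1, target.unsqueeze(1)).squeeze(1)  # prob of true class
    return ((1 - pt) ** gamma * ce).mean()

def generalized_dice_loss(logits, target, eps=1e-6):
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    # Class weights inversely proportional to squared class volume.
    w = 1.0 / (onehot.sum(dim=(0, 2, 3)) ** 2 + eps)
    inter = (w * (probs * onehot).sum(dim=(0, 2, 3))).sum()
    denom = (w * (probs + onehot).sum(dim=(0, 2, 3))).sum()
    return 1.0 - 2.0 * inter / (denom + eps)

def combined_loss(logits, target, alpha=0.5):
    # alpha balances the two terms; the paper's weighting is not stated.
    return alpha * focal_loss(logits, target) + (1 - alpha) * generalized_dice_loss(logits, target)

loss = combined_loss(torch.randn(2, 4, 64, 64), torch.randint(0, 4, (2, 64, 64)))
```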

Imaging in clinical trials of rheumatoid arthritis: where are we in 2025?

Østergaard M, Rolland MAJ, Terslev L

PubMed · Aug 5, 2025
Accurate detection and assessment of inflammatory activity are crucial not only for diagnosing patients with rheumatoid arthritis but also for effective monitoring of treatment effect. Ultrasound and magnetic resonance imaging (MRI) have both been shown to be truthful, reproducible, and sensitive to change for inflammation in joints and tendon sheaths, and have validated scoring systems, which altogether allows them to be used as outcome measurement instruments in clinical trials. Furthermore, MRI allows sensitive and discriminative assessment of structural damage progression in RA, likewise with validated outcome measures. Other relevant imaging techniques, including the use of artificial intelligence, pose interesting possibilities for future clinical trials and will be briefly addressed in this review article.

Utilizing 3D fast spin echo anatomical imaging to reduce the number of contrast preparations in $T_{1\rho}$ quantification of knee cartilage using learning-based methods.

Zhong J, Huang C, Yu Z, Xiao F, Blu T, Li S, Ong TM, Ho KK, Chan Q, Griffith JF, Chen W

PubMed · Aug 5, 2025
To propose and evaluate an accelerated $T_{1\rho}$ quantification method that combines $T_{1\rho}$-weighted fast spin echo (FSE) images and proton density (PD)-weighted anatomical FSE images, leveraging deep learning models for $T_{1\rho}$ mapping. The goal is to reduce scan time and facilitate integration into routine clinical workflows for osteoarthritis (OA) assessment. This retrospective study utilized MRI data from 40 participants (30 OA patients and 10 healthy volunteers). A volume of PD-weighted anatomical FSE images and a volume of $T_{1\rho}$-weighted images acquired at a non-zero spin-lock time were used as input to train deep learning models, including a 2D U-Net and a multi-layer perceptron (MLP). $T_{1\rho}$ maps generated by these models were compared with ground-truth maps derived from a traditional non-linear least squares (NLLS) fitting method using four $T_{1\rho}$-weighted images. Evaluation metrics included mean absolute error (MAE), mean absolute percentage error (MAPE), regional error (RE), and regional percentage error (RPE). The best-performing deep learning models achieved RPEs below 5% across all evaluated scenarios. This performance was consistent even in reduced acquisition settings that included only one PD-weighted image and one $T_{1\rho}$-weighted image, where NLLS methods cannot be applied. Furthermore, the results were comparable to those obtained with NLLS when longer acquisitions with four $T_{1\rho}$-weighted images were used. The proposed approach enables efficient $T_{1\rho}$ mapping using PD-weighted anatomical images, reducing scan time while maintaining clinical standards. This method has the potential to facilitate the integration of quantitative MRI techniques into routine clinical practice, benefiting OA diagnosis and monitoring.
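The NLLS reference method fits a mono-exponential spin-lock decay per voxel, $S(\mathrm{TSL}) = S_0 \exp(-\mathrm{TSL}/T_{1\rho})$. A minimal single-voxel sketch follows, with illustrative spin-lock times and noise levels rather than the study's acquisition parameters.

```python
# Sketch of the reference non-linear least-squares (NLLS) T1rho fit that
# the deep-learning maps are compared against: a mono-exponential decay
# fitted per voxel over the spin-lock times (TSL). Values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def t1rho_model(tsl, s0, t1rho):
    return s0 * np.exp(-tsl / t1rho)

tsl = np.array([1.0, 10.0, 30.0, 50.0])            # spin-lock times (ms)
true_s0, true_t1rho = 1000.0, 40.0                 # hypothetical cartilage voxel
signal = t1rho_model(tsl, true_s0, true_t1rho)
signal += np.random.default_rng(0).normal(0, 5, size=tsl.shape)  # noise

popt, _ = curve_fit(t1rho_model, tsl, signal, p0=[signal[0], 30.0])
print(f"fitted T1rho = {popt[1]:.1f} ms")
```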

Sex differences in white matter amplitude of low-frequency fluctuation associated with cognitive performance across the Alzheimer's disease continuum.

Chen X, Zhou S, Wang W, Gao Z, Ye W, Zhu W, Lu Y, Ma J, Li X, Yu Y, Li X

PubMed · Aug 5, 2025
Background: Sex differences in Alzheimer's disease (AD) progression offer insights into pathogenesis and clinical management. White matter (WM) amplitude of low-frequency fluctuation (ALFF), reflecting neural activity, represents a potential disease biomarker. Objective: To explore whether there are sex differences in regional WM ALFF among AD patients, amnestic mild cognitive impairment (aMCI) patients, and healthy controls (HCs); how these relate to cognitive performance; and whether they can be used for disease classification. Methods: Resting-state functional magnetic resonance images and cognitive assessments were obtained from 85 AD patients (36 female), 52 aMCI patients (23 female), and 78 HCs (43 female). Two-way ANOVA examined group × sex interactions for regional WM ALFF and cognitive scores. WM ALFF-cognition correlations and support vector machine diagnostic accuracy were evaluated. Results: Sex × group interaction effects on WM ALFF were detected in the right superior longitudinal fasciculus (F = 20.08, p(FDR-corrected) < 0.001), left superior longitudinal fasciculus (F = 5.45, p(GRF-corrected) < 0.001), and right inferior longitudinal fasciculus (F = 6.00, p(GRF-corrected) = 0.001). These WM ALFF values correlated positively with different cognitive performances between the sexes. The support vector machine best differentiated aMCI from AD in the full cohort and in males (accuracy = 75%), and HCs from aMCI in females (accuracy = 93%). Conclusions: Sex differences in regional WM ALFF during AD progression are associated with cognitive performance and can be utilized for disease classification.
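ALFF itself is a standard resting-state measure: the mean amplitude of the Fourier spectrum of a voxel's time series within the low-frequency band. A minimal per-voxel computation, assuming the conventional 0.01-0.08 Hz band and a placeholder TR rather than the study's parameters, looks like this:

```python
# Sketch of a conventional single-voxel ALFF computation: mean amplitude
# of the Fourier spectrum within a low-frequency band (here 0.01-0.08 Hz).
import numpy as np

def alff(timeseries: np.ndarray, tr: float, band=(0.01, 0.08)) -> float:
    n = timeseries.size
    freqs = np.fft.rfftfreq(n, d=tr)
    # Amplitude spectrum of the demeaned series, scaled by series length.
    amp = np.abs(np.fft.rfft(timeseries - timeseries.mean())) / n
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(amp[mask].mean())

rng = np.random.default_rng(0)
ts = rng.normal(size=200)      # placeholder BOLD series, TR = 2 s assumed
print(f"ALFF = {alff(ts, tr=2.0):.4f}")
```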

Towards a zero-shot low-latency navigation for open surgery augmented reality applications.

Schwimmbeck M, Khajarian S, Auer C, Wittenberg T, Remmele S

PubMed · Aug 5, 2025
Augmented reality (AR) enhances surgical navigation by superimposing visible anatomical structures with three-dimensional virtual models using head-mounted displays (HMDs). In particular, interventions such as open liver surgery can benefit from AR navigation, as it aids in identifying and distinguishing tumors and risk structures. However, there is a lack of automatic and markerless methods that are robust against real-world challenges such as partial occlusion and organ motion. We introduce a novel multi-device approach for automatic live navigation in open liver surgery that enhances the visualization and interaction capabilities of a HoloLens 2 HMD through precise and reliable registration using an Intel RealSense RGB-D camera. The intraoperative RGB-D segmentation and the preoperative CT data are utilized to register a virtual liver model to the target anatomy. An AR-prompted Segment Anything Model (SAM) enables robust segmentation of the liver in situ without the need for additional training data. To mitigate algorithmic latency, Double Exponential Smoothing (DES) is applied to forecast registration results. We conducted a phantom study for open liver surgery, investigating various scenarios of liver motion, viewpoint, and occlusion. The mean registration errors (TRE 8.31-18.78 mm) are comparable to those reported in prior work, while our approach demonstrates high success rates even for high occlusion factors and strong motion. Using forecasting, we bypassed the algorithmic latency of 79.8 ms per frame, with median forecasting errors below 2 mm and 1.5° between the quaternions. To our knowledge, this is the first work to approach markerless in situ visualization by combining a multi-device method with forecasting and a foundation model for segmentation and tracking. This enables a more reliable and precise AR registration of surgical targets with low latency. Our approach can be applied to other surgical applications and AR hardware with minimal effort.
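Double Exponential Smoothing is a standard forecaster (Holt's linear method): a level and a trend are updated per frame and extrapolated one step ahead to hide the registration latency. A minimal per-frame version is sketched below; the smoothing constants and pose representation are assumptions, not the paper's settings.

```python
# Sketch of latency-hiding with Double Exponential Smoothing (DES): feed
# the latest (lagged) registration result, get a one-step-ahead forecast.
import numpy as np

class DESForecaster:
    def __init__(self, alpha=0.5, beta=0.3):
        self.alpha, self.beta = alpha, beta
        self.level = None
        self.trend = None

    def update(self, x: np.ndarray) -> np.ndarray:
        """Update level/trend with the newest estimate; return the forecast."""
        if self.level is None:
            self.level = x.astype(float)
            self.trend = np.zeros_like(x, dtype=float)
        else:
            prev = self.level
            self.level = self.alpha * x + (1 - self.alpha) * (self.level + self.trend)
            self.trend = self.beta * (self.level - prev) + (1 - self.beta) * self.trend
        return self.level + self.trend  # one-step-ahead forecast

f = DESForecaster()
for t in range(5):
    pose = np.array([0.0, 0.0, 10.0 + 2.0 * t])  # translation drifting in z
    print(f.update(pose))
```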

Unsupervised learning based perfusion maps for temporally truncated CT perfusion imaging.

Tung CH, Li ZY, Huang HM

PubMed · Aug 5, 2025
Computed tomography perfusion (CTP) imaging is a rapid diagnostic tool for acute stroke but is less robust when tissue time-attenuation curves are truncated. This study proposes an unsupervised learning method for generating perfusion maps from truncated CTP images. Real brain CTP images were artificially truncated to 15% and 30% of the original scan time. Perfusion maps of complete and truncated CTP images were calculated using the proposed method and compared with standard singular value decomposition (SVD), tensor total variation (TTV), nonlinear regression (NLR), and spatio-temporal perfusion physics-informed neural network (SPPINN) methods. The NLR method yielded many perfusion values outside physiological ranges, indicating a lack of robustness. The proposed method did not improve the estimation of cerebral blood flow relative to the SVD and TTV methods, but it reduced the effect of truncation on the estimation of cerebral blood volume, with a relative difference of 15.4% in the infarcted region for 30% truncation (versus 20.7% for SVD and 19.4% for TTV). The proposed method also showed better resistance to 30% truncation for mean transit time, with a relative difference of 16.6% in the infarcted region (versus 25.9% for SVD and 26.2% for TTV). Compared to the SPPINN method, the proposed method had similar responses to truncation in gray and white matter but was less sensitive to truncation in the infarcted region. These results demonstrate the feasibility of using unsupervised learning to generate perfusion maps from CTP images and to improve robustness under truncation.
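For context, the standard SVD baseline the study compares against deconvolves each tissue time-attenuation curve with a Toeplitz matrix built from the arterial input function (AIF), taking CBF from the peak of the recovered residue function. A minimal truncated-SVD version with toy curves and an assumed truncation threshold is sketched below; clinical implementations typically use block-circulant variants and careful AIF handling.

```python
# Sketch of truncated-SVD deconvolution for CT perfusion: recover the
# residue function r(t) from tissue = CBF * (AIF convolved with r).
import numpy as np

def svd_deconvolve(tissue: np.ndarray, aif: np.ndarray, dt: float,
                   thresh: float = 0.2) -> np.ndarray:
    n = aif.size
    # Lower-triangular Toeplitz convolution matrix built from the AIF.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, S, Vt = np.linalg.svd(A)
    S_inv = np.where(S > thresh * S.max(), 1.0 / S, 0.0)  # truncate small SVs
    return Vt.T @ (S_inv * (U.T @ tissue))                # residue function

t = np.arange(0, 60, 1.0)                                 # seconds
aif = np.exp(-(t - 15) ** 2 / 20)                         # toy AIF bolus
residue = np.exp(-t / 4)                                  # toy residue function
tissue = 1.0 * np.convolve(aif, residue)[:t.size]         # CBF = 1.0 (toy)
r = svd_deconvolve(tissue, aif, dt=1.0)
print(f"estimated CBF is proportional to max(r) = {r.max():.3f}")
```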

Automated ultrasound system ARTHUR V.2.0 with AI analysis DIANA V.2.0 matches expert rheumatologist in hand joint assessment of rheumatoid arthritis patients.

Frederiksen BA, Hammer HB, Terslev L, Ammitzbøll-Danielsen M, Savarimuthu TR, Weber ABH, Just SA

PubMed · Aug 5, 2025
To evaluate the agreement and repeatability of an automated robotic ultrasound system (ARTHUR V.2.0) combined with an AI model (DIANA V.2.0) in assessing synovial hypertrophy (SH) and Doppler activity in rheumatoid arthritis (RA) patients, using an expert rheumatologist's assessment as the reference standard. Thirty RA patients underwent two consecutive ARTHUR V.2.0 scans and rheumatologist assessment of 22 hand joints, with the rheumatologist blinded to the automated system's results. Images were scored for SH and Doppler by DIANA V.2.0 using the EULAR-OMERACT scale (0-3). Agreement was evaluated by weighted Cohen's kappa, percent exact agreement (PEA), percent close agreement (PCA), and binary outcomes using Global OMERACT-EULAR Synovitis Scoring (healthy ≤1 vs diseased ≥2). Comparisons included intra-robot repeatability and agreement with the expert rheumatologist and a blinded independent assessor. ARTHUR successfully scanned 564 of 660 joints, an overall success rate of 85.5%. Intra-robot agreement was PEA 63.0%, PCA 93.0%, and binary 90.5% for SH, and PEA 74.8%, PCA 93.7%, and binary 88.1% for Doppler, with kappa values of 0.54 and 0.49, respectively. Agreement between ARTHUR+DIANA and the rheumatologist was: SH (PEA 57.9%, PCA 92.9%, binary 87.3%, kappa 0.38) and Doppler (PEA 77.3%, PCA 94.2%, binary 91.2%, kappa 0.44); with the independent assessor: SH (PEA 49.0%, PCA 91.2%, binary 80.0%, kappa 0.39) and Doppler (PEA 62.6%, PCA 94.4%, binary 88.1%, kappa 0.48). ARTHUR V.2.0 and DIANA V.2.0 demonstrated repeatability on par with intra-expert agreement reported in the literature and showed encouraging agreement with human assessors, though further refinement is needed to optimise performance across specific joints.
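The agreement statistics reported here are straightforward to compute from paired scores. A minimal sketch using made-up EULAR-OMERACT grades (0-3) follows, with the binary split at healthy ≤1 vs diseased ≥2 as described; the linear kappa weighting is an assumption.

```python
# Sketch of the agreement metrics above, computed over paired 0-3 scores:
# weighted Cohen's kappa, percent exact agreement (PEA), percent close
# agreement (PCA, within one grade), and binary agreement (>=2 = diseased).
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agreement(a: np.ndarray, b: np.ndarray) -> dict:
    return {
        "kappa": cohen_kappa_score(a, b, weights="linear"),
        "PEA": float(np.mean(a == b) * 100),
        "PCA": float(np.mean(np.abs(a - b) <= 1) * 100),
        "binary": float(np.mean((a >= 2) == (b >= 2)) * 100),
    }

robot = np.array([0, 1, 2, 3, 1, 0, 2, 1])   # DIANA scores (illustrative)
expert = np.array([0, 1, 1, 3, 2, 0, 2, 0])  # rheumatologist scores
print(agreement(robot, expert))
```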