Page 42 of 3143139 results

TRI-PLAN: A deep learning-based automated assessment framework for right heart assessment in transcatheter tricuspid valve replacement planning.

Yang T, Wang Y, Zhu G, Liu W, Cao J, Liu Y, Lu F, Yang J

PubMed · Aug 6 2025
Efficient and accurate preoperative assessment of the right-sided heart structural complex (RSHSc) is crucial for planning transcatheter tricuspid valve replacement (TTVR), yet current manual methods remain time-consuming and inconsistent. To address this unmet clinical need, this study developed and validated TRI-PLAN, the first fully automated, deep learning (DL)-based framework for pre-TTVR assessment. A total of 140 preprocedural computed tomography angiography (CTA) scans (63,962 slices) from patients with severe tricuspid regurgitation (TR) at two high-volume cardiac centers in China were retrospectively included and divided into a training cohort (n = 100), an internal validation cohort (n = 20), and an external validation cohort (n = 20). TRI-PLAN uses a dual-stage right heart assessment network (DRA-Net) to segment the RSHSc and localize the tricuspid annulus (TA), followed by automated measurement of key anatomical parameters and right ventricular ejection fraction (RVEF). Performance was evaluated for accuracy, agreement with an interobserver benchmark, clinical usability, and workflow efficiency. TRI-PLAN achieved expert-level segmentation accuracy (volumetric Dice 0.952/0.955; surface Dice 0.934/0.940), precise localization (standard deviation 1.18/1.14 mm), excellent measurement agreement (ICC 0.984/0.979), and reliable RVEF evaluation (R = 0.97, bias < 5%) across the internal and external cohorts. In addition, TRI-PLAN obtained a direct acceptance rate of 80% and reduced total assessment time from 30 min manually to under 2 min (>95% time saving). TRI-PLAN provides an accurate, efficient, and clinically applicable solution for pre-TTVR assessment, with strong potential to streamline TTVR planning and improve procedural outcomes.
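The abstract does not spell out how RVEF is computed from the segmentations; assuming the standard volumetric definition from end-diastolic and end-systolic volumes, a minimal sketch (function name and example volumes are illustrative):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic
    volumes, e.g. RV volumes derived from RSHSc segmentations."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Example: EDV 140 mL, ESV 70 mL
print(ejection_fraction(140.0, 70.0))  # 50.0
```

The reported RVEF bias (<5%) would then reflect the difference between this value computed from automated versus manual volumes.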

Automated vertebral bone quality score measurement on lumbar MRI using deep learning: Development and validation of an AI algorithm.

Jayasuriya NM, Feng E, Nathani KR, Delawan M, Katsos K, Bhagra O, Freedman BA, Bydon M

PubMed · Aug 5 2025
Bone health is a critical determinant of spine surgery outcomes, yet many patients undergo procedures without adequate preoperative assessment due to limitations in current bone quality assessment methods. This study aimed to develop and validate an artificial intelligence-based algorithm that predicts vertebral bone quality (VBQ) scores from routine MRI scans, enabling better preoperative identification of patients at risk for poor surgical outcomes. The study used 257 lumbar spine T1-weighted MRI scans from the SPIDER challenge dataset. VBQ scores were calculated in three steps: selecting the mid-sagittal slice, measuring vertebral body signal intensity from L1-L4, and normalizing by cerebrospinal fluid signal intensity. A YOLOv8 model was developed to automate region-of-interest placement and VBQ score calculation. The system was validated against manual annotations from 47 lumbar spine surgery patients, with performance evaluated using precision, recall, mean average precision, intraclass correlation coefficient (ICC), Pearson correlation, RMSE, and mean error. The YOLOv8 model demonstrated high accuracy in vertebral body detection (precision: 0.9429, recall: 0.9076, mAP@0.5: 0.9403, mAP@[0.5:0.95]: 0.8288). Strong interrater reliability was observed, with an ICC of 0.95 (human-human) and ICCs of 0.88 and 0.93 (human-AI). Pearson correlations for VBQ scores between human and AI measurements were 0.86 and 0.90, with RMSE values of 0.58 and 0.42, respectively. The AI-based algorithm accurately predicts VBQ scores from routine lumbar MRIs and has the potential to enhance early identification of and intervention for patients with poor bone health, improving surgical outcomes. Further external validation is recommended to ensure generalizability and clinical applicability.
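The three-step VBQ calculation above reduces to a simple ratio once the ROIs are placed. A sketch, assuming (as in the commonly published VBQ definition) the median of the L1-L4 vertebral body signal intensities is divided by the CSF signal intensity; the function name and example values are illustrative:

```python
import statistics

def vbq_score(vertebral_si: list[float], csf_si: float) -> float:
    """VBQ score: median L1-L4 vertebral body signal intensity
    normalized by cerebrospinal fluid signal intensity."""
    return statistics.median(vertebral_si) / csf_si

# Hypothetical ROI mean intensities for L1-L4 and CSF
score = vbq_score([310.0, 300.0, 295.0, 305.0], csf_si=150.0)
print(round(score, 3))
```

The YOLOv8 detector's role is to supply the ROIs automatically; the score itself is this normalized ratio.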

Innovative machine learning approach for liver fibrosis and disease severity evaluation in MAFLD patients using MRI fat content analysis.

Hou M, Zhu Y, Zhou H, Zhou S, Zhang J, Zhang Y, Liu X

PubMed · Aug 5 2025
This study employed machine learning models to quantitatively analyze liver fat content from MRI images for the evaluation of liver fibrosis and disease severity in patients with metabolic dysfunction-associated fatty liver disease (MAFLD). A total of 26 confirmed MAFLD cases, along with MRI image sequences obtained from public repositories, were included in a comprehensive assessment. Radiomics features (contrast, correlation, homogeneity, energy, and entropy) were extracted and used to construct a random forest classification model with optimized hyperparameters. The model achieved an accuracy of 96.8%, sensitivity of 95.7%, specificity of 97.8%, and an F1-score of 96.8%, demonstrating its capability to accurately evaluate the degree of liver fibrosis and overall disease severity in MAFLD patients. Integrating machine learning with MRI-based analysis offers a promising approach to enhancing clinical decision-making and guiding treatment strategies, underscoring the potential of advanced technologies to improve diagnostic precision and disease management in MAFLD.
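The five features named above are classic gray-level co-occurrence matrix (GLCM) texture statistics. A self-contained NumPy sketch of their computation; the paper's quantization level, pixel offset, and ROI handling are not given, so the choices below (8 levels, horizontal neighbor) are assumptions:

```python
import numpy as np

def glcm_features(img: np.ndarray, levels: int = 8) -> dict:
    """GLCM texture features (horizontal-neighbor offset) of the kind
    used as radiomics inputs to a classifier."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1          # count co-occurring gray-level pairs
    p = glcm / glcm.sum()        # normalize to a joint probability
    i, j = np.indices(p.shape)
    eps = 1e-12
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + eps),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
        "energy": (p ** 2).sum(),
        "entropy": -(p * np.log2(p + eps)).sum(),
    }

rng = np.random.default_rng(0)
feats = glcm_features(rng.integers(0, 256, size=(64, 64)))
print({k: round(v, 3) for k, v in feats.items()})
```

A feature vector like this per ROI would then be fed to a random forest classifier (e.g. scikit-learn's `RandomForestClassifier`) for fibrosis grading.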

Integration of Spatiotemporal Dynamics and Structural Connectivity for Automated Epileptogenic Zone Localization in Temporal Lobe Epilepsy.

Xiao L, Zheng Q, Li S, Wei Y, Si W, Pan Y

PubMed · Aug 5 2025
Accurate localization of the epileptogenic zone (EZ) is essential for surgical success in temporal lobe epilepsy. While stereoelectroencephalography (SEEG) and structural magnetic resonance imaging (MRI) provide complementary insights, existing unimodal methods fail to fully capture epileptogenic brain activity, and multimodal fusion remains challenging due to data complexity and surgeon-dependent interpretations. To address these issues, we propose a novel multimodal framework that improves EZ localization by fusing SEEG-derived electrophysiology with structural connectivity in temporal lobe epilepsy. Retrospectively analyzing SEEG, post-implant computed tomography (CT), and MRI (T1 and diffusion tensor imaging (DTI)) data from 15 patients, we reconstructed SEEG electrode positions and obtained fused SEEG and structural connectivity features. We then propose a spatiotemporal co-attention deep neural network (ST-CANet) to classify the fusion features, categorizing electrodes into seizure onset zone (SOZ), propagation zone (PZ), and non-involved zone (NIZ). Anatomical EZ boundaries were delineated by fusing electrode position and classification information on a brain atlas. The proposed method was evaluated on the identification and localization of the three epilepsy-related zones. Experimental results demonstrate that our method achieves 98.08% average accuracy, outperforms other identification methods, and improves localization with Dice similarity coefficients (DSC) of 95.65% (SOZ), 92.13% (PZ), and 99.61% (NIZ), aligning with clinically validated surgical resection areas. This multimodal fusion strategy based on electrophysiological and structural connectivity information promises to assist neurosurgeons in accurately localizing the EZ and may find broader application in preoperative planning for epilepsy surgery.
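The DSC figures above use the standard overlap definition: twice the intersection of two binary masks divided by the sum of their sizes. A minimal sketch with illustrative masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

pred = np.zeros((10, 10), dtype=bool); pred[2:8, 2:8] = True  # 36 px
ref  = np.zeros((10, 10), dtype=bool); ref[3:9, 3:9] = True   # 36 px
print(dice(pred, ref))  # overlap 5x5 = 25 -> 2*25/72 ≈ 0.694
```

Here the masks would be the predicted and clinically validated resection regions for each zone.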

Real-time 3D US-CT fusion-based semi-automatic puncture robot system: clinical evaluation.

Nakayama M, Zhang B, Kuromatsu R, Nakano M, Noda Y, Kawaguchi T, Li Q, Maekawa Y, Fujie MG, Sugano S

PubMed · Aug 5 2025
Conventional systems supporting percutaneous radiofrequency ablation (PRFA) have had difficulty ensuring safe and accurate puncture due to limitations inherent in the medical images used and to organ displacement caused by patients' respiration. To address this problem, this study proposes a semi-automatic puncture robot system that integrates real-time ultrasound (US) images with computed tomography (CT) images, and evaluates its usefulness through a pilot clinical experiment. For the clinical experiment, an improved U-net model based on fivefold cross-validation was constructed. Following the workflow of the proposed system, the model was trained on US images acquired from patients by the robotic arm. The average Dice coefficient over the entire validation dataset was 0.87, so the model was implemented in the robotic system and applied in the clinical experiment. The clinical experiment was conducted with the robotic system, equipped with the developed AI model, on five adult male and female participants. In the 3D US-CT fusion process, the centroid distances between the point clouds from each modality were evaluated, taking the blood vessel centerline to represent the overall structural position. The centroid distances had a minimum of 0.38 mm, a maximum of 4.81 mm, and an average of 1.97 mm. Although the five participants had different CP classifications and the derived US images exhibited individual variability, all centroid distances were within the 5.00 mm ablation margin considered in PRFA, suggesting the accuracy and utility of the robotic system for puncture navigation. The results also suggest the generalization performance of the AI model trained with data acquired according to the robotic system's workflow.
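The centroid-distance metric used in the evaluation is straightforward once both point clouds are in a common frame. A sketch, assuming the US- and CT-derived centerline points have already been registered to shared millimeter coordinates (the arrays below are illustrative):

```python
import numpy as np

def centroid_distance(us_pts: np.ndarray, ct_pts: np.ndarray) -> float:
    """Euclidean distance (mm) between the centroids of two 3D point
    clouds, e.g. vessel-centerline points from US and from CT."""
    return float(np.linalg.norm(us_pts.mean(axis=0) - ct_pts.mean(axis=0)))

us = np.array([[0.0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]])
ct = us + np.array([1.0, 0, 0])   # a hypothetical 1 mm shift along x
print(centroid_distance(us, ct))  # 1.0
```

Values of this metric below the 5.00 mm ablation margin are what the study reports for all five participants.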

A novel lung cancer diagnosis model using hybrid convolution (2D/3D)-based adaptive DenseUnet with attention mechanism.

Deepa J, Badhu Sasikala L, Indumathy P, Jerrin Simla A

PubMed · Aug 5 2025
Existing lung cancer diagnosis (LCD) models have difficulty detecting early-stage lung cancer because of the asymptomatic nature of the disease, which increases patient mortality; diagnosing lung disease at an early stage is therefore essential. Hence, this research develops an efficient lung disease diagnosis pipeline using deep learning techniques for early and accurate detection of lung cancer. First, the proposed model collects the required CT images from standard benchmark datasets. Lung cancer segmentation is then performed with the proposed Hybrid Convolution (2D/3D)-based Adaptive DenseUnet with Attention mechanism (HC-ADAM), whose parameters are optimized by Hybrid Sewing Training with Spider Monkey Optimization (HSTSMO). Finally, the segmented lung nodule images are passed to the classification stage, where Hybrid Adaptive Dilated Networks with Attention mechanism (HADN-AM), a serial cascade of ResNet and Long Short-Term Memory (LSTM), deliver the categorization. The accuracy, precision, and F1-score of the developed model on the LIDC-IDRI dataset are 96.3%, 96.38%, and 96.36%, respectively.

Brain tumor segmentation by optimizing deep learning U-Net model.

Asiri AA, Hussain L, Irfan M, Mehdar KM, Awais M, Alelyani M, Alshuhri M, Alghamdi AJ, Alamri S, Nadeem MA

PubMed · Aug 5 2025
Background: Magnetic resonance imaging (MRI) is a cornerstone in diagnosing brain tumors, but the complex nature of these tumors makes accurate segmentation in MRI images a demanding task, and early detection is crucial for improving patient outcomes. Objective: To develop and evaluate a novel UNet-based architecture for improved brain tumor segmentation in MRI images. Methods: The UNet model incorporates Leaky ReLU activation, batch normalization, and regularization to enhance training and performance, with varying numbers of layers and kernel sizes to capture different levels of detail. To address class imbalance in medical image segmentation, we employ focal loss and generalized Dice loss (GDL). Results: The proposed model was evaluated on the BraTS 2020 dataset, achieving an accuracy of 99.64% and Dice coefficients of 0.8984, 0.8431, and 0.8824 for the necrotic core, edema, and enhancing tumor regions, respectively. Conclusion: These findings demonstrate the efficacy of our approach in accurately segmenting tumors, with the potential to enhance diagnostic systems and improve patient outcomes.
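A NumPy sketch of the two imbalance-aware loss terms, assuming "focused loss" refers to the standard focal loss and using the inverse-square class-volume weighting of generalized Dice loss (Sudre et al.); shown here for a soft binary segmentation rather than the paper's multi-region setting:

```python
import numpy as np

def focal_loss(p: np.ndarray, y: np.ndarray, gamma: float = 2.0) -> float:
    """Binary focal loss: down-weights easy voxels so training focuses
    on hard, rare foreground ones."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)       # probability of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def generalized_dice_loss(p: np.ndarray, y: np.ndarray) -> float:
    """Generalized Dice loss with inverse-square class-volume weights,
    treating foreground and background as the two classes."""
    eps = 1e-7
    probs = np.stack([1 - p, p]).reshape(2, -1)
    refs = np.stack([1 - y, y]).reshape(2, -1)
    w = 1.0 / (refs.sum(axis=1) ** 2 + eps)   # rare classes weigh more
    inter = (probs * refs).sum(axis=1)
    denom = (probs + refs).sum(axis=1)
    return float(1.0 - 2.0 * (w * inter).sum() / ((w * denom).sum() + eps))

y = np.array([0.0, 0, 1, 1])
p = np.array([0.1, 0.2, 0.8, 0.9])
print(round(focal_loss(p, y), 4), round(generalized_dice_loss(p, y), 4))
```

In training, the two terms are typically summed (optionally weighted) into a single objective.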

Imaging in clinical trials of rheumatoid arthritis: where are we in 2025?

Østergaard M, Rolland MAJ, Terslev L

PubMed · Aug 5 2025
Accurate detection and assessment of inflammatory activity is crucial not only for diagnosing patients with rheumatoid arthritis but also for effective monitoring of treatment effect. Ultrasound and magnetic resonance imaging (MRI) have both been shown to be truthful, reproducible, and sensitive to change for inflammation in joints and tendon sheaths and have validated scoring systems, which altogether allow them to be used as outcome measurement instruments in clinical trials. Furthermore, MRI also allows sensitive and discriminative assessment of structural damage progression in RA, also with validated outcome measures. Other relevant imaging techniques, including the use of artificial intelligence, pose interesting possibilities for future clinical trials and will be briefly addressed in this review article.

Utilizing 3D fast spin echo anatomical imaging to reduce the number of contrast preparations in $T_{1\rho}$ quantification of knee cartilage using learning-based methods.

Zhong J, Huang C, Yu Z, Xiao F, Blu T, Li S, Ong TM, Ho KK, Chan Q, Griffith JF, Chen W

PubMed · Aug 5 2025
To propose and evaluate an accelerated $T_{1\rho}$ quantification method that combines $T_{1\rho}$-weighted fast spin echo (FSE) images and proton density (PD)-weighted anatomical FSE images, leveraging deep learning models for $T_{1\rho}$ mapping. The goal is to reduce scan time and facilitate integration into routine clinical workflows for osteoarthritis (OA) assessment. This retrospective study utilized MRI data from 40 participants (30 OA patients and 10 healthy volunteers). A volume of PD-weighted anatomical FSE images and a volume of $T_{1\rho}$-weighted images acquired at a non-zero spin-lock time were used as input to train deep learning models, including a 2D U-Net and a multi-layer perceptron (MLP). $T_{1\rho}$ maps generated by these models were compared with ground-truth maps derived from a traditional non-linear least squares (NLLS) fitting method using four $T_{1\rho}$-weighted images. Evaluation metrics included mean absolute error (MAE), mean absolute percentage error (MAPE), regional error (RE), and regional percentage error (RPE). The best-performing deep learning models achieved RPEs below 5% across all evaluated scenarios. This performance held even in reduced acquisition settings with only one PD-weighted image and one $T_{1\rho}$-weighted image, where NLLS methods cannot be applied. Furthermore, the results were comparable to those obtained with NLLS when longer acquisitions with four $T_{1\rho}$-weighted images were used. The proposed approach enables efficient $T_{1\rho}$ mapping using PD-weighted anatomical images, reducing scan time while maintaining clinical standards. This method has the potential to facilitate the integration of quantitative MRI techniques into routine clinical practice, benefiting OA diagnosis and monitoring.
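The NLLS baseline fits the mono-exponential decay $S(\mathrm{TSL}) = S_0 \exp(-\mathrm{TSL}/T_{1\rho})$ across spin-lock times. For a self-contained sketch the fit below is log-linearized (valid for positive, low-noise signals); production pipelines typically use iterative NLLS such as Levenberg-Marquardt, and the spin-lock times and $T_{1\rho}$ value are illustrative:

```python
import numpy as np

def fit_t1rho(tsl_ms: np.ndarray, signal: np.ndarray):
    """Fit S(TSL) = S0 * exp(-TSL / T1rho) via log-linear least squares.
    Returns (S0, T1rho in ms)."""
    slope, intercept = np.polyfit(tsl_ms, np.log(signal), 1)
    return np.exp(intercept), -1.0 / slope

tsl = np.array([0.0, 10.0, 30.0, 50.0])   # four spin-lock times (ms)
sig = 1000.0 * np.exp(-tsl / 40.0)        # synthetic decay, T1rho = 40 ms
s0, t1rho = fit_t1rho(tsl, sig)
print(round(t1rho, 1))  # 40.0
```

The paper's contribution is replacing most of these contrast preparations with one PD-weighted anatomical volume plus a learned mapping, where a per-voxel fit like this is no longer possible.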

Sex differences in white matter amplitude of low-frequency fluctuation associated with cognitive performance across the Alzheimer's disease continuum.

Chen X, Zhou S, Wang W, Gao Z, Ye W, Zhu W, Lu Y, Ma J, Li X, Yu Y, Li X

PubMed · Aug 5 2025
Background: Sex differences in Alzheimer's disease (AD) progression offer insights into pathogenesis and clinical management. White matter (WM) amplitude of low-frequency fluctuation (ALFF), reflecting neural activity, is a potential disease biomarker. Objective: To explore whether regional WM ALFF differs by sex among AD patients, amnestic mild cognitive impairment (aMCI) patients, and healthy controls (HCs); how it relates to cognitive performance; and whether it can be used for disease classification. Methods: Resting-state functional MRI and cognitive assessments were obtained from 85 AD patients (36 female), 52 aMCI patients (23 female), and 78 HCs (43 female). Two-way ANOVA examined group × sex interactions for regional WM ALFF and cognitive scores. WM ALFF-cognition correlations and support vector machine diagnostic accuracy were evaluated. Results: Sex × group interaction effects on WM ALFF were detected in the right superior longitudinal fasciculus (F = 20.08, FDR-corrected p < 0.001), left superior longitudinal fasciculus (F = 5.45, GRF-corrected p < 0.001), and right inferior longitudinal fasciculus (F = 6.00, GRF-corrected p = 0.001). These WM ALFF values correlated positively with different aspects of cognitive performance in each sex. The support vector machine best differentiated aMCI from AD in the full cohort and in males (accuracy = 75%), and HCs from aMCI in females (accuracy = 93%). Conclusions: Sex differences in regional WM ALFF during AD progression are associated with cognitive performance and can be used for disease classification.
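ALFF for a single voxel is the mean spectral amplitude of its BOLD time series within a low-frequency band, conventionally 0.01-0.08 Hz. A sketch with a synthetic time series (the paper's exact preprocessing, TR, and band are not stated, so those are assumptions here):

```python
import numpy as np

def alff(ts: np.ndarray, tr_s: float, band=(0.01, 0.08)) -> float:
    """Amplitude of low-frequency fluctuation: mean single-sided FFT
    amplitude of a demeaned voxel time series within the band (Hz)."""
    n = ts.size
    freqs = np.fft.rfftfreq(n, d=tr_s)
    amp = np.abs(np.fft.rfft(ts - ts.mean())) * 2.0 / n
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(amp[mask].mean())

tr = 2.0                              # repetition time (s), assumed
t = np.arange(240) * tr               # an 8-minute scan
slow = np.sin(2 * np.pi * 0.05 * t)   # a 0.05 Hz fluctuation, amplitude 1
print(round(alff(slow, tr), 3))       # ≈ 0.029 (the peak averaged over the band)
```

Voxel-wise maps of this quantity over WM tracts are what enter the group × sex ANOVA and the SVM classifier.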
