
Equivariant Spatiotemporal Transformers with MDL-Guided Feature Selection for Malignancy Detection in Dynamic PET

Dadashkarimi, M.

medRxiv preprint, Aug 6 2025
Dynamic Positron Emission Tomography (PET) scans offer rich spatiotemporal data for detecting malignancies, but their high dimensionality and noise pose significant challenges. We introduce a novel framework, the Equivariant Spatiotemporal Transformer with MDL-Guided Feature Selection (EST-MDL), which integrates group-theoretic symmetries, Kolmogorov complexity, and Minimum Description Length (MDL) principles. By enforcing spatial and temporal symmetries (e.g., translations and rotations) and leveraging MDL for robust feature selection, our model achieves improved generalization and interpretability. Evaluated on three real-world PET datasets--LUNG-PET, BRAIN-PET, and BREAST-PET--our approach achieves AUCs of 0.94, 0.92, and 0.95, respectively, outperforming CNNs, Vision Transformers (ViTs), and Graph Neural Networks (GNNs) in AUC, sensitivity, specificity, and computational efficiency. This framework offers a robust, interpretable solution for malignancy detection in clinical settings.
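The abstract does not spell out its MDL objective. As a generic illustration only, a two-part MDL feature-selection score (model cost in bits plus data cost under a Gaussian residual model, searched greedily) might be sketched as follows; the scoring form, function names, and greedy search are assumptions, not the paper's method:

```python
import numpy as np

def mdl_score(X, y, features):
    """Two-part MDL score: model description cost + data cost (in bits).

    Model cost: ~(log2 n)/2 bits per selected coefficient.
    Data cost: (n/2) * log2(RSS/n) bits under a Gaussian residual model.
    """
    n = len(y)
    Xs = X[:, features]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)  # least-squares fit
    rss = float(np.sum((y - Xs @ coef) ** 2))
    model_bits = 0.5 * len(features) * np.log2(n)
    data_bits = 0.5 * n * np.log2(max(rss / n, 1e-12))
    return model_bits + data_bits

def greedy_mdl_select(X, y):
    """Greedily add the feature that most lowers the total description length."""
    remaining = list(range(X.shape[1]))
    selected, best = [], np.inf
    while remaining:
        score, f = min((mdl_score(X, y, selected + [f]), f) for f in remaining)
        if score >= best:
            break  # no remaining feature shortens the description
        best = score
        selected.append(f)
        remaining.remove(f)
    return selected
```

On synthetic data where the target depends on one feature, the greedy search keeps that feature and rejects noise features whose coding cost exceeds their residual reduction.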

UNISELF: A Unified Network with Instance Normalization and Self-Ensembled Lesion Fusion for Multiple Sclerosis Lesion Segmentation

Jinwei Zhang, Lianrui Zuo, Blake E. Dewey, Samuel W. Remedios, Yihao Liu, Savannah P. Hays, Dzung L. Pham, Ellen M. Mowry, Scott D. Newsome, Peter A. Calabresi, Aaron Carass, Jerry L. Prince

arXiv preprint, Aug 6 2025
Automated segmentation of multiple sclerosis (MS) lesions using multicontrast magnetic resonance (MR) images improves efficiency and reproducibility compared to manual delineation, with deep learning (DL) methods achieving state-of-the-art performance. However, these DL-based methods have yet to simultaneously optimize in-domain accuracy and out-of-domain generalization when trained on a single source with limited data, or their performance has been unsatisfactory. To fill this gap, we propose a method called UNISELF, which achieves high accuracy within a single training domain while demonstrating strong generalizability across multiple out-of-domain test datasets. UNISELF employs a novel test-time self-ensembled lesion fusion to improve segmentation accuracy, and leverages test-time instance normalization (TTIN) of latent features to address domain shifts and missing input contrasts. Trained on the ISBI 2015 longitudinal MS segmentation challenge training dataset, UNISELF ranks among the best-performing methods on the challenge test dataset. Additionally, UNISELF outperforms all benchmark methods trained on the same ISBI training data across diverse out-of-domain test datasets, including the public MICCAI 2016 and UMCL datasets as well as a private multisite dataset; these test datasets exhibit domain shifts and/or missing contrasts caused by variations in acquisition protocols, scanner types, and imaging artifacts arising from imperfect acquisition. Our code is available at https://github.com/uponacceptance.
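TTIN recomputes normalization statistics from each test sample itself rather than reusing training-set statistics. A minimal NumPy sketch of that idea (the function name and the per-sample, per-channel axis convention are our assumptions, not taken from the paper):

```python
import numpy as np

def instance_normalize(feats, eps=1e-5):
    """Instance normalization of latent feature maps at test time.

    feats: array of shape (batch, channels, *spatial). Mean and standard
    deviation are computed per sample and per channel over the spatial
    axes, so each test case is rescaled by its own statistics -- which is
    what makes the scheme applicable under domain shift.
    """
    axes = tuple(range(2, feats.ndim))
    mean = feats.mean(axis=axes, keepdims=True)
    std = feats.std(axis=axes, keepdims=True)
    return (feats - mean) / (std + eps)
```

After normalization, every channel of every sample has approximately zero mean and unit variance regardless of the scanner- or protocol-specific intensity scale of the input.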

Artificial Intelligence and Extended Reality in TAVR: Current Applications and Challenges.

Skalidis I, Sayah N, Benamer H, Amabile N, Laforgia P, Champagne S, Hovasse T, Garot J, Garot P, Akodad M

PubMed paper, Aug 6 2025
Integration of AI and XR in TAVR is revolutionizing the management of severe aortic stenosis by enhancing diagnostic accuracy, risk stratification, and pre-procedural planning. Advanced algorithms now facilitate precise electrocardiographic, echocardiographic, and CT-based assessments that reduce observer variability and enable patient-specific risk prediction. Immersive XR technologies, including augmented, virtual, and mixed reality, improve spatial visualization of complex cardiac anatomy and support real-time procedural guidance. Despite these advancements, standardized protocols, regulatory frameworks, and ethical safeguards remain necessary for widespread clinical adoption.

TRI-PLAN: A deep learning-based automated assessment framework for right heart assessment in transcatheter tricuspid valve replacement planning.

Yang T, Wang Y, Zhu G, Liu W, Cao J, Liu Y, Lu F, Yang J

PubMed paper, Aug 6 2025
Efficient and accurate preoperative assessment of the right-sided heart structural complex (RSHSc) is crucial for planning transcatheter tricuspid valve replacement (TTVR). However, current manual methods remain time-consuming and inconsistent. To address this unmet clinical need, this study aimed to develop and validate TRI-PLAN, the first fully automated, deep learning (DL)-based framework for pre-TTVR assessment. A total of 140 preprocedural computed tomography angiography (CTA) scans (63,962 slices) from patients with severe tricuspid regurgitation (TR) at two high-volume cardiac centers in China were retrospectively included. The patients were divided into a training cohort (n = 100), an internal validation cohort (n = 20), and an external validation cohort (n = 20). TRI-PLAN was built around a dual-stage right heart assessment network (DRA-Net) that segments the RSHSc and localizes the tricuspid annulus (TA), followed by automated measurement of key anatomical parameters and right ventricular ejection fraction (RVEF). Performance was comprehensively evaluated in terms of accuracy, interobserver benchmark comparison, clinical usability, and workflow efficiency. TRI-PLAN achieved expert-level segmentation accuracy (volumetric Dice 0.952/0.955; surface Dice 0.934/0.940), precise localization (standard deviation 1.18/1.14 mm), excellent measurement agreement (ICC 0.984/0.979), and reliable RVEF evaluation (R = 0.97, bias < 5%) across internal and external cohorts. In addition, TRI-PLAN obtained a direct acceptance rate of 80% and reduced total assessment time from 30 min manually to under 2 min (>95% time saving). TRI-PLAN provides an accurate, efficient, and clinically applicable solution for pre-TTVR assessment, with strong potential to streamline TTVR planning and enhance procedural outcomes.
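The volumetric Dice reported above is the standard overlap ratio between predicted and reference binary masks; a minimal sketch (the empty-mask convention is an assumption on our part):

```python
import numpy as np

def volumetric_dice(pred, gt):
    """Volumetric Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

Surface Dice, also quoted in the abstract, applies the same ratio to boundary voxels within a tolerance rather than to whole volumes.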

Development of a deep learning based approach for multi-material decomposition in spectral CT: a proof of principle in silico study.

Rajagopal JR, Rapaka S, Farhadi F, Abadi E, Segars WP, Nowak T, Sharma P, Pritchard WF, Malayeri A, Jones EC, Samei E, Sahbaee P

PubMed paper, Aug 6 2025
Conventional approaches to material decomposition in spectral CT face challenges related to precise algorithm calibration across imaged conditions and low signal quality caused by variable object size and reduced dose. In this proof-of-principle study, a deep learning approach to multi-material decomposition was developed to quantify iodine, gadolinium, and calcium in spectral CT. A dual-phase network architecture was trained using synthetic datasets containing computational models of cylindrical and virtual patient phantoms. Classification and quantification performance was evaluated across a range of patient size and dose parameters. The model was found to accurately classify (accuracy: 98% for cylinders, 97% for virtual patients) and quantify materials (mean absolute percentage difference: 8-10% for cylinders, 10-15% for virtual patients) in both datasets. Performance in virtual patient phantoms improved as the hybrid training dataset included a larger contingent of virtual patient phantoms (accuracy: 48% with 0 virtual patients to 97% with 8 virtual patients). For both datasets, the algorithm was able to maintain strong performance under challenging conditions of large patient size and reduced dose. This study shows the validity of a deep learning-based approach to multi-material decomposition trained with in silico images that can overcome the limitations of conventional material decomposition approaches.
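The "mean absolute percentage difference" used to grade the material quantifications can be computed as follows; this is a generic formula sketch, not the authors' exact implementation:

```python
import numpy as np

def mape(measured, truth):
    """Mean absolute percentage difference between estimates and ground truth."""
    m, t = np.asarray(measured, float), np.asarray(truth, float)
    return float(np.mean(np.abs(m - t) / np.abs(t)) * 100.0)
```

For example, estimates of 9 and 11 against a true value of 10 give a mean absolute percentage difference of 10%.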

Assessing the spatial relationship between mandibular third molars and the inferior alveolar canal using a deep learning-based approach: a proof-of-concept study.

Lyu W, Lou S, Huang J, Huang Z, Zheng H, Liao H, Qiao Y, OuYang K

PubMed paper, Aug 6 2025
The distance between the mandibular third molar (M3) and the mandibular canal (MC) is a key factor in assessing the risk of injury to the inferior alveolar nerve (IAN). However, existing deep learning systems have not yet been able to accurately quantify the M3-MC distance in 3D space. The aim of this study was to develop and validate a deep learning-based system for accurate measurement of M3-MC spatial relationships in cone-beam computed tomography (CBCT) images and to evaluate its accuracy against conventional methods. We propose an innovative approach for low-resource environments, using DeepLabV3+ for semantic segmentation of CBCT-extracted 2D images, followed by multi-category 3D reconstruction and visualization. Based on the reconstructed model, we applied the KD-Tree algorithm to measure the minimum spatial distance between M3 and MC. Through internal validation with randomly selected CBCT images, we compared the AI system and conventional on-CBCT measurement methods against the gold standard established by senior experts. Statistical analysis was performed using one-way ANOVA with Tukey HSD post-hoc tests (p < 0.05), employing multiple error metrics for comprehensive evaluation. One-way ANOVA revealed significant differences among measurement methods. Subsequent Tukey HSD post-hoc tests showed significant differences between the AI reconstruction model and conventional methods. Against the gold standard, the AI system achieved a mean error (ME) of 0.19, a mean absolute error (MAE) of 0.18, a mean square error (MSE) of 0.69, a root mean square error (RMSE) of 0.83, and a coefficient of determination (R<sup>2</sup>) of 0.96 (p < 0.01). These results indicate that the proposed AI system is highly accurate and reliable in M3-MC distance measurement and provides a powerful tool for preoperative risk assessment of M3 extraction.
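The KD-Tree minimum-distance step can be illustrated with SciPy's `cKDTree`, assuming the reconstructed M3 and MC surfaces are available as 3D point clouds (the function name and point-cloud representation are our assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def min_surface_distance(points_a, points_b):
    """Minimum Euclidean distance between two 3D point clouds.

    Builds a k-d tree on one cloud and queries the nearest neighbour of
    every point in the other, mirroring the KD-Tree step used to measure
    the M3-MC gap on reconstructed surfaces.
    """
    tree = cKDTree(points_b)
    dists, _ = tree.query(points_a, k=1)
    return float(dists.min())
```

The k-d tree makes each nearest-neighbour query logarithmic in the number of surface points, which matters when both reconstructed meshes contain tens of thousands of vertices.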

Towards Globally Predictable k-Space Interpolation: A White-box Transformer Approach

Chen Luo, Qiyu Jin, Taofeng Xie, Xuemei Wang, Huayu Wang, Congcong Liu, Liming Tang, Guoqing Chen, Zhuo-Xu Cui, Dong Liang

arXiv preprint, Aug 6 2025
Interpolating missing data in k-space is essential for accelerating imaging. However, existing methods, including convolutional neural network-based deep learning, primarily exploit local predictability while overlooking the inherent global dependencies in k-space. Recently, Transformers have demonstrated remarkable success in natural language processing and image analysis due to their ability to capture long-range dependencies. This inspires the use of Transformers for k-space interpolation to better exploit its global structure. However, their lack of interpretability raises concerns regarding the reliability of interpolated data. To address this limitation, we propose GPI-WT, a white-box Transformer framework based on Globally Predictable Interpolation (GPI) for k-space. Specifically, we formulate GPI from the perspective of annihilation as a novel k-space structured low-rank (SLR) model. The global annihilation filters in the SLR model are treated as learnable parameters, and the subgradients of the SLR model naturally induce a learnable attention mechanism. By unfolding the subgradient-based optimization algorithm of SLR into a cascaded network, we construct the first white-box Transformer specifically designed for accelerated MRI. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in k-space interpolation accuracy while providing superior interpretability.
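The annihilation idea behind the SLR model can be shown on a toy one-exponential signal, where a length-2 filter learned from observed samples predicts a missing one. Real SLR methods estimate multi-channel filters from Hankel matrices, so this is only an idealized, noise-free sketch:

```python
import numpy as np

# A single complex exponential x[n] = r**n is annihilated by the filter
# [1, -r], i.e. x[n+1] - r*x[n] = 0. Learning r from observed samples
# therefore lets us predict a missing one.
r = 0.9 * np.exp(1j * 0.7)
x = r ** np.arange(16)          # fully sampled "k-space" line
missing = 10                    # pretend this sample was not acquired

# Least-squares estimate of r from consecutive fully observed pairs
pairs = [(n, n + 1) for n in range(15) if missing not in (n, n + 1)]
a = np.array([x[i] for i, _ in pairs])
b = np.array([x[j] for _, j in pairs])
r_hat = (a.conj() @ b) / (a.conj() @ a)

# The learned annihilation relation interpolates the missing sample
x_hat = r_hat * x[missing - 1]
```

In the paper's framework the annihilation filters are global and learnable, and unrolling the SLR optimization yields the attention-like white-box Transformer layers.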

Predictive Modeling of Osteonecrosis of the Femoral Head Progression Using MobileNetV3_Large and Long Short-Term Memory Network: Novel Approach.

Kong G, Zhang Q, Liu D, Pan J, Liu K

PubMed paper, Aug 6 2025
The assessment of osteonecrosis of the femoral head (ONFH) often presents challenges in accuracy and efficiency. Traditional methods rely on imaging studies and clinical judgment, prompting the need for advanced approaches. This study aims to use deep learning algorithms to enhance disease assessment and prediction in ONFH, optimizing treatment strategies. The primary objective of this research is to analyze pathological images of ONFH using advanced deep learning algorithms to evaluate treatment response, vascular reconstruction, and disease progression. By identifying the most effective algorithm, this study seeks to equip clinicians with precise tools for disease assessment and prediction. Magnetic resonance imaging (MRI) data from 30 patients diagnosed with ONFH were collected, totaling 1200 slices, which included 675 slices with lesions and 225 normal slices. The dataset was divided into training (630 slices), validation (135 slices), and test (135 slices) sets. A total of 10 deep learning algorithms were tested for training and optimization, and MobileNetV3_Large was identified as the optimal model for subsequent analyses. This model was applied for quantifying vascular reconstruction, evaluating treatment responses, and assessing lesion progression. In addition, a long short-term memory (LSTM) model was integrated for the dynamic prediction of time-series data. The MobileNetV3_Large model demonstrated an accuracy of 96.5% (95% CI 95.1%-97.8%) and a recall of 94.8% (95% CI 93.2%-96.4%) in ONFH diagnosis, significantly outperforming DenseNet201 (87.3%; P<.05). Quantitative evaluation of treatment responses showed that vascularized bone grafting resulted in an average increase of 12.4 mm in vascular length (95% CI 11.2-13.6 mm; P<.01) and an increase of 2.7 in branch count (95% CI 2.3-3.1; P<.01) among the 30 patients. The model achieved an AUC of 0.92 (95% CI 0.90-0.94) for predicting lesion progression, outperforming traditional methods like ResNet50 (AUC=0.85; P<.01). Predictions were consistent with clinical observations in 92.5% of cases (24/26). The application of deep learning algorithms in examining treatment response, vascular reconstruction, and disease progression in ONFH presents notable advantages. This study offers clinicians a precise tool for disease assessment and highlights the significance of using advanced technological solutions in health care practice.
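The LSTM used for dynamic time-series prediction consumes per-timepoint features (here, those would come from the MobileNetV3_Large backbone). A minimal NumPy forward pass of a standard LSTM cell, where the weight shapes and gate ordering are conventions we assume rather than details from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq, W, U, b):
    """Minimal LSTM forward pass over a feature sequence.

    x_seq: (T, d_in), e.g. per-visit image features from a CNN backbone.
    W: (4*d_h, d_in), U: (4*d_h, d_h), b: (4*d_h,), gate order i, f, g, o.
    Standard gate equations:
      i = sigmoid(Wi x + Ui h + bi), f = sigmoid(...),
      g = tanh(...),                 o = sigmoid(...)
      c' = f*c + i*g,  h' = o*tanh(c')
    Returns the final hidden state, which a linear head could map to a
    progression score.
    """
    d_h = W.shape[0] // 4
    h, c = np.zeros(d_h), np.zeros(d_h)
    for x in x_seq:
        z = W @ x + U @ h + b
        i = sigmoid(z[:d_h])
        f = sigmoid(z[d_h:2 * d_h])
        g = np.tanh(z[2 * d_h:3 * d_h])
        o = sigmoid(z[3 * d_h:])
        c = f * c + i * g        # cell state carries long-range memory
        h = o * np.tanh(c)       # hidden state summarizes the sequence so far
    return h
```

Because the output gate multiplies a tanh, every component of the returned hidden state lies strictly inside (-1, 1).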

Deep learning-based radiomics does not improve residual cancer burden prediction post-chemotherapy in LIMA breast MRI trial.

Janse MHA, Janssen LM, Wolters-van der Ben EJM, Moman MR, Viergever MA, van Diest PJ, Gilhuijs KGA

PubMed paper, Aug 6 2025
This study aimed to evaluate the potential additional value of deep radiomics for assessing residual cancer burden (RCB) in locally advanced breast cancer, after neoadjuvant chemotherapy (NAC) but before surgery, compared to standard predictors: tumor volume and subtype. This retrospective study used a 105-patient single-institution training set and a 41-patient external test set from three institutions in the LIMA trial. DCE-MRI was performed before and after NAC, and RCB was determined post-surgery. Three networks (nnU-Net, Attention U-Net, and vector-quantized encoder-decoder) were trained for tumor segmentation. For each network, deep features were extracted from the bottleneck layer and used to train random forest regression models to predict RCB score. Models were compared to (1) a model trained on tumor volume and (2) a model combining tumor volume and subtype. The potential complementary performance of combining deep radiomics with a clinical-radiological model was assessed. From the predicted RCB score, three metrics were calculated: area under the curve (AUC) for categories RCB-0/RCB-I versus RCB-II/III, pathological complete response (pCR) versus non-pCR, and Spearman's correlation. Deep radiomics models had AUCs of 0.68-0.74 for pCR and 0.68-0.79 for RCB, while the volume-only model had AUCs of 0.74 and 0.70 for pCR and RCB, respectively. Spearman's correlation varied from 0.45-0.51 (deep radiomics) to 0.53 (combined model). No statistical difference between models was observed. Segmentation network-derived deep radiomics contain similar information to tumor volume and subtype for inferring pCR and RCB after NAC, but do not complement standard clinical predictors in the LIMA trial.
Question: It is unknown if and which deep radiomics approach is most suitable to extract relevant features to assess neoadjuvant chemotherapy response on breast MRI.
Findings: Radiomic features extracted from deep-learning networks yield similar results in predicting neoadjuvant chemotherapy response as tumor volume and subtype in the LIMA study. However, they do not provide complementary information.
Clinical relevance: For predicting response to neoadjuvant chemotherapy in breast cancer patients, tumor volume on MRI and subtype remain important predictors of treatment outcome; deep radiomics might be an alternative when determining tumor volume and/or subtype is not feasible.
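The AUCs quoted above are ROC AUCs over binary outcome labels (pCR vs. non-pCR, RCB-0/I vs. RCB-II/III). The rank-based (Mann-Whitney) formulation can be sketched in plain NumPy; the function name is ours:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC as P(score_pos > score_neg) + 0.5 * P(tie).

    Equivalent to the Mann-Whitney U statistic normalized by the number
    of positive-negative pairs; suitable for small cohorts like these.
    """
    s, y = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    gt = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    eq = (pos[:, None] == neg[None, :]).sum()  # tied pairs count half
    return (gt + 0.5 * eq) / (len(pos) * len(neg))
```

For instance, scores [0.1, 0.4, 0.35, 0.8] with labels [0, 0, 1, 1] order three of the four positive-negative pairs correctly, giving an AUC of 0.75.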

Automated detection of zygomatic fractures on spiral computed tomography using a deep learning model.

Yari A, Fasih P, Kamali Hakim L, Asadi A

PubMed paper, Aug 6 2025
The aim of this study was to evaluate the performance of the YOLOv8 deep learning model for detecting zygomatic fractures. Computed tomography scans with zygomatic fractures were collected, with all slices annotated to identify fracture lines across seven categories: zygomaticomaxillary suture, zygomatic arch, zygomaticofrontal suture, sphenozygomatic suture, orbital floor, zygomatic body, and maxillary sinus wall. The images were divided into training, validation, and test datasets in a 6:2:2 ratio. Performance metrics were calculated for each category. A total of 13,988 axial and 14,107 coronal slices were retrieved. The trained algorithm achieved an accuracy of 94.2-97.9%. Recall exceeded 90% across all categories, with sphenozygomatic suture fractures having the highest value (96.6%). Average precision was highest for zygomatic arch fractures (0.827) and lowest for zygomatic body fractures (0.692). The highest F1 score was 96.7% for zygomaticomaxillary suture fractures, and the lowest was 82.1% for zygomatic body fractures. Area under the curve (AUC) values were also highest for the zygomaticomaxillary suture (0.943) and lowest for zygomatic body fractures (0.876). The YOLOv8 model demonstrated promising results in the automated detection of zygomatic fractures, achieving the highest performance in identifying fractures of the zygomaticomaxillary suture and zygomatic arch.
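The per-category recall, precision, and F1 scores reported above follow the usual detection-count definitions, which can be sketched as:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from per-category detection counts.

    tp: true positive detections, fp: false positives, fn: missed fractures.
    F1 is the harmonic mean of precision and recall.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, 90 true positives with 10 false positives and 10 missed fractures yield precision, recall, and F1 of 0.9 each.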