Sen RD, McGrath MC, Shenoy VS, Meyer RM, Park C, Fong CT, Lele AV, Kim LJ, Levitt MR, Wang LL, Sekhar LN

PubMed · Jul 24 2025
The goal of this study was to develop a highly precise, dynamic machine learning model centered on daily transcranial Doppler ultrasound (TCD) data to predict angiographic vasospasm (AV) in the context of aneurysmal subarachnoid hemorrhage (aSAH). A retrospective review of patients with aSAH treated at a single institution was performed. The primary outcome was AV, defined as angiographic narrowing of any intracranial artery at any time point during admission from the time of risk assessment. Standard demographic, clinical, and radiographic data were collected. Quantitative data including mean arterial pressure, cerebral perfusion pressure, daily serum sodium, and hourly ventriculostomy output were collected. Detailed daily TCD data of intracranial arteries, including maximum velocities, pulsatility indices, and Lindegaard ratios, were collected. Three predictive machine learning models were created and compared: a static multivariate logistic regression model based on data collected on the date of admission (Baseline Model; BM), a standard TCD model using middle cerebral artery flow velocity and Lindegaard ratio measurements (SM), and a long short-term memory (LSTM) model using all data trended through the hospitalization. A total of 424 patients with aSAH were reviewed, 78 of whom developed AV. In predicting AV at any time point in the future, the LSTM model had the highest precision (0.571) and accuracy (0.776), whereas the SM had the highest overall performance with an F1 score of 0.566. In predicting AV within 5 days, the LSTM continued to have the highest precision (0.488) and accuracy (0.803). After an ablation test removing all non-TCD elements, the LSTM model improved to a precision of 0.824. Longitudinal TCD data can be used to create a dynamic machine learning model with higher precision than static TCD measurements for predicting AV after aSAH.
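
A minimal sketch of what such a longitudinal model could look like (not the authors' implementation): an LSTM classifier in PyTorch over per-patient sequences of daily TCD feature vectors. The feature count, hidden size, and class names below are placeholders.

```python
# Minimal sketch (not the authors' code): an LSTM classifier over daily TCD
# feature sequences, assuming each patient is a (days, features) array with
# e.g. MCA velocities, pulsatility indices, and Lindegaard ratios per day.
import torch
import torch.nn as nn

class TCDVasospasmLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # probability of angiographic vasospasm

    def forward(self, x):            # x: (batch, days, n_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (num_layers, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1]))  # (batch, 1)

model = TCDVasospasmLSTM(n_features=8)
daily_tcd = torch.randn(4, 10, 8)    # 4 patients, 10 hospital days, 8 features
risk = model(daily_tcd)              # per-patient vasospasm probability
```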

Wu CH, Murphy MC, Chiang CC, Chen ST, Chung CP, Lirng JF, Luo CB, Rossman PJ, Ehman RL, Huston J, Chang FC

PubMed · Jul 24 2025
Percutaneous transluminal angioplasty and stenting (PTAS) in patients with carotid stenosis may affect the brain parenchyma. However, studies on parenchymal changes are scarce because advanced imaging modalities are required, and the alterations in brain parenchyma following PTAS remain an unresolved issue. To investigate changes to the brain parenchyma using magnetic resonance elastography (MRE). Prospective. Thirteen patients (6 women and 7 men; 39 MRI sessions) with severe unilateral carotid stenosis indicated for PTAS were recruited between 2021 and 2024. Noncontrast MRI sequences including MRE (spin echo) were acquired using 3 T scanners. All patients underwent MRE before (preprocedural), within 24 h of (early postprocedural), and 3 months after (delayed postprocedural) PTAS. Preprocedural and delayed postprocedural ultrasonographic peak systolic velocity (PSV) was recorded. MRE stiffness and damping ratio were evaluated via neural network inversion for the whole brain and in 14 gray matter (GM) and 12 white matter (WM) regions. Stiffness and damping ratio differences between each pair of MR sessions for each subject were assessed with paired-sample t-tests. Correlations of stiffness and damping ratio with stenosis grade and ultrasonographic PSV dynamics were evaluated with Pearson correlation coefficients. Statistical significance was defined as p < 0.05. The stiffness of the lesion-side insula, deep GM, and deep WM increased significantly from preprocedural to delayed postprocedural MRE. The increase in lesion-side deep GM stiffness was significantly positively correlated with DSA stenosis grade (r = 0.609). Lesion-side insula stiffness increments were significantly positively correlated with PSV decrements (r = 0.664). Regional brain stiffness increased 3 months after PTAS. Lesion-side stiffness increases were positively correlated with stenosis grade in the deep GM and with PSV decrements in the insula. Evidence Level: 2. Stage 2.
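
For illustration only, the two statistical analyses named above (paired-sample t-tests between MRE sessions and Pearson correlations with PSV changes) can be run with SciPy; the per-region stiffness and PSV values below are hypothetical, not study data.

```python
# Illustrative sketch only: paired comparison and correlation analyses on
# hypothetical per-patient stiffness values (13 patients, one region).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_stiffness = rng.normal(2.9, 0.2, size=13)                    # kPa, preprocedural
post_stiffness = pre_stiffness + rng.normal(0.1, 0.1, size=13)   # delayed postprocedural

# Paired-sample t-test between MRE sessions
t_stat, p_value = stats.ttest_rel(post_stiffness, pre_stiffness)

# Pearson correlation between stiffness change and PSV decrement
stiffness_change = post_stiffness - pre_stiffness
psv_decrement = rng.normal(50, 20, size=13)                      # cm/s, hypothetical
r, p_corr = stats.pearsonr(stiffness_change, psv_decrement)
print(f"paired t-test p={p_value:.3f}, Pearson r={r:.2f} (p={p_corr:.3f})")
```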

Yeghaian M, Piek MW, Bartels-Rutten A, Abdelatty MA, Herrero-Huertas M, Vogel WV, de Boer JP, Hartemink KJ, Bodalal Z, Beets-Tan RGH, Trebeschi S, van der Ploeg IMC

PubMed · Jul 24 2025
Thyroid incidentalomas (TIs) are incidental thyroid lesions detected on fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) scans. This study investigates the role of noninvasive PET/CT-derived radiomic features in characterizing 18F-FDG PET/CT TIs and distinguishing benign from malignant thyroid lesions in oncological patients. We included 46 patients with PET/CT TIs who underwent thyroid ultrasound and thyroid surgery at our oncological referral hospital. Radiomic features were extracted from regions of interest (ROIs) in both PET and CT images and analyzed for their association with thyroid cancer and their predictive ability. The TIs were graded using the ultrasound TIRADS classification, and histopathological results served as the reference standard. Univariate and multivariate analyses were performed using features from each modality individually and combined, and the performance of radiomic features was compared with the TIRADS classification. Among the 46 included patients, 36 (78%) had malignant thyroid lesions and 10 (22%) had benign lesions. The combined run-length nonuniformity radiomic feature from PET and CT cubical ROIs demonstrated the highest area under the curve (AUC) of 0.88 (P < 0.05), with a negative correlation with malignancy. This performance was comparable to the TIRADS classification (AUC: 0.84, P < 0.05), which showed a positive correlation with thyroid cancer. Multivariate analysis showed higher predictive performance using CT-derived radiomics (AUC: 0.86 ± 0.13) compared with TIRADS (AUC: 0.80 ± 0.08). This study highlights the potential of 18F-FDG PET/CT-derived radiomics to distinguish benign from malignant thyroid lesions. Further studies with larger cohorts and deep learning-based methods could yield more robust results.
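
As a hedged sketch of the univariate evaluation described above, a single radiomic feature can be scored against histopathology labels with ROC AUC; the feature values and class balance below are synthetic placeholders, not study data.

```python
# Sketch: ROC AUC for one radiomic feature versus malignancy labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = np.r_[np.ones(36), np.zeros(10)]     # 36 malignant, 10 benign lesions
# Run-length nonuniformity was reported as negatively correlated with malignancy,
# so here the synthetic feature is lower for malignant cases and we score with
# its negation.
run_length_nonuniformity = rng.normal(loc=-1.0 * labels, scale=1.0)
auc = roc_auc_score(labels, -run_length_nonuniformity)
print(f"AUC = {auc:.2f}")
```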

Xingyu Qiu, Mengying Yang, Xinghua Ma, Dong Liang, Yuzhen Li, Fanding Li, Gongning Luo, Wei Wang, Kuanquan Wang, Shuo Li

arXiv preprint · Jul 24 2025
EDM elucidates the unified design space of diffusion models, yet its fixed noise pattern, restricted to pure Gaussian noise, limits advances in image restoration. Our study indicates that forcibly injecting Gaussian noise corrupts the degraded images, overextends the image transformation distance, and increases restoration complexity. To address this problem, our proposed EDA Elucidates the Design space of Arbitrary-noise-based diffusion models. Theoretically, EDA expands the freedom of the noise pattern while preserving the original module flexibility of EDM, with rigorous proof that increased noise complexity incurs no additional computational overhead during restoration. EDA is validated on three typical tasks: MRI bias field correction (global smooth noise), CT metal artifact reduction (global sharp noise), and natural image shadow removal (local boundary-aware noise). With only 5 sampling steps, EDA outperforms most task-specific methods and achieves state-of-the-art performance in bias field correction and shadow removal.
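
A conceptual sketch (our own simplification, not the EDA formulation) of the contrast between injecting pure Gaussian noise and injecting an arbitrary, task-shaped corruption such as a smooth low-frequency field in a forward perturbation step:

```python
# Conceptual sketch only: swap white Gaussian noise for a structured,
# low-frequency perturbation reminiscent of an MRI bias field.
import numpy as np

def gaussian_forward(x, sigma, rng):
    return x + sigma * rng.standard_normal(x.shape)

def smooth_field_forward(x, sigma, rng):
    # low-frequency "noise": a coarse random grid upsampled to image size
    coarse = rng.standard_normal((8, 8))
    field = np.kron(coarse, np.ones((x.shape[0] // 8, x.shape[1] // 8)))
    return x + sigma * field

rng = np.random.default_rng(0)
image = rng.random((64, 64))
corrupted = smooth_field_forward(image, sigma=0.5, rng=rng)
```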

Yilong Hu, Shijie Chang, Lihe Zhang, Feng Tian, Weibing Sun, Huchuan Lu

arXiv preprint · Jul 24 2025
The Diffusion Probabilistic Model (DPM) has demonstrated remarkable performance across a variety of generative tasks. The inherent randomness in diffusion models helps address issues such as blurring at the edges of medical images and labels, positioning DPMs as a promising approach for lesion segmentation. However, we find that the current training and inference strategies of diffusion models result in an uneven distribution of attention across different timesteps, leading to longer training times and suboptimal solutions. To this end, we propose UniSegDiff, a novel diffusion framework designed to address lesion segmentation in a unified manner across multiple modalities and organs. The framework introduces a staged training and inference approach that dynamically adjusts the prediction targets at different stages, forcing the model to maintain high attention across all timesteps, and achieves unified lesion segmentation by pre-training the feature extraction network for segmentation. We evaluate performance on six different organs across various imaging modalities. Comprehensive experimental results demonstrate that UniSegDiff significantly outperforms previous state-of-the-art (SOTA) approaches. The code is available at https://github.com/HUYILONG-Z/UniSegDiff.
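
A rough sketch of the staged-training idea, with the stage boundaries and target blending chosen purely for illustration (the paper's actual schedule is not specified here):

```python
# Sketch: switch the regression target with the diffusion timestep so no
# stage of the schedule is under-trained. All thresholds are assumptions.
import torch

T = 1000  # total diffusion timesteps (assumption)

def stage_target(t: int, noise: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Pick the regression target for timestep t."""
    if t > 2 * T // 3:       # early (very noisy) stage: predict the clean mask
        return mask
    elif t > T // 3:         # middle stage: blend targets (hypothetical choice)
        w = (t - T // 3) / (T // 3)
        return w * mask + (1 - w) * noise
    else:                    # late (nearly clean) stage: predict the noise
        return noise
```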

Francesco Dalmonte, Emirhan Bayar, Emre Akbas, Mariana-Iuliana Georgescu

arXiv preprint · Jul 24 2025
Anomaly detection in medical images is an important yet challenging task due to the diversity of possible anomalies and the practical impossibility of collecting comprehensively annotated data sets. In this work, we tackle unsupervised medical anomaly detection by proposing a modernized autoencoder-based framework, the Q-Former Autoencoder, that leverages state-of-the-art pretrained vision foundation models such as DINO, DINOv2, and Masked Autoencoder. Instead of training encoders from scratch, we directly use frozen vision foundation models as feature extractors, enabling rich, multi-stage, high-level representations without domain-specific fine-tuning. We use the Q-Former architecture as the bottleneck, which enables control over the length of the reconstruction sequence while efficiently aggregating multiscale features. Additionally, we incorporate a perceptual loss computed using features from a pretrained Masked Autoencoder, guiding the reconstruction towards semantically meaningful structures. Our framework is evaluated on four diverse medical anomaly detection benchmarks, achieving state-of-the-art results on BraTS2021, RESC, and RSNA. Our results highlight the potential of vision foundation model encoders, pretrained on natural images, to generalize effectively to medical image analysis tasks without further fine-tuning. We release the code and models at https://github.com/emirhanbayar/QFAE.
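
A minimal sketch of the underlying scoring idea, assuming a frozen pretrained encoder that returns one feature vector per image; the Q-Former bottleneck is replaced here by a tiny MLP purely for illustration:

```python
# Sketch (not the paper's architecture): frozen foundation-model features are
# reconstructed by a small trainable bottleneck, and the anomaly score is the
# per-image feature reconstruction error.
import torch
import torch.nn as nn

class FeatureReconstructionAD(nn.Module):
    def __init__(self, frozen_encoder: nn.Module, feat_dim: int, bottleneck: int = 32):
        super().__init__()
        self.encoder = frozen_encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)            # foundation model stays frozen
        self.autoencoder = nn.Sequential(      # stand-in for the Q-Former bottleneck
            nn.Linear(feat_dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, feat_dim)
        )

    def anomaly_score(self, x):
        with torch.no_grad():
            feats = self.encoder(x)                   # assumed shape: (batch, feat_dim)
        recon = self.autoencoder(feats)
        return ((recon - feats) ** 2).mean(dim=-1)    # higher = more anomalous
```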

Qilin Huang, Tianyu Lin, Zhiguang Chen, Fudan Zheng

arXiv preprint · Jul 24 2025
Leveraging the powerful capabilities of diffusion models has yielded effective results in medical image segmentation tasks. However, existing methods typically transfer the original training process directly, without adjustments specific to segmentation, and the commonly used pre-trained diffusion models still have deficiencies in feature extraction. Based on these considerations, we propose LEAF, a medical image segmentation model grounded in latent diffusion models. During fine-tuning, we replace the original noise-prediction objective with direct prediction of the segmentation map, thereby reducing the variance of segmentation results. We also employ a feature distillation method to align the hidden states of the convolutional layers with the features from a transformer-based vision encoder. Experimental results demonstrate that our method enhances the performance of the original diffusion model across multiple segmentation datasets for different disease types. Notably, our approach does not alter the model architecture, nor does it increase the number of parameters or computation during inference, making it highly efficient.
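
A hedged sketch of the two training signals described above, with the shapes, loss choices, and distillation weight as assumptions rather than the paper's exact configuration:

```python
# Sketch: direct segmentation-map prediction loss plus a feature-distillation
# term aligning U-Net hidden states with a vision encoder's features.
import torch
import torch.nn.functional as F

def leaf_style_loss(pred_mask, gt_mask, unet_hidden, teacher_feats, distill_weight=0.1):
    # predict the segmentation map directly instead of the added noise
    seg_loss = F.binary_cross_entropy_with_logits(pred_mask, gt_mask)
    # align convolutional hidden states with transformer-encoder features
    distill_loss = F.mse_loss(unet_hidden, teacher_feats)
    return seg_loss + distill_weight * distill_loss
```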

Dhruv Jain, Romain Modzelewski, Romain Hérault, Clement Chatelain, Eva Torfeh, Sebastien Thureau

arXiv preprint · Jul 24 2025
In data-scarce scenarios, deep learning models often overfit to noise and irrelevant patterns, which limits their ability to generalize to unseen samples. To address these challenges in medical image segmentation, we introduce Diff-UMamba, a novel architecture that combines the UNet framework with the Mamba mechanism for modeling long-range dependencies. At the heart of Diff-UMamba is a Noise Reduction Module (NRM), which employs a signal differencing strategy to suppress noisy or irrelevant activations within the encoder. This encourages the model to filter out spurious features and enhance task-relevant representations, thereby improving its focus on clinically meaningful regions. As a result, the architecture achieves improved segmentation accuracy and robustness, particularly in low-data settings. Diff-UMamba is evaluated on multiple public datasets, including MSD (lung and pancreas) and AIIB23, demonstrating consistent performance gains of 1-3% over baseline methods across diverse segmentation tasks. To further assess performance under limited-data conditions, additional experiments are conducted on the BraTS-21 dataset by varying the proportion of available training samples. The approach is also validated on a small internal non-small cell lung cancer (NSCLC) dataset for gross tumor volume (GTV) segmentation in cone beam CT (CBCT), where it achieves a 4-5% improvement over the baseline.
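
An illustrative sketch of a signal-differencing noise-reduction block, with the estimator architecture assumed for illustration rather than taken from the paper:

```python
# Sketch only: a lightweight parallel branch estimates the noisy/irrelevant
# component of an encoder feature map, which is then subtracted from the
# main signal.
import torch
import torch.nn as nn

class NoiseReductionModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.noise_estimator = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):                  # feats: (batch, C, H, W)
        noise = self.noise_estimator(feats)    # estimated spurious activations
        return feats - noise                   # differenced, task-relevant signal
```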

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation. Our code will be publicly available upon acceptance.
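
A simple sketch of the parameter-efficiency accounting quoted above (about 0.86% of parameters tuned): count trainable versus total parameters after freezing the backbone and adding adapters.

```python
# Sketch: fraction of trainable parameters in a partially frozen model.
import torch.nn as nn

def trainable_fraction(model: nn.Module) -> float:
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total

# e.g. after freezing SAM's backbone and training only LoRA adapters and the
# learned text-prompt context vectors, one would expect a value near 0.0086.
```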

Kesheng Wang, Xiaoyu Chen, Chunlei He, Fenfen Li, Xinxin Yu, Dexing Kong, Shoujun Huang, Qi Dai

arXiv preprint · Jul 24 2025
In the medical image analysis field, precise quantification of curve tortuosity plays a critical role in the auxiliary diagnosis and pathological assessment of various diseases. In this study, we propose a novel framework for tortuosity quantification and demonstrate its effectiveness through the evaluation of meibomian gland atrophy uniformity, serving as a representative application scenario. We introduce an information entropy-based tortuosity quantification framework that integrates probability modeling with entropy theory and incorporates domain transformation of curve data. Unlike traditional methods such as curvature or the arc-chord ratio, this approach evaluates the tortuosity of a target curve by comparing it to a designated reference curve. Consequently, it is more suitable for tortuosity assessment tasks in medical data where biologically plausible reference curves are available, providing a more robust and objective evaluation metric without relying on idealized straight-line comparisons. First, we conducted numerical simulation experiments to preliminarily assess the stability and validity of the method. Subsequently, the framework was applied to quantify the spatial uniformity of meibomian gland atrophy and to analyze the difference in this uniformity between Demodex-negative and Demodex-positive patient groups. The results demonstrated a significant difference in tortuosity-based uniformity between the two groups, with an area under the curve of 0.8768, sensitivity of 0.75, and specificity of 0.93. These findings highlight the clinical utility of the proposed framework in curve tortuosity analysis and its potential as a generalizable tool for quantitative morphological evaluation in medical diagnostics.
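
A loose sketch of an entropy-based, reference-relative tortuosity score (our own simplification, not the paper's exact formulation): build a probability distribution over turning angles along a discretized curve and compare its Shannon entropy with that of a reference curve.

```python
# Sketch: Shannon entropy of turning angles along a polyline, compared
# against a reference curve instead of an idealized straight line.
import numpy as np

def turning_angle_entropy(points: np.ndarray, bins: int = 18) -> float:
    segs = np.diff(points, axis=0)
    angles = np.arctan2(segs[:, 1], segs[:, 0])
    turns = np.diff(angles)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi          # wrap to (-pi, pi]
    hist, _ = np.histogram(turns, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def relative_tortuosity(curve: np.ndarray, reference: np.ndarray) -> float:
    # positive values: the target curve's turning behavior is more disordered
    # than the reference's
    return turning_angle_entropy(curve) - turning_angle_entropy(reference)
```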