Page 301 of 3323316 results

Fluid fluctuations assessed with artificial intelligence during the maintenance phase impact anti-vascular endothelial growth factor visual outcomes in a multicentre, routine clinical care national age-related macular degeneration database.

Martin-Pinardel R, Izquierdo-Serra J, Bernal-Morales C, De Zanet S, Garay-Aramburu G, Puzo M, Arruabarrena C, Sararols L, Abraldes M, Broc L, Escobar-Barranco JJ, Figueroa M, Zapata MA, Ruiz-Moreno JM, Parrado-Carrillo A, Moll-Udina A, Alforja S, Figueras-Roca M, Gómez-Baldó L, Ciller C, Apostolopoulos S, Mishchuk A, Casaroli-Marano RP, Zarranz-Ventura J

PubMed · May 16, 2025
To evaluate the impact of fluid volume fluctuations, quantified with artificial intelligence in optical coherence tomography scans during the maintenance phase, on visual outcomes at 12 and 24 months in a real-world, multicentre, national cohort of treatment-naïve neovascular age-related macular degeneration (nAMD) eyes. Demographics, visual acuity (VA) and number of injections were collected using the Fight Retinal Blindness tool. Intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), total fluid (TF) and central subfield thickness (CST) were quantified using the RetinAI Discovery tool. Fluctuations were defined as the SD of within-eye quantified values, and eyes were distributed according to SD quartiles for each biomarker. A total of 452 naïve nAMD eyes were included. Eyes with the highest (Q4) versus the lowest (Q1) fluid fluctuations showed significantly worse VA change (months 3-12): IRF -3.91 versus 3.50 letters, PED -4.66 versus 3.29, TF -2.07 versus 2.97 and CST -1.85 versus 2.96 (all p<0.05), but not SRF 0.66 versus 0.93 (p=0.91). Similar VA outcomes were observed at month 24 for PED -8.41 versus 4.98 (p<0.05), TF -7.38 versus 1.89 (p=0.07) and CST -10.58 versus 3.60 (p<0.05). The median number of injections (months 3-24) was significantly higher in Q4 versus Q1 eyes: IRF 9 versus 8, SRF 10 versus 8 and TF 10 versus 8 (all p<0.05). This multicentre study reports a negative effect of fluid volume fluctuations in specific fluid compartments on VA outcomes during the maintenance phase, suggesting that anatomical and functional treatment response patterns may be fluid-specific.
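The fluctuation metric described above is simple to reproduce. A minimal sketch, assuming each eye's fluid volumes are given as a list of maintenance-phase measurements (the data below are illustrative, not the study's):

```python
import numpy as np

def fluctuation_quartiles(volumes_by_eye):
    """Per-eye fluctuation = SD of that eye's quantified fluid volumes
    over the maintenance phase; eyes are then binned into cohort-level
    SD quartiles (Q1 = most stable, Q4 = most fluctuating)."""
    sds = np.array([np.std(v, ddof=1) for v in volumes_by_eye])
    q25, q50, q75 = np.percentile(sds, [25, 50, 75])
    quartiles = np.digitize(sds, [q25, q50, q75]) + 1  # labels 1..4
    return sds, quartiles
```

An eye with stable volumes lands in Q1; one with large swings lands in Q4, matching the grouping the study's VA comparisons are based on.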

UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights

Shijun Liang, Ismail R. Alkhouri, Siddhant Gautam, Qing Qu, Saiprasad Ravishankar

arXiv preprint · May 16, 2025
Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive as the network parameters need to be optimized for each measurement set independently. Moreover, DIP-based methods often overlook the potential of learning a prior using a small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose UGoDIT, an Unsupervised Group DIP via Transferable weights, designed for the low-data regime where only a very small number, M, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and M disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network, where part of the parameters are fixed to the learned weights, while the remaining are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with SOTA DM-based and supervised approaches, despite not requiring large amounts of clean training data.
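The test-time recipe (freeze the transferable weights, optimize the rest for measurement consistency) can be illustrated with a deliberately linear toy model. All names here are illustrative stand-ins, not UGoDIT's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear toy analogue: the "generator" is x = W_dec @ (W_enc @ z).
# W_enc stands in for the transferable weights learned from the M
# training measurement sets and stays frozen; only W_dec is fitted to
# the unseen measurements y by minimizing ||A x - y||^2.
n, m, k = 16, 8, 4
A = rng.normal(size=(m, n))        # sub-sampling forward operator
y = A @ rng.normal(size=n)         # observed degraded measurements
W_enc = rng.normal(size=(k, n))    # frozen "transferable" weights
c = W_enc @ rng.normal(size=n)     # fixed DIP-style input, encoded once
W_dec = np.zeros((n, k))           # test-time-optimized weights

def data_fit(W):
    r = A @ (W @ c) - y
    return float(r @ r)

# safe gradient step size from the operator and code norms
lr = 0.5 / (np.linalg.svd(A, compute_uv=False)[0] ** 2 * (c @ c))
first = data_fit(W_dec)
for _ in range(1000):
    r = A @ (W_dec @ c) - y
    W_dec -= lr * 2 * np.outer(A.T @ r, c)  # gradient of the data fit
```

Because only the decoder half is optimized per measurement set, the fixed shared weights act as the learned prior, which is the division of labor the abstract describes.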

Diff-Unfolding: A Model-Based Score Learning Framework for Inverse Problems

Yuanhao Wang, Shirin Shoushtari, Ulugbek S. Kamilov

arXiv preprint · May 16, 2025
Diffusion models are extensively used for modeling image priors for inverse problems. We introduce \emph{Diff-Unfolding}, a principled framework for learning posterior score functions of \emph{conditional diffusion models} by explicitly incorporating the physical measurement operator into a modular network architecture. Diff-Unfolding formulates posterior score learning as the training of an unrolled optimization scheme, where the measurement model is decoupled from the learned image prior. This design allows our method to generalize across inverse problems at inference time by simply replacing the forward operator without retraining. We theoretically justify our unrolling approach by showing that the posterior score can be derived from a composite model-based optimization formulation. Extensive experiments on image restoration and accelerated MRI show that Diff-Unfolding achieves state-of-the-art performance, improving PSNR by up to 2 dB and reducing LPIPS by $22.7\%$, while being both compact (47M parameters) and efficient (0.72 seconds per $256 \times 256$ image). An optimized C++/LibTorch implementation further reduces inference time to 0.63 seconds, underscoring the practicality of our approach.
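The decoupling described above (a physics step that uses the known operator, a separate prior step) is the classic unrolled-optimization pattern. A minimal sketch in which the learned conditional-score module is replaced by soft-thresholding, an assumption for illustration rather than Diff-Unfolding's actual network:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_recon(A, y, n_iters=500, step=0.1, t=0.005):
    """Alternate a measurement (physics) step with a prior step.
    In Diff-Unfolding the prior step would be the learned conditional
    score network; soft-thresholding stands in for it here."""
    x = A.T @ y                             # initial back-projection
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)    # data-consistency step: uses A only
        x = soft_threshold(x, t)            # prior / learned-module step
    return x
```

Because only the data-consistency step touches `A`, swapping the forward operator at inference changes nothing else, which mirrors the retraining-free generalization the abstract emphasizes.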

Uncertainty quantification for deep learning-based metastatic lesion segmentation on whole body PET/CT.

Schott B, Santoro-Fernandes V, Klanecek Z, Perlman S, Jeraj R

PubMed · May 16, 2025
Deep learning models are increasingly being implemented for automated medical image analysis to inform patient care. Most models, however, lack uncertainty information, without which the reliability of model outputs cannot be ensured. Several uncertainty quantification (UQ) methods exist to capture model uncertainty, yet it is not clear which method is optimal for a given task. The purpose of this work was to investigate several commonly used UQ methods for the critical yet understudied task of metastatic lesion segmentation on whole body PET/CT. Approach: 59 whole body 68Ga-DOTATATE PET/CT images of patients undergoing theranostic treatment of metastatic neuroendocrine tumors were used in this work. A 3D U-Net was trained for lesion segmentation following five-fold cross validation. Uncertainty measures derived from four UQ methods (probability entropy, Monte Carlo dropout, deep ensembles, and test time augmentation) were investigated. Each uncertainty measure was assessed across four quantitative evaluations: (1) its ability to detect artificially degraded image data at low, medium, and high degradation magnitudes; (2) to detect false-positive (FP) predicted regions; (3) to recover false-negative (FN) predicted regions; and (4) to establish correlations with model biomarker extraction and segmentation performance metrics. Results: Probability entropy and test time augmentation respectively achieved the lowest and highest degraded image detection performance at low (AUC=0.54 vs. 0.68), medium (AUC=0.70 vs. 0.82), and high (AUC=0.83 vs. 0.90) degradation magnitudes. For detecting FPs, all UQ methods achieved strong performance, with AUC values ranging narrowly between 0.77 and 0.81. FN region recovery performance was strongest for test time augmentation and weakest for probability entropy. Performance in the correlation analysis was mixed: the strongest performance was achieved by test time augmentation for SUVtotal capture (ρ=0.57) and segmentation Dice coefficient (ρ=0.72), by Monte Carlo dropout for SUVmean capture (ρ=0.35), and by probability entropy for segmentation cross entropy (ρ=0.96). Significance: Overall, test time augmentation demonstrated superior uncertainty quantification performance and is recommended for the metastatic lesion segmentation task. It also offers the advantage of being post hoc and computationally efficient. In contrast, probability entropy performed the worst, highlighting the need for advanced UQ approaches for this task.
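The four UQ methods differ mainly in where the repeated predictions come from (dropout samples, ensemble members, or augmented passes); once a stack of per-pass foreground probabilities exists, the uncertainty map reduces to an entropy computation. A generic sketch, not the paper's implementation:

```python
import numpy as np

def predictive_entropy(p, eps=1e-12):
    """Voxel-wise binary entropy of a foreground probability map."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def sample_uncertainty(prob_stack):
    """Uncertainty from repeated stochastic passes (MC dropout, deep
    ensembles, and test-time augmentation all reduce to this form):
    entropy of the mean prediction across the stack of passes."""
    return predictive_entropy(prob_stack.mean(axis=0))
```

Voxels where the passes disagree (e.g., probabilities 0.1 and 0.9) average toward 0.5 and receive high entropy, while consistently confident voxels receive low entropy, which is what makes such maps usable for FP detection and FN recovery.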

Multicenter development of a deep learning radiomics and dosiomics nomogram to predict radiation pneumonia risk in non-small cell lung cancer.

Wang X, Zhang A, Yang H, Zhang G, Ma J, Ye S, Ge S

PubMed · May 16, 2025
Radiation pneumonia (RP) is the most common side effect of chest radiotherapy and can affect patients' quality of life. This study aimed to establish a combined model of radiomics, dosiomics, and deep learning (DL) features based on simulated location CT and dosimetry images, combined with clinical parameters, to improve prediction of grade ≥ 2 RP (RP2) in patients with non-small cell lung cancer (NSCLC). This study retrospectively collected 245 patients with NSCLC who received radiotherapy from three hospitals. 162 patients from Hospital I were randomly divided into a training cohort and an internal validation cohort in a 7:3 ratio. 83 patients from two other hospitals served as an external validation cohort. Multivariate analysis was used to screen independent clinical predictors and establish the clinical model (CM). The radiomics and dosiomics (RD) features and DL features were extracted from simulated location CT and dosimetry images based on the region of interest (ROI) of total lung-PTV (TL-PTV). Features screened by the t-test and least absolute shrinkage and selection operator (LASSO) were used to construct the RD and DL models, and an RD-score and a DL-score were calculated. The RD-score, DL-score, and independent clinical features were combined to establish a deep learning radiomics and dosiomics nomogram (DLRDN). Model performance was evaluated by the area under the curve (AUC). Three clinical factors, V20, V30, and mean lung dose (MLD), were used to establish the CM. Seven RD features (four radiomics and three dosiomics features) were selected to establish the RD model, and ten DL features were selected to establish the DL model. Among the different models, DLRDN showed the best predictions, with AUCs of 0.891 (0.826-0.957), 0.825 (0.693-0.957), and 0.801 (0.698-0.904) in the training, internal validation, and external validation cohorts, respectively. Decision curve analysis (DCA) showed that DLRDN had a higher overall net benefit than the other models. The calibration curve showed that the predicted values of DLRDN were in good agreement with the actual values. Overall, radiomics, dosiomics, and DL features based on simulated location CT and dosimetry images have the potential to help predict RP2. The combination of multi-dimensional data produced the optimal predictive model, which could provide guidance for clinicians.
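The univariate t-test screening step is straightforward to sketch. The synthetic data and the `top` cut below are illustrative, and the LASSO refinement that follows it in the study is omitted for brevity:

```python
import numpy as np

def t_statistic(x, y):
    """Welch two-sample t statistic for univariate feature screening."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    return (x.mean() - y.mean()) / np.sqrt(vx / nx + vy / ny)

def screen_features(X, labels, top=7):
    """Rank features by |t| between RP2-positive and RP2-negative
    patients and keep the strongest `top` candidates."""
    pos, neg = X[labels == 1], X[labels == 0]
    scores = np.array([abs(t_statistic(pos[:, j], neg[:, j]))
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top]
```

Features surviving this filter would then be passed to LASSO, whose coefficients define the RD-score and DL-score used in the nomogram.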

Deep learning progressive distill for predicting clinical response to conversion therapy from preoperative CT images of advanced gastric cancer patients.

Han S, Zhang T, Deng W, Han S, Wu H, Jiang B, Xie W, Chen Y, Deng T, Wen X, Liu N, Fan J

PubMed · May 16, 2025
Identifying patients suitable for conversion therapy through early non-invasive screening is crucial for tailoring treatment in advanced gastric cancer (AGC). This study aimed to develop and validate a deep learning method, utilizing preoperative computed tomography (CT) images, to predict the response to conversion therapy in AGC patients. This retrospective study involved 140 patients. We utilized the Progressive Distill (PD) methodology to construct a deep learning model for predicting clinical response to conversion therapy based on preoperative CT images. Patients in the training set (n = 112) and in the test set (n = 28) were sourced from The First Affiliated Hospital of Wenzhou Medical University between September 2017 and November 2023. Our PD models' performance was compared with baseline models and those utilizing Knowledge Distillation (KD), with evaluation metrics including accuracy, sensitivity, specificity, receiver operating characteristic curves, areas under the receiver operating characteristic curve (AUCs), and heat maps. The PD model exhibited the best performance, demonstrating robust discrimination of clinical response to conversion therapy with an AUC of 0.99 and accuracy of 99.11% in the training set, and an AUC of 0.87 and accuracy of 85.71% in the test set. Sensitivity and specificity were 97.44% and 100%, respectively, in the training set, and 85.71% each in the test set, suggesting an absence of discernible bias. The deep learning model built with the PD method accurately predicts clinical response to conversion therapy in AGC patients. Further investigation is warranted to assess its clinical utility alongside clinicopathological parameters.
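Progressive Distill builds on Knowledge Distillation, the baseline it is compared against. The abstract does not give the loss, but the standard KD ingredient can be sketched as a temperature-softened KL term between teacher and student logits (a generic sketch, not the paper's exact objective):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled, numerically stable softmax."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0, eps=1e-12):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard Knowledge Distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * np.log((p + eps) / (q + eps))).sum(axis=-1).mean()
    return float(kl * T * T)
```

Higher temperatures soften the teacher's distribution so the student also learns the relative ordering of the non-target classes, which is where distillation's benefit comes from.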

FlowMRI-Net: A Generalizable Self-Supervised 4D Flow MRI Reconstruction Network.

Jacobs L, Piccirelli M, Vishnevskiy V, Kozerke S

PubMed · May 16, 2025
Image reconstruction from highly undersampled 4D flow MRI data can be very time-consuming and may result in significant underestimation of velocities depending on regularization, thereby limiting the applicability of the method. The objective of the present work was to develop a generalizable self-supervised deep learning-based framework for fast and accurate reconstruction of highly undersampled 4D flow MRI and to demonstrate the utility of the framework for aortic and cerebrovascular applications. The proposed deep-learning-based framework, called FlowMRI-Net, employs physics-driven unrolled optimization using a complex-valued convolutional recurrent neural network and is trained in a self-supervised manner. The generalizability of the framework is evaluated using aortic and cerebrovascular 4D flow MRI acquisitions acquired on systems from two different vendors for various undersampling factors (R=8,16,24) and compared to compressed sensing (CS-LLR) reconstructions. Evaluation includes an ablation study and a qualitative and quantitative analysis of image and velocity magnitudes. FlowMRI-Net outperforms CS-LLR for aortic 4D flow MRI reconstruction, resulting in significantly lower vectorial normalized root mean square error and mean directional errors for velocities in the thoracic aorta. Furthermore, the feasibility of FlowMRI-Net's generalizability is demonstrated for cerebrovascular 4D flow MRI reconstruction. Reconstruction times ranged from 3 to 7 minutes on commodity CPU/GPU hardware. FlowMRI-Net enables fast and accurate reconstruction of highly undersampled aortic and cerebrovascular 4D flow MRI, with possible applications to other vascular territories.
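Self-supervised training here means no fully sampled reference is needed. One common realization of that idea, assumed here in SSDU-style form for illustration since the abstract does not spell out FlowMRI-Net's loss, withholds part of the acquired k-space to supervise the network, shown on a toy 1-D signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
signal = rng.normal(size=n)                    # toy 1-D stand-in for an image
kspace = np.fft.fft(signal)                    # fully known only in this toy
acquired = rng.random(n) < 0.5                 # undersampling pattern
input_mask = acquired & (rng.random(n) < 0.6)  # lines fed to the network
loss_mask = acquired & ~input_mask             # withheld lines supervise it

def self_supervised_loss(recon):
    """Penalize the reconstruction only on acquired k-space samples
    that were withheld from the network's input."""
    diff = (np.fft.fft(recon) - kspace)[loss_mask]
    return float(np.vdot(diff, diff).real) / max(int(loss_mask.sum()), 1)
```

A reconstruction that merely copies its zero-filled input scores poorly on the withheld lines, so the network is pushed to infer unmeasured k-space, all without any fully sampled ground truth.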

Research on Machine Learning Models Based on Cranial CT Scan for Assessing Prognosis of Emergency Brain Injury.

Qin J, Shen R, Fu J, Sun J

PubMed · May 16, 2025
To evaluate the prognosis of patients with traumatic brain injury according to computed tomography (CT) findings of skull fracture and cerebral parenchymal hemorrhage. Data were retrospectively collected from adult patients with craniocerebral injuries who received non-surgical or surgical treatment after a first CT scan between January 2020 and August 2021. Radiomics features were extracted with PyRadiomics. Dimensionality reduction was then performed using the minimum-redundancy maximum-relevance (mRMR) algorithm and the least absolute shrinkage and selection operator (LASSO), with ten-fold cross-validation to select the best radiomics features. Three parsimonious machine learning classifiers, multinomial logistic regression (LR), a support vector machine (SVM), and naive Bayes (Gaussian), were used to construct radiomics models. A personalized emergency prognostic nomogram for cranial injuries was constructed using a logistic regression model based on the selected radiomics labels and patients' baseline information at emergency admission. The mRMR algorithm and the LASSO regression model extracted 22 top-ranked radiomics features, and based on these features, emergency brain injury prediction models were built with the SVM, LR, and naive Bayes classifiers. The SVM model showed the largest AUC in the training cohort across the three classifications, indicating that it was the most stable and accurate. Moreover, a nomogram prediction model for the Glasgow Outcome Scale (GOS) prognostic score was constructed. We established a nomogram that predicts patients' prognosis from radiomics features and clinical characteristics, providing data support and guidance for clinical prediction of brain injury prognosis and intervention.
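The mRMR step balances relevance against redundancy when ranking features. A correlation-based greedy sketch (actual implementations typically use mutual information, so treat this as illustrative):

```python
import numpy as np

def mrmr_rank(X, y, n_select=3):
    """Greedy mRMR: at each step pick the feature maximizing
    |corr(feature, label)| - mean |corr(feature, already chosen)|."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected
```

The redundancy penalty is what distinguishes mRMR from a plain univariate filter: a feature nearly identical to one already chosen scores poorly even if it is individually predictive.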

Pretrained hybrid transformer for generalizable cardiac substructures segmentation from contrast and non-contrast CTs in lung and breast cancers

Aneesh Rangnekar, Nikhil Mankuzhy, Jonas Willmann, Chloe Choi, Abraham Wu, Maria Thor, Andreas Rimner, Harini Veeraraghavan

arXiv preprint · May 16, 2025
AI automated segmentations for radiation treatment planning (RTP) can deteriorate when applied to clinical cases with different characteristics than the training dataset. Hence, we refined a pretrained transformer into a hybrid transformer convolutional network (HTN) to segment cardiac substructures in lung and breast cancer patients imaged with varying contrasts and patient scan positions. Cohort I, consisting of 56 contrast-enhanced (CECT) and 124 non-contrast CT (NCCT) scans from patients with non-small cell lung cancers acquired in supine position, was used to create an oracle model (all 180 training cases) and a balanced model (CECT: 32, NCCT: 32 training cases). Models were evaluated on a held-out validation set of 60 cohort I patients and 66 patients with breast cancer from cohort II acquired in supine (n=45) and prone (n=21) positions. Accuracy was measured using DSC, HD95, and dose metrics. The publicly available TotalSegmentator served as the benchmark. The oracle and balanced models were similarly accurate (DSC cohort I: 0.80 ± 0.10 versus 0.81 ± 0.10; cohort II: 0.77 ± 0.13 versus 0.80 ± 0.12), outperforming TotalSegmentator. The balanced model, using half the training cases of the oracle, produced dose metrics similar to manual delineations for all cardiac substructures. This model was robust to CT contrast in 6 out of 8 substructures and to patient scan position variations in 5 out of 8 substructures, and showed low correlations of accuracy with patient size and age. The HTN demonstrated robustly accurate cardiac substructure segmentation (geometric and dose metrics) from CTs with varying imaging and patient characteristics, one key requirement for clinical use. Moreover, the model combining pretraining with a balanced distribution of NCCT and CECT scans provided reliably accurate segmentations under varied conditions with far fewer labeled datasets than the oracle model.
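The DSC reported above is the standard overlap metric for segmentation accuracy; a minimal reference implementation for binary masks:

```python
import numpy as np

def dice(pred, ref, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|); eps guards against two empty masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)
```

A DSC of 0.80, as the balanced model achieves, means the predicted and manual contours share 80% of their combined volume in this harmonic sense.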

Technology Advances in the placement of naso-enteral tubes and in the management of enteral feeding in critically ill patients: a narrative study.

Singer P, Setton E

PubMed · May 16, 2025
Enteral feeding requires secure access to the upper gastrointestinal tract, an evaluation of gastric function to detect gastrointestinal intolerance, and a nutritional target that meets the patient's needs. Only in the last decades has progress been made in techniques allowing appropriate placement of the nasogastric tube, mainly reducing pulmonary complications. These techniques include point-of-care ultrasound (POCUS), electromagnetic sensors, real-time video-assisted placement, impedance sensors, and virtual reality. POCUS is also the most accessible tool available to evaluate gastric emptying, via antrum echo density measurement. Automatic measurements of gastric antrum content supported by deep learning algorithms, as well as electrical impedance, provide gastric volume. Intragastric balloons can evaluate motility. Finally, advanced technologies have been tested to improve nutritional intake: stimulation of the esophageal mucosa to induce contractions mimicking a peristaltic wave, which may improve enteral nutrition efficacy, and impedance sensors that detect gastric reflux and modulate the feeding rate accordingly have been clinically evaluated. Use of electronic health records integrating nutritional needs, targets, and administration is recommended.
