
Technology Advances in the placement of naso-enteral tubes and in the management of enteral feeding in critically ill patients: a narrative study.

Singer P, Setton E

PubMed | May 16, 2025
Enteral feeding requires secure access to the upper gastrointestinal tract, evaluation of gastric function to detect gastrointestinal intolerance, and a nutritional target that meets the patient's needs. Only in recent decades has progress been made in techniques for appropriate placement of the nasogastric tube, mainly reducing pulmonary complications. These techniques include point-of-care ultrasound (POCUS), electromagnetic sensors, real-time video-assisted placement, impedance sensors, and virtual reality. POCUS is also the most accessible tool for evaluating gastric emptying, via antrum echo-density measurement. Automatic measurement of gastric antrum content, supported by deep learning algorithms and electrical impedance, provides gastric volume. Intragastric balloons can evaluate motility. Finally, advanced technologies have been tested to improve nutritional intake: stimulation of the esophageal mucosa to induce contractions mimicking a peristaltic wave, which may improve the efficacy of enteral nutrition, and impedance sensors that detect gastric reflux and modulate the feeding rate accordingly have been clinically evaluated. Use of electronic health records integrating nutritional needs, targets, and administration is recommended.
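As a concrete illustration of the ultrasound-based gastric assessment mentioned above (a hand-calculated sketch, not the deep-learning pipeline): antral cross-sectional area is commonly approximated as an ellipse from two perpendicular diameters, and the widely used Perlas model converts right-lateral decubitus CSA and age into an estimated gastric volume.

```python
import math

def antral_csa_cm2(d1_cm: float, d2_cm: float) -> float:
    """Antral cross-sectional area from two perpendicular diameters,
    using the standard ellipse approximation (pi * D1 * D2 / 4)."""
    return math.pi * d1_cm * d2_cm / 4.0

def gastric_volume_ml(csa_cm2: float, age_years: float) -> float:
    """Perlas model: estimated gastric residual volume (mL) from the
    right-lateral decubitus antral CSA (cm^2) and patient age (years).
    GRV = 27.0 + 14.6 * CSA - 1.28 * age."""
    return 27.0 + 14.6 * csa_cm2 - 1.28 * age_years

# Example: 2.0 x 3.0 cm antrum in a 50-year-old patient
csa = antral_csa_cm2(2.0, 3.0)
grv = gastric_volume_ml(csa, 50.0)
```

The deep-learning tools described in the abstract automate the area measurement itself; the volume conversion step remains a simple regression of this form.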

Research on Machine Learning Models Based on Cranial CT Scan for Assessing Prognosis of Emergency Brain Injury.

Qin J, Shen R, Fu J, Sun J

PubMed | May 16, 2025
To evaluate the prognosis of patients with traumatic brain injury according to Computed Tomography (CT) findings of skull fracture and cerebral parenchymal hemorrhage. We retrospectively collected data from adult patients with craniocerebral injuries who received non-surgical or surgical treatment after a first CT scan between January 2020 and August 2021. Radiomics features were extracted with PyRadiomics. Dimensionality reduction was then performed using the max-relevance and min-redundancy algorithm (mRMR) and the least absolute shrinkage and selection operator (LASSO), with ten-fold cross-validation to select the best radiomics features. Three parsimonious machine learning classifiers, multinomial logistic regression (LR), a support vector machine (SVM), and a naive Bayes classifier (Gaussian distribution), were used to construct radiomics models. A personalized emergency prognostic nomogram for cranial injuries was constructed using a logistic regression model based on the selected radiomics labels and patients' baseline information at emergency admission. The mRMR algorithm and the LASSO regression model extracted 22 top-ranked radiomics features, and on these features the emergency brain-injury prediction models were built with the SVM, LR, and naive Bayes classifiers, respectively. The SVM model showed the largest AUC in the training cohort for the three classifications, indicating that it is the most stable and accurate. Moreover, a nomogram model predicting the GOS prognostic score was constructed. We established a nomogram for predicting patients' prognosis from radiomics features and clinical characteristics, which provides data support and guidance for clinical prediction of brain-injury prognosis and intervention.
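The selection-then-classification pipeline described above (relevance/redundancy filtering, LASSO selection, then LR/SVM/naive Bayes) can be sketched with scikit-learn. Note the hedges: mRMR has no scikit-learn implementation, so a mutual-information relevance filter stands in for it here, and the data are synthetic rather than the study's radiomics features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomics feature matrix
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=10, random_state=0)

# Relevance filter (stand-in for mRMR), then LASSO with ten-fold CV
X_rel = SelectKBest(mutual_info_classif, k=50).fit_transform(X, y)
lasso = LassoCV(cv=10, random_state=0).fit(X_rel, y)
selected = np.flatnonzero(lasso.coef_)   # features surviving LASSO
X_sel = X_rel[:, selected]

# Train the three parsimonious classifiers on the selected features
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(probability=True)),
                  ("NB", GaussianNB())]:
    auc = cross_val_score(make_pipeline(StandardScaler(), clf),
                          X_sel, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```

The study's actual feature counts (22 selected features) and cohort depend on its own data; this sketch only mirrors the structure of the pipeline.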

FlowMRI-Net: A Generalizable Self-Supervised 4D Flow MRI Reconstruction Network.

Jacobs L, Piccirelli M, Vishnevskiy V, Kozerke S

PubMed | May 16, 2025
Image reconstruction from highly undersampled 4D flow MRI data can be very time-consuming and may result in significant underestimation of velocities depending on regularization, thereby limiting the applicability of the method. The objective of the present work was to develop a generalizable self-supervised deep learning-based framework for fast and accurate reconstruction of highly undersampled 4D flow MRI and to demonstrate its utility for aortic and cerebrovascular applications. The proposed framework, called FlowMRI-Net, employs physics-driven unrolled optimization using a complex-valued convolutional recurrent neural network and is trained in a self-supervised manner. Its generalizability is evaluated using aortic and cerebrovascular 4D flow MRI acquisitions from systems of two different vendors at various undersampling factors (R=8, 16, 24) and compared to compressed sensing (CS-LLR) reconstructions. Evaluation includes an ablation study and a qualitative and quantitative analysis of image and velocity magnitudes. FlowMRI-Net outperforms CS-LLR for aortic 4D flow MRI reconstruction, yielding significantly lower vectorial normalized root mean square error and mean directional errors for velocities in the thoracic aorta. Furthermore, the feasibility of FlowMRI-Net's generalizability is demonstrated for cerebrovascular 4D flow MRI reconstruction. Reconstruction times ranged from 3 to 7 minutes on commodity CPU/GPU hardware. FlowMRI-Net enables fast and accurate reconstruction of highly undersampled aortic and cerebrovascular 4D flow MRI, with possible applications to other vascular territories.
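Physics-driven unrolled optimization, as used by FlowMRI-Net, alternates data-consistency gradient steps with a learned regularizer. A minimal NumPy sketch of the unrolling structure follows; the 2D single-coil Fourier model and identity-like "denoiser" are hypothetical stand-ins for the paper's complex-valued recurrent network and 4D multi-coil encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x_true = rng.standard_normal((N, N))      # stand-in image
mask = rng.random((N, N)) < 1 / 8         # ~R=8 undersampling pattern

def A(x):
    """Forward model: undersampled 2D Fourier transform."""
    return mask * np.fft.fft2(x, norm="ortho")

def AH(k):
    """Adjoint of the forward model."""
    return np.fft.ifft2(mask * k, norm="ortho")

y = A(x_true)                             # measured k-space data

def regularizer(x):
    # Placeholder for the learned network; here it just enforces realness.
    return np.real(x)

x = AH(y)                                 # zero-filled initialization
step = 1.0
for _ in range(10):                       # unrolled iterations
    # Data-consistency gradient step, then the (placeholder) regularizer
    x = regularizer(x - step * AH(A(x) - y))
```

In the real framework each unrolled iteration's regularizer is the trained network, and the self-supervised loss is computed on held-out k-space samples rather than a ground-truth image.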

Deep learning predicts HER2 status in invasive breast cancer from multimodal ultrasound and MRI.

Fan Y, Sun K, Xiao Y, Zhong P, Meng Y, Yang Y, Du Z, Fang J

PubMed | May 16, 2025
The preoperative human epidermal growth factor receptor type 2 (HER2) status of breast cancer is typically determined by pathological examination of a core needle biopsy, which influences the efficacy of neoadjuvant chemotherapy (NAC). However, the highly heterogeneous nature of breast cancer and the limitations of needle aspiration biopsy increase the instability of pathological evaluation. The aim of this study was to predict HER2 status preoperatively in breast cancer using deep learning (DL) models based on ultrasound (US) and magnetic resonance imaging (MRI). The study included women with invasive breast cancer who underwent US and MRI at our institution between January 2021 and July 2024. US images and dynamic contrast-enhanced T1-weighted MRI images were used to construct DL models (DL-US: the DL model based on US; DL-MRI: the model based on MRI; and DL-MRI&US: the combined model based on both MRI and US). All classifications were based on postoperative pathological evaluation. Receiver operating characteristic analysis and the DeLong test were used to compare the diagnostic performance of the DL models. In the test cohort, DL-US differentiated HER2 status with an AUC of 0.842 (95% CI: 0.708-0.931) and sensitivity and specificity of 89.5% and 79.3%, respectively. DL-MRI achieved an AUC of 0.800 (95% CI: 0.660-0.902), with sensitivity and specificity of 78.9% and 79.3%, respectively. DL-MRI&US yielded an AUC of 0.898 (95% CI: 0.777-0.967), with sensitivity and specificity of 63.2% and 100.0%, respectively.
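The sensitivity/specificity pairs reported for each model correspond to a chosen ROC operating point; a common convention is the point maximizing the Youden index. A small scikit-learn sketch on simulated scores shows the mechanics (the DeLong test itself is not in scikit-learn, so only AUC and the operating point are computed here).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                 # simulated HER2 labels
scores = y_true + rng.normal(0, 0.8, 200)        # informative but noisy scores

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Operating point maximizing the Youden index (sensitivity + specificity - 1)
best = np.argmax(tpr - fpr)
sens, spec = tpr[best], 1 - fpr[best]
print(f"AUC = {auc:.3f}, sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```

Different operating-point choices explain how a model with the highest AUC (like DL-MRI&US above) can still trade sensitivity for specificity at its reported threshold.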

The imaging crisis in axial spondyloarthritis.

Diekhoff T, Poddubnyy D

PubMed | May 16, 2025
Imaging holds a pivotal yet contentious role in the early diagnosis of axial spondyloarthritis. Although MRI has enhanced our ability to detect early inflammatory changes, particularly bone marrow oedema in the sacroiliac joints, the poor specificity of this finding introduces a substantial risk of overdiagnosis. The well-intentioned push by rheumatologists towards earlier intervention could inadvertently lead to the misclassification of mechanical or degenerative conditions (eg, osteitis condensans ilii) as inflammatory disease, especially in the absence of structural lesions. Diagnostic uncertainty is further fuelled by anatomical variability, sex differences, and suboptimal imaging protocols. Current strategies, such as quantifying bone marrow oedema, analysing its distribution patterns, and integrating clinical and laboratory data, offer partial guidance for avoiding overdiagnosis but fall short of resolving the core diagnostic dilemma. Emerging imaging technologies, including high-resolution sequences, quantitative MRI, radiomics, and artificial intelligence, could improve diagnostic precision, but these tools remain exploratory. This Viewpoint underscores the need for a shift in imaging approaches, recognising that although timely diagnosis and treatment are essential to prevent long-term structural damage, robust and reliable imaging criteria are also needed. Without such advances, the imaging field risks repeating past missteps seen in other rheumatological conditions.

Deep learning model based on ultrasound images predicts BRAF V600E mutation in papillary thyroid carcinoma.

Yu Y, Zhao C, Guo R, Zhang Y, Li X, Liu N, Lu Y, Han X, Tang X, Mao R, Peng C, Yu J, Zhou J

PubMed | May 16, 2025
BRAF V600E mutation status detection facilitates prognosis prediction in papillary thyroid carcinoma (PTC). We developed a deep-learning model to determine BRAF V600E status in PTC. Patients with PTC from three centers were collected as the training set (1341 patients), validation set (148 patients), and external test set (135 patients). After testing the performance of the ResNeSt-50, Vision Transformer, and Swin Transformer V2 (SwinT) models, SwinT was chosen as the optimal backbone. An integrated BrafSwinT model was developed by combining the backbone with a radiomics feature branch and a clinical parameter branch. BrafSwinT demonstrated an AUC of 0.869 in the external test set, outperforming the original SwinT, Vision Transformer, and ResNeSt-50 models (AUC: 0.782-0.824; p value: 0.017-0.041). BrafSwinT showed promising results in determining BRAF V600E mutation status in PTC based on routinely acquired ultrasound images and basic clinical information, thus facilitating risk stratification.
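The abstract describes BrafSwinT's branch fusion only at a high level. One common realization is late fusion by feature concatenation; the sketch below is a hypothetical stand-in using a linear classifier over concatenated image-embedding, radiomics, and clinical features on synthetic data, not the paper's architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)                 # simulated mutation labels

# Stand-ins for the three branches' outputs
img_feat = y[:, None] * 0.5 + rng.normal(0, 1, (n, 32))   # backbone embedding
rad_feat = y[:, None] * 0.5 + rng.normal(0, 1, (n, 8))    # radiomics features
clin_feat = rng.normal(0, 1, (n, 4))                      # clinical parameters

# Fusion by concatenation, then a simple classifier head
X = np.concatenate([img_feat, rad_feat, clin_feat], axis=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
print(f"fused AUC on held-out split: {auc:.3f}")
```

In the actual model the classifier head and backbone are trained jointly end to end; concatenation-plus-head is simply the most direct way to express the multi-branch idea.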

Evaluation of tumour pseudocapsule using computed tomography-based radiomics in pancreatic neuroendocrine tumours to predict prognosis and guide surgical strategy: a cohort study.

Wang Y, Gu W, Huang D, Zhang W, Chen Y, Xu J, Li Z, Zhou C, Chen J, Xu X, Tang W, Yu X, Ji S

PubMed | May 16, 2025
To date, indications for a surgical approach to small pancreatic neuroendocrine tumours (PanNETs) remain controversial. This cohort study aimed to identify pseudocapsule status preoperatively to estimate the rationality of enucleation and the survival prognosis of PanNETs, particularly in small tumours. Clinicopathological data were collected from patients with PanNETs who underwent their first pancreatectomy at our hospital (n = 578) between February 2012 and September 2023. Kaplan-Meier curves were constructed to visualise prognostic differences. Five distinct tissue samples were obtained for single-cell RNA sequencing (scRNA-seq) to evaluate variations in the tumour microenvironment. Radiological features were extracted from preoperative arterial-phase contrast-enhanced computed tomography. The performance of the pseudocapsule radiomics model was assessed using the area under the curve (AUC) metric. In total, 475 cases (mean [SD] age, 53.01 [12.20] years; female-to-male ratio, 1.24:1) were eligible for this study. The mean pathological tumour diameter was 2.99 cm (median: 2.50 cm; interquartile range [IQR]: 1.50-4.00 cm). These cases were stratified into complete (223, 46.95%) and incomplete (252, 53.05%) pseudocapsule groups. A statistically significant difference in aggressive indicators was observed between the two groups (P < 0.001). Through scRNA-seq analysis, we identified that the incomplete group presented a markedly immunosuppressive microenvironment. Regarding recurrence-free survival, the 3-year and 5-year rates were 94.8% and 92.5%, respectively, for the complete pseudocapsule group, compared with 76.7% and 70.4% for the incomplete pseudocapsule group. The radiomics model showed significant discrimination of pseudocapsule status, particularly in small tumours (AUC, 0.744; 95% CI, 0.652-0.837). By combining computed tomography-based radiomics and machine learning for preoperative identification of pseudocapsule status, patients in the intact-pseudocapsule group are more likely to benefit from enucleation.
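The recurrence-free survival rates quoted above come from Kaplan-Meier curves; the estimator itself is simple enough to sketch in pure NumPy (toy follow-up data, not the study cohort).

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up times; events: 1 = recurrence observed, 0 = censored.
    Returns a list of (time, survival probability) at each event time."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv = 1.0
    curve = []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                  # still under observation
        d = np.sum((times == t) & (events == 1))      # recurrences at time t
        surv *= 1.0 - d / at_risk
        curve.append((t, surv))
    return curve

# Toy cohort: recurrence times in years (event = 0 means censored)
curve = kaplan_meier([1, 2, 2, 3, 4, 5, 5], [1, 1, 0, 0, 1, 0, 0])
for t, s in curve:
    print(f"t = {t:.0f} y: S(t) = {s:.3f}")
```

Real analyses (as in this study) would additionally compare groups with a log-rank test, which is not sketched here.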

EScarcityS: A framework for enhancing medical image classification performance in scarcity of trainable samples scenarios.

Wang T, Dai Q, Xiong W

PubMed | May 16, 2025
In the field of healthcare, the acquisition and annotation of medical images present significant challenges, resulting in a scarcity of trainable samples. This data limitation hinders the performance of deep learning models, creating bottlenecks in clinical applications. To address this issue, we construct a framework (EScarcityS) aimed at enhancing the success rate of disease diagnosis in scenarios with scarce trainable medical images. Firstly, considering that Transformer-based deep learning networks rely on large amounts of trainable data, this study takes into account the unique characteristics of pathological regions. By extracting the feature representations of all particles in medical images at different granularities, a multi-granularity Transformer network (MGVit) is designed. This network leverages additional prior knowledge to assist the Transformer network during training, thereby reducing the data requirement to some extent. Next, the importance maps of particles at different granularities, generated by MGVit, are fused to construct disease probability maps corresponding to the images. Based on these maps, a disease-probability-map-guided diffusion generation model is designed to generate more realistic and interpretable synthetic data. Subsequently, authentic and synthetic data are mixed and used to retrain MGVit, aiming to enhance the accuracy of medical image classification when trainable samples are scarce. Finally, we conducted detailed experiments on four real medical image datasets to validate the effectiveness of EScarcityS and its specific modules.
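The fusion of multi-granularity importance maps into a disease probability map is described only conceptually. One plausible sketch, with hypothetical map sizes and simple nearest-neighbour upsampling plus averaging, is:

```python
import numpy as np

def fuse_importance_maps(maps, out_size=64):
    """Upsample importance maps of different granularities to a common
    grid (nearest-neighbour) and average them into one probability map.
    Assumes out_size is an integer multiple of each map's side length."""
    fused = np.zeros((out_size, out_size))
    for m in maps:
        scale = out_size // m.shape[0]
        # np.kron with a ones-block performs nearest-neighbour upsampling
        fused += np.kron(m, np.ones((scale, scale)))
    fused /= len(maps)
    # Normalise to [0, 1] so the map can guide a downstream generator
    fused = (fused - fused.min()) / (np.ptp(fused) + 1e-8)
    return fused

rng = np.random.default_rng(0)
coarse = rng.random((8, 8))     # coarse-granularity importance map
fine = rng.random((32, 32))     # fine-granularity importance map
prob_map = fuse_importance_maps([coarse, fine])
```

The paper's actual fusion and the diffusion-model guidance are learned components; this illustrates only the map-combination step in its simplest form.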

Artificial intelligence-guided distal radius fracture detection on plain radiographs in comparison with human raters.

Ramadanov N, John P, Hable R, Schreyer AG, Shabo S, Prill R, Salzmann M

PubMed | May 16, 2025
The aim of this study was to compare the performance of artificial intelligence (AI) in detecting distal radius fractures (DRFs) on plain radiographs with the performance of human raters. We retrospectively analysed all wrist radiographs taken in our hospital since the introduction of AI-guided fracture detection, from 11 September 2023 to 10 September 2024. The ground truth was defined by the radiological report of a board-certified radiologist based solely on conventional radiographs. The following parameters were calculated: True Positives (TP), True Negatives (TN), False Positives (FP), False Negatives (FN), accuracy (%), Cohen's Kappa coefficient, F1 score, sensitivity (%), specificity (%), and Youden Index (J statistic). In total, 1145 plain radiographs of the wrist were taken during the study period. The mean age of the included patients was 46.6 years (± 27.3), ranging from 2 to 99 years, and 59.0% were female. According to the ground truth, of the 556 anteroposterior (AP) radiographs, 225 cases (40.5%) had a DRF, and of the 589 lateral-view radiographs, 240 cases (40.7%) had a DRF. The AI system showed the following results on AP radiographs: accuracy (%): 95.90; Cohen's Kappa: 0.913; F1 score: 0.947; sensitivity (%): 92.02; specificity (%): 98.45; Youden Index: 90.47. The orthopedic surgeon achieved a sensitivity of 91.5%, specificity of 97.8%, an overall accuracy of 95.1%, an F1 score of 0.943, and Cohen's kappa of 0.901. These results were comparable to those of the AI model. AI-guided detection of DRFs demonstrated diagnostic performance nearly identical to that of an experienced orthopedic surgeon across all key metrics. The marginal differences observed in sensitivity and specificity suggest that AI can reliably support clinical fracture assessment based solely on conventional radiographs.
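All of the reported parameters derive from the four confusion-matrix counts. A small self-contained sketch (with illustrative counts, not the study's data):

```python
def fracture_metrics(tp, tn, fp, fn):
    """Diagnostic metrics from confusion-matrix counts."""
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    youden = sensitivity + specificity - 1        # Youden's J statistic
    f1 = 2 * tp / (2 * tp + fp + fn)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, youden=youden, f1=f1, kappa=kappa)

# Illustrative counts only; the study reports its metrics directly
m = fracture_metrics(tp=9, tn=8, fp=2, fn=1)
for k, v in m.items():
    print(f"{k}: {v:.3f}")
```

Reporting the underlying TP/TN/FP/FN counts, as this study does, lets readers recompute every derived metric this way.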
