
Fluid fluctuations assessed with artificial intelligence during the maintenance phase impact anti-vascular endothelial growth factor visual outcomes in a multicentre, routine clinical care national age-related macular degeneration database.

Martin-Pinardel R, Izquierdo-Serra J, Bernal-Morales C, De Zanet S, Garay-Aramburu G, Puzo M, Arruabarrena C, Sararols L, Abraldes M, Broc L, Escobar-Barranco JJ, Figueroa M, Zapata MA, Ruiz-Moreno JM, Parrado-Carrillo A, Moll-Udina A, Alforja S, Figueras-Roca M, Gómez-Baldó L, Ciller C, Apostolopoulos S, Mishchuk A, Casaroli-Marano RP, Zarranz-Ventura J

May 16 2025
To evaluate the impact of fluid volume fluctuations quantified with artificial intelligence in optical coherence tomography scans during the maintenance phase on visual outcomes at 12 and 24 months in a real-world, multicentre, national cohort of treatment-naïve neovascular age-related macular degeneration (nAMD) eyes. Demographics, visual acuity (VA) and number of injections were collected using the Fight Retinal Blindness tool. Intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), total fluid (TF) and central subfield thickness (CST) were quantified using the RetinAI Discovery tool. Fluctuations were defined as the SD of within-eye quantified values, and eyes were distributed according to SD quartiles for each biomarker. A total of 452 naïve nAMD eyes were included. Eyes with the highest (Q4) versus lowest (Q1) fluid fluctuations showed significantly worse VA change (months 3-12) in IRF -3.91 versus 3.50 letters, PED -4.66 versus 3.29, TF -2.07 versus 2.97 and CST -1.85 versus 2.96 (all p<0.05), but not for SRF 0.66 versus 0.93 (p=0.91). Similar VA outcomes were observed at month 24 for PED -8.41 versus 4.98 (p<0.05), TF -7.38 versus 1.89 (p=0.07) and CST -10.58 versus 3.60 (p<0.05). The median number of injections (months 3-24) was significantly higher in Q4 versus Q1 eyes in IRF 9 versus 8, SRF 10 versus 8 and TF 10 versus 8 (all p<0.05). This multicentre study reports a negative effect of fluid volume fluctuations in specific fluid compartments during the maintenance phase on VA outcomes, suggesting that anatomical and functional treatment response patterns may be fluid-specific.
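The fluctuation metric described above (SD of within-eye fluid volumes across visits, with eyes grouped by SD quartile) can be sketched with pandas. Data, column names, and eye identifiers below are illustrative assumptions, not the study's dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical long-format data: one row per eye per maintenance-phase visit,
# with an AI-quantified fluid volume (e.g. IRF); values are made up.
df = pd.DataFrame({
    "eye_id": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    "irf_volume": [10.0, 50.0, 20.0, 5.0, 6.0, 5.5, 80.0, 10.0, 40.0, 7.0, 7.5, 7.2],
})

# Fluctuation = SD of within-eye quantified values across visits.
fluct = df.groupby("eye_id")["irf_volume"].std(ddof=1)

# Distribute eyes into quartile groups (Q1 = lowest, Q4 = highest fluctuation).
quartile = pd.qcut(fluct, 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(quartile.to_dict())
```

Comparing VA change between the resulting Q4 and Q1 groups would then follow with the appropriate statistical test.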

Enhancing Craniomaxillofacial Surgeries with Artificial Intelligence Technologies.

Do W, van Nistelrooij N, Bergé S, Vinayahalingam S

May 16 2025
Artificial intelligence (AI) can be applied in multiple subspecialties in craniomaxillofacial (CMF) surgeries. This article provides an overview of AI fundamentals, focusing on classification, object detection, and segmentation: the core tasks used in CMF applications. The article then explores the development and integration of AI in dentoalveolar surgery, implantology, traumatology, oncology, craniofacial surgery, and orthognathic and feminization surgery. It highlights AI-driven advancements in diagnosis, pre-operative planning, intra-operative assistance, post-operative management, and outcome prediction. Finally, the challenges in AI adoption are discussed, including data limitations, algorithm validation, and clinical integration.

Application of Quantitative CT and Machine Learning in the Evaluation and Diagnosis of Polymyositis/Dermatomyositis-Associated Interstitial Lung Disease.

Yang K, Chen Y, He L, Sheng Y, Hei H, Zhang J, Jin C

May 16 2025
To investigate lung changes in patients with polymyositis/dermatomyositis-associated interstitial lung disease (PM/DM-ILD) using quantitative CT and to construct a diagnostic model to evaluate the application of quantitative CT and machine learning in diagnosing PM/DM-ILD. Chest CT images from 348 PM/DM individuals were quantitatively analyzed to obtain the lung volume (LV), mean lung density (MLD), and intrapulmonary vascular volume (IPVV) of the whole lung and each lung lobe. The percentage of high attenuation area (HAA%) was determined using the lung density histogram. Patients hospitalized from 2016 to 2021 were used as the training set (n=258), and those from 2022 to 2023 as the temporal test set (n=90). Seven classification models were established, and their performance was evaluated through ROC analysis, decision curve analysis, calibration curves, and precision-recall curves. The optimal model was selected and interpreted with Python's SHAP model interpretation package. Compared to the non-ILD group, the mean lung density and percentage of high attenuation area in the whole lung and each lung lobe were significantly increased, and the lung volume and intrapulmonary vessel volume were significantly decreased in the ILD group. The Random Forest (RF) model demonstrated superior performance, with a test-set area under the curve of 0.843 (95% CI: 0.821-0.865), accuracy of 0.778, sensitivity of 0.784, and specificity of 0.750. Quantitative CT serves as an objective and precise method to assess pulmonary changes in PM/DM-ILD patients. The RF model based on CT quantitative parameters displayed strong diagnostic efficiency in identifying ILD, offering a new and convenient approach for evaluating and diagnosing PM/DM-ILD patients.
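The HAA% measure derived from the lung density histogram can be illustrated with a small numpy sketch. The -600 HU cut-off is a commonly used convention for high attenuation areas, but the study's exact threshold is an assumption here, and the voxel data below are synthetic:

```python
import numpy as np

def high_attenuation_percentage(hu_values, threshold=-600):
    """Percentage of lung voxels at or above a HU threshold (HAA%)."""
    hu = np.asarray(hu_values, dtype=float)
    return 100.0 * np.count_nonzero(hu >= threshold) / hu.size

# Toy lung-density histogram: mostly normally aerated voxels (~ -850 HU)
# plus a denser, ILD-like fraction around -400 HU.
rng = np.random.default_rng(0)
voxels = np.concatenate([
    rng.normal(-850, 40, 9000),   # aerated lung
    rng.normal(-400, 60, 1000),   # high-attenuation (fibrotic/ground-glass)
])
print(f"HAA%: {high_attenuation_percentage(voxels):.1f}")
```

HAA% and the other histogram-derived quantities would then feed the classification models as tabular features.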

Comparative analysis of deep learning methods for breast ultrasound lesion detection and classification.

Vallez N, Mateos-Aparicio-Ruiz I, Rienda MA, Deniz O, Bueno G

May 16 2025
Breast ultrasound (BUS) computer-aided diagnosis (CAD) systems aim to perform two major steps: detecting lesions and classifying them as benign or malignant. However, the impact of combining both steps has not been previously addressed. Moreover, the specific method employed can influence the final outcome of the system. In this work, a comparison of the effects of using object detection, semantic segmentation and instance segmentation to detect lesions in BUS images was conducted. To this end, four approaches were examined: a) multi-class object detection, b) one-class object detection followed by localized region classification, c) multi-class segmentation, and d) one-class segmentation followed by segmented region classification. Additionally, a novel dataset for BUS segmentation, called BUS-UCLM, has been gathered, annotated and shared publicly. The evaluation of the proposed methods was carried out with this new dataset and four publicly available datasets: BUSI, OASBUD, RODTOOK and UDIAT. Among the four approaches compared, multi-class detection and multi-class segmentation achieved the best results when instance segmentation CNNs were used. The best results in detection were obtained with a multi-class Mask R-CNN with a COCO AP50 metric of 72.9%. In the multi-class segmentation scenario, Poolformer achieved the best results with a Dice score of 77.7%. The analysis of detection and segmentation models in BUS highlights several key challenges, emphasizing the complexity of accurately identifying and segmenting lesions. Among the methods evaluated, instance segmentation has proven to be the most effective for BUS images, offering superior performance in delineating individual lesions.
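The Dice score used above to rank segmentation models is the overlap-based coefficient sketched below. Function names, the epsilon smoothing, and the toy masks are illustrative assumptions, not the paper's evaluation code:

```python
import numpy as np

def dice_score(pred_mask, gt_mask, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Two 16-pixel square "lesions" offset by one pixel: overlap is 3x3 = 9 px.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
print(f"Dice: {dice_score(pred, gt):.4f}")
```

A Dice of 77.7% therefore means roughly three-quarters of predicted and reference lesion pixels coincide, averaged over the test set.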

Assessing fetal lung maturity: Integration of ultrasound radiomics and deep learning.

Chen W, Zeng B, Ling X, Chen C, Lai J, Lin J, Liu X, Zhou H, Guo X

May 16 2025
This study built a model to forecast fetal lung maturity by blending radiomics and deep learning methods. We examined ultrasound images from 263 pregnancies at different stages of pregnancy. Utilizing the GE VOLUSON E8 system, we captured images to extract and analyze radiomic features. These features were integrated with clinical data by means of deep learning algorithms such as DenseNet121 to enhance the accuracy of assessing fetal lung maturity. This combined model was validated by receiver operating characteristic (ROC) curve, calibration diagram, and decision curve analysis (DCA). The diagnostic accuracy and reliability indicated that this method significantly improves the prediction of fetal lung maturity. This novel non-invasive diagnostic technology highlights the potential advantages of integrating diverse data sources to enhance prenatal care and infant health. The study lays the groundwork for validation and refinement of the model across various healthcare settings.

Technology Advances in the placement of naso-enteral tubes and in the management of enteral feeding in critically ill patients: a narrative study.

Singer P, Setton E

May 16 2025
Enteral feeding requires secure access to the upper gastrointestinal tract, evaluation of gastric function to detect gastrointestinal intolerance, and a nutritional target that meets the patient's needs. Only in recent decades has progress been made in techniques allowing appropriate placement of the nasogastric tube, mainly reducing pulmonary complications. These techniques include point-of-care ultrasound (POCUS), electromagnetic sensors, real-time video-assisted placement, impedance sensors, and virtual reality. POCUS is also the most accessible tool available to evaluate gastric emptying, with antrum echo density measurement. Automatic measurements of gastric antrum content supported by deep learning algorithms and electric impedance provide gastric volume. Intragastric balloons can evaluate motility. Finally, advanced technologies have been tested to improve nutritional intake: stimulation of the esophageal mucosa to induce a contraction wave that may improve enteral nutrition efficacy, and impedance sensors that detect gastric reflux and modulate the feeding rate accordingly, have both been clinically evaluated. Use of electronic health records integrating nutritional needs, targets, and administration is recommended.

Research on Machine Learning Models Based on Cranial CT Scan for Assessing Prognosis of Emergency Brain Injury.

Qin J, Shen R, Fu J, Sun J

May 16 2025
To evaluate the prognosis of patients with traumatic brain injury according to the Computed Tomography (CT) findings of skull fracture and cerebral parenchymal hemorrhage. We retrospectively collected data from adult patients with craniocerebral injuries who received non-surgical or surgical treatment after a first CT scan between January 2020 and August 2021. Radiomics features were extracted with Pyradiomics. Dimensionality reduction was then performed using the max-relevance and min-redundancy algorithm (mRMR) and the least absolute shrinkage and selection operator (LASSO), with ten-fold cross-validation to select the best radiomics features. Three parsimonious machine learning classifiers, multinomial logistic regression (LR), a support vector machine (SVM), and a naive Bayes classifier (Gaussian distribution), were used to construct radiomics models. A personalized emergency prognostic nomogram for cranial injuries was constructed using a logistic regression model based on the selected radiomics labels and patients' baseline information at emergency admission. The mRMR algorithm and the LASSO regression model extracted the 22 top-ranked radiomics features, and based on these features, emergency brain injury prediction models were built with the SVM, LR, and naive Bayes classifiers. The SVM model showed the largest AUC in the training cohort among the three classifiers, indicating that it is more stable and accurate. Moreover, a nomogram prediction model for the GOS prognostic score was constructed. We established a nomogram for predicting patients' prognosis from radiomics features and clinical characteristics, which provides data support and guidance for clinical prediction of brain injury prognosis and intervention.
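The LASSO step with ten-fold cross-validation might look like the following scikit-learn sketch. The data are synthetic, the mRMR pre-filter is omitted, and none of the feature names correspond to the study's actual radiomics features:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 50))  # 120 patients x 50 candidate radiomics features
# Synthetic outcome driven by features 0 and 3 plus noise.
y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(scale=0.5, size=120)

# Standardize, then fit LASSO with the penalty chosen by 10-fold CV.
X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=10, random_state=0).fit(X_std, y)

selected = np.flatnonzero(lasso.coef_)  # features with non-zero coefficients
print(f"{selected.size} of 50 features kept, e.g. indices {selected[:5]}")
```

The surviving features would then be passed to the downstream classifiers (LR, SVM, naive Bayes) and the nomogram.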

FlowMRI-Net: A Generalizable Self-Supervised 4D Flow MRI Reconstruction Network.

Jacobs L, Piccirelli M, Vishnevskiy V, Kozerke S

May 16 2025
Image reconstruction from highly undersampled 4D flow MRI data can be very time-consuming and may result in significant underestimation of velocities depending on regularization, thereby limiting the applicability of the method. The objective of the present work was to develop a generalizable self-supervised deep learning-based framework for fast and accurate reconstruction of highly undersampled 4D flow MRI and to demonstrate the utility of the framework for aortic and cerebrovascular applications. The proposed deep learning-based framework, called FlowMRI-Net, employs physics-driven unrolled optimization using a complex-valued convolutional recurrent neural network and is trained in a self-supervised manner. The generalizability of the framework is evaluated using aortic and cerebrovascular 4D flow MRI acquisitions acquired on systems from two different vendors for various undersampling factors (R=8, 16, 24) and compared to compressed sensing (CS-LLR) reconstructions. Evaluation includes an ablation study and a qualitative and quantitative analysis of image and velocity magnitudes. FlowMRI-Net outperforms CS-LLR for aortic 4D flow MRI reconstruction, resulting in significantly lower vectorial normalized root mean square error and mean directional errors for velocities in the thoracic aorta. Furthermore, the feasibility of FlowMRI-Net's generalizability is demonstrated for cerebrovascular 4D flow MRI reconstruction. Reconstruction times ranged from 3 to 7 minutes on commodity CPU/GPU hardware. FlowMRI-Net enables fast and accurate reconstruction of highly undersampled aortic and cerebrovascular 4D flow MRI, with possible applications to other vascular territories.
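The vectorial normalized RMSE used above to compare velocity fields could be sketched as follows. The paper's exact normalization is not reproduced here, so dividing by the peak reference speed is an assumption, and the velocity fields are synthetic:

```python
import numpy as np

def vnrmse(v_ref, v_rec):
    """Vectorial normalized RMSE between reference and reconstructed
    velocity fields of shape (..., 3), normalized by peak reference speed."""
    err = np.linalg.norm(v_rec - v_ref, axis=-1)   # per-voxel vector error magnitude
    ref_speed = np.linalg.norm(v_ref, axis=-1)
    return np.sqrt(np.mean(err**2)) / ref_speed.max()

v_ref = np.zeros((4, 4, 3)); v_ref[..., 0] = 1.0   # uniform 1 m/s flow along x
v_rec = v_ref + 0.1                                 # constant bias on every component
print(f"vNRMSE: {vnrmse(v_ref, v_rec):.3f}")
```

Lower values indicate that the reconstructed velocity vectors track the reference both in magnitude and direction.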

Deep learning predicts HER2 status in invasive breast cancer from multimodal ultrasound and MRI.

Fan Y, Sun K, Xiao Y, Zhong P, Meng Y, Yang Y, Du Z, Fang J

May 16 2025
The preoperative human epidermal growth factor receptor type 2 (HER2) status of breast cancer is typically determined by pathological examination of a core needle biopsy, which influences the efficacy of neoadjuvant chemotherapy (NAC). However, the highly heterogeneous nature of breast cancer and the limitations of needle aspiration biopsy increase the instability of pathological evaluation. The aim of this study was to predict HER2 status in preoperative breast cancer using deep learning (DL) models based on ultrasound (US) and magnetic resonance imaging (MRI). The study included women with invasive breast cancer who underwent US and MRI at our institution between January 2021 and July 2024. US images and dynamic contrast-enhanced T1-weighted MRI images were used to construct DL models (DL-US: the DL model based on US; DL-MRI: the model based on MRI; and DL-MRI&US: the combined model based on both MRI and US). All classifications were based on postoperative pathological evaluation. Receiver operating characteristic analysis and the DeLong test were used to compare the diagnostic performance of the DL models. In the test cohort, DL-US differentiated the HER2 status of breast cancer with an AUC of 0.842 (95% CI: 0.708-0.931), and sensitivity and specificity of 89.5% and 79.3%, respectively. DL-MRI achieved an AUC of 0.800 (95% CI: 0.660-0.902), with sensitivity and specificity of 78.9% and 79.3%, respectively. DL-MRI&US yielded an AUC of 0.898 (95% CI: 0.777-0.967), with sensitivity and specificity of 63.2% and 100.0%, respectively.
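The AUC, sensitivity, and specificity figures reported for the DL models can be computed with a small numpy helper. This is an illustration only: the threshold, scores, and labels below are made up, and the rank-based AUC here stands in for the full ROC analysis (the DeLong comparison is not reproduced):

```python
import numpy as np

def auc_sens_spec(y_true, scores, threshold=0.5):
    """ROC AUC as a rank statistic, plus sensitivity and specificity
    at a fixed decision threshold."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    # AUC = P(score_pos > score_neg), counting ties as 1/2.
    diff = pos[:, None] - neg[None, :]
    auc = ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))
    pred = (s >= threshold).astype(int)
    sens = (pred[y == 1] == 1).mean()   # true positive rate
    spec = (pred[y == 0] == 0).mean()   # true negative rate
    return auc, sens, spec

y_true = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auc_sens_spec(y_true, scores))
```

The trade-off visible in DL-MRI&US above (63.2% sensitivity at 100% specificity) is exactly this kind of threshold-dependent operating point on the ROC curve.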