
Deep learning progressive distill for predicting clinical response to conversion therapy from preoperative CT images of advanced gastric cancer patients.

Han S, Zhang T, Deng W, Han S, Wu H, Jiang B, Xie W, Chen Y, Deng T, Wen X, Liu N, Fan J

PubMed · May 16, 2025
Identifying patients suitable for conversion therapy through early non-invasive screening is crucial for tailoring treatment in advanced gastric cancer (AGC). This study aimed to develop and validate a deep learning method, utilizing preoperative computed tomography (CT) images, to predict the response to conversion therapy in AGC patients. This retrospective study involved 140 patients. We used a Progressive Distill (PD) methodology to construct a deep learning model that predicts clinical response to conversion therapy from preoperative CT images. Patients in the training set (n = 112) and the test set (n = 28) were sourced from The First Affiliated Hospital of Wenzhou Medical University between September 2017 and November 2023. The PD model's performance was compared with baseline models and models trained with Knowledge Distillation (KD), using accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curves, areas under the ROC curve (AUCs), and heat maps as evaluation metrics. The PD model performed best, robustly discriminating clinical response to conversion therapy with an AUC of 0.99 and accuracy of 99.11% in the training set, and an AUC of 0.87 and accuracy of 85.71% in the test set. Sensitivity and specificity were 97.44% and 100%, respectively, in the training set, and 85.71% each in the test set, suggesting no discernible bias. The PD-based deep learning model accurately predicts clinical response to conversion therapy in AGC patients. Further investigation is warranted to assess its clinical utility alongside clinicopathological parameters.
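
The abstract does not spell out the Progressive Distill objective itself, but the Knowledge Distillation baseline it is compared against is conventionally trained with hard-label cross-entropy blended with a temperature-softened teacher-student KL term. A minimal sketch of that standard KD loss (the temperature and weighting values are illustrative assumptions, not the paper's settings):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation objective: hard-label cross-entropy
    blended with a temperature-softened teacher-student KL term."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),  # student log-probs at temperature T
        F.softmax(teacher_logits / T, dim=1),      # teacher soft targets at temperature T
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable across temperatures
    return alpha * hard + (1.0 - alpha) * soft
```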

Uncertainty quantification for deep learning-based metastatic lesion segmentation on whole body PET/CT.

Schott B, Santoro-Fernandes V, Klanecek Z, Perlman S, Jeraj R

PubMed · May 16, 2025
Deep learning models are increasingly being implemented for automated medical image analysis to inform patient care. Most models, however, lack uncertainty information, without which the reliability of model outputs cannot be ensured. Several uncertainty quantification (UQ) methods exist to capture model uncertainty. Yet, it is not clear which method is optimal for a given task. The purpose of this work was to investigate several commonly used UQ methods for the critical yet understudied task of metastatic lesion segmentation on whole body PET/CT. 
Approach:
59 whole body 68Ga-DOTATATE PET/CT images of patients undergoing theranostic treatment of metastatic neuroendocrine tumors were used in this work. A 3D U-Net was trained for lesion segmentation with five-fold cross validation. Uncertainty measures derived from four UQ methods (probability entropy, Monte Carlo dropout, deep ensembles, and test time augmentation) were investigated. Each uncertainty measure was assessed across four quantitative evaluations: (1) its ability to detect artificially degraded image data at low, medium, and high degradation magnitudes; (2) to detect false-positive (FP) predicted regions; (3) to recover false-negative (FN) predicted regions; and (4) to establish correlations with model biomarker extraction and segmentation performance metrics.
Results: Probability entropy and test time augmentation achieved the lowest and highest degraded image detection performance, respectively, at low (AUC = 0.54 vs. 0.68), medium (AUC = 0.70 vs. 0.82), and high (AUC = 0.83 vs. 0.90) degradation magnitudes. For detecting FPs, all UQ methods achieved strong performance, with AUC values ranging narrowly between 0.77 and 0.81. FN region recovery was strongest for test time augmentation and weakest for probability entropy. Correlation results were mixed: the strongest correlations were achieved by test time augmentation for SUVtotal capture (ρ=0.57) and segmentation Dice coefficient (ρ=0.72), by Monte Carlo dropout for SUVmean capture (ρ=0.35), and by probability entropy for segmentation cross entropy (ρ=0.96).
Significance: Overall, test time augmentation demonstrated superior uncertainty quantification performance and is recommended for the metastatic lesion segmentation task. It also has the advantage of being post hoc and computationally efficient. In contrast, probability entropy performed the worst, highlighting the need for more advanced UQ approaches for this task.
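
Of the four methods compared, test time augmentation is also the simplest to retrofit onto an already-trained model. A minimal sketch of TTA-based uncertainty for binary lesion segmentation, where `model`, `augmentations`, and `inverses` are hypothetical callables (the paper's actual augmentation set is not given in the abstract):

```python
import numpy as np

def tta_uncertainty(model, image, augmentations, inverses):
    """Test-time-augmentation uncertainty: run the model on several augmented
    views, map each prediction back to the original frame, and use the
    per-voxel variance (or the entropy of the mean) as the uncertainty map."""
    probs = []
    for aug, inv in zip(augmentations, inverses):
        p = model(aug(image))   # predicted foreground-probability map
        probs.append(inv(p))    # undo the spatial augmentation
    probs = np.stack(probs)     # shape: (n_augmentations, *image.shape)
    mean = probs.mean(axis=0)
    eps = 1e-8
    entropy = -(mean * np.log(mean + eps) + (1 - mean) * np.log(1 - mean + eps))
    return mean, probs.var(axis=0), entropy
```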

Fluid fluctuations assessed with artificial intelligence during the maintenance phase impact anti-vascular endothelial growth factor visual outcomes in a multicentre, routine clinical care national age-related macular degeneration database.

Martin-Pinardel R, Izquierdo-Serra J, Bernal-Morales C, De Zanet S, Garay-Aramburu G, Puzo M, Arruabarrena C, Sararols L, Abraldes M, Broc L, Escobar-Barranco JJ, Figueroa M, Zapata MA, Ruiz-Moreno JM, Parrado-Carrillo A, Moll-Udina A, Alforja S, Figueras-Roca M, Gómez-Baldó L, Ciller C, Apostolopoulos S, Mishchuk A, Casaroli-Marano RP, Zarranz-Ventura J

PubMed · May 16, 2025
To evaluate the impact of fluid volume fluctuations, quantified with artificial intelligence in optical coherence tomography scans during the maintenance phase, on visual outcomes at 12 and 24 months in a real-world, multicentre, national cohort of treatment-naïve neovascular age-related macular degeneration (nAMD) eyes. Demographics, visual acuity (VA) and number of injections were collected using the Fight Retinal Blindness tool. Intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), total fluid (TF) and central subfield thickness (CST) were quantified using the RetinAI Discovery tool. Fluctuation was defined as the SD of the within-eye quantified values, and eyes were assigned to SD quartiles for each biomarker. A total of 452 treatment-naïve nAMD eyes were included. Eyes with the highest (Q4) versus the lowest (Q1) fluid fluctuations showed significantly worse VA change (months 3-12) for IRF (-3.91 vs +3.50 letters), PED (-4.66 vs +3.29), TF (-2.07 vs +2.97) and CST (-1.85 vs +2.96) (all p<0.05), but not for SRF (+0.66 vs +0.93, p=0.91). Similar VA outcomes were observed at month 24 for PED (-8.41 vs +4.98, p<0.05), TF (-7.38 vs +1.89, p=0.07) and CST (-10.58 vs +3.60, p<0.05). The median number of injections (months 3-24) was significantly higher in Q4 than in Q1 eyes for IRF (9 vs 8), SRF (10 vs 8) and TF (10 vs 8) (all p<0.05). This multicentre study reports a negative effect of fluid volume fluctuations during the maintenance phase on VA outcomes in specific fluid compartments, suggesting that anatomical and functional treatment response patterns may be fluid-specific.
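
The fluctuation metric itself is straightforward to reproduce: the within-eye standard deviation of each AI-quantified volume, followed by quartile binning. A sketch under assumed column names and synthetic data (the study's actual schema is not given):

```python
import numpy as np
import pandas as pd

# Hypothetical long-format table: one row per eye per visit with AI-quantified
# fluid volumes; column names and distributions are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "eye_id": np.repeat(np.arange(452), 8),          # 8 maintenance-phase visits
    "irf": rng.gamma(2.0, 50, 452 * 8),
    "ped": rng.gamma(2.0, 80, 452 * 8),
})

# Fluctuation = within-eye SD of each biomarker over the maintenance phase.
fluct = df.groupby("eye_id").std()

# Assign eyes to fluctuation quartiles per biomarker (Q1 lowest, Q4 highest).
quartiles = fluct.apply(lambda s: pd.qcut(s, 4, labels=["Q1", "Q2", "Q3", "Q4"]))
```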

Enhancing Craniomaxillofacial Surgeries with Artificial Intelligence Technologies.

Do W, van Nistelrooij N, Bergé S, Vinayahalingam S

PubMed · May 16, 2025
Artificial intelligence (AI) can be applied across multiple subspecialties of craniomaxillofacial (CMF) surgery. This article first reviews AI fundamentals, focusing on classification, object detection, and segmentation, the core tasks underlying CMF applications. It then explores the development and integration of AI in dentoalveolar surgery, implantology, traumatology, oncology, craniofacial surgery, and orthognathic and feminization surgery, highlighting AI-driven advances in diagnosis, pre-operative planning, intra-operative assistance, post-operative management, and outcome prediction. Finally, the challenges to AI adoption are discussed, including data limitations, algorithm validation, and clinical integration.

Application of Quantitative CT and Machine Learning in the Evaluation and Diagnosis of Polymyositis/Dermatomyositis-Associated Interstitial Lung Disease.

Yang K, Chen Y, He L, Sheng Y, Hei H, Zhang J, Jin C

PubMed · May 16, 2025
To investigate lung changes in patients with polymyositis/dermatomyositis-associated interstitial lung disease (PM/DM-ILD) using quantitative CT, and to construct a diagnostic model to evaluate the application of quantitative CT and machine learning in diagnosing PM/DM-ILD. Chest CT images from 348 PM/DM patients were quantitatively analyzed to obtain the lung volume (LV), mean lung density (MLD), and intrapulmonary vascular volume (IPVV) of the whole lung and of each lung lobe. The percentage of high attenuation area (HAA%) was determined from the lung density histogram. Patients hospitalized from 2016 to 2021 formed the training set (n=258), and those from 2022 to 2023 formed the temporal test set (n=90). Seven classification models were established, and their performance was evaluated through ROC analysis, decision curve analysis, calibration curves, and precision-recall curves. The optimal model was selected and interpreted with the Python SHAP package. Compared with the non-ILD group, the ILD group showed significantly increased mean lung density and percentage of high attenuation area, and significantly decreased lung volume and intrapulmonary vascular volume, in the whole lung and in each lobe. The Random Forest (RF) model demonstrated superior performance, with a test-set area under the curve of 0.843 (95% CI: 0.821-0.865), accuracy of 0.778, sensitivity of 0.784, and specificity of 0.750. Quantitative CT serves as an objective and precise method to assess pulmonary changes in PM/DM-ILD patients. The RF model based on quantitative CT parameters displayed strong diagnostic efficiency in identifying ILD, offering a new and convenient approach for evaluating and diagnosing PM/DM-ILD.
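
As a rough illustration of the winning pipeline, the sketch below trains a Random Forest on stand-in quantitative CT features and reports a test AUC; the feature names, dimensions, and hyperparameters are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the quantitative CT features (LV, MLD, IPVV, HAA%
# for the whole lung and each lobe); real measurements would replace these.
rng = np.random.default_rng(0)
X = rng.normal(size=(348, 24))
y = rng.integers(0, 2, size=348)  # ILD vs non-ILD labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=90, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

# Per-feature interpretation with SHAP, as in the paper, e.g.:
#   import shap
#   shap.summary_plot(shap.TreeExplainer(rf).shap_values(X_te)[1], X_te)
```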

CheX-DS: Improving Chest X-ray Image Classification with Ensemble Learning Based on DenseNet and Swin Transformer

Xinran Li, Yu Liu, Xiujuan Xu, Xiaowei Zhao

arXiv preprint · May 16, 2025
The automatic diagnosis of chest diseases is a popular and challenging task. Most current methods are based on convolutional neural networks (CNNs), which focus on local features while neglecting global ones. Recently, self-attention mechanisms have been introduced into computer vision, demonstrating superior performance. This paper therefore proposes an effective model, CheX-DS, for classifying long-tailed, multi-label chest X-ray data. The model ensembles DenseNet, a CNN that performs well on medical imaging, with the Swin Transformer, using ensemble deep learning to leverage the complementary strengths of CNNs and Transformers. The loss function of CheX-DS combines weighted binary cross-entropy loss with asymmetric loss, effectively addressing data imbalance. Evaluated on the NIH ChestX-ray14 dataset, the model outperforms previous studies with an average AUC score of 83.76%, demonstrating its superior performance.
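
The loss design is the most transferable part of the recipe. Below is a hedged sketch of a weighted binary cross-entropy combined with a simplified asymmetric loss for multi-label classification; the blending weight and focusing parameters are illustrative, and the asymmetric term omits the probability-shifting used in some formulations:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, targets, pos_weight, gamma_neg=4.0, gamma_pos=0.0, lam=0.5):
    """Weighted BCE + simplified asymmetric loss for multi-label targets.
    `pos_weight` is a per-class tensor up-weighting rare positive labels;
    `lam` blends the two terms. All values are illustrative assumptions."""
    # Weighted binary cross-entropy over the multi-label targets.
    wbce = F.binary_cross_entropy_with_logits(logits, targets, pos_weight=pos_weight)
    # Simplified asymmetric term: down-weight easy negatives more aggressively
    # than positives (gamma_neg > gamma_pos), focal-style.
    p = torch.sigmoid(logits)
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p.pow(gamma_neg) * torch.log((1 - p).clamp(min=1e-8))
    asl = -(loss_pos + loss_neg).mean()
    return lam * wbce + (1 - lam) * asl
```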

From Embeddings to Accuracy: Comparing Foundation Models for Radiographic Classification

Xue Li, Jameson Merkow, Noel C. F. Codella, Alberto Santamaria-Pang, Naiteek Sangani, Alexander Ersoy, Christopher Burt, John W. Garrett, Richard J. Bruce, Joshua D. Warner, Tyler Bradshaw, Ivan Tarapov, Matthew P. Lungren, Alan B. McMillan

arXiv preprint · May 16, 2025
Foundation models, pretrained on extensive datasets, have significantly advanced machine learning by providing robust and transferable embeddings applicable to various domains, including medical imaging diagnostics. This study evaluates the utility of embeddings derived from both general-purpose and medical domain-specific foundation models for training lightweight adapter models for multi-class radiography classification, focusing specifically on tube placement assessment. A dataset comprising 8842 radiographs classified into seven distinct categories was used to extract embeddings with six foundation models: DenseNet121, BiomedCLIP, Med-Flamingo, MedImageInsight, Rad-DINO, and CXR-Foundation. Adapter models were then trained using classical machine learning algorithms. Among these combinations, MedImageInsight embeddings paired with a support vector machine adapter yielded the highest mean area under the curve (mAUC) at 93.8%, followed closely by Rad-DINO (91.1%) and CXR-Foundation (89.0%). In comparison, BiomedCLIP and DenseNet121 exhibited moderate performance with mAUC scores of 83.0% and 81.8%, respectively, whereas Med-Flamingo delivered the lowest performance at 75.1%. Notably, most adapter models were computationally efficient, training within one minute and running inference within seconds on CPU, underscoring their practicality for clinical applications. Furthermore, fairness analyses of adapters trained on MedImageInsight-derived embeddings indicated minimal disparities, with gender differences in performance within 2% and standard deviations across age groups not exceeding 3%. These findings confirm that foundation model embeddings, especially those from MedImageInsight, facilitate accurate, computationally efficient, and equitable diagnostic classification using lightweight adapters for radiographic image analysis.
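
The adapter approach is easy to reproduce once embeddings are cached: a classical classifier on frozen features, scored with a macro one-vs-rest AUC. A sketch with synthetic stand-ins for the embeddings and the seven tube-placement labels (dimensions and split are assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

# emb: embeddings precomputed by a frozen foundation model (e.g. MedImageInsight);
# y: one of seven tube-placement categories. Synthetic stand-ins below.
rng = np.random.default_rng(0)
emb = rng.normal(size=(800, 512))
y = rng.integers(0, 7, size=800)

# Lightweight adapter: an SVM trained directly on the frozen embeddings.
clf = SVC(probability=True).fit(emb[:600], y[:600])
scores = clf.predict_proba(emb[600:])

# Macro one-vs-rest AUC across the seven classes, analogous to the paper's mAUC.
print("mAUC:", roc_auc_score(y[600:], scores, multi_class="ovr", average="macro"))
```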

Comparative analysis of deep learning methods for breast ultrasound lesion detection and classification.

Vallez N, Mateos-Aparicio-Ruiz I, Rienda MA, Deniz O, Bueno G

PubMed · May 16, 2025
Breast ultrasound (BUS) computer-aided diagnosis (CAD) systems aim to perform two major steps: detecting lesions and classifying them as benign or malignant. However, the impact of combining both steps had not previously been addressed, and the specific method employed can influence the final outcome of the system. In this work, the effects of using object detection, semantic segmentation and instance segmentation to detect lesions in BUS images were compared. To this end, four approaches were examined: (a) multi-class object detection; (b) one-class object detection followed by localized region classification; (c) multi-class segmentation; and (d) one-class segmentation followed by segmented region classification. Additionally, a novel dataset for BUS segmentation, called BUS-UCLM, has been gathered, annotated and shared publicly. The proposed methods were evaluated on this new dataset and on four publicly available datasets: BUSI, OASBUD, RODTOOK and UDIAT. Among the four approaches compared, multi-class detection and multi-class segmentation achieved the best results when instance segmentation CNNs were used. The best detection results were obtained with a multi-class Mask R-CNN, with a COCO AP50 of 72.9%. In the multi-class segmentation scenario, Poolformer achieved the best results, with a Dice score of 77.7%. The analysis of detection and segmentation models in BUS highlights several key challenges, emphasizing the complexity of accurately identifying and segmenting lesions. Among the methods evaluated, instance segmentation proved the most effective for BUS images, offering superior performance in delineating individual lesions.
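
For reference, the Dice score reported for the segmentation scenarios is typically computed per mask as below (a minimal binary-mask version; multi-class evaluation averages this over classes or instances):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```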

GOUHFI: a novel contrast- and resolution-agnostic segmentation tool for Ultra-High Field MRI

Marc-Antoine Fortin, Anne Louise Kristoffersen, Michael Staff Larsen, Laurent Lamalle, Ruediger Stirnberg, Paal Erik Goa

arXiv preprint · May 16, 2025
Recently, Ultra-High Field MRI (UHF-MRI) has become more widely available and is one of the best tools to study the brain. A common step in quantitative neuroimaging is brain segmentation. However, the differences between UHF-MRI and 1.5-3T images are such that automatic segmentation techniques optimized at those field strengths usually produce unsatisfactory results on UHF images. It has been particularly challenging to perform the quantitative analyses typically done with 1.5-3T data, considerably limiting the potential of UHF-MRI. Hence, we propose a novel Deep Learning (DL)-based segmentation technique called GOUHFI: Generalized and Optimized segmentation tool for Ultra-High Field Images, designed to segment UHF images of various contrasts and resolutions. For training, we used a total of 206 label maps from four datasets acquired at 3T, 7T and 9.4T. In contrast to most DL strategies, we used a previously proposed domain randomization approach, in which synthetic images generated from the label maps were used to train a 3D U-Net. GOUHFI was tested on seven different datasets and compared to techniques such as FastSurferVINN and CEREBRUM-7T. GOUHFI segmented all six contrasts and seven resolutions tested, with average Dice-Sorensen Similarity Coefficient (DSC) scores of 0.87, 0.84 and 0.91 against the ground-truth segmentations at 3T, 7T and 9.4T, respectively. Moreover, GOUHFI demonstrated impressive resistance to the intensity inhomogeneities typical of UHF-MRI, making it a powerful new segmentation tool that allows the usual quantitative analysis pipelines to be applied at UHF as well. Ultimately, GOUHFI is a promising new segmentation tool, the first of its kind to offer a contrast- and resolution-agnostic alternative for UHF-MRI, for neuroscientists working at UHF or even lower field strengths.
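
The domain randomization idea, generating synthetic training images directly from label maps so the network never sees a fixed contrast or field strength, can be sketched in a few lines. The full generator referenced by the paper typically also randomizes bias fields, resolution, and artifacts, which this toy version omits:

```python
import numpy as np

def synth_from_labels(label_map, rng=None):
    """Render a synthetic intensity image from a segmentation label map by
    drawing a random mean and standard deviation per label, a toy version of
    the domain-randomization generators used for contrast-agnostic training."""
    rng = rng or np.random.default_rng()
    image = np.zeros(label_map.shape, dtype=np.float32)
    for lab in np.unique(label_map):
        mu, sigma = rng.uniform(0, 255), rng.uniform(1, 25)  # random tissue contrast
        mask = label_map == lab
        image[mask] = rng.normal(mu, sigma, size=mask.sum())
    return image
```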

Assessing fetal lung maturity: Integration of ultrasound radiomics and deep learning.

Chen W, Zeng B, Ling X, Chen C, Lai J, Lin J, Liu X, Zhou H, Guo X

PubMed · May 16, 2025
This study built a model to predict fetal lung maturity by combining radiomics and deep learning methods. We examined ultrasound images from 263 pregnancies, captured with the GE Voluson E8 system, and extracted and analyzed radiomic features from them. These features were integrated with clinical data via deep learning algorithms such as DenseNet121 to enhance the accuracy of fetal lung maturity assessment. The combined model was validated with receiver operating characteristic (ROC) curves, calibration plots, and decision curve analysis (DCA). The resulting accuracy and reliability indicate that this method significantly improves prediction of fetal lung maturity. This novel non-invasive diagnostic approach highlights the potential advantages of integrating diverse data sources to enhance prenatal care and infant health. The study lays the groundwork for validation and refinement of the model across various healthcare settings.
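
One common way to realize this kind of radiomics-plus-deep-learning integration is late fusion: concatenating a CNN's image features with tabular radiomic and clinical features before a small classification head. A hedged sketch below; the layer sizes are illustrative and, beyond the abstract's mention of DenseNet121, the study's exact architecture is not specified:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class FusionModel(nn.Module):
    """Late-fusion sketch: DenseNet121 image features concatenated with
    tabular radiomic/clinical features, then a small classification head."""
    def __init__(self, n_tabular, n_classes=2):
        super().__init__()
        backbone = densenet121(weights=None)
        backbone.classifier = nn.Identity()  # expose the 1024-d feature vector
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(1024 + n_tabular, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, image, tabular):
        feats = self.backbone(image)                      # (B, 1024) image features
        return self.head(torch.cat([feats, tabular], 1))  # (B, n_classes) logits
```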