Page 22 of 30297 results

Systematic review and epistemic meta-analysis to advance binomial AI-radiomics integration for predicting high-grade glioma progression and enhancing patient management.

Chilaca-Rosas MF, Contreras-Aguilar MT, Pallach-Loose F, Altamirano-Bustamante NF, Salazar-Calderon DR, Revilla-Monsalve C, Heredia-Gutiérrez JC, Conde-Castro B, Medrano-Guzmán R, Altamirano-Bustamante MM

PubMed · May 8, 2025
High-grade gliomas, particularly glioblastoma, are among the most aggressive and lethal central nervous system tumors, necessitating advanced diagnostic and prognostic strategies. This systematic review and epistemic meta-analysis explores the integration of Artificial Intelligence (AI) and Radiomics Inter-field (AIRI) to enhance predictive modeling for tumor progression. A comprehensive literature search identified 19 high-quality studies, which were analyzed to evaluate radiomic features and machine learning models for predicting overall survival (OS) and progression-free survival (PFS). Key findings highlight the predictive strength of specific MRI-derived radiomic features, such as log-filter and Gabor textures, and the superior performance of Support Vector Machine (SVM) and Random Forest (RF) models, which achieved high accuracy and AUC scores (e.g., 98% AUC and 98.7% accuracy for OS). This review characterizes the current state of the AIRI field and shows that published articles report their results with different performance indicators and metrics, making outcomes heterogeneous and difficult to integrate. It also finds that some current articles use biased methodologies. This study proposes a structured AIRI development roadmap and guidelines to avoid bias and make results comparable, emphasizing standardized feature extraction and AI model training to improve reproducibility across clinical settings. By advancing precision medicine, AIRI integration has the potential to refine clinical decision-making and enhance patient outcomes.
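One of the abstract's complaints is that studies report survival-prediction performance with inconsistent metrics. A minimal sketch of computing the two metrics it cites (AUC and accuracy) in a consistent way is shown below; the labels and scores are hypothetical, not data from any reviewed study.

```python
# Illustrative sketch: consistent AUC and accuracy for a binary OS-prediction
# task. `labels` are 1 for progression/event; `scores` are model probabilities.

def auc(labels, scores):
    """Rank-based AUC, equivalent to the normalized Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Fraction of cases whose thresholded score matches the label."""
    return sum((s >= threshold) == bool(y) for s, y in zip(scores, labels)) / len(labels)

labels = [1, 1, 0, 0, 1, 0]            # hypothetical ground truth
scores = [0.9, 0.8, 0.3, 0.4, 0.7, 0.2]  # hypothetical model outputs
print(auc(labels, scores), accuracy(labels, scores))
```

Reporting both a threshold-free metric (AUC) and a thresholded one (accuracy) side by side is one way to make results comparable across studies, as the proposed guidelines advocate.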

Effective data selection via deep learning processes and corresponding learning strategies in ultrasound image classification.

Lee H, Kwak JY, Lee E

PubMed · May 8, 2025
In this study, we propose a novel approach to enhancing transfer learning by optimizing data selection through deep learning techniques and corresponding innovative learning strategies. This method is particularly beneficial when the available dataset has reached its limit and cannot be further expanded. Our approach focuses on maximizing the use of existing data to improve learning outcomes, which offers an effective solution for data-limited applications in medical imaging classification. The proposed method consists of two stages. In the first stage, an original network performs the initial classification. When the original network exhibits low confidence in its predictions, ambiguous classifications are passed to a secondary decision-making step involving a newly trained network, referred to as the True network. The True network shares the same architecture as the original network but is trained on a subset of the original dataset selected based on consensus among multiple independent networks. It is then used to verify the classification results of the original network, identifying and correcting any misclassified images. To evaluate the effectiveness of our approach, we conducted experiments using thyroid nodule ultrasound images with the ResNet101 and Vision Transformer architectures, along with eleven other pre-trained neural networks. The proposed method led to performance improvements across all five key metrics (accuracy, sensitivity, specificity, F1-score, and AUC) compared to using only the original or True networks in ResNet101. Additionally, the True network showed strong performance when applied to the Vision Transformer, and similar enhancements were observed across multiple convolutional neural network architectures. Furthermore, to assess the robustness and adaptability of our method across different medical imaging modalities, we applied it to dermoscopic images and observed similar performance enhancements. These results provide evidence of the effectiveness of our approach in improving transfer learning-based medical image classification without requiring additional training data.
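The two-stage decision flow described above can be sketched as confidence-gated deferral: the original network answers when it is confident, and ambiguous cases are routed to the consensus-trained True network. The function names, threshold, and toy networks below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of confidence-gated two-stage classification.
# Each "network" is any callable returning a {class_label: probability} dict.

def two_stage_predict(x, original_net, true_net, confidence_threshold=0.8):
    probs = original_net(x)
    label, conf = max(probs.items(), key=lambda kv: kv[1])
    if conf >= confidence_threshold:
        return label                      # confident: keep original prediction
    # ambiguous: defer to the consensus-trained True network for verification
    verified = true_net(x)
    return max(verified.items(), key=lambda kv: kv[1])[0]

# Toy stand-ins for the two networks:
original = lambda x: {"benign": 0.55, "malignant": 0.45}  # low confidence
true_net = lambda x: {"benign": 0.10, "malignant": 0.90}
print(two_stage_predict(None, original, true_net))  # deferred to True network
```

The threshold trades off how much work is deferred: a higher threshold sends more borderline cases to the second stage.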

Patient-specific uncertainty calibration of deep learning-based autosegmentation networks for adaptive MRI-guided lung radiotherapy.

Rabe M, Meliadò EF, Marschner S, Belka C, Corradini S, Van den Berg CAT, Landry G, Kurz C

PubMed · May 8, 2025
Uncertainty assessment of deep learning autosegmentation (DLAS) models can support contour corrections in adaptive radiotherapy (ART), e.g. by utilizing Monte Carlo Dropout (MCD) uncertainty maps. However, poorly calibrated uncertainties at the patient level often render these clinically nonviable. We evaluated population-based and patient-specific DLAS accuracy and uncertainty calibration and propose a patient-specific post-training uncertainty calibration method for DLAS in ART.

Approach. The study included 122 lung cancer patients treated with a low-field MR-linac (80/19/23 training/validation/test cases). Ten single-label 3D U-Net population-based baseline models (BM) were trained with dropout using planning MRIs (pMRIs) and contours for nine organs-at-risk (OARs) and gross tumor volumes (GTVs). Patient-specific models (PS) were created by fine-tuning BMs with each test patient's pMRI. Model uncertainty was assessed with MCD and averaged into probability maps. Uncertainty calibration was evaluated with reliability diagrams and expected calibration error (ECE). The proposed post-training calibration method rescaled MCD probabilities for fraction images in BM (calBM) and PS (calPS) after fitting reliability diagrams from pMRIs. All models were evaluated on fraction images using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), and ECE. Metrics were compared among models for all OARs combined (n=163) and the GTV (n=23) using Friedman and post-hoc Nemenyi tests (α=0.05).

Main results. For the OARs, patient-specific fine-tuning significantly (p<0.001) increased median DSC from 0.78 (BM) to 0.86 (PS) and reduced HD95 from 14 mm (BM) to 6.0 mm (PS). Uncertainty calibration achieved substantial reductions in ECE, from 0.25 (BM) to 0.091 (calBM) and from 0.22 (PS) to 0.11 (calPS) (p<0.001), without significantly affecting DSC or HD95 (p>0.05). For the GTV, BM performance was poor (DSC=0.05) but significantly (p<0.001) improved with PS training (DSC=0.75), while uncertainty calibration reduced ECE from 0.22 (PS) to 0.15 (calPS) (p=0.45).

Significance. Post-training uncertainty calibration yields geometrically accurate DLAS models with well-calibrated uncertainty estimates, crucial for ART applications.
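The ECE metric central to this abstract can be sketched directly: predictions are grouped into equal-width confidence bins (as in a reliability diagram), and ECE is the weighted average gap between mean confidence and observed frequency per bin. The voxel-level probabilities and labels below are hypothetical, and this is a generic ECE, not the authors' exact pipeline.

```python
# Sketch of expected calibration error (ECE) over equal-width confidence bins.

def expected_calibration_error(probs, labels, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)   # mean predicted probability
        frac_pos = sum(y for _, y in b) / len(b)   # observed positive fraction
        ece += len(b) / n * abs(avg_conf - frac_pos)
    return ece

probs = [0.95, 0.9, 0.85, 0.2, 0.1, 0.15]  # hypothetical MCD probabilities
labels = [1, 1, 0, 0, 0, 1]                # hypothetical voxel ground truth
print(expected_calibration_error(probs, labels))
```

The paper's post-training calibration can be read as rescaling the probabilities so that, per bin, mean confidence moves toward the observed fraction, driving this number down.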

Relevance of choroid plexus volumes in multiple sclerosis.

Krieger B, Bellenberg B, Roenneke AK, Schneider R, Ladopoulos T, Abbas Z, Rust R, Schmitz-Hübsch T, Chien C, Gold R, Paul F, Lukas C

PubMed · May 8, 2025
The choroid plexus (ChP) plays a pivotal role in the inflammatory processes that occur in multiple sclerosis (MS). Enlargement of the ChP in relapsing-remitting multiple sclerosis (RRMS) is considered an indication of disease activity and has been associated with periventricular remyelination failure. This cross-sectional study aimed to identify the relationship between the ChP and the periventricular tissue damage that occurs in MS, and to elucidate the role of neuroinflammation in primary progressive multiple sclerosis (PPMS). ChP volume was assessed by a novel deep learning segmentation method based on structural MRI data acquired at two centers. In total, 141 RRMS and 64 PPMS patients were included, along with 75 healthy control subjects. In addition, T1w/FLAIR ratios were calculated within periventricular bands to quantify microstructural tissue damage and to assess its relationship to ChP volume. Compared to healthy controls, ChP volumes were significantly increased in RRMS, but not in patients with PPMS. T1w/FLAIR ratios in the normal-appearing white matter (NAWM), which showed periventricular gradients, were decreased in patients with multiple sclerosis compared to healthy control subjects, and lower T1w/FLAIR ratios radiating out from the lateral ventricles were found in patients with PPMS. A relationship between ChP volume and T1w/FLAIR ratio in NAWM was found within the inner periventricular bands in RRMS patients. Longer disease duration was associated with larger ChP volumes only in RRMS patients. Enlarged ChP volumes were also significantly associated with reduced cortex volumes and increased lesion volumes in RRMS. Our analysis confirmed that the ChP is significantly enlarged in patients with RRMS, that this enlargement is related to brain lesion volumes, and that its association with disease duration suggests a dynamic development. Plexus enlargement was further associated with periventricular demyelination or tissue damage assessed by T1w/FLAIR ratios in RRMS. Furthermore, we did not find ChP enlargement in patients with PPMS, possibly indicating reduced involvement of inflammatory processes in the progressive phase of MS. The association between enlarged ChP volumes and cortical atrophy in RRMS highlights the vulnerability of structures close to the CSF.
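The periventricular-band analysis can be pictured as averaging the T1w/FLAIR ratio within successive distance shells from the lateral ventricles. The sketch below is a heavily simplified stand-in for that idea: voxels are reduced to (distance, T1w, FLAIR) tuples, and the band width, band count, and values are all assumptions for illustration only.

```python
# Simplified sketch of a periventricular-band T1w/FLAIR analysis.
# Each voxel: (distance_from_ventricle_mm, t1w_intensity, flair_intensity).

def band_ratios(voxels, band_width_mm=3, n_bands=3):
    sums = [0.0] * n_bands
    counts = [0] * n_bands
    for dist, t1w, flair in voxels:
        band = int(dist // band_width_mm)      # which distance shell
        if band < n_bands and flair > 0:
            sums[band] += t1w / flair          # per-voxel ratio
            counts[band] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

voxels = [(1.0, 80, 100), (2.5, 90, 100), (4.0, 95, 100), (7.5, 100, 100)]
print(band_ratios(voxels))
```

In the toy data the ratio rises with distance from the ventricles, mirroring the periventricular gradient (lower ratios, i.e. more tissue damage, nearest the ventricles) that the study reports.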

Hierarchical diagnosis of breast phyllodes tumors enabled by deep learning of ultrasound images: a retrospective multi-center study.

Yan Y, Liu Y, Wang Y, Jiang T, Xie J, Zhou Y, Liu X, Yan M, Zheng Q, Xu H, Chen J, Sui L, Chen C, Ru R, Wang K, Zhao A, Li S, Zhu Y, Zhang Y, Wang VY, Xu D

PubMed · May 8, 2025
Phyllodes tumors (PTs) are rare breast tumors with high recurrence rates; current methods, which rely on post-resection pathology, often delay detection and require further surgery. We propose a deep-learning-based Phyllodes Tumors Hierarchical Diagnosis Model (PTs-HDM) for preoperative identification and grading. Ultrasound images from five hospitals were retrospectively collected, with all patients having undergone surgical pathological confirmation of either PTs or fibroadenomas (FAs). PTs-HDM follows a two-stage classification: first distinguishing PTs from FAs, then grading PTs as benign or borderline/malignant. Model performance metrics, including AUC and accuracy, were quantitatively evaluated. A comparative analysis was conducted between the algorithm's diagnostic capabilities and those of radiologists with varying clinical experience within an external validation cohort. By providing PTs-HDM's automated classification outputs and associated thermal activation mapping guidance, we systematically assessed the enhancement in radiologists' diagnostic concordance and classification accuracy. A total of 712 patients were included. On the external test set, PTs-HDM achieved an AUC of 0.883 and an accuracy of 87.3% for PT vs. FA classification. Subgroup analysis showed high accuracy for tumors < 2 cm (90.9%). In hierarchical classification, the model obtained an AUC of 0.856 and an accuracy of 80.9%. Radiologists' performance improved with PTs-HDM assistance, with binary classification accuracy increasing from 82.7%, 67.7%, and 64.2% to 87.6%, 76.6%, and 82.1% for senior, attending, and resident radiologists, respectively. Their hierarchical classification AUCs improved from 0.566–0.827 to 0.725–0.837. PTs-HDM also enhanced inter-radiologist consistency, increasing Kappa values from −0.05–0.41 to 0.12–0.65 and the intraclass correlation coefficient from 0.19 to 0.45. PTs-HDM shows strong diagnostic performance, especially for small lesions, and improves radiologists' accuracy across all experience levels, bridging diagnostic gaps and providing reliable support for the hierarchical diagnosis of PTs.
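The inter-radiologist consistency reported above is measured with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical ratings (not the study's data) follows.

```python
# Sketch of Cohen's kappa for agreement between two raters.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: product of each rater's marginal category frequencies
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

a = ["PT", "PT", "FA", "FA", "PT", "FA"]  # hypothetical radiologist A
b = ["PT", "FA", "FA", "FA", "PT", "PT"]  # hypothetical radiologist B
print(cohens_kappa(a, b))
```

Kappa of 0 means chance-level agreement and 1 means perfect agreement, which is why the study's jump from near-zero values to 0.12–0.65 with model assistance indicates a real consistency gain.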

Chest X-Ray Visual Saliency Modeling: Eye-Tracking Dataset and Saliency Prediction Model.

Lou J, Wang H, Wu X, Ng JCH, White R, Thakoor KA, Corcoran P, Chen Y, Liu H

PubMed · May 8, 2025
Radiologists' eye movements during medical image interpretation reflect the perceptual-cognitive processes behind their diagnostic decisions. Eye movement data can be modeled to represent clinically relevant regions in a medical image and potentially integrated into an artificial intelligence (AI) system for automatic diagnosis in medical imaging. In this article, we first conduct a large-scale eye-tracking study involving 13 radiologists interpreting 191 chest X-ray (CXR) images, establishing a best-of-its-kind CXR visual saliency benchmark. We then perform analyses to quantify the reliability and clinical relevance of saliency maps (SMs) generated for CXR images. We develop CXRSalNet, a novel CXR saliency prediction model that leverages radiologists' gaze information to optimize the use of unlabeled CXR images, enhancing training and mitigating data scarcity. We also demonstrate the application of our CXR saliency model in enhancing the performance of AI-powered diagnostic imaging systems.
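A common way to turn discrete fixation points into the continuous saliency maps such benchmarks use is to accumulate a Gaussian blob at each fixation and normalize. The sketch below illustrates that generic construction; the grid size, sigma, and fixation coordinates are assumptions, not details from the paper.

```python
import math

# Sketch: build a normalized saliency map from gaze fixation points by
# summing an isotropic Gaussian centered at each fixation.
def fixations_to_saliency(fixations, width, height, sigma=2.0):
    sal = [[0.0] * width for _ in range(height)]
    for fx, fy in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                sal[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    peak = max(max(row) for row in sal)
    return [[v / peak for v in row] for row in sal]  # normalize to [0, 1]

sal = fixations_to_saliency([(4, 4), (5, 4)], width=10, height=8)
print(sal[4][4], sal[0][9])  # high near the fixations, near zero far away
```

In practice sigma is often tied to the viewing angle of the fovea; a saliency prediction model like CXRSalNet is then trained to regress maps of this kind from the image alone.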

Application of Artificial Intelligence to Deliver Healthcare From the Eye.

Weinreb RN, Lee AY, Baxter SL, Lee RWJ, Leng T, McConnell MV, El-Nimri NW, Rhew DC

PubMed · May 8, 2025
Oculomics is the science of analyzing ocular data to identify, diagnose, and manage systemic disease. This article focuses on prescreening: the use of retinal images analyzed by artificial intelligence (AI) to identify ocular or systemic disease, or potential disease, in asymptomatic individuals. The implementation of prescreening in a coordinated care system, defined as Healthcare From the Eye prescreening, has the potential to improve the access, affordability, equity, quality, and safety of health care on a global level. Stakeholders include physicians, payers, policymakers, regulators, and representatives from the industry, government, and data privacy sectors. The combination of AI analysis of ocular data with automated technologies that capture images during routine eye examinations enables prescreening of large populations for chronic disease. Retinal images can be acquired during a routine eye examination, or in settings outside of eye care, with readily accessible, safe, quick, and noninvasive retinal imaging devices. The outcome of such an examination can then be digitally communicated across relevant stakeholders in a coordinated fashion to direct a patient to screening and monitoring services. Such an approach offers the opportunity to transform health care delivery, improve early disease detection, improve access to care, enhance equity (especially in rural and underserved communities), and reduce costs. With effective implementation and collaboration among key stakeholders, this approach has the potential to contribute to an equitable and effective health care system.

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

PubMed · May 8, 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis is to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, for relevant studies up to 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001) and DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), as well as shorter CT-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly lower door-in-door-out time than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite the improvement in workflow metrics, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.
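The pooled SMD values quoted above come from inverse-variance weighting of per-study effect sizes. A hedged sketch of fixed-effect pooling follows; the per-study SMDs and standard errors below are invented for illustration and are not the meta-analysis's actual inputs (which would also typically use a random-effects model when heterogeneity is present).

```python
import math

# Sketch: fixed-effect inverse-variance pooling of standardized mean
# differences (SMDs) with a 95% confidence interval.
def pool_smd(smds, std_errs):
    weights = [1 / se ** 2 for se in std_errs]          # inverse-variance weights
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

smds = [-0.71, -0.50, -0.55]   # hypothetical per-study workflow-time effects
ses = [0.14, 0.08, 0.11]       # hypothetical standard errors
pooled, (lo, hi) = pool_smd(smds, ses)
print(pooled, lo, hi)
```

Negative pooled SMDs with confidence intervals excluding zero correspond to the significant workflow-time reductions the review reports.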

MRI-based machine learning reveals proteasome subunit PSMB8-mediated malignant glioma phenotypes through activating TGFBR1/2-SMAD2/3 axis.

Pei D, Ma Z, Qiu Y, Wang M, Wang Z, Liu X, Zhang L, Zhang Z, Li R, Yan D

PubMed · May 8, 2025
Gliomas are the most prevalent and aggressive neoplasms of the central nervous system, representing a major challenge for effective treatment and patient prognosis. This study identifies the proteasome subunit beta type-8 (PSMB8/LMP7) as a promising prognostic biomarker for glioma. Using a multiparametric radiomic model derived from preoperative magnetic resonance imaging (MRI), we accurately predicted PSMB8 expression levels. Notably, radiomic prediction of poor prognosis was highly consistent with elevated PSMB8 expression. Our findings demonstrate that PSMB8 depletion not only suppressed glioma cell proliferation and migration but also induced apoptosis via activation of the transforming growth factor beta (TGF-β) signaling pathway. This was supported by downregulation of key receptors (TGFBR1 and TGFBR2). Furthermore, interference with PSMB8 expression impaired phosphorylation and nuclear translocation of SMAD2/3, critical mediators of TGF-β signaling. Consequently, these molecular alterations resulted in reduced tumor progression and enhanced sensitivity to temozolomide (TMZ), a standard chemotherapeutic agent. Overall, our findings highlight PSMB8's pivotal role in glioma pathophysiology and its potential as a prognostic marker. This study also demonstrates the clinical utility of MRI radiomics for preoperative risk stratification and pre-diagnosis. Targeted inhibition of PSMB8 may represent a therapeutic strategy to overcome TMZ resistance and improve glioma patient outcomes.

Are Diffusion Models Effective Good Feature Extractors for MRI Discriminative Tasks?

Li B, Sun Z, Li C, Kamagata K, Andica C, Uchida W, Takabayashi K, Guo S, Zou R, Aoki S, Tanaka T, Zhao Q

PubMed · May 8, 2025
Diffusion models (DMs) excel in pixel-level and spatial tasks and are proven feature extractors for 2D image discriminative tasks when pretrained. However, their capabilities in 3D MRI discriminative tasks remain largely untapped. This study assesses the effectiveness of DMs in this underexplored area. We use 59,830 T1-weighted MR images (T1WIs) from the extensive, yet unlabeled, UK Biobank dataset. Additionally, we apply 369 T1WIs from the BraTS2020 dataset for brain tumor classification and 421 T1WIs from the ADNI1 dataset for the diagnosis of Alzheimer's disease. First, a high-performing denoising diffusion probabilistic model (DDPM) with a U-Net backbone is pretrained on the UK Biobank and then fine-tuned on the BraTS2020 and ADNI1 datasets. We then assess its feature representation capabilities for discriminative tasks using linear probes. Finally, we introduce a novel fusion module, named CATS, that enhances the U-Net representations, thereby improving performance on discriminative tasks. Our DDPM produces synthetic images of high quality that match the distribution of the raw datasets. Subsequent analysis reveals that DDPM features extracted from middle blocks and smaller timesteps are of high quality. Leveraging these features, the CATS module, with just 1.7M additional parameters, achieved average classification scores of 0.7704 and 0.9217 on the BraTS2020 and ADNI1 datasets, respectively, demonstrating performance competitive with representations extracted from the transferred DDPM model, as well as with the 33.23M-parameter ResNet18 trained from scratch. We find that pretraining a DM on a large-scale dataset and then fine-tuning it on limited data from discriminative datasets is a viable approach for MRI data. With these well-performing DMs, we show that they excel not just in generation tasks but also as feature extractors when combined with our proposed CATS module.
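The linear-probe evaluation used above means freezing the pretrained feature extractor and training only a single linear classifier on its outputs. The sketch below shows that protocol on synthetic 2D "features" via plain logistic regression; the DDPM feature extractor itself is out of scope, and all data and hyperparameters here are assumptions.

```python
import math

# Sketch: train a linear probe (logistic regression by SGD) on frozen features.
def train_linear_probe(features, labels, lr=0.5, epochs=200):
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))      # sigmoid
            g = p - y                       # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Synthetic, linearly separable "features": class 1 has a larger first component.
feats = [[2.0, 0.1], [1.8, -0.2], [-1.9, 0.3], [-2.1, 0.0]]
labs = [1, 1, 0, 0]
w, b = train_linear_probe(feats, labs)
print([round(predict(w, b, x)) for x in feats])
```

Because only the linear layer is trained, probe accuracy directly measures how discriminative the frozen features are, which is what motivates comparing mid-block DDPM features against a ResNet18 trained end to end.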