
Non-invasive prediction of nuclear grade in renal cell carcinoma using CT-Based radiomics: a systematic review and meta-analysis.

Salimi M, Hajikarimloo B, Vadipour P, Abdolizadeh A, Fayedeh F, Seifi S

PubMed · Jun 11 2025
Renal cell carcinoma (RCC) represents the most prevalent malignant neoplasm of the kidney, with a rising global incidence. Tumor nuclear grade is a crucial prognostic factor that guides treatment decisions, but current histopathological grading via biopsy is invasive and prone to sampling errors. This study aims to assess the diagnostic performance and quality of CT-based radiomics for preoperatively predicting RCC nuclear grade. A comprehensive search was conducted across PubMed, Scopus, Embase, and Web of Science to identify relevant studies published up to 19 April 2025. Study quality was assessed using the QUADAS-2 and METRICS tools. A bivariate random-effects meta-analysis was performed to evaluate model performance, including sensitivity, specificity, and area under the curve (AUC). Results from validation cohorts were pooled, and clinical and combined models were analyzed in separate analyses. A total of 26 studies comprising 1993 individuals across 10 external and 16 internal validation cohorts were included. Meta-analysis of radiomics models showed a pooled AUC of 0.88, a sensitivity of 0.78, and a specificity of 0.82. Clinical and combined (clinical-radiomics) models showed AUCs of 0.73 and 0.86, respectively. QUADAS-2 revealed significant risk of bias in the Index Test and Flow and Timing domains. METRICS scores ranged from 49.7% to 88.4%, with an average of 66.65%, indicating overall good quality, though gaps in some aspects of study methodology were identified. This study suggests that radiomics models show strong potential and good diagnostic accuracy for non-invasive preoperative nuclear grading of RCC. However, challenges related to generalizability and clinical applicability remain; further research with standardized methodologies, external validation, and larger cohorts is needed to enhance their reliability and integration into routine clinical practice.
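
Editor's note: for readers unfamiliar with how per-cohort results are pooled, the following is a minimal Python sketch of a random-effects pooling step on the logit scale — a univariate simplification of the bivariate model used in the study. The per-study counts are invented for illustration only.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of proportions
# (e.g., sensitivity or specificity) on the logit scale. Illustrative only;
# the actual study fit a bivariate random-effects model.
import numpy as np

def pool_logit_proportions(events, totals):
    """Random-effects pooled proportion across studies."""
    events = np.asarray(events, dtype=float) + 0.5   # continuity correction
    totals = np.asarray(totals, dtype=float) + 1.0
    p = events / totals
    y = np.log(p / (1 - p))                          # per-study logit
    v = 1.0 / events + 1.0 / (totals - events)       # within-study variance
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_star = 1.0 / (v + tau2)
    y_re = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-y_re))               # back-transform to a proportion

# Illustrative counts: true positives / diseased, true negatives / non-diseased
tp, pos = [40, 55, 30], [50, 70, 40]
tn, neg = [80, 60, 45], [95, 75, 55]
print("pooled sensitivity:", round(pool_logit_proportions(tp, pos), 3))
print("pooled specificity:", round(pool_logit_proportions(tn, neg), 3))
```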

Implementation of biomedical segmentation for brain tumor utilizing an adapted U-net model.

Alkhalid FF, Salih NZ

PubMed · Jun 11 2025
Magnetic resonance imaging (MRI) uses magnetic fields and radiofrequency signals to produce images that provide more information than typical scans. Diagnosing brain tumors from MRI is difficult because of the wide range of tumor shapes, areas, and visual features, so a universal, automated system is required to handle this task. Among deep learning methods, the U-Net architecture is the most widely used for diagnostic medical imaging, and attention-based U-Net variants are the most effective automated models for medical image segmentation across modalities. The self-attention structures used in the U-Net design allow fast global processing and better feature representation. This research aims to study the progress of U-Net designs and show how they improve the performance of brain tumor segmentation. We investigated three U-Net designs (standard U-Net, Attention U-Net, and self-attention U-Net), each trained for five epochs, to obtain the final segmentation. An MRI dataset of 3064 images from the Kaggle website is used to give a more comprehensive overview. We also offer a comparison with several studies based on U-Net structures to illustrate the evolution of this network from an accuracy standpoint. The self-attention U-Net demonstrated superior performance compared with other studies because self-attention can enhance segmentation quality, particularly for unclear structures, by concentrating on the most significant regions. Four main metrics are reported: a loss of 5.03%, a validation loss of 4.82%, a validation accuracy of 98.49%, and an accuracy of 98.45%.
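
Editor's note: as context for the attention mechanisms discussed above, here is a minimal PyTorch sketch of an additive attention gate applied to a U-Net skip connection. Channel sizes and the smoke test are illustrative and do not reproduce the authors' architecture.

```python
# Minimal sketch of an attention gate on a U-Net skip connection (Attention
# U-Net style additive attention). Illustrative shapes only.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Gates an encoder skip connection using the coarser decoder feature map."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # skip projection
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)     # gating projection
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)           # attention coefficients
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip, gate):
        # The gating signal comes from a coarser level; upsample to the skip resolution.
        gate = nn.functional.interpolate(self.phi(gate), size=skip.shape[2:],
                                         mode="bilinear", align_corners=False)
        attn = self.sigmoid(self.psi(self.relu(self.theta(skip) + gate)))
        return skip * attn   # suppress irrelevant regions, keep salient tumor features

# Tiny smoke test with illustrative tensor sizes
skip = torch.randn(1, 64, 128, 128)    # encoder feature map
gate = torch.randn(1, 128, 64, 64)     # decoder feature map one level deeper
print(AttentionGate(64, 128, 32)(skip, gate).shape)   # torch.Size([1, 64, 128, 128])
```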

Towards more reliable prostate cancer detection: Incorporating clinical data and uncertainty in MRI deep learning.

Taguelmimt K, Andrade-Miranda G, Harb H, Thanh TT, Dang HP, Malavaud B, Bert J

PubMed · Jun 11 2025
Prostate cancer (PCa) is one of the most common cancers among men, and artificial intelligence (AI) is emerging as a promising tool to enhance its diagnosis. This work proposes a classification approach for PCa cases using deep learning techniques. We conducted a comparison between unimodal models based either on biparametric magnetic resonance imaging (bpMRI) or clinical data (such as prostate-specific antigen levels, prostate volume, and age). We also introduced a bimodal model that simultaneously integrates imaging and clinical data to address the limitations of unimodal approaches. Furthermore, we propose a framework that not only detects the presence of PCa but also evaluates the uncertainty associated with the predictions. This approach makes it possible to identify highly confident predictions and distinguish them from those characterized by uncertainty, thereby enhancing the reliability and applicability of automated medical decisions in clinical practice. The results show that the bimodal model significantly improves performance, with an area under the curve (AUC) reaching 0.82±0.03 and a sensitivity of 0.73±0.04, while maintaining high specificity. Uncertainty analysis revealed that the bimodal model produces more confident predictions, with an uncertainty accuracy of 0.85, surpassing the imaging-only model (0.71). This increase in reliability is crucial in a clinical context, where precise and dependable diagnostic decisions are essential for patient care. The integration of clinical data with imaging data in a bimodal model not only improves diagnostic performance but also strengthens the reliability of predictions, making this approach particularly suitable for clinical use.
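
Editor's note: the abstract does not specify how uncertainty was quantified. The sketch below shows one common approach — Monte Carlo dropout over a late-fusion classifier — purely as an illustration, with placeholder feature dimensions and clinical variables.

```python
# Minimal sketch: fuse imaging features with clinical variables and estimate
# predictive uncertainty via Monte Carlo dropout. Illustrative only; not the
# authors' fusion or uncertainty method.
import torch
import torch.nn as nn

class BimodalClassifier(nn.Module):
    def __init__(self, img_feat_dim=512, clin_dim=3):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(img_feat_dim + clin_dim, 128),
            nn.ReLU(),
            nn.Dropout(p=0.3),          # kept stochastic at test time for MC dropout
            nn.Linear(128, 1),
        )

    def forward(self, img_feats, clinical):
        return torch.sigmoid(self.fusion(torch.cat([img_feats, clinical], dim=1)))

def mc_dropout_predict(model, img_feats, clinical, n_samples=30):
    """Mean prediction and predictive standard deviation across stochastic passes."""
    model.train()                        # keep dropout active
    with torch.no_grad():
        preds = torch.stack([model(img_feats, clinical) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

model = BimodalClassifier()
img_feats = torch.randn(4, 512)          # e.g., pooled bpMRI CNN features (placeholder)
clinical = torch.randn(4, 3)             # e.g., PSA, prostate volume, age (standardized)
mean, std = mc_dropout_predict(model, img_feats, clinical)
print(mean.squeeze(), std.squeeze())     # lower std -> more confident prediction
```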

Automated Whole-Brain Focal Cortical Dysplasia Detection Using MR Fingerprinting With Deep Learning.

Ding Z, Morris S, Hu S, Su TY, Choi JY, Blümcke I, Wang X, Sakaie K, Murakami H, Alexopoulos AV, Jones SE, Najm IM, Ma D, Wang ZI

PubMed · Jun 10 2025
Focal cortical dysplasia (FCD) is a common pathology for pharmacoresistant focal epilepsy, yet detection of FCD on clinical MRI is challenging. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique providing fast and reliable tissue property measurements. The aim of this study was to develop an MRF-based deep-learning (DL) framework for whole-brain FCD detection. We included patients with pharmacoresistant focal epilepsy and pathologically/radiologically diagnosed FCD, as well as age-matched and sex-matched healthy controls (HCs). All participants underwent 3D whole-brain MRF and clinical MRI scans. T1, T2, gray matter (GM), and white matter (WM) tissue fraction maps were reconstructed from a dictionary-matching algorithm based on the MRF acquisition. A 3D ROI was manually created for each lesion. All MRF maps and lesion labels were registered to the Montreal Neurological Institute space. Mean and SD T1 and T2 maps were calculated voxel-wise across the HC data. T1 and T2 <i>z</i>-score maps for each patient were generated by subtracting the mean HC map and dividing by the SD HC map. MRF-based morphometric maps were produced in the same manner as in the morphometric analysis program (MAP), based on MRF GM and WM maps. A no-new U-Net model was trained using various input combinations, with performance evaluated through leave-one-patient-out cross-validation. We compared model performance using various input combinations from clinical MRI and MRF to assess the impact of different input types on model effectiveness. We included 40 patients with FCD (mean age 28.1 years, 47.5% female; 11 with FCD IIa, 14 with IIb, 12 with mMCD, 3 with MOGHE) and 67 HCs. The DL model with optimal performance used all MRF-based inputs, including MRF-synthesized T1w, T1z, and T2z maps; tissue fraction maps; and morphometric maps. The patient-level sensitivity was 80% with an average of 1.7 false positives (FPs) per patient. Sensitivity was consistent across subtypes, lobar locations, and lesional/nonlesional clinical MRI. Models using clinical images showed lower sensitivity and higher FPs. The MRF-DL model also outperformed the established MAP18 pipeline in sensitivity, FPs, and lesion label overlap. The MRF-DL framework demonstrated efficacy for whole-brain FCD detection. Multiparametric MRF features from a single scan offer promising inputs for developing a deep-learning tool capable of detecting subtle epileptic lesions.
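
Editor's note: a minimal numpy sketch of the voxel-wise z-scoring step described above (subtract the healthy-control mean map, divide by the healthy-control SD map), assuming co-registered maps; array sizes are toy values.

```python
# Minimal sketch: voxel-wise z-score maps relative to healthy controls (HCs).
import numpy as np

def zscore_map(patient_map, hc_maps, eps=1e-6):
    """patient_map: (X, Y, Z); hc_maps: (n_controls, X, Y, Z)."""
    hc_mean = hc_maps.mean(axis=0)
    hc_sd = hc_maps.std(axis=0)
    return (patient_map - hc_mean) / (hc_sd + eps)   # eps guards zero-variance voxels

rng = np.random.default_rng(0)
hc_t1 = rng.normal(1000, 50, size=(67, 32, 32, 32))   # 67 controls, toy 32^3 grid
patient_t1 = rng.normal(1050, 50, size=(32, 32, 32))
t1_z = zscore_map(patient_t1, hc_t1)
print(t1_z.shape, float(t1_z.mean()))                 # positive mean: T1 above HC average
```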

Sonopermeation combined with stroma normalization enables complete cure using nano-immunotherapy in murine breast tumors.

Neophytou C, Charalambous A, Voutouri C, Angeli S, Panagi M, Stylianopoulos T, Mpekris F

PubMed · Jun 10 2025
Nano-immunotherapy shows great promise in improving patient outcomes, as seen in advanced triple-negative breast cancer, but it does not cure the disease, with median survival under two years. Therefore, understanding resistance mechanisms and developing strategies to enhance its effectiveness in breast cancer is crucial. A key resistance mechanism is the pronounced desmoplasia in the tumor microenvironment, which leads to dysfunction of tumor blood vessels and thus to hypoperfusion, limited drug delivery, and hypoxia. Ultrasound sonopermeation and agents that normalize the tumor stroma have been employed separately to restore vascular abnormalities in tumors with some success. Here, we performed in vivo studies in two murine orthotopic breast tumor models to explore whether combining ultrasound sonopermeation with a stroma-normalization drug can synergistically improve tumor perfusion and enhance the efficacy of nano-immunotherapy. We found that the proposed combinatorial treatment can drastically reduce primary tumor growth, and in many cases tumors were no longer measurable. Overall survival studies showed that all mice that received the combination treatment survived, and rechallenge experiments revealed that the survivors obtained immunological memory. Employing ultrasound elastography and contrast-enhanced ultrasound along with proteomics analysis, flow cytometry, and immunofluorescence staining, we found that the combinatorial treatment reduced tumor stiffness to normal levels, restoring tumor perfusion and oxygenation. Furthermore, it increased infiltration and activity of immune cells and altered the levels of immunosupportive chemokines. Finally, using machine learning analysis, we identified tumor stiffness, CD8+ T cells, and M2-type macrophages as strong predictors of treatment response.
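
Editor's note: the abstract does not name the machine-learning method used to rank predictors of response. Below is a generic, hedged sketch of a feature-importance analysis on synthetic data; the feature names simply mirror the measurements discussed.

```python
# Minimal sketch: rank candidate predictors of treatment response with a
# random-forest feature-importance analysis. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 60
X = np.column_stack([
    rng.normal(10, 3, n),    # tumor stiffness (kPa)
    rng.normal(5, 2, n),     # CD8+ T-cell infiltration (%)
    rng.normal(8, 2, n),     # M2-type macrophages (%)
    rng.normal(0, 1, n),     # uninformative control feature
])
# Synthetic "response" label loosely driven by the first three features
y = (0.5 * (X[:, 1] - 5) - 0.4 * (X[:, 0] - 10) - 0.3 * (X[:, 2] - 8)
     + rng.normal(0, 1, n)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["stiffness", "CD8+ T cells", "M2 macrophages", "control"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```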

Robotic Central Pancreatectomy with Omental Pedicle Flap: Tactics and Tips.

Kawano F, Lim MA, Kemprecos HJ, Tsai K, Cheah D, Tigranyan A, Kaviamuthan K, Pillai A, Chen JC, Polites G, Mise Y, Cohen M, Saiura A, Conrad C

PubMed · Jun 10 2025
Robotic central pancreatectomy is increasingly used for premalignant or low-grade malignant tumors in the pancreatic body, balancing preservation of pancreatic function with removal of the target lesion [1-3]. To date, there is no established reconstruction method, and high rates of postpancreatectomy fistula (POPF) remain a significant concern [4,5]. We developed a novel technique involving transgastric pancreaticogastrostomy with an omental pedicle advancement flap to reduce the risk of POPF. Additionally, preoperative deep-learning 3D organ modeling plays a crucial role in enhancing spatial understanding and thereby procedural safety [6,7]. METHODS: A 76-year-old female patient with a 33-mm, biopsy-confirmed high-risk IPMN underwent robotic-assisted central pancreatectomy. Preoperative CT was processed with a deep-learning system to create a patient-specific 3D model, enabling virtual simulation of port configurations. The optimal setup was selected based on the spatial relationship between port sites, tumor location, and anatomy. A transgastric pancreaticogastrostomy with omental flap reinforcement was performed to reduce POPF, yielding a simpler reconstruction than pancreaticojejunostomy. The procedure lasted 218 min with minimal blood loss (50 ml). No complications occurred, and the patient was discharged on postoperative day 3 after drain removal. Final pathology showed low-grade dysplasia. This approach, facilitated by robotic assistance, effectively preserves pancreatic function while treating a low-grade malignant tumor. Preoperative 3D organ modeling enhances spatial understanding with the goal of increasing procedural safety. Finally, the omental pedicle advancement flap technique shows promise in reducing the incidence, or at least the impact, of POPF.

A Deep Learning Model for Identifying the Risk of Mesenteric Malperfusion in Acute Aortic Dissection Using Initial Diagnostic Data: Algorithm Development and Validation.

Jin Z, Dong J, Li C, Jiang Y, Yang J, Xu L, Li P, Xie Z, Li Y, Wang D, Ji Z

PubMed · Jun 10 2025
Mesenteric malperfusion (MMP) is an uncommon but devastating complication of acute aortic dissection (AAD) that combines 2 life-threatening conditions-aortic dissection and acute mesenteric ischemia. The complex pathophysiology of MMP poses substantial diagnostic and management challenges. Currently, delayed diagnosis remains a critical contributor to poor outcomes because of the absence of reliable individualized risk assessment tools. This study aims to develop and validate a deep learning-based model that integrates multimodal data to identify patients with AAD at high risk of MMP. This multicenter retrospective study included 525 patients with AAD from 2 hospitals. The training and internal validation cohort consisted of 450 patients from Beijing Anzhen Hospital, whereas the external validation cohort comprised 75 patients from Nanjing Drum Tower Hospital. Three machine learning models were developed: the benchmark model using laboratory parameters, the multiorgan feature-based AAD complicating MMP (MAM) model based on computed tomography angiography images, and the integrated model combining both data modalities. Model performance was assessed using the area under the curve, accuracy, sensitivity, specificity, and Brier score. To improve interpretability, gradient-weighted class activation mapping was used to identify and visualize discriminative imaging features. Univariate and multivariate regression analyses were used to evaluate the prognostic significance of the risk score generated by the optimal model. In the external validation cohort, the integrated model demonstrated superior performance, with an area under the curve of 0.780 (95% CI 0.777-0.785), which was significantly greater than those of the benchmark model (0.586, 95% CI 0.574-0.586) and the MAM model (0.732, 95% CI 0.724-0.734). This highlights the benefits of multimodal integration over single-modality approaches. Additional classification metrics revealed that the integrated model had an accuracy of 0.760 (95% CI 0.758-0.764), a sensitivity of 0.667 (95% CI 0.659-0.675), a specificity of 0.783 (95% CI 0.781-0.788), and a Brier score of 0.143 (95% CI 0.143-0.145). Moreover, gradient-weighted class activation mapping visualizations of the MAM model revealed that during positive predictions, the model focused more on key anatomical areas, particularly the superior mesenteric artery origin and intestinal regions with characteristic gas or fluid accumulation. Univariate and multivariate analyses also revealed that the risk score derived from the integrated model was independently associated with in-hospital mortality risk among patients with AAD undergoing endovascular or surgical treatment (odds ratio 1.030, 95% CI 1.004-1.056; P=.02). Our findings demonstrate that compared with unimodal approaches, an integrated deep learning model incorporating both imaging and clinical data has greater diagnostic accuracy for MMP in patients with AAD. This model may serve as a valuable tool for early risk identification, facilitating timely therapeutic decision-making. Further prospective validation is warranted to confirm its clinical utility. Chinese Clinical Registry Center ChiCTR2400086050; http://www.chictr.org.cn/showproj.html?proj=226129.
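
Editor's note: as an illustration of the gradient-weighted class activation mapping (Grad-CAM) step mentioned above, here is a hedged PyTorch sketch on a generic ResNet backbone (assuming a recent torchvision). The model, target layer, and input are placeholders rather than the study's MAM model.

```python
# Minimal Grad-CAM sketch: weight the last convolutional feature maps by the
# spatially averaged gradients of the predicted class, then upsample to a heatmap.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
target_layer = model.layer4                     # last convolutional stage
activations, gradients = {}, {}

def fwd_hook(_, __, output):
    activations["value"] = output

def bwd_hook(_, grad_in, grad_out):
    gradients["value"] = grad_out[0]

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder CTA slice
logits = model(x)
logits[0, logits.argmax()].backward()                  # gradient of the top class

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)        # average gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)           # normalize to [0, 1]
print(cam.shape)   # heatmap highlighting regions driving the prediction
```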

Advancements and Applications of Hyperpolarized Xenon MRI for COPD Assessment in China.

Li H, Li H, Zhang M, Fang Y, Shen L, Liu X, Xiao S, Zeng Q, Zhou Q, Zhao X, Shi L, Han Y, Zhou X

PubMed · Jun 10 2025
Chronic obstructive pulmonary disease (COPD) is one of the leading causes of morbidity and mortality in China, highlighting the importance of early diagnosis and ongoing monitoring for effective management. In recent years, hyperpolarized 129Xe MRI technology has gained significant clinical attention due to its ability to non-invasively and visually assess lung ventilation, microstructure, and gas exchange function. Its recent clinical approval in China, the United States, and several European countries represents a significant advancement in pulmonary imaging. This review provides an overview of the latest developments in hyperpolarized 129Xe MRI technology for COPD assessment in China. It covers the progress in instrument development, advanced imaging techniques, artificial intelligence-driven reconstruction methods, molecular imaging, and the application of this technology in both COPD patients and animal models. Furthermore, the review explores potential technical innovations in 129Xe MRI and discusses future directions for its clinical applications, aiming to address existing challenges and expand the technology's impact in clinical practice.

Machine learning is changing osteoporosis detection: an integrative review.

Zhang Y, Ma M, Huang X, Liu J, Tian C, Duan Z, Fu H, Huang L, Geng B

PubMed · Jun 10 2025
Machine learning drives osteoporosis detection and screening with higher clinical accuracy and accessibility than traditional osteoporosis screening tools. This review takes a step-by-step view of machine learning for osteoporosis detection, providing insights into today's osteoporosis detection and the outlook for the future. The early diagnosis and risk detection of osteoporosis have always been crucial and challenging issues in the medical field. With the in-depth application of artificial intelligence, especially machine learning, in the medical field, significant breakthroughs have been made in the early diagnosis and risk detection of osteoporosis. Machine learning is a multidimensional technical system that encompasses a wide variety of algorithm types. Machine learning algorithms have matured over many years of application to medical data processing; they offer stable and accurate detection performance, laying a solid foundation for the detection and diagnosis of osteoporosis. As an essential part of the machine learning technical system, deep-learning algorithms are complex models based on artificial neural networks. Owing to their robust image recognition and feature extraction capabilities, deep learning algorithms have become increasingly mature in the early diagnosis and risk assessment of osteoporosis in recent years, opening new ideas and approaches for early, accurate diagnosis and risk detection. This paper reviewed the latest research over the past decade, ranging from relatively basic and widely adopted machine learning algorithms combined with clinical data to more advanced deep learning techniques integrated with imaging data such as X-ray, CT, and MRI. By analyzing the application of algorithms at different stages, we found that basic machine learning algorithms performed well when dealing with single structured data but encountered limitations when handling high-dimensional and unstructured imaging data. Deep learning, on the other hand, can significantly improve detection accuracy by automatically extracting image features, especially in image histological analysis. However, it faces challenges, including the "black-box" problem, heavy reliance on large amounts of labeled data, and difficulties in clinical interpretability; these issues highlight the importance of model interpretability in future machine learning research. Finally, we anticipate future predictive models that combine multimodal data (such as clinical indicators, blood biochemical indicators, imaging data, and genetic data) with electronic health records and machine learning techniques, yielding a skeletal health monitoring system that is highly accessible, personalized, convenient, and efficient, and that furthers the early detection and prevention of osteoporosis.
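
Editor's note: to make the review's first category concrete — basic machine-learning algorithms applied to structured clinical data — here is a minimal, fully synthetic sketch; the features and the label-generating rule are invented for illustration and are not from any cited study.

```python
# Minimal sketch: a basic classifier on structured clinical indicators for
# osteoporosis risk screening, evaluated with a held-out split. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 500
age = rng.normal(65, 10, n)              # years
bmi = rng.normal(24, 4, n)               # kg/m^2
prior_fracture = rng.integers(0, 2, n)   # 0/1 history of fragility fracture
# Synthetic label loosely driven by age, BMI, and fracture history
y = (0.06 * (age - 65) - 0.05 * (bmi - 24) + 1.2 * prior_fracture
     + rng.normal(0, 1, n)) > 0.5
X = np.column_stack([age, bmi, prior_fracture])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```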

Artificial intelligence and endoanal ultrasound: pioneering automated differentiation of benign anal and sphincter lesions.

Mascarenhas M, Almeida MJ, Martins M, Mendes F, Mota J, Cardoso P, Mendes B, Ferreira J, Macedo G, Poças C

PubMed · Jun 10 2025
Anal injuries, such as lacerations and fissures, are challenging to diagnose because of their anatomical complexity. Endoanal ultrasound (EAUS) has proven to be a reliable tool for detailed visualization of anal structures but relies on expert interpretation. Artificial intelligence (AI) may offer a solution for more accurate and consistent diagnoses. This study aims to develop and test a convolutional neural network (CNN)-based algorithm for automatic classification of fissures and anal lacerations (internal and external) on EAUS. A single-center retrospective study analyzed 238 EAUS radial-probe exams (April 2022-January 2024), categorizing 4528 frames into fissures (516), external lacerations (2174), and internal lacerations (1838), following validation by three experts. Data was split 80% for training and 20% for testing. Performance metrics included sensitivity, specificity, and accuracy. For external lacerations, the CNN achieved 82.5% sensitivity, 93.5% specificity, and 88.2% accuracy. For internal lacerations, it achieved 91.7% sensitivity, 85.9% specificity, and 88.2% accuracy. For anal fissures, it achieved 100% sensitivity, specificity, and accuracy. This first EAUS AI-assisted model for differentiating benign anal injuries demonstrates excellent diagnostic performance. It highlights AI's potential to improve accuracy, reduce reliance on expertise, and support broader clinical adoption. While currently limited by a small dataset and single-center scope, this work represents a significant step towards integrating AI in proctology.
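
Editor's note: a minimal sketch of the frame-level evaluation described above — per-class sensitivity, specificity, and accuracy derived from a confusion matrix — using synthetic labels as placeholders for the three EAUS classes.

```python
# Minimal sketch: per-class sensitivity, specificity, and accuracy from a
# multi-class confusion matrix. Labels are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["fissure", "external laceration", "internal laceration"]
rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, 500)
y_pred = np.where(rng.random(500) < 0.85, y_true, rng.integers(0, 3, 500))  # ~85% agreement

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"{name}: sensitivity={tp/(tp+fn):.3f} "
          f"specificity={tn/(tn+fp):.3f} accuracy={(tp+tn)/cm.sum():.3f}")
```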