Page 4 of 24236 results

Accuracy and Reliability of Multimodal Imaging in Diagnosing Knee Sports Injuries.

Zhu D, Zhang Z, Li W

PubMed | May 15, 2025
Because doctors differ in subjective experience and professional level, and diagnostic criteria are inconsistent, single-modality imaging diagnoses of knee joint injuries suffer from problems of accuracy and reliability. To address these issues, this article combines magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) in an ensemble-learning framework and applies deep learning (DL) for automatic analysis. Image quality is improved through steps such as image enhancement, noise elimination, and tissue segmentation, and convolutional neural networks (CNNs) are then used to automatically identify and classify injury types. The experimental results show that the DL model exhibits high sensitivity and specificity in diagnosing different injury types, such as anterior cruciate ligament tear, meniscus injury, cartilage injury, and fracture. The diagnostic accuracy for anterior cruciate ligament tear exceeds 90%, and the highest diagnostic accuracy, for cartilage injury, reaches 95.80%. Compared with traditional manual image interpretation, the DL model also offers significant advantages in time efficiency, with a marked reduction in average interpretation time per case. A diagnostic consistency experiment shows high agreement between the DL model and doctors' diagnoses, with an overall error rate below 2%. The model is highly accurate and generalizes well across different types of joint injuries. These data indicate that combining multiple imaging technologies with DL algorithms can effectively improve the accuracy and efficiency of diagnosing sports injuries of the knee joint.
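The multimodal ensemble step can be illustrated with a toy late-fusion (soft-voting) sketch; the abstract does not specify the fusion rule, so the per-modality probability vectors and the simple averaging below are assumptions for illustration only:

```python
import numpy as np

# Hypothetical per-modality class probabilities for one case
# (classes: ACL tear, meniscus, cartilage, fracture) -- illustrative numbers only.
p_mri = np.array([0.70, 0.15, 0.10, 0.05])
p_ct  = np.array([0.55, 0.20, 0.10, 0.15])
p_us  = np.array([0.60, 0.25, 0.10, 0.05])

# Simple soft-voting ensemble: average the probability vectors, then take argmax.
p_ens = (p_mri + p_ct + p_us) / 3.0
pred = int(np.argmax(p_ens))
print(pred)  # index 0 -> ACL tear in this toy example
```

Averaging calibrated probabilities is only one of several plausible fusion rules; weighted voting or feature-level fusion would fit the same description.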

Segmentation of the thoracolumbar fascia in ultrasound imaging: a deep learning approach.

Bonaldi L, Pirri C, Giordani F, Fontanella CG, Stecco C, Uccheddu F

PubMed | May 15, 2025
Only in recent years has it been demonstrated that the thoracolumbar fascia is involved in low back pain (LBP), highlighting its implications for treatment. Ultrasound examination offers an easily accessible, non-invasive way to investigate the fascia in real time, but to be reliable it must overcome challenges related to machine configuration and operator experience. The lack of a clear understanding of the fascial system, combined with the penalty imposed by ultrasound acquisition settings, has therefore created a gap that makes effective evaluation difficult in clinical routine. The aim of the present work is to fill this gap by investigating the effectiveness of a deep learning approach to segmenting the thoracolumbar fascia in ultrasound imaging. A total of 538 ultrasound images of the thoracolumbar fascia of LBP subjects were used to train and test a deep learning network. An additional test set (Test set 2) was collected from a different center, operator, machine manufacturer, patient cohort, and protocol to assess the generalizability of the study. A U-Net-based architecture segmented these structures with a final training accuracy of 0.99 and a validation accuracy of 0.91. On a test set of 87 images not included in the training set, prediction accuracy reached 0.94, with a mean intersection over union of 0.82 and a Dice score of 0.76; these metrics were surpassed on Test set 2. The validity of the predictions was also verified and confirmed by two expert clinicians. Automatic identification of the thoracolumbar fascia shows promising results for thoroughly investigating its alteration and targeting a personalized rehabilitation intervention based on each patient-specific scenario.
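The Dice score and intersection over union reported here (and in several other abstracts on this page) are simple overlap measures between a predicted and a ground-truth binary mask; a minimal sketch of how they are computed:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient for two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union (Jaccard index) for two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Tiny illustrative masks (not real segmentation output).
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, gt))  # 0.666...
print(iou(pred, gt))   # 0.5
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the two metrics differ (0.82 vs. 0.76 order reversed here would be impossible for a single mask pair; the paper reports cohort means).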

"MR Fingerprinting for Imaging Brain Hemodynamics and Oxygenation".

Coudert T, Delphin A, Barrier A, Barbier EL, Lemasson B, Warnking JM, Christen T

PubMed | May 15, 2025
Over the past decade, several studies have explored the potential of magnetic resonance fingerprinting (MRF) for the quantification of brain hemodynamics, oxygenation, and perfusion. Recent advances in simulation models and reconstruction frameworks have also significantly enhanced the accuracy of vascular parameter estimation. This review provides an overview of key vascular MRF studies, emphasizing advancements in geometrical models for vascular simulations, novel sequences, and state-of-the-art reconstruction techniques incorporating machine learning and deep learning algorithms. Both pre-clinical and clinical applications are discussed. Based on these findings, we outline future directions and development areas that need to be addressed to facilitate their clinical translation. EVIDENCE LEVEL: N/A. TECHNICAL EFFICACY: Stage 1.

From error to prevention of wrong-level spine surgery: a review.

Javadnia P, Gohari H, Salimi N, Alimohammadi E

PubMed | May 15, 2025
Wrong-level spine surgery remains a significant concern, leading to devastating consequences for patients and healthcare systems alike. This comprehensive review analyzes the existing literature on wrong-level spine surgery, identifying key factors that contribute to these errors and exploring advanced strategies and technologies designed to prevent them. A systematic literature search was conducted across multiple databases, including PubMed, Scopus, EMBASE, and CINAHL. The selection criteria focused on preclinical and clinical studies that specifically addressed wrong-site and wrong-level surgeries in the context of spine surgery. The findings reveal a range of contributing factors, including communication failures, inadequate preoperative planning, and insufficient surgical protocols. The review emphasizes the critical role of innovative technologies, such as artificial intelligence, advanced imaging techniques, and surgical navigation systems, alongside established safety protocols like digital checklists and simulation training, in enhancing surgical accuracy and preventing errors. In conclusion, integrating advanced technologies and systematic safety protocols is instrumental in reducing the incidence of wrong-level spine surgery. This review underscores the importance of continuous education and the adoption of innovative solutions to foster a culture of safety and improve surgical outcomes. By addressing the multifaceted challenges associated with these errors, the field can work towards minimizing their occurrence and enhancing patient care.

Deep normative modelling reveals insights into early-stage Alzheimer's disease using multi-modal neuroimaging data.

Lawry Aguila A, Lorenzini L, Janahi M, Barkhof F, Altmann A

PubMed | May 15, 2025
Exploring the early stages of Alzheimer's disease (AD) is crucial for timely intervention to help manage symptoms and set expectations for affected individuals and their families. However, the study of the early stages of AD involves analysing heterogeneous disease cohorts which may present challenges for some modelling techniques. This heterogeneity stems from the diverse nature of AD itself, as well as the inclusion of undiagnosed or 'at-risk' AD individuals or the presence of comorbidities which differentially affect AD biomarkers within the cohort. Normative modelling is an emerging technique for studying heterogeneous disorders that can quantify how brain imaging-based measures of individuals deviate from a healthy population. The normative model provides a statistical description of the 'normal' range that can be used at subject level to detect deviations, which may relate to pathological effects. In this work, we applied a deep learning-based normative model, pre-trained on MRI scans in the UK Biobank, to investigate ageing and identify abnormal age-related decline. We calculated deviations, relative to the healthy population, in multi-modal MRI data of non-demented individuals in the external EPAD (ep-ad.org) cohort and explored these deviations with the aim of determining whether normative modelling could detect AD-relevant subtle differences between individuals. We found that aggregate measures of deviation based on the entire brain correlated with measures of cognitive ability and biological phenotypes, indicating the effectiveness of a general deviation metric in identifying AD-related differences among individuals. We then explored deviations in individual imaging features, stratified by cognitive performance and genetic risk, across different brain regions and found that the brain regions showing deviations corresponded to those affected by AD such as the hippocampus. 
Finally, we found that 'at-risk' individuals in the EPAD cohort exhibited increasing deviation over time, with an approximately 6.4 times greater t-statistic in a pairwise t-test compared to a 'super-healthy' cohort. This study highlights the capability of deep normative modelling approaches to detect subtle differences in brain morphology among individuals at risk of developing AD in a non-demented population. Our findings allude to the potential utility of normative deviation metrics in monitoring disease progression.
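Normative deviations of this kind are commonly expressed as z-scores against the healthy reference distribution; a minimal sketch with made-up per-feature statistics (the study's deep model learns this reference distribution rather than tabulating it, so everything below is illustrative):

```python
import numpy as np

# Hypothetical healthy-cohort statistics (per-feature mean and SD), standing in
# for the normative model's learned reference distribution.
mu    = np.array([1.0, 2.0, 3.0, 4.0])
sigma = np.array([0.5, 0.5, 1.0, 1.0])

# One subject's multi-modal imaging features (made-up values).
subject = np.array([1.1, 2.0, 6.0, 4.2])

z = (subject - mu) / sigma       # per-feature deviation from the healthy norm
aggregate = np.abs(z).mean()     # whole-brain aggregate deviation measure
print(int(np.argmax(np.abs(z)))) # most deviant feature (index 2 here)
```

The aggregate absolute deviation plays the role of the paper's "general deviation metric"; per-feature z-scores correspond to the region-wise deviations explored for structures such as the hippocampus.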

Modifying the U-Net's Encoder-Decoder Architecture for Segmentation of Tumors in Breast Ultrasound Images.

Derakhshandeh S, Mahloojifar A

PubMed | May 15, 2025
Segmentation is one of the most significant steps in image processing: it separates a digital image into regions based on differences in pixel characteristics. In particular, segmentation of breast ultrasound images is widely used for cancer identification, and effective segmentation enables early diagnosis of disease from medical images. Due to various ultrasound artifacts and noise sources, including speckle noise, low signal-to-noise ratio, and intensity heterogeneity, accurately segmenting medical images such as ultrasound images remains a challenging task. In this paper, we present a new method to improve the accuracy and effectiveness of breast ultrasound image segmentation. More precisely, we propose a neural network (NN) with an encoder-decoder architecture based on U-Net. Taking U-Net as the basis, both the encoder and decoder are developed by combining U-Net with other deep neural networks (ResNet and MultiResUNet) and introducing a new block (Co-Block), which preserves as much as possible of the low-level and high-level features. The designed network is evaluated on the Breast Ultrasound Images (BUSI) dataset, which consists of 780 images categorized into three classes: normal, benign, and malignant. According to our extensive evaluations on this public dataset, the designed network segments breast lesions more accurately than other state-of-the-art deep learning methods. With only 8.88 M parameters, our network (CResU-Net) obtained 82.88%, 77.5%, 90.3%, and 98.4% in terms of Dice similarity coefficient (DSC), intersection over union (IoU), area under the curve (AUC), and global accuracy (ACC), respectively, on the BUSI dataset.

Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention.

Wu J, Lin J, Jiang X, Zheng W, Zhong L, Pang Y, Meng H, Li Z

PubMed | May 15, 2025
Sparse-view CT reconstruction is a challenging ill-posed inverse problem, where insufficient projection data leads to degraded image quality with increased noise and artifacts. Recent deep learning approaches have shown promising results in CT reconstruction. However, existing methods often neglect projection data constraints and rely heavily on convolutional neural networks, resulting in limited feature extraction capabilities and inadequate adaptability. To address these limitations, we propose a Dual-domain deep Prior-guided Multi-scale fusion Attention (DPMA) model for sparse-view CT reconstruction, aiming to enhance reconstruction accuracy while ensuring data consistency and stability. First, we establish a residual regularization strategy that applies constraints on the difference between the prior image and target image, effectively integrating deep learning-based priors with model-based optimization. Second, we develop a multi-scale fusion attention mechanism that employs parallel pathways to simultaneously model global context, regional dependencies, and local details in a unified framework. Third, we incorporate a physics-informed consistency module based on range-null space decomposition to ensure adherence to projection data constraints. Experimental results demonstrate that DPMA achieves improved reconstruction quality compared to existing approaches, particularly in noise suppression, artifact reduction, and fine detail preservation.
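The range-null space decomposition behind such a physics-informed consistency module can be sketched with a toy linear forward operator; the matrix `A`, the stand-in network output `x_net`, and the sizes below are illustrative, not the paper's CT geometry:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy underdetermined forward operator A (fewer measurements than unknowns),
# mimicking sparse-view projection.
A = rng.normal(size=(6, 10))
x_true = rng.normal(size=10)
y = A @ x_true                       # observed projection data

x_net = rng.normal(size=10)          # stand-in for a network's reconstruction
A_pinv = np.linalg.pinv(A)           # Moore-Penrose pseudoinverse

# Range-null space decomposition: keep the network's null-space component,
# replace the range-space component with the one dictated by the data.
x_dc = A_pinv @ y + (np.eye(10) - A_pinv @ A) @ x_net

print(np.allclose(A @ x_dc, y))      # True: projection consistency is exact
```

In practice `A` is a large CT projection operator and the pseudoinverse is applied implicitly, but the identity A(A⁺y + (I − A⁺A)x) = y is what guarantees adherence to the measured projections.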

A monocular endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network.

Chong N, Yang F, Wei K

PubMed | May 15, 2025
Minimally invasive surgery involves entering the body through small incisions or natural orifices, using a medical endoscope for observation and clinical procedures. However, traditional endoscopic images often suffer from low texture and uneven illumination, which can negatively impact surgical and diagnostic outcomes. To address these challenges, many researchers have applied deep learning methods to enhance the processing of endoscopic images. This paper proposes a monocular medical endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network. In this network, one branch processes global image information while the other concentrates on local details. An improved lightweight Squeeze-and-Excitation (SE) module is added to the final layer of each branch, dynamically adjusting the inter-channel weights through self-attention. The outputs of both branches are then integrated by a lightweight cross-attention feature fusion module, enabling cross-branch feature interaction and enhancing the overall feature representation capability of the network. Extensive ablation and comparative experiments were conducted on medical datasets (EAD2019, Hamlyn, M2caiSeg, UCL) and a non-medical dataset (NYUDepthV2), with both qualitative and quantitative results (RMSE, AbsRel, FLOPs, and running time) demonstrating the superiority of the proposed model. Additionally, comparisons with CT images show good organ boundary matching capability, highlighting the potential of our method for clinical applications. The key code of this paper is available at: https://github.com/superchongcnn/AttenAdapt_DE .
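A standard Squeeze-and-Excitation block (not the paper's improved lightweight variant, whose details are not given in the abstract) can be sketched in NumPy; the channel count, reduction ratio, and random weights below are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map."""
    s = feat.mean(axis=(1, 2))                  # squeeze: global average pooling
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))   # excitation: FC-ReLU-FC-sigmoid
    return feat * e[:, None, None]              # rescale each channel by its weight

rng = np.random.default_rng(1)
C, r = 8, 2                                     # channels and reduction ratio (assumed)
feat = rng.normal(size=(C, 4, 4))
w1 = rng.normal(size=(C // r, C))               # bottleneck FC weights
w2 = rng.normal(size=(C, C // r))
out = se_block(feat, w1, w2)
print(out.shape)                                # (8, 4, 4)
```

The per-channel gates `e` lie in (0, 1), so the block reweights channels without changing the spatial resolution of the feature map.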

Enhancing medical explainability in deep learning for age-related macular degeneration diagnosis.

Shi L

PubMed | May 15, 2025
Deep learning models hold significant promise for disease diagnosis but often lack transparency in their decision-making processes, limiting trust and hindering clinical adoption. This study introduces a novel multi-task learning framework to enhance the medical explainability of deep learning models for diagnosing age-related macular degeneration (AMD) using fundus images. The framework simultaneously performs AMD classification and lesion segmentation, allowing the model to support its diagnoses with AMD-associated lesions identified through segmentation. In addition, we perform an in-depth interpretability analysis of the model, proposing the Medical Explainability Index (MXI), a novel metric that quantifies the medical relevance of the generated heatmaps by comparing them with the model's lesion segmentation output. This metric provides a measurable basis to evaluate whether the model's decisions are grounded in clinically meaningful information. The proposed method was trained and evaluated on the Automatic Detection Challenge on Age-Related Macular Degeneration (ADAM) dataset. Experimental results demonstrate robust performance, achieving an area under the curve (AUC) of 0.96 for classification and a Dice similarity coefficient (DSC) of 0.59 for segmentation, outperforming single-task models. By offering interpretable and clinically relevant insights, our approach aims to foster greater trust in AI-driven disease diagnosis and facilitate its adoption in clinical practice.
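The abstract does not give MXI's exact formula; as a hedged illustration of the idea (comparing a saliency heatmap with the model's own lesion segmentation), one plausible overlap-style score is the fraction of salient heatmap mass that falls inside the segmented lesions. The function name, threshold, and arrays below are hypothetical:

```python
import numpy as np

def mxi_like(heatmap, lesion_mask, thresh=0.5):
    """Illustrative stand-in for a medical-explainability score: the fraction
    of above-threshold heatmap pixels that lie inside segmented lesions.
    NOT the paper's exact MXI definition, which is not stated in the abstract."""
    salient = heatmap >= thresh
    if salient.sum() == 0:
        return 0.0
    return np.logical_and(salient, lesion_mask).sum() / salient.sum()

heatmap = np.array([[0.9, 0.6, 0.1], [0.2, 0.8, 0.0]])   # toy saliency map
lesion  = np.array([[1, 0, 0], [0, 1, 0]], dtype=bool)   # toy lesion mask
print(mxi_like(heatmap, lesion))  # 0.666...
```

A score near 1 would indicate that the classifier's attention concentrates on clinically meaningful lesions, which is the property the MXI is designed to quantify.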

Machine learning for grading prediction and survival analysis in high grade glioma.

Li X, Huang X, Shen Y, Yu S, Zheng L, Cai Y, Yang Y, Zhang R, Zhu L, Wang E

PubMed | May 15, 2025
We developed and validated a magnetic resonance imaging (MRI)-based radiomics model for the classification of high-grade glioma (HGG) and determined the optimal machine learning (ML) approach. This retrospective analysis included 184 patients (59 grade III lesions and 125 grade IV lesions). Radiomics features were extracted from T1-weighted MRI (T1WI). The least absolute shrinkage and selection operator (LASSO) feature selection method and seven classification methods, including logistic regression, XGBoost, Decision Tree, Random Forest (RF), AdaBoost, Gradient Boosting Decision Tree, and a Stacking fusion model, were used to differentiate HGG grades. Performance was compared on AUC, sensitivity, accuracy, precision, and specificity. Among the non-fusion models, the XGBoost classifier performed best, and SMOTE was used to address the class imbalance, improving the performance of all classifiers. The Stacking fusion model performed best overall, with an AUC of 0.95 (sensitivity 0.84; accuracy 0.85; F1 score 0.85). MRI-based quantitative radiomics features perform well in classifying HGG. XGBoost outperforms the other classifiers among the non-fusion models, and the Stacking fusion model outperforms the non-fusion models.
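SMOTE addresses class imbalance by synthesising minority-class samples on the line segment between a real sample and one of its same-class nearest neighbours; a minimal sketch (the feature vectors below are made up, not the study's radiomics features):

```python
import numpy as np

rng = np.random.default_rng(7)

def smote_sample(x_i, x_nn):
    """SMOTE interpolation: a synthetic minority-class point between a sample
    and one of its minority-class nearest neighbours."""
    lam = rng.uniform(0.0, 1.0)
    return x_i + lam * (x_nn - x_i)

# Two hypothetical grade III (minority-class) radiomics feature vectors.
x_i  = np.array([0.2, 1.5, 3.0])
x_nn = np.array([0.4, 1.1, 2.6])
synthetic = smote_sample(x_i, x_nn)
print(synthetic)  # lies between x_i and x_nn componentwise
```

Generating such points until the classes are balanced (here, until grade III approaches the 125 grade IV lesions) gives every classifier in the comparison more minority-class examples to learn from.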