Insights into radiomics: a comprehensive review for beginners.

Mariotti F, Agostini A, Borgheresi A, Marchegiani M, Zannotti A, Giacomelli G, Pierpaoli L, Tola E, Galiffa E, Giovagnoni A

PubMed · May 12, 2025
Radiomics and artificial intelligence (AI) are rapidly evolving, significantly transforming the field of medical imaging. Despite their growing adoption, these technologies remain challenging to approach due to their technical complexity. This review serves as a practical guide for early-career radiologists and researchers seeking to integrate radiomics into their studies. It provides practical insights for clinical and research applications, addressing common challenges, limitations, and future directions in the field. This work offers a structured overview of the essential steps in the radiomics workflow, focusing on concrete aspects of each step, including indicative and practical examples. It covers the main steps such as dataset definition, image acquisition and preprocessing, segmentation, feature extraction and selection, and AI model training and validation. Different methods to be considered are discussed, accompanied by summary diagrams. This review equips readers with the knowledge necessary to approach radiomics and AI in medical imaging from a hands-on research perspective.
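
The workflow outlined above (dataset definition, preprocessing, segmentation, feature extraction and selection, model training and validation) can be illustrated with a minimal sketch. The review does not prescribe specific tools; the example below assumes the open-source pyradiomics and scikit-learn libraries, and the image/mask file paths and labels are hypothetical placeholders.

```python
# Minimal radiomics sketch: extract features from segmented ROIs, select a
# subset, and train/validate a simple classifier. File paths and labels are
# hypothetical placeholders, not from the review.
import pandas as pd
from radiomics import featureextractor            # pyradiomics
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

extractor = featureextractor.RadiomicsFeatureExtractor()   # default extraction settings

# One (image, mask, label) triple per patient; extend with the full cohort.
cases = [("case01_image.nrrd", "case01_mask.nrrd", 0),
         ("case02_image.nrrd", "case02_mask.nrrd", 1)]

rows, labels = [], []
for image_path, mask_path, label in cases:
    feats = extractor.execute(image_path, mask_path)        # dict of feature values
    rows.append({k: v for k, v in feats.items()
                 if not k.startswith("diagnostics")})        # drop metadata entries
    labels.append(label)
X = pd.DataFrame(rows).astype(float)

# Feature selection + model training, evaluated with cross-validation
# (requires enough cases per class for cv=5).
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=10),
                      LogisticRegression(max_iter=1000))
print("mean CV accuracy:", cross_val_score(model, X, labels, cv=5).mean())
```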

Effect of Deep Learning-Based Image Reconstruction on Lesion Conspicuity of Liver Metastases in Pre- and Post-contrast Enhanced Computed Tomography.

Ichikawa Y, Hasegawa D, Domae K, Nagata M, Sakuma H

PubMed · May 12, 2025
The purpose of this study was to investigate the utility of deep learning image reconstruction at medium and high intensity levels (DLIR-M and DLIR-H, respectively) for better delineation of liver metastases in pre-contrast and post-contrast CT, compared to conventional hybrid iterative reconstruction (IR). Forty-one patients with liver metastases who underwent abdominal CT were studied. The raw data were reconstructed with three different algorithms: hybrid IR (ASiR-V 50%), DLIR-M (TrueFidelity-M), and DLIR-H (TrueFidelity-H). Three experienced radiologists independently rated the lesion conspicuity of liver metastases on a qualitative 5-point scale (score 1 = very poor; score 5 = excellent). The observers also selected, for each patient, the pre- and post-contrast image series they considered most preferable for assessing liver metastases. For pre-contrast CT, lesion conspicuity scores for DLIR-H and DLIR-M were significantly higher than those for hybrid IR for two of the three observers, with no significant difference for the third observer. For post-contrast CT, lesion conspicuity scores for DLIR-H images were significantly higher than those for DLIR-M images for two of the three observers (Observer 1: DLIR-H, 4.3 ± 0.8 vs. DLIR-M, 3.9 ± 0.9, p = 0.0006; Observer 3: DLIR-H, 4.6 ± 0.6 vs. DLIR-M, 4.3 ± 0.6, p = 0.0013). For post-contrast CT, all observers most often selected DLIR-H as the best reconstruction method for the diagnosis of liver metastases. For pre-contrast CT, however, the most preferred reconstruction method varied among the three observers, and DLIR was not necessarily preferred over hybrid IR for the diagnosis of liver metastases.
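
As a small illustration of how paired 5-point conspicuity ratings from a single observer could be compared between two reconstructions, the sketch below uses a Wilcoxon signed-rank test on made-up scores; the abstract does not state which statistical test the authors actually used.

```python
# Illustrative comparison of one observer's paired 5-point conspicuity scores
# for two reconstructions (values are made up, not the study data).
import numpy as np
from scipy.stats import wilcoxon

dlir_h = np.array([5, 4, 4, 5, 3, 4, 5, 4, 4, 5])   # hypothetical DLIR-H scores
dlir_m = np.array([4, 4, 3, 5, 3, 4, 4, 4, 3, 4])   # hypothetical DLIR-M scores

stat, p = wilcoxon(dlir_h, dlir_m)                   # paired, non-parametric test
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```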

Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images.

Kılıç R, Yalçın A, Alper F, Oral EA, Ozbek IY

PubMed · May 12, 2025
Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, where early detection is crucial for effective treatment. This study introduces a novel method for the early diagnosis of liver diseases by differentiating between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and eliminate the risks associated with contrast agents. The proposed approach integrates an automatic liver region detection method based on RCNN followed by a CNN-based classification framework. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed and used to evaluate the proposed method. The experimental results demonstrate the importance of the two-stage classification approach. For the 2-class classification problem (healthy vs. non-healthy), an accuracy of 0.936 (95% CI: 0.925–0.947) was obtained; for the 3-class classification problem (AE, tumor, and healthy), the accuracy was 0.863 (95% CI: 0.847–0.879). These results highlight the potential of the proposed framework as a fully automatic approach for liver classification without the use of contrast agents. Furthermore, the framework demonstrates competitive performance compared to other state-of-the-art techniques, suggesting its applicability in clinical practice.
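
A rough sketch of the two-stage idea (detect the liver region first, then classify the cropped region) is shown below, using a torchvision Faster R-CNN detector and a ResNet-18 classifier as generic stand-ins; the paper's actual architectures, preprocessing, and training are not detailed in the abstract.

```python
# Two-stage sketch: (1) detect a liver bounding box, (2) classify the crop.
# The detector and classifier are generic stand-ins, not the paper's models.
import torch
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resize

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet18(num_classes=3).eval()   # AE / tumor / healthy

image = torch.rand(3, 512, 512)            # placeholder CT slice as a 3-channel tensor

with torch.no_grad():
    detections = detector([image])[0]       # dict with "boxes", "labels", "scores"
    if len(detections["boxes"]) > 0:
        x1, y1, x2, y2 = detections["boxes"][0].round().int().tolist()  # top box
        x2, y2 = max(x2, x1 + 1), max(y2, y1 + 1)    # guard against degenerate boxes
        crop = resize(image[:, y1:y2, x1:x2], [224, 224])
        logits = classifier(crop.unsqueeze(0))
        print("predicted class index:", logits.argmax(dim=1).item())
    else:
        print("no liver region detected")
```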

Biological markers and psychosocial factors predict chronic pain conditions.

Fillingim M, Tanguay-Sabourin C, Parisien M, Zare A, Guglietti GV, Norman J, Petre B, Bortsov A, Ware M, Perez J, Roy M, Diatchenko L, Vachon-Presseau E

PubMed · May 12, 2025
Chronic pain is a multifactorial condition presenting significant diagnostic and prognostic challenges. Biomarkers for the classification and the prediction of chronic pain are therefore critically needed. Here, in this multidataset study of over 523,000 participants, we applied machine learning to multidimensional biological data from the UK Biobank to identify biomarkers for 35 medical conditions associated with pain (for example, rheumatoid arthritis and gout) or self-reported chronic pain (for example, back pain and knee pain). Biomarkers derived from blood immunoassays, brain and bone imaging, and genetics were effective in predicting medical conditions associated with chronic pain (area under the curve (AUC) 0.62-0.87) but not self-reported pain (AUC 0.50-0.62). Notably, all biomarkers worked in synergy with psychosocial factors, accurately predicting both medical conditions (AUC 0.69-0.91) and self-reported pain (AUC 0.71-0.92). These findings underscore the necessity of adopting a holistic approach in the development of biomarkers to enhance their clinical utility.
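
The central comparison, biological features alone versus biological plus psychosocial features, can be sketched as follows with synthetic data and a generic classifier; this is purely illustrative and does not use UK Biobank data or the authors' models.

```python
# Illustrative AUC comparison: biological features alone vs. combined with
# psychosocial factors (synthetic data, not UK Biobank).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
bio = rng.normal(size=(n, 20))                 # stand-ins for blood/imaging/genetic features
psych = rng.normal(size=(n, 5))                # stand-ins for mood, sleep, stress measures
y = (0.8 * psych[:, 0] + 0.3 * bio[:, 0] + rng.normal(size=n) > 0).astype(int)

auc_bio = cross_val_score(GradientBoostingClassifier(), bio, y,
                          scoring="roc_auc", cv=5).mean()
auc_all = cross_val_score(GradientBoostingClassifier(), np.hstack([bio, psych]), y,
                          scoring="roc_auc", cv=5).mean()
print(f"biology-only AUC = {auc_bio:.2f}, biology + psychosocial AUC = {auc_all:.2f}")
```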

Preoperative prediction of malignant transformation in sinonasal inverted papilloma: a novel MRI-based deep learning approach.

Ding C, Wen B, Han Q, Hu N, Kang Y, Wang Y, Wang C, Zhang L, Xian J

PubMed · May 12, 2025
To develop a novel MRI-based deep learning (DL) diagnostic model, utilizing multicenter large-sample data, for the preoperative differentiation of sinonasal inverted papilloma (SIP) from SIP-transformed squamous cell carcinoma (SIP-SCC). This study included 568 patients from four centers with confirmed SIP (n = 421) and SIP-SCC (n = 147). Deep learning models were built using T1WI, T2WI, and CE-T1WI. A combined model was constructed by integrating these features through an attention mechanism. The diagnostic performance of radiologists, both with and without the model's assistance, was compared. Model performance was evaluated through receiver operating characteristic (ROC) analysis, calibration curves, and decision curve analysis (DCA). The combined model demonstrated superior performance in differentiating SIP from SIP-SCC, achieving AUCs of 0.954, 0.897, and 0.859 in the training, internal validation, and external validation cohorts, respectively. It showed optimal accuracy, stability, and clinical benefit, as confirmed by Brier scores and calibration curves. The diagnostic performance of radiologists, especially less experienced ones, was significantly improved with model assistance. The MRI-based deep learning model enhances the capability to predict malignant transformation of sinonasal inverted papilloma before surgery. By facilitating earlier diagnosis and promoting timely pathological examination or surgical intervention, this approach holds the potential to enhance patient prognosis.
Questions: Sinonasal inverted papilloma (SIP) is prone to malignant transformation locally, leading to poor prognosis; current diagnostic methods are invasive and inaccurate, necessitating effective preoperative differentiation.
Findings: The MRI-based deep learning model accurately diagnoses malignant transformation of SIP, enabling junior radiologists to achieve greater clinical benefit with the assistance of the model.
Clinical relevance: A novel MRI-based deep learning model enhances the capability of preoperative diagnosis of malignant transformation in sinonasal inverted papilloma, providing a non-invasive tool for personalized treatment planning.
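
One way to fuse per-sequence features (T1WI, T2WI, CE-T1WI) through an attention mechanism is sketched below in PyTorch; the encoders, dimensions, and fusion details are generic placeholders rather than the paper's architecture.

```python
# Sketch of attention-weighted fusion of per-sequence features (T1WI, T2WI,
# CE-T1WI). The feature dimensions and fusion head are generic placeholders.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, feat_dim=256, n_classes=2):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)          # one attention score per sequence
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                        # feats: (batch, n_seq, feat_dim)
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, n_seq, 1)
        fused = (weights * feats).sum(dim=1)                 # weighted sum over sequences
        return self.head(fused)

# Hypothetical per-sequence features from three separate MRI encoders.
t1, t2, ce_t1 = (torch.rand(4, 256) for _ in range(3))
logits = AttentionFusion()(torch.stack([t1, t2, ce_t1], dim=1))
print(logits.shape)                                  # torch.Size([4, 2])
```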

Automatic CTA analysis for blood vessels and aneurysm features extraction in EVAR planning.

Robbi E, Ravanelli D, Allievi S, Raunig I, Bonvini S, Passerini A, Trianni A

PubMed · May 12, 2025
Endovascular Aneurysm Repair (EVAR) is a minimally invasive procedure crucial for treating abdominal aortic aneurysms (AAA), where precise pre-operative planning is essential. Current clinical methods rely on manual measurements, which are time-consuming and prone to errors. Although AI solutions are increasingly being developed to automate aspects of these processes, most existing approaches primarily focus on computing volumes and diameters, falling short of delivering a fully automated pre-operative analysis. This work presents BRAVE (Blood Vessels Recognition and Aneurysms Visualization Enhancement), the first comprehensive AI-driven solution for vascular segmentation and AAA analysis using pre-operative CTA scans. BRAVE offers exhaustive segmentation, identifying both the primary abdominal aorta and secondary vessels, often overlooked by existing methods, providing a complete view of the vascular structure. The pipeline performs advanced volumetric analysis of the aneurysm sac, quantifying thrombotic tissue and calcifications, and automatically identifies the proximal and distal sealing zones, critical for successful EVAR procedures. BRAVE enables fully automated processing, reducing manual intervention and improving clinical workflow efficiency. Trained on a multi-center open-access dataset, it demonstrates generalizability across different CTA protocols and patient populations, ensuring robustness in diverse clinical settings. This solution saves time, ensures precision, and standardizes the process, enhancing vascular surgeons' decision-making.
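
As a simple illustration of the volumetric analysis described above, the sketch below computes component volumes from a labelled segmentation mask; the label values and voxel spacing are hypothetical, and BRAVE's actual implementation is not described at this level of detail in the abstract.

```python
# Volumetric quantification from a labelled aneurysm-sac segmentation
# (synthetic mask; label values 1=lumen, 2=thrombus, 3=calcification are
# hypothetical, not BRAVE's actual encoding).
import numpy as np

mask = np.random.randint(0, 4, size=(128, 128, 64))    # placeholder label volume
voxel_volume_mm3 = 0.7 * 0.7 * 1.0                      # from CTA voxel spacing (example)

for label, name in [(1, "lumen"), (2, "thrombus"), (3, "calcification")]:
    vol_ml = (mask == label).sum() * voxel_volume_mm3 / 1000.0
    print(f"{name}: {vol_ml:.1f} mL")
```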

Enhancing noninvasive pancreatic cystic neoplasm diagnosis with multimodal machine learning.

Huang W, Xu Y, Li Z, Li J, Chen Q, Huang Q, Wu Y, Chen H

PubMed · May 12, 2025
Pancreatic cystic neoplasms (PCNs) are a complex group of lesions with a spectrum of malignancy. Accurate differentiation of PCN types is crucial for patient management, as misdiagnosis can result in unnecessary surgeries or treatment delays, affecting quality of life. The significance of developing a non-invasive, accurate diagnostic model is underscored by the need to improve patient outcomes and reduce the impact of these conditions. We developed a machine learning model capable of accurately identifying different types of PCNs in a non-invasive manner, using a dataset comprising 449 MRI and 568 CT scans from adult patients collected between 2009 and 2022. The results indicate that our multimodal machine learning algorithm, which integrates both clinical and imaging data, significantly outperforms single-source algorithms. Specifically, it demonstrated state-of-the-art performance in classifying PCN types, achieving an average accuracy of 91.2%, precision of 91.7%, sensitivity of 88.9%, and specificity of 96.5%. Remarkably, for patients with mucinous cystic neoplasms (MCNs), the model achieved a 100% prediction accuracy regardless of whether MRI or CT imaging was used. These results indicate that our non-invasive multimodal machine learning model offers strong support for the early screening of MCNs and represents a significant advancement in PCN diagnosis, with the potential to improve clinical practice and patient outcomes. We also achieved the best results on an additional pancreatic cancer dataset, further demonstrating the generality of our model.
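
A minimal sketch of multimodal fusion by feature concatenation, with synthetic imaging and clinical features and the metrics reported above (accuracy, precision, sensitivity, specificity), is shown below; it uses a binary task and a generic random forest for brevity, whereas the paper addresses multi-class PCN typing with an unspecified model.

```python
# Sketch of a multimodal classifier combining imaging-derived and clinical
# features by concatenation (synthetic data; the paper's feature sets and
# model are not specified in the abstract).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
imaging = rng.normal(size=(600, 50))      # stand-ins for MRI/CT-derived features
clinical = rng.normal(size=(600, 8))      # stand-ins for age, lesion size, lab values
y = (imaging[:, 0] + clinical[:, 0] + rng.normal(size=600) > 0).astype(int)

X = np.hstack([imaging, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"accuracy={accuracy_score(y_te, pred):.3f} "
      f"precision={precision_score(y_te, pred):.3f} "
      f"sensitivity={tp / (tp + fn):.3f} specificity={tn / (tn + fp):.3f}")
```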

Application of improved graph convolutional network for cortical surface parcellation.

Tan J, Ren X, Chen Y, Yuan X, Chang F, Yang R, Ma C, Chen X, Tian M, Chen W, Wang Z

PubMed · May 12, 2025
Accurate cortical surface parcellation is essential for elucidating brain organizational principles, functional mechanisms, and the neural substrates underlying higher cognitive and emotional processes. However, the cortical surface is a highly folded, complex geometry, and large regional variations make the analysis of surface data challenging. Current methods rely on geometric simplification such as spherical expansion: spherical mapping and registration are popular but costly, taking hours and not taking full advantage of the inherent structural information. In this study, we propose an Attention-guided Deep Graph Convolutional Network (ADGCN) for end-to-end parcellation on primitive cortical surface manifolds. ADGCN consists of deep graph convolutional layers arranged in a symmetrical U-shaped structure, which enables it to transmit detailed information from the original brain map, learn the complex graph structure, and enhance the network's feature extraction capability. In addition, we introduce a Squeeze-and-Excitation (SE) module, which enables the network to better capture key features and suppress unimportant ones, significantly improving parcellation performance with a small amount of additional computation. We evaluated the model on a public dataset of 100 manually labeled brain surfaces. Compared with other methods, the proposed network achieves a Dice coefficient of 88.53% and an accuracy of 90.27%. The network segments the cortex directly in the original domain and has the advantages of high efficiency, simple operation, and strong interpretability. This approach facilitates the investigation of cortical changes during development, aging, and disease progression, with the potential to enhance the accuracy of neurological disease diagnosis and the objectivity of treatment efficacy evaluation.
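
The Squeeze-and-Excitation idea applied to per-vertex cortical features can be sketched as below in PyTorch: channel statistics are pooled over all vertices, passed through a small bottleneck, and used to re-weight the channels. This is a generic SE block, not the paper's exact ADGCN implementation, and the vertex count is illustrative.

```python
# Generic Squeeze-and-Excitation (SE) block for per-vertex surface features.
import torch
import torch.nn as nn

class GraphSE(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (num_vertices, channels)
        s = x.mean(dim=0)                 # squeeze: global average over vertices
        w = self.fc(s)                    # excitation: per-channel weights in (0, 1)
        return x * w                      # re-weight channels, keep shape

x = torch.rand(40962, 64)                 # e.g. 40,962 vertices, 64 features each
print(GraphSE(64)(x).shape)               # torch.Size([40962, 64])
```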

MRI-Based Diagnostic Model for Alzheimer's Disease Using 3D-ResNet.

Chen D, Yang H, Li H, He X, Mu H

PubMed · May 12, 2025
Alzheimer's disease (AD), a progressive neurodegenerative disorder, is the leading cause of dementia worldwide and remains incurable once it begins. Therefore, early and accurate diagnosis is essential for effective intervention. Leveraging recent advances in deep learning, this study proposes a novel diagnostic model based on the 3D-ResNet architecture to classify three cognitive states from MRI data: AD, mild cognitive impairment (MCI), and cognitively normal (CN). The model integrates the strengths of ResNet and 3D convolutional neural networks (3D-CNN) and incorporates a special attention mechanism (SAM) within the residual structure to enhance feature representation. The study utilized the ADNI dataset, comprising 800 brain MRI scans. The dataset was split in a 7:3 ratio for training and testing, and the network was trained using data augmentation and cross-validation strategies. The proposed model achieved 92.33% accuracy in the three-class classification task, and 97.61%, 95.83%, and 93.42% accuracy in the binary classifications of AD vs. CN, AD vs. MCI, and CN vs. MCI, respectively, outperforming existing state-of-the-art methods. Furthermore, Grad-CAM heatmaps and 3D MRI reconstructions revealed that the cerebral cortex and hippocampus are critical regions for AD classification. These findings demonstrate a robust and interpretable AI-based diagnostic framework for AD, providing valuable technical support for its timely detection and clinical intervention.
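
A 3D residual block with a simple channel-attention gate, in the spirit of combining ResNet-style skip connections, 3D convolutions, and an attention mechanism, might look like the PyTorch sketch below; it is not the paper's exact SAM design.

```python
# 3D residual block with a simple channel-attention gate (generic sketch).
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.attn = nn.Sequential(                       # channel attention gate
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * self.attn(out)                       # re-weight channels
        return self.relu(out + x)                        # residual connection

vol = torch.rand(1, 16, 32, 32, 32)                      # (batch, channels, D, H, W)
print(ResBlock3D(16)(vol).shape)                         # torch.Size([1, 16, 32, 32, 32])
```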

Use of Artificial Intelligence in Recognition of Fetal Open Neural Tube Defect on Prenatal Ultrasound.

Kumar M, Arora U, Sengupta D, Nain S, Meena D, Yadav R, Perez M

PubMed · May 12, 2025
To compare axial cranial ultrasound images of normal fetuses and fetuses with open neural tube defect (NTD) using a deep learning (DL) model and to assess its predictive accuracy in identifying open NTD. This was a prospective case-control study. Axial trans-thalamic fetal ultrasound images of participants with open fetal NTD and normal controls between 14 and 28 weeks of gestation were obtained after consent. The images were randomly divided into training, testing, and validation datasets in a 70:15:15 ratio, then processed and classified using DL convolutional neural network (CNN) transfer learning (TL) models trained for 50 epochs. The data were analyzed in terms of Cohen kappa score, accuracy, area under the receiver operating characteristic curve (AUROC), F1 score, validity, sensitivity, and specificity of the test. A total of 59 cases and 116 controls were fully followed. EfficientNet-B0, Visual Geometry Group (VGG16), and Inception V3 TL models were used. Both EfficientNet-B0 and VGG16 gave similarly high training and validation accuracy (100% and 95.83%, respectively). With Inception V3, the training and validation accuracy was 98.28% and 95.83%, respectively. The sensitivity and specificity of EfficientNet-B0 were 100% and 89%, respectively, the best among the three models. Analysis of axial images of the fetal cranium using the EfficientNet-B0 DL model proved effective for clinical identification of open NTD.
· Open spina bifida is often missed due to non-recognition of the lemon sign on ultrasound.
· Image classification using DL identified open spina bifida with excellent accuracy.
· The research is clinically relevant in low- and middle-income countries.
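
A transfer-learning setup of the kind described, an ImageNet-pretrained backbone with a new two-class head, is sketched below using torchvision's EfficientNet-B0; data loading, augmentation, and the authors' exact framework and training settings are omitted.

```python
# Transfer-learning sketch: ImageNet-pretrained EfficientNet-B0 with a new
# binary head (open NTD vs. normal). Input batch is a placeholder, not data
# from the study.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights="IMAGENET1K_V1")
for p in model.parameters():                      # freeze pretrained backbone
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # new head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)               # placeholder ultrasound batch
labels = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```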