
Artificial Intelligence based radiomic model in Craniopharyngiomas: A Systematic Review and Meta-Analysis on Diagnosis, Segmentation, and Classification.

Mohammadzadeh I, Hajikarimloo B, Niroomand B, Faizi N, Faizi N, Habibi MA, Mohammadzadeh S, Soltani R

PubMed · May 7, 2025
Craniopharyngiomas (CPs) are rare, benign brain tumors originating from remnants of Rathke's pouch, typically located in the sellar/parasellar region. Accurate differentiation is crucial due to varying prognoses, with adamantinomatous CPs (ACPs) having higher recurrence and worse outcomes. MRI struggles with overlapping features, complicating diagnosis. This study evaluates the role of artificial intelligence (AI) in diagnosing, segmenting, and classifying CPs, emphasizing its potential to improve clinical decision-making, particularly for radiologists and neurosurgeons. This systematic review and meta-analysis assesses AI applications in diagnosing, segmenting, and classifying CPs. A comprehensive search was conducted across PubMed, Scopus, Embase, and Web of Science for studies employing AI models in patients with CP. Performance metrics such as sensitivity, specificity, accuracy, and area under the curve (AUC) were extracted and synthesized. Eleven studies involving 1916 patients were included in the analysis. The pooled results revealed a sensitivity of 0.740 (95% CI: 0.673-0.808), specificity of 0.813 (95% CI: 0.729-0.898), and accuracy of 0.746 (95% CI: 0.679-0.813). The AUC was 0.793 (95% CI: 0.719-0.866) for diagnosis and 0.899 (95% CI: 0.846-0.951) for classification. The pooled sensitivity for segmentation was 0.755 (95% CI: 0.704-0.805). AI-based models show strong potential to enhance diagnostic accuracy and clinical decision-making for CPs. These findings support the use of AI tools for more reliable preoperative assessment, leading to better treatment planning and patient outcomes. Further research with larger datasets is needed to optimize and validate AI applications in clinical practice.
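
For readers curious how pooled estimates like the ones above are typically produced, here is a minimal sketch of inverse-variance random-effects (DerSimonian-Laird) pooling of logit-transformed sensitivities. The per-study values in the example are hypothetical placeholders, and the review's actual pooling model may differ.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling on
# logit-transformed sensitivities. Study values below are hypothetical
# placeholders, not data from the review.
import numpy as np

def pool_random_effects(props, ns):
    """Pool proportions (e.g., per-study sensitivities) given sample sizes."""
    props, ns = np.asarray(props, float), np.asarray(ns, float)
    y = np.log(props / (1 - props))              # logit transform
    v = 1.0 / (ns * props * (1 - props))         # approx. within-study variance
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    inv = lambda x: 1.0 / (1.0 + np.exp(-x))     # back-transform to proportion
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se))

est, ci = pool_random_effects([0.71, 0.78, 0.74, 0.69], [120, 210, 95, 160])
print(f"pooled sensitivity = {est:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```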

Impact of the recent advances in coronary artery disease imaging on pilot medical certification and aviation safety: current state and future perspective.

Benjamin MM, Rabbat MG, Park W, Benjamin M, Davenport E

PubMed · May 7, 2025
Coronary artery disease (CAD) is highly prevalent among pilots due to the nature of their lifestyle and occupational stresses. CAD is one of the most common conditions affecting pilots' medical certification and frequently goes undisclosed by pilots who fear losing their certification. Traditional screening methods, such as resting electrocardiograms (EKGs) and functional stress tests, have limitations, especially in detecting non-obstructive CAD. Recent advances in cardiac imaging are challenging the current paradigms of CAD screening and risk-assessment protocols, offering tools uniquely suited to address the occupational health challenges faced by pilots. Coronary artery calcium scoring (CACS) has proven valuable in refining risk stratification in asymptomatic individuals. Coronary computed tomography angiography (CCTA) is increasingly being adopted as a superior tool for ruling out CAD in symptomatic individuals, assessing plaque burden, and morphologically identifying vulnerable plaque. CT-derived fractional flow reserve (CT-FFR) adds a physiologic component to the anatomical prowess of CCTA. Cardiac magnetic resonance imaging (CMR) is now used both as a prognosticating tool following a coronary event and as a stress-testing modality. Investigational technologies such as pericoronary fat attenuation and artificial intelligence (AI)-enabled plaque quantification hold the promise of enhancing diagnostic accuracy and risk stratification. This review highlights the interplay between occupational demands, regulatory considerations, and the limitations of traditional modalities for pilot CAD screening and surveillance. We also discuss the potential role of recent advances in cardiac imaging in optimizing pilot health and flight safety.

Potential of artificial intelligence for radiation dose reduction in computed tomography -A scoping review.

Bani-Ahmad M, England A, McLaughlin L, Hadi YH, McEntee M

PubMed · May 7, 2025
Artificial intelligence (AI) is transforming medical imaging, with extensive ramifications for nearly every aspect of diagnostic imaging, including computed tomography (CT). This work aims to review, evaluate, and summarise the role of AI in radiation dose optimisation across three fundamental domains in CT: patient positioning, scan range determination, and image reconstruction. A comprehensive scoping review of the literature was performed. Electronic databases, including Scopus, Ovid, EBSCOhost, and PubMed, were searched between January 2018 and December 2024. Relevant articles were identified from their titles, had their abstracts evaluated, and those deemed relevant had their full texts reviewed. Data extracted from the selected studies included the application of AI, radiation dose, anatomical part, and any relevant evaluation metrics based on the CT parameter to which AI was applied. Ninety articles met the selection criteria. Included studies evaluated the performance of AI for dose optimisation through patient positioning, scan range determination, and reconstruction across various CT examinations, including the abdomen, chest, head, neck, and pelvis, as well as CT angiography. A concise overview of the present state of AI in these three domains is provided, emphasising benefits, limitations, and impact on the transformation of dose reduction in CT scanning. AI methods can help minimise positioning offsets and over-scanning caused by manual errors, and deep learning image reconstruction algorithms can help overcome the limitations associated with low-dose CT settings. Further clinical integration of AI will continue to allow improvements in optimising CT scan protocols and radiation dose. This review underscores the significance of AI in optimising radiation dose in CT imaging, focusing on three key areas: patient positioning, scan range determination, and image reconstruction.
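
As an illustration of the image-reconstruction domain the review covers, here is a minimal PyTorch sketch of a DnCNN-style residual denoiser that maps low-dose slices toward routine-dose quality. The architecture, sizes, and training setup are assumptions for illustration; the reviewed vendor algorithms are far more elaborate.

```python
# Minimal residual denoiser for low-dose CT slices (illustrative sketch only).
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Predict the noise map and subtract it (residual learning).
        return x - self.body(x)

model = ResidualDenoiser()
low_dose = torch.randn(4, 1, 128, 128)   # dummy batch of low-dose slices
routine = torch.randn(4, 1, 128, 128)    # dummy routine-dose targets
loss = nn.functional.mse_loss(model(low_dose), routine)
loss.backward()
```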

Hybrid method for automatic initialization and segmentation of ventricles on large-scale cardiovascular magnetic resonance images.

Pan N, Li Z, Xu C, Gao J, Hu H

PubMed · May 7, 2025
Cardiovascular diseases are the number one cause of death globally, making cardiac magnetic resonance image segmentation a popular research topic. Existing schemas relying on manual user interaction or semi-automatic segmentation are infeasible when dealing with thousands of cardiac MRI studies. Thus, we propose a fully automatic and robust algorithm for large-scale cardiac MRI segmentation that combines the advantages of deep learning localization and 3D-ASM restriction. The proposed method comprises several key techniques: 1) a hybrid network integrating CNNs and a Transformer as an encoder with an edge feature guidance (EFG) module (named CTr-HNs) to localize the target cardiac regions on MRI images; 2) initial shape acquisition by aligning coarse segmentation contours to the initial surface model of the 3D-ASM; 3) refinement of the initial shape to cover all short-axis MRI slices by complex transformation. The datasets used are from the UK Biobank and the Cardiac Atlas Project (CAP). In the coarse cardiac segmentation experiments on MR images, Dice coefficients (Dice), mean contour distances (MCD), and 95th-percentile Hausdorff distances (HD95) are used to evaluate segmentation performance. In the SPASM experiments, point-to-surface (P2S) distances and Dice scores are compared between automatic results and ground truth. The CTr-HNs from our proposed method achieves Dice, MCD, and HD95 of 0.95, 0.10, and 1.54 for LV segmentation, 0.88, 0.13, and 1.94 for LV myocardium segmentation, and 0.91, 0.24, and 3.25 for RV segmentation, respectively. The overall P2S error from our proposed schema is 1.45 mm. For the endocardium and epicardium, the Dice scores are 0.87 and 0.91, respectively. Our experimental results show that the proposed schema can automatically perform large-scale quantification of population cardiac images with robustness and accuracy.
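
For reference, here is a short sketch of two of the metrics reported above, Dice overlap and 95th-percentile Hausdorff distance (HD95) between binary masks. This is one common formulation using boundary voxels and distance transforms; the paper's exact definition may differ.

```python
# Dice and HD95 between binary segmentation masks (one common formulation).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def _surface(mask):
    # Boundary voxels: foreground minus its erosion.
    return mask & ~binary_erosion(mask)

def hd95(a, b, spacing=(1.0, 1.0)):
    sa, sb = _surface(a.astype(bool)), _surface(b.astype(bool))
    d_ab = distance_transform_edt(~sb, sampling=spacing)[sa]  # a-surface -> b
    d_ba = distance_transform_edt(~sa, sampling=spacing)[sb]  # b-surface -> a
    return np.percentile(np.hstack([d_ab, d_ba]), 95)

a = np.zeros((64, 64), bool); a[20:40, 20:40] = True   # dummy masks
b = np.zeros((64, 64), bool); b[22:42, 22:42] = True
print(dice(a, b), hd95(a, b))
```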

Deep learning approaches for classification tasks in medical X-ray, MRI, and ultrasound images: a scoping review.

Laçi H, Sevrani K, Iqbal S

PubMed · May 7, 2025
Medical images constitute the largest part of existing medical information, and dealing with them is challenging not only in terms of management but also in terms of interpretation and analysis. Hence, analyzing, understanding, and classifying them become very expensive and time-consuming tasks, especially if performed manually. Deep learning is considered a good solution for image classification, segmentation, and transfer-learning tasks, since it offers a large number of algorithms for solving such complex problems. PRISMA-ScR guidelines were followed to conduct this scoping review, with the aim of exploring how deep learning is being used to classify a broad spectrum of diseases diagnosed using X-ray, MRI, or ultrasound image modalities. The findings contribute to existing research by outlining the characteristics of the adopted datasets and the preprocessing or augmentation techniques applied to them. The authors summarized all relevant studies based on the deep learning models used and the accuracy achieved for classification. Whenever possible, they included details about the hardware and software configurations, as well as the architectural components of the models employed. Moreover, the models that achieved the highest accuracy in disease classification were highlighted, along with their strengths. The authors also discussed the limitations of current approaches and proposed future directions for medical image classification.
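
As a concrete instance of the transfer-learning approach this review surveys, here is a minimal PyTorch sketch fine-tuning a pretrained ResNet for a binary X-ray label. The data, label semantics, and hyperparameters are placeholders, not drawn from any reviewed study.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained ResNet-50 head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the ImageNet head

# Freeze the backbone and train only the new classification head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)            # dummy batch of X-ray crops
labels = torch.randint(0, 2, (8,))              # dummy binary labels
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```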

A deep learning model combining circulating tumor cells and radiological features in the multi-classification of mediastinal lesions in comparison with thoracic surgeons: a large-scale retrospective study.

Wang F, Bao M, Tao B, Yang F, Wang G, Zhu L

PubMed · May 7, 2025
CT images and circulating tumor cells (CTCs) are indispensable for diagnosing mediastinal lesions, providing radiological and intra-tumoral information, respectively. This study aimed to develop and validate a deep multimodal fusion network (DMFN) combining CTCs and CT images for the multi-classification of mediastinal lesions. In this retrospective diagnostic study, we enrolled 1074 patients with 1500 enhanced CT images and 1074 CTC results between Jan 1, 2020, and Dec 31, 2023. Patients were divided into a training cohort (n = 434), validation cohort (n = 288), and test cohort (n = 352). The DMFN and monomodal convolutional neural network (CNN) models were developed and validated using the CT images and CTC results, and their diagnostic performances were evaluated against paraffin-embedded pathology from surgical tissues. Predictive abilities were compared with those of thoracic resident physicians, attending physicians, and chief physicians using the area under the receiver operating characteristic (ROC) curve, and diagnostic results were visualized in heatmaps. For binary classification, the predictive performance of the DMFN (AUC = 0.941, 95% CI 0.901-0.982) was better than that of the monomodal CNN model (AUC = 0.710, 95% CI 0.664-0.756). In addition, the DMFN model achieved better predictive performance than the thoracic chief physicians, attending physicians, and resident physicians (P = 0.054, 0.020, and 0.016, respectively). For multiclassification, the DMFN achieved encouraging predictive ability (AUC = 0.884, 95% CI 0.837-0.931), significantly outperforming the monomodal CNN (AUC = 0.722, 95% CI 0.705-0.739) and also surpassing the chief physicians (AUC = 0.787, 95% CI 0.714-0.862), attending physicians (AUC = 0.632, 95% CI 0.612-0.654), and resident physicians (AUC = 0.541, 95% CI 0.508-0.574). This study showed the feasibility and effectiveness of a CNN model combining CT images and CTC levels in predicting the diagnosis of mediastinal lesions. It could serve as a useful method to assist thoracic surgeons in improving diagnostic accuracy and has the potential to inform management decisions.
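
To make the fusion idea concrete, here is an illustrative late-fusion model in the spirit of the DMFN: a CNN branch embeds the CT image, an MLP branch embeds the CTC results, and the concatenated features feed a classifier. The branch sizes, number of CTC features, and fusion scheme are assumptions, not the published design.

```python
# Illustrative image + tabular late-fusion classifier (not the published DMFN).
import torch
import torch.nn as nn
from torchvision import models

class ImageCTCFusion(nn.Module):
    def __init__(self, n_classes, n_ctc_features=4):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-dim image embedding
        self.image_branch = backbone
        self.ctc_branch = nn.Sequential(         # embed CTC measurements
            nn.Linear(n_ctc_features, 32), nn.ReLU(), nn.Linear(32, 32))
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, ctc):
        z = torch.cat([self.image_branch(image), self.ctc_branch(ctc)], dim=1)
        return self.head(z)

model = ImageCTCFusion(n_classes=4)              # 4 hypothetical lesion classes
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
```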

Enhancing efficient deep learning models with multimodal, multi-teacher insights for medical image segmentation.

Hossain KF, Kamran SA, Ong J, Tavakkoli A

PubMed · May 7, 2025
The rapid evolution of deep learning has dramatically enhanced the field of medical image segmentation, leading to models with unprecedented accuracy in analyzing complex medical images. Deep learning-based segmentation holds significant promise for advancing clinical care and enhancing the precision of medical interventions. However, the high computational demand and complexity of these models present significant barriers to their application in resource-constrained clinical settings. To address this challenge, we introduce Teach-Former, a novel knowledge distillation (KD) framework that leverages a Transformer backbone to condense the knowledge of multiple teacher models into a single, streamlined student model. Moreover, it excels in the contextual and spatial interpretation of relationships across multimodal images for more accurate and precise segmentation. Teach-Former stands out by harnessing multimodal inputs (CT, PET, MRI) and distilling both the final predictions and the intermediate attention maps, ensuring a richer spatial and contextual knowledge transfer. Through this technique, the student model inherits the capacity for fine segmentation while operating with a significantly reduced parameter set and computational footprint. Additionally, a novel training strategy optimizes knowledge transfer, ensuring the student model captures the intricate mapping of features essential for high-fidelity segmentation. The efficacy of Teach-Former has been tested on two extensive multimodal datasets, HECKTOR21 and PI-CAI22, encompassing various image types. The results demonstrate that our KD strategy reduces model complexity while surpassing existing state-of-the-art methods. The findings of this study indicate that the proposed methodology could facilitate efficient segmentation of complex multimodal medical images, supporting clinicians in achieving more precise diagnoses and comprehensive monitoring of pathological conditions ( https://github.com/FarihaHossain/TeachFormer ).
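
Here is a minimal sketch of a multi-teacher distillation objective of the kind the abstract describes: the teachers' temperature-softened predictions are averaged and matched by the student with a KL term, combined with a supervised loss. The temperature, weighting, and omission of the attention-map term are simplifying assumptions, not the Teach-Former formulation.

```python
# Multi-teacher knowledge distillation loss (simplified illustrative sketch).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, targets,
                          T=2.0, alpha=0.5):
    # Soft targets: mean of the teachers' temperature-softened distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(0)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  teacher_probs, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, targets)   # supervised term
    return alpha * kd + (1 - alpha) * ce

student = torch.randn(4, 3)                         # dummy 3-class logits
teachers = [torch.randn(4, 3) for _ in range(3)]    # three teacher models
loss = multi_teacher_kd_loss(student, teachers, torch.randint(0, 3, (4,)))
```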

Radiological evaluation and clinical implications of deep learning- and MRI-based synthetic CT for the assessment of cervical spine injuries.

Fischer G, Schlosser TPC, Dietrich TJ, Kim OC, Zdravkovic V, Martens B, Fehlings MG, Jans L, Vereecke E, Stienen MN, Hejrati N

PubMed · May 7, 2025
Efficient evaluation of soft tissues and bony structures following cervical spine trauma is critical. We sought to evaluate the diagnostic validity of magnetic resonance imaging (MRI)-based synthetic CT (sCT) compared with conventional computed tomography (CT) for cervical spine injuries. In a prospective, multicenter study, patients with cervical spine injuries underwent CT and MRI within 48 h after injury. A panel of five clinicians independently reviewed the images for diagnostic accuracy, lesion characterization (AO Spine classification), and soft tissue trauma. Fracture visibility, anterior wall height (AVH), posterior wall height (PVH), vertebral body angle (VBA), and segmental kyphosis (SK) were recorded, with corresponding interobserver reliability (intraclass correlation coefficients, ICC) and intermodal differences (Fleiss' kappa). The accuracy of estimating Hounsfield unit (HU) values and mean cortical surface distances were measured. Thirty-seven patients (44 cervical spine fractures) were enrolled. sCT demonstrated a sensitivity of 97.3% for visualizing fractures. Intermodal agreement regarding injury classification was almost perfect (κ = 0.922; p < 0.001). Inter-reader ICCs were good to excellent (CT vs. sCT): AVH (0.88, 0.87); PVH (0.87, 0.88); VBA (0.78, 0.76); SK (0.77, 0.93). Intermodal agreement showed mean absolute differences of 0.3 mm (AVH), 0.3 mm (PVH), 1.15° (VBA), and 0.51° (SK), respectively. MRI visualized additional soft tissue trauma in 56.8% of patients. Voxelwise comparisons of sCT showed good to excellent agreement with CT in terms of HU (mean absolute error of 20 (SD ± 62)) and a mean absolute cortical surface distance of 0.45 mm (SD ± 0.13). sCT is a promising, radiation-free imaging technique for diagnosing cervical spine injuries with accuracy similar to CT. Question: Assessing the accuracy of MRI-based synthetic CT (sCT) for fracture visualization and classification in comparison with the gold standard, CT, for cervical spine injuries. Findings: sCT demonstrated 97.3% sensitivity in detecting fractures and exhibited near-perfect intermodal agreement in classifying injuries according to the AO Spine classification system. Clinical relevance: sCT is a promising, radiation-free imaging modality that offers comparable accuracy to CT in visualizing and classifying cervical spine injuries. The combination of conventional MRI sequences for soft tissue evaluation with sCT reconstruction for bone visualization provides comprehensive diagnostic information.
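
A quick sketch of the voxelwise agreement check reported above: the mean absolute HU error between CT and sCT inside a shared region of interest. The arrays and mask below are placeholders standing in for co-registered volumes.

```python
# Voxelwise mean absolute HU error between CT and synthetic CT (sketch).
import numpy as np

def hu_mae(ct, sct, mask):
    """Mean and SD of absolute Hounsfield-unit error within a mask."""
    diff = np.abs(ct[mask] - sct[mask])
    return diff.mean(), diff.std()

ct = np.random.normal(300, 150, (64, 64, 64))    # dummy CT volume (HU)
sct = ct + np.random.normal(0, 25, ct.shape)     # dummy co-registered sCT
mask = np.ones(ct.shape, dtype=bool)             # dummy region of interest
mae, sd = hu_mae(ct, sct, mask)
print(f"MAE = {mae:.1f} HU (SD ± {sd:.1f})")
```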

Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

PubMed · May 7, 2025
Intracerebral hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability. Accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE. An automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images. Layers for binary classification were added on top of the pre-trained model. The resulting model was fine-tuned using the training data and evaluated on the held-out test set to collect AUC and F1 scores. Evaluations were performed at the scan and slice levels. We ran different panels: one using two multi-center datasets for external validation, and one including parts of them in the pre-training. Our model demonstrated strong performance in identifying the BHS compared with the baseline model. Specifically, the model achieved scan-level AUC scores between 0.75 and 0.89 and F1 scores between 0.60 and 0.70. Furthermore, it exhibited robustness and generalizability on an external dataset, achieving a scan-level AUC score of up to 0.85 and an F1 score of up to 0.60, while performing less well on another dataset with more heterogeneous samples. These negative effects could be mitigated by including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, particularly for BHS identification from head CT scans. The resulting pre-trained head CT encoder model shows potential to minimize manual annotation, which would significantly reduce labor, time, and costs. After fine-tuning, the framework demonstrated promising performance on a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. Abbreviations: ICH = intracerebral hemorrhage; HE = hematoma expansion; BHS = black hole sign; CT = computed tomography; SSL = self-supervised learning; AUC = area under the receiver operating characteristic curve; CNN = convolutional neural network; SimCLR = simple framework for contrastive learning of visual representations; HU = Hounsfield unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = vendor neutral archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = international normalized ratio; GPU = graphics processing unit; NIH = National Institutes of Health.
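
Since the abbreviation list cites SimCLR, here is a minimal sketch of its NT-Xent contrastive objective, the kind of loss used in this style of SSL pre-training: two augmented views of the same slice are pulled together in embedding space and pushed apart from the rest of the batch. The temperature and embedding sizes are assumptions, not values from the paper.

```python
# NT-Xent (SimCLR-style) contrastive loss over paired embeddings (sketch).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """z1, z2: (N, d) embeddings of two augmented views of the same scans."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, d) unit vectors
    sim = z @ z.t() / tau                            # scaled cosine similarity
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))                # exclude self-similarity
    # For row i, the positive is the other view of the same scan.
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, pos)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)    # dummy encoder outputs
loss = nt_xent(z1, z2)
```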

Prompt Engineering for Large Language Models in Interventional Radiology.

Dietrich N, Bradbury NC, Loh C

PubMed · May 7, 2025
Prompt engineering plays a crucial role in optimizing artificial intelligence (AI) and large language model (LLM) outputs by refining input structure, a key factor in medical applications where precision and reliability are paramount. This Clinical Perspective provides an overview of prompt engineering techniques and their relevance to interventional radiology (IR). It explores key strategies, including zero-shot, one- or few-shot, chain-of-thought, tree-of-thought, self-consistency, and directional stimulus prompting, demonstrating their application in IR-specific contexts. Practical examples illustrate how these techniques can be effectively structured for workplace and clinical use. Additionally, the article discusses best practices for designing effective prompts and addresses challenges in the clinical use of generative AI, including data privacy and regulatory concerns. It concludes with an outlook on the future of generative AI in IR, highlighting advances including retrieval-augmented generation, domain-specific LLMs, and multimodal models.
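
To give a flavor of two of the strategies discussed, here is a hedged illustration of a few-shot prompt and a chain-of-thought prompt for IR-flavored tasks. The wording, labels, and clinical scenarios are invented for illustration and are not drawn from the article.

```python
# Illustrative few-shot and chain-of-thought prompt templates (hypothetical).
few_shot = """You label IR consult requests as ELECTIVE or URGENT.

Request: Outpatient port placement for scheduled chemotherapy.
Label: ELECTIVE

Request: Active GI bleed, hemodynamically unstable, needs embolization.
Label: URGENT

Request: {new_request}
Label:"""

chain_of_thought = """A patient is scheduled for a transjugular liver biopsy.
List the relevant pre-procedure checks step by step, explaining the reasoning
for each, before giving a final one-line summary."""

print(few_shot.format(new_request="Nephrostomy exchange, tube draining well."))
```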