Deep learning approaches for classification tasks in medical X-ray, MRI, and ultrasound images: a scoping review.

Laçi H, Sevrani K, Iqbal S

PubMed · May 7, 2025
Medical images make up the largest share of existing medical information, and dealing with them is challenging not only in terms of management but also in terms of interpretation and analysis. Analyzing, understanding, and classifying them therefore becomes a very expensive and time-consuming task, especially when performed manually. Deep learning is considered a good solution for image classification, segmentation, and transfer learning tasks, since it offers a large number of algorithms for such complex problems. This scoping review followed the PRISMA-ScR guidelines to explore how deep learning is being used to classify a broad spectrum of diseases diagnosed using X-ray, MRI, or ultrasound image modalities. The findings contribute to existing research by outlining the characteristics of the adopted datasets and the preprocessing or augmentation techniques applied to them. The authors summarized all relevant studies based on the deep learning models used and the accuracy achieved for classification. Whenever possible, they included details about the hardware and software configurations, as well as the architectural components of the models employed. The models that achieved the highest accuracy in disease classification were highlighted, along with their strengths. The authors also discussed the limitations of current approaches and proposed future directions for medical image classification.
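
As a concrete illustration of the kind of transfer-learning pipeline such reviews survey, the sketch below fine-tunes a pretrained CNN on labeled X-ray images with light augmentation. It assumes PyTorch/torchvision; the dataset folder, class layout, and hyperparameters are hypothetical placeholders, not taken from any reviewed study.

```python
# Minimal sketch: a pretrained CNN fine-tuned to classify X-ray images,
# with simple augmentation. "xray_dataset/" and the class setup are
# hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel
    transforms.RandomRotation(10),                 # light augmentation
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_ds = datasets.ImageFolder("xray_dataset/train", transform=train_tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:          # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```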

Prompt Engineering for Large Language Models in Interventional Radiology.

Dietrich N, Bradbury NC, Loh C

PubMed · May 7, 2025
Prompt engineering plays a crucial role in optimizing artificial intelligence (AI) and large language model (LLM) outputs by refining input structure, a key factor in medical applications where precision and reliability are paramount. This Clinical Perspective provides an overview of prompt engineering techniques and their relevance to interventional radiology (IR). It explores key strategies, including zero-shot, one- or few-shot, chain-of-thought, tree-of-thought, self-consistency, and directional stimulus prompting, demonstrating their application in IR-specific contexts. Practical examples illustrate how these techniques can be effectively structured for workplace and clinical use. Additionally, the article discusses best practices for designing effective prompts and addresses challenges in the clinical use of generative AI, including data privacy and regulatory concerns. It concludes with an outlook on the future of generative AI in IR, highlighting advances including retrieval-augmented generation, domain-specific LLMs, and multimodal models.
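
To make the prompting strategies concrete, here is an illustrative sketch of how a few-shot prompt and a chain-of-thought prompt might be assembled as plain strings for an IR-flavored question. The task wording and clinical examples are hypothetical and are not drawn from the article.

```python
# Illustrative only: few-shot and chain-of-thought prompt structures for an
# IR-related question. Content is hypothetical, not from the article.

few_shot_prompt = """You are assisting an interventional radiology resident.
Classify each request as PRE-PROCEDURE, INTRA-PROCEDURE, or POST-PROCEDURE.

Request: "What anticoagulation hold is needed before this biopsy?"
Label: PRE-PROCEDURE

Request: "How should the access site be monitored overnight?"
Label: POST-PROCEDURE

Request: "Which wire should I exchange for to cross this stenosis?"
Label:"""

chain_of_thought_prompt = (
    "A patient scheduled for port placement is on a direct oral anticoagulant. "
    "Reason step by step about the procedure's bleeding-risk category and the "
    "usual medication hold window, then give a one-sentence recommendation."
)

# These strings would be sent as the user message to an LLM chat API;
# the model and API are intentionally left unspecified here.
print(few_shot_prompt)
print(chain_of_thought_prompt)
```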

From Pixels to Polygons: A Survey of Deep Learning Approaches for Medical Image-to-Mesh Reconstruction

Fengming Lin, Arezoo Zakeri, Yidan Xue, Michael MacRaild, Haoran Dou, Zherui Zhou, Ziwei Zou, Ali Sarrami-Foroushani, Jinming Duan, Alejandro F. Frangi

arXiv preprint · May 6, 2025
Deep learning-based medical image-to-mesh reconstruction has rapidly evolved, enabling the transformation of medical imaging data into three-dimensional mesh models that are critical in computational medicine and in silico trials for advancing our understanding of disease mechanisms, and diagnostic and therapeutic techniques in modern medicine. This survey systematically categorizes existing approaches into four main categories: template models, statistical models, generative models, and implicit models. Each category is analysed in detail, examining their methodological foundations, strengths, limitations, and applicability to different anatomical structures and imaging modalities. We provide an extensive evaluation of these methods across various anatomical applications, from cardiac imaging to neurological studies, supported by quantitative comparisons using standard metrics. Additionally, we compile and analyze major public datasets available for medical mesh reconstruction tasks and discuss commonly used evaluation metrics and loss functions. The survey identifies current challenges in the field, including requirements for topological correctness, geometric accuracy, and multi-modality integration. Finally, we present promising future research directions in this domain. This systematic review aims to serve as a comprehensive reference for researchers and practitioners in medical image analysis and computational medicine.
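
Since the survey discusses commonly used evaluation metrics, a minimal sketch of one of them may help: the symmetric Chamfer distance between point sets sampled from a predicted and a reference surface. This is a generic NumPy implementation under the assumption of (N, 3) point arrays; the random point clouds are stand-ins.

```python
# Minimal NumPy sketch of the (squared) Chamfer distance, a standard metric
# for comparing a reconstructed mesh surface (sampled as points) against a
# reference surface. The point clouds below are random stand-ins.
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour squared distance between
    point sets p (N, 3) and q (M, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1) ** 2  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

pred_points = np.random.rand(500, 3)   # points sampled from a predicted mesh
gt_points = np.random.rand(500, 3)     # points sampled from the reference mesh
print(chamfer_distance(pred_points, gt_points))
```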

Rethinking Boundary Detection in Deep Learning-Based Medical Image Segmentation

Yi Lin, Dong Zhang, Xiao Fang, Yufan Chen, Kwang-Ting Cheng, Hao Chen

arXiv preprint · May 6, 2025
Medical image segmentation is a pivotal task within the realms of medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in terms of segmentation accuracy and strikes a better balance between accuracy and efficiency, without the need for additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder network comprising a mainstream CNN stream for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on seven challenging medical image segmentation datasets, namely ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, and BTCV. Our experimental results unequivocally demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The codes have been released at: https://github.com/xiaofang007/CTO.
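
The boundary-guided decoder relies on binary boundary masks produced by classical edge detection operators. The sketch below shows one plausible way to derive such a mask from a segmentation mask with a Sobel operator; it is a rough illustration of the idea, not the authors' implementation (see the linked repository for that).

```python
# Rough illustration (not the authors' code) of boundary guidance: derive a
# binary boundary mask from a segmentation mask with a Sobel operator, which
# a decoder could then be supervised against.
import numpy as np
from scipy import ndimage

def boundary_mask(seg_mask: np.ndarray) -> np.ndarray:
    """seg_mask: binary (H, W) array; returns a binary mask of its boundary band."""
    gx = ndimage.sobel(seg_mask.astype(float), axis=0)
    gy = ndimage.sobel(seg_mask.astype(float), axis=1)
    return (np.hypot(gx, gy) > 0).astype(np.uint8)

mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 1                  # a toy square "organ"
edges = boundary_mask(mask)             # nonzero only in a thin band at the border
print(edges.sum(), "boundary pixels")
```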

Path and Bone-Contour Regularized Unpaired MRI-to-CT Translation

Teng Zhou, Jax Luo, Yuping Sun, Yiheng Tan, Shun Yao, Nazim Haouchine, Scott Raymond

arXiv preprint · May 6, 2025
Accurate MRI-to-CT translation promises the integration of complementary imaging information without the need for additional imaging sessions. Given the practical challenges associated with acquiring paired MRI and CT scans, the development of robust methods capable of leveraging unpaired datasets is essential for advancing the MRI-to-CT translation. Current unpaired MRI-to-CT translation methods, which predominantly rely on cycle consistency and contrastive learning frameworks, frequently encounter challenges in accurately translating anatomical features that are highly discernible on CT but less distinguishable on MRI, such as bone structures. This limitation renders these approaches less suitable for applications in radiation therapy, where precise bone representation is essential for accurate treatment planning. To address this challenge, we propose a path- and bone-contour regularized approach for unpaired MRI-to-CT translation. In our method, MRI and CT images are projected to a shared latent space, where the MRI-to-CT mapping is modeled as a continuous flow governed by neural ordinary differential equations. The optimal mapping is obtained by minimizing the transition path length of the flow. To enhance the accuracy of translated bone structures, we introduce a trainable neural network to generate bone contours from MRI and implement mechanisms to directly and indirectly encourage the model to focus on bone contours and their adjacent regions. Evaluations conducted on three datasets demonstrate that our method outperforms existing unpaired MRI-to-CT translation approaches, achieving lower overall error rates. Moreover, in a downstream bone segmentation task, our approach exhibits superior performance in preserving the fidelity of bone structures. Our code is available at: https://github.com/kennysyp/PaBoT.
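
A conceptual sketch of the core idea, under the assumption of a latent flow driven by a small velocity network and simple Euler integration: integrate from the MRI latent toward a CT latent and penalize the approximate arc length of the trajectory. The network size, step count, and loss handling below are illustrative stand-ins; the authors' actual code is at the linked repository.

```python
# Conceptual sketch (not the authors' code): model the latent MRI-to-CT mapping
# as a flow z'(t) = f(z, t) and penalize the length of the transition path.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.Tanh(), nn.Linear(128, dim))

    def forward(self, z, t):
        t_col = torch.full((z.shape[0], 1), float(t))
        return self.net(torch.cat([z, t_col], dim=1))

def integrate_with_path_length(f, z_mri, steps: int = 10):
    """Euler-integrate the flow from the MRI latent and accumulate path length."""
    z, dt, path_len = z_mri, 1.0 / steps, 0.0
    for i in range(steps):
        v = f(z, i * dt)
        path_len = path_len + v.norm(dim=1).mean() * dt   # approximate arc length
        z = z + v * dt
    return z, path_len

f = VelocityField()
z_mri = torch.randn(4, 64)                    # toy latent codes of MRI patches
z_ct_pred, path_len = integrate_with_path_length(f, z_mri)
loss = path_len                               # would be added to the translation losses
```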

Machine Learning Approach to 3×4 Mueller Polarimetry for Complete Reconstruction of Diagnostic Polarimetric Images of Biological Tissues.

Chae S, Huang T, Rodriguez-Nunez O, Lucas T, Vanel JC, Vizet J, Pierangelo A, Piavchenko G, Genova T, Ajmal A, Ramella-Roman JC, Doronin A, Ma H, Novikova T

PubMed · May 6, 2025
The translation of imaging Mueller polarimetry to clinical practice is often hindered by the large footprint and relatively slow acquisition speed of existing instruments. Using a polarization-sensitive camera as a detector may reduce instrument dimensions and allow data streaming at video rate. However, only the first three rows of the complete 4×4 Mueller matrix can be measured. To overcome this hurdle, we developed a machine learning approach using a sequential neural network algorithm to reconstruct the missing elements of a Mueller matrix from the measured elements of its first three rows. The algorithm was trained and tested on a dataset of polarimetric images of various excised human tissues (uterine cervix, colon, skin, brain) acquired with two different imaging Mueller polarimeters operating in either reflection (wide-field imaging system) or transmission (microscope) configuration at wavelengths of 550 nm and 385 nm, respectively. Reconstruction performance was evaluated using various error metrics, all of which confirmed low error values. Reconstruction of full images of the fourth row of the Mueller matrix took less than 50 milliseconds with GPU parallelization and an increased batch size. This suggests that a machine learning approach with parallel processing of all image pixels, combined with a partial Mueller polarimeter operating at video rate, can effectively substitute for a complete Mueller polarimeter and produce accurate maps of depolarization, linear retardance, and optical axis orientation of biological tissues, which can be used for medical diagnosis in clinical settings.
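
The per-pixel regression at the heart of this approach can be sketched as a small fully connected network mapping the 12 measured elements of the first three rows to the 4 missing fourth-row elements. The layer sizes and the random training arrays below are illustrative assumptions, not the authors' configuration.

```python
# Per-pixel sketch of the reconstruction task: regress the 4 missing
# fourth-row Mueller elements from the 12 measured elements of the first
# three rows. Architecture and data here are illustrative only.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(12,)),          # m11..m34, flattened per pixel
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(4),                    # m41, m42, m43, m44
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(10000, 12)                 # stand-in for measured 3x4 pixels
y = np.random.rand(10000, 4)                  # stand-in for the true fourth row
model.fit(x, y, epochs=2, batch_size=256, verbose=0)

fourth_row_pred = model.predict(x[:5])        # predicted fourth-row elements
```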

A Deep Learning Approach for Mandibular Condyle Segmentation on Ultrasonography.

Keser G, Yülek H, Öner Talmaç AG, Bayrakdar İŞ, Namdar Pekiner F, Çelik Ö

PubMed · May 6, 2025
Deep learning techniques have demonstrated potential in various fields, including segmentation, and have recently been applied to medical image processing. This study aims to develop and evaluate computer-based diagnostic software designed to segment the mandibular condyle in ultrasound images. A total of 668 retrospective ultrasound images of the mandibular condyle from anonymized adult patients were analyzed. The CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) was used to annotate the mandibular condyle with a polygonal labeling method. These annotations were subsequently reviewed and validated by experts in oral and maxillofacial radiology. All test images were detected and segmented using the YOLOv8 deep learning artificial intelligence (AI) model. On the test images, the model achieved an F1 score of 0.93, a sensitivity of 0.90, and a precision of 0.96. The automatic segmentation of the mandibular condyle from ultrasound images presents a promising application of artificial intelligence, one that can help surgeons, radiologists, and other specialists save time in the diagnostic process.
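
For orientation, the reported F1 of 0.93 is consistent with the stated precision and sensitivity, and the sketch below shows how a YOLOv8 segmentation model is typically run with the ultralytics package. The weights file and image path are placeholders; the authors' trained condyle model is not referenced in the abstract.

```python
# Sketch of running a YOLOv8 segmentation model with the ultralytics package,
# plus a check that the reported F1 follows from the stated precision and
# sensitivity. Weights file and image path are placeholders.
from ultralytics import YOLO

precision, sensitivity = 0.96, 0.90
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(f"F1 from reported precision/sensitivity: {f1:.2f}")   # ~0.93

model = YOLO("yolov8n-seg.pt")                     # generic pretrained weights
result = model.predict("condyle_ultrasound.png", conf=0.5)[0]
print("instances found:", len(result.boxes))
if result.masks is not None:
    print("mask tensor shape:", tuple(result.masks.data.shape))   # (n, H, W)
```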

New Targets for Imaging in Nuclear Medicine.

Brink A, Paez D, Estrada Lobato E, Delgado Bolton RC, Knoll P, Korde A, Calapaquí Terán AK, Haidar M, Giammarile F

PubMed · May 6, 2025
Nuclear medicine is rapidly evolving with new molecular imaging targets and advanced computational tools that promise to enhance diagnostic precision and personalized therapy. Recent years have seen a surge in novel PET and SPECT tracers, such as those targeting prostate-specific membrane antigen (PSMA) in prostate cancer, fibroblast activation protein (FAP) in tumor stroma, and tau protein in neurodegenerative disease. These tracers enable more specific visualization of disease processes compared to traditional agents, fitting into a broader shift toward precision imaging in oncology and neurology. In parallel, artificial intelligence (AI) and machine learning techniques are being integrated into tracer development and image analysis. AI-driven methods can accelerate radiopharmaceutical discovery, optimize pharmacokinetic properties, and assist in interpreting complex imaging datasets. This editorial provides an expanded overview of emerging imaging targets and techniques, including theranostic applications that pair diagnosis with radionuclide therapy, and examines how AI is augmenting nuclear medicine. We discuss the implications of these advancements within the field's historical trajectory and address the regulatory, manufacturing, and clinical challenges that must be navigated. Innovations in molecular targeting and AI are poised to transform nuclear medicine practice, enabling more personalized diagnostics and radiotheranostic strategies in the era of precision healthcare.

Keypoint localization and parameter measurement in ultrasound biomicroscopy anterior segment images based on deep learning.

Qinghao M, Sheng Z, Jun Y, Xiaochun W, Min Z

PubMed · May 6, 2025
Accurate measurement of anterior segment parameters is crucial for diagnosing and managing ophthalmic conditions, such as glaucoma, cataracts, and refractive errors. However, traditional clinical measurement methods are often time-consuming, labor-intensive, and susceptible to inaccuracies. With the growing potential of artificial intelligence in ophthalmic diagnostics, this study aims to develop and evaluate a deep learning model capable of automatically extracting key points and precisely measuring multiple clinically significant anterior segment parameters from ultrasound biomicroscopy (UBM) images. These parameters include central corneal thickness (CCT), anterior chamber depth (ACD), pupil diameter (PD), angle-to-angle distance (ATA), sulcus-to-sulcus distance (STS), lens thickness (LT), and crystalline lens rise (CLR). A data set of 716 UBM anterior segment images was collected from Tianjin Medical University Eye Hospital. YOLOv8 was utilized to segment four key anatomical structures (cornea-sclera, anterior chamber, pupil, and iris-ciliary body), thereby enhancing the accuracy of keypoint localization. Only images with an intact posterior capsule lentis were selected to create an effective data set for parameter measurement. Ten keypoints were localized across the data set, allowing the calculation of seven essential parameters. Control experiments were conducted to evaluate the impact of segmentation on measurement accuracy, with model predictions compared against clinical gold standards. The segmentation model achieved a mean IoU of 0.8836 and an mPA of 0.9795. Following segmentation, the binary classification model attained an mAP of 0.9719, with a precision of 0.9260 and a recall of 0.9615. Keypoint localization exhibited a Euclidean distance error of 58.73 ± 63.04 μm, improving from the pre-segmentation error of 71.57 ± 67.36 μm. Localization mAP was 0.9826, with a precision of 0.9699, a recall of 0.9642, and an FPS of 32.64. In addition, parameter error analysis and Bland-Altman plots demonstrated improved agreement with clinical gold standards after segmentation. This deep learning approach for UBM image segmentation, keypoint localization, and parameter measurement is feasible, enhancing clinical diagnostic efficiency for anterior segment parameters.
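
Once keypoints are localized in pixel coordinates, the anterior segment parameters reduce to scaled Euclidean distances between keypoint pairs. The sketch below illustrates that final step for CCT; the keypoint coordinates and the pixel spacing are made-up values, not taken from the study.

```python
# Simplified sketch of the final measurement step: parameters as scaled
# Euclidean distances between localized keypoints. All values are made up.
import math

def distance_um(p1, p2, um_per_pixel):
    """Euclidean distance between two (x, y) pixel keypoints, in micrometres."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1]) * um_per_pixel

um_per_pixel = 25.0                              # hypothetical UBM pixel spacing
cornea_anterior = (312, 140)                     # hypothetical keypoints
cornea_posterior = (313, 162)

cct = distance_um(cornea_anterior, cornea_posterior, um_per_pixel)
print(f"central corneal thickness approx. {cct:.0f} um")
```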