A review: Lightweight architecture model in deep learning approach for lung disease identification.

Maharani DA, Utaminingrum F, Husnina DNN, Sukmaningrum B, Rahmania FN, Handani F, Chasanah HN, Arrahman A, Febrianto F

PubMed · Jun 14 2025
Lung disease is among the leading causes of death worldwide, so early detection is an essential step toward effective treatment. Lung diseases can be classified from medical image data such as X-rays or CT scans. Deep learning methods are widely used to recognize the complex patterns in these images, but they typically require large, varied datasets and substantial computing resources. Lightweight deep learning architectures address these constraints by reducing parameter counts and computation time, making deployment feasible on low-power portable devices such as mobile phones. This article presents a comprehensive review of 23 studies published between 2020 and 2025, focusing on the lightweight architectures and optimization techniques used to improve the accuracy of lung disease detection. The results show that these models substantially reduce parameter counts, yielding faster computation while maintaining accuracy competitive with traditional deep learning architectures. Among the reviewed work, SqueezeNet applied to public COVID-19 datasets stands out as the best base architecture, combining high accuracy with a very low parameter count of 570 thousand. By contrast, UNet (31.07 million parameters) and SegNet (29.45 million parameters), trained on CT images from the Italian Society of Medical and Interventional Radiology and Radiopaedia, are far less efficient. Among the combined methods, EfficientNetV2 with an Extreme Learning Machine (ELM) achieves the highest accuracy, 98.20%, while significantly reducing parameters. The worst performance is shown by the VGG and UNet combination, whose accuracy falls from 91.05% to 87% while the parameter count increases. In conclusion, lightweight architectures enable fast, efficient medical image classification for lung disease diagnosis on devices with limited specifications.
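As a rough illustration of the parameter-count argument, the sketch below tallies trainable parameters for stock torchvision backbones. Note an assumption: the review's 570-thousand-parameter SqueezeNet is a tuned variant, while the stock SqueezeNet 1.1 used here has about 1.2 million parameters, still two orders of magnitude below VGG-16.

```python
# A minimal sketch comparing parameter counts of a lightweight backbone
# against larger architectures, using stock torchvision models as
# stand-ins for the networks discussed in the review.
import torch
from torchvision import models

def count_params(model: torch.nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

backbones = {
    "SqueezeNet 1.1": models.squeezenet1_1(weights=None),  # ~1.2 M
    "ResNet-50": models.resnet50(weights=None),            # ~25.6 M
    "VGG-16": models.vgg16(weights=None),                  # ~138 M
}

for name, net in backbones.items():
    print(f"{name}: {count_params(net) / 1e6:.2f} M parameters")
```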

Prediction of functional outcome after traumatic brain injury: a narrative review.

Iaquaniello C, Scordo E, Robba C

PubMed · Jun 13 2025
To synthesize current evidence on prognostic factors, tools, and strategies influencing functional outcomes in patients with traumatic brain injury (TBI), with a focus on the acute and postacute phases of care. Key early predictors such as Glasgow Coma Scale (GCS) scores, pupillary reactivity, and computed tomography (CT) imaging findings remain fundamental in guiding clinical decision-making. Prognostic models like IMPACT and CRASH enhance early risk stratification, while outcome measures such as the Glasgow Outcome Scale-Extended (GOS-E) provide structured long-term assessments. Despite their utility, heterogeneity in assessment approaches and treatment protocols continues to limit consistency in outcome predictions. Recent advancements highlight the value of fluid biomarkers like neurofilament light chain (NFL) and glial fibrillary acidic protein (GFAP), which offer promising avenues for improved accuracy. Additionally, artificial intelligence models are emerging as powerful tools to integrate complex datasets and refine individualized outcome forecasting. Neurological prognostication after TBI is evolving through the integration of clinical, radiological, molecular, and computational data. Although standardized models and scales remain foundational, emerging technologies and therapies - such as biomarkers, machine learning, and neurostimulants - represent a shift toward more personalized and actionable strategies to optimize recovery and long-term function.
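To make the prognostic-model idea concrete, here is a minimal sketch in the spirit of IMPACT-style models: a logistic regression over the classic early predictors (age, GCS motor score, pupillary reactivity, CT findings). The data, coefficients, and outcome threshold below are synthetic placeholders, not the published IMPACT or CRASH models.

```python
# A hedged sketch of IMPACT-style outcome prediction on synthetic data.
# The real IMPACT model uses published coefficients fitted on large TBI
# cohorts; this toy fit only illustrates the structure of such models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(18, 85, n),   # age in years
    rng.integers(1, 7, n),    # GCS motor score (1-6)
    rng.integers(0, 3, n),    # number of reactive pupils (0-2)
    rng.integers(1, 7, n),    # Marshall CT class (1-6)
])
# Toy label: unfavourable outcome (e.g. GOS-E <= 4), loosely tied to predictors.
logit = 0.04 * X[:, 0] - 0.6 * X[:, 1] - 0.8 * X[:, 2] + 0.3 * X[:, 3]
y = (logit + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
patient = np.array([[62, 3, 1, 4]])  # hypothetical admission profile
print(f"P(unfavourable outcome) = {model.predict_proba(patient)[0, 1]:.2f}")
```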

Exploring the limit of image resolution for human expert classification of vascular ultrasound images in giant cell arteritis and healthy subjects: the GCA-US-AI project.

Bauer CJ, Chrysidis S, Dejaco C, Koster MJ, Kohler MJ, Monti S, Schmidt WA, Mukhtyar CB, Karakostas P, Milchert M, Ponte C, Duftner C, de Miguel E, Hocevar A, Iagnocco A, Terslev L, Døhn UM, Nielsen BD, Juche A, Seitz L, Keller KK, Karalilova R, Daikeler T, Mackie SL, Torralba K, van der Geest KSM, Boumans D, Bosch P, Tomelleri A, Aschwanden M, Kermani TA, Diamantopoulos A, Fredberg U, Inanc N, Petzinna SM, Albarqouni S, Behning C, Schäfer VS

PubMed · Jun 12 2025
Prompt diagnosis of giant cell arteritis (GCA) with ultrasound is crucial for preventing severe ocular and other complications, yet expertise in ultrasound performance is scarce. The development of an artificial intelligence (AI)-based assistant that facilitates ultrasound image classification and helps to diagnose GCA early promises to close the existing gap. In preparation for the planned AI, this study investigates the minimum image resolution required for human experts to reliably classify ultrasound images of arteries commonly affected by GCA for the presence or absence of GCA. Thirty-one international experts in GCA ultrasonography participated in a web-based exercise. They were asked to classify 10 ultrasound images for each of 5 vascular segments as GCA, normal, or not able to classify. The following segments were assessed: (1) superficial common temporal artery, (2) its frontal branch, and (3) its parietal branch (all in transverse view), (4) axillary artery in transverse view, and (5) axillary artery in longitudinal view. Identical images were shown at different resolutions, namely 32 × 32, 64 × 64, 128 × 128, 224 × 224, and 512 × 512 pixels, for a total of 250 images classified by every study participant. Classification performance improved with increasing resolution up to a threshold, plateauing at 224 × 224 pixels. At 224 × 224 pixels, the overall classification sensitivity was 0.767 (95% CI, 0.737-0.796) and specificity was 0.862 (95% CI, 0.831-0.888). A resolution of 224 × 224 pixels ensures reliable human expert classification and aligns with the input requirements of many common AI-based architectures. The results of this study therefore provide substantial guidance for the planned AI development.
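The resolution ladder from the exercise is straightforward to reproduce; the sketch below resamples a single frame to each tested size. The file name and resampling filter are illustrative assumptions, not details specified by the study.

```python
# A minimal sketch of the resolution ladder used in the reading exercise:
# the same ultrasound frame resampled to each tested size.
from PIL import Image

resolutions = [32, 64, 128, 224, 512]
frame = Image.open("temporal_artery.png").convert("L")  # hypothetical image

for size in resolutions:
    resampled = frame.resize((size, size), Image.LANCZOS)
    resampled.save(f"temporal_artery_{size}px.png")
# 224 x 224 is where expert performance plateaued -- conveniently also the
# default input size of many ImageNet-pretrained CNN and ViT backbones.
```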

Generalist Models in Medical Image Segmentation: A Survey and Performance Comparison with Task-Specific Approaches

Andrea Moglia, Matteo Leccardi, Matteo Cavicchioli, Alice Maccarini, Marco Marcon, Luca Mainardi, Pietro Cerveri

arXiv preprint · Jun 12 2025
Following the successful paradigm shift of large language models, which leverage pre-training on a massive corpus of data and fine-tuning on different downstream tasks, generalist models have made their foray into computer vision. The introduction of the Segment Anything Model (SAM) set a milestone in the segmentation of natural images, inspiring the design of a multitude of architectures for medical image segmentation. In this survey we offer a comprehensive and in-depth investigation of generalist models for medical image segmentation. We start with an introduction to the fundamental concepts underpinning their development. We then provide a taxonomy of the different variants of SAM, covering zero-shot and few-shot usage, fine-tuning, and adapters, as well as the recent SAM 2, other innovative models trained on images alone, and models trained on both text and images. We thoroughly analyze their performance, both in primary studies and against the best results in the literature, followed by a rigorous comparison with state-of-the-art task-specific models. We emphasize the need to address challenges in terms of compliance with regulatory frameworks, privacy and security laws, budget, and trustworthy artificial intelligence (AI). Finally, we share our perspective on future directions concerning synthetic data, early fusion, lessons learnt from generalist models in natural language processing, agentic AI and physical AI, and clinical translation.
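For readers unfamiliar with the prompting workflow that the medical adaptations in this survey build on, here is a hedged zero-shot sketch using the original segment-anything API (github.com/facebookresearch/segment-anything). The checkpoint path and prompt point are illustrative; the adapted models discussed in the survey typically fine-tune or extend this pipeline.

```python
# A hedged zero-shot sketch with the original Segment Anything API.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Stand-in for a CT/MR slice converted to RGB.
image = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# Prompt with a single foreground point, e.g. a click inside a lesion.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # (3, 512, 512) candidate masks with confidences
```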

Machine learning is changing osteoporosis detection: an integrative review.

Zhang Y, Ma M, Huang X, Liu J, Tian C, Duan Z, Fu H, Huang L, Geng B

PubMed · Jun 10 2025
Machine learning enables osteoporosis detection and screening with higher clinical accuracy and accessibility than traditional screening tools. This review takes a step-by-step look at machine learning for osteoporosis detection, providing insights into current practice and the outlook for the future. Early diagnosis and risk detection of osteoporosis have always been crucial and challenging problems in medicine. With the deepening application of artificial intelligence, and machine learning in particular, significant breakthroughs have been made in the early diagnosis and risk detection of osteoporosis. Machine learning is a broad technical field encompassing a wide variety of algorithm types. Classical machine learning algorithms have matured over many years of medical data processing and deliver stable, accurate detection performance, laying a solid foundation for the detection and diagnosis of osteoporosis. Deep learning algorithms, an essential part of the machine learning toolkit, are complex models based on artificial neural networks. Owing to their strong image recognition and feature extraction capabilities, they have matured considerably in recent years for the early diagnosis and risk assessment of osteoporosis, opening new ideas and approaches for early, accurate diagnosis and risk detection. This paper reviews the research of the past decade, ranging from relatively basic, widely adopted machine learning algorithms combined with clinical data to more advanced deep learning techniques integrated with imaging data such as X-ray, CT, and MRI. By analyzing how algorithms are applied at different stages, we found that the basic machine learning algorithms perform well on single-source structured data but run into limitations with high-dimensional, unstructured imaging data. Deep learning, on the other hand, can significantly improve detection accuracy by automatically extracting image features, especially in image-based feature analysis. However, it faces challenges: the "black-box" problem, heavy reliance on large amounts of labeled data, and difficulties in clinical interpretability. These issues highlight the importance of model interpretability in future machine learning research. Finally, we anticipate future predictive models that combine multimodal data (such as clinical indicators, blood biochemical indicators, imaging data, and genetic data) with electronic health records and machine learning techniques, supporting a skeletal health monitoring system that is highly accessible, personalized, convenient, and efficient, and furthering the early detection and prevention of osteoporosis.
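As a concrete stand-in for the "classical machine learning on structured clinical data" stage described above, the sketch below fits a gradient-boosted classifier on synthetic clinical features. The features, labels, and threshold are placeholders for illustration, not taken from any reviewed study.

```python
# A minimal sketch of classical ML on tabular clinical data for
# osteoporosis risk, evaluated with cross-validated AUC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 800
X = np.column_stack([
    rng.uniform(40, 90, n),     # age in years
    rng.uniform(18, 35, n),     # body mass index
    rng.integers(0, 2, n),      # prior fragility fracture (0/1)
    rng.uniform(0.5, 1.4, n),   # femoral neck BMD, g/cm^2
])
# Toy label: low BMD (plus noise) as a crude osteoporosis proxy.
y = (X[:, 3] + rng.normal(0, 0.1, n) < 0.8).astype(int)

clf = GradientBoostingClassifier()
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.3f}")
```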

Foundation Models in Medical Imaging -- A Review and Outlook

Vivien van Veldhuizen, Vanessa Botha, Chunyao Lu, Melis Erdal Cesur, Kevin Groot Lipman, Edwin D. de Jong, Hugo Horlings, Clárisa I. Sanchez, Cees G. M. Snoek, Lodewyk Wessels, Ritse Mann, Eric Marcus, Jonas Teuwen

arXiv preprint · Jun 10 2025
Foundation models (FMs) are changing the way medical images are analyzed by learning from large collections of unlabeled data. Instead of relying on manually annotated examples, FMs are pre-trained to learn general-purpose visual features that can later be adapted to specific clinical tasks with little additional supervision. In this review, we examine how FMs are being developed and applied in pathology, radiology, and ophthalmology, drawing on evidence from over 150 studies. We explain the core components of FM pipelines, including model architectures, self-supervised learning methods, and strategies for downstream adaptation. We also review how FMs are being used in each imaging domain and compare design choices across applications. Finally, we discuss key challenges and open questions to guide future research.
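One downstream-adaptation strategy the review covers, linear probing, is simple enough to sketch: train a small classifier on features from a frozen pretrained encoder. The ResNet backbone below is an assumed stand-in for a domain-specific foundation model encoder, and the task labels are illustrative.

```python
# A hedged sketch of linear probing: a small head trained on features
# from a frozen pretrained encoder.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = nn.Identity()           # expose the 2048-d feature vector
for p in encoder.parameters():
    p.requires_grad = False          # freeze the pretrained backbone
encoder.eval()

probe = nn.Linear(2048, 2)           # e.g. benign vs malignant (assumed task)

images = torch.randn(8, 3, 224, 224)             # stand-in batch
labels = torch.randint(0, 2, (8,))
with torch.no_grad():
    feats = encoder(images)                      # features only, no grads
loss = nn.functional.cross_entropy(probe(feats), labels)
loss.backward()                                  # gradients reach the probe alone
print(f"probe loss: {loss.item():.3f}")
```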

Large Language Models in Medical Diagnostics: Scoping Review With Bibliometric Analysis.

Su H, Sun Y, Li R, Zhang A, Yang Y, Xiao F, Duan Z, Chen J, Hu Q, Yang T, Xu B, Zhang Q, Zhao J, Li Y, Li H

PubMed · Jun 9 2025
The integration of large language models (LLMs) into medical diagnostics has garnered substantial attention due to their potential to enhance diagnostic accuracy, streamline clinical workflows, and address health care disparities. However, the rapid evolution of LLM research necessitates a comprehensive synthesis of their applications, challenges, and future directions. This scoping review aimed to provide an overview of the current state of research regarding the use of LLMs in medical diagnostics. The study sought to answer four primary subquestions, as follows: (1) Which LLMs are commonly used? (2) How are LLMs assessed in diagnosis? (3) What is the current performance of LLMs in diagnosing diseases? (4) Which medical domains are investigating the application of LLMs? This scoping review was conducted according to the Joanna Briggs Institute Manual for Evidence Synthesis and adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews). Relevant literature was searched from the Web of Science, PubMed, Embase, IEEE Xplore, and ACM Digital Library databases from 2022 to 2025. Articles were screened and selected based on predefined inclusion and exclusion criteria. Bibliometric analysis was performed using VOSviewer to identify major research clusters and trends. Data extraction included details on LLM types, application domains, and performance metrics. The field is rapidly expanding, with a surge in publications after 2023. GPT-4 and its variants dominated research (70/95, 74% of studies), followed by GPT-3.5 (34/95, 36%). Key applications included disease classification (text or image-based), medical question answering, and diagnostic content generation. LLMs demonstrated high accuracy in specialties like radiology, psychiatry, and neurology but exhibited biases in race, gender, and cost predictions. Ethical concerns, including privacy risks and model hallucination, alongside regulatory fragmentation, were critical barriers to clinical adoption. LLMs hold transformative potential for medical diagnostics but require rigorous validation, bias mitigation, and multimodal integration to address real-world complexities. Future research should prioritize explainable artificial intelligence frameworks, specialty-specific optimization, and international regulatory harmonization to ensure equitable and safe clinical deployment.
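For orientation, the "medical question answering" usage pattern catalogued here boils down to a single structured call. The sketch below uses the OpenAI Python SDK (v1 interface); the model name and vignette are illustrative assumptions, and real evaluations in the reviewed studies use structured vignettes with blinded scoring rather than one free-text call.

```python
# A hedged sketch of querying an LLM for a differential diagnosis.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
vignette = (
    "A 58-year-old presents with acute chest pain radiating to the left arm, "
    "diaphoresis, and ST-segment elevation in leads II, III, and aVF."
)
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice, not the reviewed studies' protocol
    messages=[
        {"role": "system", "content": "List the top three differential diagnoses."},
        {"role": "user", "content": vignette},
    ],
)
print(response.choices[0].message.content)
```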

A review of multimodal fusion-based deep learning for Alzheimer's disease.

Zhang R, Sheng J, Zhang Q, Wang J, Wang B

PubMed · Jun 7 2025
Alzheimer's disease (AD) is one of the most prevalent neurodegenerative disorders worldwide, characterized by significant memory and cognitive decline in its later stages that severely impacts daily life. Consequently, early diagnosis and accurate assessment are crucial for delaying disease progression. In recent years, multimodal imaging has gained widespread adoption in AD diagnosis and research, particularly the combined use of Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). The complementary structural and metabolic information these modalities provide offers a unique advantage for comprehensive disease understanding and precise diagnosis. With the rapid advancement of deep learning techniques, efficient fusion of multimodal MRI and PET data has emerged as a prominent research focus. This review systematically surveys the latest advances in deep learning-based multimodal fusion of MRI and PET images for AD research, with a particular focus on studies published in the past five years (2021-2025). It first introduces the main sources of AD-related data, along with data preprocessing and feature extraction methods. It then summarizes performance metrics and multimodal fusion techniques, and explores the application of various deep learning models and their variants to multimodal fusion tasks. Finally, it analyzes the key challenges currently facing the field, including data scarcity and imbalance and inter-institutional data heterogeneity, and discusses potential solutions and future research directions. This review aims to provide systematic guidance for researchers working on MRI and PET multimodal fusion, with the ultimate goal of advancing early AD diagnosis and intervention strategies.
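Intermediate (feature-level) fusion, one of the common MRI-PET strategies surveyed, can be sketched as two encoder branches joined before a shared head. The architecture below is illustrative, with layer sizes chosen arbitrarily rather than taken from any reviewed model.

```python
# A minimal sketch of feature-level MRI+PET fusion: two CNN branches whose
# features are concatenated before a shared classification head.
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. CN / MCI / AD
        super().__init__()
        def branch():  # small 2D CNN encoder per modality
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.mri_branch, self.pet_branch = branch(), branch()
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, mri, pet):
        fused = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        return self.head(fused)

model = TwoBranchFusion()
mri = torch.randn(4, 1, 96, 96)   # stand-in MRI slices
pet = torch.randn(4, 1, 96, 96)   # stand-in co-registered PET slices
print(model(mri, pet).shape)      # torch.Size([4, 3])
```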

Advances in disease detection through retinal imaging: A systematic review.

Bilal H, Keles A, Bendechache M

PubMed · Jun 6 2025
Ocular and non-ocular diseases significantly impact millions of people worldwide, leading to vision impairment or blindness if not detected and managed early. Many cases of blindness could be prevented by treating these diseases early and halting their progression. Despite advances in medical imaging and diagnostic tools, manual detection of these diseases remains labor-intensive, time-consuming, and dependent on expert experience. Machine learning (ML) has transformed computer-aided diagnosis (CAD), providing promising methods for the automated detection and grading of diseases across retinal imaging modalities. In this paper, we present a comprehensive systematic literature review of ML techniques for detecting diseases from retinal images, covering both single- and multi-modal imaging approaches. We analyze the efficiency of various deep learning and classical ML models, highlighting their achievements in accuracy, sensitivity, and specificity. Even with these advancements, the review identifies several critical challenges, and we propose future research directions to address them. Overcoming these challenges will allow ML to fully realize its potential to enhance diagnostic accuracy and patient outcomes, paving the way for more reliable and effective management of ocular and non-ocular diseases.
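Since the review reports results in terms of accuracy, sensitivity, and specificity, a short sketch of how those metrics fall out of a confusion matrix may help. The labels below are a made-up example (1 = disease present).

```python
# Accuracy, sensitivity, and specificity from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy:    {(tp + tn) / (tp + tn + fp + fn):.2f}")
print(f"sensitivity: {tp / (tp + fn):.2f}")   # true positive rate
print(f"specificity: {tn / (tn + fp):.2f}")   # true negative rate
```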