Page 6 of 32311 results

Beyond the norm: Exploring the diverse facets of adrenal lesions.

Afif S, Mahmood Z, Zaheer A, Azadi JR

PubMed | Aug 26, 2025
Radiological diagnosis of adrenal lesions can be challenging because benign and malignant imaging features overlap. The central task in managing adrenal lesions is to identify and characterize them accurately, minimizing unnecessary diagnostic examinations and interventions while avoiding the substantial risks of underdiagnosis and misdiagnosis. This review article provides a comprehensive overview of typical, atypical, and overlapping imaging features of both common and rare adrenal lesions, and explores emerging applications of artificial-intelligence-powered analysis of CT and MRI. Such analysis could play a pivotal role in distinguishing benign from malignant and functioning from non-functioning adrenal lesions with high diagnostic accuracy, thereby enhancing diagnostic confidence and potentially reducing unnecessary interventions.

Physical foundations for trustworthy medical imaging: A survey for artificial intelligence researchers.

Cobo M, Corral Fontecha D, Silva W, Lloret Iglesias L

PubMed | Aug 26, 2025
Artificial intelligence in medical imaging has grown rapidly in the past decade, driven by advances in deep learning and widespread access to computing resources. Applications cover diverse imaging modalities, including those based on electromagnetic radiation (e.g., X-rays), subatomic particles (e.g., nuclear imaging), and acoustic waves (ultrasound). Each modality's features and limitations are defined by its underlying physics. However, many artificial intelligence practitioners lack a solid understanding of the physical principles involved in medical image acquisition. This gap hinders leveraging the full potential of deep learning, as incorporating physics knowledge into artificial intelligence systems promotes trustworthiness, especially in limited-data scenarios. This work reviews the fundamental physical concepts behind medical imaging and examines their influence on recent developments in artificial intelligence, particularly generative models and reconstruction algorithms. Finally, we describe physics-informed machine learning approaches to improve feature learning in medical imaging.
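As a concrete illustration of the physics-informed approach this survey describes, the sketch below augments a reconstruction objective with a data-consistency term that re-applies a known forward model to the prediction, penalizing outputs that violate the acquisition physics. The single-angle projection operator and total-variation prior are hypothetical stand-ins, not any specific system's model.

```python
import numpy as np

def physics_informed_loss(pred_image, measured_sinogram, forward_project, lam=0.1):
    """Toy physics-informed objective: a physics data-consistency term plus a
    simple smoothness prior standing in for a learned regularizer."""
    # Data consistency: the predicted image, pushed through the imaging
    # physics, should reproduce the actual measurements.
    consistency = np.mean((forward_project(pred_image) - measured_sinogram) ** 2)
    # Total-variation-style smoothness prior.
    tv = (np.mean(np.abs(np.diff(pred_image, axis=0)))
          + np.mean(np.abs(np.diff(pred_image, axis=1))))
    return consistency + lam * tv

# Hypothetical forward model: column sums ~ a single-angle parallel projection.
forward = lambda img: img.sum(axis=0)
img = np.ones((4, 4))
sino = forward(img)
loss = physics_informed_loss(img, sino, forward)  # 0.0: prediction matches physics
```

In a real reconstruction network the consistency term would use the scanner's actual system matrix, but the structure of the objective is the same.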

Illuminating radiogenomic signatures in pediatric-type diffuse gliomas: insights into molecular, clinical, and imaging correlations. Part I: high-grade group.

Kurokawa R, Hagiwara A, Ueda D, Ito R, Saida T, Honda M, Nishioka K, Sakata A, Yanagawa M, Takumi K, Oda S, Ide S, Sofue K, Sugawara S, Watabe T, Hirata K, Kawamura M, Iima M, Naganawa S

PubMed | Aug 25, 2025
Recent advances in molecular genetics have revolutionized the classification of pediatric-type high-grade gliomas in the 2021 World Health Organization central nervous system tumor classification. This narrative review synthesizes current evidence on the following four tumor types: diffuse midline glioma, H3 K27-altered; diffuse hemispheric glioma, H3 G34-mutant; diffuse pediatric-type high-grade glioma, H3-wildtype and IDH-wildtype; and infant-type hemispheric glioma. We conducted a comprehensive literature search for articles published through January 2025. For each tumor type, we analyze characteristic clinical presentations, molecular alterations, conventional and advanced magnetic resonance imaging features, radiological-molecular correlations, and current therapeutic approaches. Emerging radiogenomic approaches utilizing artificial intelligence, including radiomics and deep learning, show promise in identifying imaging biomarkers that correlate with molecular features. This review highlights the importance of integrating radiological and molecular data for accurate diagnosis and treatment planning, while acknowledging limitations in current methodologies and the need for prospective validation in larger cohorts. Understanding these correlations is crucial for advancing personalized treatment strategies for these challenging tumors.

Why Relational Graphs Will Save the Next Generation of Vision Foundation Models?

Fatemeh Ziaeetabar

arXiv preprint | Aug 25, 2025
Vision foundation models (FMs) have become the predominant architecture in computer vision, providing highly transferable representations learned from large-scale, multimodal corpora. Nonetheless, they exhibit persistent limitations on tasks that require explicit reasoning over entities, roles, and spatio-temporal relations. Such relational competence is indispensable for fine-grained human activity recognition, egocentric video understanding, and multimodal medical image analysis, where spatial, temporal, and semantic dependencies are decisive for performance. We advance the position that next-generation FMs should incorporate explicit relational interfaces, instantiated as dynamic relational graphs (graphs whose topology and edge semantics are inferred from the input and task context). We illustrate this position with cross-domain evidence from recent systems in human manipulation action recognition and brain tumor segmentation, showing that augmenting FMs with lightweight, context-adaptive graph-reasoning modules improves fine-grained semantic fidelity, out-of-distribution robustness, interpretability, and computational efficiency relative to FM-only baselines. Importantly, by reasoning sparsely over semantic nodes, such hybrids also achieve favorable memory and hardware efficiency, enabling deployment under practical resource constraints. We conclude with a targeted research agenda for FM-graph hybrids, prioritizing learned dynamic graph construction, multi-level relational reasoning (e.g., part-object-scene in activity understanding, or region-organ in medical imaging), cross-modal fusion, and evaluation protocols that directly probe relational competence in structured vision tasks.
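The dynamic relational graph idea (topology inferred from the input rather than fixed in advance) can be sketched minimally: build a k-nearest-neighbour graph over node embeddings, then run one mean-aggregation message-passing step. This is an illustrative toy, not the authors' module.

```python
import numpy as np

def dynamic_graph_message_pass(nodes, k=2):
    """Context-adaptive relational sketch: infer a kNN graph from node
    similarity (so the topology depends on the input), then perform one
    mean-aggregation message-passing step over it."""
    n = nodes.shape[0]
    # Pairwise squared distances between node embeddings.
    d2 = ((nodes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # no self-edges
    neigh = np.argsort(d2, axis=1)[:, :k]  # k nearest neighbours per node
    # Message passing: each node averages itself with its neighbours.
    out = np.stack([(nodes[i] + nodes[neigh[i]].sum(0)) / (k + 1)
                    for i in range(n)])
    return neigh, out
```

In an FM-graph hybrid, the "nodes" would be semantic tokens (entities, regions, organs) pooled from the foundation model's features, and the aggregation would be a learned attention or GNN layer rather than a plain mean.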

Non-invasive intracranial pressure assessment in adult critically ill patients: A narrative review on current approaches and future perspectives.

Deana C, Biasucci DG, Aspide R, Bagatto D, Brasil S, Brunetti D, Saitta T, Vapireva M, Zanza C, Longhitano Y, Bignami EG, Vetrugno L

PubMed | Aug 23, 2025
Intracranial hypertension (IH) is a life-threatening complication that may occur after acute brain injury, and early recognition of IH allows prompt interventions that improve outcomes. Although invasive intracranial monitoring is considered the gold standard for the most severely injured patients, scarce availability of resources, the need for advanced skills, and the potential for complications often limit its use. Accordingly, different non-invasive methods for evaluating acutely brain-injured patients for elevated intracranial pressure have been investigated. Clinical examination and neuroradiology represent the cornerstone of patient evaluation in the intensive care unit (ICU). However, multimodal neuromonitoring, employing different widely used tools such as brain ultrasound, automated pupillometry, and skull micro-deformation recordings, increases the possibility of continuous or semi-continuous intracranial pressure monitoring. Furthermore, artificial intelligence (AI) has been investigated as a tool to predict elevated intracranial pressure, shedding light on new diagnostic and treatment horizons with the potential to improve patient outcomes. This narrative review, based on a systematic literature search, summarizes the best available evidence on the use of non-invasive monitoring tools and methods for the assessment of intracranial pressure.

Advancements in deep learning for image-guided tumor ablation therapies: a comprehensive review.

Zhao Z, Hu Y, Xu LX, Sun J

PubMed | Aug 22, 2025
Image-guided tumor ablation (IGTA) has revolutionized modern oncological treatment by providing minimally invasive options that ensure precise tumor eradication with minimal patient discomfort. Traditional techniques such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) have been instrumental in the planning, execution, and evaluation of ablation therapies. However, these methods often face limitations, including poor contrast, susceptibility to artifacts, and variability in operator expertise, which can undermine the accuracy of tumor targeting and therapeutic outcomes. Incorporating deep learning (DL) into IGTA represents a significant advancement that addresses these challenges. This review explores the role and potential of DL in the preoperative, intraoperative, and postoperative phases of tumor ablation therapy. In the preoperative stage, DL excels in advanced image segmentation, enhancement, and synthesis, facilitating precise surgical planning and optimized treatment strategies. During the intraoperative phase, DL supports image registration, fusion, and real-time surgical planning, enhancing navigation accuracy and ensuring precise ablation while safeguarding surrounding healthy tissue. In the postoperative phase, DL is pivotal in automating the monitoring of treatment response and in the early detection of recurrence through detailed analysis of follow-up imaging. This review highlights the essential role of deep learning in modernizing IGTA, showcasing its significant implications for procedural safety, efficacy, and patient outcomes in oncology. As deep learning technologies continue to evolve, they are poised to redefine the standards of care in tumor ablation therapies, making treatments more accurate, personalized, and patient-friendly.
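The preoperative segmentation step mentioned here is conventionally scored with the Dice coefficient; a minimal implementation of that standard metric (not specific to this review) looks like:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice overlap between binary masks: 1.0 = perfect agreement,
    0.0 = no overlap. eps guards against empty masks."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

For example, a predicted mask covering two voxels against a ground truth covering one shared voxel scores 2/3.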

Multimodal Integration in Health Care: Development With Applications in Disease Management.

Hao Y, Cheng C, Li J, Li H, Di X, Zeng X, Jin S, Han X, Liu C, Wang Q, Luo B, Zeng X, Li K

PubMed | Aug 21, 2025
Multimodal data integration has emerged as a transformative approach in the health care sector, systematically combining complementary biological and clinical data sources such as genomics, medical imaging, electronic health records, and wearable device outputs. This approach provides a multidimensional perspective of patient health that enhances the diagnosis, treatment, and management of various medical conditions. This viewpoint presents an overview of the current state of multimodal integration in health care, spanning clinical applications, current challenges, and future directions. We focus primarily on its applications across different disease domains, particularly in oncology and ophthalmology; other diseases are discussed only briefly owing to the limited available literature. In oncology, the integration of multimodal data enables more precise tumor characterization and personalized treatment plans. Multimodal fusion demonstrates accurate prediction of anti-human epidermal growth factor receptor 2 therapy response (area under the curve = 0.91). In ophthalmology, multimodal integration through the combination of genetic and imaging data facilitates the early diagnosis of retinal diseases. However, substantial challenges remain regarding data standardization, model deployment, and model interpretability. We also highlight the future directions of multimodal integration, including its expanded disease applications, such as neurological and otolaryngological diseases, and the trend toward large-scale multimodal models, which enhance accuracy. Overall, the innovative potential of multimodal integration is expected to further revolutionize the health care industry, providing more comprehensive and personalized solutions for disease management.
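A minimal sketch of the late-fusion pattern behind results like the AUC = 0.91 figure: combine per-modality predicted probabilities with a weighted average (the weight here is a hypothetical hyperparameter one would tune on validation data) and score the fused prediction with a rank-based AUC.

```python
import numpy as np

def late_fusion(p_imaging, p_genomics, w=0.5):
    """Late fusion: weighted average of per-modality predicted
    probabilities. w is a hypothetical mixing weight."""
    return w * np.asarray(p_imaging, float) + (1 - w) * np.asarray(p_genomics, float)

def auc(scores, labels):
    """Rank-based AUC: probability that a random positive case is scored
    above a random negative one (0.5 = chance, 1.0 = perfect)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Intermediate and early fusion (combining features or raw inputs before prediction) follow the same evaluation logic but merge the modalities earlier in the pipeline.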

Artificial intelligence in precision medicine: transforming disease subtyping, medical imaging, and pharmacogenomics.

Rodriguez-Martinez A, Kothalawala D, Carrillo-Larco RM, Poulakakis-Daktylidis A

PubMed | Aug 20, 2025
Precision medicine marks a transformative shift towards a patient-centric treatment approach, aiming to match 'the right patients with the right drugs at the right time'. The exponential growth of data from diverse omics modalities, electronic health records, and medical imaging has created unprecedented opportunities for precision medicine. This explosion of data requires advanced processing and analytical tools. At the forefront of this revolution is artificial intelligence (AI), which excels at uncovering hidden patterns within these high-dimensional and complex datasets. AI facilitates the integration and analysis of diverse data types, unlocking unparalleled potential to characterise complex diseases, improve prognosis, and predict treatment response. Despite the enormous potential of AI, challenges related to interpretability, reliability, generalisability, and ethical considerations emerge when translating these tools from research settings into clinical practice.

Review of GPU-based Monte Carlo simulation platforms for transmission and emission tomography in medicine.

Chi Y, Schubert KE, Badal A, Roncali E

PubMed | Aug 20, 2025
Monte Carlo (MC) simulation remains the gold standard for modeling complex physical interactions in transmission and emission tomography, with GPU parallel computing offering unmatched computational performance and enabling practical, large-scale MC applications. In recent years, rapid advancements in both GPU technologies and tomography techniques have been observed. Harnessing emerging GPU capabilities to accelerate MC simulation and strengthen its role in supporting the rapid growth of medical tomography has become an important topic. To provide useful insights, we conducted a comprehensive review of state-of-the-art GPU-accelerated MC simulations in tomography, highlighting current achievements and underdeveloped areas.

Approach: We reviewed key technical developments across major tomography modalities, including computed tomography (CT), cone-beam CT (CBCT), positron emission tomography, single-photon emission computed tomography, proton CT, emerging techniques, and hybrid modalities. We examined MC simulation methods and major CPU-based MC platforms that have historically supported medical imaging development, followed by a review of GPU acceleration strategies, hardware evolutions, and leading GPU-based MC simulation packages. Future development directions were also discussed.

Main Results: Significant advancements have been achieved in both tomography and MC simulation technologies over the past half-century. The introduction of GPUs has enabled speedups often exceeding 100-1000 times over CPU implementations, providing essential support for the development of new imaging systems. Emerging GPU features such as ray-tracing cores and tensor cores, together with GPU-execution-friendly transport methods, offer further opportunities for performance enhancement.

Significance: GPU-based MC simulation is expected to remain essential in advancing medical emission and transmission tomography. With the emergence of new concepts such as training Machine Learning with synthetic data, Digital Twins for Healthcare, and Virtual Clinical Trials, improving hardware portability and modularizing GPU-based MC codes to adapt to these evolving simulation needs represent important future research directions. This review aims to provide useful insights for researchers, developers, and practitioners in relevant fields.
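The per-photon independence that makes MC simulation map so well onto GPU threads can be illustrated with a toy, fully vectorized transmission estimate through a uniform slab. NumPy vectorization stands in for GPU parallelism here; the analytical answer for comparison is exp(-mu * thickness).

```python
import numpy as np

def mc_transmission(mu, thickness, n_photons=100_000, seed=0):
    """Toy Monte Carlo transmission estimate through a uniform slab.

    All photon histories are sampled at once: the same embarrassingly
    parallel pattern that maps one photon history per GPU thread in real
    MC engines. mu is the linear attenuation coefficient (1/cm),
    thickness the slab depth (cm)."""
    rng = np.random.default_rng(seed)
    # Free path to first interaction ~ Exponential with mean 1/mu.
    paths = rng.exponential(1.0 / mu, size=n_photons)
    # A photon is transmitted if its first interaction lies beyond the slab.
    return (paths > thickness).mean()
```

With mu = 0.5 and thickness = 2.0 the estimate converges on exp(-1) ≈ 0.368; full transport codes add scattering, energy loss, and geometry, but each history remains independent, which is exactly what the GPU exploits.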

Physician-in-the-Loop Active Learning in Radiology Artificial Intelligence Workflows: Opportunities, Challenges, and Future Directions.

Luo M, Yousefirizi F, Rouzrokh P, Jin W, Alberts I, Gowdy C, Bouchareb Y, Hamarneh G, Klyuzhin I, Rahmim A

PubMed | Aug 20, 2025
Artificial intelligence (AI) is being explored for a growing range of applications in radiology, including image reconstruction, image segmentation, synthetic image generation, disease classification, worklist triage, and examination scheduling. However, training accurate AI models typically requires substantial amounts of expert-labeled data, which can be time-consuming and expensive to obtain. Active learning offers a potential strategy for mitigating the impacts of such labeling requirements. In contrast with other machine-learning approaches used for data-limited situations, active learning aims to produce labeled datasets by identifying the most informative or uncertain data for human annotation, thereby reducing labeling burden to improve model performance under constrained datasets. This Review explores the application of active learning to radiology AI, focusing on the role of active learning in reducing the resources needed to train radiology AI models while enhancing physician-AI interaction and collaboration. We discuss how active learning can be incorporated into radiology workflows to promote physician-in-the-loop AI systems, presenting key active learning concepts and use cases for radiology-based tasks, including through literature-based examples. Finally, we provide summary recommendations for the integration of active learning in radiology workflows while highlighting relevant opportunities, challenges, and future directions.
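The acquisition step at the heart of active learning, selecting the cases the model is least sure about for expert labeling, reduces to a few lines. Least-confidence sampling below is one common variant, not necessarily the one any particular system uses.

```python
import numpy as np

def uncertainty_sample(probs, budget):
    """Least-confidence acquisition: return the indices of the `budget`
    unlabeled cases whose top predicted class probability is lowest,
    i.e. where the model is most uncertain and a physician's label is
    expected to be most informative."""
    probs = np.asarray(probs, float)
    confidence = probs.max(axis=1)          # top-class probability per case
    return np.argsort(confidence)[:budget]  # least confident first
```

In a physician-in-the-loop workflow, these indices determine which studies are routed to the radiologist for annotation; the model is retrained on the grown labeled set and the cycle repeats.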
