
Adapting foundation models for rapid clinical response: intracerebral hemorrhage segmentation in emergency settings.

Gerbasi A, Mazzacane F, Ferrari F, Del Bello B, Cavallini A, Bellazzi R, Quaglini S

PubMed · Aug 3 2025
Intracerebral hemorrhage (ICH) is a medical emergency that demands rapid and accurate diagnosis for optimal patient management. Segmentation of hemorrhagic lesions on CT scans is a necessary first step for acquiring quantitative imaging data, which are becoming increasingly useful in the clinical setting. However, traditional manual segmentation is time-consuming and prone to inter-rater variability, creating a need for automated solutions. This study introduces a novel approach combining advanced deep learning models to segment extensive and morphologically variable ICH lesions in non-contrast CT scans. We propose a two-step methodology that begins with a user-defined loose bounding box around the lesion, followed by a fine-tuned YOLOv8-S object detection model to generate precise, slice-specific bounding boxes. These bounding boxes are then used to prompt the Medical Segment Anything Model (MedSAM) for accurate lesion segmentation. Our pipeline achieves high segmentation accuracy with minimal supervision, demonstrating strong potential as a practical alternative to task-specific models. We evaluated the model on a dataset of 252 CT scans, demonstrating high performance in segmentation accuracy and robustness. Finally, the resulting segmentation tool is integrated into a user-friendly web application prototype, offering clinicians a simple interface for lesion identification and radiomic quantification.
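A minimal sketch of how such a detect-then-prompt pipeline could be wired together, assuming the ultralytics YOLO API and the segment-anything predictor interface used by MedSAM-style models; the checkpoint names (yolov8s_ich.pt, medsam_vit_b.pth) are hypothetical placeholders:

```python
# Sketch: YOLOv8-S proposes a slice-level box, which then prompts a
# SAM-style predictor. Checkpoint paths below are hypothetical.
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

det_model = YOLO("yolov8s_ich.pt")  # fine-tuned ICH detector (placeholder weights)
sam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")  # SAM-style checkpoint
predictor = SamPredictor(sam)

def segment_slice(ct_slice_rgb: np.ndarray) -> np.ndarray:
    """Return a binary lesion mask for one windowed CT slice (H, W, 3 uint8)."""
    det = det_model(ct_slice_rgb, verbose=False)[0]
    if len(det.boxes) == 0:  # no lesion detected on this slice
        return np.zeros(ct_slice_rgb.shape[:2], dtype=bool)
    box = det.boxes.xyxy[det.boxes.conf.argmax()].cpu().numpy()  # best box
    predictor.set_image(ct_slice_rgb)
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    return masks[0].astype(bool)
```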

Multimodal Attention-Aware Fusion for Diagnosing Distal Myopathy: Evaluating Model Interpretability and Clinician Trust

Mohsen Abbaspour Onari, Lucie Charlotte Magister, Yaoxin Wu, Amalia Lupi, Dario Creazzo, Mattia Tordin, Luigi Di Donatantonio, Emilio Quaia, Chao Zhang, Isel Grau, Marco S. Nobile, Yingqian Zhang, Pietro Liò

arXiv preprint · Aug 2 2025
Distal myopathy represents a genetically heterogeneous group of skeletal muscle disorders with broad clinical manifestations, posing diagnostic challenges in radiology. To address this, we propose a novel multimodal attention-aware fusion architecture that combines features extracted from two distinct deep learning models, one capturing global contextual information and the other focusing on local details, representing complementary aspects of the input data. Uniquely, our approach integrates these features through an attention gate mechanism, enhancing both predictive performance and interpretability. Our method achieves high classification accuracy on the BUSI benchmark and a proprietary distal myopathy dataset, while also generating clinically relevant saliency maps that support transparent decision-making in medical diagnosis. We rigorously evaluated interpretability through (1) functionally grounded metrics (coherence scoring against reference masks and incremental deletion analysis) and (2) application-grounded validation with seven expert radiologists. While our fusion strategy boosts predictive performance relative to single-stream and alternative fusion strategies, both quantitative and qualitative evaluations reveal persistent gaps in the anatomical specificity and clinical usefulness of its explanations. These findings highlight the need for richer, context-aware interpretability methods and human-in-the-loop feedback to meet clinicians' expectations in real-world diagnostic settings.
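A minimal PyTorch sketch of the core idea, attention-gated fusion of a global and a local feature stream; the dimensions, linear projections, and sigmoid gate are illustrative assumptions, not the paper's exact architecture:

```python
# Illustrative attention-gated fusion of global and local feature vectors.
import torch
import torch.nn as nn

class AttentionGateFusion(nn.Module):
    def __init__(self, dim_global: int, dim_local: int, dim_fused: int):
        super().__init__()
        self.proj_g = nn.Linear(dim_global, dim_fused)
        self.proj_l = nn.Linear(dim_local, dim_fused)
        # The gate scores, per channel, how much to rely on the local stream.
        self.gate = nn.Sequential(nn.Linear(2 * dim_fused, dim_fused), nn.Sigmoid())

    def forward(self, f_global: torch.Tensor, f_local: torch.Tensor) -> torch.Tensor:
        g, l = self.proj_g(f_global), self.proj_l(f_local)
        alpha = self.gate(torch.cat([g, l], dim=-1))  # attention weights in (0, 1)
        return alpha * l + (1 - alpha) * g            # gated convex combination

fused = AttentionGateFusion(2048, 512, 256)(torch.randn(4, 2048), torch.randn(4, 512))
```

The gate weights double as a rough interpretability signal, indicating whether the global or the local stream drove each fused channel.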

Advances in renal cancer: diagnosis, treatment, and emerging technologies.

Saida T, Iima M, Ito R, Ueda D, Nishioka K, Kurokawa R, Kawamura M, Hirata K, Honda M, Takumi K, Ide S, Sugawara S, Watabe T, Sakata A, Yanagawa M, Sofue K, Oda S, Naganawa S

PubMed · Aug 2 2025
This review provides a comprehensive overview of current practices and recent advancements in the diagnosis and treatment of renal cancer. It introduces updates in histological classification and explains the imaging characteristics of each tumour based on these changes. The review highlights state-of-the-art imaging modalities, including magnetic resonance imaging, computed tomography, positron emission tomography, and ultrasound, emphasising their crucial role in tumour characterisation and optimising treatment planning. Emerging technologies, such as radiomics and artificial intelligence, are also discussed for their transformative impact on enhancing diagnostic precision, prognostic prediction, and personalised patient management. Furthermore, the review explores current treatment options, including minimally invasive techniques such as cryoablation, radiofrequency ablation, and stereotactic body radiation therapy, as well as systemic therapies such as immune checkpoint inhibitors and targeted therapies.

M4CXR: Exploring Multitask Potentials of Multimodal Large Language Models for Chest X-Ray Interpretation.

Park J, Kim S, Yoon B, Hyun J, Choi K

PubMed · Aug 1 2025
The rapid evolution of artificial intelligence, especially in large language models (LLMs), has significantly impacted various domains, including healthcare. In chest X-ray (CXR) analysis, previous studies have employed LLMs, but with limitations: either underutilizing the LLMs' capability for multitask learning or lacking clinical accuracy. This article presents M4CXR, a multimodal LLM designed to enhance CXR interpretation. The model is trained on a visual instruction-following dataset that integrates various task-specific datasets in a conversational format. As a result, the model supports multiple tasks such as medical report generation (MRG), visual grounding, and visual question answering (VQA). M4CXR achieves state-of-the-art clinical accuracy in MRG by employing a chain-of-thought (CoT) prompting strategy, in which it identifies findings in CXR images and subsequently generates corresponding reports. The model is adaptable to various MRG scenarios depending on the available inputs, such as single-image, multi-image, and multi-study contexts. In addition to MRG, M4CXR performs visual grounding at a level comparable to specialized models and demonstrates outstanding performance in VQA. Both quantitative and qualitative assessments reveal M4CXR's versatility in MRG, visual grounding, and VQA, while consistently maintaining clinical accuracy.
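One way to picture the chain-of-thought strategy is as a two-turn conversation: first elicit the findings, then condition the report on them. The message schema below is an illustrative assumption, not M4CXR's actual interface:

```python
# Hypothetical two-turn CoT prompt for medical report generation (MRG).
def build_cot_messages(findings_reply: str | None = None) -> list[dict]:
    messages = [
        {"role": "user",
         "content": "<image>\nStep 1: List every radiographic finding "
                    "visible in this chest X-ray, one per line."},
    ]
    if findings_reply is not None:  # second turn, after the model answers
        messages.append({"role": "assistant", "content": findings_reply})
        messages.append(
            {"role": "user",
             "content": "Step 2: Based on the findings above, write a report "
                        "with Findings and Impression sections."})
    return messages

messages = build_cot_messages("Right lower lobe opacity.\nNo pneumothorax.")
```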

Natural language processing and LLMs in liver imaging: a practical review of clinical applications.

López-Úbeda P, Martín-Noguerol T, Luna A

PubMed · Aug 1 2025
Liver diseases pose a significant global health challenge due to their silent progression and high mortality. Proper interpretation of radiology reports is essential for the evaluation and management of these conditions but is limited by variability in reporting styles and the complexity of unstructured medical language. In this context, Natural Language Processing (NLP) techniques and Large Language Models (LLMs) have emerged as promising tools to extract relevant clinical information from unstructured liver radiology reports. This work reviews, from a practical point of view, the current state of NLP and LLM applications for liver disease classification, clinical feature extraction, diagnostic support, and staging from reports. It also discusses existing limitations, such as the need for high-quality annotated data, lack of explainability, and challenges in clinical integration. With responsible and validated implementation, these technologies have the potential to transform liver clinical management by enabling faster and more accurate diagnoses and optimizing radiology workflows, ultimately improving patient care in liver diseases.
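A sketch of the kind of structured extraction these systems perform, assuming a generic instruction-tuned LLM behind a placeholder llm_complete callable and an illustrative field schema:

```python
# Turn a free-text liver report into structured fields via a prompted LLM.
# `llm_complete` and the schema below are illustrative placeholders.
import json

SCHEMA = ["lesion_present", "lesion_count", "largest_lesion_cm", "li_rads_category"]

def extract_liver_features(report_text: str, llm_complete) -> dict:
    prompt = (
        "Extract the following fields from this liver imaging report as JSON "
        f"with keys {SCHEMA}; use null when a field is not stated.\n\n{report_text}"
    )
    fields = json.loads(llm_complete(prompt))  # assumes the model returns valid JSON
    return {k: fields.get(k) for k in SCHEMA}

# Demo with a stub model that returns a fixed JSON string.
demo = extract_liver_features(
    "Two arterially enhancing lesions, the largest measuring 2.3 cm. LI-RADS 4.",
    lambda p: '{"lesion_present": true, "lesion_count": 2, '
              '"largest_lesion_cm": 2.3, "li_rads_category": "LR-4"}',
)
```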

Rapid review: Growing usage of Multimodal Large Language Models in healthcare.

Gupta P, Zhang Z, Song M, Michalowski M, Hu X, Stiglic G, Topaz M

PubMed · Aug 1 2025
Recent advancements in large language models (LLMs) have led to multimodal LLMs (MLLMs), which integrate multiple data modalities beyond text. Although MLLMs show promise, there is a gap in the literature that empirically demonstrates their impact in healthcare. This paper summarizes the applications of MLLMs in healthcare, highlighting their potential to transform health practices. A rapid literature review was conducted in August 2024 using World Health Organization (WHO) rapid-review methodology and PRISMA standards, with searches across four databases (Scopus, Medline, PubMed and ACM Digital Library) and top-tier conferences, including NeurIPS, ICML, AAAI, MICCAI, CVPR, ACL and EMNLP. Articles on MLLM healthcare applications were included for analysis based on inclusion and exclusion criteria. The search yielded 115 articles, of which 39 were included in the final analysis. Of these, 77% appeared online (as preprints or publications) in 2024, reflecting the recent emergence of MLLMs. 80% of studies were from Asia and North America (mainly China and the US), with Europe lagging. Studies split evenly between evaluations of pre-built MLLMs (60% focused on GPT versions) and development of custom MLLMs/frameworks with task-specific customizations. About 81% of studies examined MLLMs for diagnosis and reporting in radiology, pathology, and ophthalmology, with additional applications in education, surgery, and mental health. Prompting strategies, used in 80% of studies, improved performance in nearly half. However, evaluation practices were inconsistent, with 67% of studies reporting accuracy. Error analysis was mostly anecdotal, with only 18% categorizing failure types. Only 13% validated explainability through clinician feedback. Clinical deployment was demonstrated in just 3% of studies, and workflow integration, governance, and safety were rarely addressed. MLLMs offer substantial potential for healthcare transformation through multimodal data integration. Yet, methodological inconsistencies, limited validation, and underdeveloped deployment strategies highlight the need for standardized evaluation metrics, structured error analysis, and human-centered design to support safe, scalable, and trustworthy clinical adoption.

Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification

Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz

arXiv preprint · Aug 1 2025
Classification models that provide human-interpretable explanations enhance clinicians' trust and usability in medical image diagnosis. One research focus is the integration and prediction of pathology-related visual attributes used by radiologists alongside the diagnosis, aligning AI decision-making with clinical reasoning. Radiologists use attributes like shape and texture as established diagnostic criteria, and mirroring these in AI decision-making both enhances transparency and enables explicit validation of model outputs. However, the adoption of such models is limited by the scarcity of large-scale medical image datasets annotated with these attributes. To address this challenge, we propose synthesizing attribute-annotated data using a generative model. We enhance a diffusion model with attribute conditioning and train it using only 20 attribute-labeled lung nodule samples from the LIDC-IDRI dataset. Incorporating its generated images into the training of an explainable model boosts performance, increasing attribute prediction accuracy by 13.4% and target prediction accuracy by 1.8% compared to training with only the small real attribute-annotated dataset. This work highlights the potential of synthetic data to overcome dataset limitations, enhancing the applicability of explainable models in medical image analysis.
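A minimal sketch of what attribute conditioning can look like in a diffusion model: the attribute vector is embedded and added to the timestep embedding that modulates the denoiser. The eight-attribute input (LIDC-IDRI-style ratings) and the plain MLP embedder are assumptions for illustration, not the paper's configuration:

```python
# Attribute conditioning for a diffusion denoiser: embed radiologist-style
# attribute ratings and add them to the timestep embedding.
import torch
import torch.nn as nn

class AttributeConditioner(nn.Module):
    def __init__(self, n_attributes: int, emb_dim: int):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(n_attributes, emb_dim), nn.SiLU(), nn.Linear(emb_dim, emb_dim)
        )

    def forward(self, t_emb: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        # attrs: (B, n_attributes) scores such as shape, margin, texture
        return t_emb + self.embed(attrs)  # conditioned embedding for the UNet blocks

cond = AttributeConditioner(n_attributes=8, emb_dim=256)
conditioned = cond(torch.randn(4, 256), torch.rand(4, 8))
```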

Emerging Applications of Feature Selection in Osteoporosis Research: From Biomarker Discovery to Clinical Decision Support.

Wang J, Wang Y, Ren J, Li Z, Guo L, Lv J

PubMed · Aug 1 2025
Osteoporosis (OP), a systemic skeletal disease characterized by compromised bone strength and elevated fracture susceptibility, represents a growing global health challenge that necessitates early detection and accurate risk stratification. With the exponential growth of multidimensional biomedical data in OP research, feature selection has become an indispensable machine learning paradigm that improves model generalizability while preserving clinical interpretability and enhancing predictive accuracy. This perspective article systematically reviews the transformative role of feature selection methodologies across three critical domains of OP investigation: 1) multi-omics biomarker identification, 2) diagnostic pattern recognition, and 3) fracture risk prognostication. In biomarker discovery, advanced feature selection algorithms systematically refine high-dimensional multi-omics datasets (genomic, proteomic, metabolomic) to isolate key molecular signatures correlated with bone mineral density (BMD) trajectories and microarchitectural deterioration. For clinical diagnostics, these techniques enable efficient extraction of discriminative patterns from multimodal imaging data, including dual-energy X-ray absorptiometry (DXA), quantitative computed tomography (QCT), and emerging dental radiographic biomarkers. In prognostic modeling, strategic variable selection optimizes predictive accuracy by integrating demographic, biochemical, and biomechanical predictors while mitigating overfitting in heterogeneous patient cohorts. Current challenges include heterogeneity in dataset quality and dimensionality, translational gaps between algorithmic outputs and clinical decision parameters, and limited reproducibility across diverse populations. Future directions should prioritize the development of adaptive feature selection frameworks capable of dynamic multi-omics data integration, coupled with hybrid intelligence systems that synergize machine-derived biomarkers with clinician expertise. Addressing these challenges requires coordinated interdisciplinary efforts to establish standardized validation protocols and create clinician-friendly decision support interfaces, ultimately bridging the gap between computational OP research and personalized patient care.
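As a concrete illustration of the embedded feature-selection paradigm on high-dimensional omics data, the scikit-learn sketch below uses L1-regularized logistic regression to retain a sparse biomarker subset; the data shapes and regularization strength are arbitrary stand-ins:

```python
# Embedded feature selection: an L1-penalized model zeroes out most
# coefficients, and SelectFromModel keeps the surviving features.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))   # e.g. 200 patients x 5000 omics features
y = rng.integers(0, 2, size=200)   # osteoporosis vs. control labels

selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
model = make_pipeline(StandardScaler(), selector, LogisticRegression(max_iter=1000))
model.fit(X, y)
print(f"retained {selector.get_support().sum()} of {X.shape[1]} candidate features")
```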

Transparent brain tumor detection using DenseNet169 and LIME.

Abraham LA, Palanisamy G, Veerapu G

PubMed · Aug 1 2025
A crucial area of research in the field of medical imaging is brain tumor classification, which greatly aids diagnosis and facilitates treatment planning. This paper proposes DenseNet169-LIME-TumorNet, a deep learning model that combines DenseNet169 with LIME (Local Interpretable Model-agnostic Explanations) to boost both the performance and the interpretability of brain tumor classification. The model was trained and evaluated on the publicly available Brain Tumor MRI Dataset containing 2,870 images spanning three tumor types. DenseNet169-LIME-TumorNet achieves a classification accuracy of 98.78%, outperforming widely used architectures including Inception V3, ResNet50, MobileNet V2, EfficientNet variants, and other DenseNet configurations. The integration of LIME provides visual explanations that enhance transparency and reliability in clinical decision-making. Furthermore, the model demonstrates minimal computational overhead, enabling faster inference and deployment in resource-constrained clinical environments, thereby highlighting its practical utility for real-time diagnostic support. Future work should focus on improving generalization through multi-modal learning, hybrid deep learning architectures, and real-time applications for AI-assisted diagnosis.
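A sketch of how a DenseNet169 classifier can be paired with LIME image explanations, using the lime package's standard API; the three-class head, preprocessing, and random stand-in image are assumptions covering only what the abstract describes:

```python
# Pair a DenseNet169 classifier with LIME superpixel explanations.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models
from lime import lime_image

model = models.densenet169(weights=None)  # load trained weights in practice
model.classifier = torch.nn.Linear(model.classifier.in_features, 3)  # 3 tumor types
model.eval()

def predict_fn(batch: np.ndarray) -> np.ndarray:
    # LIME passes (N, H, W, 3) arrays; convert to NCHW float tensors.
    x = torch.from_numpy(batch).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        return F.softmax(model(x), dim=1).numpy()

explainer = lime_image.LimeImageExplainer()
image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed MRI slice
explanation = explainer.explain_instance(image, predict_fn, top_labels=1, num_samples=200)
overlay, mask = explanation.get_image_and_mask(explanation.top_labels[0])
```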

Impact of large language models and vision deep learning models in predicting neoadjuvant rectal score for rectal cancer treated with neoadjuvant chemoradiation.

Kim HB, Tan HQ, Nei WL, Tan YCRS, Cai Y, Wang F

PubMed · Jul 31 2025
This study aims to explore deep learning methods, namely Large Language Models (LLMs) and computer vision models, to accurately predict the neoadjuvant rectal (NAR) score for locally advanced rectal cancer (LARC) treated with neoadjuvant chemoradiation (NACRT). The NAR score is a validated surrogate endpoint for LARC. 160 CT scans of patients were used in this study, along with 4 different types of radiology reports: 2 generated from CT scans and the other 2 from MRI scans, both before and after NACRT. For CT scans, two different convolutional neural network approaches were used, processing either the 3D scan as a whole or slice by slice. For radiology reports, an encoder-architecture LLM was used. The performance of the approaches was quantified by the Area under the Receiver Operating Characteristic curve (AUC). The two approaches for CT scans yielded [Formula: see text] and [Formula: see text], while the LLM trained on post-NACRT MRI reports showed the most predictive potential at [Formula: see text], with a statistically significant improvement, p = 0.03, over the baseline clinical approach (from [Formula: see text] to [Formula: see text]). This study showcases the potential of Large Language Models and the inadequacies of CT scans in predicting NAR values. Clinical trial number: not applicable.
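A sketch of the slice-by-slice alternative mentioned above: score each CT slice with a 2D backbone and pool the slice logits into a scan-level probability. The ResNet-18 backbone and mean pooling are illustrative assumptions, not the study's implementation:

```python
# Slice-wise scan classification: 2D CNN per slice, pooled to one score.
import torch
import torch.nn as nn
from torchvision import models

class SliceWiseScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Accept single-channel CT slices instead of RGB.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, scan: torch.Tensor) -> torch.Tensor:
        # scan: (S, 1, H, W), the S slices of one CT volume
        slice_logits = self.backbone(scan).squeeze(-1)  # one logit per slice
        return torch.sigmoid(slice_logits.mean())       # scan-level probability

prob = SliceWiseScanClassifier()(torch.randn(40, 1, 224, 224))
```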