Unsupervised risk factor identification across cancer types and data modalities via explainable artificial intelligence

Maximilian Ferle, Jonas Ader, Thomas Wiemers, Nora Grieb, Adrian Lindenmeyer, Hans-Jonas Meyer, Thomas Neumuth, Markus Kreuz, Kristin Reiche, Maximilian Merz

arXiv preprint · Jun 15, 2025
Risk stratification is a key tool in clinical decision-making, yet current approaches often fail to translate sophisticated survival analysis into actionable clinical criteria. We present a novel method for unsupervised machine learning that directly optimizes for survival heterogeneity across patient clusters through a differentiable adaptation of the multivariate logrank statistic. Unlike most existing methods, which rely on proxy metrics, our approach provides a general methodology for training any neural network architecture on any data modality to identify prognostically distinct patient groups. We thoroughly evaluate the method in simulation experiments and demonstrate its practical utility by applying it to two distinct cancer types: analyzing laboratory parameters from multiple myeloma patients and computed tomography images from non-small cell lung cancer patients, identifying prognostically distinct patient subgroups with significantly different survival outcomes in both cases. Post-hoc explainability analyses uncover clinically meaningful features driving the group assignments, which align well with established risk factors and thus lend strong weight to the method's utility. This pan-cancer, model-agnostic approach represents a valuable advance in clinical risk stratification, enabling the discovery of novel prognostic signatures across diverse data types while providing interpretable results that promise to complement treatment personalization and clinical decision-making in oncology and beyond.
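The core idea, making the logrank statistic differentiable by replacing hard group labels with soft cluster assignments so it can serve directly as a training loss, can be sketched as follows. This is a minimal two-group illustration under a no-ties approximation, not the authors' implementation; the function name and shapes are ours.

```python
# A minimal sketch of a differentiable logrank-style loss: soft cluster
# probabilities stand in for hard group labels, so the separation
# statistic can be maximized by gradient descent (illustrative only).
import torch

def soft_logrank_loss(p_group, times, events):
    """p_group: (N,) soft probability of belonging to group 1.
    times: (N,) follow-up times; events: (N,) 1 = event, 0 = censored."""
    order = torch.argsort(times)                 # process risk sets in time order
    p = p_group[order]
    d = events[order].float()
    n = p.shape[0]
    # everyone whose time >= t_i is still at risk at t_i
    at_risk_total = torch.arange(n, 0, -1, dtype=p.dtype)
    at_risk_g1 = torch.flip(torch.cumsum(torch.flip(p, [0]), 0), [0])
    frac = at_risk_g1 / at_risk_total            # expected group-1 share per risk set
    obs = (d * p).sum()                          # observed group-1 events
    exp = (d * frac).sum()                       # expected group-1 events
    var = (d * frac * (1 - frac)).sum() + 1e-8   # no-ties variance approximation
    chi2 = (obs - exp) ** 2 / var                # logrank chi-square
    return -chi2                                 # minimizing = maximizing separation
```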

A review: Lightweight architecture model in deep learning approach for lung disease identification.

Maharani DA, Utaminingrum F, Husnina DNN, Sukmaningrum B, Rahmania FN, Handani F, Chasanah HN, Arrahman A, Febrianto F

PubMed paper · Jun 14, 2025
Lung disease is one of the leading causes of death worldwide, so early detection is a critical step toward effective treatment. Lung diseases can be classified from medical image data such as X-ray or CT scans. Deep learning methods are widely used to recognize complex patterns in medical images, but they require large, varied datasets and substantial computing resources. Lightweight deep learning architectures offer a more efficient alternative in terms of parameter count and computation time, making them deployable on devices with modest processors, such as mobile phones. This article presents a comprehensive review of 23 studies published between 2020 and 2025, covering lightweight architectures and optimization techniques aimed at improving the accuracy of lung disease detection. The results show that these models substantially reduce parameter counts, yielding faster computation while maintaining accuracy competitive with traditional deep learning architectures. Among the reviewed work, SqueezeNet applied to public COVID-19 datasets emerges as the best base architecture, combining high accuracy with a very low parameter count of 570 thousand. By contrast, UNet (31.07 million parameters) and SegNet (29.45 million parameters), trained on CT images from the Italian Society of Medical and Interventional Radiology and Radiopaedia, are less efficient. Among combined methods, EfficientNetV2 with an Extreme Learning Machine (ELM) achieves the highest accuracy, 98.20%, while significantly reducing parameters. The weakest performance is shown by VGG and UNet, with accuracy dropping from 91.05% to 87% alongside an increase in parameter count. In conclusion, lightweight architectures enable fast, efficient medical image classification for lung disease diagnosis on devices with limited specifications.
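To make the parameter-count comparisons concrete, here is an illustrative way to count parameters in stock torchvision implementations of two of the architecture families discussed; the reviewed studies use their own (often modified) variants, so the counts reported above may differ from these.

```python
# Illustrative parameter counting with off-the-shelf torchvision models;
# not the reviewed papers' code, and stock counts differ from the
# modified variants cited in the review.
import torchvision.models as models

def n_params(model):
    # total number of parameters, trainable or not
    return sum(p.numel() for p in model.parameters())

print(f"SqueezeNet 1.1: {n_params(models.squeezenet1_1(weights=None)) / 1e6:.2f} M")
print(f"VGG-16:         {n_params(models.vgg16(weights=None)) / 1e6:.2f} M")
```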

AI-Based screening for thoracic aortic aneurysms in routine breast MRI.

Bounias D, Führes T, Brock L, Graber J, Kapsner LA, Liebert A, Schreiter H, Eberle J, Hadler D, Skwierawska D, Floca R, Neher P, Kovacs B, Wenkel E, Ohlmeyer S, Uder M, Maier-Hein K, Bickelhaupt S

PubMed paper · Jun 12, 2025
The prognosis for thoracic aortic aneurysms is significantly worse for women than for men, with a higher mortality rate observed among female patients. The increasing use of breast magnetic resonance imaging (MRI) offers a unique opportunity to detect both breast cancer and thoracic aortic aneurysms simultaneously. We retrospectively validate a fully automated artificial neural network (ANN) pipeline on 5057 breast MRI examinations from public (Duke University Hospital/EA1141 trial) and in-house (Erlangen University Hospital) data. The ANN, benchmarked against 3D ground-truth segmentations, clinical reports, and a multireader panel, demonstrates high technical robustness (Dice/clDice 0.88-0.91/0.97-0.99) across different vendors and field strengths. The ANN improves aneurysm detection rates 3.5-fold compared with routine clinical readings, highlighting its potential to improve early diagnosis and patient outcomes. Notably, a higher odds ratio (OR = 2.29, CI: [0.55, 9.61]) for thoracic aortic aneurysms is observed in women with breast cancer or a history of breast cancer, suggesting potential further benefit from integrated simultaneous assessment for cancer and aortic aneurysms.
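For readers unfamiliar with the overlap metrics quoted above, the Dice coefficient measures agreement between a predicted mask and the ground truth. A minimal sketch for binary masks (illustrative, not the study's evaluation code):

```python
# Dice = 2|P ∩ G| / (|P| + |G|) for binary masks of equal shape;
# the epsilon keeps the metric defined when both masks are empty.
import numpy as np

def dice(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```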

A strategy for the automatic diagnostic pipeline towards feature-based models: a primer with pleural invasion prediction from preoperative PET/CT images.

Kong X, Zhang A, Zhou X, Zhao M, Liu J, Zhang X, Zhang W, Meng X, Li N, Yang Z

PubMed paper · Jun 12, 2025
This study explores the feasibility of automating the application of nomograms in clinical medicine, demonstrated through the task of preoperative pleural invasion prediction in non-small cell lung cancer patients using PET/CT imaging. The automatic pipeline involves multimodal segmentation, feature extraction, and model prediction, and is validated on a cohort of 1116 patients from two medical centers. The feature-based diagnostic model outperformed both the radiomics model and individual machine learning models. The segmentation models for CT and PET images achieved mean Dice similarity coefficients of 0.85 and 0.89, respectively, and the segmented lung contours showed high consistency with the actual contours. The automatic diagnostic system achieved an accuracy of 0.87 on the internal test set and 0.82 on the external test set, demonstrating overall diagnostic performance comparable to the human-based diagnostic model. In comparative analysis, the automatic diagnostic system outperformed other segmentation and diagnostic pipelines. The proposed system provides an interpretable, automated solution for predicting pleural invasion in non-small cell lung cancer.
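The three-stage pipeline the abstract describes can be sketched schematically as below; every name here is a hypothetical placeholder, since the abstract does not specify the implementation.

```python
# A schematic sketch of the described pipeline: segment on PET/CT,
# extract nomogram-style features, then score with a fitted model.
# seg_ct, seg_pet, extract_features, and clf are hypothetical callables.
def predict_pleural_invasion(ct_volume, pet_volume,
                             seg_ct, seg_pet, extract_features, clf):
    ct_mask = seg_ct(ct_volume)          # stage 1: multimodal segmentation
    pet_mask = seg_pet(pet_volume)
    feats = extract_features(ct_volume, pet_volume,
                             ct_mask, pet_mask)      # stage 2: features
    return clf.predict_proba([feats])[0, 1]          # stage 3: invasion probability
```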

Anatomy-Grounded Weakly Supervised Prompt Tuning for Chest X-ray Latent Diffusion Models

Konstantinos Vilouras, Ilias Stogiannidis, Junyu Yan, Alison Q. O'Neil, Sotirios A. Tsaftaris

arXiv preprint · Jun 12, 2025
Latent Diffusion Models have shown remarkable results in text-guided image synthesis in recent years. In the domain of natural (RGB) images, recent works have shown that such models can be adapted to various vision-language downstream tasks with little to no supervision. In contrast, text-to-image Latent Diffusion Models remain relatively underexplored in medical imaging, primarily due to limited data availability (e.g., owing to privacy concerns). In this work, focusing on the chest X-ray modality, we first demonstrate that a standard text-conditioned Latent Diffusion Model has not learned to align clinically relevant information in free-text radiology reports with the corresponding regions of the given scan. To alleviate this issue, we propose a fine-tuning framework that improves multi-modal alignment in a pre-trained model so it can be efficiently repurposed for downstream tasks such as phrase grounding. Our method sets a new state of the art on a standard benchmark dataset (MS-CXR), while also exhibiting robust performance on out-of-distribution data (VinDr-CXR). Our code will be made publicly available.
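Phrase grounding with a text-conditioned diffusion model typically reads out the UNet's text-to-image cross-attention. A generic sketch of that readout step, with illustrative shapes and names (the paper's actual procedure may differ):

```python
# A minimal sketch of cross-attention readout for phrase grounding:
# average the attention each image patch pays to the phrase's text
# tokens, then reshape into a normalized heatmap.
import torch

def grounding_heatmap(cross_attn, phrase_token_idx, h, w):
    """cross_attn: (num_patches, num_text_tokens) attention weights from
    one UNet layer; phrase_token_idx: indices of the phrase's tokens."""
    scores = cross_attn[:, phrase_token_idx].mean(dim=1)  # (num_patches,)
    heat = scores.reshape(h, w)
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
```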

Tackling Tumor Heterogeneity Issue: Transformer-Based Multiple Instance Enhancement Learning for Predicting EGFR Mutation via CT Images.

Fang Y, Wang M, Song Q, Cao C, Gao Z, Song B, Min X, Li A

PubMed paper · Jun 12, 2025
Accurate and non-invasive prediction of epidermal growth factor receptor (EGFR) mutation status is crucial for the diagnosis and treatment of non-small cell lung cancer (NSCLC). While computed tomography (CT) imaging shows promise for identifying EGFR mutation, current prediction methods rely heavily on fully supervised learning, which overlooks the substantial heterogeneity of tumors and therefore yields suboptimal results. To tackle the tumor heterogeneity issue, this study introduces TransMIEL, a novel weakly supervised method that leverages multiple instance learning for accurate EGFR mutation prediction. Specifically, we first propose an innovative instance enhancement learning (IEL) strategy that strengthens the discriminative power of instance features for complex tumor CT images by exploiting self-derived soft pseudo-labels. Next, to improve tumor representation capability, we design a spatial-aware transformer (SAT) that fully captures inter-instance relationships between different pathological subregions, mirroring the diagnostic process of radiologists. Finally, an instance adaptive gating (IAG) module is developed to emphasize the contribution of informative instance features in heterogeneous tumors, facilitating dynamic instance feature aggregation and improving model generalization. Experimental results demonstrate that TransMIEL significantly outperforms existing fully and weakly supervised methods on both public and in-house NSCLC datasets. Visualization results further show that our approach highlights intra-tumoral and peri-tumoral areas relevant to EGFR mutation status. Our method therefore holds significant potential as an effective tool for EGFR prediction and offers a novel perspective for future research on tumor heterogeneity.
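The general mechanism behind gating modules like the IAG described above is gated attention pooling for multiple instance learning. A minimal sketch of the classic gated-attention formulation (Ilse et al.), not the paper's code:

```python
# Gated attention pooling for MIL: each instance gets a learned
# importance weight from a tanh content branch modulated by a sigmoid
# gate, and the bag embedding is the weighted sum of instances.
import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)   # content branch
        self.U = nn.Linear(dim, hidden)   # gating branch
        self.w = nn.Linear(hidden, 1)

    def forward(self, instances):          # (num_instances, dim)
        a = self.w(torch.tanh(self.V(instances)) *
                   torch.sigmoid(self.U(instances)))
        a = torch.softmax(a, dim=0)        # per-instance importance
        return (a * instances).sum(dim=0)  # weighted bag embedding
```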

CT-based deep learning model for improved disease-free survival prediction in clinical stage I lung cancer: a real-world multicenter study.

Fu Y, Hou R, Qian L, Feng W, Zhang Q, Yu W, Cai X, Liu J, Wang Y, Ding Z, Xu Y, Zhao J, Fu X

PubMed paper · Jun 12, 2025
To develop a deep learning (DL) model for predicting disease-free survival (DFS) in clinical stage I lung cancer patients who underwent surgical resection, using pre-treatment CT images, and to validate it in patients receiving stereotactic body radiation therapy (SBRT). A retrospective cohort of 2489 clinical stage I non-small cell lung cancer (NSCLC) patients treated surgically (2015-2017) was enrolled to develop the DL-based DFS prediction model. Tumor features were extracted from CT images using a three-dimensional convolutional neural network. External validation was performed on 248 clinical stage I patients receiving SBRT at two hospitals. A clinical model was constructed by multivariable Cox regression for comparison. Model performance was evaluated with Harrell's concordance index (C-index), which measures a model's ability to correctly rank survival times by comparing all possible pairs of subjects. In the surgical cohort, the DL model effectively predicted DFS with a C-index of 0.85 (95% CI: 0.80-0.89) in the internal testing set, significantly outperforming the clinical model (C-index: 0.76). Based on the DL model, 68 patients in the SBRT cohort identified as high risk had significantly worse DFS than the low-risk group (p < 0.01; 5-year DFS rate: 34.7% vs 77.4%). The DL score was an independent predictor of DFS in both cohorts (p < 0.01). The CT-based DL model improved DFS prediction in clinical stage I lung cancer patients, identifying populations at high risk of recurrence and metastasis to guide clinical decision-making.
Question: The recurrence and metastasis rate of early-stage lung cancer remains high and varies among patients following radical treatments such as surgery or SBRT.
Findings: This CT-based DL model successfully predicted DFS and stratified varying disease risks in clinical stage I lung cancer patients undergoing surgery or SBRT.
Clinical relevance: The CT-based DL model is a reliable predictive tool for the prognosis of early-stage lung cancer. Its accurate risk stratification assists clinicians in identifying specific patients for personalized clinical decision-making.
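Since the abstract leans on the C-index, here is a worked sketch of Harrell's concordance index computed exactly as described, over all comparable patient pairs (illustrative code, not the study's):

```python
# Harrell's C-index: among comparable pairs (i had an observed event
# before j's follow-up time), the fraction where the sooner-failing
# patient also has the higher predicted risk; ties count as half.
def c_index(risk, time, event):
    num, den = 0.0, 0.0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:  # comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# e.g. c_index([0.9, 0.4, 0.7], [2, 10, 5], [1, 0, 1]) -> 1.0
```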

Enhancing Pulmonary Disease Prediction Using Large Language Models With Feature Summarization and Hybrid Retrieval-Augmented Generation: Multicenter Methodological Study Based on Radiology Report.

Li R, Mao S, Zhu C, Yang Y, Tan C, Li L, Mu X, Liu H, Yang Y

PubMed paper · Jun 11, 2025
The rapid advancement of natural language processing, particularly the development of large language models (LLMs), has opened new avenues for managing complex clinical text data. However, the inherent complexity and specificity of medical texts present significant challenges for the practical application of prompt engineering in diagnostic tasks. This paper explores LLMs with a new prompt engineering strategy to enhance model interpretability and improve pulmonary disease prediction performance relative to a traditional deep learning model. A retrospective dataset of 2965 chest CT radiology reports was constructed, drawn from 4 cohorts: healthy individuals and patients with pulmonary tuberculosis, lung cancer, and pneumonia. A novel prompt engineering strategy was then proposed that integrates feature summarization (F-Sum), chain-of-thought (CoT) reasoning, and a hybrid retrieval-augmented generation (RAG) framework. The feature summarization approach, leveraging term frequency-inverse document frequency (TF-IDF) and K-means clustering, was used to extract and distill key radiological findings related to the 3 diseases. Simultaneously, the hybrid RAG framework combined dense and sparse vector representations to enhance the LLMs' comprehension of disease-related text. In total, 3 state-of-the-art LLMs, GLM-4-Plus, GLM-4-Air (Zhipu AI), and GPT-4o (OpenAI), were integrated with the prompt strategy to evaluate efficiency in recognizing pneumonia, tuberculosis, and lung cancer. A traditional deep learning model, BERT (Bidirectional Encoder Representations from Transformers), was also compared to assess the relative merits of LLMs. Finally, the proposed method was tested on an external validation dataset consisting of 343 chest CT reports from another hospital. Compared with the BERT-based prediction model and various other prompt engineering techniques, our method with GLM-4-Plus achieved the best performance on the test dataset, attaining an F1-score of 0.89 and an accuracy of 0.89. On the external validation dataset, the F1-score (0.86) and accuracy (0.92) of the proposed method with GPT-4o were the highest. Compared with the popular strategy of manually selected typical samples (few-shot) and physician-designed CoT (F1-score = 0.83, accuracy = 0.83), the proposed method, which summarizes disease characteristics (F-Sum) with an LLM and automatically generates the CoT, performed better (F1-score = 0.89, accuracy = 0.90). Although the BERT-based model achieved similar results on the test dataset (F1-score = 0.85, accuracy = 0.88), its predictive performance decreased markedly on the external validation set (F1-score = 0.48, accuracy = 0.78). These findings highlight the potential of LLMs to advance pulmonary disease prediction, particularly in resource-constrained settings, by surpassing traditional models in both accuracy and flexibility. The proposed prompt engineering strategy not only improves predictive performance but also enhances the adaptability of LLMs in complex medical contexts, offering a promising tool for disease diagnosis and clinical decision-making.
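A minimal sketch of the F-Sum idea as the abstract outlines it, TF-IDF vectorization of disease-specific reports, K-means clustering, and extraction of each cluster's top-weighted terms as prompt features. This is our own illustration under those assumptions, not the study's code:

```python
# TF-IDF + K-means feature summarization: cluster reports, then keep
# the highest-weighted vocabulary terms of each cluster centroid as
# candidate "key findings" to inject into the LLM prompt.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def summarize_findings(reports, n_clusters=3, top_k=10):
    vec = TfidfVectorizer(max_features=5000, stop_words="english")
    X = vec.fit_transform(reports)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    terms = np.array(vec.get_feature_names_out())
    summaries = []
    for c in range(n_clusters):
        center = km.cluster_centers_[c]
        summaries.append(terms[np.argsort(center)[::-1][:top_k]].tolist())
    return summaries  # key radiological terms per cluster
```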

Autonomous Computer Vision Development with Agentic AI

Jin Kim, Muhammad Wahi-Anwa, Sangyun Park, Shawn Shin, John M. Hoffman, Matthew S. Brown

arXiv preprint · Jun 11, 2025
Agentic Artificial Intelligence (AI) systems leveraging Large Language Models (LLMs) exhibit significant potential for complex reasoning, planning, and tool utilization. We demonstrate that a specialized computer vision system can be built autonomously from a natural language prompt using Agentic AI methods. This involved extending SimpleMind (SM), an open-source Cognitive AI environment with configurable tools for medical image analysis, with an LLM-based agent, implemented using OpenManus, to automate the planning (tool configuration) for a particular computer vision task. We provide a proof-of-concept demonstration that an agentic system can interpret a computer vision task prompt and plan a corresponding SimpleMind workflow by decomposing the task and configuring appropriate tools. From the user input prompt, "provide sm (SimpleMind) config for lungs, heart, and ribs segmentation for cxr (chest x-ray)", the agent LLM was able to generate the plan (a tool configuration file in YAML format) and execute the SM-Learn (training) and SM-Think (inference) scripts autonomously. The computer vision agent automatically configured, trained, and tested itself on 50 chest X-ray images, achieving mean Dice scores of 0.96, 0.82, and 0.83 for the lungs, heart, and ribs, respectively. This work shows the potential for autonomous planning and tool configuration that has traditionally been performed by a data scientist in the development of computer vision applications.
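To give a feel for the agent's output, here is a hypothetical illustration of the kind of YAML tool configuration it might emit for the quoted prompt; the real SimpleMind schema is not shown in the abstract, so every key and value below is an assumption.

```python
# Hypothetical illustration only: assembles a made-up YAML tool config
# of the sort an agent could generate; not the actual SimpleMind schema.
import yaml  # PyYAML

config = {
    "task": "cxr_segmentation",
    "tools": [
        {"name": "segment_lungs", "model": "unet", "labels": ["lungs"]},
        {"name": "segment_heart", "model": "unet", "labels": ["heart"]},
        {"name": "segment_ribs",  "model": "unet", "labels": ["ribs"]},
    ],
}
print(yaml.safe_dump(config, sort_keys=False))
```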
