[Incidental pulmonary nodules on CT imaging: what to do?].

van der Heijden EHFM, Snoeren M, Jacobs C

PubMed | Jun 23 2025
Incidental pulmonary nodules are very frequently found on CT imaging and may represent (early-stage) lung cancers without any signs or symptoms. These incidental findings can be solid or ground-glass lesions, solitary or multiple. Careful and systematic evaluation of these findings is needed to determine the risk of malignancy, based on imaging characteristics, patient factors such as smoking habits, prior cancers or family history, and growth rate, preferably determined by volume measurements. When the risk of malignancy is increased, minimally invasive image-guided biopsy is warranted, preferably by navigation bronchoscopy. We present two cases to illustrate this clinical workup: one case with a benign solitary pulmonary nodule, and a second case with multiple ground-glass opacities diagnosed as synchronous primary adenocarcinomas of the lung. This is followed by a review of the current status of computer- and artificial intelligence-aided diagnostic support and clinical workflow optimization.

From BERT to generative AI - Comparing encoder-only vs. large language models in a cohort of lung cancer patients for named entity recognition in unstructured medical reports.

Arzideh K, Schäfer H, Allende-Cid H, Baldini G, Hilser T, Idrissi-Yaghir A, Laue K, Chakraborty N, Doll N, Antweiler D, Klug K, Beck N, Giesselbach S, Friedrich CM, Nensa F, Schuler M, Hosch R

PubMed | Jun 23 2025
Extracting clinical entities from unstructured medical documents is critical for improving clinical decision support and documentation workflows. This study examines the performance of various encoder and decoder models trained for Named Entity Recognition (NER) of clinical parameters in pathology and radiology reports, and assesses the applicability of Large Language Models (LLMs) to this task. Three NER methods were evaluated: (1) flat NER using transformer-based models, (2) nested NER with a multi-task learning setup, and (3) instruction-based NER utilizing LLMs. A dataset of 2,013 pathology reports and 413 radiology reports, annotated by medical students, was used for training and testing. The performance of encoder-based NER models (flat and nested) was superior to that of LLM-based approaches. The best-performing flat NER models achieved F1-scores of 0.87-0.88 on pathology reports and up to 0.78 on radiology reports, while nested NER models performed slightly lower. In contrast, multiple LLMs, despite achieving high precision, yielded significantly lower F1-scores (ranging from 0.18 to 0.30) due to poor recall. A contributing factor appears to be that these LLMs produce fewer but more accurate entities, suggesting they become overly conservative when generating outputs. LLMs in their current form are unsuitable for comprehensive entity extraction tasks in clinical domains, particularly when faced with a high number of entity types per document, though instructing them to return more entities in subsequent refinements may improve recall. Additionally, their computational overhead does not provide proportional performance gains. Encoder-based NER models, particularly those pre-trained on biomedical data, remain the preferred choice for extracting information from unstructured medical documents.
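As a concrete illustration of the flat-NER approach evaluated above, the sketch below wires a pretrained encoder to a token-classification head using the Hugging Face transformers library. The backbone name and the entity label set are illustrative placeholders, not the study's actual configuration.

```python
# Minimal flat-NER sketch: a pretrained encoder with a token-classification head.
# `model_name` and `labels` are hypothetical stand-ins for the study's setup.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-DIAGNOSIS", "I-DIAGNOSIS", "B-TNM", "I-TNM"]  # assumed label set
model_name = "bert-base-cased"  # placeholder; a biomedical encoder would be preferred

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))

text = "Histology: adenocarcinoma of the lung, pT2 pN1."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                    # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0]
for tok, pid in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), pred_ids):
    print(f"{tok:15s} {labels[pid]}")                  # untrained head: labels are random
```

Fine-tuning this head on the annotated reports is what produces F1-scores in the range reported above.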

Benchmarking Foundation Models and Parameter-Efficient Fine-Tuning for Prognosis Prediction in Medical Imaging

Filippo Ruffini, Elena Mulero Ayllon, Linlin Shen, Paolo Soda, Valerio Guarrasi

arXiv preprint | Jun 23 2025
Artificial Intelligence (AI) holds significant promise for improving prognosis prediction in medical imaging, yet its effective application remains challenging. In this work, we introduce a structured benchmark explicitly designed to evaluate and compare the transferability of Convolutional Neural Networks and Foundation Models in predicting clinical outcomes in COVID-19 patients, leveraging diverse publicly available chest X-ray datasets. Our experimental methodology extensively explores a wide set of fine-tuning strategies, encompassing traditional approaches such as Full Fine-Tuning and Linear Probing, as well as advanced Parameter-Efficient Fine-Tuning methods including Low-Rank Adaptation (LoRA), BitFit, VeRA, and IA3. The evaluations were conducted across multiple learning paradigms, including both extensive full-data scenarios and more clinically realistic Few-Shot Learning settings, which are critical for modeling rare disease outcomes and rapidly emerging health threats. By implementing a large-scale comparative analysis involving a diverse selection of pretrained models, ranging from general-purpose architectures pretrained on large-scale datasets, such as CLIP and DINOv2, to biomedical-specific models like MedCLIP, BioMedCLIP, and PubMedCLIP, we rigorously assess each model's capacity to effectively adapt and generalize to prognosis tasks, particularly under conditions of severe data scarcity and pronounced class imbalance. The benchmark was designed to capture critical conditions common in prognosis tasks, including variations in dataset size and class distribution, providing detailed insights into the strengths and limitations of each fine-tuning strategy. This extensive and structured evaluation aims to inform the practical deployment and adoption of robust, efficient, and generalizable AI-driven solutions in real-world clinical prognosis prediction workflows.
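To make the parameter-efficient strategies above concrete, here is a minimal sketch of one of them, LoRA, applied to a ViT-style classifier with the Hugging Face peft library. The backbone, rank, and target modules are illustrative choices, not the benchmark's exact settings.

```python
# LoRA sketch: freeze the backbone, inject low-rank adapters into the attention
# projections, and train only the adapters plus the new classification head.
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",   # assumed backbone for illustration
    num_labels=2,                          # e.g., a binary prognosis label
)

config = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # adapter scaling factor
    target_modules=["query", "value"],     # attention projections to adapt
    modules_to_save=["classifier"],        # keep the new head fully trainable
)
model = get_peft_model(model, config)
model.print_trainable_parameters()         # typically well under 1% of all weights
```

BitFit, VeRA, and IA3 follow the same pattern: the backbone stays frozen while a small, carefully chosen subset of parameters is trained.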

Enhancing Lung Cancer Diagnosis: An Optimization-Driven Deep Learning Approach with CT Imaging.

Lakshminarasimha K, Priyeshkumar AT, Karthikeyan M, Sakthivel R

PubMed | Jun 23 2025
Lung cancer (LC) remains a leading cause of mortality worldwide, affecting individuals across all genders and age groups. Early and accurate diagnosis is critical for effective treatment and improved survival rates. Computed tomography (CT) imaging is widely used for LC detection and classification. However, manual identification can be time-consuming and error-prone due to the visual similarities among various LC types. Deep learning (DL) has shown significant promise in medical image analysis. Although numerous studies have investigated LC detection using DL techniques, the effective extraction of highly correlated features remains a significant challenge, limiting diagnostic accuracy. Furthermore, most existing models incur substantial computational complexity and struggle to efficiently handle the high-dimensional nature of CT images. This study introduces an optimized CBAM-EfficientNet model to enhance feature extraction and improve LC classification. EfficientNet is utilized to reduce computational complexity, while the Convolutional Block Attention Module (CBAM) emphasizes essential spatial and channel features. Additionally, optimization algorithms including Gray Wolf Optimization (GWO), Whale Optimization (WO), and the Bat Algorithm (BA) are applied to fine-tune hyperparameters and boost predictive accuracy. The proposed model, integrated with the different optimization strategies, is evaluated on two benchmark datasets. The GWO-based CBAM-EfficientNet achieves outstanding classification accuracies of 99.81% and 99.25% on the Lung-PET-CT-Dx and LIDC-IDRI datasets, respectively, followed by the BA-based CBAM-EfficientNet with 99.44% and 98.75% accuracy on the same datasets. Comparative analysis highlights the superiority of the proposed model over existing approaches, demonstrating strong potential for reliable and automated LC diagnosis. Its lightweight architecture also supports real-time implementation, offering valuable assistance to radiologists in high-demand clinical environments.
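For readers unfamiliar with the attention block named above, the following is a minimal PyTorch sketch of CBAM (channel attention followed by spatial attention) in the spirit of Woo et al. (2018); the reduction ratio and kernel size are common defaults, not necessarily this paper's settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(          # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: pool spatial dims, pass both pools through the shared MLP
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: pool the channel dim, then a 7x7 convolution
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 64, 32, 32)
print(CBAM(64)(feat).shape)                # torch.Size([1, 64, 32, 32])
```

In a setup like the paper's, such blocks would be interleaved with EfficientNet stages, with GWO/WO/BA searching over hyperparameters such as the learning rate or reduction ratio.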

Chest X-ray Foundation Model with Global and Local Representations Integration.

Yang Z, Xu X, Zhang J, Wang G, Kalra MK, Yan P

PubMed | Jun 23 2025
Chest X-ray (CXR) is the most frequently ordered imaging test, supporting diverse clinical tasks from thoracic disease detection to postoperative monitoring. However, task-specific classification models are limited in scope, require costly labeled data, and lack generalizability to out-of-distribution datasets. To address these challenges, we introduce CheXFound, a self-supervised vision foundation model that learns robust CXR representations and generalizes effectively across a wide range of downstream tasks. We pretrained CheXFound on a curated CXR-987K dataset, comprising approximately 987K unique CXRs from 12 publicly available sources. We propose a Global and Local Representations Integration (GLoRI) head for downstream adaptation, which combines fine- and coarse-grained disease-specific local features with global image features to enhance multilabel classification performance. Our experimental results showed that CheXFound outperformed state-of-the-art models in classifying 40 disease findings across different prevalence levels on the CXR-LT 24 dataset and exhibited superior label efficiency on downstream tasks with limited training data. Additionally, CheXFound achieved significant improvements on downstream tasks with out-of-distribution datasets, including opportunistic cardiovascular disease risk estimation, mortality prediction, malpositioned tube detection, and anatomical structure segmentation. These results demonstrate CheXFound's strong generalization capabilities, which will enable diverse downstream adaptations with improved label efficiency in future applications. The project source code is publicly available at https://github.com/RPIDIAL/CheXFound.
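The following is a hedged sketch of the idea behind the GLoRI head: learnable per-finding queries attend over local patch tokens, and the resulting disease-specific features are fused with the global image feature to produce multilabel logits. Dimensions and the exact fusion are assumptions; the released code at the URL above is authoritative.

```python
import torch
import torch.nn as nn

class GlobalLocalHead(nn.Module):
    """Toy global+local integration head: one learnable query per finding."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_classes, dim))   # per-finding queries
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.fc = nn.Linear(2 * dim, 1)      # fuse [local_i ; global] -> logit_i

    def forward(self, patch_tokens: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, D) local features; global_feat: (B, D), e.g. the CLS token
        B = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)              # (B, C, D)
        local, _ = self.attn(q, patch_tokens, patch_tokens)          # disease-specific features
        g = global_feat.unsqueeze(1).expand(-1, local.size(1), -1)   # broadcast global feature
        return self.fc(torch.cat([local, g], dim=-1)).squeeze(-1)    # (B, C) multilabel logits

head = GlobalLocalHead(dim=768, num_classes=40)                      # 40 findings, as in CXR-LT 24
print(head(torch.randn(2, 196, 768), torch.randn(2, 768)).shape)     # torch.Size([2, 40])
```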

Trans$^2$-CBCT: A Dual-Transformer Framework for Sparse-View CBCT Reconstruction

Minmin Yang, Huantao Ren, Senem Velipasalar

arXiv preprint | Jun 20 2025
Cone-beam computed tomography (CBCT) using only a few X-ray projection views enables faster scans with lower radiation dose, but the resulting severe under-sampling causes strong artifacts and poor spatial coverage. We address these challenges in a unified framework. First, we replace conventional UNet/ResNet encoders with TransUNet, a hybrid CNN-Transformer model. Convolutional layers capture local details, while self-attention layers enhance global context. We adapt TransUNet to CBCT by combining multi-scale features, querying view-specific features per 3D point, and adding a lightweight attenuation-prediction head. This yields Trans-CBCT, which surpasses prior baselines by 1.17 dB PSNR and 0.0163 SSIM on the LUNA16 dataset with six views. Second, we introduce a neighbor-aware Point Transformer to enforce volumetric coherence. This module uses 3D positional encoding and attention over k-nearest neighbors to improve spatial consistency. The resulting model, Trans$^2$-CBCT, provides an additional gain of 0.63 dB PSNR and 0.0117 SSIM. Experiments on LUNA16 and ToothFairy show consistent gains from six to ten views, validating the effectiveness of combining CNN-Transformer features with point-based geometry reasoning for sparse-view CBCT reconstruction.
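As a rough illustration of the neighbor-aware Point Transformer component, the sketch below restricts attention to each point's k nearest neighbors and encodes relative 3D offsets; dimensions and the encoding MLP are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class KNNAttention(nn.Module):
    """Toy neighbor-aware attention over 3D points with relative positional encoding."""
    def __init__(self, dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.to_qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feats: (N, D) per-point features; coords: (N, 3) positions
        idx = torch.cdist(coords, coords).topk(self.k, largest=False).indices  # (N, k) neighbors
        q, k_, v = self.to_qkv(feats).chunk(3, dim=-1)
        pos = self.pos_mlp(coords[idx] - coords[:, None, :])   # encode relative offsets
        attn = torch.einsum("nd,nkd->nk", q, k_[idx] + pos) / feats.size(-1) ** 0.5
        w = attn.softmax(dim=-1)                               # attention over neighbors only
        return torch.einsum("nk,nkd->nd", w, v[idx] + pos)

pts, feats = torch.randn(1024, 3), torch.randn(1024, 64)
print(KNNAttention(64)(feats, pts).shape)                      # torch.Size([1024, 64])
```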

Automatic Detection of B-Lines in Lung Ultrasound Based on the Evaluation of Multiple Characteristic Parameters Using Raw RF Data.

Shen W, Zhang Y, Zhang H, Zhong H, Wan M

PubMed | Jun 20 2025
B-line artifacts in lung ultrasound, pivotal for diagnosing pulmonary conditions, warrant automated recognition to enhance diagnostic accuracy. In this paper, a lung ultrasound B-line vertical artifact identification method based on radio frequency (RF) signals is proposed. B-line regions were distinguished from non-B-line regions by inputting multiple characteristic parameters into a nonlinear support vector machine (SVM). Six characteristic parameters were evaluated: permutation entropy, information entropy, kurtosis, skewness, Nakagami shape factor, and approximate entropy. Following an evaluation that demonstrated performance differences among these parameters, principal component analysis (PCA) was used to reduce the feature set to four dimensions before input into the SVM for classification. Four types of experiments were conducted: a sponge-with-dripping-water model, gelatin phantoms containing glass beads, gelatin phantoms containing gelatin droplets, and in vivo experiments. By employing precise feature selection and analyzing scan lines rather than full images, this approach significantly reduced the dependency on large image datasets without compromising discriminative accuracy. The method exhibited performance comparable to contemporary image-based deep learning approaches, which, while highly effective, typically necessitate extensive data for training and require expert annotation of large datasets to establish ground truth. Owing to the optimized architecture of our model, efficient sample recognition was achieved, with the capability to process between 27,000 and 33,000 scan lines per second (a frame rate exceeding 100 FPS at 256 scan lines per frame), thus supporting real-time analysis. The results demonstrate that the accuracy of classifying a scan line as belonging to a B-line region was up to 88%, with sensitivity up to 90%, specificity up to 87%, and an F1-score up to 89%. This approach effectively reflects the performance of scan line classification pertinent to B-line identification and reduces the reliance on large annotated datasets, thereby streamlining the preprocessing phase.
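The sketch below gives a minimal version of the described pipeline (per-scan-line characteristic parameters, PCA to four dimensions, then a nonlinear RBF SVM) using synthetic stand-in data. Approximate entropy is omitted for brevity, and the envelope and feature computations are simplified assumptions rather than the paper's exact signal processing.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def permutation_entropy(x: np.ndarray, order: int = 3) -> float:
    """Shannon entropy of ordinal patterns in sliding windows."""
    patterns = np.argsort(np.lib.stride_tricks.sliding_window_view(x, order), axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def scanline_features(rf: np.ndarray) -> np.ndarray:
    env = np.abs(rf)                                     # crude envelope proxy
    hist, _ = np.histogram(env, bins=64, density=True)
    info_entropy = stats.entropy(hist + 1e-12)           # information entropy
    m = np.mean(env**2) ** 2 / (np.var(env**2) + 1e-12)  # Nakagami shape factor
    return np.array([stats.kurtosis(rf), stats.skew(rf),
                     info_entropy, m, permutation_entropy(rf)])

rng = np.random.default_rng(0)                           # synthetic stand-in for RF lines
X = np.array([scanline_features(rng.standard_normal(2048)) for _ in range(200)])
y = rng.integers(0, 2, size=200)                         # toy labels: 1 = B-line region

clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```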

Current and future applications of artificial intelligence in lung cancer and mesothelioma.

Roche JJ, Seyedshahi F, Rakovic K, Thu AW, Le Quesne J, Blyth KG

PubMed | Jun 20 2025
Considerable challenges exist in managing lung cancer and mesothelioma, including diagnostic complexity, treatment stratification, early detection and imaging quantification. The variable incidence of mesothelioma also makes equitable provision of high-quality care difficult. In this context, artificial intelligence (AI) offers a range of assistive/automated functions that can potentially enhance clinical decision-making, while reducing inequality and pathway delay. In this state-of-the-art narrative review, we synthesise evidence on this topic, focusing particularly on tools that ingest routine pathology and radiology images. We summarise the strengths and weaknesses of AI applied to common multidisciplinary team (MDT) functions, including histological diagnosis, therapeutic response prediction, radiological detection and quantification, and survival estimation. We also review emerging methods capable of generating novel biological insights and current barriers to implementation, including access to high-quality training data and suitable regulatory and technical infrastructure. Neural networks trained on pathology images have proven utility in histological classification, prognostication, response prediction and survival estimation. Self-supervised models can also generate new insights into biological features responsible for adverse outcomes. Radiology applications include lung nodule tools, which offer critical pathway support for imminent lung cancer screening and urgent referrals. Tumour segmentation AI offers particular advantages in mesothelioma, where response assessment and volumetric staging are difficult for human readers due to tumour size and morphological complexity. AI is also critical for radiogenomics, permitting effective integration of molecular and radiomic features for discovery of non-invasive markers for molecular subtyping and enhanced stratification. AI solutions offer considerable potential benefits across the MDT, particularly in repetitive or time-consuming tasks based on pathology and radiology images. Effective leveraging of this technology is critical for lung cancer screening and efficient delivery of increasingly complex diagnostic and predictive MDT functions. Future AI research should involve transparent and interpretable outputs that assist in explaining the basis of AI-supported decision-making.

Combination of 2D and 3D nnU-Net for ground glass opacity segmentation in CT images of Post-COVID-19 patients.

Nguyen QH, Hoang DA, Pham HV

PubMed | Jun 20 2025
The COVID-19 pandemic has had a significant impact on global health, highlighting the imperative for effective management of post-recovery symptoms. Within this context, ground-glass opacity (GGO) in lung computed tomography (CT) scans emerges as a critical indicator for early intervention. Recently, researchers have turned to refining techniques for GGO segmentation, aiming to scrutinize and compare cutting-edge methods for analyzing lung CT images of patients recuperating from COVID-19. While many of these methods utilize the nnU-Net architecture, its general-purpose approach does not fully address GGO-specific challenges such as marking infected areas, irregular shapes, and fuzzy boundaries. This research develops a specialized machine learning algorithm that advances the nnU-Net framework to accurately segment GGO in lung CT scans of post-COVID-19 patients. We propose a novel two-stage image segmentation approach based on nnU-Net 2D and 3D models, covering lung and shadow (lesion) segmentation and incorporating an attention mechanism. The combined models improve automatic segmentation and accuracy when different loss functions are used during training. Experimental results show that the proposed model's DSC score ranks fifth among the compared methods, and it has the second-highest sensitivity, indicating a higher true segmentation rate than most of the other methods. The proposed method achieved a Hausdorff95 of 54.566, surface Dice of 0.7193, sensitivity of 0.7528, and specificity of 0.7749. Compared with state-of-the-art methods, the proposed model segments infected areas considerably better. The model has been deployed in a real-world case study combining the 2D and 3D models, demonstrating the capacity to detect lung lesions comprehensively and correctly. Additionally, the boundary loss function assisted in achieving more precise segmentation for low-resolution images, and segmenting the lung area first reduced the volume of image data to be processed as well as the training workload.
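One common way to realize the 2D/3D combination described above is to run the 2D model slice-by-slice, run the 3D model on the whole volume, and average their class probabilities. The sketch below shows this fusion under that assumption, with model2d and model3d standing in for trained nnU-Net-style networks; the paper's exact fusion rule may differ.

```python
import torch

def combine_2d_3d(model2d, model3d, volume: torch.Tensor) -> torch.Tensor:
    """Fuse slice-wise 2D and volumetric 3D softmax outputs by averaging."""
    # volume: (1, 1, D, H, W) CT volume, already lung-cropped by the first stage
    probs3d = torch.softmax(model3d(volume), dim=1)       # (1, C, D, H, W)
    slices = volume.squeeze(0).permute(1, 0, 2, 3)        # (D, 1, H, W) slice batch
    probs2d = torch.softmax(model2d(slices), dim=1)       # (D, C, H, W)
    probs2d = probs2d.permute(1, 0, 2, 3).unsqueeze(0)    # back to (1, C, D, H, W)
    fused = 0.5 * (probs3d + probs2d)                     # simple probability average
    return fused.argmax(dim=1)                            # (1, D, H, W) label map
```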

PMFF-Net: A deep learning-based image classification model for UIP, NSIP, and OP.

Xu MW, Zhang ZH, Wang X, Li CT, Yang HY, Liao ZH, Zhang JQ

PubMed | Jun 19 2025
High-resolution computed tomography (HRCT) is helpful for diagnosing interstitial lung diseases (ILD), but interpretation largely depends on the experience of physicians. Herein, our study aims to develop a deep-learning-based classification model to differentiate the three common types of ILD, so as to provide a reference to help physicians make the diagnosis and improve the accuracy of ILD diagnosis. Patients were selected from four tertiary Grade A hospitals in Kunming based on inclusion and exclusion criteria. HRCT scans of 130 patients were included. The imaging manifestations were usual interstitial pneumonia (UIP), non-specific interstitial pneumonia (NSIP), and organizing pneumonia (OP). Additionally, 50 chest HRCT cases without imaging abnormalities during the same period were selected. A dataset was constructed, and the Parallel Multi-scale Feature Fusion Network (PMFF-Net) deep learning model was trained, validated, and tested, with Python software used to generate data and charts pertaining to model performance. The model's accuracy, precision, recall, and F1-score were assessed, and its diagnostic efficacy was compared against that of physicians across various hospital levels, with differing levels of seniority, and from various departments. The PMFF-Net deep learning model is capable of classifying imaging types such as UIP, NSIP, and OP, as well as normal imaging. In a mere 105 s, it made the diagnosis for 18 HRCT images with a diagnostic accuracy of 92.84%, precision of 91.88%, recall of 91.95%, and an F1-score of 0.9171. The diagnostic accuracy of senior radiologists (83.33%) and pulmonologists (77.77%) from tertiary hospitals was higher than that of internists from secondary hospitals (33.33%). Meanwhile, the diagnostic accuracy of middle-aged radiologists (61.11%) and pulmonologists (66.66%) was higher than that of junior radiologists (38.88%) and pulmonologists (44.44%) in tertiary hospitals, whereas junior and middle-aged internists at secondary hospitals were unable to complete the tests. This study found that the PMFF-Net model can effectively classify UIP, NSIP, and OP imaging types, as well as normal imaging, which can help doctors of different hospital levels and departments make clinical decisions quickly and effectively.