RADAR: Enhancing Radiology Report Generation with Supplementary Knowledge Injection

Wenjun Hou, Yi Cheng, Kaishuai Xu, Heng Li, Yan Hu, Wenjie Li, Jiang Liu

arXiv preprint · May 20, 2025
Large language models (LLMs) have demonstrated remarkable capabilities in various domains, including radiology report generation. Previous approaches have attempted to utilize multimodal LLMs for this task, enhancing their performance through the integration of domain-specific knowledge retrieval. However, these approaches often overlook the knowledge already embedded within the LLMs, leading to redundant information integration and inefficient utilization of learned representations. To address this limitation, we propose RADAR, a framework for enhancing radiology report generation with supplementary knowledge injection. RADAR improves report generation by systematically leveraging both the internal knowledge of an LLM and externally retrieved information. Specifically, it first extracts the model's acquired knowledge that aligns with expert image-based classification outputs. It then retrieves relevant supplementary knowledge to further enrich this information. Finally, by aggregating both sources, RADAR generates more accurate and informative radiology reports. Extensive experiments on MIMIC-CXR, CheXpert-Plus, and IU X-ray demonstrate that our model outperforms state-of-the-art LLMs in both language quality and clinical accuracy.
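
The abstract gives no code; as a loose illustration of RADAR's aggregation idea (internal LLM knowledge plus non-redundant retrieved knowledge feeding one generation prompt), here is a hypothetical sketch in which every function name and the prompt format are invented for illustration, not taken from the paper.

```python
# Illustrative sketch only: the RADAR paper does not publish this interface.
# All names and the prompt layout below are hypothetical.

def build_report_prompt(image_findings, internal_facts, retrieved_facts):
    """Assemble a generation prompt from both knowledge sources.

    image_findings: labels from an expert image classifier
    internal_facts: knowledge elicited from the LLM that agrees with those labels
    retrieved_facts: externally retrieved snippets meant to supplement, not
                     repeat, what the model already knows
    """
    # Keep only retrieved facts that add information beyond the internal ones,
    # mirroring RADAR's stated goal of avoiding redundant injection.
    supplementary = [f for f in retrieved_facts if f not in internal_facts]
    sections = [
        "Findings from image classifier: " + "; ".join(image_findings),
        "Model knowledge consistent with findings: " + "; ".join(internal_facts),
        "Supplementary retrieved knowledge: " + "; ".join(supplementary),
        "Write a radiology report consistent with all of the above.",
    ]
    return "\n".join(sections)

prompt = build_report_prompt(
    ["cardiomegaly", "pleural effusion"],
    ["cardiomegaly implies an enlarged cardiac silhouette"],
    ["pleural effusion often blunts the costophrenic angle"],
)
print(prompt)
```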

Non-invasive CT based multiregional radiomics for predicting pathologic complete response to preoperative neoadjuvant chemoimmunotherapy in non-small cell lung cancer.

Fan S, Xie J, Zheng S, Wang J, Zhang B, Zhang Z, Wang S, Cui Y, Liu J, Zheng X, Ye Z, Cui X, Yue D

PubMed paper · May 19, 2025
This study aims to develop and validate a multiregional radiomics model to predict pathological complete response (pCR) to neoadjuvant chemoimmunotherapy in non-small cell lung cancer (NSCLC), and to further evaluate the model's performance in specific subgroups (N2 stage and anti-PD-1/PD-L1). A total of 216 patients with NSCLC who underwent neoadjuvant chemoimmunotherapy followed by surgical intervention were included and randomly assigned to training and validation sets. From pre-treatment baseline CT, one intratumoral region (T) and two peritumoral regions (P3: 0-3 mm; P6: 0-6 mm) were extracted. Five radiomics models were developed using machine learning algorithms to predict pCR, utilizing selected features from the intratumoral (T), peritumoral (P3, P6), and combined intra- and peritumoral regions (T + P3, T + P6). Additionally, the predictive efficacy of the optimal model was specifically assessed for patients in the N2 stage and anti-PD-1/PD-L1 subgroups. A total of 51.4% (111/216) of patients exhibited pCR following neoadjuvant chemoimmunotherapy. Multivariable analysis identified the T + P3 radiomics signature as the only independent predictor of pCR (P < 0.001). The multiregional radiomics model (T + P3) exhibited superior predictive performance for pCR, achieving an area under the curve (AUC) of 0.75 in the validation cohort. Furthermore, this multiregional model maintained robust predictive accuracy in both the N2 stage and anti-PD-1/PD-L1 subgroups, with AUCs of 0.829 and 0.833, respectively. The proposed multiregional radiomics model showed potential for predicting pCR in NSCLC after neoadjuvant chemoimmunotherapy and demonstrated good predictive performance in specific subgroups. This capability may assist clinicians in identifying suitable candidates for neoadjuvant chemoimmunotherapy and advance precision therapy.
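
The abstract does not detail how the peritumoral regions are derived; a common construction is morphological dilation of the tumor mask, sketched below. The voxel spacing, mask names, and use of scipy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def peritumoral_ring(tumor_mask: np.ndarray, spacing_mm: float, ring_mm: float) -> np.ndarray:
    """Return the shell extending ring_mm outward from the tumor boundary.

    tumor_mask: binary 3D array (True inside tumor); spacing_mm: isotropic
    voxel size. One common construction; the paper may differ in detail.
    """
    n_iter = max(1, round(ring_mm / spacing_mm))   # dilation steps covering ring_mm
    dilated = ndimage.binary_dilation(tumor_mask, iterations=n_iter)
    return dilated & ~tumor_mask.astype(bool)      # keep only the outer shell

# Example: a 0-3 mm ring (P3) around a toy spherical "tumor" at 1 mm voxels.
z, y, x = np.ogrid[-20:20, -20:20, -20:20]
tumor = (z**2 + y**2 + x**2) <= 10**2
p3 = peritumoral_ring(tumor, spacing_mm=1.0, ring_mm=3.0)
print(tumor.sum(), p3.sum())
```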

Thymoma habitat segmentation and risk prediction model using CT imaging and K-means clustering.

Liang Z, Li J, He S, Li S, Cai R, Chen C, Zhang Y, Deng B, Wu Y

PubMed paper · May 19, 2025
Thymomas, though rare, present a wide range of clinical behaviors, from indolent to aggressive forms, making accurate risk stratification crucial for treatment planning. Traditional methods such as histopathology and radiological assessment often fail to capture tumor heterogeneity, which can impact prognosis. Radiomics, combined with machine learning, provides a method to extract and analyze quantitative imaging features, offering the potential to improve tumor classification and risk prediction. By segmenting tumors into distinct habitat zones, it becomes possible to assess intratumoral heterogeneity more effectively. This study employs radiomics and machine learning techniques to enhance thymoma risk prediction, aiming to improve diagnostic consistency and reduce variability in radiologists' assessments. Specifically, it aims to identify different habitat zones within thymomas through CT imaging feature analysis, to establish a predictive model to differentiate between high- and low-risk thymomas, and to explore how this model can assist radiologists. We obtained CT imaging data from 133 patients with thymoma who were treated at the Affiliated Hospital of Guangdong Medical University from 2015 to 2023. Images from the plain scan phase, venous phase, arterial phase, and their differential (subtracted) images were used. Tumor regions were segmented into three habitat zones using K-means clustering. Imaging features from each habitat zone were extracted using the PyRadiomics library (van Griethuysen, 2017). The 28 most distinguishing features were selected through Mann-Whitney U tests (Mann, 1947) and Spearman's correlation analysis (Spearman, 1904). Five predictive models were built using the same machine learning algorithm (Support Vector Machine [SVM]): Habitat1, Habitat2, and Habitat3 (trained on features from individual tumor habitat regions), Habitat All (trained on combined features from all regions), and Intra (trained on intratumoral features); their performances were then compared. The models' diagnostic outcomes were also compared with the diagnoses of four radiologists (two junior and two experienced physicians). The area under the curve (AUC) for habitat zone 1 was 0.818, for habitat zone 2 was 0.732, and for habitat zone 3 was 0.763. The comprehensive model, which combined data from all habitat zones, achieved an AUC of 0.960, outperforming the model based on traditional radiomic features (AUC of 0.720). The model significantly improved the diagnostic accuracy of all four radiologists: the AUCs for junior radiologists 1 and 2 increased from 0.747 and 0.775 to 0.932 and 0.972, respectively, while those for experienced radiologists 1 and 2 increased from 0.932 and 0.859 to 0.977 and 0.972, respectively. This study successfully identified distinct habitat zones within thymomas through CT imaging feature analysis and developed an efficient predictive model that significantly improved diagnostic accuracy. This model offers a novel tool for risk assessment of thymomas and can aid in guiding clinical decision-making.
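
As a rough sketch of the pipeline the abstract describes (K-means habitat partitioning followed by an SVM classifier), the snippet below uses scikit-learn with random placeholder arrays standing in for the real PyRadiomics features; shapes and preprocessing are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-voxel features (e.g., intensities from the plain, arterial,
# venous, and subtraction images) for voxels inside one tumor.
voxel_features = np.random.rand(5000, 4)           # placeholder for real CT features

# Step 1 (as in the paper): partition the tumor into three habitat zones.
habitat_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxel_features)
print(np.bincount(habitat_labels))                 # voxels per habitat zone

# Step 2: per-patient radiomics features would come from PyRadiomics per zone;
# a placeholder matrix stands in for the 28 selected features here.
X = np.random.rand(133, 28)                        # 133 patients, 28 features
y = np.random.randint(0, 2, size=133)              # 0 = low risk, 1 = high risk

# Step 3: SVM classifier, matching the model family named in the abstract.
clf = make_pipeline(StandardScaler(), SVC(probability=True))
clf.fit(X, y)
print(clf.predict_proba(X[:3]))
```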

Diagnosis of early idiopathic pulmonary fibrosis: current status and future perspective.

Wang X, Xia X, Hou Y, Zhang H, Han W, Sun J, Li F

PubMed paper · May 19, 2025
The standard approach to diagnosing idiopathic pulmonary fibrosis (IPF) includes identifying the usual interstitial pneumonia (UIP) pattern via high-resolution computed tomography (HRCT) or lung biopsy and excluding known causes of interstitial lung disease (ILD). However, limitations of manual interpretation of lung imaging, along with other factors such as a lack of relevant knowledge and non-specific symptoms, have hindered the timely diagnosis of IPF. This review proposes a definition of early IPF, emphasizes the diagnostic urgency of early IPF, and highlights current diagnostic strategies and future prospects for early IPF. The integration of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), is revolutionizing the diagnosis of early IPF by standardizing and accelerating the interpretation of thoracic images. Innovative bronchoscopic techniques such as transbronchial lung cryobiopsy (TBLC), genomic classifiers, and endobronchial optical coherence tomography (EB-OCT) provide less invasive diagnostic alternatives. In addition, chest auscultation, serum biomarkers, and susceptibility genes can provide pivotal indications for early diagnosis. Ongoing research is essential for refining diagnostic methods and treatment strategies for early IPF.

Artificial intelligence based pulmonary vessel segmentation: an opportunity for automated three-dimensional planning of lung segmentectomy.

Mank QJ, Thabit A, Maat APWM, Siregar S, Van Walsum T, Kluin J, Sadeghi AH

PubMed paper · May 19, 2025
This study aimed to develop an automated method for pulmonary artery and vein segmentation in both the left and right lungs from computed tomography (CT) images using artificial intelligence (AI). The segmentations were evaluated using PulmoSR software, which provides 3D visualizations of patient-specific anatomy, potentially enhancing a surgeon's understanding of the lung structure. A dataset of 125 CT scans from lung segmentectomy patients at Erasmus MC was used. Manual annotations for pulmonary arteries and veins were created with 3D Slicer. nnU-Net models were trained for both lungs and assessed using Dice score, sensitivity, and specificity. Intraoperative recordings demonstrated clinical applicability. A paired t-test evaluated the statistical significance of the differences between automatic and manual segmentations. The nnU-Net model, trained at full 3D resolution, achieved a mean Dice score between 0.91 and 0.92. The mean sensitivity and specificity were: left artery, 0.86 and 0.99; right artery, 0.84 and 0.99; left vein, 0.85 and 0.99; right vein, 0.85 and 0.99. The automatic method reduced segmentation time from approximately 1.5 hours to under 5 minutes. Five cases were evaluated to demonstrate how the segmentations support lung segmentectomy procedures. P-values for the Dice scores were all below 0.01, indicating statistical significance. The nnU-Net models successfully performed automatic segmentation of pulmonary arteries and veins in both lungs. When integrated with visualization tools, these automatic segmentations can enhance preoperative and intraoperative planning by providing detailed 3D views of the patient's anatomy.
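
For reference, the Dice score, sensitivity, and specificity reported above can be computed from binary masks as below; this is the standard definition of these metrics, not code from the paper.

```python
import numpy as np

def dice_sensitivity_specificity(pred: np.ndarray, ref: np.ndarray):
    """Standard overlap metrics for binary segmentation masks.

    pred, ref: boolean arrays of identical shape (automatic vs. manual mask).
    """
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# Toy example with two overlapping cubes as the masks.
pred = np.zeros((8, 8, 8), bool); pred[2:6, 2:6, 2:6] = True
ref = np.zeros((8, 8, 8), bool);  ref[3:7, 3:7, 3:7] = True
print(dice_sensitivity_specificity(pred, ref))
```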

Harnessing Artificial Intelligence for Accurate Diagnosis and Radiomics Analysis of Combined Pulmonary Fibrosis and Emphysema: Insights from a Multicenter Cohort Study

Zhang, S., Wang, H., Tang, H., Li, X., Wu, N.-W., Lang, Q., Li, B., Zhu, H., Chen, X., Chen, K., Xie, B., Zhou, A., Mo, C.

medRxiv preprint · May 18, 2025
Combined Pulmonary Fibrosis and Emphysema (CPFE), formally recognized as a distinct pulmonary syndrome in 2022, is characterized by unique clinical features and pathogenesis that may lead to respiratory failure and death. However, the diagnosis of CPFE presents significant challenges that hinder effective treatment. Here, we assembled three-dimensional (3D) reconstructions of chest high-resolution computed tomography (HRCT) scans from patients at multiple hospitals across different provinces in China, including Xiangya Hospital, West China Hospital, and Fujian Provincial Hospital. Using this dataset, we developed CPFENet, a deep learning-based diagnostic model for CPFE. It accurately differentiates CPFE from COPD, with performance comparable to that of professional radiologists. Additionally, we developed a CPFE score based on radiomic analysis of 3D CT images to quantify disease characteristics. Notably, female patients demonstrated significantly higher CPFE scores than males, suggesting potential sex-specific differences in CPFE. Overall, our study establishes the first diagnostic framework for CPFE, providing a diagnostic model and clinical indicators that enable accurate classification and characterization of the syndrome.

Attention-Enhanced U-Net for Accurate Segmentation of COVID-19 Infected Lung Regions in CT Scans

Amal Lahchim, Lazar Davic

arXiv preprint · May 18, 2025
In this study, we propose a robust methodology for automatic segmentation of infected lung regions in COVID-19 CT scans using convolutional neural networks. The approach is based on a modified U-Net architecture enhanced with attention mechanisms, data augmentation, and postprocessing techniques. It achieved a Dice coefficient of 0.8658 and mean IoU of 0.8316, outperforming other methods. The dataset was sourced from public repositories and augmented for diversity. Results demonstrate superior segmentation performance. Future work includes expanding the dataset, exploring 3D segmentation, and preparing the model for clinical deployment.
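
The abstract does not specify the attention mechanism; modified U-Nets of this kind are typically built on the additive attention gate of Attention U-Net (Oktay et al., 2018), sketched here in PyTorch as one plausible reading. Channel sizes are arbitrary, and the paper's exact design may differ.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net (Oktay et al., 2018)."""

    def __init__(self, gate_ch: int, skip_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # gating signal
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # skip connection
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g and x must share spatial size (upsample g beforehand if needed).
        alpha = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * alpha  # re-weight skip features before concatenation

gate = AttentionGate(gate_ch=128, skip_ch=64, inter_ch=32)
g = torch.randn(1, 128, 32, 32)   # decoder (gating) features
x = torch.randn(1, 64, 32, 32)    # encoder (skip) features
print(gate(g, x).shape)           # torch.Size([1, 64, 32, 32])
```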

Patient-Specific Autoregressive Models for Organ Motion Prediction in Radiotherapy

Yuxiang Lai, Jike Zhong, Vanessa Su, Xiaofeng Yang

arXiv preprint · May 17, 2025
Radiotherapy often involves a prolonged treatment period. During this time, patients may experience organ motion due to breathing and other physiological factors. Predicting and modeling this motion before treatment is crucial for ensuring precise radiation delivery. However, existing pre-treatment organ motion prediction methods primarily rely on deformation analysis using principal component analysis (PCA), which is highly dependent on registration quality and struggles to capture periodic temporal dynamics for motion modeling. In this paper, we observe that organ motion prediction closely resembles an autoregressive process, a technique widely used in natural language processing (NLP). Autoregressive models predict the next token based on previous inputs, naturally aligning with our objective of predicting future organ motion phases. Building on this insight, we reformulate organ motion prediction as an autoregressive process to better capture patient-specific motion patterns. Specifically, we acquire 4D CT scans for each patient before treatment, with each sequence comprising multiple 3D CT phases. These phases are fed into the autoregressive model to predict future phases based on prior phase motion patterns. We evaluate our method on a real-world test set of 4D CT scans from 50 patients who underwent radiotherapy at our institution and a public dataset containing 4D CT scans from 20 patients (some with multiple scans), totaling over 1,300 3D CT phases. The performance in predicting the motion of the lung and heart surpasses existing benchmarks, demonstrating its effectiveness in capturing motion dynamics from CT images. These results highlight the potential of our method to improve pre-treatment planning in radiotherapy, enabling more precise and adaptive radiation delivery.
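
As an illustration of the autoregressive rollout described here, the toy sketch below predicts future "phases" from observed ones and feeds each prediction back into the sequence. The GRU backbone and flat feature vectors are stand-ins; the paper operates on full 3D CT phases with its own architecture.

```python
import torch
import torch.nn as nn

class PhasePredictor(nn.Module):
    """Toy autoregressive predictor over encoded phase vectors."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, dim)

    def forward(self, phases: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(phases)      # encode the observed phase sequence
        return self.head(out[:, -1])   # predict the next phase from the last state

model = PhasePredictor()
seq = torch.randn(1, 6, 256)           # 6 observed 4D-CT phases (encoded)
rollout = []
for _ in range(4):                      # autoregressively predict 4 future phases
    nxt = model(seq)
    rollout.append(nxt)
    seq = torch.cat([seq, nxt.unsqueeze(1)], dim=1)  # feed prediction back in
print(torch.stack(rollout, dim=1).shape)             # torch.Size([1, 4, 256])
```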

CorBenchX: Large-Scale Chest X-Ray Error Dataset and Vision-Language Model Benchmark for Report Error Correction

Jing Zou, Qingqiu Li, Chenyu Lian, Lihao Liu, Xiaohan Yan, Shujun Wang, Jing Qin

arXiv preprint · May 17, 2025
AI-driven models have shown great promise in detecting errors in radiology reports, yet the field lacks a unified benchmark for rigorous evaluation of error detection and further correction. To address this gap, we introduce CorBenchX, a comprehensive suite for automated error detection and correction in chest X-ray reports, designed to advance AI-assisted quality control in clinical practice. We first synthesize a large-scale dataset of 26,326 chest X-ray error reports by injecting clinically common errors via prompting DeepSeek-R1, with each corrupted report paired with its original text, error type, and a human-readable description. Leveraging this dataset, we benchmark both open- and closed-source vision-language models (e.g., InternVL, Qwen-VL, GPT-4o, o4-mini, and Claude-3.7) for error detection and correction under zero-shot prompting. Among these models, o4-mini achieves the best performance, with 50.6% detection accuracy and correction scores of BLEU 0.853, ROUGE 0.924, BERTScore 0.981, SembScore 0.865, and CheXbert F1 0.954, while still remaining below clinical-level accuracy, highlighting the challenge of precise report correction. To advance the state of the art, we propose a multi-step reinforcement learning (MSRL) framework that optimizes a multi-objective reward combining format compliance, error-type accuracy, and BLEU similarity. We apply MSRL to QwenVL2.5-7B, the top open-source model in our benchmark, achieving improvements of 38.3% in single-error detection precision and 5.2% in single-error correction over the zero-shot baseline.
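
A minimal sketch of a multi-objective reward in the spirit of MSRL is shown below; the weightings, helper inputs, and use of sacrebleu are assumptions, since the abstract only names the three reward components (format compliance, error-type accuracy, and BLEU similarity).

```python
from sacrebleu import sentence_bleu  # pip install sacrebleu

def msrl_style_reward(pred_report: str, gold_report: str,
                      pred_error_type: str, gold_error_type: str,
                      format_ok: bool,
                      w_format: float = 0.2, w_type: float = 0.3,
                      w_bleu: float = 0.5) -> float:
    """Weighted sum of the three reward components named in the abstract.

    The weights here are illustrative guesses, not the paper's values.
    """
    r_format = 1.0 if format_ok else 0.0                        # format compliance
    r_type = 1.0 if pred_error_type == gold_error_type else 0.0 # error-type accuracy
    r_bleu = sentence_bleu(pred_report, [gold_report]).score / 100.0  # BLEU in [0, 1]
    return w_format * r_format + w_type * r_type + w_bleu * r_bleu

print(msrl_style_reward("no acute findings", "no acute findings",
                        "omission", "omission", format_ok=True))
```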

A deep learning-based approach to automated rib fracture detection and CWIS classification.

Marting V, Borren N, van Diepen MR, van Lieshout EMM, Wijffels MME, van Walsum T

PubMed paper · May 16, 2025
Trauma-induced rib fractures are a common injury. The number and characteristics of these fractures influence whether a patient is treated nonoperatively or surgically. Rib fractures are typically diagnosed using CT scans, yet 19.2-26.8% of fractures are still missed during assessment. Another challenge in managing rib fractures is the interobserver variability in their classification. The purpose of this study was to develop and assess an automated method that detects rib fractures in CT scans and classifies them according to the Chest Wall Injury Society (CWIS) classification. A total of 198 CT scans were collected, of which 170 were used for training and internal validation, and 28 for external validation. Fractures and their classifications were manually annotated in each of the scans. A detection and classification network was trained for each of the three components of the CWIS classification. In addition, a rib number labeling network was trained to obtain the rib number of a fracture. Experiments were performed to assess the method's performance. On the internal test set, the method achieved a detection sensitivity of 80%, at a precision of 87%, and an F1-score of 83%, with a mean number of false positives per scan (FPPS) of 1.11. Classification sensitivity varied, with the lowest being 25% for complex fractures and the highest being 97% for posterior fractures. The correct rib number was assigned to 94% of the detected fractures. The custom-trained nnU-Net correctly labeled 95.5% of all ribs and 98.4% of fractured ribs in 30 patients. The detection and classification performance on the external validation dataset was slightly better, with a fracture detection sensitivity of 84%, a precision of 85%, an F1-score of 84%, an FPPS of 0.96, and 95% of fractures assigned the correct rib number. The method developed is able to accurately detect and classify rib fractures in CT scans, although there is room for improvement for the rare and underrepresented classes in the training set.
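
For clarity, the detection metrics quoted above (sensitivity, precision, F1, and false positives per scan) relate to the raw counts as follows; the toy numbers only roughly mirror the internal test set and are not from the paper.

```python
def detection_metrics(tp: int, fp: int, fn: int, n_scans: int):
    """Standard detection metrics from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)                          # a.k.a. recall
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    fpps = fp / n_scans                                   # false positives per scan
    return sensitivity, precision, f1, fpps

# Toy counts chosen to roughly mirror the internal test set figures.
print(detection_metrics(tp=160, fp=24, fn=40, n_scans=30))
```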