
A Dual Radiomic and Dosiomic Filtering Technique for Locoregional Radiation Pneumonitis Prediction in Breast Cancer Patients

Zhenyu Yang, Qian Chen, Rihui Zhang, Manju Liu, Fengqiu Guo, Minjie Yang, Min Tang, Lina Zhou, Chunhao Wang, Minbin Chen, Fang-Fang Yin

arXiv preprint · Aug 4 2025
Purpose: Radiation pneumonitis (RP) is a serious complication of intensity-modulated radiation therapy (IMRT) for breast cancer patients, underscoring the need for precise and explainable predictive models. This study presents an Explainable Dual-Omics Filtering (EDOF) model that integrates spatially localized dosiomic and radiomic features for voxel-level RP prediction. Methods: A retrospective cohort of 72 breast cancer patients treated with IMRT was analyzed, including 28 who developed RP. The EDOF model consists of two components: (1) dosiomic filtering, which extracts local dose intensity and spatial distribution features from planning dose maps, and (2) radiomic filtering, which captures texture-based features from pre-treatment CT scans. These features are jointly analyzed using the Explainable Boosting Machine (EBM), a transparent machine learning model that enables feature-specific risk evaluation. Model performance was assessed using five-fold cross-validation, reporting area under the curve (AUC), sensitivity, and specificity. Feature importance was quantified by mean absolute scores, and Partial Dependence Plots (PDPs) were used to visualize nonlinear relationships between RP risk and dual-omic features. Results: The EDOF model achieved strong predictive performance (AUC = 0.95 ± 0.01; sensitivity = 0.81 ± 0.05). The most influential features included dosiomic Intensity Mean, dosiomic Intensity Mean Absolute Deviation, and radiomic SRLGLE (Short-Run Low Gray-Level Emphasis). PDPs revealed that RP risk increases beyond 5 Gy and rises sharply between 10 and 30 Gy, consistent with clinical dose thresholds. SRLGLE also captured structural heterogeneity linked to RP in specific lung regions. Conclusion: The EDOF framework enables spatially resolved, explainable RP prediction and may support personalized radiation planning to mitigate pulmonary toxicity.
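
The EBM workflow the abstract describes can be sketched with the open-source `interpret` library; the feature names, cohort size, and random data below are illustrative stand-ins, not the paper's actual dosiomic/radiomic pipeline.

```python
# Minimal sketch of an EBM fit plus the global explanation whose shape
# functions PDP-style plots visualize. Data and names are hypothetical.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 3))          # e.g. [dose mean, dose MAD, SRLGLE]
y = rng.integers(0, 2, size=72)       # RP labels (placeholder)

ebm = ExplainableBoostingClassifier(
    feature_names=["dose_intensity_mean", "dose_intensity_mad", "srlgle"]
)
ebm.fit(X, y)

# Per-feature importances (mean absolute scores) and shape functions.
explanation = ebm.explain_global()
print(explanation.data())
```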

Transfer learning based deep architecture for lung cancer classification using CT image with pattern and entropy based feature set.

R N, C M V

PubMed · Aug 2 2025
Early detection of lung cancer, which remains one of the leading causes of death worldwide, is important for improved prognosis, and CT scanning is a key diagnostic modality. Classifying lung cancer from CT scans is challenging because the disease presents highly variable features. A hybrid deep architecture, ILN-TL-DM, is presented in this paper for precise classification of lung cancer from CT scan images. Initially, an Adaptive Gaussian filtering method is applied during pre-processing to eliminate noise and enhance the quality of the CT image. Next, an Improved Attention-based ResU-Net (P-ResU-Net) model is used during segmentation to accurately isolate the lung and tumor areas from the rest of the image. During feature extraction, various features are derived from the segmented images, such as Local Gabor Transitional Pattern (LGTrP), Pyramid of Histograms of Oriented Gradients (PHOG), deep features, and improved entropy-based features, all intended to improve the representation of the tumor areas. Finally, classification exploits a hybrid deep learning architecture integrating an improved LeNet structure with Transfer Learning (ILN-TL) and a DeepMaxout (DM) structure. The two models' outputs are merged with a soft voting strategy, yielding the final classification that separates cancerous from non-cancerous tissue. This strategy substantially improves the accuracy and robustness of lung cancer detection, showing how combining sophisticated neural network structures with feature engineering and ensemble methods can improve medical image classification. The ILN-TL-DM model consistently outperforms conventional methods with greater accuracy (0.962), specificity (0.955) and NPV (0.964).
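
The soft-voting fusion step is simple to illustrate. A minimal sketch, assuming each branch (ILN-TL and DeepMaxout) outputs per-class probabilities; the values below are placeholders, not real model outputs.

```python
# Soft voting: average the two branches' class probabilities, then argmax.
import numpy as np

def soft_vote(prob_iln_tl: np.ndarray, prob_dm: np.ndarray,
              weights=(0.5, 0.5)) -> np.ndarray:
    """Weighted average of per-class probabilities, then argmax."""
    fused = weights[0] * prob_iln_tl + weights[1] * prob_dm
    return fused.argmax(axis=1)

# Two samples, two classes (non-cancerous = 0, cancerous = 1).
p1 = np.array([[0.40, 0.60], [0.70, 0.30]])   # ILN-TL branch
p2 = np.array([[0.20, 0.80], [0.55, 0.45]])   # DeepMaxout branch
print(soft_vote(p1, p2))                       # -> [1 0]
```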

Light Convolutional Neural Network to Detect Chronic Obstructive Pulmonary Disease (COPDxNet): A Multicenter Model Development and External Validation Study.

Rabby ASA, Chaudhary MFA, Saha P, Sthanam V, Nakhmani A, Zhang C, Barr RG, Bon J, Cooper CB, Curtis JL, Hoffman EA, Paine R, Puliyakote AK, Schroeder JD, Sieren JC, Smith BM, Woodruff PG, Reinhardt JM, Bhatt SP, Bodduluri S

PubMed · Aug 1 2025
Approximately 70% of adults with chronic obstructive pulmonary disease (COPD) remain undiagnosed. Opportunistic screening using chest computed tomography (CT) scans, commonly acquired in clinical practice, may be used to improve COPD detection through simple, clinically applicable deep-learning models. We developed a lightweight convolutional neural network (COPDxNet) that utilizes minimally processed chest CT scans to detect COPD. We analyzed 13,043 inspiratory chest CT scans from COPDGene participants (9,675 standard-dose and 3,368 low-dose scans), which we randomly split into training (70%) and test (30%) sets at the participant level so that no individual contributed to both sets. COPD was defined by post-bronchodilator FEV1/FVC < 0.70. We constructed a simple, four-block convolutional model that was trained on pooled data and validated on the held-out standard- and low-dose test sets. External validation was performed using standard-dose CT scans from 2,890 SPIROMICS participants and low-dose CT scans from 7,893 participants in the National Lung Screening Trial (NLST). We evaluated performance using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, Brier scores, and calibration curves. On COPDGene standard-dose CT scans, COPDxNet achieved an AUC of 0.92 (95% CI: 0.91 to 0.93), sensitivity of 80.2%, and specificity of 89.4%. On low-dose scans, AUC was 0.88 (95% CI: 0.86 to 0.90). When the COPDxNet model was applied to external validation datasets, it showed an AUC of 0.92 (95% CI: 0.91 to 0.93) in SPIROMICS and 0.82 (95% CI: 0.81 to 0.83) on NLST. The model was well-calibrated, with Brier scores of 0.11 for standard-dose and 0.13 for low-dose CT scans in COPDGene, 0.12 in SPIROMICS, and 0.17 in NLST. COPDxNet demonstrates high discriminative accuracy and generalizability for detecting COPD on standard- and low-dose chest CT scans, supporting its potential for clinical and screening applications across diverse populations.
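
For intuition, a "simple four-block convolutional model" along the lines the abstract describes might look as follows in PyTorch; the channel widths, 2D convolutions, and input size are assumptions, not the published COPDxNet architecture.

```python
# Four conv blocks (conv -> batch norm -> ReLU -> pool) and a 1-logit head.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class COPDxNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 16), conv_block(16, 32),
            conv_block(32, 64), conv_block(64, 128),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),        # logit for P(COPD)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = COPDxNetSketch()
print(model(torch.randn(2, 1, 256, 256)).shape)  # torch.Size([2, 1])
```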

Structured Spectral Graph Learning for Anomaly Classification in 3D Chest CT Scans

Theo Di Piazza, Carole Lazarus, Olivier Nempont, Loic Boussel

arXiv preprint · Aug 1 2025
With the increasing number of CT scan examinations, there is a need for automated methods such as organ segmentation, anomaly detection, and report generation to assist radiologists in managing their increasing workload. Multi-label classification of 3D CT scans remains a critical yet challenging task due to the complex spatial relationships within volumetric data and the variety of observed anomalies. Existing approaches based on 3D convolutional networks have limited ability to model long-range dependencies, while Vision Transformers suffer from high computational costs and often require extensive pre-training on large-scale datasets from the same domain to achieve competitive performance. In this work, we propose an alternative: a new graph-based approach that models CT scans as structured graphs whose nodes are formed from axial slice triplets and processed through spectral-domain convolution, enhancing multi-label anomaly classification performance. Our method exhibits strong cross-dataset generalization and competitive performance while remaining robust to z-axis translation. An ablation study evaluates the contribution of each proposed component.
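
A toy sketch of the spectral-domain convolution idea: node features are filtered in the eigenbasis of the graph Laplacian. The chain graph (standing in for slice-triplet nodes ordered along the z-axis), feature dimensions, and low-pass filter are illustrative assumptions, not the paper's construction.

```python
# Spectral filtering on a chain graph of slice(-triplet) nodes.
import numpy as np

n, d = 6, 4                                # 6 nodes, 4-dim features
X = np.random.default_rng(0).normal(size=(n, d))

# Chain adjacency: consecutive slices are connected along the z-axis.
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

# Spectral convolution: project onto the Laplacian eigenbasis, scale each
# frequency component (here a simple low-pass filter), project back.
eigvals, U = np.linalg.eigh(L)
g = np.exp(-eigvals)                       # filter response per frequency
X_filtered = U @ np.diag(g) @ U.T @ X
print(X_filtered.shape)                    # (6, 4)
```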

Utility of an artificial intelligence-based lung CT airway model in the quantitative evaluation of large and small airway lesions in patients with chronic obstructive pulmonary disease.

Liu Z, Li J, Li B, Yi G, Pang S, Zhang R, Li P, Yin Z, Zhang J, Lv B, Yan J, Ma J

PubMed · Aug 1 2025
Accurate quantification of the extent of bronchial damage across various airway levels in chronic obstructive pulmonary disease (COPD) remains a challenge. In this study, artificial intelligence (AI) was employed to develop an airway segmentation model to investigate the morphological changes of the central and peripheral airways in COPD patients and the effects of these airway changes on pulmonary function classification and acute COPD exacerbations. Clinical data from a total of 340 patients with COPD and 73 healthy volunteers were collected and compiled. An AI-driven airway segmentation model was constructed using Convolutional Neural Regressor (CNR) and Airway Transfer Network (ATN) algorithms. The efficacy of the model was evaluated through support vector machine (SVM) and random forest regression approaches. The area under the receiver operating characteristic (ROC) curve (AUC) of the SVM in evaluating the COPD airway segmentation model was 0.96, with a sensitivity of 97% and a specificity of 92%; however, the AUC fell to 0.81 when the healthy group was replaced by non-COPD outpatients. Compared with the healthy group, patients with COPD showed fewer segmented airway generations and fewer segmented branches in total, and the diameters of the right main bronchus and bilateral lobar bronchi were smaller and the airway walls thinner (all P < 0.01). However, the diameters of the subsegmental and small-airway bronchi were increased, the airway walls thickened, and the arc lengths shorter (all P < 0.01), especially in patients with severe COPD (all P < 0.05). Correlation and regression analysis showed that FEV1%pred was positively correlated with the diameters and airway wall thickness of the main and lobar airways and with the arc lengths of small-airway bronchi (all P < 0.05). Airway wall thickness of the subsegmental and small airways had the greatest impact on the frequency of COPD exacerbations. The AI-based lung CT airway segmentation model is a non-invasive quantitative tool for assessing COPD. The principal changes in COPD patients are central airways that become narrower with thinner walls, and peripheral airways with shorter arc lengths, larger diameters, and thicker walls; these changes are more pronounced in severe disease. Pulmonary function classification and small- and medium-airway dysfunction are likewise influenced by the diameter, wall thickness, and arc length of the large and small airways. Small airway remodeling is more significant in acute exacerbations of COPD.
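
The SVM-based evaluation can be sketched with scikit-learn; the airway features, labels, and decision threshold below are placeholders, not the study's data, so the printed metrics will not match the reported AUC of 0.96.

```python
# SVM with probability outputs, scored by ROC AUC, sensitivity, specificity.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(413, 5))      # e.g. diameter, wall thickness, arc length...
y = rng.integers(0, 2, size=413)   # COPD vs. control (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

prob = svm.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, prob > 0.5).ravel()
print(f"AUC={roc_auc_score(y_te, prob):.2f} "
      f"sens={tp / (tp + fn):.2f} spec={tn / (tn + fp):.2f}")
```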

Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification

Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz

arXiv preprint · Aug 1 2025
Classification models that provide human-interpretable explanations enhance clinicians' trust and usability in medical image diagnosis. One research focus is the integration and prediction of pathology-related visual attributes used by radiologists alongside the diagnosis, aligning AI decision-making with clinical reasoning. Radiologists use attributes like shape and texture as established diagnostic criteria, and mirroring these in AI decision-making both enhances transparency and enables explicit validation of model outputs. However, the adoption of such models is limited by the scarcity of large-scale medical image datasets annotated with these attributes. To address this challenge, we propose synthesizing attribute-annotated data using a generative model. We enhance a diffusion model with attribute conditioning and train it using only 20 attribute-labeled lung nodule samples from the LIDC-IDRI dataset. Incorporating its generated images into the training of an explainable model boosts performance, increasing attribute prediction accuracy by 13.4% and target prediction accuracy by 1.8% compared to training with only the small real attribute-annotated dataset. This work highlights the potential of synthetic data to overcome dataset limitations, enhancing the applicability of explainable models in medical image analysis.
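
One common way to condition a diffusion model on discrete attributes is to embed each attribute and add it to the timestep embedding that modulates the denoiser. The sketch below illustrates that pattern with a toy MLP denoiser; its dimensions, attribute counts, and architecture are assumptions, not the paper's model.

```python
# Attribute-conditioned denoiser: attribute embeddings are summed into the
# timestep embedding before being fed to the (toy) denoising network.
import torch
import torch.nn as nn

class AttrConditionedDenoiser(nn.Module):
    def __init__(self, n_attrs=8, levels=5, dim=64):
        super().__init__()
        # One embedding table per attribute, each with `levels` ratings.
        self.attr_emb = nn.ModuleList(nn.Embedding(levels, dim) for _ in range(n_attrs))
        self.time_emb = nn.Embedding(1000, dim)
        self.net = nn.Sequential(nn.Linear(dim + 32 * 32, 256),
                                 nn.SiLU(), nn.Linear(256, 32 * 32))

    def forward(self, x_noisy, t, attrs):
        cond = self.time_emb(t)
        for i, emb in enumerate(self.attr_emb):
            cond = cond + emb(attrs[:, i])       # inject attribute condition
        h = torch.cat([x_noisy.flatten(1), cond], dim=1)
        return self.net(h)                        # predicted noise

model = AttrConditionedDenoiser()
x = torch.randn(4, 32 * 32)                       # flattened noisy images
t = torch.randint(0, 1000, (4,))                  # diffusion timesteps
attrs = torch.randint(0, 5, (4, 8))               # attribute ratings
print(model(x, t, attrs).shape)                   # torch.Size([4, 1024])
```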

Establishing a Deep Learning Model That Integrates Pretreatment and Midtreatment Computed Tomography to Predict Treatment Response in Non-Small Cell Lung Cancer.

Chen X, Meng F, Zhang P, Wang L, Yao S, An C, Li H, Zhang D, Li H, Li J, Wang L, Liu Y

PubMed · Aug 1 2025
Patients with identical stages or similar tumor volumes can vary significantly in their responses to radiation therapy (RT) due to individual characteristics, making personalized RT for non-small cell lung cancer (NSCLC) challenging. This study aimed to develop a deep learning model integrating pretreatment and midtreatment computed tomography (CT) to predict treatment response in NSCLC patients. We retrospectively collected data from 168 NSCLC patients across 3 hospitals. Data from Shanghai General Hospital (SGH, 35 patients) and Shanxi Cancer Hospital (SCH, 93 patients) were used for model training and internal validation, while data from Linfen Central Hospital (LCH, 40 patients) were used for external validation. Deep learning, radiomics, and clinical features were extracted to establish a varying-time-interval long short-term memory (LSTM) network for response prediction. Furthermore, we derived a model-deduced personalized dose escalation (DE) for patients predicted to have suboptimal gross tumor volume regression. The area under the receiver operating characteristic curve (AUC) and predicted absolute error were used to evaluate the predicted Response Evaluation Criteria in Solid Tumors (RECIST) classification and the proportion of gross tumor volume residual. DE was calculated as the biological equivalent dose using an α/β ratio of 10 Gy. The model using only pretreatment CT achieved AUCs of 0.762 and 0.687 in internal and external validation, respectively, whereas the model integrating both pretreatment and midtreatment CT achieved AUCs of 0.869 and 0.798, with predicted absolute errors of 0.137 and 0.185, respectively. We performed personalized DE for 29 patients. Their original biological equivalent dose was approximately 72 Gy, within the range of 71.6 Gy to 75 Gy. DE ranged from 77.7 to 120 Gy for the 29 patients, with 17 patients exceeding 100 Gy and 8 patients reaching the model's preset upper limit of 120 Gy. Combining pretreatment and midtreatment CT enhances prediction performance for RT response and offers a promising approach for personalized DE in NSCLC.
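
The biological equivalent dose referenced above follows the standard linear-quadratic formula BED = n·d·(1 + d/(α/β)). The fraction schedule in this worked example is illustrative, though it reproduces the ~72 Gy baseline mentioned in the abstract.

```python
# Biologically effective dose: BED = n * d * (1 + d / (alpha/beta)).
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    total = n_fractions * dose_per_fraction
    return total * (1 + dose_per_fraction / alpha_beta)

# 30 fractions of 2 Gy -> 60 Gy physical dose -> BED of 72 Gy with
# alpha/beta = 10 Gy, matching the ~72 Gy baseline reported above.
print(bed(30, 2.0))   # 72.0
```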

M4CXR: Exploring Multitask Potentials of Multimodal Large Language Models for Chest X-Ray Interpretation.

Park J, Kim S, Yoon B, Hyun J, Choi K

PubMed · Aug 1 2025
The rapid evolution of artificial intelligence, especially in large language models (LLMs), has significantly impacted various domains, including healthcare. In chest X-ray (CXR) analysis, previous studies have employed LLMs, but with limitations: either underutilizing the LLMs' capability for multitask learning or lacking clinical accuracy. This article presents M4CXR, a multimodal LLM designed to enhance CXR interpretation. The model is trained on a visual instruction-following dataset that integrates various task-specific datasets in a conversational format. As a result, the model supports multiple tasks such as medical report generation (MRG), visual grounding, and visual question answering (VQA). M4CXR achieves state-of-the-art clinical accuracy in MRG by employing a chain-of-thought (CoT) prompting strategy, in which it identifies findings in CXR images and subsequently generates corresponding reports. The model is adaptable to various MRG scenarios depending on the available inputs, such as single-image, multi-image, and multi-study contexts. In addition to MRG, M4CXR performs visual grounding at a level comparable to specialized models and demonstrates outstanding performance in VQA. Both quantitative and qualitative assessments reveal M4CXR's versatility in MRG, visual grounding, and VQA, while consistently maintaining clinical accuracy.
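
The CoT report-generation strategy can be sketched as a two-stage prompt: first elicit findings, then condition the report on them. The `generate` function below is a hypothetical stand-in for any multimodal LLM call, not M4CXR's actual interface.

```python
# Two-stage chain-of-thought prompting: findings first, then the report.
def generate(prompt: str, image) -> str:
    """Hypothetical multimodal LLM call; replace with a real client."""
    raise NotImplementedError

def cot_report(image) -> str:
    findings = generate(
        "List the radiographic findings visible in this chest X-ray.", image
    )
    report = generate(
        f"Findings identified: {findings}\n"
        "Using these findings, write a structured radiology report.", image
    )
    return report
```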

CX-Mind: A Pioneering Multimodal Large Language Model for Interleaved Reasoning in Chest X-ray via Curriculum-Guided Reinforcement Learning

Wenjie Li, Yujie Zhang, Haoran Sun, Yueqi Li, Fanrui Zhang, Mengzhe Xu, Victoria Borja Clausich, Sade Mellin, Renhao Yang, Chenrun Wang, Jethro Zih-Shuo Wang, Shiyi Yao, Gen Li, Yidong Xu, Hanyu Wang, Yilin Huang, Angela Lin Wang, Chen Shi, Yin Zhang, Jianan Guo, Luqi Yang, Renxuan Li, Yang Xu, Jiawei Liu, Yao Zhang, Lei Liu, Carlos Gutiérrez SanRomán, Lei Wang

arXiv preprint · Jul 31 2025
Chest X-ray (CXR) imaging is one of the most widely used diagnostic modalities in clinical practice, encompassing a broad spectrum of diagnostic tasks. Recent advancements have seen the extensive application of reasoning-based multimodal large language models (MLLMs) in medical imaging to enhance diagnostic efficiency and interpretability. However, existing multimodal models predominantly rely on "one-time" diagnostic approaches, lacking verifiable supervision of the reasoning process. This leads to challenges in multi-task CXR diagnosis, including lengthy reasoning, sparse rewards, and frequent hallucinations. To address these issues, we propose CX-Mind, the first generative model to achieve interleaved "think-answer" reasoning for CXR tasks, driven by curriculum-based reinforcement learning and verifiable process rewards (CuRL-VPR). Specifically, we constructed an instruction-tuning dataset, CX-Set, comprising 708,473 images and 2,619,148 samples, and generated 42,828 high-quality interleaved reasoning data points supervised by clinical reports. Optimization was conducted in two stages under the Group Relative Policy Optimization framework: initially stabilizing basic reasoning with closed-domain tasks, followed by transfer to open-domain diagnostics, incorporating rule-based conditional process rewards to bypass the need for pretrained reward models. Extensive experimental results demonstrate that CX-Mind significantly outperforms existing medical and general-domain MLLMs in visual understanding, text generation, and spatiotemporal alignment, achieving an average performance improvement of 25.1% over comparable CXR-specific models. On a real-world clinical dataset (Rui-CXR), CX-Mind achieves a mean recall@1 across 14 diseases that substantially surpasses the second-best results, with multi-center expert evaluations further confirming its clinical utility across multiple dimensions.
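
At the core of Group Relative Policy Optimization is a critic-free advantage: rewards for a group of sampled responses to the same prompt are standardized within the group. A minimal sketch, with illustrative rule-based scores in place of the paper's process rewards:

```python
# Group-relative advantage: standardize rewards within one prompt's group,
# so no learned value/critic model is needed.
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standardize rewards within one prompt's group of sampled responses."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

rewards = np.array([1.0, 0.0, 0.5, 1.0])    # 4 sampled answers, rule-based scores
print(group_relative_advantages(rewards))    # positive for above-average answers
```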

Effect of spatial resolution on the diagnostic performance of machine-learning radiomics model in lung adenocarcinoma: comparisons between normal- and high-spatial-resolution imaging for predicting invasiveness.

Yanagawa M, Nagatani Y, Hata A, Sumikawa H, Moriya H, Iwano S, Tsuchiya N, Iwasawa T, Ohno Y, Tomiyama N

PubMed · Jul 31 2025
To construct two machine-learning radiomics (MLR) models for invasive adenocarcinoma (IVA) prediction using normal-spatial-resolution (NSR) and high-spatial-resolution (HSR) training cohorts, and to validate the models (model-NSR and -HSR) in a separate test cohort while comparing independent radiologists' (R1, R2) performance with and without model-HSR. In this retrospective multicenter study, all CT images were reconstructed using NSR data (512 matrix, 0.5-mm thickness) and HSR data (2048 matrix, 0.25-mm thickness). Nodules were divided into training (n = 61 non-IVA, n = 165 IVA) and test sets (n = 36 non-IVA, n = 203 IVA). Two MLR models were developed using random forest, with 18 significant factors selected for the NSR model and 19 for the HSR model from 172 radiomics features. Area under the receiver operating characteristic curve (AUC) was analyzed using DeLong's test in the test set. Accuracy (acc), sensitivity (sen), and specificity (spc) of R1 and R2 with and without model-HSR were compared using the McNemar test. 437 patients (70 ± 9 years, 203 men) had 465 nodules (n = 368, IVA). Model-HSR AUCs were significantly higher than model-NSR in the training (0.839 vs. 0.723) and test sets (0.863 vs. 0.718) (p < 0.05). R1's accuracy (87.2%) and sensitivity (93.1%) with model-HSR were significantly higher than without (77.0% and 79.3%) (p < 0.0001). R2's accuracy (83.7%) and sensitivity (86.7%) with model-HSR were equal to or higher than without (83.7% and 85.7%, respectively), though the differences were not significant (p > 0.50). Specificity of R1 (52.8%) and R2 (66.7%) with model-HSR was lower than without (63.9% and 72.2%, respectively), though not significantly (p > 0.21). The HSR-based MLR model significantly improved IVA diagnostic performance compared with NSR, supporting radiologists without compromising accuracy or sensitivity. However, this benefit came at the cost of reduced specificity, potentially increasing false positives, which may lead to unnecessary examinations or overtreatment in clinical settings.
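
The random-forest modeling step can be sketched with scikit-learn. The feature matrix below is a random placeholder (19 columns echoing the HSR model's significant factors), so the printed AUC will hover near chance rather than the paper's 0.863.

```python
# Random-forest radiomics classifier evaluated by test-set AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(226, 19))    # 61 non-IVA + 165 IVA training nodules
y_train = np.r_[np.zeros(61), np.ones(165)]
X_test = rng.normal(size=(239, 19))     # 36 non-IVA + 203 IVA test nodules
y_test = np.r_[np.zeros(36), np.ones(203)]

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])
print(f"test AUC = {auc:.3f}")          # ~0.5 here; the paper reports 0.863
```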
