
Comparing Artificial Intelligence and Traditional Regression Models in Lung Cancer Risk Prediction Using A Systematic Review and Meta-Analysis.

Leonard S, Patel MA, Zhou Z, Le H, Mondal P, Adams SJ

PubMed · Jun 1 2025
Accurately identifying individuals who are at high risk of lung cancer is critical to optimize lung cancer screening with low-dose CT (LDCT). We sought to compare the performance of traditional regression models and artificial intelligence (AI)-based models in predicting future lung cancer risk. A systematic review and meta-analysis were conducted with reporting according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched MEDLINE, Embase, Scopus, and the Cumulative Index to Nursing and Allied Health Literature databases for studies reporting the performance of AI or traditional regression models for predicting lung cancer risk. Two researchers screened articles, and a third researcher resolved conflicts. Model characteristics and predictive performance metrics were extracted. The quality of studies was assessed using the Prediction model Risk of Bias Assessment Tool. A meta-analysis assessed the discrimination performance of models, based on area under the receiver operating characteristic curve (AUC). One hundred forty studies met inclusion criteria and included 185 traditional and 64 AI-based models. Of these, 16 AI models and 65 traditional models had been externally validated. The pooled AUC of external validations of AI models was 0.82 (95% confidence interval [CI], 0.80-0.85), and the pooled AUC for traditional regression models was 0.73 (95% CI, 0.72-0.74). In a subgroup analysis, AI models that included LDCT had a pooled AUC of 0.85 (95% CI, 0.82-0.88). Overall risk of bias was high for both AI and traditional models. AI-based models, particularly those using imaging data, show promise for improving lung cancer risk prediction over traditional regression models. Future research should focus on prospective validation of AI models and direct comparisons with traditional methods in diverse populations.
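The pooled AUCs above come from a meta-analysis. As an illustration, a fixed-effect inverse-variance pooling of study AUCs can be sketched as follows; the review's actual pooling model is not specified here, and the study values below are hypothetical:

```python
import math

def pool_auc(aucs_with_ci):
    """Inverse-variance (fixed-effect) pooling of AUCs.

    Each entry is (auc, ci_low, ci_high); the standard error is
    recovered from the 95% CI width (CI ~ auc +/- 1.96 * SE).
    """
    weights, weighted = [], []
    for auc, lo, hi in aucs_with_ci:
        se = (hi - lo) / (2 * 1.96)
        w = 1.0 / (se * se)          # weight = 1 / variance
        weights.append(w)
        weighted.append(w * auc)
    pooled = sum(weighted) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-study AUCs with 95% CIs
studies = [(0.80, 0.76, 0.84), (0.84, 0.81, 0.87), (0.82, 0.77, 0.87)]
pooled, ci = pool_auc(studies)
```

A random-effects model (e.g., DerSimonian-Laird) would additionally estimate between-study variance, which matters when study AUCs are heterogeneous, as is likely across 140 studies.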

ScreenDx, an artificial intelligence-based algorithm for the incidental detection of pulmonary fibrosis.

Touloumes N, Gagianas G, Bradley J, Muelly M, Kalra A, Reicher J

PubMed · Jun 1 2025
Nonspecific symptoms and variability in radiographic reporting patterns contribute to delays in the diagnosis of pulmonary fibrosis. An attractive solution is the use of machine-learning algorithms to screen for radiographic features suggestive of pulmonary fibrosis. Thus, we developed and validated a machine learning classifier algorithm (ScreenDx) to screen computed tomography imaging and identify incidental cases of pulmonary fibrosis. ScreenDx is a deep learning convolutional neural network that was developed from a multi-source dataset (cohort A) of 3,658 normal and abnormal CTs, including CTs from patients with COPD, emphysema, and community-acquired pneumonia. Cohort B, a US-based cohort (n = 381), was used for tuning the algorithm, and external validation was performed on cohort C (n = 683), a separate international dataset. At the optimal threshold, the sensitivity and specificity for detection of pulmonary fibrosis in cohort B were 0.91 (95% CI, 88-94%) and 0.95 (95% CI, 93-97%), respectively, with an AUC of 0.98. In the external validation dataset (cohort C), the sensitivity and specificity were 1.0 (95% CI, 99.9-100.0%) and 0.98 (95% CI, 97.9-99.6%), respectively, with an AUC of 0.997. There were no significant differences in the ability of ScreenDx to identify pulmonary fibrosis across CT manufacturers (Philips, Toshiba, GE Healthcare, or Siemens) or slice thicknesses (2 mm vs 2-4 mm vs 4 mm). Regardless of CT manufacturer or slice thickness, ScreenDx demonstrated high performance across two multi-site datasets for identifying incidental cases of pulmonary fibrosis. This suggests that the algorithm may be generalizable across patient populations and healthcare systems.
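The sensitivity and specificity reported for ScreenDx at its operating threshold are standard confusion-matrix quantities. A minimal sketch with toy scores and labels (not ScreenDx outputs):

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of a binary classifier at a threshold.

    labels: 1 = fibrosis present, 0 = absent; scores: model probabilities.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: three fibrosis cases, three controls
scores = [0.95, 0.80, 0.35, 0.60, 0.20, 0.10]
labels = [1, 1, 1, 0, 0, 0]
sens, spec = sens_spec(scores, labels, threshold=0.5)
```

Sweeping the threshold and plotting sensitivity against 1 - specificity traces the ROC curve whose area is the AUC quoted above.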

Development and External Validation of a Detection Model to Retrospectively Identify Patients With Acute Respiratory Distress Syndrome.

Levy E, Claar D, Co I, Fuchs BD, Ginestra J, Kohn R, McSparron JI, Patel B, Weissman GE, Kerlin MP, Sjoding MW

PubMed · Jun 1 2025
The aim of this study was to develop and externally validate a machine-learning model that retrospectively identifies patients with acute respiratory distress syndrome (ARDS) using electronic health record (EHR) data. In this retrospective cohort study, ARDS was identified via physician adjudication in three cohorts of patients with hypoxemic respiratory failure (training, internal validation, and external validation). Machine-learning models were trained to classify ARDS using vital signs, respiratory support, laboratory data, medications, chest radiology reports, and clinical notes. The best-performing models were internally and externally validated using the area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve, integrated calibration index (ICI), sensitivity, specificity, positive predictive value (PPV), and ARDS timing. Patients with hypoxemic respiratory failure undergoing mechanical ventilation within two distinct health systems were included; there were no interventions. There were 1,845 patients in the training cohort, 556 in the internal validation cohort, and 199 in the external validation cohort. ARDS prevalence was 19%, 17%, and 31%, respectively. Regularized logistic regression models analyzing structured data (EHR model) and structured data plus radiology reports (EHR-radiology model) had the best performance. During internal and external validation, the EHR-radiology model had an AUROC of 0.91 (95% CI, 0.88-0.93) and 0.88 (95% CI, 0.87-0.93), respectively. Externally, the ICI was 0.13 (95% CI, 0.08-0.18). At a specified model threshold, sensitivity and specificity were 80% (95% CI, 75%-98%), PPV was 64% (95% CI, 58%-71%), and the model identified patients a median of 2.2 hours (interquartile range, 0.2-18.6) after they met Berlin ARDS criteria. Machine-learning models analyzing EHR data can retrospectively identify patients with ARDS across different institutions.
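The PPV reported at the model threshold depends on ARDS prevalence, which differed across cohorts (17% internal vs 31% external). A small sketch of that dependence via Bayes' rule, using the reported 80% sensitivity and specificity as inputs:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from sensitivity, specificity, and
    prevalence (Bayes' rule): P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same operating point, two prevalences: PPV rises with prevalence
low = ppv(0.80, 0.80, 0.17)   # internal-validation prevalence
high = ppv(0.80, 0.80, 0.31)  # external-validation prevalence
```

This is why a fixed threshold can yield quite different PPVs when a model is moved between institutions with different case mixes.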

Revolutionizing Radiology Workflow with Factual and Efficient CXR Report Generation

Pimchanok Sukjai, Apiradee Boonmee

arXiv preprint · Jun 1 2025
The escalating demand for medical image interpretation underscores the critical need for advanced artificial intelligence solutions to enhance the efficiency and accuracy of radiological diagnoses. This paper introduces CXR-PathFinder, a novel Large Language Model (LLM)-centric foundation model specifically engineered for automated chest X-ray (CXR) report generation. We propose a unique training paradigm, Clinician-Guided Adversarial Fine-Tuning (CGAFT), which meticulously integrates expert clinical feedback into an adversarial learning framework to mitigate factual inconsistencies and improve diagnostic precision. Complementing this, our Knowledge Graph Augmentation Module (KGAM) acts as an inference-time safeguard, dynamically verifying generated medical statements against authoritative knowledge bases to minimize hallucinations and ensure standardized terminology. Leveraging a comprehensive dataset of millions of paired CXR images and expert reports, our experiments demonstrate that CXR-PathFinder significantly outperforms existing state-of-the-art medical vision-language models across various quantitative metrics, including clinical accuracy (Macro F1 (14): 46.5, Micro F1 (14): 59.5). Furthermore, blinded human evaluation by board-certified radiologists confirms CXR-PathFinder's superior clinical utility, completeness, and accuracy, establishing its potential as a reliable and efficient aid for radiological practice. The developed method effectively balances high diagnostic fidelity with computational efficiency, providing a robust solution for automated medical report generation.
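The Macro F1 and Micro F1 metrics quoted above aggregate per-finding scores differently: macro averages each label's F1 (so rare findings weigh equally), while micro pools the raw counts. A toy illustration with hypothetical per-finding counts, not values from the paper:

```python
def f1(tp, fp, fn):
    """F1 score from true-positive, false-positive, false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_micro_f1(counts):
    """counts: per-label (tp, fp, fn) tuples, e.g. one per CXR finding."""
    macro = sum(f1(*c) for c in counts) / len(counts)
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    micro = f1(tp, fp, fn)
    return macro, micro

# Three findings; the rare third label drags macro F1 below micro F1
counts = [(90, 10, 10), (80, 20, 20), (2, 8, 8)]
macro, micro = macro_micro_f1(counts)
```

The gap between the paper's Macro F1 (46.5) and Micro F1 (59.5) is consistent with this pattern: rarer findings are harder, and macro averaging exposes that.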

A Novel Theranostic Strategy for Malignant Pulmonary Nodules by Targeted CEACAM6 with <sup>89</sup>Zr/<sup>131</sup>I-Labeled Tinurilimab.

Chen C, Zhu K, Wang J, Pan D, Wang X, Xu Y, Yan J, Wang L, Yang M

PubMed · Jun 1 2025
Lung adenocarcinoma (LUAD) constitutes a major cause of cancer-related fatalities worldwide. Early identification of malignant pulmonary nodules is the most effective approach to reducing the mortality of LUAD. Despite the wide application of low-dose computed tomography (LDCT) in the early screening of LUAD, identifying malignant pulmonary nodules with it remains a challenge. In this study, CEACAM6 (also known as CD66c) is investigated as a potential biomarker for differentiating malignant lung nodules. The CEACAM6-targeting monoclonal antibody (mAb, tinurilimab) is then radiolabeled with <sup>89</sup>Zr and <sup>131</sup>I for theranostic applications. On the diagnostic side, machine learning confirms CEACAM6 as a specific extracellular marker for discriminating LUAD from benign nodules. The <sup>89</sup>Zr-labeled mAb shows highly specific uptake in CEACAM6-positive LUAD on positron emission tomography (PET) imaging, and its ability to distinguish malignant pulmonary nodules is significantly higher than that of <sup>18</sup>F-fluorodeoxyglucose (FDG) on positron emission tomography/magnetic resonance (PET/MR) imaging. The <sup>131</sup>I-labeled mAb, serving as the therapeutic arm, significantly suppressed tumor growth after a single treatment. These results prove that <sup>89</sup>Zr/<sup>131</sup>I-labeled tinurilimab facilitates both the differentiation of malignant pulmonary nodules and radioimmunotherapy of LUAD in preclinical models. Further clinical evaluation and translation of this CEACAM6-targeted theranostic may significantly aid the diagnosis and treatment of LUAD.

Implementation costs and cost-effectiveness of ultraportable chest X-ray with artificial intelligence in active case finding for tuberculosis in Nigeria.

Garg T, John S, Abdulkarim S, Ahmed AD, Kirubi B, Rahman MT, Ubochioma E, Creswell J

PubMed · Jun 1 2025
Availability of ultraportable chest x-ray (CXR) and advancements in artificial intelligence (AI)-enabled CXR interpretation are promising developments in tuberculosis (TB) active case finding (ACF), but costing and cost-effectiveness analyses are limited. We provide implementation cost and cost-effectiveness estimates of different screening algorithms using symptoms, CXR, and AI in Nigeria. People 15 years and older were screened for TB symptoms and offered a CXR with AI-enabled interpretation using qXR v3 (Qure.ai) at lung health camps. Sputum samples were tested on Xpert MTB/RIF for individuals reporting symptoms or with qXR abnormality scores ≥0.30. We conducted a retrospective costing using a combination of top-down and bottom-up approaches while utilizing itemized expense data from a health system perspective. We estimated costs in five screening scenarios: abnormality score ≥0.30 and ≥0.50; cough ≥2 weeks; any symptom; abnormality score ≥0.30 or any symptom. We calculated total implementation costs and cost per bacteriologically confirmed case detected, and assessed cost-effectiveness using the incremental cost-effectiveness ratio (ICER), defined as additional cost per additional case. Overall, 3205 people with presumptive TB were identified, 1021 were tested, and 85 people with bacteriologically confirmed TB were detected. Abnormality score ≥0.30 or any symptom (US$65,704) had the highest costs, while cough ≥2 weeks had the lowest (US$40,740). The cost per case was US$1,198 for cough ≥2 weeks and lowest for any symptom (US$635). Compared to the baseline strategy of cough ≥2 weeks, the ICER for any symptom was US$191 per additional case detected and US$2,096 for the abnormality score ≥0.30 or any symptom algorithm. Using CXR and AI had a lower cost per case detected than the any-symptom screening criterion when asymptomatic TB exceeded 30% of all bacteriologically confirmed TB detected. Compared to traditional symptom screening, using CXR and AI in combination with symptoms detects more cases at a lower cost per case detected and is cost-effective. TB programs should explore adoption of CXR and AI for screening in ACF.
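The cost-per-case and ICER figures follow from simple ratios. A sketch with hypothetical per-algorithm case counts and totals chosen to be roughly consistent with the reported figures (the study does not break out these counts above):

```python
def icer(delta_cost, delta_cases):
    """Incremental cost-effectiveness ratio: additional cost per
    additional case detected, relative to a baseline strategy."""
    return delta_cost / delta_cases

def cost_per_case(total_cost, cases):
    return total_cost / cases

# Assumed totals (illustrative only): baseline = cough >= 2 weeks,
# expanded = any-symptom screening
baseline = {"cost": 40740, "cases": 34}
expanded = {"cost": 48935, "cases": 77}

extra = icer(expanded["cost"] - baseline["cost"],
             expanded["cases"] - baseline["cases"])
per_case = cost_per_case(expanded["cost"], expanded["cases"])
```

With these assumed inputs the ICER lands near the reported US$191 per additional case; a real analysis would also discount costs and test sensitivity to unit-cost assumptions.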

Comparing efficiency of an attention-based deep learning network with contemporary radiological workflow for pulmonary embolism detection on CTPA: A retrospective study.

Singh G, Singh A, Kainth T, Suman S, Sakla N, Partyka L, Phatak T, Prasanna P

PubMed · Jun 1 2025
Pulmonary embolism (PE) is the third most fatal cardiovascular disease in the United States. Currently, computed tomography pulmonary angiography (CTPA) serves as the diagnostic gold standard for detecting PE. However, its efficacy is limited by factors such as contrast bolus timing, physician-dependent diagnostic accuracy, and the time taken for scan interpretation. To address these limitations, we propose an AI-based PE triaging model (AID-PE) designed to predict the presence and key characteristics of PE on CTPA. This model aims to enhance diagnostic accuracy, efficiency, and the speed of PE identification. We trained AID-PE on the RSNA-STR PE CT (RSPECT) dataset (N = 7279) and subsequently tested it on an in-house dataset (n = 106). We evaluated efficiency in a separate dataset (D<sub>4</sub>, n = 200) by comparing the time from scan to report in the standard PE detection workflow versus AID-PE. A comparative analysis showed that AID-PE had an AUC/accuracy of 0.95/0.88. In contrast, a convolutional neural network (CNN) classifier and a CNN-long short-term memory (LSTM) network without an attention module had an AUC/accuracy of 0.5/0.74 and 0.88/0.65, respectively. Our model achieved AUCs of 0.82 and 0.95 for detecting PE on the validation dataset and the independent test set, respectively. On D<sub>4</sub>, AID-PE took an average of 1.32 s to screen for PE across 148 CTPA studies, compared to an average of 40 min in the contemporary workflow. AID-PE outperformed a baseline CNN classifier and a single-stage CNN-LSTM network without an attention module. Additionally, its efficiency is comparable to the current radiological workflow.

<i>Radiology: Cardiothoracic Imaging</i> Highlights 2024.

Catania R, Mukherjee A, Chamberlin JH, Calle F, Philomina P, Mastrodicasa D, Allen BD, Suchá D, Abbara S, Hanneman K

PubMed · Jun 1 2025
<i>Radiology: Cardiothoracic Imaging</i> publishes research, technical developments, and reviews related to cardiac, vascular, and thoracic imaging. The current review article, led by the <i>Radiology: Cardiothoracic Imaging</i> trainee editorial board, highlights the most impactful articles published in the journal between November 2023 and October 2024. The review encompasses various aspects of cardiac, vascular, and thoracic imaging related to coronary artery disease, cardiac MRI, valvular imaging, congenital and inherited heart diseases, thoracic imaging, lung cancer, artificial intelligence, and health services research. Key highlights include the role of CT fractional flow reserve analysis to guide patient management, the role of MRI elastography in identifying age-related myocardial stiffness associated with increased risk of heart failure, review of MRI in patients with cardiovascular implantable electronic devices and fractured or abandoned leads, imaging of mitral annular disjunction, specificity of the Lung Imaging Reporting and Data System version 2022 for detecting malignant airway nodules, and a radiomics-based reinforcement learning model to analyze serial low-dose CT scans in lung cancer screening. Ongoing research and future directions include artificial intelligence tools for applications such as plaque quantification using coronary CT angiography and growing understanding of the interconnectedness of environmental sustainability and cardiovascular imaging. <b>Keywords:</b> CT, MRI, CT-Coronary Angiography, Cardiac, Pulmonary, Coronary Arteries, Heart, Lung, Mediastinum, Mitral Valve, Aortic Valve, Artificial Intelligence © RSNA, 2025.

FeaInfNet: Diagnosis of Medical Images With Feature-Driven Inference and Visual Explanations.

Peng Y, He L, Hu D, Liu Y, Yang L, Shang S

PubMed · Jun 1 2025
Interpretable deep-learning models have received widespread attention in the field of image recognition. However, owing to the coexistence of medical-image categories and the challenge of identifying subtle decision-making regions, many proposed interpretable deep-learning models suffer from insufficient accuracy and interpretability in diagnosing images of medical diseases. Therefore, this study proposed a feature-driven inference network (FeaInfNet) that incorporates a feature-based network reasoning structure. Specifically, local feature masks (LFMs) were developed to extract feature vectors, providing global information for these vectors and enhancing the expressive ability of FeaInfNet. FeaInfNet then compares the similarity of the feature vector corresponding to each subregion image patch with the disease and normal prototype templates that may appear in the region, and combines the comparisons across subregions when making the final diagnosis. This strategy simulates the diagnostic process of doctors, making the model interpretable during reasoning while avoiding misleading results caused by normal areas participating in the reasoning. Finally, we proposed adaptive dynamic masks (Adaptive-DM) to interpret feature vectors and prototypes as human-understandable image patches, providing an accurate visual interpretation. Extensive experiments on multiple publicly available medical datasets, including RSNA, iChallenge-PM, COVID-19, ChinaCXRSet, MontgomerySet, and CBIS-DDSM, demonstrated that our method achieves state-of-the-art classification accuracy and interpretability compared with baseline methods in the diagnosis of medical images. Additional ablation studies were performed to verify the effectiveness of each component.
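The patch-to-prototype comparison described above can be caricatured in a few lines. This is a simplified sketch using cosine similarity and a max over patches, not the authors' implementation; the vectors and prototypes below are toy placeholders:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def prototype_diagnosis(patch_vectors, disease_protos, normal_protos):
    """Score each patch against its best-matching disease and normal
    prototypes, then keep the strongest disease evidence across patches."""
    scores = []
    for p in patch_vectors:
        d = max(cosine(p, q) for q in disease_protos)
        n = max(cosine(p, q) for q in normal_protos)
        scores.append(d - n)
    return max(scores)  # > 0 suggests disease evidence in some region

# Toy 2-D features: one "normal-looking" patch, one "disease-looking" patch
patches = [[0.1, 0.9], [0.8, 0.2]]
disease = [[1.0, 0.0]]
normal = [[0.0, 1.0]]
score = prototype_diagnosis(patches, disease, normal)
```

Keeping the per-patch scores (rather than only the max) is what lets prototype-style models point at the image region that drove the decision.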

IM-LTS: An Integrated Model for Lung Tumor Segmentation using Neural Networks and IoMT.

J J, Haw SC, Palanichamy N, Ng KW, Thillaigovindhan SK

PubMed · Jun 1 2025
In recent years, Internet of Medical Things (IoMT) and deep learning (DL) techniques have become widely used in medical data processing and decision-making. Lung tumors, among the most dangerous diseases, require early diagnosis with a high precision rate. To that end, this work develops an Integrated Model (IM-LTS) for lung tumor segmentation using neural networks (NN) and the IoMT. The model integrates two architectures, MobileNetV2 and U-NET, for classifying the input lung data. The input CT lung images are pre-processed using Z-score normalization. The semantic features of lung images are extracted based on texture, intensity, and shape to provide information to the training network.•In this work, the transfer learning technique is incorporated, and the pre-trained NN is used as an encoder for the U-NET model for segmentation.•Furthermore, a support vector machine is used to classify input lung data as benign or malignant.•The results are measured using metrics such as specificity, sensitivity, precision, accuracy, and F-score on data from benchmark datasets. Compared to existing lung tumor segmentation and classification models, the proposed model provides better results and evidence for earlier disease diagnosis.
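The Z-score normalization used in pre-processing standardizes each image to zero mean and unit variance before it reaches the network. A minimal sketch on toy Hounsfield-unit values:

```python
import statistics

def z_score_normalize(pixels):
    """Z-score normalization: subtract the mean, divide by the
    (population) standard deviation, yielding zero mean and unit variance."""
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels)
    return [(p - mu) / sigma for p in pixels]

# Toy Hounsfield-unit values standing in for a CT slice
hu = [-1000.0, -500.0, 0.0, 40.0, 400.0]
z = z_score_normalize(hu)
```

In practice the statistics are computed per volume (or from training-set statistics) and applied identically at inference time, so the encoder always sees inputs on the same scale.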