Evaluation of the impact of artificial intelligence-assisted image interpretation on the diagnostic performance of clinicians in identifying endotracheal tube position on plain chest X-ray: a multi-case multi-reader study.

Novak A, Ather S, Morgado ATE, Maskell G, Cowell GW, Black D, Shah A, Bowness JS, Shadmaan A, Bloomfield C, Oke JL, Johnson H, Beggs M, Gleeson F, Aylward P, Hafeez A, Elramlawy M, Lam K, Griffiths B, Harford M, Aaron L, Seeley C, Luney M, Kirkland J, Wing L, Qamhawi Z, Mandal I, Millard T, Chimbani M, Sharazi A, Bryant E, Haithwaite W, Medonica A

PubMed · Jul 28 2025
Incorrectly placed endotracheal tubes (ETTs) can lead to serious clinical harm. Studies have demonstrated the potential for artificial intelligence (AI)-led algorithms to detect ETT placement on chest X-ray (CXR) images; however, their effect on clinician accuracy remains unexplored. This study measured the impact of an AI-assisted ETT detection algorithm on the ability of clinical staff to correctly identify ETT misplacement on CXR images. Four hundred CXRs of intubated adult patients were retrospectively sourced from the John Radcliffe Hospital (Oxford) and two other UK NHS hospitals. Images were de-identified and selected from a range of clinical settings, including the intensive care unit (ICU) and emergency department (ED). Each image was independently reported by a panel of thoracic radiologists, whose consensus classification of ETT placement (correct, too low [distal], or too high [proximal]) served as the reference standard for the study. Correct ETT position was defined as the tip located 3-7 cm above the carina, in line with established guidelines. Eighteen clinical readers of varying seniority from six clinical specialties were recruited across four NHS hospitals. Readers viewed the dataset using an online platform and recorded a blinded classification of ETT position for each image. After a four-week washout period, this was repeated with assistance from an AI-assisted image interpretation tool. Reader accuracy, reported confidence, and timings were measured during each study phase. In total, 14,400 image interpretations were undertaken. Pooled accuracy for tube placement classification improved from 73.6% to 77.4% (p = 0.002). Accuracy for identification of critically misplaced tubes increased from 79.3% to 89.0% (p = 0.001). Reader confidence improved with AI assistance, with no change in mean interpretation time (36 s per image). Use of assistive AI technology improved accuracy and confidence in interpreting ETT placement on CXR, especially for identification of critically misplaced tubes. AI assistance may provide a useful adjunct to support clinicians in identifying misplaced ETTs on CXR.
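The abstract does not name the test behind its p-values; the sketch below shows one plausible way to tabulate pooled and per-reader accuracy from the 14,400 interpretations, using a paired Wilcoxon signed-rank test across readers. The `reads` table, its column names, and the choice of test are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon  # assumed test; the study does not name one

rng = np.random.default_rng(0)
# Hypothetical long-format data: 18 readers x 400 images x 2 phases = 14,400 rows.
reads = pd.DataFrame({
    "reader": np.repeat(np.arange(18), 800),
    "phase": np.tile(np.repeat(["unaided", "ai"], 400), 18),
    "correct": rng.integers(0, 2, 14400),
})

pooled = reads.groupby("phase")["correct"].mean()            # pooled accuracy per phase
per_reader = reads.groupby(["reader", "phase"])["correct"].mean().unstack()
stat, p = wilcoxon(per_reader["ai"], per_reader["unaided"])  # paired across 18 readers
print(pooled.round(3).to_dict(), f"p = {p:.3f}")
```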

The evolving role of multimodal imaging, artificial intelligence and radiomics in the radiologic assessment of immune-related adverse events.

Das JP, Ma HY, DeJong D, Prendergast C, Baniasadi A, Braumuller B, Giarratana A, Khonji S, Paily J, Shobeiri P, Yeh R, Dercle L, Capaccione KM

PubMed · Jul 28 2025
Immunotherapy, in particular checkpoint blockade, has revolutionized the treatment of many advanced cancers. Imaging plays a critical role in assessing both treatment response and the development of immune toxicities. Both conventional imaging and molecular imaging techniques can be used to evaluate multisystemic immune-related adverse events (irAEs), including thoracic, abdominal and neurologic irAEs. As artificial intelligence (AI) proliferates in medical imaging, radiologic assessment of irAEs will become more efficient, improving the diagnosis, prognosis, and management of patients affected by immune-related toxicities. This review addresses advancements in medical imaging, including the potential future role of radiomics in evaluating irAEs, which may facilitate clinical decision-making and improvements in patient care.

Evaluating the accuracy of artificial intelligence-powered chest X-ray diagnosis for paediatric pulmonary tuberculosis (EVAL-PAEDTBAID): Study protocol for a multi-centre diagnostic accuracy study.

Aurangzeb B, Robert D, Baard C, Qureshi AA, Shaheen A, Ambreen A, McFarlane D, Javed H, Bano I, Chiramal JA, Workman L, Pillay T, Franckling-Smith Z, Mustafa T, Andronikou S, Zar HJ

PubMed · Jul 28 2025
Diagnosing pulmonary tuberculosis (PTB) in children is challenging owing to paucibacillary disease, non-specific symptoms and signs, and challenges in microbiological confirmation. Chest X-ray (CXR) interpretation is fundamental for diagnosis and for classifying disease as severe or non-severe. In adults with PTB, there is substantial evidence showing the usefulness of artificial intelligence (AI) in CXR interpretation, but very limited data exist in children. A prospective two-stage study of children with presumed PTB will be conducted at three sites (one in South Africa and two in Pakistan). In stage I, eligible children will be enrolled and comprehensively investigated for PTB. A CXR radiological reference standard (RRS) will be established by an expert panel of blinded radiologists. CXRs will be classified as consistent with PTB or not based on the RRS. Cases will be classified as confirmed, unconfirmed or unlikely PTB according to National Institutes of Health definitions. Data from 300 confirmed and unconfirmed PTB cases and 250 unlikely PTB cases will be collected. An AI-CXR algorithm (qXR) will be used to process the CXRs. The primary endpoint will be the sensitivity and specificity of AI in detecting confirmed and unconfirmed PTB cases (composite reference standard); a secondary endpoint will be the same measures for confirmed PTB cases only (microbiological reference standard). In stage II, a multi-reader multi-case study using a cross-over design will be conducted with 16 readers and 350 CXRs to assess the usefulness of AI-assisted CXR interpretation for readers (clinicians and radiologists). The primary endpoint will be the difference in the area under the receiver operating characteristic curve of readers with and without AI assistance in correctly classifying CXRs per the RRS. The study has been approved by a local institutional ethics committee at each site. Results will be published in academic journals and presented at conferences. Data will be made available as an open-source database. Trial registration: PACTR202502517486411.
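For the stage I endpoints, sensitivity and specificity are computed against each reference standard. A minimal sketch of how the composite standard pools confirmed and unconfirmed PTB as positives while the microbiological standard counts only confirmed cases; the arrays and their values are hypothetical:

```python
import numpy as np

def sens_spec(pred, truth):
    """Sensitivity and specificity for boolean arrays (True = PTB by the standard)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    return (pred & truth).sum() / truth.sum(), (~pred & ~truth).sum() / (~truth).sum()

# Hypothetical per-child AI outputs and NIH case classifications.
ai_positive = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=bool)
category = np.array(["confirmed", "unconfirmed", "unlikely", "confirmed",
                     "unlikely", "unconfirmed", "unlikely", "unlikely"])

# Primary endpoint: composite reference standard (confirmed + unconfirmed = positive).
print(sens_spec(ai_positive, np.isin(category, ["confirmed", "unconfirmed"])))

# Secondary endpoint: microbiological standard, restricted to confirmed vs unlikely.
subset = np.isin(category, ["confirmed", "unlikely"])
print(sens_spec(ai_positive[subset], category[subset] == "confirmed"))
```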

Radiomics with Machine Learning Improves the Prediction of Microscopic Peritumoral Small Cancer Foci and Early Recurrence in Hepatocellular Carcinoma.

Zou W, Gu M, Chen H, He R, Zhao X, Jia N, Wang P, Liu W

PubMed · Jul 28 2025
This study aimed to develop an interpretable machine learning model using magnetic resonance imaging (MRI) radiomics features to predict preoperative microscopic peritumoral small cancer foci (MSF) and to explore its relationship with early recurrence in hepatocellular carcinoma (HCC) patients. A total of 1049 patients from three hospitals were divided into a training set (Hospital 1: 614 cases), a test set (Hospital 2: 248 cases), and a validation set (Hospital 3: 187 cases). Independent risk factors from clinical and MRI features were identified using univariate and multivariate logistic regression to build a clinicoradiological model. MRI radiomics features were then selected using least absolute shrinkage and selection operator regression with cross-validation (LassoCV) and modeled with various machine learning algorithms, with the best-performing model chosen as the radiomics model. The clinical and radiomics features were combined to form a fusion model. Model performance was evaluated by comparing receiver operating characteristic (ROC) curves, area under the curve (AUC) values, calibration curves, and decision curve analysis (DCA) curves. Net reclassification improvement (NRI) and integrated discrimination improvement (IDI) values assessed improvements in predictive efficacy. The model's prognostic value was verified using Kaplan-Meier analysis. SHapley Additive exPlanations (SHAP) was used to interpret how the model makes predictions. Three models were developed: a clinicoradiological model (Clinical Radiology), a radiomics model (XGBoost), and a fusion model (Clinical XGBoost). XGBoost was selected as the final model for predicting MSF, with AUCs of 0.841, 0.835, and 0.817 in the training, test, and validation sets, respectively. These results were comparable to the Clinical XGBoost model (0.856, 0.826, 0.837) and significantly better than the Clinical Radiology model (0.688, 0.561, 0.613). Additionally, the XGBoost model effectively predicted early recurrence in HCC patients. This study successfully developed an interpretable XGBoost machine learning model based on MRI radiomics features to predict preoperative MSF and early recurrence in HCC patients.
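A minimal sketch of the LassoCV-then-XGBoost pipeline the abstract describes: L1-penalized selection over standardized radiomics features, followed by a gradient-boosted classifier. The feature matrix, labels, and hyperparameters are hypothetical, and the SHAP/NRI/IDI steps are omitted.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier  # any gradient-boosting library would serve

rng = np.random.default_rng(0)
X = rng.normal(size=(614, 1200))                     # hypothetical radiomics matrix
y = (X[:, :5].sum(axis=1) + rng.normal(size=614) > 0).astype(int)  # hypothetical MSF labels

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xs, y)                     # cross-validated L1 penalty
selected = np.flatnonzero(lasso.coef_)               # features with nonzero coefficients
print(f"{selected.size} radiomics features retained")

model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(Xs[:, selected], y)
print("training AUC:", roc_auc_score(y, model.predict_proba(Xs[:, selected])[:, 1]))
```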

Evaluating the impact of view position in X-ray imaging for the classification of lung diseases.

Hage Chehade A, Abdallah N, Marion JM, Oueidat M, Chauvet P

PubMed · Jul 28 2025
Clinical information associated with chest X-ray images, such as view position, patient age and gender, plays a crucial role in image interpretation, as it influences the visibility of anatomical structures and pathologies. However, most classification models using the ChestX-ray14 dataset have relied solely on image data, disregarding the impact of these clinical variables. This study aims to investigate which clinical variable affects image characteristics and to assess its impact on classification performance. To explore the relationships between clinical variables and image characteristics, unsupervised clustering was applied to group images based on their similarities. A statistical analysis was then conducted on each cluster to examine its clinical composition, analyzing the distribution of age, gender, and view position. An attention-based CNN model was developed separately for each value of the clinical variable with the greatest influence on image characteristics, to assess its impact on lung disease classification. The analysis identified view position as the most influential variable affecting image characteristics. Accounting for this, the proposed approach achieved a weighted area under the curve (AUC) of 0.8176 for pneumonia classification, surpassing the base model (which does not consider view position) by 1.65% and outperforming previous studies by 6.76%. Furthermore, it demonstrated improved performance across all 14 diseases in the ChestX-ray14 dataset. These findings highlight the importance of considering view position when developing classification models for chest X-ray analysis. Accounting for this characteristic allows more precise disease identification, demonstrating potential for broader clinical application in lung disease evaluation.
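The abstract does not name the clustering algorithm or statistical test, so the sketch below shows the general recipe under assumed choices: k-means on image embeddings, then a chi-squared test of whether cluster membership depends on view position. All data here are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))            # hypothetical image feature vectors
view = rng.choice(["PA", "AP"], size=1000)          # view position of each CXR

# Group images by similarity, then test whether clusters differ in view position.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)
table = pd.crosstab(labels, view)
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
```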

Predicting Intracranial Pressure Levels: A Deep Learning Approach Using Computed Tomography Brain Scans.

Theodoropoulos D, Trivizakis E, Marias K, Xirouchaki N, Vakis A, Papadaki E, Karantanas A, Karabetsos DA

PubMed · Jul 28 2025
Elevated intracranial pressure (ICP) is a serious condition that demands prompt diagnosis to avoid significant neurological injury or even death. Although invasive techniques remain the "gold standard" for measuring ICP, they are time-consuming and pose risks of complications. Various noninvasive methods have been suggested, but their experimental status limits their use in emergency situations. On the other hand, although artificial intelligence has rapidly evolved, it has not yet fully harnessed fast-acquisition modalities such as computed tomography (CT) scans to evaluate ICP, likely owing to the lack of available annotated data sets. In this article, we present research that addresses this gap by training four distinct deep learning models on a custom data set enhanced with demographic and Glasgow Coma Scale (GCS) values. A key innovation of our study is the incorporation of demographic data and GCS values as additional channels of the scans. The models were trained and validated on a custom data set consisting of paired CT brain scans (n = 578) with corresponding ICP values, supplemented by GCS scores and demographic data. The algorithm addresses a binary classification problem by predicting whether ICP levels exceed a predetermined threshold of 15 mm Hg. The top-performing models achieved an area under the curve of 88.3% and a recall of 81.8%. An explainability algorithm was used to provide insights into where the models focus when generating outcomes, for both the best- and lowest-performing models. This study demonstrates the potential of AI-based models to evaluate ICP levels from brain CT scans with high recall. Although promising, further work is needed to validate these findings and improve clinical applicability.
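The channel trick the abstract highlights, broadcasting each normalized scalar (e.g. age, sex, GCS) into a constant-valued plane stacked onto the CT input, can be sketched as below; tensor shapes and the scaling of the clinical values are assumptions, not the paper's exact implementation.

```python
import torch

def add_scalar_channels(volume: torch.Tensor, scalars: torch.Tensor) -> torch.Tensor:
    """Broadcast normalized clinical scalars into constant channels and
    concatenate them to a CT volume of shape (B, 1, D, H, W)."""
    b, _, d, h, w = volume.shape
    maps = scalars.view(b, -1, 1, 1, 1).expand(b, scalars.shape[1], d, h, w)
    return torch.cat([volume, maps], dim=1)

ct = torch.randn(2, 1, 32, 128, 128)                             # hypothetical CT batch
clinical = torch.tensor([[0.55, 1.0, 0.80], [0.30, 0.0, 0.33]])  # age, sex, GCS (scaled)
x = add_scalar_channels(ct, clinical)
print(x.shape)  # torch.Size([2, 4, 32, 128, 128])
```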

Fully automated 3D multi-modal deep learning model for preoperative T-stage prediction of colorectal cancer using ¹⁸F-FDG PET/CT.

Zhang M, Li Y, Zheng C, Xie F, Zhao Z, Dai F, Wang J, Wu H, Zhu Z, Liu Q, Li Y

PubMed · Jul 28 2025
This study aimed to develop a fully automated 3D multi-modal deep learning model using preoperative ¹⁸F-FDG PET/CT to predict the T-stage of colorectal cancer (CRC) and evaluate its clinical utility. A retrospective cohort of 474 CRC patients was included, with 400 patients in the internal cohort and 74 in the external cohort. Patients were classified into early T-stage (T1-T2) and advanced T-stage (T3-T4) groups. Automatic segmentation of the volume of interest (VOI) was achieved with TotalSegmentator. A 3D ResNet18-based deep learning model integrated with a cross-multi-head attention mechanism was developed. Five models (CT + PET + Clinic (CPC), CT + PET (CP), PET (P), CT (C), Clinic) and two radiologists' assessments were compared. Performance was evaluated using the area under the curve (AUC). Grad-CAM was employed to provide visual interpretability of decision-critical regions. The automated segmentation achieved Dice scores of 0.884 (CT) and 0.888 (PET). The CPC and CP models achieved superior performance, with AUCs of 0.869 and 0.869 in the internal validation cohort, respectively, outperforming the single-modality models (P: 0.832; C: 0.809; Clinic: 0.728) and the radiologists (AUC: 0.627; P < 0.05 for all models vs. radiologists, except for the Clinic model). External validation exhibited a similar trend, with AUCs of 0.814, 0.812, 0.763, 0.714, 0.663 and 0.704, respectively. Grad-CAM visualization highlighted tumor-centric regions for early T-stage and peri-tumoral tissue infiltration for advanced T-stage. The fully automated multi-modal model, fusing PET and CT with cross-multi-head attention, improved T-stage prediction in CRC, surpassing the single-modality models and the radiologists, and offering a time-efficient tool to aid clinical decision-making.
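The fusion block, cross-multi-head attention in which each modality's features attend to the other's before classification, might look like the following sketch. Token shapes, dimensions, head counts, and the pooling are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch of cross-multi-head attention between CT and PET feature tokens."""
    def __init__(self, dim=512, heads=8, n_classes=2):
        super().__init__()
        self.ct_to_pet = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pet_to_ct = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, n_classes)   # early (T1-T2) vs advanced (T3-T4)

    def forward(self, ct_tokens, pet_tokens):       # (B, N, dim) each, e.g. from a 3D ResNet18
        ct_att, _ = self.ct_to_pet(ct_tokens, pet_tokens, pet_tokens)  # CT queries PET
        pet_att, _ = self.pet_to_ct(pet_tokens, ct_tokens, ct_tokens)  # PET queries CT
        fused = torch.cat([ct_att.mean(dim=1), pet_att.mean(dim=1)], dim=-1)
        return self.head(fused)

logits = CrossModalFusion()(torch.randn(2, 64, 512), torch.randn(2, 64, 512))
print(logits.shape)  # torch.Size([2, 2])
```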

Multi-Attention Stacked Ensemble for Lung Cancer Detection in CT Scans

Uzzal Saha, Surya Prakash

arXiv preprint · Jul 27 2025
In this work, we address the challenge of binary lung nodule classification (benign vs malignant) using CT images by proposing a multi-level attention stacked ensemble of deep neural networks. Three pretrained backbones - EfficientNet V2 S, MobileViT XXS, and DenseNet201 - are each adapted with a custom classification head tailored to 96 x 96 pixel inputs. A two-stage attention mechanism learns both model-wise and class-wise importance scores from concatenated logits, and a lightweight meta-learner refines the final prediction. To mitigate class imbalance and improve generalization, we employ dynamic focal loss with empirically calculated class weights, MixUp augmentation during training, and test-time augmentation at inference. Experiments on the LIDC-IDRI dataset demonstrate exceptional performance, achieving 98.09% accuracy and 0.9961 AUC, a 35 percent reduction in error rate compared to state-of-the-art methods. The model exhibits balanced performance across sensitivity (98.73%) and specificity (98.96%), with particularly strong results on challenging cases where radiologist disagreement was high. Statistical significance testing confirms the robustness of these improvements across multiple experimental runs. Our approach can serve as a robust, automated aid for radiologists in lung cancer screening.
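A minimal sketch of the two-stage attention over concatenated backbone logits plus a lightweight meta-learner, as the abstract outlines; layer sizes and the exact form of the scorers are assumptions.

```python
import torch
import torch.nn as nn

class AttentionStackedEnsemble(nn.Module):
    """Hypothetical two-stage attention ensemble over per-backbone logits."""
    def __init__(self, n_models=3, n_classes=2):
        super().__init__()
        d = n_models * n_classes
        self.model_scorer = nn.Linear(d, n_models)   # model-wise importance
        self.class_scorer = nn.Linear(d, n_classes)  # class-wise importance
        self.meta = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, logits):                       # logits: (B, n_models, n_classes)
        flat = logits.flatten(1)
        w_model = torch.softmax(self.model_scorer(flat), dim=1)  # (B, M)
        w_class = torch.softmax(self.class_scorer(flat), dim=1)  # (B, C)
        weighted = logits * w_model.unsqueeze(2) * w_class.unsqueeze(1)
        return self.meta(weighted.flatten(1))        # meta-learner refines the fusion

out = AttentionStackedEnsemble()(torch.randn(4, 3, 2))  # 4 samples, 3 backbones
print(out.shape)  # torch.Size([4, 2])
```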

Performance of AI-based software in predicting malignancy risk in breast lesions identified on targeted ultrasound.

Lima IRM, Cruz RM, de Lima Rodrigues CL, Lago BM, da Cunha RF, Damião SQ, Wanderley MC, Bitencourt AGV

PubMed · Jul 27 2025
Targeted ultrasound is commonly used to identify lesions characterized on magnetic resonance imaging (MRI) that were not recognized on initial mammography or ultrasound, and is especially valuable for guiding percutaneous biopsies. Although artificial intelligence (AI) algorithms have been used to differentiate benign from malignant breast lesions on ultrasound, their application in classifying lesions on targeted ultrasound has not yet been studied. To evaluate the performance of AI-based software in predicting malignancy risk in breast lesions identified on targeted ultrasound, this retrospective, cross-sectional, single-center study included patients with breast lesions identified on MRI who underwent targeted ultrasound and percutaneous ultrasound-guided biopsy. The ultrasound findings were analyzed using AI-based software and subsequently correlated with the pathological results. A total of 334 lesions were evaluated, including 183 mass and 151 non-mass lesions. On histological analysis, 257 (76.9%) lesions were benign and 77 (23.1%) were malignant. Both the AI software and the radiologists demonstrated high sensitivity in predicting the malignancy risk of the lesions. Specificity was higher when the radiologist used the AI software than with the radiologist's evaluation alone (p < 0.001). All lesions classified as BI-RADS 2 or 3 on targeted ultrasound by the radiologist or the AI software (n = 72; 21.6%) had benign pathology results. The AI software, when integrated into the radiologist's evaluation, demonstrated high diagnostic accuracy and improved specificity for both mass and non-mass lesions on targeted ultrasound, supporting more accurate biopsy decisions and potentially reducing false positives without missing cancers.
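The abstract does not state which test produced the p < 0.001 for the paired specificity comparison; McNemar's test is a common choice for paired reader calls, so here is a sketch under that assumption, with an invented 2 x 2 table over the 257 benign lesions.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar  # assumed test, not stated in the abstract

# Invented paired calls on the 257 benign lesions:
# rows = radiologist alone correct?; columns = radiologist + AI correct?
table = np.array([[150, 12],
                  [45, 50]])
result = mcnemar(table, exact=False, correction=True)
print(f"statistic = {result.statistic:.2f}, p = {result.pvalue:.4f}")

spec_alone = (150 + 12) / 257    # benign lesions correctly rated without AI
spec_with_ai = (150 + 45) / 257  # benign lesions correctly rated with AI
print(f"specificity alone = {spec_alone:.2f}, with AI = {spec_with_ai:.2f}")
```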

A Metabolic-Imaging Integrated Model for Prognostic Prediction in Colorectal Liver Metastases

Qinlong Li, Pu Sun, Guanlin Zhu, Tianjiao Liang, Honggang QI

arXiv preprint · Jul 26 2025
Prognostic evaluation in patients with colorectal liver metastases (CRLM) remains challenging due to suboptimal accuracy of conventional clinical models. This study developed and validated a robust machine learning model for predicting postoperative recurrence risk. Preliminary ensemble models achieved exceptionally high performance (AUC > 0.98) but incorporated postoperative features, introducing data leakage risks. To enhance clinical applicability, we restricted input variables to preoperative baseline clinical parameters and radiomic features from contrast-enhanced CT imaging, specifically targeting recurrence prediction at 3, 6, and 12 months postoperatively. The 3-month recurrence prediction model demonstrated optimal performance with an AUC of 0.723 in cross-validation. Decision curve analysis revealed that across threshold probabilities of 0.55-0.95, the model consistently provided greater net benefit than "treat-all" or "treat-none" strategies, supporting its utility in postoperative surveillance and therapeutic decision-making. This study successfully developed a robust predictive model for early CRLM recurrence with confirmed clinical utility. Importantly, it highlights the critical risk of data leakage in clinical prognostic modeling and proposes a rigorous framework to mitigate this issue, enhancing model reliability and translational value in real-world settings.
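Decision curve analysis compares a model's net benefit, NB = TP/n - (FP/n) x pt/(1 - pt) at threshold probability pt, against treat-all and treat-none strategies. A self-contained sketch over the 0.55-0.95 range reported above; the labels and predicted risks are synthetic.

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Net benefit of treating patients whose predicted risk exceeds threshold pt."""
    treat = y_prob >= pt
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                               # synthetic recurrence labels
risk = np.clip(0.4 * y + rng.uniform(0, 0.6, 500), 0, 1)  # synthetic model risk scores

for pt in np.arange(0.55, 1.0, 0.10):
    nb_model = net_benefit(y, risk, pt)
    nb_all = net_benefit(y, np.ones(500), pt)             # "treat-all" strategy
    print(f"pt={pt:.2f}  model={nb_model:+.3f}  treat-all={nb_all:+.3f}  treat-none=+0.000")
```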