Machine learning to identify hypoxic-ischemic brain injury on early head CT after pediatric cardiac arrest.

Kirschen MP, Li J, Elmer J, Manteghinejad A, Arefan D, Graham K, Morgan RW, Nadkarni V, Diaz-Arrastia R, Berg R, Topjian A, Vossough A, Wu S

PubMed · Jun 27 2025
To train deep learning models to detect hypoxic-ischemic brain injury (HIBI) on early CT scans after pediatric out-of-hospital cardiac arrest (OHCA) and determine if models could identify HIBI that was not visually appreciable to a radiologist. Retrospective study of children who had a CT scan within 24 hours of OHCA compared to age-matched controls. We designed models to detect HIBI by discriminating CT images from OHCA cases and controls, and to predict death and unfavorable outcome (PCPC 4-6 at hospital discharge) among cases. Model performance was measured by AUC. We trained a second model to distinguish OHCA cases with radiologist-identified HIBI from controls without OHCA and tested the model on OHCA cases without radiologist-identified HIBI. We compared outcomes between OHCA cases with and without model-categorized HIBI. We analyzed 117 OHCA cases (age 3.1 [0.7-12.2] years); 43% died and 58% had unfavorable outcome. Median time from arrest to CT was 2.1 [1.0-7.2] hours. Deep learning models discriminated OHCA cases from controls with a mean AUC of 0.87±0.05. Among OHCA cases, mean AUCs for predicting death and unfavorable outcome were 0.79±0.06 and 0.69±0.06, respectively. Mean AUC was 0.98±0.01 for discriminating between 44 OHCA cases with radiologist-identified HIBI and controls. Among 73 OHCA cases without radiologist-identified HIBI, the model identified 36% as having presumed HIBI, 31% of whom died, compared to 17% of cases in which neither the radiologist nor the model identified HIBI (p=0.174). Deep learning models can identify HIBI on early CT images after pediatric OHCA and detect some presumed HIBI not visually identified by a radiologist.
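
The discrimination results above are reported as a mean AUC with a standard deviation, which suggests repeated folds or training runs. Below is a minimal, hypothetical sketch (not the authors' pipeline) of how such a figure can be produced with cross-validated AUC in scikit-learn; the features, labels, and classifier are synthetic stand-ins.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(234, 64))    # stand-in image features (e.g., CNN embeddings)
y = rng.integers(0, 2, size=234)  # 1 = OHCA case, 0 = age-matched control

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"AUC: {np.mean(aucs):.2f} ± {np.std(aucs):.2f}")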

Implementation of an Intelligent System for Detecting Breast Cancer Cells from Histological Images, and Evaluation of Its Results at CHU Bogodogo.

Nikiema WC, Ouattara TA, Barro SG, Ouedraogo AS

PubMed · Jun 26 2025
Early detection of breast cancer is a major challenge in the fight against this disease. Artificial intelligence (AI), particularly through medical imaging, offers promising prospects for improving diagnostic accuracy. This article focuses on evaluating the effectiveness of an intelligent electronic system deployed at the CHU of Bogodogo in Burkina Faso, designed to detect breast cancer cells from histological images. The system aims to reduce diagnosis time and enhance screening reliability. The article also discusses the challenges, innovations, and prospects for integrating the system into the conventional laboratory examination process, while considering the associated ethical and technical issues.

High-performance Open-source AI for Breast Cancer Detection and Localization in MRI.

Hirsch L, Sutton EJ, Huang Y, Kayis B, Hughes M, Martinez D, Makse HA, Parra LC

PubMed · Jun 25 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI. Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date. Data included all breast MRIs conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRIs from the primary site (<i>n</i> = 6,615 breasts). Generalizability was assessed by evaluating model performance on axial data from the primary site (<i>n</i> = 7,058 breasts) and a second clinical site (<i>n</i> = 1,840 breasts). Results The primary site dataset included 30,672 sagittal MRI examinations (52,598 breasts) from 9,986 female patients (mean [SD] age, 53 [11] years). The model achieved an area under the receiver operating characteristic curve (AUC) of 0.95 for detecting cancer in the primary site. At 90% specificity (5717/6353), model sensitivity was 83% (217/262), which was comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an AUC of 0.92 on data from the same clinical site and 0.92 on data from a secondary site. The model accurately located the tumor in 88.5% (232/262) of sagittal images, 92.8% (272/293) of axial images from the primary site, and 87.7% (807/920) of secondary site axial images. Conclusion The model demonstrated state-of-the-art performance on breast cancer detection. Code and weights are openly available to stimulate further development and validation. ©RSNA, 2025.

Efficacy of an Automated Pulmonary Embolism (PE) Detection Algorithm on Routine Contrast-Enhanced Chest CT Imaging for Non-PE Studies.

Troutt HR, Huynh KN, Joshi A, Ling J, Refugio S, Cramer S, Lopez J, Wei K, Imanzadeh A, Chow DS

PubMed · Jun 25 2025
The urgency to accelerate PE management and minimize patient risk has driven the development of artificial intelligence (AI) algorithms designed to provide a swift and accurate diagnosis on dedicated chest imaging (computed tomography pulmonary angiogram; CTPA) for suspected PE; however, the accuracy of AI algorithms in detecting incidental PE on non-dedicated CT imaging studies remains unclear and untested. This study explores the potential for a commercial AI algorithm to identify incidental PE on non-dedicated contrast-enhanced CT chest imaging studies. The Viz PE algorithm was deployed to identify the presence of PE on 130 dedicated and 63 non-dedicated contrast-enhanced CT chest exams. Predictions for the non-dedicated contrast-enhanced chest CT studies were 90.48% accurate, with a sensitivity of 0.14 and specificity of 1.00. Overall, the Viz PE algorithm demonstrated an accuracy of 90.16%, with a specificity of 96% and a sensitivity of 41%. Although the high specificity is promising for ruling in PE, the low sensitivity highlights a limitation, as it indicates the algorithm may miss a substantial number of true-positive incidental PEs. This study demonstrates that commercial AI detection tools hold promise as integral support for detecting PE, particularly when there is a strong clinical indication for their use; however, current limitations in sensitivity, especially for incidental cases, underscore the need for ongoing radiologist oversight.
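
The reported accuracy, sensitivity, and specificity follow directly from a 2x2 confusion matrix of AI flags against ground-truth reads. The sketch below reproduces the non-dedicated-exam figures from illustrative counts that are consistent with, but not stated in, the abstract (1 detected of 7 incidental PEs among 63 exams); it is not the vendor's code.

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1] * 7 + [0] * 56)            # hypothetical: 7 incidental PEs among 63 exams
y_pred = np.array([1] * 1 + [0] * 6 + [0] * 56)  # hypothetical: AI flags 1 of 7, no false positives

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy={accuracy:.4f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")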

Few-Shot Learning for Prostate Cancer Detection on MRI: Comparative Analysis with Radiologists' Performance.

Yamagishi Y, Baba Y, Suzuki J, Okada Y, Kanao K, Oyama M

PubMed · Jun 25 2025
Deep-learning models for prostate cancer detection typically require large datasets, limiting clinical applicability across institutions due to domain shift issues. This study aimed to develop a few-shot learning deep-learning model for prostate cancer detection on multiparametric MRI that requires minimal training data and to compare its diagnostic performance with experienced radiologists. In this retrospective study, we used 99 cases (80 positive, 19 negative) of biopsy-confirmed prostate cancer (2017-2022), with 20 cases for training, 5 for validation, and 74 for testing. A 2D transformer model was trained on T2-weighted, diffusion-weighted, and apparent diffusion coefficient map images. Model predictions were compared with two radiologists using Matthews correlation coefficient (MCC) and F1 score, with 95% confidence intervals (CIs) calculated via bootstrap method. The model achieved an MCC of 0.297 (95% CI: 0.095-0.474) and F1 score of 0.707 (95% CI: 0.598-0.847). Radiologist 1 had an MCC of 0.276 (95% CI: 0.054-0.484) and F1 score of 0.741; Radiologist 2 had an MCC of 0.504 (95% CI: 0.289-0.703) and F1 score of 0.871, showing that the model performance was comparable to Radiologist 1. External validation on the Prostate158 dataset revealed that ImageNet pretraining substantially improved model performance, increasing study-level ROC-AUC from 0.464 to 0.636 and study-level PR-AUC from 0.637 to 0.773 across all architectures. Our findings demonstrate that few-shot deep-learning models can achieve clinically relevant performance when using pretrained transformer architectures, offering a promising approach to address domain shift challenges across institutions.
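
The model-radiologist comparison above rests on MCC and F1 with bootstrap 95% CIs. A minimal sketch of that bootstrap procedure on a 74-case test set is shown below; the labels and predictions are synthetic stand-ins, not the authors' data or code.

import numpy as np
from sklearn.metrics import matthews_corrcoef, f1_score

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=74)
y_pred = np.where(rng.random(74) < 0.8, y_true, 1 - y_true)  # toy predictions, ~80% agreement

boot_mcc = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))  # resample cases with replacement
    boot_mcc.append(matthews_corrcoef(y_true[idx], y_pred[idx]))

lo, hi = np.percentile(boot_mcc, [2.5, 97.5])
print(f"MCC={matthews_corrcoef(y_true, y_pred):.3f} (95% CI: {lo:.3f}-{hi:.3f}), F1={f1_score(y_true, y_pred):.3f}")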

The evaluation of artificial intelligence in mammography-based breast cancer screening: Is breast-level analysis enough?

Taib AG, Partridge GJW, Yao L, Darker I, Chen Y

PubMed · Jun 25 2025
To assess whether the diagnostic performance of a commercial artificial intelligence (AI) algorithm for mammography differs between breast-level and lesion-level interpretations, and to compare performance with a large population of specialised human readers. We retrospectively analysed 1200 mammograms from the NHS breast cancer screening programme using a commercial AI algorithm and assessments from 1258 trained human readers from the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance programme. For breasts containing pathologically confirmed malignancies, breast-level and lesion-level analyses were performed. The latter considered the locations of marked regions of interest for AI and humans; the highest score per lesion was recorded. For non-malignant breasts, a breast-level analysis recorded the highest score per breast. Area under the curve (AUC), sensitivity and specificity were calculated at the developer's recommended threshold for recall. The study was designed to detect a medium-sized effect (odds ratio 3.5 or 0.29) for sensitivity. The test set contained 882 non-malignant breasts (73%) and 318 malignant breasts (27%), with 328 cancer lesions. The AI AUC was 0.942 at breast level and 0.929 at lesion level (difference -0.013, p < 0.01). The mean human AUC was 0.878 at breast level and 0.851 at lesion level (difference -0.027, p < 0.01). AI outperformed human readers at both the breast and lesion level by AUC (p < 0.01 for each). AI's diagnostic performance significantly decreased at the lesion level, indicating reduced accuracy in localising malignancies. However, its overall performance exceeded that of human readers.
Question: AI often recalls mammography cases not recalled by humans; to understand why, we as humans must consider the regions of interest it has marked as cancerous.
Findings: Evaluations of AI typically occur at the breast level, but performance decreases when AI is evaluated at the lesion level. This also occurs for humans.
Clinical relevance: To improve human-AI collaboration, AI should be assessed at the lesion level; poor accuracy here may lead to automation bias and unnecessary patient procedures.
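
The breast-level versus lesion-level distinction comes down to how region-of-interest scores are aggregated: the highest score of any region within a breast versus the highest score matched to each confirmed lesion. A minimal, hypothetical sketch of that aggregation (pandas, made-up scores; not the study's code) is shown below.

import pandas as pd

rois = pd.DataFrame({
    "breast_id": ["A", "A", "A", "B", "B"],
    "lesion_id": ["A1", "A1", None, "B1", None],  # None = ROI not matched to a confirmed lesion
    "ai_score":  [0.91, 0.72, 0.30, 0.15, 0.64],
})

breast_level = rois.groupby("breast_id")["ai_score"].max()  # highest score per breast
lesion_level = rois.dropna(subset=["lesion_id"]).groupby("lesion_id")["ai_score"].max()  # highest score per lesion
print(breast_level, lesion_level, sep="\n\n")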

[AI-enabled clinical decision support systems: challenges and opportunities].

Tschochohei M, Adams LC, Bressem KK, Lammert J

PubMed · Jun 25 2025
Clinical decision-making is inherently complex, time-sensitive, and prone to error. AI-enabled clinical decision support systems (CDSS) offer promising solutions by leveraging large datasets to provide evidence-based recommendations. These systems range from rule-based and knowledge-based to increasingly AI-driven approaches. However, key challenges persist, particularly concerning data quality, seamless integration into clinical workflows, and clinician trust and acceptance. Ethical and legal considerations, especially data privacy, are also paramount. AI-CDSS have demonstrated success in fields like radiology (e.g., pulmonary nodule detection, mammography interpretation) and cardiology, where they enhance diagnostic accuracy and improve patient outcomes. Looking ahead, chat and voice interfaces powered by large language models (LLMs) could support shared decision-making (SDM) by fostering better patient engagement and understanding. To fully realize the potential of AI-CDSS in advancing efficient, patient-centered care, it is essential to ensure their responsible development. This includes grounding AI models in domain-specific data, anonymizing user inputs, and implementing rigorous validation of AI-generated outputs before presentation. Thoughtful design and ethical oversight will be critical to integrating AI safely and effectively into clinical practice.

Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE).

Anibal JT, Huth HB, Boeken T, Daye D, Gichoya J, Muñoz FG, Chapiro J, Wood BJ, Sze DY, Hausegger K

PubMed · Jun 25 2025
As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research and clinical practice, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed journals. This report introduces comprehensive standards and an evaluation checklist (iCARE) that covers the application of modern AI methods in IR-specific contexts. The iCARE checklist encompasses the full "code-to-clinic" pipeline of AI development, including dataset curation, pre-training, task-specific training, explainability, privacy protection, bias mitigation, reproducibility, and model deployment. The iCARE checklist aims to support the development of safe, generalizable technologies for enhancing IR workflows, the delivery of care, and patient outcomes.

The Current State of Artificial Intelligence on Detecting Pulmonary Embolism via Computerised Tomography Pulmonary Angiogram: A Systematic Review.

Hassan MSTA, Elhotiby MAM, Shah V, Rocha H, Rad AA, Miller G, Malawana J

PubMed · Jun 25 2025
Aims/Background: Pulmonary embolism (PE) is a life-threatening condition with significant diagnostic challenges due to high rates of missed or delayed detection. Computed tomography pulmonary angiography (CTPA) is the current standard for diagnosing PE; however, demand for imaging places strain on healthcare systems and increases error rates. This systematic review aims to assess the diagnostic accuracy and clinical applicability of artificial intelligence (AI)-based models for PE detection on CTPA, exploring their potential to enhance diagnostic reliability and efficiency across clinical settings. Methods: A systematic review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Excerpta Medica Database (EMBASE), Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, PubMed, and Google Scholar were searched for original articles from inception to September 2024. Articles were included if they reported successful AI integration, whether partial or full, alongside CTPA scans for PE detection in patients. Results: The literature search identified 919 articles, with 745 remaining after duplicate removal. Following rigorous screening and appraisal aligned with inclusion and exclusion criteria, 12 studies were included in the final analysis. Three primary AI modalities emerged: convolutional neural networks (CNNs), segmentation models, and natural language processing (NLP), collectively used in the analysis of 341,112 radiographic images. CNNs were the most frequently applied modality in this review. Models such as AdaBoost and EmbNet have demonstrated high sensitivity, with EmbNet achieving 88-90.9% per scan and reducing false positives to 0.45 per scan. Conclusion: AI shows significant promise as a diagnostic tool for identifying PE on CTPA scans, particularly when combined with other forms of clinical data. However, challenges remain, including ensuring generalisability, addressing potential bias, and conducting rigorous external validation. Variability in study methodologies and the lack of standardised reporting of key metrics complicate comparisons. Future research must focus on refining models, improving peripheral emboli detection, and validating performance across diverse settings to realise AI's potential fully.

Validation of a Pretrained Artificial Intelligence Model for Pancreatic Cancer Detection on Diagnosis and Prediagnosis Computed Tomography Scans.

Degand L, Abi-Nader C, Bône A, Vetil R, Placido D, Chmura P, Rohé MM, De Masi F, Brunak S

PubMed · Jun 24 2025
To evaluate PANCANAI, a previously developed AI model for pancreatic cancer (PC) detection, on a longitudinal cohort of patients. In particular, the ability to detect PC on scans acquired before histopathologic diagnosis was assessed. The model had previously been trained to predict PC suspicion on 2134 portal venous CTs. In this study, the algorithm was evaluated on a retrospective cohort of Danish patients with biopsy-confirmed PC and with CT scans acquired between 2006 and 2016. Sensitivity was measured, and bootstrapping was performed to provide the median and 95% CI. The study included 1083 PC patients (mean age: 69 ± 11 years; 575 men). CT scans were divided into 2 groups: (1) concurrent diagnosis (CD): 1022 CT scans acquired within 2 months of histopathologic diagnosis, and (2) prediagnosis (PD): 198 CT scans acquired before histopathologic diagnosis (median 7 months before diagnosis). Sensitivity was 91.8% (938 of 1022; 95% CI: 89.9-93.5) on the CD group and 68.7% (137 of 198; 95% CI: 62.1-75.3) on the PD group. Sensitivity on CT scans acquired 1 year or more before diagnosis was 53.9% (36 of 67; 95% CI: 41.8-65.7). Sensitivity on CT scans acquired at stage I was 82.9% (29 of 35; 95% CI: 68.6-94.3). PANCANAI showed high sensitivity for automatic PC detection in a large retrospective cohort of biopsy-confirmed patients. PC suspicion was detected in more than half of the CT scans acquired at least a year before histopathologic diagnosis.
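
The bootstrapped sensitivity estimates above can be reproduced in outline: every scan in the cohort is a biopsy-confirmed positive, so sensitivity is simply the fraction of scans the model flags, and resampling scans with replacement yields a median and 95% CI. A minimal sketch using the prediagnosis counts (137 detected of 198) is below; the resampling code is ours, not the authors'.

import numpy as np

detected = np.array([1] * 137 + [0] * 61)  # 1 = model flagged PC suspicion, 0 = missed
rng = np.random.default_rng(3)

boot = [rng.choice(detected, size=detected.size, replace=True).mean() for _ in range(5000)]
median, lo, hi = np.percentile(boot, [50, 2.5, 97.5])
print(f"sensitivity median={median:.1%} (95% CI: {lo:.1%}-{hi:.1%})")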