
Best Practices and Checklist for Reviewing Artificial Intelligence-Based Medical Imaging Papers: Classification.

Kline TL, Kitamura F, Warren D, Pan I, Korchi AM, Tenenholtz N, Moy L, Gichoya JW, Santos I, Moradi K, Avval AH, Alkhulaifat D, Blumer SL, Hwang MY, Git KA, Shroff A, Stember J, Walach E, Shih G, Langer SG

PubMed | Jun 4 2025
Recent advances in Artificial Intelligence (AI) methodologies and their application to medical imaging have led to an explosion of related research programs utilizing AI to produce state-of-the-art classification performance. Ideally, research culminates in dissemination of the findings in peer-reviewed journals. To date, acceptance or rejection criteria are often subjective; however, reproducible science requires reproducible review. The Machine Learning Education Sub-Committee of the Society for Imaging Informatics in Medicine (SIIM) has identified a knowledge gap and a need to establish guidelines for reviewing these studies. The present work, written from the machine learning practitioner's standpoint, follows an approach similar to that of our previous paper on segmentation. In this series, the committee addresses best practices to follow in AI-based studies and presents the required sections, with examples and discussion of the requirements that make studies cohesive, reproducible, accurate, and self-contained. This entry in the series focuses on image classification. Elements such as dataset curation, data pre-processing, reference standard identification, data partitioning, model architecture, and training are discussed. Sections are presented as in a typical manuscript. The content describes the information necessary to ensure a study is of sufficient quality for publication consideration and, compared with other checklists, provides a focused approach tailored to image classification tasks. The goal of this series is to provide resources that not only improve the review process for AI-based medical imaging papers but also establish a standard for the information that should be presented within all components of a research study.
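The data-partitioning element the checklist calls out is one place where a concrete sketch helps: splits should be made at the patient level, not the image level, so that no patient contributes images to both partitions. A minimal stdlib-only illustration, with hypothetical patient IDs and labels:

```python
import random
from collections import defaultdict

def patient_level_split(patient_labels, test_frac=0.2, seed=0):
    """Split patients (not images) into train/test, stratified by class label.

    Splitting at the patient level avoids leakage when one patient
    contributes multiple images; stratification preserves class balance.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for pid, label in patient_labels.items():
        by_label[label].append(pid)
    train, test = [], []
    for label, pids in sorted(by_label.items()):
        rng.shuffle(pids)
        n_test = max(1, round(len(pids) * test_frac))
        test += pids[:n_test]
        train += pids[n_test:]
    return train, test

# Toy cohort: 10 patients, binary labels (4 positive, 6 negative).
labels = {f"pt{i:02d}": (1 if i < 4 else 0) for i in range(10)}
train, test = patient_level_split(labels, test_frac=0.2)
```

Reviewers can then check that the manuscript reports the split level (patient vs. image) and the class balance of each partition.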

A review on learning-based algorithms for tractography and human brain white matter tracts recognition.

Barati Shoorche A, Farnia P, Makkiabadi B, Leemans A

PubMed | Jun 4 2025
Human brain fiber tractography using diffusion magnetic resonance imaging is a crucial stage in mapping brain white matter structures, pre-surgical planning, and extracting connectivity patterns. Accurate and reliable tractography, by providing detailed geometric information about the position of neural pathways, minimizes the risk of damage during neurosurgical procedures. Both tractography itself and its post-processing steps, such as bundle segmentation, are commonly used in these contexts. Many approaches have been put forward in past decades, and recently multiple data-driven tractography algorithms and automatic segmentation pipelines have been proposed to address the limitations of traditional methods. Several of these recent methods are based on learning algorithms that have demonstrated promising results. In this study, in addition to introducing diffusion MRI datasets, we review learning-based algorithms (conventional machine learning, deep learning, reinforcement learning, and dictionary learning methods) that have been used for white matter tract, nerve, and pathway recognition, as well as for whole-brain streamline or whole-brain tractogram creation. The contributions of this review are to discuss both tractography and tract-recognition methods, to extend previous related reviews with the most recent methods (covering architectures as well as network details), to assess the efficiency of learning-based methods in this field through a comprehensive comparison, and to demonstrate the important role that learning-based methods play in tractography.

Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.

Fink A, Rau A, Reisert M, Bamberg F, Russe MF

PubMed | Jun 4 2025
Large language models (LLMs) hold substantial promise in addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity in the sources of LLM responses. Retrieval-augmented generation (RAG)-based LLMs offer a promising approach to streamlining radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice. ©RSNA, 2025.
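The RAG idea the report describes can be sketched in a few lines: retrieve the passage most relevant to a query, then prepend it to the prompt so the model's answer is grounded in a verifiable source. The toy bag-of-words retriever and two-document corpus below are illustrative stand-ins for a real embedding model and document store, not the authors' system:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real RAG system would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # The retrieved passage grounds the model's answer and makes the source verifiable.
    context = "\n".join(retrieve(query, documents, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "BI-RADS 4 lesions warrant tissue sampling.",
    "Lung-RADS category 2 indicates benign appearance.",
]
prompt = build_prompt("What does BI-RADS 4 mean?", corpus)
```

The resulting prompt would then be sent to the LLM; because the context is quoted verbatim, the answer's provenance can be audited.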

Predicting clinical outcomes using 18F-FDG PET/CT-based radiomic features and machine learning algorithms in patients with esophageal cancer.

Mutevelizade G, Aydin N, Duran Can O, Teke O, Suner AF, Erdugan M, Sayit E

PubMed | Jun 4 2025
This study evaluated the relationship between 18F-fluorodeoxyglucose PET/computed tomography (18F-FDG PET/CT) radiomic features and clinical parameters (tumor localization, histopathological subtype, lymph node metastasis, mortality, and treatment response) in esophageal cancer (EC) patients undergoing chemoradiotherapy, as well as the predictive performance of various machine learning (ML) models. In this retrospective study, 39 patients with EC who underwent pretreatment 18F-FDG PET/CT and received concurrent chemoradiotherapy were analyzed. Texture features were extracted using LIFEx software. Logistic regression, naive Bayes, random forest, extreme gradient boosting (XGB), and support vector machine classifiers were applied to predict clinical outcomes. Cox regression and Kaplan-Meier analyses were used to evaluate overall survival (OS), and the accuracy of the ML algorithms was quantified using the area under the receiver operating characteristic curve. Radiomic features showed significant associations with several clinical parameters. Lymph node metastasis, tumor localization, and treatment response emerged as predictors of OS. Among the ML models, XGB demonstrated the most consistent and highest predictive performance across clinical outcomes. Radiomic features extracted from 18F-FDG PET/CT, when combined with ML approaches, may aid in predicting treatment response and clinical outcomes in EC. Radiomic features demonstrated value in assessing tumor heterogeneity; however, clinical parameters retained stronger prognostic value for OS.
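The area under the ROC curve used above to score the classifiers has a simple rank-based definition: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counted as one half. A minimal sketch on hypothetical scores (not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) identity:
    AUC = P(score of a random positive > score of a random negative),
    counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for six patients (1 = event, 0 = no event).
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(y, s)  # one misordered pair out of nine -> 8/9
```

In practice a library implementation (e.g. scikit-learn's `roc_auc_score`) would be used; the point of the sketch is only to make the metric's meaning concrete.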

Machine Learning to Automatically Differentiate Hypertrophic Cardiomyopathy, Cardiac Light Chain, and Cardiac Transthyretin Amyloidosis: A Multicenter CMR Study.

Weberling LD, Ochs A, Benovoy M, Aus dem Siepen F, Salatzki J, Giannitsis E, Duan C, Maresca K, Zhang Y, Möller J, Friedrich S, Schönland S, Meder B, Friedrich MG, Frey N, André F

PubMed | Jun 4 2025
Cardiac amyloidosis is associated with poor outcomes and is caused by the interstitial deposition of misfolded proteins, typically ATTR (transthyretin) or AL (light chains). Although specific therapies for early disease stages exist, the diagnosis is often established only at an advanced stage. Cardiovascular magnetic resonance (CMR) is the gold standard for imaging suspected myocardial disease. However, differentiating cardiac amyloidosis from hypertrophic cardiomyopathy may be challenging, and a reliable method for image-based classification of amyloidosis subtypes is lacking. This study sought to investigate a CMR machine learning (ML) algorithm to identify and distinguish cardiac amyloidosis. This retrospective, multicenter, multivendor feasibility study included consecutive patients diagnosed with hypertrophic cardiomyopathy or AL/ATTR amyloidosis as well as healthy volunteers. Standard clinical information, semiautomated CMR imaging data, and qualitative CMR features were integrated into a trained ML algorithm. Four hundred participants (95 healthy, 94 hypertrophic cardiomyopathy, 95 AL, and 116 ATTR) from 56 institutions were included (269 men; median age, 58.5 years [48.4-69.4]). A three-stage ML screening cascade sequentially differentiated healthy volunteers from patients, then hypertrophic cardiomyopathy from amyloidosis, and then AL from ATTR. The ML algorithm provided accurate differentiation at each step (area under the curve, 1.0, 0.99, and 0.92, respectively). After reducing the included data to demographics and imaging data alone, performance remained excellent (area under the curve, 0.99, 0.98, and 0.88, respectively), even after removing late gadolinium enhancement imaging data from the model (area under the curve, 1.0, 0.95, and 0.86, respectively). A trained ML model using semiautomated CMR imaging data and patient demographics can accurately identify cardiac amyloidosis and differentiate its subtypes.
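The three-stage screening cascade can be sketched as a chain of binary decisions, each either exiting with a class label or passing the case onward. The feature names and threshold rules below are hypothetical stand-ins for the study's trained ML models, chosen only to show the control flow:

```python
def cascade_classify(features, stages, final_label):
    """Run a screening cascade: each stage's predicate decides whether the
    case continues down the cascade; if it fails, that stage's exit label
    is returned. Mirrors the healthy -> HCM -> AL -> ATTR ordering."""
    for continues, exit_label in stages:
        if not continues(features):
            return exit_label
    return final_label

# Hypothetical threshold rules stand in for the three trained models;
# feature names and cutoffs are illustrative, not from the study.
stages = [
    (lambda f: f["max_wall_thickness_mm"] > 12, "healthy"),  # stage 1: healthy vs patient
    (lambda f: f["lge_extent_pct"] > 25, "HCM"),             # stage 2: HCM vs amyloidosis
    (lambda f: f["native_t1_ms"] > 1100, "AL"),              # stage 3: AL vs ATTR
]
case = {"max_wall_thickness_mm": 16, "lge_extent_pct": 40, "native_t1_ms": 1150}
label = cascade_classify(case, stages, final_label="ATTR")
```

A cascade of this shape lets each stage be trained and evaluated on its own binary task, which is why the paper reports a separate area under the curve per step.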

Long-Term Prognostic Implications of Thoracic Aortic Calcification on CT Using Artificial Intelligence-Based Quantification in a Screening Population: A Two-Center Study.

Lee JE, Kim NY, Kim YH, Kwon Y, Kim S, Han K, Suh YJ

PubMed | Jun 4 2025
<b>BACKGROUND.</b> The importance of including thoracic aortic calcification (TAC), in addition to coronary artery calcification (CAC), in prognostic assessments has been difficult to determine, partly because of the greater challenge of performing standardized TAC assessments. <b>OBJECTIVE.</b> The purpose of this study was to evaluate the long-term prognostic implications of TAC assessed using artificial intelligence (AI)-based quantification on routine chest CT in a screening population. <b>METHODS.</b> This retrospective study included 7404 asymptomatic individuals (median age, 53.9 years; 5875 men, 1529 women) who underwent nongated noncontrast chest CT as part of a national general health screening program at one of two centers from January 2007 to December 2014. A commercial AI program quantified TAC and CAC using Agatston scores, which were stratified into categories. Radiologists manually quantified TAC and CAC in 2567 examinations. The role of AI-based TAC categories in predicting major adverse cardiovascular events (MACE) and all-cause mortality (ACM), independent of AI-based CAC categories as well as clinical and laboratory variables, was assessed by multivariable Cox proportional hazards models using data from both centers, and by concordance statistics from prognostic models developed and tested using center 1 and center 2 data, respectively. <b>RESULTS.</b> AI-based and manual quantification showed excellent agreement for TAC and CAC (concordance correlation coefficient: 0.967 and 0.895, respectively). The median observation periods were 7.5 years for MACE (383 events in 5342 individuals) and 11.0 years for ACM (292 events in 7404 individuals). When adjusted for AI-based CAC categories along with clinical and laboratory variables, the risk of MACE was not independently associated with any AI-based TAC category; the risk of ACM was independently associated with an AI-based TAC score of 1001-3000 (HR = 2.14, <i>p</i> = .02) but not with other AI-based TAC categories.
When prognostic models were tested, the addition of AI-based TAC categories did not improve model fit relative to models containing clinical variables, laboratory variables, and AI-based CAC categories for MACE (concordance index [C-index] = 0.760-0.760, <i>p</i> = .81) or ACM (C-index = 0.823-0.830, <i>p</i> = .32). <b>CONCLUSION.</b> The addition of TAC to models containing CAC provided limited improvement in risk prediction in an asymptomatic screening population undergoing CT. <b>CLINICAL IMPACT.</b> AI-based quantification provides a standardized approach for better understanding the potential role of TAC as a predictive imaging biomarker.
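The concordance index (C-index) used to compare the prognostic models above is Harrell's C: among comparable patient pairs (where the earlier time is an observed event), the fraction in which the patient who fails earlier was assigned the higher predicted risk. A minimal sketch on hypothetical follow-up data:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when i has an observed event before j's
    time; it is concordant when i also has the higher predicted risk.
    Ties in predicted risk count as one half.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: follow-up time (years), event flag, model risk score.
t = [2.0, 4.0, 6.0, 8.0]
e = [1, 1, 0, 0]
r = [0.9, 0.7, 0.4, 0.2]
cindex = concordance_index(t, e, r)  # risk ordering matches failure order -> 1.0
```

A C-index of 0.5 corresponds to random ranking, which is why the paper's comparison of C-index values with and without TAC is a direct test of added predictive value.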

Regulating Generative AI in Radiology Practice: A Trilaminar Approach to Balancing Risk with Innovation.

Gowda V, Bizzo BC, Dreyer KJ

PubMed | Jun 4 2025
Generative AI tools have proliferated across the market, garnered significant media attention, and increasingly found incorporation into the radiology practice setting. However, they raise a number of unanswered questions concerning governance and appropriate use. By their nature as general-purpose technologies, they strain the limits of existing FDA premarket review pathways to regulate them and introduce new sources of liability, privacy, and clinical risk. A multilayered governance approach is needed to balance innovation with safety. To address gaps in oversight, this piece establishes a trilaminar governance model for generative AI technologies. This treats federal regulations as a scaffold, upon which tiers of institutional guidelines and industry self-regulatory frameworks are added to create a comprehensive paradigm composed of interlocking parts. Doing so would provide radiologists with an effective risk management strategy for the future, foster continued technical development, and ultimately, promote patient care.

Impact of AI-Generated ADC Maps on Computer-Aided Diagnosis of Prostate Cancer: A Feasibility Study.

Ozyoruk KB, Harmon SA, Yilmaz EC, Gelikman DG, Bagci U, Simon BD, Merino MJ, Lis R, Gurram S, Wood BJ, Pinto PA, Choyke PL, Turkbey B

PubMed | Jun 4 2025
To evaluate the impact of AI-generated apparent diffusion coefficient (ADC) maps on the diagnostic performance of a 3D U-Net AI model for prostate cancer (PCa) detection and segmentation at biparametric MRI (bpMRI). The study population was retrospectively collected and consisted of 178 patients, including 119 cases and 59 controls. Cases had a mean age of 62.1 years (SD = 7.4) and a median prostate-specific antigen (PSA) level of 7.27 ng/mL (IQR = 5.43-10.55), while controls had a mean age of 63.4 years (SD = 7.5) and a median PSA of 6.66 ng/mL (IQR = 4.29-11.30). All participants underwent 3.0-T T2-weighted turbo spin-echo MRI and high b-value echo-planar diffusion-weighted imaging (together constituting the bpMRI protocol), followed by either prostate biopsy or radical prostatectomy between January 2013 and December 2022. We compared the lesion detection and segmentation performance of a pretrained 3D U-Net AI model using conventional ADC maps versus AI-generated ADC maps. The Wilcoxon signed-rank test was used for statistical comparison, with 95% confidence intervals (CI) estimated via bootstrapping; a p-value <0.05 was considered significant. AI-ADC maps increased the accuracy of the lesion detection model from 0.70 to 0.78 (p<0.01). Specificity increased from 0.22 to 0.47 (p<0.001) while maintaining high sensitivity, which was 0.94 with conventional ADC maps and 0.93 with AI-ADC maps (p>0.05). The mean Dice similarity coefficient (DSC) was 0.276 for conventional ADC maps and 0.225 for AI-ADC maps (p<0.05). In the subset of patients with ISUP grade ≥2, standard ADC maps demonstrated a mean DSC of 0.282 compared with 0.230 for AI-ADC maps (p<0.05). AI-generated ADC maps can improve the performance of computer-aided diagnosis of prostate cancer.
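The Dice similarity coefficient (DSC) used above to compare segmentations is twice the overlap between two binary masks divided by their total volume. A minimal sketch on toy flattened masks (illustrative, not the study's data):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Toy flattened 1D masks standing in for 3D lesion segmentations.
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
dsc = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.666...
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which makes the study's contrast of detection accuracy rising while mean DSC falls easy to interpret: lesions were found more reliably, but their delineated boundaries overlapped the reference slightly less.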

Multimodal data integration for biologically-relevant artificial intelligence to guide adjuvant chemotherapy in stage II colorectal cancer.

Xie C, Ning Z, Guo T, Yao L, Chen X, Huang W, Li S, Chen J, Zhao K, Bian X, Li Z, Huang Y, Liang C, Zhang Q, Liu Z

PubMed | Jun 4 2025
Adjuvant chemotherapy provides a limited survival benefit (<5%) for patients with stage II colorectal cancer (CRC) and is suggested for high-risk patients. Given the heterogeneity of stage II CRC, we aimed to develop a clinically explainable artificial intelligence (AI)-powered analyser to identify radiological phenotypes that would benefit from chemotherapy. Multimodal data from patients with CRC across six cohorts were collected, including 405 patients from the Guangdong Provincial People's Hospital for model development and 153 patients from the Yunnan Provincial Cancer Centre for validation. RNA sequencing data were used to identify the differentially expressed genes in the two radiological clusters. Histopathological patterns were evaluated to bridge the gap between the imaging and genetic information. Finally, we investigated the discovered morphological patterns in mouse models to observe imaging features. The survival benefit of chemotherapy varied significantly among the AI-powered radiological clusters [interaction hazard ratio (iHR) = 5.35 (95% CI: 1.98, 14.41), adjusted P<sub>interaction</sub> = 0.012]. Distinct biological pathways related to immune and stromal cell abundance were observed between the clusters. The observation-only (OO)-preferable cluster exhibited higher necrosis, haemorrhage, and tortuous vessels, whereas the adjuvant chemotherapy (AC)-preferable cluster exhibited vessels with greater pericyte coverage, allowing for a more enriched infiltration of B, CD4<sup>+</sup>-T, and CD8<sup>+</sup>-T cells into the core tumoural areas. Further experiments confirmed that changes in vessel morphology led to alterations in predictive imaging features. The developed explainable AI-powered analyser effectively identified patients with stage II CRC with improved overall survival after receiving adjuvant chemotherapy, thereby contributing to the advancement of precision oncology.
This work was funded by the National Science Fund of China (81925023, 82302299, and U22A2034), Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (2022B1212010011), and High-level Hospital Construction Project (DFJHBF202105 and YKY-KF202204).
