Page 43 of 100995 results

Emerging Role of MRI-Based Artificial Intelligence in Individualized Treatment Strategies for Hepatocellular Carcinoma: A Narrative Review.

Che F, Zhu J, Li Q, Jiang H, Wei Y, Song B

PubMed · Jul 19 2025
Hepatocellular carcinoma (HCC) is the most common subtype of primary liver cancer, with significant variability in patient outcomes even within the same stage according to the Barcelona Clinic Liver Cancer staging system. Accurately predicting patient prognosis and potential treatment response prior to therapy initiation is crucial for personalized clinical decision-making. This review focuses on the application of artificial intelligence (AI) in magnetic resonance imaging for guiding individualized treatment strategies in HCC management. Specifically, we emphasize AI-based tools for pre-treatment prediction of therapeutic response and prognosis. AI techniques such as radiomics and deep learning have shown strong potential in extracting high-dimensional imaging features to characterize tumors and liver parenchyma, predict treatment outcomes, and support prognostic stratification. These advances contribute to more individualized and precise treatment planning. However, challenges remain in model generalizability, interpretability, and clinical integration, highlighting the need for standardized imaging datasets and multi-omics fusion to fully realize the potential of AI in personalized HCC care. Evidence level: 5. Technical efficacy: 4.

Influence of high-performance image-to-image translation networks on clinical visual assessment and outcome prediction: utilizing ultrasound to MRI translation in prostate cancer.

Salmanpour MR, Mousavi A, Xu Y, Weeks WB, Hacihaliloglu I

PubMed · Jul 19 2025
Image-to-image (I2I) translation networks have emerged as promising tools for generating synthetic medical images; however, their clinical reliability and ability to preserve diagnostically relevant features remain underexplored. This study evaluates the performance of state-of-the-art 2D/3D I2I networks for converting ultrasound (US) images to synthetic MRI in prostate cancer (PCa) imaging. The novelty lies in combining radiomics, expert clinical evaluation, and classification performance to comprehensively benchmark these models for potential integration into real-world diagnostic workflows. A dataset of 794 PCa patients was analyzed using ten leading I2I networks to synthesize MRI from US input. Radiomics feature (RF) analysis was performed using Spearman correlation to assess whether high-performing networks (SSIM > 0.85) preserved quantitative imaging biomarkers. A qualitative evaluation by seven experienced physicians assessed the anatomical realism, presence of artifacts, and diagnostic interpretability of synthetic images. Additionally, classification tasks using synthetic images were conducted using two machine learning and one deep learning model to assess the practical diagnostic benefit. Among all networks, 2D-Pix2Pix achieved the highest SSIM (0.855 ± 0.032). RF analysis showed that 76 out of 186 features were preserved post-translation, while the remainder were degraded or lost. Qualitative feedback revealed consistent issues with low-level feature preservation and artifact generation, particularly in lesion-rich regions. These evaluations were conducted to assess whether synthetic MRI retained clinically relevant patterns, supported expert interpretation, and improved diagnostic accuracy. Importantly, classification performance using synthetic MRI significantly exceeded that of US-based input, achieving average accuracy and AUC of ~ 0.93 ± 0.05. 
Although 2D-Pix2Pix showed the best overall performance in similarity and partial RF preservation, improvements are still required in lesion-level fidelity and artifact suppression. The combination of radiomics, qualitative, and classification analyses offered a holistic view of the current strengths and limitations of I2I models, supporting their potential in clinical applications pending further refinement and validation.
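The radiomics-preservation check described above (Spearman correlation between features extracted from real and synthetic images across patients) can be sketched as follows. The 0.8 rho cutoff, the array layout, and the function names are illustrative assumptions, not the authors' protocol:

```python
import numpy as np

def _spearman_rho(a, b):
    """Spearman correlation as Pearson correlation of ranks (ties ignored,
    which is acceptable for continuous radiomics features)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

def preserved_features(rf_real, rf_synth, rho_threshold=0.8):
    """Count radiomics features preserved after image-to-image translation.

    rf_real, rf_synth: (n_patients, n_features) arrays of feature values
    extracted from real and synthetic MRI. A feature counts as preserved
    when its Spearman rho across patients reaches rho_threshold."""
    n_features = rf_real.shape[1]
    kept = sum(
        _spearman_rho(rf_real[:, j], rf_synth[:, j]) >= rho_threshold
        for j in range(n_features)
    )
    return int(kept), n_features
```

Reporting "76 out of 186 features preserved" then amounts to calling this over the full feature matrix.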

SegMamba-V2: Long-range Sequential Modeling Mamba For General 3D Medical Image Segmentation.

Xing Z, Ye T, Yang Y, Cai D, Gai B, Wu XJ, Gao F, Zhu L

PubMed · Jul 18 2025
The Transformer architecture has demonstrated remarkable results in 3D medical image segmentation due to its capability of modeling global relationships. However, it poses a significant computational burden when processing high-dimensional medical images. Mamba, as a State Space Model (SSM), has recently emerged as a notable approach for modeling long-range dependencies in sequential data. Although a substantial amount of Mamba-based research has focused on natural language and 2D image processing, few studies explore the capability of Mamba on 3D medical images. In this paper, we propose SegMamba-V2, a novel 3D medical image segmentation model, to effectively capture long-range dependencies within whole-volume features at each scale. To achieve this goal, we first devise a hierarchical scale downsampling strategy to enhance the receptive field and mitigate information loss during downsampling. Furthermore, we design a novel tri-orientated spatial Mamba block that extends the global dependency modeling process from one plane to three orthogonal planes to improve feature representation capability. Moreover, we collect and annotate a large-scale dataset (named CRC-2000) with fine-grained categories to facilitate benchmarking evaluation in 3D colorectal cancer (CRC) segmentation. We evaluate the effectiveness of our SegMamba-V2 on CRC-2000 and three other large-scale 3D medical image segmentation datasets, covering various modalities, organs, and segmentation targets. Experimental results demonstrate that our SegMamba-V2 outperforms state-of-the-art methods by a significant margin, which indicates the universality and effectiveness of the proposed model on 3D medical image segmentation tasks. The code for SegMamba-V2 is publicly available at: https://github.com/ge-xing/SegMamba-V2.

Performance of Machine Learning in Diagnosing KRAS (Kirsten Rat Sarcoma) Mutations in Colorectal Cancer: Systematic Review and Meta-Analysis.

Chen K, Qu Y, Han Y, Li Y, Gao H, Zheng D

PubMed · Jul 18 2025
With the widespread application of machine learning (ML) in the diagnosis and treatment of colorectal cancer (CRC), some studies have investigated the use of ML techniques for the diagnosis of KRAS (Kirsten rat sarcoma) mutation. Nevertheless, there is scarce evidence from evidence-based medicine to substantiate its efficacy. Our study was carried out to systematically review the performance of ML models developed using different modeling approaches in diagnosing KRAS mutations in CRC. We aim to offer evidence-based foundations for the development and enhancement of future intelligent diagnostic tools. PubMed, Cochrane Library, Embase, and Web of Science were systematically searched, with the search cutoff date set to December 22, 2024. The included studies are published research papers that used ML to diagnose KRAS gene mutations in CRC. The risk of bias in the included models was evaluated via the PROBAST (Prediction Model Risk of Bias Assessment Tool). A meta-analysis of the models' concordance index (c-index) was performed, and a bivariate mixed-effects model was used to summarize sensitivity and specificity based on diagnostic contingency tables. A total of 43 studies involving 10,888 patients were included. The modeling variables were derived from clinical characteristics, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography/computed tomography, and pathological histology. In the validation cohort, for the ML model developed based on CT radiomic features, the c-index, sensitivity, and specificity were 0.87 (95% CI 0.84-0.90), 0.85 (95% CI 0.80-0.89), and 0.83 (95% CI 0.73-0.89), respectively. For the model developed using MRI radiomic features, the c-index, sensitivity, and specificity were 0.77 (95% CI 0.71-0.83), 0.78 (95% CI 0.72-0.83), and 0.73 (95% CI 0.63-0.81), respectively.
For the ML model developed based on positron emission tomography/computed tomography radiomic features, the c-index, sensitivity, and specificity were 0.84 (95% CI 0.77-0.90), 0.73, and 0.83, respectively. Notably, the deep learning (DL) model based on pathological images demonstrated a c-index, sensitivity, and specificity of 0.96 (95% CI 0.94-0.98), 0.83 (95% CI 0.72-0.91), and 0.87 (95% CI 0.77-0.92), respectively. The MRI-based DL model showed a c-index of 0.93 (95% CI 0.90-0.96), sensitivity of 0.85 (95% CI 0.75-0.91), and specificity of 0.83 (95% CI 0.77-0.88). ML is highly accurate in diagnosing KRAS mutations in CRC, and DL models based on MRI and pathological images exhibit particularly strong diagnostic accuracy. More broadly applicable DL-based diagnostic tools may be developed in the future. However, the clinical application of DL models remains relatively limited at present. Therefore, future research should focus on increasing sample sizes, improving model architectures, and developing more advanced DL models to facilitate the creation of highly efficient intelligent diagnostic tools for KRAS mutation diagnosis in CRC.
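A c-index meta-analysis of this kind typically pools study-level estimates with a random-effects model. A minimal sketch, assuming standard errors are back-calculated from the reported 95% CIs and using the DerSimonian-Laird estimator (the review does not state which estimator was used):

```python
import numpy as np

def dersimonian_laird(estimates, ci_low, ci_high):
    """Pool study-level estimates (e.g. c-indices) with a random-effects model.

    Standard errors are back-calculated from 95% CIs (se = width / (2 * 1.96)),
    then combined with DerSimonian-Laird weights."""
    est = np.asarray(estimates, float)
    se = (np.asarray(ci_high, float) - np.asarray(ci_low, float)) / (2 * 1.96)
    w = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)       # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)  # between-study variance
    w_star = 1.0 / (se**2 + tau2)            # random-effects weights
    pooled = np.sum(w_star * est) / np.sum(w_star)
    pooled_se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

With homogeneous studies tau² collapses to zero and the result reduces to the fixed-effect pooled estimate.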

Diagnostic Performance of Artificial Intelligence in Detecting and Distinguishing Pancreatic Ductal Adenocarcinoma via Computed Tomography: A Systematic Review and Meta-Analysis.

Harandi H, Gouravani M, Alikarami S, Shahrabi Farahani M, Ghavam M, Mohammadi S, Salehi MA, Reynolds S, Dehghani Firouzabadi F, Huda F

PubMed · Jul 18 2025
We conducted a systematic review and meta-analysis of the diagnostic performance of studies that used artificial intelligence (AI) algorithms to detect pancreatic ductal adenocarcinoma (PDAC) and distinguish it from other types of pancreatic lesions. We systematically searched for studies on pancreatic lesions and AI from January 2014 to May 2024. Data were extracted and a meta-analysis was performed using contingency tables and a random-effects model to calculate pooled sensitivity and specificity. Quality assessment was done using modified TRIPOD and PROBAST tools. We included 26 studies in this systematic review, with 22 studies chosen for meta-analysis. The evaluation of AI algorithms' performance in internal validation exhibited a pooled sensitivity of 93% (95% confidence interval [CI], 90 to 95) and specificity of 95% (95% CI, 92 to 97). Additionally, externally validated AI algorithms demonstrated a combined sensitivity of 89% (95% CI, 85 to 92) and specificity of 91% (95% CI, 85 to 95). Subgroup analysis indicated that diagnostic performance differed by comparator group, image contrast, segmentation technique, and algorithm type, with contrast-enhanced imaging and specific AI models (e.g., random forest for sensitivity and CNN for specificity) demonstrating superior accuracy. Although potential biases should be further addressed, the results of this systematic review and meta-analysis showed that AI models have the potential to be incorporated into clinical settings for the detection of smaller tumors and early signs of PDAC.
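The per-study inputs to such a meta-analysis come from 2x2 contingency tables. A generic sketch of the standard definitions (not the authors' code):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from one study's 2x2 table.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives for the index test against the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Pooling these per-study sensitivities and specificities jointly (they are correlated) is what motivates bivariate random-effects models over pooling each metric separately.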

Deep learning-based automatic detection of pancreatic ductal adenocarcinoma ≤ 2 cm with high-resolution computed tomography: impact of the combination of tumor mass detection and indirect indicator evaluation.

Ozawa M, Sone M, Hijioka S, Hara H, Wakatsuki Y, Ishihara T, Hattori C, Hirano R, Ambo S, Esaki M, Kusumoto M, Matsui Y

PubMed · Jul 18 2025
Detecting small pancreatic ductal adenocarcinomas (PDAC) is challenging because they are difficult to identify as distinct tumor masses. This study assesses the diagnostic performance of a three-dimensional convolutional neural network for the automatic detection of small PDAC using both automatic tumor mass detection and indirect indicator evaluation. High-resolution contrast-enhanced computed tomography (CT) scans from 181 patients diagnosed with PDAC (diameter ≤ 2 cm) between January 2018 and December 2023 were analyzed. The D/P ratio, the ratio of the cross-sectional area of the main pancreatic duct (MPD) to that of the pancreatic parenchyma, was identified as an indirect indicator. A total of 204 patient data sets, including 104 normal controls, were analyzed for automatic tumor mass detection and D/P ratio evaluation. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were evaluated for tumor mass detection. The sensitivity of the software for PDAC detection was compared with that of radiologists, and tumor localization accuracy was validated against endoscopic ultrasonography (EUS) findings. The sensitivity, specificity, PPV, and NPV for tumor mass detection were 77.0%, 76.0%, 75.5%, and 77.5%, respectively; for D/P ratio detection, 87.0%, 94.2%, 93.5%, and 88.3%, respectively; and for combined tumor mass and D/P ratio detections, 96.0%, 70.2%, 75.6%, and 94.8%, respectively. No significant difference was observed between the software's sensitivity and that of the radiologist's report (software, 96.0%; radiologist, 96.0%; p = 1). The concordance rate between software findings and EUS was 96.0%. Combining indirect indicator evaluation with tumor mass detection may improve small PDAC detection accuracy.
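Given duct and parenchyma segmentation masks on an axial slice, the D/P ratio reduces to an area ratio. A minimal sketch, assuming boolean 2D masks on the same pixel grid (the pixel area cancels out):

```python
import numpy as np

def dp_ratio(duct_mask, parenchyma_mask):
    """D/P ratio on one axial slice: cross-sectional area of the main
    pancreatic duct (MPD) divided by that of the pancreatic parenchyma.

    Both masks are boolean 2D arrays from the segmentation model; because
    both areas use the same pixel spacing, the ratio is dimensionless."""
    duct_area = np.count_nonzero(duct_mask)
    parenchyma_area = np.count_nonzero(parenchyma_mask)
    return duct_area / parenchyma_area
```

An elevated D/P ratio flags MPD dilation with parenchymal atrophy, the indirect sign that lets the system catch tumors too small to appear as distinct masses.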

Commercialization of medical artificial intelligence technologies: challenges and opportunities.

Li B, Powell D, Lee R

PubMed · Jul 18 2025
Artificial intelligence (AI) is already having a significant impact on healthcare. For example, AI-guided imaging can improve the diagnosis/treatment of vascular diseases, which affect over 200 million people globally. Recently, Chiu and colleagues (2024) developed an AI algorithm that supports nurses with no ultrasound training in diagnosing abdominal aortic aneurysms (AAA) with similar accuracy as ultrasound-trained physicians. This technology can therefore improve AAA screening; however, achieving clinical impact with new AI technologies requires careful consideration of commercialization strategies, including funding, compliance with safety and regulatory frameworks, health technology assessment, regulatory approval, reimbursement, and clinical guideline integration.

Enhanced Image Quality and Comparable Diagnostic Performance of Prostate Fast Bi-MRI with Deep Learning Reconstruction.

Shen L, Yuan Y, Liu J, Cheng Y, Liao Q, Shi R, Xiong T, Xu H, Wang L, Yang Z

PubMed · Jul 18 2025
To evaluate image quality and diagnostic performance of prostate biparametric MRI (bi-MRI) with deep learning reconstruction (DLR). This prospective study included 61 adult male urological patients undergoing prostate MRI with standard-of-care (SOC) and fast protocols. Sequences included T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) maps. DLR images were generated from FAST datasets. Three groups (SOC, FAST, DLR) were compared using: (1) five-point Likert scale, (2) signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), (3) lesion slope profiles, (4) dorsal capsule edge rise distance (ERD). PI-RADS scores were assigned to dominant lesions. ADC values were measured in histopathologically confirmed cases. Diagnostic performance was analyzed via receiver operating characteristic (ROC) curves (accuracy/sensitivity/specificity). Statistical tests included Friedman test, one-way ANOVA with post hoc analyses, and DeLong test for ROC comparisons (P<0.05). FAST scanning protocols reduced acquisition time by nearly half compared to the SOC scanning protocol. When compared to T2WI<sub>FAST</sub>, DLR significantly improved SNR, CNR, slope profile, and ERD (P < 0.05). Similarly, DLR significantly enhanced SNR, CNR, and image sharpness when compared to DWI<sub>FAST</sub> (P < 0.05). No significant differences were observed in PI-RADS scores and ADC values between groups (P > 0.05). The areas under the ROC curves, sensitivity, and specificity of ADC values for distinguishing benign and malignant lesions remained consistent (P > 0.05). DLR enhances image quality in fast prostate bi-MRI while preserving PI-RADS classification accuracy and ADC diagnostic performance.
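SNR and CNR conventions vary between papers; one common ROI-based definition is shown below as an assumption rather than this study's exact formula:

```python
import numpy as np

def snr_cnr(lesion_roi, background_roi, noise_roi):
    """ROI-based SNR and CNR from pixel samples.

    SNR = mean(lesion) / sd(noise)
    CNR = (mean(lesion) - mean(background)) / sd(noise)

    noise_roi is typically drawn from signal-free air or a uniform region;
    sample standard deviation (ddof=1) is used as the noise estimate."""
    noise_sd = np.std(noise_roi, ddof=1)
    snr = np.mean(lesion_roi) / noise_sd
    cnr = (np.mean(lesion_roi) - np.mean(background_roi)) / noise_sd
    return snr, cnr
```

Because both metrics share the same noise denominator, a denoising reconstruction such as DLR raises SNR and CNR together even when tissue contrast itself is unchanged.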

Deep learning reconstruction for improving image quality of pediatric abdomen MRI using a 3D T1 fast spoiled gradient echo acquisition.

Zucker EJ, Milshteyn E, Machado-Rivas FA, Tsai LL, Roberts NT, Guidon A, Gee MS, Victoria T

PubMed · Jul 18 2025
Deep learning (DL) reconstructions have shown utility for improving image quality of abdominal MRI in adult patients, but a paucity of literature exists in children. To compare image quality between three-dimensional fast spoiled gradient echo (SPGR) abdominal MRI acquisitions reconstructed conventionally and using a prototype method based on a commercial DL algorithm in a pediatric cohort. Pediatric patients (age < 18 years) who underwent abdominal MRI from 10/2023-3/2024 including gadolinium-enhanced accelerated 3D SPGR 2-point Dixon acquisitions (LAVA-Flex, GE HealthCare) were identified. Images were retrospectively generated using a prototype reconstruction method leveraging a commercial deep learning algorithm (AIR™ Recon DL, GE HealthCare) with the 75% noise reduction setting. For each case/reconstruction, three radiologists independently scored DL and non-DL image quality (overall and of selected structures) on a 5-point Likert scale (1-nondiagnostic, 5-excellent) and indicated reconstruction preference. The signal-to-noise ratio (SNR) and mean number of edges (inverse correlate of image sharpness) were also quantified. Image quality metrics and preferences were compared using Wilcoxon signed-rank, Fisher exact, and paired t-tests. Interobserver agreement was evaluated with the Kendall rank correlation coefficient (W). The final cohort consisted of 38 patients with mean ± standard deviation age of 8.6 ± 5.7 years, 23 males. Mean image quality scores for evaluated structures ranged from 3.8 ± 1.1 to 4.6 ± 0.6 in the DL group, compared to 3.1 ± 1.1 to 3.9 ± 0.6 in the non-DL group (all P < 0.001). All radiologists preferred DL in most cases (32-37/38, P < 0.001). There was a 2.3-fold increase in SNR and a 3.9% reduction in the mean number of edges in DL compared to non-DL images (both P < 0.001). In all scored anatomic structures except the spine and non-DL adrenals, interobserver agreement was moderate to substantial (W = 0.41-0.74, all P < 0.01).
In a broad spectrum of pediatric patients undergoing contrast-enhanced Dixon abdominal MRI acquisitions, the prototype deep learning reconstruction is generally preferred to conventional methods with improved image quality across a wide range of structures.
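An edge-count sharpness surrogate of the kind quantified above could be approximated by counting strong-gradient pixels; the gradient operator and the relative threshold below are assumptions, as the abstract does not specify the exact metric:

```python
import numpy as np

def mean_edge_count(image, threshold=0.1):
    """Fraction of pixels whose gradient magnitude exceeds a relative
    threshold, used as a crude surrogate for an edge-count metric.

    image: 2D float array (one slice); threshold is relative to the
    maximum gradient magnitude in the image (assumed convention)."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    return np.count_nonzero(grad > threshold * grad.max()) / grad.size
```

Noise inflates the count with spurious weak edges, which is one reading of why a denoised DL reconstruction can show fewer edges while looking sharper.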

Deep learning models for deriving optimised measures of fat and muscle mass from MRI.

Thomas B, Ali MA, Ali FMH, Chung A, Joshi M, Maiguma-Wilson S, Reiff G, Said H, Zalmay P, Berks M, Blackledge MD, O'Connor JPB

PubMed · Jul 17 2025
Fat and muscle mass are potential biomarkers of wellbeing and disease in oncology, but clinical measurement methods vary considerably. Here we evaluate the accuracy, precision, and ability to track change of multiple deep learning (DL) models that quantify fat and muscle mass from abdominal MRI. Specifically, subcutaneous fat (SF), intra-abdominal fat (VF), external muscle (EM), and psoas muscle (PM) were evaluated using 15 convolutional neural network (CNN)-based and 4 transformer-based deep learning model architectures. There was negligible difference in the accuracy of human observers and all deep learning models in delineating SF or EM. Both of these tissues had excellent repeatability of their delineation. VF was measured most accurately by the human observers, then by CNN-based models, which outperformed transformer-based models. In distinction, PM delineation accuracy and repeatability were poor for all assessments. Repeatability limits of agreement determined whether changes measured in individual patients reflected real change rather than test-retest variation. In summary, DL model accuracy and precision in delineating fat and muscle volumes vary between CNN-based and transformer-based models, between different tissues, and in some cases with gender. These factors should be considered when investigators deploy deep learning methods to estimate biomarkers of fat and muscle mass.
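Two building blocks of such an analysis, tissue volume from a binary segmentation mask and Bland-Altman repeatability limits of agreement, can be sketched as follows (a generic illustration, not the authors' pipeline):

```python
import numpy as np

def tissue_volume_ml(mask, voxel_spacing_mm):
    """Tissue volume in millilitres from a binary 3D segmentation mask.

    voxel_spacing_mm: (dz, dy, dx) voxel dimensions in mm; 1 mL = 1000 mm^3."""
    voxel_ml = float(np.prod(voxel_spacing_mm)) / 1000.0
    return np.count_nonzero(mask) * voxel_ml

def repeatability_loa(test, retest):
    """95% Bland-Altman limits of agreement for test-retest measurements.

    A change in an individual patient outside these limits is unlikely
    to be explained by measurement variation alone."""
    diff = np.asarray(test, float) - np.asarray(retest, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias - 1.96 * sd, bias + 1.96 * sd
```

Tight limits of agreement (as reported for SF and EM) mean small longitudinal changes are interpretable; wide limits (as for PM) mean only large changes can be trusted.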
