Page 6 of 2982972 results

An Adaptive Multi-Stage and Adjacent-Level Feature Integration Network for Brain Tumor Image Segmentation.

Zhou J, Wu Y, Xu Y, Liu W

PubMed · Aug 14 2025
The segmentation of brain tumor magnetic resonance imaging (MRI) plays a crucial role in assisting diagnosis, treatment planning, and disease progression evaluation. Convolutional neural networks (CNNs) and transformer-based methods have achieved significant progress due to their local and global feature extraction capabilities. However, similar to other medical image segmentation tasks, challenges remain in addressing issues such as blurred boundaries, small lesion volumes, and interwoven regions. General CNN and transformer approaches struggle to effectively resolve these issues. Therefore, a new multi-stage and adjacent-level feature integration network (MAI-Net) is introduced to overcome these challenges, thereby improving the overall segmentation accuracy. MAI-Net consists of dual-branch, multi-level structures and three innovative modules. The stage-level multi-scale feature extraction (SMFE) module focuses on capturing feature details from fine to coarse scales, improving detection of blurred edges and small lesions. The adjacent-level feature fusion (AFF) module facilitates information exchange across different levels, enhancing segmentation accuracy in complex regions as well as small volume lesions. Finally, the multi-stage feature fusion (MFF) module further integrates features from various levels to improve segmentation performance in complex regions. Extensive experiments on BraTS2020 and BraTS2021 datasets demonstrate that MAI-Net significantly outperforms existing methods in Dice and HD95 metrics. Furthermore, generalization experiments on a public ischemic stroke dataset confirm its robustness across different segmentation tasks. These results highlight the significant advantages of MAI-Net in addressing domain-specific challenges while maintaining strong generalization capabilities.
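The abstract evaluates MAI-Net with Dice and HD95. As a minimal, self-contained illustration of the Dice similarity coefficient (not the authors' code), a NumPy sketch on toy binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 2D masks standing in for tumor sub-region predictions.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 px
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # 16 px, overlap 3x3 = 9 px
print(round(dice_coefficient(a, b), 4))  # 2*9/(16+16) = 0.5625
```

HD95, the 95th-percentile Hausdorff distance, complements Dice by measuring boundary agreement and is typically computed from surface-point distance distributions.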

Development of a deep learning algorithm for radiographic detection of syndesmotic instability in ankle fractures with intraoperative validation.

Kubach J, Pogarell T, Uder M, Perl M, Betsch M, Pasurka M, Söllner S, Heiss R

PubMed · Aug 14 2025
Identifying syndesmotic instability in ankle fractures on conventional radiographs remains a major challenge. In this study we trained a convolutional neural network (CNN) to classify the fracture using the AO classification (AO-44 A/B/C) and to simultaneously detect syndesmotic instability on conventional radiographs, leveraging intraoperative stress testing as the gold standard. In this retrospective exploratory study we identified 700 patients with rotational ankle fractures at a university hospital from 2019 to 2024, from whom 1588 digital radiographs were extracted to train, validate, and test a CNN. Radiographs were labeled according to the therapy-decisive gold standard of the intraoperative hook test and the preoperatively determined AO classification from the surgical report. For internal validation and quality control, the algorithm's results were visualized using Guided Score Class activation maps (GSCAM). The AO-44 classification sensitivity over all subclasses was 91%. Furthermore, syndesmotic instability was identified with a sensitivity of 0.84 (95% confidence interval (CI) 0.78, 0.92) and a specificity of 0.8 (95% CI 0.67, 0.9). Consistent visualization results were obtained from the GSCAMs. The explainable deep-learning algorithm, trained on an intraoperative gold standard, reached a sensitivity of 0.84 for detecting syndesmotic instability while providing clinically interpretable outputs, suggesting potential for enhanced preoperative decision-making in complex ankle trauma.
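Sensitivity, specificity, and their confidence intervals are the abstract's headline numbers. The sketch below uses hypothetical confusion-matrix counts (tp=84, fn=16, tn=80, fp=20, chosen only to reproduce the point estimates) and a normal-approximation (Wald) interval; the paper's asymmetric CIs suggest a different interval method was used there:

```python
import math

def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity/specificity with Wald (normal-approximation) 95% CIs.
    Counts are hypothetical; the paper reports only estimates and CIs."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    def ci(p, n):
        half = z * math.sqrt(p * (1 - p) / n)
        return (max(0.0, p - half), min(1.0, p + half))
    return sens, ci(sens, tp + fn), spec, ci(spec, tn + fp)

sens, sens_ci, spec, spec_ci = sens_spec_with_ci(tp=84, fn=16, tn=80, fp=20)
print(round(sens, 2), round(spec, 2))  # 0.84 0.8
```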

Radiomics-based machine-learning method to predict extrahepatic metastasis in hepatocellular carcinoma after hepatectomy: a multicenter study.

He Y, Dong B, Hu B, Hao X, Xia N, Yang C, Dong Q, Zhu C

PubMed · Aug 14 2025
This study investigates the use of CT-based radiomics for predicting extrahepatic metastasis in hepatocellular carcinoma (HCC) following hepatectomy. We analyzed data from 374 patients from two centers (277 in the training cohort and 97 in an external validation cohort). Radiomic features were extracted from contrast-enhanced CT scans. Key features were identified using the least absolute shrinkage and selection operator (LASSO) to compute radiomics scores (radscore) for model development. A clinical model based on risk factors was also created. We developed a combined model integrating both radscore and clinical variables, constructing nomograms for personalized risk assessment. Model performance was compared via the Delong test, with calibration curves assessing prediction consistency. Decision curve analysis (DCA) was employed to assess the clinical utility and net benefit of the predictive models across different threshold probabilities, thereby evaluating their potential value in guiding clinical decision-making for extrahepatic metastasis. Radscore based on CT was an independent predictor of extrahepatic disease (p < 0.05). The combined model showed high predictive performance with an AUC of 87.2% (95% CI: 81.8%-92.6%) in the training group and 86.0% (95% CI: 69.4%-100%) in the validation group. Predictive performance of the combined model significantly outperformed both the radiomics and clinical models (p < 0.05). The DCA shows that the combined model has a higher net benefit in predicting extrahepatic metastases of HCC than the clinical model and radiomics model. The combined prediction model, utilizing CT radscore alongside clinical risk factors, effectively forecasts extrahepatic metastasis in HCC patients.
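Model comparison in the abstract rests on the area under the ROC curve and DeLong's test. AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the normalized Mann-Whitney U statistic, which is also the quantity DeLong's test operates on). A tiny pure-Python sketch with made-up scores:

```python
def auc_rank(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs for metastasis-positive vs. -negative patients.
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.5, 0.4, 0.8, 0.2]
print(auc_rank(pos, neg))  # 0.84375
```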

A novel unified Inception-U-Net hybrid gravitational optimization model (UIGO) incorporating automated medical image segmentation and feature selection for liver tumor detection.

Banerjee T, Singh DP, Kour P, Swain D, Mahajan S, Kadry S, Kim J

PubMed · Aug 14 2025
Segmenting liver tumors in medical imaging is pivotal for precise diagnosis, treatment, and evaluation of therapy outcomes. Even with modern imaging technologies, fully automated segmentation systems have not overcome the challenge posed by the diversity in shape, size, and texture of liver tumors, and these limitations often hinder clinicians from making timely and accurate decisions. This study addresses these issues with the development of UIGO, a new deep learning model that merges U-Net and Inception networks and incorporates advanced feature selection and optimization strategies. The goals of UIGO include achieving highly precise segmentation results while keeping computational requirements low enough for real-world clinical use. Publicly available liver tumor segmentation datasets were used for testing the model: LiTS (Liver Tumor Segmentation Challenge), CHAOS (Combined Healthy Abdominal Organ Segmentation), and 3D-IRCADb1 (3D-IRCAD liver dataset). With tumor shapes and sizes ranging across different imaging modalities such as CT and MRI, these datasets ensured comprehensive testing of UIGO's performance in diverse clinical scenarios. The experimental outcomes show the effectiveness of UIGO, with a segmentation accuracy of 99.93%, an AUC score of 99.89%, a Dice coefficient of 0.997, and an IoU of 0.998. UIGO outperformed other contemporary liver tumor segmentation techniques, indicating its potential to help clinicians deliver precise and prompt evaluations at lower computational expense. This study underscores the effort toward streamlined, dependable, and clinically useful tools for liver tumor segmentation in medical imaging.
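Dice and IoU, both reported above, are deterministically related for the same pair of masks: Dice = 2·IoU/(1 + IoU), which implies IoU ≤ Dice — a handy sanity check when reading reported metrics. A NumPy sketch on toy masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union (Jaccard index) of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def dice_from_iou(j):
    # Identity: Dice = 2J / (1 + J), so Dice >= IoU always.
    return 2 * j / (1 + j)

a = np.zeros((10, 10), dtype=bool); a[1:7, 1:7] = True   # 36 px
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True   # 36 px, overlap 4x4 = 16 px
j = iou(a, b)  # 16 / 56
print(round(j, 4), round(dice_from_iou(j), 4))  # 0.2857 0.4444
```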

Automatic segmentation of cone beam CT images using treatment planning CT images in patients with prostate cancer.

Takayama Y, Kadoya N, Yamamoto T, Miyasaka Y, Kusano Y, Kajikawa T, Tomori S, Katsuta Y, Tanaka S, Arai K, Takeda K, Jingu K

PubMed · Aug 14 2025
Cone-beam computed tomography-based online adaptive radiotherapy (CBCT-based online ART) is currently used in clinical practice; however, deep learning-based segmentation of CBCT images remains challenging. Previous studies generated CBCT datasets for segmentation by adding contours outside clinical practice or by synthesizing tissue contrast-enhanced diagnostic images paired with CBCT images. This study aimed to improve CBCT segmentation by matching the treatment planning CT (tpCT) image quality to CBCT images without altering the tpCT image or its contours. A deep-learning-based CBCT segmentation model was trained for the male pelvis using only the tpCT dataset. To bridge the quality gap between tpCT and routine CBCT images, an artificial pseudo-CBCT dataset was generated by applying Gaussian noise and Fourier domain adaptation (FDA) to 80 tpCT datasets (the hybrid FDA method). A five-fold cross-validation approach was used for model training. For comparison, atlas-based segmentation was performed with a registered tpCT dataset. The Dice similarity coefficient (DSC) was used to assess contour quality between the model-predicted and reference manual contours. The average DSC values for the clinical target volume, bladder, and rectum using the hybrid FDA method were 0.71 ± 0.08, 0.84 ± 0.08, and 0.78 ± 0.06, respectively. By comparison, the values for the model trained on plain tpCT were 0.40 ± 0.12, 0.17 ± 0.21, and 0.18 ± 0.14, and for the atlas-based model 0.66 ± 0.13, 0.59 ± 0.16, and 0.66 ± 0.11, respectively. The segmentation model using the hybrid FDA method demonstrated significantly higher accuracy than models trained on plain tpCT datasets and those using atlas-based segmentation.
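The hybrid FDA method degrades tpCT images toward CBCT quality. Below is a minimal sketch of the Fourier domain adaptation step, swapping the low-frequency amplitude spectrum of a source slice for that of a reference while keeping the source phase; the band width `beta` and the random stand-in images are assumptions, and the paper additionally adds Gaussian noise:

```python
import numpy as np

def fda_amplitude_swap(src, ref, beta=0.05):
    """Replace the central (low-frequency) amplitude band of `src`
    with that of `ref`, keeping the phase of `src`."""
    fs = np.fft.fftshift(np.fft.fft2(src.astype(float)))
    fr = np.fft.fftshift(np.fft.fft2(ref.astype(float)))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_r = np.abs(fr)
    h, w = src.shape
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    ch, cw = h // 2, w // 2
    amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = amp_r[ch - bh:ch + bh, cw - bw:cw + bw]
    mixed = amp_s * np.exp(1j * pha_s)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(0)
tpct = rng.normal(size=(64, 64))   # stand-in for a planning-CT slice
cbct = rng.normal(size=(64, 64))   # stand-in for a CBCT slice
pseudo = fda_amplitude_swap(tpct, cbct, beta=0.05)
print(pseudo.shape)  # (64, 64)
```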

Ultrasound Phase Aberrated Point Spread Function Estimation with Convolutional Neural Network: Simulation Study.

Shen WH, Lin YA, Li ML

PubMed · Aug 13 2025
Ultrasound imaging systems rely on accurate point spread function (PSF) estimation to support advanced image quality enhancement techniques such as deconvolution and speckle reduction. Phase aberration, caused by sound speed inhomogeneity within biological tissue, is inevitable in ultrasound imaging. It distorts the PSF by increasing sidelobe level and introducing asymmetric amplitude, making PSF estimation under phase aberration highly challenging. In this work, we propose a deep learning framework for estimating phase-aberrated PSFs using U-Net and complex U-Net architectures, operating on RF and complex k-space data, respectively, with the latter demonstrating superior performance. Synthetic phase aberration data, generated using the near-field phase screen model, is employed to train the networks. We evaluate various loss functions and find that log-compressed B-mode perceptual loss achieves the best performance, accurately predicting both the mainlobe and near sidelobe regions of the PSF. Simulation results validate the effectiveness of our approach in estimating PSFs under varying levels of phase aberration. Furthermore, we demonstrate that more accurate PSF estimation improves performance in a downstream phase aberration correction task, highlighting the broader utility of the proposed method.
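The near-field phase screen model used to synthesize training data treats aberration as per-element phase errors across the aperture. A 1-D toy sketch (sizes and the Gaussian screen are illustrative assumptions, not the paper's simulation setup) showing that a phase screen degrades the PSF, spreading energy out of the mainlobe into sidelobes:

```python
import numpy as np

def psf_with_phase_screen(n=64, aperture=16, aberration_std=0.0, seed=0):
    """Toy model: a 1-D aperture with per-element phase errors;
    the far-field PSF is |FFT|^2 of the aperture function, peak-normalized."""
    rng = np.random.default_rng(seed)
    pupil = np.zeros(n, dtype=complex)
    screen = rng.normal(0.0, aberration_std, aperture)  # phase errors (radians)
    pupil[:aperture] = np.exp(1j * screen)
    psf = np.abs(np.fft.fft(pupil)) ** 2
    return psf / psf.max()

clean = psf_with_phase_screen(aberration_std=0.0)
aberr = psf_with_phase_screen(aberration_std=1.0)
# After peak normalization, aberration leaves relatively more energy
# outside the mainlobe, so the aberrated PSF's total sum is larger.
print(clean.argmax(), aberr.sum() > clean.sum())
```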

Exploring Radiologists' Use of AI Chatbots for Assistance in Image Interpretation: Patterns of Use and Trust Evaluation.

Alarifi M

PubMed · Aug 13 2025
This study investigated radiologists' perceptions of AI-generated, patient-friendly radiology reports across three modalities: MRI, CT, and mammogram/ultrasound. The evaluation focused on report correctness, completeness, terminology complexity, and emotional impact. Seventy-nine radiologists from four major Saudi Arabian hospitals assessed AI-simplified versions of clinical radiology reports. Each participant reviewed one report from each modality and completed a structured questionnaire covering factual correctness, completeness, terminology complexity, and emotional impact. A structured and detailed prompt was used to guide ChatGPT-4 in generating the reports, which included clear findings, a lay summary, a glossary, and clarification of ambiguous elements. Statistical analyses included descriptive summaries, Friedman tests, and Pearson correlations. Radiologists rated mammogram reports highest for correctness (M = 4.22), followed by CT (4.05) and MRI (3.95). Completeness scores followed a similar trend. Statistically significant differences were found in correctness (χ²(2) = 17.37, p < 0.001) and completeness (χ²(2) = 13.13, p = 0.001). Anxiety and complexity ratings were moderate, with MRI reports linked to slightly higher concern. A weak positive correlation emerged between radiologists' experience and mammogram correctness ratings (r = .235, p = .037). Radiologists expressed overall support for AI-generated simplified radiology reports when created using a structured prompt that includes summaries, glossaries, and clarification of ambiguous findings. While mammography and CT reports were rated favorably, MRI reports showed higher emotional impact, highlighting a need for clearer and more emotionally supportive language.
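The Friedman test used above compares k related ratings per subject via within-subject ranks: χ²_F = 12/(nk(k+1))·ΣR_j² − 3n(k+1). A pure-Python sketch on made-up ratings (average ranks for ties, without the tie-correction factor — a simplification; the paper's data and software are unknown):

```python
def friedman_chi2(rows):
    """Friedman chi-square for k related samples.
    `rows` = per-subject lists of k scores (here: one rating per modality)."""
    n, k = len(rows), len(rows[0])
    rank_sums = [0.0] * k
    for row in rows:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:                      # assign average ranks to tie groups
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for j2 in range(k):
            rank_sums[j2] += ranks[j2]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical correctness ratings: (MRI, CT, mammogram) per radiologist.
ratings = [(3, 4, 5), (4, 4, 5), (3, 5, 4), (4, 5, 5), (3, 4, 4)]
print(round(friedman_chi2(ratings), 3))  # 6.3
```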

Economic Evaluations and Equity in the Use of Artificial Intelligence in Imaging Examinations for Medical Diagnosis in People With Dermatological, Neurological, and Pulmonary Diseases: Systematic Review.

Santana GO, Couto RM, Loureiro RM, Furriel BCRS, de Paula LGN, Rother ET, de Paiva JPQ, Correia LR

PubMed · Aug 13 2025
Health care systems around the world face numerous challenges. Recent advances in artificial intelligence (AI) have offered promising solutions, particularly in diagnostic imaging. This systematic review evaluated the economic feasibility of AI in real-world diagnostic imaging scenarios, specifically for dermatological, neurological, and pulmonary diseases. The central question was whether the use of AI in these diagnostic assessments improves economic outcomes and promotes equity in health care systems. The review has 2 main components: economic evaluation and equity assessment. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) tool to ensure adherence to best practices in systematic reviews. The protocol was registered with PROSPERO (International Prospective Register of Systematic Reviews), and we followed the PRISMA-E (Preferred Reporting Items for Systematic Reviews and Meta-Analyses - Equity Extension) guidelines for equity. Scientific articles reporting on economic evaluations or equity considerations related to the use of AI-based tools in diagnostic imaging in dermatology, neurology, or pulmonology were included. The search was conducted in the PubMed, Embase, Scopus, and Web of Science databases. Methodological quality was assessed using the following checklists: CHEC (Consensus on Health Economic Criteria) for economic evaluations, EPHPP (Effective Public Health Practice Project) for equity evaluation studies, and Welte for transferability. The review identified 9 publications within the scope of the research question, with sample sizes ranging from 122 to over 1.3 million participants. Most studies addressed economic evaluation (88.9%), and pulmonary diseases were the most common focus (n=6; 66.6%), followed by neurological diseases (n=2; 22.3%) and dermatological diseases (n=1; 11.1%). These studies had an average quality score of 87.5% on the CHEC checklist. Only 2 studies were found to be transferable to Brazil and other countries with a similar health context. The economic evaluation revealed that 87.5% of studies highlighted the benefits of using AI in dermatology, neurology, and pulmonology, with significant cost-effectiveness outcomes; the most advantageous was a negative cost-effectiveness ratio of -US $27,580 per QALY (quality-adjusted life year) for melanoma diagnosis, indicating substantial cost savings in this scenario. The only study assessing equity, based on 129,819 radiographic images, identified AI-assisted underdiagnosis, particularly in certain subgroups defined by gender, ethnicity, and socioeconomic status. This review underscores the importance of transparency in the description of AI tools and of representativeness of population subgroups to mitigate health disparities. As AI is rapidly being integrated into health care, detailed assessments are essential to ensure that benefits reach all patients, regardless of sociodemographic factors.
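The headline figure above is an incremental cost-effectiveness ratio (ICER): extra cost divided by extra QALYs gained. The numbers below are entirely hypothetical, chosen only to illustrate how a negative ratio like the reported -US $27,580/QALY arises (lower cost together with a QALY gain means the new strategy dominates):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.
    Negative ICER with higher effectiveness = cost-saving and better."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical costs/QALYs for an AI-assisted vs. standard melanoma pathway.
print(icer(cost_new=86_210, cost_old=100_000, qaly_new=10.5, qaly_old=10.0))  # -27580.0
```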

Pathology-Guided AI System for Accurate Segmentation and Diagnosis of Cervical Spondylosis.

Zhang Q, Chen X, He Z, Wu L, Wang K, Sun J, Shen H

PubMed · Aug 13 2025
Cervical spondylosis, a complex and prevalent condition, demands precise and efficient diagnostic techniques for accurate assessment. While MRI offers detailed visualization of cervical spine anatomy, manual interpretation remains labor-intensive and prone to error. To address this, we developed an innovative AI-assisted Expert-based Diagnosis System that automates both segmentation and diagnosis of cervical spondylosis using MRI. Leveraging multi-center datasets of cervical MRI images from patients with cervical spondylosis, our system features a pathology-guided segmentation model capable of accurately segmenting key cervical anatomical structures. The segmentation is followed by an expert-based diagnostic framework that automates the calculation of critical clinical indicators. Our segmentation model achieved an impressive average Dice coefficient exceeding 0.90 across four cervical spinal anatomies and demonstrated enhanced accuracy in herniation areas. Diagnostic evaluation further showcased the system's precision, with the lowest mean average errors (MAE) for the C2-C7 Cobb angle and the Maximum Spinal Cord Compression (MSCC) coefficient. In addition, our method delivered high accuracy, precision, recall, and F1 scores in herniation localization, K-line status assessment, T2 hyperintensity detection, and Kang grading. Comparative analysis and external validation demonstrate that our system outperforms existing methods, establishing a new benchmark for segmentation and diagnostic tasks for cervical spondylosis.
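One of the clinical indicators the system automates is the C2-C7 Cobb angle, the angle between the endplate lines of C2 and C7. A small sketch using illustrative 2-D sagittal-plane direction vectors (not the authors' landmark pipeline):

```python
import math

def cobb_angle(v_upper, v_lower):
    """Angle in degrees between two endplate direction vectors,
    e.g. the superior endplate of C2 and the inferior endplate of C7."""
    ax, ay = v_upper
    bx, by = v_lower
    dot = ax * bx + ay * by
    na = math.hypot(ax, ay)
    nb = math.hypot(bx, by)
    # Clamp guards against rounding just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

# Endplate lines tilted 5° and -10° from horizontal → Cobb angle 15°.
u = (math.cos(math.radians(5)), math.sin(math.radians(5)))
w = (math.cos(math.radians(-10)), math.sin(math.radians(-10)))
print(round(cobb_angle(u, w), 1))  # 15.0
```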

CT-Based radiomics and deep learning for the preoperative prediction of peritoneal metastasis in ovarian cancers.

Liu Y, Yin H, Li J, Wang Z, Wang W, Cui S

PubMed · Aug 13 2025
To develop a CT-based deep learning radiomics nomogram (DLRN) for the preoperative prediction of peritoneal metastasis (PM) in patients with ovarian cancer (OC). A total of 296 patients with OC were randomly divided into a training dataset (N = 207) and a test dataset (N = 89). Radiomics features and DL features were extracted from the CT images of each patient. Specifically, radiomics features were extracted from the 3D tumor regions, while DL features were extracted from the 2D slice with the largest tumor region of interest (ROI). The least absolute shrinkage and selection operator (LASSO) algorithm was used to select radiomics and DL features, and the radiomics score (Radscore) and DL score (Deepscore) were calculated. Multivariate logistic regression was employed to construct the clinical model. The important clinical factors and the radiomics and DL features were integrated to build the DLRN, and nomograms were constructed for personalized risk assessment. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and DeLong's test. Nine radiomics features and 10 DL features were selected. Carbohydrate antigen 125 (CA-125) was the independent clinical predictor. In the training dataset, the AUC values of the clinical, radiomics, and DL models were 0.618, 0.842, and 0.860, respectively. In the test dataset, the AUC values of these models were 0.591, 0.819, and 0.917, respectively. The DLRN showed better performance than the other models in both the training and test datasets, with AUCs of 0.943 and 0.951, respectively. Decision curve analysis and calibration curves showed that the DLRN provided relatively high clinical benefit in both datasets. The DLRN demonstrated superior performance in predicting preoperative PM in patients with OC. This model offers a highly accurate and noninvasive tool for preoperative prediction, with substantial clinical potential to provide critical information for individualized treatment planning, thereby enabling more precise and effective management of OC patients.
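The DLRN combines Radscore, Deepscore, and CA-125 in a multivariate logistic model. A sketch of such a combined score with made-up coefficients (the paper fits its weights by logistic regression; nothing below reproduces them):

```python
import math

def combined_risk(radscore, deepscore, ca125_elevated, coefs=(-2.0, 1.5, 1.8, 0.9)):
    """DLRN-style combined score: intercept plus weighted radscore,
    deepscore, and a CA-125 indicator, squashed to a probability.
    All coefficients here are illustrative assumptions."""
    b0, b1, b2, b3 = coefs
    z = b0 + b1 * radscore + b2 * deepscore + b3 * (1 if ca125_elevated else 0)
    return 1.0 / (1.0 + math.exp(-z))

p = combined_risk(radscore=1.2, deepscore=0.8, ca125_elevated=True)
print(0.0 < p < 1.0, round(p, 3))
```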