Contrast-enhanced ultrasound radiomics model for predicting axillary lymph node metastasis and prognosis in breast cancer: a multicenter study.

Li SY, Li YM, Fang YQ, Jin ZY, Li JK, Zou XM, Huang SS, Niu RL, Fu NQ, Shao YH, Gong XT, Li MR, Wang W, Wang ZL

PubMed · Aug 14, 2025
To construct a multimodal ultrasound (US) radiomics model for predicting axillary lymph node metastasis (ALNM) in breast cancer and to evaluate its value in predicting ALNM and patient prognosis. From March 2014 to December 2022, data from 682 breast cancer patients at four hospitals were collected, including preoperative grayscale US, color Doppler flow imaging (CDFI), and contrast-enhanced ultrasound (CEUS) imaging data, as well as clinical information. Data from the First Medical Center of PLA General Hospital were used as the training and internal validation sets, while data from Peking University First Hospital, the Cancer Hospital of the Chinese Academy of Medical Sciences, and the Fourth Medical Center of PLA General Hospital were used as the external validation set. LASSO regression was employed to select radiomic features (RFs), and eight machine learning algorithms were used to construct radiomic models based on US, CDFI, and CEUS. The models' performance in predicting ALNM was assessed to identify the optimal model. Meanwhile, a radiomics score (Radscore) was computed and integrated with immunoinflammatory markers to forecast disease-free survival (DFS) in breast cancer patients. Follow-up was conducted by telephone and in-person hospital visits. Cox regression was used to identify prognostic factors, and clinical-imaging models were developed accordingly. Model performance was evaluated using the C-index, receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). In the training cohort (n = 400), 40% of patients had ALNM, and the mean age was 55 ± 10 years. The US + CDFI + CEUS-based radiomics model achieved areas under the curve (AUCs) of 0.88, 0.81, and 0.77 for predicting N0 versus N+ (≥ 1) in the training, internal, and external validation sets, respectively, outperforming the US-only model (P < 0.05). For distinguishing N+ (1-2) from N+ (≥ 3), the model achieved AUCs of 0.89, 0.74, and 0.75. Combining radiomics scores with clinical immunoinflammatory markers (platelet count and neutrophil-to-lymphocyte ratio) yielded a clinical-radiomics model for predicting DFS, with C-indices of 0.80, 0.73, and 0.79 across the three cohorts. In the external validation cohort, the clinical-radiomics model achieved higher AUCs for predicting 2-, 3-, and 5-year DFS than the clinical model alone (2-year: 0.79 vs. 0.66; 3-year: 0.83 vs. 0.70; 5-year: 0.78 vs. 0.64; all P < 0.05). Calibration and decision curve analyses demonstrated good model agreement and clinical utility. The multimodal ultrasound radiomics model based on US, CDFI, and CEUS could effectively predict ALNM in breast cancer. Furthermore, combining radiomics with immunoinflammatory markers might predict the DFS of breast cancer patients to some extent.
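The abstract describes LASSO-based feature selection followed by a weighted Radscore. A minimal sketch of that step is shown below, assuming a hypothetical feature table (radiomic_features.csv) and label file; the authors' exact preprocessing and model settings are not reproduced here.

```python
# Sketch of LASSO radiomic feature selection and Radscore computation.
# File names, CV folds, and LassoCV settings are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

# feature_df: one row per lesion, columns = radiomic features from US/CDFI/CEUS
feature_df = pd.read_csv("radiomic_features.csv")        # hypothetical file
alnm_labels = pd.read_csv("labels.csv")["alnm"].values    # 1 = ALNM, 0 = no ALNM

X = StandardScaler().fit_transform(feature_df.values)
lasso = LassoCV(cv=5, random_state=0).fit(X, alnm_labels)

selected = feature_df.columns[lasso.coef_ != 0]
print(f"{len(selected)} features retained by LASSO")

# Radscore: intercept plus the weighted sum of the retained features
radscore = lasso.intercept_ + X @ lasso.coef_
```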

Multimodal artificial intelligence for subepithelial lesion classification and characterization: a multicenter comparative study (with video).

Li J, Jing X, Zhang Q, Wang X, Wang L, Shan J, Zhou Z, Fan L, Gong X, Sun X, He S

PubMed · Aug 14, 2025
Subepithelial lesions (SELs) present significant diagnostic challenges in gastrointestinal endoscopy, particularly in differentiating malignant types, such as gastrointestinal stromal tumors (GISTs) and neuroendocrine tumors, from benign types like leiomyomas. Misdiagnosis can lead to unnecessary interventions or delayed treatment. To address this challenge, we developed ECMAI-WME, a parallel-fusion deep learning model integrating white light endoscopy (WLE) and microprobe endoscopic ultrasonography (EUS), to improve SEL classification and lesion characterization. A total of 523 SELs from four hospitals were used to develop serial and parallel fusion AI models. The parallel model, which demonstrated superior performance, was designated ECMAI-WME. The model was tested on an external validation cohort (n = 88) and a multicenter test cohort (n = 274). Diagnostic performance, lesion characterization, and clinical decision-making support were comprehensively evaluated and compared with endoscopists' performance. The ECMAI-WME model significantly outperformed endoscopists in diagnostic accuracy (96.35% vs. 63.87-86.13%, p < 0.001) and treatment decision-making accuracy (96.35% vs. 78.47-86.13%, p < 0.001). It achieved 98.72% accuracy in internal validation, 94.32% in external validation, and 96.35% in multicenter testing. For distinguishing gastric GISTs from leiomyomas, the model reached 91.49% sensitivity, 100% specificity, and 96.38% accuracy. Lesion characteristics were identified with a mean accuracy of 94.81% (range: 90.51-99.27%). The model maintained robust performance despite class imbalance, as confirmed by five complementary analyses. Subgroup analyses showed consistent accuracy across lesion size, location, and type (p > 0.05), demonstrating strong generalizability. The ECMAI-WME model demonstrates excellent diagnostic performance and robustness in multiclass SEL classification and characterization, supporting its potential for real-time deployment to enhance diagnostic consistency and guide clinical decision-making.
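A hedged sketch of the parallel-fusion idea is given below: two image encoders, one for WLE and one for microprobe EUS, whose pooled features are concatenated before a shared classification head. The backbone (ResNet-18) and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Parallel (late) fusion of two imaging modalities for SEL classification.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ParallelFusionSEL(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.wle_branch = resnet18(weights=None)
        self.eus_branch = resnet18(weights=None)
        feat_dim = self.wle_branch.fc.in_features          # 512 for ResNet-18
        self.wle_branch.fc = nn.Identity()                  # keep pooled features
        self.eus_branch.fc = nn.Identity()
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, wle_img: torch.Tensor, eus_img: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.wle_branch(wle_img), self.eus_branch(eus_img)], dim=1)
        return self.classifier(fused)

# Example forward pass on dummy tensors (batch of 2, 3-channel 224x224 images)
model = ParallelFusionSEL(num_classes=5)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```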

AI post-intervention operational and functional outcomes prediction in ischemic stroke patients using MRIs.

Wittrup E, Reavey-Cantwell J, Pandey AS, Rivet Ii DJ, Najarian K

PubMed · Aug 14, 2025
Despite the potential clinical utility for acute ischemic stroke (AIS) patients, predicting short-term operational outcomes such as length of stay (LOS) and long-term functional outcomes such as the 90-day modified Rankin Scale (mRS) remains a challenge, with limited current clinical guidance on expected patient trajectories. Machine learning approaches have increasingly aimed to bridge this gap, often using admission-based clinical features; yet the integration of imaging biomarkers remains underexplored, especially whole 2.5D image fusion using advanced deep learning techniques. This study introduces a novel method that leverages autoencoders to integrate 2.5D diffusion-weighted imaging (DWI) with clinical features for refined outcome prediction. Results on a comprehensive dataset of AIS patients demonstrate that our autoencoder-based method performs comparably to traditional convolutional neural network (CNN) image fusion methods and to clinical data alone (LOS > 8 days: AUC 0.817, AUPRC 0.573, F1-score 0.552; 90-day mRS > 2: AUC 0.754, AUPRC 0.685, F1-score 0.626). This novel integration of imaging and clinical data for post-intervention stroke prognosis has numerous computational and operational advantages over traditional image fusion methods. While further validation of the presented models is necessary before adoption, this approach aims to enhance personalized patient management and operational decision-making in healthcare settings.
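The sketch below illustrates, under stated assumptions, the general fusion pattern described here: a convolutional encoder compresses a 2.5D DWI stack (adjacent slices as channels) into a latent vector, which is concatenated with tabular clinical features before a small outcome head. Channel counts, latent size, and slice count are illustrative, not the authors' specification.

```python
# Autoencoder-style latent fusion of 2.5D DWI with clinical features.
import torch
import torch.nn as nn

class DWIEncoder(nn.Module):
    def __init__(self, in_slices: int = 5, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_slices, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class OutcomeHead(nn.Module):
    def __init__(self, latent_dim: int = 64, n_clinical: int = 20):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + n_clinical, 32), nn.ReLU(),
            nn.Linear(32, 1),  # e.g. binary target such as LOS > 8 days or mRS > 2
        )

    def forward(self, latent, clinical):
        return self.mlp(torch.cat([latent, clinical], dim=1))

encoder, head = DWIEncoder(), OutcomeHead()
dwi_stack = torch.randn(4, 5, 128, 128)   # 4 patients, 5 adjacent DWI slices each
clinical = torch.randn(4, 20)             # 4 patients, 20 clinical features
prob = torch.sigmoid(head(encoder(dwi_stack), clinical))
```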

An effective brain stroke diagnosis strategy based on feature extraction and hybrid classifier.

Elsayed MS, Saleh GA, Saleh AI, Khalil AT

PubMed · Aug 14, 2025
Stroke is a leading cause of death and long-term disability worldwide, and early detection remains a significant clinical challenge. This study proposes an Effective Brain Stroke Diagnosis Strategy (EBDS), a hybrid deep learning framework that integrates a Vision Transformer (ViT) and VGG16 to enable accurate and interpretable stroke detection from CT images. The model was trained and evaluated using a publicly available dataset from Kaggle, achieving a test accuracy of 99.6%, a precision of 1.00 for normal cases and 0.98 for stroke cases, a recall of 0.99 for normal cases and 1.00 for stroke cases, and an overall F1-score of 0.99. These results demonstrate the robustness and reliability of the EBDS model, which outperforms several recent state-of-the-art methods. To enhance clinical trust, the model incorporates explainability techniques, such as Grad-CAM and LIME, which provide visual insights into its decision-making process. The EBDS framework is designed for real-time application in emergency settings, offering both high diagnostic performance and interpretability. This work addresses a critical research gap in early brain stroke diagnosis and contributes a scalable, explainable, and clinically relevant solution for medical imaging diagnostics.
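A minimal sketch of one plausible way to combine ViT and VGG16 features follows: global transformer features and local CNN features are concatenated and fed to a binary head. The backbone variants, fusion by concatenation, and head sizes are assumptions; the paper's exact architecture is not shown here.

```python
# Hybrid ViT + VGG16 feature fusion for binary stroke/normal classification.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, vgg16

class HybridStrokeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.vit = vit_b_16(weights=None)
        self.vit.heads = nn.Identity()          # 768-dim class-token features
        self.cnn = vgg16(weights=None)
        self.cnn.classifier = nn.Identity()     # 25088-dim pooled conv features
        self.head = nn.Sequential(
            nn.Linear(768 + 25088, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 2),                  # normal vs. stroke
        )

    def forward(self, ct_img: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vit(ct_img), self.cnn(ct_img)], dim=1)
        return self.head(fused)

model = HybridStrokeNet()
logits = model(torch.randn(2, 3, 224, 224))   # ViT-B/16 expects 224x224 inputs
```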

An Adaptive Multi-Stage and Adjacent-Level Feature Integration Network for Brain Tumor Image Segmentation.

Zhou J, Wu Y, Xu Y, Liu W

PubMed · Aug 14, 2025
The segmentation of brain tumor magnetic resonance imaging (MRI) plays a crucial role in assisting diagnosis, treatment planning, and disease progression evaluation. Convolutional neural networks (CNNs) and transformer-based methods have achieved significant progress due to their local and global feature extraction capabilities. However, as in other medical image segmentation tasks, challenges remain in addressing blurred boundaries, small lesion volumes, and interwoven regions, which general CNN and transformer approaches struggle to resolve effectively. Therefore, a new multi-stage and adjacent-level feature integration network (MAI-Net) is introduced to overcome these challenges and improve overall segmentation accuracy. MAI-Net consists of dual-branch, multi-level structures and three innovative modules. The stage-level multi-scale feature extraction (SMFE) module focuses on capturing feature details from fine to coarse scales, improving detection of blurred edges and small lesions. The adjacent-level feature fusion (AFF) module facilitates information exchange across different levels, enhancing segmentation accuracy in complex regions as well as for small-volume lesions. Finally, the multi-stage feature fusion (MFF) module further integrates features from various levels to improve segmentation performance in complex regions. Extensive experiments on the BraTS2020 and BraTS2021 datasets demonstrate that MAI-Net significantly outperforms existing methods on the Dice and HD95 metrics. Furthermore, generalization experiments on a public ischemic stroke dataset confirm its robustness across different segmentation tasks. These results highlight the significant advantages of MAI-Net in addressing domain-specific challenges while maintaining strong generalization capabilities.
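For intuition, the block below is an illustrative multi-scale feature extraction module (not the authors' exact SMFE design): parallel convolutions with different receptive fields are concatenated and fused, so fine details such as small lesions and blurred edges are captured alongside coarser context.

```python
# Generic stage-level multi-scale block: parallel kernels/dilations, then fusion.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 4
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=2, dilation=2),
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=4, dilation=4),
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * branch_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiScaleBlock(in_ch=64, out_ch=128)
out = block(torch.randn(1, 64, 96, 96))       # e.g. one MRI feature map
```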

Development of a deep learning algorithm for radiographic detection of syndesmotic instability in ankle fractures with intraoperative validation.

Kubach J, Pogarell T, Uder M, Perl M, Betsch M, Pasurka M, Söllner S, Heiss R

PubMed · Aug 14, 2025
Identifying syndesmotic instability in ankle fractures on conventional radiographs remains a major challenge. In this study, we trained a convolutional neural network (CNN) to classify the fracture according to the AO classification (AO-44 A/B/C) and to simultaneously detect syndesmotic instability on conventional radiographs, using intraoperative stress testing as the gold standard. In this retrospective exploratory study, we identified 700 patients with rotational ankle fractures at a university hospital from 2019 to 2024, from whom 1588 digital radiographs were extracted to train, validate, and test a CNN. Radiographs were labeled based on the therapy-decisive gold standard of the intraoperative hook test and the preoperatively determined AO classification from the surgical report. For internal validation and quality control, the algorithm's results were visualized using Guided Score Class Activation Maps (GSCAM). The AO-44 classification sensitivity across all subclasses was 91%. Furthermore, syndesmotic instability could be identified with a sensitivity of 0.84 (95% confidence interval (CI) 0.78, 0.92) and a specificity of 0.80 (95% CI 0.67, 0.90). Consistent visualization results were obtained from the GSCAMs. The explainable deep learning algorithm, trained on an intraoperative gold standard, showed a sensitivity of 0.84 for detecting syndesmotic instability, thus providing clinically interpretable outputs and suggesting potential for enhanced preoperative decision-making in complex ankle trauma.
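The dual-task setup described here can be sketched as one shared image backbone with two heads: a multi-class head for the AO-44 subclass and a binary head for the hook-test-derived instability label. The ResNet-18 backbone and head sizes below are assumptions for illustration only.

```python
# Shared-backbone, two-head network for AO-44 subclass and syndesmotic instability.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class AnkleFractureNet(nn.Module):
    def __init__(self, n_ao_classes: int = 3):   # AO-44 A / B / C
        super().__init__()
        self.backbone = resnet18(weights=None)
        feat_dim = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()
        self.ao_head = nn.Linear(feat_dim, n_ao_classes)
        self.syndesmosis_head = nn.Linear(feat_dim, 1)   # instability logit

    def forward(self, radiograph: torch.Tensor):
        feats = self.backbone(radiograph)
        return self.ao_head(feats), self.syndesmosis_head(feats)

model = AnkleFractureNet()
ao_logits, instability_logit = model(torch.randn(2, 3, 224, 224))
```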

Radiomics-based machine-learning method to predict extrahepatic metastasis in hepatocellular carcinoma after hepatectomy: a multicenter study.

He Y, Dong B, Hu B, Hao X, Xia N, Yang C, Dong Q, Zhu C

PubMed · Aug 14, 2025
This study investigates the use of CT-based radiomics for predicting extrahepatic metastasis in hepatocellular carcinoma (HCC) after hepatectomy. We analyzed data from 374 patients from two centers (277 in the training cohort and 97 in an external validation cohort). Radiomic features were extracted from contrast-enhanced CT scans. Key features were identified using the least absolute shrinkage and selection operator (LASSO) to compute radiomics scores (radscore) for model development. A clinical model based on risk factors was also created. We then developed a combined model integrating the radscore and clinical variables and constructed nomograms for personalized risk assessment. Model performance was compared using the DeLong test, with calibration curves assessing prediction consistency. Decision curve analysis (DCA) was employed to assess the clinical utility and net benefit of the predictive models across different threshold probabilities, thereby evaluating their potential value in guiding clinical decision-making for extrahepatic metastasis. The CT-based radscore was an independent predictor of extrahepatic disease (p < 0.05). The combined model showed high predictive performance, with an AUC of 87.2% (95% CI: 81.8%-92.6%) in the training group and 86.0% (95% CI: 69.4%-100%) in the validation group. The combined model significantly outperformed both the radiomics and clinical models (p < 0.05). DCA showed that the combined model provided a higher net benefit for predicting extrahepatic metastasis of HCC than either the clinical or the radiomics model. The combined prediction model, using the CT radscore alongside clinical risk factors, effectively forecasts extrahepatic metastasis in HCC patients.
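The decision curve analysis mentioned above rests on the standard net-benefit formula, net benefit = TP/N - FP/N × pt/(1 - pt) at threshold probability pt. A minimal sketch with placeholder data follows; it is not the authors' evaluation code.

```python
# Net-benefit calculation used in decision curve analysis (DCA).
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, pt: float) -> float:
    pred_pos = y_prob >= pt
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * pt / (1.0 - pt)

# Example with dummy data: 100 patients, ~20% with extrahepatic metastasis
rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.2, size=100)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.15, size=100), 0, 1)
for pt in (0.1, 0.2, 0.3):
    print(f"pt={pt:.1f}  net benefit={net_benefit(y_true, y_prob, pt):.3f}")
```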

A novel unified Inception-U-Net hybrid gravitational optimization model (UIGO) incorporating automated medical image segmentation and feature selection for liver tumor detection.

Banerjee T, Singh DP, Kour P, Swain D, Mahajan S, Kadry S, Kim J

PubMed · Aug 14, 2025
Segmenting liver tumors in medical imaging is pivotal for precise diagnosis, treatment, and evaluation of therapy outcomes. Even with modern imaging technologies, fully automated segmentation systems have not overcome the challenge posed by the diversity in the shape, size, and texture of liver tumors, and these limitations often hinder clinicians from making timely and accurate decisions. This study addresses these issues with the development of UIGO, a new deep learning model that merges U-Net and Inception networks and incorporates advanced feature selection and optimization strategies. UIGO aims to achieve highly precise segmentation results while keeping computational requirements low enough for efficient real-world clinical use. Publicly available liver tumor segmentation datasets were used for testing the model: LiTS (Liver Tumor Segmentation Challenge), CHAOS (Combined Healthy Abdominal Organ Segmentation), and 3D-IRCADb1 (3D-IRCAD liver dataset). Covering diverse tumor shapes and sizes across imaging modalities such as CT and MRI, these datasets ensured comprehensive testing of UIGO's performance in varied clinical scenarios. The experimental results show the effectiveness of UIGO, with a segmentation accuracy of 99.93%, an AUC of 99.89%, a Dice coefficient of 0.997, and an IoU of 0.998. UIGO outperformed other contemporary liver tumor segmentation techniques, indicating its potential to help clinicians deliver precise and prompt evaluations at a lower computational expense. This study underscores the effort toward streamlined, dependable, and clinically useful tools for liver tumor segmentation in medical imaging.
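The overlap metrics reported here (Dice coefficient and IoU) can be computed directly from binary segmentation masks. A hedged sketch follows; the threshold and tensor shapes are illustrative, not the authors' evaluation pipeline.

```python
# Dice coefficient and IoU for binary segmentation masks.
import torch

def dice_and_iou(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7):
    pred = (pred_mask > 0.5).float()
    true = (true_mask > 0.5).float()
    inter = (pred * true).sum()
    dice = (2 * inter + eps) / (pred.sum() + true.sum() + eps)
    union = pred.sum() + true.sum() - inter
    iou = (inter + eps) / (union + eps)
    return dice.item(), iou.item()

pred = torch.rand(1, 1, 256, 256)                 # model output probabilities
true = (torch.rand(1, 1, 256, 256) > 0.5).float()  # reference mask
print(dice_and_iou(pred, true))
```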

Automatic segmentation of cone beam CT images using treatment planning CT images in patients with prostate cancer.

Takayama Y, Kadoya N, Yamamoto T, Miyasaka Y, Kusano Y, Kajikawa T, Tomori S, Katsuta Y, Tanaka S, Arai K, Takeda K, Jingu K

PubMed · Aug 14, 2025
Cone-beam computed tomography-based online adaptive radiotherapy (CBCT-based online ART) is currently used in clinical practice; however, deep learning-based segmentation of CBCT images remains challenging. Previous studies generated CBCT datasets for segmentation by adding contours outside clinical practice or synthesizing tissue contrast-enhanced diagnostic images paired with CBCT images. This study aimed to improve CBCT segmentation by matching the treatment planning CT (tpCT) image quality to CBCT images without altering the tpCT image or its contours. A deep-learning-based CBCT segmentation model was trained for the male pelvis using only the tpCT dataset. To bridge the quality gap between tpCT and routine CBCT images, an artificial pseudo-CBCT dataset was generated using Gaussian noise and Fourier domain adaptation (FDA) for 80 tpCT datasets (the hybrid FDA method). A five-fold cross-validation approach was used for model training. For comparison, atlas-based segmentation was performed with a registered tpCT dataset. The Dice similarity coefficient (DSC) assessed contour quality between the model-predicted and reference manual contours. The average DSC values for the clinical target volume, bladder, and rectum using the hybrid FDA method were 0.71 ± 0.08, 0.84 ± 0.08, and 0.78 ± 0.06, respectively. Conversely, the values for the model using plain tpCT were 0.40 ± 0.12, 0.17 ± 0.21, and 0.18 ± 0.14, and for the atlas-based model were 0.66 ± 0.13, 0.59 ± 0.16, and 0.66 ± 0.11, respectively. The segmentation model using the hybrid FDA method demonstrated significantly higher accuracy than models trained on plain tpCT datasets and those using atlas-based segmentation.
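A minimal sketch of the hybrid FDA idea follows, under assumptions: the low-frequency amplitude spectrum of a planning-CT (tpCT) slice is replaced with that of a CBCT slice and Gaussian noise is added, yielding a pseudo-CBCT image while the original tpCT contours remain valid. The band size `beta` and noise level are illustrative, not the paper's parameters.

```python
# Fourier domain adaptation (FDA) + Gaussian noise to synthesize pseudo-CBCT from tpCT.
import numpy as np

def pseudo_cbct(tpct: np.ndarray, cbct: np.ndarray, beta: float = 0.05,
                noise_sigma: float = 10.0) -> np.ndarray:
    ft_src = np.fft.fftshift(np.fft.fft2(tpct))
    ft_ref = np.fft.fftshift(np.fft.fft2(cbct))
    amp_src, phase_src = np.abs(ft_src), np.angle(ft_src)
    amp_ref = np.abs(ft_ref)

    h, w = tpct.shape
    bh, bw = int(h * beta), int(w * beta)
    cy, cx = h // 2, w // 2
    # swap the central (low-frequency) amplitude band from the CBCT reference
    amp_src[cy - bh:cy + bh, cx - bw:cx + bw] = amp_ref[cy - bh:cy + bh, cx - bw:cx + bw]

    mixed = np.fft.ifft2(np.fft.ifftshift(amp_src * np.exp(1j * phase_src)))
    return np.real(mixed) + np.random.normal(0, noise_sigma, tpct.shape)

tpct_slice = np.random.rand(256, 256) * 1000.0   # placeholder intensity values
cbct_slice = np.random.rand(256, 256) * 1000.0
fake_cbct = pseudo_cbct(tpct_slice, cbct_slice)
```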

Healthcare and cutting-edge technology: Advancements, challenges, and future prospects.

Singhal V, R S, Singhal S, Tiwari A, Mangal D

PubMed · Aug 14, 2025
The high-level integration of technology into health care has radically changed patient care, diagnosis, treatment, and health outcomes. This paper discusses significant technological advances: AI for medical imaging to detect early disease stages; robotic surgery offering precision and minimally invasive techniques; telemedicine for remote monitoring and virtual consultation; personalized medicine through genomic analysis; and blockchain for secure and transparent handling of health data. Each section of the paper discusses the underlying principles, advantages, and disadvantages of these technologies, supported by case studies such as deploying AI in radiology to enhance cancer diagnosis, using robotic surgery to improve surgical accuracy, and applying blockchain to electronic health records to ensure data integrity and security. The paper also discusses key ethical issues, including risks to data privacy, algorithmic bias in AI-based diagnosis, patient consent problems in genomic medicine, and regulatory issues blocking the large-scale adoption of digital health solutions. It further recommends avenues for future research in areas where interdisciplinary cooperation, effective cybersecurity frameworks, and policy reforms are urgently required to ensure that the adoption of new healthcare technology is ethical and responsible. The work aims to deliver useful information for policymakers, researchers, and healthcare practitioners interested in the evolving role of technology in improving healthcare delivery and patient outcomes.