Jiang H, Xie K, Chen X, Ning Y, Yu Q, Lv F, Liu R, Zhou Y, Xia S, Peng J

pubmed papers · Aug 16 2025
Accurate prognostic prediction is crucial for patients with laryngeal squamous cell carcinoma (LSCC) to guide personalized treatment strategies. This study aimed to develop a comprehensive prognostic model integrating clinical factors with CT-based radiomics and deep learning (DL) to predict recurrence-free survival (RFS) in LSCC patients. We retrospectively enrolled 349 patients with LSCC from Center 1 (training set: n = 189; internal testing set: n = 82) and Center 2 (external testing set: n = 78). A combined model was developed using Cox regression analysis to predict RFS by integrating independent clinical risk factors, the radiomics score (RS), and the deep learning score (DLS). For comparison, separate clinical, radiomics, and DL models were also constructed. Furthermore, the combined model was presented as a nomogram to provide personalized estimation of RFS, and its risk stratification capability was evaluated using Kaplan-Meier analysis. The combined model achieved a higher C-index than the clinical, radiomics, and DL models in both the internal testing set (0.810 vs. 0.634, 0.679, and 0.727, respectively) and the external testing set (0.742 vs. 0.602, 0.617, and 0.729, respectively). Additionally, following risk stratification via the nomogram, patients in the low-risk group showed significantly higher survival probabilities than those in the high-risk group in the internal testing set [hazard ratio (HR) = 0.157, 95% confidence interval (CI): 0.063-0.392, p < 0.001] and the external testing set (HR = 0.312, 95% CI: 0.137-0.711, p = 0.003). The proposed combined model demonstrated a reliable and accurate ability to predict RFS in patients with LSCC, potentially assisting in risk stratification.
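As a concrete illustration of the modeling step described above, the sketch below fits a Cox model combining clinical risk factors with radiomics and deep learning scores and reports Harrell's C-index. It is a minimal sketch assuming a lifelines-based workflow and hypothetical column names; the paper does not publish code and its exact covariates are not listed in the abstract.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical input table; column names are assumptions, not the authors'.
df = pd.read_csv("lscc_training_set.csv")  # columns: rfs_months, recurrence, clinical factors, RS, DLS

cph = CoxPHFitter()
cph.fit(df, duration_col="rfs_months", event_col="recurrence")
print(cph.summary[["coef", "exp(coef)", "p"]])   # per-covariate hazard ratios
print("Training C-index:", cph.concordance_index_)
```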

Zhang H, Liu Q, Han X, Niu L, Sun W

pubmed papers · Aug 16 2025
Accurate diagnosis of thyroid nodules using ultrasonography is a highly valuable but challenging task. With the emergence of artificial intelligence, deep learning-based methods can assist radiologists, but their performance depends heavily on the quantity and quality of training data, and current ultrasound image datasets for thyroid nodules either use TI-RADS assessments directly as labels or are not publicly available. To address these issues, we propose TN5000, an open-access ultrasound image dataset for thyroid nodule detection and classification, comprising 5,000 B-mode ultrasound images of thyroid nodules with complete annotations and biopsy confirmations by expert radiologists. We analyze the statistical characteristics of the dataset and recommend baseline methods for thyroid nodule detection and classification as benchmarks, along with their evaluation results. To the best of our knowledge, TN5000 is the largest open-access, professionally labeled ultrasound image dataset of thyroid nodules, and the first designed for both thyroid nodule detection and classification. Such annotated images can help to analyze the intrinsic properties of thyroid nodules and to determine the necessity of fine-needle aspiration (FNA) biopsy, both of which are crucial in ultrasound diagnosis.
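A hedged baseline sketch for the detection task such a dataset enables follows. The TN5000 release format and loaders are assumptions here, and the torchvision Faster R-CNN shown is a generic two-class detector of the kind a benchmark might use, not the authors' exact baseline.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Generic detector with two classes (background + nodule); data loading and
# training/evaluation loops for the dataset's own annotation format are omitted.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
```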

Zhang M, Zhao Y, Hao D, Song Y, Lin X, Hou F, Huang Y, Yang S, Niu H, Lu C, Wang H

pubmed papers · Aug 16 2025
Predicting the prognosis of bladder cancer remains challenging despite standard treatments. We developed an interpretable bladder cancer deep learning (BCDL) model using preoperative CT scans to predict overall survival. The model was trained on one cohort (n = 765) and validated in three independent cohorts (n = 438; n = 181; n = 72). The BCDL model outperformed other models in survival risk prediction, with the SHapley Additive exPlanations (SHAP) method identifying pixel-level features contributing to predictions. Patients were stratified into high- and low-risk groups using a deep learning score cutoff. Adjuvant therapy significantly improved overall survival in high-risk patients (p = 0.028) and in women in the low-risk group (p = 0.046). RNA sequencing analysis revealed differential gene expression and pathway enrichment between risk groups, with high-risk patients exhibiting an immunosuppressive microenvironment and altered microbial composition. Our BCDL model accurately predicts survival risk and supports personalized treatment strategies for improved clinical decision-making.
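To make the risk-stratification step concrete, here is a minimal sketch assuming a table of deep learning scores and follow-up data with hypothetical column names; it splits patients at a score cutoff and compares survival with a log-rank test, mirroring the high-/low-risk analysis above. The median cutoff is a placeholder for the paper's model-derived threshold.

```python
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.read_csv("bcdl_scores.csv")  # hypothetical columns: dl_score, os_months, death
cutoff = df["dl_score"].median()     # placeholder; the paper derives its own cutoff
high = df[df["dl_score"] >= cutoff]
low = df[df["dl_score"] < cutoff]

result = logrank_test(high["os_months"], low["os_months"],
                      event_observed_A=high["death"], event_observed_B=low["death"])
print("log-rank p-value:", result.p_value)
```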

Farahani S, Hejazi M, Tabassum M, Di Ieva A, Mahdavifar N, Liu S

pubmed papers · Aug 16 2025
We aimed to evaluate the diagnostic performance of deep learning (DL)-based radiomics models for the noninvasive prediction of isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status in glioma patients using MRI sequences, and to identify methodological factors influencing accuracy and generalizability. Following PRISMA guidelines, we systematically searched major databases (PubMed, Scopus, Embase, Web of Science, and Google Scholar) up to March 2025, screening studies that utilized DL to predict IDH and 1p/19q co-deletion status from MRI data. We assessed study quality and risk of bias using the Radiomics Quality Score and the QUADAS-2 tool. Our meta-analysis employed a bivariate model to compute pooled sensitivity and specificity, and meta-regression to assess interstudy heterogeneity. Among the 1517 unique publications, 104 were included in the qualitative synthesis, and 72 underwent meta-analysis. Pooled estimates for IDH prediction in test cohorts yielded a sensitivity of 0.80 (95% CI: 0.77-0.83) and specificity of 0.85 (95% CI: 0.81-0.87). For 1p/19q co-deletion, sensitivity was 0.75 (95% CI: 0.65-0.82) and specificity was 0.82 (95% CI: 0.75-0.88). Meta-regression identified the tumor segmentation method and the extent of DL integration into the radiomics pipeline as significant contributors to interstudy variability. Although DL models demonstrate strong potential for noninvasive molecular classification of gliomas, clinical translation requires several critical steps: harmonization of multi-center MRI data using techniques such as histogram matching and DL-based style transfer; adoption of standardized and automated segmentation protocols; extensive multi-center external validation; and prospective clinical validation.
Question: Can DL-based radiomics using routine MRI noninvasively predict IDH mutation and 1p/19q co-deletion status in gliomas, and what factors affect diagnostic accuracy?
Findings: Meta-analysis showed 80% sensitivity and 85% specificity for predicting IDH mutation, and 75% sensitivity and 82% specificity for 1p/19q co-deletion status.
Clinical relevance: MRI-based DL models demonstrate clinically useful accuracy for noninvasive glioma molecular classification, but data harmonization, standardized automated segmentation, and rigorous multi-center external validation are essential for clinical adoption.
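For readers unfamiliar with pooling diagnostic accuracy, a deliberately simplified sketch follows. It is not the bivariate random-effects model used in the review; it only shows fixed-effect, inverse-variance pooling of logit-transformed sensitivities with made-up study counts, to illustrate the arithmetic behind a pooled estimate.

```python
import numpy as np

# Made-up per-study counts, for illustration only (not data from the review).
tp = np.array([40, 55, 30])   # true positives
fn = np.array([10, 12, 9])    # false negatives

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn               # approximate variance of each logit
weights = 1 / var                   # inverse-variance weights
pooled_logit = np.average(logit, weights=weights)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity ~ {pooled_sens:.2f}")
```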

Yucheng Tang, Pawel Rajwa, Alexander Ng, Yipei Wang, Wen Yan, Natasha Thorley, Aqua Asif, Clare Allen, Louise Dickinson, Francesco Giganti, Shonit Punwani, Daniel C. Alexander, Veeru Kasivisvanathan, Yipeng Hu

arxiv preprint · Aug 16 2025
Foundation models in medical imaging have shown promising label efficiency, achieving high performance on downstream tasks using only a fraction of the annotated data otherwise required. In this study, we evaluate this potential in the context of prostate multiparametric MRI using ProFound, a recently developed domain-specific vision foundation model pretrained on large-scale prostate MRI datasets. We investigate the impact of variable image quality on label-efficient finetuning by quantifying the generalisability of the finetuned models. We conduct a comprehensive set of experiments by systematically varying the ratios of high- and low-quality images in the finetuning and evaluation sets. Our findings indicate that the image quality distribution and its finetune-and-test mismatch significantly affect model performance. In particular: a) varying the ratio of high- to low-quality images between finetuning and test sets leads to notable differences in downstream performance; and b) the presence of sufficient high-quality images in the finetuning set is critical for maintaining strong performance, whilst the importance of matched finetuning and testing distributions varies between downstream tasks, such as automated radiology reporting and prostate cancer detection. Importantly, the experimental results also show that, although finetuning requires significantly less labeled data than training from scratch when the quality ratio is consistent, this label efficiency is not independent of the image quality distribution. For example, we show cases in which, without sufficient high-quality images in the finetuning set, finetuned models fail to outperform models trained without pretraining.
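A minimal sketch of how finetuning subsets with controlled quality ratios might be constructed, assuming per-image quality labels are available; the field names and sampling scheme are illustrative, not the authors' experimental code.

```python
import random

def sample_by_quality(cases, high_ratio, n_total, seed=0):
    """Draw n_total cases with the requested fraction of high-quality scans.

    `cases` is a list of dicts with a hypothetical "quality" field ("high"/"low").
    """
    rng = random.Random(seed)
    high = [c for c in cases if c["quality"] == "high"]
    low = [c for c in cases if c["quality"] == "low"]
    n_high = round(high_ratio * n_total)
    return rng.sample(high, n_high) + rng.sample(low, n_total - n_high)

# e.g. finetuning subsets at 100%, 50% and 0% high-quality images:
# subsets = {r: sample_by_quality(all_cases, r, 200) for r in (1.0, 0.5, 0.0)}
```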

Guo, Y., Huang, H., Liu, X., Zou, W., Qiu, F., Liu, Y., Chai, R., Jiang, T., Wang, J.

medrxiv preprint · Aug 16 2025
For adult diffuse gliomas (ADGs), most grading can be achieved through molecular subtyping, retaining only two key histopathological features for high-grade glioma (HGG): necrosis (NEC) and microvascular proliferation (MVP). We developed a deep learning (DL) framework to automatically identify and characterize these features. We trained patch-level models to detect and quantify NEC and MVP on a dataset built with active learning, incorporating patches from 621 whole-slide images (WSIs) from the Chinese Glioma Genome Atlas (CGGA). Using the trained patch-level models, we integrated the predicted outcomes and positions of individual patches within WSIs from The Cancer Genome Atlas (TCGA) cohort to form patient-level datasets. We then introduced a patient-level model, named PLNet (Probability Localization Network), trained on these datasets to facilitate patient diagnosis. We also explored subtypes of NEC and MVP by clustering all positive patches on features extracted from the patch-level models. The patient-level models demonstrated exceptional performance, achieving AUCs of 0.9968 and 0.9995 and AUPRCs of 0.9788 and 0.9860 for NEC and MVP, respectively. Compared with pathological reports, the patient-level models achieved accuracies of 88.05% for NEC and 90.20% for MVP, with sensitivities of 73.68% and 77%, respectively. When sensitivity was set at 80%, accuracy reached 79.28% for NEC and 77.55% for MVP. DL models enable more efficient and accurate histopathological image analysis, which will aid traditional glioma diagnosis. Clustering-based analyses using features extracted from the patch-level models could further investigate the subtypes of NEC and MVP.
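The patch-to-patient aggregation can be pictured with the sketch below; it is not the authors' PLNet, only an illustration of assembling patch probabilities and their grid positions into a 2-D probability map of the kind a patient-level model could consume.

```python
import numpy as np

def probability_map(patch_probs, positions, grid_shape):
    """patch_probs: iterable of floats; positions: iterable of (row, col) grid indices."""
    prob_map = np.zeros(grid_shape, dtype=np.float32)
    for p, (r, c) in zip(patch_probs, positions):
        prob_map[r, c] = p
    return prob_map

# e.g. a coarse 64x64 map of NEC probabilities for one slide:
# nec_map = probability_map(nec_patch_probs, patch_grid_positions, (64, 64))
```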

Xiong X, Sun Y, Liu X, Ke W, Lam CT, Gao Q, Tong T, Li S, Tan T

pubmed papers · Aug 16 2025
Modern deep neural networks are highly over-parameterized, necessitating data augmentation techniques to prevent overfitting and enhance generalization. Generative adversarial networks (GANs) are popular for synthesizing visually realistic images; however, these synthetic images often lack diversity and may have ambiguous class labels. Recent data mixing strategies address some of these issues by mixing image labels based on salient regions. Since the main diagnostic information is not always contained within the salient regions, we aim to address the resulting label mismatches in medical image classification. We propose a variety-guided data mixing framework (VariMix), which exploits an absolute difference map (ADM) to address the label mismatch problems of mixed medical images. VariMix generates the ADM using an image-to-image (I2I) GAN across multiple classes and allows bidirectional mixing operations between training samples. The proposed VariMix achieves the highest accuracy of 99.30% and 94.60% with a SwinT V2 classifier on a Chest X-ray (CXR) dataset and a Retinal dataset, respectively. It also achieves the highest accuracy of 87.73%, 99.28%, 95.13%, and 95.81% with a ConvNeXt classifier on a Breast Ultrasound (US) dataset, a CXR dataset, a Retinal dataset, and a Maternal-Fetal US dataset, respectively. Furthermore, medical expert evaluation of the generated images shows the great potential of the proposed I2I GAN to improve the accuracy of medical image classification. Extensive experiments demonstrate the superiority of VariMix over existing GAN- and Mixup-based methods on four public datasets using Swin Transformer V2 and ConvNeXt architectures. Furthermore, by projecting the source image onto the hyperplanes of the classifiers, the proposed I2I GAN can generate hyperplane difference maps between the source image and the hyperplane image, demonstrating its ability to interpret medical image classifications. The source code is available at https://github.com/yXiangXiong/VariMix.
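A simplified stand-in for the difference-guided label mixing idea is sketched below. In VariMix the absolute difference map comes from the I2I GAN, whereas here it is computed directly from the two images, so this illustrates the principle rather than the released method.

```python
import torch

def adm_mix(img_a, img_b, label_a, label_b, mask):
    """mask: binary tensor marking the region copied from img_b (same spatial size as the images)."""
    mixed = img_a * (1 - mask) + img_b * mask
    adm = (img_a - img_b).abs()                            # difference map (computed directly here, not via a GAN)
    lam = (adm * mask).sum() / adm.sum().clamp(min=1e-8)   # share of total difference inside the mixed-in region
    mixed_label = (1 - lam) * label_a + lam * label_b      # label weighted by that share, not by region area alone
    return mixed, mixed_label
```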

Tuchinov, B., Prokaeva, A., Vasilkiv, L., Stankevich, Y., Korobko, D., Malkova, N., Tulupov, A.

medrxiv preprint · Aug 16 2025
Multiple sclerosis (MS) is a chronic inflammatory, neurodegenerative disorder of the central nervous system (CNS) and the leading cause of non-traumatic disability among young adults. Magnetic resonance imaging (MRI) has revolutionized both the clinical management and the scientific understanding of MS, serving as an indispensable paraclinical tool. Its high sensitivity and diagnostic accuracy enable early detection and timely therapeutic intervention, significantly impacting patient outcomes. Recent technological advancements have facilitated the integration of artificial intelligence (AI) algorithms for automated lesion identification, segmentation, and longitudinal monitoring. The ongoing refinement of deep learning (DL) and machine learning (ML) techniques, alongside their incorporation into clinical workflows, holds great promise for improving healthcare accessibility and quality in MS management. Despite the encouraging performance of DL models in MS lesion segmentation and disease progression tracking, their effectiveness is frequently constrained by the scarcity of large, diverse, and publicly available datasets. Open-source initiatives such as MSLesSeg, MS-Baghdad, MS-Shift, and MSSEG-2 have provided valuable contributions to the research community. Building upon these foundations, we present the SibBMS dataset, a carefully curated, open-source resource designed to support MS research utilizing structural brain MRI. The dataset comprises imaging data from 93 patients diagnosed with MS or radiologically isolated syndrome (RIS), alongside 100 healthy controls. All lesion annotations were manually delineated and rigorously reviewed by a three-tier panel of experienced neuroradiologists to ensure clinical relevance and segmentation accuracy. Additionally, the dataset includes comprehensive demographic metadata, such as age, sex, and disease duration, enabling robust stratified analyses and facilitating the development of more generalizable predictive models. The dataset is available via a request-access form at https://forms.gle/VqTenJ4n8S8qvtxQA.
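For researchers planning to use such a dataset, a minimal sketch follows, assuming NIfTI volumes and hypothetical file names (the actual layout is documented with the release): it loads a lesion mask with nibabel and scores a predicted segmentation with the Dice coefficient.

```python
import nibabel as nib
import numpy as np

# Hypothetical file names; binarize both volumes before scoring.
gt = nib.load("sub-001_lesion_mask.nii.gz").get_fdata() > 0.5
pred = nib.load("sub-001_prediction.nii.gz").get_fdata() > 0.5

dice = 2 * np.logical_and(gt, pred).sum() / (gt.sum() + pred.sum() + 1e-8)
print(f"Dice: {dice:.3f}")
```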

Riaz IB, Khan MA, Osterman TJ

pubmed papers · Aug 15 2025
Artificial intelligence (AI) holds significant potential to enhance various aspects of oncology, spanning the cancer care continuum. This review provides an overview of current and emerging AI applications, from risk assessment and early detection to treatment and supportive care. AI-driven tools are being developed to integrate diverse data sources, including multi-omics and electronic health records, to improve cancer risk stratification and personalize prevention strategies. In screening and diagnosis, AI algorithms show promise in augmenting the accuracy and efficiency of medical image analysis and histopathology interpretation. AI also offers opportunities to refine treatment planning, optimize radiation therapy, and personalize systemic therapy selection. Furthermore, AI is explored for its potential to improve survivorship care by tailoring interventions and to enhance end-of-life care through improved symptom management and prognostic modeling. Beyond care delivery, AI augments clinical workflows, streamlines the dissemination of up-to-date evidence, and captures critical patient-reported outcomes for clinical decision support and outcomes assessment. However, the successful integration of AI into clinical practice requires addressing key challenges, including rigorous validation of algorithms, ensuring data privacy and security, and mitigating potential biases. Effective implementation necessitates interdisciplinary collaboration and comprehensive education for health care professionals. The synergistic interaction between AI and clinical expertise is crucial for realizing the potential of AI to contribute to personalized and effective cancer care. This review highlights the current state of AI in oncology and underscores the importance of responsible development and implementation.

Feng S, Mdletshe S

pubmed papers · Aug 15 2025
We aimed to review the role of functional imaging in cervical cancer to underscore its significance in diagnosis and management and in improving patient outcomes. This rapid literature review, targeting clinical guidelines for functional imaging in cervical cancer, sourced literature published from 2017 to 2023 using PubMed, Google Scholar, MEDLINE and Scopus. Keywords such as cervical cancer, cervical neoplasms, functional imaging, stag*, treatment response, monitor* and New Zealand or NZ were combined with Boolean operators to maximise results. Emphasis was placed on English-language full research studies pertinent to New Zealand. The quality of the reviewed articles was assessed using the Joanna Briggs Institute critical appraisal checklists. The search yielded 21 papers after duplicates and records that did not meet the inclusion criteria were excluded. Only one paper incorporated the New Zealand context. The reviewed papers demonstrate the important role of functional imaging in cervical cancer diagnosis, staging and treatment response monitoring. Techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), diffusion-weighted magnetic resonance imaging (DW-MRI), computed tomography perfusion (CTP) and positron emission tomography/computed tomography (PET/CT) provide deep insights into tumour behaviour, facilitating personalised care. Integration of artificial intelligence in image analysis promises to increase the accuracy of these modalities. Functional imaging could play a significant role in a unified approach in New Zealand to improving patient outcomes in cervical cancer management. This study therefore advocates for New Zealand's medical sector to harness functional imaging's potential in cervical cancer management.