Page 100 of 6386373 results

Guo L, Zhang S, Chen H, Li Y, Liu Y, Liu W, Wang Q, Tang Z, Jiang P, Wang J

PubMed · Oct 1 2025
In recent years, the application of artificial intelligence (AI) in medical image analysis has drawn increasing attention in clinical studies of gynecologic tumors. This study presents the development and prospects of AI applications to assist in the treatment of gynecological oncology. The Web of Science database was screened for articles published until August 2023, using the keywords "artificial intelligence," "deep learning," "machine learning," "radiomics," "radiotherapy," "chemoradiotherapy," "neoadjuvant therapy," "immunotherapy," "gynecological malignancy," "cervical carcinoma," "cervical cancer," "ovarian cancer," "endometrial cancer," "vulvar cancer," and "vaginal cancer." Research articles related to AI-assisted treatment of gynecological cancers were included. A total of 317 articles were retrieved based on the search strategy, and 133 were selected by applying the inclusion and exclusion criteria, including 114 on cervical cancer, 10 on endometrial cancer, and 9 on ovarian cancer. Among the included studies, 44 (33%) focused on prognosis prediction, 24 (18%) on treatment response prediction, 13 (10%) on adverse event prediction, 5 (4%) on dose distribution prediction, and 47 (35%) on target volume delineation. Target volume delineation and dose prediction were performed using deep learning methods. For the prediction of treatment response, prognosis, and adverse events, 57 studies (70%) used conventional radiomics methods, 13 (16%) used deep learning methods, 8 (10%) used spatial-related unconventional radiomics methods, and 3 (4%) used temporal-related unconventional radiomics methods. In cervical and endometrial cancers, the predicted targets mostly included treatment response, overall survival, recurrence, radiotherapy toxicity, lymph node metastasis, and dose distribution. For ovarian cancer, the predicted targets included platinum sensitivity and postoperative complications.
The majority of the studies were single-center, retrospective, and small-scale: 101 studies (76%) used single-center data, 125 studies (94%) were retrospective, and 127 studies (95%) included fewer than 500 cases. The application of AI in assisting treatment in gynecological oncology therefore remains limited. Although AI has shown promising results in predicting treatment response, prognosis, adverse events, and dose distribution in gynecological oncology, these tasks still lack validation on substantial multi-center data.
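The category percentages reported above follow directly from the study counts over the 133 included articles. A minimal sketch of that arithmetic (the category keys are shorthand labels, not terms from the review):

```python
def category_shares(counts):
    """Percentage share of each study category, rounded to whole percents."""
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}

# Study-category counts reported in the review (133 included studies)
counts = {"prognosis": 44, "response": 24, "adverse events": 13,
          "dose distribution": 5, "target delineation": 47}
shares = category_shares(counts)  # reproduces the 33/18/10/4/35% breakdown
```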

Parwekar P, Agrawal KK, Ali J, Gundagatti S, Rajpoot DS, Ahmed T, Vidyarthi A

PubMed · Oct 1 2025
Background: Accurate and noninvasive breast cancer grading and therapy monitoring remain critical challenges in oncology. Traditional methods often rely on invasive histopathological assessments or imaging-only techniques, which may not fully capture the molecular and morphological intricacies of tumor response. Method: This article presents a novel, noninvasive framework for breast cancer analysis and therapy monitoring that combines two parallel mechanisms: (1) a dual-stream convolutional neural network (CNN) processing high-intensity ultrasound images, and (2) a biomarker-aware CNN stream utilizing patient-specific breast cancer biomarkers, including carbohydrate antigen 15-3, carcinoembryonic antigen, and human epidermal growth factor receptor 2 levels. The imaging stream extracts spatial and morphological features, while the biomarker stream encodes quantitative molecular indicators, enabling a multimodal understanding of tumor characteristics. The outputs from both streams are fused to predict the cancer grade (G1-G3) with high reliability. Results: Experimental evaluation on a cohort of pre- and postchemotherapy patients demonstrated the effectiveness of the proposed approach, achieving an overall grading accuracy of 97.8%, with an area under the curve of 0.981 for malignancy classification. The model also enables quantitative post-therapy analysis, revealing an average tumor response improvement of 41.3% across the test set, as measured by predicted regression in grade and changes in biomarker-imaging correlation. Conclusions: This dual-parallel artificial intelligence strategy offers a promising noninvasive alternative to traditional histopathological and imaging-alone methods, supporting real-time cancer monitoring and personalized treatment evaluation.
The integration of high-resolution imaging with biomolecular data significantly enhances diagnostic depth, paving the way for intelligent, patient-specific breast cancer management.
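The abstract describes late fusion of an imaging feature stream with a biomarker stream followed by a grade classifier. A minimal sketch of that fusion idea, with hypothetical feature dimensions and weights (not the authors' actual network, which is CNN-based):

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_and_grade(image_features, biomarker_features, weights, bias):
    """Late fusion: concatenate imaging-stream and biomarker-stream features,
    then apply a linear layer + softmax over the grades G1-G3."""
    x = list(image_features) + list(biomarker_features)
    logits = [sum(w * xi for w, xi in zip(row, x)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

# Toy inputs: 4 imaging features and 3 scaled biomarker values
# (CA 15-3, CEA, HER2), with made-up fusion weights - one row per grade.
img = [0.2, 0.8, 0.5, 0.1]
bio = [0.9, 0.3, 0.6]
W = [[0.1] * 7, [0.2] * 7, [0.3] * 7]
b = [0.0, 0.1, -0.1]
probs = fuse_and_grade(img, bio, W, b)  # probabilities for G1, G2, G3
```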

Noordman CR, Te Molder LPW, Maas MC, Overduin CG, Fütterer JJ, Huisman HJ

PubMed · Oct 1 2025
Transrectal in-bore MR-guided biopsy (MRGB) is accurate but time-consuming, limiting clinical throughput. Faster imaging could improve workflow and enable real-time instrument tracking. Existing acceleration methods often use simulated data and lack validation in clinical settings. To accelerate MRGB by using deep learning for undersampled image reconstruction and instrument tracking, trained on multi-slice MR DICOM images and evaluated on raw k-space acquisitions. Prospective feasibility study. A total of 1289 male patients (aged 44-87, median age 68) for model training, and 8 male patients (aged 59-78, median age 65) for prospective feasibility testing. 2D Cartesian balanced steady-state free precession, 3 T. Segmentation and reconstruction models were trained on 8464 MRGB confirmation scans containing a biopsy needle guide instrument and evaluated on 10 prospectively acquired dynamic k-space samples. Needle guide tracking accuracy was assessed using instrument tip prediction (ITP) error, computed per frame as the Euclidean distance from reference positions defined via pre- and post-movement scans. Feasibility was measured by the proportion of frames with < 5 mm error. Additional experiments tested model robustness under increasing undersampling rates. In a segmentation validation experiment, a one-sample t-test tested whether the mean ITP error was below 5 mm. Statistical significance was defined as p < 0.05. In the tracking experiments, the mean, standard deviation, and Wilson 95% CI of the ITP success rate were computed per sample, across undersampling levels. ITP was first evaluated independently on 201 fully sampled scans, yielding an ITP error of 1.55 ± 1.01 mm (95% CI: 1.41-1.69). Tracking performance was assessed across increasing undersampling factors, achieving high ITP success rates from 97.5% ± 5.8% (68.8%-99.9%) at 8× up to 92.5% ± 10.3% (62.5%-98.9%) at 16× undersampling. Performance declined at 18×, dropping to 74.6% ± 33.6% (43.8%-91.7%).
Results confirm stable needle guide tip prediction accuracy and support the robustness of the reconstruction model for tracking at high undersampling rates. Evidence Level: 2. Technical Efficacy: Stage 2.
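The feasibility metrics above combine a per-frame Euclidean tip error with a Wilson score interval on the success rate. A minimal sketch of both computations (the coordinates and counts below are made up for illustration):

```python
import math

def itp_error(predicted_tip, reference_tip):
    """Per-frame instrument tip prediction (ITP) error: Euclidean distance
    between predicted and reference needle-guide tip positions (mm)."""
    return math.dist(predicted_tip, reference_tip)

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% for z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Toy frame: a tip predicted 5 mm from the reference fails the < 5 mm
# threshold; here 9 of 10 frames are assumed within threshold.
err = itp_error((10.0, 20.0, 30.0), (13.0, 24.0, 30.0))  # 5.0 mm
lo, hi = wilson_ci(9, 10)
```

The Wilson interval is preferred over the normal approximation for the small per-sample frame counts reported here, since it stays inside [0, 1] even near 100% success.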

Zhu H, Liang F, Zhao T, Cao Y, Chen Y, Yan H, Xiao X

PubMed · Oct 1 2025
Determining the status of glioma molecular markers is a problem of clinical importance in medicine. Current medical-imaging-based approaches to this problem suffer from various limitations, such as incomplete fine-grained feature extraction from glioma imaging data and low prediction accuracy for molecular marker status. To address these issues, a deep learning method is presented for the simultaneous joint prediction of multi-label statuses of glioma molecular markers. Firstly, a Gradient-aware Spatially Partitioned Enhancement algorithm (GASPE) is proposed to optimize glioma MR image preprocessing and enhance the expression of local detail; secondly, a Dual Attention module with Depthwise Convolution (DADC) is constructed to improve fine-grained feature extraction by combining channel attention and spatial attention; thirdly, a hybrid model, PMNet, is proposed, which combines the Pyramid-based Multi-Scale Feature Extraction module (PMSFEM) and the Mamba-based Projection Convolution module (MPCM) to achieve effective fusion of local and global information; finally, an Iterative Truth Calibration algorithm (ITC) is used to calibrate the joint-status truth vector output by the model and improve the accuracy of the predictions. Built from GASPE, DADC, ITC, and PMNet, the proposed Gradient-Aware Dual Attention Iteration Truth Calibration-PMNet (GDI-PMNet) simultaneously predicts the status of the glioma molecular markers IDH1, Ki67, MGMT, and P53, with accuracies of 98.31%, 99.24%, 97.96%, and 98.54%, respectively, achieving non-invasive preoperative prediction that can assist doctors in clinical diagnosis and treatment. GDI-PMNet demonstrates high accuracy in predicting glioma molecular markers, addressing the limitations of current approaches by enhancing fine-grained feature extraction and prediction accuracy.
This non-invasive preoperative prediction tool holds significant potential to assist clinicians in glioma diagnosis and treatment, ultimately improving patient outcomes.
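Joint multi-label prediction of marker statuses ultimately reduces to per-marker binary decisions and per-marker accuracies like those reported above. A minimal sketch of that evaluation step, with toy probabilities rather than model outputs (the ITC calibration itself is not reproduced here):

```python
def predict_statuses(marker_probs, threshold=0.5):
    """Threshold per-marker probabilities into binary statuses."""
    return {m: p >= threshold for m, p in marker_probs.items()}

def per_marker_accuracy(predictions, truths):
    """Accuracy per molecular marker over a list of cases."""
    markers = predictions[0].keys()
    n = len(predictions)
    return {m: sum(p[m] == t[m] for p, t in zip(predictions, truths)) / n
            for m in markers}

# Two hypothetical cases over the four markers studied (IDH1, Ki67, MGMT, P53)
preds = [predict_statuses({"IDH1": 0.9, "Ki67": 0.2, "MGMT": 0.7, "P53": 0.4}),
         predict_statuses({"IDH1": 0.1, "Ki67": 0.8, "MGMT": 0.6, "P53": 0.9})]
truth = [{"IDH1": True, "Ki67": False, "MGMT": True, "P53": True},
         {"IDH1": False, "Ki67": True, "MGMT": False, "P53": True}]
acc = per_marker_accuracy(preds, truth)
```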

An GZ, Xie Y, Benzinger TLS, Gordon BA, Sotiras A

PubMed · Oct 1 2025
There is significant evidence for neuroanatomical heterogeneity in neurodegenerative disorders, which has been demonstrated predominantly through analyses of well-characterized research cohorts. Despite the known diversity in clinical presentations among patients attending memory clinics, studies exploring neuroanatomical heterogeneity in such clinically diverse groups remain sparse. To address this gap, we applied the semi-supervised Heterogeneity through Discriminative Analysis (HYDRA) (Neuroimage 145:346-364 2017) machine learning method to magnetic resonance imaging (MRI) data from the Open Access Series of Imaging Studies (OASIS) (NeuroImage 26:102248 2020) to uncover patterns of neurostructural heterogeneity in memory clinic attendees. Cross-validation was used to assess clustering stability via the Adjusted Rand Index (ARI), Silhouette Score, and Calinski-Harabasz Index (CHI). We performed survival analyses using Kaplan-Meier curves and mixed-effects models for longitudinal cognitive data (e.g., memory, executive function, and language assessments) to examine differences in disease progression. Cross-validation analyses indicated two highly stable subtypes of cognitively impaired individuals (ARI = 0.552), exhibiting significant neuroanatomical differences. Subtype 1, termed the Temporal-Sparing Atrophy (TSA) Subtype, was defined by relatively mild atrophy, especially in temporal areas, with slower cognitive decline and preserved function across most domains. Subtype 2, termed the Temporal-Parietal Predominated Atrophy (TPPA) Subtype, was marked by notable alterations in areas critically affected in neurodegenerative disorders. These included key areas critical for executive function and memory, such as the frontal, temporal, and parietal cortices, including the precuneus. Longitudinal analysis of neuroimaging and cognitive data revealed contrasting trajectories.
The TSA Subtype demonstrated a gradual decline in cognitive functions over time, particularly on memory-focused assessments. Conversely, the TPPA Subtype exhibited a more severe decline in these functions. This research illustrates that neurodegenerative diseases present a spectrum of structural brain changes rather than uniform pathology, suggesting that future research may benefit from stratified therapeutic approaches and targeted recruitment strategies for clinical trials. By leveraging detailed clinical assessments and longitudinal data, including uncertain diagnoses and Clinical Dementia Rating (CDR) scores, this study contributes to a better understanding and characterization of memory clinic populations, which could help with optimizing interventions.
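The Adjusted Rand Index used above to quantify clustering stability can be computed from the pair-counting contingency table between two clusterings. A minimal stdlib sketch (not the HYDRA implementation; for real work, `sklearn.metrics.adjusted_rand_score` covers the degenerate cases this toy version does not):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Chance-corrected agreement between two clusterings of the same items:
    1.0 for identical partitions (up to relabeling), ~0 for random ones."""
    n = len(labels_a)
    pair = Counter(zip(labels_a, labels_b))       # contingency-table cells
    a = Counter(labels_a)                          # row sums
    b = Counter(labels_b)                          # column sums
    sum_pair = sum(comb(c, 2) for c in pair.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)          # expected index under chance
    max_index = (sum_a + sum_b) / 2
    return (sum_pair - expected) / (max_index - expected)
```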

Jin L, Ma Z, Gao F, Li M, Li H, Geng D

PubMed · Oct 1 2025
Prostate cancer (PCa) is one of the most common malignancies in men, and accurate assessment of tumor aggressiveness is crucial for treatment planning. The Gleason score (GS) remains the gold standard for risk stratification, yet it relies on invasive biopsy, which has inherent risks and sampling errors. The aim of this study was to detect PCa and non-invasively predict the GS for the early detection and stratification of clinically significant cases. We used single-modality T2-weighted imaging (T2WI) with an automatic machine-learning (ML) approach, MLJAR. The internal dataset comprised PCa patients who underwent magnetic resonance imaging (MRI) examinations at our hospital from September 2015 to June 2022 prior to prostate biopsy, surgery, radiotherapy, or endocrine therapy, and for whom histopathological results were available. An external dataset from another medical center and a public challenge dataset were used for external validation. The Kolmogorov-Smirnov curve was used to evaluate the risk-differentiation ability of the PCa detection model. The area under the receiver operating characteristic curve (AUC) was calculated with confidence intervals to compare model performance. The internal MRI dataset included 198 non-PCa and 291 PCa patients with histopathological results obtained through biopsy or surgery. The external and public challenge datasets included 45 and 68 PCa patients, respectively. The AUC for PCa detection in the internal-testing cohort (n = 147, PCa = 78) was 0.99.
For GS prediction, AUCs in the internal-testing cohort (PCa = 88) were GS = 3 + 3 (0.97), GS = 3 + 4 (0.97), GS = 3 + 5 (1.0), GS = 4 + 3 (0.87), GS = 4 + 4 (0.91), GS = 4 + 5 (0.95), GS = 5 + 4 (1.0), and GS = 5 + 5 (0.99); in the external-testing cohort (PCa = 45), GS = 3 + 3 (0.95), GS = 3 + 4 (0.76), GS = 3 + 5 (0.77), GS = 4 + 3 (0.88), GS = 4 + 4 (0.82), GS = 4 + 5 (0.87), GS = 5 + 4 (0.95), and GS = 5 + 5 (0.85); and in the public challenge cohort (PCa = 68), GS = 3 + 4 (0.89), GS = 4 + 3 (0.75), GS = 4 + 4 (0.65), and GS = 4 + 5 (0.91). This multi-center study shows that an auto-ML model using only T2WI can accurately detect PCa and predict Gleason scores non-invasively, offering the potential to reduce biopsy reliance and improve early risk stratification. These results warrant further validation and exploration for integration into clinical workflows.
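Each AUC reported above is equivalent to the Mann-Whitney rank statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal sketch with made-up scores (the quadratic loop is fine for illustration; rank-based implementations such as `sklearn.metrics.roc_auc_score` scale better):

```python
def auc_from_scores(negative_scores, positive_scores):
    """AUC as the normalized Mann-Whitney U statistic: the probability that
    a positive case outranks a negative one, with ties counting 0.5."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))
```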

Lai M, Marzi C, Citi L, Diciotti S

PubMed · Oct 1 2025
Deep learning algorithms trained on medical images often encounter limited data availability, leading to overfitting and imbalanced datasets. Synthetic datasets can address these challenges by providing a priori control over dataset size and balance. In this study, we present PACGAN (Progressive Auxiliary Classifier Generative Adversarial Network), a proof-of-concept framework that effectively combines Progressive Growing GAN and Auxiliary Classifier GAN (ACGAN) to generate high-quality, class-specific synthetic medical images. PACGAN leverages latent space information to perform conditional synthesis of high-resolution brain magnetic resonance (MR) images, specifically targeting Alzheimer's disease patients and healthy controls. Trained on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, PACGAN demonstrates its ability to generate realistic synthetic images, which are assessed for quality using quantitative metrics. The ability of the generator to perform proper target synthesis of the two classes was also assessed by evaluating the performance of the pre-trained discriminator when classifying real unseen images, which achieved an area under the receiver operating characteristic curve (AUC) of 0.813, supporting the ability of the model to capture the target characteristics of each class. The pre-trained models of the generator and discriminator, together with the source code, are available in our repository: https://github.com/aiformedresearch/PACGAN .
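Conditional synthesis in an ACGAN-style generator is typically driven by concatenating a class label with the latent vector, so each class (here, Alzheimer's patient vs. healthy control) maps to its own region of latent space. A minimal sketch of that input construction; the dimensions are hypothetical, not PACGAN's actual configuration:

```python
import random

def conditional_latent(z_dim, n_classes, class_idx, seed=None):
    """ACGAN-style conditional generator input: a Gaussian latent vector
    concatenated with a one-hot class label."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(z_dim)]
    one_hot = [1.0 if i == class_idx else 0.0 for i in range(n_classes)]
    return z + one_hot

# Hypothetical sizes: 128-dim latent, 2 classes (0 = control, 1 = patient)
vec = conditional_latent(z_dim=128, n_classes=2, class_idx=1, seed=0)
```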

Ren T, Govindarajan V, Bourouis S, Wang X, Ke S

PubMed · Oct 1 2025
The increasing incidence of gastric cancer and the complexity of histopathological image interpretation present significant challenges for accurate and timely diagnosis. Manual assessments are often subjective and time-intensive, leading to a growing demand for reliable, automated diagnostic tools in digital pathology. This study proposes a hybrid deep learning approach combining convolutional neural networks (CNNs) and Transformer-based architectures to classify gastric histopathological images with high precision. The model is designed to enhance feature representation and spatial contextual understanding, particularly across diverse tissue subtypes and staining variations. Three publicly available datasets (GasHisSDB, TCGA-STAD, and NCT-CRC-HE-100K) were utilized to train and evaluate the model. Image patches were preprocessed through stain normalization, augmented using standard techniques, and fed into the hybrid model. The CNN backbone extracts local spatial features, while the Transformer encoder captures global context. Performance was assessed using fivefold cross-validation and evaluated through accuracy, F1-score, AUC, and Grad-CAM-based interpretability. The proposed model achieved 99.2% accuracy on the GasHisSDB dataset, with a macro F1-score of 0.991 and AUC of 0.996. External validation on TCGA-STAD and NCT-CRC-HE-100K further confirmed the model's robustness. Grad-CAM visualizations highlighted biologically relevant regions, demonstrating interpretability and alignment with expert annotations. This hybrid deep learning framework offers a reliable, interpretable, and generalizable tool for gastric cancer diagnosis. Its superior performance and explainability highlight its clinical potential for deployment in digital pathology workflows.
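The stain-normalization preprocessing step can, in its simplest form, be approximated by matching per-channel statistics to a reference slide. A toy sketch of that statistic matching (a stand-in for the Reinhard- or Macenko-style methods typically used; the abstract does not specify which algorithm the authors applied):

```python
def match_channel_stats(channel, target_mean, target_std):
    """Map one color channel's pixel values to a target mean/std - a
    simplified, per-channel stand-in for Reinhard stain normalization."""
    n = len(channel)
    mean = sum(channel) / n
    std = (sum((v - mean) ** 2 for v in channel) / n) ** 0.5 or 1.0
    return [(v - mean) / std * target_std + target_mean for v in channel]

# Toy channel of three pixel intensities, mapped to zero mean, unit std
normalized = match_channel_stats([10.0, 20.0, 30.0],
                                 target_mean=0.0, target_std=1.0)
```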

Singh A, Paul S, Gayen S, Mandal B, Mitra D, Augustine R

PubMed · Oct 1 2025
The global incidence of lung diseases, particularly lung cancer, is increasing at an alarming rate, underscoring the urgent need for early detection, robust monitoring, and timely intervention. This study presents design aspects of an artificial intelligence (AI)-integrated microwave-based diagnostic tool for the early detection of lung tumors. The proposed method combines the strengths of machine learning (ML) tools with microwave imaging (MWI). A microwave unit containing eight antennas in the form of a wearable belt is employed for data collection from the CST body models. The data, collected in the form of scattering parameters, are reconstructed as 2D images. Two different ML approaches have been investigated for tumor detection and for predicting the size of a detected tumor. The first approach employs XGBoost models on raw S-parameters, and the second uses convolutional neural networks (CNN) on the reconstructed 2D microwave images. It is found that the XGBoost-based classifier on S-parameters outperforms the CNN-based classifier on reconstructed microwave images for tumor detection, whereas a CNN-based model on reconstructed microwave images performs much better than an XGBoost-based regression model on the raw S-parameters for tumor size prediction. The performance of both models is evaluated on other body models to examine their generalization capacity on unseen data. This work explores the feasibility of a low-cost portable AI-integrated microwave diagnostic device for lung tumor detection, which eliminates the risk of exposure to the harmful ionizing radiation of X-ray and CT scans.
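Feeding raw scattering parameters to a tabular model such as XGBoost requires flattening the complex S-matrix into real-valued features. A minimal sketch of one common encoding, magnitude and phase per element (this encoding is an assumption for illustration, not taken from the paper):

```python
import cmath

def s_params_to_features(s_matrix):
    """Flatten a complex S-parameter matrix (for an 8-antenna belt this
    would be 8x8) into magnitude/phase features for a tabular classifier."""
    features = []
    for row in s_matrix:
        for s in row:
            features.append(abs(s))          # magnitude
            features.append(cmath.phase(s))  # phase in radians
    return features

# Toy 2x2 S-matrix for two ports
S = [[1 + 1j, 0.5 + 0j],
     [0.5 + 0j, 1 - 1j]]
feats = s_params_to_features(S)
```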

Faizi MK, Qiang Y, Shagar MMB, Wei Y, Qiao Y, Zhao J, Urrehman Z

PubMed · Oct 1 2025
Early detection of lung cancer is critical for improving treatment outcomes, and automatic lung image segmentation plays a key role in diagnosing lung-related diseases such as cancer, COVID-19, and respiratory disorders. Challenges include overlapping anatomical structures, complex pixel-level feature fusion, and the intricate morphology of lung tissues, all of which impede segmentation accuracy. To address these issues, this paper introduces GEANet, a novel framework for lung segmentation in CT images. GEANet utilizes an encoder-decoder architecture enriched with radiomics-derived features and incorporates Graph Neural Network (GNN) modules to effectively capture the complex heterogeneity of tumors. A boundary refinement module is also incorporated to improve image reconstruction and boundary delineation accuracy. The framework utilizes a hybrid loss function combining Focal Loss and IoU Loss to address class imbalance and enhance segmentation robustness. Experimental results on benchmark datasets demonstrate that GEANet outperforms eight state-of-the-art methods across various metrics, achieving superior segmentation accuracy while maintaining computational efficiency.
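The hybrid loss combining Focal Loss and IoU Loss can be sketched as follows, in a generic binary formulation over flattened probability maps, with an assumed equal weighting (the abstract does not give GEANet's exact weights or focusing parameter):

```python
import math

def focal_loss(probs, labels, gamma=2.0, eps=1e-7):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights pixels
    the model already classifies confidently, countering class imbalance."""
    total = 0.0
    for p, y in zip(probs, labels):
        pt = p if y == 1 else 1.0 - p          # probability of the true class
        pt = min(max(pt, eps), 1.0 - eps)      # clip for numerical stability
        total += -((1.0 - pt) ** gamma) * math.log(pt)
    return total / len(probs)

def soft_iou_loss(probs, labels, eps=1e-7):
    """Soft IoU loss on probability maps: 1 - intersection / union."""
    inter = sum(p * y for p, y in zip(probs, labels))
    union = sum(p + y - p * y for p, y in zip(probs, labels))
    return 1.0 - (inter + eps) / (union + eps)

def hybrid_loss(probs, labels, alpha=0.5):
    """Weighted sum of the focal and soft-IoU terms (alpha is assumed)."""
    return (alpha * focal_loss(probs, labels)
            + (1 - alpha) * soft_iou_loss(probs, labels))

good = hybrid_loss([1.0, 0.0, 1.0], [1, 0, 1])  # near-perfect prediction
bad = hybrid_loss([0.1, 0.9, 0.2], [1, 0, 1])   # poor prediction
```

The focal term handles the pixel-wise class imbalance typical of small lung lesions, while the IoU term optimizes the region overlap metric directly.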