
Bush M, Jones S, Hargrave C

PubMed · Jun 1, 2025
Hydrogel spacers (HS) are designed to minimise the radiation dose to the rectum in prostate cancer radiation therapy (RT) by creating a physical gap between the rectum and the target treatment volume, which includes the prostate and seminal vesicles (SV). This study aims to determine the feasibility of incorporating diagnostic MRI (dMRI) information into statistical machine learning (SML) models developed with planning CT (pCT) anatomy for dose and rectal toxicity prediction. The SML models aim to support HS insertion decision-making prior to RT planning procedures. Regions of interest (ROIs) were retrospectively contoured on the pCT and registered dMRI scans of 20 patients, and ROI Dice and Hausdorff distance (HD) comparison metrics were calculated. The ROI and patient clinical risk factor (CRF) variables were input into three SML models, and the performance of the pCT- and dMRI-based dose and toxicity models was compared through confusion matrices, AUC curves, accuracy metrics and observed patient outcomes. Average Dice values comparing dMRI and pCT ROIs were 0.81, 0.47 and 0.71 for the prostate, SV and rectum, respectively. Average Hausdorff distances were 2.15, 2.75 and 2.75 mm for the prostate, SV and rectum, respectively. The average accuracy metric across all models was 0.83 when using dMRI ROIs and 0.85 when using pCT ROIs. Differences between pCT and dMRI anatomical ROI variables did not affect SML model performance in this study, demonstrating the feasibility of using dMRI images. Given the limited sample size, further training of the predictive models including dMRI anatomy is recommended.
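For context, Dice and Hausdorff distance comparisons like those reported above can be computed directly from binary ROI masks. The following is a minimal illustrative sketch (not the study's code), assuming each ROI is a boolean mask on a shared voxel grid with isotropic spacing:

```python
# Minimal sketch of per-ROI Dice and Hausdorff distance between registered
# dMRI and pCT contours; toy masks stand in for real prostate ROIs.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray, spacing_mm: float = 1.0) -> float:
    """Symmetric Hausdorff distance (in mm), approximated from foreground voxel coordinates."""
    pts_a = np.argwhere(mask_a) * spacing_mm
    pts_b = np.argwhere(mask_b) * spacing_mm
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy 3-D masks standing in for a dMRI and a pCT prostate contour
dmri_roi = np.zeros((32, 32, 32), dtype=bool); dmri_roi[10:20, 10:20, 10:20] = True
pct_roi  = np.zeros((32, 32, 32), dtype=bool); pct_roi[11:21, 10:20, 10:20] = True
print(f"Dice: {dice(dmri_roi, pct_roi):.2f}, HD: {hausdorff(dmri_roi, pct_roi):.2f} mm")
```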

Edwin Raja S, Sutha J, Elamparithi P, Jaya Deepthi K, Lalitha SD

PubMed · Jun 1, 2025
Predicting liver tumors is a critical task in medical image analysis and genomics, since accurate diagnosis and prognosis underpin correct medical decisions. The subtle characteristics of liver tumors and the interactions between genomic and imaging features are the main challenges to reliable prediction. To overcome these hurdles, this study presents two integrated approaches: Attention-Guided Convolutional Neural Networks (AG-CNNs) and a Genomic Feature Analysis Module (GFAM). Spatial and channel attention mechanisms in the AG-CNN enable accurate tumor segmentation from CT images while providing detailed morphological profiling. Evaluation on three reference databases (TCIA, LiTS and CRLM) shows that the model produces more accurate output than the relevant literature, with an accuracy of 94.5%, a Dice Similarity Coefficient of 91.9% and an F1-score of 96.2% on Dataset 3. Moreover, the proposed methods outperform other methods, including CELM, CAGS and DM-ML, across the different datasets in terms of recall, precision and specificity by up to 10 percent.
•Utilization of Attention-Guided Convolutional Neural Networks (AG-CNN) enhances tumor region focus and segmentation accuracy.
•Integration of Genomic Feature Analysis (GFAM) identifies molecular markers for subtype-specific tumor classification.
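As background on the attention mechanism described above, the sketch below shows one common way to implement combined channel and spatial attention in PyTorch; it illustrates the general technique, not the paper's AG-CNN code:

```python
# Illustrative channel + spatial attention block of the kind an
# attention-guided CNN might use to emphasise tumor regions in CT feature maps.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels, produce a per-pixel weight map
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                   # channel re-weighting
        avg_map = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
        max_map = x.max(dim=1, keepdim=True).values   # (B, 1, H, W)
        attn = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * attn                               # spatial re-weighting

feats = torch.randn(2, 64, 128, 128)                  # toy CT feature maps
print(ChannelSpatialAttention(64)(feats).shape)       # torch.Size([2, 64, 128, 128])
```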

Rehman Khan SU

PubMed · Jun 1, 2025
Kidney irregularities pose a significant public health challenge, often leading to severe complications, yet the limited availability of nephrologists makes early detection costly and time-consuming. To address this issue, we propose a deep learning framework for automated kidney disease detection, leveraging feature fusion and sequential modeling techniques to enhance diagnostic accuracy. Our study thoroughly evaluates six pretrained models under identical experimental conditions, identifying ResNet50 and VGG19 as the most efficient feature extractors owing to their deep residual learning and hierarchical representations. The proposed methodology integrates feature fusion with an inception block to extract diverse feature representations while managing the overhead of the imbalanced dataset. To enhance sequential learning and capture long-term dependencies in disease progression, a ConvLSTM is incorporated after feature fusion. An additional inception block follows the ConvLSTM to refine hierarchical feature extraction, further strengthening the model's ability to leverage both spatial and temporal patterns. To validate the approach, we introduce a new Multiple Hospital Collected (MHC-CT) dataset consisting of 1860 tumor and 1024 normal kidney CT scans, meticulously annotated by medical experts. The model achieves 99.60% accuracy on this dataset, demonstrating its robustness in binary classification. To assess its generalization capability, we further evaluate the model on a publicly available multiclass benchmark CT scan dataset, achieving 91.31% accuracy. The superior performance is attributed to the effective feature fusion using inception blocks and the sequential learning capabilities of the ConvLSTM, which together enhance spatial and temporal feature representations. These results highlight the efficacy of the proposed framework in automating kidney disease detection, providing a reliable and efficient solution for clinical decision-making. Code: https://github.com/VS-EYE/KidneyDiseaseDetection.git.
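To make the fusion step concrete, here is a hedged sketch of an inception-style block that concatenates two backbone feature maps (e.g., ResNet50 and VGG19 outputs) and mixes them through parallel convolution branches; the layer widths, and the omitted ConvLSTM stage, are placeholder assumptions rather than the authors' design:

```python
# Sketch of inception-style fusion of two backbone feature maps.
import torch
import torch.nn as nn

class InceptionFusion(nn.Module):
    """Fuse two backbone feature maps, then mix them with parallel branches."""
    def __init__(self, in_ch_a: int, in_ch_b: int, out_ch: int = 128):
        super().__init__()
        fused = in_ch_a + in_ch_b
        self.branch1 = nn.Conv2d(fused, out_ch // 2, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(fused, out_ch // 4, kernel_size=1),
            nn.Conv2d(out_ch // 4, out_ch // 4, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(fused, out_ch // 4, kernel_size=1),
            nn.Conv2d(out_ch // 4, out_ch // 4, kernel_size=5, padding=2),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_a, feat_b], dim=1)        # concatenate backbone features
        return torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)

resnet_feats = torch.randn(2, 2048, 7, 7)   # e.g. ResNet50 final feature map
vgg_feats    = torch.randn(2, 512, 7, 7)    # e.g. VGG19 final feature map
print(InceptionFusion(2048, 512)(resnet_feats, vgg_feats).shape)  # (2, 128, 7, 7)
```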

Yang Z, Ling J, Sun W, Pan C, Chen T, Dong C, Zhou X, Zhang J, Zheng J, Ma X

PubMed · Jun 1, 2025
Contrast-enhanced magnetic resonance lymphography (CE-MRL) plays a crucial role in the preoperative evaluation of tumor-metastatic sentinel lymph nodes (T-SLN) by integrating detailed information on lymphatic anatomy and drainage function from MR images. However, clinical gadolinium-based contrast agents are of seriously limited value for identifying T-SLN, owing to their small molecular structure and rapid diffusion into the bloodstream. Herein, we propose a novel albumin-modified, manganese-based nanoprobe-enhanced MRL method for accurately assessing micro- and macro-T-SLN. Specifically, the inherent concentration gradient of albumin between blood and interstitial fluid aids the movement of the nanoprobes into the lymphatic system. Micro-T-SLN exhibit a notably higher MR signal because new lymphatic vessels form and lymphatic flow increases, allowing a greater influx of nanoprobes; in contrast, macro-T-SLN show a lower MR signal as a result of tumor cell proliferation and damage to the lymphatic vessels. In addition, a highly accurate and sensitive machine learning model has been developed to guide the identification of micro- and macro-T-SLN by analyzing manganese-enhanced MR images. In conclusion, our research presents a novel comprehensive assessment framework utilizing albumin-modified manganese-based nanoprobes for highly sensitive evaluation of micro- and macro-T-SLN in breast cancer.

Puri S, Bagnall M, Erdelyi G

PubMed · Jun 1, 2025
The radiology team from a large breast screening unit in the UK, with a screening population of over 135,000, took part in a service evaluation project using artificial intelligence (AI) for reading breast screening mammograms. The aims were to evaluate the clinical benefit AI may provide when implemented as a silent reader in a double-reading breast screening programme, and to evaluate the feasibility and operational impact of deploying AI into the programme. The service was one of 14 breast screening sites in the UK to take part in this project, and we present our local experience with AI in breast screening. A commercially available AI platform was deployed and worked in real time as a 'silent third reader' so as not to impact standard workflows and patient care. All cases flagged by AI but not recalled by standard double reading (positive discordant cases) were reviewed, along with all cases recalled by human readers but not flagged by AI (negative discordant cases). 9,547 cases were included in the evaluation. 1,135 positive discordant cases were reviewed; one woman was recalled from these reviews and was not found to have cancer on further assessment in the breast assessment clinic. 139 negative discordant cases were reviewed; eight cancer cases (8.79% of total cancers detected in this period) recalled by human readers were not detected by AI. No additional cancers were detected by AI during the study. The performance of AI was inferior to that of the human readers in our unit. Having missed a significant number of cancers, it is unreliable and not safe to use in clinical practice. AI is not currently of sufficient accuracy to be considered for the NHS Breast Screening Programme.
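The discordant-case analysis described above reduces to simple set logic over per-case decisions; the toy sketch below (with randomly generated placeholder decisions, not the unit's data) illustrates how such cases could be tallied:

```python
# Tally positive and negative discordant cases from per-case decisions.
import numpy as np

rng = np.random.default_rng(0)
human_recall = rng.random(9547) < 0.04   # placeholder human double-reading decisions
ai_flag      = rng.random(9547) < 0.15   # placeholder AI flags

positive_discordant = np.logical_and(ai_flag, ~human_recall)   # AI flagged, humans did not recall
negative_discordant = np.logical_and(human_recall, ~ai_flag)   # humans recalled, AI did not flag

print(f"Positive discordant cases for review: {positive_discordant.sum()}")
print(f"Negative discordant cases for review: {negative_discordant.sum()}")
```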

Xena-Bosch C, Kodali S, Sahi N, Chard D, Llufriu S, Toosy AT, Martinez-Heras E, Prados F

PubMed · Jun 1, 2025
Understanding optic nerve structure and monitoring changes within it can provide insights into neurodegenerative diseases like multiple sclerosis, in which optic nerves are often damaged by inflammatory episodes of optic neuritis. Over the past decades, interest in the optic nerve has increased, particularly with advances in magnetic resonance technology and the advent of deep learning solutions. These advances have significantly improved the visualisation and analysis of optic nerves, making it possible to detect subtle changes that aid the early diagnosis and treatment of optic nerve-related diseases and the planning of radiotherapy interventions. Effective segmentation techniques are therefore crucial for enhancing the accuracy of predictive models and for planning interventions and treatment strategies. This comprehensive review, which includes 27 peer-reviewed articles published between 2007 and 2024, examines and highlights the evolution of optic nerve magnetic resonance imaging segmentation, tracing the development from intensity-based methods to the latest deep learning algorithms, including multi-atlas solutions using single or multiple image modalities.

Ji G, Luo W, Zhu Y, Chen B, Wang M, Jiang L, Yang M, Song W, Yao P, Zheng T, Yu H, Zhang R, Wang C, Ding R, Zhuo X, Chen F, Li J, Tang X, Xian J, Song T, Tang J, Feng M, Shao J, Li W

PubMed · Jun 1, 2025
Current lung cancer screening guidelines recommend annual low-dose computed tomography (LDCT) for high-risk individuals; however, the effectiveness of LDCT in non-high-risk individuals remains inadequately explored. With the incidence of lung cancer steadily increasing among non-high-risk individuals, this study aims to assess the risk of lung cancer in non-high-risk individuals and to evaluate the potential of thin-section LDCT reconstruction combined with artificial intelligence (LDCT-TRAI) as a screening tool. A real-world cohort study on lung cancer screening was conducted at the West China Hospital of Sichuan University from January 2010 to July 2021. Participants were screened using either LDCT-TRAI or traditional thick-section LDCT without AI (traditional LDCT). The AI system employed was the uAI-ChestCare software. Lung cancer diagnoses were confirmed through pathological examination. Among the 259,121 enrolled non-high-risk participants, 87,260 (33.7%) had positive screening results. Within 1 year, 728 (0.3%) participants were diagnosed with lung cancer, of whom 87.1% (634/728) were never-smokers and 92.7% (675/728) presented with stage I disease. Compared with traditional LDCT, LDCT-TRAI demonstrated a higher lung cancer detection rate (0.3% vs. 0.2%, P < 0.001), particularly for stage I cancers (94.4% vs. 83.2%, P < 0.001), and was associated with improved survival outcomes (5-year overall survival rate: 95.4% vs. 81.3%, P < 0.0001). These findings highlight the importance of expanding lung cancer screening to non-high-risk populations, especially never-smokers. LDCT-TRAI outperformed traditional LDCT in detecting early-stage cancers and improving survival outcomes, underscoring its potential as a more effective screening tool for early lung cancer detection in this population.
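For readers interested in the statistics, a comparison of detection rates of the kind reported above can be run as a chi-square test on a 2x2 contingency table; the counts below are placeholders for illustration only, not the study data:

```python
# Two-proportion comparison of lung cancer detection rates via chi-square test.
from scipy.stats import chi2_contingency

# rows: LDCT-TRAI vs traditional LDCT; columns: cancer detected vs not detected
table = [[450, 150000 - 450],
         [278, 109121 - 278]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
```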

Zhu H, Huang J, Chen K, Ying X, Qian Y

PubMed · Jun 1, 2025
Brain Tumor Segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors. However, due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task. In this study, we propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy. The model leverages spatial information, semantic information, and multi-modal imaging data, addressing the inherent heterogeneity in brain tumor characteristics. The multiPI-TransBTS framework consists of an encoder, an Adaptive Feature Fusion (AFF) module, and a multi-source, multi-scale feature decoder. The encoder incorporates a multi-branch architecture to separately extract modality-specific features from different MRI sequences. The AFF module fuses information from multiple sources using channel-wise and element-wise attention, ensuring effective feature recalibration. The decoder combines both common and task-specific features through a Task-Specific Feature Introduction (TSFI) strategy, producing accurate segmentation outputs for the Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Comprehensive evaluations on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of multiPI-TransBTS over state-of-the-art methods. The model consistently achieves better Dice coefficients, Hausdorff distances, and sensitivity scores, highlighting its effectiveness in addressing the BraTS challenges. Our results also indicate the need for further exploration of the balance between precision and recall in the ET segmentation task. The proposed framework represents a significant advancement in brain tumor segmentation, with potential implications for improving clinical outcomes for brain tumor patients.
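As an illustration of the fusion idea described for the AFF module, the following sketch combines two modality-specific 3D feature maps with channel-wise and element-wise gates; it reflects the general technique under assumed shapes, not the released multiPI-TransBTS implementation:

```python
# Sketch of adaptive fusion of two modality-specific 3D feature maps.
import torch
import torch.nn as nn

class AdaptiveFeatureFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(          # channel-wise attention
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.element_gate = nn.Sequential(          # element-wise attention
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        cat = torch.cat([feat_a, feat_b], dim=1)
        gate_c = self.channel_gate(cat)             # (B, C, 1, 1, 1)
        gate_e = self.element_gate(cat)             # (B, C, D, H, W)
        return gate_c * feat_a + gate_e * feat_b    # recalibrated fusion

t1_feats = torch.randn(1, 32, 16, 16, 16)           # toy T1-branch features
flair_feats = torch.randn(1, 32, 16, 16, 16)        # toy FLAIR-branch features
print(AdaptiveFeatureFusion(32)(t1_feats, flair_feats).shape)  # (1, 32, 16, 16, 16)
```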

Long B, Li R, Wang R, Yin A, Zhuang Z, Jing Y, E L

PubMed · Jun 1, 2025
To explore the feasibility of using a diagnostic model constructed with deep learning-radiomics (DLR) features extracted from chest computed tomography (CT) images to predict the gender-age-physiology (GAP) stage of patients with connective tissue disease-associated interstitial lung disease (CTD-ILD), the data of 264 CTD-ILD patients were retrospectively collected. There were 195, 56 and 13 patients in GAP stages I, II and III, respectively; the latter two stages were combined into one group. The patients were randomized into a training set and a validation set. Single-input models were constructed separately from the selected radiomics and DL features, while the DLR model was constructed from both sets of features. All models were built with both the support vector machine (SVM) and logistic regression (LR) algorithms. Nomogram models were generated by integrating age, gender and the DLR features. The DLR model outperformed the radiomics and DL models in both the training and validation sets. The DLR model based on the LR algorithm showed the best predictive performance among all feature-based models (AUC = 0.923). The comprehensive models performed even better in predicting the GAP stage of CTD-ILD patients, with the SVM-based comprehensive model performing best of the two (AUC = 0.951). The DLR model extracted from CT images can assist in the clinical prediction of the GAP stage of CTD-ILD patients, and a nomogram showed even greater performance for this task.
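The DLR workflow of fusing handcrafted radiomics with deep features and comparing LR and SVM classifiers by AUC can be sketched as follows; the feature dimensions and data are assumed placeholders, not the study's dataset:

```python
# Minimal DLR-style sketch: fuse radiomics and deep features, compare LR and SVM by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
radiomics = rng.normal(size=(264, 30))        # placeholder radiomics features
deep_feats = rng.normal(size=(264, 64))       # placeholder deep learning features
gap_stage = rng.integers(0, 2, size=264)      # 0 = GAP I, 1 = GAP II/III

X = np.hstack([radiomics, deep_feats])        # DLR feature fusion
X_tr, X_val, y_tr, y_val = train_test_split(X, gap_stage, test_size=0.3,
                                            random_state=0, stratify=gap_stage)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(probability=True))]:
    model = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{name} validation AUC: {auc:.3f}")
```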

Du X, Zhang X, Chen J, Li L

PubMed · Jun 1, 2025
Polyps, like a silent time bomb in the gut, are always lurking and can develop into deadly colorectal cancer at any time. Many methods attempt to maximize the early detection of colon polyps through screening; however, several challenges remain: (i) the scarcity of per-pixel annotation data and clinical features such as the blurred boundaries and low contrast of polyps result in poor performance; (ii) existing weakly semi-supervised methods that directly use pseudo-labels to supervise the student tend to ignore the value of the intermediate features in the teacher. To adapt the point-prompt teacher model to the challenging scenarios of complex medical images and limited annotation data, we leverage the diverse inductive biases of CNNs and Transformers to extract robust and complementary representations of polyp features (boundary and context). At the same time, a newly designed teacher-student intermediate feature distillation method is introduced, rather than merely using pseudo-labels to guide student learning. Comprehensive experiments demonstrate that the proposed method effectively handles scenarios with limited annotations and exhibits good segmentation performance. All code is available at https://github.com/dxqllp/WSS-Polyp.
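Intermediate feature distillation of the kind mentioned above is commonly implemented as a regression of projected student features onto frozen teacher features, added to the pseudo-label loss; the sketch below is an assumption-laden illustration of that general recipe, not the WSS-Polyp code:

```python
# Sketch of a teacher-student intermediate feature distillation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_feat: torch.Tensor,
                      teacher_feat: torch.Tensor,
                      student_logits: torch.Tensor,
                      pseudo_labels: torch.Tensor,
                      proj: nn.Module,
                      alpha: float = 0.5) -> torch.Tensor:
    """Combine pseudo-label supervision with intermediate feature matching."""
    feat_term = F.mse_loss(proj(student_feat), teacher_feat.detach())          # feature distillation
    label_term = F.binary_cross_entropy_with_logits(student_logits, pseudo_labels)  # pseudo-label term
    return label_term + alpha * feat_term

# Toy shapes: project student channels (64) up to teacher channels (128)
proj = nn.Conv2d(64, 128, kernel_size=1)
s_feat, t_feat = torch.randn(2, 64, 88, 88), torch.randn(2, 128, 88, 88)
logits, pseudo = torch.randn(2, 1, 352, 352), torch.rand(2, 1, 352, 352)
print(distillation_loss(s_feat, t_feat, logits, pseudo, proj).item())
```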