Page 79 of 99990 results

Prediction of lymph node metastasis in papillary thyroid carcinoma using non-contrast CT-based radiomics and deep learning with thyroid lobe segmentation: A dual-center study.

Wang H, Wang X, Du Y, Wang Y, Bai Z, Wu D, Tang W, Zeng H, Tao J, He J

PubMed · Jun 1 2025
This study aimed to develop a predictive model for lymph node metastasis (LNM) in papillary thyroid carcinoma (PTC) patients using deep learning radiomics (DLRad) and clinical features. The study included 271 thyroid lobes from 228 PTC patients who underwent preoperative neck non-contrast CT at Center 1 (May 2021-April 2024). LNM status was confirmed via postoperative pathology, with each thyroid lobe labeled accordingly. The cohort was divided into training (n = 189) and validation (n = 82) cohorts, with additional temporal (n = 59 lobes, Center 1, May-August 2024) and external (n = 66 lobes, Center 2) test cohorts. Thyroid lobes were manually segmented from the isthmus midline, ensuring interobserver consistency (ICC ≥ 0.8). Deep learning and radiomics features were selected using LASSO algorithms to compute DLRad scores. Logistic regression identified independent predictors, forming DLRad, clinical, and combined models. Model performance was evaluated using AUC, calibration, decision curves, and the DeLong test, and compared against radiologists' assessments. Independent predictors of LNM included age, gender, multiple nodules, tumor size group, and DLRad. The combined model demonstrated superior diagnostic performance, with AUCs of 0.830 (training), 0.799 (validation), 0.819 (temporal test), and 0.756 (external test), outperforming the DLRad model (AUCs: 0.786, 0.730, 0.753, 0.642), the clinical model (AUCs: 0.723, 0.745, 0.671, 0.660), and radiologist evaluations (AUCs: 0.529, 0.606, 0.620, 0.503). It also achieved the lowest Brier scores (0.167, 0.184, 0.175, 0.201) and the highest net benefit in decision-curve analysis at threshold probabilities above 20%. The combined model integrating DLRad and clinical features shows good performance in predicting LNM in PTC patients.
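The two-step pipeline this abstract describes, LASSO feature selection to form a radiomics score, then logistic regression combining that score with clinical covariates, can be sketched as follows. This is a minimal illustration on synthetic stand-in features, not the authors' code; all variable names and data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for deep-learning + radiomics features (200 lobes x 50 features).
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)

# Step 1: LASSO keeps a sparse subset of features; their weighted sum is the score.
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)
dlrad_score = Xs[:, selected] @ lasso.coef_[selected]

# Step 2: combine the score with clinical covariates in a logistic regression.
clinical = rng.normal(size=(200, 3))   # stand-ins for age, sex, tumor size group
combined = np.column_stack([dlrad_score, clinical])
model = LogisticRegression().fit(combined, y)
auc = roc_auc_score(y, model.predict_proba(combined)[:, 1])
print(f"training AUC: {auc:.3f}")
```

In practice the AUC would be reported on held-out validation and test cohorts, as the study does, rather than on the training data.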

Standardized pancreatic MRI-T1 measurement methods: comparison between manual measurement and a semi-automated pipeline with automatic quality control.

Triay Bagur A, Arya Z, Waddell T, Pansini M, Fernandes C, Counter D, Jackson E, Thomaides-Brears HB, Robson MD, Bulte DP, Banerjee R, Aljabar P, Brady M

PubMed · Jun 1 2025
Scanner-referenced T1 (srT1) is a method for measuring pancreas T1 relaxation time. The purpose of this multi-centre study is 2-fold: (1) to evaluate the repeatability of manual ROI-based analysis of srT1, (2) to validate a semi-automated measurement method with an automatic quality control (QC) module to identify likely discrepancies between automated and manual measurements. Pancreatic MRI scans from a scan-rescan cohort (46 subjects) were used to evaluate the repeatability of manual analysis. Seven hundred and eight scans from a longitudinal multi-centre study of 466 subjects were divided into training, internal validation (IV), and external validation (EV) cohorts. A semi-automated method for measuring srT1 using machine learning is proposed and compared against manual analysis on the validation cohorts with and without automated QC. Inter-operator agreement between manual ROI-based method and semi-automated method had low bias (3.8 ms or 0.5%) and limits of agreement [-36.6, 44.1] ms. There was good agreement between the 2 methods without automated QC (IV: 3.2 [-47.1, 53.5] ms, EV: -0.5 [-35.2, 34.2] ms). After QC, agreement on the IV set improved, was unchanged in the EV set, and the agreement in both was within inter-operator bounds (IV: -0.04 [-33.4, 33.3] ms, EV: -1.9 [-37.6, 33.7] ms). The semi-automated method improved scan-rescan agreement versus manual analysis (manual: 8.2 [-49.7, 66] ms, automated: 6.7 [-46.7, 60.1] ms). The semi-automated method for characterization of standardized pancreatic T1 using MRI has the potential to decrease analysis time while maintaining accuracy and improving scan-rescan agreement. We provide intra-operator, inter-operator, and scan-rescan agreement values for manual measurement of srT1, a standardized biomarker for measuring pancreas fibro-inflammation. Applying a semi-automated measurement method improves scan-rescan agreement and agrees well with manual measurements, while reducing human effort. 
Adding automated QC can improve agreement between manual and automated measurements. We describe a method for semi-automated, standardized measurement of pancreatic T1 (srT1), which includes automated quality control. Measurements show good agreement with manual ROI-based analysis, with comparable consistency to inter-operator performance.
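The bias and limits-of-agreement figures quoted above (e.g. a bias of 3.8 ms with limits [-36.6, 44.1] ms) follow the standard Bland-Altman calculation, sketched below on synthetic srT1 values, not study data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(1)
manual = rng.normal(700, 50, size=46)         # hypothetical srT1 values (ms)
auto = manual + rng.normal(3, 20, size=46)    # semi-automated method, small bias
bias, (lo, hi) = bland_altman(auto, manual)
print(f"bias = {bias:.1f} ms, LoA = [{lo:.1f}, {hi:.1f}] ms")
```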

Evaluating the prognostic significance of artificial intelligence-delineated gross tumor volume and prostate volume measurements for prostate radiotherapy.

Adleman J, McLaughlin PY, Tsui JMG, Buzurovic I, Harris T, Hudson J, Urribarri J, Cail DW, Nguyen PL, Orio PF, Lee LK, King MT

PubMed · Jun 1 2025
Artificial intelligence (AI) may extract prognostic information from MRI for localized prostate cancer. We evaluate whether AI-derived prostate and gross tumor volume (GTV) are associated with toxicity and oncologic outcomes after radiotherapy. We conducted a retrospective study of patients who underwent radiotherapy between 2010 and 2017. We trained an AI segmentation algorithm to contour the prostate and GTV from patients treated with external-beam RT, and applied the algorithm to those treated with brachytherapy. AI prostate and GTV volumes were calculated from the segmentation results. We evaluated whether AI GTV volume was associated with biochemical failure (BF) and metastasis. We evaluated whether AI prostate volume was associated with acute and late grade 2+ genitourinary toxicity, and with International Prostate Symptom Score (IPSS) resolution, for the monotherapy and combination sets separately. We identified 187 patients who received brachytherapy (monotherapy (N = 154) or combination therapy (N = 33)). AI GTV volume was associated with BF (hazard ratio (HR): 1.28 [1.14, 1.44]; p < 0.001) and metastasis (HR: 1.34 [1.18, 1.53]; p < 0.001). For the monotherapy subset, AI prostate volume was associated with both acute (adjusted odds ratio: 1.16 [1.07, 1.25]; p < 0.001) and late grade 2+ genitourinary toxicity (adjusted HR: 1.04 [1.01, 1.07]; p = 0.01), but not IPSS resolution (0.99 [0.97, 1.00]; p = 0.13). For the combination therapy subset, AI prostate volume was not associated with either acute (p = 0.72) or late (p = 0.75) grade 2+ urinary toxicity. However, AI prostate volume was associated with IPSS resolution (0.96 [0.93, 0.99]; p = 0.01). AI-derived prostate and GTV volumes may be prognostic for toxicity and oncologic outcomes after RT. Such information may aid treatment decision-making, given differences in outcomes among RT treatment modalities.
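Adjusted odds ratios like those reported per unit of AI prostate volume come from exponentiating logistic-regression coefficients (OR = exp(β)). A minimal sketch on synthetic data follows; every number and name here is hypothetical, not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical data: larger prostate volume (cc) raises the odds of grade 2+ GU toxicity.
volume = rng.normal(40, 10, size=500)
logit = -4.0 + 0.08 * volume     # assumed true per-cc log-odds increase of 0.08
toxicity = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

# A very large C approximates an unpenalized maximum-likelihood fit.
model = LogisticRegression(C=1e6, max_iter=1000).fit(volume.reshape(-1, 1), toxicity)
odds_ratio = float(np.exp(model.coef_[0, 0]))
print(f"odds ratio per cc: {odds_ratio:.2f}")   # ~exp(0.08), i.e. about 1.08
```

Note the abstract's hazard ratios for BF and metastasis come from time-to-event (Cox) models rather than logistic regression; the exp(β) interpretation of the coefficient is analogous.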

Predicting hepatocellular carcinoma response to TACE: A machine learning study based on 2.5D CT imaging and deep features analysis.

Lin C, Cao T, Tang M, Pu W, Lei P

PubMed · Jun 1 2025
Prior to the commencement of treatment, it is essential to establish an objective method for accurately predicting the prognosis of patients with hepatocellular carcinoma (HCC) undergoing transarterial chemoembolization (TACE). In this study, we aimed to develop a machine learning (ML) model to predict the response of HCC patients to TACE based on CT image analysis. The public dataset from The Cancer Imaging Archive (TCIA), uploaded in August 2022, comprised a total of 105 cases, including 68 males and 37 females. The external testing dataset, collected from March 1, 2019 to July 1, 2022, consisted of a total of 26 patients who underwent TACE treatment at our institution and were followed up for at least 3 months after TACE, including 22 males and 4 females. The public dataset was utilized for ResNet50 transfer learning and ML model construction, while the external testing dataset was used for model performance evaluation. The CT images with the largest lesion in the axial, sagittal, and coronal orientations were selected to construct 2.5D images. Pre-trained ResNet50 weights were adapted through transfer learning to serve as a feature extractor deriving deep features for building the ML models. Model performance was assessed using the area under the curve (AUC), accuracy, F1-score, confusion matrix analysis, decision curves, and calibration curves. The AUC values for the external testing dataset were 0.90, 0.90, 0.91, and 0.89 for the random forest classifier (RFC), support vector classifier (SVC), logistic regression (LR), and extreme gradient boosting (XGB), respectively. The accuracy values for the external testing dataset were 0.79, 0.81, 0.80, and 0.80 for RFC, SVC, LR, and XGB, respectively. The F1-score values for the external testing dataset were 0.75, 0.77, 0.78, and 0.79 for RFC, SVC, LR, and XGB, respectively.
The ML model constructed using deep features from 2.5D images has the potential to be applied in predicting the prognosis of HCC patients following TACE treatment.
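A common way to build the 2.5D input described here is to stack the axial, sagittal, and coronal slices through the lesion as the three channels of one image. The sketch below assumes a simple corner crop to a common square; the authors' exact cropping and resampling are not specified in the abstract.

```python
import numpy as np

def make_25d(volume, center):
    """Stack the axial, coronal, and sagittal slices through the lesion centre
    into one 3-channel 2.5D image (a common approximation of this approach)."""
    z, y, x = center
    axial = volume[z, :, :]
    coronal = volume[:, y, :]
    sagittal = volume[:, :, x]
    size = min(min(s.shape) for s in (axial, coronal, sagittal))
    crop = lambda s: s[:size, :size]   # crop to a common square so they stack
    return np.stack([crop(axial), crop(coronal), crop(sagittal)], axis=-1)

ct = np.random.default_rng(3).normal(size=(64, 128, 128))   # toy CT volume (z, y, x)
img25d = make_25d(ct, center=(32, 64, 64))
print(img25d.shape)   # (64, 64, 3)
```

The resulting 3-channel image can be fed directly to an ImageNet-pretrained backbone such as ResNet50 for deep-feature extraction.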

Scatter and beam hardening effect corrections in pelvic region cone beam CT images using a convolutional neural network.

Yagi S, Usui K, Ogawa K

PubMed · Jun 1 2025
The aim of this study is to remove scattered photons and the beam hardening effect in cone beam CT (CBCT) images and to make the images usable for treatment planning. To remove scattered photons and the beam hardening effect, a convolutional neural network (CNN) was used, trained on distorted projection data containing scattered photons and the beam hardening effect, with supervision from projection data calculated with monochromatic X-rays. The number of training projections was 17,280 with data augmentation, and the number of test projections was 540. The performance of the CNN was investigated in terms of the number of photons in the projection data used to train the network. Projection data of pelvic CBCT images (32 cases) were calculated with a Monte Carlo simulation at six count levels ranging from 0.5 to 3 million counts/pixel. For the evaluation of corrected images, the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the sum of absolute differences (SAD) were used. The simulation results showed that the CNN could effectively remove scattered photons and the beam hardening effect, and the PSNR, SSIM, and SAD improved significantly. The number of photons in the training projection data was also found to be important for correction accuracy. Furthermore, a CNN model trained with projection data with a sufficient number of photons could yield good performance even when a small number of photons was used in the input projection data.
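Two of the evaluation metrics, PSNR and SAD, reduce to a few lines of NumPy; a minimal sketch on synthetic images follows (SSIM is usually taken from an existing implementation such as scikit-image rather than hand-rolled).

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB against the reference image."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

def sad(ref, img):
    """Sum of absolute differences."""
    return np.abs(ref - img).sum()

rng = np.random.default_rng(4)
truth = rng.random((64, 64))                         # stand-in monochromatic reference
corrected = truth + rng.normal(0, 0.01, truth.shape) # stand-in corrected reconstruction
print(f"PSNR = {psnr(truth, corrected):.1f} dB, SAD = {sad(truth, corrected):.1f}")
```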

Boosting polyp screening with improved point-teacher weakly semi-supervised learning.

Du X, Zhang X, Chen J, Li L

PubMed · Jun 1 2025
Polyps, like a silent time bomb in the gut, are always lurking and can explode into deadly colorectal cancer at any time. Many methods attempt to maximize the early detection of colon polyps through screening; however, challenges remain: (i) the scarcity of per-pixel annotation data, together with clinical features such as the blurred boundaries and low contrast of polyps, results in poor performance; (ii) existing weakly semi-supervised methods that directly use pseudo-labels to supervise the student tend to ignore the value of intermediate features in the teacher. To adapt the point-prompt teacher model to the challenging scenario of complex medical images and limited annotation data, we leverage the diverse inductive biases of CNNs and Transformers to extract robust and complementary representations of polyp features (boundary and context). At the same time, a newly designed teacher-student intermediate feature distillation method is introduced, rather than using pseudo-labels alone to guide student learning. Comprehensive experiments demonstrate that our proposed method effectively handles scenarios with limited annotations and exhibits good segmentation performance. All code is available at https://github.com/dxqllp/WSS-Polyp.
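The core idea, supervising the student with the teacher's intermediate features in addition to pseudo-labels, can be sketched as a combined loss. This is a simplified NumPy illustration of the general technique, not the paper's actual formulation; `alpha` and all shapes are hypothetical.

```python
import numpy as np

def distill_loss(teacher_feat, student_feat, pseudo_label, student_pred, alpha=0.5):
    """Combine pseudo-label supervision with an MSE penalty pulling the student's
    intermediate features toward the teacher's; alpha balances the two terms."""
    feat_term = np.mean((teacher_feat - student_feat) ** 2)
    # Binary cross-entropy of the student prediction against the teacher's pseudo-mask.
    eps = 1e-7
    p = np.clip(student_pred, eps, 1 - eps)
    label_term = -np.mean(pseudo_label * np.log(p) + (1 - pseudo_label) * np.log(1 - p))
    return alpha * feat_term + (1 - alpha) * label_term

rng = np.random.default_rng(5)
t_feat, s_feat = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
mask, pred = rng.integers(0, 2, size=(8, 32)), rng.random((8, 32))
loss = distill_loss(t_feat, s_feat, mask, pred)
print(f"loss = {loss:.3f}")
```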

Multi-level feature fusion network for kidney disease detection.

Rehman Khan SU

PubMed · Jun 1 2025
Kidney irregularities pose a significant public health challenge, often leading to severe complications, yet the limited availability of nephrologists makes early detection costly and time-consuming. To address this issue, we propose a deep learning framework for automated kidney disease detection, leveraging feature fusion and sequential modeling techniques to enhance diagnostic accuracy. Our study thoroughly evaluates six pretrained models under identical experimental conditions, identifying ResNet50 and VGG19 as the most effective feature extractors due to their deep residual learning and hierarchical representations. Our proposed methodology integrates feature fusion with an inception block to extract diverse feature representations while containing the overhead of the imbalanced dataset. To enhance sequential learning and capture long-term dependencies in disease progression, ConvLSTM is incorporated after feature fusion. Additionally, an inception block is employed after the ConvLSTM to refine hierarchical feature extraction, further strengthening the model's ability to leverage both spatial and temporal patterns. To validate our approach, we introduce a new dataset named Multiple Hospital Collected (MHC-CT), consisting of 1860 tumor and 1024 normal kidney CT scans, meticulously annotated by medical experts. Our model achieves 99.60% accuracy on this dataset, demonstrating its robustness in binary classification. Furthermore, to assess its generalization capability, we evaluate the model on a publicly available benchmark multiclass CT scan dataset, achieving 91.31% accuracy. The superior performance is attributed to the effective feature fusion using inception blocks and the sequential learning capabilities of ConvLSTM, which together enhance spatial and temporal feature representations.
These results highlight the efficacy of the proposed framework in automating kidney disease detection, providing a reliable and efficient solution for clinical decision-making. https://github.com/VS-EYE/KidneyDiseaseDetection.git.

Liver Tumor Prediction using Attention-Guided Convolutional Neural Networks and Genomic Feature Analysis.

Edwin Raja S, Sutha J, Elamparithi P, Jaya Deepthi K, Lalitha SD

PubMed · Jun 1 2025
Predicting liver tumors is a critical task in medical image analysis and genomics, since diagnosis and prognosis are central to making correct medical decisions. The subtle characteristics of liver tumors and the interactions between genomic and imaging features are the main sources of difficulty for reliable prediction. To overcome these hurdles, this study presents two integrated approaches: Attention-Guided Convolutional Neural Networks (AG-CNN) and a Genomic Feature Analysis Module (GFAM). Spatial and channel attention mechanisms in the AG-CNN enable accurate tumor segmentation from CT images while providing detailed morphological profiling. Evaluation on three benchmark databases (TCIA, LiTS, and CRLM) shows that our model outperforms the relevant literature, with an accuracy of 94.5%, a Dice Similarity Coefficient of 91.9%, and an F1-score of 96.2% on Dataset 3. More notably, the proposed methods outperform all other compared methods, including CELM, CAGS, and DM-ML, across datasets in recall, precision, and specificity by up to 10%.
• Utilization of Attention-Guided Convolutional Neural Networks (AG-CNN) enhances tumor region focus and segmentation accuracy.
• Integration of Genomic Feature Analysis (GFAM) identifies molecular markers for subtype-specific tumor classification.
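The Dice Similarity Coefficient reported above is a standard overlap metric between a predicted and a reference segmentation mask; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

truth = np.zeros((32, 32), bool); truth[8:24, 8:24] = True
pred = np.zeros((32, 32), bool); pred[10:24, 8:24] = True   # slightly shifted prediction
print(f"Dice = {dice(pred, truth):.3f}")   # 2*224/(224+256) = 0.933
```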

Evaluation of MRI anatomy in machine learning predictive models to assess hydrogel spacer benefit for prostate cancer patients.

Bush M, Jones S, Hargrave C

PubMed · Jun 1 2025
Hydrogel spacers (HS) are designed to minimise radiation doses to the rectum in prostate cancer radiation therapy (RT) by creating a physical gap between the rectum and the target treatment volume, inclusive of the prostate and seminal vesicles (SV). This study aims to determine the feasibility of incorporating diagnostic MRI (dMRI) information in statistical machine learning (SML) models developed with planning CT (pCT) anatomy for dose and rectal toxicity prediction. The SML models aim to support HS insertion decision-making prior to RT planning procedures. Regions of interest (ROIs) were retrospectively contoured on the pCT and registered dMRI scans for 20 patients, and ROI Dice and Hausdorff distance (HD) comparison metrics were calculated. The ROI and patient clinical risk factor (CRF) variables were input into three SML models, and pCT- and dMRI-based dose and toxicity model performance was then compared through confusion matrices, AUC curves, accuracy metrics, and observed patient outcomes. Average Dice values comparing dMRI and pCT ROIs were 0.81, 0.47, and 0.71 for the prostate, SV, and rectum, respectively. Average Hausdorff distances were 2.15, 2.75, and 2.75 mm for the prostate, SV, and rectum, respectively. The average accuracy metric across all models was 0.83 when using dMRI ROIs and 0.85 when using pCT ROIs. Differences between pCT and dMRI anatomical ROI variables did not impact SML model performance in this study, demonstrating the feasibility of using dMRI images. Due to the limited sample size, further training of the predictive models including dMRI anatomy is recommended.
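The Hausdorff distance used above to compare pCT and dMRI ROIs can be computed from contour point sets, for example with SciPy's directed variant; a sketch on synthetic contour points (not study data) follows.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(6)
# Contour points (mm) of the same structure on the pCT and the registered dMRI.
pct_pts = rng.normal(size=(100, 2)) * 10
dmri_pts = pct_pts + rng.normal(scale=0.5, size=pct_pts.shape)  # small residual offset

# Symmetric Hausdorff distance: max of the two directed distances.
hd = max(directed_hausdorff(pct_pts, dmri_pts)[0],
         directed_hausdorff(dmri_pts, pct_pts)[0])
print(f"Hausdorff distance = {hd:.2f} mm")
```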

Combining Deep Data-Driven and Physics-Inspired Learning for Shear Wave Speed Estimation in Ultrasound Elastography.

Tehrani AKZ, Schoen S, Candel I, Gu Y, Guo P, Thomenius K, Pierce TT, Wang M, Tadross R, Washburn M, Rivaz H, Samir AE

PubMed · Jun 1 2025
Shear wave elastography (SWE) provides quantitative markers for tissue characterization by measuring the shear wave speed (SWS), which reflects tissue stiffness. SWE uses an acoustic radiation force pulse sequence to generate shear waves that propagate laterally through tissue with transient displacements. These waves travel perpendicular to the applied force, and their displacements are tracked using high-frame-rate ultrasound. Estimating the SWS map involves two main steps: speckle tracking and SWS estimation. Speckle tracking calculates particle velocity by measuring RF/IQ data displacement between adjacent firings, while SWS estimation methods typically compare the particle velocity profiles of samples that are laterally a few millimeters apart. Deep learning (DL) methods have gained attention for SWS estimation, often relying on supervised training with simulated data. However, these methods may struggle with real-world data, which can differ significantly from the simulated training data, potentially leading to artifacts in the estimated SWS map. To address this challenge, we propose a physics-inspired learning approach that utilizes real data without known SWS values. Our method employs an adaptive unsupervised loss function, allowing the network to train on real, noisy data to minimize artifacts and improve robustness. We validate our approach using experimental phantom data and in vivo liver data from two human subjects, demonstrating enhanced accuracy and reliability in SWS estimation compared with conventional and supervised methods. This hybrid approach leverages the strengths of both data-driven and physics-inspired learning, offering a promising solution for more accurate and robust SWS mapping in clinical applications.
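The conventional estimation the abstract contrasts with, comparing particle-velocity profiles at two laterally separated positions, can be sketched as a time-of-flight calculation: cross-correlate the two profiles, take the lag of peak correlation as the travel time, and divide the lateral spacing by it. The example below uses synthetic Gaussian pulses and hypothetical parameter values.

```python
import numpy as np

def estimate_sws(v1, v2, dx_mm, fs_hz):
    """Time-of-flight SWS estimate from particle-velocity profiles recorded at
    two lateral positions dx_mm apart (a simplified sketch of the conventional
    cross-correlation approach)."""
    corr = np.correlate(v2, v1, mode="full")
    lag = np.argmax(corr) - (len(v1) - 1)   # samples by which v2 trails v1
    dt = lag / fs_hz
    return (dx_mm / 1000.0) / dt            # m/s

fs = 10_000.0                               # assumed 10 kHz tracking frame rate
t = np.arange(400) / fs
true_sws = 2.0                              # m/s, typical soft-tissue value
delay = 2e-3 / true_sws                     # 2 mm spacing -> 1 ms travel time
pulse = np.exp(-((t - 0.01) ** 2) / (2 * 0.001 ** 2))          # shear pulse at x1
pulse2 = np.exp(-((t - 0.01 - delay) ** 2) / (2 * 0.001 ** 2)) # delayed pulse at x2
print(f"estimated SWS = {estimate_sws(pulse, pulse2, dx_mm=2.0, fs_hz=fs):.2f} m/s")
```

The lag resolution here is one tracking frame, which is one motivation for the more robust learned estimators the paper proposes.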