
Brain tumor segmentation with deep learning: Current approaches and future perspectives.

Verma A, Yadav AK

PubMed · Jun 1 2025
Accurate brain tumor segmentation from MRI images is critical in clinical practice, as it directly impacts the efficacy of diagnostic and treatment plans. Accurate segmentation of the tumor region can be challenging, especially when noise and abnormalities are present. This research provides a systematic review of automatic brain tumor segmentation techniques, with a specific focus on the design of network architectures. The review categorizes existing methods into unsupervised and supervised learning techniques, and, within supervised techniques, into machine learning and deep learning approaches. Deep learning techniques are thoroughly reviewed, with a particular focus on CNN-based, U-Net-based, transfer-learning-based, transformer-based, and hybrid transformer-based methods. This survey encompasses a broad spectrum of automatic segmentation methodologies, from traditional machine learning approaches to advanced deep learning frameworks. It provides an in-depth comparison of performance metrics, model efficiency, and robustness across multiple datasets, particularly the BraTS dataset. The study further examines multi-modal MRI imaging and its influence on segmentation accuracy, addressing domain adaptation, class imbalance, and generalization challenges. The analysis highlights the current challenges in Computer-aided Diagnostic (CAD) systems, examining how different models and imaging sequences impact performance. Recent advancements in deep learning, especially the widespread use of U-Net architectures, have significantly enhanced medical image segmentation. This review critically evaluates these developments, focusing on the iterative improvements in U-Net models that have driven progress in brain tumor segmentation. Furthermore, it explores various techniques for improving U-Net performance for medical applications, focusing on its potential for improving diagnostic and treatment planning procedures.
The efficiency of these automated segmentation approaches is rigorously evaluated on the BraTS dataset, the benchmark used in the annual MICCAI Multimodal Brain Tumor Segmentation Challenge. This evaluation provides insights into the current state of the art and identifies key areas for future research and development.
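Among the performance metrics such reviews compare, the Dice coefficient is the standard overlap measure in BraTS evaluations. A minimal sketch of its computation on toy binary masks (illustrative values, not drawn from any cited work):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two overlapping toy "tumor" masks on a 4x4 grid
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 pixels, 4 shared
print(round(float(dice_coefficient(a, b)), 2))     # → 0.8
```

The same formula generalizes per region (WT, TC, ET) by binarizing each label separately.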

PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer.

Ma B, Guo J, Dijk LVV, Langendijk JA, Ooijen PMAV, Both S, Sijtsema NM

PubMed · Jun 1 2025
In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA) deep learning models were introduced for predicting recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with optimized numbers of layers and image-fusion strategies, could achieve performance comparable to the SOTA models. The HECKTOR 2022 dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers. It was randomly divided into a training set (n = 369) and an independent test set (n = 120). Furthermore, an additional dataset of 400 OPC patients, who underwent (chemo)radiotherapy at our center, was employed for external testing. Each patient's data included pre-treatment CT and PET scans, manually generated gross tumour volume (GTV) contours for primary tumors and lymph nodes, and RFP information. The present study compared the performance of DenseNet against three SOTA models developed on the HECKTOR 2022 dataset. When inputting CT, PET, and GTV using the early-fusion approach (treating them as different channels of the input), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, comparable with the SOTA models. Notably, removing GTV from the input data yielded the same internal test C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Furthermore, compared to PET-only models, when utilizing late fusion (concatenation of extracted features) with CT and PET, DenseNet81 demonstrated superior C-index values of 0.68 and 0.66 in the internal and external test sets respectively, whereas early fusion was better only in the internal test set. The basic DenseNet architecture with 81 layers demonstrated predictive performance on par with SOTA models featuring more intricate architectures in the internal test set, and better performance in the external test set.
The late fusion of CT and PET imaging data yielded superior performance in the external test set.
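The early- versus late-fusion distinction the study investigates can be shown schematically. A toy sketch with arbitrary arrays and a stand-in encoder (not the paper's DenseNet81), where early fusion stacks modalities as input channels and late fusion concatenates separately extracted features:

```python
import numpy as np

rng = np.random.default_rng(0)
ct  = rng.random((1, 64, 64))   # toy CT slice, 1 channel
pet = rng.random((1, 64, 64))   # toy PET slice, 1 channel

# Early fusion: stack modalities as input channels of a single network
early_input = np.concatenate([ct, pet], axis=0)        # one 2-channel input

def toy_encoder(x):
    """Stand-in for a per-modality encoder branch: global average pooling."""
    return x.mean(axis=(1, 2))

# Late fusion: run separate encoders, then concatenate the feature vectors
late_features = np.concatenate([toy_encoder(ct), toy_encoder(pet)])

print(early_input.shape, late_features.shape)  # → (2, 64, 64) (2,)
```

In a real model the survival head (predicting the RFP risk) would sit on top of either the fused input or the concatenated features.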

Predicting strength of femora with metastatic lesions from single 2D radiographic projections using convolutional neural networks.

Synek A, Benca E, Licandro R, Hirtler L, Pahr DH

PubMed · Jun 1 2025
Patients with metastatic bone disease are at risk of pathological femoral fractures and may require prophylactic surgical fixation. Current clinical decision support tools often overestimate fracture risk, leading to overtreatment. While novel scores integrating femoral strength assessment via finite element (FE) models show promise, they require 3D imaging, extensive computation, and are difficult to automate. Predicting femoral strength directly from single 2D radiographic projections using convolutional neural networks (CNNs) could address these limitations, but this approach has not yet been explored for femora with metastatic lesions. This study aimed to test whether CNNs can accurately predict the strength of femora with metastatic lesions from single 2D radiographic projections. CNNs with various architectures were developed and trained on an FE-model-generated training dataset. This training dataset was based on 36,000 modified computed tomography (CT) scans, created by randomly inserting artificial lytic lesions into the CT scans of 36 intact anatomical femoral specimens. From each modified CT scan, an anterior-posterior 2D projection was generated and femoral strength in one-legged stance was determined using nonlinear FE models. Following training, CNN performance was evaluated on an independent experimental test dataset consisting of 31 anatomical femoral specimens (16 intact, 15 with artificial lytic lesions). 2D projections of each specimen were created from corresponding CT scans and femoral strength was assessed in mechanical tests. The CNNs' performance was evaluated using linear regression analysis and compared to 2D densitometric predictors (bone mineral density and content) and CT-based 3D FE models. All CNNs accurately predicted the experimentally measured strength in femora with and without metastatic lesions of the test dataset (R²≥0.80, CCC≥0.81).
In femora with metastatic lesions, the performance of the CNNs (best: R²=0.84, CCC=0.86) was considerably superior to 2D densitometric predictors (R²≤0.07) and slightly inferior to 3D FE models (R²=0.90, CCC=0.94). CNNs, trained on a large dataset generated via FE models, predicted experimentally measured strength of femora with artificial metastatic lesions with accuracy comparable to 3D FE models. By eliminating the need for 3D imaging and reducing computational demands, this novel approach demonstrates potential for application in a clinical setting.
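The concordance correlation coefficient (CCC) reported alongside R² can be computed directly from paired measurements; it penalizes both scatter and systematic offset from the identity line. A sketch with made-up strength values (not the study's data):

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient for paired measurements."""
    mt, mp = y_true.mean(), y_pred.mean()
    vt, vp = y_true.var(), y_pred.var()
    cov = ((y_true - mt) * (y_pred - mp)).mean()
    return 2 * cov / (vt + vp + (mt - mp) ** 2)

# Hypothetical measured vs predicted femoral strengths (kN, illustrative)
strength_true = np.array([4.1, 5.3, 6.0, 7.2, 8.5])
strength_pred = np.array([4.3, 5.1, 6.4, 7.0, 8.2])
print(round(float(ccc(strength_true, strength_pred)), 3))
```

Unlike Pearson correlation, CCC equals 1 only when predictions match measurements exactly, which is why it complements R² in strength-validation studies.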

Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning.

Zhang S, Chu S, Qiang Y, Zhao J, Wang Y, Wei X

PubMed · Jun 1 2025
Computer-aided diagnosis systems based on deep neural networks heavily rely on datasets with high-quality labels. However, manual annotation for lesion diagnosis relies on image features, often requiring professional experience and complex image analysis processes. This inevitably introduces noisy labels, which can misguide the training of classification models. Our goal is to design an effective method to address the challenges posed by label noise in medical images. We propose a novel noise-tolerant medical image classification framework consisting of two phases: fore-training correction and progressive hard-sample enhanced learning. In the first phase, we design a dual-branch sample partition detection scheme that effectively classifies each instance into one of three subsets: clean, hard, or noisy. Simultaneously, we propose a hard-sample label refinement strategy based on class prototypes with confidence-perception weighting and an effective joint correction method for noisy samples, enabling the acquisition of higher-quality training data. In the second phase, we design a progressive hard-sample enhanced learning method to strengthen the model's ability to learn discriminative feature representations. This approach accounts for sample difficulty and mitigates the effects of label noise in medical datasets. Our framework achieves an accuracy of 82.39% on the pneumoconiosis dataset collected by our laboratory. On a five-class skin disease dataset with six different levels of label noise (0, 0.05, 0.1, 0.2, 0.3, and 0.4), the average accuracy over the last ten epochs reaches 88.51%, 86.64%, 85.02%, 83.01%, 81.95%, and 77.89%, respectively. For binary polyp classification under noise rates of 0.2, 0.3, and 0.4, the average accuracy over the last ten epochs is 97.90%, 93.77%, and 89.33%, respectively. The effectiveness of our proposed framework is demonstrated through its performance on three challenging datasets with both real and synthetic noise.
Experimental results further demonstrate the robustness of our method across varying noise rates.
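The three-way clean/hard/noisy partition at the heart of the first phase can be illustrated with a simplified small-loss-style rule. The quantile thresholds below are illustrative stand-ins, not the paper's dual-branch detection scheme:

```python
import numpy as np

def partition_samples(losses, clean_q=0.4, noisy_q=0.8):
    """Split samples into clean / hard / noisy subsets by per-sample loss:
    low-loss samples are trusted as clean, high-loss samples flagged as
    noisy, and the remainder kept as hard samples for the second phase."""
    lo, hi = np.quantile(losses, [clean_q, noisy_q])
    return np.where(losses <= lo, "clean",
           np.where(losses >= hi, "noisy", "hard"))

losses = np.array([0.05, 0.1, 0.4, 0.6, 1.5, 2.3])  # per-sample training losses
print([str(s) for s in partition_samples(losses)])
# → ['clean', 'clean', 'clean', 'hard', 'noisy', 'noisy']
```

In the framework described above, the hard subset would then receive prototype-based label refinement while the noisy subset undergoes joint correction.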

multiPI-TransBTS: A multi-path learning framework for brain tumor image segmentation based on multi-physical information.

Zhu H, Huang J, Chen K, Ying X, Qian Y

PubMed · Jun 1 2025
Brain Tumor Segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors. However, due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task. In this study, we propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy. The model leverages spatial information, semantic information, and multi-modal imaging data, addressing the inherent heterogeneity in brain tumor characteristics. The multiPI-TransBTS framework consists of an encoder, an Adaptive Feature Fusion (AFF) module, and a multi-source, multi-scale feature decoder. The encoder incorporates a multi-branch architecture to separately extract modality-specific features from different MRI sequences. The AFF module fuses information from multiple sources using channel-wise and element-wise attention, ensuring effective feature recalibration. The decoder combines both common and task-specific features through a Task-Specific Feature Introduction (TSFI) strategy, producing accurate segmentation outputs for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Comprehensive evaluations on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of multiPI-TransBTS over the state-of-the-art methods. The model consistently achieves better Dice coefficients, Hausdorff distances, and Sensitivity scores, highlighting its effectiveness in addressing the BraTS challenges. Our results also indicate the need for further exploration of the balance between precision and recall in the ET segmentation task. The proposed framework represents a significant advancement in BraTS, with potential implications for improving clinical outcomes for brain tumor patients.
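The channel-wise attention used by the AFF module for modality fusion can be sketched in simplified form. This toy version (global-average squeeze plus a softmax over modality branches) is an illustrative stand-in, not the paper's actual module:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_modalities(features):
    """Toy adaptive feature fusion: weight each modality's feature map by an
    attention score derived from its global average, then sum."""
    pooled = np.array([f.mean() for f in features])   # squeeze: one scalar per branch
    weights = softmax(pooled)                         # attention over modalities
    fused = sum(w * f for w, f in zip(weights, features))
    return fused, weights

t1 = np.ones((8, 8)) * 0.2    # toy feature map from a T1 branch
t2 = np.ones((8, 8)) * 0.8    # toy feature map from a FLAIR branch
fused, w = fuse_modalities([t1, t2])
print(w.round(3), round(float(fused.mean()), 3))
```

A real AFF module would combine such channel-wise weighting with element-wise attention and learned projections, but the recalibration idea is the same.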

Combining Deep Data-Driven and Physics-Inspired Learning for Shear Wave Speed Estimation in Ultrasound Elastography.

Tehrani AKZ, Schoen S, Candel I, Gu Y, Guo P, Thomenius K, Pierce TT, Wang M, Tadross R, Washburn M, Rivaz H, Samir AE

PubMed · Jun 1 2025
Shear wave elastography (SWE) provides quantitative markers for tissue characterization by measuring the shear wave speed (SWS), which reflects tissue stiffness. SWE uses an acoustic radiation force pulse sequence to generate shear waves that propagate laterally through tissue with transient displacements. These waves travel perpendicular to the applied force, and their displacements are tracked using high-frame-rate ultrasound. Estimating the SWS map involves two main steps: speckle tracking and SWS estimation. Speckle tracking calculates particle velocity by measuring RF/IQ data displacement between adjacent firings, while SWS estimation methods typically compare particle velocity profiles of samples that are laterally a few millimeters apart. Deep learning (DL) methods have gained attention for SWS estimation, often relying on supervised training using simulated data. However, these methods may struggle with real-world data, which can differ significantly from the simulated training data, potentially leading to artifacts in the estimated SWS map. To address this challenge, we propose a physics-inspired learning approach that utilizes real data without known SWS values. Our method employs an adaptive unsupervised loss function, allowing the network to train on real, noisy data, minimizing artifacts and improving robustness. We validate our approach using experimental phantom data and in vivo liver data from two human subjects, demonstrating enhanced accuracy and reliability in SWS estimation compared with conventional and supervised methods. This hybrid approach leverages the strengths of both data-driven and physics-inspired learning, offering a promising solution for more accurate and robust SWS mapping in clinical applications.
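The conventional SWS estimation step described above, comparing particle-velocity profiles recorded a known lateral distance apart, can be sketched as a time-of-flight calculation. The synthetic pulse and acquisition parameters below are made up for illustration:

```python
import numpy as np

def estimate_sws(v1, v2, dx_mm, frame_rate_hz):
    """Estimate shear wave speed from particle-velocity time profiles at two
    lateral positions dx_mm apart, via the cross-correlation time shift."""
    lags = np.arange(-len(v1) + 1, len(v2))
    xc = np.correlate(v2, v1, mode="full")
    lag = lags[np.argmax(xc)]            # frames by which v2 trails v1
    dt = lag / frame_rate_hz             # travel time between the positions
    return (dx_mm / 1000.0) / dt         # speed in m/s

# Synthetic shear wave arriving 4 frames later at the second lateral position
t = np.arange(100)
pulse = np.exp(-0.5 * ((t - 30) / 3.0) ** 2)
v1 = pulse
v2 = np.roll(pulse, 4)
print(round(float(estimate_sws(v1, v2, dx_mm=2.0, frame_rate_hz=10_000)), 3))  # → 5.0 m/s
```

Real tracked velocities are far noisier, which is precisely the regime where the unsupervised, physics-inspired loss proposed in the paper is intended to help.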

BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data.

Metsch JM, Hauschild AC

PubMed · Jun 1 2025
The increasing digitalization of multi-modal data in medicine and novel artificial intelligence (AI) algorithms open up a large number of opportunities for predictive models. In particular, deep learning models show great performance in the medical field. A major limitation of such powerful but complex models originates from their 'black-box' nature. Recently, a variety of explainable AI (XAI) methods have been introduced to address this lack of transparency and trust in medical AI. However, the majority of such methods have solely been evaluated on single data modalities. Meanwhile, with the increasing number of XAI methods, integrative XAI frameworks and benchmarks are essential to compare their performance on different tasks. For that reason, we developed BenchXAI, a novel XAI benchmarking package supporting comprehensive evaluation of fifteen XAI methods, investigating their robustness, suitability, and limitations on biomedical data. We employed BenchXAI to validate these methods in three common biomedical tasks, namely clinical data, medical image and signal data, and biomolecular data. Our newly designed sample-wise normalization approach for post-hoc XAI methods enables the statistical evaluation and visualization of performance and robustness. We found that the XAI methods Integrated Gradients, DeepLift, DeepLiftShap, and GradientShap performed well over all three tasks, while methods like Deconvolution, Guided Backpropagation, and LRP-α1-β0 struggled for some tasks. With acts such as the EU AI Act, the application of XAI in the biomedical domain becomes increasingly essential. Our evaluation study represents a first step towards verifying the suitability of different XAI methods for various medical domains.
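Integrated Gradients, one of the consistently well-performing methods in this benchmark, is straightforward to sketch for a model with a known gradient: average the gradient along the straight path from a baseline to the input, then scale by the input difference. For a linear toy model the attributions are exact and satisfy the completeness axiom (attributions sum to f(x) − f(baseline)); the model and values below are illustrative:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Integrated Gradients via a midpoint Riemann sum over the straight-line
    path from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * grads

# Toy linear model f(x) = w·x, whose gradient is constant
w = np.array([1.0, -2.0, 3.0])
f = lambda x: w @ x
grad_f = lambda x: w

x = np.array([0.5, 0.5, 0.5])
attr = integrated_gradients(grad_f, x, baseline=np.zeros(3))
print(attr.tolist())                                   # → [0.5, -1.0, 1.5]
print(bool(np.isclose(attr.sum(), f(x) - f(np.zeros(3)))))  # completeness → True
```

For a neural network, `grad_f` would be supplied by automatic differentiation rather than written by hand.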

Atten-Nonlocal Unet: Attention and Non-local Unet for medical image segmentation.

Jia X, Wang W, Zhang M, Zhao B

PubMed · Jun 1 2025
Convolutional neural network (CNN)-based models have emerged as the predominant approach for medical image segmentation due to their effective inductive bias. However, their limitation lies in the lack of long-range information. In this study, we propose the Atten-Nonlocal Unet model, which integrates CNN and transformer components to overcome this limitation and precisely capture global context in 2D features. Specifically, we utilize the BCSM attention module and the Cross Non-local module to enhance feature representation, thereby improving segmentation accuracy. Experimental results on the Synapse, ACDC, and AVT datasets show that Atten-Nonlocal Unet achieves DSC scores of 84.15%, 91.57%, and 86.94%, with 95th-percentile Hausdorff distances (HD95) of 15.17, 1.16, and 4.78, respectively. Compared to existing methods for medical image segmentation, the proposed method demonstrates superior segmentation performance, ensuring high accuracy in segmenting large organs while improving segmentation of small organs.
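The non-local operation that supplies the long-range information plain convolutions lack can be sketched as dot-product self-attention over flattened spatial positions. This toy block is an illustrative simplification, not the paper's Cross Non-local module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, wq, wk, wv):
    """Minimal non-local block: every spatial position attends to every other
    position, and the result is added back through a residual connection."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)  # (n, n) weights
    return x + attn @ v

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))      # a 4x4 feature map flattened, 8 channels
wq, wk, wv = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
out = non_local_block(feat, wq, wk, wv)
print(out.shape)  # → (16, 8)
```

Because the attention matrix is n×n over all positions, such blocks are typically applied to downsampled feature maps to keep the cost manageable.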

Keeping AI on Track: Regular monitoring of algorithmic updates in mammography.

Taib AG, James JJ, Partridge GJW, Chen Y

PubMed · Jun 1 2025
To demonstrate a method of benchmarking the performance of two consecutive software releases of the same commercial artificial intelligence (AI) product against trained human readers using the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance scheme. In this retrospective study, ten PERFORMS test sets, each consisting of 60 challenging cases, were read by human readers between 2012 and 2023 and by Version 1 (V1) and Version 2 (V2) of the same AI model in 2022 and 2023, respectively. Both AI and human readers assessed each breast independently, recording the highest suspicion-of-malignancy score per breast for non-malignant cases and per lesion for breasts with malignancy. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for comparison, with the study powered to detect a medium-sized effect (odds ratio, 3.5 or 0.29) for sensitivity. The study included 1,254 human readers, with a total of 328 malignant lesions, 823 normal breasts, and 55 benign breasts analysed. No significant difference was found between the AUCs for AI V1 (0.93) and V2 (0.94) (p = 0.13). In terms of sensitivity, no difference was observed between human readers and AI V1 (83.2 % vs 87.5 %, p = 0.12); however, V2 outperformed humans (88.7 %, p = 0.04). Specificity was higher for both AI V1 (87.4 %) and V2 (88.2 %) than for human readers (79.0 %, p < 0.01 for both). The upgraded AI model showed no significant difference in diagnostic performance compared to its predecessor when evaluating mammograms from PERFORMS test sets.
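The sensitivity, specificity, and AUC comparisons used for this kind of benchmarking can be reproduced from per-breast suspicion scores. A sketch on made-up scores (not PERFORMS data), with AUC computed via the rank-sum (Mann-Whitney) formulation:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum formulation (assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([1, 1, 1, 0, 0, 0, 0])                 # 1 = malignant breast
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1])   # AI suspicion scores
preds = scores >= 0.5                                    # hypothetical recall threshold
sens = (preds & (labels == 1)).sum() / (labels == 1).sum()
spec = (~preds & (labels == 0)).sum() / (labels == 0).sum()
print(round(float(sens), 3), round(float(spec), 3), round(float(roc_auc(labels, scores)), 3))
# → 0.667 0.75 0.917
```

AUC is threshold-free, which is why it is the natural headline metric when two model versions may use different operating points.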

Automated engineered-stone silicosis screening and staging using Deep Learning with X-rays.

Priego-Torres B, Sanchez-Morillo D, Khalili E, Conde-Sánchez MÁ, García-Gámez A, León-Jiménez A

PubMed · Jun 1 2025
Silicosis, a debilitating occupational lung disease caused by inhaling crystalline silica, continues to be a significant global health issue, especially with the increasing use of engineered stone (ES) surfaces containing high silica content. Traditional diagnostic methods, dependent on radiological interpretation, have low sensitivity, especially in the early stages of the disease, and present variability between evaluators. This study explores the efficacy of deep learning techniques in automating the screening and staging of silicosis using chest X-ray images. Utilizing a comprehensive dataset obtained from the medical records of a cohort of workers exposed to artificial quartz conglomerates, we implemented a preprocessing stage for rib-cage segmentation, followed by classification using state-of-the-art deep learning models. The segmentation model exhibited high precision, ensuring accurate identification of thoracic structures. In the screening phase, our models achieved near-perfect accuracy, with ROC AUC values reaching 1.0, effectively distinguishing between healthy individuals and those with silicosis. The models demonstrated remarkable precision in the staging of the disease. Nevertheless, differentiating between simple silicosis and progressive massive fibrosis, the evolved and complicated form of the disease, presented certain difficulties, especially during the transitional period, when assessment can be significantly subjective. Notwithstanding these difficulties, the models achieved an accuracy of around 81% and ROC AUC scores nearing 0.93. This study highlights the potential of deep learning to generate clinical decision support tools that increase accuracy and effectiveness in the diagnosis and staging of silicosis. Early detection would allow patients to be moved away from all sources of occupational exposure, constituting a substantial advancement in occupational health diagnostics.