
The impact of training image quality with a novel protocol on artificial intelligence-based LGE-MRI image segmentation for potential atrial fibrillation management.

Berezhnoy AK, Kalinin AS, Parshin DA, Selivanov AS, Demin AG, Zubov AG, Shaidullina RS, Aitova AA, Slotvitsky MM, Kalemberg AA, Kirillova VS, Syrovnev VA, Agladze KI, Tsvelaya VA

PubMed · Jun 1, 2025
Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting up to 2 % of the population. Catheter ablation is a promising treatment for AF, particularly for paroxysmal AF patients, but it often has high recurrence rates. Developing in silico models of patients' atria during the ablation procedure using cardiac MRI data may help reduce these rates. This study aims to develop an effective automated deep learning-based segmentation pipeline by compiling a specialized dataset and employing standardized labeling protocols to improve segmentation accuracy and efficiency. In doing so, we aim to achieve the highest possible accuracy and generalization ability while minimizing the burden on clinicians involved in manual data segmentation. We collected LGE-MRI data from VMRC and the cDEMRIS database. Two specialists manually labeled the data using standardized protocols to reduce subjective errors. Neural network (nnU-Net and smpU-Net++) performance was evaluated using statistical tests, including sensitivity and specificity analysis. A new database of LGE-MRI images, based on manual segmentation, was created (VMRC). Our approach with consistent labeling protocols achieved a Dice coefficient of 92.4 % ± 0.8 % for the cavity and 64.5 % ± 1.9 % for LA walls. Using the pre-trained RIFE model, we attained a Dice score of approximately 89.1 % ± 1.6 % for atrial LGE-MRI imputation, outperforming classical methods. Sensitivity and specificity values demonstrated substantial enhancement in the performance of neural networks trained with the new protocol. Standardized labeling and RIFE applications significantly improved machine learning tool efficiency for constructing 3D LA models. This novel approach supports integrating state-of-the-art machine learning methods into broader in silico pipelines for predicting ablation outcomes in AF patients.
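The Dice coefficients reported above compare a predicted segmentation mask against a manual reference. As a point of reference for readers, a minimal sketch of the metric on flat binary masks (function name and toy data are illustrative, not from the paper):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 sequences."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 1-D example: 3 overlapping voxels out of 4 predicted + 4 labeled.
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(dice(pred, truth))  # → 0.75
```

The same formula applies unchanged to flattened 3-D LGE-MRI masks; only the length of the sequences grows.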

Brain tumor segmentation with deep learning: Current approaches and future perspectives.

Verma A, Yadav AK

PubMed · Jun 1, 2025
Accurate brain tumor segmentation from MRI images is critical in medicine, as it directly impacts the efficacy of diagnostic and treatment plans. Accurate segmentation of the tumor region can be challenging, especially when noise and abnormalities are present. This research provides a systematic review of automatic brain tumor segmentation techniques, with a specific focus on the design of network architectures. The review categorizes existing methods into unsupervised and supervised learning techniques and, within supervised techniques, into machine learning and deep learning approaches. Deep learning techniques are thoroughly reviewed, with a particular focus on CNN-based, U-Net-based, transfer learning-based, transformer-based, and hybrid transformer-based methods. This survey encompasses a broad spectrum of automatic segmentation methodologies, from traditional machine learning approaches to advanced deep learning frameworks. It provides an in-depth comparison of performance metrics, model efficiency, and robustness across multiple datasets, particularly the BraTS dataset. The study further examines multi-modal MRI imaging and its influence on segmentation accuracy, addressing domain adaptation, class imbalance, and generalization challenges. The analysis highlights the current challenges in computer-aided diagnostic (CAD) systems, examining how different models and imaging sequences impact performance. Recent advancements in deep learning, especially the widespread use of U-Net architectures, have significantly enhanced medical image segmentation. This review critically evaluates these developments, focusing on the iterative improvements in U-Net models that have driven progress in brain tumor segmentation. Furthermore, it explores various techniques for improving U-Net performance in medical applications, focusing on its potential for improving diagnostic and treatment planning procedures.
The efficiency of these automated segmentation approaches is rigorously evaluated on the BraTS dataset, the benchmark from the annual MICCAI Multimodal Brain Tumor Segmentation Challenge. This evaluation provides insights into the current state of the art and identifies key areas for future research and development.

Evaluating the prognostic significance of artificial intelligence-delineated gross tumor volume and prostate volume measurements for prostate radiotherapy.

Adleman J, McLaughlin PY, Tsui JMG, Buzurovic I, Harris T, Hudson J, Urribarri J, Cail DW, Nguyen PL, Orio PF, Lee LK, King MT

PubMed · Jun 1, 2025
Artificial intelligence (AI) may extract prognostic information from MRI for localized prostate cancer. We evaluate whether AI-derived prostate and gross tumor volume (GTV) are associated with toxicity and oncologic outcomes after radiotherapy. We conducted a retrospective study of patients who underwent radiotherapy between 2010 and 2017. We trained an AI segmentation algorithm to contour the prostate and GTV from patients treated with external-beam RT, and applied the algorithm to those treated with brachytherapy. AI prostate and GTV volumes were calculated from segmentation results. We evaluated whether AI GTV volume was associated with biochemical failure (BF) and metastasis. We evaluated whether AI prostate volume was associated with acute and late grade 2+ genitourinary toxicity, and with International Prostate Symptom Score (IPSS) resolution, for the monotherapy and combination sets separately. We identified 187 patients who received brachytherapy (monotherapy (N = 154) or combination therapy (N = 33)). AI GTV volume was associated with BF (hazard ratio (HR): 1.28 [1.14, 1.44]; p < 0.001) and metastasis (HR: 1.34 [1.18, 1.53]; p < 0.001). For the monotherapy subset, AI prostate volume was associated with both acute (adjusted odds ratio: 1.16 [1.07, 1.25]; p < 0.001) and late grade 2+ genitourinary toxicity (adjusted HR: 1.04 [1.01, 1.07]; p = 0.01), but not IPSS resolution (0.99 [0.97, 1.00]; p = 0.13). For the combination therapy subset, AI prostate volume was not associated with either acute (p = 0.72) or late (p = 0.75) grade 2+ genitourinary toxicity. However, AI prostate volume was associated with IPSS resolution (0.96 [0.93, 0.99]; p = 0.01). AI-derived prostate and GTV volumes may be prognostic for toxicity and oncologic outcomes after RT. Such information may aid in treatment decision-making, given differences in outcomes among RT treatment modalities.

Integrating finite element analysis and physics-informed neural networks for biomechanical modeling of the human lumbar spine.

Ahmadi M, Biswas D, Paul R, Lin M, Tang Y, Cheema TS, Engeberg ED, Hashemi J, Vrionis FD

PubMed · Jun 1, 2025
Comprehending the biomechanical characteristics of the human lumbar spine is crucial for managing and preventing spinal disorders. Precise material properties derived from patient-specific CT scans are essential for simulations to accurately mimic real-life scenarios, which is invaluable in creating effective surgical plans. The integration of Finite Element Analysis (FEA) with Physics-Informed Neural Networks (PINNs) offers significant clinical benefits by automating lumbar spine segmentation and meshing. We developed a FEA model of the lumbar spine incorporating detailed anatomical and material properties derived from high-quality CT and MRI scans. The model includes vertebrae and intervertebral discs, segmented and meshed using advanced imaging and computational techniques. PINNs were implemented to integrate physical laws directly into the neural network training process, ensuring that the predictions of material properties adhered to the governing equations of mechanics. The model achieved an accuracy of 94.30% in predicting material properties such as Young's modulus (14.88 GPa for cortical bone and 1.23 MPa for intervertebral discs), Poisson's ratio (0.25 and 0.47, respectively), bulk modulus (9.87 GPa and 6.56 MPa, respectively), and shear modulus (5.96 GPa and 0.42 MPa, respectively). We developed a lumbar spine FEA model using anatomical and material properties from CT and MRI scans. Vertebrae and discs were segmented and meshed with advanced imaging techniques, while PINNs ensured material predictions followed mechanical laws. The integration of FEA and PINNs allows for accurate, automated prediction of material properties and mechanical behaviors of the lumbar spine, significantly reducing manual input and enhancing reliability. This approach ensures dependable biomechanical simulations and supports the development of personalized treatment plans and surgical strategies, ultimately improving clinical outcomes for spinal disorders. 
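The moduli reported in this abstract are linked by the standard isotropic-elasticity identities K = E / (3(1 − 2ν)) and G = E / (2(1 + ν)); a physics-informed network can penalize violations of exactly such relations as a residual term alongside its data loss. A sketch checking the abstract's cortical-bone values against these identities (an illustration of the constraint, not the authors' code):

```python
def bulk_modulus(E, nu):
    """Isotropic bulk modulus K = E / (3(1 - 2*nu))."""
    return E / (3.0 * (1.0 - 2.0 * nu))

def shear_modulus(E, nu):
    """Isotropic shear modulus G = E / (2(1 + nu))."""
    return E / (2.0 * (1.0 + nu))

def physics_residual(E, nu, K_pred, G_pred):
    """Squared violation of the elasticity identities — the kind of term
    a PINN adds to its data loss so predictions obey mechanics."""
    return ((K_pred - bulk_modulus(E, nu)) ** 2
            + (G_pred - shear_modulus(E, nu)) ** 2)

# Cortical-bone values from the abstract: E = 14.88 GPa, nu = 0.25.
print(round(shear_modulus(14.88, 0.25), 2))  # → 5.95 (abstract reports 5.96 GPa)
print(round(bulk_modulus(14.88, 0.25), 2))   # → 9.92 (abstract reports 9.87 GPa)
```

The small gaps between the identity values and the reported 5.96 GPa / 9.87 GPa are consistent with the network predicting each modulus independently under a soft physics penalty rather than computing it in closed form.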
This method improves surgical planning and outcomes, contributing to better patient care and recovery in spinal disorders.

Advanced image preprocessing and context-aware spatial decomposition for enhanced breast cancer segmentation.

Kalpana G, Deepa N, Dhinakaran D

PubMed · Jun 1, 2025
Breast cancer segmentation in medical imaging is hampered by noise, contrast variation, and low resolution, which make it challenging to distinguish malignant sites. In this paper, we propose a new solution that integrates AIPT (Advanced Image Preprocessing Techniques) with CASDN (Context-Aware Spatial Decomposition Network) to overcome these problems. The preprocessing pipeline applies a suite of methods, including Adaptive Thresholding, Hierarchical Contrast Normalization, Contextual Feature Augmentation, Multi-Scale Region Enhancement, and Dynamic Histogram Equalization, to improve image quality. These methods smooth edges, equalize contrast, and preserve contextual detail, effectively suppressing noise and yielding clearer, less distorted images. Experimental outcomes demonstrate the method's effectiveness: a Dice Coefficient of 0.89, IoU of 0.85, and a Hausdorff Distance of 5.2, indicating enhanced capability in segmenting significant tumor margins relative to other techniques. Furthermore, the improved preprocessing pipeline benefits downstream classification: Convolutional Neural Networks reach a classification accuracy of 85.3% and an AUC-ROC of 0.90, a significant improvement over conventional techniques.
• Enhanced segmentation accuracy with advanced preprocessing and CASDN, achieving superior performance metrics.
• Robust multi-modality compatibility, ensuring effectiveness across mammograms, ultrasounds, and MRI scans.

Boosting polyp screening with improved point-teacher weakly semi-supervised learning.

Du X, Zhang X, Chen J, Li L

PubMed · Jun 1, 2025
Polyps, like a silent time bomb in the gut, are always lurking and can explode into deadly colorectal cancer at any time. Many methods attempt to maximize the early detection of colon polyps through screening; however, several challenges remain: (i) the scarcity of per-pixel annotation data, together with clinical features such as the blurred boundaries and low contrast of polyps, results in poor performance; (ii) existing weakly semi-supervised methods that directly use pseudo-labels to supervise the student tend to ignore the value of intermediate features in the teacher. To adapt the point-prompt teacher model to the challenging scenarios of complex medical images and limited annotation data, we leverage the diverse inductive biases of CNNs and Transformers to extract robust and complementary representations of polyp features (boundary and context). At the same time, a newly designed teacher-student intermediate feature distillation method is introduced, rather than using pseudo-labels alone to guide student learning. Comprehensive experiments demonstrate that our proposed method effectively handles scenarios with limited annotations and exhibits good segmentation performance. All code is available at https://github.com/dxqllp/WSS-Polyp.
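Intermediate-feature distillation of the kind this abstract describes — guiding the student with the teacher's hidden representations in addition to pseudo-labels — typically reduces to a mean-squared-error term between feature maps added to the label loss. A minimal sketch with plain lists (an illustration of the general recipe, not the paper's implementation; the weighting `alpha` is a hypothetical knob):

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distill_loss(student_feats, teacher_feats, pseudo_pred, pseudo_label, alpha=0.5):
    """Pseudo-label supervision plus teacher-feature matching.

    student_feats / teacher_feats: lists of intermediate feature vectors,
    one per matched layer; pseudo_pred / pseudo_label: flat predictions.
    """
    feat_term = sum(mse(s, t) for s, t in zip(student_feats, teacher_feats))
    feat_term /= len(student_feats)
    label_term = mse(pseudo_pred, pseudo_label)
    return label_term + alpha * feat_term

loss = distill_loss(
    student_feats=[[0.2, 0.4], [0.1, 0.9]],
    teacher_feats=[[0.3, 0.5], [0.1, 0.8]],
    pseudo_pred=[0.7, 0.2],
    pseudo_label=[1.0, 0.0],
)
```

With `alpha = 0` this collapses to plain pseudo-label training, which is exactly the baseline the authors argue discards useful signal from the teacher.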

multiPI-TransBTS: A multi-path learning framework for brain tumor image segmentation based on multi-physical information.

Zhu H, Huang J, Chen K, Ying X, Qian Y

PubMed · Jun 1, 2025
Brain Tumor Segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors. However, due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task. In this study, we propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy. The model leverages spatial information, semantic information, and multi-modal imaging data, addressing the inherent heterogeneity in brain tumor characteristics. The multiPI-TransBTS framework consists of an encoder, an Adaptive Feature Fusion (AFF) module, and a multi-source, multi-scale feature decoder. The encoder incorporates a multi-branch architecture to separately extract modality-specific features from different MRI sequences. The AFF module fuses information from multiple sources using channel-wise and element-wise attention, ensuring effective feature recalibration. The decoder combines both common and task-specific features through a Task-Specific Feature Introduction (TSFI) strategy, producing accurate segmentation outputs for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Comprehensive evaluations on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of multiPI-TransBTS over the state-of-the-art methods. The model consistently achieves better Dice coefficients, Hausdorff distances, and Sensitivity scores, highlighting its effectiveness in addressing the BraTS challenges. Our results also indicate the need for further exploration of the balance between precision and recall in the ET segmentation task. The proposed framework represents a significant advancement in BraTS, with potential implications for improving clinical outcomes for brain tumor patients.
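Channel-wise attention in a fusion module like the AFF described above generally means scoring each source, normalizing the scores into weights, and recalibrating the features before summing. A toy sketch with a softmax gate over per-modality feature vectors (a hedged illustration of the pattern, not the released multiPI-TransBTS code):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(modality_feats, scores):
    """Weight each modality's feature vector by a softmax over its score,
    then sum element-wise — the skeleton of attention-based recalibration."""
    w = softmax(scores)
    dim = len(modality_feats[0])
    return [sum(w[m] * modality_feats[m][i] for m in range(len(w)))
            for i in range(dim)]

# Two MRI "modalities"; the second is scored higher, so it dominates the fusion.
fused = fuse([[1.0, 0.0], [0.0, 1.0]], scores=[0.0, 2.0])
```

In a real network the scores themselves are produced by small learned layers from the pooled features, so the weighting adapts per input rather than being fixed.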

Advances in MRI optic nerve segmentation.

Xena-Bosch C, Kodali S, Sahi N, Chard D, Llufriu S, Toosy AT, Martinez-Heras E, Prados F

PubMed · Jun 1, 2025
Understanding optic nerve structure and monitoring changes within it can provide insights into neurodegenerative diseases like multiple sclerosis, in which optic nerves are often damaged by inflammatory episodes of optic neuritis. Over the past decades, interest in the optic nerve has increased, particularly with advances in magnetic resonance technology and the advent of deep learning solutions. These advances have significantly improved the visualisation and analysis of optic nerves, making it possible to detect subtle changes that aid the early diagnosis and treatment of optic nerve-related diseases, and for planning radiotherapy interventions. Effective segmentation techniques, therefore, are crucial for enhancing the accuracy of predictive models, planning interventions and treatment strategies. This comprehensive review, which includes 27 peer-reviewed articles published between 2007 and 2024, examines and highlights the evolution of optic nerve magnetic resonance imaging segmentation over the past decade, tracing the development from intensity-based methods to the latest deep learning algorithms, including multi-atlas solutions using single or multiple image modalities.

Liver Tumor Prediction using Attention-Guided Convolutional Neural Networks and Genomic Feature Analysis.

Edwin Raja S, Sutha J, Elamparithi P, Jaya Deepthi K, Lalitha SD

PubMed · Jun 1, 2025
Predicting liver tumors is a critical task in medical image analysis and genomics, since diagnosis and prognosis inform correct medical decisions. The subtle characteristics of liver tumors and the interactions between genomic and imaging features are the main obstacles to reliable prediction. To overcome these hurdles, this study presents two integrated approaches: Attention-Guided Convolutional Neural Networks (AG-CNNs) and a Genomic Feature Analysis Module (GFAM). Spatial and channel attention mechanisms in AG-CNN enable accurate tumor segmentation from CT images while providing detailed morphological profiling. Evaluation on three databases (TCIA, LiTS, and CRLM) shows that our model produces more accurate output than the relevant literature, with an accuracy of 94.5%, a Dice Similarity Coefficient of 91.9%, and an F1-score of 96.2% on Dataset 3. Notably, the proposed methods outperform competing methods, including CELM, CAGS, and DM-ML, across datasets in recall, precision, and specificity by up to 10 percentage points.
• Utilization of Attention-Guided Convolutional Neural Networks (AG-CNN) enhances tumor-region focus and segmentation accuracy.
• Integration of Genomic Feature Analysis (GFAM) identifies molecular markers for subtype-specific tumor classification.

Atten-Nonlocal Unet: Attention and Non-local Unet for medical image segmentation.

Jia X, Wang W, Zhang M, Zhao B

PubMed · Jun 1, 2025
Convolutional neural network (CNN)-based models have emerged as the predominant approach for medical image segmentation due to their effective inductive bias. However, their limitation lies in the lack of long-range information. In this study, we propose the Atten-Nonlocal Unet model, which integrates a CNN and a transformer to overcome this limitation and precisely capture global context in 2D features. Specifically, we utilize the BCSM attention module and the Cross Non-local module to enhance feature representation, thereby improving segmentation accuracy. Experimental results on the Synapse, ACDC, and AVT datasets show that Atten-Nonlocal Unet achieves DSC scores of 84.15%, 91.57%, and 86.94%, respectively, with 95% Hausdorff distances (HD95) of 15.17, 1.16, and 4.78. Compared to existing methods for medical image segmentation, the proposed method demonstrates superior performance, ensuring high accuracy in segmenting large organs while improving segmentation of small organs.
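A non-local module is essentially dot-product self-attention over all spatial positions, which is what supplies the long-range context plain convolutions lack. A compact sketch over flattened positions (an illustration of the generic non-local operation, not the paper's Cross Non-local module):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def non_local(feats):
    """For each position, attend to every position via dot-product similarity
    and return the attention-weighted mixture of features."""
    out = []
    for q in feats:
        sims = [sum(a * b for a, b in zip(q, k)) for k in feats]
        w = softmax(sims)
        out.append([sum(w[j] * feats[j][d] for j in range(len(feats)))
                    for d in range(len(q))])
    return out

# Three positions with 2-dim features; each output row mixes all positions,
# so distant positions influence one another in a single step.
mixed = non_local([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Real non-local blocks add learned query/key/value projections and a residual connection around this core, but the all-pairs attention above is the part that makes the receptive field global.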