
ICPPNet: A semantic segmentation network model based on inter-class positional prior for scoliosis reconstruction in ultrasound images.

Wang C, Zhou Y, Li Y, Pang W, Wang L, Du W, Yang H, Jin Y

PubMed · Jun 1 2025
Given the radiation hazard of X-ray imaging, safer, more convenient, and more cost-effective ultrasound methods are gradually becoming an alternative diagnostic approach for scoliosis. In spinal ultrasound images, accurately identifying spine regions is challenging because the target areas are relatively small and the images contain substantial interfering information. We therefore developed a novel neural network that incorporates prior knowledge to precisely segment spine regions in ultrasound images. We constructed a semantic-segmentation dataset of 3136 spinal ultrasound images from 30 patients with scoliosis, and we propose a network model (ICPPNet) that fully exploits inter-class positional prior knowledge, via an inter-class positional probability heatmap, to achieve accurate segmentation of the target areas. On this dataset, ICPPNet achieved an average Dice similarity coefficient of 70.83% and an average 95% Hausdorff distance of 11.28 mm. The average error between Cobb angles measured by our method and those measured from X-ray images is 1.41 degrees, with a coefficient of determination of 0.9879, indicating strong correlation. ICPPNet offers a new solution for medical image segmentation tasks with positional priors between target classes, and it directly supports the subsequent reconstruction of spine models from ultrasound images.
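The abstract does not spell out how the positional prior enters the network, but one common pattern is to feed the per-class probability heatmap as extra input channels alongside the image. The sketch below illustrates that generic pattern under assumed shapes and names; it is not ICPPNet itself.

```python
# Hypothetical sketch: conditioning a segmentation backbone on an
# inter-class positional prior heatmap by channel concatenation.
import torch
import torch.nn as nn

class PriorConditionedSegNet(nn.Module):
    def __init__(self, backbone: nn.Module, num_classes: int):
        super().__init__()
        # Fuse the 1-channel ultrasound image with a per-class prior heatmap.
        self.fuse = nn.Conv2d(1 + num_classes, 3, kernel_size=1)
        self.backbone = backbone  # any segmentation backbone taking 3-channel input

    def forward(self, image, prior_heatmap):
        # image: (B, 1, H, W); prior_heatmap: (B, C, H, W) holding per-pixel
        # class-position probabilities estimated from training statistics.
        x = self.fuse(torch.cat([image, prior_heatmap], dim=1))
        return self.backbone(x)
```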

BUS-M2AE: Multi-scale Masked Autoencoder for Breast Ultrasound Image Analysis.

Yu L, Gou B, Xia X, Yang Y, Yi Z, Min X, He T

PubMed · Jun 1 2025
The Masked AutoEncoder (MAE) has demonstrated significant potential in medical image analysis by reducing the cost of manual annotation. However, MAE and its recent variants are not well developed for ultrasound images in breast cancer diagnosis: they struggle to generalize to distinguishing breast tumors of varying sizes, which hinders adaptation to the diverse morphological characteristics of breast tumors. In this paper, we propose a novel Breast UltraSound Multi-scale Masked AutoEncoder (BUS-M2AE) to address these limitations. BUS-M2AE applies multi-scale masking at both the token level, during image patching, and the feature level, during feature learning. These two masking schemes provide flexible strategies for matching explicit masked patches and implicit features to tumors of varying scale, allowing the pre-trained vision transformer to adaptively perceive and accurately distinguish breast tumors of different sizes and thereby improving overall performance on diverse tumor morphologies. Comprehensive experiments demonstrate that BUS-M2AE outperforms recent MAE variants and commonly used supervised learning methods on breast cancer classification and tumor segmentation tasks.
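As a rough illustration of token-level multi-scale masking, the snippet below draws a random mask on a coarse grid and expands each cell into a block of tokens, so masked regions come in variable-size blocks. This is an assumption-laden sketch of the general idea, not BUS-M2AE's actual masking scheme.

```python
# Illustrative multi-scale token masking for an MAE-style model.
import torch

def multiscale_mask(num_tokens_side: int, mask_ratio: float, block: int):
    # Draw a Bernoulli-like mask on a (side/block)^2 grid, then expand each
    # cell to a block x block region of tokens.
    coarse = num_tokens_side // block
    scores = torch.rand(coarse, coarse)
    k = int(mask_ratio * coarse * coarse)
    thresh = scores.flatten().kthvalue(k).values
    mask = scores <= thresh  # exactly k coarse cells masked
    mask = mask.repeat_interleave(block, 0).repeat_interleave(block, 1)
    return mask.flatten()  # True = masked token

# e.g. a 14x14 ViT token grid masked ~75% in 2x2 blocks:
m = multiscale_mask(14, 0.75, 2)
```

Varying `block` across pre-training iterations is one plausible way to expose the encoder to masked regions at several scales.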

Res-Net-Based Modeling and Morphologic Analysis of Deep Medullary Veins Using Multi-Echo GRE at 7 T MRI.

Li Z, Liang L, Zhang J, Fan X, Yang Y, Yang H, Wang Q, An J, Xue R, Zhuo Y, Qian H, Zhang Z

PubMed · Jun 1 2025
Pathological changes in deep medullary veins (DMVs) have been reported in various diseases, yet accurate modeling and quantification of DMVs remain challenging. We propose and assess an automated approach for modeling and quantifying DMVs at 7 Tesla (7 T) MRI. A multi-echo-input Res-Net was developed for vascular segmentation, and a minimum-path loss function was used for modeling and quantifying the geometric parameters of DMVs. Twenty-one patients diagnosed with subcortical vascular dementia (SVaD) and 20 condition-matched controls were included. Amplitude and phase images of a five-echo gradient echo (GRE) sequence were acquired at 7 T. Ten GRE images were manually labeled by two neurologists and compared with the results of the proposed method. Independent-samples t-tests and Pearson correlation were used for statistical analysis, with p < 0.05 considered significant. No significant offset was found between centerlines obtained by human labeling and by our algorithm (p = 0.734), and the length difference between the proposed method and manual labeling was smaller than the error between different clinicians (p < 0.001). Patients with SVaD exhibited fewer DMVs (mean difference = -60.710 ± 21.810, p = 0.011) and higher curvature (mean difference = 0.12 ± 0.022, p < 0.0001), corresponding to their higher Vascular Dementia Assessment Scale-Cog (VaDAS-Cog) scores (mean difference = 4.332 ± 1.992, p = 0.036) and lower Mini-Mental State Examination (MMSE) scores (mean difference = -3.071 ± 1.443, p = 0.047). MMSE scores correlated positively with the number of DMVs (r = 0.437, p = 0.037) and negatively with curvature (r = -0.426, p = 0.042). In summary, we propose a novel framework for automated quantification of the morphologic parameters of DMVs; these characteristics are expected to aid research into, and diagnosis of, cerebral small vessel diseases involving DMV lesions.
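The abstract reports curvature as one of the quantified DMV parameters. A standard way to estimate the mean curvature of a discrete centerline uses the finite-difference form of κ = |r′ × r″| / |r′|³; the sketch below shows that generic computation, not the authors' code.

```python
# Minimal sketch: mean curvature of an ordered 3D vessel centerline.
import numpy as np

def mean_curvature(points: np.ndarray) -> float:
    # points: (N, 3) ordered centerline coordinates (e.g., in mm).
    d1 = np.gradient(points, axis=0)   # first derivative r'
    d2 = np.gradient(d1, axis=0)       # second derivative r''
    cross = np.cross(d1, d2)
    kappa = np.linalg.norm(cross, axis=1) / (np.linalg.norm(d1, axis=1) ** 3 + 1e-12)
    return float(kappa.mean())
```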

Atten-Nonlocal Unet: Attention and Non-local Unet for medical image segmentation.

Jia X, Wang W, Zhang M, Zhao B

PubMed · Jun 1 2025
Convolutional neural network (CNN)-based models have become the predominant approach for medical image segmentation owing to their effective inductive bias, but they lack long-range information. In this study, we propose Atten-Nonlocal Unet, a model that integrates CNNs and transformers to overcome this limitation and precisely capture global context in 2D features. Specifically, we utilize a BCSM attention module and a Cross Non-local module to enhance feature representation and thereby improve segmentation accuracy. Experimental results on the Synapse, ACDC, and AVT datasets show that Atten-Nonlocal Unet achieves DSC scores of 84.15%, 91.57%, and 86.94%, with 95% Hausdorff distances of 15.17, 1.16, and 4.78, respectively. Compared with existing medical image segmentation methods, the proposed method delivers superior performance, segmenting large organs with high accuracy while also improving segmentation of small organs.
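The BCSM attention and Cross Non-local modules are not detailed in the abstract; for context, the following is a generic non-local (self-attention) block in the style of Wang et al. (2018), of which such modules are typically variants.

```python
# Generic non-local block: every spatial position attends to every other,
# supplying the long-range context that plain convolutions lack.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)  # query
        self.phi = nn.Conv2d(channels, channels // 2, 1)    # key
        self.g = nn.Conv2d(channels, channels // 2, 1)      # value
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C/2)
        k = self.phi(x).flatten(2)                     # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C/2)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)  # residual keeps the local CNN features intact
```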

Liver Tumor Prediction using Attention-Guided Convolutional Neural Networks and Genomic Feature Analysis.

Edwin Raja S, Sutha J, Elamparithi P, Jaya Deepthi K, Lalitha SD

PubMed · Jun 1 2025
Predicting liver tumors is a critical task in medical image analysis and genomics, since diagnosis and prognosis underpin correct medical decisions. The subtle characteristics of liver tumors and the interactions between genomic and imaging features are the main obstacles to reliable prediction. To overcome these hurdles, this study presents two integrated approaches: Attention-Guided Convolutional Neural Networks (AG-CNN) and a Genomic Feature Analysis Module (GFAM). Spatial and channel attention mechanisms in AG-CNN enable accurate tumor segmentation from CT images while providing detailed morphological profiling. Evaluation on three benchmark databases (TCIA, LiTS, and CRLM) shows that our model produces more accurate output than related work, with an accuracy of 94.5%, a Dice Similarity Coefficient of 91.9%, and an F1-score of 96.2% on Dataset 3. Moreover, the proposed methods outperform comparison methods, including CELM, CAGS, and DM-ML, across datasets in recall, precision, and specificity by up to 10 percent.
• Attention-Guided Convolutional Neural Networks (AG-CNN) enhance focus on tumor regions and segmentation accuracy.
• Genomic Feature Analysis (GFAM) identifies molecular markers for subtype-specific tumor classification.
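The abstract mentions spatial and channel attention in AG-CNN without architectural details; a minimal CBAM-style sketch of sequential channel and spatial attention, under assumed layer choices, could look like this.

```python
# Hedged sketch of sequential channel + spatial attention (CBAM-style);
# layer sizes and ordering are assumptions, not the authors' design.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)  # reweight channels ("what" to attend to)
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        # Reweight positions ("where" to attend) from pooled channel statistics.
        x = x * self.spatial_conv(torch.cat([avg, mx], dim=1))
        return x
```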

Advances in MRI optic nerve segmentation.

Xena-Bosch C, Kodali S, Sahi N, Chard D, Llufriu S, Toosy AT, Martinez-Heras E, Prados F

PubMed · Jun 1 2025
Understanding optic nerve structure and monitoring changes within it can provide insights into neurodegenerative diseases such as multiple sclerosis, in which optic nerves are often damaged by inflammatory episodes of optic neuritis. Interest in the optic nerve has increased over recent decades, particularly with advances in magnetic resonance technology and the advent of deep learning solutions. These advances have significantly improved the visualisation and analysis of optic nerves, making it possible to detect subtle changes that aid the early diagnosis and treatment of optic nerve-related diseases and the planning of radiotherapy interventions. Effective segmentation techniques are therefore crucial for enhancing the accuracy of predictive models and for planning interventions and treatment strategies. This comprehensive review of 27 peer-reviewed articles published between 2007 and 2024 examines the evolution of optic nerve magnetic resonance imaging segmentation, tracing the development from intensity-based methods to the latest deep learning algorithms, including multi-atlas solutions using single or multiple image modalities.

multiPI-TransBTS: A multi-path learning framework for brain tumor image segmentation based on multi-physical information.

Zhu H, Huang J, Chen K, Ying X, Qian Y

PubMed · Jun 1 2025
Brain Tumor Segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors. However, due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task. In this study, we propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy. The model leverages spatial information, semantic information, and multi-modal imaging data, addressing the inherent heterogeneity of brain tumor characteristics. The multiPI-TransBTS framework consists of an encoder, an Adaptive Feature Fusion (AFF) module, and a multi-source, multi-scale feature decoder. The encoder incorporates a multi-branch architecture to separately extract modality-specific features from different MRI sequences. The AFF module fuses information from multiple sources using channel-wise and element-wise attention, ensuring effective feature recalibration. The decoder combines common and task-specific features through a Task-Specific Feature Introduction (TSFI) strategy, producing accurate segmentation outputs for the Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Comprehensive evaluations on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of multiPI-TransBTS over state-of-the-art methods. The model consistently achieves better Dice coefficients, Hausdorff distances, and sensitivity scores, highlighting its effectiveness on the BraTS challenges. Our results also indicate the need for further exploration of the balance between precision and recall in the ET segmentation task. The proposed framework represents a significant advancement in BraTS, with potential implications for improving clinical outcomes for brain tumor patients.
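The abstract describes the AFF module as fusing multi-source features with channel-wise and element-wise attention. A hedged sketch of one plausible realization for 3D MRI features, with layer shapes and the gating scheme assumed rather than taken from the paper, is shown below.

```python
# Illustrative adaptive fusion of two modality branches with element-wise
# mixing weights followed by channel-wise recalibration.
import torch
import torch.nn as nn

class AdaptiveFeatureFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )
        self.element_gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, C, D, H, W) features from two modality branches.
        cat = torch.cat([feat_a, feat_b], dim=1)
        e = self.element_gate(cat)             # per-voxel mixing weights
        fused = e * feat_a + (1 - e) * feat_b  # element-wise blend
        return fused * self.channel_gate(cat)  # channel-wise recalibration
```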

Boosting polyp screening with improved point-teacher weakly semi-supervised learning.

Du X, Zhang X, Chen J, Li L

PubMed · Jun 1 2025
Polyps, like silent time bombs in the gut, can progress into deadly colorectal cancer at any time. Many methods attempt to maximize early detection of colon polyps through screening; however, two challenges remain: (i) the scarcity of per-pixel annotation data, combined with clinical features such as blurred boundaries and the low contrast of polyps, leads to poor performance; and (ii) existing weakly semi-supervised methods that use pseudo-labels directly to supervise the student tend to ignore the value of the teacher's intermediate features. To adapt the point-prompt teacher model to the challenging setting of complex medical images and limited annotation data, we leverage the complementary inductive biases of CNNs and Transformers to extract robust representations of polyp features (boundary and context). At the same time, we introduce a novel teacher-student intermediate feature distillation method rather than relying on pseudo-labels alone to guide student learning. Comprehensive experiments demonstrate that the proposed method handles limited-annotation scenarios effectively and exhibits good segmentation performance. All code is available at https://github.com/dxqllp/WSS-Polyp.
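As a minimal sketch of the described teacher-student intermediate feature distillation, the student's stage-wise feature maps can be aligned to the teacher's and penalized with an MSE term alongside the pseudo-label loss. The projection layers and loss choice below are assumptions, not taken from the paper.

```python
# Sketch: distill the teacher's intermediate features into the student.
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feats, teacher_feats, projections):
    # student_feats / teacher_feats: lists of (B, C, H, W) maps from matching
    # stages; projections: per-stage 1x1 convs aligning student channels
    # to the teacher's channel count.
    loss = 0.0
    for s, t, proj in zip(student_feats, teacher_feats, projections):
        s = proj(s)
        if s.shape[-2:] != t.shape[-2:]:
            s = F.interpolate(s, size=t.shape[-2:], mode="bilinear",
                              align_corners=False)
        loss = loss + F.mse_loss(s, t.detach())  # stop-gradient on the teacher
    return loss
```

This term would typically be weighted and added to the pseudo-label segmentation loss during student training.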

Advanced image preprocessing and context-aware spatial decomposition for enhanced breast cancer segmentation.

Kalpana G, Deepa N, Dhinakaran D

PubMed · Jun 1 2025
Segmentation for breast cancer diagnosis in medical imaging is hampered by noise, contrast variation, and low resolution, which make malignant sites difficult to distinguish. In this paper, we propose a new solution that integrates Advanced Image Preprocessing Techniques (AIPT) with a Context-Aware Spatial Decomposition Network (CASDN) to overcome these problems. The preprocessing pipeline applies several methods, including Adaptive Thresholding, Hierarchical Contrast Normalization, Contextual Feature Augmentation, Multi-Scale Region Enhancement, and Dynamic Histogram Equalization, to improve image quality. These methods smooth edges, equalize contrast, and enhance contextual detail, effectively suppressing noise and yielding clearer, less distorted images. Experimental outcomes demonstrate the approach's effectiveness, delivering a Dice Coefficient of 0.89, an IoU of 0.85, and a Hausdorff Distance of 5.2, showing enhanced capability in segmenting significant tumor margins compared with other techniques. Furthermore, the improved preprocessing pipeline benefits classification models: a Convolutional Neural Network achieves a classification accuracy of 85.3% with an AUC-ROC of 0.90, a significant improvement over conventional techniques.
• Enhanced segmentation accuracy with advanced preprocessing and CASDN, achieving superior performance metrics.
• Robust multi-modality compatibility, ensuring effectiveness across mammograms, ultrasounds, and MRI scans.
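Two of the named preprocessing steps map onto standard operations: the snippet below approximates Dynamic Histogram Equalization with CLAHE and applies Adaptive Thresholding via OpenCV. Parameter values are illustrative assumptions, not the paper's AIPT settings.

```python
# Hedged sketch of a contrast-and-threshold preprocessing step with OpenCV.
import cv2

def preprocess(gray):
    # gray: uint8 single-channel image (e.g., a mammogram slice).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)  # local contrast enhancement
    # Adaptive thresholding: blockSize=31, C=5 (assumed values).
    binary = cv2.adaptiveThreshold(
        equalized, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, 31, 5)
    denoised = cv2.medianBlur(equalized, 3)  # light noise suppression
    return denoised, binary
```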

Integrating finite element analysis and physics-informed neural networks for biomechanical modeling of the human lumbar spine.

Ahmadi M, Biswas D, Paul R, Lin M, Tang Y, Cheema TS, Engeberg ED, Hashemi J, Vrionis FD

PubMed · Jun 1 2025
Comprehending the biomechanical characteristics of the human lumbar spine is crucial for managing and preventing spinal disorders. Precise material properties derived from patient-specific CT scans are essential for simulations that accurately mimic real-life scenarios, which is invaluable for creating effective surgical plans. Integrating Finite Element Analysis (FEA) with Physics-Informed Neural Networks (PINNs) offers significant clinical benefits by automating lumbar spine segmentation and meshing. We developed an FEA model of the lumbar spine incorporating detailed anatomical and material properties derived from high-quality CT and MRI scans. The model includes vertebrae and intervertebral discs, segmented and meshed using advanced imaging and computational techniques. PINNs were implemented to integrate physical laws directly into the neural network training process, ensuring that predicted material properties adhered to the governing equations of mechanics. The model achieved an accuracy of 94.30% in predicting material properties such as Young's modulus (14.88 GPa for cortical bone and 1.23 MPa for intervertebral discs), Poisson's ratio (0.25 and 0.47, respectively), bulk modulus (9.87 GPa and 6.56 MPa, respectively), and shear modulus (5.96 GPa and 0.42 MPa, respectively). The integration of FEA and PINNs allows accurate, automated prediction of the lumbar spine's material properties and mechanical behavior, significantly reducing manual input and enhancing reliability. This approach enables dependable biomechanical simulations and supports the development of personalized treatment plans and surgical strategies, ultimately improving clinical outcomes, patient care, and recovery in spinal disorders.
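Conceptually, a PINN augments the data-fitting loss with a PDE residual so that predictions obey the governing mechanics. The sketch below shows that structure for a toy 1-D elasticity problem (u″ = 0 under constant stiffness); it is illustrative only and not the authors' formulation.

```python
# Minimal PINN loss: data misfit + physics residual via autograd.
import torch

def pinn_loss(model, x_data, u_data, x_phys):
    # Data term: match measured displacements at sampled points.
    data_loss = torch.mean((model(x_data) - u_data) ** 2)

    # Physics term: penalize the PDE residual u'' = 0 at collocation points.
    x = x_phys.clone().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    phys_loss = torch.mean(d2u ** 2)

    return data_loss + phys_loss
```

A full lumbar-spine model would replace the toy residual with the 3-D elasticity equilibrium equations and learn the material parameters jointly, but the two-term loss structure is the same.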