SurgPointTransformer: transformer-based vertebra shape completion using RGB-D imaging.

Massalimova A, Liebmann F, Jecklin S, Carrillo F, Farshad M, Fürnstahl P

PubMed · Dec 1 2025
State-of-the-art computer- and robot-assisted surgery systems rely on intraoperative imaging technologies such as computed tomography and fluoroscopy to provide detailed 3D visualizations of patient anatomy. However, these methods expose both patients and clinicians to ionizing radiation. This study introduces a radiation-free approach for 3D spine reconstruction using RGB-D data. Inspired by the "mental map" surgeons form during procedures, we present SurgPointTransformer, a shape completion method that reconstructs unexposed spinal regions from sparse surface observations. The method begins with a vertebra segmentation step that extracts vertebra-level point clouds for subsequent shape completion. SurgPointTransformer then uses an attention mechanism to learn the relationship between visible surface features and the complete spine structure. The approach is evaluated on an ex vivo dataset comprising nine samples, with CT-derived data used as ground truth. SurgPointTransformer significantly outperforms state-of-the-art baselines, achieving a Chamfer distance of 5.39 mm, an F-score of 0.85, an Earth mover's distance of 11.00 and a signal-to-noise ratio of 22.90 dB. These results demonstrate the potential of our method to reconstruct 3D vertebral shapes without exposing patients to ionizing radiation. This work contributes to the advancement of computer-aided and robot-assisted surgery by enhancing system perception and intelligence.
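
As a concrete reference for the evaluation above, here is a minimal sketch of the symmetric Chamfer distance between two point clouds. It is an illustrative implementation of the standard metric (some papers use squared distances or average the two directions), not the authors' evaluation code, and the point counts are arbitrary.

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point sets."""
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    # Mean nearest-neighbour distance in each direction.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy point clouds standing in for a completed vertebra and its
# CT-derived ground truth (real clouds would come from the pipeline).
rng = np.random.default_rng(0)
pred = rng.random((1024, 3))
gt = rng.random((1024, 3))
print(f"Chamfer distance: {chamfer_distance(pred, gt):.4f}")
```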

Amorphous-Crystalline Synergy in CoSe₂/CoS₂ Heterostructures: High-Performance SERS Substrates for Esophageal Tumor Cell Discrimination.

Zhang M, Liu A, Meng X, Wang Y, Yu J, Liu H, Sun Y, Xu L, Song X, Zhang J, Sun L, Lin J, Wu A, Wang X, Chai N, Li L

PubMed · Aug 12 2025
Although surface-enhanced Raman scattering (SERS) spectroscopy is widely applied in biomedicine, new substrates that broaden its detection capabilities are still in demand. A crystalline-amorphous CoSe₂/CoS₂ heterojunction, composed of orthorhombic CoSe₂ (o-CoSe₂) and amorphous CoS₂ (a-CoS₂), is synthesized with high SERS performance and stability. By adjusting the feed ratio, the proportion of a-CoS₂ to o-CoSe₂ is regulated; CoSe₂/CoS₂-S50, with a 1:1 ratio, demonstrates the best SERS performance owing to the balance of the two components. Experimental and simulation methods confirm that o-CoSe₂ and a-CoS₂ make distinct contributions: a-CoS₂ has rich vacancies and a higher density of active sites, while o-CoSe₂ further enriches vacancies, enhances electron delocalization and charge transfer (CT) capabilities, and reduces the bandgap. Moreover, CoSe₂/CoS₂-S50 achieves not only SERS detection of two common esophageal tumor cell lines (KYSE and TE) and healthy oral epithelial cells (het-1A), but also their discrimination with high sensitivity, specificity, and accuracy via machine learning (ML) analysis.
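
The abstract reports discrimination "via machine learning (ML) analysis" without naming the model, so the following is a hypothetical spectra-classification pipeline (PCA plus an RBF SVM on simulated Raman-like spectra) meant only to illustrate the kind of analysis involved; the class names are stand-ins for KYSE, TE, and het-1A.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 60, 600

# Simulate spectra for three cell classes: each class gets a slightly
# shifted Gaussian peak plus noise, mimicking distinct Raman signatures.
x = np.arange(n_wavenumbers)
spectra, labels = [], []
for cls, centre in enumerate((200, 300, 400)):
    peak = np.exp(-((x - centre) ** 2) / (2 * 30**2))
    spectra.append(peak + 0.1 * rng.standard_normal((n_per_class, n_wavenumbers)))
    labels += [cls] * n_per_class
X, y = np.vstack(spectra), np.array(labels)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print(f"Cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```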

CMVFT: A Multi-Scale Attention Guided Framework for Enhanced Keratoconus Suspect Classification in Multi-View Corneal Topography.

Lu Y, Li B, Zhang Y, Qi Y, Shi X

PubMed · Aug 11 2025
Retrospective cross-sectional study. To develop a multi-view fusion framework that effectively identifies suspect keratoconus cases and facilitates early clinical intervention. A total of 573 corneal topography maps representing eyes classified as normal, suspect, or keratoconus. We designed the Corneal Multi-View Fusion Transformer (CMVFT), which integrates features from seven standard corneal topography maps. A pretrained ResNet-50 extracts single-view representations that are further refined by a custom-designed Multi-Scale Attention Module (MSAM). This integrated design compensates for the representation gap commonly encountered when applying Transformers to small-sample corneal topography datasets by dynamically bridging local convolution-based feature extraction with global self-attention. A subsequent fusion Transformer then models long-range dependencies across views for comprehensive multi-view feature integration. The primary outcome measure was the framework's ability to differentiate suspect cases from normal and keratoconus cases. Experimental evaluation demonstrated that CMVFT effectively distinguishes suspect cases within a feature space characterized by overlapping attributes. Ablation studies confirmed that both the MSAM and the fusion Transformer are essential for robust multi-view feature integration, compensating for potential representation shortcomings in small datasets. This study is the first to apply a Transformer-driven multi-view fusion approach to corneal topography analysis. By compensating for the representation gap inherent in small-sample settings, CMVFT shows promise for identifying suspect keratoconus cases and supporting early clinical intervention.
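
The sketch below illustrates the overall pipeline shape described above: a shared ResNet-50 encodes each of the seven topography maps, and a small Transformer encoder fuses the per-view tokens. The MSAM refinement stage is omitted, and all layer sizes and pooling choices are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiViewFusion(nn.Module):
    def __init__(self, n_views: int = 7, n_classes: int = 3, dim: int = 256):
        super().__init__()
        backbone = resnet50(weights=None)   # random init; no download needed
        backbone.fc = nn.Identity()         # expose the 2048-d pooled features
        self.encoder = backbone
        self.proj = nn.Linear(2048, dim)    # per-view token embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)  # normal / suspect / keratoconus

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, 3, H, W)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1))   # (b*v, 2048)
        tokens = self.proj(feats).view(b, v, -1)    # (b, v, dim)
        fused = self.fusion(tokens).mean(dim=1)     # pool across views
        return self.head(fused)

model = MultiViewFusion()
logits = model(torch.randn(1, 7, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 3])
```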

Self-supervised disc and cup segmentation via non-local deformable convolution and adaptive transformer.

Zhao W, Wang Y

PubMed · Aug 9 2025
Optic disc and cup segmentation is a crucial subfield of computer vision, playing a pivotal role in automated pathological image analysis. It enables precise, efficient, and automated diagnosis of ocular conditions, significantly aiding clinicians in real-world medical applications. However, owing to the scarcity of medical segmentation data and insufficient integration of global contextual information, segmentation accuracy remains suboptimal. The issue is particularly pronounced for optic discs and cups with complex anatomical structures and ambiguous boundaries. To address these limitations, this paper introduces a self-supervised training strategy integrated with a newly designed network architecture to improve segmentation accuracy. Specifically, we first propose a non-local dual deformable convolutional block that captures irregular image patterns (i.e., boundaries). Second, we modify the traditional vision transformer and design an adaptive K-Nearest Neighbors (KNN) transformation block to extract global semantic context from images. Finally, we propose an initialization strategy based on self-supervised training to reduce the network's reliance on labeled data. Comprehensive experimental evaluations demonstrate the effectiveness of the proposed method, which outperforms previous networks and achieves state-of-the-art performance, with IoU scores of 0.9577 for the optic disc and 0.8399 for the optic cup on the REFUGE dataset.
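
For reference, the IoU scores reported above follow the standard intersection-over-union definition, sketched below with toy masks standing in for predicted and ground-truth disc regions.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

# Toy square masks standing in for optic disc segmentations.
pred = np.zeros((128, 128)); pred[30:90, 30:90] = 1
gt = np.zeros((128, 128)); gt[40:100, 40:100] = 1
print(f"IoU: {iou(pred, gt):.4f}")
```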

ATLASS: An AnaTomicaLly-Aware Self-Supervised Learning Framework for Generalizable Retinal Disease Detection.

Khan AA, Ahmad KM, Shafiq S, Akram MU, Shao J

PubMed · Aug 6 2025
Medical imaging, particularly retinal fundus photography, plays a crucial role in early disease detection and treatment for various ocular disorders. However, the development of robust diagnostic systems using deep learning remains constrained by the scarcity of expertly annotated data, which is time-consuming and expensive. Self-Supervised Learning (SSL) has emerged as a promising solution, but existing models fail to effectively incorporate critical domain knowledge specific to retinal anatomy. This potentially limits their clinical relevance and diagnostic capability. We address this issue by introducing an anatomically aware SSL framework that strategically integrates domain expertise through specialized masking of vital retinal structures during pretraining. Our approach leverages vessel and optic disc segmentation maps to guide the SSL process, enabling the development of clinically relevant feature representations without extensive labeled data. The framework combines a Vision Transformer with dual-masking strategies and anatomically informed loss functions to preserve structural integrity during feature learning. Comprehensive evaluation across multiple datasets demonstrates our method's competitive performance in diverse retinal disease classification tasks, including diabetic retinopathy grading, glaucoma detection, age-related macular degeneration identification, and multi-disease classification. The evaluation results establish the effectiveness of anatomically-aware SSL in advancing automated retinal disease diagnosis while addressing the fundamental challenge of limited labeled medical data.
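
A hedged sketch of the anatomy-guided masking idea: patches that overlap vessel or optic-disc segmentation maps are preferentially selected for masking before masked-image-modelling pretraining. The patch size, masking ratio, and sampling scheme here are assumptions, not the framework's actual strategy.

```python
import numpy as np

def anatomical_patch_mask(anatomy_map: np.ndarray, patch: int = 16,
                          ratio: float = 0.5, rng=None) -> np.ndarray:
    """Return a boolean (H//patch, W//patch) grid; True = masked patch.

    anatomy_map: binary (H, W) union of vessel and optic-disc masks.
    Patches with more anatomy coverage are masked with higher probability.
    """
    rng = rng or np.random.default_rng()
    h, w = anatomy_map.shape[0] // patch, anatomy_map.shape[1] // patch
    # Fraction of anatomical pixels covered by each patch.
    cover = anatomy_map[:h * patch, :w * patch] \
        .reshape(h, patch, w, patch).mean(axis=(1, 3))
    n_mask = int(ratio * h * w)
    # Sample patches to mask, biased toward anatomy-rich regions.
    weights = cover.ravel() + 1e-3          # keep every patch selectable
    idx = rng.choice(h * w, size=n_mask, replace=False,
                     p=weights / weights.sum())
    mask = np.zeros(h * w, dtype=bool)
    mask[idx] = True
    return mask.reshape(h, w)

# Toy vessel map: a diagonal band through a 224x224 fundus crop.
vessels = np.zeros((224, 224))
for i in range(224):
    vessels[i, max(0, i - 8):i + 8] = 1
print(anatomical_patch_mask(vessels).sum(), "of", 14 * 14, "patches masked")
```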

Skin lesion segmentation: A systematic review of computational techniques, tools, and future directions.

Sharma AL, Sharma K, Ghosal P

PubMed · Aug 5 2025
Skin lesion segmentation is a highly sought-after research topic in medical image processing that can aid the early diagnosis of skin diseases. Early detection of skin diseases like melanoma can decrease the mortality rate by 95%. Distinguishing lesions from healthy skin through skin image segmentation is a critical step. Various factors such as the color, size, and shape of the skin lesion, the presence of hair, and other noise pose challenges in segmenting a lesion from healthy skin. Hence, the effectiveness of the segmentation technique utilized is vital for precise disease diagnosis and treatment planning. This review explores and summarizes the latest advancements in skin lesion segmentation techniques and state-of-the-art methods from 2018 to 2025. It also covers crucial information, including input datasets, pre-processing, augmentation, method configuration, loss functions, hyperparameter settings, and performance metrics. The review addresses the primary challenges encountered in skin lesion segmentation from images and comprehensively compares state-of-the-art techniques. Researchers in this field will find this review compelling due to its insights on skin lesion segmentation and methodological details, as well as its analysis of the results of state-of-the-art methods.
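
Since loss functions are among the methodological details the review covers, a minimal soft Dice loss, one of the most common choices for lesion segmentation, is sketched below as an illustrative example rather than anything prescribed by the review.

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for binary segmentation.

    logits: (B, 1, H, W) raw network outputs; target: (B, 1, H, W) in {0, 1}.
    """
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

# Toy batch: random logits against random binary lesion masks.
logits = torch.randn(4, 1, 128, 128, requires_grad=True)
target = (torch.rand(4, 1, 128, 128) > 0.5).float()
loss = soft_dice_loss(logits, target)
loss.backward()
print(f"Dice loss: {loss.item():.4f}")
```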

The retina as a window into detecting subclinical cardiovascular disease in type 2 diabetes.

Alatrany AS, Lakhani K, Cowley AC, Yeo JL, Dattani A, Ayton SL, Deshpande A, Graham-Brown MPM, Davies MJ, Khunti K, Yates T, Sellers SL, Zhou H, Brady EM, Arnold JR, Deane J, McLean RJ, Proudlock FA, McCann GP, Gulsin GS

PubMed · Jul 31 2025
Individuals with Type 2 Diabetes (T2D) are at high risk of subclinical cardiovascular disease (CVD), potentially detectable through retinal alterations. In this single-centre, prospective cohort study, 255 asymptomatic adults with T2D and no prior history of CVD underwent echocardiography, non-contrast coronary computed tomography, and cardiovascular magnetic resonance. Retinal photographs were evaluated for diabetic retinopathy grade and microvascular geometric characteristics using deep learning (DL) tools, and associations with cardiac imaging markers of subclinical CVD were explored. Of the participants (aged 64 ± 7 years, 62% male), 200 (78%) had no diabetic retinopathy and 55 (22%) had mild background retinopathy. Groups were well matched for age, sex, ethnicity, CV risk factors, urine microalbuminuria, and serum natriuretic peptide and high-sensitivity troponin levels. Presence of retinopathy was associated with a greater burden of coronary atherosclerosis (coronary artery calcium score ≥ 100; OR 2.63; 95% CI 1.29–5.36; P = 0.008), more concentric left ventricular remodelling (OR 3.11; 95% CI 1.50–6.45; P = 0.002), and worse global longitudinal strain (OR 2.32; 95% CI 1.18–4.59; P = 0.015), independent of key covariables. Early diabetic retinopathy is associated with a high burden of coronary atherosclerosis and markers of early heart failure. Routine diabetic eye screening may serve as an effective alternative to currently advocated screening tests for detecting subclinical CVD in T2D, presenting opportunities for earlier detection and intervention.
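
Odds ratios of the kind reported above are typically obtained from logistic regression; the sketch below illustrates that computation on simulated data and is not the study's analysis code or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 255
retinopathy = rng.binomial(1, 0.22, n)   # ~22% with mild retinopathy

# Simulated binary outcome (e.g., calcium score >= 100), loosely
# linked to retinopathy status through a logistic model.
p = 1 / (1 + np.exp(-(-1.0 + 0.9 * retinopathy)))
outcome = rng.binomial(1, p)

X = sm.add_constant(retinopathy.astype(float))
fit = sm.Logit(outcome, X).fit(disp=0)
or_point = np.exp(fit.params[1])            # exponentiated coefficient = OR
ci_lo, ci_hi = np.exp(fit.conf_int()[1])    # 95% confidence interval
print(f"OR {or_point:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```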

IHE-Net: Hidden feature discrepancy fusion and triple consistency training for semi-supervised medical image segmentation.

Ju M, Wang B, Zhao Z, Zhang S, Yang S, Wei Z

PubMed · Jul 31 2025
Teacher-Student (TS) networks have become a mainstream framework for semi-supervised deep learning and are widely used in medical image segmentation. However, traditional TS frameworks based on single or homogeneous encoders often struggle to capture the rich semantic details required for complex, fine-grained tasks. To address this, we propose a novel semi-supervised medical image segmentation framework (IHE-Net) that exploits the feature discrepancies of two heterogeneous encoders to improve segmentation performance. The two encoders are instantiated by networks with different learning paradigms, namely CNN and Transformer/Mamba, to extract richer and more robust context representations from unlabeled data. On this basis, we propose a simple yet powerful multi-level feature discrepancy fusion module (MFDF), which effectively integrates the different modal features and their discrepancies from the two heterogeneous encoders. This design enhances the representational capacity of the model through efficient fusion without introducing additional computational overhead. Furthermore, we introduce a triple consistency learning strategy that improves predictive stability by using dual decoders and adding mixed-output consistency. Extensive experimental results on three skin lesion segmentation datasets, ISIC2017, ISIC2018, and PH2, demonstrate the superiority of our framework. Ablation studies further validate the rationale and effectiveness of the proposed method. Code is available at: https://github.com/joey-AI-medical-learning/IHE-Net.
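
Below is a minimal sketch of output-consistency training on unlabeled images, in the spirit of the consistency strategy described above; the exact losses, decoder pairing, and weighting in IHE-Net may differ.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Mean-squared error between the softened predictions of two decoders."""
    return F.mse_loss(torch.softmax(logits_a, dim=1),
                      torch.softmax(logits_b, dim=1))

# Toy outputs from two heterogeneous branches fed the same unlabeled
# batch (B=2, classes=2, 64x64 masks).
logits_cnn = torch.randn(2, 2, 64, 64)
logits_transformer = torch.randn(2, 2, 64, 64)
print(f"Consistency loss: {consistency_loss(logits_cnn, logits_transformer):.4f}")
```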

Efficacy of image similarity as a metric for augmenting small dataset retinal image segmentation.

Wallace T, Heng IS, Subasic S, Messenger C

PubMed · Jul 30 2025
Synthetic images are an option for augmenting limited medical imaging datasets to improve the performance of machine learning models. A common metric for evaluating synthetic image quality is the Fréchet Inception Distance (FID), which measures the similarity of two image datasets. In this study, we evaluate the relationship between this metric and the improvement that synthetic images, generated by a Progressively Growing Generative Adversarial Network (PGGAN), provide when augmenting diabetes-related macular edema (DME) intraretinal fluid segmentation performed by a U-Net model with limited training data. We find that the behaviour of augmenting with standard and synthetic images agrees with previously conducted experiments. Additionally, we show that dissimilar (high-FID) datasets do not improve segmentation significantly. As the FID between the training and augmenting datasets decreases, the augmentation datasets contribute to significant and robust improvements in image segmentation. Finally, we find significant evidence that synthetic and standard augmentations follow separate log-normal trends between FID and improvement in model performance, with synthetic data proving more effective than standard augmentation techniques. Our findings show that more similar datasets (lower FID) are more effective at improving U-Net performance; however, the results also suggest that this improvement may only occur when images are sufficiently dissimilar.
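
For reference, FID is the Fréchet distance between Gaussians fitted to feature embeddings of the two datasets: ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). The sketch below implements that formula with random vectors standing in for the InceptionV3 embeddings used in practice.

```python
import numpy as np
from scipy import linalg

def fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """FID between two (N, D) feature arrays."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.standard_normal((500, 64))
synthetic = rng.standard_normal((500, 64)) + 0.5   # shifted distribution
print(f"FID: {fid(real, synthetic):.2f}")
```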

Harnessing infrared thermography and multi-convolutional neural networks for early breast cancer detection.

Attallah O

PubMed · Jul 28 2025
Breast cancer is one of the most common carcinomas among women worldwide and remains a considerable public health concern. Prompt identification is crucial, as research indicates that 96% of cancers are treatable if diagnosed prior to metastasis. Despite being considered the gold standard for breast cancer evaluation, conventional mammography has inherent drawbacks, including accessibility issues, especially in rural regions, and discomfort associated with the procedure. There has therefore been a surge of interest in non-invasive, radiation-free diagnostic alternatives such as thermal imaging (thermography). Thermography employs infrared thermal sensors to capture and assess temperature maps of the breast, identifying potential tumours from areas of thermal irregularity. This study proposes an advanced computer-aided diagnosis (CAD) system, Thermo-CAD, for early breast cancer detection from thermal imaging, aimed at assisting radiologists. The system employs multiple convolutional neural networks (CNNs) to enhance diagnostic accuracy and reliability. To integrate the deep features effectively and reduce the dimensionality of the features derived from each CNN, feature transformation and selection methods, including non-negative matrix factorization and Relief-F, are used, reducing classification complexity. Thermo-CAD is assessed on two datasets: DMR-IR (Database for Mastology Research Infrared Images), for distinguishing normal from abnormal breast tissue, and a novel thermography dataset for distinguishing abnormal instances as benign or malignant. Thermo-CAD proved to be an outstanding CAD system for thermographic breast cancer detection, attaining 100% accuracy on the DMR-IR dataset (normal versus abnormal) with CSVM and MGSVM classifiers, and lower accuracy with LSVM and QSVM classifiers. It showed a lower ability to distinguish benign from malignant cases (second dataset), achieving 79.3% accuracy with CSVM, yet remains a promising tool for early-stage cancer detection, especially in resource-constrained environments.
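
A hedged sketch of the fuse-reduce-classify idea described above: simulated deep features from several CNNs are concatenated, reduced with NMF, and classified with an SVM. Relief-F is replaced here by scikit-learn's mutual-information selector as a stand-in, and the feature matrices are simulated, so this is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 120
y = rng.integers(0, 2, n)                # normal vs abnormal labels
# Simulated non-negative deep features from three CNN backbones,
# weakly shifted by class so the task is learnable.
cnn_feats = [np.abs(rng.standard_normal((n, 256)) + 0.5 * y[:, None])
             for _ in range(3)]
X = np.hstack(cnn_feats)                 # fused multi-CNN feature vector

clf = make_pipeline(
    NMF(n_components=32, init="nndsvda", max_iter=500),  # reduce dimensionality
    SelectKBest(mutual_info_classif, k=16),              # stand-in for Relief-F
    SVC(kernel="rbf"),
)
print(f"CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```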