Page 5 of 1331322 results

Exploring learning transferability in deep segmentation of colorectal cancer liver metastases.

Abbas M, Badic B, Andrade-Miranda G, Bourbonne V, Jaouen V, Visvikis D, Conze PH

PubMed · Sep 26 2025
Ensuring the seamless transfer of knowledge and models across various datasets and clinical contexts is of paramount importance in medical image segmentation. This is especially true for liver lesion segmentation, which plays a key role in pre-operative planning and treatment follow-up. Despite the progress of deep learning algorithms using Transformers, automatically segmenting small hepatic metastases remains a persistent challenge. This can be attributed to the degradation of small structures caused by the feature down-sampling inherent to many deep architectures, coupled with the imbalance between foreground metastasis voxels and background. While similar challenges have been observed for liver tumors originating from hepatocellular carcinoma, their manifestation in the context of liver metastasis delineation remains under-explored and lacks well-defined guidelines. Through comprehensive experiments, this paper aims to bridge this gap and to demonstrate the impact of various transfer learning schemes from off-the-shelf datasets to a dataset containing liver metastases only. Our scale-specific evaluation reveals that models trained from scratch or with domain-specific pre-training demonstrate greater proficiency.
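The abstract's "scale-specific evaluation" reports performance separately by lesion size, so that accuracy on small metastases is not masked by large lesions. A minimal sketch of that idea, with a hypothetical voxel-count cutoff and a set-of-voxels lesion representation (both illustrative, not the paper's):

```python
# Toy sketch of a scale-stratified Dice evaluation: lesions are grouped by
# voxel count (small vs. large) and Dice is averaged per group.
# The cutoff of 10 voxels is a hypothetical choice for illustration.

def dice(pred, truth):
    """Dice coefficient between two sets of voxel coordinates."""
    inter = len(pred & truth)
    denom = len(pred) + len(truth)
    return 2.0 * inter / denom if denom else 1.0

def scale_stratified_dice(lesions, predictions, small_cutoff=10):
    """Average Dice per scale bin; each lesion is a set of voxel indices."""
    bins = {"small": [], "large": []}
    for truth, pred in zip(lesions, predictions):
        key = "small" if len(truth) < small_cutoff else "large"
        bins[key].append(dice(pred, truth))
    return {k: (sum(v) / len(v) if v else None) for k, v in bins.items()}

# A 4-voxel lesion only half-recovered vs. a 20-voxel lesion fully recovered:
small = set(range(4))
large = set(range(100, 120))
scores = scale_stratified_dice([small, large], [set(range(2)), set(range(100, 120))])
```

Reporting `scores["small"]` and `scores["large"]` separately is what exposes the small-structure degradation the paper discusses.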

Hemorica: A Comprehensive CT Scan Dataset for Automated Brain Hemorrhage Classification, Segmentation, and Detection

Kasra Davoodi, Mohammad Hoseyni, Javad Khoramdel, Reza Barati, Reihaneh Mortazavi, Amirhossein Nikoofard, Mahdi Aliyari-Shoorehdeli, Jaber Hatam Parikhan

arXiv preprint · Sep 26 2025
Timely diagnosis of intracranial hemorrhage (ICH) on computed tomography (CT) scans remains a clinical priority, yet the development of robust artificial intelligence (AI) solutions is still hindered by fragmented public data. To close this gap, we introduce Hemorica, a publicly available collection of 372 head CT examinations acquired between 2012 and 2024. Each scan has been exhaustively annotated for five ICH subtypes, namely epidural (EPH), subdural (SDH), subarachnoid (SAH), intraparenchymal (IPH), and intraventricular (IVH), yielding patient-wise and slice-wise classification labels, subtype-specific bounding boxes, two-dimensional pixel masks, and three-dimensional voxel masks. A double-reading workflow, preceded by a pilot consensus phase and supported by neurosurgeon adjudication, maintained low inter-rater variability. Comprehensive statistical analysis confirms the clinical realism of the dataset. To establish reference baselines, standard convolutional and transformer architectures were fine-tuned for binary slice classification and hemorrhage segmentation. With only minimal fine-tuning, lightweight models such as MobileViT-XS achieved an F1 score of 87.8% in binary classification, while a U-Net with a DenseNet161 encoder reached a Dice score of 85.5% for binary lesion segmentation, validating both the quality of the annotations and the sufficiency of the sample size. Hemorica therefore offers a unified, fine-grained benchmark that supports multi-task and curriculum learning, facilitates transfer to larger but weakly labelled cohorts, and eases the design of AI-based assistants for ICH detection and quantification.
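The F1 score reported for binary slice classification (hemorrhage present or absent) is the harmonic mean of precision and recall. A minimal sketch of the metric on toy labels (the labels below are illustrative, not drawn from Hemorica):

```python
# F1 score for binary slice-wise classification, computed from raw labels.
# y_true / y_pred below are synthetic example data.

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]   # one miss, one false alarm
score = f1_score(y_true, y_pred)    # tp=3, fp=1, fn=1
```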

MultiD4CAD: Multimodal Dataset composed of CT and Clinical Features for Coronary Artery Disease Analysis.

Prinzi F, Militello C, Sollami G, Toia P, La Grutta L, Vitabile S

PubMed · Sep 26 2025
Multimodal datasets offer valuable support for developing Clinical Decision Support Systems (CDSS), which leverage predictive models to enhance clinicians' decision-making. In this observational study, we present a dataset of patients with suspected coronary artery disease (CAD), called MultiD4CAD, comprising imaging and clinical data. The imaging data, obtained from coronary computed tomography angiography (CCTA), include epicardial (EAT) and pericoronary (PAT) adipose tissue segmentations; these metabolically active fat tissues play a key role in cardiovascular disease. In addition, the clinical data include a set of biomarkers recognized as CAD risk factors. The validated EAT and PAT segmentations make the dataset suitable for training predictive models based on radiomics and deep learning architectures, and the inclusion of CAD disease labels allows its use in supervised learning algorithms to predict CAD outcomes. MultiD4CAD has revealed important correlations between imaging features, clinical biomarkers, and CAD status. The article concludes by discussing challenges, such as classification, segmentation, radiomics, and deep learning tasks, that can be investigated and validated using the proposed dataset.

Johnson-Lindenstrauss Lemma Guided Network for Efficient 3D Medical Segmentation

Jinpeng Lu, Linghan Cai, Yinda Chen, Guo Tang, Songhan Jiang, Haoyuan Shi, Zhiwei Xiong

arXiv preprint · Sep 26 2025
Lightweight 3D medical image segmentation remains constrained by a fundamental efficiency/robustness conflict, particularly when processing complex anatomical structures and heterogeneous modalities. In this paper, we study how to redesign the framework around the characteristics of high-dimensional 3D images, and explore data synergy to overcome the fragile representations of lightweight methods. Our approach, VeloxSeg, begins with a deployable and extensible dual-stream CNN-Transformer architecture composed of Paired Window Attention (PWA) and Johnson-Lindenstrauss lemma-guided convolution (JLC). For each 3D image, we invoke a "glance-and-focus" principle, where PWA rapidly retrieves multi-scale information and JLC ensures robust local feature extraction with minimal parameters, significantly enhancing the model's ability to operate under a low computational budget. The dual-stream architecture is then extended to incorporate modal interaction into the multi-scale image-retrieval process, allowing VeloxSeg to efficiently model heterogeneous modalities. Finally, Spatially Decoupled Knowledge Transfer (SDKT) via Gram matrices injects the texture prior extracted by a self-supervised network into the segmentation network, yielding stronger representations than baselines at no extra inference cost. Experimental results on multimodal benchmarks show that VeloxSeg achieves a 26% Dice improvement while increasing GPU throughput by 11x and CPU throughput by 48x. Code is available at https://github.com/JinPLu/VeloxSeg.
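The Johnson-Lindenstrauss lemma that motivates the JLC module states that a random projection to a much lower dimension approximately preserves pairwise Euclidean distances. A sketch of the classical Gaussian construction (this illustrates the lemma itself, not the paper's convolution design; dimensions are arbitrary):

```python
# Classical JL construction: project from 512-d to 128-d with a random
# Gaussian matrix scaled by 1/sqrt(k); pairwise distances are approximately
# preserved with high probability. Seeded for reproducibility.
import math
import random

random.seed(0)

def random_projection(dim_in, dim_out):
    """Gaussian projection matrix scaled by 1/sqrt(dim_out)."""
    scale = 1.0 / math.sqrt(dim_out)
    return [[random.gauss(0.0, 1.0) * scale for _ in range(dim_in)]
            for _ in range(dim_out)]

def project(matrix, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

dim_in, dim_out = 512, 128
proj = random_projection(dim_in, dim_out)
x = [random.gauss(0.0, 1.0) for _ in range(dim_in)]
y = [random.gauss(0.0, 1.0) for _ in range(dim_in)]
# Ratio of projected to original distance; close to 1 with high probability.
ratio = dist(project(proj, x), project(proj, y)) / dist(x, y)
```

The appeal for a lightweight segmenter is that distance-preserving dimension reduction needs no learned parameters at all, which is presumably why the lemma guides the JLC design.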

Segmental airway volume as a predictive indicator of postoperative extubation timing in patients with oral and maxillofacial space infections: a retrospective analysis.

Liu S, Shen H, Zhu B, Zhang X, Zhang X, Li W

PubMed · Sep 26 2025
The objective of this study was to investigate the value of segmental airway volume in developing a predictive model to guide the timing of postoperative extubation in patients with oral and maxillofacial space infections (OMSIs). A retrospective cohort study was performed to analyse clinical data from 177 medical records, with a focus on key variables related to disease severity and treatment outcomes. The inclusion criteria were as follows: adherence to the OMSI diagnostic criteria (local tissue inflammation characterized by erythema, oedema, hyperthermia, and tenderness); compromised function such as difficulty opening the mouth, swallowing, or breathing; the presence of purulent material confirmed by puncture or computed tomography (CT); and laboratory findings indicating an underlying infectious process. The data included age, sex, body mass index (BMI), blood test results, smoking history, history of alcohol abuse, the extent of mouth opening, the number of infected spaces, and the source of infection. DICOM files were imported into 3D Slicer for manual segmentation, followed by volume measurement of each airway segment. We observed statistically significant differences in age, neutrophil count, lymphocyte count, and C4 segment volume among patient subgroups stratified by extubation time. Regression analysis revealed that age and C4 segment volume were significantly correlated with extubation time. Additionally, the machine learning models yielded good evaluation metrics. Segmental airway volume therefore shows promise as an indicator for predicting extubation time, and predictive models built with machine learning algorithms achieve good performance and may facilitate clinical decision-making.
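To illustrate how a single predictor such as C4 segment volume could feed a risk model for delayed extubation, here is a minimal univariate logistic regression fitted by gradient descent. All volumes, labels, and the inverse relationship assumed below are synthetic placeholders, not values or findings from the study:

```python
# Hypothetical sketch: logistic regression of prolonged-intubation risk on a
# single airway-volume predictor. Data are synthetic and assume smaller
# volume -> later extubation, purely for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Univariate logistic regression by gradient descent on standardized x."""
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    zs = [(x - mean) / std for x in xs]
    w, b = 0.0, 0.0
    for _ in range(epochs):
        grad_w = sum((sigmoid(w * z + b) - y) * z for z, y in zip(zs, ys)) / len(zs)
        grad_b = sum((sigmoid(w * z + b) - y) for z, y in zip(zs, ys)) / len(zs)
        w -= lr * grad_w
        b -= lr * grad_b
    return lambda x: sigmoid(w * (x - mean) / std + b)

volumes = [2.1, 2.4, 2.6, 2.8, 3.4, 3.6, 3.9, 4.2]  # cm^3, hypothetical
late    = [1,   1,   1,   1,   0,   0,   0,   0]    # 1 = delayed extubation
risk = fit_logistic(volumes, late)
```

The returned `risk` function maps a volume to a probability of delayed extubation; the study's actual models also include age and other covariates.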

Active-Supervised Model for Intestinal Ulcers Segmentation Using Fuzzy Labeling.

Chen J, Lin Y, Saeed F, Ding Z, Diyan M, Li J, Wang Z

PubMed · Sep 25 2025
Inflammatory bowel disease (IBD) is a chronic inflammatory condition of the intestines with a rising global incidence. Colonoscopy remains the gold standard for IBD diagnosis, but traditional image-scoring methods are subjective and complex, impacting diagnostic accuracy and efficiency. To address these limitations, this paper investigates machine learning techniques for intestinal ulcer segmentation, focusing on multi-category ulcer segmentation to enhance IBD diagnosis. We identified two primary challenges in intestinal ulcer segmentation: 1) labeling noise, where inaccuracies in medical image annotation introduce ambiguity, hindering model training, and 2) performance variability across datasets, where models struggle to maintain high accuracy due to medical image diversity. To address these challenges, we propose an active ulcer segmentation algorithm based on fuzzy labeling. A collaborative training segmentation model is designed to utilize pixel-wise confidence extracted from fuzzy labels, distinguishing high- and low-confidence regions, and enhancing robustness to noisy labels through network cooperation. To mitigate performance disparities, we introduce a data adaptation strategy leveraging active learning. By selecting high-information samples based on uncertainty and diversity, the strategy enables incremental model training, improving adaptability. Extensive experiments on public and hospital datasets validate the proposed methods. Our collaborative training model and active learning strategy show significant advantages in handling noisy labels and enhancing model performance across datasets, paving the way for more precise and efficient IBD diagnosis.
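The core idea of weighting pixels by confidence extracted from fuzzy labels can be sketched with a confidence-weighted cross-entropy: low-confidence pixels contribute less to the loss, so annotation noise has less influence on training. The weighting scheme below is a generic illustration, not the paper's exact formulation:

```python
# Binary cross-entropy with per-pixel confidence weights in [0, 1].
# Pixels the fuzzy label marks as uncertain are down-weighted.
import math

def weighted_bce(preds, labels, confidences, eps=1e-7):
    """Confidence-weighted binary cross-entropy over flattened pixels."""
    total, weight_sum = 0.0, 0.0
    for p, y, c in zip(preds, labels, confidences):
        p = min(max(p, eps), 1.0 - eps)       # clamp for numerical safety
        loss = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += c * loss
        weight_sum += c
    return total / weight_sum if weight_sum else 0.0

preds  = [0.9, 0.2, 0.6]
labels = [1,   0,   0]    # third pixel disagrees with the prediction
full   = weighted_bce(preds, labels, [1.0, 1.0, 1.0])
damped = weighted_bce(preds, labels, [1.0, 1.0, 0.2])  # distrust noisy pixel
```

Down-weighting the disputed pixel lowers the loss it contributes, which is the mechanism that makes training robust to noisy regions.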

Conditional Virtual Imaging for Few-Shot Vascular Image Segmentation.

He Y, Ge R, Tang H, Liu Y, Su M, Coatrieux JL, Shu H, Chen Y, He Y

PubMed · Sep 25 2025
In the field of medical image processing, vascular image segmentation plays a crucial role in clinical diagnosis, treatment planning, prognosis, and medical decision-making. Accurate and automated segmentation of vascular images can assist clinicians in understanding the vascular network structure, leading to more informed medical decisions. However, manual annotation of vascular images is time-consuming and challenging due to the fine and low-contrast vascular branches, especially in the medical imaging domain where annotation requires specialized knowledge and clinical expertise. Data-driven deep learning models struggle to achieve good performance when only a small number of annotated vascular images are available. To address this issue, this paper proposes a novel Conditional Virtual Imaging (CVI) framework for few-shot vascular image segmentation learning. The framework combines limited annotated data with extensive unlabeled data to generate high-quality images, effectively improving the accuracy and robustness of segmentation learning. Our approach primarily includes two innovations: First, aligned image-mask pair generation, which leverages the powerful image generation capabilities of large pre-trained models to produce high-quality vascular images with complex structures using only a few training images; Second, the Dual-Consistency Learning (DCL) strategy, which simultaneously trains the generator and segmentation model, allowing them to learn from each other and maximize the utilization of limited data. Experimental results demonstrate that our CVI framework can generate high-quality medical images and effectively enhance the performance of segmentation models in few-shot scenarios. Our code will be made publicly available online.

Segmentation-model-based framework to detect aortic dissection on non-contrast CT images: a retrospective study.

Wang Q, Huang S, Pan W, Feng Z, Lv L, Guan D, Yang Z, Huang Y, Liu W, Shui W, Ying M, Xiao W

PubMed · Sep 25 2025
To develop an automated deep learning framework for detecting aortic dissection (AD) and visualizing its morphology and extent on non-contrast CT (NCCT) images. This retrospective study included patients who underwent aortic CTA from January 2021 to January 2023 at two tertiary hospitals. Demographic data, medical history, and CT scans were collected. A segmentation-based deep learning model was trained to identify true and false lumens on NCCT images, with performance evaluated on internal and external test sets. Segmentation accuracy was measured using the Dice coefficient, while the intraclass correlation coefficient (ICC) assessed consistency between predicted and ground-truth false lumen volumes. Receiver operating characteristic (ROC) analysis evaluated the model's predictive performance. Among 701 patients (median age, 53 years; IQR, 41-64; 486 males), data from Center 1 were split into training (439 cases: 318 non-AD, 121 AD) and internal test sets (106 cases: 77 non-AD, 29 AD) in an 8:2 ratio, while Center 2 served as the external test set (156 cases: 80 non-AD, 76 AD). The ICC for false lumen volume was 0.823 (95% CI: 0.750-0.880) internally and 0.823 (95% CI: 0.760-0.870) externally. Using false lumen volume as the indicator for AD, the model achieved an AUC of 0.935 (95% CI: 0.894-0.968) in the external test set, with an optimal cutoff of 7649 mm³ yielding 88.2% sensitivity, 91.3% specificity, and 89.0% negative predictive value. The proposed framework accurately detects AD on NCCT and effectively visualizes its morphological features, demonstrating strong clinical potential: it may reduce misdiagnosis of AD in time-critical emergencies and cut unnecessary contrast use, and presenting the true and false lumens on NCCT benefits patients with contraindications to contrast media and supports treatment decisions.
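The reported operating point can be illustrated directly: predicted false-lumen volume above the cutoff flags AD, and sensitivity and specificity follow from the resulting confusion counts. The cutoff of 7649 mm³ is the one quoted in the abstract; the patient volumes below are synthetic:

```python
# Sensitivity/specificity at a fixed volume cutoff. The cutoff value comes
# from the abstract; the example volumes and labels are synthetic.

CUTOFF_MM3 = 7649

def sens_spec(volumes, has_ad, cutoff=CUTOFF_MM3):
    tp = sum(1 for v, y in zip(volumes, has_ad) if v > cutoff and y)
    fn = sum(1 for v, y in zip(volumes, has_ad) if v <= cutoff and y)
    tn = sum(1 for v, y in zip(volumes, has_ad) if v <= cutoff and not y)
    fp = sum(1 for v, y in zip(volumes, has_ad) if v > cutoff and not y)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

volumes = [12000, 9000, 8000, 3000, 500, 0, 15000, 100]   # mm^3, synthetic
has_ad  = [True, True, True, True, False, False, True, False]
sens, spec = sens_spec(volumes, has_ad)
```

Sweeping `cutoff` over all observed volumes and plotting sensitivity against 1 - specificity is exactly the ROC analysis the study used to pick its operating point.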

CACTUS: Multiview classifier for Punctate White Matter Lesions detection & segmentation in cranial ultrasound volumes.

Estermann F, Kaftandjian V, Guy P, Quetin P, Delachartre P

PubMed · Sep 25 2025
Punctate white matter lesions (PWML) are the most common white matter injuries found in preterm neonates, with several studies indicating a connection between these lesions and negative long-term outcomes. Automated detection of PWML on ultrasound (US) imaging could assist clinicians in diagnosis more effectively and at a lower cost than MRI. However, this task is highly challenging because of the lesions' small size and low contrast, and because the number of lesions can vary significantly between subjects. In this work, we propose a two-phase approach: (1) segmentation using a vision transformer to increase the detection rate of lesions, and (2) multi-view classification leveraging cross-attention to reduce false positives and enhance precision. We also investigate multiple postprocessing approaches to ensure prediction quality and compare our results with what is known in MRI. Our method demonstrates improved performance in PWML detection on US images, achieving recall and precision of 0.84 and 0.70, respectively, an increase of 2% and 10% over the best published US models. Moreover, by reducing the task to a slightly simpler problem (detection of MRI-visible PWML), the model achieves 0.82 recall and 0.89 precision, equivalent to the latest method in MRI.

Clinically Explainable Disease Diagnosis based on Biomarker Activation Map.

Zang P, Wang C, Hormel TT, Bailey ST, Hwang TS, Jia Y

PubMed · Sep 25 2025
Artificial intelligence (AI)-based disease classifiers have achieved specialist-level performance in several diagnostic tasks. However, real-world adoption of these classifiers remains challenging due to the black-box issue. Here, we report a novel biomarker activation map (BAM) generation framework that can provide clinically meaningful explainability to current AI-based disease classifiers. We designed the framework around the concept of residual counterfactual explanation, generating counterfactual outputs that could reverse the decision-making of the disease classifier. The BAM was generated as the difference map between the counterfactual output and the original input, with postprocessing. We evaluated the BAM on four different disease classifiers: an age-related macular degeneration classifier based on fundus photography, a diabetic retinopathy classifier based on optical coherence tomography angiography, a brain tumor classifier based on magnetic resonance imaging (MRI), and a breast cancer classifier based on computed tomography (CT) scans. The highlighted regions in the BAM correlated highly with manually demarcated biomarkers of each disease. The BAM can improve the clinical applicability of an AI-based disease classifier by providing intuitive output that clinicians can use to understand and verify the diagnostic decision.
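The core construction, a difference map between the counterfactual output and the original input, can be sketched in a few lines. The thresholding step below stands in for the postprocessing the abstract does not specify, and the tiny 2D images are synthetic:

```python
# Toy sketch of the BAM construction: the absolute difference between an
# input image and a classifier-flipping counterfactual, with a simple
# threshold standing in for the paper's (unspecified) postprocessing.

def biomarker_activation_map(image, counterfactual, threshold=0.1):
    """Absolute difference map, zeroing sub-threshold changes."""
    bam = []
    for row_img, row_cf in zip(image, counterfactual):
        row = [abs(a - b) for a, b in zip(row_img, row_cf)]
        bam.append([v if v >= threshold else 0.0 for v in row])
    return bam

image          = [[0.2, 0.2, 0.2],
                  [0.2, 0.9, 0.2]]   # bright "lesion" pixel at the center
counterfactual = [[0.2, 0.2, 0.2],
                  [0.2, 0.3, 0.2]]   # lesion dimmed to flip the decision
bam = biomarker_activation_map(image, counterfactual)
```

Only the region the counterfactual had to alter survives in `bam`, which is why the highlighted areas align with the biomarkers that drove the classifier's decision.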