Unsupervised Pre-Training With Language-Vision Prompts for Low-Data Instance Segmentation

Bibliographic Details
Title: Unsupervised Pre-Training With Language-Vision Prompts for Low-Data Instance Segmentation
Authors: Dingwen Zhang, Hao Li, Diqi He, Nian Liu, Lechao Cheng, Jingdong Wang, Junwei Han
Source: IEEE Transactions on Pattern Analysis and Machine Intelligence. 47:8642-8657
Publication Status: Published (also available as an arXiv preprint)
Publisher Information: Institute of Electrical and Electronics Engineers (IEEE), 2025.
Publication Year: 2025
Subject Terms: FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV)
Description: In recent times, following the paradigm of DETR (DEtection TRansformer), query-based end-to-end instance segmentation (QEIS) methods have exhibited superior performance compared to CNN-based models, particularly when trained on large-scale datasets. Nevertheless, the effectiveness of these QEIS methods diminishes significantly when confronted with limited training data. This limitation arises from their reliance on substantial data volumes to effectively train the pivotal queries/kernels that are essential for acquiring localization and shape priors. To address this problem, we propose a novel method for unsupervised pre-training in low-data regimes. Inspired by the recently successful prompting technique, we introduce a new method, Unsupervised Pre-training with Language-Vision Prompts (UPLVP), which improves QEIS models' instance segmentation by bringing language-vision prompts to queries/kernels. Our method consists of three parts: (1) Masks Proposal: Utilizes language-vision models to generate pseudo masks based on unlabeled images. (2) Prompt-Kernel Matching: Converts pseudo masks into prompts and injects the best-matched localization and shape features to their corresponding kernels. (3) Kernel Supervision: Formulates supervision for pre-training at the kernel level to ensure robust learning. With the help of our pre-training method, QEIS models can converge faster and perform better than CNN-based models in low-data regimes. Experimental evaluations conducted on MS COCO, Cityscapes, and CTW1500 datasets indicate that the QEIS models' performance can be significantly improved when pre-trained with our method. Code will be available at: https://github.com/lifuguan/UPLVP.
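The abstract's three-stage pre-training pipeline can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the random "language-vision" mask proposer, the average-pooled prompt encoding, the greedy cosine-similarity matching, and the MSE kernel loss are all assumptions standing in for the paper's actual components.

```python
import numpy as np

def propose_masks(image, num_masks=4, seed=0):
    """Stage 1 (Masks Proposal): stand-in for a language-vision model that
    yields binary pseudo masks from an unlabeled image (random here)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    return (rng.random((num_masks, h, w)) > 0.5).astype(np.float32)

def masks_to_prompts(masks, grid=8):
    """Stage 2a: convert each pseudo mask into a fixed-length prompt vector
    via coarse average pooling (an illustrative encoding, not the paper's)."""
    m, h, w = masks.shape
    pooled = masks.reshape(m, grid, h // grid, grid, w // grid).mean(axis=(2, 4))
    return pooled.reshape(m, -1)

def match_and_inject(prompts, kernels, alpha=0.5):
    """Stage 2b (Prompt-Kernel Matching): assign each prompt to its most
    similar free kernel by cosine similarity, then blend the prompt's
    localization/shape features into that kernel."""
    p = prompts / (np.linalg.norm(prompts, axis=1, keepdims=True) + 1e-8)
    k = kernels / (np.linalg.norm(kernels, axis=1, keepdims=True) + 1e-8)
    sim = p @ k.T                        # (num_prompts, num_kernels)
    new_kernels = kernels.copy()
    assigned = []
    for i in range(sim.shape[0]):        # greedy one-to-one assignment
        j = int(np.argmax(sim[i]))
        sim[:, j] = -np.inf              # each kernel is used at most once
        new_kernels[j] = (1 - alpha) * kernels[j] + alpha * prompts[i]
        assigned.append(j)
    return new_kernels, assigned

def kernel_supervision_loss(kernels, prompts, assigned):
    """Stage 3 (Kernel Supervision): a kernel-level target so queries/kernels
    acquire localization and shape priors (MSE used as a placeholder)."""
    pred = kernels[assigned]             # (num_prompts, dim)
    return float(((pred - prompts) ** 2).mean())

# Toy pre-training step on one unlabeled image.
image = np.zeros((64, 64, 3), dtype=np.float32)
kernels = np.random.default_rng(1).normal(size=(10, 64)).astype(np.float32)
masks = propose_masks(image)
prompts = masks_to_prompts(masks)        # (4, 64) prompt vectors
kernels, assigned = match_and_inject(prompts, kernels)
loss = kernel_supervision_loss(kernels, prompts, assigned)
print(prompts.shape, assigned, loss)
```

The greedy one-to-one assignment above is a simplification; a bipartite (Hungarian-style) matching would serve the same role of routing each pseudo-mask prompt to a distinct kernel before supervising at the kernel level.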
Document Type: Article
ISSN: 1939-3539 (online); 0162-8828 (print)
DOI: 10.1109/tpami.2025.3579469 (IEEE); 10.48550/arxiv.2405.13388 (arXiv)
Access URL: http://arxiv.org/abs/2405.13388
Rights: IEEE Copyright; arXiv Non-Exclusive Distribution
Accession Number: edsair.doi.dedup.....84509bd9ae928b783341c057125484de
Database: OpenAIRE