Enhancing Zero-Shot Vision Models by Label-Free Prompt Distribution Learning and Bias Correcting

Abstract

Vision-language models, such as CLIP, have shown impressive generalization capabilities when paired with appropriate text descriptions. While optimizing prompts on downstream labeled data has proven effective in improving performance, these methods entail labor costs for annotations and are limited by annotation quality. Additionally, since CLIP is pre-trained on highly imbalanced Web-scale data, it suffers from inherent label bias that leads to suboptimal performance. To tackle these challenges, we propose a label-Free prompt distribution learning and bias correction framework, dubbed Frolic, which boosts zero-shot performance without the need for labeled data. Specifically, Frolic learns distributions over prompt prototypes to capture diverse visual representations and adaptively fuses these with the original CLIP model through confidence matching. The fused model is further enhanced by correcting label bias via a label-free logit adjustment. Notably, our method is not only training-free but also circumvents the need for hyper-parameter tuning. Extensive experimental results across 16 datasets demonstrate the efficacy of our approach, which outperforms the state-of-the-art by an average of 2.6% on 10 datasets with CLIP ViT-B/16 and achieves an average margin of 1.5% on ImageNet and its five distribution shifts with CLIP ViT-B/16. Code is available in the supplementary materials.
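As a rough illustration of the pipeline the abstract describes, the minimal NumPy sketch below walks through the three stages: a label-free distribution-based classifier built from CLIP pseudo-labels, confidence-matched fusion with the zero-shot model, and a label-free logit adjustment. It is a sketch under simplifying assumptions, not the paper's implementation: the function name `frolic_style_logits`, the soft-pseudo-label Gaussian fit with shared covariance, the grid search used for confidence matching, and the prior estimate from fused predictions are all illustrative choices.

```python
# Hypothetical sketch of a Frolic-style label-free pipeline (not the authors' code).
# Assumes `image_feats` (N x D, L2-normalized) come from an unlabeled image pool,
# `text_protos` (C x D, L2-normalized) are CLIP text embeddings of the class prompts,
# and `temp` is CLIP's logit scale.
import numpy as np


def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def frolic_style_logits(image_feats, text_protos, temp=100.0):
    n, d = image_feats.shape

    # 1) Zero-shot CLIP logits.
    clip_logits = temp * image_feats @ text_protos.T

    # 2) Label-free distribution-based classifier: fit one Gaussian per class
    #    (shared covariance) from CLIP's soft pseudo-labels on the unlabeled pool.
    pseudo = softmax(clip_logits)                          # N x C soft assignments
    counts = pseudo.sum(axis=0)                            # soft class counts
    means = (pseudo.T @ image_feats) / (counts[:, None] + 1e-8)
    resid = image_feats - pseudo @ means                   # residual to the soft-assigned mean
    cov = resid.T @ resid / n + 1e-4 * np.eye(d)           # regularized shared covariance
    prec = np.linalg.inv(cov)
    gda_logits = (image_feats @ prec @ means.T
                  - 0.5 * np.einsum("cd,de,ce->c", means, prec, means))

    # 3) Confidence matching: rescale the new head so its average top-class
    #    probability matches the zero-shot model's, then fuse by summing logits.
    def avg_conf(logits):
        return softmax(logits).max(axis=1).mean()

    target = avg_conf(clip_logits)
    scales = np.linspace(0.01, 10.0, 200)
    scale = scales[np.argmin([abs(avg_conf(s * gda_logits) - target) for s in scales])]
    fused = clip_logits + scale * gda_logits

    # 4) Label-free logit adjustment: estimate the class prior from the fused
    #    predictions on unlabeled data and subtract its log to correct label bias.
    prior = softmax(fused).mean(axis=0)
    return fused - np.log(prior + 1e-8)


# Usage: predictions = frolic_style_logits(feats, protos).argmax(axis=1)
```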

Publication
In NeurIPS 2024
Xingyu Zhu
Shuo Wang (Associate Researcher)
Yanbin Hao (Associate Researcher)