HumanBench: Towards General Human-centric Perception with Projector Assisted Pretraining
| Field | Value |
|---|---|
| Title | HumanBench: Towards General Human-centric Perception with Projector Assisted Pretraining |
| Authors | Tang, Shixiang; Chen, Cheng; Xie, Qingsong; Chen, Meilin; Wang, Yizhou; Ci, Yuanzheng; Bai, Lei; Zhu, Feng; Yang, Haiyang; Yi, Li; Zhao, Rui; Ouyang, Wanli |
| Publication Year | 2023 |
| Collection | Computer Science |
| Subject Terms | Computer Science - Computer Vision and Pattern Recognition |
| Description | Human-centric perception includes a variety of vision tasks with widespread industrial applications, including surveillance, autonomous driving, and the metaverse. It is desirable to have a general pretrained model for versatile human-centric downstream tasks. This paper forges ahead along this path from the aspects of both benchmark and pretraining methods. Specifically, we propose **HumanBench**, a benchmark built on existing datasets, to comprehensively evaluate on common ground the generalization abilities of different pretraining methods across 19 datasets from 6 diverse downstream tasks, including person ReID, pose estimation, human parsing, pedestrian attribute recognition, pedestrian detection, and crowd counting. To learn both coarse-grained and fine-grained knowledge of human bodies, we further propose a **P**rojector **A**ssis**T**ed **H**ierarchical pretraining method (**PATH**) that learns diverse knowledge at different granularity levels. Comprehensive evaluations on HumanBench show that PATH achieves new state-of-the-art results on 17 downstream datasets and on-par results on the other 2. The code will be made publicly available at https://github.com/OpenGVLab/HumanBench. Comment: Accepted to CVPR 2023. |
| Document Type | Working Paper |
| Access URL | http://arxiv.org/abs/2303.05675 |
| Accession Number | edsarx.2303.05675 |
| Database | arXiv |
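
The description above mentions projector-assisted pretraining: a shared backbone serves many human-centric tasks, while lightweight task-specific projectors absorb task-specific knowledge. The PyTorch sketch below illustrates only that general idea; it is not the paper's implementation, and all module names, dimensions, and the toy backbone are illustrative assumptions.

```python
# A minimal, hypothetical sketch of projector-assisted multi-task pretraining
# (not the authors' PATH implementation): a shared backbone feeds per-task
# projectors, and each task keeps its own head. Sizes are placeholders.
import torch
import torch.nn as nn


class ProjectorAssistedModel(nn.Module):
    def __init__(self, feat_dim=256, tasks=("reid", "pose", "parsing")):
        super().__init__()
        # Shared backbone: stands in for the real image encoder.
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 224 * 224, feat_dim),
            nn.ReLU(),
        )
        # One lightweight projector per task, so task-specific knowledge is
        # absorbed by the projector rather than the shared backbone.
        self.projectors = nn.ModuleDict(
            {t: nn.Linear(feat_dim, feat_dim) for t in tasks}
        )
        # Task-specific heads (output sizes are placeholders).
        self.heads = nn.ModuleDict(
            {t: nn.Linear(feat_dim, 10) for t in tasks}
        )

    def forward(self, images, task):
        shared = self.backbone(images)             # task-agnostic features
        projected = self.projectors[task](shared)  # task-adapted features
        return self.heads[task](projected)         # task-specific prediction


if __name__ == "__main__":
    model = ProjectorAssistedModel()
    batch = torch.randn(2, 3, 224, 224)
    # Each pretraining step samples a task and routes through its projector.
    logits = model(batch, task="pose")
    print(logits.shape)  # torch.Size([2, 10])
```

The design intent sketched here is that, after pretraining, the projectors and heads can be discarded or swapped, leaving a backbone whose features transfer across the downstream tasks listed in the record.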