Pretrained Language Models as Visual Planners for Human Assistance

Bibliographic Details
Title: Pretrained Language Models as Visual Planners for Human Assistance
Authors: Patel, Dhruvesh, Eghbalzadeh, Hamid, Kamra, Nitin, Iuzzolino, Michael Louis, Jain, Unnat, Desai, Ruta
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
Description: In our pursuit of advancing multi-modal AI assistants capable of guiding users to achieve complex multi-step goals, we propose the task of "Visual Planning for Assistance (VPA)". Given a succinct natural language goal, e.g., "make a shelf", and a video of the user's progress so far, the aim of VPA is to devise a plan, i.e., a sequence of actions such as "sand shelf", "paint shelf", etc., to realize the specified goal. This requires assessing the user's progress from the (untrimmed) video and relating it to the requirements of the natural language goal, i.e., deciding which actions to select and in what order. Consequently, this requires handling long video history and arbitrarily complex action dependencies. To address these challenges, we decompose VPA into video action segmentation and forecasting. Importantly, we formulate the forecasting step as a multi-modal sequence modeling problem, allowing us to leverage the strength of pre-trained LMs (as the sequence model). This novel approach, which we call Visual Language Model based Planner (VLaMP), outperforms baselines across a suite of metrics that gauge the quality of the generated plans. Furthermore, through comprehensive ablations, we also isolate the value of each component: language pre-training, visual observations, and goal information. We have open-sourced all the data, model checkpoints, and training code. (A rough sketch of this forecasting formulation follows the record below.)
Comment: Accepted at ICCV 2023
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2304.09179
Accession Number: edsarx.2304.09179
Database: arXiv
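
Illustrative sketch: the abstract frames the forecasting step as multi-modal sequence modeling with a pretrained LM, i.e., projecting observed video segments and the language goal into one token sequence that the LM continues with future actions. The sketch below shows roughly how that could look; it is a minimal, assumption-laden example, not the authors' released VLaMP code. The class name ForecastingSketch, the linear projection, the feature dimension of 512, and the use of GPT-2 are all illustrative choices.

import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class ForecastingSketch(nn.Module):
    """Forecasting as multi-modal sequence modeling (illustrative only).

    NOT the paper's implementation; names and dimensions are assumptions.
    """
    def __init__(self, visual_dim=512):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained("gpt2")
        # Project per-segment visual features into the LM's token-embedding
        # space so video history and goal text form a single sequence.
        self.visual_proj = nn.Linear(visual_dim, self.lm.config.n_embd)

    def forward(self, goal_ids, visual_feats):
        # goal_ids: (B, T_text) token ids of the natural-language goal
        # visual_feats: (B, T_vis, visual_dim) features of observed segments
        text_emb = self.lm.transformer.wte(goal_ids)
        vis_emb = self.visual_proj(visual_feats)
        inputs = torch.cat([text_emb, vis_emb], dim=1)
        # Future action tokens would be decoded autoregressively from the
        # LM's next-token logits over this joint sequence.
        return self.lm(inputs_embeds=inputs).logits

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
goal_ids = tokenizer("make a shelf", return_tensors="pt").input_ids
visual_feats = torch.randn(1, 8, 512)  # stand-in for segment features
logits = ForecastingSketch()(goal_ids, visual_feats)

Under this framing, producing the plan ("sand shelf", "paint shelf", ...) reduces to standard autoregressive decoding over an action vocabulary, which is what lets a language-pretrained sequence model be reused for planning.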