DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers

Bibliographic Details
Title: DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers
Authors: Chen, Yuntao; Wang, Yuqi; Zhang, Zhaoxiang
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: World model-based searching and planning are widely recognized as a promising path toward human-level physical intelligence. However, current driving world models primarily rely on video diffusion models, which specialize in visual generation but lack the flexibility to incorporate other modalities like action. In contrast, autoregressive transformers have demonstrated exceptional capability in modeling multimodal data. Our work aims to unify both driving model simulation and trajectory planning into a single sequence modeling problem. We introduce a multimodal driving language based on interleaved image and action tokens, and develop DrivingGPT to learn joint world modeling and planning through standard next-token prediction. Our DrivingGPT demonstrates strong performance in both action-conditioned video generation and end-to-end planning, outperforming strong baselines on large-scale nuPlan and NAVSIM benchmarks.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2412.18607
Accession Number: edsarx.2412.18607
Database: arXiv
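The abstract's core idea of a "multimodal driving language" of interleaved image and action tokens can be sketched as follows. This is a toy illustration only, not the paper's actual tokenizer: the function name, token IDs, and per-frame token counts are all invented for clarity.

```python
# Hypothetical sketch: interleave per-frame image tokens and per-step action
# tokens into one flat sequence, so a standard autoregressive transformer can
# train on it with next-token prediction (as the abstract describes).

def interleave(image_token_frames, action_token_frames):
    """Concatenate [image tokens, action tokens] pairs, frame by frame."""
    assert len(image_token_frames) == len(action_token_frames)
    sequence = []
    for img_tokens, act_tokens in zip(image_token_frames, action_token_frames):
        sequence.extend(img_tokens)   # visual tokens for frame t
        sequence.extend(act_tokens)   # action tokens for step t
    return sequence

# Two frames with toy sizes: 4 image tokens and 2 action tokens each.
imgs = [[10, 11, 12, 13], [14, 15, 16, 17]]
acts = [[100, 101], [102, 103]]
seq = interleave(imgs, acts)
# seq = [10, 11, 12, 13, 100, 101, 14, 15, 16, 17, 102, 103]
```

At inference time, such a model could either roll out future image tokens given actions (action-conditioned video generation) or predict the next action tokens given images (planning), which is how one unified sequence supports both tasks.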