CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models

Bibliographic Details
Title: CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models
Authors: Cheng, Zihui; Chen, Qiguang; Zhang, Jin; Fei, Hao; Feng, Xiaocheng; Che, Wanxiang; Li, Min; Qin, Libo
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
Description: Large Vision-Language Models (LVLMs) have recently demonstrated remarkable success in multi-modal tasks, including advances in Multi-modal Chain-of-Thought (MCoT) reasoning. Despite these successes, current benchmarks still follow the traditional paradigm of multi-modal input and text-only output, which leads to significant drawbacks such as missing visual operations and vague expressions. Motivated by this, we introduce a novel Chain of Multi-modal Thought (CoMT) benchmark to address these limitations. Unlike traditional MCoT benchmarks, CoMT requires both multi-modal input and multi-modal reasoning output, aiming to mimic human-like reasoning that inherently integrates visual operations. Specifically, CoMT consists of four categories: (1) Visual Creation, (2) Visual Deletion, (3) Visual Update, and (4) Visual Selection, to comprehensively explore complex visual operations and concise expression in real scenarios. We evaluate various LVLMs and strategies on CoMT, revealing key insights into the capabilities and limitations of current approaches. We hope that CoMT can inspire more research on introducing multi-modal generation into the reasoning process.
Comment: Accepted at AAAI 2025
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2412.12932
Accession Number: edsarx.2412.12932
Database: arXiv