Report
Quantized Distillation: Optimizing Driver Activity Recognition Models for Resource-Constrained Environments
| Title | Quantized Distillation: Optimizing Driver Activity Recognition Models for Resource-Constrained Environments |
|---|---|
| Authors | Tanama, Calvin; Peng, Kunyu; Marinov, Zdravko; Stiefelhagen, Rainer; Roitberg, Alina |
| Publication year | 2023 |
| Collection | Computer Science |
| Subject terms | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics |
| Description | Deep learning-based models are at the forefront of most driver observation benchmarks due to their remarkable accuracy, but they are also associated with high computational costs. This is challenging, as resources are often limited in real-world driving scenarios. This paper introduces a lightweight framework for resource-efficient driver activity recognition. The framework enhances 3D MobileNet, a neural architecture optimized for speed in video classification, by incorporating knowledge distillation and model quantization to balance model accuracy and computational efficiency. Knowledge distillation helps maintain accuracy while reducing the model size by leveraging soft labels from a larger teacher model (I3D), instead of relying solely on the original ground-truth data. Model quantization significantly lowers memory and computation demands by using lower-precision integers for model weights and activations. Extensive testing on a public dataset for in-vehicle monitoring during autonomous driving demonstrates that this new framework achieves a threefold reduction in model size and a 1.4-fold improvement in inference time, compared to an already optimized architecture. The code for this study is available at https://github.com/calvintanama/qd-driver-activity-reco. Comment: Accepted at IROS 2023 |
| Document type | Working Paper |
| Access URL | http://arxiv.org/abs/2311.05970 |
| Accession number | edsarx.2311.05970 |
| Database | arXiv |
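The description mentions knowledge distillation with soft labels from a larger I3D teacher in place of ground truth alone. A minimal sketch of the standard soft-label distillation loss follows; the temperature and weighting hyperparameters here are illustrative assumptions, not the paper's actual values:

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax; higher temperature yields softer distributions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    """Blend KL divergence to the teacher's soft labels with cross-entropy
    on the ground-truth label. temperature and alpha are assumed values."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), scaled by T^2 as in Hinton-style distillation
    kl = sum(p * math.log(p / q) for p, q in zip(p_teacher, p_student))
    soft_loss = kl * temperature ** 2
    # Standard cross-entropy against the hard ground-truth label
    hard_loss = -math.log(softmax(student_logits)[true_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the weighted cross-entropy against the ground truth remains.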
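The description also notes that quantization lowers memory and computation demands by using lower-precision integers for weights and activations. A minimal sketch of symmetric per-tensor int8 quantization; the function names and the per-tensor symmetric scheme are illustrative, not necessarily the paper's exact configuration:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map [-max|w|, +max|w|] to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    scale = scale if scale > 0 else 1.0  # guard against an all-zero tensor
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale factor."""
    return [qi * scale for qi in q]
```

Each stored value shrinks from 32-bit float to 8-bit integer, which is where the reported model-size reduction largely comes from; the rounding introduces an error of at most one quantization step per weight.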