Triplet Loss for Knowledge Distillation

Bibliographic Details
Title: Triplet Loss for Knowledge Distillation
Authors: Junichi Miyao, Hideki Oki, Motoshi Abe, Takio Kurita
Source: IJCNN
Publication Information: IEEE, 2020.
Publication Year: 2020
Subject Terms: FOS: Computer and information sciences, Computer Science - Machine Learning (cs.LG), Computer Science - Computer Vision and Pattern Recognition (cs.CV), Machine learning, Deep learning, Artificial intelligence, Computer science, Feature extraction, Similarity (geometry), Euclidean distance, Metric (mathematics), Ensemble forecasting, Distillation
Description: In recent years, deep learning has spread rapidly, and ever deeper and larger models have been proposed. However, the computational cost becomes enormous as model size grows. Various techniques for compressing models have been proposed to retain performance while reducing computational cost. One such method is knowledge distillation (KD), a technique for transferring the knowledge of a deep or ensemble model with many parameters (the teacher model) to a smaller, shallower model (the student model). Since the purpose of knowledge distillation is to increase the similarity between the teacher and student models, we propose introducing the concept of metric learning into knowledge distillation, bringing the student model closer to the teacher model by using pairs or triplets of the training samples. In metric learning, researchers develop methods that make a model's outputs more similar for similar samples: the aim is to reduce the distance between similar samples and increase the distance between dissimilar ones. This ability of metric learning to reduce the differences between similar outputs can be used in knowledge distillation to reduce the differences between the outputs of the teacher model and the student model. Since the teacher model's outputs for different objects usually differ, the student model also needs to distinguish them; we argue that metric learning can sharpen the differences between such outputs and thereby improve the performance of the student model. We perform experiments comparing the proposed method with state-of-the-art knowledge distillation methods.
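The abstract describes pulling the student's output toward the teacher's output for the same sample while pushing it away from the teacher's output for other samples. Below is a minimal, hypothetical sketch of such a triplet-style distillation term in PyTorch; the exact triplet construction, distance, and margin are assumptions for illustration only, not the authors' implementation (see the linked GitHub repository for the official code).

    import torch
    import torch.nn.functional as F

    def triplet_distillation_loss(student_out, teacher_out, margin=1.0):
        # Anchor:   student output for sample i
        # Positive: teacher output for the same sample i   (pull together)
        # Negative: teacher output for a different sample  (push apart)
        teacher_out = teacher_out.detach()             # teacher is frozen during distillation
        negative = teacher_out.roll(shifts=1, dims=0)  # pair each anchor with another sample's teacher output
        d_pos = F.pairwise_distance(student_out, teacher_out)  # Euclidean distance
        d_neg = F.pairwise_distance(student_out, negative)
        return F.relu(d_pos - d_neg + margin).mean()   # standard triplet hinge

    # Toy usage: in practice this term would be combined with the usual
    # cross-entropy (and/or soft-target) loss when training the student.
    student_out = torch.randn(8, 10, requires_grad=True)
    teacher_out = torch.randn(8, 10)
    loss = triplet_distillation_loss(student_out, teacher_out)
    loss.backward()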
Comment: Accepted to IJCNN 2020, Source code is at https://github.com/i13abe/Triplet-Loss-for-Knowledge-Distillation
DOI: 10.1109/ijcnn48605.2020.9207148
Access URL: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::58d36ff686cdf05dcad514fa2161bc4d
https://doi.org/10.1109/ijcnn48605.2020.9207148
Rights: OPEN
Accession Number: edsair.doi.dedup.....58d36ff686cdf05dcad514fa2161bc4d
Database: OpenAIRE