الوصف: |
Semantic segmentation extends classical image classification by assigning a class to each pixel of a given image. This approach requires a significant amount of resources. Most of the time, low-power devices are unable to deliver predictions for this task because of its computational requirements. Some small robots lack the inference speed, enough memory to run inference on even a single instance at a time, or the battery life to deliver continuous predictions. Another limitation is the inability to train models on the edge, which can severely restrict the practicality of a solution. As if current networks were not already too large for this class of devices, novel architectures tend to be even more complex, further widening the gap between state-of-the-art models and what low-power hardware can run. With this in mind, the project has the goal of exploring efficient solutions for deploying segmentation models on the edge. To do so, it investigates efficient architectures and lightweight convolutional layers, alternative segmentation methods, and alternative methods of weight representation. In the end, benchmarks of efficient networks combining quantization, filter pruning with distillation, and layer replacement show that these methods can save computational resources, but at the cost of some accuracy. ; The semantic segmentation process involves an enormous amount of resources. Consequently, models of this kind are difficult or, in most cases, impossible to export to electronic devices with low computational capacity. Small devices, some of them robots, lack the computational capabilities needed to make the inference process viable. These small robots often do not have enough RAM or, in other cases, a battery large enough to run inference continuously, even for short intervals of time.
Another aspect is the impossibility of ... |