Academic Journal

SGLFormer: Spiking Global-Local-Fusion Transformer with high performance

Bibliographic Details
Title: SGLFormer: Spiking Global-Local-Fusion Transformer with high performance
Authors: Han Zhang, Chenlin Zhou, Liutao Yu, Liwei Huang, Zhengyu Ma, Xiaopeng Fan, Huihui Zhou, Yonghong Tian
Source: Frontiers in Neuroscience, Vol 18 (2024)
Publication Information: Frontiers Media S.A., 2024.
Publication Year: 2024
Collection: LCC:Neurosciences. Biological psychiatry. Neuropsychiatry
Subject Terms: Spiking Neural Network, spiking transformer, Global-Local-Fusion, Maxpooling, spatio-temporal, high performance, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
Description:
Introduction: Spiking Neural Networks (SNNs), inspired by brain science, offer low energy consumption and high biological plausibility with their event-driven nature. However, current SNNs still suffer from insufficient performance.
Methods: Recognizing the brain's adeptness at information processing for various scenarios, with complex neuronal connections within and across regions as well as specialized neuronal architectures for specific functions, we propose a Spiking Global-Local-Fusion Transformer (SGLFormer) that significantly improves the performance of SNNs. This novel architecture enables efficient information processing on both global and local scales by integrating transformer and convolution structures in SNNs. In addition, we uncover the problem of inaccurate gradient backpropagation caused by Maxpooling in SNNs and address it by developing a new Maxpooling module. Furthermore, we adopt a spatio-temporal block (STB) in the classification head instead of global average pooling, facilitating the aggregation of spatial and temporal features.
Results: SGLFormer demonstrates superior performance on static datasets such as CIFAR10/CIFAR100 and ImageNet, as well as on dynamic vision sensor (DVS) datasets including CIFAR10-DVS and DVS128-Gesture. Notably, on ImageNet, SGLFormer achieves a top-1 accuracy of 83.73% with 64 M parameters, outperforming the current SOTA directly trained SNNs by a margin of 6.66%.
Discussion: With its high performance, SGLFormer can support more computer vision tasks in the future. The code for this study can be found at https://github.com/ZhangHanN1/SGLFormer.
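The abstract's core idea, processing features at a global scale (transformer-style token mixing) and a local scale (convolution), then fusing both before a spiking nonlinearity, can be illustrated with a minimal NumPy sketch. This is not the paper's actual SGLFormer block; all function names, the fusion-by-addition choice, and the threshold value are illustrative assumptions.

```python
import numpy as np

def heaviside_spike(x, threshold=0.5):
    # Spiking activation (illustrative): emit a spike (1) where the
    # membrane potential crosses the firing threshold, else 0.
    return (x >= threshold).astype(x.dtype)

def global_branch(x):
    # Global scale: attention-like token mixing across all positions.
    # Projections and softmax are omitted for brevity; spiking
    # transformers typically use spike-based attention variants.
    scores = x @ x.T / np.sqrt(x.shape[-1])  # (tokens, tokens)
    return scores @ x                         # (tokens, dim)

def local_branch(x, kernel=3):
    # Local scale: a simple 1-D sliding-window average over the token
    # axis, standing in for a depthwise convolution.
    pad = kernel // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        out[i] = xp[i:i + kernel].mean(axis=0)
    return out

def global_local_fusion(x):
    # Fuse both scales (here by addition, an assumption), then apply
    # the spiking nonlinearity to produce a binary spike map.
    return heaviside_spike(global_branch(x) + local_branch(x))
```

For example, `global_local_fusion(np.full((4, 8), 0.1))` returns a `(4, 8)` array whose entries are all 0 or 1, since the output of the spiking activation is binary.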
Document Type: article
File Description: electronic resource
Language: English
ISSN: 1662-453X
Relation: https://www.frontiersin.org/articles/10.3389/fnins.2024.1371290/full; https://doaj.org/toc/1662-453X
DOI: 10.3389/fnins.2024.1371290
Access URL: https://doaj.org/article/5be57903c02540f9a45774ec4d0421c6
Accession Number: edsdoj.5be57903c02540f9a45774ec4d0421c6
Database: Directory of Open Access Journals