Structure Development in List-Sorting Transformers

Bibliographic Details
Title: Structure Development in List-Sorting Transformers
Authors: Urdshals, Einar; Urdshals, Jasmina
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Neural and Evolutionary Computing
Description: We study how a one-layer attention-only transformer develops relevant structures while learning to sort lists of numbers. At the end of training, the model organizes its attention heads in two main modes that we refer to as vocabulary-splitting and copy-suppression. Both represent simpler modes than having multiple heads handle overlapping ranges of numbers. Interestingly, vocabulary-splitting is present regardless of whether we use weight decay, a common regularization technique thought to drive simplification, supporting the thesis that neural networks naturally prefer simpler solutions. We relate copy-suppression to a mechanism in GPT-2 and investigate its functional role in our model. Guided by insights from a developmental analysis of the model, we identify features in the training data that drive the model's final acquired solution. This provides a concrete example of how the training data shape the internal organization of transformers, paving the way for future studies that could help us better understand how LLMs develop their internal structures.
Comment: 15+19 pages, 6+13 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2501.18666
Accession Number: edsarx.2501.18666
Database: arXiv
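
The paper's code is not included in this record. As a minimal sketch of the model class the abstract names, the following defines a one-layer, attention-only transformer (a single self-attention layer, no MLP blocks) that could be trained to predict the sorted continuation of a list. The class name `AttnOnlyTransformer` and all hyperparameters (`vocab_size`, `d_model`, `n_heads`, `ctx_len`) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class AttnOnlyTransformer(nn.Module):
    """Illustrative one-layer attention-only transformer (hypothetical,
    not the authors' implementation)."""

    def __init__(self, vocab_size=64, d_model=128, n_heads=4, ctx_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(ctx_len, d_model)
        # One multi-head self-attention layer and no MLP blocks:
        # the "one-layer attention-only" architecture named in the abstract.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.unembed = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, tokens):
        # tokens: (batch, seq) integer ids, e.g. an unsorted list, a
        # separator token, then the sorted list as the prediction target.
        seq = tokens.shape[1]
        positions = torch.arange(seq, device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        # Causal mask: True entries mark disallowed attention positions.
        mask = torch.triu(
            torch.ones(seq, seq, dtype=torch.bool, device=tokens.device),
            diagonal=1,
        )
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        return self.unembed(x + attn_out)  # logits over the vocabulary

model = AttnOnlyTransformer()
logits = model(torch.randint(0, 64, (8, 16)))  # shape (8, 16, 64)
```

Training such a model would minimize cross-entropy on the positions holding the sorted half of each sequence; the per-head attention patterns of the trained model are the objects the paper analyzes for vocabulary-splitting and copy-suppression.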