ABC Align: Large Language Model Alignment for Safety & Accuracy

Bibliographic Details
Title: ABC Align: Large Language Model Alignment for Safety & Accuracy
Authors: Seneque, Gareth; Ho, Lap-Hang; Kuperman, Ariel; Saeedi, Nafise Erfanian; Molendijk, Jeffrey
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, 68T50, I.2.7
Description: Alignment of Large Language Models (LLMs) remains an unsolved problem. Human preferences are highly distributed and can be captured at multiple levels of abstraction, from the individual to diverse populations. Organisational preferences, represented by standards and principles, are defined to mitigate reputational risk or meet legislative obligations. In this paper, we present ABC Align, a novel alignment methodology for LLMs that enables integration of the standards and preferences of a large media organisation into the LLM itself. We combine a set of data and methods that build on recent breakthroughs in synthetic data generation, preference optimisation, and post-training model quantisation. Our unified approach mitigates bias and improves accuracy, while preserving reasoning capability, as measured against standard benchmarks.
Comment: 23 pages, 4 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.00307
Accession Number: edsarx.2408.00307
Database: arXiv