Academic Journal
MathAttack: Attacking Large Language Models towards Math Solving Ability
Title: MathAttack: Attacking Large Language Models towards Math Solving Ability
Authors: Zhou, Zihao; Wang, Qiufeng; Jin, Mingyu; Yao, Jie; Ye, Jianan; Liu, Wei; Wang, Wei; Huang, Xiaowei; Huang, Kaizhu
Source: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38 No. 17: AAAI-24 Technical Tracks 17, pp. 19750-19758; ISSN 2374-3468, 2159-5399
Publisher: Association for the Advancement of Artificial Intelligence
Publication Year: 2024
Collection: Association for the Advancement of Artificial Intelligence: AAAI Publications
Subject Terms: NLP: (Large) Language Models; NLP: Applications
Description: With the boom of Large Language Models (LLMs), research on solving Math Word Problems (MWPs) has recently made great progress. However, few studies have examined the robustness of LLMs in math solving. Instead of attacking prompts in the use of LLMs, we propose a MathAttack model to attack MWP samples, which is closer to the essence of robustness in solving math problems. Compared to traditional text adversarial attacks, it is essential to preserve the mathematical logic of the original MWPs during the attack. To this end, we propose logical entity recognition to identify logical entities, which are then frozen. Subsequently, the remaining text is attacked by a word-level attacker. Furthermore, we propose a new dataset, RobustMath, to evaluate the robustness of LLMs in math solving. Extensive experiments on RobustMath and two other math benchmark datasets, GSM8K and MultiArith, show that MathAttack can effectively attack the math solving ability of LLMs. In the experiments, we observe that (1) our adversarial samples from higher-accuracy LLMs are also effective for attacking LLMs with lower accuracy (e.g., transferring from larger to smaller LLMs, or from few-shot to zero-shot prompts); (2) complex MWPs (those with more solving steps, longer text, or more numbers) are more vulnerable to attack; (3) we can improve the robustness of LLMs by using our adversarial samples in few-shot prompts. Finally, we hope our practice and observations can serve as an important attempt towards enhancing the robustness of LLMs in math solving. The code and dataset are available at: https://github.com/zhouzihao501/MathAttack.
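For illustration only, the minimal sketch below mirrors the pipeline the abstract describes: tokens carrying mathematical logic are identified and frozen, and the remaining words are perturbed by a word-level substitution. Everything in it is an assumption for this sketch rather than the authors' method: the regex-based entity detector stands in for the paper's logical entity recognition, and the toy synonym table stands in for its word-level attacker (the actual implementation is in the linked repository).

```python
import random
import re

# Toy stand-in for a word-level attacker's candidate substitutions.
SYNONYMS = {
    "bought": ["purchased"],
    "gave": ["handed"],
}

def freeze_logical_entities(tokens: list[str]) -> set[int]:
    """Return indices of tokens treated as logical entities.

    Assumption: numeric tokens approximate the logical entities that
    MathAttack's logical entity recognition would freeze.
    """
    return {i for i, tok in enumerate(tokens) if re.search(r"\d", tok)}

def math_attack(text: str, seed: int = 0) -> str:
    """Perturb non-frozen tokens while leaving the math logic intact."""
    random.seed(seed)
    tokens = text.split()
    frozen = freeze_logical_entities(tokens)
    for i, tok in enumerate(tokens):
        if i in frozen:
            continue  # never touch tokens carrying mathematical logic
        candidates = SYNONYMS.get(tok.lower())
        if candidates:
            tokens[i] = random.choice(candidates)
    return " ".join(tokens)

if __name__ == "__main__":
    mwp = "Tom bought 3 apples and gave 1 apple to Mary. How many apples does Tom have now?"
    print(math_attack(mwp))
```

In the actual attack, the substitutions would typically be proposed and scored by an adversarial word-level attacker against the victim LLM, and a perturbation counts as successful when it changes the model's answer while the frozen entities keep the underlying math problem unchanged.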
Document Type: article in journal/newspaper
File Description: application/pdf
Language: English
Relation: https://ojs.aaai.org/index.php/AAAI/article/view/29949/31658; https://ojs.aaai.org/index.php/AAAI/article/view/29949/31659; https://ojs.aaai.org/index.php/AAAI/article/view/29949
DOI: 10.1609/aaai.v38i17.29949
Availability: https://ojs.aaai.org/index.php/AAAI/article/view/29949; https://doi.org/10.1609/aaai.v38i17.29949
Rights: Copyright (c) 2024 Association for the Advancement of Artificial Intelligence
Accession Number: edsbas.2B500500
Database: BASE