Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations

Bibliographic Details
Title: Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations
Authors: Fayyaz, Mohsen; Aghazadeh, Ehsan; Modarressi, Ali; Mohebbi, Hosein; Pilehvar, Mohammad Taher
Publication Year: 2021
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language; Computer Science - Artificial Intelligence
Description: Most recent work on probing representations has focused on BERT, on the presumption that the findings would carry over to other models. In this work, we extend the probing studies to two other models in the family, namely ELECTRA and XLNet, showing that variations in pre-training objectives or architectural choices can result in different behaviors in encoding linguistic information in the representations. Most notably, we observe that ELECTRA tends to encode linguistic knowledge in the deeper layers, whereas XLNet concentrates it in the earlier layers. Moreover, the former undergoes only a slight change during fine-tuning, whereas the latter experiences significant adjustments. We also show that drawing conclusions from the weight mixing evaluation strategy, which is widely used in layer-wise probing, can be misleading given the norm disparity of the representations across layers. Instead, we adopt an alternative information-theoretic probing with minimum description length, which has recently been shown to provide more reliable and informative results.
Comment: Accepted to BlackboxNLP Workshop at EMNLP 2021
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2109.05958
Accession Number: edsarx.2109.05958
Database: arXiv
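
Note: the abstract's claim that weight mixing can mislead due to norm disparity can be illustrated with a minimal PyTorch sketch. This is not the paper's code; the hidden states and the norm growth across layers are simulated, and the uniform mixing weights are a deliberate simplification.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for the layer-wise hidden states of a 12-layer
# encoder (embedding layer + 12 transformer layers). In practice these
# would come from a real model, e.g. via output_hidden_states=True.
num_layers, hidden_dim = 13, 768
# Simulate the norm disparity the abstract refers to:
# deeper layers get progressively larger norms.
hidden_states = [torch.randn(hidden_dim) * (i + 1) for i in range(num_layers)]

# ELMo-style scalar weight mixing: one softmax-normalized scalar per layer.
mixing_logits = torch.zeros(num_layers)        # uniform weights, for illustration
weights = torch.softmax(mixing_logits, dim=0)
mixed = sum(w * h for w, h in zip(weights, hidden_states))

# Even with perfectly uniform weights, large-norm layers dominate the mixture,
# so the learned scalars alone are a misleading proxy for layer importance.
norms = torch.stack([h.norm() for h in hidden_states])
contribution = weights * norms / (weights * norms).sum()
print("softmax weights:           ", weights)       # all ~0.077
print("norm-adjusted contribution:", contribution)  # skewed toward deep layers
print("cosine(mixed, last layer): ",
      torch.nn.functional.cosine_similarity(mixed, hidden_states[-1], dim=0).item())
```

In this sketch the norm-adjusted contribution, not the raw softmax weight, tracks what each layer actually adds to the mixed representation, which is the motivation the abstract gives for switching to minimum-description-length probing.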