[Journal Article]


A transformer-based deep learning model for Persian moral sentiment analysis

Authors:
Behnam Karami; Fatemeh Bakouie; Shahriar Gharibzadeh

Year of publication: Not available

Pages: Not available
Publisher: SAGE Publications


Abstract:

Moral expressions in online communications can have a serious impact on framing discussions and subsequent online behaviours. Despite research on extracting moral sentiment from English text, low-resource languages such as Persian lack sufficient resources and research on this important topic. We address this issue using Moral Foundations Theory (MFT) as the theoretical moral-psychology paradigm. We developed a Twitter data set of 8000 tweets manually annotated for moral foundations and established a baseline for computing moral sentiment from Persian text. We evaluated a range of state-of-the-art machine learning models, both rule-based and neural, including distributed dictionary representation (DDR), long short-term memory (LSTM) and bidirectional encoder representations from transformers (BERT). Our findings show that, among the different models, fine-tuning a pre-trained Persian BERT language model with a linear network as the classifier yields the best results. Furthermore, we analysed this model to find out which of its layers contributes most to this superior accuracy. We also proposed an alternative transformer-based model that yields results competitive with the BERT model while being smaller and faster at inference. The proposed model can be used as a tool for analysing moral sentiment and framing in Persian texts for downstream social and psychological studies. We also hope our work provides resources for further enhancing methods for computing moral sentiment in Persian text.
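As a point of reference for the approach described in the abstract, the minimal sketch below shows the general pattern of pairing a pre-trained Persian BERT encoder with a linear classification head via the Hugging Face Transformers library. The checkpoint name, label set and sequence length are illustrative assumptions, not the authors' actual configuration, data or code; in practice the whole model would be fine-tuned on the annotated tweets before use.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed Persian BERT checkpoint (ParsBERT); the paper does not specify this exact ID.
MODEL_NAME = "HooshvareLab/bert-fa-base-uncased"
# Assumed label set loosely following the moral foundations of MFT.
LABELS = ["care", "fairness", "loyalty", "authority", "purity", "non-moral"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Loads the pre-trained encoder and attaches a randomly initialised linear classifier,
# which would then be fine-tuned end to end on the annotated tweets.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

def classify(texts):
    """Tokenise a batch of Persian tweets and return predicted foundation labels."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return [LABELS[i] for i in logits.argmax(dim=-1).tolist()]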



Keywords:

Not available


Journal
Journal of Information Science
ISSN: 0165-5515
From: SAGE Publications