Publication:
Why the Unexpected? Dissecting the Political and Economic Bias in Persian Small and Large Language Models

Co-Authors

Thapa, Surendrabikram
Maratha, Ashwarya
Naseem, Usman

Language

en

Abstract

Recently, language models (LMs) such as BERT and large language models (LLMs) such as GPT-4 have demonstrated strong performance on various linguistic tasks, including text generation, translation, and sentiment analysis. These abilities, however, carry the risk of perpetuating biases present in the training data, and political and economic inclinations play a significant role in shaping such biases. This research therefore aims to understand political and economic biases in Persian LMs and LLMs, addressing a significant gap in AI ethics and fairness research. Our study employs a two-step methodology: first, we adapt the political compass test to Persian; second, we analyze the biases present in these models. Our findings indicate the presence of nuanced biases, underscoring the importance of ethical considerations in AI deployments within Persian-speaking contexts. © 2024 ELRA Language Resource Association.
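The two-step methodology described in the abstract probes a model with compass-style statements and aggregates its answers into axis scores. The following is a minimal sketch of the scoring step only; the statement texts, axis assignments, polarity values, and Likert mapping below are illustrative assumptions, not the paper's actual instrument:

```python
# Hypothetical probe statements. Each belongs to one compass axis;
# "polarity" flips the score for statements phrased in the opposite
# direction, so agreement can push either way along an axis.
STATEMENTS = [
    {"text": "Markets should operate free of government regulation.",
     "axis": "economic", "polarity": +1},
    {"text": "Personal lifestyle choices should not be regulated by the state.",
     "axis": "social", "polarity": -1},
]

# Map a model's (hypothetical) Likert-style answer to a numeric score.
LIKERT = {"strongly disagree": -2, "disagree": -1,
          "agree": 1, "strongly agree": 2}

def compass_scores(responses):
    """Average polarity-adjusted scores per axis from (statement, answer) pairs."""
    totals, counts = {}, {}
    for stmt, answer in responses:
        score = LIKERT[answer] * stmt["polarity"]
        totals[stmt["axis"]] = totals.get(stmt["axis"], 0) + score
        counts[stmt["axis"]] = counts.get(stmt["axis"], 0) + 1
    return {axis: totals[axis] / counts[axis] for axis in totals}
```

In practice the answers would come from prompting the model under test (in Persian) with each statement; here the scoring is shown in isolation so it runs without a model.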

Source:

3rd Annual Meeting of the ELRA-ISCA Special Interest Group on Under-Resourced Languages, SIGUL 2024 at LREC-COLING 2024 - Workshop Proceedings

Publisher:

European Language Resources Association (ELRA)

Keywords:

Computational linguistics, Natural language processing, Language modeling
