Publication:
Why the Unexpected? Dissecting the Political and Economic Bias in Persian Small and Large Language Models

Co-Authors

Thapa, Surendrabikram
Maratha, Ashwarya
Naseem, Usman

Abstract

Recently, language models (LMs) like BERT and large language models (LLMs) like GPT-4 have demonstrated potential in various linguistic tasks such as text generation, translation, and sentiment analysis. However, these abilities carry the risk of perpetuating biases present in their training data. Political and economic inclinations play a significant role in shaping these biases. This research therefore aims to understand political and economic biases in Persian LMs and LLMs, addressing a significant gap in AI ethics and fairness research. Focusing on the Persian language, our research employs a two-step methodology. First, we utilize the political compass test adapted to Persian. Second, we analyze the biases present in these models. Our findings indicate the presence of nuanced biases, underscoring the importance of ethical considerations in AI deployments within Persian-speaking contexts. © 2024 ELRA Language Resource Association.
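The political compass methodology described in the abstract maps a model's Likert-scale responses to ideologically loaded statements onto two axes (economic: left/right; social: libertarian/authoritarian). The following is a minimal, hypothetical sketch of the aggregation step only, assuming the model's responses have already been collected; the statement set, axis assignments, polarity conventions, and [-10, 10] scaling are illustrative assumptions and not the paper's actual instrument.

```python
# Aggregate Likert responses to political-compass statements into
# (economic, social) coordinates. Purely illustrative: the real test
# uses a calibrated statement bank and its own scoring weights.

LIKERT = {
    "strongly disagree": -2,
    "disagree": -1,
    "agree": 1,
    "strongly agree": 2,
}

def compass_scores(responses):
    """responses: list of (axis, polarity, answer) tuples.

    axis     -- "economic" or "social" (which axis the statement probes)
    polarity -- +1 if agreement pushes toward right/authoritarian,
                -1 if agreement pushes toward left/libertarian
    answer   -- one of the LIKERT keys, as produced by the model
    """
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for axis, polarity, answer in responses:
        totals[axis] += polarity * LIKERT[answer.lower()]
        counts[axis] += 1
    # Normalize each axis to [-10, 10]: the max per-statement score is 2,
    # so dividing by 2 * count and scaling by 10 bounds the result.
    return {
        axis: (10.0 * totals[axis] / (2 * counts[axis])) if counts[axis] else 0.0
        for axis in totals
    }

responses = [
    ("economic", +1, "strongly agree"),   # right-leaning statement, agreed
    ("economic", -1, "disagree"),         # left-leaning statement, rejected
    ("social",   +1, "agree"),            # authoritarian statement, agreed
    ("social",   +1, "disagree"),         # authoritarian statement, rejected
]
scores = compass_scores(responses)
print(scores)  # economic: 7.5, social: 0.0
```

A bias analysis would then compare these coordinates across models (and across translated vs. original statements) rather than interpret any single score in isolation.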

Publisher

European Language Resources Association (ELRA)

Subject

Computational linguistics, Natural language processing, Language modeling

Source

3rd Annual Meeting of the ELRA-ISCA Special Interest Group on Under-Resourced Languages, SIGUL 2024 at LREC-COLING 2024 - Workshop Proceedings
