Department: Department of Computer Engineering
Date added to repository: 2024-11-09
Publication year: 2020
ISBN: 978-1-952148-31-6
Handle: https://hdl.handle.net/20.500.14288/9830

Abstract: In this paper, we describe our approach of utilizing pre-trained BERT models with Convolutional Neural Networks for Sub-task A of the Multilingual Offensive Language Identification shared task (OffensEval 2020), which is part of SemEval-2020. We show that combining CNN with BERT performs better than using BERT on its own, and we emphasize the importance of utilizing pre-trained language models for downstream tasks. Our system ranked 4th with a macro-averaged F1-score of 0.897 in Arabic, 4th with a score of 0.843 in Greek, and 3rd with a score of 0.814 in Turkish. Additionally, we present ArabicBERT, a set of pre-trained transformer language models for Arabic that we share with the community.

Keywords: Cyberbullying; Hate speech; Social networks
Title: KUISAIL at SemEval-2020 Task 12: BERT-CNN for offensive speech identification in social media
Item type: Conference proceeding
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85118416740&partnerID=40&md5=0ea75bf0a2e6450e0159116791ac4892
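
Note: The abstract describes combining a pre-trained BERT encoder with a CNN for offensive speech classification. The following is a minimal sketch of that kind of architecture using PyTorch and Hugging Face Transformers; the kernel sizes, filter counts, dropout rate, and the "asafaya/bert-base-arabic" checkpoint (the publicly released ArabicBERT base model) are illustrative assumptions, not the exact configuration reported in the paper.

# Minimal sketch of a BERT + CNN text classifier, assuming a PyTorch /
# Hugging Face Transformers setup. Hyperparameters are illustrative only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNNClassifier(nn.Module):
    def __init__(self, model_name="asafaya/bert-base-arabic", num_labels=2,
                 kernel_sizes=(2, 3, 4), num_filters=100):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # One 1D convolution per kernel size, applied over the token dimension.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes
        )
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) token representations from BERT.
        hidden_states = self.bert(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        x = hidden_states.transpose(1, 2)  # (batch, hidden, seq_len) for Conv1d
        # Convolve, apply ReLU, then max-pool over the sequence dimension.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.classifier(features)  # (batch, num_labels) logits

# Usage example: tokenize a short Arabic text and obtain classification logits.
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic")
model = BertCNNClassifier()
batch = tokenizer(["مثال نص للتجربة فقط"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])

In this sketch the CNN acts as a feature extractor over BERT's token-level outputs rather than relying on the [CLS] vector alone, which reflects the abstract's claim that combining CNN with BERT outperforms BERT on its own.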