Researcher: Can, Ozan Arkan
Name Variants: Can, Ozan Arkan
Publications (6 results)
1. Team Howard Beale at SemEval-2019 task 4: hyperpartisan news detection with BERT (Association for Computational Linguistics (ACL), 2019) [Metadata only]
Authors: Dayanık, Erenay; Mutlu, Osman; Can, Ozan Arkan
This paper describes our system for SemEval-2019 Task 4: Hyperpartisan News Detection (Kiesel et al., 2019). We use the pretrained BERT (Devlin et al., 2018) architecture and investigate the effect of different fine-tuning regimes on the final classification task. We show that additional pretraining on the news domain improves performance on the Hyperpartisan News Detection task. Our system ranked 8th out of 42 teams with 78.3% accuracy on the held-out test dataset. (A hedged fine-tuning sketch follows the listing.)

2. Learning to follow verbal instructions with visual grounding (Institute of Electrical and Electronics Engineers (IEEE), 2019) [Metadata only]
Authors: Ünal, Emre; Can, Ozan Arkan; Yemez, Yücel
We present a visually grounded deep learning model towards a virtual robot that can follow navigational instructions. Our model is capable of processing raw visual input and natural text instructions. The aim is to develop a model that can learn to follow novel instructions from instruction-perception examples. The proposed model is trained on data collected in a synthetic environment, and its architecture allows it to work with real visual data as well. We show that our results are on par with previously proposed methods. (A navigation model sketch follows the listing.)

3. Modulating bottom-up and top-down visual processing via language-conditional filters (IEEE, 2022) [Metadata only]
Authors: Kesen, İlker; Can, Ozan Arkan; Erdem, Aykut; Erdem, Erkut; Yüret, Deniz
How to best integrate linguistic and perceptual processing in multi-modal tasks that involve language and vision is an important open problem. In this work, we argue that the common practice of using language in a top-down manner, to direct visual attention over high-level visual features, may not be optimal. We hypothesize that using language to also condition the bottom-up processing from pixels to high-level features can benefit overall performance. To support our claim, we propose a U-Net-based model and perform experiments on two language-vision dense-prediction tasks: referring expression segmentation and language-guided image colorization. We compare results where either one or both of the top-down and bottom-up visual branches are conditioned on language. Our experiments reveal that using language to control the filters for bottom-up visual processing, in addition to top-down attention, leads to better results on both tasks and achieves competitive performance. Our linguistic analysis suggests that bottom-up conditioning especially improves the segmentation of objects when the input text refers to low-level visual concepts. Code is available at https://github.com/ilkerkesen/bvpr. (A filter-generation sketch follows the listing.)
4. CharNER: character-level named entity recognition (Association for Computational Linguistics (ACL), 2016) [Metadata only]
Authors: Kuru, Onur; Can, Ozan Arkan; Yüret, Deniz
We describe and evaluate a character-level tagger for language-independent Named Entity Recognition (NER). Instead of words, a sentence is represented as a sequence of characters. The model consists of stacked bidirectional LSTMs that take characters as input and output tag probabilities for each character. These probabilities are then converted to consistent word-level named entity tags by a Viterbi decoder. We are able to achieve close to state-of-the-art NER performance in seven languages with the same basic model, using only labeled NER data and no hand-engineered features or other external resources such as syntactic taggers or gazetteers. (A tagger-plus-Viterbi sketch follows the listing.)

5. Team Howard Beale at SemEval-2019 task 4: hyperpartisan news detection with BERT (Association for Computational Linguistics (ACL), 2019) [Open Access]
Authors: Dayanık, Erenay; Mutlu, Osman; Can, Ozan Arkan
Open-access record of the paper in entry 1; the abstract is identical.

6. Visually grounded language learning for robot navigation (Association for Computing Machinery (ACM), 2019) [Open Access]
Authors: Yemez, Yücel; Ünal, Emre; Can, Ozan Arkan
We present an end-to-end deep learning model for robot navigation from raw visual pixel input and natural text instructions. The proposed model is an LSTM-based sequence-to-sequence neural network architecture with attention, trained on instruction-perception data samples collected in a synthetic environment. We conduct experiments on the SAIL dataset, which we reconstruct in 3D so as to generate the 2D images associated with the data. Our experiments show that the performance of our model is on par with the state of the art, despite the fact that it learns navigational language with end-to-end training from raw visual data.
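Entries 1 and 5 describe fine-tuning a pretrained BERT classifier for binary hyperpartisan detection. Below is a minimal sketch of such a fine-tuning loop using the Hugging Face transformers API; the "bert-base-uncased" checkpoint, the hyperparameters, and the toy batch are illustrative assumptions, not the authors' released setup.

```python
# Minimal sketch (not the authors' code): fine-tuning a pretrained BERT
# classifier for hyperpartisan (1) vs. mainstream (0) news, in the spirit
# of the SemEval-2019 Task 4 entry above.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy batch: article texts with binary labels (illustrative data).
texts = ["An outrageous attack on everything we stand for!",
         "The committee met on Tuesday to review the proposal."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=512, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps on the toy batch
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # cross-entropy loss is built in
    out.loss.backward()
    optimizer.step()
```

Per the abstract, the paper's best regime also continues pretraining the encoder on news text before this supervised step; the loop itself would be unchanged.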
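Entry 3 argues for conditioning the bottom-up pathway on language. One common way to realize "language-conditional filters" is a hypernetwork that predicts convolution kernels from a sentence embedding, sketched below; the shapes and single-layer design are assumptions for illustration, and the authors' actual U-Net-based model is at https://github.com/ilkerkesen/bvpr.

```python
# Sketch of the core idea: conv filters for the bottom-up (pixels-to-
# features) pathway are generated from the language representation,
# rather than language only steering top-down attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageConditionalConv(nn.Module):
    """Predicts a conv kernel from a sentence embedding, then applies it."""
    def __init__(self, text_dim, in_ch, out_ch, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        # Linear hypernetwork: text embedding -> flattened conv weights.
        self.to_weight = nn.Linear(text_dim, out_ch * in_ch * k * k)

    def forward(self, image_feats, text_emb):
        # image_feats: (1, in_ch, H, W); text_emb: (text_dim,)
        w = self.to_weight(text_emb).view(
            self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(image_feats, w, padding=self.k // 2)

layer = LanguageConditionalConv(text_dim=256, in_ch=64, out_ch=64)
feats = torch.randn(1, 64, 32, 32)   # bottom-up visual features (toy)
sentence = torch.randn(256)          # e.g. an LSTM sentence encoding (toy)
out = layer(feats, sentence)         # (1, 64, 32, 32), language-modulated
```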
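Entry 4's architecture is a stacked bidirectional LSTM over characters followed by Viterbi decoding. The sketch below implements that pipeline end to end on a toy example; the tag set, dimensions, and flat transition scores are assumptions (the paper constrains transitions so tags stay consistent within each word).

```python
# Sketch of the CharNER pipeline: stacked BiLSTMs emit per-character tag
# distributions; a Viterbi pass then picks the best tag sequence.
import torch
import torch.nn as nn

TAGS = ["O", "PER", "LOC", "ORG"]  # assumed toy tag set

class CharTagger(nn.Module):
    def __init__(self, n_chars=128, emb=32, hidden=64, layers=2,
                 n_tags=len(TAGS)):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_ids):             # (1, T) character ids
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h).log_softmax(-1)   # (1, T, n_tags) log-probs

def viterbi(emissions, transitions):
    # emissions: (T, n_tags) log-probs; transitions[i, j]: score of i -> j
    T, n = emissions.shape
    score, back = emissions[0].clone(), []
    for t in range(1, T):
        total = score.unsqueeze(1) + transitions + emissions[t]
        score, idx = total.max(0)            # best previous tag per tag
        back.append(idx)
    best = [int(score.argmax())]
    for idx in reversed(back):               # trace back pointers
        best.append(int(idx[best[-1]]))
    return best[::-1]

sentence = "Ankara is lovely"
ids = torch.tensor([[ord(c) % 128 for c in sentence]])
emis = CharTagger()(ids)[0]                  # (T, n_tags)
trans = torch.zeros(len(TAGS), len(TAGS))    # flat transition scores (toy)
print(viterbi(emis, trans))                  # one tag index per character
```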
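Entries 2 and 6 both describe LSTM-based sequence-to-sequence models with attention that map an instruction plus visual input to navigation actions. The sketch below shows that general shape: encode the instruction, then decode one action per timestep while attending over the encoding and consuming a visual feature per step. The sizes, the dot-product attention, and the concatenation-based fusion are assumptions, not the papers' exact design.

```python
# Sketch of an attention-based instruction-to-action model: an LSTM
# encodes the instruction; a decoder cell emits one action per step,
# conditioned on an attention-weighted instruction context and a
# per-step visual feature vector.
import torch
import torch.nn as nn

class InstructionToActions(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=128, n_actions=4,
                 vis_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder_cell = nn.LSTMCell(vis_dim + hidden, hidden)
        self.act = nn.Linear(hidden, n_actions)

    def forward(self, instr_ids, visual_feats):
        # instr_ids: (1, L) word ids; visual_feats: (1, T, vis_dim)
        enc, (h, c) = self.encoder(self.embed(instr_ids))  # enc: (1, L, H)
        h, c = h[0], c[0]                                  # (1, H)
        logits = []
        for t in range(visual_feats.size(1)):
            # Dot-product attention over instruction encodings.
            attn = torch.softmax(
                (enc @ h.unsqueeze(-1)).squeeze(-1), dim=1)     # (1, L)
            context = (attn.unsqueeze(-1) * enc).sum(1)         # (1, H)
            h, c = self.decoder_cell(
                torch.cat([visual_feats[:, t], context], -1), (h, c))
            logits.append(self.act(h))
        return torch.stack(logits, 1)  # (1, T, n_actions)

model = InstructionToActions()
instr = torch.randint(0, 1000, (1, 7))  # toy tokenized instruction
frames = torch.randn(1, 5, 128)         # toy visual features for 5 steps
print(model(instr, frames).shape)       # torch.Size([1, 5, 4])
```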