Researcher: Sert, Ulaş
Name Variants: Sert, Ulaş
Search Results
Publication (Open Access): KU_ai at MEDIQA 2019: domain-specific pre-training and transfer learning for medical NLI (Association for Computational Linguistics (ACL), 2019)
Authors: Cengiz, Cemil (Master Student, Graduate School of Sciences and Engineering); Sert, Ulaş (Master Student, Graduate School of Sciences and Engineering); Yüret, Deniz (Faculty Member, Department of Computer Engineering, College of Engineering)

Abstract: In this paper, we describe our system and results submitted for the Natural Language Inference (NLI) track of the MEDIQA 2019 Shared Task (Ben Abacha et al., 2019). As the KU_ai team, we used BERT (Devlin et al., 2018) as our baseline model and pre-processed the MedNLI dataset to mitigate the negative impact of de-identification artifacts. Moreover, we investigated different pre-training and transfer learning approaches to improve performance. We show that pre-training the language model on rich biomedical corpora has a significant effect in teaching the model domain-specific language. In addition, training the model on large NLI datasets such as MultiNLI and SNLI helps in learning task-specific reasoning. Finally, we ensembled our highest-performing models and achieved 84.7% accuracy on the unseen test dataset, ranking 10th out of 17 teams in the official results.
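The abstract outlines a concrete recipe: clean de-identification artifacts out of MedNLI, start from a biomedical BERT, fine-tune on general NLI data, then on the medical task. The snippet below is a minimal sketch of that recipe, not the authors' exact pipeline: the regex rule, the checkpoint name (dmis-lab/biobert-base-cased-v1.1), the toy examples, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: strip MIMIC-style de-identification placeholders such as
# [**First Name**], then fine-tune a biomedical BERT for 3-way NLI.
# Checkpoint, regex rule, and hyperparameters are assumptions, not the
# paper's configuration.
import re
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

DEID = re.compile(r"\[\*\*[^\]]*\*\*\]")  # assumed [** ... **] artifact format

def clean(text: str) -> str:
    # Replace each placeholder with a neutral word so bracket/asterisk
    # artifacts do not leak into the model input.
    return DEID.sub("someone", text)

# Toy premise/hypothesis pairs; labels: 0=entailment, 1=neutral, 2=contradiction.
pairs = [
    ("[**Name**] was admitted with chest pain.", "The patient has chest pain.", 0),
    ("[**Name**] was admitted with chest pain.", "The patient is afebrile.", 1),
    ("[**Name**] was admitted with chest pain.", "The patient has no pain.", 2),
]

model_name = "dmis-lab/biobert-base-cased-v1.1"  # assumed biomedical checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

enc = tok([clean(p) for p, h, y in pairs],
          [clean(h) for p, h, y in pairs],
          padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([y for p, h, y in pairs])

# One gradient step stands in for the full schedule; the two-stage transfer
# described in the abstract would run this same loop first over MultiNLI/SNLI
# batches, then over MedNLI batches.
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**enc, labels=labels).loss
loss.backward()
opt.step()
print(f"loss: {loss.item():.3f}")
```

The ensembling step mentioned in the abstract is not shown; a common realization is to average the class probabilities of several fine-tuned checkpoints at prediction time, though the paper's exact method may differ.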