Publication: Parser evaluation using textual entailments
Co-Authors
Rimell, Laura
Embargo Status
NO
Abstract
Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser evaluation scheme that is formalism independent, less prone to annotation error, and focused on semantically relevant distinctions. This paper describes the PETE task, gives an error analysis of the top-performing Cambridge system, and introduces a standard entailment module that can be used with any parser that outputs Stanford typed dependencies.
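The paper's entailment module is not detailed in this record. Purely as an illustration of the idea described in the abstract (deciding entailment from syntactic structure alone, over Stanford typed dependencies), here is a minimal sketch of a simplified dependency-overlap heuristic. The triple format and the function names are assumptions for this sketch, not the paper's actual module.

```python
# Minimal sketch (not the paper's actual entailment module):
# decide entailment by checking whether the hypothesis's substantive
# dependencies also appear in the text's parse. Dependencies are assumed
# to be (relation, head, dependent) triples in Stanford typed-dependency style.

def entails(text_deps, hyp_deps, ignore_relations=frozenset({"det", "punct"})):
    """Return True if every substantive hypothesis dependency is matched
    by a text dependency with the same head and dependent."""
    text_pairs = {(head.lower(), dep.lower()) for _, head, dep in text_deps}
    for rel, head, dep in hyp_deps:
        if rel in ignore_relations:
            continue  # skip determiners, punctuation, etc.
        if (head.lower(), dep.lower()) not in text_pairs:
            return False
    return True

# Text: "The butler, who was at the party, poisoned the host."
text_deps = [
    ("nsubj", "poisoned", "butler"),
    ("dobj", "poisoned", "host"),
    ("det", "butler", "The"),
]
# Hypothesis: "The butler poisoned the host."
hyp_deps = [
    ("nsubj", "poisoned", "butler"),
    ("dobj", "poisoned", "host"),
    ("det", "butler", "The"),
]
print(entails(text_deps, hyp_deps))  # True
```

The sketch only checks head-dependent pairs; a real entailment module over parser output would also need to handle relation labels, passives, control constructions, and lexical normalization.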
Publisher
Springer
Subject
Computer science, Interdisciplinary applications
Source
Language Resources and Evaluation
DOI
10.1007/s10579-012-9200-5