Evaluation of Named Entity Recognition Systems
Monica Marrero et al. (2009): Evaluation of Named Entity Extraction Systems, Research in Computing Science, Vol. 41, pp. 47-58
This paper presents a number of NER tools (including their programming language and license), lists the major conferences with NER evaluation tasks, and discusses the limitations of these evaluations. The authors also evaluate several NER tools on a self-made corpus of only 579 (!) words containing 100 occurrences of various named entities.
Conferences with named entity evaluation tasks
- MUC conferences
- CoNLL
- ACE (according to the authors, the most prestigious one)
Evaluation measures and problems
- handling of partial identifications (see the sketch after this list)
- penalization of false positives
- ACE: uses a weighting algorithm to score results, which makes the scores difficult to interpret and compare with other evaluation efforts
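The interplay of these problems is easier to see in a small scoring sketch. The Python snippet below is not from the paper; the `Span` layout and the `partial_credit` weight are assumptions for illustration. It shows how giving partial identifications fractional credit, while still counting unmatched predictions in the denominator (the false-positive penalty), shapes precision and recall.

```python
from typing import List, Tuple

# A predicted or gold entity as (start_offset, end_offset, type) -- a hypothetical representation.
Span = Tuple[int, int, str]

def overlaps(a: Span, b: Span) -> bool:
    """True if two spans of the same entity type share at least one character."""
    return a[2] == b[2] and a[0] < b[1] and b[0] < a[1]

def score(gold: List[Span], predicted: List[Span], partial_credit: float = 0.5):
    """Precision/recall/F1 with configurable credit for partial identifications.

    Exact matches count 1.0, overlapping spans of the correct type count
    `partial_credit`, and predictions with no overlap are false positives
    that only inflate the denominator.
    """
    credit = 0.0
    for p in predicted:
        if p in gold:
            credit += 1.0                      # exact boundary and type match
        elif any(overlaps(p, g) for g in gold):
            credit += partial_credit           # partial identification
        # else: false positive, contributes nothing to the numerator
    precision = credit / len(predicted) if predicted else 0.0
    recall = credit / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: one exact match, one partial match, one false positive.
gold = [(0, 5, "PER"), (10, 20, "ORG")]
pred = [(0, 5, "PER"), (12, 20, "ORG"), (25, 30, "LOC")]
print(score(gold, pred))  # (0.5, 0.75, 0.6)
```

Changing `partial_credit` between 0 (strict, MUC/CoNLL-style exact match) and 1 (lenient overlap) moves the scores considerably, which is one reason results from different evaluations are hard to compare directly.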