Evaluation of Named Entity Recognition Systems

less than 1 minute read

Monica Marrero et al. (2009): Evaluation of Named Entity Extraction Systems. Research in Computing Science, Vol. 41, pp. 47-58

This paper surveys a number of NER tools (including their programming language and license), lists important conferences with NER evaluation tasks, and discusses the limitations of these evaluations. The authors also evaluate the NER tools themselves, on a self-made corpus of only 579 (!) words containing 100 occurrences of various named entities.

Conferences with named entity evaluation tasks

  1. MUC conferences
  2. CoNLL
  3. ACE (according to the authors the most prestigious one)

Evaluation measures and problems

  1. handling of partial identifications (entities whose span is only partially recognized)
  2. penalization of false positives
  3. ACE: a weighting algorithm for scoring results, which makes the scores difficult to interpret and to compare with other efforts
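The partial-identification problem above can be made concrete with a small sketch. This is not the paper's (or any conference's) actual scoring algorithm, just a hypothetical illustration of how precision/recall/F1 shift depending on whether an overlapping span counts as a match:

```python
# Hypothetical sketch: scoring predicted entity spans (start, end) against
# gold spans, to illustrate why partial identifications complicate evaluation.

def overlaps(a, b):
    """True if two (start, end) spans share at least one position."""
    return a[0] < b[1] and b[0] < a[1]

def score(gold, predicted, partial=False):
    """Precision, recall, F1. With partial=True, overlap counts as a hit."""
    match = overlaps if partial else (lambda a, b: a == b)
    # predictions that hit some gold span (for precision) ...
    tp_pred = sum(1 for p in predicted if any(match(p, g) for g in gold))
    # ... and gold spans covered by some prediction (for recall)
    tp_gold = sum(1 for g in gold if any(match(p, g) for p in predicted))
    precision = tp_pred / len(predicted) if predicted else 0.0
    recall = tp_gold / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [(0, 2), (5, 7)]       # two gold entities
predicted = [(0, 2), (5, 6)]  # second prediction only partially matches

print(score(gold, predicted))                # strict: (0.5, 0.5, 0.5)
print(score(gold, predicted, partial=True))  # lenient: (1.0, 1.0, 1.0)
```

The same output jumps from 0.5 to 1.0 F1 purely by changing the matching policy, which is exactly why scores from different evaluations are hard to compare.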