This is a fork; you can find the original repository for my *SEM 2018 @ NAACL 2018 paper under the UKP Lab account. Please refer there for issues and discussion.


Mixing Context Granularities for Improved Entity Linking on Question Answering Data across Entity Categories

Entity linking with the Wikidata knowledge base

This is the accompanying repository for our *SEM 2018 paper (pre-print). It contains the code to replicate the experiments and train the models described in the paper.

This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

Please use the following citation:

@inproceedings{TUD-CS-2018-01,
    title = {Mixing Context Granularities for Improved Entity Linking on Question Answering Data across Entity Categories},
    author = {Sorokin, Daniil and Gurevych, Iryna},
    publisher = {Association for Computational Linguistics},
    booktitle = {Proceedings of the 7th Joint Conference on Lexical and Computational Semantics (*SEM 2018)},
    pages = {to appear},
    month = jun,
    year = {2018},
    location = {New Orleans, LA, U.S.}
}

Paper abstract:

The first stage of every knowledge base question answering approach is to link entities in the input question. We investigate entity linking in the context of a question answering task and present a jointly optimized neural architecture for entity mention detection and entity disambiguation that models the surrounding context on different levels of granularity.

We use the Wikidata knowledge base and available question answering datasets to create benchmarks for entity linking on question answering data. Our approach outperforms the previous state-of-the-art system on this data, resulting in an average 8% improvement of the final score. We further demonstrate that our model delivers a strong performance across different entity categories.

Please refer to the paper for the model description and training details.

Contacts:

If you have any questions regarding the code, please don't hesitate to contact the authors or report an issue.

Project structure:

File                        Description
configs/                    Configuration files for the experiments
entitylinking/core          Mention extraction and candidate retrieval
entitylinking/datasets      Datasets IO
entitylinking/evaluation    Evaluation measures and scripts
entitylinking/mlearning     Model definition and training scripts
entitylinking/wikidata      Retrieving information from Wikidata
resources/                  Necessary resources
trainedmodels/              Trained models

Requirements:

Running the experiments from the paper:

See run_experiments.sh
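
Assuming the requirements above are installed and the resources are in place, the script can presumably be started from the repository root with a plain shell invocation (the exact environment setup may differ on your machine):

sh run_experiments.sh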

Using the pre-trained model:

Follow these steps to use this project as an external entity-linking tool.

  1. Clone/Download the project
  2. Take the pre-trained model FeatureModel_Baseline and extract it into a trainedmodels/ folder in the main directory of the project
  3. Download the GloVe embeddings, glove.6B.zip, and put them into the folder resources/glove/ in the main directory of the project
  4. Modify the path to the word embeddings in the configuration file for the model: trainedmodels/FeatureModel_Baseline.param
  5. Make sure that the project folder is in your Python path
  6. Use the following code to initialize an entity linker and apply it to new data:
from entitylinking import core

# Initialize the entity linker with the pre-trained model weights
entitylinker = core.MLLinker(path_to_model="trainedmodels/FeatureModel_Baseline.torchweights")
# Link entities in a raw-text input and inspect the detected entities
output = entitylinker.link_entities_in_raw_input("Barack Obama is a president.")
print(output.entities)
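
To link entities in several inputs, the same linker object can presumably be reused. The following is a minimal sketch that assumes only the MLLinker API shown above; the exact structure of output.entities is defined by the project code:

from entitylinking import core

# Hypothetical batch usage, assuming only the API shown above:
# initialize the linker once and apply it to a list of questions.
entitylinker = core.MLLinker(path_to_model="trainedmodels/FeatureModel_Baseline.torchweights")

questions = [
    "Barack Obama is a president.",
    "Who is the author of Les Miserables?",
]

for question in questions:
    output = entitylinker.link_entities_in_raw_input(question)
    # The structure of output.entities is defined by the project code;
    # we simply print it here for inspection.
    print(question, "->", output.entities)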

For the VCG model you also need KB embeddings produced with Fast-TransX. We will make a pre-trained version of these embeddings available upon publication.

License:

  • Apache License Version 2.0
