This repository contains additional data files to be used with spaCy v2.2+. When it's installed in the same environment as spaCy, this package makes the resources for each language available as an entry point, which spaCy checks when setting up the Vocab and Lookups.
Feel free to submit pull requests to update the data. For issues related to the data, lookups and integration, please use the spaCy issue tracker.
The main purpose of this package is to make the default spaCy installation
smaller and not force every user to download large data files for all
languages by default. Lookups data is now provided either by the pretrained models (which serialize out their vocabulary and lookup tables) or by explicitly installing this package or spacy[lookups].
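For example, either of the following should pull in the data (the PyPI package name spacy-lookups-data is assumed from this repo's module name; the lookups extra is mentioned above):

```shell
# Install the data package directly...
pip install spacy-lookups-data
# ...or pull it in via spaCy's lookups extra
pip install "spacy[lookups]"
```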
You should install this package if you want to use lemmatization for languages
that don't yet have a pretrained model available for
download and don't rely on third-party libraries for lemmatization – for
example, Turkish, Swedish or Croatian
(see data files). You should also install it if
you're creating a blank model and want it to include lemmatization data.
Once you've saved out the model (e.g. via nlp.to_disk), it will include the lookup tables as part of its Vocab.
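Conceptually, the lemmatization tables in this package are plain mappings from surface form to lemma. A minimal illustrative sketch (the entries below are invented examples, not taken from the actual data files):

```python
# Illustrative lookup lemmatization: a table maps each known surface
# form directly to its lemma; unknown forms fall back to themselves.
# These entries are made-up examples, not real table data.
lemma_lookup = {
    "was": "be",
    "cars": "car",
    "better": "good",
}

def lemmatize(token: str) -> str:
    # Look up the lowercased form; fall back to the token itself.
    return lemma_lookup.get(token.lower(), token)

print(lemmatize("Cars"))   # car
print(lemmatize("spaCy"))  # spaCy
```

This is why lookup lemmatization needs no model download: it is a single table lookup per token, which is also why the data files can be large for morphologically rich languages.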
At the moment, lemmatization data is the main content of this package. However, we are considering including other lookup lists and tables as well, e.g. large tokenizer exception files.
This package now also includes all data-specific tests. The test suite depends on spaCy. To run the tests, install the test requirements and execute pytest:
pip install -r requirements.txt
python -m pytest spacy_lookups_data
If you've installed the package in your spaCy environment, you can also run the tests like this:
python -m pytest --pyargs spacy_lookups_data