Spanish word embeddings computed with different methods and from different corpora

AlphaMoury/spanish-word-embeddings

 
 


Spanish Word Embeddings

Below you will find links to Spanish word embeddings computed with different methods and from different corpora. Whenever possible, each entry includes a description of the parameters used to compute the embeddings, simple statistics of the vectors and vocabulary, and a description of the corpus from which the embeddings were computed. Direct links to the embeddings are provided; please refer to the original sources for proper citation (see also References). An example of the use of some of these embeddings can be found here or in this tutorial (both in Spanish).
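As a quick illustration of how word embeddings like these are typically used, here is a minimal cosine-similarity sketch in pure Python. The toy 3-dimensional vectors are made up for the example; the real embeddings listed below have 300 dimensions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(word, vectors):
    # Rank every other word in the vocabulary by cosine similarity
    # and return the closest one together with its score.
    target = vectors[word]
    others = ((w, cosine(target, v)) for w, v in vectors.items() if w != word)
    return max(others, key=lambda pair: pair[1])

# Toy vectors, illustrative only.
toy = {
    "rey":   [0.90, 0.10, 0.20],
    "reina": [0.85, 0.15, 0.25],
    "perro": [0.10, 0.90, 0.30],
}
```

With real 300-dimensional vectors the same routine surfaces semantic neighbors (e.g. the nearest vector to "rey" tends to be a related word such as "reina").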

Summary (and links) for the embeddings on this page:

  #  Corpus                       Size  Algorithm  #vectors   vec-dim  Credits
  1  Spanish Unannotated Corpora  2.6B  FastText   1,313,423  300      José Cañete
  2  Spanish Billion Word Corpus  1.4B  FastText   855,380    300      Jorge Pérez
  3  Spanish Billion Word Corpus  1.4B  GloVe      855,380    300      Jorge Pérez
  4  Spanish Billion Word Corpus  1.4B  Word2Vec   1,000,653  300      Cristian Cardellino
  5  Spanish Wikipedia            ???   FastText   985,667    300      FastText team
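Embeddings such as these are commonly distributed in the plain-text word2vec format: a header line with the vocabulary size and vector dimension, followed by one word and its components per line. A minimal parser sketch, assuming that format (check each download's actual format before relying on this):

```python
def parse_vec(text):
    # Parse word2vec-style text: an "N dim" header line,
    # then "word x1 x2 ... xdim" on each following line.
    lines = text.strip().splitlines()
    n_words, dim = map(int, lines[0].split())
    vectors = {}
    for line in lines[1 : n_words + 1]:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = [float(x) for x in parts[1 : dim + 1]]
    return vectors

# Tiny inline example with 2 words of dimension 3.
sample = "2 3\nhola 0.1 0.2 0.3\nmundo 0.4 0.5 0.6"
vecs = parse_vec(sample)
```

Libraries such as gensim can load this format directly, but the structure itself is simple enough to read by hand as above.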

FastText embeddings from SUC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=1,313,423):

More vectors with different dimensions (10, 30, 100, and 300) can be found here.

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters:
    • min subword-ngram = 3
    • max subword-ngram = 6
    • minCount = 5
    • epochs = 20
    • dim = 300
    • all other parameters set as default
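To give a sense of what the subword n-gram range (3 to 6) means, here is a small sketch of FastText-style character n-gram extraction, using the `<` and `>` word-boundary markers that FastText adds to each word:

```python
def subword_ngrams(word, min_n=3, max_n=6):
    # FastText wraps each word in boundary markers before extracting
    # character n-grams of every length from min_n to max_n.
    marked = "<" + word + ">"
    grams = []
    for n in range(min_n, max_n + 1):
        for i in range(len(marked) - n + 1):
            grams.append(marked[i : i + n])
    return grams
```

For example, `subword_ngrams("sol")` yields `<so`, `sol`, `ol>`, `<sol`, `sol>`, and `<sol>`. This is why FastText can produce vectors for out-of-vocabulary words: a word's vector is composed from the vectors of its character n-grams.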

Corpus

FastText embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855,380):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters:
    • min subword-ngram = 3
    • max subword-ngram = 6
    • minCount = 5
    • epochs = 20
    • dim = 300
    • all other parameters set as default

Corpus

  • Spanish Billion Word Corpus
  • Corpus Size: 1.4 billion words
  • Post processing: In addition to the processing of the raw corpus described on the SBWCE page (deletion of punctuation, numbers, etc.), the following steps were applied:
    • Words were converted to lower case
    • Every sequence of the 'DIGITO' keyword was replaced by a single '0'
    • All words consisting of more than 3 characters plus a trailing '0' were omitted (example: 'padre0')
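The post-processing steps above can be sketched as a simple token filter. This is an illustrative simplification (the exact rules applied to SBWC may differ in detail):

```python
def preprocess(tokens):
    out = []
    for tok in (t.lower() for t in tokens):
        if tok == "digito":  # 'DIGITO' keyword, already lowercased
            tok = "0"
            if out and out[-1] == "0":
                continue  # collapse a sequence of 'DIGITO' into a single '0'
        if len(tok) > 4 and tok.endswith("0"):
            continue  # drop words of more than 3 characters plus a trailing '0'
        out.append(tok)
    return out
```

So an input like `["El", "DIGITO", "DIGITO", "padre0", "perros"]` would come out as `["el", "0", "perros"]`.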

GloVe embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855,380):

Algorithm

  • Implementation: GloVe
  • Parameters:
    • vector-size = 300
    • iter = 25
    • min-count = 5
    • all other parameters set as default

Corpus

Word2Vec embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=1,000,653):

Algorithm

Corpus

FastText embeddings from Spanish Wikipedia

Embeddings

Links to the embeddings (#dimensions=300, #vectors=985,667):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters: FastText default parameters

Corpus

References
