Gaussian_LDA

Implementation of the paper "Gaussian LDA for Topic Models with Word Embeddings" (Das, Zaheer, and Dyer, ACL-IJCNLP 2015).

Data Format

  • Embedding file: Each line should contain the embedding (dense vector representation) of one word, with the dimensions separated by spaces; every vector must have the same dimensionality. The experiments in the paper were carried out with word2vec embeddings trained on an English Wikipedia dataset, but the model is agnostic to the choice of embedding procedure, so you can use any embeddings (for example, trained GloVe vectors).
  • Corpus train/test files: Each line is a document, with every word mapped to an integer ID. The integer ID of a word must equal the position (line number) of that word's embedding in the embedding file, using 0-based indexing (a minimal preparation sketch in Python follows this list).
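
For concreteness, the following is a minimal sketch (in Python, not part of this repository) of how the two input files could be produced; the toy embeddings, the documents, and the file names embeddings.txt and corpus.train are assumptions made purely for illustration.

# Minimal sketch of the expected input format. The toy embeddings, documents,
# and file names are illustrative assumptions, not fixed by this repository.

# Suppose the vocabulary and its embeddings are already available as a word -> vector map.
embeddings = {
    "topic": [0.12, -0.45, 0.33],
    "model": [0.08, 0.91, -0.27],
    "word":  [-0.51, 0.14, 0.66],
}

vocab = sorted(embeddings)                       # any fixed ordering works
word2id = {w: i for i, w in enumerate(vocab)}    # 0-indexed: ID == line number in the embedding file

# Embedding file: line i holds the space-separated vector of the word with ID i.
with open("embeddings.txt", "w") as f:
    for w in vocab:
        f.write(" ".join(str(x) for x in embeddings[w]) + "\n")

# Corpus file: one document per line, every word replaced by its integer ID.
docs = [["topic", "model", "word"], ["word", "word", "topic"]]
with open("corpus.train", "w") as f:
    for doc in docs:
        f.write(" ".join(str(word2id[w]) for w in doc) + "\n")

With this mapping, the first line of embeddings.txt holds the vector for ID 0 ("model" after sorting), the second line the vector for ID 1 ("topic"), and so on, which is exactly the alignment the corpus files rely on.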

Running the script

Check out run_gaussian_lda.sh; it should be self-explanatory.

Contact: Rajarshi Das ([email protected])

Citation

@InProceedings{das-zaheer-dyer:2015,
  author    = {Das, Rajarshi  and  Zaheer, Manzil  and  Dyer, Chris},
  title     = {Gaussian LDA for Topic Models with Word Embeddings},
  booktitle = {Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  year      = {2015},
  publisher = {Association for Computational Linguistics},
  url       = {http://www.aclweb.org/anthology/P15-1077}
}

Multilingual G-LDA

Check out the simple extensions of G-LDA to multiple languages and the detailed project report by Elias Kamyab.
