This is the common repo for deep learning models implemented in PyTorch by the Data Systems Group at the University of Waterloo.
Moved to https://github.com/castorini/hedwig
Models for paraphrase detection, question answering, and related tasks:
- SM-CNN: Siamese CNN for ranking texts (Severyn and Moschitti, SIGIR 2015)
- MP-CNN: Multi-Perspective CNN (He et al., EMNLP 2015)
- NCE: Noise-Contrastive Estimation for answer selection, applied to SM-CNN and MP-CNN (Rao et al., CIKM 2016)
- VDPWI: Very-Deep Pairwise Word Interaction NNs for modeling textual similarity (He and Lin, NAACL 2016)
- IDF Baseline: IDF overlap between question and candidate answers (see the sketch after this list)
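As a rough illustration of what the IDF baseline computes (a sketch only, not the actual code in this repo; all names and data below are made up):

import math
from collections import Counter

def idf_weights(tokenized_docs):
    # IDF of each term over a collection of tokenized documents.
    n_docs = len(tokenized_docs)
    doc_freq = Counter(term for doc in tokenized_docs for term in set(doc))
    return {term: math.log(n_docs / df) for term, df in doc_freq.items()}

def idf_overlap(question_tokens, answer_tokens, idf):
    # Score a candidate answer by the total IDF of the terms it shares with the question.
    return sum(idf.get(term, 0.0) for term in set(question_tokens) & set(answer_tokens))

# Toy usage: rank two candidate answers for one question.
candidates = [["ottawa", "is", "the", "capital", "of", "canada"],
              ["toronto", "is", "the", "largest", "city", "in", "canada"]]
idf = idf_weights(candidates)
question = ["what", "is", "the", "capital", "of", "canada"]
ranked = sorted(candidates, key=lambda a: idf_overlap(question, a, idf), reverse=True)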
Each model directory has a `README.md` with further details.
If you are an internal Castor contributor using GPU machines in the lab, follow the instructions here.
Castor is designed for Python 3.6 and PyTorch 0.4. PyTorch recommends Anaconda for managing your environment. We'd recommend creating a custom environment as follows:
$ conda create --name castor python=3.6
$ source activate castor
Then install PyTorch and torchvision:
$ conda install pytorch torchvision -c pytorch
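To sanity-check the install (and, on a GPU machine, confirm that CUDA is visible), you can run a quick check in the Python interpreter:

>>> import torch
>>> torch.__version__          # expect a 0.4.x build for this repo
>>> torch.cuda.is_available()  # True if a GPU is usable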
Other Python packages we use can be installed via pip:
$ pip install -r requirements.txt
The code depends on NLTK data (e.g., stopwords), so you'll need to download it. Run the Python interpreter and type the following commands:
>>> import nltk
>>> nltk.download()
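If you'd rather skip the interactive downloader, you can fetch resources directly; stopwords is the one mentioned above, and individual models may need others:

>>> import nltk
>>> nltk.download('stopwords')  # repeat for any other corpus a script asks for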
Finally, run the following inside the `utils` directory to build the `trec_eval` tool for evaluating certain datasets.
$ ./get_trec_eval.sh
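Once built, trec_eval takes a qrels file and a run file; the file names below are placeholders, and the exact location of the built binary depends on what get_trec_eval.sh produces:

$ ./trec_eval <qrels-file> <run-file>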
To take full advantage of the code here, clone these other two repos:
- `Castor-data`: embeddings, datasets, etc.
- `Castor-models`: pre-trained models
Organize your directory structure as follows:
.
├── Castor
├── Castor-data
└── Castor-models
For example (using HTTPS):
$ git clone https://github.com/castorini/Castor.git
$ git clone https://git.uwaterloo.ca/jimmylin/Castor-data.git
$ git clone https://git.uwaterloo.ca/jimmylin/Castor-models.git
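The sibling layout matters because code here is expected to find the data and models by relative path. A minimal sketch of how a script might resolve them, assuming it is run from inside the Castor checkout (the paths are illustrative, not taken from the repo):

import os

# Castor-data and Castor-models are expected to sit one level above the Castor checkout.
data_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir, 'Castor-data'))
models_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir, 'Castor-models'))
assert os.path.isdir(data_dir), 'expected ../Castor-data to sit next to the Castor repo'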
After cloning the `Castor-data` repo, you need to unzip the embeddings and run the data pre-processing scripts. You can either follow the instructions under each dataset and embedding directory separately, or just run the following script in `Castor-data` to do all of the steps for you:
$ ./setup.sh