This repo is meant to be public.
It serves as a demo to show what the Snowplow pipeline looks like.
Before you can send your own event and context types into Snowplow (using the track unstructured events or track self-describing events and custom contexts features of Snowplow), you need to:
- Define a JSON schema for each of the events and context types
- Upload those schemas to your Iglu schema registry
- Define a corresponding jsonpath file, and make sure this is uploaded to your jsonpaths directory in Amazon S3
- Create a corresponding Redshift table definition, and create this table in your Redshift cluster
Once you have completed the above, you can send in data that conforms to the schemas as custom unstructured events or custom contexts.
We recommend setting up the following two tools before starting:
- Git so you can easily clone the repo and make updates to it.
- Igluctl, a command-line tool for validating schemas, auto-generating the associated SQL table definitions and jsonpath files, and publishing them to Snowplow Mini or S3.
In order to start sending a new event or context type into Snowplow, you first need to define a new schema for that event.
- Create a file in the repo for the new schema, e.g. `/schemas/com.codecombat/new_event_or_context_name/jsonschema/1-0-0`
- Create the schema in that file, following the `/schemas/com.example_company/example_event/jsonschema/1-0-0` example
- Save the schema file
Note that if you already have JSON data and you want to create a corresponding schema, you can do so using Schema Guru, via either the web UI or the CLI.
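For illustration, a minimal schema for a hypothetical `level_completed` event might look like the following (the event name and fields are made up for this sketch):

```json
{
  "$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
  "description": "Schema for a hypothetical level_completed event",
  "self": {
    "vendor": "com.codecombat",
    "name": "level_completed",
    "format": "jsonschema",
    "version": "1-0-0"
  },
  "type": "object",
  "properties": {
    "levelName": {
      "type": "string",
      "maxLength": 255
    },
    "score": {
      "type": "integer",
      "minimum": 0,
      "maximum": 1000000
    }
  },
  "required": ["levelName"],
  "additionalProperties": false
}
```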
Once you have your schema, make sure to validate it using igluctl:
$ /path/to/igluctl lint /path/to/schemas/com.codecombat/my_new_event_or_context
For Windows:
> java -jar /path/to/igluctl lint /path/to/schemas/com.codecombat/my_new_event_or_context
Igluctl has two severity levels that it can use when validating schemas. By default it uses level 1, which simply checks that the schemas are valid. We recommend validating schemas against the stricter level 2 (see the snippet after this list), which will fail schemas that:
- Define a string field without a `maxLength` property. This ensures that when e.g. the corresponding Redshift table DDL is generated, the correct column length can be set unambiguously
- Define a numeric field without `minimum` and `maximum` properties. This ensures that when e.g. the corresponding Redshift table DDL is generated, the right numeric column type is set
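For example, string and numeric fields that pass the stricter level carry explicit bounds, along these lines (the field names are illustrative):

```json
{
  "properties": {
    "category": {
      "type": "string",
      "maxLength": 128
    },
    "price": {
      "type": "number",
      "minimum": 0,
      "maximum": 100000
    }
  }
}
```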
To lint the schemas using the higher severity level (2) run:
$ /path/to/igluctl lint /path/to/schemas/com.codecombat/my_new_event_or_context --severityLevel 2
Once you have created or updated your schemas, you need to push them to Iglu so that they're accessible to the Snowplow data pipeline.
This is a two-part process: first push the schemas to the Iglu registry bundled with Snowplow Mini for testing, then push them to your production registry.
To upload schemas you need the Snowplow Mini IP and the Iglu registry master key. Here we refer to them via environment variables (the values below are just examples; you'll have your own unique IP and key):
SNOWPLOW_MINI_IP=com-codecombat.mini.snplow.net
IGLU_REGISTRY_MASTER_KEY=5199e5ba-881d-42dc-880c-032b74cbd5b9
Or for Windows:
SET SNOWPLOW_MINI_IP=com-codecombat.mini.snplow.net
SET IGLU_REGISTRY_MASTER_KEY=5199e5ba-881d-42dc-880c-032b74cbd5b9
Run the following command to publish all schemas to the Iglu server bundled with Snowplow-mini:
$ /path/to/igluctl static push ./schemas $SNOWPLOW_MINI_IP:8081 $IGLU_REGISTRY_MASTER_KEY --public
Note that you can specify individual schemas if you prefer e.g.
$ /path/to/igluctl static push ./schemas/com.mycompany/my_new_event_schema $SNOWPLOW_MINI_IP:8081 $IGLU_REGISTRY_MASTER_KEY --public
Also note that if you're editing existing schemas, the server will need to be rebooted to clear the schema cache. This can be done directly in the EC2 console; alternatively, ping [email protected] to ask a member of the Snowplow team to do so.
Once you've created your schemas, you need to upload them to Iglu. If you're using a static S3-hosted registry, in practice this means copying them into S3.
This can also be done via Igluctl. In the project root, first commit the schema to Git:
git add .
git commit -m "Committed finalized schema"
git push
Then push them to the S3 bucket:
$ /path/to/igluctl static s3cp ./schemas sp-com-codecombat-100499738709-0-iglu-schemas --accessKeyId ABCDEF --secretAccessKey GHIJKILM/12345XYZ --region us-east-1
Note that you can also pass credentials via a configuration file or environment variables, as with any AWS tool.
Useful resources
- Iglu schema repository 0.1.0 release blog post
- Iglu central - centralized registry for all the schemas hosted by the Snowplow team
- Iglu, the repository containing both the Iglu server and client libraries
Alternatively, if you're running an Iglu Server, you need to upload your schemas to it. In practice, this means synchronising them with the Iglu Server.
This must be done via Igluctl. In the project root, first commit the schema to Git:
git add .
git commit -m "Committed finalized schema"
git push
Then push them to the Iglu Server. To upload schemas you need the Iglu Server IP and the Iglu Server master key. Here we refer to them via environment variables (the values below are just examples; you'll have your own unique IP and key):
IGLU_SERVER_IP=com-codecombat.iglu.snplow.net
IGLU_SERVER_MASTER_KEY=5199e5ba-881d-42dc-880c-032b74cbd5b9
Run the following command to publish all schemas to the Iglu Server:
$ /path/to/igluctl static push ./schemas $IGLU_SERVER_IP $IGLU_SERVER_MASTER_KEY
Once you've defined the jsonschema for your new event or context type, you need to create a corresponding jsonpath file and SQL table definition. This can be done programmatically using Igluctl. From the root of the repo:
/path/to/igluctl static generate --with-json-paths /path/to/schemas/com.codecombat/new_event_or_context_name
A corresponding jsonpath file and SQL table definition file will be generated in the appropriate folders in the repo.
Note that you can create SQL table definition and jsonpath files for all the schema'd events/contexts as follows:
/path/to/igluctl static generate --with-json-paths ./schemas/com.codecombat
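For reference, the generated jsonpath file pairs Snowplow's standard self-describing metadata paths with one path per data field. A sketch for the hypothetical `level_completed` schema above might look like this:

```json
{
  "jsonpaths": [
    "$.schema.vendor",
    "$.schema.name",
    "$.schema.format",
    "$.schema.version",
    "$.hierarchy.rootId",
    "$.hierarchy.rootTstamp",
    "$.hierarchy.refRoot",
    "$.hierarchy.refTree",
    "$.hierarchy.refParent",
    "$.data.levelName",
    "$.data.score"
  ]
}
```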
Once you've finalized the new jsonpath file, commit it to Git. From the project root:
git add .
git commit -m "Committed finalized jsonpath"
git push
Then push to Iglu:
$ /path/to/igluctl static s3cp ./jsonpaths sp-com-codecombat-100499738709-0-iglu-jsonpaths --accessKeyId ABCDEF --secretAccessKey GHIJKILM/12345XYZ --region us-east-1
Once you've committed your updated table definition to GitHub, you need to either create or modify the table in Redshift, by executing the CREATE TABLE statement directly, or an ALTER TABLE statement (if you're e.g. adding a column to an existing table).
Note that it is essential that any new tables you create are owned by the storageloader user. This is the user that we use to load and model data in Redshift. Once you've created your new table:
CREATE TABLE my_new_table...
run the following statement to assign ownership of it to storageloader:
ALTER TABLE my_new_table OWNER TO storageloader;
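For reference, the table definitions igluctl generates follow a standard shape: Snowplow's shredding metadata columns first, then one column per data field. A sketch for the hypothetical `level_completed` event above (the data columns and encodings are illustrative):

```sql
CREATE TABLE IF NOT EXISTS atomic.com_codecombat_level_completed_1 (
    -- standard columns populated by Snowplow's shredding process
    "schema_vendor"  VARCHAR(128)  ENCODE RUNLENGTH NOT NULL,
    "schema_name"    VARCHAR(128)  ENCODE RUNLENGTH NOT NULL,
    "schema_format"  VARCHAR(128)  ENCODE RUNLENGTH NOT NULL,
    "schema_version" VARCHAR(128)  ENCODE RUNLENGTH NOT NULL,
    "root_id"        CHAR(36)      ENCODE RAW       NOT NULL,
    "root_tstamp"    TIMESTAMP     ENCODE LZO       NOT NULL,
    "ref_root"       VARCHAR(255)  ENCODE RUNLENGTH NOT NULL,
    "ref_tree"       VARCHAR(1500) ENCODE RUNLENGTH NOT NULL,
    "ref_parent"     VARCHAR(255)  ENCODE RUNLENGTH NOT NULL,
    -- columns derived from the schema's data fields
    "level_name"     VARCHAR(255)  ENCODE LZO,
    "score"          INT           ENCODE LZO
)
DISTSTYLE KEY
DISTKEY (root_id)
SORTKEY (root_tstamp);
```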
Once you have gone through the above process, you can start sending data that conforms to the schema(s) you've created into Snowplow as unstructured events and custom contexts.
In both cases (custom unstructured events and custom contexts), the data is sent in as a self-describing JSON with two fields: a schema field, which references the location of the schema in Iglu, and a data field, which contains the actual data being sent, e.g.
{
"schema": "iglu: com.acme_company/viewed_product/jsonschema/2-0-0",
"data": {
"productId": "ASO01043",
"category": "Dresses",
"brand": "ACME",
"price": 49.95,
"sizes": [
"xs",
"s",
"l",
"xl",
"xxl"
],
"availableSince": "2013-03-07"
}
}
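Custom contexts follow the same pattern, except that they are attached to events as an array of self-describing JSONs, e.g. (the context schema and fields below are illustrative):

```json
[
  {
    "schema": "iglu:com.acme_company/user/jsonschema/1-0-0",
    "data": {
      "userId": "12345",
      "plan": "premium"
    }
  }
]
```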
For more detail, please see the technical documentation for the specific tracker you're implementing.
Note: we recommend testing that the data you're sending into Snowplow conforms to the schemas you've defined and uploaded into Iglu, before pushing updates into production. This online JSON schema validator is a very useful resource for doing so.
We also recommend testing that the events are sent successfully using Snowplow Mini. You do this by configuring the tracker's collector endpoint to $SNOWPLOW_MINI_IP:8080 and then logging on to http://$SNOWPLOW_MINI_IP to review the results, e.g. in Kibana (follow the links on the page). Note that you need to have your IP whitelisted before you can view data on Snowplow Mini.
When you use Snowplow, the schema for each event and context lives with the data. That means you have the flexibility to evolve your schema definition over time.
If you want to change your schema over time, you will need to:
- Create a new jsonschema file. Depending on how different this is from your current version, you will need to give it the appropriate version number. The SchemaVer specification we use when versioning data schemas can be found here
- Update the corresponding jsonpath files. If you've created a new major schema version, you'll need to create a new jsonpath file, e.g. `example_event_2.json`, that exists alongside your existing `example_event_1.json`
- For minor schema updates, you should be able to update your existing Redshift table definition, e.g. to add additional columns (see the sketch after this list). For major schema updates, you'll need to create a new Redshift table definition, e.g. `com_mycompany_example_event_2.sql`
- Start sending data into Snowplow using the new schema version (i.e. update the Iglu reference to point at the new version, e.g. `2-0-0` or `1-0-1` rather than `1-0-0`). Note that you will continue to be able to send in data that conforms to the old schema at the same time. In the event that you have an event with two different major schema definitions, each event version will be loaded into a different Redshift table
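For instance, a minor update that adds a new optional field might only require adding a nullable column to the existing table (the column name and type are illustrative):

```sql
-- Minor update (e.g. 1-0-0 -> 1-0-1): extend the existing table in place
ALTER TABLE atomic.com_codecombat_level_completed_1
  ADD COLUMN "difficulty" VARCHAR(64) ENCODE LZO;
```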
Documentation on jsonschemas:
- Other example jsonschemas can be found in Iglu Central. Note how schemas are namespaced in different folders
- Schema Guru is an online and command-line tool for programmatically generating schemas from existing JSON data
- Snowplow 0.9.5 release blog post, which gives an overview of the way that Snowplow uses jsonschemas to process, validate and shred unstructured event and custom context JSONs
- It can be useful to test jsonschemas using online validators e.g. this one
- json-schema.org contains links to the actual jsonschema specification, examples and guide for schema authors
- The original specification for self-describing JSONs, produced by the Snowplow team, can be found here
Documentation on jsonpaths:
- Example jsonpath files can be found on the Snowplow repo. Note that the corresponding jsonschema definitions are stored in Iglu central
- Amazon documentation on jsonpath files can be found here
Documentation on creating tables in Redshift:
- Example Redshift table definitions can be found on the Snowplow repo. Note that corresponding jsonschema definitions are stored in Iglu central
- Amazon documentation on Redshift create table statements can be found here. A list of Redshift data types can be found here