"}`.
+
+sni_json: |
+ {
+ "id": "7fca84d6-7d37-4a74-a7b0-93e576089a41",
+ "name": "my-sni",
+ "created_at": 1422386534,
+ "tags": ["user-level", "low-priority"],
+ "certificate": {"id":"d044b7d4-3dc2-4bbc-8e9f-6b7a69416df6"}
+ }
+
+sni_data: |
+ "data": [{
+ "id": "a9b2107f-a214-47b3-add4-46b942187924",
+ "name": "my-sni",
+ "created_at": 1422386534,
+ "tags": ["user-level", "low-priority"],
+ "certificate": {"id":"04fbeacf-a9f1-4a5d-ae4a-b0407445db3f"}
+ }, {
+ "id": "43429efd-b3a5-4048-94cb-5cc4029909bb",
+ "name": "my-sni",
+ "created_at": 1422386534,
+ "tags": ["admin", "high-priority", "critical"],
+ "certificate": {"id":"d26761d5-83a4-4f24-ac6c-cff276f2b79c"}
+ }],
+
+certificate_authority_body: |
+ Attributes | Description
+ ---:| ---
+ `cert` | PEM-encoded public CA certificate.
+
+certificate_authority_json: |
+ {
+ "id": "322dce96-d434-4e0d-9038-311b3520f0a3",
+ "created_at": 1566597621,
+ "cert": "-----BEGIN CERTIFICATE-----...",
+ }
+
+certificate_authority_data: |
+ "data": [{
+ "id": "322dce96-d434-4e0d-9038-311b3520f0a3",
+ "created_at": 1566597621,
+ "cert": "-----BEGIN CERTIFICATE-----...",
+ }, {
+ "id": "43629afd-bda5-4248-94cb-5cc4029909bb",
+ "created_at": 1566597621,
+ "cert": "-----BEGIN CERTIFICATE-----...",
+ }],
+
+upstream_body: |
+ Attributes | Description
+ ---:| ---
+ `name` | This is a hostname, which must be equal to the `host` of a Service.
+ `hash_on`
*optional* | What to use as hashing input: `none` (resulting in a weighted-round-robin scheme with no hashing), `consumer`, `ip`, `header`, or `cookie`. Defaults to `"none"`.
+ `hash_fallback`
*optional* | What to use as hashing input if the primary `hash_on` does not return a hash (eg. header is missing, or no consumer identified). One of: `none`, `consumer`, `ip`, `header`, or `cookie`. Not available if `hash_on` is set to `cookie`. Defaults to `"none"`.
+ `hash_on_header`
*semi-optional* | The header name to take the value from as hash input. Only required when `hash_on` is set to `header`.
+ `hash_fallback_header`
*semi-optional* | The header name to take the value from as hash input. Only required when `hash_fallback` is set to `header`.
+ `hash_on_cookie`
*semi-optional* | The cookie name to take the value from as hash input. Only required when `hash_on` or `hash_fallback` is set to `cookie`. If the specified cookie is not in the request, Kong will generate a value and set the cookie in the response.
+ `hash_on_cookie_path`
*semi-optional* | The cookie path to set in the response headers. Only required when `hash_on` or `hash_fallback` is set to `cookie`. Defaults to `"/"`.
+ `slots`
*optional* | The number of slots in the loadbalancer algorithm (`10`-`65536`). Defaults to `10000`.
+ `healthchecks.active.https_verify_certificate`
*optional* | Whether to check the validity of the SSL certificate of the remote host when performing active health checks using HTTPS. Defaults to `true`.
+ `healthchecks.active.unhealthy.http_statuses`
*optional* | An array of HTTP statuses to consider a failure, indicating unhealthiness, when returned by a probe in active health checks. Defaults to `[429, 404, 500, 501, 502, 503, 504, 505]`. With form-encoded, the notation is `http_statuses[]=429&http_statuses[]=404`. With JSON, use an Array.
+ `healthchecks.active.unhealthy.tcp_failures`
*optional* | Number of TCP failures in active probes to consider a target unhealthy. Defaults to `0`.
+ `healthchecks.active.unhealthy.timeouts`
*optional* | Number of timeouts in active probes to consider a target unhealthy. Defaults to `0`.
+ `healthchecks.active.unhealthy.http_failures`
*optional* | Number of HTTP failures in active probes (as defined by `healthchecks.active.unhealthy.http_statuses`) to consider a target unhealthy. Defaults to `0`.
+ `healthchecks.active.unhealthy.interval`
*optional* | Interval between active health checks for unhealthy targets (in seconds). A value of zero indicates that active probes for unhealthy targets should not be performed. Defaults to `0`.
+ `healthchecks.active.http_path`
*optional* | Path to use in GET HTTP request to run as a probe on active health checks. Defaults to `"/"`.
+ `healthchecks.active.timeout`
*optional* | Socket timeout for active health checks (in seconds). Defaults to `1`.
+ `healthchecks.active.healthy.http_statuses`
*optional* | An array of HTTP statuses to consider a success, indicating healthiness, when returned by a probe in active health checks. Defaults to `[200, 302]`. With form-encoded, the notation is `http_statuses[]=200&http_statuses[]=302`. With JSON, use an Array.
+ `healthchecks.active.healthy.interval`
*optional* | Interval between active health checks for healthy targets (in seconds). A value of zero indicates that active probes for healthy targets should not be performed. Defaults to `0`.
+ `healthchecks.active.healthy.successes`
*optional* | Number of successes in active probes (as defined by `healthchecks.active.healthy.http_statuses`) to consider a target healthy. Defaults to `0`.
+ `healthchecks.active.https_sni`
*optional* | The hostname to use as an SNI (Server Name Identification) when performing active health checks using HTTPS. This is particularly useful when Targets are configured using IPs, so that the target host's certificate can be verified with the proper SNI.
+ `healthchecks.active.concurrency`
*optional* | Number of targets to check concurrently in active health checks. Defaults to `10`.
+ `healthchecks.active.type`
*optional* | Whether to perform active health checks using HTTP or HTTPS, or just attempt a TCP connection. Possible values are `tcp`, `http` or `https`. Defaults to `"http"`.
+ `healthchecks.passive.unhealthy.http_failures`
*optional* | Number of HTTP failures in proxied traffic (as defined by `healthchecks.passive.unhealthy.http_statuses`) to consider a target unhealthy, as observed by passive health checks. Defaults to `0`.
+ `healthchecks.passive.unhealthy.http_statuses`
*optional* | An array of HTTP statuses which represent unhealthiness when produced by proxied traffic, as observed by passive health checks. Defaults to `[429, 500, 503]`. With form-encoded, the notation is `http_statuses[]=429&http_statuses[]=500`. With JSON, use an Array.
+ `healthchecks.passive.unhealthy.tcp_failures`
*optional* | Number of TCP failures in proxied traffic to consider a target unhealthy, as observed by passive health checks. Defaults to `0`.
+ `healthchecks.passive.unhealthy.timeouts`
*optional* | Number of timeouts in proxied traffic to consider a target unhealthy, as observed by passive health checks. Defaults to `0`.
+ `healthchecks.passive.type`
*optional* | Whether to perform passive health checks interpreting HTTP/HTTPS statuses, or just check for TCP connection success. Possible values are `tcp`, `http` or `https` (in passive checks, `http` and `https` options are equivalent). Defaults to `"http"`.
+ `healthchecks.passive.healthy.successes`
*optional* | Number of successes in proxied traffic (as defined by `healthchecks.passive.healthy.http_statuses`) to consider a target healthy, as observed by passive health checks. Defaults to `0`.
+ `healthchecks.passive.healthy.http_statuses`
*optional* | An array of HTTP statuses which represent healthiness when produced by proxied traffic, as observed by passive health checks. Defaults to `[200, 201, 202, 203, 204, 205, 206, 207, 208, 226, 300, 301, 302, 303, 304, 305, 306, 307, 308]`. With form-encoded, the notation is `http_statuses[]=200&http_statuses[]=201`. With JSON, use an Array.
+ `tags`
*optional* | An optional set of strings associated with the Upstream, for grouping and filtering.
+
+upstream_json: |
+ {
+ "id": "91020192-062d-416f-a275-9addeeaffaf2",
+ "created_at": 1422386534,
+ "name": "my-upstream",
+ "hash_on": "none",
+ "hash_fallback": "none",
+ "hash_on_cookie_path": "/",
+ "slots": 10000,
+ "healthchecks": {
+ "active": {
+ "https_verify_certificate": true,
+ "unhealthy": {
+ "http_statuses": [429, 404, 500, 501, 502, 503, 504, 505],
+ "tcp_failures": 0,
+ "timeouts": 0,
+ "http_failures": 0,
+ "interval": 0
+ },
+ "http_path": "/",
+ "timeout": 1,
+ "healthy": {
+ "http_statuses": [200, 302],
+ "interval": 0,
+ "successes": 0
+ },
+ "https_sni": "example.com",
+ "concurrency": 10,
+ "type": "http"
+ },
+ "passive": {
+ "unhealthy": {
+ "http_failures": 0,
+ "http_statuses": [429, 500, 503],
+ "tcp_failures": 0,
+ "timeouts": 0
+ },
+ "type": "http",
+ "healthy": {
+ "successes": 0,
+ "http_statuses": [200, 201, 202, 203, 204, 205, 206, 207, 208, 226, 300, 301, 302, 303, 304, 305, 306, 307, 308]
+ }
+ }
+ },
+ "tags": ["user-level", "low-priority"]
+ }
+
+upstream_data: |
+ "data": [{
+ "id": "a2e013e8-7623-4494-a347-6d29108ff68b",
+ "created_at": 1422386534,
+ "name": "my-upstream",
+ "hash_on": "none",
+ "hash_fallback": "none",
+ "hash_on_cookie_path": "/",
+ "slots": 10000,
+ "healthchecks": {
+ "active": {
+ "https_verify_certificate": true,
+ "unhealthy": {
+ "http_statuses": [429, 404, 500, 501, 502, 503, 504, 505],
+ "tcp_failures": 0,
+ "timeouts": 0,
+ "http_failures": 0,
+ "interval": 0
+ },
+ "http_path": "/",
+ "timeout": 1,
+ "healthy": {
+ "http_statuses": [200, 302],
+ "interval": 0,
+ "successes": 0
+ },
+ "https_sni": "example.com",
+ "concurrency": 10,
+ "type": "http"
+ },
+ "passive": {
+ "unhealthy": {
+ "http_failures": 0,
+ "http_statuses": [429, 500, 503],
+ "tcp_failures": 0,
+ "timeouts": 0
+ },
+ "type": "http",
+ "healthy": {
+ "successes": 0,
+ "http_statuses": [200, 201, 202, 203, 204, 205, 206, 207, 208, 226, 300, 301, 302, 303, 304, 305, 306, 307, 308]
+ }
+ }
+ },
+ "tags": ["user-level", "low-priority"]
+ }, {
+ "id": "147f5ef0-1ed6-4711-b77f-489262f8bff7",
+ "created_at": 1422386534,
+ "name": "my-upstream",
+ "hash_on": "none",
+ "hash_fallback": "none",
+ "hash_on_cookie_path": "/",
+ "slots": 10000,
+ "healthchecks": {
+ "active": {
+ "https_verify_certificate": true,
+ "unhealthy": {
+ "http_statuses": [429, 404, 500, 501, 502, 503, 504, 505],
+ "tcp_failures": 0,
+ "timeouts": 0,
+ "http_failures": 0,
+ "interval": 0
+ },
+ "http_path": "/",
+ "timeout": 1,
+ "healthy": {
+ "http_statuses": [200, 302],
+ "interval": 0,
+ "successes": 0
+ },
+ "https_sni": "example.com",
+ "concurrency": 10,
+ "type": "http"
+ },
+ "passive": {
+ "unhealthy": {
+ "http_failures": 0,
+ "http_statuses": [429, 500, 503],
+ "tcp_failures": 0,
+ "timeouts": 0
+ },
+ "type": "http",
+ "healthy": {
+ "successes": 0,
+ "http_statuses": [200, 201, 202, 203, 204, 205, 206, 207, 208, 226, 300, 301, 302, 303, 304, 305, 306, 307, 308]
+ }
+ }
+ },
+ "tags": ["admin", "high-priority", "critical"]
+ }],
+
+target_body: |
+ Attributes | Description
+ ---:| ---
+ `target` | The target address (IP or hostname) and port. If the hostname resolves to an SRV record, the `port` value will be overridden by the value from the DNS record.
+ `weight`
*optional* | The weight this target gets within the upstream loadbalancer (`0`-`1000`). If the hostname resolves to an SRV record, the `weight` value will be overridden by the value from the DNS record. Defaults to `100`.
+ `tags`
*optional* | An optional set of strings associated with the Target, for grouping and filtering.
+
+target_json: |
+ {
+ "id": "a3ad71a8-6685-4b03-a101-980a953544f6",
+ "created_at": 1422386534,
+ "upstream": {"id":"b87eb55d-69a1-41d2-8653-8d706eecefc0"},
+ "target": "example.com:8000",
+ "weight": 100,
+ "tags": ["user-level", "low-priority"]
+ }
+
+target_data: |
+ "data": [{
+ "id": "4e8d95d4-40f2-4818-adcb-30e00c349618",
+ "created_at": 1422386534,
+ "upstream": {"id":"58c8ccbb-eafb-4566-991f-2ed4f678fa70"},
+ "target": "example.com:8000",
+ "weight": 100,
+ "tags": ["user-level", "low-priority"]
+ }, {
+ "id": "ea29aaa3-3b2d-488c-b90c-56df8e0dd8c6",
+ "created_at": 1422386534,
+ "upstream": {"id":"4fe14415-73d5-4f00-9fbc-c72a0fccfcb2"},
+ "target": "example.com:8000",
+ "weight": 100,
+ "tags": ["admin", "high-priority", "critical"]
+ }],
+
+
+---
+
+Kong comes with an **internal** RESTful Admin API for administration purposes.
+Requests to the Admin API can be sent to any node in the cluster, and Kong will
+keep the configuration consistent across all nodes.
+
+- `8001` is the default port on which the Admin API listens.
+- `8444` is the default port for HTTPS traffic to the Admin API.
+
+This API is designed for internal use and provides full control over Kong, so
+care should be taken when setting up Kong environments to avoid undue public
+exposure of this API. See [this document][secure-admin-api] for a discussion
+of methods to secure the Admin API.
+
+## Supported Content Types
+
+The Admin API accepts two content types on every endpoint:
+
+- **application/x-www-form-urlencoded**
+
+Simple enough for basic request bodies; you will probably use it most of the time.
+Note that when sending nested values, Kong expects nested objects to be referenced
+with dotted keys. Example:
+
+```
+config.limit=10&config.period=seconds
+```
+
+Arrays and sets can be specified in various ways:
+
+1. Sending the same parameter multiple times:
+ ```
+ hosts=example.com&hosts=example.org
+ ```
+2. Using array notation:
+ ```
+ hosts[1]=example.com&hosts[2]=example.org
+ ```
+ or
+ ```
+ hosts[]=example.com&hosts[]=example.org
+ ```
+ Array and object notation can also be mixed:
+
+ ```
+ config.hosts[1]=example.com&config.hosts[2]=example.org
+ ```
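
These encodings can be produced with any standard URL-encoding helper. A minimal Python sketch (illustrative only; Kong does not require any particular client library):

```python
from urllib.parse import urlencode

# Nested objects become dotted keys when form-encoded.
params = {"config.limit": 10, "config.period": "seconds"}
print(urlencode(params))              # config.limit=10&config.period=seconds

# Arrays can be sent by repeating the parameter; doseq=True expands lists.
hosts = {"hosts": ["example.com", "example.org"]}
print(urlencode(hosts, doseq=True))   # hosts=example.com&hosts=example.org
```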
+
+
+- **application/json**
+
+Handy for complex bodies (e.g. a complex plugin configuration); in that case,
+simply send a JSON representation of the data you want to send. Example:
+
+```json
+{
+ "config": {
+ "limit": 10,
+ "period": "seconds"
+ }
+}
+```
+
+JSON arrays can be specified as well:
+
+```json
+{
+ "config": {
+ "limit": 10,
+ "period": "seconds",
+ "hosts": [ "example.com", "example.org" ]
+ }
+}
+```
+
+---
+
+## Information Routes
+
+
+
+### Retrieve Node Information
+
+Retrieve generic details about a node.
+
+/
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "hostname": "",
+ "node_id": "6a72192c-a3a1-4c8d-95c6-efabae9fb969",
+ "lua_version": "LuaJIT 2.1.0-beta3",
+ "plugins": {
+ "available_on_server": [
+ ...
+ ],
+ "enabled_in_cluster": [
+ ...
+ ]
+ },
+ "configuration" : {
+ ...
+ },
+ "tagline": "Welcome to Kong",
+ "version": "0.14.0"
+}
+```
+
+* `node_id`: A UUID representing the running Kong node. This UUID
+ is randomly generated when Kong starts, so the node will have a
+ different `node_id` each time it is restarted.
+* `available_on_server`: Names of plugins that are installed on the node.
+* `enabled_in_cluster`: Names of plugins that are enabled/configured.
+ That is, the plugins configurations currently in the datastore shared
+ by all Kong nodes.
+
+
+---
+
+### Retrieve Node Status
+
+Retrieve usage information about a node, with some basic information
+about the connections being processed by the underlying nginx process,
+the status of the database connection, and the node's memory usage.
+
+If you want to monitor the Kong process, since Kong is built on top
+of nginx, every existing nginx monitoring tool or agent can be used.
+
+
+/status
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "database": {
+ "reachable": true
+ },
+ "memory": {
+ "workers_lua_vms": [{
+ "http_allocated_gc": "0.02 MiB",
+ "pid": 18477
+ }, {
+ "http_allocated_gc": "0.02 MiB",
+ "pid": 18478
+ }],
+ "lua_shared_dicts": {
+ "kong": {
+ "allocated_slabs": "0.04 MiB",
+ "capacity": "5.00 MiB"
+ },
+ "kong_db_cache": {
+ "allocated_slabs": "0.80 MiB",
+ "capacity": "128.00 MiB"
+ }
+ }
+ },
+ "server": {
+ "total_requests": 3,
+ "connections_active": 1,
+ "connections_accepted": 1,
+ "connections_handled": 1,
+ "connections_reading": 0,
+ "connections_writing": 1,
+ "connections_waiting": 0
+ }
+}
+```
+
+* `memory`: Metrics about the memory usage.
+ * `workers_lua_vms`: An array with all workers of the Kong node, where each
+ entry contains:
+ * `http_allocated_gc`: HTTP submodule's Lua virtual machine's memory
+ usage information, as reported by `collectgarbage("count")`, for every
+ active worker, i.e. a worker that received a proxy call in the last 10
+ seconds.
+ * `pid`: worker's process identification number.
+ * `lua_shared_dicts`: An array of information about dictionaries that are
+ shared with all workers in a Kong node, where each array node contains how
+ much memory is dedicated for the specific shared dictionary (`capacity`)
+ and how much of said memory is in use (`allocated_slabs`).
+ These shared dictionaries have least recently used (LRU) eviction
+ capabilities, so a full dictionary, where `allocated_slabs == capacity`,
+ will work properly. However, for some dictionaries, e.g. cache HIT/MISS
+ shared dictionaries, increasing their size can be beneficial for the
+ overall performance of a Kong node.
+ * The memory usage unit and precision can be changed using the querystring
+ arguments `unit` and `scale`:
+ * `unit`: one of `b/B`, `k/K`, `m/M`, `g/G`, which will return results
+ in bytes, kibibytes, mebibytes, or gibibytes, respectively. When
+ "bytes" are requested, the memory values in the response will have a
+ number type instead of string. Defaults to `m`.
+ `scale`: the number of digits to the right of the decimal point when
+ values are given in human-readable memory strings (unit other than
+ "bytes"). Defaults to `2`.
+ You can get the shared dictionaries' memory usage in kibibytes with 4
+ digits of precision by doing: `GET /status?unit=k&scale=4`
+* `server`: Metrics about the nginx HTTP/S server.
+ * `total_requests`: The total number of client requests.
+ * `connections_active`: The current number of active client
+ connections including Waiting connections.
+ * `connections_accepted`: The total number of accepted client
+ connections.
+ * `connections_handled`: The total number of handled connections.
+ Generally, the parameter value is the same as `connections_accepted` unless
+ some resource limits have been reached.
+ * `connections_reading`: The current number of connections
+ where Kong is reading the request header.
+ * `connections_writing`: The current number of connections
+ where nginx is writing the response back to the client.
+ * `connections_waiting`: The current number of idle client
+ connections waiting for a request.
+* `database`: Metrics about the database.
+ * `reachable`: A boolean value reflecting the state of the
+ database connection. Please note that this flag **does not**
+ reflect the health of the database itself.
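
As a rough illustration of the `unit`/`scale` arithmetic described above (the helper below is a sketch of the conversion, not Kong's actual implementation):

```python
def format_memory(num_bytes, unit="m", scale=2):
    """Convert a byte count the way the unit/scale querystring args suggest.
    Illustrative sketch only, not Kong's actual code."""
    factors = {"b": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    names = {"k": "KiB", "m": "MiB", "g": "GiB"}
    factor = factors[unit.lower()]
    if factor == 1:
        return num_bytes  # with bytes, the response uses a number, not a string
    return f"{num_bytes / factor:.{scale}f} {names[unit.lower()]}"

print(format_memory(41943, "m"))    # 0.04 MiB
print(format_memory(1536, "k", 1))  # 1.5 KiB
```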
+
+
+---
+
+## Tags
+
+Tags are strings associated with entities in Kong. Each tag must be composed of one or more
+alphanumeric characters, `_`, `-`, `.` or `~`.
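
That character rule maps to a simple pattern; a small illustrative check (the function name is ours, not part of Kong):

```python
import re

# One or more of: alphanumerics, underscore, hyphen, dot, tilde.
TAG_PATTERN = re.compile(r"^[A-Za-z0-9_.~-]+$")

def is_valid_tag(tag: str) -> bool:
    return bool(TAG_PATTERN.match(tag))

print(is_valid_tag("low-priority"))  # True
print(is_valid_tag("has space"))     # False
```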
+
+Most core entities can be *tagged* via their `tags` attribute when they are created or edited.
+
+Tags can be used to filter core entities as well, via the `?tags` querystring parameter.
+
+For example: if you normally get a list of all the Services by doing:
+
+```
+GET /services
+```
+
+You can get the list of all the Services tagged `example` by doing:
+
+```
+GET /services?tags=example
+```
+
+Similarly, if you want to filter Services so that you only get the ones tagged `example` *and*
+`admin`, you can do that like so:
+
+```
+GET /services?tags=example,admin
+```
+
+Finally, if you wanted to filter the Services tagged `example` *or* `admin`, you could use:
+
+```
+GET /services?tags=example/admin
+```
+
+Some notes:
+
+* A maximum of 5 tags can be queried simultaneously in a single request with `,` or `/`.
+* Mixing operators is not supported: if you try to mix `,` with `/` in the same querystring,
+ you will receive an error.
+* You may need to quote and/or escape some characters when using them from the
+ command line.
+* Filtering by `tags` is not supported in foreign key relationship endpoints. For example,
+ the `tags` parameter will be ignored in a request such as `GET /services/foo/routes?tags=a,b`.
+* `offset` parameters are not guaranteed to work if the `tags` parameter is altered or removed.
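
The `,` (AND) and `/` (OR) semantics above can be summarized with a client-side sketch (purely illustrative; the real filtering happens server-side in Kong):

```python
def filter_by_tags(entities, tags_param):
    """Mimic the ?tags= querystring semantics: ',' requires all tags,
    '/' requires at least one. Illustrative only."""
    if "," in tags_param and "/" in tags_param:
        raise ValueError("mixing ',' and '/' operators is not supported")
    if "/" in tags_param:
        wanted = tags_param.split("/")
        matches = lambda tags: any(t in tags for t in wanted)
    else:
        wanted = tags_param.split(",")
        matches = lambda tags: all(t in tags for t in wanted)
    if len(wanted) > 5:
        raise ValueError("a maximum of 5 tags can be queried at once")
    return [e for e in entities if matches(e.get("tags", []))]

services = [
    {"name": "a", "tags": ["example", "admin"]},
    {"name": "b", "tags": ["example"]},
    {"name": "c", "tags": ["admin"]},
]
print([s["name"] for s in filter_by_tags(services, "example,admin")])  # ['a']
print([s["name"] for s in filter_by_tags(services, "example/admin")])  # ['a', 'b', 'c']
```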
+
+
+### List All Tags
+
+Returns a paginated list of all the tags in the system.
+
+The list of entities will not be restricted to a single entity type: all
+tagged entities, of any type, will be present in this list.
+
+If an entity is tagged with more than one tag, the `entity_id` for that entity
+will appear more than once in the resulting list. Similarly, if several entities
+have been tagged with the same tag, the tag will appear in several items of this list.
+
+
+/tags
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+  "data": [
+    { "entity_name": "services",
+      "entity_id": "acf60b10-125c-4c1a-bffe-6ed55daefba4",
+      "tag": "s1"
+    },
+    { "entity_name": "services",
+      "entity_id": "acf60b10-125c-4c1a-bffe-6ed55daefba4",
+      "tag": "s2"
+    },
+    { "entity_name": "routes",
+      "entity_id": "60631e85-ba6d-4c59-bd28-e36dd90f6000",
+      "tag": "s1"
+    },
+    ...
+  ],
+  "offset": "c47139f3-d780-483d-8a97-17e9adc5a7ab",
+  "next": "/tags?offset=c47139f3-d780-483d-8a97-17e9adc5a7ab"
+}
+```
+
+
+---
+
+### List Entity Ids by Tag
+
+Returns the entities that have been tagged with the specified tag.
+
+The list of entities will not be restricted to a single entity type: all the
+entities tagged with the specified tag will be present in this list.
+
+
+/tags/:tags
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+  "data": [
+    { "entity_name": "services",
+      "entity_id": "c87440e1-0496-420b-b06f-dac59544bb6c",
+      "tag": "example"
+    },
+    { "entity_name": "routes",
+      "entity_id": "8a99e4b1-d268-446b-ab8b-cd25cff129b1",
+      "tag": "example"
+    },
+    ...
+  ],
+  "offset": "1fb491c4-f4a7-4bca-aeba-7f3bcee4d2f9",
+  "next": "/tags/example?offset=1fb491c4-f4a7-4bca-aeba-7f3bcee4d2f9"
+}
+```
+
+
+---
+
+## Service Object
+
+Service entities, as the name implies, are abstractions of each of your own
+upstream services. Examples of Services would be a data transformation
+microservice, a billing API, etc.
+
+The main attribute of a Service is its URL (where Kong should proxy traffic
+to), which can be set as a single string or by specifying its `protocol`,
+`host`, `port` and `path` individually.
+
+Services are associated to Routes (a Service can have many Routes associated
+with it). Routes are entry-points in Kong and define rules to match client
+requests. Once a Route is matched, Kong proxies the request to its associated
+Service. See the [Proxy Reference][proxy-reference] for a detailed explanation
+of how Kong proxies traffic.
+
+Services can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.service_json }}
+```
+
+### Add Service
+
+##### Create Service
+
+/services
+
+
+*Request Body*
+
+{{ page.service_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.service_json }}
+```
+
+
+---
+
+### List Services
+
+##### List All Services
+
+/services
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.service_data }}
+ "next": "http://localhost:8001/services?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
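
A client can walk such a paginated listing by following `next` until it is absent. A sketch with a stubbed fetcher (in practice `fetch_page` would issue the HTTP GET):

```python
def fetch_all(fetch_page, first_url="/services"):
    """Accumulate `data` across pages by following `next` links.
    `fetch_page` returns the decoded JSON body for a URL; stubbed here."""
    url, items = first_url, []
    while url is not None:
        body = fetch_page(url)
        items.extend(body["data"])
        url = body.get("next")
    return items

# Stubbed two-page listing, just to show the loop's behavior.
pages = {
    "/services": {"data": [{"name": "svc-1"}], "next": "/services?offset=abc"},
    "/services?offset=abc": {"data": [{"name": "svc-2"}], "next": None},
}
print([s["name"] for s in fetch_all(pages.get)])  # ['svc-1', 'svc-2']
```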
+
+
+---
+
+### Retrieve Service
+
+##### Retrieve Service
+
+/services/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Service to retrieve.
+
+
+##### Retrieve Service Associated to a Specific Route
+
+/routes/{route name or id}/service
+
+Attributes | Description
+---:| ---
+`route name or id`
**required** | The unique identifier **or** the name of the Route associated to the Service to be retrieved.
+
+
+##### Retrieve Service Associated to a Specific Plugin
+
+/plugins/{plugin id}/service
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Service to be retrieved.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.service_json }}
+```
+
+
+---
+
+### Update Service
+
+##### Update Service
+
+/services/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Service to update.
+
+
+##### Update Service Associated to a Specific Route
+
+/routes/{route name or id}/service
+
+Attributes | Description
+---:| ---
+`route name or id`
**required** | The unique identifier **or** the name of the Route associated to the Service to be updated.
+
+
+##### Update Service Associated to a Specific Plugin
+
+/plugins/{plugin id}/service
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Service to be updated.
+
+
+*Request Body*
+
+{{ page.service_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.service_json }}
+```
+
+
+---
+
+### Update Or Create Service
+
+##### Create Or Update Service
+
+/services/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Service to create or update.
+
+
+##### Create Or Update Service Associated to a Specific Route
+
+/routes/{route name or id}/service
+
+Attributes | Description
+---:| ---
+`route name or id`
**required** | The unique identifier **or** the name of the Route associated to the Service to be created or updated.
+
+
+##### Create Or Update Service Associated to a Specific Plugin
+
+/plugins/{plugin id}/service
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Service to be created or updated.
+
+
+*Request Body*
+
+{{ page.service_body }}
+
+
+Inserts (or replaces) the Service under the requested resource with the
+definition specified in the body. The Service will be identified via the `name
+or id` attribute.
+
+When the `name or id` attribute has the structure of a UUID, the Service being
+inserted/replaced will be identified by its `id`. Otherwise it will be
+identified by its `name`.
+
+When creating a new Service without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+Notice that specifying a `name` in the URL and a different one in the request
+body is not allowed.
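
The UUID-versus-name dispatch can be pictured as follows (an illustrative sketch, not Kong's code):

```python
import uuid

def looks_like_uuid(value: str) -> bool:
    """Treat the path segment as an `id` if it parses as a UUID,
    otherwise as a `name`. Illustrative only."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

print(looks_like_uuid("9748f662-7711-4a90-8186-dc02f10eb0f5"))  # True
print(looks_like_uuid("my-service"))  # False
```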
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete Service
+
+##### Delete Service
+
+/services/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Service to delete.
+
+
+##### Delete Service Associated to a Specific Route
+
+/routes/{route name or id}/service
+
+Attributes | Description
+---:| ---
+`route name or id`
**required** | The unique identifier **or** the name of the Route associated to the Service to be deleted.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+## Route Object
+
+Route entities define rules to match client requests. Each Route is
+associated with a Service, and a Service may have multiple Routes associated to
+it. Every request matching a given Route will be proxied to its associated
+Service.
+
+The combination of Routes and Services (and the separation of concerns between
+them) offers a powerful routing mechanism with which it is possible to define
+fine-grained entry-points in Kong leading to different upstream services of
+your infrastructure.
+
+Routes can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.route_json }}
+```
+
+### Add Route
+
+##### Create Route
+
+/routes
+
+
+##### Create Route Associated to a Specific Service
+
+/services/{service name or id}/routes
+
+Attributes | Description
+---:| ---
+`service name or id`
**required** | The unique identifier or the `name` attribute of the Service that should be associated to the newly-created Route.
+
+
+*Request Body*
+
+{{ page.route_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.route_json }}
+```
+
+
+---
+
+### List Routes
+
+##### List All Routes
+
+/routes
+
+
+##### List Routes Associated to a Specific Service
+
+/services/{service name or id}/routes
+
+Attributes | Description
+---:| ---
+`service name or id`
**required** | The unique identifier or the `name` attribute of the Service whose Routes are to be retrieved. When using this endpoint, only Routes associated to the specified Service will be listed.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.route_data }}
+ "next": "http://localhost:8001/routes?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Retrieve Route
+
+##### Retrieve Route
+
+/routes/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Route to retrieve.
+
+
+##### Retrieve Route Associated to a Specific Plugin
+
+/plugins/{plugin id}/route
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Route to be retrieved.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.route_json }}
+```
+
+
+---
+
+### Update Route
+
+##### Update Route
+
+/routes/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Route to update.
+
+
+##### Update Route Associated to a Specific Plugin
+
+/plugins/{plugin id}/route
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Route to be updated.
+
+
+*Request Body*
+
+{{ page.route_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.route_json }}
+```
+
+
+---
+
+### Update Or Create Route
+
+##### Create Or Update Route
+
+/routes/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Route to create or update.
+
+
+##### Create Or Update Route Associated to a Specific Plugin
+
+/plugins/{plugin id}/route
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Route to be created or updated.
+
+
+*Request Body*
+
+{{ page.route_body }}
+
+
+Inserts (or replaces) the Route under the requested resource with the
+definition specified in the body. The Route will be identified via the `name
+or id` attribute.
+
+When the `name or id` attribute has the structure of a UUID, the Route being
+inserted/replaced will be identified by its `id`. Otherwise it will be
+identified by its `name`.
+
+When creating a new Route without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+Notice that specifying a `name` in the URL and a different one in the request
+body is not allowed.
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete Route
+
+##### Delete Route
+
+/routes/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Route to delete.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+## Consumer Object
+
+The Consumer object represents a consumer - or a user - of a Service. You can
+either rely on Kong as the primary datastore, or you can map the consumer list
+with your database to keep consistency between Kong and your existing primary
+datastore.
+
+Consumers can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.consumer_json }}
+```
+
+### Add Consumer
+
+##### Create Consumer
+
+/consumers
+
+
+*Request Body*
+
+{{ page.consumer_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.consumer_json }}
+```
+
+
+---
+
+### List Consumers
+
+##### List All Consumers
+
+/consumers
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.consumer_data }}
+ "next": "http://localhost:8001/consumers?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Retrieve Consumer
+
+##### Retrieve Consumer
+
+/consumers/{username or id}
+
+Attributes | Description
+---:| ---
+`username or id`
**required** | The unique identifier **or** the username of the Consumer to retrieve.
+
+
+##### Retrieve Consumer Associated to a Specific Plugin
+
+/plugins/{plugin id}/consumer
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Consumer to be retrieved.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.consumer_json }}
+```
+
+
+---
+
+### Update Consumer
+
+##### Update Consumer
+
+/consumers/{username or id}
+
+Attributes | Description
+---:| ---
+`username or id`
**required** | The unique identifier **or** the username of the Consumer to update.
+
+
+##### Update Consumer Associated to a Specific Plugin
+
+/plugins/{plugin id}/consumer
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Consumer to be updated.
+
+
+*Request Body*
+
+{{ page.consumer_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.consumer_json }}
+```
+
+
+---
+
+### Update Or Create Consumer
+
+##### Create Or Update Consumer
+
+/consumers/{username or id}
+
+Attributes | Description
+---:| ---
+`username or id`
**required** | The unique identifier **or** the username of the Consumer to create or update.
+
+
+##### Create Or Update Consumer Associated to a Specific Plugin
+
+/plugins/{plugin id}/consumer
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin associated to the Consumer to be created or updated.
+
+
+*Request Body*
+
+{{ page.consumer_body }}
+
+
+Inserts (or replaces) the Consumer under the requested resource with the
+definition specified in the body. The Consumer will be identified via the `username
+or id` attribute.
+
+When the `username or id` attribute has the structure of a UUID, the Consumer being
+inserted/replaced will be identified by its `id`. Otherwise it will be
+identified by its `username`.
+
+When creating a new Consumer without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+Notice that specifying a `username` in the URL and a different one in the request
+body is not allowed.
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete Consumer
+
+##### Delete Consumer
+
+/consumers/{username or id}
+
+Attributes | Description
+---:| ---
+`username or id`
**required** | The unique identifier **or** the username of the Consumer to delete.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+## Plugin Object
+
+A Plugin entity represents a plugin configuration that will be executed during
+the HTTP request/response lifecycle. It is how you can add functionality
+to Services that run behind Kong, such as Authentication or Rate Limiting.
+You can find more information about how to install each plugin and what values
+it takes by visiting the [Kong Hub](https://docs.konghq.com/hub/).
+
+When adding a Plugin Configuration to a Service, every request made by a client to
+that Service will run said Plugin. If a Plugin needs to be tuned to different
+values for some specific Consumers, you can do so by creating a separate
+plugin instance that specifies both the Service and the Consumer, through the
+`service` and `consumer` fields.
+
+Plugins can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.plugin_json }}
+```
+
+See the [Precedence](#precedence) section below for more details.
+
+#### Precedence
+
+A plugin will always be run once and only once per request. But the
+configuration with which it will run depends on the entities it has been
+configured for.
+
+Plugins can be configured for various entities, combination of entities, or
+even globally. This is useful, for example, when you wish to configure a plugin
+a certain way for most requests, but make _authenticated requests_ behave
+slightly differently.
+
+Therefore, there exists an order of precedence for running a plugin when it has
+been applied to different entities with different configurations. The rule of
+thumb is: the more specific a plugin is with regards to how many entities it
+has been configured on, the higher its priority.
+
+The complete order of precedence when a plugin has been configured multiple
+times is:
+
+1. Plugins configured on a combination of: a Route, a Service, and a Consumer.
+ (Consumer means the request must be authenticated).
+2. Plugins configured on a combination of a Route and a Consumer.
+ (Consumer means the request must be authenticated).
+3. Plugins configured on a combination of a Service and a Consumer.
+ (Consumer means the request must be authenticated).
+4. Plugins configured on a combination of a Route and a Service.
+5. Plugins configured on a Consumer.
+ (Consumer means the request must be authenticated).
+6. Plugins configured on a Route.
+7. Plugins configured on a Service.
+8. Plugins configured to run globally.
+
+**Example**: if the `rate-limiting` plugin is applied twice (with different
+configurations): for a Service (Plugin config A), and for a Consumer (Plugin
+config B), then requests authenticating this Consumer will run Plugin config B
+and ignore A. However, requests that do not authenticate this Consumer will
+fall back to running Plugin config A. Note that if config B is disabled
+(its `enabled` flag is set to `false`), config A will apply to requests that
+would have otherwise matched config B.
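
The eight-level order above can be modeled by ranking each plugin configuration according to which entities it names. This is only an illustrative sketch of the documented precedence rules, not Kong's internals:

```python
def precedence(cfg):
    """Rank a plugin config: lower rank = higher precedence,
    mirroring the 8-level list above. `cfg` is a dict with
    optional 'route', 'service', and 'consumer' keys."""
    key = ("route" in cfg, "service" in cfg, "consumer" in cfg)
    order = [
        (True,  True,  True),   # 1. Route + Service + Consumer
        (True,  False, True),   # 2. Route + Consumer
        (False, True,  True),   # 3. Service + Consumer
        (True,  True,  False),  # 4. Route + Service
        (False, False, True),   # 5. Consumer
        (True,  False, False),  # 6. Route
        (False, True,  False),  # 7. Service
        (False, False, False),  # 8. Global
    ]
    return order.index(key) + 1

def pick(configs):
    """Choose the single config that runs for a request."""
    return min(configs, key=precedence)
```

With the rate-limiting example above, `pick` would select the Consumer-scoped config (level 5) over the Service-scoped one (level 7) for authenticated requests.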
+
+
+### Add Plugin
+
+##### Create Plugin
+
+/plugins
+
+
+##### Create Plugin Associated to a Specific Route
+
+/routes/{route id}/plugins
+
+Attributes | Description
+---:| ---
+`route id`
**required** | The unique identifier of the Route that should be associated to the newly-created Plugin.
+
+
+##### Create Plugin Associated to a Specific Service
+
+/services/{service id}/plugins
+
+Attributes | Description
+---:| ---
+`service id`
**required** | The unique identifier of the Service that should be associated to the newly-created Plugin.
+
+
+##### Create Plugin Associated to a Specific Consumer
+
+/consumers/{consumer id}/plugins
+
+Attributes | Description
+---:| ---
+`consumer id`
**required** | The unique identifier of the Consumer that should be associated to the newly-created Plugin.
+
+
+*Request Body*
+
+{{ page.plugin_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.plugin_json }}
+```
+
+
+---
+
+### List Plugins
+
+##### List All Plugins
+
+/plugins
+
+
+##### List Plugins Associated to a Specific Route
+
+/routes/{route id}/plugins
+
+Attributes | Description
+---:| ---
+`route id`
**required** | The unique identifier of the Route whose Plugins are to be retrieved. When using this endpoint, only Plugins associated to the specified Route will be listed.
+
+
+##### List Plugins Associated to a Specific Service
+
+/services/{service id}/plugins
+
+Attributes | Description
+---:| ---
+`service id`
**required** | The unique identifier of the Service whose Plugins are to be retrieved. When using this endpoint, only Plugins associated to the specified Service will be listed.
+
+
+##### List Plugins Associated to a Specific Consumer
+
+/consumers/{consumer id}/plugins
+
+Attributes | Description
+---:| ---
+`consumer id`
**required** | The unique identifier of the Consumer whose Plugins are to be retrieved. When using this endpoint, only Plugins associated to the specified Consumer will be listed.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.plugin_data }}
+ "next": "http://localhost:8001/plugins?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Retrieve Plugin
+
+##### Retrieve Plugin
+
+/plugins/{plugin id}
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin to retrieve.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.plugin_json }}
+```
+
+
+---
+
+### Update Plugin
+
+##### Update Plugin
+
+/plugins/{plugin id}
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin to update.
+
+
+*Request Body*
+
+{{ page.plugin_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.plugin_json }}
+```
+
+
+---
+
+### Update Or Create Plugin
+
+##### Create Or Update Plugin
+
+/plugins/{plugin id}
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin to create or update.
+
+
+*Request Body*
+
+{{ page.plugin_body }}
+
+
+Inserts (or replaces) the Plugin under the requested resource with the
+definition specified in the body. The Plugin will be identified via the `name
+or id` attribute.
+
+When the `name or id` attribute has the structure of a UUID, the Plugin being
+inserted/replaced will be identified by its `id`. Otherwise it will be
+identified by its `name`.
+
+When creating a new Plugin without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+Notice that specifying a `name` in the URL and a different one in the request
+body is not allowed.
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete Plugin
+
+##### Delete Plugin
+
+/plugins/{plugin id}
+
+Attributes | Description
+---:| ---
+`plugin id`
**required** | The unique identifier of the Plugin to delete.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+### Retrieve Enabled Plugins
+
+Retrieve a list of all installed plugins on the Kong node.
+
+/plugins/enabled
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "enabled_plugins": [
+ "jwt",
+ "acl",
+ "cors",
+ "oauth2",
+ "tcp-log",
+ "udp-log",
+ "file-log",
+ "http-log",
+ "key-auth",
+ "hmac-auth",
+ "basic-auth",
+ "ip-restriction",
+ "request-transformer",
+ "response-transformer",
+ "request-size-limiting",
+ "rate-limiting",
+ "response-ratelimiting",
+ "aws-lambda",
+ "bot-detection",
+ "correlation-id",
+ "datadog",
+ "galileo",
+ "ldap-auth",
+ "loggly",
+ "statsd",
+ "syslog"
+ ]
+}
+```
+
+
+---
+
+### Retrieve Plugin Schema
+
+Retrieve the schema of a plugin's configuration. This is useful to
+understand what fields a plugin accepts, and can be used for building
+third-party integrations with Kong's plugin system.
+
+
+/plugins/schema/{plugin name}
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "fields": {
+ "hide_credentials": {
+ "default": false,
+ "type": "boolean"
+ },
+ "key_names": {
+ "default": "function",
+ "required": true,
+ "type": "array"
+ }
+ }
+}
+```
+
+
+---
+
+## Certificate Object
+
+A certificate object represents a public certificate, and can be optionally paired with the
+corresponding private key. These objects are used by Kong to handle SSL/TLS termination for
+encrypted requests, or for use as a trusted CA store when validating the peer
+certificate of a client or service. Certificates are optionally associated with SNI objects to
+tie a cert/key pair to one or more hostnames.
+
+Certificates can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.certificate_json }}
+```
+
+### Add Certificate
+
+##### Create Certificate
+
+/certificates
+
+
+*Request Body*
+
+{{ page.certificate_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.certificate_json }}
+```
+
+
+---
+
+### List Certificates
+
+##### List All Certificates
+
+/certificates
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.certificate_data }}
+ "next": "http://localhost:8001/certificates?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Retrieve Certificate
+
+##### Retrieve Certificate
+
+/certificates/{certificate id}
+
+Attributes | Description
+---:| ---
+`certificate id`
**required** | The unique identifier of the Certificate to retrieve.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.certificate_json }}
+```
+
+
+---
+
+### Update Certificate
+
+##### Update Certificate
+
+/certificates/{certificate id}
+
+Attributes | Description
+---:| ---
+`certificate id`
**required** | The unique identifier of the Certificate to update.
+
+
+*Request Body*
+
+{{ page.certificate_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.certificate_json }}
+```
+
+
+---
+
+### Update Or Create Certificate
+
+##### Create Or Update Certificate
+
+/certificates/{certificate id}
+
+Attributes | Description
+---:| ---
+`certificate id`
**required** | The unique identifier of the Certificate to create or update.
+
+
+*Request Body*
+
+{{ page.certificate_body }}
+
+
+Inserts (or replaces) the Certificate under the requested resource with the
+definition specified in the body. The Certificate will be identified via the `name
+or id` attribute.
+
+When the `name or id` attribute has the structure of a UUID, the Certificate being
+inserted/replaced will be identified by its `id`. Otherwise it will be
+identified by its `name`.
+
+When creating a new Certificate without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+Notice that specifying a `name` in the URL and a different one in the request
+body is not allowed.
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete Certificate
+
+##### Delete Certificate
+
+/certificates/{certificate id}
+
+Attributes | Description
+---:| ---
+`certificate id`
**required** | The unique identifier of the Certificate to delete.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+## SNI Object
+
+An SNI object represents a many-to-one mapping of hostnames to a certificate.
+That is, a certificate object can have many hostnames associated with it; when
+Kong receives an SSL request, it uses the SNI field in the Client Hello to
+look up the certificate object based on the SNI associated with the certificate.
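
The many-to-one mapping can be pictured with a plain dictionary; the hostnames and certificate IDs below are made up for illustration:

```python
# Many SNI hostnames map to one certificate id (illustrative data).
snis = {
    "example.com": "d044b7d4-3dc2-4bbc-8e9f-6b7a69416df6",
    "www.example.com": "d044b7d4-3dc2-4bbc-8e9f-6b7a69416df6",
    "api.example.org": "04fbeacf-a9f1-4a5d-ae4a-b0407445db3f",
}

def certificate_for(client_hello_sni, default=None):
    """Pick the certificate to serve for the server name from the
    Client Hello, falling back to a default certificate when the
    requested hostname has no SNI entry."""
    return snis.get(client_hello_sni, default)
```

Here both `example.com` hostnames resolve to the same certificate, while an unknown hostname falls back to the default.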
+
+SNIs can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.sni_json }}
+```
+
+### Add SNI
+
+##### Create SNI
+
+/snis
+
+
+##### Create SNI Associated to a Specific Certificate
+
+/certificates/{certificate name or id}/snis
+
+Attributes | Description
+---:| ---
+`certificate name or id`
**required** | The unique identifier or the `name` attribute of the Certificate that should be associated to the newly-created SNI.
+
+
+*Request Body*
+
+{{ page.sni_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.sni_json }}
+```
+
+
+---
+
+### List SNIs
+
+##### List All SNIs
+
+/snis
+
+
+##### List SNIs Associated to a Specific Certificate
+
+/certificates/{certificate name or id}/snis
+
+Attributes | Description
+---:| ---
+`certificate name or id`
**required** | The unique identifier or the `name` attribute of the Certificate whose SNIs are to be retrieved. When using this endpoint, only SNIs associated to the specified Certificate will be listed.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.sni_data }}
+ "next": "http://localhost:8001/snis?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Retrieve SNI
+
+##### Retrieve SNI
+
+/snis/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the SNI to retrieve.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.sni_json }}
+```
+
+
+---
+
+### Update SNI
+
+##### Update SNI
+
+/snis/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the SNI to update.
+
+
+*Request Body*
+
+{{ page.sni_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.sni_json }}
+```
+
+
+---
+
+### Update Or Create SNI
+
+##### Create Or Update SNI
+
+/snis/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the SNI to create or update.
+
+
+*Request Body*
+
+{{ page.sni_body }}
+
+
+Inserts (or replaces) the SNI under the requested resource with the
+definition specified in the body. The SNI will be identified via the `name
+or id` attribute.
+
+When the `name or id` attribute has the structure of a UUID, the SNI being
+inserted/replaced will be identified by its `id`. Otherwise it will be
+identified by its `name`.
+
+When creating a new SNI without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+Notice that specifying a `name` in the URL and a different one in the request
+body is not allowed.
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete SNI
+
+##### Delete SNI
+
+/snis/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the SNI to delete.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+---
+
+## Certificate Authority Object
+
+A certificate authority object represents a public CA certificate.
+These objects are used by Kong to verify client certificates presented during mTLS authentication.
+
+
+```json
+{{ page.certificate_authority_json }}
+```
+
+### Add Certificate Authority
+
+##### Create Certificate Authority
+
+/ca_certificates
+
+
+*Request Body*
+
+{{ page.certificate_authority_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.certificate_authority_json }}
+```
+
+
+---
+
+### List Certificate Authorities
+
+##### List all Certificate Authorities
+
+/ca_certificates
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.certificate_authority_data }}
+ "next": "http://localhost:8001/ca_certificates?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Retrieve Certificate Authority
+
+##### Retrieve Certificate Authority
+
+/ca_certificates/{certificate authority id}
+
+Attributes | Description
+---:| ---
+`certificate authority id`
**required** | The unique identifier of the certificate authority to retrieve.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.certificate_authority_json }}
+```
+
+
+---
+
+### Update Certificate Authority
+
+##### Update Certificate Authority
+
+/ca_certificates/{certificate authority id}
+
+Attributes | Description
+---:| ---
+`certificate authority id`
**required** | The unique identifier of the certificate authority to update.
+
+
+*Request Body*
+
+{{ page.certificate_authority_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.certificate_authority_json }}
+```
+
+
+---
+
+### Update or create Certificate Authority
+
+##### Create or update Certificate Authority
+
+/ca_certificates/{certificate authority id}
+
+Attributes | Description
+---:| ---
+`certificate authority id`
**required** | The unique identifier of the certificate authority to create or update.
+
+
+*Request Body*
+
+{{ page.certificate_authority_body }}
+
+
+Inserts (or replaces) the certificate authority under the requested resource with the
+definition specified in the body. The certificate authority will be identified via the `id` attribute.
+
+When creating a new certificate authority without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete Certificate Authority
+
+##### Delete Certificate Authority
+
+/ca_certificates/{certificate authority id}
+
+Attributes | Description
+---:| ---
+`certificate authority id`
**required** | The unique identifier of the certificate authority to delete.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+
+---
+
+## Upstream Object
+
+The upstream object represents a virtual hostname and can be used to load-balance
+incoming requests over multiple services (targets). For example, an upstream
+named `service.v1.xyz` would match a Service object whose `host` is `service.v1.xyz`.
+Requests for this Service would be proxied to the targets defined within the upstream.
+
+An upstream also includes a [health checker][healthchecks], which is able to
+enable and disable targets based on their ability or inability to serve
+requests. The configuration for the health checker is stored in the upstream
+object, and applies to all of its targets.
+
+Upstreams can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.upstream_json }}
+```
+
+### Add Upstream
+
+##### Create Upstream
+
+/upstreams
+
+
+*Request Body*
+
+{{ page.upstream_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.upstream_json }}
+```
+
+
+---
+
+### List Upstreams
+
+##### List All Upstreams
+
+/upstreams
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.upstream_data }}
+ "next": "http://localhost:8001/upstreams?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Retrieve Upstream
+
+##### Retrieve Upstream
+
+/upstreams/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Upstream to retrieve.
+
+
+##### Retrieve Upstream Associated to a Specific Target
+
+/targets/{target host:port or id}/upstream
+
+Attributes | Description
+---:| ---
+`target host:port or id`
**required** | The unique identifier **or** the host:port of the Target associated to the Upstream to be retrieved.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.upstream_json }}
+```
+
+
+---
+
+### Update Upstream
+
+##### Update Upstream
+
+/upstreams/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Upstream to update.
+
+
+##### Update Upstream Associated to a Specific Target
+
+/targets/{target host:port or id}/upstream
+
+Attributes | Description
+---:| ---
+`target host:port or id`
**required** | The unique identifier **or** the host:port of the Target associated to the Upstream to be updated.
+
+
+*Request Body*
+
+{{ page.upstream_body }}
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{{ page.upstream_json }}
+```
+
+
+---
+
+### Update Or Create Upstream
+
+##### Create Or Update Upstream
+
+/upstreams/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Upstream to create or update.
+
+
+##### Create Or Update Upstream Associated to a Specific Target
+
+/targets/{target host:port or id}/upstream
+
+Attributes | Description
+---:| ---
+`target host:port or id`
**required** | The unique identifier **or** the host:port of the Target associated to the Upstream to be created or updated.
+
+
+*Request Body*
+
+{{ page.upstream_body }}
+
+
+Inserts (or replaces) the Upstream under the requested resource with the
+definition specified in the body. The Upstream will be identified via the `name
+or id` attribute.
+
+When the `name or id` attribute has the structure of a UUID, the Upstream being
+inserted/replaced will be identified by its `id`. Otherwise it will be
+identified by its `name`.
+
+When creating a new Upstream without specifying `id` (neither in the URL nor in
+the body), it will be auto-generated.
+
+Notice that specifying a `name` in the URL and a different one in the request
+body is not allowed.
+
+
+*Response*
+
+```
+HTTP 201 Created or HTTP 200 OK
+```
+
+See POST and PATCH responses.
+
+
+---
+
+### Delete Upstream
+
+##### Delete Upstream
+
+/upstreams/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Upstream to delete.
+
+
+##### Delete Upstream Associated to a Specific Target
+
+/targets/{target host:port or id}/upstream
+
+Attributes | Description
+---:| ---
+`target host:port or id`
**required** | The unique identifier **or** the host:port of the Target associated to the Upstream to be deleted.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+### Show Upstream Health for Node
+
+Displays the health status for all Targets of a given Upstream, from the
+perspective of a specific Kong node. Note that, as this is node-specific
+information, making this same request to different nodes of the Kong cluster
+may produce different results. For example, one specific node of the Kong
+cluster may be experiencing network issues, causing it to fail to connect to
+some Targets: these Targets will be marked as unhealthy by that node
+(directing traffic from this node to other Targets that it can successfully
+reach), but healthy by all other Kong nodes (which have no problems using that
+Target).
+
+The `data` field of the response contains an array of Target objects.
+The health for each Target is returned in its `health` field:
+
+* If a Target fails to be activated in the ring balancer due to DNS issues,
+ its status displays as `DNS_ERROR`.
+* When [health checks][healthchecks] are not enabled in the Upstream
+ configuration, the health status for active Targets is displayed as
+ `HEALTHCHECKS_OFF`.
+* When health checks are enabled and the Target is determined to be healthy,
+ either automatically or [manually](#set-target-as-healthy),
+ its status is displayed as `HEALTHY`. This means that this Target is
+ currently included in this Upstream's load balancer ring.
+* When a Target has been disabled by either active or passive health checks
+ (circuit breakers) or [manually](#set-target-as-unhealthy),
+ its status is displayed as `UNHEALTHY`. The load balancer is not directing
+ any traffic to this Target via this Upstream.
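
The four statuses above can be summarized as a decision sketch; this is illustrative only, not Kong's implementation:

```python
def target_health(dns_ok, healthchecks_enabled, passed_checks):
    """Map the conditions described above to the status strings
    reported by the health endpoint."""
    if not dns_ok:
        # Target could not be activated in the ring balancer.
        return "DNS_ERROR"
    if not healthchecks_enabled:
        # Health checks are off for this Upstream.
        return "HEALTHCHECKS_OFF"
    # Health checks are on: status reflects the checker's verdict.
    return "HEALTHY" if passed_checks else "UNHEALTHY"
```

Note that `HEALTHY` and `UNHEALTHY` are only reported when health checks are enabled; otherwise every resolvable target shows `HEALTHCHECKS_OFF`.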
+
+
+/upstreams/{name or id}/health/
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the Upstream for which to display Target health.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "total": 2,
+ "node_id": "cbb297c0-14a9-46bc-ad91-1d0ef9b42df9",
+ "data": [
+ {
+ "created_at": 1485524883980,
+ "id": "18c0ad90-f942-4098-88db-bbee3e43b27f",
+ "health": "HEALTHY",
+ "target": "127.0.0.1:20000",
+ "upstream_id": "07131005-ba30-4204-a29f-0927d53257b4",
+ "weight": 100
+ },
+ {
+ "created_at": 1485524914883,
+ "id": "6c6f34eb-e6c3-4c1f-ac58-4060e5bca890",
+ "health": "UNHEALTHY",
+ "target": "127.0.0.1:20002",
+ "upstream_id": "07131005-ba30-4204-a29f-0927d53257b4",
+ "weight": 200
+ }
+ ]
+}
+```
+
+
+---
+
+## Target Object
+
+A target is an IP address or hostname with a port that identifies an instance of a
+backend service. Every upstream can have many targets, and targets can be
+added dynamically. Changes take effect on the fly.
+
+Because the upstream maintains a history of target changes, the targets cannot
+be deleted or modified. To disable a target, post a new one with `weight=0`;
+alternatively, use the `DELETE` convenience method to accomplish the same.
+
+The current target object definition is the one with the latest `created_at`.
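
Resolving the current definitions from the append-only change history can be sketched as follows (an illustration of the rules above, not Kong's code):

```python
def current_targets(history):
    """Given the full target history (dicts with 'target', 'created_at',
    and 'weight'), return the active targets: for each target address
    keep only the newest entry, then drop entries whose weight is 0,
    since a zero-weight entry disables the target."""
    latest = {}
    for entry in history:
        key = entry["target"]
        if key not in latest or entry["created_at"] > latest[key]["created_at"]:
            latest[key] = entry
    return [e for e in latest.values() if e["weight"] > 0]
```

A target that was posted and later "deleted" (re-posted with `weight=0`) therefore drops out of the active set, while its history remains.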
+
+Targets can be both [tagged and filtered by tags](#tags).
+
+
+```json
+{{ page.target_json }}
+```
+
+### Add Target
+
+##### Create Target Associated to a Specific Upstream
+
+/upstreams/{upstream host:port or id}/targets
+
+Attributes | Description
+---:| ---
+`upstream host:port or id`
**required** | The unique identifier or the `host:port` attribute of the Upstream that should be associated to the newly-created Target.
+
+
+*Request Body*
+
+{{ page.target_body }}
+
+
+*Response*
+
+```
+HTTP 201 Created
+```
+
+```json
+{{ page.target_json }}
+```
+
+
+---
+
+### List Targets
+
+##### List Targets Associated to a Specific Upstream
+
+/upstreams/{upstream host:port or id}/targets
+
+Attributes | Description
+---:| ---
+`upstream host:port or id`
**required** | The unique identifier or the `host:port` attribute of the Upstream whose Targets are to be retrieved. When using this endpoint, only Targets associated to the specified Upstream will be listed.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+{{ page.target_data }}
+ "next": "http://localhost:8001/targets?offset=6378122c-a0a1-438d-a5c6-efabae9fb969"
+}
+```
+
+
+---
+
+### Delete Target
+
+Disable a target in the load balancer. Under the hood, this method creates
+a new entry for the given target definition with a `weight` of 0.
+
+
+/upstreams/{upstream name or id}/targets/{host:port or id}
+
+Attributes | Description
+---:| ---
+`upstream name or id`
**required** | The unique identifier **or** the name of the upstream for which to delete the target.
+`host:port or id`
**required** | The host:port combination element of the target to remove, or the `id` of an existing target entry.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+### Set Target As Healthy
+
+Set the current health status of a target in the load balancer to "healthy"
+in the entire Kong cluster.
+
+This endpoint can be used to manually re-enable a target that was previously
+disabled by the upstream's [health checker][healthchecks]. Upstreams only
+forward requests to healthy nodes, so this call tells Kong to start using this
+target again.
+
+This resets the health counters of the health checkers running in all workers
+of the Kong node, and broadcasts a cluster-wide message so that the "healthy"
+status is propagated to the whole Kong cluster.
+
+
+/upstreams/{upstream name or id}/targets/{target or id}/healthy
+
+Attributes | Description
+---:| ---
+`upstream name or id`
**required** | The unique identifier **or** the name of the upstream.
+`target or id`
**required** | The host/port combination element of the target to set as healthy, or the `id` of an existing target entry.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+### Set Target As Unhealthy
+
+Set the current health status of a target in the load balancer to "unhealthy"
+in the entire Kong cluster.
+
+This endpoint can be used to manually disable a target and have it stop
+responding to requests. Upstreams only forward requests to healthy nodes, so
+this call tells Kong to start skipping this target in the ring-balancer
+algorithm.
+
+This call resets the health counters of the health checkers running in all
+workers of the Kong node, and broadcasts a cluster-wide message so that the
+"unhealthy" status is propagated to the whole Kong cluster.
+
+[Active health checks][active] continue to execute for unhealthy
+targets. Note that if active health checks are enabled and the probe detects
+that the target is actually healthy, the target will automatically be re-enabled.
+To permanently remove a target from the ring-balancer, you should [delete a
+target](#delete-target) instead.
+
+
+/upstreams/{upstream name or id}/targets/{target or id}/unhealthy
+
+Attributes | Description
+---:| ---
+`upstream name or id`
**required** | The unique identifier **or** the name of the upstream.
+`target or id`
**required** | The host/port combination element of the target to set as unhealthy, or the `id` of an existing target entry.
+
+
+*Response*
+
+```
+HTTP 204 No Content
+```
+
+
+---
+
+### List All Targets
+
+Lists all targets of the upstream. Multiple target objects for the same
+target may be returned, showing the history of changes for a specific target.
+The target object with the latest `created_at` is the current definition.
+
+
+/upstreams/{name or id}/targets/all/
+
+Attributes | Description
+---:| ---
+`name or id`
**required** | The unique identifier **or** the name of the upstream for which to list the targets.
+
+
+*Response*
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "total": 2,
+ "data": [
+ {
+ "created_at": 1485524883980,
+ "id": "18c0ad90-f942-4098-88db-bbee3e43b27f",
+ "target": "127.0.0.1:20000",
+ "upstream_id": "07131005-ba30-4204-a29f-0927d53257b4",
+ "weight": 100
+ },
+ {
+ "created_at": 1485524914883,
+ "id": "6c6f34eb-e6c3-4c1f-ac58-4060e5bca890",
+ "target": "127.0.0.1:20002",
+ "upstream_id": "07131005-ba30-4204-a29f-0927d53257b4",
+ "weight": 200
+ }
+ ]
+}
+```
+
+## Enterprise Exclusive Admin API
+
+The following documentation refers to Kong Enterprise specific Admin API
+functionality. For a complete reference, check out the Kong Admin API Reference.
+
+* Authenticate Kong Admins with Basic Auth, OIDC, LDAP, and Sessions, and
+  authorize Admins with RBAC and Workspaces.
+* Create and manage Admins for Kong Enterprise.
+* Enable metrics about the health and performance of Kong.
+
+---
+
+[clustering]: /enterprise/{{page.kong_version}}/clustering
+[cli]: /enterprise/{{page.kong_version}}/cli
+[active]: /enterprise/{{page.kong_version}}/health-checks-circuit-breakers/#active-health-checks
+[healthchecks]: /enterprise/{{page.kong_version}}/health-checks-circuit-breakers
+[secure-admin-api]: /enterprise/{{page.kong_version}}/secure-admin-api
+[proxy-reference]: /enterprise/{{page.kong_version}}/proxy
+
diff --git a/app/enterprise/1.3-x/admin-api/rbac/examples.md b/app/enterprise/1.3-x/admin-api/rbac/examples.md
new file mode 100644
index 000000000000..fe90252af467
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/rbac/examples.md
@@ -0,0 +1,1050 @@
+---
+title: RBAC Examples
+book: rbac
+---
+
+## Introduction
+
+This chapter aims to provide a step-by-step tutorial on how to set up
+RBAC and see it in action, with an end-to-end use case. The chosen
+use case demonstrates how **RBAC with workspaces** can be coupled
+to achieve a flexible organization of teams and users in complex
+hierarchies. Make sure to read the [RBAC Overview][rbac-overview] page
+and to glance over the [RBAC Admin API][rbac-admin] chapter, keeping it
+open as a reference.
+
+## Use Case
+
+For the sake of example, let's say a given company has a Kong Enterprise
+cluster to be shared with 3 teams: teamA, teamB, and teamC. While the Kong
+cluster is shared among these teams, each team wants to segment
+its entities in such a way that managing entities in one team doesn't
+disrupt operations in another. As shown in the
+[Workspaces Examples Page][workspaces-examples], such a use case is possible
+with workspaces. On top of workspaces, though, each team wants to enforce
+access control over their Workspace, which is possible with RBAC. **To sum up,
+Workspaces and RBAC are complementary: Workspaces provide segmentation of
+Admin API entities, while RBAC provides access control**.
+
+## Bootstrapping the first RBAC user: the Super Admin
+
+**Note:** It is possible to create the first Super Admin at the time
+of migration as described in the [Getting Started Guide][getting-started-guide].
+If you chose this option, skip to [Enforcing RBAC](#enforcing-rbac).
+
+Before anything, we will assume the Kong Admin (or, more interestingly,
+the KongOps Engineer) in charge of operating Kong will create a Super Admin
+user before actually enforcing RBAC and restarting Kong with RBAC enabled.
+
+As Kong ships with a handy set of default RBAC roles (the `super-admin`,
+the `admin`, and the `read-only`), the task of creating a Super Admin user
+is quite easy:
+
+Create the RBAC user, named `super-admin`:
+
+```
+http :8001/rbac/users name=super-admin
+{
+ "user_token": "M8J5A88xKXa7FNKsMbgLMjkm6zI2anOY",
+ "id": "da80838d-49f8-40f6-b673-6fff3e2c305b",
+ "enabled": true,
+ "created_at": 1531009435000,
+ "name": "super-admin"
+}
+```
+
+As the `super-admin` user name coincides with an existing `super-admin`
+role, it gets automatically added to the `super-admin` role, which can be
+confirmed with the following command:
+
+```
+http :8001/rbac/users/super-admin/roles
+{
+ "roles": [
+ {
+ "comment": "Full access to all endpoints, across all workspaces",
+ "created_at": 1531009724000,
+ "name": "super-admin",
+ "id": "b924ac91-e83f-4136-a5a4-4a7ff92594a8"
+ }
+ ],
+ "user": {
+ "created_at": 1531009858000,
+ "id": "e6897cc0-0c34-4a9c-9f0b-cc65b4f04d68",
+ "name": "super-admin",
+ "enabled": true,
+ "user_token": "vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8"
+ }
+}
+```
+
+## Enforcing RBAC
+
+As the `super-admin` user has just been created, the Kong Admin may now
+restart Kong with RBAC enforced, with, e.g.:
+
+```
+KONG_ENFORCE_RBAC=on kong restart
+```
+
+**NOTE**: This is one of the possible ways of enforcing RBAC and restarting
+Kong; another possibility is editing the Kong configuration file and
+restarting.
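For example, the configuration-file route would amount to an edit like the following (an illustrative snippet; the file path varies by installation):

```
# kong.conf (path assumed; adjust for your installation)
enforce_rbac = on
```

followed by `kong restart`.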
+
+Before we move on, note that we will be using the Super Admin user, but we
+could, in fact, proceed without RBAC enabled and have our Kong Admin do
+all the work of setting up the RBAC hierarchy. We want, however, to stress
+the fact that RBAC is powerful enough to allow a flexible separation of tasks.
+To summarize:
+
+- **Kong Admin**: this user has physical access to Kong infrastructure; her
+task is to bootstrap the Kong cluster as well as its configuration, including
+initial RBAC users;
+- **RBAC Super Admin**: created by the Kong Admin, has the role of managing
+RBAC users, roles, etc; this could all be done by the **Kong Admin**, but let's
+give him a break.
+
+## Super Admin creates the teams' Workspaces
+
+The Super Admin will now set up our 3 teams (teamA, teamB, and teamC),
+creating one workspace and one admin for each. Enough talking!
+
+Creating workspaces for each team (this overlaps a bit with
+[Workspaces Examples][workspaces-examples], yes, but it will make our
+exploration of RBAC + Workspaces easier):
+
+**Team A**:
+
+```
+http :8001/workspaces name=teamA Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "name": "teamA",
+ "created_at": 1531014100000,
+ "id": "1412f3a6-4d9b-4b9d-964e-60d8d63a9d46"
+}
+
+```
+
+**Team B**:
+
+```
+http :8001/workspaces name=teamB Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "name": "teamB",
+ "created_at": 1531014143000,
+ "id": "7dee8c56-c6db-4125-b87a-b508baa33c66"
+}
+```
+
+**Team C**:
+
+```
+http :8001/workspaces name=teamC Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "name": "teamC",
+ "created_at": 1531014171000,
+ "id": "542c8662-17cc-49eb-af50-6eb14f3b2e8a"
+}
+```
+
+**NOTE**: this is the RBAC Super Admin creating workspaces; note his
+token being passed in through the `Kong-Admin-Token` HTTP header.
+
+## Super Admin Creates One Admin for Each Team
+
+**Team A**:
+
+```
+http :8001/teamA/rbac/users name=adminA Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "user_token": "qv1VLIpl8kHj7lC1QOKwRdCMXanqEDii",
+ "id": "4d315ff9-8c1a-4844-9ea2-21b16204a154",
+ "enabled": true,
+ "created_at": 1531015165000,
+ "name": "adminA"
+}
+```
+
+**Team B**:
+
+```
+http :8001/teamB/rbac/users name=adminB Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "user_token": "IX5vHVgYqM40tLcctdmzRtHyfxB4ToYv",
+ "id": "49641fc0-8c9d-4507-bc7a-2acac8f2903a",
+ "enabled": true,
+ "created_at": 1531015221000,
+ "name": "adminB"
+}
+```
+
+**Team C**:
+
+```
+http :8001/teamC/rbac/users name=adminC Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "user_token": "w2f7tsuUW4BerXocZIMRQHE84nK2ZAo7",
+ "id": "74643f69-8852-49f9-b363-21971bac4f52",
+ "enabled": true,
+ "created_at": 1531015304000,
+ "name": "adminC"
+}
+```
+
+With this, all of the teams have one admin and each admin can only be seen
+in his corresponding workspace. To verify:
+
+```
+http :8001/teamA/rbac/users Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "total": 1,
+ "data": [
+ {
+ "created_at": 1531014784000,
+ "id": "1faaacd1-709f-4762-8c3e-79f268ec8faf",
+ "name": "adminA",
+ "enabled": true,
+ "user_token": "n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG"
+ }
+ ]
+}
+```
+
+Similarly, workspaces teamB and teamC only show their respective admins:
+
+```
+http :8001/teamB/rbac/users Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "total": 1,
+ "data": [
+ {
+ "created_at": 1531014805000,
+ "id": "3a829408-c1ee-4764-8222-2d280a5de441",
+ "name": "adminB",
+ "enabled": true,
+ "user_token": "C8b6kTTN10JFyU63ORjmCQwVbvK4maeq"
+ }
+ ]
+}
+```
+
+```
+http :8001/teamC/rbac/users Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "total": 1,
+ "data": [
+ {
+ "created_at": 1531014813000,
+ "id": "84d43cdb-5274-4b74-ad22-615e50f005e3",
+ "name": "adminC",
+ "enabled": true,
+ "user_token": "zN5Nj8U1MiGR7vVQKvl8odaGBDI6mjgY"
+ }
+ ]
+}
+```
+
+## Super Admin Creates Admin Roles for Teams
+
+Super Admin is now done creating RBAC Admin users for each team; his next
+task is to create admin roles that will effectively grant permissions to admin
+users.
+
+The admin role must have access to all of the Admin API, restricted to its
+workspace.
+
+Setting up the Admin role (pay close attention to the request parameters):
+
+```
+http :8001/teamA/rbac/roles/ name=admin Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "created_at": 1531016728000,
+ "id": "d40e61ab-8dad-4ef2-a48b-d11379f7b8d1",
+ "name": "admin"
+}
+```
+
+Creating role endpoint permissions:
+
+```
+http :8001/teamA/rbac/roles/admin/endpoints/ endpoint=* workspace=teamA actions=* Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "total": 1,
+ "data": [
+ {
+ "endpoint": "*",
+ "created_at": 1531017322000,
+ "role_id": "d40e61ab-8dad-4ef2-a48b-d11379f7b8d1",
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "negative": false,
+ "workspace": "teamA"
+ }
+ ]
+}
+```
+
+The next logical step is to add the adminA user (admin of Team A) to the Admin
+role in his workspace:
+
+```
+http :8001/teamA/rbac/users/adminA/roles/ roles=admin Kong-Admin-Token:vajeOlkbsn0q0VD9qw9B3nHYOErgY7b8
+{
+ "roles": [
+ {
+ "comment": "Default user role generated for adminA",
+ "created_at": 1531014784000,
+ "id": "e2941b41-92a4-4f49-be89-f1a452bdecd0",
+ "name": "adminA"
+ },
+ {
+ "created_at": 1531016728000,
+ "id": "d40e61ab-8dad-4ef2-a48b-d11379f7b8d1",
+ "name": "admin"
+ }
+ ],
+ "user": {
+ "created_at": 1531014784000,
+ "id": "1faaacd1-709f-4762-8c3e-79f268ec8faf",
+ "name": "adminA",
+ "enabled": true,
+ "user_token": "n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG"
+ }
+}
+```
+
+Note the admin role in the list above.
+
+With these steps, Team A's admin user is now able to manage his team. To
+validate that, let's try to list RBAC users in Team B using Team A's admin
+user token, and see that we are not allowed to do so:
+
+```
+http :8001/teamB/rbac/users Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "message": "Invalid RBAC credentials"
+}
+```
+
+Said admin is, however, allowed to list RBAC users in Team A's workspace:
+
+```
+http :8001/teamA/rbac/users Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "total": 1,
+ "data": [
+ {
+ "created_at": 1531014784000,
+ "id": "1faaacd1-709f-4762-8c3e-79f268ec8faf",
+ "name": "adminA",
+ "enabled": true,
+ "user_token": "n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG"
+ }
+ ]
+}
+```
+
+If the same procedure is repeated for Team B and Team C, they will end up with
+a similar set up, with an admin role and an admin user, both restricted to the
+team's workspace.
+
+And so the Super Admin ends his participation; individual team admins are now
+able to set up their teams' users and entities!
+
+## Team Admins Create Team Regular Users
+
+From this point on, team admins are able to drive the process; the next logical
+step is for Team users to be created; such team users could be, for example,
+engineers that are part of Team A (or B or C). Let's go ahead and do that,
+using Admin A's user token.
+
+Before regular users can be created, a role needs to be available for them.
+Such a role needs to have permissions to all of Admin API endpoints, except
+RBAC and Workspaces; regular users will not need access to these in general
+and, if they do, the Admin can grant it.
+
+**Creating the regular users role**:
+
+```
+http :8001/teamA/rbac/roles/ name=users Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "created_at": 1531020346000,
+ "id": "9846b92c-6820-4741-ac31-425b3d6abc5b",
+ "name": "users"
+}
+```
+
+**Creating permissions in the regular users role**:
+
+First, permission to all of the Admin API: a positive permission on `*`:
+
+```
+http :8001/teamA/rbac/roles/users/endpoints/ endpoint=* workspace=teamA actions=* Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "endpoint": "*",
+ "created_at": 1531020573000,
+ "role_id": "9846b92c-6820-4741-ac31-425b3d6abc5b",
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "negative": false,
+ "workspace": "teamA"
+}
+```
+
+Then, filter out RBAC and workspaces with negative permissions:
+
+```
+http :8001/teamA/rbac/roles/users/endpoints/ endpoint=/rbac/* workspace=teamA actions=* Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "endpoint": "/rbac/*",
+ "created_at": 1531020744000,
+ "role_id": "9846b92c-6820-4741-ac31-425b3d6abc5b",
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "negative": true,
+ "workspace": "teamA"
+}
+```
+
+```
+http :8001/teamA/rbac/roles/users/endpoints/ endpoint=/workspaces/* workspace=teamA actions=* Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "endpoint": "/workspaces/*",
+ "created_at": 1531020778000,
+ "role_id": "9846b92c-6820-4741-ac31-425b3d6abc5b",
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "negative": true,
+ "workspace": "teamA"
+}
+```
+
+**IMPORTANT**: as explained in the [Wildcards in Permissions][rbac-wildcards]
+section, the meaning of `*` is not the expected generic globbing one might
+be used to. As such, `/rbac/*` or `/workspaces/*` do not match all of the
+RBAC and Workspaces endpoints. For example, to cover all of the RBAC API,
+one would have to define permissions for the following endpoints:
+
+- `/rbac/*`
+- `/rbac/*/*`
+- `/rbac/*/*/*`
+- `/rbac/*/*/*/*`
+- `/rbac/*/*/*/*/*`
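The segment-wise matching behavior described above can be approximated in a few lines (an illustration of the rule, not Kong's actual matcher: each `*` stands for exactly one URL segment, never a run of segments):

```python
def endpoint_matches(pattern, endpoint):
    """Segment-wise match: '*' matches exactly one URL segment,
    so this is not a shell-style glob."""
    p_segs = [s for s in pattern.split("/") if s]
    e_segs = [s for s in endpoint.split("/") if s]
    if len(p_segs) != len(e_segs):
        return False
    return all(p == "*" or p == e for p, e in zip(p_segs, e_segs))

# '/rbac/*' covers '/rbac/users' but not the deeper '/rbac/users/foo/roles';
# a deeper path needs its own pattern with one '*' per segment:
assert endpoint_matches("/rbac/*", "/rbac/users")
assert not endpoint_matches("/rbac/*", "/rbac/users/foo/roles")
assert endpoint_matches("/rbac/*/*/*", "/rbac/users/foo/roles")
assert endpoint_matches("/services/*/plugins", "/services/service1/plugins")
```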
+
+Team A just got 3 new members: foogineer, bargineer, and bazgineer. Admin A
+will welcome them to the team by creating RBAC users for them and giving them
+access to Kong!
+
+Create foogineer:
+
+```
+http :8001/teamA/rbac/users name=foogineer Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "created_at": 1531019797000,
+ "id": "0b4111da-2827-4767-8651-a327f7a559e9",
+ "name": "foogineer",
+ "enabled": true,
+ "user_token": "dNeYvYAwvjOJdoReVJZXF8vLBXQioKkI"
+}
+```
+
+Add foogineer to the `users` role:
+
+```
+http :8001/teamA/rbac/users/foogineer/roles roles=users Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "roles": [
+ {
+ "comment": "Default user role generated for foogineer",
+ "created_at": 1531019797000,
+ "id": "125c4212-b882-432d-a323-9cbe38b1d0df",
+ "name": "foogineer"
+ },
+ {
+ "created_at": 1531020346000,
+ "id": "9846b92c-6820-4741-ac31-425b3d6abc5b",
+ "name": "users"
+ }
+ ],
+ "user": {
+ "created_at": 1531019797000,
+ "id": "0b4111da-2827-4767-8651-a327f7a559e9",
+ "name": "foogineer",
+ "enabled": true,
+ "user_token": "dNeYvYAwvjOJdoReVJZXF8vLBXQioKkI"
+ }
+}
+```
+
+Create bargineer:
+
+```
+http :8001/teamA/rbac/users name=bargineer Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "created_at": 1531019837000,
+ "id": "25dfa68e-32e8-48d8-815f-6fedfd2fb4a6",
+ "name": "bargineer",
+ "enabled": true,
+ "user_token": "eZj3WUc46wO3zEJbLP3Y4VGvNaUgGlyv"
+}
+```
+
+Add bargineer to the `users` role:
+
+```
+http :8001/teamA/rbac/users/bargineer/roles roles=users Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "roles": [
+ {
+ "comment": "Default user role generated for bargineer",
+ "created_at": 1531019837000,
+ "id": "3edb00c2-9ae1-423d-ac81-bec702c29e37",
+ "name": "bargineer"
+ },
+ {
+ "created_at": 1531020346000,
+ "id": "9846b92c-6820-4741-ac31-425b3d6abc5b",
+ "name": "users"
+ }
+ ],
+ "user": {
+ "created_at": 1531019837000,
+ "id": "25dfa68e-32e8-48d8-815f-6fedfd2fb4a6",
+ "name": "bargineer",
+ "enabled": true,
+ "user_token": "eZj3WUc46wO3zEJbLP3Y4VGvNaUgGlyv"
+ }
+}
+```
+
+Create bazgineer:
+
+```
+http :8001/teamA/rbac/users name=bazgineer Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "created_at": 1531019937000,
+ "id": "ea7207d7-0d69-427b-b288-ce696b7f4690",
+ "name": "bazgineer",
+ "enabled": true,
+ "user_token": "r8NhaT213Zm8o1woQF4ZyQyCVjFRgGp3"
+}
+```
+
+Add bazgineer to the `users` role:
+
+```
+http :8001/teamA/rbac/users/bazgineer/roles roles=users Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "roles": [
+ {
+ "comment": "Default user role generated for bazgineer",
+ "created_at": 1531019937000,
+ "id": "fa409bb6-c86c-45d2-8a6b-ac8e71de2cc9",
+ "name": "bazgineer"
+ },
+ {
+ "created_at": 1531020346000,
+ "name": "users",
+ "id": "9846b92c-6820-4741-ac31-425b3d6abc5b"
+ }
+ ],
+ "user": {
+ "created_at": 1531019937000,
+ "id": "ea7207d7-0d69-427b-b288-ce696b7f4690",
+ "name": "bazgineer",
+ "enabled": true,
+ "user_token": "r8NhaT213Zm8o1woQF4ZyQyCVjFRgGp3"
+ }
+}
+```
+
+## Regular Team Users use their tokens!
+
+foogineer, bargineer, and bazgineer all have gotten their RBAC user tokens
+from their Team A admin, and are now allowed to explore Kong, within the
+confines of their Team A workspace. Let's validate they can in fact do anything
+they wish, except over RBAC and Workspaces.
+
+Try listing Workspaces:
+
+```
+http :8001/teamA/workspaces/ Kong-Admin-Token:dNeYvYAwvjOJdoReVJZXF8vLBXQioKkI
+{
+ "message": "foogineer, you do not have permissions to read this resource"
+}
+```
+
+Enable some plugin, e.g., key-auth:
+
+```
+http :8001/teamA/plugins name=key-auth Kong-Admin-Token:dNeYvYAwvjOJdoReVJZXF8vLBXQioKkI
+{
+ "created_at": 1531021732000,
+ "config": {
+ "key_in_body": false,
+ "run_on_preflight": true,
+ "anonymous": "",
+ "hide_credentials": false,
+ "key_names": [
+ "apikey"
+ ]
+ },
+ "id": "cdc85ef0-804b-4f92-aafd-3ff58512e445",
+ "enabled": true,
+ "name": "key-auth"
+}
+```
+
+List currently enabled plugins:
+
+```
+http :8001/teamA/plugins Kong-Admin-Token:dNeYvYAwvjOJdoReVJZXF8vLBXQioKkI
+{
+ "total": 1,
+ "data": [
+ {
+ "created_at": 1531021732000,
+ "config": {
+ "key_in_body": false,
+ "run_on_preflight": true,
+ "anonymous": "",
+ "hide_credentials": false,
+ "key_names": [
+ "apikey"
+ ]
+ },
+ "id": "cdc85ef0-804b-4f92-aafd-3ff58512e445",
+ "name": "key-auth",
+ "enabled": true
+ }
+ ]
+}
+```
+
+This ends our use case tutorial; it demonstrates the power of RBAC and
+workspaces with a real-world scenario. Next, we will approach **Entity-Level
+RBAC**, an extension of our powerful access control to entity-level granularity.
+
+## Entity-Level RBAC: a Primer
+
+Kong Enterprise's new RBAC implementation goes one step further in permissions
+granularity: in addition to "endpoint" permissions, it supports entity-level
+permissions, meaning that particular entities, identified by their unique ID,
+can be allowed or disallowed access in a role.
+
+Refreshing our minds, RBAC is [enforced][rbac-enforce] with the `enforce_rbac`
+configuration directive, or with its `KONG_ENFORCE_RBAC` environment variable
+counterpart. This directive is an enum with four possible values:
+
+- `on`: similarly to the previous RBAC implementation, applies Endpoint-level
+access control
+- `entity`: applies **only** Entity-level access control
+- `both`: applies **both Endpoint and Entity level access control**
+- `off`: disables RBAC enforcement
+
+If one sets it to either `entity` or `both`, Kong will enforce entity-level
+access control. However, as with endpoint-level access control, permissions
+must be bootstrapped before enforcement is enabled.
+
+## Creating Entity-Level Permissions
+
+Team A just got one new, temporary, team member: qux. Admin A, the admin of
+Team A, has already created his qux RBAC user; he needs, however, to limit
+the access qux has over entities in the Team A workspace, giving him read access
+to only a couple of entities, say, a Service and a Route. For that, he will
+use Entity-Level RBAC.
+
+**Admin A creates a role for the temporary user qux**:
+
+```
+http :8001/teamA/rbac/roles name=qux-role Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "name": "qux-role",
+ "created_at": 1531065975000,
+ "id": "ffe93269-7993-4308-965e-0286d0bc87b9"
+}
+```
+
+We will assume the following entities exist:
+
+A service:
+
+```
+http :8001/teamA/services Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "next": null,
+ "data": [
+ {
+ "host": "httpbin.org",
+ "created_at": 1531066074,
+ "connect_timeout": 60000,
+ "id": "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43",
+ "protocol": "http",
+ "name": "service1",
+ "read_timeout": 60000,
+ "port": 80,
+ "path": null,
+ "updated_at": 1531066074,
+ "retries": 5,
+ "write_timeout": 60000
+ }
+ ]
+}
+```
+
+and a Route to that Service:
+
+```
+http :8001/teamA/routes Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "next": null,
+ "data": [
+ {
+ "created_at": 1531066253,
+ "id": "d25afc46-dc59-48b2-b04f-d3ebe19f6d4b",
+ "hosts": null,
+ "updated_at": 1531066253,
+ "preserve_host": false,
+ "regex_priority": 0,
+ "service": {
+ "id": "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43"
+ },
+ "paths": [
+ "/anything"
+ ],
+ "methods": null,
+ "strip_path": false,
+ "protocols": [
+ "http",
+ "https"
+ ]
+ }
+ ]
+}
+```
+
+**Admin A creates entity permissions in qux-role**:
+
+Add service1, whose ID is `3ed24101-19a7-4a0b-a10f-2f47bcd4ff43`:
+
+```
+http :8001/teamA/rbac/roles/qux-role/entities entity_id=3ed24101-19a7-4a0b-a10f-2f47bcd4ff43 actions=read Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "created_at": 1531066684000,
+ "role_id": "ffe93269-7993-4308-965e-0286d0bc87b9",
+ "entity_id": "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43",
+ "negative": false,
+ "entity_type": "services",
+ "actions": [
+ "read"
+ ]
+}
+```
+
+Add the Route, whose ID is `d25afc46-dc59-48b2-b04f-d3ebe19f6d4b`:
+
+```
+http :8001/teamA/rbac/roles/qux-role/entities entity_id=d25afc46-dc59-48b2-b04f-d3ebe19f6d4b actions=read Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "created_at": 1531066728000,
+ "role_id": "ffe93269-7993-4308-965e-0286d0bc87b9",
+ "entity_id": "d25afc46-dc59-48b2-b04f-d3ebe19f6d4b",
+ "negative": false,
+ "entity_type": "routes",
+ "actions": [
+ "read"
+ ]
+}
+```
+
+**Admin A adds qux to his role**:
+
+```
+http :8001/teamA/rbac/users/qux/roles roles=qux-role Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "roles": [
+ {
+ "comment": "Default user role generated for qux",
+ "created_at": 1531065373000,
+ "name": "qux",
+ "id": "31614171-4174-42b4-9fae-43c9ce14830f"
+ },
+ {
+ "created_at": 1531065975000,
+ "name": "qux-role",
+ "id": "ffe93269-7993-4308-965e-0286d0bc87b9"
+ }
+ ],
+ "user": {
+ "created_at": 1531065373000,
+ "id": "4d87bf78-5824-4756-b0d0-ceaa9bd9b2d5",
+ "name": "qux",
+ "enabled": true,
+ "user_token": "sUnv6uBehM91amYRNWESsgX3HzqoBnR5"
+ }
+}
+```
+
+Check that the permissions appear listed:
+
+```
+http :8001/teamA/rbac/users/qux/permissions Kong-Admin-Token:n5bhjgv0speXp4N7rSUzUj8PGnl3F5eG
+{
+ "entities": {
+ "d25afc46-dc59-48b2-b04f-d3ebe19f6d4b": {
+ "actions": [
+ "read"
+ ],
+ "negative": false
+ },
+ "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43": {
+ "actions": [
+ "read"
+ ],
+ "negative": false
+ }
+ },
+ "endpoints": {}
+}
+```
+
+That is, two entity permissions and no endpoint permissions.
+
+Admin A is done setting up qux, and qux can now use his user token to read
+his two entities over Kong's Admin API.
+
+We will assume that Admin A [enabled entity-level enforcement][rbac-enforce].
+Note that as qux has **no endpoint-level permissions**, if both endpoint and
+entity-level enforcement are enabled, he will not be able to read his entities:
+endpoint-level validation comes before entity-level.
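The evaluation order can be sketched as follows (an illustration of the precedence just described, not Kong's code; the permission sets are simplified to plain Python sets):

```python
def can_read(endpoint_perms, entity_perms, endpoint, entity_id, mode="both"):
    """With enforce_rbac=both, the endpoint-level check runs first: a user
    with no endpoint permission is rejected before entity-level permissions
    are even consulted."""
    if mode in ("on", "both") and endpoint not in endpoint_perms:
        return False
    if mode in ("entity", "both") and entity_id not in entity_perms:
        return False
    return True

# qux has entity permissions but no endpoint permissions:
qux_endpoints = set()
qux_entities = {"3ed24101-19a7-4a0b-a10f-2f47bcd4ff43"}

# Under 'both', the missing endpoint permission blocks him...
assert not can_read(qux_endpoints, qux_entities,
                    "/teamA/services/service1",
                    "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43", mode="both")
# ...while under 'entity' alone, his entity permission suffices.
assert can_read(qux_endpoints, qux_entities,
                "/teamA/services/service1",
                "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43", mode="entity")
```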
+
+**qux tries listing all RBAC users**
+
+```
+http :8001/teamA/rbac/users/ Kong-Admin-Token:sUnv6uBehM91amYRNWESsgX3HzqoBnR5
+{
+ "message": "qux, you do not have permissions to read this resource"
+}
+```
+
+**qux tries listing all Workspaces**
+
+```
+http :8001/teamA/workspaces/ Kong-Admin-Token:sUnv6uBehM91amYRNWESsgX3HzqoBnR5
+{
+ "message": "qux, you do not have permissions to read this resource"
+}
+```
+
+**qux tries to access service1**
+
+```
+http :8001/teamA/services/service1 Kong-Admin-Token:sUnv6uBehM91amYRNWESsgX3HzqoBnR5
+{
+ "host": "httpbin.org",
+ "created_at": 1531066074,
+ "connect_timeout": 60000,
+ "id": "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43",
+ "protocol": "http",
+ "name": "service1",
+ "read_timeout": 60000,
+ "port": 80,
+ "path": null,
+ "updated_at": 1531066074,
+ "retries": 5,
+ "write_timeout": 60000
+}
+```
+
+Similarly, he can access his Route:
+
+```
+http :8001/teamA/routes/d25afc46-dc59-48b2-b04f-d3ebe19f6d4b Kong-Admin-Token:sUnv6uBehM91amYRNWESsgX3HzqoBnR5
+{
+ "created_at": 1531066253,
+ "strip_path": false,
+ "hosts": null,
+ "preserve_host": false,
+ "regex_priority": 0,
+ "updated_at": 1531066253,
+ "paths": [
+ "/anything"
+ ],
+ "service": {
+ "id": "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43"
+ },
+ "methods": null,
+ "protocols": [
+ "http",
+ "https"
+ ],
+ "id": "d25afc46-dc59-48b2-b04f-d3ebe19f6d4b"
+}
+```
+
+## Closing Remarks
+
+We will end this chapter with a few closing remarks.
+
+## Wildcards in Permissions
+
+RBAC supports the use of wildcards (represented by the `*` character) in many
+aspects of permissions:
+
+**Creating endpoint permissions: `/rbac/roles/:role/endpoints`**
+
+To create an endpoint permission, one must pass the parameters below, all of
+which can be replaced by a `*` character:
+
+- `endpoint`: `*` matches **any endpoint**
+- `workspace`: `*` matches **any workspace**
+- `actions`: `*` evaluates to **all actions: read, update, create, delete**
+
+**Special case**: `endpoint`, in addition to a single `*`, also accepts `*`
+within the endpoint itself, replacing a URL segment between `/`; for example,
+all of the following are valid endpoints:
+
+- `/rbac/*`: where `*` replaces any possible segment, e.g., `/rbac/users`,
+`/rbac/roles`, etc.
+- `/services/*/plugins`: `*` matches any Service name or ID
+
+Note, however, that `*` **is not** a generic, shell-like, glob pattern.
+
+If `workspace` is omitted, it defaults to the current request's workspace. For
+example, a role-endpoint permission created with `/teamA/roles/admin/endpoints`
+is scoped to workspace `teamA`.
+
+**Creating entity permissions: `/rbac/roles/:role/entities`**
+
+Similarly, for entity permissions, the following parameters accept a `*`
+character:
+
+- `entity_id`: `*` matches **any entity ID**
+
+## Entities Concealing in Entity-Level RBAC
+
+With Entity-Level RBAC enabled, endpoints that list all entities of a
+particular collection will only list entities that the user has access to;
+in the example above, if user qux listed all Routes, he would get in the
+response only the entities he has access to, even though there could be more:
+
+```
+http :8001/teamA/routes Kong-Admin-Token:sUnv6uBehM91amYRNWESsgX3HzqoBnR5
+{
+ "next": null,
+ "data": [
+ {
+ "created_at": 1531066253,
+ "id": "d25afc46-dc59-48b2-b04f-d3ebe19f6d4b",
+ "hosts": null,
+ "updated_at": 1531066253,
+ "preserve_host": false,
+ "regex_priority": 0,
+ "service": {
+ "id": "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43"
+ },
+ "paths": [
+ "/anything"
+ ],
+ "methods": null,
+ "strip_path": false,
+ "protocols": [
+ "http",
+ "https"
+ ]
+ }
+ ]
+}
+```
+
+Some Kong endpoints carry a `total` field in responses; with Entity-Level RBAC
+enabled, the global count of entities is displayed, but only entities the user
+has access to are themselves shown; for example, if Team A has a number of
+plugins configured, but qux only has access to one of them, the following
+would be the expected output for a GET request to `/teamA/plugins`:
+
+```
+http :8001/teamA/plugins Kong-Admin-Token:sUnv6uBehM91amYRNWESsgX3HzqoBnR5
+{
+ "total": 2,
+ "data": [
+ {
+ "created_at": 1531070344000,
+ "config": {
+ "key_in_body": false,
+ "run_on_preflight": true,
+ "anonymous": "",
+ "hide_credentials": false,
+ "key_names": [
+ "apikey"
+ ]
+ },
+ "id": "8813dd0b-3e9d-4bcf-8a10-3112654f86e7",
+ "name": "key-auth",
+ "enabled": true
+ }
+ ]
+}
+```
+
+Notice the `total` field is 2, but qux only got one entity in the response.
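The relationship between `total` and the filtered `data` can be sketched like this (an illustration of the concealing behavior, not Kong's code):

```python
def conceal(entities, readable_ids):
    """Entity-level concealing: 'total' reflects the full collection,
    while 'data' contains only the entities the user may read."""
    return {
        "total": len(entities),
        "data": [e for e in entities if e["id"] in readable_ids],
    }

# Two plugins exist, but the user may read only one of them:
plugins = [{"id": "8813dd0b", "name": "key-auth"},
           {"id": "1b3f2a9c", "name": "rate-limiting"}]
resp = conceal(plugins, readable_ids={"8813dd0b"})
assert resp["total"] == 2
assert len(resp["data"]) == 1 and resp["data"][0]["name"] == "key-auth"
```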
+
+## Creating Entities in Entity-Level RBAC
+
+As entity-level RBAC provides access control to individual existing entities,
+it does not apply to creation of new entities; for that, endpoint-level
+permissions must be configured and enforced. For example, if endpoint-level
+permissions are not enforced, qux will be able to create new entities:
+
+```
+http :8001/teamA/routes paths[]=/anything service.id=3ed24101-19a7-4a0b-a10f-2f47bcd4ff43 strip_path=false Kong-Admin-Token:sUnv6uBehM91amYRNWESsgX3HzqoBnR5
+{
+ "created_at": 1531070828,
+ "strip_path": false,
+ "hosts": null,
+ "preserve_host": false,
+ "regex_priority": 0,
+ "updated_at": 1531070828,
+ "paths": [
+ "/anything"
+ ],
+ "service": {
+ "id": "3ed24101-19a7-4a0b-a10f-2f47bcd4ff43"
+ },
+ "methods": null,
+ "protocols": [
+ "http",
+ "https"
+ ],
+ "id": "6ee76f74-3c96-46a9-ae48-72df0717d244"
+}
+```
+
+and qux will automatically have permissions to perform any action on entities
+he creates.
+
+---
+
+[rbac-overview]: /enterprise/{{page.kong_version}}/rbac/overview
+[rbac-enforce]: /enterprise/{{page.kong_version}}/rbac/overview#enforcing-rbac
+[rbac-wildcards]: /enterprise/{{page.kong_version}}/rbac/examples/#wildcards-in-permissions
+[rbac-admin]: /enterprise/{{page.kong_version}}/rbac/admin-api
+[workspaces-examples]: /enterprise/{{page.kong_version}}/rbac/examples
+[getting-started-guide]: /enterprise/{{page.kong_version}}/getting-started/quickstart/#1-start-kong-enterprise
diff --git a/app/enterprise/1.3-x/admin-api/rbac/reference.md b/app/enterprise/1.3-x/admin-api/rbac/reference.md
new file mode 100644
index 000000000000..5ca178b74a59
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/rbac/reference.md
@@ -0,0 +1,952 @@
+---
+title: RBAC Reference
+book: rbac
+---
+
+## Introduction
+
+Kong Enterprise's RBAC feature is configurable through Kong's [Admin
+API] or via the [Kong Manager].
+
+There are four basic entities involved in RBAC.
+
+- **User**: The entity interacting with the system. Can be associated with
+  zero, one, or more roles. Example: user `bob` has token `1234`.
+- **Role**: A set of permissions (`role_endpoint` and
+  `role_entity`). Has a name and can be associated with zero, one, or
+  more permissions. Example: user `bob` is associated with the role
+  `developer`.
+- **role_endpoint**: A set of actions (`read`, `create`, `update`,
+  `delete`), enabled or disabled (see the `negative`
+  parameter), over an `endpoint`. Example: role `developer` has 1
+  role_endpoint: `read & write` on the endpoint `/routes`.
+- **role_entity**: A set of actions (`read`, `create`, `update`,
+  `delete`), enabled or disabled (see the `negative`
+  parameter), over an `entity`. Example: role `developer` has 1
+  role_entity: `read & write & delete` on the entity
+  `283fccff-2d4f-49a9-8730-dc8b71ec2245`.
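How these four entities fit together can be sketched in code (illustrative Python only; the field names and the permission-resolution logic are simplifications, not Kong's actual schema):

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class RoleEndpoint:
    endpoint: str
    actions: Set[str]          # subset of {"read", "create", "update", "delete"}
    negative: bool = False     # True disables the actions instead of enabling

@dataclass
class Role:
    name: str
    endpoints: List[RoleEndpoint] = field(default_factory=list)

@dataclass
class User:
    name: str
    token: str
    roles: List[Role] = field(default_factory=list)

def allowed(user: User, endpoint: str, action: str) -> bool:
    """A negative permission on the endpoint overrides any positive one."""
    verdict = False
    for role in user.roles:
        for perm in role.endpoints:
            if perm.endpoint == endpoint and action in perm.actions:
                if perm.negative:
                    return False
                verdict = True
    return verdict

developer = Role("developer", [RoleEndpoint("/routes", {"read", "create"})])
bob = User("bob", "1234", [developer])
assert allowed(bob, "/routes", "read")
assert not allowed(bob, "/consumers", "read")
```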
+
+## Add a User
+**Endpoint**
+
+/rbac/users
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name` | The RBAC user name. |
+| `user_token` | The authentication token to be presented to the Admin API. The value will be hashed and cannot be fetched in plaintext. |
+| `enabled`<br>*optional* | A flag to enable or disable the user. By default, users are enabled. |
+| `comment`<br>*optional* | A string describing the RBAC user object. |
+
+**Response**
+
+```
+HTTP 201 Created
+```
+
+```json
+{
+ "comment": null,
+ "created_at": 1557522650,
+ "enabled": true,
+ "id": "fa6881b2-f49f-4007-9475-577cd21d34f4",
+ "name": "doc_knight",
+ "user_token": "$2b$09$Za30VKAAAmyoB9zF2PNEF.9hgKcN2BdKkptPMCubPK/Ps08lzZjYG",
+ "user_token_ident": "4d870"
+}
+```
+___
+
+## Retrieve a User
+**Endpoint**
+
+/rbac/users/{name_or_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC user name or UUID. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "created_at": 1557522650,
+ "enabled": true,
+ "id": "fa6881b2-f49f-4007-9475-577cd21d34f4",
+ "name": "doc_lord",
+ "user_token": "$2b$09$Za30VKAAAmyoB9zF2PNEF.9hgKcN2BdKkptPMCubPK/Ps08lzZjYG",
+ "user_token_ident": "4d870"
+}
+```
+___
+
+## List Users
+**Endpoint**
+
+/rbac/users/
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "data": [
+ {
+ "comment": null,
+ "created_at": 1557512629,
+ "enabled": true,
+ "id": "f035f120-a95e-4327-b2ae-8fa264601d75",
+ "name": "doc_lord",
+ "user_token": "$2b$09$TIMneYcTosdG9WbzRsqcweAS2zote8g6I8HqXAtbFHR1pds2ymsh6",
+ "user_token_ident": "88ea3"
+ },
+ {
+ "comment": null,
+ "created_at": 1557522650,
+ "enabled": true,
+ "id": "fa6881b2-f49f-4007-9475-577cd21d34f4",
+ "name": "doc_knight",
+ "user_token": "$2b$09$Za30VKAAAmyoB9zF2PNEF.9hgKcN2BdKkptPMCubPK/Ps08lzZjYG",
+ "user_token_ident": "4d870"
+ }
+ ],
+ "next": null
+}
+```
+
+⚠️ **Note**: **RBAC Users** associated with **Admins** will _not_ be
+listed with **`GET`** `/rbac/users`. Instead, use
+[**`GET`** `/admins`](/enterprise/{{page.kong_version}}/admin-api/admins/reference/#list-admins)
+to list all **Admins**.
+
+___
+
+## Update a User
+**Endpoint**
+
+/rbac/users/{name_or_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC user name or UUID. |
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `user_token`<br>*optional* | The authentication token to be presented to the Admin API. If this value is not present, the token will automatically be generated. |
+| `enabled`<br>*optional* | A flag to enable or disable the user. By default, users are enabled. |
+| `comment`<br>*optional* | A string describing the RBAC user object. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "comment": "this comment came from a patch request",
+ "created_at": 1557522650,
+ "enabled": true,
+ "id": "fa6881b2-f49f-4007-9475-577cd21d34f4",
+ "name": "donut_lord",
+ "user_token": "$2b$09$Za30VKAAAmyoB9zF2PNEF.9hgKcN2BdKkptPMCubPK/Ps08lzZjYG",
+ "user_token_ident": "4d870"
+}
+```
+___
+
+## Delete a User
+**Endpoint**
+
+/rbac/users/{name_or_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC user name or UUID. |
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+___
+
+## Add a Role
+**Endpoint**
+
+/rbac/roles
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name` | The RBAC role name. |
+| `comment`<br>*optional* | A string describing the RBAC role object. |
+
+**Response**
+
+```
+HTTP 201 Created
+```
+
+```json
+{
+ "comment": null,
+ "created_at": 1557532241,
+ "id": "b5c5cfd4-3330-4796-9b7b-6026e91e3ad6",
+ "is_default": false,
+ "name": "service_reader"
+}
+```
+___
+
+## Retrieve a Role
+**Endpoint**
+
+/rbac/roles/{name_or_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "created_at": 1557532241,
+ "id": "b5c5cfd4-3330-4796-9b7b-6026e91e3ad6",
+ "is_default": false,
+ "name": "service_reader"
+}
+```
+___
+
+## List Roles
+**Endpoint**
+
+/rbac/roles
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "data": [
+ {
+      "comment": "Full access to all endpoints, across all workspaces, except RBAC Admin API",
+ "created_at": 1557506249,
+ "id": "38a03d47-faae-4366-b430-f6c10aee5029",
+ "name": "admin"
+ },
+ {
+ "comment": "Read access to all endpoints, across all workspaces",
+ "created_at": 1557506249,
+ "id": "4141675c-8beb-41a5-aa04-6258ab2d2f7f",
+ "name": "read-only"
+ },
+ {
+ "comment": "Full access to all endpoints, across all workspaces",
+ "created_at": 1557506249,
+ "id": "888117e0-f2b3-404d-823b-dee595423505",
+ "name": "super-admin"
+ },
+ {
+ "comment": null,
+ "created_at": 1557532241,
+ "id": "b5c5cfd4-3330-4796-9b7b-6026e91e3ad6",
+ "name": "doc_lord"
+ }
+ ],
+ "next": null
+}
+```
+___
+
+## Update or Create a Role
+**Endpoint**
+
+/rbac/roles
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name` | The RBAC role name. |
+| `comment`<br>*optional* | A string describing the RBAC role object. |
+
+The behavior of `PUT` endpoints is the following: if the request payload **does
+not** contain an entity's primary key (`id` for Users), the entity will be
+created with the given payload. If the request payload **does** contain an
+entity's primary key, the payload will "replace" the entity specified by the
+given primary key. If the primary key is **not** that of an existing entity, `404
+NOT FOUND` will be returned.
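+For example (Admin API address assumed), the first request below creates a
+role because the payload carries no `id`, while the second replaces the role
+with the given primary key:
+
+```bash
+# Creates a new role: no primary key in the payload
+$ curl -i -X PUT http://localhost:8001/rbac/roles \
+    --data name=doc_lord
+
+# Replaces the role with this `id`; returns 404 Not Found if no such role exists
+$ curl -i -X PUT http://localhost:8001/rbac/roles \
+    --data id=b5c5cfd4-3330-4796-9b7b-6026e91e3ad6 \
+    --data name=doc_lord
+```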
+
+**Response**
+
+If creating the entity:
+
+```
+HTTP 201 Created
+```
+
+If replacing the entity:
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "comment": "the best",
+ "created_at": 1557532566,
+ "id": "b5c5cfd4-3330-4796-9b7b-6026e91e3ad6",
+ "is_default": false,
+ "name": "doc_lord"
+}
+```
+
+## Update a Role
+**Endpoint**
+
+/rbac/roles/{name_or_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `comment`<br>*optional* | A string describing the RBAC role object. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "comment": "comment from patch request",
+ "created_at": 1557532566,
+ "id": "b5c5cfd4-3330-4796-9b7b-6026e91e3ad6",
+ "is_default": false,
+ "name": "service_reader"
+}
+```
+___
+
+## Delete a Role
+**Endpoint**
+
+/rbac/roles/{name_or_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+___
+
+## Add a Role Endpoint Permission
+**Endpoint**
+
+/rbac/roles/{name_or_id}/endpoints
+
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `workspace` | Workspace tied to the endpoint. Defaults to the default permission. Special value of `*` means **all** workspaces are affected. |
+| `endpoint` | Endpoint associated with this permission. |
+| `negative` | If true, explicitly disallow the actions associated with the permissions tied to this endpoint. By default this value is false. |
+| `actions` | One or more actions associated with this permission. This is a comma separated string (read,create,update,delete) |
+| `comment`<br>*optional* | A string describing the RBAC permission object. |
+
+`endpoint` must be the path of the associated endpoint. Paths can be
+exact matches, or contain wildcards, represented by `*`.
+
+- Exact matches; e.g.:
+ * /apis/
+ * /apis/foo
+
+- Wildcards; e.g.:
+ * /apis/*
+ * /apis/*/plugins
+
+Where `*` replaces exactly one segment between slashes (or the end of
+the path).
+
+Note that wildcards can be nested (`/rbac/*`, `/rbac/*/*`, and
+`/rbac/*/*/*` together refer to all paths under `/rbac/`).
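+As a sketch of these matching rules (an illustration only, not Kong's actual
+implementation), a pattern can be compared to a path one segment at a time:
+
+```bash
+# Return 0 (success) if `path` matches `pattern`, where `*` stands for
+# exactly one path segment. Illustrative only; not Kong's implementation.
+endpoint_matches() {
+  local pattern=$1 path=$2
+  local IFS='/'
+  local -a pat seg
+  read -r -a pat <<< "$pattern"
+  read -r -a seg <<< "$path"
+  # Both must have the same number of segments
+  [ "${#pat[@]}" -eq "${#seg[@]}" ] || return 1
+  local i
+  for i in "${!pat[@]}"; do
+    # `*` matches any single segment; anything else must match literally
+    [ "${pat[$i]}" = '*' ] || [ "${pat[$i]}" = "${seg[$i]}" ] || return 1
+  done
+}
+
+endpoint_matches '/apis/*' '/apis/foo'         && echo "match"
+endpoint_matches '/apis/*' '/apis/foo/plugins' || echo "no match"
+```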
+
+**Response**
+
+```
+HTTP 201 Created
+```
+
+```json
+{
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "created_at": 1557764505,
+ "endpoint": "/consumers",
+ "negative": false,
+ "role": {
+ "id": "23df9f20-e7cc-4da4-bc89-d3a08f976e50"
+ },
+ "workspace": "default"
+}
+```
+
+---
+
+## Retrieve a Role Endpoint Permission
+**Endpoint**
+
+/rbac/roles/{name_or_id}/endpoints/{workspace_name_or_id}/{endpoint}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+| `workspace_name_or_id` | The workspace name or UUID. |
+| `endpoint` | The endpoint associated with this permission. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "created_at": 1557764505,
+ "endpoint": "/consumers",
+ "negative": false,
+ "role": {
+ "id": "23df9f20-e7cc-4da4-bc89-d3a08f976e50"
+ },
+ "workspace": "default"
+}
+```
+
+---
+
+
+
+## List Role Endpoints Permissions
+**Endpoint**
+
+/rbac/roles/{role_name_or_id}/endpoints
+
+| Attribute | Description |
+| --------- | ----------- |
+| `role_name_or_id` | The RBAC role name or UUID. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "data": [
+ {
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "created_at": 1557764505,
+ "endpoint": "/consumers",
+ "negative": false,
+ "role": {
+ "id": "23df9f20-e7cc-4da4-bc89-d3a08f976e50"
+ },
+ "workspace": "default"
+ },
+ {
+ "actions": [
+ "read"
+ ],
+ "created_at": 1557764438,
+ "endpoint": "/services",
+ "negative": false,
+ "role": {
+ "id": "23df9f20-e7cc-4da4-bc89-d3a08f976e50"
+ },
+ "workspace": "default"
+ }
+ ]
+}
+```
+
+---
+
+## Update a Role Endpoint Permission
+**Endpoint**
+
+/rbac/roles/{name_or_id}/endpoints/{workspace_name_or_id}/{endpoint}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+| `workspace_name_or_id` | The workspace name or UUID. |
+| `endpoint` | The endpoint associated with this permission. |
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `negative` | If true, explicitly disallow the actions associated with the permissions tied to this resource. By default this value is false. |
+| `actions` | One or more actions associated with this permission. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "created_at": 1557764438,
+ "endpoint": "/services",
+ "negative": false,
+ "role": {
+ "id": "23df9f20-e7cc-4da4-bc89-d3a08f976e50"
+ },
+ "workspace": "default"
+}
+```
+
+---
+
+
+
+## Delete a Role Endpoint Permission
+**Endpoint**
+
+/rbac/roles/{name_or_id}/endpoints/{workspace_name_or_id}/{endpoint}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+| `workspace_name_or_id` | The workspace name or UUID. |
+| `endpoint` | The endpoint associated with this permission. |
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+
+---
+
+
+
+## Add a Role Entity Permission
+**Endpoint**
+/rbac/roles/{name_or_id}/entities
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `negative` | If true, explicitly disallow the actions associated with the permissions tied to this resource. By default this value is false. |
+| `entity_id` | The ID of the entity associated with this permission. |
+| `actions` | One or more actions associated with this permission. |
+| `comment`<br>*optional* | A string describing the RBAC permission object. |
+
+`entity_id` must be the ID of an entity in Kong; if the ID of a
+workspace is given, the permission will apply to all entities in that
+workspace. Future entities belonging to that workspace will get the
+same permissions. A wildcard `*` will be interpreted as **all
+entities** in the system.
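+For example, the following request (Admin API address assumed) grants a role
+the `delete`, `create`, and `read` actions on **all** entities via the `*`
+wildcard:
+
+```bash
+$ curl -i -X POST http://localhost:8001/rbac/roles/doc-knight/entities \
+    --data entity_id='*' \
+    --data actions=delete,create,read
+```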
+
+
+**Response**
+
+```
+HTTP 201 Created
+```
+
+```json
+{
+ "actions": [
+ "delete",
+ "create",
+ "read"
+ ],
+ "created_at": 1557771505,
+ "entity_id": "*",
+ "entity_type": "wildcard",
+ "negative": false,
+ "role": {
+ "id": "bba049fa-bf7e-40ef-8e89-553dda292e99"
+ }
+}
+```
+
+---
+
+## Retrieve a Role Entity Permission
+**Endpoint**
+/rbac/roles/{name_or_id}/entities/{entity_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+| `entity_id` | The ID of the entity associated with this permission. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "actions": [
+ "delete",
+ "create",
+ "read"
+ ],
+ "created_at": 1557771505,
+ "entity_id": "*",
+ "entity_type": "wildcard",
+ "negative": false,
+ "role": {
+ "id": "bba049fa-bf7e-40ef-8e89-553dda292e99"
+ }
+}
+```
+
+---
+
+## List Entity Permissions
+
+**Endpoint**
+/rbac/roles/{name_or_id}/entities
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "data": [
+ {
+ "actions": [
+ "delete",
+ "create",
+ "read"
+ ],
+ "created_at": 1557771505,
+ "entity_id": "*",
+ "entity_type": "wildcard",
+ "negative": false,
+ "role": {
+ "id": "bba049fa-bf7e-40ef-8e89-553dda292e99"
+ }
+ }
+ ]
+}
+```
+
+---
+## Update an Entity Permission
+**Endpoint**
+
+/rbac/roles/{name_or_id}/entities/{entity_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+| `entity_id` | The entity name or UUID. |
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `negative` | If true, explicitly disallow the actions associated with the permissions tied to this resource. By default this value is false. |
+| `actions` | One or more actions associated with this permission. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "actions": [
+ "update"
+ ],
+ "created_at": 1557771505,
+ "entity_id": "*",
+ "entity_type": "wildcard",
+ "negative": false,
+ "role": {
+ "id": "bba049fa-bf7e-40ef-8e89-553dda292e99"
+ }
+}
+```
+
+---
+
+## Delete an Entity Permission
+**Endpoint**
+
+/rbac/roles/{name_or_id}/entities/{entity_id}
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+| `entity_id` | The entity name or UUID. |
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+
+---
+
+## List Role Permissions
+**Endpoint**
+/rbac/roles/{name_or_id}/permissions/
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC role name or UUID. |
+
+
+**Response**
+
+```
+HTTP 200 OK
+```
+```json
+{
+ "endpoints": {
+ "*": {
+ "*": {
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "negative": false
+ },
+ "/*/rbac/*": {
+ "actions": [
+ "delete",
+ "create",
+ "update",
+ "read"
+ ],
+ "negative": true
+ }
+ }
+ },
+ "entities": {}
+}
+```
+
+## Add a User to a Role
+**Endpoint**
+
+/rbac/users/{name_or_id}/roles
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC user name or UUID. |
+
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `roles` | Comma-separated list of role names to assign to the user. |
+
+**Response**
+
+```
+HTTP 201 Created
+```
+```json
+{
+ "roles": [
+ {
+ "created_at": 1557772263,
+ "id": "aae80073-095f-4553-ba9a-bee5ed3b8b91",
+ "name": "doc-knight"
+ }
+ ],
+ "user": {
+ "comment": null,
+ "created_at": 1557772232,
+ "enabled": true,
+ "id": "b65ca712-7ceb-4114-87f4-5c310492582c",
+ "name": "gruce-wayne",
+ "user_token": "$2b$09$gZnMKK/mm/d2rAXN7gL63uL43mjdX/62iwMqdyCQwLyC0af3ce/1K",
+ "user_token_ident": "88ea3"
+ }
+}
+```
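+For example, the response above could be produced by a request like the
+following (Admin API address assumed):
+
+```bash
+$ curl -i -X POST http://localhost:8001/rbac/users/gruce-wayne/roles \
+    --data roles=doc-knight
+```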
+
+---
+## List a User's Roles
+**Endpoint**
+
+/rbac/users/{name_or_id}/roles
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC user name or UUID. |
+
+
+**Response**
+
+```
+HTTP 200 OK
+```
+```json
+
+{
+ "roles": [
+ {
+ "comment": "Read access to all endpoints, across all workspaces",
+ "created_at": 1557765500,
+ "id": "a1c810ee-8366-4654-ba0c-963ffb9ccf2e",
+ "name": "read-only"
+ },
+ {
+ "created_at": 1557772263,
+ "id": "aae80073-095f-4553-ba9a-bee5ed3b8b91",
+ "name": "doc-knight"
+ }
+ ],
+ "user": {
+ "comment": null,
+ "created_at": 1557772232,
+ "enabled": true,
+ "id": "b65ca712-7ceb-4114-87f4-5c310492582c",
+ "name": "gruce-wayne",
+ "user_token": "$2b$09$gZnMKK/mm/d2rAXN7gL63uL43mjdX/62iwMqdyCQwLyC0af3ce/1K",
+ "user_token_ident": "88ea3"
+ }
+}
+```
+
+---
+## Delete a Role from a User
+**Endpoint**
+
+/rbac/users/{name_or_id}/roles
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC user name or UUID. |
+
+
+**Request Body**
+
+| Attribute | Description |
+| --------- | ----------- |
+| `roles` | Comma-separated list of role names to remove from the user. |
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+
+---
+
+## List a User's Permissions
+**Endpoint**
+
+/rbac/users/{name_or_id}/permissions
+
+| Attribute | Description |
+| --------- | ----------- |
+| `name_or_id` | The RBAC user name or UUID. |
+
+**Response**
+
+```
+HTTP 200 OK
+```
+```json
+{
+ "endpoints": {
+ "*": {
+ "*": {
+ "actions": [
+ "read"
+ ],
+ "negative": false
+ }
+ }
+ },
+ "entities": {}
+}
+
+```
diff --git a/app/enterprise/1.3-x/admin-api/vitals/index.md b/app/enterprise/1.3-x/admin-api/vitals/index.md
new file mode 100644
index 000000000000..bd3525e05362
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/vitals/index.md
@@ -0,0 +1,197 @@
+---
+title: Kong Vitals
+---
+
+### Introduction
+
+Vitals is a feature within Kong's Admin API and Kong Manager that provides
+metrics about the health and performance of Kong nodes and Kong-proxied APIs.
+
+## Requirements
+
+- PostgreSQL 9.5+ or Cassandra 2.1+
+
+## How to Enable and Disable Vitals
+
+Kong Enterprise ships with Vitals enabled by default.
+
+Vitals can be disabled (or re-enabled) in the configuration file (e.g., kong.conf):
+
+```bash
+# via your Kong configuration file; e.g., kong.conf
+vitals = on # vitals is enabled
+vitals = off # vitals is disabled
+```
+
+or by environment variables:
+
+```bash
+# or via environment variables
+$ export KONG_VITALS=on
+$ export KONG_VITALS=off
+```
+
+Kong must be restarted for these changes to take effect.
+
+## Vitals Metrics
+
+Vitals metrics fall into two categories:
+* Health Metrics - for monitoring the health of a Kong cluster
+* Traffic Metrics - for monitoring the usage of upstream services
+
+Within these categories, Vitals collects the following metrics:
+
+- [Health Metrics](#health-metrics)
+ - [Latency](#latency)
+ - [Proxy Latency (Request)](#proxy-latency-request)
+ - [Upstream Latency](#upstream-latency)
+ - [Datastore Cache](#datastore-cache)
+ - [Datastore Cache Hit/Miss](#datastore-cache-hit-miss)
+ - [Datastore Cache Hit Ratio](#datastore-cache-hit-ratio)
+- [Traffic Metrics](#traffic-metrics)
+ - [Request Counts](#request-counts)
+ - [Total Requests](#total-requests)
+    - [Requests per Consumer](#requests-per-consumer)
+  - [Status Codes](#status-codes)
+ - [Total Status Code Classes](#total-status-code-classes)
+ - [Total Status Codes per Service](#total-status-codes-per-service)
+ - [Total Status Codes per Route](#total-status-codes-per-route)
+ - [Total Status Codes per Consumer](#total-status-codes-per-consumer)
+ - [Total Status Codes per Consumer per Route](#total-status-codes-per-consumer-per-route)
+
+All metrics are collected at 1-second intervals and aggregated into 1-minute
+intervals. The 1-second intervals are retained for one hour. The 1-minute
+intervals are retained for 25 hours.
+
+If longer retention times are needed, the Vitals API can be used to pull metrics
+out of Kong and into a data retention tool.
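+For example, a retention job could periodically pull cluster metrics over the
+Admin API (the address and query parameters here are illustrative; the
+authoritative paths and parameters are defined in the Vitals API spec
+referenced in the Vitals API section below):
+
+```bash
+$ curl -s http://localhost:8001/vitals/cluster?interval=minutes
+```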
+
+### Health Metrics
+
+Health metrics give insight into the performance of a Kong cluster; for example,
+how many requests it is processing and the latency on those requests.
+
+Health metrics are tracked for each node in a cluster as well as for the cluster
+as a whole. In Kong, a node is a running process with a unique identifier,
+configuration, cache layout, and connections to both Kong's datastores and the
+upstream APIs it proxies. Note that node identifiers are unique to the process,
+and not to the host on which the process runs. In other words, each Kong restart
+results in a new node, and therefore a new node ID.
+
+#### Latency
+
+The Vitals API may return null for Latency metrics; this occurs when no API
+requests were proxied during the timeframe. Null latencies are not graphed in
+Kong Manager; periods with null latencies appear as gaps in Vitals charts.
+
+##### Proxy Latency (Request)
+
+The Proxy Latency metrics are the min, max, and average values for the time, in milliseconds, that the Kong proxy spends processing API proxy requests. This includes time to execute plugins that run in the access phase as well as DNS lookup time. This does not include time spent in Kong's load balancer, time spent sending the request to the upstream, or time spent on the response.
+
+These metrics are referenced in the Vitals API with the following labels: `latency_proxy_request_min_ms`, `latency_proxy_request_max_ms`, `latency_proxy_request_avg_ms`
+
+Latency is not reported when a request is prematurely ended by Kong (e.g., bad auth, rate limited, etc.); note that this differs from the "Total Requests" metric, which does count such requests.
+
+##### Upstream Latency
+
+The Upstream Latency metrics are the min, max, and average values for the time elapsed, in milliseconds, between Kong sending requests upstream and Kong receiving the first bytes of responses from upstream.
+
+These metrics are referenced in the Vitals API with the following labels: `latency_upstream_min_ms`, `latency_upstream_max_ms`, `latency_upstream_avg_ms`
+
+#### Datastore Cache
+
+##### Datastore Cache Hit/Miss
+
+The Datastore Cache Hit/Miss metrics are the count of requests to Kong's node-level datastore cache. When Kong workers need configuration information to respond to a given API proxy request, they first check their worker-specific cache (also known as L1 cache), then, if the information isn't available, they check the node-wide datastore cache (also known as L2 cache). If neither cache contains the necessary information, Kong requests it from the datastore.
+
+A "Hit" indicates that an entity was retrieved from the datastore cache. A "Miss" indicates that the record had to be fetched from the datastore. Not every API request will result in datastore cache access; some entities will be retrieved from Kong's worker-specific cache memory.
+
+These metrics are referenced in the Vitals API with the following labels: `cache_datastore_hits_total`, `cache_datastore_misses_total`
+
+##### Datastore Cache Hit Ratio
+
+This metric contains the ratio of datastore cache hits to the total count of datastore cache requests.
+
+> Note: Datastore Cache Hit Ratio cannot be calculated for time indices with no hits and no misses.
+
+### Traffic Metrics
+
+Traffic metrics provide insight into which of your services are being used, and by whom, and how they are responding.
+
+#### Request Counts
+
+##### Total Requests
+
+
+This metric is the count of all API proxy requests received. This includes requests that were rejected due to rate-limiting, failed authentication, etc.
+
+This metric is referenced in the Vitals API with the following label: `requests_proxy_total`
+
+##### Requests Per Consumer
+
+This metric is the count of all API proxy requests received from each specific consumer. Consumers are identified by credentials in their requests (e.g., API key, OAuth token, etc.) as required by the Kong Auth plugin(s) in use.
+
+This metric is referenced in the Vitals API with the following label: `requests_consumer_total`
+
+#### Status Codes
+
+##### Total Status Code Classes
+
+This metric is the count of all status codes grouped by status code class (e.g. 4xx, 5xx).
+
+This metric is referenced in the Vitals API with the following label: `status_code_classes_total`
+
+##### Total Status Codes per Service
+
+This metric is the total count of each specific status code for a given service.
+
+This metric is referenced in the Vitals API with the following label: `status_codes_per_service_total`
+
+##### Total Status Codes per Route
+
+This metric is the total count of each specific status code for a given route.
+
+This metric is referenced in the Vitals API with the following label: `status_codes_per_route_total`
+
+##### Total Status Codes per Consumer
+This metric is the total count of each specific status code for a given consumer.
+
+This metric is referenced in the Vitals API with the following label: `status_codes_per_consumer_total`
+
+##### Total Status Codes per Consumer Per Route
+This metric is the total count of each specific status code for a given consumer and route.
+
+This metric is referenced in the Vitals API with the following label: `status_codes_per_consumer_route_total`
+
+## Vitals API
+Vitals data is available via endpoints on Kong's Admin API. Access to these endpoints may be controlled via Admin API RBAC. The Vitals API is described in the attached OAS (Open API Spec, formerly Swagger) file: [vitalsSpec.yaml][vitals_spec].
+
+## Vitals Data Visualization in Kong Manager
+
+Kong Manager includes visualization of Vitals data. Additional visualizations, dashboarding of Vitals data alongside data from other systems, etc., can be achieved using the Vitals API to integrate with common monitoring systems.
+
+### Time Frame Control
+
+A timeframe selector adjacent to Vitals charts in Kong Manager controls the timeframe of data visualized, which indirectly controls the granularity of the data. For example, the "Last 5 Minutes" choice will display 1-second resolution data, while longer time frames will show 1-minute resolution data.
+
+Timestamps on the x-axis of Vitals charts are displayed either in the browser's local time zone, or in UTC, depending on the UTC option that appears adjacent to Vitals charts.
+
+### Cluster and Node Data
+Metrics can be displayed on Vitals charts at both node and cluster level. Controls are available to show cluster-wide metrics and/or node-specific metrics. Clicking on individual nodes will toggle the display of data from those nodes. Nodes can be identified by a unique Kong node identifier, by hostname, or by a combination of the two.
+
+### Status Code Data
+
+Visualizations of cluster-wide status code classes (1xx, 2xx, 3xx, 4xx, 5xx) can be found in the Status Codes page of Kong Manager. This page contains the counts of status code classes graphed over time, as well as the ratio of code classes to total requests. Note: this page does not include non-standard code classes (6xx, 7xx, etc.). Individual status code data can be viewed in the Consumer, Route, and Service details pages under the Activity tab. Both standard and non-standard status codes are visible in these views.
+
+## Known Issues
+
+**Vitals data does not appear in Kong Manager or the Admin API.**
+First, make sure Vitals is enabled (`vitals = on` in your Kong configuration).
+
+Then, check your log files. If you see `[vitals] kong_vitals_requests_consumers cache is full` or `[vitals] error attempting to push to list: no memory`, then Vitals is no longer able to track requests because its cache is full. This condition may resolve itself if traffic to the node subsides long enough for it to work down the cache. Regardless, the node will continue to proxy requests as usual.
+
+### Limitations in Cassandra 2.x
+
+Vitals data is purged regularly: 1-second data is purged after one hour, and 1-minute data is purged after 25 hours. Due to limitations in Cassandra 2.x query options, the counter table vitals_consumers is not purged. If it becomes necessary to prune this table, you will need to do so manually.
+
+[vitals_spec]: /enterprise/{{page.kong_version}}/admin-api/vitals/vitalsSpec.yaml
diff --git a/app/enterprise/1.3-x/admin-api/vitals/vitals-influx-strategy.md b/app/enterprise/1.3-x/admin-api/vitals/vitals-influx-strategy.md
new file mode 100644
index 000000000000..aa142a5f1375
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/vitals/vitals-influx-strategy.md
@@ -0,0 +1,135 @@
+---
+title: Kong Vitals with InfluxDB
+---
+
+## Overview
+
+This document covers integrating Kong Vitals with a new or existing InfluxDB
+time series server or cluster. Leveraging a time series database for Vitals data
+can improve request and Vitals performance in very-high traffic Kong Enterprise
+clusters (such as environments handling tens or hundreds of thousands of
+requests per second), without placing additional write load on the database
+backing the Kong cluster.
+
+For using Vitals with a database as the backend (i.e. PostgreSQL, Cassandra),
+please refer to [Kong Vitals](/enterprise/{{page.kong_version}}/admin-api/vitals/).
+
+## Getting Started
+
+### Preparing InfluxDB
+
+This guide assumes an existing InfluxDB server or cluster is already installed
+and is accepting write traffic. Production-ready InfluxDB installations should
+be deployed as a separate effort, but for proof-of-concept testing, running a
+local InfluxDB instance is possible via Docker:
+
+```bash
+$ docker run -p 8086:8086 \
+ -v $PWD:/var/lib/influxdb \
+ influxdb
+```
+
+Writing Vitals data to InfluxDB requires that the `kong` database is created.
+Currently, this operation must be done manually. This can be done via the
+`influx` CLI:
+
+```bash
+influx> create database kong;
+```
+
+Alternatively the [InfluxDB API](https://docs.influxdata.com/influxdb/v1.7/tools/api/#query-http-endpoint)
+may be queried directly to create the database.
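+For example, with the HTTP API (assuming InfluxDB listens on `localhost:8086`):
+
+```bash
+$ curl -i -X POST http://localhost:8086/query \
+    --data-urlencode "q=CREATE DATABASE kong"
+```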
+
+### Configuring Kong
+
+In addition to enabling Vitals, Kong must be configured to use InfluxDB as the
+backing strategy for Vitals. The InfluxDB host and port must also be defined:
+
+```
+vitals_strategy = influxdb
+vitals_tsdb_address = 127.0.0.1:8086 # the IP or hostname, and port, of InfluxDB
+```
+
+As with other Kong configurations, changes take effect on kong reload or kong
+restart.
+
+## InfluxDB Measurements
+
+Kong Vitals records metrics in two InfluxDB measurements: `kong_request`, which
+contains field values for request latencies and HTTP status, and tags for the
+Kong entities associated with each request (e.g., the Route and Service in
+question); and `kong_datastore_cache`, which contains points about cache hits
+and misses. Measurement schemas are listed below:
+
+```
+> show tag keys
+name: kong_request
+tagKey
+------
+consumer
+hostname
+route
+service
+status_f
+wid
+workspace
+
+name: kong_datastore_cache
+tagKey
+------
+hostname
+wid
+```
+
+```
+> show field keys
+name: kong_request
+fieldKey fieldType
+-------- ---------
+kong_latency integer
+proxy_latency integer
+request_latency integer
+status integer
+
+name: kong_datastore_cache
+fieldKey fieldType
+-------- ---------
+hits integer
+misses integer
+```
+
+The tag `wid` is used to differentiate the unique worker ID per host, to avoid
+duplicate metrics shipped at the same point in time.
+
+As demonstrated above, the series cardinality of the `kong_request` measurement
+varies based on the cardinality of the Kong cluster configuration - a greater
+number of Service/Route/Consumer/Workspace combinations handled by Kong results
+in a greater series cardinality as written by Vitals. Please consult the
+[InfluxDB sizing guidelines](https://docs.influxdata.com/influxdb/v1.7/guides/hardware_sizing/)
+for reference on appropriately sizing an InfluxDB node/cluster. Note that the
+query behavior when reading Vitals data falls under the "moderate" load
+category as defined by the above document - several `GROUP BY` statements and
+functions are used to generate the Vitals API responses, which can require
+significant CPU resources to execute when hundreds of thousands or millions of
+data points are present.
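+As a rough illustration of that read load, Vitals-style queries aggregate
+fields over time buckets, along the lines of the following (an illustrative
+query, not one issued verbatim by Kong):
+
+```bash
+$ influx -database kong -execute \
+    "SELECT COUNT(status) FROM kong_request WHERE time > now() - 1h GROUP BY time(1m), hostname"
+```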
+
+## Query Behavior
+
+Kong buffers Vitals metrics and writes InfluxDB points in batches to improve
+throughput in InfluxDB and reduce overhead in the Kong proxy path. Each Kong
+worker process flushes its buffer of metrics every 5 seconds or 5000 data points,
+whichever comes first.
+
+Metrics points are written with microsecond (`u`) precision. To comply with
+the [Vitals API](/enterprise/{{page.kong_version}}/admin-api/vitals/#vitals-api), measurement
+values are read back grouped by second. Note that due to limitations in the
+OpenResty API, writing values with microsecond precision requires an additional
+syscall per request.
+
+Currently, Vitals InfluxDB data points are not downsampled or managed via
+retention policy by Kong. InfluxDB operators are encouraged to manually manage
+the retention policy of the `kong` database to reduce the disk space and memory
+needed to manage Vitals data points. Currently, Kong Vitals ignores data points
+older than 25 hours; it is safe to create a retention policy with a 25-hour
+duration for measurements written by Kong.
diff --git a/app/enterprise/1.3-x/admin-api/vitals/vitals-prometheus-strategy.md b/app/enterprise/1.3-x/admin-api/vitals/vitals-prometheus-strategy.md
new file mode 100644
index 000000000000..8403b8015155
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/vitals/vitals-prometheus-strategy.md
@@ -0,0 +1,221 @@
+---
+title: Kong Vitals with Prometheus
+---
+
+For using Vitals with a database as the backend (i.e. PostgreSQL, Cassandra),
+please refer to [Kong Vitals](/enterprise/{{page.kong_version}}/admin-api/vitals/).
+
+## Setup Prometheus environment for Vitals
+
+### Download Prometheus
+
+The latest release of Prometheus can be found at the [Prometheus download page](https://prometheus.io/download/#prometheus).
+
+Prometheus should be running on a separate node from the one running Kong.
+For users that are already using Prometheus in their infrastructure, it's
+possible to use existing Prometheus nodes as Vitals storage backend.
+
+In this guide, we assume Prometheus is running on hostname `prometheus-node`
+using default config that listens on port `9090`.
+
+### Download and configure StatsD exporter
+
+The latest release of the StatsD exporter can be found on
+[Bintray](https://bintray.com/kong/statsd-exporter). The binary is distributed
+with some specific features, such as min/max gauges and Unix domain socket support.
+
+The StatsD exporter needs to be configured with a set of mapping rules to
+translate the StatsD UDP events to Prometheus metrics. A default set of mapping
+rules can be downloaded from
+[statsd.rules.yaml](/enterprise/{{page.kong_version}}/plugins/statsd.rules.yaml).
+Then start the StatsD exporter with:
+
+```bash
+$ ./statsd_exporter --statsd.mapping-config=statsd.rules.yaml \
+ --statsd.listen-unixgram=''
+```
+
+The StatsD mapping rules file must be configured to match the metrics sent from
+Kong. To learn how to customize the StatsD event names, please refer to the
+[Enable Vitals with Prometheus strategy in Kong](#enable-vitals-with-prometheus-strategy-in-kong)
+section.
+
+StatsD exporter can run either on a separate node from Kong (to avoid resource
+competition with Kong), or on the same host with Kong (to reduce unnecessary
+network overhead).
+
+In this guide, we assume StatsD exporter is running on hostname `statsd-node`,
+using default config that listens to UDP traffic on port `9125` and the metrics
+in Prometheus Exposition Format are exposed on port `9102`.
+
+### Configure Prometheus to scrape StatsD exporter
+
+To configure Prometheus to scrape StatsD exporter, add the following section to
+`scrape_configs` in `prometheus.yaml`.
+
+```yaml
+scrape_configs:
+ - job_name: 'vitals_statsd_exporter'
+ scrape_interval: "5s"
+ static_configs:
+ - targets: ['statsd-node:9102']
+```
+
+Please update `statsd-node` with the actual hostname that runs the StatsD exporter.
+
+If you are using service discovery, it may be more convenient to configure
+multiple StatsD exporters in Prometheus. Please refer to the
+[scrape_configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cscrape_config%3E)
+section of the Prometheus documentation for further reading.
+
+By default, the Vitals graph in Kong Manager uses the configured target address
+in the legend, which is named `instance` in the Prometheus metrics label. For some service
+discovery setups where `instance` is `IP:PORT`, the user might want to relabel the `instance`
+label to display a more meaningful hostname in the legend.
+To do so, refer to the [scrape_configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cscrape_config%3E)
+section and rewrite the `instance` label with the corresponding meta label.
+
+For example, in a Kubernetes environment, use the following relabel rules:
+
+```yaml
+scrape_configs:
+ - job_name: 'vitals_statsd_exporter'
+ kubernetes_sd_configs:
+ # your SD config to filter statsd exporter pods
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_name]
+ action: replace
+ target_label: 'instance'
+```
+
+### Enable Vitals with Prometheus strategy in Kong
+
+Change the following settings in your Kong configuration to enable Vitals with
+Prometheus:
+
+```bash
+# via your Kong configuration file
+vitals = on
+vitals_strategy = prometheus
+vitals_statsd_address = statsd-node:9125
+vitals_tsdb_address = prometheus-node:9090
+```
+
+```bash
+# or via environment variables
+$ export KONG_VITALS=on
+$ export KONG_VITALS_STRATEGY=prometheus
+$ export KONG_VITALS_STATSD_ADDRESS=statsd-node:9125
+$ export KONG_VITALS_TSDB_ADDRESS=prometheus-node:9090
+```
+
+Please update `statsd-node` and `prometheus-node` with the actual hostnames
+that run the StatsD exporter and Prometheus, respectively.
+
+As with other Kong configurations, your changes take effect on `kong reload` or
+`kong restart`.
+
+If you set `scrape_interval` in Prometheus to a value other than the default of
+`5` seconds, you also need to update the following:
+
+```bash
+# via your Kong configuration file
+vitals_prometheus_scrape_interval = new_value_in_seconds
+```
+
+```bash
+# or via environment variables
+$ export KONG_VITALS_PROMETHEUS_SCRAPE_INTERVAL=new_value_in_seconds
+```
+
+The above option configures the `interval` parameter used when querying
+Prometheus. The value `new_value_in_seconds` should be equal to or larger than
+the `scrape_interval` setting in Prometheus.
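+
+For example, if Prometheus scrapes every 10 seconds, the two sides could be
+matched as follows (the values are illustrative):
+
+```yaml
+# prometheus.yaml
+scrape_configs:
+  - job_name: 'vitals_statsd_exporter'
+    scrape_interval: "10s"
+    static_configs:
+      - targets: ['statsd-node:9102']
+```
+
+```bash
+# via your Kong configuration file
+vitals_prometheus_scrape_interval = 10
+```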
+
+You can also configure Kong to send StatsD events with a prefix different from
+the default value of `kong`. Make sure the prefix in statsd.rules
+is the same as the one in the Kong configuration:
+
+```bash
+# via your Kong configuration file
+vitals_statsd_prefix = kong-vitals
+```
+
+```bash
+# or via environment variables
+$ export KONG_VITALS_STATSD_PREFIX=kong-vitals
+```
+
+```yaml
+# in statsd.rules
+mappings:
+# by API
+- match: kong-vitals.api.*.request.count
+ name: "kong_requests_proxy"
+ labels:
+ job: "kong_metrics"
+# follows other metrics
+# ...
+```
+
+## Tuning and Optimization
+
+### StatsD exporter UDP buffer
+
+As the number of concurrent requests increases, the queue of unprocessed UDP
+events grows as well. It's necessary to increase the UDP read buffer size to
+avoid possible packet dropping.
+
+To increase the UDP read buffer for the StatsD exporter process, run the binary
+as in the following example, which sets the read buffer to around 30 MB:
+
+```bash
+$ ./statsd_exporter --statsd.mapping-config=statsd.rules.yaml \
+ --statsd.listen-unixgram='' \
+ --statsd.read-buffer=30000000
+```
+
+To increase the UDP read buffer for the host that runs the exporter, add the
+following example line to `/etc/sysctl.conf`:
+
+```
+net.core.rmem_max = 60000000
+```
+
+And then apply the setting with root privilege:
+
+```
+# sysctl -p
+```
+
+### StatsD exporter with Unix domain socket
+
+It is possible to further reduce network overhead by deploying the StatsD
+exporter on the same node as Kong and letting the exporter listen on a local
+Unix domain socket:
+
+```bash
+$ ./statsd_exporter --statsd.mapping-config=statsd.rules.yaml \
+ --statsd.read-buffer=30000000 \
+ --statsd.listen-unixgram='/tmp/statsd.sock'
+```
+
+By default the socket is created with permission `0755`, so the StatsD exporter
+has to run as the same user as Kong for Kong to be able to write UDP packets to
+the socket. To allow the exporter and Kong to run as different users, the socket
+can be created with permission `0777` as follows:
+
+```bash
+$ ./statsd_exporter --statsd.mapping-config=statsd.rules.yaml \
+ --statsd.read-buffer=30000000 \
+ --statsd.listen-unixgram='/tmp/statsd.sock' \
+ --statsd.unixsocket-umask="777"
+```
+
+
+## Accessing Vitals metrics from Prometheus
+
+You can access Kong Vitals metrics in Prometheus, display them in Grafana,
+or set up alerting rules. With the example StatsD mapping rules, all metrics are
+labeled with `exported_job=kong_vitals`. With the Prometheus scrape config
+above, all metrics are also labeled with `job=vitals_statsd_exporter`.
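+
+For example, with the default mapping rules, the cluster-wide proxy request
+rate can be graphed or alerted on with a PromQL query such as the following
+(`kong_requests_proxy` is the metric name from the example statsd.rules above):
+
+```
+sum(rate(kong_requests_proxy{job="vitals_statsd_exporter"}[1m]))
+```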
diff --git a/app/enterprise/1.3-x/admin-api/vitals/vitalsSpec.yaml b/app/enterprise/1.3-x/admin-api/vitals/vitalsSpec.yaml
new file mode 100644
index 000000000000..78ebda2b54b4
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/vitals/vitalsSpec.yaml
@@ -0,0 +1,734 @@
+swagger: '2.0'
+info:
+ description: Vitals API
+ version: 1.3.0
+ title: Vitals API
+basePath: /
+tags:
+ - name: health
+ description: Stats about the health of a Kong cluster
+ - name: traffic
+ description: Stats about traffic routed through Kong
+schemes:
+ - http
+paths:
+ /vitals:
+ get:
+ tags:
+ - vitals
+ summary: Get information about stats collected
+ description: ''
+ operationId: getVitalsInfo
+ produces:
+ - application/json
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/VitalsInfo'
+ /vitals/cluster:
+ get:
+ tags:
+ - health
+ summary: Get health stats for this Kong cluster
+ description: ''
+ operationId: getClusterStats
+ produces:
+ - application/json
+ parameters:
+ - name: interval
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ type: string
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/ClusterVitalsTimeSeriesWithMetadata'
+ '400':
+ description: Invalid query params
+ /vitals/cluster/status_codes:
+ get:
+ deprecated: true
+ tags:
+ - traffic
+ summary: Get the status code classes returned across the cluster
+ description: This operation is deprecated. Use /status_code_classes.
+ operationId: getClusterStatusCodeStats
+ produces:
+ - application/json
+ parameters:
+ - name: interval
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ type: string
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/ClusterVitalsStatusCodesWithMetadata'
+ '400':
+ description: Invalid query params
+ /vitals/nodes:
+ get:
+ tags:
+ - health
+ summary: Get health stats for all nodes
+ description: ''
+ operationId: getAllNodeStats
+ produces:
+ - application/json
+ parameters:
+ - name: interval
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ type: string
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/NodeVitalsTimeSeriesWithMetadata'
+ '400':
+ description: Invalid query params
+ '/vitals/nodes/{node_id}':
+ get:
+ tags:
+ - health
+ summary: Get stats for a specific node by UUID
+ description: ''
+ operationId: getNodeStatsByID
+ produces:
+ - application/json
+ parameters:
+ - name: node_id
+ type: string
+ in: path
+ description: Node to retrieve stats for
+ required: true
+ - name: interval
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ type: string
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/NodeVitalsTimeSeriesWithMetadata'
+ '400':
+ description: Invalid query params
+ '404':
+ description: Unable to find requested node
+ '/vitals/consumers/{consumer_id}/cluster':
+ get:
+ tags:
+ - traffic
+ deprecated: true
+ summary: Get count of requests for the given consumer across entire cluster
+ description: This operation is deprecated. Use /vitals/status_codes/by_consumer_and_route
+ operationId: getConsumerClusterStats
+ produces:
+ - application/json
+ parameters:
+ - name: consumer_id
+ type: string
+ in: path
+ description: Consumer to retrieve stats for
+ required: true
+ - name: interval
+ type: string
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/ClusterConsumersTimeSeriesWithMetadata'
+ '400':
+ description: Invalid query params
+ '404':
+ description: Unable to find requested consumer
+ /{workspace_name}/vitals/status_code_classes:
+ get:
+ tags:
+ - traffic
+ summary: Get status code classes for a cluster or workspace.
+ description: ''
+ operationId: getStatusCodeClassesByWorkspace
+ produces:
+ - application/json
+ parameters:
+ - name: workspace_name
+ type: string
+ in: path
+ description: >-
+ Optional parameter. If provided, returns status code classes for the
+ workspace; otherwise, returns them for the cluster
+ required: true
+ - name: interval
+ type: string
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/StatusCodesByEntityMetadata'
+ '400':
+ description: Invalid query params
+ '404':
+ description: Unable to find requested consumer
+ /{workspace_name}/vitals/status_codes/by_service:
+ get:
+ tags:
+ - traffic
+ summary: Get a time series of status codes returned by a given service
+ description: ''
+ operationId: getStatusCodesByService
+ produces:
+ - application/json
+ parameters:
+ - name: service_id
+ type: string
+ in: query
+ description: Service to retrieve status codes for
+ required: true
+ - name: interval
+ type: string
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ - name: workspace_name
+ type: string
+ in: path
+ description: >-
+ This parameter is optional. When present, it limits the result to a
+ specific workspace.
+ required: true
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/StatusCodesByEntityTimeSeriesWithMetadata'
+ '400':
+ description: Invalid query params
+ '404':
+ description: Unable to find requested service
+ /{workspace_name}/vitals/status_codes/by_route:
+ get:
+ tags:
+ - traffic
+ summary: Get cluster-wide count of status for a given route
+ description: ''
+ operationId: getStatusCodesByRoute
+ produces:
+ - application/json
+ parameters:
+ - name: route_id
+ type: string
+ in: query
+ description: Route to retrieve status codes for
+ required: true
+ - name: interval
+ type: string
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ - name: workspace_name
+ type: string
+ in: path
+ description: >-
+ This parameter is optional. When present, it limits the result to a
+ specific workspace.
+ required: true
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/StatusCodesByEntityTimeSeriesWithMetadata'
+ '400':
+ description: Invalid query params
+ '404':
+ description: Unable to find requested route
+ '/{workspace_name}/vitals/status_codes/by_consumer_and_route':
+ get:
+ tags:
+ - traffic
+ summary: >-
+ Get status codes for all the routes called by the given consumer in the
+ given timeframe
+ description: ''
+ operationId: getStatusCodesByConsumerAndRoute
+ produces:
+ - application/json
+ parameters:
+ - name: consumer_id
+ type: string
+ in: query
+ description: Consumer to retrieve status codes for
+ required: true
+ - name: interval
+ type: string
+ in: query
+ description: Granularity of the time series (minutes or seconds)
+ required: true
+ - name: start_ts
+ in: query
+ description: 'Requested start of the time series, in Unix epoch format (seconds)'
+ required: true
+ type: string
+ - name: workspace_name
+ type: string
+ in: path
+ description: >-
+ This parameter is optional. When present, it limits the result to a
+ specific workspace.
+ required: true
+ responses:
+ '200':
+ description: successful operation
+ schema:
+ $ref: '#/definitions/StatusCodesByEntityTimeSeriesWithMetadata'
+ '400':
+ description: Invalid query params
+ '404':
+ description: Unable to find requested consumer
+definitions:
+ ClusterVitalsMetadata:
+ properties:
+ level:
+ type: string
+ example: cluster
+ enum:
+ - cluster
+ - node
+ workspace_id:
+ type: string
+ description: UUID of workspace this time series is for
+ interval:
+ type: string
+ example: seconds
+ enum:
+ - seconds
+ - minutes
+ interval_width:
+ type: number
+ example: 60
+ earliest_ts:
+ type: integer
+ example: 1514508300
+ latest_ts:
+ type: integer
+ example: 1514508480
+ stat_labels:
+ type: array
+ items:
+ type: string
+ example:
+ - cache_datastore_hits_total
+ - cache_datastore_misses_total
+ - latency_proxy_request_min_ms
+ - latency_proxy_request_max_ms
+ - latency_upstream_min_ms
+ - latency_upstream_max_ms
+ - requests_proxy_total
+ - latency_proxy_request_avg_ms
+ - latency_upstream_avg_ms
+ StatusCodesByEntityMetadata:
+ properties:
+ entity_type:
+ type: string
+ example: service|route
+ entity_id:
+ type: string
+ example:
+ level:
+ type: string
+ example: cluster
+ enum:
+ - cluster
+ workspace_id:
+ type: string
+ description: UUID of the workspace this time series is for
+ interval:
+ type: string
+ example: seconds
+ enum:
+ - seconds
+ - minutes
+ interval_width:
+ type: number
+ example: 60
+ earliest_ts:
+ type: integer
+ example: 1514508300
+ latest_ts:
+ type: integer
+ example: 1514508480
+ stat_labels:
+ type: array
+ items:
+ type: string
+ example:
+ - status_codes_service|route_total
+ StatusCodesByEntityTimeSeriesWithMetadata:
+ type: object
+ properties:
+ meta:
+ $ref: '#/definitions/StatusCodesByEntityMetadata'
+ stats:
+ $ref: '#/definitions/StatusCodesByEntityStats'
+ ClusterVitalsStatusCodesMetadata:
+ properties:
+ level:
+ type: string
+ example: cluster
+ enum:
+ - cluster
+ interval:
+ type: string
+ example: seconds
+ enum:
+ - seconds
+ - minutes
+ interval_width:
+ type: number
+ example: 60
+ earliest_ts:
+ type: integer
+ example: 1514508300
+ latest_ts:
+ type: integer
+ example: 1514508480
+ stat_labels:
+ type: array
+ items:
+ type: string
+ example:
+ - status_code_classes_total
+ ClusterVitalsStats:
+ properties:
+ cluster:
+ type: object
+ properties:
+ :
+ type: array
+ items:
+ type: integer
+ description: >-
+ List of stat values collected at "timestamp_n", in same order as
+ "meta.stat_labels"
+ example:
+ - 999
+ - 130
+ - 0
+ - 35
+ - 142
+ - 528
+ - 1146
+ - 110
+ - 156
+ StatusCodesByEntityStats:
+ properties:
+ cluster:
+ type: object
+ description: Vitals status codes data available at the cluster level
+ properties:
+ :
+ type: object
+ properties:
+ :
+ type: integer
+ description: >-
+ The total count of a particular status code for the time
+ period
+ example: 1824
+ ClusterVitalsStatusCodesStats:
+ properties:
+ cluster:
+ type: object
+ description: Vitals status codes data available at the cluster level
+ properties:
+ :
+ type: object
+ properties:
+ :
+ type: integer
+ description: >-
+ The total count of a particular status code class for the time
+ period
+ example: 1824
+ ClusterVitalsTimeSeriesWithMetadata:
+ type: object
+ properties:
+ meta:
+ $ref: '#/definitions/ClusterVitalsMetadata'
+ stats:
+ $ref: '#/definitions/ClusterVitalsStats'
+ ClusterVitalsStatusCodesWithMetadata:
+ type: object
+ properties:
+ meta:
+ $ref: '#/definitions/ClusterVitalsStatusCodesMetadata'
+ stats:
+ $ref: '#/definitions/ClusterVitalsStatusCodesStats'
+ ClusterConsumersMetadata:
+ properties:
+ level:
+ type: string
+ example: cluster
+ enum:
+ - cluster
+ - node
+ interval:
+ type: string
+ example: seconds
+ enum:
+ - seconds
+ - minutes
+ interval_width:
+ type: number
+ example: 60
+ earliest_ts:
+ type: integer
+ example: 1514508300
+ latest_ts:
+ type: integer
+ example: 1514508480
+ stat_labels:
+ type: array
+ items:
+ type: string
+ example:
+ - requests_consumer_total
+ ClusterConsumersStats:
+ properties:
+ cluster:
+ type: object
+ properties:
+ :
+ type: integer
+ description: >-
+ List of stat values collected at "timestamp_n", in same order as
+ "meta.stat_labels"
+ example: 47
+ ClusterConsumersTimeSeriesWithMetadata:
+ type: object
+ properties:
+ meta:
+ $ref: '#/definitions/ClusterConsumersMetadata'
+ stats:
+ $ref: '#/definitions/ClusterConsumersStats'
+ NodeVitalsMetadata:
+ properties:
+ level:
+ type: string
+ example: node
+ enum:
+ - cluster
+ - node
+ workspace_id:
+ type: string
+ description: UUID of the workspace this time series is for
+ interval:
+ type: string
+ example: seconds
+ enum:
+ - seconds
+ - minutes
+ interval_width:
+ type: number
+ example: 60
+ earliest_ts:
+ type: integer
+ example: 1514508300
+ latest_ts:
+ type: integer
+ example: 1514508480
+ stat_labels:
+ type: array
+ items:
+ type: string
+ example:
+ - cache_datastore_hits_total
+ - cache_datastore_misses_total
+ - latency_proxy_request_min_ms
+ - latency_proxy_request_max_ms
+ - latency_upstream_min_ms
+ - latency_upstream_max_ms
+ - requests_proxy_total
+ - latency_proxy_request_avg_ms
+ - latency_upstream_avg_ms
+ nodes:
+ type: object
+ description: >-
+ table of node ids that contributed to this time series. This element
+ is not included on cluster-level requests.
+ properties:
+ :
+ type: object
+ description: The id of a node included in this time series.
+ properties:
+ hostname:
+ type: string
+ description: The name of the host where this node runs
+ NodeVitalsStats:
+ properties:
+ :
+ type: object
+ description: >-
+ The node this time series is for, or "cluster" if it's a cluster-level
+ time series.
+ properties:
+ :
+ type: array
+ items:
+ type: integer
+ description: >-
+ List of stat values collected at "timestamp_n", in same order as
+ "meta.stat_labels"
+ example:
+ - 999
+ - 130
+ - 0
+ - 35
+ - 142
+ - 528
+ - 1146
+ - 110
+ - 156
+ NodeVitalsTimeSeriesWithMetadata:
+ type: object
+ properties:
+ meta:
+ $ref: '#/definitions/NodeVitalsMetadata'
+ stats:
+ $ref: '#/definitions/NodeVitalsStats'
+ VitalsInfo:
+ type: object
+ example:
+ stats:
+ cache_datastore_hits_total:
+ levels:
+ cluster:
+ intervals:
+ minutes:
+ retention_period_seconds: 90000
+ seconds:
+ retention_period_seconds: 3600
+ nodes:
+ intervals:
+ minutes:
+ retention_period_seconds: 90000
+ seconds:
+ retention_period_seconds: 3600
+ properties:
+ stats:
+ type: object
+ properties:
+ :
+ type: object
+ properties:
+ levels:
+ type: object
+ description: >-
+ Vitals data is tracked and aggregated at different levels (per
+ cluster, per node)
+ properties:
+ cluster:
+ type: object
+ description: Vitals data available at the cluster level
+ properties:
+ intervals:
+ type: object
+ description: >-
+ Vitals data is available at different intervals
+ (seconds, minutes)
+ properties:
+ minutes:
+ type: object
+ properties:
+ retention_period_seconds:
+ type: integer
+ description: >-
+ Configured retention period (in seconds) for
+ the minutes interval
+ seconds:
+ type: object
+ properties:
+ retention_period_seconds:
+ type: integer
+ description: >-
+ Configured retention period (in seconds) for
+ the seconds interval
+ nodes:
+ type: object
+ description: Vitals data available at the node level
+ properties:
+ intervals:
+ type: object
+ description: >-
+ Vitals data is available at different intervals
+ (seconds, minutes)
+ properties:
+ minutes:
+ type: object
+ properties:
+ retention_period_seconds:
+ type: integer
+ description: >-
+ Configured retention period (in seconds) for
+ the minutes interval
+ seconds:
+ type: object
+ properties:
+ retention_period_seconds:
+ type: integer
+ description: >-
+ Configured retention period (in seconds) for
+ the seconds interval
diff --git a/app/enterprise/1.3-x/admin-api/workspaces/examples.md b/app/enterprise/1.3-x/admin-api/workspaces/examples.md
new file mode 100644
index 000000000000..f7714120ebf3
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/workspaces/examples.md
@@ -0,0 +1,491 @@
+---
+title: Workspace Examples
+book: workspaces
+---
+
+## Introduction
+
+This chapter provides a step-by-step tutorial on setting up workspaces and
+their entities, and shows them in action.
+
+## Important Note: Conflicting APIs or Routes in workspaces
+
+Workspaces provide a way to segment Kong entities: entities in a workspace
+are isolated from those in other workspaces. That said, entities
+such as APIs and Routes have "routing rules": pieces of information
+attached to APIs or Routes, such as HTTP method, URI, or host, that allow a
+given proxy-side request to be routed to its corresponding upstream service.
+
+Admins configuring APIs (or Routes) in their workspaces do not want traffic
+directed to their APIs or Routes to be swallowed by APIs or Routes in other
+workspaces; Kong allows them to prevent such undesired behavior as long as
+certain measures are taken. Below we outline the conflict detection algorithm
+used by Kong to determine whether a conflict occurs.
+
+* At API or Route **creation or modification** time, Kong runs its internal
+router:
+ - If no APIs or Routes are found with matching routing rules, the creation
+ or modification proceeds
+ - If APIs or Routes with matching routing rules are found **within the same
+ workspace**, proceed
+ - If APIs or Routes are found **in a different workspace**:
+    * If the matching API or Route **does not have an associated
+      `host` value**, a conflict is reported: `409 Conflict`
+    * If the matching API or Route's `host` is a wildcard:
+      - If they are the same, a conflict is reported: `409 Conflict`
+      - If they are not equal, proceed
+    * If the matching API or Route's `host` is an absolute value, a
+      conflict is reported: `409 Conflict`
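+
+For example, creating two routes with the same path and no `host` in two
+different workspaces triggers the first rule above. A sketch using httpie
+(the workspace names and `$TEAM_A_SERVICE`/`$TEAM_B_SERVICE` service IDs are
+illustrative):
+
+```
+http POST :8001/teamA/routes paths[]=/demo service.id=$TEAM_A_SERVICE -f
+# -> HTTP/1.1 201 Created
+http POST :8001/teamB/routes paths[]=/demo service.id=$TEAM_B_SERVICE -f
+# -> HTTP/1.1 409 Conflict
+```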
+
+## The Default workspace
+
+Kong creates a default workspace, unsurprisingly named `default`, whose goal
+is to group all existing entities in Kong, where "existing entities" refers to:
+
+- Entities that were created in previous versions of Kong, in case
+one is migrating from an older version;
+- Entities that Kong creates at migration time (e.g., RBAC credentials, which
+are provisioned at migration time as a convenience)
+
+It will also hold entities that are created without being explicitly assigned to
+a specific workspace.
+
+That said, it's worth noting that the default workspace is a workspace like any
+other; the only difference is that it's created by Kong at migration time.
+
+(Examples will be shown using the httpie HTTP command line client.)
+
+## Listing workspaces and their entities
+
+In a fresh Kong Enterprise install, or one just migrated to 0.33, submit the
+following request:
+
+```
+http GET :8001/workspaces
+{
+ "total": 1,
+ "data": [
+ {
+ "created_at": 1529627841000,
+ "id": "a43fc3f9-98e4-43b0-b703-c3b1004980d5",
+ "name": "default"
+ }
+ ]
+}
+```
+
+## List entities in the default workspace
+
+To get a list of entities contained in, or referenced by, the default workspace,
+let's issue the following request:
+
+```
+http GET :8001/workspaces/default/entities
+{
+ "data": [
+ {
+ "workspace_id": "a43fc3f9-98e4-43b0-b703-c3b1004980d5",
+ "entity_id": "a43fc3f9-98e4-43b0-b703-c3b1004980d5",
+ "entity_type": "workspaces",
+ },
+ {
+ "workspace_id": "a43fc3f9-98e4-43b0-b703-c3b1004980d5",
+ "entity_id": "e6b5f24a-8914-40b3-a1f5-02e88b33b1d3",
+ "entity_type": "portal_files",
+ },
+ {
+ "workspace_id": "a43fc3f9-98e4-43b0-b703-c3b1004980d5",
+ "entity_id": "ee7a43f0-c4e5-4533-8000-5e8bd459049f",
+ "entity_type": "portal_files",
+ },
+ ...
+ {
+ "workspace_id": "a43fc3f9-98e4-43b0-b703-c3b1004980d5",
+ "entity_id": "74e706b6-4f0d-411c-9369-55a61ecc5fa8",
+ "entity_type": "portal_files",
+ },
+ ],
+ "total": 42
+}
+```
+
+As can be seen, a total of 42 entities are part of the default workspace; for
+brevity, most of them are redacted here.
+
+## Creating a workspace and adding entities to it
+
+A more interesting example is segmenting entities by teams; for the sake of
+this example, let's say they are teamA, teamB, and teamC.
+
+Each of these teams has its own set of entities (say, upstream services and
+routes) and wants to segregate its configuration and traffic; they can
+achieve that with workspaces.
+
+```
+http POST :8001/workspaces name=teamA
+{
+ "created_at": 1528843468000,
+ "id": "735af96e-206f-43f7-88f0-b930d5fd4b7e",
+ "name": "teamA"
+}
+```
+
+```
+http POST :8001/workspaces name=teamB
+{
+ "name": "teamB",
+ "created_at": 1529628574000,
+ "id": "a25728ac-6036-497c-82ee-524d4c22fcae"
+}
+```
+
+```
+http POST :8001/workspaces name=teamC
+{
+ "name": "teamC",
+ "created_at": 1529628622000,
+ "id": "34b28f10-e1ec-4dad-9ac0-74780baee182"
+}
+```
+
+At this point, if we list workspaces, we will get a total of 4. Remember,
+Kong provisions a `default` workspace and, on top of that, we created three
+more.
+
+```
+{
+ "data": [
+ {
+ "created_at": 1529627841000,
+ "id": "a43fc3f9-98e4-43b0-b703-c3b1004980d5",
+ "name": "default"
+ },
+ {
+ "created_at": 1529628818000,
+ "id": "5ed1c043-78cc-4fe2-924e-40b17ecd97bc",
+ "name": "teamA"
+ },
+ {
+ "created_at": 1529628574000,
+ "id": "a25728ac-6036-497c-82ee-524d4c22fcae",
+ "name": "teamB"
+ },
+ {
+ "created_at": 1529628622000,
+ "id": "34b28f10-e1ec-4dad-9ac0-74780baee182",
+ "name": "teamC"
+ }
+  ],
+  "total": 4
+}
+
+```
+
+Having our teams' workspaces set up, let's add some entities to them. Say
+the teams have a shared service, represented by the [Service][services] Kong
+entity, and different routes associated with this upstream service.
+
+Creating the shared service:
+
+```
+http :8001/services url=http://httpbin.org/ name=shared-service
+{
+ "host": "httpbin.org",
+ "created_at": 1529699798,
+ "connect_timeout": 60000,
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445",
+ "protocol": "http",
+ "name": "shared-service",
+ "read_timeout": 60000,
+ "port": 80,
+ "path": "/",
+ "updated_at": 1529699798,
+ "retries": 5,
+ "write_timeout": 60000
+}
+```
+
+Notice the endpoint `/services` does not include a workspace prefix, which
+is how one specifies the workspace to which a given API call applies.
+In such cases, the call applies to the `default` workspace. Let's confirm that
+by listing all entities under the `default` workspace:
+
+```
+http :8001/workspaces/default/entities
+{
+ "data": [
+ ...
+ {
+ "workspace_id": "dd516707-919e-4e72-9fd8-12f63a80a662",
+ "unique_field_name": "name",
+ "entity_id": "86608199-e3d8-48aa-b76d-d9ec36d8d445",
+ "entity_type": "services",
+ "unique_field_value": "shared-service"
+ }
+ ],
+ "total": 43
+}
+```
+
+Again, entities not relevant to this example are redacted; notice, though,
+that our shared service is on the list.
+
+The next step is to add the shared service to our teams' workspaces. This can be
+done as follows:
+
+```
+http :8001/workspaces/teamA/entities entities=86608199-e3d8-48aa-b76d-d9ec36d8d445
+http :8001/workspaces/teamB/entities entities=86608199-e3d8-48aa-b76d-d9ec36d8d445
+http :8001/workspaces/teamC/entities entities=86608199-e3d8-48aa-b76d-d9ec36d8d445
+```
+
+To confirm the shared service was added:
+
+```
+http :8001/teamA/services
+{
+ "next": null,
+ "data": [
+ {
+ "host": "httpbin.org",
+ "created_at": 1529699798,
+ "connect_timeout": 60000,
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445",
+ "protocol": "http",
+ "name": "shared-service",
+ "read_timeout": 60000,
+ "port": 80,
+ "path": "/",
+ "updated_at": 1529699798,
+ "retries": 5,
+ "write_timeout": 60000
+ }
+ ]
+}
+```
+
+The next step is to set up each team's routes to the shared service. Let's say
+teams A, B, and C have the routes `/headers`, `/ip`, and `/user-agent`,
+respectively.
+
+Putting this into action, we have:
+
+```
+http POST :8001/teamA/routes paths[]=/headers service.id=86608199-e3d8-48aa-b76d-d9ec36d8d445 strip_path=false -f
+{
+ "created_at": 1529702016,
+ "hosts": null,
+ "id": "1850216b-35a2-4038-8544-34f58c7701f1",
+ "methods": null,
+ "paths": [
+ "/headers"
+ ],
+ "preserve_host": false,
+ "protocols": [
+ "http",
+ "https"
+ ],
+ "regex_priority": 0,
+ "service": {
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445"
+ },
+ "strip_path": false,
+ "updated_at": 1529702016
+}
+```
+
+```
+http POST :8001/teamB/routes paths[]=/ip service.id=86608199-e3d8-48aa-b76d-d9ec36d8d445 strip_path=false -f
+{
+ "created_at": 1529702211,
+ "hosts": null,
+ "id": "c804699c-e492-4e33-96e1-c2398bc79986",
+ "methods": null,
+ "paths": [
+ "/ip"
+ ],
+ "preserve_host": false,
+ "protocols": [
+ "http",
+ "https"
+ ],
+ "regex_priority": 0,
+ "service": {
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445"
+ },
+ "strip_path": false,
+ "updated_at": 1529702211
+}
+```
+
+```
+http POST :8001/teamC/routes paths[]=/user-agent service.id=86608199-e3d8-48aa-b76d-d9ec36d8d445 strip_path=false -f
+{
+ "created_at": 1529702339,
+ "hosts": null,
+ "id": "bbaac9db-52b3-46fe-bb2a-e9af2968aee9",
+ "methods": null,
+ "paths": [
+ "/user-agent"
+ ],
+ "preserve_host": false,
+ "protocols": [
+ "http",
+ "https"
+ ],
+ "regex_priority": 0,
+ "service": {
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445"
+ },
+ "strip_path": false,
+ "updated_at": 1529702339
+}
+```
+
+Ready! Now all teams have their routes sharing the same service.
+
+To make sure it's set up correctly, let's list the routes in each workspace.
+
+```
+http :8001/teamA/routes
+{
+ "next": null,
+ "data": [
+ {
+ "created_at": 1529702016,
+ "id": "1850216b-35a2-4038-8544-34f58c7701f1",
+ "hosts": null,
+ "updated_at": 1529702016,
+ "preserve_host": false,
+ "regex_priority": 0,
+ "service": {
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445"
+ },
+ "paths": [
+ "/headers"
+ ],
+ "methods": null,
+ "strip_path": false,
+ "protocols": [
+ "http",
+ "https"
+ ]
+ }
+ ]
+}
+```
+
+As we wanted, Team A has a `/headers` route pointing to the shared service.
+
+```
+http :8001/teamB/routes
+{
+ "next": null,
+ "data": [
+ {
+ "created_at": 1529702211,
+ "id": "c804699c-e492-4e33-96e1-c2398bc79986",
+ "hosts": null,
+ "updated_at": 1529702211,
+ "preserve_host": false,
+ "regex_priority": 0,
+ "service": {
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445"
+ },
+ "paths": [
+ "/ip"
+ ],
+ "methods": null,
+ "strip_path": false,
+ "protocols": [
+ "http",
+ "https"
+ ]
+ }
+ ]
+}
+```
+
+Team B has its `/ip` route.
+
+```
+http :8001/teamC/routes
+{
+ "next": null,
+ "data": [
+ {
+ "created_at": 1529702339,
+ "id": "bbaac9db-52b3-46fe-bb2a-e9af2968aee9",
+ "hosts": null,
+ "updated_at": 1529702339,
+ "preserve_host": false,
+ "regex_priority": 0,
+ "service": {
+ "id": "86608199-e3d8-48aa-b76d-d9ec36d8d445"
+ },
+ "paths": [
+ "/user-agent"
+ ],
+ "methods": null,
+ "strip_path": false,
+ "protocols": [
+ "http",
+ "https"
+ ]
+ }
+ ]
+}
+```
+
+and Team C has its `/user-agent` route.
+
+With this setup, Teams A, B, and C have access only to their own Route
+entities through the Admin API. (With RBAC, granular
+read/write/update/delete rights can additionally be assigned per workspace,
+allowing flexible intra- and inter-team permissioning schemes.)
+
+## Entities in different workspaces can have the same name!
+
+Different teams, belonging to different workspaces, are free to give any
+name to their entities. For example, say that Teams A, B,
+and C each want a particular consumer named `guest`: a different consumer for each
+team, sharing the same username.
+
+```
+http :8001/teamA/consumers username=guest
+{
+ "created_at": 1529703386000,
+ "id": "2e230275-2a4a-41fd-b06b-bae37008aed2",
+ "type": 0,
+ "username": "guest"
+}
+```
+
+```
+http :8001/teamB/consumers username=guest
+{
+ "created_at": 1529703390000,
+ "id": "8533e404-8d56-4481-a919-0ee35b8a768c",
+ "type": 0,
+ "username": "guest"
+}
+```
+
+```
+http :8001/teamC/consumers username=guest
+{
+ "created_at": 1529703393000,
+ "id": "5fb180b0-0cd0-42e1-8d75-ce42a54b2909",
+ "type": 0,
+ "username": "guest"
+}
+```
+
+With this, Teams A, B, and C are free to operate their `guest`
+consumers independently, choosing authentication plugins or performing any other
+operation that is allowed in the non-workspaced Kong world.
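The per-workspace uniqueness rule above can be pictured with a toy sketch (plain Python, not Kong code): the lookup key is the (workspace, username) pair rather than the username alone, so three workspaces can each hold their own `guest`.

```python
# Toy illustration (not Kong code) of per-workspace uniqueness: the lookup
# key is the (workspace, username) pair, not the username alone.

consumers = {}

def create_consumer(workspace, username):
    key = (workspace, username)
    if key in consumers:
        # The same name within one workspace is a conflict...
        raise ValueError(f"username {username!r} already in use in {workspace!r}")
    # ...but the same name across workspaces is fine.
    consumers[key] = {"workspace": workspace, "username": username}
    return consumers[key]

for team in ("teamA", "teamB", "teamC"):
    create_consumer(team, "guest")  # three independent "guest" consumers
```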
+
+Next: [RBAC Overview ›](/enterprise/{{page.kong_version}}/rbac/overview)
+
+---
+
+[services]: /{{page.kong_version}}/admin-api/#service-object
diff --git a/app/enterprise/1.3-x/admin-api/workspaces/reference.md b/app/enterprise/1.3-x/admin-api/workspaces/reference.md
new file mode 100644
index 000000000000..4e376a4b1e59
--- /dev/null
+++ b/app/enterprise/1.3-x/admin-api/workspaces/reference.md
@@ -0,0 +1,490 @@
+---
+title: Workspaces Reference
+book: workspaces
+
+workspace_body: |
+ Attribute | Description
+ ---:| ---
+ `name` | The **Workspace** name.
+
+workspace_entities_body: |
+ Attribute | Description
+ ---:| ---
+ `entities`| Comma-delimited list of entity identifiers
+---
+
+## Introduction
+
+Kong Enterprise's Workspaces feature is configurable through Kong's
+[Admin API].
+
+## Workspace Object
+
+The **Workspace** object describes the **Workspace** entity, which has an ID
+and a name.
+
+### Add Workspace
+
+**Endpoint**
+
+/workspaces/
+
+#### Request Body
+
+{{ page.workspace_body }}
+
+**Response**
+
+```
+HTTP 201 Created
+```
+
+```json
+{
+ "comment": null,
+ "config": {
+ "meta": null,
+ "portal": false,
+ "portal_access_request_email": null,
+ "portal_approved_email": null,
+ "portal_auth": null,
+ "portal_auth_conf": null,
+ "portal_auto_approve": null,
+ "portal_cors_origins": null,
+ "portal_developer_meta_fields": "[{\"label\":\"Full Name\",\"title\":\"full_name\",\"validator\":{\"required\":true,\"type\":\"string\"}}]",
+ "portal_emails_from": null,
+ "portal_emails_reply_to": null,
+ "portal_invite_email": null,
+ "portal_reset_email": null,
+ "portal_reset_success_email": null,
+ "portal_token_exp": null
+ },
+ "created_at": 1557441226,
+ "id": "c663cca5-c6f6-474a-ae44-01f62aba16a9",
+ "meta": {
+ "color": null,
+ "thumbnail": null
+ },
+ "name": "green-team"
+}
+```
+
+### List Workspaces
+
+**Endpoint**
+
+/workspaces/
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "data": [
+ {
+ "comment": null,
+ "config": {
+ "meta": null,
+ "portal": false,
+ "portal_access_request_email": null,
+ "portal_approved_email": null,
+ "portal_auth": null,
+ "portal_auth_conf": null,
+ "portal_auto_approve": null,
+ "portal_cors_origins": null,
+ "portal_developer_meta_fields": "[{\"label\":\"Full Name\",\"title\":\"full_name\",\"validator\":{\"required\":true,\"type\":\"string\"}}]",
+ "portal_emails_from": null,
+ "portal_emails_reply_to": null,
+ "portal_invite_email": null,
+ "portal_reset_email": null,
+ "portal_reset_success_email": null,
+ "portal_token_exp": null
+ },
+ "created_at": 1557419951,
+ "id": "00000000-0000-0000-0000-000000000000",
+ "meta": {
+ "color": null,
+ "thumbnail": null
+ },
+ "name": "default"
+ },
+ {
+ "comment": null,
+ "config": {
+ "meta": null,
+ "portal": false,
+ "portal_access_request_email": null,
+ "portal_approved_email": null,
+ "portal_auth": null,
+ "portal_auth_conf": null,
+ "portal_auto_approve": null,
+ "portal_cors_origins": null,
+ "portal_developer_meta_fields": "[{\"label\":\"Full Name\",\"title\":\"full_name\",\"validator\":{\"required\":true,\"type\":\"string\"}}]",
+ "portal_emails_from": null,
+ "portal_emails_reply_to": null,
+ "portal_invite_email": null,
+ "portal_reset_email": null,
+ "portal_reset_success_email": null,
+ "portal_token_exp": null
+ },
+ "created_at": 1557441226,
+ "id": "c663cca5-c6f6-474a-ae44-01f62aba16a9",
+ "meta": {
+ "color": null,
+ "thumbnail": null
+ },
+ "name": "green-team"
+ }
+ ],
+ "next": null
+}
+```
+
+### Update or Create a Workspace
+
+**Endpoint**
+
+/workspaces/{id}
+
+Attributes | Description
+---:| ---
+`id`<br>**conditional** | The **Workspace's** unique ID, if replacing it.*
+
+* The behavior of `PUT` endpoints is the following: if the request payload **does
+not** contain an entity's primary key (`id` for Workspaces), the entity will be
+created with the given payload. If the request payload **does** contain an
+entity's primary key, the payload will "replace" the entity specified by the
+given primary key. If the primary key is **not** that of an existing entity, `404
+NOT FOUND` will be returned.
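As a non-authoritative sketch (plain Python with invented names, not Kong source), the create-or-replace rule above behaves like this: no primary key in the payload means create, a known primary key means replace, and an unknown primary key yields 404.

```python
# Hypothetical sketch of the PUT semantics described above.

workspaces = {}  # primary key (id) -> stored workspace record

def put_workspace(payload):
    """Return (http_status, record) following the PUT create-or-replace rule."""
    wid = payload.get("id")
    if wid is None:
        # No primary key: create the entity with a generated id
        # (a stand-in for a real UUID).
        wid = f"generated-{len(workspaces) + 1}"
        workspaces[wid] = dict(payload, id=wid)
        return 201, workspaces[wid]
    if wid in workspaces:
        # Known primary key: replace the stored entity wholesale.
        workspaces[wid] = dict(payload)
        return 200, workspaces[wid]
    # Primary key given but no such entity exists.
    return 404, None

created, ws = put_workspace({"name": "green-team"})
replaced, _ = put_workspace({"id": ws["id"], "name": "rocket-team"})
missing, _ = put_workspace({"id": "no-such-id", "name": "x"})
```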
+
+#### Request Body
+
+Attribute | Description
+---:| ---
+`name` | The **Workspace** name.
+
+**Response**
+
+If creating the entity:
+
+```
+HTTP 201 Created
+```
+
+If replacing the entity:
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "comment": null,
+ "config": {
+ "meta": null,
+ "portal": false,
+ "portal_access_request_email": null,
+ "portal_approved_email": null,
+ "portal_auth": null,
+ "portal_auth_conf": null,
+ "portal_auto_approve": null,
+ "portal_cors_origins": null,
+ "portal_developer_meta_fields": "[{\"label\":\"Full Name\",\"title\":\"full_name\",\"validator\":{\"required\":true,\"type\":\"string\"}}]",
+ "portal_emails_from": null,
+ "portal_emails_reply_to": null,
+ "portal_invite_email": null,
+ "portal_reset_email": null,
+ "portal_reset_success_email": null,
+ "portal_token_exp": null
+ },
+ "created_at": 1557504202,
+ "id": "c663cca5-c6f6-474a-ae44-01f62aba16a9",
+ "meta": {
+ "color": null,
+ "thumbnail": null
+ },
+ "name": "rocket-team"
+}
+```
+
+### Retrieve a Workspace
+
+**Endpoint**
+
+/workspaces/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`<br>**required** | The unique identifier **or** the name of the **Workspace** to retrieve
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "config": {
+ "portal": false,
+ "portal_developer_meta_fields": "[{\"label\":\"Full Name\",\"title\":\"full_name\",\"validator\":{\"required\":true,\"type\":\"string\"}}]"
+ },
+ "created_at": 1557504202,
+ "id": "c663cca5-c6f6-474a-ae44-01f62aba16a9",
+ "meta": { },
+ "name": "rocket-team"
+}
+```
+
+### Retrieve Workspace Metadata
+
+#### Endpoint
+
+/workspaces/{name or id}/meta
+
+Attributes | Description
+---:| ---
+`name or id`<br>**required** | The unique identifier **or** the name of the **Workspace** to retrieve
+
+#### Response
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "counts": {
+ "acls": 1,
+ "apis": 1,
+ "basicauth_credentials": 1,
+ "consumers": 1234,
+ "files": 41,
+ "hmacauth_credentials": 1,
+ "jwt_secrets": 1,
+ "keyauth_credentials": 1,
+ "oauth2_authorization_codes": 1,
+ "oauth2_credentials": 1,
+ "oauth2_tokens": 1,
+ "plugins": 5,
+ "rbac_roles": 3,
+ "rbac_users": 12,
+ "routes": 15,
+ "services": 2,
+ "ssl_certificates": 1,
+ "ssl_servers_names": 1,
+ "targets": 1,
+ "upstreams": 1
+ }
+}
+```
+
+### Delete a Workspace
+
+**Endpoint**
+
+/workspaces/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`<br>**required** | The unique identifier **or** the name of the **Workspace** to delete
+
+**Note:** All entities within a **Workspace** must be deleted before the
+**Workspace** itself can be.
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+
+### Update a Workspace
+
+**Endpoint**
+
+/workspaces/{name or id}
+
+Attributes | Description
+---:| ---
+`name or id`<br>**required** | The unique identifier **or** the name of the **Workspace** to patch
+
+#### Request Body
+
+Attributes | Description
+---:| ---
+`comment` | A string describing the **Workspace**
+
+Note that the `PATCH` endpoint does not allow renaming a **Workspace**.
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "comment": "this is a sample comment in the patch request",
+ "config": {
+ "meta": null,
+ "portal": false,
+ "portal_access_request_email": null,
+ "portal_approved_email": null,
+ "portal_auth": null,
+ "portal_auth_conf": null,
+ "portal_auto_approve": null,
+ "portal_cors_origins": null,
+ "portal_developer_meta_fields": "[{\"label\":\"Full Name\",\"title\":\"full_name\",\"validator\":{\"required\":true,\"type\":\"string\"}}]",
+ "portal_emails_from": null,
+ "portal_emails_reply_to": null,
+ "portal_invite_email": null,
+ "portal_reset_email": null,
+ "portal_reset_success_email": null,
+ "portal_token_exp": null
+ },
+ "created_at": 1557509909,
+ "id": "c543d2c8-d297-4c9c-adf5-cd64212868fd",
+ "meta": {
+ "color": null,
+ "thumbnail": null
+ },
+ "name": "green-team"
+}
+```
+
+### Add entities to a Workspace
+
+Workspaces are groups of entities. This endpoint allows one to add an entity,
+identified by its unique identifier, to a **Workspace**.
+
+**Endpoint**
+
+/workspaces/{name or id}/entities
+
+#### Request Body
+
+{{ page.workspace_entities_body }}
+
+**Response**
+
+```
+HTTP 201 Created
+```
+
+```json
+[
+ {
+ "connect_timeout": 60000,
+ "created_at": 1557510770,
+ "host": "httpbin.com",
+ "id": "8e15ca43-1d14-4f58-8f05-0327af74b5c5",
+ "name": "testservice",
+ "port": 80,
+ "protocol": "http",
+ "read_timeout": 60000,
+ "retries": 5,
+ "updated_at": 1557510770,
+ "write_timeout": 60000
+ }
+]
+```
+
+The response is the representation of the entity that was added to the
+workspace; in this case, a Service.
+
+### List entities that are part of a Workspace
+
+**Endpoint**
+
+/workspaces/{name or id}/entities
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "data": [
+ {
+ "entity_id": "8e15ca43-1d14-4f58-8f05-0327af74b5c5",
+ "entity_type": "services",
+ "unique_field_name": "name",
+ "unique_field_value": "testservice",
+ "workspace_id": "c543d2c8-d297-4c9c-adf5-cd64212868fd",
+ "workspace_name": "green-team"
+ },
+ {
+ "entity_id": "8e15ca43-1d14-4f58-8f05-0327af74b5c5",
+ "entity_type": "services",
+ "unique_field_name": "id",
+ "unique_field_value": "8e15ca43-1d14-4f58-8f05-0327af74b5c5",
+ "workspace_id": "c543d2c8-d297-4c9c-adf5-cd64212868fd",
+ "workspace_name": "green-team"
+ }
+ ],
+ "total": 2
+}
+```
+
+In this case, the **Workspace** references two Services.
+
+### Delete entities from a Workspace
+
+**Endpoint**
+
+/workspaces/{name or id}/entities
+
+#### Request Body
+
+{{ page.workspace_entities_body }}
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+
+### Retrieve an entity from a Workspace
+
+This endpoint allows one to retrieve an entity from a workspace; useful, say,
+for checking if a given entity is part of a given **Workspace**.
+
+**Endpoint**
+
+/workspaces/{name or id}/entities/{name or id}
+
+**Response**
+
+```
+HTTP 200 OK
+```
+
+```json
+{
+ "entity_id": "8e15ca43-1d14-4f58-8f05-0327af74b5c5",
+ "entity_type": "services",
+ "workspace_id": "c543d2c8-d297-4c9c-adf5-cd64212868fd",
+ "workspace_name": "green-team"
+}
+```
+
+### Delete a particular entity from a Workspace
+
+**Endpoint**
+
+/workspaces/{name or id}/entities/{name or id}
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+
+---
+
+[Admin API]: /enterprise/{{page.kong_version}}/admin-api/
diff --git a/app/enterprise/1.3-x/auth.md b/app/enterprise/1.3-x/auth.md
new file mode 100644
index 000000000000..f190620128e8
--- /dev/null
+++ b/app/enterprise/1.3-x/auth.md
@@ -0,0 +1,215 @@
+---
+title: Authentication Reference
+---
+
+## Introduction
+
+Traffic to your upstream services (APIs or microservices) is typically controlled by the application and
+configuration of various Kong [authentication plugins][plugins]. Since Kong's Service entity represents
+a 1-to-1 mapping of your own upstream services, the simplest scenario is to configure authentication
+plugins on the Services of your choosing.
+
+## Generic authentication
+
+The most common scenario is to require authentication and to not allow access for any unauthenticated request.
+To achieve this, any of the authentication plugins can be used. The generic scheme/flow of those plugins
+works as follows:
+
+1. Apply an auth plugin to a Service, or globally (you cannot apply one on consumers)
+2. Create a `consumer` entity
+3. Provide the consumer with authentication credentials for the specific authentication method
+4. Now, whenever a request comes in, Kong will check the provided credentials (depending on the auth type) and
+will either block the request if it cannot validate them, or add consumer and credential details
+to the headers and forward the request
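The steps above can be sketched as a toy illustration (hypothetical names, not Kong's implementation): look up the credential, then either block the request or forward it with consumer and credential details added as headers.

```python
# Sketch of the generic auth flow: a stored credential maps to a consumer;
# unknown credentials are blocked, known ones are forwarded with headers.

consumers_by_key = {"key-123": {"id": "c-1", "username": "alice"}}

def handle(request):
    consumer = consumers_by_key.get(request.get("apikey"))
    if consumer is None:
        # Cannot validate: block the request.
        return {"status": 401, "body": "Unauthorized"}
    # Validated: add consumer details to the headers and forward.
    headers = dict(request.get("headers", {}))
    headers["X-Consumer-ID"] = consumer["id"]
    headers["X-Consumer-Username"] = consumer["username"]
    return {"status": 200, "forwarded_headers": headers}

ok = handle({"apikey": "key-123", "headers": {"Accept": "*/*"}})
blocked = handle({"headers": {}})
```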
+
+The generic flow above does not always apply: for example, when using external authentication like LDAP,
+there is no consumer to be identified, and only the credentials will be added to the forwarded headers.
+
+The authentication method specific elements and examples can be found in each [plugin's documentation][plugins].
+
+## Consumers
+
+The easiest way to think about consumers is to map them one-to-one to users. Yet, to Kong, this does not matter.
+The core principle of consumers is that you can attach plugins to them, and hence customize request behaviour.
+So you might have mobile apps and define one consumer for each app, or each version of it. Or have a consumer per
+platform, e.g. an Android consumer, an iOS consumer, etc.
+
+The concept is opaque to Kong, which is why they are called "consumers" rather than "users".
+
+## Anonymous Access
+
+Kong has the ability to configure a given Service to allow **both** authenticated **and** anonymous access.
+You might use this configuration to grant access to anonymous users with a low rate-limit, and grant access
+to authenticated users with a higher rate limit.
+
+To configure a Service like this, you first apply your selected authentication plugin, then create a new
+consumer to represent anonymous users, then configure your authentication plugin to allow anonymous
+access. Here is an example, which assumes you have already configured a Service named `example-service` and
+the corresponding route:
+
+1. ### Create an example Service and a Route
+
+ Issue the following cURL request to create `example-service` pointing to mockbin.org, which will echo
+ the request:
+
+ ```bash
+ $ curl -i -X POST \
+ --url http://localhost:8001/services/ \
+ --data 'name=example-service' \
+ --data 'url=http://mockbin.org/request'
+ ```
+
+ Add a route to the Service:
+
+ ```bash
+ $ curl -i -X POST \
+ --url http://localhost:8001/services/example-service/routes \
+ --data 'paths[]=/auth-sample'
+ ```
+
+ The url `http://localhost:8000/auth-sample` will now echo whatever is being requested.
+
+2. ### Configure the key-auth Plugin for your Service
+
+ Issue the following cURL request to add a plugin to a Service:
+
+ ```bash
+ $ curl -i -X POST \
+ --url http://localhost:8001/services/example-service/plugins/ \
+ --data 'name=key-auth'
+ ```
+
+ Be sure to note the created Plugin `id` - you'll need it in step 5.
+
+3. ### Verify that the key-auth plugin is properly configured
+
+ Issue the following cURL request to verify that the [key-auth][key-auth]
+ plugin was properly configured on the Service:
+
+ ```bash
+ $ curl -i -X GET \
+ --url http://localhost:8000/auth-sample
+ ```
+
+ Since you did not specify the required `apikey` header or parameter, and you have not yet
+ enabled anonymous access, the response should be `403 Forbidden`:
+
+ ```http
+ HTTP/1.1 403 Forbidden
+ ...
+
+ {
+ "message": "No API key found in headers or querystring"
+ }
+ ```
+
+4. ### Create an anonymous Consumer
+
+ Every request proxied by Kong must be associated with a Consumer. You'll now create a Consumer
+ named `anonymous_users` (that Kong will utilize when proxying anonymous access) by issuing the
+ following request:
+
+ ```bash
+ $ curl -i -X POST \
+ --url http://localhost:8001/consumers/ \
+ --data "username=anonymous_users"
+ ```
+
+ You should see a response similar to the one below:
+
+ ```http
+ HTTP/1.1 201 Created
+ Content-Type: application/json
+ Connection: keep-alive
+
+ {
+ "username": "anonymous_users",
+ "created_at": 1428555626000,
+ "id": "bbdf1c48-19dc-4ab7-cae0-ff4f59d87dc9"
+ }
+ ```
+
+ Be sure to note the Consumer `id` - you'll need it in the next step.
+
+5. ### Enable anonymous access
+
+ You'll now re-configure the key-auth plugin to permit anonymous access by issuing the following
+ request (**replace the placeholders below with the Plugin `id` from step 2 and the Consumer `id` from step 4**):
+
+ ```bash
+ $ curl -i -X PATCH \
+ --url http://localhost:8001/plugins/{plugin-id} \
+ --data "config.anonymous={consumer-id}"
+ ```
+
+ The `config.anonymous=` parameter instructs the key-auth plugin on this Service to permit
+ anonymous access, and to associate such access with the Consumer `id` we received in the previous step. It is
+ required that you provide a valid and pre-existing Consumer `id` in this step - validity of the Consumer `id`
+ is not currently checked when configuring anonymous access, and provisioning of a Consumer `id` that doesn't already
+ exist will result in an incorrect configuration.
+
+6. ### Check anonymous access
+
+ Confirm that your Service now permits anonymous access by issuing the following request:
+
+ ```bash
+ $ curl -i -X GET \
+ --url http://localhost:8000/auth-sample
+ ```
+
+ This is the same request you made in step #3, however this time the request should succeed, because you
+ enabled anonymous access in step #5.
+
+ The response (which is the request as Mockbin received it) should have these elements:
+
+ ```json
+ {
+ ...
+ "headers": {
+ ...
+ "x-consumer-id": "713c592c-38b8-4f5b-976f-1bd2b8069494",
+ "x-consumer-username": "anonymous_users",
+ "x-anonymous-consumer": "true",
+ ...
+ },
+ ...
+ }
+ ```
+
+ It shows the request was successful, but anonymous.
+
+## Multiple Authentication
+
+Kong supports multiple authentication plugins for a given Service, allowing
+different clients to utilize different authentication methods to access a given Service or Route.
+
+The behaviour of the auth plugins can be set to perform either a logical `AND` or a logical `OR` when evaluating
+multiple authentication credentials. The key to this behaviour is the `config.anonymous` property.
+
+- `config.anonymous` not set
+ If this property is not set (empty), the auth plugin will always perform authentication and return
+ a `40x` response if validation fails. This results in a logical `AND` when multiple auth plugins are being
+ invoked.
+- `config.anonymous` set to a valid Consumer id
+ In this case, the auth plugin will only perform authentication if the request was not already authenticated. When
+ authentication fails, it will not return a `40x` response, but will set the anonymous consumer as the consumer. This
+ results in a logical `OR` + 'anonymous access' when multiple auth plugins are being invoked.
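A rough sketch of the two behaviours (a hypothetical helper, not Kong source): each plugin either rejects on failure (`AND`) or falls back to its anonymous consumer (`OR`), and a plugin with an anonymous consumer is skipped once a real credential has validated.

```python
# Each plugin is (name, anonymous_consumer_or_None); `credentials` is the
# set of plugin names the request carries a valid credential for.

def run_auth_chain(plugins, credentials):
    consumer, authenticated, fallback = None, False, None
    for name, anonymous in plugins:
        if authenticated and anonymous is not None:
            continue  # OR mode: skip once a real credential has validated
        if name in credentials:
            consumer, authenticated = f"consumer-via-{name}", True
        elif anonymous is None:
            return 401  # AND mode: any failed plugin rejects the request
        else:
            fallback = anonymous  # OR mode: remember the anonymous consumer
    if not authenticated and fallback is not None:
        consumer = fallback  # OR mode: no credential validated at all
    return consumer

# AND: no anonymous consumer configured on either plugin.
and_chain = [("key-auth", None), ("basic-auth", None)]
# OR: both plugins share the same anonymous consumer.
or_chain = [("key-auth", "anonymous_users"), ("basic-auth", "anonymous_users")]
```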
+
+**NOTE 1**: Either all or none of the auth plugins must be configured for anonymous access. The behaviour is
+undefined if they are mixed.
+
+**NOTE 2**: When using the `AND` method, the last plugin executed will be the one setting the credentials
+passed to the upstream service. With the `OR` method, it will be the first plugin that successfully authenticates
+the consumer, or the last plugin that will set its configured anonymous consumer.
+
+**NOTE 3**: When using the OAuth2 plugin in an `AND` fashion, the OAuth2 endpoints for requesting
+tokens etc. will also require authentication by the other configured auth plugins.
+
+
+> **Warning:** When multiple authentication plugins are enabled in an `OR` fashion on a given Service, and
+> anonymous access should be forbidden, the request-termination plugin should be configured on the anonymous
+> consumer. Failure to do so will allow unauthorized requests.
+
+
+[plugins]: https://konghq.com/plugins/
+[key-auth]: /plugins/key-authentication
diff --git a/app/enterprise/1.3-x/brain-immunity/install-configure.md b/app/enterprise/1.3-x/brain-immunity/install-configure.md
new file mode 100644
index 000000000000..f1430613b524
--- /dev/null
+++ b/app/enterprise/1.3-x/brain-immunity/install-configure.md
@@ -0,0 +1,716 @@
+---
+title: Kong Brain & Kong Immunity Installation and Configuration Guide
+---
+
+
+### Introduction
+**Kong Brain** and **Kong Immunity** help automate the entire API and service development life cycle. By automating processes for configuration, traffic analysis and the creation of documentation, Kong Brain and Kong Immunity help organizations improve efficiency, governance, reliability and security. Kong Brain automates API and service documentation and Kong Immunity uses advanced machine learning to analyze traffic patterns to diagnose and improve security.
+
+### Overview
+
+#### Kong Collector Plugin
+
+Kong Collector Plugin (Collector) configuration includes:
+
+* Kong Collector Plugin is included with Kong Enterprise, and used by Kong Brain and (optional) Kong Immunity.
+* Configure the Collector, and Kong Brain is automatically enabled.
+* If you have Kong Immunity, configure the Collector and Kong Immunity is automatically enabled.
+
+#### Kong Brain
+
+Kong Brain (Brain) features include:
+
+* Kong Brain uses the real-time Kong Collector Plugin.
+* Once the Collector is configured, Kong Brain is automatically enabled.
+* Kong Brain ingests your documentation and data flows, analyzes changes, and takes action.
+* Users can create workflows and approve changes directly in Kong Manager.
+* Visually map services using the real-time Kong Service Map.
+
+#### Kong Immunity
+
+Kong Immunity (Immunity) features include:
+
+* Kong Immunity is optional.
+* Once the Collector is configured, Kong Immunity is automatically enabled.
+* View Kong Immunity alerts in the Kong Service Map and take action with just a few clicks.
+* Instantly diagnose issues. Receive Slack alerts notifying users of anomalies that require attention.
+
+#### Kong Service Map
+
+* For information about the Kong Service Map, see **Using Kong's Service Map**.
+
+
+### Configure the Kong Collector Plugin
+
+#### Prerequisites
+
+Prerequisites for installing the Collector include:
+
+* A working Kong Enterprise system using Kong Enterprise 0.35.3 or later, with a minimum of one Kong node and a working data store (PostgreSQL or Cassandra).
+* Access to a platform for the Collector which has Docker installed. This system must be networked to "talk" to the Kong Enterprise system on which the Collector plugin is enabled.
+* Bintray access credentials (supplied by Kong) to access the downloads.
+
+#### Overview
+
+With the Collector, Kong Enterprise collects, coalesces, and stores requests made through Kong. This plugin makes a copy of requests through Kong (not affecting the hot path), which are "teed" to the Kong Brain and/or Kong Immunity platforms. See the diagram below for a visual of how the Kong Collector Plugin works with Kong Enterprise.
+
+#### Configuration Order
+
+Configure the Collector in the following order, with configuration steps provided below:
+
+1. Deploy the Collector Plugin, which captures and sends traffic to the Collector for collection/processing.
+2. Deploy and start the Collector infrastructure on your Docker aware platform.
+3. Configure the components to talk to one another.
+4. Test their configuration.
+
+#### Deploy the Collector
+
+Deploy the Collector Plugin from the Kong Manager or the Admin API:
+
+```
+$ http --form POST http://<KONG_HOST>:8001/<workspace>/plugins name=collector config.service_token=foo config.host=<COLLECTOR_HOST> config.port=5000 config.https=false config.log_bodies=true
+```
+
+It is possible to set up the Collector to be applied only to specific Routes or Services, by adding `route.id=<ROUTE_ID>` or `service.id=<SERVICE_ID>` to the request:
+```
+$ http --form POST http://<KONG_HOST>:8001/<workspace>/plugins name=collector config.service_token=foo config.host=<COLLECTOR_HOST> config.port=5000 config.https=false config.log_bodies=true route.id=<ROUTE_ID>
+```
+
+
+### Configure the Collector
+
+#### Bintray Credentials
+
+Bintray is the download location for the files needed to install and run Brain and Immunity. Log in to Bintray and retrieve your BINTRAY_USERNAME and BINTRAY_API_KEY to proceed with the download:
+
+1. Your BINTRAY_USERNAME is shown in your Bintray profile.
+2. To get your BINTRAY_API_KEY, go to "Edit Profile" and choose "API Key".
+
+#### Setting up with Docker Compose
+
+The information needed to run the full Collector, Brain, and Immunity system is included in the docker-compose files. This starts several docker containers: a database, a collector, a worker and a scheduler.
+Kong provides a private docker image that is used by the compose files. This image is distributed by Bintray, and for access, the following is required:
+
+1. Your Bintray User ID
+2. Your Bintray API key
+3. A system (where you want to run Brain and/or Immunity) with Docker installed and ready to run
+4. A system (where you want to run Brain and/or Immunity) with Docker-compose installed and ready to run
+
+Your Bintray credentials should be furnished to you upon purchase of Kong Enterprise. If you do not have them, contact Kong Support for a new copy.
+First, SSH into the running instance where the Brain and Immunity system will be installed. Then, log in to Docker.
+Command:
+```
+sudo docker login -u <BINTRAY_USERNAME> -p <BINTRAY_API_KEY> kong-docker-kong-brain-immunity-base.bintray.io
+```
+
+Example:
+```
+$ sudo docker login -u kongUser -p ef27888aba233eggg8889eeee9454ca67ca9b1aa8bdd kong-docker-kong-brain-immunity-base.bintray.io
+```
+If you see "permission denied", make sure you have run the above command with `sudo`.
+Next, pull down the files Docker Compose will need.
+Command:
+
+```
+wget https://<BINTRAY_USERNAME>:<BINTRAY_API_KEY>@kong.bintray.com/kong-brain-immunity-base/docker-compose.zip
+```
+
+Example:
+```
+$ wget https://kong_user:ef27888aba233eefffffeef669454ca67ca9b1aa8bdd@kong.bintray.com/kong-brain-immunity-base/docker-compose.zip
+```
+
+If successful, you should see docker-compose.zip in your current directory. Unzip the package into the directory of your choice.
+Next, run Docker Compose using the files to install Brain. Start the instances with docker-compose:
+Command:
+```
+KONG_HOST=<KONG_HOST> KONG_PORT=8001 docker-compose -f docker-compose.yml -f with-db.yml -f with-redis.yml up --remove-orphans
+```
+
+```
+KONG_HOST - the public IP address or hostname of the system that is running Kong
+KONG_PORT - usually 8001, but may be set otherwise
+```
+
+Example:
+```
+$ sudo KONG_HOST=13.57.208.156 KONG_PORT=8001 docker-compose -f docker-compose.yml -f with-db.yml -f with-redis.yml up --remove-orphans
+```
+
+If you get an error that mentions "empty string", make sure you run the above command with `sudo`.
+If the command runs successfully, you should see the Docker images being downloaded and the containers starting up.
+
+#### Opt-Out of HAR Redaction
+
+The Collector defaults to NOT storing body data values and attachments in traffic data: as viewed in the HAR `['postData']['text']` field, all values that exist have been stripped and replaced with the value's type. This does not affect the performance of Brain or Immunity, but it can impact your ability to investigate some Kong Immunity alerts by looking at the offending HARs that created them.
+If you want to store body data in the Collector, you can set the Collector's REDACT_BODY_DATA by declaring it in your docker-compose up command as follows:
+```
+$ REDACT_BODY_DATA=False docker-compose -f docker-compose.yml -f with-redis.yml up --remove-orphans
+```
+
+
+#### Using different Postgres and Redis instances
+
+To use your own instances instead of the containerized ones, you can change the command to use your own database, your own Redis, or both.
+```
+$ KONG_HOST=<KONG_HOST> KONG_PORT=8001 SQLALCHEMY_DATABASE_URI=<DATABASE_URI> docker-compose -f docker-compose.yml -f with-redis.yml up --remove-orphans
+```
+
+```
+$ REDIS_URI=<REDIS_URI> KONG_HOST=<KONG_HOST> KONG_PORT=8001 docker-compose -f docker-compose.yml -f with-db.yml up --remove-orphans
+```
+
+```
+$ REDIS_URI=<REDIS_URI> KONG_HOST=<KONG_HOST> KONG_PORT=8001 SQLALCHEMY_DATABASE_URI=<DATABASE_URI> docker-compose -f docker-compose.yml up --remove-orphans
+```
+
+
+#### Confirm Collector is working
+
+Requests to the status endpoint confirm that the Collector is up and running, in addition to providing Immunity and/or Brain status and version number. Open a browser and make the following request:
+
+Request: `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/status`
+
+COLLECTOR_HOST - the *public* IP address of the computer where you ran Docker Compose.
+
+#### Configure Kong Routes to send data to Collector
+
+Each Route in Kong that you want the Collector to collect data on must be explicitly enabled to send data to the Collector. You can do this using the Kong Admin API to enable and configure the plugin on the Route.
+For each Route you want to enable the Collector on, make the following request:
+```
+$ http http://<KONG_HOST>:8001/<workspace>/routes/<ROUTE_ID>/plugins name=collector config.service_token=foo config.host=<COLLECTOR_HOST> config.port=5000 config.https=false config.log_bodies=true
+```
+
+If your Kong instance uses RBAC authorization for admin endpoints, be sure to pass your KONG-ADMIN-TOKEN in the header.
+
+>POWER USER note: There are additional configuration options on the Collector plugin that allows for tuning of the batches that are sent to Brain. These are `config.queue_size` and `config.flush_timeout`.
+
+If you are experiencing performance issues, contact Kong Support for help optimizing the batching.
+You should now be ready to use Brain and Immunity!
+
+#### Monitor the Collector
+
+Once you have Collector up and running and Kong routes configured to send data to Collector, you can check the functioning of Brain in several ways. Brain exposes several API endpoints on port 5000.
+
+#### Check Collector receiving data
+
+```
+http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/
+```
+
+This returns JSON containing information about each of the last 100 requests that Collector has received.
+
+#### Using Kong Brain
+
+Once you have the Collector plugin and infrastructure up and running, Kong Brain does not require additional configuration as it is automatically enabled. Once data is flowing through the Collector system, Brain starts generating swagger files and service-maps as displayed on the Dev Portal.
+
+#### Generated Open-API Spec files
+
+To create Brain's Swagger files, the Collector endpoint /swagger returns a swagger file generated from the traffic that matches the submitted filter parameters: `host`, `route_id`, `service_id` and `workspace_name`. It also fills the `title`, `version` and `description` fields within the swagger file with the respective submitted parameters.
+
+#### **http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/swagger?host=<HOST>&openapi_version=<2|3>&route_id=<ROUTE_ID>&service_id=<SERVICE_ID>&workspace_name=<WORKSPACE_NAME>&title=<TITLE>**
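When scripting spec generation, the query string above can be assembled programmatically. This is a minimal sketch; the Collector address and filter values are hypothetical placeholders, not real endpoints:

```python
from urllib.parse import urlencode

# Hypothetical Collector address and filter values -- substitute your own.
collector = "http://collector.example.com:5000"
params = {
    "host": "api.example.com",   # upstream host whose traffic to consider
    "openapi_version": "3",      # 2 or 3
    "workspace_name": "default",
    "title": "Example API",      # copied into the generated swagger file
}

# Build the /swagger request URL from the filter parameters.
swagger_url = f"{collector}/swagger?{urlencode(params)}"
```

`urlencode` takes care of escaping, which matters for titles containing spaces.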
+
+### Using Kong Immunity
+
+#### Immunity Model Training
+
+Immunity automatically starts training its models once it is up and running and receiving data. Immunity creates a unique model for every unique endpoint + method combination it sees in incoming traffic. For example, if you have an endpoint `www.test-website.com/buy` and traffic comes in with both GET and POST requests for that endpoint, Immunity creates two models: one for the endpoint + GET traffic and one for the endpoint + POST traffic.
+
+
+The first model version is created after the first hour, and models retrain continuously during the first week to provide the best model possible. After that, all models retrain weekly on a week of data.
+
+
+We also provide an endpoint for clients to retrigger training themselves. We recommend retraining when the context of your app is expected to change significantly, for example before an upcoming app release that will change several endpoints. If this is the case, POST to `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/resettrainer` to start the training cycle all over again.
+
+
+Let's say you'd like slightly more control over the data your model sees. For example, perhaps you know that weekend data is not particularly useful for model building because weekends are normally outliers that your team is prepared for. You can trigger model training for all models with a specified time period of data. Simply POST to **http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer** with the start and end time of the data you'd like included in training, like this:
+
+```
+curl -d '{"start":"2019-01-08 10:00:00", "end":"2019-01-09 23:30:00"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer
+```
+or, in the browser like this:
+
+#### **http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer?start=<START_DATE>&end=<END_DATE>**
+
+** datetime value format: `YYYY-MM-DD HH:mm:ss`
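When triggering training from a script, it can help to validate the window against this format before sending the request. A minimal sketch (the window values are examples only):

```python
from datetime import datetime

FORMAT = "%Y-%m-%d %H:%M:%S"  # matches the documented YYYY-MM-DD HH:mm:ss format

def training_payload(start: str, end: str) -> dict:
    """Validate the window bounds and return the /trainer request body."""
    # strptime raises ValueError if either bound is not in the expected format.
    start_dt = datetime.strptime(start, FORMAT)
    end_dt = datetime.strptime(end, FORMAT)
    if end_dt <= start_dt:
        raise ValueError("end must be after start")
    return {"start": start, "end": end}

payload = training_payload("2019-01-08 10:00:00", "2019-01-09 23:30:00")
```

The returned dict is exactly the JSON body shown in the curl example above.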
+
+
+Additionally, you can specify the Kong service_id or route_id of the urls you would like trained using the kong_entity parameter. Immunity then only trains urls associated with the ID provided, using the data within the specified start and end dates.
+
+```
+curl -d '{"start":"2019-01-08 10:00:00", "end":"2019-01-09 23:30:00", "kong_entity":"2beff163-061d-43ad-8d87-8f40d10805ba"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer
+```
+
+#### Checking Models Trained
+
+Only endpoint + method combinations that have a model trained can be monitored for alerts. If you want to check which endpoints have models, you can use **http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/monitoredendpoints**, which gives back a list of all models in the system. Each item in this list contains the following identifying information for the model:
+
+
+
+* **base_url**: The url of the traffic used to train the model.
+* **method**: The method of the traffic used to train the model.
+* **route_id**: The Kong route_id that the traffic used to train the model is associated with.
+* **service_id**: The Kong service_id that the traffic used to train the model is associated with.
+* **model_version_id**: The model version number of the current, active model.
+* **active_models**: A json object containing information on the active status of each of the 6 core alert types in Immunity (unknown_parameters, abnormal_value, latency, traffic, status codes, and value_type).
+
+
+
+In this object, the key is a specific alert type and the value is a boolean, where True indicates that the model is actively monitoring for that alert type.
+
+
+In general, if an endpoint + method combination model does not appear in the object returned from /monitoredendpoints, it is likely because Immunity has not yet seen enough traffic to build a reliable model.
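As an illustration, a `/monitoredendpoints` response can be filtered to find models that are not monitoring a given alert type. The response shape below is a simplified assumption based on the fields described above, not the exact payload:

```python
# Hypothetical, simplified /monitoredendpoints response -- the real payload
# contains more fields (route_id, service_id, model_version_id, ...).
monitored = [
    {"base_url": "/buy", "method": "GET",
     "active_models": {"latency": True, "traffic": True, "abnormal_value": False}},
    {"base_url": "/buy", "method": "POST",
     "active_models": {"latency": True, "traffic": False, "abnormal_value": True}},
]

def inactive_for(models: list, alert_type: str) -> list:
    """Return (base_url, method) pairs whose model is not monitoring alert_type."""
    return [(m["base_url"], m["method"])
            for m in models
            if not m["active_models"].get(alert_type, False)]

print(inactive_for(monitored, "traffic"))  # -> [('/buy', 'POST')]
```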
+
+### Configure Auto-Training
+
+#### Restarting Training Schedules
+
+Immunity automatically sets up training jobs when it first starts up, and retrains all models on an optimized schedule based on time since data started flowing through Immunity. If you have experienced large changes in the type of data you expect to be coming through Immunity and do not feel comfortable choosing an "optimal" time period to use for retraining with the /trainer endpoint, you can re-trigger Immunity's auto-training by posting to the /trainer/reset endpoint. Immunity will then recreate its retraining schedule as if it was just being started and newly ingesting data.
+
+```
+curl -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer/reset
+```
+
+#### Configuring Auto-Training Rules
+
+For best use, Immunity retrains on a regular basis. If you do not need to retrain your models regularly and are happy with the current model you have now, you can stop auto retraining via a POST request to the /trainer/config endpoint. This endpoint takes these parameters:
+
+
+
+* **kong_entity**: The route_id or service_id that you would like to turn on or off auto-training.
+* **method**: One of: GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, TRACE. Specifying a method restricts the auto-training rule being made to only traffic matching that method. When this value is null, all traffic from all methods is included in the rule.
+* **enable**: True or False, where true means auto-training is on and false means auto-training is off for the kong_entity specified.
+
+
+
+You can turn off auto-training for a particular route or service via curl request like this:
+
+```
+curl -d '{"kong_entity":"your-route-id", "enable":false}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer/config
+```
+
+Similarly, if you turned off auto-training for a route and feel like turning it back on, you can post to /trainer/config with enable = true.
+
+
+These configurations will only apply to training started by Immunity's auto-training schedule. Other training requests made by /trainer won't be affected by this configuration.
+
+#### Viewing Configuration Rules
+
+To see all of your configured training rules, make a GET request to /trainer/config like this:
+
+```
+curl -X GET http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer/config
+```
+
+A list of all your rules will be returned, where kong_entity refers to the service_id or route_id the rule applies to, and enabled is a true or false value.
+
+#### Resetting or Deleting Configured Rules
+
+To delete a single auto-train rule that you created, you can send a delete request to /trainer/config with a kong_entity parameter and value of the service_id or route_id of the rule you would like to delete.
+
+```
+curl -d '{"kong_entity":"your-route-id"}' \
+ -H "Content-Type: application/json" \
+ -X DELETE http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer/config
+```
+
+Without a rule established, Immunity will default to auto-training. In other words, once you delete a configured rule, Immunity will continue or start auto-training on the route or service of the deleted rule.
+
+
+If you would like to delete all the configurations you create, you can do so by sending an empty DELETE request to /trainer/config like this:
+
+```
+curl -X DELETE http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer/config
+```
+
+### Immunity Generated Alerts
+
+Immunity evaluates your traffic every minute, and creates an alert when an anomalous event is detected.
+
+#### Types of Generated Alerts
+
+Immunity monitors several types of data points in all traffic coming through Kong, and generates alerts based on these data points. These are the types of violations Immunity looks for:
+
+* **value_type**: Triggered when incoming requests have a value of a different type (such as Int instead of Str) for a parameter than seen historically.
+* **unknown_parameter**: Triggered when requests include parameters not seen before.
+* **abnormal_value**: Triggered when requests contain values abnormal to the historical values seen paired with the parameter.
+* **latency_ms**: Triggered when incoming requests are significantly slower than historical records.
+* **traffic**: Triggered when Immunity sees a rise in 4XX and 5XX codes for incoming traffic, or when overall traffic experiences an abnormal spike or dip.
+* **statuscode**: Triggered when the proportion of 4XX or 5XX codes is increasing, regardless of traffic volume.
+
+#### Retrieving Generated Alerts
+
+You can monitor the created alerts by running the following commands:
+
+```
+curl -d '{"start":"2019-01-08 10:00:00", "end":"2019-01-09 23:30:00"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/alerts
+```
+
+Or, you can access the alerts via browser, passing in the end and start values as parameters like this:
+
+```
+http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/alerts?start=2019-01-01 00:00:00&end=2019-01-02 00:00:00
+```
+
+The /alerts endpoint takes these parameters, which you can mix and match to your monitoring needs:
+
+* **start and end**: Returns only alerts generated between the values passed in the start and end parameters.
+* **alert_type**: Returns only alerts of the type specified in the alert_type parameter. This parameter does not accept lists of alert types. The value passed must be one of ['query_params', 'statuscode', 'latency_ms', 'traffic'].
+* **url**: Returns only the alerts associated with the endpoint specified with the url parameter.
+* **method**: Returns only alerts with the method specified. Must be one of these values: GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, or TRACE. Full capitalization is necessary.
+* **workspace_name**: The name of the Kong workspace for the alerts you want returned.
+* **route_id**: The Kong route id for the alerts you want returned.
+* **service_id**: The Kong service id for the alerts you want returned.
+* **system_restored**: A true/false value indicating that you only want returned alerts whose system_restored value matches the boolean passed in this parameter.
+* **severity**: One of "low", "medium", or "high", which restricts the returned alerts to severities matching the value provided.
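For scripted monitoring, these filters can be combined into a query string. This sketch validates the values the endpoint would reject before building the URL; the Collector address is a hypothetical placeholder:

```python
from urllib.parse import urlencode

ALLOWED_SEVERITIES = {"low", "medium", "high"}
ALLOWED_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "CONNECT", "OPTIONS", "TRACE"}

def alerts_url(collector: str, **filters) -> str:
    """Build a filtered /alerts URL, rejecting values the endpoint would not accept."""
    if "severity" in filters and filters["severity"] not in ALLOWED_SEVERITIES:
        raise ValueError("severity must be low, medium, or high")
    if "method" in filters and filters["method"] not in ALLOWED_METHODS:
        raise ValueError("method must be fully capitalized, e.g. GET")
    return f"{collector}/alerts?{urlencode(filters)}"

# Hypothetical Collector address:
url = alerts_url("http://collector.example.com:5000", severity="high", method="GET")
```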
+
+#### Alerts Object
+
+Two types of data are returned by the /alerts endpoint. The first is a list of the alerts generated, which are structured like this:
+
+* **id**: The alert_id of the alert.
+* **detected_at**: The time at which the generated alert was detected. This also corresponds to the last time point in the data time series that generated this alert. For example, if the alert was generated on data from 1:00 pm to 1:01 pm, the detected_at time corresponds to the most recent time point of 1:01 pm in the data used to make that alert.
+* **detected_at_unix**: The time from detected_at expressed in unix time.
+* **url**: The url whose data generated this alert.
+* **alert_type**: The type of alert generated; one of ['query_params', 'statuscode', 'latency_ms', 'traffic'].
+* **summary**: The summary of the alert generated, including a description of the anomalous event for clarity.
+* **system_restored**: A True or False value indicating whether the anomalous event's system has been restored.
+* **severity**: The severity level of this alert; one of [low, medium, high].
+
+#### Alerts Metadata
+
+The second type of data returned is alerts metadata which describes the overall count of alerts and breaks down counts by alert type, severity, system_restored, and filtered_total.
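The metadata counts can be reproduced client-side from the alert list itself; the sample alerts below are fabricated for illustration and show only the fields used here:

```python
from collections import Counter

# Fabricated sample alerts, reduced to the fields this breakdown needs.
alerts = [
    {"alert_type": "latency_ms", "severity": "high", "system_restored": False},
    {"alert_type": "traffic", "severity": "medium", "system_restored": True},
    {"alert_type": "latency_ms", "severity": "high", "system_restored": True},
]

# Break counts down by alert type, severity, and system_restored,
# mirroring the metadata returned alongside the alert list.
by_type = Counter(a["alert_type"] for a in alerts)
by_severity = Counter(a["severity"] for a in alerts)
restored = sum(a["system_restored"] for a in alerts)

print(by_type["latency_ms"], by_severity["high"], restored)  # -> 2 2 2
```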
+
+#### Configure Alert Severity
+
+#### Severity-Levels
+
+Alerts can be classified at 4 severity levels:
+
+* **low**: The low severity classification denotes the least important alerts to the user. While the user ultimately decides what a low severity means to them, we recommend it indicate an alert you'd want to look at eventually, but not right away. It's an alert you wouldn't wake up at 2 am to fix, but something you'll find useful down the road, perhaps for planning or minor bug fixing.
+* **medium**: A medium severity classification denotes a mid-level important alert to the user. We think of this level as not something you'd want to wake up at 2 am to fix, but not so unimportant that you would wait until sprint planning prep to address it. This is a level you'll likely address within the sprint or the couple of days following it coming up.
+* **high**: A high severity classification is the highest severity level of alert. These are the alerts that you want to be woken up for in the middle of the night, the alerts whose ping means all hands on deck.
+* **ignored**: Alerts designated as ignored are not surfaced in Kong Manager, Slack alerts, or the /alerts endpoint. For the latter, ignored alerts are returned only when explicitly requested via the /alerts "severity" parameter.
+
+#### Immunity Default Severities
+
+Immunity provides default severity levels based on the alert type, and these defaults are:
+
+* value_type: low
+* unknown_parameter: low
+* latency_ms: high
+* traffic: medium
+* statuscode: high
+
+#### Creating or Updating Rules
+
+Of course, we think you know your system best and you can adjust the severities of your alerts to varying degrees of specificity. Users of Immunity will be able to configure alert severity on alert type, kong route_id or service_id, or any combination of the two.
+For example, if you decide that for your system, unknown_parameter alerts are always system-breaking you can set the severity configuration for unknown_parameter alerts to high. Let's say after doing so, you find that while usually an unknown_parameter alert is what you consider high-severity, there's one route where it's actually more of a medium. You can then specify a medium severity for unknown_parameter alerts generated only on that route and preserve the high-severity setting for the rest of unknown_parameters for the rest of your system.
+
+
+To set a severity configuration on alerts, Immunity provides a /alerts/config endpoint. Posting to /alerts/config will create a new configuration, and requires these parameters:
+
+* alert_name: one of the alert types from ['traffic', 'value_type', 'unknown_parameter', 'latency_ms', 'statuscode'], or null.
+* kong_entity: a route_id or service_id for the entity you want to create the configuration for, or null.
+* severity: the severity you want this rule to set; must be one of ['low', 'medium', 'high', 'ignored']. No other severity options are accepted.
+
+
+
+In the example above, to set the first alert type wide rule for all unknown_parameter alerts in your system, you would pass 'unknown_parameter' to the "alert_name" parameter and null to the "kong_entity" parameter. Here's an example of what that curl would look like:
+
+```
+curl -d '{"alert_name":"unknown_parameter", "kong_entity":null, "severity": "high"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/alerts/config
+```
+
+To add that second rule for unknown_parameter alerts only coming from a specific route, you'd make a request like this:
+
+```
+curl -d '{"alert_name":"unknown_parameter", "kong_entity":"your-route-id", "severity": "medium"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/alerts/config
+```
+
+When determining which severity to assign, Immunity will look for your configurations and default to the configuration that's "most specific" to the alert in question. Immunity thinks of alert configuration specification like this (in order from most specific configuration to least specific configuration):
+
+* kong route_id + alert_name combo
+* kong service_id + alert_name combo
+* route_id
+* service_id
+* alert_name
+* Immunity alert_name defaults
+
+
+
+When you hit the /alerts endpoint, for each alert, Immunity will first look for a rule specifying a severity for that route's kong route_id and alert_name. If it doesn't find a severity configuration, it moves down the list above until it returns the Immunity defaults for the alert's alert type.
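The precedence walk above can be sketched as a lookup. The rule shapes and IDs below are illustrative assumptions, not Immunity's internal representation:

```python
# Illustrative severity rules: (kong_entity, alert_name) -> severity.
# None stands in for the null values used by /alerts/config.
rules = {
    ("route-1", "unknown_parameter"): "medium",  # route + alert type combo
    (None, "unknown_parameter"): "high",         # alert-type-wide rule
}
# Immunity's documented per-alert-type defaults.
DEFAULTS = {"value_type": "low", "unknown_parameter": "low",
            "latency_ms": "high", "traffic": "medium", "statuscode": "high"}

def resolve_severity(route_id, service_id, alert_name):
    """Walk the precedence list from most to least specific configuration."""
    for key in [(route_id, alert_name), (service_id, alert_name),
                (route_id, None), (service_id, None), (None, alert_name)]:
        if key in rules:
            return rules[key]
    return DEFAULTS[alert_name]

print(resolve_severity("route-1", "svc-1", "unknown_parameter"))  # -> medium
print(resolve_severity("route-2", "svc-1", "unknown_parameter"))  # -> high
print(resolve_severity("route-2", "svc-1", "traffic"))            # -> medium
```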
+
+#### Removing an Alert Severity Rule
+
+You can remove alert severity configuration rules by sending a DELETE request to /alerts/config. This endpoint takes these parameters:
+
+* kong_entity: The kong_entity of the rule you want deleted, or null for alert type rules.
+* alert_name: The alert type of the rule you want deleted, or null for a kong_entity rule you want deleted.
+
+```
+curl -d '{"alert_name":"unknown_parameter", "kong_entity":"your-route-id"}' \
+ -H "Content-Type: application/json" \
+ -X DELETE http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/alerts/config
+```
+
+If you want to delete all configuration rules, pass null values for both kong_entity and alert_name in your request, like this:
+
+```
+curl -d '{"alert_name":null, "kong_entity":null}' \
+ -H "Content-Type: application/json" \
+ -X DELETE http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/alerts/config
+```
+
+#### Seeing Alert Severity Configurations
+
+To see what rules you have already made, make a GET request to /alerts/config, like this:
+
+```
+curl -X GET http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/alerts/config
+```
+
+In return you'll get back json like this, where each row is a configuration rule:
+
+```
+[
+  {'alert_name': null, 'kong_entity': 'route-id-1', 'severity': 'low'},
+  {'alert_name': 'traffic', 'kong_entity': null, 'severity': 'high'},
+  {'alert_name': 'value_type', 'kong_entity': 'route-id-2', 'severity': 'medium'}
+]
+```
+
+A kong entity plus alert type rule is represented by a json object where both alert_name and kong_entity are non-null; in the example above, that is
+`{'alert_name': 'value_type', 'kong_entity': 'route-id-2', 'severity': 'medium'}`. An alert-type-wide rule is represented by a json object where the alert_name is not null but the kong_entity is, like
+`{'alert_name': 'traffic', 'kong_entity': null, 'severity': 'high'}`. A kong-entity-wide rule is the reverse, with a json object that has a non-null kong_entity value but a null alert_name value, like `{'alert_name': null, 'kong_entity': 'route-id-1', 'severity': 'low'}`.
+
+#### Looking at Offending HARs
+
+For value_type, unknown_parameter, and abnormal_value alerts, you can retrieve the HARs that created those alerts via the `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/hars` endpoint. This endpoint accepts alert_id and/or har_id as parameters and returns the HARs related to the parameters sent. You must specify at least one of these two parameters to receive HARs.
+These are the parameters /hars accepts:
+
+
+* **alert_id**: The id of the alert related to the HARs you'd like to inspect. This parameter only accepts one alert_id at a time (no lists).
+* **har_ids**: A list of har_ids you want returned.
+
+
+The response includes these values:
+
+* **har_id**: The har id of the HAR returned.
+* **alert_id**: The alert_id of the alert returned.
+* **har**: The full HAR for the request that generated the alert.
+
+
+Here's an example using curl:
+
+```
+curl -d '{"alert_id":1}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/hars
+```
+
+Here's an example using the browser:
+
+
+**http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/hars?alert_id=1**
+
+### Alert Slack Integration
+
+If you choose, Immunity can send Slack notifications for unusual traffic. Immunity needs a Slack webhook to post messages to a Slack workspace. In order to obtain a webhook URL, do the following:
+
+
+
+1. Create a new Slack app
+
+[**https://api.slack.com/apps?new_app=1**](https://api.slack.com/apps?new_app=1)
+
+Pick a name and the workspace where the app will run.
+
+
+2. Enable incoming webhooks in your app
+
+After submitting the app creation form, you are redirected to your newly created app's page. In "Add features and functionality", click "Incoming webhooks" to enable them.
+Change the OFF switch to ON. That makes a button "Add new webhook to workspace" visible; click on it.
+
+
+That redirects you to a page where you can select the channel the webhook will post messages to. Select the channel and click Authorize.
+
+
+#### Configuring Slack Channels
+
+Immunity provides the endpoint /notifications/slack/config for adding, deleting, and viewing your Slack configurations.
+
+#### Adding a Slack Configuration
+
+To add your first Slack configuration, copy the webhook URL that you just created with your app (when you finished the Slack app creation, you should have been directed to a page where you could copy the webhook URL). Then, simply create a POST request to /notifications/slack/config with an endpoint parameter equal to the webhook URL. Here's an example via curl:
+
+```
+curl -d '{"endpoint":"www.your-slack-webhook.com"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/notifications/slack/config
+```
+
+Now you've successfully connected your Slack channel to Immunity, and all alerts will be posted there.
+
+#### Routing Different Alerts to Different Slack Channels
+
+Immunity sends alerts to every Slack channel you ask it to. You can even restrict the type of alerts that go to a channel with additional parameters in your POST request. The /notifications/slack/config endpoint takes these parameters on POST:
+
+
+* **endpoint**: The Slack webhook endpoint that you would like the rule in the current POST request to apply to.
+* **kong_entity**: Restricts notifications sent to the endpoint to only those arising from the service_id, route_id, or workspace name specified here.
+* **severity**: Routes only alerts with the specified severity to the endpoint. Severity values can be one of "low", "medium", or "high".
+* **alert_type**: Routes only alerts of the specified alert type to the endpoint.
+* **enable**: When set to False, the rule in the POST request is disabled, meaning Immunity ignores that configuration rule. When set to True, the rule is enabled and Immunity routes notifications according to the full rule. This parameter is set to True by default in all POST requests.
+
+
+When you send a POST request with only the endpoint parameter specified (like the one we did above), Immunity will route all traffic to that endpoint. Once a more specific POST request is made with more parameters filled, for example:
+
+```
+curl -d '{"endpoint":"www.your-slack-webhook.com",
+ "severity": "high"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/notifications/slack/config
+```
+
+Immunity will no longer route all traffic to www.your-slack-webhook.com; it will only route alerts with high severity to that endpoint.
+
+
+You can set multiple rules of varying specificity for the same endpoint. For example, let's say you want www.your-slack-webhook.com to show notifications for all alerts from service_id = "my-service-1-id" and only high-severity alerts on route_id = "my-route-1-id". You can do so with two POST requests:
+
+```
+curl -d '{"endpoint":"www.your-slack-webhook.com",
+ "kong_entity": "my-service-1-id"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/notifications/slack/config
+
+
+curl -d '{"endpoint":"www.your-slack-webhook.com",
+ "severity": "high",
+ "kong_entity": "my-route-1-id"}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/notifications/slack/config
+```
+
+Once one specific Slack configuration rule is created for a given Slack endpoint, Immunity considers all following configuration rules as "additive", meaning each new rule adds to the existing routing rules rather than replacing them.
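As an illustration of this additive behavior, an alert reaches a webhook if it matches any of that webhook's rules. The rule shape and alert fields below are assumptions for illustration, not Immunity's internals:

```python
# Illustrative rules for one webhook: None means "not restricted".
rules = [
    {"kong_entity": "my-service-1-id", "severity": None},  # all alerts from service 1
    {"kong_entity": "my-route-1-id", "severity": "high"},  # only high severity on route 1
]

def should_notify(alert: dict) -> bool:
    """True if the alert matches at least one configured rule (additive matching)."""
    for rule in rules:
        entity_ok = rule["kong_entity"] in (alert["route_id"], alert["service_id"])
        severity_ok = rule["severity"] is None or rule["severity"] == alert["severity"]
        if entity_ok and severity_ok:
            return True
    return False

# A low-severity alert on route 1 matches neither rule.
print(should_notify({"route_id": "my-route-1-id", "service_id": "svc-2",
                     "severity": "low"}))  # -> False
```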
+
+#### Seeing your configured Rules
+
+Configured rules can get complicated. To see all the Slack rules and Slack urls you have configured, make a GET request to /notifications/slack/config like this:
+
+```
+curl -X GET http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/notifications/slack/config
+```
+
+This returns a json object where each key is a configured endpoint and its value is the rules configured for it, in a tree-like structure with a boolean at each leaf indicating whether that rule is enabled. For the multi-config example we made above for www.your-slack-webhook.com, the returned GET object looks like:
+
+```
+{"www.your-slack-webhook.com": {"kong_entities": {"my-service-1-id": true,
+                                                  "my-route-1-id": {"severities": {"high": true}}
+                                                  }
+                               }
+}
+```
+
+#### Disabling a Rule
+
+You might want to temporarily disable a rule you created. Simply make the same POST request to /notifications/slack/config and add or change the enable parameter to false. Using the same example as above, let's set the configuration on www.your-slack-webhook.com for my-service-1-id to false.
+
+```
+curl -d '{"endpoint":"www.your-slack-webhook.com",
+ "kong_entity": "my-service-1-id",
+ "enable": false}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/notifications/slack/config
+```
+
+It's important when disabling a rule to use the exact same specification parameter values (kong_entity, severity, and alert_type) that were used to create the rule.
+
+#### Deleting a Rule
+
+Sometimes you want to delete a rule. Functionally this is the same as disabling a rule in the sense that notifications will no longer be sent as the deleted or disabled rule specified. To delete a configuration rule, send a DELETE request to /notifications/slack/config, and just like with disabling rules, make sure you're passing the correct values to the configuration specifying parameters (kong_entity, severity, and alert_type). With the same example from above that disabled the config rule for my-service-1-id, a DELETE would look like:
+
+```
+curl -d '{"endpoint":"www.your-slack-webhook.com",
+ "kong_entity": "my-service-1-id"}' \
+ -H "Content-Type: application/json" \
+ -X DELETE http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/notifications/slack/config
+```
+
+#### Charts in Slack Notifications
+
+
+Some alerts include images to better describe the context in which the alert was created. We rely on Amazon S3 to store the images that are sent to Slack. In order to have notifications with images, please provide access information to an S3 bucket (with permission to add files), by setting the environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
+
+### Clean Up the Data
+
+Collector cleans up stored HARs daily, keeping at most the number of HARs specified in the environment variable MAX_HARS_STORAGE, and trims tables with extracted information to a maximum of two weeks of data. This means that on any given day, the maximum number of HARs stored is MAX_HARS_STORAGE plus that day's incoming number of HARs. If MAX_HARS_STORAGE is not specified, Collector defaults to keeping 2 million HARs in the database.
+
+
+You can set your own value of MAX_HARS_STORAGE by setting the environment variable through whatever means you use to deploy Collector.
+
+
+Additionally, Collector provides an endpoint to delete HAR data at /clean-hars. This endpoint accepts GET and POST and takes one parameter, "max_hars_storage", which deletes HARs until only the number passed remains, keeping the most recent HARs added to the database. If no value is passed for max_hars_storage, it cleans the database down to the default value set with the environment variable MAX_HARS_STORAGE. An example of using this endpoint with curl looks like this:
+
+```
+curl -d '{"max_hars_storage":10000}' \
+ -H "Content-Type: application/json" \
+ -X POST http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/clean-hars
+```
+
+### Troubleshooting Common Setup Pitfalls
+
+#### "I'm sending requests with strange parameters, but I'm not seeing any alerts related to it"
+
+There are a couple of things that can prevent the alerts you're expecting from showing up. First, check your Collector instance again at `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>` and make sure it's returning HAR data. If you're not seeing data, there are two possible explanations. First, it's likely your Collector plugin setup is not correct. Retry setting up the plugin, for example making sure that the host specified in the config is the same as `<COLLECTOR_HOST>`, until you see data coming in on `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>`. Second, if the plugin setup is correct but your test request data is not coming through, make sure that the url you are sending your test data through is correct. For data to reach Collector, it must be sent through the Kong service or route configured.
+
+If you are seeing data, let's examine the data that you've sent. The data returned is the last 100 requests Brain has. Click on one of them and drill in until you get to queryString. Pro tip: if you don't have random traffic coming into your system, and you know the last request you sent was the one with the strange parameters that you expected to trigger an alert, then the last entry in the list returned should correspond directly to that strange-parameters request you sent.
+
+
+On this view, check the contents of queryString. The queryString entry lists all parameters sent with the request you're examining. If this entry is empty, no parameters were sent with this request, and properly sending parameters with your test request is the first step to seeing corresponding alerts.
+
+
+If the queryString looks good and you're still not seeing alerts, it might be that your models haven't been built yet. When you first start Immunity, training is automatically scheduled to occur on the hour, every hour, for the first week. This means that the first hour after Immunity activation triggers no alerts because no models have been trained yet. You can check which endpoints have models by hitting `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/monitoredendpoints` and verifying that the endpoint you're testing with is included in the list of endpoints with models.
+
+
+If you're not seeing any endpoints returned by /monitoredendpoints, it's likely training hasn't happened. If you haven't triggered training via the `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer` endpoint and it's within the first hour of Brain activation, it's likely no model has been made. If you would like to trigger training without waiting for the auto-generated models, hit `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer` with the start parameter set to yesterday and the end parameter set to tomorrow. This creates models using all available data.
+
+
+If you are seeing endpoints but not the endpoint you're testing, it likely means that not enough data for that endpoint is available for proper model training. If it's possible to test alert generation with another endpoint on the /monitoredendpoints list, switching your testing endpoint is recommended. If not, create normal traffic for your test endpoint and hit `http://<COLLECTOR_HOST>:<COLLECTOR_PORT>/trainer` again for full training. Then confirm that your test endpoint is listed in /monitoredendpoints.
+
+### Still having problems?
+
+Email us at immunity@konghq.com with your bug report. Please use the following format:
+
+```
+`Summary`
+`Please include a description of what happened, and a description of what you expected to happen`
+`Steps To Reproduce (With pictures if helpful)`
+`1.`
+`2.`
+`3.`
+`4.`
+`Additional Details & Logs`
+`* Immunity version (same as Brain version on Brain image name)`
+`* Immunity logs (docker compose -f logs)`
+`* Immunity configuration`
+`* Deployment Method (Docker deployment, bare metal, Kubernetes, etc.)`
+```
+
+### Send us feature requests
+
+Send your feature request to immunity@konghq.com using the following format:
+
+```
+`Summary of Proposed Feature`
+`SUMMARY_GOES_HERE`
+`User steps through feature (if applicable)`
+`1.`
+`2.`
+`3.`
+`4.`
+```
\ No newline at end of file
diff --git a/app/enterprise/1.3-x/brain-immunity/service-map.md b/app/enterprise/1.3-x/brain-immunity/service-map.md
new file mode 100644
index 000000000000..95136c9e4333
--- /dev/null
+++ b/app/enterprise/1.3-x/brain-immunity/service-map.md
@@ -0,0 +1,43 @@
+---
+title: Using Kong's Service Map
+---
+
+### Introduction
+
+Get a high-level view of your architecture with Kong Enterprise's real-time visual Service Map. Analyze inter-service dependencies across teams and platforms to improve governance and minimize risk.
+
+The Service Map provides a visual mapping of the traffic flowing through your services. To view the Service Map, you must install and configure the Kong Collector plugin and enable Kong Brain. If you have Kong Immunity, you can automatically view Immunity alerts.
+
+
+
+### Prerequisites
+
+* Kong Enterprise installed and configured
+* Kong Collector Plugin installed and configured
+* Kong Brain enabled
+* (Optional) Kong Immunity enabled to view Immunity alerts
+
+
+
+For more information, see the [Kong Brain and Kong Immunity Installation and Configuration Guide](/enterprise/{{page.kong_version}}/brain-immunity/install-configure).
+
+### Service Map Overview
+
+Kong's Service Map provides a graphical representation of requests that flow through Kong Enterprise.
+
+* Kong Service Map is available from the Service Map tab. The Service Map populates with traffic as seen in Kong Brain.
+* As traffic hits services running in Kong, the Service Map populates and maps those requests through hosts. The Service Map also displays protocol, timestamp, and other metadata associated with the routes and methods used for those requests. The Service Map can be filtered by hosts, as well as by Workspace.
+
+
+* With Kong Immunity, you can view Immunity alerts within the Service Map and click through the Alerts dashboard for further investigation.
+
+
+### Set up the Service Map
+
+The Kong Service Map uses **Kong Brain** and the **Kong Collector Plugin**. To populate the Service Map, configure the Kong Collector Plugin and enable Kong Brain. Once traffic starts flowing, the Service Map begins to populate with a visual representation of requests flowing through Kong, and traffic is updated at one-minute intervals.
+
+
+If you have **Kong Immunity**, and the Kong Collector Plugin is configured, Kong Immunity is automatically enabled and Immunity alerts populate and display in the Service Map as they occur.
+
+
+For more information, see the [Kong Brain and Kong Immunity Installation and Configuration Guide](/enterprise/{{page.kong_version}}/brain-immunity/install-configure).
diff --git a/app/enterprise/1.3-x/cli.md b/app/enterprise/1.3-x/cli.md
new file mode 100644
index 000000000000..7a37c0748565
--- /dev/null
+++ b/app/enterprise/1.3-x/cli.md
@@ -0,0 +1,334 @@
+---
+title: CLI Reference
+---
+
+## Introduction
+
+The provided CLI (*Command Line Interface*) allows you to start, stop, and
+manage your Kong instances. The CLI manages your local node (as in, on the
+current machine).
+
+If you haven't yet, we recommend you read the [configuration reference][configuration-reference].
+
+## Global flags
+
+All commands take a set of special, optional flags as arguments:
+
+* `--help`: print the command's help message
+* `--v`: enable verbose mode
+* `--vv`: enable debug mode (noisy)
+
+[Back to TOC](#table-of-contents)
+
+## Available commands
+
+
+### kong check
+
+```
+Usage: kong check
+
+Check the validity of a given Kong configuration file.
+
+ (default /etc/kong/kong.conf) configuration file
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+
+### kong config
+
+```
+Usage: kong config COMMAND [OPTIONS]
+
+Use declarative configuration files with Kong.
+
+The available commands are:
+ init Generate an example config file to
+ get you started.
+
+ db_import Import a declarative config file into
+ the Kong database.
+
+ db_export Export the Kong database into a
+ declarative config file.
+
+ parse Parse a declarative config file (check
+ its syntax) but do not load it into Kong.
+
+Options:
+ -c,--conf (optional string) Configuration file.
+ -p,--prefix (optional string) Override prefix directory.
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong health
+
+```
+Usage: kong health [OPTIONS]
+
+Check if the necessary services are running for this node.
+
+Options:
+ -p,--prefix (optional string) prefix at which Kong should be running
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong migrations
+
+```
+Usage: kong migrations COMMAND [OPTIONS]
+
+Manage database schema migrations.
+
+The available commands are:
+ bootstrap Bootstrap the database and run all
+ migrations.
+
+ up Run any new migrations.
+
+ finish Finish running any pending migrations after
+ 'up'.
+
+ list List executed migrations.
+
+ reset Reset the database.
+
+ migrate-apis Migrates API entities to Routes and
+ Services.
+
+ migrate-community-to-enterprise Migrates Kong Community entities to Kong Enterprise in the default
+ workspace
+
+Options:
+ -y,--yes Assume "yes" to prompts and run
+ non-interactively.
+
+ -q,--quiet Suppress all output.
+
+ -f,--force Run migrations even if database reports
+ as already executed.
+
+ --db-timeout (default 60) Timeout, in seconds, for all database
+ operations (including schema consensus for
+ Cassandra).
+
+ --lock-timeout (default 60) Timeout, in seconds, for nodes waiting on
+ the leader node to finish running
+ migrations.
+
+ -c,--conf (optional string) Configuration file.
+
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong prepare
+
+This command prepares the Kong prefix folder, with its sub-folders and files.
+
+```
+Usage: kong prepare [OPTIONS]
+
+Prepare the Kong prefix in the configured prefix directory. This command can
+be used to start Kong from the nginx binary without using the 'kong start'
+command.
+
+Example usage:
+ kong migrations up
+ kong prepare -p /usr/local/kong -c kong.conf
+ nginx -p /usr/local/kong -c /usr/local/kong/nginx.conf
+
+Options:
+ -c,--conf (optional string) configuration file
+ -p,--prefix (optional string) override prefix directory
+ --nginx-conf (optional string) custom Nginx configuration template
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong quit
+
+```
+Usage: kong quit [OPTIONS]
+
+Gracefully quit a running Kong node (Nginx and other
+configured services) in given prefix directory.
+
+This command sends a SIGQUIT signal to Nginx, meaning all
+requests will finish processing before shutting down.
+If the timeout delay is reached, the node will be forcefully
+stopped (SIGTERM).
+
+Options:
+ -p,--prefix (optional string) prefix Kong is running at
+ -t,--timeout (default 10) timeout before forced shutdown
+ -w,--wait (default 0) wait time before initiating the shutdown
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong reload
+
+```
+Usage: kong reload [OPTIONS]
+
+Reload a Kong node (and start other configured services
+if necessary) in given prefix directory.
+
+This command sends a HUP signal to Nginx, which will spawn
+new workers (taking configuration changes into account),
+and stop the old ones when they have finished processing
+current requests.
+
+Options:
+ -c,--conf (optional string) configuration file
+ -p,--prefix (optional string) prefix Kong is running at
+ --nginx-conf (optional string) custom Nginx configuration template
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong restart
+
+```
+Usage: kong restart [OPTIONS]
+
+Restart a Kong node (and other configured services like Serf)
+in the given prefix directory.
+
+This command is equivalent to doing both 'kong stop' and
+'kong start'.
+
+Options:
+ -c,--conf (optional string) configuration file
+ -p,--prefix (optional string) prefix at which Kong should be running
+ --nginx-conf (optional string) custom Nginx configuration template
+ --run-migrations (optional boolean) optionally run migrations on the DB
+ --db-timeout (default 60)
+ --lock-timeout (default 60)
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+### kong runner
+
+```
+Usage: kong runner [file] [args]
+
+Execute a Lua file in a Kong node. The `kong` variable is available to
+reach the DAO, PDK, etc. The variable `args` can be used to access all
+arguments (args[1] being the Lua filename being run).
+
+Example usage:
+ kong runner file.lua arg1 arg2
+ echo 'print("foo")' | kong runner
+
+```
+[Back to TOC](#table-of-contents)
+
+
+### kong start
+
+```
+Usage: kong start [OPTIONS]
+
+Start Kong (Nginx and other configured services) in the configured
+prefix directory.
+
+Options:
+ -c,--conf (optional string) Configuration file.
+
+ -p,--prefix (optional string) Override prefix directory.
+
+ --nginx-conf (optional string) Custom Nginx configuration template.
+
+ --run-migrations (optional boolean) Run migrations before starting.
+
+ --db-timeout (default 60) Timeout, in seconds, for all database
+ operations (including schema consensus for
+ Cassandra).
+
+ --lock-timeout (default 60) When --run-migrations is enabled, timeout,
+ in seconds, for nodes waiting on the
+ leader node to finish running migrations.
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong stop
+
+```
+Usage: kong stop [OPTIONS]
+
+Stop a running Kong node (Nginx and other configured services) in given
+prefix directory.
+
+This command sends a SIGTERM signal to Nginx.
+
+Options:
+ -p,--prefix (optional string) prefix Kong is running at
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+### kong version
+
+```
+Usage: kong version [OPTIONS]
+
+Print Kong's version. With the -a option, it will also print
+the version of all underlying dependencies.
+
+Options:
+ -a,--all get version of all dependencies
+
+```
+
+[Back to TOC](#table-of-contents)
+
+---
+
+
+[configuration-reference]: /enterprise/{{page.kong_version}}/property-reference/
diff --git a/app/enterprise/1.3-x/clustering.md b/app/enterprise/1.3-x/clustering.md
new file mode 100644
index 000000000000..fcf427def903
--- /dev/null
+++ b/app/enterprise/1.3-x/clustering.md
@@ -0,0 +1,297 @@
+---
+title: Clustering Reference
+---
+
+## Introduction
+
+A Kong cluster allows you to scale the system horizontally by adding more
+machines to handle more incoming requests. They will all share the same
+configuration since they point to the same database. Kong nodes pointing to the
+**same datastore** will be part of the same Kong cluster.
+
+You need a load-balancer in front of your Kong cluster to distribute traffic
+across your available nodes.
+
+## What a Kong cluster does and doesn't do
+
+**Having a Kong cluster does not mean that your clients traffic will be
+load-balanced across your Kong nodes out of the box.** You still need a
+load-balancer in front of your Kong nodes to distribute your traffic. Instead,
+a Kong cluster means that those nodes will share the same configuration.
+
+For performance reasons, Kong avoids database connections when proxying
+requests, and caches the contents of your database in memory. The cached
+entities include Services, Routes, Consumers, Plugins, Credentials, etc. Since those
+values are in memory, any change made via the Admin API of one of the nodes
+needs to be propagated to the other nodes.
+
+This document describes how those cached entities are being invalidated and how
+to configure your Kong nodes for your use case, which lies somewhere between
+performance and consistency.
+
+[Back to TOC](#table-of-contents)
+
+## Single node Kong clusters
+
+A single Kong node connected to a database (Cassandra or PostgreSQL) creates a
+Kong cluster of one node. Any changes applied via the Admin API of this node
+will instantly take effect. Example:
+
+Consider a single Kong node `A`. If we delete a previously registered Service:
+
+```bash
+$ curl -X DELETE http://127.0.0.1:8001/services/test-service
+```
+
+Then any subsequent request to `A` would instantly return `404 Not Found`, as
+the node purged it from its local cache:
+
+```bash
+$ curl -i http://127.0.0.1:8000/test-service
+```
+
+[Back to TOC](#table-of-contents)
+
+## Multiple nodes Kong clusters
+
+In a cluster of multiple Kong nodes, other nodes connected to the same database
+would not instantly be notified that the Service was deleted by node `A`. While
+the Service is **not** in the database anymore (it was deleted by node `A`), it is
+**still** in node `B`'s memory.
+
+All nodes perform a periodic background job to synchronize with changes that
+may have been triggered by other nodes. The frequency of this job can be
+configured via:
+
+* [db_update_frequency][db_update_frequency] (default: 5 seconds)
+
+Every `db_update_frequency` seconds, all running Kong nodes will poll the
+database for any update, and will purge the relevant entities from their cache
+if necessary.
+
+If we delete a Service from node `A`, this change will not be effective in node
+`B` until node `B`'s next database poll, which will occur up to
+`db_update_frequency` seconds later (though it could happen sooner).
+
+This makes Kong clusters **eventually consistent**.
+
+[Back to TOC](#table-of-contents)
+
+## What is being cached?
+
+All of the core entities such as Services, Routes, Plugins, Consumers, Credentials are
+cached in memory by Kong and depend on their invalidation via the polling
+mechanism to be updated.
+
+Additionally, Kong also caches **database misses**. This means that if you
+configure a Service with no plugin, Kong will cache this information. Example:
+
+On node `A`, we add a Service and a Route:
+
+```bash
+# node A
+$ curl -X POST http://127.0.0.1:8001/services \
+ --data "name=example-service" \
+ --data "url=http://example.com"
+
+$ curl -X POST http://127.0.0.1:8001/services/example-service/routes \
+ --data "paths[]=/example"
+```
+
+(Note that we used `/services/example-service/routes` as a shortcut: we
+could have used the `/routes` endpoint instead, but then we would need to
+pass `service_id` as an argument, with the UUID of the new Service.)
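+
+For comparison, a sketch of the equivalent `/routes` request, where the
+`service.id` UUID below is a placeholder shown for illustration only:
+
+```bash
+# node A
+$ curl -X POST http://127.0.0.1:8001/routes \
+    --data "paths[]=/example" \
+    --data "service.id=2b47ba9b-761a-492d-9a0c-27ce29e4ef5a"
+```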
+
+A request to the Proxy port of both node `A` and `B` will cache this Service, and
+the fact that no plugin is configured on it:
+
+```bash
+# node A
+$ curl http://127.0.0.1:8000/example
+HTTP 200 OK
+...
+```
+
+```bash
+# node B
+$ curl http://127.0.0.2:8000/example
+HTTP 200 OK
+...
+```
+
+Now, say we add a plugin to this Service via node `A`'s Admin API:
+
+```bash
+# node A
+$ curl -X POST http://127.0.0.1:8001/services/example-service/plugins \
+ --data "name=example-plugin"
+```
+
+Because this request was issued via node `A`'s Admin API, node `A` will locally
+invalidate its cache and on subsequent requests, it will detect that this API
+has a plugin configured.
+
+However, node `B` hasn't run a database poll yet, and still caches that this
+API has no plugin to run. It will be so until node `B` runs its database
+polling job.
+
+**Conclusion**: all CRUD operations trigger cache invalidations. Creation
+(`POST`, `PUT`) will invalidate cached database misses, and update/deletion
+(`PATCH`, `DELETE`) will invalidate cached database hits.
+
+[Back to TOC](#table-of-contents)
+
+## How to configure database caching?
+
+You can configure 3 properties in the Kong configuration file, the most
+important one being `db_update_frequency`, which determines where your Kong
+nodes stand on the performance vs. consistency trade-off.
+
+Kong comes with default values tuned for consistency, in order to let you
+experiment with its clustering capabilities while avoiding "surprises". As you
+prepare a production setup, you should consider tuning those values to ensure
+that your performance constraints are respected.
+
+### 1. [db_update_frequency][db_update_frequency] (default: 5s)
+
+This value determines the frequency at which your Kong nodes poll the
+database for invalidation events. A lower value means the polling job runs
+more frequently, so your nodes keep up with changes you apply sooner, at the
+cost of more database load. A higher value means your Kong nodes spend less
+time running polling jobs and focus on proxying your traffic, at the cost of
+slower propagation.
+
+**Note**: changes propagate through the cluster in up to `db_update_frequency`
+seconds.
+
+[Back to TOC](#table-of-contents)
+
+### 2. [db_update_propagation][db_update_propagation] (default: 0s)
+
+If your database itself is eventually consistent (e.g., Cassandra), you **must**
+configure this value. It is to ensure that the change has time to propagate
+across your database nodes. When set, Kong nodes receiving invalidation events
+from their polling jobs will delay the purging of their cache for
+`db_update_propagation` seconds.
+
+If a Kong node connected to an eventually consistent database did not delay
+the event handling, it could purge its cache, only to cache the non-updated
+value again (because the change hasn't propagated through the database yet)!
+
+You should set this value to an estimate of the amount of time your database
+cluster takes to propagate changes.
+
+**Note**: when this value is set, changes propagate through the cluster in
+up to `db_update_frequency + db_update_propagation` seconds.
+
+[Back to TOC](#table-of-contents)
+
+### 3. [db_cache_ttl][db_cache_ttl] (default: 0s)
+
+The time (in seconds) for which Kong will cache database entities (both hits
+and misses). This Time-To-Live value acts as a safeguard in case a Kong node
+misses an invalidation event, to avoid it from running on stale data for too
+long. When the TTL is reached, the value will be purged from its cache, and the
+next database result will be cached again.
+
+By default no data is invalidated based on this TTL (the default value is `0`).
+This is usually fine: Kong nodes rely on invalidation events, which are handled
+at the datastore level (Cassandra/PostgreSQL). If you are concerned that a Kong
+node might miss an invalidation event for any reason, you should set a TTL. Otherwise
+the node might run with a stale value in its cache for an undefined amount of time,
+until the cache is manually purged, or the node is restarted.
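+
+As a sketch, the caching-related section of `kong.conf` might combine the
+three properties like this (the values are illustrative, not recommendations):
+
+```
+db_update_frequency = 5
+db_update_propagation = 0
+db_cache_ttl = 3600
+```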
+
+[Back to TOC](#table-of-contents)
+
+### 4. When using Cassandra
+
+If you use Cassandra as your Kong database, you **must** set
+[db_update_propagation][db_update_propagation] to a non-zero value. Since
+Cassandra is eventually consistent by nature, this will ensure that Kong nodes
+do not prematurely invalidate their cache, only to fetch and cache an
+out-of-date entity again. Kong will log a warning if you did not configure
+this value when using Cassandra.
+
+Additionally, you might want to configure `cassandra_consistency` to a value
+like `QUORUM` or `LOCAL_QUORUM`, to ensure that values being cached by your
+Kong nodes are up-to-date values from your database.
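+
+For a Cassandra-backed cluster, the relevant `kong.conf` entries might look
+like this, where the propagation delay is only an example estimate of your own
+cluster's propagation time:
+
+```
+db_update_propagation = 2
+cassandra_consistency = LOCAL_QUORUM
+```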
+
+[Back to TOC](#table-of-contents)
+
+## Interacting with the cache via the Admin API
+
+If for some reason, you wish to investigate the cached values, or manually
+invalidate a value cached by Kong (a cached hit or miss), you can do so via the
+Admin API `/cache` endpoint.
+
+### Inspect a cached value
+
+**Endpoint**
+
+/cache/{cache_key}
+
+**Response**
+
+If a value with that key is cached:
+
+```
+HTTP 200 OK
+...
+{
+ ...
+}
+```
+
+Else:
+
+```
+HTTP 404 Not Found
+```
+
+**Note**: retrieving the `cache_key` for each entity being cached by Kong is
+currently an undocumented process. Future versions of the Admin API will make
+this process easier.
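+
+Assuming the Admin API listens on `localhost:8001`, an inspection request
+would look like this, with `{cache_key}` standing in for an actual cache key:
+
+```bash
+$ curl -i http://localhost:8001/cache/{cache_key}
+```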
+
+[Back to TOC](#table-of-contents)
+
+### Purge a cached value
+
+**Endpoint**
+
+/cache/{cache_key}
+
+**Response**
+
+```
+HTTP 204 No Content
+...
+```
+
+**Note**: retrieving the `cache_key` for each entity being cached by Kong is
+currently an undocumented process. Future versions of the Admin API will make
+this process easier.
+
+[Back to TOC](#table-of-contents)
+
+### Purge a node's cache
+
+**Endpoint**
+
+/cache
+
+**Response**
+
+```
+HTTP 204 No Content
+```
+
+**Note**: be wary of using this endpoint on a warm, running production node.
+If the node is receiving a lot of traffic, purging its cache at the same time
+will trigger many requests to your database, and could cause a
+[dog-pile effect](https://en.wikipedia.org/wiki/Cache_stampede).
+
+[Back to TOC](#table-of-contents)
+
+[db_update_frequency]: /enterprise/{{page.kong_version}}/property-reference/#db_update_frequency
+[db_update_propagation]: /enterprise/{{page.kong_version}}/property-reference/#db_update_propagation
+[db_cache_ttl]: /enterprise/{{page.kong_version}}/property-reference/#db_cache_ttl
diff --git a/app/enterprise/1.3-x/deployment/access-license.md b/app/enterprise/1.3-x/deployment/access-license.md
new file mode 100644
index 000000000000..d6eaf0fb766b
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/access-license.md
@@ -0,0 +1,38 @@
+---
+title: How to Access Your Kong Enterprise License
+toc: false
+---
+
+Starting with Kong EE 0.29, Kong requires a license file to start. This guide
+will walk you through how to access your license file.
+
+**Note:** The following guide only pertains to paid versions of Kong Enterprise. For free trial information, check the email received after signing up.
+
+Log into [https://bintray.com/login?forwardedFrom=%2Fkong%2F](https://bintray.com/login?forwardedFrom=%2Fkong%2F)
+If you are unaware of your login credentials, reach out to your CSE and they'll
+be able to assist you.
+
+You will notice that along with Kong Enterprise and Gelato, there is a new
+repository that has the same name as your company. Click on that repo.
+
+In the repo, click on the file called **license**.
+
+![bintray-license](/assets/images/docs/ee/access-bintray-license.png)
+
+Click into the **Files** section
+
+![bintray-license-files](/assets/images/docs/ee/access-bintray-license-files.png)
+
+Click any file you would like to download.
+
+Alternatively, you can run this command in your terminal
+
+```bash
+curl -L -u<$UserName>@kong<$API_KEY> "https://kong.bintray.com/<$repoName>/license.json" -o
+```
+
+> Note: Your UserName and key were emailed to you by your CSE. You will need to get the repo name from the GUI
+
+
+
+
diff --git a/app/enterprise/1.3-x/deployment/installation/amazon-linux.md b/app/enterprise/1.3-x/deployment/installation/amazon-linux.md
new file mode 100644
index 000000000000..74dd3f978a9f
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/installation/amazon-linux.md
@@ -0,0 +1,87 @@
+---
+title: How to Install Kong Enterprise and PostgreSQL onto Amazon Linux
+---
+
+## Installation Steps
+
+```bash
+$ sudo yum update
+$ wget 'https://@bintray.com/kong/kong-enterprise-edition-aws/rpm' -O bintray-kong-kong-enterprise-edition-aws.repo --auth-no-challenge
+$ sudo mv bintray-kong-kong-enterprise-edition-aws.repo /etc/yum.repos.d/
+$ sudo vi /etc/yum.repos.d/bintray-kong-kong-enterprise-edition-aws.repo
+```
+
+Ensure `baseurl` is correct
+
+```bash
+baseurl=https://:@kong.bintray.com/kong-enterprise-edition-aws
+```
+
+```bash
+$ sudo yum install kong-enterprise-edition
+$ sudo yum install postgresql95 postgresql95-server
+$ sudo service postgresql95 initdb
+$ sudo service postgresql95 start
+$ sudo -i -u postgres   # opens a new shell as the postgres user
+```
+
+**Note**: your Bintray username is obtained from your access key by appending
+`%40kong` to it (the URL-encoded form of `@kong`). For example, if your access
+key is `bob-company`, your username will be `bob-company%40kong`.
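+
+The encoding described above can be sketched in a shell session, using
+`bob-company` purely as an example access key:
+
+```bash
+$ ACCESS_KEY=bob-company
+$ echo "${ACCESS_KEY}%40kong"   # '%40' is the URL-encoded '@'
+bob-company%40kong
+```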
+
+Create `kong` user
+
+```bash
+$ psql
+> CREATE USER kong; CREATE DATABASE kong OWNER kong; ALTER USER kong WITH password 'kong';
+> \q
+```
+
+```bash
+# Change entries from ident to md5
+$ sudo vi /var/lib/pgsql95/data/pg_hba.conf
+$ sudo service postgresql95 restart
+
+# add contents of license file
+$ sudo vi /etc/kong/license.json
+
+# Uncomment and add 'kong' to pg_password line
+$ sudo vi [/path/to/kong.conf]
+
+# Run migrations and start kong
+$ kong migrations bootstrap [-c /path/to/kong.conf]
+$ sudo /usr/local/bin/kong start [-c /path/to/kong.conf]
+```
+
+**Note:** You may use `kong.conf.default` or create [your own configuration](/0.13.x/configuration/#configuration-loading).
+
+## Install HTTPie to Run Commands More Easily
+
+```bash
+$ sudo yum install python-pip
+$ sudo pip install --upgrade pip setuptools
+$ sudo pip install --upgrade httpie
+```
+
+## Verify Kong Installation
+
+```bash
+$ http :8001/apis name=demo uris=/ upstream_url=http://httpbin.org
+$ http :8000/ip
+```
+
+## Install Kong Manager
+
+```bash
+# Get the local IP address
+$ ifconfig
+
+# Uncomment the admin_listen setting, and update to something like this `admin_listen = 172.31.3.8:8001`
+$ sudo vi [/path/to/kong.conf]
+
+# Restart kong
+$ sudo /usr/local/bin/kong stop
+$ sudo /usr/local/bin/kong start [-c /path/to/kong.conf]
+```
+
+In a browser, load your server on port `8002`
diff --git a/app/enterprise/1.3-x/deployment/installation/centos.md b/app/enterprise/1.3-x/deployment/installation/centos.md
new file mode 100644
index 000000000000..4e02f192eca3
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/installation/centos.md
@@ -0,0 +1,239 @@
+---
+title: How to Install Kong Enterprise on CentOS
+---
+
+## Introduction
+
+This guide walks through downloading, installing, and starting Kong Enterprise
+using CentOS and PostgreSQL 9.5. The configuration shown in this guide is
+intended only as an example -- you will want to modify and take additional
+measures to secure your Kong Enterprise system before moving to a production
+environment.
+
+
+## Prerequisites
+
+To complete this guide you will need:
+
+- A CentOS 6 or 7 system with root equivalent access.
+- The ability to SSH to the system.
+
+
+## Step 1. Download Kong Enterprise
+
+1. Option 1. Download via **Packages**
+
+ Log in to [Bintray](http://bintray.com) to download the latest Kong
+ Enterprise RPM for CentOS. Your **Sales** or **Support** contact will
+ email this credential to you.
+
+ Copy the file to your home directory:
+
+ ```
+ $ scp kong-enterprise-edition-0.35-1.el7.noarch.rpm @:@kong.bintray.com/kong-enterprise-edition-rpm/centos/$releasever
+ ```
+ Replace `` with your Bintray account information
+
+ Set `$releasever` to the correct CentOS version (e.g. `6` or `7`)
+
+3. Obtain your Kong Enterprise license
+
+ If you do not already have your license file, you can download it from your
+ account files in Bintray
+ `https://bintray.com/kong//license#files`
+
+ Ensure your license file is in proper `JSON`:
+
+ ```json
+ {"license":{"signature":"91e6dd9716d12ffsn4a5ckkb16a556dbebdbc4d0a66d9b2c53f8c8d717eb93dd2bdbe2cb3ef51c20806f14345128907da35","payload":{"customer":"Kong Inc","license_creation_date":"2019-05-07","product_subscription":"Kong Enterprise Edition","admin_seats":"5","support_plan":"None","license_expiration_date":"2021-04-01","license_key":"00Q1K00000zuUAwUAM_a1V1K000005kRhuUAE"},"version":1}}
+ ```
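+
+ As a quick sanity check (assuming Python is available on the system), you can
+ verify that the file parses as JSON before copying it into place:
+
+ ```
+ $ python -m json.tool license.json > /dev/null && echo "valid JSON"
+ ```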
+4. Securely copy the license file to the CentOS system
+
+ ```
+ $ scp license.json @:~
+ ```
+
+
+## Step 2. Install Kong Enterprise
+
+1. Install Kong Enterprise
+
+ ```
+ $ sudo yum install kong-enterprise-edition-0.35-1.el7.noarch.rpm
+ ```
+ >Note: Your version may be different based on when you obtained the rpm
+
+2. Copy the license file to the `/etc/kong` directory
+
+ ```
+ $ sudo cp license.json /etc/kong/license.json
+ ```
+ Kong will look for a valid license in this location.
+
+
+## Step 3. Setup PostgreSQL
+
+1. Install PostgreSQL
+
+ ```
+ $ sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
+
+ $ sudo yum install postgresql95 postgresql95-server
+ ```
+
+2. Initialize the PostgreSQL Database
+
+ ```
+ $ sudo /usr/pgsql-9.5/bin/postgresql95-setup initdb
+ ```
+
+3. Start PostgreSQL and Enable Automatic Start
+
+ ```
+ $ sudo systemctl enable postgresql-9.5
+ $ sudo systemctl start postgresql-9.5
+ ```
+
+
+## Step 4. Create a Kong database and user
+
+1. Switch to PostgreSQL user
+
+ ```
+ $ sudo -i -u postgres
+ ```
+
+2. Launch PostgreSQL
+
+ ```
+ $ psql
+ ```
+
+3. Run the following command to:
+
+ - Create a Kong user and database
+
+ - Make Kong the owner of the database
+
+ - Set the password of the Kong user to 'kong'
+
+ ```
+ CREATE USER kong; CREATE DATABASE kong OWNER kong; ALTER USER kong WITH password 'kong';
+ ```
+
+ >⚠️ **Note**: Make sure the username and password for the Kong database are
+ >kept safe. We have used a simple example for illustration purposes only.
+
+4. Exit from PostgreSQL
+
+ ```
+ $ \q
+ ```
+
+5. Return to terminal
+
+ ```
+ $ exit
+ ```
+
+6. Run the following command to access the PostgreSQL configuration file.
+
+ ```
+ $ sudo vi /var/lib/pgsql/9.5/data/pg_hba.conf
+ ```
+
+7. Under IPv4 local connections, replace `ident` with `md5`; the IPv6 line can remain `ident`:
+
+ | TYPE | DATABASE | USER | ADDRESS | METHOD |
+ |------|----------|------|---------|--------|
+ | host | all | all | 127.0.0.1/32 | **md5** |
+ | host | all | all | ::1/128 | ident |
+
+8. Restart PostgreSQL
+
+ ```
+ $ sudo systemctl restart postgresql-9.5
+ ```
+
+
+## Step 5. Modify Kong's configuration file
+
+To use the newly provisioned PostgreSQL database, Kong's configuration file
+must be modified to accept the correct PostgreSQL user and password.
+
+1. Make a copy of the default configuration file
+
+ ```
+ $ cp /etc/kong/kong.conf.default /etc/kong/kong.conf
+ ```
+
+2. Uncomment and update the PostgreSQL database properties inside the Kong conf:
+
+ ```
+ $ sudo vi /etc/kong/kong.conf
+ ```
+ ```
+ pg_user = kong
+ pg_password = kong
+ pg_database = kong
+ ```
+
+
+## Step 6. Seed the Super Admin _(optional)_
+
+For the added security of Role-Based Access Control (RBAC), it is best to seed
+the **Super Admin** before initial start-up.
+
+Create an environment variable with the desired **Super Admin** password:
+
+
+ $ export KONG_PASSWORD=
+
+
+This will be used during migrations to seed the initial **Super Admin**
+password within Kong.
+
+
+## Step 7. Start Kong
+
+1. Run migrations to prepare the Kong database
+
+ ```
+ $ kong migrations bootstrap -c /etc/kong/kong.conf
+ ```
+
+2. Start Kong
+
+ ```
+ $ sudo /usr/local/bin/kong start -c /etc/kong/kong.conf
+ ```
+
+3. Verify Kong is working
+
+ ```
+ curl -i -X GET --url http://localhost:8001/services
+ ```
+
+ You should receive an HTTP/1.1 200 OK message.
+
+## Troubleshooting
+
+If you did not receive an HTTP/1.1 200 OK message, or need assistance completing
+setup reach out to your **Support contact** or head over to the
+[Support Portal](https://support.konghq.com/support/s/).
+
+
+## Next Steps
+
+Work through Kong Enterprise's series of
+[Getting Started](/enterprise/latest/getting-started) guides to get the most
+out of Kong Enterprise.
diff --git a/app/enterprise/1.3-x/deployment/installation/docker.md b/app/enterprise/1.3-x/deployment/installation/docker.md
new file mode 100644
index 000000000000..1dec1a12d014
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/installation/docker.md
@@ -0,0 +1,180 @@
+---
+title: Installing Kong Enterprise Docker Image
+---
+
+
+
+## Installation Steps
+
+A guide to installing Kong Enterprise and its license file using
+Docker.
+
+**Free trial users should skip directly to step 3**.
+
+1. Log in to [bintray.com](https://bintray.com). Your Sales or Support
+contact will email the credentials to you.
+
+2. In the upper right corner, click "Edit Profile" to retrieve your API
+key, which will be used in step 3.
+
+3. For **users with existing contracts**, add the Kong Docker repository and
+pull the image:
+
+ ```
+    $ docker login -u <your-username> -p <your-api-key> kong-docker-kong-enterprise-edition-docker.bintray.io
+ $ docker pull kong-docker-kong-enterprise-edition-docker.bintray.io/kong-enterprise-edition
+ ```
+
+    For **trial users**, run the following, replacing `<URL>`
+with the URL you received in your welcome email:
+
+ ```
+    curl -Lsv "<URL>" -o /tmp/kong-docker-ee.tar.gz
+ docker load -i /tmp/kong-docker-ee.tar.gz
+ ```
+
+4. You should now have your Kong Enterprise image locally. Run
+`docker images` to verify and find the image ID matching your repository.
+
+5. Tag the image ID for easier use in the commands that follow:
+
+ ```
+    docker tag <IMAGE ID> kong-ee
+ ```
+
+ (Replace "IMAGE ID" with the one matching your repository, seen in step 4)
+
+6. Create a Docker network
+
+ You will need to create a custom network to allow the containers to discover
+ and communicate with each other. In this example, `kong-ee-net` is the network name,
+ but you can use any name.
+
+ ```bash
+ $ docker network create kong-ee-net
+ ```
+
+
+7. Start your database
+
+ If using a Cassandra container:
+
+ ```bash
+ $ docker run -d --name kong-ee-database \
+ --network=kong-ee-net \
+ -p 9042:9042 \
+ cassandra:3
+ ```
+
+ If using a PostgreSQL container:
+
+ ```bash
+ $ docker run -d --name kong-ee-database \
+ --network=kong-ee-net \
+ -p 5432:5432 \
+ -e "POSTGRES_USER=kong" \
+ -e "POSTGRES_DB=kong" \
+ postgres:9.6
+ ```
+
+8. To make the license data easier to handle, export it as a shell variable.
+Please note that **your license contents will differ**! Users with Bintray
+accounts should visit `https://bintray.com/kong/<your-account>/license#files`
+to retrieve their license. Trial users should download their license from their
+welcome email. Once you have your license, you can set it in an environment variable:
+
+ ```sh
+ export KONG_LICENSE_DATA='{"license":{"signature":"LS0tLS1CRUdJTiBQR1AgTUVTU0FHRS0tLS0tClZlcnNpb246IEdudVBHIHYyCgpvd0did012TXdDSFdzMTVuUWw3dHhLK01wOTJTR0tLWVc3UU16WTBTVTVNc2toSVREWk1OTFEzVExJek1MY3dTCjA0ek1UVk1OREEwc2pRM04wOHpNalZKVHpOTE1EWk9TVTFLTXpRMVRVNHpTRXMzTjA0d056VXdUTytKWUdNUTQKR05oWW1VQ21NWEJ4Q3NDc3lMQmorTVBmOFhyWmZkNkNqVnJidmkyLzZ6THhzcitBclZtcFZWdnN1K1NiKzFhbgozcjNCeUxCZzdZOVdFL2FYQXJ0NG5lcmVpa2tZS1ozMlNlbGQvMm5iYkRzcmdlWFQzek1BQUE9PQo9b1VnSgotLS0tLUVORCBQR1AgTUVTU0FHRS0tLS0tCg=","payload":{"customer":"Test Company Inc","license_creation_date":"2017-11-08","product_subscription":"Kong Enterprise","admin_seats":"5","support_plan":"None","license_expiration_date":"2017-11-10","license_key":"00141000017ODj3AAG_a1V41000004wT0OEAU"},"version":1}}'
+ ```
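+
+    Before running migrations, you can optionally confirm that the exported
+    value still parses as JSON; truncation and curly-quote damage (see the
+    FAQs below) surface here. A sketch, assuming `python3` is available:
+
+    ```sh
+    echo "$KONG_LICENSE_DATA" \
+      | python3 -c 'import json, sys; json.load(sys.stdin); print("license OK")' \
+      || echo "license data is not valid JSON"
+    ```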
+
+9. Run Kong migrations:
+
+ ```
+ docker run --rm --network=kong-ee-net \
+ -e "KONG_DATABASE=postgres" \
+ -e "KONG_PG_HOST=kong-ee-database" \
+ -e "KONG_CASSANDRA_CONTACT_POINTS=kong-ee-database" \
+ -e "KONG_LICENSE_DATA=$KONG_LICENSE_DATA" \
+ kong-ee kong migrations bootstrap
+ ```
+ **Docker on Windows users:** Instead of the `KONG_LICENSE_DATA` environment
+ variable, use the [volume bind](https://docs.docker.com/engine/reference/commandline/run/#options) option.
+ For example, assuming you've saved your `license.json` file into `C:\temp`,
+ use `--volume /c/temp/license.json:/etc/kong/license.json` to specify the
+ license file.
+
+10. Start Kong:
+
+ ```
+ docker run -d --name kong-ee --network=kong-ee-net \
+ -e "KONG_DATABASE=postgres" \
+ -e "KONG_PG_HOST=kong-ee-database" \
+ -e "KONG_CASSANDRA_CONTACT_POINTS=kong-ee-database" \
+ -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
+ -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
+ -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
+ -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
+ -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
+ -e "KONG_PORTAL=on" \
+ -e "KONG_LICENSE_DATA=$KONG_LICENSE_DATA" \
+ -p 8000:8000 \
+ -p 8443:8443 \
+ -p 8001:8001 \
+ -p 8444:8444 \
+ -p 8002:8002 \
+ -p 8445:8445 \
+ -p 8003:8003 \
+ -p 8004:8004 \
+ kong-ee
+ ```
+ **Docker on Windows users:** Instead of the `KONG_LICENSE_DATA` environment
+ variable, use the [volume bind](https://docs.docker.com/engine/reference/commandline/run/#options) option.
+ For example, assuming you've saved your `license.json` file into `C:\temp`,
+ use `--volume /c/temp/license.json:/etc/kong/license.json` to specify the
+ license file.
+
+11. Kong Enterprise should now be installed and running. Test
+it by visiting Kong Manager at [http://localhost:8002](http://localhost:8002)
+(replace `localhost` with your server IP or hostname when running Kong on a
+remote system), or by visiting the Default Dev Portal at
+[http://127.0.0.1:8003/default](http://127.0.0.1:8003/default).
+
+## FAQs
+
+The Admin API only listens on the local interface by default. This was done as a
+security enhancement. Note that we are overriding that in the above example with
+`KONG_ADMIN_LISTEN=0.0.0.0:8001` because Docker container networking benefits from
+more open settings and enables Kong Manager and Dev Portal to talk with the Kong
+Admin API.
+
+Without a license properly referenced, you'll get errors running migrations:
+
+ $ docker run -ti --rm ... kong migrations bootstrap
+ nginx: [alert] Error validating Kong license: license path environment variable not set
+
+Also, without a license, you will get no output if you do a `docker run` in
+"daemon mode" (the `-d` flag to `docker run`):
+
+
+ $ docker run -d ... kong start
+ 26a995171e23e37f89a4263a10bb084120ab0dbed1aa11a71c888c8e0d74a0b6
+
+
+When you check the container, it won't be running. Doing a `docker logs` will
+show you:
+
+
+    $ docker logs <container-id>
+ nginx: [alert] Error validating Kong license: license path environment variable not set
+
+
+Another error can occur when text editors or copy and paste convert straight
+quotes (" or ') into curly ones (“ ” ‘ ’):
+
+```
+nginx: [alert] Error validating Kong license: could not decode license json
+```
+
+Your license data must contain only straight quotes to be considered valid JSON.
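+
+A quick mechanical check for curly quotes (a sketch against the exported
+license data; the fallback message means the data is clean):
+
+```sh
+echo "$KONG_LICENSE_DATA" | grep -n '[“”‘’]' || echo "no curly quotes found"
+```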
diff --git a/app/enterprise/1.3-x/deployment/installation/index.md b/app/enterprise/1.3-x/deployment/installation/index.md
new file mode 100644
index 000000000000..f1ad7cdd6429
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/installation/index.md
@@ -0,0 +1,3 @@
+---
+title: Install Kong Enterprise
+---
\ No newline at end of file
diff --git a/app/enterprise/1.3-x/deployment/installation/overview.md b/app/enterprise/1.3-x/deployment/installation/overview.md
new file mode 100644
index 000000000000..f7998e263aa3
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/installation/overview.md
@@ -0,0 +1,44 @@
+---
+title: Installing Kong Enterprise
+toc: false
+---
+
+
diff --git a/app/enterprise/1.3-x/deployment/installation/ubuntu.md b/app/enterprise/1.3-x/deployment/installation/ubuntu.md
new file mode 100644
index 000000000000..8f205f617217
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/installation/ubuntu.md
@@ -0,0 +1,197 @@
+---
+title: How to Install Kong Enterprise on Ubuntu
+---
+
+## Introduction
+
+This guide walks through downloading, installing, and starting Kong Enterprise
+using Ubuntu and PostgreSQL 9.5. The configuration shown in this guide is
+intended only as an example; you will want to modify it and take additional
+measures to secure your Kong Enterprise system before moving to a production
+environment.
+
+
+## Prerequisites
+
+To complete this guide you will need:
+
+- An Ubuntu system with root equivalent access.
+- The ability to SSH to the system.
+
+
+## Step 1. Download Kong Enterprise
+
+1. Download the .deb package
+
+ Log in to [Bintray](http://bintray.com) to download the latest Kong
+ Enterprise .deb for the desired version of Ubuntu. Your **Sales** or
+ **Support** contact will email this credential to you.
+
+ Copy the file to your home directory:
+
+ ```
+    $ scp kong-enterprise-edition-0.35.xxx.xxx.deb <user>@<server>:~
+    ```
+
+2. Download your license file
+
+    Log in to [Bintray](http://bintray.com) and download your license file from
+    `https://bintray.com/kong/<your-account>/license#files`
+
+    Ensure your license file is in proper `JSON`:
+
+ ```json
+ {"license":{"signature":"91e6dd9716d12ffsn4a5ckkb16a556dbebdbc4d0a66d9b2c53f8c8d717eb93dd2bdbe2cb3ef51c20806f14345128907da35","payload":{"customer":"Kong Inc","license_creation_date":"2019-05-07","product_subscription":"Kong Enterprise Edition","admin_seats":"5","support_plan":"None","license_expiration_date":"2021-04-01","license_key":"00Q1K00000zuUAwUAM_a1V1K000005kRhuUAE"},"version":1}}
+ ```
+
+3. Securely copy the license file to the Ubuntu system
+
+ ```
+    $ scp license.json <user>@<server>:~
+ ```
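+
+Optionally, verify that the copied file parses as JSON before installing it; a
+malformed or curly-quoted file will fail license validation at start-up. A
+sketch, assuming `python3` and a `license.json` in the current directory:
+
+```
+python3 -m json.tool < license.json > /dev/null 2>&1 && echo "valid JSON" || echo "invalid or missing license.json"
+```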
+
+
+## Step 2. Install Kong Enterprise
+
+1. Install Kong Enterprise
+
+ ```
+ $ sudo apt-get update
+ $ sudo apt-get install openssl libpcre3 procps perl
+ $ sudo dpkg -i kong-enterprise-edition-0.35.xxx.xxx.deb
+ ```
+    >Note: Your version may be different based on when you obtained the package.
+
+2. Copy the license file to the `/etc/kong` directory
+
+ ```
+ $ sudo cp license.json /etc/kong/license.json
+ ```
+ Kong will look for a valid license in this location.
+
+
+## Step 3. Install PostgreSQL
+
+1. Install PostgreSQL
+
+ ```
+ $ sudo apt-get install postgresql-9.5 postgresql-contrib
+ ```
+
+
+## Step 4. Create a Kong database and user
+
+1. Switch to PostgreSQL user
+
+ ```
+ $ sudo -i -u postgres
+ ```
+
+2. Launch PostgreSQL
+
+ ```
+ $ psql
+ ```
+
+3. Run the following command to:
+
+    - Create a Kong user and database
+
+    - Make Kong the owner of the database
+
+    - Set the password of the Kong user to 'kong'
+
+ ```
+    CREATE USER kong; CREATE DATABASE kong OWNER kong; ALTER USER kong WITH PASSWORD 'kong';
+ ```
+
+    > ⚠️ **Note**: Make sure the username and password for the Kong database are
+    > kept safe. We have used a simple example for illustration purposes only.
+
+4. Exit from PostgreSQL
+
+ ```
+    \q
+ ```
+
+5. Return to terminal
+
+ ```
+ $ exit
+ ```
+
+
+## Step 5. Modify Kong's configuration file
+
+To use the newly provisioned PostgreSQL database, Kong's configuration file
+must be modified to accept the correct PostgreSQL user and password.
+
+1. Make a copy of the default configuration file
+
+ ```
+ $ sudo cp /etc/kong/kong.conf.default /etc/kong/kong.conf
+ ```
+
+2. Uncomment and update the PostgreSQL database properties inside the Kong conf:
+
+ ```
+ $ sudo vi /etc/kong/kong.conf
+ ```
+ ```
+ pg_user = kong
+ pg_password = kong
+ pg_database = kong
+ ```
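+
+    The same edit can be scripted with `sed` instead of an editor. A sketch,
+    shown here against a sample copy of the file; on a real system, point the
+    `sed` command at `/etc/kong/kong.conf` (with `sudo`):
+
+    ```
+    printf '#pg_user = kong\n#pg_password =\n#pg_database = kong\n' > kong.conf.sample
+    sed -i -e 's/^#pg_user.*/pg_user = kong/' \
+           -e 's/^#pg_password.*/pg_password = kong/' \
+           -e 's/^#pg_database.*/pg_database = kong/' kong.conf.sample
+    cat kong.conf.sample
+    ```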
+
+
+## Step 6. Seed the Super Admin _(optional)_
+
+For the added security of Role-Based Access Control (RBAC), it is best to seed
+the **Super Admin** before initial start-up.
+
+Create an environment variable with the desired **Super Admin** password:
+
+
+    $ export KONG_PASSWORD=<your-password>
+
+
+This will be used during migrations to seed the initial **Super Admin**
+password within Kong.
+
+
+## Step 7. Start Kong
+
+1. Run migrations to prepare the Kong database
+
+ ```
+ $ kong migrations bootstrap -c /etc/kong/kong.conf
+ ```
+
+2. Start Kong
+
+ ```
+ $ sudo /usr/local/bin/kong start -c /etc/kong/kong.conf
+ ```
+
+3. Verify Kong is working
+
+ ```
+ curl -i -X GET --url http://localhost:8001/services
+ ```
+
+ You should receive an HTTP/1.1 200 OK message.
+
+
+## Troubleshooting
+
+If you did not receive an HTTP/1.1 200 OK message, or need assistance completing
+setup, reach out to your **Support contact** or head over to the
+[Support Portal](https://support.konghq.com/support/s/).
+
+
+## Next Steps
+
+Work through Kong Enterprise's series of
+[Getting Started](/enterprise/latest/getting-started) guides to get the most
+out of Kong Enterprise.
diff --git a/app/enterprise/1.3-x/deployment/licensing.md b/app/enterprise/1.3-x/deployment/licensing.md
new file mode 100644
index 000000000000..92a637e484d4
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/licensing.md
@@ -0,0 +1,49 @@
+---
+title: Kong Enterprise Licensing
+---
+
+## Overview
+Kong Enterprise enforces the presence and validity of a license file.
+
+License files must be deployed to each node running Kong Enterprise. License file checking is done independently by each node as the Kong process starts; no network connectivity is necessary to execute the license validation process.
+
+## Deploying the License File
+There are three possible ways to configure a license file on a Kong node. These are defined below, in the order in which they are checked by Kong:
+
+1. If present, the contents of the environment variable `KONG_LICENSE_DATA` are used.
+2. Kong will search the default location `/etc/kong/license.json`.
+3. If present, the contents of the file defined by the environment variable `KONG_LICENSE_PATH` are used.
+
+In this manner, the license file can be deployed either as a file on the node filesystem, or as an environment variable.
+
+Note that unlike most other `KONG_*` environment variables, `KONG_LICENSE_DATA` and `KONG_LICENSE_PATH` cannot be defined in-line as part of any `kong` CLI command. This is because the `kong` CLI tool is a wrapper script that generates an Nginx config and launches the Nginx process via the existing shell: the Nginx process that accepts proxy traffic is spawned as a child of the shell in which the `kong` CLI process is run, not as a child of the `kong` CLI process itself, so in-line environment variables are not made available to the Nginx process. License file environment variables must therefore be exported to the shell in which the Nginx process will run, ahead of invoking the `kong` CLI tool.
+
+## Examining the License Data on a Kong Node
+License data is displayed as part of the root (`"/"`) Admin API endpoint, under the `license` JSON key. It is also visible in the Admin GUI.
+
+## Troubleshooting
+When a valid license file is properly deployed, license file validation is a transparent operation; no additional output or logging data is written or provided. If an error occurs when attempting to validate the license, or the license data is not valid, an error message will be written to the console and logged to the Kong error log, followed by the process quitting. Below are possible error messages and troubleshooting steps to take:
+
+- "license path environment variable not set"
+  - Neither the `KONG_LICENSE_DATA` nor the `KONG_LICENSE_PATH` environment variables were defined, and no license file could be opened at the default location (`/etc/kong/license.json`).
+- "internal error"
+ - An internal error has occurred while attempting to validate the license. Such cases are extremely unlikely; contact Kong support to further troubleshoot.
+- "error opening license file"
+ - The license file defined either in the default location, or via the `KONG_LICENSE_PATH` env variable, could not be opened. Check that the user executing the Nginx process (e.g., the user executing the Kong CLI utility) has permissions to read this file.
+- "error reading license file"
+ - The license file defined either in the default location, or via the `KONG_LICENSE_PATH` env variable, could be opened, but an error occurred while reading. Confirm that the file is not corrupt, that there are no kernel error messages reported (e.g., out of memory conditions, etc). This is a generic error and is extremely unlikely to occur if the file could be opened.
+- "could not decode license json"
+ - The license file data could not be decoded as valid JSON. Confirm that the file is not corrupt and has not been altered since you received it from Kong Inc. Try re-downloading and installing your license file from Kong Inc.
+ - if you still receive this error, contact Kong support.
+- "invalid license format"
+ - The license file data is missing one or more key/value pairs. Confirm that the file is not corrupt and has not been altered since you received it from Kong Inc. Try re-downloading and installing your license file from Kong Inc.
+ - if you still receive this error, contact Kong support.
+- "validation failed"
+ - The attempt to verify the payload of the license with the license's signature failed. Confirm that the file is not corrupt and has not been altered since you received it from Kong Inc. Try re-downloading and installing your license file from Kong Inc.
+ - if you still receive this error, contact Kong support.
+- "license expired"
+ - The system time is past the license's license_expiration_date. Note that Kong Enterprise provides 1-2 days worth of slack time past the license expiration date before failing to validate with this error, to account for timezone discrepancies, human error, etc.
+- "invalid license expiration date"
+ - The data in the license_expiration_date field is incorrectly formatted. Try re-downloading and installing your license file from Kong Inc.
+ - if you still receive this error, contact Kong support.
+- License expiration logs: beginning 90 days before the license expires, Kong logs the expiration date once a day with a WARN log; 30 days before expiry, the severity increases to ERR; after expiration, it increases to CRIT.
diff --git a/app/enterprise/1.3-x/deployment/migrations.md b/app/enterprise/1.3-x/deployment/migrations.md
new file mode 100644
index 000000000000..1f53c8e3083a
--- /dev/null
+++ b/app/enterprise/1.3-x/deployment/migrations.md
@@ -0,0 +1,81 @@
+---
+title: Migrating to 1.3-α
+---
+
+### Prerequisites for Migrating to 1.3-α
+
+* If running a version of **Kong Enterprise** earlier than 0.35, [migrate to 0.35](/enterprise/0.35-x/deployment/migrations/) first.
+* If running a version of **Kong** earlier than 1.2, [upgrade to Kong 1.2](/1.2.x/upgrading/) before upgrading to Kong Enterprise 1.3-α.
+
+### Changes and Configuration to Consider before Upgrading
+
+* If using RBAC with Kong Manager, it will be necessary to manually add the [Session Plugin configuration values](/enterprise/{{page.kong_version}}/kong-manager/authentication/sessions/#configuration-to-use-the-sessions-plugin-with-kong-manager).
+* Kong Manager and the Admin API must share the same domain in order to use the SameSite directive. If they are on separate domains, `cookie_samesite` must be set to `"off"`. Learn more in [Session Security](/enterprise/{{page.kong_version}}/kong-manager/authentication/sessions/#configuration-to-use-the-sessions-plugin-with-kong-manager)
+* Kong Manager must be served over HTTPS in order for the Secure directive to work. If using Kong Manager with only HTTP, e.g. on `localhost`, then `cookie_secure` must be set to `false`. Learn more in [Session Security](/enterprise/{{page.kong_version}}/kong-manager/authentication/sessions/#session-security)
+* Instances where the Portal and Files API are on different hostnames require that they at least share a common root, and that the `cookie_domain` setting of the Portal session configuration be that common root. For example, if the Portal itself is at `portal.kong.example` and the Files API is at `files.kong.example`, `cookie_domain=.kong.example`.
+* Portal-related `rbac_role_endpoints` will be updated to adhere to changes in the Dev Portal API. This only applies to Portal-related endpoints that were present in or set by Kong Manager; any user-generated endpoints will need to be updated manually. The endpoints that will be updated automatically are as follows:
+
+
+```
+'/portal/*' => '/developers/*', '/files/*'
+'/portal/developers' => '/developers/*'
+'/portal/developers/*' => '/developers/*'
+'/portal/developers/*/*' => '/developers/*/*'
+'/portal/developers/*/email' => '/developers/*/email'
+'/portal/developers/*/meta' => '/developers/*/meta'
+'/portal/developers/*/password' => '/developers/*/password'
+'/portal/invite' => '/developers/invite'
+```
+* As a result of the switch to server-side rendering, a few portal template files need to be updated or replaced to regain full functionality:
+ 1. Replace contents of partial `spec/index-vue` with contents of:
+ [https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/spec/index-vue.hbs](https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/spec/index-vue.hbs)
+ 2. Replace contents of partial `search/widget-vue` with contents of:
+ [https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/search/widget-vue.hbs](https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/search/widget-vue.hbs)
+ 3. Create or update partial `unauthenticated/assets/icons/search-header` with contents of:
+ [https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/unauthenticated/assets/icons/search-header.hbs](https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/unauthenticated/assets/icons/search-header.hbs)
+ 4. Create or update partial `unauthenticated/assets/icons/loading` with contents of:
+ [https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/unauthenticated/assets/icons/loading.hbs](https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/unauthenticated/assets/icons/loading.hbs)
+ 5. Create or update partial `unauthenticated/assets/icons/search-widget` with contents of:
+ [https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/unauthenticated/assets/icons/search-widget.hbs](https://raw.githubusercontent.com/Kong/kong-portal-templates/master/themes/default-ie11/partials/unauthenticated/assets/icons/search-widget.hbs)
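+
+The cookie-related items above map onto the session configuration roughly as
+follows (a sketch; values are illustrative, and each setting applies only in
+the scenario described in its item):
+
+```
+cookie_samesite=off          # Kong Manager and Admin API on separate domains
+cookie_secure=false          # Kong Manager served over plain HTTP (e.g. localhost)
+cookie_domain=.kong.example  # Portal at portal.kong.example, Files API at files.kong.example
+```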
+
+### Migration Steps from 0.35 to 1.3-α
+
+For a no-downtime migration from a 0.35 cluster to a 1.3-α cluster:
+
+1. Download 1.3-α, and configure it to point to the same datastore as your 0.35 cluster.
+2. Run `kong migrations up`. Both 0.35 and 1.3-α nodes can now run simultaneously on the same datastore.
+3. Start provisioning 1.3-α nodes.
+4. Gradually divert traffic away from your 0.35 nodes, and into your 1.3-α cluster. Monitor your traffic to make sure everything is going smoothly.
+5. When your traffic is fully migrated to the 1.3-α cluster, decommission your 0.35 nodes.
+6. From your 1.3-α cluster, run `kong migrations finish`. From this point on, it will no longer be possible to start 0.35 nodes pointing to the same datastore. Only run this command when you are confident that your migration was successful. From now on, you can safely make Admin API requests to your 1.3-α nodes.
+
+At any step of the way, you may run `kong migrations list` to get a report of the state of migrations. It will list whether there are missing migrations, if there are pending migrations (which have already started in the `kong migrations up` step and later need to finish in the `kong migrations finish` step) or if there are new migrations available. The status code of the process will also change accordingly:
+
+* `0` - migrations are up-to-date
+* `1` - failed inspecting the state of migrations (e.g. database is down)
+* `3` - database needs bootstrapping: you should run `kong migrations bootstrap` to install on a fresh datastore.
+* `4` - there are pending migrations: once your old cluster is decommissioned you should run `kong migrations finish` (step 5 above).
+* `5` - there are new migrations: you should start a migration sequence (beginning from step 1 above).
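+
+In a deployment script, the status code can drive the next action. A sketch
+(the `status` value is stubbed here for illustration; in practice capture it
+from `kong migrations list` via `$?`):
+
+```shell
+# In practice: kong migrations list -c /etc/kong/kong.conf; status=$?
+status=4
+case "$status" in
+  0) echo "migrations are up-to-date" ;;
+  1) echo "failed inspecting migrations (is the database up?)" ;;
+  3) echo "fresh datastore: run 'kong migrations bootstrap'" ;;
+  4) echo "pending migrations: run 'kong migrations finish' once the old cluster is decommissioned" ;;
+  5) echo "new migrations: start a migration sequence" ;;
+esac
+```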
+
+### Migration Steps from Kong 1.2 to Kong Enterprise 1.3-α
+
+
+> **Note:** This action is irreversible; it is highly recommended to keep a backup of production data.
+
+
+Kong Enterprise 1.3-α includes a command to migrate all Kong entities to Kong Enterprise. The following steps will guide you through the migration process.
+
+First download Kong Enterprise 1.3-α, and configure it to point to the same datastore as your Kong 1.2 node. The migration command expects the datastore to be up to date on any pending migrations:
+
+```shell
+$ kong migrations up [-c config]
+$ kong migrations finish [-c config]
+```
+
+Once all Kong Enterprise migrations are up to date, the migration command can be run as:
+
+```shell
+$ kong migrations migrate-community-to-enterprise [-c config] [-f] [-y]
+```
+
+Finally, confirm that all of the entities are available on your Kong Enterprise 1.3-α node.
diff --git a/app/enterprise/1.3-x/developer-portal/administration/developer-permissions.md b/app/enterprise/1.3-x/developer-portal/administration/developer-permissions.md
new file mode 100644
index 000000000000..d3c309bafc5c
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/administration/developer-permissions.md
@@ -0,0 +1,56 @@
+---
+title: Developer Roles and Content Permissions
+---
+
+## Introduction
+
+Access to the Developer Portal can be fine-tuned with the use of Developer Roles and Content Permissions, managed through the Dev Portal Permissions page of Kong Manager. This page can be found by clicking on the **Permissions** link under **Dev Portal** in the Kong Manager side navigation.
+
+## Roles
+
+The Roles Tab contains a list of available developer roles as well as the ability to create and edit roles.
+
+Selecting "Create Role" will allow us to enter the unique role name, as well as a comment to provide context for the nature of the role. We can assign the role to existing developers from within the role creation page. Clicking "Create" will save the role and return us to the Roles List view. Here we can see our newly created role as well as any other previously defined roles.
+
+Clicking "View" will show us the Role Details page with a list of developers assigned.
+
+From the Role Details page, we can click the "Edit" button to make changes to the role. We can also access this page from the Roles List "Edit" button. Here we can change the name and comment of the role, assign or remove developers, or delete the role.
+
+Deleting a role will remove it from any developers assigned the role and remove the role restriction from any content files it is applied to.
+
+## Content
+
+The Content Tab shows the list of content files used by the Dev Portal. Here we can apply roles to our content files, restricting access to developers who possess certain roles. Selecting an individual content file displays a dropdown of available developer roles. Here we can choose which roles have access to the file. Unchecking all available roles will leave the file unauthenticated.
+
+An additional option is present in the list: the `*` role. This predefined role behaves differently from other roles. When a content file has the `*` role attached to it, any developer may view the page as long as they are authenticated. Additionally, the `*` role may not be used in conjunction with other user-defined roles, and those roles will be deselected when `*` is selected.
+
+⚠️ **Important:** `dashboard.txt` and `settings.txt` content files are assigned the `*` role by default. All other content files have no roles by default. This means that until a role is added, the file is unauthenticated even if Dev Portal Authentication is enabled. Content Permissions are ignored when Dev Portal Authentication is disabled. For more information, visit the Dev Portal Authentication section.
+
+## readable_by attribute
+
+When a role is applied to a content file via the Content Tab, a special attribute `readable_by` is added to the headmatter of the file.
+
+```
+---
+readable_by:
+ - role_name
+ - another_role_name
+---
+```
+
+In the case of spec files, `readable_by` is applied under the key `x-headmatter` or `X-headmatter`.
+
+```
+x-headmatter:
+ readable_by:
+ - role_name
+ - another_role_name
+```
+
+The value of `readable_by` is an array of string role names that have access to view the content file. The exception is when the `*` role is applied to the file. In this case, the value of `readable_by` is no longer an array, but the single string character `*`.
+
+```
+readable_by: "*"
+```
+
+⚠️ **Important:** Please note that if you manually remove or edit the `readable_by` attribute, it will modify the permissions of the file. Attempting to save a content file with a `readable_by` array containing a nonexistent role name will result in an error. Additionally, if you make changes to permissions in the Content Tab or via the Portal Editor, be sure to sync any local files so that permissions are not overwritten the next time you push changes.
diff --git a/app/enterprise/1.3-x/developer-portal/administration/index.md b/app/enterprise/1.3-x/developer-portal/administration/index.md
new file mode 100644
index 000000000000..87dc44d98542
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/administration/index.md
@@ -0,0 +1,20 @@
+---
+title: Dev Portal Administration
+---
+
+
+
+
+
+- Inviting, Approving, Rejecting, and Revoking Developers
+
+- Control access to your Dev Portal with Developer Roles and Content Permissions
diff --git a/app/enterprise/1.3-x/developer-portal/administration/managing-developers.md b/app/enterprise/1.3-x/developer-portal/administration/managing-developers.md
new file mode 100644
index 000000000000..1c8b02ec82f1
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/administration/managing-developers.md
@@ -0,0 +1,144 @@
+---
+title: Managing Developers
+---
+
+### Developer Status
+
+A status represents the state of a developer and the access they have to the Dev
+ Portal and APIs.
+
+* **Approved**
+ * A Developer who can access the Dev Portal. Approved Developers can create
+ credentials & access **all** APIs that allow those credentials.
+* **Requested**
+ * A Developer who has requested access but has not yet been Approved.
+* **Rejected**
+ * A Developer who has had their request denied by a Kong Admin.
+* **Revoked**
+ * A Developer who once had access to the Dev Portal but has since had access
+ Revoked.
+
+
+![Managing Developers](https://konghq.com/wp-content/uploads/2018/05/gui-developer-tabs.png)
+
+### Approving Developers
+
+Developers who have requested access to a Dev Portal will appear under the
+**Requested Access** tab. From this tab you can choose to *Accept* or *Reject*
+the developer from the actions in the table row. After selecting an action the
+corresponding tab will update.
+
+
+### Viewing Approved Developers
+
+To view all currently approved developers choose the **Approved** tab. From here you can choose to *Revoke* or *Delete* a particular developer. Additionally you can use this view to send an email to a developer with the **Email Developer** `mailto` link. See [Emailing Developers](#emailing-developers) for more info.
+
+
+### Viewing Revoked Developers
+
+To view all currently revoked developers choose the **Revoked** tab. From here you can choose to *Re-approve* or *Delete* a developer.
+
+
+### Viewing Rejected Developers
+
+To view all currently rejected developers choose the **Rejected** tab. Rejected developers completed the registration flow on your Dev Portal but were rejected from the **Request Access** tab. You may *Approve* or *Delete* a developer from this tab.
+
+
+### Emailing Developers
+
+#### Inviting Developers to Register
+
+To invite a single or set of developers...
+
+1. Click the **Invite Developers** button from the top right corner above the tabs
+2. Use the popup modal to enter email addresses separated by commas
+3. After all emails have been added click **Invite**. This will open a pre-filled message in your default email client with a link to the registration page for your Dev Portal
+
+Each developer is bcc'd by default for privacy. You may choose to edit the message or send as is.
+
+![Invite Developers](https://konghq.com/wp-content/uploads/2018/05/invite-developers.png)
+
+
+### Developer Management Property Reference
+
+
+#### portal_auto_approve
+
+**Default:** `off`
+
+**Description:**
+Dev Portal Auto Approve Access.
+
+When set to `on`, a developer will automatically be marked as `approved` after
+completing Dev Portal registration. Access can still be revoked through
+Kong Manager or API.
+
+When set to `off`, a Kong Admin will have to manually approve the Developer via
+the Kong Manager or API.
+
+
+#### portal_invite_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Admins will be able to invite Developers to a Dev Portal by using
+ the "Invite" button in the Kong Manager.
+
+
+#### portal_access_request_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Kong Admins specified by `smtp_admin_emails` will receive an email
+ when a Developer requests access to a Dev Portal.
+
+When disabled, Kong Admins will have to manually check the Kong Manager to view
+any requests.
+
+
+#### portal_approved_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Developers will receive an email when access to a Dev Portal has
+been approved.
+
+When disabled, Developers will receive no indication that they have been
+approved. It is suggested to disable this feature only if `portal_auto_approve`
+is enabled.
+
+
+#### portal_reset_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Developers will be able to use the "Reset Password" flow on a Dev
+Portal and will receive an email with password reset instructions.
+
+When disabled, Developers will *not* be able to reset their account passwords.
+Kong Admins will have to manually create new credentials for the Developer in
+the Kong Manager.
+
+#### portal_token_exp
+
+**Default:** `21600`
+
+**Description:**
+Duration in seconds for the expiration of the Dev Portal reset password token.
+Default `21600` is six hours.
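The stated default can be checked with shell arithmetic:

```
# 21600 seconds divided by 3600 seconds per hour
echo $(( 21600 / 3600 ))
# → 6
```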
+
+
+#### portal_reset_success_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Developers will receive an email after successfully resetting their
+ Dev Portal account password.
+
+When disabled, Developers will still be able to reset their account passwords,
+but will not receive a confirmation email.
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/authentication/adding-registration-fields.md b/app/enterprise/1.3-x/developer-portal/configuration/authentication/adding-registration-fields.md
new file mode 100644
index 000000000000..8f3a4592ce0a
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/authentication/adding-registration-fields.md
@@ -0,0 +1,32 @@
+---
+title: Adding New Dev Portal Registration Fields
+toc: false
+---
+
+
+### Introduction
+
+By default, when authentication is enabled for a Dev Portal the only required
+fields are **full name**, **email**, and **password**. However, additional fields can be added
+to this form.
+
+
+### Adding Additional Registration Fields
+
+1. In Kong Manager, navigate to the desired Workspace's Dev Portal **Settings** page.
+
+2. Click the **Developer Meta Fields** tab on the **Settings Page**
+
+3. Click **+ Add Field** to add a new field object to the form.
+
+4. Give the new field a label, field name, and select the type of input
+
+5. Select the checkbox **Required** to require this field for registration
+
+6. Click the **Save Changes** button at the bottom of the form.
+
+
+Once saved, the new field will automatically be added to the registration form.
+
+> **WARNING** Adding new required fields to registration will block existing
+> developers from logging in. They will need to be removed and re-registered.
\ No newline at end of file
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/authentication/basic-auth.md b/app/enterprise/1.3-x/developer-portal/configuration/authentication/basic-auth.md
new file mode 100644
index 000000000000..2149ea3ab7d0
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/authentication/basic-auth.md
@@ -0,0 +1,65 @@
+---
+title: How to Enable Basic Auth in the Dev Portal
+---
+
+### Introduction
+
+The Kong Developer Portal can be fully or partially authenticated using the HTTP
+protocol's Basic Authentication scheme. Requests are sent with an `Authorization`
+header that contains the word `Basic` followed by the base64-encoded
+`username:password` string.
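As a quick sketch of the scheme itself (independent of Kong), the header value for a hypothetical `kong_user:secret` credential pair can be produced with standard tools:

```
# Base64-encode a hypothetical username:password pair
# (printf rather than echo, to avoid encoding a trailing newline)
printf 'kong_user:secret' | base64
# → a29uZ191c2VyOnNlY3JldA==

# The request would then carry:
#   Authorization: Basic a29uZ191c2VyOnNlY3JldA==
```

In practice, `curl -u kong_user:secret` performs this encoding automatically.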
+
+Basic Authentication for the Dev Portal can be enabled in three ways:
+
+- via the [Kong Manager](#enable-basic-auth-via-kong-manager)
+- via the [command line](#enable-basic-auth-via-the-command-line)
+- via the [Kong configuration file](#enable-basic-auth-via-the-kongconf)
+
+>**Warning** Enabling authentication in the Dev Portal requires use of the
+> Sessions plugin. Developers will not be able to log in if this is not set up
+> properly. For more information, see [Sessions in the Dev Portal](/enterprise/{{page.kong_version}}/developer-portal/configuration/authentication/sessions).
+
+### Enable Portal Session Config
+
+In the Kong configuration file, set the `portal_session_conf` property:
+
+```
+portal_session_conf={ "cookie_name": "portal_session", "secret": "CHANGE_THIS", "storage": "kong" }
+```
+
+If using HTTP while testing, include `"cookie_secure": false` in the config:
+
+```
+portal_session_conf={ "cookie_name": "portal_session", "secret": "CHANGE_THIS", "storage": "kong", "cookie_secure": false }
+```
+
+### Enable Basic Auth via Kong Manager
+
+1. Navigate to the Dev Portal's **Settings** page
+2. Find **Authentication plugin** under the **Authentication** tab
+3. Select **Basic Authentication** from the drop down
+4. Click the **Save Changes** button at the bottom of the form
+
+>**Warning** When Dev Portal Authentication is enabled, content files will remain unauthenticated until a role is applied to them. The exception to this is `settings.txt` and `dashboard.txt` which begin with the `*` role. Please visit the Developer Roles and Content Permissions section for more info.
+
+### Enable Basic Auth via the Command Line
+
+To patch a Dev Portal's authentication property directly run:
+
+```
+curl -X PATCH http://localhost:8001/workspaces/<WORKSPACE_NAME> \
+  --data "config.portal_auth=basic-auth"
+```
+
+>**Warning** When Dev Portal Authentication is enabled, content files will remain unauthenticated until a role is applied to them. The exception to this is `settings.txt` and `dashboard.txt` which begin with the `*` role. Please visit the Developer Roles and Content Permissions section for more info.
+
+### Enable Basic Auth via the Kong.conf
+
+Kong allows a default authentication plugin to be set in the Kong
+configuration file with the `portal_auth` property.
+
+In your `kong.conf` file set the property as follows:
+
+```
+portal_auth="basic-auth"
+```
+
+This will set all Dev Portals to use Basic Authentication by default when initialized.
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/authentication/index.md b/app/enterprise/1.3-x/developer-portal/configuration/authentication/index.md
new file mode 100644
index 000000000000..39d59a6a0d1c
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/authentication/index.md
@@ -0,0 +1,45 @@
+---
+title: Dev Portal Authentication
+toc: false
+---
+
+
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/authentication/key-auth.md b/app/enterprise/1.3-x/developer-portal/configuration/authentication/key-auth.md
new file mode 100644
index 000000000000..d0c2d2a3f452
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/authentication/key-auth.md
@@ -0,0 +1,65 @@
+---
+title: How to Enable Key Auth in the Dev Portal
+---
+
+### Introduction
+
+The Kong Developer Portal can be fully or partially authenticated using API keys or **Key
+Authentication**. Users provide a unique key upon registering and use this key
+to log into the Dev Portal.
+
+Key Authentication for the Dev Portal can be enabled in three ways:
+
+- via the [Kong Manager](#enable-key-auth-via-kong-manager)
+- via the [command line](#enable-key-auth-via-the-command-line)
+- via the [Kong configuration file](#enable-key-auth-via-the-kongconf)
+
+>**Warning** Enabling authentication in the Dev Portal requires use of the
+> Sessions plugin. Developers will not be able to log in if this is not set up
+> properly. For more information, see [Sessions in the Dev Portal](/enterprise/{{page.kong_version}}/developer-portal/configuration/authentication/sessions).
+
+### Enable Portal Session Config
+
+In the Kong configuration file, set the `portal_session_conf` property:
+
+```
+portal_session_conf={ "cookie_name": "portal_session", "secret": "CHANGE_THIS", "storage": "kong" }
+```
+
+If using HTTP while testing, include `"cookie_secure": false` in the config:
+
+```
+portal_session_conf={ "cookie_name": "portal_session", "secret": "CHANGE_THIS", "storage": "kong", "cookie_secure": false }
+```
+
+### Enable Key Auth via Kong Manager
+
+1. Navigate to the Dev Portal's **Settings** page
+2. Find **Authentication plugin** under the **Authentication** tab
+3. Select **Key Authentication** from the drop down
+4. Click the **Save Changes** button at the bottom of the form
+
+>**Warning** When Dev Portal Authentication is enabled, content files will remain unauthenticated until a role is applied to them. The exception to this is `settings.txt` and `dashboard.txt` which begin with the `*` role. Please visit the Developer Roles and Content Permissions section for more info.
+
+### Enable Key Auth via the Command Line
+
+To patch a Dev Portal's authentication property directly run:
+
+```
+curl -X PATCH http://localhost:8001/workspaces/<WORKSPACE_NAME> \
+  --data "config.portal_auth=key-auth"
+```
+
+>**Warning** When Dev Portal Authentication is enabled, content files will remain unauthenticated until a role is applied to them. The exception to this is `settings.txt` and `dashboard.txt` which begin with the `*` role. Please visit the Developer Roles and Content Permissions section for more info.
+
+### Enable Key Auth via the Kong.conf
+
+Kong allows a default authentication plugin to be set in the Kong
+configuration file with the `portal_auth` property.
+
+In your `kong.conf` file set the property as follows:
+
+```
+portal_auth="key-auth"
+```
+
+This will set every Dev Portal to use Key Authentication by default when
+initialized, regardless of Workspace.
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/authentication/oidc.md b/app/enterprise/1.3-x/developer-portal/configuration/authentication/oidc.md
new file mode 100644
index 000000000000..96d1e37a1c37
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/authentication/oidc.md
@@ -0,0 +1,131 @@
+---
+title: How to Enable OpenID Connect in the Dev Portal
+---
+
+### Introduction
+
+The [OpenID Connect Plugin](/hub/kong-inc/openid-connect/) (OIDC)
+allows the Kong Developer Portal to hook into existing authentication setups using third-party
+*Identity Providers* (IdP) such as Google, Yahoo, Microsoft Azure AD, etc.
+
+[OIDC](/hub/kong-inc/openid-connect/) must be used with
+the `session` method, utilizing cookies for Dev Portal File API requests.
+
+In addition, a configuration object is required to enable OIDC. Please refer to
+the [Sample Configuration Object](#sample-configuration-object) section of this
+document for more information.
+
+OIDC for the Dev Portal can be enabled in three ways:
+
+- via the [Kong Manager](#enable-oidc-via-kong-manager)
+- via the [command line](#enable-oidc-via-the-command-line)
+- via the [Kong configuration file](#enable-oidc-via-the-kongconf)
+
+
+### Portal Session Plugin Config
+
+Session Plugin Config does not apply when using OpenID Connect.
+
+### Sample Configuration Object
+
+Below is a sample configuration JSON object for using *Google* as the Identity
+Provider:
+
+```
+{
+  "consumer_by": ["username","custom_id","id"],
+  "leeway": 1000,
+  "scopes": ["openid","profile","email","offline_access"],
+  "logout_query_arg": "logout",
+  "client_id": ["<CLIENT_ID>"],
+  "login_action": "redirect",
+  "logout_redirect_uri": ["http://localhost:8003"],
+  "ssl_verify": false,
+  "consumer_claim": ["email"],
+  "forbidden_redirect_uri": ["http://localhost:8003/unauthorized"],
+  "client_secret": ["<CLIENT_SECRET>"],
+  "issuer": "https://accounts.google.com/",
+  "logout_methods": ["GET"],
+  "login_redirect_uri": ["http://localhost:8003"],
+  "login_redirect_mode": "query"
+}
+```
+
+The values above can be replaced with their corresponding values for a custom
+OIDC configuration:
+
+  - `<CLIENT_ID>` - The client ID provided by the IdP
+    * For example, Google credentials can be found here:
+      https://console.cloud.google.com/projectselector/apis/credentials
+  - `<CLIENT_SECRET>` - The client secret provided by the IdP
+
+If `portal_gui_host` and `portal_api_url` are set to share a domain but differ
+in regards to subdomain, `redirect_uri` and `session_cookie_domain` need to be
+configured to allow OpenID-Connect to apply the session correctly.
+
+Example:
+
+```
+{
+  "consumer_by": ["username","custom_id","id"],
+  "leeway": 1000,
+  "scopes": ["openid","profile","email","offline_access"],
+  "logout_query_arg": "logout",
+  "client_id": ["<CLIENT_ID>"],
+  "login_redirect_uri": ["https://example.portal.com"],
+  "login_action": "redirect",
+  "logout_redirect_uri": ["https://example.portal.com"],
+  "ssl_verify": false,
+  "consumer_claim": ["email"],
+  "redirect_uri": ["https://exampleapi.portal.com/auth"],
+  "session_cookie_domain": ".portal.com",
+  "forbidden_redirect_uri": ["https://example.portal.com/unauthorized"],
+  "client_secret": ["<CLIENT_SECRET>"],
+  "login_redirect_mode": "query"
+}
+```
+
+### Enable OIDC via Kong Manager
+
+1. Navigate to the Dev Portal's **Settings** page
+2. Find **Authentication plugin** under the **Authentication** tab
+3. Select **OpenID Connect** from the drop down
+4. Enter your customized [Configuration JSON Object](#sample-configuration-object) in the configuration field provided
+5. Click the **Save Changes** button at the bottom of the form
+
+>**Warning** When Dev Portal Authentication is enabled, content files will remain unauthenticated until a role is applied to them. The exception to this is `settings.txt` and `dashboard.txt` which begin with the `*` role. Please visit the Developer Roles and Content Permissions section for more info.
+
+### Enable OIDC via the Command Line
+
+To patch a Dev Portal's authentication property directly run:
+
+```
+curl -X PATCH http://localhost:8001/workspaces/<WORKSPACE_NAME> \
+  --data "config.portal_auth=openid-connect" \
+  --data "config.portal_auth_conf=<SAMPLE_CONFIGURATION_OBJECT>"
+```
+
+>**Warning** When Dev Portal Authentication is enabled, content files will remain unauthenticated until a role is applied to them. The exception to this is `settings.txt` and `dashboard.txt` which begin with the `*` role. Please visit the Developer Roles and Content Permissions section for more info.
+
+### Enable OIDC via the Kong.conf
+
+Kong allows for a `default authentication plugin` to be set in the Kong
+configuration file with the `portal_auth` property.
+
+In your `kong.conf` file set the property as follows:
+
+```
+portal_auth="openid-connect"
+```
+
+Then set the `portal_auth_conf` property to your
+customized [**Configuration JSON Object**](#sample-configuration-object).
+
+This will set every Dev Portal to use OpenID Connect by default when
+initialized, regardless of Workspace.
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/authentication/sessions.md b/app/enterprise/1.3-x/developer-portal/configuration/authentication/sessions.md
new file mode 100644
index 000000000000..3b3ada132439
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/authentication/sessions.md
@@ -0,0 +1,84 @@
+---
+title: Sessions in the Dev Portal
+---
+
+⚠️ **Important:** Portal Session Configuration does not apply when using [OpenID Connect](/hub/kong-inc/openid-connect) for Dev Portal authentication. The following information assumes that the Dev Portal is configured with a `portal_auth` setting other than `openid-connect`, for example `key-auth` or `basic-auth`.
+
+## How does the Sessions Plugin work in the Dev Portal?
+
+When a user logs in to the Dev Portal with their credentials, the Sessions Plugin will create a session cookie. The cookie is used for all subsequent requests and is valid to authenticate the user. The session has a limited duration and renews at a configurable interval, which helps prevent an attacker from obtaining and using a stale cookie after the session has ended.
+
+The Session configuration is secure by default, which may [require alteration](#session-security) if using HTTP or different domains for [portal_api_url](/enterprise/{{page.kong_version}}/developer-portal/networking/#portal_api_url) and [portal_gui_host](/enterprise/{{page.kong_version}}/developer-portal/networking/#portal_gui_host). Even if an attacker were to obtain a stale cookie, it would not benefit them since the cookie is encrypted. The encrypted session data may be stored either in Kong or the cookie itself.
+
+## Configuration to Use the Sessions Plugin with the Dev Portal
+
+To enable sessions authentication, configure the following:
+
+```
+portal_auth = <AUTH_PLUGIN>
+portal_session_conf = {
+    "secret":"<SECRET>",
+    "cookie_name":"<COOKIE_NAME>",
+    "storage":"kong",
+    "cookie_lifetime":<DURATION_IN_SECONDS>,
+    "cookie_renew":<DURATION_IN_SECONDS>,
+    "cookie_secure":<BOOLEAN>,
+    "cookie_samesite":"<SAMESITE_DIRECTIVE>"
+}
+```
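Because `portal_session_conf` is parsed as JSON, a malformed value (for example, a missing comma) will keep sessions from working. Assuming `python3` is available, a filled-in value can be sanity-checked before it goes into `kong.conf`:

```
# Validate a candidate portal_session_conf value as JSON
# and read one field back to confirm it parsed.
echo '{ "secret":"change-this-secret", "cookie_name":"portal_session", "storage":"kong" }' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["storage"])'
# → kong
```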
+
+* `"cookie_name":"<COOKIE_NAME>"`: The name of the cookie
+  * For example, `"cookie_name":"portal_cookie"`
+* `"secret":"<SECRET>"`: The secret used in keyed HMAC generation. Although
+  the **Session Plugin's** default is a random string, the `secret` _must_ be
+  manually set for use with the Dev Portal since it must be the same across all
+  Kong workers/nodes.
+* `"storage":"kong"`: Where session data is stored. This value _must_ be set to `kong` for use with the Dev Portal.
+* `"cookie_lifetime":<DURATION_IN_SECONDS>`: The duration (in seconds) that the session will remain open; 3600 by default.
+* `"cookie_renew":<DURATION_IN_SECONDS>`: The remaining session duration (in seconds) at which point
+  the Plugin renews the session; 600 by default.
+* `"cookie_secure":<BOOLEAN>`: `true` by default. See [Session Security](#session-security) for
+  exceptions.
+* `"cookie_samesite":"<SAMESITE_DIRECTIVE>"`: `"Strict"` by default. See [Session Security](#session-security) for
+  exceptions.
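Since the `secret` must be supplied manually, one possible way to generate a strong value (an illustration, not a Kong requirement) is with `openssl`:

```
# 32 random bytes, base64-encoded, suitable as a session secret
openssl rand -base64 32
```

The same value must then be configured on every Kong node.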
+
+⚠️ **Important:**
+*The following properties must not be altered from default for use with the Dev Portal:*
+* `logout_methods`
+* `logout_query_arg`
+* `logout_post_arg`
+
+For detailed descriptions of each configuration property, learn more in the [Session Plugin documentation](/enterprise/{{page.kong_version}}/plugins/session).
+
+## Session Security
+
+The Session configuration is secure by default, so the cookie uses the [Secure, HttpOnly](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#Secure_and_HttpOnly_cookies), and [SameSite](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#SameSite_cookies) directives.
+
+⚠️ **Important:** The following properties must be altered depending on the protocol and domains in use:
+* If using HTTP instead of HTTPS: `"cookie_secure": false`
+* If using different domains for [portal_api_url](/enterprise/{{page.kong_version}}/developer-portal/networking/#portal_api_url) and [portal_gui_host](/enterprise/{{page.kong_version}}/developer-portal/networking/#portal_gui_host): `"cookie_samesite": "off"`
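These two rules can be summarized with a small shell sketch (a hypothetical helper, not part of Kong) that prints the overrides a given setup needs:

```
# Hypothetical helper: print the portal_session_conf overrides required
# for a given protocol and portal_api_url / portal_gui_host domain pair.
cookie_overrides() {
  proto=$1; api_domain=$2; gui_domain=$3
  overrides=""
  if [ "$proto" = "http" ]; then
    overrides='"cookie_secure": false'
  fi
  if [ "$api_domain" != "$gui_domain" ]; then
    if [ -n "$overrides" ]; then overrides="$overrides, "; fi
    overrides="$overrides\"cookie_samesite\": \"off\""
  fi
  echo "{ $overrides }"
}

cookie_overrides http api.example.com portal.example.net
# → { "cookie_secure": false, "cookie_samesite": "off" }
```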
+
+## Example Configurations
+
+If using HTTPS and hosting Dev Portal API and the Dev Portal GUI from the same domain, the following configuration could be used for Basic Auth:
+
+```
+portal_auth = basic-auth
+portal_session_conf = {
+ "cookie_name":"$4m04$",
+ "secret":"change-this-secret",
+ "storage":"kong"
+}
+```
+
+In testing, if using HTTP, the following configuration could be used instead:
+
+```
+portal_auth = basic-auth
+portal_session_conf = {
+ "cookie_name":"04tm34l",
+ "secret":"change-this-secret",
+ "storage":"kong",
+ "cookie_secure":false
+}
+```
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/index.md b/app/enterprise/1.3-x/developer-portal/configuration/index.md
new file mode 100644
index 000000000000..fef2e041c151
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/index.md
@@ -0,0 +1,62 @@
+---
+title: Dev Portal Configuration
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/smtp.md b/app/enterprise/1.3-x/developer-portal/configuration/smtp.md
new file mode 100644
index 000000000000..846e9c1fe871
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/smtp.md
@@ -0,0 +1,149 @@
+---
+title: Dev Portal SMTP Configuration
+---
+
+### Introduction
+
+The following property reference outlines each email and email variable used by the Dev Portal to send emails to Kong Admins and Developers.
+
+These settings can be modified in the Kong Manager under the Dev Portal **Settings / Email** tab, or by running the following command:
+
+```
+curl -X PATCH http://localhost:8001/workspaces/<WORKSPACE_NAME> \
+  --data "config.<EMAIL_SETTING>=off"
+```
+
+If they are not modified manually, the Dev Portal will use the default value defined in the Kong Configuration file.
+
+
+### portal_invite_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Kong Admins will be able to invite Developers to a Dev Portal by using the "Invite" button in the Kong Manager.
+
+**Email:**
+```
+Subject: Invite to access Developer Portal
+
+Hello Developer!
+
+You have been invited to create a Developer Portal account at <WORKSPACE_NAME>.
+Please visit <PORTAL_GUI_URL>/register to create your account.
+```
+
+
+### portal_email_verification
+
+**Default:** `off`
+
+**Description:**
+When enabled, Developers will receive an email upon registration to verify their account. Developers will not be able to use the Dev Portal until their account is verified, even if auto-approve is enabled.
+
+
+### portal_access_request_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Kong Admins specified by `smtp_admin_emails` will receive an email when a Developer requests access to a Dev Portal.
+
+```
+Subject: Request to access Developer Portal
+
+Hello Admin!
+
+<DEVELOPER_EMAIL> has requested Developer Portal access for <WORKSPACE_NAME>.
+Please visit <KONG_MANAGER_URL> to review this request.
+```
+
+
+### portal_approved_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Developers will receive an email when access to a Dev Portal has been approved.
+
+```
+Subject: Developer Portal access approved
+
+Hello Developer!
+You have been approved to access <WORKSPACE_NAME>.
+Please visit <PORTAL_GUI_URL>/login to log in.
+
+```
+
+### portal_reset_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Developers will be able to use the "Reset Password" flow on a Dev Portal and will receive an email with password reset instructions.
+
+When disabled, Developers will *not* be able to reset their account passwords.
+
+```
+Subject: Password Reset Instructions for Developer Portal <WORKSPACE_NAME>.
+
+Hello Developer,
+
+Please click the link below to reset your Developer Portal password.
+
+<RESET_LINK>
+
+This link will expire in <EXPIRATION_TIME>.
+
+If you didn't make this request, keep your account secure by clicking
+the link above to change your password.
+```
+
+### portal_reset_success_email
+
+**Default:** `on`
+
+**Description:**
+When enabled, Developers will receive an email after successfully resetting their Dev Portal account password.
+
+When disabled, Developers will still be able to reset their account passwords, but will not receive a confirmation email.
+
+```
+Subject: Developer Portal password change success
+
+Hello Developer,
+We are emailing you to let you know that your Developer Portal password at <WORKSPACE_NAME> has been changed.
+
+Click the link below to sign in with your new credentials.
+
+<PORTAL_GUI_URL>/login
+```
+
+
+### portal_emails_from
+
+**Default:** `nil`
+
+**Description:**
+The name and email address for the 'From' header included in all Dev Portal emails.
+
+**Example :**
+
+```
+portal_emails_from = Your Name <example@example.com>
+```
+
+
+### portal_emails_reply_to
+
+**Default:** `nil`
+
+**Description:**
+The email address for the 'Reply-To' header included in all Dev Portal emails.
+
+
+**Example :**
+
+```
+portal_emails_reply_to = noreply@example.com
+```
diff --git a/app/enterprise/1.3-x/developer-portal/configuration/workspaces.md b/app/enterprise/1.3-x/developer-portal/configuration/workspaces.md
new file mode 100644
index 000000000000..dc22eb8cdaea
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/configuration/workspaces.md
@@ -0,0 +1,79 @@
+---
+title: Running Multiple Dev Portals with Workspaces
+---
+
+### Introduction
+
+Kong supports running multiple instances of the Dev Portal with the use of
+[**Workspaces**](/enterprise/{{page.kong_version}}/admin-api/workspaces/reference). This allows each Workspace to enable
+and maintain a separate Dev Portal (complete with separate files, settings, and
+authorization) from within a single instance of Kong.
+
+### Managing Multiple Dev Portals within Kong Manager
+
+A snapshot of every Dev Portal within an instance of Kong can be viewed via
+the Kong Manager's **Dev Portals** top navigation tab.
+
+This overview page details:
+
+- Whether a Dev Portal in a given Workspace is enabled or disabled
+- A link to set up the Dev Portal if it is not enabled
+- A link to each Dev Portal's homepage
+- A link to each Dev Portal's individual overview page within Kong Manager
+- Whether or not each Dev Portal is authenticated (indicated by a lock icon
+in the upper right corner of each card)
+
+![Dev Portals Overview Page](https://konghq.com/wp-content/uploads/2018/11/devportals-overview.png)
+
+
+### Enabling a Workspace's Dev Portal
+
+When a Workspace other than **default** is created, that Workspace's Dev Portal
+will remain `disabled` until it is manually enabled.
+
+This can be done from the Kong Manager by clicking the **Set up Dev Portal**
+button located on the **Dev Portals** Overview page, or by navigating directly
+to a Workspace's **Dev Portal Settings** page via the sidebar and toggling the
+`Dev Portal Switch`, or by sending the following cURL request:
+
+```
+curl -X PATCH http://localhost:8001/workspaces/<WORKSPACE_NAME> \
+  --data "config.portal=true"
+```
+
+On initialization, Kong will populate the new Dev Portal with the [**Default Settings**](#overriding-default-settings) defined in Kong's configuration file.
+
+>**Note** A Workspace can only enable a Dev Portal if the Dev Portal feature has been enabled in Kong's configuration.
+
+
+### Defining the Dev Portal's URL structure
+
+The URL of each Dev Portal is automatically configured upon initialization and
+is determined by four properties:
+
+1. The `portal_gui_protocol` property
+2. The `portal_gui_host` property
+3. Whether the `portal_gui_use_subdomains` property is enabled or disabled
+4. The `name` of the Workspace
+
+Example URL with subdomains disabled: `http://localhost:8003/example-workspace`
+
+Example URL with subdomains enabled: `http://example-workspace.localhost:8003`
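The derivation can be sketched as a small shell helper (hypothetical, for illustration only; Kong performs this internally):

```
# Hypothetical helper: build a Dev Portal URL from the three properties
# and the Workspace name.
portal_url() {
  protocol=$1; host=$2; use_subdomains=$3; workspace=$4
  if [ "$use_subdomains" = "on" ]; then
    echo "$protocol://$workspace.$host"
  else
    echo "$protocol://$host/$workspace"
  fi
}

portal_url http localhost:8003 off example-workspace
# → http://localhost:8003/example-workspace
portal_url http localhost:8003 on example-workspace
# → http://example-workspace.localhost:8003
```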
+
+The first three properties are controlled by Kong's configuration file and
+cannot be edited via the Kong Manager.
+
+### Overriding Default Settings
+
+On initialization, the Dev Portal will be configured using the **Default Portal Settings** defined in Kong's configuration file.
+
+These settings can be manually overridden in each Dev Portal's **Settings** tab
+in the Kong Manager, or by patching the setting directly.
+
+### Workspace Files
+
+On initialization of a Workspace's Dev Portal, a copy of the **default** Dev Portal files is made and inserted into the new Dev Portal. This makes it easy to transfer a customized Dev Portal theme and allows **default** to act as a 'master template'. However, the new Dev Portal will not continue to sync changes from the **default** Dev Portal after it is first enabled.
+
+### Developer Access
+
+Access is not synced between Dev Portals. If an Admin or Developer would like access to multiple Dev Portals, they must sign up for each Dev Portal individually.
diff --git a/app/enterprise/1.3-x/developer-portal/helpers/cli.md b/app/enterprise/1.3-x/developer-portal/helpers/cli.md
new file mode 100644
index 000000000000..4fcef50eb86f
--- /dev/null
+++ b/app/enterprise/1.3-x/developer-portal/helpers/cli.md
@@ -0,0 +1,62 @@
+---
+title: Developer Portal CLI
+---
+
+
+### Introduction
+
+The Kong Developer Portal CLI is used to manage your Developer Portals from the
+command line. It is built using [clipanion][clipanion].
+
+
+### Overview
+
+This is the next-generation, TypeScript-based Developer Portal CLI. The goal of
+this project is to provide a higher-quality CLI tool than the initial sync script.
+
+This project is built for Kong Enterprise `>= 1.3`.
+
+For Kong Enterprise `<= 0.36`, or for `legacy mode` on Kong Enterprise `>= 1.3` [use the legacy sync script][sync-script].
+
+
+### Install
+
+```
+> npm install -g kong-portal-cli
+```
+
+
+
+### Usage
+
+The easiest way to start is by cloning the [portal-templates repo][templates] dev-master branch locally.
+
+Then edit `workspaces/default/cli.conf.yaml` to set workspace `name` and `rbac_token` to match your setup.
+
+Make sure Kong is running and the Dev Portal is enabled.
+
+Now from root folder of the templates repo you can run:
+
+```
+portal [-h,--help] [--config PATH] [-v,--verbose]