Installation
This doc will go over setting up your environment for using Knox and will walk through the changes you should consider before putting it in production.
The first step is to install Go. We require Go 1.6 or later, or Go 1.5 with the vendor flag enabled (GO15VENDOREXPERIMENT=1). For instructions on setting up Go, please visit https://golang.org/doc/install. Make sure you also set up your workspace as described there (you should have a $GOPATH env variable defined and the go command available).
After Go is set up (including a $GOPATH that points to your workspace), please run go get -d github.com/pinterest/knox to get the latest version of the knox code.
To compile the dev server and dev client binaries, run go install github.com/pinterest/knox/cmd/dev_server and go install github.com/pinterest/knox/cmd/dev_client. These can be executed directly; the dev_client expects the server to be running on localhost. By default, the client uses mTLS with a hardcoded cert signed for example.com for machine authentication, and has GitHub authentication enabled for users. These two binaries can be used for toying around with Knox.
The rest of this guide will walk through how to modify these two binaries into something production-ready.
In your Go workspace, set up two other directories that correlate with your company's name in the $GOPATH/src directory. At Pinterest we use src/pinterest.com/security/knox/cmd/knox_server and src/pinterest.com/security/knox/cmd/knox for the server and client respectively. Once you make these two directories, copy the existing cmd/dev_server/main.go and cmd/dev_client/main.go into them. Using our directories, this looks like:
cd $GOPATH
mkdir -p src/pinterest.com/security/knox/cmd/knox_server
mkdir -p src/pinterest.com/security/knox/cmd/knox
cp src/github.com/pinterest/knox/cmd/dev_server/main.go src/pinterest.com/security/knox/cmd/knox_server/
cp src/github.com/pinterest/knox/cmd/dev_client/main.go src/pinterest.com/security/knox/cmd/knox/
Now we have our own copies of the knox server and client to work on and modify. You can install these at any point using go install pinterest.com/security/knox/cmd/knox_server, which will create a binary in your $GOPATH/bin directory. For more information on Go tooling, check out https://golang.org/cmd/go/.
In this section, we will cover the major topics for modification in the main.go file and how to go about making changes. You can do these step by step and should be able to test out each change as you go.
By default, the dev_server uses TempDB, which stores the key database in memory. This means you cannot deploy multiple servers, and if the dev_server ever goes down, all the data is gone. This obviously needs to change for production usage.
The router takes an implementation of keydb.DB as its second parameter. There are a couple already built that could be used for production, or you could build your own. Pinterest internally uses a custom interface inspired by the kingpin project (https://github.com/pinterest/kingpin). I present a MySQL and a PostgreSQL backend below that could be configured for production use (depending on what your company supports). There is more discussion of alternative databases and of building your own implementation of keydb.DB at https://github.com/pinterest/knox/wiki/Knox-Backend-Database
For both MySQL and PostgreSQL you will likely need credentials. These should be stored using some sort of secure method as discussed in the “Changing the key cryptor to provide confidentiality” section.
// import "github.com/go-sql-driver/mysql" to register mysql as a sql database
// (the DSN format for this driver is user:password@protocol(address)/dbname)
d, err := sql.Open("mysql", "user:password@tcp(host:3306)/test")
if err != nil {
	// handle the error
}
db, err := keydb.NewSQLDB(d)
// import "github.com/lib/pq" to register postgres as a sql database
d, err := sql.Open("postgres", "user=user dbname=test")
if err != nil {
	// handle the error
}
db, err := keydb.NewSQLDB(d)
The NewAESGCMCryptor should be suitable for production; it allows you to input a version byte and an AES key. The question is where to get this randomly generated AES key such that it is consistent across servers. (Yes, we are talking about where to store the secrets for a service designed for secret storage.)
The first and simplest answer is that these keys should be provided to the server as input files, manually placed on the server(s) running Knox. This makes scaling up or handling unexpected machine failures more difficult, but it has the nice benefit that only the user(s) who generated the key and the machines that need it have access.
The second answer would be to leverage some sort of hardware security module that only the knox machines can talk to. In terms of actual implementation, I don’t have much insight into this.
If your service is in AWS, KMS (https://aws.amazon.com/kms/) is a good alternative. You can restrict access to KMS decryption operations on the basis of IAM roles. If you launch your knox servers with a common IAM role and lock down access to that role and to the KMS keys, you can store your KMS-encrypted secrets in S3 (upload the keys needed with the kms flag). You can then retrieve them using the aws-sdk-go library (https://github.com/aws/aws-sdk-go), which will fall back to using the IAM role for authentication by default.
Custom logging and metrics are supported through the decorator model. A list of func(http.HandlerFunc) http.HandlerFunc values is passed into the router and chained together on every request. They are applied in the order given: for example, if [f,g,h] were passed in, they would be called as f(g(h(handlerForRoute))). This means you likely want logging and timers at the beginning of the list. A Go function that might send timing data to your metrics engine could look like this (based on our actual timing function):
func Timer(name string, trigger func(r *http.Request) bool) func(http.HandlerFunc) http.HandlerFunc {
	timer := metrics.NewHistogram(name)
	return func(f http.HandlerFunc) http.HandlerFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			now := time.Now()
			f(w, r)
			if trigger(r) {
				timer.Add(float64(time.Since(now)) / float64(time.Millisecond))
			}
		}
	}
}
This could then be passed into the list of decorators like this:
decorators := [](func(http.HandlerFunc) http.HandlerFunc){
	Timer("knox.getkeys", func(r *http.Request) bool { return "getkeys" == server.GetRouteID(r) }),
	server.Logger(accLogger),
	server.AddHeader("Content-Type", "application/json"),
	server.AddHeader("X-Content-Type-Options", "nosniff"),
	server.Authentication([]auth.Provider{auth.NewMTLSAuthProvider(certPool), auth.NewGitHubProvider(authTimeout)}),
}
r := server.GetRouter(cryptor, db, decorators)
As you may have noticed, the timer uses a function called GetRouteID, which gives the user a simple name for the route being called. These names are available in routes.go. (This is also implemented internally by a simple decorator added to the front of the decorator list.)
You will need to generate a private key and a certificate, signed by a trusted CA (Certificate Authority), for the knox server to run with. This is important for maintaining the confidentiality and integrity of knox responses.
The first step is to generate a private key and a CSR (Certificate Signing Request) for your trusted CA to sign. Part of the CSR will include the hostname(s) that your knox server will be running on.
If you are using cfssl (https://github.com/cloudflare/cfssl), you can generate a private key and CSR using cfssl genkey csr.json | cfssljson -bare knox-server with a csr.json file that looks like:
{
    "hosts": [
        "knox.server.hostname.here"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "San Francisco",
            "O": "Internet Widgets, Inc.",
            "OU": "WWW",
            "ST": "California"
        }
    ]
}
This will create a private key, knox-server-key.pem, and a CSR, knox-server.csr. The CSR can then be sent to your CA for signing. If you are using cfssl with an internal CA, you can generate a cert using cfssl sign -ca ca.pem -ca-key ca-key.pem knox-server.csr | cfssljson -bare knox-server, which will generate a knox-server.pem cert file.
If you get your CSR signed by an external CA, you will likely receive the cert in PEM format. Append any intermediate certificates to this certificate and save it as knox-server.pem.
The contents of these two files are used in the call to serveTLS as the certPEMBlock and the keyPEMBlock. The TLS configuration inside that function follows recommendations from https://wiki.mozilla.org/Security/Server_Side_TLS and should be usable in a production environment. The ClientAuth parameter is modified to allow for machine authentication (it is safe to remove the tls.RequestClientCert setting if you are not using mTLS for machine authentication).
As with the “Changing the key cryptor to provide confidentiality” section above, you should source the key file from a secure location. The same methods (manual copying, an HSM, or KMS) work well for this use case; see that section for code samples.
If you use mTLS in production, you should use the correct caCert (or list of caCerts, by calling append multiple times). The MTLSAuthProvider takes care of machine authentication server side. It does not currently verify revocation, but it handles hostname matching and cert verification by leveraging Go's stdlib TLS library. To use it, simply pass CA certs into auth.NewMTLSAuthProvider as below.
certPool := x509.NewCertPool()
certPool.AppendCertsFromPEM([]byte(caCert))
… auth.NewMTLSAuthProvider(certPool) ...
For user auth, GitHub authentication can be used in production.
To write a custom authentication provider for either machines or users, implement the auth interface and pass it into the server.Authentication middleware.
// Provider is used for authenticating requests via the authentication decorator.
type Provider interface {
	Authenticate(token string, r *http.Request) (knox.Principal, error)
	Version() byte
	Type() byte
}
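As an illustration of the shape of such a provider (with a stand-in Principal type so the sketch compiles on its own; a real implementation would return knox.Principal and be passed into server.Authentication), a toy static-token provider might look like:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// Principal stands in for knox.Principal here so the sketch is self-contained.
type Principal interface {
	GetID() string
}

type machine string

func (m machine) GetID() string { return string(m) }

// StaticTokenProvider authenticates any request bearing a fixed token.
// For illustration only; do not use a static shared token in production.
type StaticTokenProvider struct {
	token string
}

func (p *StaticTokenProvider) Authenticate(token string, r *http.Request) (Principal, error) {
	if token != p.token {
		return nil, errors.New("invalid token")
	}
	return machine("static-client"), nil
}

// Version and Type identify the token format, mirroring the interface above.
func (p *StaticTokenProvider) Version() byte { return '0' }
func (p *StaticTokenProvider) Type() byte    { return 's' }

func main() {
	p := &StaticTokenProvider{token: "secret"}
	principal, err := p.Authenticate("secret", nil)
	fmt.Println(err == nil && principal.GetID() == "static-client") // true
}
```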
Multiple knox servers are useful for reliability, increased load, and rolling deployments. At Pinterest we run Knox servers behind an AWS Elastic Load Balancer in TCP mode. This allows TLS connections to be forwarded for authentication and confidentiality, and lets us bring up machines as needed. This could also be accomplished with nginx running in TCP load-balancing mode. There are many service-discovery alternatives to a load balancer that could be written into the client if this method is not preferred. Given this type of setup, and a knox database that is not in-memory or local, the knox servers can be scaled up as needed.
Productionizing the knox client has more to do with packaging than anything else, but we will first cover the code modifications you might want to make.
You should add a root certificate and change the ServerName in the tlsConfig to match what you used in your server's TLS configuration. Be sure to remove the InsecureSkipVerify setting from the tls.Config (or set it to false). You should also change the hostname to point at the actual server (or your load balancer).
If you implement an OAuth provider that allows OAuth2 password grants, you can add its token endpoint and client ID to enable knox login to work.
You should change your VisibilityParams to output logs somewhere you can view them later on.
For machine authentication, you will likely want to change the source of the client certificate and key to some internal directory on the machine. The authHandler function can be changed as you see fit to pull credentials from other sources or files that work with your auth solution.
In order to get the knox client running with cacheable secrets, you need to create the directory /var/lib/knox/ and give the user who will be running knox daemon full permissions on that directory. We store the knox binary at /usr/bin/knox. There is also an upstart configuration available at client/knox-upstart.conf; if added to /etc/init/, it will allow upstart to be used to start and stop the knox daemon, which keeps registered keys up to date.