[
{
"uri": "https://nhite.github.io/principles/about/",
"title": "About",
"tags": [],
"description": "",
"content": "You can find here anything that is related to the Nhite development and core engine.\n"
},
{
"uri": "https://nhite.github.io/get_started/",
"title": "Get started",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://nhite.github.io/principles/from-cli-to-microservice/",
"title": "From command line tools to microservices - The example of Hashicorp tools (terraform) and gRPC",
"tags": [],
"description": "This article explains how to turn a golang utility into a webservice using gRPC (and protobuf). I take the example of Hashicorp tools because they are often used as a leverage for the DevOps transformation. Often, the Ops use the tools for themselves, but when comes the time to provide a service around them, they are usually scared to open the engine. They prefer to make a factory around the service, which is often less reliable than a little piece of code fully tested.",
"content": " This article has been originally published on Olivier Wulveryck\u0026rsquo;s blog\nThis post is a little bit different from the last ones. As usual, the introduction tries to be open, but it quickly goes deeper into a go implementation. Some of the explanations may be tricky from time to times and therefore not very clear. As usual, do not hesitate to send me any comment via this blog or via twitter @owulveryck.\nTL;DR: This is a step-by-step example that turns a golang cli utility into a webservice powered by gRPC and protobuf. The code can be found here.\nAbout the cli utilities I come from the sysadmin world\u0026hellip; Precisely the Unix world (I have been a BSD user for years). Therefore I have learned to use and love \u0026ldquo;the cli utilities\u0026rdquo;. Cli utilities are all those tools that make Unix sexy and \u0026ldquo;user-friendly\u0026rdquo;.\n Because, yes, Unix is user-friendly (it\u0026rsquo;s just picky about its friends1). \nFrom a user perspective, cli tools remains a must nowadays because:\n there are usually developed in the pure Unix philosophy: simple enough to use for what they were made for; they can be easily wrapped into scripts. Therefore, it is easy to automate cli actions. The point with cli application is that they are mainly developed for an end-user that we call \u0026ldquo;an operator\u0026rdquo;. As Unix is a multi-user operating system, several operators can use the same tool, but they have to be logged onto the same host.\nIn case of a remote execution, it\u0026rsquo;s possible to execute the cli via ssh, but dealing with automation, network interruption and resuming starts to be tricky. For remote and concurrent execution web-services are more suitable.\nLet\u0026rsquo;s see if turning a cli tool into a webservice without re-coding the whole logic is easy in go?\nHashicorp\u0026rsquo;s cli For the purpose of this post, and because I am using Hashicorp tools at work, I will take @mitchellh\u0026rsquo;s framework for developing command line utilities. This package is used in all of the Hashicorp tools and is called\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;. \u0026ldquo;cli\u0026rdquo;!\nThis library provides a Command type that represents any action that the cli will execute. Command is a go interface composed of three methods:\n Help() that returns a string describing how to use the command; Run(args []string) that takes an array of string as arguments (all cli parameters of the command) and returns an integer (the exit code); Synopsis() that returns a string describing what the command is about. Note: I assume that you know what an interface is (especially in go). If you don\u0026rsquo;t, just google, or even better, buy the book The Go Programming Language and read the chapter 7 :).\nThe main object that holds the business logic of the cli package is an implementation of Cli. One of the elements of the Cli structure is Commands which is a map that takes the name of the action as key. The name passed is a string and is the one that will be used on the command line. The value of the map is a function that returns a Command. This function is named CommandFactory. According to the documentation, the factory is needed because we may need to setup some state on the struct that implements the command itself. Good idea!\nExample First, let\u0026rsquo;s create a very simple tool using the \u0026ldquo;cli\u0026rdquo; package. 
The tool will have two \u0026ldquo;commands\u0026rdquo;:\n hello: will display hello args\u0026hellip;. on stdout goodbye: will display goodbye args\u0026hellip; on stderr func main() { c := cli.NewCLI(\u0026quot;server\u0026quot;, \u0026quot;1.0.0\u0026quot;) c.Args = os.Args[1:] c.Commands = map[string]cli.CommandFactory{ \u0026quot;hello\u0026quot;: func() (cli.Command, error) { return \u0026amp;HelloCommand{}, nil }, \u0026quot;goodbye\u0026quot;: func() (cli.Command, error) { return \u0026amp;GoodbyeCommand{}, nil }, } exitStatus, err := c.Run() ... } As seen before, the first object created is a Cli. Then the Commands field is filled with the two commands \u0026ldquo;hello\u0026rdquo; and \u0026ldquo;goodbye\u0026rdquo; as keys, and anonymous functions that simply return the two structures that implement the Command interface.\nNow, let\u0026rsquo;s create the HelloCommand structure that will fulfill the cli.Command interface:\ntype HelloCommand struct{} func (t *HelloCommand) Help() string { return \u0026quot;hello [arg0] [arg1] ... says hello to everyone\u0026quot; } func (t *HelloCommand) Run(args []string) int { fmt.Println(\u0026quot;hello\u0026quot;, args) return 0 } func (t *HelloCommand) Synopsis() string { return \u0026quot;A sample command that says hello on stdout\u0026quot; } The GoodbyeCommand is similar, and I omit it for brevity.\nAfter a simple go build, here is the behavior of our new cli tool: ~ ./server help Usage: server [--version] [--help] \u0026lt;command\u0026gt; [\u0026lt;args\u0026gt;] Available commands are: goodbye synopsis... hello A sample command that says hello on stdout ~ ./server hello -help hello [arg0] [arg1] ... says hello to everyone ~ ./server/server hello a b c hello [a b c] So far, so good! Now, let\u0026rsquo;s see if we can turn this into a webservice.\nMicro-services The biggest issue in changing a monolith into microservices lies in changing the communication pattern. - Martin Fowler2\nThere are, in my opinion, two options to consider for turning our application into a webservice:\n a RESTish communication and interface; an RPC-based communication. SOAP is not an option anymore because it does not provide any advantage over the REST and RPC methods.\nRest? I\u0026rsquo;ve always been a big fan of the REST \u0026ldquo;protocol\u0026rdquo;. It is easy to understand and to write. On top of that, it is verbose and allows a good description of \u0026ldquo;business objects\u0026rdquo;. But its verbosity, which is a strength, quickly becomes a weakness when applied to machine-to-machine communication. The \u0026ldquo;contract\u0026rdquo; between the client and the server has to be documented manually (via something like swagger for example). And, as you only transfer objects and states, the server must handle the request, understand it, and apply it to any business logic before returning a result. Don\u0026rsquo;t get me wrong, REST remains a very good thing. But it is very good when you think about it from the beginning of your design (and with the user experience in mind).\nIndeed, it may not be a good choice for easily turning a cli into a webservice.\nRPC! RPC, on the other hand, may be a good fit because it would require very little modification of the code. 
Actually, the principle would be to:\n trigger a network listener receive a procedure call with arguments, execute the function send back the result The function that holds the business logic does not need any change at all.\nThe drawbacks of RPCs are:\n the development language needs a library that supports RPC; the client and the server must use the same communication protocol. Those drawbacks have been addressed by Google. They gave the community a polyglot RPC implementation called gRPC.\nLet me quote this from the chapter \u0026ldquo;The Production Environment at Google, from the Viewpoint of an SRE\u0026rdquo; of the SRE book:\n All of Google\u0026rsquo;s services communicate using a Remote Procedure Call (RPC) infrastructure named Stubby; an open source version, gRPC, is available. Often, an RPC call is made even when a call to a subroutine in the local program needs to be performed. This makes it easier to refactor the call into a different server if more modularity is needed, or when a server\u0026rsquo;s codebase grows. GSLB can load balance RPCs in the same way it load balances externally visible services.\n Sounds cool! Let\u0026rsquo;s dig into gRPC!\ngRPC We will now implement a gRPC server that will trigger the cli.Commands.\nIt will receive \u0026ldquo;orders\u0026rdquo;, and depending on the expected call, it will:\n Implement a HelloCommand and trigger its Run() function; Implement a GoodbyeCommand and trigger its Run() function We will also implement a gRPC client.\nFor the server and the client to communicate, they have to share the same protocol and understand each other with a contract. Protocol Buffers (a.k.a., protobuf) are Google\u0026rsquo;s language-neutral, platform-neutral, extensible mechanism for serializing structured data. Even if it\u0026rsquo;s not mandatory, gRPC is usually used with Protocol Buffers.\nSo, first, let\u0026rsquo;s implement the contract with/in protobuf!\nThe protobuf contract The protocol is described in a simple text file using a specific DSL. Then there is a compiler that serializes the description and turns it into a contract that can be understood by the targeted language.\nHere is a simple definition that matches our needs:\nsyntax = \u0026quot;proto3\u0026quot;; package myservice; service MyService { rpc Hello (Arg) returns (Output) {} rpc Goodbye (Arg) returns (Output) {} } message Arg { repeated string args = 1; } message Output { int32 retcode = 1; } Here is the English description of the contract:\nLet\u0026rsquo;s take a service called MyService. This service provides two actions (commands) remotely:\n Hello Goodbye Both take as argument an object called Arg that contains an arbitrary number of strings (this array is stored in a field called args).\nBoth actions return an object called Output that contains an integer.\nThe specification is clear enough to code a server and a client. But the string implementation may differ from one language to another. You may now understand why we need to \u0026ldquo;compile\u0026rdquo; the file. Let\u0026rsquo;s generate a definition suitable for the go language:\nprotoc --go_out=plugins=grpc:. myservice/myservice.proto\nNote that the definition file has been placed into a subdirectory myservice\nThis command generates a myservice/myservice.pb.go file. This file is part of the myservice package, as specified in the myservice.proto.\nThe package myservice holds the \u0026ldquo;contract\u0026rdquo; translated into go. 
It is full of interfaces and holds helper functions to easily create a server and/or a client. Let\u0026rsquo;s see how.\nThe implementation of the \u0026ldquo;contract\u0026rdquo; into the server Let\u0026rsquo;s go back to the roots and read the doc of gRPC. In the gRPC basics - go tutorial, it is written:\nTo build and start a server, we:\n Specify the port we want to use to listen for client requests\u0026hellip; Create an instance of the gRPC server using grpc.NewServer(). Register our service implementation with the gRPC server. Call Serve() on the server with our port details to do a blocking wait until the process is killed or Stop() is called. Let\u0026rsquo;s decompose the third step.\n\u0026ldquo;service implementation\u0026rdquo; The myservice/myservice.pb.go file has defined an interface for our service.\ntype MyServiceServer interface { // Sends a greeting Hello(context.Context, *Arg) (*Output, error) Goodbye(context.Context, *Arg) (*Output, error) } To create a \u0026ldquo;service implementation\u0026rdquo; in our \u0026ldquo;cli\u0026rdquo; utility, we need to create a structure that implements the Hello(\u0026hellip;) and Goodbye(\u0026hellip;) methods. Let\u0026rsquo;s call our structure grpcCommands:\npackage main ... import \u0026quot;myservice\u0026quot; ... type grpcCommands struct {} func (g *grpcCommands) Hello(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { return \u0026amp;myservice.Output{int32(0)}, nil } func (g *grpcCommands) Goodbye(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { return \u0026amp;myservice.Output{int32(0)}, nil } Note: *myservice.Arg is a structure that holds an array of strings named Args. It corresponds to the proto definition exposed before.\n\u0026ldquo;service registration\u0026rdquo; As written in the doc, we need to register the implementation. In the generated file myservice.pb.go, there is a RegisterMyServiceServer function. This function is simply an autogenerated wrapper around the RegisterService method of the gRPC Server type.\nThis method takes two arguments:\n An instance of the gRPC server; the implementation of the contract. 
The four steps of the documentation can be implemented like this:\nlistener, _ := net.Listen(\u0026quot;tcp\u0026quot;, \u0026quot;127.0.0.1:1234\u0026quot;) grpcServer := grpc.NewServer() myservice.RegisterMyServiceServer(grpcServer, \u0026amp;grpcCommands{}) grpcServer.Serve(listener) So far so good\u0026hellip; The code compiles, but does not perform any action and always returns 0.\nActually calling the Run() method Now, let\u0026rsquo;s use the grpcCommands structure as a bridge between the cli.Command and the grpc service.\nWhat we will do is embed the c.Commands object inside the structure and trigger the appropriate objects\u0026rsquo; Run() method from the corresponding gRPC procedures.\nSo first, let\u0026rsquo;s embed the c.Commands object.\ntype grpcCommands struct { commands map[string]cli.CommandFactory } Then change the Hello and Goodbye methods of grpcCommands so they trigger respectively:\n HelloCommand.Run(args) GoodbyeCommand.Run(args) with args being the array of strings passed via the in argument of the protobuf,\nas defined in myservice.Arg.Args (the protobuf compiler has transcribed the repeated string args argument into a field Args []string of the type Arg).\nfunc (g *grpcCommands) Hello(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { runner, err := g.commands[\u0026quot;hello\u0026quot;]() if err != nil { return nil, err } ret := int32(runner.Run(in.Args)) return \u0026amp;myservice.Output{int32(ret)}, err } func (g *grpcCommands) Goodbye(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { runner, err := g.commands[\u0026quot;goodbye\u0026quot;]() if err != nil { return nil, err } ret := int32(runner.Run(in.Args)) return \u0026amp;myservice.Output{int32(ret)}, err } Let\u0026rsquo;s factorize a little bit and create a wrapper (that will be useful in the next section):\nfunc wrapper(cf cli.CommandFactory, args []string) (int32, error) { runner, err := cf() if err != nil { return int32(0), err } return int32(runner.Run(args)), nil } func (g *grpcCommands) Hello(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { ret, err := wrapper(g.commands[\u0026quot;hello\u0026quot;], in.Args) return \u0026amp;myservice.Output{int32(ret)}, err } func (g *grpcCommands) Goodbye(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { ret, err := wrapper(g.commands[\u0026quot;goodbye\u0026quot;], in.Args) return \u0026amp;myservice.Output{int32(ret)}, err } Now we have everything needed to turn our cli into a gRPC service. With a little bit of plumbing, the code compiles and the service runs. The full implementation of the service can be found here.\nA very quick client The principle is the same for the client. All the needed methods are auto-generated and wrapped by the protoc command.\nThe steps are:\n create a network connection to the gRPC server (with TLS) create a new instance of myservice\u0026rsquo;s client call a function and get a result for example:\nconn, _ := grpc.Dial(\u0026quot;127.0.0.1:1234\u0026quot;, grpc.WithInsecure()) defer conn.Close() client := myservice.NewMyServiceClient(conn) output, err := client.Hello(context.Background(), \u0026amp;myservice.Arg{os.Args[1:]}) Note: By default, gRPC requires some TLS. I have specified the WithInsecure option because I am running on the local loop and it is just an example. 
Don\u0026rsquo;t do that in production.\nGoing further Normally, Unix tools should respect a certain philosophy, such as:\nRule of Silence: When a program has nothing surprising to say, it should say nothing.\nAnyway, we all know that tools are verbose, so let\u0026rsquo;s add a feature that sends the content of stdout and stderr back to the client. (And anyway, we are implementing a greeting service. It would be useless if it was silent :))\nstdout / stderr What we want to do is change the output of the commands. Therefore, we simply add two more fields to the Output object in the protobuf definition: message Output { int32 retcode = 1; bytes stdout = 2; bytes stderr = 3; } The generated file contains the following definition for Output:\ntype Output struct { Retcode int32 `protobuf:\u0026quot;varint,1,opt,name=retcode\u0026quot; json:\u0026quot;retcode,omitempty\u0026quot;` Stdout []byte `protobuf:\u0026quot;bytes,2,opt,name=stdout,proto3\u0026quot; json:\u0026quot;stdout,omitempty\u0026quot;` Stderr []byte `protobuf:\u0026quot;bytes,3,opt,name=stderr,proto3\u0026quot; json:\u0026quot;stderr,omitempty\u0026quot;` } We have changed the Output type, but as all the fields are embedded within the structure, the \u0026ldquo;service implementation\u0026rdquo; interface (grpcCommands) has not changed. We only need to change the implementation a little bit in order to return a completed Output object:\nfunc (g *grpcCommands) Hello(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { var stdout, stderr []byte // ... return \u0026amp;myservice.Output{ret, stdout, stderr}, err } Now we have to change the wrapper function that has been defined previously to return the content of stdout and stderr:\nfunc wrapper(cf cli.CommandFactory, args []string) (int32, []byte, []byte, error) { // ... } func (g *grpcCommands) Hello(ctx context.Context, in *myservice.Arg) (*myservice.Output, error) { var stdout, stderr []byte ret, stdout, stderr, err := wrapper(g.commands[\u0026quot;hello\u0026quot;], in.Args) return \u0026amp;myservice.Output{ret, stdout, stderr}, err } All the work of capturing stdout and stderr is done within the wrapper function (this solution was found on StackOverflow):\n first, we back up the standard stdout and stderr then, we create two pairs of file descriptors linked by a pipe (one pair for stdout and one for stderr) we assign the standard stdout and stderr to the input of the pipe. From now on, every interaction will be written to the pipe and will be received into the variable declared as the output of the pipe then, we actually execute the function (the business logic) we get the content of the output and save it to variables, and then we restore stdout and stderr Here is the implementation of the wrapper: func wrapper(cf cli.CommandFactory, args []string) (int32, []byte, []byte, error) { var ret int32 oldStdout := os.Stdout // keep backup of the real stdout oldStderr := os.Stderr // Backup the stderr r, w, err := os.Pipe() // ... re, we, err := os.Pipe() //... os.Stdout = w os.Stderr = we runner, err := cf() // ... 
ret = int32(runner.Run(args)) outC := make(chan []byte) errC := make(chan []byte) // copy the output in a separate goroutine so printing can\u0026#39;t block indefinitely go func() { var buf bytes.Buffer io.Copy(\u0026amp;buf, r) outC \u0026lt;- buf.Bytes() }() go func() { var buf bytes.Buffer io.Copy(\u0026amp;buf, re) errC \u0026lt;- buf.Bytes() }() // back to normal state w.Close() we.Close() os.Stdout = oldStdout // restoring the real stdout os.Stderr = oldStderr stdout := \u0026lt;-outC stderr := \u0026lt;-errC return ret, stdout, stderr, nil } Et voilà, the cli has been transformed into a grpc webservice. The full code is available on GitHub.\nSide note about race conditions The map used for cli.Command is not concurrent safe. But there is no goroutine that actually writes to it, so it should be ok. Anyway, I have written a little benchmark of our function and passed it to the race detector. And it did not find any problem:\ngo test -race -bench=. goos: linux goarch: amd64 pkg: github.com/owulveryck/cli-grpc-example/server BenchmarkHello-2 200 10483400 ns/op PASS ok github.com/owulveryck/cli-grpc-example/server 4.130s The benchmark shows good results on my little chromebook, gRPC seems very efficient, but actually testing it is beyond the scope of this article.\nInteractivity Sometimes, cli tools ask questions. Another good point with gRPC is that it is bidirectional. Therefore, it would be possible to send the question from the server to the client and get the response back. I leave that for another experiment.\nTerraform ? At the beginning of this article, I explained that I was using this specific cli in order to derive Hashicorp tools and turn them into webservices. Let\u0026rsquo;s take an example with the excellent terraform.\nWe are going to derive terraform by changing only its cli interface and adding some gRPC powered by protobuf\u0026hellip;\n$$\\frac{\\partial terraform}{\\partial cli} + grpc^{protobuf} = \\mu service(terraform)$$ 3\nAbout concurrency Terraform uses backends to store its states. By default, it relies on the local filesystem, which is, obviously, not concurrent safe. It does not scale and cannot be used when dealing with webservices. For the purpose of my article, I won\u0026rsquo;t dig into the backend principle and will stick to the local one. Hence, this will only work with one and only one client. If you plan to do more work around terraform-as-a-service, changing the backend is a must!\nWhat will I test? In order to narrow the exercise, I will partially implement the plan command.\nMy test case is the creation of an EC2 instance on AWS. This example is a copy/paste of the example Basic Two-Tier AWS Architecture.\nI will not implement any kind of interactivity. Therefore I have added some default values for the ssh key name and path.\nLet\u0026rsquo;s check that the basic cli is working:\nlocalhost two-tier [master*] terraform plan | tail enable_classiclink_dns_support: \u0026quot;\u0026lt;computed\u0026gt;\u0026quot; enable_dns_hostnames: \u0026quot;\u0026lt;computed\u0026gt;\u0026quot; enable_dns_support: \u0026quot;true\u0026quot; instance_tenancy: \u0026quot;\u0026lt;computed\u0026gt;\u0026quot; ipv6_association_id: \u0026quot;\u0026lt;computed\u0026gt;\u0026quot; ipv6_cidr_block: \u0026quot;\u0026lt;computed\u0026gt;\u0026quot; main_route_table_id: \u0026quot;\u0026lt;computed\u0026gt;\u0026quot; Plan: 9 to add, 0 to change, 0 to destroy. 
Ok, let\u0026rsquo;s \u0026ldquo;hack\u0026rdquo; terraform!\nhacking Terraform Creating the protobuf contract The contract will be placed in a terraformservice package. I am using a similar approach to the one used for the greeting example described before:\nsyntax = \u0026quot;proto3\u0026quot;; package terraformservice; service Terraform { rpc Plan (Arg) returns (Output) {} } message Arg { repeated string args = 1; } message Output { int32 retcode = 1; bytes stdout = 2; bytes stderr = 3; } Then I generate the go version of the contract with:\nprotoc --go_out=plugins=grpc:. terraformservice/terraform.proto\nThe go implementation of the interface I am using a similar structure to the one defined in the previous example. I only change the methods to match the new ones:\ntype grpcCommands struct { commands map[string]cli.CommandFactory } func (g *grpcCommands) Plan(ctx context.Context, in *terraformservice.Arg) (*terraformservice.Output, error) { ret, stdout, stderr, err := wrapper(g.commands[\u0026quot;plan\u0026quot;], in.Args) return \u0026amp;terraformservice.Output{ret, stdout, stderr}, err } The wrapper function remains exactly the same as the one defined before because I didn\u0026rsquo;t change the Output format.\nSetting a gRPC server in the main function The only modification that has to be done is to create a listener for the grpc server, like the one we did before. We place it in the main code, just before the execution of the Cli.Run() call:\nif len(cliRunner.Args) == 0 { log.Println(\u0026quot;Listening on 127.0.0.1:1234\u0026quot;) listener, err := net.Listen(\u0026quot;tcp\u0026quot;, \u0026quot;127.0.0.1:1234\u0026quot;) if err != nil { log.Fatalf(\u0026quot;failed to listen: %v\u0026quot;, err) } grpcServer := grpc.NewServer() terraformservice.RegisterTerraformServer(grpcServer, \u0026amp;grpcCommands{cliRunner.Commands}) // determine whether to use TLS grpcServer.Serve(listener) } Testing it The code compiles without any problem. I have triggered the terraform init and I have a listening process waiting for a call:\n~ netstat -lntp | grep 1234 (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) tcp 0 0 127.0.0.1:1234 0.0.0.0:* LISTEN 9053/tfoliv Let\u0026rsquo;s launch a client:\nfunc main() { conn, err := grpc.Dial(\u0026quot;127.0.0.1:1234\u0026quot;, grpc.WithInsecure()) if err != nil { log.Fatal(\u0026quot;Cannot reach grpc server\u0026quot;, err) } defer conn.Close() client := terraformservice.NewTerraformClient(conn) output, err := client.Plan(context.Background(), \u0026amp;terraformservice.Arg{os.Args[1:]}) stdout := bytes.NewBuffer(output.Stdout) stderr := bytes.NewBuffer(output.Stderr) io.Copy(os.Stdout, stdout) io.Copy(os.Stderr, stderr) fmt.Println(output.Retcode) os.Exit(int(output.Retcode)) } ~ ./grpcclient ~ echo $? ~ 0 Too bad, the proper function has been called, the return code is ok, but all the output went to the console of the server\u0026hellip; Anyway, the RPC has worked.\nI can even remove the default parameters and pass them as arguments to my client:\n~ ./grpcclient -var 'key_name=terraform' -var 'public_key_path=~/.ssh/terraform.pub' ~ echo $? ~ 0 And let\u0026rsquo;s see if I give a non-existent path:\n~ ./grpcclient -var 'key_name=terraform' -var 'public_key_path=~/.ssh/nonexistent' ~ echo $? ~ 1 about the output: I have been a little bit optimistic about the stdout and stderr. 
Actually, to make it work, the best option would be to implement a custom UI (it should not be difficult because Ui is also an interface). I will try an implementation as soon as I have enough time to do so. But for now, I have reached my first goal, and this post is long enough :)\nConclusion Transforming terraform into a webservice has required very little modification of the terraform code itself, which is very good for maintenance purposes:\ndiff --git a/main.go b/main.go index ca4ec7c..da5215b 100644 --- a/main.go +++ b/main.go @@ -5,14 +5,18 @@ import ( \u0026quot;io\u0026quot; \u0026quot;io/ioutil\u0026quot; \u0026quot;log\u0026quot; + \u0026quot;net\u0026quot; \u0026quot;os\u0026quot; \u0026quot;runtime\u0026quot; \u0026quot;strings\u0026quot; \u0026quot;sync\u0026quot; + \u0026quot;google.golang.org/grpc\u0026quot; + \u0026quot;github.com/hashicorp/go-plugin\u0026quot; \u0026quot;github.com/hashicorp/terraform/helper/logging\u0026quot; \u0026quot;github.com/hashicorp/terraform/terraform\u0026quot; + \u0026quot;github.com/hashicorp/terraform/terraformservice\u0026quot; \u0026quot;github.com/mattn/go-colorable\u0026quot; \u0026quot;github.com/mattn/go-shellwords\u0026quot; \u0026quot;github.com/mitchellh/cli\u0026quot; @@ -185,6 +189,18 @@ func wrappedMain() int { PluginOverrides.Providers = config.Providers PluginOverrides.Provisioners = config.Provisioners + if len(cliRunner.Args) == 0 { + log.Println(\u0026quot;Listening on 127.0.0.1:1234\u0026quot;) + listener, err := net.Listen(\u0026quot;tcp\u0026quot;, \u0026quot;127.0.0.1:1234\u0026quot;) + if err != nil { + log.Fatalf(\u0026quot;failed to listen: %v\u0026quot;, err) + } + grpcServer := grpc.NewServer() + terraformservice.RegisterTerraformServer(grpcServer, \u0026amp;grpcCommands{cliRunner.Commands}) + // determine whether to use TLS + grpcServer.Serve(listener) + } + exitCode, err := cliRunner.Run() if err != nil { Ui.Error(fmt.Sprintf(\u0026quot;Error executing CLI: %s\u0026quot;, err.Error())) Of course, there is a bit of work to set up a complete terraform-as-a-service architecture, but it looks promising.\nRegarding grpc and protobuf: gRPC is a very nice protocol, and I am really looking forward to an implementation in javascript to target the browser (meanwhile, it is possible and easy to set up a grpc-to-json proxy if any web client is needed).\nBut it reminds us that the main target of RPC is machine-to-machine communication. This is something that the ease of use and readability of json has overshadowed\u0026hellip;\n This sentence is not from me. I read it once, somewhere, on the Internet. I cannot find anybody to give the credit to. [return] from Martin Fowler\u0026rsquo;s Microservices definition. [return] I know, this mathematical equation comes from nowhere. But I simply like the beauty of this language. (I would have been damned by my math teachers because I used mathematical language to describe something that is not mathematical. Would you please forgive me, gentlemen :) [return] "
},
{
"uri": "https://nhite.github.io/principles/hip-terraform-part-i/",
"title": "Terraform is hip... Introducing Nhite",
"tags": [],
"description": "This is a second part of the last article. I now really dig into terraform. This article will explain how to use the Terraform sub-packages in order to create a brand new binary that acts as a gRPC server instead of a cli.",
"content": " This post is has been originally published on Olivier Wulveryck\u0026rsquo;s Tech Blog.\nIn a previous post, I did some experiments with gRPC, protocol buffer and Terraform. The idea was to transform the \u0026ldquo;Terraform\u0026rdquo; cli tool into a micro-service thanks to gRPC.\nThis post is the second part of the experiment. I will go deeper in the code and see if it is possible to create a brand new utility, without hacking Terraform. The idea is to import some of the packages that compose the binary and create my own service based on gRPC.\nThe Terraform structure Terraform is a binary utility written in go. The main package resides in the root directory of the terraform directory. As usual with go projects, all other subdirectories are different modules.\nThe whole business logic of Terraform is coded into the subpackages. The \u0026ldquo;main\u0026rdquo; package is simply an envelop for kick-starting the utility (env variables, etc.) and to initiate the command line.\nThe cli implementation The command line flags are instantiated by Mitchell Hashimoto\u0026rsquo;s cli package. As explained in the previous post, this cli package is calling a specific function for every action.\nThe command package Every single action is fulfilling the cli.Command interface and is implemented in the command subpackage. Therefore, every \u0026ldquo;action\u0026rdquo; of Terraform has a definition in the command package and the logic is coded into a Run(args []string) int method (see the doc of the Command interface for a complete definition.\nCreating a new binary The idea is not to hack any of the packages of Terraform to allow an easier maintenance of my code. In order to create a custom service, I will instead implement a new utility; therefore a new main package. This package will implement a gRPC server. This server will implement wrappers around the functions declared in the terraform.Command package.\nFor the purpose of my POC, I will only implement three actions of Terraform:\n terraform init terraform plan terraform apply The gRPC contract In order to create a gRPC server, we need a service definition. To keep it simple, let\u0026rsquo;s consider the contract defined in the previous post (cf the section: Creating the protobuf contract). I simply add the missing procedure calls:\nsyntax = \u0026quot;proto3\u0026quot;; package pbnhite; service Terraform { rpc Init (Arg) returns (Output) {} rpc Plan (Arg) returns (Output) {} rpc Apply (Arg) returns (Output) {} } message Arg { repeated string args = 2; } message Output { int32 retcode = 1; bytes stdout = 2; bytes stderr = 3; } Fulfilling the contract As described previously, I am creating a grpcCommand structure that will have the required methods to fulfill the contract:\ntype grpcCommands struct {} func (g *grpcCommands) Init(ctx context.Context, in *pb.Arg) (*pb.Output, error) { .... } func (g *grpcCommands) Plan(ctx context.Context, in *pb.Arg) (*pb.Output, error) { .... } func (g *grpcCommands) Apply(ctx context.Context, in *pb.Arg) (*pb.Output, error) { .... } In the previous post, I have filled the grpcCommand structure with a map filled with the command definition. The idea was to keep the same CLI interface. As we are now building a completely new binary with only a gRPC interface, we don\u0026rsquo;t need that anymore. 
Indeed, there is still a need to execute the Run method of every Terraform command.\nLet\u0026rsquo;s take the example of the Init command.\nLet\u0026rsquo;s see the definition of the command by looking at the godoc:\ntype InitCommand struct { Meta // contains filtered or unexported fields } This command holds a substructure called Meta. Meta is defined here and holds the meta-options that are available on all or most commands. Obviously we need a Meta definition in the command to make it work properly.\nFor now, let\u0026rsquo;s add it to the grpcCommands structure globally, and we will see later how to implement it.\nHere is the gRPC implementation of the contract:\ntype grpcCommands struct { meta command.Meta } func (g *grpcCommands) Init(ctx context.Context, in *pb.Arg) (*pb.Output, error) { // ... tfCommand := \u0026amp;command.InitCommand{ Meta: g.meta, } ret := int32(tfCommand.Run(in.Args)) return \u0026amp;pb.Output{ret, stdout, stderr}, err } How to initialize the grpcCommands object Now that we have a proper grpcCommands structure that can be registered with the grpc server, let\u0026rsquo;s see how to create an instance. As the grpcCommands structure only contains one field, we simply need to create a meta object.\nLet\u0026rsquo;s simply copy/paste the code from Terraform\u0026rsquo;s main package, and we now have:\nvar PluginOverrides command.PluginOverrides meta := command.Meta{ Color: false, GlobalPluginDirs: globalPluginDirs(), PluginOverrides: \u0026amp;PluginOverrides, Ui: \u0026amp;grpcUI{}, } pb.RegisterTerraformServer(grpcServer, \u0026amp;grpcCommands{meta: meta}) According to the comments in the code, the globalPluginDirs() returns directories that should be searched for globally-installed plugins (not specific to the current configuration). I will simply copy the function into my main package.\nAbout the UI In the example CLI that I developed in the previous post, what I did was to redirect stdout and stderr to arrays of bytes, in order to capture them and send them back to a gRPC client. I noticed that this was not working with Terraform. This is because of the UI! UI is an interface whose purpose is to get the output stream and write it down to a specific io.Writer.\nOur tool will need a custom UI.\nA custom UI As UI is an interface (see the doc here), it is easy to implement our own. Let\u0026rsquo;s define a structure that holds two arrays of bytes called stdout and stderr. Then let\u0026rsquo;s implement the methods that will write into these elements:\ntype grpcUI struct { stdout []byte stderr []byte } func (g *grpcUI) Output(msg string) { g.stdout = append(g.stdout, []byte(msg)...) } Note 1: I omit the methods Info, Warn, and Error for brevity.\nNote 2: For now, I do not implement any logic into the Ask and AskSecret methods. Therefore, my client will not be able to ask anything. But as gRPC is bidirectional, it would be possible to implement such an interaction.\nNow, we can instantiate this UI for every call, and assign it to the meta-options of the command:\nvar stdout []byte var stderr []byte myUI := \u0026amp;grpcUI{ stdout: stdout, stderr: stderr, } tfCommand.Meta.Ui = myUI So far, so good: we now have a new Terraform binary that works via gRPC, with very little code.\nWhat did we miss? 
Multi-stack It is fun but not usable for real purposes because the server needs to be launched from the directory where the tf files are\u0026hellip; Therefore the service can only be used for one single Terraform stack\u0026hellip; Come on!\nLet\u0026rsquo;s change that and pass as a parameter of the RPC call the directory where the server needs to work. It is as simple as adding an extra argument to the message Arg:\nmessage Arg { string workingDir = 1; repeated string args = 2; } and then, simply do a change directory in the implementation of the command:\nfunc (g *grpcCommands) Init(ctx context.Context, in *pb.Arg) (*pb.Output, error) { err := os.Chdir(in.WorkingDir) if err != nil { return \u0026amp;pb.Output{int32(0), nil, nil}, err } tfCommand := \u0026amp;command.InitCommand{ Meta: g.meta, } var stdout []byte var stderr []byte myUI := \u0026amp;grpcUI{ stdout: stdout, stderr: stderr, } tfCommand.Meta.Ui = myUI ret := int32(tfCommand.Run(in.Args)) return \u0026amp;pb.Output{ret, stdout, stderr}, err } Implementing a new push command I have a Terraform service. Alright. Can an \u0026ldquo;Operator\u0026rdquo; use it?\nThe service we have deployed is working exactly like Terraform. I have only changed the user interface. Therefore, in order to deploy a stack, the \u0026lsquo;tf\u0026rsquo; files must be present locally on the host.\nObviously we do not want to give access to the server that hosts Terraform. This is not how micro-services work.\nTerraform has a push command that Hashicorp has implemented to communicate with Terraform enterprise. This command is linked with their closed-source product called \u0026ldquo;Atlas\u0026rdquo; and is therefore useless for us.\nLet\u0026rsquo;s take the same principle and implement our own push command.\nPrinciple The push command will zip all the tf files of the current directory in memory, and transfer the zip via a specific message to the server. The server will then decompress the zip into a unique temporary directory and send back the ID of that directory. Then every other Terraform command can use the ID of the directory and use the stack (as before).\nLet\u0026rsquo;s implement a protobuf contract:\nservice Terraform { // ... rpc Push(stream Body) returns (Id) {} } message Body { bytes zipfile = 1; } message Id { string tmpdir = 1; } Note: For now, I assume that the whole zip can fit into a single message. I will probably have to implement chunking later.\nThen instantiate the definition into the code of the server:\nfunc (g *grpcCommands) Push(stream pb.Terraform_PushServer) error { workdir, err := ioutil.TempDir(\u0026quot;\u0026quot;, \u0026quot;.terraformgrpc\u0026quot;) if err != nil { return err } err = os.Chdir(workdir) if err != nil { return err } body, err := stream.Recv() if err == io.EOF || err == nil { // We have the whole file // Now let\u0026#39;s extract the zipfile // ... } if err != nil { return err } return stream.SendAndClose(\u0026amp;pb.Id{ Tmpdir: workdir, }) } going further\u0026hellip; The problem with this architecture is that it\u0026rsquo;s stateful, and therefore not easily scalable.\nA solution would be to store the zip file in a third-party service and identify it with a unique ID. Then call the Terraform commands with this ID as a parameter. The Terraform engine would then grab the zip file from the third-party service if needed and process the file.\nImplementing a backend micro-service I want to keep the same logic, therefore the storage service can be a gRPC microservice. 
We can then have different services (such as s3, google storage, dynamodb, NAS, \u0026hellip;) written in different languages.\nThe Terraform service will act as a client of this \u0026ldquo;backend\u0026rdquo; service (note that it is not the same backend as the one defined within Terraform).\nOur Terraform-service can then be configured at runtime to call the host/port of the correct backend-service. We can even imagine the backend address being served via consul.\nThis is a work in progress and may be part of another blog post.\nHip1 is cooler than cool: Introducing Nhite I have talked to some people about all this stuff and I feel that people are interested. Therefore I have set up a GitHub organisation and a GitHub project to centralize what I will do around that.\nThe project is called Nhite.\n The GitHub organization is called nhite. The web page is https://nhite.github.io There is still a lot to do, but I really think it could make sense to create a community. It may result in a product in the end, or go into my attic of dead projects. Anyway, so far I\u0026rsquo;ve had a lot of fun!\n hip definition on wikipedia [return] "
},
{
"uri": "https://nhite.github.io/principles/",
"title": "Architecture",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://nhite.github.io/_header/",
"title": "",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://nhite.github.io/",
"title": "",
"tags": [],
"description": "",
"content": " Hip-Terraform Terraform is hip, so is Nhite! \n"
},
{
"uri": "https://nhite.github.io/get_started/about/",
"title": "About Nhite",
"tags": [],
"description": "",
"content": " Disclaimer: Nhite is in its really early development stage. It may become a product\u0026hellip; or not.\nWhat is Nhite ? Nhite is a binary that is implementing the terraform engine, but that is exposing a gRPC service instead of a cli.\nDo I need Nhite? If you are working with terraform alone, or if you have susbscribed to terraform enterprise, probably not.\nBut you may need it if you are willing to:\n work in team with terraform easily integrate terraform in a CI/CD easily use terraform on multiple environment (Dev, Prod, \u0026hellip;) [insert any cool use-case here] Does Nhite mean anything? Nhite stands for Nhite is hip-terraform\n"
},
{
"uri": "https://nhite.github.io/principles/architecture/",
"title": "Architecture diagrams",
"tags": [],
"description": "",
"content": "sequenceDiagram participant Client participant Nhite participant Backend participant TerraformLib Client-Nhite: push Nhite-Backend: push Nhite-Client: ID Client-Nhite: plan ID Nhite-Backend: Get ID Nhite-Nhite: cd ID Nhite--TerraformLib: commands/plan.go Nhite-Client: result Client-Nhite: apply ID Nhite-Backend: Get ID Nhite-Nhite: cd ID Nhite--TerraformLib: commands/apply.go Nhite-Client: result "
},
{
"uri": "https://nhite.github.io/categories/",
"title": "Categories",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://nhite.github.io/get_started/download/",
"title": "Download and test the POC",
"tags": [],
"description": "",
"content": "So far Nhite is mostly a proof-of-concept about how to transform terraform in a gRPC service.\nIf you want to test what\u0026rsquo;s done so far, you can download a \u0026ldquo;nhite server\u0026rdquo; and a \u0026ldquo;sample nhite client\u0026rdquo; from github.\n The nhite server A simple nhite client "
},
{
"uri": "https://nhite.github.io/tags/",
"title": "Tags",
"tags": [],
"description": "",
"content": ""
}]