The development uses Kustomize to manage YAML templating and composition. The detailed deploy folder structure can be found here.
Local development uses Skaffold to manage the build, deploy, and port forwarding.
- Contact the DataCommons team to get data access to Cloud Bigtable and BigQuery.
- Install the following tools to develop mixer locally (with Docker):
- If you prefer to develop locally without Docker, you need to install the following:
Make sure to add `GOPATH` and update `PATH`:

```bash
# Use the actual path of your Go installation
export GOPATH=/Users/<USER>/go/
export PATH=$PATH:$GOPATH/bin
```
- Authenticate to GCP:

```bash
gcloud components update
gcloud auth login
gcloud auth application-default login
```
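Optionally, verify that the credentials took effect; these are standard gcloud commands:

```bash
# Sanity check: show authenticated accounts and the currently active project
gcloud auth list
gcloud config get-value project
```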
NOTE: This setup can only develop and test the gRPC server. Since ESP is not brought up here, the REST API cannot be tested.
Install the following packages as a one-time action.
```bash
cd ~/  # Be sure there is no go.mod in the local directory
go install google.golang.org/protobuf/cmd/[email protected]
go install google.golang.org/grpc/cmd/[email protected]
```
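As a quick sanity check, both plugins should now be discoverable on your `PATH` (this assumes `$GOPATH/bin` was added to `PATH` as described above):

```bash
# protoc looks these plugins up on PATH during code generation
which protoc-gen-go protoc-gen-go-grpc
protoc-gen-go --version
```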
Run the following command to generate Go proto files.
```bash
# In repo root directory
protoc \
  --proto_path=proto \
  --go_out=internal \
  --go-grpc_out=internal \
  --go-grpc_opt=require_unimplemented_servers=false \
  proto/*.proto proto/v1/*.proto
```
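On success, the generated `*.pb.go` files are written under `internal/` (per the `--go_out` and `--go-grpc_out` flags). A quick way to confirm:

```bash
# List a few of the generated Go proto files
find internal -name '*.pb.go' | head
```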
Run the following command to start the mixer gRPC server (without branch cache):

```bash
# In repo root directory
go run --tags sqlite_fts5 cmd/main.go \
  --mixer_project=datcom-mixer-staging \
  --bq_dataset=$(head -1 deploy/storage/bigquery.version) \
  --base_bigtable_info="$(cat deploy/storage/base_bigtable_info.yaml)" \
  --custom_bigtable_info="$(cat test/custom_bigtable_info.yaml)" \
  --schema_path=$PWD/deploy/mapping/ \
  --use_branch_bt=false
```
In a separate terminal, run the example client against the local server:

```bash
go run examples/main.go
```
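If you have `grpcurl` installed, you can also poke the server directly. This is a sketch that assumes the server listens on the default port 12345 mentioned later in this doc and has gRPC reflection enabled:

```bash
# List the gRPC services exposed by the local mixer via server reflection
grpcurl -plaintext 127.0.0.1:12345 list
```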
Mixer can load and serve TMCF + CSV files. This is used for a private Data Commons instance. It requires setting the following flags:

```bash
--use_tmcf_csv_data=true
--tmcf_csv_bucket=<bucket-name>
--tmcf_csv_folder=<folder-name>
```
Prerequisites:

- Create a GCS bucket <BUCKET_NAME>.
- Create a folder in the bucket <FOLDER_NAME> to host all the data files (a gsutil sketch follows below).
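A sketch of this setup using `gsutil`; the bucket, folder, and local `data/` paths are placeholders. Note that GCS folders are just object name prefixes, so uploading files under the prefix is enough to create the folder:

```bash
# Create the bucket, then upload the TMCF + CSV files under the folder prefix
gsutil mb gs://<BUCKET_NAME>
gsutil cp data/*.tmcf data/*.csv gs://<BUCKET_NAME>/<FOLDER_NAME>/
```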
Run the following command to start the mixer gRPC server with TMCF + CSV files stored in GCS:

```bash
# In repo root directory
go run --tags sqlite_fts5 cmd/main.go \
  --mixer_project=datcom-mixer-dev-316822 \
  --tmcf_csv_bucket=datcom-mixer-dev-resources \
  --tmcf_csv_folder=test \
  --use_tmcf_csv_data=true \
  --use_bigquery=false \
  --use_base_bt=false \
  --use_branch_bt=false
```
To view possible updates:

```bash
go list -m -u all
```

To update:

```bash
go get -u ./...
go mod tidy
```
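After updating, confirm everything still compiles before re-running the test suite:

```bash
# Rebuild all packages against the updated dependencies
go build ./...
```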
Run the tests:

```bash
./scripts/run_test.sh
```

Update the golden files:

```bash
./scripts/update_golden.sh
```
In the repo root directory, run:

```bash
./test/e2e/run_latency.sh
```
Install Graphviz.
```bash
go test -v -parallel 1 -cpuprofile cpu.prof -memprofile mem.prof XXX_test.go
go tool pprof -png cpu.prof
go tool pprof -png mem.prof
```
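Beyond the PNG output, the same profiles can be explored interactively: `top` lists the hottest functions and `list <func>` shows per-line costs:

```bash
# Open the CPU profile in pprof's interactive mode
go tool pprof cpu.prof
```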
Run the regular `go run cmd/main.go` command that you'd like to profile with the additional flag `--startup_memprof=<output_file_path>`. This will save the memory profile to that path, and you can use `go tool pprof` to analyze it. For example:
```bash
# The gRPC server start command from above, plus the profiling flag
# In repo root directory
go run --tags sqlite_fts5 cmd/main.go \
  --mixer_project=datcom-mixer-staging \
  --bq_dataset=$(head -1 deploy/storage/bigquery.version) \
  --base_bigtable_info="$(cat deploy/storage/base_bigtable_info.yaml)" \
  --schema_path=$PWD/deploy/mapping/ \
  --use_branch_bt=true \
  --startup_memprof=grpc.memprof  # <-- note the additional flag here
```
```bash
# -sample_index=alloc_space reports on all memory allocations, including those
# that have been garbage collected. Use -sample_index=inuse_space for memory
# still in use after garbage collection.
go tool pprof -sample_index=alloc_space -png grpc.memprof
```
Run the regular `go run cmd/main.go` command that you'd like to profile with the additional flag `--httpprof_port=<port, recommended 6060>`. This will run the mixer server with an HTTP handler at that port serving memory and CPU profiles of the running server.
```bash
# The gRPC server start command from above, plus the profiling flag
# In repo root directory
go run cmd/main.go \
  --mixer_project=datcom-mixer-staging \
  --bq_dataset=$(head -1 deploy/storage/bigquery.version) \
  --base_bigtable_info="$(cat deploy/storage/base_bigtable_info.yaml)" \
  --schema_path=$PWD/deploy/mapping/ \
  --use_branch_bt=true \
  --httpprof_port=6060  # <-- note the additional flag here
```
Once this server is ready to serve requests, you can send it requests and use the profile handler to retrieve memory and CPU profiles. `test/http_memprof/http_memprof.go` is a program that automatically sends given gRPC calls and profiles their memory usage. You can adapt this file to your profiling needs or use it as a starting point for an independent script that will automatically run a suite of tests.
```bash
# In another process...
# --grpc_addr: where to find the Mixer server (default shown)
# --prof_addr: where to find the live profile handler (default shown)
go run test/http_memprof/http_memprof.go \
  --grpc_addr=127.0.0.1:12345 \
  --prof_addr=127.0.0.1:6060
```
`go tool pprof` also supports ad-hoc profiling of servers started as described above. To use it, pass the URL at which the HTTP handler can be found as the input file argument; `pprof` will download a profile from the handler and open it in interactive mode to run queries.
```bash
# ?gc=1 triggers a garbage collection run before the memory profile is served.
# See net/http/pprof for the other URLs and profiles available:
# https://pkg.go.dev/net/http/pprof
# With no flags specifying output, pprof goes into interactive mode.
go tool pprof -sample_index=alloc_space 127.0.0.1:6060/debug/pprof/heap?gc=1
```
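The same handler also serves CPU profiles of the running server; for example, to capture a 30-second CPU sample and open it interactively:

```bash
# /debug/pprof/profile samples CPU for the requested duration
go tool pprof 127.0.0.1:6060/debug/pprof/profile?seconds=30
```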
Mixer and ESP are deployed on a local Minikube cluster. To avoid using Endpoints API management and talking to GCP, the local deployment uses a JSON API configuration, which is compiled using API Compiler.
```bash
minikube start
minikube addons enable gcp-auth
eval $(minikube docker-env)
kubectl config use-context minikube
skaffold dev --port-forward -n mixer
```
This exposes the local mixer service at `localhost:8081`.

To verify the server is serving requests:

```bash
curl http://localhost:8081/node/property-labels?dcids=Class
```
After a code edit, the container images are automatically rebuilt and re-deployed to the local cluster.
```bash
./scripts/run_test.sh -d
./scripts/update_golden.sh -d
```
Run the following command to update prod golden files from staging golden files:

```bash
./scripts/update_golden_prod.sh
```