Merge development (#89)
* Separate image URLs into production and development

* Add parameter to settings for image URL and policy

* Remove Python application emulator

* Target Go 1.20 and upgrade all packages

This shouldn't change anything since k8s.io/apimachinery already requires Go 1.20

* Move the application model into a separate directory

* Add placeholder Go emulator

* Move Dockerfile to the repo root
This is necessary because the emulator needs to access both emulator/ and model/

* Read config in emulator (test)

* Set GOMAXPROCS to the number of processes in configmap
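
A minimal sketch of what this could look like, assuming the configmap exposes the process count through an environment variable (the variable name is an assumption, not the repository's actual key):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
)

func main() {
	// PROCESSES is an assumed variable name injected from the configmap.
	if n, err := strconv.Atoi(os.Getenv("PROCESSES")); err == nil && n > 0 {
		runtime.GOMAXPROCS(n)
	}
	// GOMAXPROCS(0) reports the current limit without changing it.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```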

* Shebang should always be on the first line

* Add script to redeploy emulator in dev environment

* Move HTTP server into goroutine

* Use encoding/json instead of sending string

* Add notFoundHandler

* Add endpointHandler

* Clean up emulator code

* Add testing config map for running emulator outside of K8s

* Change JSON response to better match old Python emulator

* Use struct instead of creating functions

* Export response structs

* Add CPU stressor

* Move CPU time function into util

* Add basic logging for endpoint calls

* Add ExecParallel for stressors
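
A rough sketch of how ExecParallel might fan stressors out over goroutines and wait for them all; the signature is an assumption, not the actual API:

```go
package stressors

import "sync"

// ExecParallel runs each task in its own goroutine and blocks until all of
// them have finished. The func() parameter type is illustrative only.
func ExecParallel(tasks ...func()) {
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1)
		go func(run func()) {
			defer wg.Done()
			run()
		}(task)
	}
	wg.Wait()
}
```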

* Read SERVICE_NAME from env

* Remove unused istio.go and redis.go

* Use Go workspace to separate modules

* Add CPU task response

* Add network task function

* Refactor stressors to use interface
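
A sketch of the kind of interface such a refactor could introduce, so each stressor (CPU or network) plugs into the same execution path; the method set is an assumption:

```go
package stressors

// Stressor is an illustrative interface; the real repository may use
// different method names and parameters.
type Stressor interface {
	// Name identifies the task in logs and responses.
	Name() string
	// ExecTask runs the task and returns the data to embed in the response.
	ExecTask() (any, error)
}
```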

* Indent JSON response

* Forward the HTTP request to NetworkTask

* Propagate headers from inbound to outbound

* Move payload generation into separate function

* Format time when logging

* Add a restful.POST function for forwarding requests

* Restructure code to avoid import cycles

* Change called service port type to int

* Add function for forwarding requests to endpoints

* Set the slice capacity on init

* Call other endpoints in NetworkTask

* Combine network responses
I don't like this code...

* Omit port if zero
This seems to match what Python does
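
With the standard library's JSON tags this falls out naturally; a sketch with assumed field names:

```go
package model

// CalledService is illustrative; with omitempty the port key disappears from
// the marshalled JSON whenever the value is zero.
type CalledService struct {
	Service string `json:"service"`
	Port    int    `json:"port,omitempty"`
}
```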

* Simplify code for network complexity

* Fix for NetworkTask == nil

* Don't append services and statuses twice

* Format status properly

* Fix duplicated service name in network complexity

* Embed task responses in RESTResponse

* Add ForwardParallel function

* Move threads parameter into CPU complexity

* Support CPU stressor on multiple goroutines

* Rename response.go to api.go
Since it contains requests now

* Log CPU tasks if logging=true

* Move documentation from wiki into repo

* Move documentation from wiki into repo

* Worker image should not be built in model/ anymore

* Worker image should not be built in model/ anymore

* Add logging for network tasks

* Also remove threads from model.Service

* Remove the workaround for null network complexity
In Go, a nil slice is an empty slice, so it works without panicking
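
For illustration, ranging over a nil slice simply executes zero iterations, so no nil check is needed:

```go
package main

import "fmt"

func main() {
	var services []string // nil: no network complexity in the config
	for _, s := range services {
		fmt.Println(s) // never executed, and never panics
	}
	fmt.Println(len(services)) // prints 0
}
```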

* Ensure CPU complexity uses at least one thread in generator

* Allow processes = 0 in service

* Panic if input contains unknown fields
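
encoding/json can enforce this directly; a minimal sketch, with the config type and the panic purely illustrative:

```go
package config

import (
	"encoding/json"
	"io"
)

// InputConfig stands in for the real configuration model.
type InputConfig struct {
	Settings map[string]any `json:"settings"`
}

// parse rejects configurations that contain keys the model does not know about.
func parse(r io.Reader) InputConfig {
	var cfg InputConfig
	dec := json.NewDecoder(r)
	dec.DisallowUnknownFields()
	if err := dec.Decode(&cfg); err != nil {
		panic(err) // surface typos in the input immediately
	}
	return cfg
}
```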

* Show protocol in network task log

* Set execution mode and forward requests in validation.go

* Update simple example to work with the new emulator

* Generate a new complex example

* Log configuration on startup

* Add documentation for adding a new stressor

* Add stressor documentation to generator-parameters.md

* Fix issue with execution time rounding

* Remove lock_threads from cpu complexity
It always needs to be on or the stressor doesn't work properly

* Fix image link in complex example

* Add comment to clarify that HTTP server should always be running

* Launch gRPC server on demand

* Convert requests and responses to protobufs
gRPC works with protobufs so this makes it easier to share types between the HTTP and gRPC server

* Move protobuf structs to generated package

* Add a generated placeholder gRPC service to the repo

* Set AllowPartial to true
Should be faster
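
A sketch of where that flag lives in protojson; AllowPartial skips the check for missing required fields, which saves a verification pass over the message:

```go
package restful

import (
	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"
)

// decode unmarshals a JSON request body into a protobuf message without the
// extra required-field verification pass.
func decode(data []byte, msg proto.Message) error {
	opts := protojson.UnmarshalOptions{AllowPartial: true}
	return opts.Unmarshal(data, msg)
}
```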

* Unconditionally use HTTP server as readiness probe

* Compile the service at runtime
This is necessary once gRPC support is added

* Set run.sh executable bit

* Clean caches after build

* More edits to try to save space

* Add placeholder file for registering services

* Remove status from response
HTTP provides 200 OK, 404 Not Found, etc., and gRPC provides OK, INVALID_ARGUMENT, etc.

* Start gRPC server

* Update stressors.md for new API

* Start generating emulator code

* Add function for converting K8s name to Go name

* Use goname to generate service.go

* Add logging to stressors.md

* Generate gRPC code in run.sh

* Implement gRPC health service
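
A minimal sketch of the standard health service (the real implementation may differ); grpc_health_probe and Kubernetes probes call Check against it:

```go
package grpcserver

import (
	"context"

	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// healthServer always reports SERVING, since each emulator container hosts
// exactly one service.
type healthServer struct {
	healthpb.UnimplementedHealthServer
}

func (h *healthServer) Check(ctx context.Context, req *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) {
	return &healthpb.HealthCheckResponse{Status: healthpb.HealthCheckResponse_SERVING}, nil
}
```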

* Generate service name when Check is called
This is faster since we only have one service anyway

* Do not update modules in run.sh
The emulator is supposed to use the modules in the base image

* Download protoc in run.sh

* Fix protoc not finding packages

* Use goname in service.tmpl

* Fix segfault in generator

* Fix go build not working

* Expose gRPC on port 81

* Add function CallGeneratedEndpoint

* Enable gRPC reflection service
For grpcurl, etc
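
For reference, enabling reflection is a single call on the server; a sketch:

```go
package grpcserver

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

// newServer registers the reflection service so tools like grpcurl can list
// and call the generated endpoints without a local copy of api.proto.
func newServer() *grpc.Server {
	s := grpc.NewServer()
	reflection.Register(s)
	return s
}
```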

* Only generate protobufs for gRPC services

* Install grpc_health_probe in base image

* Check for empty response data

* Also check ResponseData.Tasks != nil

* Specify protocol for microservice instead of endpoint
As discussed with Aleksandra, running two servers on two ports is complicated and unrealistic for a microservice

* Forward requests to gRPC endpoints

* Add protocols to network task response

* Update impl.tmpl to match new definition

* Print protocols in LogNetworkTask

* Fix parallel gRPC request

* Update stressors.md to match current implementation

* Create Dockerfile for application-generator

* Write generated files to k8s/generated

* Build Docker image in generator

* Write Docker output to stdout/stderr

* Fix error in Docker build

* Do not download modules for model
Since emulator depends on model

* Delete placeholder files from base layer

* Copy grpc_health_probe into final image

* Fix grpc_health_probe path

* Improve endpoint response format to be less ambiguous

* Add build ID to Docker image and config to make sure they match

* Warn instead of panicking on build ID mismatch

* Change all errors to lowercase
https://github.com/golang/go/wiki/CodeReviewComments#error-strings
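
The convention from the linked guideline, shown with an illustrative helper:

```go
package util

import "fmt"

// findEndpoint illustrates the convention: error strings start lowercase and
// have no trailing punctuation, because callers usually wrap them with context.
func findEndpoint(name string, endpoints map[string]int) (int, error) {
	port, ok := endpoints[name]
	if !ok {
		return 0, fmt.Errorf("endpoint %q does not exist", name)
	}
	return port, nil
}
```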

* Update documentation to match new response format

* Fix inconsistent endpoint keys

* Fix buildID not being set in emulator

* Remove most TODOs

* Fix buildID not being set in Dockerfile

* Move generated files out of k8s/
Fixes warning when running deploy.sh

* Remove the development config map

* Document api.proto

* Use strcase package

* Ignore go.work.sum
This file includes other modules installed by the user, such as gopls

* Separate generated files into client and server
Prevents import cycle error

* Remove old run.sh script

* Ensure that gRPC status codes are returned on error

* Return INVALID_ARGUMENT if endpoint doesn't exist

* Fix api.proto comment

* Update documentation for Docker base and layered images

* Add DefaultProtocol back into generator

* Split CreateK8sYaml into two functions

* Name docker images after hostname
This ensures they won't be pulled down from the Docker registry

* Remove buildID and give all images unique tag

* Delete old hydragen-emulator images

* Never cache results from image build

* Add auto-deployment of images for Kind

* Add break to deploy.sh

* Add containerd-push-image-to-clusters.sh

* Pass sudo password to remote script

* Fix containerd-push-image-to-clusters.sh

* Remove old code from deploy.sh

* Fix ssh commands in containerd-push-image-to-clusters.sh

* Document helper scripts for kind and containerd

* Run goimports after generating gRPC code

* Ensure TrafficForwardRatio is positive

* Implement TrafficForwardRatio

* Don't allow traffic forward ratio under 1
Otherwise the default is 0

* Make sure multiple responses from endpoint can be returned

* Change container name back to "app"

* Add SSH and sudo password options to containerd-push-image-to-clusters.sh

* Set GOMEMLIMIT in k8s manifest

* Add debug output to containerd script

* Only use contexts in containerd script

* Don't reference loop variable in HTTP server
https://github.com/golang/go/wiki/CommonMistakes#using-reference-to-loop-iterator-variable
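
The gotcha from the linked page, sketched with assumed names: before Go 1.22 the range variable is reused across iterations, so goroutines must capture a per-iteration copy:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	endpoints := []string{"a", "b", "c"}
	var wg sync.WaitGroup
	for _, endpoint := range endpoints {
		endpoint := endpoint // per-iteration copy; the loop variable is reused before Go 1.22
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("serving", endpoint)
		}()
	}
	wg.Wait()
}
```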

* Set default emulator base image to busybox
Provides a shell and utilities without increasing image size by much

* Rename "base image" to "source image"

* Add a base_image option in config

* Set BASEIMAGE in Dockerfile

* Document base_image parameter

* Automatically determine port and protocol of called service

* Add an option to use the development image with the random preset

* Fix port and protocol not applying in network complexity

* Update simple example for new Go emulator

* Update complex example by generating new application

---------

Co-authored-by: Hannes Mann <[email protected]>
alekodu and hannesmann authored Sep 1, 2023
1 parent f2d2da3 commit 5ace3e8
Showing 123 changed files with 5,154 additions and 3,911 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -1 +1,2 @@
.idea/
.idea/
go.work.sum
42 changes: 42 additions & 0 deletions Dockerfile
@@ -0,0 +1,42 @@
#
# Copyright 2023 Ericsson AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# This is the source image for the application emulator
# It contains the source code, Go compiler and protobuf compiler
# The generator will compile a unique layered image for the current configuration

FROM golang:1.20

# Install protoc
RUN apt update && apt install -y protobuf-compiler
RUN go install google.golang.org/protobuf/cmd/[email protected]
RUN go install google.golang.org/grpc/cmd/[email protected]

# Copy relevant parts of the source tree to the new source dir
COPY emulator /usr/src/emulator/emulator
COPY model /usr/src/emulator/model
# Delete placeholder files
RUN rm -Rf /usr/src/emulator/emulator/src/generated

WORKDIR /usr/src/emulator

# Create Go workspace
RUN go work init
RUN go work use ./emulator
RUN go work use ./model

# Download as many modules as possible to be shared between compilations
RUN cd emulator && go mod download -x
13 changes: 9 additions & 4 deletions README.md
@@ -1,4 +1,5 @@
# HydraGen

HydraGen is a tool that allows generating a wide range of microservice benchmarks in a systematic and flexible way. This tool facilitates evaluating performance and scalability of various resource management strategies entailing microservice-based architectures hosted in cloud environments. Currently, HydraGen can generate benchmarks emulating web-based applications with HTTP or gRPC servers.

## License and copyright
@@ -8,16 +9,20 @@ HydraGen is a free software. You can use, distribute and/or modify it under the
HydraGen's development is driven by both Ericsson Research and Umea University.

## Papers and scientific reports

The design and evaluation of HydraGen are being published at IEEE Cloud 2023.
> M. R. Saleh Sedghpour, A. Obeso Duque, X. Cai, B. Skubic, E. Elmroth, C. Klein and J. Tordsson, "HydraGen: A Microservice Benchmark Generator," in IEEE International Conference on Cloud Computing, vol. X, no. Y, pp. Z, Day Month 2023, doi: N.
## Want to use our work?
Check our documentation in our [Wiki](https://github.com/EricssonResearch/cloud-native-app-simulator/wiki) pages.
## Want to use our work?

Check our documentation [here](docs/home.md).
If you use our tool, please cite our work.

## Want to contribute?

We welcome new contributions to our project via pull requests.
If you are interested in contribution, please check the list of open issues and visit the [Development Environment](https://github.com/EricssonResearch/cloud-native-app-simulator/wiki/Development-Environment) section under our wiki pages to learn about how to setup a development environment.
If you are interested in contributing, please check the list of open issues and visit the [Development Environment](docs/development-environment.md) section in our documentation to learn how to set up a development environment.

## Acknowledgements
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
28 changes: 28 additions & 0 deletions community/containerd-import-image.sh
@@ -0,0 +1,28 @@
#!/bin/bash
#
# Copyright 2023 Ericsson AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script is run over SSH and caches the image being sent over the network in containerd

password="$1"

if [[ -z "$password" ]]; then
  ctr -n=k8s.io images import -
else
  # First authorize (timeout is usually 15 minutes)
  echo "$password" | sudo -S -v -p ""
  # Now read from ssh stdin
  sudo ctr -n=k8s.io images import -
fi
113 changes: 113 additions & 0 deletions community/containerd-push-image-to-clusters.sh
@@ -0,0 +1,113 @@
#!/bin/bash
#
# Copyright 2023 Ericsson AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

has_sudo_password=false
sudo_password=""
has_ssh_password=false
ssh_password=""

while getopts ":s:p:n" option; do
  case "${option}" in
    s)
      has_sudo_password=true
      sudo_password="$OPTARG"
      ;;
    p)
      has_ssh_password=true
      ssh_password="$OPTARG"
      ;;
    n)
      has_sudo_password=true
      sudo_password=""
      ;;
    *)
      echo "Usage: $0 -s <sudo password> -p <ssh password> -n"
      echo "Parameters:"
      echo " -s: Set sudo password to argument"
      echo " -p: Set ssh password to argument (if sshpass is installed)"
      echo " -n: Skip sudo password prompt"
      exit 0
      ;;
  esac
done

name="$(hostname -f)/hydragen-emulator"
image="$(docker images $name --format '{{.Repository}}:{{.Tag}}')"

cd "$(git rev-parse --show-toplevel)/generator/k8s"
contexts="$(echo *)"
#contexts="$(kubectl config get-contexts --output=name | tr '\n' ' ')"

echo "Contexts: $contexts"
echo "Trying to discover all nodes that need an updated image..."
echo ""

nodes=()

# Try every context
# TODO: Does not check for the "node" property in configmap
for ctx in $contexts; do
  echo "Trying to access context $ctx"
  cmd="kubectl get nodes -o custom-columns=:metadata.name,:spec.taints[].effect --no-headers --context $ctx"
  output="$($cmd 2>&1)"
  status=$?
  if [[ $status == 0 ]]; then
    echo " Kubectl returned nodes: $(echo $output | tr '\n' ' ')"
    ctxnodes="$(echo "$output" | grep -v 'NoSchedule' | cut -d ' ' -f 1 | tr '\n' ' ')"
    echo " Nodes that can have pods scheduled: $ctxnodes"
    for node in $ctxnodes; do
      nodes+=("$ctx/$node")
    done
  else
    echo " Command failed (exit status $status): $(echo $output | tr '\n' ' ')"
  fi
  echo ""
done

echo "Nodes: ${nodes[@]}"

if [[ $has_sudo_password == false ]]; then
  read -s -p "Sudo password (leave blank if '$(whoami)' has administrative access to containerd): " sudo_password
  if [[ -z "$sudo_password" ]]; then
    echo -n "(not using sudo)"
  fi
  echo ""
fi

for node in "${nodes[@]}"; do
  IFS="/" read -r ctx name <<< $node
  # https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  jsonpath="{.status.addresses[?(@.type=='InternalIP')].address}"
  ip="$(kubectl get nodes $name --context $ctx -o jsonpath=$jsonpath)"
  file="/tmp/containerd-import-image.sh"

  # Start ssh in background
  if [[ $has_ssh_password == true ]]; then
    sshpass -p "$ssh_password" ssh -M -S /tmp/containerd-import-ssh-socket -fnNT "$(whoami)@$ip"
  else
    ssh -M -S /tmp/containerd-import-ssh-socket -fnNT "$(whoami)@$ip"
  fi

  # Copy script to remote machine
  scp -o "ControlPath=/tmp/containerd-import-ssh-socket" ../../community/containerd-import-image.sh "$(whoami)@$ip:/tmp/containerd-import-image.sh"
  # Execute script with archive coming from stdin
  ssh -S /tmp/containerd-import-ssh-socket "$(whoami)@$ip" "chmod +x /tmp/containerd-import-image.sh"
  # Add space at the start to prevent password from being saved in bash history
  cat ../generated/hydragen-emulator.tar | ssh -S /tmp/containerd-import-ssh-socket -C "$(whoami)@$ip" " /tmp/containerd-import-image.sh "$sudo_password""
  ssh -S /tmp/containerd-import-ssh-socket "$(whoami)@$ip" "rm /tmp/containerd-import-image.sh"
  # Close ssh session
  ssh -S /tmp/containerd-import-ssh-socket -O exit "$(whoami)@$ip"
done
@@ -1,3 +1,4 @@
#!/bin/bash
#
# Copyright 2021 Ericsson AB
#
@@ -13,16 +14,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
#!/bin/bash

DEFAULT_NUM=2
if [ -z "$1" ]; then
NUM=$DEFAULT_NUM
NUM=$DEFAULT_NUM
else
NUM=$1
NUM=$1
fi

# Push the image to the all clusters
# Push the image to all clusters
for i in $(seq ${NUM}); do
kind load docker-image app-demo --name=cluster-${i}
name="$(hostname -f)/hydragen-emulator"
kind load docker-image "$(docker images $name --format '{{.Repository}}:{{.Tag}}')" --name=cluster-${i}
done
8 changes: 4 additions & 4 deletions community/kind-setup-clusters.sh
@@ -1,3 +1,4 @@
#!/bin/bash
#
# Copyright 2021 Ericsson AB
#
@@ -13,14 +14,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
#!/bin/bash

DEFAULT_NUM=2
DEFAULT_CONFIG="kind-cluster-3-nodes.yaml"
if [ -z "$1" ]; then
NUM=$DEFAULT_NUM
NUM=$DEFAULT_NUM
else
NUM=$1
NUM=$1
fi

if [ -z "$2" ]; then
@@ -34,5 +34,5 @@ fi
# Create the kind multi-node clusters based on the given config
for i in $(seq ${NUM}); do
kind create cluster --name cluster-${i} --config $CONFIG
kind load docker-image app-demo --name=cluster-${i}
kind load docker-image hydragen-emulator --name=cluster-${i}
done
76 changes: 76 additions & 0 deletions docs/development-environment.md
@@ -0,0 +1,76 @@
# Development Environment

This document helps you get started developing code for HydraGen.
If you follow this guide and find a problem, please take a few minutes to update this page.

HydraGen's build system is designed to run with minimal dependencies:

- kind
- docker
- git

These dependencies need to be set up before building and running the code.

- [Setting up Docker](#setting-up-docker)
- [Setting up Kind](#setting-up-kind)
- [Build the source image](#build-the-source-image)
- [Pushing the image to a cluster](#pushing-the-image-to-a-cluster)
- [Logging](#logging)

## Setting up Docker

To use Docker to build the required images you will need:

- **docker tools:** To download and install Docker, follow [these instructions](https://docs.docker.com/install/).

## Setting up Kind

To be able to run the *hydragen-emulator* container on a sample cluster, we use
[Kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- **Installation:** To download and install Kind, follow [these instructions](https://kind.sigs.k8s.io/docs/user/quick-start/).
- **Set up the clusters:** To set up the clusters, simply run the [`kind-setup-clusters.sh`](kind-setup-clusters.sh)
  script.

```bash
#
# This will create multiple Kind clusters (default 2)
# Each cluster is named with a numeric suffix (i.e. cluster-1, cluster-2, etc.)
# Each of the created clusters has 3 worker nodes and one control plane by default.
#

cd community
./kind-setup-clusters.sh [number of clusters (default 2)] [config of each cluster (default kind-cluster-3-nodes.yaml)]
```

## Build the source image

The source image, containing code and compilers, needs to be built from the local source code.

```bash
docker build -t "$(hostname -f)/hydragen-base" .
```

By default, HydraGen will use a release image from GitHub Packages as the base when building your image.
To use the development source image instead, set this option in the input JSON configuration:

```json
{
  ...
  "settings": {
    "development": true
  },
  ...
}
```

## Pushing the image to a cluster

The generated image needs to be pushed to all clusters after you have run `generator.sh`.

```bash
cd community
./push-image-to-clusters.sh [number of clusters (default 2)]
```

## Logging

To enable logging, simply follow the instructions [here](logging.md).
