# WIP: refs #45 Add GA with anon ip #46

Open: wants to merge 36 commits into `master`.

36 commits:
* 090ec8b Added a chapter on how we perform job interviews. (#16) (stickgrinder, Jul 30, 2018)
* a1361c8 Better phrasing for an online WIP (#17) (stickgrinder, Aug 28, 2018)
* 393e3d1 The istruction by PaoloM have been ported to the playbook in a proper… (stickgrinder, Aug 28, 2018)
* 9e691d7 Small corrections to the recipe (#18) (stickgrinder, Aug 29, 2018)
* 49bf03b Last corrections to k8s recipe (stickgrinder, Aug 29, 2018)
* ae8f1bc Added information about projects envs availability and troubleshooting (stickgrinder, Sep 25, 2018)
* be1dcc2 Typo corrections (stickgrinder, Sep 25, 2018)
* 2ebc614 Add gcloud docker integration command (paolomainardi, Nov 14, 2018)
* debb091 refs #000: edited macosx command and docker machine name (#20) (enricosato, Nov 23, 2018)
* 2302ae5 HR - Job roles and evaluation framework (beta) (#21) (stickgrinder, Jan 22, 2019)
* 2a6a58e Added administrative role and related ISC (stickgrinder, Jan 31, 2019)
* 4eba88e closes #23: Added a first version for the promo resources page. (stickgrinder, Feb 1, 2019)
* 22419c7 Update README (#26) (francescoben, Feb 6, 2019)
* c7196be Tracking policies page laid down (#29) (stickgrinder, Apr 24, 2019)
* 5da1158 Fix for the manifiesto link (#30) (juanebarreira, Jun 6, 2019)
* 52f3910 Recipe: OpenVPN Ubuntu NetworkManager (#31) (edodusi, Jun 18, 2019)
* 7e97a80 Add instructions to upgrade to openvpn 2.4 (paolomainardi, Jun 18, 2019)
* 4804d4e WIP: employee onboarding process (#32) (stickgrinder, Jul 9, 2019)
* 8faa03e Career advancement link (#34) (francescoben, Jul 9, 2019)
* b3099cd Fixed typos (stickgrinder, Jul 16, 2019)
* 8b9aebf Merge branch 'master' of github.com:sparkfabrik/company-playbook (stickgrinder, Jul 16, 2019)
* a67f56d Added line related to project mail groups (#36) (stickgrinder, Jul 18, 2019)
* b856d50 Configuration is now updated to support Lunr 2.0 in Raneto 0.16.2 (#37) (stickgrinder, Jul 18, 2019)
* b2fd1a4 Content reorganization, improved readability (#38) (stickgrinder, Jul 24, 2019)
* 37a2405 Link to ISCs is now working (thanks Christian) (#39) (stickgrinder, Jul 24, 2019)
* 2217f1d Hiring from abroad (#40) (stickgrinder, Jul 31, 2019)
* 5e414d5 Information addition to hiring from abroad page (stickgrinder, Jul 31, 2019)
* c6ef07e Typo correction (stickgrinder, Jul 31, 2019)
* f1c4d94 the update openvpn must be done as sudo (MarianoFranzese, Jul 15, 2019)
* 2db0698 Merge pull request #35 from sparkfabrik/recipe/openvpn-ubuntu-network… (edodusi, Aug 5, 2019)
* 15e5d2b Use redis image for testing dnsdock: non need to login into sparkfabr… (alessiopiazza, Sep 7, 2019)
* 7ae7344 Added link to the printable document for ISCs (stickgrinder, Nov 21, 2019)
* 9aab30b refs #45: Added simple cookie policy page, since we are adding GA. (Jan 24, 2020)
* 5426690 refs #45: Adding GA code, using the standard RANETO configuration for… (Jan 24, 2020)
* 2ba11b1 refs #45: Removing main pages from the sidebar and adding the cookie … (Jan 24, 2020)
* e80a22a #45: Adding the new templates to the docker image, or they will not b… (Jan 24, 2020)
Dockerfile (4 changes: 3 additions, 1 deletion)

@@ -1,11 +1,13 @@
 FROM sparkfabrik/docker-locke-server:latest
-MAINTAINER Paolo Pustorino <[email protected]>
+LABEL maintainer="Paolo Pustorino <[email protected]>"

 # Remove content folder
 RUN rm -rf content/

 # Copy content and configuration
 COPY ./custom/config.js /srv/locke/config.js
 COPY ./custom/custom-styles.css /srv/locke/themes/spark/public/styles/custom.css
+COPY ./custom/templates/page.html /srv/locke/themes/spark/templates/page.html
+COPY ./custom/templates/layout.html /srv/locke/themes/spark/templates/layout.html
 COPY ./content /srv/locke/content
 COPY ./assets /srv/locke/assets
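
Incidentally, if you want to sanity-check that the new templates really end up in the image (see commit e80a22a), a quick local build is enough. This is just a sketch: the image tag and host port below are hypothetical, while port 80 is the one the Express server listens on:

```text
$ docker build -t playbook-local .
$ docker run --rm -p 8080:80 playbook-local
```
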
README.md (11 changes: 6 additions, 5 deletions)

@@ -1,12 +1,13 @@
-## SparkFabrik dockerized Raneto-based playbook
+## SparkFabrik playbook

-This repository contains our company playbook (and possibly all the knowledge around our company-wide practices and policies), packed with a Raneto container to consult them.
-Ideal destination for this stuff is on a domain like **playbook.sparkfabrik.com**. So far you can `docker-compose up -d` and visit http://playbook.sparkfabrik.loc to enjoy the result.
+This repository contains our [company playbook](https://playbook.sparkfabrik.com) (and possibly all the knowledge around our company-wide practices and policies), packed with a Raneto container to consult them.

 ## Contributions

-So far the project is meant to be internal. All company members can download the project and provide merge-requests towards `master` branch.
-The naming convention for the branches is:
+So far the project is meant to be internal; all company members can clone the project and set up a local environment with the command `docker-compose up -d`.
+After that, a local instance of the playbook will be available at `http://playbook.sparkfabrik.loc`.
+
+To contribute, provide pull requests towards the `master` branch. The naming convention for the branches is:

 * `section/section-slug-title` for new sections (they will hardly be opened by a company member; mostly it will be a matter of pre-made structure, but suggestions are welcome)
 * `content/description-of-the-content` for content contributions of various nature, like typo corrections, adding a new procedure or policy, etc.
Binary file added assets/images/procedures/nm-allusers.png
Binary file added assets/images/procedures/nm-openvpn-select.png
Binary file added assets/images/procedures/nm-select.png
Binary file added assets/images/procedures/nm-vpn-connected.png
content/FAQ/can-i-have.md (2 changes: 1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 Yes. We have budgets for Linux PCs, Apple machines, PHPStorm licenses and some more gizmos that you may need.

 Other than that, if you need a device or license to speed up your work, just ask your team leader and we'll evaluate case by case.
-On standard hardware and software, please read [this section](/our-company/approved-hardware-and-software) to learn more.
+On standard hardware and software, please read [this section](/tools-and-polices/approved-hardware-and-software) to learn more.

 ### Books
content/FAQ/sort (2 changes: 1 addition, 1 deletion)

@@ -1 +1 @@
-50
+60
content/cookie-policy.md (new file, 61 additions)

@@ -0,0 +1,61 @@
/*
Description: Company Playbook cookie policy
*/

Last updated: January 2020

### WHAT ARE COOKIES?
The website **playbook.sparkfabrik.com** uses cookies. Pursuant to the FAQ of the Italian Data Protection Authority, issued in December 2012 and available at www.garanteprivacy.it, cookies are "small text files" – made of letters and numbers – "that the websites visited by the user send to the device of the user (usually to the browser), where they are saved and then sent again to the website on the user's subsequent visits". Cookies are used to simplify the analysis of web traffic, to report when the Site or a specific section of the Site has been visited, to distinguish users in order to provide personalized contents, and to help the Site's administrators improve the browsing experience for users.

Even though cookies are saved on the user's device, they cannot be used to access the information stored on that device. Cookies cannot run any kind of code, carry computer viruses or malware, and are not harmful to the user's device.

Hereinafter you may find additional information on the cookies installed on this site, and all necessary information in order to manage your preferences accordingly.

### USERS' CONSENT
When you access one of the pages of the site **playbook.sparkfabrik.com** for the first time, you will see a short notice explaining how cookies are used on this site. By closing the notice, you give your consent to the use of cookies, pursuant to the modalities described in this Cookie Policy.

The site will remember your choices; therefore, the short notice will not appear if you visit other pages of the site later. In any case, you will always have the right to revoke, fully or in part, your consent.

In case of technical issues related to the consent, please feel free to contact us through the form available on this site, so that we can provide the assistance you need.

### WHAT KIND OF COOKIES WE USE
The use of cookies by the Data Controller of this site, SparkFabrik S.r.l., with registered office in Milano (MI) Via Gustavo Fara 9, 20124, is included in the Privacy Policy of the site, available at the following [link](http://www.sparkfabrik.com).

We use persistent cookies in order to allow the correct functioning of the site and the provision of our services (persistent cookies are stored until the user deletes them manually or until they are automatically removed); we also use so-called session cookies, which are not permanently stored on the user's device and are deleted every time the user closes the browser.

We use different kinds of cookies – with different functionalities – which may be classified as follows:


**THIRD PARTY COOKIES, I.E. COOKIES INSTALLED BY DIFFERENT SUBJECTS ON PLAYBOOK.SPARKFABRIK.COM**


| Type of cookie | What does it do? |
|--- |--- |
| Statistics/analytics | These cookies are used to collect information on the browsing activities of users on our site (consent is not required to install such cookies). Such information is analysed in aggregated form, solely for statistical purposes. These cookies are not strictly necessary, but they are very useful to us and help us improve our services and contents on the basis of the statistics we gather. |



### THIRD PARTY COOKIES
When browsing this website you will receive cookies from third-party websites, which may install cookies on your device on our behalf in order to deliver the services they provide.

Third-party cookies allow us to obtain more complete surveys of user browsing habits. We use these cookies, for example, to obtain statistics on the use of our website and to evaluate your interest in specific contents or services. More detailed information on these cookies is available in this document and on the websites of the third parties installing the cookie.

### STATISTICAL COOKIES (THIRD PARTY)

#### GOOGLE ANALYTICS
Google Analytics is a web traffic analysis service provided by Google, Inc. that delivers analysis and statistics on the use of the website.

Your browser will transmit to Google Inc. all data collected by the cookies installed by Google Analytics. Google Inc. can use such data in order to tailor ads shown to the users, on the basis of their interests.

If you wish to disable the statistical cookies, thus preventing Google Analytics from collecting data on your navigation, you can download the Google Analytics opt-out browser add-on here: https://tools.google.com/dlpage/gaoptout.

### HOW TO SET YOUR DEVICE
If you disagree with the installation of cookies on your device, you may set your browser to refuse cookies or, alternatively, not use this site. If you disable cookies, however, the site or parts of it may not work properly.

If you wish to change how cookies are used, block them, or delete the cookies already downloaded to your device, you can do so by changing your browser's settings.

Most browsers allow the user to accept or delete all cookies, or to accept only some of them (e.g. cookies from certain websites).

The way cookie preferences are managed may change depending on the browser you are using. Further information on how to set up your browser may be found at the following link: www.aboutcookies.org, or by visiting the "Help" section of your browser.

Additional information on cookies and on how to change your preferences on third party cookies can be found at the following link: www.youronlinechoices.com.
content/guides/access-k8s-sparkfabrik-cluster.md (new file, 248 additions)

@@ -0,0 +1,248 @@
## Introduction

Since late 2016, SparkFabrik's internal services (Gitlab, CI/CD pipelines, SparkBoard, etc.) have been running in a Kubernetes cluster hosted on GKE/GCP.

This means that all intermediate environments other than local and production (so integrations, branch builds, epic builds, etc.) run in pods on an elastic Google Kubernetes Engine cluster. The following guide will help you configure your local environment so that you can access services inside pods, open shells into them, read relevant logs and - ultimately - devops all the things! :)

## Step 1: Authentication to Google Cloud

As said, the K8s cluster runs on Google Cloud infrastructure. To access it, we first need to authenticate on GCP.
Rejoice! Your `sparkfabrik.com` account is enough to authenticate, but you'll need to open a terminal and [install the `gcloud` CLI tool](https://cloud.google.com/sdk/install). Follow the link to get `gcloud` running on your OS.

Once done, you can authenticate by running:

```text
$ gcloud auth login
```

Provide your `sparkfabrik.com` credentials.

Now configure the gcloud Docker integration by running:

```text
$ gcloud auth configure-docker
```
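
If you want to double-check which account and project are active before moving on, `gcloud` can tell you:

```text
$ gcloud auth list
$ gcloud config list
```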

## Step 2: Accessing the K8s cluster

Access to the cluster and the pods therein happens through the K8s CLI tool, `kubectl`.

On macOS, the `gcloud` command has all we need to make it work:

```text
$ gcloud components install kubectl
```

While Ubuntu users can enjoy `apt`:

```text
$ sudo apt install kubectl
```
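
Either way, a quick version check will confirm that the client is installed:

```text
$ kubectl version --client
```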

Once `kubectl` is installed, the `gcloud` command will allow us to access the GKE cluster.
`gcloud` CLI manages so many GCP services and areas that there are commands specific to each one. To tame the complexity, all commands are grouped and subgrouped.

Right now, the `container` group is what we need: it contains groups of commands by which we can manage GKE aspects, like clusters, node-pools, Container Registry images, and so on.

We are going to use a command in the `clusters` subgroup of the `container` group to gain access to the cluster. That command is `get-credentials` which fetches credentials for already running clusters.

Now, the `get-credentials` command takes a single parameter, the **cluster name**; in our case it is `spark-op-services`. In addition, there is a mandatory flag, `--zone`, that specifies the region and the datacenter zone inside the region (namely, where the cluster is physically running).

Last but not least, there is a global flag (not specific to the `get-credentials` command), which is `--project`. Projects in GCP are similar to realms, not to be confused with *K8s namespaces*; quoting the GCP docs:

> [...] projects form the basis for creating, enabling, and using all GCP services including managing APIs, enabling billing, adding and removing collaborators, and managing permissions for GCP resources.

So let's specify the correct project, `spark-int-cloud-services`, which is the project that holds all the production services in SparkFabrik.

**Beware**: CI environments for customers' projects are not customers' assets; they are SparkFabrik assets, paid for and managed by us. That's why accessing these environments involves our production project!

After this long explanation, the following command should be clear:

```text
$ gcloud container clusters get-credentials spark-op-services --zone europe-west1-b --project spark-int-cloud-services
```
A laconic message should inform you that *kubeconfig generated an entry for spark-op-services*. No frills, but you can pat yourself on the shoulder. You're done.
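
Behind the scenes, `get-credentials` wrote a new context into your kubeconfig file; you can confirm that it is now the active one with:

```text
$ kubectl config current-context
```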

## Step 3: Fetching info from clusters

OK, we gained access to the cluster. Mind that the access is read-only, but you have execution permissions (namely, you can run `kubectl exec`), so you can enter running pods.

Let's test that our access actually works. Run

```text
$ kubectl cluster-info
```

and you should get a response along the lines of

```text
Kubernetes master is running at https://<IP address>
GLBCDefaultBackend is running at https://<IP address>/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://<IP address>/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://<IP address>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://<IP address>/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
```

If not, check that you followed all the previous steps correctly.
Mind though that, depending on your account's permissions, the output of this command may differ: you may see only a subset of the information and/or a specific error message. Keep this in mind before banging your head against the wall.
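
If you suspect a permissions problem, `kubectl` can also check a single capability explicitly. For example, to check whether you may list pods in a given namespace (the namespace here is just an example):

```text
$ kubectl auth can-i list pods -n spark
```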

## Step 4: Namespaces

We mentioned projects, which are GCP's realms to address accountability, ACLs and other "administrative" aspects related to GCP services.

Projects are never to be confused with **namespaces**. The concept of namespace here is the typical Kubernetes one: K8s namespaces allow segmenting the same "physical" cluster into reserved spaces, as if they were separate clusters.

This makes sure critical workloads won't compete for resources or hinder each other in case of malfunction, **at a cluster level**.

We use this feature to make sure each Gitlab project (again, not to be confused with GCP projects: we mean each customer or internal product) that needs build environments in Gitlab lives in its own namespace.

Let's take a look at all namespaces available in the cluster:

```text
$ kubectl get ns
```

Here is a dummy response (since this is a public playbook):

```text
NAME STATUS AGE
bunnies Active 293d
bunnies-demo Active 49d
default Active 1y
gizmo-website-d6 Active 99d
gizmo-website-d8 Active 4d
gitlab Active 345d
gitlab-test-envs-342 Active 23d
ingress-nginx Active 5d
kube-lego Active 345d
kube-public Active 1y
kube-system Active 1y
...
spark Active 345d
sparkfabrik-website-292 Active 245d
...
acme-website-304 Active 126d
acme-website-master-stage Active 36d
acme-website-subsid-stage Active 37d
acme-website-master-dev Active 121d
```

Some of the preceding namespaces are real. As you can see, names are pretty self-explanatory (at least those related to projects). But if you are in doubt, you can check Gitlab to see which namespace is in use by a specific Gitlab project.
Follow `Settings -> Integrations -> Kubernetes -> Namespace` in the project page to make sure (proper permissions may be necessary; ask your team leader if you can't access that section).
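
With dozens of namespaces in the cluster, a plain `grep` on the list is often the fastest lookup (the project name below is hypothetical):

```text
$ kubectl get ns | grep acme
```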

## Step 5: Pods

OK, so far we have this hierarchy:

```text
GCP Project foo
└── Cluster bar
├── Namespace foo-bar-alpha
├── Namespace foo-bar-bravo
└── Namespace foo-bar-charlie
```

Now, each namespace can contain pods. For simplicity's sake, think of pods as *Docker containers with superpowers*.

Let's list all pods in a specific namespace, say `spark`.

```text
$ kubectl -n spark get pod
```
Here is the result:

```text
NAME READY STATUS RESTARTS AGE
artifacts-ssh-server-7d9b9db67b-wg4hh 1/1 Running 0 5d
cron-3028794900-znhs8 1/1 Running 0 5d
dashboard-develop-499waf-849b7c95f9-4qxmr 1/1 Running 0 5d
playbook-locke-2261095262-8x8p2 1/1 Running 0 5d
```

The components of this command are:

* `kubectl` : the client - duh
* `-n spark` : use the `spark` namespace
* `get pod` : list all pods
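
If you need more detail, such as the node each pod is scheduled on and its internal IP, add the `-o wide` flag:

```text
$ kubectl -n spark get pod -o wide
```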

If we want to view the logs of a specific pod (like issuing `docker logs -f` on a normal container), try

```text
$ kubectl -n spark logs -f <pod-name>
```

for example

```text
$ kubectl -n spark logs -f playbook-locke-2261095262-8x8p2

npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm info lifecycle [email protected]~prestart: [email protected]
npm info lifecycle [email protected]~start: [email protected]

> [email protected] start /srv/locke
> node server.js

Express HTTP server listening on port 80
GET /robots.txt 404 58.932 ms - 2387
GET /FAQ/who-to-talk-to-for 200 68.338 ms - 11073
GET /guides/an-introduction-to-docker 200 24.580 ms
```

Again, let's see what the command does:

* `kubectl` : ok, ok...
* `-n spark` : use the `spark` namespace
* `logs -f` : spit out the logs and follow the output (like `tail -f`, where `-f` stands for *follow*)
* `playbook-locke-2261095262-8x8p2` : the pod name

So, to sum things up: since each pod can be seen as a container, and each container usually runs a single service (as per best practice), with this Swiss-army-knife command template:

```text
kubectl -n <namespace name> logs [-f] <pod name>
```

you can see the logs of a specific service, for a specific project.
As an (almost) real-life example, *see the Apache logs for the ACME Drupal 8 website, develop environment* can translate to

```text
kubectl -n acme-dev logs [-f] drupal
```
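
In real life the pod name carries a generated suffix, so you would typically list the pods first and copy the full name from there (the names below are made up):

```text
$ kubectl -n acme-dev get pod
$ kubectl -n acme-dev logs -f drupal-6b9f7d8c4-x2x9z
```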

## Step 6: Accessing pods command line

Now that we have logs we can debug 99% of the problems like a boss. Right?

Not really... access to a shell can be a real boon, e.g. to run live tests and assess the problem (or a solution) quickly.

To gain access to the shell we'll use the aforementioned `exec` command of the `kubectl` client. Let's try:

```text
$ kubectl -n spark exec -it playbook-locke-2261095262-8x8p2 -- /bin/bash
```

Ta-daaan. You should be logged into the container as root, as simple as that.

Dissecting the command, we find:

* `kubectl` : enough of this, right?
* `-n spark` : again, use `spark` namespace
* `exec` : this works much like in Docker
* `-it` : the same Docker flags, meaning `interactive` and `tty`
* `--` : enforces what follows as a positional parameter (shell stuff actually, not pertaining to kubectl)
* `/bin/bash` : the shell to be executed (see below)

**Gotcha**: please remember that not all containers have bash. Some (many, actually) are based on Alpine Linux or other distros, so the available shell may vary.
Alpine, for example, sports `ash`, so you may have to issue

```text
$ kubectl -n acme exec -it acme-ash-test -- /bin/ash
```
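
When in doubt, plain `sh` is present in virtually every image, so it makes for a safe first attempt:

```text
$ kubectl -n acme exec -it acme-ash-test -- /bin/sh
```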

## Conclusions

This is a small recipe to get you started with our production K8s environments. From here on, it's a matter of experience, reading docs, and a bit of work on your part to increase your devops skills.

Roll up your sleeves and enjoy!