Much Ado About Nothing by William Shakespeare
Ado is a collection of services which are helpful when setting up a collecting society.
The ado setup creates and maintains Docker <http://docs.docker.com> containers for development as well as production use. Docker-compose <https://docs.docker.com/compose/> - vagrant for docker containers - is used to create and configure the needed Docker container setups:

- Postgres database (db): a database with the name c3s is created automatically, demo data included
- Web portal service including Tryton and Pyramid (portal): all Tryton and Pyramid dependencies installed
- Tryton service (tryton)
You need a Linux or OS X system, docker <https://docs.docker.com/engine/installation/>, docker-compose <https://docs.docker.com/compose/install/> and git <http://git-scm.com/downloads>.
Clone this repository into your working space:
$ cd WORKING/SPACE
$ git clone https://github.com/C3S/c3s.ado.git
All setup and maintenance tasks are done in the root path of the c3s.ado/ repository:
$ cd c3s.ado
Choose the environment to build:
For the production environment switch to the master branch:

$ git checkout master

For the development environment switch to the develop branch:

$ git checkout develop
Update the environment, clone/pull development repositories:
$ ./update
Build docker containers:
$ docker-compose build
The initial build of the containers will take some time. Later builds will take less time.
Adjust environment files for containers, if necessary. Sane defaults for a development setup are given:
./portal.env
./api.env
Change the password for the admin user in
ado/etc/trytonpassfile
Start containers:
$ docker-compose up
This starts all ado service containers.
The number of portal services is designed to be scalable. Because of this it is not possible to hard-code the external port number of a service, so all services use random external ports on the host system. The tool nginx-proxy <https://github.com/jwilder/nginx-proxy> is used as a reverse proxy and load balancer for the portal services and listens on host port 81.
- To connect to the portal, point your browser to: http://0.0.0.0.xip.io:81
- To connect to the api, point your browser to: http://api.0.0.0.0.xip.io:81
- To connect to a specific instance of the portal service, point your browser to: http://localhost:<random external port on host system>/login
To connect to trytond you can use one of several Tryton client applications or APIs. For back-office use of the application the Gtk2-based Tryton client is recommended.
Install the client application with the name tryton or tryton-client in version 3.4.x from your Linux distribution. You can also use the source, OS X, or Windows packages or binaries found here: <http://www.tryton.org/download.html>
On the host system connect to:
server: localhost
port: 8000
database: c3s
user: admin
password: admin
Note
The Tryton server and the client are required to be on the same version branch (currently 3.4.x).
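For scripted access one option is the proteus client library (the same kind of API used by Tryton scenario scripts). This is only a sketch: it assumes a proteus version matching the 3.4 series is installed on the host and that XML-RPC access is enabled for the tryton service in ado/etc/trytond.conf; the URL reuses the credentials and database listed above:

from proteus import config, Model

# assumption: trytond exposes XML-RPC for the c3s database; adjust host,
# port and protocol to the actual trytond.conf of this setup
config.set_xmlrpc('http://admin:admin@localhost:8000/c3s/')

# read some demo data, e.g. the parties created by the demo data script
Party = Model.get('party.party')
for party in Party.find():
    print(party.name)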
For development purposes it is convenient to have the possibility to debug the running code. To start only the necessary services for developing a service use, e.g.:

$ docker-compose run --service-ports portal ado-do deploy-portal
$ docker-compose run --service-ports api ado-do deploy-api
The portal service is started with ado-do
inside a portal container.
The tryton service can be started with:
$ docker-compose run --service-ports tryton ado-do deploy-tryton c3s
The flag --service-ports runs the container and all its dependencies with the service's ports enabled and mapped to the host.
For development, the benefit of starting a service with docker-compose run --service-ports <service> instead of docker-compose up is the possibility to communicate with a debugger like pdb.
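As a minimal sketch (the view function and its placement are hypothetical), a breakpoint can be set directly in the code under development in ado/src; the interactive pdb prompt then shows up on the terminal that runs the docker-compose run command:

def my_view(request):            # hypothetical Pyramid view in a module under /ado/src
    import pdb; pdb.set_trace()  # execution stops here and the pdb prompt appears
                                 # on the terminal running
                                 # `docker-compose run --service-ports portal ...`
    return {}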
A similar topic is to start a shell in a container. To manually examine the operating system of a container, just run a shell in the container:
$ docker-compose run portal /bin/bash
Warning
Manual changes are not persisted when closing a container. All changes are reset.
Note
The console is always opened in a freshly built container of the service and does not connect to a running container. To enter a running container use docker exec. See below for further instructions.
Ado-do is a command line tool to set up and maintain services in a container.
To run the ado-do command from inside a container, the leading docker-compose run <service> part must be removed from the following examples.
Get acquainted with ado-do, a command driven tool which performs tasks on container start:

$ docker-compose run portal ado-do --help
$ docker-compose run portal ado-do COMMAND --help
Update all modules in an existing database with name DATABASE_NAME:
$ docker-compose run portal ado-do update -m all DATABASE_NAME
To update specific modules in an existing database with name DATABASE_NAME:
$ docker-compose run portal ado-do update -m MODULE_NAME1[,MODULE_NAME2,...] \
    DATABASE_NAME
E.g.:
$ docker-compose run tryton ado-do update -m party,account,collecting_society c3s
To manually examine and edit a database, use:
$ docker-compose run portal ado-do db-psql DATABASE_NAME
Backup a database:
$ docker-compose run tryton ado-do db-backup DATABASE_NAME \
    > `date +%F.%T`_DATABASE_NAME.backup
Delete a database:
$ docker-compose run portal ado-do db-delete DATABASE_NAME
To handle increasing load, it is possible to start more service containers on demand:
$ docker-compose scale portal=2 tryton=3 db=1
To handle decreasing load, it is possible to stop service containers on demand:
$ docker-compose scale tryton=2
Lookup all host ports in use:
$ /path/to/c3s.ado/show_external_urls
… or use docker-compose ps
as an alternative.
Lookup a specific host port in use:
$ docker-compose port --index=1 tryton 8000
Note
This command is affected by a bug that has been fixed upstream but is not yet merged and released: docker/compose#667
To run tests in the tryton container use:
$ docker-compose run tryton sh -c \
    'ado-do pip-install tryton \
    && export DB_NAME=:memory: \
    && python /ado/src/trytond/trytond/tests/run-tests.py'
Some changes in the container setup require a rebuild of the whole system.
It is best to rename the current c3s.ado directory and make a fresh clone of the c3s.ado repository.
Update the environment as usual:
$ cd c3s.ado
$ ./update
Build containers, this time without a cache:
$ docker-compose build --no-cache
Start containers:
$ docker-compose up
To monitor all running containers use:
$ watch ./monitor
Note
The monitoring abilities are limited to system and user CPU and rss+cache size. The most informative metrics to use for monitoring are a moving target.
The general Python requirements are provided by default Debian packages from Jessie (currently testing) if available, otherwise from PyPI.
Packages under development are located in ado/src
and can be edited on the
host system, outside the containers.
For developer convenience all Tryton modules use a git mirror of the upstream
Tryton repositories.
For this setup the Tryton release branch 3.4 is used.
This repository is built from the following files and directories:
├── ado                               # This directory is mapped into portal and tryton container
│   ├── ado-do                        # Maintenance Utility for containers
│   ├── etc
│   │   ├── requirements-portal.txt   # Pip requirements for portal service
│   │   ├── requirements-tryton.txt   # Pip requirements for Tryton service
│   │   ├── scenario_master_data.txt  # Demo data script
│   │   ├── trytond.conf              # Configuration file for Tryton service
│   │   └── trytonpassfile            # Password file for Tryton admin user
│   ├── src                           # Source repositories, edit here
│   │   ├── account
│   │   ├── account_invoice
│   │   ├── ...
│   └── var                           # upload directory for tryton webdav service
│       └── lib ...
├── CHANGELOG
├── config.py                         # Configuration for paths and repositories
├── Dockerfiles                       # Definition of service container images
│   ├── portal ...
│   └── tryton ...
├── docker-compose.yml                # docker-compose configuration
├── postgresql-data ...               # postgresql database data files
├── README.rst                        # *this file*
├── show_external_urls                # helper script to show used external urls
└── update                            # Update script for repositories and file structure
This setup maintains three levels of package inclusion:
- Debian packages
- Python packages installed with pip
- Source repositories for development purposes
Source packages for development are available as git repositories and are configured in config.py in the variable repositories:
(
    git repository url or None,
    git clone option, required if repository is given,
    relative path to create or clone,
),
These packages are cloned or updated with the ./update
command and must
be pip installable.
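A hypothetical entry following this layout (assuming repositories is a list of tuples; the repository URL, clone option and path are invented for illustration) could look like this:

repositories = [
    (
        'https://github.com/C3S/some_module.git',  # git repository url or None
        '--branch develop',                        # git clone option
        'ado/src/some_module',                     # relative path to create or clone
    ),
]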
To install a source repository package in a container, it must be declared in
one of the ado/etc/requirements*.txt
files.
Note
The requirements-portal.txt inherits the requirements-tryton.txt.
Note
The config.py
can be used to create empty directories, too.
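Another hypothetical entry for that case (the path is invented for illustration) simply omits the repository:

    (
        None,                  # no git repository url, nothing is cloned
        None,                  # no clone option needed without a repository
        'ado/var/some_dir',    # relative path that is only created
    ),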
Debian and Python packages are included in one of the Dockerfiles:
- tryton
- portal
Note
Add source repository packages only when they are really needed for development.
The database files are stored in postgresql-data.
To start over with a new database use the following pattern:
$ docker-compose stop db
$ docker-compose rm db
$ sudo rm -rf postgresql-data/
$ mkdir postgresql-data
Warning
All data in this database will be deleted!
If docker fails to start and you get messages like this: "Couldn't connect to Docker daemon at http+unix://var/run/docker.sock [...]" or "docker-compose cannot start container <docker id> port has already been allocated"
Check if the docker service is started:
$ /etc/init.d/docker[.io] stop
$ /etc/init.d/docker[.io] start
Check if the user running docker is a member of the group docker:

$ login
$ groups | grep docker
If the Tryton client has already connected to the tryton container, the fingerprint check could block the login with the message: Bad Fingerprint!
That means the fingerprint of the server certificate has changed.
In production use, the Bad Fingerprint alert is a sign that someone could be trying to phish your login credentials with another server answering your client.
Ask the server administrator whether the certificate has changed.
Close the Tryton client.
Check the problematic host entry in ~/.config/tryton/3.4/known_hosts.
Add a new fingerprint provided by the server administrator or
simply remove the whole file, if the setup is not in production use:
$ rm ~/.config/tryton/3.4/known_hosts
For manual execution of nosetests, you need to start the container:

- docker-compose run portal bash
- ado-do pip-install portal

Then execute nosetests:

- nosetests -v --nologcapture /ado/src/collecting_society.portal/collecting_society_portal/tests/nose-test-01.py
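For orientation, a minimal test module in the style nose collects (file name, function and assertion are invented for illustration) could look like this:

# hypothetical content of a file like nose-test-02.py; nose collects
# functions whose names start with "test"
def test_example():
    assert 1 + 1 == 2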
This is a collection of docker internals. Good to know, but seldom needed.
Show running containers (docker-compose level), e.g.:

$ docker-compose ps
Name                      Command                          State   Ports
---------------------------------------------------------------------------
c3sadointernal_db_1       /docker-entrypoint.sh postgres   Up      5432/tcp
c3sadointernal_portal_1   ado-do deploy-portal             Up      6543->6543/tcp
c3sadointernal_tryton_1   ado-do deploy-tryton c3s         Up      8000->8000/tcp
Use docker help:
$ docker help
Show running containers (docker level):
$ docker ps
Enter a running container by id (Docker>=1.3;Kernel>3.8):
$ docker exec -it <container-id> bash
Note
The docker containers are usually stored under /var/lib/docker and can occupy several gigabytes of disk space.
Docker is memory intensive. To stop and remove all containers use:
$ docker stop $(docker ps -a -q)
$ docker rm $(docker ps -a -q)
Remove dangling images:
$ docker rmi $(docker images -f "dangling=true" -q)
In case you need disk space, remove all local cached images:
$ docker rmi $(docker images -q)
For infos on copyright and licenses, see ./COPYRIGHT.rst