openQA is an automated test tool that makes it possible to test the whole installation process of an operating system. It’s free software released under the GPLv2 license. The source code and documentation are hosted in the os-autoinst organization on GitHub.
This document provides the information needed to install and set up the tool, as well as information useful for everyday administration of the system. It’s assumed that the reader is already familiar with openQA and has already read the Starter Guide, available at the official repository.
The easiest way to install openQA is from packages. You can find openSUSE packages in OBS in the openQA:stable repository.
The latest development version can also be found in OBS in the openQA:devel repository.
For Fedora, packages are available in the official repositories for Fedora 23 and later. Installation on these distributions is therefore pretty simple:
# openSUSE 13.2 (stable version)
zypper ar -f obs://devel:openQA:stable/openSUSE_13.2 openQA
zypper ar -f obs://devel:openQA:13.2/openSUSE_13.2 openQA-perl-modules
# openSUSE Leap 42.1
zypper ar -f obs://devel:openQA/openSUSE_Leap_42.1 openQA
zypper ar -f obs://devel:openQA:Leap:42.1/openSUSE_Leap_42.1 openQA-perl-modules
# openSUSE Tumbleweed
zypper ar -f obs://devel:openQA/openSUSE_Factory openQA
# all openSUSE
zypper in openQA
# Fedora 23+
dnf install openqa openqa-httpd
It’s recommended to run openQA behind an Apache proxy. See the openqa.conf.template config file in /etc/apache2/vhosts.d (openSUSE) or /etc/httpd/conf.d (Fedora). To make everything work correctly on openSUSE, you need to enable the headers, proxy, proxy_http and proxy_wstunnel modules using a2enmod. This is not necessary on Fedora. For a basic setup, you can copy openqa.conf.template to openqa.conf and modify the ServerName setting. This will direct all HTTP traffic to openQA.
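On openSUSE, the module enablement and template copy described above can be sketched like this (run as root; the vhost path is the one named above):

```shell
# enable the Apache modules openQA's proxy config needs (openSUSE only)
a2enmod headers
a2enmod proxy
a2enmod proxy_http
a2enmod proxy_wstunnel
# create the vhost from the shipped template, then edit ServerName in it
cp /etc/apache2/vhosts.d/openqa.conf.template /etc/apache2/vhosts.d/openqa.conf
```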
By default openQA expects to be run with HTTPS. The openqa-ssl.conf.template Apache config file is available as a base for creating the Apache config; you can copy it to openqa-ssl.conf and uncomment any lines you like, then ensure a key and certificate are installed to the appropriate location (depending on distribution and whether you uncommented the lines for key and cert location in the config file). If you don’t have a TLS/SSL certificate for your host, you must turn HTTPS off. You can do that in /etc/openqa/openqa.ini:
[openid]
httpsonly = 0
Now start the openQA services:
systemctl start openqa-scheduler
systemctl start openqa-gru
systemctl start openqa-websockets
systemctl start openqa-webui
# openSUSE
systemctl restart apache2
# Fedora
# for now this is necessary to allow Apache to connect to openQA
setsebool -P httpd_can_network_connect 1
systemctl restart httpd
The openQA web UI should now be available on http://localhost/. To ensure openQA runs on each boot, you should also systemctl enable the same services.
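For completeness, enabling the same services (a sketch; unit names as used above):

```shell
# make the openQA services start on boot
systemctl enable openqa-scheduler openqa-gru openqa-websockets openqa-webui
# openSUSE
systemctl enable apache2
# Fedora
systemctl enable httpd
```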
Workers are processes running virtual machines to perform the actual testing. They are distributed as a separate package and can be installed on multiple machines while still using only one WebUI.
# openSUSE
zypper in openQA-worker
# Fedora
dnf install openqa-worker
To allow workers to access your instance, you need to log into openQA as operator and create a pair of API key and secret. Once you are logged in, follow the link manage API keys in the top right corner. Click the create button to generate key and secret. There is also a script available for creating an admin user and an API key+secret pair non-interactively, /usr/share/openqa/script/create_admin, which can be useful for scripted deployments of openQA. Copy and paste the key and secret into /etc/openqa/client.conf on the machine(s) where the worker is installed. Make sure to put them in a section reflecting your webserver URL. In the simplest case, your client.conf may look like this:
[localhost]
key = 0123456789ABCDEF
secret = 0123456789ABCDEF
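If the workers reach the web UI under a real hostname instead of localhost, the section name must reflect that URL. A minimal sketch, writing the file to the current directory so the example is self-contained (the real file is /etc/openqa/client.conf, and openqa.example.com is a placeholder host):

```shell
# write a client.conf for a worker talking to openqa.example.com
# (placeholder key/secret values; scratch path used for illustration)
cat > client.conf <<'EOF'
[openqa.example.com]
key = 0123456789ABCDEF
secret = 0123456789ABCDEF
EOF
```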
To start the workers you can use the provided systemd files via systemctl start openqa-worker@1. This will start worker number one. You can start as many workers as you dare; you just need to supply a different worker id (the number after @).
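For several workers, a shell loop over the instance numbers saves typing (a sketch; adjust the count to your machine):

```shell
# start worker instances 1 to 3
for i in 1 2 3; do
    systemctl start openqa-worker@$i
done
```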
You can also run workers manually from the command line:
sudo -u _openqa-worker /usr/share/openqa/script/worker --instance X
This will run a worker manually, showing you debug output. If you haven’t installed os-autoinst from packages, make sure to pass the --isotovideo option to point to the checkout dir where isotovideo is, not to /usr/lib! Otherwise it will have trouble finding its perl modules.
openQA supports three different authentication methods: OpenID (default), iChain and Fake. See the auth section in /etc/openqa/openqa.ini.
[auth]
# method name is case sensitive!
method = OpenID|iChain|Fake
Independently of method used, the first user that logs in (if there is no admin yet) will automatically get administrator rights!
By default openQA uses OpenID with opensuse.org as the OpenID provider. The OpenID method has its own openid section in /etc/openqa/openqa.ini:
[openid]
## base url for openid provider
provider = https://www.opensuse.org/openid/user/
## enforce redirect back to https
httpsonly = 1
openQA supports only OpenID versions up to 2.0. The newer OpenID Connect and OAuth are currently not supported.
For development purposes only! Fake authentication bypasses any authentication and automatically allows any login request as the Demo user with administrator privileges and without a password. To ease worker testing, an API key and secret is created (or updated) with a validity of one day during login. You can then use the following as /etc/openqa/client.conf:
[localhost]
key = 1234567890ABCDEF
secret = 1234567890ABCDEF
If you switch the authentication method from Fake to any other, review your API keys! You may be vulnerable for up to a day until the Fake API key expires.
Editing needles from the web interface can optionally commit new or changed needles automatically to git. To do so, you need to enable git support by setting
[global]
scm = git
in /etc/openqa/openqa.ini. Once you do so and restart the web interface, openQA will automatically commit new needles to the git repository. You may want to add some description to automatic commits coming from the web UI. You can do so by setting your configuration in the repository (/var/lib/os-autoinst/needles/.git/config) to some reasonable defaults such as:
[user]
email = [email protected]
name = openQA web UI
To enable automatic pushing of the repo as well, you need to add the following to your openqa.ini:
[scm git]
do_push = yes
Depending on your setup, you might need to generate and propagate ssh keys for user geekotest to be able to push.
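A sketch of what that could look like, assuming geekotest’s home directory is /var/lib/openqa and an ed25519 key; both are assumptions, not something openQA mandates:

```shell
# generate an ssh key for geekotest so the web UI can push needles
sudo -u geekotest mkdir -p /var/lib/openqa/.ssh
sudo -u geekotest ssh-keygen -t ed25519 -N '' -f /var/lib/openqa/.ssh/id_ed25519
# then register the public key with your git hosting service
```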
The default behavior for all workers is to use the Qemu backend and connect to http://localhost. If you want to change some of those options, you can do so in /etc/openqa/workers.ini. For example, to point the workers to the FQDN of your host (needed if test cases need to access files of the host), use the following setting:
[global]
HOST = http://openqa.example.com
Once you have workers running, they should show up as idle in the workers section of the openQA admin area. When you get this far, you have your own instance of openQA up and running, and all that is left is to set up some tests.
There are some additional requirements to get a remote worker running. The first is to ensure shared storage between the openQA WebUI and the workers. The directory /var/lib/openqa/share contains all required data and should be shared with read-write access across all nodes present in the openQA cluster. This step is intentionally left to the system administrator, who can choose the proper shared storage for their specific needs.
Example of an NFS configuration:
The NFS server is the host where the openQA WebUI is running. Content of /etc/exports:
/var/lib/openqa/share *(fsid=0,rw,no_root_squash,sync,no_subtree_check)
The NFS clients are the hosts where the openQA workers are running. Run the following command on each of them:
mount -t nfs openQA-webUI-host:/var/lib/openqa/share /var/lib/openqa/share
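To make that mount persistent across reboots, the equivalent /etc/fstab entry on each worker host would be (openQA-webUI-host is the placeholder hostname from the command above):

```
openQA-webUI-host:/var/lib/openqa/share /var/lib/openqa/share nfs defaults 0 0
```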
The auditing plugin enables openQA administrators to maintain an overview of what is happening in the system. The plugin records which event was triggered by whom, when, and what the request looked like. Actions done by openQA workers are tracked under the user whose API keys the workers are using.
The audit log is directly accessible from the Admin menu. Auditing is enabled by default and can be disabled by a global configuration option in /etc/openqa/openqa.ini:
[global]
audit_enabled = 0
The audit section of /etc/openqa/openqa.ini allows excluding some events from logging using a space-separated blacklist:
[audit]
blacklist = job_grab job_done
List of events tracked by the auditing plugin:
Assets: asset_register asset_delete
Workers: worker_register command_enqueue
Jobs: iso_create iso_delete iso_cancel jobtemplate_create jobtemplate_delete job_create job_grab job_delete job_update_result job_done jobs_restart job_restart job_cancel job_duplicate jobgroup_create jobgroup_connect
Tables: table_create table_update table_delete
Users: user_comment user_login
Needles: needle_delete needle_modify
Some of these events are very common and may clutter the audit database. For this reason the job_grab and job_done events are blacklisted by default.
Note: Upgrading openQA does not automatically update /etc/openqa/openqa.ini. Review your configuration after upgrade.
The openQA web interface can be started via MOJO_REVERSE_PROXY=1 morbo script/openqa in development mode.
/var/lib/openqa/ must be owned by root and contain several subdirectories, most of which must be owned by the user that runs openQA (default geekotest):

- db contains the SQLite database
- images is where the server stores test screenshots and thumbnails
- share contains shared directories for remote workers, can be owned by root
- share/factory contains test assets and a temp directory, can be owned by root but the sysadmin must create the subdirectories
- share/factory/iso contains ISOs for tests
- share/factory/hdd contains hard disk images for tests
- share/factory/repo contains repositories for tests
- share/factory/other contains miscellaneous test assets (e.g. kernels and initrds)
- share/factory/tmp is used as a temporary directory (openQA will create it if it owns share/factory)
- share/tests contains the tests themselves
- testresults is where the server stores test logs and test-generated assets
It also contains several symlinks which are necessary due to various things moving around over the course of openQA’s development. All the symlinks can of course be owned by root:
- script (symlink to /usr/share/openqa/script/)
- tests (symlink to share/tests)
- factory (symlink to share/factory)
It is always best to use the canonical locations, not the compatibility symlinks, so run scripts from /usr/share/openqa/script, not /var/lib/openqa/script.
You only need the asset directories for the asset types you will actually use; e.g. if none of your tests refer to openQA-stored repositories, you will need no factory/repo directory. The distribution packages may not create all asset directories, so make sure the ones you need are created if necessary. Packages will likewise usually not contain any tests; you must create your own tests, or use existing tests for some distribution or other piece of software.
The worker needs to own /var/lib/openqa/pool/$INSTANCE, e.g.

- /var/lib/openqa/pool/1
- /var/lib/openqa/pool/2
- … add more if you have more CPUs/disks

You can also give the whole pool directory to the _openqa-worker user and let the workers create their own instance directories.
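A sketch of creating the per-instance directories by hand (run as root; two instances assumed):

```shell
# create pool directories for worker instances 1 and 2 and hand them over
mkdir -p /var/lib/openqa/pool/1 /var/lib/openqa/pool/2
chown -R _openqa-worker /var/lib/openqa/pool
```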
By default, openQA will use an SQLite database: /var/lib/openqa/db/db.sqlite. This will be automatically created on first access to the openQA web UI if it does not exist.
It is possible to use PostgreSQL or MariaDB / MySQL instead of SQLite, and indeed this is recommended for production deployments of openQA. You should create a database and a dedicated user account with full access to it. To configure access to the chosen database in openQA, edit /etc/openqa/database.ini and change the settings in the [production] section. Here is an example for connecting to a remote PostgreSQL database with a username and password:
[production]
dsn = dbi:Pg:dbname=openqa;host=db.example.org
user = openqa
password = somepassword
The dsn value format technically depends on the database type (though at the time of writing it’s in fact identical for both supported databases). For PostgreSQL it’s documented at http://search.cpan.org/~rudy/DBD-Pg/Pg.pm#DBI_Class_Methods, for MySQL / MariaDB at http://search.cpan.org/~capttofu/DBD-mysql/lib/DBD/mysql.pm#Class_Methods
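For completeness, a corresponding [production] section for MariaDB / MySQL might look like this (host and credentials are placeholders, mirroring the PostgreSQL example above):

```
[production]
dsn = dbi:mysql:dbname=openqa;host=db.example.org
user = openqa
password = somepassword
```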
If you intend to use a different database, it is best to create the database and configuration file before starting the services and connecting to the web UI for the first time, otherwise openQA will set itself up with an SQLite database and may get confused when you try to switch to a different one. See the following section if you want to migrate an existing openQA-on-SQLite deployment to a different database.
openQA is compatible with several database engines and comes with all the tools needed to initialize a clean database in any of them. However, openQA does not include tools to migrate existing data from one database to another. If you are planning, for example, to leave behind SQLite and switch to PostgreSQL in your openQA installation, you will need to start with a clean database or perform the data conversion yourself.
Converting databases from one engine to another is far from trivial. There are plenty of tools, both commercial and free, that try to address the problem for different databases and in different ways. The following example SQL scripts are provided just as a starting point for those willing to migrate from SQLite (the default engine) to PostgreSQL (successfully backing the biggest openQA installations at the time of writing). Keep in mind that the scripts will probably need some prior adjustment, since they are based on version 22 of the database schema (likely outdated at the time of reading).
First, run this against the SQLite database to dump the database content into a set of CSV files.
.mode csv
.header ON
.output assets.csv
SELECT * FROM assets;
.output job_settings.csv
SELECT * FROM job_settings;
.output machine_settings.csv
SELECT * FROM machine_settings;
.output machines.csv
SELECT * FROM machines;
.output product_settings.csv
SELECT * FROM product_settings;
.output products.csv
SELECT * FROM products;
.output secrets.csv
SELECT * FROM secrets;
.output test_suite_settings.csv
SELECT * FROM test_suite_settings;
.output test_suites.csv
SELECT * FROM test_suites;
.output users.csv
SELECT * FROM users;
.output worker_properties.csv
SELECT * FROM worker_properties;
.output workers.csv
SELECT * FROM workers WHERE id > 0;
.output api_keys.csv
SELECT * FROM api_keys;
.output job_modules.csv
SELECT * FROM job_modules;
.output job_templates.csv
SELECT * FROM job_templates;
.output jobs.csv
SELECT * FROM jobs;
.output job_dependencies.csv
SELECT * FROM job_dependencies;
.output jobs_assets.csv
SELECT * FROM jobs_assets;
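One way to run the directives above is to save them to a file (dump-to-csv.sql is an illustrative name) and feed it to the sqlite3 shell against the database path used in this guide; the CSV files land in the current directory:

```shell
# dump the openQA SQLite database to CSV files in the current directory
sqlite3 /var/lib/openqa/db/db.sqlite < dump-to-csv.sql
```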
Then, initialize the PostgreSQL database using the standard procedure and afterwards run this script from the directory containing the CSV files to import them into the new database.
\copy users FROM users.csv WITH csv header NULL AS ''
\copy api_keys FROM api_keys.csv WITH csv header NULL AS ''
\copy secrets FROM secrets.csv WITH csv header NULL AS ''
\copy assets FROM assets.csv WITH csv header NULL AS ''
\copy workers FROM workers.csv WITH csv header NULL AS ''
\copy worker_properties FROM worker_properties.csv WITH csv header NULL AS ''
\copy products FROM products.csv WITH csv header NULL AS ''
\copy product_settings FROM product_settings.csv WITH csv header NULL AS ''
\copy machines FROM machines.csv WITH csv header NULL AS ''
\copy machine_settings FROM machine_settings.csv WITH csv header NULL AS ''
\copy test_suites FROM test_suites.csv WITH csv header NULL AS ''
\copy test_suite_settings FROM test_suite_settings.csv WITH csv header NULL AS ''
\copy job_templates FROM job_templates.csv WITH csv header NULL AS ''
\copy jobs FROM jobs.csv WITH csv header NULL AS ''
\copy job_settings FROM job_settings.csv WITH csv header NULL AS ''
\copy job_modules FROM job_modules.csv WITH csv header NULL AS ''
\copy job_dependencies FROM job_dependencies.csv WITH csv header NULL AS ''
\copy jobs_assets FROM jobs_assets.csv WITH csv header NULL AS ''
SELECT SETVAL('users_id_seq', (SELECT MAX(id) FROM users));
SELECT SETVAL('api_keys_id_seq', (SELECT MAX(id) FROM api_keys));
SELECT SETVAL('secrets_id_seq', (SELECT MAX(id) FROM secrets));
SELECT SETVAL('assets_id_seq', (SELECT MAX(id) FROM assets));
SELECT SETVAL('workers_id_seq', (SELECT MAX(id) FROM workers));
SELECT SETVAL('worker_properties_id_seq', (SELECT MAX(id) FROM worker_properties));
SELECT SETVAL('products_id_seq', (SELECT MAX(id) FROM products));
SELECT SETVAL('product_settings_id_seq', (SELECT MAX(id) FROM product_settings));
SELECT SETVAL('machines_id_seq', (SELECT MAX(id) FROM machines));
SELECT SETVAL('machine_settings_id_seq', (SELECT MAX(id) FROM machine_settings));
SELECT SETVAL('test_suites_id_seq', (SELECT MAX(id) FROM test_suites));
SELECT SETVAL('test_suite_settings_id_seq', (SELECT MAX(id) FROM test_suite_settings));
SELECT SETVAL('job_templates_id_seq', (SELECT MAX(id) FROM job_templates));
SELECT SETVAL('jobs_id_seq', (SELECT MAX(id) FROM jobs));
SELECT SETVAL('job_settings_id_seq', (SELECT MAX(id) FROM job_settings));
SELECT SETVAL('job_modules_id_seq', (SELECT MAX(id) FROM job_modules));
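Likewise, the \copy and SETVAL statements above can be saved to a file (import.sql is an illustrative name) and run with psql from the directory containing the CSV files, using the connection details from the example configuration:

```shell
# import the CSV files into the PostgreSQL database
cd /path/to/csv/files   # hypothetical directory holding the CSV dumps
psql -U openqa -h db.example.org -d openqa -f import.sql
```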
- make sure you have a machine with KVM support
- make sure the kvm_intel or kvm_amd modules are loaded
- make sure you do have virtualization enabled in the BIOS
- make sure the _openqa-worker user can access /dev/kvm
- make sure you are not already running other hypervisors such as VirtualBox
- when running inside a VM, make sure nested virtualization is enabled (pass nested=1 to your kvm module)
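A few of these checks can be scripted; this read-only sketch prints a hint for each check instead of aborting:

```shell
# check for a loaded KVM module and an accessible /dev/kvm
if lsmod 2>/dev/null | grep -qE '^kvm_(intel|amd)'; then
    echo "kvm module: loaded"
else
    echo "kvm module: not loaded"
fi
if [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
    echo "/dev/kvm: accessible"
else
    echo "/dev/kvm: not accessible"
fi
```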
www.opensuse.org’s OpenID provider may have trouble with IPv6. openQA shows a message like this:
no_identity_server: Could not determine ID provider from URL.
To avoid that, switch off IPv6 or add a special route that prevents the system from trying to use IPv6 with www.opensuse.org:
ip -6 r a to unreachable 2620:113:8044:66:130:57:66:6/128