Seapath Yocto BSP Platform Developer Guide

The Yocto firmware generation has been tested on Ubuntu 18.04. You can either use your host machine’s tools, or use cqfd to build. More details are given in the next sections of this document.

1. Fetching the source using Git

We use repo to synchronize the source code, based on a manifest (an XML file) that describes all the Git repositories required to build a firmware. The manifest file is hosted in a git repository named repo-manifest.

First initialize repo:

$ cd my_project_dir/
$ repo init -u <manifest_repo_url>
$ repo sync

For instance, for Seapath yocto-bsp project:

$ cd my_project_dir/
$ repo init -u https://github.com/seapath/repo-manifest.git
$ repo sync
Note
The initial build process takes approximately 4 to 5 hours on a current developer machine and will produce approximately 50GB of data.

2. Build prerequisites

Before building, you must put an SSH public key in keys/ansible_public_ssh_key.pub. It will be used by Ansible to communicate with the machines. See keys/README for more information.
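For example, you can generate a dedicated key pair and copy its public half in place (the key name and empty passphrase below are illustrative choices, not requirements):

$ ssh-keygen -t ed25519 -f ~/.ssh/seapath_ansible -N ""
$ cp ~/.ssh/seapath_ansible.pub keys/ansible_public_ssh_key.pub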

3. Building a firmware using cqfd

cqfd is a quick and convenient way to run commands in the current directory, but within a pre-defined Docker container. Using cqfd allows you to avoid installing anything other than Docker and repo on your development machine.

Note
We recommend using this method as it greatly simplifies the build configuration management process.

Yocto SSTATE and Download cache

Yocto provides a way to share build artifacts between multiple workspaces and developers through the DL_DIR and SSTATE_DIR environment variables. To use them with cqfd, add to your .bashrc:

export CQFD_EXTRA_RUN_ARGS="-v <your_dldir_path>:/mnt/dl -e DL_DIR=/mnt/dl -v <your_sstate_path>:/mnt/sstate -e SSTATE_DIR=/mnt/sstate"
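For example, with cache directories kept under your home directory (the paths are arbitrary choices):

$ mkdir -p ~/yocto-cache/dl ~/yocto-cache/sstate
$ export CQFD_EXTRA_RUN_ARGS="-v $HOME/yocto-cache/dl:/mnt/dl -e DL_DIR=/mnt/dl -v $HOME/yocto-cache/sstate:/mnt/sstate -e SSTATE_DIR=/mnt/sstate"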

You can also set this configuration in .cqfdrc, using docker_run_args under the [build] section:

docker_run_args='-v <your_dldir_path>:/mnt/dl -e DL_DIR=/mnt/dl -v <your_sstate_path>:/mnt/sstate -e SSTATE_DIR=/mnt/sstate'
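For reference, here is a sketch of how this looks inside .cqfdrc (the command value is a placeholder; only the docker_run_args line comes from this guide):

[build]
command='<build command>'
docker_run_args='-v <your_dldir_path>:/mnt/dl -e DL_DIR=/mnt/dl -v <your_sstate_path>:/mnt/sstate -e SSTATE_DIR=/mnt/sstate'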

Prerequisites

Docker installation

See the Docker manual: Install Docker

Repo installation

On Ubuntu 20.04 and greater:

$ sudo curl -o /usr/local/bin/repo https://storage.googleapis.com/git-repo-downloads/repo
$ sudo chmod +x /usr/local/bin/repo
$ sed 's|/usr/bin/env python|/usr/bin/env python3|' -i /usr/local/bin/repo
  • Install cqfd:

If necessary, install the make and pkg-config packages. For instance, on an Ubuntu/Debian distribution:

$ sudo apt-get install build-essential pkg-config

Then:

$ git clone https://github.com/savoirfairelinux/cqfd.git
$ cd cqfd
$ sudo make install

The project page on GitHub contains detailed information on usage and installation.

  • Make sure that Docker does not require sudo

Please use the following commands to add your user account to the docker group:

$ sudo groupadd docker
$ sudo usermod -aG docker $USER

Log out and log back in, so that your group membership can be re-evaluated.
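You can then verify that Docker runs without sudo, for instance with the standard hello-world image:

$ docker run hello-world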

SEAPATH parameters

Some SEAPATH settings can be customized with a file called seapath.conf. This file must be created in the project root directory. All the settings that can be set in this file are described in the example file seapath.conf.sample.
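For example, a simple way to start is to copy the sample file and edit it:

$ cp seapath.conf.sample seapath.conf
$ vi seapath.conf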

Building the firmware

The first step with cqfd is to create the build container. For this, use the cqfd init command:

$ cqfd init
Note
The step above is only required once: once the container image has been created on your machine, it is persistent. Further calls to cqfd init will do nothing, unless the container definition (.cqfd/docker/Dockerfile) has changed in the source tree.

cqfd provides different flavors that call build.sh with specific image, distro and machine parameters. To list the available flavors, run:

$ cqfd flavors

Here is a description of the flavors:

  • all: all flavors

  • flasher: image to flash a SEAPATH disk

  • guest_efi: efi guest image (VM)

  • guest_efi_test: similar to guest_efi with additional test packages

  • guest_efi_dbg: similar to guest_efi with debug tools

  • host_efi: efi host image (including security, clustering and readonly features)

  • host_efi_dbg: similar to host_efi with debug tools

  • host_efi_test: similar to host_efi with additional test packages

  • host_efi_swu: efi host update image (SwUpdate)

  • observer_efi: efi observer image (used to observe the cluster)

  • observer_efi_swu: efi observer update image (SwUpdate)

To build one of these flavors, run:

$ cqfd -b <flavor>

Note:

  • bash completion works with the -b parameter

  • the detailed command used for each flavor is described in the .cqfdrc file
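For instance, to build the production host image flavor listed above:

$ cqfd -b host_efi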

4. Building the firmware manually

This method relies on the manual installation of all the tools and dependencies required on the host machine.

Prerequisites on Ubuntu

The following packages need to be installed:

$ sudo apt-get update && sudo apt-get install -y ca-certificates build-essential
$ sudo apt-get install -y gawk wget git-core diffstat unzip texinfo gcc-multilib \
   build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
   xz-utils debianutils iputils-ping libsdl1.2-dev xterm repo

Building the firmware

The build is started by running the following command:

$ ./build.sh -i seapath-host-efi-image -m boardname --distro distroname

You can also pass custom BitBake commands using the -- separator:

$ ./build.sh -i seapath-host-efi-image -m boardname --distro distroname -- bitbake -c clean package_name

Images can be produced for UEFI firmware only.

You can find below the Yocto images list:

  • Host images

    • seapath-host-efi-image: production image

    • seapath-host-efi-dbg-image: debug image

    • seapath-host-efi-test-image: production image with test tools

  • Guest images

    • seapath-guest-efi-image: QEMU-compatible virtual machine production image (UEFI only)

    • seapath-guest-efi-dbg-image: QEMU-compatible virtual machine debug image (UEFI only)

    • seapath-guest-efi-test-image: guest production image with test tools (UEFI only)

  • Hybrid images: guest images with test tools

  • Flasher images

    • seapath-flasher: flasher image, bootable from a USB drive or during a PXE boot, used to flash SEAPATH images to disk.

  • Observer images

    • seapath-observer-efi: production image for an observer (needed for clustering quorum establishment)

SEAPATH offers different distros for hosts depending on the use case. Here is a table describing the different distros available:

Table 1. SEAPATH host distros

DISTRO                               container  virtualization  security  read-only  secure-boot  clustering
seapath-host-sb                      Yes        Yes             Yes       Yes        Yes          Yes
seapath-host                         Yes        Yes             Yes       Yes        No           Yes
seapath-container-host               Yes        No              Yes       Yes        No           Yes
seapath-standalone-host              Yes        Yes             Yes       Yes        No           No
seapath-standalone-containers-host   Yes        No              Yes       Yes        No           No
seapath-host-cluster-minimal         Yes        Yes             No        No         No           Yes
seapath-host-minimal                 Yes        Yes             No        No         No           No

In addition to the host distros, we have specific distros:

  • seapath-flash: distro used for the flasher image

  • seapath-guest: distro used for guest images

  • seapath-observer: distro used for observer image

5. Building an SDK Installer

You can create an SDK matching your system’s configuration with the following command:

$ ./build.sh -i seapath -m boardname --sdk
Note
Prefix this command with cqfd run if using cqfd.

When the bitbake command completes, the toolchain installer will be in tmp/deploy/sdk/ under your build directory.
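The installer is a self-extracting shell script; a sketch of running it (the installer file name is a placeholder):

$ sh build/tmp/deploy/sdk/<sdk-installer>.sh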

6. Install Seapath

Prerequisites

  • The seapath-flasher image and the SEAPATH image to flash.

  • bmap-tools

bmaptool can be installed through your package manager, commonly under the package name bmap-tools or python3-bmaptools.

On Ubuntu/Debian/Mint:

$ sudo apt install bmap-tools

On Fedora:

$ sudo dnf install bmap-tools

On CentOS/Red Hat:

$ sudo yum install bmap-tools

Flashing the flasher image to an USB drive

To be able to install Seapath firmware on machines, you need a USB drive running a specific application. This application is available in the seapath-installer directory.

To create the flasher USB drive on a Linux system, you can use the bmaptool command. For instance, if the USB drive device is /dev/sdx:

$ sudo umount /dev/sdx*
$ sudo bmaptool copy build/tmp/deploy/images/seapath-installer/seapath-flasher-seapath-installer.rootfs.wic.gz /dev/sdx
Tip
You can also use the lsblk command to list all block devices and their mount points to identify the USB drive.

Flashing the firmware to the disk

Copy the generated image, in wic or wic.gz format, to the flasher_data partition of the USB drive.

Boot from the USB drive (this usually requires going through the BIOS boot menu). You can log in as the root user using a screen and a keyboard.

Use the flash script to write the firmware image to the disk. flash takes two arguments:

  • --image: the path to the image to be flashed. The image partitions are mounted on /media.

  • --disk: the disk to flash. Usually /dev/sda.

For instance:

$ flash --image /media/seapath-host-efi-image.wic.gz --disk /dev/sda
Tip
You can also use the lsblk command to list all block devices and their mount points to identify the disk.

Flashing the firmware to Raspberry Pi using a SD card

To be able to install the Seapath observer on a Raspberry Pi, you need to use an SD card.

To create a bootable SD card for Raspberry Pi on a Linux system, you can use the bmaptool command.

For instance, if the SD card is /dev/sdx:

$ sudo umount /dev/sdx*
$ sudo bmaptool copy --bmap build/tmp/deploy/images/seapath-observer-rpi/seapath-observer-rpi-image-seapath-observer-rpi.wic.bmap  build/tmp/deploy/images/seapath-observer-rpi/seapath-observer-rpi-image-seapath-observer-rpi.wic.bz2 /dev/sdx

Note: A/B updates do not work on Raspberry Pi for now. The cluster functionalities have not been tested yet.

7. Usage

You can log in on the hypervisor using a screen and keyboard. The default credentials are emergadmin:emergadmin. Alternatively, if DHCP is available, you can connect as the admin user using the SSH key configured through the keys folder (see keys/README.adoc).
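For example, assuming the hypervisor’s IP address is known (placeholder below):

$ ssh admin@<hypervisor_ip>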

The next step to use Seapath is to set up the cluster with Ansible. Follow the instructions of the Ansible playbook (https://github.com/seapath/ansible).

8. Tests

Performance tests

The Yocto image seapath-test-image includes real-time tests such as cyclictest.

On the target, call:

$ cyclictest -l100000000 -m -Sp90 -i200 -h400 -q >output

Note: this test runs for around 5 hours.

Then generate the graphics:

$ ./tools/gen_cyclic_test.sh -i output -n 28 -o seapath.png

Note: we reused OSADL tools.

Virtualization tests

KVM unit tests

The Yocto image seapath-test-image includes kvm-unit-tests.

On the target, call:

$ run_tests.sh

KVM/Qemu guest tests

All Seapath Yocto images include the ability to run guest Virtual Machines (VMs).

We use KVM and QEMU to run them. As we do not have any window manager on the host system, VMs should be launched in console mode and their console output must be correctly set.

For testing purposes, we can run our Yocto image as a guest machine. We do not use the .wic image, which includes the Linux kernel and the rootfs, because we need to set the console output. Instead, we use two distinct files so that we can modify the Linux kernel command line:

  • bzImage: the Linux Kernel image

  • seapath-test-image-seapath-vm.ext4: the Seapath rootfs

Then run:

$ qemu-system-x86_64 -accel kvm -kernel bzImage -m 4096 -hda seapath-test-image-seapath-vm.ext4 -nographic -append 'root=/dev/sda console=ttyS0'

Yocto ptests

Ptest (package test) is a concept for building, installing and running the test suites that are included in many upstream packages, and producing a consistent output format for the results.

ptest-runner is included in seapath-test-image and allows running those tests.

For instance:

$ ptest-runner openvswitch libvirt qemu rt-tests

The usage for the ptest-runner is as follows:

Usage: ptest-runner [-d directory] [-l list] [-t timeout] [-h] [ptest1 ptest2 ...]
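For example, you can list the available ptests, or run one with a longer timeout (the timeout value below is illustrative and given in seconds):

$ ptest-runner -l
$ ptest-runner -t 600 openvswitch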

9. Hypervisors updates

Hypervisor updates are enabled only for production EFI images:

  • legacy BIOS images do not implement the update mechanism

  • update images are not offered for debug and test images

A/B partitioning

A/B partitioning is used to allow for an atomic and recoverable update procedure. The update will be written to the passive partition. Once the update is successfully transferred to the device, the device will reboot into the passive partition which thereby becomes the new active partition.

If the update causes any failures, a roll back to the original active partition can be done to preserve uptime.

The following partitioning is used on hypervisors:

Slot A                                              Slot B
Boot A partition (Grub + Kernel) [/dev/<disk>1]     Boot B partition (Grub + Kernel) [/dev/<disk>2]
Rootfs A partition [/dev/<disk>3]                   Rootfs B partition [/dev/<disk>4]

Shared between both slots:

Logs partition [/dev/<disk>5]
Persistent data partition [/dev/<disk>6]

Updates

Hypervisor updates can be performed with SwUpdate.

First, create a SwUpdate image (.swu):

$ cqfd -b host_efi_swu

Then, you have several options.

Run an update from the command line

Copy the image to the target and run:

$ sudo swupdate -i <my update>.swu
$ sudo reboot

Run an update from a deployment server (Hawkbit)

SwUpdate can interact with a Hawkbit server to push updates to the device.

Installation of Hawkbit server

We use docker-compose, as explained in the Hawkbit documentation.

$ git clone https://github.com/eclipse/hawkbit.git
$ cd hawkbit/hawkbit-runtime/docker

We decided to enable anonymous connections. To do that, add this line in hawkbit-runtime/docker/docker-compose.yml:

  • HAWKBIT_SERVER_DDI_SECURITY_AUTHENTICATION_ANONYMOUS_ENABLED=true
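In the compose file this belongs under the environment section of the Hawkbit service; a minimal sketch, assuming the service is named hawkbit:

services:
  hawkbit:
    environment:
      - HAWKBIT_SERVER_DDI_SECURITY_AUTHENTICATION_ANONYMOUS_ENABLED=true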

And start the server:

$ docker-compose up -d
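You can check that the containers came up correctly before going further:

$ docker-compose ps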

Then you can access the HTTP server on port 8080. In the System Config menu, enable "Allow targets to download artifact without security credentials", so that anonymous updates can be used. More documentation is available on the Hawkbit website.

Configuration of Hawkbit

The Hawkbit server URL and port must be configured in /etc/sysconfig/swupdate_hawkbit.conf, or directly in meta-seapath (/recipes-seapath/system-config/system-config/efi/swupdate_hawkbit.conf).

A systemd daemon (swupdate_hawkbit.service) is started automatically at boot. If you want to modify swupdate_hawkbit.conf at runtime, you must restart the systemd service.
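For example, after editing the configuration:

$ sudo systemctl restart swupdate_hawkbit.service
$ systemctl status swupdate_hawkbit.service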

Once the systemd service is started, you should see the device in the Hawkbit interface. When an update is performed on the device, it will reboot.

10. About this documentation

This documentation uses the AsciiDoc documentation generator. It is a convenient format that allows writing plain-text documents that can later be converted to various output formats such as HTML and PDF.

In order to generate an HTML version of this documentation, use the following command (the asciidoc package will need to be installed on your Linux distribution):

$ asciidoc README.adoc

This will result in a README.html file being generated in the current directory.

If you prefer a PDF version of the documentation instead, use the following command (the asciidoctor-pdf package will need to be installed on your Linux distribution):

$ asciidoctor-pdf README.adoc
