Yocto requires separate epiphany and host builds #229

Open
peteasa opened this issue Sep 7, 2016 · 4 comments
peteasa (Contributor) commented Sep 7, 2016

@olajep whilst looking into the yocto build environment I have found that the build process works better if there are separate configure scripts for the arm and epiphany builds. This is because yocto tends to override many configure settings with environment variables, and those overrides need to be different for arm and epiphany.

Do you have any objection to splitting the configure script into three, so that the top-level configure script builds both, while the next level down builds either arm code or epiphany code depending on which configure script is used?

I envisage a top-level configure with two folders, one for arm and one for devices/epiphany, where the second set of configure scripts would be located.
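
Something like this layout (names purely illustrative):

pal/configure                    # top level: builds both
pal/arm/configure                # arm (host) code only
pal/devices/epiphany/configure   # epiphany (device) code only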

In addition, yocto requires that build objects can be created in a separate folder tree. At the moment the source and build trees are the same.

The reason for doing this is that we could then create a meta-pal layer usable for multiple processor types in the yocto build environment.

Peter.

olajep (Member) commented Sep 16, 2016

You can set {CFLAGS,CXXFLAGS,CPPFLAGS,LDFLAGS}_FOR_EPIPHANY and the top-level (host) configure will pass them down to the device configure. Do you need anything else?

Example:

$ cd pal
$ ./configure \
--enable-device-epiphany \
CFLAGS="-O2" \
LDFLAGS_FOR_EPIPHANY="..." \
CFLAGS_FOR_EPIPHANY="-Os"

Alternatively, you can rerun the device configure after the host configure.

Example:

$ cd pal
$ ./configure --enable-device-epiphany
$ cd device/epiphany
$ head config.log    # shows how the top-level configure invoked the device configure
$ ../../configure --host=epiphany-elf --prefix=foo CFLAGS="-Os"
$ cd ../..
$ make

In addition, yocto requires that build objects can be created in a separate folder tree. At the moment the source and build trees are the same.

Out-of-tree builds are supported by autotools. Piece of cake:

$ mkdir build
$ cd build
$ path/to/pal/source/configure --my-flags
$ make

peteasa (Contributor, Author) commented Sep 17, 2016

So my plan is to have two separate build folders. The first will be configured for just the epiphany build and the second for just the arm host build. In the first folder, running make will build only the epiphany code; in the second, only the host code. These two separate builds can then be packaged separately and used on the target.

At the moment, as soon as I configure for epiphany, this also configures the host build, and building then produces both epiphany and arm code. The yocto environment overrides a lot of the settings made by configure using environment variables. Mixed builds tend not to work well because the overrides for epiphany conflict with the overrides needed for the arm build. It would be possible to remove the yocto environment overrides, but this goes against the yocto way of doing things and might end up with a significantly more complex build environment.
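
A sketch of the two-folder idea (assuming the source configure can be pointed at each target directly, and that omitting --enable-device-epiphany gives a host-only build; both are assumptions, not tested):

$ mkdir build-epiphany && cd build-epiphany
$ /path/to/pal/source/configure --host=epiphany-elf
$ make    # epiphany code only
$ cd ..
$ mkdir build-arm && cd build-arm
$ /path/to/pal/source/configure --host=arm-linux-gnueabi
$ make    # arm host code only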

Peter

olajep (Member) commented Sep 17, 2016

Hi Peter,

Which environment variables?

Environment variables (e.g. CFLAGS, LDFLAGS) are only considered at configure time and are then ignored by make, so this works:

$ CFLAGS="-invalid-ignored-option" make

I still don't see why you need to split the build into two. The top-level pal build will take care of the device build. You don't create a separate build for the epiphany C runtime when building the GCC compiler; that's taken care of by the GCC build.

This is how it should work. The Yocto build system probably already does most of the steps below:

1. Create a build directory:

mkdir pal-build
cd pal-build

2. Configure a cross build for arm with the epiphany device enabled and some custom flags (concept example):

/path/to/pal/source/configure \
--host=arm-linux-gnueabi \
CFLAGS_FOR_EPIPHANY="-Os -ggdb" \
CFLAGS="-O0 -ggdb -I/opt/adapteva/esdk/tools/e-gnu.x86_64/include/" \
LDFLAGS="-L/opt/adapteva/esdk/tools/e-gnu.x86_64/lib -Wl,-rpath-link=/opt/adapteva/esdk/tools/e-gnu.x86_64/lib/" \
--enable-device-epiphany-sim --prefix=/usr

3. make

4. Install to a staging area:

make DESTDIR=/path/to/staging install

5. Then split the output into two packages, pal and pal-epiphany:
https://www.yoctoproject.org/docs/1.8/ref-manual/ref-manual.html#package-splitting-dev-environment

Everything in /path/to/staging/usr/epiphany-elf goes into pal-epiphany; everything else goes into pal.

You might also want to add a -dev package for the include files.
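
A rough sketch of what that split could look like in the recipe, assuming the install layout above (the fragment is hypothetical, not from an existing meta-pal layer):

# pal-epiphany claims everything staged under ${prefix}/epiphany-elf;
# the default FILES settings then put the rest into pal
PACKAGES =+ "${PN}-epiphany"
FILES_${PN}-epiphany = "${prefix}/epiphany-elf"

The default FILES_${PN}-dev already claims ${includedir}, so the include files should land in the -dev package without extra work.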

Does this make any sense?

Ola

peteasa (Contributor, Author) commented Sep 18, 2016

Thank you for your detailed response. I will spend some time this week looking at the details and trying some things out.

Here are a few quick answers:

Which environment variables
http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/conf/bitbake.conf?h=krogoth shows that all the key environment variables that affect the way GNU configure and make work are part of the yocto environment. For example:

export CC = "${CCACHE}${HOST_PREFIX}gcc ${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS}"

This is why building the epiphany code in a separate build from the arm code is preferred.

You don't create a separate build for the epiphany C runtime when building the GCC compiler, that's taken care of by the GCC build
In the yocto build of the gcc compiler each part is built separately, with a separate recipe and with dependencies clearly identified; look at http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/recipes-devtools/gcc?h=krogoth. Each .bb file is read by the yocto python scripts, and each results in a separate build from the shared gcc source, with the resulting output separately packaged. I have used exactly the same approach for the epiphany-elf-gcc build. This is very different from the way the official e-gcc compiler is built.
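
To illustrate the pattern (recipe names below are hypothetical, modelled on the poky gcc recipes linked above):

# one .bb per build stage, all building from the shared source, e.g.:
#   epiphany-elf-binutils.bb   - assembler and linker
#   epiphany-elf-gcc.bb        - the cross compiler itself
#   epiphany-elf-newlib.bb     - C runtime, with its dependency declared:
DEPENDS = "epiphany-elf-gcc"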

Then split the output into two packages
The yocto documentation does not detail the major issues there are with packaging object files for different processor types. In practice the QA part of yocto checks that the object files in a package are of the correct type. Typically epiphany object files fail the "arch" package check - http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/insane.bbclass?h=krogoth#n493 - because as far as the yocto build is concerned it only handles arm object files. This is handled by skipping various QA checks:

INSANE_SKIP_${PN}-staticdev = "arch"

When a package (PN == package name) contains object files for different CPU types, this complicates the configuration. It is possible to package each CPU type separately from one package name, but that is not a good yocto way of doing things. The better way is to have a different package for the object files of each CPU type.
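
Combining the per-CPU split with the QA skip might look something like this (a sketch with hypothetical package and path names):

# a separate package per CPU type; the "arch" QA check is skipped
# only for the package that holds the epiphany object files
PACKAGES =+ "${PN}-epiphany-staticdev"
FILES_${PN}-epiphany-staticdev = "${prefix}/epiphany-elf/lib/*.a"
INSANE_SKIP_${PN}-epiphany-staticdev = "arch"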
