HOWTO: Creating images for additional Virtual Application types
Before fully making use of CBTOOL, you will have to create images for all the other Virtual Application types. While we provide a method for the automatic creation of all images, unfortunately some packages and binaries (e.g., coremark, parboil) simply cannot be automatically downloaded, due to licensing restrictions.
Go to cbtool/3rd_party/workload/ and open manually_download_files.txt. Follow the instructions there to download all requested files; in the end you should have this directory populated with a series of .tar, .tgz, .deb and .rpm files (again, this procedure is required only for files that cannot be automatically downloaded).
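For illustration only, assuming the restricted packages were fetched with a browser into ~/Downloads (the download location is an assumption; the file names below are taken from the rsync listing shown later), placing them is just a copy:

# hypothetical: move the manually downloaded packages into the
# directory served by the CBTOOL file store
cp ~/Downloads/coremark_v1.0.tar ~/cbtool/3rd_party/workload/
cp ~/Downloads/pb2.5benchmarks-2.tgz ~/cbtool/3rd_party/workload/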
cbuser@klinux:~/cloudbench$ ~/cbtool/cb --soft_reset
Cbtool version is "55c720f"
Parsing "cloud definitions" file..... "/home/cbuser/cbtool/configs/cbuser_cloud_definitions.txt" opened and parsed successfully.
Checking "Object Store".....An Object Store of the kind "Redis" (shared) on node 172.17.1.2, TCP port 6379, database id "10" seems to be running.
Checking "Log Store".....A Log Store of the kind "rsyslog" (private) on node 172.17.1.2, UDP port 5114 seems to be running.
Checking "Metric Store".....A Metric Store of the kind "MongoDB" (shared) on node 172.17.1.2, TCP port 27017, database id "metrics" seems to be running.
Checking "File Store".....A File Store of the kind "rsync" (private) on node 172.17.1.2, TCP port 10000 seems to be running.
Executing "soft" reset: (killing all running toolkit processes and flushing Object store) before starting the experiment......
Killing all processes... done
Flushing Object Store... done
Checking for a running API service daemon.....API Service daemon was successfully started. The process id is ['17394'] (http://172.17.1.2:7070).
Checking for a running GUI service daemon.....GUI Service daemon was successfully started. The process id is ['17478', '17479'], listening on port 8080. Full url is "http://172.17.1.2:8080".
############################# Executing command "cldattach osk TESTOPENSTACK" (specified on the configuration file)
status: VPN configuration for this cloud already generated: /home/cbuser/cbtool/lib/auxiliary//../../configs/generated/TESTOPENSTACK_server-cb-openvpn.conf
status: OpenStack Cloud connection parameters: username=admin, password=<omitted>, tenant=admin, cacert=None, insecure=False, region_name=RegionOne, access_url=http://172.17.1.2:5000/v2.0/, endpoint_type=publicURL
status: Checking if the ssh key pair "cbuser_cbtool_rsa" is created on VMC RegionOne....
status: Checking if the security group "default" is created on VMC RegionOne....
status: Checking if the network "flat_net" can be found on VMC RegionOne...
status: Checking if the imageids associated to each "VM role" are registered on VMC "RegionOne"....
status: WARNING Image id for VM roles "xpingsender,xpingreceiver" is "1147b8f6-e81d-11e6-bcc9-6cae8b2ac98e" ("cb_xping") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "driver_tradelite,client_tradelite": "C5F67B76-920D-5102-A7F8-80F83657CB06" ("cb_tradelite") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "driver_hadoop,driver_netperf" is "b77ddafa-e81a-11e6-bcc9-6cae8b2ac98e" ("cb_hadoop") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "fio,driver_fio" is "b77c2a0a-e81c-11e6-bcc9-6cae8b2ac98e" ("cb_fio") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "windows,client_windows": "4322F915-BCBA-55FD-ADFE-A27D7FCC0D18" ("cb_windows") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "filebench,driver_filebench" is "a8bcba22-e81a-11e6-bcc9-6cae8b2ac98e" ("cb_filebench") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "nuttcpserver,nuttcpclient" is "7b3f3802-e81c-11e6-bcc9-6cae8b2ac98e" ("cb_nuttcp") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "giraphmaster,giraphslave" is "36079b4e-e81c-11e6-bcc9-6cae8b2ac98e" ("cb_giraph") is NOT registered (attaching VMs with any of these roles will result in error).
INFO Image id for VM roles "lb,tinyvm,yatinyvm" is "a1372342-e819-11e6-bcc9-6cae8b2ac98e" ("cb_nullworkload") and it is already registered.
WARNING Image id for VM roles "ycsb,seed,cassandra" is "50a90f36-e81d-11e6-bcc9-6cae8b2ac98e" ("cb_ycsb") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "client_ibm_daytrader,db2,driver_daytrader,was" is "f355287c-e819-11e6-bcc9-6cae8b2ac98e" ("cb_daytrader") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "netserver,netclient" is "c414381c-e81b-11e6-bcc9-6cae8b2ac98e" ("cb_netperf") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "mysql,geronimo,client_open_daytrader": "7FAD7B49-47BE-5A90-ABAC-7C13D31D0AE7" ("cb_open_daytrader") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "driver_coremark,coremark" is "dc92f2b8-e819-11e6-bcc9-6cae8b2ac98e" ("cb_coremark") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "iperfserver,iperfclient" is "70b6b104-e81b-11e6-bcc9-6cae8b2ac98e" ("cb_iperf") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "hadoopslave,hadoopmaster" is "5e8d0822-e81e-11e6-bcc9-6cae8b2ac98e" ("cb_hadoop") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "ddgen" is "98bee3fc-e81a-11e6-bcc9-6cae8b2ac98e" ("cb_ddgen") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "btest": "C4BA42BC-BEF3-539F-AF98-D10D95204902" ("cb_btest") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "unixbench": "91CA36BC-D6D8-56B8-83AF-50D44AB917E2" ("cb_unixbench") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "specjbb" is "16591610-e81c-11e6-bcc9-6cae8b2ac98e" ("cb_specjbb") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "postmark": "58A980CA-77D5-53E7-9BD3-4F969435A479" ("cb_postmark") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "linpack" is "f78fd2f4-e81c-11e6-bcc9-6cae8b2ac98e" ("cb_linpack") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "mongos,redis,mongodb,mongo_cfg_server" is "0b4e5740-e81b-11e6-bcc9-6cae8b2ac98e" ("cb_ycsb") is NOT registered (attaching VMs with any of these roles will result in error).
WARNING Image id for VM roles "cn_hpc,fen_hpc" is "ebe6200e-e81a-11e6-bcc9-6cae8b2ac98e" ("cb_hpcc") is NOT registered (attaching VMs with any of these roles will result in error).
status: VMC "RegionOne" was successfully tested.
The OSK cloud named "TESTOPENSTACK" was successfully attached to this experiment.
The experiment identifier is EXP-03-23-2017-01-22-56-PM-UTC
############################# Executing command "vmcattach all" (specified on the configuration file)
status: Removing all VMs previously created on VMC "RegionOne" (only VM names starting with "cb-rdu37-RDU37").....
status: VMC RegionOne (7A5BB013-DE73-5678-9D4C-0A1A068973F9) was successfully registered on OpenStack Cloud "RDU37"
status: Starting a new Host OS performance monitor daemon (gmetad.py)......
status: Host OS performance monitor daemon (gmetad.py) started successfully. The process id is ['30590'] (using ports 8637 and 8737).
All VMCs successfully attached to this experiment.
(TESTOPENSTACK)
Please note, early in the CBTOOL startup process, the message Checking "File Store".....A File Store of the kind "rsync" (private) on node 172.17.1.2, TCP port 10000 seems to be running. is displayed. CBTOOL by default starts an rsync server, according to the parameters HOSTNAME and PORT under the section [FILESTORE]. By default, the port is 10000 and the hostname is the value of the attribute MANAGER_IP, under the section [USER-DEFINED].
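For reference, a minimal sketch of how these parameters might appear in the configuration file parsed above (the exact stanza layout is an assumption; only the attribute names and defaults come from the description above):

[USER-DEFINED]
MANAGER_IP = 172.17.1.2

[FILESTORE]
# HOSTNAME defaults to the value of MANAGER_IP
HOSTNAME = 172.17.1.2
PORT = 10000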
cbuser@klinux:~/cbtool$ rsync --list-only rsync://172.17.1.2:10000/cbuser_cb/3rd_party/workload/
drwxrwxr-x 4,096 2017/03/01 13:30:38 .
-rw-r--r-- 1,454,080 2014/02/18 17:48:36 coremark_v1.0.tar
-rw-r--r-- 30,918,225 2015/05/08 16:28:03 l_lpk_p_11.3.0.004.tgz
-rw-rw-r-- 741 2017/03/01 13:11:21 manually_download_files.txt
-rw-r--r-- 14,325,760 2017/02/01 17:45:07 pb2.5benchmarks-2.tgz
-rw-r--r-- 764,856,320 2017/02/01 17:45:26 pb2.5datasets_standard-2.tgz
-rw-r--r-- 256,000 2017/02/01 17:45:26 pb2.5driver.tar
3. Optional, but highly recommended for anyone experimenting with "pure Docker" clouds (PDM): Build all the workloads from Dockerfiles
The automated image building mechanism used by CBTOOL can be executed against VMs, containers (Docker and LXD), or even bare-metal nodes. However, it extracts the actual commands used to install the dependencies directly from Dockerfiles. Therefore, it is always a good idea to first try to build all the Virtual Application types (workloads) from their original Dockerfiles. This way, if the Docker builds succeed, there is a high probability that the building of the actual workload images, be it on VMs, containers or bare-metal, will also work.
Assuming that the Docker engine is co-located with the CBTOOL Orchestrator Node, just execute:
cd cbtool/docker
./build_workloads.sh -r <myrepository>
In case the node running the Docker engine is different from the CBTOOL Orchestrator Node, you will have to copy the contents of the cbtool/docker directory there, and add an additional parameter to the execution:
cd cbtool/docker
./build_workloads.sh -r <myrepository> --rsync <FILESTORE_HOSTNAME>:<FILESTORE_PORT>/<CBTOOL_USERNAME>_cb
For instance, using the previously mentioned CBTOOL Orchestrator Node, we will have, for a non-colocated Docker engine:
cd cbtool/docker
./build_workloads.sh -r <myrepository> --rsync 172.17.1.2:10000/cbuser_cb
Typically, the building of Docker images for all Virtual Application types takes around 1 hour.
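Once the script finishes, a quick way to confirm the images exist is to list them on the Docker engine (the repository name below is just the placeholder used above):

docker images | grep <myrepository>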
4. Also optional, but highly recommended for anyone experimenting with "pure Libvirt" clouds (PLM): Build all the workloads into qcow2 images
The automated image building mechanism used by CBTOOL can be executed against VMs, containers (Docker and LXD), or even bare-metal nodes. However, for the direct creation of qcow2 images, we leverage the virt-customize utility. This will result in qcow2 images which can be either directly used by "pure Libvirt" clouds or imported into other clouds (e.g., OpenStack). Please note that we recommend performing this process after all Docker images containing the workloads are created (in the previous step), just to increase the likelihood of a successful build here.
Assuming that the Libvirt daemon is co-located with the CBTOOL Orchestrator Node, just execute:
cd cbtool/kvm-qemu
./build_workloads.sh -r <path to my libvirt storage pool>
In case the node running the Libvirt daemon is different from the CBTOOL Orchestrator Node, you will have to copy the cbtool/kvm-qemu directory there, and add an additional parameter to the execution:
cd cbtool/kvm-qemu
./build_workloads.sh -r <path to my libvirt storage pool> --rsync <FILESTORE_HOSTNAME>-<FILESTORE_PORT>-<CBTOOL_USERNAME>
For instance, using the previously mentioned CBTOOL Orchestrator Node, we will have, for a non-colocated Libvirt daemon:
cd cbtool/kvm-qemu
./build_workloads.sh -r <path to my libvirt storage pool> --rsync 172.17.1.2-10000-cbuser
Typically, the building of qcow2 images for all Virtual Application types takes around 2 hours.
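After the build completes, the resulting images can be inspected with standard tools; for example (the pool path below is the same placeholder used above, and the image file name is an assumption based on the cb_ naming convention seen earlier):

ls <path to my libvirt storage pool>/*.qcow2
qemu-img info <path to my libvirt storage pool>/cb_nullworkload.qcow2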
At this point we have: a) downloaded all the third-party requirements that required manual intervention, b) made them available through an rsync server running on the CBTOOL Orchestrator Node, and c) checked that all images can be built from Dockerfiles. We can now proceed to create the remaining images.
For instance, to create an image for the "iperf" VApp (run typeshow iperf on the CBTOOL CLI for more information), use the already created "nullworkload" image as a base (this is highly recommended, albeit not strictly necessary) and restart the previously described procedure, skipping directly to step 2.3: i.e., vmattach check:cb_nullworkload:ubuntu:iperf, then login and run the install command, followed by vmcapture youngest cb_iperf, as sketched below.
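A sketch of that CLI session (the prompt mirrors the TESTOPENSTACK cloud attached earlier; the install command run inside the VM is workload-specific and not shown here):

(TESTOPENSTACK) vmattach check:cb_nullworkload:ubuntu:iperf
... log into the newly attached VM, run the workload install command, then ...
(TESTOPENSTACK) vmcapture youngest cb_iperf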
When done, exit the CBTOOL CLI and re-execute it, forcing a re-read of the configuration file, with cb --soft_reset (--hard_reset could also be used).