[WIP] Improve networking options for libvirtd target #922

Closed
wants to merge 12 commits
104 changes: 95 additions & 9 deletions doc/manual/overview.xml
@@ -1525,15 +1525,6 @@ add your user to libvirtd group and change firewall not to filter DHCP packets.
</programlisting>
</para>

<para>Next we have to make sure our user has access to create images by
executing:
<programlisting>
$ sudo mkdir /var/lib/libvirt/images
$ sudo chgrp libvirtd /var/lib/libvirt/images
$ sudo chmod g+w /var/lib/libvirt/images
</programlisting>
</para>

<para>We're ready to create the deployment, start by creating
<literal>example.nix</literal>:

@@ -1602,6 +1593,101 @@ deployment.libvirtd.extraDevicesXML = ''
</para>
</note>

<section>
<title>Remote libvirtd server</title>

<para>
By default, NixOps uses the local libvirtd daemon (<literal>qemu:///system</literal>). It is also possible to
deploy to a
<link xlink:href="https://libvirt.org/remote.html">remote libvirtd server</link>.
Remote deployment requires two things:

<itemizedlist>

<listitem>Pointing <code>deployment.libvirtd.URI</code> to the
<link xlink:href="https://libvirt.org/remote.html">remote libvirtd server</link>
instead of <literal>qemu:///system</literal>.
</listitem>

<listitem>
Configuring the network so that the VM running on the remote server is
reachable from the local machine. This is required so that NixOps can reach the
newly created VM over SSH to finish the deployment.
</listitem>

</itemizedlist>
</para>

<para>Example: suppose the remote libvirtd server is located at 10.2.0.15.</para>
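
<para>
Before deploying, it is worth checking that the remote daemon is actually
reachable. One quick way (assuming your local user has SSH access to an account
on 10.2.0.15 that is allowed to manage <literal>qemu:///system</literal>) is:

<screen>
$ virsh -c qemu+ssh://10.2.0.15/system list --all
</screen>
</para>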

<para>
First, create a new <link
xlink:href="https://wiki.libvirt.org/page/TaskRoutedNetworkSetupVirtManager">routed
virtual network</link> on the libvirtd server. In this example we'll use the
192.168.122.0/24 network named <literal>routed</literal>.
</para>
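
<para>
If you prefer the command line over virt-manager, such a network can also be
defined with <literal>virsh</literal> against <literal>qemu:///system</literal>
on the libvirtd server. The following is only a sketch: the forward device
(<literal>eth0</literal>) and the DHCP range are assumptions that need to be
adapted to the server:

<screen><![CDATA[
$ cat > routed.xml <<'EOF'
<network>
  <name>routed</name>
  <forward mode='route' dev='eth0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
EOF
$ virsh net-define routed.xml
$ virsh net-start routed
$ virsh net-autostart routed
]]></screen>
</para>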

<para>
Next, add a route to the virtual network via the remote libvirtd server. This
can be done by running this command on the local machine:

<screen>
# ip route add to 192.168.122.0/24 via 10.2.0.15
</screen>
</para>

<para>
Now, create a NixOps configuration file <literal>remote-libvirtd.nix</literal>:

<programlisting>{
example = {
deployment.targetEnv = "libvirtd";
deployment.libvirtd.URI = "qemu+ssh://10.2.0.15/system";
deployment.libvirtd.networks = [ "routed" ];
};
}
</programlisting>
</para>

<para>
Finally, deploy it with NixOps:

<screen>
$ nixops create -d remote-libvirtd ./remote-libvirtd.nix
$ nixops deploy -d remote-libvirtd
</screen>
</para>
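
<para>
Once the deployment has finished, the new VM should be reachable from the local
machine through the routed network, for example:

<screen>
$ nixops ssh -d remote-libvirtd example
</screen>
</para>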

</section>

<section>
<title>Libvirtd storage pools</title>

<para>
By default, NixOps uses the <literal>default</literal>
<link xlink:href="https://libvirt.org/storage.html">storage pool</link> which
usually corresponds to the <filename>/var/lib/libvirt/images</filename>
directory. You can choose another storage pool with the
<code>deployment.libvirtd.storagePool</code> option:
Contributor commented:

When trying out this PR, I got:

libvirt: Storage Driver error : Storage pool not found: no storage pool with matching name 'default'

This looks like simon3z/virt-deploy#8.

And indeed for me:

% virsh pool-list 
 Name                 State      Autostart 
-------------------------------------------

Can we make this work also without the default storage pool or do we need to instruct the user to set this default storage pool up?


<programlisting>
{
example = {
deployment.targetEnv = "libvirtd";
deployment.libvirtd.storagePool = "mystoragepool";
};
}
</programlisting>
</para>
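
<para>
The chosen storage pool has to exist before you deploy; otherwise libvirt fails
with an error such as <literal>Storage pool not found</literal>. If
<literal>virsh pool-list</literal> shows no <literal>default</literal> pool, one
way to create it as a <literal>dir</literal> pool is:

<screen>
$ virsh pool-define-as default dir --target /var/lib/libvirt/images
$ virsh pool-build default
$ virsh pool-start default
$ virsh pool-autostart default
</screen>
</para>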

<warning>
<para>NixOps has only been tested with storage pools of type <code>dir</code> (filesystem directory).
Attempting to use a storage pool of any other type with NixOps may not work as expected.
</para>
</warning>

</section>

</section>

<section><title>Deploying Datadog resources</title>
41 changes: 33 additions & 8 deletions nix/libvirtd.nix → nix/libvirtd/default.nix
@@ -4,7 +4,7 @@ with lib;

let
sz = toString config.deployment.libvirtd.baseImageSize;
base_image = import ./libvirtd-image.nix { size = sz; };
base_image = import ./image.nix { size = sz; };
the_key = builtins.getEnv "NIXOPS_LIBVIRTD_PUBKEY";
ssh_image = pkgs.vmTools.runInLinuxVM (
pkgs.runCommand "libvirtd-ssh-image"
@@ -45,11 +45,19 @@ in
###### interface

options = {
deployment.libvirtd.imageDir = mkOption {
type = types.path;
default = "/var/lib/libvirt/images";
deployment.libvirtd.storagePool = mkOption {
type = types.str;
default = "default";
description = ''
The name of the storage pool where the virtual disk is to be created.
'';
};

deployment.libvirtd.URI = mkOption {
type = types.str;
default = "qemu:///system";
description = ''
Directory to store VM image files. Note that it should be writable both by you and by libvirtd daemon.
Connection URI.
'';
};

@@ -96,9 +104,11 @@ in
};

deployment.libvirtd.networks = mkOption {
default = [ "default" ];
type = types.listOf types.str;
description = "Names of libvirt networks to attach the VM to.";
type = types.listOf (types.submodule (import ./network-options.nix {
inherit lib;
}));
default = [];
Member commented:

  `default = [{ source = "default"; type= "bridge"; }];` might be best for backwards compatibility.

description = "Networks to attach the VM to.";
};

deployment.libvirtd.extraDevicesXML = mkOption {
@@ -156,6 +166,21 @@ in
services.openssh.extraConfig = "UseDNS no";

deployment.hasFastConnection = true;

services.udev.extraRules = ''
SUBSYSTEM=="virtio-ports", ATTR{name}=="org.qemu.guest_agent.0", TAG+="systemd" ENV{SYSTEMD_WANTS}="qemu-guest-agent.service"
'';

systemd.services.qemu-guest-agent = {
description = "QEMU Guest Agent";
bindsTo = [ "dev-virtio\\x2dports-org.qemu.guest_agent.0.device" ];
after = [ "dev-virtio\\x2dports-org.qemu.guest_agent.0.device" ];
Member commented:

For some reason this creates a problem with my config

serviceConfig = {
ExecStart = "-${pkgs.kvm}/bin/qemu-ga";
Restart = "always";
RestartSec = 0;
};
};
};

}
5 changes: 5 additions & 0 deletions nix/libvirtd-image.nix → nix/libvirtd/image.nix
@@ -4,6 +4,11 @@ let
config = (import <nixpkgs/nixos/lib/eval-config.nix> {
inherit system;
modules = [ {

imports = [
<nixpkgs/nixos/modules/profiles/qemu-guest.nix>
];

fileSystems."/".device = "/dev/disk/by-label/nixos";

boot.loader.grub.version = 2;
23 changes: 23 additions & 0 deletions nix/libvirtd/network-options.nix
@@ -0,0 +1,23 @@
{ lib } :

with lib;
{
options = {

source = mkOption {
type = types.str;
default = "default";
description = ''
  Name of the libvirt network ("virtual") or host bridge device ("bridge") to attach the interface to.
'';
};

type = mkOption {
type = types.enum [ "bridge" "virtual" ];
default = "virtual";
description = ''
  Whether the source refers to a host bridge device or to a libvirt-managed virtual network.
'';
};

};

}
2 changes: 1 addition & 1 deletion nix/options.nix
@@ -23,7 +23,7 @@ in
./gce.nix
./hetzner.nix
./container.nix
./libvirtd.nix
./libvirtd
];


3 changes: 2 additions & 1 deletion nixops/backends/__init__.py
@@ -334,7 +334,8 @@ def run_command(self, command, **kwargs):
# mainly operating in a chroot environment.
if self.state == self.RESCUE:
command = "export LANG= LC_ALL= LC_TIME=; " + command
return self.ssh.run_command(command, self.get_ssh_flags(), **kwargs)

return self.ssh.run_command(command, **kwargs)
Contributor commented:

Why is self.get_ssh_flags() removed here?

Member (author) replied:

See the commit message for 574ba39. Although this is quick-and-dirty WIP code, I always try to write comprehensible commit messages, so git blame can help people understand changes.

MachineState.run_command() passes SSH flags to self.ssh.run_command().
However, self.get_ssh_flags() is already registered as a ssh_flag_fun in the
class __init__() function, so ssh_util.SSH already uses it to get the flags
when initiating a connection. This led to the SSH flags being duplicated, which
caused an error for some flags (e.g. the -J flag, which can only be specified once).

Contributor replied:

My bad -- I didn't pay enough attention. Great commit message!


def switch_to_configuration(self, method, sync, command=None):
"""