
Routing, IPv6, secondary IFs, traffic control, tunnelling trial... #122

Open
wants to merge 9 commits into master

Conversation

pierrecdn

Added one argument to pipework, allowing routes to be specified inside the container.
Use cases:

  • More than one internal interface
  • Multicast routing
  • ...

@pierrecdn
Author

Argh. I had not seen PR #27 by @pjkundert, which is much richer!

@pierrecdn
Author

Hi,

One "large" push (to the pipework scale) yesterday.

As I can't wait for these proposals on Docker:

and more generally for the industrialization of Docker on the network side with its new acquisition, SocketPlane,

I'd like to extend your little toolkit, because I plan to integrate it into an orchestration stack (https://github.com/mesos/mesos with custom schedulers and executors).

With these enhancements, I'm able to handle these use cases:

  • manage routing
pipework ovsbr-eth1 $(docker run ...) -i eth1 -a ip 192.168.4.2/24@192.168.4.1 -r 10.10.0.0/16,10.20.0.0/16
pipework ovsbr-eth2 $(docker run ...) -i eth2 -a ip 192.168.8.2/24@192.168.8.1
# default route goes on eth2
  • manage IPv6 addressing and routing (requirement: Docker > 1.5)
pipework ovsbr-eth1 $(docker run ...) -a ip 2001:db8::beef/64@2001:db8::1 
# eth1 is globally reachable if the underlying host network is configured accordingly
  • add secondary IPs (v4/v6) on an interface (doesn't require establishing the link, etc.)
pipework ovsbr-eth1 $(docker run ...) -a sec_ip 2001:db8::face/64
  • add traffic control/QoS rules using tc
pipework ovsbr-eth1 $(docker run ...) -a tc qdisc add dev eth1 root netem loss 30%
  • add specific interfaces, like IPIP or GRE tunnels. Unfortunately, I faced many problems in the Linux kernel area; I think this part is unusable for now. Any advice? (a rough sketch of the intent follows below)
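A rough sketch of that tunnel case with plain iproute2 inside the container's network namespace (assuming the ipip module is loaded on the host; $NSPID, names, and addresses are illustrative, not part of this PR):

# create an IPIP tunnel device inside the container's netns and address it
ip netns exec $NSPID ip tunnel add tunl1 mode ipip local 192.168.4.2 remote 203.0.113.1
ip netns exec $NSPID ip link set tunl1 up
ip netns exec $NSPID ip addr add 10.99.0.2/32 dev tunl1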

I also imported the argument-parsing logic from #27; with more capabilities, this seems mandatory.

Some fixes remain to be made, as I didn't test all the existing use cases.

@pierrecdn pierrecdn changed the title from "Adding route establishment on the internal interfaces" to "Routing, IPv6, secondary IFs, traffic control, tunnelling trial..." Mar 5, 2015
@jpetazzo
Owner

Hi!

I think we've discussed this briefly a couple of times, and as I'm trying to cleanup my inbox, this means I actually have to give official feedback on this :-)

First, I appreciate the contribution a lot. This is a good amount of work, and I'm sure it can be extremely useful. But I'm wondering if this might also be a sign that pipework (if it continues to exist) needs to adopt a more modular model, have tests, etc.

I don't feel like merging this right now, because to be honest I have no idea how it will affect others, and as they report issues and ask questions I'll be totally out of touch with the project. But I still want your work to be useful.

A few questions:

  • are you using this in production?
  • are you maintaining your own fork on your side?
  • are you aware of the libnetwork efforts, and what do you think about them?
  • where are you based?

Let's see if/how we can merge that into pipework for the greater good :-)

@jpetazzo jpetazzo mentioned this pull request Jun 17, 2015
@pierrecdn
Author

Hi Jérôme,

Yes, we briefly discussed this once in Paris, at the SocGen meetup where you presented the storage drivers!

Before I deep-dive into my needs, I will answer your questions:

  • Yes, I'm using it in production. By production, I mean that "a little part of my business relies on it". For now, I handle microscopic traffic flows on this setup (due to other business constraints).
  • No, this fork (consisting of these few commits made at the end of February 2015) was a "one-shot commit". If I have to update/patch, I will do it here. It fits my needs, but I didn't extensively test the features I don't use.
  • I've been aware of the efforts around libnetwork and networking in general since 2014. There is a great community and a lot of work to specify many things. Unfortunately it's too generic so far, which is why I'm patching here and there (Docker, Mesos, etc.).
  • I'm based in Paris. We could discuss it when you come over (I think you're quite busy...).

Regarding pipework and this pull request, as my version is not backward compatible, I understand if you do not want to merge.

To be exhaustive, here are my company's constraints regarding this deployment (internal policies, etc.):

  • Three-tier design, with 3 connected NICs per host (admin, middle-end, and public when in the frontend zone)
  • L2 design imposed (bye bye Project Calico and others), no VLAN tagging at host level
  • Each container shall have its own IP in the company IP address space (ISP context, with complex routing schemes, a nearly full RFC 1918 space, overlaps to avoid, etc.)
  • I do not want to rely on any form of encapsulation to build the container network (VXLAN, GRE, etc.), with the potential vRouter or SPOF that implies
  • I experienced a legacy setup with LVS load balancers which is powerful (Keepalived + IPVS-TUN + Quagga), and I wanted to make it work at the container level (a rough sketch follows this list)
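A hypothetical sketch of that IPVS-TUN principle applied to a containerized real server (the VIP, addresses, and scheduler are placeholders, not taken from this PR; it assumes the ipip module is loaded so tunl0 exists in the container's netns):

# on the LVS director (Keepalived/ipvsadm side):
ipvsadm -A -t 203.0.113.10:80 -s wlc
ipvsadm -a -t 203.0.113.10:80 -r 192.168.4.2 -i    # -i = IPIP tunneling mode
# inside the container (real server): terminate the tunneled VIP traffic
ip addr add 203.0.113.10/32 dev tunl0
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0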

Schematically:

[Diagram: view at host-level]

[Diagram: view at container-level (IP tunneling case)]

Final requirements are the same as everyone's in 2015: full support of legacy features (especially IPv6), a PaaS platform, dynamic LB management to achieve easy deployment and scaling, maximized hardware utilization, etc.

At the container level, this implies managing (a consolidated sketch follows this list):

  • routing
  • add tunnel interfaces
  • multiple addresses per interface, including IPv6
  • connecting veth pairs to OVS bridges (this was already achieved in pipework)
  • QoS for advanced usages (that's why I added a related action to pipework).
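A consolidated sketch for a single container, assembled from the command syntax shown earlier in this thread; the image name, bridge names, gateway, and netem rule are purely illustrative:

CID=$(docker run -d myimage)
# primary interface, with routes towards the internal networks
pipework ovsbr-eth1 $CID -i eth1 -a ip 192.168.4.2/24@192.168.4.1 -r 10.10.0.0/16,10.20.0.0/16
# secondary IPv6 address on the same interface
pipework ovsbr-eth1 $CID -a sec_ip 2001:db8::face/64
# traffic control on eth1
pipework ovsbr-eth1 $CID -a tc qdisc add dev eth1 root netem loss 30%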

FYI, my current setup consists of:

  • Marathon > Mesos > Docker
  • Powerstrip > IPAM > pipework > Docker

At some levels, it's nothing more than dirty patching. My hope is to gradually re-integrate the huge work of the communities at different levels.

My thoughts about pipework, the libnetwork news, etc.:

  • pipework is clearly not sustainable in the long run, but it's a good plumbing tool for everyone waiting for libnetwork to handle such use cases.
  • I appreciate the libnetwork efforts in designing the CNM
  • modularity is the key; advanced users should not be forced to wait before implementing their use cases. I'm talking about:
    • docker events (ability to implement hooks)
    • network drivers.

For those who are interested in Mesos/Docker setups, I'm actively watching these issues, which look promising:

Unfortunately I don't have enough time right now to do all of this work properly and share it.

@dreamcat4

Hi @pierrecdn. Does this new feature allow the user to add host MACVLAN routes to let them see the docker containers on the same host?

The reason I ask is that I've implemented that specific feature externally to the pipework tool, in my Docker image (the wrapper script I put around pipework)...

Whereas if your patch gets integrated, then maybe I could remove that outside code.

You can find my "host routes" feature here:

https://github.com/dreamcat4/docker-images/blob/master/pipework/4.%20Config.md#host_routes

https://github.com/dreamcat4/docker-images/blob/master/pipework/entrypoint.sh#L173

I know it's not for exactly the same purpose. But can your feature do that too? Or is it only for L3 (IP) routes, e.g. with the Unix route command? Many thanks.
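A hypothetical illustration of the "host routes" idea being asked about here (the parent interface, device name, and container IP are illustrative; see the linked entrypoint.sh for the actual implementation):

# give the host its own macvlan interface on the same parent as the containers...
ip link add pwhost0 link eth0 type macvlan mode bridge
ip link set pwhost0 up
# ...then route each container's IP through it so host and containers can talk
ip route add 192.168.1.42/32 dev pwhost0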

@pierrecdn
Author

Hi @dreamcat4,

The goal of these features is to easily manipulate the network stack inside a container/netns.
For example, you may want to add a secondary interface to your container, route packets through it, add an IPv6 address to it, etc.
It relies on iproute2, just as pipework always has; there are no additional requirements.
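For context, a minimal sketch of that principle; pipework itself works along these lines, but the variable names and addresses here are illustrative:

# expose the container's network namespace to iproute2
GUEST_PID=$(docker inspect --format '{{ .State.Pid }}' "$CID")
mkdir -p /var/run/netns
ln -sf /proc/$GUEST_PID/ns/net /var/run/netns/$GUEST_PID
# any iproute2 command can now run inside the container's netns:
ip netns exec $GUEST_PID ip -6 addr add 2001:db8::beef/64 dev eth1
ip netns exec $GUEST_PID ip route add 10.10.0.0/16 via 192.168.4.1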

My current setup is based on OVS bridges, and yes it allows same host inter-containers communications.
With macvlan interfaces and bridge mode it can also work, yes (when I studied this in Q1 2015, I found this good presentation from NTT: http://events.linuxfoundation.org/sites/events/files/slides/LinuxConJapan2014_makita_0.pdf).
Reading your documentation, I understand the two approaches are similar. Passing an environment variable to provide the right pipework command to run is a good approach.

Regarding orchestration, my integration was a little trickier... I had many things to run when launching a container:

  • get free IP addresses from my pool,
  • make 3, 4 or 5 calls to pipework depending on the number of interfaces and the specific needs,
  • etc.

So I chose to patch the orchestrator to handle these custom requirements, based on the same principles as yours.

@dreamcat4

Interesting. The crane tool for orchestration has a generic hooks mechanism where users can specify their own pre- and post-command hooks.

The other thing that can be done inside a container's startup script is this:

if [ "$pipework_wait" ]; then
    for _pipework_if in $pipework_wait; do
        echo "Waiting for pipework to bring up $_pipework_if..."
        pipework --wait -i $_pipework_if
    done
    sleep 1
fi

This keeps waiting until the multiple pipework_cmd= commands have completed, given docker run -e pipework_wait="eth0 eth1 etc...".
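A hypothetical end-to-end invocation, assuming the container's entrypoint runs the wait loop above (the container name, bridges, and addresses are illustrative, using upstream pipework syntax):

docker run -d --name app -e pipework_wait="eth1 eth2" myimage
# then, from the host, bring up the interfaces the entrypoint is waiting for:
pipework ovsbr-eth1 -i eth1 app 192.168.4.2/24@192.168.4.1
pipework ovsbr-eth2 -i eth2 app 192.168.8.2/24@192.168.8.1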

@pierrecdn
Author

That's clearly a good and interesting design...
It may be simpler for me to maintain in the long run.

The only thing that seems difficult (from an organizational point of view) is what you described in your last message, i.e. the need to wrap each container's startup script to wait for its interfaces to be ready.

@dreamcat4

i.e. the need to wrap each container's startup script to wait for its interfaces to be ready.

Yes. But what's the best alternative to doing that?

@jpetazzo
Owner

I'd like to do a temperature check: you two on this thread are probably the most advanced users of pipework that I'm aware of, and I'd like to know whether you're still using it, whether you have abandoned it because the Engine and libnetwork now cover 90% of the use cases, or whether you are using a fork or another tool (because I've been really slow to maintain things).

I'm trying to decide what to do with pipework; I'd love to continue writing little tools in the same style (e.g. to support traffic control), but I'd also love to make them integrate cleanly with libnetwork and the main codebase.

Your feedback will be useful for that!

Thanks,

@dreamcat4

Hi @jpetazzo,

My own systems no longer use pipework. I switched just recently, as of Docker v1.10.2. However, it was not ideal at that time.

Yet as I'm sure you are aware, the upcoming Docker v1.11.x will be released very soon, and it brings even better networking support. An RC preview build is already available. I am very much looking forward to trying the new features (like the network drivers marked 'experimental').

My comments about it on Stackoverflow:

http://stackoverflow.com/a/36470828/287510

As for my pipework Docker image (the one wrapping your tool):

I recently added a deprecation notice to its "Status" section:

https://github.com/dreamcat4/docker-images/tree/master/pipework

Jérôme, you could probably put something similar into your README.md too, to direct users to the best guides/documentation for solving these needs the new way.

Of course there may also be users who continue to need pipework, for whatever specific reason(s). Honestly, I'm not entirely sure who is covered and who is not; as more networking drivers keep getting added, documenting each case is something of a moving target... [EDIT] which is why I am writing answers on Stack Overflow now.
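As an illustration of the built-in networking that now covers many of these use cases, a sketch using the macvlan driver (experimental around Docker 1.11, stable later; the parent interface, subnet, and image are illustrative):

docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 pub_net
docker run -d --net=pub_net --ip=192.168.1.42 myimage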

@pierrecdn
Author

Hi Jérôme,

My former company probably still uses that fork, but AFAIK there are no real improvements on their side anymore.

I wrote this patch to cover specific use cases: integrating Docker into a well-known network setup (and a somewhat constrained environment), i.e. VIPs + IP-IP tunnels to integrate with IPVS, a dual-stack setup, multiple Ethernet interfaces on the host connected to different networks, and L2 mode. We also had use cases in dev for traffic shaping.

The work achieved in libnetwork and docker since last year seems awesome, especially moby/libnetwork#964.

Maybe I would have taken a different approach if I had to do this today, but what I love about pipework is its simplicity (you basically manipulate a netns and you're done). Writing a libnetwork driver seems a bit harder if I had to add what's currently missing (secondary IPs, specific interfaces like tunl/ip6tnl, etc.).

@jpetazzo
Owner

jpetazzo commented Aug 3, 2017

For posterity: I'm leaving this PR open because it has very useful code; but since I'm not maintaining pipework anymore (except for trivial patches), it will probably never get merged. I'd still like to thank @pierrecdn (who wrote it) as well as @dreamcat4 (for the very insightful feedback) for their time and contributions. Thanks for being great open source citizens! (And for not hating me for being a bad maintainer 😅 )
