Routing, IPv6, secondary IFs, traffic control, tunneling trial... #122
Conversation
Argh. I had not seen PR #27 by @pjkundert, which is much richer!
…#27 / Verbose mode to trace all iproute2 calls / etc.
… do not work as per the netns limitations (tested on 3.16 kernel)
Hi, one "large" push (on the pipework scale) yesterday. As I can't wait for these proposals on Docker:
and more generally the industrialization of Docker on the network side with its new acquisition, SocketPlane, I'd like to extend your little toolkit, because I plan to integrate it into an orchestration stack (https://github.com/mesos/mesos with custom schedulers and executors). With these enhancements, I'm able to handle these use cases:
pipework ovsbr-eth1 $(docker run ...) -i eth1 -a ip 192.168.4.2/[email protected] -r 10.10.0.0/16,10.20.0.0/16
pipework ovsbr-eth2 $(docker run ...) -i eth2 -a ip 192.168.8.2/[email protected]
# default route goes on eth2
pipework ovsbr-eth1 $(docker run ...) -a ip 2001:db8::beef/64@2001:db8::1
# eth1 is globally reachable, provided the underlying host network routes the prefix
pipework ovsbr-eth1 $(docker run ...) -a sec_ip 2001:db8::face/64
pipework ovsbr-eth1 $(docker run ...) -a tc qdisc add dev eth1 root netem loss 30%
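To make the `-r` example above more concrete, here is a hedged sketch, not pipework's actual code, of what such a route option amounts to: expanding a comma-separated route list into `ip route add` calls inside the container's network namespace. The function name, the PID `4242`, and the dry-run `echo` are all illustrative assumptions.

```shell
# Hypothetical sketch (not pipework's real implementation): expand a
# comma-separated route list into "ip route add" commands for a netns.
# Commands are echoed rather than executed, so no root privileges are needed.
add_routes() {
  nspid="$1"; ifname="$2"; routes="$3"
  for prefix in $(printf '%s' "$routes" | tr ',' ' '); do
    echo "ip netns exec $nspid ip route add $prefix dev $ifname"
  done
}

add_routes 4242 eth1 "10.10.0.0/16,10.20.0.0/16"
```

Removing the `echo` would execute the commands for real (as root), which is essentially all the plumbing a tool like this needs on top of iproute2.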
I also imported the argument-parsing logic from #27; with more capabilities, this seems mandatory. Some fixes remain to be done, as I didn't test all the existing use cases.
…fault address selection)
Hi! I think we've discussed this briefly a couple of times, and as I'm trying to clean up my inbox, this means I actually have to give official feedback on this :-) First, I appreciate the contribution a lot. This is a good amount of work, and I'm sure it can be extremely useful. But I'm wondering if this might also be a sign that pipework (if it continues to exist) needs to adopt a more modular model, have tests, etc. I don't feel like merging this right now, because to be honest I have no idea how this will affect others, and as they report issues and ask questions I'll be totally out of touch with the project. But I still want your work to be useful. A few questions:
Let's see if/how we can merge that into pipework for the greater good :-)
Hi Jérôme, yes, we discussed this once briefly in Paris, at the SocGen meetup where you presented the storage drivers! Before I dive deep into my needs, I will answer your questions:
Regarding pipework and this pull request, as my version is not backward compatible, I understand if you do not want to merge it. To be exhaustive, I will expose my company's constraints regarding this deployment (internal policies, etc.):
Schematically:
Final requirements are the same as everyone's in 2015: full support of legacy features (especially IPv6), a PaaS platform, dynamic LB management to achieve easy deployment and scaling, maximized hardware utilization, etc. At the container level, this implies managing:
FYI, my current setup consists of:
At some level, it's nothing more than dirty patching. My hope is to gradually re-integrate the huge work of the communities at different levels. My thoughts about pipework, the libnetwork news, etc.:
For those interested in Mesos/Docker setups, I'm actively watching these issues, which look promising:
Unfortunately I don't have enough time for now to do all of this work properly and share it.
Hi @pierrecdn. Does this new feature allow the user to add host MACVLAN routes to let them see the Docker containers on the same host? The reason I ask is because I've implemented that specific feature externally to the pipework tool, in my Docker image (the wrapper script I put around pipework)... Whereas if your patch gets integrated then maybe I could remove that outside code. You can find my "host routes" feature here: https://github.com/dreamcat4/docker-images/blob/master/pipework/4.%20Config.md#host_routes https://github.com/dreamcat4/docker-images/blob/master/pipework/entrypoint.sh#L173 I know it's not for the exact same purpose. But can your feature do that too? Or are they only for L3 (IP) routes, e.g. with the unix
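For context on the "host routes" idea mentioned above, here is a hedged sketch (not dreamcat4's actual entrypoint.sh): a host normally cannot reach its own macvlan children through the parent interface, so one workaround is to create a dedicated macvlan interface on the host and route each container IP through it. The interface names (`hostvlan`, `eth1`) and addresses are illustrative, and the commands are only printed.

```shell
# Illustrative sketch of macvlan "host routes" (not the real wrapper script):
# create a host-side macvlan sibling and send per-container /32 routes
# through it, so host <-> container traffic works despite macvlan isolation.
# Commands are echoed, not executed.
host_routes() {
  parent="$1"; shift
  echo "ip link add hostvlan link $parent type macvlan mode bridge"
  echo "ip link set hostvlan up"
  for ip in "$@"; do
    echo "ip route add $ip/32 dev hostvlan"
  done
}

host_routes eth1 192.168.4.2 192.168.4.3
```

The `/32` routes are per-container, so a wrapper typically re-runs the last step whenever a container with a macvlan interface starts.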
Hi @dreamcat4, the goal of these features is to easily manipulate the network stack inside a container/netns. My current setup is based on OVS bridges, and yes, it allows inter-container communication on the same host. Regarding orchestration, my integration was a little more tricky... I had many things to run when launching a container:
So I chose to patch the orchestrator to handle these custom requirements, based on the same principles as yours.
Interesting. The other thing that can be done inside a container's startup script is this:

```shell
if [ "$pipework_wait" ]; then
  for _pipework_if in $pipework_wait; do
    echo "Waiting for pipework to bring up $_pipework_if..."
    pipework --wait -i $_pipework_if
  done
  sleep 1
fi
```

To keep waiting for the multiple interfaces.
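As a rough guess at what such a `--wait` amounts to internally (pipework's actual implementation may differ), the check can poll until the interface appears under `/sys/class/net`, giving up after a timeout. The `if_exists` helper is split out here, an assumption of this sketch, so the loop can be exercised without creating a real interface.

```shell
# Sketch only: poll for an interface to appear, roughly what a "--wait"
# style flag could do. if_exists is a separate helper so it can be stubbed.
if_exists() { [ -d "/sys/class/net/$1" ]; }

wait_for_if() {
  ifname="$1"; tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    if_exists "$ifname" && return 0
    tries=$((tries - 1))
    sleep 1
  done
  echo "Timed out waiting for $ifname" >&2
  return 1
}
```

The `sleep 1` after the wait loop in the snippet above then gives the interface a moment to finish configuration (address, routes) once the link exists.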
That's clearly a good and interesting design... The only thing that seems difficult (from an organizational point of view) is what you described in your last message, i.e. the need to wrap each container's startup script to wait for its interfaces to be ready.
Yes. But what's the best alternative to doing that?
I'd like to do a temperature check; you two on this thread are probably the most advanced users of pipework that I'm aware of, and I'd like to know if you're still using it, or if you have abandoned it because the Engine and libnetwork can cover 90% of the use cases, or if you are using a fork or another tool (because I've been really slow to maintain things). I'm trying to decide what to do with pipework; I'd love to continue writing little tools in the same style (e.g. to support traffic control), but I'd also love to make them cleanly integrated with libnetwork and the main codebase. Your feedback will be useful for that! Thanks!
Hi @jpetazzo, my own systems no longer use pipework. I switched just recently, since Docker v1.10.2. However it was not ideal at that time. Yet as I'm sure you are aware, the upcoming Docker v1.11.x will be released very soon, and it has even better networking support coming up; an RC preview build is already available. I am very much looking forward to trying the new (marked 'experimental') network drivers. My comments about it on Stack Overflow: http://stackoverflow.com/a/36470828/287510 As for my pipework Docker image (of yours): I recently added a deprecation notice to the "Status" section: https://github.com/dreamcat4/docker-images/tree/master/pipework Probably, Jerome, you can/should put something similar into yours. Of course there may also be users who continue to need pipework, for whatever specific reason(s). Honestly I'm not entirely sure who is covered and who is not covered; as more networking drivers keep getting added, documenting each case is something of a moving target... [EDIT] which is why I am writing answers on Stack Overflow now.
Hi Jérôme, my former company probably still uses that fork, but AFAIK there are no real improvements on their side anymore. I wrote this patch to cover specific use cases: integrating Docker into a well-known network setup (and a somewhat constrained environment), i.e. VIPs + IP-IP tunnels to integrate with IPVS + a dual-stack setup + multiple Ethernet interfaces on the host, connected to different networks + L2 mode. We also had use cases in dev for traffic shaping. The work achieved in libnetwork and Docker since last year seems awesome, especially moby/libnetwork#964. Maybe I would have taken a different approach if I had to do this today, but what I love in pipework is its simplicity (you basically manipulate netns and you're done). Writing a libnetwork driver seems a bit harder if I had to integrate what's currently missing (secondary IPs, specific interfaces like tunl/ip6tnl, etc.).
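For readers curious about the "VIPs + IP-IP tunnel to integrate with IPVS" part: a hedged sketch of the typical iproute2/sysctl calls for an IPVS tunnel-mode real server inside a container netns. The netns name and VIP are placeholders (not from this PR), and the commands are only printed; with the `ipip` module loaded, a `tunl0` device exists, the VIP is bound to it, and `rp_filter` is relaxed so decapsulated packets are accepted.

```shell
# Illustrative only: configure a netns as an IPVS tunnel-mode real server.
# Binds the VIP to tunl0 and relaxes reverse-path filtering on it.
# Commands are echoed, not executed.
ipvs_tun_realserver() {
  ns="$1"; vip="$2"
  echo "ip netns exec $ns ip link set tunl0 up"
  echo "ip netns exec $ns ip addr add $vip/32 dev tunl0"
  echo "ip netns exec $ns sysctl -w net.ipv4.conf.tunl0.rp_filter=0"
}

ipvs_tun_realserver mycontainer 192.0.2.10
```

In a real deployment one would also suppress ARP for the VIP on any shared L2 segment; that part is omitted here.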
For posterity: I'm leaving this PR open because it has very useful code; but since I'm not maintaining pipework anymore (except for trivial patches) it will probably never get merged. But I'd like to thank anyway @pierrecdn (who wrote it) as well as @dreamcat4 (for the very insightful feedback) for their time and contribution. Thanks for being great open source citizens! (And for not hating me for being a bad maintainer 😅 )
Added one argument to pipework allowing the user to specify routes in the container.
Use cases: