Possible bug in weighted_fair when resource is 'bit' #717
There is an experimental work-conserving scheduler. If you add the following line at the beginning of the script, then t-0 will be scheduled even if t-1 is not 'runnable'.
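Presumably the line meant here is the same add_worker call that appears at the top of the reproducer below:

```python
# Create worker 0 with the experimental work-conserving scheduler
# (rather than the default scheduler).
bess.add_worker(wid=0, core=0, scheduler='experimental')
```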
The experimental scheduler seems very interesting. However, it turned out that what I really want is something like a WFQ (or Generalized Processor Sharing) packet/traffic scheduler, where every session/flow has a minimum guaranteed rate (calculated from the associated weights), but where the bandwidth of a silent source is redistributed among the active sessions according to the weights as well. I thought task scheduling might be good for that, but I'm not so sure anymore.

I repeated my original measurement with the scheduler set to 'experimental', and after a while I added another source that is routed to queue1. (I also modified the 'run' command not to clear the pipeline.) These are the results:
So in the first phase queue0 was served alone (receiving all the available resources), but in the second phase queue0 and queue1 do not share the resources equally: queue1 receives almost all of them. Do you think it is possible to configure BESS to provide WFQ-style traffic scheduling without writing a new module? Thank you. For reproducibility:
```python
import time

bess.add_worker(wid=0, core=0, scheduler='experimental')
bess.add_tc('root', 'weighted_fair', wid=0, resource='bit')

tagger = SetMetadata(attrs=[{'name': 'tag', 'size': 4, 'value_int': 0}])
split = Split(attribute='tag', size=4)

s::Source() -> tagger -> split

for i in range(2):
    q = Queue()
    name = 't-%s' % i
    bess.add_tc(name=name, parent='root', policy='round_robin', share=1)
    q.attach_task(parent=name)
    split:i -> q -> Sink()

bess.attach_task(s.name, wid=0)
bess.resume_all()
time.sleep(6)

# Second phase: add another source that is routed to queue1,
# without clearing the existing pipeline.
import time
bess.pause_all()
tagger2 = SetMetadata(attrs=[{'name': 'tag', 'size': 4, 'value_int': 1}])
s2::Source() -> tagger2
bess.connect_modules(tagger2.name, 'split0')
bess.attach_task(s2.name, wid=0)
bess.resume_all()
time.sleep(10)
bess.pause_all()
```
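To make the expected sharing concrete, here is a minimal sketch of GPS/WFQ bandwidth allocation (plain Python, not a BESS script; the 10 Gbit/s link rate and the equal weights are assumptions for illustration only): each active flow receives the link capacity in proportion to its weight, and silent flows contribute their share to the active ones.

```python
def gps_shares(link_rate, weights, active):
    """Split link_rate among the active flows in proportion to their weights.

    Silent flows get nothing; their bandwidth is redistributed to the active
    flows, which is what makes the discipline work-conserving.
    """
    total = sum(w for f, w in weights.items() if f in active)
    return {f: (link_rate * w / total if f in active else 0.0)
            for f, w in weights.items()}

weights = {'queue0': 1, 'queue1': 1}     # equal weights, as in the script above

# Phase 1: only queue0 is active, so it should receive the whole link.
print(gps_shares(10e9, weights, active={'queue0'}))
# -> {'queue0': 10000000000.0, 'queue1': 0.0}

# Phase 2: both queues are active, so a 50/50 split is expected.
print(gps_shares(10e9, weights, active={'queue0', 'queue1'}))
# -> {'queue0': 5000000000.0, 'queue1': 5000000000.0}
```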
I don't know if it's of use, but we already have a DRR implementation, and there is a defunct (but fixable) PR adding CoDel to the DRR implementation:
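In case the relationship to WFQ is unclear, here is a rough sketch of how deficit round robin shares bandwidth (a generic illustration of the algorithm, not the code of the DRR module mentioned above): each queue receives a byte quantum per round, proportional to its weight, and may dequeue packets as long as its accumulated deficit covers them.

```python
from collections import deque

def drr_round(queues, quanta, deficits):
    """Run one DRR round.

    queues:   name -> deque of packet sizes in bytes
    quanta:   name -> byte allowance added per round (proportional to weight)
    deficits: name -> leftover credit carried between rounds
    Returns the (queue, packet_size) pairs dequeued in this round.
    """
    sent = []
    for name, q in queues.items():
        if not q:
            deficits[name] = 0          # an empty queue keeps no credit
            continue
        deficits[name] += quanta[name]
        while q and q[0] <= deficits[name]:
            pkt = q.popleft()
            deficits[name] -= pkt
            sent.append((name, pkt))
    return sent

queues = {'q0': deque([1500, 1500, 64]), 'q1': deque([9000])}
quanta = {'q0': 1500, 'q1': 1500}        # equal weights
deficits = {'q0': 0, 'q1': 0}
for _ in range(6):
    print(drr_round(queues, quanta, deficits))
```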
Thank you for the detailed bug scenario. I found a bug in the weighted_fair traffic class when used with the experimental scheduler. I will make a PR for this soon.
Merged #729. Closing.
I think the current behavior of the weighted_fair scheduler when the resource is set to 'bit' is counter-intuitive. Take a look at the following BESS script:
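As a rough sketch (judging from the reproducer earlier in the thread, and not necessarily the exact original script), the pipeline was presumably the same one with the default scheduler and only the tag-0 source sending traffic:

```python
import time

# Worker with the default scheduler; 'root' does weighted fair sharing of bits.
bess.add_worker(wid=0, core=0)
bess.add_tc('root', 'weighted_fair', wid=0, resource='bit')

tagger = SetMetadata(attrs=[{'name': 'tag', 'size': 4, 'value_int': 0}])
split = Split(attribute='tag', size=4)

s::Source() -> tagger -> split

for i in range(2):
    q = Queue()
    name = 't-%s' % i
    bess.add_tc(name=name, parent='root', policy='round_robin', share=1)
    q.attach_task(parent=name)
    split:i -> q -> Sink()   # only gate 0 ever receives packets

bess.attach_task(s.name, wid=0)
bess.resume_all()
```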
Running the script results in a deadlock, because the split module never sends a packet to the second queue, and therefore that queue never forwards a packet although it is always 'runnable'.
I somehow expected that the scheduler would choose queue0, because queue1 was empty and queue0 would opportunistically work all the time. Now that I understand the current behavior I can live with it, but I still wonder whether it is intended. Thanks.
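To see why an empty-but-runnable queue can monopolize the scheduler, here is a toy model of weighted fair scheduling over 'bit' usage (a sketch for illustration only, not BESS's actual traffic-class code): each class carries a pass value that advances in proportion to the bits it consumed, and the class with the smallest pass is picked next. A class that is always runnable but never consumes any bits never advances, so it wins every pick and starves the other class.

```python
def pick_next(classes):
    """Pick the class with the smallest pass value (toy weighted fair model)."""
    return min(classes, key=lambda name: classes[name]['pass'])

# queue1 is 'runnable' but empty: whenever it runs it consumes 0 bits.
classes = {
    'queue0': {'pass': 0.0, 'share': 1, 'bits_per_run': 12000},
    'queue1': {'pass': 0.0, 'share': 1, 'bits_per_run': 0},
}

picks = []
for _ in range(8):
    name = pick_next(classes)
    c = classes[name]
    c['pass'] += c['bits_per_run'] / c['share']   # usage-proportional advance
    picks.append(name)

print(picks)
# ['queue0', 'queue1', 'queue1', 'queue1', ...]: once queue0 has consumed
# some bits, queue1's pass never moves, so queue1 is chosen from then on.
```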