
Cost vs iterations plot #143

Open
raulconchello opened this issue Dec 2, 2022 · 2 comments
Labels: enhancement (New feature or request)

Comments


raulconchello commented Dec 2, 2022

Description

Currently, when you call q.results.plot_cost() you get a plot of cost vs number of function evaluations, like the following:

[figure: cost vs number of function evaluations]

It could be interesting to have another method that plots cost vs number of iterations, so as to see the cost at the end of each iteration of the classical optimizer.
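A rough sketch of what such a method could look like, assuming the results object stores one cost value per function evaluation together with the index of the last evaluation of each optimizer iteration (the name plot_cost_per_iteration and its arguments below are illustrative, not part of the current API):

import matplotlib.pyplot as plt

def plot_cost_per_iteration(costs, iteration_ends, ax=None):
    # costs: one cost value per function evaluation
    # iteration_ends: index of the last evaluation of each optimizer iteration
    per_iteration = [costs[i] for i in iteration_ends]
    if ax is None:
        _, ax = plt.subplots()
    ax.plot(range(1, len(per_iteration) + 1), per_iteration, marker="o")
    ax.set_xlabel("iteration")
    ax.set_ylabel("cost")
    return ax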

Q-lds added the enhancement label on Dec 16, 2022

Q-lds commented Dec 16, 2022

How would we do this? We would need:

  • a counter-label that indicates which indexes within the cost array belong to the given optimisation round
  • a way to average(?) over these multiple costs to provide a plot

The first part is easy (I think), but the second one ... I mean, that is optimiser dependent and it may not be trivial.
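One way to get that counter-label (a sketch only; LoggedCost and next_iteration are illustrative names, not existing code) is to wrap the cost function so that every evaluation is recorded together with the current optimiser iteration:

class LoggedCost:
    """Wrap a cost function and tag every evaluation with the current iteration."""

    def __init__(self, fun):
        self.fun = fun
        self.iteration = 0   # bumped once per optimiser iteration
        self.log = []        # list of (iteration, cost) pairs

    def __call__(self, params, *args):
        cost = self.fun(params, *args)
        self.log.append((self.iteration, cost))
        return cost

    def next_iteration(self):
        self.iteration += 1

The optimiser loop would call next_iteration() once per round; taking the last cost recorded under each label then gives a per-iteration value without having to average anything.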


raulconchello commented Dec 16, 2022

With @vishal-ph, we thought it could be nice to have this method when we were looking at a graph like the following:
[figure: cost vs number of function evaluations, maxiter = 20]
Here maxiter was 20, so only 20 iterations were performed, but 250 evaluations are displayed. This makes it rather difficult to see the actual evolution.

I don't think we need to average; we could just take the last evaluation of each iteration. That value is the optimized cost if we stop the optimization at that iteration.

For example, for SPSA:

def SPSA(...):
    ...
    def grad_SPSA(params, c):
        delta = (2*np.random.randint(0, 2, size=len(params))-1)
        return np.real((fun(params + c*delta) - fun(params - c*delta))*delta/(2*c))
    ...
    while improved and not stop and niter < maxiter:
        improved = False
        niter += 1

        # gain sequences
        a = a/(A+niter+1)**alpha
        c = c/(niter+1)**gamma

        # compute gradient descent step
        testx = testx - a*grad_SPSA(testx, c)
        testy = np.real(fun(testx, *args))

        # convergence check: stop once the change in cost drops below tol
        if np.abs(besty - testy) < tol:
            improved = False
        else:
            besty = testy
            bestx = testx
            improved = True

        ...

    return OptimizeResult(fun=besty, x=bestx, ...)

You would then plot the variable besty against niter.
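A sketch of that plot, assuming the optimiser appended besty to a list (called cost_history here, a hypothetical name) at the end of every iteration:

import matplotlib.pyplot as plt

def plot_cost_vs_iterations(cost_history):
    # cost_history: the value of `besty` at the end of each iteration
    iterations = range(1, len(cost_history) + 1)
    plt.plot(iterations, cost_history, marker="o")
    plt.xlabel("iteration (niter)")
    plt.ylabel("cost (besty)")
    plt.show()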
