Export all iterations to be able to create animations #33

When preparing a public talk, I often want to include an animation of how t-SNE develops during gradient descent. I thought I could call `fast_tsne` in a loop with `max_iter=1` (loading the similarities from a file), but this yields a worse final result than running `fast_tsne` in one go; I assume this is because of the adaptive gradient descent. So it would be great to be able to run `fast_tsne` and export all `max_iter` iterations. What do you think?
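A minimal sketch of the per-iteration loop the issue describes, assuming the Python wrapper exposes `max_iter` and `initialization` arguments (parameter names vary across the R/Python/MATLAB wrappers, so treat this as illustrative rather than as the actual interface):

```python
import numpy as np
from fast_tsne import fast_tsne  # FIt-SNE Python wrapper; import path may differ

X = np.random.randn(1000, 50)  # toy data standing in for a real dataset

Y = None
frames = []
for i in range(1000):
    # Restart the optimizer every step from the previous embedding.
    Y = fast_tsne(X, max_iter=1, initialization=Y)
    frames.append(Y.copy())

# Problem: each call resets the adaptive gains and momentum of the gradient
# descent (and restarts the early-exaggeration schedule), so the final
# embedding is worse than a single fast_tsne(X, max_iter=1000) run.
```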
I think it's a great idea... maybe we can add it as an option? Also, in addition to the adaptive gradient descent, another reason why calling it with `max_iter=1` in a loop gives different results is the early exaggeration.
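For context, t-SNE multiplies the input affinities by an exaggeration factor during the first phase of the optimization, and a one-iteration call never advances through that schedule. A rough sketch of the schedule, assuming FIt-SNE's default parameter names and values (`early_exag_coeff=12`, `stop_early_exag_iter=250`):

```python
def exaggeration_at(iteration, early_exag_coeff=12.0, stop_early_exag_iter=250):
    """Exaggeration factor applied to the affinities at a given iteration."""
    return early_exag_coeff if iteration < stop_early_exag_iter else 1.0

# Calling the optimizer with max_iter=1 in a loop keeps restarting at
# iteration 0, so every call applies the full early exaggeration unless
# the caller adjusts the coefficient manually between calls.
```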
I was taking care to specify exaggeration correctly, so I think the only difference must have been due to the adaptive gradient descent... Anyway, optional output sounds right. I noticed that you have recently added another optional output, so we should take care that these two optional outputs work correctly together. Or maybe combine them? If some input flag is on, then all optional outputs are returned, and if not, then not? Or do you prefer to have a separate flag for each optional output?
I think you're referring to the R wrapper, correct? If so, then yes, it now optionally returns the KL divergence computed at every 50 iterations. I would have preferred that the output always be a list (i.e. so that the cost could also be outputted), but since people have already started using the old interface, I was hesitant to change the default and break people's code. Anyway, I think it would be most intuitive to have a separate flag. That is, a `get_costs` flag as we currently have, and (for example) an `intermediate_iterations` flag. If these are both false, then the embedding matrix `Y` is returned (by default); if either of them is true, then a list is returned. Do you think that is a good solution?
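In Python terms (with a dict standing in for R's list, and `run_tsne` as a stub standing in for the actual call into the binary), the proposed convention might look like this — a hypothetical sketch, not the actual wrapper code:

```python
import numpy as np

def run_tsne(X):
    """Stub standing in for the real FIt-SNE optimization."""
    Y = np.zeros((len(X), 2))
    costs = [0.0]
    history = [Y.copy()]
    return Y, costs, history

def fast_tsne_wrapper(X, get_costs=False, intermediate_iterations=False):
    Y, costs, history = run_tsne(X)

    # Default: return just the embedding, so existing code keeps working.
    if not (get_costs or intermediate_iterations):
        return Y

    # Any optional output requested: return a container instead.
    out = {"Y": Y}
    if get_costs:
        out["costs"] = costs
    if intermediate_iterations:
        out["history"] = history
    return out
```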
OK. Currently the C++ code always saves the costs, but I'm not sure it's a good idea with the gradient descent history: e.g. for 1 million points, the output would be 1,000,000 × 2 × 1000 values, which is pretty large... I was originally thinking of animating smaller datasets :)
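To put a number on that worst case: stored as 8-byte doubles, the full history comes to 16 GB (8 GB with 4-byte floats, as suggested further down):

```python
n_points, n_dims, n_iters = 1_000_000, 2, 1000

# One snapshot of the embedding per iteration, 8 bytes per double.
size_gb = n_points * n_dims * n_iters * 8 / 1e9
print(size_gb)      # 16.0 (GB)
print(size_gb / 2)  # 8.0 (GB) if stored as 4-byte floats instead
```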
@dkobak If this is still relevant, I have released a Python-only version of FIt-SNE, which was built with interactivity in mind (since we are integrating it into Orange). I've included a callback system where you can look at the embedding at each step of the optimization and animate it. I played around with this in the Orange widgets and the animations look really neat, but that hasn't been merged into master yet. If you need a quick fix, you can use it here. I definitely think this would be a great addition to have here, but I am not that familiar with C++ and I don't know how a callback system would work.
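Collecting per-iteration snapshots through such a callback might look roughly like this — a sketch assuming a fastTSNE-style API in which callbacks receive `(iteration, error, embedding)` and can be invoked every iteration (the exact argument and keyword names here are assumptions):

```python
import numpy as np
from fastTSNE import TSNE  # the Python-only package mentioned above

X = np.random.randn(1000, 50)  # toy data
frames = []

def record(iteration, error, embedding):
    # Snapshot the current embedding as one animation frame.
    frames.append(np.array(embedding))

tsne = TSNE(callbacks=record, callbacks_every_iters=1)
embedding = tsne.fit(X)
# frames now holds one 2D snapshot per iteration.
```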
@pavlin-policar I don't think there is a way to set up a callback system. My current thinking is that we should pass a boolean flag [so that the C++ code writes the embedding to the output file after every iteration]. For 25k points, the final output is [...].

@linqiaozhi Thoughts? I could try to implement it some time in the next few weeks.

@pavlin-policar Wow, thanks a lot for the link to your fastTSNE package. Great work! I might leave some comments over there.
@pavlin-policar A callback system for visualizing FIt-SNE in real time is super cool... I wish that were possible with our C++ code, but I can't imagine it working, since the wrappers are just calling a binary right now.

@dkobak I think your approach is very reasonable. I would only suggest that we output floats instead of doubles, so each element would be 4 bytes instead of 8, which would halve the file size (we don't need that much precision for the visualization anyway). There are other, more sophisticated things that could be done (e.g. only output a random subset of the points, or only specific iterations), but I don't think that's necessary at this point... this is a function that people will use only in very specific situations (e.g. diagnostics), so using some disk space and taking some time for I/O is probably okay. At least for a first implementation. Thanks for being willing to implement it!
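If the binary dumped the history as a flat file of 4-byte floats, reading it back for an animation could be as simple as the following — a hypothetical sketch; the file name and exact layout are assumptions, since the output format was not yet settled in this thread:

```python
import numpy as np

N, n_dims, n_iters = 25_000, 2, 1000  # points, embedding dims, iterations

# Assumed layout: n_iters consecutive N x 2 embeddings, float32, row-major.
history = np.fromfile("iterations.dat", dtype=np.float32)
history = history.reshape(n_iters, N, n_dims)

# history[i] is the embedding after iteration i, ready to plot as a frame.
```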