Discuss Beholder potentially becoming an officially supported TensorBoard plugin #33

To: @jart @dandelionmane @wchargin
I think @jart and @dandelionmane both mentioned potentially merging this repo into TensorBoard. Here are some of my questions.
Hi Chris, first of all - nice work! @jart and @wchargin have poked around the repository and run the demo, and we're all really impressed. The streaming image response with the multipart response content type is really neat; none of us have seen that before.
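(As an aside for readers unfamiliar with the technique: a multipart stream keeps a single HTTP response open and pushes a new image part whenever a frame is ready, so the browser repaints without polling. Below is a minimal sketch under assumptions; the werkzeug handler and every name in it are illustrative, not Beholder's actual code.)

```python
# Minimal sketch (assumed names, not Beholder's code): stream PNG
# frames over a single HTTP response via multipart/x-mixed-replace.
from werkzeug.wrappers import Response

def stream_frames(frame_source):
    # frame_source: hypothetical iterable yielding encoded PNG bytes.
    def generate():
        for png in frame_source:
            yield (b'--frame\r\n'
                   b'Content-Type: image/png\r\n\r\n' + png + b'\r\n')
    return Response(generate(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
```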
To answer your questions:

Whether we want to merge the repo: we're not sure yet, because we aren't sure how broad interest in the plugin would be. It'd be helpful for us to show it to a bunch of researchers within Google and see if it excites them; if it does, that would be a strong indicator for including it. Right now, the documentation is a bit sparse. Could you document the API, and create an example that shows how it is practically useful for TF users' workflows? A guided tour kind of like my Dev Summit tutorial would be sweet.

Here are my views of the pros and cons:

Merge: Pros
Keep Separate: Pros
We'd need to review the code more thoroughly to say. The copy+pasted ffmpeg code might be an issue for synchronizing into Google (although you did a nice job keeping the licensing clear). We'd need to change the way Pillow is depended on. Overall, I don't expect drastic changes.
Probably it would take us a week or two to decide, after you've provided more documentation and a motivating demo.
After we merged the code, you would continue to be the owner of the plugin. We would do code reviews, but mostly defer to you on directions the plugin should take.
Probably it would be best to mothball it with a message at the top of the readme pointing to its new home in TensorBoard.
Thanks! I'm glad you like it. :)

In regards to how it is practically useful, I started the why-is-this-even-useful section of my project report (part of my Master's requirements) today. Turns out it's kinda useful! Vanishing gradients could be pretty easy to detect, based on one contrived example I tried.

For CNNs, I reshape conv weights so that each row of [kernel height, kernel width] blocks represents an input channel and each column of blocks represents an output channel. Because of this, you can tell at a glance whether there is high variance among convolutional filters. In this untrained GAN, there is not: [screenshot]

In a pre-trained VGG network, there is more variance (though some redundancy might lead us to believe we don't need as many output channels for this layer): [screenshot]

It brings up interesting exploratory questions. This one is from the same pre-trained VGG network... but why is there that one channel of a filter that is so much higher than the rest? Why is that input channel (row) so distinctive? [screenshot]

I also added color images to the

There's still a lot of potential enhancements, but it's kinda fun to play with as is. I think I'll include some examples like this in the README. I also got a suggestion to include some presets (I'm going to have so many options I won't know where to put them all) for common issues. For example, to detect vanishing gradients you would want to see variance over time, and use the "all sections" image scaling option.

Pillow would be installed during

I haven't tested Mac or Windows. At this point, I'd lean towards keeping it in this repo for a few weeks, and then passing it off sometime after that if I can (given interest, etc.), because I'll likely be very busy September through December trying to cram classes in to graduate a bit early. I imagine researchers at Google will have all kinds of ideas on what they would like to see different; please pass those on!
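(A minimal NumPy sketch of the block layout Chris describes above, assuming TensorFlow's [kernel_height, kernel_width, in_channels, out_channels] weight ordering; the function name is made up and this is not Beholder's actual implementation.)

```python
import numpy as np

def conv_weights_to_grid(w):
    # w: conv kernel of shape [kh, kw, in_ch, out_ch].
    kh, kw, in_ch, out_ch = w.shape
    # Reorder to [in_ch, kh, out_ch, kw], then collapse so that each
    # row of (kh, kw) blocks is one input channel and each column of
    # blocks is one output channel.
    return w.transpose(2, 0, 3, 1).reshape(in_ch * kh, out_ch * kw)
```

Rendering this grid as a grayscale image gives the at-a-glance variance comparison described above.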
This is very useful information @chrisranderson. Thank you.

Regarding what @dandelionmane said earlier about a guided tour, one easy way you could do that on a Mac is by opening QuickTime Player and clicking File | New Screen Recording. You can then draw a rectangle on your screen, which it records along with your voice and mouse cursor. The ideal length would probably be five to ten minutes. Then upload it to YouTube. I think this will be the most effective way to communicate to our researchers why they should feel enthusiastic about what you wrote. After all, Beholder is basically a video stream into TensorFlow. Explaining it with a video just feels like the right thing to do. But most importantly, a video would stand a better chance of convincing them to sit down and fiddle with Beholder for a few hours with their own models.

Regarding Pillow, have you considered using TensorFlow instead? Your encoding code could look something like this:

```python
import tensorflow as tf

# Sketch of a generator method body; self._fetch_current_frame()
# returns the next frame as a uint8 array.
with tf.Session() as sess:
  x = tf.placeholder(tf.uint8, name='x')
  encode = tf.image.encode_png(x)
  while True:
    yield sess.run(encode, feed_dict={x: self._fetch_current_frame()})
```

Ignoring initialization, each
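(For comparison, the Pillow-based path this would replace presumably looks something like the sketch below. This is a guess at its shape, not Beholder's actual code; `fetch_frame` is a hypothetical callable.)

```python
import io
from PIL import Image

def png_frames_pillow(fetch_frame):
    # fetch_frame: hypothetical callable returning a uint8 HxWxC array.
    while True:
        buf = io.BytesIO()
        Image.fromarray(fetch_frame()).save(buf, format='PNG')
        yield buf.getvalue()
```

The in-graph `tf.image.encode_png` variant removes the Pillow dependency entirely, at the cost of one `sess.run` per frame.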
@jart @dandelionmane You're both right about a video. Maybe someday I'll work up the courage to put something out there... for now, I've added a bunch of info to the README. I could use that as an outline for a video. @jart I have not thought about using TensorFlow, but I guess it isn't a bad idea to reduce the number of dependencies. I'll add an issue.
@jart How goes feedback gathering?
(Justine is OOO until Monday.) We posted a link to your video and this issue in the Brain chat. There was a general positive response ("real time view of things would be really cool," etc.). One thing that someone noted was that network overhead could be an issue if data is on a remote filesystem, but our progress toward SQL-mode could help with this. That's all I have for you so far. :-)
We discussed Beholder at our team meeting today. There is unanimous support for merging it into TensorBoard. It's a fantastic contribution. Everyone really loves it. There are only two small things we want to see happen beforehand:
Everything else can be done by iterating after the merge, like transient tensor delivery over the network. We're still not 100% sure how we want to approach it. But we don't want that to stop you from bringing your work to the community.
That's great news! Thank you for getting back to me on this. It will be fun to share this when I present my project to the committee on Wednesday. I'll add issues for what you mentioned, add at least one of my own, and put them on a milestone.
Here's the milestone: https://github.com/chrisranderson/beholder/milestone/1.
I'm still (slowly) working on this. I almost have

@jart From the milestone, what do you think is necessary before handing everything over? How will that work? Will git history be preserved?
It begins! Here is the pull request to start merging: tensorflow/tensorboard#613