Integration into tiny-cnn #3
@edgarriba
@naibaf7 Regarding the simplified interface, just to make clear what kind of data the libdnn interfaces will need.
@naibaf7 What do you think of this design?
@bhack
What do you want our responsibilities to be? Device, memory and context? Kernel launch?
@bhack But if you can wait 2 days, I will update libDNN with simplifications for the device, memory and context part (basically copied over from Caffe): just wrappers around CUDA and ViennaCL for memory allocation, device listing & initialization, and memory transfer.
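For context, a minimal raw-OpenCL sketch of the boilerplate such wrappers would hide from callers (illustrative only, not libdnn's actual API): device listing, context and queue creation, buffer allocation and transfer.

```cpp
// Minimal raw-OpenCL boilerplate: list a device, create context/queue,
// allocate a buffer and copy data to and from it. Error handling omitted.
#include <CL/cl.h>
#include <vector>
#include <cstdio>

int main() {
  cl_platform_id platform;
  clGetPlatformIDs(1, &platform, nullptr);

  cl_device_id device;
  clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, nullptr);

  cl_context context = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
  cl_command_queue queue = clCreateCommandQueue(context, device, 0, nullptr);

  // Allocate a device buffer and transfer data host -> device -> host.
  std::vector<float> host(1024, 1.0f);
  cl_mem buf = clCreateBuffer(context, CL_MEM_READ_WRITE,
                              host.size() * sizeof(float), nullptr, nullptr);
  clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                       host.data(), 0, nullptr, nullptr);
  clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                      host.data(), 0, nullptr, nullptr);

  clReleaseMemObject(buf);
  clReleaseCommandQueue(queue);
  clReleaseContext(context);
  std::printf("done\n");
  return 0;
}
```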
We have no problem handling device, memory and context. If you think it would be useful to have these features here, OK; if not, we will implement them in tiny. We need to think of the largest audience for this library, so if you think callers generally benefit from handling context, device and memory themselves for other kinds of operations, it is fine for us to implement that in tiny and go ahead with porting coverage of other kernels here.
@naibaf7
@edgarriba
with this branch: https://github.com/edgarriba/tiny-cnn/tree/libdnn
important routines:
@edgarriba If you need more assistance in fixing these, or want to discuss it in more detail, we can schedule a Skype conversation with screen sharing and review the code that way.
By the way, a random observation, not sure if this is useful or not: if you use the clew abstraction layer, then you can build for OpenCL without libOpenCL.so being present, and you can even run your program without libOpenCL.so being present. Your program can make a runtime decision about whether to try binding with OpenCL or not. Actually, I think it can even attempt to call…
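As an illustration of that runtime decision (a plain dlopen sketch, not clew's actual API): the program can probe for libOpenCL.so before deciding whether to take the OpenCL path at all.

```cpp
// Illustrative sketch (not clew itself): decide at runtime whether an OpenCL
// runtime is available by trying to dlopen libOpenCL.so, similar in spirit to
// how clew binds the OpenCL symbols lazily. Compile with -ldl on Linux.
#include <dlfcn.h>
#include <cstdio>

static bool opencl_available() {
  // clew searches a few hard-coded locations; this sketch tries a single name.
  void* handle = dlopen("libOpenCL.so", RTLD_NOW | RTLD_GLOBAL);
  if (handle == nullptr) return false;
  dlclose(handle);
  return true;
}

int main() {
  if (opencl_available())
    std::printf("OpenCL runtime found, GPU path can be attempted\n");
  else
    std::printf("No libOpenCL.so found, falling back to the CPU path\n");
  return 0;
}
```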
@hughperkins Intriguing!
No
:-) Well, I guess someone could create something analogous for CUDA. Maybe... you? :-)
For libOpenCL.so, it searches in a couple of hard-coded locations: https://github.com/hughperkins/clew/blob/master/src/clew.c#L180-L184
You can see that there is no reason why this couldn't be extended arbitrarily and/or read from some config file. But this covers almost all, or perhaps all, cases I've seen, I think?
@hughperkins
Wow, you are just a mine of useful information, bhack. It's amazing. Are you an AI? :-P
@bhack @hughperkins
yes. the sequence is:
@edgarriba Can you post the rendered UML image of the integration proposal?
I don't think that Caffe is the right testbed for libdnn, because libdnn was in some sense designed around Caffe. So if you want to give some feedback on Edgar's design...
Feedback is welcome!
Another interesting roadmap to monitor is https://phabricator.pmoreau.org/w/mesa/opencl_through_spirv_current_status/
@CNugteren I think you could be interested in the latest messages.
As you can see we have started to use CLCudaAPI for Tiny, but we are also integrating libdnn for the convolution kernels. @CNugteren said he was interested in contributing to libdnn. Through this GSoC integration experiment I see a bit of duplication of features: libdnn maintains its own tuning class while Cedric has CLTune; libdnn uses ViennaCL as a helper class for OpenCL/CUDA while Cedric has CLCudaAPI; libdnn uses some algebra from ViennaCL and Cedric uses CLBlast. Is there a bit of duplicated effort?
@bhack
Couldn't we share the tuning component? Also, I think CLCudaAPI is neutral enough to use ViennaCL and CLBlast algebra.
@bhack
Not so bad as a small-scale test...
@bhack
Yes, remember also the old BVLC/caffe#3168
@naibaf7 What about removing device and context creation from standalone libdnn and requiring that this kind of responsibility be handled by the third-party application and then passed to libdnn? Could this help keep the code in sync between the Caffe libdnn and the standalone version?
@bhack The issue then is that I can't use a standalone LibDNN in Caffe, because it would make compilation of Caffe a multi-step process, whereas it is very easy at the moment (when using ViennaCL and LibDNN, it's only one compile step and, beyond OS-provided libraries, just header-only dependencies on Boost and ViennaCL). And then there'd be problems with duplicated functionality, such as the…
Sorry, I probably haven't explained well what I mean... Is it possible to pass only some device metadata to libdnn and let the program be compiled and executed on the app side? Apps could communicate timings to the tuner and retrieve new source to compile and execute.
@bhack
Exactly, what I mean is to exchange only metadata. So we could exchange only metadata back and forth between libdnn and third-party apps. Then we don't need to replicate bootstrap, compile and execution code, because libdnn has only partial coverage of layers, so generally every third-party app will have its own setup, compile and launch code.
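To make the proposal concrete, here is a hypothetical sketch of such a metadata-only exchange; none of these types exist in libdnn, and all names are purely illustrative.

```cpp
// Hypothetical metadata exchange between a third-party app and libdnn:
// the app sends device metadata, libdnn answers with kernel source plus
// launch parameters, the app compiles and executes the kernel with its own
// OpenCL/CUDA code and reports timings back to the tuner.
#include <cstddef>
#include <string>
#include <vector>

struct DeviceMetadata {            // filled in by the application
  std::string vendor;
  std::string name;
  std::size_t max_workgroup_size;
  std::size_t local_mem_bytes;
};

struct KernelDescriptor {          // returned by libdnn
  std::string source;              // OpenCL C / CUDA source, compiled app-side
  std::string entry_point;         // kernel function name
  std::vector<std::size_t> global_work_size;
  std::vector<std::size_t> local_work_size;
};

struct TuningFeedback {            // sent back to libdnn's tuner
  double runtime_ms;
  bool valid_result;
};

int main() {
  DeviceMetadata meta{"SomeVendor", "SomeGPU", 256, 48 * 1024};
  // The app would hand `meta` to libdnn, receive a KernelDescriptor, compile
  // and launch it itself, then return TuningFeedback for the next iteration.
  (void)meta;
  return 0;
}
```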
Yeah, but look at this example of how much the launch code depends on kernel properties - I might be wrong here, but I think wrapping a device into ViennaCL or any other wrapper is way easier and more in line with how, for example, cuDNN works than keeping up with launch-parameter conformity for all kernel variants - a small excerpt here:
You'd have to duplicate about this amount of code times 8 (CUDA, OpenCL, forward, backward, convolution, pooling), and it will increase even more with more kernel variants. Whereas with a wrapper, you need not worry about kernel variants at all - it will just work, with few if any interface changes between versions of LibDNN.
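For illustration only (this is not the excerpt naibaf7 refers to), a hedged sketch of what caller-side launch code looks like with the raw OpenCL API, where both the argument list and the work-group geometry depend on the generated kernel variant:

```cpp
// Illustrative caller-side launch code: argument order/count and work-group
// geometry are specific to each generated kernel variant, which is exactly
// what a wrapper would hide. Error handling omitted for brevity.
#include <CL/cl.h>

void launch_conv_forward(cl_command_queue queue, cl_kernel kernel,
                         cl_mem bottom, cl_mem weights, cl_mem bias, cl_mem top,
                         int batch, int out_spatial, int out_channels) {
  clSetKernelArg(kernel, 0, sizeof(cl_mem), &bottom);
  clSetKernelArg(kernel, 1, sizeof(cl_mem), &weights);
  clSetKernelArg(kernel, 2, sizeof(cl_mem), &bias);
  clSetKernelArg(kernel, 3, sizeof(cl_mem), &top);

  // Work-group sizes would come from the tuned kernel variant.
  size_t local[3]  = {16, 16, 1};
  size_t global[3] = {
      ((size_t)out_spatial  + local[0] - 1) / local[0] * local[0],
      ((size_t)out_channels + local[1] - 1) / local[1] * local[1],
      (size_t)batch};
  clEnqueueNDRangeKernel(queue, kernel, 3, nullptr, global, local,
                         0, nullptr, nullptr);
}
```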
I don't know why we need to replicate all this code 8 times. I.e. in our tiny-dnn case, where we use CLCudaAPI, we have these two interfaces for kernels:
Can this be asked of libdnn?
@bhack
It is not a particular problem for tiny-dnn, because libdnn is already a shared object for us. It was just an attempt to find a path that lets Caffe and tiny-dnn use libdnn directly as upstream and to collect more variants here (e.g. Intel kernels).
@bhack
Are you sure that this is a plausible path? You still need all this stuff for launching kernels not covered by libdnn (fully connected etc...), same as tiny-dnn.
@bhack
Yes, that is what I mean... how could you move the device abstraction from Caffe to libdnn? You still need it in Caffe for the other kernels, just like tiny-dnn does.
@naibaf7 How do you think XLA will impact this project?
@bhack
@naibaf7 if you are interested, take a look at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler
@bhack Bumped to the latest kernel versions and added pooling kernels. Should be straightforward to use again; don't hesitate to ask questions.
I have compiled and installed libdnn from GitHub and got the following errors with tiny-dnn:
tiny-dnn/tiny_dnn/core/kernels/conv2d_op_libdnn.h:227:5: error: 'LibDNNConfig' is not a member of 'greentea'
/tiny-dnn/tiny_dnn/core/kernels/conv2d_op_libdnn.h:229:5: error: 'config' was not declared in this scope
Any idea? Thanks
Oh... no, it's an incompatibility issue. You need to ask them to change "LibDNNConfig" to "LibDNNConvConfig" in the source code.
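For reference, a tiny illustrative fragment of the fix on the tiny-dnn side (surrounding code omitted; only the type name changes):

```cpp
// In tiny_dnn/core/kernels/conv2d_op_libdnn.h the config type was renamed
// upstream, so the old name no longer exists in the greentea namespace.

// before:
// greentea::LibDNNConfig config;

// after:
greentea::LibDNNConvConfig config;
```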
@naibaf7 thanks for pointing that out, we'll try to update it during the summer
@edgarriba
Right, we are not checking versions right now. It would be worth doing. Opening an issue to fix that.
@bhack
@naibaf7 @bhack
I'm opening this ticket to discuss ideas for integrating libdnn into tiny-cnn.
Currently, I have implemented a small interface to get the native OpenCL context from tiny-cnn:
https://github.com/edgarriba/tiny-cnn/blob/f4d9e1d4f45ad8ac46824b5c007f5507fe8925eb/tiny_cnn/core/session.h
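For readers who don't follow the link, a hypothetical sketch of what such a session interface might expose (the real code is in the linked session.h; the names here are illustrative, not the actual tiny-cnn API):

```cpp
#include <CL/cl.h>

namespace tiny_cnn_sketch {

// Owns a native OpenCL context for one device and hands the raw handles to
// libraries such as libdnn for kernel setup.
class session {
 public:
  explicit session(cl_device_id device)
      : device_(device),
        context_(clCreateContext(nullptr, 1, &device_, nullptr, nullptr, nullptr)) {}
  ~session() { clReleaseContext(context_); }

  cl_context native_context() const { return context_; }
  cl_device_id native_device() const { return device_; }

 private:
  cl_device_id device_;
  cl_context context_;
};

}  // namespace tiny_cnn_sketch
```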
Things I think are needed:
BTW, @naibaf7 @hughperkins, note that we are planning to migrate tiny-cnn to an organization account and to rename the library itself, since it is now more a pure DNN lib than just a CNN lib. Maybe you are interested in getting more involved in the development. tiny-dnn/tiny-dnn#235