
Considerations for separating ML runtime from GUI. #384

Open
damellis opened this issue Sep 30, 2016 · 0 comments
damellis commented Sep 30, 2016

This would allow us to write the GUI in whatever framework we wanted (browser-based might be good), while retaining flexibility in the choice of machine learning framework. It would require writing a transport layer to forward pipeline data from the runtime to the UI and control information in the other direction. My sense is that browser-based interfaces are the way to go for cross-platform applications, possibly using something like Electron to bridge to the desktop backend.

This could also be an opportunity to switch to a new machine learning framework, although I'm not sure what we'd pick instead of the GRT. Alternatively, we could keep the GRT but replace openFrameworks with some other approach to i/o streams. For instance, we could try wrapping the GRT as an npm module and then use node for i/o (to both the input and output streams and to the GUI). We might then want a GUI for selecting i/o streams, as it would be awkward to have part of the example (the pipeline definition) in C++ and another part (the stream definition) in node / JavaScript.

Separating the pipeline from the i/o stream specification would also ease the process of porting the pipeline runtime to embedded C++ platforms.
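To make the transport-layer idea concrete, here is a minimal sketch of the message layer such a split might use, in node-style JavaScript. All names (`encodeDataFrame`, `encodeControl`, the message fields) are invented for illustration, not part of any existing GRT or ESP API; the actual wire transport (WebSocket, IPC, etc.) is left out.

```javascript
// Hypothetical message envelope for the runtime <-> GUI transport layer:
// the ML runtime forwards pipeline data to the UI, and the UI sends
// control messages (e.g. start/stop training) back the other way.

// Encode a pipeline-data frame from the runtime for the UI.
function encodeDataFrame(stage, sample) {
  return JSON.stringify({ type: 'data', stage, sample, ts: Date.now() });
}

// Encode a control message from the UI for the runtime.
function encodeControl(command, params = {}) {
  return JSON.stringify({ type: 'control', command, params });
}

// Dispatch an incoming message to the right handler on either side.
function dispatch(raw, handlers) {
  const msg = JSON.parse(raw);
  const handler = handlers[msg.type];
  if (!handler) throw new Error(`unknown message type: ${msg.type}`);
  return handler(msg);
}
```

Because both sides only exchange serialized messages, the GUI could run in a browser (or Electron) while the runtime stays a native process, and either side could be swapped out independently.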
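As a sketch of what keeping the pipeline definition separate from the i/o stream definition might look like, here is a hypothetical example description in node-style JavaScript. Every field name here is invented to illustrate the split, not an existing configuration format:

```javascript
// Hypothetical example description, split into two independent parts.

// Part 1: the pipeline definition -- the piece that could be ported
// to an embedded C++ platform unchanged.
const pipeline = {
  preprocessing: [{ module: 'MovingAverageFilter', windowSize: 5 }],
  classifier: { module: 'DTW', nullRejection: true },
};

// Part 2: the i/o stream definition -- this stays on the desktop,
// handled by node (or chosen in a stream-selection GUI).
const streams = {
  input: { kind: 'serial', port: '/dev/tty.usbmodem1411', baud: 115200 },
  output: { kind: 'osc', host: '127.0.0.1', port: 8000 },
};

// A runtime would combine the two pieces at load time.
function loadExample(pipeline, streams) {
  return { pipeline, streams, loaded: true };
}
```

With this split, an embedded port would only need to understand Part 1, and a stream-selection GUI would only ever touch Part 2.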
