Some thoughts #1
Comments
(1) I would give it a low priority. We should use textures instead of shapes. We should think about a way to make GUI prototyping easy (no boilerplate; maybe an editor? but then we lose strict checking by the compiler), fast (no compiling? serializing, maybe?) and pretty (ever tried Bulma or Bootstrap? those are easy-to-use styling solutions that look decent). No current solution provides all three.
@jaroslaw-weber I can't see what you're referring to, since GitHub's Markdown re-labels your numbers. Use something like (5) to escape this. That said, I don't see how you could do something like a dropdown with a hierarchy, given how the constraints work right now. My point was that currently there is no way for components to overlap each other. If you insert something, it just pushes down the rest of the UI; it doesn't overlap.
So how would you draw an SVG component or something like it with only textures? Usually GUI toolkits allow you to draw arbitrary shapes, like arcs, curves, and lines. This is what I am referring to. You could draw these to an OpenGL texture, but this should be the task of the GUI toolkit, not the user of the library.
Sure, I wasn't talking about the here and now, rather about what's possible in the future. Wrapping text is fairly simple: you only have to check, for each character, whether it has overflowed the parent rectangle and move the line down accordingly. The TextEdit component already does this. Yes, text styling is not super important, but it's far from useless.
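The wrapping approach described above (advance through the text, dropping to a new line once the parent rectangle would overflow) can be sketched roughly like this. This is a toy illustration assuming a fixed per-character width; real code would query glyph metrics from the font:

```rust
// Greedy line wrapping: break text into lines so that no line exceeds
// `max_width`, assuming every character is `char_width` pixels wide.
// (Illustrative sketch only; a real implementation uses per-glyph metrics.)
fn wrap_text(text: &str, max_width: f32, char_width: f32) -> Vec<String> {
    let max_chars = (max_width / char_width).floor() as usize;
    let mut lines = Vec::new();
    let mut current = String::new();
    for word in text.split_whitespace() {
        let needed = if current.is_empty() {
            word.len()
        } else {
            current.len() + 1 + word.len() // +1 for the joining space
        };
        if needed > max_chars && !current.is_empty() {
            lines.push(std::mem::take(&mut current)); // line is full, start a new one
        }
        if !current.is_empty() {
            current.push(' ');
        }
        current.push_str(word);
    }
    if !current.is_empty() {
        lines.push(current);
    }
    lines
}

fn main() {
    // 50px wide, 5px per character: at most 10 characters per line.
    for line in wrap_text("the quick brown fox jumps over", 50.0, 5.0) {
        println!("{}", line);
    }
}
```

Wrapping per character rather than per word (as the comment above suggests) is the same loop with single characters instead of words; word-based breaking just looks better for prose.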
If code generation happens before the compiler runs, checking would still be enforced. C# does code generation at compile time, Java does code generation at compile time, GTK does code generation (with Glade). If you use in-memory buffers and caching effectively, generating code is pretty fast. I am against loading something at runtime; it causes all kinds of problems. Plus, as things stand, it might not even be possible (inserting constraints at runtime).
Fixed
You are right, we need this. However, a lot of people just need a few buttons, sliders, input fields, etc. I know that a lot of people care about functionality, but without an option to create a pretty app I don't think the library will go mainstream. If we can make a pretty, cross-platform GUI app in Rust, then people may follow. But to do this, we need to focus on styling/textures more than on abstract shapes. I see this library as an HTML-ish way to do GUI; maybe we could get some inspiration from CSS styling. I am not sure what your idea is here, but I wish you luck.
Hi, thanks for raising these questions, and for your vote of confidence! Limn is still at a very early stage right now, so expect major changes. I'd say don't build anything complicated on top of it unless you're prepared to rewrite a lot of code and find serious bugs, but if you are ok with that then by all means go ahead; it could help inform the direction it evolves in. I've been meaning to start writing about the plans for this project, so this is as good a time as any to start. I'll try to write up some more high-level documentation on goals and priorities in the next week or so.

1. Custom Shapes

Agreed that custom shapes should be supported and made easy to write. My plan here is actually just to wait for webrender to support it. It might be a while, but it looks like it's in the pipeline for them: https://github.com/servo/servo/wiki/Basic-SVG-support-project. I should also say that, in general, at this stage limn should try to be responsible for as little graphics code as possible, just to keep the maintenance burden manageable while things are in flux. Complicated features should be possible, but not necessarily easy, yet. If you render to an image buffer and then pass that to webrender, most things should be possible; in fact, I think that's how servo supports svg and canvas elements right now.

2. Popovers

Currently the z-ordering of widgets is determined by the hierarchy and the ordering of widgets by their parents. So parents are always drawn behind children, and children are drawn in the order they are added to their parent. You can remove and re-add a child to move it in front of its siblings (there should probably be a …).

3. Default Constraints

Yes, the constraint system and the …

4. Animations / Multiple UIs

Yep, these are all important. In terms of just the animation itself, I'm hoping the EventHandler model can handle it well enough, similar to how the Clock example works, which is essentially an animation that only updates every second.
The UI should emit an event for every frame; widgets can then add a handler that sets an edit variable or changes drawable parameters on each frame event. In terms of transitioning between UIs, it's honestly not something that I've thought much about, and I think it will require some careful design, but the idea so far has been for the UI to be truly global, so there should not be a need for more than one per window, or per application (there could be multiple windows for one UI). It sounds like you are using UIs sort of like 'Activity's in Android? My thought was that those could just be Widgets, which can be deactivated. Limn should include more EventHandlers, logic, and bundled widgets to facilitate this, which will require some iteration on the design, especially with regard to performance. Maybe something like a master widget that manages the transitions and passes global UI input to the active 'Activity' widget. Are there any features that the UI has that you wish Widgets had, or that make this impossible? A simple example that demonstrates this with animations might be nice to have, especially as a benchmark, i.e. if it doesn't get 60fps on a low-end laptop or a phone, that's an issue. Any new feature should be demoed in the examples, as a (hopefully temporary) substitute for integration tests and proper benchmarks.

5. Components

The idea is for any Widget behavior to be composable from existing or new EventHandlers, drawables, and child widgets. The directory src/widgets contains the bundled widgets, but it should be possible to construct and package widgets in the same way outside of limn. The basic idea is that you define a function or a struct that is responsible for building a …

6. Code Generation

I think a high-level/declarative interface is absolutely necessary, and planned for the long term, but it's also a low priority right now, since I don't want it to distract from determining the right core design and making limn stable and reliable, and there are a lot of open questions that will have a large impact on what such a design would look like.

7. Text Shaping

This is an interesting idea, but I also think it should be a low priority, since it expands the scope considerably and (probably) won't impact the core design much. Still, I have some thoughts on how it could be done.

8. Gradients / Blend Modes

I believe this is possible with webrender, although it requires a custom drawable at the moment. It might make sense to add some API to expose it more easily. Maybe generalize …

9. Window Handling / Custom Drawing Regions

Take a closer look at the event loop ;) it only uses … Implementing a 'custom drawing region' or 'canvas' widget would probably just involve passing an image buffer to webrender. Some helper code could probably be added to make this easier. It might be another good candidate for an example.

So, thanks again for looking at all this, and sorry if there's not a lot to go on in terms of documentation; I'll try to resolve that soon. Some of the core aspects of the API are starting to crystallize, although even then I might go through a pass of renaming a bunch of things, so that's made me hesitant to document too much too early. In general, if you're interested in contributing right now, I'd say just try to build things with it, see where it breaks, note what things are missing or more difficult than they should be, and let me know what's unclear. I love the idea of a GUI builder; it's something I've thought about building myself eventually, but probably won't attempt for a while.
Thanks for your response. I'm closing this issue as it's not really an "issue"; I just wanted to get your thoughts on these topics. Thanks for the extensive reply.
Hi @fschutt, I hope it's ok if I re-open this issue, since others might find this info useful. I think there's no problem with issues just being a discussion. Feel free to make separate issues for some of the individual points above too, though, as that might be easier to track.
Hi. I've followed this repository for a while and I think it's going in the right direction for a GUI in Rust. My goal is to make a sort of desktop UI builder, using limn as a foundation. These are some things I wanted to discuss, however:
1. Custom shapes
I think one of the basic shapes should be a custom shape, which can take some points and render them. I have had good experiences with lyon, which can build paths and return a vertex and index buffer. I am, however, not sure how you'd integrate something like this into webrender; afaik it doesn't allow direct access to the underlying vertex buffers. Might be worth a look.
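To make the vertex/index-buffer output mentioned above concrete, here is a minimal fan triangulation of a convex polygon. This is a toy sketch, not lyon's API; lyon's tessellators handle the general case (concave paths, holes, curves):

```rust
// Fan-triangulate a convex polygon given as (x, y) points, producing the
// vertex and index buffers a renderer would consume.
// (Toy sketch: a real tessellator like lyon handles arbitrary paths.)
fn triangulate_convex(points: &[(f32, f32)]) -> (Vec<(f32, f32)>, Vec<u16>) {
    let vertices = points.to_vec();
    let mut indices = Vec::new();
    // Fan from vertex 0: (0, 1, 2), (0, 2, 3), ...
    for i in 1..points.len().saturating_sub(1) {
        indices.extend_from_slice(&[0, i as u16, (i + 1) as u16]);
    }
    (vertices, indices)
}

fn main() {
    // A unit square becomes two triangles sharing vertex 0.
    let square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)];
    let (verts, idx) = triangulate_convex(&square);
    println!("{} vertices, indices {:?}", verts.len(), idx);
}
```

Sharing vertices through an index buffer (4 vertices instead of 6 for the square) is exactly why tessellators emit the two buffers separately.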
2. Popovers
Things such as dropdowns need to have some sort of "overlay" over other components. As it is right now, limn tries to layout everything in one plane. There should be some constraint that builds a new z-plane. Just an idea.
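To illustrate why a separate z-plane is needed: if draw order simply follows the widget tree (parents behind children, siblings in insertion order), a dropdown's list can never be painted over a later sibling of the dropdown. A toy sketch of that single-plane model (the types are illustrative, not limn's):

```rust
// Depth-first paint traversal: a parent is drawn before (i.e. behind) its
// children, and siblings are drawn in insertion order. An "overlay" would
// need a separate pass or z-plane, which this model does not have.
struct Widget {
    name: &'static str,
    children: Vec<Widget>,
}

fn paint(w: &Widget, order: &mut Vec<&'static str>) {
    order.push(w.name); // painted first = furthest back
    for child in &w.children {
        paint(child, order);
    }
}

fn main() {
    let root = Widget {
        name: "root",
        children: vec![
            Widget {
                name: "dropdown",
                children: vec![Widget { name: "list", children: vec![] }],
            },
            Widget { name: "sibling", children: vec![] },
        ],
    };
    let mut order = Vec::new();
    paint(&root, &mut order);
    // "sibling" paints after "list", so it would cover an overlapping list.
    println!("{:?}", order);
}
```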
3. Default constraints
This is visible in the button example if you remove the last constraints: the button will jump from corner to corner. It would be good to somehow check for under-constrained widgets and give them default constraints. For example, on Android, if something is not constrained, it just gets put in the top-left corner.
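The fallback described above (pin anything under-constrained to the top-left corner, as Android does) could be sketched as a post-pass over the solved layout. The types here are illustrative, not limn's actual constraint solver:

```rust
// After constraint solving, any widget whose position is still unresolved
// gets a default: pin the missing axis to the parent's top-left corner.
#[derive(Debug, PartialEq)]
struct Layout {
    x: Option<f32>, // None = the solver left this axis unconstrained
    y: Option<f32>,
}

fn apply_default_constraints(layout: &mut Layout) {
    // Hypothetical policy: missing axes default to 0 (top-left).
    layout.x.get_or_insert(0.0);
    layout.y.get_or_insert(0.0);
}

fn main() {
    let mut l = Layout { x: None, y: Some(40.0) };
    apply_default_constraints(&mut l);
    println!("{:?}", l); // x defaults to 0.0, y keeps its solved value
}
```

The real fix would likely live in the constraint system itself (adding low-priority default constraints so the solver never leaves a variable free), but the observable behaviour is the same.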
4. Animations / Multiple UIs
This might be good to consider early on: transitions between UIs, fading, rollout, scrolling in / out of the UIs, as well as transitions between multiple UIs. Theoretically a UI could have in / out transitions defined, but this would only be a temporary solution. Currently one window = one UI. I solved this in my application by having a `Vec` of UIs and an `Option<usize>` with the currently active UI. If no UI is loaded, the window is simply white. UIs and windows should be separate.

5. Components / some standard way to make new widgets
I'll take Android as an example: every component inherits from a "View", and multiple views can be composited together into a final application. Since Rust does not have inheritance, a standardized WidgetInterface that allows putting together basic and complex shapes to build more complex widgets (outside of limn itself) would be good. This way you could build (and repackage) your own widgets, even without modifying limn.
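A hypothetical sketch of such a standardized interface (none of these names are limn's): a common trait stands in for Android's "View" base class, and new widgets are built by composing existing ones as fields rather than by inheriting:

```rust
// Composition instead of inheritance: a "view" trait that every widget
// implements, and a composite widget built from child views.
// (All names here are hypothetical, not limn's actual API.)
trait View {
    fn describe(&self) -> String;
}

struct Label(String);

impl View for Label {
    fn describe(&self) -> String {
        format!("Label({})", self.0)
    }
}

// A widget defined entirely outside the "library", from the same parts.
struct Button {
    label: Label, // composed, not inherited
}

impl View for Button {
    fn describe(&self) -> String {
        format!("Button[{}]", self.label.describe())
    }
}

fn main() {
    let ok = Button { label: Label("OK".into()) };
    println!("{}", ok.describe());
}
```

Because `Button` only depends on the trait and on public building blocks, a third party can define and repackage it in a separate crate, which is exactly the property asked for above.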
6. Code generation
This is something I've thought about: it may be possible to invest a great deal in the macro system, but so far, an XML or JSON file to define a UI makes the most sense, which is why I wanted to build some sort of code generator.
A declarative file like that is easier to modify than Rust itself. This is just something to think about: making the code for new widgets generator-friendly.
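As a hedged illustration of the idea (the markup schema and builder names are entirely hypothetical, not limn's), a declarative UI description and the kind of code a generator would emit from it might look like:

```rust
// Hypothetical declarative UI description. A build-time code generator
// would parse this and emit ordinary Rust widget-construction code, so the
// compiler still checks the result (per the discussion above).
const UI_XML: &str = r#"
<ui>
    <column>
        <button id="ok" text="OK"/>
        <slider id="volume" min="0" max="100"/>
    </column>
</ui>
"#;

fn main() {
    // The generator's output would be along the lines of (illustrative only):
    //   let ok = ButtonBuilder::new().set_text("OK");
    //   let volume = SliderBuilder::new().set_range(0.0, 100.0);
    //   column.add_child(ok);
    //   column.add_child(volume);
    println!("{}", UI_XML.trim());
}
```

Keeping generation at build time (e.g. in a `build.rs` step) preserves compile-time checking, matching the point made earlier in the thread about Glade-style generators.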
7. Text shaping
(regarding text_layout) These operations are currently not possible with Rusttype, but as I worked on a PDF library, I thought about which operators are necessary to express any possible text. PDF distinguishes between "path construction" and "path painting", which is (in my opinion) the right way to tackle the problem of so many combinations. Regarding text rendering, UI text should be able to be any combination of:
Regarding path construction, text should be able to:
Combining this with the different rendering modes leads me to think that text should rather be approached as "a series of paths". This might sound vague, but the idea is to use rusttype to load the paths, then lyon to triangulate the individual paths. After this, the characters can be scaled, spaced, etc. Since this emits triangles, a "simple" scanline renderer would suffice to convert these triangles into pixels, or they could be submitted directly to the GPU. Submitting vertices + indices instead of font textures would also solve the next problem:
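The "simple" renderer alluded to above reduces to filling triangles. A minimal version tests each pixel centre in the bounding box against the triangle's three edge functions (a toy sketch; production rasterizers add anti-aliasing, clipping, and incremental scanline traversal):

```rust
// Minimal triangle rasterizer via edge functions: a pixel centre is inside
// the triangle iff all three signed edge functions share a sign.
fn edge(a: (f32, f32), b: (f32, f32), p: (f32, f32)) -> f32 {
    (b.0 - a.0) * (p.1 - a.1) - (b.1 - a.1) * (p.0 - a.0)
}

fn rasterize(tri: [(f32, f32); 3], w: usize, h: usize) -> Vec<bool> {
    let mut pixels = vec![false; w * h];
    for y in 0..h {
        for x in 0..w {
            let p = (x as f32 + 0.5, y as f32 + 0.5); // sample at pixel centre
            let e0 = edge(tri[0], tri[1], p);
            let e1 = edge(tri[1], tri[2], p);
            let e2 = edge(tri[2], tri[0], p);
            // Accept either winding by checking both all-positive and all-negative.
            if (e0 >= 0.0 && e1 >= 0.0 && e2 >= 0.0)
                || (e0 <= 0.0 && e1 <= 0.0 && e2 <= 0.0)
            {
                pixels[y * w + x] = true;
            }
        }
    }
    pixels
}

fn main() {
    // Right triangle covering the lower-left half of an 8x8 grid.
    let px = rasterize([(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)], 8, 8);
    let covered = px.iter().filter(|&&p| p).count();
    println!("{} of 64 pixels covered", covered);
}
```

The same edge functions, evaluated per fragment, are essentially what the GPU path would do, which is why the triangle lists from lyon can be handed to either backend.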
8. Gradients / Blend modes
This is something missing from web browsers, so I don't know in which way this could be done with webrender. Currently, there are 16 blend modes, documented with formulas.
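For reference, here are three of the standard separable blend modes expressed over normalized [0, 1] colour channels, with the formulas as given in the W3C "Compositing and Blending" specification (which CSS and PDF share):

```rust
// Separable blend modes on normalized colour channels (0.0..=1.0),
// per the W3C Compositing and Blending spec.
fn multiply(backdrop: f32, source: f32) -> f32 {
    backdrop * source
}

fn screen(backdrop: f32, source: f32) -> f32 {
    backdrop + source - backdrop * source
}

fn overlay(backdrop: f32, source: f32) -> f32 {
    // Overlay is hard-light with the operands swapped: multiply or screen
    // depending on the backdrop value.
    if backdrop <= 0.5 {
        multiply(2.0 * backdrop, source)
    } else {
        screen(2.0 * backdrop - 1.0, source)
    }
}

fn main() {
    println!("multiply(0.5, 0.5) = {}", multiply(0.5, 0.5));
    println!("screen(0.5, 0.5)   = {}", screen(0.5, 0.5));
    println!("overlay(0.25, 0.5) = {}", overlay(0.25, 0.5));
}
```

Since each mode is a pure per-channel function, they map directly onto fragment-shader code, which is the natural place to implement them in a webrender-style pipeline.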
9. Window handling / custom drawing regions
The current window uses `poll_events`. For most UIs, it's best to use `wait_events`, so as not to waste CPU unnecessarily. However, for game UIs, it's better to use `poll_events`, maybe with a configurable timeout.

The next problem is integration with custom shaders / "custom drawing regions". I don't know if it is possible, but the window and GL context should somehow be exposed to outside applications, in order to draw "under" the UI, for example for 3D CAD applications or OpenGL-based games, or to use custom vertex shaders in parts of the application. Since the window is OpenGL-based, this would technically not be a problem, but I don't know how webrender can be integrated into it. Maybe (optionally) rendering to a framebuffer or texture would be better: provide a builder with an option to return the texture, or (by default) render the texture to the window, as well as allowing an external application to submit a framebuffer / texture to be rendered below the UI (to allow for custom drawing).
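The poll / wait / timeout trade-off above can be modelled with a channel standing in for the OS event queue: `try_recv` returns immediately (like `poll_events`), `recv` blocks (like `wait_events`), and `recv_timeout` is the "poll with a configurable timeout" middle ground. This is a standalone sketch, not glutin's or limn's actual API:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// One iteration of a hybrid event loop: handle a queued event if present,
// otherwise block up to `timeout` so an animation or game loop can still
// render its next frame on time.
fn next_event(rx: &mpsc::Receiver<&'static str>, timeout: Duration) -> String {
    // Poll first: if an event is already queued, handle it without blocking.
    if let Ok(ev) = rx.try_recv() {
        return format!("handled {}", ev);
    }
    // Otherwise wait, but never longer than the frame budget.
    match rx.recv_timeout(timeout) {
        Ok(ev) => format!("handled {}", ev),
        Err(_) => "timeout: render a frame anyway".to_string(),
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        tx.send("mouse_click").unwrap();
    });
    // The event arrives after 10ms, well within the 100ms budget,
    // so the loop wakes early instead of burning CPU in a spin.
    println!("{}", next_event(&rx, Duration::from_millis(100)));
}
```

A desktop UI would pass a long (or infinite) timeout and idle cheaply; a game would pass roughly one frame's worth (e.g. 16ms) and render every iteration regardless.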
These are just ideas and suggestions I had for this library. I know that it is still under development, and I'd help out where necessary, but I don't want to just go ahead and implement something only to have it rejected later on. So I wanted to talk about how these problems could be solved. I am not experienced with webrender, so I don't know how well it integrates with these ideas. Thanks for reading.