This repository has been archived by the owner on Jun 26, 2021. It is now read-only.

Some thoughts #1

fschutt opened this issue Sep 12, 2017 · 6 comments

@fschutt
Contributor

fschutt commented Sep 12, 2017

Hi. I've followed this repository for a while and I think it's heading in the right direction toward a GUI for Rust. My goal is to build a sort of desktop UI builder, using limn as a foundation. There are some things I wanted to discuss first, however:

  1. Custom shapes
  2. Popovers
  3. Default constraints
  4. Animations / Multiple UIs
  5. Components / some standard way to make new widgets?
  6. Code generation
  7. Text shaping
  8. Gradients / Blending
  9. Window handling

1. Custom shapes

I think one of the basic shapes should be a custom shape, which takes a set of points and renders them. I have had good experiences with lyon, which can build paths and return a vertex and index buffer. I'm not sure, however, how you would integrate something like this into webrender; as far as I know it doesn't allow direct access to the underlying vertex buffers. Might be worth a look.
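
To make this more concrete, here is a rough sketch of the lyon part. This is only an illustration: it is written against a recent lyon API (the calls looked different in the lyon versions around 2017), and the resulting buffers would still need a renderer willing to accept them.

    use lyon::math::point;
    use lyon::path::Path;
    use lyon::tessellation::{BuffersBuilder, FillOptions, FillTessellator, FillVertex, VertexBuffers};

    fn main() {
        // Build an arbitrary custom path (here just a triangle).
        let mut builder = Path::builder();
        builder.begin(point(0.0, 0.0));
        builder.line_to(point(100.0, 0.0));
        builder.line_to(point(50.0, 80.0));
        builder.end(true); // close the path
        let path = builder.build();

        // Tessellate the path into plain vertex/index buffers.
        let mut geometry: VertexBuffers<[f32; 2], u16> = VertexBuffers::new();
        FillTessellator::new()
            .tessellate_path(
                &path,
                &FillOptions::default(),
                &mut BuffersBuilder::new(&mut geometry, |v: FillVertex| v.position().to_array()),
            )
            .expect("tessellation failed");

        // These buffers are what would somehow have to reach the renderer.
        println!("{} vertices, {} indices", geometry.vertices.len(), geometry.indices.len());
    }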

2. Popovers

Things such as dropdowns need some sort of "overlay" over other components. As it stands, limn tries to lay out everything in one plane. There should be some constraint that creates a new z-plane. Just an idea.

3. Default constraints

This is visible in the button example if you remove the last constraints: the button jumps from corner to corner. It would be good to somehow detect under-constrained widgets and give them sensible default constraints. For example, on Android, if something is not constrained it simply gets put in the top-left corner.

4. Animations / Multiple UIs

This might be good to consider early on: transitions between UIs (fading, roll-out, scrolling UIs in and out), as well as transitions between multiple UIs. Theoretically a UI could have in/out transitions defined, but that would only be a temporary solution. Currently one window equals one UI. I solved this in my application by keeping a Vec of UIs and an Option<usize> pointing at the currently active one; if no UI is loaded, the window is simply white. UIs and windows should be separate.
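
The pattern looks roughly like this (the Ui type and its fields are made up purely for illustration; this is not limn code):

    // Hypothetical types, just to illustrate the Vec<Ui> + Option<usize> pattern.
    struct Ui {
        name: String,
        // ...widgets, constraints, etc.
    }

    struct App {
        uis: Vec<Ui>,
        active: Option<usize>, // index into `uis`; None means the window stays blank
    }

    impl App {
        fn active_ui(&self) -> Option<&Ui> {
            self.active.and_then(|i| self.uis.get(i))
        }

        fn switch_to(&mut self, index: usize) {
            if index < self.uis.len() {
                self.active = Some(index);
            }
        }
    }

    fn main() {
        let mut app = App { uis: vec![Ui { name: "login".into() }], active: None };
        app.switch_to(0);
        assert_eq!(app.active_ui().map(|ui| ui.name.as_str()), Some("login"));
    }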

5. Components / some standard way to make new widgets

I'll take Android as an example: every component inherits from a "View", and multiple views can be composed together into a final application. Since Rust does not have inheritance, a standardized WidgetInterface that allows basic and complex shapes to be put together into more complex widgets (outside of limn itself) would be good. That way you could build (and repackage) your own widgets, even without touching limn itself.
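
Purely to illustrate the composition idea (the trait and type names here are hypothetical, not a proposal for limn's actual API):

    // A widget is built by composing child widgets rather than by inheritance.
    trait WidgetLike {
        fn draw(&self);
        fn children(&self) -> &[Box<dyn WidgetLike>] {
            &[]
        }
    }

    struct Rectangle;
    struct Label(String);

    impl WidgetLike for Rectangle {
        fn draw(&self) {
            println!("draw rectangle");
        }
    }

    impl WidgetLike for Label {
        fn draw(&self) {
            println!("draw label: {}", self.0);
        }
    }

    // A "complex" widget assembled entirely from existing pieces, outside the toolkit.
    struct LabeledButton {
        children: Vec<Box<dyn WidgetLike>>,
    }

    impl LabeledButton {
        fn new(text: &str) -> Self {
            LabeledButton {
                children: vec![
                    Box::new(Rectangle) as Box<dyn WidgetLike>,
                    Box::new(Label(text.to_string())),
                ],
            }
        }
    }

    impl WidgetLike for LabeledButton {
        fn draw(&self) {
            for child in self.children() {
                child.draw();
            }
        }
        fn children(&self) -> &[Box<dyn WidgetLike>] {
            &self.children
        }
    }

    fn main() {
        LabeledButton::new("OK").draw();
    }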

6. Code generation

This is something I've thought about: it may be possible to invest heavily in the macro system, but so far an XML or JSON file for defining a UI makes the most sense to me, which is why I wanted to build some sort of code generator. Something like this:

    <com.custom.LoginDialog
        android:layout_width="0dp"
        android:layout_height="0dp"
        tools:layout_constraintTop_creator="1"
        tools:layout_constraintRight_creator="1"
        tools:layout_constraintBottom_creator="1"
        android:layout_marginStart="8dp"
        app:layout_constraintBottom_toBottomOf="parent"
        android:layout_marginEnd="8dp"
        app:layout_constraintRight_toRightOf="parent"
        android:layout_marginTop="8dp"
        tools:layout_constraintLeft_creator="1"
        android:layout_marginBottom="8dp"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        android:layout_marginLeft="8dp"
        android:layout_marginRight="8dp" />

is easier to modify than Rust itself. This is just something to think about: keeping the code for creating new widgets generator-friendly.

7. Text shaping

(regarding text_layout) These operations are currently not possible with rusttype, but while working on a PDF library I thought about which operators are necessary to express every possible piece of text. PDF distinguishes between "path construction" and "path painting", which is (in my opinion) the right way to tackle the problem of so many combinations. Regarding text rendering, UI text should be able to be any combination of:

  • Filled, with a solid color or a gradient
  • Outlined, with a solid color or a gradient
  • Being used as a clip shape for other shapes
  • Blended with a blend mode

Regarding path construction, text should be able to be:

  • rotated
  • spaced by word
  • spaced by character
  • stretched

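"Spaced by character", for example, becomes straightforward once glyphs are positioned manually. A rough sketch with rusttype (written against the current rusttype API, which differs from the 2017-era one; the font path and spacing value are placeholders):

    use rusttype::{point, Font, Scale};

    fn main() {
        // Placeholder font path; any TTF file will do.
        let font_data = std::fs::read("DejaVuSans.ttf").expect("font file missing");
        let font = Font::try_from_vec(font_data).expect("invalid font data");

        let scale = Scale::uniform(24.0);
        let extra_letter_spacing = 2.0; // arbitrary extra per-character spacing in pixels

        let mut caret = point(0.0, font.v_metrics(scale).ascent);
        let mut previous = None;
        for ch in "Hello".chars() {
            let glyph = font.glyph(ch).scaled(scale);
            if let Some(prev) = previous {
                caret.x += font.pair_kerning(scale, prev, glyph.id());
            }
            previous = Some(glyph.id());

            let advance = glyph.h_metrics().advance_width;
            let positioned = glyph.positioned(caret);
            // `positioned` carries this character's outline; it could now be
            // tessellated (e.g. with lyon) or rasterized.
            println!("'{}' placed at x = {:.1}", ch, positioned.position().x);

            caret.x += advance + extra_letter_spacing;
        }
    }
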
Combining this with the different rendering modes leads me to think that text should rather be approached as "a series of paths". This might sound vague, but the idea is to use rusttype to load the glyph outlines and lyon to triangulate the individual paths; after that, the characters can be scaled, spaced, and so on. Since this emits triangles, a "simple" scanline renderer would suffice to convert them into pixels, or they could be submitted directly to the GPU. Submitting vertices + indices instead of font textures would also solve the next problem:

8. Gradients / Blend modes

This is something missing from web browsers, so I don't know how it could be done with webrender. Currently there are 16 blend modes, documented here and here with formulas.
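
For reference, the separable modes are simple per-channel formulas; these are the standard definitions (as in the PDF and CSS compositing specifications), applied to channel values in 0.0..=1.0:

    // Standard separable blend modes, applied per color channel.
    fn multiply(src: f32, dst: f32) -> f32 {
        src * dst
    }

    fn screen(src: f32, dst: f32) -> f32 {
        1.0 - (1.0 - src) * (1.0 - dst)
    }

    fn overlay(src: f32, dst: f32) -> f32 {
        // Overlay multiplies over dark backdrops and screens over light ones.
        if dst <= 0.5 {
            multiply(src, 2.0 * dst)
        } else {
            screen(src, 2.0 * dst - 1.0)
        }
    }

    fn main() {
        println!("multiply: {}", multiply(0.5, 0.5)); // 0.25
        println!("screen:   {}", screen(0.5, 0.5));   // 0.75
        println!("overlay:  {}", overlay(0.5, 0.25)); // 0.25
    }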

9. Window handling / custom drawing regions

The current window uses poll_events. For most UIs it's best to use wait_events so as not to waste CPU unnecessarily; for game UIs, however, polling is better, perhaps with a configurable timeout.
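
In pseudo-code terms (plain std channels rather than winit's actual API), the "wait with a configurable timeout" idea would look something like this:

    use std::sync::mpsc::{channel, RecvTimeoutError};
    use std::time::Duration;

    enum UiEvent {
        Redraw,
        Quit,
    }

    fn main() {
        let (tx, rx) = channel::<UiEvent>();
        // The sender would normally be driven by the windowing backend; drop it
        // here so this standalone example terminates immediately.
        drop(tx);

        loop {
            // Block until an event arrives, but wake up after ~16 ms regardless,
            // so a game UI can still redraw at a steady rate while a regular
            // application idles when nothing happens.
            match rx.recv_timeout(Duration::from_millis(16)) {
                Ok(UiEvent::Quit) | Err(RecvTimeoutError::Disconnected) => break,
                Ok(UiEvent::Redraw) => { /* handle the event, mark dirty */ }
                Err(RecvTimeoutError::Timeout) => { /* optional periodic redraw */ }
            }
        }
    }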

The next problem is integration with custom shaders / "custom drawing regions". I don't know if it is possible, but the window and GL context should somehow be exposed to outside applications, in order to draw "under" the UI (for example for 3D CAD applications or OpenGL-based games) or to use custom vertex shaders in parts of the application. Since the window is OpenGL-based this would technically not be a problem, but I don't know how webrender could be integrated into it. Maybe (optionally) rendering to a framebuffer or texture would be better: provide a builder with an option to return the texture or (by default) render the texture to the window, and allow an external application to submit a framebuffer / texture to be rendered below the UI (to allow for custom drawing).

These are just ideas and suggestions I had for this library. I know that it is still under development and I'd like to help out where necessary, but I don't want to just go ahead and implement something only to have it rejected later on, so I wanted to talk about how these problems could be solved first. I am not experienced with webrender, so I don't know how well it fits with these ideas. Thanks for reading.

@jaroslaw-weber

jaroslaw-weber commented Sep 13, 2017

(1) I would give it a low priority. We should use textures instead of shapes.
(2) You can do a lot with just the hierarchy, and it is more maintainable than a z-axis.
(4) Cool, but low priority.
(6) This is super important.
(7) Instead of useless text styles, just add wrapping text inside a rect. That is the base; everything else is not necessary.

We should think about a way to make GUI prototyping easy (no boilerplate; maybe an editor? but then we lose strict checking by the compiler), fast (no compiling? maybe serializing?) and pretty (ever tried Bulma or Bootstrap? they are easy-to-use styling solutions and look decent). No current solution provides all of that.

@fschutt
Contributor Author

fschutt commented Sep 13, 2017

@jaroslaw-weber I can't see what you're referring to, since GitHub's Markdown renumbers your list items. Use something like (5) to escape this.

That said, I don't see how you could do something like a dropdown with the hierarchy as the constraints are right now. My point was that there is currently no way for components to overlap each other: if you insert something, it just pushes the rest of the UI down instead of overlapping it.

We should use textures instead of shapes.

So how would you draw an SVG component or something like this with only textures? GUI toolkits usually allow you to draw arbitrary shapes, like arcs, curves and lines; this is what I am referring to. You could draw these to an OpenGL texture, but that should be the task of the GUI toolkit, not of the user of the library.

just add wrapping text inside a rect

Sure, I wasn't talking about the here and now, but rather about what's possible in the future. Wrapping text is fairly simple: you only have to check, for each character, whether it has overflowed the parent rectangle and move the line down accordingly. The TextEdit component already does this. Yes, text styling is not super important, but it's far from useless.
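
A naive version of that check, assuming a fixed advance per character purely for illustration:

    // Naive word wrapping: assumes every character has the same advance width,
    // just to illustrate the "check for overflow, move the line down" idea.
    fn wrap_text(text: &str, max_width: f32, char_width: f32) -> Vec<String> {
        let mut lines: Vec<String> = vec![String::new()];
        for word in text.split_whitespace() {
            let current_len = lines.last().unwrap().len();
            let needed = (current_len + 1 + word.len()) as f32 * char_width;
            if current_len > 0 && needed > max_width {
                // The word would overflow the parent rect: start a new line.
                lines.push(word.to_string());
            } else {
                let line = lines.last_mut().unwrap();
                if !line.is_empty() {
                    line.push(' ');
                }
                line.push_str(word);
            }
        }
        lines
    }

    fn main() {
        for line in wrap_text("wrapping text inside a rect is fairly simple", 120.0, 8.0) {
            println!("{}", line);
        }
    }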

but then we lose strict checking by the compiler

If code generation happens before the compiler runs, checking is still enforced. C# does code generation at compile time, Java does code generation at compile time, GTK does code generation (with Glade). If you use in-memory buffers and caching effectively, generating code is pretty fast. I am against loading something at runtime; it causes all kinds of problems. Plus, as things stand, that might not even be possible (inserting constraints at runtime).
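
With Cargo this is just a build script; a minimal sketch (the ui.xml file and the generated function are hypothetical):

    // build.rs -- minimal compile-time code generation sketch. It reads a
    // (hypothetical) ui.xml next to Cargo.toml and writes Rust source into
    // OUT_DIR, which the crate can then pull in with
    // `include!(concat!(env!("OUT_DIR"), "/ui.rs"));`
    use std::{env, fs, path::Path};

    fn main() {
        let xml = fs::read_to_string("ui.xml").expect("ui.xml missing");

        // A real generator would parse the XML and emit widget-building code;
        // this stub only shows where that output would go.
        let generated = format!(
            "pub fn build_ui() {{ /* generated from {} bytes of XML */ }}\n",
            xml.len()
        );

        let out_dir = env::var("OUT_DIR").expect("OUT_DIR is set by cargo");
        fs::write(Path::new(&out_dir).join("ui.rs"), generated).expect("could not write ui.rs");

        // Re-run the generator only when the UI definition changes.
        println!("cargo:rerun-if-changed=ui.xml");
    }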

@jaroslaw-weber

jaroslaw-weber commented Sep 13, 2017

I can't see what you're referring to, since GitHub's Markdown renumbers your list items

Fixed

So how would you draw an SVG component or something like this with only textures?

You are right, we need this. However, a lot of people just need a few buttons, sliders, input fields, etc. I know that a lot of people care about functionality, but without a way to create a pretty app I don't think the library will go mainstream. If we can make pretty, cross-platform GUI apps in Rust, then people may follow. But to do this, we need to focus on styling/textures more than on abstract shapes. I see this library as an HTML-ish way to do GUI; maybe we could take some inspiration from CSS styling.

I am not sure what your idea is here, but I wish you luck.

@christolliday
Owner

Hi, thanks for raising these questions, and for your vote of confidence! Limn is still at a very early stage right now, so expect major changes. I'd say don't build anything complicated on top of it unless you're prepared to rewrite a lot of code and run into serious bugs, but if you are OK with that, then by all means go ahead; it could help inform the direction the library evolves in.

I've been meaning to start writing about the plans for this project, so this is as good a time as any to start. I'll try to write up some more high-level documentation on goals and priorities in the next week or so.

1. Custom Shapes

Agreed that custom shapes should be supported and made easy to write. My plan here is actually just to wait for webrender to support them. It might be a while, but it looks like it's in the pipeline: https://github.com/servo/servo/wiki/Basic-SVG-support-project

I should also say that, in general, at this stage limn should be responsible for as little graphics code as possible, just to keep the maintenance burden manageable while things are in flux. Complicated features should be possible, but not necessarily easy, yet. If you render to an image buffer and then pass that to webrender, most things should be possible; in fact, I think that's how servo supports svg and canvas elements right now.

2. Popovers

Currently the z-ordering of widgets is determined by the hierarchy and by the order in which widgets are added to their parents: parents are always drawn behind their children, and children are drawn in the order they are added to their parent. You can remove and re-add a child to move it in front of its siblings (there should probably be a bring_to_front method to make this more obvious). I think this should be enough to handle dropdowns, popups and other overlaying widgets. I'm not sure yet whether widgets will need a separate z-index; there might be a case for it, but I haven't found one yet.
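
On a plain list of children (not limn's actual API), "remove and re-add" is essentially:

    // Generic illustration: the last child in the list is drawn on top, so moving
    // a child to the end of the Vec brings it in front of its siblings.
    fn bring_to_front<T>(children: &mut Vec<T>, index: usize) {
        if index < children.len() {
            let child = children.remove(index);
            children.push(child);
        }
    }

    fn main() {
        let mut children = vec!["background", "dropdown", "label"];
        bring_to_front(&mut children, 1);
        assert_eq!(children, ["background", "label", "dropdown"]);
    }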

3. Default Constraints

Yes, the constraint system and the layout crate are what I'm focused on right now. I'm trying various things to make them as unsurprising as possible, so the top-left-corner default could be a good idea. Expect big changes in this area.

4. Animations / Multiple UIs

Yep, these are all important. For the animation itself, I'm hoping the EventHandler model can handle it well enough, similar to how the Clock example works, which is essentially an animation that only updates every second. The UI should emit an event for every frame, and widgets can then add a handler that sets an edit variable or changes drawable parameters on each frame event.
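
As a rough sketch of that idea (the names here are invented for illustration, not limn's actual EventHandler API): a handler keeps the animation's start time and maps each frame event to a new parameter value.

    use std::time::{Duration, Instant};

    // Hypothetical per-frame animation state: each frame event asks it for the
    // current value of the animated parameter (here an opacity fading in).
    struct FadeIn {
        start: Instant,
        duration: Duration,
    }

    impl FadeIn {
        fn new(duration: Duration) -> Self {
            FadeIn { start: Instant::now(), duration }
        }

        // Called from a frame-event handler; returns an opacity in 0.0..=1.0.
        fn on_frame(&self, now: Instant) -> f32 {
            let t = now.duration_since(self.start).as_secs_f32() / self.duration.as_secs_f32();
            t.min(1.0)
        }
    }

    fn main() {
        let fade = FadeIn::new(Duration::from_millis(300));
        std::thread::sleep(Duration::from_millis(100));
        println!("opacity now: {:.2}", fade.on_frame(Instant::now()));
    }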

In terms of transitioning between UIs, it's honestly not something I've thought much about, and I think it will require some careful design. The idea so far has been for the UI to be truly global, so there should be no need for more than one per window, or per application (there could be multiple windows for one UI). It sounds like you are using UIs sort of like Android 'Activities'? My thought was that those could just be Widgets, which can be deactivated. Limn should include more EventHandlers, logic and bundled widgets to facilitate this, which will require some iteration on the design, especially with regard to performance. Maybe something like a master widget that manages the transitions and passes global UI input to the active 'Activity' widget. Are there any features that the UI has that you wish Widgets had, or that make this impossible?

A simple example that demonstrates this with animations might be nice to have, especially as a benchmark, i.e. if it doesn't hit 60fps on a low-end laptop or a phone, that's an issue. Any new feature should be demoed in the examples, as a (hopefully temporary) substitute for integration tests and proper benchmarks.

5. Components

The idea is for any Widget behavior to be composable from existing or new EventHandlers, drawables and child widgets. The directory src/widgets contains the bundled widgets, but it should be possible to construct and package widgets in the same way outside of limn. The basic idea is that you define a function or a struct that is responsible for building a Widget.
So what you are describing is possible right now, but it is very poorly documented, and it might look a little different from what you're used to. Essentially, the standard interface for creating a widget is constructing instances of the Widget type itself. Once a widget is created, it communicates via events, some of which are standardized, but I don't think there needs to be a common trait that all custom widget types implement. The meaning of a widget should be very open ended; even the name might change. Fundamentally, all a Widget is, is something that exists in a hierarchy, with a bounding Rect, optionally a way to draw itself, and a means to react to events. Hopefully the plan here will make more sense as I write more documentation.

6. Code Generation

I think a high-level, declarative interface is absolutely necessary and planned for the long term, but it's also a low priority right now: I don't want it to distract from getting the core design right and making limn stable and reliable, and there are a lot of open questions that will have a large impact on what that design would look like.
That said, my idea generally is that Rust would be used to define widget or component types, with no difference between those bundled with limn and those in an external library or application, and the high-level language would then be used to declare and configure instances of those types. The line between the two is still a little blurry, however.
I'm planning to experiment with how widgets can expose an interface of properties or attributes, which is likely how a high-level binding would communicate with them. This might look something like 'attributes' in IUP. This is important so that the interface to the widgets is not hard coded; it can be determined by the widgets themselves, wherever they are defined.
The high-level language might start out as just macros, although something interpreted would be ideal. Maybe by the time the macro design comes together, cretonne might be usable; then it might be possible to compile limn in release mode and build and run your application code using cretonne with the fastest code-generation settings. I realize that is a long way out, but for me the goal is first to demonstrate that a large, complex, reliable app can be built with it, and only after that to make it as easy as possible to throw apps together.

7. Text Shaping

This is an interesting idea, but I also think it should be a low priority, since it expands the scope considerably and (probably) won't impact the core design much. Still, I have some thoughts on how it could be done.
For path construction, rusttype is mostly used just for font handling, as a 'rusty' wrapper around stb_truetype-rs. The line-text positioning convenience method is used, but it could easily be bypassed with something that allows more complex paths. Text could then be drawn by passing glyphs one at a time to webrender, each with its own transform/color. I believe this is the approach stylish uses to draw multi-colored text.
As for how this could be exposed to the user, rather than having the current Text drawable and TextWidget grow these features and become all things to all people, it might make sense to have a separate TextPath drawable, for instance, to keep the main Text widget/drawable simple to use. Similarly, if someone were writing a text editor that only cares about monospace fonts and no automatic line breaks, that could be its own widget and drawable, optimized for that use case.

8. Gradients / Blend modes

I believe this is possible with webrender, although it requires a custom drawable at the moment. It might make sense to add some API to expose it more easily. Maybe generalize BackgroundColor or Color fields into something that could take gradients, for instance.

9. Window handling / custom drawing regions

Take a closer look at the event loop ;) it only uses poll_events to flush unhandled events before checking whether a redraw is needed. The new winit method for waiting is run_forever, which is used to wait when there are no events to process. Right now there is some logic to ensure at least 1/60th of a second has passed before drawing the next frame, if some change has occurred. This should be fine for both games and regular applications, so I'll probably leave it as is until there is a good reason to change it. There is a lot of work to be done in minimizing unnecessary redrawing, however.

Implementing a 'custom drawing region' or 'canvas' widget would probably just involve passing an image buffer to webrender. Some helper code could probably be added to make this easier. Might be another good candidate for an example.

So, thanks again for looking at all of this, and sorry there's not a lot to go on in terms of documentation; I'll try to resolve that soon. Some of the core aspects of the API are starting to crystallize, although even then I might go through a pass of renaming a bunch of things, which has made me hesitant to document too much too early.

In general, if you're interested in contributing right now, I'd say just try and build things with it, see where it breaks, and what things are missing or more difficult than they should be, and let me know what things are unclear. I love the idea of a GUI builder, it's something I've thought about building myself eventually, but probably won't attempt for a while.

@fschutt
Contributor Author

fschutt commented Sep 18, 2017

Thanks for your response. I'm closing this issue as it's not really an "issue", I just wanted to get your thoughts on these topics. Thanks for the extensive reply.

@fschutt fschutt closed this as completed Sep 18, 2017
@christolliday
Owner

christolliday commented Oct 3, 2017

Hi @fschutt I hope it's ok if I re-open this issue since some others might find this info useful. I think there's no problem with issues just being a discussion. Feel free to make separate issues for some of the individual points above too though as it might be easier to track that way.

@christolliday christolliday reopened this Oct 3, 2017