Request: Control Phong shading material properties #74
Comments
I like your proposal. CC'ing @sebastien, who might have comments. Although my research notes have a few references to "Phong shading" and "Blinn/Phong shading", I never tried to implement it. Instead, the existing lighting model was copied and pasted from a shadertoy written by Inigo Quilez, and I don't know how it works. There's nothing sacred about this code; I'm happy to replace it with something better.
@doug-moen, do you think it would be possible to reify that code into Curv itself, i.e. have a standard implementation of the lighting model written in Curv? This would be a really interesting feature for people (like me) who are not only interested in making printable 3D models, but who are generally interested in exploring generative art, both in 2D and 3D. For instance, works like what Sean Zellmer or Kyndinfo are doing would be hard to do with Curv at the moment, because they require custom shading. So my proposition would be: 1) implement a default lighting/shading model in Curv, and 2) make it possible for users to replace it with their own. Also, in general, I think it's great to have high level scene-like constructs available in the language, but I think we should also make sure that these high level constructs are built on a flexible low level foundation, so that people can customize at will.
@sebastien: Yes, this seems like a good idea. Yes, a render function could be written in Curv. In issue #73, I talked about View records, a new kind of graphical value that Curv knows how to render onto a display. So what does the View record actually contain? Because that is the low level interface you are requesting. I think it contains a shape, a camera, and a render function. What is the interface to a render function? I eventually plan, for performance reasons, to have ray-casting, ambient occlusion, and rendering run in 3 separate compute shaders, quite different from the current architecture, which puts everything in a single fragment shader. This architecture is based on mTec. So the render function cannot call "castRay", "calcNormal" or "calcAO". Instead, the results of these function calls must be passed in as arguments (or as uniform variables). So the render function will have an interface more like this:
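Something along these lines (a sketch only; every name is provisional, nothing is a settled API):

```
// Provisional sketch: the ray-cast results (hit position, normal,
// ambient occlusion) are passed in; the function only combines them.
render {pos, normal, ao, light_dir, colour} =
    let diffuse = max[0, dot[normal, light_dir]];
    in colour * (0.2*ao + 0.8*diffuse);
```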
And there also needs to be a Material argument, because with an mTec-like architecture the render function cannot call the shape's material function directly.
This makes sense; the only thing that worries me is whether the low level building blocks will remain accessible. So, what would a pure-Curv rendering pipeline look like?
Let's imagine that we have a sketch with two spheres: one that uses a toon-like shading and the other a reflective metallic shading. For this, we would need at least three separate functions:
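For instance (purely illustrative names and signatures, not actual Curv API):

```
// Two shading styles, plus a dispatcher that picks one per material.
shade_toon {normal, light_dir, colour} =
    // quantize the diffuse term into 3 bands for a toon look
    colour * (ceil(3 * max[0, dot[normal, light_dir]]) / 3);

shade_metal {normal, view_dir, env, colour} =
    // reflect the view direction and sample an environment function
    let r = view_dir - 2 * dot[view_dir, normal] * normal;
    in 0.2*colour + 0.8*(env r);

shade material = if (material.style == "toon") shade_toon else shade_metal;
```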
What seems clear to me is that we'll need to pass an open-ended environment/context structure that holds any information that might be required to compute the shading (normal, occlusion, lights, etc.). This information would definitely vary depending on the type of shading. If I try to decompose the process of going from a screen coordinate to a final colour, I get roughly this: cast a ray from the camera through the pixel and find where it hits a surface; gather the surface information at that point (normal, occlusion, material); and apply the shading function to produce the colour.
Each of these is a rendering pass, and we could imagine multiple rendering passes being combined together. An infrastructure like this, if it is reified into the language, would make it possible not only to experiment with procedural shape generation, but also to experiment with custom rendering strategies and get even more visually interesting results.
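In other words, something like this (illustrative names only):

```
// Each pass is just a Curv function, composed in order.
render_pixel (x, y) =
    let hit  = cast_ray(x, y);      // pass 1: raycast from the camera
        surf = surface_info hit;    // pass 2: normal, occlusion, material
    in shade surf;                  // pass 3: compute the final colour
```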
I want to make Curv's rendering faster and more scalable. Right now, if you union together a large number of shapes, the frame rate plummets. There are multiple open source projects that demonstrate techniques I can use to speed up rendering. To achieve my performance goals, I will need to experiment with multiple techniques (and measure their performance impact), and I'll need to combine multiple techniques together in the final design. To run these experiments, I need the Curv code that I'm running performance tests on to be well modularized. I was intending to only support raycasting at this stage, which means: you cast a ray from the camera position, in a straight line, until it hits the surface of the shape, then you stop and compute the lighting at the point where you hit. If you want reflections, then that is ray tracing, which is a lot more complicated. Generalizing my proposed high performance renderer to support real-time ray tracing is very ambitious, and it's not in my short or medium term plan. @sebastien, maybe you can accomplish your goals with a ShaderToy-like API, where you write a Curv function similar to ShaderToy's `mainImage`. So I'm going to replace my previous "View record" proposal with a new version that works at a lower level, and includes a rendering function that maps (x,y) viewport coordinates to a colour, inspired by ShaderToy. I don't have the details worked out yet. Then @p-e-w's interface can be implemented in Curv on top of this low level interface. If you want to experiment with custom rendering strategies, and explore ways to modularize the rendering process, that can be prototyped in Curv.
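Roughly like this (the details are not worked out; all names here are provisional):

```
// Hypothetical sketch of the lower-level View record.
view = {
    shape: my_shape,
    camera: default_camera,
    // like ShaderToy's mainImage: viewport coordinates in, colour out;
    // this toy example just renders a gradient
    render: (x, y) -> [x, y, 0.5],
};
```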
@doug-moen That sounds fair, provided we can reuse some of the lower-level rendering building blocks provided by Curv. For instance, there's little benefit to re-implementing the raymarching algorithm if it's already available. Looking forward to seeing this new API!
I don't have a written design, but how about this. Maybe we should convert most of the application logic in the Viewer window from C++ code to Curv code, and place it in a Curv Viewer value, using the model/view/controller paradigm. The high level building blocks that are composed to create a Viewer are put into a Curv library. If you want to explore a different way to decompose Viewer values into modular building blocks, just create another library: this part of the design does not need to be built in.
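As a sketch only (nothing here is an actual API):

```
// Hypothetical Viewer value bundling model, view and controller.
viewer = {
    model: sphere 1,                  // the thing being viewed
    view: {camera: default_camera},   // how it is displayed
    controller: default_controls,     // how input events update the view
};
```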
What are the parameters to the Viewer? In the current design, the fragment shader that displays 3D shapes depends on a fixed set of parameters: the shape's distance and colour functions, its bounding box, the camera state, and the current time. In the general form of this feature, where you define the model, view and controller by writing Curv code, the set of parameters is open ended.
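Concretely, a sketch (names are illustrative, not the actual shader interface):

```
// The parameters the current fragment shader depends on,
// gathered into one record.
render_params = {
    dist: p -> mag p - 1,              // signed distance function
    colour: p -> [1, 0.5, 0],          // per-point colour function
    bbox: [[-1,-1,-1], [1,1,1]],       // bounding box for the raymarcher
    camera: {eye: [3,0,0], centre: [0,0,0], up: [0,0,1]},
    time: 0,                           // animation clock, in seconds
};
```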
There was a discussion in October last year where several people requested more flexibility and configurability in how the Viewer UI interprets keyboard and mouse events for viewing a shape and moving around in space. This MVC design would address those requirements, and also provide the ability to create simple interactive animations in Curv.
@doug-moen Regarding MVC (3 posts up), that would be great! I would however leave the controller part out of Curv and delegate it to the host (curv's command line viewer, or curved web preview) -- the rationale is that Curv is a DSL for generative/procedural modeling and rendering, and supporting the controller part would mean pulling in a ton of supporting constructs (user input, events, etc.) that would dilute Curv's focus and be redundant with the embedding host. This sums up the ideal scenario: good defaults that you can decompose, override and recompose in pure Curv:
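For instance (hypothetical: `default_view`, the `shade` field, and record spread here are all assumptions, not current Curv features):

```
// Start from the defaults and override just one building block.
my_view = {
    ... default_view,    // inherit every default building block
    shade: toon_shade,   // except the shading function
};
```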
I also completely agree on the proposed interface: a render function that maps (x, y) viewport coordinates to a colour.
This would be great for Shadertoy-like exploration, but I think we also need a default rendering function that makes use of lower-level primitives, like raymarching and the lighting model/shading. In pseudo-code, it could look like this:
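Something like the following, where all the names are made up:

```
// A default render function assembled from lower-level primitives
// that user code could also call directly.
default_render {shape, camera, lights} (x, y) =
    let ray = camera_ray(camera, [x, y]);
        hit = raymarch(shape, ray);
    in if (hit == null)
        background_colour
    else
        shade {
            pos: hit.pos,
            normal: estimate_normal(shape, hit.pos),
            material: shape.material(hit.pos),
            lights: lights,
        };
```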
Here, `raymarch` and `shade` would be the same lower-level primitives that the default renderer uses, exposed so that user code can call them directly.
I have refrained from commenting on this thread for the past few days because I wanted to be really sure how I feel about these proposals before responding, but here it is now. When I started to seriously explore Curv around 6 months ago, I realized that this type of scope expansion was something that might happen, and I was very much hoping that it would not. What the various proposals described above have in common is that they are essentially turning Curv from a DSL for describing three-dimensional shapes (I will ignore 2D here) into a frontend for GLSL. The part of the renderer that is not written in Curv would essentially be limited to OpenGL boilerplate, creating a quad, and then executing a fragment shader that is a thin wrapper around whatever code the Curv compiler emits. Here is why I think this might be problematic:

1. It ties Curv to GLSL

No matter how the API looks in the end, it will be shaped by how GLSL fragment shaders operate; that is, it will require an underlying rendering system that roughly resembles the Shadertoy API. Raytracing software, the 3D printing ecosystem, virtual reality etc., all of which are valid environments for shapes represented in Curv, do not provide this type of interface. As a result, there will be a bunch of Curv code that doesn't apply to some output environments because it has nothing to do with shapes per se (I realize that this is also true for the proposal in #73, and perhaps this is actually an argument for not having those parameters in Curv itself but in an external configuration file).

2. It is unlikely to be as good as hand-written GLSL code

The current GPU compiler generates code such as this:
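From memory, the flavour is roughly this (illustrative of the style, not verbatim compiler output):

```glsl
// Every intermediate value gets its own register-like temporary.
float r0 = p.x;
float r1 = p.y;
float r2 = p.z;
vec3  r3 = vec3(r0, r1, r2);
float r4 = length(r3);
float r5 = 1.0;
float r6 = r4 - r5;
float r7 = r6;
return r7;
```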
Note that much of the code just shuffles values around without any effect on the result. Of course, there must be lots of low-hanging fruit here, and the output of such functions can probably be much improved with a moderate amount of effort, but I am highly sceptical of it ever approaching human-written GLSL code in terms of quality and performance, as projects like the GNU Compiler Collection strongly indicate that automatic optimization to that degree requires an effort measured in thousands of man-years.

3. It is not necessary

Professional 3D artists use rendering systems that are highly configurable, but not necessarily "pluggable" in the sense proposed here. I believe the same will work fine for Curv. Of course, I need to have control over the camera position and perspective. Of course, I want to be able to place lights anywhere, and fine-tune material reflectivity, transparency, refraction and other optical properties. But do I really need to be able to replace the rendering code itself? I don't see why, as long as I have control over all of the above. I want the best-looking output possible, as fast as possible. Having a renderer that is compiled from Curv instead of hand-written is unlikely to further that goal, and offers me almost nothing of value in exchange. If there is something the renderer cannot do (such as transparency), it should be added to the renderer, behind a configuration flag, for the benefit of all Curv users.

In summary, I believe there should be one (adaptive and highly configurable) piece of rendering code, hand-written in GLSL, that is output by Curv to render shapes defined by their distance functions. I do not think that this renderer should be, or needs to be, written in Curv, as it breaks the abstraction Curv currently provides and offers nothing that cannot also be accomplished by the default renderer in a configurable fashion. What Curv desperately needs is better actual modeling capabilities, such as an easy way to construct polyhedra, or essential operations like taking the convex hull of another shape. It also needs the high-level configurability provided by every 3D modeling software with respect to materials, lights etc. But arbitrary control over per-pixel rendering output strikes me as somewhere between niche and unnecessary. Of course, I understand that others might legitimately feel differently about this subject.
@p-e-w: I agree with the high level goals that you stated: better actual modeling capabilities, high-level control over camera, lights and materials, and the best-looking output possible, as fast as possible.
There are lots of Curv programs that can be 3D printed, but not rendered on screen at an interactive frame rate. So, better rendering performance is a big part of giving Curv better modeling capabilities. To improve Curv's modeling capabilities, I am working through a prioritized list of projects; the two relevant to this discussion are the New Shape Compiler and the New Renderer.
The New Renderer is the last item in my priority list due to technical constraints. Compute shaders are not available in WebGL 2, or in the OpenGL subset supported by macOS. However, I could use WebGPU, an API that is still being designed, but which is already available in alpha release form as C, C++ and Rust libraries, and in the 3 major web browsers (as an experimental feature you must explicitly enable). WebGPU will require a GPU manufactured in 2012 or later. The old renderer, based on a single shadertoy-style fragment shader, is capable of running on older hardware. Hypothetically, WebGPU might be far enough along in 6 months to support the subset of capabilities needed by the new renderer. Maybe development could start then, but the old renderer would still be needed to support older hardware, and to support web browsers that don't have the WebGPU flag enabled. Once people start using the new renderer, new models will be created that can't be rendered by the old renderer. I'm bothered by the idea of maintaining two renderers in parallel, so ultimately, the old renderer will be removed. This means that the rendering API that Sebastien is requesting is an evolutionary dead end: it requires the old renderer. Nevertheless, it might be useful to implement Sebastien's API as temporary scaffolding, because it would make it easier to prototype ideas for the new lighting model that is part of the new renderer. What I intend is that we would build various high level abstractions on top of Sebastien's API, then port those abstractions to the new renderer once the new renderer becomes available. Sebastien's API also depends on the New Shape Compiler.
My understanding from reading the Curv documentation is that Phong shading is used to render 3D shapes. But currently, the parameters of the Phong reflection model are not exposed to the user. It would be nice if those parameters could be controlled on a per-point basis, just as is already possible for colours with the `colour` function. Proposed interface:
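Something in this direction (a sketch only; `material` is the proposed addition, and the field names are illustrative):

```
// A per-point material function, analogous to the existing colour
// function.
rough_sphere = sphere 1 >> material (p -> {
    diffuse: 0.9,    // weight of the diffuse (Lambertian) term
    specular: 0.1,   // weight of the specular highlight
    shininess: 4,    // Phong exponent: lower = broader highlight
});
```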
The data structure returned by the `material` function can grow as needed to satisfy evolving modeling requirements. For example, if a future (hypothetical) 3D printer supported printing with multiple physical materials in the same model, material selection could be realized by extending the structure with an appropriately named field whose value is one of several supported materials.

I would suggest that what is currently `colour` be absorbed into this more generic function as well, were it not for the fact that material properties don't make much sense for 2D images, so it is probably better to keep `colour` separate.