
Pipelines


Work-in-progress instructions migrated from the old readme.

Layers


Some model pipelines use different types of images as inputs to help guide the image generation process. To achieve this, the plugin uses LayerProcessors to capture the required information from Unreal's viewport or a SceneCapture2D actor and feed the resulting layers into the pipeline. For example, any model based on the StableDiffusionImg2Img pipeline uses a single FinalColourLayerProcessor that captures the viewport as its input image. Something more complex, like the StableDiffusionControlNetPipeline, may use multiple layer processors, such as a DepthLayerProcessor combined with a NormalLayerProcessor. Take a look at the provided depth and normal ControlNet model preset to see how it is put together.
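As a rough illustration of what those layers feed into, here is a minimal sketch of a multi-ControlNet diffusers pipeline that takes a depth image and a normal image as conditioning inputs. This is not the plugin's own code: the file paths, prompt, and model IDs are placeholder assumptions, and in practice the plugin captures and wires up these layers for you internally.

```python
# Minimal sketch (not plugin code): a diffusers multi-ControlNet pipeline that
# consumes a depth layer and a normal layer, similar to what the plugin's
# DepthLayerProcessor and NormalLayerProcessor ultimately provide.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Placeholder inputs: in the plugin these would come from the captured viewport layers.
depth_image = Image.open("depth.png")    # hypothetical path
normal_image = Image.open("normal.png")  # hypothetical path

# One ControlNet per conditioning layer, in the same order as the images passed below.
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-normal", torch_dtype=torch.float16),
]

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a ruined temple overgrown with vines",  # placeholder prompt
    image=[depth_image, normal_image],  # one conditioning image per ControlNet
    num_inference_steps=30,
).images[0]
result.save("output.png")
```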

Some layer processors are configurable in the plugin UI. If you navigate to the Layers section, each layer may expose configurable properties that affect the layer before it is passed to the pipeline. You can also preview a layer by clicking its closed-eye icon.
