# Built-in nodes

For developer convenience and to support the built-in pipelines, the mini game framework provides a number of built-in nodes.

# Core function nodes

First are the basic core functional nodes:

  1. class RGCameraNode extends RGNode<{}, 'Camera', {camera: BaseCamera}>: Takes a camera and outputs a piece of Camera-type data; generally used as the start of a chain.

  2. class RGCullNode extends RGNode<{camera: 'Camera'}, 'MeshList', {lightMode: string, initSize?: number}>: Culling node. It requires a Camera-type input and takes a lightMode as an initialization parameter; it outputs the list of render objects within the camera's frustum that match the lightMode.

  3. class RGGenRenderTargetNode extends RGNode<{}, 'RenderTarget', {createRenderTarget(): RenderTexture | Screen;}>: Render target node, used to output a RenderTarget. Through the createRenderTarget method, the target is only actually created when it is used, saving memory.

  4. class RGClearNode extends RGNode<{camera: 'Camera', renderTarget: 'RenderTarget'}, 'None', {}>: Clear-screen node. It takes a Camera and a RenderTarget and uses the former to clear the canvas targeted by the latter.

  5. class RGGenViewNode extends RGNode<{}, 'Camera', {viewObject: {view: Kanata.View}}>: A clear-screen operation is essentially unrelated to the camera; a single View is enough to supply the clear information. Developers who need to clear the entire canvas at the very beginning can use this solution.

With the above nodes and the life cycle, we can implement a simple screen clear:

```ts
class MyRenderGraph extends engine.RenderGraph {
  public onCamerasChange(cameras: engine.BaseCamera[]) {
    const camera = cameras[0];
    const cameraNode = this.createNode<engine.RGCameraNode>('camera', RGCameraNode, {camera});
    const rtNode = this.createNode<engine.RGGenRenderTargetNode>('rt-screen', RGGenRenderTargetNode, {createRenderTarget: () => camera.renderTarget});
    const clearNode = this.createNode<engine.RGClearNode>('clear', RGClearNode, {});
    this.connect(cameraNode, clearNode, 'camera');
    this.connect(rtNode, clearNode, 'renderTarget');
  }
}
```
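The connect calls above form a small dependency graph: each edge feeds one node's output into a named input of another, and nodes execute in dependency order. As a self-contained sketch of that pattern (toy types and a standard topological sort, not the engine's implementation):

```typescript
// Toy render graph: nodes, named-input edges, and a topological
// execution order. Illustrative only; not the engine's internals.
type Edge = { from: string; to: string; input: string };

class ToyGraph {
  private nodes = new Set<string>();
  private edges: Edge[] = [];

  createNode(name: string): string {
    this.nodes.add(name);
    return name;
  }

  connect(from: string, to: string, input: string): void {
    this.edges.push({ from, to, input });
  }

  // Kahn's algorithm: a node runs only after all of its inputs are ready.
  executionOrder(): string[] {
    const indegree = new Map<string, number>();
    this.nodes.forEach(n => indegree.set(n, 0));
    this.edges.forEach(e => indegree.set(e.to, (indegree.get(e.to) ?? 0) + 1));
    const queue = [...this.nodes].filter(n => indegree.get(n) === 0);
    const order: string[] = [];
    while (queue.length > 0) {
      const n = queue.shift()!;
      order.push(n);
      for (const edge of this.edges) {
        if (edge.from !== n) continue;
        const d = indegree.get(edge.to)! - 1;
        indegree.set(edge.to, d);
        if (d === 0) queue.push(edge.to);
      }
    }
    return order;
  }
}

// Mirrors the clear-screen graph above.
const g = new ToyGraph();
const cameraNode = g.createNode('camera');
const rtNode = g.createNode('rt-screen');
const clearNode = g.createNode('clear');
g.connect(cameraNode, clearNode, 'camera');
g.connect(rtNode, clearNode, 'renderTarget');
console.log(g.executionOrder()); // 'clear' comes last
```

Because the clear node consumes both the camera and the render target, it is always scheduled after them, which is exactly the guarantee the engine's graph gives the pipeline.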

# Render node

The render node is a special core function node. Because developers need strong customization of rendering, it is abstracted as a node that can be inherited and customized:

```ts
interface IRGRenderNodeOptions<TInputs> {
  createUniformBlocks(): Kanata.UniformBlock[];
  lightMode: string;
  inputTypes?: TInputs;
  uniformsMap?: {
    [uniformName: string]: {
      inputKey: keyof TInputs,
      /**
       * If the uniform is `RenderTexture`, which buffer to use.
       */
      name: 'depth' | 'stencil' | 'color',
    }
  };
}

class RGRenderNode<TInputs extends {
  [key: string]: keyof IRGData;
} = {}> extends RGNode<
  TInputs & {camera: 'Camera', renderTarget: 'RenderTarget', meshList: 'MeshList'},
  'RenderTarget',
  IRGRenderNodeOptions<TInputs>
>
```

The most important part is its initialization parameters. createUniformBlocks allows developers to define their own set of global Uniforms (though, due to current limitations, only one is supported!). This does not contradict what the previous chapter, Rendering System, said; these can be considered to have the highest priority. Then there is the lightMode parameter; see Effects and Materials for details.

Next are the input parameters. To render, the render node needs at least the camera camera (usually from an RGCameraNode), the render target renderTarget (usually from an RGGenRenderTargetNode), and the render list meshList (usually from an RGCullNode). The inputTypes and uniformsMap initialization parameters provide an opening so that developers can add their own inputs and conveniently bind them to global Uniforms.
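To make the uniformsMap mechanism concrete, here is a self-contained toy of the lookup it implies: each entry maps a global uniform name to an input key and, for render-texture inputs, to which buffer of that texture to bind. The types here are stand-ins; the engine performs this resolution internally.

```typescript
// Toy model of uniformsMap resolution (illustrative, not engine code).
// A "render texture" here is just an object holding named buffers.
type ToyRenderTexture = { color: string; depth: string; stencil: string };
type UniformsMap = {
  [uniformName: string]: { inputKey: string; name: 'depth' | 'stencil' | 'color' };
};

// Given a node's resolved inputs and its uniformsMap, produce the
// uniform-name -> buffer bindings the render pass would receive.
function resolveUniforms(
  inputs: { [inputKey: string]: ToyRenderTexture },
  map: UniformsMap
): { [uniformName: string]: string } {
  const bound: { [uniformName: string]: string } = {};
  for (const [uniformName, { inputKey, name }] of Object.entries(map)) {
    bound[uniformName] = inputs[inputKey][name];
  }
  return bound;
}

// Mirrors the u_shadowMapTex binding used in the shadow example below.
const shadowMap: ToyRenderTexture = {
  color: 'shadow-color-buffer',
  depth: 'shadow-depth-buffer',
  stencil: 'shadow-stencil-buffer',
};
const bindings = resolveUniforms({ shadowMap }, {
  u_shadowMapTex: { inputKey: 'shadowMap', name: 'color' },
});
console.log(bindings.u_shadowMapTex); // 'shadow-color-buffer'
```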

In addition, if the developer needs deeper customization, the render node also provides a life cycle:

```ts
public onRender(context: RenderSystem, options: IRGRenderNodeOptions<TInputs>);
```

In most cases, developers do not need to customize this life cycle. If you do need it, refer to Deep Pipeline.
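As a self-contained sketch of what overriding such a life cycle looks like (the base class, context, and DebugRenderNode below are toy stand-ins, not the engine's types; the real RenderSystem API is covered in Deep Pipeline):

```typescript
// Toy sketch of overriding a node's onRender life cycle.
interface ToyContext { commands: string[] }
interface ToyOptions { lightMode: string }

class ToyNode {
  public onRender(context: ToyContext, options: ToyOptions): void {
    // Default behavior: issue the render pass for this node's lightMode.
    context.commands.push(`render pass: ${options.lightMode}`);
  }
}

// A customized node wraps extra work around the default pass.
class DebugRenderNode extends ToyNode {
  public onRender(context: ToyContext, options: ToyOptions): void {
    context.commands.push('begin debug group');
    super.onRender(context, options); // keep the default behavior
    context.commands.push('end debug group');
  }
}

const ctx: ToyContext = { commands: [] };
new DebugRenderNode().onRender(ctx, { lightMode: 'ForwardBase' });
console.log(ctx.commands);
// ['begin debug group', 'render pass: ForwardBase', 'end debug group']
```

Calling super from the override preserves the built-in pass while letting the subclass add its own work before and after, which is the usual shape of this kind of customization.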

Note, however, that a dedicated node, RGLightNode, is provided for shadow drawing. Its parameters are exactly the same as RGRenderNode's, but it draws shadows, and its output is generally used as an additional input to the render node:

```ts
const fwRenderNode = this.createNode<RGRenderNode<{shadowMap: 'RenderTarget'}>>('fb-render', RGRenderNode, {
  lightMode: 'ForwardBase',
  createUniformBlocks: () => [this._fbUniforms!],
  // Add input for shadow map
  inputTypes: {'shadowMap': 'RenderTarget'},
  // The mapping from the shadow map input to the global Uniform
  uniformsMap: {
    u_shadowMapTex: {
      inputKey: 'shadowMap',
      name: 'color'
    }
  }
});
if (camera.shadowMode !== Kanata.EShadowMode.None) {
  const scRenderNode = this.createNode<engine.RGLightNode>('shadow-caster', RGLightNode, {
    lightMode: 'ShadowCaster',
    createUniformBlocks: () => [this._scUniforms!]
  });
  this.connect(scRenderNode, fwRenderNode, 'shadowMap');
}
```

# Skybox node

The skybox node class RGSkyBoxNode extends RGNode<{}, 'MeshList', {}> is very simple. It usually serves the drawSkybox parameter on the camera and is typically used when building a RenderGraph with the camera as the source:

```ts
if (camera.drawSkybox) {
  const skyboxNode = this.createNode('skybox', engine.RGSkyboxNode, {});
  this.connect(skyboxNode, renderNode, 'meshList');
}
```

That is, it connects directly to the render node and serves as the render node's render-list input.

# UI Node

# Light node

Shadows and multiple light sources are very common lighting effects. In the mini game framework, shadows are implemented with a ShadowMap, and multi-light rendering is done with multiple passes.

# Shadow node

Connecting the shadow nodes into the pipeline takes three steps:

  1. Build the shadow map's RenderTexture render target node RGGenRenderTargetNode
  2. Build the shadow map's render node RGLightNode
  3. Pass the shadow map as a parameter to the scene's render node

The complete pipeline code is as follows:

```ts
if (camera.shadowMode !== Kanata.EShadowMode.None) {
  // Construct the RenderTexture of the shadow map
  const scRTNode = this.createNode<RGGenRenderTargetNode>('sc-render-target', RGGenRenderTargetNode, {createRenderTarget: () => this._createRT(camera, 'ShadowCaster')});
  // Build the shadow render node
  const scRenderNode = this.createNode<RGLightNode>('shadow-caster', RGLightNode, {
    lightMode: 'ShadowCaster',
    createUniformBlocks: () => [this._scUniforms!]
  });

  // Connect the camera node, the render list, and the render target to the shadow node,
  // then connect the shadow node to the scene render node
  this.connect(cameraNode, scRenderNode, 'camera');
  this.connect(scRTNode, scRenderNode, 'renderTarget');
  // fwCullNode outputs the render list of scene objects
  this.connect(fwCullNode, scRenderNode, 'meshList');
  // Pass the shadow map as a parameter to the scene's render node
  this.connect(scRenderNode, fwRenderNode, 'shadowMap');
}
```

With this, we can start rendering shadows.

# Multi-light source node

To connect multi-light rendering nodes into the pipeline, four steps are required:

  1. Construct the light-source culling node RGCullLightFANode; light sources that have no effect on objects within the view frustum are culled