# Getting started

**xr-frame** is an official XR/3D application solution for Mini Programs. It is built on a hybrid approach with near-native performance, good visual quality, ease of use, strong extensibility, and progressive adoption, and it follows Mini Program development standards.

In this chapter, we'll take you through building an XR Mini Program with it from scratch.

This article is only a getting-started guide. For more details, see the component framework documentation and the API documentation.

xr-frame became stable and was officially released in base library v2.32.0, but some features are still under development; see [Limitations and Outlook](../../component/xr-frame/overview/index.md#Limitations and Outlook).

# Create a new XR component

First, create the project and select the Mini Program project type:

Then create a `components` folder, create a new component in it, and modify the component's files as shown below. You also need to add one line of configuration to `app.json`: `"lazyCodeLoading": "requiredComponents"`.
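
A minimal `app.json` with this option might look roughly like the sketch below (the `pages` entry is just a placeholder from the default template, not part of the original guide):

```json
{
  "pages": ["pages/index/index"],
  "lazyCodeLoading": "requiredComponents"
}
```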

index.json:

```json
{
  "component": true,
  "renderer": "xr-frame",
  "usingComponents": {}
}
```

index.wxml:

```xml
<xr-scene>
  <xr-camera clear-color="0.4 0.8 0.6 1" />
</xr-scene>
```

In `index.json` we specify that the renderer for this component is `xr-frame`. In `index.wxml` we create a scene, `xr-scene`, and add a camera, `xr-camera`, under it.

# Use this component in the page

Once the component is created, you can use it in a page. Go to `pages/index` and modify its `json`, `wxml`, and `ts` files:

In the `json`:

```json
{
  "usingComponents": {
    "xr-start": "../../components/xr-start/index"
  },
  "disableScroll": true
}
```

In the `ts` script:

```js
Page({
  data: {
    width: 300,
    height: 300,
    renderWidth: 300,
    renderHeight: 300,
  },
  onLoad() {
    const info = wx.getSystemInfoSync()
    const width = info.windowWidth
    const height = info.windowHeight
    const dpi = info.pixelRatio
    this.setData({
      width, height,
      renderWidth: width * dpi,
      renderHeight: height * dpi
    })
  },
})
```

In the `wxml`:

```xml
<view>
  <xr-start
    disable-scroll
    id="main-frame"
    width="{{renderWidth}}"
    height="{{renderHeight}}"
    style="width:{{width}}px;height:{{height}}px;"
  />
</view>
```

Here we compute in the script the width and height at which the xr-frame component needs to render, pass them in through the `wxml`, and reference the component registered in the `json`. The current effect is as follows: the entire canvas is cleared to the clear color set on `xr-camera`:

# Add an object

Next we add an object to the scene, using `xr-mesh` together with built-in geometry data and materials to create a cube:

```xml
<xr-scene>
  <xr-mesh node-id="cube" geometry="cube" />
  <xr-camera clear-color="0.4 0.8 0.6 1" position="0 1 4" target="cube" camera-orbit-control />
</xr-scene>
```

Here we give the object a `node-id` that serves as the node's index, then modify the `position` and `target` of `xr-camera` so that it keeps looking at the cube, and finally add `camera-orbit-control` to the camera so that we can control it.

At this point, a cube is rendered, but... why black?

# A little color and light

The object is black because, when no material is specified for `xr-mesh`, the default PBR-based material is used, and it requires lighting. There are two ways to solve this. For objects that don't need lighting, you can use the `simple` material; here we introduce a material definition:

```xml
<xr-asset-material asset-id="simple" effect="simple" uniforms="u_baseColorFactor:0.8 0.4 0.4 1" />
<xr-mesh node-id="cube" geometry="cube" material="simple" />
```

The effect is as follows:

Although this solves some problems, most of the time we still need lights, so let's change the material back and add some lights:

```xml
<xr-light type="ambient" color="1 1 1" intensity="1" />
<xr-light type="directional" rotation="40 70 0" color="1 1 1" intensity="3" cast-shadow />

<xr-mesh
  node-id="cube" cast-shadow
  geometry="cube" uniforms="u_baseColorFactor:0.8 0.4 0.4 1"
/>
<xr-mesh
  position="0 -1 0" scale="4 1 4" receive-shadow
  geometry="plane" uniforms="u_baseColorFactor:0.4 0.6 0.8 1"
/>
```

Here we add an ambient light and a main directional light, adjust their intensity and direction, and add a new object; then we turn on shadows via the `cast-shadow` and `receive-shadow` attributes on the respective components. The effect is as follows:

# A little bland, add some textures

Although there is light now, solid colors alone are still a bit bland, so let's add textures to make the scene's colors richer. This requires the resource loader `xr-asset-load` and the `xr-assets` container:

```xml
<xr-assets bind:progress="handleAssetsProgress" bind:loaded="handleAssetsLoaded">
  <xr-asset-load type="texture" asset-id="waifu" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/waifu.png" />
</xr-assets>

<xr-mesh
  node-id="cube" cast-shadow
  geometry="cube" uniforms="u_baseColorMap: waifu"
/>
```

Notice that we bind two events on `xr-assets`, `progress` and `loaded`. These let developers listen to resource loading progress and act as needed, for example showing an object only after the resources have finished loading, in cooperation with `wx:if`. By default a progressive strategy is used: a resource is automatically applied to the object as soon as it finishes loading:

```js
methods: {
  handleAssetsProgress: function ({detail}) {
    console.log('assets progress', detail.value)
  },
  handleAssetsLoaded: function ({detail}) {
    console.log('assets loaded', detail.value)
  }
}
```

The effect of this modification is as follows:

Of course, we can also load a texture dynamically in code and then apply it to an object. Here we take fetching the user's avatar from the user-info API as an example:

```js
data: {
  avatarTextureId: 'white'
},

methods: {
  handleReady: function ({detail}) {
    this.scene = detail.value
    // Note: wx.getUserProfile has been deprecated (see the note below); it is used here for demonstration only.
    wx.getUserProfile({
      desc: 'Used to fetch the avatar',
      success: (res) => {
        this.scene.assets.loadAsset({
          type: 'texture', assetId: 'avatar', src: res.userInfo.avatarUrl
        }).then(() => this.setData({avatarTextureId: 'avatar'}))
      }
    })
  }
}
```

According to the Mini Program user avatar and nickname retrieval rules adjustment announcement, wx.getUserProfile was deprecated after 24:00 on October 25, 2022.

Note the `handleReady` here: it is triggered by binding `bind:ready="handleReady"` on `xr-scene`. After the avatar is fetched, the data is set as the source of a `uniforms` entry:

```xml
<xr-mesh
  position="0 -1 0" scale="4 1 4" receive-shadow
  geometry="plane" uniforms="u_baseColorMap: {{avatarTextureId}}"
/>
```

The effect is as follows:

# Make the scene richer, environment data

If objects can have textures, can the background have one too? Of course. We provide the environment element `xr-env` to define environment information; combined with the camera, it can render a skybox. Here we take the framework's built-in environment data `xr-frame-team-workspace-day` as an example:

```xml
<xr-env env-data="xr-frame-team-workspace-day" />

<xr-mesh
  node-id="cube" cast-shadow
  geometry="cube" uniforms="u_baseColorMap: waifu,u_metallicRoughnessValues:1 0.1"
/>

<xr-camera
  position="0 1 4" target="cube" background="skybox"
  clear-color="0.4 0.8 0.6 1" camera-orbit-control
/>
```

Here we set the `background` of `xr-camera` to `skybox`, and also adjust the cube's metallic and roughness values. The effect is as follows:

You can also see that the objects in the scene pick up a layer of reflection, as if influenced by the environment. That's because the environment data also contains some IBL information, which we won't discuss here; interested readers can learn more in a later chapter.

Besides images, the skybox also supports video. We can load a video texture first and then override the `sky-map` in the environment data:

```xml
<xr-asset-load type="video-texture" asset-id="office" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/videos/office-skybox.mp4" options="autoPlay:true,loop:true" />

<xr-env env-data="xr-frame-team-workspace-day" sky-map="video-office" />
```

The effect is as follows:

In addition to the skybox, we also support a 2D background, which is useful for product showcases:

```xml
<xr-asset-load type="texture" asset-id="weakme" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/weakme.jpg" />

<xr-env env-data="xr-frame-team-workspace-day" sky-map="weakme" is-sky2d />
```

The effect is as follows:

# Get it moving, add animation

At the moment the whole scene is static, so let's add some animation to enrich it. We'll use a keyframe animation resource: first create an assets directory and create a `json` file in it:

```json
{
  "keyframe": {
    "plane": {
      "0": {
        "rotation.y": 0,
        "material.u_baseColorFactor": [0.2, 0.6, 0.8, 1]
      },
      "50": {
        "material.u_baseColorFactor": [0.2, 0.8, 0.6, 1]
      },
      "100": {
        "rotation.y": 6.28,
        "material.u_baseColorFactor": [0.2, 0.6, 0.8, 1]
      }
    },
    "cube": {
      "0": {
        "position": [-1, 0, 0]
      },
      "25": {
        "position": [-1, 1, 0]
      },
      "50": {
        "position": [1, 1, 0]
      },
      "75": {
        "position": [1, 0, 0]
      }
    }
  },
  "animation": {
    "plane": {
      "keyframe": "plane",
      "duration": 4,
      "ease": "ease-in-out",
      "loop": -1
    },
    "cube": {
      "keyframe": "cube",
      "duration": 4,
      "ease": "steps",
      "loop": -1,
      "direction": "both"
    }
  }
}
```

Then load it and reference it on two objects in the scene:

```xml
<xr-asset-load asset-id="anim" type="keyframe" src="/assets/animation.json"/>

<xr-mesh
  node-id="cube" cast-shadow anim-keyframe="anim" anim-autoplay="clip:cube,speed:2"
  geometry="cube" uniforms="u_baseColorMap: waifu,u_metallicRoughnessValues:1 0.1"
/>
<xr-mesh
  node-id="plane" position="0 -1 0" scale="4 1 4" receive-shadow anim-keyframe="anim" anim-autoplay="clip:plane"
  geometry="plane" uniforms="u_baseColorMap: {{avatarTextureId}}"
/>

<xr-camera
  position="0 1 6" target="plane" background="skybox"
  clear-color="0.4 0.8 0.6 1" camera-orbit-control
/>
```

Here we set the `target` of `xr-camera` to `plane`, so that the camera does not follow the cube as it moves.

The effect is as follows. Note that because this animation is a `json` file inside the package, you need to add the `"ignoreDevUnusedFiles": false` and `"ignoreUploadUnusedFiles": false` options to the `setting` field of `project.config.json`, as sketched below!
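
A rough sketch of the relevant part of `project.config.json` (your project's other fields are omitted here):

```json
{
  "setting": {
    "ignoreDevUnusedFiles": false,
    "ignoreUploadUnusedFiles": false
  }
}
```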

# Still not enough, add a model

Looking at this scene, you may feel something is still missing: it's all simple boxy geometry, which is rather monotonous. So here we load and use glTF models to make the scene richer. To keep things concise, we remove all the objects from the original scene and adjust the camera's `target`:

```xml
<xr-asset-load type="gltf" asset-id="damage-helmet" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/damage-helmet/index.glb" />
<xr-asset-load type="gltf" asset-id="miku" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/miku.glb" />

<xr-gltf node-id="damage-helmet" model="damage-helmet" />
<xr-gltf model="miku" position="-0.15 0.75 0" scale="0.07 0.07 0.07" rotation="0 180 0" anim-autoplay />

<xr-camera
  position="0 1.5 4" target="damage-helmet" background="skybox"
  clear-color="0.4 0.8 0.6 1" camera-orbit-control
/>
```

Here we load two models: a static one that supports all the features of PBR rendering, and a simpler one with animation. The final result is as follows:

# More interaction

That's enough about rendering, but for an application, interaction with the user is essential. In many scenarios developers need to tap objects in the scene to run some logic, so we provide the `Shape` family of components:

```xml
<xr-gltf
  node-id="damage-helmet" model="damage-helmet"
  id="helmet" mesh-shape bind:touch-shape="handleTouchModel"
/>
<xr-gltf
  model="miku" position="-0.15 0.75 0" scale="0.07 0.07 0.07" rotation="0 180 0" anim-autoplay
  id="friend" cube-shape="autoFit:true" shape-gizmo bind:touch-shape="handleTouchModel"
/>
```

We give the two models `id`s and add different kinds of `Shape`: `mesh-shape` matches the model exactly but has higher overhead and a vertex limit, while `cube-shape` has low overhead and can also be visualized with the debug switch `shape-gizmo`. Finally, we bind the corresponding tap event and write the logic in the script to complete the operation:

```js
handleTouchModel: function ({detail}) {
  const {target} = detail.value
  const id = target.id

  wx.showToast({title: `Tapped model: ${id}`})
}
```

Then, when you tap the corresponding object, a toast will pop up:

# Component communication, add a HUD

Although we have interaction now, it can't always just be a toast pop-up. Often we want the interaction to drive UI elements, but for now xr-frame does not yet support mixing Mini Program UI elements into the scene (this will be supported in a future version). We can, however, use the same-layer rendering solution, and same-layer rendering inevitably involves component communication.

Communication between an xr-frame component and its parent is essentially the same as for a traditional component, so let's use Mini Program UI elements to implement a HUD. Some knowledge of 3D transformations is involved here, but don't worry, it's just calling an interface.

First, let's modify the component's `wxml`: add a `tick` event to the scene, and add `id`s to both the models and the camera for easy indexing.

```xml
<xr-scene bind:ready="handleReady" bind:tick="handleTick">
  ......
  <xr-gltf
    node-id="damage-helmet" model="damage-helmet"
    id="helmet" mesh-shape bind:touch-shape="handleTouchModel"
  />
  <xr-gltf
    model="miku" position="-0.15 0.75 0" scale="0.07 0.07 0.07" rotation="0 180 0" anim-autoplay
    id="friend" cube-shape="autoFit:true" shape-gizmo bind:touch-shape="handleTouchModel"
  />
  <xr-camera
    id="camera" position="0 1.5 4" target="damage-helmet" background="skybox"
    clear-color="0.4 0.8 0.6 1" camera-orbit-control
  />
</xr-scene>
```

Then handle the events in the component's script, writing the logic:

```js
handleReady: function ({detail}) {
  this.scene = detail.value
  const xrFrameSystem = wx.getXrFrameSystem()
  this.camera = this.scene.getElementById('camera').getComponent(xrFrameSystem.Camera)
  this.helmet = {el: this.scene.getElementById('helmet'), color: 'rgba(44, 44, 44, 0.5)'}
  this.friend = {el: this.scene.getElementById('friend'), color: 'rgba(44, 44, 44, 0.5)'}
  this.tmpV3 = new (xrFrameSystem.Vector3)()
},
handleAssetsLoaded: function ({detail}) {
  this.triggerEvent('assetsLoaded', detail.value)
},
handleTick: function({detail}) {
  this.helmet && this.triggerEvent('syncPositions', [
    this.getScreenPosition(this.helmet),
    this.getScreenPosition(this.friend)
  ])
},
handleTouchModel: function ({detail}) {
  const {target} = detail.value
  this[target.id].color = `rgba(${Math.random()*255}, ${Math.random()*255}, ${Math.random()*255}, 0.5)`
},
getScreenPosition: function(value) {
  const {el, color} = value
  const xrFrameSystem = wx.getXrFrameSystem()
  this.tmpV3.set(el.getComponent(xrFrameSystem.Transform).worldPosition)
  const clipPos = this.camera.convertWorldPositionToClip(this.tmpV3)
  const {frameWidth, frameHeight} = this.scene
  return [((clipPos.x + 1) / 2) * frameWidth, (1 - (clipPos.y + 1) / 2) * frameHeight, color, el.id]
}
```

Here, in the `ready` event we look up the instances we need by `id` and cache them; then in the per-frame `tick` event we get each object's world coordinates in real time and convert them to screen positions, and we also add a `color` that changes when the user taps. Finally, we use `this.triggerEvent` to initiate communication from the component to the page: one event, `assetsLoaded`, fires when resources finish loading, and another, `syncPositions`, fires when the coordinates are updated. Let's see how these events are handled in the page's script:

```js
data: {
  width: 300, height: 300,
  renderWidth: 300, renderHeight: 300,
  loaded: false,
  positions: [[0, 0, 'rgba(44, 44, 44, 0.5)', ''], [0, 0, 'rgba(44, 44, 44, 0.5)', '']],
},
handleLoaded: function({detail}) {
  this.setData({loaded: true})
},
handleSyncPositions: function({detail}) {
  this.setData({positions: detail})
},
```

As you can see, it simply receives the events and writes them to `data`, that's all. So what is this `data` for? Look at the page's `wxml`:

```xml
<view>
  <xr-start
    disable-scroll
    id="main-frame"
    width="{{renderWidth}}"
    height="{{renderHeight}}"
    style="width:{{width}}px;height:{{height}}px;"
    bind:assetsLoaded="handleLoaded"
    bind:syncPositions="handleSyncPositions"
  />

  <block wx:if="{{loaded}}" wx:for="{{positions}}" wx:for-item="pos" wx:key="*this">
    <view style="display: block; position: absolute; left: {{pos[0]}}px; top: {{pos[1]}}px; background: {{pos[2]}}; transform: translate(-50%, -50%)">
      <view style="text-align: center; color: white; font-size: 24px; padding: 8px;">{{pos[3]}}</view>
    </view>
  </block>
</view>
```

This is also quite simple: event bindings are added to the `xr-start` component, and below it some UI is added that is displayed once the model is loaded and follows the model's position and color. This can be considered a DOM-based HUD. When everything is done, tapping an object changes the color of these HUD labels. The effect is as follows:

Note that the effect screenshot on the left was composited from real-device screenshots, because the developer tool does not support same-layer rendering!

# Virtual x reality, add AR capabilities

So far we have achieved rendering and interaction for 3D scenes, but the framework is called xr-frame, so next let's use the built-in AR system to give the scene AR capabilities. The transformation is very simple: first remove all the irrelevant objects, then use `ar-system` and `ar-tracker`, and set the relevant `xr-camera` attributes `is-ar-camera` and `background="ar"`:

```xml
<xr-scene ar-system="modes:Plane" bind:ready="handleReady">
  <xr-assets bind:loaded="handleAssetsLoaded">
    <xr-asset-load type="gltf" asset-id="anchor" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/ar-plane-marker.glb" />
    <xr-asset-load type="gltf" asset-id="miku" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/miku.glb" />
  </xr-assets>

  <xr-env env-data="xr-frame-team-workspace-day" />
  <xr-light type="ambient" color="1 1 1" intensity="1" />
  <xr-light type="directional" rotation="40 70 0" color="1 1 1" intensity="3" cast-shadow />

  <xr-ar-tracker mode="Plane">
    <xr-gltf model="anchor"></xr-gltf>
  </xr-ar-tracker>
  <xr-node node-id="setitem" visible="false">
    <xr-gltf model="miku" anim-autoplay scale="0.08 0.08 0.08" rotation="0 180 0"/>
  </xr-node>

  <xr-camera clear-color="0.4 0.8 0.6 1" background="ar" is-ar-camera />
</xr-scene>
```

Notice that the mode we enabled for `ar-system` is `Plane`, i.e. plane recognition. In this mode the camera can no longer be controlled by the user, so the orbit controller and `target` need to be removed. The `mode` of `ar-tracker` must be exactly the same as that of `ar-system`. Then write some simple logic in the script:

```js
handleAssetsLoaded: function({detail}) {
  wx.showToast({title: 'Tap the screen to place'})
  this.scene.event.add('touchstart', () => {
    this.scene.ar.placeHere('setitem', true)
  })
}
```

At present the AR system only works in a real-device preview, so submit it for preview. The final effect is as follows (the AR example effects are all composited screenshots):

# Recognize faces, put a mask on yourself

Now that we have a preliminary understanding of the AR system, we can try different modes to create some fun effects. Next is the face recognition mode; by changing just a few lines of the code above, you can put on the Joker's mask:

Gesture, face, and body recognition all require base library v2.28.1 or above.

```xml
<xr-scene ar-system="modes:Face,camera:Front" bind:ready="handleReady" bind:tick="handleTick">
  <xr-assets bind:loaded="handleAssetsLoaded">
    <xr-asset-load type="gltf" asset-id="mask" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/jokers_mask_persona5.glb" />
  </xr-assets>

  <xr-env env-data="xr-frame-team-workspace-day" />
  <xr-light type="ambient" color="1 1 1" intensity="1" />
  <xr-light type="directional" rotation="40 70 0" color="1 1 1" intensity="3" />

  <xr-ar-tracker mode="Face" auto-sync="43">
    <xr-gltf model="mask" rotation="0 180 0" scale="0.5 0.5 0.5" />
  </xr-ar-tracker>

  <xr-camera clear-color="0.4 0.8 0.6 1" background="ar" is-ar-camera />
</xr-scene>
```

Here we change the `modes` of `ar-system` to `Face` and set the new `camera` property to `Front` to open the front-facing camera (note that the front camera is only supported in client 8.0.31 and later; this is for demonstration only). At the same time, on `ar-tracker` we change the `mode` to `Face`, the same as `ar-system`, and add the `auto-sync` attribute, an array of numbers that binds the recognized facial feature points to the child nodes in the corresponding order and synchronizes them automatically; the specific feature points are described in detail in the component documentation. The final effect is as follows:

# Gesture, give a thumbs up to your favorite work

Besides faces, we also provide `body` and `hand` recognition, which work in much the same way. But in addition to the feature-point synchronization above, hand tracking also provides "gesture" recognition, which is more interesting; let's take a look:

```xml
<xr-scene ar-system="modes:Hand" bind:ready="handleReady" bind:tick="handleTick">
  <xr-assets bind:loaded="handleAssetsLoaded">
    <xr-asset-load type="gltf" asset-id="cool-star" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/cool-star.glb" />
  </xr-assets>

  <xr-env env-data="xr-frame-team-workspace-day" />
  <xr-light type="ambient" color="1 1 1" intensity="1" />
  <xr-light type="directional" rotation="40 70 0" color="1 1 1" intensity="3" cast-shadow />

  <xr-ar-tracker id="tracker" mode="Hand" auto-sync="4">
    <xr-gltf model="cool-star" anim-autoplay />
  </xr-ar-tracker>

  <xr-camera clear-color="0.4 0.8 0.6 1" background="ar" is-ar-camera />
</xr-scene>
```

In the `wxml` we swap the model, change the modes of `ar-system` and `ar-tracker` to `Hand`, modify the feature point bound on `ar-tracker`, add an `id` for easy indexing, and finally bind the `tick` event on the `scene`. Next comes the `js` logic:

```js
handleAssetsLoaded: function ({detail}) {
  this.setData({loaded: true})

  const el = this.scene.getElementById('tracker')
  this.tracker = el.getComponent(wx.getXrFrameSystem().ARTracker)
  this.gesture = -1
},
handleTick: function() {
  if (!this.tracker) return
  const {gesture, score} = this.tracker
  if (score < 0.5 || gesture === this.gesture) {
    return
  }

  this.gesture = gesture
  gesture === 6 && wx.showToast({title: 'Good!'})
  gesture === 14 && wx.showToast({title: 'Alas...'})
}
```

The most important part is the `handleTick` method. In each frame we take the cached `tracker` reference and read its `gesture` and `score` properties, where `gesture` is the gesture number and `score` is the confidence. The specific gesture numbers are listed in the component documentation; here we first filter by confidence and then show different messages depending on the `gesture` value (6 is a thumbs-up, 14 a thumbs-down). The effect is as follows:

# OSD Marker, marking real objects

The remaining capabilities beyond body recognition are two kinds of marker. The first is the OSD Marker, usually a photo of a real object, which identifies the object within a 2D region of the screen; we convert this to 3D space, but the developer has to ensure that the scale of the model under the `tracker` matches the recognition source. The OSD mode works best for identifying flat, well-defined objects such as billboards.

The default sample resource is used here; you can replace it with your own photos and videos. If you just want to try it out, copy the `src` URL and open it in a browser.

```xml
<xr-scene ar-system="modes:OSD" bind:ready="handleReady">
  <xr-assets bind:loaded="handleAssetsLoaded">
    <xr-asset-material asset-id="mat" effect="simple" uniforms="u_baseColorFactor: 0.8 0.6 0.4 0.7" states="alphaMode:BLEND" />
  </xr-assets>

  <xr-node>
    <xr-ar-tracker
      mode="OSD" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/marker/osdmarker-test.jpg"
    >
      <xr-mesh geometry="plane" material="mat" rotation="-90 0 0" />
    </xr-ar-tracker>
  </xr-node>

  <xr-camera clear-color="0.4 0.8 0.6 1" background="ar" is-ar-camera />
</xr-scene>
```

Here we set the `ar-system` mode to `OSD` and accordingly change the `ar-tracker` mode to `OSD` as well; this mode requires a `src`, the image to be recognized. This time we use a material whose effect is `simple`, since no lighting is needed. To see the effect better, we set `alphaMode:BLEND` in the material's `states`, i.e. enable alpha blending, and then set the color `u_baseColorFactor` in the `uniforms`, noting that its alpha is `0.7`. The final effect is as follows:

# 2D Marker + video, make photos move

The final capability is the 2D Marker, which is used to precisely recognize a rectangular plane with a distinctive texture. We can pair it with a video texture, and only a little code is needed to complete the effect. First the `wxml`:

The default sample resource is used here; you can replace it with your own photos and videos. If you just want to try it out, copy the `src` URL and open it in a browser.

```xml
<xr-scene ar-system="modes:Marker" bind:ready="handleReady">
  <xr-assets bind:loaded="handleAssetsLoaded">
    <xr-asset-load
      type="video-texture" asset-id="hikari" options="loop:true"
      src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/xr-frame-team/2dmarker/hikari-v.mp4"
    />
    <xr-asset-material asset-id="mat" effect="simple" uniforms="u_baseColorMap: video-hikari" />
  </xr-assets>

  <xr-node wx:if="{{loaded}}">
    <xr-ar-tracker
      mode="Marker" bind:ar-tracker-switch="handleTrackerSwitch"
      src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/xr-frame-team/2dmarker/hikari.jpg"
    >
      <xr-mesh node-id="mesh-plane" geometry="plane" material="mat" />
    </xr-ar-tracker>
  </xr-node>

  <xr-camera clear-color="0.4 0.8 0.6 1" background="ar" is-ar-camera />
</xr-scene>
```

Here we change the `ar-system` mode to `Marker` and the `ar-tracker` type to `Marker` as well, swap in a new recognition source, load a prepared video texture, and change the `simple` material from a color to the texture `u_baseColorMap`, while turning off blending. Notice that we use the `loaded` variable to control the display of `ar-tracker` and bind the `ar-tracker-switch` event, which is handled in the script:

```js
handleAssetsLoaded: function ({detail}) {
  this.setData({loaded: true})
},
handleTrackerSwitch: function ({detail}) {
  const active = detail.value
  const video = this.scene.assets.getAsset('video-texture', 'hikari')
  active ? video.play() : video.stop()
}
```

The content is displayed after the video finishes loading, and in the `ar-tracker-switch` event the video is played once recognition succeeds, which improves the experience. The final effect is as follows:

# Add some magic, particles

Just playing the video seems a bit monotonous, so here we can ask the particle system to work some magic and make the whole scene more vivid:

```xml
  ......
  <xr-asset-load type="texture" asset-id="point" src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/particles/point.png" />
  ......
  <xr-node wx:if="{{loaded}}">
    <xr-ar-tracker
      mode="Marker" bind:ar-tracker-switch="handleTrackerSwitch"
      src="https://mmbizwxaminiprogram-1258344707.cos.ap-guangzhou.myqcloud.com/xr-frame/Demo/xr-frame-team/2dmarker/hikari.jpg"
    >
      <xr-mesh node-id="mesh-plane" geometry="plane" material="mat" />
      <xr-particle
        capacity="500" emit-rate="20"
        size="0.03 0.06" life-time="2 3" speed="0.04 0.1"
        start-color="1 1 1 0.8" end-color="1 1 1 0.2"
        emitter-type="BoxShape"
        emitter-props="minEmitBox:-0.5 0 0.5,maxEmitBox:0.5 0.2 0,direction:0 0 -1,direction2:0 0 -1"
        texture="point"
      />
    </xr-ar-tracker>
  </xr-node>
  ......
```

On top of the previous 2D Marker video, we add the `xr-particle` element, using the newly loaded `point` texture, the `BoxShape` emitter, and other parameters to generate particles. The final effect is as follows (of course, limited by my art skills the effect is quite ordinary; I'm sure you can easily tune something far better):

# Post-processing, make the picture more fun

At the end of the main rendering, things still seem a little plain and lack a clear sense of separation from the real world. At this point we can use full-screen post-processing to achieve some more interesting effects:

```xml
  ......
  <xr-asset-load asset-id="anim" type="keyframe" src="/assets/animation.json"/>
  ......
  <xr-asset-post-process
    asset-id="vignette" type="vignette" data="intensity:1,smoothness:4,color:1 0 0 1"
    anim-keyframe="anim" anim-autoplay
  />
  <xr-camera clear-color="0.4 0.8 0.6 1" background="ar" is-ar-camera post-process="vignette" />
```

Here I apply a `vignette` post-processing effect to the camera, and add keyframe animation to control its parameters:

```json
{
  "keyframe": {
    "vignette": {
      "0": {
        "asset-post-process.assetData.intensity": 0
      },
      "100": {
        "asset-post-process.assetData.intensity": 1
      }
    }
  },
  "animation": {
    "vignette": {
      "keyframe": "vignette",
      "duration": 2,
      "ease": "ease-in-out",
      "loop": -1,
      "direction": "both"
    }
  }
}
```

The final effect is as follows:

# Share it with your friends!

Well, here we are. What matters most once we have achieved a satisfying result? Sharing it with friends, of course! Let's use xr-frame's built-in sharing system to accomplish this:

```xml
......
<xr-mesh node-id="mesh-plane" geometry="plane" material="mat" cube-shape="autoFit:true" bind:touch-shape="handleShare" />
......
```

```js
handleShare: function() {
  this.scene.share.captureToFriends()
}
```

We add the previously mentioned `Shape` to the mesh that displays the video after recognition, bind the touch event, and then simply call `this.scene.share.captureToFriends()` in the event handler. The effect is as follows:

Of course, many times we just need the image, so that we can feed it to other WeChat sharing interfaces such as the `onShareAppMessage` lifecycle; in that case, use the `share.captureToLocalPath` interface. See the component documentation for details.
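
As a rough, non-authoritative sketch of that idea only: the options object and `success` callback used below for `captureToLocalPath` are assumptions (check the component documentation for the real signature), and the sketch simply hands the captured path up to the page via the same `triggerEvent` pattern used earlier:

```js
// Component-side sketch. The options object and success callback of
// captureToLocalPath are assumptions, not the documented API.
handleCaptureForShare: function () {
  this.scene.share.captureToLocalPath({
    type: 'jpg',
    quality: 0.8,
    success: (path) => {
      // Pass the local image path up to the page, which can then use it in
      // onShareAppMessage or other WeChat sharing interfaces.
      this.triggerEvent('captured', path)
    }
  })
}
```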

# And after that, it's up to you.

So far we've had a brief taste of the framework's various capabilities, mostly through `wxml` with very little logic. For beginners, we aim to let developers achieve good results with very simple code, which is also the basis of progressive development. More detailed documentation and tutorials are available in the component documentation.

Beyond these simple uses, the framework also offers highly flexible componentization. Developers can customize their own components, elements, and all kinds of resources, and if needed we may even open up the underlying RenderGraph for customizing the rendering process. Detailed custom development capabilities are covered in later sections of the documentation, where we provide more thorough explanation and guidance.

Well, that's it for getting started. Technology is always just a tool; the rest is up to you as a creator! Before you go, take a look at these demos:

# A key