# Shoe detection

Supported by VisionKit since base library version 3.2.1 (Android WeChat >= 8.0.43, iOS WeChat >= 8.0.43).

The shoe detection capability is a capability interface that can run in parallel with other VisionKit capabilities.

This capability is typically used to develop features such as AR shoe try-on or leg occlusion.

# Method definition

Shoe detection is currently only supported through real-time camera detection.

You can configure whether each frame returns a leg occlusion texture, which is used to mask the leg area around the shoe.

# Real-time camera detection

First, create the VKSession configuration, then start the VKSession instance via VKSession.start.

While running, the algorithm detects the shoes in the camera feed in real time and outputs, via [VKSession.on](https://developers.weixin.qq.com/miniprogram/dev/api/ai/visionkit/VKSession.on.html), the shoe matrix information (position, rotation, scale) and the 8 key point coordinates.

Finally, obtain the frame data (VKFrame) from the VKSession in real time, get the VKCamera from it, obtain VKCamera.viewMatrix and VKCamera.getProjectionMatrix, and combine them with the shoe matrix and key point information for rendering.

Sample code for starting the VKSession:

// VKSession configuration
const session = wx.createVKSession({
    track: {
        shoe: {
            mode: 1 // 1: use the camera
        }
    }
})

// In camera real-time detection mode, the updateAnchors event fires continuously while shoes are detected (once per frame)
session.on('updateAnchors', anchors => {
    // The current version recognizes 1-2 shoes, so anchors has length 0-2
    anchors.forEach(anchor => {
        // shoedirec: which foot the shoe belongs to, 0 for the left, 1 for the right
        console.log('anchor.shoedirec', anchor.shoedirec)
        // transform: shoe matrix information (position, rotation, scale), a 4*4 row-major matrix
        console.log('anchor.transform', anchor.transform)
        // points3d: the 8 key point coordinates
        console.log('anchor.points3d', anchor.points3d)
    })
})

// In camera real-time detection mode, triggered when the shoes are lost from the camera view
session.on('removeAnchors', () => {
  console.log('removeAnchors')
})

// start needs to be called once to start the session
session.start(errno => {
  if (errno) {
    // On failure, errno is returned
  } else {
    // Otherwise null is returned, indicating success
  }
})

Sample code for rendering the shoe try-on:

// The following can be understood as what needs to be done for each frame

// 1. Get the frame object via the `getVKFrame` method of the VKSession instance
const frame = this.session.getVKFrame(canvas.width, canvas.height)

// 2. Get the VKCamera from the frame object
const VKCamera = frame.camera

// 3. Get the view matrix and projection matrix from the VKCamera
const viewMat = VKCamera.viewMatrix
const projMat = VKCamera.getProjectionMatrix(near, far)

// 4. Combine these with the transform and points3d of each anchor returned by updateAnchors for rendering (see the official example)
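As a rough sketch of step 4, the matrices can be combined into a per-shoe model-view-projection matrix. This assumes a plain WebGL pipeline whose vertex shader consumes a single MVP uniform; `latestAnchors` and `mvpLocation` are placeholder names, and the assumption that `viewMat` / `projMat` are already in WebGL column-major order should be verified against your renderer (anchor.transform is documented as row-major, hence the transpose).

```js
// Minimal sketch: compose MVP = projection * view * model for each shoe anchor.
// Assumptions: viewMat / projMat are column-major (WebGL layout); anchor.transform
// is row-major as documented, so it is transposed first; `latestAnchors` holds the
// anchors from the most recent updateAnchors event.

// Column-major 4x4 multiply: out = a * b
function multiplyMat4(a, b) {
  const out = new Float32Array(16)
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k]
      out[col * 4 + row] = sum
    }
  }
  return out
}

// Row-major -> column-major
function transposeMat4(m) {
  const t = new Float32Array(16)
  for (let r = 0; r < 4; r++) {
    for (let c = 0; c < 4; c++) t[c * 4 + r] = m[r * 4 + c]
  }
  return t
}

latestAnchors.forEach(anchor => {
  const modelMat = transposeMat4(anchor.transform)
  const mvp = multiplyMat4(projMat, multiplyMat4(viewMat, modelMat))
  // Upload as a shader uniform before drawing the shoe model, e.g.:
  // gl.uniformMatrix4fv(mvpLocation, false, mvp)
})
```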

# Enabling leg occlusion

In camera real-time mode, to enable the leg segmentation capability:

Make sure the VKSession.start interface has been called and the VKSession is running, then call updateMaskMode on the VKSession to update whether the leg segmentation texture is returned. After that, call getLegSegmentBuffer on the VKFrame to get the leg occlusion texture buffer, create a texture from that buffer, and perform the corresponding occlusion culling.

Sample code:

// 1. Turn on the leg occlusion texture acquisition switch
// After VKSession.start, updateMaskMode can be used to update whether the leg occlusion texture buffer is returned
this.session.updateMaskMode({
    useMask: true // Enable or disable the leg occlusion texture buffer (enabled by default on client 8.0.43, can be turned off manually)
})

// 2. Get the frame object via the `getVKFrame` method of the VKSession instance
const frame = this.session.getVKFrame(canvas.width, canvas.height)

// 3. Get the leg segmentation texture buffer from the frame object (160*160, single channel)
const legSegmentBuffer = frame.getLegSegmentBuffer()

// 4. Cull content based on the leg segmentation texture (see the official example)
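One possible way to turn the returned buffer into a texture for step 4 is sketched below, assuming a WebGL 1 context named `gl`. Uploading float data requires the OES_texture_float extension; alternatively the values can be quantized to Uint8 and uploaded as UNSIGNED_BYTE.

```js
// Minimal sketch: upload the 160*160 single-channel float buffer as a WebGL texture.
// Assumption: `gl` is a WebGL 1 context; float textures need OES_texture_float.
gl.getExtension('OES_texture_float')

const data = new Float32Array(legSegmentBuffer) // 160 * 160 values, 0.0 = non-leg
const legTexture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, legTexture)
// Single-channel data uploaded as LUMINANCE; sample the r channel in the shader
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, 160, 160, 0, gl.LUMINANCE, gl.FLOAT, data)
// 160 is not a power of two, so clamp wrapping and avoid mipmaps;
// NEAREST filtering avoids needing the OES_texture_float_linear extension
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)

// In the fragment shader, discard or fade shoe-model fragments wherever the
// sampled value is greater than 0.0, i.e. inside the leg area.
```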

PS: Besides algorithm-based segmentation, you can also manually add mesh occluders to achieve a similar occlusion effect. See the official examples.
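A sketch of that manual mesh-occluder idea, assuming three.js (for example via threejs-miniprogram) as the renderer; the cylinder dimensions and the use of the ankle key point for placement are illustrative choices, not values defined by this API:

```js
// Minimal sketch: a depth-only occluder mesh roughly covering the leg.
// colorWrite: false means the mesh writes depth but no color, so the camera
// image stays visible while shoe-model fragments behind the occluder are culled.
const occluderMaterial = new THREE.MeshBasicMaterial({ colorWrite: false })
const legOccluder = new THREE.Mesh(
  new THREE.CylinderGeometry(0.05, 0.06, 0.3, 16), // rough leg shape, sizes are placeholders
  occluderMaterial
)
legOccluder.renderOrder = -1 // draw before the shoe model so its depth is already written
scene.add(legOccluder)

// Each frame, position the occluder from the anchor, e.g. near key point 2 (ankle).
```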

# Output description

anchor information

struct anchor
{
  transform,  // shoe matrix information: position, rotation, scale
  points3d,   // array of shoe key points
  shoedirec,  // which foot the shoe is, 0 for the left foot, 1 for the right foot
}

# 1. Shoe matrix: transform

An array of length 16, representing a row-major 4*4 matrix.
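For illustration, a small sketch of reading the row-major layout; the index arithmetic follows directly from the row-major definition, while the column-vector convention for the translation components is an assumption to verify against your renderer:

```js
// Row-major layout: the element at row r, column c is transform[r * 4 + c].
// Assuming the usual column-vector convention (p' = M * p), the translation
// components sit in the last column, i.e. indices 3, 7 and 11.
const tx = anchor.transform[3]
const ty = anchor.transform[7]
const tz = anchor.transform[11]

// Libraries that expect column-major arrays (WebGL uniforms, three.js
// Matrix4.fromArray, etc.) need the array transposed first.
```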

# 2. Shoe key points: points3d

An array of length 8, representing the 8 identified key points of the shoe:

Array<Point>(8) points3d

Each array element is structured as:

struct Point { x, y, z }

Indicates the three-dimensional offset position of each point after the shoe matrix has been applied.

The following is the key point diagram and an explanation of each point:

  • Point 0 is at the bottom of the shoe, at the heel, near the ground.
  • Point 1 is at the front of the shoe, roughly halfway vertically between the top of the upper and the underside of the sole.
  • Point 2 is at the back of the shoe, above point 0, near the ankle.
  • Points 3 and 4 are on the sides of the shoe: point 3 is on the left side of the left shoe and the right side of the right shoe, while point 4 is on the right side of the left shoe and the left side of the right shoe. In the diagram above, point 3 is in an occluded position. Points 3 and 4 are close to the sole.
  • Points 5 and 6 are on the sides of the shoe, farther back than points 3 and 4, and slightly higher above the sole.
  • Point 7 is roughly in the middle of the tongue.

# 3. Leg segmentation texture buffer

A 160*160 single-channel floating-point ArrayBuffer. A value of 0.0 corresponds to a non-leg area; values greater than 0.0 represent the leg area, and the closer a value is to 1.0, the more certain it is that the pixel belongs to the leg area.
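As a small sketch of interpreting the values (assuming `frame` is the current VKFrame), the soft mask can be thresholded into a binary leg mask; the 0.5 threshold below is an arbitrary illustrative choice:

```js
// Minimal sketch: threshold the 160*160 soft mask into a binary leg mask.
const values = new Float32Array(frame.getLegSegmentBuffer())
const mask = new Uint8Array(160 * 160)
for (let i = 0; i < values.length; i++) {
  mask[i] = values[i] > 0.5 ? 255 : 0 // 255 = leg area, 0 = background (0.5 is arbitrary)
}
```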

# How to place a model based on the returned information

First, add a node that stays synchronized with the shoe matrix information. Then add a model to that node, set the corresponding offset based on the shoe key points, and scale the model to fit the key points to achieve the target effect.
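A rough sketch of that idea, assuming three.js (e.g. threejs-miniprogram) as the renderer; `scene`, `shoeModel` and `MODEL_LENGTH` are placeholder names for the rendering scene, the loaded try-on shoe model, and the model's own length:

```js
// Node that follows the shoe pose reported by the algorithm.
const shoeNode = new THREE.Object3D()
shoeNode.matrixAutoUpdate = false
scene.add(shoeNode)
shoeNode.add(shoeModel)

function onAnchorUpdate(anchor) {
  // anchor.transform is row-major; three.js stores matrices column-major.
  shoeNode.matrix.copy(new THREE.Matrix4().fromArray(anchor.transform).transpose())
  shoeNode.matrixWorldNeedsUpdate = true

  // Use the key points to size the model, e.g. the heel (point 0) to toe (point 1)
  // distance gives an approximate shoe length to scale the model against.
  const p0 = anchor.points3d[0]
  const p1 = anchor.points3d[1]
  const shoeLength = Math.hypot(p1.x - p0.x, p1.y - p0.y, p1.z - p0.z)
  shoeModel.scale.setScalar(shoeLength / MODEL_LENGTH)
}
```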

# Mini Program examples

# Camera real-time detection example

Mini Program example: Interface - VisionKit visual capabilities - Real-time shoe detection - Shoe try-on case

Open source address: [Real-time shoe detection - Shoe try-on case](https://github.com/wechat-miniprogram/miniprogram-demo/tree/master/miniprogram/packageAPI/pages/ar/shoe-detect)

The example identifies the shoe and places the shoe model in the identified area. Leg occlusion is enabled by default and can be dynamically turned on and off, and the corresponding shoe key points can be shown or hidden.

Case effect: