# Shoe detection

VisionKit supports shoe detection starting from base library version 3.2.1 (Android WeChat >= 8.0.43, iOS WeChat > 8.0.43).

The shoe detection capability is provided as an interface parallel to the other VisionKit capabilities.

This capability is typically used to build features such as AR shoe try-on or leg occlusion.

# Method Definition

Shoe detection currently supports only real-time detection through the camera.

It can be configured to return a leg occlusion texture every frame, which can be used to mask the leg area over the shoe.

# Real-time camera detection

First, create a configuration for the VKSession, then start the VKSession instance via VKSession.start.

While the session is running, the algorithm detects shoes in the camera feed in real time and outputs the shoe matrix information (position, rotation, scale) and eight key point coordinates via VKSession.on.

Finally, obtain the frame data (VKFrame) and camera (VKCamera) of the VKSession in real time, read VKCamera.viewMatrix and VKCamera.getProjectionMatrix, and combine them with the shoe matrix and key point information for rendering.

Sample code for starting the VKSession:

```js
// VKSession configuration
const session = wx.createVKSession({
    track: {
        shoe: {
            mode: 1 // 1: detect through the camera
        }
    }
})

// In real-time camera detection mode, the updateAnchors event fires continuously (once per frame) while shoes are detected
session.on('updateAnchors', anchors => {
    // The current version can recognize 1-2 shoes, so anchors has length 0-2
    anchors.forEach(anchor => {
        // shoedirec: which shoe, 0 for left, 1 for right
        console.log('anchor.shoedirec', anchor.shoedirec)
        // transform: the shoe's matrix information (position, rotation, scale), a 4*4 row-major matrix
        console.log('anchor.transform', anchor.transform)
        // points3d: coordinates of the 8 key points
        console.log('anchor.points3d', anchor.points3d)
    })
})

// In real-time detection mode, the removeAnchors event fires continuously while shoes are lost from the camera view
session.on('removeAnchors', () => {
  console.log('removeAnchors')
})

// You need to call start once to start the session
session.start(errno => {
  if (errno) {
    // errno is returned on failure
  } else {
    // otherwise errno is null, indicating success
  }
})
```

Sample code for rendering the try-on shoe:

```js
// The following steps should be performed every frame

// 1. Obtain the frame object through the getVKFrame method of the VKSession instance
const frame = this.session.getVKFrame(canvas.width, canvas.height)

// 2. Get the VKCamera
const VKCamera = frame.camera

// 3. Get the view matrix and projection matrix of the VKCamera
const viewMat = VKCamera.viewMatrix
const projMat = VKCamera.getProjectionMatrix(near, far)

// 4. Combine the anchor transform and points3d returned by updateAnchors for rendering (see the official examples for details)
```
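
As a sketch of step 4, the following shows one way to combine these matrices. It assumes viewMat and projMat are column-major length-16 arrays ready for WebGL, and that anchor.transform is the row-major array described under the output description below (hence the transpose). mat4Multiply, mat4Transpose, and drawShoeModel are local helpers and a placeholder, not VisionKit APIs.

```js
// Multiply two column-major 4x4 matrices: out = a * b
function mat4Multiply(a, b) {
  const out = new Float32Array(16)
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0
      for (let k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k]
      }
      out[col * 4 + row] = sum
    }
  }
  return out
}

// Transpose a row-major 4x4 array into column-major order
function mat4Transpose(m) {
  const out = new Float32Array(16)
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[col * 4 + row] = m[row * 4 + col]
    }
  }
  return out
}

// Call this with the anchors delivered by the updateAnchors event
function renderShoes(anchors) {
  anchors.forEach(anchor => {
    const modelMat = mat4Transpose(anchor.transform)
    // mvp = projection * view * model; pass it to your shader as a uniform
    const mvp = mat4Multiply(projMat, mat4Multiply(viewMat, modelMat))
    // drawShoeModel is a placeholder for your own draw call
    // drawShoeModel(mvp, anchor.points3d)
  })
}
```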

# Enabling leg occlusion

In real-time mode, you may want to enable leg occlusion so that the leg can cover part of the shoe model.

After VKSession.start has been called and the VKSession is running, call VKSession.updateMaskMode to update whether the leg segmentation texture is returned. Then use VKFrame.getLegSegmentBuffer to obtain the leg occlusion texture buffer, and create a texture from the buffer to mask out the corresponding area.

Example code:

```js
// 1. Toggle retrieval of the leg occlusion texture
// After VKSession.start, use updateMaskMode to update whether the leg occlusion texture buffer is returned
this.session.updateMaskMode({
    useMask: true // enable or disable the leg occlusion texture buffer (enabled by default on client 8.0.43; can be turned off manually)
})

// 2. Obtain the frame object through the getVKFrame method of the VKSession instance
const frame = this.session.getVKFrame(canvas.width, canvas.height)

// 3. Get the leg segmentation texture buffer from the frame object (160 * 160, single channel)
const legSegmentBuffer = frame.getLegSegmentBuffer()

// 4. Mask out content based on the leg occlusion texture (see the official examples for details)
```
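
As one possible implementation of step 4, the sketch below uploads the buffer as a WebGL texture; it assumes a WebGL 1 rendering context `gl` and the legSegmentBuffer obtained above. The 0.5 threshold in the shader comment is an arbitrary choice, not part of the interface.

```js
// Uploading FLOAT pixel data in WebGL 1 requires the OES_texture_float extension
const ext = gl.getExtension('OES_texture_float')
if (!ext) throw new Error('OES_texture_float not supported')

const legTexture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, legTexture)
// Float textures are not filterable without OES_texture_float_linear, so use NEAREST
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
// Upload the 160x160 single-channel buffer as a LUMINANCE float texture
gl.texImage2D(
  gl.TEXTURE_2D, 0, gl.LUMINANCE, 160, 160, 0,
  gl.LUMINANCE, gl.FLOAT, new Float32Array(legSegmentBuffer)
)

// In the fragment shader (GLSL), something like:
//   float leg = texture2D(u_legMask, v_uv).r;
//   if (leg > 0.5) discard; // hide shoe fragments covered by the leg
```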

PS: In addition to algorithmic segmentation, you can manually add mesh occlusion to achieve a similar occlusion effect. See the official examples for details.

# Output description

Anchor information:

```
struct anchor
{
  transform,  // shoe matrix information: position, rotation, scale
  points3d,   // array of shoe key points
  shoedirec,  // which foot, 0 for left, 1 for right
}
```

# 1. Shoe matrix transform

An array of length 16 representing a 4 * 4 row-major matrix.
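
As an illustration of the layout (a sketch, not VisionKit API):

```js
// With row-major storage, the element at (row, col) is transform[4 * row + col]
function at(transform, row, col) {
  return transform[4 * row + col]
}

// Example: print the matrix row by row
for (let row = 0; row < 4; row++) {
  console.log(
    at(anchor.transform, row, 0), at(anchor.transform, row, 1),
    at(anchor.transform, row, 2), at(anchor.transform, row, 3)
  )
}
```

Note that WebGL uniforms expect column-major data (and WebGL 1 requires the `transpose` argument of uniformMatrix4fv to be false), so transpose the array before uploading it, as in the rendering sketch above.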

# 2. Shoe key points points3d

An array of length 8 representing the 8 key points of shoe detection:

Array<Point>(8) points3d

Each array element is structured as:

struct Point { x, y, z }

Each point represents the 3D offset position of that point after the shoe matrix is applied.

The following is a diagram of the key points and an explanation of their locations:

- Point 0 is located at the back of the bottom of the shoe, at the heel, near the ground.
- Point 1 is located at the front of the shoe, approximately halfway between the upper surface of the shoe and the sole in the vertical direction.
- Point 2 is located at the back of the shoe, above point 0, near the ankle.
- Points 3 and 4 are located on the sides of the shoe: point 3 on the left side of the left shoe and the right side of the right shoe, and point 4 on the right side of the left shoe and the left side of the right shoe. In the diagram above, point 3 is in an occluded position. Points 3 and 4 are close to the bottom of the shoe.
- Points 5 and 6 are located on the sides of the shoe, further back relative to points 3 and 4, slightly above the bottom of the shoe.
- Point 7 is located roughly in the center of the tongue.

# 3. Leg Segmentation Texture Buffer

A 160 * 160 single-channel floating-point ArrayBuffer. A value of 0.0 corresponds to a non-leg area; values greater than 0.0 indicate the leg area, with values closer to 1.0 indicating higher confidence that the pixel belongs to the leg.
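
To make the value range concrete, here is a minimal CPU-side sketch that thresholds the buffer into a binary mask. The 0.5 threshold is an arbitrary illustration, and in practice the buffer is typically sampled directly in a shader, as shown earlier.

```js
// View the raw ArrayBuffer as 160 * 160 float values
const mask = new Float32Array(frame.getLegSegmentBuffer())
const binary = new Uint8Array(mask.length)
let legPixels = 0
for (let i = 0; i < mask.length; i++) {
  if (mask[i] > 0.5) {
    binary[i] = 255 // treat as leg area
    legPixels++
  }
}
console.log('leg coverage:', legPixels / mask.length)
```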

# How to position a model based on the returned information

First, add a node and synchronize it with the shoe matrix information. Then add a model to the node, set the appropriate offset based on the shoe key points, and scale the model so that it fits the key points and achieves the target effect, as sketched below.
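
A minimal sketch of this approach, assuming a generic scene graph: `node`, `model`, `setMatrix`, `setScale`, `setPosition`, and `MODEL_LENGTH` are hypothetical stand-ins for your renderer; only anchor.transform and anchor.points3d come from the interface above.

```js
function placeModel(anchor) {
  // 1. Sync the node with the shoe matrix (setMatrix is a hypothetical scene-graph API)
  node.setMatrix(anchor.transform)

  // 2. Estimate the real shoe length from the key points:
  //    point 0 sits at the heel, point 1 at the front of the shoe
  const heel = anchor.points3d[0]
  const toe = anchor.points3d[1]
  const shoeLength = Math.hypot(toe.x - heel.x, toe.y - heel.y, toe.z - heel.z)

  // 3. Scale the model so its own length matches the detected shoe,
  //    then offset it so the model heel sits on key point 0
  const scale = shoeLength / MODEL_LENGTH // MODEL_LENGTH: your model's native length
  model.setScale(scale, scale, scale)
  model.setPosition(heel.x, heel.y, heel.z)
}
```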

# Program Examples

# Real-time camera detection example

Weixin Mini Program example: Interface - VisionKit visual capabilities - Real-time shoe detection - shoe try-on case

Open source address: Real-time shoe detection - shoe try-on case

The sample identifies the shoes and places a shoe model in the identified area. Leg occlusion is enabled by default and can be dynamically turned on and off, and the corresponding shoe key points can be shown or hidden.

Case Effects: