# Body detection

Starting from base library version 2.28.0, VisionKit provides body detection capabilities. Starting from Weixin >= 8.1.0, 3D keypoint detection of the human body is provided as an extended capability interface of body detection.
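Since the capability depends on the base library version, it can be worth checking the version before creating the session. Below is a minimal sketch, assuming a small hand-written compareVersion helper (not a built-in API) and the SDKVersion field returned by wx.getSystemInfoSync:

// Sketch only: guard body detection behind a base library version check
// compareVersion is a hypothetical helper, not part of the framework
function compareVersion(v1, v2) {
  const a = v1.split('.').map(Number)
  const b = v2.split('.').map(Number)
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0
    const y = b[i] || 0
    if (x !== y) return x > y ? 1 : -1
  }
  return 0
}

const { SDKVersion } = wx.getSystemInfoSync()
const bodyDetectAvailable = compareVersion(SDKVersion, '2.28.0') >= 0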

# Method definition

There are two ways to use body detection: one is to run detection on a static input image, and the other is to detect in real time through the camera.

# 1. Static image detection

Use the [VKSession.detectBody interface](https://developers.weixin.qq.com/miniprogram/dev/api/ai/visionkit/VKSession.detectBody.html) to pass in an image. The algorithm detects the human body in the image and then outputs the body key point information through the [VKSession.on interface](https://developers.weixin.qq.com/miniprogram/dev/api/ai/visionkit/VKSession.on.html).

Sample code:

const session = wx.createVKSession({
  track: {
    body: { mode: 2 } // mode: 1 - use the camera; 2 - manually input an image
  },
})

// In static image detection mode, each call to the detectBody interface triggers the updateAnchors event once
session.on('updateAnchors', anchors => {
    this.setData({
        anchor2DList: anchors.map(anchor => ({
            points: anchor.points, // key point coordinates
            origin: anchor.origin, // starting point (upper-left corner) coordinates of the detection box
            size: anchor.size // size of the detection box
        })),
    })
})

// start needs to be called once to start the session
session.start(errno => {
  if (errno) {
    // errno is returned on failure
  } else {
    // otherwise null is returned, indicating success
    session.detectBody({
      frameBuffer, // image ArrayBuffer data: pixel data of the image to be detected; every four items are the RGBA values of one pixel
      width, // Image width
      height, // Image height
      scoreThreshold: 0.5, // Scoring threshold
      sourceType: 1 // image source; defaults to 1 (a standalone photo), 0 indicates the input image comes from consecutive video frames
    })
  }
})
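detectBody expects raw RGBA pixel data. One possible way to produce frameBuffer, width and height from a local image file is sketched below, assuming the page can create a 2D offscreen canvas; getFrameBufferFromImage is a hypothetical helper, not part of VisionKit:

// Sketch only: read RGBA pixel data from a local image via an offscreen canvas
function getFrameBufferFromImage(imagePath) {
  return new Promise((resolve, reject) => {
    wx.getImageInfo({
      src: imagePath,
      success: ({ width, height }) => {
        const canvas = wx.createOffscreenCanvas({ type: '2d', width, height })
        const ctx = canvas.getContext('2d')
        const img = canvas.createImage()
        img.onload = () => {
          ctx.drawImage(img, 0, 0, width, height)
          const { data } = ctx.getImageData(0, 0, width, height)
          // data is a Uint8ClampedArray of RGBA values; pass its buffer to detectBody
          resolve({ frameBuffer: data.buffer, width, height })
        }
        img.onerror = reject
        img.src = imagePath
      },
      fail: reject,
    })
  })
}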

# 2. Real-time detection via camera

The algorithm detects the human body in the camera feed in real time and outputs the detected body key points in real time through the [VKSession.on interface](https://developers.weixin.qq.com/miniprogram/dev/api/ai/visionkit/VKSession.on.html).

Sample code:

const session = wx.createVKSession({
  track: {
    body: { mode: 1 } // mode: 1 - use the camera; 2 - manually input an image
  },
})

// In real-time camera detection mode, the updateAnchors event is triggered continuously while a human body is detected (once per frame)
session.on('updateAnchors', anchors => {
    this.data.anchor2DList = []
    this.data.anchor2DList = this.data.anchor2DList.concat(anchors.map(anchor => ({
        points: anchor.points,
        origin: anchor.origin,
        size: anchor.size
    })))
})

// When the human body leaves the camera view, the removeAnchors event is triggered
session.on('removeAnchors',  () => {
  console.log('removeAnchors')
})

// start needs to be called once to start the session
session.start(errno => {
  if (errno) {
    // errno is returned on failure
  } else {
    // otherwise null is returned, indicating success
  }
})
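In this mode the session captures camera frames itself, so the page mainly needs to manage the session lifecycle. A minimal sketch, assuming the session is stored on the page instance and released with VKSession.stop / VKSession.destroy:

// Sketch only: tie the session to the page lifecycle
onHide() {
  // stop the camera and detection while the page is hidden
  if (this.session) this.session.stop()
},
onUnload() {
  // release the session when the page is destroyed
  if (this.session) this.session.destroy()
}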

# 3. Turn on 3D keypoint detection

To enable 3D keypoint detection of the human body in static image mode, simply add the open3d field on top of the 2D call, as follows:

// Static image mode call
session.detectBody({
  ...,           // same as the 2D call parameters
  open3d: true,  // enable the human 3D key point detection capability; defaults to false
})

In real-time mode, a 3D switch update call is added on top of the 2D call, as follows:

// Camera real-time mode call
session.on('updateAnchors', anchors => {
  this.session.update3DMode({ open3d: true })  // enable the human 3D key point detection capability; defaults to false
  ...,  // the rest is the same as the 2D call
})
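Once open3d is enabled (in either mode), the anchors delivered by updateAnchors also carry the 3D fields described in the output explanation below. A minimal sketch of reading them, assuming each anchor exposes points3d alongside the 2D fields:

// Sketch only: read both 2D and 3D key points once open3d is enabled
session.on('updateAnchors', anchors => {
  anchors.forEach(anchor => {
    console.log('2D key points:', anchor.points)
    console.log('3D key points:', anchor.points3d) // only present when open3d is enabled
  })
})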

# Output Explanation

# Point Explanation

The human 2D key points are defined as 23 points, as shown in the figure below.

The human 3D key points are defined as the 24 joints of the SMPL model, as shown in the figure below.

# Human detection

The human body detection output fields include:

struct anchor
{
  points,    // (x, y) coordinates of the human body's 2D key points in the image
  origin,    // (x, y) coordinates of the upper-left corner of the human detection box
  size,      // width and height (w, h) of the human detection box
  score,     // confidence of the human detection box
  confidence // confidence of the human body key points
}
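As an illustration of how these fields might be consumed, the sketch below filters detections by score and turns each anchor into a plain rectangle description. It assumes origin exposes x/y and size exposes width/height, and that the coordinates are in the same space as whatever the rectangles are later drawn onto:

// Sketch only: keep confident detections and describe their bounding boxes
function toBoxes(anchors, minScore = 0.5) {
  return anchors
    .filter(anchor => anchor.score >= minScore)   // drop low-confidence detections
    .map(anchor => ({
      x: anchor.origin.x,          // upper-left corner of the detection box
      y: anchor.origin.y,
      width: anchor.size.width,    // box width and height
      height: anchor.size.height,
      points: anchor.points        // 2D key points of this body
    }))
}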

# Human 3D Key Points

After the human 3D key point detection capability is enabled, both the 2D and 3D key point information of the human body can be obtained.

struct anchor
{ 
  ...,               // the 2D output fields of human body detection
  points3d,          // (x, y, z) 3D coordinates of the human body's 3D key points
  camExtArray,       // camera extrinsic matrix, defined as [R, T; 0_3, 1]; 3D points can be projected back onto the image using the camera intrinsic and extrinsic matrices
  camIntArray        // camera intrinsic matrix, cf. glm::perspective(fov, width / height, near, far)
}
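As noted above, 3D points can be projected back onto the image with the camera matrices. A minimal sketch of that projection, assuming camIntArray and camExtArray are column-major 4x4 matrices in the glm convention and that each entry of points3d exposes x/y/z (both assumptions should be verified against the actual output):

// Sketch only: project one 3D key point (x, y, z) back to pixel coordinates
function projectPoint(point, camIntArray, camExtArray, width, height) {
  // column-major 4x4 multiply: out = M * v
  const mulMat4Vec4 = (m, v) => [
    m[0] * v[0] + m[4] * v[1] + m[8]  * v[2] + m[12] * v[3],
    m[1] * v[0] + m[5] * v[1] + m[9]  * v[2] + m[13] * v[3],
    m[2] * v[0] + m[6] * v[1] + m[10] * v[2] + m[14] * v[3],
    m[3] * v[0] + m[7] * v[1] + m[11] * v[2] + m[15] * v[3],
  ]
  // world -> camera (extrinsic), then camera -> clip space (intrinsic / perspective)
  const camSpace = mulMat4Vec4(camExtArray, [point.x, point.y, point.z, 1])
  const clip = mulMat4Vec4(camIntArray, camSpace)
  // perspective divide to normalized device coordinates, then map to pixels
  const ndcX = clip[0] / clip[3]
  const ndcY = clip[1] / clip[3]
  return {
    x: (ndcX * 0.5 + 0.5) * width,
    y: (1 - (ndcY * 0.5 + 0.5)) * height, // flip y: NDC is bottom-up, images are top-down
  }
}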

# Application Scenario Examples

  1. Portrait cutout.
  2. Boundary-crossing detection.
  3. Crowd flow statistics.

# Program Examples

  1. [Real-time camera body detection usage reference](https://github.com/wechat-miniprogram/miniprogram-demo/tree/master/miniprogram/packageAPI/pages/ar/body-detect)
  2. [Static image body detection usage reference](https://github.com/wechat-miniprogram/miniprogram-demo/tree/master/miniprogram/packageAPI/pages/ar/photo-body-detect)