# Face Keypoint Detection
Starting from base library version 2.25.0 (WeChat for Android >= 8.0.25, WeChat for iOS >= 8.0.24), VisionKit provides face keypoint detection as a capability interface parallel to the marker capability and the OSD capability.
Starting from WeChat >= 8.1.0, face 3D keypoint detection is provided as an extension of the face 2D keypoint detection interface.
# Method Definitions
Face keypoint detection supports two modes: detection on a static input image, and real-time detection through the camera.
# 1. Static image detection
Pass an image in through the [VKSession.detectFace interface](https://developers.weixin.qq.com/miniprogram/dev/api/ai/visionkit/VKSession.detectFace.html). The algorithm detects the faces in the image and then returns, through the [VKSession.on interface](https://developers.weixin.qq.com/miniprogram/dev/api/ai/visionkit/VKSession.on.html), the face position, the 106 keypoints, and the face's rotation angles in 3D.
Sample code:
const session = wx.createVKSession({
  track: {
    face: { mode: 2 } // mode: 1 - use the camera; 2 - manually pass in images
  },
})
// In static image detection mode, each call to the detectFace interface triggers one updateAnchors event
session.on('updateAnchors', anchors => {
  anchors.forEach(anchor => {
    console.log('anchor.points', anchor.points)
    console.log('anchor.origin', anchor.origin)
    console.log('anchor.size', anchor.size)
    console.log('anchor.angle', anchor.angle)
  })
})
// start must be called once to initialize the session
session.start(errno => {
  if (errno) {
    // On failure, errno is returned
  } else {
    // Otherwise null is returned, indicating success
    session.detectFace({
      frameBuffer, // image ArrayBuffer data: pixel data of the face image, every four items are the RGBA of one pixel
      width, // image width
      height, // image height
      scoreThreshold: 0.5, // score threshold
      sourceType: 1,
      modelMode: 1,
    })
  }
})
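The detectFace call above expects raw RGBA pixel data in frameBuffer. As a hedged sketch (not part of the VisionKit API), one way to obtain frameBuffer, width, and height from a local image path with a 2D offscreen canvas is shown below; the helper name getImagePixels and the target size handling are illustrative assumptions.

```js
// Sketch: decode a local image on a 2D offscreen canvas and read back its RGBA pixels.
// `imagePath` (e.g. obtained from wx.chooseMedia) and the target width/height are assumptions.
function getImagePixels(imagePath, width, height) {
  return new Promise((resolve, reject) => {
    const canvas = wx.createOffscreenCanvas({ type: '2d', width, height })
    const ctx = canvas.getContext('2d')
    const img = canvas.createImage()
    img.onload = () => {
      ctx.drawImage(img, 0, 0, width, height)
      const imageData = ctx.getImageData(0, 0, width, height)
      // imageData.data is a Uint8ClampedArray of RGBA values; its underlying
      // buffer can be passed to detectFace as frameBuffer
      resolve({ frameBuffer: imageData.data.buffer, width, height })
    }
    img.onerror = reject
    img.src = imagePath
  })
}
```

The resulting object can then be spread into the detectFace call shown above.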
# 2. Real-time detection via camera
The algorithm detects faces in the camera feed in real time and, through the [VKSession.on interface](https://developers.weixin.qq.com/miniprogram/dev/api/ai/visionkit/VKSession.on.html), continuously outputs the face position, the 106 keypoints, and the face's rotation angles in the 3D coordinate system.
Sample code:
const session = wx.createVKSession({
  track: {
    face: { mode: 1 } // mode: 1 - use the camera; 2 - manually pass in images
  },
})
// In camera real-time detection mode, the updateAnchors event is triggered continuously
// while a face is detected (once per frame)
session.on('updateAnchors', anchors => {
  anchors.forEach(anchor => {
    console.log('anchor.points', anchor.points)
    console.log('anchor.origin', anchor.origin)
    console.log('anchor.size', anchor.size)
    console.log('anchor.angle', anchor.angle)
  })
})
// When the face moves out of the camera view, the removeAnchors event is triggered
session.on('removeAnchors', () => {
  console.log('removeAnchors')
})
// start must be called once to initialize the session
session.start(errno => {
  if (errno) {
    // On failure, errno is returned
  } else {
    // Otherwise null is returned, indicating success
  }
})
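In camera mode the session also supplies the camera frames themselves. Purely as an illustrative sketch (rendering details vary by project), a per-frame loop built on session.requestAnimationFrame and session.getVKFrame might look like this; the canvas variable and the rendering step are assumptions:

```js
// Sketch of a per-frame loop in camera real-time mode. `canvas` is assumed to be
// the page's WebGL canvas; rendering of the returned frame is left as a comment.
function startFrameLoop(session, canvas) {
  const onFrame = () => {
    const frame = session.getVKFrame(canvas.width, canvas.height)
    if (frame) {
      // Render the camera frame here (e.g. via frame.getCameraTexture),
      // then draw the most recent anchors from updateAnchors on top.
    }
    session.requestAnimationFrame(onFrame)
  }
  session.requestAnimationFrame(onFrame)
}
```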
# 3. Turn on 3D keypoint detection
To enable face 3D keypoint detection in static image mode, simply add the open3d field to the 2D call, as follows:
// Static image mode call
session.detectFace({
  ..., // same parameters as the 2D call
  open3d: true, // enable face 3D keypoint detection; defaults to false
})
In real-time mode, a 3D-mode switch is added to the 2D call via update3DMode, as follows:
// Camera real-time mode call
session.on('updateAnchors', anchors => {
  this.session.update3DMode({ open3d: true }) // enable face 3D keypoint detection; defaults to false
  // ... otherwise the same as the 2D call
})
# Output Description
# 1. Point definition
Both the face 2D keypoints and the face 3D keypoints consist of 106 points, defined as shown in the figure below. When the face pose changes, the contour points of the 2D face keypoints always follow the visible edge of the face, while the 3D face keypoints maintain the face's three-dimensional structure.
# 2. Face 2D Key Points
The face 2D keypoint output fields include:
struct anchor
{
  points,     // (x, y) coordinates of the 106 points in the image
  origin,     // (x, y) coordinate of the upper-left corner of the face box
  size,       // width and height (w, h) of the face box
  angle,      // face angle information (pitch, yaw, roll)
  confidence  // confidence of the face keypoints
}
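As an illustration only, a sketch of reading these fields in the updateAnchors handler; the exact field shapes (for example whether origin and size are { x, y } / { width, height } objects) are assumptions here, so inspect the actual anchor objects at runtime:

```js
session.on('updateAnchors', anchors => {
  anchors.forEach(anchor => {
    // Assumed shapes: origin = { x, y }, size = { width, height }
    const left = anchor.origin.x
    const top = anchor.origin.y
    const right = left + anchor.size.width
    const bottom = top + anchor.size.height
    console.log('face box', { left, top, right, bottom })
    console.log('head pose (pitch, yaw, roll)', anchor.angle)
    console.log('keypoint count', anchor.points.length) // 106 points
  })
})
```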
# 3. Face 3D Key Points
After face 3D keypoint detection is turned on, both the 2D and 3D keypoint information of the face can be obtained:
struct anchor
{
  ...,         // the face 2D keypoint output fields above
  points3d,    // (x, y, z) 3D coordinates of the 106 face points
  camExtArray, // camera extrinsic matrix, defined as [R, T; 0_3, 1]; using the camera intrinsic
               // and extrinsic matrices, the 3D points can be projected back onto the image
  camIntArray  // camera intrinsic matrix; cf. glm::perspective(fov, width / height, near, far)
}
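Since the comment above notes that the 3D points can be projected back onto the image with the camera matrices, here is a hedged sketch of that projection. It assumes camExtArray and camIntArray are flat column-major 4x4 arrays (glm convention) and that points3d entries are { x, y, z } objects; verify these conventions against the actual output before relying on them.

```js
// Multiply a column-major 4x4 matrix (flat array of 16 numbers) by a [x, y, z, w] vector.
function transform(m, v) {
  const out = [0, 0, 0, 0]
  for (let row = 0; row < 4; row++) {
    out[row] = m[row] * v[0] + m[4 + row] * v[1] + m[8 + row] * v[2] + m[12 + row] * v[3]
  }
  return out
}

// Project one 3D keypoint back to pixel coordinates (assumptions as stated above).
function projectPoint(point, camExtArray, camIntArray, imageWidth, imageHeight) {
  const camSpace = transform(camExtArray, [point.x, point.y, point.z, 1]) // world -> camera
  const clip = transform(camIntArray, camSpace)                           // camera -> clip space
  const ndcX = clip[0] / clip[3]
  const ndcY = clip[1] / clip[3]
  // NDC [-1, 1] -> pixel coordinates, flipping y because image y grows downward
  return {
    x: (ndcX * 0.5 + 0.5) * imageWidth,
    y: (1 - (ndcY * 0.5 + 0.5)) * imageHeight,
  }
}
```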
# Application Scenario Examples
- Face detection.
- Face effects.
- Face pose estimation.
- Face AR games.
# Program Examples
- [Usage reference for real-time camera face detection](https://github.com/wechat-miniprogram/miniprogram-demo/tree/master/miniprogram/packageAPI/pages/ar/face-detect)
- [Usage reference for static image face detection](https://github.com/wechat-miniprogram/miniprogram-demo/tree/master/miniprogram/packageAPI/pages/ar/photo-face-detect)
# Special note
If the Mini Program's face recognition feature involves collecting or storing users' biometric information (such as face photos or videos, ID cards and handheld ID card photos, bareheaded photos, etc.), this type of service must use the WeChat native face recognition interface.