Achieving face-thinning and big-eye effects with iOS's native Vision framework
Short-video projects generally use a commercial SDK such as Face++ to implement face-thinning and big-eye effects. Apple's native Vision framework can also recognize faces and extract facial feature points, so I tried it. It went more smoothly than I expected: drawing on algorithms found online, I had the effect working in about an hour.

Comparison between Vision and Face++:

1. Vision is a native framework, small and free; Face++ is paid, and its package is about 50 MB.

2. Vision requires iOS 11 or later, while Face++ appears to have no such requirement.

3. The number of key points Vision detects varies by device: 74 on iPhone 5s and iPhone 7, and 87 on iPhone XS. Face++ detects 106 key points.

4. Vision's feature points seem to jitter a little (not very stable), and its edge detection is not very accurate; Face++'s feature points should be more accurate.

Vision official documentation

Face++ official documentation

1. Use GPUImageVideoCamera to capture camera data.

2. Send the captured CVPixelBufferRef data to Vision for processing to obtain the facial feature points.

3. Add custom face-thinning and big-eye filters to the GPUImage filter chain.

4. Override the - (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates method in the custom filter and pass the feature points to the fragment shader for processing.

5. Apply the face-thinning and big-eye algorithms in the shader: an in-circle enlargement algorithm, an in-circle reduction algorithm, and a fixed-point stretching algorithm. The algorithm principles are analyzed below.

6. Finally, display the result through GPUImageView.
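The fixed-point stretching algorithm mentioned in step 5 (the one typically used for the thin-face effect) can be sketched outside of GLSL. The C function below is a minimal sketch of the widely used local-translation warp; the function name and the exact falloff formula are assumptions based on common implementations, not code from this project:

```c
typedef struct { float x, y; } Vec2;

// Fixed-point stretching (local translation warp): for an output pixel at p,
// return the position to sample from in the source image. Pixels inside the
// circle (center c, radius r) are pulled along the direction from c to the
// target point m; the pull fades to zero at the circle's edge, giving a
// smooth transition. k in [0, 1] controls the strength.
Vec2 stretch_warp(Vec2 p, Vec2 c, Vec2 m, float r, float k) {
    float dx = p.x - c.x, dy = p.y - c.y;
    float dist2 = dx * dx + dy * dy;
    float r2 = r * r;
    if (dist2 >= r2) return p;                 // outside the circle: unchanged
    float mx = m.x - c.x, my = m.y - c.y;
    float mlen2 = mx * mx + my * my;
    float t = (r2 - dist2) / (r2 - dist2 + mlen2);
    t = t * t * k;                             // quadratic falloff, scaled by strength
    Vec2 q = { p.x - t * mx, p.y - t * my };
    return q;
}
```

In the fragment shader the same computation runs per fragment on texture coordinates, with the circle centered on a cheek feature point and the target point toward the center line of the face.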

1. Send the captured raw CVPixelBufferRef image data to Vision for processing.

2. When extracting facial feature points with Vision, pay attention to converting the feature-point coordinates (Vision uses normalized coordinates with the origin at the lower left).

3. Process the feature points in the custom FaceBeautyThinFaceFilter.

4. Process the feature-point data in the fragment shader.
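The coordinate conversion in step 2 is the usual stumbling block: Vision reports each landmark point normalized to the face bounding box, and the bounding box itself in normalized image coordinates with a lower-left origin, while UIKit and most texture pipelines use a top-left origin. A minimal C sketch of the conversion, with illustrative names (on iOS the Vision helper VNImagePointForFaceLandmarkPoint can do the first part for you):

```c
typedef struct { float x, y; } Point2;

// Convert a Vision landmark point (normalized to the face bounding box,
// lower-left origin) into pixel coordinates with a top-left origin.
// bbX/bbY/bbW/bbH: the face boundingBox, normalized to the image
// (lower-left origin). imgW/imgH: image size in pixels.
Point2 landmark_to_pixel(Point2 lm, float bbX, float bbY, float bbW, float bbH,
                         float imgW, float imgH) {
    // Map into normalized image coordinates (still lower-left origin).
    float nx = bbX + lm.x * bbW;
    float ny = bbY + lm.y * bbH;
    // Scale to pixels and flip the y axis for a top-left origin.
    Point2 p = { nx * imgW, (1.0f - ny) * imgH };
    return p;
}
```

If the shader works directly in OpenGL texture coordinates, the final scaling step can be dropped and only the bounding-box mapping and y flip are needed.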

The first image is the original; the second shows the face-thinning and big-eye effect. The big-eye effect looks unnatural here because the coefficient was set relatively large.

1. As shown in the figure, take the coordinates of the left-eye pupil, feature point 72, and of feature point 13 above it.

2. Using pupil point 72 as the center and 5 times the distance from point 72 to point 13 as the radius determines the enlargement area.

3. In the in-circle enlargement algorithm, the closer a sampled pixel is to the center of the circle, the stronger the magnification; the effect falls off toward the edge of the circle, so the enlargement of the eye is obvious while the enlarged area still transitions smoothly into the unenlarged area.

4. The in-circle reduction and fixed-point stretching algorithms are similar in principle, so they are not detailed here.

GitHub: demo address

Feel free to leave a comment or send a private message to discuss any problems, and stars are appreciated. Thanks!