Slow-motion video processing
Slo-Mo is one of the shooting modes of the iOS camera, and it differs fundamentally from the other modes in how the result is stored, as shown in the following figure.

<iframe height="450" width="800" src="/article/share?id=9208285652934558845&s_bucket=na" frameborder="0" allowfullscreen></iframe>

This article mainly addresses the following issues.

The system Photos app has a quirk: the duration shown for a slow-motion video is the shooting duration, not the video's actual playback duration.

A video shot in Slo-Mo mode is stored in the photo library as the original, normally recorded video. At playback time, the slow-motion information saved locally at shooting time is read and used to control the playback rate; no slow-motion video file is ever generated. Presumably this is done for playback and display efficiency, since video export is very slow. (This point is important.)

Ordinary videos only support trimming, while Slo-Mo videos additionally support selecting the slow-motion range, which changes the real playback duration. Even then, no new slow-motion video is synthesized; editing only updates the slow-motion information, and the playback rate is controlled at play time.

One prerequisite to understand: on iOS, videos and albums are currently fetched via PHFetchResult.

Most of the assets returned by requestAVAsset(forVideo:) are AVURLAsset instances.

An AVURLAsset can index the video directly by its URL. For a slow-motion video, however, the returned asset is not an AVURLAsset but an AVComposition.

The question is: why does this usually not cause an error? If the code force-casts to the wrong type, it should crash.

This comes down to `options.version = PHVideoRequestOptionsVersionCurrent;`. In many codebases this line is instead written as `options.version = PHVideoRequestOptionsVersionOriginal;`, which fetches the original, unedited recording. Since the camera always saves an original video after shooting, requesting the original version is guaranteed to return an AVURLAsset.
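The difference can be sketched as follows. This is a minimal Swift example, assuming `phAsset` is a `PHAsset` for a Slo-Mo video fetched elsewhere; the function name is illustrative.

```swift
import Photos
import AVFoundation

// Sketch: request the video asset and observe its concrete type.
func loadVideoAsset(for phAsset: PHAsset) {
    let options = PHVideoRequestOptions()
    // .current returns the edited asset: for a Slo-Mo video this is an AVComposition.
    // .original would return the unedited recording, which is always an AVURLAsset.
    options.version = .current

    PHImageManager.default().requestAVAsset(forVideo: phAsset,
                                            options: options) { asset, _, _ in
        if let urlAsset = asset as? AVURLAsset {
            // Ordinary video: backed by a file that can be indexed by URL.
            print("AVURLAsset:", urlAsset.url)
        } else if asset is AVComposition {
            // Slo-Mo video: no rendered file on disk, only tracks plus rate metadata.
            print("AVComposition (slow motion)")
        }
    }
}
```

Code that unconditionally casts the result to `AVURLAsset` works as long as `.original` is requested (or no Slo-Mo video is encountered), which is why the mismatch often goes unnoticed.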

There are several levels to understand here.

First, the requestPlayerItem(forVideo:) API returns an AVPlayerItem, from which the real playback duration can be obtained without a time-consuming export.
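A minimal sketch of reading the real duration this way, again assuming `phAsset` is the Slo-Mo `PHAsset`; the function and completion shape are illustrative:

```swift
import Photos
import AVFoundation

// Sketch: obtain the real (slowed-down) playback duration without exporting.
func fetchRealDuration(of phAsset: PHAsset,
                       completion: @escaping (Double) -> Void) {
    let options = PHVideoRequestOptions()
    options.version = .current  // the edited, slow-motion timeline

    PHImageManager.default().requestPlayerItem(forVideo: phAsset,
                                               options: options) { item, _ in
        guard let item = item else { return }
        // The player item's asset duration reflects the stretched timeline,
        // not the shorter shooting duration shown in the Photos app.
        completion(CMTimeGetSeconds(item.asset.duration))
    }
}
```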

Second, a real slow-motion video file does not exist and must be exported through the system API. The principle is to read the track and timing information saved at shooting time and compose it into a new file step by step.
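Such an export can be sketched with AVAssetExportSession. The output location, file type, and preset here are illustrative choices, not prescribed by the article; `composition` is assumed to be the AVComposition returned by the request above.

```swift
import AVFoundation

// Sketch: materialize a real slow-motion file by exporting the AVComposition.
func exportSloMo(_ composition: AVAsset,
                 completion: @escaping (URL?) -> Void) {
    let outputURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("slomo_\(UUID().uuidString).mov")

    guard let session = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality) else {
        completion(nil)
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .mov

    // Rendering is performed asynchronously; this is the slow step.
    session.exportAsynchronously {
        completion(session.status == .completed ? outputURL : nil)
    }
}
```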

Third, the export takes quite a long time: a video with one minute of real playing time takes more than ten seconds to export. Testers will file this as a bug, and the product team may not accept it. So how can it be optimized?

Because the export relies on system APIs, the export time itself cannot currently be shortened significantly.

However, there is a lot we can do to make the wait feel shorter to the user.

Here are some ideas.