Introduction to Core Image
Apple's official documentation introduces Core Image as a technology for processing and analyzing still images and video. It can use either the GPU or the CPU to process images. Core Image offers a concise API and hides the complexity of low-level image processing: you can use it without knowing anything about OpenGL, OpenGL ES, or even GCD, because it handles those details for you.

On the iOS platform, the Core Image framework was introduced in iOS 5. It provides a powerful and efficient way to perform pixel-based image processing and analysis, and it ships with a large number of built-in filters (currently more than 180). These filters offer a wide variety of effects, and they can be combined through filter chains to build powerful custom effects.

A filter is an object that takes one or more inputs and produces an output by performing some transformation. For example, a blur filter might take an input image and a blur radius and produce an appropriately blurred output image.

A filter chain is a network of filters linked together, so that the output of one filter becomes the input of another. In this way, elaborate effects can be built up.
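
As a sketch of what a chain looks like in code, the following Swift snippet (the function name sepiaThenBlur and the parameter values 0.8 and 5.0 are illustrative) feeds a sepia-tone filter's output into a Gaussian blur filter:

```swift
import CoreImage

// A two-filter chain: the sepia filter's output image becomes
// the blur filter's input image.
func sepiaThenBlur(_ input: CIImage) -> CIImage? {
    guard let sepia = CIFilter(name: "CISepiaTone"),
          let blur = CIFilter(name: "CIGaussianBlur") else { return nil }

    sepia.setValue(input, forKey: kCIInputImageKey)
    sepia.setValue(0.8, forKey: kCIInputIntensityKey)

    // Chain: the first filter's output feeds the second filter's input.
    blur.setValue(sepia.outputImage, forKey: kCIInputImageKey)
    blur.setValue(5.0, forKey: kCIInputRadiusKey)

    return blur.outputImage
}
```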

Example of Gaussian blur effect:
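
A minimal Swift sketch of the idea, assuming the source image is a UIImage (the helper name gaussianBlurred and the default radius of 10 are illustrative):

```swift
import CoreImage
import UIKit

// Apply CIGaussianBlur to a UIImage and render the result.
func gaussianBlurred(_ sourceImage: UIImage, radius: Double = 10) -> UIImage? {
    guard let ciImage = CIImage(image: sourceImage),
          let filter = CIFilter(name: "CIGaussianBlur") else { return nil }

    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)

    guard let output = filter.outputImage else { return nil }

    // Blurring expands the image's extent, so crop back to the original rect.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```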

(Figure: the Gaussian-blurred result.)

There are many filters in the Core Image framework, so how do we find out which filters exist, and how do we learn how to use each one? The filter categories, whose names start with kCICategory, can be found in the CIFilter header file, and CIFilter provides a class method that returns all the filters in a given category. Take kCICategoryBlur as an example:
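
A short Swift snippet along these lines prints the filter names in that category:

```swift
import CoreImage

// List every filter registered in the blur category.
let blurFilters = CIFilter.filterNames(inCategory: kCICategoryBlur)
for name in blurFilters {
    print(name) // e.g. CIGaussianBlur, CIBoxBlur, CIMotionBlur, ...
}
```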

The strings printed above are the names of the available blur filters. You can then learn how to use a particular filter by inspecting its attributes. Take Gaussian blur as an example:
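
For instance, printing the filter's attributes dictionary exposes the information described below:

```swift
import CoreImage

// Dump the attribute dictionary of the Gaussian blur filter;
// the keys described below appear in this dictionary.
if let gaussianBlur = CIFilter(name: "CIGaussianBlur") {
    print(gaussianBlur.attributes)
}
```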

Attribute description:

CIAttributeFilterAvailable_Mac: the earliest macOS version that supports the filter.

CIAttributeFilterAvailable_iOS: the earliest iOS version that supports the filter.

CIAttributeFilterCategories: the categories the filter belongs to.

CIAttributeFilterDisplayName: the filter's display name.

CIAttributeFilterName: the filter's name.

CIAttributeReferenceDocumentation: the URL of the filter's reference documentation.

inputImage: description of the input image attribute (a CIImage object).

inputRadius: description of the input blur radius attribute (an NSNumber object). The default radius is 10, the minimum 0, and the maximum 100.

There are many ways to create a CIContext on the iOS platform; two of them are commonly used.
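
A Swift sketch of the two approaches (note that the OpenGL ES-based initializer has since been deprecated in favor of Metal):

```swift
import CoreImage
import OpenGLES

// 1. A GPU-based context backed by an OpenGL ES context.
//    (On recent systems CIContext(mtlDevice:) is the Metal-based equivalent.)
let eaglContext = EAGLContext(api: .openGLES2)!
let gpuContext = CIContext(eaglContext: eaglContext)

// 2. A CPU-based context that forces the software renderer.
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])
```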

Note:

A GPU-based CIContext delivers better performance, but when it is used across application boundaries it is automatically downgraded to a CPU-based one. For example, if you use a CIContext object to process an image inside a UIImagePickerControllerDelegate delegate method while the photo library picker is presented, the system hands that task to the CPU.