Image foreground and background segmentation method, image foreground and background segmentation network model training method, and image processing method and device

A foreground/background segmentation and network-model training technology, applied to image data processing, image analysis, and image enhancement, addressing problems such as high training cost, a complicated training process, and long training time

Active Publication Date: 2017-11-10
BEIJING SENSETIME TECH DEV CO LTD


Problems solved by technology

However, the current convolutional neural network training process is complicated, coupled with ...



Examples


Embodiment 1

[0061] Referring to Figure 1, which shows a flowchart of the steps of a method for training an image foreground and background segmentation network model according to Embodiment 1 of the present invention.

[0062] The training method of the image foreground and background segmentation network model of the present embodiment comprises the following steps:

[0063] Step S102: Obtain feature vectors of sample images to be trained.

[0064] Here, the sample image includes foreground annotation information and background annotation information; that is, the sample image to be trained has been marked with a foreground area and a background area. In the embodiment of the present invention, the foreground area may be the area where the subject of the image is located, such as the area where a person is located; the background area may be the areas other than the area where the subject is located, and may be all or part of those other areas.

[0065] In a ...
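The training flow of Embodiment 1 (obtain feature vectors, convolve them, amplify the convolution result, test a convergence condition, otherwise adjust parameters and iterate) can be sketched in miniature. Everything below is illustrative: the naive convolution, the nearest-neighbour "amplification", the single 3x3 kernel, the learning rate, and the numerical gradient are assumptions for a toy model, not the patent's actual network.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 2-D valid convolution (cross-correlation) of feature map x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling, standing in for 'amplifying' the convolution result."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def train(features, labels, steps=200, lr=0.05, tol=1e-3):
    """Toy training loop: convolve, amplify, check convergence, else adjust and iterate."""
    kernel = np.random.default_rng(0).normal(scale=0.1, size=(3, 3))
    loss = np.inf
    for _ in range(steps):
        pred = upsample_nearest(conv2d_valid(features, kernel), 2)  # amplify to label size
        loss = float(np.mean((pred - labels) ** 2))
        if loss < tol:                      # convergence condition satisfied: stop
            break
        # adjust parameters: forward-difference numerical gradient (illustrative only)
        grad = np.zeros_like(kernel)
        eps = 1e-4
        for i in range(3):
            for j in range(3):
                k2 = kernel.copy()
                k2[i, j] += eps
                p2 = upsample_nearest(conv2d_valid(features, k2), 2)
                grad[i, j] = (np.mean((p2 - labels) ** 2) - loss) / eps
        kernel -= lr * grad
    return kernel, loss

# Illustrative data: labels generated from a known box-filter kernel.
rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 8))
labels = upsample_nearest(conv2d_valid(feat, np.ones((3, 3)) / 9.0), 2)
trained_kernel, final_loss = train(feat, labels)
```

Because the amplified convolution output is linear in the kernel, the mean-squared loss here is convex, so the loss shrinks monotonically for a small enough learning rate; a real multi-layer segmentation network would use backpropagation instead of numerical gradients.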

Embodiment 2

[0147] Referring to Figure 2, which shows a flowchart of the steps of a method for image foreground and background segmentation according to Embodiment 2 of the present invention.

[0148] In this embodiment, the trained image foreground and background segmentation network model of Embodiment 1 is used to detect an image and segment its foreground and background. The image foreground and background segmentation method of this embodiment comprises the following steps:

[0149] Step S202: Obtain an image to be detected.

[0150] Here, the image includes a still image or an image in a video. In an optional solution, the image in the video is an image in a live video. In another optional solution, the images in the video include multiple frames of a video stream; because multiple frames of a video stream carry more contextual associations, the image segmentation method shown in Embodiment 1 is used to The convolutional ne...
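The detection step above can be sketched as follows. `segmentation_model` is a hypothetical stand-in for the trained network of Embodiment 1 (here it just normalises brightness to a [0, 1] score), and the 0.5 threshold is an assumption, not a value from the patent.

```python
import numpy as np

def segmentation_model(image):
    """Stub scoring function standing in for the trained CNN:
    per-pixel foreground score in [0, 1], brighter = more foreground-like."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)

def segment_foreground(image, threshold=0.5):
    """Turn per-pixel scores into a binary mask: True = foreground, False = background."""
    scores = segmentation_model(image)
    return scores >= threshold

# A tiny 'image to be detected': two bright (foreground) and two dark pixels.
frame = np.array([[0.1, 0.9],
                  [0.2, 0.8]])
mask = segment_foreground(frame)
```

The same call would be applied per frame when the input is a video rather than a still image.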

Embodiment 3

[0156] Referring to Figure 3, which shows a flowchart of the steps of a video image processing method according to Embodiment 3 of the present invention.

[0157] The video image processing method in this embodiment can be executed by any device with data collection, processing, and transmission functions, including but not limited to mobile terminals and PCs. This embodiment takes a mobile terminal as an example to describe the method for processing a service object in a video image provided by the embodiment of the present invention; other devices may refer to this embodiment for implementation.

[0158] The video image processing method of the present embodiment comprises the following steps:

[0159] Step S302: The mobile terminal acquires the currently displayed video image.

[0160] In this embodiment, obtaining the video image of the currently playing video from a live-streaming application is taken as an example, and the processing of a single video image is taken as an example, but those s...
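A minimal sketch of the per-frame flow on the terminal, assuming a stub frame source and a simple background-replacement compositing rule in place of the real capture pipeline and trained model; `acquire_frame`, `segment`, and `composite` are all illustrative names, not APIs from the patent.

```python
import numpy as np

def acquire_frame(t):
    """Stub frame source standing in for the live-video capture (Step S302)."""
    frame = np.zeros((4, 4))
    frame[1:3, 1:3] = 1.0            # a bright 'subject' region in the centre
    return frame

def segment(frame, threshold=0.5):
    """Stand-in for the trained segmentation network: threshold into a foreground mask."""
    return frame >= threshold

def composite(frame, mask, background=0.25):
    """Replace background pixels while keeping the detected foreground."""
    out = np.full_like(frame, background)
    out[mask] = frame[mask]
    return out

# Process a short run of frames, one at a time, as a live application would.
for t in range(3):
    frame = acquire_frame(t)
    mask = segment(frame)
    result = composite(frame, mask)
```

In a real deployment the loop body would run per decoded frame of the video stream, with the thresholding step replaced by the trained convolutional network of Embodiment 1.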



Abstract

The embodiments of the invention provide an image foreground and background segmentation network model training method, an image foreground and background segmentation method, an image processing method and device, and a terminal device. The training method comprises the following steps: obtaining the feature vectors of a sample image to be trained; performing convolution on the feature vectors to obtain convolution results; amplifying the convolution results; and determining whether the amplified convolution results satisfy a convergence condition. If so, training of the convolutional neural network model used for segmenting the foreground and background of the image is complete; if not, the parameters of the convolutional neural network model are adjusted according to the amplified convolution results, and the model is iteratively trained with the adjusted parameters until the convolution results satisfy the convergence condition. By means of this training method, the training efficiency of the convolutional neural network model is improved and the training time is shortened.

Description

Technical field

[0001] Embodiments of the present invention relate to the field of artificial intelligence technology, and in particular to a training method, device, and terminal device for an image foreground and background segmentation network model; a method, device, and terminal device for image foreground and background segmentation; and a video image processing method, device, and terminal device.

Background technique

[0002] The convolutional neural network is an important research area of computer vision and pattern recognition. It uses computers to imitate the thinking of a biological brain so as to perform human-like information processing on specific objects. With convolutional neural networks, object detection and recognition can be performed efficiently. With the development of Internet technology and the sharp increase in the amount of information, convolutional neural networks are more and more widely used in the field of object detection and recognition to ...

Claims


Application Information

IPC(8): G06T7/11, G06T7/136, G06T7/194
CPC: G06T2207/10016, G06T2207/20081, G06T2207/20084
Inventor: 石建萍, 栾青
Owner BEIJING SENSETIME TECH DEV CO LTD