
Label construction method, system and device and storage medium

A label construction method and labeling technology, applied in the field of image processing, which can solve problems such as cumbersome and complex operations, low work efficiency, and heavy workload, and achieves the effect of simplifying the operation process and improving work efficiency.

Pending Publication Date: 2020-07-03
RES INST OF TSINGHUA PEARL RIVER DELTA +1

AI Technical Summary

Problems solved by technology

[0002] Existing video synthesis methods for virtual characters often use multi-frame images and identify the characters and the key points of their skeletons in those images. This approach requires establishing a training model and manually annotating, one by one, the images input into the model in order to train it, which is cumbersome and complicated to operate and requires staff with strong image processing skills, resulting in a heavy workload and low work efficiency.


Examples

Embodiment 1

[0052] Figure 1 is a flow chart of the label construction method for training a model according to an embodiment of the present invention. As shown in Figure 1, the method includes the following steps:

[0053] S1. Obtain a person image sample for training the model;

[0054] S2. Perform key point detection on the person image sample and extract multiple sets of key point coordinates;

[0055] S3. Perform image segmentation on the person image sample and extract multiple sets of two-dimensional masks;

[0056] S4. Combine the multiple sets of key point coordinates with the multiple sets of two-dimensional masks to form labels.

[0057] In this embodiment, step S2, that is, the step of performing key point detection on the person image sample and extracting multiple sets of key point coordinates, consists of the following steps:

[0058] S201. Use a deep neural network to perform area detection on the image, where the areas include a face area and a bod...
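The following is a minimal sketch of how steps S1 to S4 could fit together in code. The helpers detect_keypoints and segment_person are hypothetical placeholders for the deep-neural-network detectors referred to in steps S2 and S3 (the embodiment does not fix a particular network here), and the drawing conventions used to form the label image are illustrative assumptions only.

```python
# Minimal sketch of steps S1-S4. detect_keypoints and segment_person are
# hypothetical stand-ins for the deep-neural-network detectors of S2 and S3.
import numpy as np
import cv2


def detect_keypoints(image: np.ndarray) -> np.ndarray:
    """Placeholder for S2: returns an (N, 2) array of (x, y) key point coordinates."""
    raise NotImplementedError("plug in a face/body key point detector here")


def segment_person(image: np.ndarray) -> np.ndarray:
    """Placeholder for S3: returns a binary (H, W) mask of the person."""
    raise NotImplementedError("plug in a person segmentation network here")


def build_label(image: np.ndarray) -> np.ndarray:
    """S4: combine key point coordinates and the 2-D mask into one label image."""
    h, w = image.shape[:2]
    label = np.zeros((h, w, 3), dtype=np.uint8)

    # Body/head outline from the segmentation mask, drawn as a green contour.
    mask = segment_person(image).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(label, contours, -1, (0, 255, 0), 2)

    # Key points drawn as red dots; a skeleton could connect them by index.
    for x, y in detect_keypoints(image).astype(int):
        cv2.circle(label, (int(x), int(y)), 3, (0, 0, 255), -1)

    return label
```

In this sketch the label is itself an image, so shifting a key point or reshaping the mask before drawing directly yields a different label image, which is what later embodiments exploit.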

Embodiment 2

[0085] Embodiments of the present invention also include a training method for a generative adversarial network (GAN) model, comprising the following steps:

[0086] P1. Construct a first label using the label construction method described in Embodiment 1;

[0087] P2. Construct a training set composed of person image samples and first labels, where each first label is constructed from the corresponding person image sample;

[0088] P3. Use the training set to train the generative adversarial network model;

[0089] P4. Modify the first label to obtain a plurality of mutually different second labels;

[0090] P5. Input the second labels to the generative adversarial network model;

[0091] P6. Detect whether the GAN model outputs images corresponding to the second labels.

[0092] In this embodiment, step P4, that is, the step of modifying the first label to obtain multiple second labels that are different from each other, specifically includes:

[0093] P401. Obtai...
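Read together, steps P1 to P6 amount to training a conditional GAN whose generator maps a label image to a person image. The PyTorch training step below is a hedged sketch of what step P3 could look like; the Generator G, Discriminator D, the optimizers, the channel-wise concatenation of label and image at the discriminator input, and the L1 weight of 100 are conventional pix2pix-style assumptions, not details specified by this embodiment.

```python
# Sketch of one conditional-GAN training step (P3): generator G maps a label
# image to a person image; discriminator D judges (label, image) pairs.
# Architectures and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()


def train_step(G, D, opt_G, opt_D, label_img, real_img):
    # --- update the discriminator on real and generated pairs ---
    opt_D.zero_grad()
    fake_img = G(label_img).detach()
    d_real = D(torch.cat([label_img, real_img], dim=1))
    d_fake = D(torch.cat([label_img, fake_img], dim=1))
    loss_D = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    loss_D.backward()
    opt_D.step()

    # --- update the generator: fool D and stay close to the real image ---
    opt_G.zero_grad()
    fake_img = G(label_img)
    d_fake = D(torch.cat([label_img, fake_img], dim=1))
    loss_G = adv_loss(d_fake, torch.ones_like(d_fake)) + \
             100.0 * l1_loss(fake_img, real_img)
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```

Once trained this way, the check in P6 reduces to feeding an edited (second) label through G and inspecting whether the output still follows the edited pose and outlines.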

Embodiment 3

[0097] Embodiments of the present invention also include an image processing method, comprising the following steps:

[0098] D1. Acquire a first image, which is a label image with constraint conditions; the constraints include the face outline, the body key point skeleton, the body outline, the head outline and the background;

[0099] D2. Use the GAN model trained by the training method described in Embodiment 2 to receive and process the first image and to output a second image, which is a realistic image corresponding to the constraint conditions.
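A short sketch of D1 and D2 at inference time, assuming a generator G trained as in Embodiment 2: each (possibly edited) label image is passed through G and the output is collected as a frame of the synthesized video. The tensor layout and the [-1, 1] output range are assumptions.

```python
# Sketch of D1-D2: run edited label images through the trained generator and
# collect the outputs as video frames. G and the value range are assumptions.
import torch


@torch.no_grad()
def labels_to_frames(G, label_images):
    """label_images: iterable of (1, C, H, W) label tensors (edited as in P4)."""
    G.eval()
    frames = []
    for label_img in label_images:
        fake_img = G(label_img)                  # D2: generate the realistic image
        frame = (fake_img.clamp(-1, 1) + 1) / 2  # map [-1, 1] back to [0, 1]
        frames.append(frame.squeeze(0).cpu())
    return frames  # frames can then be written out as a synthesized video
```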

[0100] In summary, the label construction method for training the model in the embodiment of the present invention has the following advantages:

[0101] By extracting labels from person images, complex person images are simplified into two-dimensional key point coordinates or two-dimensional masks, which are used to train the generative adversarial network model (GAN model); by simply modifying th...


Abstract

The invention discloses a label construction method, system and device for training a model, and a storage medium. The method comprises the steps of: extracting labels from a person image, simplifying the complex person image into two-dimensional key point coordinates or two-dimensional masks, and using them to train a generative adversarial network model (GAN model). Different label images can be generated by simply modifying the coordinate positions of the key points or the shapes of the two-dimensional masks; the person images corresponding to the labels can then be generated by inputting them into the trained generative adversarial network model, and videos can be further synthesized from them, so that the operation process of person video synthesis is greatly simplified and the working efficiency is improved. Subsequently, a new label can be added as a constraint condition according to actual generation requirements; the label and the real image corresponding to it are fed to the generative adversarial network model for training, and finally the corresponding real image can be generated under the extended conditions. The method is widely applicable in the technical field of image processing.

Description

Technical field

[0001] The present invention relates to the technical field of image processing, and in particular to a method, system, device and storage medium for constructing labels for training models.

Background technique

[0002] Existing video synthesis methods for virtual characters often use multi-frame images and identify the characters and the key points of their skeletons in those images. This approach requires establishing a training model and manually annotating, one by one, the images input into the model in order to train it, which is cumbersome and complicated to operate and requires staff with strong image processing skills, resulting in a heavy workload and low work efficiency.

Contents of the invention

[0003] In order to solve at least one of the above problems, the object of the present invention is to provide a method, system, device and storage medium for constructing labels for training models.

[0004] The technical solution adopted by the pres...


Application Information

IPC(8): G06K9/46; G06K9/62
CPC: G06V10/462; G06F18/241; G06F18/214
Inventors: 王伦基, 叶俊杰, 李权, 黄桂芳, 任勇, 韩蓝青
Owner: RES INST OF TSINGHUA PEARL RIVER DELTA