Image content marking method and apparatus thereof

An image content marking method and apparatus, applied in the field of image processing, to solve problems such as slow annotation speed and to achieve the effects of reducing workload, ensuring accuracy, and improving annotation efficiency.

Status: Inactive; Publication Date: 2017-05-17
SOUTH UNIVERSITY OF SCIENCE AND TECHNOLOGY OF CHINA

AI Technical Summary

Problems solved by technology

[0005] Embodiments of the present invention provide a method and device for annotating image content, to solve the problem that in existing image annotation methods the feature content in an image must be annotated one by one, which makes completion slow.

Method used

Figure 1 is a flow chart of the image content labeling method provided in Embodiment 1 of the present invention; Figure 2 is a flow chart of the method in Embodiment 2; Figure 3 is a schematic structural diagram of the image content labeling device in Embodiment 3.


Examples


Embodiment 1

[0018] Figure 1 is a flow chart of an image content labeling method provided in Embodiment 1 of the present invention. This embodiment is applicable to collecting video image training sets. The method can be executed by an image content labeling device, which can be implemented in software and/or hardware. As shown in Figure 1, the method includes:

[0019] S110. Determine at least two key frame images of the video image and annotation information of the key frame images.

[0020] Video images can be dynamic images in various storage formats, such as digital video formats, including DVD, QuickTime, and MPEG-4, and videotape formats, including VHS and Betamax. A video image includes multiple key frame images and the ordinary frame images between different key frame images.

[0021] The key frame image is adjusted and determined according to the generation efficiency of the annotation information, and is usually determine...
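
To make S110 concrete, the sketch below shows one way key frames and their annotation information might be represented. The fixed sampling interval, the Annotation fields, and all function and variable names are illustrative assumptions, not the patent's API.

```python
# A minimal sketch of S110, assuming key frames are sampled at a fixed
# interval and then annotated manually; everything here is illustrative.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Annotation:
    label: str                      # object category, e.g. "vehicle"
    box: Tuple[int, int, int, int]  # feature region as (x, y, width, height)

def select_key_frame_indices(total_frames: int, interval: int = 30) -> List[int]:
    """Pick every `interval`-th frame as a key frame, ensuring at least two."""
    indices = list(range(0, total_frames, interval))
    if indices[-1] != total_frames - 1:
        indices.append(total_frames - 1)  # make the last frame a key frame
    return indices

# Example: a 300-frame clip yields key frames 0, 30, ..., 299, each of
# which is then annotated by hand (label plus feature region).
key_frames = select_key_frame_indices(total_frames=300, interval=30)
key_annotations = {idx: [Annotation("vehicle", (40, 60, 120, 80))] for idx in key_frames}
```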

Embodiment 2

[0033] Figure 2 is a schematic flow chart of the image content labeling method in Embodiment 2 of the present invention. On the basis of Embodiment 1, this embodiment further elaborates the operation of generating annotation information for ordinary frame images. As shown in Figure 2, the method includes:

[0034] S210. Determine at least two key frame images of the video image and annotation information of the key frame images.

[0035] The annotation information of a key frame image also includes a feature region. The content in the feature region is the object to be labeled. Feature regions can be of arbitrary shape; for ease of operation, the shape of the feature region is preferably a regular polygon.

[0036] S220. For each group of two adjacent key frame images, use the feature region of either key frame image in the group as the initial feature region of the ordinary frame images.

[0037] The initial feature region can be obtained by...
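
The abstract states that the annotation information of the ordinary frames is determined from the two adjacent key frames, their annotation information, and the number of ordinary frames between them. A minimal sketch of one plausible reading, linear interpolation of a rectangular feature region weighted by the ordinary frame's position, is given below; the function names and the Box layout are assumptions for illustration, not the patent's exact algorithm.

```python
# Sketch: derive annotation info for ordinary frames from adjacent key frames,
# assuming rectangular feature regions and linear interpolation between them.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def interpolate_boxes(box_a: Box, box_b: Box, n_ordinary: int) -> List[Box]:
    """Return one box per ordinary frame lying between two key frames."""
    boxes = []
    for i in range(1, n_ordinary + 1):
        t = i / (n_ordinary + 1)  # relative position of the i-th ordinary frame
        boxes.append(tuple(a + t * (b - a) for a, b in zip(box_a, box_b)))
    return boxes

def annotate_ordinary_frames(key_annotations: Dict[int, Box]) -> Dict[int, Box]:
    """For each pair of adjacent key frames, fill in the ordinary frames."""
    result: Dict[int, Box] = dict(key_annotations)
    key_indices = sorted(key_annotations)
    for left, right in zip(key_indices, key_indices[1:]):
        n_ordinary = right - left - 1
        for offset, box in enumerate(
                interpolate_boxes(key_annotations[left], key_annotations[right], n_ordinary),
                start=1):
            result[left + offset] = box
    return result

# Example: key frames 0 and 30 whose box drifts right by 60 pixels produce
# 29 intermediate boxes, each shifted by about 2 pixels per frame.
filled = annotate_ordinary_frames({0: (40, 60, 120, 80), 30: (100, 60, 120, 80)})
```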

Embodiment 3

[0049] Figure 3 is a schematic structural diagram of an image content labeling device provided in Embodiment 3 of the present invention. As shown in Figure 3, the image content labeling device includes a key frame information confirmation module 310 and an ordinary frame information confirmation module 320.

[0050] The key frame information confirmation module 310 is configured to determine at least two key frame images of the video image and annotation information of the key frame images.

[0051] The ordinary frame information confirmation module 320 is configured to, for each group of two adjacent key frame images, determine the annotation information of the ordinary frame images according to the group of two adjacent key frame images, the annotation information of that group, and the number of ordinary frame images located between the two adjacent key frame images.

[0052] Further, the ordinary frame information confirmation...
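
A minimal sketch of how the two modules described in Embodiment 3 could be organized is shown below, assuming plain Python classes and rectangular feature regions; the class names, method signatures, and the interpolation inside module 320 are illustrative assumptions rather than the patent's implementation.

```python
# Illustrative sketch of the device in Embodiment 3: two cooperating modules.
# Boxes are (x, y, width, height); all names and signatures are assumptions.
class KeyFrameInfoModule:  # module 310
    def determine(self, key_annotations):
        """Return at least two key frame indices and their annotation info."""
        key_indices = sorted(key_annotations)
        assert len(key_indices) >= 2, "at least two key frames are required"
        return key_indices, key_annotations

class OrdinaryFrameInfoModule:  # module 320
    def determine(self, key_indices, key_annotations):
        """Derive annotation info for the ordinary frames between each pair of
        adjacent key frames (here by linear interpolation, as sketched above)."""
        filled = dict(key_annotations)
        for left, right in zip(key_indices, key_indices[1:]):
            gap = right - left
            box_a, box_b = key_annotations[left], key_annotations[right]
            for frame in range(left + 1, right):
                t = (frame - left) / gap
                filled[frame] = tuple(a + t * (b - a) for a, b in zip(box_a, box_b))
        return filled

# Example: annotate two key frames by hand, then let the modules fill the rest.
key_module, ordinary_module = KeyFrameInfoModule(), OrdinaryFrameInfoModule()
indices, annotations = key_module.determine({0: (40, 60, 120, 80), 30: (100, 60, 120, 80)})
all_annotations = ordinary_module.determine(indices, annotations)
```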



Abstract

Embodiments of the invention disclose an image content marking method and an apparatus thereof. The image content marking method comprises the following steps: determining at least two key frame images of a video image and annotation information of the key frame images; and, for each group of two adjacent key frame images, determining annotation information of the ordinary frame images located between them according to the group of two adjacent key frame images, the annotation information of the group, and the number of ordinary frame images between the two adjacent key frame images. With the image content marking method and apparatus, the annotation information of the ordinary frame images is generated automatically from the key frame images and their annotation information. This solves the problem that, in existing image annotation methods, the feature content in an image must be annotated one by one, which makes completion slow. While keeping the annotation information accurate, the method reduces the workload and increases annotation efficiency.

Description

Technical field

[0001] Embodiments of the present invention relate to the field of image processing, and in particular to a method and device for labeling image content.

Background technique

[0002] A training set can serve as effective information for guiding and optimizing the structure of a neural network; this process is one form of machine learning. The optimized structure can then be used in computer vision applications, such as recognizing vehicles and pedestrians from images returned by a camera.

[0003] In the prior art, the user is required to manually identify the feature region in an image and then circle it with a box or other tool to create an annotation. The resulting annotations are read by the computer as data and then saved as a training set.

[0004] However, in existing image annotation methods, manually annotating the feature content in images one by one not only consumes a lot of human resources, but also is s...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00
CPC: G06V20/49; G06V20/46
Inventor: 郝祁; 董镝; 谢宜
Owner: SOUTH UNIVERSITY OF SCIENCE AND TECHNOLOGY OF CHINA