
Animal video label automatic generation method based on deep learning, terminal and medium

A video tagging and automatic generation technology in the field of video tags. It addresses problems such as low model recognition accuracy, large numbers of redundant windows, and the increased burden these place on subsequent feature extraction and recognition, and achieves the effect of improving recognition efficiency and recognition accuracy.

Pending Publication Date: 2020-08-18
SHENZHEN INVENO TECH

AI Technical Summary

Problems solved by technology

However, the sliding window method generates a large number of redundant windows, which increases the burden of subsequent feature extraction and recognition and seriously affects processing efficiency.
Moreover, the feature matrices extracted by such hand-designed feature templates have weak expressive power, and the classifier is generally a weak classifier such as SVM or AdaBoost, so the recognition accuracy of the final model is also very low.



Examples


Embodiment 1

[0049] A method for automatically generating animal video labels based on deep learning, see Figure 6, includes the following steps:

[0050] Extract several key frame images from the video to be detected, and input the key frame images into a feature extraction model;

[0051] Input the feature information output by the feature extraction model into a trained target detection algorithm model;

[0052] Record the position and category, in the video to be detected, of the target object output by the target detection algorithm model, and define the category of the target object as the animal label of the video to be detected.

[0053] Specifically, the method for automatically generating animal video labels provided in this embodiment uses a feature extraction model and a target detection model. The feature extraction model is a convolutional neural network trained on the ImageNet classification dataset. The feature extraction model is used ...
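The pipeline of Embodiment 1 (key frames → feature extraction model → target detection model → label records) can be sketched in Python. This is a minimal illustration, not the patented implementation: `extract_key_frames` uses uniform sampling (the patent does not specify its key-frame selection strategy), and the feature extractor and detector are passed in as plain callables standing in for the CNN and the trained detection model.

```python
import numpy as np

def extract_key_frames(video_frames, num_keys=5):
    """Uniformly sample key frame images from the video to be detected.
    (Uniform sampling is one simple strategy; the patent does not
    specify the exact key-frame selection method.)"""
    n = len(video_frames)
    idx = np.linspace(0, n - 1, num=min(num_keys, n), dtype=int)
    return [video_frames[i] for i in idx]

def generate_animal_labels(video_frames, feature_extractor, detector):
    """Pipeline of Embodiment 1: key frames -> feature extraction model
    -> target detection model -> (position, category) records."""
    records = []
    for frame in extract_key_frames(video_frames):
        features = feature_extractor(frame)   # CNN feature extraction
        detections = detector(features)       # trained detection model
        for box, category in detections:
            records.append({"position": box, "category": category})
    # The set of detected categories becomes the video's animal labels.
    labels = sorted({r["category"] for r in records})
    return records, labels
```

In use, `feature_extractor` would wrap the ImageNet-pretrained CNN and `detector` the trained detection model; here any callables with the same shape of inputs and outputs will do.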

Embodiment 2

[0055] Embodiment 2, on the basis of Embodiment 1, further specifies the training method of the target detection model.

[0056] See Figure 7; the target detection model is trained by the following method:

[0057] Obtain a training set consisting of multiple training pictures, and label the position and category of the objects in each training picture;

[0058] Implement the target detection algorithm based on the TensorFlow framework;

[0059] Train the target detection algorithm using the training set;

[0060] Save the trained target detection algorithm as the target detection algorithm model.

[0061] Specifically, the training pictures in the training set can be chosen according to the specific user's business and usage scenario. For example, select an appropriate number of pictures of the animals that have appeared in the user's business, mark the position and category of the animals in the pictures, and use all these ...
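The four training steps [0057]–[0060] can be mirrored in a small harness. This is a hedged sketch: the patent implements the detector with TensorFlow, but a stand-in class is used here so the step structure (build set → implement → train → save) is visible without framework code, and the annotation format is an assumption, not taken from the patent.

```python
import pickle

# Step [0057]: a training set of pictures, each annotated with the
# position (bounding box) and category of the objects it contains.
def build_training_set(samples):
    # samples: iterable of (image, [(x, y, w, h, category), ...])
    return [{"image": img,
             "boxes": [ann[:4] for ann in anns],
             "categories": [ann[4] for ann in anns]}
            for img, anns in samples]

# Steps [0058]-[0059]: implement the detection algorithm (a stand-in
# object here; the patent uses the TensorFlow framework) and train it.
class DetectorStub:
    def __init__(self):
        self.known_categories = set()
    def fit(self, training_set):
        for record in training_set:
            self.known_categories.update(record["categories"])
        return self

# Step [0060]: save the trained algorithm as the detection model.
def train_and_save(samples, path):
    model = DetectorStub().fit(build_training_set(samples))
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return model
```

With a real TensorFlow detector, the save step would typically use the framework's own model-serialization format rather than `pickle`.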

Embodiment 3

[0080] Embodiment 3 On the basis of the foregoing embodiments, a terminal is provided.

[0081] A terminal includes a processor, an input device, an output device, and a memory, which are connected to each other. The memory stores a computer program comprising program instructions, and the processor is configured to invoke the program instructions to execute the above method.

[0082] It should be understood that, in the embodiment of the present invention, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor ...



Abstract

The invention provides a deep-learning-based method for automatically generating animal video labels, a terminal, and a medium. The method comprises the following steps: extracting a plurality of key frame images from a to-be-detected video and inputting them into a feature extraction model; inputting the feature information output by the feature extraction model into a trained target detection algorithm model; and recording the position and category of the target object output by the target detection algorithm model in the to-be-detected video, defining the category of the target object as an animal label of the to-be-detected video. The method improves recognition efficiency and recognition accuracy.

Description

Technical field

[0001] The invention belongs to the technical field of video tags, and in particular relates to a method, a terminal, and a medium for automatically generating animal video labels based on deep learning.

Background technique

[0002] An animal video label automatic generation system detects whether there is an animal in a video, and which animal it is, so as to generate a label for the video. Commonly used methods in existing systems include the inter-frame difference method and traditional computer-vision image processing methods.

[0003] See Figures 1 and 2. The inter-frame difference method takes the difference between the pixel values of two frames of the video that are adjacent or a few frames apart to obtain the absolute value of the brightness difference between the two images, and then applies thresholding to extract the motion area in the image, thereby inferring regions of ...
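The inter-frame difference method described in [0003] can be written in a few lines of NumPy. This is a generic sketch of the classical technique, not the patent's code; the threshold value is an arbitrary assumption.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Inter-frame difference: take the absolute brightness difference
    between two grayscale frames (adjacent or a few frames apart), then
    threshold it to extract the motion area."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = moving pixel
```

Pixels where the brightness change exceeds the threshold are marked as motion; as the background section notes, this localizes moving regions cheaply but cannot by itself say which animal is present.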

Claims


Application Information

IPC(8): G06F16/78, G06F16/732, G06F16/783, G06K9/62
CPC: G06F16/7867, G06F16/7328, G06F16/7837, G06F16/7847, G06F18/2414
Inventor 刘露蔺昊
Owner SHENZHEN INVENO TECH