Animal video label automatic generation method based on deep learning, terminal and medium

An automatic video label generation technology, applied in the field of video tags. It solves problems of existing methods such as low model recognition accuracy and the large number of redundant windows that increase the burden of subsequent feature extraction and recognition, and achieves the effect of improving recognition efficiency and recognition accuracy.

Pending Publication Date: 2020-08-18
SHENZHEN INVENO TECH

AI Technical Summary

Problems solved by technology

However, the sliding window method generates a large number of redundant windows, which increases the burden on subsequent feature extraction and recognition and seriously affects processing efficiency.
Moreover, the feature m...



Examples


Example Embodiment

[0048] Embodiment one:

[0049] A method for automatically generating animal video tags based on deep learning, see Figure 6, includes the following steps:

[0050] Extract several key frame images from the video to be detected, and input the key frame images into the feature extraction model;

[0051] Input the feature information output by the feature extraction model into the trained target detection algorithm model;

[0052] Record the position and category, in the video to be detected, of the target object output by the target detection algorithm model, and define the category of the target object as the animal tag of the video to be detected.

[0053] Specifically, the method for automatically generating animal video tags provided in this embodiment includes a feature extraction model and a target detection model. The feature extraction model is composed of a convolutional neural network and is obtained by training on the ImageNet classification data set. The feature extraction mod...
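The excerpt does not disclose the concrete networks, so the following is only a minimal Python sketch of the Embodiment-one pipeline: key-frame sampling ([0050]), an ImageNet-pretrained CNN standing in for the feature extraction model ([0053], ResNet50 is an assumption), and a `detection_model` placeholder for the trained target detection algorithm model ([0051]-[0052]).

```python
import cv2
import numpy as np
import tensorflow as tf

# Feature extraction model: a CNN trained on the ImageNet classification
# data set ([0053]). ResNet50 is an assumed stand-in; the patent does not
# name the backbone.
feature_extractor = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")

def extract_key_frames(video_path, every_n=30):
    """Sample one frame out of every `every_n` as a key frame ([0050])."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(cv2.resize(frame, (224, 224)))
        idx += 1
    cap.release()
    return np.asarray(frames, dtype="float32")

def generate_animal_tags(video_path, detection_model):
    frames = extract_key_frames(video_path)
    batch = tf.keras.applications.resnet50.preprocess_input(frames)
    features = feature_extractor(batch)  # [0050]: extract feature information
    # `detection_model` is a hypothetical placeholder for the trained target
    # detection algorithm model ([0051]); it is assumed here to return boxes
    # and category names for each key frame.
    boxes, categories = detection_model(features)
    # [0052]: the detected categories become the video's animal tags.
    return {"positions": boxes, "tags": sorted(set(categories))}
```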

Example Embodiment

[0054] Embodiment two:

[0055] The second embodiment further defines the training method of the target detection model on the basis of the first embodiment.

[0056] See Figure 7; the target detection model is obtained by training as follows:

[0057] Obtain a training set composed of multiple training pictures, and mark the position and category of objects in each training picture;

[0058] Implement the target detection algorithm by programming on the TensorFlow framework;

[0059] Train the target detection algorithm using the training set;

[0060] Save the trained target detection algorithm as the target detection algorithm model.

[0061] Specifically, the training pictures in the training set can be determined according to the user's specific business and usage conditions. For example, an appropriate number of pictures are screened from the animal pictures that have appeared in the service provided by the user, the position of the animal in ...
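The source only states that the detector is implemented and trained on the TensorFlow framework ([0058]-[0060]); the dataset layout, the `build_detector` toy head, and the single-object-per-image simplification below are assumptions made for illustration.

```python
import tensorflow as tf

def build_detector(num_classes):
    """Toy single-object detector head: one box and one class per image."""
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", pooling="avg")(inputs)
    box = tf.keras.layers.Dense(4, name="box")(x)          # x, y, w, h
    label = tf.keras.layers.Dense(num_classes, name="label")(x)
    return tf.keras.Model(inputs, {"box": box, "label": label})

def train_and_save(train_ds, num_classes, out_path="animal_detector.keras"):
    # `train_ds` is assumed to yield (image, {"box": box, "label": class_id})
    # pairs built from training pictures annotated with the position and
    # category of the objects they contain ([0057]).
    model = build_detector(num_classes)
    model.compile(
        optimizer="adam",
        loss={"box": "mse",
              "label": tf.keras.losses.SparseCategoricalCrossentropy(
                  from_logits=True)})
    model.fit(train_ds, epochs=10)  # [0059]: train on the training set
    model.save(out_path)            # [0060]: save the trained detector
    return model
```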

Example Embodiment

[0079] Embodiment three:

[0080] The third embodiment provides a terminal based on the foregoing embodiment.

[0081] A terminal includes a processor, an input device, an output device, and a memory, which are connected to one another. The memory stores a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the above-mentioned method.

[0082] It should be understood that, in the embodiments of the present invention, the processor may be a central processing unit (Central Processing Unit, CPU), or it may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete ...



Abstract

The invention provides a deep-learning-based automatic animal video label generation method, a terminal and a medium. The method comprises the following steps: extracting a plurality of key frame images from a to-be-detected video and inputting the key frame images into a feature extraction model; inputting the feature information output by the feature extraction model into a trained target detection algorithm model; and recording the position and category, in the to-be-detected video, of the target object output by the target detection algorithm model, and defining the category of the target object as an animal label of the to-be-detected video. The method improves recognition efficiency and recognition accuracy.

Description

Technical Field

[0001] The invention belongs to the technical field of video tags, and in particular relates to a method, a terminal and a medium for automatically generating animal video tags based on deep learning.

Background Technique

[0002] An animal video label automatic generation system detects whether there is an animal in a video, and which animal it is, so as to generate a label for the video. Methods commonly used in existing animal video label automatic generation systems include the inter-frame difference method and traditional computer-vision image processing methods.

[0003] See Figures 1 and 2. The inter-frame difference method subtracts the pixel values of two adjacent frames of the video (or frames a few apart) to obtain the absolute value of the brightness difference between the two images, and then applies thresholding to extract the motion area in the image, thereby inferring regions of...
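As an illustration of the inter-frame difference baseline described in [0003], the short sketch below (an illustrative example, not from the patent) thresholds the absolute brightness difference of two adjacent frames to obtain a motion mask:

```python
import cv2

def motion_mask(prev_frame, frame, thresh=25):
    """Threshold |I_t - I_{t-1}| to extract candidate motion regions ([0003])."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)  # absolute brightness difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask  # nonzero pixels mark the motion area
```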


Application Information

IPC(8): G06F16/78; G06F16/732; G06F16/783; G06K9/62
CPC: G06F16/7867; G06F16/7328; G06F16/7837; G06F16/7847; G06F18/2414
Inventors: 刘露, 蔺昊
Owner: SHENZHEN INVENO TECH