
A measuring algorithm for the difference of time information

A time-information difference measurement technology applied in the field of video understanding. It addresses the problems that existing models are not accurate enough to meet application requirements and that the temporal information of consecutive video frames cannot be accurately extracted, with the effect of improving understanding ability and yielding reliable data and a reliable process.

Active Publication Date: 2019-02-19
DALIAN NATIONALITIES UNIVERSITY

AI Technical Summary

Problems solved by technology

However, since the field of video understanding has not been developing for long, its accuracy in practical application scenarios remains unsatisfactory.
More and more scholars believe that existing methods cannot accurately extract the temporal information of consecutive video frames, so the accuracy of the resulting models does not meet application requirements and the original methods need further improvement.

Examples


Embodiment 1

[0071] This embodiment performs the distance metric calculation on a set of consecutive frames of the original video, shown in Figure 2, and the corresponding convolutional feature maps, shown in Figure 3; Figure 4 shows the calculated results.

Embodiment 2

[0073] This embodiment performs the distance metric calculation on a set of consecutive frames of the original video, shown in Figure 5, and the corresponding convolutional feature maps, shown in Figure 6; Figure 7 shows the calculated results.

Embodiment 3

[0075] This embodiment performs the distance metric calculation on a set of consecutive frames of the original video, shown in Figure 8, and the corresponding convolutional feature maps, shown in Figure 9; Figure 10 shows the calculated results.
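
Each embodiment starts from a set of consecutive frames and the convolutional feature maps produced from them. The patent does not name a backbone network, so the sketch below is only an illustration of how such feature maps might be obtained, assuming a truncated ResNet-18 from torchvision and 224x224 inputs; the function name conv_feature_maps is hypothetical.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # Assumed preprocessing; the patent does not state an input resolution.
    _preprocess = T.Compose([
        T.ToPILImage(),
        T.Resize((224, 224)),
        T.ToTensor(),
    ])

    def conv_feature_maps(frames):
        """frames: a list of HxWx3 uint8 arrays (consecutive video frames).
        Returns one convolutional feature map per frame, shape (N, C, h, w)."""
        # ResNet-18 with the pooling and fully connected layers removed,
        # so the output of the last convolutional stage is kept.
        backbone = torch.nn.Sequential(
            *list(models.resnet18().children())[:-2]
        ).eval()
        batch = torch.stack([_preprocess(f) for f in frames])
        with torch.no_grad():
            return backbone(batch)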


Abstract

The invention relates to a measuring algorithm for the difference of time information and belongs to the field of video understanding in computer vision applications. To address the problem of increasing the kinds of information available to convolutional neural networks, the maximum mean difference of time information is computed as follows: the original video images and the convolutional feature maps are divided into two sets; within each set, every two adjacent images form one time-information element to be calculated, yielding a first set built from the original video images and a first set built from the convolutional feature maps. The effect is that the temporal information of the two sets of data can be regarded as directly related, and the rational use of this relationship has value for the field of video understanding and its related applications.
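
The abstract describes computing a maximum mean difference between the set of time-information elements built from adjacent original frames and the set built from adjacent convolutional feature maps. A minimal numerical sketch is given below; the use of adjacent-image differences as elements, the Gaussian kernel, and the requirement that both kinds of element be flattened to a common length are assumptions, since the abstract does not fix these details.

    import numpy as np

    def temporal_elements(images):
        """Build the time-information elements of one set: one flattened
        vector per pair of adjacent images (here, their difference)."""
        flat = [np.asarray(im, dtype=np.float64).ravel() for im in images]
        return np.stack([flat[i + 1] - flat[i] for i in range(len(flat) - 1)])

    def _gaussian_kernel(a, b, sigma):
        # Pairwise RBF kernel between the rows of a and b.
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def mmd_time_difference(set_x, set_y, sigma=1.0):
        """Squared maximum mean discrepancy between two element sets.
        Both sets must share the same vector length, e.g. by resizing the
        feature maps to the frame resolution beforehand (an assumption)."""
        kxx = _gaussian_kernel(set_x, set_x, sigma).mean()
        kyy = _gaussian_kernel(set_y, set_y, sigma).mean()
        kxy = _gaussian_kernel(set_x, set_y, sigma).mean()
        return kxx + kyy - 2.0 * kxy

    # Usage sketch: frames and feature_maps are equal-length sequences.
    # diff = mmd_time_difference(temporal_elements(frames),
    #                            temporal_elements(feature_maps))

Under this reading, a small value would suggest that the feature maps preserve the frame-to-frame changes of the original video, while a large value would suggest that temporal information has been lost.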

Description

Technical field

[0001] The invention belongs to the field of video understanding in computer vision applications, and in particular relates to a method for measuring the difference between consecutive video frames and their convolutional feature maps.

Background technique

[0002] Deep learning uses models built from neural network structures to realize an end-to-end application mode. At the same time, the model's capacity to store the key information contained in huge amounts of data ensures its reliability, which gives deep learning models a distinct advantage over traditional algorithms; within only a few years they have been studied by many scholars in the fields of image, speech and text and have made great progress.

[0003] In single-frame image applications of computer vision, such as target detection, target classification, target recognition and target segmentation, deep learning can obtain corresponding models that meet t...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V20/48; G06F18/2413
Inventors: 毛琳, 陈思宇, 杨大伟
Owner: DALIAN NATIONALITIES UNIVERSITY