
Video scene analysis method and device based on human body

A video scene analysis method in the field of image processing that addresses problems such as low error tolerance, poor robustness, and low accuracy, with the effect of reducing labor costs and improving efficiency.

Pending Publication Date: 2020-05-15
ZHEJIANG UNIV
4 Cites · 11 Cited by

AI Technical Summary

Problems solved by technology

This method lacks recognition of specific semantic features, has a low error tolerance and poor robustness, and does not generalize to videos of different types and aspect ratios.
[0005] Scene classification of a video frame depends on factors such as framing, camera angle, human posture, and action angle, so the above-mentioned methods and algorithms are ill-suited to video scene recognition: their accuracy would be very low, failing to meet the demand for fast and accurate scene computation and classification in automated video production.




Embodiment Construction

[0019] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, and do not limit the protection scope of the present invention.

[0020] Figure 1 is a schematic flowchart of the human-centered video scene analysis method provided by an embodiment of the present invention. Referring to Figure 1, the video scene analysis method includes the following steps:

[0021] S101. Collect images, label each image with its scene type, extract a human feature vector from each image using a deep learning method, and form a training sample from each image's human feature vector and labeled scene; together these samples form a training sample set.
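Step S101 can be sketched as follows. The patent does not name a specific network, so `extract_human_features` below is a hypothetical stand-in for the deep learning method, returning a fixed-length vector of body-keypoint coordinates (the 17-keypoint COCO convention is an assumption, not something the patent specifies):

```python
import numpy as np

N_KEYPOINTS = 17  # COCO-style body keypoints; an assumed convention

def extract_human_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for a deep pose/feature network: returns a flat vector
    of (x, y) keypoint coordinates. A real system would run model
    inference here; this placeholder emits random coordinates."""
    keypoints = np.random.rand(N_KEYPOINTS, 2)
    return keypoints.flatten()  # shape: (2 * N_KEYPOINTS,)

def build_training_set(images, scene_labels):
    """Pair each image's human feature vector with its labeled scene
    type, forming the training sample set of S101."""
    X = np.stack([extract_human_features(img) for img in images])
    y = np.asarray(scene_labels)
    return X, y

# Usage: three dummy images labeled with scene-type ids.
images = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
X, y = build_training_set(images, scene_labels=[0, 1, 0])
print(X.shape)  # (3, 34)
```

The key design point the claim implies is that each sample is a (feature vector, scene label) pair, so any feature extractor producing fixed-length vectors could slot into `extract_human_features`.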

[0022] In an embodiment, performing scene labeling on an image includes:

[0023] D...



Abstract

The invention discloses a human-centered video scene analysis method and device. The method comprises the steps of: collecting images, labeling each image with its scene type, extracting a human body feature vector from each image using a deep learning method, and forming a training sample from each image's human body feature vector and labeled scene to build a training sample set; training a random forest model with the training sample set, and obtaining a scene analysis model once the parameters of the random forest model are determined; and reading each frame of a video to be analyzed, extracting the frame's human body feature vector using the deep learning method, and using the scene analysis model to compute and output a scene classification result for each frame from the input human body feature vector. The method and device can accurately identify the scene type of a video and meet the demands of automated video analysis and editing for fast and accurate scene-type computation.
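The full pipeline in the abstract can be illustrated with scikit-learn's `RandomForestClassifier` standing in for the random forest model (the patent names no library). The feature vectors and scene names here are synthetic stand-ins: vectors are drawn around per-scene centers so the model has something learnable.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FEATURE_DIM = 34                            # assumed human-feature length
SCENES = ["close-up", "medium", "long"]     # hypothetical scene types

# Synthetic training set: clustered feature vectors, 50 per scene.
centers = rng.normal(size=(len(SCENES), FEATURE_DIM))
X_train = np.vstack([centers[i] + 0.1 * rng.normal(size=(50, FEATURE_DIM))
                     for i in range(len(SCENES))])
y_train = np.repeat(np.arange(len(SCENES)), 50)

# Train the scene analysis model (hyperparameters are illustrative).
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# "Read each frame": classify a stream of per-frame feature vectors.
frames = centers + 0.05 * rng.normal(size=(len(SCENES), FEATURE_DIM))
pred = model.predict(frames)
print([SCENES[i] for i in pred])
```

In a real system, the rows of `frames` would come from running the same deep feature extractor on each decoded video frame, so classification cost per frame is just one forest evaluation.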

Description

Technical field

[0001] The invention relates to the field of image processing, and in particular to a video scene analysis method and device with people as the main subject.

Background technique

[0002] With the continuous advancement of multimedia technology and the growing popularity of the Internet, people demand greater diversity and convenience in how they obtain information. Applications and products built around video have multiplied accordingly, and video automation technologies have emerged as the times require. Technologies such as automatic video analysis, automatic editing, and automatic generation all require accurate computation and analysis of a video's attributes and characteristics.

[0003] In the process of video creation, the various visual elements in each shot affect the intuitive impression the video gives the audience, and the audience will receive different information and meanin...

Claims


Application Information

IPC(8): G06K 9/00, G06K 9/62
CPC: G06V 40/10, G06V 20/41, G06F 18/24323, G06F 18/254
Inventors: 陈实, 王禹溪, 吴文齐, 杨昌源, 马春阳, 陈羽飞
Owner ZHEJIANG UNIV