
Video emotion identification method based on emotion significant feature integration

A feature-fusion and emotion-recognition technology, applied in character and pattern recognition, instruments, computer components, etc. It addresses the problem that extracted video emotional features lack discrimination, which degrades the accuracy of video classification and recognition.

Active Publication Date: 2015-12-09
SHANDONG INST OF BUSINESS & TECH


Problems solved by technology

Video is a form of multimedia comprising feature data such as language, sound, and image. Existing research, however, has not investigated the fusion of these multimedia features in depth, so the extracted video emotional features lack discrimination, which in turn degrades the accuracy of video classification and recognition.




Embodiment Construction

[0043] The present invention will be described in detail below in conjunction with the drawings:

[0044] Figure 1 shows the video emotion recognition method based on the fusion of emotionally salient features provided by the present invention. As shown in Figure 1, the method comprises the following steps:

[0045] Step 1: Perform structural analysis on the video: detect shot boundaries using mutual information entropy from information theory, extract the video shots, and then select emotional key frames for each shot. The specific extraction steps are as follows:
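The shot-boundary step above can be sketched as follows. The patent only names mutual information entropy as the detection criterion, so the grayscale-histogram formulation, the bin count, and the `threshold=0.5` cut-off below are assumptions for illustration:

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Mutual information between the gray-level distributions of two
    frames; a sharp drop between consecutive frames suggests a shot cut."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of frame_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of frame_b
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def detect_shot_boundaries(frames, threshold=0.5):
    """Indices i at which a new shot is assumed to start at frames[i]."""
    return [i for i in range(1, len(frames))
            if mutual_information(frames[i - 1], frames[i]) < threshold]
```

Frames within one shot share most of their content, so their mutual information stays high; across a cut the two frames are nearly independent and the value collapses toward zero.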

[0046] Step 1.1: Compute the color emotion intensity value of each video frame, shot by shot. Taking time as the horizontal axis and the color emotion intensity value as the vertical axis yields the shot emotion fluctuation curve. The color emotion intensity value is computed as follows:

[0047] IT_i = (1 / (M × N)) · Σ_{p=1}^{M} ...
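The averaging structure of the (truncated) formula above — a per-pixel color-emotion term summed over all M × N pixels — might be sketched as below. The per-pixel term itself is not recoverable from the text, so a Valdez–Mehrabian-style pleasure score (0.69·V + 0.22·S on HSV channels) stands in for it here as a placeholder:

```python
import numpy as np

def color_emotion_intensity(frame_hsv):
    """Average a per-pixel color-emotion score over an M x N frame,
    mirroring the 1/(M*N) * sum structure of the patent's formula."""
    s = frame_hsv[..., 1]                 # saturation channel
    v = frame_hsv[..., 2]                 # value (brightness) channel
    pixel_score = 0.69 * v + 0.22 * s     # placeholder per-pixel term
    return float(pixel_score.mean())

def emotion_fluctuation_curve(shot_frames_hsv):
    """Per-frame intensity values; plotted against time, this gives the
    shot emotion fluctuation curve described in step 1.1."""
    return [color_emotion_intensity(f) for f in shot_frames_hsv]
```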



Abstract

The invention discloses a video emotion recognition method based on the fusion of emotionally salient features. A training video set is acquired, and shots are extracted from each video; an emotional key frame is selected for each shot. The audio feature and the visual emotion feature of each shot in the training video set are then extracted: the audio feature is built on a bag-of-words model to form an emotion distribution histogram feature, and the visual emotion feature is built on a visual dictionary to form an emotion attention feature. The emotion attention feature and the emotion distribution histogram feature are fused top-down into a video feature with emotional saliency. The fused features formed on the training video set are fed into an SVM classifier for training, and the resulting model parameters are used to predict the emotion category of a test video. The fusion algorithm provided by the invention is simple to implement, its trainer is mature and reliable, and prediction is fast, so the video emotion recognition process can be completed efficiently.
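The overall pipeline in the abstract — concatenating the audio emotion-distribution histogram with the visual emotion-attention feature and training an SVM classifier — can be sketched with scikit-learn as below; the feature dimensions and the random training data are hypothetical stand-ins for the patent's extracted features:

```python
import numpy as np
from sklearn.svm import SVC

def fuse_features(audio_hist, visual_attention):
    """Top-down fusion: concatenate the audio emotion-distribution
    histogram with the visual emotion-attention feature."""
    return np.concatenate([audio_hist, visual_attention])

# Hypothetical per-video features: a 50-bin audio histogram and a
# 64-dim visual attention vector for each of 40 training videos.
rng = np.random.default_rng(0)
X = np.stack([fuse_features(rng.random(50), rng.random(64))
              for _ in range(40)])
y = rng.integers(0, 3, 40)            # e.g. three emotion categories

clf = SVC(kernel="rbf").fit(X, y)     # train the SVM emotion classifier
pred = clf.predict(X[:5])             # predict emotion classes of videos
```

An RBF-kernel SVM is a standard choice for such fixed-length fused vectors; at prediction time a test video's fused feature is passed to `clf.predict` to obtain its emotion category.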

Description

Technical field

[0001] The present invention relates to the fields of multimedia content understanding and computer network content retrieval, and in particular to a video emotion recognition method based on the fusion of emotionally salient features.

Background technique

[0002] With the rapid development of computer, communication, and multimedia technology, the Internet has become a vast source of multimedia information. People are eager to use computers to automatically understand the rapidly growing volume of digital image / video information, so that users can effectively organize, manage, and search this visual information. Content-based video retrieval has therefore become an important research topic. Among its branches, video retrieval at the cognitive level was studied earlier, and many feasible algorithms have emerged. However, video retrieval research based on sentiment analysis has not received mu...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V20/40; G06V20/46; G06F18/2411
Inventors: 丁昕苗, 郭文, 朱智林, 王永强, 华甄, 刘延武
Owner: SHANDONG INST OF BUSINESS & TECH