Video content identification method and device, storage medium and electronic equipment

A video content identification technology, applied in selective content distribution, electrical components, image communication, etc. It addresses the problem that video segments watched with low concentration cannot be identified, and achieves the effect of improving learning efficiency.

Active Publication Date: 2020-02-18
GREE ELECTRIC APPLIANCES INC +1


Problems solved by technology

[0003] In view of the above problems, the present application provides a video content identification method, device, storage medium and electronic equipment, which solves the problem that, in the prior art, video segments watched with low concentration cannot be identified through the user's concentration on the video content so that the video content can be optimized.



Examples


Embodiment 1

[0043] Figure 1 is a schematic flowchart of a method for identifying video content provided by an embodiment of the present application. As shown in Figure 1, the method includes:

[0044] Step S110: Divide the video into multiple video segments.

[0045] Specifically, the video is divided equally by time into multiple video segments.

[0046] The segment duration is manually preset according to the actual duration of the video.
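A minimal sketch of this division step, assuming the segment duration is given in seconds and segments are represented as (start, end) pairs; the function name and signature are illustrative and not taken from the patent text.

```python
from typing import List, Tuple


def divide_video(video_duration_s: float, segment_duration_s: float) -> List[Tuple[float, float]]:
    """Divide a video of video_duration_s seconds into segments of a preset
    duration; the last segment may be shorter than the rest."""
    if segment_duration_s <= 0:
        raise ValueError("segment duration must be positive")
    segments = []
    start = 0.0
    while start < video_duration_s:
        end = min(start + segment_duration_s, video_duration_s)
        segments.append((start, end))
        start = end
    return segments


# Example: a 10-minute training video split into 2-minute segments.
print(divide_video(600.0, 120.0))
# [(0.0, 120.0), (120.0, 240.0), (240.0, 360.0), (360.0, 480.0), (480.0, 600.0)]
```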

[0047] Step S120: When the video is playing, acquire eye tracking concentration data related to each video segment.

[0048] Specifically, the display area of the screen on which the video is played is partitioned into at least two sub-regions; the sub-regions containing video content are selected as effective sub-regions; the user's eye tracking information within the effective sub-regions is acquired through eye tracking technology; and, according to the eye tracking information, the eye tracking concentration data related to each video segment is obtained.
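A sketch of this step under illustrative assumptions: a rectangular grid of sub-regions, gaze samples given as (timestamp, x, y) tuples, and simple rectangle-overlap tests. None of these concrete details come from the patent text.

```python
from typing import Dict, List, Tuple

Rect = Tuple[float, float, float, float]  # (x, y, width, height) in pixels


def partition_display(width: float, height: float, rows: int, cols: int) -> List[Rect]:
    """Split the display area into rows x cols sub-regions."""
    cell_w, cell_h = width / cols, height / rows
    return [(c * cell_w, r * cell_h, cell_w, cell_h)
            for r in range(rows) for c in range(cols)]


def effective_subregions(subregions: List[Rect], video_rect: Rect) -> List[Rect]:
    """Keep only the sub-regions that overlap the area showing video content."""
    vx, vy, vw, vh = video_rect

    def overlaps(r: Rect) -> bool:
        x, y, w, h = r
        return x < vx + vw and x + w > vx and y < vy + vh and y + h > vy

    return [r for r in subregions if overlaps(r)]


def gaze_in_effective_area(gaze_samples: List[Tuple[float, float, float]],
                           effective: List[Rect]) -> Dict[float, bool]:
    """Map each gaze-sample timestamp to whether the gaze point fell inside
    an effective sub-region (raw eye tracking information per sample)."""
    def inside(x: float, y: float, r: Rect) -> bool:
        rx, ry, rw, rh = r
        return rx <= x < rx + rw and ry <= y < ry + rh

    return {t: any(inside(x, y, r) for r in effective) for (t, x, y) in gaze_samples}
```

Embodiment 2 below instantiates this idea concretely with a 12-cell grid, of which 6 cells form the effective area.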

Embodiment 2

[0070] Figure 3 is another schematic flowchart of a method for identifying video content provided by an embodiment of the present application.

[0071] As shown in Figure 3, the user turns on the electronic device to play a video, and the front camera of the electronic device enables the eye tracking mode. The display area of the screen on which the training video is played is divided into 12 sub-regions, and the 6 sub-regions containing training video content are screened out as effective sub-regions. The video is divided into multiple video segments, and while the training video is played, the concentration data of each video segment within the 6 effective sub-regions is obtained through eye tracking technology.

[0072] The eye tracking concentration data of each video segment is visually processed, and the corresponding concentration value of the video segment is estimated.

[0073] Specifically, visual processing here means removing abnormal data that is too ...
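One plausible reading of this truncated passage is that "visual processing" removes outlier samples before the remaining data is averaged into a concentration value. The sketch below assumes a median-deviation rule and a 0-to-1 concentration scale; both are illustrative assumptions, not the patent's stated method.

```python
from statistics import median
from typing import List


def remove_abnormal(samples: List[float], max_deviation: float = 0.4) -> List[float]:
    """Drop concentration samples that deviate too far from the median,
    e.g. readings where the tracker briefly lost the user's eyes."""
    if not samples:
        return []
    m = median(samples)
    return [s for s in samples if abs(s - m) <= max_deviation]


def estimate_concentration_value(samples: List[float]) -> float:
    """Estimate a segment's concentration value as the mean of the cleaned
    per-sample data (0.0 = not watching, 1.0 = fully attentive)."""
    cleaned = remove_abnormal(samples)
    return sum(cleaned) / len(cleaned) if cleaned else 0.0


# Example: a segment with one abnormal reading that gets filtered out.
print(round(estimate_concentration_value([0.8, 0.75, 0.1, 0.82, 0.78]), 2))  # 0.79
```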

Embodiment 3

[0077] Figure 4 is a connection block diagram of an apparatus 20 for identifying video content provided by an embodiment of the present application. As shown in Figure 4, the apparatus includes:

[0078] The video division module 21 is configured to divide the video into a plurality of video segments;

[0079] The data collection module 22 is configured to obtain eye tracking concentration data related to each video segment when the video is played;

[0080] The processing module 23 is configured to estimate the concentration value corresponding to the video segment according to the eye tracking concentration data of the video segment;

[0081] The judging module 24 is configured to judge whether the concentration value is less than a preset concentration threshold;

[0082] The identification module 25 is configured to identify the video segment when the concentration value is less than the preset concentration threshold, so as to optimize the content of the video segment.
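As a rough sketch of how modules 21 to 25 could be composed, the class below mirrors the module descriptions above; the class name, method signatures and the mean-based concentration estimate are illustrative assumptions rather than the patented implementation.

```python
from typing import Dict, List


class VideoContentIdentifier:
    """Illustrative composition of modules 21-25 as methods of one class."""

    def __init__(self, concentration_threshold: float):
        # Preset concentration threshold used by the judging step (module 24).
        self.concentration_threshold = concentration_threshold

    def divide_video(self, duration_s: float, segment_s: float) -> List[int]:
        """Module 21: return segment indices for a video of the given duration."""
        return list(range(int(-(-duration_s // segment_s))))  # ceiling division

    def collect_concentration_data(self, segment_id: int) -> List[float]:
        """Module 22: acquire eye tracking concentration data for a segment.
        Stubbed here; a real device would read it from the eye tracker."""
        raise NotImplementedError

    def estimate_concentration(self, data: List[float]) -> float:
        """Module 23: estimate the segment's concentration value."""
        return sum(data) / len(data) if data else 0.0

    def is_low_concentration(self, value: float) -> bool:
        """Module 24: judge whether the value is below the preset threshold."""
        return value < self.concentration_threshold

    def identify(self, per_segment_data: Dict[int, List[float]]) -> List[int]:
        """Module 25: return the segments flagged for content optimization."""
        return [seg for seg, data in per_segment_data.items()
                if self.is_low_concentration(self.estimate_concentration(data))]


# Example usage with made-up concentration data for three segments.
identifier = VideoContentIdentifier(concentration_threshold=0.6)
print(identifier.identify({0: [0.8, 0.9], 1: [0.3, 0.4], 2: [0.7, 0.65]}))  # [1]
```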



Abstract

The invention relates to the technical field of video optimization, and in particular to a video content identification method, device, storage medium and electronic equipment. The method comprises the following steps: S110, dividing a video into a plurality of video segments; S120, when the video is played, acquiring eye tracking concentration data related to each video segment; S130, estimating a concentration value corresponding to the video segment according to its eye tracking concentration data; S140, judging whether the concentration value is smaller than a preset concentration threshold; and S150, when the concentration value is smaller than the preset concentration threshold, identifying the video segment so that its content can be optimized. This solves the problem that, in the prior art, video segments watched with low concentration cannot be identified through the user's concentration on the video content so as to optimize the video content.
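Taken together, steps S110 to S150 form a small pipeline. The sketch below is illustrative only, assuming equal-duration segments, a per-segment data source supplied as a callable, and a mean-based concentration value; none of these names or data shapes come from the patent.

```python
from typing import Callable, Dict, List


def identify_low_concentration_segments(
    video_duration_s: float,
    segment_duration_s: float,
    concentration_data: Callable[[int], List[float]],
    threshold: float,
) -> List[int]:
    """S110-S150: divide, collect, estimate, judge, identify."""
    n_segments = int(-(-video_duration_s // segment_duration_s))  # S110: ceiling division
    flagged: List[int] = []
    for seg in range(n_segments):
        data = concentration_data(seg)                  # S120: eye tracking data per segment
        value = sum(data) / len(data) if data else 0.0  # S130: concentration value
        if value < threshold:                           # S140: compare with preset threshold
            flagged.append(seg)                         # S150: identify for optimization
    return flagged


# Example with synthetic per-segment concentration data.
fake_data: Dict[int, List[float]] = {0: [0.9, 0.8], 1: [0.2, 0.3], 2: [0.7]}
print(identify_low_concentration_segments(360, 120, lambda s: fake_data[s], 0.6))  # [1]
```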

Description

Technical field

[0001] The present application relates to the technical field of video optimization, and in particular to a video content identification method, device, storage medium and electronic equipment.

Background technique

[0002] In current life, the importance of learning is deeply rooted in people's hearts, various training institutions have emerged as the times require, and online teaching by training providers has become more and more common. Most users learn by playing videos on mobile smart devices such as mobile phones and tablets. In the process of learning through videos, users may be interested in only part of the content and skip or fast-forward through the rest, or they may become distracted while learning, which reduces learning efficiency. However, in the prior art, video segments watched with low concentration cannot be identified through the user's concentration on the video content, so as to facilitate the optimization of the video content.


Application Information

IPC(8): H04N21/44, H04N21/845, H04N21/458
CPC: H04N21/44008, H04N21/4586, H04N21/8456
Inventor 蔺烜
Owner GREE ELECTRIC APPLIANCES INC