Video interaction method

A video interaction method, in the field of video technology, which can solve problems such as densely packed interaction content that degrades the viewing experience and poor-quality video interaction content, and achieves the effect of accurate positioning.

Status: Inactive · Publication date: 2017-05-17
Owner: ZHONGGUANG REDIANYUN TECH CO LTD

AI Technical Summary

Problems solved by technology

[0006] To improve participation and enjoyment when watching videos, video interaction features such as voting and commenting have appeared, and many TV programs and cinemas now provide them. Viewers comment on the people, objects and events that appear in the video, and the comments are displayed over it; in what is commonly called a "bullet screen" video, the audience's comments float across the video content. The current technical problems are that the quality of the interactive content cannot be controlled, and that densely appearing interactive content degrades the viewing experience. To solve these problems, CN102129346 discloses a video interaction method, and CN105357586 discloses a video bullet-screen filtering method and device. The methods disclosed above partially solve the problem of poor-quality interaction content, but they only provide a simple switch between interaction on and off; they cannot locate and lock interaction onto each individual video frame that requires it.



Examples


Embodiment 1

[0064] The present invention provides a video interaction method. As shown in Figure 1, the interaction method includes:

[0065] S1: Cache the video that the user wants to watch into the first buffer area and the second buffer area;

[0066] S2: Collect the interactive content of the video to be watched, record the timestamp of each item of interactive content, and store it in the third buffer area. The interactive content consists of existing interactive content stored on the interactive content server and newly posted comments collected in real time;

[0067] S3: Obtain each of the video frames that make up the video in the first buffer area, extract the I frames among them, and record the playing time period of each I frame;

[0068] S4: According to the playing time period, find in the second buffer area the video frames corresponding to each I frame, associate a small window with the last frame in that sequence of video frames, and display it;

[0069] S5...
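The following is a minimal Python sketch of how steps S1, S3 and S4 could be realised; it is an illustration rather than the patent's implementation. It assumes plain in-memory lists as the buffer areas, a simple VideoFrame record, and that an I frame's playing time period runs up to the next I frame; all of these names and assumptions are illustrative only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VideoFrame:
    index: int        # position in the overall frame sequence
    frame_type: str   # 'I', 'P' or 'B'
    play_time: float  # playback timestamp in seconds

def cache_video(frames: List[VideoFrame]) -> Tuple[List[VideoFrame], List[VideoFrame]]:
    """S1: cache the video to be watched into a first and a second buffer area."""
    return list(frames), list(frames)

def extract_i_frames(first_buffer: List[VideoFrame]) -> List[Tuple[VideoFrame, float, float]]:
    """S3: extract the I frames from the first buffer area and record the
    playing time period of each one (here: from the I frame's own timestamp
    up to the next I frame, or to the end of the video)."""
    i_frames = [f for f in first_buffer if f.frame_type == 'I']
    periods = []
    for n, i_frame in enumerate(i_frames):
        end = (i_frames[n + 1].play_time if n + 1 < len(i_frames)
               else first_buffer[-1].play_time)
        periods.append((i_frame, i_frame.play_time, end))
    return periods

def associate_windows(second_buffer: List[VideoFrame],
                      periods: List[Tuple[VideoFrame, float, float]]) -> List[dict]:
    """S4: use the playing time period to find each I frame's video frames in
    the second buffer area and attach a small window to the last frame of
    that sequence, ready for display."""
    windows = []
    for i_frame, start, end in periods:
        segment = [f for f in second_buffer if start <= f.play_time <= end]
        if segment:
            windows.append({'i_frame': i_frame, 'anchor_frame': segment[-1]})
    return windows
```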

Embodiment 2

[0074] The video interaction method provided by Embodiment 2 of the present invention differs from Embodiment 1 in that the small window described in step S4 displays either the I-frame image at the corresponding position or the keywords recorded for that I frame.

[0075] The specific method of step S4 is:

[0076] S41: Judge whether the number by which the video data packets in the second buffer area decrease within a preset time period is greater than a preset reduction threshold. If it is greater, the small window displays the I-frame image at the corresponding position; otherwise, the small window displays the keywords recorded for that I frame.

[0077] The small window provided by the present invention can show either an I-frame image or a text note. Which of the two is selected for display depends mainly on not causing stuttering or network delay during video playback, so the present invention monitors playback in real time. The ne...
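A minimal sketch of the decision in S41, assuming the player can report how many video data packets were consumed from the second buffer area during the preset period; the function and parameter names, and the use of a simple packet count, are illustrative assumptions rather than the patent's own implementation.

```python
def choose_window_content(packet_reduction: int,
                          reduction_threshold: int,
                          i_frame_image: bytes,
                          keywords: str):
    """S41: if the number of video data packets removed from the second
    buffer area within the preset period exceeds the preset threshold,
    show the I-frame image in the small window; otherwise fall back to the
    lighter-weight keyword text so playback is not disturbed."""
    if packet_reduction > reduction_threshold:
        return ('image', i_frame_image)
    return ('text', keywords)

# Example: 120 packets consumed against a threshold of 100 -> show the image.
kind, payload = choose_window_content(120, 100, b'<jpeg bytes>', 'keywords for this I frame')
```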

Embodiment 3

[0079] The video interaction method provided by Embodiment 3 of the present invention differs from Embodiment 1 in that, as shown in Figure 2, the specific method of step S6 includes the following steps:

[0080] S61: Determine whether the Nth I frame and the (N+1)th I frame are similar, N ≥ 1. If they are similar, put the Nth I frame and the (N+1)th I frame into the same storage area; otherwise, put them into different storage areas, and number each storage area;

[0081] S62: Calculate the first time difference T1 between the time S1 corresponding to the first frame in the sequence of video frames of the first I frame in each storage area and the time S2 corresponding to the last frame in the sequence of video frames of the last I frame in that storage area;

[0082] S63: Determine whether the first time difference T1 is greater than a preset time difference threshold, if so, go to step S64, otherwise go to step S65;

[0083] S64: Fi...
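A sketch of S61 through S63, grouping consecutive similar I frames into numbered storage areas and testing the first time difference T1 against a preset threshold. Because the excerpt does not specify the similarity measure, it is passed in as a callable; all names here are illustrative.

```python
from typing import Callable, List, Tuple

# Each entry is (i_frame, start_time, end_time) for that I frame's video frames.
Period = Tuple[object, float, float]

def group_similar_i_frames(periods: List[Period],
                           is_similar: Callable[[object, object], bool]) -> List[List[Period]]:
    """S61: consecutive similar I frames share a storage area; a dissimilar
    I frame opens a new area. The list index serves as the area number."""
    areas: List[List[Period]] = []
    for period in periods:
        if areas and is_similar(areas[-1][-1][0], period[0]):
            areas[-1].append(period)
        else:
            areas.append([period])
    return areas

def first_time_difference(area: List[Period]) -> float:
    """S62: T1 = S2 - S1, from the time of the first frame of the first
    I frame in the area to the time of the last frame of its last I frame."""
    s1 = area[0][1]
    s2 = area[-1][2]
    return s2 - s1

def exceeds_threshold(area: List[Period], threshold: float) -> bool:
    """S63: branch to S64 when T1 is greater than the preset time
    difference threshold, otherwise to S65."""
    return first_time_difference(area) > threshold
```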



Abstract

The invention provides a video interaction method. The method includes: buffering the video to be watched by a user into a first buffer area and a second buffer area; collecting the interaction content of the video to be watched; obtaining each video frame of the video in the first buffer area and extracting the I frames; associating a small window with the last frame in the sequence of video frames belonging to each I frame; screening, through the small window and an interaction interface, the interaction content that matches the I-frame image and associating that content with the I-frame image; and displaying the result. The method can lock interaction onto an individual video frame, and viewers can choose, according to their own wishes, which video frames they want to interact with. Positioning is accurate, interaction content no longer covers the whole screen, and the interaction content of a previous video frame no longer interferes with viewing the next one.

Description

Technical field

[0001] The invention relates to the field of video on demand, and in particular to a video interaction method.

Background technique

[0002] A video is composed of multiple video frames, and the video frames include:

[0003] I frame: a full-frame, intra-coded frame (also called an intra-frame coded frame), so the data volume of an I frame is generally relatively large. An I frame is generated without reference to any other frame and serves as the reference frame for P frames and B frames; a complete image can be reconstructed from an I frame alone during decoding.

[0004] P frame: a forward-predictive coded frame (also called an inter-frame coded frame). A P frame is generated with reference to the nearest preceding I frame or P frame, and it is also a reference frame for other P frames or B frames. During decoding it must rely on the preceding I frame or P frame to reconstruct a complete image.

[0005] B frame: It is...
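To make the frame-type background concrete, the following tiny sketch (an illustration, not part of the patent) shows a hypothetical group of pictures and the decoding dependency described above: only I frames can be reconstructed without reference to other frames.

```python
# A hypothetical frame-type sequence (group of pictures). P frames depend on
# the preceding I or P frame; B frames also reference neighbouring frames.
gop = ['I', 'B', 'B', 'P', 'B', 'B', 'P', 'B', 'B', 'I']

def self_decodable(frame_type: str) -> bool:
    """Only an I frame alone is enough to reconstruct a complete image."""
    return frame_type == 'I'

i_frame_positions = [n for n, t in enumerate(gop) if self_decodable(t)]
print(i_frame_positions)  # -> [0, 9]
```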


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N21/431, H04N21/472, H04N21/475, H04N21/4788
CPC: H04N21/4316, H04N21/47202, H04N21/47205, H04N21/4756, H04N21/4788
Inventor: 纪琦华蒲珂方宏曾泽基李哲山胡彬陈传海蔡忠善张毅萍魏明蔡辉
Owner: ZHONGGUANG REDIANYUN TECH CO LTD