80 results about "Video eeg" patented technology

Method and system for recommending video resource

The invention provides a method and a system for recommending video resources. The method comprises: collecting the historical records of all users watching videos; computing statistics over the collected records and, from the time information of the watched videos, calculating each user's viewing-count ratio or viewing-duration ratio in each time period; classifying users whose ratio in a specific time period exceeds a group threshold into the user group for that time period, thereby generating, for each specific time period, a user group of typical users; acquiring the data characteristics of the typical users in each time-period group on other dimensions, and determining the group characteristics of each specific time period on those dimensions; judging whether the remaining users match a group's characteristics and, if so, adding them to the user group of that specific time period; and recommending the corresponding video resources to the users in each group according to the group's specific time-period information. With this method and system, content recommendation can be targeted to the time at which users watch videos, so the recommendation effect is enhanced.
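The grouping step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the period boundaries, the 0.5 group threshold, and the use of viewing counts (duration works identically) are all assumptions for the example.

```python
from collections import defaultdict

# Illustrative time periods; the patent does not fix these boundaries.
PERIODS = {"morning": range(6, 12), "afternoon": range(12, 18),
           "evening": range(18, 24), "night": range(0, 6)}

def period_of(hour):
    for name, hours in PERIODS.items():
        if hour in hours:
            return name
    raise ValueError(hour)

def group_users(watch_logs, threshold=0.5):
    """watch_logs: {user: [hour_of_day, ...]} -> {period: [users]}"""
    groups = defaultdict(list)
    for user, hours in watch_logs.items():
        counts = defaultdict(int)
        for h in hours:
            counts[period_of(h)] += 1
        total = len(hours)
        for period, n in counts.items():
            if n / total > threshold:   # number ratio exceeds group threshold
                groups[period].append(user)
    return dict(groups)

logs = {"alice": [20, 21, 22, 9], "bob": [8, 9, 10]}
print(group_users(logs))   # alice -> evening (3/4), bob -> morning (3/3)
```

A user whose viewing is spread evenly across periods joins no group under this rule, which matches the abstract's notion of grouping only "typical" users.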
Owner:LE SHI ZHI ZIN ELECTRONIC TECHNOLOGY (TIANJIN) LTD

Audio comment information generating method and device and audio comment playing method and device

The invention provides an audio comment information generating method and device and an audio comment playing method and device, which solve the problems that text comments are difficult to produce during video playback and that browsing text comments harms the user experience. The audio comment generating method comprises: during playback of certain video data, obtaining audio information when an input trigger is detected; uploading the audio information to a server; and converting the audio information into audio comment information through the server. The audio comment playing method comprises: after sending a video playing request to the server, receiving the video data issued by the server together with the pre-generated corresponding audio data. With these methods and devices, generating audio comments is simple and highly universal; during video playback the user can play the audio data simultaneously, and hearing other users' voices and tone enhances the user experience.
Owner:LETV INFORMATION TECH BEIJING

Video recommending method based on video affective characteristics and conversation models

The invention provides a video recommending method based on video affective characteristics and conversation models. Affective characteristics of a video are adopted as the basis of comparison: multiple affective characteristics are extracted from the video and its sound track and synthesized into a valence-arousal curve diagram (V-A diagram); the V-A diagram is then normalized and divided into a fixed number of identical blocks, and a color block diagram is determined for each block. The difference between the two color block diagrams at corresponding positions of two diagrams is compared with a threshold value to obtain a block difference and a coverage difference, from which the similarity value of the two videos is obtained; the result of clustering the similarity values is used as the video recommendation result. The method also adopts a conversation model to update the recommendation result while the user continues watching. With this method, the recommendation result better matches the user's current affective state, and both the click-through rate on recommended videos and the number of videos watched in succession are improved.
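The block-comparison idea can be sketched as follows. This is an assumption-laden simplification: each video's normalized V-A curve is cut into a fixed number of blocks, each block is reduced to a coarse descriptor (a mean here, standing in for the patent's color block diagram), and the fraction of blocks whose descriptors differ beyond a threshold gives a dissimilarity score. The 0.1 threshold and 8-block split are illustrative, not from the patent.

```python
import numpy as np

def block_dissimilarity(curve_a, curve_b, n_blocks=8, thresh=0.1):
    """Share of curve blocks whose coarse descriptors differ beyond thresh."""
    def descriptors(curve):
        c = np.asarray(curve, dtype=float)
        c = (c - c.min()) / ((c.max() - c.min()) or 1.0)  # normalize to [0, 1]
        blocks = np.array_split(c, n_blocks)              # fixed block count
        return np.array([b.mean() for b in blocks])       # one value per block
    da, db = descriptors(curve_a), descriptors(curve_b)
    differing = np.abs(da - db) > thresh                  # per-block difference test
    return differing.mean()

rising = np.linspace(0.0, 1.0, 64)
print(block_dissimilarity(rising, rising))        # identical curves -> 0.0
print(block_dissimilarity(rising, rising[::-1]))  # opposite trends -> large
```

Low dissimilarity would then feed the clustering step that produces the recommendation result.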
Owner:BEIHANG UNIV

Video emotion classification method and system fusing electroencephalogram and stimulation source information

The invention discloses a video emotion classification method and system fusing electroencephalogram (EEG) and stimulation-source information. The method comprises the steps of: constructing a stimulation source-EEG signal data set by having subjects watch video clips while their EEG signals are collected with an EEG scanner; constructing a multi-modal feature fusion model by extracting video features and EEG signal features from the training data set and generating a fusion vector with a multi-modal information fusion method based on an attention mechanism; training a fusion-vector classification model by taking the fusion vector as the input of a fully connected neural-network layer for prediction and updating the network weights according to the difference between the prediction result and the true label; and classifying with the trained model by collecting the EEG signal while a subject watches the video to be classified, extracting and fusing the video and EEG signal features, and inputting the fusion vector into the trained network to obtain the classification result.
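A minimal numpy sketch of attention-based fusion of the two modalities: each modality's feature vector is scored against a query vector, softmax turns the scores into attention weights, and the fusion vector is the weighted sum. The feature dimension and the randomly initialized "learned" query are illustrative assumptions; the patent's actual network architecture is not specified in this abstract.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

def fuse(video_feat, eeg_feat, w_query):
    """Attention-weighted fusion of a video and an EEG feature vector."""
    feats = np.stack([video_feat, eeg_feat])   # (2, d)
    scores = feats @ w_query                   # one attention score per modality
    weights = softmax(scores)                  # weights sum to 1
    return weights @ feats, weights            # fusion vector of shape (d,)

rng = np.random.default_rng(0)
video, eeg, query = (rng.normal(size=4) for _ in range(3))
fused, weights = fuse(video, eeg, query)
```

In the patent's pipeline the fused vector would then be fed to the fully connected classification layer; here the query would be a trained parameter, not a random draw.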
Owner:XI AN JIAOTONG UNIV

Three-dimensional visual unattended transformer substation intelligent linkage system based on video fusion

The invention relates to a three-dimensional visual unattended substation intelligent linkage system based on video fusion. The system comprises a 3D video live-action fusion subsystem, an intelligent inspection subsystem, a sensor subsystem, a rail-machine inspection subsystem, a door-magnet and perimeter alarm subsystem, a dome-camera linkage subsystem, a flame recognition and early-warning subsystem and a monitoring center, with all subsystems interconnected and intercommunicating with the monitoring center. By fusing three-dimensional videos, the system realizes all-weather, omnibearing, 24-hour uninterrupted intelligent monitoring of a substation, arbitrary browsing and viewing of the substation's three-dimensional real scene, and alarm prompting for faults in the substation. Workers can watch the videos in the monitoring room and monitor and dispatch several substations under their administration in real time, without inspecting the power transformation facilities and equipment every day, so monitoring capacity and working efficiency are improved.
Owner:刘禹岐

Method and system for synchronously watching videos and interacting in real time by multiple users

The invention discloses a method and a system for multiple users to synchronously watch videos and interact in real time, which solve the problems that the pictures of the same video watched by multiple users are difficult to keep synchronized and that real-time interaction among the users is difficult. The method comprises the following steps: a user creates a studio and sets its playing performance level and playing-priority guarantee strategy; the creator invites other user terminals to join the studio and sets the video to be played; an invited user terminal automatically adjusts its playing quality level, and terminals that cannot keep up are automatically eliminated; the creator grants different real-time synchronization instruction authorities to the other user terminals; each user controls synchronous playing of the video within the scope of their authority; synchronous playing across terminals is accurately controlled through synchronization control and a real-time feedback algorithm; and all synchronized users can interact in real time through text, voice, video, bullet comments or emoticons.
Owner:北京我声我视科技有限公司

Method and system for categorizing video

The invention provides a method and system for categorizing a video, aimed at the problems of the existing manual categorization method: low efficiency, high cost, and preview accuracy that cannot be adjusted. The method comprises the following steps: 1. loading the video, acquiring its running time and format information, and determining the length of the time axis; 2. determining the current observation position of the video, decoding and displaying the frames adjacent to it, and updating the adjacent frames when the current observation position changes; and 3. determining an initial preview display scale, decoding and displaying preview frames, and updating the preview frames when the preview display scale or the current observation position changes. Using the method and system, a categorization operator browses the preview frames in advance without watching the whole video, then examines the adjacent frames after preliminary positioning, so categorization positioning is more accurate, categorization efficiency is improved, the labor intensity of categorization work is reduced, and users are better helped to process and use video resources in time.
Owner:北京新岸线网络技术有限公司

Adaptive video key frame extraction method under emotional arousal

The invention relates to an adaptive video key frame extraction method under emotional arousal. The method comprises the steps of: starting from the emotional fluctuation of the viewer, computing the motion intensity of the video frames as the viewer's visual emotional arousal degree, computing the short-time average energy and tone as the auditory emotional arousal degree, and linearly fusing the two to obtain the emotional arousal degree of each video frame, thereby generating the emotional arousal curve of the scene; determining the number KN of key frames to be allocated to the scene according to the change of its emotional arousal; and finally taking the video frames corresponding to the KN highest crests of the emotional arousal curve as the scene's key frames. The method is simple, works from the perspective of the viewer's emotional fluctuation, and uses the emotional arousal degree to semantically direct key-frame extraction, so the extracted key frames are more representative and effective.
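The selection step can be sketched as follows: linearly fuse per-frame visual and auditory arousal scores, find local crests of the fused curve, and keep the frames at the KN highest crests. The equal 0.5/0.5 fusion weights and the simple neighbor-comparison crest test are illustrative assumptions.

```python
import numpy as np

def key_frames(visual, audio, kn, alpha=0.5):
    """Return indices of the kn frames at the highest crests of the fused curve."""
    curve = alpha * np.asarray(visual, float) + (1 - alpha) * np.asarray(audio, float)
    # Local crests: strictly above the previous frame, not below the next.
    crests = [i for i in range(1, len(curve) - 1)
              if curve[i - 1] < curve[i] >= curve[i + 1]]
    crests.sort(key=lambda i: curve[i], reverse=True)  # highest crests first
    return sorted(crests[:kn])                         # key-frame indices in order

visual = [0, 1, 0, 2, 0, 3, 0]
audio = [0] * 7
print(key_frames(visual, audio, kn=2))   # -> [3, 5]
```

The per-scene KN would come from the preceding allocation step; here it is simply passed in.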
Owner:FUZHOU UNIV

Video watermark removing method, video data publishing method and related devices

The embodiment of the invention discloses a video watermark removing method, a video data publishing method and related devices. The watermark removing method comprises the following steps: extracting multiple frames of image data from the video data; detecting watermarks in the image data to obtain a plurality of initial watermark positions; determining, from the plurality of initial watermark positions, the positions belonging to the same watermark as a target watermark position; and removing the watermark at the target watermark position. Because the target watermark position is determined from the initial watermark positions and can cover the watermark's initial position in each frame of image data, jitter of the detected positions is avoided. This solves the problem that watermarks in the video data cannot be completely removed when a detected initial position deviates from the real position; after the watermark in each frame of image data is removed based on the target watermark position, clean video data is obtained and the user's viewing experience is improved.
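The position-stabilization idea can be sketched as follows: per-frame detections jitter, so boxes that overlap across frames are treated as the same watermark and replaced by their coordinate-wise median, giving one stable target position. The (x, y, w, h) box format, the IoU grouping rule, and the 0.5 overlap threshold are assumptions for illustration.

```python
from statistics import median

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / (aw * ah + bw * bh - inter)

def stable_position(detections, min_iou=0.5):
    """Median box over per-frame detections that overlap the first detection."""
    anchor = detections[0]
    same = [d for d in detections if iou(anchor, d) >= min_iou]  # same watermark
    return tuple(median(c) for c in zip(*same))  # coordinate-wise median

frames = [(10, 10, 50, 20), (11, 10, 50, 20), (10, 11, 50, 20), (300, 300, 5, 5)]
print(stable_position(frames))   # jitter and the outlier are suppressed
```

Removal (e.g. inpainting) would then be applied to this one target position in every frame rather than to each jittery per-frame detection.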
Owner:BIGO TECH PTE LTD

Multi-cloud video distribution strategy optimization method and system

The invention provides a multi-cloud video distribution strategy optimization method and system. The method comprises the following steps: acquiring, for a historical time period, the live video stream data, the CDN delay data of each cloud platform server, and the CDN distribution price and transcoding price of each transcoding template; cleaning the live video stream data, counting in a first time period of the historical period the total number of first audience terminals, the minimum-demand code-rate proportion and the proportion of live-broadcast users at each source code rate, and classifying users into categories according to the source video code rate and the audience terminals' minimum-demand code rate; predicting the total number of second audience terminals watching videos in a future predetermined time period, calculating the number of audience terminals of each user category in that period, and, for each transcoding template, calculating each user category's consumption cost and QoE value in that period; and determining the cloud platform server suited to each category of users according to the consumption costs and QoE values, and computing the proportion of the total users covered by each cloud platform server in the future predetermined time period.
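The final selection step might look like the sketch below: for one user category, compute each platform's cost (CDN delivery plus transcoding) and pick the platform with the best QoE-per-cost ratio. The pricing fields, traffic model, and the ratio criterion are all hypothetical; the abstract does not state the actual objective function.

```python
def best_platform(platforms, viewers, gb_per_viewer):
    """Pick the platform maximizing QoE per unit cost for one user category."""
    def cost(p):
        # Delivery cost scales with predicted viewers; transcoding is flat here.
        return viewers * gb_per_viewer * p["cdn_price"] + p["transcode_price"]
    return max(platforms, key=lambda p: p["qoe"] / cost(p))["name"]

platforms = [
    {"name": "A", "cdn_price": 0.02, "transcode_price": 10, "qoe": 0.90},
    {"name": "B", "cdn_price": 0.05, "transcode_price": 5,  "qoe": 0.95},
]
print(best_platform(platforms, viewers=1000, gb_per_viewer=1.0))  # -> A
```

Repeating this per user category, and tallying which platform wins each category's predicted terminals, yields the coverage proportions the abstract mentions.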
Owner:BEIJING UNIV OF POSTS & TELECOMM

Method and device for controlling video content in real time according to physiological signals

The invention discloses a method and device for controlling video content in real time according to physiological signals, for use in VR equipment. The method comprises the steps of: having a photographing device shoot a plurality of original videos according to an instruction; acquiring user information, and setting and storing physiological feature thresholds related to the user's points of interest; identifying and playing the original video that best matches the user; and acquiring the user's physiological signals in real time, comparing them with the physiological feature thresholds, and changing the playing state of the original video in real time according to the result. With the method and device, a set of optimal original VR videos is first selected according to personal preferences and physical condition; during watching, the playing state can be adjusted at any time according to the user's physiological needs, including real-time cut-in or cut-back of original VR video clips; the user can enjoy the immersive experience of watching VR video in multiple directions, views and seasons; interest and satisfaction are greatly improved while the user's health is protected; and the method and device are user-friendly.
Owner:未来新视界科技(北京)有限公司

Video content evaluation method and video content evaluation system

The invention provides a video content evaluation method and a video content evaluation system. The method comprises the steps of: in a first test stage, receiving the eye movement signals collected by eye movement equipment while each subject watches the video sequence to obtain first eye movement data, and receiving the electroencephalogram signals collected by electroencephalogram equipment to obtain first electroencephalogram data; calculating first eye movement index data from the first eye movement data; calculating, from the first electroencephalogram data, a first emotion index value for each target video in the first test stage; in a second test stage, receiving the electroencephalogram signals acquired by the electroencephalogram equipment to obtain second electroencephalogram data; calculating second emotion index data from the second electroencephalogram data, and calculating a second emotion index value from it; and calculating a comprehensive evaluation value of the target video from the first eye movement index value, the first emotion index value and the second emotion index value, and obtaining a first evaluation result from the comprehensive evaluation value. Through these steps, the obtained evaluation result is objective, direct and accurate.
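The final combination step presumably reduces to a weighted aggregation of the three indices, for example as below. The equal weights are an illustrative assumption; the abstract does not publish the weighting scheme.

```python
def comprehensive_score(eye_index, emotion1, emotion2, weights=(1/3, 1/3, 1/3)):
    """Weighted sum of the eye-movement index and the two emotion index values."""
    w1, w2, w3 = weights
    return w1 * eye_index + w2 * emotion1 + w3 * emotion2

print(comprehensive_score(0.6, 0.9, 0.3))   # -> 0.6 with equal weights
```

The evaluation result would then be read off this value, e.g. by thresholding it into rating bands.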
Owner:北京意图科技有限公司

Child interest mining and enhancing system based on big data analysis

The invention discloses a child interest mining and enhancing system based on big data analysis, belonging to the field of big data analysis. The system comprises the interest mining and enhancing system equipment, the internet and a display module, wherein the equipment is composed of a video playing recording module, an interactive operation recording module, an analysis and calculation module and an information pushing module; the display module displays the operation interfaces, plays videos and supports interactive operations; and the video playing recording module records the total video watching time and the playing time of each kind of video while a child uses the equipment. The system can discover a child's interests and hobbies and enhance and cultivate them in a targeted way, so that education and learning can begin as early as possible, the child's learning talent is brought into full play, and blind cultivation that provokes resentment is avoided, teaching the child according to their aptitude.
Owner:苏州荣学网络科技有限公司