776 results about "Video data browsing/visualisation" patented technology

Live-streaming room recommendation method and system based on streamer style

The invention discloses a method and system, in the field of network technology, for recommending live-streaming rooms based on streamer style. The method comprises the following steps: collecting feature parameters and user data for live-streaming rooms from a server over a set time period; building a feature vector for each room from its feature parameters; selecting pairs of rooms whose streamers have different personal information, computing the similarity between their feature vectors, and identifying rooms that are similar to one another; recommending other similar rooms to the users of a given room according to its user data; computing an evaluation index for the feature vector from the visit rate and/or return-visit rate of the recommended rooms; using that index to screen the feature-vector components; and using the screened feature vector to determine similar rooms. The method and system can accurately recommend rooms of a similar style to users, improving both recommendation efficiency and the user experience.
Owner:WUHAN DOUYU NETWORK TECH CO LTD
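The similarity step above can be sketched as follows. This is a minimal illustration, not the patented method: the abstract does not fix a similarity metric or feature set, so cosine similarity, the example features, and the threshold are all assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors for two live-streaming rooms
# (e.g. average viewers, chat rate, stream length, category weight).
room_a = [120.0, 3.5, 2.0, 0.8]
room_b = [100.0, 3.0, 2.5, 0.7]

SIMILARITY_THRESHOLD = 0.95  # assumed cutoff for declaring rooms "similar"
similar = cosine_similarity(room_a, room_b) >= SIMILARITY_THRESHOLD
```

The evaluation-index step would then drop feature-vector components whose removal does not hurt visit or return-visit rates, shrinking the vectors compared here.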

Video display method

A method for video playback uses only resources universally supported by browsers ("inline playback") on virtually all handheld media devices. In one case, the method first prepares a video sequence for display by a browser by (a) dividing the video sequence into a silent video stream and an audio stream; (b) extracting from the silent video stream a number of still images, the number corresponding to a desired output frame rate, a desired output resolution, or both; and (c) combining the still images into a composite image. In one embodiment, the composite image has a number of rows, each row formed from the still images created from a fixed duration of the silent video stream. A second method plays the still images of the composite image back as a video sequence by (a) loading the composite image for display through a viewport the size of one still image; (b) selecting one of the still images of the composite image; (c) positioning the viewport to display the selected still image; and (d) setting a timer for a period derived from the frame rate, such that when the timer expires the next still image is selected and step (c) is repeated, until all still images of the composite image have been displayed.
Owner:SILVERPUSH PTE LTD
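The playback method above amounts to sprite-sheet animation: the composite image is a grid of frames, and the viewport is moved by whole-frame offsets on a timer. A sketch of the offset arithmetic — the frame size, grid width, and frame rate are assumed values, not taken from the patent:

```python
def viewport_offset(frame_index, frame_w, frame_h, cols):
    """Pixel offset of frame `frame_index` within a composite image
    laid out as a grid with `cols` frames per row."""
    row, col = divmod(frame_index, cols)
    return col * frame_w, row * frame_h

# Assumed layout: 160x90 frames, 10 per row, played at 10 fps.
FRAME_W, FRAME_H, COLS, FPS = 160, 90, 10, 10
TIMER_PERIOD_MS = 1000 // FPS  # timer period used in step (d)

# Offsets the viewport would step through for a 25-frame sequence.
offsets = [viewport_offset(i, FRAME_W, FRAME_H, COLS) for i in range(25)]
```

In a browser this offset would typically drive a CSS `background-position` or a scroll position, which is why only universally supported resources are needed.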

Content-aware geographic video multilayer correlation method

The invention relates to a content-aware multilayer correlation method for geographic video. The method comprises the following steps: a) unifying the structural features of multi-source geographic video; b) analyzing the common features of spatio-temporal variation, with a track object as the carrier, and establishing an associated-element view that combines content semantics and geographic semantics under a unified base reference; c) extracting a geographic-video data set containing spatio-temporal variation features, and establishing a rule-based function mapping from data to associated elements; and d) distinguishing the relevance of geographic-video data instances by rule, calculating association distances, clustering the geographic data layer by layer according to the hierarchy of the association, and ranking the data objects in each set by association distance. The method supports global association of geographic video content that is similar and geographically related under a unified base reference, enhancing the cognitive computing ability and information-expression efficiency for multi-scale, complex behavioral events in discontinuous or cross-region monitoring scenes spanning multiple geographic videos in a surveillance network.
Owner:WUHAN UNIV
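Step d) above — layer-by-layer grouping and within-layer ranking by association distance — can be sketched generically. The distance function and layer thresholds here are illustrative assumptions; the patent defines its own rule-based distances over content and geographic semantics.

```python
def layer_by_distance(items, ref, distance, thresholds):
    """Assign each item to the first (tightest) layer whose threshold its
    association distance to `ref` does not exceed; items beyond all
    thresholds go to an overflow layer. Within each layer, items are
    ranked by ascending distance, modeling hierarchical association."""
    layers = [[] for _ in thresholds] + [[]]  # one extra overflow layer
    for item in items:
        d = distance(item, ref)
        for k, t in enumerate(thresholds):
            if d <= t:
                layers[k].append((d, item))
                break
        else:
            layers[-1].append((d, item))
    return [[item for _, item in sorted(layer)] for layer in layers]
```

For example, with a simple numeric distance, items at distances 1, 3, 5, and 9 from the reference and thresholds (2, 6) fall into three layers of increasing association distance.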

Adaptive intelligent generation method of picture-text video thumbnails based on query terms

The invention discloses an adaptive, intelligent method for generating picture-and-text video thumbnails from query terms. The method comprises the following steps: acquiring a target video and extracting its audio and video information; structuring the audio and video information to obtain structured video data and structured audio data; selecting from the structured video data the key frames matching the semantics of the query keywords, i.e. the visual elements; extracting from the structured audio data the text elements related to the query-keyword semantics; dynamically composing the visual elements and text elements to obtain picture-and-text video thumbnails; and, for each thumbnail, extracting its semantic text and applying a global color-matching check to obtain the target video thumbnails relevant to the query-keyword semantics. Embodiments of the invention can generate video thumbnails automatically from query keywords, saving human effort and producing more targeted results than existing automatic thumbnail-generation techniques.
Owner:SUN YAT SEN UNIV
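The key-frame selection step can be sketched as a keyword-overlap ranking. This is a stand-in, not the patented technique: the frame annotations, the overlap score, and `top_k` are all assumptions, and a real system would use semantic matching rather than exact tag intersection.

```python
def select_key_frames(frames, query_keywords, top_k=3):
    """Score each candidate frame by the overlap between its semantic
    tags and the query keywords; keep the top_k matching frames
    (the 'visual elements' of the thumbnail)."""
    query = set(query_keywords)
    def score(frame):
        return len(set(frame["tags"]) & query)
    ranked = sorted(frames, key=score, reverse=True)
    return [f for f in ranked[:top_k] if score(f) > 0]

# Hypothetical structured video data: key-frame candidates with tags.
frames = [
    {"time": 1.0, "tags": ["dog", "park"]},
    {"time": 2.5, "tags": ["cat"]},
    {"time": 4.0, "tags": ["dog"]},
]
visual_elements = select_key_frames(frames, ["dog", "park"], top_k=2)
```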

Video recommendation method and system

The invention provides a video recommendation method and system. The method comprises the following steps: obtaining a user's features together with long-video and short-video features; inputting the user features and the long- and short-video features into a preset recommendation model for click-through-rate prediction, obtaining a predicted click-through rate for each candidate long video and each candidate short video; sorting all candidate long videos and short videos according to their predicted click-through rates; and feeding the sorted result back to the user. In this scheme, the user features and the combined long- and short-video features are input into a pre-trained recommendation model to obtain predicted click-through rates for the candidate long and short videos; the candidates are then sorted by predicted click-through rate and the ranking is returned to the user. This realizes recommendation across different types of content and improves both the user experience and the accuracy of the recommendations.
Owner:北京搜狐新动力信息技术有限公司
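The sorting step above is a joint ranking of both video types by predicted click-through rate. A minimal sketch, with a lookup table standing in for the pre-trained model (a real model would score user/video feature pairs; the names and CTR values here are invented):

```python
def recommend(candidates, predict_ctr, user_features):
    """Rank candidate long and short videos together by predicted
    click-through rate, highest first."""
    scored = [(predict_ctr(user_features, video), video) for video in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [video for _, video in scored]

# Hypothetical predicted CTRs standing in for the recommendation model.
fake_ctr = {"long_1": 0.12, "short_1": 0.30, "long_2": 0.05}
ranking = recommend(list(fake_ctr), lambda user, v: fake_ctr[v], user_features={})
```

Because long and short candidates share one score scale, a single sort interleaves the two content types rather than ranking them separately.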

Target video clip extraction method and device

To address the weak correlation between extracted target video clips and user interest, the invention provides a target-video-clip extraction method comprising the following steps: obtaining target scene-classification information and target-person information; recognizing scenes to obtain a scene-classification highlight score for each video clip; recognizing persons to obtain a target-person-recognition highlight score for each video clip; generating a user-option highlight score from the scene-classification and target-person-recognition highlight scores; obtaining an image highlight score for each video clip from inter-frame image differences; obtaining an audio highlight score for each video clip from the short-time energy of the audio frames; combining the image and audio highlight scores into a content highlight score; obtaining a diversity score between video clips from the distance between them; and, according to the optimization objective f(X) = Σ_{i∈X} UScore(i)·w1 + Σ_{i∈X} CScore(i)·w2 + Σ_{i,j∈X} DScore(i,j)·w3, selecting a preset number Nsel of video clips to form the target clip set X such that the value of f(X) is maximized.
Owner:WUXI YSTEN TECH
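The abstract leaves the maximization of f(X) unspecified; a common heuristic for this kind of weighted set objective is greedy selection by marginal gain. The sketch below assumes that heuristic, and the score functions, weights, and clip identifiers are placeholders:

```python
def extract_clips(clips, uscore, cscore, dscore, w1, w2, w3, n_sel):
    """Greedily build the target clip set X by repeatedly adding the clip
    with the largest marginal gain in
    f(X) = w1*sum(UScore) + w2*sum(CScore) + w3*sum(pairwise DScore)."""
    selected = []
    remaining = list(clips)
    while remaining and len(selected) < n_sel:
        def gain(c):
            return (w1 * uscore(c) + w2 * cscore(c)
                    + w3 * sum(dscore(c, s) for s in selected))
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Greedy selection is not guaranteed to find the global maximum of f(X), but the diversity term DScore naturally penalizes near-duplicate clips as the set grows, matching the stated goal of a varied highlight set.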