
183 results about "Video browsing" patented technology

Video browsing, also known as exploratory video search, is the interactive process of skimming through video content in order to satisfy some information need or to interactively check whether the video content is relevant. While originally proposed to help users inspect a single video through visual thumbnails, modern video browsing tools enable users to quickly find desired information in a video archive by iterative human–computer interaction following an exploratory search approach. Many of these tools presume an expert user who wants both features for interactively inspecting video content and automatic content filtering features. For that purpose, several video interaction features are usually provided, such as sophisticated navigation in video or search by a content-based query. Video browsing tools often build on lower-level video content analysis, such as shot transition detection, keyframe extraction, and semantic concept detection, and they create a structured content overview of the video file or video archive. Furthermore, they usually provide sophisticated navigation features, such as advanced timelines, visual seeker bars, or a list of selected thumbnails, as well as means for content querying. Examples of content queries are shot filtering through visual concepts (e.g., only shots showing cars), through some specific characteristics (e.g., color or motion filtering), through user-provided sketches (e.g., a visually drawn sketch), or through content-based similarity search.
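The shot transition detection mentioned above can be as simple as thresholding a distance between color histograms of consecutive frames. The following is a minimal sketch assuming OpenCV; the function name, histogram bins, and threshold value are illustrative choices, not any particular tool's API.

```python
import cv2

def detect_shot_transitions(video_path: str, threshold: float = 0.5) -> list[int]:
    """Return frame indices where the color histogram changes sharply."""
    cap = cv2.VideoCapture(video_path)
    transitions, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance: 0 = identical, 1 = disjoint histograms.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                transitions.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return transitions
```

Tools can then extract one keyframe per detected segment and lay the keyframes out as the visual thumbnails or advanced timelines described above.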

Redundancy elimination in a content-adaptive video preview system

Status: Inactive · Publication: US20050200762A1 · Tags: reducing visual redundancy, television system details, drawing from basic elements, adaptive video, self-adaptive
A content-adaptive video preview system (100) allows a user to move through a video faster than existing video skimming techniques. The user can interactively adapt (S1) the browsing speed and/or the abstraction level of the presentation.
According to one embodiment of the invention, this adaptation procedure (S1) is realized by the following steps. First, differences between precalculated spatial color histograms associated with chronologically subsequent pairs of the video frames that the video file is composed of are calculated (S1a). Then, these differences and/or a cumulative difference value representing their sum are compared (S1b) to a predefined redundancy threshold (S(t)). If the differences in the color histograms of particular video frames (302a-c) and/or the cumulative difference value exceed this redundancy threshold (S(t)), those video frames are selected (S1c) for the preview. Intermediate video frames (304a-d) are removed and/or inserted (S1d) between each pair of selected chronologically subsequent video frames, depending on the selected abstraction level of presentation. The redundancy threshold value (S(t)) can then be adapted (S1b′) to change the browsing speed and/or the abstraction level of presentation.
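As a rough illustration of steps S1a-S1d, here is a minimal sketch of the selection loop, assuming precalculated histograms and an L1 difference; the patent does not fix a particular histogram metric, so the helper below is only an approximation of the described procedure.

```python
import numpy as np

def select_preview_frames(histograms: list[np.ndarray],
                          redundancy_threshold: float) -> list[int]:
    """histograms[i] is the precalculated spatial color histogram of frame i."""
    selected = [0]                 # start the preview at the first frame
    cumulative = 0.0
    for i in range(1, len(histograms)):
        # Step S1a: difference between chronologically subsequent frames.
        diff = float(np.abs(histograms[i] - histograms[i - 1]).sum())
        cumulative += diff
        # Steps S1b/S1c: compare against the threshold; select on exceedance.
        if cumulative > redundancy_threshold:
            selected.append(i)
            cumulative = 0.0       # redundant intermediate frames are skipped
    return selected

# Lowering redundancy_threshold (step S1b') slows browsing down and keeps
# more frames; raising it speeds browsing up.
```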
Owner:SONY DEUT GMBH

System and method for fast on-line learning of transformed hidden Markov models

A fast variational on-line learning technique for training a transformed hidden Markov model. A simplified general model and an associated estimation algorithm are provided for modeling visual data such as a video sequence. Specifically, once the model has been initialized, an expectation-maximization ("EM") algorithm is used to learn the one or more object class models, so that the video sequence has high marginal probability under the model. In the expectation step (the "E-Step"), the model parameters are assumed to be correct, and for an input image, probabilistic inference is used to fill in the values of the unobserved or hidden variables, e.g., the object class and appearance. In one embodiment of the invention, a Viterbi algorithm and a latent image are employed for this purpose. In the maximization step (the "M-Step"), the model parameters are adjusted using the values of the unobserved variables calculated in the previous E-step. Instead of the batch processing typically used in EM, the system and method according to the invention employ an on-line algorithm that passes through the data only once and introduces new classes as new data is observed. By parameter estimation and inference in the model, visual data is segmented into components, which facilitates sophisticated applications in video or image editing, such as object removal or insertion, tracking and visual surveillance, video browsing, photo organization, video compositing, and metadata creation.
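To illustrate the one-pass idea, the sketch below runs an on-line EM update over a toy one-dimensional Gaussian mixture rather than the patent's transformed hidden Markov model; the learning rate is an assumed tuning parameter, and the class-introduction step is omitted.

```python
import numpy as np

def online_em(data, n_classes=3, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=n_classes)           # class means
    var = np.ones(n_classes)                  # class variances
    pi = np.full(n_classes, 1.0 / n_classes)  # mixing weights
    for x in data:                            # single pass through the data
        # E-step: infer the hidden class posterior for this observation.
        log_p = -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        r = np.exp(log_p - log_p.max())
        r /= r.sum()
        # M-step: nudge parameters toward this point, weighted by r.
        pi = (1 - lr) * pi + lr * r
        mu += lr * r * (x - mu)
        var += lr * r * ((x - mu) ** 2 - var)
    return pi, mu, var
```

Each observation is visited exactly once, in contrast to batch EM, which sweeps the whole dataset on every iteration.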
Owner:MICROSOFT TECH LICENSING LLC

Online video concentration device, system and method

The invention discloses an online video concentration device, system and method. The method processes each currently acquired frame in real time and comprises the following steps: a segmentation step, in which the background image and the foreground image of each frame are segmented; an extraction step, in which motion objects are extracted from each foreground image; a motion-object-sequence extraction step, in which the motion objects extracted from the foreground images of multiple frames are accumulated to form a motion object sequence; a main-background-sequence extraction step, in which n specific frames of background images are extracted from the background images of the multiple frames as the main background sequence; and a splicing step, in which the main background sequence is spliced with the motion object sequence. With this online video concentration approach, the length of the concentrated video is shortened while the information about the motion objects in the video is retained as far as possible. Fast and convenient video browsing and previewing are realized with a better visual effect, and the hardware requirements and algorithmic complexity are reduced.
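A minimal sketch of the segmentation and extraction steps might look as follows, using OpenCV's MOG2 background subtractor as a stand-in for the patent's unspecified segmentation method; the minimum-area filter is an assumption.

```python
import cv2

def extract_moving_objects(video_path: str, min_area: int = 500):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    object_sequence = []           # accumulated per-frame moving objects
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Segmentation step: split the frame into background and foreground.
        mask = subtractor.apply(frame)
        background = subtractor.getBackgroundImage()
        # Extraction step: connected foreground regions become motion objects.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        object_sequence.append((frame, background, boxes))
    cap.release()
    return object_sequence
```

The accumulated boxes form the motion object sequence, and a subset of the recovered background images can serve as the main background sequence for splicing.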
Owner:北京中科奥森数据科技有限公司

Video abstraction generation method based on space-time recombination of active events

The invention provides a video abstraction generation method based on space-time recombination of active events. The original video is preprocessed and blank frames are removed, and the preprocessed video is subjected to the following structured analysis: taking the moving targets in the original video as objects, the videos of all key moving-target events are extracted, the temporal correlation between the moving-target events is weakened, and the moving-target events are recombined in time order under the constraint that their activity ranges do not conflict. Meanwhile, background images are extracted with reference to the user's visual perception, and a time-lapse dynamic background video is generated. Finally, the moving-target events and the time-lapse dynamic background video are stitched together seamlessly to form a video abstract that is short in duration, concise in content and comprehensive in information, in which multiple moving targets can appear simultaneously. The method can generate a video abstract for video browsing or searching efficiently and rapidly, and the abstract expresses the semantic information of the video more reasonably and better matches the user's visual perception.
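The time-sequence recombination under the non-conflicting-activity-range constraint can be illustrated with a greedy scheduler: each event is assigned the earliest start time at which its activity rectangle does not overlap, in both space and time, with already-placed events. The event representation below is an assumption, not the patent's formulation.

```python
def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def recombine_events(events):
    """events: list of (duration, activity_rect); returns new start offsets."""
    placed = []                            # (start, duration, rect)
    offsets = []
    for duration, rect in events:
        start = 0
        # Advance until this event conflicts with no placed event that
        # overlaps it in both time and activity range.
        while any(start < s + d and s < start + duration and
                  rects_overlap(rect, r) for s, d, r in placed):
            start += 1                     # try the next time slot
        placed.append((start, duration, rect))
        offsets.append(start)
    return offsets
```

Events with disjoint activity ranges collapse onto the same time slots, which is what lets multiple moving targets appear simultaneously in the abstract.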
Owner:BEIJING UNIV OF POSTS & TELECOMM

Event-oriented intelligent camera monitoring method

Status: Active · Publication: CN104284158A · Tags: save browsing time, improve the efficiency of investigation and evidence collection, image analysis, closed-circuit television systems, computer graphics (images), engineering
The invention provides an event-oriented intelligent camera monitoring method. After video capture, the footage is processed into three code streams: a low-resolution video, high-definition still images spaced several seconds apart, and high-definition moving-object images; the three code streams are displayed on a display terminal independently or after being overlaid and fused. The method can provide a video much shorter than the original, greatly shortening browsing time. Event clues can be searched quickly by information such as time, improving the efficiency of investigation and evidence collection. Spatial information in the scene is fully utilized and space-time redundancy in the video is reduced, so events that happen at different periods of time are displayed at the same time and the activities in the video are easier to understand and grasp. No activities or events in the original video are lost, achieving fast playback without loss of video information. Finally, both code-stream bandwidth and the definition of special areas are taken into account, providing strong support for evidence collection after events and meeting wide application requirements in the video security and surveillance field.
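A minimal sketch of producing the three code streams might look as follows; the scale factor, snapshot interval, and the moving-object detector callback are all illustrative assumptions rather than the patent's actual processing pipeline.

```python
import cv2

def split_streams(video_path, detect_objects, snapshot_every=5.0, scale=0.25):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    low_res, snapshots, object_crops = [], [], []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Stream 1: low-resolution video for cheap continuous browsing.
        low_res.append(cv2.resize(frame, None, fx=scale, fy=scale))
        # Stream 2: a high-definition still image every few seconds.
        if idx % int(fps * snapshot_every) == 0:
            snapshots.append(frame.copy())
        # Stream 3: high-definition crops around detected moving objects.
        for (x, y, w, h) in detect_objects(frame):
            object_crops.append(frame[y:y + h, x:x + w].copy())
        idx += 1
    cap.release()
    return low_res, snapshots, object_crops
```

Keeping full resolution only for the snapshots and object crops is what lets the method balance code-stream bandwidth against the definition of special areas.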
Owner:NANJING XINBIDA INTELLIGENT TECH

Method for sharing video of mobile phone and system

The invention discloses a method and system for sharing a mobile phone's video. The method comprises the following steps: the source mobile phone terminal acquires audio coding data and video coding data in real time and transmits them to a video server in real time after marking them with timestamps; the server generates one or more VOD (Video on Demand) video files and one live video file from the audio and video coding data and transmits the video attribute data of the VOD file(s) and/or the live video file to the source mobile phone terminal or to communication agency equipment; the source mobile phone terminal transmits the video attribute data to a target mobile phone terminal, or, when the target mobile phone subscribes to the video-sharing service of the source mobile phone terminal, the communication agency equipment pushes a message containing the video attribute data to the target mobile phone terminal; and the target mobile phone terminal parses the message containing the video attribute data and calls up a video browser to browse the video. In this way, a data channel for sharing video in real time is established between two mobile phones over a mobile communication network through connectionless interaction.
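As a sketch of what the pushed "video attribute data" message might carry, the snippet below serializes a hypothetical payload; every field name and the JSON wire format are assumptions, since the abstract does not specify one.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VideoAttributeData:
    session_id: str
    live_url: str            # path of the live video file
    vod_urls: list[str]      # one or more paths of VOD video files
    start_timestamp_ms: int  # timestamp applied by the source phone
    codec: str = "h264"

def push_message(attrs: VideoAttributeData) -> str:
    """Serialize the attribute data for delivery to the target phone."""
    return json.dumps(asdict(attrs))

msg = push_message(VideoAttributeData(
    session_id="s-42",
    live_url="rtsp://server/live/s-42",
    vod_urls=["http://server/vod/s-42-0.mp4"],
    start_timestamp_ms=1700000000000,
))
# The target phone parses this message and opens its video browser on
# live_url or one of vod_urls.
```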
Owner:北京沃安科技有限公司