2745 results about "Key frame" patented technology

A keyframe in animation and filmmaking is a drawing that defines the starting and ending points of any smooth transition. The drawings are called "frames" because their position in time is measured in frames on a strip of film. A sequence of keyframes defines which movement the viewer will see, whereas the position of the keyframes on the film, video, or animation defines the timing of the movement. Because only two or three keyframes over the span of a second do not create the illusion of movement, the remaining frames are filled with inbetweens.
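The filling-in of in-betweens described above can be sketched as simple linear interpolation between two keyframe values (real in-betweening uses easing curves and artist judgment; this is only the linear case):

```python
def inbetween(key_a, key_b, num_inbetweens):
    """Linearly interpolate a scalar value (e.g. an x position)
    between two keyframes, producing the in-between frames."""
    frames = []
    for i in range(1, num_inbetweens + 1):
        t = i / (num_inbetweens + 1)  # fraction of the way from key_a to key_b
        frames.append(key_a + (key_b - key_a) * t)
    return frames

# One second at 24 fps with keyframes at x=0 and x=100:
# the two keyframes plus 22 in-betweens fill all 24 frames.
print(inbetween(0.0, 100.0, 22)[:3])
```

With a single in-between, the result is simply the midpoint: `inbetween(0.0, 10.0, 1)` yields `[5.0]`.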

Content-based matching of videos using local spatio-temporal fingerprints

A computer-implemented method for deriving a fingerprint from video data is disclosed, comprising the steps of: receiving a plurality of frames from the video data; selecting at least one key frame from the plurality of frames, the at least one key frame being selected from two consecutive frames of the plurality of frames that exhibit a maximal cumulative difference in at least one spatial feature of the two consecutive frames; detecting at least one 3D spatio-temporal feature within the at least one key frame; and encoding a spatio-temporal fingerprint based on the mean luminance of the at least one 3D spatio-temporal feature. The at least one spatial feature can be intensity. The at least one 3D spatio-temporal feature can be at least one Maximally Stable Volume (MSV). Also disclosed is a method for matching video data against a database containing a plurality of video fingerprints of the type described above, comprising the steps of: calculating at least one fingerprint representing at least one query frame from the video data; indexing into the database using the at least one calculated fingerprint to find a set of candidate fingerprints; applying a score to each of the candidate fingerprints; selecting a subset of candidate fingerprints as proposed frames by rank-ordering the candidate fingerprints; and attempting to match at least one fingerprint of at least one proposed frame based on a comparison of gradient-based descriptors associated with the at least one query frame and the at least one proposed frame.
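The key-frame selection criterion above (maximal cumulative difference in a spatial feature between consecutive frames) can be sketched as follows, using per-pixel intensity as the spatial feature; this is an illustrative reading of the criterion, not the patented method itself:

```python
import numpy as np

def select_key_frame(frames):
    """Return the index of the later frame of the consecutive pair
    whose cumulative absolute intensity difference is maximal."""
    best_idx, best_diff = 1, -1.0
    for i in range(1, len(frames)):
        # cumulative absolute intensity difference over all pixels
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).sum()
        if diff > best_diff:
            best_idx, best_diff = i, diff
    return best_idx

# Synthetic example: the third frame introduces a large intensity jump,
# so it is chosen as the key frame.
frames = [np.zeros((4, 4)), np.zeros((4, 4)), np.full((4, 4), 9.0)]
print(select_key_frame(frames))  # → 2
```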
Owner:SRI INTERNATIONAL

PTAM improvement method based on ground characteristics of intelligent robot

The invention discloses a PTAM improvement method based on ground characteristics of an intelligent robot. The method comprises the following steps. First, parameter correction is completed, including parameter definition and camera calibration. Second, current environment texture information is obtained by means of a camera, a four-layer Gaussian image pyramid is constructed, feature information in the current image is extracted with the FAST corner detection algorithm, data association between corner features is established, and a pose estimation model is obtained. The camera is mounted on the mobile robot and two key frames are obtained at the initial map-building stage; the mobile robot begins to move during initialization, and corner information in the current scene is captured by the camera while associations are established. After the three-dimensional sparse map is initialized, the key frames are updated, a sub-pixel-precision mapping relation between feature points is established by means of epipolar line search and block matching, and accurate re-localization of the camera is achieved based on the pose estimation model. Finally, the matched points are projected into space, so that a three-dimensional map of the current overall environment is established.
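The four-layer image pyramid built before FAST corner detection can be sketched in a few lines; this version uses a 2x2 box average as a stand-in for Gaussian blurring, which is only an approximation of the pyramid the method describes:

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Build a simple multi-level image pyramid by averaging each
    2x2 pixel block and downsampling, halving both dimensions per level."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
        img = img[:h, :w]
        down = (img[0::2, 0::2] + img[1::2, 0::2] +
                img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        pyramid.append(down)
    return pyramid

pyr = build_pyramid(np.ones((64, 48)))
print([p.shape for p in pyr])  # → [(64, 48), (32, 24), (16, 12), (8, 6)]
```

Corners are then detected at every level, which makes the matching robust to scale changes as the robot moves.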
Owner:BEIJING UNIV OF TECH

Method and system for streaming media manager

A computer-implemented or computer-enabled method and system is provided for working with streaming media, such as digital video clips and entire videos. Clips can be grouped together, and snippets of video can be re-ordered into a rough-cut assemblage of a video storyboard. Later, the video storyboard and the final video scene may be fine-tuned. The invention is not limited to digital video, and may also be used with other digital assets, including for example audio, animation, logos, and text. Accordingly, computer-enabled storyboarding of digital assets includes providing a storage having digital assets, the digital assets including at least one digital clip, and each digital clip having frames including a key frame corresponding to the digital clip. Digital clips are selected to be included in a storyboard. The storyboard is displayed, including an image for the key frame corresponding to each of the digital clips of the storyboard. Preferably, the image is a low-resolution image representing the key frame for the digital clip. The storyboard may be modified and saved, including adding parts of digital assets to the storyboard, deleting digital clips from the storyboard, and re-ordering the clips in the storyboard. The digital clips can be edited and adjusted, including adjusting the in and/or out time of each clip. The storyboard may be played, that is, each digital clip in the storyboard is played in sequence.
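The storyboard operations described above (add, delete, reorder, play in sequence) map naturally onto a small data structure; the class and field names below are hypothetical, chosen only to illustrate the described behavior:

```python
class Clip:
    """A digital clip with a key-frame thumbnail and in/out points."""
    def __init__(self, name, key_frame, in_time=0.0, out_time=None):
        self.name, self.key_frame = name, key_frame
        self.in_time, self.out_time = in_time, out_time

class Storyboard:
    """Ordered collection of clips supporting add, delete, and reorder."""
    def __init__(self):
        self.clips = []
    def add(self, clip):
        self.clips.append(clip)
    def delete(self, name):
        self.clips = [c for c in self.clips if c.name != name]
    def reorder(self, old_index, new_index):
        self.clips.insert(new_index, self.clips.pop(old_index))
    def play_order(self):
        # playing the storyboard means playing each clip in sequence
        return [c.name for c in self.clips]

sb = Storyboard()
for n in ("intro", "interview", "outro"):
    sb.add(Clip(n, key_frame=n + "_thumb.jpg"))
sb.reorder(2, 1)        # move "outro" before "interview"
print(sb.play_order())  # → ['intro', 'outro', 'interview']
```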
Owner:ARTESIA TECH

Method for segmenting and indexing scenes by combining captions and video image information

The invention relates to a method for segmenting and indexing scenes by combining captions and video image information. The method is characterized in that, within the duration of each caption, the corresponding video-frame collection is used as a minimum unit of the scene cluster. The method comprises the steps of: after obtaining the minimum unit of the scene cluster, extracting at least three discontinuous video frames to form a video key-frame collection for that caption; comparing the similarities of the key frames of adjacent minimum units by using a bidirectional SIFT key-point matching method, and establishing an initial attribution relationship between the captions and the scenes by combining a caption-related transition diagram; for continuous minimum cluster units judged to be dissimilar, further judging whether they can be merged from the relationship between the minimum cluster units and the corresponding captions; and, according to the determined attribution relationships between captions and scenes, extracting the video scenes. For the extracted video-scene segments, the forward and inverted indexes generated from the caption texts contained in the segments serve as the basis for indexing the video segments.
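The bidirectional key-point matching step can be sketched as mutual nearest-neighbour matching over descriptor arrays: a pair is kept only if each point is the other's closest match. Real SIFT descriptors would come from a feature extractor; the toy 2-D descriptors below only illustrate the symmetry check:

```python
import numpy as np

def bidirectional_matches(desc_a, desc_b):
    """Mutual nearest-neighbour matching: keep pair (i, j) only when
    j is i's closest descriptor in B AND i is j's closest in A."""
    # pairwise Euclidean distances between all descriptor pairs
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)  # best match in B for each row of A
    b_to_a = d.argmin(axis=0)  # best match in A for each column of B
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[5.1, 5.0], [0.1, 0.0]])
print(bidirectional_matches(a, b))  # → [(0, 1), (1, 0)]
```

Requiring the match to hold in both directions discards many ambiguous one-way matches, which is what makes the similarity judgment between adjacent units reliable.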
Owner:INST OF ACOUSTICS CHINESE ACAD OF SCI

Human skeleton behavior recognition method and device based on deep reinforcement learning

The invention discloses a human skeleton behavior recognition method and device based on deep reinforcement learning. The method comprises: uniform sampling is carried out on each video segment in a training set to obtain a video with a fixed frame number, thereby training a graph convolutional neural network; after the parameters of the graph convolutional neural network are fixed, a frame-extraction network is trained by using the graph convolutional neural network to obtain representative frames meeting a preset condition; the graph convolutional neural network is updated by using the representative frames meeting the preset condition; a target video is obtained and uniformly sampled, and the sampled frames are sent to the frame-extraction network to obtain key frames; and the key frames are sent to the updated graph convolutional neural network to obtain the final behavior type. Therefore, the discriminability of the selected frames is enhanced, redundant information is removed, the recognition performance is improved, and the computation at the test phase is reduced. Besides, by fully utilizing the topological relationship of the human skeleton, the performance of the behavior recognition is improved.
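The uniform-sampling step that precedes key-frame extraction can be sketched as picking the middle frame of each of `target` equal segments of the video (one common convention; the patent does not specify which variant it uses):

```python
def uniform_sample(num_frames, target):
    """Pick `target` frame indices spread uniformly across a video
    of `num_frames` frames."""
    if target >= num_frames:
        return list(range(num_frames))
    step = num_frames / target
    # take the middle frame of each of the `target` equal segments
    return [int(step * i + step / 2) for i in range(target)]

print(uniform_sample(100, 5))  # → [10, 30, 50, 70, 90]
```

The frame-extraction network then refines this uniform set down to the most discriminative key frames.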
Owner:TSINGHUA UNIV

Visual and inertial navigation fusion SLAM-based external parameter and time sequence calibration method on mobile platform

The invention discloses a visual and inertial navigation fusion SLAM-based external parameter and time sequence calibration method on a mobile platform. The method comprises an initialization stage, wherein the relative rotation parameters between two frames estimated by the camera and the IMU are aligned through a loosely-coupled method, and the relative rotation parameters between the camera and the IMU are estimated; a front-end stage, wherein the front end performs the function of a visual odometer, namely, the pose of the camera's current frame in the world coordinate system is estimated from the camera poses estimated in the preceding several frames, and this estimate serves as the initial value for back-end optimization; and a back-end stage, wherein key frames are selected from all the frames, the variables to be optimized are set, a unified objective function is established, and optimization is carried out subject to the corresponding constraint conditions, thereby obtaining accurate external parameters. With the method disclosed by the invention, the error of the estimated external parameters is relatively low, and the precision of the time-sequence calibration trajectory is high.
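Key-frame selection in a visual-inertial pipeline is commonly a motion-based heuristic: a frame is promoted to a key frame once the camera has translated or rotated enough since the last one. The rule and thresholds below are illustrative, not the patented criterion:

```python
import math

def is_new_key_frame(pose, last_kf_pose,
                     trans_thresh=0.3, rot_thresh=math.radians(15)):
    """Promote the current frame to a key frame when the camera has
    moved or rotated past a threshold since the last key frame.
    Poses are (x, y, yaw) tuples in a planar world frame."""
    dx, dy = pose[0] - last_kf_pose[0], pose[1] - last_kf_pose[1]
    dyaw = abs(pose[2] - last_kf_pose[2])
    return math.hypot(dx, dy) > trans_thresh or dyaw > rot_thresh

print(is_new_key_frame((0.5, 0.0, 0.0), (0.0, 0.0, 0.0)))  # → True
print(is_new_key_frame((0.1, 0.0, 0.0), (0.0, 0.0, 0.0)))  # → False
```

Restricting back-end optimization to such key frames keeps the objective function small enough to solve in real time while still covering the trajectory.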
Owner:SOUTHEAST UNIV