95 results about "Monocular video" patented technology

Multocular image pickup apparatus and multocular image pickup method

A multocular image pickup apparatus includes: a distance calculation unit that calculates information regarding the distance to a captured object from the output video of a first image pickup unit, which serves as the reference among a plurality of image pickup units, and from the output video of an image pickup unit different from the first; a multocular video synthesizing unit that generates synthesized video from the output video of the plurality of image pickup units, based on the distance information, for regions where the distance information could be calculated; and a monocular video synthesizing unit that generates synthesized video from the output video of the first image pickup unit for regions where the distance information could not be calculated. The distance calculation unit calculates first distance information, the distance to the captured object, from the output video of the first image pickup unit and the output video of a second image pickup unit different from the first. If there is a region where the first distance information could not be calculated, the distance calculation unit recalculates the distance to the captured object for that region from the output video of the first image pickup unit and the output video of an image pickup unit, among the plurality of image pickup units, that was not used for the original calculation.
Owner:SHARP KK
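
As a rough illustration of the per-region fallback described in this abstract, the sketch below computes disparity between a reference camera and a second camera, recalculates the failed regions with a third camera, and falls back to monocular output elsewhere. The StereoBM matcher, the camera ordering and the averaging fusion rule are assumptions of this sketch, not details taken from the patent.

```python
import cv2
import numpy as np

def synthesize_multocular(ref, cam1, cam2):
    """ref, cam1, cam2: rectified 8-bit grayscale frames; ref is treated as
    the reference image pickup unit."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

    # First distance information: reference unit vs. second unit.
    disp1 = matcher.compute(ref, cam1).astype(np.float32) / 16.0
    valid = disp1 > 0  # unmatched pixels come back negative

    # Recalculate the failed regions with a camera not used so far.
    disp2 = matcher.compute(ref, cam2).astype(np.float32) / 16.0
    refill = (~valid) & (disp2 > 0)

    disparity = np.where(valid, disp1, np.where(refill, disp2, 0.0))
    resolved = valid | refill

    # Multocular synthesis where distance is known: warp cam1 onto ref and
    # average; monocular synthesis (ref only) where it is not.
    h, w = ref.shape
    map_x = np.tile(np.arange(w, dtype=np.float32), (h, 1)) - disparity
    map_y = np.tile(np.arange(h, dtype=np.float32)[:, None], (1, w))
    cam1_aligned = cv2.remap(cam1, map_x, map_y, cv2.INTER_LINEAR)

    fused = 0.5 * (ref.astype(np.float32) + cam1_aligned.astype(np.float32))
    synthesized = np.where(resolved, fused, ref.astype(np.float32))
    return synthesized.astype(np.uint8), disparity
```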

Optical flow based vehicle motion state estimation method

The invention discloses an optical flow based vehicle motion state estimation method, applicable to estimating the motion of vehicles travelling at low speed on flat bituminous pavement in a road traffic environment. The method includes: mounting a high-precision, downward-looking monocular video camera at the center of the vehicle's rear axle and obtaining the camera parameters through a calibration algorithm; preprocessing the acquired image sequence with histogram equalization to highlight the corner features of the bituminous pavement and reduce the adverse effects of pavement conditions and lighting variation; detecting pavement corner features in real time with the efficient Harris corner detection algorithm; matching and tracking corners between consecutive frames with the Lucas-Kanade optical flow algorithm, then refining the matched corners with the RANSAC (random sample consensus) algorithm to obtain more accurate optical flow information; and finally reconstructing the vehicle's real-time motion parameters, such as longitudinal velocity, lateral velocity and sideslip angle, in the vehicle body coordinate system, thereby achieving high-precision estimation of the vehicle's ground motion state.
Owner:SOUTHEAST UNIV
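
A minimal sketch of that pipeline is given below, assuming a downward-looking camera, a known ground-plane scale in metres per pixel and a fixed frame period (both illustrative values, obtained in the patent via calibration). It uses OpenCV's Harris corner detector, pyramidal Lucas-Kanade flow and a RANSAC-fitted similarity transform; the mapping from image axes to vehicle axes is also an assumption.

```python
import cv2
import numpy as np

M_PER_PX = 0.002   # assumed road-plane scale (metres per pixel)
DT = 1.0 / 30.0    # assumed frame period (seconds)

def ego_motion(prev_gray, curr_gray):
    """Estimate longitudinal/lateral velocity, yaw rate and sideslip angle
    between two consecutive downward-looking frames."""
    prev_eq = cv2.equalizeHist(prev_gray)
    curr_eq = cv2.equalizeHist(curr_gray)

    # Harris corner features on the asphalt texture.
    pts = cv2.goodFeaturesToTrack(prev_eq, maxCorners=400, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True, k=0.04)
    if pts is None:
        return None

    # Lucas-Kanade pyramidal optical flow into the next frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_eq, curr_eq, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    if len(good_prev) < 10:
        return None

    # RANSAC-fitted similarity transform of the road texture motion.
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_next,
                                       method=cv2.RANSAC,
                                       ransacReprojThreshold=2.0)
    if M is None:
        return None

    dx, dy = M[0, 2], M[1, 2]            # image translation, pixels
    yaw = np.arctan2(M[1, 0], M[0, 0])   # image rotation, radians

    # The pavement moves opposite to the vehicle in the image (axis mapping
    # depends on how the camera is mounted).
    v_long = -dy * M_PER_PX / DT
    v_lat = -dx * M_PER_PX / DT
    beta = np.arctan2(v_lat, max(abs(v_long), 1e-6))  # sideslip angle
    return v_long, v_lat, yaw / DT, beta
```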

A Lane Departure Distance Measurement and Early Warning Method Based on Monocular Vision

The invention discloses a method for measuring lane departure distance and giving early warning based on monocular vision, belonging to the technical field of computer image processing. The method comprises the following steps: first, collecting a video image with a monocular video camera installed at the front of an automobile, detecting the lane lines through image processing, and extracting their geometric information; then obtaining the perpendicular distances between the automobile and the lane lines on the left and right sides using the three-dimensional geometric transformation of the pinhole imaging model; and finally establishing a departure early-warning decision method from the distances measured in real time, providing effective information for intelligent driver-assistance technology. In the disclosed method, a constraint condition is added when the lane lines are detected with the Hough transform, excluding part of the spurious lane lines and increasing both the computation speed and the lane-line detection accuracy. Lane departure warning is realized using image information alone; the vehicle's departure angle has little effect on the measured lane departure distance; the three-dimensional geometric transformation makes the solution fast to compute; and the method meets the requirements of intelligent driver-assistance technology.
Owner:HOUPU CLEAN ENERGY (GROUP) CO LTD
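
The sketch below illustrates only the measurement step under stated assumptions: lane lines are found with a Canny-plus-Hough stage constrained to exclude near-horizontal segments, and a precomputed image-to-road homography (a stand-in for the pinhole-geometry transformation described above, assumed to come from calibration) maps the line endpoints to road-plane metres.

```python
import cv2
import numpy as np

def lane_departure(frame_gray, H_img_to_road):
    """Return (distance to left line, distance to right line) in metres.
    H_img_to_road: assumed 3x3 homography mapping pixels to road-plane
    coordinates with the origin under the camera and x pointing right."""
    edges = cv2.Canny(frame_gray, 80, 160)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=60, maxLineGap=20)
    if lines is None:
        return None

    left, right = [], []
    h, w = frame_gray.shape
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        # Constraint: reject nearly horizontal segments that cannot be
        # lane lines.
        if abs(angle) < 20 or abs(angle) > 160:
            continue
        # Take the segment end closest to the vehicle (bottom of the image).
        x, y = (x1, y1) if y1 > y2 else (x2, y2)
        pt = cv2.perspectiveTransform(np.float32([[[x, y]]]), H_img_to_road)[0, 0]
        (left if x < w / 2 else right).append(pt[0])

    d_left = -max(left) if left else None    # metres to the nearest left line
    d_right = min(right) if right else None  # metres to the nearest right line
    return d_left, d_right
```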

Monocular video based real-time posture estimation and distance measurement method for three-dimensional rigid body object

The invention discloses a monocular video based real-time posture estimation and distance measurement method for a three-dimensional rigid body object, comprising the following steps: collecting observation video of the object with an optical observation device; feeding the collected image sequence into an object segmentation module to obtain a binary segmentation image and a contour image of the object; extracting feature vectors of the target's contour points to generate a multi-feature-driven distance image; establishing a tentative correspondence between features of the input two-dimensional image sequence and the target three-dimensional model; inverting the three-dimensional posture and distance parameters of the object in the image; feeding back the inverted three-dimensional posture and distance parameters; and correcting and updating the tentative feature correspondence between the two-dimensional image sequence and the target three-dimensional model until it meets the iteration stop condition. The method needs no three-dimensional imaging device, does not disturb the observed object, and offers good concealment, low cost and a high degree of automation.
Owner:TSINGHUA UNIV
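
The sketch below captures the correct-and-iterate idea in a much simplified form: project a known 3D model with the current pose guess, take the nearest contour pixel as a tentative correspondence for each projected point, re-solve the pose with PnP, and repeat until the update is small. The nearest-pixel matching stands in for the multi-feature correspondence of the abstract, and all inputs (model points, intrinsics, initial pose) are assumptions of this sketch.

```python
import cv2
import numpy as np

def estimate_pose(model_pts, contour_img, K, rvec, tvec, iters=20):
    """model_pts: (N,3) float32 points sampled on the target's 3D model
    (a few hundred points; matching below is brute force).
    contour_img: binary contour image of the segmented object.
    K: 3x3 camera intrinsics. rvec/tvec: initial pose guess, 3x1 float64."""
    ys, xs = np.nonzero(contour_img)
    contour_xy = np.stack([xs, ys], axis=1).astype(np.float32)  # (M,2)

    for _ in range(iters):
        proj, _ = cv2.projectPoints(model_pts, rvec, tvec, K, None)
        proj = proj.reshape(-1, 2)

        # Tentative correspondences: nearest contour pixel per projection.
        d2 = ((proj[:, None, :] - contour_xy[None, :, :]) ** 2).sum(-1)
        matched = contour_xy[np.argmin(d2, axis=1)]

        ok, rvec_new, tvec_new = cv2.solvePnP(model_pts, matched, K, None,
                                              rvec, tvec,
                                              useExtrinsicGuess=True,
                                              flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            break
        if np.linalg.norm(tvec_new - tvec) < 1e-3:  # iteration stop condition
            rvec, tvec = rvec_new, tvec_new
            break
        rvec, tvec = rvec_new, tvec_new

    distance = float(np.linalg.norm(tvec))  # range to the object
    return rvec, tvec, distance
```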

Target person hand gesture interaction method based on monocular video sequence

The invention provides a target person hand gesture interaction method based on a monocular video sequence, comprising the following steps: first, acquiring an image from the monocular video frame sequence, extracting a moving-foreground mask with a motion detection algorithm, detecting the minimum bounding rectangle of a palm with a palm classifier, and screening for the target person's palm; extracting a color histogram model from the target person's palm image, computing its back-projection image, and deriving an area model of the target person's palm; for the tracked target region, computing the back-projection image with the color model and computing the area of the target person's hand in the current frame to determine its static gesture: fist or palm; and using the fist or palm gesture for click or move interaction control. The target person's hand can be screened out against a complicated background, and tracking of arbitrary hand gestures and recognition of arbitrary preset trajectories are accomplished. The method can run on embedded platforms with low computing power and is simple, fast and stable.
Owner:TIANCHEN TIMES TECHNOLOGY CO LTD
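
A minimal sketch of the colour-model and area steps is shown below: a hue-saturation histogram of the detected palm is back-projected into each new frame, the hand is followed with CamShift (an assumed stand-in for the patent's tracker), and the blob area is compared with the stored palm area to call fist versus palm. The 0.6 area ratio is an illustrative threshold, not a value from the patent.

```python
import cv2
import numpy as np

def palm_color_model(palm_bgr):
    """Hue-saturation histogram of the screened palm region."""
    hsv = cv2.cvtColor(palm_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_and_classify(frame_bgr, hist, track_window, palm_area):
    """One tracking step: back-project the colour model, update the track
    window, and classify the static gesture from the current blob area."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)

    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, track_window = cv2.CamShift(backproj, track_window, crit)

    x, y, w, h = track_window
    blob = (backproj[y:y + h, x:x + w] > 64).astype(np.uint8)
    area = float(cv2.countNonZero(blob))

    # A closed fist presents much less skin area than the open-palm model.
    gesture = "fist" if area < 0.6 * palm_area else "palm"
    return gesture, track_window
```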

Multi-view consistent plane detection and analysis method for the three-dimensional structure of a monocular video scene

The invention discloses a multi-view consistent plane detection and analysis method for the three-dimensional structure of a monocular video scene. The method includes: inputting a monocular video, extracting key frames and generating a noisy semi-dense point cloud; extracting two-dimensional line segments in the key frame images and back-projecting them into three-dimensional space to obtain the corresponding point cloud; projecting the segments extracted from a single frame into the other key frames and, under the multi-view consistency constraint, filtering out the noise points to obtain a point cloud that satisfies the constraint, then fitting it to obtain three-dimensional line segments; extracting intersecting segments from the three-dimensional line segments, constructing planes under the constraint that intersecting lines are necessarily coplanar, and detecting and analyzing them in the noisy three-dimensional point cloud under the multi-view consistency constraint to obtain the planes in the monocular video scene; and applying the reconstructed planes to augmented reality according to the user's needs. The method performs well in plane reconstruction and virtual-real fusion and can be widely applied in the field of augmented reality.
Owner:BEIHANG UNIV
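
The core geometric step, constructing a plane from two intersecting 3D line segments and validating it against the noisy point cloud, can be sketched as below. The distance threshold and minimum inlier count are illustrative, and the multi-view consistency check across key frames is omitted.

```python
import numpy as np

def plane_from_segments(seg_a, seg_b, cloud, dist_thresh=0.02, min_inliers=200):
    """seg_a, seg_b: (2,3) endpoint arrays of two intersecting 3D segments
    (intersecting lines are necessarily coplanar). cloud: (N,3) noisy
    semi-dense point cloud. Returns (normal, point, inlier mask) or None."""
    d_a = seg_a[1] - seg_a[0]
    d_b = seg_b[1] - seg_b[0]

    normal = np.cross(d_a, d_b)
    norm = np.linalg.norm(normal)
    if norm < 1e-6:            # parallel segments define no unique plane
        return None
    normal /= norm

    # Anchor the plane at the midpoint of the two segments.
    point = 0.5 * (seg_a.mean(axis=0) + seg_b.mean(axis=0))

    # Support from the point cloud: points close to the candidate plane.
    dist = np.abs((cloud - point) @ normal)
    inliers = dist < dist_thresh
    if inliers.sum() < min_inliers:
        return None
    return normal, point, inliers
```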

Upper limb training system based on monocular video human body action sensing

Inactive · CN105536205A · Stimulate training initiative · Increase motivation · Gymnastic exercising · Human body · Upper limb training
The invention discloses an upper limb training system based on monocular video human body action sensing. The system comprises a computer and a video camera. The video camera captures the user's body movement and transmits the video data to the computer over USB (Universal Serial Bus); the computer receives and analyzes the video data, tracks the moving trajectory of the hand, and recognizes gesture actions from the tracking result; the computer also interacts with a game platform, obtains training evaluation parameters and feeds them back to the user. The training system uses a quantitative evaluation scheme, stimulates the user's motivation to train, overcomes the shortcomings of existing training methods, and can be applied in communities and homes. Its virtual reality technology provides a strong sense of immersion, increases the interest of the training process and the user's enthusiasm, and at the same time improves the user's safety during training.
Owner:TIANJIN UNIV
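
The data flow of such a system (camera capture, hand tracking, gesture recognition, game interaction, evaluation feedback) might look like the loop below. The three callback functions are placeholders for the tracking, recognition and game modules, which the abstract does not specify, and the path-length score is only an illustrative evaluation parameter.

```python
import cv2
import numpy as np

def training_session(track_hand, recognize_gesture, send_to_game):
    """Run one training session and return an evaluation score plus the
    recorded hand trajectory."""
    cap = cv2.VideoCapture(0)          # USB video camera
    trajectory, score = [], 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pos = track_hand(frame)        # (x, y) hand position or None
        if pos is not None:
            trajectory.append(pos)
            send_to_game(recognize_gesture(trajectory))
            # Illustrative evaluation parameter: total path length moved.
            if len(trajectory) > 1:
                score += float(np.hypot(pos[0] - trajectory[-2][0],
                                        pos[1] - trajectory[-2][1]))
        cv2.imshow("training", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc ends the session
            break
    cap.release()
    cv2.destroyAllWindows()
    return score, trajectory
```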

Method and system for converting monocular video into stereoscopic video

The invention provides a method for converting monocular video into stereoscopic video based on image color characteristics, comprising the following steps: providing a monocular video sequence and obtaining an initial depth map for each flat (2D) frame; converting each frame into a grayscale space; filling the hole pixels of the initial depth maps guided by the converted grayscale images; applying joint bilateral filtering to the filled depth maps according to the color differences of the R, G and B channels of each frame to obtain smooth depth maps; and converting the monocular video sequence into a stereoscopic image sequence according to the smooth depth map of each frame. According to the embodiment of the invention, no manual intervention is needed and fully automatic conversion of monocular video into stereoscopic video is achieved; the processing is simple and fast, and the overall stereoscopic display effect is good. The invention also provides a system for converting monocular video into stereoscopic video based on image color characteristics.
Owner:TSINGHUA UNIV
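
One frame of such a 2D-to-3D pipeline could be sketched as below: fill depth holes, smooth the depth with a joint bilateral filter guided by the colour frame, then synthesise left and right views by depth-dependent horizontal shifting (simple depth-image-based rendering). cv2.inpaint stands in for the grey-level-guided hole filling of the abstract, cv2.ximgproc requires opencv-contrib-python, and the shift range is an arbitrary choice.

```python
import cv2
import numpy as np

def frame_to_stereo(bgr, depth, max_shift=12):
    """bgr: 8-bit colour frame. depth: 8-bit single-channel initial depth
    map with zero-valued hole pixels. Returns (left view, right view,
    smoothed depth)."""
    hole_mask = (depth == 0).astype(np.uint8)
    filled = cv2.inpaint(depth, hole_mask, 3, cv2.INPAINT_TELEA)

    # Joint bilateral filtering of depth, guided by the colour frame.
    smooth = cv2.ximgproc.jointBilateralFilter(bgr, filled, d=9,
                                               sigmaColor=25, sigmaSpace=9)

    h, w = smooth.shape[:2]
    # Disparity grows with nearness; map depth [0,255] to a pixel shift.
    shift = (smooth.astype(np.float32) / 255.0) * max_shift
    xs = np.tile(np.arange(w, dtype=np.float32), (h, 1))
    ys = np.tile(np.arange(h, dtype=np.float32)[:, None], (1, w))

    left = cv2.remap(bgr, xs + shift / 2, ys, cv2.INTER_LINEAR)
    right = cv2.remap(bgr, xs - shift / 2, ys, cv2.INTER_LINEAR)
    return left, right, smooth
```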

Auxiliary training system based on human body posture estimation algorithm

The invention discloses an auxiliary training system based on a human body posture estimation algorithm. The system works as follows: first, a binary silhouette image of the human body is detected and extracted from monocular video using a ViBe-based background modeling method; second, a contour edge map of the image is obtained with the Canny edge detection algorithm; third, the image coordinates of 15 main joint points of the human body are obtained through image processing methods such as horizontal line scanning and human body proportion constraints; fourth, on this basis the auxiliary training system is built, with five joint angles formed by the 15 joint points used as training indexes, the Euclidean distance used as the similarity measure between postures, and the joint angle trajectory and posture similarity output as two auxiliary indexes. By quantitatively analyzing motion characteristics, the system enables analysis and comparison of athletes' postures, so that athletes' level and performance can be improved scientifically and physical training no longer relies purely on experience.
Owner:BEIJING UNIV OF POSTS & TELECOMM
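
Given the 15 joint coordinates, the two output indexes described above reduce to straightforward geometry, sketched below. The joint-index triples for the five angles are assumptions (they depend on the 15-point layout the system actually uses), and the similarity here is a plain translation-normalised Euclidean distance.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b formed by points a-b-c, in degrees."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Illustrative joint-index triples for the five tracked angles; the actual
# indices depend on the 15-point layout used by the system above.
ANGLE_TRIPLES = [(1, 3, 5), (2, 4, 6), (8, 10, 12), (9, 11, 13), (0, 7, 14)]

def posture_similarity(joints_a, joints_b):
    """Euclidean-distance similarity between two 15x2 joint arrays after
    removing translation (lower is more similar)."""
    a = joints_a - joints_a.mean(axis=0)
    b = joints_b - joints_b.mean(axis=0)
    return float(np.linalg.norm(a - b))

def evaluate(joints, reference):
    """System outputs for one frame: the five joint angles and the posture
    similarity to a reference (coach) frame."""
    angles = [joint_angle(joints[i], joints[j], joints[k])
              for i, j, k in ANGLE_TRIPLES]
    return angles, posture_similarity(joints, reference)
```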