206 results about "Camera tracking" patented technology

Method for generating virtual-real fusion image for stereo display

The invention discloses a method for generating a virtual-real fusion image for stereo display. The method comprises the following steps: (1) using a monocular RGB-D camera to acquire a depth map and a color map of the real scene; (2) reconstructing a three-dimensional scene surface model and calculating the camera parameters; (3) mapping to obtain the depth map and the color map at a virtual viewpoint position; (4) completing the three-dimensional registration of a virtual object, rendering to obtain the depth map and the color map of the virtual object, and performing virtual-real fusion to obtain virtual-real fusion content for stereo display. Because the monocular RGB-D camera is used for shooting and the three-dimensional scene surface model is rebuilt frame by frame and used simultaneously for camera tracking and virtual-viewpoint mapping, higher camera-tracking and virtual-object registration precision can be achieved, the holes that appear in image-based virtual-viewpoint rendering can be effectively handled, occlusion judgment and collision detection between the virtual and real scenes can be realized, and a vivid stereoscopic effect can be obtained on a stereo display device.
Owner:ZHEJIANG UNIV
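Step (4)'s virtual-real fusion hinges on a per-pixel occlusion test between the real scene's depth map and the rendered virtual object's depth map. A minimal sketch of that compositing step (the function name and array layout are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def fuse_virtual_real(real_color, real_depth, virt_color, virt_depth):
    """Composite a virtual object over the real scene with occlusion handling.

    A virtual pixel is drawn only where the virtual object has valid depth
    (> 0) and lies in front of the reconstructed real surface.
    """
    mask = (virt_depth > 0) & (virt_depth < real_depth)
    fused = real_color.copy()
    fused[mask] = virt_color[mask]  # boolean mask broadcasts over RGB channels
    return fused
```

The same mask can feed collision detection: where virtual and real depths nearly coincide, the virtual object is touching the reconstructed surface.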

Cross-domain pedestrian re-identification method based on unsupervised joint multi-loss model

The invention discloses a cross-domain pedestrian re-identification method based on an unsupervised joint multi-loss model. The method mainly addresses the low identification rate of existing unsupervised cross-domain pedestrian re-identification methods. The method comprises the following steps: 1) obtaining a data set and dividing it into a training set and a test set; 2) applying several preprocessing and augmentation operations to the training set; 3) selecting a residual network as the reference network model, initializing the network parameters, and adjusting the network structure; 4) constructing a target-domain loss function; 5) fusing the target-domain loss function with the source-domain loss function and the triplet loss function to obtain a total loss function; 6) training the residual network with the total loss function to obtain a trained network model; and 7) inputting the test set into the trained network model and outputting the identification result. The method improves the identification rate of unsupervised cross-domain pedestrian re-identification, effectively avoids over-fitting, and can be used in intelligent security for suspect search and cross-camera pedestrian tracking.
Owner:XIDIAN UNIV

Camera tracking method and device

Provided are a camera tracking method and device, which use binocular video images to perform camera tracking, thereby improving tracking accuracy. The camera tracking method provided in the embodiments of the present invention comprises: acquiring an image set of the current frame; extracting feature points from each image in the image set of the current frame; acquiring a matched feature-point set for the current frame's image set, based on the principle that scene depths in adjacent image regions are similar; estimating, according to an attribute parameter and a pre-set model of the binocular camera, the three-dimensional positions of the scene points corresponding to each pair of matched feature points in the local coordinate systems of the current frame and the next frame; and, from those three-dimensional positions, estimating the motion parameters of the binocular camera in the next frame using the invariance of barycentric coordinates under rigid transformation, and optimizing those motion parameters.
Owner:HUAWEI TECH CO LTD
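The final step, estimating inter-frame motion from matched 3D points by exploiting the invariance of the centroid (barycenter) under rigid transformation, is the classic absolute-orientation problem. A minimal sketch using the Kabsch/SVD solution (a standard technique shown here for illustration; the patent's exact formulation may differ):

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Estimate rotation R and translation t such that Q ~ P @ R.T + t,
    given corresponding 3D points P, Q as (N, 3) arrays.

    The centroids are invariant under the rigid motion, so subtracting
    them decouples rotation from translation (Kabsch algorithm).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflection solutions
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

A full tracker would wrap this in a RANSAC loop over the matched feature pairs and then refine the motion parameters, matching the "estimate, then optimize" structure of the claim.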

Non-blind-area multi-target cooperative tracking method and system

The invention relates to a non-blind-area multi-target cooperative tracking method, which comprises the following steps: manually designating a monitoring area or a blind area in the gun camera's monitoring background, and arranging a dome camera in the corresponding blind area; obtaining an image sequence of the gun camera's monitoring scene and performing Gaussian background modeling on the monitoring image sequence to obtain a background image; detecting moving targets in the monitoring image; performing gun camera tracking and marking the detected moving targets; and continuously detecting whether a moving target exists in the blind area, and, after a target is confirmed, tracking it, transmitting the target position information back to the gun camera, and controlling the movement of the dome camera according to the target's motion so that the target generally stays within the central range of the dome camera's view. The invention also discloses a non-blind-area multi-target cooperative tracking system. Because the dome camera and the gun camera cooperate by exchanging moving-target information, non-blind-area display is achieved in a unified monitoring picture, every corner can be monitored, and the safety of the monitored area is safeguarded.
Owner:HEFEI BOWEI SECURITY ELECTRONICS TECHNOLOGY CO LTD
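Keeping the target near the dome's view center amounts to converting the target's pixel offset in the image into pan/tilt angle commands. A minimal proportional sketch (the linear pinhole approximation and the field-of-view parameters are illustrative assumptions, not from the patent):

```python
def pan_tilt_command(cx, cy, img_w, img_h, hfov_deg, vfov_deg):
    """Map a target's pixel position (cx, cy) to pan/tilt offsets in degrees.

    Offsets are proportional to the target's displacement from the image
    center, scaled by the camera's horizontal/vertical field of view; a
    real controller would also account for lens distortion and the
    calibrated pose between the gun camera and the dome camera.
    """
    pan = (cx - img_w / 2) / img_w * hfov_deg
    tilt = (cy - img_h / 2) / img_h * vfov_deg
    return pan, tilt
```

Driving the dome with these offsets each frame keeps the tracked target near its view center, as the cooperative tracking loop above requires.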

Three-dimensional reconstruction method of single object, based on color depth camera

The invention discloses a three-dimensional reconstruction method for a single object based on a color-depth (RGB-D) camera. The method comprises three steps: 1) extracting the scanned object region during the scanning process; 2) performing camera tracking and local fusion of the depth data according to the color-depth data, performing global non-rigid registration on the locally fused data, and gradually constructing the overall three-dimensional model and accurate key-frame camera poses; and 3) performing mesh extraction on the fusion model and, from the obtained key-frame camera poses and key-frame color images, computing the texture map of the three-dimensional mesh model. The framework guarantees that, even when the object occupies a relatively small proportion of the image, high-quality geometric reconstruction and texture mapping can still be achieved when a single object is reconstructed. The method is clear, fast, robust in its results, and applicable to fields such as virtual-reality scene construction.
Owner:ZHEJIANG UNIV
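The "local fusion of the depth data" in step 2 is commonly implemented as a weighted running average over a volumetric truncated signed-distance field (TSDF), as in KinectFusion-style pipelines. A minimal sketch of that per-voxel update (the function and its weighting scheme are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_tsdf, new_weight=1.0):
    """Fuse one new depth observation into the volumetric model.

    Each voxel keeps a running weighted average of observed signed
    distances; averaging over frames suppresses per-frame depth noise.
    """
    w = weight + new_weight
    fused = (tsdf * weight + new_tsdf * new_weight) / np.maximum(w, 1e-6)
    return fused, w
```

After all frames are fused, the mesh in step 3 would be extracted from the zero level set of this field (e.g. with marching cubes).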