197 results about "Feature tracking" patented technology

Suspicious target detection tracking and recognition method based on dual-camera cooperation

The invention discloses a suspicious-target detection, tracking, and recognition method based on dual-camera cooperation, belonging to the technical field of video image processing. A panoramic surveillance camera collects a panoramic image; an improved Gaussian mixture modeling method performs foreground detection; basic parameters of the moving targets are extracted; and a Kalman filter estimates the motion trajectory of a specific target, which is identified by velocity analysis. Under the dual-camera cooperation strategy, a feature-tracking camera is controlled to carry out feature tracking on the moving targets, a suspicious target is locked onto, its face is detected and recognized, the face data are compared against a database, and an alarm is raised if an anomaly is found. By adopting the dual-camera cooperative tracking surveillance strategy, the method overcomes the shortcomings of a single surveillance camera in specific scenes, and the added face-recognition function further assists operators in identifying a specific target. In addition, the tracking algorithm runs in real time, the target-recognition and judgment criteria are simple and reliable, and the overall procedure is fast and accurate.
Owner:CHONGQING UNIV
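The abstract names a Kalman filter for estimating the target's motion locus but gives no equations. A minimal sketch, assuming a standard constant-velocity model over a one-dimensional target centroid (the patent's actual state model is not disclosed), could look like:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter over 1-D centroid measurements.

    State x = [position, velocity]; returns the filtered positions.
    Illustrative only -- the patent does not specify its filter model.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)                          # process noise
    R = np.array([[r]])                        # measurement noise
    x = np.array([[measurements[0]], [0.0]])   # initial state
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

# filtered positions for a roughly linear centroid track
positions = kalman_track([0.0, 1.1, 1.9, 3.2, 4.0, 5.1])
```

The same predict/update cycle extends to 2-D image coordinates with a 4-state vector; velocity analysis of the filtered state is then what singles out the specific target.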

Visual positioning method based on robust feature tracking

Inactive · CN103345751A · Improve convergence rate; improve feature tracking performance · Image analysis · Feature extraction algorithm; visual positioning
The invention discloses a robust feature-tracking and stereoscopic vision positioning technology based on image processing and machine vision. The technology fuses inertial and visual information to achieve reliable stereoscopic vision positioning under camera shake and outdoor lighting conditions. Images are collected in real time by a binocular camera, and the camera's rotation is measured with an inertial measurement unit. Feature points are extracted from the images with a feature extraction algorithm, and the feature points of the left and right images are matched stereoscopically. The inertial information is fused with the KLT algorithm to track the feature points, improving the reliability of feature tracking. The three-dimensional positions of the feature points are then recovered from binocular geometry. The camera's motion parameters are obtained from the feature-point positions by Gauss-Newton iteration, and the accuracy of visual positioning is further improved with the RANSAC algorithm. The whole process iterates continuously, yielding real-time estimates of the camera's pose and position.
Owner:BEIJING UNIV OF POSTS & TELECOMM
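The step of recovering three-dimensional feature-point positions from the binocular geometry can be sketched for a rectified stereo pair, where depth follows from disparity as Z = f·B/d. The focal length, baseline, and coordinates below are hypothetical; the patent does not give its calibration:

```python
def triangulate(u_left, u_right, v, f, baseline, cx=0.0, cy=0.0):
    """Recover a 3-D point from a rectified stereo match.

    u_left/u_right: horizontal pixel coordinates of the match in each
    image; v: shared vertical coordinate; f: focal length (px);
    baseline: camera separation (m). Assumes a standard rectified
    pinhole model, which is an assumption here.
    """
    d = u_left - u_right              # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * baseline / d              # depth from similar triangles
    X = (u_left - cx) * Z / f         # lateral offset
    Y = (v - cy) * Z / f              # vertical offset
    return X, Y, Z

# e.g. a 14 px disparity at f=700 px, 12 cm baseline gives 6 m depth
X, Y, Z = triangulate(70.0, 56.0, 35.0, f=700.0, baseline=0.12)
```

Feeding many such 3-D points into a Gauss-Newton solver, with RANSAC rejecting mismatched pairs, is the pose-estimation stage the abstract describes.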

Method of Tracking Morphing Gesture Based on Video Stream

Inactive · CN102270348A · Remove background changes; eliminate distractions · Image analysis · Skin complexion; mean-shift
The invention discloses a method for tracking a deformable hand gesture based on a video stream, comprising the steps of: obtaining a frame image and extracting from it a sub-image containing the human hand; selecting feature tracking points from that sub-image and using it to initialize a continuously adaptive mean-shift (CAMShift) tracker; performing optical-flow calculation on the selected feature tracking points as a local tracking result, while the CAMShift tracker simultaneously tracks the hand globally to obtain a global tracking result; updating the feature tracking points; and adopting the optical-flow result as the final output for the deformable hand gesture. The method can track a hand making arbitrarily deforming gestures, allowing gesture-based human-computer interaction to operate in a more comfortable manner. Tracking works for arbitrarily deforming gestures, interference from background changes and large skin-colored regions is eliminated, and real-time, robust hand-gesture tracking is achieved.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI +1
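The global branch of the tracker is a mean-shift iteration toward the mode of a weight map (in CAMShift, the skin-color back-projection). A one-dimensional sketch of that centroid-seeking step, with a hypothetical weight map standing in for the back-projection, could read:

```python
def mean_shift(weights, start, radius, iters=20):
    """Shift a window toward the mode of a 1-D weight map.

    `weights` plays the role of the skin-color back-projection used by
    a CAMShift-style tracker; `start` and `radius` define the initial
    search window. All values here are illustrative.
    """
    x = start
    for _ in range(iters):
        lo = max(0, int(x) - radius)
        hi = min(len(weights), int(x) + radius + 1)
        total = sum(weights[lo:hi])
        if total == 0:
            break                      # no support under the window
        centroid = sum(i * weights[i] for i in range(lo, hi)) / total
        if abs(centroid - x) < 1e-3:
            break                      # converged on the mode
        x = centroid
    return x

# a bump of weight centered on index 12 pulls the window there
w = [0] * 20
for i, v in zip(range(10, 15), [1, 3, 5, 3, 1]):
    w[i] = v
```

In the patented method this global estimate is run alongside per-point optical flow; the local flow result is the one finally reported, with the global track guarding against drift.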

Method and system for detecting illegal left-and-right steering of vehicle at traffic intersection

Active · CN102903239A · Reduce workload; avoid embedding induction coils · Detection of traffic movement · Feature tracking; virtual position
The invention provides a method and a system for detecting illegal left and right turns by vehicles at a traffic intersection. The method comprises the following steps: decoding the surveillance video stream from the intersection in real time to obtain video images; identifying vehicles passing through the intersection and detecting their motion trails; and presetting at least one virtual position on a turning lane in the video image, so that a vehicle is judged to have turned illegally when the signal light at the intersection does not permit the turn and the vehicle's motion trail passes through the virtual position. By applying intelligent video analysis, a virtual loop is drawn on the surveillance video, the vehicle's motion trail is tracked and analyzed through its features, and illegal turning is judged against the preset virtual loop, reducing the workload of manual checking and control. Moreover, because capture is triggered by the video-based virtual loop, the traditional approach of embedding an induction loop in the pavement is avoided, and the road surface does not need to be dug up.
Owner:JIANGSU CHINA SCI INTELLIGENT ENG
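The core geometric test — does a tracked trail cross the drawn virtual loop? — reduces to segment intersection between consecutive trail points and the loop line. A minimal sketch (coordinates and the proper-intersection-only simplification are assumptions, not the patent's implementation):

```python
def crosses(trail, loop_a, loop_b):
    """True if a vehicle trail (list of (x, y) points) crosses the
    virtual loop drawn as the segment loop_a -> loop_b.

    Uses the standard orientation test for proper segment
    intersection; touching endpoints are not counted.
    """
    def orient(p, q, r):
        # sign of the cross product (q - p) x (r - p)
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    for p, q in zip(trail, trail[1:]):
        d1 = orient(loop_a, loop_b, p)
        d2 = orient(loop_a, loop_b, q)
        d3 = orient(p, q, loop_a)
        d4 = orient(p, q, loop_b)
        if d1 * d2 < 0 and d3 * d4 < 0:
            return True
    return False
```

Combined with the signal-light state, a `True` result during a no-turn phase is what the method flags as an illegal turn.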

Feature tracking linear optic flow sensor

This invention is a one-dimensional elementary motion detector that measures the linear optical flow in a small subsection of the visual field. The sensor measures motion by tracking the movement of a feature across the visual field and measuring the time required to move from one location to the next. First, a one-dimensional image is sampled from the visual field using a linear photoreceptor array. Feature detectors, such as edge detectors, are created with simple circuitry that performs simple computations on the photoreceptor outputs. The feature's location is detected using a winner-take-all (WTA) mechanism on the feature detector outputs. Motion detection is then performed by monitoring the location of the high WTA output over time to detect transitions corresponding to motion. The correspondence problem is solved by ignoring transitions to and from the end lines of the WTA output bus, and speed is measured as the time between WTA output transitions. The invention operates in a one-dimensional subspace of the two-dimensional visual field: a specially shaped photoreceptor array converts a two-dimensional image section into a one-dimensional image, preserving image information in one direction while filtering it out in the perpendicular direction. Thus the sensor measures the projection of the 2-D optical flow vector onto the vector representing the sensor's orientation. By placing several of these sensors in different orientations and using vector arithmetic, the 2-D optical flow vector can be determined.
Owner:THE GOVERNMENT OF THE UNITED STATES OF AMERICA AS REPRESENTED BY THE SEC OF THE NAVY NAVAL RES LAB WASHINGTON
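The transition-timing logic above can be simulated in software: watch the stream of WTA winner indices, skip transitions that involve the end lines (the patent's fix for the correspondence problem), and convert index steps per elapsed time into speed. The array length, pitch, and sample interval below are hypothetical:

```python
def wta_speed(winners, n, pitch, dt):
    """Estimate speeds from a stream of WTA winner indices.

    winners: winner index at each sample; n: number of WTA lines;
    pitch: photoreceptor spacing; dt: sampling interval. Transitions
    to or from the end lines (0 and n-1) are ignored. Returns one
    signed speed per valid transition.
    """
    speeds = []
    last_idx, last_t = winners[0], 0
    for t, w in enumerate(winners[1:], start=1):
        if w != last_idx:
            # discard transitions touching the end lines of the bus
            if last_idx not in (0, n - 1) and w not in (0, n - 1):
                steps = w - last_idx
                speeds.append(steps * pitch / ((t - last_t) * dt))
            last_idx, last_t = w, t
    return speeds

# a feature drifting rightward across lines 2 -> 3 -> 4
speeds = wta_speed([2, 2, 2, 3, 3, 4], n=6, pitch=1.0, dt=0.1)
```

The sign of each speed gives the motion direction along the sensor's orientation, matching the projection interpretation in the description.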

Self-adaptive feature fusion-based multi-scale correlation filtering visual tracking method

Active · CN108549839A · Improve performance; avoid the problem of limited expression of a single feature · Image analysis; character and pattern recognition · Scale estimation; phase correlation
The invention discloses a multi-scale correlation-filtering visual tracking method based on adaptive feature fusion. The method comprises the following steps: first, correlation filtering is carried out on the target's HOG feature and color feature separately, within a context-aware correlation filtering framework; the response values under the two features are normalized; weights are assigned in proportion to the response values and the responses are linearly weighted and fused to obtain a final fused response map; the final response map is compared with a predefined response threshold to decide whether the filtering model should be updated; finally, a scale correlation filter is introduced into the tracking process to improve the scale adaptability of the algorithm. The method can track with multiple features, exploiting the performance advantages of each, and designs a model self-updating scheme together with a precise scale-estimation mechanism. It effectively improves the model's update quality and tracking precision, adapts to scale changes, and is robust in complex scenes involving rapid motion, deformation, occlusion, and the like.
Owner:HUAQIAO UNIVERSITY +1
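The fusion step — weights proportional to the two response values, followed by linear weighted combination — can be sketched as below. The choice of peak-based weighting and peak normalization is an assumption; the patent does not state its exact normalization:

```python
import numpy as np

def fuse_responses(r_hog, r_color):
    """Adaptively fuse two correlation-filter response maps.

    Weights are set in proportion to each map's peak response
    (a hypothetical reading of "proportion of the response values"),
    each map is peak-normalized, and the maps are linearly combined.
    """
    p1, p2 = float(r_hog.max()), float(r_color.max())
    w1 = p1 / (p1 + p2)                 # stronger response gets more weight
    w2 = 1.0 - w1
    fused = w1 * r_hog / p1 + w2 * r_color / p2
    return fused, (w1, w2)

# a confident HOG peak should dominate a weak color peak
r_hog = np.zeros((5, 5)); r_hog[2, 2] = 0.9
r_color = np.zeros((5, 5)); r_color[1, 1] = 0.3
fused, (w1, w2) = fuse_responses(r_hog, r_color)
```

Comparing `fused.max()` against a predefined threshold would then gate the model update, as the abstract describes.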

Tracking and matching parallel computing method for wearable device

The invention discloses a tracking-and-matching parallel computing method for a wearable device, aimed at augmented-reality tracking and matching. The method adopts the SCAAT-EKF feature tracking technique and performs complementary fused data acquisition across the wearable device's multiple sensors, effectively avoiding data collision. Using a multi-channel strategy built on a dual-core CPU plus GPU, corner detection and extraction based on the Harris algorithm are carried out on the GPU while the dual-core CPU performs P-KLT tracking and matching, achieving fast parallel processing of the algorithm. The method mainly comprises: hybrid tracking and feature extraction for the wearable device; accurate extraction of feature points from markerless natural features of the target; Harris corner detection implemented on the GPU's parallel processing mechanism; the CPU-based P-KLT parallel feature-tracking algorithm; and a secondary matching-optimization algorithm. The method combines the wearable device's sensors with visual tracking and matching, and has broad prospects for augmented-reality three-dimensional registration.
Owner:北京中海新图科技有限公司 +1
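The Harris step the method offloads to the GPU computes, per pixel, the response R = det(M) − k·trace(M)² of the windowed structure tensor M. A serial reference sketch (window size and test image are illustrative, not from the patent) could be:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response over a box window.

    A serial reference version of the detection the patent runs in
    parallel on the GPU; `img` is a float 2-D array.
    """
    Iy, Ix = np.gradient(img.astype(float))      # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # box-filter a structure-tensor entry over a win x win window
        out = np.zeros_like(a)
        h = win // 2
        H, W = a.shape
        for i in range(H):
            for j in range(W):
                out[i, j] = a[max(0, i - h):i + h + 1,
                              max(0, j - h):j + h + 1].sum()
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# a white square on black: corners score high, straight edges negative
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
```

Because each pixel's response depends only on a local window, the double loop maps directly onto one GPU thread per pixel, which is what makes this stage a natural fit for the patent's CPU/GPU split.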