1802 results about "Characteristic point" patented technology

First, the characteristic point, expressed as an object point, is stored in hash-table form, which encodes a large amount of information derived from geometric transformations, for storage in the database. On one end of a bone, for example, the top end would have a characteristic point that is very easy to distinguish between species.

Three-dimensional enhancing realizing method for multi-viewpoint free stereo display

The invention discloses a method for realizing three-dimensional enhancement for multi-viewpoint free stereo display, which comprises the following steps: 1) stereoscopically shooting a natural scene with a binocular camera; 2) extracting and matching the characteristic points of the main camera's image, generating a three-dimensional point cloud of the natural scene in real time, and calculating the camera parameters; 3) calculating the depth image corresponding to the main camera's image, rendering virtual viewpoint images and their depth images, and performing hole filling; 4) drawing a three-dimensional virtual model with three-dimensional modeling software and using a virtual-real fusion module to fuse the virtual and real content of the multi-viewpoint images; 5) appropriately combining the multiple streams of fused images; and 6) providing multi-viewpoint stereo display on a 3D display device. In this method, the binocular camera is used for stereoscopic shooting and a feature extraction and matching technique with good real-time performance is adopted, so no markers are required in the natural scene; the virtual-real fusion module achieves illumination consistency and seamless fusion of the virtual and real scenes; and the 3D display device provides a multi-user, multi-angle, naked-eye, multi-viewpoint stereo display effect.
Owner:万维显示科技(深圳)有限公司
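For illustration only, a minimal Python sketch of the feature extraction and matching in step 2, using OpenCV's ORB detector and a brute-force Hamming matcher as stand-ins for whatever detector the patent actually uses (the abstract does not name one); the image file names are placeholders.

```python
import cv2

# Load the main (left) and secondary (right) camera frames; paths are placeholders.
left = cv2.imread("main_camera_frame.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("secondary_camera_frame.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe characteristic points in both frames.
orb = cv2.ORB_create(nfeatures=2000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Match descriptors with cross-checking to keep only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

# Matched pixel coordinates; downstream these would be triangulated into a point cloud.
pts_l = [kp_l[m.queryIdx].pt for m in matches]
pts_r = [kp_r[m.trainIdx].pt for m in matches]
```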

Method for measuring three-dimensional position and stance of object with single camera

The invention discloses a method for measuring the three-dimensional position and pose of an object with a single camera. The method comprises the following steps: acquiring an image of the target to be measured with a single camera; determining the real-time three-dimensional position and pose information of the target by accurately identifying marking points on the target; selecting a suitable camera according to the detection scene and range and calibrating it to obtain the intrinsic and extrinsic parameters of the camera; designing target marking points according to the target to be measured and arranging them reasonably; then detecting the target, identifying characteristic points in the image shot by the camera, and matching the detected characteristic points with the marking points; and finally solving the three-dimensional position and pose information of the target according to the correspondence between the measured points and the object marking points. The method can also detect whether a non-rigid object is deformed. By adopting a single camera, the invention realizes three-dimensional measurement, acquires information about the target in three-dimensional space such as spatial geometric parameters, position and pose, reduces the measuring cost and the size of the measuring system, and is easy to operate.
Owner:XI AN JIAOTONG UNIV
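The final solving step can be sketched with OpenCV's standard perspective-n-point routine: given the 2D detections of the marking points and their known 3D coordinates on the target, solvePnP recovers the position and orientation. The point values and intrinsics below are invented placeholders, not the patent's data.

```python
import numpy as np
import cv2

# Known 3D coordinates of the marking points on the target (placeholder values, metres).
object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0],
                          [0.05, 0.05, 0.02]], dtype=np.float64)
# Matching 2D characteristic points detected in the camera image (placeholder pixels).
image_points = np.array([[320, 240], [420, 238], [422, 338], [318, 340],
                         [371, 289]], dtype=np.float64)

# Intrinsic parameters from calibration (placeholder focal length / principal point).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)

# Solve for the rotation and translation of the target relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)          # 3x3 rotation matrix (orientation)
print("position:", tvec.ravel(), "rotation:\n", R)
```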

Three-dimensional human body gait quantitative analysis system and method

The invention discloses a three-dimensional human body gait quantitative analysis system and method. The method comprises the following steps: simultaneously using two inertial measurement nodes, one on each foot, and fusing and analyzing the data of the two nodes to measure more accurate gait parameters and to obtain information that cannot be measured with a single foot; during walking, first storing the collected data in the storage units of the measurement nodes, and after the walk is finished, transmitting the collected data to an analysis and calculation device in a wired or wireless manner. High-speed acquisition of all gait information during walking can thus be achieved, and no characteristic gait information point is omitted. A foot binding device for the inertial measurement node is designed, eliminating the measurement error caused by shifting of the node's fixed position during walking and guaranteeing the consistency of the node's fixed position across repeated measurements; and the gait analysis and calculation program module adopts a moving-window search method to define the characteristic points of the gait information, so the gait diagnostic information can be extracted more accurately.
Owner:王哲龙
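The abstract does not specify the moving-window search further; below is a generic sliding-window extremum detector in Python, assuming a 1-D inertial signal (e.g., sagittal angular rate) sampled at a fixed rate. The window length and the synthetic signal are purely illustrative.

```python
import numpy as np

def moving_window_minima(signal, window=25):
    """Return indices of samples that are the minimum of the window centred on them.

    Such local minima are one common way to mark gait characteristic points
    (e.g., mid-swing or heel-strike candidates) in an angular-rate signal.
    """
    signal = np.asarray(signal, dtype=float)
    half = window // 2
    points = []
    for i in range(half, len(signal) - half):
        segment = signal[i - half:i + half + 1]
        if np.argmin(segment) == half:           # centre sample is the window minimum
            points.append(i)
    return np.array(points)

# Illustrative use with a synthetic periodic signal standing in for gyroscope data.
t = np.linspace(0, 10, 1000)
gyro = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)
print(moving_window_minima(gyro, window=51)[:5])
```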

Camera on-field calibration method in measuring system

Active CN101876532A · Easy extraction · Overcome the adverse effects of opaque imaging · Image analysis · Using optical means · Theodolite · Size measurement
The invention discloses a camera on-field calibration method for a measuring system, belonging to the field of computer vision inspection, and in particular an on-field calibration method for solving the intrinsic and extrinsic parameters of the cameras in a large-forging dimension measuring system. The measuring system contains two cameras and one projector. The calibration method comprises the following steps: manufacturing intrinsic- and extrinsic-parameter calibration targets for the cameras; projecting the intrinsic-parameter targets and shooting images; extracting image characteristic points with an image processing algorithm in Matlab; writing out equations to solve the intrinsic parameters of the cameras; processing the images shot simultaneously by the left and right cameras; and measuring the actual distance between the target circle centres with left and right theodolites, solving a scale factor, and thereby solving the actual extrinsic parameters. The method has strong on-field adaptability; by using the projector to project the targets, it overcomes the opaque and unclear imaging caused by the filter plate filtering infrared light in a large-forging binocular vision measuring system, and it is suitable for occasions with large scenes and complex backgrounds.
Owner:DALIAN UNIV OF TECH
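For orientation, the intrinsic-parameter step could be carried out with OpenCV's standard planar-target calibration instead of the patent's Matlab routine; this is a hedged sketch, and the grid layout, spacing, and file names are assumptions.

```python
import glob
import numpy as np
import cv2

pattern = (7, 5)                      # assumed circle-grid layout (cols, rows)
spacing = 0.03                        # assumed centre spacing in metres

# 3D coordinates of the grid circle centres in the target's own plane (Z = 0).
grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * spacing

obj_pts, img_pts = [], []
for path in glob.glob("intrinsic_target_*.png"):   # placeholder file names
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centres = cv2.findCirclesGrid(img, pattern)
    if found:
        obj_pts.append(grid)
        img_pts.append(centres)
        image_size = img.shape[::-1]

# Solve for the intrinsic matrix K and distortion coefficients of one camera.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
print("reprojection RMS:", rms, "\nK =\n", K)
```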

Automobile fatigue driving prediction method

The invention discloses an automobile fatigue-driving prediction method. The method comprises the following steps: successively constructing and arranging first through fourth stages of convolutional neural networks; inputting an image, using the first-stage network to obtain candidate face windows and the corresponding bounding-box regression vectors, and merging highly overlapping candidate windows through the first and second stages; for the remaining candidate windows, using face characteristic point (landmark) information in the third-stage network to predict and locate the eye region; segmenting the eye region according to the eye characteristic points, inputting it into the fourth-stage network, and training a deep visual feature model of the eye image through a deep learning algorithm; passing the video collected by a camera successively through CNN1, CNN2, CNN3 and CNN4 to discriminate the open or closed state of the eyes; and calculating the driver fatigue visual assessment parameter PERCLOS, determining that the driver begins to feel fatigued or is in a fatigue state when the PERCLOS value is greater than 40%, and outputting an early warning signal. With this method, the fatigue state of the driver can be detected under various conditions of illumination, pose and expression, the detection results are robust, and the influence of factors such as illumination, pose and expression on driver fatigue detection is effectively overcome.
Owner:FUJIAN NORMAL UNIV
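A small sketch of the PERCLOS decision at the end of the pipeline, assuming the per-frame eye state has already been produced by the CNN cascade; the 40% threshold follows the abstract, while the window length and class structure are assumptions.

```python
from collections import deque

PERCLOS_THRESHOLD = 0.40   # fatigue threshold from the abstract (40%)

class PerclosMonitor:
    """Track the fraction of closed-eye frames over a sliding window of frames."""

    def __init__(self, window_frames=900):       # e.g. 30 s at 30 fps (assumption)
        self.states = deque(maxlen=window_frames)

    def update(self, eye_closed: bool) -> bool:
        """Add one frame's eye state; return True if a fatigue warning should fire."""
        self.states.append(1 if eye_closed else 0)
        perclos = sum(self.states) / len(self.states)
        return perclos > PERCLOS_THRESHOLD

# Usage: feed the eye-closure decision of each video frame.
monitor = PerclosMonitor()
for closed in [False, True, True, False]:        # stand-in for CNN4 outputs
    if monitor.update(closed):
        print("fatigue warning")
```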

Method and system for generating three-dimensional road model

The invention relates to a method for generating a three-dimensional road model, which belongs to the field of three-dimensional road modeling. The method comprises the following steps: (1) collecting road centerline data to obtain the two-dimensional node data of the road centerline; (2) analyzing the structural characteristics of the road and constructing a road model; (3) separating the centerline node data of ordinary road sections and intersections from the two-dimensional centerline data; (4) modeling the node data by invoking the road model to generate the characteristic point data of the edges of the ordinary road sections and intersections; (5) connecting the points corresponding to the edge characteristic point data to generate a three-dimensional mesh road model; and (6) mapping the corresponding texture onto the generated mesh model according to the attribute data of the road to obtain a three-dimensional solid model of the road. The invention also provides a system for generating the three-dimensional road model. With the disclosed method, the three-dimensional road model is generated fully automatically, manual intervention is avoided, and modeling efficiency is improved.
Owner:北京中恒丰泰科技有限公司
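One generic way to produce edge characteristic points from centerline nodes, as in step 4, is to offset each node perpendicular to the local direction by half the road width; this is an illustrative construction rather than the patent's specific road model, and the width value is an assumption.

```python
import numpy as np

def edge_points(centerline, width=7.0):
    """Offset 2-D centerline nodes by +/- width/2 along the local normal.

    Returns (left_edge, right_edge) arrays of characteristic points that,
    when connected, form the boundary of an ordinary road section.
    """
    pts = np.asarray(centerline, dtype=float)
    # Local direction at each node from the neighbouring nodes.
    d = np.gradient(pts, axis=0)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    normal = np.stack([-d[:, 1], d[:, 0]], axis=1)   # rotate direction by 90 degrees
    return pts + normal * width / 2, pts - normal * width / 2

left, right = edge_points([[0, 0], [10, 1], [20, 3], [30, 6]])
print(left.round(2))
```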

Tracking system based on binocular camera shooting

The invention relates to a fully automatic target detection and tracking system in the computer vision field. An input module collects the digital images shot by a binocular camera as the system input; the digital images are fed into a feature extraction module, and feature analysis is carried out on one image to obtain a set of characteristic points used in subsequent processing. By matching the characteristic points of the two images, the disparity between them is calculated, and by combining the previously known intrinsic and extrinsic parameters of the cameras, the coordinates of the characteristic points in the camera coordinate system can be calculated; furthermore, through the relationship between the world coordinate system and the camera coordinate system, the coordinates of the characteristic points in the world coordinate system are obtained. A clustering module clusters the characteristic points into groups that express target positions, while a trajectory analysis module estimates the target positions over a time sequence to obtain the motion trajectory of each target. The invention can effectively and robustly detect targets in a designated area, track them, and calculate their motion trajectories.
Owner:SHANGHAI JIAO TONG UNIV
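A brief sketch of the geometry described above: for a rectified stereo pair with focal length f, baseline B and disparity d, depth is Z = f·B/d, after which a known rotation and translation take the point from camera to world coordinates. All numeric values below are placeholders.

```python
import numpy as np

def camera_point(u, v, disparity, f=800.0, cx=320.0, cy=240.0, baseline=0.12):
    """Back-project a matched characteristic point into the camera coordinate system."""
    Z = f * baseline / disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Known camera-to-world transform (placeholder: camera 1.5 m above the world origin).
R = np.eye(3)
t = np.array([0.0, 0.0, 1.5])

p_cam = camera_point(400, 260, disparity=16.0)
p_world = R @ p_cam + t
print(p_cam, p_world)
```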

Image searching method

The invention discloses an image searching method, which comprises a training part and a searching part. The training part comprises the following steps: extracting characteristic points, supplementing the characteristic points and determining matching relationships, generating similar point sets, clustering the characteristic point sets, and generating the characteristic vector of each image in an image database. The searching part comprises the following steps: extracting the characteristic points of the picture to be retrieved and generating its characteristic point set; calculating the distance between each characteristic point descriptor vector and each cluster centre, and assigning the characteristic point to the cluster with the smallest distance; counting the frequency ni with which the characteristic points of the picture to be retrieved fall into each cluster; generating and normalizing the characteristic vector from the cluster frequencies ni and the probability logarithm wi of each cluster; and calculating the Euclidean distances between the characteristic vector of the picture to be retrieved and the characteristic vectors of the images in the picture library, and selecting the images with the smallest distances as the search result.
Owner:南京来坞信息科技有限公司
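The searching part reads like a bag-of-visual-words retrieval with an IDF-style weight wi; the sketch below builds the weighted, normalised query vector from descriptor-to-centre distances and ranks database images by Euclidean distance. Cluster centres, weights, and database vectors are assumed inputs, not data from the patent.

```python
import numpy as np

def query_vector(descriptors, centres, w):
    """Assign each descriptor to its nearest cluster centre, count the cluster
    frequencies n_i, weight them by w_i, and return the normalised vector."""
    # Distance of every descriptor to every centre, then nearest-centre assignment.
    dists = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    n = np.bincount(labels, minlength=centres.shape[0]).astype(float)
    v = n * w
    return v / (np.linalg.norm(v) or 1.0)

def search(query_desc, centres, w, db_vectors, top_k=5):
    """Rank database images by Euclidean distance to the query's characteristic vector."""
    q = query_vector(query_desc, centres, w)
    d = np.linalg.norm(db_vectors - q, axis=1)       # Euclidean distances
    return np.argsort(d)[:top_k]                     # indices of the closest images
```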

Video face cartoon animation generation method

Active CN105139438A · Good effect · High speed · Animation · Imaging processing · Cartoon animation
The present invention discloses a video face cartoon animation generation method belonging to the image processing technology field. The method comprises the following steps: first, extracting a frame with a frontal, neutral expression from the input video, performing cartoonization on this neutral-expression image, and recording the eyebrow contour points, the eye contour points, and the maximum height difference h between the upper and lower eyelids when the eyes are open; searching for a frame similar to the neutral-expression image as the initial conversion frame; based on the characteristic points of the previous frame, determining the characteristic point processing of the next frame, including the corresponding conversion processing of the eyebrows, eyes and mouth, and synthesizing the converted cartoon image; obtaining the next image frame corresponding to the synthesized cartoon image; and repeating the above steps and outputting multiple frames of continuous cartoon images to generate the cartoon animation. The method is used for cartoon animation generation, with a faster generation speed and a better generation effect.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
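One plausible use of the recorded maximum eyelid height difference h is to express each new frame's eye opening as a ratio for driving the cartoon eyes; the sketch below assumes eyelid contour points are available per frame, and the indexing and values are illustrative, not the patent's scheme.

```python
import numpy as np

def eye_openness(upper_lid_pts, lower_lid_pts, h_max):
    """Return the eye opening of the current frame as a fraction of the
    maximum upper/lower eyelid height difference h recorded for open eyes."""
    upper = np.asarray(upper_lid_pts, dtype=float)
    lower = np.asarray(lower_lid_pts, dtype=float)
    h = np.mean(lower[:, 1] - upper[:, 1])   # mean vertical eyelid gap, pixels
    return float(np.clip(h / h_max, 0.0, 1.0))

# Placeholder eyelid contour points (x, y) for one eye and a recorded h of 12 px.
print(eye_openness([[10, 40], [14, 38], [18, 40]],
                   [[10, 48], [14, 50], [18, 48]], h_max=12.0))
```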

Method for controlling robot based on visual sense

The invention provides a method for controlling a robot based on vision. The method comprises the following steps: (1) acquiring a gesture image of a human hand with a camera; (2) extracting the characteristic points of the human hand from the gesture image; (3) performing three-dimensional reconstruction on the characteristic points to obtain the positional relationship of the characteristic points of the human hand in three-dimensional space; (4) converting the coordinate points corresponding to the characteristic points of the human hand into the base coordinate system of the robot; (5) performing inverse kinematics calculation using the positional relationship of the human hand in the robot base coordinate system to obtain the joint angles of the robot; and (6) driving the robot with the calculated joint angles. The method has the following advantages: 1) control is intuitive, and the grasping gesture of the robot directly corresponds to the gesture of the human hand; 2) control is flexible, without contact with a cumbersome interaction tool; 3) by building on existing techniques, the operator can be assisted to operate more accurately and safely; 4) interruption and resumption of the operation, or replacement of the operator midway, are allowed; and 5) the operator does not need to walk over a wide range, so the operator's workload is reduced.
Owner:SOUTH CHINA UNIV OF TECH
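Step 4 is a rigid transform from the camera frame into the robot base frame; a minimal numpy sketch follows, with a made-up camera-to-base rotation and translation standing in for the values a real system would obtain from hand-eye calibration.

```python
import numpy as np

# Assumed camera-to-robot-base transform from a prior hand-eye calibration (placeholders).
R_base_cam = np.array([[0, -1, 0],
                       [-1, 0, 0],
                       [0, 0, -1]], dtype=float)   # placeholder rotation
t_base_cam = np.array([0.5, 0.0, 0.8])             # placeholder translation (m)

def to_base_frame(points_cam):
    """Convert reconstructed hand characteristic points from camera to base coordinates."""
    pts = np.asarray(points_cam, dtype=float)
    return pts @ R_base_cam.T + t_base_cam

hand_points_cam = [[0.02, 0.10, 0.60], [0.05, 0.08, 0.62]]  # placeholder 3-D points
print(to_base_frame(hand_points_cam))
```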

Method for fitting and interpolating G01 code based on quadratic B spline curve

The invention discloses a method for fitting and interpolating G01 code based on a quadratic B-spline curve, comprising the following steps: selecting, by an adaptive approach, the characteristic points of each group of small line segments described by the G01 code; fitting the path to be machined with a quadratic B-spline curve through all the characteristic points; according to the characteristics of the quadratic B-spline curve and the acceleration limits of each drive axis of the numerical control machine, obtaining the maximum permissible machining velocity curve (VLC curve) of the quadratic B-spline curve and each velocity key point on the VLC curve; computing the actual machining velocity according to each velocity key point, the controlling axis of each key point, the maximum permissible machining velocity, and the VLC curve; and computing interpolation points according to the actual machining velocity curve and the interpolation error, thereby completing real-time interpolation. The method has fast computation, high machining precision, stable performance, and a wide application range; it can complete the spline interpolation computation in real time and meet the high-speed, high-precision requirements of numerical control machining on the premise that the preset precision of the system is met.
Owner:ACAD OF MATHEMATICS & SYSTEMS SCIENCE - CHINESE ACAD OF SCI
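A minimal sketch of fitting selected characteristic points with a quadratic B-spline and sampling the result; it uses SciPy's generic parametric spline fitting (splprep/splev with k=2) rather than the patent's own fitting scheme, and the toolpath points are placeholders.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Characteristic points selected from the G01 small line segments (placeholder XY toolpath).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.3, -0.2, 0.1])

# Fit a quadratic (k=2) parametric B-spline through the points.
tck, u = splprep([x, y], k=2, s=0.0)

# Sample the curve densely; in a controller these samples would feed the interpolator.
u_fine = np.linspace(0, 1, 200)
x_fine, y_fine = splev(u_fine, tck)
print(x_fine[:3], y_fine[:3])
```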

Method and system for pasting image to human face based on affine transformation

Active CN104778712A · Solve the robustness problem · Natural effect · Image analysis · Character and pattern recognition · Curve fitting · Adaptive matching
The invention discloses a method and system for pasting an image onto a human face based on affine transformation. Characteristic point locating and extraction are carried out on a standard human face image, curve fitting is carried out on the extracted characteristic points of the standard face image through Lagrange interpolation, and standard texture coordinates are obtained; characteristic point locating is then carried out on the face image to be processed, and, according to the actual characteristic points of that image, the standard texture coordinates are adaptively matched to the corresponding characteristic points through affine transformation to obtain transformed texture coordinates; finally, the image pasting materials are drawn at the corresponding characteristic points of the face image to be processed using the transformed texture coordinates, and an effect face image is obtained. The method and system can thus adapt to face parts of various shapes, the processed face image looks more natural, and the robustness problem of automatic decoration is solved.
Owner:XIAMEN MEITUZHIJIA TECH
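The affine matching step can be sketched with OpenCV: estimate the affine transform from three standard landmarks to the landmarks detected in the target face, then warp the pasted material with it. The landmark coordinates and file names here are placeholders, not the patent's data.

```python
import numpy as np
import cv2

# Three standard-face landmarks and the matching landmarks detected in the target face
# (placeholder pixel coordinates; a real system would use its own landmark detector).
src = np.float32([[30, 40], [90, 40], [60, 90]])
dst = np.float32([[120, 150], [185, 145], [150, 205]])

# Affine transform mapping standard texture coordinates onto the target face.
M = cv2.getAffineTransform(src, dst)

face = cv2.imread("face_to_process.png")         # placeholder image paths
sticker = cv2.imread("sticker.png")

# Warp the pasting material into the target face's coordinate frame and blend it in.
warped = cv2.warpAffine(sticker, M, (face.shape[1], face.shape[0]))
mask = warped.sum(axis=2) > 0
face[mask] = warped[mask]
cv2.imwrite("effect_face.png", face)
```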

Parameter calibration method for cameras of vehicle-mounted all-round view system

Provided is a parameter calibration method for the cameras of a vehicle-mounted all-round view system. When the intrinsic and extrinsic parameters of the cameras of the vehicle-mounted all-round view system are calibrated, a set of three-dimensional calibration markers is adopted, comprising a number of characteristic points that are easily recognized in images and have known three-dimensional coordinates. When the vehicle to be calibrated is parked in the calibration area, image data are collected automatically, triggered manually or by external trigger signals; characteristic point extraction is carried out on the collected images, each camera is calibrated against the three-dimensional coordinates of the characteristic points, and positioning errors are reduced through processes such as parameter optimization. With the three-dimensional calibration markers, the field of view of each camera can be covered as much as possible, and the influence of camera distortion on the measurement errors during calibration is small; compared with a previous planar two-dimensional calibration board, the three-dimensional calibration board gives accurately selected characteristic point positions, clear imaging and low error values, and image stitching precision is greatly improved.
Owner:宁波舜宇智行传感技术有限公司
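The per-camera calibration against the markers' known 3D coordinates, and the error measure that a subsequent parameter optimization would drive down, might look like the hedged sketch below (solvePnP for the extrinsics, projectPoints for the reprojection error); all numeric inputs are placeholders.

```python
import numpy as np
import cv2

# Known 3D coordinates of the calibration markers' characteristic points (placeholders, metres).
marker_3d = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0.5, 0.5, 0.3], [0.2, 0.8, 0.6]], dtype=np.float64)
# The same points as detected in one camera's image (placeholder pixels).
marker_2d = np.array([[210, 330], [430, 325], [445, 180], [205, 175],
                      [330, 255], [265, 140]], dtype=np.float64)

K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], dtype=np.float64)  # intrinsics
dist = np.zeros(5)

# Extrinsic parameters of this camera relative to the marker/world frame.
ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, K, dist)

# Mean reprojection error, the quantity a parameter optimization would minimise.
proj, _ = cv2.projectPoints(marker_3d, rvec, tvec, K, dist)
err = np.linalg.norm(proj.reshape(-1, 2) - marker_2d, axis=1).mean()
print("mean reprojection error (px):", err)
```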