1366 results about "Angular point" patented technology

Optical-flow-based vehicle motion state estimation method

The invention discloses an optical-flow-based vehicle motion state estimation method, applicable to estimating the motion of vehicles travelling at low speed on flat bituminous pavement in a road traffic environment. The method includes: mounting a high-precision, downward-looking monocular camera at the center of the rear axle of a vehicle and obtaining the camera parameters through a calibration algorithm; preprocessing the acquired image sequence with histogram equalization to highlight the angular point features of the bituminous pavement and to reduce the adverse effects of pavement conditions and illumination changes; detecting the pavement angular point features in real time with the efficient Harris angular point detection algorithm; matching and tracking angular points between consecutive frames with the Lucas-Kanade optical flow algorithm, then further refining the matched angular points with the RANSAC (random sample consensus) algorithm to obtain more accurate optical flow information; and finally reconstructing the real-time motion parameters of the vehicle, such as longitudinal velocity, lateral velocity and sideslip angle, in the vehicle-body coordinate system, thereby realizing high-precision estimation of the vehicle's ground motion state.
Owner:SOUTHEAST UNIV
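
A minimal sketch of the corner-tracking stage described above (Harris detection, Lucas-Kanade tracking, RANSAC refinement), assuming OpenCV and grayscale frames; the camera calibration and the reconstruction of longitudinal velocity, lateral velocity and sideslip angle are not reproduced, and all detector parameters are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def track_pavement_corners(prev_gray, curr_gray):
    """Return RANSAC-filtered corner correspondences between consecutive frames."""
    # Histogram equalization to highlight pavement corner features.
    prev_eq = cv2.equalizeHist(prev_gray)
    curr_eq = cv2.equalizeHist(curr_gray)

    # Harris-based corner detection on the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_eq, maxCorners=300, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True, k=0.04)
    if pts is None:
        return None, None

    # Lucas-Kanade optical flow between the two frames.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_eq, curr_eq, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    good_prev = pts[ok].reshape(-1, 2)
    good_next = nxt[ok].reshape(-1, 2)

    # RANSAC (here via a homography fit) to discard mismatched corners.
    if len(good_prev) >= 4:
        _, inliers = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
        if inliers is not None:
            mask = inliers.ravel() == 1
            return good_prev[mask], good_next[mask]
    return good_prev, good_next
```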

Land software tool

Disclosed is a network-accessible tool capable of providing map and satellite image data, as well as other photographic image data, to locate, identify, measure, view, and communicate information about land over the Internet to Internet users. The network-accessible tool includes a location tool that allows the user to locate areas on a map using geographic names; township, range and section descriptions; county names; latitude and longitude coordinates; or zip codes. It also includes a metes-and-bounds tool that draws boundaries on the map and image data in response to metes-and-bounds descriptions entered by the Internet user, and a lat/long drawing tool that draws boundaries on the map and image data based upon latitude and longitude coordinate pairs entered by the Internet user. A cursor drawing tool allows the Internet user to draw and edit boundaries on the map and image data by simply clicking the cursor on the corner points of the boundary. An acreage calculation tool calculates the acreage of an enclosed boundary, and a distance measurement tool is also provided. A cursor information tool provides the name and creation date of the map and image data according to the location of the cursor on the screen. The information can be communicated by printing, downloading, or e-mailing.
Owner:LANDNET CORP
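
As one illustration of the kind of computation an acreage calculation tool performs, the sketch below approximates the acreage of a boundary given as latitude/longitude vertex pairs by projecting to a local plane and applying the shoelace formula; the projection, constants and sample coordinates are illustrative assumptions, not taken from the patent.

```python
import math

EARTH_RADIUS_M = 6371000.0
SQM_PER_ACRE = 4046.8564224

def polygon_acreage(latlon_pairs):
    """Approximate acreage of a closed boundary given (lat, lon) vertices in degrees."""
    lat0 = math.radians(sum(lat for lat, _ in latlon_pairs) / len(latlon_pairs))
    # Equirectangular projection to local x/y coordinates in meters.
    xy = [(EARTH_RADIUS_M * math.radians(lon) * math.cos(lat0),
           EARTH_RADIUS_M * math.radians(lat)) for lat, lon in latlon_pairs]
    # Shoelace formula for the enclosed polygon area.
    area = 0.0
    for (x1, y1), (x2, y2) in zip(xy, xy[1:] + xy[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0 / SQM_PER_ACRE

# Example: a roughly quarter-mile-square parcel (about 40 acres).
print(polygon_acreage([(40.000, -100.000), (40.00363, -100.000),
                       (40.00363, -99.99526), (40.000, -99.99526)]))
```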

Fusion calibration method of three-dimensional laser radar and binocular visible light sensor

The invention discloses a fusion calibration method for a three-dimensional laser radar and a binocular visible light sensor. The laser radar and the binocular visible light sensor are used to obtain the three-dimensional coordinates of the plane vertices of a square calibration plate, and registration is then carried out to obtain the conversion relation between the two coordinate systems. In the calibration process, a RANSAC algorithm is adopted to carry out plane fitting on the point cloud of the calibration plate, and the point cloud is projected onto the fitted plane, so that the influence of measurement errors on vertex coordinate calculation is reduced. For the binocular camera, the vertices of the calibration plate are obtained by an angular point diagonal fitting method; for the laser radar, a distance-difference statistical method is adopted to determine the boundary points of the point cloud on the calibration plate. Using the obtained vertex coordinates of the calibration plate, fusion calibration of the three-dimensional laser radar and the binocular visible light sensor can be carried out accurately, the rotation matrix and translation vector between their coordinate systems are obtained, and a foundation is laid for realizing data fusion of the three-dimensional point cloud and the two-dimensional visible light image.
Owner:BEIHANG UNIV
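
A minimal sketch of the registration step only, assuming the calibration-plate vertex coordinates have already been measured in the LiDAR frame and the binocular-camera frame: an SVD-based rigid alignment (Kabsch/Umeyama) recovers the rotation matrix and translation vector. The plane fitting, projection and boundary-point detection described in the abstract are not reproduced here.

```python
import numpy as np

def rigid_transform(points_lidar, points_camera):
    """Return R (3x3) and t (3,) such that R @ p_lidar + t approximates p_camera."""
    P = np.asarray(points_lidar, dtype=float)
    Q = np.asarray(points_camera, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```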

Image feature matching method

The invention relates to an image feature matching method. The method includes the following steps: pre-processing an obtained CCD (charge-coupled device) image; extracting feature points of the pre-processed CCD image with a SURF operator and matching them under a quasi-epipolar constraint and a minimum Euclidean distance condition to obtain corresponding point (identical point) information; establishing an affine deformation relation between the CCD images according to the obtained corresponding points; extracting feature points of a reference image with a Harris corner extraction operator and projecting them onto a search image through the affine transformation to obtain points to be matched; in a neighbourhood around each point to be matched, computing the correlation coefficients between the feature point and the points in the neighbourhood and taking the extreme points as corresponding points; and using the combined results of the two matching passes as the final corresponding point information. The method can match surface images of deep-space celestial bodies acquired in a deep-space environment, obtaining high-precision corresponding point information for the CCD images and thereby realizing feature matching.
Owner:THE PLA INFORMATION ENG UNIV
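
A hedged sketch of the second matching stage described above: Harris corners from the reference image are projected into the search image with the affine transform, and each projected point is refined by maximizing the normalized cross-correlation in a small neighbourhood. Window sizes and the acceptance threshold are illustrative assumptions, and OpenCV is assumed; the SURF/epipolar first stage is not reproduced.

```python
import cv2
import numpy as np

def refine_by_ncc(ref_gray, search_gray, affine_2x3, half_tpl=10, half_search=15):
    """affine_2x3: 2x3 matrix mapping reference pixel coordinates into the search image."""
    corners = cv2.goodFeaturesToTrack(ref_gray, maxCorners=500, qualityLevel=0.01,
                                      minDistance=10, useHarrisDetector=True)
    if corners is None:
        return []
    matches = []
    for x, y in corners.reshape(-1, 2):
        # Template around the reference corner.
        tpl = cv2.getRectSubPix(ref_gray, (2 * half_tpl + 1, 2 * half_tpl + 1), (x, y))
        # Predicted location in the search image via the affine transform.
        px, py = affine_2x3 @ np.array([x, y, 1.0])
        win = cv2.getRectSubPix(search_gray,
                                (2 * (half_search + half_tpl) + 1,
                                 2 * (half_search + half_tpl) + 1), (px, py))
        # Normalized cross-correlation over the search window.
        res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:  # keep only strong correlation peaks
            dx, dy = loc[0] - half_search, loc[1] - half_search
            matches.append(((x, y), (px + dx, py + dy), score))
    return matches
```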

Method for automatic extraction of license plate position in vehicle monitoring image and perspective correction

Status: Inactive | Publication: CN106203433A | Concepts: character and pattern recognition; in-vehicle; angular point | Benefits: perspective correction accurately implemented; precise positioning
The invention discloses a method for automatically extracting the position of a license plate in a vehicle monitoring image and performing perspective correction. The method comprises the steps of: finding the approximate range of the license plate in the image using edge and color information; carrying out straight-line detection on the image; determining the four boundary lines of the perspectively deformed license plate from the region boundary information combined with the significant straight lines; solving the four intersection points of the significant straight lines and building a mapping relation between these four intersection points and the four corners of a target rectangle; and inversely calculating a perspective correction matrix to complete the perspective correction of the license plate. The method thus obtains the positions of the four corners of a license plate with perspective distortion, builds the mapping between them and the four corners of the target rectangle, calculates the perspective transformation matrix, and completes the perspective correction. Compared with an affine transformation method based on horizontal and vertical rotation, the method describes the true perspective transformation and guarantees higher license plate image correction precision; unlike an affine transformation algorithm, it can detect the perspective distortion of the license plate, precisely locate its boundary corners, calculate the perspective transformation matrix, and accurately achieve the perspective correction.
Owner:XIDIAN UNIV
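
A minimal sketch of the final correction step: once the four boundary corners of the distorted plate have been located, a perspective transform is solved from the corner-to-rectangle correspondences and applied to rectify the plate. OpenCV is assumed, and the 440x140 target size is an illustrative assumption (a common plate aspect ratio), not a value from the patent.

```python
import cv2
import numpy as np

def rectify_plate(image, plate_corners, out_w=440, out_h=140):
    """plate_corners: 4x2 array ordered top-left, top-right, bottom-right, bottom-left."""
    src = np.asarray(plate_corners, dtype=np.float32)
    dst = np.float32([[0, 0], [out_w - 1, 0],
                      [out_w - 1, out_h - 1], [0, out_h - 1]])
    M = cv2.getPerspectiveTransform(src, dst)   # 3x3 perspective transformation matrix
    return cv2.warpPerspective(image, M, (out_w, out_h))
```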

Flexible-target-based close-range large-field-of-view calibration method for a high-speed camera

The invention, which belongs to the field of computer vision, provides a flexible-target-based close-range large-field-of-view calibration method for a high-speed camera, and relates to the calibration of a close-range large-field-of-view binocular vision camera in a wind tunnel. In the method, a flexible target is used to fill the entire calibration field of view. The interior region of the target is formed by a planar chessboard grid with known distances between its angular points; the exterior region is formed by crossed target rods perpendicular to each other, on which a number of coded marker points with known spacing are uniformly distributed. During calibration, region-by-region, constraint-based calibration of the high-speed camera is carried out using the different constraint information provided by the different regions of the target: the interior region is calibrated using a homography matrix, while the exterior region is calibrated using the distance constraints of the coded marker points. The method lowers the cost and makes the operation convenient and portable. Because the region-by-region, constraint-based calibration accounts for the distortion of different regions, the calibration precision is improved.
Owner:DALIAN UNIV OF TECH
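
A hedged sketch of the interior-region step only: detect the chessboard corners of the target's planar grid and estimate the homography between the known board-plane coordinates and their image projections. Zhang-style intrinsic recovery from several such homographies and the coded-marker distance constraints of the exterior region are not reproduced; the board dimensions and square size are illustrative assumptions.

```python
import cv2
import numpy as np

def chessboard_homography(image_gray, board_size=(9, 6), square_mm=20.0):
    found, corners = cv2.findChessboardCorners(image_gray, board_size)
    if not found:
        return None
    # Refine corner locations to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(image_gray, corners, (11, 11), (-1, -1), criteria)
    # Known planar coordinates of the grid corners (in millimetres), row-major order.
    obj = np.array([[c * square_mm, r * square_mm]
                    for r in range(board_size[1]) for c in range(board_size[0])],
                   dtype=np.float32)
    # Homography from the board plane to the image plane.
    H, _ = cv2.findHomography(obj, corners.reshape(-1, 2), cv2.RANSAC, 2.0)
    return H
```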

Method for reconstructing the outer outline polygon of a building based on multivariate data

The invention discloses a method for reconstructing the outer outline polygon of a building based on multivariate data. The method comprises the following steps: segmenting DSM (digital surface model) data and image data separately to obtain a mask image of the building region of interest and image segmentation objects; combining the mask image with the image segmentation objects to obtain a complete building object; carrying out boundary tracing on the building object to obtain the building's boundary curves; taking the points corresponding to local curvature maxima of the curves as angular points; connecting the angular points in sequence to obtain the outline polygon of the building; partitioning the building object into regions with a hierarchical clustering method and calculating the main direction of the building; establishing a straight-line model for each segment of the building polygon, and correcting and regularizing the outline's line model by combining the main direction of the building with the gradient information of the image data; calculating the intersection point of every two adjacent straight-line segments using the line model of each segment; and, taking the intersection points as angular points, connecting them in sequence to form the final building polygon. The method organically combines the DSM data with the image data, the two data sources complementing each other throughout the process, so that the problem of reconstructing the building outline polygon is solved well, and the method is highly robust for two-dimensional building outline modeling.
Owner:NANJING UNIV
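
A hedged sketch of one step in the pipeline: selecting angular points as local curvature maxima along a traced building boundary. Curvature is approximated by finite differences on a Gaussian-smoothed closed contour; NumPy/SciPy are assumed, and the smoothing scale and peak threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_corners(contour_xy, sigma=3.0, min_curvature=0.05):
    """contour_xy: Nx2 array of boundary points in traversal order (closed contour)."""
    x = gaussian_filter1d(contour_xy[:, 0].astype(float), sigma, mode="wrap")
    y = gaussian_filter1d(contour_xy[:, 1].astype(float), sigma, mode="wrap")
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Signed curvature of the parametric curve, made positive.
    kappa = np.abs(dx * ddy - dy * ddx) / ((dx * dx + dy * dy) ** 1.5 + 1e-12)
    # Local maxima of curvature above the threshold become angular points.
    is_peak = (kappa > np.roll(kappa, 1)) & (kappa >= np.roll(kappa, -1)) & (kappa > min_curvature)
    return contour_xy[is_peak]
```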

Circular-mask area-ratio determination-based concave point segmentation method for images of adhered particles

The present invention relates to a concave point segmentation method for images of adhered (touching) particles based on circular-mask area-ratio determination. The method comprises the following steps: 1) carrying out image pre-processing to obtain a binary image of the particle image; 2) carrying out coarse concave point detection to obtain an angular point image of the particle image; 3) carrying out accurate concave point detection, using an area-ratio method to obtain all concave points within the region contour that can be used for particle segmentation; 4) carrying out concave point pairing, the selected concave point pairs serving as the segmentation points of the adhered particle image; 5) constructing the segmentation lines of the particles, obtaining the contour coordinates of individual particles, and combining the coordinates of the two segmentation points to obtain complete particle contours. By combining an angular point detection method with a method based on concave point analysis, the method avoids the computational cost of a purely area-based concave point search; it requires only a few parameters to be set and no large amount of sample training; and the segmentation paths can be replanned, so the method adapts to images whose particles vary in shape and size.
Owner:WEIFANG UNIVERSITY
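
A hedged sketch of concave-point detection on a binary image of touching particles using convexity defects, assuming OpenCV; the patent's circular-mask area-ratio test, the pairing rules and the split-line construction are not reproduced, and the defect-depth threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def concave_points(binary_img, min_depth_px=5.0):
    """Return candidate concave points on the contours of a binary particle image."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    points = []
    for cnt in contours:
        hull = cv2.convexHull(cnt, returnPoints=False)
        if len(hull) < 4:
            continue
        defects = cv2.convexityDefects(cnt, hull)
        if defects is None:
            continue
        for start, end, far, depth in defects.reshape(-1, 4):
            # depth is stored as fixed-point (value * 256); keep only deep concavities.
            if depth / 256.0 > min_depth_px:
                points.append(tuple(cnt[far][0]))  # deepest point of the concavity
    return points
```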

Method and device for positioning and tracking the eye and mouth corners of human faces

Status: Inactive | Publication: CN101216882A | Concepts: character and pattern recognition; face detection; angular point | Benefits: improved synthesis ability; solves inaccurate corner positioning
The invention discloses a method and a device for positioning and tracking the eye corners and mouth corners of a human face. First, a face detection algorithm is used to obtain the position of the face; an AAM algorithm is then applied to the detected face to obtain an affine transformation coefficient and preliminary positions of the six corner points of the eyes and mouth. AdaBoost training models of the corner points are used to search for candidate points in a neighborhood, yielding a certain number of candidates for each corner point; the Harris corner features of all the corner points are calculated, and the number of candidates for each corner point is reduced according to certain rules. The candidate points for the corners of the left eye, the right eye, and the mouth are respectively combined into pairs; the point pairs are gradually eliminated using several features, and finally the optimum result is returned. The scheme provided by the embodiments of the invention solves the problem of inaccurate positioning of the eye and mouth corner points of a human face in various poses, and realizes the positioning of the outer contours of the eyes and mouth, thereby providing a feasible scheme for driving two-dimensional and three-dimensional face models.
Owner:VIMICRO CORP
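
A hedged sketch of the Harris-based pruning step: compute the Harris response map and keep only the eye/mouth corner candidates with the highest responses, as a stand-in for the "Harris corner features" rule mentioned above. OpenCV is assumed; the block size, aperture and keep ratio are illustrative assumptions, and the AAM and AdaBoost stages are not reproduced.

```python
import cv2
import numpy as np

def prune_candidates_by_harris(gray_face, candidates, keep_ratio=0.5):
    """candidates: list of (x, y) pixel positions proposed for a given corner point."""
    # Harris corner response over the (grayscale) face region.
    response = cv2.cornerHarris(np.float32(gray_face), blockSize=3, ksize=3, k=0.04)
    scores = [response[int(round(y)), int(round(x))] for x, y in candidates]
    # Keep the top fraction of candidates ranked by Harris response.
    order = np.argsort(scores)[::-1]
    keep = order[: max(1, int(len(candidates) * keep_ratio))]
    return [candidates[i] for i in keep]
```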