
1054 results about "Gradient direction" patented technology

The direction of the gradient is simply the arctangent of the y-gradient divided by the x-gradient, tan⁻¹(sobel_y / sobel_x). Each pixel of the resulting image contains the angle of the gradient away from horizontal, in radians, covering the range −π/2 to π/2.
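A minimal sketch of this computation with OpenCV and NumPy (the function name and the epsilon guard are illustrative, not part of the passage):

```python
import cv2
import numpy as np

def gradient_direction(gray):
    """Return the per-pixel gradient direction in radians, in [-pi/2, pi/2]."""
    # Horizontal and vertical Sobel derivatives.
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # arctan(Gy / Gx); np.arctan keeps the result in [-pi/2, pi/2], matching
    # the range described above. A tiny epsilon avoids division by zero where
    # the x-gradient vanishes.
    return np.arctan(sobel_y / (sobel_x + 1e-12))

# Usage (hypothetical file name):
# gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
# theta = gradient_direction(gray)
```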

Fast high-precision geometric template matching method enabling rotation and scaling functions

The present invention provides a fast, high-precision geometric template matching method that supports rotation and scaling. Based on image edge information, the method takes the sub-pixel edge points of a template image as feature points and the tangent directions of the gradient directions of a target image as feature lines. Using the local extremum points of a similarity value, the similarity and the candidate positions of the template image within the target image are computed in a coarse-to-fine manner with a pyramid algorithm and a similarity function. Pixel-level positioning accuracy is obtained at the bottommost layer of the pyramid, and a least-squares fine adjustment then yields sub-pixel positioning accuracy together with higher-precision angle and scale estimates. The method achieves fast, stable, high-precision positioning and identification of target images that are translated, rotated, scaled or partially occluded, and remains robust to illumination changes, uneven illumination and cluttered backgrounds. It can be applied wherever machine vision is required for target positioning and identification.
Owner:吴晓军
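The patent's similarity function is not reproduced above; a common gradient-direction similarity used in geometric template matching averages the cosine of the angle difference between template and target gradient directions at each feature point. The sketch below illustrates only that idea, with all names chosen for illustration:

```python
import numpy as np

def direction_similarity(template_angles, image_angles, points, offset):
    """Mean cosine of the gradient-direction difference between template
    feature points and the image, evaluated at one candidate offset.

    template_angles : dict {(row, col): gradient angle} at template feature points
    image_angles    : 2-D array of image gradient angles (radians)
    points          : list of (row, col) template feature points
    offset          : (row, col) candidate placement of the template in the image
    """
    dr, dc = offset
    diffs = [template_angles[p] - image_angles[p[0] + dr, p[1] + dc] for p in points]
    # Close to 1.0 when the gradient directions agree at every feature point.
    return float(np.mean(np.cos(diffs)))
```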

Face recognition method of deep convolutional neural network

The invention discloses a face recognition method based on a deep convolutional neural network that reduces time complexity and lets the network weights retain high classification capacity even when the number of training samples is reduced. The method comprises a training stage and a classification stage. The training stage comprises the steps of (1) randomly generating the weights w_j between the input units and the hidden units and the biases b_j of the hidden units, where j = 1, ..., L indexes the weights and biases and L is their total number; (2) inputting a training image Y and its label and, using the forward conduction formula h_{W,b}(x) = f(W^T x), where h_{W,b}(x) is the output value and x the input, calculating the output h_{W,b}(x^(i)) of each layer; (3) calculating the error of the last layer from the label value and the output of the last layer; (4) calculating the error of each layer from the error of the last layer and obtaining the gradient direction; and (5) updating the weights. The classification stage comprises the steps of (a) keeping all parameters in the network unchanged and recording the category vector output by the network for each training sample; (b) calculating the residual delta = ||h_{W,b}(x^(i)) - y^(i)||^2; and (c) classifying a test image according to the minimum residual.
Owner:BEIJING UNIV OF TECH
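A minimal NumPy sketch of the forward conduction formula h_{W,b}(x) = f(W^T x + b) and the minimum-residual classification rule of steps (b) and (c); the choice of a sigmoid for f and all function names are assumptions for illustration:

```python
import numpy as np

def forward(W, b, x):
    """Forward conduction of one layer: h_{W,b}(x) = f(W^T x + b), with f a sigmoid."""
    return 1.0 / (1.0 + np.exp(-(W.T @ x + b)))

def classify(test_output, class_vectors):
    """Assign the class whose recorded category vector gives the smallest
    residual ||h(x) - y||^2, as in the classification stage above."""
    residuals = [np.sum((test_output - y) ** 2) for y in class_vectors]
    return int(np.argmin(residuals))
```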

Real-time robust far infrared vehicle-mounted pedestrian detection method

The invention discloses a real-time, robust far-infrared vehicle-mounted pedestrian detection method. The method comprises the steps of locating potential pedestrian pre-selection areas in an input image through vertical projection of pixel gradients; searching for regions of interest within the pre-selection areas using a local-threshold method and morphological post-processing; extracting a multi-stage entropy-weighted gradient direction histogram as the feature description of each region of interest; feeding the histogram to a support vector machine pedestrian classifier for online judgment of the region; achieving pedestrian detection through multi-frame verification and screening of the classifier's judgments; dividing the training sample space according to the sample height distribution and building a classification framework with a three-branch structure; and collecting hard samples and training the pedestrian classifier iteratively by combining a bootstrap method with an early-termination method. The method not only improves pedestrian detection accuracy but also reduces the false alarm rate, raises the input-image processing speed and the generalization capability of the classifier, and provides an effective vehicle-mounted pedestrian early-warning method for night driving.
Owner:SOUTH CHINA UNIV OF TECH
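A rough sketch of an entropy-weighted gradient direction histogram for a single region of interest; the exact multi-stage weighting of the patent is not given above, so the entropy weight below is one plausible reading and every name is illustrative:

```python
import numpy as np

def entropy_weighted_hog(magnitude, angle, bins=9):
    """Gradient-direction histogram of a region, scaled by the entropy of its
    orientation distribution.

    magnitude, angle : 2-D arrays of gradient magnitude and direction
                       (angles in radians, in [0, pi))
    """
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, np.pi), weights=magnitude)
    p = hist / (hist.sum() + 1e-12)            # normalised orientation distribution
    entropy = -np.sum(p * np.log2(p + 1e-12))  # information content of the region
    return hist * entropy                      # weight the descriptor by its entropy
```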

Random convolutional neural network-based high-resolution image scene classification method

The invention discloses a high-resolution image scene classification method based on random convolutional neural networks. The method comprises the steps of removing the data mean and obtaining a to-be-classified image set and a training image set; randomly initializing a model-shared parameter library; calculating the negative gradient directions for the to-be-classified image set and the training image set; training a basic convolutional neural network model and its weight; predicting with the update function to obtain an additive model; and, once the iteration reaches the maximum number of training rounds, identifying the to-be-classified image set with the additive model. Features are learned hierarchically with a deep convolutional network, and model aggregation learning is carried out with a gradient boosting scheme, so the tendency of a single model to fall into a local optimum is mitigated and network generalization is improved. A random parameter-sharing mechanism is added to the model training process, which improves training efficiency, allows the features to be learned hierarchically at reasonable time cost, and makes the learned features more robust for scene identification.
Owner:WUHAN UNIV
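The model aggregation described above reads like gradient boosting of convolutional base models; a toy NumPy sketch of that additive scheme with a squared loss follows (base learners are arbitrary callables, and every name, learning rate and round count is illustrative):

```python
import numpy as np

def boost(train_fn, X, y, rounds=3, lr=0.5):
    """Fit an additive model F(x) = sum_m lr * h_m(x), where each h_m is trained
    on the negative gradient of the squared loss (the current residual)."""
    models, F = [], np.zeros_like(y, dtype=float)
    for _ in range(rounds):
        residual = y - F              # negative gradient of 0.5 * (y - F)^2 w.r.t. F
        h = train_fn(X, residual)     # train a base model on the residual
        models.append(h)
        F = F + lr * h(X)             # update the additive model
    return models

def predict(models, X, lr=0.5):
    return sum(lr * h(X) for h in models)
```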

Fault identification method of high voltage transmission line based on computer vision

The invention relates to a high-voltage transmission line fault identification method based on computer vision, in the technical field of high-voltage transmission line operating-state monitoring, and aims to solve the high false alarm rate of existing high-voltage transmission line on-line monitoring systems. Step 1: edge detection is carried out on the transmission line image with an edge detection algorithm to obtain a strong-edge image, from which edge endpoints and edge directions are extracted. Since the gradient direction of an edge endpoint is perpendicular to the edge direction, an edge connection window is selected according to the gradient direction of the endpoint, edge connection points within the window are selected with a Hough transform method, and the connection points are joined into an edge image. Step 2: the transmission lines are screened from the edge images using a transmission line detection algorithm based on phase grouping. Step 3: the detected transmission conductors are processed to identify faults on the transmission line.
Owner:国网黑龙江省电力有限公司佳木斯供电公司 +2
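A small sketch of how a connection window can be placed at an edge endpoint: because the gradient direction is perpendicular to the edge, the window is centred a few pixels ahead along the edge direction. The step size, window size and angle convention are illustrative assumptions:

```python
import numpy as np

def connection_window(endpoint, gradient_angle, step=5, half_size=3):
    """Centre of a square search window placed ahead of an edge endpoint.

    The edge direction is taken as the gradient direction rotated by 90 degrees.
    Returns the window centre (row, col) and its half-size in pixels.
    """
    r, c = endpoint
    edge_angle = gradient_angle + np.pi / 2.0
    centre = (int(round(r + step * np.sin(edge_angle))),
              int(round(c + step * np.cos(edge_angle))))
    return centre, half_size
```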

Rapidly converged scene-based non-uniformity correction method

CN102538973A (inactive). Tags: prevent erroneous updates, bug update avoidance, radiation pyrometry, phase correlation, steepest descent
The invention discloses a rapidly converging scene-based non-uniformity correction method in which non-uniformity correction is achieved by minimizing the inter-frame registration error of two adjacent images. The method mainly comprises the following steps: initializing the gain and offset correction parameters and acquiring an uncorrected original image; acquiring a new uncorrected original image and applying non-uniformity correction to it and to the previous uncorrected image using the current correction parameters; obtaining the relative displacement, scene correlation coefficient and inter-frame registration error of the two corrected images with an origin-masking phase correlation method; and updating the correction parameters along the negative gradient direction with a steepest-descent method. The method offers high correction accuracy, fast convergence, no ghosting artifacts, and low computation and storage requirements. It is especially suitable for integration into an infrared focal plane imaging system, improving the imaging quality, environmental adaptability and temporal stability of an infrared focal plane array.
Owner:NANJING UNIV OF SCI & TECH
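A stripped-down sketch of the gain/offset correction and the steepest-descent parameter update; the gradients of the inter-frame registration error are taken as given, since the origin-masking phase correlation itself is beyond a few lines, and all names are illustrative:

```python
def correct(raw, gain, offset):
    """Apply per-pixel non-uniformity correction: corrected = gain * raw + offset."""
    return gain * raw + offset

def update(gain, offset, grad_gain, grad_offset, lr=1e-3):
    """Steepest descent: step each correction parameter along the negative
    gradient of the inter-frame registration error."""
    return gain - lr * grad_gain, offset - lr * grad_offset
```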

Rapid sub pixel edge detection and locating method based on machine vision

The invention discloses a rapid sub-pixel edge detection and locating method based on machine vision. The method includes the following steps: first, a detection image is acquired; second, denoising pre-treatment is applied to the image; third, the horizontal gradient Gx and vertical gradient Gy of each pixel are calculated; fourth, the gradient magnitude G0 and gradient direction Gθ of each pixel are calculated in polar coordinates; fifth, the neighborhood pixels of each pixel are determined; sixth, pixel-level edge points are determined; seventh, for each pixel-level edge point, the distance to its sub-pixel edge point in each of the eight discretized gradient directions is calculated; eighth, the distance d between each sub-pixel edge point in the actual gradient direction Gθ and the corresponding pixel-level edge point is calculated; ninth, a cosine lookup-table method is used to compute the rectangular coordinates of each sub-pixel edge point in the actual gradient direction Gθ, so that image edge points are detected and localized at sub-pixel level. The whole method offers high calculation accuracy and speed.
Owner:湖南湘江时代机器人研究院有限公司
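One standard way to obtain the sub-pixel offset along the gradient direction is a parabolic fit to three gradient magnitudes sampled across the edge. The formula below is that common fit, not necessarily the patent's exact lookup-table variant:

```python
def subpixel_offset(g_prev, g_center, g_next):
    """Offset (in pixels) of the true edge from the pixel-level edge point,
    from a parabola through three gradient magnitudes sampled one step behind,
    at, and one step ahead along the gradient direction. Assumes the centre
    sample is a local maximum, so the offset lies in [-0.5, 0.5]."""
    denom = g_prev - 2.0 * g_center + g_next
    if denom == 0.0:
        return 0.0
    return 0.5 * (g_prev - g_next) / denom

# Example: magnitudes 10, 20, 14 sampled behind, at, and ahead of the edge
# give an offset of +0.125 pixels toward the larger neighbour.
```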

Edge-oriented self-adaptive image interpolation method and VLSI implementation device thereof

The invention discloses an edge-oriented self-adaptive image interpolation method and a VLSI implementation device thereof. In the method, the gradient magnitude and gradient direction of each source image pixel are computed, and edge information is obtained by comparing the gradient magnitude with a locally adaptive threshold, the edge direction being perpendicular to the gradient direction. The edge direction is classified, the edge information is filtered, and the image is divided into a regular edge area and a non-edge area. Interpolation in the regular edge area is carried out along the edge direction, using an improved bicubic interpolation method, a slant bicubic interpolation method and a slant bilinear interpolation method based on local gradient information, chosen according to the edge-information classification; the non-edge area is interpolated with the improved bicubic interpolation method based on local gradient information. The VLSI implementation device comprises an edge-information extraction module, a self-adaptive interpolation module, an input line/field synchronization control module and a post-scaling line/field synchronization control module. The method and device effectively improve the quality of image interpolation at high scaling factors and lend themselves to an integrated-circuit architecture.
Owner:XI AN JIAOTONG UNIV
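A compact sketch of the edge / non-edge split driven by a locally adaptive threshold on the gradient magnitude; the window size and the mean-plus-k-standard-deviations rule are illustrative assumptions rather than the patented rule:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_mask(grad_mag, window=7, k=1.5):
    """Mark pixels whose gradient magnitude exceeds a local adaptive threshold."""
    local_mean = uniform_filter(grad_mag, size=window)
    local_sq_mean = uniform_filter(grad_mag ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    threshold = local_mean + k * local_std
    return grad_mag > threshold
```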

Target tracking method based on multi-characteristic adaptive fusion and kernelized correlation filtering technology

The invention provides a target tracking method based on multi-feature adaptive fusion and kernelized correlation filtering. The method comprises the steps of obtaining a candidate region of target motion from the target position and scale tracked in the previous frame; extracting gradient direction histogram features and color features of the candidate region, fusing the two kinds of features, applying a Fourier transform to obtain a feature spectrum, and computing the kernelized correlation; determining the position and scale of the target in the current frame and obtaining the target region; extracting gradient direction histogram features and color features of the target region, fusing them, applying a Fourier transform to obtain a feature spectrum, and computing the kernelized autocorrelation; designing the adaptive target correlation and training a position filter model and a scale filter model; and updating the feature spectra and the correlation filters with linear interpolation. The method improves the discriminative capability of the models, improves the robustness of tracking under complex scenes and appearance changes, reduces computational complexity and improves tracking timeliness.
Owner:NANJING UNIV OF SCI & TECH
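Kernelized correlation in trackers of this family is usually a Gaussian kernel evaluated in the Fourier domain so that all cyclic shifts are handled at once. The sketch below is that standard KCF-style computation for single-channel features, given as background rather than as the patent's exact formula:

```python
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation between two feature patches of equal shape,
    computed with FFTs so that every cyclic shift is evaluated at once."""
    cross = np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)).real
    dist = np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * cross
    return np.exp(-np.maximum(dist, 0.0) / (sigma ** 2 * x.size))
```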

Water surface optical visual image target area detection method based on gradient information fusion

The invention provides a water-surface optical visual image target area detection method based on gradient information fusion. The method uses two sliding-window modes to compute the longitudinal and transverse gradients of a water-surface optical visual image, fuses the information of the two gradients, marks region positions with a connected-component detection method, and marks the target area according to the final boundary of the target. Drawing on the characteristics of the sea boundary area in the water-surface image, the method extracts target boundary information separately in the longitudinal and transverse gradient directions, determines the attributes of the sea boundary line and the partition of the image processing space, determines the region-type attribute of the target boundary from the fused information, and completes pixel scanning and classification according to the boundary properties. Because the method fuses the information of the two gradient directions using the characteristics of the sea boundary lines, it narrows the processing area, reduces the influence of noise, avoids computation over the whole image space, and saves computation time.
Owner:HARBIN ENG UNIV
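A minimal sketch of fusing the longitudinal and transverse gradients and marking candidate regions with connected-component labelling; the Sobel kernels, the sum-of-absolute-values fusion and the threshold are illustrative assumptions:

```python
import numpy as np
import cv2
from scipy.ndimage import label, find_objects

def candidate_regions(gray, thresh=40):
    """Fuse longitudinal and transverse gradients, then return bounding slices
    of connected regions where the fused gradient is strong."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # transverse (horizontal) gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # longitudinal (vertical) gradient
    fused = np.abs(gx) + np.abs(gy)                   # fuse the two gradient directions
    mask = fused > thresh
    labels, _ = label(mask)                           # connected-component detection
    return find_objects(labels)
```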