Bionic vision transformation-based image RSTN (rotation, scaling, translation and noise) invariant attributive feature extraction and recognition method

An attribute-feature and bionic technology, applied in the cross-disciplinary field of bioinformatics and machine vision, that addresses the problems of noise sensitivity and insufficient robustness.

Active Publication Date: 2016-07-27
CENT SOUTH UNIV

AI Technical Summary

Problems solved by technology

However, there are two deficiencies. First, the model uses a box filter, implemented as the weighted average of the pixels surrounding each image pixel, which is inconsistent with the human visual perception mechanism and therefore particularly sensitive to noise. Second, the edge detector based on the black-and-white (bipolar) filter lacks sufficient edge features when recognizing targets with simple structure (such as the letter I or the digit 1), so its robustness is insufficient once noise is added.


Examples


Embodiment 1

[0092] This embodiment targets images of the 26 letters and 10 digits, as shown in Figure 1. The invariant attribute feature extraction is carried out in the following five steps:

[0093] Step 1: Convert the original image to grayscale and normalize its gray values to [0,1]. Then resize the image to 128×128 using bilinear interpolation.
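For illustration only, a minimal sketch of this preprocessing step is given below, assuming OpenCV and NumPy; the 128×128 target size follows the text above, while the file name and remaining details are hypothetical.

```python
import cv2
import numpy as np

def preprocess(path, size=(128, 128)):
    """Step 1 (sketch): grayscale, normalize gray values to [0, 1], bilinear resize."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                  # grayscale processing
    img = img.astype(np.float32) / 255.0                          # normalize to [0, 1]
    img = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)   # bilinear interpolation to 128x128
    return img

# M = preprocess("letter_A.png")  # "letter_A.png" is a hypothetical input file
```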

[0094] Step 2: Take the two-dimensional image M(x,y) preprocessed in Step 1, apply a Gabor filter to obtain the intermediate response G(x,y), and then convolve G(x,y) with the horizontal-vertical bipolar filter F. In other words, the cascaded filter formed by the Gabor filter and the bipolar filter F detects the directional edges of the target image and yields the edge image E.
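The sketch below shows one plausible reading of this cascaded Gabor/bipolar filtering, again assuming OpenCV; the Gabor parameters, the number of orientations and the simple ±1 bipolar kernel F are illustrative assumptions rather than the patent's exact values.

```python
import cv2
import numpy as np

def directional_edges(M, n_orient=8):
    """Step 2 (sketch): Gabor response G(x, y), then convolution with a bipolar filter F."""
    # Assumed horizontal-vertical bipolar kernel; the patent's exact F is not given here.
    F = np.array([[1.0, -1.0],
                  [1.0, -1.0]], dtype=np.float32)
    responses = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        gabor = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                                   lambd=8.0, gamma=0.5, psi=0)
        G = cv2.filter2D(M, cv2.CV_32F, gabor)    # intermediate Gabor response G(x, y)
        E_theta = cv2.filter2D(G, cv2.CV_32F, F)  # bipolar filter F applied to G
        responses.append(np.abs(E_theta))
    # Combine the orientation channels into one edge image E (max over orientations, assumed).
    return np.max(np.stack(responses, axis=0), axis=0)

# E = directional_edges(M)  # M is the preprocessed image from Step 1
```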

[0095] Step 3: For the edge image E, measure the spatial resolution of image lines over different edge directions θ and different distances I. First, carry out the dislocation (shift) processing with the distance I and the ...
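The step above is truncated, so the sketch below shows only one assumed interpretation of the dislocation processing: the edge image is shifted by distance I along direction θ and its overlap with the original is recorded as a first-stage response S1. This is an illustrative reading, not the patent's exact formula.

```python
import numpy as np

def pitch_detection(E, n_dirs=8, max_dist=16):
    """Step 3 (sketch): shift ("dislocate") E by distance I along direction theta and
    accumulate how strongly the shifted copy overlaps the original edge image."""
    S1 = np.zeros((n_dirs, max_dist), dtype=np.float32)
    for d in range(n_dirs):
        theta = d * np.pi / n_dirs
        for i in range(1, max_dist + 1):
            dx = int(round(i * np.cos(theta)))
            dy = int(round(i * np.sin(theta)))
            shifted = np.roll(np.roll(E, dy, axis=0), dx, axis=1)  # dislocation by distance I
            S1[d, i - 1] = float(np.sum(E * shifted))              # overlap response at (theta, I)
    return S1

# S1 = pitch_detection(E)  # E is the edge image from Step 2
```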

Embodiment 2

[0165] To verify the RSTN invariance of the extracted image features, the original images of the letters G and F were rotated, scaled, translated and corrupted with noise to different degrees. For visual comparison of the results, the outputs of the first and second stages are visualized as images. In Figures 7 and 8, (a) is the original image, (b) is the first-stage transformation output of (a), and (c) is the second-stage output feature map of (a). Then (a) is rotated 135° counterclockwise, as shown in (d), and the corresponding first-stage output (e) is obtained. Compared with (b), this is equivalent to a horizontal translation of 45° to the right. However, the second-stage feature map (f), compared with (c), ...
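As a rough illustration of the comparison described above, the following sketch rotates an input image and compares feature maps with a normalized correlation score; extract_features is only a placeholder for the two-stage transform and is not provided by the patent text.

```python
import cv2
import numpy as np

def rotate(img, angle_deg):
    """Rotate an image about its centre (positive angle = counterclockwise in OpenCV)."""
    h, w = img.shape
    R = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(img, R, (w, h), flags=cv2.INTER_LINEAR)

def similarity(a, b):
    """Normalized correlation between two feature maps (close to 1 means invariant)."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def extract_features(img):
    """Placeholder for the two-stage transform (Steps 2-3 applied twice); not given here."""
    raise NotImplementedError

# Usage (sketch):
# S2_orig = extract_features(original)
# S2_rot  = extract_features(rotate(original, 135))  # 135 deg counterclockwise, as in the text
# print(similarity(S2_orig, S2_rot))
```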

Embodiment 3

[0171] In natural-scene traffic sign recognition, the image is easily disturbed by factors such as illumination, distance and camera angle. The distance between the camera and the traffic sign usually cannot be obtained accurately, and the size of the traffic sign in the image is difficult to determine uniformly. As a result, the robustness of traffic sign feature extraction is insufficient, which constrains recognition performance. Applying the present method to extract invariant attribute features for traffic sign recognition is therefore of great significance for improving its recognition rate and robustness.

[0172] The first column of Figure 9 shows five traffic signs with different sizes and rotation angles, indicating no left turn, straight ahead or left turn, respectively. The rings and arrows in these two types of signs occupy prominent positions and have conne...


Abstract

The invention discloses a bionic vision transformation-based image RSTN (rotation, scaling, translation and noise) invariant attributive feature extraction and recognition method. The method includes the following steps: 1) grayscale processing is performed on the original image, and the image is resized using bilinear interpolation; 2) the directional edges of the target image are detected with a cascaded filter built from a Gabor filter and a bipolar filter F, yielding an edge image E; 3) the spatial resolution pitch detection value of the edge image E is calculated, yielding a first-stage output image S1; and 4) the directional edge detection of step 2) and the spatial resolution pitch detection of step 3) are applied to the first-stage output image S1 again, yielding a second-stage feature output image S2 from which the invariant attributive features are obtained. The method simulates the human visual perception mechanism and combines bionic vision transformation-based RSTN invariant attributive features, thereby improving the accuracy of image recognition and enhancing robustness to noise.

Description

Technical field
[0001] The invention belongs to the cross-disciplinary field of biological information and machine vision technology, and in particular relates to an image RSTN invariant attribute feature extraction and recognition method based on bionic vision transformation.
Background technique
[0002] Image invariant attribute feature extraction is an important means of improving the target recognition rate. Human vision is known to accurately perceive rotated, scaled, translated and noisy images. However, achieving object recognition in rotated, scaled, translated and noise-corrupted images with traditional computer vision algorithms is a very challenging task. As the response mechanism of the human visual cortex has been progressively revealed, Hubel reported in Nature that biological visual cortical cells respond very strongly to lines of certain lengths or orientations. Inspired by this biological visual response mechanism, if machine vision can extract line fea...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/46
CPC: G06V10/443
Inventors: 余伶俐, 周开军
Owner: CENT SOUTH UNIV