Texture feature extraction method fused with visual significance and gray level co-occurrence matrix (GLCM)

A gray-level co-occurrence moment and texture feature technology, applied in the field of computer vision, that addresses the slow computation speed, high information redundancy, and inability to describe human visual sensitivity of existing methods.

Inactive Publication Date: 2012-12-19
HUNAN ZESUM TECH

AI Technical Summary

Problems solved by technology

A typical example is the Histogram of Oriented Gradients (HOG). HOG is based on gradient information and allows blocks to overlap, so it is insensitive to illumination changes and small offsets and describes the edge features of a target well. But HOG also has drawbacks: its feature dimension is high, and the large number of overlapping blocks and histogram computations make it slow to compute, highly redundant, and unable to reflect the visual sensitivity of the human eye.
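To make the "high feature dimension" drawback concrete, the following illustrative calculation (not part of the patent) computes the descriptor length of a standard HOG configuration for a 64x128 pedestrian window:

```python
# Illustrative HOG dimensionality calculation: 8x8-pixel cells, 2x2-cell
# blocks, 8-pixel block stride, 9 orientation bins per cell.
def hog_dimension(win_w=64, win_h=128, cell=8, block_cells=2, stride=8, bins=9):
    # Number of overlapping block positions along each axis.
    blocks_x = (win_w - block_cells * cell) // stride + 1
    blocks_y = (win_h - block_cells * cell) // stride + 1
    # Each block contributes block_cells^2 cell histograms of `bins` bins.
    return blocks_x * blocks_y * block_cells * block_cells * bins

print(hog_dimension())  # 7 * 15 * 36 = 3780 dimensions
```

A single 64x128 window thus already yields a 3780-dimensional vector, which illustrates why HOG's overlap-heavy layout is costly in both memory and computation.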

Method used



Examples


Embodiment 1

[0068] As shown in Figure 1, the present invention is a texture feature extraction method that fuses visual saliency and gray-level co-occurrence moments. Its steps are:

[0069] 1. Initialization;

[0070] Determine the sizes of the target detection window, the basic block, and the super block (step ①). The detection window is chosen from experience with the target class: for pedestrian detection, for example, the detection window is 36*108 pixels, the basic block is 9*12, and the super block is 18*24. These parameters can be adjusted to suit the size of the actual target;
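The initialization step above can be sketched as follows. This is a minimal sketch under an assumed non-overlapping tiling (the patent text here does not state whether blocks overlap), using the pedestrian sizes given:

```python
# Assumed layout: a 36x108 detection window tiled without overlap by
# 9x12 basic blocks and 18x24 super blocks (sizes from the patent example).
WIN_W, WIN_H = 36, 108        # detection window (width x height)
BASIC_W, BASIC_H = 9, 12      # basic block size
SUPER_W, SUPER_H = 18, 24     # super block size

basic_grid = (WIN_W // BASIC_W, WIN_H // BASIC_H)   # 4 x 9 basic blocks
super_grid = (WIN_W // SUPER_W, WIN_H // SUPER_H)   # 2 x 4 super blocks
print(basic_grid, super_grid)
```

Each super block covers a 2x2 group of basic blocks in this layout, which is consistent with the super block being exactly twice the basic block along each axis.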

[0071] If the image is acquired successfully, i.e. the image file is read or the camera captures a frame (step ②), continue with preprocessing, including filtering image noise (step ③), to provide more accurate input for the next step; otherwise end (step ⑧).
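The patent does not specify which noise filter is used in step ③; the sketch below uses a simple 3x3 mean (box) filter as one plausible choice (a Gaussian or median filter would fit equally well):

```python
import numpy as np

def denoise(img):
    """Return a 3x3 box-filtered copy of a 2D grayscale image (edges replicated).

    A hedged stand-in for the patent's unspecified noise-filtering step.
    """
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    h, w = img.shape
    # Sum the 9 shifted copies of the image, then average.
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

flat = np.ones((4, 4))
assert np.allclose(denoise(flat), 1.0)  # a flat image is unchanged by smoothing
```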

[0072] 2. Feature extraction;

[0073] Calculate the saliency factor in units of basic b...
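The source text is truncated here, but the method's name indicates per-block gray-level co-occurrence statistics. The following sketch (details assumed, not from the patent) builds a normalized co-occurrence matrix for one basic block and derives a standard texture statistic from it:

```python
import numpy as np

def glcm(block, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one basic block.

    Counts how often quantized gray level i co-occurs with level j at
    offset (dx, dy), then normalizes to joint probabilities. The offset
    and quantization depth are illustrative assumptions.
    """
    q = (block.astype(np.int64) * levels) // 256  # quantize 0..255 into `levels` bins
    h, w = q.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

block = np.array([[0, 0, 255],
                  [0, 0, 255],
                  [0, 0, 255]], dtype=np.uint8)
p = glcm(block)
# Texture statistics such as energy (angular second moment) follow directly:
energy = (p ** 2).sum()
```

For this toy block, the only horizontal pairs are (0,0) and (0,7), each with probability 0.5, giving an energy of 0.5.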



Abstract

The invention discloses a texture feature extraction method that fuses visual saliency with the gray-level co-occurrence matrix (GLCM). The method comprises: (1) an initialization step that determines the sizes of the detection window, basic block, and super block for a given image; (2) calculating a saliency factor and a texture structural feature vector of the image in units of basic blocks; (3) in units of super blocks, generating a saliency texture structural feature descriptor of a fixed number of dimensions from the saliency factors and texture structural feature vectors via a two-dimensional histogram; and, according to the saliency factor operator and the sizes of the detection window, basic block, and super block, extracting one saliency texture structural feature descriptor for each super block and expressing it as a one-dimensional feature vector. The method simulates how the human eye perceives the divergence and saliency characteristics of objects, and has the advantages of simple computation, low redundancy, and high real-time performance.
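Step (3) of the abstract can be sketched as follows. The bin counts and per-block values are illustrative assumptions (the abstract says only "a certain number of dimensions"); the point is the mechanism of fusing two per-basic-block quantities into one flat super-block descriptor via a 2D histogram:

```python
import numpy as np

# Assumed toy inputs: 8 basic blocks per super block, each contributing one
# saliency factor and one scalar texture value, both normalized to [0, 1].
rng = np.random.default_rng(0)
saliency = rng.random(8)
texture = rng.random(8)

# Joint 2D histogram over (saliency, texture), then flatten and normalize
# to obtain a fixed-dimension one-dimensional feature vector.
hist, _, _ = np.histogram2d(saliency, texture, bins=(4, 4), range=[[0, 1], [0, 1]])
descriptor = hist.ravel() / hist.sum()   # a 16-D feature vector summing to 1
```

Because every super block yields the same 16-D vector regardless of content, descriptors from all super blocks in a detection window can simply be concatenated, as the abstract's extraction step describes.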

Description

Technical field

[0001] The invention belongs to the field of computer vision methods, and relates to a texture feature extraction method that fuses visual saliency and gray-level co-occurrence moments.

Background technique

[0002] Target detection is an important topic in computer vision research. It is the basis of target behavior understanding and an important part of the continuous, accurate operation of an image system. Visual target detection usually refers to detecting regions of interest or precisely locating a target object. Computer vision target detection has great value and significance in applications such as robot positioning and navigation, intelligent vehicles, video surveillance, video codec compression, and human-computer interaction in virtual reality. How to effectively improve the accuracy of object detection algorithms in complex environments, and how to increase the robustness of algorithms in changing scenes, have ...


Application Information

IPC(8): G06K9/46; G06T7/00
Inventors: 肖德贵, 辛晨, 曾凡仔
Owner: HUNAN ZESUM TECH