Humanoid target segmentation method based on convolutional neural network

A humanoid target segmentation method based on a convolutional neural network, applied in the field of computer vision recognition. It addresses the problems of inaccurate humanoid target segmentation and slow humanoid target recognition in the prior art, so as to avoid a loss of detection efficiency and accuracy, reduce the time spent, and improve segmentation speed.

Pending Publication Date: 2021-11-12
ZHEJIANG GONGSHANG UNIVERSITY

AI Technical Summary

Problems solved by technology

[0004] In order to solve the problems of inaccurate humanoid target segmentation and slow humanoid target recognition in the prior art, the present invention proposes a humanoid target segmentation method based on a convolutional neural network. Image features are extracted through a deep convolutional network, and an FPN network is constructed from the resulting feature maps; the FPN features are fed simultaneously into a classification network and a segmentation network, and the two networks are executed in parallel; the outputs of the two networks are fused to obtain the segmented humanoid target feature map, and finally the position information of the humanoid target is determined so that the humanoid target area can be cropped from the image, eliminating background interference and improving the speed and accuracy of subsequent bullet hole detection.
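The paragraph above describes the overall pipeline: backbone features feed an FPN, the FPN features go simultaneously into a classification network and a segmentation network, and the two outputs are fused. The following is a minimal, hedged sketch of that parallel-head-and-fusion idea in PyTorch; the head architectures, the single-feature-map input, and the fusion rule (scaling the mask by the classification confidence) are illustrative assumptions, not the patent's concrete design.

```python
# Illustrative sketch only: parallel classification and segmentation heads over one
# FPN feature map, with a simple fusion step. Architectures and fusion rule are assumed.
import torch
import torch.nn as nn


class ParallelHeads(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        # Classification branch: is a humanoid target present in this feature map?
        self.cls_head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, 1),
        )
        # Segmentation branch: per-pixel humanoid mask logits.
        self.seg_head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(in_channels, 1, 1),
        )

    def forward(self, feat):
        cls_logit = self.cls_head(feat)      # (N, 1)
        mask_logit = self.seg_head(feat)     # (N, 1, H, W)
        # Simple fusion: weight the mask probabilities by the classification confidence.
        fused = torch.sigmoid(mask_logit) * torch.sigmoid(cls_logit).view(-1, 1, 1, 1)
        return cls_logit, mask_logit, fused


if __name__ == "__main__":
    cls_logit, mask_logit, fused = ParallelHeads()(torch.randn(2, 256, 128, 128))
    print(cls_logit.shape, mask_logit.shape, fused.shape)
```

In a multi-level FPN setting such heads would typically be applied to each pyramid level and the results merged; the sketch keeps a single level for clarity.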




Detailed Description of the Embodiments

[0028] In order to make the objects, technical solutions and technical effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments.

[0029] As shown in Figures 1-2, in an embodiment of the present invention, the humanoid target segmentation method based on a convolutional neural network is implemented as follows:

[0030] Step 1: Construct a humanoid target segmentation data set: collect target surface images in the actual shooting environment as the original input data, manually annotate the original input data, and divide it into a training set, a validation set and a test set;
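A minimal sketch of the data split in Step 1, assuming a flat directory of annotated target surface images and an 8:1:1 train/validation/test ratio; the directory layout, file extension and split ratio are illustrative assumptions, as the patent does not specify them.

```python
# Illustrative sketch of Step 1: split collected target-surface images into
# training, validation and test sets. Ratios and file layout are assumed.
import random
from pathlib import Path


def split_dataset(image_dir, seed=0, ratios=(0.8, 0.1, 0.1)):
    paths = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(paths)
    n_train = int(ratios[0] * len(paths))
    n_val = int(ratios[1] * len(paths))
    return {
        "train": paths[:n_train],
        "val": paths[n_train:n_train + n_val],
        "test": paths[n_train + n_val:],
    }
```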

[0031] Step 2: Input the image into a deep convolutional network with ResNet101 as the backbone, and extract the image feature layers C1, C2, C3, C4 and C5 in sequence, where C1 is the feature layer obtained by ResNet through the conv1 convolution module, and C2 is the feature layer obtained by ResNet ...
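A hedged sketch of Step 2, using torchvision's ResNet-101 to produce the C1-C5 feature layers and torchvision's FeaturePyramidNetwork to build the FPN from them. The layer-to-Cx mapping follows torchvision's ResNet structure; feeding C2-C5 (rather than all five layers) into the FPN and using 256 output channels are common-practice assumptions, not details confirmed by the truncated text above.

```python
# Illustrative sketch of Step 2: ResNet-101 backbone producing C1-C5, with an FPN
# built over C2-C5. Layer choices and channel widths beyond ResNet defaults are assumed.
from collections import OrderedDict

import torch
import torchvision
from torchvision.ops import FeaturePyramidNetwork


class ResNet101WithFPN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        net = torchvision.models.resnet101(weights=None)  # pretrained weights optional
        self.stem = torch.nn.Sequential(net.conv1, net.bn1, net.relu)  # -> C1 (stride 2)
        self.maxpool = net.maxpool
        self.layer1 = net.layer1  # -> C2 (stride 4,  256 channels)
        self.layer2 = net.layer2  # -> C3 (stride 8,  512 channels)
        self.layer3 = net.layer3  # -> C4 (stride 16, 1024 channels)
        self.layer4 = net.layer4  # -> C5 (stride 32, 2048 channels)
        # FPN mapping C2-C5 to a common 256-channel pyramid P2-P5.
        self.fpn = FeaturePyramidNetwork([256, 512, 1024, 2048], out_channels=256)

    def forward(self, x):
        c1 = self.stem(x)
        c2 = self.layer1(self.maxpool(c1))
        c3 = self.layer2(c2)
        c4 = self.layer3(c3)
        c5 = self.layer4(c4)
        feats = OrderedDict([("c2", c2), ("c3", c3), ("c4", c4), ("c5", c5)])
        return self.fpn(feats)  # OrderedDict of P2-P5 feature maps


if __name__ == "__main__":
    pyramid = ResNet101WithFPN()(torch.randn(1, 3, 512, 512))
    for name, feat in pyramid.items():
        print(name, tuple(feat.shape))
```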



Abstract

The invention discloses a humanoid target segmentation method based on a convolutional neural network, belongs to the field of computer vision recognition, and is used to solve the current problems of low humanoid target segmentation precision, slow humanoid target identification and segmentation, and poor adaptability to complex scenes. The method comprises the following steps: determining a target image data set of the humanoid target; extracting image features through a deep convolutional network and constructing an FPN network from the obtained feature maps; feeding the FPN features into a classification network and a segmentation network at the same time and running the two networks in parallel; and fusing the outputs of the two networks to obtain a segmented humanoid target feature map, and finally cropping the humanoid target area from the image by determining the position information of the humanoid target, eliminating background interference and improving the speed and precision of subsequent bullet hole detection. The method achieves fast, efficient and highly adaptable humanoid target segmentation, improves humanoid target segmentation precision and speed, and provides correct target surface data for subsequent shooting detection processing.

Description

Technical field

[0001] The invention belongs to the technical field of computer vision recognition, and in particular relates to a humanoid target segmentation method based on a convolutional neural network.

Background technique

[0002] Current automatic target reporting systems mostly adopt the principle of target reporting based on image processing, in which the segmentation of humanoid targets is a very important step. Existing humanoid target segmentation technology mostly relies on the color, texture and shape characteristics of the image and is realized through traditional image processing techniques, which adapt poorly to complex scenes, and the accuracy of target surface segmentation needs to be improved. Moreover, rough target surface segmentation affects the judgment of whether bullet holes fall within the effective area, resulting in statistical errors in target reporting.

[0003] With the rapid development of deep learning convolutional neural network and the substan...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K 9/62; G06K 9/34; G06N 3/04; G06T 7/194
CPC: G06T 7/194; G06N 3/045; G06F 18/214
Inventors: 徐晓刚, 余新洲, 陈雨杭, 徐冠雷
Owner: ZHEJIANG GONGSHANG UNIVERSITY