
Target segmentation system and its training method, target segmentation method and equipment

A target segmentation and training-image technology, applied in the field of computer vision, which can solve the problem of networks lacking the ability to perceive multi-size targets and achieve the stated effect of increasing running time

Active Publication Date: 2022-03-25
山东力聚机器人科技股份有限公司

AI Technical Summary

Problems solved by technology

However, the pursuit of network depth in current deep learning causes this structure to ignore shallow texture information, which inevitably leaves the network unable to perceive multi-size targets.



Examples


Embodiment 1

[0066] This embodiment proposes a target segmentation system. The system adopts a dual-branch multi-scale feature fusion model, which is suited to images with non-uniform target scales and improves the accuracy and robustness of multi-scale target segmentation in natural images. Figure 1 is a schematic structural diagram of a target segmentation system provided by an embodiment of the present invention. As shown in Figure 1, the system includes a semantic perception network 110, a texture perception network 120, and a feature fusion layer 130.

[0067] The semantic perception network 110 adopts the form of a fully convolutional network, including a convolution module, a pooling module, and a regularization module. The semantic perception network is configured to obtain the first preprocessed data of the image and to extract the semantic feature map of the image.

[0068] The texture perception network 120 adopts a pooling-free network form, including a serially...
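The dual-branch idea described in this embodiment, a pooling branch for semantics and a pooling-free dilated-convolution branch for texture, whose outputs are spliced together, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the patented implementation: the function names (`dilated_conv`, `semantic_branch`, `texture_branch`, `fuse`), the 3x3 averaging kernel, and the single-channel, single-layer branches are all assumptions for illustration.

```python
import numpy as np

def dilated_conv(x, kernel, dilation=2):
    """Naive 'same'-padded 2-D dilated convolution on a single-channel map."""
    kh, kw = kernel.shape
    pad = (kh - 1) * dilation // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            window = xp[i:i + (kh - 1) * dilation + 1:dilation,
                        j:j + (kw - 1) * dilation + 1:dilation]
            out[i, j] = (window * kernel).sum()
    return out

def semantic_branch(x):
    """Pooling branch: 2x2 max-pool, then nearest-neighbour upsample back."""
    h, w = x.shape
    pooled = x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
    return pooled.repeat(2, axis=0).repeat(2, axis=1)

def texture_branch(x):
    """Pooling-free branch: one 3x3 dilated (averaging) convolution."""
    return dilated_conv(x, np.ones((3, 3)) / 9.0, dilation=2)

def fuse(semantic_map, texture_map):
    """Feature fusion by channel-wise concatenation (splicing)."""
    return np.stack([semantic_map, texture_map], axis=0)

image = np.arange(64, dtype=float).reshape(8, 8)
fused = fuse(semantic_branch(image), texture_branch(image))
```

Because the texture branch never pools, its output keeps the full spatial resolution, which is the property the embodiment relies on for fine texture detail; the pooling branch trades resolution for a larger receptive field.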

Embodiment 2

[0109] This embodiment provides a training method for a target segmentation system, used to train the target segmentation system described in Embodiment 1. Figure 5 is a flowchart of a training method for a target segmentation system provided by an embodiment of the present invention. As shown in Figure 5, the method includes steps S10-S40.

[0110] S10. Acquire a training image set, wherein the training image set includes a plurality of training images; perform pixel-level manual segmentation and labeling on each training image to obtain an annotation map of each training image.

[0111] S20. Perform original-scale data enhancement on each training image to obtain the first preprocessed data of each training image; perform multi-scale data enhancement on the first preprocessed data to obtain the second preprocessed data of each training image; wherein the original-scale data enhancement includes at least one of flipping, rotati...
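The two-stage augmentation in S20, original-scale enhancement (e.g. flipping, rotation) followed by multi-scale enhancement of its output, can be sketched as below. This is a hedged illustration, not the patent's preprocessing: the function names, the fixed random seed, and the nearest-neighbour rescale are assumptions; the patent's actual scale set and enhancement list are truncated in the text above.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def original_scale_augment(img):
    """First preprocessing: random horizontal flip and 90-degree rotation,
    keeping the original scale."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))

def multi_scale_augment(img, scale):
    """Second preprocessing: nearest-neighbour rescale of the first-stage output."""
    h, w = img.shape
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[np.ix_(rows, cols)]

image = np.arange(64, dtype=float).reshape(8, 8)
first = original_scale_augment(image)     # first preprocessed data, still 8x8
second = multi_scale_augment(first, 0.5)  # second preprocessed data, 4x4
```

The key point the step encodes is that the multi-scale enhancement is applied to the already-augmented data, so the two branches of the system see consistent geometric transforms at different scales.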

Embodiment 3

[0150] This embodiment provides a target segmentation method. First, the target segmentation system is trained using the training method of Embodiment 2; the method then uses the trained target segmentation system to perform multi-scale target segmentation of images. Figure 7 is a flowchart of a target segmentation method provided by an embodiment of the present invention. As shown in Figure 7, the method includes steps S1-S4.

[0151] S1: Obtain an image to be segmented.

[0152] S2: Input the image to be segmented, as the first preprocessed data, into the semantic perception network of the trained target segmentation system described in Embodiment 1.

[0153] S3: Input the image to be segmented, as the second preprocessed data, into the texture perception network of the target segmentation system.

[0154] S4: Use the target segmentation system to perform target segmentation on the image to be segmented to obtain a target segmentation map of the...
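Steps S1-S4 amount to a simple inference pipeline: one image, two branches, one fusion. The sketch below shows that control flow only; the lambda "networks" and the thresholding fusion are toy stand-ins, not the trained branches or the patented fusion layer.

```python
import numpy as np

def segment(image, semantic_net, texture_net, fusion):
    """Steps S1-S4: the same image is fed to both trained branches (S2, S3),
    and the fusion layer produces the segmentation map (S4)."""
    semantic_map = semantic_net(image)  # S2: image as first preprocessed data
    texture_map = texture_net(image)    # S3: image as second preprocessed data
    return fusion(semantic_map, texture_map)

# Toy stand-ins for the trained branches (placeholders, not the patented networks).
semantic_net = lambda x: np.full_like(x, x.mean())
texture_net = lambda x: x - x.mean()
fusion = lambda a, b: ((a + b) > a.mean()).astype(np.uint8)  # binary mask

image = np.arange(16, dtype=float).reshape(4, 4)  # S1: image to be segmented
mask = segment(image, semantic_net, texture_net, fusion)
```

Note that, unlike training (Embodiment 2), no multi-scale augmentation is applied at inference: the same raw image plays the role of both the first and the second preprocessed data.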


Abstract

The invention discloses a target segmentation system, its training method, a target segmentation method, and equipment. The system includes: a semantic perception network, which adopts the form of a fully convolutional network including a convolution module, a pooling module, and a regularization module, and is configured to extract the semantic feature map of the image; a texture perception network, which adopts a pooling-free network form including a serially arranged dilated convolutional layer, feature shrinkage layer, feature expansion layer, and first convolutional layer, and is configured to extract the texture feature map of the image; and a feature fusion layer, which is configured to splice and merge the semantic feature map with the texture feature map to obtain the target segmentation map of the image. The invention proposes a dual-branch multi-scale feature fusion model, which improves the accuracy and robustness of multi-scale target segmentation in natural images.

Description

Technical Field

[0001] Embodiments of the present invention relate to the field of computer vision, and in particular to a target segmentation system and its training method, a target segmentation method, and equipment.

Background Art

[0002] Image segmentation is a classic problem in the field of computer vision and one of the important ways to achieve scene understanding. More and more applications and scenarios acquire knowledge from images, such as autonomous driving, human-computer interaction, intelligent robots, and augmented reality, which highlights the importance of image segmentation as a core problem in computer vision. Image segmentation can be defined as a specific image processing technique used to divide an image into two or more meaningful regions. Image segmentation can also be seen as the process of defining boundaries between the various semantic entities in an image. From a technical perspective, image segmentation is the process of a...
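The background's definition, dividing an image into two or more meaningful regions, can be illustrated in its simplest form by a global threshold. This is a generic textbook sketch for the definition only, not the invention's method; the function name and sample values are made up for illustration.

```python
import numpy as np

def threshold_segment(img, t):
    """Split a grayscale image into two regions: background (0) and foreground (1)."""
    return (img > t).astype(np.uint8)

# A tiny grayscale image with a bright region on the right.
gray = np.array([[10, 20, 200],
                 [15, 210, 220],
                 [12, 18, 205]], dtype=float)
regions = threshold_segment(gray, t=100)  # bright pixels form the foreground region
```

Such global thresholding fails exactly where this patent aims: targets of very different sizes and textures, which is why the described system learns multi-scale features instead.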

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V10/774, G06V10/80, G06K9/62, G06N3/04, G06T7/11, G06T7/40
CPC: G06T7/11, G06T7/40, G06N3/045, G06F18/214, G06F18/253
Inventor: 张凯, 王任, 丁冬睿, 杨光远
Owner: 山东力聚机器人科技股份有限公司