Self-supervision-based visual representation learning method

A learning method and representation technology in the field of visual representation learning, intended to solve the problem of low recognition accuracy.

Status: Inactive; Publication Date: 2018-01-19
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problem that visual recognition is easily affected by viewpoint, pose, deformation, lighting and the like, so that accuracy is not high, the purpose of the present invention is to provide a self-supervision-based visual representation learning method. The method adopts self-supervision to learn invariance, with a representation that covers both inter-instance differences and intra-instance differences; it constructs a graph whose nodes are image patches connected by two types of edges, and trains deep neural networks so that connected patches obtain similar visual representations.



Embodiment Construction

[0029] It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other. The present invention will be further described in detail below in conjunction with the drawings and specific embodiments.

[0030] Figure 1 is a system framework diagram of the self-supervision-based visual representation learning method of the present invention. The framework mainly includes self-supervision, visual representation, image construction, and the learning of transformations within an image.
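As a concrete reading of the image construction module, the following minimal sketch crops patches around tracked moving objects and rescales them to a fixed size, in line with the scaling of moving objects mentioned in the abstract; the function name extract_patches, the (x0, y0, x1, y1) box format and the 96x96 patch size are assumptions for illustration, not details taken from the patent.

```python
# Hypothetical sketch of the image construction step: crop patches around
# tracked moving objects and rescale them to a fixed size. The helper name,
# box format and output size are illustrative assumptions.
from PIL import Image

def extract_patches(frame_paths, tracked_boxes, size=(96, 96)):
    """frame_paths: list of video-frame image files; tracked_boxes: list of
    (frame_index, track_id, (x0, y0, x1, y1)) tuples from an object tracker."""
    frames = [Image.open(p).convert("RGB") for p in frame_paths]
    patches, track_ids = [], []
    for frame_idx, track_id, box in tracked_boxes:
        patch = frames[frame_idx].crop(box).resize(size)  # scale the moving object
        patches.append(patch)
        track_ids.append(track_id)
    return patches, track_ids
```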

[0031] Self-supervision: the self-supervised method learns invariant representations that cover both inter-instance differences and intra-instance differences. Inter-instance differences reflect commonalities between different instances, for example the relative positions of patches or of color channels; intra-instance invariance is learned from pose, viewpoint, and lighting changes by tracking individual moving instances in video...
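A minimal sketch of how a patch graph with these two edge types could be assembled is given below, assuming pre-computed patch descriptors and a track identifier per patch; the helper name build_patch_graph, the use of k-means clustering and the chain-style linking within each group are illustrative choices, not details from the patent.

```python
# Hypothetical sketch of the patch graph with two edge types: inter-instance
# edges from clustering patch descriptors, intra-instance edges from tracking.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def build_patch_graph(features, track_ids, n_clusters=10):
    """features: (N, D) array of patch descriptors; track_ids: length-N list
    giving the same value to all patches of one tracked moving instance."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(features)))

    # Inter-instance edges: link patches whose descriptors fall into the same
    # cluster, capturing commonalities between different instances.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        for i, j in zip(members[:-1], members[1:]):
            graph.add_edge(int(i), int(j), kind="inter")

    # Intra-instance edges: link patches that belong to the same tracked
    # instance, tying together its pose, viewpoint and lighting variations.
    by_track = {}
    for idx, t in enumerate(track_ids):
        by_track.setdefault(t, []).append(idx)
    for members in by_track.values():
        for i, j in zip(members[:-1], members[1:]):
            graph.add_edge(i, j, kind="intra")
    return graph
```

Chaining the members of each cluster or track keeps the graph sparse while still connecting every related patch through a path.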



Abstract

The invention provides a self-supervision-based visual representation learning method. The main contents of the method include: self-supervision, visual representation, image construction, and transformation learning within an image. The process of the method is as follows: a self-supervision method is adopted to learn invariance whose expression includes differences between instances and differences inside the instances; a graph is built in which nodes denote image patches and describe the similarities between the image patches, and two types of edges associating the image patches with one another are defined in the graph, the graph being constructed, with moving objects scaled, from edges between clustered instances and edges inside tracked instances; and deep neural networks are trained to generate similar visual representations. The visual feature learning method of the invention is self-supervised and can automatically acquire annotation labels, which greatly saves manpower and material resources; at the same time, it reduces the influence of viewpoint, pose, deformation, illumination and the like, and increases the accuracy rate.
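The training stage described in the abstract can be approximated by pulling graph-connected patches toward each other in representation space. The sketch below uses a ResNet-18 encoder and a triplet margin loss as stand-ins; the encoder choice, the margin and the sampling scheme are assumptions rather than the patent's own procedure.

```python
# Hypothetical training sketch: sample a patch, one graph neighbour (positive)
# and one non-neighbour (negative), and apply a triplet margin loss so that
# linked patches receive similar representations.
import random
import torch
import torch.nn as nn
import torchvision.models as models

encoder = models.resnet18(weights=None)   # deep network producing the representation
encoder.fc = nn.Identity()                # expose the 512-d pooled feature
triplet = nn.TripletMarginLoss(margin=0.5)
opt = torch.optim.SGD(encoder.parameters(), lr=0.01, momentum=0.9)

def training_step(graph, patch_tensor):
    """graph: a networkx-style graph over patch indices (e.g. from the sketch
    above); patch_tensor: float tensor of shape (N, 3, 224, 224)."""
    anchor = random.choice([n for n in graph.nodes if graph.degree[n] > 0])
    positive = random.choice(list(graph.neighbors(anchor)))
    negative = random.choice(
        [m for m in graph.nodes if m != anchor and m not in graph[anchor]])
    emb = encoder(patch_tensor[[anchor, positive, negative]])
    loss = triplet(emb[0:1], emb[1:2], emb[2:3])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```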

Description

Technical field

[0001] The invention relates to the field of visual representation learning, and in particular to a self-supervision-based visual representation learning method.

Background technique

[0002] As society enters the digital information age, the rapidly increasing volume of images and videos brings great challenges to data management and analysis, which makes intelligent visual data classification and retrieval technology attract more and more attention. Visual representation learning uses cameras and computers in place of human eyes to recognize, track, and measure targets, performs further graphics processing, and produces images that are better suited for human observation or for transmission to instruments for detection. It can be applied to visual object recognition, such as automatic labeling of Web images, massive image search, image content filtering, medical remote consultation and other fields; it can also be applied to the detection of ...

Claims


Application Information

IPC(8): G06K9/62, G06N3/04, G06N3/08
Inventor 夏春秋
Owner SHENZHEN WEITESHI TECH