
Collaborative training method based on domain self-adaptation

A collaborative training method based on domain adaptation, in the field of model training. It addresses the performance degradation that occurs when a detector is transferred to an unlabeled target domain, improving the model's target detection capability in the unlabeled domain while reducing its demand for annotated data and its dependence on human labeling resources.

Active Publication Date: 2020-12-01
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

[0009] When Faster RCNN is transferred to an unlabeled target domain, its performance drops because of the domain gap between the source and target domains.



Examples


Embodiment

[0046] Please refer to figure 2, a flow chart of the steps of a collaborative training method based on domain adaptation provided by an embodiment of the present invention, comprising steps 101 to 105. Each step is specified as follows:

[0047] Step 101: in each training iteration of the Faster RCNN model, obtain a labeled source-domain image and an unlabeled target-domain image.

[0048] Step 102: input the source-domain image into the Faster RCNN model for target detection and obtain the first source-domain features output by the backbone network; at the same time, pass the first source-domain features through the gradient reversal layer to the domain classifier for loss calculation.
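The gradient reversal layer mentioned in step 102 can be sketched as follows. This is a minimal illustrative implementation in numpy, not the patent's code, and it assumes the standard formulation (as in DANN-style adversarial adaptation): identity in the forward pass, gradient negated and scaled by a factor lambda in the backward pass, so that minimizing the domain classifier's loss simultaneously pushes the backbone toward domain-indistinguishable features.

```python
import numpy as np

class GradientReversal:
    """Illustrative gradient reversal layer (GRL); lam is an assumed
    hyperparameter controlling the reversal strength."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Identity in the forward direction: features reach the
        # domain classifier unchanged.
        return x

    def backward(self, grad_output):
        # Gradients flowing back toward the backbone are negated and
        # scaled, turning the domain classifier's minimization into the
        # backbone's maximization (adversarial feature alignment).
        return -self.lam * grad_output

def domain_bce_loss(p_source, is_source):
    """Binary cross-entropy of the domain classifier, where p_source is
    its predicted probability that a feature came from the source domain."""
    target = 1.0 if is_source else 0.0
    eps = 1e-7
    return -(target * np.log(p_source + eps)
             + (1.0 - target) * np.log(1.0 - p_source + eps))

grl = GradientReversal(lam=0.5)
feat = np.array([0.2, -1.0, 3.0])
out = grl.forward(feat)               # features pass through unchanged
grad_back = grl.backward(np.ones(3))  # reversed gradient: -0.5 everywhere
```

In a full training loop the first source-domain features would be fed through `forward` before the domain classifier, and the reversed gradient from `backward` would update the backbone.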

[0049] Specifically, input the source-domain images and their annotations into Faster RCNN, and perform loss calculation and training in the same way as in the original Faster RCNN. At the same time, the features extracted from the source-domain images by the backbone network are s...



Abstract

The invention discloses a collaborative training method based on domain self-adaptation, which comprises the following steps: each of two detectors is trained on the high-confidence outputs of the other, while candidate regions on which both detectors have low confidence are handled with a maximum classifier discrepancy method. In addition, for feature alignment in the backbone network, the output of the RPN is used to compute the foreground probability of each point on the feature map, and regions with higher foreground probability are given larger weights during feature alignment. The method improves the model's target detection capability in the unlabeled domain, reduces the target detection model's requirement for annotated data, and reduces dependence on human resources.
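The co-training rule in the abstract can be illustrated with a small sketch. This is an assumption-laden simplification (per-proposal scalar confidences, fixed thresholds `HIGH` and `LOW` chosen for illustration): proposals where one detector is confident become pseudo-labels for the other, and proposals where both are unsure contribute a classifier-discrepancy term.

```python
HIGH, LOW = 0.8, 0.3  # assumed confidence thresholds, for illustration only

def split_proposals(scores_a, scores_b):
    """scores_a / scores_b: per-proposal foreground confidences from two
    detectors. Returns (indices A teaches B, indices B teaches A,
    indices where both are uncertain)."""
    teach_b, teach_a, uncertain = [], [], []
    for i, (sa, sb) in enumerate(zip(scores_a, scores_b)):
        if sa >= HIGH:
            teach_b.append(i)    # A is confident: pseudo-label for B
        if sb >= HIGH:
            teach_a.append(i)    # B is confident: pseudo-label for A
        if sa <= LOW and sb <= LOW:
            uncertain.append(i)  # both unsure: handled by discrepancy term
    return teach_b, teach_a, uncertain

def discrepancy(scores_a, scores_b, idx):
    # Mean absolute difference between the two classifiers on the
    # low-confidence proposals (maximum-classifier-discrepancy style).
    return sum(abs(scores_a[i] - scores_b[i]) for i in idx) / max(len(idx), 1)

sa = [0.95, 0.10, 0.20, 0.85]
sb = [0.40, 0.15, 0.90, 0.25]
tb, ta, unc = split_proposals(sa, sb)
# tb == [0, 3]: A supervises B on proposals 0 and 3
# ta == [2]:    B supervises A on proposal 2
# unc == [1]:   both uncertain on proposal 1
```

In the actual method the discrepancy on the uncertain regions would be alternately maximized and minimized in the adversarial fashion the abstract describes.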

Description

Technical Field

[0001] The invention relates to the field of model training, and in particular to a collaborative training method based on domain self-adaptation.

Background Technique

[0002] The development of deep learning has brought great progress to computer vision. However, the huge amount of labeled data required to train deep learning models restricts their wide application. Object detection tasks require especially fine-grained annotations, so the problem is particularly serious there. Domain adaptation tries to solve this problem by transferring a model from a domain rich in labeled data (the source domain) to a domain lacking labeled data (the target domain).

[0003] At present, domain adaptation in object detection is mainly divided into two types of schemes.

[0004] (1) Domain adaptation at the feature level, whose main method is to align the features of the two domains through adversarial learning. At this time, since the image of the target...
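The feature-level alignment discussed above is where the abstract's foreground weighting comes in: rather than aligning every spatial location equally, the RPN's objectness map is used to weight the per-location alignment loss toward likely object regions. The following numpy sketch is a hypothetical illustration of that weighting; the shapes and the per-location loss values are assumptions, not the patent's implementation.

```python
import numpy as np

def foreground_weighted_alignment_loss(per_location_loss, rpn_objectness):
    """per_location_loss: (H, W) domain-classifier loss at each feature-map
    location. rpn_objectness: (H, W) foreground probability from the RPN.
    Locations the RPN considers foreground get proportionally larger weight."""
    weights = rpn_objectness / (rpn_objectness.sum() + 1e-7)  # normalize
    return float((weights * per_location_loss).sum())

H, W = 2, 2
loss_map = np.ones((H, W))                # uniform per-location alignment loss
obj = np.array([[0.9, 0.1],
                [0.1, 0.9]])              # two foreground-heavy corners
total = foreground_weighted_alignment_loss(loss_map, obj)
# with uniform loss, the weighted total is ~1.0 regardless of the weights;
# non-uniform losses are pulled toward the high-objectness locations
```

Because the weights are normalized, background-dominated regions contribute little, which addresses the concern that whole-image adversarial alignment is dominated by background.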

Claims


Application Information

IPC(8): G06K9/62; G06K9/32; G06N3/04; G06N3/08
CPC: G06N3/08; G06V10/25; G06N3/045; G06F18/2415; Y02T10/40
Inventor: Li Guanbin, Zhao Ganlong
Owner: SUN YAT-SEN UNIV