
Data set reduction method and system for deep neural network model training

A deep neural network model training technology, applied in the field of deep neural network models, that addresses the problem of redundant training data, which increases model training resources and budget without significantly improving the accuracy of the target model.

Pending Publication Date: 2021-07-27
INST OF INFORMATION ENG CAS


Problems solved by technology

However, training data often contains a large amount of information redundancy. This redundancy not only fails to significantly improve the accuracy of the target model, but also increases the resources and budget required for model training.




Embodiment Construction

[0019] To make the above objects, features and advantages of the present invention clearer and easier to understand, the present invention is further described in detail below through specific embodiments.

[0020] The present invention mainly targets the scenario in which the training data of a deep neural network model contains a large amount of redundant information, resulting in considerable waste of model training costs. To eliminate this waste of computing power and improve the training effectiveness of the model, the invention studies how to measure the information redundancy in the training data of a deep neural network model, and how to efficiently remove that redundancy from massive data so as to obtain more representative data points. As shown in Figure 1, the original training data set contains six pictures whose content is "0". By calculating the information redundancy between these pictures (that is, the value on each edge)...
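The pairwise redundancy values on the edges of Figure 1 can be estimated with a mutual-information index. A minimal sketch, assuming equally sized grayscale images and a simple joint-histogram estimator (the patent's exact estimator is not reproduced here; function and parameter names are illustrative):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate the mutual information (in nats) between two equally
    sized grayscale images from their joint pixel-intensity histogram,
    as a proxy for the information redundancy between two samples."""
    # Joint distribution over intensity bins.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    # Marginal distributions.
    px = pxy.sum(axis=1, keepdims=True)   # shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # shape (1, bins)
    # Sum p(x,y) * log(p(x,y) / (p(x) p(y))) over non-empty cells.
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Near-duplicate images yield a high score (an edge that is a candidate for pruning), while unrelated images score close to zero.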



Abstract

The invention provides a data set reduction method and system for deep neural network model training, and relates to the field of deep neural network models. The information redundancy between every pair of data samples is obtained by computing a mutual-information index over the training data; the data samples and the redundancy values then serve as the vertices and edge weights of a graph. Based on greedy expansion from an initial point and single-step replacement within the reduced data set, the method greatly reduces the amount of training data, cuts the time and computing power that training depends on, and yields a replacement model whose performance is close to that of the model trained on the original data.
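The selection strategy described in the abstract (greedy expansion from an initial point, followed by single-step replacement over the reduced set) can be sketched as follows. This is a hypothetical illustration: the objective shown here, minimising total pairwise redundancy inside the reduced set, and all function names are assumptions, not the patent's exact formulation:

```python
import numpy as np

def greedy_reduce(redundancy, k, start=0):
    """Greedily grow a reduced set of k sample indices, at each step
    adding the sample that contributes the least redundancy (sum of
    edge weights) toward the samples already selected."""
    n = redundancy.shape[0]
    selected = [start]
    while len(selected) < k:
        remaining = [i for i in range(n) if i not in selected]
        best = min(remaining, key=lambda i: redundancy[i, selected].sum())
        selected.append(best)
    return selected

def single_step_replace(redundancy, selected):
    """One refinement pass: try swapping each selected sample for each
    outside sample, keeping any single swap that lowers the total
    redundancy within the reduced set."""
    n = redundancy.shape[0]
    selected = list(selected)

    def cost(s):
        idx = np.array(s)
        # Sum of edge weights inside the set (each edge counted once).
        return redundancy[np.ix_(idx, idx)].sum() / 2.0

    for pos, _ in enumerate(list(selected)):
        for cand in range(n):
            if cand in selected:
                continue
            trial = selected.copy()
            trial[pos] = cand
            if cost(trial) < cost(selected):
                selected = trial
                break
    return selected
```

With a redundancy matrix in which samples 0 and 1 are near-duplicates, `greedy_reduce` avoids selecting both, and `single_step_replace` swaps one of them out of an initially poor selection.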

Description

Technical Field

[0001] The invention relates to the field of deep neural network models, and proposes a data set reduction method and system using mutual information.

Background Technique

[0002] Deep learning technology, represented by neural networks, has driven the third wave of artificial intelligence development, significantly improving the capabilities of image classification, speech recognition, and natural language processing, and bringing great convenience to people's production and daily life. Training a high-accuracy, high-reliability model often requires a large amount of training data. To learn the more essential and important features from this massive data, the neural network must spend a great deal of computing power on long-term iterative training to obtain the final deep learning model. However, there is a large amount of information redundancy in the data. This redundancy not only fails to significantly improve the accu...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06N3/08
CPC: G06N3/04, G06N3/08
Inventor: 孟国柱, 何英哲, 陈恺
Owner: INST OF INFORMATION ENG CAS