
Lifting-network self-reinforcing image and speech deep learning model

A deep learning and self-reinforcing technology in the field of image and speech deep learning. It addresses problems such as the poor generalization ability of local estimators, their inability to perform feature transformation, and unsatisfactory accuracy, achieving enhanced self-learning ability, improved system stability, and reasonable computational complexity.

Pending Publication Date: 2022-07-01
潘振华
Cites: 0 | Cited by: 0

AI Technical Summary

Problems solved by technology

[0009] First, the prior art relies on hand-crafted features to find data representations that improve model accuracy and extract information from raw data. Designing such features demands considerable experience and expertise and consumes a great deal of time, and there is no method that can autonomously learn useful information from massive image and speech data, learning the features of the data themselves rather than merely the mapping from features to results. In a shallow network, convolving images with filters captures only edges and simple textures of the samples; with few hierarchical levels, the model has no self-learning ability: it can learn neither the input-output mapping nor a feature representation of the raw data, and it cannot transform features between layers. Local estimators generalize very poorly, struggle to represent complex objective functions, and are computationally expensive; in classification and estimative-inference tasks in particular they remain immature, with slow computation, weak noise robustness, and accuracy that falls short of requirements;
[0010] Second, when building a CNN under the prior art, the network designer must make many design choices: the number and ordering of layers of each type, the exponent in the normalization operation, and the hyperparameters of each layer type, including the receptive-field size, stride, and number of convolution kernels. This makes the CNN architecture design space enormous; many model instances can never be realized, and a complete manual search is infeasible. Designing a deep learning architecture based on convolutional neural networks is therefore complex, demanding extensive expertise, experience, time, and cost, and neural network design remains a major open problem. Current CNN architectures are mainly hand-crafted through experiments or adapted from a handful of existing networks, which requires deep specialist knowledge and an enormous workload, cannot deliver automated, computer-aided neural network design, and cannot meet the needs of most application scenarios such as speech recognition, image understanding, and natural language processing;
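The combinatorial size of the design space described above can be made concrete with a toy enumeration. The layer hyperparameters, value ranges, and counts below are illustrative assumptions, not figures from the patent:

```python
# Hypothetical, simplified CNN design space (illustrative values only).
depths = range(3, 9)              # number of conv/pool layers
kernel_sizes = [3, 5, 7]          # receptive-field size
strides = [1, 2]                  # stride
channels = [16, 32, 64, 128]      # number of convolution kernels

# Even when each choice is made once per network, the space is sizeable:
per_network = len(list(depths)) * len(kernel_sizes) * len(strides) * len(channels)
print(per_network)  # 6 * 3 * 2 * 4 = 144 coarse templates

# Choosing independently for every layer of an 8-layer network explodes it,
# which is why a complete manual search is infeasible:
per_layer_choices = len(kernel_sizes) * len(strides) * len(channels)  # 24
print(per_layer_choices ** 8)  # 24**8 = 110,075,314,176 configurations
```

Even this crude model, which ignores layer ordering and normalization exponents, yields on the order of 10^11 candidate architectures, supporting the patent's claim that hand search cannot cover the space.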
[0011] Third, the prior art cannot automate the selection of a CNN architecture. It lacks a design that fuses a lifting scheme with the neural network structure to enable autonomous learning of network parameters and modules while remaining compatible with both linear and nonlinear deep learning architectures. The self-reinforcing network comprises two schemes, maximum morphological lifting and median morphological lifting, yet the prior art cannot combine the linear and nonlinear lifting schemes with a neural network classification model, through loose and compact couplings respectively, to construct a deep learning classification model. In terms of network structure, it lacks a self-reinforcing lifting scheme for generating the neural network layer architecture, cannot autonomously combine convolution and pooling layers, and cannot replace the estimative-inference and transformation operators with adaptive convolution kernels to achieve a fully lifted network. In terms of network operations, it cannot realize convolution and pooling through the lifting operation and so cannot unify the linear and the nonlinear. In the training process, it lacks accelerated network training: the network's retained convolutions are not learnable, and neither is the pooling layer. All of this greatly diminishes the practical value of image and speech learning;
[0012] Fourth, the prior art offers no reliable solution for flexibly selecting the CNN layer structure, including layer order and layer type. It cannot build different lifting frameworks to fit the different modules of the neural network, and it lacks a fusion network that separately implements CNN linear convolution and nonlinear pooling. The prior art also cannot resolve the internal variable drift that arises while training an image classification network, does not account for the effect of receptive-field size on network performance, and uses estimative-inference and transformation operators that are insufficiently reasoned. Lacking a hierarchical low-entropy method, it learns inefficiently and slowly, and its image and speech classification accuracy is low.
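The "internal variable drift" criticized here corresponds to what the deep learning literature calls internal covariate shift: the distribution of each layer's inputs changes as earlier layers update. A common mitigation is to renormalize activations per batch. The following is a minimal, framework-free sketch of that normalization step, shown only to make the problem concrete; it is not the patent's hierarchical low-entropy method:

```python
import math

def batch_normalize(batch, eps=1e-5):
    """Shift a 1-D batch of activations to zero mean and unit variance,
    so downstream layers see a stable input distribution."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

acts = [2.0, 4.0, 6.0, 8.0]          # activations from one mini-batch
normed = batch_normalize(acts)
print([round(v, 3) for v in normed])
```

Whatever the earlier layers emit, the normalized batch always has (near-)zero mean and unit variance, which is what stabilizes training.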

Method used



Examples

Experimental program
Comparison scheme
Effect test

Detailed Description of the Embodiments

[0123] To make the purpose, features, advantages, and innovations of the present application clearer, easier to understand, and easier to implement, specific embodiments are described in detail below with reference to the accompanying drawings. Those skilled in the art can make similar extensions without departing from the spirit of the present application, so the present application is not limited to the specific embodiments disclosed below.

[0124] Deep learning based on convolutional neural networks encompasses network structure, basic operations, and training techniques. Although networks are becoming ever more efficient, their architecture design grows more complex, demands extensive expertise and experience, and consumes considerable time and cost; the design of neural networks remains a major open problem.

[0125] Current CNN architectures are mainly handcrafted through experiments or modified from a few existing networks, which requi...



Abstract

The invention relates to a lifting-network self-reinforcing image and speech deep learning model. The application fuses a lifting scheme with a neural network structure to design a deep learning architecture that learns network parameters and modules autonomously and is compatible with both linear and nonlinear operations, improving system stability. Linear and nonlinear lifting schemes are combined with a neural network classification model, through loose and compact couplings respectively, to construct a deep learning classification model with strong generalization ability. In terms of network structure, convolution and pooling layers are combined autonomously to realize a fully lifted network, and hierarchical processing performs better on classification and estimative-inference tasks. In terms of network operations, both convolution and pooling are realized through lifting, unifying the linear and the nonlinear at low computational complexity. During training, hierarchical low entropy accelerates network training so that the network's convolutions remain learnable and the pooling layer becomes learnable; the model's learning ability is strong, its error is smaller than that of prior-art methods, and its accuracy and robustness in image and speech recognition and classification are better.
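The unification of linear convolution and nonlinear pooling that the abstract describes echoes the classical lifting scheme (split / predict / update): a linear predictor behaves like a convolution, while swapping in a max predictor yields a pooling-like nonlinear operation inside the same framework. The sketch below illustrates that idea on a 1-D signal; the specific predictor and update operators are illustrative assumptions, not the patent's operators:

```python
def lift(signal, predict):
    """One lifting step: split a 1-D signal into even/odd samples,
    predict each odd sample from its even neighbours (residual = detail),
    then update the evens with the detail to form the approximation."""
    evens = signal[0::2]
    odds = signal[1::2]
    detail = [o - predict(evens[i], evens[min(i + 1, len(evens) - 1)])
              for i, o in enumerate(odds)]
    approx = [e + d / 2 for e, d in zip(evens, detail)]
    return approx, detail

linear = lambda a, b: (a + b) / 2    # convolution-like (linear) predictor
nonlinear = lambda a, b: max(a, b)   # max-pooling-like (nonlinear) predictor

x = [1.0, 3.0, 2.0, 8.0, 4.0, 6.0]
print(lift(x, linear))      # linear lifting step
print(lift(x, nonlinear))   # nonlinear (morphological) lifting step
```

Only the predictor changes between the two calls, which is the sense in which a lifting framework can host both a linear, convolution-like operation and a nonlinear, pooling-like one; making the predictor a learnable kernel would be the natural next step toward the fully lifted network the abstract claims.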

Description

technical field [0001] The present application relates to a self-reinforcing image and speech deep learning model, in particular to a lifting-network self-reinforcing image and speech deep learning model, and belongs to the technical field of image and speech deep learning. Background technique [0002] Machine learning is an important research field of artificial intelligence. Its most basic approach is to use algorithms that let computers learn from data to acquire the required knowledge, then generalize it to solve related problems. Viewed through the hierarchical structure of its models, machine learning has passed through two stages: shallow learning and deep learning. When machine learning algorithms are applied to build complex models to solve problems, an important factor affecting performance is the form of the data representation. The existing technology uses hand-designed features to find suitable data features to improve the ...

Claims


Application Information

IPC(8): G06N 3/04; G06N 3/08; G06N 5/04; G10L 15/16
CPC: G06N 3/08; G06N 5/04; G10L 15/16; G06N 3/045
Inventor 潘振华
Owner 潘振华