
187 results about "Stable training" patented technology

Robot navigation system and navigation method

The utility model discloses a robot navigation system and a navigation method. The robot navigation system comprises a navigation network formed by a plurality of wireless access points; a wireless communication module for transferring data and collecting the sequence of signal intensities measured against the wireless access points; a sensor for detecting whether the robot has encountered an obstacle; and a position server, connected with the wireless communication module and interacting with the navigation network, which stores the reference intensity sequences and runs the complex positioning algorithms. In the navigation method, the robot determines the next target position by comparing the intensity sequence collected in real time with the stored reference intensity sequences of the position points, repeating until it reaches the destination. When the robot encounters an obstacle, it records and marks the intensity sequence of that position so as to avoid entering the position again, thereby achieving intelligent learning. The robot can also upload the relevant information to the position server and obtain assisted positioning with the help of the server's database and positioning algorithms. The utility model is insensitive to environmental changes and has low maintenance cost.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
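The matching step the abstract describes, comparing a live intensity sequence against stored reference sequences, can be sketched as a nearest-neighbor lookup. This is a minimal illustration; all function and variable names are hypothetical, not from the patent:

```python
def nearest_position(live, references):
    """Return the reference position whose stored intensity
    sequence is closest (Euclidean) to the live scan.

    live: list of RSSI readings, one per access point
    references: dict mapping position name -> stored sequence
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    return min(references, key=lambda pos: dist(live, references[pos]))

def nearest_free_position(live, references, blocked):
    """Positions previously marked as blocked by an obstacle are
    excluded before matching, mimicking the marked ("demarcated")
    intensity sequences in the abstract."""
    free = {p: s for p, s in references.items() if p not in blocked}
    return nearest_position(live, free)
```

In a deployment, `references` would live in the position server's database, and the robot would query it over the wireless communication module.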

Dialog strategy online realization method based on multi-task learning

The invention discloses an online dialog strategy realization method based on multi-task learning. The method acquires corpus information of a man-machine dialog in real time, extracts the current user state features and user action features, and constructs the training input from them. The single cumulative reward value in the dialog strategy learning process is then split into a dialog-round-count reward value and a dialog-success reward value, which serve as training annotations; two different value models are optimized simultaneously through multi-task learning during online training. Finally the two reward values are merged and the dialog strategy is updated. The method adopts a reinforcement learning framework and optimizes the dialog strategy through online learning, so there is no need to manually design rules and strategies for each domain, and it can adapt to domain information structures of differing complexity and to data of different scales. Splitting the original single-cumulative-reward task and optimizing the parts simultaneously with multi-task learning yields a better network structure and lowers the variance of the training process.
Owner:AISPEECH CO LTD
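The reward split above, with one value estimate per training signal merged back for the policy update, can be sketched with a toy tabular estimator. This is an illustrative assumption; the patent's value models are neural networks, and all names here are hypothetical:

```python
class SplitValueEstimator:
    """Two value tables trained on separate reward signals
    (dialog round count vs. dialog success), merged at decision time."""

    def __init__(self, lr=0.5):
        self.round_value = {}    # penalizes long dialogs
        self.success_value = {}  # rewards successful dialogs
        self.lr = lr

    def update(self, state, round_reward, success_reward):
        # each head is moved toward its own annotation independently
        for table, r in ((self.round_value, round_reward),
                         (self.success_value, success_reward)):
            old = table.get(state, 0.0)
            table[state] = old + self.lr * (r - old)

    def merged(self, state):
        # the two partial values are summed back into the single
        # cumulative reward the policy actually optimizes
        return (self.round_value.get(state, 0.0)
                + self.success_value.get(state, 0.0))
```

The design point is that each head sees a lower-variance target than the combined reward, while the merged value still drives the strategy update.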

Video generation method combining variational auto-encoder and generative adversarial network

Active CN110572696A · Overcoming the problem of poor continuity between frames · Improved continuity between frames · Selective content distribution · Discriminator · Pattern recognition
The invention discloses a video generation method combining a variational auto-encoder and a generative adversarial network, belonging to the technical field of video generation. The method comprises the following steps: the generator of the generative adversarial network does not directly generate a video but generates a series of associated hidden variables; the hidden variables are passed through the trained decoder of the variational auto-encoder to generate a series of related images; the discriminator of the generative adversarial network does not directly discriminate the video, but passes the video through the encoder of the variational auto-encoder to obtain a series of low-dimensional hidden variables and discriminates those. With this method, a video can be generated from an input description text. The method overcomes the problem of poor inter-frame continuity in video generation and improves the inter-frame continuity of generated videos; because training is divided into two parts, first training the variational auto-encoder and then training the generative adversarial network on top of the trained auto-encoder, training is easier and more stable.
Owner:ZHEJIANG UNIV
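The data flow described above, with both generation and discrimination happening in the VAE's latent space, can be sketched shape-wise with stand-in linear maps. The random matrices below are assumptions standing in for trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in linear "networks" (random matrices) used only to show
# the data flow; real models would be trained neural networks.
LATENT, PIXELS, FRAMES = 8, 64, 5
decoder = rng.normal(size=(LATENT, PIXELS))   # trained VAE decoder
encoder = rng.normal(size=(PIXELS, LATENT))   # trained VAE encoder

def generate_video(noise):
    """The generator emits a sequence of associated latents, not
    pixels; the frozen VAE decoder turns each latent into a frame."""
    latents = np.cumsum(noise, axis=0)        # correlated over time
    return latents @ decoder                  # (FRAMES, PIXELS)

def discriminate(frames):
    """The discriminator never sees raw frames: it first projects
    them back into the low-dimensional latent space via the VAE
    encoder, and judges the latent sequence."""
    return frames @ encoder                   # (FRAMES, LATENT)

video = generate_video(rng.normal(size=(FRAMES, LATENT)))
judged = discriminate(video)
```

The cumulative sum is one simple way to make consecutive latents correlated, which is what gives the decoded frames their inter-frame continuity.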

High-resolution image generation method based on generative adversarial network

The invention discloses a high-resolution image generation method based on a generative adversarial network. The method comprises the following steps: first, preprocessing the data-set images to be learned to obtain a training set; constructing a generative adversarial network comprising a generative network and a discrimination network; pre-training the generative adversarial network and taking the pre-trained model parameters as its initialization parameters; then separately inputting the training set and the images produced by the generative network into the discrimination network, feeding the discrimination network's output back to the generative network, carrying out adversarial training, optimizing the network parameters of both the generative and the discrimination network, and ending training when the loss function converges to obtain a trained generative adversarial network; and finally, inputting a random data distribution into the trained generative network to realize high-resolution image generation. With the invention, the generated images are clearer, the training process is stable, and the network converges quickly.
Owner:NANJING UNIV OF INFORMATION SCI & TECH

Mode collapse resistant robust image generation method based on novel conditional generative adversarial network

The invention discloses a mode-collapse-resistant robust image generation method based on a novel conditional generative adversarial network. Compared with related-art methods, the disclosed technology has strong adaptability and good robustness: only the category of the required image needs to be specified at the usage phase, no manual intervention is required in the process, the training phase is short, the training process is stable, and the balance between the diversity and the authenticity of the generated images is well maintained. The main innovation lies in solving the mode collapse and training failure problems that arise during the training of other conditional generative methods. In addition, parameters are optimized separately for the classifier and the discriminator, avoiding the unstable training and mode collapse seen in methods of the same category. The invention further introduces a weight-sharing construction strategy, so that training speed is greatly improved and storage overhead is reduced without harming the original performance. The method is applicable to low-cost, large-scale, label-specified diversified image data generation tasks.
Owner:TIANJIN POLYTECHNIC UNIV

Target domain oriented unsupervised image conversion method based on generative adversarial network

The invention provides a target-domain-oriented unsupervised image conversion method based on a generative adversarial network, belonging to the field of computer vision and realizing an unsupervised cross-domain image-to-image conversion task. A self-encoding reconstruction network is designed, and a hierarchical representation of the source-domain image is extracted by minimizing its reconstruction loss. Meanwhile, through a weight-sharing strategy, the weights of the network layers that encode and decode high-level semantic information are shared between the two generative adversarial networks in the model, so that the output image keeps the basic structure and characteristics of the input image. Two discriminators are then used to judge, in their respective domains, whether an input image is real or generated. The method can effectively carry out unsupervised cross-domain image conversion and generate high-quality images; experiments show that it obtains good results on standard data sets such as CelebA.
Owner:DALIAN UNIV OF TECH
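The weight-sharing strategy, in which the high-level semantic layers of the two adversarial branches literally reference the same parameters, can be sketched as follows. Parameter dicts are a hypothetical stand-in for real network layers:

```python
# Layers that encode/decode high-level semantics point at the SAME
# parameter dict, so an update through either branch moves both;
# low-level layers stay domain-specific.
shared_semantic = {"w": [0.1, 0.2]}           # shared high-level layer

branch_a = {"low_level": {"w": [1.0]}, "semantic": shared_semantic}
branch_b = {"low_level": {"w": [2.0]}, "semantic": shared_semantic}

def apply_update(layer, grads, lr=0.1):
    """One gradient step on a layer's weights (in place)."""
    layer["w"] = [w - lr * g for w, g in zip(layer["w"], grads)]
    return layer["w"]

# Updating the shared layer through branch A is visible in branch B,
# because both hold a reference to the same object.
apply_update(branch_a["semantic"], [1.0, 1.0])
```

This reference-sharing is the mechanism that forces both generative adversarial networks to agree on structure while keeping their domain-specific layers free.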

Pedestrian re-identification method based on knowledge distillation

The invention discloses a pedestrian re-identification method based on knowledge distillation. The method comprises the steps of inputting a pedestrian image training set into a teacher network and inputting the same data set into a student network; distilling simultaneously at multiple stages of the whole backbone network, through the combined effect of student-network transfer, feature distillation positions, and distance loss functions, so that the student network's feature output continually approaches the features output by the teacher network; updating the student model's parameters by minimizing a distillation loss function, thereby training the student network; and carrying out distance measurement on the resulting feature vectors to retrieve the pedestrian target image with the highest similarity. The accuracy of the student network (resnet18) is thereby greatly improved, approaching that of the teacher network (resnet50). The method realizes person re-identification through knowledge-distillation transfer learning and adopts the idea of replacing a large model with a small one, which effectively reduces computational complexity while preserving the accuracy of the student model.
Owner:KUNMING UNIV OF SCI & TECH
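The multi-stage feature distillation loss can be sketched as a sum of mean-squared distances between student and teacher features collected at several backbone stages. This is a minimal sketch; the patent's exact distance loss is not specified here:

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats):
    """Sum of mean-squared distances between student and teacher
    feature maps, taken at several stages of the backbone at once.
    Minimizing this pulls the student's intermediate features toward
    the teacher's, stage by stage."""
    return sum(float(np.mean((s - t) ** 2))
               for s, t in zip(student_feats, teacher_feats))
```

During training only the student's parameters are updated against this loss; the teacher's features act as fixed targets.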

Calligraphy word stock automatic restoration method and system based on style migration

Pending CN110570481A · Automatic extraction · Improve the situation where there is a large deformation · Texturing/coloring · Neural architectures · Code module · Discriminator
The invention provides a calligraphy font library automatic restoration method and system based on style migration. The method comprises the steps that an input font and a standard-style font are set; the input font image is fed into a coding module, which extracts latent feature information; a conversion module converts this feature information into the feature information of the standard-style font; a decoding module processes it to obtain a generated font image; the input font image and the generated font image are fed into a discriminator, which outputs the probability that the generated font image is a real standard-style font; similarly, the input font image and the standard-style font image are fed into the discriminator to obtain the probability that the standard-style font image is a real standard-style font; and finally the loss functions of the generator and the discriminator are computed from the two probabilities. An optimizer adjusts the generator and the discriminator according to the loss functions until both converge, yielding a trained generator; a complete font library of standard-style fonts can then be obtained with the trained generator.
Owner:CHINA UNIV OF GEOSCIENCES (WUHAN)

Vehicle image optimization method and system based on adversarial learning

The invention discloses a vehicle image optimization method and system based on adversarial learning. The method comprises the steps of collecting vehicle images photographed at different angles and dividing them into standard-scene images and non-standard-scene images; preprocessing the non-standard images to obtain a low-quality data set; constructing a vehicle image optimization model based on a generative adversarial network, composed of a generator, a discriminator and a feature extractor; training the model by setting a loss function, computing network weight gradients through back-propagation, and updating the model's parameters; and, after training, keeping the generator as the final vehicle image optimization model, which takes multi-scene vehicle images as input and outputs optimized standard-scene images. The invention realizes the migration from complex-scene vehicle images to standard-scene vehicle images, optimizes image quality, and improves vehicle detection and recognition accuracy.
Owner:JINAN UNIVERSITY

Handwritten numeral generation method based on parameter optimization generative adversarial network

The invention provides a handwritten numeral generation method based on a parameter-optimized generative adversarial network. The method comprises the following steps: preparing a handwritten digit data set as the sample training data, sampling it to obtain real data, and initializing random noise data; establishing the generative adversarial network and initializing the generator network weight parameter theta and the discriminator network weight parameter omega; establishing the generator and discriminator loss functions through the earth mover's (Wasserstein) distance W, and adding a gradient penalty term to the discriminator loss function; and iteratively training the generator and discriminator networks, optimizing theta and omega. The embodiment of the invention solves the original generative adversarial network's problems of slow convergence, unstable training and high computational overhead, realizes the optimization of the network, fully improves its performance, and enables the generator to produce higher-quality handwritten digit images.
Owner:HEFEI UNIV OF TECH
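The gradient penalty term added to the discriminator loss can be illustrated analytically for a linear critic, whose gradient with respect to its input is constant, so no automatic differentiation is needed. This is a deliberately simplified assumption; real critics require autodiff to compute the input gradient at the interpolated point:

```python
import numpy as np

def gradient_penalty_linear(w, x_real, x_fake, lam=10.0, rng=None):
    """Gradient penalty for a LINEAR critic f(x) = w . x.
    Its input gradient is w everywhere, so the penalty reduces to
    lam * (||w|| - 1)^2 regardless of the interpolation point."""
    rng = rng or np.random.default_rng(0)
    eps = rng.uniform()                        # random mixing factor
    x_hat = eps * x_real + (1 - eps) * x_fake  # point between samples
    grad = w                                   # d(w.x)/dx = w, for any x_hat
    return lam * (np.linalg.norm(grad) - 1.0) ** 2
```

The penalty drives the critic toward unit gradient norm, which is what keeps Wasserstein-GAN training stable compared with weight clipping.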

Neural machine translation system training acceleration method based on stacking algorithm

The invention discloses a training acceleration method for a deep neural machine translation system based on a stacking algorithm. The method comprises the following steps: constructing an initial Transformer model whose encoder comprises one coding block, along with the decoder; inputting sentences expressed as dense vectors into the encoder and decoder, and writing the encoder's input into a memory network; after each coding block completes its computation, writing its output vector into the memory network, then accessing the memory network to perform linear aggregation and obtain the current coding block's output; training the current model; copying the parameters of the top coding block to construct a new block and stacking it on the current encoder, yielding a model with one more coding block; repeating this process to build a neural machine translation system with an ever deeper encoder, training it to the target number of layers until convergence; and translating with the trained model. With this method a network with 48 coding layers can be trained, improving model performance while obtaining a 1.4x speed-up ratio.
Owner:SHENYANG YAYI NETWORK TECH CO LTD
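The copy-and-stack step above can be sketched as follows. Parameter dicts stand in for real coding blocks, and the per-step training is elided:

```python
import copy

def stack_encoder(encoder_blocks):
    """Grow the encoder by one block: deep-copy the parameters of
    the current top block and stack the copy on top, so the new
    block starts from an already-trained initialization."""
    new_block = copy.deepcopy(encoder_blocks[-1])
    return encoder_blocks + [new_block]

def grow_to_depth(encoder_blocks, target_layers):
    """Repeat train-then-stack until the target depth is reached."""
    while len(encoder_blocks) < target_layers:
        # ... train the current model to convergence here ...
        encoder_blocks = stack_encoder(encoder_blocks)
    return encoder_blocks
```

Deep-copying matters: the new block must start from the top block's weights but then diverge independently during subsequent training.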

Optimization model method based on generative adversarial network and application

Active CN110097185A · Boost parameter training process · Stable training · Logistics · Neural learning methods · Discriminator · Local optimum
The invention discloses an optimization model method based on a generative adversarial network, called GAN-O, and its application. The method comprises the following steps: expressing the application (such as logistics distribution optimization) as a function optimization problem; establishing a function optimization model based on the generative adversarial network according to the test function and test dimension of the problem, including constructing a generator and a discriminator; training the function optimization model; and iteratively computing with the trained model to obtain an optimal solution, thereby realizing optimization based on the generative adversarial network. The method obtains a better local optimum in a shorter time, makes the training of the deep neural network stable, and has superior local search capability. It can be used in many application scenarios, such as logistics distribution problems, that can in practice be converted into function optimization problems; the application field is wide, a large number of practical problems can be solved, and the popularization and application value is high.
Owner:PEKING UNIV

Mechanical arm action learning method and system based on third-person imitation learning

Active CN111136659A · Break the balance of the game · Game balance maintenance · Programme-controlled manipulator · Third party · Automatic control
The invention discloses a mechanical arm action learning method and system based on third-person imitation learning, used for automatic control of a mechanical arm so that it can automatically learn how to complete the corresponding control task by watching a third-party demonstration. Samples exist in video form, avoiding the need for a large number of sensors to obtain state information. An image-difference method is used in the discriminator module so that it can ignore the appearance and environment background of the demonstrating object, allowing third-party demonstration data to be used for imitation learning and greatly reducing the sample acquisition cost. A variational discriminator bottleneck is used in the discriminator module to restrain the discriminator's accuracy on demonstrations generated by the mechanical arm, better balancing the training of the discriminator module and the control strategy module. The system can quickly imitate a user's demonstrated action, is simple and flexible to operate, and places low requirements on the environment and the demonstrators.
Owner:NANJING UNIV
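The variational discriminator bottleneck limits the information in the discriminator's latent encoding; a scalar-Gaussian version of that constraint can be sketched as follows. This is a simplified illustration, not the patent's exact formulation:

```python
import math

def gaussian_kl(mu, sigma):
    """KL( N(mu, sigma^2) || N(0, 1) ) for a scalar latent."""
    return 0.5 * (mu ** 2 + sigma ** 2 - 1.0) - math.log(sigma)

def bottleneck_penalty(mus, sigmas, capacity):
    """Penalize the average KL of the discriminator's latent
    encodings when it exceeds a target information capacity
    (hinged at zero). Keeping the KL small limits how much the
    discriminator can know, restraining its accuracy so the
    policy/generator side of training is not overwhelmed."""
    avg_kl = sum(gaussian_kl(m, s)
                 for m, s in zip(mus, sigmas)) / len(mus)
    return max(0.0, avg_kl - capacity)
```

In practice this penalty is added to the discriminator loss with a (possibly adaptive) multiplier, so the discriminator only pays a cost once its encoding exceeds the capacity budget.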

Image super-resolution method based on densely linked neural network, storage medium and terminal

Inactive CN109544457A · Improve the ability to extract low-frequency and high-frequency features of images · Improve the effect · Geometric image transformation · Neural architectures · Image resolution · Deconvolution
The invention discloses an image super-resolution method based on a densely linked neural network, a storage medium and a terminal. The method includes: preprocessing the image; feature extraction: building a densely linked neural network, feeding the low-resolution image Input in at the network entrance, and extracting the feature information contained in Input after computation; predicting the super-resolution image and updating the network parameters: performing upsampling/deconvolution on the extracted features to obtain the predicted image Predict, computing the error between Predict and the real image Label, and back-propagating to update the parameters of the densely linked network; and performing super-resolution reconstruction. The method markedly improves a deep neural network's ability to extract the low-frequency and high-frequency features of an image, improves the super-resolution effect, and increases the information a picture can provide, so the invention applies wherever a high-resolution image with more detail is desired.
Owner:UNIV OF ELECTRONIC SCI & TECH OF CHINA
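The dense linking pattern, in which every layer receives the concatenation of all earlier feature maps, can be sketched with stand-in linear layers. The random matrices below are assumptions replacing trained convolutions:

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, rng=None):
    """Densely linked block: every layer receives the concatenation
    of ALL previous feature maps, so low- and high-frequency features
    extracted early stay directly reachable by later layers."""
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)   # all prior features
        w = rng.normal(size=(inp.shape[-1], growth))
        features.append(np.maximum(inp @ w, 0.0)) # ReLU "layer"
    return np.concatenate(features, axis=-1)
```

Each layer adds `growth` channels, so with an 8-channel input and three layers the output has 8 + 3 * 4 = 20 channels; in the full method these features feed the upsampling/deconvolution stage.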

Human face forehead area detection and positioning method and system of low-resolution thermodynamic diagram

The invention discloses a face forehead area detection and positioning method for low-resolution thermal images (thermodynamic diagrams). The method comprises the following steps: removing face-free and blurred images from the thermal images collected by an infrared camera to obtain an effective thermal image set, dividing it into a training set and a test set, and marking corresponding labels; performing data enhancement on the labeled training and test sets; combining bidirectional multi-scale features with a rapid normalization fusion method to form a multi-scale feature fusion network, TwFPN; building a DEfficientNet face forehead detection model from the weighted multi-scale feature fusion network TwFPN and a joint scaling algorithm; inputting the labeled training set into the DEfficientNet model and extracting optimal forehead region features to obtain an optimal face forehead detection model; and inputting the labeled test set into the optimal model to obtain the forehead area of the face, adding a detection box to the forehead area.
Owner:CHENGDU DONGFANG TIANCHENG INTELLIGENT TECH CO LTD
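The rapid normalization fusion the abstract mentions can be sketched as a weighted sum whose non-negative weights are normalized by their sum rather than by a softmax. This is a BiFPN-style illustration; the details here are assumptions, not the patent's exact formulation:

```python
import numpy as np

def fast_normalized_fusion(feature_maps, raw_weights, eps=1e-4):
    """Fuse same-shaped feature maps with learnable per-input weights:
    clip weights to be non-negative, normalize by their sum plus a
    small epsilon (cheaper than a softmax), then take the weighted
    sum of the inputs."""
    w = np.maximum(np.asarray(raw_weights, dtype=float), 0.0)
    w = w / (w.sum() + eps)
    stacked = np.stack(feature_maps)          # (n_inputs, ...)
    return np.tensordot(w, stacked, axes=1)   # weighted sum
```

In a bidirectional feature pyramid, each fusion node would apply this to the feature maps arriving from its top-down and bottom-up neighbors.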