
Method for learning non-player character combat strategies on basis of deep Q-learning networks

A non-player character learning technology based on deep Q-learning networks, applied in the field of non-player character combat strategy learning. It can solve problems such as low efficiency, poor flexibility, and the inability to respond differently to players' operations, and achieves the effects of reduced manual labor and rapid automatic adjustment.

Active Publication Date: 2018-06-29
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0003] For the combat strategies of non-player characters in games, a behavior tree is used to script a fixed response to each state. The problems with this method are: first, it is difficult to guarantee the quality of the response actions, which can only be judged manually; second, efficiency is low, because designing these responses takes a great deal of time and effort; finally, flexibility is poor, the character cannot respond differently to the player's operations, and its loopholes are easy to find.
[0004] Reinforcement learning is a kind of machine learning method that takes a state as input and outputs a decision action; at every step it receives a return value from the environment, and its purpose is to maximize that return, with the action ultimately selected according to the magnitude of the return values. However, traditional reinforcement learning has performed poorly on nonlinear problems, so it is difficult to apply directly to the field of game combat strategy learning.
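
As an illustration of the loop described in [0004] (the patent itself uses a deep Q-learning network rather than a table), a minimal tabular Q-learning sketch follows; the state and action sets, learning rate, discount factor, and exploration rate are assumed placeholder values, not values from the patent.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1      # assumed learning rate, discount, exploration rate
    q_table = defaultdict(float)                # maps (state, action) -> estimated return

    def select_action(state, actions):
        """Epsilon-greedy: usually take the action with the largest estimated return."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: q_table[(state, a)])

    def update(state, action, reward, next_state, actions):
        """One-step temporal-difference update toward the observed return value."""
        best_next = max(q_table[(next_state, a)] for a in actions)
        td_error = reward + GAMMA * best_next - q_table[(state, action)]
        q_table[(state, action)] += ALPHA * td_error

A tabular method like this breaks down when the state space is large or continuous, which is why the patent replaces the table with a deep neural network.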



Examples


Detailed Description of the Embodiments

[0043] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0044] Step (1): Determine the input state set S of the learning NPC. The combat strategy of the learning NPC refers to the learning NPC's ability to make different output actions when it fights one-on-one against a sparring character. The characters in the game can be divided into two categories: the learning non-player character (LNPC) and the sparring character (SC). The learning non-player character is a non-player character based on a deep Q-learning network; this type can generate different samples through repeated interactions with sparring characters, so as to continuously learn new combat strategies. Sparring characters can be divided into player characters (PC) and fixed non-player characters (FNPC); the player character refers to the char...
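
As a concrete but hypothetical reading of step (1), the sketch below encodes the state fields named in the abstract, namely the locations, skill cooldown times and control states of both the learning NPC and the sparring character, into a single input vector; the field layout, dimensions and normalization are assumptions for illustration only.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class CharacterState:
        """Observation fields for one character; names follow the abstract, sizes are assumed."""
        x: float                       # location on the map (assumed normalized to [0, 1])
        y: float
        skill_cooldowns: List[float]   # remaining cooldown of each skill, in seconds
        controlled: float              # 1.0 if under a control effect (e.g. stunned), else 0.0

    def encode_state(lnpc: CharacterState, sparring: CharacterState) -> List[float]:
        """Concatenate both characters' fields into the input state vector S for the network."""
        def flatten(c: CharacterState) -> List[float]:
            return [c.x, c.y, *c.skill_cooldowns, c.controlled]
        return flatten(lnpc) + flatten(sparring)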



Abstract

The invention discloses a method for learning non-player character combat strategies on the basis of deep Q-learning networks. In the method, the locations, skill cooldown times and control states of the learning non-player characters and of the sparring characters are used as input states; all skills of the learning non-player characters are used as the output action set; deep Q-learning networks are used as the learning algorithm; the blood-volume (HP) difference between the two sides' characters is used as the reward for the deep Q-learning networks; and, with minimization of the temporal-difference error as the target, back-propagation computation is carried out so that the hidden-layer and output-layer weight coefficients of the deep neural networks are updated. Non-player character combat strategies can be generated automatically by the method; accordingly, efficiency and flexibility are improved, combat capability is reinforced, and the challenge and interest of games are obviously enhanced.
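
For concreteness, the sketch below mirrors the kind of update the abstract describes: the state vector is fed to a deep Q-network whose outputs are one Q-value per skill of the learning NPC, the blood-volume (HP) difference is used as the reward, and back-propagation on the squared temporal-difference error updates the hidden-layer and output-layer weights. The network size, optimizer and hyperparameters are assumptions, and standard DQN additions such as a replay buffer and target network are omitted for brevity.

    import torch
    import torch.nn as nn

    STATE_DIM, NUM_SKILLS, GAMMA = 12, 6, 0.99     # assumed input size, skill count, discount

    # Small fully connected Q-network: state vector in, one Q-value per skill out.
    q_net = nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, NUM_SKILLS),
    )
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    def td_update(state, action, reward, next_state, done):
        """One deep Q-learning step; reward is the HP difference gained on this transition."""
        s = torch.as_tensor(state, dtype=torch.float32)
        s_next = torch.as_tensor(next_state, dtype=torch.float32)

        q_sa = q_net(s)[action]                               # Q(s, a) for the chosen skill
        with torch.no_grad():
            q_target = reward + (0.0 if done else GAMMA * q_net(s_next).max().item())

        loss = (q_sa - q_target) ** 2                         # squared temporal-difference error
        optimizer.zero_grad()
        loss.backward()                                       # back-propagate through all layers
        optimizer.step()                                      # update hidden/output weight coefficients
        return float(loss)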

Description

Technical Field
[0001] The invention relates to a method for learning game combat strategies, belongs to the field of machine learning, and in particular relates to a method for learning the combat strategies of non-player characters based on a deep Q-learning network.
Background Technique
[0002] The combat strategy of non-player characters (NPC) in a game is a very important part of the game experience, especially in fighting games; the quality of the combat strategy directly affects the overall evaluation and sales of the entire game. A good combat strategy includes reasonable positioning, instant response to key skills, knowing how to use certain skills to counter enemy units, and so on.
[0003] For the combat strategies of non-player characters in games, a behavior tree is used to script a fixed response to each state. The problems with this method are: first, it is difficult to guarantee the quality of the response actions, which can only be ...

Claims


Application Information

IPC(8): A63F13/67, A63F13/55, A63F13/833, G06N3/04, G06N3/08
CPC: A63F13/55, A63F13/67, A63F13/833, G06N3/084, A63F2300/6027, A63F2300/8029, G06N3/045
Inventor: 卢建刚, 卢宇鹏, 刘勇
Owner: ZHEJIANG UNIV