31 results about How to "Increase winning percentage" patented technology

Multi-module automatic trading system based on network distributed computing

Inactive | CN106934716A | Reduce human error | Simplify quantitative modeling procedures | Finance | Risk control | Transaction management
The invention discloses a multi-module automatic trading system based on network distributed computing. The system comprises a historical and real-time market data processing module, a real-time market probability distribution prediction module, a trading strategy design and development module, a trading strategy historical backtesting and evaluation module, a trading strategy market access and transaction management module, a system monitoring and risk control module, and a transaction report and quantitative analysis module. The system serves various types of quantitative traders: it simplifies the quantitative modeling procedure, lowers the barrier to quantitative trading, improves the efficiency of strategy testing and evaluation, and safeguards transaction security. It automatically and intelligently analyzes market and trading trends, provides real-time price probability distribution predictions for the main traded products, and supplies low-latency, high-success-rate signals to trading strategies, thereby generating timely and accurate trading signals. It also provides an efficient tool and comprehensive, accurate data for users to construct, back-test and optimize their strategies.
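As a rough illustration of how the modules listed in the abstract might interact, the Python sketch below wires up a toy pipeline from market data to execution. All class names, method names and thresholds are assumptions made for exposition and are not taken from the patent.

from dataclasses import dataclass


@dataclass
class Signal:
    symbol: str
    side: str          # "buy" or "sell"
    confidence: float


class MarketDataModule:
    def latest_ticks(self):
        # Placeholder: would pull historical and real-time quotes here.
        return [{"symbol": "IF2301", "price": 3900.0}]


class ProbabilityModule:
    def predict_distribution(self, ticks):
        # Placeholder: would return a price probability distribution per product.
        return {t["symbol"]: {"up": 0.55, "down": 0.45} for t in ticks}


class StrategyModule:
    def signals(self, dist):
        # Emit a signal only when the predicted edge favors buying.
        return [Signal(s, "buy", p["up"]) for s, p in dist.items() if p["up"] > 0.5]


class RiskModule:
    def allow(self, signal: Signal) -> bool:
        return signal.confidence >= 0.55   # toy risk threshold


class ExecutionModule:
    def submit(self, signal: Signal):
        print(f"order: {signal.side} {signal.symbol} (conf={signal.confidence:.2f})")


def run_once(md, prob, strat, risk, exe):
    # One pass through the pipeline: data -> prediction -> strategy -> risk -> execution.
    for sig in strat.signals(prob.predict_distribution(md.latest_ticks())):
        if risk.allow(sig):
            exe.submit(sig)


run_once(MarketDataModule(), ProbabilityModule(), StrategyModule(),
         RiskModule(), ExecutionModule())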
Owner:燧石科技(武汉)有限公司

Combined type toy spinning top capable of automatically splitting

The invention discloses a combined toy spinning top capable of splitting automatically. The top comprises at least two top bodies butted vertically, each with an elastic tip. When the upper top body is impacted or blocked during rotation, it automatically separates from the lower top body and is ejected by its own elastic tip, forming two tops that spin independently. During a battle the top can thus split into two or three parts, which greatly increases its attacking power and raises its win rate. Because the tips are elastic, the upper body is ejected mainly by the spring force of its own tip when the top splits, so the impact on the lower body is small; the elastic tip also cushions the landing, achieving a soft landing that protects the tips and keeps the top spinning stably.
Owner:ALPHA GRP CO LTD +2

Air combat maneuvering strategy generation technology based on deep random game

The invention discloses a short-range air combat maneuvering strategy generation technique based on a deep stochastic game. The technique comprises the following steps: first, a training environment for combat-aircraft game confrontation is constructed according to a 1v1 short-range air combat process, and the enemy's maneuvering strategy is set; second, taking the stochastic game as the framework, agents for both sides of the air combat confrontation are constructed, and the state space, action space and reward function of each agent are determined; third, a neural network is built with a maximin DQN algorithm that combines the stochastic game with deep reinforcement learning, and our agent is trained; finally, from the trained network, the optimal maneuvering strategy for a given air combat situation is obtained by linear programming and used in game confrontation with the enemy. By combining the ideas of stochastic games and deep reinforcement learning, the proposed maximin DQN algorithm yields an optimal air combat maneuvering strategy that can be applied to existing air combat maneuver guidance systems, making effective decisions accurately and in real time to guide a fighter into a favorable position.
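The core of the maximin-plus-linear-programming step mentioned above can be illustrated with a small linear program: given a matrix of Q-values for one state (our actions as rows, the enemy's as columns), the mixed strategy that maximizes the worst-case value is found by LP. The sketch below, using scipy, is a generic illustration of that step under assumed matrix shapes and a toy payoff; it is not the patent's exact algorithm.

import numpy as np
from scipy.optimize import linprog


def maximin_policy(q_matrix):
    """Solve max_x min_j sum_i x[i] * Q[i, j] by linear programming.

    q_matrix[i, j] is the value of our action i against the opponent's action j
    (e.g. the output of a minimax-style Q-network for one state).
    Returns the mixed strategy x and the game value v.
    """
    n_our, n_opp = q_matrix.shape
    # Decision variables: [x_1, ..., x_n, v]; maximising v means minimising -v.
    c = np.zeros(n_our + 1)
    c[-1] = -1.0
    # For every opponent action j: v - x^T Q[:, j] <= 0.
    A_ub = np.hstack([-q_matrix.T, np.ones((n_opp, 1))])
    b_ub = np.zeros(n_opp)
    # Probabilities sum to one.
    A_eq = np.hstack([np.ones((1, n_our)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_our + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_our], res.x[-1]


# Toy 3x3 payoff for one state: rows are our maneuvers, columns the enemy's.
policy, value = maximin_policy(np.array([[0.2, -0.1, 0.4],
                                         [0.0,  0.3, -0.2],
                                         [0.5, -0.3,  0.1]]))
print(policy, value)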
Owner:CHENGDU RONGAO TECH

Toy gyro with good defensiveness

The invention provides a toy gyro with good defensiveness. The gyro comprises a gyro cap, a gyro disc, a gyro base and a gyro tip, and further comprises an elastic shock-absorbing piece for buffering the impact force when the gyro is struck. The shock-absorbing piece has an elastic part for resisting impacts and a connection part fixedly connected to the gyro base; through this connection the elastic part sits around the periphery of the gyro base. When the gyro battles other gyros and an opponent strikes the elastic part, the impact force is reduced by the elastic deformation of the elastic part, which cushions the blow and keeps the gyro spinning stably, while a counter-force is returned to the opponent to interfere with it, thereby improving the gyro's win rate. The toy is highly entertaining: a player can change the elasticity of the shock-absorbing piece according to his or her own judgement to achieve a better anti-impact effect. This novel way of playing can attract more players and at the same time train children's manipulative and competitive abilities.
Owner:ALPHA GRP CO LTD +2

Multi-agent reinforcement learning method and system based on population training

Active | CN112561032A | Solve the problem of a small amount of training data | Artificial life | Neural architectures | Specific population | Engineering
The invention relates to a multi-agent reinforcement learning method and system based on population training. The method comprises the following steps: obtaining a first training set from game videos; training a multi-layer fully convolutional LSTM network with the first training set to obtain a first agent; having the first agent play against itself and, after a set time period, obtaining a first population; selecting a second agent, a first agent set and a second agent set from the first population; having the first agent fight these three groups of agents simultaneously, storing and updating the first population until any one of the three selected groups is defeated, thereby obtaining a second population; selecting a replacement agent from the second population to take over for the defeated agent and continue fighting the first agent, storing and updating the second population to obtain a third population; and, once the number of agents in the third population reaches a preset value, outputting the first agent. With the invention, an agent capable of simulating combat command and control for unmanned systems can be trained.
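A highly simplified sketch of the population loop described in the abstract is given below. The Agent class, its numeric skill stand-in, and the battle and replacement rules are all illustrative assumptions; trained networks are replaced by placeholder values.

import random


class Agent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.skill = random.random()     # stand-in for a trained network

    def clone(self, new_id):
        # A fresh variant of this agent, slightly perturbed.
        child = Agent(new_id)
        child.skill = min(1.0, self.skill + random.uniform(0.0, 0.1))
        return child


def battle(challenger, opponent):
    """Return True if the challenger wins (placeholder for a real episode)."""
    return challenger.skill + random.gauss(0, 0.1) > opponent.skill


def grow_population(first_agent, target_size):
    population = [first_agent.clone("agent-1")]
    while len(population) < target_size:
        opponents = random.sample(population, k=min(3, len(population)))
        defeated = [op for op in opponents if battle(first_agent, op)]
        for op in defeated:
            # A defeated opponent is replaced and the updated population stored.
            population.remove(op)
            population.append(first_agent.clone(f"agent-{len(population) + 1}"))
        # The population keeps growing with new variants of the first agent.
        population.append(first_agent.clone(f"agent-{len(population) + 1}"))
    return population, first_agent


pop, trained = grow_population(Agent("agent-0"), target_size=8)
print(len(pop), trained.agent_id)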
Owner:NO 15 INST OF CHINA ELECTRONICS TECH GRP

Wargame multi-entity asynchronous collaborative decision-making method and device based on reinforcement learning

The invention belongs to the technical field of intelligent decision making and relates to a wargame multi-entity asynchronous collaborative decision-making method and device based on reinforcement learning. The method comprises the following steps: obtaining a wargame deduction environment and a multi-entity asynchronous collaborative decision-making problem, and modeling and analyzing the problem to obtain an initial model; based on the initial model, using a multi-agent deep reinforcement learning algorithm to build an agent network model and a hybrid evaluation network model; training the agent network model and the hybrid evaluation network model to obtain a collaborative decision framework; reconstructing the loss function of the multi-agent deep reinforcement learning algorithm, either by setting a weighting operator or by optimizing the algorithm with multi-step returns; updating the collaborative decision framework with the reconstructed loss function; and using the updated framework to make decisions for the asynchronous collaboration of multiple entities. The method realizes multi-entity asynchronous collaborative decision-making in wargame deduction.
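The abstract mentions reconstructing the loss function either with a weighting operator or with multi-step returns. The sketch below shows generic versions of those two ingredients (an n-step return and a weighted squared TD error); the function names, the discount factor and the weighting scheme are assumptions, not the patent's formulation.

import numpy as np


def n_step_return(rewards, bootstrap_value, gamma=0.99, n=5):
    """Multi-step return G_t = r_t + ... + gamma^(n-1) * r_(t+n-1) + gamma^n * V(s_(t+n)).

    A generic sketch of the multi-step-return idea; computed by folding the
    rewards backwards onto the bootstrap value.
    """
    g = bootstrap_value
    for r in reversed(rewards[:n]):
        g = r + gamma * g
    return g


def weighted_td_loss(td_errors, weights):
    """Squared TD error reweighted per entity by an (assumed) weighting operator."""
    td_errors = np.asarray(td_errors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.mean(weights * td_errors ** 2))


print(n_step_return([1.0, 0.0, 0.5, 0.2, 0.1], bootstrap_value=2.0))
print(weighted_td_loss([0.3, -0.2, 0.7], weights=[1.0, 0.5, 2.0]))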
Owner:NAT UNIV OF DEFENSE TECH

Threat evaluation method for aerial target in beyond-visual-range air combat

The invention provides a threat evaluation method for aerial targets in beyond-visual-range air combat, and relates to the technical field of threat evaluation in beyond-visual-range air combat. The method comprises the following steps: first, establishing a missile model and calculating the flight data of the missile carried by the target enemy aircraft; determining the turning angle for our aircraft's escape from the enemy's azimuth angle, and from it the escape route and folded escape distance; determining the enemy aircraft's turning angle and turning time from its azimuth angle and velocity direction; judging whether our aircraft is within range of the enemy missile and, combining the flight speeds of both aircraft, judging the result of the enemy's attack on our aircraft and the total attack time; and finally, determining the enemy aircraft's threat evaluation value to our aircraft from the attack result. The method takes energy as its basis and time as a unified dimension, effectively improving the accuracy of absolute threat evaluation; the output units are unified, so the accuracy of relative threat evaluation is also effectively improved.
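To make the time-based comparison concrete, the toy sketch below contrasts an assumed escape time with an assumed missile flight time and maps the two onto a bounded threat score. Every quantity, formula and parameter value here is an illustrative assumption rather than the patent's model.

def time_to_escape(distance_to_envelope_edge_m, own_speed_mps, turn_time_s):
    """Time for our aircraft to turn away and leave the missile envelope (assumed)."""
    return turn_time_s + distance_to_envelope_edge_m / own_speed_mps


def missile_flight_time(range_to_target_m, avg_missile_speed_mps):
    """Time for the enemy missile to cover the range to us (assumed constant speed)."""
    return range_to_target_m / avg_missile_speed_mps


def threat_value(t_escape_s, t_missile_s):
    """Threat grows as the escape time dominates the missile's time to reach us;
    clipped to the interval [0, 1]."""
    return max(0.0, min(1.0, t_escape_s / (t_escape_s + t_missile_s)))


t_esc = time_to_escape(distance_to_envelope_edge_m=8000, own_speed_mps=300, turn_time_s=12)
t_mis = missile_flight_time(range_to_target_m=25000, avg_missile_speed_mps=900)
print(round(threat_value(t_esc, t_mis), 3))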
Owner:SHENYANG AEROSPACE UNIVERSITY

Soccer robot, and control system and control method of soccer robot

The invention discloses a control system for a soccer robot. The control system comprises a visual processor and a motion controller. The visual processor comprises an image processing module, a positioning module and a communication module: the image processing module processes images collected by the soccer robot to generate image information; the positioning module, connected to the image processing module, processes the image information to generate the robot's position and distance information; and the communication module, connected to the positioning module, sends the position and distance information to other soccer robots and receives instruction information sent by them. The motion controller includes a motion state management and decision-making module, which is connected to the visual processor, receives the position, distance and instruction information, and controls the robot's movement accordingly. With this control system, teamwork among multiple soccer robots can be realized.
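A schematic sketch of the module layout described above follows: image processing feeds positioning, positioning feeds communication, and the motion controller fuses position and instruction information into a motion command. The class and method names, and the decision rule, are assumptions for illustration only.

class ImageProcessingModule:
    def process(self, raw_frame):
        # Placeholder detection result extracted from the camera frame.
        return {"ball_pixel": (320, 240), "frame": raw_frame}


class PositioningModule:
    def locate(self, image_info):
        # Convert image-space detections into field position and distance.
        return {"robot_xy": (1.2, 3.4), "ball_distance_m": 0.8}


class CommunicationModule:
    def broadcast(self, position_info):
        print("sending to teammates:", position_info)

    def receive_instructions(self):
        return {"role": "attacker"}


class MotionController:
    def decide_and_move(self, position_info, instructions):
        # Motion state management and decision making: pick a motion command
        # from the fused position and instruction information.
        if instructions.get("role") == "attacker" and position_info["ball_distance_m"] < 1.0:
            return "dribble_towards_goal"
        return "move_to_support_position"


vision = ImageProcessingModule()
positioning = PositioningModule()
comms = CommunicationModule()
motion = MotionController()

info = positioning.locate(vision.process(raw_frame=None))
comms.broadcast(info)
print(motion.decide_and_move(info, comms.receive_instructions()))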
Owner:SHENYANG SIASUN ROBOT & AUTOMATION

Object positioning method and device, storage medium and electronic equipment

The invention discloses an object positioning method and device, a storage medium and electronic equipment. The method comprises the following steps: when a first virtual object controlled by a shooting-game client is equipped with a positioning-sensing prop, obtaining the quantity of virtual resources accumulated after the first virtual object performs shooting actions within a target time period, the prop being used to scan, at regular intervals, second virtual objects in a target scanning area associated with the first virtual object, where the second virtual objects belong to a different camp from the first; when the accumulated resource quantity meets a triggering condition, obtaining the position of at least one second virtual object with the positioning-sensing prop; and displaying the positions of the second virtual objects in a virtual scanning panel provided by the prop. The invention solves the technical problem of the low positioning efficiency of object positioning methods in the prior art.
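The trigger-and-scan flow can be sketched as follows: shooting actions accumulate resources inside a time window, and once a threshold is met the prop scans an area for enemy objects and reports their positions. The class name, thresholds and the circular scan area are assumptions used only to illustrate the flow.

import time


class PositioningProp:
    def __init__(self, trigger_threshold=100, window_s=30):
        self.trigger_threshold = trigger_threshold
        self.window_s = window_s
        self.events = []          # (timestamp, resource_amount) pairs

    def record_shot(self, resource_amount):
        self.events.append((time.time(), resource_amount))

    def accumulated(self):
        # Resources earned from shooting actions inside the target time window.
        cutoff = time.time() - self.window_s
        return sum(amount for ts, amount in self.events if ts >= cutoff)

    def scan(self, enemies, own_position, radius):
        """Return positions of enemy objects inside the (assumed circular) scan area."""
        ox, oy = own_position
        return [(x, y) for (x, y) in enemies
                if (x - ox) ** 2 + (y - oy) ** 2 <= radius ** 2]


prop = PositioningProp()
prop.record_shot(60)
prop.record_shot(50)
if prop.accumulated() >= prop.trigger_threshold:
    print("panel:", prop.scan(enemies=[(3, 4), (40, 40)], own_position=(0, 0), radius=10))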
Owner:TENCENT TECH (SHENZHEN) CO LTD

Stock trading method based on reinforcement learning

Pending | CN112884576A | Meet the needs of transaction decision-making | High speed | Finance | Neural learning methods | Reinforcement learning algorithm | Engineering
The invention discloses a stock trading method based on reinforcement learning, and relates to the fields of machine learning and quantitative trading. Stock trading is carried out with an adaptive recurrent reinforcement learning (RRL) algorithm. The method first trains a classification neural network N through a user interaction interface; in the RRL training stage, the three operations of buying, holding and selling are executed in the different stock-market cycle scenarios recognized by the network; after RRL training is completed, the method enters an automatic execution and stop-loss stage, in which trades are executed automatically and dynamic stop-losses are applied according to a stop-loss strategy set by the user. The invention can satisfy a user's specific risk-return preference, reduce the risk of manual trading errors, and lower the cost of manual decision making; compared with a traditional linear model or Q-learning, it adapts better to prices, makes timely and effective investment decisions, and can greatly increase the trading win rate.
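For orientation, the sketch below shows a recurrent-reinforcement-learning style position signal of the classic form F_t = tanh(w·r + u·F_(t-1) + b) together with a simple trailing stop-loss. The weights are fixed rather than trained, and all parameter values are assumptions, so this is a generic illustration of the RRL idea rather than the patented method.

import numpy as np


def rrl_positions(returns, w, u, b, lookback=3):
    """F_t = tanh(w . r_(t-lookback+1..t) + u * F_(t-1) + b), a position in [-1, 1]."""
    positions = [0.0]
    for t in range(lookback, len(returns)):
        window = returns[t - lookback:t]
        f = np.tanh(np.dot(w, window) + u * positions[-1] + b)
        positions.append(float(f))
    return positions


def apply_stop_loss(position, entry_price, current_price, stop_pct=0.02):
    """Flatten a long position if price falls stop_pct below the entry (assumed rule)."""
    if position > 0 and current_price < entry_price * (1.0 - stop_pct):
        return 0.0
    return position


rets = np.array([0.01, -0.005, 0.02, 0.003, -0.01, 0.015, 0.007])
pos = rrl_positions(rets, w=np.array([0.5, 0.3, 0.2]), u=0.4, b=0.0)
print([round(p, 2) for p in pos])
print(apply_stop_loss(position=1.0, entry_price=10.0, current_price=9.7))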
Owner:上海卡方信息科技有限公司

Algorithm for predicting price rise and fall by combining energy values with big data and application

Pending | CN109741085A | Good entry position | Small stop-loss possibility | Marketing | Data validation | Market prediction
The invention discloses an algorithm and application for predicting price rises and falls by combining energy values with big data. The price trend is analyzed on a K-line (candlestick) chart at minute resolution: the opening price of each K-line is subtracted from its closing price to obtain the K-line's energy value; a positive energy value marks a positive (bullish) line and a negative one a negative (bearish) line, and the energy values of K-lines over different stages are accumulated to obtain energy values for different durations, such as extremely short term, short term, medium term, medium-long term, long term and ultra-long term. The method is accurate and timely, quantifies the rising or falling energy of different stages, and indicates in time whether to enter the market or stay out. The win rate of its market predictions is relatively high, and after verification against historical big data and parameter optimization, an automatic trading system or index based on the energy value executes trades (or sends trading signals) on the basis of objective energy values and probabilities, so a relatively high win rate can be guaranteed.
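The energy-value computation itself is simple enough to state directly: each minute K-line's energy is its close minus its open, and energies are summed over several horizons. The short sketch below follows that definition; the horizon lengths and example prices are assumptions.

def kline_energy(open_price, close_price):
    """Positive for a bullish candle, negative for a bearish one."""
    return close_price - open_price


def accumulated_energy(candles, horizons):
    """candles: list of (open, close) minute bars, newest last.
    horizons: mapping of horizon name to number of most recent bars.
    Returns the accumulated energy value per horizon."""
    energies = [kline_energy(o, c) for o, c in candles]
    return {name: sum(energies[-n:]) for name, n in horizons.items()}


bars = [(100.0, 100.4), (100.4, 100.1), (100.1, 100.9), (100.9, 101.2), (101.2, 100.8)]
print(accumulated_energy(bars, horizons={"very_short": 3, "short": 5}))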
Owner:上海句石智能科技有限公司

A defensive toy top

A toy gyro with strong defensiveness comprises a gyro cap (1), a gyro disc (2), a gyro base (3) and a gyro tip (6), and further comprises an elastic absorber for reducing the impact force when the gyro is struck. The elastic absorber comprises an elastic part (4) for confronting collisions and a connecting part (5) fixedly connected to the gyro base. Through the fixed connection between the connecting part (5) and the gyro base (3), the elastic part (4) is located around the periphery of the gyro base (3); therefore, when the gyro competes with other gyros and an opposing gyro collides with the elastic absorber, the absorber deforms elastically to reduce the impact force, providing a cushioning effect and keeping the gyro spinning stably. At the same time, the absorber returns a counter-force to the opposing gyro, interfering with it and thus improving the win rate of the player's own gyro, which makes the game more fun. In addition, a player can change the elastic performance of the absorber based on his or her judgement to achieve a better anti-collision effect. The gyro offers a novel way of playing, can win the favor of more players, and can cultivate children's manipulative and competitive abilities.
Owner:ALPHA GRP CO LTD +2

A combined toy top with magnetic control separation

Active | CN104623901B | Solved the problem that the gyro could not be separated normally | Reduce power consumption | Tops | Engineering | Operability
The invention provides a combined toy top with a magnetically controlled separating function. The combined top comprises a main top body, an auxiliary top body and an elastic part; the main top body is retained on the auxiliary top body and compresses the elastic part. The combined toy top is characterized in that a magnetic control device, a motor, a power source and a connecting part are arranged in the auxiliary top body, the motor shaft is linked to the connecting part, and the main top body is retained on the auxiliary top body through the connecting part. When a magnetic object is brought close to the magnetic control device, magnetic induction powers the motor, which turns the connecting part and releases the retention, so the combined top separates into a main top body and an auxiliary top body that spin independently. A player can therefore split the top at will; because the unlocking uses non-contact magnetic induction, the top's rotation is not affected, the whole game process is controllable, and the operability is high. The play is more novel and more fun, and once the top splits into two bodies the attacking power of the player's own top increases greatly, giving a higher probability of victory and winning the favor of more players.
Owner:ALPHA GRP CO LTD +2