154 results for "Augmented learning" patented technology

Augmented learning is an on-demand learning technique in which the environment adapts to the learner. By providing remediation on demand, it helps learners gain a deeper understanding of a topic while stimulating discovery and learning.

Machine vision and machine learning-based automatic parking method and system

The invention provides an automatic parking method and system based on machine vision and machine learning. Images are collected via cameras arranged all around the automobile body, yielding a three-dimensional image of the surrounding environment and the distance between the body and surrounding objects; parking lines are identified using machine vision; the automobile begins parking once a valid image is identified and the automatic driving mode is started; in automatic driving mode, an optimal scheme is selected from preset parking routes according to the relative position of the automobile and the parking space; and the automobile is parked via the steering wheel, accelerator, and brake pedal, all controlled by an electric control device. Starting from a preset database and using reinforcement machine learning, the system improves continuously during the parking process, so fewer data inputs cover more application conditions. The method places low demands on the position of the parking space, supports both forward and backward maneuvers, has a wide application range, accelerates parking, and makes the parking process reliable.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
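
The abstract does not disclose how parking lines are extracted from the camera images. A minimal sketch of one common machine-vision pipeline (grayscale, Canny edges, probabilistic Hough transform via OpenCV) is shown below; the function name and all thresholds are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def detect_parking_lines(frame):
    """Detect candidate parking-line segments in a camera frame.

    Generic machine-vision sketch (grayscale -> edges -> Hough lines);
    the patent does not disclose its actual detection pipeline.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]
```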

Video curriculum learning method and system

The invention, applied in the technical field of computers, provides a video course learning method and system. The method comprises the following steps: generating a course-quantity plan for a unit period according to the selected video course and the planned learning duration for that period; recording, within the unit period, how long the student has already watched the selected video course; obtaining a course-quantity completion percentage from the watched duration and the period's plan; and sending corresponding prompt information according to that percentage. By generating per-period course-quantity plans for student users while continuously recording the durations they have watched, calculating the completion percentage, and sending corresponding prompts, the invention urges and encourages students to complete the video course according to plan, supervises them, strengthens learning motivation, and helps them finish the course.
Owner:GUANGDONG XIAOTIANCAI TECH CO LTD
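
The completion check described above is simple arithmetic; a minimal sketch follows, with illustrative thresholds (the abstract only says prompts correspond to the percentage value).

```python
def completion_percentage(watched_minutes, planned_minutes):
    """Course-quantity completion for the current unit period."""
    if planned_minutes <= 0:
        raise ValueError("planned duration must be positive")
    return min(100.0, 100.0 * watched_minutes / planned_minutes)

def prompt_message(pct):
    # Thresholds are illustrative; the patent only says prompts
    # correspond to the completion percentage value.
    if pct >= 100:
        return "Plan complete - well done!"
    if pct >= 50:
        return f"{pct:.0f}% done - keep going."
    return f"Only {pct:.0f}% done - time to catch up."
```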

Underwater robot trajectory tracking method based on double-BP network reinforcement learning framework

Active · CN111240345A · Overcoming time-consuming and labor-intensive · Altitude or depth control · Fuzzy rule · Control system
The invention discloses an underwater robot trajectory tracking method based on a double-BP-network reinforcement learning framework, belonging to the technical field of underwater robot trajectory tracking. The method solves a problem of the prior art: online optimization of controller parameters requires fuzzy rules built from a large amount of expert prior knowledge, making the optimization time-consuming and labor-intensive. With reinforcement learning, the controller interacts continuously with the environment; after obtaining the reinforcement value given by the environment, the optimal strategy is found through loop iteration. By combining the reinforcement learning method with a double BP network and adjusting online the underwater robot's speed and the parameters of the heading control law, the designed speed and heading control system can select the optimal control parameters for each environment, eliminating the time- and labor-consuming online tuning of the prior art. The method can be applied to trajectory tracking of underwater robots.
Owner:HARBIN ENG UNIV
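
The abstract names a double-BP-network framework without disclosing its architecture or update rules. The sketch below is a generic actor-critic reading of that idea in PyTorch, with hypothetical dimensions: one network proposes control-parameter adjustments for the speed and heading loops, the other evaluates them against the environment's reinforcement value.

```python
import torch
import torch.nn as nn

# Two small BP (backpropagation) networks; dimensions are hypothetical,
# e.g. tracking errors in, (speed, heading) parameter adjustments out.
STATE_DIM, ACTION_DIM = 6, 2

actor = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(),
                      nn.Linear(32, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 32), nn.Tanh(),
                       nn.Linear(32, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(state, action, reward, next_state, gamma=0.99):
    """One TD-style update of both networks from an environment transition."""
    with torch.no_grad():
        next_q = critic(torch.cat([next_state, actor(next_state)]))
        target = reward + gamma * next_q
    q = critic(torch.cat([state, action]))
    critic_loss = (q - target).pow(2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor improves by maximizing the critic's evaluation of its action.
    actor_loss = -critic(torch.cat([state, actor(state)])).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```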

Electric vehicle lithium ion battery health state estimation method based on AdaBoost-CBP neural network

The invention provides a method for estimating the state of health (SOH) of an electric vehicle's lithium-ion battery based on an AdaBoost-CBP neural network. Because the discharge voltage, the discharge current, and the cyclic charge-discharge count show clear trends over the battery's service life, these three parameters are used as the input data for SOH estimation, with battery capacity as the output parameter. Because the battery data is noisy and nonlinear, an extended Kalman filter algorithm is used for denoising. To address the BP neural network's tendency to fall into local optima, fractional calculus theory is used to optimize the gradient descent method. Finally, the fractional-order BP neural network serves as the weak learner; the adaptive boosting of the AdaBoost algorithm strengthens the learners' fitting capability, and each round's weak learners are integrated into a strong learner. This improves learner diversity, lets the learners' performance advantages complement one another across different operating-condition data, and effectively improves estimation precision.
Owner:JIANGSU UNIV
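
scikit-learn has no fractional-order BP network, but the boosting structure of the method can be sketched with a standard MLP standing in for the CBP weak learner (assumes scikit-learn >= 1.2 for the `estimator` keyword).

```python
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neural_network import MLPRegressor

# Weak learner: a small BP (multi-layer perceptron) regressor. The patent's
# fractional-order gradient descent is not available here, so a standard
# MLP stands in for the CBP network.
weak = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500)

# AdaBoost.R2 resamples the training data by weight each round and
# integrates the rounds' weak learners into a strong learner.
model = AdaBoostRegressor(estimator=weak, n_estimators=10, learning_rate=0.5)

# X: rows of (discharge voltage, discharge current, cycle count) after
# extended-Kalman-filter denoising; y: measured capacity (SOH proxy).
# model.fit(X_train, y_train); capacity_pred = model.predict(X_test)
```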

Method for optimizing robustness of topological structure of Internet of Things through autonomous learning

Pending · CN110807230A · Increased ability to resist attacks · Highly reliable data transmission · Geometric CAD · Algorithm · The Internet
The invention discloses a method for optimizing the robustness of an Internet of Things topology through autonomous learning. The method comprises: 1) initializing the IoT topological structure; 2) compressing the topological structure; 3) initializing an autonomous learning model, constructing a deep deterministic policy model that trains on the IoT topology according to the features of deep learning and reinforcement learning; 4) training and testing the model; 5) periodically repeating step 4 within one independent experiment, and periodically repeating steps 1-4 across multiple independent experiments until the maximum number of iterations is reached. A maximum iteration count is set; each experiment is repeated independently and its optimal result kept, and the average over the repeated experiments is taken as the final result. The method remarkably improves the attack resistance of the initial topological structure; by optimizing the robustness of the network topology through autonomous learning, highly reliable data transmission is ensured.
Owner:TIANJIN UNIV
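
Step 5's repeat-and-average protocol can be sketched independently of the learning model; `train_once` below is a hypothetical callable wrapping steps 1-4 and returning a robustness score.

```python
import statistics

def run_experiments(train_once, n_experiments=10, n_iterations=100):
    """Repeat training within each independent experiment up to the maximum
    iteration count, keep each experiment's best robustness score, and
    average the bests across experiments (per step 5 of the abstract).

    `train_once()` is a hypothetical callable that initializes and
    compresses the topology, (re)builds the learning model, runs one
    train/test round, and returns a robustness score.
    """
    best_per_experiment = []
    for _ in range(n_experiments):
        best = max(train_once() for _ in range(n_iterations))
        best_per_experiment.append(best)
    return statistics.mean(best_per_experiment)
```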

Electric automobile compound energy management method based on rules and Q-learning reinforcement learning

The invention discloses an electric automobile compound energy management method based on rules and Q-learning reinforcement learning. Energy management is conducted at every moment according to the vehicle's power demand and the SOCs of a lithium battery and a supercapacitor. In the Q-learning-based energy management strategy, the energy management controller observes the system state, takes actions, calculates the corresponding reward value of each action, and updates it in real time; through simulative training with the Q-learning reinforcement learning algorithm, the reward values yield the energy management strategy with minimum system loss power. Finally, real-time power distribution is conducted with the learned strategy while the reward values continue to be updated to adapt to the current driving condition. On the basis of meeting the required power, the method preserves the lithium battery's electric quantity, prolongs its service life, reduces system energy loss, and improves the efficiency of the hybrid power system.
Owner:NINGBO INST OF TECH ZHEJIANG UNIV ZHEJIANG
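
The Q-learning core of the strategy is the standard tabular update; the discretization below and the reward (read here as negative system loss power, per the abstract's minimum-loss objective) are hypothetical.

```python
import numpy as np

# Hypothetical discretization: states index combinations of demanded power
# and the two SOCs; actions are battery/supercapacitor power-split ratios.
N_STATES, N_ACTIONS = 1000, 11
Q = np.zeros((N_STATES, N_ACTIONS))
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def choose_action(s, rng=np.random.default_rng()):
    """Epsilon-greedy selection over power-split actions."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))

def q_update(s, a, reward, s_next):
    """Standard Q-learning update; with reward = -loss_power, maximizing
    reward minimizes system loss power."""
    Q[s, a] += ALPHA * (reward + GAMMA * Q[s_next].max() - Q[s, a])
```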

Multi-agent cluster obstacle avoidance method based on reinforcement learning

Active · CN113156954A · Optimal cluster individual obstacle avoidance strategy · Improve obstacle avoidance efficiency · Position/course control in two dimensions · Algorithm · Simulation
The invention discloses a multi-agent cluster obstacle avoidance method based on reinforcement learning. The method comprises the following steps: S1, establishing a motion model of the cluster system; S2, defining an obstacle avoidance factor ξ and an obstacle avoidance evaluation criterion; S3, designing the state space, behavior space, and reward function for Q-learning training of the cluster formation-transformation obstacle avoidance model, used when ξ falls below the threshold ξmin; S4, designing the state space, behavior space, and reward function for reinforcement-learning training of the cluster autonomous collaborative obstacle avoidance model; S5, designing the agents' behavior-selection method; and S6, obtaining the trained Q-value table and carrying out cluster autonomous collaborative obstacle avoidance based on the motion model defined in S1. Parameters such as the obstacle avoidance factor and the obstacle avoidance evaluation criterion govern the selection between the agent cluster's obstacle avoidance models, and the Q-learning algorithm trains the cluster autonomous collaborative obstacle avoidance model, so that an optimal individual obstacle avoidance strategy and high obstacle avoidance efficiency are obtained.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
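
A sketch of the two selection steps. The comparison direction in `select_avoidance_model` is an assumption, since the abstract's inequality involving ξmin is garbled in the source; `select_behavior` is a conventional epsilon-greedy reading of step S5.

```python
import numpy as np

def select_avoidance_model(xi, xi_min):
    # Hypothetical reading of S2-S3: below the threshold the cluster avoids
    # by transforming its formation; otherwise each agent avoids
    # autonomously and collaboratively. The comparison direction is an
    # assumption, not stated clearly in the abstract.
    return "formation_transformation" if xi < xi_min else "autonomous_collaborative"

def select_behavior(Q, state, epsilon=0.1, rng=np.random.default_rng()):
    """S5: epsilon-greedy behavior selection from the trained Q-value table."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[state]))
```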

Cooperative offloading and resource allocation method based on multi-agent DRL under MEC architecture

The invention relates to a cooperative offloading and resource allocation method based on multi-agent deep reinforcement learning (DRL) under an MEC architecture. The method comprises the following steps: 1) proposing a collaborative MEC system architecture that considers collaboration between edge nodes, namely, when an edge node is overloaded, migrating task requests to other low-load edge nodes for collaborative processing; 2) adopting a partial offloading strategy, namely offloading part of the computation tasks to an edge server for execution and assigning the remaining tasks to the local IoT device; 3) modeling the joint optimization of task offloading decisions, computing-resource allocation decisions, and communication-resource allocation decisions as a Markov decision process (MDP) according to the dynamic characteristics of task arrival; and 4) using a multi-agent reinforcement learning method for collaborative task offloading and resource allocation to allocate resources dynamically and maximize users' quality of experience. The method realizes dynamic management of system resources under the collaborative MEC architecture and reduces the system's average delay and energy consumption.
Owner:STATE GRID FUJIAN POWER ELECTRIC CO ECONOMIC RES INST +1
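
The partial-offloading cost in step 2 is commonly modeled as below; this is a textbook formulation, not the patent's, and `k_local` (effective capacitance coefficient) is a hypothetical constant.

```python
def partial_offload_cost(task_bits, cycles_per_bit, split,
                         f_local_hz, f_edge_hz, uplink_bps,
                         k_local=1e-27):
    """Delay and device energy for offloading a fraction `split` of a task.

    Standard partial-offloading model: the offloaded part is transmitted
    and then computed at the edge while the remainder runs locally; the
    two parts proceed in parallel, so delay is the slower of the two.
    """
    off_bits, loc_bits = split * task_bits, (1 - split) * task_bits
    t_local = loc_bits * cycles_per_bit / f_local_hz
    t_edge = off_bits / uplink_bps + off_bits * cycles_per_bit / f_edge_hz
    delay = max(t_local, t_edge)
    # Device energy counts local CPU cycles only (transmit energy omitted).
    energy = k_local * f_local_hz ** 2 * loc_bits * cycles_per_bit
    return delay, energy
```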

Rail transit automatic simulation modeling method and device based on reinforcement learning

The invention discloses a reinforcement-learning-based method and device for automatic simulation modeling of rail transit. The method comprises the steps of: building a passenger-flow simulation system with passenger flow as the object of the simulation study; initializing the state of the passenger-flow simulation system at time t, then running the simulation to obtain a penalty function for section passenger-flow congestion of trains in their running sections and a penalty function for passengers' path-selection actions at time t; taking the reward value obtained from a passenger's path-selection action as the return function of the research object at time t; executing simulation training of the passenger-flow simulation system and updating the relevant network parameters to obtain a trained passenger-flow simulation model; and finally extracting the action function as the generating function of passenger path-selection probabilities. A simulation system is established from known operational logic and parameters, and the unknown parameter values in the simulation system are obtained automatically, so the resulting simulation model accurately describes the real system.
Owner:CRSC RESEARCH & DESIGN INSTITUTE GROUP CO LTD
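
The final step turns the extracted action function into path-selection probabilities; a softmax is one conventional choice for that generating function (the abstract does not specify its form, so this is an assumption).

```python
import numpy as np

def path_choice_probabilities(action_scores):
    """Convert the action function's scores for each candidate path into
    selection probabilities via a softmax. The softmax form is an
    assumption; the patent only says the action function generates the
    passenger path-selection probabilities."""
    z = np.asarray(action_scores, dtype=float)
    z -= z.max()                  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()
```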