125 results for "Robot learning" patented technology

Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques that allow a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical embedding, simultaneously presents specific difficulties (e.g. high dimensionality, real-time constraints on collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives).

Q-learning initialization method for mobile robot path planning

The invention discloses a reinforcement learning initialization method for a mobile robot based on an artificial potential field, and relates to a Q-learning initialization method for mobile robot path planning. The working environment of the robot is virtualized as an artificial potential field. The potential values of all states are determined using prior knowledge, so that an obstacle area has a potential value of zero and the target point has the largest potential value in the whole field; the potential value of each state of the artificial potential field then stands for the largest cumulative return obtained by following the optimal policy from that state. The initial Q value is defined as the sum of the immediate return of the current state and the discounted maximum cumulative return of the following state. In this way the artificial potential field maps known environmental information to initial Q-function values, integrating prior knowledge into the robot's learning system and improving the robot's learning ability in the initial stage of reinforcement learning. Compared with the traditional Q-learning algorithm, this initialization method effectively improves learning efficiency in the initial stage, speeds up algorithm convergence, and makes the convergence process more stable (an illustrative sketch follows this entry).
Owner: Shandong University (Weihai)
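
The initialization described in this abstract can be sketched in a few lines. This is a minimal illustration under assumptions, not the patented algorithm: the grid layout, obstacle positions, potential function, reward values, and discount factor below are hypothetical choices made only to show how a Q table can be seeded from an artificial potential field instead of zeros.

```python
import numpy as np

# Hypothetical grid world used only for illustration.
GRID = 5                               # 5x5 grid of states
GOAL = (4, 4)                          # target point: largest potential in the field
OBSTACLES = {(2, 2), (3, 1)}           # obstacle states: potential fixed at zero
GAMMA = 0.9                            # discount factor (assumed)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def potential(state):
    """Potential of a state: zero on obstacles, largest at the goal,
    decreasing with Manhattan distance from the goal (an assumed shape)."""
    if state in OBSTACLES:
        return 0.0
    dist = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
    return float(2 * GRID - dist)

def step(state, action):
    """Deterministic transition; moving off the grid or into an obstacle stays put."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID) or nxt in OBSTACLES:
        return state
    return nxt

def reward(nxt):
    """Immediate return: assumed +10 for reaching the goal, -1 per step otherwise."""
    return 10.0 if nxt == GOAL else -1.0

# Seed Q(s, a) with the immediate return plus the discounted potential of the
# successor state, instead of the usual all-zero initialization.
Q = np.zeros((GRID, GRID, len(ACTIONS)))
for x in range(GRID):
    for y in range(GRID):
        for a, action in enumerate(ACTIONS):
            nxt = step((x, y), action)
            Q[x, y, a] = reward(nxt) + GAMMA * potential(nxt)
```

From this informed starting table, ordinary Q-learning updates proceed unchanged; the only difference from the traditional algorithm is the non-zero initialization.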

Intelligent robot learning toy

Inactive · CN105498228A · Easy to expand · Fully mobilize the fun · Self-moving toy figures · Teaching apparatus · Engineering · Head nodding
The invention discloses an intelligent robot learning toy. A multi-degree-of-freedom concealed joint is arranged on the neck of the toy; a bowl-shaped shell assembly of the concealed joint is sleeved on a head rotating mechanism and is connected with the part that joins the head and body of the toy at the neck; and a head nodding mechanism arranged in the head is connected with the upper end of the head rotating mechanism and driven by it. The toy has the following advantages: the structure is scientific and compact; the concealed joint on the neck provides horizontal rotation and neck connection during nodding while keeping the multi-degree-of-freedom joint mechanism from being exposed; the modular limb structure makes it convenient to extend the toy with different functions; and multimedia matched with action and expression lets the toy express things vividly, interact well with children, and combine learning with play, helping parents educate children and stimulating their creativity.
Owner: 胡文杰 (Hu Wenjie)

Abnormal sound detection method and abnormal sound detection system for automobile seat slide rail

The invention discloses an abnormal sound detection method and an abnormal sound detection system for an automobile seat slide rail. The method comprises the following steps: S1, acquiring an original vibration signal as a training sample and denoising it to obtain an effective vibration signal; S2, extracting characteristic parameters such as time-domain, frequency-domain, and envelope characteristics from the effective vibration signal; S3, inputting the characteristic parameters into a mixed model to train it, obtaining an abnormal sound recognition model and a corresponding mixing matrix, wherein the mixing matrix contains the test accuracy of the recognition model; and S4, inputting the to-be-detected original vibration signal into a recognition model whose test accuracy is higher than a preset value, so as to judge the signal automatically. By means of a detection approach based on industrial big data and a robot learning model, automatic recognition can be achieved; the method is highly efficient, excludes unstable manual factors, and improves detection accuracy (an illustrative sketch follows this entry).
Owner: 宁波慧声智创科技有限公司 (Ningbo Huisheng Zhichuang Technology Co., Ltd.)
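
Steps S1–S4 can be illustrated with a short, generic pipeline. This is a sketch under assumptions rather than the patented system: median filtering, the particular time-domain, frequency-domain, and envelope features, and a gradient-boosting classifier stand in for the patent's denoising step and mixed model, and the mixing matrix is reduced to a single test-accuracy threshold.

```python
import numpy as np
from scipy.signal import hilbert, medfilt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(signal, fs=10_000):
    """S1 + S2: denoise the raw vibration signal and compute an assumed set of
    time-domain, frequency-domain, and envelope features."""
    denoised = medfilt(signal, kernel_size=5)                         # simple denoising
    rms = np.sqrt(np.mean(denoised ** 2))                             # time domain
    kurt = np.mean((denoised - denoised.mean()) ** 4) / (denoised.std() ** 4 + 1e-12)
    spectrum = np.abs(np.fft.rfft(denoised))
    freqs = np.fft.rfftfreq(len(denoised), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)  # frequency domain
    envelope = np.abs(hilbert(denoised))                              # envelope characteristic
    return np.array([rms, kurt, centroid, envelope.max()])

def train_detector(signals, labels, min_accuracy=0.9):
    """S3: train a classifier and keep it only if its test accuracy clears a preset value."""
    X = np.vstack([extract_features(s) for s in signals])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    return (model if acc >= min_accuracy else None), acc

def detect(model, signal):
    """S4: automatically judge a to-be-detected vibration signal."""
    return model.predict(extract_features(signal).reshape(1, -1))[0]
```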

Intrinsically motivated extreme learning machine autonomous development system and operating method thereof

Inactive · CN106598058A · Improve learning initiative · Improve the speed of adaptation to the environment · Neural architectures · Attitude control · Learning machine · Orientation function
The invention belongs to the technical field of intelligent robots and specifically relates to an intrinsically motivated extreme learning machine autonomous development system and an operating method thereof. The autonomous development system comprises an inner state set, an action set, a state transition function, an intrinsic motivation orientation function, a reward signal, a reinforcement learning update iteration formula, an evaluation function, and an action selection probability. An intrinsic motivation signal is used to simulate the orientation mechanism by which people become interested in things, so that the robot completes relevant tasks voluntarily, solving the problem of poor self-learning. Furthermore, an extreme learning machine network is used to carry out learning and to store knowledge and experience, so that if an attempt fails the robot can keep exploring with the stored knowledge and experience instead of learning from scratch. In this way the learning speed of the robot is increased, and the low efficiency of single-step reinforcement learning is mitigated (an illustrative sketch follows this entry).
Owner: North China University of Science and Technology
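
The listed components (inner state set, action set, intrinsic motivation orientation function, reward signal, update formula, evaluation function, action selection probability) can be sketched roughly as follows. This is an assumption-laden illustration, not the patented system: the one-hot state/action encoding, the novelty-based intrinsic bonus, the softmax action selection, and the incremental update of the extreme learning machine's output weights are all choices made only for the example.

```python
import numpy as np

class IntrinsicELMAgent:
    """Illustrative agent: reinforcement learning with an intrinsic-motivation bonus
    and an extreme learning machine (ELM) as the value approximator."""

    def __init__(self, n_states=20, n_actions=4, n_hidden=50,
                 gamma=0.95, alpha=0.1, tau=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_states, self.n_actions = n_states, n_actions
        self.gamma, self.alpha, self.tau = gamma, alpha, tau
        # ELM: random fixed hidden layer; only the output weights are learned,
        # so accumulated knowledge persists instead of being relearned from scratch.
        self.W_in = self.rng.normal(size=(n_hidden, n_states + n_actions))
        self.b_in = self.rng.normal(size=n_hidden)
        self.beta = np.zeros(n_hidden)
        self.visits = np.zeros(n_states)      # drives the intrinsic-motivation signal

    def _features(self, state, action):
        x = np.zeros(self.n_states + self.n_actions)
        x[state] = 1.0
        x[self.n_states + action] = 1.0
        return np.tanh(self.W_in @ x + self.b_in)

    def q(self, state, action):
        """Evaluation function: estimated value of a state-action pair."""
        return self._features(state, action) @ self.beta

    def intrinsic_reward(self, state):
        """Intrinsic motivation orientation: a novelty bonus that decays with visits."""
        return 1.0 / np.sqrt(1.0 + self.visits[state])

    def select_action(self, state):
        """Action selection probability: softmax over the current Q estimates."""
        q = np.array([self.q(state, a) for a in range(self.n_actions)])
        p = np.exp((q - q.max()) / self.tau)
        return int(self.rng.choice(self.n_actions, p=p / p.sum()))

    def update(self, state, action, ext_reward, next_state):
        """Reinforcement learning update applied only to the ELM output weights."""
        self.visits[state] += 1
        r = ext_reward + self.intrinsic_reward(state)   # combined reward signal
        target = r + self.gamma * max(self.q(next_state, a) for a in range(self.n_actions))
        phi = self._features(state, action)
        self.beta += self.alpha * (target - self.q(state, action)) * phi
```

Because the random hidden layer is fixed and only the output weights are updated, what the agent has already learned remains available after a failed attempt, in line with the abstract's point about reusing stored knowledge and experience rather than starting over.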