156 results for "Robot action" patented technology

Welding setting device, welding robot system and welding setting method

The invention provides a simultaneous multiple-layer welding setting device, comprising: a cascading pattern determining part for determining the cascading pattern of welding passes corresponding to the joints of the objects, based on the basic values of each input data item and on cascading pattern data; an independent-run welding pass determining part which, when combining welding passes from the cascading patterns of two joints of the objects, excludes a pass as an independently run pass if the difference in deposited-metal cross-sectional area exceeds a predetermined threshold, treating the pass with the larger cross-sectional area as the independent pass, and determines the welding combination from the remaining passes; a welding condition determining part for determining the welding conditions of each welding pass, including the current value corresponding to a welding wire feeding speed calculated from an input welding condition and the seam forming position; and an action procedure generating part for generating a robot action procedure based on the determined welding conditions and setting it in a robot controlling device. The device shortens the time needed for two welding robots to simultaneously weld a plurality of layers at a plurality of welding joints of a steel structure.
Owner:KOBE STEEL LTD
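The combination rule in this abstract can be illustrated with a short sketch: passes from the cascading patterns of two joints are welded simultaneously unless their deposited-metal cross-sectional areas differ by more than a threshold, in which case the larger pass is run independently. The data layout, names and threshold value below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the pass-combination rule described above. Passes from the
# cascading patterns of two joints are paired for simultaneous welding unless the
# difference in deposited-metal cross-sectional area exceeds a threshold, in which
# case the pass with the larger area is welded in an independent run.
from dataclasses import dataclass

@dataclass
class WeldingPass:
    joint_id: int
    layer: int
    cross_section_mm2: float  # deposited-metal cross-sectional area

def combine_passes(passes_joint_a, passes_joint_b, area_threshold_mm2=10.0):
    """Pair passes layer by layer; flag large-difference passes for independent runs."""
    combined, independent = [], []
    for pa, pb in zip(passes_joint_a, passes_joint_b):
        diff = abs(pa.cross_section_mm2 - pb.cross_section_mm2)
        if diff > area_threshold_mm2:
            # Exclude the larger pass as an independently run pass; the other pass
            # remains in the combined schedule for that layer.
            larger, smaller = (pa, pb) if pa.cross_section_mm2 > pb.cross_section_mm2 else (pb, pa)
            independent.append(larger)
            combined.append((smaller,))
        else:
            combined.append((pa, pb))
    return combined, independent
```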

Facial expression robot multi-channel information emotion expression mapping method

The present invention relates to a facial expression robot multi-channel information emotion expression mapping method comprising the steps of: pre-building an expression library, an input speech reference model, an output speech reference model and a speech output library; acquiring a to-be-identified face image and identifying emotional expressions by comparison with the expression library; acquiring a speech input and identifying voice expressions by comparison with the input speech reference model; fusing the emotional expressions and the voice expressions to obtain a composite expression instruction, and, according to the composite expression instruction and by comparison with the output speech reference model, selecting corresponding speech data for the facial expression robot to output; and setting a macro action instruction corresponding to the composite expression instruction, the facial expression robot performing the facial expression according to the macro action instruction, thereby realizing multi-channel information emotional expression by the expression robot. The method integrally fuses visual expression analysis, speech signal processing and expression robot action coordination to reflect both visual expressions and speech emotions, and therefore offers relatively high intelligence.
Owner:HUAQIAO UNIVERSITY
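As a rough illustration of the multi-channel mapping described above, the sketch below fuses a facial-emotion label and a speech-emotion label into a composite expression instruction and looks up a macro action and a speech clip. All tables, labels and file names are hypothetical placeholders, not the patented mapping.

```python
# Illustrative sketch (not the patented algorithm): fuse a facial-emotion label and a
# speech-emotion label into a composite expression instruction, then look up the
# corresponding macro action and speech output. All entries are hypothetical.

# Hypothetical fusion table: (facial emotion, speech emotion) -> composite instruction
FUSION_TABLE = {
    ("happy", "happy"): "express_joy",
    ("happy", "neutral"): "express_mild_joy",
    ("sad", "sad"): "express_sympathy",
    ("neutral", "angry"): "express_concern",
}

# Hypothetical output libraries keyed by composite instruction
MACRO_ACTIONS = {"express_joy": ["raise_eyebrows", "smile", "nod"],
                 "express_sympathy": ["lower_eyebrows", "tilt_head"]}
SPEECH_LIBRARY = {"express_joy": "joy_greeting.wav",
                  "express_sympathy": "comfort_phrase.wav"}

def map_emotion(face_emotion: str, speech_emotion: str):
    """Return (composite instruction, macro action list, speech clip) for the two channels."""
    instruction = FUSION_TABLE.get((face_emotion, speech_emotion), "express_neutral")
    actions = MACRO_ACTIONS.get(instruction, ["neutral_face"])
    speech = SPEECH_LIBRARY.get(instruction, "default_response.wav")
    return instruction, actions, speech

# Example: instruction, actions, speech = map_emotion("happy", "neutral")
```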

Multi-mode comprehensive information recognition mobile double-arm robot device, system and method

The invention provides a multi-mode comprehensive information recognition mobile double-arm robot device, system and method. A robot action planning equipment platform is realized by using artificial-intelligence multi-scene recognition, multi-mode recognition, and voice recognition combined with positioning, navigation and mobility technology. By applying artificial intelligence and robotics together with the robot node communication principle, the device realizes voice acquisition, voice interaction, voice instructions, voice queries, remote and autonomous control, autonomous placement, code-scanning queries, scanning and reading of biological information, multi-scene article and personnel identification, article and equipment management, radar double-precision position locating, and autonomous mobile navigation. An integrated double-arm robot device for sorting, counting and placing articles is provided, and the robot system is connected with a personnel management system and an article management system. The invention improves the capabilities of voice interaction, accurate positioning, autonomous positioning and navigation, and autonomous sorting, counting and placing of articles, and the method, system and device are widely applicable to schools, businesses, factories, warehouses and medical scenes.
Owner:谈斯聪
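The abstract above mentions combining the robot node communication principle with the various voice, navigation and dual-arm subsystems. The minimal sketch below uses a toy in-process topic bus to suggest how such node-style message routing could look; it stands in for a real robot middleware, and none of the topic names or behaviour comes from the patent.

```python
# Minimal, hypothetical sketch of node-style communication between the subsystems
# listed above (voice, navigation, dual-arm sorting). A simple in-process topic bus
# stands in for a real robot middleware.
from collections import defaultdict

class TopicBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
# Voice node forwards a recognized command to the dual-arm task node.
bus.subscribe("voice/command", lambda msg: bus.publish("arm/task", {"action": "count", "item": msg}))
bus.subscribe("arm/task", lambda task: print("dual-arm node received:", task))
bus.publish("voice/command", "boxes on shelf 3")
```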

Human action intention recognition training method based on cooperative computation of multiple brain areas

Active CN108304767A | Enables cognitive function modeling | Perform cognitive tasks | Input/output for user-computer interaction | Physical realisation | Category recognition | Human body
The invention belongs to the field of cognitive neuroscience, and specifically relates to a human action intention recognition training method based on cooperative computation of multiple brain areas. The method comprises the steps of: 1, collecting images of a human body action; 2, obtaining human body joint information and recognizing the category of the human body action; 3, calculating a robot action strategy according to the category of the action executed by the human, using cooperative computation of multiple brain areas based on a brain-like computing model; 4, inputting a correctness judgment for the robot action strategy calculated in step 3; 5, adjusting parameters of the brain-like computing model through an STDP mechanism based on the correctness judgment input in step 4; and 6, if the correctness judgment input in step 4 shows that the robot action strategy is wrong, returning to step 1 for repeated training until the correctness judgment shows that the robot action strategy is correct. The method overcomes the inflexibility of traditional human-computer interaction technology, which requires programming and the like to be performed in advance, and improves the user experience.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
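The training loop in steps 1 to 6 can be sketched in simplified form: recognize an action category, compute a strategy, take an external correctness judgment, and adjust parameters with an STDP-style update until the strategy is judged correct. The toy below collapses the multi-brain-area brain-like model into a single weight matrix and the STDP mechanism into a reward-modulated update, so it only illustrates the loop structure, not the patented model.

```python
# Toy sketch of the training loop in steps 1-6 above. The "brain-like model" is reduced
# to a single weight matrix and the STDP mechanism to a reward-modulated Hebbian update.
import numpy as np

rng = np.random.default_rng(0)
n_action_categories, n_strategies = 5, 4
weights = rng.normal(scale=0.1, size=(n_action_categories, n_strategies))

def choose_strategy(action_category: int) -> int:
    """Step 3: pick the robot action strategy with the strongest connection."""
    return int(np.argmax(weights[action_category]))

def stdp_like_update(action_category: int, strategy: int, correct: bool, lr: float = 0.05):
    """Step 5: strengthen the used connection if judged correct, weaken it otherwise."""
    weights[action_category, strategy] += lr if correct else -lr

def train_on_action(action_category: int, correctness_oracle, max_trials: int = 100):
    """Steps 3-6: repeat until the correctness judgment says the strategy is right."""
    for _ in range(max_trials):
        strategy = choose_strategy(action_category)
        correct = correctness_oracle(action_category, strategy)  # step 4: external judgment
        stdp_like_update(action_category, strategy, correct)
        if correct:
            return strategy
    return None

# Example: train category 2 until it maps to strategy 3
train_on_action(2, lambda cat, strat: strat == 3)
```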

Intelligent toy robot with action imitation function

The invention discloses an intelligent toy robot with an action imitation function. The intelligent toy robot comprises a computer, a humanoid robot body, a video acquisition and processing module and a robot action control module, and further comprises a wireless communication module I for wireless data transmission between the video acquisition and processing module and the computer, and a wireless communication module II for wireless data transmission between the computer and the robot action control module. The humanoid robot body is composed of 17 servos (steering gears) and servo connecting metal pieces; the video acquisition and processing module is composed of a CCD (charge-coupled device) camera, a video decoder and an ARM7 processor; the wireless communication module I is composed of a wireless transmitting module I and a wireless receiving module I, and the wireless communication module II is composed of a wireless transmitting module II and a wireless receiving module II. The intelligent toy robot has a novel structure, a reasonable design, and a high degree of personification and intelligence; it can imitate the actions of a child and thereby interact with the child, subtly stimulating the child's desire for scientific exploration.
Owner:XIAN TEKTONG DIGITAL TECH
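A rough sketch of the imitation data flow described above: frames from the video acquisition and processing module reach the computer over wireless link I, the computer estimates joint angles, and servo position commands for the 17 servos are sent over wireless link II to the robot action control module. The pose-estimation stub, pulse-width scaling and function names below are assumptions for illustration only.

```python
# Hypothetical sketch of the imitation data flow: the computer receives frames from the
# video module, estimates joint angles, and sends position commands for the 17 servos
# to the robot action control module. The radio links and pose estimation are stubbed.

NUM_SERVOS = 17  # per the abstract, the humanoid body uses 17 servos

def estimate_joint_angles(frame):
    """Stub for the pose-estimation step running on the computer."""
    return [0.0] * NUM_SERVOS  # radians, placeholder

def angles_to_servo_commands(angles_rad, center_us=1500, us_per_rad=636.6):
    """Map joint angles to hypothetical servo pulse widths (microseconds)."""
    return [int(center_us + a * us_per_rad) for a in angles_rad]

def imitation_step(frame, send_to_control_module):
    angles = estimate_joint_angles(frame)      # frame arrives via wireless link I
    commands = angles_to_servo_commands(angles)
    send_to_control_module(commands)           # commands leave via wireless link II

# Example: imitation_step(captured_frame, lambda cmds: print(cmds))
```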

Nonlinear constrained primal-dual neural network robot action planning method

Active CN108015766A | Eliminate the initial error problem | Overcome the problem of error accumulation | Programme-controlled manipulator | Nerve network | Standard form
The invention discloses a nonlinear constrained primal-dual neural network robot action planning method, which comprises the steps of: acquiring the current state of a robot and adopting a quadratic optimization scheme to perform inverse kinematics analysis of the robot trajectory at the velocity level; converting the quadratic optimization scheme into the standard form of a quadratic programming problem; reformulating the quadratic programming problem as an equivalent linear variational inequality problem; converting the linear variational inequality problem into the solution of a piecewise linear projection equation with nonlinear equality constraints; solving the piecewise linear projection equation with a nonlinear constrained primal-dual neural network solver; and transferring the solved instruction to the robot instruction input port to drive the robot to perform path following. The method is compatible with both convex set constraints and non-convex set constraints, eliminates the initial error problem that occurs in robot control, and solves the problem of error accumulation during the robot control process.
Owner:SOUTH CHINA UNIV OF TECH
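The core of the pipeline above is solving a velocity-level quadratic program through a piecewise linear projection equation. The toy below shows that idea for a simple box-constrained QP, iterating the projection equation in discrete time; it does not reproduce the patented nonlinear-equality-constrained primal-dual network, and all values are illustrative.

```python
# Toy sketch of the core solution step: a velocity-level QP written as a piecewise
# linear projection equation and solved by a discretized projection iteration.
# Only simple box constraints are shown; values are illustrative.
import numpy as np

def solve_box_qp(H, c, lower, upper, step=0.05, tol=1e-8, max_iter=5000):
    """Minimize 0.5*x^T H x + c^T x subject to lower <= x <= upper.

    Iterates the equivalent projection equation P(x - step*(Hx + c)) = x,
    where P clips onto the box (the piecewise linear projection).
    """
    x = np.zeros_like(c, dtype=float)
    for _ in range(max_iter):
        x_new = np.clip(x - step * (H @ x + c), lower, upper)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Example: joint-velocity minimization with hypothetical joint-rate limits
H = np.eye(3)                    # minimize squared joint velocities
c = np.array([-1.0, 0.5, 0.0])   # stand-in for the task-driven linear term
qdot = solve_box_qp(H, c, lower=-np.ones(3), upper=np.ones(3))
```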