5937 results about "Human–robot interaction" patented technology

Human–robot interaction is the study of interactions between humans and robots. It is often referred to as HRI by researchers. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language understanding, design, and the social sciences.

Multifunctional remote fault diagnosis system for electric control automobile

The invention relates to a multifunctional remote fault diagnosis system for an electronically controlled automobile. The system comprises a remote fault diagnosis service center, PC (personal computer) diagnosis clients, and a diagnosis communication device. The remote fault diagnosis service center serves as the core of the system and mainly realizes automobile fault diagnosis network management and remote fault diagnosis assistance; the PC diagnosis clients mainly provide specific automobile diagnosis applications and remote diagnosis interfaces for users with different access rights through human-computer interaction interfaces; and the diagnosis communication device mainly realizes data communication between the PC diagnosis clients and the vehicle-mounted network and provides diagnosis data services for upper-layer applications. With the remote fault diagnosis service center as the core and the PC diagnosis clients as nodes, the system establishes an automobile fault diagnosis network; shares automobile diagnosis data by means of the diagnosis communication device; provides multifunctional remote fault assistance and fault elimination help; performs statistical analysis of automobile fault information; and supplies reliable automobile quality reports to the automobile manufacturer.
Owner:WUHAN UNIV OF TECH +1
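The hub-and-spoke architecture described above (service center as core, PC diagnosis clients as nodes) can be sketched minimally as follows. All names (`DiagnosisCenter`, `FaultReport`, the sample DTC codes) are illustrative assumptions, not details from the patent:

```python
# Minimal sketch of the diagnosis network: clients submit fault reports
# to the service center, which runs the statistical analysis behind the
# vehicle quality report. Names and codes are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FaultReport:
    vehicle_id: str
    fault_code: str   # e.g. an OBD-II DTC such as "P0301"

class DiagnosisCenter:
    """Collects reports from PC diagnosis clients and runs statistics."""
    def __init__(self):
        self.reports = []

    def submit(self, report: FaultReport):
        self.reports.append(report)

    def fault_statistics(self) -> Counter:
        """Statistical analysis of fault codes across the fleet,
        as a basis for the automobile quality report."""
        return Counter(r.fault_code for r in self.reports)

center = DiagnosisCenter()
center.submit(FaultReport("car-1", "P0301"))
center.submit(FaultReport("car-2", "P0301"))
center.submit(FaultReport("car-3", "P0171"))
print(center.fault_statistics().most_common(1))  # [('P0301', 2)]
```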

Motion control method of lower limb rehabilitative robot

The invention relates to a motion control method for a lower limb rehabilitation robot. The method provides two working modes, passive training and active training, for different rehabilitation stages of a patient. In passive training mode, the robot is controlled to drive the patient through specified motions along a correct physiological gait trajectory; abnormal motions of the patient are completely restrained, and the patient passively follows the robot in walking rehabilitation training. In active training mode, the robot restrains only limited abnormal motions: the joint driving forces the patient exerts on the robot are detected in real time, and the human-robot interaction torque is extracted with an inverse dynamics model to infer the active motion intention of the patient's lower limbs. An impedance controller converts the interaction torque into a correction to the gait trajectory, directly correcting it, or an adaptive controller generates the gait training trajectory the patient expects; in this way the robot indirectly provides assistive or resistive force for rehabilitation training. The method offers rehabilitation training motions suited to different rehabilitation stages for patients with gait disorders, enhancing the patients' active participation in rehabilitation training, building confidence and motivation to move, and thereby improving the effect of rehabilitation training.
Owner:SHANGHAI UNIV
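The active-training loop above can be sketched in a few lines: an inverse-dynamics estimate isolates the patient's interaction torque, and a discrete spring-damper impedance law turns that torque into a gait-trajectory offset. The gains (`K`, `B`), time step, and torque values are illustrative placeholders, not values from the patent:

```python
# Hedged sketch of the active-training control loop. The impedance law
# K*dq + B*dq_dot = tau is discretized; the offset converges to tau/K.

def interaction_torque(measured_torque: float, model_torque: float) -> float:
    """Inverse dynamics step: subtract the robot's own predicted joint
    torque to isolate the torque the patient applies."""
    return measured_torque - model_torque

def impedance_correction(tau: float, dq_prev: float,
                         K=50.0, B=5.0, dt=0.01) -> float:
    """One discrete step of the spring-damper impedance controller,
    solved for the new trajectory offset dq."""
    return (tau * dt + B * dq_prev) / (K * dt + B)

tau = interaction_torque(measured_torque=12.0, model_torque=10.0)
dq = 0.0
for _ in range(200):          # offset converges geometrically toward tau/K
    dq = impedance_correction(tau, dq)
print(round(dq, 3))           # 0.04
```

A positive offset here would shift the reference gait in the direction the patient pushes, which is how the robot yields (assists) rather than rigidly tracking the nominal trajectory.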

Method and system for implicitly resolving pointing ambiguities in human-computer interaction (HCI)

A method and system for implicitly resolving pointing ambiguities in human-computer interaction by implicitly analyzing user movements of a pointer toward a user-targeted object located in an ambiguous multiple-object domain and predicting the targeted object, using different categories of heuristic (statically and/or dynamically learned) measures, such as (i) implicit user pointing gesture measures, (ii) application context, and (iii) the number of computer suggestions of each predicted target. Featured user pointing gesture measures are (1) the speed-accuracy tradeoff, expressed as total movement time (TMT) and amount of fine tuning (AFT) or tail length (TL), and (2) exact pointer position. A particular application-context heuristic measure is the containment hierarchy. The invention is widely applicable to resolving a variety of pointing ambiguities, such as composite-object pointing ambiguities, with different types of pointing devices, and to essentially any software and/or hardware methodology involving a pointer, such as computer-aided design (CAD), object-based graphical editing, and text editing.
Owner:RAMOT AT TEL AVIV UNIV LTD
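One way the named measures could combine into a prediction is a weighted score per candidate target. The linear form and the weights below are assumptions for illustration; the patent only names the measure categories (pointer position, TMT, AFT), not a fusion rule:

```python
# Illustrative fusion of pointing-gesture heuristics into a single score
# per candidate object; the lowest score is the predicted target.
import math

def score_target(pointer_xy, target_xy, tmt_s, aft_s,
                 w_dist=1.0, w_speed=0.5, w_tune=0.5):
    """Lower score = more likely the intended target."""
    dist = math.dist(pointer_xy, target_xy)   # exact pointer position measure
    return w_dist * dist + w_speed * tmt_s + w_tune * aft_s

def predict_target(pointer_xy, targets, tmt_s, aft_s):
    """Pick the candidate object with the best (lowest) heuristic score."""
    return min(targets,
               key=lambda t: score_target(pointer_xy, t[1], tmt_s, aft_s))[0]

targets = [("button", (100, 100)), ("icon", (105, 98)), ("menu", (300, 40))]
print(predict_target((104, 99), targets, tmt_s=0.8, aft_s=0.2))  # icon
```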

Man-machine interactive system for unmanned vehicle

The invention provides a man-machine interactive system for an unmanned vehicle. The system comprises an information processing and planning module, a vehicle control module, a self-inspection module, an environment information acquisition module, a man-machine interactive module, a data storage module, and a communication module. The system supports three operation modes: manual operation, voice operation, and mobile-phone app control. The system mainly operates as follows: firstly, according to whether the user enters the vehicle or controls it remotely, the corresponding unlocking and operation modes are adopted; secondly, the vehicle is started, the man-machine interactive system is activated, and vehicle self-inspection is performed; thirdly, after the vehicle is confirmed fault-free, a destination is input and a driving route is planned according to actual conditions; fourthly, the vehicle drives according to the driving scheme; and fifthly, the vehicle state and surroundings are monitored in real time, accidents are detected, and the driving mode and route are adjusted at any time according to surrounding conditions until the vehicle reaches the destination. The invention further provides a method by which a standby computer takes charge when the central computer breaks down, together with three other accident handling methods.
Owner:CHANGZHOU JIAMEI TECH CO LTD

Man-machine interaction method and system based on sight judgment

The invention relates to the technical field of man-machine interaction and provides a man-machine interaction method based on sight judgment that lets a user operate an electronic device. The method comprises the following steps: obtaining a facial image through a camera, performing human eye region detection on the image, and locating the pupil center within the detected eye region; calculating the correspondence between image coordinates and the electronic device's screen coordinate system; tracking the pupil center position and calculating the viewpoint coordinate of the eye on the screen according to that correspondence; and detecting eye blinking or eye closure actions and issuing corresponding control commands to the electronic device. The invention further provides a man-machine interaction system based on sight judgment. The method achieves stable judgment of the sight focus on the electronic device through a camera alone, with control commands issued by blinking or closing the eyes, so that operating the electronic device becomes simple and convenient for the user.
Owner:SHENZHEN INST OF ADVANCED TECH
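The two core computations above, mapping the pupil center to a screen coordinate and turning a blink into a command trigger, can be sketched as follows. A real system would fit the per-axis calibration coefficients from several calibration points; here they are assumed values for illustration:

```python
# Sketch of gaze-to-screen mapping plus blink detection. Calibration
# coefficients and the openness threshold are illustrative assumptions.
def gaze_to_screen(pupil_xy, ax, bx, ay, by):
    """Apply the per-axis linear calibration: screen = a * pupil + b."""
    px, py = pupil_xy
    return (ax * px + bx, ay * py + by)

def detect_blink(eye_openness_series, closed_thresh=0.2, min_frames=3):
    """A blink = eye openness below threshold for >= min_frames frames."""
    run = 0
    for openness in eye_openness_series:
        run = run + 1 if openness < closed_thresh else 0
        if run >= min_frames:
            return True
    return False

print(gaze_to_screen((320, 240), ax=3.0, bx=0.0, ay=2.5, by=0.0))  # (960.0, 600.0)
print(detect_blink([0.9, 0.1, 0.1, 0.1, 0.8]))                     # True
```

Requiring a minimum closed-frame run is what separates a deliberate blink command from ordinary involuntary blinks or tracking noise.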

Multifunctional composite rehabilitation system for patient suffering from central nerve injury

CN102813998A (Active) · Effects: realizes human-machine coordinated control; good curative effect · Classifications: gymnastic exercising; chiropractic devices · Concepts: disease; human motion
The invention discloses a multifunctional composite rehabilitation system for patients suffering from central nerve injury. The system comprises a database module, a data management module, a man-machine interaction module, a function evaluation module, a prescription generation and management module, a calculation module, a system control module, and a safety protection module. Using artificial intelligence, human motion intention can be predicted and recognized, human-machine coordinated control is realized, and active rehabilitation treatment of the patient is completed. Through multi-source signal collection, signal fusion, and real-time control technologies, the functions are organically combined and the function modules cooperate in rehabilitation treatment, effectively improving the rehabilitation effect and reducing labor. During rehabilitation, virtual reality technology guides the patient to complete the treatment step by step. Through interaction among doctor, patient, and computer, a progressive composite rehabilitation strategy matching the patient's condition is comprehensively determined, so that a patient with nerve injury can receive the most effective rehabilitation treatment within the golden rehabilitation period of 50 days, thereby maximizing functional rehabilitation.
Owner:SHANGHAI JIAO TONG UNIV

Human-machine interaction method and system based on binocular stereoscopic vision

The invention relates to the technical field of human-machine interaction and provides a human-machine interaction method and system based on binocular stereoscopic vision. The method comprises the following steps: projecting a screen calibration image onto a projection plane and acquiring the calibration image on the projection plane for system calibration; projecting an image and transmitting infrared light onto the projection plane, where the infrared light forms a hand-outline infrared spot when it meets a human hand; acquiring the image with the hand-outline infrared spot on the projection plane and calculating the fingertip coordinate of the hand according to the system calibration; and converting the fingertip coordinate into a screen coordinate according to the system calibration and executing the touch operation corresponding to that screen coordinate. The position and coordinate of the fingertip are obtained through system calibration and infrared detection, so a user can interact conveniently and quickly through finger touch operations on an ordinary projection plane; no special panel or auxiliary positioning device is needed on the projection plane; and the device is simple and convenient to install and use, and low in cost.
Owner:SHENZHEN INST OF ADVANCED TECH
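The final conversion step above is typically a planar homography: calibration yields a 3x3 matrix H mapping camera coordinates on the projection plane to screen coordinates, and the detected fingertip is pushed through it. The matrix below is a toy pure-scaling example standing in for a real calibration result:

```python
# Sketch of fingertip-to-screen conversion via a planar homography.
# H here is an assumed toy calibration (640x480 camera -> 1920x1080 screen).
def apply_homography(H, x, y):
    """Projective map: [u, v, w] = H @ [x, y, 1]; screen point = (u/w, v/w)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)

H = [[3.0, 0.00, 0.0],
     [0.0, 2.25, 0.0],
     [0.0, 0.00, 1.0]]
fingertip = (320, 240)                    # fingertip pixel from the IR spot
print(apply_homography(H, *fingertip))    # (960.0, 540.0)
```

A real calibration would produce a full projective H (non-zero perspective terms) from correspondences between the projected calibration image and its camera view, which is why the code divides by w rather than assuming w = 1.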

Distributed cognitive technology for intelligent emotional robot

The invention provides a distributed cognition technology for an intelligent emotional robot, applicable to multi-channel human-computer interaction in service robots, household robots, and the like. In the human-computer interaction process, multi-channel cognition of the environment and of people is distributed so that the interaction is more harmonious and natural. The technology comprises four parts: 1) a language comprehension module, which gives the robot the ability to understand human language through word segmentation, part-of-speech tagging, keyword acquisition, and similar steps; 2) a vision comprehension module, which covers related vision functions such as face detection, feature extraction, feature identification, and human behavior comprehension; 3) an emotion cognition module, which extracts related information from language, expression, and touch, analyzes the user emotion contained in that information, synthesizes a comparatively accurate emotion state, and lets the robot cognize the user's current emotion; and 4) a physical quantity cognition module, which lets the robot understand the environment and its own state as the basis for self-adjustment.
Owner:UNIV OF SCI & TECH BEIJING
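The emotion cognition module's synthesis across language, expression, and touch channels could be a weighted fusion of per-channel emotion distributions. The weighted-average rule and the weights below are assumptions for illustration; the patent does not specify the fusion method:

```python
# Illustrative multi-channel emotion fusion: each channel contributes a
# distribution over emotion states; a weighted average picks the overall
# emotion. Weights and distributions are made-up example values.
def fuse_emotions(channel_estimates, weights):
    """Weighted average of per-channel emotion distributions; returns the
    emotion with the highest fused probability."""
    fused = {}
    for channel, dist in channel_estimates.items():
        w = weights[channel]
        for emotion, p in dist.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * p
    return max(fused, key=fused.get)

estimates = {
    "language":   {"happy": 0.6, "neutral": 0.4},
    "expression": {"happy": 0.7, "neutral": 0.3},
    "touch":      {"happy": 0.2, "neutral": 0.8},
}
print(fuse_emotions(estimates, {"language": 0.4, "expression": 0.4, "touch": 0.2}))  # happy
```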

Material object programming method and system

The invention discloses a material object programming method and system, belonging to the field of human-machine interaction. The method comprises the following steps: 1) establishing a material object programming display environment; 2) photographing the sequence of material object programming blocks placed by the user and uploading the image to a material object programming processing module by means of an image acquisition unit; 3) converting the sequence of blocks into a corresponding functional semantic sequence in the processing module according to computer vision recognition and the position information of the blocks; 4) determining whether the current functional semantic sequence satisfies the grammatical and semantic rules of the display environment, and if not, feeding back a corresponding error prompt; 5) having the user replace the corresponding blocks according to the prompt information; and 6) repeating steps 2) to 5) until the functional semantic sequence corresponding to the placed blocks satisfies the grammatical and semantic rules, finishing the programming task. The method and system address the difficulty children and beginners have in learning programming; the system is low-cost and easy to popularize.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
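Steps 3 and 4 above, mapping recognized blocks to semantics and validating them against grammar rules, can be sketched as below. The block-to-semantics table and the start/end grammar rule are toy assumptions, not the patent's actual block set or grammar:

```python
# Sketch of the block-sequence pipeline: recognized blocks -> functional
# semantic sequence -> grammar check with an error prompt on failure.
BLOCK_SEMANTICS = {"green": "start", "blue": "move", "yellow": "turn", "red": "end"}

def to_semantics(block_sequence):
    """Step 3: map recognized block identities to functional semantics."""
    return [BLOCK_SEMANTICS[b] for b in block_sequence]

def check_grammar(semantics):
    """Step 4: validate the sequence; return an error prompt, or None if
    the sequence is valid and the programming task can finish."""
    if not semantics or semantics[0] != "start":
        return "program must begin with a start block"
    if semantics[-1] != "end":
        return "program must finish with an end block"
    return None

program = to_semantics(["green", "blue", "yellow", "red"])
print(check_grammar(program))                  # None -> task finished
print(check_grammar(to_semantics(["blue"])))   # error prompt for the user
```

The surrounding loop (steps 2–5) would simply re-photograph the blocks and re-run these two functions until `check_grammar` returns no error.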