1835 results about "Human system interaction" patented technology

Human System Interaction (HSI), or Human-Computer Interaction (HCI), focuses on the design, research, and development of interfaces between people and systems such as computers. HSI and HCI are multidisciplinary fields involving computer science, behavioral science, design, and other related fields of study.

Human-machine interaction method and device based on gaze tracking and gesture recognition

The invention discloses a human-computer interaction method and device based on gaze tracking and gesture recognition. The method comprises the following steps: face region detection, hand region detection, eye localization, fingertip localization, screen localization, and gesture recognition. A straight line is determined between an eye and a fingertip; the point where this line intersects the screen is converted into the logical coordinate of the mouse on the screen, while a mouse click is simulated by detecting a pressing action of the finger. The device comprises an image acquisition module, an image processing module, and a wireless transmission module. A camera captures images of the user in real time; an image processing algorithm then converts the on-screen position the user points at, together with the user's gesture changes, into logical screen coordinates and control commands for the computer, and the results are sent to the computer through the wireless transmission module. The invention provides a natural, intuitive, and simple human-computer interaction method that enables remote operation of computers.
Owner:SOUTH CHINA UNIV OF TECH
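
The eye-fingertip pointing step described in the abstract above can be sketched as a ray-plane intersection. This is a minimal illustration, not the patent's implementation: it assumes the screen lies in the plane z = 0 and that 3D eye and fingertip positions (hypothetical values below) are already available from the image processing stage.

```python
# Sketch: intersect the ray from the eye through the fingertip with the
# screen plane z = 0 to obtain the logical pointing coordinate.

def screen_intersection(eye, fingertip):
    """Return the (x, y) point where the eye->fingertip ray hits z = 0,
    or None if the ray is parallel to (or points away from) the screen."""
    ex, ey, ez = eye
    fx, fy, fz = fingertip
    dz = fz - ez
    if dz == 0:
        return None  # ray parallel to the screen plane
    t = -ez / dz     # parameter where eye + t*(fingertip - eye) has z = 0
    if t <= 0:
        return None  # screen is behind the eye along this ray
    return (ex + t * (fx - ex), ey + t * (fy - ey))

# Eye 60 cm from the screen, fingertip 40 cm out and slightly offset:
print(screen_intersection((0.0, 0.0, 60.0), (5.0, 2.0, 40.0)))  # -> (15.0, 6.0)
```

The returned point would then be scaled into the screen's pixel coordinate system before driving the mouse.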

Motion control method for a lower limb rehabilitation robot

The invention relates to a motion control method for a lower limb rehabilitation robot. For different stages of a patient's rehabilitation, the method provides two working modes: passive training and active training. In passive-training mode, the robot is controlled to drive the patient through specific motions along a correct physiological gait trajectory; abnormal motions of the patient are completely restrained, and the patient passively follows the robot in walking rehabilitation training. In active-training mode, the robot restrains only limited abnormal motions. The joint driving forces the patient exerts on the robot during motion are detected in real time, and the human-robot interaction torque is extracted with an inverse dynamic model to infer the active motion intention of the patient's lower limbs. An impedance controller converts the interaction torque into a correction of the gait trajectory, either directly correcting the trajectory or generating the patient's desired training trajectory through an adaptive controller, so that the robot can indirectly provide assistive or resistive force during training. The method thus supplies rehabilitation training motions suited to different rehabilitation stages for patients with gait disorders, increasing the patients' active participation, building their confidence and motivation, and improving the effect of the rehabilitation training.
Owner:SHANGHAI UNIV
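
The impedance-control step above admits a compact sketch: the measured interaction torque tau is passed through a virtual mass-damper-spring model, M*x'' + B*x' + K*x = tau, whose output x is the trajectory correction. The gains, time step, and torque profile below are illustrative values only, not the patent's parameters.

```python
# Sketch: convert human-robot interaction torque into a gait-trajectory
# correction with a discrete (explicit Euler) impedance model.

def impedance_correction(torques, m=1.0, b=5.0, k=20.0, dt=0.01):
    """Integrate M*x'' + B*x' + K*x = tau over a torque sequence and
    return the resulting trajectory corrections."""
    corr, vel = 0.0, 0.0
    out = []
    for tau in torques:
        acc = (tau - b * vel - k * corr) / m
        vel += acc * dt
        corr += vel * dt
        out.append(corr)
    return out

# A sustained 2 N*m interaction torque gradually deflects the reference
# gait toward the patient's intended motion (steady state = tau/K = 0.1):
corrections = impedance_correction([2.0] * 500)
print(round(corrections[-1], 3))
```

Stiff K yields small corrections (the robot resists the patient), while a compliant K lets the patient reshape the trajectory, which is how assistive versus resistive behavior can be tuned.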

Human-computer interaction device and method adopting eye tracking in video monitoring

The invention belongs to the technical field of video monitoring and in particular relates to a human-computer interaction device and method using eye tracking in video monitoring. The device comprises a non-invasive facial/eye video acquisition unit, a monitoring screen with infrared reference light sources arranged around it, an eye tracking image processing module, and a human-computer interaction interface control module. The eye tracking image processing module separates left-eye and right-eye sub-images from a captured facial image, analyzes the two sub-images, and estimates the gaze position on the monitoring screen. The invention also provides an efficient interaction scheme tailored to the characteristics of eye tracking: with the eyes alone, the user can select menu functions, switch monitoring video content, adjust the focus and viewing angle of a remote monitoring camera, and so on, improving the efficiency of operating video monitoring equipment and systems.
Owner:FUDAN UNIV

Method and system for implicitly resolving pointing ambiguities in human-computer interaction (HCI)

A method and system for implicitly resolving pointing ambiguities in human-computer interaction by implicitly analyzing user movements of a pointer toward a user targeted object located in an ambiguous multiple object domain and predicting the user targeted object, using different categories of heuristic (statically and/or dynamically learned) measures, such as (i) implicit user pointing gesture measures, (ii) application context, and (iii) number of computer suggestions of each predicted user targeted object. Featured are user pointing gesture measures of (1) speed-accuracy tradeoff, expressed as total movement time (TMT) together with amount of fine tuning (AFT) or tail length (TL), and (2) exact pointer position. A particular application context heuristic measure used is referred to as containment hierarchy. The invention is widely applicable to resolving a variety of different types of pointing ambiguities, such as composite-object pointing ambiguities, involving different types of pointing devices, and to essentially any software and/or hardware methodology involving a pointer, such as computer aided design (CAD), object based graphical editing, and text editing.
Owner:RAMOT AT TEL AVIV UNIV LTD
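
The heuristic fusion above can be illustrated with a toy scorer: candidate targets are ranked by combining the exact-pointer-position measure with a speed-accuracy (total movement time) measure and an application-context score. The weighting scheme, candidate data, and context scores below are invented for illustration, not the patent's actual measures.

```python
# Sketch: rank candidate targets by a TMT-weighted mix of pointer
# distance and application-context preference.

import math

def rank_targets(pointer, movement_time, candidates, expected_time=0.6):
    """candidates maps name -> ((x, y), context_score).

    Careful pointing (long total movement time) trusts the exact pointer
    position; a quick flick leans on the application-context score."""
    w = min(movement_time / expected_time, 1.0)  # 0..1 trust in position
    scored = []
    for name, ((cx, cy), context) in candidates.items():
        dist = math.hypot(cx - pointer[0], cy - pointer[1])
        # Lower score is better: weighted distance minus context preference.
        scored.append((w * dist - (1.0 - w) * context, name))
    return [name for _, name in sorted(scored)]

candidates = {
    "icon_a": ((100, 100), 0.2),
    "icon_b": ((104, 102), 0.1),
    "canvas": ((90, 95), 0.9),
}
print(rank_targets((103, 101), 0.9, candidates))  # careful movement
```

The top-ranked candidate would be offered as the predicted target, with the remaining order available for computer suggestions.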

Man-machine interaction method and system based on gaze estimation

The invention relates to the technical field of man-machine interaction and provides a man-machine interaction method based on gaze estimation that lets a user operate an electronic device. The method comprises the following steps: capturing a facial image with a camera, detecting the human eye region in the image, and locating the pupil center within the detected eye region; calculating a correspondence between the image coordinate system and the screen coordinate system of the electronic device; tracking the pupil center and computing, from that correspondence, the coordinates of the user's gaze point on the screen; and detecting eye-blink or eye-closure actions and issuing the corresponding control commands to the electronic device. The invention further provides a man-machine interaction system based on gaze estimation. With this method, a stable gaze focus on the electronic device is determined through the camera alone, and control commands are issued by blinking or closing the eyes, making operation of the electronic device simple and convenient.
Owner:SHENZHEN INST OF ADVANCED TECH
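
The image-to-screen correspondence step above can be sketched with a simple two-point calibration: the pupil center is recorded while the user fixates two known screen positions, and each axis is then mapped linearly. Real gaze trackers use richer models; the calibration pairs and resolution below are hypothetical numbers.

```python
# Sketch: per-axis linear mapping from pupil image coordinates to
# screen coordinates, fitted from two calibration fixations.

def make_mapper(pupil_a, screen_a, pupil_b, screen_b):
    """Return f(pupil_xy) -> screen_xy from two calibration pairs."""
    def axis(pa, sa, pb, sb):
        scale = (sb - sa) / (pb - pa)
        return lambda p: sa + (p - pa) * scale
    fx = axis(pupil_a[0], screen_a[0], pupil_b[0], screen_b[0])
    fy = axis(pupil_a[1], screen_a[1], pupil_b[1], screen_b[1])
    return lambda p: (fx(p[0]), fy(p[1]))

# Calibrate on the top-left and bottom-right corners of a 1920x1080 screen:
to_screen = make_mapper((200, 150), (0, 0), (440, 330), (1920, 1080))
print(to_screen((320, 240)))  # pupil midway -> (960.0, 540.0), screen centre
```

The tracked pupil position is passed through this mapping each frame to yield the gaze point used for control.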

Distributed cognitive technology for intelligent emotional robot

The invention provides distributed cognitive technology for an intelligent emotional robot, applicable to multi-channel human-computer interaction in service robots, household robots, and the like. In human-computer interaction, cognition of the environment and of people is distributed across multiple channels so that the interaction becomes more harmonious and natural. The technology comprises four parts: 1) a language comprehension module, which gives the robot the ability to understand human language through word segmentation, part-of-speech tagging, keyword extraction, and related steps; 2) a vision comprehension module, which covers vision functions such as face detection, feature extraction, feature recognition, and human behavior understanding; 3) an emotion cognition module, which extracts emotional cues from language, facial expression, and touch, analyzes the user emotion they convey, synthesizes a comparatively accurate emotional state, and thereby lets the robot recognize the user's current emotion; and 4) a physical quantity cognition module, which lets the robot understand the environment and its own state as the basis for self-adjustment.
Owner:UNIV OF SCI & TECH BEIJING

Upper limb exoskeleton rehabilitation robot control method based on a radial basis function neural network

The invention discloses a control method for an upper limb exoskeleton rehabilitation robot based on a radial basis function (RBF) neural network. The method includes the following steps: a musculoskeletal model of the human upper limb is established; upper limb electromyographic (EMG) signals and movement data are collected, the movement data are fed into the musculoskeletal model to obtain upper limb joint torque, and an RBF neural network model is built from these data; the patient's movement intention is recognized through fusion analysis with the joint angular velocity, the result is used to identify the joint flexion and extension state of the training subject, and the limb movement intention is determined; EMG signals and joint angles are then collected in real time during affected-side rehabilitation training, the affected-side joint torque is estimated by the neural network, the torque the exoskeleton arm must compensate is calculated, and EMG fatigue features are analyzed so that the compensation torque can be adjusted according to the classified degree of fatigue; finally, a torque controller, combined with the recognized movement intention, drives the upper limb rehabilitation robot to assist the patient in training. The method makes the rehabilitation training process better suited to the patient, strengthens man-machine interaction, and improves the rehabilitation effect.
Owner:YANSHAN UNIV
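
The EMG-to-torque mapping above can be sketched as an RBF forward pass: a feature vector is passed through a layer of Gaussian basis functions and a linear output layer. The centres, weights, and width below are made-up numbers; in the patent's scheme they would be fitted to the torque data from the musculoskeletal model.

```python
# Sketch: RBF network forward pass estimating joint torque from a
# feature vector (e.g. EMG features plus joint angle).

import math

def rbf_torque(features, centres, weights, width=1.0):
    """torque = sum_i w_i * exp(-||x - c_i||^2 / width^2)."""
    torque = 0.0
    for c, w in zip(centres, weights):
        sq_dist = sum((x - ci) ** 2 for x, ci in zip(features, c))
        torque += w * math.exp(-sq_dist / width ** 2)
    return torque

centres = [(0.0, 0.0), (1.0, 1.0)]   # hidden-unit centres in feature space
weights = [2.0, 5.0]                 # output-layer weights
print(round(rbf_torque((1.0, 1.0), centres, weights), 3))
```

The estimated affected-side torque is subtracted from the required joint torque to obtain the compensation the exoskeleton arm must supply, scaled by the fatigue classification.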

System and method for intelligently controlling indoor environment based on thermal imaging technology

The invention discloses a system for intelligently controlling an indoor environment based on thermal imaging technology. The system comprises a thermal imaging sensor, a human-computer interaction interface device, and an area controller. The thermal imaging sensor acquires thermal imaging data of the indoor environment and analyzes it to calculate a subjective radiation temperature; the area controller then adjusts the cooling and heating output at the indoor air-conditioner terminal according to the subjective radiation temperature and the temperature set on the human-computer interaction interface device, optimizing the indoor environment to a temperature comfortable for the occupants. Given comprehensive environment and occupant information, the system and the corresponding method can regulate the indoor temperature, improve occupant comfort and work efficiency, reduce energy consumption, and raise the management level of a building's automatic control system.
Owner:于震
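
The adjustment step above can be illustrated with a minimal proportional control sketch: the air-conditioner terminal output is driven by the gap between the subjective radiation temperature and the interface setpoint. The gain and clamp values are illustrative only; the patent's controller is not specified at this level of detail.

```python
# Sketch: proportional cooling/heating demand from the gap between the
# subjective radiation temperature and the user setpoint.

def hvac_output(subjective_temp, setpoint, gain=0.5, max_output=1.0):
    """Positive -> cooling demand, negative -> heating, clamped to +/-max."""
    error = subjective_temp - setpoint
    return max(-max_output, min(max_output, gain * error))

print(hvac_output(28.0, 25.0))  # too warm -> full cooling demand, 1.0
print(hvac_output(22.0, 25.0))  # too cool -> full heating demand, -1.0
```

Because the error term uses the occupants' subjective radiation temperature rather than plain air temperature, the same loop conditions toward perceived comfort rather than a raw thermostat reading.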

Man-computer interaction method for an intelligent robot controlled by human skeleton tracking based on Kinect

The invention provides a man-computer interaction method for an intelligent robot controlled by Kinect-based human skeleton tracking. The method includes the following steps: detecting the operator's actions with a 3D depth sensor and obtaining data frames; converting the data frames into an image and separating human-like objects from the background environment; obtaining depth-of-field data and extracting human skeleton information; identifying the different parts of the body and building 3D coordinates of the body joints; recognizing the rotation of the skeletal joints of both hands and determining which hand is triggered by capturing the changing angles of the different joints; analyzing the operator's action characteristics and sending the corresponding characters as control instructions to a lower-computer robot; and having an AVR single-chip microcomputer master controller receive and process the characters and drive the robot to carry out the corresponding actions, thereby achieving Kinect-based man-computer interaction with the skeleton-tracking-controlled robot. The method removes the constraints that traditional external devices impose on man-computer interaction and achieves natural interaction.
Owner:NORTHWEST NORMAL UNIVERSITY
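
The joint-angle step above can be sketched from three tracked 3D points: the angle at a joint (e.g. the elbow) follows from the vectors to its two neighbours, and a threshold on that angle triggers a command character. The coordinates, threshold, and command letters below are hypothetical, not the patent's protocol.

```python
# Sketch: compute the angle at a skeleton joint from three 3D points
# and map an extended arm to a command character for the robot.

import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_angle))

def arm_command(shoulder, elbow, wrist, threshold=120.0):
    """Map an extended arm to a hypothetical 'forward' command character."""
    return "F" if joint_angle(shoulder, elbow, wrist) > threshold else "S"

# A straight arm (180 degrees at the elbow) triggers the forward command:
print(arm_command((0, 0, 0), (1, 0, 0), (2, 0, 0)))  # -> F
```

In the described system, such characters are sent over serial to the AVR master controller, which executes the matching motion.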

Natural language understanding method and human-machine interaction intelligent system

The invention discloses a natural language understanding method comprising the following steps. After receiving natural language input from the user, the input is matched to concept-language symbols, and concepts are associated with those symbols. The concept best suited to the current language content is selected by comparison with a preset concept dictionary, and the method judges whether the concept is ambiguous: if it is, the concept is resolved through a language database before proceeding; if not, the concept is resolved on the principle of matching the language content. Through concept reorganization, a core concept and sub-concepts are obtained, where the core meaning of the core concept is defined by a computer operation and the sub-meanings of the sub-concepts are defined by the content of that operation; combining the core meaning with the sub-meanings yields the complete meaning. The invention also provides a human-computer interaction intelligent system based on this method. The invention recognizes the user's natural speech input more accurately and thus provides more intelligent and complete services.
Owner:CHINA TELECOM CORP LTD
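
The dictionary lookup and ambiguity check above can be illustrated with a toy resolver: a word maps to candidate concepts, and when more than one matches, the surrounding context words disambiguate. The dictionary, concept names, and cue words below are invented examples standing in for the patent's concept dictionary and language database.

```python
# Sketch: concept-dictionary lookup with context-based disambiguation.

CONCEPT_DICT = {
    "book": [("reserve_action", {"ticket", "hotel", "flight"}),
             ("reading_object", {"read", "author", "page"})],
}

def resolve_concept(word, context_words):
    """Return the concept name for a word, using context on ambiguity."""
    candidates = CONCEPT_DICT.get(word, [])
    if len(candidates) <= 1:
        return candidates[0][0] if candidates else None
    # Ambiguous: pick the concept whose cue words overlap the context most.
    def overlap(cand):
        return len(cand[1] & set(context_words))
    return max(candidates, key=overlap)[0]

print(resolve_concept("book", ["please", "book", "a", "flight"]))
```

A resolved core concept (here a reservation action) would then be recombined with sub-concepts (what to reserve, when) to form the complete meaning driving the system's response.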

Intelligent system for monitoring vehicles and people in community

The invention discloses an intelligent system for monitoring vehicles and people in a community. The system comprises a monitor image collection module, a monitor image pre-processing module, a vehicle contour and license plate image information recognition module, a vehicle information database, a face image information recognition module, a people information database, an image information identification verification and positioning speed-measurement module, and an intelligent management module. The image collection module collects image information of vehicles and people; the pre-processing module pre-processes the images; the vehicle contour and license plate recognition module processes, extracts, and recognizes vehicle image information; and the face recognition module extracts and recognizes the captured face information. The identification verification and positioning speed-measurement module performs identity verification and positioning tests, and the intelligent management module handles human-machine interface operation and intelligent processing. The system monitors vehicles and people in the community and performs entrance guard management, safeguarding the travel safety of the community's residents.
Owner:SUN YAT SEN UNIV