108 results about "interactive nature" patented technology

Man-computer interaction method for intelligent human skeleton tracking control robot based on Kinect

The invention provides a man-computer interaction method for an intelligent human-skeleton-tracking control robot based on Kinect. The method includes: detecting the operator's actions with a 3D depth sensor and obtaining data frames; converting the data frames into an image; separating human-like objects from the background environment in the image; obtaining depth-of-field data; extracting human skeleton information and identifying the different parts of the body; building 3D coordinates of the body's joints; identifying rotation information of the skeleton joints of both hands and, by capturing changes in the angles of the different skeleton joints, identifying which hand is triggered; analyzing the operator's different action characteristics; and sending corresponding characters as control instructions to a lower-computer robot. An AVR single-chip-microcomputer master controller receives and processes the characters and controls the lower-computer robot to carry out the corresponding actions, achieving man-computer interaction for the Kinect-based intelligent human-skeleton-tracking control robot. The method removes the constraints that traditional external equipment places on man-computer interaction and achieves natural man-computer interaction.
Owner:NORTHWEST NORMAL UNIVERSITY
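As an illustrative sketch of the final step above, mapping a detected joint-angle change to a one-character control instruction for the lower-computer robot, the decision might look like the following (function names, thresholds and command characters are invented for illustration, not the patented implementation):

```python
# Minimal sketch of gesture-to-command mapping; all names and thresholds
# are illustrative assumptions, not the patented implementation.
def classify_gesture(left_elbow_delta, right_elbow_delta, threshold=30.0):
    """Map the larger elbow-angle change (in degrees) to a command character."""
    if right_elbow_delta >= threshold and right_elbow_delta >= left_elbow_delta:
        return 'F'  # e.g. drive forward
    if left_elbow_delta >= threshold:
        return 'B'  # e.g. drive backward
    return 'S'      # stop / no gesture detected
```

In a system like the one described, the returned character would then be written over a serial link to the AVR master controller.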

Man-machine interaction method and device based on emotion system, and man-machine interaction system

The invention discloses a man-machine interaction method and device based on an emotion system, and a man-machine interaction system. The method comprises the following steps: collecting voice emotion parameters, expression emotion parameters and body emotion parameters; calculating a to-be-determined voice emotion according to the voice emotion parameters; selecting the voice emotion closest to the to-be-determined voice emotion from preset voice emotions as the voice emotion component; calculating a to-be-determined expression emotion according to the expression emotion parameters; selecting the expression emotion closest to the to-be-determined expression emotion from preset expression emotions as the expression emotion component; calculating a to-be-determined body emotion according to the body emotion parameters; selecting the body emotion closest to the to-be-determined body emotion from preset body emotions as the body emotion component; fusing the voice, expression and body emotion components to determine an emotion identification result; and outputting multi-modal feedback information specific to the emotion identification result. The method, device and system make the man-machine interaction process smoother and more natural.
Owner:BEIJING GUANGNIAN WUXIAN SCI & TECH
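The nearest-preset selection and component fusion described above could be sketched as follows, assuming 2-D feature vectors for the presets and simple majority voting for the fusion (both are illustrative assumptions; the patent does not specify the distance metric or fusion rule):

```python
from collections import Counter

# Hypothetical preset emotions as 2-D feature vectors (valence, arousal);
# the real feature space and presets are not specified by the abstract.
PRESETS = {"happy": (0.9, 0.7), "sad": (0.1, 0.2), "angry": (0.2, 0.9)}

def nearest_preset(candidate):
    """Pick the preset emotion closest (squared Euclidean) to the candidate."""
    return min(PRESETS, key=lambda k: sum((a - b) ** 2
                                          for a, b in zip(candidate, PRESETS[k])))

def fuse(voice, expression, body):
    """Majority-vote fusion of the three modality components (tie -> voice)."""
    label, count = Counter((voice, expression, body)).most_common(1)[0]
    return label if count > 1 else voice
```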

Eye movement interaction method, system and device based on eye movement tracking technology

Active · CN111949131A · Solves the problem of not being able to accurately locate the target; improves operational efficiency. Classifications: input/output for user-computer interaction; sound input/output; optometry; eye tracking.
The invention belongs to the technical field of eye movement tracking and discloses an eye movement interaction method, system and device based on eye tracking technology. The method comprises the steps of: setting a passive-adsorption gaze cursor, or sensing the eye-movement interaction intention within a sensing area and predicting an active-adsorption gaze cursor, to select a target; and setting a corresponding sensing area, i.e. an effective clicking area, for each target, so that when the cursor contacts or covers the sensing area of a target, the system detects whether eye-movement behaviors such as eye tremor exist and whether the glancing distance exceeds a threshold value, and then adsorbs or highlights the target object. A machine learning algorithm is adopted to train on the user's eye-movement behavior data: the data is filtered, processed and analyzed, an eye-movement behavior rule is learned, and a model of the user's subjective-consciousness eye-movement interaction intention is obtained. The method improves the stability and accuracy of the eye-movement interaction process and improves the user experience of eye-movement interaction.
Owner:Chen Tao (陈涛)
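The adsorption decision in the abstract, snap the gaze cursor to a target when it enters the target's sensing area and the preceding glance is small enough, might be sketched like this (radii and thresholds are assumed values for illustration):

```python
# Sketch of the gaze-cursor "adsorption" check; sense_radius and
# saccade_max_px are illustrative assumptions, not patented parameters.
def should_adsorb(cursor, target_center, last_saccade_px,
                  sense_radius=40.0, saccade_max_px=25.0):
    """Adsorb the cursor onto a target when it lies inside the target's
    sensing area and the last glance distance is under the threshold."""
    dx = cursor[0] - target_center[0]
    dy = cursor[1] - target_center[1]
    inside = (dx * dx + dy * dy) ** 0.5 <= sense_radius
    return inside and last_saccade_px <= saccade_max_px
```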

Human-computer interaction guiding method and device based on artificial intelligence

The invention discloses a human-computer interaction guiding method and device based on artificial intelligence. The method comprises the following steps: S1, interaction information input by a user is received, and the current topic is determined according to the interaction information; S2, multiple candidate guiding topics related to the current topic are obtained on the basis of a topic graph; S3, user profile data of the user is obtained; S4, a guiding topic is chosen from the candidate guiding topics according to the user profile data and fed back to the user. By determining the current topic from the user's input, obtaining candidate guiding topics from the topic graph, and then choosing a guiding topic in combination with the user profile data and feeding it back to the user, the method and device improve the sustainability of human-computer interaction and make it more fluent and natural.
Owner:BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD
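Step S4, choosing a guiding topic from the candidates using the user image (profile) data, could be sketched as a simple tag-overlap score (the data shapes and scoring rule are assumptions for illustration):

```python
# Sketch of guiding-topic selection; candidate/profile shapes and the
# overlap score are illustrative assumptions, not the patented method.
def choose_guiding_topic(candidates, profile_interests):
    """Return the candidate topic name with the largest interest-tag overlap."""
    def score(topic):
        return len(set(topic["tags"]) & set(profile_interests))
    return max(candidates, key=score)["name"]
```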

Man-machine interaction method based on analysis of interest regions by bionic agent and vision tracking

The invention relates to a man-machine interaction method based on analysis of interest regions by a bionic agent and vision tracking, comprising the steps: (1) a designer carries out user analysis and, according to the result, designs interest regions likely to attract the user's attention; (2) an event interaction manager receives and analyzes data generated by an eye tracker in real time and calculates the focal positions of the user's eyeballs on the screen; (3) the event interaction manager uses the obtained focal positions to determine which interest regions attract the user's attention; and (4) the event interaction manager takes this analysis result as a non-contact instruction to control the expressions, actions and voice of the bionic agent on the man-machine interaction interface, giving the user intelligent feedback and realizing natural, harmonious man-machine interaction. A man-machine interaction system established according to the invention comprises: (1) the eye tracker; (2) a man-machine interaction interface; (3) the event interaction manager; and (4) the bionic agent.
Owner:BEIHANG UNIV
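Step (3), determining which designed interest region contains the on-screen gaze point, reduces to a point-in-region lookup; a minimal sketch, assuming rectangular regions (the region layout is an illustrative assumption):

```python
# Sketch of interest-region lookup; region names and rectangle layout
# are illustrative assumptions.
def region_of_interest(gaze, regions):
    """Return the name of the first rectangular region containing the gaze
    point, or None when the gaze falls outside every region."""
    x, y = gaze
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```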

Virtual simulation system and method for a net-based adversarial sport event based on three-dimensional multi-image display

The invention discloses a virtual simulation system and method, based on three-dimensional multi-image display, for a net-based adversarial sport event. The system comprises a network connection module, a game logic module, an interaction control module, a physical engine module, a three-dimensional rendering module and a dual-image projection display module. The network connection module comprises a server sub-module and a client sub-module and is used for network communication and data transmission; the game logic module stores the game rules, controls the playing of character animation and performs position mapping; the interaction control module controls the corresponding game characters in the virtual tennis scene to move and shoots three-dimensional images from different viewpoints; and the physical engine module uses a physical engine to efficiently and vividly simulate the physical effects of the tennis ball, such as rebound and collision, making the game scene more real and vivid. The system can perform three-dimensional multi-image display on the same display screen and render the same game scene in real time from different viewing angles.
Owner:SHANDONG UNIV
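The rebound-and-collision simulation attributed to the physics-engine module can be sketched as a single integration step with a damped floor bounce (the time step, gravity constant and restitution coefficient are assumed values, not parameters from the patent):

```python
# Sketch of one physics step for the ball: gravity plus a damped bounce
# at the floor y = 0; dt, g and restitution are illustrative assumptions.
def step_ball(pos, vel, dt=0.01, g=-9.8, restitution=0.8):
    """Advance the ball one step; reflect and damp velocity at the floor."""
    vx, vy = vel[0], vel[1] + g * dt
    x, y = pos[0] + vx * dt, pos[1] + vy * dt
    if y < 0.0:
        y, vy = -y * restitution, -vy * restitution
    return (x, y), (vx, vy)
```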

Intelligent voice interaction system and method

The invention discloses an intelligent voice interaction system and method. The system comprises a preprocessing module, a strategy flow module, a central control module, an automatic call-out module, a voice synthesis module, a voice recognition module and a language processing module, wherein the central control module contains a central control scheduling module for scheduling the strategy flow module, the automatic call-out module and the language processing module. The method comprises steps 1-12. The method realizes integrated scheduling of multiple algorithms: multiple algorithm models are scheduled according to set rules for calculation, and an optimal solution is obtained by integrating the calculation results, so that the blind spots of any single algorithm model are overcome and the models complement one another. Complex inputs such as multiple questions and multiple intentions are handled: the central control scheduling module performs preliminary preprocessing before the text is sent to the question calculation model, decomposes questions with multiple intentions into multiple parts through the multi-intention splitting calculation model, sends the parts to the question calculation model, integrates the answer results after obtaining multiple answers, and feeds the answer results back to the client.
Owner:Shanghai Rongshu Information Technology Co., Ltd. (上海荣数信息技术有限公司)
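The multi-intention splitting and answer-integration flow described above could be sketched as follows (the separator list and the answer model are illustrative assumptions; the patent's splitting model is a trained calculation model, not a string split):

```python
# Sketch of multi-intention splitting and answer integration; separators
# and the answer_model callable are illustrative assumptions.
def split_intents(question, separators=(";", " and also ")):
    """Decompose a multi-intention question into single-intention parts."""
    parts = [question]
    for sep in separators:
        parts = [p.strip() for chunk in parts for p in chunk.split(sep)]
    return [p for p in parts if p]

def answer_all(question, answer_model):
    """Answer each intention separately, then integrate the results."""
    return " ".join(answer_model(p) for p in split_intents(question))
```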

Video multi-scale visualization method and interaction method

The invention discloses a video multi-scale visualization method and a video multi-scale interaction method. The interaction method comprises the steps of: establishing a user cognition model of the target video oriented to the video content structure; extracting the foreground objects and background scene in the target video and the image frames of the foreground objects; obtaining the moving targets and their corresponding trajectories; calculating the appearance density of the moving targets according to a time-axis-based moving-target appearance count and the corresponding time mapping relation; extracting key frames from the processed target video data and annotating the moving-target information in the key frames; performing multi-scale division on the processed moving-target identification results and trajectory data to generate a multi-scale video information representation structure; and introducing sketch interactive gestures, combined with the corresponding semantics of mouse interactive operations, to the interactive interface of the multi-scale video information representation structure based on the user's interactive operation mode, so that the target video is operated on the interactive interface through the sketch interactive gestures.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
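The appearance-density step, counting moving-target appearances per interval along the time axis, amounts to binning timestamps; a minimal sketch (the bin size is an assumed parameter):

```python
# Sketch of time-axis appearance density; bin_seconds is an
# illustrative assumption, not a parameter from the patent.
def appearance_density(timestamps, duration, bin_seconds=10):
    """Count moving-target appearance times per bin over [0, duration)."""
    n_bins = -(-duration // bin_seconds)  # ceiling division
    bins = [0] * n_bins
    for t in timestamps:
        bins[min(int(t // bin_seconds), n_bins - 1)] += 1
    return bins
```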

Human-computer interaction method based on eye movement control

The invention discloses a human-computer interaction method based on eye movement control. The method comprises human-eye pupil positioning and eye-movement characteristic extraction. Pupil center positioning based on grey-level information comprises three stages: (1) human eye positioning; (2) pupil edge detection; and (3) pupil center positioning. Eye-movement characteristic extraction based on visual-point movement comprises: calculating the fixation-point deviation and the visual-point deviation mapping; on the basis of the displacement difference between two adjacent image frames, moving the eyes back and forth over calibration points at known screen coordinates, and using a least-squares curve-fitting algorithm to solve the mapping function; and, after the eye-movement characteristic information is obtained, carrying out the corresponding system message response based on the obtained eye-movement control displacement and angle information. The method reduces the user's energy consumption, assists the user in direct control, and realizes efficient and natural interaction.
Owner:BEIJING INST OF COMP TECH & APPL
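The least-squares calibration step, fitting a mapping from pupil offset to screen coordinate, can be sketched with a 1-D linear model (real calibrations typically fit 2-D polynomials per axis; the linear form here is a simplifying assumption):

```python
# Sketch of the least-squares calibration fit: screen = a * offset + b.
# The 1-D linear model is a simplifying assumption for illustration.
def fit_linear(offsets, screen_coords):
    """Closed-form least-squares fit of a line through the calibration data."""
    n = len(offsets)
    mx = sum(offsets) / n
    my = sum(screen_coords) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(offsets, screen_coords))
         / sum((x - mx) ** 2 for x in offsets))
    return a, my - a * mx
```

Calibration then evaluates `a * offset + b` on live pupil offsets to recover the on-screen gaze coordinate.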

Augmented reality device

The invention relates to the technical field of augmented reality (AR) and discloses an augmented reality device. The device comprises: a camera for shooting the external real environment; an image display screen for displaying virtual features; a semi-reflective, semi-transmissive mirror through which users view, with both eyes, an integrated scene superimposing the virtual features on the external real environment, the mirror having a single-view structure shared by both eyes; an optical-path transition assembly for projecting the virtual features displayed on the image display screen onto the mirror; and a data processor, connected to the camera and the image display screen, for identifying, positioning and operating on the camera's image-pickup information and for controlling the image display screen and changing the display information according to the operation results, so as to realize spatial interaction between users and the virtual features. Because the optical path is a single-view system shared by both eyes, no split-screen calculation is needed, saving computation and energy; the device has strong compatibility, realizes spatial interaction between users and virtual features, and improves the user experience.
Owner:Guangzhou Shuyu Information Technology Co., Ltd. (广州数娱信息科技有限公司)