582 results about "Interaction technique" patented technology

An interaction technique, user interface technique or input technique is a combination of hardware and software elements that provides a way for computer users to accomplish a single task. For example, one can go back to the previously visited page on a Web browser by either clicking a button, pressing a key, performing a mouse gesture or uttering a speech command. It is a widely used term in human-computer interaction. In particular, the term "new interaction technique" is frequently used to introduce a novel user interface design idea.

Method, system, and computer program product for visualizing a data structure

A data structure visualization tool visualizes a data structure such as a decision table classifier. A data file based on a data set of relational data is stored as a relational table, where each row represents an aggregate of all the records for each combination of values of the attributes used. Once loaded into memory, an inducer is used to construct a hierarchy of levels, called a decision table classifier, where each successive level in the hierarchy has two fewer attributes. Besides a column for each attribute, there is a column for the record count (or, more generally, the sum of record weights) and a column containing a vector of probabilities (each probability gives the proportion of records in each class). Finally, at the top-most level, a single row represents all the data. The decision table classifier is then passed to the visualization tool for display and is visualized. By building a representative scene graph adaptively, the visualization application never loads the whole data set into memory. Interactive techniques such as drill-down and drill-through are used to view further levels of detail or to retrieve a subset of the original data. The decision table visualizer helps a user understand the importance of specific attribute values for classification.
Owner:RPX CORP +1
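The level-building step described in this abstract (one aggregated row per attribute-value combination, with a record count and a class-probability vector, and two fewer attributes at each successive level) can be sketched roughly as follows; the function and field names are illustrative, not from the patent:

```python
from collections import Counter, defaultdict

def build_levels(records, attributes, class_key):
    """Aggregate records into successive levels; each level uses two
    fewer attributes than the previous one, and the final level (no
    attributes) is a single row representing all the data."""
    levels = []
    attrs = list(attributes)
    while True:
        table = defaultdict(Counter)
        for r in records:
            key = tuple(r[a] for a in attrs)
            table[key][r[class_key]] += 1
        rows = []
        for key, counts in sorted(table.items()):
            total = sum(counts.values())           # record count column
            probs = {c: n / total for c, n in counts.items()}  # probability vector
            rows.append((dict(zip(attrs, key)), total, probs))
        levels.append(rows)
        if not attrs:
            break
        attrs = attrs[:-2]  # drop two attributes per level, as in the abstract
    return levels
```

A drill-down interaction would then simply move from one entry of `levels` to the next, more detailed one.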

System and method that facilitates computer desktop use via scaling of displayed objects with shifts to the periphery

The present invention relates to a system that facilitates multi-tasking in a computing environment. A focus area component defines a focus area within a display space, the focus area occupying a subset of the display space area. A scaling component scales display objects as a function of proximity to the focus area, and a behavior modification component modifies the respective behavior of the display objects as a function of their location in the display space. More particularly, the subject invention provides interaction technique(s) and user interface(s) for managing display objects on a display surface. One aspect of the invention defines a central focus area where display objects are displayed and behave as usual, and a periphery outside the focus area where display objects are reduced in size based on their location, getting smaller as they near an edge of the display surface so that many more objects can remain visible. In addition or alternatively, the objects can fade as they move toward an edge, with fading increasing as a function of distance from the focus area, use of the object, and/or priority of the object. Objects in the periphery can also be modified to have different interaction behavior (e.g., lower refresh rate, fading, reconfiguration to display sub-objects based on relevance and/or visibility, static rendering, etc.), as they may be too small for standard rendering. The methods provide a flexible, scalable surface when coupled with automated policies for moving objects into the periphery in response to the introduction of new objects or the resizing of pre-existing objects by a user or an autonomous process.
Owner:MICROSOFT TECH LICENSING LLC
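The proximity-based scaling described above might be sketched as follows, assuming an axis-aligned focus rectangle and a linear falloff toward the display edge (both assumptions, not details from the patent):

```python
def peripheral_scale(obj_center, focus_rect, display_rect, min_scale=0.2):
    """Scale factor for a display object: 1.0 inside the focus area,
    shrinking linearly toward min_scale at the display edge.
    Rectangles are (x0, y0, x1, y1); all names are illustrative."""
    fx0, fy0, fx1, fy1 = focus_rect
    dx0, dy0, dx1, dy1 = display_rect
    x, y = obj_center
    if fx0 <= x <= fx1 and fy0 <= y <= fy1:
        return 1.0  # inside the focus area: behave as usual
    # distance from the focus rectangle, normalised by the room left to the edge
    dist_x = max(fx0 - x, x - fx1, 0)
    dist_y = max(fy0 - y, y - fy1, 0)
    room_x = max(fx0 - dx0, dx1 - fx1)
    room_y = max(fy0 - dy0, dy1 - fy1)
    t = max(dist_x / room_x if room_x else 0, dist_y / room_y if room_y else 0)
    return max(min_scale, 1.0 - (1.0 - min_scale) * min(t, 1.0))
```

The same normalised distance `t` could drive the fading and refresh-rate changes the abstract mentions.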

Man-machine interaction method and system based on sight judgment

The invention relates to the technical field of man-machine interaction and provides a man-machine interaction method based on sight judgment that allows a user to operate an electronic device. The method comprises the following steps: obtaining a facial image through a camera, detecting the human eye area in the image, and locating the pupil center within the detected eye area; calculating the correspondence between the image coordinate system and the electronic device's screen coordinate system; tracking the position of the pupil center and calculating the view point coordinate of the human eye on the screen according to that correspondence; and detecting an eye-blinking or eye-closing action and issuing the corresponding control command to the electronic device. The invention further provides a man-machine interaction system based on sight judgment. With this method, stable sight focus judgment on the electronic device is realized through the camera, and control commands are issued through eye blinking or eye closure, so that operating the electronic device becomes simple and convenient for the user.
Owner:SHENZHEN INST OF ADVANCED TECH
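The step that maps pupil-center image coordinates to screen coordinates can be illustrated with a minimal two-point calibration (e.g. the user looks at two opposite screen corners); a real system would use more calibration points and a full homography. All names here are illustrative:

```python
def fit_linear_map(image_pts, screen_pts):
    """Fit an axis-aligned linear map from image coordinates to screen
    coordinates using two calibration points. Returns a function that
    converts a tracked pupil-center position to a screen view point."""
    (ix0, iy0), (ix1, iy1) = image_pts
    (sx0, sy0), (sx1, sy1) = screen_pts
    ax = (sx1 - sx0) / (ix1 - ix0)  # horizontal scale
    ay = (sy1 - sy0) / (iy1 - iy0)  # vertical scale
    bx, by = sx0 - ax * ix0, sy0 - ay * iy0
    return lambda x, y: (ax * x + bx, ay * y + by)
```

Once fitted, the returned function is applied to every tracked pupil-center position to obtain the on-screen view point.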

Human-machine interaction method and system based on binocular stereoscopic vision

The invention relates to the technical field of human-machine interaction and provides a human-machine interaction method and system based on binocular stereoscopic vision. The method comprises the following steps: projecting a screen calibration image onto a projection plane and acquiring the calibration image on the projection plane for system calibration; projecting an image and transmitting infrared light onto the projection plane, where the infrared light forms an infrared spot on the outline of any human hand it meets; acquiring an image with the hand-outline infrared spot on the projection plane and calculating the fingertip coordinate of the hand according to the system calibration; and converting the fingertip coordinate into a screen coordinate according to the system calibration and executing the touch operation corresponding to that screen coordinate. According to the invention, the position and coordinate of the fingertip are obtained through system calibration and infrared detection, so a user can carry out human-machine interaction conveniently and quickly by touching an ordinary projection plane with a finger; no special panels or auxiliary positioning devices are needed on the projection plane; and the device is simple and convenient to mount and use, and lower in cost.
Owner:SHENZHEN INST OF ADVANCED TECH
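The fingertip-coordinate step depends on binocular depth. The textbook stereo relation Z = f·B/d (pinhole model, rectified cameras) gives a minimal sketch; the patent's own calibration procedure is more involved, and the parameter names below are illustrative:

```python
def fingertip_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of the fingertip infrared spot from binocular stereo:
    Z = f * B / d, where d is the horizontal disparity (in pixels)
    between the left and right camera views."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / d
```

With the depth known, the fingertip can be projected into the calibrated screen coordinate system to decide which on-screen contact to trigger.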
Information processing method and device for realizing intelligent question answering

The invention relates to the technical field of man-machine interaction, and discloses an information processing method and device for realizing intelligent question answering. The method comprises the following steps: carrying out sentence segmentation on question text information to obtain a user question; and retrieving from a QA library, on the basis of question similarity, the standard question most similar to the user question together with its corresponding answer information. Compared with existing keyword retrieval-based question answering methods, the method disclosed by the invention does not require users to decompose their question into keywords, is automatic throughout, and can greatly enhance the user experience and improve the search effect as well as the pertinence and effectiveness of answers. Meanwhile, by fusing natural language understanding technologies such as sentence model analysis, lexical analysis and lexical meaning extension, and performing a comprehensive calculation over multi-dimensional similarities, the method improves the accuracy of the final sentence similarity in Chinese automatic question answering and makes a practical Chinese intelligent question answering system possible.
Owner:JIANGMEN POWER SUPPLY BUREAU OF GUANGDONG POWER GRID
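The "most similar standard question" step can be illustrated with a simple word-overlap cosine similarity, which is a deliberately crude stand-in for the patent's multi-dimensional similarity; the QA-library format shown is an assumption:

```python
import math
from collections import Counter

def cosine_sim(a_tokens, b_tokens):
    """Cosine similarity between two token lists."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_answer(question, qa_library, tokenize=str.split):
    """Return the answer whose standard question is most similar to the
    user question. qa_library is a list of (standard_question, answer)
    pairs; for Chinese text the tokenizer would be a word segmenter."""
    q = tokenize(question)
    return max(qa_library, key=lambda qa: cosine_sim(q, tokenize(qa[0])))[1]
```

A fuller system would combine several such similarity dimensions (lexical, syntactic, semantic) into one score, as the abstract describes.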

Systems and Methods for Constructing Relationship Specifications from Component Interactions

Techniques for automatically creating at least one relationship specification are provided. For example, one computer-implemented technique includes observing at least one interaction between two or more components of at least one distributed computing system, consolidating the at least one interaction into at least one interaction pattern, and using the at least one interaction pattern to create at least one relationship specification, wherein the at least one relationship specification is usable for managing the at least one distributed computing system. In another computer-implemented technique, at least one task relationship and at least one corresponding relationship constraint of at least two components of at least one computing system are determined, the at least one task relationship is consolidated with the at least one corresponding relationship constraint, the at least one consolidated task relationship and relationship constraint are used to generate at least one deployment descriptor, and the at least one deployment descriptor is stored, wherein the at least one deployment descriptor is usable for subsequent reuse and processing in managing the at least one distributed computing system and/or one or more different computing systems.
Owner:IBM CORP
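The observe-consolidate-create pipeline in the first technique can be sketched as follows; the interaction record fields, the minimum-count threshold, and the specification format are all illustrative assumptions:

```python
from collections import Counter

def consolidate(interactions, min_count=2):
    """Collapse observed component interactions into patterns by counting
    (source, target, operation) triples, then emit one relationship
    specification per pattern seen at least min_count times."""
    patterns = Counter((i["source"], i["target"], i["op"]) for i in interactions)
    return [
        {"from": s, "to": t, "relationship": op, "observations": n}
        for (s, t, op), n in patterns.items()
        if n >= min_count
    ]
```

The resulting specifications could then feed a management layer that knows, for example, that the web tier depends on the database tier.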

Method for controlling four-rotor aircraft system based on human-computer interaction technology

Inactive · CN102219051A · Flexible operation · Precise flight positioning operation · Image analysis · Actuated automatically · Attitude control · Control signal
A method for controlling a four-rotor aircraft system based on human-computer interaction technology, belonging to the field of intelligent flying robots, is characterized in that an operator can control the four-rotor aircraft by gestures. Flight attitude control is accomplished by the cooperative running of the four rotors, which are arranged at the geometric vertices of the aircraft, with three degrees of freedom: yaw angle, pitch angle and roll angle. The human-computer interaction technology mainly utilizes OpenCV and OpenGL. The system captures depth images of the operator's hands through a depth camera; the depth images are analyzed and processed by the computer to obtain gesture information and generate the corresponding control signals; the control signals are then sent through a radio communication device to the aircraft for execution, accomplishing the mapping from the motion state of the operator's hands to the motion state of the aircraft and completing the gesture control. Thanks to the long control distance and the intuitive correspondence between gestures and aircraft motion, the gesture control can be applied to dangerous experiments and to industrial production processes that are difficult to execute.
Owner:BEIJING UNIV OF TECH
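The mapping from gesture information to control signals might be sketched as a deadzone-plus-gain function of hand displacement from a neutral pose; the axes, gains, and normalised hand coordinates are all illustrative assumptions, not details from the patent:

```python
def gesture_to_command(hand_pos, neutral, deadzone=0.05, gain=30.0):
    """Map a tracked hand position (normalised x, y, depth) to roll,
    pitch and throttle commands; displacements inside the deadzone
    are ignored so a resting hand produces no motion."""
    def axis(delta):
        return 0.0 if abs(delta) < deadzone else gain * delta
    dx, dy, dz = (h - n for h, n in zip(hand_pos, neutral))
    return {"roll": axis(dx), "pitch": axis(dy), "throttle": axis(-dz)}
```

In a full system the hand position would come from the depth-camera pipeline (e.g. OpenCV segmentation of the depth image) and the resulting command dictionary would be serialised onto the radio link.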