145 results about "Natural interaction" patented technology

Virtual learning environment natural interaction method based on multimode emotion recognition

The invention provides a natural interaction method for a virtual learning environment based on multimodal emotion recognition. The method acquires the expression, posture and voice information that characterizes a student's learning state and constructs multimodal emotion features from the color image, depth information, voice signal and skeleton information. Face detection, preprocessing and feature extraction are performed on the color and depth images, and facial expressions are classified by combining a support vector machine (SVM) with AdaBoost; the voice emotion information is preprocessed, its emotion features are extracted, and a hidden Markov model recognizes the voice emotion; the skeleton information is regularized to obtain body-posture representation vectors, and a multi-class SVM classifies the posture emotion. A quadrature-rule fusion algorithm fuses the three recognition results at the decision level, and the expression, voice and posture of a virtual intelligent agent are generated according to the fused result.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
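The decision-level fusion step can be pictured as combining the class-probability outputs of the three per-modality classifiers. The sketch below is a minimal illustration only: the patent's quadrature-rule weighting is not detailed in the abstract, so simple fixed confidence weights and an assumed label set are used.

```python
# Hypothetical decision-level fusion of facial, voice and posture classifiers.
# Weights and the emotion label set are assumed, not taken from the patent.
import numpy as np

EMOTIONS = ["neutral", "happy", "confused", "bored"]  # assumed label set

def fuse_decisions(p_face, p_voice, p_posture, weights=(0.4, 0.3, 0.3)):
    """Fuse per-modality class-probability vectors at the decision level."""
    probs = np.vstack([p_face, p_voice, p_posture])   # shape (3, n_classes)
    w = np.asarray(weights)[:, None]
    fused = (w * probs).sum(axis=0)                   # weighted sum across modalities
    fused /= fused.sum()                              # renormalize to a distribution
    return EMOTIONS[int(fused.argmax())], fused

# Example: each modality outputs a probability distribution over the emotions.
label, fused = fuse_decisions(
    p_face=[0.1, 0.7, 0.1, 0.1],
    p_voice=[0.2, 0.5, 0.2, 0.1],
    p_posture=[0.3, 0.4, 0.2, 0.1],
)
print(label, fused)
```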

Digital twin system of an intelligent production line

The invention relates to a digital twin system for an intelligent production line, comprising a physical space layer, an information layer and a virtual space layer. The physical space layer consists of the physical production line, intelligent sensing devices and an industrial control network. The information layer includes a data conversion module, a data analysis module and a production line information database. The virtual space layer runs on a variety of platforms and environments, including personal computers and handheld devices; driven by the production line information database, it uses a 3D visualization engine with online real-time and offline non-real-time rendering to generate a virtual production line consistent with the physical one, and provides multi-view visual display, natural interaction, state monitoring and other functions. Real-time state information of the physical production line is collected by the intelligent sensing devices and drives the 3D visualization engine to render a virtual production line model consistent with the physical line, thereby realizing a twin mirror of the virtual and physical production lines.
Owner:CHINA ELECTRONIC TECH GRP CORP NO 38 RES INST
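The information layer's role of turning raw sensor readings into state that drives the virtual line can be sketched as a small sense-convert-render loop. All class, field and function names below are hypothetical stand-ins, assuming a generic sensor packet format.

```python
# Minimal sketch of the information layer: convert raw sensor packets from the
# physical line into structured records that drive the virtual production line.
# Names and fields are assumptions for illustration only.
from dataclasses import dataclass
import time

@dataclass
class StationState:
    station_id: str
    spindle_rpm: float
    temperature_c: float
    timestamp: float

def convert(raw: dict) -> StationState:
    """Data-conversion module: normalize one raw sensor packet."""
    return StationState(
        station_id=raw["id"],
        spindle_rpm=float(raw["rpm"]),
        temperature_c=float(raw["temp"]),
        timestamp=time.time(),
    )

def update_virtual_line(state: StationState) -> None:
    """Stand-in for driving the 3D engine's twin model with the new state."""
    print(f"render {state.station_id}: rpm={state.spindle_rpm}, T={state.temperature_c} C")

# One cycle of the twin loop: sense -> convert -> render.
update_virtual_line(convert({"id": "CNC-01", "rpm": "1800", "temp": "41.5"}))
```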

Somatosensory-based natural interaction method for virtual mine

The invention discloses a somatosensory natural interaction method for a virtual mine. The method comprises the steps of using a Kinect to acquire a user's gesture signals, depth information and skeleton-point information; applying smoothing filters to the gesture images, depth information and skeleton information; segmenting the gesture images with a depth histogram, finding the gesture contour with an eight-neighborhood contour tracking algorithm, and recognizing static gestures; recognizing dynamic gestures by feature matching with an improved dynamic time warping algorithm based on the skeleton information; and using the gesture recognition result to trigger the corresponding Win32 instructions, transmitting them to the virtual reality engine, and mapping the instructions to the native keyboard and mouse operations of the virtual mine natural interaction system, so as to realize somatosensory interaction control of the virtual mine. The method can improve the naturalness and efficiency of human-machine interaction, strengthen the immersion and expressiveness of the virtual mine presentation, and effectively promote the application of virtual reality and somatosensory interaction technology in coal mines and other fields.
Owner:重庆雅利通实业有限公司
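The dynamic-gesture step matches skeleton-joint feature sequences against recorded templates. The sketch below shows plain dynamic time warping (DTW) over such sequences; the abstract's specific "improved" DTW variant is not described, so only the baseline algorithm is illustrated.

```python
# Sketch of dynamic-gesture matching with DTW over (T, D) skeleton-joint
# feature sequences. The improved variant from the patent is not shown.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW distance between two (T, D) joint-feature sequences."""
    a, b = np.asarray(seq_a, dtype=float), np.asarray(seq_b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])          # local frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_gesture(query, templates):
    """Return the template label with the smallest DTW distance to the query."""
    return min(templates, key=lambda name: dtw_distance(query, templates[name]))

# `templates` maps gesture names to recorded joint-position sequences.
```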

Multimodal input-based interactive method and device

The invention aims to provide a smart glasses device and method for interaction based on multimodal input that brings the interaction closer to users' natural interaction. The method comprises the steps of obtaining multiple pieces of input information from at least one of multiple input modules; performing comprehensive logic analysis on the input information to generate an operation command, the operation command having operation elements that at least include an operation object, an operation action and an operation parameter; and executing the corresponding operation on the operation object based on the operation command. The device obtains input information from multiple channels through the input modules, performs comprehensive logic analysis to determine the operation object, the operation action and the operation parameter and thereby generate the operation command, and executes the corresponding operation based on that command. The information is thus fused in real time, user interaction becomes closer to a natural-language interaction mode, and the user's interactive experience is improved.
Owner:HISCENE INFORMATION TECH CO LTD
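One way to picture the "comprehensive logic analysis" step is as resolving complementary channels (for example gaze, gesture and speech) into a single command with an object, an action and a parameter. The field names and the mapping rules below are assumed purely for illustration.

```python
# Hedged sketch: combine multimodal inputs into one operation command.
# Channel names, fields and rules are assumptions, not the patented logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OperationCommand:
    target: str                      # operation object, e.g. the item being looked at
    action: str                      # operation action, e.g. "zoom_in", "select"
    parameter: Optional[str] = None  # operation parameter, e.g. a spoken value

def analyze(gaze_target: str, gesture: str, speech: str) -> OperationCommand:
    """Resolve complementary channels into one command (illustrative rules)."""
    action = {"pinch_out": "zoom_in", "pinch_in": "zoom_out"}.get(gesture, "select")
    parameter = speech if speech else None
    return OperationCommand(target=gaze_target, action=action, parameter=parameter)

print(analyze(gaze_target="document_3", gesture="pinch_out", speech="150 percent"))
```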

Mine virtual reality training system based on immersion type input and output equipment

The invention discloses a mine virtual reality training system based on immersive input and output equipment. With the system, a trainee is fully immersed in the virtual scene and interacts with the virtual training scene through natural means such as gestures, walking and running, so as to complete the preset training program. The system consists of output equipment such as a head-mounted display, input equipment such as an omnidirectional treadmill and a motion capture device, a computer, and matching training software. The head-mounted display provides a fully immersive visual experience, while the omnidirectional treadmill and motion capture device map the trainee's real movements onto a virtual avatar in the virtual scene, allowing the system to be operated through natural interaction. Compared with traditional training methods, the system lets the trainee be immersed, at any location, in a virtual mine environment close to reality and gain training experience equivalent to being on site, finally achieving an effect comparable to on-site training.
Owner:CHINA UNIV OF MINING & TECH (BEIJING)
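The motion-mapping step (treadmill and motion capture driving the avatar) can be reduced to a small kinematic update per frame. The function below is an illustrative sketch with an assumed interface; a real engine would expose its own locomotion hooks.

```python
# Illustrative mapping of omnidirectional-treadmill readings onto a virtual
# trainee avatar. The interface shown here is hypothetical.
import math

def treadmill_to_avatar(speed_mps: float, heading_deg: float, dt: float,
                        avatar_pos: list[float]) -> list[float]:
    """Advance the avatar's position from treadmill speed and heading."""
    heading = math.radians(heading_deg)
    dx = speed_mps * math.cos(heading) * dt
    dz = speed_mps * math.sin(heading) * dt
    return [avatar_pos[0] + dx, avatar_pos[1], avatar_pos[2] + dz]

# One 20 ms frame: walking at 1.2 m/s toward a 45-degree heading.
print(treadmill_to_avatar(1.2, 45.0, 0.02, [0.0, 0.0, 0.0]))
```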

Mining operation multi-operation realization method based on virtual reality and augmented reality

The invention discloses a method for realizing multiple mining operations based on virtual reality and augmented reality, and belongs to the technical field of virtual reality and augmented reality. The method comprises two modes, virtual reality and augmented reality. In the virtual reality mode, materials and models in the scene can be selected and replaced, the scene can be explored, models can be moved and placed at will, videos can be embedded, two-dimensional codes can be generated, and triggers enable natural interaction, voice interaction and the like. In the augmented reality mode, models can be selected, voice can be played, the operating dynamics of a model can be demonstrated, and the model's rotation stop, screen capture and function expansion can be controlled. In both modes, several interaction methods are available, including voice control, gesture control and keyboard-and-mouse control. Applied to virtual simulation of mining operations, the method can be used to train mine-lot mining workers and mining engineering students, reducing training costs and improving workers' skills, and provides advanced and rapid guidance for production, construction and scientific research.
Owner:SHANDONG UNIV OF SCI & TECH
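Since both modes route voice, gesture and keyboard/mouse input to the same scene operations, the routing itself can be sketched as a small dispatch table. The command names and bindings below are purely illustrative assumptions.

```python
# Small sketch of routing three interaction channels to shared scene
# operations. All operation and event names are illustrative.
OPERATIONS = {
    "select_model": lambda: print("model selected"),
    "play_voice": lambda: print("narration playing"),
    "stop_rotation": lambda: print("model rotation stopped"),
    "screenshot": lambda: print("screen captured"),
}

CHANNEL_BINDINGS = {
    ("voice", "stop"): "stop_rotation",
    ("gesture", "tap"): "select_model",
    ("keyboard", "F12"): "screenshot",
}

def dispatch(channel: str, event: str) -> None:
    """Map a (channel, event) pair to its bound operation, if any."""
    op = CHANNEL_BINDINGS.get((channel, event))
    if op:
        OPERATIONS[op]()

dispatch("voice", "stop")
dispatch("keyboard", "F12")
```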

A natural interaction method of virtual learning environment based on speech emotion recognition

The invention relates to a natural interaction method for a virtual learning environment based on speech emotion recognition, belonging to the field of deep learning. The method comprises the following steps: 1) collecting the speech signals of student users through a Kinect, then resampling, frame-wise windowing and silence removal to obtain short-time single-frame signals; 2) applying a fast Fourier transform to each frame to obtain frequency-domain data, computing its power spectrum, and passing it through a Mel filter bank to obtain a Mel spectrogram; 3) feeding the Mel spectrogram features into a convolutional neural network, performing convolution and pooling operations, and feeding the matrix of the last downsampling layer into the fully connected layer to form an output feature vector; 4) compressing the output features and feeding them into a bidirectional long short-term memory network; 5) feeding the output features into a support vector machine for classification and outputting the classification result; 6) feeding the classification result back to the virtual learning system for virtual learning environment interaction. The invention drives learners to adjust their learning state and enhances the practicability of the virtual learning environment.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
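Steps 1 and 2 amount to standard log-Mel feature extraction. The sketch below uses librosa; the frame length, hop size, Mel-band count and silence threshold are assumed values for illustration and are not taken from the patent.

```python
# Sketch of steps 1-2: resampling, rough silence removal, FFT/power spectrum
# and Mel filter bank, yielding a log-Mel spectrogram. Hyperparameters assumed.
import librosa
import numpy as np

def mel_spectrogram(wav_path: str, sr: int = 16000) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=sr)               # load and resample
    y, _ = librosa.effects.trim(y, top_db=30)           # rough silence removal
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=400, hop_length=160, n_mels=64  # 25 ms frames, 10 ms hop
    )
    return librosa.power_to_db(mel)                     # log-Mel spectrogram

# The resulting (64, T) array is what the CNN front end in step 3 would
# consume before the BiLSTM and SVM stages.
```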

An intelligent maintenance auxiliary system and method for airborne equipment

The invention discloses an intelligent maintenance assistance system for airborne equipment. The system comprises an interaction module, an image detection and recognition module, a maintenance information management module, an analysis module and an augmented reality display module. The interaction module obtains user input, projects the UI and handles interaction logic; the image detection and recognition module detects characters and specific patterns in the image and recognizes their content; the analysis module performs logic judgment and generates guidance text or 3D images; the maintenance information management module sends search requests to the cloud database server; and the augmented reality display module locates the target position in the real environment within the user's field of view and superimposes the guidance information at that position through augmented reality technology. By organically combining augmented reality with intelligent maintenance assistance, maintenance personnel are guided to service airborne equipment in a natural interaction mode, improving the operability and learnability of maintenance work.
Owner:SOUTH CHINA UNIV OF TECH
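The way the five modules hand work to one another can be sketched as a short pipeline: recognize a part label, query the cloud record, build guidance, and anchor it in the view. Every function below is a hypothetical stand-in, not an API from the patent.

```python
# Hypothetical end-to-end flow across the modules described above.
def recognize_labels(frame) -> list[str]:
    """Image detection/recognition module: find part labels in the camera frame."""
    return ["PN-4471"]                       # placeholder OCR result

def query_cloud(part_number: str) -> dict:
    """Maintenance information management module: fetch the repair record."""
    return {"part": part_number, "steps": ["remove cover", "replace fuse"]}

def build_guidance(record: dict) -> list[str]:
    """Analysis module: turn the record into step-by-step guidance text."""
    return [f"Step {i + 1}: {s}" for i, s in enumerate(record["steps"])]

def overlay(guidance: list[str], anchor: str) -> None:
    """AR display module: pin the guidance at the located target position."""
    print(f"anchored at {anchor}: {guidance}")

frame = object()                             # stands in for a camera image
for label in recognize_labels(frame):
    overlay(build_guidance(query_cloud(label)), anchor=label)
```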

HoloLens-based command post cooperative work electronic sand table system

Publication: CN107479705A (Active)
The invention discloses a HoloLens-based electronic sand table system for cooperative work in a command post. The system comprises a hardware support layer, a resource management layer, an application support layer and a service function layer. The hardware support layer collects a commander's interactive information through the system hardware and transmits it to the resource management layer; the resource management layer manages the resource data in the system and provides system management, resource control and data access interfaces to the application support layer; the application support layer performs holographic image computation and provides application development and runtime environment support for the service function layer; and the service function layer realizes holographic situation display, multi-role combatant interaction, natural interaction, cooperative plotting, discussion process control and holographic image generation. The system supports holographic display of the whole electronic sand table scene and of military science elements; commanders perform situation plotting in the cooperative holographic sand table environment, role interaction between commanders inside the command post is supported, and environment and situation information is collected to enable role interaction with combatants.
Owner:THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP
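The four-layer flow described above can be condensed into a minimal pass-through sketch: hardware input moves up through resource management and application support to the service functions. The layer interfaces and payload fields below are assumptions for illustration.

```python
# Compact sketch of the four-layer flow: hardware -> resource management ->
# application support -> service functions. Interfaces are assumed.
class HardwareSupportLayer:
    def collect(self) -> dict:
        return {"commander": "C1", "gesture": "circle", "position": (12.0, 8.5)}

class ResourceManagementLayer:
    def __init__(self):
        self.store = []                      # resource/data management
    def ingest(self, event: dict) -> dict:
        self.store.append(event)
        return event

class ApplicationSupportLayer:
    def compute_hologram(self, event: dict) -> dict:
        return {**event, "hologram": f"plot@{event['position']}"}

class ServiceFunctionLayer:
    def render(self, frame: dict) -> None:
        print(f"{frame['commander']} plots {frame['hologram']} on the shared sand table")

hw, rm, app, svc = (HardwareSupportLayer(), ResourceManagementLayer(),
                    ApplicationSupportLayer(), ServiceFunctionLayer())
svc.render(app.compute_hologram(rm.ingest(hw.collect())))
```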