86 results about "Spatial perception" patented technology

Gaze-contingent Display Technique

A gaze-contingent display technique for providing a human viewer with an enhanced three-dimensional experience without requiring stereoscopic viewing aids. Methods are shown which allow users to view plenoptic still images or plenoptic videos incorporating gaze-contingent refocusing operations in order to enhance spatial perception. Methods are also shown which allow the use of embedded markers in a plenoptic video feed signifying a change of scene, incorporating initial depth-plane settings for each such scene. Methods are also introduced which allow a novel mode of transitioning between different depth planes, wherein the user's experience is optimized in such a way that these transitions trick the human eye into perceiving enhanced depth. This disclosure also introduces a display device comprising gaze-contingent refocusing capability such that depth perception by the user is significantly enhanced compared to the prior art. This disclosure further comprises a non-transitory computer-readable medium on which are stored program instructions that, when executed by a processor, cause the processor to perform operations relating to timing how long the user's gaze is fixated on each of a plurality of depth planes and making a refocusing operation contingent on a number of parameters.
Owner:VON & ZU LIECHTENSTEIN MAXIMILIAN RALPH PETER
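
A minimal sketch of the fixation-timing logic described in the abstract, assuming a hypothetical `GazeContingentRefocuser` class and an arbitrary dwell threshold; the patent's actual parameters and transition handling are not specified here.

```python
import time

class GazeContingentRefocuser:
    """Times how long the gaze dwells on each depth plane and triggers
    a refocus once the dwell exceeds a threshold (illustrative only)."""

    def __init__(self, dwell_threshold_s=0.25):
        self.dwell_threshold_s = dwell_threshold_s  # assumed value
        self.current_plane = None   # plane the gaze is on right now
        self.dwell_start = None     # when the gaze landed on it
        self.focused_plane = None   # plane the display renders in focus

    def update(self, gazed_plane, now=None):
        """Feed one gaze sample per frame; returns the plane to focus."""
        now = time.monotonic() if now is None else now
        if gazed_plane != self.current_plane:
            # Gaze moved to a different depth plane: restart the dwell timer.
            self.current_plane = gazed_plane
            self.dwell_start = now
        elif (self.focused_plane != gazed_plane
              and now - self.dwell_start >= self.dwell_threshold_s):
            # Sustained fixation: make the refocus contingent on dwell time.
            self.focused_plane = gazed_plane
        return self.focused_plane
```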

Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS

The invention discloses a real-time monitoring video and three-dimensional scene fusion method based on a three-dimensional GIS (Geographic Information System), belonging to the technical field of three-dimensional GIS and comprising the following steps: S1, inputting model data, checking the textures and triangular-patch counts of the manually modeled data using SuperMap iDesktop software, removing duplicate points, converting the format to generate a model dataset, and storing the model dataset in a database; performing root-node combination and texture compression on the original OSGB-format data of the oblique photogrammetry model; and S2, converting the model dataset and the oblique OSGB slices into a three-dimensional slice cache in S3M format. The live-action fusion method disclosed by the invention is oriented to the public security and smart city fields; it avoids the application limitations of traditional two-dimensional maps and monitoring video, overcomes the fragmentation of monitoring video pictures, enhances the spatial perception of video monitoring, improves to a certain extent the display performance of multi-channel real-time video integration in a three-dimensional scene, and improves the real-time performance of the three-dimensional scene. The method can be widely applied in public security and smart city fields with video-intensive and GIS-intensive services.
Owner:NANJING GUOTU INFORMATION IND
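
The preprocessing in steps S1 and S2 is performed in SuperMap iDesktop rather than in code; the sketch below only mirrors the sequence of operations with placeholder functions to make the data flow explicit, and every function name is an assumption.

```python
# Placeholder pipeline: each step stands in for an iDesktop operation.
def check_model(model):
    """Validate textures and triangular-patch counts of the modeled data."""
    return model

def remove_duplicate_points(model):
    """Drop repeated points before format conversion."""
    return model

def to_model_dataset(model):
    """Convert format and produce a model dataset for database storage."""
    return {"dataset": model}

def combine_roots_and_compress(osgb_tiles):
    """Root-node combination and texture compression for oblique OSGB data."""
    return osgb_tiles

def to_s3m_cache(dataset, osgb_tiles):
    """Step S2: convert both inputs into an S3M three-dimensional slice cache."""
    return {"s3m_cache": (dataset, osgb_tiles)}

# Step S1 on the manually modeled data, then S2 over both data sources.
dataset = to_model_dataset(remove_duplicate_points(check_model("model")))
cache = to_s3m_cache(dataset, combine_roots_and_compress("osgb"))
```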

Device, system and method for training upper and lower limbs

Active · CN111407590A · Effects: increases interest in training; overcomes the defect of a single training angle · Technologies: chiropractic devices; movement coordination devices · Concepts: information processing; spatial perception
The invention discloses a device, system and method for training the upper and lower limbs. The system comprises the training device, information processing equipment connected to the training device, and display equipment connected to the information processing equipment. The system can realize highly interactive active-passive cooperative rehabilitation training of the upper and lower limbs under the guidance of a virtual-reality three-dimensional training scene, achieving closed-loop spatial-perception motion feedback from the eyes to the brain, then to the upper limbs and finally to the lower limbs, which helps improve the patient's training participation. The system offers four modes, constant-speed, passive, power-assisted and active movement, enriching the movement scenarios. In addition, the human-machine interaction force can be estimated from the current change of the lower-limb motor, and torque feedback adjustment of the lower-limb motor is controlled according to this interaction force, realizing active compliance control.
Owner:XI'AN ZHENTAI INTELLIGENT TECH CO LTD
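
A hedged sketch of the current-based interaction estimate mentioned at the end of the abstract: the torque constant, model current and admittance gain below are illustrative assumptions, not values from the patent.

```python
def interaction_torque(i_measured, i_model, kt=0.08):
    """Estimate the patient's interaction torque (N*m) as the torque
    constant times the current the motor draws beyond what the no-load
    motion model predicts. kt is an assumed motor torque constant."""
    return kt * (i_measured - i_model)

def compliant_speed_command(v_ref, tau_interaction, admittance=0.5):
    """Active compliance: raise the speed command when the patient pushes
    along the motion (positive torque), lower it when they resist."""
    return v_ref + admittance * tau_interaction

# Example: motor draws 1.9 A where the model predicts 1.5 A.
tau = interaction_torque(1.9, 1.5)            # 0.032 N*m of patient effort
v_cmd = compliant_speed_command(20.0, tau)    # adjusted speed command
```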

Video frame processing method and device, computer readable medium and electronic equipment

The embodiment of the invention provides a video frame processing method and device, a computer-readable medium and electronic equipment. The video frame processing method comprises: calculating initial temporal perception information and/or initial spatial perception information for each video frame in a to-be-processed video; obtaining the initial temporal perception information and/or initial spatial perception information of a predetermined number of video frames preceding each video frame in the to-be-processed video; determining the final temporal perception information of each video frame from its initial temporal perception information and that of the predetermined number of preceding frames; and/or determining the final spatial perception information of each video frame from its initial spatial perception information and that of the predetermined number of preceding frames. By taking the motion between video frames into account, the technical scheme of this embodiment ensures that the obtained TI (temporal information) and/or SI (spatial information) of the video frames conforms to an objective description.
Owner:TENCENT TECH (SHENZHEN) CO LTD +1
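
One plausible reading of the scheme, sketched with the standard ITU-T P.910 definitions of SI and TI and a simple preceding-frames average standing in for the patent's combination rule (the exact rule is not given in the abstract).

```python
import numpy as np
from scipy import ndimage

def spatial_information(frame):
    """SI of one luma frame: std dev of the Sobel gradient magnitude."""
    f = frame.astype(np.float64)
    gx, gy = ndimage.sobel(f, axis=1), ndimage.sobel(f, axis=0)
    return float(np.hypot(gx, gy).std())

def temporal_information(frame, prev_frame):
    """TI between consecutive luma frames: std dev of the pixel difference."""
    return float((frame.astype(np.float64) - prev_frame.astype(np.float64)).std())

def finalize(initial_values, window=5):
    """Final value per frame: average of its initial value with the
    initial values of up to `window` preceding frames (assumed rule)."""
    return [float(np.mean(initial_values[max(0, i - window):i + 1]))
            for i in range(len(initial_values))]
```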

Image enhancement method, model training method and equipment

Active · CN113066017A · Effects: realizes the mapping; solves the problem of lacking spatial information · Technologies: image enhancement; image analysis · Concepts: pattern recognition; image extraction
The embodiment of the invention discloses an image enhancement method, a model training method and equipment, applicable to image processing within the field of artificial intelligence. The method comprises the following steps: extracting features from an input image through a first neural network layer to obtain a first feature; performing pixel classification and image classification on the first feature through a second neural network layer and a third neural network layer to generate first classification information and second classification information; obtaining a target lookup table based on the first classification information, the second classification information and a spatial-perception three-dimensional lookup table (3D LUT), the spatial-perception 3D LUT being built according to each image category and each pixel category; and finally obtaining an enhanced image from the input image and the target lookup table. Compared with a traditional 3D LUT, the method improves processing capability and avoids the inaccurate results (such as locally wrong colors and artifacts) that traditional-3D-LUT-based enhancement methods easily produce owing to their limited information.
Owner:HUAWEI TECH CO LTD
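
A minimal sketch of the lookup stage, assuming a bank of basis LUTs blended by the image-classification weights and a nearest-bin lookup; the patent additionally conditions on pixel classes and presumably interpolates, which is omitted here for brevity.

```python
import numpy as np

def blend_luts(basis_luts, image_weights):
    """basis_luts: (K, S, S, S, 3) bank, one LUT per image class;
    image_weights: (K,) soft classification scores. Returns one LUT."""
    return np.tensordot(image_weights, basis_luts, axes=1)

def apply_lut(image, lut):
    """Map each RGB pixel (values in [0, 1]) through the LUT by
    nearest-bin lookup; real systems interpolate trilinearly."""
    s = lut.shape[0]
    idx = np.clip(np.rint(image * (s - 1)).astype(int), 0, s - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Example: 4 image classes, a 17-bin LUT, one 2x2 image.
luts = np.random.rand(4, 17, 17, 17, 3)
weights = np.array([0.7, 0.2, 0.1, 0.0])
out = apply_lut(np.random.rand(2, 2, 3), blend_luts(luts, weights))
```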

Application pre-loading method

An embodiment of the invention relates to an application pre-loading method. The method comprises: a user terminal receiving a user's selection operation on the user interface of an interface application; determining the icon data of a first application icon of a first interface application in the spatial perception region corresponding to the selection operation, wherein the icon data of the first application icon carries first identifier information; determining from the first identifier information whether the application to which the first interface application points is a homegrown application or a third-party application, wherein an open data interface exists between a homegrown application and the interface application, while no data interface exists between a third-party application and the interface application; when the application to which the first interface application points is a first homegrown application, the first interface application obtaining the name of the first homegrown application and generating a message-update request to obtain the latest push message of the first homegrown application; and the first interface application displaying the name of the first homegrown application and the latest push message through the interface application's interface.
Owner:BEIJING BORUI TONGYUN TECH CO LTD
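
An illustrative sketch of the dispatch step; the identifier value, the icon fields and the push-message helper are hypothetical, since the abstract does not specify the data formats.

```python
HOMEGROWN = "homegrown"   # assumed identifier value for homegrown apps

def handle_selection(icon):
    """icon: dict with 'identifier' and 'name' keys (hypothetical layout).
    Homegrown apps expose an open data interface to the interface
    application; third-party apps do not, so no message is fetched."""
    if icon["identifier"] == HOMEGROWN:
        message = latest_push_message(icon["name"])
        return {"name": icon["name"], "message": message}
    return {"name": icon["name"], "message": None}

def latest_push_message(app_name):
    """Stub for the message-update request over the open data interface."""
    return f"latest push for {app_name}"

print(handle_selection({"identifier": "homegrown", "name": "Mail"}))
```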

Semantic vision positioning method and device based on multi-modal graph convolutional network

Active · CN111783457A · Effects: accurate acquisition; improves the performance of semantic visual localization tasks · Technologies: semantic analysis; character and pattern recognition · Concepts: spatial perception; multilayer perceptron
The invention provides a semantic visual positioning method and device based on a multi-modal graph convolutional network. The method comprises the steps of: obtaining an input picture and a corpus description; extracting multi-scale visual features of the input picture using a convolutional neural network, and encoding and embedding spatial coordinate information to obtain spatial-perception visual features; parsing the corpus description to construct a semantic structure graph, encoding each node's word vector in the semantic structure graph, and learning graph-node semantic features through a multilayer perceptron; fusing the spatial-perception visual features and the graph-node semantic features to obtain multi-modal features for each node in the semantic structure graph; propagating relationship information among the nodes of the semantic structure graph through a graph convolutional network, learning visual-semantic relationships under the guidance of the semantic relationships; and performing semantic visual position reasoning to obtain the visual position of the semantic information. The method combines contextual semantic information when processing ambiguous semantic elements, and can use semantic relationship information to guide visual positioning.
Owner:BEIJING SHENRUI BOLIAN TECH CO LTD +1
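
Two steps from the pipeline, sketched in isolation: appending normalized box coordinates to visual features (one simple spatial encoding; the patent may use another), and one symmetric-normalized graph-convolution update over the semantic structure graph. Shapes and the fusion rule are assumptions.

```python
import numpy as np

def spatial_perception_features(visual_feats, boxes, img_w, img_h):
    """visual_feats: (N, D); boxes: (N, 4) as (x1, y1, x2, y2).
    Embed spatial coordinates by appending them, normalized by the
    image size, to the visual features."""
    coords = boxes / np.array([img_w, img_h, img_w, img_h], dtype=np.float64)
    return np.concatenate([visual_feats, coords], axis=1)

def gcn_layer(node_feats, adj, weight):
    """One propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ a @ d_inv_sqrt @ node_feats @ weight)
```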

Gesture recognition method and system based on skeleton

The invention discloses a skeleton-based gesture recognition method and system. The gesture recognition method comprises the steps of: performing data augmentation on an acquired original gesture skeleton sequence to be recognized; extracting the motion features between skeleton nodes within each frame and spatial motion features at different scales, and obtaining a first dynamic gesture prediction label using a spatial perception network; extracting the motion features between skeleton nodes of adjacent frames and temporal motion features at different scales, and obtaining a second dynamic gesture prediction label using a short-term temporal perception network; extracting the motion features between skeleton nodes of non-adjacent frames and temporal motion features at different scales, and obtaining a third dynamic gesture prediction label using a long-term temporal perception network; and, from the obtained dynamic gesture prediction labels, outputting a final gesture prediction label using a space-time multi-scale chain network model. By optimizing the individual branches in a targeted manner, the invention can improve the overall recognition efficiency and recognition accuracy.
Owner:SHANDONG UNIV
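
A sketch of the three branch inputs and the final fusion, with simple score averaging standing in for the space-time multi-scale chain network model (an assumption; the patent's fusion rule is not detailed in the abstract).

```python
import numpy as np

def intra_frame_motion(skeleton):
    """skeleton: (T, J, C) joint coordinates over time. Spatial-branch
    input: pairwise joint offsets within each frame, shape (T, J, J, C)."""
    return skeleton[:, :, None, :] - skeleton[:, None, :, :]

def inter_frame_motion(skeleton, stride=1):
    """Temporal-branch input: joint offsets between frames `stride`
    apart; stride=1 for the short-term branch, larger values for the
    long-term (non-adjacent frames) branch."""
    return skeleton[stride:] - skeleton[:-stride]

def fuse(*branch_scores):
    """Final prediction label from the three branch score vectors."""
    return int(np.argmax(np.mean(np.stack(branch_scores), axis=0)))
```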