
118 results about "Action prediction" patented technology

Display method, device, equipment and computer readable storage medium

Active · CN110460831A · Eliminate or avoid ghosting · Guaranteed display clarity · Image analysis · Geometric image transformation · Display device · Action prediction
The invention provides a display method, a device, equipment and a computer-readable storage medium. The method comprises the steps of: obtaining human eye information, and determining a gaze region and a non-gaze region on the display equipment according to the human eye information; determining the to-be-displayed content of the gaze region according to the movement parameters of a dynamic object in the gaze region, generating an image of the gaze region, and rendering the non-gaze region to obtain an image of the non-gaze region; and combining the image of the gaze region and the image of the non-gaze region into a combined image, and displaying the combined image on a display device. According to the display method provided by the invention, the gaze region and the non-gaze region of the display equipment are divided according to the human eye information; by identifying the dynamic object in the gaze region and predicting its action, the content to be displayed in the gaze region can be determined from the action prediction of the dynamic object, ghosting of the dynamic object during scene movement is eliminated or avoided, the display clarity of a dynamic picture is ensured, and the display performance of the equipment is improved.
Owner:BOE TECH GRP CO LTD +1
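The gaze/non-gaze split and the motion-based content prediction described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the tile size, the circular gaze region, and the linear motion extrapolation are all assumptions standing in for the patent's unspecified details.

```python
def split_regions(eye_gaze, screen_w, screen_h, radius):
    """Classify each screen tile as gaze or non-gaze by its distance
    to the estimated gaze point (hypothetical circular-region rule)."""
    gx, gy = eye_gaze
    gaze, non_gaze = [], []
    tile = 8  # assumed tile size in pixels
    for x in range(0, screen_w, tile):
        for y in range(0, screen_h, tile):
            cx, cy = x + tile / 2, y + tile / 2
            if (cx - gx) ** 2 + (cy - gy) ** 2 <= radius ** 2:
                gaze.append((x, y))
            else:
                non_gaze.append((x, y))
    return gaze, non_gaze

def predict_position(pos, velocity, dt):
    """Linear extrapolation of a dynamic object's position: one simple
    form of the action prediction used to pre-render the gaze region."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)
```

Rendering the predicted position of the dynamic object for the next frame, rather than its last observed position, is what removes the ghosting during scene movement.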

Method for recognizing actions on basis of deep feature extraction asynchronous fusion networks

The invention provides a method for recognizing actions on the basis of deep-feature-extraction asynchronous fusion networks. The method is built from coarse-grained-to-fine-grained networks, asynchronous fusion networks and the deep-feature-extraction asynchronous fusion networks. The method includes: inputting each short-term optical-flow stack of each spatial frame and each motion stream of the input video's appearance stream into the coarse-grained-to-fine-grained networks; integrating depth features over a plurality of action-class granularities; creating accurate feature representations; inputting the extracted features into the asynchronous fusion networks, which integrate information-stream features from different time points; acquiring the action-class prediction result of each stream; combining the different action prediction results with one another in the deep-feature-extraction asynchronous fusion networks; and determining the ultimate action-class label of the input video. The method has the advantages that deep-layer features can be extracted over multiple action-class granularities and integrated, accurate action representations can be obtained, complementary information across the multiple information streams can be utilized effectively by asynchronous fusion, and the action-recognition accuracy can be improved.
Owner:SHENZHEN WEITESHI TECH
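The final step above, combining per-stream action predictions into one label, can be sketched with a simple weighted late fusion. This is a plain stand-in for the patent's asynchronous fusion network, which it does not specify at this level; the equal-weight default and the score format are assumptions.

```python
def fuse_predictions(stream_scores, weights=None):
    """Late-fuse per-stream class-score vectors (e.g. appearance stream
    and motion stream) by weighted averaging."""
    n_streams = len(stream_scores)
    n_classes = len(stream_scores[0])
    if weights is None:
        weights = [1.0 / n_streams] * n_streams  # assumed equal weighting
    fused = [0.0] * n_classes
    for w, scores in zip(weights, stream_scores):
        for c, s in enumerate(scores):
            fused[c] += w * s
    return fused

def predict_label(fused_scores, labels):
    """Pick the action-class label with the highest fused score."""
    return labels[max(range(len(fused_scores)), key=fused_scores.__getitem__)]
```

A learned fusion network would replace the fixed weights with parameters trained jointly with the feature extractors, which is where the complementary information between streams is exploited.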

Multi-agent confrontation method and system based on dynamic graph neural network

The invention belongs to the field of reinforcement learning for multi-agent systems, and particularly relates to a multi-agent confrontation method and system based on a dynamic graph neural network. It aims to solve the problems that existing graph-neural-network-based multi-agent models train slowly and inefficiently and require substantial manual intervention in graph construction. The method comprises the following steps: obtaining an observation vector of each agent and applying a linear transformation to obtain an observation feature vector; calculating the connection relationships between adjacent agents and constructing a graph structure between the agents; computing an embedded representation of the graph structure between the agents in combination with the observation feature vectors; using the embedded representation to perform spatio-temporal parallel training of the action network's action predictions and the evaluation network's evaluations; and performing action prediction and action evaluation in multi-agent confrontation through the trained networks. The method establishes a more realistic graph relationship through pruning and realizes spatio-temporal parallel training using a fully connected neural network and positional encoding; the training efficiency is high and the effect is good.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI +1
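The graph-construction step, deciding which agents are connected before the embedding is computed, can be sketched with a distance-threshold pruning rule. The patent does not disclose its exact connection criterion, so the Euclidean threshold below is an assumption; it merely illustrates how pruning yields a sparse adjacency instead of a hand-crafted or fully connected graph.

```python
import math

def build_agent_graph(positions, max_dist):
    """Connect agents whose pairwise distance is at most max_dist,
    producing a pruned symmetric adjacency matrix (hypothetical rule)."""
    n = len(positions)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) <= max_dist:
                adj[i][j] = adj[j][i] = 1
    return adj
```

Because the adjacency is recomputed from the agents' current positions each step, the graph is dynamic: edges appear and disappear as agents move, which is what the embedding network then consumes.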

Video processing device, electronic equipment and computer readable storage medium

The invention provides a video processing device, electronic equipment and a computer-readable storage medium. The device comprises: a prompt display module, which displays prompt information for at least one specified action on a display screen; a video acquisition module, which acquires a video of the patient with a camera; an action prediction module, which inputs the video of the patient into a Parkinson's disease detection model, performs prediction to obtain the patient's action information, and sends the action information to the doctor's equipment; an information acquisition module, which acquires the patient's disease information; and a suggestion strategy module, which derives a suggested program-control strategy for the patient from the disease information and the action information and sends it to the doctor's equipment. With the device, the patient can be prompted to perform the specified action based on the prompt information, the corresponding action information is obtained from the video of the patient performing the specified action, and the doctor is helped to quantitatively understand the patient's local posture and movement performance during motion.
Owner:景昱医疗器械(长沙)有限公司 (Jingyu Medical Devices (Changsha) Co., Ltd.)
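The module pipeline above (prompt, record, predict, suggest) can be sketched as a small session object. The detection model and the transport to the doctor's equipment are stubbed out, and every name and payload format here is hypothetical; the sketch only shows how the modules hand data to one another.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentSession:
    """Minimal skeleton of the device workflow; the Parkinson's detection
    model and doctor-device transport are stubbed (assumed interfaces)."""
    prompts: list
    actions: list = field(default_factory=list)

    def show_prompt(self, idx):
        """Prompt display module: tell the patient which action to perform."""
        return f"Please perform: {self.prompts[idx]}"

    def predict_action(self, video_frames):
        """Action prediction module stub: a real model would output posture
        and movement metrics; here we only record the frame count."""
        info = {"frames": len(video_frames)}
        self.actions.append(info)
        return info

    def suggest_program(self, disease_info):
        """Suggestion strategy module: combine disease info with the recorded
        action info into a payload for the doctor's equipment."""
        return {"disease": disease_info, "actions": self.actions}
```

The point of the structure is that the suggestion strategy sees both inputs at once, so the program-control recommendation reflects the quantified movement performance, not the diagnosis alone.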

Method for predicting the acute joint toxicity of three pesticides to luminescent bacteria

The invention discloses a method for predicting the acute joint toxicity of three pesticides to luminescent bacteria, aiming to overcome the problems that conventional toxicological evaluation of acute joint toxicity requires a large testing workload and that methods for quantitative evaluation and prediction of acute joint toxic action are lacking. The method comprises the following steps: (1) pretesting, to confirm the testing concentrations of the different pesticides for the official tests, which in turn comprises: (a) preliminarily confirming the high, medium and low toxicity concentration ranges of each single pesticide toward the luminescent bacteria; (b) establishing a dose-effect equation y = f(x) (x ∈ [C, C']) for each single pesticide; and (c) confirming the testing concentrations of the different pesticides for the BBD tests; (2) confirming the testing schemes through a three-factor, three-level Box-Behnken design (BBD); (3) official testing, namely measuring the relative light-emission inhibition rates of the different test groups of luminescent bacteria; and (4) establishing a model to predict the acute joint toxicity of the three pesticides to the luminescent bacteria.
Owner:JILIN UNIV
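Step (2), the three-factor, three-level BBD, fixes which concentration combinations are actually tested. The standard coded Box-Behnken design for three factors can be generated as below; the three center-point replicates are an assumption (the patent does not state how many it uses).

```python
from itertools import combinations

def box_behnken_3(center_runs=3):
    """Coded (-1, 0, +1) Box-Behnken design for three factors: each pair
    of factors takes the four (+/-1, +/-1) combinations while the third
    factor is held at its center level, plus replicated center points."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0, 0, 0]
                run[i], run[j] = a, b
                runs.append(tuple(run))
    runs += [(0, 0, 0)] * center_runs  # assumed number of center replicates
    return runs
```

Each coded level (-1, 0, +1) is then mapped to a pesticide's low, medium or high concentration from the pretest, giving 15 runs instead of the 27 a full three-level factorial would require, which is the workload reduction the method targets.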