
599 results about "Human behavior" patented technology

Human behavior is the response of individuals or groups of humans to internal and external stimuli. It refers to the array of every physical action and observable emotion associated with individuals, as well as the human race. While specific traits of one's personality and temperament may be more consistent, other behaviors change as one moves from birth through adulthood. In addition to being shaped by age and genetics, behavior, driven in part by thoughts and feelings, offers insight into the individual psyche, revealing among other things attitudes and values. Social behavior, a subset of human behavior, reflects the considerable influence of social interaction and culture. Additional influences include ethics, social environment, authority, persuasion and coercion.

Human behavior recognition method integrating space-time dual-network flow and attention mechanism

The invention discloses a human behavior recognition method that integrates a space-time dual-stream network with an attention mechanism. The method comprises the steps of: extracting motion optical-flow features and generating an optical-flow feature image; constructing independent temporal-stream and spatial-stream networks to generate two high-level semantic feature sequences with significant structural properties; decoding the temporal-stream semantic feature sequence to output a temporal-stream visual feature descriptor and an attention saliency feature sequence, while the spatial stream outputs a visual feature descriptor and the label probability distribution of each frame in a video window; calculating an attention confidence score for each frame along the time dimension, weighting the per-frame label probability distributions of the spatial stream, and selecting the key frames of the video window; and using a softmax classifier to decide the human behavior category of the video window. Compared with the prior art, the method can effectively focus on the key frames of the appearance images in the original video and, at the same time, select the spatially salient region features of those key frames with high recognition accuracy.
Owner:NANJING UNIV OF POSTS & TELECOMM
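The final decision step described above (per-frame attention scores weighting per-frame label distributions, then a softmax-style decision) can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names and toy values are assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_weighted_decision(frame_probs, attention_scores):
    """Weight each frame's label distribution by its attention
    confidence coefficient, then classify the whole video window.

    frame_probs: (T, C) per-frame label probability distributions
    attention_scores: (T,) raw attention confidence scores
    Returns (predicted class index, window-level distribution)."""
    weights = softmax(attention_scores)   # normalize over the time dimension
    window_probs = weights @ frame_probs  # (C,) attention-weighted average
    return int(np.argmax(window_probs)), window_probs
```

A frame with a dominant attention score (a "key frame") then dominates the window-level label distribution.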

Method and system for segmenting people in a physical space based on automatic behavior analysis

The present invention is a method and system for segmenting a plurality of persons in a physical space based on automatic behavior analysis of the persons. In a preferred embodiment, the behavior analysis can comprise path analysis as one of the characterization methods. The present invention applies segmentation criteria to the output of the video-based behavior analysis and assigns a segmentation label to each of the persons during a predefined window of time. In addition to the behavioral characteristics, in another exemplary embodiment the present invention can also utilize other types of visual characterization, such as demographic analysis, or additional input sources, such as sales data, to segment the plurality of persons. The present invention captures a plurality of input images of the persons in the physical space by a plurality of means for capturing images, and processes the input images in order to understand behavioral characteristics of the persons, such as shopping behavior, for the segmentation purpose. The processing is based on a novel usage of a plurality of computer vision technologies to analyze the visual characterization of the persons from the input images. The physical space may be a retail space, and the persons may be customers in the retail space.
Owner:VIDEOMINING CORP
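The core idea of applying segmentation criteria to path-analysis output can be illustrated with a toy rule. The thresholds and segment labels below are invented for illustration only; the patent does not specify them.

```python
def segment_person(path_length_m: float, dwell_time_s: float) -> str:
    """Assign a segmentation label from path-analysis measurements
    taken over a predefined window of time.

    path_length_m: total length of the person's path, in meters
    dwell_time_s:  time spent in the space, in seconds
    The criteria and labels here are hypothetical examples."""
    if dwell_time_s >= 300 and path_length_m >= 50:
        return "explorer"     # long stay, wide-ranging path
    if dwell_time_s >= 300:
        return "dweller"      # long stay, localized path
    return "passer-by"        # short visit
```

In a real deployment these criteria would be tuned to the retail space and possibly combined with demographic or sales data, as the abstract notes.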

Distributed cognitive technology for intelligent emotional robot

The invention provides a distributed cognition technology for an intelligent emotional robot, applicable to multi-channel human-computer interaction in service robots, household robots, and the like. In the human-computer interaction process, cognition of the environment and of people is distributed across multiple channels so that the interaction is more harmonious and natural. The technology comprises four parts, namely: 1) a language comprehension module, which gives the robot the ability to understand human language through word segmentation, part-of-speech tagging, keyword extraction, and the like; 2) a vision comprehension module, which comprises related vision functions such as face detection, feature extraction, feature identification, and human behavior comprehension; 3) an emotion cognition module, which extracts relevant information from speech, facial expression and touch, analyzes the user emotion contained in that information, synthesizes a comparatively accurate emotion state, and thereby lets the robot recognize the user's current emotion; and 4) a physical-quantity cognition module, which lets the robot understand the environment and its own state as the basis for self-adjustment.
Owner:UNIV OF SCI & TECH BEIJING
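The emotion cognition module fuses cues from several channels (speech, expression, touch) into one emotion state. A simple way to realize such a fusion is a confidence-weighted average of per-channel emotion distributions; the patent does not disclose its fusion rule, so the scheme below is an assumed sketch with hypothetical names.

```python
import numpy as np

def fuse_emotion(channel_probs, channel_weights):
    """Fuse per-channel emotion estimates into one emotion-state
    distribution via confidence-weighted averaging.

    channel_probs:   dict of channel name -> (E,) probability vector
                     over E emotion categories
    channel_weights: dict of channel name -> confidence weight"""
    total = sum(channel_weights.values())
    fused = sum(channel_weights[k] * channel_probs[k] for k in channel_probs)
    return fused / total  # still sums to 1 if the inputs do
```

A channel judged more reliable in the current context (e.g. facial expression in good lighting) would receive a larger weight.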

Human behavior recognition method based on attention mechanism and 3D convolutional neural network

The invention discloses a human behavior recognition method based on an attention mechanism and a 3D convolutional neural network. A 3D convolutional neural network is constructed whose input layer has two channels: the original grayscale image and an attention matrix. A 3D CNN model for recognizing human behavior in video is built and an attention mechanism is introduced: the distance between two consecutive frames is calculated to form the attention matrix, and the attention matrix together with the original human behavior video sequence forms the two channels fed into the constructed 3D CNN, where the convolution operation extracts vital features from the visual focus area. Meanwhile, the 3D CNN structure is optimized: a Dropout layer is added to randomly freeze some of the network's connection weights, and the ReLU activation function is employed to improve network sparsity. This mitigates the jump in computational load and the vanishing gradients caused by the added dimension and the increased number of layers, prevents overfitting on small data sets, improves the network's recognition accuracy, and reduces time cost.
Owner:NORTH CHINA ELECTRIC POWER UNIV (BAODING) +1
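The two-channel input construction described above (original grayscale frames plus a frame-distance attention matrix) can be sketched in a few lines. This is a minimal interpretation using absolute inter-frame difference as the "distance"; the exact distance measure is an assumption.

```python
import numpy as np

def build_two_channel_input(frames):
    """Build the two-channel 3D CNN input from a grayscale clip.

    frames: (T, H, W) grayscale video clip.
    Channel 0: the original grayscale frames.
    Channel 1: an attention matrix formed from the absolute
    difference between consecutive frames (motion focus).
    Returns an array of shape (T, 2, H, W)."""
    frames = frames.astype(np.float32)
    diff = np.abs(np.diff(frames, axis=0))            # (T-1, H, W)
    attention = np.concatenate([diff[:1], diff], axis=0)  # pad first frame
    return np.stack([frames, attention], axis=1)      # (T, 2, H, W)
```

Regions that change between frames get large attention values, steering the 3D convolutions toward the visual focus area.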

On-line sequential extreme learning machine-based incremental human behavior recognition method

The invention discloses an incremental human behavior recognition method based on an online sequential extreme learning machine (OS-ELM). In the method, a video camera captures each person within his or her range of activity. The method comprises the following steps: (1) extracting spatio-temporal interest points in a video with a three-dimensional (3D) Harris corner detector; (2) computing a descriptor for each detected spatio-temporal interest point with the 3D SIFT descriptor; (3) generating a video dictionary with the K-means clustering algorithm and establishing a bag-of-words model of the video; (4) training an online sequential extreme learning machine classifier with the resulting bag-of-words model; and (5) performing human behavior recognition with the trained classifier while continuing to learn online. The method obtains accurate human behavior recognition results within a short training time from few training samples, and is, to a certain extent, insensitive to changes in environmental scenario, lighting, detected objects and human form.
Owner:SHANDONG UNIV
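Step (3) above, turning interest-point descriptors into a bag-of-words representation, can be sketched as follows, assuming the K-means dictionary (the codebook of visual-word centers) has already been learned:

```python
import numpy as np

def bag_of_words(descriptors, codebook):
    """Quantize interest-point descriptors against a visual dictionary
    and return the normalized word histogram for one video.

    descriptors: (N, D) descriptors (e.g. 3D SIFT) from one video
    codebook:    (K, D) cluster centers from K-means"""
    # squared Euclidean distance from every descriptor to every word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / hist.sum()                  # L1-normalized histogram
```

The resulting fixed-length histogram is what the OS-ELM classifier is trained on, one vector per video.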

Track and convolutional neural network feature extraction-based behavior identification method

The invention discloses a behavior identification method based on trajectory and convolutional neural network feature extraction, and mainly addresses the computational redundancy and low classification accuracy caused by complex human behavior video content and sparse features. The method comprises the steps of: inputting image video data; down-sampling pixel points in each video frame; deleting sampling points in uniform regions; extracting trajectories; extracting convolutional-layer features with a convolutional neural network; combining the trajectories with the convolutional-layer features to extract trajectory-constrained convolutional features; extracting stacked local Fisher vector features from the trajectory-constrained convolutional features; compressing and transforming the stacked local Fisher vector features; training a support vector machine model with the final stacked local Fisher vector features; and performing human behavior identification and classification. By combining multilevel Fisher vectors with convolutional trajectory feature descriptors, the method obtains relatively high and stable classification accuracy, and can be widely applied in fields such as human-computer interaction, virtual reality and video monitoring.
Owner:XIDIAN UNIV
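The trajectory-constrained convolutional feature step can be illustrated by pooling convolutional-layer activations only at the tracked points, rather than over the whole frame. The nearest-neighbour sampling and mean pooling below are simplifying assumptions, not the patented procedure:

```python
import numpy as np

def trajectory_pooled_features(conv_map, trajectory):
    """Pool conv-layer activations along one tracked trajectory.

    conv_map:   (H, W, C) activations of one convolutional layer
                for one frame (assumed already resized to frame scale)
    trajectory: list of (row, col) integer points from dense tracking
    Returns the (C,) mean activation along the trajectory."""
    feats = np.array([conv_map[r, c] for r, c in trajectory])
    return feats.mean(axis=0)   # one descriptor per trajectory
```

Restricting the pooling to trajectory points is what removes the redundancy of uniform background regions mentioned in the abstract; the per-trajectory descriptors would then feed the stacked Fisher vector encoding.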