
2635 results about "Behavior recognition" patented technology

Behavior recognition is based on several factors. These include the location and movement of the nose point, center point, and tail base of the animal; its body shape and contour; and information about the cage in which testing takes place (such as where the walls, the feeder, and the drinking bottle are located).
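As a toy illustration of such point-based features, one might derive a body length and a heading angle from the tracked points; the feature choice here is hypothetical and not taken from any cited system.

```python
import numpy as np

def pose_features(nose, center, tail_base):
    """Toy feature vector from three tracked points: body length
    (nose to tail base) and heading angle of the nose as seen
    from the body center."""
    nose, center, tail = (np.asarray(p, dtype=float)
                          for p in (nose, center, tail_base))
    body_len = float(np.linalg.norm(nose - tail))
    dx, dy = nose - center
    heading = float(np.arctan2(dy, dx))  # 0 means "facing along +x"
    return body_len, heading
```

Cage information (wall, feeder, bottle positions) would then turn these raw features into semantic events, e.g. "nose point near feeder".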

Movement human abnormal behavior identification method based on template matching

The invention relates to a template-matching-based method for identifying abnormal human movement behavior, comprising two main steps: video image acquisition and behavior feature extraction. The method is a pattern-recognition technique based on statistical learning from samples. Human movement is analyzed and interpreted with computer vision: behavior is identified directly from geometric computation over the motion region, and recording and alarming are triggered accordingly. Gaussian filtering and neighborhood filtering are combined for denoising, which improves the independent-analysis capability and intelligence of a monitoring system, achieves high identification accuracy for abnormal behaviors, effectively removes complex backgrounds and noise from the acquired images, and improves the efficiency and robustness of the detection algorithm. The invention offers simple modeling, a simple algorithm, and accurate detection; it can be widely applied in settings such as banks and museums, and helps raise the safety-monitoring level of public places.
Owner:XIDIAN UNIV
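The combined denoising the abstract describes (Gaussian filtering plus neighborhood filtering of the motion mask) might look like the following NumPy sketch; the 5x5 kernel, the threshold of 30, and the 5-of-9 majority rule are illustrative choices, not parameters taken from the patent.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def denoise(frame_diff, thresh=30):
    """Gaussian smoothing of a frame-difference image, thresholding
    into a motion mask, then a 3x3 neighborhood (majority) filter."""
    k = gaussian_kernel()
    pad = len(k) // 2
    padded = np.pad(frame_diff.astype(float), pad, mode="edge")
    h, w = frame_diff.shape
    smoothed = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            smoothed[i, j] = np.sum(padded[i:i + len(k), j:j + len(k)] * k)
    mask = (smoothed > thresh).astype(int)
    # neighborhood denoising: keep a pixel only if most of its 3x3 patch agrees
    mp = np.pad(mask, 1, mode="edge")
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            out[i, j] = 1 if mp[i:i + 3, j:j + 3].sum() >= 5 else 0
    return out
```

On a synthetic frame difference, a solid moving region survives both filters while an isolated noisy pixel is removed.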

Crime monitoring method based on face recognition technology and behavior and sound recognition

The invention provides a crime-monitoring method based on face recognition combined with behavior and sound recognition, comprising the following steps. Step 1: record video through a camera and reduce its dimensionality to form a set of picture messages. Step 2: compare the picture messages against an intelligent behavior pattern; if the comparison succeeds, issue an early warning and store the video. Step 3: an officer on duty verifies the crime situation, confirms the camera's position via GPS for positioning and tracking, determines the police strength nearby, and sends the crime information to nearby officers; if no officer is on duty, the camera's position is confirmed automatically via GPS and the crime information is sent to nearby officers and on-duty staff. By setting different intelligent behavior patterns for the monitoring requirements of different situations, the invention enables targeted monitoring and advance early warning, prevents cases from worsening, shortens the time needed to solve criminal cases, and improves the detection rate.
Owner:FUJIAN YIRONG INFORMATION TECH

Bidirectional long short-term memory unit-based behavior identification method for video

The invention discloses a video behavior-identification method based on bidirectional long short-term memory (LSTM) units. The method comprises the steps of: (1) inputting a video sequence and extracting an RGB (red, green, blue) frame sequence and optical-flow images from it; (2) training one deep convolutional network on the RGB images and another on the optical-flow images; (3) extracting multi-layer features from the networks, at least the features of the third convolutional layer, the fifth convolutional layer, and the seventh fully connected layer, and pooling the convolutional-layer features; (4) training a recurrent neural network built from bidirectional LSTM units to obtain a per-frame probability matrix for the video; and (5) averaging the probability matrices, fusing the optical-flow and RGB probability matrices, and taking the class with the highest probability as the final classification result, thereby achieving behavior identification. Multi-layer deep-learning features replace conventional hand-crafted features; because features from different layers carry different information, combining them improves classification accuracy. The bidirectional LSTM captures temporal information and rich time-domain structure, improving the behavior-identification performance.
Owner:SUZHOU UNIV
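Step (5) above — averaging each stream's per-frame probability matrices and fusing the two streams — reduces to a few lines of NumPy; the equal fusion weight `w=0.5` is an assumption for illustration, not a value from the patent.

```python
import numpy as np

def fuse_streams(rgb_frame_probs, flow_frame_probs, w=0.5):
    """Average the per-frame class probabilities within each stream,
    fuse the RGB and optical-flow streams with a weighted sum, and
    return the class with the highest fused probability."""
    rgb = np.mean(rgb_frame_probs, axis=0)    # (num_classes,)
    flow = np.mean(flow_frame_probs, axis=0)  # (num_classes,)
    fused = w * rgb + (1 - w) * flow
    return int(np.argmax(fused)), fused
```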

Human body gesture identification method based on depth convolution neural network

The invention discloses a human pose identification method based on a deep convolutional neural network; it belongs to the technical fields of pattern recognition and information processing, relates to behavior-identification tasks in computer vision, and in particular presents the design and implementation of a human pose estimation system based on a deep convolutional neural network. The network (ILPN) consists of an input layer, seven hidden layers, and two independent output layers with independent loss functions designed for locating human body joints. Hidden layers one through six are convolutional layers used for feature extraction; the seventh (fc7) is a fully connected layer. The output layer consists of two independent parts, fc8-x and fc8-y: fc8-x predicts the x coordinate of a joint and fc8-y predicts its y coordinate. During training, each output has its own softmax loss function to guide the learning of the model. The method offers simple and fast training, a small computational load, and high accuracy.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
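The two independent output heads with softmax losses can be read as classifying each joint coordinate into discretized bins along each axis; the following sketch assumes that interpretation (the bin counts and logits are made up for illustration).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_joint(fc8_x_logits, fc8_y_logits):
    """Independent softmax over discretized x and y coordinate bins;
    the predicted joint location is the most probable bin per axis."""
    px, py = softmax(fc8_x_logits), softmax(fc8_y_logits)
    return int(np.argmax(px)), int(np.argmax(py))
```

At training time each head would get its own cross-entropy loss against the ground-truth bin, matching the "independent softmax loss per output" described above.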

3D (three-dimensional) convolutional neural network based human body behavior recognition method

CN105160310A (Inactive). Advantages: the extracted features are highly representative; fast extraction. Classification: character and pattern recognition. Concepts: human body; feature vector.
The present invention discloses a human behavior recognition method based on a 3D (three-dimensional) convolutional neural network, mainly for recognizing specific human behaviors in the fields of computer vision and pattern recognition. The implementation steps are: (1) input video; (2) preprocess it to obtain a training sample set and a test sample set; (3) construct a 3D convolutional neural network; (4) extract feature vectors; (5) perform classification training; and (6) output the test result. Human detection and motion estimation are implemented with an optical-flow method, so a moving object can be detected without any prior knowledge of the scene. The method performs notably well when the network input is a multi-dimensional image, and it lets an image be used directly as the network input, avoiding the complex feature-extraction and data-reconstruction processes of conventional recognition algorithms and making human behavior recognition more accurate.
Owner:XIDIAN UNIV
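The core operation of step (3) — a 3D convolution sliding over time as well as space — can be written directly in NumPy. Valid padding, a single channel, and unit stride are simplifying assumptions for brevity (and, as in most CNN frameworks, the operation is really cross-correlation).

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3-D convolution of a (T, H, W) video volume with a
    (kt, kh, kw) kernel: the kernel slides along time, height, width."""
    T, H, W = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(
                    volume[t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out
```

Because the kernel spans several frames, each output value mixes spatial and temporal structure, which is what lets a 3D CNN pick up motion cues that a per-frame 2D CNN misses.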

Video frequency behaviors recognition method based on track sequence analysis and rule induction

The invention discloses a method for identifying video behaviors based on trajectory-sequence analysis and rule induction, which reduces the heavy manual effort of existing approaches. The method divides a complete trajectory in a scene into several trajectory sections with basic meaning and obtains a set of basic movement patterns, treated as atomic events, through trajectory clustering. Meanwhile, a hidden Markov model is used for modeling, and an induction algorithm based on the minimum description length extracts the event rules contained in the trajectory sequence; based on these rules, an extended grammar parser identifies events of interest. The invention provides a complete video behavior-identification framework and a multi-layer rule-induction strategy that exploits spatio-temporal attributes, significantly improving the effectiveness of rule learning and advancing the application of pattern recognition to video behavior identification. The method can be applied to intelligent video surveillance and automatic analysis of vehicle or pedestrian movement in a monitored scene, letting a computer assist or replace people in completing monitoring tasks.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
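The first stage — cutting a complete trajectory into sections and clustering them into atomic events — can be sketched as fixed-length segmentation plus plain k-means on each section's net displacement. The segment length, the displacement feature, and the deterministic initialization are illustrative assumptions; the HMM and grammar-parsing stages are omitted.

```python
import numpy as np

def segment_trajectory(points, seg_len=5):
    """Split a full (N, 2) trajectory into consecutive sections of
    seg_len points each (a trailing partial section is dropped)."""
    n = len(points) // seg_len
    return [points[i * seg_len:(i + 1) * seg_len] for i in range(n)]

def kmeans(features, k=2, iters=20):
    """Plain k-means on section feature vectors; deterministic
    initialization from evenly spaced samples keeps the sketch
    reproducible."""
    features = np.asarray(features, dtype=float)
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].copy()
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return labels
```

Each cluster label then serves as one "atomic event" symbol in the event-rule induction that follows.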

Human skeleton behavior recognition method and device based on deep reinforcement learning

The invention discloses a human skeleton behavior recognition method and device based on deep reinforcement learning. The method comprises: uniformly sampling each video segment in a training set to obtain videos with a fixed number of frames and training a graph convolutional neural network on them; after fixing the parameters of the graph convolutional network, training a frame-extraction network with it to obtain representative frames meeting a preset condition; updating the graph convolutional network using those representative frames; obtaining a target video, uniformly sampling it, and sending the sampled frames to the frame-extraction network to obtain key frames; and sending the key frames to the updated graph convolutional network to obtain the final behavior class. This enhances the discriminability of the selected frames, removes redundant information, improves recognition performance, and reduces the computation required at test time. In addition, by fully exploiting the topological relationships of the human skeleton, the behavior-recognition performance is further improved.
Owner:TSINGHUA UNIV
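The uniform sampling used to obtain a fixed frame count might simply be evenly spaced indices over the clip, as in this sketch:

```python
import numpy as np

def uniform_sample(num_frames, target):
    """Pick `target` frame indices spread evenly over a clip of
    `num_frames` frames, yielding the fixed-length input the
    abstract describes."""
    return np.linspace(0, num_frames - 1, target).round().astype(int)
```

The frame-extraction network would then refine this uniform set into the representative key frames.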

Infrared behavior identification method based on adaptive fusion of artificial design feature and depth learning feature

The invention relates to an infrared behavior-identification method based on adaptive fusion of hand-crafted features and deep-learning features. The method comprises: S1, extracting improved dense-trajectory features from the original video with a hand-crafted-feature module; S2, encoding the extracted hand-crafted features; S3, in a CNN feature module, extracting optical-flow information from the original video image sequence with a variational optical-flow algorithm to obtain the corresponding optical-flow image sequence; S4, extracting CNN features from the optical-flow sequence of S3 with a convolutional neural network; and S5, dividing the data set into a training set and a test set, learning fusion weights on the training data with a weight-optimization network, using the learned weights to fuse the probability outputs of the CNN-feature classification network and the hand-crafted-feature classification network, selecting the optimal weight by comparing identification results, and applying the optimal weight to classify the test data. The method provides a novel feature-fusion approach and improves the reliability of behavior identification in infrared video, which is of great significance for subsequent video analysis.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
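Step S5's weight learning can be approximated, for illustration only, by a grid search over the fusion weight on labeled validation data; the patent's weight-optimization network is not reproduced here.

```python
import numpy as np

def best_fusion_weight(p_cnn, p_hand, labels, grid=np.linspace(0, 1, 101)):
    """Grid-search the weight w maximizing validation accuracy of the
    fused probabilities w * p_cnn + (1 - w) * p_hand, where p_cnn and
    p_hand are (samples, classes) probability outputs of the two
    classification networks."""
    best_w, best_acc = 0.0, -1.0
    for w in grid:
        pred = np.argmax(w * p_cnn + (1 - w) * p_hand, axis=1)
        acc = float(np.mean(pred == labels))
        if acc > best_acc:
            best_w, best_acc = float(w), acc
    return best_w, best_acc
```

The selected weight is then fixed and applied unchanged when classifying the test set.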

Behavior identification method based on recurrent neural network and human skeleton movement sequences

The invention discloses a behavior-identification method based on a recurrent neural network and human skeleton movement sequences. The method comprises: normalizing the joint coordinates of the extracted human skeleton pose sequences to eliminate the influence of the body's absolute spatial position on identification; filtering the joint coordinates with a simple smoothing filter to improve the signal-to-noise ratio; and feeding the smoothed data into a hierarchical bidirectional recurrent neural network for deep feature extraction and identification. The invention also provides a hierarchical unidirectional recurrent network model to meet practical real-time online analysis requirements. By designing an end-to-end analysis pipeline around the structural characteristics and motion correlations of the human body, the method achieves high-precision identification while avoiding complex computation, making it suitable for practical application. It is significant for depth-camera-based intelligent video monitoring, intelligent traffic management, smart cities, and related fields.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
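The two preprocessing steps — removing absolute position and smoothing the joint trajectories — might look like the following; the per-frame centroid reference and the 3-frame moving average are assumptions, not the patent's exact normalization or filter.

```python
import numpy as np

def normalize_skeleton(seq):
    """Remove absolute position by subtracting the per-frame joint
    centroid from every joint. seq has shape (T, J, D): T frames,
    J joints, D coordinate dimensions."""
    return seq - seq.mean(axis=1, keepdims=True)

def smooth(seq, win=3):
    """Simple moving-average filter along time, per joint and per
    coordinate, to raise the signal-to-noise ratio."""
    kernel = np.ones(win) / win
    out = np.empty(seq.shape, dtype=float)
    T, J, D = seq.shape
    for j in range(J):
        for d in range(D):
            out[:, j, d] = np.convolve(seq[:, j, d], kernel, mode="same")
    return out
```

The normalized, smoothed sequences are what the hierarchical recurrent network consumes.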

An operator on duty violation behavior detection method and system

The invention relates to a method and system for detecting violations by personnel on duty; it belongs to the technical field of intelligent video analysis and addresses the low efficiency, high cost, and low recognition accuracy of existing detection methods. The method comprises: constructing a target-detection network model and training it with a data set; acquiring multiple video streams from different angles of the same scene in real time; performing multi-target detection and tracking with the trained detection model and a target-tracking algorithm; obtaining and integrating the personnel information in each stream; and judging whether the behavior of on-duty personnel is abnormal. The invention uses environment-camera video as the input source for intelligent video analysis; the system supports multi-stream input and fusion analysis, greatly improves the recognition accuracy of violations through deep learning and data modeling, and achieves real-time, accurate monitoring of on-duty behavior in scenes such as monitoring centers, duty rooms, and command centers.
Owner:XINGTANG TELECOMM TECH CO LTD +2

On-line sequential extreme learning machine-based incremental human behavior recognition method

The invention discloses an incremental human behavior recognition method based on an online sequential extreme learning machine (OS-ELM), in which a video camera captures the human body over each person's range of activity. The method comprises: (1) extracting spatio-temporal interest points from the video with a 3D Harris corner detector; (2) computing a descriptor for each detected interest point with a 3D SIFT descriptor; (3) generating a video dictionary with the K-means clustering algorithm and building a bag-of-words model of the video images; (4) training an online sequential extreme learning machine classifier with the resulting bag-of-words model; and (5) performing human behavior recognition with the classifier while continuing to learn online. The method obtains accurate recognition results within a short training time from few training samples, and is, to a certain extent, insensitive to changes in scene, lighting, detected subject, and body shape.
Owner:SHANDONG UNIV
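Steps (3) and (4) hinge on the bag-of-words representation: each interest-point descriptor is quantized to its nearest dictionary word and the word counts form the video's feature histogram. A minimal sketch, with a pre-computed dictionary standing in for the K-means output:

```python
import numpy as np

def bag_of_words(descriptors, dictionary):
    """Quantize each interest-point descriptor to its nearest
    dictionary word (Euclidean distance) and count occurrences
    into an L1-normalized histogram."""
    d = np.linalg.norm(descriptors[:, None] - dictionary[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what the OS-ELM classifier is trained on, frame batch by frame batch.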