1468 results about "Behavior recognition" patented technology

Behavior recognition is based on several factors. These include the location and movement of the nose point, center point, and tail base of the animal; its body shape and contour; and information about the cage in which testing takes place (such as where the walls, the feeder, and the drinking bottle are located).

Template matching-based identification method for abnormal human movement behavior

The invention relates to a template matching-based method for identifying abnormal human movement behavior, comprising two main steps: video image acquisition and behavior feature extraction. The method is a pattern recognition technique based on statistical learning from samples. Human movement is analyzed and interpreted with computer vision: behavior is identified directly from geometric computation over the motion region, and recording and alarming are triggered accordingly. Gaussian filtering and neighborhood denoising are combined to remove noise, which improves the independent analysis capability and intelligence of a monitoring system, achieves high identification accuracy for abnormal behaviors, effectively removes complex background and noise from the acquired images, and improves the efficiency and robustness of the detection algorithm. The invention features simple modeling, a simple algorithm, and accurate detection; it can be widely applied in settings such as banks and museums, and helps raise the level of safety monitoring in public places.
Owner:XIDIAN UNIV
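
The combined denoising step described above can be sketched as follows. This is an illustrative Python reconstruction, not the patent's implementation: the kernel size, filter order (median first to remove impulses, then Gaussian to smooth), and 1D signal are assumptions.

```python
import math

def gaussian_kernel(radius, sigma):
    """Discrete 1D Gaussian kernel, normalized to sum to 1."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def gaussian_filter(signal, radius=2, sigma=1.0):
    """Gaussian smoothing with edge replication at the borders."""
    k = gaussian_kernel(radius, sigma)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)  # replicate edges
            acc += w * signal[idx]
        out.append(acc)
    return out

def median_filter(signal, radius=1):
    """Neighborhood denoising: replace each sample by its window median."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(i - radius, 0), min(i + radius + 1, n)
        window = sorted(signal[lo:hi])
        out.append(window[len(window) // 2])
    return out

def denoise(signal):
    """Combined denoising: median removes impulses, Gaussian smooths the rest."""
    return gaussian_filter(median_filter(signal))

noisy = [0.0, 0.1, 9.0, 0.2, 0.1, 0.0, 0.2, 8.5, 0.1, 0.0]  # impulsive spikes
clean = denoise(noisy)
```

In a real system the same combination would run on 2D frames rather than a 1D signal, but the structure of the pipeline is the same.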

Crime monitoring method based on face recognition technology and behavior and sound recognition

The invention provides a crime monitoring method based on face recognition together with behavior and sound recognition, comprising the following steps. Step 1: record video through a camera and reduce its dimensionality to form a set of picture frames. Step 2: compare the frames against an intelligent behavior pattern; if the comparison succeeds, issue an early warning and store the video. Step 3: a police officer on duty verifies the situation, confirms the camera's position via GPS for positioning and tracking, determines the police strength nearby, and sends the crime information to nearby officers; if no officer is on duty, the camera's position is confirmed automatically via GPS and the information is sent to nearby officers and duty staff. Different intelligent behavior patterns are configured for the monitoring requirements of different scenarios, enabling targeted monitoring and advance early warning, preventing cases from escalating, shortening the time needed to solve a case, and improving the detection rate.
Owner:FUJIAN YIRONG INFORMATION TECH
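
The alert-and-dispatch branching in Steps 2 and 3 can be sketched as a small decision function. This is a hypothetical illustration of the control flow only; the dictionary fields and function name are not from the patent.

```python
def handle_frame_match(matched, officer_on_duty, camera_gps):
    """Sketch of the alert path: pattern match -> early warning, store video,
    then manual dispatch if an officer is on duty, otherwise automatic dispatch."""
    if not matched:
        return {"action": "none"}
    alert = {"action": "early_warning", "store_video": True, "position": camera_gps}
    if officer_on_duty:
        alert["dispatch"] = "manual"      # officer verifies, then notifies nearby police
    else:
        alert["dispatch"] = "automatic"   # GPS fix and notification happen automatically
    return alert

alert = handle_frame_match(True, False, (26.08, 119.30))
```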

Bidirectional long short-term memory unit-based behavior identification method for video

The invention discloses a bidirectional long short-term memory (LSTM) unit-based behavior identification method for video. The method comprises the steps of: (1) inputting a video sequence and extracting an RGB (Red, Green and Blue) frame sequence and optical flow images from it; (2) training separate deep convolutional networks for the RGB images and the optical flow images; (3) extracting multi-layer features from the networks, including at least the third convolutional layer, the fifth convolutional layer, and the seventh fully connected layer, with the convolutional-layer features pooled; (4) training a recurrent neural network built from bidirectional LSTM units to obtain a probability matrix for each frame of the video; and (5) averaging the probability matrices, fusing the probability matrices of the optical-flow and RGB streams, and taking the class with the maximum probability as the final classification result, thereby realizing behavior identification. The method replaces conventional hand-crafted features with multi-layer deep-learning features; features from different layers represent different information, and combining them improves classification accuracy. The bidirectional LSTM captures temporal information and rich temporal structure, improving the behavior identification result.
Owner:SUZHOU UNIV
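
Step (5), averaging per-frame probabilities and fusing the two streams, can be sketched as below. This is a minimal illustration assuming equal-weight fusion of the two streams; the data values are invented.

```python
def average_probs(frame_probs):
    """Average per-frame class probabilities over all frames of a video."""
    n_frames = len(frame_probs)
    n_classes = len(frame_probs[0])
    return [sum(f[c] for f in frame_probs) / n_frames for c in range(n_classes)]

def fuse_and_classify(rgb_frame_probs, flow_frame_probs):
    """Fuse RGB and optical-flow streams by averaging, then take the argmax class."""
    rgb = average_probs(rgb_frame_probs)
    flow = average_probs(flow_frame_probs)
    fused = [(r + f) / 2 for r, f in zip(rgb, flow)]
    return max(range(len(fused)), key=fused.__getitem__)

rgb = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]   # two frames, three classes
flow = [[0.2, 0.7, 0.1], [0.1, 0.8, 0.1]]  # the flow stream favors class 1
label = fuse_and_classify(rgb, flow)
```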

Human body pose identification method based on deep convolutional neural network

The invention discloses a human body pose identification method based on a deep convolutional neural network, belonging to the technical field of pattern recognition and information processing. It relates to behavior identification tasks in computer vision, and in particular to the design and implementation of a human pose estimation system based on a deep convolutional neural network. The network has independent output layers and independent loss functions designed for localizing human body joints. The network (ILPN) consists of an input layer, seven hidden layers, and two independent output layers. Hidden layers one through six are convolutional layers used for feature extraction; the seventh hidden layer (fc7) is a fully connected layer. The output consists of two independent parts, fc8-x and fc8-y: fc8-x predicts the x coordinate of a joint and fc8-y predicts its y coordinate. During training, each output has its own softmax loss function to guide the learning of the model. The method trains simply and quickly, has a small computational cost, and achieves high accuracy.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
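
The two independent heads can be read as classifiers over discrete coordinate bins: each head applies a softmax over its bins and the predicted coordinate is the most probable bin. The sketch below illustrates that inference step only; the bin discretization and the logit values are assumptions, not details from the patent.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_joint(fc8_x_logits, fc8_y_logits):
    """Each head classifies one coordinate over discrete pixel bins;
    the predicted coordinate is the bin with the highest probability."""
    px = softmax(fc8_x_logits)
    py = softmax(fc8_y_logits)
    x = max(range(len(px)), key=px.__getitem__)
    y = max(range(len(py)), key=py.__getitem__)
    return x, y

x_logits = [0.1, 2.5, 0.3, 0.2]  # 4 horizontal bins; bin 1 scores highest
y_logits = [0.0, 0.4, 3.1, 0.2]  # 4 vertical bins; bin 2 scores highest
joint = predict_joint(x_logits, y_logits)
```

At training time each head would get its own softmax cross-entropy loss against the true bin, which is what "independent loss functions" refers to.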

3D (three-dimensional) convolutional neural network based human body behavior recognition method

Inactive · CN105160310A · The extracted features are highly representative · Fast extraction · Character and pattern recognition · Human body · Feature vector
The present invention discloses a 3D (three-dimensional) convolutional neural network-based human body behavior recognition method, mainly aimed at recognizing specific human behaviors in the fields of computer vision and pattern recognition. The implementation steps are: (1) video input; (2) preprocessing to obtain a training sample set and a test sample set; (3) constructing the 3D convolutional neural network; (4) extracting feature vectors; (5) classification training; and (6) outputting test results. Human body detection and motion estimation are implemented with an optical flow method, so a moving object can be detected without any prior knowledge of the scene. The network performs particularly well when its input is a multi-dimensional image, and it takes images directly as input, avoiding the complex feature extraction and data reconstruction of conventional recognition algorithms and making human behavior recognition more accurate.
Owner:XIDIAN UNIV
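
The core operation of step (3), convolving a kernel over a video volume in time as well as space, can be sketched in plain Python. This is a toy valid-mode 3D convolution (cross-correlation, as in CNN practice), not the patent's network; the temporal-difference kernel is an invented example.

```python
def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation over a video volume indexed [t][y][x]."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):
        plane = []
        for y in range(H - kh + 1):
            row = []
            for x in range(W - kw + 1):
                acc = 0.0
                for dt in range(kt):
                    for dy in range(kh):
                        for dx in range(kw):
                            acc += volume[t + dt][y + dy][x + dx] * kernel[dt][dy][dx]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# A 2x2x2 temporal-difference kernel responds to change between frames,
# which is what lets a 3D kernel pick up motion cues directly.
video = [[[0, 0], [0, 0]], [[1, 1], [1, 1]]]           # 2 frames of 2x2 pixels
kernel = [[[-1, -1], [-1, -1]], [[1, 1], [1, 1]]]      # frame t+1 minus frame t
out = conv3d(video, kernel)
```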

Video behavior recognition method based on trajectory sequence analysis and rule induction

The invention discloses a method for identifying video behaviors based on trajectory sequence analysis and rule induction, which reduces heavy manual effort. The method divides a complete trajectory in a scene into several trajectory segments with basic meaning, and obtains basic motion patterns as atomic events through trajectory clustering. A hidden Markov model is used to model the segments, and an induction algorithm based on the minimum description length principle extracts the event rules contained in the trajectory sequence; based on these event rules, an extended grammar parser identifies events of interest. The invention provides a complete video behavior identification framework along with a multi-layer rule induction strategy over spatio-temporal attributes, which significantly improves the effectiveness of rule learning and advances the application of pattern recognition to video behavior identification. The method can be applied to intelligent video surveillance and automatic analysis of vehicle or pedestrian motion in a monitored scene, so that a computer can assist or replace people in completing monitoring tasks.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
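
The first stage, turning a raw trajectory into a sequence of atomic events, can be illustrated by quantizing each step's heading and collapsing runs of the same heading. This is a simplified stand-in for the patent's trajectory clustering; the direction binning is an assumption.

```python
import math

def atomic_events(trajectory, n_directions=4):
    """Quantize each trajectory step into one of n_directions headings,
    then collapse runs of the same heading into atomic events."""
    symbols = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        symbols.append(int(angle / (2 * math.pi / n_directions)) % n_directions)
    events = []
    for s in symbols:
        if not events or events[-1] != s:
            events.append(s)  # a new atomic event starts when the heading changes
    return events

# right, right, up, up -> two atomic events: "move right" then "move up"
track = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
events = atomic_events(track)
```

The resulting event sequence is the kind of symbol stream that an HMM or a grammar parser, as described above, would consume.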

Infrared behavior identification method based on adaptive fusion of hand-crafted features and deep-learning features

The invention relates to an infrared behavior identification method based on adaptive fusion of hand-crafted features and deep-learning features. The method comprises: S1, extracting improved dense trajectory features from the original video with a hand-crafted feature module; S2, encoding the extracted hand-crafted features; S3, in a CNN feature module, extracting optical flow information from the original video image sequence with a variational optical flow algorithm to obtain a corresponding optical flow image sequence; S4, extracting CNN features from the optical flow sequence of S3 with a convolutional neural network; and S5, dividing the data set into a training set and a test set, learning fusion weights on the training data with a weight optimization network, fusing the probability outputs of the CNN feature classification network and the hand-crafted feature classification network with the learned weights, selecting the optimal weight by comparing identification results, and applying the optimal weight to test set classification. The method provides a novel feature fusion approach and improves the reliability of behavior identification in infrared video, which is of great significance for subsequent video analysis.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
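
The weight selection in S5 can be sketched with a simple grid search standing in for the patent's weight optimization network. This is an illustrative simplification; the probability values and labels are invented.

```python
def fuse(p_cnn, p_hand, w):
    """Weighted fusion of two classifiers' probability outputs."""
    return [w * c + (1 - w) * h for c, h in zip(p_cnn, p_hand)]

def accuracy(weight, cnn_probs, hand_probs, labels):
    """Fraction of samples whose fused argmax matches the label."""
    correct = 0
    for pc, ph, y in zip(cnn_probs, hand_probs, labels):
        fused = fuse(pc, ph, weight)
        if max(range(len(fused)), key=fused.__getitem__) == y:
            correct += 1
    return correct / len(labels)

def learn_weight(cnn_probs, hand_probs, labels, steps=100):
    """Pick the fusion weight that maximizes training accuracy
    (grid search stands in for the weight optimization network)."""
    best_w, best_acc = 0.0, -1.0
    for i in range(steps + 1):
        w = i / steps
        acc = accuracy(w, cnn_probs, hand_probs, labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w

cnn_probs = [[0.9, 0.1], [0.2, 0.8]]   # CNN stream output per sample
hand_probs = [[0.4, 0.6], [0.6, 0.4]]  # hand-crafted stream output per sample
labels = [0, 1]
w = learn_weight(cnn_probs, hand_probs, labels)
```

The learned weight would then be fixed and applied unchanged when classifying the test set.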

Behavior identification method based on recurrent neural network and human skeleton movement sequences

The invention discloses a behavior identification method based on a recurrent neural network and human skeleton motion sequences. The method comprises the following steps: normalizing the joint coordinates of the extracted skeleton pose sequences to eliminate the influence of the body's absolute spatial position on the identification process; filtering the joint coordinates with a simple smoothing filter to improve the signal-to-noise ratio; and feeding the smoothed data into a hierarchical bidirectional recurrent neural network for deep feature extraction and identification. In addition, a hierarchical unidirectional recurrent neural network model is provided for practical real-time online analysis. The method designs an end-to-end analysis pipeline around the structural characteristics and motion correlations of the human body, achieving high-precision identification while avoiding complex computation, making it suitable for practical application. It is significant for fields such as depth camera-based intelligent video surveillance, intelligent traffic management, and smart cities.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
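
The two preprocessing steps, position normalization and temporal smoothing, can be sketched as below. This is an illustrative 2D reconstruction with an assumed root joint and moving-average window; the patent does not specify these details.

```python
def normalize_skeleton(frames, root_index=0):
    """Subtract a root joint from every joint so the body's absolute
    spatial position drops out of the representation."""
    out = []
    for joints in frames:
        rx, ry = joints[root_index]
        out.append([(x - rx, y - ry) for x, y in joints])
    return out

def smooth(frames, radius=1):
    """Per-joint moving-average filter over time to raise the SNR."""
    n = len(frames)
    out = []
    for t in range(n):
        lo, hi = max(t - radius, 0), min(t + radius + 1, n)
        window = frames[lo:hi]
        joints = []
        for j in range(len(frames[0])):
            xs = [f[j][0] for f in window]
            ys = [f[j][1] for f in window]
            joints.append((sum(xs) / len(window), sum(ys) / len(window)))
        out.append(joints)
    return out

# Two joints per frame; the whole body translates between frames,
# but the normalized poses are identical.
seq = [[(0, 0), (1, 2)], [(5, 5), (6, 7)]]
norm = normalize_skeleton(seq)
smoothed = smooth(norm)
```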

Multi-target behavior identification method and system for monitoring video

The invention provides a multi-target behavior identification method and system for surveillance video. The method comprises the following steps: training a target detection model and a behavior recognition model; predicting the positions of pedestrians in the current frame of the video as the current frame's target detection boxes; predicting the current frame's target tracking boxes from the previous frame's information and computing the matching degree between the target boxes of the two frames; matching the current frame's detection boxes with its tracking boxes to obtain matching information; estimating the pedestrian target box coordinates in the current frame and predicting the tracking box coordinates of each pedestrian target in the next frame; cropping pedestrian images and storing pedestrian IDs; matching pedestrian images with the same ID across multiple consecutive frames, combining them into a list, and storing the ID; and, when the length of a list reaches a specified frame-count threshold, feeding the stored pedestrian images into the behavior recognition model to compute the behavior category probability for the list.
Owner:GUILIN UNIV OF ELECTRONIC TECH +1
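
The detection-to-tracking matching step can be illustrated with intersection-over-union (IoU) as the box matching degree and a greedy assignment. IoU and greedy matching are common choices for this step, but the patent does not specify them; the threshold and boxes below are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def match_boxes(detections, tracks, threshold=0.3):
    """Greedily pair each detection box with the unused tracking box
    of highest IoU, discarding pairs below the threshold."""
    pairs, used = [], set()
    for d_idx, det in enumerate(detections):
        best_t, best_iou = None, threshold
        for t_idx, trk in enumerate(tracks):
            if t_idx in used:
                continue
            score = iou(det, trk)
            if score > best_iou:
                best_t, best_iou = t_idx, score
        if best_t is not None:
            used.add(best_t)
            pairs.append((d_idx, best_t))
    return pairs

dets = [(0, 0, 10, 10), (20, 20, 30, 30)]    # current-frame detections
tracks = [(21, 21, 31, 31), (1, 1, 11, 11)]  # predicted tracking boxes
pairs = match_boxes(dets, tracks)
```

Matched pairs keep their pedestrian ID; unmatched detections would start new tracks and unmatched tracks would eventually be dropped.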