
1462 results about "Optical flow" patented technology

Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness pattern in an image. The concept of optical flow was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world. Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus for the perception of movement by the observer in the world; perception of the shape, distance and movement of objects in the world; and the control of locomotion.
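
The brightness-constancy idea behind optical flow can be illustrated with a minimal Lucas-Kanade estimate on a synthetic frame pair (a NumPy-only sketch; the function name and test pattern are illustrative, not from any of the patents below):

```python
import numpy as np

def lucas_kanade_point(I1, I2, x, y, win=2):
    """Estimate the apparent velocity (u, v) at pixel (x, y) from two
    grayscale frames by solving the Lucas-Kanade least-squares system
    over a (2*win+1)^2 window (a minimal sketch, not production code)."""
    Ix = np.gradient(I1, axis=1)          # horizontal intensity gradient
    Iy = np.gradient(I1, axis=0)          # vertical intensity gradient
    It = I2 - I1                          # temporal intensity change
    sl = np.s_[y - win:y + win + 1, x - win:x + win + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic frames: a horizontal intensity ramp shifted 1 px to the right.
# The brightness only varies in x, so only u is constrained (the aperture
# problem); lstsq returns the minimum-norm solution with v = 0.
I1 = np.tile(0.5 * np.arange(32, dtype=float), (32, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = lucas_kanade_point(I1, I2, 16, 16)   # u recovers the 1 px shift
```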

Human behavior recognition method integrating space-time dual-network flow and attention mechanism

The invention discloses a human behavior recognition method that integrates a space-time dual-network flow with an attention mechanism. The method includes the steps of: extracting moving optical-flow features and generating an optical-flow feature image; constructing independent time-flow and spatial-flow networks to generate two high-level semantic feature sequences with significant structural properties; decoding the high-level semantic feature sequence of the time flow to output a time-flow visual feature descriptor and an attention saliency feature sequence, while also outputting a spatial-flow visual feature descriptor and the label probability distribution of each frame of a video window; calculating a per-frame attention confidence score along the time dimension, weighting the label probability distribution of each frame of the spatial flow's video window, and selecting the key frames of the video window; and using a softmax classifier to decide the human behavior action category of the video window. Compared with the prior art, the method can effectively focus on the key frames of the appearance image in the original video and, at the same time, select the spatially salient region features of those key frames with high recognition accuracy.
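
The attention-weighted key-frame step can be sketched as follows (a toy NumPy illustration with hypothetical shapes and scores, not the patented pipeline):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(frame_probs, attn_scores, top_k=3):
    """Weight each frame's class-probability distribution by an attention
    confidence coefficient, then pool over the top-k key frames."""
    w = softmax(attn_scores)               # per-frame attention weights
    weighted = frame_probs * w[:, None]    # weight the label distributions
    key = np.argsort(w)[-top_k:]           # indices of the key frames
    pooled = weighted[key].sum(axis=0)
    return pooled / pooled.sum(), key      # normalized class scores

# Hypothetical window of 5 frames and 3 action classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=5)      # each row sums to 1
scores = np.array([0.1, 2.0, 0.3, 1.5, 0.2])   # temporal attention logits
fused, key_frames = attention_fuse(probs, scores)
pred = int(np.argmax(fused))                   # decided action category
```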

Method and apparatus for matching portions of input images

A method and apparatus for finding correspondence between portions of two images that first subjects the two images to segmentation by weighted aggregation (10), then constructs directed acyclic graphs (16, 18) from the output of the segmentation to obtain hierarchical graphs of aggregates (20, 22), and finally applies a maximally weighted subgraph isomorphism to the hierarchical graphs of aggregates to find matches between them (24). Two algorithms are described: one seeks a one-to-one matching between regions, and the other computes a soft matching, in which an aggregate may have more than one corresponding aggregate. Also described is a method and apparatus for image segmentation based on motion cues. Motion provides a strong cue for segmentation. The method begins with local, ambiguous optical-flow measurements and uses a process of aggregation to resolve the ambiguities and reach reliable estimates of the motion. As aggregation proceeds and larger aggregates are identified, it employs a progressively more complex motion model: translational motion at fine levels, affine transformation at intermediate levels, and 3D motion (described by a fundamental matrix) at the coarsest levels. Finally, the method is integrated with a segmentation method that uses intensity cues. Its utility is demonstrated on both random-dot and real motion sequences.
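
The intermediate affine stage of the coarse-to-fine motion model can be illustrated by fitting an affine motion to point correspondences via least squares (a sketch on synthetic data; the helper name is illustrative):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine motion model dst ~ src @ A.T + t from point
    correspondences: the stage between pure translation (fine levels)
    and fundamental-matrix 3D motion (coarse levels)."""
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])      # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    A, t = params[:2].T, params[2]             # 2x2 linear part, translation
    return A, t

# Synthetic aggregate: four points rotated 90 degrees and shifted (1, 2).
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
R = np.array([[0., -1.], [1., 0.]])
dst = src @ R.T + np.array([1., 2.])
A, t = fit_affine(src, dst)                    # recovers R and (1, 2)
```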

Space-time attention based video classification method

CN107330362A (Active) · Improves classification performance · Accurate time-domain saliency information · Character and pattern recognition · Attention model · Time domain
The invention relates to a space-time attention based video classification method, which comprises the steps of: extracting frames and optical flows from training videos and videos to be predicted, and stacking a plurality of optical flows into a multi-channel image; building a space-time attention model comprising a space-domain attention network, a time-domain attention network and a connection network; jointly training the three components of the space-time attention model so that the effects of the space-domain and time-domain attention improve simultaneously, yielding a space-time attention model that accurately models space-domain and time-domain saliency and is applicable to video classification; and using the learned model to extract space-domain and time-domain saliency for the frames and optical flows of the video to be predicted, performing prediction, and integrating the prediction scores of the frames and the optical flows to obtain the final semantic category of the video. Because the method models space-domain and time-domain attention simultaneously and fully exploits their cooperation through joint training, it learns more accurate saliency in both domains and thus improves the accuracy of video classification.
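
The first step, stacking several optical-flow fields into a multi-channel image, can be sketched in a few lines (shapes are hypothetical; this is the conventional input format for a temporal stream, not code from the patent):

```python
import numpy as np

def stack_flows(flows):
    """Stack L consecutive optical-flow fields (each H x W x 2, holding
    the u and v components) into a single 2L-channel image suitable as
    input to a temporal (motion) network stream."""
    return np.concatenate(flows, axis=-1)

# Hypothetical clip: 5 flow fields of size 8 x 8 -> one 10-channel image.
stacked = stack_flows([np.zeros((8, 8, 2)) for _ in range(5)])
```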

Bidirectional long short-term memory unit-based behavior identification method for video

The invention discloses a bidirectional long short-term memory (LSTM) unit-based behavior identification method for video. The method comprises the steps of: (1) inputting a video sequence and extracting an RGB (red, green, blue) frame sequence and optical-flow images from it; (2) training one deep convolutional network on the RGB images and another on the optical-flow images; (3) extracting multilayer features from the networks, at least the features of the third convolutional layer, the fifth convolutional layer and the seventh fully connected layer, with the convolutional-layer features pooled; (4) training a recurrent neural network built from bidirectional long short-term memory units to obtain a probability matrix for each frame of the video; and (5) averaging the probability matrices, fusing the probability matrices of the optical-flow and RGB streams, and taking the category with the maximum probability as the final classification result, thus realizing behavior identification. The method replaces conventional hand-crafted features with multi-layer deep-learning features; since features from different layers represent different information, combining them improves classification accuracy. Temporal information is captured by the bidirectional LSTM, yielding rich time-domain structural information and an improved behavior identification effect.
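
Step (5) above, averaging per-frame probabilities and fusing the two streams, can be sketched as follows (hypothetical values; equal-weight fusion is an illustrative choice, not specified by the patent):

```python
import numpy as np

def classify_video(rgb_frame_probs, flow_frame_probs):
    """Average each stream's per-frame probability matrices over time,
    fuse the two streams, and take the most probable class as the
    behavior label."""
    rgb = rgb_frame_probs.mean(axis=0)     # average over frames
    flow = flow_frame_probs.mean(axis=0)
    fused = (rgb + flow) / 2.0             # fuse optical-flow and RGB streams
    return int(np.argmax(fused)), fused

# Hypothetical window: 4 frames, 3 behavior classes per stream.
rgb_p = np.array([[0.7, 0.2, 0.1]] * 4)
flow_p = np.array([[0.5, 0.4, 0.1]] * 4)
label, fused = classify_video(rgb_p, flow_p)   # fused = [0.6, 0.3, 0.1]
```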

Small four-rotor aircraft control system and method based on airborne sensor

The invention relates to the technical field of four-rotor aircraft, in particular to a small four-rotor aircraft control system and method based on airborne sensors. The system comprises an inertial measurement unit module, a microprocessor, an electronic speed controller, an ultrasonic sensor, an optical-flow sensor, a camera, a wireless module and a brushless DC motor. By fusing the information of a light, low-cost airborne sensor system, the six-DOF flight attitude of the aircraft is estimated in real time, and a closed-loop control strategy comprising inner-loop attitude control and outer-loop position control is designed. In environments without GPS or an indoor positioning system, the airborne sensor system and the microprocessor achieve flight-path control of the rotorcraft, including autonomous vertical take-off and landing, accurate indoor positioning, autonomous hovering and autonomous waypoint tracking, as well as aircraft formation control based on the leader-follower strategy. The system and method thus provide a reliable, accurate and low-cost control strategy for autonomous flight of rotorcraft.
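
The cascaded inner/outer loop structure can be sketched with a toy one-axis simulation (gains, dynamics and time step are illustrative values, not from the patent): the outer position loop produces a tilt-angle setpoint, and the inner attitude loop tracks it.

```python
# One-axis toy simulation of cascaded position/attitude control.
dt = 0.01
g = 9.81
pos, vel, tilt = 0.0, 0.0, 0.0
target = 1.0                      # move 1 m along one horizontal axis

for _ in range(3000):             # 30 s of simulated flight
    # Outer loop (position): PD on position error, damped by velocity,
    # outputs the desired tilt angle.
    tilt_sp = 0.8 * (target - pos) - 0.4 * vel
    # Inner loop (attitude): first-order proportional tracking of tilt.
    tilt += 6.0 * (tilt_sp - tilt) * dt
    # Toy translational dynamics: small-angle tilt gives acceleration.
    vel += g * tilt * dt
    pos += vel * dt
# The aircraft settles at the target position with zero velocity and tilt.
```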

3D (three-dimensional) convolutional neural network based human body behavior recognition method

CN105160310A (Inactive) · The extracted features are highly representative · Fast extraction · Character and pattern recognition · Human body · Feature vector
The present invention discloses a 3D (three-dimensional) convolutional neural network based human body behavior recognition method, mainly used to solve the problem of recognizing specific human behaviors in the fields of computer vision and pattern recognition. The implementation steps are: (1) inputting video; (2) preprocessing to obtain a training sample set and a test sample set; (3) constructing a 3D convolutional neural network; (4) extracting feature vectors; (5) performing classification training; and (6) outputting the test result. In the method, human body detection and movement estimation are implemented with an optical-flow method, so a moving object can be detected without any prior knowledge of the scene. The network performs particularly well when its input is a multi-dimensional image, and it takes the image directly as input, avoiding the complex feature extraction and data reconstruction of conventional recognition algorithms and making human behavior recognition more accurate.
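
The core operation of a 3D convolutional network, convolving across time as well as space, can be shown with a minimal NumPy sketch (loops kept explicit for clarity; the temporal-difference kernel is an illustrative example of a motion-sensitive filter):

```python
import numpy as np

def conv3d(clip, kernel):
    """Valid 3D convolution of a single-channel clip (T x H x W) with a
    kernel (t x h x w): unlike 2D convolution, it mixes information
    across frames as well as across space."""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# A temporal-difference kernel responds only to change between frames.
clip = np.zeros((3, 5, 5))
clip[1, 2, 2] = 1.0                            # a briefly bright pixel
k = np.zeros((2, 1, 1))
k[0, 0, 0], k[1, 0, 0] = -1.0, 1.0             # frame(t+1) - frame(t)
resp = conv3d(clip, k)                         # peaks where motion occurs
```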

Deep convolutional neural network-based abnormal crowd behavior visual detection and analysis early warning system

The invention discloses a deep convolutional neural network-based system for visual detection, analysis and early warning of abnormal crowd behavior. The system comprises a camera mounted at a monitored target facility, a security cloud server, and the abnormal-crowd-behavior detection and early-warning software. Various human objects in the target facility are extracted with a deep convolutional neural network; the motion states of the human bodies are then calculated, identified and judged with an optical-flow method; the different states of the human objects are clustered and used for crowd modeling; the crowd objects further undergo density calculation and danger-index calculation; and finally, according to different combinations of quantitative indices for crowd density, motion-vector magnitude and duration, various abnormal crowd behaviors are identified and judged, and corresponding crowd-gathering management and control policies are enabled according to the state of the abnormal behavior. The system is unrestricted in scale, relatively high in precision and relatively robust.
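
A toy decision rule in the spirit of the index combination described above might look as follows (all thresholds and the danger-index formula are illustrative assumptions, not values from the patent):

```python
def crowd_alert(density, mean_speed, duration_s,
                density_thresh=4.0, speed_thresh=2.0, duration_thresh=5.0):
    """Flag an abnormal crowd state when crowd density (people/m^2),
    mean motion-vector magnitude (m/s) and its duration (s) jointly
    exceed thresholds; returns (alarm, danger_index)."""
    danger = (density / density_thresh) * (mean_speed / speed_thresh)
    sustained = duration_s >= duration_thresh
    return danger > 1.0 and sustained, danger

# A dense, fast-moving crowd sustained for 8 s triggers the alarm.
alarm, idx = crowd_alert(density=6.0, mean_speed=3.0, duration_s=8.0)
```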

Biopsy method for use in human face identification

The invention provides a biopsy (liveness detection) method for use in human face identification and belongs to the technical field of pattern recognition. The algorithm comprises: first, prompting the logging-in user to face the camera regardless of their current posture, finding the face and eye-socket areas with an Adaboost face classifier during the user's posture correction, determining the upper and lower eyelids and the left and right eye corners with differential projection, and precisely framing the positions of the eye sockets; second, computing the optical-flow field of two adjacent frames of the input video sequence with the Lucas-Kanade (LK) algorithm; and third, further processing the obtained optical-flow data to get the flow amplitude, counting the pixels with high amplitude, computing their proportion, and reporting eye movement if the proportion is large. Experiments show that for a real human face, eye movement is easily detected because posture correction and blinking generate strong optical flow at the eyes, whereas for a photograph the eye region moves only slightly no matter how the picture is moved or translated. The method achieves good results when used for liveness detection.