237 results about "Optical flow estimation" patented technology

Optical flow estimation is used in computer vision to characterize and quantify the motion of objects in a video stream, often for motion-based object detection and tracking systems. A common application is detecting moving objects across a series of frames from the estimated optical flow field.
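As a concrete illustration of the general technique (independent of the patents below), here is a minimal Python sketch of motion-based moving-object detection using OpenCV's Farneback dense optical flow; the video path and motion threshold are placeholder assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense flow: one (dx, dy) vector per pixel between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Pixels moving faster than an (assumed) threshold count as "moving";
    # this mask would feed a detection/tracking stage.
    moving_mask = (magnitude > 2.0).astype(np.uint8) * 255
    prev_gray = gray

cap.release()
```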

Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion

The invention discloses an unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion. The method includes the steps of: obtaining image information in real time through an airborne binocular camera; computing image depth information on a graphics processing unit (GPU); extracting the geometric contour of the most threatening obstacle from the depth information and calculating the threat distance with a threat depth model; fitting a rectangle to the obstacle contour to obtain a tracking window and calculating the optical flow field of the obstacle region to obtain the velocity of the obstacle relative to the UAV; and issuing an avoidance flight instruction from the flight control computer according to the calculated obstacle distance, geometric contour, and relative velocity. The invention effectively fuses the obstacle depth information with the optical flow vectors, obtains the motion of the obstacle relative to the UAV in real time, improves the UAV's visual obstacle avoidance capability, and offers better real-time performance and accuracy than traditional algorithms.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
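As an illustration of the fusion idea described above, here is a highly simplified Python sketch, not the patented method itself: the depth map is assumed to come from a stereo matcher, and the threat distance and frame rate are illustrative values.

```python
import cv2
import numpy as np

def nearest_obstacle_roi(depth, threat_dist=5.0):
    """Fit a rectangle around the largest region closer than
    `threat_dist` meters: the 'most threatening obstacle'."""
    mask = ((depth > 0) & (depth < threat_dist)).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)  # (x, y, w, h) tracking window

def relative_velocity(prev_gray, gray, roi, fps=30.0):
    """Mean optical flow inside the obstacle window, in pixels/second,
    as a proxy for obstacle velocity relative to the UAV."""
    x, y, w, h = roi
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y:y+h, x:x+w],
                                        gray[y:y+h, x:x+w], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0) * fps
```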

End-to-end optical flow estimation method based on multi-stage loss

The invention discloses an end-to-end optical flow estimation method based on multi-stage loss. The method comprises the steps of: feeding two adjacent frames into the same feature extraction convolutional neural network to obtain a multi-scale feature map for each frame; performing a correlation operation on the two feature maps at each scale to obtain multi-scale loss information; concatenating, at each scale, the loss information, the first frame's feature map at that scale, and the optical flow predicted at the previous stage, and feeding the result into an optical flow prediction convolutional neural network to obtain a residual flow, which is added to the upsampled flow from the previous stage to yield the optical flow at the current scale; and fusing the second-level flow with the two input frames and feeding the result into a motion-edge refinement network to obtain the final optical flow prediction. The method improves both the accuracy and the efficiency of optical flow estimation.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
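The coarse-to-fine residual scheme the abstract describes can be sketched in a few lines of PyTorch; this is a hypothetical simplification, with channel sizes and the correlation (cost volume) step assumed rather than taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualFlowStage(nn.Module):
    """One pyramid level: predict a residual flow and add it to the
    upsampled flow from the coarser stage."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1))  # 2 output channels: (dx, dy)

    def forward(self, feat1, cost_volume, coarse_flow):
        # Upsample the coarser flow to this scale; magnitudes double
        # because the spatial resolution doubles.
        up = 2.0 * F.interpolate(coarse_flow, scale_factor=2,
                                 mode="bilinear", align_corners=False)
        x = torch.cat([feat1, cost_volume, up], dim=1)  # in_ch channels total
        return up + self.net(x)  # residual added to the upsampled flow
```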

Effective estimation method for large displacement optical flows

Active CN105809712A · Accurate large displacement matching · Image enhancement · Image analysis · Estimation methods · Computer science
The invention discloses an effective estimation method for large displacement optical flows, comprising: capturing two consecutive frames from a video and marking them I1 and I2 in time order; taking I1 and I2 as the lowest layers and building an image pyramid for each (described in the specification); generating equal numbers of seed points on each layer of the pyramids (described in the specification); initializing the seed-point matches at the top layer (described in the specification) with random values; matching the seed points layer by layer from the top down to the lowest layer of the pyramids, with each layer's matching result serving as the initial value for the next layer; interpolating the lowest-layer matching result with an edge-aware interpolation algorithm; and using the interpolated result as the initial value for optical flow estimation and refining it with a variational energy optimization model to obtain the final large displacement optical flow estimate. The method yields more efficient and flexible results, and the number of seed points can be tuned per application scenario to trade off efficiency against accuracy.
Owner:XIDIAN UNIV
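A schematic Python sketch of this coarse-to-fine seed-matching pipeline follows; the per-level matcher `refine_matches` is a hypothetical stand-in for the matching procedure the abstract leaves to the specification.

```python
import cv2
import numpy as np

def build_pyramid(img, levels=5, scale=0.5):
    """pyr[0] is the original image (the lowest layer)."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.resize(pyr[-1], None, fx=scale, fy=scale))
    return pyr

def seed_grid(shape, n=32):
    """n*n seeds spread uniformly: equal numbers on every layer."""
    ys = np.linspace(0, shape[0] - 1, n)
    xs = np.linspace(0, shape[1] - 1, n)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float32)

def coarse_to_fine_matches(pyr1, pyr2, refine_matches, n=32):
    top = len(pyr1) - 1
    # Random initialization of the seed matches at the top layer.
    offsets = np.random.uniform(-4, 4, size=(n * n, 2)).astype(np.float32)
    for lvl in range(top, -1, -1):
        seeds = seed_grid(pyr1[lvl].shape[:2], n)
        offsets = refine_matches(pyr1[lvl], pyr2[lvl], seeds, offsets)
        if lvl > 0:
            offsets *= 2.0  # each layer's result initializes the finer layer
    # The lowest-layer matches would then be densified by edge-aware
    # interpolation and refined by variational energy minimization.
    return seeds, offsets
```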

Lightweight video interpolation method based on feature-level optical flow

The invention discloses a lightweight video interpolation method based on feature-level optical flow, aimed at the poor practicality of existing lightweight video interpolation methods. According to the technical scheme, a multi-scale transformation is first applied to two consecutive frames of a given video, and a feature-level optical flow estimation module computes the forward and backward optical flows between the two frames at the current scale; the two images are warped in time order according to the forward and backward flows respectively to obtain two interpolated images; the interpolated images are combined into a four-dimensional tensor, which is processed with three-dimensional convolution to obtain the interpolated image at that scale; and a weighted average over the different scales yields the final interpolated image. By performing video interpolation with feature-level optical flow and multi-scale fusion, the method improves both the precision and the speed of video interpolation. A 1.03 MB network model achieves an average peak signal-to-noise ratio of 32.439 and a structural similarity of 0.886.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
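The warp-and-fuse step at a single scale can be illustrated with a short PyTorch sketch; this assumes linear motion between the frames and replaces the patent's 3D-convolution fusion with a plain average, so it is an approximation rather than the patented method.

```python
import torch
import torch.nn.functional as F

def backwarp(img, flow):
    """Sample `img` (B,C,H,W) at locations displaced by `flow` (B,2,H,W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().to(img.device)  # (2,H,W), (x,y)
    coords = base.unsqueeze(0) + flow
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)  # (B,H,W,2)
    return F.grid_sample(img, grid, align_corners=True)

def midpoint_frame(frame0, frame1, flow_fwd, flow_bwd):
    """flow_fwd: frame0 -> frame1, flow_bwd: frame1 -> frame0.
    Under linear motion, the mid-frame-to-source flows are roughly
    -0.5 times the frame-to-frame flows."""
    warped0 = backwarp(frame0, -0.5 * flow_fwd)
    warped1 = backwarp(frame1, -0.5 * flow_bwd)
    return 0.5 * (warped0 + warped1)  # plain average stands in for fusion
```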

Method and device for video abnormal behavior detection based on Bayesian surprise calculation

The invention provides a method and a device for video abnormal behavior detection based on Bayesian surprise calculation. The method comprises the steps of: extracting spatio-temporal interest points (STIPs) in a video frame as points to be detected, and using the magnitude and direction of each point's motion velocity, estimated by optical flow, as features for surprise calculation; computing prior and posterior probability distributions over the spatial and temporal dimensions of the video, and calculating a spatial surprise degree and a temporal surprise degree for each point; combining the temporal and spatial surprise degrees into a total surprise degree; and raising an alarm when the surprise values of multiple points exceed a threshold. The device comprises an STIP detection module, a feature extraction module, a surprise calculation module and an anomaly detection module. The method and device can detect several specific types of sudden anomalous events, and the anomaly analysis algorithm offers good applicability and high classification accuracy.
Owner:UNIV OF SCI & TECH OF CHINA
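The underlying notion of Bayesian surprise, the divergence between the belief after and before an observation, can be shown with a toy Python example; the Dirichlet model over quantized flow directions is an illustrative modeling choice, not taken from the patent.

```python
import numpy as np

def surprise(prior_counts, obs_bin):
    """KL(posterior || prior) after updating a Dirichlet belief over
    quantized flow directions with one observation (mean distributions
    are compared, a common simplification)."""
    post_counts = prior_counts.copy()
    post_counts[obs_bin] += 1.0
    p = post_counts / post_counts.sum()    # posterior mean distribution
    q = prior_counts / prior_counts.sum()  # prior mean distribution
    return float(np.sum(p * np.log(p / q)))

# Usage: keep a per-region direction histogram over time and flag an
# anomaly when many interest points exceed a surprise threshold.
prior = np.ones(8)                 # 8 direction bins, flat prior
print(surprise(prior, obs_bin=3))  # small: nothing is surprising yet
prior[0] = 100.0                   # belief now strongly favors bin 0
print(surprise(prior, obs_bin=3))  # larger: a rare direction appeared
```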

Video depth map estimation method and device with space-time consistency

Active CN110782490A · Improved correlation · Reduced error between consecutive frames · Image enhancement · Image analysis · Network structure · Video sequence
The invention provides a video depth map estimation method and device with space-time consistency. The method comprises the following steps: generating a training set by taking the central frame as the target view and the preceding and following frames as source views, producing a plurality of sequences; for static objects in the scene, constructing a framework for jointly training monocular depth and camera pose estimation from unlabeled video sequences, which includes building a depth map estimation network, building a camera pose estimation network, and defining the loss function for this part; for moving objects in the scene, cascading an optical flow network after the framework to model motion in the scene, which includes building an optical flow estimation network and defining the loss function for this part; proposing a loss function for the deep neural network that enforces space-time consistency of the depth maps; continuously optimizing the model by jointly training monocular depth and camera pose estimation and then training the optical flow network; and using the optimized model to estimate depth maps for continuous video frames.
Owner:WUHAN UNIV
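The self-supervised signal at the heart of such a framework, reconstructing the target view from a source view via predicted depth and camera pose, can be sketched compactly in PyTorch; the intrinsics K and the 4x4 relative pose T are assumed given, and details such as masking and multi-scale losses are omitted.

```python
import torch
import torch.nn.functional as F

def warp_source_to_target(source, depth, T, K):
    """Reconstruct the target view by sampling `source` (B,3,H,W) with the
    predicted target depth (B,1,H,W), pose T (4,4) and intrinsics K (3,3)."""
    b, _, h, w = source.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ones = torch.ones(h, w)
    pix = torch.stack([xs.float(), ys.float(), ones], 0).reshape(3, -1)  # (3,HW)
    # Back-project target pixels to 3D using the predicted depth.
    cam = (torch.linalg.inv(K) @ pix) * depth.reshape(b, 1, -1)          # (B,3,HW)
    cam_h = torch.cat([cam, torch.ones(b, 1, h * w)], dim=1)             # (B,4,HW)
    # Move the points by the predicted target->source motion and project.
    proj = K @ (T @ cam_h)[:, :3]
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    gx = 2.0 * uv[:, 0] / (w - 1) - 1.0
    gy = 2.0 * uv[:, 1] / (h - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(b, h, w, 2)
    return F.grid_sample(source, grid, align_corners=True)

def photometric_loss(target, source, depth, T, K):
    # The L1 reconstruction error trains the depth and pose networks jointly.
    return (target - warp_source_to_target(source, depth, T, K)).abs().mean()
```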