293 results about "Space time domain" patented technology

Infrared weak and small target detection method based on time-space domain background suppression

The invention belongs to the field of infrared image processing and relates to an infrared weak and small target detection method based on space-time domain background suppression, aimed at detecting moving weak and small targets against complex backgrounds. The method comprises the following steps: first, stationary background clutter in the spatial domain is suppressed by guided filtering; second, slowly varying background in the temporal domain is suppressed by a temporal gradient-weight filtering method that exploits the target motion information in the infrared image sequence; third, the temporal and spatial background suppression results are fused to obtain a background-suppressed weak and small target image; finally, the image is segmented with an adaptive threshold to detect the weak and small target. Because the method uses both the spatial grey-level information of the infrared target and its temporal motion information, background clutter is suppressed in both the temporal and the spatial domain, which greatly improves the detection of moving weak and small targets in complex backgrounds.
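
As a rough illustration of the four stages described above (spatial suppression by guided filtering, temporal gradient-weight filtering, fusion, adaptive thresholding), here is a minimal Python/NumPy sketch; the filter radius, the temporal weighting scheme and the threshold constant k are assumptions for illustration, not values taken from the patent.

```python
import numpy as np
import cv2

def guided_filter(I, p, r=8, eps=1e-3):
    """Standard guided filter (He et al.) built from OpenCV box filters."""
    mean = lambda x: cv2.boxFilter(x, -1, (r, r))
    m_I, m_p = mean(I), mean(p)
    corr_Ip, corr_II = mean(I * p), mean(I * I)
    a = (corr_Ip - m_I * m_p) / (corr_II - m_I * m_I + eps)
    b = m_p - a * m_I
    return mean(a) * I + mean(b)

def detect_small_targets(frames, k=4.0):
    """frames: list of float32 grayscale images in [0, 1], oldest first."""
    cur = frames[-1]
    # spatial suppression: subtract the guided-filter background estimate
    spatial_res = np.abs(cur - guided_filter(cur, cur))
    # temporal suppression: weight frame differences by the temporal gradient,
    # so slowly varying background is attenuated and moving pixels stand out
    stack = np.stack(frames, axis=0)
    diffs = np.abs(np.diff(stack, axis=0))
    weights = diffs / (diffs.sum(axis=0) + 1e-6)
    temporal_res = (weights * diffs).sum(axis=0)
    # fuse the two residuals and segment with an adaptive threshold
    fused = spatial_res * temporal_res
    thr = fused.mean() + k * fused.std()
    return fused > thr
```
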
Owner:SHANGHAI RONGJUN TECH

Infrared and visible light video image fusion method based on the Surfacelet transform

The invention discloses an infrared and visible light video image fusion method based on the Surfacelet transform, which mainly addresses the poor temporal consistency and stability of fused video images in the prior art. The method comprises the following steps: first, the input video images are decomposed at multiple scales and in multiple directions with the Surfacelet transform to obtain subband coefficients in different frequency bands; then, the low-frequency subband coefficients of the input videos are combined with a fusion rule that switches between selection and weighted averaging according to three-dimensional local space-time energy matching, and the band-pass directional subband coefficients are combined with a fusion rule based on three-dimensional local space-time energy and the standard deviation of the direction vector; finally, the inverse Surfacelet transform is applied to the combined subband coefficients to obtain the fused video image. The invention offers good fusion quality, high temporal consistency and stability, and low sensitivity to noise, and can be used for on-site security monitoring.
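
The Surfacelet decomposition itself is not reproduced here; purely as an illustration of the kind of low-frequency combination rule sketched above (weighted averaging when the 3-D local space-time energies of the two sources match, selection of the stronger source when they do not), a hypothetical fuse_lowpass helper might look like the following, with cA and cB assumed to be (t, y, x) coefficient arrays and the window size and matching threshold chosen arbitrarily.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(cA, cB, win=(3, 3, 3), thr=0.7):
    """Combine two low-frequency space-time subbands: weighted averaging where
    the sources agree (high local energy match), selection of the stronger
    source where they do not."""
    eA = uniform_filter(cA ** 2, size=win)              # local 3-D (t, y, x) energy
    eB = uniform_filter(cB ** 2, size=win)
    match = 2.0 * uniform_filter(cA * cB, size=win) / (eA + eB + 1e-12)
    w_max = 0.5 + 0.5 * (1.0 - match) / (1.0 - thr)     # weight given to the stronger source
    a_stronger = eA >= eB
    weighted = np.where(a_stronger, w_max * cA + (1 - w_max) * cB,
                                    (1 - w_max) * cA + w_max * cB)
    selected = np.where(a_stronger, cA, cB)
    return np.where(match > thr, weighted, selected)
```
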
Owner:XIDIAN UNIV

Space-time consistent segmentation method for video sequences with known camera parameters and depth information

Active · CN101789124A · Keep boundaries · Ensure consistency · Television system details · Image analysis · Space time domain · Mean shift segmentation
The invention discloses a space-time consistent segmentation method for video sequences with known camera parameters and depth information, comprising the following steps: (1) segmenting the video with the Mean-shift method; (2) accumulating statistics of the Mean-shift segmentation boundaries according to the camera parameters and depth information, and computing a probability boundary map for each frame; (3) segmenting the probability boundary map with a Watershed and energy-optimization method to obtain a more consistent segmentation result; (4) starting from the initial segmentation produced by the Watershed and energy-optimization step, matching and linking segments across frames to generate space-time segments; (5) using the camera parameters and depth to estimate the probability of each pixel belonging to each space-time segment, and iteratively optimizing frame by frame with the energy-optimization method to obtain a space-time consistent video segmentation result. The segmentation preserves object boundaries well and keeps the segments highly consistent across multiple frames of the video, without flickering or jumping.
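
Steps (1)-(3) on a single frame could be prototyped roughly as below with OpenCV; the Canny/Sobel boundary statistics, the 50/50 color-depth weighting and the seed threshold are stand-ins chosen for illustration, not the patent's energy-optimization formulation.

```python
import numpy as np
import cv2

def frame_presegmentation(frame_bgr, depth):
    """Rough single-frame version of steps (1)-(3): mean-shift smoothing, a
    probability-boundary map from color and depth edges, then watershed."""
    # (1) mean-shift pre-segmentation (spatial / color radii are assumed values)
    ms = cv2.pyrMeanShiftFiltering(frame_bgr, 10, 20)
    gray = cv2.cvtColor(ms, cv2.COLOR_BGR2GRAY)
    # (2) probability-boundary map: color edges reinforced by depth discontinuities
    color_edge = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0
    d = depth.astype(np.float32)
    depth_edge = cv2.magnitude(cv2.Sobel(d, cv2.CV_32F, 1, 0),
                               cv2.Sobel(d, cv2.CV_32F, 0, 1))
    depth_edge /= depth_edge.max() + 1e-6
    prob_boundary = np.clip(0.5 * color_edge + 0.5 * depth_edge, 0.0, 1.0)
    # (3) watershed over the boundary map, seeded from confidently non-boundary pixels
    seeds = (prob_boundary < 0.05).astype(np.uint8)
    _, markers = cv2.connectedComponents(seeds)
    labels = cv2.watershed(frame_bgr, markers.astype(np.int32))
    return labels                       # -1 marks watershed boundaries between segments
```
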
Owner:ZHEJIANG SENSETIME TECH DEV CO LTD

Target tracking method based on correlation of space-time-domain edge and color feature

Active · CN103065331A · Enhanced color differences · Easy extraction · Image analysis · Video monitoring · Time domain
The invention discloses a target tracking method based on the correlation of space-time-domain edge and color features. The method comprises the following steps: (1) selecting the target region to be tracked; (2) extracting the target's edge contour and computing the direction angle of each edge point; (3) collecting edge-color co-occurrence feature pairs along the two orthogonal directions (horizontal and vertical) and building a target edge-color correlation centroid model; (4) selecting the centroids of the edge-color pairs with high confidence and weighting them by probability to obtain the shift vector of the target centroid in the current frame; (5) computing histograms of target edge distances between adjacent frames and probability-weighting the successfully matched distance change rates to obtain the target scale factor. The method achieves target tracking in crowded scenes, under occlusion and under target scale changes, and improves the robustness, accuracy and real-time performance of tracking. It has broad application prospects in video image processing and can be applied to intelligent video surveillance, industrial automation, intelligent robotics and similar fields.
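
A much-simplified sketch of step (3), building an edge-color co-occurrence centroid model for a target region, is given below; the 8-bin gradient directions, the coarse color quantization and the helper name are assumptions for illustration only.

```python
import numpy as np
import cv2

def edge_color_centroid_model(roi_bgr, n_colors=8, canny=(50, 150)):
    """Coarse edge-color co-occurrence model of a target region: pair each edge
    pixel's gradient-direction bin with the quantized color of its horizontal and
    vertical neighbors, and store the centroid of every (direction, color) pair."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny[0], canny[1])
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angle_bin = (((np.arctan2(gy, gx) + np.pi) / (2 * np.pi)) * 8).astype(int) % 8
    q = roi_bgr.astype(int) // (256 // n_colors)               # quantized B, G, R
    color_key = (q[..., 0] * n_colors + q[..., 1]) * n_colors + q[..., 2]
    pairs = {}                                                 # (direction, color) -> pixel list
    h, w = gray.shape
    for y, x in zip(*np.nonzero(edges)):
        for ny, nx in ((y, min(x + 1, w - 1)), (min(y + 1, h - 1), x)):  # horizontal / vertical neighbor
            key = (int(angle_bin[y, x]), int(color_key[ny, nx]))
            pairs.setdefault(key, []).append((y, x))
    # centroid of each co-occurrence pair; high-confidence pairs would later vote
    # (probability-weighted) for the target centroid shift between frames
    return {k: np.mean(v, axis=0) for k, v in pairs.items()}
```
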
Owner:Nanjing Leisike Electronic Information Technology Co., Ltd. (南京雷斯克电子信息科技有限公司)

No-reference video quality evaluation method based on space-time domain natural scene statistics characteristics

Objective video quality evaluation is one of the important research topics for QoE services in the future. The invention provides a no-reference video quality evaluation method based on natural scene statistics (NSS). First, the video sequence is analyzed and statistics relating each pixel to its neighbors are computed, giving the spatial-domain statistical features of the video. Next, a prediction of frame n+1 is formed from the motion vectors and reference frame n, the motion residual image is computed, and the statistical distribution of the residual after DCT transformation is observed. The values from these two steps are used to compute the Mahalanobis distance between the spatial features and the natural-video features and between the temporal features and the natural-video features, giving the statistical deviation of the distorted video from natural video; the temporal and spatial information is then combined to obtain a per-frame quality score. Finally, a temporal pooling strategy based on the visual hysteresis effect yields the objective quality of the whole video sequence.
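
Two pieces of the pipeline translate naturally into code: the spatial NSS feature (mean-subtracted contrast-normalized coefficients) and the Mahalanobis distance to pre-computed natural-video statistics. The sketch below is a generic rendering of those two pieces, not the exact feature set of the patent; the Gaussian window and the normalization constant are assumed values.

```python
import numpy as np
import cv2
from scipy.spatial.distance import mahalanobis

def mscn_coefficients(frame_gray, ksize=7, sigma=7.0 / 6.0):
    """Mean-subtracted contrast-normalized coefficients: the spatial NSS map
    whose statistics deviate from those of natural video under distortion."""
    f = frame_gray.astype(np.float32)
    mu = cv2.GaussianBlur(f, (ksize, ksize), sigma)
    var = cv2.GaussianBlur(f * f, (ksize, ksize), sigma) - mu * mu
    return (f - mu) / (np.sqrt(np.abs(var)) + 1.0)

def nss_deviation(feature_vec, natural_mean, natural_cov):
    """Mahalanobis distance between a video's NSS feature vector and the
    statistics of pristine natural videos (assumed to be pre-computed offline)."""
    return mahalanobis(feature_vec, natural_mean, np.linalg.inv(natural_cov))
```
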
Owner:BEIJING UNIV OF POSTS & TELECOMM

Weak target detection method using space-time domain filtering with an adaptive pipe diameter

Active · CN106469313A · Effectively suppresses background clutter · Solves the scale change problem · Character and pattern recognition · Pattern recognition · Time domain
The invention discloses a weak target detection method using space-time domain filtering with an adaptive pipe diameter. The method comprises the following steps: first, background estimation is performed on the image to be processed with an anisotropic differential algorithm to improve subsequent target detection; the resulting difference image is segmented using local maxima to obtain a binary image; the temporal parameter (the accumulation frame length) and the spatial parameter (the pipe diameter) are initialized, and a sequence of binary images with accumulation frame length N is fed in; finally, space-time domain filtering with an adaptive pipe diameter is applied to the multi-frame images to obtain the real target points, and the detection results are superimposed to output the target motion trajectory. Compared with traditional pipeline-filtering detection with a fixed pipe diameter, the method exploits the correlation of target motion across multiple frames in the space-time domain and adapts the pipe diameter to changes in target scale, so the detection problems caused by a target growing or shrinking while the pipe diameter stays fixed are effectively solved and target detection precision is greatly improved.
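
A toy version of the adaptive-pipe-diameter pipeline filtering stage might look like the sketch below, which tracks candidate points across a stack of binary detection frames and lets each candidate's pipe radius follow the measured blob extent; the update rates and the hit threshold are invented for illustration.

```python
import numpy as np

def pipeline_filter(binary_frames, init_radius=3.0, min_hits=5):
    """Adaptive-radius pipeline filtering over a stack of binary detection frames:
    a candidate is accepted as a real target if it keeps being re-detected inside
    its space-time 'pipe'; the pipe radius follows the measured blob extent."""
    tracks = []                                   # each: {'pos', 'radius', 'hits', 'trail'}
    for frame in binary_frames:
        ys, xs = np.nonzero(frame)
        points = np.stack([ys, xs], axis=1).astype(float) if len(ys) else np.empty((0, 2))
        claimed = np.zeros(len(points), dtype=bool)
        for tr in tracks:
            if len(points) == 0:
                continue
            dist = np.linalg.norm(points - tr['pos'], axis=1)
            inside = dist <= tr['radius']
            if inside.any():
                hit_pts = points[inside]
                claimed |= inside
                tr['pos'] = hit_pts.mean(axis=0)
                tr['hits'] += 1
                tr['trail'].append(tuple(tr['pos']))
                # adapt the pipe diameter to the apparent target scale
                spread = np.ptp(hit_pts, axis=0).max() / 2.0 + 1.0
                tr['radius'] = 0.7 * tr['radius'] + 0.3 * spread
        for p in points[~claimed]:                # open a new pipe for unclaimed detections
            tracks.append({'pos': p, 'radius': init_radius, 'hits': 1, 'trail': [tuple(p)]})
    return [tr['trail'] for tr in tracks if tr['hits'] >= min_hits]
```
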
Owner:INST OF OPTICS & ELECTRONICS - CHINESE ACAD OF SCI

Convex optimization method for three-dimensional (3D)-video-based time-space domain motion segmentation and estimation model

The invention discloses a convex optimization method for a 3D-video-based space-time domain motion segmentation and estimation model. The method is implemented in the following steps: 1) the 3D-video-based space-time domain motion segmentation and estimation model is established from active contour theory and the mapping between the three-dimensional motion parameters of the background and the two-dimensional optical flow; 2) the model is converted into the corresponding level-set formulation, the corresponding gradient descent equation and its equivalent form are derived, the energy functional corresponding to the equivalent equation is computed, and convex relaxation is applied to this energy functional to obtain a convex space-time domain motion segmentation and estimation model; 3) a cost variable is introduced to further relax the convex model, the relaxed model is minimized with a multi-variable alternating iteration algorithm, and after the iterations converge the final segmentation surface is obtained with a chosen threshold function. The method adapts well to changes in the number of targets, its segmentation result does not depend on the initial contour, and it is computationally efficient.
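
The full multi-variable alternating scheme is beyond a short example, but the flavor of steps 2)-3), minimizing a convex-relaxed two-region energy over a label function u in [0, 1] and thresholding the result, can be sketched as follows; the data costs, step size and total-variation weight are placeholders, not the patent's formulation.

```python
import numpy as np

def relaxed_two_phase_segmentation(cost_fg, cost_bg, lam=0.5, tau=0.2, n_iter=300, thr=0.5):
    """Minimize E(u) = <u, cost_fg - cost_bg> + lam * TV(u) over u in [0, 1] by
    projected gradient descent, then threshold u to obtain the segmentation.
    cost_fg / cost_bg are per-pixel costs of assigning a pixel to each region."""
    u = np.full(cost_fg.shape, 0.5)
    data = cost_fg - cost_bg
    for _ in range(n_iter):
        # smoothed total-variation gradient: -div( grad(u) / |grad(u)| )
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        norm = np.sqrt(gx ** 2 + gy ** 2 + 1e-8)
        div = (np.diff(gx / norm, axis=1, prepend=np.zeros((u.shape[0], 1))) +
               np.diff(gy / norm, axis=0, prepend=np.zeros((1, u.shape[1]))))
        u = np.clip(u - tau * (data - lam * div), 0.0, 1.0)  # gradient step + projection onto [0, 1]
    return u > thr
```
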
Owner:ZHEJIANG UNIV

Code rate control method based on subjective region of interest and time-space domain combination

Active · CN106937118A · Improves the quality of subjective observation · Good subjective compression effect · Digital video signal modification · Pattern recognition · Time domain
The invention belongs to the technical field of HEVC (High Efficiency Video Coding) and discloses a rate control method based on a subjective region of interest combined with space-time domain information. The method comprises the following steps: computing CTU-level target bit allocation weights based on the spatial domain; computing the motion region of each frame, from optical flow and a global motion estimation algorithm, to serve as the temporal region of interest; adjusting the CTU-level target bit allocation weights according to this temporal region-of-interest result; applying a center-enhancement operation to the CTU-level weights obtained from the space-time subjective region-of-interest detection; deriving the final CTU-level target bit allocation used for encoding; and modifying the final evaluation criterion for the compressed video. Compared with the HM16.0 reference model, the method improves the subjective quality of HEVC compression while keeping the target bit rate unchanged.
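
The weight-adjustment part of the method (boosting CTUs in the temporal region of interest, center enhancement, renormalization to keep the frame's bit budget fixed) can be sketched as below; the ROI gain and the Gaussian center profile are assumptions, not the patent's exact formulas.

```python
import numpy as np

def ctu_bit_weights(spatial_weights, motion_roi_mask, roi_gain=1.5, center_sigma=0.35):
    """Adjust CTU-level target-bit weights: boost CTUs inside the temporal region
    of interest, apply a center-enhancement profile, then renormalize so the
    frame's total target bits stay unchanged."""
    h, w = spatial_weights.shape
    weights = spatial_weights * np.where(motion_roi_mask, roi_gain, 1.0)
    # center enhancement: weight CTUs near the frame center more heavily
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((yy - cy) / (h * center_sigma)) ** 2 + ((xx - cx) / (w * center_sigma)) ** 2
    weights = weights * np.exp(-0.5 * d2)
    return weights * (spatial_weights.sum() / weights.sum())  # keep the bit budget fixed
```
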
Owner:XIDIAN UNIV +1

Denoising device and method for sequence images

The invention relates to a denoising device and method for sequence images. The device comprises an input image unit, a motion estimation unit, a joint space-time domain filtering unit and a noise estimation module. The input image unit feeds the sequence images to the motion estimation unit, whose output is passed to the joint space-time domain filtering unit for further processing; the noise estimation module supplies data to the motion estimation unit. The motion estimation unit comprises a spatial filtering module, a differencing module, a difference-image fusion module, a morphological processing module and a motion estimation module: the spatial filtering module filters the images and passes them to the differencing module for difference computation; the difference-image fusion module performs weighted fusion of the difference images, which are then refined by the morphological processing module and finally passed to the motion estimation module for motion estimation. The device and method effectively separate moving objects from noise in the image sequence, effectively remove the noise, and improve image quality.
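
A compact sketch of the two units, the motion-estimation branch (spatial filtering, frame differencing, weighted fusion, morphology) and a simple joint space-time filter driven by the resulting motion mask, is given below; the frames are assumed to be 8-bit grayscale, and all kernel sizes, weights and the 3-sigma threshold are assumed values.

```python
import numpy as np
import cv2

def motion_mask(prev, cur, nxt, noise_sigma=5.0):
    """Motion-estimation branch: spatial filtering, two frame differences,
    weighted fusion, a noise-dependent threshold, and morphological clean-up."""
    blur = lambda f: cv2.GaussianBlur(f, (5, 5), 1.0)         # spatial filtering module
    d1 = cv2.absdiff(blur(cur), blur(prev))                   # differencing module
    d2 = cv2.absdiff(blur(cur), blur(nxt))
    fused = cv2.addWeighted(d1, 0.5, d2, 0.5, 0)              # difference-image fusion module
    mask = (fused > 3.0 * noise_sigma).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # morphological processing module
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def joint_spatiotemporal_filter(prev, cur, nxt, mask, alpha=0.5):
    """Joint space-time filtering: temporal averaging on static pixels,
    spatial-only filtering on moving pixels to avoid motion ghosting."""
    temporal = cv2.addWeighted(cur, alpha,
                               cv2.addWeighted(prev, 0.5, nxt, 0.5, 0), 1.0 - alpha, 0)
    spatial = cv2.bilateralFilter(cur, 5, 30, 5)
    return np.where(mask > 0, spatial, temporal)
```
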
Owner:Hangzhou Xiongmai Integrated Circuit Technology Co., Ltd. (杭州雄迈集成电路技术股份有限公司)

Video steganalysis method based on a space-time domain local binary pattern

The invention discloses a video steganalysis method based on a space-time domain local binary pattern (LBP). The method comprises the following steps: step 1, classifying each frame of the video sequence to be analyzed, in units of 8x8 blocks, according to a regional-activity criterion; step 2, constructing for each class the three orthogonal planes spanning the spatial and temporal axes, and computing the LBP histograms of the different three-dimensional orthogonal planes corresponding to each region in the classification result; step 3, introducing an activity factor and combining it with the corresponding LBP histograms to obtain the ST_LBP features; step 4, selecting features with the Fisher Ratio method and classifying to detect the steganographic information. Compared with the prior art, introducing the activity factor reduces the interference that moving objects in active regions cause to the detection result, which effectively improves detection performance; the feature dimension is no longer a limitation, so the radius R and the number of neighborhood points P in the LBP definition can be enlarged, making fuller use of the space-time correlation of the video sequence.
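
A bare-bones rendering of steps 2-3, LBP histograms on the three orthogonal planes of an 8x8 space-time block scaled by an activity factor, is sketched below; the plain 8-neighbor LBP and the choice of center planes are simplifications of the patent's R, P parameterization.

```python
import numpy as np

def lbp_codes(plane):
    """Basic 8-neighbor LBP codes for a 2-D array (border pixels are skipped)."""
    c = plane[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = plane[1 + dy:plane.shape[0] - 1 + dy, 1 + dx:plane.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def st_lbp_feature(block_cube, activity):
    """ST_LBP feature of one space-time block: LBP histograms on the three
    orthogonal planes (XY, XT, YT) through the cube center, scaled by the
    block's activity factor so busy regions contribute less to detection."""
    t, h, w = block_cube.shape
    planes = (block_cube[t // 2], block_cube[:, h // 2, :], block_cube[:, :, w // 2])
    hists = [np.bincount(lbp_codes(p).ravel(), minlength=256) for p in planes]
    return activity * np.concatenate(hists).astype(np.float32)
```
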
Owner:TIANJIN UNIV