175 results about "Motion intensity" patented technology

Time-space domain hybrid video noise reduction device and method

The invention relates to a time-space domain hybrid video noise reduction device and method, belonging to the technical field of video image processing. The device comprises a time domain noise reduction module, a smoothing coefficient storage module, a reference frame storage module and a space domain noise reduction module. The method comprises the following steps: the time domain noise reduction module calculates the noise variance, the motion intensity of the current point and a weighting coefficient for the current point from the current point and a reference point; it then calculates and stores the smoothing coefficient of the current point and the time domain filtering result of the current frame; finally, the space domain noise reduction module performs space domain filtering to obtain noise-reduced image data. Combining time domain and space domain noise reduction in this way achieves a good noise reduction effect while effectively reducing the system overhead required for noise reduction and lowering system cost. The device is simple in structure, and the method can be conveniently and widely applied.
Owner:SHANGHAI FULLHAN MICROELECTRONICS
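The abstract above describes a two-stage pipeline: a motion-adaptive temporal filter followed by a spatial filter. The patent does not publish its exact formulas, so the sketch below is a minimal illustration under assumed weighting: the temporal blend weight decays with per-pixel motion intensity (to avoid ghosting on moving content), and a 3x3 box blur stands in for the unspecified spatial filter. All parameter names (`noise_var`, `k`) are hypothetical.

```python
import numpy as np

def temporal_filter(cur, ref, noise_var=4.0, k=8.0):
    """Blend current frame with reference frame; the blend weight falls off
    with motion intensity so moving areas keep the current frame's pixels."""
    motion = np.abs(cur.astype(np.float64) - ref.astype(np.float64))
    # Still areas (motion below the noise floor) get heavy temporal smoothing.
    alpha = np.exp(-np.maximum(motion - np.sqrt(noise_var), 0.0) / k)
    return alpha * ref + (1.0 - alpha) * cur

def spatial_filter(img):
    """3x3 box blur as a stand-in for the patent's unspecified spatial filter."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def denoise(cur, ref):
    """Hybrid pipeline: temporal filtering first, then spatial filtering."""
    return spatial_filter(temporal_filter(cur, ref))
```

On real video the `ref` frame would itself be a previously denoised frame read from the reference frame storage module, which is what makes the temporal filter recursive.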

Depth sequence generation method for converting plane video into stereo video

CN101271578A (Active)
The invention relates to a depth sequence generation method for converting a plane (2D) video into a stereo video, belonging to the technical field of computer multimedia, in particular to the conversion of ordinary plane video into stereo video. The method includes: extracting, with an optical flow algorithm, the two-dimensional motion of each pixel of a frame in the original two-dimensional video sequence to obtain a motion intensity image for that frame; fusing the color information of each frame with its motion intensity image, using the minimum discrimination information principle, to obtain a motion-color discrimination image used for segmenting the video image; segmenting the image according to the luminance of the motion-color discrimination image; and assigning a different depth value to each segmented region to obtain a depth map for each frame. The depth maps of all frames constitute the depth sequence. By jointly using the spatial and temporal information of the video sequence, the segmentation and depth judgments are accurate and reliable.
Owner:TSINGHUA UNIV
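The pipeline above (motion intensity → fusion with color → segmentation → per-region depth) can be sketched in a few lines. This is a simplification under stated assumptions: a frame difference stands in for the patent's optical-flow magnitude, a linear blend stands in for its minimum-discrimination-information fusion, and quantization stands in for its segmentation; `w` and `levels` are hypothetical parameters.

```python
import numpy as np

def motion_intensity(prev, cur):
    """Crude stand-in for the optical-flow magnitude: absolute frame difference."""
    return np.abs(cur.astype(np.float64) - prev.astype(np.float64))

def motion_color_map(luma, motion, w=0.5):
    """Fuse luminance with normalized motion intensity into one discrimination
    map (a linear blend is assumed here in place of the patented fusion)."""
    m = motion / (motion.max() + 1e-9)
    y = luma / 255.0
    return w * m + (1.0 - w) * y

def depth_map(fused, levels=4):
    """Quantize the fused map into regions and assign one depth per region."""
    bins = np.clip((fused * levels).astype(int), 0, levels - 1)
    return bins * (255 // (levels - 1))
```

Applying `depth_map` to every frame of a sequence yields the depth sequence the abstract describes; a production system would use dense optical flow (e.g. a Lucas-Kanade or Farnebäck-style method) instead of frame differencing.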

Movie action scene detection method based on story line development model analysis

The invention discloses a movie action scene detection method based on analysis of a story line development model, comprising the steps of: preprocessing a video; calculating the length of each scene; calculating the average action intensity of each scene; calculating a movie editing factor from the scene length and the average action intensity; calculating the short-term audio energy of each audio frame and the average audio energy of each scene; calculating the average action dispersity of each scene; calculating a human perception factor from the average audio energy and the average action dispersity; establishing the story line development model from the movie editing factor and the human perception factor, and generating a story line development flow graph in time order; and detecting action scenes in the movie according to the model. By building the model from the two viewpoints of movie editing technique and human perception, visual and auditory factors are combined to simulate how the story line develops and changes, enabling accurate detection of the action scenes in a movie.
Owner:欢瑞世纪(东阳)影视传媒有限公司
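The two factors above can be sketched numerically. The combinations below are assumptions for illustration only (the patent does not publish its formulas): the editing factor grows as shots get shorter and motion more intense, the perception factor grows with loud, widely dispersed action, and `thr` is a hypothetical decision threshold.

```python
def editing_factor(scene_len, avg_intensity):
    """Action scenes tend to be cut into short, high-motion shots, so the
    factor rises as scene length shrinks and action intensity grows."""
    return avg_intensity / (scene_len + 1.0)

def perception_factor(avg_audio_energy, action_dispersity):
    """Loud audio spread across widely dispersed action raises perceived
    excitement (both inputs assumed normalized to comparable scales)."""
    return avg_audio_energy * action_dispersity

def is_action_scene(scene_len, avg_intensity, audio_energy, dispersity, thr=1.0):
    """Combine both factors and threshold; a real system would instead feed
    them into the story line development model over the whole movie."""
    score = editing_factor(scene_len, avg_intensity) + \
            perception_factor(audio_energy, dispersity)
    return score > thr
```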

Fighting detection method based on stereoscopic vision motion field analysis

CN102880444A (Inactive)
The invention relates to a fighting detection method based on stereoscopic vision motion field analysis. The target human body is extracted according to the motion vector field and the depth map of the target. Taking the motion vector field as the main feature, the entropy of a cumulative block-statistics histogram of motion vector directions over a certain time window is calculated; a motion intensity evaluation strategy is designed according to the depth information of the target by computing the average motion vector magnitude within each block; and finally the disorder and intensity scores are combined to obtain the probability of fighting behavior. Through accumulation over time and space, fighting is confirmed when this probability remains above a threshold. The method filters out motion vector errors caused by scene illumination changes and object occlusion in a monitoring environment; the intensity judgment is more reasonable thanks to the scene depth information, and a robust detection result is obtained by analyzing the spatio-temporal distribution with the fighting probability model.
Owner:ZHEJIANG ICARE VISION TECH

High efficiency video coding sensing rate-distortion optimization method based on structural similarity

CN103607590A (Active)
Provided is a high efficiency video coding (HEVC) perceptual rate-distortion optimization method based on structural similarity, comprising the following steps: (1) before the encoder performs mode decision, image distortion is calculated using structural similarity as the distortion criterion, replacing the coded-image distortion value normally used in the encoder's rate-distortion decision; (2) the Lagrangian multipliers used in the rate-distortion decision are corrected according to the motion intensity consistency between adjacent regions in the spatial and temporal domains of the coded image, and rate-distortion optimization of the current coding region is then carried out. Because structural similarity serves as the distortion criterion, and the spatial and temporal correlation between successive frames in inter-frame coding is used to derive the motion intensity consistency of the current coding region and thereby correct the parameters used in the rate-distortion decision, the perceptual visual quality of the coded image is greatly improved without greatly increasing computational complexity.
Owner:BEIJING UNIV OF POSTS & TELECOMM
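The essence of step (1) is swapping SSE for 1 − SSIM in the rate-distortion cost J = D + λ·R, and step (2) scales λ by a motion-consistency term. The sketch below uses a single-window global SSIM for brevity (real encoders use windowed SSIM per block), and the multiplicative form of the λ correction is an assumption.

```python
import numpy as np

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global (single-window) SSIM between two equally sized blocks;
    c1, c2 are the standard stabilizing constants for 8-bit data."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def rd_cost(orig, recon, rate, lam, motion_consistency=1.0):
    """Perceptual RD cost: distortion is 1 - SSIM instead of SSE, and the
    Lagrange multiplier is scaled by a motion-consistency term (form assumed)."""
    distortion = 1.0 - ssim(orig.astype(np.float64), recon.astype(np.float64))
    return distortion + (lam * motion_consistency) * rate
```

Mode decision then simply picks the candidate mode with the smallest `rd_cost`, exactly as with the conventional SSE-based cost.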

Moon surface dust environment simulation method and device

CN103318428A (Active)
The invention discloses a moon surface dust environment simulation method and device. The method includes: (1) evacuating the atmosphere in a chamber with vacuum equipment until the vacuum degree approximates the lunar surface environment; (2) placing dust samples in a sample groove in the chamber, adjusting their temperature within the range of -190 to 150 DEG C, and sealing them in the groove with a microporous film; (3) irradiating the dust samples with a deep ultraviolet spectrum so that they become charged through the external photoelectric effect; and (4) levitating the dust samples by electric field attraction to create the dust environment, controlling the environmental temperature with a quartz lamp array, and controlling the dust density and motion intensity by adjusting the ray intensity and electric field intensity, thereby simulating the lunar dust environment. The invention solves the problem that the actual lunar surface dust environment could not previously be simulated effectively; the method is easy to implement, and the device is simple in structure and effective in use.
Owner:INST OF GEOCHEM CHINESE ACADEMY OF SCI

Digital inertia dumbbell capable of measuring motion parameters

CN103394178A (Inactive)
Provided is a digital inertia dumbbell capable of measuring motion parameters. A circuit board is fixed inside a shield, and a Hall code disc carrying a Hall element is connected to the circuit board, with its axis coinciding with that of an eccentric rotary shaft. A magnet rotary disc, parallel and adjacent to the Hall code disc, is arranged inside the shield; it is fixed to one end of the eccentric rotary shaft, coaxial with it, and carries a permanent magnet. A microprocessor on the circuit board is connected to the Hall element on the Hall code disc, to a display screen and to a data storage module; the display screen is fixed in the shield. The dumbbell can measure in real time the number, duration, speed and interruption count of rotations of the eccentric rotary shaft; the motion parameters are displayed on the screen in real time or stored in the data storage module, so that motion intensity, coordination and training progress can be reflected accurately.
Owner:SHANGHAI UNIV OF SPORT
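The firmware's job reduces to turning Hall-sensor pulse timestamps into rotation count, duration and speed. A minimal sketch, assuming one pulse per revolution (a real coded disc would give several pulses per revolution plus direction, and interruption detection would use gaps between pulses):

```python
def rotation_stats(pulse_times_s, pulses_per_rev=1):
    """Derive motion parameters from Hall-sensor pulse timestamps (seconds).
    Only count, duration and mean speed are shown; direction and interrupted
    rotations would need the full coded-disc pattern."""
    if len(pulse_times_s) < 2:
        return {"revolutions": 0.0, "duration_s": 0.0, "mean_rpm": 0.0}
    duration = pulse_times_s[-1] - pulse_times_s[0]
    # N timestamps bound N-1 inter-pulse intervals, i.e. (N-1)/ppr revolutions.
    revs = (len(pulse_times_s) - 1) / pulses_per_rev
    return {"revolutions": revs,
            "duration_s": duration,
            "mean_rpm": 60.0 * revs / duration}
```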

Cross-view-angle action identification method and system based on time sequence information

CN104200218A (Active)
The invention discloses a cross-view action recognition method and system based on time sequence information, relating to the technical field of pattern recognition. The method includes: detecting interest points in videos, including a source-view video and a target-view video, and extracting the motion intensity at those interest points; accumulating the motion intensity over time, according to the videos' time sequence information, to obtain a motion feature description of the videos; coarsely labeling the target-view video, according to the motion feature description and the source coarse-labeling information of the source-view video, to obtain target coarse-labeling information; performing metric learning on the source-view and target-view videos from the source and target coarse labels to obtain a cross-view metric; and classifying the actions in the target-view video with the cross-view metric to complete cross-view action recognition.
Owner:中科海微(北京)科技有限公司
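Two of the steps above lend themselves to a short sketch: temporal accumulation of motion intensity into a fixed-length descriptor, and classification under a learned metric. The segment-sum descriptor and the given Mahalanobis matrix `M` are illustrative assumptions; the patent learns `M` from the coarse labels on both views.

```python
import numpy as np

def motion_feature(intensities, n_segments=4):
    """Temporally accumulate per-frame interest-point motion intensity into a
    fixed-length descriptor (one cumulative sum per equal time segment)."""
    chunks = np.array_split(np.asarray(intensities, dtype=np.float64), n_segments)
    return np.array([c.sum() for c in chunks])

def mahalanobis_classify(query, labeled_feats, labels, M):
    """Nearest-neighbour classification under metric M: the cross-view metric
    maps source-view and target-view descriptors into a comparable space."""
    dists = [float((query - f) @ M @ (query - f)) for f in labeled_feats]
    return labels[int(np.argmin(dists))]
```

With `M = I` this degenerates to plain Euclidean nearest neighbour; the benefit of metric learning is precisely that a non-identity `M` absorbs the systematic differences between viewing angles.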