
143 results about "Video reconstruction" patented technology

Quasi-three dimensional reconstruction method for acquiring two-dimensional videos of static scenes

The invention discloses a quasi-three-dimensional reconstruction method for acquiring two-dimensional videos of static scenes, which belongs to the field of computer-vision three-dimensional video reconstruction. The method comprises the following steps: step A, extracting a double-viewpoint image pair from each frame of a two-dimensional video; step B, applying epipolar rectification to each double-viewpoint image pair; step C, adopting a binocular stereo-matching method based on global optimization to solve the globally optimal disparity map of each rectified double-viewpoint pair; step D, inverse-rectifying the globally optimal disparity maps to obtain the disparity maps corresponding to all frames of the three-dimensional video; step E, splicing the disparity maps obtained in step D according to the video frame sequence to form a disparity-map sequence, and optimizing that sequence; and step F, combining all the extracted video frames with their corresponding disparity maps, adopting a depth-image-based rendering (DIBR) method to recover virtual-viewpoint images, and splicing the virtual-viewpoint images into a virtual-viewpoint video. The method is low in computational complexity, simple and practicable.
Owner:NANJING UNIV OF POSTS & TELECOMM
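Step F's depth-image-based rendering can be illustrated with a minimal forward-warping sketch (a simplification with hypothetical names, not the patented method): each pixel is shifted horizontally by its scaled disparity to synthesize a virtual viewpoint, leaving disocclusion holes at zero.

```python
import numpy as np

def dibr_warp(image, disparity, scale=1.0):
    """Warp a view to a virtual viewpoint by shifting each pixel
    horizontally by its (scaled) disparity; later writes win,
    and unfilled pixels remain 0 (disocclusion holes)."""
    h, w = image.shape[:2]
    virtual = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(scale * disparity[y, x]))
            if 0 <= nx < w:
                virtual[y, nx] = image[y, x]
    return virtual
```

A real DIBR pipeline would additionally fill holes (e.g. by inpainting), which this sketch omits.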

Fluid-solid interaction simulation method based on video reconstruction and SPH model

The invention discloses a fluid-solid interaction simulation method based on video reconstruction and an SPH model. The method includes the following steps: 1) adopting a shape-from-shading method to quickly reconstruct the geometric information of the fluid surface in a video, obtaining a surface height field for each frame of the input image; 2) combining the height fields with a shallow-water equation and computing the velocity field on the fluid surface by minimizing an energy equation; 3) using the surface geometric information as a boundary constraint and discretizing it into a full three-dimensional volume to obtain volume data; 4) importing the volume data reconstructed from the video into an SPH simulation scene as the simulation's initial condition, where it interacts with the other virtual objects in the scene. The method achieves high-accuracy data reconstruction, retains fluid-surface details, and performs bidirectional interaction simulation on the basis of the reconstructed data and a physical simulation model, yielding a fluid animation effect closer to reality with low algorithmic complexity and high novelty compared with related algorithms.
Owner:EAST CHINA NORMAL UNIV
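Step 4 hands the reconstructed volume data to an SPH simulation. As background, the core of SPH density estimation can be sketched as follows (a textbook poly6 kernel, not the patent's solver; all names are illustrative):

```python
import numpy as np

def poly6(r, h):
    """Standard poly6 smoothing kernel used in SPH density estimation."""
    if r < 0 or r > h:
        return 0.0
    coef = 315.0 / (64.0 * np.pi * h**9)
    return coef * (h * h - r * r) ** 3

def sph_density(positions, masses, h):
    """Estimate density at each particle by summing mass-weighted
    kernel contributions of all particles within radius h."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(positions[i] - positions[j])
            rho[i] += masses[j] * poly6(r, h)
    return rho
```

Production SPH codes replace the O(n²) double loop with a spatial hash over the support radius.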

Video description generation system based on graph convolution network

The invention belongs to the technical field of cross-media generation, and particularly relates to a video description generation system based on a graph convolution network. The system comprises a video feature extraction network, a graph convolution network, a visual attention network and a sentence description generation network. The video feature extraction network samples the videos to obtain video features and outputs them to the graph convolution network. The graph convolution network reconstructs the video features according to their semantic relations and feeds them into the recurrent neural network of the sentence description generation network, which then generates sentences from the reconstructed video features. Because the frame-level and target-level feature sequences in the videos are reconstructed by graph convolution, the temporal and semantic information in the videos is fully utilized when description statements are generated, making the generation more accurate. The invention is of great significance to video analysis and multi-modal information research, can improve a model's understanding of video visual information, and has wide application value.
Owner:FUDAN UNIV
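The graph-convolution operation used to re-encode frame-level and target-level features can be sketched in its standard degree-normalized form (a generic GCN layer, not the patented network; names are illustrative):

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: aggregate neighbor features with a
    symmetrically degree-normalized adjacency (self-loops added), then
    apply a linear map and ReLU.
    X: (n, d) node features; A: (n, n) adjacency; W: (d, k) weights."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)   # ReLU
```

In the described system, nodes would be frames or detected targets and A would encode their semantic relations.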

Navigation system for entering and leaving port of ships and warships under night-fog weather situation and construction method thereof

The invention discloses a navigation system for ships and warships entering and leaving port under night-fog weather conditions, and a construction method thereof. The navigation system includes a port-circumference seacoast panorama system, a color video reconstruction system for night-fog conditions, a wireless request-signal sending module, a wireless video-signal sending module, a wireless video-signal receiving module, an infrared camera imaging system, a video display module and a differential global positioning system (GPS); the construction method comprises the construction of the port-circumference seacoast panorama system and of the color video reconstruction system for night-fog conditions. Port-circumference seacoast panoramas are stored in advance on ships and warships that frequently visit a port, so that they can still clearly ascertain the detailed circumstances of the port seacoast without commands from an on-shore control center, and can enter and leave the port on days with night fog. The system can also effectively use the ships' differential GPS information to precisely determine their heading, speed, position and other information, and can send them the maps with the best field-of-view effect.
Owner:DALIAN MARITIME UNIVERSITY

Variable-length input super-resolution video reconstruction method based on deep learning

The invention discloses a variable-length-input super-resolution video reconstruction method based on deep learning. The method comprises the following steps: constructing training samples of random length to obtain a training set; establishing a super-resolution video reconstruction network model comprising a feature extractor, a progressive alignment-fusion module, a deep residual module and a superposition module connected in sequence; training the model with the training set to obtain a trained super-resolution video reconstruction network; and sequentially inputting the videos to be processed into the trained network for video reconstruction to obtain the corresponding super-resolution reconstructed videos. Because a progressive alignment-fusion mechanism is adopted, alignment and fusion are carried out frame by frame and the alignment operation acts only on two adjacent frames, so the model can handle longer temporal relationships and use more adjacent video frames; the input therefore contains more scene information, and the reconstruction quality is effectively improved.
Owner:CHANGAN UNIV
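The progressive (frame-by-frame) alignment-and-fusion idea can be sketched as a running fusion in which each alignment involves only the current fused result and the next frame (the `align` step is a placeholder for the learned alignment module, which the abstract does not specify):

```python
import numpy as np

def align(ref, frame):
    """Placeholder for the learned alignment of `frame` onto `ref`;
    identity here, since the real module is a trained network."""
    return frame

def progressive_fusion(frames):
    """Fuse a variable-length list of frames by aligning and averaging
    one frame at a time, so every alignment sees only two neighbors."""
    fused = frames[0].astype(float)
    for k, frame in enumerate(frames[1:], start=2):
        aligned = align(fused, frame.astype(float))
        fused = fused + (aligned - fused) / k   # running mean update
    return fused
```

This is why the input length can vary: the loop runs for however many frames arrive, with constant per-step cost.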

Frame loss prediction based cellular network uplink video communication QoS (Quality of Service) optimization method

The invention relates to a frame-loss-prediction-based cellular-network uplink video communication QoS (Quality of Service) optimization method in the technical field of communication. On the basis of the base-station link layer's own ARQ (Automatic Repeat Request) function, a link-layer proxy is designed at the sending-terminal link layer to realize frame-loss prediction and timeout frame dropping based on counting ARQ retransmissions, and the link layer's frame-loss information is fed back to the application-layer encoder. Meanwhile, an application-layer proxy at the receiving end counts the reception status of every frame and feeds correctly-received information back to the sending-end encoder end to end. The sending-end encoder marks the transmission state of each encoded image frame according to the received cross-layer and end-to-end feedback, where a correctly transmitted frame is marked G, and the reference frame used while encoding is limited to frames marked G, so as to improve robustness to transmission errors during video reconstruction.
Owner:SHANGHAI JIAO TONG UNIV
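The reference-frame restriction described above can be sketched as follows: the encoder keeps a per-frame transmission state and, when choosing a reference, picks the most recent frame marked G (an illustrative sketch, not the patented encoder):

```python
def choose_reference(frame_states):
    """Pick the most recent frame marked 'G' (acknowledged as correctly
    received) to serve as the encoder's reference frame; return None if
    no acknowledged frame exists yet (e.g. force an intra frame)."""
    for idx in range(len(frame_states) - 1, -1, -1):
        if frame_states[idx] == 'G':
            return idx
    return None
```

Restricting prediction to acknowledged frames prevents error propagation: a lost frame can never serve as a reference.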

Adaptive-partition compressed-sensing-based video compression method

The invention provides a video compression method based on adaptive partitioning and compressed sensing. The method comprises two steps: adaptively partitioning the video images, then classifying the image blocks and assigning sampling rates to them. During adaptive partitioning, the gray-level difference between adjacent pixels of a reference-frame image is adopted as the basis for block-size segmentation, and a partitioning threshold T is set. The mean gray-level difference between adjacent pixels of the current region block is compared with the threshold, and the adaptive blocks of the video image are partitioned with a quad-tree algorithm, so that flat regions are effectively separated from detail and edge regions. On the basis of the adaptive partitioning, the inter-frame difference of the DCT coefficients of the video pixels is adopted as the basis for classification, and the variously sized image blocks are divided into three types: quickly changing blocks, transition blocks and slowly varying blocks. Appropriate sampling rates are then assigned to the different types of image blocks. The method achieves good video reconstruction quality in a short reconstruction time; under the same conditions, both are better than those of a uniform-partitioning compressed-sensing method.
Owner:ZHEJIANG UNIV OF TECH
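The quad-tree partitioning step can be sketched as a recursive split driven by the mean gray-level difference between adjacent pixels against the threshold T (an illustrative simplification; the `min_size` stopping rule and the horizontal-difference measure are assumptions):

```python
import numpy as np

def quadtree_blocks(img, T, min_size=4):
    """Recursively partition a square image into blocks: a block is split
    into four quadrants while its mean absolute difference between
    horizontally adjacent pixels exceeds threshold T and it is still
    larger than min_size. Returns a list of (row, col, size) leaves."""
    blocks = []
    def split(r, c, s):
        block = img[r:r + s, c:c + s]
        diff = np.mean(np.abs(np.diff(block.astype(float), axis=1)))
        if s > min_size and diff > T:
            h = s // 2
            for dr in (0, h):
                for dc in (0, h):
                    split(r + dr, c + dc, h)
        else:
            blocks.append((r, c, s))
    split(0, 0, img.shape[0])
    return blocks
```

Flat regions stay as large blocks (few measurements needed), while textured or edge regions recurse down to small blocks.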

Shunting coding method for digital video monitoring system and video monitoring system

The embodiment of the invention discloses a shunt coding method for a digital video monitoring system. The method comprises the following steps: decomposing a video into primary video frames and secondary video frames by using a two-dimensional wavelet transform; coding the primary video frames to generate a primary code stream and primary video reconstruction frames; and coding the secondary video frames, based on the primary video reconstruction frames, to generate a secondary code stream. Correspondingly, the embodiment of the invention also discloses a video monitoring system comprising a shunt module, a primary code-stream coding module and a secondary code-stream coding module: the shunt module decomposes the video into the primary and secondary video frames by using the two-dimensional wavelet transform; the primary code-stream coding module codes the primary video frames to generate the primary code stream and the primary video reconstruction frames; and the secondary code-stream coding module codes the secondary video frames, based on the primary video reconstruction frames, to generate the secondary code stream. By implementing the invention, a digital video monitoring system can satisfy monitoring requirements for videos of different resolutions and improve video compression coding efficiency.
Owner:GUANGDONG ZHONGDA XUNTONG INFORMATION CO LTD +2
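The decomposition into primary (low-frequency) and secondary (detail) frames via a two-dimensional wavelet transform can be sketched with one level of a Haar transform (an illustrative choice of wavelet; the abstract does not fix the filter):

```python
import numpy as np

def haar_split(frame):
    """One level of a 2-D Haar transform: returns the low-frequency
    approximation subband (a candidate 'primary' frame) and the three
    detail subbands (candidate 'secondary' data)."""
    a = frame.astype(float)
    # transform rows: pairwise averages (low) and differences (high)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # transform columns of each row result
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, (lh, hl, hh)
```

The `ll` subband is a half-resolution version of the frame, which is why the scheme can serve monitors at different resolutions from one stream.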

DCT-based 3D-HEVC fast intra-frame prediction decision-making method

The invention discloses a DCT-based 3D-HEVC fast intra-frame prediction decision-making method. The method comprises the following steps: firstly, calculating the DCT matrix of the current prediction block by using the DCT formula; then, judging whether the upper-left-corner coefficient of the current coefficient block indicates an edge, and further judging whether the lower-right-corner coefficient does; and finally, deciding whether the DMMs need to be added to the intra-frame prediction mode candidate list according to whether edges are present. In 3D-HEVC, a depth map is introduced to achieve better view synthesis; for depth-map intra-frame prediction coding, the Joint Collaborative Team on 3D Video Coding Extension Development proposed four new intra-frame prediction modes (DMMs) for the depth map. Since the DCT has the property of energy compaction, whether a coding block contains an edge can be clearly distinguished during 3D-HEVC depth-map coding. The method has the advantages of low computational complexity, short coding time and good video reconstruction quality.
Owner:HANGZHOU DIANZI UNIV
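The DCT-and-corner-coefficient test can be sketched as follows: transform the block with an orthonormal DCT-II and check corner-coefficient energy against a threshold to decide whether the DMMs stay in the candidate list (a simplification with assumed thresholds and corner choices, not the patented decision rule):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    C = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            C[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def has_edge(block, thresh):
    """2-D DCT of the block, then inspect high-frequency corner
    coefficients: large corner energy suggests an edge, in which case
    the DMM modes would be kept in the candidate list."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T
    return abs(coef[-1, -1]) > thresh or abs(coef[0, -1]) > thresh
```

A flat depth block concentrates all energy in the DC term, so its high-frequency corners stay near zero and the costly DMM search is skipped.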

Dual-band time domain compressed sensing high-speed imaging method and device

The invention provides a dual-band time-domain compressed-sensing high-speed imaging method and device. For a high-speed moving object, the flip characteristic of a digital micromirror device is utilized, and the light energy in its two reflection directions is used for imaging in the visible-light and infrared bands respectively. Combined with a time-domain compressed-sensing imaging method, the detail information of the high-speed moving target in both wavebands is obtained at the same time, and a video reconstruction algorithm based on compressed sensing reconstructs the captured dual-waveband low-speed video into a dual-waveband high-speed video of the target. The method can obtain a high-speed moving-target video with more detail information without increasing the data-conversion bandwidth of the imaging camera, thereby relaxing the strict requirements that high-speed imaging places on camera sensitivity and ultra-high data bandwidth. Meanwhile, through the light-modulator characteristics of the digital micromirror device, the visible-light and infrared high-speed imaging light paths are obtained simultaneously without any additional device, so the light energy is fully utilized and the energy utilization rate is improved.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
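The time-domain compressed-sensing measurement model can be sketched as a coded exposure: each high-speed frame is modulated by a DMD mask and the modulated frames are summed into one low-speed measurement per band (the reconstruction algorithm itself is not shown here):

```python
import numpy as np

def temporal_cs_capture(frames, masks):
    """Temporal compressed sensing forward model: each high-speed frame
    is multiplied elementwise by a binary mask (the DMD pattern) and
    accumulated, so T frames collapse into a single coded measurement."""
    y = np.zeros_like(frames[0], dtype=float)
    for x, m in zip(frames, masks):
        y += m * x
    return y
```

Reconstruction then inverts this model, e.g. with a sparsity-regularized solver, to recover the T high-speed frames from each measurement.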

Video semantic communication method and system based on hierarchical knowledge expression

The invention provides a video semantic communication method based on hierarchical knowledge expression, mainly aiming to solve the problems of incomplete semantic extraction, insufficient semantic representation capability and redundant semantic description in the prior art. The implementation scheme comprises the following steps: constructing a hierarchical knowledge base consisting of a multi-level signal sensing network, a semantic abstraction network, a semantic reconstruction network and a video reconstruction network; collecting the video signal to be transmitted; extracting structured semantic features of the video signal with the signal sensing network and the semantic abstraction network of the hierarchical knowledge base, and transmitting the structured semantic features over an ultra-narrowband channel; and reconstructing the video signal from the structured semantic features with the semantic reconstruction network and the video reconstruction network of the hierarchical knowledge base. By mining semantic features of different scales and representing the semantics with structured data, the method improves the completeness of semantic extraction, the semantic representation capability and the utilization of communication bandwidth, and can be applied to online conferences, human-computer interaction and the intelligent Internet of Things.
Owner:XIDIAN UNIV
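The four-stage flow can be sketched as a simple composition of the four networks (plain callables standing in for the trained networks; the semantics produced by the abstraction stage are what crosses the ultra-narrowband channel):

```python
def semantic_pipeline(signal, sense, abstract, sem_rec, vid_rec):
    """Sketch of the hierarchical flow: sense the signal, abstract it
    into structured semantics (the only data transmitted over the
    narrowband channel), then reconstruct semantics and finally video
    at the receiver. All four stages are placeholder callables."""
    features = sense(signal)
    semantics = abstract(features)   # transmitted over the channel
    rec_sem = sem_rec(semantics)
    return vid_rec(rec_sem)
```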

Reliable video transmission method and device based on network coding

The invention discloses a reliable video transmission method based on network coding, mainly to solve the problem of degraded receiving-end video reconstruction quality caused by packet loss or bit errors in general video transmission methods. The method comprises the following steps: adding corresponding redundant information to the layered code stream produced by scalable video coding for each picture group of the original video, and packing the layered code stream to complete rate allocation; performing random linear network coding on the rate-allocated data packets by using a designed global coding matrix with a non-strict lower-triangular structure; and transmitting the coded information to the receiving end over the transmission channel, where the network-coded stream is decoded by Gaussian elimination and the original video information is restored. The method mitigates the degradation of receiving-end video reconstruction quality caused by packet loss and delay during transmission, and realizes unequal error protection of the H.264/SVC (scalable video coding) stream and highly reliable video multicast.
Owner:XIDIAN UNIV
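Random linear network coding with Gaussian-elimination decoding can be sketched over GF(2) (the patent's global coding matrix has a non-strict lower-triangular structure; the small full-rank matrix in the test is only illustrative):

```python
import numpy as np

def encode_gf2(packets, coeffs):
    """Random linear network coding over GF(2): each coded packet is the
    XOR of the source packets selected by one row of `coeffs`."""
    return (coeffs @ packets) % 2

def decode_gf2(coded, coeffs):
    """Recover source packets by Gaussian elimination over GF(2)."""
    A = coeffs.copy() % 2
    b = coded.copy() % 2
    n = A.shape[1]
    row = 0
    for col in range(n):
        # find a pivot row with a 1 in this column
        piv = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if piv is None:
            raise ValueError("coefficient matrix not full rank")
        A[[row, piv]] = A[[piv, row]]
        b[[row, piv]] = b[[piv, row]]
        # eliminate this column from all other rows (XOR = mod-2 add)
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2
                b[r] = (b[r] + b[row]) % 2
        row += 1
    return b[:n]
```

Practical NC systems usually work over GF(256) for better rank properties, but the decode-by-elimination structure is the same.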