
53 results about How to "Improve rebuild quality" patented technology

Large-scale MIMO channel state information feedback method based on deep learning

The invention discloses a large-scale MIMO channel state information feedback method based on deep learning. The method comprises the following steps: first, a two-dimensional discrete Fourier transform (DFT) is applied at the user side to the channel matrix H-tilde of the MIMO channel state information in the spatial-frequency domain, so that a channel matrix H that is sparse in the angle-delay domain is obtained; second, a model CsiNet comprising an encoder and a decoder is constructed, where the encoder belongs to the user side and encodes the channel matrix H into a lower-dimensional codeword, and the decoder belongs to the base station side and reconstructs an estimate H-hat of the original channel matrix from the codeword; third, the model CsiNet is trained to obtain its model parameters; fourth, a two-dimensional inverse DFT is applied to the reconstructed channel matrix H-hat output by CsiNet, so that the reconstruction of the original channel matrix H-tilde in the spatial-frequency domain is recovered; and finally, the trained CsiNet is used for compressed sensing and reconstruction of the channel information. The method reduces the feedback overhead of large-scale MIMO channel state information while achieving very high channel reconstruction quality and reconstruction speed.
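As a rough illustration of the pipeline in this abstract, the sketch below strings together the 2-D DFT, a toy encoder/decoder pair, and the inverse DFT; the matrix sizes, codeword length, and single-layer networks are assumptions for illustration, not the patent's actual CsiNet architecture.

```python
# Minimal sketch of the CsiNet-style pipeline described above (hypothetical
# sizes; the patent does not specify layer dimensions here).
import numpy as np
import torch
import torch.nn as nn

Nt, Nc, CODEWORD = 32, 32, 64          # antennas, subcarriers, codeword length (assumed)

class CsiEncoder(nn.Module):           # user side: channel matrix -> codeword
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2 * Nt * Nc, CODEWORD)
    def forward(self, h):              # h: (batch, 2*Nt*Nc), real and imaginary parts stacked
        return self.fc(h)

class CsiDecoder(nn.Module):           # base-station side: codeword -> channel estimate
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(CODEWORD, 2 * Nt * Nc)
    def forward(self, z):
        return self.fc(z)

# Spatial-frequency channel matrix H_tilde -> sparse angle-delay matrix H via 2-D DFT
H_tilde = np.random.randn(Nt, Nc) + 1j * np.random.randn(Nt, Nc)
H = np.fft.fft2(H_tilde)

# Encode and decode (training loop omitted)
x = torch.tensor(np.concatenate([H.real.ravel(), H.imag.ravel()]), dtype=torch.float32)
enc, dec = CsiEncoder(), CsiDecoder()
H_hat_flat = dec(enc(x.unsqueeze(0))).detach().numpy().ravel()
H_hat = (H_hat_flat[:Nt * Nc] + 1j * H_hat_flat[Nt * Nc:]).reshape(Nt, Nc)

# Inverse 2-D DFT recovers the spatial-frequency reconstruction
H_tilde_hat = np.fft.ifft2(H_hat)
```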
Owner:SOUTHEAST UNIV

Partial echo compressed sensing-based quick magnetic resonance imaging method

The invention discloses a partial-echo compressed-sensing-based fast magnetic resonance imaging (MRI) method. Conventional imaging methods are slow and require costly hardware. The method comprises the following steps: acquiring variable-density random partial-echo data, namely acquiring data densely in the central area of k-space and randomly and sparsely in the periphery to generate a two-dimensional random mask, replicating the two-dimensional random mask at every data point to be acquired along the frequency-encoding axis to form a three-dimensional random mask, and acquiring the k-space data according to the generated three-dimensional random mask; reconstructing by projection onto convex sets (POCS) in a wavelet domain denoised by soft thresholding; and performing nonlinear reconstruction by minimizing the L1 norm under a finite-difference transform, namely sparsely transforming the image-space signal x, formulating the optimization objective, and solving it. The method combines the partial-echo technique with compressed sensing for MRI data acquisition, which shortens the echo time and the data acquisition time.
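The sampling step described above can be illustrated as follows; the k-space dimensions, central block size, and sampling fraction are assumed values, and the POCS/L1 reconstruction is only indicated.

```python
# Sketch of the variable-density sampling mask described above (assumed
# k-space size and sampling fractions; not the patent's exact parameters).
import numpy as np

ny, nz, nx = 128, 128, 256        # phase-encode plane (ny, nz) x frequency-encode axis (nx)
center, outer_frac = 16, 0.25     # fully sampled central block, random fraction outside

rng = np.random.default_rng(0)
mask2d = rng.random((ny, nz)) < outer_frac          # sparse random sampling
cy, cz = ny // 2, nz // 2
mask2d[cy - center // 2: cy + center // 2,
       cz - center // 2: cz + center // 2] = True   # densely sampled k-space centre

# Replicate the 2-D mask along the frequency-encoding axis -> 3-D random mask
mask3d = np.repeat(mask2d[:, :, None], nx, axis=2)

# k-space is acquired only where mask3d is True; reconstruction then proceeds
# by POCS with wavelet-domain soft thresholding and L1 minimization (omitted).
def soft_threshold(x, lam):
    """Soft-thresholding operator used in the wavelet-domain denoising step."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```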
Owner:HANGZHOU DIANZI UNIV

Method of improving density of three-dimensional reconstructed point cloud based on neighborhood block matching

The invention discloses a method for improving the density of a three-dimensionally reconstructed point cloud based on neighborhood block matching. The method comprises the steps of obtaining a rough, sparse object point cloud using an image-sequence-based three-dimensional reconstruction algorithm, together with the transformation matrix of the camera in three-dimensional space for each frame; processing the original images again and performing dense feature matching with a neighborhood-based block matching algorithm; then, according to the obtained camera positions in space, performing a validity check on the dense feature points and mapping the points that satisfy the requirements to the corresponding positions in the three-dimensional point cloud; and filtering outliers from the resulting point cloud with an object-contour-based outlier removal algorithm and performing color remapping to obtain a dense point cloud that is far better than the original one. The method yields a point cloud of far higher quality and density than traditional algorithms, greatly improves on the original algorithm, and improves reconstruction quality. It also has strong universality and robustness.
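A minimal sketch of the neighborhood block-matching step is given below; the window size, search range, and SSD cost are assumptions, and triangulation, validity checking, and outlier removal are only noted in comments.

```python
# Minimal sketch of the neighbourhood block-matching step described above
# (hypothetical window size, search range and SSD cost).
import numpy as np

def match_block(img_a, img_b, pt, win=5, search=10):
    """Find the pixel in img_b whose neighbourhood best matches the window
    around the interior pixel pt = (row, col) in img_a, using the sum of
    squared differences (SSD)."""
    r, c = pt
    h = win // 2
    patch = img_a[r - h:r + h + 1, c - h:c + h + 1].astype(np.float64)
    best, best_cost = None, np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if rr - h < 0 or cc - h < 0 or rr + h >= img_b.shape[0] or cc + h >= img_b.shape[1]:
                continue                      # candidate window falls outside img_b
            cand = img_b[rr - h:rr + h + 1, cc - h:cc + h + 1].astype(np.float64)
            cost = np.sum((patch - cand) ** 2)
            if cost < best_cost:
                best, best_cost = (rr, cc), cost
    return best

# Matched pixel pairs are then triangulated with the known camera matrices,
# validity-checked, filtered against the object contour, and colour-remapped
# to densify the sparse point cloud (those stages are not shown here).
```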
Owner:陕西仙电同圆信息科技有限公司

Multispectral single-pixel imaging deep learning image reconstruction method

The invention provides a multispectral single-pixel imaging deep-learning image reconstruction method comprising a measurement process and a reconstruction process. In the measurement process, a coding pattern is used to encode the target scene, and a multispectral detector records the light intensities corresponding to different wavelengths. After multispectral single-pixel detection is realized physically, an image reconstruction method based on a deep neural network reconstructs the original signal X from all detection signals Yc, the deep neural network being composed of a linear mapping network and a convolutional neural network. The C measurement vectors are spliced together column by column to form a new matrix Y'; the linear mapping network takes Y' as input and performs preliminary linear processing of the data, the convolutional neural network then fuses information between channels, and the image to be observed, X, is finally obtained through reconstruction. The technical scheme solves the problems of the prior art, namely high algorithm complexity, long reconstruction time, and a high required sampling rate.
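The measurement and two-stage reconstruction described above might look roughly like the following sketch; the image size, channel count, measurement count, and layer widths are assumed, not taken from the patent.

```python
# Sketch of the measurement + reconstruction pipeline described above
# (assumed image size, spectral channel count and sampling ratio).
import torch
import torch.nn as nn

N, C, M = 32 * 32, 8, 256            # pixels per channel, spectral channels, measurements

A = torch.randn(M, N)                # coding patterns, one row per pattern
X = torch.rand(N, C)                 # target scene, one column per wavelength
Y = A @ X                            # single-pixel measurements Y_c, stacked column-wise -> Y'

class LinearMapping(nn.Module):      # per-channel linear lift back to image size
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(M, N)
    def forward(self, y):            # y: (C, M)
        return self.fc(y).view(C, 1, 32, 32)

class ChannelFusion(nn.Module):      # convolutional cross-channel information fusion
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(C, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, C, 3, padding=1))
    def forward(self, x):            # x: (C, 1, 32, 32) -> fuse across channels
        return self.net(x.squeeze(1).unsqueeze(0)).squeeze(0)

x0 = LinearMapping()(Y.T)            # preliminary linear reconstruction
X_hat = ChannelFusion()(x0)          # fused multispectral estimate (training omitted)
```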
Owner:DALIAN MARITIME UNIVERSITY

Cross-node controlled online video stream selective retransmission method

The invention relates to a cross-node controlled online video stream selective retransmission method comprising the following steps: when an online-coded single video stream enters the transmitting end, dividing it into delay-constrained frame sets (DFS) one by one according to the start-up and transmission delay limits, as the basic objects of selective retransmission; using the transmitting end as the first communication node, determining the importance levels of the IP packets in each DFS according to a fault-tolerance importance measure, and providing selective retransmission of the IP packets through packet scheduling based on fault-tolerance importance; and, on the basis of the IP packet importance levels, using the base station as the second communication node and providing selective retransmission at the wireless-link-unit level through a priority-descending ARQ (Automatic Repeat reQuest) mode. Under delay and bandwidth constraints, the invention organically combines the priority transmission mechanisms of the main communication nodes, realizes importance classification and selective retransmission at the wireless-link-unit level, strengthens the error-control performance of online video streams, and effectively improves the reconstruction quality of the received video.
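A small sketch of fault-tolerance-importance-based packet scheduling for one DFS is shown below; the packet fields and the importance measure are placeholders, not the patent's definitions.

```python
# Sketch of the importance-driven packet scheduling described above
# (hypothetical importance measure and packet fields).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class IpPacket:
    neg_importance: float            # heapq is a min-heap, so store negative importance
    seq: int = field(compare=False)
    data: bytes = field(compare=False, default=b"")

def schedule_dfs(packets, importance):
    """Order the IP packets of one delay-constrained frame set (DFS) so that
    the most fault-tolerance-critical packets are (re)transmitted first."""
    heap = [IpPacket(-importance(p), i, p) for i, p in enumerate(packets)]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap).data

# Example: I-frame packets are treated as more important than P-frame packets.
dfs = [b"P-frame pkt 1", b"I-frame pkt", b"P-frame pkt 2"]
order = list(schedule_dfs(dfs, importance=lambda p: 1.0 if p.startswith(b"I") else 0.5))
```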
Owner:DONGHUA UNIV

Super-resolution reconstruction method for real-time video session service

The invention provides a super-resolution reconstruction method for a real-time video session service, and relates to the technical field of digital image processing. The method redesigns each super-resolution module: the feature extraction module adopts coarse-to-fine feature extraction and a residual design to accelerate feature extraction; deformable convolution is introduced into the video super-resolution reconstruction method, and, using the idea of a recurrent neural network, a frame-difference learning module is dynamically optimized to obtain optimal alignment parameters, which then guide the deformable convolution in the alignment operation; a feature fusion network that enhances correlation performs feature fusion of adjacent frames; and finally a reconstruction module is designed with the information distillation idea, an up-sampling reconstruction module is designed, an information distillation block extracts additional edge and texture features, and the extracted edge and texture features are added to the up-sampled reference frame to generate the final high-resolution video frame. The method offers high reconstruction speed and good reconstruction quality.
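The final up-sampling reconstruction step (information-distillation-style features plus pixel rearrangement, added to an up-sampled reference frame) might be sketched as below; channel counts, scale factor, and the bicubic up-sampling of the reference frame are assumptions, and the alignment and fusion stages are omitted.

```python
# Rough sketch of the up-sampling reconstruction step described above: extra
# edge/texture features are added to an up-sampled reference frame
# (hypothetical channel counts; alignment and fusion stages omitted).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleReconstruction(nn.Module):
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.distill = nn.Sequential(            # information-distillation-style block
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.to_image = nn.Conv2d(channels, 3 * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)    # pixel rearrangement to high resolution
        self.scale = scale

    def forward(self, fused_features, reference_frame):
        res = self.shuffle(self.to_image(self.distill(fused_features)))
        up_ref = F.interpolate(reference_frame, scale_factor=self.scale,
                               mode="bicubic", align_corners=False)
        return res + up_ref                      # final high-resolution output frame

frame = torch.rand(1, 3, 64, 64)                 # reference frame
feats = torch.rand(1, 64, 64, 64)                # fused features from earlier stages
hr = UpsampleReconstruction()(feats, frame)      # -> (1, 3, 256, 256)
```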
Owner:NORTHEASTERN UNIV

Adaptive block compressive sensing reconstruction method

The invention belongs to the field of image processing, and particularly relates to an adaptive block compressive sensing reconstruction method for feature extraction and identification of a target image. The method comprises the steps of defining the initial parameters; dividing the image into sub-image blocks of size A; calculating the energy E of each sub-image block and, according to a preset energy threshold T, classifying each sub-image block as a background sub-image block or a target sub-image block; re-partitioning the background region and the target region of the image; performing measurement acquisition and image reconstruction on the background region and the target region at the same sampling rate; and combining the reconstructed target-region image and the reconstructed background-region image into the reconstructed original image. The method divides the image into a background region and a target region according to the energy value and applies different blocking schemes to them, so that blocking artifacts in the target region are avoided and better reconstruction quality is obtained in less reconstruction time.
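The energy-based block classification can be sketched as follows; the block size and the default threshold (mean block energy) are assumptions standing in for the patent's parameters A and T.

```python
# Sketch of the energy-based block classification described above (assumed
# block size A and energy threshold T).
import numpy as np

def classify_blocks(image, block=16, threshold=None):
    """Split the image into block x block sub-images, compute each block's
    energy, and label it as target (high energy) or background (low energy)."""
    h, w = image.shape
    energies = np.zeros((h // block, w // block), dtype=np.float64)
    for i in range(h // block):
        for j in range(w // block):
            sub = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            energies[i, j] = np.sum(sub.astype(np.float64) ** 2)
    T = np.mean(energies) if threshold is None else threshold
    return energies > T, energies       # boolean target mask, per-block energies

# Target blocks are then re-partitioned with a finer blocking scheme than the
# background before measurement and reconstruction at a common sampling rate.
img = np.random.rand(128, 128)
target_mask, E = classify_blocks(img)
```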
Owner:HARBIN ENG UNIV

Remote sensing image super-resolution method based on multi-scale feature adaptive fusion network

The invention relates to a remote sensing image super-resolution method based on a multi-scale feature adaptive fusion network, comprising: 1) performing a convolution operation on the originally input low-resolution remote sensing image with a filter to extract an original feature map; 2) extracting adaptive multi-scale features from the original feature map through n cascaded multi-scale feature extraction modules (AMFE) to obtain an adaptive multi-scale feature map; 3) superposing the original feature map and the adaptive multi-scale feature map, and performing a convolution operation on the superposed map with a filter to realize feature dimension reduction and fusion; and 4) obtaining the final super-resolved remote sensing image by sub-pixel convolution. The method realizes adaptive fusion of the multi-scale feature information of the remote sensing image, efficient reconstruction of its high-resolution detail, and an improved image super-resolution reconstruction effect.
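A toy version of the cascaded AMFE modules, feature fusion, and sub-pixel reconstruction is sketched below; the branch kernel sizes, the softmax-weighted fusion, and the layer widths are illustrative assumptions, not the patent's design.

```python
# Minimal sketch of adaptive multi-scale feature extraction (AMFE) modules
# plus sub-pixel reconstruction (hypothetical layer widths and kernel sizes).
import torch
import torch.nn as nn

class AMFE(nn.Module):
    """Parallel 3x3 / 5x5 / 7x7 branches whose outputs are fused with learned
    per-branch weights (a simple stand-in for 'adaptive' fusion)."""
    def __init__(self, c=64):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2) for k in (3, 5, 7))
        self.weights = nn.Parameter(torch.ones(3))
    def forward(self, x):
        w = torch.softmax(self.weights, dim=0)
        return x + sum(wi * b(x) for wi, b in zip(w, self.branches))

class MultiScaleSR(nn.Module):
    def __init__(self, c=64, n_modules=4, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, c, 3, padding=1)          # original feature map
        self.body = nn.Sequential(*[AMFE(c) for _ in range(n_modules)])
        self.fuse = nn.Conv2d(2 * c, c, 1)                 # dimension reduction + fusion
        self.up = nn.Sequential(nn.Conv2d(c, 3 * scale * scale, 3, padding=1),
                                nn.PixelShuffle(scale))    # sub-pixel convolution
    def forward(self, lr):
        f0 = self.head(lr)
        f = self.body(f0)
        return self.up(self.fuse(torch.cat([f0, f], dim=1)))

sr = MultiScaleSR()(torch.rand(1, 3, 48, 48))              # -> (1, 3, 192, 192)
```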
Owner:HUBEI UNIV OF TECH

Multiple description coding high-quality edge reconstruction method based on spatial downsampling

The invention provides a multiple description coding high-quality edge reconstruction method based on spatial downsampling, comprising the following steps. Making a data set: selecting a video, dividing it into two descriptions through spatial downsampling, coding and decoding the video at a set quantization parameter (QP) value, and taking the decoded video and the corresponding original video as the training set. Training the SD-VSRnet network: taking every five frames of the video as the network input, sequentially performing feature extraction, high-frequency detail recovery and pixel rearrangement, then applying a skip connection with the input intermediate frame to obtain reconstructed video frames, and reconstructing frame by frame to obtain the final reconstructed video, thereby training the SD-VSRnet network. The method builds a multiple-description-coding high-quality edge-reconstruction data set suited to spatial downsampling and separately tests four QP values with a video super-resolution neural network, effectively improving the reconstruction quality of the edge-decoded video at different compression levels.
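The split into two descriptions by spatial downsampling could, for example, be an even/odd column polyphase split, as in the sketch below; this particular split and the merge function are assumptions, and the SD-VSRnet network itself is not shown.

```python
# Sketch of the spatial-downsampling split into two descriptions described
# above (a simple even/odd column polyphase split is assumed here).
import numpy as np

def split_descriptions(frame):
    """Split one frame into two half-width descriptions by taking even and
    odd pixel columns; each description is coded and transmitted separately."""
    return frame[:, 0::2], frame[:, 1::2]

def merge_descriptions(d0, d1):
    """Central reconstruction when both descriptions arrive."""
    h, w = d0.shape[0], d0.shape[1] + d1.shape[1]
    out = np.zeros((h, w), dtype=d0.dtype)
    out[:, 0::2], out[:, 1::2] = d0, d1
    return out

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
d0, d1 = split_descriptions(frame)
assert np.array_equal(merge_descriptions(d0, d1), frame)
# When only one description is decoded (edge/side reconstruction), the
# SD-VSRnet super-resolution network recovers the missing detail.
```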
Owner:HUAQIAO UNIVERSITY

Measuring device for measuring gas parameters of combustion field

Status: Pending | Publication: CN111141524A | Advantages: increased deployment count and coverage; high reconstruction resolution | Fields: internal-combustion engine testing; single-mode optical fiber; optical measurements
The invention discloses a measuring device for measuring gas parameters of a combustion field, belongs to the technical field of flow-field optical measurement, and solves the problem of poor measurement results caused by insufficient light distribution in the flow field in the prior art. The measuring device comprises a measuring ring; each laser emitting unit corresponds to a laser receiving unit group, and the corresponding laser emitting unit and laser receiving unit group are arranged on two opposite side walls of the measuring ring; each laser receiving unit group comprises a plurality of laser receiving units arranged in rows; each single-mode optical fiber corresponds to a laser emitting unit, and each first multimode optical fiber corresponds to a laser receiving unit. The single-mode optical fiber delivers a laser beam to the laser emitting unit, the laser emitting unit converts the laser beam into a fan-shaped beam, and the laser receiving unit converges the fan-shaped beam incident on it and couples it into the corresponding first multimode optical fiber. The device is used for flow-field gas parameter measurement.
Owner:PLA PEOPLES LIBERATION ARMY OF CHINA STRATEGIC SUPPORT FORCE AEROSPACE ENG UNIV +1

Hybrid input method for solving electromagnetic inverse scattering problem based on deep learning

The invention discloses a hybrid input method for solving the electromagnetic inverse scattering problem based on deep learning, comprising the following steps: 1) obtaining quantitative information about the unknown scatterer using a quantitative inversion method, the quantitative information comprising a contrast value; 2) obtaining qualitative information about the unknown scatterer using a qualitative inversion method, the qualitative information comprising normalized values: an index function is defined on the domain of interest to judge whether each sampling point lies inside or outside the unknown scatterer, and the set of normalized values obtained from the index function indicates whether each sampling point is inside the unknown scatterer; 3) performing point-wise multiplication of the normalized values and the contrast values and converting the result into a combined dielectric constant value; and 4) taking the combined dielectric constant value as the input of a neural network, taking the true dielectric constant value of the scatterer as the output of the neural network, and training the neural network.
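The hybrid-input construction in steps 1 to 3 might be sketched as follows; the grid size, the normalization, and the background permittivity are assumptions, and the actual inversion solvers and network are omitted.

```python
# Sketch of the hybrid-input construction described above (assumed grid size
# and background permittivity; the inversion solvers and network are omitted).
import numpy as np

grid = (64, 64)
contrast = np.random.rand(*grid)            # step 1: quantitative inversion -> contrast
indicator = np.random.rand(*grid)           # step 2: qualitative inversion -> index function

# Normalise the index function to [0, 1]: ~1 inside the scatterer, ~0 outside
normalized = (indicator - indicator.min()) / (indicator.max() - indicator.min() + 1e-12)

# Step 3: point-wise product, converted to a combined relative permittivity
eps_background = 1.0
eps_combined = eps_background * (1.0 + normalized * contrast)

# Step 4: eps_combined is the network input and the true permittivity map the
# training target (network definition and training loop not shown).
```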
Owner:HANGZHOU DIANZI UNIV

High-precision image information extraction method based on low dynamic range

The invention relates to a high-precision image information extraction method based on a low dynamic range, comprising the steps of: 1) performing feature extraction on the image to acquire the three RGB channels of the original image and the V (brightness) channel of the HSV color space; 2) outputting 48 coefficients in groups using a fully convolutional neural network structure, adding a shortcut structure to fuse high-level and low-level features, and finally outputting 48 spherical harmonic coefficients in total, divided into 16 groups of three values that represent the components on the R, G and B channels respectively; 3) establishing a spherical-harmonic-coefficient loss function and a diffuse-reflection-map loss function, and calculating the mean-square-error loss of the 48 spherical harmonic coefficients and the diffuse-reflection-map loss; and 4) applying feedback constraints to the fully convolutional neural network structure using the mean-square-error loss of the 48 spherical harmonic coefficients and the diffuse-reflection-map loss from step 3.
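The 48-coefficient regression and the coefficient loss might look roughly as follows; the backbone is a deliberately tiny stand-in for the fully convolutional network with shortcuts, and the diffuse-reflection-map loss is only noted.

```python
# Sketch of the coefficient regression and losses described above: 48 outputs
# interpreted as 16 spherical-harmonic coefficients per R/G/B channel
# (hypothetical backbone; the shortcut/fusion structure is only indicated).
import torch
import torch.nn as nn

class SHRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                 # stand-in for the FCN + shortcuts
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), # input: RGB + V brightness channel
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 48)                  # 48 spherical-harmonic coefficients
    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).view(-1, 16, 3)            # 16 groups x (R, G, B)

net = SHRegressor()
img = torch.rand(2, 4, 64, 64)                         # RGB + V channels
pred = net(img)
target = torch.rand(2, 16, 3)

coeff_loss = nn.functional.mse_loss(pred, target)      # spherical-harmonic coefficient loss
# The diffuse-reflection-map loss compares maps rendered from predicted and
# ground-truth coefficients; the renderer is omitted here.
```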
Owner:SHANGHAI GOLDEN BRIDGE INFOTECH CO LTD

Extensible man-machine cooperation image coding method and coding system

The invention discloses an extensible man-machine cooperation image coding method and coding system. The method comprises the following steps: extracting the edge map of each sample picture and vectorizing it as a compact representation for driving machine vision tasks; extracting key points from the vectorized edge map as auxiliary information; applying entropy-coded lossless compression to the compact representation and the auxiliary information respectively to obtain two code streams; preliminarily decoding the two code streams to obtain the edge map and auxiliary information; feeding the decoded edge map and auxiliary information into a generative neural network and performing the network's forward computation; computing the loss function from the result and the corresponding original picture, and back-propagating the loss through the neural network to update the network weights until the network converges, thereby obtaining a two-stream code decoder; obtaining the edge map and auxiliary information of the image to be processed, and encoding and compressing them to obtain two code streams; and having the two-stream code decoder decode the received code streams and reconstruct the image.
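The two-stream coding path might be sketched as follows; a gradient-magnitude edge map, a coarse key-point grid, and zlib stand in for the patent's vectorized edge representation, key-point extraction, and entropy coder.

```python
# Rough sketch of the two-stream coding path described above; zlib stands in
# for the entropy coder and a gradient-magnitude edge map for the vectorised
# edge representation (both are assumptions, not the patent's exact choices).
import zlib
import numpy as np

def edge_map(gray):
    """Simple gradient-magnitude edge map as the compact machine-vision representation."""
    gx = np.abs(np.diff(gray.astype(np.float64), axis=1, append=0))
    gy = np.abs(np.diff(gray.astype(np.float64), axis=0, append=0))
    return ((gx + gy) > 32).astype(np.uint8)

def keypoints(edges, stride=16):
    """Coarse key-point grid sampled from the edge map as auxiliary information."""
    return np.argwhere(edges[::stride, ::stride] > 0)

gray = (np.random.rand(128, 128) * 255).astype(np.uint8)
edges = edge_map(gray)
kps = keypoints(edges)

stream_edges = zlib.compress(np.packbits(edges).tobytes())       # code stream 1
stream_aux = zlib.compress(kps.astype(np.uint8).tobytes())       # code stream 2

# At the decoder, both streams are losslessly decoded and fed to a generative
# neural network that reconstructs the full image (network not shown).
```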
Owner:PEKING UNIV
