37 results about "How to improve super-resolution" patented technology

A VHF frequency band cloud lightning detection and positioning system

The invention discloses a VHF (very high frequency) band intracloud lightning detection and positioning system, which comprises a central processing station and a plurality of detection and processing sub-stations arranged at different positions. The sub-stations and the central station are connected to a public wide area network in a wired or wireless manner; over this network, the sub-stations upload the intracloud lightning detection data they acquire to the central station, and the central station remotely monitors the working state of each sub-station. The system can precisely estimate the incidence directions of lightning radiation signals from several simultaneously occurring radiation sources, or from the same radiation source arriving at the sub-stations along different propagation paths; the central station computes the arrival time difference of the lightning signals between any two sub-stations (TDOA estimation) and determines the spatial position of an intracloud lightning radiation source from the combined DOA and TDOA information.
Owner:HUAZHONG UNIV OF SCI & TECH
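
Where the abstract mentions computing the arrival time difference of the lightning signal between any two sub-stations (TDOA estimation), a minimal sketch is shown below. It assumes each sub-station records the same broadband pulse at a known sampling rate and estimates the delay by cross-correlation; the station signals, sampling rate, and peak-picking are illustrative, not the patented algorithm.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (seconds) of the same
    lightning pulse recorded at two sub-stations, via cross-correlation.
    A positive result means the pulse reaches station B after station A."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # best-matching lag in samples
    return lag / fs

# Toy example: a pulse arriving 25 samples later at station B.
fs = 1e6                                 # 1 MHz sampling rate (assumed)
pulse = np.exp(-np.arange(100) / 10.0)   # synthetic decaying pulse
sig_a = np.zeros(1000); sig_a[200:300] = pulse
sig_b = np.zeros(1000); sig_b[225:325] = pulse
print(estimate_tdoa(sig_a, sig_b, fs))   # ~2.5e-05 s
```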

Face super-resolution reconstruction method based on identity prior generative adversarial network

The invention relates to a face super-resolution reconstruction method based on an identity prior generative adversarial network. The method comprises the following steps: first, reading an original face picture data set; then, training a face feature extraction network with face image-identity label pairs; third, reading the high-resolution face images and applying bicubic interpolation down-sampling to obtain high-resolution/low-resolution face image pairs for model training; fourth, inputting the low-resolution face image into a generator network to generate a super-resolution face image; inputting the high-resolution face image and the super-resolution face image into the trained face feature extraction network respectively to extract their identity prior features; and inputting the high-resolution face image, the super-resolution image, and the corresponding identity prior features into a discriminator network, calculating a supervised adversarial loss function from the discriminator output, and training the generative adversarial network by error back-propagation.
Owner:UNIV OF SCI & TECH OF CHINA
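
Two of the steps above, building the high-resolution/low-resolution training pairs by bicubic down-sampling and comparing identity features of the high-resolution and super-resolved faces, can be sketched as follows. This is a minimal illustration assuming Pillow for bicubic resizing and a cosine-similarity identity term; the face feature extraction network itself is replaced by placeholder feature vectors.

```python
import numpy as np
from PIL import Image

def make_hr_lr_pair(hr_img, scale=4):
    """Bicubic down-sampling of an HR face to build an HR-LR training pair."""
    w, h = hr_img.size
    lr_img = hr_img.resize((w // scale, h // scale), Image.BICUBIC)
    return hr_img, lr_img

def identity_prior_loss(feat_hr, feat_sr):
    """Penalize identity drift between the HR and super-resolved faces:
    1 - cosine similarity of their identity feature vectors."""
    cos = np.dot(feat_hr, feat_sr) / (
        np.linalg.norm(feat_hr) * np.linalg.norm(feat_sr) + 1e-8)
    return 1.0 - float(cos)

# Usage with stand-in data:
hr = Image.new("RGB", (128, 128))
hr, lr = make_hr_lr_pair(hr, scale=4)                    # lr is 32x32
f_hr, f_sr = np.random.rand(512), np.random.rand(512)    # placeholder identity features
print(lr.size, identity_prior_loss(f_hr, f_sr))
```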

Method for detecting small and medium objects in a structured road based on deep learning

A method for detecting small objects in a structured road based on deep learning comprises the following steps: collecting image data containing small objects on a real structured road and manually annotating the positions and category information of the small objects; constructing a deep convolutional neural network suited to small-object detection in the structured road, together with a corresponding loss function; and inputting the acquired images and the annotation data into the constructed convolutional neural network, updating the network parameters according to the loss between the output values and the target values, and finally obtaining ideal network parameters. The invention provides a brand-new network structure for the problem that current neural networks perform poorly at small-object detection. With essentially no increase in computation, the performance of small-object detection is greatly improved, and the method can be conveniently deployed in an existing intelligent driving system, so that an intelligent driving automobile can detect dangerous objects on the road at a long distance and respond in time, improving safety during driving.
Owner:TONGJI UNIV
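
The training procedure described above (input images and annotations, compute a loss against the targets, update parameters by back-propagation) follows the standard supervised pattern. A minimal PyTorch-style loop is sketched below with a placeholder network and loss; it is not the patented architecture, only the generic training scaffold the abstract refers to.

```python
import torch
import torch.nn as nn

# Placeholder backbone: the patent's network is a custom architecture for
# small-object detection; two conv layers stand in for it here.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 5, 1))        # 5 = [objectness, x, y, w, h]
criterion = nn.MSELoss()                          # stand-in for the detection loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for step in range(100):                           # loop over labelled road images
    images = torch.rand(4, 3, 64, 64)             # would be real structured-road images
    targets = torch.rand(4, 5, 64, 64)            # would be encoded box/class annotations
    loss = criterion(model(images), targets)      # loss between output and target values
    optimizer.zero_grad()
    loss.backward()                               # back-propagate to update parameters
    optimizer.step()
```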

Image super-resolution model training method and device and image super-resolution model reconstruction method and device

The invention provides an image super-resolution model training method and device and an image super-resolution model reconstruction method and device. The training method comprises the steps of: obtaining a training sample set; inputting the low-resolution images in the training sample set into a preset image super-resolution model to obtain candidate high-resolution images; performing image mode conversion on the candidate high-resolution image and the real high-resolution image respectively to obtain the corresponding visible light images; and constructing a loss function based on the differences among the two groups of visible light images and the real visible light image, and on the difference between the candidate high-resolution image and the real high-resolution image, then using it to train the preset image super-resolution model. The mapping error between the candidate high-resolution image and the corresponding real high-resolution image is computed in the visible light space and fed back into model training, so that the trained model can output high-fidelity high-resolution images even at large magnification factors.
Owner:SHENZHEN UNIV
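
A compact way to read the loss construction above is: a pixel-domain term between the candidate and real high-resolution images plus a term measured after both are converted to the visible-light modality. The sketch below assumes a hypothetical to_visible conversion module and an L1 distance with an arbitrary weight; the actual converter, distance, and weighting in the patent may differ.

```python
import torch
import torch.nn as nn

# Hypothetical modality-conversion network (e.g. non-visible -> visible light);
# the patent uses its own converter, this single conv layer is only a stand-in.
to_visible = nn.Conv2d(1, 3, 3, padding=1)
l1 = nn.L1Loss()

def sr_training_loss(candidate_hr, real_hr, lam=0.1):
    """Pixel-domain loss plus a loss measured after mapping both images
    into the visible-light space, used as feedback for the SR model."""
    pixel_term = l1(candidate_hr, real_hr)
    visible_term = l1(to_visible(candidate_hr), to_visible(real_hr))
    return pixel_term + lam * visible_term

candidate_hr = torch.rand(2, 1, 128, 128, requires_grad=True)
real_hr = torch.rand(2, 1, 128, 128)
print(sr_training_loss(candidate_hr, real_hr).item())
```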

Interactive quantization noise calculation method in compressed video super-resolution

The invention discloses an interactive quantization noise calculation method for compressed video super-resolution. The method includes the following steps: first, before the coding end quantizes the DCT coefficients of a video frame, counting the occurrence probability of the pre-quantization DCT coefficients and computing the Laplace parameter that characterizes their distribution; then writing the distribution parameter of the pre-quantization DCT coefficients of each image block into the user-data field reserved in the code stream, which is encoded and sent to the decoding end; and finally, at the decoding end, obtaining the distribution parameter from the code stream and computing the quantization noise from the probability density of the pre-quantization DCT coefficients and the post-quantization coefficients, so that the super-resolution algorithm can obtain the final high-resolution image. The method is applicable to compressed-video super-resolution algorithms in which the coding end and the decoding end interact: it improves the accuracy of the quantization noise estimate by obtaining the distribution parameter of the pre-quantization DCT coefficients at the coding end and providing it to the decoding end for the noise calculation.
Owner:WUHAN UNIV
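
The two numerical steps, estimating the Laplace parameter of the pre-quantization DCT coefficients and using it to compute the expected quantization noise for a given quantized coefficient, can be sketched as below. The sketch assumes a zero-mean Laplace model and a plain uniform quantizer with step q, which is a simplification of the codec's actual quantizer.

```python
import numpy as np

def laplace_scale(dct_coeffs):
    """ML estimate of the Laplace scale b for zero-mean pre-quantization
    DCT coefficients (the parameter carried in the user-data field)."""
    return np.mean(np.abs(dct_coeffs))

def expected_quant_noise(xq, q, b, n=2001):
    """Expected squared error E[(x - xq)^2 | x in the bin of xq] under a
    Laplace(0, b) prior, for a uniform quantizer with step q (a simplification
    of the real quantizer), computed by numerical integration."""
    lo, hi = xq - q / 2.0, xq + q / 2.0
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    pdf = np.exp(-np.abs(x) / b) / (2.0 * b)
    w = pdf / (pdf.sum() * dx)                   # conditional density inside the bin
    return float(np.sum((x - xq) ** 2 * w) * dx)

coeffs = np.random.laplace(0.0, 5.0, 10000)      # synthetic pre-quantization coefficients
b = laplace_scale(coeffs)
print(b, expected_quant_noise(xq=8.0, q=8.0, b=b))
```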

Video space-time super-resolution implementation method and device

Publication: CN112712537A (active). Benefits: efficient joint spatio-temporal super-resolution; improved visual quality.
The invention provides a video space-time super-resolution implementation method and device. The method comprises the steps of: performing edge enhancement on the video frames of a video to obtain edge-enhanced video frames; inputting pairs of adjacent edge-enhanced video frames into an optical flow estimation module to obtain bidirectional optical flows; computing an estimated optical flow from the bidirectional optical flow, and inputting the estimated optical flow together with the bidirectional optical flow into a bidirectional prediction module to obtain a predicted optical flow; computing an intermediate frame for time-domain super-resolution from the predicted optical flow and the corresponding video frames, and inserting the intermediate frame at the corresponding position in the video; performing spatial-domain super-resolution on the intermediate frame and the corresponding video frames through a recurrent super-resolution network to obtain a plurality of reconstructed frames; and repeating these steps until the space-time super-resolution of the whole video is completed. The invention has the beneficial effects that joint space-time super-resolution can be performed effectively on the video and the visual quality of the video is improved.
Owner:SHENZHEN UNIV
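
The temporal half of the pipeline, predicting the flow at an intermediate time from the flow between two frames and warping a frame with it, can be illustrated with the crude linear-flow approximation below (similar in spirit to Super SloMo). The learned optical-flow and bidirectional-prediction modules of the patent are replaced here by scaling the forward flow by -t and bilinear warping.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(frame, flow):
    """Backward-warp a grayscale frame with a dense flow field (H, W, 2):
    flow[..., 0] is horizontal, flow[..., 1] vertical displacement."""
    h, w = frame.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [yy + flow[..., 1], xx + flow[..., 0]]   # source sample positions
    return map_coordinates(frame, coords, order=1, mode="nearest")

def intermediate_frame(frame0, flow_0to1, t=0.5):
    """Estimate the frame at time t in (0, 1) by approximating the flow from
    time t back to frame 0 as -t times the forward flow and warping frame0
    (a crude stand-in for the patent's bidirectional prediction module)."""
    return warp(frame0, -t * flow_0to1)

frame0 = np.random.rand(64, 64)
flow = np.ones((64, 64, 2)) * 2.0                     # toy uniform 2-pixel motion
mid = intermediate_frame(frame0, flow, t=0.5)
print(mid.shape)
```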

Video super-resolution method and system for cloud fusion

The invention provides a cloud fusion-oriented video super-resolution method and system, belonging to the field of video processing. The system comprises a restoration effect prediction module, a task dynamic scheduling module, a mobile terminal processing module, a cloud processing module and a frame fusion module. The method comprises the following steps: collecting features of the current low-resolution video frame and inputting them into the restoration effect prediction module, which predicts the super-resolution effect of the current frame after bicubic interpolation and after a video restoration model based on an enhanced deformable convolutional network; determining through the task dynamic scheduling module whether the current low-resolution video frame is offloaded to the cloud processing module for super-resolution restoration; and inputting the cloud super-resolved video frame and the locally processed video frame into the frame fusion module to obtain a high-definition video after super-resolution restoration. The method achieves super-resolution processing of low-resolution videos while exploiting cloud resources, and has the advantages of real-time, fast and accurate restoration and low memory resource occupation.
Owner:SHAANXI NORMAL UNIV
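
The task dynamic scheduling step reduces to a per-frame decision: compare the predicted restoration quality of the cheap local path (bicubic) against the cloud model and offload only when the predicted gain justifies it. The sketch below uses placeholder quality predictors and an arbitrary PSNR-gain threshold; the real restoration effect prediction module and scheduling policy are not reproduced.

```python
def schedule_frame(features, predict_bicubic_psnr, predict_cloud_psnr,
                   gain_threshold=2.0, cloud_budget_ok=True):
    """Decide whether a low-resolution frame is processed locally (bicubic)
    or offloaded to the cloud model, based on the predicted PSNR gain.
    The predictors stand in for the patent's restoration effect module."""
    local = predict_bicubic_psnr(features)
    cloud = predict_cloud_psnr(features)
    offload = cloud_budget_ok and (cloud - local) >= gain_threshold
    return "cloud" if offload else "local"

# Toy usage with constant predictors:
decision = schedule_frame(
    features={"blur": 0.3, "motion": 0.1},
    predict_bicubic_psnr=lambda f: 28.0,
    predict_cloud_psnr=lambda f: 31.5,
)
print(decision)   # "cloud": predicted gain of 3.5 dB exceeds the 2 dB threshold
```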

Pupil filter far-field super-resolution imaging system and pupil filter design method

The pupil filter far-field super-resolution imaging system and the pupil filter design method belong to the field of super-resolution imaging technology, and aim to solve the problem that the final super-resolution imaging quality is seriously degraded in existing field-diaphragm-scanning far-field super-resolution imaging systems. In the system, a front-end optical objective lens, a field diaphragm, a collimating lens group, a pupil filter, an imaging lens and a CCD detector are arranged in sequence along the light incidence direction. The front-end optical objective lens images distant scenes at the intermediate image plane of the system; the field diaphragm is placed at the rear focal plane of the front-end optical objective lens, that is, at the intermediate image plane of the entire system; the front focus of the collimating lens group lies at the intermediate image plane of the system; the pupil filter is placed at the exit pupil of the combined front-end objective lens and collimating lens group, with the position and size of its effective aperture coinciding with the exit pupil surface; the imaging lens performs secondary imaging of the light passing through the pupil filter; and the target plane of the CCD detector coincides with the secondary imaging plane.
Owner:CHANGCHUN UNIV OF SCI & TECH
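
The effect of a pupil filter on far-field resolution can be previewed numerically: the incoherent point spread function is the squared magnitude of the Fourier transform of the pupil function, so narrowing the PSF main lobe is the super-resolution goal. The sketch below uses an arbitrary three-zone amplitude filter purely as an example; it is not the patented filter design or design method.

```python
import numpy as np

# Build a circular pupil and a simple three-zone amplitude filter
# (illustrative only; the patented filter design is not reproduced here).
n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y) / (n // 4)                 # normalized pupil radius, aperture at r = 1

clear_pupil = (r <= 1.0).astype(float)
filtered = np.where(r <= 0.4, 1.0, np.where(r <= 0.8, 0.0, 1.0)) * clear_pupil

def psf(pupil):
    """Incoherent PSF = |FFT of pupil function|^2, peak-normalized."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.max()

# Compare the central lobe along one axis: a well-chosen filter narrows the
# main lobe (super-resolution) at the cost of stronger sidelobes.
center = n // 2
print(psf(clear_pupil)[center, center:center + 10])
print(psf(filtered)[center, center:center + 10])
```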

Super-resolution reconstruction method of lung 4D-CT images based on registration

Provided is a registration-based super-resolution reconstruction method for lung 4D-CT images. The method sequentially comprises the following steps: (1) a sequence of low-resolution images at different phases is obtained from the lung 4D-CT data; (2) the image at one phase in the sequence is selected as the reference image, interpolation amplification is carried out on it, and the interpolated result serves as the initial estimate f<0> of the reconstruction result; (3) the corresponding low-resolution images at the other phases in the sequence serve as floating images, interpolation amplification is carried out on them, and the motion deformation fields between the interpolated floating images and the initial estimate f<0> are estimated respectively; (4) a high-resolution lung 4D-CT image is reconstructed on the basis of the motion deformation fields obtained in step (3). The multi-plane display images of the lung 4D-CT data obtained with this method are clear, anatomical structures are markedly better delineated, the image resolution is improved, and the quality of the multi-plane display of the lung 4D-CT data is effectively improved.
Owner:SOUTHERN MEDICAL UNIVERSITY
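
Steps (2)-(4) above can be sketched on a single 2D slice: interpolate the reference phase to get f<0>, interpolate the floating phases, estimate a deformation field from each to f<0>, and fuse the motion-compensated phases. In the sketch below the deformable registration is a placeholder returning a zero field and the fusion is a plain average; a real implementation would use an actual registration algorithm and a proper reconstruction step.

```python
import numpy as np
from scipy.ndimage import zoom, map_coordinates

def upsample(img, factor=2):
    """Interpolation amplification (cubic) of a low-resolution slice."""
    return zoom(img, factor, order=3)

def estimate_deformation(moving, reference):
    """Placeholder for deformable registration: returns a zero motion field.
    A real implementation would estimate the deformation from the moving
    image to the reference (e.g. demons or B-spline registration)."""
    return np.zeros(reference.shape + (2,))

def warp(img, field):
    """Warp a slice with a dense field (row displacement, column displacement)."""
    h, w = img.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(img, [yy + field[..., 0], xx + field[..., 1]], order=3)

# Steps (1)-(2): reference phase -> initial estimate f0 by interpolation.
phases = [np.random.rand(32, 32) for _ in range(4)]    # toy LR slices of 4 phases
f0 = upsample(phases[0])
# Steps (3)-(4): warp the other upsampled phases onto f0 and fuse them.
warped = [f0] + [warp(upsample(p), estimate_deformation(upsample(p), f0))
                 for p in phases[1:]]
hr = np.mean(warped, axis=0)                            # simple fusion by averaging
print(hr.shape)
```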