Method and equipment for determining sequence of radar echo prediction frames

A technology for radar echo frame prediction, applicable to radio wave measurement systems, radio wave reflection/re-radiation, measurement devices and other fields. It addresses the problem of low accuracy in radar echo prediction and achieves a good practical effect.

Active Publication Date: 2020-11-13
蔻斯科技(上海)有限公司


Abstract

The invention discloses a method and equipment for determining a radar echo prediction frame sequence. The method comprises: obtaining a radar echo original frame sequence containing N consecutive frames and superposing the frames along the channel direction to serve as a first input value; superposing the radar echo original frame sequence and the frame number of the current radar echo prediction frame along the channel direction to serve as a third input value; superposing the current radar echo prediction frame and the (N-1) consecutive radar echo prediction frames before it along the channel direction to obtain a second input value; inputting the second input value and the third input value into a trained neural network to obtain the next radar echo prediction frame; and finally executing the above operations cyclically to iteratively predict radar echo prediction frames until a radar echo prediction frame sequence is obtained. A radar echo image prediction frame sequence with high accuracy over a preset time period can thus be determined. The method can be used for short-term nowcasting and achieves a good practical effect.

Examples

Example Embodiment

[0051] The invention will be described in further detail below with reference to the accompanying drawings.
[0052] In a typical configuration of this application, each module of the system and the trusted party includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
[0053] Memory may include non-permanent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer readable medium.
[0054] Computer readable media, including permanent and non-permanent, removable and non-removable media, can store information by any method or technology. Information can be computer readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, CD-ROM, DVD or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices. As defined herein, computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
[0055] In order to further explain the technical measures and effects of this application, the technical scheme of this application will be clearly and completely described below with reference to the drawings and preferred embodiments.
[0056] Figure 1 shows a flowchart of a method for determining a radar echo prediction frame sequence according to an aspect of the present application, wherein the method of one embodiment includes:
[0057] S11, acquiring a radar echo original frame sequence containing continuous N frames, superposing each frame of the radar echo original frame sequence according to the channel direction, and taking the superposition result as a first input value, wherein N is a preset value;
[0058] S12, superposing the original radar echo frame sequence and the frame number of the current radar echo prediction frame according to the channel direction, and taking the superposition result as a third input value;
[0059] S13, superposing the current radar echo prediction frame with its previous (N-1) consecutive radar echo prediction frames in the channel direction, and taking the superposition result as a second input value, wherein, if the number of previous consecutive radar echo prediction frames is less than (N-1), a number of original frames is selected from the radar echo original frame sequence until the number of frames included in the second input value reaches N, and if there is no current radar echo prediction frame, the first input value is taken as the second input value;
[0060] S14, inputting the second input value and the third input value into a trained neural network to obtain a next radar echo prediction frame;
[0061] S15, cyclically performing the above operations, iteratively predicting radar echo prediction frames until the number of radar echo prediction frames reaches the preset length of the radar echo prediction frame sequence, and combining the obtained preset number of radar echo prediction frames into the radar echo prediction frame sequence.
[0062] In this embodiment, the method is executed by a device 1, which is a computer device and/or a cloud. The computer device includes but is not limited to a personal computer, a notebook computer, an industrial computer, a network host, a single network server, and a set of multiple network servers. The cloud consists of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
[0063] Here, the computer equipment and/or cloud are only examples, and other existing or future equipment and/or resource sharing platforms, if applicable to this application, should also be included in the protection scope of this application, and are hereby incorporated by reference.
[0064] In this embodiment, in step S11, the radar echo original frame sequence containing N consecutive frames can be acquired by the device 1 directly receiving the latest real-time weather radar data generated by a weather radar device, or by copying and transmitting over the network the latest real-time weather radar data stored on other devices. The acquisition mode of the radar echo original frame sequence is not limited here, and any acquisition mode applicable to this application should also be included in the protection scope of this application.
[0065] The acquired radar echo original frame sequence contains N consecutive frames, which can be taken from the latest real-time weather radar data either continuously or at intervals. As those skilled in the art can understand, in this embodiment the time interval of the input frames is the same as that of the output frames. Generally speaking, in order to ensure that the motion information of the clouds is captured for prediction, the time interval between two adjacent frames of the acquired N consecutive frames should not be too long, and preferably should not exceed a preset maximum. In one example, the time interval may be 5-6 minutes.
[0066] Since the latest real-time weather radar data may contain abnormal data such as noise and skipped frames, it is necessary to determine whether the data needs preprocessing before it is used. For example, different types of noise are removed by different filtering methods, and frame-skipping data is discarded after being identified and confirmed.
[0067] The acquired radar echo original frame sequence is usually obtained based on Doppler radar and consists of N consecutive radar echo images in grayscale mode. Here, the acquisition of grayscale radar echo images based on Doppler radar is only an example, and other existing or future ways of acquiring radar echo images in grayscale mode, if applicable to this application, should also be included in the protection scope of this application.
[0068] Here N is a preset number; for example, N is preset to 5, that is, the radar echo original frame sequence contains five consecutive radar echo images.
[0069] The radar echo images of the acquired radar echo original frame sequence are superimposed in the channel direction and used as the first input value. For example, the five consecutive grayscale radar echo images that make up the radar echo original frame sequence are superimposed in the channel direction and then used as the first input value.
[0070] In this embodiment, in step S12, the frame number of the current radar echo prediction frame refers to the frame number of the currently obtained radar echo prediction frame. For example, if the currently obtained radar echo prediction frame is the first radar echo prediction frame, then the frame number of the current radar echo prediction frame is 1; If the currently obtained radar echo prediction frame is the 10th radar echo prediction frame, then the frame number of the current radar echo prediction frame is 10. The frame number of the original frame sequence of the radar echo image can also be preset to 0, and the frame number of the current radar echo prediction frame is the frame difference between the currently obtained radar echo prediction frame and the radar echo original frame sequence.
[0071] The radar echo original frame sequence and the frame number of the current radar echo prediction frame are superimposed in the channel direction as the third input value. Here, the superposition in the channel direction can be performed by first generating, through the network, a grayscale image of the same size as each frame of the radar echo original frame sequence, assigning the frame number to every pixel of this grayscale image, and then superimposing the grayscale image and the radar echo original frame sequence in the channel direction as the third input value.
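As an illustrative sketch of the channel-direction superposition described above (assuming grayscale frames stored as NumPy arrays of shape (H, W); the helper names build_first_input and build_third_input are hypothetical, not part of the disclosure), the first and third input values could be assembled as follows:

```python
import numpy as np

def build_first_input(original_frames):
    """Stack the N grayscale radar echo frames of the original frame sequence
    along the channel direction."""
    return np.stack(original_frames, axis=0).astype(np.float32)        # shape (N, H, W)

def build_third_input(original_frames, current_pred_frame_number):
    """Make a grayscale plane of the same size as each original frame, with every
    pixel set to the frame number of the current radar echo prediction frame,
    then stack it with the original frames along the channel direction."""
    h, w = original_frames[0].shape
    number_plane = np.full((1, h, w), float(current_pred_frame_number), dtype=np.float32)
    frames = np.stack(original_frames, axis=0).astype(np.float32)      # shape (N, H, W)
    return np.concatenate([frames, number_plane], axis=0)              # shape (N+1, H, W)
```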
[0072] In this embodiment, in step S13, the current radar echo prediction frame is superimposed with its previous (N-1) consecutive radar echo prediction frames in the channel direction, and the superimposed result is taken as the second input value. If the number of previous consecutive radar echo prediction frames is less than (N-1), a number of original frames are selected from the radar echo original frame sequence until the number of frames included in the second input value reaches N; and if there is no current radar echo prediction frame, the first input value is taken as the second input value.
[0073] The original frames selected from the radar echo original frame sequence can be chosen randomly or according to a preset rule. When selecting according to a preset rule, for example, the frames can be selected in the chronological order in which they were acquired, or in the reverse chronological order.
[0074] For example, N is preset to 5, that is, the radar echo original frame sequence contains five consecutive radar echo images. If the first radar echo prediction frame is about to be predicted, there is no current radar echo prediction frame yet, so the first input value is taken as the second input value. If the current radar echo prediction frame is the 4th radar echo prediction frame, then the current radar echo prediction frame, the three previously predicted radar echo prediction frames, and one frame selected from the radar echo original frame sequence, 5 frames in total, are superimposed in the channel direction, and the superposition result is taken as the second input value. If the current radar echo prediction frame is the 15th radar echo prediction frame, the current radar echo prediction frame and the previously predicted 11th-14th radar echo prediction frames are superimposed in the channel direction, and the superimposed result is taken as the second input value.
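A minimal sketch of assembling the second input value, assuming the same NumPy frame representation as above and, for the top-up rule, the original frames closest in time to the prediction window (one possible preset rule; the function name build_second_input is hypothetical):

```python
import numpy as np

def build_second_input(original_frames, predicted_frames, n=5):
    """Superimpose the current radar echo prediction frame with its previous (N-1)
    consecutive prediction frames; if fewer than N prediction frames exist, top up
    with original frames, and if no prediction frame exists yet, fall back to the
    first input value (the N original frames)."""
    if not predicted_frames:                                  # no current prediction frame yet
        recent = list(original_frames[-n:])                   # equals the first input value
    else:
        recent = list(predicted_frames[-n:])                  # current frame + up to N-1 predecessors
        missing = n - len(recent)
        if missing > 0:
            # top up from the original sequence (here: the most recent original frames)
            recent = list(original_frames[-missing:]) + recent
    return np.stack(recent, axis=0).astype(np.float32)        # shape (N, H, W)
```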
[0075] In this embodiment, in step S14, the second input value and the third input value are input into the trained neural network to obtain the next radar echo prediction frame.
[0076] Optionally, inputting the second input value and the third input value into the trained neural network to obtain the next radar echo prediction frame further comprises:
[0077] Inputting the second input value and the third input value into a trained neural network to obtain a currently predicted optical flow image;
[0078] Warping the current radar echo prediction frame after optical flow fusion with the currently predicted optical flow image, performing optical flow fusion of the result with the next radar echo prediction frame to obtain the next radar echo prediction frame after optical flow fusion, and updating the next radar echo prediction frame to this optical-flow-fused frame, wherein, if there is no current radar echo prediction frame after optical flow fusion, the current radar echo prediction frame is used in its place.
[0079] Optionally, the neural network is a fully convolutional neural network, wherein the fully convolutional neural network comprises a preset number of residual network modules, convolution downsampling layers and deconvolution upsampling layers.
[0080] Optionally, inputting the second input value and the third input value into the trained neural network to obtain the next radar echo prediction frame comprises:
[0081] Respectively passing the second input value and the third input value through a residual network module and a convolution downsampling layer to obtain respective image features;
[0082] Fusing the respective image features by element-wise addition of the corresponding feature map pixels to obtain fused image features;
[0083] Respectively passing the fused image features through the residual network module and the deconvolution upsampling layer to obtain the next radar echo prediction frame.
[0084] The neural network structure diagram of one embodiment is shown in Figure 2. In this embodiment, the recurrent neural network (RNN) commonly used for this task is replaced by a fully convolutional network, which serves as the feature extractor, and frame-number difference information is added for auxiliary training. This reduces the error accumulated by RNN networks when predicting longer frame sequences, improves the accuracy of the neural network in predicting the radar echo image frame sequence, and avoids the high training cost of RNN networks. Here, RNN network refers to recurrent neural networks based on the RNN structure, such as RNN, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), BLSTM (Bidirectional Long Short-Term Memory) and their related variants.
[0085] Specifically, based on the current radar echo prediction frame, the network first generates a grayscale image of the same size as each frame of the radar echo original frame sequence, assigns the frame number of the current radar echo prediction frame to every pixel of this grayscale image, then superimposes the grayscale image and the radar echo original frame sequence in the channel direction and feeds the result into one input branch of the neural network as the third input value. N consecutive radar echo prediction frames are superimposed in the channel direction, and the superimposed result is fed into the other input branch of the neural network as the second input value. The two input branches have the same network structure; both contain two Conv Down Sample (convolution downsampling) layers and four ResBlocks (residual blocks), which are used to capture radar echo image features.
[0086] The structure of each ResBlock is shown in Figure 3 and contains two Conv (convolution) layers. The output of the first Conv layer is activated by a ReLU (rectified linear unit) function and then input into the second Conv layer. The output of the second Conv layer undergoes an Elem-wise sum (element-wise sum) with the output of the Identity module at corresponding positions, and the result is used as the input of the next ResBlock.
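A minimal PyTorch sketch of the ResBlock just described; the channel count and kernel size are assumptions, since Table 1 is not reproduced in this text:

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Two Conv layers; the first output passes through ReLU, and the second output
    is summed element-wise with the Identity (skip) branch."""
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.conv1 = nn.Conv2d(channels, channels, kernel_size, padding=padding)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size, padding=padding)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))   # first Conv + ReLU activation
        out = self.conv2(out)            # second Conv
        return out + x                   # Elem-wise sum with the identity branch
```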
[0087] The number of Conv Down Sample layers and ResBlocks in an input branch is not strictly limited; provided the performance of device 1 can support it, more Conv Down Sample layers and ResBlocks can be used in the input branch. Increasing the number of Conv Down Sample layers and ResBlocks within a certain range gives the network a stronger ability to fit the data and makes it more accurate overall.
[0088] Then, the image features extracted by the two input branches are fused by an Elem-wise sum (element-wise sum) over the corresponding pixels of the image feature maps.
[0089] Then, the fused image features are input into the output branch. The network structure of the output branch includes four ResBlocks and two Deconv Up Sample (deconvolution upsampling) layers. The output branch predicts the radar echo image based on the input fused image features. The ResBlock structure in the output branch is the same as that in the input branches. The predicted radar echo images are superimposed by channels and used as the second input value to iteratively predict the subsequent radar echo images.
[0090] The number of Deconv Up Sample layers and ResBlocks in the output branch is not strictly limited; provided the performance of device 1 can support it, more Deconv Up Sample layers and ResBlocks can be used in the output branch. Increasing the number of Deconv Up Sample layers and ResBlocks within a certain range gives the network a stronger ability to fit the data and makes it more accurate overall.
[0091] The fused image features are also input into a Full Connect (fully connected) layer, which classifies the frame number of the predicted frame; the obtained frame number is used to calculate the loss function.
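A hedged PyTorch sketch of the overall structure described in paragraphs [0085] to [0091], reusing the ResBlock sketch above. The channel counts, kernel sizes, strides, image size and class names are assumptions (the actual parameters would come from Table 1), not the patent's exact configuration:

```python
import torch.nn as nn

class InputBranch(nn.Module):
    """Two Conv Down Sample layers followed by four ResBlocks."""
    def __init__(self, in_channels, base_channels=64):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.res = nn.Sequential(*[ResBlock(base_channels) for _ in range(4)])

    def forward(self, x):
        return self.res(self.down(x))

class OutputBranch(nn.Module):
    """Four ResBlocks followed by two Deconv Up Sample layers."""
    def __init__(self, channels=64, out_channels=1):
        super().__init__()
        self.res = nn.Sequential(*[ResBlock(channels) for _ in range(4)])
        self.up = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.up(self.res(x))

class RadarEchoPredictor(nn.Module):
    def __init__(self, n_frames=5, base_channels=64, num_classes=20, image_size=128):
        super().__init__()
        self.branch_pred = InputBranch(n_frames, base_channels)       # second input value: N channels
        self.branch_orig = InputBranch(n_frames + 1, base_channels)   # third input value: N+1 channels
        self.out_image = OutputBranch(base_channels, out_channels=1)  # next radar echo prediction frame
        feat = image_size // 4                                        # after two 2x downsamplings
        self.classifier = nn.Linear(base_channels * feat * feat, num_classes)  # Full Connect layer

    def forward(self, second_input, third_input):
        fused = self.branch_pred(second_input) + self.branch_orig(third_input)  # Elem-wise feature fusion
        pred_frame = self.out_image(fused)                            # predicted radar echo image
        frame_number_logits = self.classifier(fused.flatten(1))       # frame-number classification
        return pred_frame, frame_number_logits
```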
[0092] The network parameters corresponding to the neural network structure of Figure 2 in one embodiment are shown in Table 1 below.
[0093] Table 1
[0094]
[0095] Explanation: 1. Conv – convolution; Conv Down Sample – convolution downsampling; Deconv Up Sample – deconvolution upsampling; Identity – identity (the input and output of this module are of the same size).
[0096] 2. The convolution kernel channels of Conv Down Sample 1 of the two input branches are different: the convolution kernel channel number corresponding to the third input value is N+1, and that corresponding to the second input value is N.
[0097] 3. The network parameters of Conv1 and Conv2 in each ResBlock are the same.
[0098] The neural network structure of another embodiment is shown in Figure 4. In this embodiment, on the basis of the neural network shown in Figure 2, an output branch is added to predict the optical flow of the radar echo images by integrating the Flownet2 method. This branch is used to supervise the network in learning the optical flow, to integrate the optical flow information into the neural network, and to improve the neural network's capture of the motion in the radar echo original frame sequence, so as to obtain accurate motion information.
[0099] The network structure of this output branch is basically the same as that of the output branch in Figure 2. Its outputs are the optical flow image (the same size as the predicted image) and the optical flow prediction mask (accuracy) between the predicted radar echo image and the radar echo image predicted in the previous frame. These outputs are fused, using the Flownet2 algorithm, with the radar echo prediction image output by the other output branch to obtain the radar echo prediction image after optical flow fusion, which improves the prediction accuracy. Among them, the number of convolution kernels of DeConv Up Sample2, which outputs the optical flow prediction accuracy, is 1, and the number of convolution kernels of DeConv Up Sample3, which outputs the predicted optical flow image, is 2.
[0100] Optionally, the loss function formula of the trained neural network is:
[0101] L(F) = min_F ( max_{DI} LI(F, DI) + max_{DV} LV(F, DV) ) + λw·Lw(F) + λvgg·Lvgg(F)
[0102] Where L(F) represents the loss function output of the radar echo prediction frame sequence after the optical flow is fused;
[0103] DI and DV denote an image discriminator and a video discriminator, respectively;
[0104] LI and LV respectively represent the average loss function outputs corresponding to the generated image sequence and the generated video;
[0105] Lw represents the average loss function output between the predicted optical flow image sequence and the optical flow obtained by the Flownet2 algorithm, evaluated according to the Flownet2 optical flow prediction accuracy;
[0106] Lvgg represents the mean absolute error loss function output between the VGG features obtained by inputting the radar echo prediction frame sequence after optical flow fusion into a pre-trained VGG classification network and the VGG features of the real sequence corresponding to the prediction frame sequence;
[0107] λw and λvgg are preset hyperparameters.
[0108] The neural network is trained in the manner of a GAN (Generative Adversarial Network); the model corresponds to the generator in the GAN, and during training it must be trained adversarially together with the discriminators. In order to alleviate the image blurring problem of the GAN loss function, the PatchGAN loss function is adopted, which pays more attention to matching detailed features and makes the generated images clearer. Lw is the L1 (mean absolute error) loss between the predicted optical flow image and the optical flow and mask obtained by the Flownet2 algorithm, where the optical flow loss is calculated only for pixels with accurate optical flow prediction. In addition, in order to make the training of the GAN more stable, the VGG feature matching loss Lvgg is used as an auxiliary loss.
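An illustrative sketch of the Lw and Lvgg terms as described above (assuming PyTorch tensors; the function names and the choice of VGG feature maps are assumptions, not the patent's exact implementation):

```python
import torch.nn.functional as F

def flow_loss_lw(pred_flow, flownet2_flow, flownet2_mask):
    """Lw: L1 (mean absolute error) loss between the predicted optical flow and the
    Flownet2 flow, computed only at pixels marked accurate by the Flownet2 mask."""
    masked_diff = (pred_flow - flownet2_flow).abs() * flownet2_mask
    return masked_diff.sum() / flownet2_mask.sum().clamp(min=1.0)

def vgg_matching_loss_lvgg(vgg_feats_pred, vgg_feats_real):
    """Lvgg: mean absolute error between the VGG features of the optical-flow-fused
    prediction frames and the VGG features of the corresponding real frames."""
    losses = [F.l1_loss(p, r) for p, r in zip(vgg_feats_pred, vgg_feats_real)]
    return sum(losses) / len(losses)
```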
[0109] For the classification of frame numbers, cross entropy is used as the loss function in the back propagation algorithm, and the loss function formula is:
[0110] Lclass = -Σ_{c=1}^{M} yc·log(pc)
[0111] In which, M represents the number of possible frame-number classes of the predicted frame, which is the preset number of radar echo prediction frames to be predicted in this application; yc indicates whether the prediction result for frame number c is correct, and its value is 0 (incorrect prediction) or 1 (correct prediction); pc represents the probability that the predicted frame number is c.
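A small sketch of the frame-number classification loss, assuming the Full Connect layer outputs one score per possible frame number; PyTorch's cross_entropy combines the softmax and the -Σ yc·log(pc) term above:

```python
import torch
import torch.nn.functional as F

# logits: (batch, M) frame-number scores from the Full Connect layer,
# targets: (batch,) true frame numbers in [0, M-1]
logits = torch.randn(4, 20)                  # example: M = 20 possible frame numbers
targets = torch.tensor([0, 3, 7, 19])
frame_number_loss = F.cross_entropy(logits, targets)
print(frame_number_loss.item())
```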
[0112] It should be noted that although the network structure of each input branch and output branch of the neural network is the same, the related parameters obtained after training are different.
[0113] In this embodiment, in step S15, the above operations are performed cyclically, and radar echo prediction frames are iteratively predicted until the number of radar echo prediction frames reaches the preset length of the radar echo prediction frame sequence, and the obtained preset number of radar echo prediction frames are combined into the radar echo prediction frame sequence.
[0114] Step S12, step S13 and step S14 are circularly executed, radar echo prediction images are iteratively predicted, the predicted radar echo prediction images are accumulated to a preset number, and the obtained preset number of radar echo prediction frames are combined into the radar echo prediction frame sequence.
[0115] The preset number should be greater than N. For example, by using 5 consecutive radar echo original frames, a 20-frame radar echo prediction frame sequence is obtained through the iterative prediction of the neural network.
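A minimal sketch of the iterative prediction loop (steps S12 to S14), reusing the hypothetical helpers and model class sketched above; the frame number of the original sequence is treated as 0:

```python
import torch

def predict_sequence(model, original_frames, n=5, num_pred_frames=20):
    """Iteratively predict a radar echo prediction frame sequence of num_pred_frames frames."""
    predicted = []
    with torch.no_grad():
        for _ in range(num_pred_frames):
            frame_number = len(predicted)                     # 0 before the first prediction frame exists
            third = build_third_input(original_frames, frame_number)
            second = build_second_input(original_frames, predicted, n=n)
            second_t = torch.from_numpy(second).unsqueeze(0)  # (1, N, H, W)
            third_t = torch.from_numpy(third).unsqueeze(0)    # (1, N+1, H, W)
            next_frame, _ = model(second_t, third_t)
            predicted.append(next_frame.squeeze(0).squeeze(0).numpy())
    return predicted
```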
[0116] If five consecutive radar echo original frames are iteratively predicted by the trained neural network to get 20 radar echo prediction frames, the overall loss function values corresponding to the 20 radar echo prediction frames should be calculated and judged whether they meet the preset threshold during the training. Furthermore, in order to add randomness to the training, the loss function values of randomly selected radar echo prediction frames can also be calculated and judged whether they meet the preset threshold, or the loss function values of several consecutive radar echo prediction frames can be calculated and judged whether they meet the preset threshold.
[0117] Optionally, warping the current radar echo prediction frame after optical flow fusion with the currently predicted optical flow image and performing optical flow fusion with the next radar echo prediction frame to obtain the next radar echo prediction frame after optical flow fusion includes:
[0118] I_f(t+1) = m * Warp(I_of, I_f(t)) + (1-m) * I_g(t+1),
[0119] Where I_f(t+1) represents the next radar echo prediction frame image after optical flow fusion;
[0120] m indicates the optical flow prediction accuracy (mask), with the same size as the radar echo prediction frame, and the value of each pixel is 0 or 1. When m is 1 at a pixel, the fused image takes the pixel value after optical flow warping at that pixel;
[0121] Warp represents the optical flow warping function, whose inputs are a radar echo prediction frame and the corresponding predicted optical flow image and whose output is the image after optical flow warping;
[0122] I_of represents the currently predicted optical flow image;
[0123] I_f(t) represents the current radar echo prediction frame image after optical flow fusion;
[0124] I_g(t+1) represents the next radar echo prediction frame image before optical flow fusion.
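A sketch of the fusion formula above in PyTorch; the warp helper built on grid_sample is an assumption about how the optical flow warping could be implemented, since the patent only specifies the Warp function abstractly:

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image (B, 1, H, W) with an optical flow field (B, 2, H, W) given in pixels."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(image.device)   # (2, H, W), x then y
    coords = base.unsqueeze(0) + flow                               # displaced sampling positions
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0                     # normalize to [-1, 1]
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                    # (B, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)

def fuse_with_optical_flow(i_of, m, i_f_t, i_g_t1):
    """I_f(t+1) = m * Warp(I_of, I_f(t)) + (1 - m) * I_g(t+1)."""
    return m * warp(i_f_t, i_of) + (1.0 - m) * i_g_t1
```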
[0125] Those skilled in the art can understand that by integrating optical flow information, the capture of cloud movement by the network can be improved, and at the same time, the clarity of the predicted radar echo image can be improved by using methods such as GAN.
[0126] Optionally, the method for determining the radar echo prediction frame sequence further comprises:
[0127] Using the radar echo prediction frame sequence for weather forecasting within a preset time period.
[0128] For example, five consecutive radar echo original frames are used for iterative prediction by the trained neural network to obtain a radar echo prediction frame sequence consisting of 20 radar echo prediction frames, and this radar echo prediction frame sequence is then used for short-term nowcasting.
[0129] Figure 5 shows a schematic diagram of an apparatus for determining a radar echo prediction frame sequence according to another aspect of the present application, wherein the apparatus comprises:
[0130] The first device 51 is used for acquiring a radar echo original frame sequence containing continuous N frames, superimposing each frame of the radar echo original frame sequence according to the channel direction, and taking the superimposition result as a first input value, wherein N is a preset value;
[0131] The second device 52 is used for superimposing the radar echo original frame sequence and the frame number of the current radar echo prediction frame in the channel direction, and taking the superimposition result as the third input value;
[0132] The third device 53 is configured to superimpose the current radar echo prediction frame with its previous (N-1) consecutive radar echo prediction frames in the channel direction, and take the superimposed result as the second input value, wherein, if the number of previous consecutive radar echo prediction frames is less than (N-1), a corresponding number of original frames is selected from the radar echo original frame sequence until the number of frames included in the second input value reaches N, and if there is no current radar echo prediction frame, the first input value is taken as the second input value.
[0133] The fourth device 54 is used for inputting the second input value and the third input value into the trained neural network to obtain the next radar echo prediction frame;
[0134] The fifth device 55 is used for cyclically executing the operations of the above devices, iteratively predicting radar echo prediction frames until the number of radar echo prediction frames reaches the preset length of the radar echo prediction frame sequence, and combining the obtained preset number of radar echo prediction frames into the radar echo prediction frame sequence.
[0135] According to yet another aspect of the present application, there is also provided a computer-readable medium, which stores computer-readable instructions that can be executed by a processor to implement the aforementioned method.
[0136] According to another aspect of the present application, there is also provided an apparatus for determining a radar echo prediction frame sequence, wherein the apparatus comprises:
[0137] One or more processors; and
[0138]A memory that stores computer readable instructions that, when executed, cause the processor to perform the operations of the aforementioned method.
[0139] For example, the computer readable instructions, when executed, cause the one or more processors to: acquire a radar echo original frame sequence containing N consecutive frames and superimpose the frames in the channel direction as the first input value; superimpose the radar echo original frame sequence with the frame number of the current radar echo prediction frame in the channel direction as the third input value; superimpose the current radar echo prediction frame and its previous (N-1) consecutive radar echo prediction frames, N frames in total, in the channel direction as the second input value; input the second input value and the third input value into the trained neural network to obtain the next radar echo prediction frame; and finally execute the above operations cyclically, iteratively predicting radar echo prediction frames until the radar echo prediction frame sequence is obtained.
[0140] It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, but can be implemented in other specific forms without departing from the spirit or essential characteristics of the present invention. Therefore, the embodiments should be regarded as illustrative and non-restrictive in all respects. The scope of the invention is defined by the appended claims rather than the above description, and therefore all changes that come within the meaning and range of equivalents of the claims are intended to be embraced by the invention. Any reference signs in the claims should not be regarded as limiting the claims involved. In addition, it is obvious that the word "including" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or devices stated in the device claims can also be realized by software or hardware by one unit or device. The first, second and other words are used to indicate names, but do not indicate any particular order.
