Image recognition system, method and device based on image channel correlation

An image recognition technology based on image channel correlation, applied in the field of digital images. It addresses problems such as declining identification accuracy, and achieves the effects of improving description ability and recognition accuracy.

Pending Publication Date: 2020-06-02
INST OF AUTOMATION CHINESE ACAD OF SCI

AI-Extracted Technical Summary

Problems solved by technology

Post-processing operations such as image interpolation and image compression can eliminate image generation traces to a certain extent...


Abstract

The invention belongs to the technical field of digital images, and particularly relates to an image recognition system, method and device based on image channel correlation. The system comprises a plurality of hybrid feature extraction modules for obtaining fusion features of a to-be-identified image that mix channel and neighborhood correlation; a feature fusion module for superposing the fusion features output by the plurality of hybrid feature extraction modules into a total feature map and fusing the total feature map into a high-dimensional feature representation through a plurality of convolutions; and an image classification module for acquiring the classification probabilities of the natural image and the rendered image based on the high-dimensional feature representation, and outputting the class with the larger classification probability as the recognition result. The invention improves the recognition accuracy and efficiency of the convolutional network for rendered images.

Application Domain

Character and pattern recognition; Neural architectures

Technology Topic

Feature mapping; Image based +4

Image

  • Image recognition system, method and device based on image channel correlation

Examples

  • Experimental program(1)

Example Embodiment

[0050] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
[0051] The application will be further described in detail below with reference to the drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the related invention, but not to limit the invention. In addition, it should be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
[0052] It should be noted that the embodiments in the application and the features in the embodiments can be combined with each other if there is no conflict.
[0053] The image identification system based on image channel correlation according to an embodiment of the present invention, as shown in Figure 1 and Figure 2, includes multiple hybrid feature extraction modules, a feature fusion module, and an image classification module.
[0054] 1. Hybrid feature extraction module
[0055] This module is used to obtain the fusion features of the image to be identified that mix channel and neighborhood correlation. In this embodiment, there are three hybrid feature extraction modules, each of which includes a first sub-module and a second sub-module.
[0056] (1) The first submodule
[0057] The first sub-module obtains a one-layer channel correlation feature based on the three color channels of the image to be identified.
[0058] The first sub-module is a self-encoding module. It models the color channel correlation of natural and rendered images, and is placed at the front end of the hybrid feature extraction module to extract channel correlation information.
[0059] In this embodiment, the self-encoding module is a 1×1 convolutional layer whose output feature dimension is 1. The convolution is in effect the process of encoding the three color channels of the image into the one-layer channel correlation feature. The coefficients of the 1×1 convolution kernel [w1, w2, w3] represent the weights of the R, G, and B channels in the correlation.
[0060] Each pixel in the channel correlation feature obtained by the self-encoding module is expressed as
[0061] C_ij = w1·R_ij + w2·G_ij + w3·B_ij
[0062] where C_ij is the value of pixel (i, j) in the channel correlation feature, [w1, w2, w3] are the weights of the R, G, and B channels in the channel correlation, and R_ij, G_ij, B_ij are the R, G, and B values of pixel (i, j) in the image to be identified.
[0063] When the coefficients of the convolution kernel are [1, -1, 0], [1, 0, -1], [0, 1, -1] or other special cases, the channel correlation representation is a differential image. Compared with a differential image obtained by a hard-coded operation, the self-encoding module of the present invention can flexibly learn the weights of the three channels and has a larger parameter space to represent the channel correlation, so it can better describe the correlation of the image color channels.
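This encoding can be sketched in a few lines (a minimal NumPy sketch; the image values and learned weights are hypothetical placeholders, not values from the patent):

```python
import numpy as np

def self_encode(image, w):
    """Collapse an H x W x 3 RGB image into a one-layer channel
    correlation feature: C_ij = w1*R_ij + w2*G_ij + w3*B_ij.
    This is equivalent to a 1x1 convolution with weights w."""
    return image @ np.asarray(w, dtype=float)

# Toy 2x2 RGB image with values in [0, 1].
img = np.array([[[0.2, 0.5, 0.1], [0.9, 0.4, 0.3]],
                [[0.6, 0.6, 0.6], [0.0, 1.0, 0.5]]])

# Learned weights are free parameters of the self-encoding module...
c = self_encode(img, [0.7, -0.6, 0.05])

# ...while the hard-coded special case [1, -1, 0] reduces to the
# R-G differential image mentioned in the text.
diff = self_encode(img, [1.0, -1.0, 0.0])
```

The learnable version spans the whole weight space, of which the differential images are particular points.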
[0064] (2) The second submodule
[0065] The second sub-module is used to obtain the correlation of adjacent pixels in the channel correlation feature as a fusion feature.
[0066] In order to extract the correlation between adjacent pixels, the remainder of the hybrid feature extraction module uses 3 sets of 3×3 convolutional layers to extract the neighborhood correlation of the channel features. Each convolutional layer outputs 8 feature map layers. In order to preserve as much of the original image feature information as possible, none of the three convolutional layers uses a pooling operation. In order to improve the training speed and stability of the neural network and increase its nonlinear mapping ability, batch normalization and a nonlinear activation function (ReLU) are added at appropriate positions in the network. This embodiment uses 3 sets of convolutional layers for extracting neighborhood correlation; other embodiments may use a different number.
[0067] The neighborhood correlation of nine adjacent pixels is extracted through the 3×3 convolution as
[0068] O_ij = Σ_k Σ_u Σ_v F_k(u, v) · I_k(i+u, j+v)
[0069] where O_ij is the neighborhood correlation feature value of pixel (i, j), F denotes a 3×3 convolution kernel, I_k denotes the k-th input channel of the convolution operation, and u and v are the pixel coordinates within the convolution kernel.
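The neighborhood correlation extraction can be sketched as a direct (unoptimized) implementation of this 3×3 convolution; the input feature map and the identity kernel below are hypothetical illustrations, using valid borders (no padding) for clarity:

```python
import numpy as np

def conv3x3(channels, kernels):
    """Neighborhood correlation of nine adjacent pixels:
    O_ij = sum_k sum_{u,v} F_k(u, v) * I_k(i+u, j+v),
    computed with 'valid' borders, so the output shrinks by 2."""
    K, H, W = channels.shape
    out = np.zeros((H - 2, W - 2))
    for k in range(K):                       # sum over input channels
        for u in range(3):                   # sum over kernel rows
            for v in range(3):               # sum over kernel columns
                out += kernels[k, u, v] * channels[k, u:u + H - 2, v:v + W - 2]
    return out

feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # two input channels
F = np.zeros((2, 3, 3)); F[:, 1, 1] = 1.0                  # identity kernel
out = conv3x3(feat, F)  # identity kernel picks the center pixel, summed over channels
```

In the actual module the kernel weights are learned, and 8 such kernels per layer produce the 8 output feature maps.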
[0070] 2. Feature Fusion Module
[0071] The feature fusion module superimposes the fused features output by the multiple hybrid feature extraction modules into a total feature map, and fuses the total feature map into a high-dimensional feature representation through multiple convolutions.
[0072] The three parallel hybrid feature extraction modules are independent of each other, and their learned parameters are not shared. The three modules can therefore obtain different hybrid features of the input image, and this module fuses the feature maps extracted by the three hybrid feature extraction modules into one feature space.
[0073] First, the feature maps output by the three hybrid feature extraction modules are superimposed along the depth dimension to form a total feature map. At this point, although the feature maps of the branches form a single physical whole, they remain independent of each other in their own feature spaces.
[0074] Then six convolutional layers with pooling are used to fuse the feature maps into a new feature space. The number of output features of the first and last convolutional layers equals their input. The number of output channels of the remaining convolutional layers increases by powers of two from 32 to 256. Each convolutional layer uses max pooling, with a 3×3 kernel and a stride of 2.
[0075] Finally, a high-dimensional feature representation is learned through a global average pooling layer. The global average pooling (GAP) operation is used to convert the extracted feature map into a high-dimensional vector.
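The depth-wise superposition and global average pooling steps can be sketched as follows (a NumPy sketch; the branch count and 8-channel outputs follow this embodiment, while the 32×32 spatial size and random feature values are placeholders, and the six fusion convolutions are omitted):

```python
import numpy as np

# Hypothetical outputs of the three parallel hybrid feature extraction
# modules: each an 8-channel feature map over a 32x32 grid.
branches = [np.random.rand(8, 32, 32) for _ in range(3)]

# Depth-wise superposition into one 24-channel total feature map.
total = np.concatenate(branches, axis=0)

# Global average pooling (GAP): one scalar per channel, converting the
# feature map stack into a feature vector for classification.
gap = total.mean(axis=(1, 2))
```

In the full system the fusion convolutions run between the concatenation and the GAP, so the pooled vector has 256 dimensions rather than 24.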
[0076] 3. Image classification module
[0077] The image classification module obtains the classification probabilities of the natural image and the rendered image respectively based on the high-dimensional feature representation, and outputs the class with the higher classification probability as the identification result. The image classification module in this embodiment includes a classification network composed of a fully connected layer and a Soft-max layer.
[0078] The fully connected layer (FC) scores whether the image is a natural image or a rendered image based on the high-dimensional vector, and the Soft-max layer maps the scores into probability space to obtain the probabilities of the two image classes. The class with the higher probability is the final decision result.
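The Soft-max mapping and decision rule can be illustrated with a minimal sketch (the two FC scores below are hypothetical values, not outputs of the trained network):

```python
import numpy as np

def softmax(scores):
    """Map FC-layer scores to probabilities (numerically stable form)."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Hypothetical two-class scores: [natural, rendered].
probs = softmax(np.array([2.0, 0.5]))

# The class with the higher probability is the final decision.
label = ["natural", "rendered"][int(np.argmax(probs))]
```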
[0079] The image identification system based on image channel correlation of this embodiment needs to be trained with training samples before application. The training samples include a natural image sample set and a rendered image sample set; during training, a set number of samples are extracted from the natural image sample set and the rendered image sample set respectively to form the sample set used to train the image identification system based on image channel correlation.
[0080] It should be noted that the image identification system based on image channel correlation provided in the above embodiment is illustrated only by the division of the above functional modules. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the modules or steps in the embodiments of the present invention may be decomposed or combined. For example, the modules in the above embodiment may be combined into one module or further divided into multiple sub-modules to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be regarded as improper limitations on the present invention.
[0081] The feature and innovation of the method of the present invention lie in that the channel correlation and neighborhood correlation of the image are obtained by the hybrid feature extraction module. The introduction of image color channel features and neighborhood features significantly improves the identification accuracy for rendered images and, without extracting any prior features, greatly improves the identification efficiency of the network.
[0082] The present invention verifies the effectiveness of the proposed network through a series of experiments. The experimental data is the SPL 2018 dataset (the dataset used in [He 18]: P. He, X. Jiang, T. Sun, and H. Li, "Computer graphics identification combining convolutional and recurrent neural networks," IEEE Signal Processing Letters, vol. 25, no. 9, pp. 1369–1373, 2018), comprising 6,800 natural photographed images and 6,800 computer-rendered images. To ensure a reasonable experimental setup, the present invention divides the entire dataset into a training set, a validation set and a test set at a ratio of 10:3:4. All tests average the results over multiple samplings to ensure the stability of the test results. The experimental platform is a 64-bit Ubuntu server with 4 Intel Xeon E5-2660 v4 2.00GHz CPUs, 256GB RAM, and 8 GeForce GTX 1080Ti graphics cards. The PyTorch 0.4.1 deep learning framework is used on this platform to verify the performance of the convolutional neural network designed in the present invention.
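Under the stated 10:3:4 split of the 13,600 images, the subset sizes work out as follows (a small arithmetic check; the per-subset counts are implied by the ratio rather than stated in the source):

```python
# SPL 2018: 6,800 natural + 6,800 rendered images, split 10:3:4
# into training / validation / test sets.
total = 6800 + 6800            # 13,600 images in all
parts = 10 + 3 + 4             # 17 ratio units
unit = total // parts          # 800 images per ratio unit
train, val, test = 10 * unit, 3 * unit, 4 * unit
```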
[0083] First, the identification capabilities of the hybrid feature extraction module and the channel self-encoding module proposed by the present invention are verified. The self-encoding network designed by the present invention is called ScNet. In addition, three variant networks are introduced: ScNet-3Pc, which removes the self-encoding module from the network of the present invention; ScNet-3Di, which replaces the self-encoding step of the hybrid feature extraction module with differential images; and ScNet-Base, which removes the hybrid coding module. Figure 3 shows the discrimination accuracy of the four networks on the validation set during one training run; the complete network designed by the present invention has the best discrimination performance. Figure 4 shows the test results of the final saved models after the four networks are trained; the table lists the results of three independent experiments and their average. The proposed network ScNet outperforms the three variants ScNet-3Pc, ScNet-3Di and ScNet-Base in discrimination accuracy. It is worth noting that ScNet is 0.46% higher than ScNet-3Pc, indicating that introducing the image color channel correlation features extracted by the self-encoding module improves the identification ability of the network.
[0084] Second, the versatility of the hybrid feature extraction module is evaluated, i.e., the effect on the identification result of adding the hybrid feature extraction module to existing convolutional networks. The present invention selects three identification networks: LiNet, BSP-CNN and YaoNet. The two variant structures "3Pc" and "3Di" from the previous paragraph are also used, and "3Hc" refers to a network with the complete hybrid feature extraction module. In addition, "Base" in this paragraph denotes the three original networks as designed by their authors, which differs in meaning from the "Base" in the previous paragraph. Figure 5 shows the average discrimination accuracy on rendered images over three experiments for the four variant structures of the three discrimination networks. Comparing the second and last rows of the table, the identification accuracies of the three networks with the hybrid encoding module are 1.55%, 1.55% and 5.58% higher than the original networks respectively, indicating that the hybrid feature extraction module generalizes well in improving network identification. In addition, the "3Hc" structures of the three networks, with the complete feature extraction module, achieve higher identification accuracy than the "3Pc" structures without the self-encoding module. This shows that the self-encoding sub-module plays an important role in the discrimination ability of the whole feature extraction module.
[0085] Finally, the network proposed by the present invention is compared with networks that identify computer-rendered images. By comparing the "AVG" column of Figure 4 with Figure 5, it can be seen that the identification results of the network proposed by the present invention under the four structures "3Hc", "3Pc", "3Di" and "AVG" are better than those of the three compared identification networks LiNet, BSP-CNN [He 18] and YaoNet. In addition, the identification accuracy of the network designed by the present invention is 0.31% higher than [He 18]'s best identification accuracy of 93.87% on the SPL 2018 dataset. It is worth noting that in [He 18], manually processed features are input into a two-path convolutional neural network and a directed-acyclic-graph recurrent neural network respectively; that network structure is more complex, yet its discrimination accuracy is slightly lower than that of the network of the present invention. From the above comparison, it can be seen that the present invention can effectively extract the features of natural and rendered images, and exhibits better identification performance than other computer-rendered image identification methods.
[0086] In addition, in order to explore the working mechanism of the self-encoding module, the present invention visualizes the convolution kernels of the 1×1 convolutional layer of the self-encoding module and the output coding features. Observing the convolution kernel weights of the three parallel self-encoding modules across the three experiments, each convolution kernel is found to contain a positive value, a negative value and a value close to zero, which resembles the idea of a differential image. However, the absolute values of the weights of the three convolution kernels are roughly distributed over three orders of magnitude. In one experiment, the largest-magnitude positive weight is 0.92 and the negative weight is -0.89; at the intermediate magnitude the positive weight is 0.36 and the negative weight is -0.28; at the smallest magnitude the positive weight is 0.03 and the negative weight is -0.04. Encoding the image color channels at three orders of magnitude extracts richer channel correlation features, enabling the network to learn features and make judgments better. Figure 6 shows the feature maps output by the self-encoding module in the first experiment. The first column is the input image, and the second to fourth columns are visualized coding features with weights from small to large. The first row is a natural image and the second row a rendered image; the text in the green box of the input image corresponds to the content in the red boxes of the coding feature maps. For the natural image, as the encoding weight increases, the text in the red box of the feature map becomes increasingly blurred, while the text in the red box of the rendered image remains prominent throughout. This is consistent with the conclusion of Gunturk et al. that the high-frequency components of the color channels of natural images are strongly similar. In the present invention, self-encoding increases the distance between natural and rendered images in the feature domain, thereby improving the identification result.
[0087] In summary, the present invention combines the correlation of image color channels to distinguish natural images from rendered images, and has important application value in the field of digital image forensics.
[0088] An image identification method based on image channel correlation in an embodiment of the present invention includes: acquiring an image to be identified, and inputting it into an image identification model to acquire the image identification result;
[0089] wherein the image identification model is the above image identification system based on image channel correlation, after training with training samples.
[0090] Another embodiment of the image identification method based on image channel correlation of the present invention, as shown in Figure 7, includes:
[0091] Obtain the image to be identified;
[0092] Independently obtain multiple channel correlation features based on the three color channels of the image to be identified;
[0093] For each channel correlation feature, obtain the correlation of adjacent pixels as a fusion feature;
[0094] Superimpose the multiple fusion features into a total feature map, and fuse the total feature map into a high-dimensional feature representation through multiple convolutions;
[0095] Based on the high-dimensional feature representation, obtain the classification probabilities of the natural image and the rendered image respectively, and output the class with the higher classification probability as the identification result.
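The steps above can be sketched end to end in simplified form (a NumPy sketch with random placeholder weights; the 3×3 neighborhood convolutions and the six fusion convolutions are omitted, so this traces only the data flow, not the trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def identify(img, branch_weights, fc_w, fc_b):
    """Simplified sketch of the method's steps: per-branch channel
    self-encoding (1x1 conv), depth-wise stacking of branch features,
    global average pooling, and a Soft-max classifier."""
    feats = [img @ w for w in branch_weights]   # channel correlation per branch
    total = np.stack(feats, axis=0)             # superimpose into total map
    gap = total.mean(axis=(1, 2))               # high-dimensional vector
    probs = softmax(fc_w @ gap + fc_b)          # [P(natural), P(rendered)]
    return ("natural", "rendered")[int(np.argmax(probs))]

img = rng.random((16, 16, 3))                   # hypothetical RGB input
bw = [rng.standard_normal(3) for _ in range(3)] # three independent branches
label = identify(img, bw, rng.standard_normal((2, 3)), np.zeros(2))
```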
[0096] Those skilled in the art can clearly understand that, for convenience and conciseness of description, the specific working process and related description of the image identification method based on image channel correlation described above may refer to the corresponding content of the image identification system based on image channel correlation in the foregoing system embodiment, and will not be repeated here.
[0097] In the image identification system based on image channel correlation in the above embodiment, the hybrid feature extraction module can also be set before other image identification networks to improve the accuracy of image identification by the network.
[0098] A hybrid feature extraction device based on image channel correlation according to an embodiment of the present invention, as shown in Figure 8, includes a first sub-module and a second sub-module. The first sub-module obtains a one-layer channel correlation feature based on the three color channels of the image to be identified; the second sub-module obtains the correlation of adjacent pixels in the channel correlation feature as the fusion feature.
[0099] A hybrid feature extraction method based on image channel correlation in an embodiment of the present invention, as shown in Figure 9, includes: independently acquiring multiple channel correlation features based on the three color channels of the image to be processed; and, for each channel correlation feature, acquiring the correlation of adjacent pixels as a fusion feature.
[0100] Those skilled in the art can clearly understand that, for convenience and conciseness of description, the specific working processes and related descriptions of the above hybrid feature extraction device and hybrid feature extraction method based on image channel correlation may refer to the corresponding descriptions of the image identification system and image identification method based on image channel correlation in the foregoing embodiments, and will not be repeated here.
[0101] A storage device according to the fifth embodiment of the present invention stores a plurality of programs adapted to be loaded and executed by a processor to implement the above image identification method based on image channel correlation, or the above hybrid feature extraction method based on image channel correlation.
[0102] A processing device according to the sixth embodiment of the present invention includes a processor and a storage device. The processor is suitable for executing programs; the storage device is suitable for storing a plurality of programs; and the programs are suitable for being loaded and executed by the processor to implement the above image identification method based on image channel correlation, or the above hybrid feature extraction method based on image channel correlation.
[0103] Those skilled in the art can clearly understand that, for convenience and conciseness of description, the specific working processes and related descriptions of the storage device and processing device described above may refer to the corresponding processes in the foregoing method embodiments, and will not be repeated here.
[0104] Reference is now made to Figure 10, which shows a schematic structural diagram of a computer system 1100 of a server suitable for implementing the method, system and device embodiments of the present application. The server shown in Figure 10 is only an example, and should not impose any limitation on the function or scope of use of the embodiments of the present application.
[0105] As shown in Figure 10, the computer system 1100 includes a central processing unit (CPU) 1101, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1102 or a program loaded from a storage portion 1108 into a random access memory (RAM) 1103. The RAM 1103 also stores various programs and data required for the operation of the system 1100. The CPU 1101, the ROM 1102, and the RAM 1103 are connected to each other through a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
[0106] The following components are connected to the I/O interface 1105: an input portion 1106 including a keyboard, a mouse, etc.; an output portion 1107 including a cathode ray tube (CRT), a liquid crystal display (LCD), speakers, etc.; a storage portion 1108 including a hard disk, etc.; and a communication portion 1109 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as needed. A removable medium 1111, such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, is mounted on the drive 1110 as needed, so that a computer program read from it can be installed into the storage portion 1108 as needed.
[0107] In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 1109, and/or installed from the removable medium 1111. When the computer program is executed by the central processing unit (CPU) 1101, the above functions defined in the method of the present application are executed. It should be noted that the computer-readable medium in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in combination with an instruction execution system, apparatus, or device.
In this application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
[0108] The computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof. These include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
[0109] The flowcharts and block diagrams in the drawings illustrate possible implementations of the system architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
[0110] The terms "first", "second", etc. are used to distinguish similar objects, rather than to describe or indicate a specific order or sequence.
[0111] The term "including" or any other similar term is intended to cover a non-exclusive inclusion, so that a process, method, article, or device/apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device/apparatus.
[0112] Thus far, the technical solutions of the present invention have been described in conjunction with the preferred embodiments shown in the drawings. However, those skilled in the art will readily understand that the protection scope of the present invention is not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions resulting from such changes or substitutions fall within the protection scope of the present invention.
