Application deployment method and device based on multi-inference engine system, and equipment

An inference-engine-based application deployment technology in the field of deep learning, addressing the problem of low application deployment efficiency.

Active Publication Date: 2021-09-10
SUZHOU LANGCHAO INTELLIGENT TECH CO LTD

AI Technical Summary

Problems solved by technology

[0005] The purpose of this application is to provide an application deployment method, apparatus, device, and readable storage medium based on a multi-inference engine system, to solve the problem of low application deployment efficiency caused by manually selecting a suitable inference engine.



Examples


Embodiment 1

[0053] Embodiment 1 of the application deployment method based on the multi-inference engine system provided by this application is introduced below; see Figure 1. Embodiment 1 includes:

[0054] S11. Obtain a source model for application deployment.

[0055] The source model is a trained deep learning model to be deployed in an actual application. Specifically, the file path of the source model is input, and the source model is read from that path. To ensure reliability, during reading it is checked that the path is correct and the file is readable.
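The path validation described in S11 can be sketched as a small helper. This is a hypothetical illustration; the patent only states that path correctness and file readability are checked, so the function name and error handling here are assumptions.

```python
import os

def read_source_model(path):
    """Read a source-model file, validating the path first (hypothetical
    helper sketching step S11's reliability checks)."""
    # Check the path points at an actual file.
    if not os.path.isfile(path):
        raise FileNotFoundError(f"source model not found: {path}")
    # Check the file is readable before attempting to open it.
    if not os.access(path, os.R_OK):
        raise PermissionError(f"source model not readable: {path}")
    with open(path, "rb") as f:
        return f.read()
```

A real implementation would hand the bytes (or the path itself) to the framework-specific loader chosen in the later steps.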

[0056] S12. Convert the source model to each inference engine of the multi-inference engine system to obtain a target model corresponding to each inference engine.

[0057] In this embodiment, the inference engine is used to implement optimization, conversion, and inference evaluation of the source model. Specifically, before converting the source model to the infer...
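The fan-out conversion of step S12 can be sketched as a loop over a converter registry. The registry below is purely illustrative: the patent does not name the conversion APIs, and real engines (e.g. ONNX, TensorRT, TVM) each have their own export or build tooling, so string-returning stand-ins are used here.

```python
# Hypothetical converter registry: engine name -> conversion callable.
# Stand-in lambdas are used in place of real engine-specific converters.
CONVERTERS = {
    "onnx":     lambda model: f"onnx({model})",
    "tensorrt": lambda model: f"tensorrt({model})",
    "tvm":      lambda model: f"tvm({model})",
}

def convert_to_all_engines(source_model):
    """S12 sketch: produce one target model per inference engine.
    Engines whose conversion fails are skipped rather than aborting
    the whole run, so the remaining engines can still be evaluated."""
    targets = {}
    for engine, convert in CONVERTERS.items():
        try:
            targets[engine] = convert(source_model)
        except Exception:
            continue  # unsupported operators etc.; drop this engine
    return targets
```

Skipping failed conversions matters because, as the background notes, different engines support different operator sets.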

Embodiment 2

[0064] Embodiment 2 of the application deployment method based on the multi-inference engine system provided by this application is introduced in detail below; see Figure 2. Embodiment 2 specifically includes the following steps:

[0065] S21. Obtain a source model for application deployment;

[0066] S22. Determine the model type of the source model according to the file suffix of the source model;

[0067] S23. Call the loading method for that model type to load the source model and determine whether it loads normally; if so, proceed to S24; otherwise, report a model error;
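Steps S22 and S23 amount to a suffix-to-framework lookup followed by a trial load. A minimal sketch follows; the suffix map is an assumption (the patent only says the model type is determined from the file suffix, without listing the suffixes), as is the `ValueError` used to signal the "model error" prompt.

```python
import os

# Hypothetical suffix-to-framework map for step S22.
SUFFIX_TO_TYPE = {
    ".pb":   "tensorflow",
    ".pt":   "pytorch",
    ".onnx": "onnx",
}

def detect_model_type(path):
    """S22 sketch: infer the source framework from the file suffix,
    raising on unknown suffixes (S23's 'model error' branch)."""
    suffix = os.path.splitext(path)[1].lower()
    try:
        return SUFFIX_TO_TYPE[suffix]
    except KeyError:
        raise ValueError(f"unsupported model suffix: {suffix}")
```

In a full implementation, the returned type would select the framework-specific loader (e.g. a TensorFlow, PyTorch, or ONNX reader) whose success or failure drives the S23 branch.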

[0068] S24. Convert the source model to each inference engine of the multi-inference engine system to obtain a target model corresponding to each inference engine;

[0069] S25. Perform inference evaluation on each target model to obtain the inference duration of each target model; select the target model with the shortest inference duration as the targ...
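The evaluation-and-selection step (S25) can be sketched as timing each target model and taking the minimum mean duration. The `run_inference(engine, model)` callable is a hypothetical stand-in for each engine's inference API, and the warmup/repeat counts are assumptions; the patent only specifies that the model with the shortest inference time is selected.

```python
import time

def evaluate_and_select(targets, run_inference, warmup=2, repeats=10):
    """S25 sketch: benchmark each target model and return the engine
    with the shortest mean inference duration, plus all durations."""
    durations = {}
    for engine, model in targets.items():
        # Warm-up runs to exclude one-time setup cost from the timing.
        for _ in range(warmup):
            run_inference(engine, model)
        start = time.perf_counter()
        for _ in range(repeats):
            run_inference(engine, model)
        durations[engine] = (time.perf_counter() - start) / repeats
    best = min(durations, key=durations.get)
    return best, durations
```

Averaging over several repeats after a warm-up phase is a common benchmarking practice; a production system might also record percentile latencies or memory use.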



Abstract

The invention discloses an application deployment method based on a multi-inference engine system. For a given source model, the method automatically converts the source model to the system's different inference engines, performs inference evaluation on each converted model, and finally selects an optimal inference engine for subsequent application deployment according to the evaluation results. The invention thereby achieves automatic evaluation of each inference engine in the system, lowers the professional threshold and workload of the engine-selection process, avoids developers spending large amounts of time and energy selecting inference engines with which they are unfamiliar, and improves application deployment efficiency. In addition, the invention further provides an application deployment device based on the multi-inference engine system, equipment, and a readable storage medium, whose technical effects correspond to those of the method.

Description

Technical Field

[0001] The present application relates to the technical field of deep learning, and in particular to an application deployment method, apparatus, device, and readable storage medium based on a multi-inference engine system.

Background Technique

[0002] With the development of deep learning, more and more deep learning frameworks have emerged. In the model development stage, Google's TensorFlow and Facebook's PyTorch are the most widely used. However, in actual application deployment, considering performance, storage, and other factors, inference engines such as Caffe, ONNX, TensorRT, and TVM are mostly used. Faced with many inference engines, selecting the most suitable and optimal one for application deployment is a major difficulty in practice. [0003] Since different inference engines support operators to different degrees, their acceleration performance is also d...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/445; G06N5/04
CPC: G06F9/44526; G06N5/04
Inventor: 刘鑫
Owner: SUZHOU LANGCHAO INTELLIGENT TECH CO LTD