Target tracking method and device, electronic equipment and storage medium
A target tracking technology applied in the computer field, which can solve problems such as the tracking effect depending on the quality of target detection, which in turn affects the target tracking effect.
Pending Publication Date: 2021-05-11
BEIJING BAIDU NETCOM SCI & TECH CO LTD
AI-Extracted Technical Summary
Problems solved by technology
The main task of Multiple Object Tracking (MOT) is, given an image sequence, to find the moving targets in the image sequence and to identify the same moving target across different frames. Currently, t...
Method used
By determining a plurality of initial trajectory scores corresponding to the predicted trajectory across the multiple video frames, determining the average trajectory score according to the plurality of initial trajectory scores and the trajectory length, and using the average trajectory score as the trajectory score of the predicted trajectory, the timing of termination of the predicted trajectory can be accurately identified, improving the reliability and reference value of that determination.
In the present embodiment, the target in the video frame is detected to obtain detection frame information corresponding to the target; the morphological features and motion features of the target are obtained; the target three-dimensional information of the target is determined according to the detection frame information and the morphological features; the trajectory three-dimensional information of the predicted trajectory is determined according to the target three-dimensional information combined with the motion features; and the target in the video frame is tracked according to the target three-dimensional information and the trajectory three-dimensional information, which can effectively improve the target tracking effect.
Abstract
The invention discloses a target tracking method and device, electronic equipment and a storage medium, and relates to the technical field of computers, in particular to the technical field of artificial intelligence such as deep learning and computer vision. According to the specific implementation scheme, the method comprises the following steps: detecting a target in a video to obtain detection frame information corresponding to the target; obtaining morphological characteristics and motion characteristics of the target; determining target three-dimensional information of the target according to the detection frame information and the morphological characteristics; determining trajectory three-dimensional information of the predicted trajectory according to the target three-dimensional information in combination with the motion features; and tracking the target in the video according to the target three-dimensional information and the trajectory three-dimensional information, so that the target tracking effect can be effectively improved.
Application Domain: Image enhancement; Image analysis
Technology Topic: Computer vision; Engineering +4
Examples
Example Embodiment
[0018]Exemplary embodiments of the present disclosure are described below, including various details of the embodiments to facilitate understanding, which should be regarded as merely exemplary. Accordingly, those skilled in the art will appreciate that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted below for clarity and conciseness.
[0019]Figure 1 is a schematic diagram of the first embodiment of the present application.
[0020]It should be noted that the execution body of the target tracking method of the present embodiment is a target tracking device, which can be implemented by software and/or hardware and can be configured in an electronic device; the electronic device can include, but is not limited to, a terminal, a server, and the like.
[0021]The embodiments of the present application relate to artificial intelligence technology fields such as deep learning and computer vision.
[0022]Among them, artificial intelligence (abbreviated in English as AI) is a new technical science that researches and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence.
[0023]Deep learning learns the inherent laws and representation levels of sample data; the information obtained in this learning process is of great help in interpreting data such as text, images, and sound. The ultimate goal of deep learning is to give machines the ability to learn and analyze like humans, recognizing data such as text, images, and sound.
[0024]Computer vision means using cameras and computers in place of the human eye to identify, track, and measure targets, and to further perform graphics processing so that the computer-processed image is more suitable for human observation or for transmission to instruments.
[0025]As shown in Figure 1, the target tracking method includes:
[0026]S101: Detect the target in the video to obtain detection frame information corresponding to the target.
[0027]Among them, the video can be any tracking-scene video, such as a driving environment scene video captured during automatic driving, or a monitoring scene video captured by a video surveillance device; the video typically contains multiple video frames, and no restriction is imposed on this.
[0028]In the present application embodiment, the target in the video can first be detected to obtain the detection frame information corresponding to the target, where the target is the object to be detected, for example, vehicles and pedestrians in a driving environment scene video, or passengers in a surveillance video, without limitation.
[0029]The detection frame information may be the length and width of the detection frame corresponding to the target as detected by any target detection technology, together with the center point coordinates of the detection frame relative to the video frame.
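As a concrete illustration of the detection frame information described above, a minimal sketch follows (the container name `DetectionBox` and its fields are hypothetical, not part of the application):

```python
from dataclasses import dataclass

# Hypothetical container for per-frame detection frame information:
# box size, center point coordinates in the video frame, and the
# detector's confidence score.
@dataclass
class DetectionBox:
    cx: float          # center x in the video frame (pixels)
    cy: float          # center y in the video frame (pixels)
    width: float       # box width (pixels)
    height: float      # box height (pixels)
    score: float = 1.0 # detection confidence

    def corners(self):
        """Return (x1, y1, x2, y2) corner coordinates of the box."""
        hw, hh = self.width / 2, self.height / 2
        return (self.cx - hw, self.cy - hh, self.cx + hw, self.cy + hh)
```

A frame's detections would then simply be a list of such boxes, one per detected target.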
[0030]In the present application embodiment, the step of detecting the target in the video to obtain the detection frame information corresponding to the target can be correspondingly optimized relative to the target detection step of the multi-target tracking method in the related art, Simple Online and Realtime Tracking with a Deep Association Metric (DeepSORT).
[0031]Detecting the target in the video may involve parsing the video to obtain a plurality of video frames and detecting the target in each video frame, or extracting a portion of the video frames containing the target for detection, without limitation.
[0032]Detecting the targets in the video to obtain detection frame information corresponding to the target may specifically produce detection frame information in each of the video frames corresponding to the target; that is to say, there may be a plurality of pieces of detection frame information.
[0033]S102: Obtain the morphological features and motion features of the target.
[0034]After the target is detected to obtain the detection frame information corresponding to the target, the morphological features and motion features of the target can be obtained, where the morphological features describe the appearance of the target, such as its shape, size, and center point, while the motion features describe the movement trajectory and motion transitions of the target across frames, without limitation.
[0035]S103: Determine the target three-dimensional information of the target based on the detection frame information and the morphological features.
[0036]After the morphological features and motion features of the target are obtained, the target three-dimensional information can be determined according to the detection frame information and the morphological features.
[0037]Among them, the three-dimensional information (3-dimension, 3D) corresponding to the target can be referred to as target 3D information; correspondingly, the three-dimensional information corresponding to the predicted trajectory can be referred to as trajectory 3D information.
[0038]The target three-dimensional information in the present application embodiment can specifically be the three-dimensional intersection-over-union (3D IoU) of the target. IoU is a standard for measuring the accuracy of detecting a corresponding object in a particular data set; any task that produces a prediction range in its output can be measured with IoU. The target's 3D IoU can be used to describe the overlap, in three-dimensional space coordinates, between the target's detection frame information and its reference frame information (the reference frame can be the actual box corresponding to the target in the video; correspondingly, the detection frame is a box detected according to a certain detection algorithm and can be considered a prediction box).
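To make the 3D IoU concrete, here is a minimal Python sketch. It is a simplified, axis-aligned version that ignores the orientation angle; a full implementation would intersect the rotated boxes:

```python
def iou_3d(box_a, box_b):
    """Simplified axis-aligned 3D IoU sketch. Each box is a tuple
    (cx, cy, cz, l, w, h); orientation is ignored for brevity, whereas
    a full implementation would intersect the rotated boxes."""
    def interval_overlap(ca, la, cb, lb):
        # overlap of two 1D intervals given centers and lengths
        lo = max(ca - la / 2, cb - lb / 2)
        hi = min(ca + la / 2, cb + lb / 2)
        return max(0.0, hi - lo)

    ix = interval_overlap(box_a[0], box_a[3], box_b[0], box_b[3])
    iy = interval_overlap(box_a[1], box_a[4], box_b[1], box_b[4])
    iz = interval_overlap(box_a[2], box_a[5], box_b[2], box_b[5])
    inter = ix * iy * iz                       # intersection volume
    vol_a = box_a[3] * box_a[4] * box_a[5]
    vol_b = box_b[3] * box_b[4] * box_b[5]
    return inter / (vol_a + vol_b - inter) if inter > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0, and partial overlaps fall in between, giving a single scalar measure of agreement between a detection box and a reference box in three-dimensional space.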
[0039]Optionally, in some embodiments, determining the target three-dimensional information of the target according to the detection frame information and the morphological features may comprise: determining, based on the morphological features, the center point coordinates, orientation angle, and size information of the target in the three-dimensional coordinate space; determining the reference frame information corresponding to the target according to the center point coordinates, orientation angle, and size information; and determining the target 3D information according to the detection frame information and the reference frame information, so that the 3D information of the target can be accurately determined, supporting the subsequent incorporation of the target's three-dimensional features in three-dimensional space coordinates into the target tracking method.
[0040]For example, the target in the video frame can be identified according to the 2D detection frame information, and a 3D transformation can then be used to calculate seven features of the target in the corresponding three-dimensional space coordinates: the center point (x, y, z), the length, width, and height (h, w, l) (which can be referred to as the target size information), and the orientation angle (t). Then, according to the center point (x, y, z), the size (h, w, l), and the orientation angle (t) in the three-dimensional space coordinates, the 3D box corresponding to the target (that is, the reference frame information corresponding to the target) can be solved and saved as a global variable for convenient subsequent reading and calling.
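The seven-feature 3D box described above can be sketched as follows; `box3d_corners` is a hypothetical helper that recovers the eight corner coordinates of the reference box from the center point, size, and orientation angle (rotation assumed about the vertical axis):

```python
import math

def box3d_corners(cx, cy, cz, l, w, h, yaw):
    """Hypothetical sketch: compute the eight corner coordinates of a 3D
    reference box from the seven features named above, the center point
    (x, y, z), the size (l, w, h), and the orientation angle (yaw,
    assumed here to rotate about the vertical axis)."""
    c, s = math.cos(yaw), math.sin(yaw)
    corners = []
    for dx in (l / 2, -l / 2):
        for dy in (w / 2, -w / 2):
            for dz in (h / 2, -h / 2):
                # rotate the local offset by yaw, then translate to center
                corners.append((cx + dx * c - dy * s,
                                cy + dx * s + dy * c,
                                cz + dz))
    return corners
```

The resulting corner list is one possible concrete form of the "reference frame information" that can be cached globally for later reads.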
[0041]S104: Determine the trajectory three-dimensional information of the predicted trajectory based on the target three-dimensional information combined with the motion features.
[0042]Among them, the three-dimensional information corresponding to the predicted trajectory can be referred to as trajectory 3D information, and the trajectory 3D information can specifically be the three-dimensional intersection-over-union (3D IoU) of the predicted trajectory, which can be used to describe the overlap rate between the predicted trajectory and the real trajectory of the target in three-dimensional space coordinates.
[0043]The above predicted trajectory may be predicted by the Kalman filtering algorithm in the DeepSORT method.
[0044]In some embodiments, determining the trajectory three-dimensional information of the predicted trajectory according to the target three-dimensional information combined with the motion features may comprise: mapping the information of the predicted trajectory into the three-dimensional coordinate space based on the center point coordinates, the orientation angle, and the motion features, in combination with the Kalman filtering algorithm, to obtain the trajectory 3D information of the predicted trajectory. In this way, the trajectory 3D information of the predicted trajectory can be accurately determined, supporting the incorporation of the trajectory 3D information into the target tracking method and promoting the accuracy of target tracking.
[0045]That is to say, in the present application, when trajectory prediction is performed with the Kalman filtering algorithm in the DeepSORT method, the center point coordinates of the target are incorporated into the prediction, and the predicted trajectory information is mapped into the three-dimensional coordinate space to obtain the three-dimensional information of the predicted trajectory. This effectively identifies the three-dimensional features of the predicted trajectory, ensures the detection accuracy of the trajectory, effectively avoids the defect that two-dimensional features are not comprehensive enough, and improves feature expressiveness, thereby helping to enhance the target tracking effect.
[0046]For example, the predicted trajectory information (the predicted trajectory's length, width, and height) is mapped into the three-dimensional coordinate space according to the center point coordinates and the motion features, combined with the Kalman filtering algorithm, to obtain the trajectory 3D information of the predicted trajectory.
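A hedged sketch of this prediction step follows, assuming a constant-velocity Kalman filter over the 3D center point (the real DeepSORT state also tracks box size and aspect in the image plane; the class name and noise values here are illustrative, not from the application):

```python
import numpy as np

# Illustrative constant-velocity Kalman predictor whose state holds the
# 3D center (x, y, z) plus per-frame velocities, in the spirit of the
# DeepSORT Kalman predictor but lifted to three-dimensional coordinates.
class KalmanPredictor3D:
    def __init__(self, cx, cy, cz):
        self.x = np.array([cx, cy, cz, 0.0, 0.0, 0.0])  # position + velocity
        self.P = np.eye(6)                # state covariance (assumed init)
        self.F = np.eye(6)                # state transition matrix
        self.F[:3, 3:] = np.eye(3)        # x' = x + v * dt, with dt = 1 frame
        self.Q = 0.01 * np.eye(6)         # process noise (assumed value)

    def predict(self):
        """Advance the track one frame and return the predicted 3D center."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]
```

Attaching the predicted box size to this predicted center yields the trajectory 3D information used for matching in the steps below.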
[0047]S105: Track the target in the video based on the target 3D information and the trajectory 3D information.
[0048]That is, the present application embodiment optimizes the multi-object tracking method DeepSORT in the related art. When targets are relatively stationary in two-dimensional space, or the morphology of multiple targets is similar, incorporating the target three-dimensional information and the trajectory 3D information of the predicted trajectory into the target tracking method can effectively avoid the impact of these situations on the target tracking effect, so that when the multi-target tracking method DeepSORT is applied to target tracking, better tracking accuracy can be obtained and the overall target tracking effect is enhanced.
[0049]In this embodiment, the target in the video frame is detected to obtain the detection frame information corresponding to the target; the morphological features and motion features of the target are obtained; the target 3D information is determined according to the detection frame information and the morphological features; the trajectory 3D information of the predicted trajectory is determined according to the target three-dimensional information combined with the motion features; and the target is tracked according to the target three-dimensional information and the trajectory 3D information, which can effectively enhance the target tracking effect.
[0050]Figure 2 is a schematic diagram of the second embodiment of the present application.
[0051]As shown in Figure 2, the target tracking method includes:
[0052]S201: Detect the target in the video to obtain detection frame information corresponding to the target.
[0053]S202: Obtain the morphological features and motion features of the target.
[0054]S203: Determine the target 3D information of the target based on the detection frame information and the morphological features.
[0055]S204: Determine the trajectory 3D information of the predicted trajectory based on the target three-dimensional information combined with the motion features.
[0056]For descriptions of S201-S204, reference can be made to the above embodiment, and details are not described here again.
[0057]S205: In the first phase of the tracking process, cascade match the target and the first trajectory according to the target 3D information and the trajectory 3D information of the first trajectory, where the first trajectory and the second trajectory together constitute the predicted trajectory.
[0058]In the present application embodiment, the multi-object tracking method DeepSORT can be optimized. DeepSORT includes a first phase and a second phase of the tracking process: in the first phase, the appearance features and motion features of the target are used to cascade match the target with part of the predicted trajectories, achieving matching of those trajectories; the remaining predicted trajectories, whose ends lag by one unit length, are then further processed in the second phase, where trajectory 2D information is used to do a secondary match.
[0059]In the present application, the tracking processing of the above two phases is optimized. In the first phase of the tracking process, the target and the first trajectory are cascade matched according to the target three-dimensional information and the trajectory 3D information of the first trajectory, where the first trajectory and the second trajectory together constitute the predicted trajectory; the second trajectory is the predicted trajectory whose end lags the first trajectory, for example by one unit length.
[0060]Cascade matching the target and the first trajectory according to the target three-dimensional information and the trajectory 3D information of the first trajectory can specifically be done by processing the target 3D information and the first trajectory's trajectory 3D information with a predetermined algorithm (for example, the Hungarian matching algorithm) to link the target and the first trajectory, without limitation.
[0061]In the present application embodiment, there may be a plurality of predicted trajectories, and different predicted trajectories may have the same or different trajectory scores. In the first phase of the tracking process, the target and the first trajectory can be cascade matched according to the target 3D information and the trajectory 3D information of the first trajectory in a set order, the set order being trajectory score from high to low.
[0062]For example, cascade matching can be used to match the different trajectories multiple times until the predicted trajectory terminates, where termination of the predicted trajectory can specifically be determined from the trajectory score of the predicted trajectory.
[0063]That is to say, in the present application embodiment, the Hungarian algorithm can be applied multiple times to predicted trajectories with different trajectory scores. When matching, the predicted trajectories are cascade matched with the target in order of trajectory score from high to low, which can accurately detect the timing of trajectory termination and effectively improve the matching accuracy, thereby ensuring the overall target tracking effect.
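The score-ordered cascade described above might be sketched as follows, assuming tracks with equal trajectory scores form one matching tier and assignments are gated at an assumed minimum 3D IoU; this is an illustration under those assumptions, not the patent's exact implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cascade_match(tracks, detections, iou_fn, iou_min=0.1):
    """Sketch of score-ordered cascade matching. `tracks` is a list of
    (trajectory_score, box) pairs, `detections` a list of boxes, and
    `iou_fn` any 3D IoU function. Tiers of equal-score tracks are matched
    against the remaining detections from the highest score to the lowest,
    each tier via the Hungarian algorithm on a (1 - 3D IoU) cost matrix;
    `iou_min` is an assumed gating threshold."""
    unmatched = list(range(len(detections)))   # detection indices still free
    matches = []                               # (track_index, detection_index)
    for score in sorted({s for s, _ in tracks}, reverse=True):
        tier = [i for i, (s, _) in enumerate(tracks) if s == score]
        if not unmatched:
            break
        cost = np.array([[1.0 - iou_fn(tracks[t][1], detections[d])
                          for d in unmatched] for t in tier])
        rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
        taken = []
        for r, c in zip(rows, cols):
            if cost[r, c] <= 1.0 - iou_min:        # gate out weak assignments
                matches.append((tier[r], unmatched[c]))
                taken.append(c)
        for c in sorted(taken, reverse=True):
            unmatched.pop(c)
    return matches, unmatched
```

Processing the high-score tiers first means reliable trajectories claim their detections before low-score trajectories, which is the cascading behavior the paragraph describes.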
[0064]Optionally, in some embodiments, as shown in Figure 3 (Figure 3 is a schematic diagram of the third embodiment of the present application), the present application also provides a method of determining the trajectory score of the predicted trajectory, including:
[0065]S301: Determine the plurality of initial trajectory scores corresponding to the predicted trajectory in the multiple video frames.
[0066]S302: Determine the trajectory length of the predicted trajectory.
[0067]S303: Determine the average trajectory score based on the plurality of initial trajectory scores and the trajectory length, and use the average trajectory score as the trajectory score of the predicted trajectory.
[0068]For example, a scoring mechanism can be set for each predicted trajectory: if the trajectory score is less than a set threshold, it is determined that the predicted trajectory is terminated. The specific formula is as follows:
[0069]T = (t_1 + t_2 + … + t_n) / n
[0070]Among them, T is the average trajectory score of the predicted trajectory, n is the trajectory length, and t_i represents the initial trajectory score corresponding to the predicted trajectory in the i-th video frame (the initial trajectory score can be represented by the detection score of the corresponding detection frame).
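The scoring rule can be sketched directly (the threshold value 0.3 below is assumed for illustration; the application does not specify one):

```python
def trajectory_score(initial_scores):
    """Average trajectory score T: the mean of the per-frame initial
    scores t_i over the trajectory length n, as in the formula above."""
    return sum(initial_scores) / len(initial_scores)

def is_terminated(initial_scores, threshold=0.3):
    """Terminate the predicted trajectory when its average trajectory
    score falls below the set threshold (0.3 is an assumed value)."""
    return trajectory_score(initial_scores) < threshold
```

Each new frame appends that frame's detection score to `initial_scores`, so a trajectory that keeps receiving weak detections is eventually terminated.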
[0071]By determining a plurality of initial trajectory scores corresponding to the predicted trajectory in the multiple video frames, determining the average trajectory score according to the plurality of initial trajectory scores and the trajectory length, and using the average trajectory score as the trajectory score of the predicted trajectory, the timing of termination of the predicted trajectory can be accurately identified, improving the reliability and reference value of that determination.
[0072]S206: In the second phase of the tracking process, match the target and the second trajectory according to the target 3D information and the trajectory 3D information of the second trajectory until the predicted trajectory terminates.
[0073]After the target and the first trajectory are cascade matched in the first phase of the tracking process according to the target three-dimensional information and the trajectory 3D information of the first trajectory, in the second phase of the tracking process the target and the second trajectory can be matched according to the target 3D information and the trajectory 3D information of the second trajectory until the predicted trajectory terminates.
[0074]The second trajectory is the predicted trajectory whose end lags the first trajectory, for example by one unit length.
[0075]Then, when matching the target and the second trajectory according to the target three-dimensional information and the trajectory 3D information of the second trajectory, a match matrix can be generated according to the target three-dimensional information and the trajectory 3D information of the second trajectory, and the target and the second trajectory are matched based on the match matrix until the predicted trajectory terminates (the termination timing can be when the trajectory score of the predicted trajectory is less than a set threshold, as described above).
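The match matrix generation described here might look like the following sketch, where entry (i, j) is the 3D IoU between target i's 3D information and the j-th second-stage trajectory's 3D information (the function name and signature are illustrative):

```python
import numpy as np

def build_match_matrix(target_boxes, trajectory_boxes, iou_fn):
    """Illustrative match matrix: entry (i, j) holds the 3D IoU between
    target i's 3D box and second-stage trajectory j's 3D box. `iou_fn`
    is any 3D IoU function; larger entries mean better agreement."""
    return np.array([[iou_fn(t, p) for p in trajectory_boxes]
                     for t in target_boxes])
```

An assignment step (greedy or Hungarian) over this matrix then links each target to its best second-stage trajectory.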
[0076]In this embodiment, the target in the video frame is detected to obtain the detection frame information corresponding to the target; the morphological features and motion features of the target are obtained; the target 3D information is determined according to the detection frame information and the morphological features; the trajectory 3D information of the predicted trajectory is determined according to the target three-dimensional information combined with the motion features; and the target is tracked according to the target three-dimensional information and the trajectory 3D information, which can effectively enhance the target tracking effect. By cascade matching the target and the first trajectory in the first phase of the tracking process according to the target three-dimensional information and the trajectory 3D information of the first trajectory, and matching the target and the second trajectory in the second phase of the tracking process according to the target three-dimensional information and the trajectory 3D information of the second trajectory until the predicted trajectory terminates, an optimized and improved target tracking method is proposed, which promotes the tracking effect for targets in video, effectively avoids the impact of the target detection results in the video on the tracking effect, and is effectively applicable to multi-target tracking scenarios.
[0077]Figure 4 is a schematic diagram of the fourth embodiment of the present application.
[0078]As shown in Figure 4, the target tracking device 40 includes:
[0079]the detection module 401, configured to detect the target in the video to obtain detection frame information corresponding to the target;
[0080]the acquisition module 402, configured to obtain the morphological features and motion features of the target;
[0081]the first determination module 403, configured to determine the target 3D information of the target based on the detection frame information and the morphological features;
[0082]the second determination module 404, configured to determine the trajectory 3D information of the predicted trajectory based on the target three-dimensional information combined with the motion features; and
[0083]the tracking module 405, configured to track the target in the video based on the target 3D information and the trajectory 3D information.
[0084]In some embodiments of the present application, the first determination module 403 is specifically configured to:
[0085]determine, based on the morphological features, the center point coordinates, orientation angle, and size information of the target in the three-dimensional coordinate space;
[0086]determine the reference frame information corresponding to the target according to the center point coordinates, orientation angle, and size information; and
[0087]determine the target 3D information of the target based on the detection frame information and the reference frame information.
[0088]In some embodiments of the present application, the second determination module 404 is specifically configured to:
[0089]according to the center point coordinates, the orientation angle, and the motion features, in combination with the Kalman filtering algorithm,
[0090]map the information of the predicted trajectory into the three-dimensional coordinate space to obtain the trajectory 3D information of the predicted trajectory.
[0091]In some embodiments of the present application, the tracking module 405 is specifically configured to:
[0092]in the first phase of the tracking process, cascade match the target and the first trajectory according to the target 3D information and the trajectory 3D information of the first trajectory, where the first trajectory and the second trajectory together constitute the predicted trajectory; and
[0093]in the second phase of the tracking process, match the target and the second trajectory according to the target 3D information and the trajectory 3D information of the second trajectory until the predicted trajectory terminates.
[0094]In some embodiments of the present application, there are a plurality of predicted trajectories, and different predicted trajectories may have the same or different trajectory scores.
[0095]Wherein, the tracking module 405 is specifically configured to: in the first phase of the tracking process, cascade match the target and the first trajectory according to the target three-dimensional information and the trajectory 3D information of the first trajectory in a set order, the set order being trajectory score from high to low.
[0096]In some embodiments of the present application, the video includes multiple video frames, and the target tracking device 50 includes a detection module 501, an acquisition module 502, a first determination module 503, a second determination module 504, and a tracking module 505, and further comprises:
[0097]a scoring module 506, configured to determine a plurality of initial trajectory scores corresponding to the predicted trajectory in the multiple video frames, determine the trajectory length of the predicted trajectory, determine the average trajectory score according to the plurality of initial trajectory scores and the trajectory length, and use the average trajectory score as the trajectory score of the predicted trajectory.
[0098]It will be appreciated that, in the target tracking device 50 of this embodiment shown in Figure 5, the detection module 501 and the detection module 401 of the above embodiment, the acquisition module 502 and the acquisition module 402 of the above embodiment, the first determination module 503 and the first determination module 403 of the above embodiment, the second determination module 504 and the second determination module 404 of the above embodiment, and the tracking module 505 and the tracking module 405 of the above embodiment can have the same functions and structures.
[0099]It should be noted that the foregoing explanation of the target tracking method also applies to the target tracking device of the present embodiment, and details are not described herein again.
[0100]In this embodiment, the target in the video frame is detected to obtain the detection frame information corresponding to the target; the morphological features and motion features of the target are obtained; the target 3D information is determined according to the detection frame information and the morphological features; the trajectory 3D information of the predicted trajectory is determined according to the target three-dimensional information combined with the motion features; and the target is tracked according to the target three-dimensional information and the trajectory 3D information, which can effectively enhance the target tracking effect.
[0101]According to an embodiment of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
[0102]Figure 6 is a block diagram of an electronic device for implementing the target tracking method of the embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices can also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to restrict the implementations of the present application described herein.
[0103]As shown in Figure 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processing based on a computer program stored in the read-only memory (ROM) 602 or loaded from the storage unit 608 into the random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
[0104]A plurality of components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard, a mouse, etc.; an output unit 607, such as various types of displays, speakers, etc.; a storage unit 608, such as a disk, an optical disc, etc.; and a communication unit 609, such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunications networks.
[0105]The computing unit 601 can be a general-purpose and/or dedicated processing component having processing and computing power. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processing described above, for example, the target tracking method.
[0106] For example, in some embodiments, the target tracking method may be implemented as a computer software program, which is tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the target tracking method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the target tracking method in any other suitable manner (e.g., by means of firmware).
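The target tracking steps referenced above (per the summary: associating detected targets with predicted trajectories through their three-dimensional information and motion features, and scoring a trajectory by averaging its per-frame scores) can be illustrated with a minimal sketch. This is an illustrative approximation, not the claimed method: the names (`Track`, `step`), the constant-velocity motion model, and the greedy distance-gated association are all hypothetical simplifications introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    position: tuple                       # last known 3D position (x, y, z)
    velocity: tuple                       # simple motion feature: per-frame displacement
    scores: list = field(default_factory=list)

    def predict(self):
        # constant-velocity prediction of the next 3D position
        return tuple(p + v for p, v in zip(self.position, self.velocity))

    def average_score(self):
        # trajectory score = mean of per-frame scores over the trajectory length
        return sum(self.scores) / len(self.scores)

def dist3d(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def step(tracks, detections, gate=2.0):
    """Greedily associate 3D detections with predicted track positions.

    Detections farther than `gate` from every prediction are returned
    unmatched, so the caller can spawn new tracks from them.
    """
    unmatched = list(detections)
    for t in tracks:
        if not unmatched:
            break
        pred = t.predict()
        best = min(unmatched, key=lambda d: dist3d(pred, d["pos"]))
        if dist3d(pred, best["pos"]) <= gate:
            # update the motion feature and 3D state from the matched detection
            t.velocity = tuple(b - p for b, p in zip(best["pos"], t.position))
            t.position = best["pos"]
            t.scores.append(best["score"])
            unmatched.remove(best)
    return unmatched
```

A usage example: a track moving along the x-axis matches the nearby detection and updates its state, while a distant detection remains unmatched:

```python
t = Track(1, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), scores=[0.9])
leftover = step([t], [{"pos": (1.1, 0.0, 0.0), "score": 0.8},
                      {"pos": (9.0, 9.0, 9.0), "score": 0.5}])
# t.position is now (1.1, 0.0, 0.0); t.average_score() is 0.85; one detection is left over
```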
[0107] Various implementations of the systems and techniques described above herein may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: being implemented in one or more computer programs, which are executable and/or interpretable on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor, may receive data and instructions from a storage system, at least one input device, and at least one output device, and may transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
[0108] Program code for implementing the target tracking method of the present application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing device, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may be executed entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
[0109] In the context of the present application, a machine-readable medium may be a tangible medium, which may contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
[0110] To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or haptic feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).
[0111] The systems and techniques described herein may be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include: a local area network (LAN), a wide area network (WAN), the Internet, and blockchain networks.
[0112] A computer system may include a client and a server. The client and the server are generally remote from each other and usually interact through a communication network. The relationship between the client and the server is generated by computer programs running on the corresponding computers and having a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or a cloud host, which is a host product in the cloud computing service system that addresses the defects of difficult management and weak business scalability existing in traditional physical hosts and virtual private server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
[0113] It should be understood that the various forms of processes shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present application can be achieved, which is not limited herein.
[0114] The above specific embodiments do not constitute a limitation on the protection scope of the present application. It should be apparent to those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principles of the present application shall be included within the protection scope of the present application.