[0038] In order to make the objects, technical solutions, and advantages of the present disclosure clearer, the technical solutions of the present disclosure will be described clearly and completely below in connection with the drawings of the present disclosure. Obviously, the described embodiments are a part of the embodiments of the present disclosure, not all of the embodiments. Based on the embodiments in the present disclosure, all other embodiments obtained by one of ordinary skill in the art without creative labor fall within the scope of protection of the present disclosure.
[0039] Further, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may represent three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
[0040] Figure 1 shows a schematic diagram of an application framework 100 capable of implementing an embodiment of the present disclosure. The framework 100 includes the Android system, Wayland, PulseAudio, and the Linux kernel, where the Android container and the Linux host operating system share a kernel. PulseAudio runs on the host operating system; it is cross-platform and can work over the network, allowing the container to interact with the audio hardware of the host system. It should be noted that Wayland is a communication protocol between a Display Server and a Client. In this architecture, the Wayland Compositor acts as the Display Server, so that the Client communicates directly with the Wayland Compositor. Wayland can be used for both traditional desktops and mobile devices, and more and more window and graphics systems are compatible with the Wayland protocol. Wayland implements a library for communication between the Display Server and the Client based on domain sockets, and defines a set of extensible communication protocols in XML; these protocols are divided into the Wayland core protocol and extension protocols.
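For orientation only, the following is a minimal sketch of how a Client connects to a Wayland Compositor over the domain socket. It uses the standard wayland-client C API and is not specific to, nor part of, the present disclosure.

```cpp
// Minimal Wayland client connection sketch (standard wayland-client API,
// illustrative only). Build with: g++ demo.cpp -lwayland-client
#include <wayland-client.h>
#include <cstdio>

int main() {
    // Connect to the compositor (Display Server) over the Unix domain socket,
    // typically $XDG_RUNTIME_DIR/wayland-0.
    wl_display* display = wl_display_connect(nullptr);
    if (!display) {
        std::fprintf(stderr, "failed to connect to Wayland compositor\n");
        return 1;
    }
    // The registry advertises the globals (core and extension protocols)
    // that the compositor implements, as defined in the XML protocol files.
    wl_registry* registry = wl_display_get_registry(display);
    wl_display_roundtrip(display);  // flush requests and wait for replies
    wl_registry_destroy(registry);
    wl_display_disconnect(display);
    return 0;
}
```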
[0041] Glossary:
[0042] Hardware pooled graphics renderer (Hardware Graphics Renderer): runs on the Linux host operating system, can use the graphics cards on the server to perform hardware-accelerated rendering, receives the rendering material data and rendering instructions sent by the Android containers, and performs the rendering operations.
[0043] Figure 2 shows a flow chart of a GPU pooling method for Android containers according to an embodiment of the present disclosure, including:
[0044] S210, receive the rendering instruction sent by the Android container.
[0045] Wherein, the rendering instruction includes an instruction requesting display resources.
[0046] In some embodiments, according to the actual application scenario of the application and the hardware resource information, an upper limit of memory occupation, an upper limit of memory backup buffer occupation, and GPU occupation rights are set for each Android container.
[0047] In some embodiments, when the hardware pooled graphics renderer executes the rendering instructions sent by each application, the GPU resource manager collects memory and GPU usage statistics in real time; before processing the rendering instructions, the hardware pooled graphics renderer may maintain a rendering instruction queue for each Android container to store the rendering instructions sent by that Android container.
[0048] In some embodiments, the hardware pooled graphics renderer receives the rendering instructions sent by the Android containers and puts them into the corresponding rendering instruction queues; according to the GPU occupation rights of each Android container, rendering instructions are taken from each Android container's instruction queue and executed. This prevents one Android container from occupying the GPU resources of other Android containers; at the same time, GPU resources temporarily unused by one Android container can be handed over to other Android containers, avoiding waste of resources. A sketch of this queuing scheme is given below.
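The following sketch illustrates one possible reading of this queue-per-container scheme: a rendering instruction queue is kept for each Android container, and instructions are dispatched according to a per-container GPU occupation weight. All class and member names (PooledRenderer, ContainerState, gpuShare, etc.) are illustrative assumptions, not the actual implementation of the disclosure.

```cpp
#include <deque>
#include <functional>
#include <string>
#include <unordered_map>

// Illustrative sketch only: one instruction queue per Android container,
// dispatched according to a per-container GPU occupation weight.
using RenderInstruction = std::function<void()>;  // placeholder for a real command

struct ContainerState {
    std::deque<RenderInstruction> queue;  // rendering instruction queue
    int gpuShare = 1;                     // GPU occupation right (weight)
};

class PooledRenderer {
public:
    // S210: receive an instruction from a container and enqueue it.
    void Submit(const std::string& containerId, RenderInstruction instr) {
        containers_[containerId].queue.push_back(std::move(instr));
    }

    // Take up to gpuShare instructions from each container per round, so one
    // container cannot monopolize the GPU, while idle shares remain usable
    // by the other containers in the same round.
    void DispatchRound() {
        for (auto& [id, state] : containers_) {
            for (int i = 0; i < state.gpuShare && !state.queue.empty(); ++i) {
                state.queue.front()();   // execute the rendering instruction
                state.queue.pop_front();
            }
        }
    }

private:
    std::unordered_map<std::string, ContainerState> containers_;
};
```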
[0049] S220, according to the rendering instruction, check whether the memory resources occupied by the Android container exceed a threshold; if so, select from an information list at least one resource that meets a preset rule, store it in the memory backup buffer for backup, and release the memory occupied by the resource; after the memory release is complete, execute the rendering instruction. The information list is a list recording information about the memory resources occupied by each Android container.
[0050] In some embodiments, when the hardware pooled graphics renderer executes a command such as glBufferData, glTexImage2D, vkCreateImage, and/or vkAllocateMemory sent by an Android container, it checks in real time, according to the relevant parameters of the instruction (such as size, width, height, etc.), whether the memory resources occupied by this Android container exceed the upper limit (threshold). If the threshold is exceeded, at least one resource whose time since last use exceeds a time threshold, and whose total space is not smaller than the space required by the current rendering instruction, is selected from the information list, stored in the memory backup buffer for backup, and the memory it occupies is released; the rendering instruction is then executed. In order to keep the system running smoothly, the threshold is usually set slightly smaller than the resource upper limit.
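A minimal sketch of this check-and-evict step follows, assuming a per-container resource table with last-use timestamps. The eviction rule (idle time above a time threshold, freed space covering the requested size) follows the paragraph above; the structures and names (Resource, Container, EnsureMemory) are hypothetical.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <vector>

// Hypothetical bookkeeping for one Android container's memory resources.
struct Resource {
    uint64_t id;
    size_t   bytes;
    double   lastUsed;               // timestamp of last use
    std::vector<uint8_t> data;       // backing data to be copied on backup
};

struct Container {
    std::list<Resource> resources;    // the "information list"
    std::list<Resource> backupBuffer; // memory backup buffer
    size_t usedBytes = 0;
    size_t limitBytes = 0;            // per-container upper limit
};

// Before executing an allocating command (e.g. glTexImage2D), check the
// threshold and, if it would be exceeded, back up and release idle resources
// until the requested size fits. Returns false if not enough space was freed.
bool EnsureMemory(Container& c, size_t requestBytes, double now,
                  double idleThresholdSec) {
    // Threshold kept slightly below the hard limit to keep the system fluid.
    const size_t threshold = c.limitBytes - c.limitBytes / 10;
    if (c.usedBytes + requestBytes <= threshold) return true;
    size_t freed = 0;
    for (auto it = c.resources.begin();
         it != c.resources.end() && freed < requestBytes; ) {
        if (now - it->lastUsed > idleThresholdSec) {
            freed += it->bytes;
            c.usedBytes -= it->bytes;
            c.backupBuffer.push_back(std::move(*it));  // back up, then release
            it = c.resources.erase(it);
        } else {
            ++it;
        }
    }
    return freed >= requestBytes;
}
```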
[0051] Further, the method includes:
[0052] When a resource in the backup buffer is used again, the data in the backup buffer is loaded back into the memory; according to the actual application scenario, resources in the memory can likewise be swapped out into the memory backup buffer.
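Correspondingly, a sketch of reloading a backed-up resource when it is referenced again, reusing the hypothetical Resource and Container structures from the sketch above:

```cpp
// When a backed-up resource is referenced again, move it from the backup
// buffer back into memory (continuation of the hypothetical structures above).
bool RestoreResource(Container& c, uint64_t resourceId, double now) {
    for (auto it = c.backupBuffer.begin(); it != c.backupBuffer.end(); ++it) {
        if (it->id == resourceId) {
            it->lastUsed = now;
            c.usedBytes += it->bytes;
            c.resources.push_back(std::move(*it));  // load data back into memory
            c.backupBuffer.erase(it);
            return true;
        }
    }
    return false;  // resource is not in the backup buffer
}
```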
[0053] According to an embodiment of the present disclosure, the following technical effects are achieved:
[0054] Server GPU resources can be quickly expanded and reused, greatly improving the utilization of GPU hardware resources, reducing server hardware purchase costs, saving capital and operating expenditures, and providing a stability guarantee for containers or applications that use GPU resources.
[0055] It should be noted that, for the sake of simple description, each method embodiment is described as a series of operation combinations, but those skilled in the art will know that the present disclosure is not limited by the described sequence of operations, since, according to the present disclosure, some steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also be aware that the embodiments described in the specification are alternative embodiments, and that the operations and modules involved are not necessarily required by the present disclosure.
[0056] The above further describes the solution of the present disclosure by means of the method embodiments.
[0057] Figure 3 shows a block diagram of a GPU pooling apparatus 300 for Android containers according to an embodiment of the present disclosure. As shown in Figure 3, the apparatus 300 includes:
[0058] The receiving module 310 is configured to receive the rendering instruction sent by the Android container;
[0059] The rendering module 320 is configured to check, according to the rendering instruction, whether the memory resources occupied by the Android container exceed the threshold; if so, to select from the information list at least one resource that meets the preset rule, store it in the memory backup buffer for backup, and release the memory occupied by the resource; and, after the memory release is complete, to execute the rendering instruction. The information list is a list recording information about the memory resources occupied by each Android container.
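To relate the apparatus to the method sketches above, a purely illustrative outline of the two modules is given below; it reuses the hypothetical PooledRenderer and RenderInstruction types from the earlier sketch and is not the actual implementation of the disclosure.

```cpp
// Hypothetical outline of apparatus 300, tying modules 310 and 320 to the
// method sketches above (illustrative only).
class ReceivingModule {        // module 310
public:
    explicit ReceivingModule(PooledRenderer& r) : renderer_(r) {}
    void OnInstruction(const std::string& containerId, RenderInstruction instr) {
        renderer_.Submit(containerId, std::move(instr));   // S210
    }
private:
    PooledRenderer& renderer_;
};

class RenderingModule {        // module 320
public:
    explicit RenderingModule(PooledRenderer& r) : renderer_(r) {}
    // S220: in a full implementation the threshold check and backup/release
    // (see EnsureMemory above) would run before each instruction executes.
    void Run() { renderer_.DispatchRound(); }
private:
    PooledRenderer& renderer_;
};
```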
[0060] Those skilled in the art can clearly understand that, for convenience and conciseness of description, the specific operation of the described modules can refer to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
[0061] Figure 4 shows a schematic block diagram of an electronic device 400 that can be used to implement an embodiment of the present disclosure. As shown, the device 400 includes a central processing unit (CPU) 401, which can perform various appropriate actions and processing according to computer program instructions stored in a read-only memory (ROM) 402 or loaded from a storage unit 408 into a random access memory (RAM) 403. The RAM 403 can also store various programs and data required for the operation of the device 400. The CPU 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
[0062] A plurality of components in the device 400 are connected to the I/O interface 405, including: an input unit 406, such as a keyboard, mouse, or the like; an output unit 407, such as various types of displays, speakers, etc.; a storage unit 408, such as a disk, CD, etc.; and a communication unit 409, such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 409 allows the device 400 to exchange information/data with other devices through a computer network such as the Internet and/or a variety of telecommunication networks.
[0063] The processing unit 401 performs the respective methods and processes described above, for example the method 200. For example, in some embodiments, the method 200 can be implemented as a computer software program, which is tangibly included in a machine-readable medium, such as the storage unit 408. In some embodiments, some or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into the RAM 403 and executed by the CPU 401, one or more steps of the method 200 described above can be performed. Alternatively, in other embodiments, the CPU 401 can be configured to perform the method 200 by other suitable means (e.g., by means of firmware).
[0064] The functions described above herein can be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and so on.
[0065] The program code for implementing the methods of the present disclosure can be written in any combination of one or more programming languages. The program code can be provided to a processor or controller of a general-purpose computer, a dedicated computer, or another programmable data processing device, such that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The program code can be executed entirely on the machine, partially on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
[0066] In the context of the present disclosure, a machine-readable medium may be a tangible medium, which may contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device. The machine-readable medium can be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium can include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the above. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
[0067] Further, although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all the illustrated operations be performed, to achieve the desired result. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Some features described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features described in the context of a single implementation can also be implemented in multiple implementations, separately or in any suitable sub-combination.
[0068] Although the subject matter has been described in language specific to structural features and/or method logic actions, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or operations described above. On the contrary, the specific features and operations described above are merely example forms of implementing the claims.