144 results about "CPU architecture" patented technology

CPU Architecture. The processor (short for microprocessor, and often called the CPU or central processing unit) is the central component of the PC. This vital component is in one way or another responsible for every single thing the PC does.

Central processing unit (CPU) accessing an extended register set in an extended register mode

A central processing unit (CPU) is described including a register file and an execution core coupled to the register file. The register file includes a standard register set and an extended register set. The standard register set includes multiple standard registers, and the extended register set includes multiple extended registers. The execution core fetches and executes instructions, and receives a signal indicating an operating mode of the CPU. The execution core responds to an instruction by accessing at least one extended register if the signal indicates the CPU is operating in an extended register mode and the instruction includes a prefix portion containing the information needed to access the at least one extended register. The standard registers may be general purpose registers of a CPU architecture associated with the instruction. The number of extended registers may be greater than the number of general purpose registers defined by the CPU architecture; in this case, the additional register identification information in the prefix portion is needed to identify a selected one of the extended registers. A width of the extended registers may also be greater than a width of the standard registers; in this case, the prefix portion may also include an indication that the entire contents of the at least one extended register are to be accessed. In this way, instruction operand sizes may selectively be increased when the CPU is operating in the extended register mode. A computer system including the CPU is also described.
Owner:GLOBALFOUNDRIES INC
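As an illustration of the mode-gated, prefix-based register selection described in the abstract above, here is a minimal Python sketch; the register counts, the prefix layout, and the names select_register, ext and ext_bits are hypothetical and not taken from the patent.

    STANDARD_REGS = 8    # general purpose registers of the base architecture (assumed count)
    EXTENDED_REGS = 32   # extended register file (assumed count; also wider per the abstract)

    def select_register(reg_field, prefix, extended_mode):
        """Pick the register bank and index addressed by an instruction.

        The prefix is assumed to carry (a) a flag enabling extended access and
        (b) extra index bits, needed because EXTENDED_REGS > STANDARD_REGS.
        """
        if extended_mode and prefix is not None and prefix.get("ext"):
            index = (prefix["ext_bits"] << 3) | reg_field   # prefix supplies the high index bits
            return "extended", index % EXTENDED_REGS
        return "standard", reg_field % STANDARD_REGS

    # In extended register mode, a prefix with ext_bits=2 reaches register 2*8 + 5 = 21;
    # without the mode (or the prefix), the same field addresses standard register 5.
    print(select_register(5, {"ext": True, "ext_bits": 2}, extended_mode=True))
    print(select_register(5, None, extended_mode=False))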

Modular signature and data-capture system and point of transaction payment and reward system

Inactive · USRE41716E1 · Increase flexibility · Minimizes paper-work · Complete banking machines · Finance · Reward system · Compatible accessories
A modular signature and data capture device employs a standardized ISA bus, standardized communication ports, and a standardized x86 CPU architecture to promote flexibility in using past, present, and future software and accessories. A VGA-caliber backlit LCD is superimposingly combined with a pressure touch pad that is usable with a passive stylus. The LCD displays pen-drawn signatures and graphics in real time, and can display images and data stored in the device or downloaded from a host system, including advertisements. The LCD can also display menus, device instructions, virtual pressure-sensitive data keys, and control keys. The device includes a built-in three-stripe magnetic card reader unit. The device accepts PCMCIA-compatible accessories including solid-state memory units and smartcards, and is compatible with plug-in accessories including an external PIN keypad entry unit, a fingerprint unit, and an omnibus unit including a printer and check processor in addition to a fingerprint unit. Security is provided by DES-encrypting PIN data and/or using Master/Session and/or DUKPT key management, or by using fingerprint token data as a PIN. The invention may be used to conduct paperless transactions in which the merchant is paid in real time. Further, merchant purchase profiles may be generated on a per-user basis to promote more effective advertising.
Owner:SYMBOL TECH LLC

Delayed lock-step CPU compare

The present invention relates to an electronic device comprising a first CPU, a second CPU, a first delay stage and a second delay stage for delaying data propagating on a bus, and a CPU compare unit, wherein the first delay stage is coupled to an output of the first CPU and to a first input of the CPU compare unit, an input of the first CPU is coupled to a system input bus, the second delay stage is coupled to the system input bus and to an input of the second CPU, an output of the second CPU (CPU2) is coupled to the CPU compare unit, and wherein the first CPU and the second CPU are adapted to execute the same program code and the CPU compare unit is adapted to compare an output signal of the first delay stage, which is a delayed output signal of the first CPU, with an output signal of the second CPU. In one embodiment, the present invention relates to a method for lock-step comparison of CPU outputs of an electronic device, in particular a microcontroller, having a dual-CPU architecture, the method comprising executing the same program code on a first CPU and a second CPU in response to data provided via a system input bus, delaying the output data of the first CPU by a predetermined first delay to obtain delayed output data, delaying the data to be input to the second CPU by a predetermined second delay, and comparing the output data of the second CPU with the delayed output data of the first CPU.
Owner:TEXAS INSTR INC
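A minimal Python sketch of the delayed lock-step comparison described above, assuming a two-cycle delay and stand-in CPUs that simply compute a placeholder function on their input; names and values are invented for illustration.

    from collections import deque

    # Illustrative model: CPU1's output is delayed before comparison, and CPU2's
    # input is delayed by the same amount, so both results reach the comparator in
    # the same cycle even though CPU2 runs behind CPU1 in time.

    DELAY = 2                     # predetermined delay in cycles (arbitrary here)

    def cpu(x):
        return x * 3 + 1          # placeholder for "executing the same program code"

    delay1 = deque([None] * DELAY)    # first delay stage: delays CPU1's output
    delay2 = deque([None] * DELAY)    # second delay stage: delays CPU2's input

    for cycle, data in enumerate([4, 7, 1, 9, 2]):
        out1_delayed = delay1.popleft(); delay1.append(cpu(data))
        in2_delayed = delay2.popleft(); delay2.append(data)
        out2 = cpu(in2_delayed) if in2_delayed is not None else None
        if out1_delayed is not None:
            assert out1_delayed == out2, f"lock-step mismatch at cycle {cycle}"

    print("no lock-step mismatch detected")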

Time synchronization method, programmable logic device, single board and network element

The invention provides a time synchronization method, a programmable logic device, a single board, and a network element. The method comprises the following steps: the programmable logic device receives a request message sent by a terminal; the programmable logic device generates a time synchronization message according to the request message; and the programmable logic device sends the time synchronization message to the terminal. The time synchronization method in the technical scheme provided by the invention solves the problem that, owing to the particular limitations of a CPU architecture, a time synchronization response must be completed by a software interrupt, resulting in deficient packet-sending capability. The programmable logic device frames and processes the time synchronization message and receives and sends the message directly, comprehensively improving message processing capability, and the high-frequency, accurate timing of the programmable logic device ensures a more precise packet-sending interval and further optimizes time synchronization performance, so that the bandwidth resources of the Ethernet are used to the utmost of the message transceiving capability.
Owner:ZTE CORP
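The abstract does not disclose a frame format, so the following sketch only illustrates the general idea of answering a time synchronization request directly in the framing path with a locally captured timestamp; the packet layout, field names, and the function build_sync_response are assumptions.

    import struct
    import time

    # Illustrative only: frame a time-sync response with a locally captured
    # timestamp instead of deferring the response to CPU software interrupts.

    def hardware_timestamp_ns():
        return time.monotonic_ns()          # stand-in for a high-frequency counter

    def build_sync_response(request: bytes) -> bytes:
        seq = struct.unpack(">H", request[:2])[0]       # assumed 16-bit sequence number
        secs, nanos = divmod(hardware_timestamp_ns(), 1_000_000_000)
        # assumed response layout: sequence number, seconds, nanoseconds
        return struct.pack(">HQI", seq, secs, nanos)

    request = struct.pack(">H", 42)
    print(struct.unpack(">HQI", build_sync_response(request)))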

Intelligent battery charging and discharging management system and method based on 5G cloud computing platform

The invention discloses an intelligent battery charging and discharging management system and method based on a 5G cloud computing platform. The system comprises a battery management system-on-chip based on a dual-CPU architecture, a 5G communication unit, a database server, the cloud computing platform, a web server, and a visual terminal. The battery management system-on-chip with the dual-CPU architecture is used for acquiring the operating-state data of each battery pack and for realizing SOC estimation, SOH estimation, fault diagnosis, and charging and discharging control of the batteries based on the battery data. The 5G communication unit is connected to the battery management system-on-chip and is used for transmitting the operating-state information and position information of the battery pack to the cloud computing platform in real time and for receiving the control parameters and signals transmitted by the cloud computing platform. The cloud computing platform is used for realizing multi-dimensional battery parameter extraction, the BMS control strategy, and cooperative control of the cloud-assisted battery management system. With 5G as the carrier, active safety management and intelligent charging and discharging control of the battery pack are realized through edge computing and cloud-based state estimation and correction.
Owner:UNIV OF SCI & TECH OF CHINA
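The abstract lists SOC estimation among the on-chip functions but does not disclose the algorithm; as a placeholder only, the sketch below uses plain coulomb counting, a common baseline method, with invented parameter names and values.

    # Illustrative SOC update by coulomb counting (not the patent's method).

    CAPACITY_AH = 100.0          # assumed pack capacity in ampere-hours

    def update_soc(soc, current_a, dt_s, capacity_ah=CAPACITY_AH):
        """Integrate current over dt_s seconds; positive current means discharge."""
        soc -= (current_a * dt_s) / (capacity_ah * 3600.0)
        return min(max(soc, 0.0), 1.0)

    soc = 0.80
    for current in [20.0, 20.0, -10.0]:      # amps sampled once per second
        soc = update_soc(soc, current, dt_s=1.0)
    print(f"estimated SOC: {soc:.4%}")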

Efficient dynamic load balancing system and method for processing large-scale data

The invention discloses an efficient dynamic load balancing system and method for processing large-scale data, and belongs to the technical field of large-scale data processing. The system comprises a central control system, a computing cluster system, a storage system, and a high-speed network. The nodes of the central control system adopt a mixed heterogeneous CPU-and-GPU architecture; the nodes of the computing cluster system adopt either a mixed heterogeneous CPU-and-GPU architecture or a CPU architecture; the storage system is divided into shared storage and local storage, where the nodes of the shared storage adopt a CPU architecture and the local storage is used for storing data of the central control system nodes or the computing cluster system nodes; and the high-speed network connects the central control system nodes, the computing cluster system nodes, and the shared storage nodes to form a centralized system for processing large-scale data. This solves the problem that current server computing systems, limited by insufficient network bandwidth and small storage capacity, cannot process large-scale data.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD
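As a rough illustration of dispatching work across heterogeneous CPU and CPU+GPU nodes, the sketch below uses a least-loaded policy; the abstract does not state the scheduling rule, so the policy, the Node fields, and the dispatch function are assumptions.

    from dataclasses import dataclass

    # Illustrative least-loaded dispatch across mixed CPU and CPU+GPU nodes.

    @dataclass
    class Node:
        name: str
        load: float
        has_gpu: bool = False

    def dispatch(nodes, task_cost, needs_gpu=False):
        eligible = [n for n in nodes if n.has_gpu or not needs_gpu]
        node = min(eligible, key=lambda n: n.load)   # pick the least-loaded eligible node
        node.load += task_cost
        return node.name

    cluster = [Node("cpu-node-1", 0.2), Node("cpu-node-2", 0.5),
               Node("gpu-node-1", 0.1, has_gpu=True)]
    print(dispatch(cluster, task_cost=0.3, needs_gpu=True))   # -> gpu-node-1
    print(dispatch(cluster, task_cost=0.1))                   # -> cpu-node-1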

Method for operating multi-CPU (Central Processing Unit) architecture service in Kubernetes

Pending · CN111309401A · Support update delete operation · Meet the requirements of daily use · Multiple digital computer combinations · Program loading/initiating · Manifest file · CPU architecture
The invention discloses a method for operating a multi-CPU (Central Processing Unit) architecture service in Kubernetes. The method specifically comprises the following steps: S1, the Kubernetes components are deployed, supporting the AMD64 architecture by default; S2, a service cluster is established according to the CPU architectures deployed in the Kubernetes cluster that the service is to adapt to, and an ARM64 container image and an AMD64 container image are built respectively; S3, the container cloud platform customizes the name of the service image according to a specified format; S4, the image is obtained according to the image name and the adapted CPU architecture type, and a Docker manifest file is created and pushed to the container image repository integrated with the container cloud platform; and S5, container image management for the multi-CPU architecture is carried out through the image management function of the container cloud platform. According to this method for operating a multi-CPU architecture service in Kubernetes, the requirements of daily use can be met, and management of hosts with multiple CPU architectures in a Kubernetes cluster is supported, so that Linux containers for the corresponding CPU architecture can be run on the corresponding hosts.
Owner:广西梯度科技股份有限公司
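A small Python sketch of the per-architecture image naming and manifest step described above; the naming format, the registry URL, and the use of the docker manifest subcommands reflect common practice rather than details from the patent.

    import platform
    import subprocess

    # Illustrative: compose per-architecture image names in a fixed format and
    # combine them under one manifest, so each host pulls the image matching its CPU.

    REGISTRY = "registry.example.com/demo"      # hypothetical registry
    SERVICE, TAG = "my-service", "1.0.0"        # hypothetical service name and version
    ARCHES = ["amd64", "arm64"]
    ARCH_MAP = {"x86_64": "amd64", "aarch64": "arm64"}

    def image_name(arch):
        return f"{REGISTRY}/{SERVICE}:{TAG}-{arch}"     # assumed "specified format"

    def push_manifest(dry_run=True):
        manifest = f"{REGISTRY}/{SERVICE}:{TAG}"
        commands = [
            ["docker", "manifest", "create", manifest] + [image_name(a) for a in ARCHES],
            ["docker", "manifest", "push", manifest],
        ]
        for cmd in commands:
            print(" ".join(cmd))
            if not dry_run:
                subprocess.run(cmd, check=True)

    host_arch = ARCH_MAP.get(platform.machine(), platform.machine())
    print("this host would pull:", image_name(host_arch))
    push_manifest()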

Method for realizing cross-compilation of Docker images

Active · CN111309451A · Fix fast boot issues · Solve the problem of cross compilation · Energy efficient computing · Software simulation/interpretation/emulation · Computer hardware · CPU architecture
The invention discloses a method for realizing cross-compilation of Docker images. The method specifically comprises the following steps: S1, installing the qemu-user-static service program on the Docker image cross-compilation environment system; S2, checking whether binfmt_misc in the Linux system has registered the corresponding emulator configuration; S3, determining the CPU architecture type of the Docker image that needs cross-compilation; S4, writing a Dockerfile for building the Docker image, such that Docker images for different target CPU architectures are built from the same Dockerfile; and S5, determining whether the generated Docker image is an image for the target CPU architecture by checking the identifier in the generated Docker image. The invention relates to the technical field of computer programs. With this method for realizing Docker image cross-compilation, the differences in the underlying hardware facilities can be completely shielded when the application compilation environment is started, and the problem of cross-platform cross-compilation is solved, so that problems caused by differences in the underlying hardware are shielded while the application cross-compilation environment is started rapidly.
Owner:广西梯度科技股份有限公司
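A hedged sketch of the flow in steps S1 to S5: verify that binfmt_misc has a qemu-user-static handler registered, then build the same Dockerfile for another target architecture. The binfmt_misc path and the use of docker build --platform reflect common practice, not the patent text.

    import os
    import subprocess

    # Illustrative cross-compilation flow (common practice, not the patent's exact
    # procedure): check for a registered qemu handler, then build for the target.

    BINFMT_DIR = "/proc/sys/fs/binfmt_misc"

    def qemu_registered(handler="qemu-aarch64"):
        """Step S2: is a qemu-user-static emulator registered with binfmt_misc?"""
        return os.path.exists(os.path.join(BINFMT_DIR, handler))

    def build_image(context, tag, target_platform="linux/arm64", dry_run=True):
        """Steps S3 and S4: build the same Dockerfile for the chosen target architecture."""
        cmd = ["docker", "build", "--platform", target_platform, "-t", tag, context]
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)

    if qemu_registered():
        build_image(".", "demo/app:arm64")
    else:
        print("qemu-user-static handler not registered; install it first (step S1)")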

Fast domain switch and error recovery in a secure CPU architecture

In order to gather, store temporarily and efficiently deliver safestore information in a CPU having data manipulation circuitry including a register bank, first and second serially oriented safestore buffers are employed. At suitable times during the processing of information, a copy of the instantaneous contents of the register bank is transferred into the first safestore buffer. After a brief delay, a copy of the first safestore buffer is transferred into the second safestore buffer. If a call for a domain change (which might include a process change or a fault) is sensed, a safestore frame is sent to cache, and the first safestore buffer is loaded from the second safestore buffer rather than from the register bank. Later, during a climb operation, if a restart of the interrupted process is undertaken and the restoration of the register bank is directed to be taken from the first safestore buffer, this source, rather than the safestore frame stored in cache, is employed to obtain a corresponding increase in the rate of restart. In one embodiment, the transfer of information between the register bank and the safestore buffers is carried out on a bit-by-bit basis to achieve additional flexibility of operation and also to conserve integrated circuit space.
Owner:BULL HN INFORMATION SYST INC
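A toy Python model of the two serially oriented safestore buffers described above: the register bank is periodically snapshotted into the first buffer, the first buffer later drains into the second, and on a domain change the first buffer is reloaded from the second rather than from the register bank. Class and method names, sizes, and timing are invented for illustration.

    # Toy model of the two-stage safestore scheme (illustrative only).

    class SafestoreCPU:
        def __init__(self, nregs=8):
            self.registers = [0] * nregs
            self.safestore1 = None      # first (newer) safestore buffer
            self.safestore2 = None      # second (older) safestore buffer

        def snapshot(self):
            """Age buffer 1 into buffer 2, then copy the register bank into buffer 1."""
            self.safestore2 = self.safestore1
            self.safestore1 = list(self.registers)

        def domain_change(self):
            """On a process change or fault: send a frame to cache and reload
            buffer 1 from buffer 2 rather than from the register bank."""
            frame_for_cache = self.safestore1
            self.safestore1 = self.safestore2
            return frame_for_cache

        def fast_restart(self):
            """Restore the register bank from buffer 1 instead of the cached frame."""
            self.registers = list(self.safestore1)

    cpu = SafestoreCPU()
    cpu.registers = [1, 2, 3, 4, 5, 6, 7, 8]; cpu.snapshot()
    cpu.registers[0] = 99; cpu.snapshot()
    cpu.domain_change(); cpu.fast_restart()
    print(cpu.registers)    # restored from the older snapshot: [1, 2, 3, 4, 5, 6, 7, 8]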

Cluster deployment method and device for a multi-CPU architecture

The invention provides a cluster deployment method and device for a multi-CPU architecture. The method comprises the following steps: constructing a code repository, and storing the source code of the cluster components for different CPU architectures in the code repository; building the corresponding image files from the source code of the cluster components for the different CPU architectures, and storing the built image files in an image repository; writing a cluster deployment script suitable for the multi-CPU architecture, and storing the script in the code repository; and, in response to a received instruction to deploy the cluster, obtaining and running the corresponding script from the code repository based on the CPU architecture, so as to call the corresponding programs from the code repository and the image repository to execute cluster deployment for the current CPU architecture. With the scheme provided by the invention, deployment exceptions or failures caused by personnel errors when deploying in different CPU architecture environments can be greatly reduced; the scheme has the advantages of one-time configuration and repeated reuse, shortening deployment time and improving deployment efficiency.
Owner:SUZHOU LANGCHAO INTELLIGENT TECH CO LTD
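The sketch below illustrates only the final step: on a deployment instruction, the script matching the host's CPU architecture is selected from the code repository checkout and run. The directory layout and script names are hypothetical.

    import platform
    import subprocess

    # Illustrative: pick and run the deployment script that matches the host CPU
    # architecture (hypothetical repository layout and script names).

    SCRIPTS = {
        "x86_64": "deploy/amd64/deploy_cluster.sh",
        "aarch64": "deploy/arm64/deploy_cluster.sh",
    }

    def deploy(dry_run=True):
        arch = platform.machine()
        script = SCRIPTS.get(arch)
        if script is None:
            raise RuntimeError(f"no deployment script for CPU architecture {arch}")
        print("running", script, "for", arch)
        if not dry_run:
            subprocess.run(["bash", script], check=True)

    deploy()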