199 results for patented technology related to "Multiprocessing"

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).
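
As a concrete illustration of allocating tasks between processors, the following minimal sketch uses Python's standard-library multiprocessing module to spread independent work items over one worker process per available CPU. It is illustrative only and not drawn from any of the patents below.

```python
# A minimal sketch of task allocation across CPUs using Python's
# standard-library multiprocessing module (illustrative only).
from multiprocessing import Pool, cpu_count

def square(n: int) -> int:
    """CPU-bound work unit executed in a separate worker process."""
    return n * n

if __name__ == "__main__":
    # One worker process per available CPU; the pool allocates tasks among them.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```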

Performance technology infrastructure for modeling the performance of computer systems

An infrastructure and a set of steps are disclosed for evaluating the performance of computer systems. The infrastructure and method provide a flexible platform for carrying out analysis of various computer systems under various workload conditions. The flexible platform is achieved by supporting independent designation and incorporation of a workload specification and of the system upon which the workload is executed. The analytical framework disclosed and claimed herein facilitates flexible, dynamic integration of various hardware models and workload specifications into a system performance analysis, and potentially streamlines the development of customized computer software/system-specific analyses.
The disclosed performance technology infrastructure includes a workload specification interface facilitating designation of a particular computing instruction workload. The workload comprises a list of resource usage requests. The performance technology infrastructure also includes a hardware model interface facilitating designation of a particular computing environment (e.g., hardware configuration and/or network/multiprocessing load). A disclosed hardware model comprises a specification of delays associated with particular resource uses. A disclosed hardware specification further specifies a hardware configuration describing actual resource elements (e.g., hardware devices) and their interconnections in the system of interest. The performance technology infrastructure further comprises an evaluation engine for performing a system performance analysis in accordance with a specified workload and hardware model incorporated via the workload specification and hardware model interfaces.
Owner:MICROSOFT TECH LICENSING LLC
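
The following is a minimal, hypothetical sketch of the architecture described in this abstract: a workload expressed as a list of resource-usage requests, a hardware model that maps each resource use to a delay, and an evaluation engine that combines the two into a performance estimate. All class names, fields, and numbers are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: workload = list of resource-usage requests,
# hardware model = delay per unit of resource use, evaluation engine
# = combines the two into an estimated total delay.
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    resource: str      # e.g. "cpu", "disk", "network"
    amount: float      # units of use requested (e.g. instructions, bytes)

class WorkloadSpecification:
    """Designates a particular computing-instruction workload."""
    def __init__(self, requests: list[ResourceRequest]):
        self.requests = requests

class HardwareModel:
    """Specifies delays associated with particular resource uses."""
    def __init__(self, delay_per_unit: dict[str, float]):
        self.delay_per_unit = delay_per_unit  # seconds per unit of use

class EvaluationEngine:
    """Performs a performance analysis for a workload on a hardware model."""
    def evaluate(self, workload: WorkloadSpecification, hw: HardwareModel) -> float:
        return sum(hw.delay_per_unit[r.resource] * r.amount for r in workload.requests)

# Example: the same workload evaluated against two hardware configurations.
workload = WorkloadSpecification([ResourceRequest("cpu", 1e6), ResourceRequest("disk", 200)])
fast = HardwareModel({"cpu": 1e-9, "disk": 1e-4})
slow = HardwareModel({"cpu": 4e-9, "disk": 5e-4})
engine = EvaluationEngine()
print(engine.evaluate(workload, fast), engine.evaluate(workload, slow))
```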

Comparative updates tracking to synchronize local operating parameters with centrally maintained reference parameters in a multiprocessing system

In a multiprocessing system, a configuration manager maintains various reference parameters that are selectively copied by subordinate managed units to form local operating parameters, which subsequently govern operation of these managed units. A comparative technique is employed to track reference parameter updates and synchronize each local operating parameter counterpart accordingly. At the configuration manager, reference parameters include reference profiles and reference characteristics. Each reference profile specifies one or more of the reference characteristics. At each managed unit, the operating parameters include subscribed-to profiles and operating characteristics; both are initially copied from the configuration manager's reference profiles/characteristics. Each local operating profile specifies one or more of the operating characteristics. Each managed unit operates according to its locally maintained operating characteristics. When certain update criteria are satisfied, a managed unit and the configuration manager cooperatively synchronize the managed unit's local operating profiles and characteristics with the configuration manager's reference profiles and characteristics. This involves comparing the reference and operating profiles to identify new, updated, or deleted operating characteristics. Also, the local operating profiles and operating characteristics may be cross-referenced to identify any "orphan" characteristics for deletion.
Owner:IBM CORP
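
A minimal sketch of the comparative synchronization step described above, under the assumption that profiles are sets of characteristic names and characteristics are name/value pairs. The function and variable names are illustrative, not from the patent.

```python
# Hypothetical sketch of the comparative synchronization step: profiles map to
# sets of characteristic names, and the manager stores each characteristic's
# current value. A managed unit diffs its local copy against the reference copy.
def synchronize(manager_profiles, manager_chars, local_profiles, local_chars):
    """Return the managed unit's new local profiles, characteristics, and orphans."""
    new_profiles = {}
    new_chars = {}
    for name in local_profiles:                  # only subscribed-to profiles
        ref_members = manager_profiles.get(name, set())
        new_profiles[name] = set(ref_members)
        for char in ref_members:                 # copy new or updated characteristics
            new_chars[char] = manager_chars[char]
    # Cross-reference: any local characteristic no longer named by any
    # subscribed-to profile is an "orphan" and is flagged for deletion.
    referenced = set().union(*new_profiles.values()) if new_profiles else set()
    orphans = set(local_chars) - referenced
    return new_profiles, new_chars, orphans

manager_profiles = {"web": {"port", "threads"}, "db": {"cache_mb"}}
manager_chars = {"port": 8080, "threads": 16, "cache_mb": 512}
local_profiles = {"web": {"port", "threads", "log_level"}}   # stale local copy
local_chars = {"port": 80, "threads": 16, "log_level": "info"}
print(synchronize(manager_profiles, manager_chars, local_profiles, local_chars))
```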

Multiprocessing system with automated propagation of changes to centrally maintained configuration settings

In a multiprocessing system, hierarchically superior configuration managers maintain profiles of operating characteristics to which subordinate managed units selectively subscribe. If the profiles or operating characteristics change, the configuration managers propagate the changes to all managed units. Each configuration manager stores a record of operating characteristics and multiple server profiles, each profile specifying one or more operating characteristics. A subscription list identifies one or more managed units, each associated with one or more server profiles. Each managed unit acts according to its current operating characteristics, stored locally at the managed unit. If the managed unit receives a profile subscription request from a system administrator, the managed unit sends a subscription message to the configuration manager to subscribe to that profile. Upon receiving the subscription, the configuration manager enters the subscribing managed unit and the associated profile into the subscription list, and returns the profiled operating characteristics to the subscribing managed unit. The subscribing managed unit stores these operating characteristics in its record of current operating characteristics. If there is a change to the operating characteristics (or to the profiles), the configuration manager transmits the changed matter to all managed units with affected subscriptions. Upon receipt of this data, each subscribing managed unit stores the changed operating characteristics in its record of current operating characteristics.
Owner:IBM CORP
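
A minimal sketch of the subscribe-and-propagate flow described above, with in-process method calls standing in for the messages exchanged between a configuration manager and its managed units. All names and values are illustrative assumptions.

```python
# Hypothetical sketch of the subscribe-and-propagate flow (illustrative names;
# in-process calls stand in for network messages).
class ManagedUnit:
    def __init__(self, name):
        self.name = name
        self.current = {}                       # locally stored operating characteristics

    def receive_update(self, characteristics):
        self.current.update(characteristics)    # store pushed changes locally

class ConfigurationManager:
    def __init__(self, characteristics, profiles):
        self.characteristics = characteristics  # characteristic name -> value
        self.profiles = profiles                # profile name -> set of characteristic names
        self.subscriptions = {}                 # managed unit -> set of profile names

    def subscribe(self, unit, profile):
        self.subscriptions.setdefault(unit, set()).add(profile)
        # Return the profiled operating characteristics to the subscriber.
        unit.receive_update({c: self.characteristics[c] for c in self.profiles[profile]})

    def change_characteristic(self, name, value):
        self.characteristics[name] = value
        # Propagate the change to every managed unit with an affected subscription.
        for unit, profs in self.subscriptions.items():
            if any(name in self.profiles[p] for p in profs):
                unit.receive_update({name: value})

mgr = ConfigurationManager({"port": 8080, "threads": 16}, {"web": {"port", "threads"}})
unit = ManagedUnit("node-1")
mgr.subscribe(unit, "web")
mgr.change_characteristic("threads", 32)
print(unit.current)   # {'port': 8080, 'threads': 32}
```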

Method and system for exploiting parallelism on a heterogeneous multiprocessor computer system

In a multiprocessor system it is generally assumed that peak or near-peak performance will be achieved by splitting computation across all the nodes of the system. There exists a broad spectrum of techniques for performing this splitting or parallelization, ranging from careful handcrafting by an expert programmer at one end to automatic parallelization by a sophisticated compiler at the other. The latter approach is becoming more prevalent as automatic parallelization techniques mature. In a multiprocessor system comprising multiple heterogeneous processing elements these techniques are not readily applicable, and the programming complexity again becomes a very significant factor. The present invention provides a method for computer program code parallelization and partitioning for such a heterogeneous multiprocessor system. A single source file targeting a generic multiprocessing environment is received. Parallelization analysis techniques are applied to the received single source file. Parallelizable regions of the single source file are identified based on the applied parallelization analysis techniques. The data reference patterns, code characteristics, and memory transfer requirements are analyzed to generate an optimum partition of the program. The partitioned regions are compiled to the appropriate instruction set architecture and a single bound executable is produced.
Owner:IBM CORP
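
A minimal, hypothetical sketch of the partitioning decision described above: given analysis results for each region (whether it is parallelizable, its estimated compute cost, and the data it would have to transfer), assign each region either to the host or to an accelerator-style processing element. The cost heuristic, region attributes, and target names are illustrative assumptions, not the patent's actual analysis.

```python
# Hypothetical partitioning sketch: weigh estimated compute cost against the
# cost of moving data to a heterogeneous processing element. All names,
# heuristics, and target ISAs are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    parallelizable: bool
    compute_cost: float       # estimated work in the region
    transfer_bytes: int       # data that would have to move to an accelerator

def partition(regions, transfer_cost_per_byte=1e-6):
    """Assign each region to a processing element, weighing compute vs. transfer."""
    plan = {}
    for r in regions:
        offload_overhead = r.transfer_bytes * transfer_cost_per_byte
        if r.parallelizable and r.compute_cost > offload_overhead:
            plan[r.name] = "accelerator_isa"     # offload to the parallel element
        else:
            plan[r.name] = "host_isa"            # keep on the general-purpose core
    return plan

regions = [
    Region("matrix_multiply", True, compute_cost=5.0, transfer_bytes=1_000_000),
    Region("parse_config",    False, compute_cost=0.1, transfer_bytes=2_000),
]
print(partition(regions))   # {'matrix_multiply': 'accelerator_isa', 'parse_config': 'host_isa'}
```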

Method and apparatus for managing the execution of a broadcast instruction on a guest processor

A method and apparatus for managing the execution on guest processors of a broadcast instruction requiring a corresponding operation on other processors of a guest machine. Each of a plurality of processors on an information handling system is operable either as a host processor under the control of a host program executing on a host machine or as a guest processor under the control of a guest program executing on a guest machine. The guest machine is defined by the host program executing on the host machine and contains a plurality of such guest processors forming a guest multiprocessing configuration. A lock is defined for the guest machine containing an indication of whether it is being held by a host lock holder from the host program and a count of the number of processors holding the lock as guest lock holders. Upon decoding a broadcast instruction executing on a processor operating as a guest processor, the lock is tested to determine whether it is being held by a host lock holder. If the lock is being held by a host lock holder, an instruction interception is recognized and execution of the instruction is terminated. If the lock is not being held by a host lock holder, the lock is updated to indicate that it is being held by the guest processor as a shared lock holder, the instruction is executed, and then the lock is updated a second time to indicate that it is no longer being held by the guest processor as a shared lock holder.
Owner:IBM CORP
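
A minimal sketch of the lock protocol described above, assuming the lock is simply a host-holder flag plus a count of guest (shared) holders; threads stand in for guest processors, and all names are illustrative.

```python
# Hypothetical sketch of the guest-machine lock: a host-holder flag plus a
# count of guest processors holding the lock as shared holders.
import threading

class GuestMachineLock:
    def __init__(self):
        self._mutex = threading.Lock()
        self.held_by_host = False      # indication of a host lock holder
        self.guest_holders = 0         # count of guest (shared) lock holders

    def try_acquire_shared(self) -> bool:
        """A guest processor attempts to take the lock as a shared holder."""
        with self._mutex:
            if self.held_by_host:
                return False           # held by the host: caller must intercept
            self.guest_holders += 1
            return True

    def release_shared(self):
        with self._mutex:
            self.guest_holders -= 1

def execute_broadcast_instruction(lock: GuestMachineLock, instruction):
    if not lock.try_acquire_shared():
        # Lock held by a host lock holder: recognize an instruction
        # interception and terminate execution of the instruction.
        return "intercepted"
    try:
        instruction()                  # perform the broadcast operation
    finally:
        lock.release_shared()          # second update: no longer a shared holder
    return "completed"

lock = GuestMachineLock()
print(execute_broadcast_instruction(lock, lambda: None))   # completed
lock.held_by_host = True
print(execute_broadcast_instruction(lock, lambda: None))   # intercepted
```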

System and method for command routing and execution in a multiprocessing system

Any node in a multi-node processing system may be employed to route commands to a selected group of one or more nodes, and to initiate local command execution if permitted by local security provisions. The system includes multiple application nodes interconnected by a network, and one or more administrator nodes each coupled to at least one application node. Each administrator node has assigned security credentials. The process starts when the administrator node transmits input to one of the application nodes (an "entry" node). The input includes a command and routing information specifying a list of desired application nodes ("destination" nodes) to execute the command. In response to this input, the entry node transmits messages to all destination nodes to (1) log in to the destination nodes as the originating administrator node, and (2) request the destination nodes to execute the command. Consulting locally stored security information, each destination node determines whether the entry node's log-in should succeed. If so, the destination node consults locally stored authority information to determine whether the initiating administrator node has authority to execute the requested command. If so, the destination node executes the command. The destination node sends the entry node a response representing the outcome of command execution. The entry node organizes these responses and provides a representative output.
Owner:IBM CORP
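
A minimal sketch of the routing flow described above, with in-process calls standing in for network messages between the entry node and the destination nodes. Credentials, the authority table, and node names are illustrative placeholders.

```python
# Hypothetical sketch of command routing: an entry node forwards a command to
# destination nodes, each of which checks credentials and authority locally
# before executing and replying. All names and values are illustrative.
class ApplicationNode:
    def __init__(self, name, known_credentials, authority):
        self.name = name
        self.known_credentials = known_credentials   # locally stored security info
        self.authority = authority                   # admin -> commands they may run

    def handle_request(self, admin_credentials, command):
        """Log the originating administrator in, check authority, then execute."""
        if admin_credentials not in self.known_credentials:
            return (self.name, "login denied")
        admin = self.known_credentials[admin_credentials]
        if command not in self.authority.get(admin, set()):
            return (self.name, "not authorized")
        return (self.name, f"executed {command}")    # outcome of command execution

    def route(self, admin_credentials, command, destinations):
        """Acting as the entry node: forward to all destination nodes, collect responses."""
        responses = [dest.handle_request(admin_credentials, command) for dest in destinations]
        return dict(responses)                       # representative output

creds = {"secret-token": "alice"}
auth = {"alice": {"restart"}}
nodes = [ApplicationNode(f"node-{i}", creds, auth) for i in range(3)]
entry = nodes[0]
print(entry.route("secret-token", "restart", nodes))
# {'node-0': 'executed restart', 'node-1': 'executed restart', 'node-2': 'executed restart'}
```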