312 results about "Supercomputer" patented technology

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, supercomputers have existed that can perform over a hundred quadrillion FLOPS. Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in China, the United States, the European Union, Taiwan and Japan to build even faster, more powerful and technologically superior exascale supercomputers.
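FLOPS figures of this kind are usually theoretical peaks derived from the machine's aggregate arithmetic throughput. As a minimal illustration (the machine parameters below are assumptions, not the specification of any real system), the peak is simply nodes × cores per node × clock rate × floating-point operations per core per cycle:

```python
# Minimal sketch: estimating the theoretical peak FLOPS of a hypothetical machine.
# All figures below are illustrative assumptions, not specs of any real system.

def peak_flops(nodes: int, cores_per_node: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak = nodes * cores * clock * FLOPs issued per core per cycle."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# Example: 10,000 nodes, 48 cores/node, 2.2 GHz, 32 FLOPs/cycle (e.g. wide SIMD + FMA)
print(f"{peak_flops(10_000, 48, 2.2e9, 32) / 1e15:.1f} PFLOPS")
```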

Novel massively parallel supercomputer

A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-on-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating-point processors, to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet-communication throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be used collaboratively or independently according to the needs or phases of an algorithm to optimize processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance and may enable calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and are used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplanes and other hardware devices facilitate partitioning of the supercomputer into multiple networks for optimizing supercomputing resources.
Owner: INT BUSINESS MASCH CORP
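The Torus network described above routes messages between neighboring nodes, so the cost of a message is governed by the hop count between the source and destination coordinates; the same quantity appears as H(i, j) in the layout-optimization entry that follows. A minimal sketch of that hop-count calculation on a wrap-around 3D torus (the torus dimensions and coordinates are illustrative assumptions, not values from the patent):

```python
# Minimal sketch: Manhattan hop distance on a 3D torus with wrap-around links.
# Torus dimensions and node coordinates are illustrative assumptions.

def torus_hops(a, b, dims=(8, 8, 8)):
    """Smallest number of nearest-neighbor hops between nodes a and b."""
    hops = 0
    for ai, bi, d in zip(a, b, dims):
        delta = abs(ai - bi)
        hops += min(delta, d - delta)  # go the short way around each ring
    return hops

print(torus_hops((0, 0, 0), (7, 1, 4)))  # -> 1 + 1 + 4 = 6 hops on an 8x8x8 torus
```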

Optimizing layout of an application on a massively parallel supercomputer

A general computer-implemented method and apparatus to optimize problem layout on a massively parallel supercomputer is described. The method takes as input the communication matrix of an arbitrary problem in the form of an array whose entries C(i, j) are the amount of data communicated from domain i to domain j. Given C(i, j), a heuristic map is first constructed which attempts sequentially to map a domain and its communication neighbors either to the same supercomputer node or to near-neighbor nodes on the supercomputer torus, while keeping the number of domains mapped to each supercomputer node as constant as possible. Next, a Markov chain of maps is generated from the initial map using Monte Carlo simulation with the free energy (cost function) F = Σ_{i,j} C(i, j) H(i, j), where H(i, j) is the smallest number of hops on the supercomputer torus between domain i and domain j. On the cases tested, the method was found to produce good mappings and has the potential to be used as a general layout-optimization tool for parallel codes. At present, the serial code implemented to test the method is unoptimized, so the computation time to find the optimum map can be several hours on a typical PC. For production use, a good parallel implementation of the algorithm would be required, which could itself run on the supercomputer.
Owner: IBM CORP
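The abstract describes a Metropolis-style Monte Carlo search over candidate mappings with the cost function F = Σ_{i,j} C(i, j) H(i, j). The sketch below illustrates that idea with random pairwise swaps accepted by a Metropolis rule; the torus size, cooling schedule, and identity starting map are illustrative assumptions, not the patent's implementation (which would normally start from the heuristic map):

```python
# Minimal sketch of Metropolis Monte Carlo layout optimization.
# C[i][j] = data sent from domain i to domain j; H(i, j) = torus hop distance
# between the nodes those domains are mapped to. Problem size, torus shape, and
# temperature schedule are illustrative assumptions; assumes at most one domain
# per node (len(C) <= len(NODES)).
import math
import random

DIMS = (4, 4, 4)                      # assumed 4x4x4 torus, 64 nodes
NODES = [(x, y, z) for x in range(DIMS[0]) for y in range(DIMS[1]) for z in range(DIMS[2])]

def hops(a, b):
    return sum(min(abs(p - q), d - abs(p - q)) for p, q, d in zip(a, b, DIMS))

def free_energy(mapping, C):
    """F = sum_{i,j} C(i,j) * H(i,j) for the current domain -> node mapping."""
    return sum(C[i][j] * hops(NODES[mapping[i]], NODES[mapping[j]])
               for i in range(len(C)) for j in range(len(C)) if C[i][j])

def optimize(C, steps=20000, t_start=10.0, t_end=0.01):
    n = len(C)
    mapping = list(range(n))          # identity map (or a heuristic map) as the start
    f = free_energy(mapping, C)
    for s in range(steps):
        t = t_start * (t_end / t_start) ** (s / steps)       # geometric cooling
        i, j = random.sample(range(n), 2)
        mapping[i], mapping[j] = mapping[j], mapping[i]      # propose a swap
        f_new = free_energy(mapping, C)
        if f_new <= f or random.random() < math.exp((f - f_new) / t):
            f = f_new                                        # accept the swap
        else:
            mapping[i], mapping[j] = mapping[j], mapping[i]  # reject: undo the swap
    return mapping, f
```

For a communication matrix in which each domain talks mainly to its successor, accepted swaps tend to place consecutive domains on neighboring torus nodes, which is exactly the behavior the free energy rewards.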

Massively parallel supercomputer

Inactive · US7555566B2 · Massive level of scalability · Unprecedented level of scalability · Error prevention · Program synchronisation · Packet communication · Supercomputer
A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-on-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating-point processors, to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet-communication throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be used collaboratively or independently according to the needs or phases of an algorithm to optimize processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance and may enable calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and are used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplanes and other hardware devices facilitate partitioning of the supercomputer into multiple networks for optimizing supercomputing resources.
Owner: IBM CORP
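The Global Tree network above is dedicated to collective operations such as global sums and barriers, which complete in a logarithmic number of combining steps rather than through point-to-point exchanges over the torus. A minimal software sketch of such a tree reduction (the node count, values, and pairwise combining scheme are illustrative assumptions, not the hardware protocol):

```python
# Minimal sketch: tree-based global reduction (e.g., a global sum),
# the kind of collective a dedicated tree network accelerates.
# Node count and per-node values are illustrative assumptions.

def tree_reduce(values, op=lambda a, b: a + b):
    """Combine per-node values pairwise, level by level, as a binary tree would."""
    level = list(values)
    while len(level) > 1:
        nxt = []
        for k in range(0, len(level) - 1, 2):
            nxt.append(op(level[k], level[k + 1]))   # children send partials to parent
        if len(level) % 2:                            # an unpaired node passes its value up
            nxt.append(level[-1])
        level = nxt
    return level[0]

print(tree_reduce(range(8)))   # -> 28, reached in log2(8) = 3 combining steps
```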

Method of and apparatus for delivery of proprietary audio and visual works to purchaser electronic devices

A method of delivering audio and audiovisual works to users of computer terminals includes the steps of: providing a data warehouse of digitized works; providing program means for end-user computers to access, select and play at least one of the works; providing means for controlling end-user access to the works and for collecting payment for playing at least one of the works; and diverting a portion of the payment for playing the at least one work to the holder of a copyright in the at least one work. The method preferably includes the additional steps of encrypting the works and providing the end user with program means for deciphering the works. The method still further preferably includes the additional steps of delivering advertising matter to the end user with each work the end user selects and plays; keeping a record of the particular works each end user selects and plays; and customizing the advertising delivered to the end user to fit any pattern of work selection by that end user. An apparatus for performing the method is also provided, including a computer hive made up of several inter-linked computers having specialized functions, the computers operating in unison to build a supercomputer with shared disk space and memory, in which each node belongs to the collective and possesses its own business rules and membership in an organizational managerial hierarchy.
Owner: PAIZ RICHARD S
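The method encrypts the works and gives the end user program means for deciphering them. As a minimal sketch of that encrypt-then-decipher step, shown here with the `cryptography` package's Fernet symmetric cipher purely as an illustration (the patent does not name a specific algorithm or key-distribution scheme):

```python
# Minimal sketch: encrypting a digitized work and deciphering it on the end-user side.
# The cipher choice (Fernet from the `cryptography` package) is an illustrative
# assumption; the patent does not specify an algorithm.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # would be distributed to the end user's player software
cipher = Fernet(key)

work = b"...digitized audio or audiovisual work..."
protected = cipher.encrypt(work)     # stored in the data warehouse / delivered to the user
restored = cipher.decrypt(protected) # "deciphering" on the end user's terminal

assert restored == work
```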

Thermal management system for computers

The invention involves systems to channel the air available for cooling inside the chassis of the computing device so as to force the air into selected channels in the memory bank (i.e., between the modules rather than around the modules or along some other path of least resistance). This tunneled cooling (or collimated cooling) is made possible by using a set of baffles (or apertures) placed upstream of and between the cooling-air supply and the memory bank area, forcing air to go only through the rectangular space available between adjacent modules in the memory bank of high-speed computing machines (supercomputers or blade servers in these cases). In some instances, it may be desirable to include both a blower fan, for forcing cool air through the baffles and through the heat exchanger(s) aligned with openings in the baffles, and a suction fan to draw or pull air as it exits the rear of the blade-server chassis. The invention includes a high-performance heat exchanger to be thermally coupled to the memory chips and either placed between adjacent modules (in the memory bank) or integrated within a cross-sectional area of the module that lies in the cooling-air path, whereby the heat from a given module is transferred laterally to a heat sink that, in turn, transfers the heat to the heat exchanger, which in turn is placed in the path of cool air. However, in this case the cool air has no alternative path but to pass through the high-performance heat exchanger, so the efficiency of heat exhaust is maximized. Lateral heat conduction and removal is a preferred method for module cooling in order to minimize the total module height of vertically mounted Very Low Profile (VLP) memory modules. The invention is applicable to a wide range of modules; however, it is particularly suitable for a set of memory modules with unique packaging techniques that further enhance the heat exhaust.
Owner: MICROELECTRONICS ASSEMBLY TECH
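The heat a baffled channel can carry away is set by the familiar relation Q = (mass flow) × (specific heat of air) × (allowable temperature rise). The sketch below estimates the airflow one channel would need; the module power, temperature rise, and air properties are illustrative assumptions, not figures from the patent:

```python
# Minimal sketch: airflow needed to remove a given heat load through one
# baffled memory channel, from Q = mdot * cp * dT. All numbers are
# illustrative assumptions, not values from the patent.

RHO_AIR = 1.16    # kg/m^3, air density near 30 C
CP_AIR = 1007.0   # J/(kg*K), specific heat of air

def required_airflow_m3_per_s(heat_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed so the air temperature rises by delta_t_k."""
    mdot = heat_w / (CP_AIR * delta_t_k)   # mass flow, kg/s
    return mdot / RHO_AIR                  # volume flow, m^3/s

# Example: a 10 W memory module, allowing a 10 K air temperature rise
flow = required_airflow_m3_per_s(10.0, 10.0)
print(f"{flow * 2118.88:.2f} CFM")         # convert m^3/s to cubic feet per minute
```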

Moving image holography reproducing device and color moving image holography reproducing device

A moving-image holographic reproducing device is provided that includes a reflective liquid crystal display and a light-emitting diode serving as a light source, and that is capable of reproducing a high-resolution image in a simple way. Also provided is a color moving-image holographic reproducing device capable of reproducing a color holographic image with a significantly simplified structure, using a single-plate hologram without the need for time multiplexing. The moving-image holographic reproducing device includes a computer for creating a hologram from three-dimensional coordinate data, a reflective liquid crystal display for displaying the hologram, a half mirror for projecting the displayed hologram, and a light-emitting diode, wherein a reconstructed three-dimensional image is displayed by illuminating the half mirror with light emitted from the light-emitting diode. Manipulating a sufficiently large three-dimensional holographic image in real time requires computational power one thousand to ten thousand times higher than that of current supercomputers. According to the present invention, such an amount of information can be processed at high speed by dedicated hardware in a highly parallel and distributed manner.
Owner: JAPAN SCI & TECH CORP
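The device's computer creates the hologram from three-dimensional coordinate data. One common formulation of that step, shown here purely as an illustrative sketch (the patent does not specify this exact method), is a point-source computer-generated hologram: each object point contributes a spherical wave at every hologram pixel, and the displayed fringe pattern is the intensity of the summed field plus a reference wave:

```python
# Minimal sketch: point-source computer-generated hologram (CGH).
# Each object point contributes a spherical wave exp(i*k*r)/r at every hologram
# pixel; the recorded fringe pattern is the intensity of the summed field plus a
# reference wave. Wavelength, pixel pitch, and object points are assumptions.
import numpy as np

wavelength = 532e-9                 # green laser, metres
k = 2 * np.pi / wavelength
pitch = 8e-6                        # hologram pixel pitch, metres
nx = ny = 512                       # hologram resolution

# Hologram-plane pixel coordinates (hologram at z = 0)
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

# A few object points (x, y, z, amplitude): the three-dimensional coordinate data
points = [(0.0, 0.0, 0.05, 1.0), (5e-4, -3e-4, 0.06, 0.8)]

field = np.zeros((ny, nx), dtype=np.complex128)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r      # spherical wave from this point

reference = 1.0                                 # on-axis plane reference wave
hologram = np.abs(field + reference) ** 2       # fringe pattern to show on the LCD
```

The cost of this computation scales with (number of object points) × (number of hologram pixels), which is why the abstract emphasizes highly parallel, distributed dedicated hardware for real-time operation.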