
273 results about "Page fault" patented technology

A page fault (sometimes called #PF, PF or hard fault) is a type of exception raised by computer hardware when a running program accesses a memory page that is not currently mapped by the memory management unit (MMU) into the virtual address space of a process. Logically, the page may be accessible to the process, but a mapping must be added to the process page tables, and the actual page contents may additionally need to be loaded from a backing store such as a disk. The processor's MMU detects the page fault, while the exception handling software that handles page faults is generally part of the operating system kernel. When handling a page fault, the operating system generally tries to make the required page accessible at a location in physical memory, or terminates the program in the case of an illegal memory access.
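As a rough illustration of this flow, the user-space C sketch below simulates a tiny page table backed by an in-memory "disk": an access to an unmapped page triggers a simulated fault, the handler copies the page from the backing store into a free frame, and the access then succeeds. The page table layout, sizes, and helper names are simplifications invented for the example, not part of any particular operating system.

```c
/* Minimal user-space simulation of page-fault handling, for illustration only.
 * The page table, backing store, and handler are simplified stand-ins for the
 * MMU structures and disk I/O an operating system actually manages. */
#include <stdio.h>
#include <string.h>

#define NUM_PAGES   8
#define PAGE_SIZE   16

struct pte { int present; int frame; };           /* page table entry */

static struct pte page_table[NUM_PAGES];          /* all pages start not-present */
static char physical_memory[NUM_PAGES][PAGE_SIZE];
static char backing_store[NUM_PAGES][PAGE_SIZE];  /* stands in for the disk */
static int  next_free_frame = 0;

/* Handle a fault on 'page': allocate a frame, copy the page contents in from
 * the backing store, and mark the mapping present in the page table. */
static void handle_page_fault(int page)
{
    int frame = next_free_frame++;                /* no eviction in this sketch */
    memcpy(physical_memory[frame], backing_store[page], PAGE_SIZE);
    page_table[page].frame = frame;
    page_table[page].present = 1;
    printf("page fault: page %d loaded into frame %d\n", page, frame);
}

/* Translate a virtual address; fault first if the page is not mapped. */
static char read_byte(int vaddr)
{
    int page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;
    if (!page_table[page].present)
        handle_page_fault(page);
    return physical_memory[page_table[page].frame][offset];
}

int main(void)
{
    strcpy(backing_store[3], "hello from disk");
    printf("%c\n", read_byte(3 * PAGE_SIZE));      /* first access faults      */
    printf("%c\n", read_byte(3 * PAGE_SIZE + 1));  /* second access does not   */
    return 0;
}
```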

Intelligent network streaming and execution system for conventionally coded applications

An intelligent network streaming and execution system for conventionally coded applications partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system, and the user launches the application in the same way that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client, and the client stores the page segments in a cache. The client requests page segments from the application server whenever a page fault occurs in the cache for the application program. The client prefetches page segments from the application server, or the application server pushes additional page segments to the client, based on the pattern of page segment requests for that particular application. The user subscribes and unsubscribes to application programs; whenever the user accesses an application program, a securely encrypted access token is obtained from a license server if the user has a valid subscription to the application program. The application server begins streaming the requested page segments to the client when it receives a valid access token from the client. The client performs server load balancing across a plurality of application servers; if the client observes a non-response or slow-response condition from an application server or license server, it switches to another application server or license server.
Owner:NUMECENT HLDG
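As a rough sketch of the fault-driven fetch path this abstract describes, the C fragment below models a client-side segment cache: a missing segment (a "page fault" from the cache) triggers a request to the application server, and subsequent accesses are served locally. The cache layout and fetch_from_app_server() are hypothetical stand-ins for the patent's client cache and streaming protocol; prefetching, access tokens, and load balancing are omitted.

```c
/* Minimal sketch of fault-driven page-segment streaming.  The cache layout
 * and fetch_from_app_server() are hypothetical placeholders, not the
 * patent's actual protocol. */
#include <stdio.h>
#include <string.h>

#define SEGMENT_SIZE 4096
#define MAX_SEGMENTS 256

static char cache[MAX_SEGMENTS][SEGMENT_SIZE];
static int  cached[MAX_SEGMENTS];                 /* 1 if the segment is in the cache */

/* Placeholder for the network request to the application server. */
static void fetch_from_app_server(int segment, char *buf)
{
    printf("requesting segment %d from application server\n", segment);
    memset(buf, 0, SEGMENT_SIZE);                 /* pretend the reply arrived */
}

/* Return a pointer to a segment, fetching it on a cache miss ("page fault"). */
static const char *get_segment(int segment)
{
    if (!cached[segment]) {
        fetch_from_app_server(segment, cache[segment]);
        cached[segment] = 1;
        /* A real client would also record the miss so that later segments
         * can be prefetched based on the observed access pattern. */
    }
    return cache[segment];
}

int main(void)
{
    get_segment(7);   /* miss: fetched from the server        */
    get_segment(7);   /* hit: served from the local cache     */
    return 0;
}
```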

Apparatus and method for providing simultaneous local and global addressing using software to distinguish between local and global addresses

An apparatus and method provide simultaneous local and global addressing capabilities in a computer system. A global address space is defined that may be accessed by all processes. In addition, each process has a local address space that is local (and therefore available) only to that process. An address space processor is implemented in software to perform system functions that distinguish between local addresses and global addresses. In the preferred embodiments, the local address space has a size that is a multiple of the size of a segment of global address space. When the hardware indicates a page fault, the address space processor determines whether the address being translated is a local address or a global address. If the address is a local address, the address space processor uses a local directory to process the page fault. If the address is a global address, the address space processor uses a global directory to process the page fault. When the hardware indicates an addressing error because a computed address crosses a global segment boundary, the address space processor determines whether the address is a local address or a global address. If the address is a global address, the address space processor indicates an addressing error. If the address is a local address, the address space processor determines whether the address is within the process's local address space, and indicates an addressing error if the address is outside the process's local address space. Instructions are allowed to operate on both local and global addresses because the address space processor handles either type of address whenever software assistance is required, such as for servicing a page fault or checking a segment boundary crossing. In addition, the address space processor dynamically checks the addressing compatibility of called code before passing control to the called code.
Owner:IBM CORP
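The dispatch step described above, routing a fault to the local or global directory based on the faulting address, might look roughly like the C sketch below. The segment size, the size of the local space, and the two directory lookups are assumptions made for illustration; only the idea of classifying the address and choosing a directory comes from the abstract.

```c
/* Minimal sketch of local-vs-global page fault dispatch.  Constants and the
 * directory lookups are hypothetical simplifications. */
#include <stdio.h>
#include <stdint.h>

#define GLOBAL_SEGMENT_SIZE (1ULL << 24)               /* assumed segment size          */
#define LOCAL_SPACE_SIZE    (4 * GLOBAL_SEGMENT_SIZE)  /* local space: a multiple of it */

static void resolve_with_local_directory(uint64_t addr)
{
    printf("local page fault at %#llx\n", (unsigned long long)addr);
}

static void resolve_with_global_directory(uint64_t addr)
{
    printf("global page fault at %#llx\n", (unsigned long long)addr);
}

/* The "address space processor": decide which directory services the fault. */
static void handle_page_fault(uint64_t addr)
{
    if (addr < LOCAL_SPACE_SIZE)
        resolve_with_local_directory(addr);     /* per-process mapping         */
    else
        resolve_with_global_directory(addr);    /* shared, system-wide mapping */
}

int main(void)
{
    handle_page_fault(0x1000);                       /* local address  */
    handle_page_fault(LOCAL_SPACE_SIZE + 0x1000);    /* global address */
    return 0;
}
```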

Dynamic memory affinity reallocation after partition migration

A method of dynamically reallocating memory affinity in a virtual machine after migrating the virtual machine from a source computer system to a destination computer system migrates processor states and resources used by the virtual machine from the source computer system to the destination computer system. The method maps memory of the virtual machine to processor nodes of the destination computer system. The method deletes memory mappings in processor hardware, such as translation lookaside buffers and effective-to-real address tables, for the virtual machine on the destination computer system. The method starts the virtual machine on the destination computer system in virtual real memory mode. A hypervisor running on the destination computer system receives a page fault and the virtual address of a page for said virtual machine from a processor of the destination computer system and determines whether the page is in the local memory of the processor. If the hypervisor determines the page to be in the local memory of the processor, the hypervisor returns a physical address mapping for the page to the processor. If the hypervisor determines the page not to be in the local memory of the processor, the hypervisor moves the page to the local memory of the processor and returns a physical address mapping for said page to the processor.
Owner:IBM CORP
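A rough sketch of the hypervisor's decision on each fault, returning a local mapping directly or migrating the page first, is shown below. The page_info structure, node identifiers, and migrate_page() helper are hypothetical; real virtual-real-memory handling and TLB/ERAT management are omitted.

```c
/* Minimal sketch of affinity-aware fault handling after a partition migration.
 * All structures and helper names here are hypothetical. */
#include <stdio.h>
#include <stdint.h>

struct page_info {
    int      home_node;     /* NUMA node currently holding the page */
    uint64_t phys_addr;     /* current physical address of the page */
};

/* Placeholder for copying the page into the faulting processor's local memory. */
static void migrate_page(struct page_info *pg, int node)
{
    printf("moving page from node %d to node %d\n", pg->home_node, node);
    pg->home_node = node;
    pg->phys_addr = 0x200000;   /* pretend a local frame was allocated */
}

/* Hypervisor fault handler: 'node' is the node of the faulting processor. */
static uint64_t handle_fault(struct page_info *pg, int node)
{
    if (pg->home_node != node)        /* page is remote: reallocate affinity   */
        migrate_page(pg, node);
    return pg->phys_addr;             /* mapping handed back to the processor  */
}

int main(void)
{
    struct page_info pg = { .home_node = 0, .phys_addr = 0x100000 };
    printf("mapping: %#llx\n", (unsigned long long)handle_fault(&pg, 1));
    return 0;
}
```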

Process memory protection method based on hardware-assisted virtualization

The invention provides a process memory protection method based on hardware-assisted virtualization. The method comprises the following steps: step 1, loading a process memory monitoring module; step 2, notifying the monitoring module when a protected process starts; step 3, creating an encrypted copy of the protected memory space of the protected process; step 4, providing memory virtualization for the virtual machine system using a shadow page table mechanism; step 5, intercepting write operations to the CR3 register and page fault exceptions. The method has the following advantages: a monitoring module working in root mode monitors the page directories, page tables, and modifications of the page directory register in all processes, so that no process other than the protected process can access data in the protected process's memory space; when the protected process switches to kernel mode, its user-mode pages are replaced to prevent code injection attacks from kernel mode; and data execution prevention is used to mark the pages of the protected process's data area as non-executable, preventing code injection attacks in user mode.
Owner:NANJING UNIV
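Reduced to its control flow, the monitoring idea might look like the C sketch below: a root-mode handler reacts to CR3 writes (address-space switches) and page faults, and refuses to map the protected region for any other address space. The exit reasons, addresses, and helpers are invented for illustration and greatly simplify real shadow page table handling.

```c
/* Minimal sketch of root-mode monitoring of CR3 writes and page faults.
 * Exit reasons, addresses, and helpers are hypothetical simplifications. */
#include <stdio.h>
#include <stdint.h>

enum vmexit_reason { EXIT_CR3_WRITE, EXIT_PAGE_FAULT };

static const uint64_t protected_cr3 = 0x1000;   /* page directory of the protected process */
static const uint64_t prot_start    = 0x7f0000; /* protected memory region (assumed)       */
static const uint64_t prot_end      = 0x800000;
static uint64_t current_cr3;

/* A CR3 write means the guest is switching address spaces; remember which one. */
static void on_cr3_write(uint64_t new_cr3)
{
    current_cr3 = new_cr3;
    printf("process switch, CR3 = %#llx\n", (unsigned long long)new_cr3);
}

/* On a page fault, only the protected process may map its own pages. */
static void on_page_fault(uint64_t fault_addr)
{
    int protected_page = fault_addr >= prot_start && fault_addr < prot_end;
    if (protected_page && current_cr3 != protected_cr3) {
        printf("denied: %#llx belongs to the protected process\n",
               (unsigned long long)fault_addr);   /* e.g. expose the encrypted copy instead */
        return;
    }
    printf("allowed: mapping %#llx\n", (unsigned long long)fault_addr);
}

static void handle_vmexit(enum vmexit_reason reason, uint64_t arg)
{
    if (reason == EXIT_CR3_WRITE)
        on_cr3_write(arg);
    else
        on_page_fault(arg);
}

int main(void)
{
    handle_vmexit(EXIT_CR3_WRITE, 0x2000);     /* another process is scheduled      */
    handle_vmexit(EXIT_PAGE_FAULT, 0x7f0000);  /* it touches a protected page: deny */
    handle_vmexit(EXIT_CR3_WRITE, 0x1000);     /* the protected process runs        */
    handle_vmexit(EXIT_PAGE_FAULT, 0x7f0000);  /* its own access is allowed         */
    return 0;
}
```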

System and method for memory management

Inactive • US20050188164A1 • Minimizing memory access latency • Resource allocation • Memory addressing/allocation/relocation • Parallel computing
The present invention is directed to a method and system for minimizing memory access latency during realtime processing. The method includes a mechanism for marking information that will be accessed during realtime processing. The marked information may include code, data, heaps, and stacks, as well as other information. The method includes support for locking down all of the marked information so that it is present in the computing machine's physical memory and no page faults will be incurred during realtime processing. The method additionally enables realtime processing code to allocate and free memory in a non-blocking manner. It does so by enabling the creation of heaps for use during realtime processing, wherein each heap supports allocating and freeing memory in a non-blocking fashion. Each heap tracks freed memory blocks using individual non-blocking tracking lists for each memory block size supported by that heap. If a memory allocation request to a heap can be satisfied by using a memory block available on one of the lists of freed memory blocks, the method includes allocating the available memory block by popping the memory block from the tracking list. If no freed memory blocks of the desired size are available, then the method includes traversing a separate set of source memory blocks for that heap, and making the allocation in a non-blocking fashion from one of those blocks.
Owner:MICROSOFT TECH LICENSING LLC
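The per-size free lists might be sketched as below: freeing pushes a block onto a lock-free stack for its size class, and allocation first tries to pop from that stack, falling back to the heap's source blocks only when the stack is empty. The structures and helper names are hypothetical, and ABA protection plus the source-block fallback are omitted for brevity.

```c
/* Minimal sketch of non-blocking per-size free lists.  Structures and names
 * are hypothetical; ABA protection and the source-block fallback are omitted. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct free_block { struct free_block *next; };

/* One lock-free LIFO of freed blocks per supported block size. */
struct free_list { _Atomic(struct free_block *) head; };

/* Non-blocking free: push the block onto the front of its size class's list. */
static void push_free(struct free_list *list, struct free_block *blk)
{
    struct free_block *old = atomic_load(&list->head);
    do {
        blk->next = old;        /* link in front of the current head */
    } while (!atomic_compare_exchange_weak(&list->head, &old, blk));
}

/* Non-blocking allocate: pop a previously freed block, or NULL if none. */
static struct free_block *pop_free(struct free_list *list)
{
    struct free_block *old = atomic_load(&list->head);
    while (old && !atomic_compare_exchange_weak(&list->head, &old, old->next))
        ;                       /* retry until the pop succeeds or the list empties */
    return old;
}

int main(void)
{
    struct free_list list64;                  /* free list for 64-byte blocks */
    atomic_init(&list64.head, NULL);

    struct free_block *blk = malloc(64);
    push_free(&list64, blk);                  /* "free" a 64-byte block       */
    printf("reused block: %p\n", (void *)pop_free(&list64));
    printf("empty list:   %p\n", (void *)pop_free(&list64));
    free(blk);
    return 0;
}
```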