167 results about "Handle" patented technology

In computer programming, a handle is an abstract reference to a resource. Handles are used when application software references blocks of memory or objects managed by another system, such as a database or an operating system. A resource handle can be an opaque identifier, in which case it is often an integer number (often an array index in an array or "table" that is used to manage that type of resource), or it can be a pointer that allows access to further information.
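
To make the definition concrete, here is a minimal Python sketch of a handle table; the class name `HandleTable` and its methods are illustrative only, not any particular system's API. The caller receives an opaque integer and must go back through the table to reach the underlying resource, which lets the managing system validate or revoke the resource behind the handle.

```python
class HandleTable:
    """Maps opaque integer handles to resources owned by this subsystem."""

    def __init__(self):
        self._resources = {}   # handle -> resource object
        self._next_handle = 1  # 0 is reserved as an "invalid handle" value

    def open(self, resource):
        """Register a resource and hand back an opaque handle."""
        handle = self._next_handle
        self._next_handle += 1
        self._resources[handle] = resource
        return handle

    def get(self, handle):
        """Resolve a handle; raises if the handle is stale or invalid."""
        try:
            return self._resources[handle]
        except KeyError:
            raise ValueError(f"invalid handle: {handle}")

    def close(self, handle):
        """Invalidate the handle; the caller's copy becomes useless."""
        self._resources.pop(handle, None)


# Usage: the caller only ever sees the integer, never the table entry itself.
table = HandleTable()
h = table.open({"path": "/tmp/example", "mode": "r"})
print(table.get(h)["path"])
table.close(h)
```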

Method for realizing local file system through object storage system

Status: Inactive | Publication: CN107045530A | Effects: reduce the number of interactions; improve the performance of accessing the swift storage system | Topics: special data processing applications; application server; file system
The invention discloses a method for realizing a local file system through an object storage system. A file system metadata cache algorithm and a memory description structure are adopted, so that the frequency of interaction between an application and the swift storage system back end is reduced and the application's performance in accessing the swift storage system is improved; a policy of pre-allocating a memory pool and reclaiming idle memory blocks in delayed batches is adopted, so that directories containing a large number of subdirectories and files can be traversed efficiently; a memory description structure for each open file handle is adopted, so that the application can perform file read-write operations efficiently; a pre-reading policy is adopted, so that the frequency of network interaction between the application server and the swift storage back end is effectively reduced and the read performance of the file system is improved; and a zero-copy, block-writing policy is adopted, so that no data copying or caching occurs during file writing, each write system call is a complete block write operation, and file writing efficiency is improved.
Owner:HUAZHONG UNIV OF SCI & TECH
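
As a rough illustration of two ideas mentioned in the abstract, the open-file-handle memory structure and the pre-reading policy, the following Python sketch caches file metadata and reads ahead in fixed-size chunks so that repeated application reads do not each become a round trip to the object storage back end. `ObjectStore`, `FileHandle`, and the 64 KB read-ahead size are assumptions made for the sketch, not structures or parameters from the patent.

```python
class ObjectStore:
    """Stand-in for a swift-style object storage back end."""
    def __init__(self, objects):
        self._objects = objects          # name -> bytes
        self.round_trips = 0             # counts simulated network requests

    def stat(self, name):
        self.round_trips += 1
        return {"size": len(self._objects[name])}

    def read_range(self, name, offset, length):
        self.round_trips += 1
        return self._objects[name][offset:offset + length]


class FileHandle:
    """Open-file handle that caches metadata and reads ahead in fixed chunks."""
    READ_AHEAD = 64 * 1024               # bytes fetched per back-end request

    def __init__(self, store, name, metadata_cache):
        self.store, self.name = store, name
        # Metadata is fetched once per file and kept in a shared cache.
        if name not in metadata_cache:
            metadata_cache[name] = store.stat(name)
        self.meta = metadata_cache[name]
        self._buf, self._buf_off = b"", 0

    def read(self, offset, length):
        end = min(offset + length, self.meta["size"])
        # Serve from the read-ahead buffer when possible.
        if not (self._buf_off <= offset and end <= self._buf_off + len(self._buf)):
            self._buf = self.store.read_range(self.name, offset, self.READ_AHEAD)
            self._buf_off = offset
        start = offset - self._buf_off
        return self._buf[start:start + (end - offset)]


store = ObjectStore({"a.txt": b"x" * 1000})
cache = {}
fh = FileHandle(store, "a.txt", cache)
for off in range(0, 1000, 100):          # ten application reads ...
    fh.read(off, 100)
print(store.round_trips)                 # ... only 2 back-end requests (stat + one read-ahead)
```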

Transactional file system

A transactional file system wherein multiple file system operations may be performed as part of a user-level transaction. An application specifies that the operations on a file, or the file system operations of a thread, should be handled as part of a transaction, and the application is given a file handle associated with a transaction context. For file system requests associated with a transaction context, a component within the file system manages the operations consistent with transactional behavior. The component, which may be a resource manager for distributed transactions, provides data isolation by maintaining multiple versions of a file, tracking copies of pages that have changed, such that transactional readers do not receive changes made by transactional writers until the writer commits the transaction and the reader reopens the file. The component also handles namespace logging operations in a multiple-level log that facilitates logging and recovery. Page data is logged separately from the main log, with a unique signature that enables the log to determine whether a page was fully flushed to disk prior to a system crash. Namespace isolation is provided until a transaction commits via isolation directories, whereby an uncommitted transaction sees the effects of its own operations, not the operations of other transactions. Transactions over a network are also facilitated via a redirector protocol.
Owner:MICROSOFT TECH LICENSING LLC
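
The multi-version isolation described above can be sketched in a few lines of Python, under heavy simplification (whole pages in memory, no logging, no distributed resource manager). The class names are invented for illustration: a writer stages private copies of changed pages, a reader that opened the file earlier keeps its snapshot, and the writer's changes only become visible to readers who reopen after the commit.

```python
class VersionedFile:
    """Keeps committed pages; writers stage private copies until commit."""
    def __init__(self, pages):
        self.committed = dict(pages)     # page number -> bytes

    def open_read(self):
        # A reader snapshots the committed pages at open time.
        return dict(self.committed)

    def open_write(self):
        return WriteTransaction(self)


class WriteTransaction:
    def __init__(self, vfile):
        self._vfile = vfile
        self._dirty = {}                 # only changed pages are copied

    def write_page(self, page_no, data):
        self._dirty[page_no] = data      # change is invisible to readers

    def commit(self):
        self._vfile.committed.update(self._dirty)
        self._dirty = {}


f = VersionedFile({0: b"old"})
reader_view = f.open_read()              # reader opens before the write commits

tx = f.open_write()
tx.write_page(0, b"new")
print(reader_view[0])                    # b'old'  -> writer's change is isolated
tx.commit()

print(f.open_read()[0])                  # b'new'  -> visible after reopening
print(reader_view[0])                    # b'old'  -> old handle still sees its snapshot
```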

Application programming interface for data transfer and bus management over a bus structure

In a first embodiment, an applications programming interface (API) implements and manages isochronous and asynchronous data transfer operations between an application and a bus structure. During an asynchronous transfer the API includes the ability to transfer any amount of data between one or more local data buffers within the application and a range of addresses over the bus structure using one or more asynchronous transactions. An automatic transaction generator may be used to automatically generate the transactions necessary to complete the data transfer. The API also includes the ability to transfer data between the application and another node on the bus structure isochronously over a dedicated channel. During an isochronous data transfer, a buffer management scheme is used to manage a linked list of data buffer descriptors. During isochronous transfer of data, the API provides implementation of a resynchronization event in the stream of data allowing for resynchronization by the application to a specific point within the data. Implementation is also provided for a callback routine for each buffer in the list which calls the application at a predetermined point during the transfer of data. An isochronous API of the preferred embodiment presents a virtual representation of a plug, using a plug handle, to the application. The isochronous API notifies a client application of any state changes on a connected plug through the event handle. The isochronous API also manages buffers utilized during a data operation by attaching and detaching the buffers to the connected plug, as appropriate, to manage the data flow.
Owner:SONY CORP +1
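
Loosely illustrating the plug-handle pattern described in the abstract, the Python sketch below hands the client an opaque handle for a plug, lets it attach buffers that each carry a per-buffer callback, and reports state changes through an event callback. `PlugManager`, `attach_buffer`, and the synchronous "bus" loop are assumptions made for the sketch; the real API manages an asynchronous bus and linked lists of buffer descriptors.

```python
class PlugManager:
    """Hands out opaque plug handles and drives buffer completion callbacks."""
    def __init__(self):
        self._plugs = {}
        self._next = 1

    def create_plug(self, on_event):
        handle = self._next
        self._next += 1
        self._plugs[handle] = {"buffers": [], "on_event": on_event, "state": "idle"}
        return handle                      # client keeps only this integer

    def attach_buffer(self, handle, data, on_done):
        self._plugs[handle]["buffers"].append((data, on_done))

    def start_transfer(self, handle):
        plug = self._plugs[handle]
        plug["state"] = "streaming"
        plug["on_event"](handle, "connected")
        # Simulate the bus consuming each attached buffer in order.
        for data, on_done in plug["buffers"]:
            on_done(handle, len(data))     # per-buffer callback at a known point
        plug["buffers"].clear()
        plug["on_event"](handle, "stopped")


mgr = PlugManager()
h = mgr.create_plug(lambda h, ev: print(f"plug {h}: {ev}"))
mgr.attach_buffer(h, b"\x00" * 512, lambda h, n: print(f"plug {h}: {n} bytes sent"))
mgr.attach_buffer(h, b"\x00" * 256, lambda h, n: print(f"plug {h}: {n} bytes sent"))
mgr.start_transfer(h)
```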

Plug-in management method, plug-in manager and set top box

The invention relates to a plug-in management method, a plug-in manager and a set top box. The method comprises the following steps: registering a plug-in module through the plug-in manager when the plug-in module is connected to the plug-in manager, and acquiring the description information of the plug-in module; receiving keyword information, sent by an application module, querying whether a corresponding plug-in module is registered; and returning a handle of the corresponding plug-in module to the application module when the keyword information matches the keyword part of the description information, so that the application module invokes the functions of the corresponding plug-in module through the handle. In the invention, by arranging plug-ins and the plug-in manager in the set top box, set top box software is modularized, the coupling between the functional modules of the set top box is reduced, the portability of the set top box software across different products is enhanced, and the software development cycle is shortened; and by dynamically loading plug-ins and dynamically selecting among plug-ins of the same kind, the system functions are expanded and the flexibility of the product is enhanced.
Owner:山东浪潮数字媒体科技有限公司
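
A minimal Python sketch of the register/query/invoke flow described above, with invented names (`PluginManager`, `EpgPlugin`): the manager registers a plug-in together with its description information, an application queries by keyword, and the returned handle is then used to invoke the plug-in's functions.

```python
class PluginManager:
    """Registers plug-in modules and resolves them to handles by keyword."""
    def __init__(self):
        self._by_handle = {}
        self._next = 1

    def register(self, plugin, description):
        handle = self._next
        self._next += 1
        self._by_handle[handle] = (plugin, description)
        return handle

    def query(self, keyword):
        # Return the handle whose description matches the keyword, if any.
        for handle, (_, desc) in self._by_handle.items():
            if keyword in desc.get("keywords", ()):
                return handle
        return None

    def invoke(self, handle, func_name, *args):
        plugin, _ = self._by_handle[handle]
        return getattr(plugin, func_name)(*args)


class EpgPlugin:                            # hypothetical plug-in module
    def show_guide(self, channel):
        return f"guide for channel {channel}"


mgr = PluginManager()
mgr.register(EpgPlugin(), {"keywords": ("epg", "guide"), "version": "1.0"})

h = mgr.query("epg")                        # application queries by keyword ...
if h is not None:
    print(mgr.invoke(h, "show_guide", 5))   # ... and invokes functions via the handle
```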

Distributed storage volume online migration method, system and device and readable storage medium

The invention discloses a distributed storage volume online migration method, system and device and a computer readable storage medium. The method comprises the steps of obtaining a metadata object storing the metadata of an original volume; creating a metadata context handle and a data context handle in the target storage pool corresponding to the migration instruction; storing the metadata object through the metadata context handle; storing an original volume information field from the metadata object into the target volume; and copying the data objects of the original volume into the target volume in sequence, with the ID of each copied data object recorded in the target volume. Because the metadata object of the original volume is stored in the target storage pool and the target volume, the original index relationship of data in the original volume is not damaged and the target volume can be accessed using the information of the original volume; meanwhile, the recorded data object IDs ensure that the data of the original volume can still be accessed during migration and that data already migrated to the target volume is accessed preferentially, so that transparent online migration of the storage volume is realized.
Owner:SUZHOU LANGCHAO INTELLIGENT TECH CO LTD
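
The read-routing idea during migration can be sketched as follows in Python; the metadata and data context handles are not modeled, and `Volume`, `migrate`, and `read` are invented names. The key point is that each copied data object's ID is recorded in the target volume, so reads prefer the target for objects already migrated and fall back to the original volume otherwise, keeping data accessible throughout the migration.

```python
class Volume:
    """Toy volume: a dict of data objects plus a record of copied object IDs."""
    def __init__(self, objects=None):
        self.objects = dict(objects or {})
        self.copied_ids = set()          # IDs already migrated into this volume


def migrate(original, target, on_read):
    """Copy objects in ID order; reads stay consistent throughout."""
    for obj_id in sorted(original.objects):
        target.objects[obj_id] = original.objects[obj_id]
        target.copied_ids.add(obj_id)    # recorded so reads prefer the target
        on_read(obj_id)                  # simulate a client read mid-migration


def read(original, target, obj_id):
    # Prefer data already migrated to the target; fall back to the original.
    if obj_id in target.copied_ids:
        return target.objects[obj_id]
    return original.objects[obj_id]


orig = Volume({1: b"a", 2: b"b", 3: b"c"})
tgt = Volume()

# During migration every object stays readable, whichever side currently holds it.
migrate(orig, tgt, on_read=lambda _: print([read(orig, tgt, i) for i in (1, 2, 3)]))
```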

Method and system for logging in Windows operating system

Status: Active | Publication: CN112287312A | Effects: overcomes the limitation of not being able to provide a login method in a non-domain environment | Topics: digital data authentication; operating system; password
The invention discloses a method and a system for logging in to a Windows operating system. The method comprises a binding process and a login process. The binding process comprises the steps of: enabling the binding tool to send key handle generation parameters, obtained according to the user name to be bound, to the authentication device; and correspondingly storing, in a predetermined file, a certificate public key returned by the authentication device, a key handle generated according to the key handle generation parameters, and the security descriptor. The login process comprises the following steps: enabling a credential providing device to receive login data, acquiring a security descriptor according to the user name, and retrieving the corresponding key handle and certificate public key from the predetermined file according to the security descriptor; sending the key handle and the to-be-signed data to the authentication device; receiving a signature value returned by the authentication device and generated according to the private key corresponding to the key handle and the to-be-signed data; and performing signature verification on the signature value using the certificate public key, and when the verification succeeds, forming the credential information required for logging in to the system from the user name and the password.
Owner:FEITIAN TECHNOLOGIES
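
A highly simplified Python sketch of the binding and login flows described above. Everything here is an assumption made for illustration: HMAC over a per-handle secret stands in for the authentication device's private-key signature, the security descriptor is just a hash of the user name, and all function names are invented. The structure mirrors the abstract: binding stores the public key, key handle, and security descriptor in a file keyed by the security descriptor; login retrieves them, asks the device to sign a challenge using the key handle, and verifies the signature before forming the logon credential.

```python
import hmac, hashlib, os, secrets

# Stand-in "authentication device": HMAC over a per-handle secret plays the
# role of the real device's private-key signature in this sketch.
_DEVICE_KEYS = {}

def device_make_credential(generation_params):
    key_handle = secrets.token_hex(8)
    _DEVICE_KEYS[key_handle] = secrets.token_bytes(32)
    public_key = _DEVICE_KEYS[key_handle]        # symmetric stand-in for a public key
    return key_handle, public_key

def device_sign(key_handle, to_be_signed):
    return hmac.new(_DEVICE_KEYS[key_handle], to_be_signed, hashlib.sha256).digest()


# Binding: store (public key, key handle, security descriptor) for the user.
def bind(store, username):
    key_handle, public_key = device_make_credential({"user": username})
    sd = hashlib.sha256(username.encode()).hexdigest()   # stand-in security descriptor
    store[sd] = {"key_handle": key_handle, "public_key": public_key}
    return sd


# Login: look up the key handle, have the device sign a challenge, verify it.
def login(store, username, password):
    sd = hashlib.sha256(username.encode()).hexdigest()
    record = store.get(sd)
    if record is None:
        return None
    challenge = os.urandom(16)
    signature = device_sign(record["key_handle"], challenge)
    expected = hmac.new(record["public_key"], challenge, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return None
    return {"user": username, "password": password}      # credential for the OS logon


store = {}
bind(store, "alice")
print(login(store, "alice", "secret") is not None)   # True
print(login(store, "bob", "secret"))                  # None (never bound)
```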

Handle recognition system analysis load balancing method based on neural network

The invention discloses a Handle recognition system analysis load balancing method based on a neural network. The method comprises the steps of establishing an enterprise-server mapping table, recording time sequence data, training and generating a load utilization rate prediction model, predicting the task load, and updating the enterprise-server mapping table according to the prediction result. According to the invention, the enterprise-server mapping table is first established to accelerate task response and improve task processing efficiency; the time sequence data and a BP neural network are used to generate the load utilization rate prediction model and improve prediction accuracy; the task quantity is then predicted through an Elman neural network and input into the load utilization rate prediction model to estimate the load change of each server; and finally, a load utilization rate piecewise function is combined with a strategy of dynamically modifying the mapping table, so that the server cluster can cope well with Handle recognition system analysis tasks, the utilization rate and load balance degree of the server cluster are improved, and task execution time is shortened.
Owner:码客工场工业科技北京有限公司
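
The table-update step can be sketched in Python as below. The BP and Elman neural networks are replaced by trivial stub predictors (a moving average and a linear load estimate), and the threshold, names, and data are invented; the sketch only shows how predicted load drives dynamic modification of the enterprise-server mapping table.

```python
# Stub predictors: in the described method these would be a trained Elman
# network (task volume) and a BP network (load utilization); here they are
# simple placeholders so the table-update logic itself stays visible.
def predict_task_volume(enterprise, history):
    return sum(history[-3:]) / 3                     # naive moving average

def predict_utilization(server_load, extra_tasks):
    return min(1.0, server_load + 0.001 * extra_tasks)


def rebalance(mapping, server_load, history):
    """Reassign enterprises whose server is predicted to exceed the threshold."""
    THRESHOLD = 0.8
    for enterprise, server in list(mapping.items()):
        tasks = predict_task_volume(enterprise, history[enterprise])
        predicted = predict_utilization(server_load[server], tasks)
        if predicted > THRESHOLD:
            # Move the enterprise to the currently least-loaded server.
            target = min(server_load, key=server_load.get)
            mapping[enterprise] = target
            server_load[target] = predict_utilization(server_load[target], tasks)
    return mapping


mapping = {"ent-A": "s1", "ent-B": "s1", "ent-C": "s2"}
server_load = {"s1": 0.75, "s2": 0.30}
history = {"ent-A": [90, 120, 150], "ent-B": [10, 12, 9], "ent-C": [40, 45, 50]}

print(rebalance(mapping, server_load, history))   # ent-A moves to the lighter server
```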

Efficient erasure distributed storage writing method and system, medium and terminal

The invention provides an efficient erasure distributed storage writing method and system, a medium and a terminal. The method comprises the steps of: initiating first request information from a client to a metadata server, enabling the metadata server to create metadata corresponding to a data block at a data node according to the first request information, and feeding the corresponding block ID information back to the client; enabling the client to initiate, according to the block ID information, second request information to the data node end requesting to open a data block operation handle at the data node end; and writing the to-be-stored data into the data node through the data block operation handle, and closing the written data block. According to the method, a thread for managing the socket connection pool is established during initialization, so that maximum utilization of the socket connections can be ensured, the number of threads and connections on the server is reduced, thread context switching is reduced, CPU utilization is improved, network pressure is reduced, system overhead is reduced, and service throughput is improved.
Owner:重庆紫光华山智安科技有限公司
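
A Python sketch of the client-side write flow in the abstract, with invented class and method names and no real networking or erasure coding: the client asks the metadata server to create the block and return its ID, opens a data block operation handle at the data node, writes through that handle, and closes the block. The socket connection pool management mentioned later in the abstract is not modeled.

```python
class MetadataServer:
    """Creates metadata for a new data block and returns its block ID."""
    def __init__(self):
        self._next_block = 1
        self.blocks = {}                       # block_id -> placement metadata

    def create_block(self, size_hint):
        block_id = self._next_block
        self._next_block += 1
        self.blocks[block_id] = {"node": "node-1", "size_hint": size_hint}
        return block_id


class DataNode:
    """Hands out block operation handles; data is written through the handle."""
    def __init__(self):
        self._store = {}
        self._open = {}
        self._next_handle = 1

    def open_block(self, block_id):
        handle = self._next_handle
        self._next_handle += 1
        self._open[handle] = block_id
        return handle

    def write(self, handle, data):
        block_id = self._open[handle]
        self._store.setdefault(block_id, bytearray()).extend(data)

    def close(self, handle):
        self._open.pop(handle, None)


def client_write(meta, node, payload):
    block_id = meta.create_block(len(payload))     # 1st request: allocate block ID
    handle = node.open_block(block_id)             # 2nd request: open operation handle
    node.write(handle, payload)                    # write through the handle
    node.close(handle)                             # close the written block
    return block_id


meta, node = MetadataServer(), DataNode()
bid = client_write(meta, node, b"hello erasure world")
print(bid, bytes(node._store[bid]))                # peek at the stored block for the demo
```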

File system writing acceleration method based on on-chip bus control of application processor

Status: Active | Publication: CN111198843A | Effects: reduce write performance; reduce multiple copies | Topics: file system administration; energy efficient computing; coprocessor; file system
The invention discloses a file system writing acceleration method based on on-chip bus control of an application processor, which comprises the following steps: calling the open() function of the file system to create a file and obtain a file handle; calling the write() function of the file system to write sensor data into the file, wherein the file system writes data into memory according to a fixed file size, each single write is carried out at Page granularity, the file system only generates the Tags information of each Page during writing, and the driver layer copies the Page's Tags information to a buffer area of the controller; when the processor obtains the data writing start signal, starting to monitor data on the on-chip bus; enabling the coprocessor to obtain and temporarily store the on-chip bus Page, replace the Page data area with data from the peripheral data buffer area according to a fixed size, and send the replaced Page to memory; when the size of the remaining data to be written is less than or equal to 0, finishing the file write operation; and enabling the file system to call the close() function to close the file. The method accelerates the storage performance of the file system.
Owner:XI AN JIAOTONG UNIV
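
The page-level substitution step can be loosely sketched in Python. The on-chip bus, driver layer, and coprocessor are all simulated in-process and the names are invented; the sketch only shows the division of a write into fixed-size Pages, the file system producing Tags information per Page, and each Page's data area being filled directly from a peripheral data buffer before it reaches memory.

```python
PAGE_SIZE = 16                                     # tiny page for demonstration

def pages_of(total_size):
    """Yield (offset, length) page slices covering a file of total_size bytes."""
    off = 0
    while off < total_size:
        yield off, min(PAGE_SIZE, total_size - off)
        off += PAGE_SIZE


def accelerated_write(file_size, peripheral_buffer):
    """The file system only produces per-page Tags; the simulated coprocessor
    fills each page's data area directly from the peripheral buffer."""
    memory = bytearray()
    tags = []
    for offset, length in pages_of(file_size):
        tags.append({"offset": offset, "length": length})   # Tags info only
        data = peripheral_buffer[offset:offset + length]    # data area substituted
        memory.extend(data)                                 # replaced page to memory
    return bytes(memory), tags


sensor_data = bytes(range(50))
written, tags = accelerated_write(len(sensor_data), sensor_data)
print(written == sensor_data, len(tags))                    # True 4
```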

Method, device and system for remotely synchronizing data of distributed storage system

The invention provides a method, device and system for remotely synchronizing data in a distributed storage system. The method comprises the steps of: building a remote copy pair and writing far-end cluster information into the master volume metadata; during the initial full synchronization, obtaining the far-end cluster information, connecting to the far-end cluster, opening the slave volume, reading object data from the master volume, and writing the object data into the slave volume through the logical volume handle of the slave volume; and, when data is written into the master volume, increasing a pending value in the callback, calling the opened slave volume's data writing interface to perform the write, calling the master volume's callback and reducing the pending value after the write to the slave volume is finished, and finishing data synchronization when the pending value is reduced to 0. According to the invention, the data of the local cluster is synchronized to the far-end cluster by connecting to the far-end cluster and using a data double-writing mode, so that the reliability, continuity and high availability of user data are improved, and huge losses to a production system caused by unpreventable disasters are avoided.
Owner:SUZHOU LANGCHAO INTELLIGENT TECH CO LTD
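
The pending-counter double-write described above can be sketched in Python with threads standing in for asynchronous remote writes; `ReplicatedVolume` and its members are invented names. Each master-volume write increments the pending count and kicks off a slave write; the slave-completion callback decrements it, and synchronization is considered finished when the count returns to 0.

```python
import threading, time, random

class ReplicatedVolume:
    """Double-writes to a local master and a remote slave, tracking a pending count."""
    def __init__(self):
        self.master, self.slave = {}, {}
        self.pending = 0
        self._lock = threading.Lock()
        self.synced = threading.Event()

    def write(self, key, value):
        with self._lock:
            self.pending += 1                 # one more slave write in flight
            self.synced.clear()
        self.master[key] = value              # local (master volume) write
        threading.Thread(target=self._remote_write, args=(key, value)).start()

    def _remote_write(self, key, value):
        time.sleep(random.uniform(0.01, 0.05))   # simulated network latency
        self.slave[key] = value
        self._on_slave_done()

    def _on_slave_done(self):
        # Callback: the slave write finished, so the pending count drops.
        with self._lock:
            self.pending -= 1
            if self.pending == 0:
                self.synced.set()             # data synchronization is complete


vol = ReplicatedVolume()
for i in range(5):
    vol.write(f"obj-{i}", f"data-{i}")
vol.synced.wait()
print(vol.master == vol.slave, vol.pending)   # True 0
```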