462 results about "Reduce visits" patented technology

Secure data deduplication method and system applicable to a backup system

The invention discloses a secure data deduplication method applicable to a backup system. The method includes the following steps: a backup request submitted by a user is received; every file to be backed up is partitioned into multiple data blocks of different sizes; a hash algorithm computes the hash value F1 of each data block, and F1 serves as that block's encryption key; the hash algorithm is applied again to compute the hash value F2 of each F1, and F2 serves as the block's fingerprint for identifying duplicate blocks; to protect the block encryption keys, a classic encryption algorithm and the user's private key are used to encrypt each F1, yielding the ciphertext E(F1). All block fingerprints F2 and related metadata are packaged in sequence into fingerprint segments and transmitted to a storage server, and the SSL protocol is adopted in all communication. With this method, the deduplication rate is unchanged while the storage-security problems of data loss and tampering are solved.
Owner:HUAZHONG UNIV OF SCI & TECH
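The two-level hashing in the abstract above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the XOR key stream stands in for the unnamed "classic encryption algorithm", and all function names are assumptions.

```python
import hashlib

def chunk_fingerprints(blocks, user_key: bytes):
    """For each block: F1 = H(block) is the block's encryption key,
    F2 = H(F1) is the public fingerprint used to spot duplicates."""
    out = []
    for block in blocks:
        f1 = hashlib.sha256(block).digest()      # per-block key (from content)
        f2 = hashlib.sha256(f1).digest()         # dedup fingerprint
        # Stand-in for encrypting F1 under the user's private key:
        # XOR with a key stream derived from the user key (illustrative only).
        stream = hashlib.sha256(user_key + f2).digest()
        e_f1 = bytes(a ^ b for a, b in zip(f1, stream))
        out.append((f2, e_f1))
    return out
```

Because F2 depends only on the block's content, two users backing up the same block produce the same fingerprint, so the server can deduplicate without ever seeing the plaintext key F1.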

GPU (graphic processing unit) based parallel power flow calculation system and method for large-scale power system

Status: Active (CN103617150A). Effects: realize the loop iterative process; implement parallel build methods. Concepts: complex mathematical operations; main processing unit; arithmetic processing unit.
The invention relates to a GPU (graphics processing unit) based parallel power flow calculation system and method for a large-scale power system. The system comprises a symbolic Jacobian matrix forming and decomposing module, an initialization module, a power flow equation right-hand-side calculation module, a Jacobian matrix assignment module, an LU decomposition module, and a forward and backward substitution module. The symbolic Jacobian matrix forming and decomposing module is located on the host side, and the host side transmits calculation data to the device side; the right-hand-side calculation, Jacobian matrix assignment, LU decomposition, and forward and backward substitution modules are connected in sequence on the device side. The method includes: (1) transmitting all data needed by the calculation to the host side; (2) generating a symbolic Jacobian matrix and performing symbolic decomposition on it; (3) transmitting the decomposition result from the host side to the device side; (4) computing the power flow equation right-hand side; (5) assigning the Jacobian matrix; (6) performing LU decomposition; (7) performing forward and backward substitution.
Owner:STATE GRID CORP OF CHINA +1
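Steps (4)-(7) above form one iteration of a Newton-Raphson solve. A minimal sketch of that loop, on a toy 2-variable system rather than a real power flow, and with a direct 2x2 solve standing in for the LU decomposition and substitution steps:

```python
def newton_flow(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic Newton loop mirroring steps (4)-(7): build the right-hand
    side, assemble the Jacobian, then solve J * dx = rhs and update.
    jac(x) returns the 2x2 Jacobian as (a, b, c, d) row-major."""
    x = list(x0)
    for _ in range(max_iter):
        rhs = [-v for v in f(x)]          # step (4): right-hand side
        a, b, c, d = jac(x)               # step (5): Jacobian assignment
        det = a * d - b * c               # steps (6)-(7): direct solve
        dx0 = (rhs[0] * d - b * rhs[1]) / det
        dx1 = (a * rhs[1] - c * rhs[0]) / det
        x[0] += dx0
        x[1] += dx1
        if max(abs(dx0), abs(dx1)) < tol:
            break
    return x
```

In the patented system the expensive parts of this loop (steps 4-7) run as parallel kernels on the GPU, while only the one-time symbolic analysis stays on the host.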

Method for rebuilding data of magnetic disk array

The invention relates to a disk array data reconstruction method, which belongs to the field of computer data storage and solves the problem that existing disk array reconstruction methods take too much time and degrade the read/write performance and reliability of a storage system. The disk array of the invention is provided with a main control module, read/write processing modules and a reconstruction module, and the method comprises the steps of initialization, log bitmap update, log-based reconstruction and completion. The method guides the reconstruction process by monitoring the disk array's space usage in real time: when space that has never been accessed is reconstructed, it suffices to write zeros into the corresponding data blocks newly added to the replacement disk, which greatly reduces the physical disk accesses caused by reconstruction, increases reconstruction speed and shortens the response time of user accesses. The method changes neither the reconstruction procedure nor the data distribution of the disk array, can conveniently optimize various traditional disk array reconstruction methods, and is applicable to building storage systems with high performance, high availability and high reliability.
Owner:HUAZHONG UNIV OF SCI & TECH
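The core idea above can be sketched in a few lines, assuming a RAID-5-style array where a lost block is the XOR of the surviving blocks in its stripe. The bitmap and function names are illustrative, not the patent's data structures:

```python
def rebuild(accessed_bitmap, survivors):
    """Bitmap-guided rebuild sketch: survivors[i] is the list of surviving
    blocks in stripe i. Stripes never written (bitmap False) are restored
    as zero blocks with no reads at all; accessed stripes are rebuilt by
    XOR-ing the surviving members."""
    rebuilt = []
    for i, used in enumerate(accessed_bitmap):
        if not used:
            # Never-accessed stripe: just write zeros, no disk reads needed.
            rebuilt.append(bytes(len(survivors[i][0])))
        else:
            block = bytes(survivors[i][0])
            for peer in survivors[i][1:]:
                block = bytes(a ^ b for a, b in zip(block, peer))
            rebuilt.append(block)
    return rebuilt
```

The saving comes entirely from the `not used` branch: every stripe the bitmap marks as untouched skips all physical reads on the surviving disks.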

Configuration method applied to coarse-grained reconfigurable array

The invention discloses a configuration method for a coarse-grained reconfigurable array of a given scale, comprising a configuration definition scheme that takes data links as the basic description objects, a corresponding configuration generation scheme, and a corresponding configuration mapping scheme. In the configuration definition scheme, a program corresponds to multiple configurations, each configuration corresponds to one data link, and each data link consists of several reconfigurable cells (RCs) with data-dependence relations. Compared with the traditional scheme that takes RCs as the basic description objects, this scheme conceals the interconnection information among the RCs and provides a larger configuration-information compression space, which helps reduce both the total amount of configuration information and the configuration switching time. In addition, the configuration describing one data link consists of a route, a functional configuration and one or more data configurations; the data configurations share the route and functional configuration information, so switching a configuration requires one switch of the route and functional configuration followed by one or more switches of the data configuration.
Owner:SOUTHEAST UNIV
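The structure of one data-link configuration described above can be sketched as a small data type. The field names and the switch-counting rule are illustrative assumptions based only on the abstract:

```python
from dataclasses import dataclass, field

@dataclass
class DataLinkConfig:
    route: tuple                     # interconnect settings, shared by the link
    function: str                    # functional settings, shared by the link
    data_configs: list = field(default_factory=list)  # per-iteration data setups

def switches(cfg: DataLinkConfig) -> int:
    """Switching this configuration costs one route+function switch,
    then one switch per data configuration that reuses them."""
    return 1 + len(cfg.data_configs)
```

Because the route and functional information are stored once per data link rather than once per data configuration, the total configuration volume shrinks as the number of data configurations grows.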

Data buffer apparatus and network storage system using the same and buffer method

The invention provides a data buffering device, together with a network storage system and a buffering method that use it. The buffering device includes: a network interface; a memory connected with the network interface for storing the data requested by computing nodes; a computing-node interface protocol conversion module connected with the memory, which converts requests from the computing nodes into peripheral-device requests, submits them to a processing unit, and transfers data to and from the buffer storage media; a processing unit connected with the protocol conversion module, the network interface and the memory, which controls data transmission and determines the buffering relationship of the data; and buffer storage media connected with the protocol conversion module, which buffer data according to the relationship determined by the processing unit. The invention reduces the dependence of computing nodes on accessing storage nodes through the network, lowering the network load and the overall network load pressure on the system at a given time.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI

Data updating method and system based on database

The invention provides a database-based data updating method and system. When a data updating request containing a size of data to be deleted is received, the method judges whether a pre-deleted data size equal to or larger than the requested size is stored in local memory. If so, the requested size is deducted from the pre-deleted data size in local memory. Only when no pre-deleted data size is stored locally, or the stored pre-deleted data size is smaller than the requested size, does the method access the database: it acquires the predicted remaining data size from the database, determines a deletion amount, stores that amount as the new pre-deleted data size in local memory, and then deducts the requested size from it. The method reduces the number of database accesses and therefore the time spent waiting on locks, increasing the number of data updating requests the system can process per unit time.
Owner:ADVANCED NEW TECH CO LTD
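The pre-delete buffer described above can be sketched as follows. The class, its counters, and the fixed batch size are illustrative assumptions; the patent does not specify how the deletion amount is chosen:

```python
class PreDeleteCache:
    """Batch a larger delete from the database once, then serve many
    small delete requests from the locally reserved 'pre-deleted' size."""

    def __init__(self, db_remaining: int, batch: int):
        self.db_remaining = db_remaining  # rows still held in the database
        self.local = 0                    # locally reserved pre-deleted size
        self.batch = batch                # amount reserved per DB round trip
        self.db_calls = 0                 # database accesses actually made

    def delete(self, n: int) -> bool:
        if self.local < n:                # only now touch the database
            take = min(max(self.batch, n), self.db_remaining)
            if take < n:
                return False              # not enough data left to delete
            self.db_remaining -= take     # one DB access reserves a batch
            self.local += take
            self.db_calls += 1
        self.local -= n                   # served purely from local memory
        return True
```

Ten requests of size 1 against a batch of 10 cost a single database access instead of ten, which is exactly the lock-wait saving the abstract claims.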

Pixel caching method and system in the motion compensation process of a video decoder

A pixel caching method and system, used in the motion compensation of a video decoder in the field of video decoding, includes: step 1, reading motion compensation parameter information from the parameter memory; step 2, partitioning the reference-frame area needed by the current basic block of motion compensation; step 3, reading the pixel units not present in the internal memory from the external memory and storing them in the internal memory; step 4, reading the data of the reference-frame pixel units of the reference area from the internal memory in turn and sending them, together with the motion vector of the current basic block, to an external interpolator. The system includes: an overall control module, a motion compensation parameter reading module, a module computing the reference-frame area needed by motion compensation, a pixel-unit judging and replacing module, an interpolation data output control module, and a data and tag storage module. The invention reduces the bandwidth demand on the external memory and resolves the external-memory bandwidth bottleneck in video decoder design.
Owner:SHANGHAI JIAO TONG UNIV
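Steps 2-3 above amount to a cache keyed by pixel-unit coordinates: only the units missing on-chip cost an external read. A minimal sketch, with all names assumed and no replacement policy (the patent's judging and replacing module would evict units when the internal memory fills):

```python
class PixelCache:
    """Fetch only the reference-frame pixel units missing from on-chip
    memory; overlapping regions of neighbouring basic blocks hit the cache."""

    def __init__(self, fetch_external):
        self.store = {}                  # on-chip copy, keyed by unit coords
        self.fetch_external = fetch_external
        self.external_reads = 0          # external-memory bandwidth spent

    def get_region(self, units):
        out = {}
        for key in units:
            if key not in self.store:    # step 3: miss -> one external read
                self.store[key] = self.fetch_external(key)
                self.external_reads += 1
            out[key] = self.store[key]   # step 4: serve from internal memory
        return out
```

Since motion vectors of adjacent blocks usually point at overlapping reference areas, the second block's region is mostly already on-chip.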

Method for managing reconfigurable on-chip unified memory aiming at instructions

The invention discloses a method that uses a virtual memory mechanism to manage a reconfigurable on-chip unified memory for instructions. With this method, the parameters of the Cache part and the SPM (Scratch-Pad Memory) part of the reconfigurable unified memory can be adjusted dynamically while a program runs, adapting to the memory-architecture requirements of different execution phases. The method analyzes the memory access behavior of different program phases, obtains phase-change behavior diagrams of the instruction parts and abstracts them mathematically, derives the reconfigurable memory configuration for each phase, and uses integer nonlinear programming (INLP) with energy-consumption and performance objective functions to select the program instruction parts to optimize. The code segments that conflict severely and are accessed frequently in the Cache are mapped into the SPM part as far as possible through the virtual memory management mechanism, which reduces the external memory access energy caused by repeatedly refilling the Cache, reduces the extra energy consumed by the Cache's compare logic, and improves system performance.
Owner:SOUTHEAST UNIV
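The selection step above is an optimization over which code segments earn their SPM space. The patent solves it with integer nonlinear programming; as a rough illustration only, a greedy stand-in that ranks segments by energy saved per byte (all field names are assumptions):

```python
def select_for_spm(segments, spm_capacity):
    """Greedy stand-in for the INLP selection: take the code segments with
    the highest miss-energy saving per byte until the SPM part is full.
    Each segment is {'name': str, 'size': bytes, 'saving': energy units}."""
    ranked = sorted(segments, key=lambda s: s["saving"] / s["size"], reverse=True)
    chosen, used = [], 0
    for seg in ranked:
        if used + seg["size"] <= spm_capacity:
            chosen.append(seg["name"])   # map this segment into the SPM
            used += seg["size"]
    return chosen
```

Greedy knapsack is not optimal in general, which is presumably why the patent reaches for INLP; this sketch only conveys the shape of the trade-off.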