138 results about "Parallel programming" patented technology

Parallel programming is a programming technique wherein the execution flow of the application is broken up into pieces that will be done at the same time (concurrently) by multiple cores, processors, or computers for the sake of better performance.
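As a minimal illustration of that definition, the sketch below splits an embarrassingly parallel computation across worker processes with Python's multiprocessing module; the workload and names are chosen purely for illustration and are not taken from any of the patents listed here.

```python
from multiprocessing import Pool

def square(n):
    # A stand-in for any independent unit of work.
    return n * n

if __name__ == "__main__":
    numbers = range(1_000_000)
    # The pool divides the iterable into chunks and runs them
    # concurrently on multiple cores for better throughput.
    with Pool() as pool:
        results = pool.map(square, numbers, chunksize=10_000)
    print(sum(results))
```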

Method of reducing disturbs in non-volatile memory

In a non-volatile memory, the displacement current generated in non-selected word lines that results when the voltage levels on an array's bit lines are changed can result in disturbs. Techniques for reducing these currents are presented. In a first aspect, the number of cells being simultaneously programmed on a word line is reduced. In a non-volatile memory where an array of memory cells is composed of a number of units, and the units are combined into planes that share common word lines, the simultaneous programming of units within the same plane is avoided. Multiple units may be programmed in parallel, but these are arranged to be in separate planes. This is done by selecting the number of units to be programmed in parallel and their order such that all the units programmed together are from distinct planes, by comparing the units to be programmed to see if any are from the same plane, or a combination of these. In a second, complementary aspect, the rate at which the voltage levels on the bit lines are changed is adjustable. By monitoring the frequency of disturbs, or based upon the device's application, the rate at which the bit line drivers change the bit line voltage is adjusted. This can be implemented by setting the rate externally, or by the controller based upon device performance and the amount of data error being generated.
Owner:SANDISK TECH LLC
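The first aspect above is essentially a scheduling constraint: units programmed in the same parallel operation must come from distinct planes. The minimal Python sketch below makes that selection rule concrete; the (unit_id, plane_id) data layout and the function name are hypothetical and are not the patent's implementation.

```python
def select_parallel_batch(pending_units, max_parallel):
    """Pick up to max_parallel units such that no two share a plane.

    pending_units: iterable of (unit_id, plane_id) pairs (hypothetical layout).
    """
    batch, used_planes = [], set()
    for unit_id, plane_id in pending_units:
        if plane_id in used_planes:
            continue  # skip: programming two units in one plane risks disturbs
        batch.append(unit_id)
        used_planes.add(plane_id)
        if len(batch) == max_parallel:
            break
    return batch

# Example: four pending units spread over three planes.
pending = [("u0", 0), ("u1", 0), ("u2", 1), ("u3", 2)]
print(select_parallel_batch(pending, max_parallel=3))  # ['u0', 'u2', 'u3']
```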

Similar web page deduplication system based on a parallel programming model

The invention provides a similar web page deduplication system based on a parallel programming model, comprising a web page content pre-processing module, a web page eigenvector extraction module, a web page feature fingerprint calculation module, a web page fingerprint online deduplication module, a web page fingerprint distributed batch deduplication module, and a computing platform based on a specific distribution. For web pages obtained by web crawlers, the system performs unified conversion of text content encoding, standardization of document structure, removal of web page noise content, analysis and identification of thematic content, lexical segmentation of continuous text, and similar steps, thereby forming eigenvectors that represent the web pages. From these vectors, the corresponding algorithms derive web page fingerprints that capture the pages' characteristics. Under the massive data volumes of the Internet, the system accurately and quickly detects exact or near-duplicate web page content caused by site mirroring, web document reposting, and the like, and completes the corresponding deduplication work, thereby improving the storage efficiency of search engines and giving search engine users a better experience.
Owner:HUAZHONG UNIV OF SCI & TECH
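The abstract describes deriving feature vectors and fingerprints so that exact or near-duplicate pages can be detected at scale. As a rough illustration only, the sketch below uses SimHash, a common fingerprinting technique that is not necessarily the patented algorithm, over naive word tokens, and spreads the per-page fingerprinting across processes; all names, the tokenisation, and the toy pages are our own assumptions.

```python
import hashlib
from multiprocessing import Pool

def simhash(tokens, bits=64):
    # Classic SimHash: each token's hash votes +1/-1 on every bit position;
    # the fingerprint keeps the sign of the accumulated vote.
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def fingerprint(page_text):
    # Deliberately naive "feature extraction": lowercase word tokens.
    return simhash(page_text.lower().split())

def hamming(a, b):
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    pages = [
        "the quick brown fox jumps over the lazy dog near the river bank",
        "the quick brown fox jumped over the lazy dog near the river bank",
        "an entirely different page about search engine storage efficiency",
    ]
    with Pool() as pool:                     # fingerprint pages in parallel
        fps = pool.map(fingerprint, pages)
    # Near-duplicate pages yield fingerprints that differ in few bit positions.
    for i in (1, 2):
        print(f"page 0 vs page {i}: hamming distance {hamming(fps[0], fps[i])}")
```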

NUMA-aware thread and memory resource optimization method and system for high-performance computers

Active patent CN104375899A. Stated benefits: solves the problem of excessive memory management granularity; addresses fine-grained memory access requirements. Classifications: resource allocation; computer architecture; performance computing.
The invention discloses a NUMA-aware thread and memory resource optimization method and system for high-performance computers. The system comprises a runtime environment detection module that detects the hardware resources and the number of parallel processes on a compute node; a computation resource allocation and management module that allocates computation resources to the parallel processes and builds the mapping between the parallel processes and threads and the processor cores and physical memory; a parallel programming interface; and a thread binding module that provides the parallel programming interface, obtains each thread's binding position mask from the mapping relations, and binds the executing thread to the corresponding CPU core. The invention further discloses a NUMA-aware multi-thread memory manager and its multi-thread memory management method. The manager comprises a DSM memory management module and an SMP module memory pool, which respectively manage the SMP modules to which the MPI processes belong and the allocation and release of memory within a single SMP module. The design reduces the frequency of system calls for memory operations, improves memory management performance, reduces remote memory accesses by application programs, and improves application performance.
Owner:INST OF APPLIED PHYSICS & COMPUTATIONAL MATHEMATICS
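The thread binding module described above reduces, at its core, to deriving an affinity mask from a process-to-core mapping and pinning execution to those cores. Below is a minimal, Linux-only sketch using Python's os.sched_setaffinity; the RANK_TO_CORES mapping and the RANK environment variable are hypothetical stand-ins for what the invention's runtime detection and resource allocation modules would compute.

```python
import os

# Hypothetical mapping from parallel-process rank to the CPU cores of
# the NUMA node that also holds that process's memory.
RANK_TO_CORES = {
    0: {0, 1, 2, 3},     # NUMA node 0
    1: {4, 5, 6, 7},     # NUMA node 1
}

def bind_to_numa_cores(rank):
    """Pin the calling process to the cores mapped to this rank (Linux only)."""
    mask = RANK_TO_CORES[rank]
    os.sched_setaffinity(0, mask)    # pid 0 means "the calling process"
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    # RANK is a hypothetical variable, e.g. exported by an MPI launcher.
    rank = int(os.environ.get("RANK", "0"))
    print("bound to cores:", sorted(bind_to_numa_cores(rank)))
```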

User-description-based programming design method for embedded heterogeneous multi-core processors

The invention relates to a user-description-based programming design method for embedded heterogeneous multi-core processors. In the method, the user works through a configuration wizard in a graphical interface to describe the heterogeneous multi-core processor platform and the task, sets the parallel mode, creates and registers elementary tasks, generates a task relation graph (a directed acyclic graph, DAG), and statically assigns the elementary tasks to the heterogeneous multi-core processor; the processor platform characteristics, parallelism requirements, and task assignment are expressed in a configuration file (XML). After the configuration file has been analyzed for parallelism, the elementary tasks are embedded at the task-label positions of the heterogeneous multi-core framework code, a corresponding serial source program is constructed, and a serial compiler is invoked, finally generating executable code for the heterogeneous multi-core processor. The method effectively avoids parallel programming practices such as developing a parallel compiler on a general personal computer (PC) or high-performance computing platform, defining a parallel programming language, or porting a parallel library; it greatly reduces the difficulty of developing parallel programs on embedded heterogeneous multi-core processor platforms and achieves parallel programming based on user description and interactive parallelization guidance.
Owner:SHANGHAI UNIV
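To make the configuration-file step concrete, the sketch below parses a hypothetical XML description of a task DAG with static core assignments and lists the tasks in a valid dependency order; the element and attribute names are invented for illustration and are not the patent's actual format.

```python
import xml.etree.ElementTree as ET
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical configuration layout; the real format is defined by the tool.
CONFIG = """
<platform cores="4">
  <task id="read"   core="0"/>
  <task id="filter" core="1" depends="read"/>
  <task id="fft"    core="2" depends="read"/>
  <task id="merge"  core="3" depends="filter fft"/>
</platform>
"""

def load_schedule(xml_text):
    root = ET.fromstring(xml_text)
    assignment = {}          # task id -> assigned core
    graph = {}               # task id -> set of prerequisite task ids
    for task in root.findall("task"):
        tid = task.get("id")
        assignment[tid] = int(task.get("core"))
        graph[tid] = set(task.get("depends", "").split())
    # Topological order gives a valid construction/launch order for the DAG.
    order = TopologicalSorter(graph).static_order()
    return [(tid, assignment[tid]) for tid in order]

if __name__ == "__main__":
    for tid, core in load_schedule(CONFIG):
        print(f"task {tid!r} -> core {core}")
```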

Parallel programming method for data-intensive applications based on a multi-data-center architecture

The invention relates to a parallel programming method for data-intensive applications based on a multi-data-center architecture. The method comprises constructing the main node of the system architecture, constructing the sub-nodes of the system architecture, loading, execution, and so on. The method has the advantage that practitioners working with large-scale data-intensive scientific data do not need to be familiar with a multi-data-center parallel computation model, nor with the MapReduce and MPI (Message Passing Interface) parallel programming techniques used in high-performance computing; they simply configure several distributed clusters and load a MapReduce computing task onto them. The hardware and software configuration of the existing cluster systems does not need to be changed, and the architecture can quickly parallelize data-intensive applications based on the MapReduce programming model across multiple data centers. Relatively high parallelization efficiency is therefore achieved, and the capacity for processing large-scale distributed data-intensive scientific data is greatly improved.
Owner:CENT FOR EARTH OBSERVATION & DIGITAL EARTH CHINESE ACADEMY OF SCI
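As a rough illustration of the programming model involved, the sketch below runs a word-count-style MapReduce over per-partition data and merges the partial results in a final master-side reduce. The "data centres" are simulated here as local processes, which is an assumption made only for illustration; the described architecture would instead dispatch the MapReduce job to remote clusters that hold their own data.

```python
from collections import Counter
from multiprocessing import Pool

# Each list simulates the records held by one data centre (hypothetical data).
DATA_CENTRES = [
    ["sensor ok", "sensor fail", "sensor ok"],
    ["sensor ok", "link fail"],
]

def map_reduce_local(records):
    # Map: emit one count per word; local reduce: sum counts per word.
    return Counter(word for line in records for word in line.split())

if __name__ == "__main__":
    with Pool() as pool:
        partials = pool.map(map_reduce_local, DATA_CENTRES)
    # Global reduce at the master node: merge the per-centre counts.
    total = sum(partials, Counter())
    print(total.most_common())
```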