
406 results about "Parallel process" patented technology

Parallel process is a phenomenon noted between therapist and supervisor, whereby the therapist recreates, or parallels, the client's problems by way of relating to the supervisor. The client's transference and the therapist's countertransference thus re-appear in the mirror of the therapist/supervisor relationship.

Extrinsically influenced near-optimal path apparatus and method

Status: Inactive · Patent: US6067572A · Tags: rapidly and automatically determine; rapidly and automatically designate; error prevention; frequency-division multiplex details; wavefront; operational system
A method and apparatus for dynamically providing a path through a network of nodes or granules may use a limited, advanced look at potential steps along a plurality of available paths. Given an initial position, at an initial node or granule within a network, and some destination node or granule in the network, all nodes or granules may be represented in a connected graph. An apparatus and method may evaluate current potential paths, or edges between nodes still considered to lie in potential paths, according to some cost or distance function associated therewith. In evaluating potential paths or edges, the apparatus and method may consider extrinsic data which influences the cost or distance function for a path or edge. Each next edge may lie ahead across the advancing "partial" wavefront, toward a new candidate node being considered for the path. With each advancement of the wavefront, one or more potential paths, previously considered, may be dropped from consideration. Thus, a "partial" wavefront, limited in size (number of nodes and connecting edges) continues to evaluate some number of the best paths "so far." The method deletes worst paths, backs out of cul-de-sacs, and penalizes turning around. The method and apparatus may be implemented to manage a computer network, a computer internetwork, parallel processors, parallel processes in a multi-processing operating system, a smart scissor for a drawing application, and other systems of nodes.
Owner: ORACLE INT CORP
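The limited "partial wavefront" the abstract describes corresponds to a bounded-frontier (beam) search: only a fixed number of the best partial paths survive each advancement, which makes the result near-optimal rather than exhaustive. A minimal sketch, in which the graph, the congestion table and the cost function are all hypothetical stand-ins for the patent's extrinsic data:

```python
import heapq

def beam_path(graph, start, goal, cost_fn, beam_width=3):
    """Advance a bounded 'partial wavefront' of candidate paths,
    keeping only the beam_width cheapest partial paths at each step."""
    frontier = [(0.0, [start])]
    while frontier:
        candidates = []
        for cost, path in frontier:
            node = path[-1]
            if node == goal:
                return cost, path        # cheapest surviving path so far
            for nbr in graph[node]:
                if nbr in path:          # never turn back onto the path
                    continue
                heapq.heappush(candidates,
                               (cost + cost_fn(node, nbr), path + [nbr]))
        # drop the worst paths: keep only the best few candidates
        frontier = heapq.nsmallest(beam_width, candidates)
    return None                          # every candidate was a cul-de-sac

# Toy network; cost_fn folds in "extrinsic" data (a congestion factor).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
congestion = {("A", "B"): 1.0, ("A", "C"): 2.0,
              ("B", "D"): 1.0, ("C", "D"): 1.0}
cost_fn = lambda u, v: congestion[(u, v)]
assert beam_path(graph, "A", "D", cost_fn) == (2.0, ["A", "B", "D"])
```

Because losing candidates are discarded at every step, memory stays bounded by the beam width regardless of network size, at the cost of occasionally missing the true optimum.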

Wastewater treatment system with membrane separators and provision for storm flow conditions

In a wastewater treatment system and process utilizing membrane bioreactors (MBRs), multiple parallel series of tanks or stages each include an MBR stage. Under normal flow volume into the system, influent passes through several parallel series of stages, or process lines, which might comprise, for example, an anoxic stage, an aeration stage and an MBR stage. From the MBR stages, a portion of the mixed liquor suspended solids (M.L.S.S.) is cycled through one or more thickening MBRs of similar process lines for further thickening, processing and digesting of the sludge, while the majority of the M.L.S.S. is recycled back into the main process lines. During peak flow conditions, such as storm conditions in a combined storm water/wastewater system, all of the series of stages, together with their thickening MBRs, are operated in parallel to accept the peak flow, which is more than twice the normal flow. M.L.S.S. is recycled from all MBR stages to the upstream end of each of the parallel process lines, mixing with influent wastewater, and the last one or several process lines no longer act to digest the sludge. A further advantage is that, with the thickened sludge held in the last process line of basins, which ordinarily act to digest the sludge, there is always sufficient biomass in the system to handle peak flow, the biomass being available if needed for a sudden heavy flow or an event that might bring a toxic condition into the main basins.
Owner: OVIVO INC
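The hydraulic benefit of switching the thickening lines into parallel service can be illustrated with a toy calculation; the line counts and flow figures below are hypothetical, not taken from the patent:

```python
def per_line_flow(total_flow: float, n_main: int, n_thickening: int,
                  storm: bool) -> float:
    # Under normal flow only the main process lines take influent; the
    # thickening lines digest sludge.  In storm mode every line,
    # including the thickening lines, runs in parallel.
    lines = n_main + n_thickening if storm else n_main
    return total_flow / lines

# Hypothetical plant: 4 main lines, 2 thickening lines.
normal = per_line_flow(10.0, 4, 2, storm=False)  # 2.5 units per line
storm = per_line_flow(25.0, 4, 2, storm=True)    # 2.5x total flow, 6 lines
# Per-line loading grows far less than total flow does.
assert storm / normal < 2.0
```

Even with total inflow more than doubled, pressing the thickening lines into parallel service keeps the per-line hydraulic loading well under twice its normal value.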

Method and system for updating software with smaller patch files

Rather than comparing an old file with a new file to generate a set of patching instructions, and then compressing the patching instructions to generate a compact patch file for transmission to a user, a patch file is generated in a single operation. A compressor is pre-initialized in accordance with the old version of the file (e.g. in an LZ77 compressor, the history window is pre-loaded with the file). The pre-initialized compressor then compresses the new file, producing a patch file from which the new file can be generated. At the user's computer, a parallel process is performed, with the user's copy of the old file being used to pre-initialize a decompressor to which the patch file is then input. The output of the decompressor is the new file. The patch files generated and used in these processes are of significantly reduced size when compared to the prior art. Variations between copies of the old file as installed on different computers are also addressed, so that a single patch file can be applied irrespective of such variations. By so doing, the need for a multi-version patch file to handle such installation differences is eliminated, further reducing the size of the patch file when compared with prior art techniques. Such variations are addressed by “normalizing” the old file prior to application of the patch file. A temporary copy of the old file is typically made, and locations within the file at which the data may be unpredictable due to idiosyncrasies of the file's installation are changed to known or predictable values.
Owner: MICROSOFT TECH LICENSING LLC
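The pre-initialized-compressor idea maps directly onto zlib's preset-dictionary support: seeding a compressor and decompressor with the old file plays the role of pre-loading the LZ77 history window. A minimal sketch with made-up file contents (the patent does not prescribe zlib specifically):

```python
import zlib

def make_patch(old: bytes, new: bytes) -> bytes:
    # Pre-initialize the compressor with the old file as a preset
    # dictionary, then compress the new file; the output is the patch.
    comp = zlib.compressobj(9, zdict=old)
    return comp.compress(new) + comp.flush()

def apply_patch(old: bytes, patch: bytes) -> bytes:
    # The parallel process on the user's machine: pre-initialize the
    # decompressor with the same old file, then decompress the patch.
    decomp = zlib.decompressobj(zdict=old)
    return decomp.decompress(patch) + decomp.flush()

old = (b"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do "
       b"eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut "
       b"enim ad minim veniam, quis nostrud exercitation ullamco laboris "
       b"nisi ut aliquip ex ea commodo consequat.")
new = old.replace(b"magna", b"parva")

patch = make_patch(old, new)
assert apply_patch(old, patch) == new
assert len(patch) < len(zlib.compress(new, 9))  # far smaller than a full file
```

Both sides must hold byte-identical copies of the old file, which is why the abstract's "normalizing" step, resetting installation-specific bytes to predictable values, matters before the patch is applied.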

Neural network accelerator for bit width partitioning and implementation method of neural network accelerator

The present invention provides a neural network accelerator with bit-width partitioning and an implementation method for the accelerator. The accelerator includes a plurality of computing and processing (CP) units with different bit widths, input buffers, weight buffers, output buffers, data shifters and an off-chip memory. Each CP unit obtains data from its input buffer and weight buffer, and processes in parallel the data of a neural network layer whose bit width matches that of the unit. The data shifters convert the bit width of data output by the current CP unit into the bit width of the next CP unit, and the off-chip memory stores data both before and after processing by the CP units. With this design, multiply-accumulate operations can be performed on multiple short-bit-width data at once, increasing DSP utilization; and because CP units of different bit widths perform the computation of each network layer in parallel, the computing throughput of the accelerator is improved.
Owner: TSINGHUA UNIV
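One well-known instance of the "multiply-accumulate on several short-bit-width data" trick is operand packing, where two narrow weights share a single wide multiplier such as an FPGA DSP slice. The sketch below is an illustrative assumption, not the patent's circuit: it uses unsigned operands and a field width chosen so neither product overflows its field.

```python
def packed_mac(a: int, w1: int, w2: int, field: int = 16):
    # Pack two narrow weights into one wide operand so a single wide
    # multiply yields both products at once.  Assumes unsigned values
    # and that each partial product fits in `field` bits.
    packed = (w1 << field) | w2
    wide = packed * a                 # one wide multiplication
    p2 = wide & ((1 << field) - 1)    # low field  -> a * w2
    p1 = wide >> field                # high field -> a * w1
    return p1, p2

# Two 8-bit weights multiplied by one activation in a single operation.
assert packed_mac(13, 7, 200) == (13 * 7, 13 * 200)
```

Real mixed-precision accelerators add guard bits and sign handling, but the underlying arithmetic identity, `(w1·2^f + w2)·a = (w1·a)·2^f + w2·a`, is exactly this.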

Cross-room database synchronization method and system

The invention discloses a cross-room database synchronization method for synchronizing a target database from a source database across machine rooms. The method comprises: step 100, monitoring changes of the source database at the source database end; step 110, extracting the change data in turn through a plurality of serially numbered parallel processes; step 120, merging and compressing the extracted data, then sending it through the parallel processes; step 130, at the target database end, receiving the data sent by the parallel processes, loading it in the order of the parallel process numbers to obtain the data to be synchronized, and updating the target database accordingly. By combining multi-threaded parallel processing with the semantics of SQL execution, this complete transmission scheme of data merging, compression and parallel processing alleviates the synchronization delay of cross-room, cross-region networks.
Owner: ALIBABA GRP HLDG LTD
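The numbered-process pipeline of steps 110 to 130 can be sketched as follows; the change-record format, worker count, and use of JSON plus zlib are illustrative assumptions, not the patent's actual wire format:

```python
import json
import zlib
from concurrent.futures import ThreadPoolExecutor

def extract_and_pack(seq: int, changes: list) -> tuple:
    # Step 110/120: each serially numbered worker merges its batch of
    # change records and compresses them before sending.
    payload = zlib.compress(json.dumps(changes).encode())
    return seq, payload

def apply_in_order(packets: list) -> list:
    # Step 130: the receiver re-orders packets by worker sequence number
    # so the target database replays changes in source order.
    out = []
    for seq, payload in sorted(packets):
        out.extend(json.loads(zlib.decompress(payload)))
    return out

# Hypothetical change batches, one per numbered worker.
batches = [[{"op": "insert", "id": i}] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as ex:
    packets = list(ex.map(extract_and_pack, range(4), batches))
packets.reverse()                       # simulate out-of-order arrival
applied = apply_in_order(packets)
assert [c["id"] for c in applied] == [0, 1, 2, 3]
```

The serial numbering is what lets extraction and transmission run fully in parallel while the replay at the target still honours the source's commit order.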