145 results about "Data contention" patented technology

In database management systems, block contention (or data contention) refers to multiple processes or instances competing for access to the same index or data block at the same time.
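
To make the term concrete, here is a minimal Python sketch of that situation: two threads repeatedly update the same row and back off whenever the other holds the write lock. The sqlite3 database, table name, and retry policy are illustrative assumptions, not part of any patent listed below.

```python
# Minimal illustration of data contention: two threads repeatedly update the
# same row, and each retries when the other holds the write lock.
# Table name, retry policy, and timings are illustrative assumptions.
import sqlite3
import threading
import time

DB = "contention_demo.db"

def setup():
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS counters (id INTEGER PRIMARY KEY, value INTEGER)")
    con.execute("INSERT OR REPLACE INTO counters (id, value) VALUES (1, 0)")
    con.commit()
    con.close()

def worker(increments):
    con = sqlite3.connect(DB, timeout=0.05)  # short lock timeout so contention is visible
    for _ in range(increments):
        while True:
            try:
                con.execute("BEGIN IMMEDIATE")  # take the write lock up front
                (v,) = con.execute("SELECT value FROM counters WHERE id = 1").fetchone()
                con.execute("UPDATE counters SET value = ? WHERE id = 1", (v + 1,))
                con.commit()
                break
            except sqlite3.OperationalError:    # "database is locked": the other thread won
                con.rollback()
                time.sleep(0.001)               # back off and retry
    con.close()

if __name__ == "__main__":
    setup()
    threads = [threading.Thread(target=worker, args=(100,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    con = sqlite3.connect(DB)
    print(con.execute("SELECT value FROM counters WHERE id = 1").fetchone())  # (200,)
```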

Ultra low-power data retention latch

An embodiment of an ultra low-power data retention latch circuit involves a slave latch SL that concurrently latches the same data that is loaded into a main circuit (such as a main latch ML) during normal operation. When the circuit enters a low power (data retention) mode, power (VCC) to the main latch ML is removed and the slave latch SL retains the most recent data (retained data SA, SA-). When power is restored to the main latch ML, the slave latch's retained data SA, SA- is quickly restored to the main latch ML through the Set and Reset inputs SAR, SAR- of the ML. This makes data restoration much quicker than in conventional arrangements, which require the output data path DATA- to be stabilized before power is re-applied to the main latch. Further, there is no need to wait for power to the ML to be stable before restoring data from the SL to the ML, which increases data restoration speed over conventional data retention latches. Using the retained data SA, SA- (as mirrored in SAR, SAR-) to control the Set and Reset inputs prevents data contention in the main latch ML. Moreover, compared to known arrangements, this one places minimal loading on the DATA, DATA- output paths (driving only N7, N8), and so does not compromise speed on the data path (DATAIN . . . DATA/DATA-) through the main latch during normal operation.
Owner:TEXAS INSTR INC
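
The abstract above describes circuit behavior; the short Python model below is only a loose behavioral sketch of the idea (a retained slave value restored to the main latch through set/reset-style inputs so that stale data never contends with the restored bit). The class and attribute names are assumptions for illustration, not the patented circuit.

```python
# Loose behavioral model (not the patented circuit): a slave latch mirrors the
# main latch during normal operation, retains the value while main power is
# off, and restores it through set/reset-style inputs on power-up.
class RetentionLatch:
    def __init__(self):
        self.main = None    # main latch state (lost when VCC is removed)
        self.slave = None   # slave latch state (retained in low-power mode)
        self.powered = True

    def write(self, bit):
        if not self.powered:
            raise RuntimeError("main latch is unpowered")
        self.main = bit
        self.slave = bit          # slave latches the same data concurrently

    def enter_retention(self):
        self.powered = False
        self.main = None          # removing VCC destroys the main latch state

    def exit_retention(self):
        self.powered = True
        # Restore via set/reset: drive the main latch directly from the
        # retained value, so stale data never contends with the restored bit.
        if self.slave == 1:
            self.main = 1         # "set"
        elif self.slave == 0:
            self.main = 0         # "reset"

latch = RetentionLatch()
latch.write(1)
latch.enter_retention()
latch.exit_retention()
assert latch.main == 1
```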

Heterogeneous database synchronization method and device, computer equipment and storage medium

Active · CN111382201A · Easy to replace and migrate · Achieve eventual consistency · Database updating · Database distribution/replication · Data transport · Data mining
Embodiments of the invention disclose a heterogeneous database synchronization method and apparatus, computer equipment and a storage medium. The method comprises the following steps: obtaining database incremental data; packaging the incremental data into target transmission data according to a preset custom packaging protocol; and transmitting the target transmission data to a heterogeneous database system, so that it is synchronized across the multiple databases in that system. By acquiring the incremental data, packaging it under the custom protocol, and transmitting it to the heterogeneous database system for synchronized replay across its databases, the embodiments effectively avoid data conflicts that may occur during transmission, achieve eventual consistency of the data, improve version iterability of the underlying database, and facilitate database replacement and migration.
Owner:GUANGZHOU BAIGUOYUAN INFORMATION TECH CO LTD
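
As a rough sketch of the packaging-and-fan-out step this abstract describes, the Python code below wraps one incremental change in a versioned envelope and replays it, in order, against several toy target stores. The envelope fields, JSON encoding, and in-memory targets are assumptions, not the patent's custom protocol.

```python
# Illustrative sketch of packaging incremental data into a versioned envelope
# and fanning it out to heterogeneous targets. The envelope fields, JSON
# encoding, and in-memory "databases" are assumptions, not the patented protocol.
import json
import time
from typing import Dict, List

PROTOCOL_VERSION = 1

def package(change: Dict) -> bytes:
    """Wrap one incremental change (e.g. from a binlog) in a transport envelope."""
    envelope = {
        "version": PROTOCOL_VERSION,
        "ts": time.time(),
        "table": change["table"],
        "op": change["op"],          # "insert" | "update" | "delete"
        "key": change["key"],
        "payload": change.get("payload", {}),
    }
    return json.dumps(envelope).encode("utf-8")

def apply_to(target: Dict[str, Dict], packet: bytes) -> None:
    """Replay one packaged change against a (toy) key-value target store."""
    msg = json.loads(packet)
    table = target.setdefault(msg["table"], {})
    if msg["op"] == "delete":
        table.pop(msg["key"], None)
    else:                            # insert/update are both upserts here
        table[msg["key"]] = msg["payload"]

# Fan the same packet out to several heterogeneous targets; applying changes
# in order on every target is what drives them toward eventual consistency.
targets: List[Dict[str, Dict]] = [{}, {}, {}]
packet = package({"table": "users", "op": "insert", "key": "42", "payload": {"name": "Ada"}})
for t in targets:
    apply_to(t, packet)
assert all(t["users"]["42"]["name"] == "Ada" for t in targets)
```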

Low density parity check (LDPC) decoder and decoding algorithm compatible with structured and unstructured codes

The invention provides a high-efficiency low density parity check (LDPC) decoder structure and a data conflict solution. The decoder adopts a universal serial processing mode, and the LDPC decoding algorithm and hardware architecture are specially optimized. The classical turbo decoding message passing (TDMP) algorithm cannot be applied directly to unstructured LDPC codes, such as the LDPC codes in digital video broadcasting - satellite - second generation (DVB-S2) and China Mobile Multimedia Broadcasting (CMMB); if it is adopted directly, data conflicts arise and the performance of the LDPC codes is degraded. The TDMP algorithm is therefore optimized so that it applies well to unstructured LDPC codes. Conventionally, the reading and writing of extrinsic information is completed in a single pass and requires a large memory; this is improved so that the memory space required by the decoder is effectively reduced. Within the processing unit, the recovery of the extrinsic information and the update of the prior and posterior information are also optimized. Moreover, to remain compatible with both structured and unstructured LDPC codes, the main decoding timing sequence is optimized. These optimization measures improve the hardware utilization efficiency of the decoder.
Owner:FUDAN UNIV
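
The data conflict mentioned here arises in layered (TDMP) decoding when rows processed back-to-back touch the same variable node before its updated message is written back. The Python sketch below illustrates only the general scheduling idea, greedily grouping rows whose variable-node sets are disjoint; the grouping rule and the toy parity-check rows are assumptions, not the patent's optimized algorithm.

```python
# Illustrative sketch: in layered (TDMP) decoding, rows scheduled in the same
# pipeline slot must not share a variable node, or one row would read a stale
# message (a data conflict). This greedy grouping reorders rows so that each
# group is conflict-free. The toy row supports are assumptions, not a real code.
from typing import List, Set

def conflict_free_groups(rows: List[Set[int]], slots_per_group: int) -> List[List[int]]:
    """Greedily pack row indices into groups whose variable-node sets are disjoint."""
    groups: List[List[int]] = []
    used: List[Set[int]] = []       # union of variable nodes touched by each group
    for r, support in enumerate(rows):
        placed = False
        for g, cols in zip(groups, used):
            if len(g) < slots_per_group and not (cols & support):
                g.append(r)
                cols |= support
                placed = True
                break
        if not placed:
            groups.append([r])
            used.append(set(support))
    return groups

# Toy parity-check rows: each set lists the variable nodes a row connects to.
rows = [{0, 1, 4}, {1, 2, 5}, {3, 6, 7}, {0, 2, 8}, {4, 5, 6}]
for i, group in enumerate(conflict_free_groups(rows, slots_per_group=2)):
    print(f"group {i}: rows {group}")  # rows within a group share no variable node
```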