
206 results about "Encoding (memory)" patented technology

Memory has the ability to encode, store and recall information. Memories give an organism the capability to learn and adapt from previous experiences as well as build relationships. Encoding allows a perceived item of use or interest to be converted into a construct that can be stored within the brain and recalled later from short-term or long-term memory. Working memory stores information for immediate use or manipulation, a process aided by hooking new information onto items already archived in an individual's long-term memory.

Error correcting content addressable memory

Inactive · US7254748B1 · Maximize usable CAM space · Redundant data error correction · Memory systems · Soft error · Match line
A CAM and method for operating a CAM are presented. Copies of a CAM database are duplicated and placed in a first set of CAM locations and a second set of CAM locations. An error detector is used to identify false matches caused by soft errors within the entries producing those false matches. While the entries producing a match should have the same index location, errors might cause those match lines to have an offset. If so, the present CAM, through use of duplicative sets of CAM locations, will detect the offset and thereafter read the values in each index location that produces a match, along with the corresponding parity or error-detection encoding bit(s). If the parity or error-detection encoding bit(s) indicate an error in a particular entry, that error is located and the corresponding entry at the same index within the other, duplicative set of CAM locations is copied into the erroneous entry. Since duplicative copies are by design placed into the first and second sets of CAM locations, whatever value exists in the opposing entry can be written into the erroneous entry to correct errors in that search location. The first and second sets of CAM locations are configurable to be duplicative or distinct in content, allowing error detection and correction to be performed at multiple user-specified granularities. The error detection and correction during search is backward compatible with interim parity scrubbing and ECC scan, as well as with use of FNH bits set by a user or provider.
Owner:AVAGO TECH INT SALES PTE LTD
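The duplicated-bank correction scheme can be illustrated in software. The following is a minimal Python sketch, not the patented hardware: two banks hold identical copies, each entry carries a parity bit, and when the banks' match indices disagree, the entry whose parity fails is overwritten with its duplicate. All names (`DuplicatedCam`, `search`) are illustrative.

```python
def parity(value: int) -> int:
    """Even-parity bit over the bits of `value`."""
    return bin(value).count("1") & 1

class DuplicatedCam:
    """Two banks of (value, parity) entries, initialized as copies."""
    def __init__(self, entries):
        self.banks = [[(v, parity(v)) for v in entries],
                      [(v, parity(v)) for v in entries]]

    def search(self, key: int):
        """Return the matching index, repairing a single soft error."""
        hits = [next((i for i, (v, _) in enumerate(bank) if v == key), None)
                for bank in self.banks]
        if hits[0] == hits[1]:
            return hits[0]                       # banks agree: no error
        for b in (0, 1):
            idx = hits[1 - b]                    # index the other bank reported
            if idx is not None:
                v, p = self.banks[b][idx]
                if parity(v) != p:               # parity flags bank b's entry
                    # copy the duplicate over the corrupted entry
                    self.banks[b][idx] = self.banks[1 - b][idx]
        return hits[0] if hits[0] is not None else hits[1]

cam = DuplicatedCam([0b1010, 0b0110, 0b1111])
cam.banks[0][1] = (0b0111, cam.banks[0][1][1])   # inject a soft error
assert cam.search(0b0110) == 1                   # offset detected, entry repaired
assert cam.banks[0][1][0] == 0b0110              # bank 0 corrected from bank 1
```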

Apparatus and method for efficiently incorporating instruction set information with instruction addresses

The present invention provides an apparatus and method for storing instruction set information. The apparatus comprises a processing circuit for executing processing instructions from any of a plurality of instruction sets of processing instructions, each processing instruction being specified by an instruction address identifying that processing instruction's location in memory. A different number of instruction address bits needs to be specified in the instruction address for processing instructions in different instruction sets. The apparatus further comprises encoding logic for encoding an instruction address with an indication of the instruction set corresponding to that instruction, to generate an n-bit encoded instruction address. The encoding logic is arranged to perform the encoding by performing a computation equivalent to extending the specified instruction address bits to n bits by prepending a pattern of bits to the specified instruction address bits, the pattern prepended being dependent on the instruction set corresponding to that instruction. Preferably, the encoded instruction address is then compressed. This approach provides a particularly efficient technique for incorporating instruction set information with instruction addresses, and is useful in any implementation where it is desired to track such information, one example being the tracing mechanisms used to trace the activity of a processing circuit.
Owner:ARM LTD
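As a concrete illustration of the prepending scheme, here is a minimal Python sketch. The two instruction sets, their address widths and the prefix patterns are all invented for the example; the patent does not fix these values.

```python
N = 32  # assumed width of the encoded instruction address

# Hypothetical table: the number of address bits each set specifies, and
# the pattern prepended to extend the address to N bits. The patterns are
# prefix-free so the decoder can recover the set unambiguously.
INSTRUCTION_SETS = {
    "setA": {"addr_bits": 30, "prefix": 0b01},
    "setB": {"addr_bits": 31, "prefix": 0b1},
}

def encode(addr: int, iset: str) -> int:
    """Prepend the set's pattern to the specified address bits."""
    spec = INSTRUCTION_SETS[iset]
    assert addr < (1 << spec["addr_bits"]), "address wider than the set allows"
    return (spec["prefix"] << spec["addr_bits"]) | addr

def decode(encoded: int) -> tuple[int, str]:
    """Recover both the address and the instruction set from the prefix."""
    for name, spec in INSTRUCTION_SETS.items():
        if encoded >> spec["addr_bits"] == spec["prefix"]:
            return encoded & ((1 << spec["addr_bits"]) - 1), name
    raise ValueError("no instruction-set prefix matched")

assert decode(encode(0x1234, "setB")) == (0x1234, "setB")
```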

Chinese text abstract generation method based on sequence-to-sequence model

The invention discloses a Chinese text abstract generation method based on a sequence-to-sequence model. The method comprises the steps of: first segmenting the text into words, padding the text to a fixed length, and initializing the word vectors with Gaussian random values; encoding the text, inputting the encoded text into a bidirectional long short-term memory (LSTM) network, and taking the final output state as a pre-encoding; applying a convolutional neural network (CNN) to the word vectors with different window sizes and outputting the results as window word vectors; constructing an encoder as a bidirectional LSTM, taking the pre-encoding as its initialization parameters and the window word vectors from the previous step as its input; and constructing a decoder that generates text using a unidirectional LSTM combined with an attention mechanism. The method improves the traditional encoder of the sequence-to-sequence model so that the model obtains more of the original text's information in the encoding stage and finally decodes a better text abstract, and its use of finer-grained word vectors makes it better suited to Chinese text.
Owner:SOUTH CHINA UNIV OF TECH
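A compact PyTorch sketch of the described pipeline follows. The dimensions, the dot-product attention, and the use of a single window size (standing in for the patent's multiple windows) are assumptions for illustration; the patent does not publish its hyperparameters.

```python
import torch
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    def __init__(self, vocab: int, d: int = 128, window: int = 3):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)    # word vectors (Gaussian init)
        # Pre-encoding BiLSTM: its final states seed the encoder below.
        self.pre = nn.LSTM(d, d, bidirectional=True, batch_first=True)
        # CNN over word vectors producing "window word vectors".
        self.cnn = nn.Conv1d(d, 2 * d, window, padding=window // 2)
        self.enc = nn.LSTM(2 * d, d, bidirectional=True, batch_first=True)
        self.dec = nn.LSTM(d, 2 * d, batch_first=True)   # unidirectional
        self.out = nn.Linear(4 * d, vocab)

    def forward(self, src, tgt):
        x = self.emb(src)                                  # (B, T, d)
        _, (h0, c0) = self.pre(x)                          # pre-encoding
        win = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # (B, T, 2d)
        mem, _ = self.enc(win, (h0, c0))                   # encoder memory
        dec, _ = self.dec(self.emb(tgt))                   # (B, S, 2d)
        attn = torch.softmax(dec @ mem.transpose(1, 2), dim=-1)
        ctx = attn @ mem                                   # attention context
        return self.out(torch.cat([dec, ctx], dim=-1))     # (B, S, vocab)

model = Seq2SeqSummarizer(vocab=5000)
logits = model(torch.randint(0, 5000, (2, 20)), torch.randint(0, 5000, (2, 8)))
assert logits.shape == (2, 8, 5000)
```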

Entity and relationship joint extraction method based on reinforcement learning

The invention discloses a joint information extraction method. The method jointly extracts entity and relationship information from an input text and is composed of a joint extraction module and a reinforcement learning module. The joint extraction module adopts an end-to-end design and comprises a word embedding layer, an encoding layer, an entity identification layer and a joint information extraction layer. The word embedding layer combines the GloVe pre-trained word embedding library with word embedding representations at character granularity. The encoding layer encodes the input text using a bidirectional long short-term memory network. The entity identification layer and the joint information extraction layer decode using a unidirectional long short-term memory network. The reinforcement learning module removes noise from the data set; its policy network is composed of a convolutional neural network. The policy network undergoes a pre-training process and a re-training process: in pre-training, a pre-training data set is used for supervised training of the policy network; in re-training, the policy network is updated using rewards obtained from the joint extraction network, which is an unsupervised learning process.
Owner:SICHUAN UNIV
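The reward-driven retraining loop can be sketched compactly. The Python/PyTorch snippet below shows a REINFORCE-style update of an instance-selection policy; the feature dimension, the linear policy (the patent's policy network is a CNN), and the `train_and_score` callable are all placeholders, not the patent's procedure.

```python
import torch
import torch.nn as nn

# Placeholder policy; the patent's policy network is a CNN over the sentence.
policy = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def retrain_step(sentence_feats: torch.Tensor, train_and_score) -> None:
    """sentence_feats: (N, 64) per-sentence features (assumed precomputed).
    train_and_score: trains the joint extractor on the kept subset and
    returns a scalar reward, e.g. held-out F1 (an assumed interface)."""
    keep_prob = torch.sigmoid(policy(sentence_feats)).squeeze(-1)  # (N,)
    keep_prob = keep_prob.clamp(1e-6, 1 - 1e-6)
    keep = torch.bernoulli(keep_prob).detach()   # sample keep/drop actions
    reward = train_and_score(keep.bool())        # reward from the extractor
    log_prob = keep * keep_prob.log() + (1 - keep) * (1 - keep_prob).log()
    loss = -(reward * log_prob).sum()            # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
```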

Knowledge representation method based on combination of text embedding and structure embedding

The invention discloses a knowledge representation method based on the combination of text embedding and structure embedding. The method comprises the following steps: step 1, preprocessing the entity description text in a knowledge base and extracting subject terms from each entity description; step 2, encoding the subject terms into word vectors using fastText, so that each entity description is expressed as a multi-dimensional word vector; step 3, inputting the processed multi-dimensional word vectors into a bidirectional long short-term memory network with an attention mechanism (A-BiLSTM) or a long short-term memory network with an attention mechanism (A-LSTM) for encoding, processing the multi-dimensional word vector representing each entity into a one-dimensional vector, namely the text representation, and training an existing STransE model to obtain the structural representation of the entity; step 4, introducing a gating mechanism and proposing four methods for combining the text embedding and the structure embedding to obtain the final entity embedding matrix; and step 5, inputting the entity embedding matrix into the ConvKB, TransH, TransR, DistMult and HolE knowledge graph embedding models, improving the knowledge completion task.
Owner:TIANJIN UNIV
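One of the four combination variants might look like the following sketch: a learned elementwise gate interpolating between the text embedding and the structure embedding. The gate's parameterization here is an assumption; the patent describes four related variants without publishing code.

```python
import torch
import torch.nn as nn

class GatedCombine(nn.Module):
    """Interpolate text and structure embeddings with a learned gate."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)   # gate conditioned on both views

    def forward(self, e_text: torch.Tensor, e_struct: torch.Tensor):
        g = torch.sigmoid(self.gate(torch.cat([e_text, e_struct], dim=-1)))
        return g * e_text + (1 - g) * e_struct   # final entity embedding

combine = GatedCombine(dim=100)
e = combine(torch.randn(4, 100), torch.randn(4, 100))   # (entities, dim)
```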

Audio decoding system and method adapted to the Android stagefright multimedia framework

Active · CN102857833A · Audio Decoding Complexity No. · Improve portability · Selective content distribution · Multimedia framework · Ambient data
The invention discloses an audio decoding system and method adapted to the Android stagefright multimedia framework. The method comprises the following steps: saving the unpacking component passed in by the AwesomePlayer to complete registration of the audio decoders; acquiring the audio's media metadata and saving it locally; acquiring a context-environment data item and applying for a memory resource as the decoding output buffer; according to the context environment, opening and initializing the decoder among the registered audio decoders that matches the audio stream format, and applying for a memory resource as the decoding input buffer; reading audio-encoded data into the input buffer through the unpacking component and performing audio decoding; updating the sampling-rate field of the local media metadata with the sampling rate of the audio-encoded data; and, according to the local media metadata, calculating the timestamp of the decoded output data, saving the timestamp to the output buffer, and returning from the output buffer the raw audio data carrying the timestamp. The audio decoding system and method adapted to the Android stagefright multimedia framework can expand the audio formats supported by an Android system.
Owner:SHENZHEN JIACHUANG SOFTWARE CO LTD (深圳市佳创软件有限公司)
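The decode loop the abstract describes can be summarized as control flow. The Python sketch below uses hypothetical `extractor` and decoder interfaces purely to show the ordering of steps; the real implementation is C++ against the stagefright API.

```python
def decode_audio(extractor, decoders):
    """Yield (timestamp_us, pcm) pairs; every interface here is hypothetical."""
    meta = extractor.audio_metadata()          # acquire and cache media metadata
    dec = next(d for d in decoders if d.matches(meta["format"]))
    dec.open(meta)                             # open/initialize the matching decoder
    in_buf = bytearray()                       # decoding input buffer
    samples_done = 0
    while extractor.read_into(in_buf):         # encoded frames via the unpacker
        pcm, sample_rate = dec.decode(in_buf)  # decode one chunk to raw audio
        meta["sample_rate"] = sample_rate      # update local metadata's sampling rate
        timestamp_us = samples_done * 1_000_000 // sample_rate
        samples_done += len(pcm) // (meta["channels"] * 2)   # 16-bit PCM assumed
        yield timestamp_us, pcm                # raw audio tagged with its timestamp
```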

Automatic abstract generation method and device, electronic equipment and storage medium

The invention discloses an automatic abstract generation method and apparatus, an electronic device and a storage medium. The method comprises the steps of: computing, for the original text and the named entities in the original text, a first character vector and a second character vector of each single character based on two trained embedding-vector models, and splicing them to obtain the word vector of each single character; encoding and decoding the word vectors of the single characters through a trained Transformer encoding-decoding model to obtain the word vectors of multiple generated words, so that the feature-representation capacity of the word vectors of the generated words is enhanced, and classifying each generated word as a first-type or a second-type generated word; and processing the first-type and second-type generated words with a trained pointer network and a trained memory network respectively to obtain first-type and second-type output words, and composing a target abstract from a plurality of first-type output words and/or a plurality of second-type output words, so that the problem that a remote named entity cannot be generated is effectively solved.
Owner:HANGZHOU YUANCHUAN XINYE TECH CO LTD (杭州远传新业科技股份有限公司)
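Two of the steps lend themselves to a short sketch: splicing the two per-character vectors into one word vector, and routing each generated word to the pointer path or the memory path by type. All interfaces below are placeholders; the patent does not publish code.

```python
import torch

def splice(chars, embed_a, embed_b):
    """Concatenate the two trained models' character vectors.
    embed_a / embed_b: callables char -> 1-D tensor (placeholders)."""
    return torch.stack([torch.cat([embed_a(c), embed_b(c)]) for c in chars])

def route(generated, is_first_type, pointer_net, memory_net):
    """First-type words go through the pointer network (which can copy
    remote named entities from the source); second-type words go through
    the memory network. All three arguments are assumed callables/flags."""
    return [pointer_net(w) if first else memory_net(w)
            for w, first in zip(generated, is_first_type)]
```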