55 results about How to "Save cache resources" patented technology

An automatic retransmission request method, system and relay station in a relay network

An embodiment of the invention discloses an automatic retransmission request method for a relay network. The method comprises the following steps: when a relay station (RS) confirms that data from the transmitting side has been correctly received, the RS stores the data and forwards it to the receiving end; when the RS determines, from response information returned by the receiving end, that the data corresponding to that response has been correctly received, the RS returns an acknowledgement for the data to the transmitting side; and when the RS determines from the response information that data it has stored was not correctly received by the receiving end, the RS retransmits that data. The embodiment also discloses an automatic retransmission request system and a relay station for the relay network. The method, system and relay station avoid the large data-transmission delay of the first RS processing mechanism in the prior art and the heavy RS cache-resource occupation of the second RS processing mechanism in the prior art.
Owner:HUAWEI TECH CO LTD
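The RS behaviour described in the abstract can be sketched as a small state machine. This is an illustrative model only; all names and the callback interface are hypothetical, since the patent does not specify an implementation:

```python
class RelayStation:
    """Sketch of the relay ARQ behaviour: cache correctly received data,
    forward it downstream, then act on the receiver's feedback."""

    def __init__(self, downlink):
        self.downlink = downlink   # hypothetical callable sending to the receiving end
        self.cache = {}            # seq -> data awaiting the receiver's ACK

    def on_data_from_sender(self, seq, data, crc_ok):
        """Store and forward the data only if it arrived correctly."""
        if not crc_ok:
            return None            # the sender's own ARQ covers this hop
        self.cache[seq] = data     # cache stays occupied until the receiver confirms
        self.downlink(seq, data)
        return "forwarded"

    def on_feedback_from_receiver(self, seq, ack):
        """ACK: free the cache and acknowledge toward the sender.
        NACK: retransmit from the RS cache, sparing the original sender."""
        if ack:
            self.cache.pop(seq, None)
            return "ack_to_sender"
        self.downlink(seq, self.cache[seq])
        return "retransmitted"
```

The point of the scheme is visible in `on_feedback_from_receiver`: the cache entry is released as soon as the receiver acknowledges, so buffered data does not linger, yet retransmission never has to go back to the original transmitter.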

Method and device for queue management

The invention provides a method and a device for queue management. The method comprises: when a message enters an entity queue, determining, according to a queue validity identifier corresponding to the entity queue, a first logical queue indicated by a first logical-queue head pointer and a second logical queue indicated by a second logical-queue head pointer, the two logical queues sharing the same tail pointer; when the first logical queue dequeues, reading the scheduling parameter information corresponding to the first logical-queue head pointer; and when the second logical queue dequeues, reading a message descriptor according to the final scheduling result obtained from the scheduling parameter information. This satisfies the special requirement of a push-type multi-level scheduler, which must obtain the scheduling parameter information before it can schedule and select a queue, and the message descriptor is read from the message descriptor table only when the scheduler outputs the final scheduling result. Cache resources are therefore greatly saved.
Owner:XFUSION DIGITAL TECH CO LTD
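A minimal sketch of the two-logical-views-over-one-entity-queue idea, with hypothetical names (the patent describes pointers into hardware tables, not a Python class):

```python
class DualHeadQueue:
    """Sketch: one entity queue whose two logical views share a single tail.
    The first logical head hands out scheduling parameters; the second hands
    out the message descriptor only after the scheduler has committed."""

    def __init__(self):
        self.entries = []   # entity queue: (sched_params, descriptor) pairs
        self.h1 = 0         # first logical-queue head pointer
        self.h2 = 0         # second logical-queue head pointer

    def enqueue(self, sched_params, descriptor):
        self.entries.append((sched_params, descriptor))  # one shared tail

    def dequeue_params(self):
        """First logical dequeue: the scheduler reads parameters first."""
        params = self.entries[self.h1][0]
        self.h1 += 1
        return params

    def dequeue_descriptor(self):
        """Second logical dequeue: read the descriptor-table entry only
        once the final scheduling result has selected this queue."""
        desc = self.entries[self.h2][1]
        self.h2 += 1
        return desc
```

Because the second head lags behind the first, descriptors for entries the scheduler has not yet selected are never fetched, which is where the cache saving comes from.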

Performance pre-evaluation based client cache allocation method and system

The invention discloses a client cache allocation method based on performance pre-evaluation. The method comprises the following steps: first, counting the loads of the different data nodes in a parallel file system while collecting information such as the network speed and the disk read/write speed of the system; using the collected information to pre-evaluate the performance of different client cache allocation strategies; selecting, based on the pre-evaluation result, the client cache allocation strategy that yields the greatest performance; assigning different priorities to different write requests under the selected strategy; allocating client cache to the write requests with higher priority; and writing the requests with lower priority directly to disk. The method addresses the inefficiency of the client cache allocation strategies of existing parallel file systems and maximizes the performance improvement obtainable from limited client cache.
Owner:HUAZHONG UNIV OF SCI & TECH
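The priority-based allocation step can be sketched as follows. The interface is hypothetical; in particular, the pre-evaluation phase is reduced here to a fixed cache budget, whereas the patent derives it from measured node loads and network/disk speeds:

```python
def allocate_client_cache(write_requests, cache_slots):
    """Sketch: give the limited client cache to the highest-priority
    write requests; the rest bypass the cache and go straight to disk.
    write_requests is a list of (name, priority); higher priority wins."""
    ranked = sorted(write_requests, key=lambda r: r[1], reverse=True)
    cached = {name for name, _ in ranked[:cache_slots]}
    return {
        name: ("cache" if name in cached else "direct-to-disk")
        for name, _ in write_requests
    }
```

With a budget of two slots, the two highest-priority requests get the cache and the remainder are written through, mirroring the abstract's split between high- and low-priority writes.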

Video detection and processing method and device

The invention relates to a video detection and processing method comprising the following steps: calculating the overall correlation between images of adjacent fields to predict whether the current field is a film field; if so, combining the current field and the adjacent field into a frame; detecting comb-tooth artifacts in the combined frame pixel by pixel; if no comb-tooth artifact is detected, determining that the current local region follows a film pattern and using the combined frame as the restored video frame; if comb-tooth artifacts are detected, judging the current local region to be non-film, calculating an interpolated frame by a motion-adaptive or spatial interpolation method, and using the interpolated frame as the restored video frame. The method correctly detects the film regions and interlaced regions in mixed video and processes them with different de-interlacing techniques, so the detail of film regions is recovered and combing is avoided. Moreover, the film-pattern detection module and the de-interlacing module can share the same input, which reduces field caching and DDR bandwidth and saves hardware cost.
Owner:HAIER BEIJING IC DESIGN
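The per-pixel comb-tooth test can be sketched as below. The neighbour-difference criterion and the threshold of 32 are illustrative assumptions, not the patent's actual detector:

```python
def comb_artifact_score(frame):
    """Sketch: count comb ('tooth') pixels in a woven frame.
    A pixel looks comb-like when it differs from BOTH vertical neighbours
    in the same direction -- the signature of mis-woven interlaced fields.
    frame is a list of rows of grayscale values."""
    count = 0
    for y in range(1, len(frame) - 1):
        for x in range(len(frame[0])):
            above = frame[y - 1][x] - frame[y][x]
            below = frame[y + 1][x] - frame[y][x]
            # same sign and both differences large -> alternating-line comb
            if above * below > 0 and min(abs(above), abs(below)) > 32:
                count += 1
    return count

def restore_region(frame, interpolate):
    """No comb pixels: film pattern, keep the woven frame.
    Otherwise fall back to a (hypothetical) interpolation routine."""
    return frame if comb_artifact_score(frame) == 0 else interpolate(frame)
```

A static scene weaves cleanly (score 0), while motion between fields produces alternating bright/dark lines that trip the detector and route the region to interpolation.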

Method for reassembling time-slot data packets using a time-division-multiplexed cache

The invention discloses a method for reassembling time-slot data packets using a time-division-multiplexed cache. In the method, the cache resources are time-division multiplexed: the same data cache stores packets from different time slots at different times. Packets from different time slots of the input time-slot-multiplexed communication protocol data are stored in the same data cache, and when the packets are reassembled and output, the cache is read according to linked-list addresses so that the packets of a given time slot are output continuously. The cache addresses are managed dynamically: after data is written to an address in the cache, that address is marked as occupied; after the address is read out, it is marked as released and becomes an idle address; and an idle address must be allocated before data can be written into the cache. The scheme achieves reassembly of the data packets in a time-slot-multiplexed communication protocol, increases cache utilization, saves cache resources, and allows packets from more time slots to be reassembled within limited cache resources.
Owner:TOEC TECH
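The shared pool plus per-slot linked lists can be sketched like this (a software model with hypothetical names; the patent targets hardware buffers):

```python
class TDMCache:
    """Sketch of the time-division-multiplexed packet cache: one pool of
    addresses, a free list of idle addresses, and per-time-slot linked
    lists so each slot's packets can be read back out contiguously."""

    def __init__(self, size):
        self.mem = [None] * size
        self.next = [None] * size       # linked-list successor per address
        self.free = list(range(size))   # idle addresses
        self.head = {}                  # slot -> first address of its list
        self.tail = {}                  # slot -> last address of its list

    def write(self, slot, word):
        addr = self.free.pop(0)         # allocate an idle address first
        self.mem[addr] = word           # address is now occupied
        self.next[addr] = None
        if slot in self.tail:
            self.next[self.tail[slot]] = addr
        else:
            self.head[slot] = addr
        self.tail[slot] = addr

    def read_slot(self, slot):
        """Walk the slot's linked list, releasing each address as it is read."""
        out, addr = [], self.head.pop(slot, None)
        self.tail.pop(slot, None)
        while addr is not None:
            out.append(self.mem[addr])
            self.free.append(addr)      # released -> idle again
            addr = self.next[addr]
        return out
```

Interleaved writes from several time slots share one memory, yet `read_slot` still emits each slot's data contiguously, and every address returns to the free list the moment it is read.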

Variable-length message data processing method and scheduling device

The invention provides a variable-length message data processing method and a scheduling device, belonging to the field of high-speed data-communication transmission. The method comprises the following steps: receiving and storing a variable-length message; looking up a route according to the address-type information carried in the message to obtain output-port information; performing the corresponding port scheduling according to the scheduling policy of the scheduling device; and carrying out the data exchange. The switching device performs message caching, sorting, switching and copying according to the input port, the type and the destination port of the variable-length message; within a switching plane, the variable-length messages use shared-cache storage with virtual output queues, and the cache space is adjusted according to the number of variable ports. Ordered, non-blocking switching of unicast and multicast variable-length messages can thus be achieved according to the port configuration of the switching device.
Owner:ETOWNIP MICROELECTRONICS BEIJING CO LTD
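A toy model of the shared-cache, virtual-output-queue arrangement, with hypothetical names and a message-count budget standing in for the real byte-level shared cache:

```python
from collections import deque

class VOQSwitch:
    """Sketch: shared-buffer switch with virtual output queues (VOQs).
    Messages are queued per destination port so one congested output
    cannot block traffic to the others; multicast copies to several VOQs."""

    def __init__(self, num_ports, buffer_limit):
        self.voq = [deque() for _ in range(num_ports)]
        self.buffer_limit = buffer_limit   # shared-cache budget, in messages
        self.used = 0

    def ingress(self, message, dest_ports):
        """Route lookup has already mapped the message to dest_ports
        (one port for unicast, several for multicast)."""
        if self.used + len(dest_ports) > self.buffer_limit:
            return False                   # shared cache exhausted: reject
        for p in dest_ports:
            self.voq[p].append(message)    # copy operation for multicast
        self.used += len(dest_ports)
        return True

    def egress(self, port):
        """Per-port scheduling: serve the port's VOQ in arrival order."""
        if not self.voq[port]:
            return None
        self.used -= 1
        return self.voq[port].popleft()
```

FIFO order within each VOQ gives the "ordered" property per output, while the shared budget lets a busy port borrow buffer space that idle ports are not using.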

An image preprocessing device suitable for video coding

The invention relates to the technical field of image processing and provides an image preprocessing device suitable for video coding. The device performs macroblock scaling, image-layer overlay and thumbnail output of the source image online, and comprises a data-source management module, a two-dimensional scaling module, an image-layer overlay module, an output module and the like. The data-source management module pre-generates the source-data read instructions required for outputting the current target macroblock row and buffers the source data. The image-layer overlay module reads out, and pre-reads, the macroblock data of the corresponding overlay layer according to the coordinate position of each layer. The output module obtains image data of the corresponding size according to a fixed reduction factor, then outputs it to the video encoding module and writes it into off-chip memory. Through block partitioning and ping-pong storage of the data source, the device coordinates the pipelined processing of the modules, which not only reduces bandwidth consumption but also meets the real-time requirements of high-definition video coding.
Owner:Zhuhai Yizhi Electronic Technology Co., Ltd. (珠海亿智电子科技有限公司)
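The ping-pong storage that keeps the pipeline full can be sketched minimally (hypothetical names; the real device swaps hardware memory banks, not Python lists):

```python
class PingPongBuffer:
    """Sketch of ping-pong storage between two pipeline stages:
    while the consumer reads one bank, the producer fills the other,
    then the banks swap roles -- neither stage waits for the other."""

    def __init__(self):
        self.banks = [[], []]
        self.write_bank = 0                    # bank the producer fills

    def produce(self, block):
        self.banks[self.write_bank] = block

    def swap(self):
        """Swap roles once the producer's bank holds a complete block."""
        self.write_bank ^= 1

    def consume(self):
        return self.banks[self.write_bank ^ 1]  # read the opposite bank
```

Each `swap` hands a completed macroblock row to the downstream module while the upstream module immediately starts filling the freed bank, which is what lets the scaling, overlay and output modules run as a pipeline.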