190 results about How to "Improve reading speed" patented technology

Shared storage message queue-based implementation method for high availability of virtual machines

Active · CN104253860A · Monitor current status · Avoid downtime · Data switching networks · Message queue · Timestamp
The invention discloses a shared storage message queue-based implementation method for high availability of virtual machines, and relates to the field of cloud computing. The method comprises the following steps: a server program initializes a contiguous region on shared storage to serve as a logical volume; a plurality of contiguous sectors are allocated from the logical volume to serve as a sector pool for a client program; the client program acquires the configuration information of the sector pool and the numbers of the virtual machines for which high availability is enabled; the client program sends a heartbeat message and updates the timestamp of the virtual machine control block it maintains; the server program receives the heartbeat message and updates the timestamp of the virtual machine control block it maintains with the timestamp carried in the heartbeat message. The method avoids unnecessary downtime of the virtual machines; the heartbeat detection mechanism keeps system resource consumption low and performance high; and the service life of the shared storage disk is prolonged.
Owner:WUHAN OPENKER COMPUTING
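
As an illustration of the heartbeat mechanism described above, the following Python sketch shows a client writing a timestamped heartbeat record into its assigned sector of the shared logical volume, and the server reading it back to refresh its copy of the virtual machine control block. The record layout, sector size, and function names are assumptions for illustration, not the patent's actual format.

```python
import struct
import time

SECTOR_SIZE = 512                  # one heartbeat record per sector (assumption)
HB_RECORD = struct.Struct("<Id")   # vm_id + timestamp; hypothetical layout

def client_send_heartbeat(shared_dev, sector_index, vm_id):
    """Client side: write a heartbeat carrying the current timestamp into the
    sector assigned to this VM from the sector pool. `shared_dev` is an open
    file object on the shared storage volume (an assumption)."""
    record = HB_RECORD.pack(vm_id, time.time()).ljust(SECTOR_SIZE, b"\0")
    shared_dev.seek(sector_index * SECTOR_SIZE)
    shared_dev.write(record)
    shared_dev.flush()

def server_poll_heartbeats(shared_dev, sectors, control_blocks, timeout=15.0):
    """Server side: read each VM's heartbeat sector, copy the carried timestamp
    into the server-side VM control block, and report VMs whose heartbeat has
    gone stale (candidates for an HA restart)."""
    stale = []
    for vm_id, sector_index in sectors.items():
        shared_dev.seek(sector_index * SECTOR_SIZE)
        _, ts = HB_RECORD.unpack(shared_dev.read(HB_RECORD.size))
        control_blocks[vm_id] = ts          # update server-side control block
        if time.time() - ts > timeout:
            stale.append(vm_id)
    return stale
```

Exchanging only small, fixed-size heartbeat records over pre-allocated sectors, rather than scanning the volume, is consistent with the abstract's claims of low resource consumption and reduced disk wear.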

Rapid large-scale point-cloud data reading method based on memory pre-allocation and multi-point writing technology

The invention discloses a rapid large-scale point-cloud data reading method based on memory pre-allocation and multi-point writing technology, belongs to the technical field of point-cloud data file reading, and aims to solve the problem that existing large-scale point-cloud data files are read slowly. The method includes a memory pre-allocation process and a multi-point writing process: firstly, the number of points in a point-cloud data file is determined, the memory size occupied by all points is determined, and memory of the corresponding size is pre-allocated for the point-cloud data; secondly, the point-cloud data file is mapped into memory through a memory-mapped file mechanism, a thread pool containing a designated number of threads is built, each thread is made responsible for parsing part of the point-cloud data in the mapped memory, and the parsed results are written into the pre-allocated memory to realize multi-point writing. Test results indicate that the method increases the reading speed of point-cloud data files, particularly large-scale ones, by 220%-300%.
Owner:SOUTHWEST UNIV OF SCI & TECH
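
The pre-allocation and multi-point writing idea can be sketched as follows: size the output buffer from the file's point count, map the file into memory, and let each thread in a pool parse and write its own disjoint range of the pre-allocated buffer, so no locking is needed. The fixed 24-byte x/y/z record layout is an assumption for illustration; note also that CPython's GIL limits the parallel speedup of pure-Python parsing, whereas the patented method targets native code.

```python
import mmap
import os
import struct
from concurrent.futures import ThreadPoolExecutor

RECORD = struct.Struct("<3d")  # assumed layout: x, y, z as little-endian doubles

def read_point_cloud(path, num_threads=4):
    """Pre-allocate the output from the point count, memory-map the file, and
    let each thread parse and write its own disjoint range of the output
    (the 'multi-point writing')."""
    num_points = os.path.getsize(path) // RECORD.size
    points = [None] * num_points        # memory pre-allocation, sized up front
    with open(path, "rb") as f:
        mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        chunk = (num_points + num_threads - 1) // num_threads

        def parse_range(start, end):
            # Each worker writes only points[start:end]; ranges never overlap.
            for i in range(start, end):
                points[i] = RECORD.unpack_from(mapped, i * RECORD.size)

        with ThreadPoolExecutor(max_workers=num_threads) as pool:
            for t in range(num_threads):
                pool.submit(parse_range, t * chunk,
                            min((t + 1) * chunk, num_points))
        mapped.close()
    return points
```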

Semi-floating gate transistor with an inversion layer embedded in the drain region, and manufacturing method thereof

The invention provides a semi-floating gate transistor with an inversion layer embedded in the drain region, and a manufacturing method thereof. The transistor comprises a semiconductor substrate, a planar channel region, a source region, a drain region, a first insulating layer, a floating gate, a diffusion region, a second insulating layer, a control gate and metal lines. The planar channel region is located in an active region of the semiconductor substrate; the source region and the drain region are located on the two sides of the planar channel region; the first insulating layer, which contains a floating gate opening, is arranged on the surface of the drain region; the floating gate covers the floating gate opening and the first insulating layer; the diffusion region is arranged in the drain region below the floating gate opening; the second insulating layer covers the whole floating gate, parts of the source region and drain region surfaces, and the whole planar channel region; the control gate is located above the second insulating layer; and the metal lines lead out the transistor gate, source electrode, drain electrode and substrate. The transistor is characterized in that an embedded inversion layer, which tunnels between the transistor channel region and the heavily doped drain region, is embedded in the drain region below the control gate. By adding the embedded inversion layer, the doping concentration gradient between the embedded tunneling transistor channel and the drain region is optimized, the probability of band-to-band tunneling is increased, the reading and writing speed of the semi-floating gate transistor is improved, and the leakage current of the transistor is reduced.
Owner:SHANGHAI INTEGRATED CIRCUIT RES & DEV CENT +1

FPGA-based image scaling processing method and device

The invention discloses an FPGA-based image scaling processing method and device. The method comprises: obtaining original image data and inputting it into an FPGA internal cache at a preset input rate; reading the original image data from the internal cache at a reading rate matched to the preset input rate, and carrying out interpolation calculation on the read data with an interpolation algorithm to obtain image interpolation data; and obtaining the scaled image data from the image interpolation data. Because the original image data are cached in the FPGA's internal cache, no external storage device is needed, so costs are lowered. Data are read from the FPGA internal cache rather than from an external storage device, so the data reading speed is increased and the overall image scaling efficiency is improved. In addition, because the original image data are read from the internal cache at a rate matched to the preset input rate, reading keeps pace with input, which guarantees the reading speed and avoids the data loss that too slow a reading speed would cause. The scaling processing efficiency is therefore improved.
Owner:SHENZHEN AIXIESHENG TECH CO LTD
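
The abstract does not specify which interpolation algorithm is used; the sketch below assumes bilinear interpolation, a common choice for FPGA scalers, and operates on a whole in-memory image for clarity rather than the streaming line buffers an FPGA implementation would use.

```python
def scale_bilinear(src, out_w, out_h):
    """Bilinear scaling sketch (the interpolation algorithm is an assumption).
    src is a list of rows of grey values."""
    src_h, src_w = len(src), len(src[0])
    dst = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        fy = y * (src_h - 1) / max(out_h - 1, 1)      # source row coordinate
        y0, wy = int(fy), fy - int(fy)
        y1 = min(y0 + 1, src_h - 1)
        for x in range(out_w):
            fx = x * (src_w - 1) / max(out_w - 1, 1)  # source column coordinate
            x0, wx = int(fx), fx - int(fx)
            x1 = min(x0 + 1, src_w - 1)
            # Blend the four neighbouring source pixels by fractional distance.
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bottom = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            dst[y][x] = round(top * (1 - wy) + bottom * wy)
    return dst

# Example: upscale a 2x2 gradient to 4x4.
print(scale_bilinear([[0, 90], [90, 180]], 4, 4))
```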

Multipath temporary speed limit post answering device and method

The invention discloses a multipath temporary speed limit post answering device and method. The device comprises a radio frequency control module, a baseband processing module and a main control module. The radio frequency control module receives speed limit post signals and transmits radio-frequency carrier signals, and comprises a plurality of antennas and a plurality of radio frequency units, each antenna being connected to a corresponding radio frequency unit. In the baseband processing module, an AD/DA converter and an FPGA multipath processing unit are connected through a data bus; the baseband processing module is a transceiving module with digital signals at its core. The main control module schedules and controls the work of the whole system, and comprises a data receiving unit, an alarm unit, a control unit, a storage unit and a main control unit. The device can read and process multipath speed limit post data in real time and, while reducing system complexity and cost, solves the speed-limit confusion that arises when a common answering device cannot process multipath speed limit post data in time, thereby improving driving safety.
Owner:SOUTH CHINA UNIV OF TECH
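
A minimal software analogue of the multipath design: one reader per antenna/RF-unit pair feeds decoded speed-limit messages into a queue, and a main-control loop merges them in real time. The message format and the policy of enforcing the most restrictive limit are assumptions for illustration, not the patent's specification.

```python
import queue
import threading

def channel_reader(channel_id, messages, out_q):
    """One reader per antenna/RF-unit pair; `messages` stands in for the
    decoded output of that channel's radio frequency unit (an assumption)."""
    for limit_kmh in messages:
        out_q.put((channel_id, limit_kmh))
    out_q.put((channel_id, None))           # signal that this channel is done

def main_control(num_channels, out_q):
    """Main control unit: merge all channels in arrival order and always
    enforce the most restrictive limit seen so far (an assumed policy)."""
    done, current = 0, float("inf")
    while done < num_channels:
        channel_id, limit = out_q.get()
        if limit is None:
            done += 1
            continue
        current = min(current, limit)
        print(f"channel {channel_id}: post says {limit} km/h"
              f" -> enforce {current} km/h")

q = queue.Queue()
feeds = {0: [80, 60], 1: [45], 2: [70, 50]}   # simulated multipath post data
readers = [threading.Thread(target=channel_reader, args=(cid, msgs, q))
           for cid, msgs in feeds.items()]
for r in readers:
    r.start()
main_control(len(feeds), q)
for r in readers:
    r.join()
```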

Power line carrier communication method for oil field underground testing and adjusting equipment

The invention aims to solve problems such as the inability of oil field downhole testing and adjusting equipment to read downhole working parameters and data in real time, excessively complex operation, and low efficiency. It provides a power line carrier communication method for oil field downhole testing and adjusting equipment, whose purpose is to transmit the acquired working parameters of water wells and oil wells to the ground in digital form over a power line while the downhole testing and adjusting instrument operates in the well, to control the operation of the downhole instrument, and to meet the requirements for real-time monitoring, adjustment and control of water well and oil well working conditions in an oil field. The method is characterized in that power line carrier communication technology is combined with frequency shift keying (FSK) modulation and demodulation; the high-voltage direct-current power line (90 to 190 V) that connects the ground control instrument with the downhole testing and adjusting instrument serves as the transmission medium; and a dual-frequency FSK carrier modulation and demodulation scheme, in which two frequencies carry binary 1 and 0, realizes real-time communication between the ground control instrument and the downhole testing and adjusting instrument.
Owner:长春锐利科技有限公司
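
The dual-frequency FSK scheme can be demonstrated in a few lines: each bit is sent as a burst of one of two carrier frequencies, and the receiver correlates each bit slot against both carriers and keeps the stronger. The sample rate, baud rate, and the 1200/2400 Hz pair are illustrative choices (picked so each bit slot holds a whole number of carrier cycles), not the patent's parameters, and the non-coherent detector is a textbook method rather than the patent's circuit.

```python
import math

FS = 9600                    # sample rate in Hz (assumed)
BAUD = 300                   # bit rate (assumed)
F_ONE, F_ZERO = 1200, 2400   # two carrier frequencies for binary 1 / 0
SPB = FS // BAUD             # samples per bit slot

def fsk_modulate(bits):
    """Send each bit as a burst of the carrier assigned to its value."""
    samples = []
    for n, bit in enumerate(bits):
        f = F_ONE if bit else F_ZERO
        samples += [math.sin(2 * math.pi * f * (n * SPB + k) / FS)
                    for k in range(SPB)]
    return samples

def fsk_demodulate(samples):
    """Non-coherent detection: correlate each bit slot against both carriers
    and keep the stronger one."""
    bits = []
    for n in range(len(samples) // SPB):
        chunk = samples[n * SPB:(n + 1) * SPB]

        def energy(f):
            i = sum(s * math.cos(2 * math.pi * f * (n * SPB + k) / FS)
                    for k, s in enumerate(chunk))
            q = sum(s * math.sin(2 * math.pi * f * (n * SPB + k) / FS)
                    for k, s in enumerate(chunk))
            return i * i + q * q

        bits.append(1 if energy(F_ONE) > energy(F_ZERO) else 0)
    return bits

assert fsk_demodulate(fsk_modulate([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]
```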

Adaptive reading optimization method and system for mass data in a cloud storage environment

The invention provides an adaptive reading optimization method for mass data in a cloud storage environment. The method comprises the following steps: recording the data access log information of a local user program; periodically carrying out statistical analysis on the log information to obtain association relationships among data objects; according to a data access request of the user program, obtaining the set of data objects associated with the currently accessed data object and pre-reading that set into a local cache; and, for a data object access request from the user program, first reading from the local cache and, if the data object to be accessed is absent from the local cache, reading it from a remote storage node of the distributed file system. The method also comprises pre-reading the associated data object set from the remote storage node of the distributed file system into the local cache and updating the local cache. In this method, association relationships between data objects are established from the statistically analyzed access log, and the data objects associated with the currently read data object are pre-read into the local cache to improve the data reading speed.
Owner:COMP NETWORK INFORMATION CENT CHINESE ACADEMY OF SCI
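
A compact sketch of the read path described above: an access log is periodically analyzed into an association map, and every read serves from a bounded local LRU cache first, falls back to the remote storage node on a miss, and pre-reads the current object's associates. The adjacent-access association rule and the `fetch_remote` callback are stand-ins for the patent's statistical analysis and the distributed file system's remote read.

```python
from collections import OrderedDict, defaultdict

class PrefetchingReader:
    """Adaptive read-optimization sketch: log accesses, mine associations,
    and pre-read associated objects into a bounded local LRU cache."""

    def __init__(self, fetch_remote, capacity=128):
        self.fetch_remote = fetch_remote   # remote read from the storage node
        self.capacity = capacity
        self.cache = OrderedDict()         # local cache in LRU order
        self.assoc = defaultdict(set)      # object -> associated objects
        self.log = []                      # raw access log

    def analyze_log(self):
        """Periodic statistical pass; here, adjacent accesses are treated as
        associated (an assumed rule)."""
        for a, b in zip(self.log, self.log[1:]):
            if a != b:
                self.assoc[a].add(b)
        self.log.clear()

    def _put(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        while len(self.cache) > self.capacity:   # evict least recently used
            self.cache.popitem(last=False)

    def read(self, key):
        self.log.append(key)
        if key in self.cache:                    # hit: serve locally
            self.cache.move_to_end(key)
            value = self.cache[key]
        else:                                    # miss: go to the remote node
            value = self.fetch_remote(key)
            self._put(key, value)
        for related in self.assoc.get(key, ()):  # pre-read associated objects
            if related not in self.cache:
                self._put(related, self.fetch_remote(related))
        return value
```

For instance, after `analyze_log()` has seen `a` followed by `b`, a later `read("a")` also pulls `b` into the cache, so the subsequent `read("b")` is served locally.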