44 results about "How to improve performance experience" patented technology

Rapid horizontal extending method for databases

Active | CN102930062A | Improve the speed of expansion | Easy to monitor | Special data processing applications | Data segment | Disk size
The invention discloses a rapid horizontal extension method for databases, which belongs to the field of data migration and storage. The method comprises the following steps: a monitoring and control system monitors the disk storage space of the existing database, and when the disk usage reaches a preset storage capacity threshold, it triggers a hardware storage expansion action so as to prepare the hardware environment for data migration; a routing protocol computes the modulo of the primary key ID to be inserted over the number N of current database instances and routes the row to be inserted to the specified database; and when the number of inserted rows exceeds a preset row-count threshold, the routing protocol triggers a database segmentation action to migrate the specified data into a new database, thereby completing the rapid horizontal extension of the database. The method can be applied to disk databases or in-memory databases, is simple and easy to use, and avoids the non-uniform hotspot distribution caused by traditional horizontal extension of databases; in addition, because each segmentation migrates only a small amount of data, the extension speed is improved. Overall, the architecture meets requirements such as high availability, high reliability, high speed and high efficiency.
Owner:NANJING FUJITSU NANDA SOFTWARE TECH
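The modulo routing and threshold-triggered segmentation described above can be sketched briefly. The Python below is an illustrative assumption rather than the patented implementation: `Database`, `ROW_THRESHOLD`, `route_insert` and the naive rebalance in `split_and_migrate` are invented names and simplifications (the patent emphasizes that each segmentation migrates only a small amount of data).

```python
# Hypothetical sketch of modulo-based routing with a threshold-triggered split.
ROW_THRESHOLD = 1_000_000  # preset row-count threshold per instance (assumed value)

class Database:
    """Stand-in for one database instance; a real system would issue SQL instead."""
    def __init__(self, name):
        self.name = name
        self.rows = {}

    def insert(self, primary_key, record):
        self.rows[primary_key] = record

databases = [Database("db0"), Database("db1")]  # N = 2 current instances

def route_insert(primary_key, record):
    """Route the row by primary_key mod N; trigger a split when the target is full."""
    n = len(databases)
    target = databases[primary_key % n]
    target.insert(primary_key, record)
    if len(target.rows) > ROW_THRESHOLD:
        split_and_migrate()

def split_and_migrate():
    """Add a new instance and move only the rows whose key now maps elsewhere
    (a naive rebalance used purely for illustration)."""
    databases.append(Database(f"db{len(databases)}"))
    n = len(databases)
    for i, db in enumerate(databases[:-1]):
        for key in [k for k in db.rows if k % n != i]:
            databases[key % n].insert(key, db.rows.pop(key))
```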

Primary and standby services switching method and device

The invention discloses a primary and standby service switching method and device. The method comprises the following steps: performing a concurrent stress performance test on each standby service, monitoring the usage of the server resources where the standby service is located, and counting the transactions per second (TPS) during the concurrent stress performance test; when it is determined that a primary/standby switch needs to be performed, selecting a standby service whose server resource usage is within a reasonable range as the new primary service; when the server resource usage of all the standby services is within the reasonable range, or within the same abnormal range, selecting the standby service with the highest TPS as the new primary service; and when the server resource usage of all the standby services exceeds the reasonable range, taking a high-priority standby service whose server resource usage exceeds the reasonable range as the new primary service, according to the preset priorities of the server resources. The method can automatically select a standby service with better performance as the primary service.
Owner:BEIJING JINGDONG SHANGKE INFORMATION TECH CO LTD +1
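The selection rule in the abstract maps onto a short sketch. The field names, the 0.80 usage bound, and the lower-number-wins priority ordering below are assumptions for illustration; only the decision order (healthy resource usage first, highest TPS among the healthy standbys, preset priority as the fallback) comes from the abstract.

```python
# Illustrative selection of a new primary service; all names and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class Standby:
    name: str
    cpu_usage: float   # fraction of server resources in use (assumed metric)
    tps: float         # transactions per second measured under the concurrent stress test
    priority: int      # preset server-resource priority, lower number = higher priority

REASONABLE_CPU = 0.80  # upper bound of the "reasonable range" (assumed value)

def choose_new_primary(standbys):
    """Prefer a standby with healthy resource usage and the highest TPS;
    fall back to the preset priority when every standby exceeds the range."""
    healthy = [s for s in standbys if s.cpu_usage <= REASONABLE_CPU]
    if healthy:
        return max(healthy, key=lambda s: s.tps)
    return min(standbys, key=lambda s: s.priority)

# Example: s2 wins on TPS while staying within the resource limit.
candidates = [Standby("s1", 0.55, 820.0, 2),
              Standby("s2", 0.60, 940.0, 3),
              Standby("s3", 0.92, 990.0, 1)]
print(choose_new_primary(candidates).name)  # -> s2
```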

Method for hierarchically caching read-write data in storage cluster

The invention discloses a method for hierarchically caching read-write data in a storage cluster, relates to the technical field of cloud computing, and is implemented on the basis of a back-end storage cluster, a first-level cache, a second-level cache, an API gateway, a log file system and an application program. The back-end storage cluster manages the original data, while the first-level cache stores the hot data and divides it into different pools; the second-level cache extracts the pool data into segments according to indexes and stores the segments; and the API gateway processes all requests in a unified manner. When the application program initiates a read request, the API gateway processes the request and publishes it to the second-level cache; the second-level cache searches for the relevant segments and locates them in the pool, or, if the relevant segments are not found, further initiates a segment-missing request to the first-level cache, and if the relevant information still cannot be found, the search continues in the back-end storage cluster. When the application program initiates a write request, the API gateway processes the request and writes it into the log file system, and the data is flushed into the back-end storage cluster after the transaction is completed. The invention can greatly reduce latency.
Owner:SHANDONG LANGCHAO YUNTOU INFORMATION TECH CO LTD
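The read path described above (second-level segment cache, then first-level pool cache, then the back-end cluster) can be sketched as follows; the class and method names are illustrative assumptions, not the patented interfaces.

```python
# Hedged sketch of the layered read path; every class here is a stand-in.
class BackendCluster:
    def __init__(self, data):
        self.data = data              # authoritative copy of all objects
    def read(self, key):
        return self.data.get(key)

class FirstLevelCache:
    def __init__(self, backend):
        self.pools = {}               # hot data grouped into pools: pool_id -> {key: value}
        self.backend = backend
    def read(self, pool_id, key):
        value = self.pools.get(pool_id, {}).get(key)
        if value is None:             # still missing: fall through to the storage cluster
            value = self.backend.read(key)
            if value is not None:
                self.pools.setdefault(pool_id, {})[key] = value
        return value

class SecondLevelCache:
    def __init__(self, first_level):
        self.segments = {}            # indexed segments extracted from the pools: key -> value
        self.first_level = first_level
    def read(self, pool_id, key):
        if key in self.segments:      # segment hit
            return self.segments[key]
        value = self.first_level.read(pool_id, key)   # segment miss -> ask the first-level cache
        if value is not None:
            self.segments[key] = value
        return value

class ApiGateway:
    """Single entry point that forwards read requests to the second-level cache."""
    def __init__(self, second_level):
        self.second_level = second_level
    def handle_read(self, pool_id, key):
        return self.second_level.read(pool_id, key)

# Usage: the first read falls through all layers, later reads hit the caches.
backend = BackendCluster({"obj1": b"payload"})
gateway = ApiGateway(SecondLevelCache(FirstLevelCache(backend)))
print(gateway.handle_read("pool-a", "obj1"))
```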

Accompaniment method for actively following music signals and related equipment

The embodiment of the invention discloses an accompaniment method that actively follows music signals, and related equipment. The accompaniment method comprises the following steps: acquiring first audio data of a performer during the performance of a target track; determining an observation sequence according to the first audio data and the music score corresponding to the target track, wherein the observation sequence maps each note in the first audio data to the moment at which it was performed; determining a prediction sequence according to the observation sequence and the music score, wherein the prediction sequence maps each note of the first audio data to its predicted performance moment; and, according to the observation sequence and the prediction sequence, predicting the performance moment of the next note the performer will play, and, according to the correspondence between the music score and the accompaniment audio, controlling the accompaniment audio data corresponding to that next note to be played at the predicted moment. The method can accurately predict the onset time of the performer's next note, automatically match the accompaniment to the performance as it is played, and improve the interaction between the accompaniment and the performance.
Owner:深圳市芒果未来科技有限公司
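A minimal sketch of the prediction step, assuming the observation sequence is a list of (score beat, performance time) pairs; the sliding-window tempo estimate used here is an illustrative assumption, since the abstract does not specify how the prediction sequence is computed.

```python
# Hypothetical next-note onset prediction from recent observations.
def predict_next_onset(observed, next_beat, window=4):
    """observed: list of (score_beat, performance_time_seconds) pairs in playing order.
    next_beat: score position (in beats) of the note the performer will play next."""
    recent = observed[-window:]
    if len(recent) < 2:
        return None  # not enough observations to estimate the local tempo yet
    # Estimate the local tempo as elapsed time divided by elapsed beats in the window.
    beat_span = recent[-1][0] - recent[0][0]
    time_span = recent[-1][1] - recent[0][1]
    seconds_per_beat = time_span / beat_span
    last_beat, last_time = recent[-1]
    return last_time + (next_beat - last_beat) * seconds_per_beat

# Quarter notes played at roughly 100 bpm (~0.6 s per beat):
observed = [(0, 0.00), (1, 0.61), (2, 1.19), (3, 1.80)]
print(predict_next_onset(observed, next_beat=4))  # ~2.40 s
```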

Heat dissipation device and intelligent glasses equipment

The invention discloses a heat dissipation device. The heat dissipation device comprises a connecting wire, a control radiator and a wearable radiator, wherein the connecting wire is provided with an electric signal wire and an airflow channel line; the control radiator is connected with the connecting wire and is provided with a control accommodating cavity and an air supply device; and the wearable radiator is connected with the connecting wire and provided with a wearable accommodating cavity, which communicates with the control accommodating cavity through the airflow channel line. The invention further discloses intelligent glasses equipment comprising the heat dissipation device. The surface shell temperatures of the wearing end and the operation end are kept within an acceptable range, so that the user has a good temperature-rise experience; at the same time, local overheating is avoided, giving the intelligent glasses a good performance experience. Function distribution is realized by arranging the control end and the wearing end separately, so that the heat generated at the wearing end is controlled and the airflow is introduced from the control end through the airflow channel line, avoiding the noise pollution that a fan at the wearing end would cause. The heat dissipation device is simple in structure, has a remarkable effect, and is suitable for wide popularization.
Owner:BEIJING XLOONG TECH CO LTD
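The abstract describes a mechanical device rather than an algorithm, so the sketch below is purely speculative: it only illustrates how an air supply device could keep a shell temperature within an acceptable range using a simple hysteresis loop. The thresholds and the `read_shell_temperature` / `set_air_supply` callables are hypothetical and not taken from the patent.

```python
# Speculative hysteresis control loop for the air supply device (illustration only).
import time

TEMP_HIGH = 40.0   # degrees Celsius: start of the unacceptable range (assumed)
TEMP_LOW = 36.0    # degrees Celsius: resume-normal threshold (assumed)

def regulate(read_shell_temperature, set_air_supply, poll_seconds=1.0):
    """Raise the air supply when the wearing-end shell gets hot, lower it once it cools."""
    boosted = False
    while True:
        temperature = read_shell_temperature()
        if temperature >= TEMP_HIGH and not boosted:
            set_air_supply(level=1.0)   # push airflow through the airflow channel line
            boosted = True
        elif temperature <= TEMP_LOW and boosted:
            set_air_supply(level=0.3)   # drop back to a quiet baseline
            boosted = False
        time.sleep(poll_seconds)
```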

A method for caching read and write data hierarchically in a storage cluster

The invention discloses a method for hierarchically caching read and write data in a storage cluster, and relates to the technical field of cloud computing. It is implemented on the basis of a back-end storage cluster, a first-level cache, a second-level cache, an API gateway, a log file system and an application program. The back-end storage cluster manages the original data. The first-level cache stores hot data and divides it into different pools. The second-level cache extracts the pool data into segments according to the index and saves them. The API gateway processes requests uniformly. When the application initiates a read request, the API gateway processes the request and publishes it to the second-level cache. The second-level cache searches for the relevant segment and locates it in the pool; or, if no relevant segment is found, it further initiates a missing-segment request to the first-level cache, and if the relevant information still cannot be found, the search continues in the back-end storage cluster. When the application initiates a write request, the API gateway processes the request and writes it into the log file system, which is then flushed into the back-end storage cluster when the transaction is completed. The invention can greatly reduce latency.
Owner:SHANDONG LANGCHAO YUNTOU INFORMATION TECH CO LTD
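To complement the read-path sketch shown earlier, the write path (the gateway records the write in a log file system first, then flushes it into the back-end storage cluster once the transaction completes) can be sketched as follows; the class names and the JSON-lines journal format are illustrative assumptions.

```python
# Hedged sketch of the journaled write path; names and file format are assumed.
import json

class JournaledGateway:
    def __init__(self, journal_path, backend):
        self.journal_path = journal_path
        self.backend = backend          # back-end storage cluster (dict-like stand-in here)
        self.pending = []               # writes journaled but not yet flushed

    def handle_write(self, key, value):
        """Record the write in the log file system before acknowledging it."""
        entry = {"key": key, "value": value}
        with open(self.journal_path, "a", encoding="utf-8") as journal:
            journal.write(json.dumps(entry) + "\n")
        self.pending.append(entry)

    def commit(self):
        """On transaction completion, flush the journaled writes into the back-end cluster."""
        for entry in self.pending:
            self.backend[entry["key"]] = entry["value"]
        self.pending.clear()

# Usage: journal two writes, then flush them on commit.
store = {}
gateway = JournaledGateway("/tmp/cache_journal.log", store)
gateway.handle_write("user:1", {"name": "alice"})
gateway.handle_write("user:2", {"name": "bob"})
gateway.commit()
print(store)
```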