Query method based on distributed search engine, server and storage medium

A query-method technology based on a distributed search engine, applied in the computer field; it can solve the problems of low query efficiency and long response time and achieve the effects of load balancing and reduced response time.

Pending Publication Date: 2019-11-19
GUANGZHOU SHIYUAN ELECTRONICS CO LTD


Problems solved by technology

[0004] In order to solve the problems of low query efficiency and long response time of distributed search engines in the prior art, the embodiments of the present application provide a query method, a server, and a storage medium based on a distributed search engine.



Examples


Embodiment 1

[0177] Embodiment 1: The device 7 is a front-end server.

[0178] The receiving unit 301 is configured to receive a query request.

[0179] The processing unit 302 is configured to select a target server in the server cluster; wherein, multiple nodes are deployed in the target server.

[0180] The processing unit 302 is further configured to select a node among the multiple nodes deployed on the target server as the general node;

[0181] The sending unit 303 is configured to send the query request to the general node; wherein the query request is used to instruct the general node to send the query request to each server in the first server set for processing, and the first server set includes the servers in the server cluster other than the target server.

[0182] Optionally, the processing unit 302 selecting a target server in the server cluster includes:

[0183] randomly selecting a server in the server cluster as the target server; or

[0184] monitoring the load information of...
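The selection logic of paragraphs [0177]-[0184] can be pictured with a short sketch. The Python below is a minimal illustration only, assuming hypothetical `Server`/`Node` containers and a numeric load value produced by the monitoring step; the patent does not prescribe any particular data structures or load metric.

```python
import random

class Node:
    """Hypothetical stand-in for a node deployed on a server."""
    def __init__(self, name):
        self.name = name

class Server:
    """Hypothetical stand-in for one server of the cluster."""
    def __init__(self, name, nodes, load=0.0):
        self.name = name
        self.nodes = nodes      # nodes deployed on this server
        self.load = load        # load reported by the monitoring step ([0184])

def select_target_server(cluster, use_load_info=False):
    """[0182]-[0184]: pick the target server at random or by monitored load."""
    if use_load_info:
        return min(cluster, key=lambda s: s.load)   # least-loaded server
    return random.choice(cluster)

def select_general_node(target_server):
    """[0180]: pick one node deployed on the target server as the general node."""
    return random.choice(target_server.nodes)

def dispatch_query(cluster, query_request, use_load_info=True):
    """[0178]-[0181]: choose the target server and general node, and determine the
    first server set (all servers in the cluster other than the target server)."""
    target = select_target_server(cluster, use_load_info)
    general = select_general_node(target)
    first_server_set = [s for s in cluster if s is not target]
    # The sending unit 303 forwards query_request to the general node, which
    # then sends it to each server in first_server_set for processing.
    return general, first_server_set
```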

Embodiment 2

[0190] Embodiment 2: The device 7 is a target server.

[0191] The receiving unit 301 is configured to receive a query request.

[0192] The processing unit 302 is configured to determine a master node corresponding to each server in the first server set; wherein, the target server is located in a server cluster, and the target server is deployed with a master node and at least one slave node.

[0193] The sending unit 303 is configured to send the query request to the master node corresponding to each server in the first server set.

[0194] The processing unit 302 is further configured to combine the records queried from the master node and the records queried from at least one slave node deployed in the target server to obtain a first record set.

[0195] The receiving unit 301 is further configured to receive the second record set sent from the master node corresponding to each server in the first server set in response to the query request;

[0196] The processing unit 302 is f...
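As a rough picture of the flow in Embodiment 2, the sketch below treats a record as a `(doc_id, score)` pair and replaces each network call with a plain Python callable; these names and the final concatenation step are assumptions, since paragraph [0196] is truncated in this text.

```python
from typing import Callable, List, Tuple

Record = Tuple[str, float]                   # assumed record shape: (doc_id, score)
QueryFn = Callable[[str], List[Record]]      # stand-in for querying one node

def build_first_record_set(local_master: QueryFn, local_slaves: List[QueryFn],
                           query: str) -> List[Record]:
    """[0194]: merge the records queried from the target server's master node
    and from its slave node(s) into the first record set."""
    records = list(local_master(query))
    for slave in local_slaves:
        records.extend(slave(query))
    return records

def gather_second_record_sets(remote_masters: List[QueryFn],
                              query: str) -> List[List[Record]]:
    """[0192], [0193], [0195]: send the query to the master node of each server in
    the first server set and collect the second record set each one returns."""
    return [master(query) for master in remote_masters]

def process_on_target_server(local_master: QueryFn, local_slaves: List[QueryFn],
                             remote_masters: List[QueryFn],
                             query: str) -> List[Record]:
    first_set = build_first_record_set(local_master, local_slaves, query)
    second_sets = gather_second_record_sets(remote_masters, query)
    # [0196] is truncated in the source; this sketch simply concatenates the
    # record sets and leaves the final ranking/selection step open.
    return first_set + [record for s in second_sets for record in s]
```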

Embodiment 3

[0213] Embodiment 3: The device 7 is a server.

[0214] The receiving unit 301 is configured to receive a query request from the master node deployed in the target server.

[0215] The processing unit 302 is configured to combine the records queried on the master node with the records queried on at least one slave node associated with the master node to obtain a second record set.

[0216] The sending unit 303 is configured to send the second record set to the master node deployed in the target server.

[0217] Optionally, the processing unit 302 merging the records queried on the master node with the records queried on at least one slave node associated with the master node to obtain the second record set includes:

[0218] performing a union operation on the records queried on the master node and the records queried on the at least one slave node;

[0219] arranging the records after the union operation in ascending order of score; and

[0220] selecting a first preset number of ...
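A compact way to read paragraphs [0217]-[0220] is as a union, a sort, and a cut. The sketch below assumes a record is a `(doc_id, score)` pair and treats the union as deduplication by document identifier; both are assumptions, and the ascending sort follows the wording of [0219].

```python
from typing import Dict, List, Tuple

Record = Tuple[str, float]   # assumed record shape: (doc_id, score)

def build_second_record_set(master_records: List[Record],
                            slave_record_lists: List[List[Record]],
                            preset_count: int) -> List[Record]:
    # [0218] Union of the records queried on the master node and on the slave
    # node(s); here the union deduplicates by doc_id, keeping the first score seen.
    union: Dict[str, float] = {}
    for doc_id, score in master_records:
        union.setdefault(doc_id, score)
    for records in slave_record_lists:
        for doc_id, score in records:
            union.setdefault(doc_id, score)

    # [0219] Arrange the records of the union in ascending order of score.
    ordered = sorted(union.items(), key=lambda item: item[1])

    # [0220] Select the first preset number of records as the second record set.
    return ordered[:preset_count]
```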



Abstract

The invention discloses a query method based on a distributed search engine. The general node distributes the query request to the master node in each server, so that each master node can resend the query request to the slave nodes in the server where that master node is located. This solves the processing bottleneck caused in the prior art by the general node distributing the query request to all nodes in the server cluster, realizes load balancing on each node, and reduces the processing overhead on the general node, so that query efficiency is improved and the query response time is shortened.

Description

Technical field

[0001] The present application relates to the computer field, and in particular to a query method, server and storage medium based on a distributed search engine.

Background technique

[0002] Distributed search engines have the characteristics of high availability and fault tolerance. In the current SolrCloud architecture, the query process is as follows: a node in the SolrCloud cluster receives the query request; the node includes a shard and a replica; the node monitors the nodes of all servers under the SolrCloud cluster through ZooKeeper (a distributed coordination service), distributes the query request to the nodes of all servers for separate scoring, and obtains the query result based on the queried records.

[0003] According to the above query method, it can be seen that when the amount of data continues to increase in the future, the size of the entire SolrCloud cluster will also increase. When the number of servers in the SolrCloud cluster may be dozens or even...
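To make the scaling concern of paragraph [0003] concrete, the back-of-the-envelope sketch below compares how many requests the coordinating node issues under the prior-art fan-out with the two-level scheme of this application. The cluster sizes are illustrative assumptions, not figures from the patent.

```python
def prior_art_fanout(servers: int, nodes_per_server: int) -> int:
    """[0002]: the coordinating node distributes the query to every node of every server."""
    return servers * nodes_per_server

def two_level_fanout(servers: int, nodes_per_server: int) -> int:
    """This application: the general node contacts one master node per other server
    and queries only the slave nodes on its own server; each remote master then
    fans out to its own slaves locally."""
    return (servers - 1) + (nodes_per_server - 1)

if __name__ == "__main__":
    servers, nodes_per_server = 50, 4   # illustrative assumption
    print("prior-art fan-out:", prior_art_fanout(servers, nodes_per_server))   # 200
    print("two-level fan-out:", two_level_fanout(servers, nodes_per_server))   # 52
```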


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F16/2458; G06F9/50
CPC: G06F16/2471; G06F9/505; G06F9/5061
Inventor: 胡启明
Owner: GUANGZHOU SHIYUAN ELECTRONICS CO LTD