Control method for intelligent shopping guide robot system and computer readable storage medium
A robot system and control method technology in the field of robotics, addressing the problem that customers cannot readily find a shopping guide robot
Inactive Publication Date: 2019-01-18
深圳威琳懋生物科技有限公司 (Shenzhen Weilinmao Biotechnology Co., Ltd.)
AI-Extracted Technical Summary
Problems solved by technology
[0003] The main purpose of the present invention is to provide a control method for an intelligent shopping guide robot system, aiming to solve th...
Method used
In this embodiment, a first surveillance video is obtained from the multiple cameras in the shopping mall; all target feature information in it is identified by a face recognition algorithm; a neural network model constructed by the error back-propagation algorithm then uses that target feature information to update, in real time, the satisfaction evaluation results stored in the storefront satisfaction database; the system responds to the user's shopping guide input request, det...
Abstract
The invention discloses a control method of an intelligent shopping guide robot system and a computer-readable storage medium. The method comprises the following steps: obtaining a first surveillance video shot by a camera in a cash register area; extracting target feature information from the first surveillance video as sample input data by a face recognition algorithm, calculating with a satisfaction evaluation model based on facial expressions to obtain the satisfaction evaluation results of different target consumers, and updating the satisfaction evaluation results in a storefront satisfaction database; and, according to the satisfaction evaluation results stored in the storefront satisfaction database and the determined storefront information, recommending storefronts meeting a preset satisfaction level to the user through the shopping guide robot. An accurate and purposeful navigation function of the intelligent shopping guide robot system is thereby realized.
Application Domain
Customer communications; Biological neural network models +1
Technology Topic
Robotic systems; Navigation function +9
Examples
- Experimental program (1)
Example Embodiment
[0032] Hereinafter, exemplary embodiments of the present invention will be described in more detail with reference to the accompanying drawings. Although exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention can be implemented in various forms and should not be limited by the embodiments set forth herein. On the contrary, these embodiments are provided to enable a more thorough understanding of the present invention and to fully convey the scope of the present invention to those skilled in the art.
[0033] Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or directly or indirectly applied in other related technical fields, is likewise included in the scope of patent protection of the present invention.
[0034] Please refer to Figures 1 and 2. In a specific embodiment of the present invention, an intelligent shopping guide robot system is provided, which includes a plurality of cameras 1 arranged at the cashier area of each store in a shopping mall, a plurality of intelligent shopping guide robots 2 that can move freely inside the shopping mall, and a server 3. The server 3 is in communication connection with the multiple cameras 1 and the multiple smart shopping guide robots 2, respectively.
[0035] The server 3 may include a control circuit board unit 5, a power supply module 6, and the like. The control circuit board unit 5 includes a first communication module 51, a memory 52, a processor 53, and a computer program stored in the memory 52 and running on the processor 53. The control circuit board unit 5 is connected to the power supply module 6.
[0036] The smart shopping guide robot 2 includes a robot body 21 and a traveling drive device 22. The robot body 21 includes a central processing unit 211, a memory unit (not shown), a touch screen 212, a positioning module 213, a second communication module 214, and so on.
[0037] The server 3 may send a remote control instruction to the traveling drive device 22 through the first communication module 51 to control the traveling drive device 22 to move along a route.
[0038] The camera 1 collects the first surveillance video of the cash register area of each store in the shopping mall. Specifically, in this embodiment, each store in the shopping mall has one or more cameras that capture the cash register area and the consumption area in the store from all directions, and send the collected surveillance video to the processor 53 in the server 3 in real time.
[0039] Among them, in this embodiment, the surveillance video in the cash register area is the first surveillance video, and the surveillance video in the consumption area is the second surveillance video.
[0040] According to the acquired first surveillance video, the processor 53 of the server 3 recognizes the target feature information of the first surveillance video through a face recognition algorithm and stores the target feature information in the memory 52. The target feature information is the dynamic facial information of the different target consumers located in the cash register area of the store in the first surveillance video.
[0041] According to the target feature information, the processor 53 further obtains the current satisfaction evaluation result corresponding to the first surveillance video through a neural network constructed according to the error back-propagation algorithm. The processor 53 reads the store's existing satisfaction evaluation result from the satisfaction database, takes the average of the two results as the final satisfaction evaluation result, and stores the final result in the satisfaction database, replacing the store's existing satisfaction evaluation result.
[0042] It is understandable that, in other embodiments, the satisfaction evaluation result calculated from the first surveillance video can also directly replace the store's existing satisfaction evaluation result in the satisfaction database; or a weighted calculation can be performed on the current satisfaction evaluation result calculated from the first surveillance video and the store's existing satisfaction evaluation result in the satisfaction database to obtain the final satisfaction evaluation result.
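The three update strategies described above (simple averaging, direct replacement, and weighted combination) can be sketched as follows; the function name, the 0-to-1 satisfaction scale, and the strategy labels are illustrative assumptions, not part of the patent:

```python
def update_satisfaction(existing: float, current: float,
                        strategy: str = "average", weight: float = 0.5) -> float:
    """Combine a store's existing satisfaction score with the score
    computed from the latest first surveillance video.

    Strategies (hypothetical naming):
      - "average":  mean of the two results (the primary embodiment)
      - "replace":  discard the existing result entirely
      - "weighted": weighted combination, `weight` applied to the new result
    """
    if strategy == "average":
        return (existing + current) / 2.0
    if strategy == "replace":
        return current
    if strategy == "weighted":
        return weight * current + (1.0 - weight) * existing
    raise ValueError(f"unknown strategy: {strategy}")
```

For example, a store with a stored score of 0.8 and a newly computed score of 0.6 would be updated to 0.7 under the averaging strategy.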
[0043] The user can input demand instructions through the touch screen 212 of the smart shopping guide robot 2.
[0044] The smart shopping guide robot 2 communicates through the cooperation of the second communication module 214 and the first communication module 51 of the server 3. The smart shopping guide robot 2 can send instructions and transmit data to the server 3, where the instructions include query instructions, routing instructions, etc., and the data may include location data, image data, and the like.
[0045] The smart shopping guide robot 2 can also locate the real-time position of the smart shopping guide robot 2 through the positioning module 213 and send the current position information to the server 3.
[0046] When the server 3 receives the routing instruction, the processor 53 generates a travel route according to the destination information in the routing instruction and the received current location information detected by the positioning module 213.
[0047] The server 3 may send the travel route to the smart shopping guide robot 2 through the first communication module 51, so that the traveling drive device 22 of the smart shopping guide robot 2 moves along the received travel route and thereby reaches the destination.
[0048] It is understandable that the server 3 may also send the travel route to the touch screen 212 of the smart shopping guide robot 2 for display.
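The route generation described in the preceding paragraphs can be sketched as a simple search over a floor plan. The grid map and the breadth-first search are illustrative assumptions, since the patent does not specify a particular path-planning algorithm:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over a mall floor plan modelled as a grid.
    `grid[r][c]` is True for walkable cells. Returns the list of cells
    from `start` to `goal`, or None if the goal is unreachable.
    (Illustrative sketch only; any shortest-path planner would do.)"""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as parent pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

The server would hold the grid for the mall, take `start` from the positioning module 213 and `goal` from the routing instruction, and send the resulting cell sequence to the traveling drive device.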
[0049] When the touch display screen 212 of the smart shopping guide robot 2 is triggered by a query instruction, the touch display screen 212 displays selectable commodity categories. The commodity categories comprise a first-level catalog and a second-level catalog. The first-level catalog includes clothing, food, daily necessities, and so on; each first-level catalog item expands into a second-level catalog, which is a refinement of the first-level catalog. For example, the second-level catalog under the first-level item "clothing" includes characteristic words such as men's clothing, women's clothing, and sportswear; likewise, the second-level catalog under "food" includes hot pot, Hunan cuisine, Cantonese cuisine, Japanese food, French cuisine, and so on. After the user selects one or more items in the second-level catalog of the commodity categories, the smart shopping guide robot 2 sends the selected items to the server 3 through the second communication module 214.
[0050] After the server 3 receives the user-selected second-level catalog items sent by the smart shopping guide robot 2, the processor 53 obtains the satisfaction evaluation results of all stores in the satisfaction database that match the second-level catalog, and pushes the storefronts meeting the preset satisfaction level to the smart shopping guide robot 2 for display on its touch screen 212, for example the three stores with the highest evaluation results or the three with the most evaluations.
[0051] The server 3 can also obtain the second surveillance video of all stores matching the second-level catalog and count the number of consumers in each store's consumption area through the face recognition algorithm and the image recognition algorithm. When the number of consumers reaches the store's maximum preset number of people, the store is determined to be full, and a warning message that the store is full is sent to the intelligent shopping guide robot 2. Further, the server 3 can also determine, based on the warning information, whether any of the storefronts meeting the preset satisfaction level recommended by the smart shopping guide robot 2 is the full storefront; when the recommended storefronts include the full storefront, the warning message is sent, together with the information of the full storefront, to the touch display screen 212 of the smart shopping guide robot 2 for display, completing the query.
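The full-store determination above reduces to comparing a per-store head count against the merchant-provided capacity. A minimal sketch, assuming the counts come from the face/image recognition step and using illustrative dictionary layouts:

```python
def check_capacity(store_counts: dict, capacity: dict) -> list:
    """Return the names of stores whose detected consumer count has
    reached the merchant-provided maximum preset number of people.

    `store_counts` maps store name -> consumers detected in the second
    surveillance video; `capacity` maps store name -> maximum preset
    number. Both layouts are illustrative placeholders. Stores with no
    declared capacity are never reported as full.
    """
    return [store for store, n in store_counts.items()
            if n >= capacity.get(store, float("inf"))]
```

The server would send a full-store warning to the robot for every name this returns, and flag any of them that also appear in the current recommendation list.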
[0052] In this way, the intelligent shopping guide robot system solves the technical problem of quickly and accurately finding specific storefronts for customers in need.
[0053] In this embodiment, as shown in Figure 3, the embodiment of the present invention provides a control method of an intelligent shopping guide robot system. The specific process is as follows:
[0054] Step S1: Obtain a first surveillance video taken by a camera set in the cashier area of the store in the shopping mall;
[0055] In step S1, the processor 53 of the server 3 issues a receiving instruction in real time according to a timer and controls the camera 1 to transmit the first surveillance video captured within the effective time. The server 3 synchronously retrieves the specific information of each storefront and matches it with the first surveillance video sent by that storefront's camera. The specific information of each storefront refers to the information submitted by the storefront seller, including the storefront business type, storefront name, and so on. After the camera 1 starts sending, the memory 52 saves the received video data as the first surveillance video.
[0056] Furthermore, each store has one or more cameras, which are distributed in the cash register area and the consumption area.
[0057] Step S2: According to the first surveillance video, extract the target feature information in the first surveillance video through a face recognition algorithm, wherein the target feature information is the dynamic facial information of the different target consumers located in the cash register area in the first surveillance video.
[0058] In step S2, the processor 53 of the server 3 reads the first surveillance video stored in the memory 52, filters out the faces of all uniformed waiters according to the face recognition algorithm and the image recognition algorithm, recognizes the faces of all consumers, intercepts all facial dynamic information of the consumers' faces in the cash register area, records it as target feature information, and stores it in the memory 52.
[0059] Step S3: Using the target feature information as sample input data, calculate with the satisfaction evaluation model based on facial expressions to obtain the satisfaction evaluation results of the different target consumers, wherein the satisfaction evaluation model based on facial expressions is a neural network model constructed according to the error back-propagation algorithm.
[0060] In step S3, the processor 53 of the server 3 invokes the facial expression satisfaction evaluation model for evaluation and calculation. First, the sample input data, namely the target feature information stored in the memory 52, is read; the neural network model then derives from it the algorithm input data required by the error back-propagation algorithm, and finally applies the error back-propagation algorithm to obtain the satisfaction results for the different facial information.
[0061] Further, the algorithm input data are the specific facial-dynamics quantities analyzed by the neural network, including the curvature of the mouth corners, the rate of change of that curvature, the number of exposed teeth, and the duration for which the curvature is held.
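The four facial-dynamics quantities can be packed into the network's input vector roughly as follows. The patent does not say how each quantity is measured or scaled, so the field names and the normalising constants here are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class MouthDynamics:
    """The four facial-dynamics quantities named in the description.
    All field names are hypothetical."""
    corner_curvature: float        # curvature of the mouth corners
    curvature_change_rate: float   # rate of change of that curvature
    exposed_teeth: int             # number of exposed teeth
    curvature_duration_s: float    # how long the curvature is held, seconds

def to_input_vector(d: MouthDynamics) -> list:
    """Pack the features into a [0, 1]-scaled network input vector.
    The divisors (10 teeth, 5 seconds) are assumed caps, not values
    taken from the patent."""
    return [
        min(max(d.corner_curvature, 0.0), 1.0),
        min(max(d.curvature_change_rate, 0.0), 1.0),
        min(d.exposed_teeth / 10.0, 1.0),
        min(d.curvature_duration_s / 5.0, 1.0),
    ]
```

Each frame sequence in the target feature information would yield one such vector, which becomes the x_i input of the neural network in the following steps.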
[0062] Further, in the step of the satisfaction evaluation model being a neural network model constructed according to the error back propagation algorithm, the calculation process of the error back propagation algorithm is composed of a forward calculation process and a reverse calculation process;
[0063] The forward calculation process includes: the sample input data is processed layer by layer through the hidden unit layers toward the output layer to produce the output result; when the output result does not meet the preset expected value, the process turns to the reverse calculation.
[0064] The reverse calculation process includes: when the output does not match the preset expected value, the resulting output error is returned as an error signal along the original path of the algorithm, and the weight of each neuron in the neural network model is modified so as to bring the error signal within a preset range, after which the process returns to the forward calculation.
[0065] Further, the specific calculation method of the forward calculation process includes:
[0066] S_j = f( Σ_i w_ij · x_i − b_j )
[0067] where x_i is the target feature information, b_j is the threshold, w_ij is the weight, f is the neuron activation function, and S_j is the output result.
[0068] Further, the result of the reverse calculation process is the threshold b_(j+1) and the weight w_i(j+1) used for the calculation on the (j+1)-th sample input data, and this result is fed back into the neural network model. The facial expression satisfaction evaluation model therefore not only evaluates consumer satisfaction, but also optimizes its own neural network, increasing the probability of success in the next calculation.
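The forward rule and the back-propagation update above can be sketched for a single neuron; a real model would stack such neurons into hidden layers, and the sigmoid activation, learning rate, and squared-error loss are assumptions the patent does not fix:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

class Neuron:
    """One neuron following the forward rule S_j = f(sum_i w_ij*x_i - b_j),
    here with f the sigmoid. Single-unit sketch of the forward pass and
    the error back-propagation update only."""

    def __init__(self, weights, threshold):
        self.w = list(weights)   # w_ij
        self.b = threshold       # b_j

    def forward(self, x):
        return sigmoid(sum(w * xi for w, xi in zip(self.w, x)) - self.b)

    def backward(self, x, target, lr=0.5):
        """One gradient step on squared error E = (S - target)^2 / 2;
        returns the neuron's output after the update."""
        s = self.forward(x)
        delta = (s - target) * s * (1.0 - s)   # dE/dz for sigmoid f
        self.w = [w - lr * delta * xi for w, xi in zip(self.w, x)]
        self.b = self.b + lr * delta           # b enters forward with a minus sign
        return self.forward(x)
```

After one `backward` call the output moves toward the target, which is the "fed back into the neural network model" behaviour described above.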
[0069] Step S4, updating the satisfaction evaluation results in the store satisfaction database according to the satisfaction evaluation results of the different target consumers;
[0070] In step S4, the satisfaction database reads the current satisfaction evaluation result calculated by the neural network model and takes the average of the current satisfaction evaluation result and the corresponding store's satisfaction evaluation result in the satisfaction database; the resulting average is the updated satisfaction evaluation result in the satisfaction database.
[0071] Further, the satisfaction database is obtained by training the satisfaction evaluation model on pre-acquired sample training data. The sample training data are obtained from an initial customer satisfaction survey questionnaire together with face data analysis: the face data analysis serves as the neural network input and the questionnaire results as the neural network output, from which the initial thresholds b_j and weights w_ij are calculated to establish the initial neural network model.
[0072] Step S5, respond to the user's shopping guide input request, and determine the user's shopping guide demand;
[0073] In step S5, the smart shopping guide robot 2 sends the instruction received by the touch display 212 to the server 3; when the instruction triggered by the user on the touch display 212 is a shopping guide input, the processor 53 responds to the user's shopping guide input request according to the received instruction and determines the user's shopping guide demand.
[0074] Step S6: Determine storefront information corresponding to the shopping guide input request according to the shopping guide demand;
[0075] In step S6, the processor 53 responds to the shopping guide instruction input by the user and, according to the shopping guide demand, reads all storefront information corresponding to the shopping guide input request, for example all women's clothing stores.
[0076] Step S7: According to the satisfaction evaluation result stored in the store satisfaction database and the determined store information, recommend storefronts that meet the preset satisfaction level to the user.
[0077] In step S7, the satisfaction evaluation results of the stores determined in step S6 are read from the satisfaction database, the three stores with the highest evaluation results are found, and the server 3 sends a message to the smart shopping guide robot 2 for output on its touch display screen.
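Selecting the three highest-rated stores among those matching the demand can be sketched as below; the database layout (store name mapped to a score) is an illustrative assumption:

```python
def recommend_top(satisfaction_db: dict, matching_stores: list, k: int = 3) -> list:
    """Pick the k stores with the highest satisfaction evaluation results
    among those matching the user's shopping guide demand.

    `satisfaction_db` maps store name -> satisfaction score; the layout
    is a placeholder for the store satisfaction database. Stores absent
    from the database are skipped.
    """
    scored = [(store, satisfaction_db[store])
              for store in matching_stores if store in satisfaction_db]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [store for store, _ in scored[:k]]
```

The returned names would be pushed to the smart shopping guide robot 2 for display, optionally annotated with any full-store warnings from the capacity check.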
[0078] Further, the second surveillance video taken by the cameras set in the storefront consumption areas of the shopping mall is acquired, and the number of consumers in each storefront is counted through the face recognition algorithm according to the second surveillance video. When the number of consumers reaches the storefront's maximum preset number of people, a warning instruction that the store is full is sent to the intelligent shopping guide robot;
[0079] When recommending, according to the user's shopping guide demand, a store that meets that demand, if the number of consumers in the store has reached its maximum preset number, the warning instruction is displayed alongside that store's information;
[0080] The maximum preset number of people in the store belongs to the basic information of the store and is provided by the merchant.
[0081] In the implementation of this embodiment, the first surveillance video obtained from the multiple cameras in the shopping mall is used to identify all target feature information through a face recognition algorithm; the neural network model constructed by the error back-propagation algorithm then uses the target feature information to update, in real time, the satisfaction evaluation results stored in the store satisfaction database; the system responds to the user's shopping guide input request, determines the user's shopping guide demand, determines the store information corresponding to the shopping guide input request according to that demand, and, according to the satisfaction evaluation results stored in the satisfaction database and the determined store information, recommends storefronts meeting the preset satisfaction level to the user, so that the intelligent shopping guide robot system solves the technical problem of quickly and accurately finding specific storefronts for customers.
[0082] Referring again to Figures 1 to 3, in the server 3 provided by the present invention, when the processor 53 executes the computer program, the steps of the control method of the intelligent shopping guide robot system described in any of the above embodiments are implemented.
[0083] The present invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by the processor 53, the steps of the control method of the intelligent shopping guide robot system described in any of the above embodiments are realized.
[0084] Exemplarily, the computer program in the computer-readable storage medium includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium.
[0085] It should be noted that, since the computer program of the computer-readable storage medium is executed by the processor 53 to implement the steps of the aforementioned control method of the intelligent shopping guide robot system, all embodiments of the aforementioned method are applicable to the computer-readable storage medium, and all can achieve the same or similar beneficial effects.
[0086] In the description of this specification, reference to the terms "one embodiment", "another embodiment", "other embodiments", or "first embodiment to Xth embodiment", etc., means that the specific features, structures, materials, or characteristics described in connection with that embodiment or example are included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, method steps, or characteristics can be combined in a suitable manner in any one or more embodiments or examples.
[0087] It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements not only includes those elements but also includes other elements not explicitly listed, or elements inherent to the process, method, article, or device. Without further restrictions, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, article, or device that includes it.
[0088] The sequence numbers of the foregoing embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
[0089] Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to make a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) execute the method described in each embodiment of the present invention.
[0090] The above are only preferred embodiments of the present invention and do not limit the scope of the present invention. Any equivalent structure or equivalent process transformation made using the content of the description and drawings of the present invention, or directly or indirectly applied in other related technical fields, is likewise included in the scope of patent protection of the present invention.