Goods-selling method and device based on image comparison and self-service vending machine
A vending-machine and image technology, applied in the field of image processing, which addresses the problems of the camera being unable to keep up with fast movements, items being difficult to identify, or items not being identified at all.
Inactive Publication Date: 2018-07-24
Chengdu Guoxiaomei Network Technology Co., Ltd. (成都果小美网络科技有限公司)
Cites: 8 · Cited by: 23
AI-Extracted Technical Summary
Problems solved by technology
However, sometimes the user picks up the item so quickly that the camera cannot track it, making recognition difficult or impossible; sometimes the user's hand covers the item entirely or exposes only part of it, or the user intentional...
Method used
The method provided by the application can identify the goods purchased by customers in real time; because the comparison is performed on static pictures, it is not necessary to impose unreasonable constraints on customer purch...
Abstract
The invention discloses a goods-selling method and device based on image comparison, and a self-service vending machine. The method includes the following steps. Image acquisition: when the door of the self-service vending machine is opened or about to be opened, an image of the goods on the shelves in the self-service vending machine is acquired as the original image, and after the door of the self-service vending machine is opened, images of the goods on the shelves are acquired at preset time intervals. Goods detection: each acquired image is compared, feature by feature, with the preceding image to determine which goods the user has taken from the shelves or returned to them. Purchase settlement: when the door of the self-service vending machine is closed, or after it is closed, the goods finally taken are determined and settlement is performed. Through this method, images of the goods are acquired at set time intervals, and by analyzing and comparing adjacent images it can be correctly judged which goods the user has taken from the shelves or returned to them.
Application Domain
Coin-freed apparatus details · Character and pattern recognition (+2)
Technology Topic
Computer science
Image
Examples
- Experimental program (1)
Example Embodiment
[0053] From the following detailed description of specific embodiments of the present application, taken in conjunction with the accompanying drawings, those skilled in the art will better understand the above and other objectives, advantages and features of the present application.
[0054] Figure 1 shows a vending method according to an aspect of the present application, applied to an unmanned vending machine. The method includes:
[0055] Image acquisition step: when the door of the unmanned vending machine is opened or about to be opened, an image of the items on the shelf in the unmanned vending machine is acquired as the original image; after the door of the unmanned vending machine is opened, images of the items on the shelf are acquired at preset time intervals;
[0056] Item detection step: each acquired image is compared, feature by feature, with the previous image to determine the items that the user has picked up from the shelf or returned to the shelf; and
[0057] Purchase settlement step: when the door of the unmanned vending machine is closed, or after the door is closed, the items finally picked up by the user are determined and settlement is performed.
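For illustration only (not part of the application), the following Python sketch shows how these three steps might be orchestrated within one shopping session. All four callables passed to the function are hypothetical placeholders, and the 1-second interval simply follows the preferred value mentioned later in the description.

```python
import time

INTERVAL_SECONDS = 1  # the description suggests a preset interval of 1 to 5 seconds


def run_vending_session(capture_shelf_image, detect_item_changes, settle, door_is_open):
    """Hypothetical orchestration of the three steps; the four callables stand in
    for the camera capture, adjacent-image comparison, settlement and door-state
    signal that an actual machine would provide."""
    original = capture_shelf_image()          # image acquisition step: original image
    previous = original
    picked_up = []                            # items the user is currently holding

    while door_is_open():                     # keep sampling while the door is open
        time.sleep(INTERVAL_SECONDS)
        current = capture_shelf_image()
        # item detection step: compare the new frame with the previous one
        taken, returned = detect_item_changes(previous, current)
        for item in returned:                 # an item put back cancels an earlier take
            if item in picked_up:
                picked_up.remove(item)
        picked_up.extend(taken)
        previous = current

    # purchase settlement step: take a final image and settle the purchase
    final = capture_shelf_image()
    settle(picked_up, original, final)
```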
[0058] The method provided in this application can identify the products purchased by customers in real time. Since it relies on the comparison of static images, it does not need to impose unreasonable restrictions on customer purchasing behavior or require high-speed camera devices to capture high-definition images, which greatly improves the recognition accuracy, improves the user experience, and reduces the cost of the unmanned vending machine.
[0059] The unmanned vending machine can be a cabinet, a box, or another shape. Whether the door of the unmanned vending machine is open or about to be opened can be judged from the user's operation. For example, a QR-code label may be affixed to the outside of the cabinet of the unmanned vending machine; when the user scans the QR-code label with a mobile smart terminal, the terminal accesses the background server, the background server sends an unlocking instruction to the unmanned vending machine, and it can thus be judged that the cabinet door is about to be opened by the user. As another example, the outside of the cabinet may be provided with a biological-information collection device for collecting the user's biological information or mobile-phone information and accessing the background server, which then sends an unlocking instruction to the unmanned vending machine. The biological information includes fingerprint information, palm-print information, palm-vein information, finger-vein information, iris information or face information; the mobile-phone information includes SIM card information, NFC information, and the like. It is also possible to determine that the door is opened or about to be opened in other ways, for example by a sensor that detects the movement of the door.
[0060] Optionally, each layer of the cabinet can be provided with a camera device so that an image of the commodities on each layer is taken as the original image. Figure 2 shows the original image of a certain layer.
[0061] Optionally, the preset time interval ranges from 1 second to 5 seconds, preferably 1 second.
[0062] Optionally, the item detection step includes:
[0063] Image feature extraction step: the feature points of the image and of the previous image are calculated by a scale-invariant feature transform (SIFT) algorithm, and feature point matching is performed;
[0064] Image-to-be-detected determining step: if there are feature points in the image that are not successfully matched, it is considered that the user has put back an item, and the image is used as the image to be detected; if there are feature points in the previous image that are not successfully matched, it is considered that the user has picked up an item, and the previous image is used as the image to be detected;
[0065] Image-to-be-detected processing step: for the image to be detected, the parts of the image corresponding to the unmatched feature points are retained; and
[0066] Item determination step: the processed image to be detected is recognized by a machine learning method, and the items in the image to be detected are determined.
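A minimal sketch of the feature-extraction and image-to-be-detected determination sub-steps is given below, assuming OpenCV's SIFT implementation and Lowe's ratio test. The 0.75 ratio, the brute-force matcher, and the simple count comparison used to decide between the two cases are illustrative assumptions, not parameters taken from the application; a fuller implementation would also take the positions of the products into account when matching, as the description notes.

```python
import cv2


def determine_image_to_detect(prev_img, curr_img, ratio=0.75):
    """Match SIFT feature points between two adjacent shelf images and decide
    which image becomes the image to be detected, per the steps above."""
    sift = cv2.SIFT_create()
    gray_prev = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr_img, cv2.COLOR_BGR2GRAY)
    kp_prev, des_prev = sift.detectAndCompute(gray_prev, None)
    kp_curr, des_curr = sift.detectAndCompute(gray_curr, None)

    # Brute-force matching with Lowe's ratio test to keep reliable matches only.
    matcher = cv2.BFMatcher()
    pairs = matcher.knnMatch(des_prev, des_curr, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]

    matched_prev = {m.queryIdx for m in good}
    matched_curr = {m.trainIdx for m in good}
    unmatched_prev = [kp for i, kp in enumerate(kp_prev) if i not in matched_prev]
    unmatched_curr = [kp for i, kp in enumerate(kp_curr) if i not in matched_curr]

    # Unmatched points concentrated in the previous image indicate an item was
    # picked up; unmatched points in the current image indicate one was put back.
    if len(unmatched_prev) > len(unmatched_curr):
        matched_kp = [kp_prev[i] for i in matched_prev]
        return "picked_up", prev_img, matched_kp, unmatched_prev
    matched_kp = [kp_curr[i] for i in matched_curr]
    return "put_back", curr_img, matched_kp, unmatched_curr
```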
[0067] Corresponding products are placed on each shelf of the unmanned vending machine as required. Assume that one layer of a shelf holds, in order, bucket instant noodles, sauerkraut instant noodles, cola, coconut juice, potato chips and other products, and that the camera device is a camera. For example, Figure 1 is the first image taken by the camera of this layer of goods at a certain moment. The user then takes the cola from the outermost position; see Figure 2, the second image taken by the camera. The first image is compared with the second image by the SIFT algorithm. Because the SIFT algorithm is invariant to scale and rotation and robust to changes in image brightness and shooting angle, and because most current products are colorful and feature-rich in order to attract customers, the SIFT algorithm can extract many feature points from both images. The feature points of the two images are matched, taking the position of each product into account, so the cola that has been taken away will not be matched. Since the first image has feature points that the second image does not have, it is considered that the customer has picked up an item, and the first image is used as the image to be detected. Suppose the customer decides not to buy the cola and puts the bottle back in its original position. The camera then takes a third image, and the second image is compared with the third image by the SIFT algorithm. Since the third image has feature points that the second image does not have, it is considered that the customer has put an item back, and the third image is used as the image to be detected. For the image to be detected, the parts of the image corresponding to the unmatched feature points are retained, and this image is recognized by a machine learning method to identify the quantity and category of the product.
[0068] In addition to the SIFT algorithm, this application can also be implemented with detection algorithms such as the Laplacian-of-Gaussian (LoG) detector, the determinant-of-Hessian (DoH) method, the speeded-up robust features (SURF) algorithm, and the binary robust independent elementary features (BRIEF) descriptor. The essence of these algorithms is to detect and find the feature points of the image, and ultimately to compare the local or overall similarity of the pictures.
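The application does not detail these alternatives. As one possible substitution (not named in the text), OpenCV's freely available ORB detector, which pairs a FAST-style keypoint detector with BRIEF binary descriptors, can be dropped into the same matching pipeline, with Hamming distance replacing the L2 norm used for SIFT:

```python
import cv2

# Illustrative file paths only; in practice the frames come from the shelf camera.
prev_gray = cv2.imread("shelf_prev.png", cv2.IMREAD_GRAYSCALE)
curr_gray = cv2.imread("shelf_curr.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)   # FAST-style keypoints + BRIEF binary descriptors
kp_prev, des_prev = orb.detectAndCompute(prev_gray, None)
kp_curr, des_curr = orb.detectAndCompute(curr_gray, None)

# Binary descriptors are compared with Hamming distance rather than L2.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)
```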
[0069] This step can also improve the accuracy of settlement in more complicated situations. One such situation is that the user blocks other commodities during the purchase process. For example, while taking the cola, the user accidentally covers the instant noodles, so that a box of instant noodles is missing from the second photo compared with the first; at this moment the system believes that the customer has also bought instant noodles. By the time the customer goes to confirm the order, the camera has already taken the third photo, and by comparing it with the second photo an extra box of instant noodles is found, so the system eliminates the earlier error caused by the customer's occlusion. Through this process of successive judgments, real-time identification of the customer's purchase process can be realized.
[0070] Optionally, the image-to-be-detected processing step includes: setting the image parts corresponding to the successfully matched feature points in the image to be detected to a solid color, while the image parts corresponding to the unmatched feature points remain unchanged.
[0071] Optionally, the solid color may be black, white or another color. See Figure 4, which shows the image to be detected after processing. For example, all the matched feature points and their surroundings are reset to white, so that the cola in the first picture is the only region not covered by white; the picture is then detected by machine learning to identify the corresponding product category, and it can be judged that the user has purchased a bottle of cola.
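One plausible realization of this masking with OpenCV is sketched below; the 25-pixel neighborhood radius and the choice of white are assumed values for illustration. The masked image would then be handed to the commodity recognizer so that only the unmatched region (the cola in the example above) contributes to the result.

```python
import cv2


def mask_matched_regions(image_to_detect, matched_keypoints,
                         radius=25, color=(255, 255, 255)):
    """Set the neighborhoods of all successfully matched feature points to a
    solid color (white here), leaving only the unmatched region visible."""
    masked = image_to_detect.copy()
    for kp in matched_keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))  # keypoint pixel position
        cv2.circle(masked, (x, y), radius, color, thickness=-1)
    return masked
```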
[0072] Optionally, Figure 5 shows the purchase settlement step, which includes:
[0073] Final image acquisition step: when the door of the unmanned vending machine is closed, or after it is closed and before settlement, an image of the items on the shelf is acquired as the final image;
[0074] List determining step: a first shopping list is determined on the basis of the images acquired before the final image, and the original image is compared with the final image to determine a second shopping list of the items purchased by the user; and
[0075] List verification step: the first shopping list is compared with the second shopping list, and if the contents of the first shopping list and the second shopping list are consistent, the final shopping list is determined and settlement is performed.
[0076] When the door of the unmanned vending machine is closed, after the door is closed, or after the order is confirmed, the camera takes one more photo as the final image, and this image is compared with the first image taken when the user started the purchase so as to identify all the goods purchased by the customer. Comparing the recognition result of the real-time process with that of the confirmation process further verifies the correctness of the entire process, improves the robustness of the system, further reduces errors caused by other situations, and improves the user experience.
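Purely as an illustration, the cross-check between the incrementally built list and the list obtained from the original-versus-final comparison might look like the following; the item names and the behavior on disagreement are assumptions, since the application only states that settlement proceeds when the two lists are consistent.

```python
from collections import Counter


def verify_and_settle(first_list, second_list):
    """Compare the real-time (first) shopping list with the list derived from the
    original-versus-final image comparison (second); settle only if they agree."""
    if Counter(first_list) == Counter(second_list):
        return {"status": "settled", "items": dict(Counter(first_list))}
    # The application does not say what happens on a mismatch; flagging the
    # session for server-side review is one possible choice.
    return {"status": "needs_review",
            "first": dict(Counter(first_list)),
            "second": dict(Counter(second_list))}


# Example: both passes detected one cola and one pack of instant noodles.
print(verify_and_settle(["cola", "instant noodles"], ["instant noodles", "cola"]))
```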
[0077] According to another aspect of this application, referring to Figure 5, there is also provided a vending device applied to an unmanned vending machine, the device comprising:
[0078] an image acquisition module, configured to acquire an image of the items on the shelf in the unmanned vending machine as the original image when the door of the unmanned vending machine is opened or about to be opened, and to acquire images of the items on the shelf at preset time intervals after the door of the unmanned vending machine is opened;
[0079] an item detection module, configured to compare each acquired image with the previous image to determine the items that the user has picked up from the shelf or returned to the shelf; and
[0080] a purchase settlement module, configured to determine the items finally picked up by the user and perform settlement when the door of the unmanned vending machine is closed or after it is closed.
[0081] Optionally, the item detection module includes:
[0082] an image feature extraction module, configured to calculate the feature points of the image and of the previous image by a scale-invariant feature transform algorithm, and to perform feature point matching;
[0083] an image-to-be-detected determining module, configured such that if there are feature points in the image that are not successfully matched, it is considered that the user has put back an item and the image is used as the image to be detected, and if there are feature points in the previous image that are not successfully matched, it is considered that the user has picked up an item and the previous image is used as the image to be detected;
[0084] an image-to-be-detected processing module, configured to retain, for the image to be detected, the parts of the image corresponding to the feature points that have not been successfully matched; and
[0085] an item determination module, configured to recognize the processed image to be detected by a machine learning method and determine the items in the image to be detected.
[0086] Optionally, the image-to-be-detected processing module is configured to set the image parts corresponding to the successfully matched feature points in the image to be detected to a solid color, while the image parts corresponding to the unmatched feature points remain unchanged.
[0087] Optionally, the purchase settlement module includes:
[0088] a final image acquisition module, configured to acquire an image of the items on the shelf as the final image when the door of the unmanned vending machine is closed, or after it is closed and before settlement;
[0089] a list determining module, configured to determine a first shopping list on the basis of the images acquired before the final image, and to compare the original image with the final image to determine a second shopping list of the items purchased by the user; and
[0090] a list verification module, configured to compare the first shopping list with the second shopping list and, if the contents of the first shopping list and the second shopping list are consistent, to determine the final shopping list and perform settlement.
[0091] According to another aspect of this application, referring to Figure 7, an unmanned vending machine is provided, comprising a cabinet 1, a shelf 2 in the cabinet, a camera 3 and a processor (not shown), wherein the camera is arranged in the cabinet and connected to the processor.
[0092] When the door of the cabinet is opened or about to be opened, the camera acquires an image of the items on the shelf in the unmanned vending machine as the original image; after the door of the unmanned vending machine is opened, it acquires images of the items on the shelf at preset time intervals and transmits all the acquired images to the processor.
[0093] The processor compares the features of each acquired image with those of the previous image and determines the items that the user picks up from the shelf or puts back on the shelf; when the door of the unmanned vending machine is closed, or after it is closed, the items finally picked up by the user are determined and settlement is performed.
[0094] Optionally, the processor calculates the feature points of the image and of the previous image separately by a scale-invariant feature transform algorithm and performs feature point matching; if there are feature points in the image that are not successfully matched, it is considered that the user has put back an item and the image is used as the image to be detected; if there are unmatched feature points in the previous image, it is considered that the user has picked up an item and the previous image is used as the image to be detected; for the image to be detected, the parts of the image corresponding to the feature points that have not been successfully matched are retained; the processed image to be detected is recognized by a machine learning method, and the items in the image to be detected are determined.
[0095] Optionally, the processor sets the image parts corresponding to the successfully matched feature points in the image to be detected to a solid color, while the image parts corresponding to the unmatched feature points remain unchanged.
[0096] Optionally, the camera can be arranged on the top wall of the cabinet and under each shelf to photograph the shelf below; it can also be arranged on a side wall of the cabinet, or in other suitable positions. It is understood that one or more cameras can be used to photograph the items on each shelf. When multiple cameras are used, they can be arranged above the shelf, on the inner wall of the cabinet next to the shelf, or in a combination of the two.
[0097] Optionally, the camera is further configured to photograph the items on the shelf when the door is closed or after the door is closed, to obtain a final image, and transmit it to the processor;
[0098] The processor determines a first shopping list according to the images acquired before the final image, compares the original image with the final image to determine a second shopping list of the items purchased by the user, and compares the first shopping list with the second shopping list; if the contents of the first shopping list and the second shopping list are consistent, the final shopping list is determined and settlement is performed.
[0099] Optionally, the unmanned vending machine may also include a lighting lamp for illuminating the inside of the cabinet, in order to stabilize the lighting conditions inside the cabinet and ensure the accuracy of SIFT detection and product identification.
[0100] The unmanned vending machine may also include a display screen, which displays the names and quantities of the goods purchased by the customer in real time and allows the customer to confirm the order. For example, the customer can confirm the order through a keyboard connected to the display screen, through a soft keyboard shown on the display screen, or by touching the display screen.
[0101] In a preferred embodiment, the processor sends the feature points to a background server, and the background server matches the feature points and carries out the subsequent image processing and recognition steps.
[0102] Optionally, the unmanned vending machine may also be provided with a gravity sensor connected to the processor; the gravity sensor detects the change in the weight of the items on the shelf before and after an item is taken and sends it to the background server through the processor. The change in weight and the result of the image processing are combined to detect the items picked up or put back by the user.
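How the weight change and the image result are combined is not specified in the application; the following is a hypothetical fusion rule, with made-up item weights, that accepts the image-based detection only when it is consistent with the measured weight change of the shelf.

```python
# Illustrative per-item weights in grams (not taken from the application).
ITEM_WEIGHT_G = {"cola": 330, "instant noodles": 120, "potato chips": 70}


def consistent_with_weight(detected_taken, weight_delta_g, tolerance_g=30):
    """Accept the image-based list of taken items only if the expected weight
    change (negative when items are removed) matches the gravity-sensor reading."""
    expected_delta = -sum(ITEM_WEIGHT_G.get(item, 0) for item in detected_taken)
    return abs(expected_delta - weight_delta_g) <= tolerance_g


# Example: the image comparison says one cola was taken and the shelf lost ~325 g.
print(consistent_with_weight(["cola"], weight_delta_g=-325))  # True
```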
[0103] Optionally, the shelf is a multi-layer shelf, and each layer is provided with a camera, or each layer is provided with a camera and a gravity sensor.
[0104] Optionally, the unmanned vending machine further includes a lock installed on the door, and the lock is connected to the processor. Optionally, the lock can be a magnetic lock, an electric bolt lock or an electrically controlled lock.
[0105] Optionally, the opening and/or closing of the door may be controlled by the processor, or the opening and/or closing of the door may be controlled by the background server via the processor.
[0106] Optionally, the unmanned vending machine also includes a settlement device arranged outside the cabinet. The settlement device can be, for example, a card reader or an NFC sensing area, which reads the user's card-swiping information or mobile-phone information and performs settlement through the background server.
[0107] When the user approaches the unmanned vending machine, the user initiates a shopping process with the processor or the background server through a mobile terminal. The unmanned vending machine recognizes whether the user is a registered user and, after determining the user's purchase intention, automatically opens its door. The customer picks up the desired commodities, the processor or the background server obtains the type and quantity of the picked-up items, and the display simultaneously shows the commodities, quantities and total price taken by the user. After completing the purchase, the customer checks the purchase list, confirms the purchase on the display and leaves; the door of the unmanned vending machine closes automatically, the purchase is over, and the unmanned vending machine waits to serve the next user.
[0108] The invention overcomes the disadvantages of intelligent unmanned vending machines such as high hardware cost, high dependence on the external environment, high requirements on network real-time performance and bandwidth, heavy back-end load, and restrictions on users' purchasing habits, and lays a foundation for the widespread adoption of intelligent unmanned vending machines.
[0109] The above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented by software, they can be implemented in whole or in part in the form of a computer program product. When a computer loads and executes the computer program instructions, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
[0110] Those skilled in the art should further appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above in general terms of their functions. Whether these functions are executed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
[0111] The above are only preferred specific implementations of this application, but the protection scope of this application is not limited to them. Any change or replacement that a person skilled in the art can readily conceive of within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.