Hazardous article image detection method, device, equipment and medium under a video network
A dangerous-goods image detection technology, applied in the field of video networking, which solves problems such as inspector fatigue, wasted human resources, and missed inspections, and achieves the effects of increased robustness, reduced labor costs, and improved detection speed and accuracy
Inactive Publication Date: 2019-11-19
VISIONVERA INFORMATION TECH CO LTD
AI-Extracted Technical Summary
Problems solved by technology
Since the security inspection machine uses a conveyor belt and an X-ray machine to scan the items in the luggage under inspection, generates an X-ray image, and sends the image to a client for human-eye recognition, this inspection method has the following disadvantages: Sec...
Method used
Embodiments of the present invention exploit the characteristics of the video network: the video network server trains the feature neural network model, while the client detects whether an image to be inspected contains dangerous goods. Separating the training of the feature neural network model from the detection of the image to be inspected in this way relieves the memory and CPU pressure on the client. Preprocessing the dangerous-goods image samples on the video network server increases the diversity of the image samples and the robustness of the feature neural network model, so that the samples cover, as far as possible, all the image forms encountered in the usage scenario. The object to be inspected is detected by the feature neural network model, and the detection result information is fed back to the client, which outputs alarm information, reducing labor costs and improving detection speed and accuracy.
When the feature neural network model detects that the image under inspection contains dangerous goods, the client stores the detection result information in a local folder representing dangerous goods. This information includes the image output by the feature neural network model, annotated with dangerous-goods markings and classification information, as well as the corresponding original image sent by the security inspection machine. These images are stored by dangerous-goods category and time, and each image is named according to its dangerous-goods category, time, etc. When the detection result information indicates no dangerous goods, the client does not store the detection result informa...
Abstract
The invention provides a dangerous-goods image detection method, device, equipment and medium under a video network, and relates to the technical field of video networking. The method comprises the following steps: acquiring dangerous-goods image samples in a video network server, and preprocessing the dangerous-goods image samples to obtain preprocessed image samples; sending the preprocessed image samples to a preset neural network model for training to obtain a feature neural network model, wherein the preset neural network model is stored in the video network server; and sending the parameters of the feature neural network model to a client, so that the client detects an object to be inspected through the received feature neural network model. According to the invention, the training of the feature neural network model and the detection of the image to be inspected are separated, relieving the memory and CPU pressure on the client. The robustness of the feature neural network model is improved, labor costs are reduced, and detection speed and accuracy are improved.
Examples
- Experimental program(1)
Example Embodiment
[0072] Hereinafter, exemplary embodiments of the present invention will be described in more detail with reference to the accompanying drawings. Although exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention can be implemented in various forms and should not be limited by the embodiments set forth herein. On the contrary, these embodiments are provided to enable a more thorough understanding of the present invention and to fully convey the scope of the present invention to those skilled in the art.
[0073] Video networking is an important milestone in network development. It is a real-time network that can realize the real-time transmission of high-definition video and push many Internet applications toward high-definition video and high-definition face-to-face communication.
[0074] Video networking adopts real-time high-definition video exchange technology and can integrate dozens of video, voice, picture, text, communication, and data services on a single network platform, such as high-definition video conferencing, video surveillance, intelligent monitoring and analysis, emergency command, digital broadcast television, time-shifted television, online teaching, live broadcasting, VOD, television mail, personalized video recording (PVR), intranet (self-organized) channels, intelligent video broadcast control, and information release, achieving high-definition-quality video playback through a television or computer.
[0075] In order to enable those skilled in the art to better understand the embodiments of the present invention, video networking is introduced as follows:
[0076] Some of the technologies applied by the video network are as follows:
[0077] Network Technology
[0078] The network technology innovation of the video network improves traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure network packet switching or network circuit switching, video networking technology uses packet switching to meet the requirements of streaming media. Video networking technology has the flexibility, simplicity, and low price of packet switching, as well as the quality and security of circuit switching, realizing a seamless connection of switched virtual circuits and data formats across the entire network.
[0079] Switching Technology
[0080] Video networking uses two advantages of Ethernet, asynchronism and packet switching, and eliminates Ethernet's defects under the premise of full compatibility. It provides seamless end-to-end connections across the entire network, reaches user terminals directly, and directly carries IP data packets; user data needs no format conversion anywhere in the network. Video networking is a more advanced form of Ethernet: a real-time exchange platform that can realize the large-scale real-time transmission of high-definition video across the entire network, which is currently unachievable on the Internet, and push many network video applications toward high definition and unification.
[0081] Server Technology
[0082] The server technology of the video network and the unified video platform differs from servers in the traditional sense. Its streaming media transmission is built on a connection-oriented basis; its data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video network and unified video platform is much simpler than general data processing, and efficiency is improved by more than a hundred times over traditional servers.
[0083] Storage Technology
[0084] The ultra-high-speed storage technology of the unified video platform adopts an advanced real-time operating system in order to handle media content of ultra-large capacity and ultra-large traffic. The program information in server instructions is mapped to specific hard disk space, and media content no longer passes through the server but is sent directly and instantly to the user terminal, with a user waiting time generally under 0.2 seconds. The optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head. Resource consumption is only 20% of that of an IP Internet of the same level, yet the concurrent traffic generated is 3 times that of a traditional hard disk array, and overall efficiency is improved by more than 10 times.
[0085] Network Security Technology
[0086] The structural design of the video network eliminates the network security problems that plague the Internet through a separate permission system for each service and complete isolation of equipment and user data. It generally requires no anti-virus programs or firewalls to prevent hacker and virus attacks, providing users with a structurally worry-free and safe network.
[0087] Service Innovation Technology
[0088] The unified video platform integrates service and transmission. Whether for a single user, a private-network user, or an entire network, connection is automatic. User terminals, set-top boxes, or PCs connect directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform adopts a "recipe-style" table configuration model in place of traditional complex application programming, allowing complex applications to be realized with very little code and enabling practically unlimited new service innovation.
[0089] The networking structure of the video network is as follows:
[0090] Video networking is a network structure with centralized control. The network can be a tree network, a star network, a ring network, etc., but on this basis a centralized control node is required to control the entire network.
[0091] As shown in Figure 1, the video network is divided into two parts: an access network and a metropolitan area network.
[0092] The equipment of the access network can be divided into three categories: node servers, access switches, and terminals (including various set-top boxes, encoding boards, storage, etc.). The node server is connected to the access switch, and the access switch can be connected to multiple terminals and can be connected to the Ethernet.
[0093] Among them, the node server is a node with a centralized control function in the access network, and can control the access switch and the terminal. The node server can be directly connected to the access switch or directly connected to the terminal.
[0094] Similarly, the equipment of the metropolitan area network can also be divided into three categories: metropolitan area servers, node switches, and node servers. The metropolitan area server is connected to the node switch, and the node switch can be connected to multiple node servers.
[0095] Among them, the node server is the node server of the access network part, that is, the node server belongs to both the access network part and the metropolitan area network part.
[0096] The metropolitan area server is a node with a centralized control function in the metropolitan area network, and can control the node switch and the node server. The metropolitan area server can be directly connected to the node switch or the node server.
[0097] It can be seen that the entire video network is a hierarchical, centrally controlled network structure, and the networks controlled by the node servers and the metropolitan area servers can have various structures such as tree, star, and ring.
[0098] Typically, the access network part can form a unified video platform (the part within the dashed circle), and multiple unified video platforms can form a video network; the unified video platforms can be interconnected through metropolitan-area and wide-area video networks.
[0099] 1. Classification of video network devices
[0100] 1.1 The devices in the video network of the embodiment of the present invention can be mainly divided into three categories: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, encoding boards, storage, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
[0101] 1.2 The equipment of the access network part can be mainly divided into three categories: node servers, access switches (including Ethernet protocol conversion gateways), terminals (including various set-top boxes, encoding boards, storage, etc.).
[0102] The specific hardware structure of each access network device is:
[0103] Node server:
[0104] As shown in Figure 2, the node server mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
[0105] Incoming packets from the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202. The switching engine module 202 looks up the address table 205 for each incoming packet to obtain its orientation information, and stores the packet in the queue of the corresponding packet buffer 206 according to that information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port sending buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control of the hard disk, including initialization and reading and writing; the CPU module 203 is mainly responsible for protocol processing between the access switch and the terminal (not shown in the figure), for configuring the address table 205 (including the downstream protocol packet address table, the upstream protocol packet address table, and the data packet address table), and for configuring the disk array module 204.
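The enqueue-and-poll behavior of paragraph [0105] can be sketched as follows. This is a minimal illustration; the class, method, and threshold names are hypothetical stand-ins, not taken from the patent's actual implementation:

```python
from collections import deque

class SwitchingEngine:
    """Minimal sketch of the switching engine's buffer polling ([0105])."""

    def __init__(self, queue_limit=64):
        self.queue_limit = queue_limit          # "nearly full" threshold
        self.buffers = {}                       # port -> deque of packets
        self.port_send_buffer_full = {}         # port -> bool

    def enqueue(self, port, packet):
        """Store a packet in the queue chosen by its orientation information."""
        q = self.buffers.setdefault(port, deque())
        if len(q) >= self.queue_limit:          # queue nearly full: discard
            return False
        q.append(packet)
        return True

    def poll(self):
        """Poll all packet buffer queues; forward when both conditions hold:
        1) port sending buffer not full; 2) queue packet counter > 0."""
        forwarded = []
        for port, q in self.buffers.items():
            if not self.port_send_buffer_full.get(port, False) and len(q) > 0:
                forwarded.append((port, q.popleft()))
        return forwarded
```

A real switching engine would of course do this in hardware per output port; the sketch only mirrors the two forwarding conditions stated in the text.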
[0106] Access switch:
[0107] As shown in Figure 3, the access switch mainly includes network interface modules (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303, and a CPU module 304;
[0108] A packet (uplink data) arriving from the downlink network interface module 301 enters the packet inspection module 305. The packet inspection module 305 checks whether the destination address (DA), source address (SA), packet type, and packet length of the packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated and the packet enters the switching engine module 303, otherwise it is discarded. A packet (downlink data) arriving from the uplink network interface module 302 enters the switching engine module 303 directly, as does a packet from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its orientation information. If a packet entering the switching engine module 303 is going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in combination with its stream identifier (stream-id); if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its orientation information; if that queue is nearly full, the packet is discarded.
[0109] The switching engine module 303 polls all packet buffer queues, which can include two situations:
[0110] If the queue goes from the downlink network interface to the uplink network interface, the following conditions must be met for forwarding: 1) the port sending buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the rate control module has been obtained;
[0111] If the queue does not go from the downlink network interface to the uplink network interface, the following conditions must be met for forwarding: 1) the port sending buffer is not full; 2) the queue packet counter is greater than zero.
[0112] The rate control module 308 is configured by the CPU module 304 and, within a programmable interval, generates tokens for all packet buffer queues going from the downlink network interface to the uplink network interface, in order to control the uplink forwarding rate.
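The token-based rate control of paragraph [0112] amounts to a simple token scheme: every interval grants each uplink-bound queue a fixed number of tokens, and forwarding condition 3) of paragraph [0110] consumes one. A minimal sketch, with illustrative names not taken from the patent:

```python
class RateControl:
    """Sketch of the rate control module ([0112]): on each programmable
    interval, grant tokens to uplink-bound queues; a queue may forward only
    while it holds tokens."""

    def __init__(self, tokens_per_interval):
        self.tokens_per_interval = tokens_per_interval
        self.tokens = {}                        # queue id -> available tokens

    def on_interval(self, uplink_queue_ids):
        """Called once per programmable interval by the CPU-configured timer."""
        for qid in uplink_queue_ids:
            self.tokens[qid] = self.tokens.get(qid, 0) + self.tokens_per_interval

    def try_consume(self, qid):
        """Forwarding condition 3): a token must be obtained before sending."""
        if self.tokens.get(qid, 0) > 0:
            self.tokens[qid] -= 1
            return True
        return False
```

With a fixed interval length, tokens-per-interval directly bounds the uplink forwarding rate in packets per second.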
[0113] The CPU module 304 is mainly responsible for the protocol processing with the node server, the configuration of the address table 306, and the configuration of the code rate control module 308.
[0114] Ethernet protocol conversion gateway:
[0115] As shown in Figure 4, it mainly includes network interface modules (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet inspection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
[0116] A data packet arriving from the downlink network interface module 401 enters the packet inspection module 405. The packet inspection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address (DA), video network source address (SA), video network packet type, and packet length meet the requirements. If they do, a corresponding stream identifier (stream-id) is allocated, the MAC deleting module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise it is discarded;
[0117] The downlink network interface module 401 monitors the sending buffer of the port. If a packet is present, it learns the Ethernet MAC DA of the corresponding terminal according to the packet's video network destination address (DA), prepends the terminal's Ethernet MAC DA, the MAC SA of the Ethernet protocol conversion gateway, and the Ethernet length or frame type, and sends the packet.
[0118] The functions of other modules in the Ethernet protocol conversion gateway are similar to those of the access switch.
[0119] Terminal:
[0120] A terminal mainly includes a network interface module, a service processing module, and a CPU module. For example, a set-top box mainly includes a network interface module, a video/audio codec engine module, and a CPU module; an encoding board mainly includes a network interface module, a video/audio encoding engine module, and a CPU module; a storage device mainly includes a network interface module, a CPU module, and a disk array module.
[0121] 1.3 The equipment of the metropolitan area network can be mainly divided into three categories: node servers, node switches, and metropolitan area servers. Among them, a node switch mainly includes a network interface module, a switching engine module, and a CPU module; a metropolitan area server mainly includes a network interface module, a switching engine module, and a CPU module.
[0122] 2. Definition of video network data packets
[0123] 2.1 Access network data packet definition
[0124] The data packet of the access network mainly includes the following parts: destination address (DA), source address (SA), reserved bytes, payload (PDU), CRC.
[0125] As shown in the following table, the data packet of the access network mainly includes the following parts:
[0126] DA SA Reserved Payload CRC
[0127] among them:
[0128] The destination address (DA) is composed of 8 bytes. The first byte indicates the packet type (such as various protocol packets, multicast data packets, unicast data packets, etc.), with up to 256 possibilities. The second to sixth bytes are the metropolitan area network address, and the seventh and eighth bytes are the access network address;
[0129] The source address (SA) is also composed of 8 bytes (byte), the definition is the same as the destination address (DA);
[0130] The reserved byte consists of 2 bytes;
[0131] The payload (PDU) has different lengths for different datagram types: 64 bytes for the various protocol packets, and 32 + 1024 = 1056 bytes for unicast data packets. It is, of course, not limited to these two kinds;
[0132] CRC consists of 4 bytes, and its calculation method follows the standard Ethernet CRC algorithm.
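Under the layout above (8-byte DA, 8-byte SA, 2-byte reserved field, payload, 4-byte CRC), an access network packet could be assembled and checked as sketched below. `zlib.crc32` implements the standard Ethernet CRC-32 polynomial mentioned in the text; the big-endian byte order chosen here is an illustrative assumption, since the patent does not state it:

```python
import zlib

def build_packet(da: bytes, sa: bytes, payload: bytes) -> bytes:
    """Assemble DA(8) | SA(8) | Reserved(2) | Payload | CRC(4)."""
    assert len(da) == 8 and len(sa) == 8
    body = da + sa + b"\x00\x00" + payload
    crc = zlib.crc32(body).to_bytes(4, "big")   # standard Ethernet CRC-32
    return body + crc

def check_packet(packet: bytes) -> bool:
    """Verify the trailing CRC over DA, SA, Reserved and Payload."""
    body, crc = packet[:-4], packet[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc
```

For a 1056-byte unicast payload this yields a fixed 1078-byte packet, matching the field sizes given in paragraphs [0128] to [0132].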
[0133] 2.2 Metropolitan area network data packet definition
[0134] The topology of the metropolitan area network is a graph. There may be two or more connections between two devices; that is, there may be more than two kinds of connection between a node switch and a node server, between two node switches, or between two node servers. However, the metropolitan area network address of each metropolitan area network device is unique. In order to accurately describe the connection relationships between metropolitan area network devices, the embodiment of the present invention introduces a parameter, the label, to uniquely describe a metropolitan area network device.
[0135] The definition of the label here is similar to that of MPLS (Multi-Protocol Label Switching). Assuming there are two connections between device A and device B, a data packet from device A to device B has 2 labels, and a data packet from device B to device A also has 2 labels. Labels are divided into in-labels and out-labels. Assuming the label of a data packet entering device A (its in-label) is 0x0000, the label of the same packet when it leaves device A (its out-label) may become 0x0001. The network access process of the metropolitan area network is centrally controlled, meaning that the address allocation and label allocation of the metropolitan area network are dominated by the metropolitan area server, while the node switches and node servers passively execute them. This differs from MPLS label distribution, which is the result of mutual negotiation between switches and servers.
[0136] As shown in the following table, the data packet of the metropolitan area network mainly includes the following parts:
[0137] DA SA Reserved label Payload CRC
[0138] Namely: destination address (DA), source address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label can follow this definition: the label is 32 bits, the high 16 bits are reserved and only the low 16 bits are used, and its position is between the reserved bytes of the data packet and the payload.
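The label handling of paragraphs [0135] and [0138] can be sketched as below. The byte offset of the label (18, i.e. after DA, SA, and the reserved bytes) follows from the field layout above but is an illustrative assumption, as is the big-endian encoding; note that a real device would also recompute the CRC after rewriting the label:

```python
def make_label(value: int) -> bytes:
    """Encode a 32-bit label: high 16 bits reserved (zero), low 16 bits used."""
    assert 0 <= value <= 0xFFFF
    return value.to_bytes(4, "big")

def swap_label(packet: bytes, out_label: int) -> bytes:
    """Replace the in-label with the out-label, as when a packet with
    in-label 0x0000 leaves device A carrying out-label 0x0001.
    Assumes the layout DA(8) | SA(8) | Reserved(2) | Label(4) | Payload | CRC."""
    off = 8 + 8 + 2                             # label starts at byte 18
    return packet[:off] + make_label(out_label) + packet[off + 4:]
```

In keeping with the centrally controlled access process, the in-label/out-label pairs themselves would be assigned by the metropolitan area server, not negotiated by the devices.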
[0139] Referring to Figure 5, a flow chart of the steps of a method for detecting dangerous-goods images on a video network server according to the present invention is shown; the method may specifically include the following steps:
[0140] In step S11, a dangerous article image sample is collected, and the dangerous article image sample is preprocessed to obtain a preprocessed image sample;
[0141] In the embodiment of the present invention, the dangerous-goods image samples are the backup X-ray images of dangerous goods detected and confirmed by security inspection machines in the past, such as knives, scissors and other edged weapons, guns, ammunition, bottles, and other suspected dangerous goods.
[0142] Specifically, the security inspection machine uses its conveyor belt and X-ray machine to scan items and generate X-ray images. Among these X-ray images, those containing dangerous goods serve as dangerous-goods image samples and are stored locally or on a server. The video network server retrieves the stored dangerous-goods image samples directly from local storage or the server and preprocesses them, either manually or with third-party open-source tools such as OpenCV, applying data enhancement methods such as image enhancement, image denoising, image blurring, image tilting, occlusion coverage, and adjustments to contrast, brightness, and Gaussian blur, to obtain preprocessed image samples. This preprocessing scheme increases the diversity of the image samples, so that the samples cover, as far as possible, all the image forms encountered in the usage scenario.
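A few of the augmentations listed above can be sketched with NumPy. In practice OpenCV functions (e.g. `cv2.GaussianBlur`, `cv2.warpAffine`, `cv2.convertScaleAbs`) would be used, as the text suggests; the parameter values below (brightness and contrast factors, noise level, occlusion rectangle) are illustrative choices, not values specified by the patent:

```python
import numpy as np

def augment(img: np.ndarray, rng=None) -> list:
    """Produce augmented variants of one grayscale uint8 X-ray sample."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    out = []
    # contrast / brightness adjustment: pixel' = pixel * 1.3 + 20
    out.append(np.clip(img.astype(np.int16) * 1.3 + 20, 0, 255).astype(np.uint8))
    # additive Gaussian noise (the denoising transform targets its inverse)
    noisy = img.astype(np.int16) + rng.normal(0, 8, img.shape).astype(np.int16)
    out.append(np.clip(noisy, 0, 255).astype(np.uint8))
    # occlusion coverage: black out a rectangle of the image
    occluded = img.copy()
    occluded[h // 4: h // 2, w // 4: w // 2] = 0
    out.append(occluded)
    return out
```

Each variant keeps the original annotation (dangerous-goods location and category), so one labeled sample yields several training samples.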
[0143] In step S12, the preprocessed image samples are sent to a preset neural network model for training to obtain a feature neural network model, wherein the preset neural network model is stored in the video network server;
[0144] In the embodiment of the present invention, the Yolo neural network model is used as the preset neural network model. The Yolo neural network model uses a convolutional neural network structure.
[0145] Specifically, the Darknet deep learning framework is selected to train the feature neural network model. The framework is not limited to Darknet; other deep learning frameworks such as TensorFlow may also be used together with tools such as OpenCV, and the present invention does not limit the choice. The images in the preprocessed image samples are fed to the Yolo neural network model, which outputs images marked with the locations and categories of the dangerous goods. These outputs are then compared with the annotated locations and categories in the corresponding original image samples to obtain difference values. The difference value observed during training, commonly known as the training loss, is monitored; once it drops to a level judged acceptable, or the specified number of training iterations is reached, training of the feature neural network model is complete; otherwise, training continues.
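The stopping rule just described (loss below an acceptable threshold, or the iteration budget spent) can be sketched as a plain loop. `model_step` is a hypothetical callable standing in for one Darknet/Yolo training iteration; the threshold and budget values are illustrative:

```python
def train(model_step, loss_threshold=0.05, max_iters=50000):
    """Run training iterations until the loss is acceptable or the
    iteration budget is exhausted; return (iterations used, final loss)."""
    loss = float("inf")
    for i in range(1, max_iters + 1):
        loss = model_step()                     # one training iteration
        if loss <= loss_threshold:
            return i, loss                      # converged early
    return max_iters, loss                      # budget exhausted
```

Either exit corresponds to a "successfully trained" feature neural network model in the text; only a loss that never reaches the threshold within the budget would prompt further training with adjusted settings.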
[0146] In step S13, the parameters of the feature neural network model are sent to the client, so that the client can detect the object to be inspected through the received feature neural network model.
[0147] In the embodiment of the present invention, the parameters of the feature neural network model may represent the trained feature neural network model itself, or may be the training parameters obtained by training it. If the parameters are a data set packaged and generated from the feature neural network model in the video network server, the video network server sends the data set to the client through the video network, the client generates the feature neural network model from the data set, and finally uses this model to detect whether there are dangerous goods in the image to be inspected sent by the security inspection machine. If the parameters are those obtained on completing training, the client prestores a preset neural network model and applies the parameters to transform that preset model into the feature neural network model. In the embodiment of the present invention, the feature neural network model in the client is the same as the feature neural network model in the server. The client receives the image to be inspected from the security inspection machine in real time and sends it to the feature neural network model to detect the items in the image. The embodiment of the present invention does not limit which of these two ways the client uses to generate the feature neural network model.
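The two delivery modes of paragraph [0147] can be sketched side by side. The class, payload keys, and the dictionary-of-parameters representation are hypothetical stand-ins for whatever serialization the server and client actually agree on:

```python
class Client:
    """Sketch of the two ways a client obtains the feature model ([0147])."""

    def __init__(self, preset_parameters=None):
        self.preset = preset_parameters         # prestored preset model, if any
        self.model = None

    def receive(self, payload):
        if payload["kind"] == "packaged_model":
            # Mode 1: the server ships the whole trained model as a data set;
            # the client instantiates it directly.
            self.model = payload["model"]
        elif payload["kind"] == "weights":
            # Mode 2: apply the trained parameters on top of the prestored
            # preset model to obtain the feature model.
            self.model = dict(self.preset, **payload["weights"])
        return self.model
```

Either way the client ends up with the same feature model as the server, so detection results are identical regardless of which delivery mode the deployment uses.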
[0148] The above embodiments can also meet standalone usage scenarios: the video network server does not need to maintain a continuous connection with the client; as long as it connects to the client periodically to update the client's feature neural network model, security inspection needs are met. This scenario generally applies to small security checkpoints, such as subway stations and small stations, where foot traffic is light and only one or two security check channels are needed.
[0149] In another embodiment, referring to Figure 6, a flow chart of the steps of a method for detecting dangerous-goods images on a client under the video network according to the present invention is shown; the method may specifically include the following steps:
[0150] In step S21, an image to be inspected is received; wherein the image to be inspected is an image output by a security inspection machine after scanning the object to be inspected;
[0151] In the embodiment of the present invention, the video network server and the security inspection machine maintain a communication connection through the video network. The security inspection machine transmits the scanned image to be inspected, that is, the X-ray image, to the video network server in real time, and the video network server receives it. Because the video network server is connection-oriented, its data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. At the same time, because the video network is a real-time network, it can transmit high-quality images in real time. Therefore, using the video network server and the video network connection ensures the authenticity of the image to be inspected and its real-time transmission, guaranteeing the accuracy and speed of the security check.
[0152] In step S22, the image to be inspected is input into the feature neural network model to obtain detection result information on whether the image contains dangerous goods;
[0153] In the embodiment of the present invention, the image to be inspected is input to the feature neural network model in the video network server, which outputs either an image with the dangerous goods marked with boxes and annotated with their categories, or the original image to be inspected when nothing is found. The feature neural network model outputs the corresponding detection results in real time, in the order in which the images to be inspected are received. The detection results include detection result information indicating dangerous goods and detection result information indicating no dangerous goods.
[0154] In step S23, the detection result information is fed back to the client, so that the client outputs the detection result information.
[0155] In the embodiment of the present invention, the detection result information includes the images output by the characteristic neural network model, marked with the dangerous goods and their danger categories.
[0156] In combination with the foregoing embodiment, in another embodiment of the present application, the client outputs alarm information according to the detection result information. In addition to steps S21 to S23, the method further includes the following step:
[0157] In step S24, when the detection result information indicates that the image to be detected contains dangerous goods, an alarm instruction is sent to the client, so that the client outputs alarm information.
[0158] In the embodiment of the present invention, the client determines the danger level of the detected dangerous goods according to its danger category and a preset dangerous goods level table, and triggers different alarm information for different danger levels.
[0159] Specifically, bottles, lighters, wires and the like are Class I dangerous goods: the client interface pops up a warning window displaying the image location and type of the dangerous article. Sticks, knives, liquids and the like are Class II dangerous goods: the client interface pops up the same warning window, and in addition the indicator light flashes. Firearms, firearm parts, bombs and the like are Class III dangerous goods: the client interface pops up the warning window, the indicator light flashes, and an audible alarm is issued. These three classes are recorded in the dangerous goods classification table. This classification is only a preferred solution; all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of this application.
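The three-class escalation above can be sketched as a lookup table plus a level-to-action mapping. The table contents mirror the examples in the paragraph, but the concrete category keys and action names are assumptions for illustration:

```python
# Hypothetical danger-level table mirroring the three classes described above.
DANGER_LEVELS = {
    "bottle": 1, "lighter": 1, "wire": 1,
    "stick": 2, "knife": 2, "liquid": 2,
    "gun": 3, "gun_part": 3, "bomb": 3,
}

def alarm_actions(category):
    """Return the cumulative alarm actions for a detected category."""
    level = DANGER_LEVELS.get(category, 0)
    actions = []
    if level >= 1:
        actions.append("popup_warning")    # show location and type in a pop-up
    if level >= 2:
        actions.append("flash_indicator")  # flash the indicator light
    if level >= 3:
        actions.append("sound_alarm")      # issue an audible alarm
    return actions
```

Because the actions are cumulative, a Class III detection triggers the pop-up, the flashing indicator, and the audible alarm together, exactly as described.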
[0160] In combination with the foregoing embodiment, in another embodiment of the present application, the detection result information is stored in the client and/or the video network server. In addition to steps S21 to S23, the method further includes the following steps:
[0161] In step S25, in the case where the detection result information indicates that the image to be detected contains dangerous goods, the image to be detected is stored locally;
[0162] In the embodiment of the present invention, when the detection result information indicates dangerous goods, the video network server stores it in a folder in local memory representing dangerous goods. This information includes the image output by the characteristic neural network model with the dangerous goods marks and classification information, and the corresponding original image to be detected sent by the security inspection machine. These images are classified and stored by dangerous goods category and time, and each image is named according to the dangerous goods category and time. When the detection result information does not indicate dangerous goods, the video network server stores it in a folder representing non-dangerous goods in local memory. Storing the detection result information in the video network server ensures the security and auditability of the data.
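The "classified by category and time, named by category and time" scheme could be sketched as a path builder. The exact directory layout and file-name pattern are not given in the source, so the layout below is an assumption:

```python
import os
import time

def storage_path(root, category, timestamp=None, dangerous=True):
    """Build a per-category, per-day storage path for a detection image.

    Assumed layout: <root>/dangerous/<category>/<YYYY-MM-DD>/<category>_<HHMMSS>.png
    Non-dangerous results go under <root>/non_dangerous/ instead.
    UTC (gmtime) is used here only to keep the example deterministic.
    """
    t = time.gmtime(timestamp) if timestamp is not None else time.gmtime()
    bucket = "dangerous" if dangerous else "non_dangerous"
    day = time.strftime("%Y-%m-%d", t)
    name = f"{category}_{time.strftime('%H%M%S', t)}.png"
    return os.path.join(root, bucket, category, day, name)

# Example: a knife detected at the Unix epoch, for reproducibility.
path = storage_path("/data/scans", "knife", timestamp=0)
```

Encoding the category and time in both the directory tree and the file name means security personnel can locate an image either by browsing or by name search, which is the retrieval benefit claimed in the following paragraphs.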
[0163] In step S26, and/or in the case where the detection result information indicates that the image to be detected contains dangerous goods, the image to be detected is sent to the client, so that the client stores the image to be detected.
[0164] In the embodiment of the present invention, the video network server sends the detection result information to the client's memory for storage; the specific storage scheme is the same as in step S25. Classifying and storing the detection result information locally on the client side facilitates quick and accurate searches by security personnel.
[0165] This embodiment suits multi-machine usage scenarios. The video network server and the client maintain a continuous connection. While the video network server trains the characteristic neural network model, it also receives images to be detected from the security inspection machine, detects them with the characteristic neural network model, and feeds the detection result information back to the client in real time, meeting the needs of security inspection. This scenario generally applies to large security checkpoints, such as airports and large railway stations, where passenger flow is heavy and many security inspection machines are required.
[0166] In another embodiment, referring to Figure 7, which shows a flow chart of the steps of the method for preprocessing dangerous goods image samples of the present invention, the method may specifically include the following steps:
[0167] In step S31, crop the dangerous goods image in the dangerous goods image sample;
[0168] In the embodiment of the present invention, a third-party open source tool such as OpenCV can be used to crop the images in the dangerous goods image samples. However, for higher cropping accuracy, it is preferable to crop the dangerous goods image out of the sample manually. For example, the knife in an image containing a knife is cropped along the shape of the knife and the background is removed, so that the cropped image contains only the knife.
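As a minimal dependency-free sketch of the cropping step (the source mentions OpenCV and manual shape-following cropping; the rectangular `crop` below only illustrates the simpler bounding-box case, and the pixel-grid representation is an assumption):

```python
def crop(image, box):
    """Crop a rectangular region from a row-major 2-D pixel grid.

    `image` is a list of rows; `box` is (x, y, width, height). Manual
    shape-following cropping, as preferred in the text, cannot be expressed
    by a single rectangle — this shows only the rectangular case.
    """
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

# 6x4 synthetic grid: pixel value encodes (row, column) for easy checking.
image = [[c + 10 * r for c in range(6)] for r in range(4)]
patch = crop(image, (2, 1, 3, 2))
```

With OpenCV the same rectangular crop is just NumPy slicing (`img[y:y+h, x:x+w]`); shape-following crops additionally need a mask to zero out background pixels outside the object contour.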
[0169] In step S32, data enhancement is performed on the dangerous goods in the cropped image to obtain a cropped image with data enhancement;
[0170] The present invention can use image enhancement to perform data enhancement on the dangerous articles in the cropped image. Specifically, the neighborhood averaging method is used to smooth and denoise the cropped image, and histogram equalization is then applied to the smoothed, denoised image. A third-party open source tool such as OpenCV can also be used to preprocess the dangerous goods image in the cropped image with data enhancement methods such as image enhancement, image denoising, image blurring, image tilting, occlusion, contrast and brightness adjustment, and Gaussian blur.
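Two of the operations named above — brightness/contrast adjustment and neighborhood averaging — can be sketched in plain Python on a grayscale pixel grid. This is an illustrative, dependency-free version, not the OpenCV implementation the source refers to:

```python
def adjust_brightness_contrast(image, alpha=1.0, beta=0):
    """Pixel-wise contrast (alpha) and brightness (beta): p' = alpha*p + beta,
    clamped to the 8-bit range [0, 255]."""
    return [[min(255, max(0, int(alpha * p + beta))) for p in row] for row in image]

def mean_filter3(image):
    """3x3 neighborhood averaging (the smoothing/denoising step above),
    leaving border pixels unchanged for brevity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(image[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

img = [[100, 100, 100], [100, 190, 100], [100, 100, 100]]
brighter = adjust_brightness_contrast(img, alpha=1.2, beta=20)
smoothed = mean_filter3(img)
```

In OpenCV the equivalents are `cv2.convertScaleAbs(img, alpha=..., beta=...)`, `cv2.blur(img, (3, 3))`, and `cv2.equalizeHist(img)` for the histogram-equalization step; applying several such transforms to each cropped sample multiplies the effective training set size, which is the diversity gain this section is after.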
[0171] In step S33, the data-enhanced cropped image is labeled and classified to obtain the preprocessed image sample.
[0172] The category information of the image is encoded in the name of the data-enhanced cropped image. For example, the gun category is marked in the file name of a gun image, and the knife category is marked in the file name of a knife image, to obtain the preprocessed image samples.
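A name-based labeling scheme like the one described can be sketched as a pair of encode/parse helpers. The `<category>_<index>.<ext>` pattern is an assumption; the source only says the category is marked in the name:

```python
def label_filename(category, index, ext="png"):
    """Encode the class label in the file name, e.g. 'gun_0001.png'."""
    return f"{category}_{index:04d}.{ext}"

def parse_label(filename):
    """Recover the class label from a file name produced by label_filename."""
    return filename.rsplit(".", 1)[0].rsplit("_", 1)[0]

name = label_filename("gun", 1)
label = parse_label(name)
```

Keeping the label in the file name means the training loader needs no separate annotation file for classification; for the bounding boxes the detector also needs, a sidecar annotation format would still be required.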
[0173] In another embodiment, referring to Figure 8, which shows a flow chart of the client-side steps of the method for detecting dangerous goods images under the video network of the present invention, the method may specifically include the following steps:
[0174] In step S41, the parameters of the characteristic neural network model sent by the video network server are received;
[0175] In the embodiment of the present invention, the client receives the parameters of the characteristic neural network model sent by the video network server through video network communication. The parameters may represent the trained characteristic neural network model itself, or the training parameters of the trained model. If they are training parameters, the client pre-stores a preset neural network model and modifies it with the training parameters, that is, the parameters of the characteristic neural network model, to obtain the client's characteristic neural network model. If the parameters are a data set generated by packaging the characteristic neural network model in the video network server, the video network server sends the data set to the client through the video network, and the client generates the characteristic neural network model from the data set. Finally, this model detects whether there are dangerous articles in the images to be detected sent by the security inspection machine. The scheme of this step is the same as that of step S13 described above.
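The first case — a client-side preset model updated with received training parameters — can be sketched as follows. The wire format (JSON) and the named-weights layout are assumptions; the source only says the server sends "parameters of the characteristic neural network model":

```python
import json

class PresetModel:
    """Stand-in for the client's pre-stored preset network: named weight arrays."""
    def __init__(self):
        self.weights = {"conv1.w": [0.0], "fc.w": [0.0]}

    def load_parameters(self, blob):
        """Overwrite the preset weights with a serialized parameter set
        received from the server, rejecting names the preset model lacks."""
        params = json.loads(blob)
        for name, value in params.items():
            if name not in self.weights:
                raise KeyError(f"unknown parameter {name!r}")
            self.weights[name] = value

model = PresetModel()
blob = json.dumps({"conv1.w": [0.5], "fc.w": [-1.2]})
model.load_parameters(blob)
```

Shipping only the weights keeps the transfer small and assumes both sides agree on the architecture; the second case in the text (a packaged data set from which the client rebuilds the model) trades a larger transfer for not needing a pre-stored preset model.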
[0176] In step S42, the to-be-detected image sent by the security inspection machine is received, and the to-be-detected image is sent to the characteristic neural network model to obtain the detection result information of whether the to-be-detected image contains dangerous goods.
[0177] The security inspection machine uses its internal X-ray machine to perform X-ray scanning of the object to be inspected in real time to obtain the image to be detected, and then sends the image to the client in real time. The client receives the image to be detected and feeds it to the characteristic neural network model, which detects whether the image contains dangerous goods and finally determines whether the object to be detected is, or contains, a dangerous article.
[0178] In combination with the foregoing embodiment, in another embodiment of the present application, the client outputs alarm information according to the detection result information. In addition to steps S41 to S42, the method further includes the following step:
[0179] In step S43, in the case where the detection result information indicates that the image to be detected contains dangerous goods, output alarm information.
[0180] When the detection result information output by the client indicates dangerous goods, that is, when the characteristic neural network model recognizes that the image to be detected contains dangerous goods such as knives or guns, the client outputs alarm information. In parallel, the client determines the danger level of the dangerous goods from its danger category and the preset dangerous goods level table, and triggers different alarm messages for different danger levels. For the specific alarm settings, see step S24.
[0181] In combination with the foregoing embodiment, in another embodiment of the present application, the client and/or the video network server stores the detection result information. In addition to steps S41 to S42, the method further includes the following steps:
[0182] In step S44, in a case where the detection result information indicates that the image to be detected contains dangerous goods, the image to be detected is stored locally;
[0183] When the characteristic neural network model detects that the image to be detected contains dangerous goods, the client stores the detection result information in a dangerous goods folder in local memory, including the image output by the characteristic neural network model with the dangerous goods marks and classification information, and the corresponding original image to be detected sent by the security inspection machine. These images are classified and stored by dangerous goods category and time, and each image is named according to the dangerous goods category, time, and so on. When the detection result information does not indicate dangerous goods, the client stores it in local memory outside the dangerous goods folder, including the original image to be detected sent by the security inspection machine. Classifying and storing the detection result information locally on the client side facilitates quick and accurate searches by security personnel.
[0184] In step S45, in the case where the detection result information indicates that the image to be detected contains dangerous goods, the image to be detected is sent to the video network server, so that the video network server stores the image to be detected.
[0185] When the client stores the detection result information, it synchronously uploads the information to the video network server for storage; the specific storage method is the same as in step S44. Storing the detection result information in the video network server ensures the security and auditability of the data.
[0186] Based on the embodiments of the present invention, the following beneficial effects can be achieved:
[0187] The embodiment of the present invention applies the characteristics of the video network: the characteristic neural network model is trained on the video network server, and the client determines whether an image contains dangerous goods by detecting the image to be detected, separating model training from image detection and relieving the client's memory and CPU pressure. By preprocessing the dangerous goods image samples in the video network server, the diversity of the image samples is improved and the robustness of the characteristic neural network model is increased, so that the samples cover as many of the image forms seen in the usage scenarios as possible. The object to be detected is detected through the characteristic neural network model, and the detection result information is fed back to the client to output alarm information, which reduces labor costs and improves the speed and accuracy of detection.
[0188] Referring to Figure 9, which shows a structural diagram of the device for detecting dangerous goods images on the video network server side according to the present invention, the device may specifically include the following modules:
[0189] The collection module 11 is used to collect dangerous goods image samples, and preprocess the dangerous goods image samples to obtain preprocessed image samples;
[0190] The training module 12 is configured to input the preprocessed image samples into a preset neural network model for training to obtain the characteristic neural network model; wherein the preset neural network model is stored in the video network server;
[0191] The transmission module 13 is configured to send the parameters of the characteristic neural network model to the client, so that the client can detect the object to be inspected through the received characteristic neural network model.
[0192] Preferably, the above device further includes:
[0193] The first receiving module is configured to receive an image to be detected; wherein the image to be detected is an image output by a security inspection machine after scanning the object to be detected;
[0194] The first detection module is configured to input the image to be detected into the characteristic neural network model to obtain detection result information on whether the image to be detected contains dangerous goods;
[0195] The feedback module is configured to feed back the detection result information to the client, so that the client outputs the detection result information.
[0196] The first alarm module is configured to send an alarm instruction to the client when the detection result information indicates that the image to be detected contains dangerous goods, so that the client outputs alarm information.
[0197] The first storage module is configured to store the image to be detected locally when the detection result information indicates that the image to be detected contains dangerous goods; and/or
[0198] The first sending module is configured to send the image to be detected to the client when the detection result information indicates that the image to be detected contains dangerous goods, so that the client stores the image to be detected.
[0199] In an optional implementation manner, the training module 12 includes:
[0200] The cropping sub-module is used to crop the dangerous goods image in the dangerous goods image sample;
[0201] The data enhancement sub-module is used to perform data enhancement on the dangerous goods in the cropped image to obtain a data-enhanced cropped image;
[0202] The label classification sub-module is used to label and classify the data-enhanced cropped image to obtain the preprocessed image sample.
[0203] Referring to Figure 10, which shows a structural diagram of the client-side device for detecting dangerous goods images under the video network of the present invention, the device may specifically include the following modules:
[0204] The second receiving module 21 is configured to receive the parameters of the characteristic neural network model sent by the video network server;
[0205] The second detection module 22 is configured to receive the image to be detected sent by the security inspection machine, and send the image to be detected to the characteristic neural network model to obtain detection result information of whether the image to be detected contains dangerous goods.
[0206] Preferably, the above device further includes:
[0207] The second alarm module is configured to output alarm information when the detection result information indicates that the image to be detected contains dangerous goods.
[0208] The second storage module is configured to store the image to be detected locally when the detection result information indicates that the image to be detected contains dangerous goods; and/or
[0209] The second sending module is configured to send the image to be detected to the video network server when the detection result information indicates that the image to be detected contains dangerous goods, so that the video network server stores the The image to be detected.
[0210] Based on the same inventive concept, another embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the steps in the method described in any of the above embodiments are implemented.
[0211] Based on the same inventive concept, another embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps in the method described in any of the above embodiments of the present application are implemented. As for the device embodiments, since they are basically similar to the method embodiments, the description is relatively brief; for related parts, refer to the description of the method embodiments.
[0212] The various embodiments in this specification are described in a progressive manner. Each embodiment focuses on the differences from other embodiments, and the same or similar parts between the various embodiments can be referred to each other.
[0213] Those skilled in the art should understand that the embodiments of the present invention may be provided as methods, devices, or computer program products. Therefore, the embodiments of the present invention may take the form of a pure hardware embodiment, a pure software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
[0214] The embodiments of the present invention are described with reference to the flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing terminal equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal equipment produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
[0215] These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
[0216] These computer program instructions can also be loaded onto a computer or other programmable data processing terminal equipment, so that a series of operation steps are executed on the computer or other programmable terminal equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable terminal equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
[0217] Although the preferred embodiments of the embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic creative concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
[0218] Finally, it should be noted that in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further restriction, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
[0219] The method, device, equipment, and medium for dangerous goods image detection under the video network provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and core ideas of the present invention. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application in accordance with the ideas of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.