Coal mine personnel behavior detection method and device, and storage medium
A detection method in the field of image processing that can solve problems such as the low accuracy of behavior detection results and a weak data foundation, and achieves accurate and comprehensive behavior classification results
Active Publication Date: 2022-02-11
Shenzhen Haiqing Zhiyuan Technology Co., Ltd. (深圳海清智元科技股份有限公司)
Cites: 6 | Cited by: 0
Abstract
The embodiment of the invention provides a coal mine personnel behavior detection method and device, and a storage medium. The method comprises the following steps: obtaining a to-be-processed image; preprocessing the to-be-processed image to obtain an input mark sequence corresponding to the to-be-processed image; determining an attention output matrix based on an attention algorithm according to the input mark sequence; generating a feature map according to the attention output matrix; and carrying out bounding box regression processing on the feature map to obtain a behavior classification result of the coal mine personnel in each bounding box. According to the embodiment of the invention, global image processing is carried out on the to-be-processed image; compared with extracting only human body node information, more detail information features are included, so that the output behavior classification result of the coal mine personnel is more accurate. Moreover, by adopting the bounding box regression algorithm, the behaviors of multiple persons in the image can be detected at the same time, the behavior classification results corresponding to the persons in the bounding boxes are obtained, and the comprehensiveness of the detection results is guaranteed.
Examples
- Experimental program(1)
Example Embodiment
[0049] In order to make the objects, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be described below in conjunction with the embodiments of the present application. The described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative labor fall within the protection scope of the present application.
[0050] The coal mine production environment is complicated and requires real-time monitoring, but manual supervision is labor-intensive. With the improvement of video data processing power and the development of computer vision technology, the coal mine intelligent video surveillance system has become an indispensable component of coal mine safety production supervision. A coal mine intelligent video surveillance system can monitor the coal mine production site in real time through video, discover suspected unsafe work factors as early as possible, create a safe working environment for coal mine production, and ensure smooth and safe production. Therefore, the coal mine intelligent video surveillance system is of great significance for reducing the incidence of safety accidents in coal mine production and further improving production efficiency. At this stage, most research on coal mine intelligent monitoring focuses on mining and transportation, and there is less research on the behavior of personnel underground in coal mines.
[0051] In the prior art, there are many ways to detect the behavior of coal mine personnel.
[0052] In one approach, the behavior of coal mine personnel can be detected with the Dense Trajectories (DT) method. The DT method densely samples feature points at multiple spatial scales, tracks these feature points separately to form trajectories of fixed length, and finally describes each trajectory and its spatio-temporal neighborhood with four feature descriptors: Trajectory Shape Descriptors (TSD) describing trajectory shape, Motion Boundary Histograms (MBH) describing trajectory neighborhood information, Histograms of Oriented Gradients (HOG) describing appearance information, and Histograms of Oriented Optical Flow (HOF) describing motion information. Considering that camera movement introduces DT features unrelated to the human behavior in the video, an Improved Dense Trajectories (IDT) method can be employed. The IDT method estimates the motion of the camera by matching SURF descriptors and dense optical flow feature points between two consecutive frames, and eliminates the effect of the camera movement. After feature extraction, the DT/IDT method encodes the features with the Fisher Vector (FV) method, and then implements human behavior recognition with a Support Vector Machine (SVM) classifier based on the encoded features. This method has good robustness; however, the disadvantage of DT/IDT is that the algorithm is slow and difficult to use in practice.
[0053] In another approach, human body node information can be extracted by a key point detection method based on deep learning, and action features are then extracted from the human node information to obtain the action detection result. Specifically, the Graph Convolutional Network (GCN) approach is a behavior recognition method based on skeleton points: the skeleton points are taken as the nodes of a graph and the bones as its edges. Spatial and temporal features are extracted from the skeleton graph using the GCN and a Temporal Convolutional Network (TCN), respectively, and the behavior detection results are obtained based on these features. However, a method based on key point detection can only utilize the extracted human node information and cannot make use of other information in the image that could assist reasoning, so the resulting detection result is less accurate. In addition, the traditional human key point detection algorithm uses a "top-down" mode: it first detects all people in the image and then detects the key points of each person, so it cannot achieve rapid detection in multi-person scenes.
[0054] In order to solve the above technical problems, the inventors of the present application found that the image to be processed can be preprocessed into an input marker sequence, and feature extraction can then be carried out based on an attention algorithm, thereby realizing global processing of the image to be processed and including features with more detailed information, which makes the behavior detection results of coal mine personnel more accurate. Based on this, the embodiments of the present application provide a coal mine personnel behavior detection method. By performing global image processing on the image to be processed, compared with extracting only human body node information, more detailed information features are included, so that the output behavior classification results of coal mine personnel are more accurate. Moreover, by using the bounding box regression algorithm, the behaviors of multiple people in the image can be detected simultaneously, and the behavior classification results corresponding to the people in each bounding box can be obtained, which ensures the comprehensiveness of the detection results.
[0055] Figure 1A is a schematic diagram of the principle of the coal mine personnel behavior detection method provided in the embodiment of the present application. As shown in Figure 1A, the image to be processed sequentially passes through the convolution-based image block embedding vector network, the cross-covariance attention module, the classification attention module (ClassAttentionLayer), and the standard region of interest header (Standard RoI Head), and an output image with bounding boxes is obtained after processing. Among them, cross-covariance attention is a self-attention that operates between feature channels rather than through explicit full pairwise interaction between tokens, and its attention map is derived from the cross-covariance matrix computed over the key and query projections. The classification attention module is used to classify the extracted classification tag.
[0056] In the specific implementation, the image to be processed is obtained and sequentially preprocessed through the convolution-based image block embedding vector network and positional encoding (PositionalEncoding) to obtain the input marker sequence corresponding to the image. Through the cross-covariance attention module and the classification attention module, an attention output matrix is determined based on the attention algorithm according to the input marker sequence, and a feature map is generated according to the attention output matrix. The standard region of interest header then performs bounding box regression processing on the feature map to obtain the behavior classification results of coal mine personnel in each bounding box. In the coal mine personnel behavior detection method provided herein, by performing global image processing on the image to be processed, compared with extracting only human body node information, more detailed information features are included, so that the output behavior classification results of coal mine personnel are more accurate. Moreover, by adopting the bounding box regression algorithm, the behaviors of multiple people in the image can be detected simultaneously, and the behavior classification results corresponding to the people in each bounding box can be obtained, which ensures the comprehensiveness of the detection results.
[0057] In addition, as shown in Figure 1B, in order to improve the processing effect, a local patch interaction module (LocalPatchInteraction) and a multilayer perceptron (MultilayerPerceptron, MLP) can be added between the cross-covariance attention module and the classification attention module. Among them, local patch interaction strengthens the exchange of information across the block-diagonal interaction pattern in an implicit way; it can be implemented using two layers of depthwise separable 3×3 convolutions together with GELU and BatchNorm2d, which achieves explicit communication between the tags within a 3×3 window. The MLP can include linear transformation units and Gaussian Error Linear Units (GELU); the Gaussian error linear unit applies a form of random regularization to the input of the neural network by gating it with a random value of 0 or 1. In this embodiment, the processing effect can be improved by adopting the processing of local patch interaction and the multilayer perceptron.
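As an illustration of the GELU activation mentioned above, the exact GELU can be written with the Gaussian cumulative distribution function. The sketch below is a plain-Python illustration using `math.erf`, not code from the patent:

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# GELU passes large positive inputs through almost unchanged and gates
# negative inputs smoothly toward zero -- the smooth "0 or 1" gating
# the text alludes to.
print(gelu(0.0))   # 0.0
```

This is why GELU is often described as a stochastic-regularization-inspired activation: it is the expectation of multiplying the input by a Bernoulli gate whose probability depends on the input itself.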
[0058] In addition, the training process of the models shown in Figure 1A and Figure 1B can include creating a sample set and inputting the samples in the sample set into the model for training, finally obtaining the model shown in Figure 1A or Figure 1B. The source of the sample set can be collected action videos of coal mine personnel, and the video length can be set as needed, for example 5 s to 10 s. The collected action videos can be labeled, specifically marked with classification information, and a suitable bounding box can be added for each person in the video, resulting in labeled samples that form the sample set for model training.
[0059] The technical solution of the present application will be described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
[0060] Figure 2 is a schematic flowchart of the coal mine personnel behavior detection method provided herein. As shown in Figure 2, the method includes:
[0061] 201. Obtain the image to be processed, preprocess the image to be processed, and obtain the input marker sequence corresponding to the image to be processed.
[0062] The execution body of the present embodiment can be a computer, a tablet, a mobile phone, a server, or a similar data processing device.
[0063] In this embodiment, the image to be processed can be a picture or a frame image in a video; this embodiment does not limit this.
[0064] In an implementable manner, preprocessing the image to be processed can include: performing convolution processing on the image to be processed to obtain a plurality of image blocks corresponding to the image to be processed; passing the image blocks sequentially through the convolution processing of a preset number of convolution layers to obtain an embedded vector of a preset dimension for the image to be processed; and determining the positional encoding of the plurality of image blocks, and superimposing the positional encoding on the embedded vector to obtain the input marker sequence corresponding to the image to be processed.
[0065] Specifically, as shown in Figure 3, the preprocessing of the image to be processed mainly includes two phases: an image block extraction phase and an image block processing phase. The image to be processed is divided into a set of image blocks via a convolution layer. The set of image blocks then obtains an embedded vector through the convolution-based image block embedding vector network. Finally, the embedded vector is superimposed with the positional encoding to obtain the input marker sequence used for feature extraction.
[0066] Exemplarily, the entire preprocessing process will be described taking an image to be processed of 3 × 256 × 256, a target embedded vector of 768 dimensions, and a target vector size of 16 × 16 as an example.
[0067] First, the image to be processed can be divided into multiple image blocks through a two-dimensional convolution layer with an input dimension of 3, an output dimension of 96, a kernel size of 3 × 3, a stride of 2, and padding of 1. Then, the obtained image blocks are input into the convolution-based image block embedding vector network. The first layer of the network is a two-dimensional convolution layer with an input dimension of 96, an output dimension of 192, a kernel size of 3 × 3, a stride of 2, and padding of 1, followed by a GELU activation layer; after the first layer is processed, features with a dimension of 192 and a size of 64 × 64 are obtained, that is, the size of the feature map is reduced by half at each layer. The second and third layers of the convolution-based image block embedding vector network are similar to the first layer: each uses a two-dimensional convolution layer with a kernel size of 3 × 3, a stride of 2, and padding of 1, followed by a GELU activation layer. The difference is that the input dimension of the second layer is 192 and its output dimension is 384, while the input dimension of the third layer is 384 and its output dimension is 768. After processing by the second and third layers of the network, an embedded vector of 768 dimensions with a target size of 16 × 16 is obtained. It should be noted that the size of the image to be processed, the kernel size of each convolution layer, the number of layers of the embedding vector network, and the related design parameters of each layer in this embodiment are exemplary data and can be set according to actual needs; this embodiment does not limit this.
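The spatial sizes quoted above follow from the standard convolution output formula ⌊(n + 2p − k)/s⌋ + 1. A minimal sketch (plain Python; the layer dimensions are the exemplary ones from this paragraph) tracing the 256 × 256 input down to the 16 × 16 token grid:

```python
def conv_out(n: int, k: int = 3, s: int = 2, p: int = 1) -> int:
    """Output spatial size of a 2D convolution along one axis."""
    return (n + 2 * p - k) // s + 1

# (in_channels -> out_channels) for the stem convolution plus the three
# layers of the embedding network; all are 3x3 / stride 2 / padding 1.
layers = [(3, 96), (96, 192), (192, 384), (384, 768)]

size = 256
for cin, cout in layers:
    size = conv_out(size)
    print(f"{cin:4d} -> {cout:4d} channels, {size} x {size}")
```

The final grid is 16 × 16 = 256 tokens, each a 768-dimensional embedded vector, matching the (1, 256, 768) input marker sequence used in the attention example later.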
[0068] In this embodiment, positional encoding is a method of giving a secondary representation to the position of each image block in the image block sequence; the position information and the image block are combined to form a new representation that is input to the model, so that the model has the ability to learn location information. There are various ways to perform positional encoding; for example, a sinusoidal positional encoding can be employed.
[0069] Exemplarily, after obtaining the embedded vector of 768 dimensions with a target size of 16 × 16, the embedded vector can be superimposed with the sinusoidal positional encoding to obtain the input marker sequence, and subsequent feature extraction is performed based on the input marker sequence.
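A minimal sketch of the sinusoidal positional encoding mentioned above, in plain Python. The patent does not fix the exact variant, so the standard Transformer formulation sin(pos / 10000^(2i/d)) and cos(pos / 10000^(2i/d)) is assumed here:

```python
import math

def sinusoidal_encoding(num_positions, dim):
    """Sinusoidal positional encoding table: one row per position."""
    table = []
    for pos in range(num_positions):
        row = []
        for i in range(0, dim, 2):
            angle = pos / (10000 ** (i / dim))
            row.append(math.sin(angle))      # even channel
            if i + 1 < dim:
                row.append(math.cos(angle))  # odd channel
        table.append(row)
    return table

# 256 tokens (the 16 x 16 grid) with 768-dim embeddings; the encoding is
# simply added element-wise to each token's embedded vector.
pe = sinusoidal_encoding(256, 768)
```

Because each position gets a distinct, deterministic pattern, the model can recover where a token came from in the grid without any trainable parameters for position.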
[0070] 202. According to the input marker sequence, determine the attention output matrix based on the attention algorithm, and generate the feature map according to the attention output matrix.
[0071] In the present embodiment, the attention output matrix can be determined in a variety of ways. In one implementable manner, it can be determined with a self-attention algorithm; in another implementable manner, it can be determined with the cross-covariance attention algorithm. This embodiment does not limit this.
[0072] As shown in Figure 4, the feature extraction section contains two main modules: the attention module (the cross-covariance attention module or the self-attention module) and the classification attention module. The input marker sequence is input into the attention module, where attention weights are allocated to it to obtain the attention output matrix. After the attention output matrix is input into the classification attention module, it is first spliced with the classification sequence, and weight allocation is performed again. After the output matrix of the classification attention module is processed by a point-wise feedforward network, the feature map containing classification information is finally output. Among them, the point-wise feedforward network can map the features of each location to a larger-dimensional feature space, filter them nonlinearly using the ReLU function, and finally return them to the original dimension. In some embodiments, in order to improve training efficiency, layer normalization can be performed before each module, which can eliminate extreme values and improve training stability. In addition, residual connections can be used between modules to prevent the gradient from disappearing and the model from degenerating.
[0073] 203. Perform bounding box regression processing on the feature map to obtain the behavior classification results of the coal mine personnel in each bounding box.
[0074] In the present embodiment, performing bounding box regression processing on the feature map to obtain the behavior classification results of the coal mine personnel in each bounding box may include the following steps: performing pooling processing on the feature map, and extracting the image features of at least one region of interest from the feature pyramid of the feature map; for the image features of each region of interest, determining the candidate box of the region of interest, and converting the candidate box to a fixed size based on the bilinear interpolation algorithm to obtain the converted candidate box; and performing regression on the converted candidate box to obtain the bounding box of the region of interest and the corresponding behavior classification results.
[0075] Specifically, as shown in Figure 5, the feature map obtained after the feature extraction processing is input into the single-head region of interest extractor (SingleRoIExtractor), which can use RoI pooling or a similar method to extract the image features of at least one region of interest (each coal mine person in the image can be defined as a region of interest) from the feature pyramid of the feature map. The image features are then input into the region proposal network (RegionProposalNetwork), which can extract a candidate box for the image features of each region of interest. After the candidate boxes are obtained, the region of interest alignment (RoIAlign) module, which can apply bilinear interpolation to the feature map of a region of interest of any size, converts them into small features with a fixed size H × W, so that the sizes of the corresponding candidate boxes are adjusted and aligned. Then the bounding box generation head (Shared2FCBBoxHead) generates bounding boxes based on the aligned candidate boxes, and the full convolution mask generation head (FCNMaskHead) adds mask information for the coal mine personnel in each bounding box. The final output is an output image with bounding boxes and corresponding masks. By adding bounding boxes and masks to the output image, the coal mine personnel can be highlighted, which makes them easy to capture and gives them more attention. In addition, the images of coal mine personnel can be accurately extracted based on the bounding box and mask, which is convenient for subsequent archiving, warning video production, and other processing.
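The bilinear interpolation that RoIAlign relies on samples a feature value at a non-integer coordinate by blending its four integer-grid neighbors. A minimal single-channel sketch (plain Python, illustrative only; a real RoIAlign samples several such points per output cell and averages them):

```python
def bilinear_sample(fmap, y, x):
    """Sample fmap (a 2D list) at fractional coordinates (y, x)
    by blending the four surrounding grid values."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(fmap) - 1)
    x1 = min(x0 + 1, len(fmap[0]) - 1)
    dy, dx = y - y0, x - x0
    top = fmap[y0][x0] * (1 - dx) + fmap[y0][x1] * dx
    bottom = fmap[y1][x0] * (1 - dx) + fmap[y1][x1] * dx
    return top * (1 - dy) + bottom * dy

fmap = [[0.0, 1.0],
        [2.0, 3.0]]
v = bilinear_sample(fmap, 0.5, 0.5)   # exact center of the 2x2 patch
```

Because sampling uses fractional coordinates directly instead of rounding them to the grid, candidate boxes of any size can be resampled to the fixed H × W feature without the quantization error of plain RoI pooling.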
[0076] In the coal mine personnel behavior detection method provided by this embodiment, by performing global image processing on the image to be processed, compared with extracting only human body node information, more detailed information features are included, so that the output behavior classification results of coal mine personnel are more accurate. Moreover, by adopting the bounding box regression algorithm, the behaviors of multiple people in the image can be detected simultaneously, and the behavior classification results corresponding to the people in each bounding box can be obtained, which ensures the comprehensiveness of the detection results.
[0077] Figure 6 is a second schematic flowchart of the coal mine personnel behavior detection method according to the embodiment of the present application. As shown in Figure 6, in order to save computing power, on the basis of the above embodiment, for example the embodiment shown in Figure 2, feature extraction in this embodiment is performed in accordance with the cross-covariance attention algorithm, which includes:
[0078] 601. Obtain the image to be processed, preprocess the image to be processed, and obtain the input marker sequence corresponding to the image to be processed.
[0079] In this embodiment, step 601 is similar to step 201 in the above-described embodiment, and details are not described herein again.
[0080] 602. Perform linear projection transformation on the input marker sequence to obtain the query matrix Query, the weight matrix Key, and the value matrix Value.
[0081] Specifically, in one implementation, as shown in Figure 7, three linear projection transformations can be performed on the input marker sequence to respectively obtain the query matrix Query, the weight matrix Key, and the value matrix Value. In another implementable manner, in order to improve efficiency, the input marker sequence can be linearly projected once to obtain the transformed input marker sequence, and the transformed input marker sequence is then split to obtain the query matrix Query, the weight matrix Key, and the value matrix Value.
[0082] 603. Multiply the transposed matrix of the weight matrix with the scaled matrix of the query matrix, perform a Softmax operation on the result of the multiplication, and obtain the attention weight matrix.
[0083] 604. Multiply the attention weight matrix with the transposed matrix of the value matrix to obtain the attention output matrix.
[0084] Specifically, the attention weight matrix can be multiplied by the transposed matrix of the value matrix, and a shape transformation is performed on the obtained matrix to obtain the attention output matrix; the shape of the attention output matrix is the same as the shape of the input marker sequence.
[0085] Exemplarily, as shown in Figure 7, taking an input marker sequence of shape (1, 256, 768) as an example, the shape of the input tensor is first acquired, and the input marker sequence is then mapped by linear projection to the three tensors Q (Query), K (Key), and V (Value); the output tensor shape of the linear transformation layer is (1, 256, 2304). The output tensor of shape (1, 256, 2304) is then split to obtain the three matrices Q, K, and V, each of shape (1, 8, 256, 96). Then Q, K, and V are transposed; the transposed matrix of K is multiplied by the transposed, scaled matrix of Q to obtain the attention weight values, and a Softmax operation is performed on them to obtain the attention weight matrix. Finally, the attention weight matrix is multiplied with the transposed matrix of V, the multiplied result is reshaped, and an attention output matrix with the same shape as the input marker sequence is obtained.
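One reading of steps 602 to 604 can be sketched on a tiny example. Here N = 2 tokens and d = 2 channels stand in for the real per-head shapes of 256 × 96; note that the attention weight matrix is d × d, so its size does not grow with the number of tokens (this is the source of the linear complexity discussed in paragraph [0091]). This is an illustrative reconstruction, not the patent's code:

```python
import math

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def transpose(a):
    return [list(col) for col in zip(*a)]

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def cross_covariance_attention(q, k, v, tau=1.0):
    """Channel-wise attention: the d x d weight matrix comes from
    K^T (Q / tau); it then weights the transposed value matrix."""
    scaled_q = [[x / tau for x in row] for row in q]
    weights = matmul(transpose(k), scaled_q)       # d x d weight values
    weights = [softmax(row) for row in weights]    # Softmax per row
    out_t = matmul(weights, transpose(v))          # d x N
    return transpose(out_t)                        # reshape back to N x d

# Toy case: N = 2 tokens, d = 2 channels.
q = [[1.0, 0.0], [0.0, 1.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[2.0, 4.0], [6.0, 8.0]]
out = cross_covariance_attention(q, k, v)
```

The output has the same N × d shape as the input tokens, mirroring how the patent's (1, 256, 768) marker sequence is recovered after the final reshape.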
[0086] 605. Generate a feature map based on the attention output matrix.
[0087] Specifically, after the attention output matrix is obtained, the attention output matrix can be spliced with a preset classification sequence, and the spliced matrix is then processed based on the attention algorithm to obtain the feature map. The preset classification sequence can be obtained through training.
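Splicing with the classification sequence amounts to prepending one learned token to the token sequence, growing it from 256 to 257 entries. A trivial sketch (the token values here are placeholders; in training the classification token is a trainable parameter):

```python
# 256 tokens of dimension 768 (zeros as stand-ins for real features).
tokens = [[0.0] * 768 for _ in range(256)]

# One classification token of the same dimension (placeholder values).
cls_token = [0.1] * 768

# 257 x 768 spliced sequence: the classification attention stage then
# lets this extra token attend to all patch tokens to gather class info.
spliced = [cls_token] + tokens
```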
[0088] In one implementable manner, processing the spliced matrix based on the attention algorithm to obtain the feature map can be carried out with reference to the cross-covariance attention processing of the embodiment shown in Figure 7, and details are not described here again.
[0089] 606. Perform bounding box regression processing on the feature map to obtain the behavior classification results of coal mine personnel in each bounding box.
[0090] In this embodiment, step 606 is similar to step 203 in the above embodiment, and details are not described herein again.
[0091] In the coal mine personnel behavior detection method provided by this example, the attention output matrix is calculated using the cross-covariance attention algorithm. Compared with the self-attention mechanism, cross-covariance attention applies the self-attention operation between feature channels. Since its attention map is derived from the cross-covariance matrix of the key and query projections of the tag features, the complexity of the cross-covariance attention algorithm is only linear in the number of image blocks. Compared with the quadratic interaction of the traditional self-attention mechanism, images with more than one thousand pixels in each dimension can be handled more efficiently.
[0092] Figure 8 is a structural schematic diagram of the coal mine personnel behavior detection device provided in the embodiment of the present application. As shown in Figure 8, the coal mine personnel behavior detection device 80 includes a preprocessing module 801, a generation module 802, and a post-processing module 803.
[0093] The preprocessing module 801 is configured to obtain an image to be processed, preprocess the image to be processed, and obtain an input marker sequence corresponding to the image to be processed;
[0094] The generation module 802 is configured to determine an attention output matrix based on the attention algorithm according to the input marker sequence, and to generate a feature map according to the attention output matrix;
[0095] The post-processing module 803 is configured to perform bounding box regression processing on the feature map to obtain the behavior classification results of the coal mine personnel in each bounding box.
[0096] The coal mine personnel behavior detection device provided in the embodiment of the present application performs global image processing on the image to be processed, which, compared with only extracting human body node information, includes more detailed information features, so that the output behavior classification results of the coal mine personnel are more accurate; and by adopting the bounding box regression algorithm, the behaviors of multiple people in the image can be detected simultaneously, and the behavior classification results corresponding to the personnel in each bounding box can be obtained, which ensures the comprehensiveness of the detection results.
[0097] The coal mine personnel behavior detection device provided in the embodiment of the present application can be used to perform the above method embodiments; its principle and technical effect are similar and are not described here again.
[0098] Figure 9 is a structural block diagram of a coal mine personnel behavior detection device provided herein, which can be a computer, a message transceiver device, a tablet device, a medical device, a server, a visual device, or another data processing device.
[0099] The device 90 can include one or more of the following components: a processing component 901, a memory 902, a power component 903, a multimedia component 904, an audio component 905, an input/output (I/O) interface 906, a sensor component 907, and a communication component 908.
[0100] The processing component 901 generally controls the overall operation of the device 90, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 901 can include one or more processors 909 to execute instructions to complete all or some of the steps of the above method. Additionally, the processing component 901 can include one or more modules that facilitate interaction between the processing component 901 and other components. For example, the processing component 901 can include a multimedia module to facilitate interaction between the multimedia component 904 and the processing component 901.
[0101] The memory 902 is configured to store various types of data to support operation of the device 90. Examples of such data include instructions for any application or method operating on the device 90, contact data, phone book data, messages, pictures, video, and the like. The memory 902 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
[0102] The power component 903 provides power to the various components of the device 90. The power component 903 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 90.
[0103] The multimedia component 904 includes a screen that provides an output interface between the device 90 and the user. In some embodiments, the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide operation, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 904 includes a front camera and/or a rear camera. When the device 90 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
[0104] The audio component 905 is configured to output and/or input audio signals. For example, the audio component 905 includes a microphone (MIC); when the device 90 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals can be further stored in the memory 902 or transmitted via the communication component 908. In some embodiments, the audio component 905 further includes a speaker for outputting audio signals.
[0105] The I/O interface 906 provides an interface between the processing component 901 and peripheral interface modules, which can be a keyboard, a click wheel, buttons, and the like. These buttons can include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
[0106] The sensor component 907 includes one or more sensors for providing state assessments of various aspects of the device 90. For example, the sensor component 907 can detect the open/closed state of the device 90 and the relative positioning of components, such as the display and keypad of the device 90; the sensor component 907 can also detect a change in position of the device 90 or a component of the device 90, the presence or absence of user contact with the device 90, the orientation or acceleration/deceleration of the device 90, and a change in the temperature of the device 90. The sensor component 907 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 907 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 907 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
[0107] The communication component 908 is configured to facilitate wired or wireless communication between the device 90 and other devices. The device 90 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 908 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 908 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
[0108] In an exemplary embodiment, the device 90 can be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
[0109] In an exemplary embodiment, a non-transitory computer readable storage medium including instructions is also provided, such as the memory 902 including instructions; the instructions can be executed by the processor 909 of the device 90 to complete the above method. For example, the non-transitory computer readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
[0110] The above-described computer readable storage medium can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc. The readable storage medium can be any available medium that can be accessed by a general-purpose or dedicated computer.
[0111] An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium can also be a part of the processor. The processor and the readable storage medium can be located in an application specific integrated circuit (ASIC). Of course, the processor and the readable storage medium can also exist as discrete components in the device.
[0112] One of ordinary skill in the art will appreciate that all or part of the steps implementing the above method embodiments can be accomplished by hardware associated with program instructions. The foregoing program can be stored in a computer readable storage medium. When the program is executed, the steps of the above method embodiments are performed; the aforementioned storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
[0113] The embodiment of the present application also provides a computer program product, including a computer program, which, when executed by a processor, realizes the coal mine personnel behavior detection method performed by the above coal mine personnel behavior detection device.
[0114] It should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.