
Action recognition model training method, action recognition method and related device

A technology for action recognition, applied in the field of computer vision, which can solve the problem of low accuracy when recognizing actions in video data, achieving the effects of reducing data volume, improving accuracy, and improving flexibility.

Pending Publication Date: 2021-05-04
BIGO TECH PTE LTD

Problems solved by technology

[0006] The embodiments of the present invention propose an action recognition model training method, an action recognition method, and related devices, to solve the problem of low accuracy when recognizing actions in video data using deep learning methods.



Examples


Embodiment 1

[0065] Figure 1 is a flowchart of an action recognition method provided by Embodiment 1 of the present invention. This embodiment is applicable to scenarios in which actions are recognized based on both global and local information in video data. The method can be executed by an action recognition device, which can be implemented in software and/or hardware and configured in computer equipment such as servers, workstations, and personal computers. The method specifically includes the following steps:

[0066] Step 101. Receive video data.

[0067] In practical applications, users can record video data in real time or edit previously captured video data in a client, such as short videos, micro movies, or live broadcast data, and upload the video data to a video platform with the intention of publishing it for the public to browse and share.

[0068] Different video platforms can formulate video content review standards according to business, legal, and other factors. Befo...
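The sampling step that follows Step 101 (selecting target image data from the multiple frames of original image data, as described in the abstract and in module 1102) can be illustrated with a minimal sketch. The uniform segment-midpoint strategy and the function name below are assumptions for illustration only; the patent excerpt does not specify the exact sampling scheme.

```python
def sample_frames(num_frames: int, num_targets: int) -> list[int]:
    """Pick `num_targets` frame indices from a clip of `num_frames`
    original frames (hypothetical uniform-sampling strategy)."""
    if num_frames <= num_targets:
        # Short clip: keep every original frame.
        return list(range(num_frames))
    # Split the clip into `num_targets` equal segments and take
    # the midpoint frame of each segment.
    step = num_frames / num_targets
    return [int(step * i + step / 2) for i in range(num_targets)]

# e.g. select 8 target frames from a 100-frame clip
indices = sample_frames(100, 8)
```

Sampling a small, evenly spread subset of frames is what yields the "reducing data volume" effect claimed above, since the downstream branches only process the target frames.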

Embodiment 2

[0193] Figure 10 is a flowchart of a training method for an action recognition model provided in Embodiment 2 of the present invention. This embodiment is applicable to scenarios in which actions are recognized based on both global and local information in video data. The method can be executed by a training device for an action recognition model, which can be implemented in software and/or hardware and configured in computer equipment such as servers, workstations, and personal computers. The method specifically includes the following steps:

[0194] Step 1001, determine an action recognition model.

[0195] In this embodiment, the action recognition model can be pre-built. The action recognition model can be implemented using MXNet (a deep learning framework designed for efficiency and flexibility) as the underlying support library, and can be trained using four graphics cards.
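The four-card training mentioned in [0195] is typically a data-parallel setup: each batch is split into near-equal shards, one per card, and gradients are aggregated after the parallel forward/backward passes. The framework-agnostic sketch below shows only the batch-splitting step; the actual MXNet calls are not given in the source, and all names here are hypothetical.

```python
def split_batch(batch: list, num_devices: int) -> list[list]:
    """Split one training batch into near-equal shards, one per
    device, mirroring what a data-parallel trainer does internally."""
    base, rem = divmod(len(batch), num_devices)
    shards, start = [], 0
    for d in range(num_devices):
        # The first `rem` devices take one extra sample when the
        # batch size is not divisible by the device count.
        size = base + (1 if d < rem else 0)
        shards.append(batch[start:start + size])
        start += size
    return shards

# 32 training samples across 4 graphics cards -> 8 samples each
shards = split_batch(list(range(32)), 4)
```

In MXNet this splitting and the subsequent gradient aggregation are handled by the framework's multi-device utilities rather than written by hand.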

[0196] In a specific implement...

Embodiment 3

[0286] Figure 11 is a structural block diagram of an action recognition device provided in Embodiment 3 of the present invention. The device may specifically include the following modules:

[0287] A video data receiving module 1101, configured to receive video data, wherein the video data has multiple frames of original image data;

[0288] A sampling module 1102, configured to perform sampling from the original image data to obtain target image data;

[0289] A global action recognition module 1103, configured to identify an action appearing in the video data according to the global features of the target image data, and obtain a global action;

[0290] A local action recognition module 1104, configured to identify an action appearing in the video data according to the local features of the target image data, and obtain a local action;

[0291] A target action fusion module 1105, configured to fuse the global action and the local action into a target action appearing in the video data. ...
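The module structure above can be sketched as a simple pipeline: the sampling module selects target frames, the global and local branches each produce per-action scores, and the fusion module combines them into the target action. Equal-weight score averaging is an assumption for illustration; the patent excerpt does not state the fusion rule, and all names below are hypothetical.

```python
from typing import Callable

def recognize_action(
    frames: list,
    sample: Callable[[list], list],
    global_branch: Callable[[list], dict],
    local_branch: Callable[[list], dict],
) -> str:
    """Mirror modules 1101-1105: sample target frames, run both
    recognition branches, fuse their scores, return the top action."""
    target = sample(frames)                  # sampling module 1102
    g = global_branch(target)                # global branch 1103
    l = local_branch(target)                 # local branch 1104
    # Fusion module 1105: average the two branches' scores per action
    # (averaging is an assumed fusion rule, not stated in the source).
    fused = {a: 0.5 * g.get(a, 0.0) + 0.5 * l.get(a, 0.0)
             for a in set(g) | set(l)}
    return max(fused, key=fused.get)

# Toy usage with stub branches standing in for the trained model.
result = recognize_action(
    frames=list(range(100)),
    sample=lambda f: f[::10],
    global_branch=lambda t: {"run": 0.6, "jump": 0.4},
    local_branch=lambda t: {"run": 0.2, "jump": 0.8},
)
# fused scores: run 0.4, jump 0.6 -> "jump"
```

Keeping the two branches separate until a final fusion step is what lets the device attend to both global context and local detail, matching the flexibility claim in the abstract.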



Abstract

The embodiments of the invention provide an action recognition model training method, an action recognition method, and a related device. The method comprises the steps of: receiving video data comprising multiple frames of original image data; sampling the original image data to obtain target image data; recognizing actions appearing in the video data according to the global features of the target image data to obtain global actions; recognizing actions appearing in the video data according to the local features of the target image data to obtain local actions; and fusing the global actions and the local actions into target actions appearing in the video data. A local action recognition branch and a global action recognition branch perform action modeling and action recognition on the video data respectively, which overcomes the defect of attending only to local or only to global action information and improves the flexibility of action recognition. Because the action in the video data is predicted by fusing the local actions and the global actions, the accuracy of recognizing a variety of different video data is also improved.

Description

Technical Field

[0001] Embodiments of the present invention relate to the technical field of computer vision, and in particular, to an action recognition model training method, an action recognition method, and related devices.

Background Technique

[0002] With the rapid development of video applications such as short videos, users can create video data and upload it to video platforms anytime and anywhere, resulting in massive video data on the Internet. Due to the openness and wide dissemination of the Internet, major video platforms conduct content review and implement effective supervision of this video data before it is made public.

[0003] Action recognition is part of content moderation and is used to filter video data involving violence, etc.

[0004] Traditional methods for recognizing actions in video data are based on artificially designed feature extraction operators, and the extracted features are difficult to adapt to the content diversity of v...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/46; G06K9/62
CPC: G06V40/20; G06V20/42; G06V20/46; G06V10/44; G06F18/25
Inventor: 蔡祎俊, 卢江虎, 项伟
Owner: BIGO TECH PTE LTD