
Method for detecting character interaction in image based on multi-feature fusion

A multi-feature fusion technology for interaction detection, applied in the field of visual relationship detection and understanding. It addresses problems such as the difficulty of determining the correlation between human and object instances, low detection accuracy, and incorrect associations.

Pending Publication Date: 2021-09-10
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

Inference methods that rely only on instance-level features fall short when dealing with relatively complex interaction classes, resulting in low overall detection accuracy. First, without detailed cues it is difficult to determine the correlation between human and object instances from instance-level representations alone, which easily leads to false associations between humans and non-interacting objects. In addition, when only similar instance-level features are used to distinguish fine-grained interaction types, the internal connections between features are not exploited effectively, and complex situations cannot be judged accurately.

Method used



Examples


Embodiment 1

[0043] In this example, referring to Figure 1, a method for detecting human-object interaction in images based on multi-feature fusion operates as follows:

[0044] Step 1: Input the original picture;

[0045] Step 2: Target detection;

[0046] Step 3: Build a human-object interaction recognition network;

[0047] Step 4: Detect the interaction behavior of the person-object pairs in the picture to be tested;

[0048] In step 2, a target detection algorithm detects all the instance information in the picture, including the position of each human body and the position and category of each object. This information is then input into the trained human-object interaction recognition network to detect the interaction behavior between the person-object pairs in the picture to be tested;
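As a sketch of this pairing step, every detected human can be combined with every detected object to form candidate pairs for the recognition network. The data layout below (tuples of box and confidence) is illustrative, not taken from the patent:

```python
def enumerate_pairs(humans, objects):
    """Pair every detected human with every detected object.

    `humans` and `objects` are lists of (box, confidence) tuples,
    where a box is (x1, y1, x2, y2). Each pair becomes one candidate
    input for the interaction recognition network; the product of the
    two detector confidences gives a rough prior on the pair.
    """
    pairs = []
    for h_box, s_h in humans:
        for o_box, s_o in objects:
            pairs.append({"human_box": h_box, "object_box": o_box,
                          "pair_score": s_h * s_o})
    return pairs

humans = [((10, 10, 50, 100), 0.9)]
objects = [((40, 60, 80, 120), 0.8), ((200, 20, 240, 60), 0.7)]
print(len(enumerate_pairs(humans, objects)))  # 2 candidate pairs
```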

[0049] In step 3, the human-object interaction recognition network adopts a multi-branch neural network structure, including a paired branch, an intersection branch, and a short-term memory selection branch...
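One common way such a multi-branch network combines its streams is late fusion of per-branch class scores. The sketch below sums branch logits elementwise and squashes them with a sigmoid; this particular fusion rule is an assumption for illustration, not necessarily the patent's exact scheme:

```python
import math

def fuse_branch_scores(branch_logits):
    """Late-fuse per-branch logits for each interaction class.

    `branch_logits` maps a branch name to a list of class logits.
    Branch logits are summed elementwise, then converted to
    probabilities with a sigmoid (multi-label: one person-object
    pair can exhibit several interactions at once).
    """
    n_classes = len(next(iter(branch_logits.values())))
    fused = [sum(logits[c] for logits in branch_logits.values())
             for c in range(n_classes)]
    return [1.0 / (1.0 + math.exp(-z)) for z in fused]

scores = fuse_branch_scores({
    "paired": [2.0, -1.0],
    "intersection": [1.0, -0.5],
    "memory_select": [0.5, -1.5],
})
print([round(s, 3) for s in scores])  # high for class 0, low for class 1
```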

Embodiment 2

[0052] This embodiment is basically the same as Embodiment 1; the differences are as follows:

[0053] In this embodiment, the target detection process in step 2 is:

[0054] Use the trained target detector to perform target detection on the input picture, obtaining the human candidate box b_h with human confidence s_h, and the object candidate box b_o with object confidence s_o. The subscript h denotes the human body and o denotes the object.
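In HOI detection it is a common convention (used by several published methods, though the patent does not state its exact rule here) to combine these detector confidences multiplicatively with the recognition network's action score; a hedged sketch:

```python
def hoi_score(s_h, s_o, s_action):
    """Final confidence for one <human, object, action> triplet.

    Multiplying the detector confidences s_h and s_o with the
    interaction network's action probability s_action down-weights
    triplets that are built on uncertain detections.
    """
    return s_h * s_o * s_action

print(round(hoi_score(0.9, 0.8, 0.75), 3))  # 0.54
```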

[0055] In this embodiment, constructing the human-object interaction recognition network in step 3 includes the following steps:

[0056] 1) Extract the convolutional features of the entire image:

[0057] Use the classic residual network ResNet-50 to extract convolutional features from the original input image, obtaining the global convolutional feature map F of the entire image. The human body position b_h and object position b_o from the target detection results serve together as the input of the human-object interaction recognition network...
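To pool branch features from the global feature map F, the image-space boxes b_h and b_o must first be mapped onto the feature map's coordinate grid. A minimal sketch of that mapping, where the stride value is an assumption (it depends on which ResNet-50 stage feeds the branches):

```python
import math

def box_to_feature_coords(box, stride=16):
    """Map an image-space box (x1, y1, x2, y2) onto the backbone's
    feature map. `stride` is the backbone's total downsampling factor
    (assumed here, not specified by the patent text). Rounding the
    top-left corner down and the bottom-right corner up keeps the
    whole region covered after quantization.
    """
    x1, y1, x2, y2 = box
    return (math.floor(x1 / stride), math.floor(y1 / stride),
            math.ceil(x2 / stride), math.ceil(y2 / stride))

print(box_to_feature_coords((40, 60, 200, 220)))  # (2, 3, 13, 14)
```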

Embodiment 3

[0075] This embodiment is basically the same as the above-mentioned embodiments; the special features are:

[0076] In this example, as shown in Figure 1, a method for human-object interaction detection in images based on multi-feature fusion proceeds as follows:

[0077] Step 1: Perform target detection on the picture to obtain all instance information, including human body position information and object position and category information, and form human-object instance pairs to input into the interaction recognition network for interaction detection.

[0078] Step 2: Construct a human-object interaction recognition network using a multi-branch neural network structure to learn various features of the instances in the picture. The network includes a paired branch, an intersection branch, and a short-term memory selection branch; each branch extracts different feature information to detect the interaction between person-object pairs. Figure 2 below illustrates the...
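The intersection branch presumably operates on the overlap region between the human and object boxes. The sketch below computes the usual geometric intersection of two boxes; how the patent's intersection branch defines its input region exactly is not specified in this excerpt:

```python
def intersection_box(b_h, b_o):
    """Overlap region of the human box b_h and object box b_o,
    both given as (x1, y1, x2, y2). Returns None when the boxes
    are disjoint, in which case the intersection branch would
    receive no overlap region for this candidate pair.
    """
    x1 = max(b_h[0], b_o[0])
    y1 = max(b_h[1], b_o[1])
    x2 = min(b_h[2], b_o[2])
    y2 = min(b_h[3], b_o[3])
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2, y2)

print(intersection_box((10, 10, 50, 100), (40, 60, 80, 120)))  # (40, 60, 50, 100)
```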



Abstract

The invention discloses a method for detecting human-object interaction in an image based on multi-feature fusion. The method comprises the steps of detecting all instance information in a picture through a target detection algorithm, including human body position information and object position and category information, then inputting the information into a trained interaction behavior recognition network, and detecting the interaction behavior between person-object pairs in the picture to be detected. On the basis of capturing interaction relationships from pose and global spatial configuration, the method focuses on the effective information provided by the human-object intersection area and learns finer local features, increasing the matching probability of correct interaction pairs. Information from people, objects, and background regions is effectively screened and utilized by means of a short-term memory selection module, and the precision of interaction detection is improved through the fusion of multiple features.

Description

technical field [0001] The invention belongs to the technical field of detecting and understanding visual relationships in images using computer vision, and in particular relates to a method for detecting human-object interaction in images based on multi-feature fusion. Background technique [0002] The goal of Human-Object Interaction (HOI) detection in images is to use computer vision to automatically detect the specific positions of interacting people and objects in the input image and to identify the interaction behavior category of each <human, object> pair, so that the machine can automatically understand the image content. Human-object interaction detection is a core technology for automatically understanding deep-level visual relationships and realizing advanced artificial intelligence through computer vision. It can be widely used in many fields such as intelligent robots, security monitoring, information retrieval, and human-computer interaction...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/32; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; G06F18/253
Inventors: Ma Shiwei, Wang Chang, Sun Jinyu
Owner SHANGHAI UNIV