
Method for selecting camera combination in visual perception network

A visual perception and camera technology, applied to color television parts, television system parts, and televisions in general, which addresses the problem that camera combination selection has received relatively little research.

Status: Inactive; Publication Date: 2014-12-24
NANJING UNIV
Cites: 6; Cited by: 0

AI Technical Summary

Problems solved by technology

To date, there has been relatively little research on methods for selecting camera combinations.



Examples


Embodiment 1

[0071] This embodiment comprises offline training to generate a visual dictionary, online generation of the target image's visual histogram, and sequential forward camera selection; its processing flow is shown in Figure 1. The whole method divides into two main steps, online generation of the target image's visual histogram and camera selection, and the main process of each part is introduced below.

[0072] 1. Online generation of target image visual histogram

[0073] To establish information associations between the cameras, this embodiment first selects multiple channels of video data from the same scene, extracts the local feature information from the video frames, clusters the local feature vectors, and takes the cluster centers as the visual dictionary produced by offline training, so that online video can generate the corresponding visual histogram against the visual dictionary and the associated information can be compared. ...
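As an illustration of this step, here is a minimal sketch that builds the visual dictionary offline by k-means clustering of local feature descriptors (e.g. SIFT vectors pooled from the training videos) and then quantizes a target region's descriptors into a normalized visual histogram online. The dictionary size, the descriptor source, and the use of SciPy's k-means are illustrative assumptions, not the patent's prescribed parameters.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_visual_dictionary(descriptors, k=256):
    """Offline: cluster local feature descriptors (assumed to be, e.g.,
    SIFT vectors pooled from training video frames) and keep the k
    cluster centers as the visual dictionary."""
    centers, _ = kmeans2(descriptors.astype(np.float64), k, minit="++")
    return centers

def visual_histogram(descriptors, dictionary):
    """Online: quantize each descriptor of a target image region to its
    nearest visual word and return a normalized word-frequency histogram."""
    words, _ = vq(descriptors.astype(np.float64), dictionary)
    hist = np.bincount(words, minlength=len(dictionary)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)
```

Histograms produced this way are directly comparable across cameras, which is what allows the selection step to measure redundancy between views.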

Embodiment 2

[0119] The camera selection system implemented by this scheme is applied to the POM data set, which consists mainly of outdoor scenes (Fleuret F, Berclaz J, Lengagne R, Fua P. Multi-camera people tracking with a probabilistic occupancy map [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(2): 267-282). The Terrace video sequences, configured with 4 cameras, are selected: the Terrace2 video sequence serves as the scene training data for training the visual dictionary of this scene, and the online selection test is performed on the Terrace1 video sequence. The original images of the 180th frame are shown in Figure 2, where Figures 2a-2d denote the target images captured by cameras C0, C1, C2, and C3, respectively. Figures 3a-3d show the results of applying, to the target images of Figures 2a-2d, mixed-Gaussian-model background modeling and foreground extraction, combined with texture information for shadow removal, to detect fore...
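The mixture-of-Gaussians foreground extraction used here can be sketched with OpenCV's MOG2 background subtractor; its built-in shadow labeling stands in for the texture-based shadow removal described in the text, and the history length and thresholds are illustrative assumptions.

```python
import cv2

# Mixture-of-Gaussians background model; detectShadows=True makes the
# subtractor mark shadow pixels with the value 127 instead of 255.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def foreground_mask(frame):
    """Per-frame foreground extraction for one camera's video stream."""
    mask = subtractor.apply(frame)
    # Keep confident foreground (255) and drop shadow pixels (127);
    # the patent instead combines texture information for shadow removal.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    # Small morphological opening to suppress isolated noise pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```

The bounding box of the resulting mask gives the target image region from which the local features for the visual histogram are extracted.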

Embodiment 3

[0121] The camera selection system implemented by this scheme is applied to the i3DPost data set (N. Gkalelis, H. Kim, A. Hilton, N. Nikolaidis, and I. Pitas. The i3DPost multi-view and 3D human action/interaction database. In CVMP, 2009). The selected video set is configured with 8 cameras: the Walk video sequences D1-002 and D1-015 are chosen as training data to generate a visual dictionary for this scene, and the online selection test is run on the Run video sequence D1-016. The original images of the 62nd frame are shown in Figures 6a-6h, which represent the images captured by cameras C0, C1, C2, C3, C4, C5, C6, and C7, respectively. Figures 7a-7h show the target area at each viewing angle after motion detection and shadow elimination. For the target area, the face-detection value of C5 and C6 in this scheme's optimal camera selection process is 1, and the value of the other cameras is 0. Comprehensive inf...
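Below is a sketch of how the binary face-detection cue could be fused with histogram novelty into a per-camera utility; the cosine-similarity novelty term and the weight w_face are assumptions for illustration, not the patent's formula (the exact combination rule is truncated above).

```python
import numpy as np

def camera_score(cam, selected, histograms, face_detected, w_face=0.5):
    """Utility of adding camera `cam` given the already-selected cameras.
    histograms: dict camera id -> normalized visual histogram.
    face_detected: dict camera id -> 0 or 1, as in the C5/C6 example."""
    h = histograms[cam]
    if selected:
        sims = [np.dot(h, histograms[s]) /
                (np.linalg.norm(h) * np.linalg.norm(histograms[s]) + 1e-12)
                for s in selected]
        novelty = 1.0 - max(sims)  # reward non-redundant viewpoints
    else:
        novelty = 1.0
    return w_face * float(face_detected[cam]) + (1.0 - w_face) * novelty
```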



Abstract

The invention discloses a method for selecting a camera combination in a visual perception network. The method comprises the following steps. Online generation of a target-image visual histogram: where the fields of view of several cameras overlap, perform motion detection on the online video data from the multiple cameras observing the same object; determine the object's subregion in the video-frame image space from the detection result to obtain a target image region; extract local features from the target image region; and compute the visual histogram of the target image region at that viewing angle against a visual dictionary generated by offline pre-training. Sequential forward camera selection: select the optimal viewing angle, i.e. the optimal camera; then, from the set of unselected cameras, select the next-best camera, add it to the set of selected cameras, and remove it from the set of candidate cameras; repeat until the number of selected cameras reaches the number of cameras needed.
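The selection step in the abstract is a greedy sequential forward search. A minimal sketch follows, assuming each camera is summarized by its visual histogram and that score(selected, candidate) is a utility function such as the camera_score sketch under Embodiment 3.

```python
def sequential_forward_selection(cameras, n_needed, score):
    """Greedy loop from the abstract: repeatedly move the best-scoring
    candidate into the selected set until enough cameras are chosen."""
    candidates = set(cameras)
    selected = []
    while candidates and len(selected) < n_needed:
        best = max(candidates, key=lambda c: score(selected, c))
        selected.append(best)
        candidates.remove(best)
    return selected

# Example with the 4-camera Terrace setup of Embodiment 2 (illustrative):
# chosen = sequential_forward_selection(
#     ["C0", "C1", "C2", "C3"], n_needed=2,
#     score=lambda sel, c: camera_score(c, sel, histograms, face_detected))
```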

Description

Technical Field [0001] The invention relates to a camera selection method, belonging to the technical field of computer vision and video data processing, and in particular to a method for selecting a camera combination in a visual perception network. Background Art [0002] In recent years, owing to the wide application of cameras in security monitoring, human-computer interaction, navigation and positioning, and battlefield environment perception, multi-camera systems have become one of the research hotspots of computer vision and its applications. Especially in applications such as video-based monitoring and human-computer interaction, the visual perception network VSN (Visual Sensor Network) composed of multiple cameras can effectively solve the self-occlusion problem that arises when a single camera observes an object, but it also generates a great deal of redundancy. The redundant information increases the burden of system storage, visual compu...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N5/247; G06T7/00; G06K9/62
Inventors: 孙正兴 (Sun Zhengxing), 李骞 (Li Qian), 陈松乐 (Chen Songle)
Owner: NANJING UNIV