Indoor typical scene matching and positioning method based on neural network

A scene-matching and neural-network technology, applied in the field of computer vision, that solves problems such as traditional matching methods being unable to cope with large data volumes, and achieves the effects of improved detection accuracy and high training efficiency, making up for the shortcomings and deficiencies of existing approaches.

Inactive Publication Date: 2019-08-16
HANGZHOU DIANZI UNIV
AI Technical Summary

Problems solved by technology

[0004] In vision-based indoor positioning methods, image matching is the most important technical link. Traditional image-matching techniques (such as histogram comparison and the SIFT algorithm) can no longer meet current requirements for large data volumes and complex environments.



Embodiment Construction

[0016] The present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments:

[0017] As shown in Figure 1, the method described in this embodiment employs a video data acquisition device, a computer/server and a typical-scene discriminator, with the server connected to the video acquisition device. As shown in Figure 3, the video data acquisition device comprises a mobile camera, a video-frame formatting module and an image preprocessing module. The mobile camera is used to acquire video data; the video-frame formatting module converts the real-time video data into formatted video frames f(x, t), where t denotes time and f(·) denotes the video-data formatting function; the image preprocessing module decides, according to the video images collected by the video acquisition device, whether to preprocess the collected images. Other functions are the same as in the existing video c...
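The frame-formatting function f(x, t) described above can be sketched as follows. This is a minimal illustration in NumPy, not the patent's implementation: the output size, nearest-neighbour resizing, [0, 1] normalisation and the field names are all assumptions.

```python
import numpy as np

def format_frame(frame: np.ndarray, t: float, size=(224, 224)) -> dict:
    """Illustrative f(x, t): resize a raw frame by nearest-neighbour sampling,
    normalise pixel values to [0, 1], and tag the frame with its timestamp t."""
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row index for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column index for each output column
    resized = frame[rows][:, cols].astype(np.float32) / 255.0
    return {"t": t, "frame": resized}

# usage: a dummy 480x640 RGB frame captured at t = 0.5 s (hypothetical values)
raw = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
fx_t = format_frame(raw, 0.5)
```

In practice the resizing and colour handling would come from the camera pipeline or a library such as OpenCV; the point here is only that each formatted frame pairs image data x with its acquisition time t.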



Abstract

The invention provides an indoor typical-scene matching and positioning method based on a neural network. The method comprises: step 1, establishing a standard typical-scene positioning image library at the server side; step 2, adopting a Siamese deep neural network model and, through training on a large amount of data, enabling the network to learn a similarity measure from the data; step 3, outputting a feature vector from the deep neural network, using the feature vector to calculate the similarity with the standard typical-scene image library, judging the matching degree of the typical scene according to the similarity, and evaluating the quality of the model; and step 4, loading the trained model onto a server, acquiring video data, feeding it into the trained deep neural network on the server, calculating the similarity, and judging the current position. The method has the advantages of high training efficiency, fast convergence, high modelling precision, good matching performance and suitability for complex environments, and can accurately and efficiently achieve online matching and positioning of equipment in indoor typical scenes.

Description

technical field
[0001] The invention relates to the field of computer vision, and in particular to a neural-network-based indoor typical-scene matching and positioning method.
Background technique
[0002] With the advancement of science and technology and the improvement of people's economic circumstances, people pay increasing attention to location-based services. At present, outdoor positioning systems are very mature, but outdoor systems such as GPS cannot locate effectively indoors. Existing indoor positioning methods based on the iBeacon Bluetooth module or on Wi-Fi positioning technology are easily affected by factors such as the low positioning accuracy of the method itself and occlusion by buildings, and cannot accurately locate the user's current position.
[0003] The vision-based positioning technology emerging today has attracted widespread attention because of its simple equipment and the small number of factors affecting it. Since the ca...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/30, G06T7/70
CPC: G06T2207/10016, G06T2207/20081, G06T2207/20084, G06T7/30, G06T7/70
Inventor: 郭春生, 容培盛, 应娜, 陈华华, 杨萌, 章建武
Owner HANGZHOU DIANZI UNIV