
316 results about "Street scene" patented technology

Method for detecting color cast of video image based on Lab chrominance space

The invention relates to a method for detecting the color cast of a video image based on the Lab chrominance space. The method comprises the following steps: a) converting the RGB (Red Green Blue) color space into the Lab chrominance space; b) calculating color cast factors. The invention provides a new color cast detection factor, which is then used to detect the color cast of an image. First, the image is converted from the original RGB color space into the Lab chrominance space; then, based on the difference in the gray-level histogram distributions of the a and b chrominance channels between a normal image without color cast and an abnormal image with color cast, the mean values of the a-chrominance and b-chrominance histogram distributions are calculated; finally, the color cast factors are calculated from the statistical relationship between the mean value and the median value of the histograms. Extensive experiments on real street-view image databases show that the new color cast detection factor disclosed by the invention truly reflects the degree of color cast in images and thus completes color cast detection. The method has the advantages of high detection speed and high accuracy.
Owner:湖南乐泊科技有限公司 (Hunan Lebo Technology Co., Ltd.)
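The cast-factor computation described above can be sketched as follows, assuming the image has already been converted to the Lab space (for example with an off-the-shelf converter). The K = D/M form used here is a common formulation of the Lab color cast factor and is an assumption — the abstract does not give the exact formula:

```python
import numpy as np

def color_cast_factor(a: np.ndarray, b: np.ndarray) -> float:
    """Color cast factor K = D / M on the Lab chroma planes.

    D is the distance of the mean (a, b) chroma from the neutral axis;
    M is the mean spread of pixels around that mean.  A large K
    (commonly K > 1) indicates a likely color cast.
    """
    da, db = a.mean(), b.mean()          # mean chroma offsets
    d = np.hypot(da, db)                 # distance from neutral gray
    m = np.hypot(a - da, b - db).mean()  # chroma spread around the mean
    return float(d / m) if m > 0 else float("inf")

# Neutral image: a and b scattered symmetrically around 0 -> small K
rng = np.random.default_rng(0)
a_ok = rng.normal(0.0, 10.0, (64, 64))
b_ok = rng.normal(0.0, 10.0, (64, 64))

# Blue-cast image: b shifted strongly negative -> large K
b_cast = b_ok - 40.0
print(color_cast_factor(a_ok, b_ok))    # small (no cast)
print(color_cast_factor(a_ok, b_cast))  # well above 1 (strong cast)
```

A mean chroma far from the neutral axis relative to the chroma spread is what separates a genuinely cast image from a merely colorful one.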

Vehicle locating method based on matching of road surface image characteristics

The invention discloses a vehicle locating method based on the matching of road-surface image features. The method comprises the steps of: determining the geographical coordinates of the initial position of a vehicle; shooting road-surface images in real time while the vehicle travels; carrying out dodging (brightness equalization) processing on the two most recent consecutive frames of road-surface images in sequence; matching the two dodging-processed consecutive frames in real time to obtain matched point pairs between them; locating the vehicle according to the obtained matched point pairs; and judging whether the current two frames are the last two frames, ending the process if so and otherwise repeating the steps. With this method, road-surface images only need to be collected in real time while the vehicle travels, and matching two consecutive frames is sufficient for automatic vehicle location. The method is resistant to interference, high in location precision, and saves time and labor, because it eliminates the time-consuming and labor-intensive step of collecting all-around street scenes in advance that existing locating methods require.
Owner:CHANGAN UNIV
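The per-frame pose update from matched point pairs can be sketched as below, assuming the feature matching itself (for example with ORB or SIFT descriptors) has already produced the pairs. The least-squares rigid-transform step shown is the standard Kabsch/Procrustes technique, not necessarily the patent's exact algorithm:

```python
import numpy as np

def rigid_transform_2d(p: np.ndarray, q: np.ndarray):
    """Least-squares 2D rotation R and translation t with q ~ p @ R.T + t.

    p, q: (N, 2) arrays of matched road-surface feature points from two
    consecutive frames; the matching step is assumed already done.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                  # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = qc - r @ pc
    return r, t

# Synthetic check: rotate known points by 10 degrees and shift them.
rng = np.random.default_rng(1)
p = rng.uniform(-1, 1, (30, 2))
ang = np.deg2rad(10.0)
r_true = np.array([[np.cos(ang), -np.sin(ang)],
                   [np.sin(ang),  np.cos(ang)]])
q = p @ r_true.T + np.array([0.3, -0.2])
r_est, t_est = rigid_transform_2d(p, q)
print(np.allclose(r_est, r_true), np.allclose(t_est, [0.3, -0.2]))  # True True
```

Accumulating the per-frame (R, t) from the initial geographic coordinates gives the vehicle's current position, which is the essence of the dead-reckoning loop the abstract describes.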

Urban multi-source data-based street space quality measurement evaluation method and system

The invention relates to the field of urban planning and provides an urban multi-source data-based street space quality measurement evaluation method, which comprises the following steps: obtaining urban road network data and point-of-interest data in a research area, and preprocessing them to obtain street-view sampling points; obtaining multiple groups of street-view image data through the sampling points; inputting each piece of street-view image data into a trained semantic feature extraction model to obtain a street-view semantic element data table; constructing measurement evaluation indexes of urban street space quality from the street-view semantic element data table, the point-of-interest data, and the urban road network data; and obtaining the distribution rule and distribution mode of urban space quality from the measurement evaluation indexes. The invention studies street quality at the microscopic scale while also widening the research scope to the urban macroscopic level, and can remarkably improve the accuracy of street space quality measurement.
Owner:CHINA UNIV OF GEOSCIENCES (WUHAN)
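A composite quality index built from per-image semantic element fractions can be sketched as follows. The abstract does not give the actual indicators or weights, so the element names and weights below are hypothetical placeholders:

```python
def street_quality(fractions: dict, weights: dict) -> float:
    """Weighted sum of semantic element fractions, normalized by the
    total weight so the score stays in [0, 1].

    fractions: per-image pixel fractions from a semantic segmentation
               model (e.g. {"vegetation": 0.28, "sky": 0.15, ...})
    weights:   hypothetical importance of each element for quality
    """
    total_w = sum(weights.values())
    return sum(weights[k] * fractions.get(k, 0.0) for k in weights) / total_w

# Illustrative image: 28% vegetation, 15% sky, 40% building, 17% road
sample = {"vegetation": 0.28, "sky": 0.15, "building": 0.40, "road": 0.17}
weights = {"vegetation": 3.0, "sky": 1.0, "building": 1.0}  # placeholders
print(round(street_quality(sample, weights), 3))  # 0.278
```

Scoring every sampling point this way and mapping the scores back onto the road network is what yields the spatial distribution patterns the abstract mentions.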

Guide board recognizing method and device

The invention discloses a guide board recognition method and device, and belongs to the technical field of computers. The method comprises the steps that: color detection is conducted on each pixel of a street scene image to be processed, so that the pixels matching the color of a guide board are obtained; those pixels are connected, so that at least one first connected area is obtained; area filtering is conducted on the first connected areas according to the image characteristics of the guide board, so that at least one second connected area is obtained; the image characteristic of each second connected area is extracted; the image characteristics of the second connected areas are filtered by means of a guide board recognition support vector machine trained on guide board images, and a guide board area is screened out from the second connected areas; and character recognition is conducted on the guide board area, so that the guide board information is obtained. Because no manual intervention is needed in the recognition process, recognition cost is reduced and recognition efficiency is improved.
Owner:TSINGHUA UNIV +1
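The "first connected area" and area-filtering steps can be sketched with a plain BFS connected-component pass. The color-detection mask (pixels matching the guide-board color, e.g. a blue/green HSV range) is assumed to be given:

```python
from collections import deque
import numpy as np

def connected_regions(mask: np.ndarray, min_area: int):
    """4-connected regions of a boolean mask with at least min_area pixels.

    mask marks pixels matching the guide-board color; regions smaller
    than min_area are discarded, mirroring the area-filtering step.
    """
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                queue, pixels = deque([(y, x)]), []
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:   # area filtering
                    regions.append(pixels)
    return regions

m = np.zeros((8, 8), dtype=bool)
m[1:4, 1:5] = True     # 12-pixel blob: plausible guide board (kept)
m[6, 6] = True         # 1-pixel noise (filtered out)
print(len(connected_regions(m, min_area=4)))  # 1
```

The surviving regions would then be described by shape/texture features and passed to the trained SVM for the final screening step.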

Virtual reality technology-based scenery viewing system

The invention relates to a virtual reality technology-based scenery viewing system. In the system, the scenery viewing equipment is realized with an Android intelligent terminal, used in combination with a street-view API provided by map service providers such as Tencent and Baidu and with conveyor belt equipment such as a treadmill, so that a user can enjoy the scenery along a selected route while walking or running on the conveyor belt. The system is composed of a touch-control display screen and a corresponding speed collecting device: either an angular-velocity wireless sensor mounted on the conveyor crawler belt, which acquires a precise instantaneous speed, or a device with an accelerometer and gyroscope worn by the user, such as a mobile phone or smart band, from which a reasonably accurate estimated speed can be calculated. By combining street-view maps with the conveyor belt device, the system provides a virtual-reality scenery viewing experience. It is simple to assemble and can be quickly deployed on conveyor belt devices such as treadmills.
Owner:FUZHOU UNIV
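The core of the speed pipeline is the conversion from the sensor's angular velocity to belt speed (v = ω·r) and the advance of the street-view position along the route. The roller radius and sample values below are illustrative, not from the patent:

```python
def belt_speed(omega_rad_s: float, roller_radius_m: float) -> float:
    """Linear belt speed v = omega * r from the angular-velocity sensor."""
    return omega_rad_s * roller_radius_m

def advance_position(pos_m: float, omega_rad_s: float,
                     roller_radius_m: float, dt_s: float) -> float:
    """New distance along the selected route after dt seconds,
    used to request the next street-view panorama."""
    return pos_m + belt_speed(omega_rad_s, roller_radius_m) * dt_s

# Roller of 3 cm radius spinning at 50 rad/s -> 1.5 m/s walking pace
print(belt_speed(50.0, 0.03))                    # 1.5
print(advance_position(10.0, 50.0, 0.03, 2.0))   # 13.0
```

The phone/band variant would integrate accelerometer readings instead of using ω·r, trading precision for not having to instrument the treadmill.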

Text positioning method and system based on visual structure attribute

The invention belongs to the technical field of image recognition, and particularly relates to a text positioning method and system based on visual structure attributes. Based on the visual attributes of text, abundant closed edges are detected by means of color-polarity difference transformation and edge neighborhood tail-end bonding, so that abundant candidate connected elements are obtained; character-stroke attribute and text-colony attribute screening is then conducted to extract the connected elements belonging to characters from the candidates; and the final text is positioned through multi-channel blending and duplicate connected-element removal. The method is highly robust and can handle situations where multiple languages are mixed, various font styles exist, arrangement directions are arbitrary, or background interference is present. The positioned text can be provided directly to OCR software for recognition, increasing the OCR recognition rate. The method and system can be applied to image and video retrieval, junk information blocking, vision-assisted navigation, street-view positioning, industrial equipment automation, and other fields.
Owner:SHENZHEN UNIV
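A crude stand-in for the "text colony" screening step: group candidate character boxes with similar heights and small horizontal gaps into text lines, then drop isolated blobs. The grouping rule and thresholds here are hypothetical, not the patent's:

```python
def group_text_lines(boxes):
    """Group candidate character boxes (x, y, w, h) into text lines when
    left-to-right neighbors have similar height and a small horizontal
    gap; lines with fewer than two members are dropped as noise.
    Sorting only by x is a simplification for single-line layouts.
    """
    boxes = sorted(boxes)                      # left to right by x
    lines, current = [], [boxes[0]]
    for b in boxes[1:]:
        px, py, pw, ph = current[-1]
        x, y, w, h = b
        same_height = abs(h - ph) <= 0.4 * max(h, ph)
        small_gap = x - (px + pw) <= 1.5 * max(h, ph)
        if same_height and small_gap:
            current.append(b)
        else:
            lines.append(current)
            current = [b]
    lines.append(current)
    return [line for line in lines if len(line) >= 2]

# Three character-sized boxes in a row plus one stray blob far away
chars = [(10, 5, 8, 12), (20, 5, 7, 11), (29, 6, 8, 12), (80, 40, 5, 5)]
print(len(group_text_lines(chars)))   # 1 line (of three characters)
```

In the full method this grouping runs per color channel, and the multi-channel results are blended with duplicates removed before handing regions to OCR.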

Three-dimensional point cloud data instance segmentation method and system in automatic driving scene

The invention provides a three-dimensional point cloud data instance segmentation method and system for an automatic driving scene. The method comprises the steps of: carrying out preliminary recognition and division of an outdoor street scene using the spatial position information of target objects, forming point cloud visual columns of regions of interest; extracting, from these visual columns, visual-column point clouds containing objects together with identically distributed negative-sample background point clouds, to form a visual-column point cloud data set; and extracting the high-dimensional semantic feature information of the object contained in each visual-column point cloud in the data set, while introducing a weighted multi-classification focal loss function to obtain the category to which each point in the visual column belongs, thereby realizing instance segmentation of the point cloud data. The method can effectively enhance the expression of target detail features, so that the prediction capability on difficult point cloud samples and the performance of point cloud instance segmentation in the automatic driving scene are both enhanced.
Owner:SHANGHAI JIAO TONG UNIV
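The weighted multi-class focal loss can be sketched in NumPy as below; gamma = 2 and the per-class weights are conventional placeholder values, since the abstract does not state them:

```python
import numpy as np

def focal_loss(probs, labels, weights, gamma: float = 2.0) -> float:
    """Weighted multi-class focal loss:
    mean over points of  -w_c * (1 - p_t)^gamma * log(p_t).

    probs:   (N, C) softmax probabilities per point
    labels:  (N,)   ground-truth class index per point
    weights: (C,)   per-class weights (larger for rare/hard classes)
    """
    pt = probs[np.arange(len(labels)), labels]   # prob of the true class
    w = weights[labels]
    return float(np.mean(-w * (1.0 - pt) ** gamma * np.log(pt)))

probs = np.array([[0.9, 0.05, 0.05],   # easy, confidently classified point
                  [0.3, 0.6, 0.1]])    # harder point
labels = np.array([0, 1])
weights = np.array([1.0, 2.0, 4.0])    # placeholder class weights

# The (1 - p_t)^gamma factor down-weights easy points, so hard points
# dominate the gradient -- the "difficult sample" effect in the abstract.
easy = -1.0 * (1 - 0.9) ** 2 * np.log(0.9)
hard = -2.0 * (1 - 0.6) ** 2 * np.log(0.6)
print(hard > easy)   # True
```

With gamma = 0 and unit weights this reduces to plain cross-entropy, which is why the focal term specifically helps the hard-sample prediction the abstract emphasizes.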