
1,949 results for patented technology related to "recognition algorithm"

The Text Recognition Algorithm Independent Evaluation (TRAIT) is being conducted to assess the capability of text detection and recognition algorithms to correctly detect and recognize text appearing in unconstrained imagery.

Emergency communications for the mobile environment

Systems and methods for two-way, interactive communication regarding emergency notifications and responses for mobile environments are disclosed. In accordance with one embodiment of the present invention, a specific geographic area is designated for selective emergency communications. The emergency communications may comprise text, audio, video, and other types of data. The emergency notification is sent to users' mobile communications devices such as in-vehicle telematics units, cellular phones, personal digital assistants (PDAs), laptops, etc. that are currently located in the designated area. The sender of the emergency message or the users' service provider(s) may remotely control cameras and microphones associated with the users' mobile communications devices. For example, a rear camera ordinarily used when driving in reverse may be used to capture images and video that may assist authorities in searching for a suspect. The users' vehicles may send photographs or video streams of nearby individuals, cars and license plates, along with real-time location information, in response to the emergency notification. Image recognition algorithms may be used to analyze license plates, vehicles, and faces captured by the users' cameras and determine whether they match a suspect's description. Advantageously, the present invention utilizes dormant resources in a highly beneficial and time-saving manner that increases public safety and national security.
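The geographic targeting described above — delivering the notification only to mobile devices currently inside the designated area — can be sketched as a simple radius test. This is a hypothetical illustration (the function names, device records, and circular-area assumption are mine; the patent does not specify the containment algorithm):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_in_area(devices, center, radius_km):
    # Select the devices (phones, telematics units, PDAs, ...) whose last
    # reported position falls inside the designated circular area.
    return [d for d in devices
            if haversine_km(d["lat"], d["lon"], center[0], center[1]) <= radius_km]
```

A real deployment would query the carrier's location registry rather than a list, and the designated area could be any polygon, not just a circle.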

Cross-camera pedestrian detection and tracking method based on deep learning

The invention discloses a cross-camera pedestrian detection and tracking method based on deep learning, which comprises the following steps: training a pedestrian detection network and carrying out pedestrian detection on an input monitoring video sequence; initializing tracking targets from the target boxes obtained by pedestrian detection, extracting shallow-layer and deep-layer features of the regions corresponding to candidate boxes in the pedestrian detection network, and carrying out tracking; when a target disappears, carrying out pedestrian re-identification, which comprises: after target disappearance information is obtained, finding the images with the highest matching degree to the disappeared target among the candidate images produced by the pedestrian detection network and continuing tracking; and when tracking ends, outputting the motion tracks of the pedestrian targets across multiple cameras. The features extracted by the method can overcome the influence of illumination variations and viewing-angle variations; moreover, for both the tracking and pedestrian re-identification stages, the features are extracted from the pedestrian detection network, so pedestrian detection, multi-target tracking and pedestrian re-identification are organically fused, and accurate cross-camera pedestrian detection and tracking in large-scale scenes is achieved.
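The re-identification step above — matching a disappeared target's features against candidate detections — is commonly done by nearest-neighbour search in feature space. A minimal sketch, assuming the feature vectors have already been extracted by the detection network (cosine similarity is my choice; the patent does not name a specific matching metric):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def reidentify(lost_feature, candidates):
    # candidates: {detection_id: feature_vector} from the other cameras.
    # Return the candidate with the highest matching degree to the lost target.
    return max(candidates, key=lambda cid: cosine_similarity(lost_feature, candidates[cid]))
```

In practice a similarity threshold would also be applied so that a target with no good match stays "lost" rather than being bound to the best of several poor candidates.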

System and method for integrating and controlling audio/video devices

A processor integrating and controlling at least two A/V devices by constructing a control model, referred to as a filter graph, of the at least two A/V devices as a function of a physical connection topology of the at least two A/V devices and a desired content to be rendered by one of the at least two A/V devices. The filter graph may be constructed as a function of at least two device filters corresponding to the at least two A/V devices, in which the device filters include certain characteristics of the at least two A/V devices. These characteristics may include the input or output pins for each device, the media types that the A/V device may process, the type of functions that the device may serve, etc. The desired content may be received as a user input which is entered via a keyboard, mouse or other comparable input device. In addition, the user input may be entered as a voice command, which may be parsed by the processor using conventional speech recognition algorithms or natural language processing to extract the necessary information. Once the filter graph is constructed, the processor may control the at least two A/V devices via the filter graph by invoking predetermined operations on the filter graph, resulting in the appropriate commands being sent to the at least two A/V devices and thereby in the rendering of the desired content.
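Constructing the filter graph amounts to finding a chain of devices whose output pins can feed the next device's input pins for a compatible media type. A minimal sketch (the dictionary-based device model and breadth-first search are my assumptions, not the patent's actual data structures):

```python
from collections import deque

def build_filter_graph(filters, source, sink):
    # filters: {device_name: {"in": set_of_media_types, "out": set_of_media_types}}.
    # Two devices can be connected if one's output media types intersect the
    # other's input media types. BFS returns a shortest connection chain.
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        last = path[-1]
        if last == sink:
            return path
        for name, f in filters.items():
            if name not in path and filters[last]["out"] & f["in"]:
                queue.append(path + [name])
    return None  # no compatible connection topology exists
```

Usage: with a DVD player that outputs HDMI and a TV that accepts HDMI, `build_filter_graph(filters, "dvd", "tv")` yields the chain `["dvd", "tv"]`; inserting an amplifier only changes the result if it is the sole compatible bridge.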

Named entity recognition method based on bidirectional LSTM and CRF

The invention discloses a named entity recognition method based on a bidirectional LSTM and a CRF, which improves and optimizes the traditional named entity recognition algorithms of the prior art. The method comprises the following steps: (1) preprocessing a text and extracting its phrase information and character information; (2) encoding the character information with a bidirectional LSTM neural network to convert it into character vectors; (3) using the GloVe model to encode the phrase information into word vectors; (4) combining the character vectors and word vectors into a context information vector and feeding it into the bidirectional LSTM neural network; and (5) decoding the output of the bidirectional LSTM with a linear-chain conditional random field to obtain the annotated entities of the text. The invention uses a deep neural network to extract text features and decodes those features with a conditional random field, so text feature information can be extracted effectively and good results are achieved in entity recognition tasks for different languages.
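Step (5), decoding the LSTM output with a linear-chain CRF, is typically done with the Viterbi algorithm: the best tag sequence maximizes the sum of per-token emission scores and tag-to-tag transition scores. A self-contained sketch (the dictionary-based score representation is mine; a real model would learn these scores):

```python
def viterbi_decode(emissions, transitions, tags):
    # emissions: one {tag: score} dict per token (e.g. BiLSTM outputs);
    # transitions: {(prev_tag, cur_tag): score} learned by the CRF layer.
    # Returns the highest-scoring tag sequence.
    scores = {t: emissions[0][t] for t in tags}
    backptrs = []
    for em in emissions[1:]:
        new_scores, ptr = {}, {}
        for cur in tags:
            prev = max(tags, key=lambda p: scores[p] + transitions[(p, cur)])
            new_scores[cur] = scores[prev] + transitions[(prev, cur)] + em[cur]
            ptr[cur] = prev
        scores = new_scores
        backptrs.append(ptr)
    best = max(tags, key=lambda t: scores[t])
    path = [best]
    for ptr in reversed(backptrs):  # walk the back-pointers to recover the path
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

The transition scores are what let the CRF forbid invalid tag sequences (for example, an "I-PER" tag directly after "O" can be given a large negative transition score).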

3D (three-dimensional) convolutional neural network based human body behavior recognition method

Inactive · CN105160310A · The extracted features are highly representative · Fast extraction · Character and pattern recognition · Human body · Feature vector
The present invention discloses a 3D (three-dimensional) convolutional neural network based human body behavior recognition method, mainly used to solve the problem of recognizing specific human body behaviors in the fields of computer vision and pattern recognition. The implementation steps of the method are as follows: (1) inputting video; (2) carrying out preprocessing to obtain a training sample set and a test sample set; (3) constructing a 3D convolutional neural network; (4) extracting feature vectors; (5) performing classification training; and (6) outputting test results. In the method, human body detection and motion estimation are implemented using an optical flow method, so a moving object can be detected without knowing any information about the scene. The method performs notably well when the network input is a multi-dimensional image, and it allows an image to be used directly as the network input, avoiding the complex feature extraction and data reconstruction of conventional recognition algorithms and making human body behavior recognition more accurate.
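What distinguishes a 3D convolution from the ordinary 2D case is that the kernel also slides along the time axis of the video clip, so each output value mixes information across neighbouring frames. A minimal single-channel "valid" 3D convolution, written with plain nested lists for clarity (a hypothetical sketch; real networks use optimized tensor libraries):

```python
def conv3d_valid(volume, kernel):
    # volume: T x H x W nested lists (a short video clip of grayscale frames);
    # kernel: t x h x w nested lists. Returns the 'valid' 3-D convolution,
    # of shape (T-t+1) x (H-h+1) x (W-w+1).
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):
        plane = []
        for j in range(H - h + 1):
            row = []
            for k in range(W - w + 1):
                s = sum(volume[i + a][j + b][k + c] * kernel[a][b][c]
                        for a in range(t) for b in range(h) for c in range(w))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

Because the kernel spans several frames, the learned features capture motion as well as appearance, which is why the abstract emphasizes feeding the image sequence to the network directly.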

Mobile terminal capable of entering corresponding scene modes by means of face recognition and implementation method thereof

The invention discloses a mobile terminal capable of entering corresponding scene modes by means of face recognition, and an implementation method thereof. The implementation method includes the following steps: scene modes corresponding to different age groups are preset on the mobile terminal, and the usable applications and operation interface attributes under each scene mode are set; when the mobile terminal is unlocked, a camera acquires image information of the current user; face recognition is carried out on the acquired image to extract the current user's facial features, and the user's age group is estimated from the recognized features; according to the estimated age group, the mobile terminal is controlled to enter the scene mode corresponding to that age group. With the invention, the camera on a mobile phone can capture a face, a face recognition algorithm estimates the age of the current user, and the phone automatically enters the scene mode corresponding to the recognized user's age.
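The final step — mapping an estimated age to a preset scene mode — reduces to a threshold lookup. A sketch with entirely hypothetical age boundaries and mode names (the abstract does not specify them):

```python
# Hypothetical preset mapping: (upper age bound, scene mode).
# Checked in order, so the first bound that is not exceeded wins.
AGE_STAGE_MODES = [
    (12, "child"),      # e.g. restricted apps, simplified interface
    (60, "standard"),   # full application set
    (200, "senior"),    # e.g. enlarged fonts and icons
]

def scene_mode_for_age(age):
    # Return the scene mode preset for the user's estimated age group.
    for upper, mode in AGE_STAGE_MODES:
        if age <= upper:
            return mode
```

The age estimate itself would come from a face-analysis model run on the unlock-camera frame; only the mode selection is shown here.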

Moving target classification method based on online learning

Inactive · CN101389004A · Automatic judgment · Efficient algorithms · Image analysis · Closed-circuit television systems · Classification methods · Image sequence
The invention relates to a method for automatically classifying moving targets with online learning. The method models the background of an image sequence and detects moving targets, scene variations and coverage viewing angles; partitions the scene; extracts and clusters feature vectors; and labels region classes. For each sub-region, a Gaussian distribution and prior probability are initialized from the number of moving targets in the sub-region and a threshold value, using the feature vectors of all moving-target regions passing through the sub-region, thereby initializing a classifier. The moving targets in the sub-region are then classified, the classifier parameters are iteratively optimized online, and the classification results accumulated while tracking each moving target are combined to output the final online-learned classification result. The invention can be used to detect abnormalities in surveillance scenes, establish rules for various target classes and enhance the security of a surveillance system; it identifies objects in surveillance scenes while reducing the complexity of the recognition algorithm and improving the recognition rate; and it supports semantic understanding of surveillance scenes by identifying moving-target classes and aiding comprehension of the behavioral events occurring in the scene.
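The core idea — a Gaussian model per class whose parameters are updated online as new targets pass through — can be sketched with Welford's incremental mean/variance update and a maximum-likelihood decision. This is a one-dimensional illustration under my own simplifications (the patent uses multi-dimensional feature vectors and priors):

```python
import math

class OnlineGaussian:
    # Incrementally maintained Gaussian for one target class
    # (Welford's online mean/variance update).
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def pdf(self, x):
        var = self.m2 / (self.n - 1) if self.n > 1 else 1.0
        return math.exp(-(x - self.mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x, models):
    # Assign the feature value to the class under which it is most likely.
    return max(models, key=lambda c: models[c].pdf(x))
```

For example, after training a "person" model on heights near 1.6 and a "vehicle" model on lengths near 4.2, a new observation of 1.55 is classified as a person; each new confirmed observation then refines its class model via `update`.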

Unmanned aerial vehicle intelligent perception system and method based on multiple sensors

The invention relates to sensor and unmanned aerial vehicle intelligent perception control technology, realizing intelligent perception of an unmanned aerial vehicle's surrounding environment. The system can also be used to verify other autonomous localization algorithms and target recognition algorithms, improving research efficiency in intelligent perception technology. The multi-sensor unmanned aerial vehicle intelligent perception system comprises a laser radar, an RGB-D visual sensor, an IMU (inertial measurement unit), an embedded airborne processor and a flight controller. The laser radar measures distance information of the surrounding environment and collects 2D point cloud data, while the RGB-D visual sensor collects distance and image information of the surrounding environment as 3D point cloud data; the IMU measures the three-axis attitude angles and acceleration of the vehicle; and the embedded airborne processor comprises two independent modules, autonomous positioning and target recognition. The system and method are mainly intended for unmanned aerial vehicle control.
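One standard way the IMU's gyroscope and accelerometer readings are fused into a usable attitude angle is a complementary filter: the gyro integral is smooth but drifts, while the accelerometer-derived angle is absolute but noisy. A minimal sketch (the filter choice and gain are my assumptions; the patent does not describe its fusion method):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # angle:        previous fused attitude estimate (degrees)
    # gyro_rate:    angular rate from the gyroscope (degrees/second)
    # accel_angle:  angle inferred from the accelerometer's gravity vector
    # dt:           time step (seconds); alpha: trust placed in the gyro path
    # Blend the drift-prone gyro integration with the noisy absolute reading.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Called once per IMU sample, it keeps the short-term smoothness of the gyro while the small accelerometer term continually pulls the estimate back toward the true angle, cancelling drift.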

Routing inspection method and routing inspection robot system applied to high-speed railway machine room

The invention discloses a routing inspection method applied to a high-speed railway machine room. The routing inspection method comprises the following steps: (1) a routing inspection robot is started, and start-up status self-check and equipment initialization are performed; (2) a routing inspection task file is read, the numbers of the target equipment cabinets to be inspected are obtained, and the positions of those cabinets are retrieved from the equipment cabinet position database; (3) the routing inspection robot localizes itself in real time from laser radar data, performs global and local path planning, and reaches the target equipment cabinet positions in sequence; on arriving at each target cabinet position, it performs environment detection and inspects the cabinet state using a deep learning neural network model and an image recognition algorithm. The invention also provides a corresponding routing inspection robot system, which can perform automatic periodic inspection of high-speed railway machine room equipment and, when an emergency occurs, complete emergency inspection tasks through remote control.
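Steps (2) and (3) — resolving cabinet numbers to positions and visiting them in sequence — can be sketched with a greedy nearest-cabinet ordering. This is a simplified stand-in for real path planning (function names, the 2-D position database, and the greedy heuristic are all my assumptions):

```python
def plan_inspection(task_cabinets, position_db, start):
    # task_cabinets: cabinet numbers from the inspection task file.
    # position_db:   {cabinet_number: (x, y)} from the position database.
    # Greedily visit the nearest unvisited cabinet next (straight-line
    # distance stands in for the robot's global/local path planner).
    route, pos = [], start
    remaining = {c: position_db[c] for c in task_cabinets}
    while remaining:
        nxt = min(remaining,
                  key=lambda c: (remaining[c][0] - pos[0]) ** 2
                              + (remaining[c][1] - pos[1]) ** 2)
        route.append(nxt)
        pos = remaining.pop(nxt)
    return route
```

An actual system would plan over the machine-room occupancy map built from the laser radar, since straight-line distance ignores aisles and obstacles.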

Intelligent recognition monitoring system and method for unmanned substation

Inactive · CN104333736A · Avoiding the drawbacks of analog small-signal transmission · Easy to install · Character and pattern recognition · Closed-circuit television systems · Video monitoring · Vibration detection
The invention discloses an intelligent recognition monitoring system and method for an unmanned substation. Based on a complete IPVS (internet protocol video surveillance) system, the monitoring system comprises a front-end video acquisition layer, a video storage layer and a peripheral warning layer. The front-end video acquisition layer performs real-time video monitoring of the substation with network cameras that have an intelligent recognition function and transmits monitoring data to a monitoring center through a network transmission layer. The video storage layer combines on-site distributed storage of substation information with centralized storage at the monitoring center. The peripheral warning layer acquires environmental information in real time through an infrared/microwave sensor, a vibration detection device and a temperature and humidity acquisition device. The monitoring center receives abnormal information transmitted by the front-end video acquisition layer, further evaluates it, filters out local alarms caused by normal operation, and raises a remote alarm only when a genuine abnormality exists. An intelligent recognition module adopts different recognition algorithms for different monitored objects, so the running state of a target device and the operating environment of the substation can be rapidly analyzed, and unattended operation of the substation is truly realized.
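Selecting a different recognition algorithm per monitored object, as the intelligent recognition module does, is naturally expressed as a dispatch table. A sketch with placeholder analysis routines (the object types, function names, and toy frame format are mine, purely for illustration):

```python
RECOGNIZERS = {}

def recognizer(object_type):
    # Register a recognition routine for one class of monitored object.
    def wrap(fn):
        RECOGNIZERS[object_type] = fn
        return fn
    return wrap

@recognizer("meter")
def read_meter(frame):
    # Placeholder: a real routine would locate and read the dial or display.
    return {"type": "meter", "value": sum(frame) / len(frame)}

@recognizer("switch")
def read_switch(frame):
    # Placeholder: a real routine would classify the switch position visually.
    return {"type": "switch", "closed": max(frame) > 0.5}

def analyze(object_type, frame):
    # Dispatch each camera frame to the algorithm suited to its object.
    return RECOGNIZERS[object_type](frame)
```

New monitored-object classes then only require registering a new routine, without touching the dispatch logic.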