39 results for "Semantic analyzer" patented technology

A semantic analyzer for a subset of the Java programming language. From Wikipedia: Semantic analysis, also called context-sensitive analysis, is a process in compiler construction, usually performed after parsing, that gathers the necessary semantic information from the source code.
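As a rough illustration of what such a semantic-analysis pass does, the sketch below type-checks a toy expression language; the node names and type rules are assumptions chosen for brevity, not taken from the Java analyzer mentioned above.

```python
# Minimal sketch of a semantic-analysis (type-checking) pass for a tiny
# expression language. Illustrative only; not any patented analyzer.
from dataclasses import dataclass
from typing import Union

@dataclass
class IntLit:
    value: int

@dataclass
class BoolLit:
    value: bool

@dataclass
class BinOp:
    op: str           # "+", "==", "and"
    left: "Expr"
    right: "Expr"

Expr = Union[IntLit, BoolLit, BinOp]

def check(expr: Expr) -> str:
    """Return the static type of `expr` ("int" or "bool") or raise on a type error."""
    if isinstance(expr, IntLit):
        return "int"
    if isinstance(expr, BoolLit):
        return "bool"
    lt, rt = check(expr.left), check(expr.right)
    if expr.op == "+" and lt == rt == "int":
        return "int"
    if expr.op == "==" and lt == rt:
        return "bool"
    if expr.op == "and" and lt == rt == "bool":
        return "bool"
    raise TypeError(f"operator {expr.op!r} not defined for {lt} and {rt}")

# Example: (1 + 2) == 3 type-checks to "bool".
print(check(BinOp("==", BinOp("+", IntLit(1), IntLit(2)), IntLit(3))))
```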

Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors

The invention relates to a sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors. The system comprises a gesture recognition subsystem and a semantic displaying and sound producing subsystem. The gesture recognition subsystem comprises multi-axial motion sensors and a multi-channel muscle current acquisition and analysis module; it is worn on the user's left and right arms, acquires the user's raw surface electromyogram signals and arm motion information, and distinguishes gestures by processing the electromyogram signals and the motion sensor data. The displaying and sound producing subsystem comprises a semantic analyzer, a voice control module, a loudspeaker, a displaying module, a storage module, a communication module and the like. By adopting pattern recognition based on the electromyographic signals of both arms and the motion sensor data, the system increases the gesture recognition accuracy rate; combined with the semantic displaying and sound producing subsystem, it translates commonly used sign language into voice or text and improves the efficiency of direct communication between people with language disorders and others.
Owner:SHANGHAI OYMOTION INFORMATION TECH
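A minimal sketch of the semantic displaying and sound producing side of such a system, assuming hypothetical gesture labels, a lookup table, and the pyttsx3 TTS engine, none of which are specified in the patent:

```python
# Hypothetical sketch: map recognized gesture labels to display text and speech.
GESTURE_TO_TEXT = {
    "gesture_hello": "Hello",
    "gesture_thanks": "Thank you",
    "gesture_help": "I need help",
}

def gestures_to_sentence(labels):
    """Translate a sequence of recognized gesture labels into display text."""
    words = [GESTURE_TO_TEXT[g] for g in labels if g in GESTURE_TO_TEXT]
    return ". ".join(words)

def speak(text):
    """Synthesize the text with an off-the-shelf TTS engine (optional dependency)."""
    import pyttsx3  # assumed available; any TTS backend would do
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

sentence = gestures_to_sentence(["gesture_hello", "gesture_help"])
print(sentence)    # displaying module
# speak(sentence)  # loudspeaker output
```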

Common Information Model for Web Service for Management with Aspect and Dynamic Patterns for Real-Time System Management

A system and method are disclosed for reducing the effects of cross-profile crosscutting concerns to enable just-in-time configuration updates and real-time adaptation in the Common Information Model (CIM). The CIM object model is thereby allowed to adapt to dynamic role, resource, or service changes such as logging, debugging, security or quality of service (QoS). An aspect syntactic analyzer extends the CIM Managed Object Format (MOF) to implement aspect and dynamic pattern extensions. The CIM MOF extensions comprise an Aspect Oriented Programming (AOP) join point. The join point can be implemented as an association class referencing two classes or as a method call from a first class to a property of a second class; the two classes may reside in the same or different CIM profiles. A CIM repository is accessed by a CIM Object Manager (CIMOM) comprising an aspect weaver that enables AOP operations between CIM clients and data providers. The CIM providers comprise an Aspect Semantic Analyzer to similarly enable AOP operations comprising CIM MOF aspect and dynamic pattern extensions. As a result, cross-profile crosscutting concerns are reduced, allowing dynamic changes in the CIM model and enabling just-in-time configuration changes and real-time environment adaptation.
Owner:DELL PROD LP
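For readers unfamiliar with AOP, the sketch below shows the join point and weaving idea in generic Python terms; it is not the CIM MOF aspect syntax described in the patent, and the provider class and aspect are invented placeholders.

```python
# Generic AOP illustration: a "join point" is a method call that an aspect
# (here, logging) is woven around without modifying the target class.
import functools

def logging_aspect(method):
    """Advice woven around a join point: runs before and after the call."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        print(f"[aspect] entering {type(self).__name__}.{method.__name__}")
        result = method(self, *args, **kwargs)
        print(f"[aspect] leaving  {type(self).__name__}.{method.__name__}")
        return result
    return wrapper

class DiskProvider:
    """Stand-in for a CIM data provider."""
    @logging_aspect
    def get_free_space(self):
        return 42_000_000_000

print(DiskProvider().get_free_space())
```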

Novel intelligent sign language translation and man-machine interaction system and use method thereof

The invention provides a novel intelligent sign language translation and man-machine interaction system comprising a gesture identifying system and a semantic vocalizing system. The gesture identifying system is connected to a user with a language disorder and transmits the original electromyographic signal; it is connected to the semantic vocalizing system and transmits basic semantic information; the semantic vocalizing system then outputs voice consistent with the semantics of the sign language. The gesture identifying system comprises an electromyographic signal acquisition device, a wireless transceiver, a filter, a characteristic extraction unit and a classifier. The semantic vocalizing system comprises a semantic analyzer, a voice controller and a portable speaker. By adopting a pattern recognition technology based on the electromyographic signal, the system improves gesture identification precision, and by combining it with the voice system it improves the efficiency of communication between users with language disorders and others. After simple adjustment, the system and method can be used by users with language disorders of different body shapes. The electromyographic signal acquisition device and the semantic vocalizing system are worn on a wrist band and a waist band respectively, so the normal life of the user is not affected.
Owner:SHANGHAI JIAO TONG UNIV
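An illustrative sketch of the filter → feature extraction → classifier path this abstract outlines; the window size, EMG features and nearest-centroid classifier are assumptions chosen for brevity, not the patent's actual design.

```python
# Toy EMG gesture classification: feature extraction plus nearest-centroid classifier.
import numpy as np

def extract_features(emg_window: np.ndarray) -> np.ndarray:
    """Two common surface-EMG features per channel: RMS and zero-crossing count."""
    rms = np.sqrt(np.mean(emg_window ** 2, axis=0))
    zero_crossings = np.sum(np.diff(np.sign(emg_window), axis=0) != 0, axis=0)
    return np.concatenate([rms, zero_crossings])

class NearestCentroidClassifier:
    """Toy classifier: assign the label of the closest class centroid."""
    def fit(self, features, labels):
        self.centroids = {
            label: np.mean([f for f, l in zip(features, labels) if l == label], axis=0)
            for label in set(labels)
        }
        return self

    def predict(self, feature):
        return min(self.centroids, key=lambda lbl: np.linalg.norm(feature - self.centroids[lbl]))

# Toy training data: 100-sample windows from 4 EMG channels.
rng = np.random.default_rng(0)
windows = [rng.normal(scale=s, size=(100, 4)) for s in (0.1, 0.1, 0.5, 0.5)]
labels = ["rest", "rest", "fist", "fist"]
clf = NearestCentroidClassifier().fit([extract_features(w) for w in windows], labels)
print(clf.predict(extract_features(rng.normal(scale=0.5, size=(100, 4)))))
```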

Method for computer to simulate human brain in learning knowledge, logic theory machine and brain-like artificial intelligence service platform

Active patent CN108874380A. Keywords: semantic analysis, software design, single sentence, knowledge element.
The invention relates to the field of computers, in particular to a method for a computer to simulate the human brain in learning knowledge, a logic theory machine, and a brain-like artificial intelligence service platform. The method comprises the following steps: establishing a computer brain-like knowledge base, including a lexicon, a class library, a resource library and an intelligent information management library; making the computer call a semantic analyzer to create, by a class method, the class basic elements and semantic properties generated from a single natural-language sentence and store them in the class library; and making the computer call the semantic analyzer to generate, based on the intelligent knowledge elements in the class library, an intelligent application specific to an intelligent application demand and store it in the intelligent information management library. A cognitive model of how the human brain recognizes objective things by intelligent calculation and judgment, together with the intelligent mechanism of logical reasoning based on that model, is transferred to a computer system by artificial means, thereby realizing the intelligent function of a machine that simulates the human brain to study and work and forming a brain-like artificial intelligence service platform.
Owner:HUNAN BENTI INFORMATION TECH RES CO LTD
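A very rough sketch of the idea of turning a single sentence into class basic elements stored in a class library; the naive subject/verb/object split and the dictionary-based library are illustrative assumptions, not the patent's actual semantic analyzer.

```python
# Toy "learning" step: extract class elements from a sentence and store them.
class_library = {}

def analyze_sentence(sentence: str) -> dict:
    """Assume a toy 'subject verb object' sentence and extract its elements."""
    subject, verb, *rest = sentence.rstrip(".").split()
    return {"class": subject, "property": verb, "value": " ".join(rest)}

def learn(sentence: str) -> None:
    """Store the extracted elements in the class library, keyed by class name."""
    element = analyze_sentence(sentence)
    class_library.setdefault(element["class"], []).append(element)

learn("Birds have wings.")
learn("Birds eat seeds.")
print(class_library["Birds"])
```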

Compiler of test scene generation source code and test scene generation system

The invention discloses a compiler for test scene generation source code and a test scene generation system, relating to automatic driving technology. The compiler comprises an integrated development environment for obtaining the source code; a lexical analyzer for analyzing the source code to obtain a regular set; a grammar analyzer for analyzing the regular set according to grammar rules to obtain grammar units; and a semantic analyzer for adding an attribute grammar on the basis of the grammar units to obtain a semantic data structure. During execution, the semantic data structure is used to read the tested road section according to the tested road section information of a road network file, assign participants in the test scene and load the models corresponding to the participants, sequentially determine the information of the behaviors executed by the vehicle and the target vehicle over time, and, on the tested road section, control the participants to execute the corresponding behaviors in sequence according to the information of each behavior, thereby generating the test scene. The test scene is thus generated from source code that is convenient to edit and highly readable, and the corresponding compiler and system are developed.
Owner:BEIJING CATARC DATA TECH CENT +2
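A compact sketch of the lexer → grammar analyzer → semantic analyzer pipeline described above, applied to a hypothetical one-line scene DSL; the DSL syntax, token rules and attribute names are invented for illustration and are not the patent's source language.

```python
# Toy compiler pipeline: lex -> parse -> semantic analysis -> scene structure.
import re

TOKEN_RE = re.compile(r"\s*([A-Za-z_]\w*|\d+(?:\.\d+)?)")

def lex(source: str):
    """Lexical analysis: split each line into word/number tokens."""
    return [TOKEN_RE.findall(line) for line in source.strip().splitlines()]

def parse(token_lines):
    """Grammar analysis: each line must be 'vehicle <name> speed <value>'."""
    units = []
    for tokens in token_lines:
        if len(tokens) != 4 or tokens[0] != "vehicle" or tokens[2] != "speed":
            raise SyntaxError(f"bad statement: {tokens}")
        units.append({"name": tokens[1], "speed": tokens[3]})
    return units

def analyze(units):
    """Semantic analysis: attach typed attributes (speed as a float)."""
    return [{"name": u["name"], "speed_mps": float(u["speed"])} for u in units]

scene = analyze(parse(lex("vehicle ego speed 30\nvehicle target speed 25")))
print(scene)  # the semantic data structure a scene generator would execute
```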

Equipment efficiency evaluation method and device based on knowledge base rule reasoning

The invention discloses an equipment efficiency evaluation device based on knowledge base rule reasoning. The device comprises a sensing interface, a semantic analyzer, a rule reasoning machine, a natural environment influence knowledge base and an execution interface; the natural environment influence knowledge base comprises an entity knowledge base, an influence rule base and an algorithm base. The rule reasoning machine reasons over rule information using deductive reasoning, inductive reasoning, causal reasoning, condition-missing speculation, equipment analogy speculation and decision-table judgment as its rule reasoning calculation methods. The semantic analyzer reads the natural environment, equipment information and the like to obtain a semantic extension set. The invention also discloses an equipment efficiency evaluation method based on knowledge base rule reasoning. The accuracy and completeness of the evaluation conclusion depend on the knowledge base and the constraint rules; by continuously improving the concept elements and constraint rules of the knowledge base, the method can adapt to various types of combat actions, natural environments and the like, improving the overall intelligence level of efficiency evaluation.
Owner:UNIT 32021 OF THE CHINESE PEOPLE'S LIBERATION ARMY
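A minimal forward-chaining (deductive) rule engine in the spirit of the rule reasoning machine described above; the example facts and rules about environmental effects are invented, not drawn from the patent's knowledge base.

```python
# Toy forward-chaining inference over a fact set.
facts = {"temperature_below_-20C", "radar_equipped"}

# Each rule: (set of premise facts, fact to conclude).
rules = [
    ({"temperature_below_-20C"}, "battery_capacity_reduced"),
    ({"battery_capacity_reduced", "radar_equipped"}, "detection_range_reduced"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises hold until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```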

Artificial intelligence deep learning neural network embedded control system

The invention discloses an artificial intelligence deep learning neural network embedded control system comprising a control center and smart external devices. The control center comprises a microprocessor and a voice recognition unit. The voice recognition unit comprises a voice collector and a semantic analyzer; the output end of the voice collector is electrically connected with the input end of the semantic analyzer, the output end of the semantic analyzer is electrically connected with the input end of a command word library, and the output end of the command word library is electrically connected with the input end of the microprocessor. The control center also comprises an infrared remote control unit comprising an infrared signal receiver and an infrared signal code library; the output end of the infrared signal receiver is electrically connected with the input end of the infrared signal code library, the infrared signal code library intercommunicates with an internal memory unit through a dedicated channel, and the output end of the infrared signal code library is electrically connected with the input end of the microprocessor. According to the system, an offline voice command word library can be stored in the internal or external memory unit, thereby realizing offline voice recognition.
Owner:SICHUAN SHENGDA INNOVATION TECH CO LTD
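A sketch of the offline command-word matching step the abstract describes: recognized text is looked up in a local command word library and mapped to a device action. The command words and actions are placeholders, and the acoustic front end (voice collector and recognizer) is out of scope here.

```python
# Toy offline command-word library lookup and dispatch.
COMMAND_WORD_LIBRARY = {
    "turn on the light": ("light", "on"),
    "turn off the light": ("light", "off"),
    "raise temperature": ("thermostat", "up"),
}

def dispatch(recognized_text: str):
    """Map recognized text to a (device, action) pair, or None if unknown."""
    return COMMAND_WORD_LIBRARY.get(recognized_text.strip().lower())

print(dispatch("Turn on the light"))   # ('light', 'on')
print(dispatch("play some music"))     # None -> command is ignored
```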