
Anthropomorphic oral translation method and system with man-machine communication function

A technology combining oral translation with human-computer dialogue, applied to oral translation methods and corresponding systems. It addresses problems such as the lack of communication between the translation software and the user, the difficulty of meeting accuracy requirements, and the difficulty of meeting scene-friendliness requirements, thereby improving translation accuracy and delivering an intelligent, easy-to-use translation and interaction experience.

Status: Inactive
Publication Date: 2017-11-03
BEIJING ZIDONG COGNITIVE TECH CO LTD

AI Technical Summary

Problems solved by technology

However, current machine spoken-language translation methods present end-to-end translations without handling the complexity of the actual scene and its semantics, which makes it difficult to meet accuracy requirements.
At the same time, because translation as a software service lacks any communication with the user, it is difficult to meet the scene-friendliness requirements of real application scenarios.



Examples


Example 1

[0038] First embodiment: a two-party oral translation dialogue system on a mobile phone

[0039] In this embodiment, a two-party oral translation dialogue system on a mobile phone is provided. The system provides an end-to-end spoken translation dialogue function for both parties and initiates a man-machine dialogue with the user when necessary to improve the user's translation experience.

[0040] (1) Obtain the source-language speech input by the speaker; the input method can be chosen according to the speaker's environment and usage habits (as shown in Figure 6). A minimal code sketch of this step follows the alternatives below.

[0041] If the speaker's current environment is not conducive to direct voice input, the system offers direct input of source-language text as an alternative;

[0042] If the speaker is accustomed to manually specifying the languages of the two parties in the conversation, the system provides a button for specifying the language manually, and at the same time allows ma...
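The input-acquisition step above can be illustrated with a short sketch. This is a minimal, hypothetical implementation: the helpers `capture_voice` and `capture_text` and the `SourceInput` record are placeholder names, since the patent describes the behaviour but defines no concrete API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical helpers; the patent names no such functions.
def capture_voice() -> str:
    """Record audio and return a reference to the captured utterance."""
    return "audio_ref_001"  # placeholder

def capture_text() -> str:
    """Fallback: accept typed source-language text instead of speech."""
    return "typed source-language text"  # placeholder

@dataclass
class SourceInput:
    content: str                 # audio reference or typed text
    is_text: bool                # True when the text fallback was used
    source_lang: Optional[str]   # set only if the speaker chose the language manually

def acquire_source_input(noisy_environment: bool,
                         manual_language: Optional[str] = None) -> SourceInput:
    """Step (1): obtain the speaker's input, honouring environment and habits."""
    if noisy_environment:
        # Environment unsuitable for voice input: offer direct text entry instead.
        return SourceInput(capture_text(), True, manual_language)
    # Default path: capture speech; the language may be fixed manually
    # or left for the system to identify automatically.
    return SourceInput(capture_voice(), False, manual_language)
```

A real implementation would replace the placeholders with the device's microphone capture and keyboard input paths.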

Example 2

[0070] Second embodiment: a multi-party oral translation conference system on a mobile phone

[0071] In this embodiment, a multi-party spoken-language translation conference system on a mobile phone is provided. The system provides participants with an end-to-end multi-party spoken-language conference translation function, provides an intelligent conference moderator function, and initiates a man-machine dialogue with the participants when necessary to improve the translation experience of the meeting.

[0072] (1) Obtain meeting information and create a meeting (as shown in Figure 10). The meeting creator specifies the meeting identification code, which is the basis for uniquely identifying the meeting; other participants join the specified meeting by entering this identification code. The meeting creator also specifies the meeting name, which summarises the meeting's content or participants. The informat...
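The meeting-creation step can be illustrated with a minimal in-memory sketch. The `Meeting` and `MeetingRegistry` names are hypothetical; the patent only specifies that a creator sets an identification code and a name, and that other participants join by entering the code.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Meeting:
    meeting_id: str                 # identification code: the meeting's unique handle
    name: str                       # meeting name: summary of content or participants
    participants: List[str] = field(default_factory=list)

class MeetingRegistry:
    """Minimal in-memory registry; the patent does not prescribe storage details."""

    def __init__(self) -> None:
        self._meetings: Dict[str, Meeting] = {}

    def create(self, meeting_id: str, name: str, creator: str) -> Meeting:
        if meeting_id in self._meetings:
            raise ValueError("meeting identification code already in use")
        meeting = Meeting(meeting_id, name, [creator])
        self._meetings[meeting_id] = meeting
        return meeting

    def join(self, meeting_id: str, participant: str) -> Meeting:
        # Other participants join by entering the identification code.
        meeting = self._meetings[meeting_id]
        meeting.participants.append(participant)
        return meeting

# Usage sketch (identifiers are invented for illustration).
registry = MeetingRegistry()
registry.create("MTG-001", "Cross-border project sync", creator="creator_phone")
registry.join("MTG-001", "participant_phone")
```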

Example 3

[0099] Third embodiment: an anthropomorphic spoken-language translation system for screenless devices

[0100] In this embodiment, an anthropomorphic spoken-language translation system without a screen display is provided. The system provides end-to-end anthropomorphic translation services to users of screenless devices. The system adopts the following technical solution (as shown in Figure 13):

[0101] (1) Obtain information about the speaker

[0102] In the absence of a screen display, the system acquires relevant information about the speakers through intelligent processing of their voice input. This information includes, but is not limited to, the following (a code sketch follows these items):

[0103] Optionally, the system requests all speakers to speak a common phrase in turn when starting up, so as to obtain the language information of the dialogue participants;

[0104] The system automatically identifies the speaker, and uses the identification result as an important basis for language ide...
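A minimal sketch of this speaker and language registration step, under the assumption that spoken-language identification and speaker identification are available as upstream components (they are stubbed out here, and all identifiers are invented):

```python
from typing import Dict

class ScreenlessSessionSetup:
    """Sketch of step (1) in the screenless system: each speaker says a common
    phrase at start-up so their language can be registered, and later utterances
    are matched back to a registered speaker to choose the translation direction."""

    def __init__(self) -> None:
        self.speaker_languages: Dict[str, str] = {}

    def register_speaker(self, speaker_id: str, detected_language: str) -> None:
        # In the real system the language would come from spoken-language
        # identification on the common phrase; here it is passed in directly.
        self.speaker_languages[speaker_id] = detected_language

    def language_for(self, speaker_id: str) -> str:
        # Speaker identification on a new utterance yields speaker_id, which is
        # then used as an important basis for selecting the language pair.
        return self.speaker_languages[speaker_id]

# Usage sketch with invented identifiers and languages.
setup = ScreenlessSessionSetup()
setup.register_speaker("speaker_A", "zh")
setup.register_speaker("speaker_B", "en")
assert setup.language_for("speaker_B") == "en"
```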



Abstract

The invention provides an anthropomorphic oral translation method with a man-machine communication function. The method comprises the following steps: conducting intelligent speech recognition on the source-language speech to obtain source-language text; processing the source-language text and the communication scene, and conducting anthropomorphic man-machine communication; and performing machine translation to obtain the translation result. The invention further provides an anthropomorphic oral translation system with the man-machine communication function. With the system, man-machine communication with the user is conducted when the translation task requires it, information that markedly improves the user's translation experience in complex application scenes is obtained accurately, and the semantic accuracy of the translation is improved.
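A minimal sketch of the abstract's three steps follows, with hypothetical stand-ins for recognition, ambiguity detection, and translation; the patent defines no such interfaces, and the example sentence and question are invented.

```python
from typing import Callable, Optional

# Hypothetical stand-ins for the recognition, clarification, and translation
# engines; the patent describes the steps but defines no concrete interfaces.
def speech_to_text(audio_ref: str) -> str:
    return "he went to the bank"            # placeholder recognition result

def find_ambiguity(source_text: str, scene: str) -> Optional[str]:
    """Return a clarification question when text plus scene is ambiguous."""
    if "bank" in source_text:
        return "Do you mean a river bank or a financial institution?"
    return None

def translate(source_text: str) -> str:
    return "<translated text>"              # placeholder machine translation

def oral_translate(audio_ref: str, scene: str,
                   ask_user: Callable[[str], str]) -> str:
    """The abstract's three steps: recognise, clarify via man-machine
    communication when necessary, then machine-translate."""
    source_text = speech_to_text(audio_ref)           # step 1: speech recognition
    question = find_ambiguity(source_text, scene)     # step 2: scene-aware check
    if question is not None:
        answer = ask_user(question)                   # anthropomorphic dialogue
        source_text = f"{source_text} ({answer})"     # fold the answer back in
    return translate(source_text)                     # step 3: machine translation

# Usage sketch with a canned user reply.
print(oral_translate("audio_ref_001", scene="travel",
                     ask_user=lambda question: "a financial institution"))
```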

Description

technical field

[0001] The invention relates to the field of computers and artificial intelligence, and in particular to a spoken-language translation method, and a corresponding system, that adds an anthropomorphic man-machine dialogue mechanism to the translation process.

Background technique

[0002] With the popularization of the Internet and the rapid advance of globalization, oral translation, as an effective solution to the high cost, high threshold, and supply-demand imbalance of human translation, is in strong market demand in many scenarios such as daily life, business negotiation, and international communication.

[0003] Bilingual oral translation technology has the composition shown in Figure 1, which includes speech recognition, speech synthesis, and two-way translation between the source and target languages. Among them, two-way speech recognition and two-way translation are technologies that mus...
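As a rough illustration of the composition described for Figure 1, the following sketch wires recognition, two-way translation, and synthesis components into one pipeline. All callables are hypothetical placeholders, not the patent's actual modules, and the stubbed outputs are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OralTranslationPipeline:
    """Rough composition of bilingual oral translation as summarised for
    Figure 1: recognition and synthesis for each language plus two-way
    translation. All callables are hypothetical placeholders."""
    recognize: Callable[[str, str], str]        # (audio_ref, language) -> text
    translate: Callable[[str, str, str], str]   # (text, src_lang, tgt_lang) -> text
    synthesize: Callable[[str, str], str]       # (text, language) -> audio_ref

    def run(self, audio_ref: str, src_lang: str, tgt_lang: str) -> str:
        text = self.recognize(audio_ref, src_lang)
        translated = self.translate(text, src_lang, tgt_lang)
        return self.synthesize(translated, tgt_lang)

# Minimal stubs wired together for illustration only.
pipeline = OralTranslationPipeline(
    recognize=lambda audio, lang: f"text({audio},{lang})",
    translate=lambda text, s, t: f"mt({text},{s}->{t})",
    synthesize=lambda text, lang: f"audio({text},{lang})",
)
print(pipeline.run("utt_01", "zh", "en"))
```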


Application Information

IPC (8): G06F17/28; G06F17/30; G10L13/02; G10L15/00; G10L15/18; G10L15/26; G10L25/48
CPC: G06F16/3329; G06F40/56; G06F40/58; G10L13/02; G10L15/005; G10L15/1807; G10L15/26; G10L25/48
Inventors: 陈炜, 王峰, 徐爽, 徐波
Owner: BEIJING ZIDONG COGNITIVE TECH CO LTD