Voice control method and device
A voice control and voice engine technology, applied in voice analysis, voice recognition, instruments, and similar fields. It addresses problems such as degraded user experience, erroneous operation, and the inability to voice-control applications such as music players, and achieves the effect of improving convenience and user experience.
Inactive Publication Date: 2015-05-06
LE SHI ZHI XIN ELECTRONICS TECH TIANJIN
Cites: 5 · Cited by: 65
AI-Extracted Technical Summary
Problems solved by technology
This method of controlling smart TVs through voice applications achieves only partial control of the smart TV: it is limited to voice control of the voice application's own functions and cannot reach the functions of third-party applications. For example, the user can open a music playback application on the smart TV by voice, but cannot control the music playback application through the v...
Method used
In specific applications, some control operations can be provided with a corresponding secondary voice confirmation, such as "shutdown", "restart", and "close screen". The terminal pops up a dialog box asking whether to shut down, restart, or turn off the screen, or outputs an inquiry voice; only when the user confirms does the terminal complete the operation and shut down, restart, or turn off the screen. Setting a secondary voice confirmation avoids user misoperation and improves the user experience.
One of the core ideas of the embodiments of the present application is to collect the control instructions supported by the current interface into an instruction set identification file; after the terminal is triggered to enter the voice control mode, it receives voice data, generates a voice control command from the voice data, and matches the voice control command in the comman...
Abstract
The application provides a voice control method and a voice control device. The voice control method includes: when a terminal is triggered and enters a voice control mode, receiving input voice data; generating a voice control instruction according to the voice data; matching the voice control instruction in an instruction set identification file, which includes the set of control instructions supported by the current interface; and executing the control operation corresponding to the voice control instruction if the match succeeds. The voice control method and device can achieve full voice control of a terminal, improve operation convenience, and improve the user experience.
Examples
- Experimental program (4)
Example Embodiment
[0062] Example one:
[0063] Referring to Figure 1, a flow chart of the steps of Embodiment 1 of a voice control method of the present application is shown; the method may specifically include the following steps:
[0064] Step 101: When the terminal is triggered to enter the voice control mode, receive input voice data;
[0065] In the embodiment of the present application, the user can control the terminal through a remote control device, or control the terminal through voice.
[0066] The terminal is equipped with a recording module. When the terminal enters the voice control mode, the recording module can monitor the voice data input by the user in real time. When the user inputs voice data, the voice data input by the user can be received through the recording module.
[0067] For example, when the user inputs voice data such as "play a song", "how is the weather today", "channel 15", or "death squad", the terminal can receive the voice data through the recording module. Of course, in actual applications the user can input any voice data, and the terminal can receive any voice data input by the user through the recording module.
[0068] The terminal can display the received voice data on the current interface, for example showing "play a song", "what's the weather today", "channel 15", or "death squad" on the current interface.
[0069] Step 102: Generate a voice control instruction according to the voice data;
[0070] After the terminal receives the voice data input by the user, it can generate a voice control instruction according to the voice data. The voice control instruction is a character string that the terminal can recognize, corresponding to a control operation supported by the terminal.
[0071] For example, when the voice data received by the terminal is "play a song", the voice control command generated by the terminal is the character string corresponding to the song playback control operation; when the voice data is "how is the weather today", the generated command is the character string corresponding to the control operation of searching today's weather; when the voice data is "channel 15", the generated command is the character string corresponding to the control operation of playing channel 15's video programs; when the voice data is "death squad", the generated command is the character string corresponding to the control operation of searching for the movie "death squad".
[0072] Step 103: Match the voice control instruction in an instruction set identification file; the instruction set identification file includes the set of control instructions supported by the current interface;
[0073] Step 104: If the matching is successful, the control operation corresponding to the voice control instruction is executed.
[0074] The terminal may include an instruction set identification file, which is the set of control instructions supported by the terminal's current interface.
[0075] The control instructions can be a series of character strings corresponding to all control operations that can be performed on the current interface, and different interfaces can include different control instructions.
[0076] For example, if the current interface is a music playing interface, the control instructions can include character strings corresponding to control operations such as "play", "pause", "play previous song", "play next song", "volume adjustment", "play a specific song", "lyrics display", and "exit music playback".
[0077] If the current interface is an information search interface, the control instructions can include character strings corresponding to control operations such as "search for a certain type of information (such as weather, movies, programs, stocks)", "turn pages (page up, page down)", and "exit information search".
[0078] If the current interface is a channel program playing interface, the control instructions may include character strings corresponding to control operations such as "switch channel (select a channel)", "volume adjustment", and "brightness adjustment".
[0079] The terminal can match the generated voice control instruction against the instruction set identification file. One matching method is to compare the character string corresponding to the voice control instruction with the character strings corresponding to the control instructions: if an identical or similar character string exists, the matching succeeds; otherwise, it fails. Of course, those skilled in the art can set other matching methods according to actual needs, and the embodiment of the present application does not limit this.
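The "identical or similar string" matching described above can be sketched in a few lines. This is an illustrative sketch only: the function name, the similarity threshold, and the use of `difflib` are assumptions, not the patent's actual matching method.

```python
# Hypothetical sketch: match a generated voice control instruction against
# the control instructions supported by the current interface. Exact match
# first, then a similarity comparison ("identical or similar").
import difflib

def match_instruction(voice_instruction, instruction_set, threshold=0.8):
    """Return the matching control instruction, or None when matching fails."""
    if voice_instruction in instruction_set:
        return voice_instruction
    candidates = difflib.get_close_matches(
        voice_instruction, instruction_set, n=1, cutoff=threshold)
    return candidates[0] if candidates else None

# Example instruction set for a music playing interface (from the text above):
music_interface_instructions = [
    "play", "pause", "play previous song", "play next song",
    "volume adjustment", "lyrics display", "exit music playback",
]

print(match_instruction("play next song", music_interface_instructions))
print(match_instruction("search today's weather", music_interface_instructions))
```

A command supported by the interface matches; a command from another scene (such as a weather search on the music interface) fails, which corresponds to the unsuccessful-match handling described below.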
[0080] When the matching is successful, the terminal can perform the operation corresponding to the voice control instruction.
[0081] For example, if the current interface is a music playing interface, the voice control command corresponds to the control operation of "play a song", and after the match is successful, the terminal will automatically play the song.
[0082] If the current interface is an information search interface and the voice control command corresponds to the control operation "search today's weather", then after the match succeeds, the terminal will automatically search for today's weather over the Internet and display today's weather conditions on the current interface.
[0083] If the current interface is the channel program playing interface, the voice control command corresponds to the control operation of "play 15 channel video programs". After the match is successful, the terminal will automatically switch the current channel to channel 15 and play channel 15 video programs.
[0084] In specific applications, some control operations can be set to require a secondary voice confirmation, such as "shut down", "restart", and "close screen". The terminal pops up a dialog asking whether to shut down, restart, or close the screen; only when the user confirms does the terminal complete the operation and shut down, restart, or close the screen. Setting a secondary voice confirmation avoids user misoperation and improves the user experience.
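The secondary confirmation step might be modeled as follows. All names here are illustrative assumptions; the `confirm` callable stands in for the terminal's inquiry dialog or inquiry voice.

```python
# Illustrative sketch of the secondary voice confirmation described above:
# sensitive operations require a yes/no confirmation before execution.
CONFIRMATION_REQUIRED = {"shut down", "restart", "close screen"}

def execute_with_confirmation(operation, confirm):
    """`confirm` asks the user (dialog or voice) and returns True/False."""
    if operation in CONFIRMATION_REQUIRED:
        if not confirm(f"Do you really want to {operation}?"):
            return "cancelled"          # user declined: misoperation avoided
    return f"executed: {operation}"

# A user who confirms the dialog:
print(execute_with_confirmation("restart", confirm=lambda q: True))
# A user who declines:
print(execute_with_confirmation("restart", confirm=lambda q: False))
# Ordinary operations run without any confirmation step:
print(execute_with_confirmation("play next song", confirm=lambda q: False))
```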
[0085] When the matching is unsuccessful, the terminal may perform no operation, or it may issue a voice prompt reminding the user that the input voice data does not match the operations supported by the current interface. Of course, those skilled in the art can set other handling methods according to actual needs; the embodiment of the present application does not limit this.
[0086] For example, if the current interface is a music playback interface and the voice control command corresponds to the control operation "search today's weather", the match cannot succeed because the music playback interface does not support the weather search control operation; the terminal may then perform no operation, or may prompt the user that the input voice data does not match the operations supported by the current interface.
[0087] If the current interface is an information search interface and the voice control command corresponds to the control operation "play a song", the match cannot succeed because the information search interface does not support music playback control operations; the terminal may then perform no operation, or may prompt the user that the input voice data does not match the operations supported by the current interface.
[0088] If the current interface is the channel program playback interface and the voice control command corresponds to the control operation "search today's weather", the match cannot succeed because the channel program playback interface does not support the weather search control operation; the terminal may then perform no operation, or may prompt the user that the input voice data does not match the operations supported by the current interface.
[0089] After the terminal performs the control operation corresponding to the voice control instruction, the voice data displayed on the current interface can be deleted.
[0090] The embodiment of the application collects the control instructions supported by the current interface into an instruction set identification file. After the terminal is triggered to enter the voice control mode, it receives voice data, generates a voice control command from the voice data, matches the command in the instruction set identification file, and executes the corresponding operation when the match succeeds. This realizes full voice control of the terminal, avoids the limitation of only partial voice control, improves the convenience of terminal control, and improves the user experience.
Example Embodiment
[0091] Embodiment two:
[0092] Referring to Figure 2, a flow chart of the steps of Embodiment 2 of a voice control method of the present application is shown; the method may specifically include the following steps:
[0093] Step 201: When the terminal is triggered to enter the voice control mode, receive the input voice data;
[0094] After the terminal enters the voice control mode, the user can input voice data to the terminal to realize voice control of the terminal.
[0095] In a preferred embodiment of the present application, when the terminal receives a voice control mode start command, it can enter the voice control mode according to the voice control mode start command.
[0096] The user can trigger the terminal to enter the voice control mode in several ways: by pressing a button on the remote control device that issues the voice control mode start command; by speaking the start command to a recording module on the remote control device; or by speaking the start command directly, to be detected and recorded by the terminal's built-in recording module. The voice control mode start command can be preset by the terminal in advance, or customized by the user and stored in the terminal, such as "Hello, TV".
[0097] When the terminal is triggered to enter the voice control mode, it can be initialized. The initialization process is as follows:
[0098] Step 201a: Start the voice engine;
[0099] Step 201b: The application layer sends control operation information supported by the current interface to the voice engine;
[0100] In step 201c, the voice engine generates control commands supported by the current interface according to the control operation information.
[0101] In this embodiment of the application, the terminal includes a voice engine and an application layer.
[0102] The voice engine realizes recognition of voice data and can convert the user's voice data into corresponding text; with the voice engine set in the terminal, the terminal can recognize voice data.
[0103] The application layer is composed of all applications (including system applications and third-party applications) running on the smart TV terminal.
[0104] In addition to the voice engine and the application layer, the terminal can also include a framework layer, i.e. a software development kit (SDK) layer. The framework layer is a collection of development tools used to build application software for specific software packages, software frameworks, hardware platforms, operating systems, and so on. Third-party applications are developed on the framework layer and are subject to the constraints of its interfaces; system applications in the application layer can call the framework layer's interfaces and modules to obtain relevant data of third-party applications, such as control operation information, so as to control the third-party applications.
[0105] The application layer can call the interfaces and modules of the framework layer, establish a connection with the voice engine through the corresponding communication protocol, and generate a service connection ServiceConnection function. Data interaction between the application layer and the voice engine can then be realized through the ServiceConnection function.
[0106] The application layer can send control operation information supported by the current interface to the voice engine through the ServiceConnection function.
[0107] The control operation information may include the control operations supported by the current interface and the scene classification to which the current interface belongs.
[0108] The embodiments of this application can classify scenes. The scenes can include: global control scenes, information search scenes, music playback scenes, movie playback scenes, channel video program playback scenes, chat scenes (such as Weibo), and help information display scenes.
[0109] The control operations supported by the current interface can be predefined, and the interface corresponding to different scene categories can support different control operations, for example:
[0110] Global control scenarios may include control operations such as "terminal on and off", "terminal restart", "screen off", "mute", "open and close of application" and so on.
[0111] The information search scene may include control operations such as "searching for certain types of information (such as weather, movies, programs, stocks)", "turning pages (turning pages up and down)", "exiting information search", and so on.
[0112] Music playback scenes can include control operations such as "play", "pause", "play previous song", "play next song", "volume adjustment", "play a specific song", "lyrics display", and "exit music playback".
[0113] Channel video program playing scenes may include control operations such as "switch channel (select a channel)", "volume adjustment", "brightness adjustment", "exit channel video program playback".
[0114] The chat scene may include control operations such as "information release", "cancel release", "show wonderful content", and "enter help".
[0115] The foregoing is only an exemplary description of the control operations supported by the interfaces corresponding to different scenarios, and those skilled in the art can define other control operations according to actual needs.
[0116] In specific applications, the above scenes can be further subdivided. For example, information search scenes can be divided into weather search scenes, stock search scenes, movie search scenes, program preview search scenes, etc.; the embodiment of the present application does not limit this.
[0117] The voice engine receives the control operation information and can generate corresponding control instructions according to the control operations supported by the current interface. The control instructions are character strings that the terminal can recognize.
[0118] For example, if the current interface corresponds to the global control scene, the generated control instructions are the character strings corresponding to control operations such as "terminal on and off", "terminal restart", "close screen", "mute", and "application open and close".
[0119] If the current interface corresponds to the information search scene, the generated control instructions are the character strings corresponding to control operations such as "search for a certain type of information (such as weather, movies, programs, stocks)", "turn pages (page up, page down)", and "exit information search".
[0120] If the current interface corresponds to the music playback scene, the generated control instructions are the character strings corresponding to control operations such as "play", "pause", "play previous song", "play next song", "volume adjustment", "play a specific song", "lyrics display", and "exit music playback".
[0121] If the current interface corresponds to the channel video program playing scene, the generated control instructions are the character strings corresponding to control operations such as "switch channel (select a channel)", "volume adjustment", "brightness adjustment", and "exit channel video program playback".
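The initialization in steps 201b and 201c can be sketched as a simple mapping from the reported control operation information to a set of recognizable instruction strings. The scene names, operation lists, and function names below are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of steps 201b-201c: the application layer reports the
# current interface's scene classification and supported control operations,
# and the voice engine turns them into the instruction set identification
# file (modeled here as a plain set of strings).
SCENE_OPERATIONS = {
    "music playback": [
        "play", "pause", "play previous song", "play next song",
        "volume adjustment", "lyrics display", "exit music playback",
    ],
    "information search": [
        "search weather", "page up", "page down", "exit information search",
    ],
}

def build_instruction_set(control_operation_info):
    """control_operation_info: {'scene': <classification>, 'operations': [...] or None}."""
    scene = control_operation_info["scene"]
    # Fall back to the predefined per-scene operations if none were sent.
    operations = control_operation_info["operations"] or SCENE_OPERATIONS[scene]
    # One terminal-recognizable string per supported control operation.
    return set(operations)

instruction_set = build_instruction_set({"scene": "music playback", "operations": None})
print("pause" in instruction_set)  # True
```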
[0122] In a preferred embodiment of the present application, after the initialization is completed, the voice control prompt information can be displayed on the current interface.
[0123] Different scene classifications often correspond to different voice control prompt messages.
[0124] For example, in the global control scene, the voice control prompt information can include "turn on/off", "restart", "turn off the screen", "mute", "open and close the application", etc. In the information search scene, it can include "today's weather", "movie name", "stocks", "TV station program preview", "page up and down", etc. In the music playback scene, it can include "play", "pause", "play the previous song", "play the next song", "volume adjustment", "display lyrics", "exit", etc. In the channel video program playback scene, it can include "channel switching", "volume adjustment", "brightness adjustment", "exit", etc. In the chat scene, it can include "publish", "cancel", "exciting content", "help", and so on.
[0125] It should be noted that the voice control prompt information can be pre-defined, and can also be generated in real time according to application popularity or according to user habits.
[0126] For example, in a music playback scene, voice control prompts such as "play previous song", "play next song", and "pause" can be predefined; the names of popular songs (such as "Fireworks Are Easy to Cold" and "Conquer") can be displayed as prompts; and prompts such as "skip" (corresponding to "next song") can also be generated according to the user's usage habits.
[0127] The application layer can send voice control prompt information to the voice engine through the ServiceConnection function.
[0128] When the voice engine receives the voice control prompt information, it can display the voice control prompt information at the bottom of the current interface. Of course, those skilled in the art can set the voice control prompt information to be displayed anywhere on the current interface according to actual needs.
[0129] In specific applications, the same scene can include different interfaces, and different interfaces can display different voice control prompts.
[0130] For example, in the information search scene, if the current interface displays today's weather, the displayed voice control prompt information can include "tomorrow's weather" and "help"; if the current interface displays tomorrow's weather, it can include "today's weather", "help", etc. In the music playing scene, if the current interface displays lyrics, the prompt information can include "play previous song", "play next song", "pause", "close lyrics", etc.; if the current interface does not display lyrics, it can include "play previous song", "play next song", "pause", "display lyrics", etc. For other scenes, such as movie playing scenes and channel video program playing scenes, different prompts can likewise be displayed on different interfaces, which will not be listed here.
[0131] The voice control prompt information gives the user examples of voice data to input; the user can input voice data according to the voice control prompt information.
[0132] For example, the user can input the voice data "today's weather" according to the prompt information of the current interface, input the voice data "play the previous song" according to the prompt "play the previous song", or input the voice data "Fireworks Are Easy to Cold" according to the prompt "Fireworks Are Easy to Cold" on the current interface.
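The per-interface prompt behaviour above can be modeled as a simple lookup keyed by scene and interface. The keys, prompt lists, and function name are assumptions drawn from the examples in the text, not a fixed API.

```python
# Illustrative sketch: each (scene, interface) pair advertises its own
# voice control prompt information, as in the examples above.
PROMPTS = {
    ("information search", "today's weather"): ["tomorrow's weather", "help"],
    ("information search", "tomorrow's weather"): ["today's weather", "help"],
    ("music playback", "lyrics shown"): [
        "play previous song", "play next song", "pause", "close lyrics",
    ],
    ("music playback", "lyrics hidden"): [
        "play previous song", "play next song", "pause", "display lyrics",
    ],
}

def prompts_for(scene, interface):
    """Return the prompt strings to display (e.g. at the bottom of the interface)."""
    return PROMPTS.get((scene, interface), [])

print(prompts_for("music playback", "lyrics shown"))
```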
[0133] Step 202: Generate the voice control instruction according to the voice data;
[0134] After receiving the voice data, the terminal can generate voice control instructions based on the voice data.
[0135] In a preferred embodiment of the present application, step 202 may include the following sub-steps:
[0136] Sub-step 202a, the voice engine performs voice recognition on the voice data, and generates multiple characters corresponding to the voice data;
[0137] In sub-step 202b, the voice engine performs semantic recognition on multiple characters corresponding to the voice data, and generates a voice control instruction.
[0138] Voice recognition is the process of converting voice data into characters; semantic recognition is the process of converting characters into a voice control instruction. The voice control instruction is a character string that the terminal can recognize, corresponding to a control operation supported by the terminal.
[0139] For example, if the voice data received by the terminal is "play Fireworks Are Easy to Cold", the voice engine can perform voice recognition on the voice data to generate the corresponding characters "play Fireworks Are Easy to Cold", then perform semantic recognition on those characters and generate the character string corresponding to the voice control instruction for playing the song "Fireworks Are Easy to Cold". If the voice data received by the terminal is "today's weather", the voice engine can perform voice recognition to generate the corresponding characters "today's weather", then perform semantic recognition on them and generate the character string corresponding to the voice control instruction for searching "today's weather".
[0140] In specific applications, the voice data sent by the user may not conform to the current scene, or the voice data itself may be ambiguous. In such cases, the voice data can be corrected during semantic recognition and the invalid characters filtered out.
[0141] For example, if the user sends the voice data "play the next TV series" in the music playing scene, a TV series obviously cannot be played in a music application, i.e. the voice data does not match the current scene. During semantic recognition, the invalid characters "TV series" can therefore be filtered out, and the characters corresponding to the voice data corrected to "play the next song". As another example, the user sends the voice data "play the movie Fireworks Are Easy to Cold", but "Fireworks Are Easy to Cold" is actually a song, not a movie; during semantic recognition, the invalid character "movie" can be filtered out, and the characters corrected to playing the song "Fireworks Are Easy to Cold".
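A toy version of this correction might drop scene-invalid words and then snap the remainder onto the closest supported instruction. The instruction list, the invalid-word set, and the use of `difflib` similarity are assumptions of the sketch, not the patent's actual semantic recognition algorithm.

```python
# Toy sketch of the correction in step 202b for the music scene: filter
# out words that are invalid in this scene, then pick the most similar
# supported instruction string.
import difflib

MUSIC_INSTRUCTIONS = [
    "play the next song", "play the previous song", "pause",
    "play Fireworks Are Easy to Cold",
]
MUSIC_SCENE_INVALID = {"TV series", "movie"}   # words that cannot apply here

def semantic_recognition(characters):
    corrected = characters
    for word in MUSIC_SCENE_INVALID:
        corrected = corrected.replace(word, "")
    corrected = " ".join(corrected.split())    # normalize whitespace
    matches = difflib.get_close_matches(
        corrected, MUSIC_INSTRUCTIONS, n=1, cutoff=0.5)
    return matches[0] if matches else None

print(semantic_recognition("play the next TV series"))   # play the next song
print(semantic_recognition("play movie Fireworks Are Easy to Cold"))
```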
[0142] Step 203: Match the voice control instruction in the instruction set recognition file;
[0143] The terminal can match the generated voice control instruction against the instruction set identification file. One matching method is to compare the character string corresponding to the voice control instruction with the character strings corresponding to the control instructions: if an identical or similar character string exists, the matching succeeds; otherwise, it fails. Of course, those skilled in the art can set other matching methods according to actual needs, and the embodiment of the present application does not limit this.
[0144] Step 204: If the matching is successful, the control operation corresponding to the voice control instruction is executed.
[0145] When the match is successful, the terminal can perform the operation corresponding to the voice control instruction.
[0146] When the matching is unsuccessful, the terminal may perform no operation, or it may issue a voice prompt reminding the user that the input voice data does not match the operations supported by the current interface. Of course, those skilled in the art can set other handling methods according to actual needs; the embodiment of the present application does not limit this.
[0147] For example, if the current interface is a music playback interface and the voice control command corresponds to the control operation "search today's weather", the match cannot succeed because the music playback interface does not support the weather search control operation; the terminal may then perform no operation, or may prompt the user that the input voice data does not match the operations supported by the current interface.
[0148] If the current interface is an information search interface and the voice control command corresponds to the control operation "play a song", the match cannot succeed because the information search interface does not support music playback control operations; the terminal may then perform no operation, or may prompt the user that the input voice data does not match the operations supported by the current interface.
[0149] If the current interface is the channel program playback interface and the voice control command corresponds to the control operation "search today's weather", the match cannot succeed because the channel program playback interface does not support the weather search control operation; the terminal may then perform no operation, or may prompt the user that the input voice data does not match the operations supported by the current interface.
[0150] In a preferred embodiment of the present application, step 204 may include the following sub-steps:
[0151] Sub-step 204a, determining whether the voice control command belongs to a custom control command;
[0152] In sub-step 204b, if the voice control instruction does not belong to the custom control command, the voice engine sends the voice control instruction to the application layer for the application layer to execute according to the voice control instruction The corresponding control operation;
[0153] In sub-step 204c, if the voice control instruction belongs to a custom control command, the voice engine sends the voice control instruction to the framework layer, so that the framework layer executes a corresponding control operation according to the voice control instruction.
[0154] After the voice control instruction is successfully matched in the instruction set recognition file, it can be determined whether the voice control instruction belongs to a custom control command.
[0155] Custom control commands can be preset according to different scene classifications. For example, a custom control command "favorite" can be set in the information search scene, allowing the user to collect information of interest and its corresponding URL; or a custom control command "share" can be set in the music playing scene, allowing the user to share songs of interest with others. Of course, those skilled in the art can define various custom control commands according to actual needs; the embodiments of the application do not limit this.
[0156] If the voice control command is not a custom control command, the voice engine can send the voice control command to the application layer through the ServiceConnection function, and the application layer performs the corresponding operation.
[0157] If the voice control command is a custom control command, the voice engine can send it to the framework layer through the service connection ServiceConnection function; the framework layer can read the text information corresponding to the voice control command and execute the corresponding (view) operation according to that text information.
[0158] For example, when the voice control command generated based on the voice data is "Play the next song", it is determined that the voice control command is not a custom control command, then the voice control command is sent to the application layer through the ServiceConnection function, and the application layer executes the corresponding The terminal automatically plays the next song; when the voice control command generated according to the voice data is "playing fireworks is easy to be cold", it is determined that the voice control command is not a custom control command, and the voice control command is set through the ServiceConnection function Send to the application layer, the application layer executes the corresponding operation, the terminal automatically plays the song and the firework is easy to be cold.
[0159] When the voice control command generated from the voice data is "share", it is determined to be a custom control command; the command is sent to the framework layer through the ServiceConnection function, the framework layer reads the text information "share", performs the corresponding view operation according to it, and the currently playing song is automatically shared with others. When the voice control command generated from the voice data is "favorite", it is likewise determined to be a custom control command; the command is sent to the framework layer through the ServiceConnection function, the framework layer reads the text information "favorite", performs the corresponding view operation according to it, and the currently searched information and its corresponding URL are automatically saved.
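The routing rule of paragraphs [0156]–[0159] can be sketched in plain Java. This is a simplified model, not Android code: the real dispatch goes through a bound service (ServiceConnection), which is omitted here, and the class name and command set are illustrative assumptions.

```java
import java.util.Set;

// Simplified model of the routing rule: custom commands go to the
// framework layer, all other commands go to the application layer.
// The class name and CUSTOM_COMMANDS set are illustrative assumptions;
// the real dispatch would go through a bound Android service
// (ServiceConnection), which is omitted here.
public class CommandRouter {

    // Custom commands registered for the current scene (assumed).
    public static final Set<String> CUSTOM_COMMANDS = Set.of("share", "favorite");

    /** Returns the layer that should handle the given command. */
    public static String route(String command) {
        if (CUSTOM_COMMANDS.contains(command)) {
            // Framework layer reads the command text and performs
            // the corresponding view operation.
            return "framework";
        }
        // Ordinary commands are forwarded to the application layer.
        return "application";
    }
}
```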
[0160] In the embodiment of this application, voice control prompt information is displayed on the current interface, telling the user what operations can be performed on that interface and guiding the user to input voice data appropriate to it. This helps the user quickly master the voice control operation, prevents the user from inputting voice data that does not fit the current scenario, improves the effectiveness of the user's voice input, and improves the user experience.
[0161] It should be noted that, for simplicity of description, the method embodiments are all expressed as series of action combinations. However, those skilled in the art should know that the embodiments of this application are not limited by the described sequence of actions, because according to the embodiments of this application certain steps may be performed in another order or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of this application.
Example Embodiment
[0162] Embodiment three:
[0163] Referring to Figure 3, a structural block diagram of an embodiment of a voice control device of the present application is shown. The device may specifically include the following modules:
[0164] The voice data receiving module 301 is configured to receive input voice data after the terminal is triggered to enter the voice control mode;
[0165] The voice control instruction generating module 302 is configured to generate voice control instructions according to the voice data;
[0166] The voice control instruction matching module 303 is configured to match the voice control instruction against an instruction set recognition file, where the instruction set recognition file includes the set of control instructions supported by the current interface;
[0167] The control operation execution module 304 is configured to execute the control operation corresponding to the voice control instruction when the matching is successful.
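A minimal sketch of how modules 302–304 might chain together is given below. The class name, method names, and the stubbed instruction generation (plain text normalization instead of real speech recognition) are all assumptions, not the patent's implementation.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of how modules 302-304 might chain together.
// The class name, method names, and the stubbed instruction generation
// (plain normalization instead of real speech recognition) are all
// assumptions, not the patent's implementation.
public class VoiceControlDevice {

    private final Set<String> instructionSet;        // instruction set recognition file (module 303's data)
    private final Map<String, Runnable> operations;  // control operations (module 304)

    public VoiceControlDevice(Set<String> instructionSet, Map<String, Runnable> operations) {
        this.instructionSet = instructionSet;
        this.operations = operations;
    }

    // Module 302: generate a voice control instruction from voice data
    // (stubbed here as simple text normalization).
    public String generateInstruction(String voiceData) {
        return voiceData.trim().toLowerCase();
    }

    // Modules 303 + 304: match the instruction against the instruction
    // set and, on success, execute the corresponding control operation.
    public boolean handle(String voiceData) {
        String instruction = generateInstruction(voiceData);
        if (!instructionSet.contains(instruction)) {
            return false;  // match failed: no operation is performed
        }
        operations.getOrDefault(instruction, () -> { }).run();
        return true;
    }
}
```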
[0168] The device of this embodiment is used to execute the method steps of the above embodiments, which are not repeated here.
[0169] In the embodiment of this application, voice control prompt information is displayed on the current interface, telling the user what operations can be performed on that interface and guiding the user to input voice data appropriate to it. This helps the user quickly master the voice control operation, prevents the user from inputting voice data that does not fit the current scenario, improves the effectiveness of the user's voice input, and improves the user experience.