Guiding device and method for operation of intelligent terminal

A technology relating to intelligent terminals and guidance devices, applied in input/output processes for data processing, telephone communication, instruments, and the like, addressing problems such as the difficulty beginners face in operating intelligent terminals

Active Publication Date: 2017-05-10
NUBIA TECHNOLOGY CO LTD
Cites: 5 · Cited by: 6

AI-Extracted Technical Summary

Problems solved by technology

However, for some places where the network environment is relatively complex or the coverage...


Abstract

The invention discloses a guiding device for operation of an intelligent terminal. The guiding device comprises a first playing module used for playing a preset video for guiding operation of the intelligent terminal, wherein the video comprises a key frame; a starting module used for pausing the video and starting an application corresponding to the video when the key frame is played; a monitoring module used for monitoring a user's operation on an application interface; a judging module used for judging, when a user operation is detected, whether a keyword corresponding to the key frame is contained in the user-operated application interface; and a processing module used for finishing the operation guiding of the intelligent terminal if the keyword corresponding to the key frame is contained in the user-operated application interface. The invention also discloses a guiding method for operation of the intelligent terminal. The guiding device and method for operation of the intelligent terminal can help a beginner learn to use the intelligent terminal.

Application Domain

Substation equipment · Input/output processes for data processing

Technology Topic

Computer module · Computer science (+1)

Image

  • Guiding device and method for operation of intelligent terminal

Examples

  • Experimental program (1)

Example Embodiment

[0058] It should be understood that the specific embodiments described herein are only used to explain the present invention, but not to limit the present invention.
[0059] A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as 'module', 'component' or 'unit' used to represent elements are used only to facilitate the description of the present invention, and have no specific meaning per se. Therefore, "module" and "component" can be used interchangeably.
[0060] The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), and a navigation device, as well as stationary terminals such as digital TVs and desktop computers. Below, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that the configuration according to the embodiments of the present invention can also be applied to stationary terminals, except for any elements specifically intended for mobile use.
[0061] Figure 1 illustrates the hardware structure of a mobile terminal for realizing the various embodiments of the present invention.
[0062] The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, an output unit 150, a memory 160, a controller 180, a power supply unit 190, and the like. Figure 1 shows a mobile terminal having various components, but it should be understood that implementing all of the illustrated components is not a requirement; more or fewer components may alternatively be implemented. The elements of the mobile terminal will be described in detail below.
[0063] The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication device or network. For example, the wireless communication unit may include at least one of the mobile communication module 112 , the wireless Internet module 113 and the short-range communication module 114 .
[0064] The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data corresponding to transmitted and/or received text and/or multimedia messages.
[0065] The wireless Internet module 113 supports wireless Internet access for the mobile terminal. This module can be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless Broadband), Wimax (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
[0066] The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and the like.
[0067] The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the configuration of the mobile terminal. The microphone 122 may receive sound (audio data) in operating modes such as a telephone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data. In the case of a telephone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to remove (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
[0068] The user input unit 130 may generate key input data to control various operations of the mobile terminal according to commands input by the user. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a scroll wheel, a joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
[0069] In addition, when the mobile terminal 100 is connected to an external base, the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal 100. Various command signals or power input from the base may serve as signals for recognizing whether the mobile terminal is accurately mounted on the base. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, and the like.
[0070] The display unit 151 may display information processed in the mobile terminal 100 . For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing the video or image and related functions, and the like.
[0071] Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display unit 151 may function as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow viewing from the outside; these may be referred to as transparent displays, a typical example being a TOLED (Transparent Organic Light Emitting Diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
[0072] When the mobile terminal is in a call signal receiving mode, a talking mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., call signal reception sound, message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
[0073] The memory 160 may store software programs and the like for processing and control operations performed by the controller 180, or may temporarily store data that has been or will be output (eg, phonebook, messages, still images, videos, etc.). Also, the memory 160 may store data on various manners of vibration and audio signals output when a touch is applied to the touch screen.
[0074] The memory 160 may include at least one type of storage medium including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 through a network connection.
[0075] The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180 . The controller 180 may perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
[0076] The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate various elements and components.
[0077] The various embodiments described herein can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein can be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that each allow performing at least one function or operation. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
[0078] So far, the mobile terminal has been described in terms of its functions. Hereinafter, for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type mobile terminals, will be described as an example. However, the present invention can be applied to any type of mobile terminal, and is not limited to the slide-type mobile terminal.
[0079] The mobile terminal 100 shown in Figure 1 may be configured to operate with communication systems that transmit data via frames or packets, including wired and wireless communication systems as well as satellite-based communication systems.
[0080] A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to Figure 2.
[0081] Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like. By way of non-limiting example, the following description refers to a CDMA communication system, but such teachings apply equally to other types of systems.
[0082] Referring to Figure 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that the system shown in Figure 2 may include a plurality of base stations 270.
[0083] Each BS 270 may serve one or more sectors (or areas), each sector being covered by an omni-directional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
[0084] The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terminology. In such a case, the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
[0085] As shown in Figure 2, a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. A broadcast receiving module 111, as shown in Figure 1, is provided at the mobile terminal 100 to receive broadcast signals transmitted by the BT 295. Figure 2 also shows several Global Positioning System (GPS) satellites 300. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
[0086] In Figure 2, a plurality of satellites 300 are depicted, but it will be appreciated that useful positioning information may be obtained with any number of satellites. Instead of or in addition to GPS tracking technology, other technologies that can track the location of the mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.
[0087] In typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 are typically engaged in calls, messaging, and other types of communication. Each reverse link signal received by a particular BS 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC 275 provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC 280 interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward link signals to the mobile terminals 100.
[0088] Based on the above-described mobile terminal hardware structure and communication system structure, various embodiments of the apparatus and method of the present invention are proposed.
[0089] The present invention further provides a guiding device for the operation of an intelligent terminal.
[0090] Referring to Figure 3, Figure 3 is a schematic diagram of the functional modules of a first embodiment of the guiding device for intelligent terminal operation according to the present invention.
[0091] In this embodiment, the device includes:
[0092] The first playing module 100 is configured to play a preset video for guiding the operation of the intelligent terminal, and the video includes key frames.
[0093] In this embodiment, the first playing module may be the multimedia module 181 shown in Figure 1. To ensure the normal implementation of the present invention, the video for guiding the operation of the intelligent terminal must be obtained before this embodiment is carried out. The video may be obtained by receiving it from another intelligent terminal, or by the user recording it on his or her own intelligent terminal. In this embodiment it is obtained by receiving a transmission from another intelligent terminal; in a specific implementation it may also be recorded by the user on his or her own intelligent terminal. The recording method may include recording a video of the operation process of another user operating an application, and the application may be WeChat, QQ, a map, or the like. The video is then processed: key steps in the video are selected, operation guidance is added, and the corresponding frames are marked as key frames. Voice guidance can be received through the microphone 122 shown in Figure 1 and output through the audio output module 152 shown in Figure 1. The frame to which the operation guidance has been added is marked as a key frame. For example, as shown in Figure 4, the frame of Figure 4 is selected and "click the plus sign, then click to add a friend" is added as the operation guidance; voice operation guidance can of course also be added. Figure 4 is marked as a key frame, and the keyword of the key frame is recorded: the key frame shown in Figure 4 guides how to add friends, so the words "add friends" are extracted as the keyword of the key frame shown in Figure 4. Similarly, as shown in Figure 6, the frame of Figure 6 is selected and "click the plus sign, then click to receive payment" is added as the operation guidance; voice operation guidance can also be added. Figure 6 is marked as a key frame and its keyword is recorded: the key frame shown in Figure 6 guides how to receive and pay, so the words "Receive and Pay" are extracted as the keyword of the key frame shown in Figure 6.
[0094] After the recording is completed, the recorded video is sent to the smart terminals used by parents, children, and others who need operation guidance, or is stored on those smart terminals by copying. The video may be sent by turning on Bluetooth via the short-range communication module 114 shown in Figure 1, or transmitted over Wi-Fi by connecting to a WLAN via the wireless Internet module 113 shown in Figure 1. The user of the smart terminal that stores the guide video can then trigger a play-video instruction; the smart terminal receives the playback instruction, starts the playback function, and plays the video.
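The key-frame annotation described above, a position in the video, an operation-guidance caption, and a recorded keyword, can be sketched as a small data structure. This is an illustrative sketch only, not the patent's implementation; all names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class KeyFrame:
    """One annotated key frame in the guide video (illustrative sketch)."""
    timestamp_s: float  # position of the key frame in the video
    guidance: str       # on-screen operation guidance shown to the user
    keyword: str        # keyword recorded to verify the user's operation later


def extract_keywords(frames):
    """Collect the recorded keywords of all key frames, in playback order."""
    return [f.keyword for f in sorted(frames, key=lambda f: f.timestamp_s)]


# The two key frames from the Figure 4 / Figure 6 examples in the text:
guide_video = [
    KeyFrame(12.0, "click the plus sign, then click to add a friend", "add friends"),
    KeyFrame(34.5, "click the plus sign, then click to receive payment", "receive and pay"),
]
```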
[0095] The starting module 200 is configured to pause the video and start the application corresponding to the video when the key frame is played.
[0096] In this embodiment, the starting module 200 may be the controller 180 in Figure 1. When the key frame is played, the user can trigger an instruction to pause the playback of the video, and the smart terminal receives the instruction and pauses playback. After pausing the video, the user can return to the desktop by pressing the Home button. In a specific implementation, the video may instead be paused automatically and the desktop returned to when the key frame is played. The user then clicks the application corresponding to the video, triggering an instruction to start that application, and the smart terminal receives the instruction and starts the application. For example, when the content of the video is a guide to operating WeChat on the smart terminal, the application corresponding to the video is WeChat, and WeChat is started.
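The pause-and-launch behavior of the starting module 200 can be sketched as follows; `pause_video` and `launch_app` are hypothetical callbacks standing in for the terminal's video player and application launcher, and are not named in the patent.

```python
def on_frame_played(frame_index, key_frame_indices, pause_video, launch_app, app_name):
    """If the frame just played is a key frame, pause playback and start
    the application that the guide video corresponds to (sketch only).

    Returns True when playback was paused to wait for the user's operation.
    """
    if frame_index in key_frame_indices:
        pause_video()        # stop the guide video at the key frame
        launch_app(app_name) # e.g. start WeChat for a WeChat guide video
        return True
    return False
```

In the automatic variant described in the embodiment, the same hook would also return the terminal to the desktop before launching the application.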
[0097] The monitoring module 300 is configured to monitor the operation of the user on the application interface.
[0098] The judging module 400 is used for judging whether a keyword corresponding to the key frame is included in the application interface after the user's operation when an operation by the user is detected.
[0099] Specifically, the monitoring module 300 and the judging module 400 may be the controller 180 in Figure 1. After the application corresponding to the video is started, the user can perform the corresponding operation on the application according to the operation guidance played in the video; the operation may be input through the user input unit 130 in Figure 1. The intelligent terminal monitors the user's operation on the application interface, and then determines whether the application interface after the user's operation contains the keyword corresponding to the key frame.
[0100] The judgment may be made using automated testing software, for example the automated testing software UIAutomator. The automated testing software judges whether the text content of the interface contains the keyword in the key frame.
[0101] It is also possible to extract the keywords from the video in advance and save them during the video recording process, for example in the video's attribute information, or in documents such as PDF or Word files. In this embodiment, after the text information in the post-operation interface is recognized, the recognized text information is compared with the preset keywords in the video attribute information to make the judgment.
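The judgment itself, whether the text visible on the post-operation interface contains the key frame's keyword, reduces to a substring check. A minimal sketch follows; the case-insensitive normalization is an assumption, since the patent does not specify how the texts are compared.

```python
def interface_contains_keyword(interface_text: str, keyword: str) -> bool:
    """Return True if the application interface's text contains the key
    frame's keyword, ignoring case and surrounding whitespace (sketch)."""
    return keyword.strip().lower() in interface_text.lower()
```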
[0102] Specifically, as shown in Figure 5, Figure 5 is the interface that appears after the user operates according to the operation guidance in Figure 4. Figure 5 contains the words "Add a friend", which correspond to the keyword of the key frame shown in Figure 4, so it can be judged that the user's operation is correct. Likewise, as shown in Figure 7, Figure 7 is the interface that appears after the user operates according to the operation guidance in Figure 6; Figure 7 contains the words "Receive and Pay", which correspond to the keyword of the key frame shown in Figure 6, so it can be determined that the user's operation is correct.
[0103] The processing module 500 is configured to complete the operation guidance of the intelligent terminal if the keyword corresponding to the key frame is included in the application interface after the user's operation.
[0104] If the application interface after the user's operation includes the keyword corresponding to the key frame, it is determined that the user's operation is accurate and the operation included in the key frame has been learned. At this time, the operation guidance of the intelligent terminal is completed.
[0105] Further, referring to Figure 11, the processing module 500 may include:
[0106] The judging unit 510 is configured to judge whether the key frame is the last key frame in the video;
[0107] The playing unit 520 is configured to continue playing the video if it is not, until the operation in the last key frame is completed.
[0108] Specifically, after the user completes the operation guidance steps corresponding to the key frame, the judging unit judges whether that key frame is the last key frame in the video. If it is not, the video continues to play; when the next key frame appears, playback is paused again, the user again operates the application corresponding to the video, and it is judged whether the application interface after the user's operation contains the keyword of the corresponding key frame, until the operation in the last key frame has been guided to completion.
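The loop of paragraphs [0106]-[0108], continue through the key frames until the operation in the last one is completed, can be sketched as below. `perform_and_read_interface` is a hypothetical callback that waits for the user's operation on the launched application and returns the resulting interface text; it stands in for the monitoring module and is not part of the patent.

```python
def run_guidance(keyframe_keywords, perform_and_read_interface):
    """Walk through the key frames in playback order; after each one, the
    user's post-operation interface text must contain that frame's keyword.

    Returns the number of key frames completed successfully (sketch).
    """
    completed = 0
    for keyword in keyframe_keywords:
        interface_text = perform_and_read_interface(keyword)
        if keyword.lower() not in interface_text.lower():
            break            # wrong operation: stop here and re-guide the user
        completed += 1       # correct operation: continue to the next key frame
    return completed
```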
[0109] In this embodiment, a preset video for guiding the operation of the intelligent terminal is played, and the video includes key frames. When a key frame is played, the video is paused and the application corresponding to the video is started; the user's operation on the application interface is monitored; when a user operation is detected, it is determined whether the application interface after the user's operation contains the keyword corresponding to the key frame; if it does, the operation guidance of the intelligent terminal is completed. In this manner, the user plays a preset video that guides the operation of the smart terminal and learns the steps and methods of operating the smart terminal by watching it. When a key frame is played, the video is paused and the application corresponding to the video is started, after which the user operates the application according to the steps and methods shown in the video. The intelligent terminal then monitors the user's operation on the application interface; when an operation is detected, it determines whether the post-operation interface contains the keyword corresponding to the key frame; if so, it determines that the user's operation is accurate and that the corresponding operation has been learned, and the operation guidance of the intelligent terminal is completed. The present invention combines viewing with operation, which helps beginners memorize and become familiar with the operation steps of an application, and improves the efficiency with which beginners learn to use the intelligent terminal.
[0110] Further, referring to Figure 8, Figure 8 is a schematic diagram of refined functional modules of the judging module in Figure 3.
[0111] Based on the embodiment shown in Figure 3 above, the judging module 400 may include:
[0112] The first obtaining unit 410 is configured to start the interface automated testing software and obtain, through the automated testing software, the keywords marked in the key frame;
[0113] The second obtaining unit 420 is configured to obtain the text content in the interface after the user's operation through the automated testing software;
[0114] The first judging unit 430 is configured to judge whether the text content of the interface contains the keyword in the key frame.
[0115] In this embodiment, interface automated testing software is used to determine whether the application interface after the user's operation contains the keyword corresponding to the key frame. The interface automated testing software used is UIAutomator; in a specific implementation, automated testing software such as Cucumber or Café may also be used.
[0116] Specifically, when a user operation is detected, the interface automated testing software is started; it obtains the key frame, parses it, and obtains the keywords in the key frame. After the user operates the application, the interface automated testing software obtains the application interface after the user's operation; the interface may be acquired by the camera 121 in Figure 1. The interface is parsed, its text content is extracted, and it is judged whether the text content of the interface contains the keyword in the key frame. When obtaining the post-operation application interface in a specific implementation, whether the user has completed the operation can be judged through touch-screen sensing. For example, to complete the steps described in Figure 4, the user needs to tap the screen three times: first click WeChat to start it, then click the plus sign, and then click Add friends, which completes the steps for adding a friend. In this case it can be set that once the user has tapped the screen three times, the user is considered to have completed the operation, and the interface automated testing software then obtains the post-operation application interface. In a specific implementation, time sensing may also be used to judge whether the user has completed the operation. For example, with a time threshold of 20 seconds, if the user does not tap the screen for more than 20 seconds, the user is considered to have completed the operation, and the interface automated testing software then obtains the post-operation application interface; the time may also be set to 30 seconds, 40 seconds, and so on.
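The two completion criteria described above, an expected tap count (three taps for the add-a-friend example) or an inactivity timeout (20 seconds, optionally 30 or 40), can be sketched as a single predicate. The default thresholds below are the example values from the text, not fixed parameters of the invention.

```python
def operation_finished(tap_count, expected_taps=3,
                       seconds_since_last_tap=0.0, timeout_s=20.0):
    """The user is considered to have completed the operation either after
    the expected number of screen taps (touch-screen sensing) or after an
    inactivity timeout (time sensing). Sketch with illustrative defaults."""
    return tap_count >= expected_taps or seconds_since_last_tap > timeout_s
```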
[0117] In this embodiment, the interface automation testing software is started, and the keywords marked in the key frame are obtained through it; the text content in the interface after the user's operation is obtained through the same software; and it is judged whether that text content contains the keywords in the key frame. In this way, it can be judged whether the user's operation is correct, and the user can then be guided to perform the next operation.
[0118] Further, refer to Figure 9. Figure 9 is a schematic diagram of another refined functional module of the judgment module described in Figure 3.
[0119] Based on the embodiment illustrated in Figure 3 above, the judging module 400 may further include:
[0120] The identification unit 440 is used for identifying the text information in the interface after the user's operation;
[0121] The comparison unit 450 is configured to compare the recognized text information with the preset keywords in the video.
[0122] In this embodiment, the identification unit 440 may be the controller 180 in Figure 1. To ensure that this embodiment can be implemented normally, the keywords in the video need to be extracted and stored in a PDF beforehand; in a specific implementation they may also be stored in a Word document. The saved keywords may be stored in the memory 160 in Figure 1. After the user's operation is completed, the text information in the resulting interface is identified, and the identified text information is compared with the preset keywords in the video.
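The extract-store-compare flow above could be sketched roughly as follows. The function names and the plain-text file standing in for the PDF/Word storage are simplifying assumptions for illustration only.

```python
def store_keywords(keywords, path="keywords.txt"):
    """Persist the keywords extracted from the guide video. The text
    describes PDF or Word storage; a plain text file stands in for
    that document here as a simplification."""
    with open(path, "w", encoding="utf-8") as f:
        for kw in keywords:
            f.write(kw + "\n")


def matches_keyword(recognized_text, path="keywords.txt"):
    """Compare text recognized from the post-operation interface
    against the stored keywords; any match means the interface
    reflects the expected operation."""
    with open(path, encoding="utf-8") as f:
        keywords = [line.strip() for line in f if line.strip()]
    return any(kw in recognized_text for kw in keywords)
```

In the device, the recognition step would be performed by the identification unit 440 and the comparison by the comparison unit 450.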
[0123] In this embodiment, the text information in the interface after the user's operation is recognized and compared with the preset keywords in the video. In this way, it is judged whether the text information of the interface contains the preset keyword, so that it can be judged whether the user's operation is correct, and the user can then be guided to perform the next operation.
[0124] Further, refer to Figure 10. Figure 10 is a schematic diagram of the functional modules of the second embodiment of the guiding device for intelligent terminal operation according to the present invention.
[0125] Based on the above-mentioned embodiment of the guiding device for intelligent terminal operation of the present invention, the device further includes:
[0126] The second playing module 600 is configured to prompt that the operation is wrong and replay the preset video for guiding the operation of the intelligent terminal if the keyword corresponding to the key frame is not included in the application interface after the user's operation.
[0127] In this embodiment, if the application interface after the user's operation does not contain the keyword corresponding to the key frame, the user's operation is wrong; the user is prompted that the operation is wrong, and the preset video for guiding the operation of the smart terminal is replayed.
[0128] In this embodiment, if the keyword corresponding to the key frame is not included in the application interface after the user's operation, the user is prompted that the operation is wrong, and the preset video for guiding the operation of the intelligent terminal is replayed. In this way, when an operation-error prompt appears, the user can re-watch the video and operate the corresponding application again, until the keyword corresponding to the key frame is included in the interface after the operation. By watching and practicing repeatedly, users learn to use the smart terminal.
[0129] The present invention also provides a method for guiding the operation of an intelligent terminal.
Refer to Figure 12. Figure 12 is a schematic flowchart of the first embodiment of the method for guiding the operation of an intelligent terminal according to the present invention. The method includes:
[0131] Step S100, playing a preset video for guiding the operation of the intelligent terminal, where the video includes key frames.
[0132] To ensure the normal implementation of the present invention, the video for guiding the operation of the intelligent terminal needs to be obtained before this embodiment is implemented. The video may be obtained by receiving a transmission from another intelligent terminal, or by the user recording it on his or her own intelligent terminal. In this embodiment it is obtained by receiving a transmission from another intelligent terminal; in a specific implementation it can also be recorded by the user on his or her own intelligent terminal. The recording may capture the operation process of another user operating an application, and the application may be WeChat, QQ, a map, or the like. The video is then processed: the key steps in the video are selected, an operation guide is added, and the frames carrying the operation guide are marked as key frames. The operation guide can be a process guide in the form of a text description, or a process guide in the form of a voice recording. As shown in Figure 4, Figure 4 is selected, "click the plus sign, then click Add Friends" is added as the operation guide (voice operation guidance can also be added), and Figure 4 is marked as a key frame. The keyword of the key frame is then recorded: the key frame shown in Figure 4 guides how to add friends, so the words "add friends" are extracted as the keyword of the key frame shown in Figure 4. Alternatively, as shown in Figure 6, Figure 6 is selected, "click the plus sign, then click to receive payment" is added as the operation guide (voice operation guidance can also be added), and Figure 6 is marked as a key frame. The keyword of the key frame shown in Figure 6 is recorded: this key frame guides how to receive and pay, so the words "Receive and Pay" are extracted as the keyword of the key frame shown in Figure 6.
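The annotation step above, marking key frames and recording their operation guides and keywords, can be modeled with a small data structure. This is an illustrative sketch: the class names, fields, and timestamp values are assumptions, not part of the original disclosure.

```python
from dataclasses import dataclass, field


@dataclass
class KeyFrame:
    """One marked key frame: where it occurs in the guide video, the
    operation guide shown to the user, and the keyword that must appear
    in the interface after a correct operation."""
    timestamp_s: float  # hypothetical playback position of the key frame
    guide_text: str     # e.g. "click the plus sign, then click Add Friends"
    keyword: str        # e.g. "add friends"


@dataclass
class GuideVideo:
    app_name: str                        # e.g. "WeChat"
    key_frames: list = field(default_factory=list)

    def add_key_frame(self, timestamp_s, guide_text, keyword):
        # Record a key frame along with its guide text and keyword.
        self.key_frames.append(KeyFrame(timestamp_s, guide_text, keyword))
```

A recorded WeChat guide would then carry one such entry per key step, for example one for "add friends" and one for "Receive and Pay".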
[0133] After the recording is completed, the recorded video is sent to the smart terminals used by those who need operation guidance, such as parents or children, or the video is copied onto those smart terminals. The user of the smart terminal that stores the guide video can then trigger an instruction to play the video; the smart terminal receives the playback instruction, starts the playback function, and plays the video.
[0134] Step S200, when the key frame is played, pause the video, and start the application corresponding to the video.
[0135] In this embodiment, when the key frame is played, the user can trigger an instruction to pause the video, and the intelligent terminal receives the instruction and pauses playback. After pausing the video, the user can return to the desktop by pressing the Home button. In a specific implementation, the video may instead be paused automatically when the key frame is played, with an automatic return to the desktop. The user then clicks on the application corresponding to the video, triggering an instruction to start it, and the smart terminal receives the instruction and starts the application. For example, when the content of the video is a guide to WeChat operation on the smart terminal, the application corresponding to the video is WeChat, and WeChat is started.
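The automatic-pause variant described above amounts to checking the playback position against the recorded key-frame timestamps. A minimal sketch, assuming hypothetical timestamp offsets in seconds and a small matching tolerance (neither is specified in the original):

```python
def next_pause_point(position_s, keyframe_times, tolerance_s=0.5):
    """Return the timestamp of a key frame reached at the current
    playback position (within a tolerance), or None if playback
    should continue. All values are illustrative assumptions."""
    for t in keyframe_times:
        if abs(position_s - t) <= tolerance_s:
            return t
    return None
```

A player loop would poll this on each position update and, on a non-None result, pause the video and return to the desktop so the corresponding application can be started.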
[0136] Step S300, monitoring the user's operation on the application interface.
[0137] Step S400, when detecting that the user performs an operation, determine whether the application interface after the user's operation contains the keyword corresponding to the key frame.
[0138] Specifically, after starting the application corresponding to the video, the user can perform corresponding operations on the application according to the operation guide played by the video, and the intelligent terminal monitors the user's operation on the application interface. Then it is determined whether the keyword corresponding to the key frame is included in the application interface after the user's operation.
[0139] The judgment may be made using automated testing software, for example UI Automator: the automated testing software judges whether the text content of the interface contains the keyword in the key frame.
[0140] It is also possible to extract and save the keywords in the video in advance, during the video recording process. For example, they can be saved to the video attribute information, or to documents such as PDF and Word files. In this embodiment, after the text information in the interface following the user's operation is identified, the identified text information is compared with the preset keywords in the video attribute information to make the judgment.
[0141] Specifically, as shown in Figure 5, Figure 5 is the interface that appears after the user operates according to the operation guide in Figure 4. Figure 5 contains the words "Add a friend", which correspond to the keyword of the key frame shown in Figure 4, so it can be judged that the user's operation is correct. Alternatively, as shown in Figure 7, Figure 7 is the interface that appears after the user operates according to the operation guide in Figure 6. Figure 7 contains the words "Receive and Pay", which correspond to the keyword of the key frame shown in Figure 6, so it can be determined that the user's operation is correct.
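The correctness check itself reduces to a substring test between the recognized interface text and the key frame's keyword. A minimal sketch (the case-insensitive comparison is a simplifying assumption; the original does not specify matching rules):

```python
def operation_correct(interface_text, keyframe_keyword):
    """A correct operation is one whose resulting interface text
    contains the key frame's keyword. Matching is case-insensitive
    here as an illustrative simplification."""
    return keyframe_keyword.lower() in interface_text.lower()
```

For the Figure 6 / Figure 7 example, an interface containing "Receive and Pay" would match the keyword "Receive and Pay" and the operation would be judged correct.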
[0142] Step S500, if the application interface after the user's operation includes the keyword corresponding to the key frame, the operation guidance of the intelligent terminal is completed.
[0143] If the keyword corresponding to the key frame is included in the application interface after the user's operation, it is determined that the user's operation is accurate and the operation included in the key frame has been learned, and the operation guidance of the intelligent terminal is completed at this time.
[0144] Further, refer to Figure 16; step S500 may include:
[0145] S510, determine whether the key frame is the last key frame in the video;
[0146] S520, if not, continue to play the video until the operation in the last key frame is completed.
[0147] Specifically, after the user completes the operation guidance steps corresponding to the key frame, the judgment unit judges whether the key frame is the last key frame in the video. If it is not, the video continues to play; when the next key frame appears, playback is paused again, the user operates the application corresponding to the video again, and it is judged whether the application interface after the user's operation contains the keyword of the corresponding key frame. This repeats until the operation in the last key frame has been guided to completion.
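The overall play-pause-operate-check loop over all key frames can be sketched as follows. The callables are hypothetical hooks standing in for the player, the operation monitor, and text recognition; none of these names come from the original disclosure.

```python
def run_guidance(key_frames, play_until, wait_for_user, interface_text):
    """Drive the guidance flow: for each key frame, play the video up
    to it, pause, let the user operate, then check the resulting
    interface text for the key frame's keyword. Returns True once the
    last key frame's operation succeeds."""
    for kf in key_frames:
        play_until(kf)           # play the video and pause at this key frame
        while True:
            wait_for_user()      # monitor until the user finishes operating
            if kf["keyword"] in interface_text():
                break            # correct: continue to the next key frame
            # wrong: in the full method the guide video would be
            # replayed here before the user tries again
    return True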
[0148] In this embodiment, a preset video for guiding the operation of the intelligent terminal is played, and the video includes key frames; when a key frame is played, the video is paused and the application corresponding to the video is started; the user's operation on the application interface is monitored; when a user operation is detected, it is determined whether the application interface after the user's operation contains the keyword corresponding to the key frame; if it does, the operation guidance of the intelligent terminal is completed. In this way, the preset video guiding the operation of the smart terminal is played, and the user learns the steps and methods for operating the smart terminal by watching it. When a key frame is played, the video is paused and the application corresponding to the video is started; the user then operates the application according to the steps and methods shown in the video. The intelligent terminal monitors the user's operation on the application interface; when an operation is detected, it determines whether the resulting application interface contains the keyword corresponding to the key frame; if so, it determines that the user's operation is accurate and the corresponding operation has been learned, and the operation guidance of the intelligent terminal is completed. The present invention combines viewing with operation, which helps beginners memorize and become familiar with the operation steps of an application and improves the efficiency with which beginners learn to use the intelligent terminal.
[0149] Further, refer to Figure 13. Figure 13 is a detailed flowchart of the step in Figure 12 of judging, when a user operation is detected, whether the application interface after the user's operation contains the keyword corresponding to the key frame.
[0150] Based on the embodiment illustrated in Figure 12 above, step S400 may include:
[0151] S410, start the interface automated testing software, and obtain the keywords marked in the key frame through the automated testing software;
[0152] S420, obtain the text content in the interface after the user's operation through automated testing software;
[0153] S430: Determine whether the text content of the interface includes the keyword in the key frame.
[0154] In this embodiment, interface automation testing software is used to determine whether the application interface after the user's operation contains the keyword corresponding to the key frame. The interface automation testing software used is UI Automator; in a specific implementation, other automated testing software such as Cucumber or Café may also be used.
[0155] Specifically, when it is detected that the user performs an operation, the interface automation testing software is started; it obtains the key frame, parses it, and extracts the keywords in the key frame. After the user operates the application, the interface automation testing software acquires the application interface after the user's operation, parses the interface, and extracts its text content. It is then judged whether that text content contains the keyword in the key frame. When obtaining the application interface after the user's operation in a specific implementation, whether the user has completed the operation can be judged through touch-screen sensing. For example, the operation in Figure 4 requires the user to tap the screen three times to complete the steps described in Figure 4: first tap WeChat to launch it, then tap the plus sign, and then tap Add Friends, which completes the steps for adding a friend. In this case, it can be set that if the user taps the screen three times, the user is considered to have completed the operation, at which point the interface automation testing software obtains the application interface after the user's operation. In a specific implementation, time sensing can also be set to determine whether the user has completed the operation. For example, with time sensing set to 20 seconds, if the user does not tap the screen for more than 20 seconds, the user is considered to have completed the operation, and the interface automation testing software then obtains the application interface after the user's operation; the time can also be set to 30 seconds, 40 seconds, and so on in a specific implementation.
[0156] In this embodiment, the interface automation testing software is started, and the keywords marked in the key frame are obtained through it; the text content in the interface after the user's operation is obtained through the same software; and it is judged whether that text content contains the keywords in the key frame. In this way, it can be judged whether the user's operation is correct, and the user can then be guided to perform the next operation.
[0157] Further, refer to Figure 14. Figure 14 is another detailed flowchart of the step in Figure 12 of judging, when a user operation is detected, whether the application interface after the user's operation contains the keyword corresponding to the key frame.
[0158] Based on the embodiment illustrated in Figure 12 above, step S400 may further include:
[0159] S440, identifying the text information in the interface after the user's operation;
[0160] S450, compare the identified text information with the preset keywords in the video.
[0161] To ensure that this embodiment can be implemented normally, the keywords in the video need to be extracted and stored in a PDF beforehand; in a specific implementation they may also be stored in a Word document. After the user's operation is completed, the text information in the resulting interface is identified, and the identified text information is compared with the preset keywords in the video.
[0162] In this embodiment, the text information in the interface after the user's operation is recognized and compared with the preset keywords in the video. In this way, it is judged whether the text information of the interface contains the preset keyword, so that it can be judged whether the user's operation is correct, and the user can then be guided to perform the next operation.
[0163] Further, refer to Figure 15. Figure 15 is a schematic flowchart of the second embodiment of the method for guiding the operation of an intelligent terminal according to the present invention.
[0164] Based on the above-mentioned embodiments of the method for guiding the operation of an intelligent terminal of the present invention, the method further includes:
[0165] Step S600, if the application interface after the user's operation does not contain the keyword corresponding to the key frame, prompt that the operation is wrong and replay the preset video for guiding the operation of the intelligent terminal.
[0166] In this embodiment, if the application interface after the user's operation does not contain the keyword corresponding to the key frame, the user's operation is wrong; the user is prompted that the operation is wrong, and the preset video for guiding the operation of the intelligent terminal is replayed.
[0167] In this embodiment, if the keyword corresponding to the key frame is not included in the application interface after the user's operation, the user is prompted that the operation is wrong, and the preset video for guiding the operation of the intelligent terminal is replayed. In this way, when an operation-error prompt appears, the user can re-watch the video and operate the corresponding application again, until the keyword corresponding to the key frame is included in the interface after the operation. By watching and practicing repeatedly, users learn to use the smart terminal.
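The prompt-and-replay control flow of step S600 can be sketched as a single check. The callables are hypothetical hooks for the terminal's prompt and playback functions, and the message text is illustrative; neither is specified in the original.

```python
def check_and_guide(interface_text, keyword, prompt_error, replay_video):
    """If the post-operation interface text lacks the key frame's
    keyword, prompt that the operation is wrong and replay the guide
    video; otherwise report success so guidance can continue."""
    if keyword in interface_text:
        return True  # operation correct, guidance proceeds
    prompt_error("Operation wrong, please watch the video again")
    replay_video()
    return False
```

The caller would invoke this after each detected operation and, on a False result, wait for the user to re-watch the video and try the operation again.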
[0168] From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course hardware can also be used; in many cases, however, the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the parts that contribute to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or a CD-ROM) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present invention.
[0169] The above are only preferred embodiments of the present invention and are not intended to limit its scope. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.