Sound effect adjusting method and user terminal

A sound effect adjusting method and user terminal, applied in the electronics field, address the problems that a single set of sound effect parameters can hardly meet all users' requirements at once and that manual adjustment is time-consuming and cumbersome, thereby improving operational convenience, the user experience, and the auditory effect.

Inactive Publication Date: 2016-09-21
GUANGDONG OPPO MOBILE TELECOMM CORP LTD
Cites: 8 | Cited by: 19

AI-Extracted Technical Summary

Problems solved by technology

However, in practical applications, it is difficult for a single set of sound effect parameters to meet the needs of all users at the same time. For special groups, especially the elderly or users with poor hearing, the so...

Method used

In the embodiment of the present invention, by implementing the user terminal shown in FIG. 7, the corresponding sound effect parameters can be loaded by identifying the user's identity information, so that different sound effects can be switched automatically for different users, improving operational convenience and effectively improving the user experience and auditory effect. In addition, at least one of the user terminal's location information, system time, scene mode, and ambient volume value can be taken into account to further optimize the sound effect parameters, making them better suited to the user, more personalized, and better sounding.
[0091] In the method described in FIG. 1, when it is detected that a user has logged in to the target application in the user terminal, the target identity information of the user can be obtained, and the target sound effect parameters matching that identity information can be obtained accordingly; when the user terminal receives an audio output instruction, the target sound effect parameters are loaded to play the audio. By implementing the method described in FIG. 1, the corresponding sound effect parameters can be loaded by identifying the user's identity information, so that different sound effects can be switched automatically for different users, improving operational convenience and effectively improving the user experience and auditory effect.
[0107] In this embodiment, the same identity information may correspond to different sound effect parameters under different location information. In this case, the sound effect parameter list may include correspondences between the identity information, location information, and sound effect parameters of different users. The sound effect parameters corresponding to different location information of the same user may differ; for example, the parameters used at home and in the office may be set differently. By implementing this embodiment, the location information of the user terminal is further consider...

Abstract

Embodiments of the invention disclose a sound effect adjusting method and a user terminal. The method comprises the steps of detecting whether a target application in the user terminal is logged in by a user or not; when it is detected that the target application is logged in by the user, obtaining target identity information of the user; according to the target identity information, obtaining target sound effect parameters corresponding to the target identity information; and when the user terminal receives an audio output instruction, loading the target sound effect parameters to perform audio play. By implementing the embodiments of the invention, different sound effects can be automatically switched for different users, the convenience of operation can be enhanced, and the user experience can be improved.


Examples

  • Experimental program(1)

Example Embodiment

[0063] The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
[0064] The embodiments of the present invention disclose a sound effect adjustment method and a user terminal, which can load corresponding sound effect parameters by recognizing a user's identity information, thereby automatically switching between different sound effects for different users and effectively improving the user experience and auditory effect. Detailed descriptions are given below.
[0065] Referring to FIG. 1, FIG. 1 is a schematic flowchart of a sound effect adjustment method disclosed in an embodiment of the present invention. As shown in FIG. 1, the sound effect adjustment method may include the following steps:
[0066] 101. Detect whether a user is logged in to the target application in the user terminal; if yes, execute step 102; if not, execute step 105.
[0067] In the embodiment of the present invention, the user terminal may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a multimedia player (such as an MP3 or CD player), a smart wearable device (such as a smart watch or smart band), and various other terminals, which are not limited in the embodiment of the present invention.
[0068] In the embodiment of the present invention, the target application in the user terminal may be an application built into the user terminal, or may be a downloaded and installed third-party application, which is not limited in the embodiment of the present invention. For example, it may be detected whether a user has logged in to the audio player in the user terminal. Whether a user is logged in to the target application can be detected in real time, or only at specific times. In addition, it is also possible to detect whether a user is logged in to the user terminal itself, that is, the user can log in to the user terminal before using it.
[0069] As an optional implementation manner, a specific implementation of step 101, detecting whether a user is logged in to the target application in the user terminal, may include: detecting whether a user-triggered login operation is received on the login interface of the target application, where the login operation carries the user's login information; if it is received, determining whether the login information entered by the user matches the preset login information; and if it matches, determining that a user is logged in to the target application in the user terminal.
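The credential match described above can be sketched as follows; the user names, passwords, and plain-text storage are purely illustrative assumptions (a real terminal would store hashed credentials), not part of the disclosure:

```python
# Illustrative sketch of step 101: detect a user login on the target
# application by matching the entered login information against the
# preset login information. All names and values are hypothetical.

PRESET_LOGINS = {"alice": "gesture-L", "bob": "pw123"}  # username -> password

def user_logged_in(username, password):
    """Return True when the login operation carries credentials that
    match the preset login information (a user login is detected)."""
    return PRESET_LOGINS.get(username) == password
```

When the match fails (or no login operation arrives), the method proceeds along the "no user login" branch and default parameters are used instead.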
[0070] 102. Acquire target identity information of the user.
[0071] In the embodiment of the present invention, when it is detected that a user is logged in to the target application in the user terminal, the target identity information of the user can be obtained. Before logging in, the user can register on the target application platform in the user terminal, for example in the form of a user name (or login account) and password. The user name can be a nickname or an email address entered by the user. The password may include, but is not limited to, at least one of a text string password, a gesture password, and a biometric password, where the biometric information may include one or a combination of facial feature information, fingerprint information, iris information, retina information, and voiceprint information. In addition, when registering, the user's age (which can be used to determine the user's age group), gender, preferences, and other information can also be entered.
[0072] In the embodiment of the present invention, when the user logs in successfully, the target identity information of the user can be obtained, where the target identity information may include but is not limited to at least one of user name, age, gender and other information.
[0073] 103. According to the target identity information, obtain target sound effect parameters corresponding to the target identity information.
[0074] In the embodiment of the present invention, after obtaining the target identity information of the user who has successfully logged in, the target sound effect parameters matching the target identity information can be further obtained, where the target sound effect parameters may include, but are not limited to, at least one of a volume value, a sound effect style (such as original, rock, classical, pop, or jazz), a scene mode (such as a concert hall mode, room mode, headphone mode, or KTV mode), stereo, and mono/dual-channel settings. The volume value can be adjusted through a volume controller, and the sound effect style, scene mode, and similar settings can be adjusted through a sound effect equalizer.
[0075] In the embodiment of the present invention, the target sound effect parameters corresponding to different target identity information may be different, for example, different sound effect parameters may be set for different users. When the user registers, the corresponding sound effect parameters can be set.
[0076] As an optional implementation manner, before step 101 of detecting whether a user is logged in to the target application in the user terminal is performed, the method described in FIG. 1 may also include the following steps:
[0077] 11) For different users, preset and store a sound effect parameter list, the sound effect parameter list including the mapping relationship between the identity information of different users and the sound effect parameters;
[0078] Wherein, in step 103, a specific implementation manner of obtaining target sound effect parameters corresponding to the target identity information according to the target identity information may include the following steps:
[0079] 12) According to the target identity information, obtain target sound effect parameters corresponding to the target identity information from the sound effect parameter list.
[0080] In this embodiment, for any user, corresponding sound effect parameters can be set at registration and stored in the sound effect parameter list. The sound effect parameter list can contain the identity information of all registered users together with the corresponding sound effect parameters, and the parameters corresponding to different identity information may differ. In addition, after logging in, the user can modify the preset sound effect parameters according to his or her own needs or preferences, in which case the sound effect parameters corresponding to that user in the list are updated accordingly.
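The sound effect parameter list of steps 11)-12) can be sketched as a simple mapping from identity information to parameters; all names and parameter values below are hypothetical:

```python
# Illustrative sketch of the sound effect parameter list: a mapping from
# identity information to that user's sound effect parameters.

sound_effect_list = {
    "alice": {"volume": 40, "style": "pop", "scene": "headphone"},
    "bob":   {"volume": 70, "style": "rock", "scene": "KTV"},
}

def get_target_params(identity, default=None):
    """Step 12): look up the target sound effect parameters for the
    logged-in user's identity information."""
    return sound_effect_list.get(identity, default)

def update_params(identity, **changes):
    """After login the user may modify the preset parameters; the list
    is updated accordingly."""
    sound_effect_list.setdefault(identity, {}).update(changes)
```

A production implementation would persist this list (e.g. in a local database) rather than keep it in memory, but the lookup-by-identity structure is the same.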
[0081] As an optional implementation manner, for any user, the method for setting sound effect parameters may include the following steps:
[0082] 13) When the user is registering, obtain the user's identity information;
[0083] 14) Receive the sound effect adjustment operation input by the user;
[0084] 15) According to the sound effect adjustment operation, set the corresponding sound effect parameters;
[0085] 16) Establish a binding relationship between the user's identity information and the sound effect parameter, and store the binding relationship in the sound effect parameter list.
[0086] Here, the user's identity information may include at least one of the user name, gender, age, and the like. The user can select an existing set of sound effect parameters in the user terminal as his or her own; preferably, the user terminal can recommend suitable sound effect parameters according to the user's identity information, for example, recommending sound effect parameters suitable for the elderly when the user is an elderly person. The sound effect parameters can also be set by the user through manual adjustment.
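Steps 13)-16) can be sketched as follows; the age threshold, parameter values, and function names are assumptions chosen only to illustrate the recommendation and binding logic:

```python
# Illustrative sketch of steps 13)-16): at registration, bind the user's
# identity information to sound effect parameters, optionally recommending
# a baseline from the user's age. Threshold and values are hypothetical.

sound_effect_list = {}

def recommend_params(age):
    """Recommend parameters suited to the user, e.g. a louder profile
    for elderly users (the threshold of 65 is an arbitrary assumption)."""
    if age >= 65:
        return {"volume": 85, "style": "original"}
    return {"volume": 50, "style": "pop"}

def register(username, age, adjustments=None):
    params = recommend_params(age)        # baseline from identity info
    params.update(adjustments or {})      # step 15): user's own adjustment
    sound_effect_list[username] = params  # step 16): store the binding
    return params
```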
[0087] 104. When the user terminal receives the audio output instruction, load the target sound effect parameter to perform audio playback.
[0088] In the embodiment of the present invention, when the user terminal receives an audio output instruction, it can load the acquired target sound effect parameters and replace the user terminal's current sound effect parameters with them to play the audio. The audio output instruction received by the user terminal may be triggered by the user (for example, the user taps to play an audio file) or by the user terminal itself (for example, an alarm sounds or a call comes in).
[0089] 105. When the user terminal receives the audio output instruction, it loads the default sound effect parameters for audio playback.
[0090] In the embodiment of the present invention, when it is detected that no user is logged in to the target application and the user terminal receives an audio output instruction, the default sound effect parameters in the user terminal can be loaded for audio playback. The default sound effect parameters may be the user terminal's original sound effect parameters, such as fixed sound effect parameters preset by the user, or may be the sound effect parameters most recently adjusted on the user terminal, for example, those used during the previous user login.
[0091] In the method described in FIG. 1, when it is detected that a user has logged in to the target application in the user terminal, the target identity information of the user can be obtained, and the target sound effect parameters matching that identity information can be obtained accordingly; when the user terminal receives an audio output instruction, it loads the target sound effect parameters to perform audio playback. By implementing the method described in FIG. 1, the corresponding sound effect parameters can be loaded by recognizing the user's identity information, so that different sound effects can be switched automatically for different users, which improves operational convenience and effectively improves the user experience and auditory effect.
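The overall flow of steps 101-105 can be sketched as a single lookup with a default fallback; all parameter values are hypothetical:

```python
# Minimal end-to-end sketch of the method of FIG. 1 (steps 101-105):
# a logged-in user's parameters are loaded for playback; otherwise the
# terminal falls back to its default parameters.

DEFAULT_PARAMS = {"volume": 50, "style": "original"}
sound_effect_list = {"alice": {"volume": 40, "style": "pop"}}

def params_for_playback(logged_in_identity=None):
    """Called when the terminal receives an audio output instruction."""
    if logged_in_identity in sound_effect_list:       # steps 102-104
        return sound_effect_list[logged_in_identity]
    return DEFAULT_PARAMS                             # step 105
```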
[0092] Referring to FIG. 2, FIG. 2 is a schematic flowchart of another sound effect adjustment method disclosed in an embodiment of the present invention. As shown in FIG. 2, the sound effect adjustment method may include the following steps:
[0093] 201. Detect whether a user is logged in to the target application in the user terminal; if yes, execute step 202; if not, execute step 206.
[0094] 202. Acquire target identity information of the user.
[0095] In the embodiment of the present invention, the target identity information may include but is not limited to at least one of user name (or login account number), age, gender, and other information.
[0096] 203. Obtain target data.
[0097] In the embodiment of the present invention, the target data may include but is not limited to at least one of the location information of the user terminal, the current system time of the user terminal, the current scene mode of the user terminal, and the volume value of the environment where the user terminal is currently located.
[0098] In the embodiment of the present invention, the location information of the user terminal may be the location information of its current position, which may be obtained through GPS (Global Positioning System) in the user terminal, through base station positioning, through Wi-Fi positioning, or the like, which is not limited in the embodiment of the present invention. The current location information may be expressed as latitude and longitude coordinates, or as a specific actual address, such as the province, city, street, and house number where the terminal is located. The location information may also be the location corresponding to the geographic position with the highest historical activity frequency of the user terminal: the number and/or duration of the user terminal's activities at each geographic position within a preset period (such as a month, a week, or a day) can be counted, and the location of the position with the most activities and/or the longest duration obtained. The location information may be a specific position or a position range, which is not limited in the embodiment of the present invention. The current system time of the user terminal may be the time currently output by the user terminal, such as 8:30 or 19:00. The current scene mode of the user terminal may include, but is not limited to, a standard mode, conference mode, flight mode, silent mode, and so on. The volume value of the environment where the user terminal is currently located is the loudness of the ambient noise, in decibels, which can be collected and evaluated through the user terminal's microphone.
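The target data of step 203 can be sketched as a small record whose fields mirror the four sources above; the sensor values are stubbed assumptions, since real readings would come from positioning hardware, the system clock, the profile setting, and the microphone:

```python
# Illustrative sketch of the target data gathered in step 203.
from dataclasses import dataclass

@dataclass
class TargetData:
    location: str      # e.g. "home", "office", or lat/lon coordinates
    system_time: str   # e.g. "19:00"
    scene_mode: str    # e.g. "standard", "conference", "silent"
    ambient_db: float  # ambient loudness in decibels, from the microphone

def collect_target_data():
    # Stubbed sensors; a real terminal would query hardware/OS APIs here.
    return TargetData(location="home", system_time="19:00",
                      scene_mode="standard", ambient_db=35.0)
```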
[0099] 204. According to the target identity information, obtain target sound effect parameters corresponding to the target identity information and the target data.
[0100] In the embodiment of the present invention, the sound effect parameters corresponding to the same target identity information under different target data may be different.
[0101] As an optional implementation manner, before step 201 is executed, the method described in FIG. 2 may also include the following steps:
[0102] 21) For different users, a sound effect parameter list is preset and stored, and the sound effect parameter list includes the mapping relationship between the identity information of different users and the sound effect parameters;
[0103] Wherein, in step 204, a specific implementation manner of obtaining target sound effect parameters corresponding to the target identity information and the target data according to the target identity information may include the following steps:
[0104] 22) According to the target identity information, obtain target sound effect parameters corresponding to the target identity information and the target data from the sound effect parameter list.
[0105] As an optional implementation manner, when the target data includes the location information of the user terminal, a specific implementation of step 22), obtaining from the sound effect parameter list the target sound effect parameters corresponding to the target identity information and the target data according to the target identity information, may include the following steps:
[0106] 23) According to the target identity information, a sound effect parameter corresponding to the target identity information and the location information of the user terminal is obtained from the sound effect parameter list as the target sound effect parameter.
[0107] In this embodiment, the sound effect parameters corresponding to the same identity information may differ under different location information. In this case, the sound effect parameter list may include correspondences between the identity information, location information, and sound effect parameters of different users. The sound effect parameters corresponding to the same user under different location information may differ; for example, the parameters used at home and in the office may be set differently. By implementing this embodiment, the location information of the user terminal is further considered on the basis of the identity information, and the sound effect parameters can change with the location, making them better suited to the user, more personalized, and better sounding.
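Step 23) can be sketched by keying the parameter list on an (identity, location) pair; the entries below are hypothetical:

```python
# Illustrative sketch of step 23): the list is keyed by
# (identity, location), so the same user gets different parameters
# at home and at the office.

sound_effect_list = {
    ("alice", "home"):   {"volume": 60, "style": "pop"},
    ("alice", "office"): {"volume": 30, "style": "original"},
}

def params_by_location(identity, location, default=None):
    return sound_effect_list.get((identity, location), default)
```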
[0108] As an optional implementation manner, when the target data includes the current system time of the user terminal, a specific implementation of step 22), obtaining from the sound effect parameter list the target sound effect parameters corresponding to the target identity information and the target data according to the target identity information, may include the following steps:
[0109] 24) Determine the preset time range to which the current system time of the user terminal belongs;
[0110] 25) According to the target identity information, a sound effect parameter corresponding to the target identity information and the preset time range is obtained from the sound effect parameter list as the target sound effect parameter.
[0111] In this embodiment, the sound effect parameters corresponding to the same identity information may differ across different time ranges. In this case, the sound effect parameter list may include correspondences between the identity information, time ranges, and sound effect parameters of different users. The sound effect parameters corresponding to the same user in different time ranges can differ; for example, the parameters used between 8:00 and 11:00 in the morning and between 21:00 and 24:00 in the evening can be set differently, or the parameters for rest days and working days can be set differently. By implementing this embodiment, the system time of the user terminal is further considered on the basis of the identity information, and the sound effect parameters can change over time, making them better suited to the user, more personalized, and better sounding.
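Steps 24)-25) can be sketched as a two-stage lookup: first bucket the system time into a preset range, then look up (identity, range). The ranges mirror the 8:00~11:00 and 21:00~24:00 example above; the parameter values are assumptions:

```python
# Illustrative sketch of steps 24)-25): bucket the current system time
# into a preset time range, then look up (identity, time range).

TIME_RANGES = [("morning", 8, 11), ("evening", 21, 24)]  # name, start, end hour

def time_range_of(hour):
    """Step 24): determine the preset time range the hour belongs to."""
    for name, start, end in TIME_RANGES:
        if start <= hour < end:
            return name
    return "other"

# Step 25): the list keys on (identity, time range); values hypothetical.
sound_effect_list = {
    ("alice", "morning"): {"volume": 65},
    ("alice", "evening"): {"volume": 35},
    ("alice", "other"):   {"volume": 50},
}

def params_by_time(identity, hour):
    return sound_effect_list[(identity, time_range_of(hour))]
```

The rest-day/working-day variant mentioned above would simply use the weekday instead of the hour as the bucketing input.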
[0112] As an optional implementation manner, when the target data includes the current scene mode of the user terminal, a specific implementation of step 22), obtaining from the sound effect parameter list the target sound effect parameters corresponding to the target identity information and the target data according to the target identity information, may include the following steps:
[0113] 26) According to the target identity information, obtain the sound effect parameters corresponding to the target identity information and the current scene mode of the user terminal from the sound effect parameter list as the target sound effect parameters.
[0114] In this embodiment, the sound effect parameters corresponding to the same identity information may differ across different scene modes. In this case, the sound effect parameter list may include correspondences between the identity information, scene modes, and sound effect parameters of different users. The sound effect parameters corresponding to the same user in different scene modes can differ; for example, the parameters used in the standard mode and the conference mode can be set differently. By implementing this embodiment, the scene mode of the user terminal is further considered on the basis of the identity information, and the sound effect parameters can change with the scene mode, making them better suited to the user, more personalized, and better sounding.
[0115] As an optional implementation manner, when the target data includes the volume value of the environment where the user terminal is currently located, a specific implementation of step 22), obtaining from the sound effect parameter list the target sound effect parameters corresponding to the target identity information and the target data according to the target identity information, may include the following steps:
[0116] 27) Determine the preset volume range to which the volume value of the environment where the user terminal is currently located belongs;
[0117] 28) According to the target identity information, the sound effect parameter corresponding to the target identity information and the preset volume range is obtained from the sound effect parameter list as the target sound effect parameter.
[0118] In this embodiment, the sound effect parameters corresponding to the same identity information may differ across different volume ranges. In this case, the sound effect parameter list may include correspondences between the identity information, volume ranges, and sound effect parameters of different users. The sound effect parameters corresponding to the same user in different volume ranges may differ; for example, for the same user, the parameters corresponding to ambient volume values of 0-20 decibels and 20-40 decibels may be set differently. By implementing this embodiment, the ambient volume value is further considered on the basis of the identity information, and the sound effect parameters can change with the ambient volume, making them better suited to the user, more personalized, and better sounding.
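Steps 27)-28) can be sketched the same way for ambient volume, bucketing the measured decibel value into preset ranges such as 0-20 dB and 20-40 dB; the boundaries and fallback value are assumptions:

```python
# Illustrative sketch of steps 27)-28): bucket the measured ambient
# volume into a preset decibel range, then look up (identity, range).
import bisect

BOUNDARIES = [20, 40, 60]                     # upper edges of the dB ranges
RANGE_NAMES = ["0-20", "20-40", "40-60", "60+"]

def volume_range_of(db):
    """Step 27): determine the preset volume range the value belongs to."""
    return RANGE_NAMES[bisect.bisect_right(BOUNDARIES, db)]

# Step 28): the list keys on (identity, volume range); values hypothetical.
sound_effect_list = {
    ("alice", "0-20"):  {"volume": 30},
    ("alice", "20-40"): {"volume": 55},
}

def params_by_ambient(identity, db):
    return sound_effect_list.get((identity, volume_range_of(db)),
                                 {"volume": 70})  # hypothetical fallback
```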
[0119] 205. When the user terminal receives the audio output instruction, load the target sound effect parameter to perform audio playback.
[0120] In the embodiment of the present invention, when the user terminal is connected to the Internet, the target sound effect parameters set by the user can be uploaded to a server, so that other users in a similar environment can directly select those parameters as their own without setting them manually. In addition, the user can change the target sound effect parameters at any time after logging in; after the change is completed, the sound effect parameter list is updated accordingly.
[0121] 206. When the user terminal receives the audio output instruction, it loads the default sound effect parameters to perform audio playback.
[0122] In the embodiment of the present invention, by implementing the method described in FIG. 2, the corresponding sound effect parameters can be loaded by recognizing the user's identity information, so that different sound effects can be switched automatically for different users, which improves operational convenience and effectively improves the user experience and auditory effect. In addition, at least one of the location information of the user terminal, the system time, the scene mode, and the ambient volume value can be considered in combination to further optimize the sound effect parameters, making them better suited to the user, more personalized, and better sounding.
[0123] Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a user terminal disclosed in an embodiment of the present invention, which can be used to execute the sound effect adjustment method disclosed in the embodiments of the present invention. As shown in FIG. 3, the user terminal may include:
[0124] The detection unit 301 is used to detect whether a user is logged in to the target application in the user terminal.
[0125] In the embodiment of the present invention, the target application in the user terminal may be an application built into the user terminal, or may be a downloaded and installed third-party application, which is not limited in the embodiment of the present invention. For example, the detection unit 301 detects whether a user is logged in to the audio player in the user terminal. In addition, the detection unit 301 can also detect whether a user is logged in to the user terminal itself, that is, the user can log in to the user terminal before using it.
[0126] The first obtaining unit 302 is configured to obtain the target identity information of the user when the detection unit 301 detects that a user is logged in to the target application.
[0127] In the embodiment of the present invention, before logging in, the user can register on the target application platform in the user terminal, for example in the form of a user name and password, where the user name can be a nickname or an email address entered by the user. The password may include, but is not limited to, at least one of a text string password, a gesture password, and a biometric password, where the biometric information may include one or a combination of facial feature information, fingerprint information, iris information, retina information, and voiceprint information. In addition, when registering, the user's age, gender, preferences, and other information can also be entered.
[0128] In the embodiment of the present invention, when the user logs in successfully, the first obtaining unit 302 may obtain the target identity information of the user, where the target identity information may include, but is not limited to, at least one of the user name, age, gender, and the like.
[0129] The second obtaining unit 303 is configured to obtain target sound effect parameters corresponding to the target identity information according to the target identity information.
[0130] In the embodiment of the present invention, the target sound effect parameters may include, but are not limited to, at least one of a volume value, a sound effect style (such as original, rock, classical, pop, or jazz), a scene mode (such as a concert hall mode, room mode, headphone mode, or KTV mode), stereo, and mono/dual-channel settings. The volume value can be adjusted through a volume controller, and the sound effect style, scene mode, and similar settings can be adjusted through an equalizer. The target sound effect parameters corresponding to different target identity information may differ; for example, different sound effect parameters may be set for different users, and the corresponding sound effect parameters can be set when the user registers.
[0131] The loading unit 304 is configured to load the target sound effect parameter for audio playback when the user terminal receives an audio output instruction.
[0132] In the embodiment of the present invention, when the user terminal receives an audio output instruction, the loading unit 304 may load the acquired target sound effect parameters and replace the user terminal's current sound effect parameters with them to perform audio playback. The audio output instruction received by the user terminal may be triggered by the user (for example, the user taps to play an audio file) or by the user terminal itself (for example, an alarm sounds or a call comes in).
[0133] As an optional implementation manner, the loading unit 304 can also be configured to load the default sound effect parameters in the user terminal for audio playback when the detection unit 301 detects that no user is logged in to the target application and the user terminal receives an audio output instruction. The default sound effect parameters may be the user terminal's original sound effect parameters, or may be the sound effect parameters most recently adjusted on the user terminal.
[0134] Please refer to Figure 4, which is a schematic structural diagram of another user terminal disclosed in an embodiment of the present invention and which can be used to execute the sound effect adjustment method disclosed in the embodiments of the present invention. The user terminal shown in Figure 4 is a further optimization of the user terminal shown in Figure 3. Compared with the user terminal shown in Figure 3, the user terminal shown in Figure 4 may also include:
[0135] The setting unit 305 is configured to preset and store a sound effect parameter list for different users before the detection unit 301 detects whether the target application in the user terminal has a user logged in. The sound effect parameter list includes the mapping relationship between the identity information of different users and the sound effect parameters.
[0136] Correspondingly, the specific implementation manner for the second obtaining unit 303 to obtain the target sound effect parameter corresponding to the target identity information according to the target identity information may be:
[0137] The second obtaining unit 303 obtains the target sound effect parameter corresponding to the target identity information from the sound effect parameter list according to the target identity information.
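The lookup performed by the second obtaining unit 303 can be sketched as a simple dictionary lookup. This is only an illustrative sketch; the patent does not prescribe any data format, and all names here (the list contents, `DEFAULT_PARAMS`, `get_target_params`) are hypothetical.

```python
# Hypothetical sketch of the sound effect parameter list: a mapping from a
# user's identity information to that user's preferred sound effect parameters.
DEFAULT_PARAMS = {"volume": 50, "style": "original", "scene": "room", "channels": "stereo"}

sound_effect_list = {
    "user_alice": {"volume": 40, "style": "jazz", "scene": "headset", "channels": "stereo"},
    "user_bob":   {"volume": 80, "style": "rock", "scene": "KTV",     "channels": "dual"},
}

def get_target_params(identity):
    # Return the stored parameters for this identity, or the terminal's
    # default parameters when no entry was registered for the user.
    return sound_effect_list.get(identity, DEFAULT_PARAMS)
```

The fallback to default parameters mirrors the optional implementation in which the terminal plays audio with its default sound effect parameters when no registered user is matched.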
[0138] As an optional implementation, the user terminal shown in Figure 4 may also include:
[0139] The third acquiring unit 306 is configured to acquire target data. The target data may include, but is not limited to, at least one of the location information of the user terminal, the current system time of the user terminal, the current scene mode of the user terminal, and the volume value of the environment where the user terminal is currently located.
[0140] Correspondingly, a specific implementation manner in which the second obtaining unit 303 obtains the target sound effect parameter corresponding to the target identity information from the sound effect parameter list according to the target identity information may be:
[0141] The second acquiring unit 303 acquires the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter according to the target identity information. Wherein, the sound effect parameters corresponding to the same target identity information under different target data may be different. In this case, the sound effect parameter list may include the corresponding relationship among the identity information, the target data, and the sound effect parameters.
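When target data is considered, the list is effectively keyed by the pair of identity information and target data, rather than by identity alone. A rough sketch, with all names and values hypothetical:

```python
# Hypothetical sketch: the sound effect parameter list maps
# (identity, target_data) pairs, e.g. (user, scene mode), to parameters,
# so the same user can get different parameters under different target data.
sound_effect_list = {
    ("user_alice", "headset"):      {"volume": 35, "style": "classic"},
    ("user_alice", "concert_hall"): {"volume": 60, "style": "pop"},
}

def get_target_params(identity, target_data, fallback=None):
    # Look up the parameters for this user under the current target data;
    # return the fallback when no matching entry exists.
    return sound_effect_list.get((identity, target_data), fallback)
```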
[0142] As an optional implementation manner, when the target data includes the location information of the user terminal, a specific implementation manner in which the second obtaining unit 303 obtains, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data as the target sound effect parameter may be:
[0143] The second acquiring unit 303 acquires, from the sound effect parameter list according to the target identity information, the sound effect parameters corresponding to the target identity information and the location information of the user terminal as the target sound effect parameters, wherein the sound effect parameters corresponding to the same identity information under different location information may be different. In this case, the sound effect parameter list may include the correspondence among identity information, location information, and sound effect parameters.
[0144] As an optional implementation manner, when the target data includes the current scene mode of the user terminal, a specific implementation manner in which the second obtaining unit 303 obtains, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data as the target sound effect parameter may be:
[0145] The second obtaining unit 303 obtains, from the sound effect parameter list according to the target identity information, the sound effect parameters corresponding to the target identity information and the current scene mode of the user terminal as the target sound effect parameters, wherein the sound effect parameters corresponding to the same identity information in different scene modes may be different. In this case, the sound effect parameter list may include the correspondence among identity information, scene mode, and sound effect parameters.
[0146] As an optional implementation, when the target data includes the current system time of the user terminal, please refer to Figure 5, which is a schematic structural diagram of another user terminal disclosed in an embodiment of the present invention and which can be used to execute the sound effect adjustment method disclosed in the embodiments of the present invention. The user terminal shown in Figure 5 is a further optimization of the user terminal shown in Figure 4. Compared with the user terminal shown in Figure 4, the second acquiring unit 303 in the user terminal shown in Figure 5 may include:
[0147] The first determining subunit 3031 is configured to determine the preset time range to which the current system time of the user terminal belongs;
[0148] The first obtaining subunit 3032 is configured to obtain, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the preset time range as the target sound effect parameter, wherein the sound effect parameters corresponding to the same identity information in different time ranges may be different. In this case, the sound effect parameter list may include the correspondence among identity information, time range, and sound effect parameters.
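The step performed by the first determining subunit 3031 (classifying the current system time into a preset time range) can be sketched as follows. The range names and boundaries are hypothetical examples, not values specified by the patent:

```python
# Hypothetical preset time ranges: (name, start_hour, end_hour), end exclusive.
TIME_RANGES = [("night", 0, 7), ("day", 7, 22), ("evening", 22, 24)]

def time_range_of(hour):
    # Return the name of the preset time range containing the given hour.
    for name, start, end in TIME_RANGES:
        if start <= hour < end:
            return name
    raise ValueError("hour out of range: %r" % hour)
```

The resulting range name (rather than the exact clock time) then serves as the second key into the sound effect parameter list, so a user can, for example, get a lower volume at night than during the day.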
[0149] As an optional implementation manner, when the target data includes the volume value of the environment where the user terminal is currently located, please refer to Figure 6, which is a schematic structural diagram of another user terminal disclosed in an embodiment of the present invention and which can be used to execute the sound effect adjustment method disclosed in the embodiments of the present invention. The user terminal shown in Figure 6 is a further optimization of the user terminal shown in Figure 4. Compared with the user terminal shown in Figure 4, the second acquiring unit 303 in the user terminal shown in Figure 6 may include:
[0150] The second determining subunit 3033 is configured to determine the preset volume range to which the volume value of the environment where the user terminal is currently located belongs;
[0151] The second acquiring subunit 3034 is configured to acquire, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the preset volume range as the target sound effect parameter, wherein the sound effect parameters corresponding to the same identity information in different volume ranges may be different. In this case, the sound effect parameter list may include the correspondence among identity information, volume range, and sound effect parameters.
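Analogously, the second determining subunit 3033 classifies the measured ambient volume into a preset volume range. A minimal sketch, with hypothetical thresholds (the patent does not specify units or boundary values):

```python
def volume_range_of(db):
    # Map an ambient volume value (here assumed to be in dB) to a preset
    # volume range; the range name then keys the parameter lookup.
    if db < 40:
        return "quiet"
    elif db < 70:
        return "normal"
    return "noisy"
```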
[0152] In the embodiment of the present invention, by implementing the user terminals shown in Figure 3 to Figure 6, the corresponding sound effect parameters can be loaded by recognizing the user's identity information, so that different sound effects can be automatically switched for different users, improving operation convenience and effectively improving user experience and auditory effects. In addition, at least one of the location information of the user terminal, the system time, the scene mode, and the ambient volume value can be considered in combination to further optimize the sound effect parameters, making them better suited to the user, more humanized, and better sounding.
[0153] Please refer to Figure 7, which is a schematic structural diagram of another user terminal disclosed in an embodiment of the present invention and which can be used to execute the sound effect adjustment method disclosed in the embodiments of the present invention. As shown in Figure 7, the user terminal 700 may include components such as at least one processor 701, at least one input device 702, at least one output device 703, and a memory 704. These components can be connected through one or more buses 705 for communication. Those skilled in the art can understand that the structure of the user terminal shown in Figure 7 does not constitute a limitation on the embodiments of the present invention; it may be a bus-shaped or star-shaped structure, and may include more or fewer components than shown in the figure, a combination of certain components, or a different component arrangement. Among them:
[0154] In the embodiment of the present invention, the processor 701 is the control center of the user terminal. It uses various interfaces and lines to connect the various parts of the entire user terminal, and performs the various functions of the user terminal and processes data by running or executing programs and/or modules stored in the memory 704 and calling data in the memory 704. The processor 701 may be composed of an integrated circuit (IC), for example a single packaged IC, or multiple connected packaged ICs with the same or different functions. For example, the processor 701 may include only a central processing unit (CPU), or a combination of a CPU, a digital signal processor (DSP), a graphics processing unit (GPU), and various control chips. In the embodiment of the present invention, the CPU may have a single computing core or multiple computing cores.
[0155] In the embodiment of the present invention, the input device 702 may include a standard touch screen, a keyboard, etc., and may also include a wired interface, a wireless interface, etc., and may be used to implement interaction between the user and the user terminal 700.
[0156] In the embodiment of the present invention, the output device 703 may include a display screen, a speaker, etc., and may also include a wired interface, a wireless interface, and the like.
[0157] In the embodiment of the present invention, the memory 704 can be used to store application programs and modules. The processor 701, the input device 702, and the output device 703 call the application programs and modules stored in the memory 704 to execute the various functional applications of the user terminal and realize data processing. The memory 704 mainly includes a program storage area and a data storage area. The program storage area can store an operating system, an application program required for at least one function, etc.; the data storage area can store data created according to the use of the user terminal, etc. In the embodiment of the present invention, the operating system may be an Android system, an iOS system, a Windows operating system, and so on.
[0158] In the user terminal shown in Figure 7, the processor 701 calls an application program stored in the memory 704 to perform the following operations:
[0159] Detecting whether a user is logged in to the target application in the user terminal 700;
[0160] When it is detected that the target application has a user login, obtain the user's target identity information;
[0161] According to the target identity information, obtain the target sound effect parameters corresponding to the target identity information;
[0162] When the input device 702 in the user terminal 700 receives an audio output instruction, the output device 703 is triggered to load the target sound effect parameter for audio playback.
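The operations above, together with the default-parameter fallback described later, can be sketched as a single control flow. This is an illustrative stand-in, not the patented implementation; the class, its fields, and its parameter values are all hypothetical:

```python
class Terminal:
    # Minimal stand-in for user terminal 700 (illustration only).
    def __init__(self, logged_in_user=None):
        self.logged_in_user = logged_in_user
        self.param_list = {"user_alice": {"volume": 40, "style": "jazz"}}
        self.default = {"volume": 50, "style": "original"}
        self.loaded = None  # parameters currently loaded for playback

    def on_audio_output(self):
        # On an audio output instruction: if a user is logged in to the
        # target application, load that user's parameters from the list;
        # otherwise fall back to the default sound effect parameters.
        if self.logged_in_user is not None:
            params = self.param_list.get(self.logged_in_user, self.default)
        else:
            params = self.default
        self.loaded = params
        return params
```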
[0163] As an optional implementation manner, before detecting whether the target application in the user terminal 700 has a user login, the processor 701 may also call an application program stored in the memory 704 and perform the following operations:
[0164] For different users, a sound effect parameter list is preset and stored in the memory 704, and the sound effect parameter list includes the mapping relationship between the identity information of different users and the sound effect parameters;
[0165] Wherein, a specific implementation manner for the processor 701 to obtain the target sound effect parameter corresponding to the target identity information according to the target identity information may be:
[0166] According to the target identity information, the target sound effect parameter corresponding to the target identity information is obtained from the sound effect parameter list.
[0167] As an optional implementation manner, the processor 701 may also call an application program stored in the memory 704, and perform the following operations:
[0168] Acquiring target data, the target data including at least one of the location information of the user terminal 700, the current system time of the user terminal 700, the current scene mode of the user terminal 700, and the volume value of the environment where the user terminal 700 is currently located;
[0169] Wherein, the processor 701 obtains the target sound effect parameter corresponding to the target identity information from the sound effect parameter list according to the target identity information, including:
[0170] According to the target identity information, a sound effect parameter corresponding to the target identity information and the target data is obtained from the sound effect parameter list as the target sound effect parameter.
[0171] As an optional implementation manner, when the target data includes the location information of the user terminal 700, a specific implementation manner in which the processor 701 obtains, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data as the target sound effect parameter may be:
[0172] According to the target identity information, the sound effect parameters corresponding to the target identity information and the location information of the user terminal 700 are obtained from the sound effect parameter list as the target sound effect parameters, wherein the sound effect parameters corresponding to the same identity information under different location information may be different.
[0173] As an optional implementation manner, when the target data includes the current system time of the user terminal 700, a specific implementation manner in which the processor 701 obtains, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data as the target sound effect parameter may be:
[0174] Determine the preset time range to which the current system time of the user terminal 700 belongs;
[0175] According to the target identity information, the sound effect parameters corresponding to the target identity information and the preset time range are obtained from the sound effect parameter list as the target sound effect parameters, wherein the sound effect parameters corresponding to the same identity information in different time ranges may be different.
[0176] As an optional implementation manner, when the target data includes the current scene mode of the user terminal 700, a specific implementation manner in which the processor 701 obtains, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data as the target sound effect parameter may be:
[0177] According to the target identity information, the sound effect parameters corresponding to the target identity information and the current scene mode of the user terminal 700 are obtained from the sound effect parameter list as the target sound effect parameters, wherein the sound effect parameters corresponding to the same identity information in different scene modes may be different.
[0178] As an optional implementation manner, when the target data includes the volume value of the environment in which the user terminal 700 is currently located, a specific implementation manner in which the processor 701 obtains, from the sound effect parameter list according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data as the target sound effect parameter may be:
[0179] Determine the preset volume range to which the volume value of the environment where the user terminal 700 is currently located belongs;
[0180] According to the target identity information, the sound effect parameters corresponding to the target identity information and the preset volume range are obtained from the sound effect parameter list as the target sound effect parameters, wherein the sound effect parameters corresponding to the same identity information in different volume ranges may be different.
[0181] As an optional implementation manner, the processor 701 may also call an application program stored in the memory 704, and perform the following operations:
[0182] When it is detected that the target application has no user login and the input device 702 in the user terminal 700 receives an audio output instruction, the output device 703 is triggered to load the default sound effect parameters for audio playback.
[0183] Specifically, the user terminal introduced in the embodiment of the present invention can implement part or all of the procedures in the embodiments of the sound effect adjustment method introduced in conjunction with Figure 1 or Figure 2 of the present invention.
[0184] In the embodiment of the present invention, by implementing the user terminal shown in Figure 7, the corresponding sound effect parameters can be loaded by recognizing the user's identity information, so that different sound effects can be automatically switched for different users, improving operation convenience and effectively improving user experience and auditory effects. In addition, at least one of the location information of the user terminal, the system time, the scene mode, and the ambient volume value can be considered in combination to further optimize the sound effect parameters, making them better suited to the user, more humanized, and better sounding.
[0185] Please refer to Figure 8, which is a schematic structural diagram of another user terminal disclosed in an embodiment of the present invention and which can be used to execute the sound effect adjustment method disclosed in the embodiments of the present invention. As shown in Figure 8, for ease of description, only the parts related to the embodiment of the present invention are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present invention. The user terminal can include various types of terminals such as mobile phones, tablet computers, handheld computers, PDAs, MIDs, multimedia players, smart wearable devices, and in-vehicle terminals. The following takes a mobile phone as an example:
[0186] Figure 8 is a schematic diagram of a partial structure of a mobile phone related to the user terminal disclosed in an embodiment of the present invention. Referring to Figure 8, the mobile phone includes components such as a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (WiFi) module 870, a processor 880, and a power supply 890. Those skilled in the art can understand that the structure of the mobile phone shown in Figure 8 does not constitute a limitation on the mobile phone; it may include more or fewer components than shown in the figure, a combination of some components, or a different component arrangement.
[0187] The components of the mobile phone are introduced in detail below with reference to Figure 8:
[0188] The RF circuit 810 can be used for receiving and sending signals during information transmission or a call. In particular, after receiving downlink information from a base station, it passes the information to the processor 880 for processing; in addition, it sends designed uplink data to the base station. Generally, the RF circuit 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 810 can also communicate with the network and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
[0189] The memory 820 may be used to store software programs and modules. The processor 880 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data (such as audio data, a phone book, etc.) created according to the use of the mobile phone. In addition, the memory 820 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
[0190] The input unit 830 may be used to receive input digital or character information and generate key signal input related to user settings and function control of the mobile phone. Specifically, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 831 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 831 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 880, and can receive and execute commands sent by the processor 880. In addition, the touch panel 831 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 831, the input unit 830 may also include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), a trackball, a mouse, and a joystick.
[0191] The display unit 840 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 840 may include a display panel 841, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc. Further, the touch panel 831 can cover the display panel 841. When the touch panel 831 detects a touch operation on or near it, it transmits the operation to the processor 880 to determine the type of the touch event, and the processor 880 then provides corresponding visual output on the display panel 841 according to the type of the touch event. Although in Figure 8 the touch panel 831 and the display panel 841 are shown as two independent components realizing the input and output functions of the mobile phone, in some embodiments the touch panel 831 and the display panel 841 can be integrated to realize the input and output functions of the mobile phone.
[0192] The mobile phone may also include at least one sensor 850, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the brightness of the display panel 841 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 841 and/or the backlight when the mobile phone is moved to the ear. As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that identify the mobile phone's posture (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or percussion recognition). As for other sensors that can be configured in the mobile phone, such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, they will not be repeated here.
[0193] The audio circuit 860, the speaker 861, and the microphone 862 can provide an audio interface between the user and the mobile phone. The audio circuit 860 can transmit the electrical signal converted from received audio data to the speaker 861, and the speaker 861 converts it into a sound signal for output; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data. After being processed by the processor 880, the audio data is sent, for example, to another mobile phone via the RF circuit 810, or output to the memory 820 for further processing.
[0194] WiFi is a short-distance wireless transmission technology. Through the WiFi module 870, the mobile phone can help users send and receive e-mails, browse web pages, access streaming media, etc., providing users with wireless broadband Internet access. Although Figure 8 shows the WiFi module 870, it is understandable that it is not a necessary component of the mobile phone and can be omitted as required without changing the essence of the invention.
[0195] The processor 880 is the control center of the mobile phone. It uses various interfaces and lines to connect the various parts of the entire mobile phone, and performs the various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby monitoring the mobile phone as a whole. Optionally, the processor 880 may include one or more processing units; preferably, the processor 880 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the modem processor may not be integrated into the processor 880.
[0196] The mobile phone also includes a power supply 890 (such as a battery) for supplying power to various components. Preferably, the power supply 890 may be logically connected to the processor 880 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
[0197] Although not shown in Figure 8, the mobile phone may also include a camera, a Bluetooth module, etc., which will not be repeated here.
[0198] In the embodiment of the present invention, the processor 880 included in the user terminal also has a function corresponding to the processor 701 in the foregoing embodiment, and details are not described herein again.
[0199] The modules or sub-modules in all the embodiments of the present invention can be implemented by a general integrated circuit, such as a CPU, or by an ASIC (Application Specific Integrated Circuit, application specific integrated circuit).
[0200] It should be noted that, for the sake of simple description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present invention is not limited by the described sequence of actions, because according to this application certain steps can be performed in another order or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
[0201] In the above-mentioned embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail in an embodiment, reference may be made to related descriptions of other embodiments.
[0202] The steps in the method of the embodiment of the present invention can be adjusted, merged, and deleted in order according to actual needs.
[0203] In the embodiment of the present invention, the units or subunits in the user terminal can be combined, divided, and deleted according to actual needs.
[0204] A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the procedures of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), etc.
[0205] The sound effect adjustment method and user terminal disclosed in the embodiments of the present invention are described in detail above. Specific examples are used herein to illustrate the principles and implementation of the present invention, and the description of the above embodiments is only intended to help understand the present invention. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and the scope of application according to the ideas of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.