Pronunciation detection method and device, computer equipment and storage medium
A pronunciation detection method and technology, applied in speech analysis, electrically operated teaching aids and instruments, etc., which can solve problems such as erroneous judgment results, the limited segmentation accuracy and generalization ability of classification models, and the failure to reflect individual pronunciation characteristics, achieving the effect of improving accuracy.
Pending Publication Date: 2021-01-05
北京乐学帮网络技术有限公司
AI-Extracted Technical Summary
Problems solved by technology
[0004] However, this judgment of right and wrong based on the pronunciation characteristics of a single utterance is limited by the segmentation accuracy and the generalization ability of the classification model, which can lead to erroneous judgment results.
Abstract
The invention provides a pronunciation detection method and device, computer equipment and a storage medium. The method comprises the steps of: for any target user, obtaining audio data of the target user; for each phoneme contained in the audio data, decoding the phoneme by using a pre-constructed network to obtain the time boundary corresponding to the phoneme; encoding each phoneme with a determined time boundary by using a phoneme coding model, and determining the first phoneme vector corresponding to each phoneme; for each phoneme, determining the distance between the first phoneme vector and the second phoneme vector corresponding to the phoneme, the second phoneme vector being the vector corresponding to the phoneme obtained during training of the phoneme coding model; and detecting the audio data according to the distance between the first phoneme vector and the second phoneme vector corresponding to each phoneme. According to the embodiments of the disclosure, personalized detection is carried out according to the pronunciation characteristics of each user, so that the accuracy of the pronunciation detection result is improved.
Application Domain
Speech analysis; Electrical appliances
Technology Topic
Computer equipment; Audiology; +3 more
Examples
- Experimental programs (3)
Example Embodiment
[0081]Example one
[0082]Aiming at the characteristics of each user's individual pronunciation and the need to feed back the user's pronunciation errors, the embodiments of the present disclosure provide a pronunciation detection method. See Figure 1b, which is a flowchart of a pronunciation detection method provided by an embodiment of the present disclosure. The method includes steps S101 to S105, wherein:
[0083]S101: For any target user, acquire audio data of the target user.
[0084]In this step, the device that receives the audio data may be the aforementioned terminal device 11, such as a computer, mobile phone or tablet computer with an evaluation client installed. In specific implementation, the client uses the microphone of the terminal device to collect audio data of text read aloud by the target user; the audio data includes phonemes. After obtaining the audio data, the client sends it to the server to detect whether the pronunciation is accurate.
[0085]Of course, in some implementations, the client itself can integrate the pronunciation detection method provided in the embodiments of the present disclosure: after acquiring the audio data of the target user, the client performs the detection and feeds back the detection result to the user. The present disclosure does not limit this; the following description takes the pronunciation detection method executed by the server as an example.
[0086]After receiving the audio data sent by the client, the server extracts the acoustic features of the audio data. Here, the acoustic features may be Mel Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP) features, etc. In specific implementation, the server first converts the audio data into the frequency domain using a fast Fourier transform (FFT), with each frame representing speech energy; then the audio data is converted into features matching the auditory characteristics of the human ear through a filter bank; finally, a discrete cosine transform (DCT) is applied to extract the acoustic features.
[0087]For example, a piece of audio data for the text "jiangnankecailian", i.e. "jiang nan ke cai lian", is obtained; the frequency spectrum of the audio data is shown in Figure 2a, and a schematic diagram of the extracted acoustic features is shown in Figure 2b. After extracting the corresponding acoustic features, the audio data can be divided into multiple frames, each frame corresponding to a phoneme state. Continuing the above example, a 25-millisecond interval can be set as the window length and 10 milliseconds as the window shift for framing, dividing "jiangnankecailian" into the phonemes "j", "iang", "n", "an", "k", "e", "c", "ai", "l", "ian"; each phoneme is further divided into several phoneme states. Taking "j" divided into 3 phoneme states as an example, these can be denoted j_s1, j_s2 and j_s3.
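By way of illustration, a minimal sketch of this feature-extraction pipeline, assuming Python with the librosa library (the file name and the 13-coefficient setting are illustrative, not specified by the patent), might look as follows; librosa's MFCC routine performs the FFT, mel filter bank and DCT stages described in [0086] internally:

    import librosa

    # Load the recording at 16 kHz (the path is illustrative).
    audio, sr = librosa.load("jiangnankecailian.wav", sr=16000)

    # 25 ms window length and 10 ms window shift, as in the example above.
    win_length = int(0.025 * sr)   # 400 samples
    hop_length = int(0.010 * sr)   # 160 samples

    # librosa computes FFT -> mel filter bank -> DCT internally,
    # matching the MFCC pipeline described in [0086].
    mfcc = librosa.feature.mfcc(
        y=audio, sr=sr, n_mfcc=13,
        n_fft=512, win_length=win_length, hop_length=hop_length,
    )
    print(mfcc.shape)  # (13, number_of_frames)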
[0088]S102: For each phoneme included in the audio data, use a pre-built network to decode the phoneme to obtain a time boundary corresponding to the phoneme.
[0089]In the specific implementation process, the audio data of the target user is acquired in step S101 and the acoustic features of the audio data are extracted. For each phoneme contained in the audio data, an audio recognition model is used, based on the extracted acoustic features, to determine the posterior probability corresponding to the phoneme; based on the posterior probability corresponding to the phoneme, a pre-built network is used to decode that posterior probability to obtain the time boundary corresponding to the phoneme.
[0090]Here, the audio recognition model is obtained by training on audio sample data with phoneme labels. The audio recognition model may be a neural network model, such as a convolutional neural network (CNN) or a Long Short-Term Memory network (LSTM). It should be noted that the audio sample data for training the audio recognition model can be audio sample data of any user.
[0091]In specific implementation, the acoustic features extracted in step S101 are input into the audio recognition model, and the posterior probability corresponding to each phoneme state is obtained, as shown in Table 1.
[0092]Table 1 (per-frame posterior probabilities of the phoneme states; as used in the example below, the values for j_s2 and iang_s3 are 0.35 and 0.21, respectively)
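The patent only constrains the audio recognition model to be a neural network such as a CNN or LSTM. A minimal PyTorch sketch of an LSTM that maps per-frame acoustic features to per-frame phoneme-state posteriors (all dimensions are illustrative assumptions, e.g. 13 MFCC features in and 30 states out for 10 phonemes with 3 states each) could be:

    import torch
    import torch.nn as nn

    class AcousticModel(nn.Module):
        """Per-frame phoneme-state posteriors from acoustic features."""
        def __init__(self, n_features=13, n_states=30, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
            self.out = nn.Linear(hidden, n_states)

        def forward(self, feats):                  # feats: (batch, frames, n_features)
            h, _ = self.lstm(feats)
            return torch.softmax(self.out(h), -1)  # (batch, frames, n_states)

    model = AcousticModel()
    posteriors = model(torch.randn(1, 200, 13))    # e.g. 200 frames of MFCCs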
[0095]The network mentioned in this step can be constructed using the text information corresponding to the audio data, and the network structure of the constructed network can be defined according to actual needs; Figure 2c is a schematic diagram of a network structure constructed based on text information. Based on the constructed network, the text path is obtained; according to this path, the posterior probabilities of the input phoneme states can be decoded by the Viterbi algorithm to obtain the time boundary corresponding to each phoneme.
[0096]Continuing the above example, the network constructed from "jiangnankecailian" can be used to obtain its path information "jiang-nan-ke-cai-lian", and the Viterbi algorithm decodes the input posterior probabilities (0.35 for j_s2, 0.21 for iang_s3, etc.) to obtain the time boundary corresponding to each phoneme "j", "iang", "n", "an", "k", "e", "c", "ai", "l", "ian", that is, the start time and end time of each phoneme. The time boundaries of the phonemes are illustrated in Figure 3.
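As a sketch of this decoding step, a standard forced-alignment Viterbi pass over the left-to-right state path given by the constructed network could look as follows (assuming Python with numpy; the function and variable names are illustrative, not the patent's):

    import numpy as np

    def viterbi_align(posteriors, state_path):
        """posteriors: (frames, n_states) per-frame posteriors from the acoustic model.
        state_path: state indices along the text path, e.g. the expanded sequence
        [j_s1, j_s2, j_s3, iang_s1, ...] for "jiang-nan-ke-cai-lian".
        Each frame either stays at the current path position or advances by one.
        Returns the position in the path occupied by each frame."""
        T, P = len(posteriors), len(state_path)
        logp = np.log(np.asarray(posteriors) + 1e-10)
        score = np.full((T, P), -np.inf)
        back = np.zeros((T, P), dtype=int)
        score[0, 0] = logp[0, state_path[0]]
        for t in range(1, T):
            for p in range(P):
                stay = score[t - 1, p]
                move = score[t - 1, p - 1] if p > 0 else -np.inf
                back[t, p] = p if stay >= move else p - 1
                score[t, p] = max(stay, move) + logp[t, state_path[p]]
        # Backtrack from the final path position.
        pos = P - 1
        positions = [pos]
        for t in range(T - 1, 0, -1):
            pos = back[t, pos]
            positions.append(pos)
        return positions[::-1]

Grouping consecutive frames that map to the states of the same phoneme then yields the start and end time, i.e. the time boundary, of each phoneme.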
[0097]S103: Use the phoneme coding model to separately encode each phoneme whose time boundary is determined, and determine the first phoneme vector corresponding to each phoneme.
[0098]According to the embodiment of the present disclosure, the phoneme coding model is obtained by training on audio sample data generated by the target user, and the output of the phoneme coding model is a one-hot vector, i.e. a 0-1 vector: an N-dimensional vector where N is the number of phonemes. In the coding result, the position corresponding to the recognized phoneme is 1 and all remaining positions are 0; for example, if the current phoneme coding result is j, the value at the position of j in the N-dimensional vector is 1 and the values corresponding to the remaining phonemes are all 0. In specific implementation, the phoneme coding model may be an LSTM model: the acoustic features of each phoneme whose time boundary has been determined are input into the LSTM model, and each phoneme is encoded to determine the first phoneme vector corresponding to it. The coding process is illustrated in Figure 4.
[0099]Continuing the above example, the LSTM model can be used to encode each phoneme "j", "iang", "n", "an", "k", "e", "c", "ai", "l", "ian" whose time boundary has been determined. If the current phoneme is j, covering frames 10-15, then the input to the LSTM model is the acoustic features of frames 10-15 and the output for each frame is the first phoneme vector (0,0,0,1,…,0), that is, the first phoneme vector corresponding to the phoneme; if the current phoneme is iang, covering frames 20-30, then the input is the acoustic features of frames 20-30 and the output for each frame is the first phoneme vector (0,1,…,0); the rest are not repeated here. The results are shown in Table 2; it should be understood that the 10 phonemes in Table 2 are only an example.
[0100]Table 2
[0101]
    phoneme | first phoneme vector
    j       | 0 0 0 1 0 0 0 0 0 0
    iang    | 0 1 0 0 0 0 0 0 0 0
    n       | 0 0 0 0 0 0 1 0 0 0
    an      | 1 0 0 0 0 0 0 0 0 0
    k       | 0 0 1 0 0 0 0 0 0 0
    e       | 0 0 0 0 1 0 0 0 0 0
    c       | 0 0 0 0 0 1 0 0 0 0
    ai      | 0 0 0 0 0 0 0 0 1 0
    l       | 0 0 0 0 0 0 0 1 0 0
    ian     | 0 0 0 0 0 0 0 0 0 1
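A minimal sketch of such a phoneme coding model (assuming PyTorch; the layer sizes, the tanh activation and the 10-phoneme inventory are illustrative assumptions) with an outermost softmax layer producing the N-dimensional one-hot-style output, and a second-to-last layer whose activations are used in step S104 below, could be:

    import torch
    import torch.nn as nn

    class PhonemeEncoder(nn.Module):
        """LSTM phoneme coding model: frames of acoustic features in,
        an N-dimensional phoneme posterior (one-hot-like) out per frame."""
        def __init__(self, n_features=13, n_phonemes=10, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.penultimate = nn.Linear(hidden, hidden)  # second-to-last layer
            self.out = nn.Linear(hidden, n_phonemes)      # outermost layer

        def forward(self, feats):               # feats: (batch, frames, n_features)
            h, _ = self.lstm(feats)
            emb = torch.tanh(self.penultimate(h))
            # Per-frame phoneme posteriors plus per-frame embeddings.
            return torch.softmax(self.out(emb), -1), emb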
[0102]S104: For each phoneme, determine the distance between the first phoneme vector and the second phoneme vector corresponding to the phoneme.
[0103]In this step, the second phoneme vector is the vector corresponding to the phoneme obtained during training of the phoneme coding model, while the first phoneme vector is the vector obtained by inputting the audio data collected during detection into the phoneme coding model. Figure 5 is a schematic diagram of the principle of obtaining the first phoneme vector using the phoneme coding model.
[0104]During specific implementation, the outermost output of the phoneme coding model is the one-hot vector corresponding to the phoneme. In the embodiment of the present disclosure, the output of the second-to-last layer of the phoneme coding model is taken as the first phoneme vector and the second phoneme vector corresponding to the phoneme. Since the input corresponding to a phoneme may span multiple frames of data, in order to ensure the accuracy of the output result, the second-to-last layer output for the last frame of the phoneme can be used as the first phoneme vector and second phoneme vector corresponding to the phoneme.
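Reusing the PhonemeEncoder sketch above, taking the second-to-last layer output of the phoneme's last frame as its phoneme vector might look like this (again an illustrative sketch, not the patent's code):

    import torch

    # encoder: a trained PhonemeEncoder from the sketch above.
    encoder = PhonemeEncoder()
    # feats: acoustic features of the frames inside the phoneme's time
    # boundary, e.g. frames 10-15 for "j" -> shape (1, 6, 13).
    feats = torch.randn(1, 6, 13)
    probs, emb = encoder(feats)
    first_phoneme_vector = emb[0, -1]  # second-to-last layer output, last frame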
[0105]Here, for a new user who has not yet generated audio data, audio sample data of standard pronunciation can be used in the embodiment of the present disclosure to train the corresponding second phoneme vectors; after audio data of the user is subsequently collected, the second phoneme vectors corresponding to the user can be obtained by training on that user's audio data.
[0106]Based on the obtained first phoneme vector and second phoneme vector, the distance between them can be calculated by the cosine similarity formula. The distance value lies in the range [-1, 1]: a value of 1 means the two vectors are exactly the same; a value of 0 means the two vectors are orthogonal; values in between indicate the degree of similarity between the two vectors. For two vectors A and B, the cosine similarity formula is:
[0107] $\text{similarity}(A,B)=\cos\theta=\dfrac{A\cdot B}{\|A\|\,\|B\|}=\dfrac{\sum_{i=1}^{n}A_iB_i}{\sqrt{\sum_{i=1}^{n}A_i^2}\,\sqrt{\sum_{i=1}^{n}B_i^2}}$
[0108]Continuing the previous example, using the second-to-last layer output of the phoneme coding model for the last frame within the time boundary, the cosine similarity formula is used to calculate the distance between the first phoneme vector (0.2, 0.1, 0.03, 0.7, …, 0.05) of "j" and its second phoneme vector (0.1, 0.15, 0.07, 0.6, …, 0.04).
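A sketch of this distance computation, implementing the cosine similarity formula of [0106]-[0107] (the 4-value vectors truncate the example's elided vectors and are for illustration only):

    import numpy as np

    def cosine_similarity(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Truncated illustrative vectors for the phoneme "j".
    first  = [0.2, 0.1, 0.03, 0.7]
    second = [0.1, 0.15, 0.07, 0.6]
    print(cosine_similarity(first, second))  # close to 1 -> likely correct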
[0109]S105: Detect the audio data according to the distance between the first phoneme vector and the second phoneme vector corresponding to each phoneme.
[0110]In this step, the audio data is detected: for each phoneme, according to the distance between the first phoneme vector and the second phoneme vector corresponding to the phoneme, the detection result can be determined as follows.
[0111]In one embodiment, when the distance between the first phoneme vector and the second phoneme vector is less than or equal to the first preset threshold, it is determined that the phoneme in the audio data is pronounced correctly; if all phonemes in the audio data are pronounced correctly, it is determined that the audio data is pronounced correctly.
[0112]According to the embodiment of the present disclosure, the first preset threshold may be determined according to the following method, including the following steps:
[0113]Step 1. For each phoneme, obtain the first sample set of the correct pronunciation of the phoneme.
[0114]Step 2. Use the phoneme coding model to separately encode each sample in the first sample set to obtain a fourth phoneme vector set corresponding to the phoneme;
[0115]Step 3: Determine the distance between each fourth phoneme vector included in the fourth phoneme vector set and the second phoneme vector, and arrange the distances in ascending order;
[0116]Step 4. Determine the distance corresponding to the first preset ratio as the first preset threshold.
[0117]For example, for each phoneme, the audio data in which the phoneme is pronounced correctly is collected to form a first sample set; generally, the first sample set contains each phoneme. The LSTM model is used to encode each sample in the first sample set separately, and the second-to-last layer output for the last frame of each phoneme is taken to obtain the fourth phoneme vector set corresponding to the phoneme (for the specific process, refer to step S103, which will not be repeated here). The distance D1 between each fourth phoneme vector in the set and the second phoneme vector is calculated by the cosine similarity formula, and the D1 values are arranged in ascending order. The first preset ratio can be set according to actual needs, which the embodiment of the present disclosure does not limit; for example, the value corresponding to 50% is used as the first preset threshold.
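A sketch of steps 1 to 4 (assuming Python with numpy; the helper name, the stand-in D1 distances and the 50% ratio are illustrative):

    import numpy as np

    def ratio_threshold(distances, ratio, ascending=True):
        """Sort the distances and return the value located at the given
        ratio of the sorted list, e.g. the 50% point of the ascending
        correct-pronunciation distances D1."""
        d = np.sort(np.asarray(distances, dtype=float))
        if not ascending:
            d = d[::-1]
        idx = min(int(ratio * len(d)), len(d) - 1)
        return float(d[idx])

    # Stand-in for the D1 distances of one phoneme's correct-pronunciation set.
    D1 = np.random.uniform(0.3, 1.0, size=200)
    first_threshold = ratio_threshold(D1, 0.5, ascending=True)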
[0118]Further, for phonemes with accurate pronunciation, the corresponding first phoneme vector can be used to update the corresponding second phoneme vector.
[0119]In another embodiment, when the distance between the first phoneme vector and the second phoneme vector is greater than the second preset threshold, it is determined that the phoneme in the audio data is pronounced incorrectly; if at least one phoneme in the audio data is pronounced incorrectly, it is determined that the audio data is pronounced incorrectly.
[0120]In specific implementation, in order to determine whether the target user's pronunciation error for a phoneme is an accidental error or a systematic pronunciation error (that is, whether the user has failed to grasp the accurate pronunciation of the phoneme), in the embodiments of the present disclosure the number of pronunciation errors of the phoneme is counted; if the number of pronunciation errors of the phoneme reaches the third preset threshold, it is determined whether the second phoneme vector corresponding to the phoneme has been updated; if the second phoneme vector corresponding to the phoneme has not been updated, it is determined that the phoneme has a systematic pronunciation error.
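The accidental-versus-systematic decision described above reduces to a counter check; a sketch with illustrative names:

    def is_systematic_error(error_count, third_threshold, vector_updated):
        """The error is systematic when the phoneme has been mispronounced at
        least `third_threshold` times and its second phoneme vector was never
        updated by a correctly pronounced reading."""
        return error_count >= third_threshold and not vector_updated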
[0121]According to the embodiment of the present disclosure, the second preset threshold may be determined according to the following method, including the following steps:
[0122]Step 1. For each phoneme, obtain a second sample set of the wrong pronunciation of the phoneme;
[0123]Step 2. Use the phoneme coding model to separately encode each sample in the second sample set to obtain the fifth phoneme vector set corresponding to the phoneme;
[0124]Step 3: Determine the distance between each fifth phoneme vector included in the fifth phoneme vector set and the second phoneme vector, and arrange the distances in descending order;
[0125]Step 4. Determine that the distance corresponding to the second preset ratio is the second preset threshold.
[0126]For example, for each phoneme, the audio data in which the phoneme is pronounced incorrectly is collected to form a second sample set; usually, the second sample set contains each phoneme. Each sample in the second sample set is encoded separately using the LSTM model to obtain the fifth phoneme vector set corresponding to the phoneme (for the specific process, refer to step S103, which will not be repeated here). The distance D2 between each fifth phoneme vector contained in the fifth phoneme vector set and the second phoneme vector can be calculated by the cosine similarity formula, and the D2 values are arranged in descending order. The second preset ratio can be set according to actual needs, which the embodiment of the present disclosure does not limit; for example, the value corresponding to 90% is taken as the second preset threshold.
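Reusing the ratio_threshold helper sketched under the first-threshold procedure, the second preset threshold would be the value at the 90% point of the descending mispronunciation distances (D2 is an illustrative stand-in):

    import numpy as np

    # Stand-in for the D2 distances of one phoneme's mispronunciation set.
    D2 = np.random.uniform(-1.0, 0.6, size=200)
    # Descending order, value at the 90% point -> second preset threshold.
    second_threshold = ratio_threshold(D2, 0.9, ascending=False)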
[0127]Furthermore, the embodiments of the present disclosure also provide users with an error correction function for the phonemes that are pronounced incorrectly.
[0128]During specific implementation, for a phoneme that is mispronounced, the distance between the first phoneme vector corresponding to the phoneme and the second phoneme vector corresponding to each third phoneme is determined, and the third phoneme with the smallest distance is used as the prompt phoneme, where a third phoneme is any phoneme in the preset phonemes other than the phoneme itself.
[0129]Take the text "wo" read incorrectly as "wu" as an example. For the mispronounced phoneme, its first phoneme vector (0.02, 0.1, 0.7, …, 0.05) is determined, and the phoneme whose second phoneme vector has the smallest distance to this first phoneme vector is found among all phonemes; that phoneme can be considered the phoneme that was actually pronounced. Here, "o" corresponds to the second phoneme vector (0.2, 0.5, 0.03, …, 0.02) and "u" corresponds to the second phoneme vector (0.02, 0.1, 0.7, …, 0.05); it can be determined that the distance to the second phoneme vector of "o" is very large while the distance to the second phoneme vector of "u" is very small, so an error correction prompt can be generated, namely that "o" was mispronounced as "u".
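A sketch of this prompt-phoneme search (assuming Python with numpy; the two-phoneme table truncates the example's elided vectors, and treating distance as 1 minus cosine similarity is an assumption of this sketch):

    import numpy as np

    def cosine_similarity(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Truncated, illustrative second phoneme vectors of the preset phonemes.
    second_vectors = {
        "o": [0.2, 0.5, 0.03, 0.02],
        "u": [0.02, 0.1, 0.7, 0.05],
    }

    def prompt_phoneme(first_vector, expected, table):
        """Among the preset phonemes other than the expected one, return the
        one with the smallest distance (here 1 - cosine similarity) to the
        mispronounced phoneme's first phoneme vector."""
        candidates = {p: v for p, v in table.items() if p != expected}
        return min(candidates,
                   key=lambda p: 1.0 - cosine_similarity(first_vector, candidates[p]))

    # The "o" in "wo" read as "u" yields this first phoneme vector -> prompts "u".
    print(prompt_phoneme([0.02, 0.1, 0.7, 0.05], "o", second_vectors))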
[0130]The embodiments of the present disclosure extract the acoustic features of the audio data and input them into the audio recognition model to obtain the posterior probabilities of the phonemes; after decoding the posterior probability corresponding to each phoneme, the time boundary corresponding to each phoneme is obtained. The phoneme coding model encodes the phonemes whose time boundaries have been determined to determine the first phoneme vectors, and for each phoneme the distance between the first phoneme vector and the second phoneme vector output by the phoneme coding model during training is determined; the audio data is detected based on the determined distances. This enables the phoneme coding model to detect the user's pronunciation errors and feed them back to the user. At the same time, by updating the phoneme coding model and the audio data, the individual pronunciation characteristics of each user are fully utilized, making the detection results more targeted and improving their accuracy.
[0131]Those skilled in the art can understand that, in the above method of the specific implementation, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
[0132]Based on the same inventive concept, the embodiment of the present disclosure also provides a pronunciation detection device corresponding to the pronunciation detection method. Since the principle by which the device in the embodiment of the present disclosure solves the problem is similar to that of the above pronunciation detection method, the implementation of the device may refer to the implementation of the method, and repeated description is omitted.
Example Embodiment
[0133]Example two
[0134]Referring to Figure 6, which is a schematic diagram of a pronunciation detection device provided by an embodiment of the present disclosure, the device includes: an extraction unit 601, a decoding unit 602, a first determination unit 603, a second determination unit 604, and a detection unit 605; wherein,
[0135]The extraction unit 601 is configured to obtain audio data of any target user, where the audio data includes phonemes;
[0136]The decoding unit 602 is configured to decode each phoneme contained in the audio data by using a pre-built network to obtain the time boundary corresponding to the phoneme, where the network is constructed using the text information corresponding to the audio data;
[0137]The first determining unit 603 is configured to use a phoneme coding model to encode the phonemes whose time boundaries have been determined, so as to determine the first phoneme vector corresponding to each phoneme, wherein the phoneme coding model is obtained by training on audio sample data generated by the target user;
[0138]The second determining unit 604 is configured to determine, for each phoneme, the distance between the first phoneme vector and the second phoneme vector corresponding to the phoneme, where the second phoneme vector is the vector corresponding to the phoneme obtained during training of the phoneme coding model;
[0139]The detecting unit 605 is configured to detect the audio data according to the distance between the first phoneme vector and the second phoneme vector corresponding to each phoneme.
[0140]In an optional implementation manner, the detection unit 605 is specifically configured to, for each phoneme, according to the distance between the first phoneme vector and the second phoneme vector corresponding to the phoneme: when the distance is less than or equal to the first preset threshold, determine that the phoneme in the audio data is pronounced correctly, and if all phonemes in the audio data are pronounced correctly, determine that the audio data is pronounced correctly; when the distance is greater than the second preset threshold, determine that the phoneme in the audio data is pronounced incorrectly, and if at least one phoneme in the audio data is pronounced incorrectly, determine that the audio data is pronounced incorrectly.
[0141]In an optional implementation manner, an update unit is further included, wherein:
[0142]The updating unit is configured to, after it is determined that the phoneme in the audio data is pronounced correctly when the distance is less than or equal to the first preset threshold, update the second phoneme vector corresponding to the accurately pronounced phoneme with the corresponding first phoneme vector.
[0143]In an optional implementation manner, the detection unit 605 is further configured to, for a phoneme that is pronounced incorrectly, count the number of incorrect pronunciations of the phoneme; if the number of incorrect pronunciations of the phoneme reaches the third preset threshold, determine whether the second phoneme vector corresponding to the phoneme has been updated; and if the second phoneme vector corresponding to the phoneme has not been updated, determine that the phoneme has a systematic pronunciation error.
[0144]In an optional implementation manner, a third determining unit is further included, wherein:
[0145]The third determining unit is configured to, for a phoneme with a systematic pronunciation error, determine the distance between the first phoneme vector corresponding to the phoneme and the second phoneme vector corresponding to each third phoneme, and use the third phoneme with the smallest distance as the prompt phoneme, wherein a third phoneme is a phoneme in the preset phonemes other than the phoneme itself.
[0146]In an optional implementation manner, it further includes a fourth determining unit, configured to: obtain, for each phoneme, a first sample set of the phoneme's correct pronunciation; use the phoneme coding model to encode each sample in the first sample set separately to obtain the fourth phoneme vector set corresponding to the phoneme; determine the distance between each fourth phoneme vector contained in the fourth phoneme vector set and the second phoneme vector, and arrange the distances in ascending order; and determine the distance corresponding to the first preset ratio as the first preset threshold.
[0147]In an optional implementation manner, it further includes a fifth determining unit, configured to: obtain, for each phoneme, a second sample set of the phoneme's mispronunciation; use the phoneme coding model to encode each sample in the second sample set separately to obtain the fifth phoneme vector set corresponding to the phoneme; determine the distance between each fifth phoneme vector included in the fifth phoneme vector set and the second phoneme vector, and arrange the distances in descending order; and determine the distance corresponding to the second preset ratio as the second preset threshold.
[0148]In an optional implementation manner, the decoding unit 602 is specifically configured to: extract the acoustic features of the audio data; for each phoneme contained in the audio data, based on the extracted acoustic features, use the audio recognition model to determine the posterior probability corresponding to the phoneme, the audio recognition model being obtained by training on audio sample data with phoneme labels; and, based on the posterior probability corresponding to the phoneme, decode the phoneme using a pre-built network to obtain the time boundary corresponding to the phoneme.
[0149]For the description of the processing flow of each module in the device and the interaction flow between the modules, reference may be made to the relevant description in the above method embodiment, which will not be detailed here.
Example Embodiment
[0150]Example three
[0151]Based on the same technical concept, an embodiment of the present application also provides a computer device. Referring to Figure 7, a schematic structural diagram of a computer device provided by an embodiment of this application, the device includes a processor 701, a memory 702 and a bus 703. The memory 702 is used to store execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 is used to temporarily store computation data in the processor 701 and data exchanged with the external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 through the internal memory 7021. When the computer device is running, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the execution instructions mentioned in the above method embodiment.
[0152]The embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes the steps of the pronunciation detection method described in the above method embodiment. The storage medium may be a volatile or non-volatile computer-readable storage medium.
[0153]The computer program product of the pronunciation detection method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the steps of the pronunciation detection method described in the above method embodiment; for details, refer to the above method embodiment, which will not be repeated here.
[0154]The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any method of the foregoing embodiments. The computer program product can be implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
[0155]Those skilled in the art can clearly understand that, for convenience and conciseness of description, the specific working processes of the system and device described above may refer to the corresponding processes in the foregoing method embodiment, and will not be repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through certain communication interfaces, or through indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
[0156]The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
[0157]In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
[0158]If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present disclosure, in essence, the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure. The aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks and other media that can store program code.