[0060]To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
[0061]The present invention addresses the difficulty of capturing the syntactic dependencies within sentences and of expressing the complex syntactic structures in text, which leads to low accuracy in text sentiment classification, by providing a text sentiment analysis method based on a graph attention network.
[0062]As shown in Figures 1 to 3, an embodiment of the present invention provides a text sentiment analysis method based on a graph attention network, including: Step 1, acquiring a text set and a sentiment label set from the SemEval 2014 Task 4 dataset; Step 2, randomly selecting from the text set and the sentiment label set to obtain a training set and a test set; Step 3, analyzing the sentences in the training set through the BiAffine dependency parser to obtain the syntactic dependencies of each sentence and constructing a syntax dependency graph; Step 4, inputting the training set into the BERT pre-training model to convert the words in the training set into word vectors; Step 5, constructing an adjacency matrix according to the syntax dependency graph; Step 6, building a graph attention network model according to the adjacency matrix; Step 7, embedding the word vectors into the corresponding nodes of the graph attention network and using the word vectors as the initialization states of the nodes; Step 8, updating the graph attention network model, aggregating the neighborhood vectors of each node according to the attention weights to obtain the updated vector sequences of the nodes, and using these vector sequences as the temporary states of the corresponding nodes; Step 9, inputting the initialization states of the graph attention network model nodes into a GRU model and saving them, obtaining the saved states of the nodes; Step 10, aggregating the temporary states of the nodes with the saved states of the nodes to obtain the final states of the graph attention network model nodes; Step 11, activating the final states of the nodes through the Softmax function to obtain the text sentiment tendency; Step 12, performing multi-layer training of the graph attention network model and constructing a loss function; Step 13, adjusting the attention weights according to the loss function, and when the loss function value is less than the recorded minimum of the loss function, updating the minimum and recording the corresponding graph attention network model parameters to obtain the optimal graph attention network model; Step 14, performing sentiment analysis on text through the optimal graph attention network model.
[0063]Step 3 specifically includes: performing syntactic analysis on the sentences in the training set through the BiAffine dependency parser, converting the linear word sequence of each sentence into a graph structure of its syntactic dependencies to obtain the syntax dependency graph, in which words with modifying relationships are connected by syntactic dependency arcs.
[0064]According to the above embodiment of the present invention, when the text data requires sentiment analysis with respect to multiple different aspects, the text sentiment analysis method based on a graph attention network uses the BiAffine dependency parser to convert the linear word sequence of a sentence into the graph structure of its syntactic dependencies and connects each aspect word with its syntactically related words, reducing the interference of irrelevant information and the mutual influence between different aspects, so that the sentence is described more precisely.
[0065]Step 4 specifically includes: inputting the training set into the BERT pre-training model, whose Transformer architecture converts the words in the training set into 300-dimensional word vectors.
[0066]According to the above embodiment of the present invention, the text sentiment analysis method based on a graph attention network converts words into word vectors through the BERT pre-training model, so that the semantic information of the words is fully reflected.
[0067]Step 5 specifically includes: constructing the adjacency matrix according to the asymmetric binary relationships between the nodes in the syntax dependency graph; when an element of the adjacency matrix is 1, a directed arc connects the two nodes at the corresponding coordinates, and when an element of the adjacency matrix is 0, no directed arc connects the two nodes at the corresponding coordinates.
[0068]Step 6 specifically includes: taking the nodes of the adjacency matrix as the nodes of the graph and the arcs as the edges, and building the graph attention network model accordingly.
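Steps 3, 5, and 6 can be sketched as follows. This is an illustrative example only, not the patented implementation: the dependency heads are hypothetical parse output of the kind a BiAffine parser might produce, and the sentence and indices are assumptions.

```python
# Build the adjacency matrix of a syntax dependency graph from
# hypothetical dependency-parse output (one head index per word).
import numpy as np

def build_adjacency(heads):
    """heads[i] is the index of word i's syntactic head (-1 for root).
    Returns an n x n 0/1 matrix; 1 means a directed arc head -> dependent
    exists between the nodes at those coordinates (paragraph [0067])."""
    n = len(heads)
    adj = np.zeros((n, n), dtype=int)
    for dep, head in enumerate(heads):
        if head >= 0:
            adj[head, dep] = 1
    return adj

# "The food was great": hypothetical parse with "was" (index 2) as root.
heads = [1, 2, -1, 2]      # the->food, food->was, was = root, great->was
adj = build_adjacency(heads)
```

The nodes of this matrix then become the nodes of the graph attention network, and each arc becomes an edge.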
[0069]According to the above embodiment of the present invention, in the text sentiment analysis method based on a graph attention network, the nodes and arcs in the syntax dependency graph are mapped respectively to the nodes and edges of the graph attention network model.
[0070]Step 7 specifically includes: embedding the 300-dimensional word vectors into the corresponding nodes of the graph attention network as the initialization states of the nodes in the graph attention network model.
[0071]Step 8 specifically includes: updating the graph attention network layer, as follows:
[0072]α_ij = exp(LeakyReLU(a^T [W h_i ‖ W h_j])) / Σ_{k∈N_i} exp(LeakyReLU(a^T [W h_i ‖ W h_k]))    (1)
[0073]where α_ij denotes the attention coefficient of node j with respect to node i, N denotes the number of nodes, W denotes the linear transformation weight matrix applied to each node, h_i denotes the entity vector of node i, h_j denotes the entity vector of node j, h_k denotes the entity vector of node k, and N_i denotes the neighbor nodes of node i;
[0074]The contextual information of each node is captured through the multi-head attention mechanism: the representation of each node is obtained by aggregating the representations of its surrounding nodes in weighted-sum form, and the results computed under the K independent attention mechanisms are averaged, as shown below:
[0075]h_i' = σ( (1/K) Σ_{k=1}^{K} Σ_{j∈N_i} α_ij^k W^k h_j )    (2)
[0076]where h_i' denotes the updated value of node i, k denotes the k-th of the multiple attention mechanisms, ‖ denotes the concatenation of the features from the multiple attention heads, σ denotes the activation function, α_ij^k denotes the attention of node i to node j under the k-th head, and W^k denotes the linear transformation weight matrix of the input node.
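The attention update of equations (1) and (2) can be sketched in numpy as follows. This is a minimal single-head sketch under assumed shapes and random values, not the patented implementation; K heads would repeat this computation and average the results, and tanh stands in for the unspecified activation σ.

```python
# One graph-attention update: per-node softmax attention over neighbours
# (equation (1)) followed by weighted aggregation (equation (2)).
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, adj, W, a):
    """H: (n, d) node states; adj: (n, n) 0/1; W: (d, d'); a: (2*d',)."""
    n = H.shape[0]
    Z = H @ W                                    # W h_i for every node
    H_new = np.zeros_like(Z)
    for i in range(n):
        nbrs = np.where(adj[i] > 0)[0]           # neighbourhood N_i
        if nbrs.size == 0:
            H_new[i] = Z[i]
            continue
        # e_ij = LeakyReLU(a^T [W h_i || W h_j])  -- equation (1)
        e = np.array([leaky_relu(a @ np.concatenate([Z[i], Z[j]]))
                      for j in nbrs])
        alpha = np.exp(e) / np.exp(e).sum()      # softmax over N_i
        H_new[i] = np.tanh(alpha @ Z[nbrs])      # weighted sum, eq. (2)
    return H_new

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
adj = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
out = gat_layer(H, adj, rng.normal(size=(8, 8)), rng.normal(size=(16,)))
```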
[0077]According to the above embodiment of the present invention, the text sentiment analysis method based on a graph attention network introduces the multi-head attention mechanism to capture contextual information and stabilize the learning process.
[0078]Step 9 specifically includes: modeling the current state of the nodes with the GRU model; the node state h_{t-1} at time t-1 is combined with the input x_t at time t to update the reset gate and the update gate, as shown below:
[0079]r_t = σ(W_r [h_{t-1}, x_t])    (3)
[0080]z_t = σ(W_z [h_{t-1}, x_t])    (4)
[0081]h̃_t = tanh(W [r_t * h_{t-1}, x_t])    (5)
[0082]where σ denotes the sigmoid function, which maps values into the range 0-1 to act as a gating signal; r_t denotes the reset gate at time t; z_t denotes the update gate at time t; h_{t-1} denotes the node state at time t-1; h̃_t denotes the candidate set at time t; * denotes the element-wise product; [ , ] denotes the concatenation of two vectors; and tanh denotes the hyperbolic tangent function.
[0083]Step 9 further includes: forgetting and selective memorization are performed simultaneously through the same gate z. The gating signal z ranges over 0-1; the closer the gating signal is to 1, the more important the corresponding data, as follows:
[0084]h_t = (1 − z) * h_{t-1} + z * h̃_t    (6)
[0085]where (1 − z) denotes the forget gate, (1 − z) * h_{t-1} discards the unimportant information in the node state at time t-1, and z * h̃_t selects the important information from the candidate set at time t.
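The GRU update of equations (3) to (6) can be sketched as follows. This is an illustrative numpy sketch with assumed weight shapes and random values, not the patented implementation; all gate operations are element-wise.

```python
# One GRU step: reset gate (3), update gate (4), candidate state (5),
# and the gated combination of old state and candidate (6).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x, Wr, Wz, W):
    concat = np.concatenate([h_prev, x])
    r = sigmoid(Wr @ concat)                               # eq. (3)
    z = sigmoid(Wz @ concat)                               # eq. (4)
    h_cand = np.tanh(W @ np.concatenate([r * h_prev, x]))  # eq. (5)
    # (1 - z) forgets part of h_prev; z selects from the candidate.
    return (1 - z) * h_prev + z * h_cand                   # eq. (6)

rng = np.random.default_rng(1)
d = 6
h = gru_step(rng.normal(size=d), rng.normal(size=d),
             rng.normal(size=(d, 2 * d)), rng.normal(size=(d, 2 * d)),
             rng.normal(size=(d, 2 * d)))
```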
[0086]The feedforward process of the entire graph attention network model is expressed as:
[0087]H^{l+1} = GRU(GAT(H^l), H^l)    (7)
[0088]where H^{l+1} denotes the node states of layer l+1 and H^l denotes the node states of layer l.
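The feedforward recursion of equation (7) can be sketched as follows. The GAT and GRU blocks here are toy stand-ins (a neighbour mean and a fixed-gate interpolation) chosen only to show how the layer output (temporary state) and the layer input (saved state) are combined at each layer; the layer count, graph, and initial states are assumptions.

```python
# Feedforward loop H^{l+1} = GRU(GAT(H^l), H^l), equation (7).
import numpy as np

def gat(H, adj):                      # stand-in: mean over neighbours
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    return np.tanh(adj @ H / deg)

def gru(x, h, z=0.5):                 # stand-in: fixed-gate interpolation
    return (1 - z) * h + z * x        # mirrors equation (6)

def feedforward(H0, adj, num_layers=2):
    H = H0
    for _ in range(num_layers):
        H = gru(gat(H, adj), H)       # equation (7)
    return H

H0 = np.eye(3)                        # toy 3-node initial states
adj = np.array([[0,1,0],[1,0,1],[0,1,0]])
H_final = feedforward(H0, adj)
```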
[0089]According to the above embodiment of the present invention, the text sentiment analysis method based on a graph attention network uses the GRU to model the current state of the nodes: the initialization states of the nodes are input into the GRU model and saved, and the temporary states of the nodes are aggregated with the saved states to obtain the final node states, which improves the convergence of the graph attention network.
[0090]Steps 11 to 14 specifically include: during model training, the L2 regularization method is used and the cross-entropy loss function is minimized to train the graph attention network model; the state of the target node is mapped into the classification space by a linear transformation, and the probability that the target node belongs to sentiment class k is computed through the Softmax function, as shown below:
[0091]p(k | h_t) = exp(W_k h_t + b_k) / Σ_{k'∈Y} exp(W_{k'} h_t + b_{k'})    (8)
[0092]where W denotes the weight matrix of the linear transformation, h_t denotes the target node state, b denotes the bias of the linear transformation, and Y denotes the set of sentiment categories.
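The classification head of equation (8) and the regularized training objective of paragraph [0090] can be sketched as follows. The state dimension, the number of sentiment classes, and the regularization coefficient are illustrative assumptions.

```python
# Linear map of the target-node state into the class space, Softmax
# probabilities (equation (8)), and a cross-entropy loss with L2 penalty.
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # shift for numerical stability
    return e / e.sum()

def class_probs(h_t, W, b):
    return softmax(W @ h_t + b)        # equation (8)

def loss(h_t, W, b, true_class, lam=1e-4):
    p = class_probs(h_t, W, b)
    l2 = lam * (np.sum(W ** 2) + np.sum(b ** 2))   # L2 regularization
    return -np.log(p[true_class]) + l2             # cross entropy + L2

rng = np.random.default_rng(2)
h_t = rng.normal(size=8)
W, b = rng.normal(size=(3, 8)), rng.normal(size=3)  # 3 sentiment classes
p = class_probs(h_t, W, b)
```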
[0093]According to the above embodiment of the present invention, the text sentiment analysis method based on a graph attention network analyzes the syntactic dependencies within sentences through the BiAffine dependency parser and constructs the syntax dependency graph; builds the adjacency matrix from the graph and the graph attention network model from the adjacency matrix; converts words into word vectors through the BERT pre-training model and embeds the word vectors into the graph attention network model as the initialization states of its nodes; updates the graph attention network model by aggregating the neighborhood vectors of each node according to the attention weights to obtain new vector sequences, which serve as the temporary states of the nodes; inputs the initialization states of the nodes into the GRU model and saves them to obtain the saved states of the nodes; aggregates the temporary states with the saved states to obtain the final states of the nodes; activates the final states through the Softmax function to obtain the text sentiment tendency; performs multi-layer training of the graph attention network model, constructs the loss function, adjusts the attention weights, and obtains the optimal graph attention network model; and performs sentiment analysis through the optimal graph attention network model. By analyzing syntactic dependencies with the parser, representing words as vectors with the BERT pre-training model, and performing sentiment analysis with the graph attention network model, the method fully expresses the complex syntactic structure in the text and improves the accuracy of text sentiment analysis.
[0094]The above is a preferred embodiment of the present invention. It should be noted that several improvements and refinements can be made without departing from the principles of the invention, and these improvements and refinements should also be considered to fall within the scope of the present invention.