[0104] Embodiment 2
[0105] Refer to Figure 2, which shows a flow chart of the steps of another method for determining a recommended object provided by an embodiment of the present disclosure. As shown in Figure 2, the method for determining the recommended object may specifically include the following steps:
[0106] Step 201: Acquire current target session information of the target user.
[0107] The embodiments of the present disclosure can be applied to a scenario where an object that the user may click next is determined, and the object is recommended to the user.
[0108] The target session information refers to the session information formed by the target user clicking on an object (such as a business, an item, etc.) within a current period of time.
[0109] When the next click behavior of the target user needs to be predicted, the current target session information of the target user can be obtained, and then step 202 is performed.
[0110] Step 202: Determine at least one session information associated with the target session information according to the historical behavior sequence of the target user within a preset historical period and all objects in the target session information.
[0111] The at least one session information refers to session information of a historical period associated with the target session information.
[0112] Understandably, the at least one session information may be session information formed by the target user (that is, the user to whom the target session information belongs) clicking on objects, or may be session information formed by other users clicking on objects, or it may include both session information of the target user and session information of other users, and so on.
[0113] Of course, in this embodiment, the association between the at least one session information and the target session information is embodied as follows: the at least one session information includes one or more of the objects in the target session information. For example, if the target session information includes Object 1, Object 2, and Object 3, then the at least one session information should include at least one of Object 1, Object 2, and Object 3.
[0114] It can be understood that the above examples are only examples listed for better understanding of the technical solutions of the embodiments of the present application, and are not intended to be the only limitations on the embodiments of the present application.
[0115] After acquiring the current target session information of the target user, at least one session information associated with the target session information can be determined according to the historical behavior sequence of the target user within the preset historical period and all objects included in the target session information.
[0116] After determining at least one session information associated with the target session information, step 203 is performed.
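As a minimal illustrative sketch (not part of the disclosed embodiment), the association criterion above, namely that an associated session contains at least one object of the target session, could be implemented as a simple filter over the historical sessions; the session and object identifiers used here are hypothetical.

def find_associated_sessions(target_session, historical_sessions):
    """Return the sessions that contain one or more objects of the target session."""
    target_objects = set(target_session)
    return [s for s in historical_sessions if target_objects & set(s)]

# Hypothetical example: the target session contains Objects 1, 2 and 3.
target_session = [1, 2, 3]
historical_sessions = [[1, 4, 5], [6, 7], [2, 3, 8]]
print(find_associated_sessions(target_session, historical_sessions))
# -> [[1, 4, 5], [2, 3, 8]]  (the session [6, 7] shares no object and is excluded)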
[0117] Step 203: Perform a weighted summation of the representation vectors of all the objects to obtain the global information of the target session information.
[0118] Step 204: Input the representation vectors of all the objects into a temporal convolutional neural network to obtain representation vectors of all the objects including sequence information.
[0119] The representation vector refers to the vector representation of the object in the target session information.
[0120] Global information refers to the representation information of the entire object sequence in the target session information.
[0121] After acquiring the target session information and the at least one session information associated with the target session information, a cross-session global item directed graph (Item-Graph) can be constructed for the objects (items) in the user's historical behavior sequence, in which each node represents one item. As shown in Figure 2a, session s1 contains objects v1, v2, v3, v4; session s2 contains objects v6, v2, v3; and session s3 contains objects v4, v3, v1, v5. Each pair (v_i, v_i+1) is taken as an edge, representing that v_i+1 is clicked by the user after v_i within a session. Compared with existing methods, the cross-session item graph can capture both intra-session and extra-session information, because the Item-Graph can build graph links between items appearing in different sessions. Each item is mapped to a d-dimensional embedding v∈R^d, and a graph convolutional neural network model is used to learn the relationships between items, obtaining an item vector (item_vector) containing cross-session information, as shown in the following formula (3):
[0122] V^(l+1) = σ(D^(-1/2) A D^(-1/2) V^l W^l)    (3)
[0123] In the above formula (3), σ is the activation function, D is the degree matrix of the Item-Graph, A is the adjacency matrix of the Item-Graph, W^l is the feature transformation matrix, l denotes the l-th layer of the graph convolutional neural network, and V^(l+1) is the representation of the items after the (l+1)-th layer of the graph convolutional neural network.
[0124] Among them, the degree matrix records the number of edges connected to each node; that is, when a node is connected to one edge, its degree is 1, when a node is connected to three edges, its degree is 3, and so on.
[0125] The adjacency matrix is a matrix indicating which nodes each node is connected to.
[0126] The feature transformation matrix is a learnable variable that is initialized and then optimized during the training process.
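For illustration only, the following is a minimal numerical sketch of formula (3), assuming a small undirected Item-Graph, a ReLU activation, added self-loops, and randomly initialized matrices; the variable names and sizes are hypothetical and not taken from the disclosure.

import numpy as np

def gcn_layer(A, V, W):
    # One graph convolution layer per formula (3): V^(l+1) = sigma(D^(-1/2) A D^(-1/2) V^l W^l)
    deg = A.sum(axis=1)                              # node degrees (diagonal of the degree matrix D)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    V_next = D_inv_sqrt @ A @ D_inv_sqrt @ V @ W
    return np.maximum(V_next, 0.0)                   # sigma: ReLU activation (an assumption)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)            # adjacency matrix of a small hypothetical Item-Graph
A = A + np.eye(4)                                    # add self-loops, a common GCN convention (assumed here)
V = np.random.randn(4, 8)                            # item embeddings v in R^d with d = 8
W = np.random.randn(8, 8)                            # feature transformation matrix, learned during training
item_vectors = gcn_layer(A, V, W)                    # item representations carrying cross-session information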
[0127] After the representation vectors of all objects in the target session information are obtained, the representation vectors of all objects can be weighted and summed to obtain a global representation of the target session information, that is, the global information of the target session information. To distinguish the different influences that different items have on the session, an item-level attention mechanism is adopted, so that the session representation focuses more on the items of high importance.
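The weighted summation with an item-level attention mechanism might look like the following sketch, in which the attention scores come from a dot product with a learned query vector; that scoring form is an assumption, since this embodiment does not fix the exact attention function.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_info(item_vectors, query):
    """Attention-weighted sum of the item representation vectors -> global session information."""
    scores = item_vectors @ query                    # item-level attention scores (dot-product scoring, assumed)
    weights = softmax(scores)                        # higher weight for items of higher importance
    return weights @ item_vectors                    # weighted summation over all objects in the session

item_vectors = np.random.randn(4, 8)                 # representations of the 4 objects in the target session
query = np.random.randn(8)                           # learned attention query vector (hypothetical)
s_global = global_info(item_vectors, query)          # global information of the target session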
[0128] Step 205: Determine the local information of the target session information according to the representation vector of the sequence information.
[0129] Local information refers to the object information output at the last moment in the representation information of the entire object sequence of the target session information.
[0130] After the representation vectors of all objects in the target session information are obtained, the representation vectors of the multiple items included in the target session information can be input into the temporal convolutional neural network model, a dilated convolution is calculated for each item in the target session information, and the session sequence information is extracted. During the extraction of the session sequence information, the output corresponding to the last item in the session can be used as the local information of the target session information, indicating the user's current point of interest.
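A simplified sketch of the dilated convolution step follows: a single dilated causal one-dimensional convolution over the item representation sequence, with the output at the last position taken as the local information. A real temporal convolutional network stacks several such layers with residual connections, so this single layer is only illustrative.

import numpy as np

def dilated_causal_conv(X, W, dilation=2):
    """One dilated causal convolution over a (length, d) sequence of item vectors."""
    length, d = X.shape
    kernel_size = W.shape[0]                         # W has shape (kernel_size, d, d)
    Y = np.zeros((length, d))
    for t in range(length):
        for k in range(kernel_size):
            t_in = t - k * dilation                  # causal: only the current and past positions are used
            if t_in >= 0:
                Y[t] += X[t_in] @ W[k]
    return np.maximum(Y, 0.0)                        # ReLU activation (an assumption)

X = np.random.randn(6, 8)                            # representation vectors of the session's 6 items
W = np.random.randn(3, 8, 8)                         # convolution weights with kernel size 3 (hypothetical)
H = dilated_causal_conv(X, W)                        # representations enriched with sequence information
s_local = H[-1]                                      # output at the last position = local information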
[0131] Step 206: Construct a session graph according to the at least one session information and the target session information.
[0132] The local information and global information of the session only focus on the current session, while ignoring the influence between sessions. To solve this problem, a context-sensitive session graph structure (Session-Context-Graph) can be constructed to consider the complex relationship between different sessions.
[0133] In the session graph, each node represents a session, and the edges represent the similarity between two sessions. An important issue to consider here is how to decide whether an edge exists. For each pair of sessions, the similarity between their representations can be calculated, and then a KNN-Graph model based on the similarity values can be used to determine the neighbors of each session node.
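Assuming each session is represented by an embedding vector and that cosine similarity is used as the similarity measure, the KNN-style neighbor selection for the Session-Context-Graph could be sketched as follows; the choice of cosine similarity and of k is illustrative.

import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def knn_session_graph(session_embeddings, k=2):
    """Keep, for every session node, an edge to its k most similar sessions."""
    n = len(session_embeddings)
    sim = np.array([[cosine_similarity(session_embeddings[i], session_embeddings[j])
                     for j in range(n)] for i in range(n)])
    np.fill_diagonal(sim, -np.inf)                   # a session is not its own neighbor
    neighbors = {i: list(np.argsort(-sim[i])[:k]) for i in range(n)}
    return sim, neighbors

sessions = np.random.randn(5, 8)                     # embeddings of the target session and 4 associated sessions
sim, neighbors = knn_session_graph(sessions, k=2)
print(neighbors[0])                                  # indices of the target session's adjacent sessions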
[0134] After the session graph is constructed according to the at least one session information and the target session information, step 207 is performed.
[0135] Step 207: Obtain a similarity value between the at least one session information and the target session information.
[0136] The similarity value can be used to represent the degree of similarity between a pair of session information.
[0137] After the session graph is constructed, the similarity value between the target session information and the at least one session information, that is, the similarity value between the at least one session information and the target session information can be calculated.
[0138] After acquiring the similarity value between the at least one session information and the target session information, step 208 is performed.
[0139] Step 208: Determine the adjacent session information corresponding to the target session information in the session graph according to the similarity value; the adjacent session information is the session information in the at least one session information.
[0140] After obtaining the similarity values between the at least one session information and the target session information, the neighbors of a session node can be determined according to the KNN-Graph model based on the similarity values; that is, the adjacent session information corresponding to the target session information is determined in the session graph according to the similarity values. Then step 209 is executed.
[0141] Step 209: Determine session influence information of the target session information according to the adjoining session information and the graph neural network model.
[0142] The session influence information refers to the influence of the at least one session information on the target session information.
[0143] After determining the adjacent session information corresponding to the target session information, a session-level attention mechanism and the graph neural network model can be used to integrate the influence of the neighboring session nodes on the target session, and finally a context-sensitive session representation based on the session context, that is, the session influence information of the target session information, is obtained.
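The session-level attention aggregation described above could be sketched as one message-passing step in which the influence of each adjacent session on the target session is weighted by an attention score; the dot-product scoring used here is an assumption.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def session_influence(target_emb, neighbor_embs):
    """Aggregate adjacent session embeddings with session-level attention weights."""
    scores = neighbor_embs @ target_emb              # attention scores from similarity to the target session
    weights = softmax(scores)
    return weights @ neighbor_embs                   # session influence information for the target session

target_emb = np.random.randn(8)                      # embedding of the target session
neighbor_embs = np.random.randn(3, 8)                # embeddings of the adjacent sessions from the session graph
influence = session_influence(target_emb, neighbor_embs)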
[0144] After determining the session influence information of the target session information according to the adjoining session information and the graph neural network model, step 210 is executed.
[0145] Step 210: Determine the fusion information corresponding to the target session information according to the global information, the local information and the session influence information.
[0146] In order to better predict the user's next behavior, this embodiment uses a fusion function to fuse the local information of the session, the global information, and the session influence information based on the cross-session information to obtain the final session representation; that is, the fusion information corresponding to the target session information is determined by combining the global information, the local information, and the session influence information.
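One plausible fusion function, assumed here to be a learned linear transformation over the concatenation of the global information, the local information, and the session influence information; this embodiment does not spell out the concrete form, so the sketch is only illustrative.

import numpy as np

def fuse(s_global, s_local, s_influence, W_f):
    """Fuse the three session representations into the final fusion information."""
    concat = np.concatenate([s_global, s_local, s_influence])   # shape (3d,)
    return W_f @ concat                                          # learned linear fusion (an assumption)

d = 8
s_global, s_local, s_influence = (np.random.randn(d) for _ in range(3))
W_f = np.random.randn(d, 3 * d)                      # fusion weights, learned during training (hypothetical)
s_fused = fuse(s_global, s_local, s_influence, W_f)  # fusion information of the target session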
[0147] After the fusion information corresponding to the target session information is determined according to the global information, the local information and the session influence information, step 211 is executed.
[0148] Step 211: Determine, according to the fusion information, the similarity between the target session information and all objects in the target session information and the at least one session information.
[0149] All objects refer to all objects included in the target session information and the at least one session information.
[0150] Similarity refers to the similarity between the target session information and all objects.
[0151] After the fusion information is acquired, the similarity between the target session information and all objects may be determined according to the fusion information, and then step 212 is performed.
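The similarity between the fusion information and every candidate object can be computed, for example, as an inner product with each object's representation vector; the inner-product form is an assumption for illustration.

import numpy as np

def object_similarities(s_fused, object_vectors):
    """Similarity between the target session's fusion information and each candidate object."""
    return object_vectors @ s_fused                  # one score per object (inner-product scoring, assumed)

object_vectors = np.random.randn(10, 8)              # all objects in the target and associated sessions
s_fused = np.random.randn(8)                         # fusion information from step 210
scores = object_similarities(s_fused, object_vectors)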
[0152] Step 212: Determine the recommended object corresponding to the target session information according to the similarity.
[0153] After obtaining the similarity between the target session information and all objects, the recommended objects corresponding to the target session information can be determined according to the similarity. Specifically, the similarity can be normalized to obtain a similarity probability, the candidate objects can be sorted in descending order of the similarity probability, and a preset number of top-ranked candidate objects can be selected as the recommended objects.
[0154] In a specific implementation manner of this embodiment, the foregoing step 212 may include:
[0155] Sub-step S1: Normalize the similarity to generate the probability of the target session information relative to all the objects.
[0156] In this embodiment, the probability is the similarity probability of the target session information relative to all objects, and all objects are candidate objects to be recommended.
[0157] After obtaining the similarity between the target session information and all the objects, the similarity may be normalized, so that the probability of the target session information relative to all the objects may be obtained, and then sub-step S2 is performed.
[0158] Sub-step S2: Sort all the objects according to the probabilities in descending order.
[0159] After obtaining the probabilities of the target session information relative to all objects, all objects can be sorted in descending order of the probabilities, so that the sorting results of all objects can be obtained.
[0160] Sub-step S3: According to the sorting result, select the top N objects from all the objects as the recommended objects; wherein, N is a positive integer greater than or equal to 1.
[0161] After the sorting result is obtained, the top N objects can be selected from all the objects as the recommended objects according to the sorting result, where N is a positive integer greater than or equal to 1.
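Sub-steps S1 through S3 map directly onto a softmax normalization followed by a descending sort and a top-N cut, as in the following sketch; the value of N, the scores, and the candidate identifiers are hypothetical placeholders.

import numpy as np

def recommend_top_n(scores, candidate_ids, n=3):
    """Normalize similarities to probabilities, sort descending, and keep the top N objects."""
    probs = np.exp(scores - scores.max())
    probs = probs / probs.sum()                      # sub-step S1: normalization into probabilities
    order = np.argsort(-probs)                       # sub-step S2: descending sort by probability
    return [candidate_ids[i] for i in order[:n]]     # sub-step S3: top-N recommended objects

scores = np.array([0.2, 1.5, -0.3, 0.9])             # similarities from step 211 (hypothetical)
candidate_ids = ["item_a", "item_b", "item_c", "item_d"]
print(recommend_top_n(scores, candidate_ids, n=2))   # -> ['item_b', 'item_d']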
[0162] Step 213: Recommend the recommended object to the target user.
[0163] After the recommended objects are acquired, the recommended objects may be recommended to the target user. Specifically, after the recommended objects are acquired, the recommended objects may be recommended to the users one by one.
[0164] The model based on cross-session information and temporal convolutional neural network (CA-HATCN) proposed in this embodiment achieves the highest values on the two recognized evaluation indicators P@20 and MRR@20, that is, the best effect. As shown in Figure 2b, the effectiveness of cross-session information is verified through comparative experiments. In the comparative experiments, CA-HATCN (sc.exl.) increases the utilization of cross-session item information compared to HATCN (which uses only the TCN model), and CA-HATCN increases the utilization of session context information compared to CA-HATCN (sc.exl.). The experimental results show that the introduction of cross-session information progressively improves the performance of sequence recommendation, which verifies the effectiveness of cross-session information.
[0165] In the method for determining a recommended object provided by the embodiments of the present disclosure, the target session information and at least one session information associated with the target session information are acquired; the global information and the local information corresponding to the target session information are obtained according to the representation vectors of all objects in the target session information; the session influence information of the at least one session information on the target session information is obtained; and the recommended object corresponding to the target session information is determined according to the global information, the local information and the session influence information. The embodiments of the present disclosure not only consider the current target session information, but also combine the influence of cross-session information on the target session, so that the accuracy of recommendation can be improved and the recommendation performance of the recommendation system can be improved.