Method for adaptively learning implicit user trust behavior based on a deep graph convolutional network

An adaptive-learning and graph-convolutional-network technology, applied in the field of adaptively learning implicit user trust behavior. It addresses problems such as algorithm performance degradation, and achieves the effects of improving recommendation performance and strengthening the connection between rating and trust data.

Pending Publication Date: 2022-01-07
HANGZHOU DIANZI UNIV
0 Cites 0 Cited by

AI-Extracted Technical Summary

Problems solved by technology

However, the classic social recommendation algorithm is very dependent on the quality of the user's trust data...


Abstract

The invention belongs to the technical field of deep learning and discloses a method for adaptively learning implicit user trust behavior based on a deep graph convolutional network. The method comprises the following steps: 1, trust information preprocessing: filter out unreliable trust information using the users' historical behavior to alleviate the trust noise problem; 2, graph convolutional network learning: a deep graph convolutional network learns from the users' rating data and trust features and outputs the users' features; 3, adaptive learning of implicit trust features: learn reliable user information with an adaptive learning matrix and infer the users' trust information and trust features. The method can filter out unreliable trust features while the adaptive matrix learns the users' behavioral features, so that the users' implicit trust behavior logic is learned, their trust features are inferred and refined, the connection between the users' rating data and trust data is strengthened, and recommendation performance is improved.


Example Embodiment

[0046] To better convey the object, structure, and function of the present invention, the method for adaptively learning implicit user trust behavior based on a deep graph convolutional network is described in further detail below with reference to the accompanying drawings.
[0047] As shown in Figure 1, the adaptive method for learning implicit user trust behavior based on a deep graph convolutional network lets a deep neural network learn users' trust behavior by filtering out unreliable trust information, thereby reflecting the users' implicit behavioral logic. The model is divided into three parts. The first part uses the users' historical behavior to filter out unreliable trust information and alleviate the trust noise problem; the second part is the graph convolution learning part, in which the deep graph convolutional network learns from the users' rating data and trust features and outputs the users' features; the third part uses an adaptive learning matrix to learn reliable user information and infer the users' trust information and trust features. After the algorithm stabilizes, the model produces predicted rating data for items.
[0048] Trust information preprocessing
[0049] First, obtain the users' rating data for items and the users' trust data, and split the rating data into a training set, test set, and validation set at a ratio of 8:1:1. The user trust data is then processed according to the users' rating data, and unreliable trust data is filtered out using the Jaccard formula:
[0050] J_{u,v} = |I_u ∩ I_v| / |I_u ∪ I_v|
[0051] where J_{u,v} denotes the similarity between user u and user v; the closer it is to 1, the more similar the two users are. I_u and I_v are the sets of items historically rated by user u and user v, respectively.
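As an illustrative sketch (not part of the patent's disclosure), the preprocessing step can be expressed as follows; the threshold value and the data layout are assumed for the example:

```python
# Sketch of the trust-preprocessing step: keep a trust edge (u, v) only when
# the Jaccard similarity of the two users' rating histories is large enough.
# The threshold 0.1 is an illustrative assumption, not a value from the patent.

def jaccard(items_u, items_v):
    """Jaccard similarity of two users' historical item sets."""
    if not items_u and not items_v:
        return 0.0
    return len(items_u & items_v) / len(items_u | items_v)

def filter_trust(trust_pairs, history, threshold=0.1):
    """Drop trust links whose endpoints share too little rating history."""
    return [(u, v) for (u, v) in trust_pairs
            if jaccard(history.get(u, set()), history.get(v, set())) >= threshold]

history = {1: {10, 11, 12}, 2: {11, 12, 13}, 3: {99}}
reliable = filter_trust([(1, 2), (1, 3)], history)
```

Here the trust link (1, 3) is discarded because the two users share no rated items, while (1, 2) is kept.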
[0052] Graph convolutional network learning
[0053] The graph convolutional network part first fuses the user features with the trust data: the users' trust data is used as a user attribute, and the users' behavioral features are combined with their trust features. The users' trust features are then input into the deep graph convolutional neural network for repeated iteration and learning, where each iteration takes the previous iteration's result as its input:
[0054] e_a^{(k+1)} = Σ_{i ∈ G_a} (1 / √(|G_a|·|H_i|)) · e_i^{(k)} · W^{(k+1)}
[0055] e_i^{(k+1)} = Σ_{a ∈ H_i} (1 / √(|H_i|·|G_a|)) · e_a^{(k)} · W^{(k+1)}
[0056] where G_a = {i | g_{ai} = 1} ⊆ I is the set of items user a has interacted with, H_i = {a | h_{ai} = 1} ⊆ U is the set of users who have interacted with item i, and W^{(k+1)} is the learnable weight parameter matrix.
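A minimal sketch of one such propagation step, assuming the standard symmetrically normalized sum over neighbors (1/√(|G_a|·|H_i|)); the learnable matrix W is omitted here for clarity, and the embeddings are illustrative:

```python
import math

# One graph-convolution iteration over the user-item interaction graph:
# each user embedding is rebuilt from the embeddings of the items the user
# interacted with, using symmetric normalization. The patent's exact
# propagation rule may differ; this is the common normalized-sum form.

def propagate_users(user_items, item_users, item_emb, dim):
    user_emb = {}
    for a, items in user_items.items():
        vec = [0.0] * dim
        for i in items:
            norm = math.sqrt(len(items) * len(item_users[i]))  # sqrt(|G_a|*|H_i|)
            for d in range(dim):
                vec[d] += item_emb[i][d] / norm
        user_emb[a] = vec
    return user_emb

user_items = {"u1": ["i1", "i2"], "u2": ["i2"]}
item_users = {"i1": ["u1"], "i2": ["u1", "u2"]}
item_emb = {"i1": [1.0, 0.0], "i2": [0.0, 1.0]}
new_user_emb = propagate_users(user_items, item_users, item_emb, dim=2)
```

An analogous step rebuilds item embeddings from user embeddings, and the two are alternated for several iterations, each consuming the previous iteration's output.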
[0057] Adaptive learning of implicit trust features
[0058] The adaptive learning module consists of two parts: a score prediction part and an implicit trust feature update part. The score prediction part mainly outputs the final prediction result, while the implicit trust feature update part mainly updates the users' trust features according to their implicit trust behavior.
[0059] Score prediction part: after multiple iterations the whole model converges; the user embedding e_u and the item embedding e_i output by the graph convolution part are taken as input, and the final predicted score matrix is output:
[0060] ŷ_{u,i} = e_u · e_i
[0061] where · denotes the inner product of the two vectors.
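The score-prediction step reduces to an inner product, as in this small sketch (the embedding values are illustrative):

```python
# Predicted rating = inner product of the user and item embeddings produced
# by the graph convolution part. Embedding values here are made-up examples.

def predict_score(user_vec, item_vec):
    return sum(u * i for u, i in zip(user_vec, item_vec))

score = predict_score([0.5, 1.0, -0.5], [1.0, 2.0, 0.0])  # 0.5*1.0 + 1.0*2.0 = 2.5
```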
[0062] Implicit trust feature update part: since the users' trust data is used as a user attribute for learning in the graph convolution, an adaptive learning matrix can learn from the user vectors output by the graph convolutional network. The implicit trust features are updated with the following formula:
[0063]
[0064] where W_t is a matrix parameter to be learned and the predicted trust feature vector is derived from user a's features. The users' implicit trust attributes then need to be introduced and updated according to the updated user trust features; each iteration requires the result of the previous iteration, with the formula as follows:
[0065]
[0066] where T represents the matrix of all users' trust attributes, T̂ represents the updated user trust feature matrix, and I_X is a matrix whose elements are all 1. The implicit trust features continue to be updated until the entire model converges.
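Since the update formulas themselves were not recovered from the source, the following is only a heavily hedged sketch of the general mechanism: a learned matrix W_t maps a user embedding to a predicted trust row, which is then blended into the trust attributes. The linear form and the blending rule are illustrative assumptions, not the patent's formulas:

```python
# Hypothetical implicit-trust update: t_hat_a = e_a @ W_t, then blend the
# prediction into the current trust attributes. Shapes, the blending factor
# alpha, and the linear form are all illustrative assumptions.

def predict_trust_row(user_vec, W_t):
    rows, cols = len(W_t), len(W_t[0])
    return [sum(user_vec[r] * W_t[r][c] for r in range(rows)) for c in range(cols)]

def update_trust(T_row, t_hat_row, alpha=0.5):
    return [(1 - alpha) * t + alpha * p for t, p in zip(T_row, t_hat_row)]

W_t = [[1.0, 0.0], [0.0, 1.0]]   # identity, so the example is easy to check
t_hat = predict_trust_row([0.2, 0.8], W_t)
new_row = update_trust([1.0, 0.0], t_hat)
```

Repeating this blend each iteration lets the trust attributes drift toward what the learned user features imply, which is the stated role of the adaptive learning matrix.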
[0067] Model optimization:
[0068] Two parts participate in the iterative module, so two loss functions need to be optimized.
[0069] Rating prediction loss function: we use Bayesian Personalized Ranking (BPR) as the loss function for item recommendation:
[0070] L_r = −Σ_{(u,i,j)} ln σ(ŷ_{u,i} − ŷ_{u,j}) + λ‖Θ_r‖²
[0071] where σ(x) denotes the sigmoid function, Θ_r is the weight parameter matrix learned in the graph convolution module, and λ is the regularization factor.
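The BPR loss above can be sketched as follows; the sampled (user, positive item, negative item) scores and the regularization weight are illustrative:

```python
import math

# Sketch of the BPR loss: for each sampled (u, i, j) triple, penalize cases
# where the positive item's score does not exceed the negative item's score,
# plus an L2 term over the parameters. Inputs here are illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bpr_loss(pairs, lam=0.01, params=()):
    """pairs: iterable of (pos_score, neg_score) for sampled (u, i, j)."""
    loss = -sum(math.log(sigmoid(pos - neg)) for pos, neg in pairs)
    loss += lam * sum(p * p for p in params)
    return loss

loss = bpr_loss([(2.0, 0.5), (1.0, 1.0)])
```

Note that the loss is smallest when every positive score is far above its paired negative score, which is exactly the ranking behavior the recommender should learn.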
[0072] Adaptive implicit trust loss function: learning and inferring implicit trust can be viewed as a classification task, so a cross-entropy loss function can be used to evaluate the gap between the learned implicit trust features and the real trust features:
[0073] L_a = −Σ_{a,v} [ T_{a,v} ln T̂_{a,v} + (1 − T_{a,v}) ln(1 − T̂_{a,v}) ]
[0074] where Θ_a = [W_t] is the set of weight matrix parameters used in learning the implicit trust features.
[0075] According to the two loss functions above, we can combine them and use a balance factor γ to balance the relationship between them, as shown in the following formula:
[0076] L = L_r + γ · L_a
[0077] where Θ = [Θ_a, Θ_r] contains the parameters of the two loss functions above.
[0078] Data sets
[0079] Experiments are conducted on three public data sets (Lastfm, Douban, Gowalla); the statistics of each data set are shown below:
[0080]
Data set   Users   Items   Ratings    Trust records
Lastfm     1892    17632   92834      25434
Douban     2848    39586   894887     35770
Gowalla    18737   32510   1278274    86985
[0081] The model's score prediction is evaluated with commonly used recommender-system metrics, namely Precision and NDCG (Normalized Discounted Cumulative Gain), whose formulas are:
[0082] Precision@N = (1/|U′|) Σ_{u∈U′} ( Σ_{i∈N(u)} rel(i) ) / N
[0083] DCG@N = Σ_{i=1}^{N} rel(i) / log₂(i + 1)
[0084] NDCG@N = DCG@N / IDCG@N
[0085] where N is the number of recommended items, U′ is the set of users in the test set, N(u) is the set of items recommended to user u, rel(i) = 1 if user u has interacted with item i (and 0 otherwise), and IDCG@N is the DCG obtained when the target user's items of interest are ideally ranked.
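The two metrics can be sketched per user as follows, assuming rel(i) = 1 when a recommended item appears in the user's test-set interactions:

```python
import math

# Precision@N and NDCG@N for a single user's ranked recommendation list,
# with binary relevance: rel(i) = 1 iff the item is in the user's test set.

def precision_at_n(recommended, relevant, n):
    """Fraction of the top-n recommendations the user actually interacted with."""
    return sum(1 for i in recommended[:n] if i in relevant) / n

def ndcg_at_n(recommended, relevant, n):
    """DCG of the ranked list divided by the DCG of an ideal ranking (IDCG)."""
    dcg = sum(1.0 / math.log2(rank + 2)   # rank is 0-based, so i+1 -> rank+2
              for rank, i in enumerate(recommended[:n]) if i in relevant)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(min(len(relevant), n)))
    return dcg / idcg if idcg > 0 else 0.0

recommended = ["a", "b", "c", "d"]
relevant = {"a", "c"}
p = precision_at_n(recommended, relevant, 4)   # 2 hits in the top 4 -> 0.5
g = ndcg_at_n(recommended, relevant, 4)
```

The per-user values are then averaged over the test-set users U′ to obtain the reported scores.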
[0086] Rating prediction results
[0087] Hyperparameter tuning:
[0088] We experiment with the model's balance factor and with the number of trusted friends per user to obtain the best model performance.
[0089] Balance factor: we experiment with different values of the balance factor γ, taking values between 0 and 1 on the three public data sets; the results are shown in Figures 2(a)-2(c):
[0090] The final results show that a balance factor of 0.01 achieves the best recommendation performance on all data sets.
[0091] Number of trusted friends: since many trusted friends do not meet expectations, we keep only the first few trusted friends in each user's trust data and experiment with counts from 1 to 5. The experimental results are shown in Figures 3(a)-3(c):
[0092] Experimental comparison:
[0093] The proposed method is compared with recently popular recommender system algorithms, as shown below:
[0094]
[0095] Here ATGCN is our proposed model; it can be seen that ATGCN achieves the best recommendation performance on all three public data sets.
[0096] It will be appreciated that those skilled in the art can make changes to these features and examples without departing from the spirit and scope of the invention. Additionally, under the teachings of the present invention, these features and embodiments can be modified to accommodate specific situations and materials without departing from the spirit and scope of the invention. Thus, the present invention is not limited by the specific embodiments disclosed herein; all embodiments falling within the scope of the claims are within the scope of the present invention.



Similar technology patents

Multi-focus image fusion method with good anti-noise environment effect

Owner:JINGDEZHEN CERAMIC INSTITUTE

Aircraft knowledge graph construction method and device, equipment and storage medium

Owner:CALCULATION AERODYNAMICS INST CHINA AERODYNAMICS RES & DEV CENT

Intelligent maintenance service modular service package design method and system

Owner:苏州凌犀物联网技术有限公司

Classification and recommendation of technical efficacy words

  • Keep in good touch
  • Improve recommendation performance

Smart beverage vending machine and push method

Owner:GREE ELECTRIC APPLIANCES INC OF ZHUHAI

Network alarm root analysis method and system, memory medium and computer equipment

Owner:北京思特奇信息技术股份有限公司

Method for reinforcing fan foundation through additionally-arranged horizontal steel bars

Owner:CHANGAN UNIV

Loudspeaker module

Owner:AAC TECH PTE LTD

Riding method and device, electronic equipment and readable storage medium

Owner:APOLLO INTELLIGENT CONNECTIVITY (BEIJING) TECH CO LTD

Cross-domain recommendation method based on multi-view knowledge representation

Owner:BEIJING JIAOTONG UNIV

Comment text-oriented graph neural network recommendation method

Owner:HEFEI UNIV OF TECH

Recommendation algorithm based on knowledge graph

Owner:浙江辰时科技集团有限公司