
81 results about "Context values" patented technology

Parallel non-zero coefficient context modeling method for binary arithmetic coding

The invention discloses a parallel non-zero coefficient context modeling method for binary arithmetic coding, relating to context modeling technology for video coding. The method addresses the reduced data throughput of a coding system caused by the data dependencies that conventional binary arithmetic coding creates during context modeling of non-zero coefficients. The method comprises the following steps: 1) define the number of coefficients and non-zero coefficients in a transform-quantization block; 2) binarize the non-zero coefficients to obtain a bin sequence; 3) perform context modeling for a first context according to the positions and the number of the non-zero coefficients in the transform-quantization block; 4) calculate the probability distribution of a non-zero coefficient whose absolute value is abs(Li) under the first context value; 5) subtract 1 from the absolute value of Li and binarize the result; 6) perform context modeling using the equal probability distribution. With this method, the context modeling of different non-zero coefficients can proceed simultaneously, enabling parallel execution of multiple context modeling processes during encoding.
Owner:HARBIN INST OF TECH
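The key claim above is that each coefficient's context index depends only on its own position and the block-level non-zero count, so the modeling steps carry no inter-coefficient dependency and can run in parallel. A minimal sketch of that idea follows; the `context_index` formula, block size, and function names are illustrative assumptions, not the patented mapping.

```python
from concurrent.futures import ThreadPoolExecutor

def context_index(position, num_nonzero, block_size=16):
    """Map a coefficient's position and the block's non-zero count to a
    context index. This formula is a hypothetical stand-in for the
    patented mapping: it uses only per-coefficient position plus shared
    block statistics, so it has no dependency between coefficients."""
    region = 0 if position < block_size // 2 else 1  # low- vs high-frequency half
    density = min(num_nonzero, 3)                    # capped non-zero count
    return region * 4 + density

def model_contexts_parallel(positions, num_nonzero):
    """Because each index depends only on shared, read-only inputs,
    all coefficients can be mapped to contexts concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda p: context_index(p, num_nonzero), positions))
```

In a serial CABAC-style coder the context for one bin depends on previously coded bins; decoupling the index computation as above is what allows the modeling stage to be batched across coefficients.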

Sliding window sampling-based distributed machine learning training method and system thereof

Status: Inactive · Patent: CN106779093A · Effects: improved perception; alleviates poor training convergence · Keywords: machine learning, sliding window, model parameters
The invention provides a sliding window sampling-based distributed machine learning training method and system. The method comprises: initializing the parameters of a machine learning model; obtaining a data shard of the full dataset and training the model independently; collecting multiple rounds of historical gradient-staleness samples, sampling them with a sliding window, calculating a gradient-staleness context value, adjusting the learning rate, and then issuing a gradient update request; asynchronously collecting the gradient-staleness samples, updating the global model parameters with the adjusted learning rates, and pushing the updated parameters; asynchronously obtaining the pushed global parameters and continuing to the next round of training; and checking model convergence: if the model has not converged, repeating the training loop; if it has, returning the model parameters. By using gradient staleness to control each learner's learning rate, the method improves the stability and convergence of distributed training, reduces training fluctuation caused by the distributed system, and improves the robustness of distributed training.
Owner:SHANGHAI ADVANCED RES INST CHINESE ACADEMY OF SCI
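The staleness-aware learning-rate step above can be sketched as follows. This is a minimal illustration under stated assumptions: the class name, the window size, and the `base_lr / (1 + context)` damping rule are hypothetical choices, not the patented formulas; the sliding window and the "context value" (mean staleness over the window) are the elements taken from the abstract.

```python
from collections import deque

class StalenessAwareLearner:
    """Keep a sliding window of gradient-staleness samples and derive a
    'context value' (mean staleness over the window) that scales down
    the learning rate before each gradient update request."""

    def __init__(self, base_lr=0.1, window=5):
        self.base_lr = base_lr
        self.samples = deque(maxlen=window)  # sliding window of staleness samples

    def record(self, staleness):
        """Record one historical gradient-staleness sample (in rounds)."""
        self.samples.append(staleness)

    def learning_rate(self):
        """Damp the base rate by the staleness context value; a stale
        gradient gets a smaller step, reducing training fluctuation."""
        if not self.samples:
            return self.base_lr
        context = sum(self.samples) / len(self.samples)
        return self.base_lr / (1.0 + context)
```

For example, a learner whose recent gradients arrived 0, 2, and 4 rounds late has a context value of 2 and applies a third of its base learning rate, which is the mechanism the abstract credits for improved convergence stability.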