The invention provides a model compression method based on pruning-sequence active learning: an end-to-end pruning framework built on sequential active learning. The method actively learns the importance of every layer of the network, generates a pruning priority, and makes reasonable pruning decisions, overcoming the shortcomings of existing naive sequential pruning methods. Pruning is applied first to the network layers whose removal has the least influence, and proceeds step by step from easy to difficult, so that the loss of model precision during pruning is minimized. Meanwhile, guided by the final loss of the model, the importance of each convolution kernel is evaluated from multiple angles in an efficient, flexible, and rapid manner, ensuring the correctness and effectiveness of model compression throughout the process and providing technical support for subsequently porting large models to portable devices. Experimental results show that the proposed model compression method based on pruning-sequence active learning leads under multiple datasets and multiple model structures, can greatly compress model size while preserving model precision, and has strong prospects for practical application.
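The priority-ordered pruning idea described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual implementation: the layer names, the importance scores, and the per-layer loss-increase estimates are all hypothetical placeholders for quantities the method would learn.

```python
# Hypothetical sketch of priority-ordered pruning: layers with the lowest
# learned importance are pruned first, and pruning stops once the estimated
# cumulative precision loss would exceed a budget. All names and values
# here are illustrative assumptions.

def pruning_priority(layer_scores):
    """Order layers from least to most important (prune easy layers first)."""
    return sorted(layer_scores, key=layer_scores.get)

def prune_by_priority(layer_scores, loss_increase, budget):
    """Greedily prune layers in priority order while the estimated
    cumulative precision loss stays within `budget`."""
    pruned, total_loss = [], 0.0
    for layer in pruning_priority(layer_scores):
        step = loss_increase[layer]  # estimated precision loss for this layer
        if total_loss + step > budget:
            break  # pruning this layer would exceed the precision budget
        total_loss += step
        pruned.append(layer)
    return pruned, total_loss

# Example usage: conv3 has the smallest importance, so it is pruned first.
scores = {"conv1": 0.9, "conv2": 0.4, "conv3": 0.1}
cost = {"conv1": 0.05, "conv2": 0.02, "conv3": 0.01}
order = pruning_priority(scores)
pruned, loss = prune_by_priority(scores, cost, budget=0.04)
```

In this toy run, `conv3` and `conv2` are pruned (cumulative loss 0.03, within the 0.04 budget) and `conv1`, the most important layer, is kept, mirroring the easy-to-difficult progression described above.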