The present invention provides a target re-identification method based on hypersphere embedding with a densely connected convolutional network. First, the densely connected convolutional network (DenseNet) extracts features of underwater deformable targets from the video sequence; its dense connectivity alleviates gradient vanishing, strengthens feature propagation, and supports feature reuse and parameter learning. Then, from a fine-grained classification perspective, the features of underwater deformable targets are refined and extracted at all levels, progressing from local aggregation to the global representation with a group average pooling strategy, yielding a more accurate feature representation of the underwater deformable targets. Next, a hypersphere loss, namely the angular triplet loss, emphasizes inter-class differences among individual underwater deformable targets and separates intra-class variation, avoiding a direct Euclidean-distance comparison between the encoded features of individual targets. Finally, a complete multi-point underwater vision system is deployed to construct a continuous re-identification model for individual underwater deformable targets. The present invention thereby achieves close supervision and process tracking of individual underwater deformable targets in short-range, multi-field-of-view observation.
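The group average pooling step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a DenseNet-style feature map of shape (C, H, W), an illustrative group count, and a descriptor formed by concatenating group-wise pooled values with the global average-pooled vector.

```python
import numpy as np

def group_average_pool(feat, groups=4):
    """Hedged sketch of group average pooling on a feature map.

    feat: array of shape (C, H, W), e.g. a DenseNet output.
    groups: number of channel groups (illustrative choice).
    Returns local group-wise descriptors concatenated with the
    global average-pooled descriptor.
    """
    C, H, W = feat.shape
    assert C % groups == 0, "channel count must divide evenly into groups"
    per_group = C // groups
    # Global average pooling: one value per channel.
    channel_gap = feat.mean(axis=(1, 2))            # shape (C,)
    # Local refinement: average within each channel group.
    local = channel_gap.reshape(groups, per_group).mean(axis=1)  # shape (groups,)
    # Local-to-global descriptor.
    return np.concatenate([local, channel_gap])     # shape (groups + C,)
```

For example, an 8-channel 2x2 feature map with 4 groups yields a 12-dimensional descriptor combining 4 local group means with the 8 per-channel global means.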
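The angular triplet (hypersphere) loss mentioned above can be sketched in a few lines. This is a hedged sketch under common conventions, not the patented formulation: embeddings are L2-normalized onto the unit hypersphere and the triplet margin is applied to arc-cosine angles instead of Euclidean distances; the margin value is an illustrative assumption.

```python
import numpy as np

def angular_triplet_loss(anchor, positive, negative, margin=0.2):
    """Sketch of an angular triplet loss on the unit hypersphere.

    Pulls the anchor-positive angle below the anchor-negative angle
    by at least `margin` radians, avoiding a direct Euclidean
    comparison of the embedding vectors.
    """
    def angle(u, v):
        # Normalize onto the unit hypersphere, then take the geodesic angle.
        u = u / np.linalg.norm(u)
        v = v / np.linalg.norm(v)
        cos = np.clip(np.dot(u, v), -1.0, 1.0)
        return np.arccos(cos)

    return max(0.0, angle(anchor, positive) - angle(anchor, negative) + margin)
```

Working in angles makes the loss invariant to embedding magnitude, so only the direction of each feature vector on the hypersphere carries identity information.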