The invention discloses an adaptive clothing modeling method based on visual perception. The method comprises the following steps: 1) constructing a clothing visual saliency model that accords with human-eye characteristics: a deep convolutional neural network is applied to learn and extract abstract features at different hierarchical levels from each frame of a clothing animation, and deep learning is performed on these features together with real eye-movement data to obtain the visual saliency model; 2) performing clothing sub-region modeling: based on the visual saliency model constructed in step 1, a visual saliency map of the clothing animation image is predicted, the degree of attention of each clothing region is extracted, clothing deformation is filtered, and sub-region modeling is performed by setting a detail simulation factor according to camera viewpoint motion information and physical deformation information; and 3) constructing an adaptive clothing model driven by visual perception and realizing the simulation: clothing sub-region modeling is realized by means of an adaptive multi-resolution mesh technique, high-precision modeling is applied to regions with a high detail simulation factor and low-precision modeling to regions with a low detail simulation factor, and on this basis dynamics computation and collision detection are performed to construct a visually vivid clothing animation system.
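The core of steps 2 and 3 can be sketched as follows: per-region cues (predicted attention from the saliency map, physical deformation magnitude, and viewpoint motion) are combined into a detail simulation factor, which then selects a high- or low-precision mesh for each clothing sub-region. This is a minimal illustrative sketch; the function names, weights, and thresholds are assumptions for demonstration and are not specified in the patent.

```python
# Hypothetical sketch: combining normalized per-region cues into a detail
# simulation factor and selecting a mesh resolution per clothing sub-region.
# All weights and thresholds below are illustrative assumptions.

def detail_simulation_factor(attention, deformation, viewpoint_motion,
                             w_a=0.5, w_d=0.3, w_v=0.2):
    """Weighted combination of normalized cues, each in [0, 1]."""
    return w_a * attention + w_d * deformation + w_v * viewpoint_motion

def choose_resolution(factor, high_res=64, low_res=16, threshold=0.5):
    """High-precision mesh for salient, strongly deforming regions;
    coarse mesh otherwise."""
    return high_res if factor >= threshold else low_res

# Example: a region the viewer attends to that is deforming strongly
# receives the high-resolution mesh.
f = detail_simulation_factor(attention=0.9, deformation=0.8, viewpoint_motion=0.2)
print(round(f, 2), choose_resolution(f))  # prints "0.73 64"
```

A real system would recompute the factor per frame so that mesh precision follows the viewer's gaze and the garment's motion, which is what makes the modeling adaptive.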