The invention discloses a method for computing the softmax function on various hardware platforms (such as CPLDs, FPGAs, and special-purpose chips). The softmax function is widely used in deep learning for multi-classification tasks, attention models, and the like, where the associated e-exponent and division calculations consume substantial hardware resources. In the disclosed design, a mathematical transformation of the function simplifies each e-exponent calculation to one constant multiplication, one base-2 exponential operation over a fixed input range, and one shift operation; the n division operations are simplified to one highest-nonzero-digit detection, one reciprocal operation over a fixed input range, one shift operation, and n multiplication operations. The base-2 exponential and reciprocal operations are implemented with specially designed lookup tables, which achieve the same precision in a smaller storage space. When the method is applied in attention models and other deep-learning settings, the calculation speed can be greatly increased with almost no loss of precision, and the consumption of computing and storage resources is reduced.
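The e-exponent transformation described above can be sketched in software as follows. The identity e^x = 2^(x·log2 e) splits the scaled exponent into an integer part (a hardware shift) and a fractional part in [0, 1) that is looked up in a fixed-range table. This is a minimal illustrative sketch, not the patented implementation: the function names, the 8-bit table size, and the use of floating point instead of fixed point are all assumptions for clarity.

```python
import math

LOG2E = math.log2(math.e)  # constant multiplier: e**x == 2**(x * LOG2E)

# Assumed 256-entry lookup table for 2**f with f in [0, 1);
# the patent's table size and encoding are not specified here.
LUT_BITS = 8
POW2_LUT = [2.0 ** (k / (1 << LUT_BITS)) for k in range(1 << LUT_BITS)]

def exp_approx(x: float) -> float:
    """Approximate e**x with one constant multiply, one fixed-range
    base-2 table lookup, and one power-of-two scaling (a shift in HW)."""
    y = x * LOG2E                       # one constant multiplication
    i = math.floor(y)                   # integer part -> shift amount
    f = y - i                           # fractional part in [0, 1)
    idx = int(f * (1 << LUT_BITS))      # quantize f to a table index
    return POW2_LUT[idx] * (2.0 ** i)   # 2**i is a left/right shift in HW
```

With an 8-bit table the relative error stays within a fraction of a percent, which is why a small fixed-range table can replace a general-purpose exponential unit.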
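The division simplification can be sketched in the same spirit. Detecting the highest nonzero bit of the denominator normalizes it to a mantissa m in [1, 2); the reciprocal of m comes from a fixed-range table, the power of two becomes a shift, and the n divisions collapse to n multiplications by the shared reciprocal. Again, the table size, names, and floating-point arithmetic are illustrative assumptions, not the patent's fixed-point design.

```python
import math

LUT_BITS = 8
# Assumed 256-entry reciprocal table for m in [1, 2):
# entry k approximates 1 / (1 + k / 2**LUT_BITS).
RECIP_LUT = [1.0 / (1.0 + k / (1 << LUT_BITS)) for k in range(1 << LUT_BITS)]

def reciprocal_approx(s: float) -> float:
    """Approximate 1/s via highest-nonzero-digit detection (a priority
    encoder in HW), one fixed-range table lookup, and one shift."""
    p = math.floor(math.log2(s))        # position of the highest nonzero bit
    m = s / (2.0 ** p)                  # normalized mantissa in [1, 2): a shift
    idx = int((m - 1.0) * (1 << LUT_BITS))
    return RECIP_LUT[idx] * 2.0 ** (-p)  # 2**-p is another shift in HW

def normalize(exps):
    """Replace n divisions by one reciprocal and n multiplications."""
    r = reciprocal_approx(sum(exps))    # computed once per softmax row
    return [e * r for e in exps]
```

Because the reciprocal is computed once and broadcast, the per-element cost drops from a full division to a single multiply, which is the main hardware saving claimed.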