The invention discloses a multimodal brain network feature fusion method based on multi-task learning. The method comprises the steps of preprocessing the acquired functional magnetic resonance imaging (fMRI) images and diffusion tensor imaging (DTI) images, registering the preprocessed fMRI images to the standard AAL template, performing fiber tracking on the preprocessed DTI images, calculating fractional anisotropy (FA) values, and constructing a structural connectivity matrix based on the AAL template. The clustering coefficient of each brain region in the functional connectivity matrix and the structural connectivity matrix is calculated and taken as the functional features and structural features, respectively. Treating the functional features and the structural features as two different tasks, an optimal feature set is obtained by solving a multi-task learning optimization problem. The method exploits the complementary information of multiple modalities for simultaneous learning and classification, improves classification accuracy, and addresses the problems that single-task feature selection does not consider the correlation between features and that using features from only one modality for pattern classification provides an insufficient amount of information.
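As an illustrative sketch only (not part of the patent text), the following Python snippet shows how per-region clustering coefficients might be extracted from functional and structural connectivity matrices and then jointly selected via an L2,1-regularized multi-task least-squares formulation solved with proximal gradient descent. The binarization threshold, step size, regularization weight, and all function names are hypothetical choices for illustration, not the patented procedure.

```python
import numpy as np

def clustering_coefficients(conn, threshold=0.2):
    """Per-region clustering coefficients of a binarized connectivity matrix.

    `conn` is an (N x N) symmetric connectivity matrix (functional or
    structural); `threshold` is a hypothetical binarization cut-off.
    """
    A = (np.abs(conn) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    deg = A.sum(axis=1)                        # node degree k_i
    triangles = np.diag(A @ A @ A) / 2.0       # closed triangles around each node
    denom = deg * (deg - 1) / 2.0              # number of possible triangles
    with np.errstate(divide="ignore", invalid="ignore"):
        c = np.where(denom > 0, triangles / denom, 0.0)
    return c                                    # one feature per brain region


def multitask_feature_selection(X_tasks, y, lam=0.1, lr=1e-3, n_iter=500):
    """Joint feature selection across tasks with an L2,1 row-sparsity penalty.

    X_tasks: list of (n_subjects x n_regions) feature matrices, one per task
             (here: functional and structural clustering-coefficient features).
    y:       (n_subjects,) labels shared by all tasks.
    Returns W (n_regions x n_tasks); regions whose rows are non-zero
    form the selected feature set. Step size and iteration count would
    need tuning in practice.
    """
    n_features = X_tasks[0].shape[1]
    n_tasks = len(X_tasks)
    W = np.zeros((n_features, n_tasks))
    for _ in range(n_iter):
        # gradient of the squared-loss term, computed task by task
        grad = np.column_stack([
            X.T @ (X @ W[:, t] - y) for t, X in enumerate(X_tasks)
        ])
        W -= lr * grad
        # proximal step: row-wise group soft-thresholding (L2,1 penalty)
        row_norms = np.linalg.norm(W, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - lr * lam / np.maximum(row_norms, 1e-12))
        W *= shrink
    return W
```

In such a sketch, each subject would contribute one row of functional features and one row of structural features (the per-region clustering coefficients), and the regions with non-zero rows of W would be retained from both modalities before training a downstream classifier.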