The goal of this chapter is to quantify the relationships between equilibria for finite-player games, as they were defined in Chapter (Vol I)-2, and the solutions of the mean field game problems. We first show that the solution of the limiting mean field game problem can be used to construct approximate Nash equilibria for the corresponding finite-player games, and we quantify the accuracy of the approximation in terms of the size of the game. Interestingly enough, we prove a similar result for the solution of the optimal control of McKean-Vlasov stochastic dynamics. The very notion of equilibrium used for the finite-player games sheds new light on the differences between the two asymptotic problems. Next, we turn to the converse problem of the convergence of Nash equilibria for finite-player games toward solutions of the mean field game problem. We tackle this challenging problem under more specific assumptions, by means of an analytic approach based on the properties of the master equation when the latter has classical solutions.