Federated learning (FL), a collaborative machine learning framework, helps protect users' private data while still training useful models on it. Nevertheless, privacy leakage may still occur when an adversary analyzes the parameters exchanged between the central server and the clients, e.g., the weights and biases of deep neural networks. In this chapter, to effectively prevent such information leakage, we investigate a differential privacy mechanism in which artificial noise is added to the parameters at the clients' side before uploading. Moreover, we propose a K-client random scheduling policy, in which K clients are randomly selected from a total of N clients to participate in each communication round. Furthermore, we derive a theoretical convergence bound on the loss function of the trained FL model. Specifically, for a fixed privacy level, the bound reveals that there exists an optimal number of participating clients K that achieves the best convergence performance, owing to the tradeoff between the volume of user data and the variance of the aggregated artificial noise. To exploit this tradeoff, we further propose a differentially private FL based client selection (DP-FedCS) algorithm, which dynamically adjusts the number of training clients. Our experimental results validate the theoretical conclusions and show that the proposed algorithm effectively improves both FL training efficiency and FL model quality for a given privacy protection level.
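The two client-side ingredients described above, noise perturbation of the uploaded parameters and K-out-of-N random scheduling, can be sketched as follows. This is a minimal illustration, not the chapter's exact algorithm: the L2 clipping bound `clip_norm`, the noise scale `sigma`, and the FedAvg-style mean aggregation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_update(params, clip_norm=1.0, sigma=0.5, rng=rng):
    """Clip a client's parameter update in L2 norm and add Gaussian
    noise before uploading (client-side perturbation; the clipping
    bound and noise scale here are illustrative assumptions)."""
    flat = np.concatenate([p.ravel() for p in params])
    scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))
    return [p * scale + rng.normal(0.0, sigma * clip_norm, p.shape)
            for p in params]

def select_clients(n_clients, k, rng=rng):
    """K-client random scheduling: pick K of N clients uniformly
    at random, without replacement, each communication round."""
    return rng.choice(n_clients, size=k, replace=False)

def aggregate(client_updates):
    """Server-side aggregation over the K selected clients
    (assumed here to be a FedAvg-style per-layer mean)."""
    return [np.mean(layers, axis=0) for layers in zip(*client_updates)]
```

In each round the server calls `select_clients`, the chosen clients run local training and upload `perturb_update(...)` of their parameters, and the server averages the noisy uploads with `aggregate`; larger K averages down the injected noise but, as the bound in this chapter shows, K also enters the tradeoff against per-round privacy cost.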