This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization. In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which, in turn, aggregates them into a quantized global model and synchronizes the devices. With the goal of jointly determining the set of participating devices in each training iteration and the bitwidths employed at those devices, we pose an optimization problem that minimizes the training loss of quantized FL subject to a device sampling budget and a delay requirement. Our analytical results show that the improvement in FL training loss between two consecutive iterations depends not only on the device selection and quantization scheme but also on several parameters inherent to the model being learned. We therefore propose a model-based reinforcement learning (RL) method to optimize action selection across iterations. Compared to model-free RL, the proposed approach leverages the derived mathematical characterization of the FL training process to discover an effective device selection and quantization scheme without imposing additional communication overhead on the devices. Numerical evaluations show that the proposed FL framework achieves the same classification performance as model-free RL-based FL while reducing the number of training iterations needed for convergence by 20%.
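A minimal sketch of one communication round of the bitwidth FL scheme described above (devices transmit quantized local parameters; the server aggregates them into a quantized global model). The uniform min-max quantizer, equal device weights, and NumPy vectors standing in for model parameters are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def quantize(params, bitwidth):
    """Uniformly quantize a parameter vector onto 2**bitwidth levels
    spanning its own [min, max] range (an assumed, simple quantizer)."""
    levels = 2 ** bitwidth - 1
    lo, hi = params.min(), params.max()
    if hi == lo:  # constant vector: nothing to quantize
        return params.copy()
    step = (hi - lo) / levels
    return lo + np.round((params - lo) / step) * step

def aggregate(quantized_locals, weights, bitwidth):
    """Server side: weighted average of the received quantized local
    models, re-quantized to form the quantized global model."""
    avg = sum(w * q for w, q in zip(weights, quantized_locals)) / sum(weights)
    return quantize(avg, bitwidth)

# One round with three hypothetical devices and 8-parameter local models
rng = np.random.default_rng(0)
local_models = [rng.normal(size=8) for _ in range(3)]
sent = [quantize(p, bitwidth=4) for p in local_models]   # devices transmit quantized params
global_model = aggregate(sent, weights=[1, 1, 1], bitwidth=4)  # server builds quantized global model
```

In the paper's setting, the selection of which devices transmit and the bitwidth each one uses are the decision variables chosen by the model-based RL policy each round; here they are fixed for illustration.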