This paper studies online stochastic optimization in which the random parameters follow time-varying distributions. In each time slot, after a control variable is determined, a sample drawn from the current distribution is revealed as feedback. This form of stochastic optimization has broad applications in online learning and signal processing, where the underlying ground truth is inherently time-varying, e.g., tracking a moving target. Dynamic optimal points are adopted as the performance benchmark to define the regret, as opposed to the static optimal point used in stochastic optimization with fixed distributions. A projected stochastic gradient descent algorithm is presented for this setting, and an upper bound on its regret is established in terms of the drift of the dynamic optima, which measures the variation of the optimal solutions caused by the varying distributions. In particular, the algorithm achieves sublinear regret as long as the drift of the optima is sublinear, i.e., the distributions do not vary too drastically. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm and the derived analytical results.
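The setting described above can be illustrated with a minimal sketch of projected stochastic gradient descent tracking a drifting optimum. This is a toy instance under assumed choices (a quadratic loss, a ball-shaped feasible set, a sinusoidally drifting ground truth `theta_t`, and a fixed step size), not the paper's exact formulation:

```python
# Illustrative sketch, not the paper's exact setup: projected SGD
# tracking a time-varying distribution whose mean theta_t drifts slowly.
import numpy as np

rng = np.random.default_rng(0)

def project(x, radius=10.0):
    """Euclidean projection onto the ball of given radius (assumed feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

T = 2000          # number of time slots
eta = 0.1         # step size (assumed constant)
d = 2             # dimension of the control variable
x = np.zeros(d)   # control variable

regret = 0.0
for t in range(T):
    # Slowly drifting ground truth: the dynamic optimum of
    # f_t(x) = E[0.5 * ||x - (theta_t + noise)||^2] is theta_t itself.
    theta_t = np.array([np.cos(2e-3 * t), np.sin(2e-3 * t)])
    # Feedback: one sample from the current distribution, revealed after x is chosen.
    sample = theta_t + 0.1 * rng.standard_normal(d)
    # Accumulate regret against the dynamic optimum theta_t.
    regret += 0.5 * np.sum((x - theta_t) ** 2)
    # Stochastic gradient of the instantaneous loss 0.5 * ||x - sample||^2.
    grad = x - sample
    x = project(x - eta * grad)

print(f"average dynamic regret: {regret / T:.4f}")
```

Because the drift of the optima is small per slot, the iterate stays close to the moving optimum and the time-averaged dynamic regret remains small, consistent with the sublinear-regret behavior the abstract describes.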