Depth-dependent defocus limits the depth of field of consumer-level cameras. Computational imaging offers alternative solutions that recover all-in-focus images with the assistance of designed optics and algorithms. In this work, we extend the concept of focal sweep from refractive to diffractive optics, fusing multiple focal powers onto a single element. In contrast to state-of-the-art sweep models, ours generates better-conditioned point spread function (PSF) distributions over the expected depth range with a drastically shortened (40%) sweep distance. Further, by encoding axially asymmetric PSFs for the individual color channels and then sharing sharp information across channels, we preserve both detail and color fidelity. We prototype two diffractive imaging systems that work in the monochromatic and RGB color domains. Experimental results indicate that the depth of field can be significantly extended, with fewer artifacts remaining after deconvolution.
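To give a concrete sense of what "fusing multiple focal powers onto one single element" can mean, the sketch below spatially multiplexes several Fresnel-lens phase profiles into one diffractive phase mask. This is a minimal illustration under assumed parameters (wavelength, aperture, pixel count, random-multiplexing scheme); it is one simple fusion strategy, not necessarily the design used in the paper.

```python
import numpy as np

def multiplexed_doe_phase(focal_lengths, wavelength=550e-9,
                          aperture=2e-3, n=512, seed=0):
    """Illustrative multi-focal diffractive phase profile.

    Fuses several focal powers onto one element by randomly
    assigning each pixel the phase of one of several Fresnel
    lenses (spatial multiplexing). All parameter values here
    are assumptions for demonstration only.
    """
    # Sample the aperture on an n x n grid.
    x = np.linspace(-aperture / 2, aperture / 2, n)
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    # Quadratic Fresnel-lens phase for each focal length,
    # wrapped to [0, 2*pi).
    phases = [(-np.pi * r2 / (wavelength * f)) % (2 * np.pi)
              for f in focal_lengths]
    # Randomly pick, per pixel, which lens phase to keep.
    rng = np.random.default_rng(seed)
    choice = rng.integers(len(focal_lengths), size=(n, n))
    return np.choose(choice, phases)

# Fuse three focal powers (hypothetical focal lengths in meters).
phase = multiplexed_doe_phase([25e-3, 35e-3, 50e-3])
```

The resulting `phase` array is a single wrapped phase map that simultaneously carries the focusing power of all three constituent lenses, which is the basic idea behind a multi-focal diffractive element.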