Over the past several years, a growing number of social media platforms have begun taking an active role in content moderation and online speech regulation. While enforcement actions have been shown to improve outcomes within the moderating platform, less is known about possible spillover effects across platforms. We study how the removal of Facebook groups promoting anti-vaccine content affected engagement with similar content on Twitter. We followed 160 Facebook groups discussing COVID-19 vaccines and prospectively tracked their removal from the platform between April and September 2021. We then identified users who cited these groups on Twitter and examined their online behavior over time. Our findings from a stacked difference-in-differences analysis show that users citing removed Facebook groups promoted more anti-vaccine content on Twitter in the month after the removals. In particular, users citing the removed groups used 10-33% more anti-vaccine keywords on Twitter than did accounts citing groups that were not removed. Our results suggest that taking down anti-vaccine content on one platform can increase the production of similar content on other platforms, raising questions about the overall effectiveness of such measures.