Statistical hypothesis tests are referred to as robust if they are insensitive to small, random deviations from the underlying model. For two hypotheses and fixed sample sizes, robust testing is well studied and understood. However, few results exist for the case in which the number of samples is variable (i.e., sequential testing) and the number of hypotheses is larger than two (i.e., multiple hypothesis testing). This paper outlines a theory of minimax optimal sequential tests for multiple hypotheses under general distributional uncertainty. It is shown that, in analogy to the fixed sample size case, the minimax solution is an optimal test for the least favorable distributions, i.e., a test that optimally separates the most similar feasible distributions. The joint similarity of multiple distributions is shown to be determined by a weighted f-dissimilarity, whose defining function is given by the unique solution of a nonlinear integral equation and whose weights are given by the likelihood ratios of the past samples. As a consequence, the least favorable distributions depend on the past observations, and the underlying random process becomes a Markov process whose state variable coincides with the test statistic.
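For orientation, the standard (unweighted) f-dissimilarity of K distributions, in the sense of Györfi and Nemetz, takes the form sketched below; the weighted variant referred to above modifies this construction, with the convex function f determined by the nonlinear integral equation and the arguments scaled by the likelihood ratios of the past samples.

\[
D_f(P_1, \dots, P_K) \;=\; \int f\!\left( \frac{\mathrm{d}P_1}{\mathrm{d}\mu}, \dots, \frac{\mathrm{d}P_K}{\mathrm{d}\mu} \right) \mathrm{d}\mu ,
\]

where \( f \colon [0,\infty)^K \to \mathbb{R} \) is convex and \( \mu \) is a common dominating measure of \( P_1, \dots, P_K \). For \( K = 2 \) and \( f(x_1, x_2) = x_2\,\phi(x_1 / x_2) \) this reduces to an ordinary f-divergence.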