Traditional embedded systems rely on custom C code deployed in a monolithic firmware image. In these systems, all code must be trusted completely, as any code can directly modify memory or hardware registers. More recently, some embedded OSes have improved security by separating userspace applications from the kernel, using strong hardware isolation in the form of a memory protection unit (MPU). Unfortunately, this design requires either a large trusted computing base (TCB) containing all OS services, or moving many OS services into userspace. The large-TCB approach offers no protection against seemingly-correct backdoored code, discouraging the use of kernel code produced by others and complicating security audits. Moving OS services into userspace, meanwhile, comes at a cost in usability and efficiency. We posit that a model enabling two tiers of trust for kernel code is better suited to modern embedded software practices. In this paper, we present the threat model of the Tock operating system, which is based on this idea. We compare this threat model to existing security approaches, and show how it provides useful guarantees to different stakeholders.