This paper introduces a learning algorithm for a neural structure based on directed acyclic graphs (DAGs) that is structurally driven, i.e., reduction and manipulation of the internal structure are directly linked to learning. The paper extends the template-matching concepts of I-Jong Lin and Kung (see IEEE Transactions on Signal Processing, Special Issue on Neural Networks, 1996) to a neural structure with capabilities for generalization. DAG-learning draws on concepts from finite state transducers, hidden Markov models, and dynamic time warping to form an algorithmic framework within which many adaptive signal-processing techniques, such as vector quantization, K-means, and approximation networks, may be extended to temporal recognition. The paper introduces the concept of path-based learning to allow comparison among hidden Markov models (HMMs), finite state transducers (FSTs), and DAG-learning. The paper also outlines the DAG-learning process and provides results from the DAG-learning algorithm on a test set of isolated cursive handwritten characters.
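To make the shared "path-based" view concrete, the sketch below shows the dynamic-programming recursion over a topological order of a weighted DAG that underlies Viterbi decoding in HMMs and dynamic time warping alike. This is an illustrative example, not the paper's DAG-learning algorithm; the edge weights and the `best_path_score` helper are hypothetical.

```python
from collections import defaultdict

def best_path_score(edges, source, sink):
    """Maximum cumulative path score from source to sink in a weighted DAG,
    computed by dynamic programming over a topological order. The same
    path-based recursion appears in Viterbi decoding and dynamic time
    warping; this is a generic sketch, not the paper's algorithm."""
    # Build adjacency lists and in-degree counts.
    adj = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm for a topological order.
    order = []
    stack = [n for n in nodes if indeg[n] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # DP: score[v] = max over incoming edges (u, v, w) of score[u] + w.
    score = {n: float("-inf") for n in nodes}
    score[source] = 0.0
    for u in order:
        if score[u] == float("-inf"):
            continue  # unreachable from source
        for v, w in adj[u]:
            score[v] = max(score[v], score[u] + w)
    return score[sink]

# Toy DAG: two paths from 'a' to 'd'; the upper path scores higher.
edges = [("a", "b", 1.0), ("b", "d", 2.0),
         ("a", "c", 0.5), ("c", "d", 1.0)]
print(best_path_score(edges, "a", "d"))  # 3.0 via a -> b -> d
```

Replacing `max` with a log-sum or minimum-cost accumulation recovers the forward recursion of HMMs or the warping-cost recursion of DTW, which is the sense in which these models share a path-based formulation.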