Replication

Replication allows you to keep copies of your data on one or more other machines. It forms the basis of all disaster recovery and failover features ArangoDB offers.

ArangoDB offers synchronous and asynchronous replication.

Synchronous replication is used between the DB-Servers of an ArangoDB Cluster.

Asynchronous replication is used:

  • between the Leader and the Follower of an ArangoDB Leader/Follower setup
  • between the Leader and the Follower of an ArangoDB Active Failover setup
  • between multiple ArangoDB Data Centers (inside the same Data Center replication is synchronous)

Synchronous replication

Synchronous replication only works within an ArangoDB Cluster and is typically used for mission-critical data that must be accessible at all times. Synchronous replication generally stores a copy of a shard’s data on another DB-Server and keeps it in sync. Essentially, when storing data with synchronous replication enabled, the Cluster waits for all replicas to write the data before acknowledging the write operation to the client. This naturally increases the latency a bit, since one more network hop is needed for each write. However, it enables the Cluster to immediately fail over to a replica whenever an outage is detected, without losing any committed data, and mostly without even signaling an error condition to the client.

Synchronous replication is organized such that every shard has a leader and r-1 followers, where r denotes the replication factor. The number of followers can be controlled using the replicationFactor parameter when you create a collection: replicationFactor is the total number of copies being kept, that is, one plus the number of followers.
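
For example, in arangosh a collection with one leader and two followers per shard could be created along the following lines (a sketch with placeholder name and values, assuming a cluster deployment):

    db._create("test", {
      numberOfShards: 4,     // placeholder value
      replicationFactor: 3   // 1 leader + 2 followers per shard
    });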

In addition to the replicationFactor there is a writeConcern that specifies the minimum number of in-sync followers. Specifying a write concern with a value greater than 1 locks down a collection’s leader shards for writing as soon as too many followers are lost.
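
The writeConcern can be set when the collection is created or adjusted later via the collection properties, for example (a sketch, assuming the collection from the previous example and a cluster deployment):

    db.test.properties({
      writeConcern: 2   // leader shards become read-only as soon as too many followers are lost
    });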

Asynchronous replication

In ArangoDB any write operation is logged in the write-ahead log.

When using asynchronous replication, Followers connect to a Leader and apply locally all events from the Leader’s log in the same order. As a result, the Followers end up with the same state of data as the Leader.

Followers are only eventually consistent with the Leader.

Transactions are honored in replication, i.e. transactional write operations will become visible on Followers atomically.
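
For illustration, a transaction like the following (a sketch using a hypothetical accounts collection) is shipped to the Followers as a single unit, so either both inserts become visible there or neither does:

    db._executeTransaction({
      collections: { write: ["accounts"] },
      action: function () {
        var db = require("@arangodb").db;
        // both writes are replicated as one transactional unit
        db.accounts.insert({ _key: "alice", balance: 100 });
        db.accounts.insert({ _key: "bob", balance: 200 });
      }
    });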

Because all write operations are logged to the Leader database’s write-ahead log, replication in ArangoDB currently cannot be used for write-scaling. The main purposes of replication in ArangoDB are to provide read-scalability and “hot backups” of specific databases.

It is possible to connect multiple Followers to the same Leader. Followers should be used as read-only instances, and no user-initiated write operations should be carried out on them. Otherwise data conflicts may occur that cannot be solved automatically, and that will make the replication stop.

In an asynchronous replication scenario Followers pull changes from the Leader. Followers need to know which Leader to connect to, but a Leader is not aware of the Followers that replicate from it. When the network connection between the Leader and a Follower goes down, write operations on the Leader can continue normally. When the network is up again, Followers can reconnect to the Leader and transfer the remaining changes. This happens automatically, provided Followers are configured appropriately.

Before 3.3.0, asynchronous replication was configured per database. Starting with 3.3.0, it is possible to set up global replication.
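
On a Follower, global replication from a Leader can, for example, be set up in arangosh along the following lines (a sketch: the endpoint and credentials are placeholders, and only a few of the available applier options are shown):

    db._useDatabase("_system");

    require("@arangodb/replication").setupReplicationGlobal({
      endpoint: "tcp://leader.example.com:8529",  // placeholder Leader endpoint
      username: "root",                           // placeholder credentials
      password: "",
      incremental: true,  // use incremental initial synchronization
      autoResync: true    // automatically resynchronize after a longer disconnect
    });

The call performs an initial synchronization with the Leader and then keeps the global replication applier running; per-database replication is configured analogously via the applier of a single database.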

Replication lag

As described above, write operations are applied first on the Leader, and then on the Followers.

For example, let’s assume a write operation is executed on the Leader at point in time t0. To make a Follower apply the same operation, it must first fetch the write operation’s data from the Leader’s write-ahead log, then parse it and apply it locally. This will happen at some point in time after t0, let’s say t1.

The difference between t1 and t0 is called the replication lag, and it is unavoidable in asynchronous replication. The amount of replication lag depends on many factors, a few of which are:

  • the network capacity between the Followers and the Leader
  • the load of the Leader and the Followers
  • the frequency in which Followers poll the Leader for updates

Between t0 and t1, the state of data on the Leader is newer than the state of data on the Followers. At point in time t1, the state of data on the Leader and Followers is consistent again (provided no new data modifications happened on the Leader in between). Thus, the replication will lead to an eventually consistent state of data.
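
To get a rough idea of the current replication lag on a Follower, the applier state can be inspected, for example in arangosh (a sketch assuming the global applier from the setup example above is in use; the tick values are opaque and only meaningful when compared to each other):

    var state = require("@arangodb/replication").globalApplier.state().state;

    print("last applied tick:   " + state.lastAppliedContinuousTick);
    print("last available tick: " + state.lastAvailableContinuousTick);
    // the larger the gap between the applied and the available tick,
    // the further the Follower is lagging behind the Leader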

Replication overhead

As the Leader servers log all write operations in their write-ahead log anyway, replication does not cause any extra write overhead on the Leader. However, it does cause some overhead for the Leader to serve the incoming read requests of the Followers. Returning the requested data is a rather trivial task for the Leader, though, and should not result in notable performance degradation in production.