
Cluster Administration

This section includes information related to the administration of an ArangoDB Cluster.

For a general introduction to the ArangoDB Cluster, please refer to the Cluster chapter.

There is also a detailed Cluster Administration Course for download.

Please check the following talks as well:

  • 10th April 2018: Fundamentals and Best Practices of ArangoDB Cluster Administration, by Kaveh Vahedipour, ArangoDB Cluster Team (Online Meetup Page & Video)
  • 29th May 2018: Fundamentals and Best Practices of ArangoDB Cluster Administration: Part II, by Kaveh Vahedipour, ArangoDB Cluster Team (Online Meetup Page & Video)

Enabling synchronous replication

For an introduction about Synchronous Replication in Cluster, please refer to the Cluster Architecture section.

Synchronous replication can be enabled per collection. When creating a collection, you may specify the number of replicas using the replicationFactor parameter. The default value is 1, which effectively disables synchronous replication among DB-Servers.

Whenever you specify a replicationFactor greater than 1 when creating a collection, synchronous replication will be activated for this collection. The Cluster will determine suitable leaders and followers for every requested shard (numberOfShards) within the Cluster.

Example:

127.0.0.1:8530@_system> db._create("test", {"replicationFactor": 3})

In the above case, any write operation will require 3 replicas to report success from now on.
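
The replication factor of an existing collection can be inspected via its properties, and in a cluster it can also be adjusted later. A minimal sketch, assuming the test collection created above:

db.test.properties();                           // shows replicationFactor among the other settings
db.test.properties({ replicationFactor: 2 });   // adjust the replication factor afterwards (cluster only)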

Preparing growth

You may create a collection with a higher replication factor than the number of available DB-Servers. When additional DB-Servers become available, the shards are automatically replicated to them.

To create a collection with a replication factor higher than the number of available DB-Servers, set the option enforceReplicationFactor to false when creating the collection from the ArangoDB Shell (the option is not available in the web interface), e.g.:

db._create("test", { replicationFactor: 4 }, { enforceReplicationFactor: false });

The default value for enforceReplicationFactor is true.

Note: multiple replicas of the same shard can never coexist on the same DB-Server instance.

Sharding

For an introduction about Sharding in Cluster, please refer to the Cluster Sharding section.

The number of shards can be configured at collection creation time, e.g. via the web interface or the ArangoDB Shell:

db._create("sharded_collection", {"numberOfShards": 4});

To configure hashing on another attribute (the default shard key is _key):

db._create("sharded_collection", {"numberOfShards": 4, "shardKeys": ["country"]});

The example above, where ‘country’ is used as the shard key, can be useful to keep the data of each country in one shard, which results in better performance for queries working on a per-country basis.

It is also possible to specify multiple shardKeys.

Note that if you change the shard keys from their default ["_key"], then finding a document in the collection by its primary key involves a request to every single shard. However, this can be mitigated: all CRUD APIs and AQL support taking the shard keys as a lookup hint. Just make sure that the shard key attributes are present in the documents you send, or, in case of AQL, that you use a document reference or an object for the UPDATE, REPLACE or REMOVE operation which includes the shard key attributes:

FOR doc IN sharded_collection
  FILTER doc._key == "123"
  UPDATE doc WITH {  } IN sharded_collection

UPDATE { _key: "123", country: "" } WITH {  } IN sharded_collection

Using a string with just the document key as the key expression instead will be processed without a shard hint and thus perform slower:

UPDATE "123" WITH {  } IN sharded_collection

If custom shard keys are used, you can no longer specify the primary key value for a new document, but must let the server generate one automatically. This restriction comes from the fact that ensuring uniqueness of the primary key would be very inefficient if the user could specify the document key. If custom shard keys are used, trying to store documents with the primary key value (_key attribute) set will result in a runtime error (“must not specify _key for this collection”).
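
As a minimal sketch of this behavior (using a hypothetical collection by_country sharded by country):

db._create("by_country", { numberOfShards: 4, shardKeys: ["country"] });
db.by_country.insert({ country: "DE" });             // works, _key is generated by the server
db.by_country.insert({ _key: "x", country: "DE" });  // fails: "must not specify _key for this collection"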

Unique indexes (hash, skiplist, persistent) on sharded collections are only allowed if the fields used to determine the shard key are also included in the list of attribute paths for the index:

shardKeys    indexKeys
a            a            allowed
a            b            not allowed
a            a, b         allowed
a, b         a            not allowed
a, b         b            not allowed
a, b         a, b         allowed
a, b         a, b, c      allowed
a, b, c      a, b         not allowed
a, b, c      a, b, c      allowed
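
For instance, on a collection sharded by country, a unique persistent index is only accepted if country is among the indexed fields. A minimal sketch (collection and field names are hypothetical):

db._create("users", { numberOfShards: 4, shardKeys: ["country"] });
db.users.ensureIndex({ type: "persistent", fields: ["country", "email"], unique: true }); // allowed
db.users.ensureIndex({ type: "persistent", fields: ["email"], unique: true });            // rejected: shard key not covered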

On which DB-Server in a Cluster a particular shard is kept is undefined. There is no option to configure an affinity based on certain shard keys.

Sharding strategy

The shardingStrategy option determines the sharding strategy to use for the collection. Since ArangoDB 3.4 there are different sharding strategies to select from when creating a new collection. The selected shardingStrategy value will remain fixed for the collection and cannot be changed afterwards. This is important so that the collection keeps its sharding settings and documents already distributed to shards can always be found using the same initial sharding algorithm.

The available sharding strategies are:

  • community-compat: default sharding used by ArangoDB Community Edition before version 3.4
  • enterprise-compat: default sharding used by ArangoDB Enterprise Edition before version 3.4
  • enterprise-smart-edge-compat: default sharding used by smart edge collections in ArangoDB Enterprise Edition before version 3.4
  • hash: default sharding used for new collections starting from version 3.4 (excluding smart edge collections)
  • enterprise-hash-smart-edge: default sharding used for new smart edge collections starting from version 3.4

If no sharding strategy is specified, the default will be hash for all collections, and enterprise-hash-smart-edge for all smart edge collections (requires the Enterprise Edition of ArangoDB). Manually overriding the sharding strategy does not yet provide a benefit, but it may later in case other sharding strategies are added.
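
As a minimal sketch, the strategy can be set explicitly when creating the collection (here to hash, which is already the default):

db._create("sharded_collection", { numberOfShards: 4, shardingStrategy: "hash" });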

The OneShard feature does not have its own sharding strategy, it uses hash instead.

Moving/Rebalancing shards

A shard can be moved from one DB-Server to another, and the entire shard distribution can be rebalanced using the corresponding buttons in the web UI.
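
Shard moves can also be triggered via the cluster administration API of a Coordinator. A hedged sketch (database, collection, shard and server IDs are placeholders; the shard ID can be taken from the Shards page or from the shardDistribution API used further below):

curl <coord-ip:coord-port>/_admin/cluster/moveShard -d '{"database": "_system", "collection": "test", "shard": "s100001", "fromServer": "DBServer0001", "toServer": "DBServer0002"}'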

Replacing/Removing a Coordinator

Coordinators are effectively stateless and can be replaced, added, and removed with no consideration beyond the needs of the particular installation.

To take a Coordinator out of service, stop its instance by issuing kill -SIGTERM <pid>.

About 15 seconds later, the cluster UI on any other Coordinator will mark the Coordinator in question as failed. Almost simultaneously, a recycle bin icon will appear to the right of the Coordinator’s name. Clicking that icon removes the Coordinator from the Coordinator registry.

Any new Coordinator instance that is told where to find the Agency via --cluster.agency-endpoint <some agent endpoint> will be integrated into the cluster as a new Coordinator. You may also just restart the Coordinator as before and it will reintegrate itself into the cluster.
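
A hedged sketch of such a manual start (endpoints and addresses are placeholders; your deployment tooling may set these options for you):

arangod --server.endpoint tcp://0.0.0.0:8530 \
        --cluster.my-address tcp://<coordinator-ip>:8530 \
        --cluster.my-role COORDINATOR \
        --cluster.agency-endpoint tcp://<agent-ip>:8531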

Replacing/Removing a DB-Server

DB-Servers are where the data of an ArangoDB cluster is stored. They do not expose a web UI and are not meant to be accessed by anything other than Coordinators (to perform client requests) and other DB-Servers (to uphold replication and resilience).

The clean way of removing a DB-Server is to first relieve it of all its responsibilities for shards. This applies to followers as well as leaders of shards. The prerequisite for this operation is that no collection in any of the databases has a replicationFactor greater than the current number of DB-Servers minus one. In other words, the highest replication factor must not exceed the future DB-Server count. Cleaning out DBServer004, for example, works as follows when the request is issued to any Coordinator of the cluster:

curl <coord-ip:coord-port>/_admin/cluster/cleanOutServer -d '{"server":"DBServer004"}'
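
The call returns immediately and reports a job ID. Assuming your version offers the queryAgencyJob endpoint, the progress of the clean-out job can be checked like this (a sketch; <job-id> is the ID from the previous response):

curl <coord-ip:coord-port>/_admin/cluster/queryAgencyJob?id=<job-id>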

After the DB-Server has been cleaned out, you will find a recycle bin icon to the right of its name in any Coordinator’s UI. Clicking on it will remove the DB-Server in question from the cluster.

Firing up any DB-Server from a clean data directory and pointing it to one or more Agency endpoints will integrate the new DB-Server into the cluster.
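
A hedged sketch of such a start, analogous to the Coordinator example above (addresses and port are placeholders):

arangod --server.endpoint tcp://0.0.0.0:<port> \
        --cluster.my-address tcp://<dbserver-ip>:<port> \
        --cluster.my-role DBSERVER \
        --cluster.agency-endpoint tcp://<agent-ip>:8531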

To distribute shards onto the new DB-Server, click the Distribute Shards button at the bottom of the Shards page in each database.

The clean-out process can be monitored using the following script, which periodically prints the number of shards that still need to be moved. It is basically a countdown to when the process finishes.

Save the code below to a file named serverCleanMonitor.js:

// Monitor the clean-out of a DB-Server by counting the shards it still leads.
var dblist = db._databases();
var internal = require("internal");
var arango = internal.arango;

var server = ARGUMENTS[0];
var sleep = ARGUMENTS[1] | 0;

if (!server) {
  print("\nNo server name specified. Provide it like:\n\narangosh <options> -- DBServerXXXX");
  process.exit();
}

if (sleep <= 0) sleep = 10;
console.log("Checking shard distribution every %d seconds...", sleep);

var count;
do {
  count = 0;
  // Check the shard distribution of every database.
  for (dbase in dblist) {
    var sd = arango.GET("/_db/" + dblist[dbase] + "/_admin/cluster/shardDistribution");
    var collections = sd.results;
    for (collection in collections) {
      var current = collections[collection].Current;
      // Count the shards that are still led by the given server.
      for (shard in current) {
        if (current[shard].leader == server) {
          ++count;
        }
      }
    }
  }
  console.log("Shards to be moved away from node %s: %d", server, count);
  if (count == 0) break;
  internal.wait(sleep);
} while (count > 0);

This script has to be executed in arangosh by issuing the following command:

arangosh --server.username <username> --server.password <password> --javascript.execute <path/to/serverCleanMonitor.js> -- DBServer<number>

The output should be similar to the one below:

arangosh --server.username root --server.password pass --javascript.execute ~/serverCleanMonitor.js -- DBServer0002
[7836] INFO Checking shard distribution every 10 seconds...
[7836] INFO Shards to be moved away from node DBServer0002: 9
[7836] INFO Shards to be moved away from node DBServer0002: 4
[7836] INFO Shards to be moved away from node DBServer0002: 1
[7836] INFO Shards to be moved away from node DBServer0002: 0

The current status is logged every 10 seconds. You may adjust the interval by passing a number after the DB-Server name, e.g. arangosh <options> -- DBServer0002 60 for every 60 seconds.

Once the count reaches 0, all shards of the DB-Server in question have been moved and the cleanOutServer process has finished.