Replication Dump Commands

The inventory method can be used to query an ArangoDB database's current set of collections plus their indexes. Clients can use this method to get an overview of which collections are present in the database. They can use this information to start either a full or a partial synchronization of data, e.g. to initiate a backup or an incremental data synchronization.

Return inventory of collections and indexes

Returns an overview of collections and their indexes

GET /_api/replication/inventory

Query Parameters

  • includeSystem (optional): Include system collections in the result. The default value is true.

  • global (optional): Include all databases in the response. Only works on the _system database. The default value is false.

  • batchId (required): The RocksDB engine requires a valid batchId for this API call.

Returns the array of collections and indexes available on the server. This array can be used by replication clients to initiate an initial sync with the server.

The response will contain a JSON object with the collections, state and tick attributes.

collections is an array of collections with the following sub-attributes:

  • parameters: the collection properties

  • indexes: an array of the indexes of the collection. Primary indexes and edge indexes are not included in this array.

The state attribute contains the current state of the replication logger. It contains the following sub-attributes:

  • running: whether or not the replication logger is currently active. Note: since ArangoDB 2.2, the value will always be true

  • lastLogTick: the value of the last tick the replication logger has written

  • time: the current time on the server

Replication clients should note the lastLogTick value returned. They can then fetch collections’ data using the dump method up to the value of lastLogTick, and query the continuous replication log for log events after this tick value.

To create a full copy of the collections on the server, a replication client can execute these steps:

  • call the /inventory API method. This returns the lastLogTick value and the array of collections and indexes from the server.

  • for each collection returned by /inventory, create the collection locally and call /dump to stream the collection data to the client, up to the value of lastLogTick. After that, the client can create the indexes on the collections as they were reported by /inventory.
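As a sketch, the first part of these steps can be expressed in Python. The inventory payload below is a hypothetical, trimmed example (not verbatim server output), and the batchId parameter is omitted for brevity:

```python
import json

# Hypothetical, trimmed /inventory response with the attributes described above.
inventory = json.loads("""
{
  "collections": [
    {"parameters": {"name": "demo", "id": "9988"}, "indexes": []}
  ],
  "state": {"running": true, "lastLogTick": "317", "time": "2019-01-01T12:00:00Z"},
  "tick": "317"
}
""")

# Note the lastLogTick for the later continuous replication phase.
last_log_tick = inventory["state"]["lastLogTick"]

# For each collection, a client would create it locally and stream its data
# via /dump up to lastLogTick; indexes are created afterwards.
dump_urls = [
    "/_api/replication/dump?collection={}&to={}".format(
        c["parameters"]["name"], last_log_tick
    )
    for c in inventory["collections"]
]
print(dump_urls)
```

The actual HTTP transport, local collection creation, and index creation are left out; only the inventory bookkeeping is shown.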

If the client wants to continuously stream replication log events from the logger server, the following additional steps need to be carried out:

  • the client should call /logger-follow initially to fetch the first batch of replication events that were logged after the client’s call to /inventory.

    The call to /logger-follow should use a from parameter with the value of the lastLogTick as reported by /inventory. The call to /logger-follow will return the x-arango-replication-lastincluded header, which contains the last tick value included in the response.

  • the client can then continuously call /logger-follow to incrementally fetch new replication events that occurred after the last transfer.

    Calls should use a from parameter with the value of the x-arango-replication-lastincluded header of the previous response. If there are no more replication events, the response will be empty and clients can go to sleep for a while and try again later.
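The decision of how to continue tailing /logger-follow can be sketched as a small helper; the header names match the description above, while the sample inputs are hypothetical:

```python
def next_from(headers, body):
    """Decide how to continue tailing /logger-follow.

    Returns the tick to use as the next 'from' value, or None when the
    response carried no events (the client should sleep for a while and
    retry with the same 'from' value).
    """
    last_included = headers.get("x-arango-replication-lastincluded", "0")
    if not body or last_included == "0":
        return None
    return last_included

# An event was included up to tick 450: continue from there.
print(next_from({"x-arango-replication-lastincluded": "450"}, b'{"tick":"450"}\n'))
# Empty response: back off and retry later with the same 'from' value.
print(next_from({"x-arango-replication-lastincluded": "0"}, b""))
```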

Note: on a coordinator, this request must have the query parameter DBserver which must be an ID of a DBserver. The very same request is forwarded synchronously to that DBserver. It is an error if this attribute is not bound in the coordinator case.

Note: when using the global parameter, the top-level object contains a key databases, under which each key represents a database name, and the value conforms to the description above.

Return codes

  • 200: is returned if the request was executed successfully.

  • 405: is returned when an invalid HTTP method is used.

  • 500: is returned if an error occurred while assembling the response.

Examples

shell> curl --header 'accept: application/json' --dump - http://localhost:8529/_api/replication/inventory

HTTP/1.1 200 OK
keep-alive: timeout=300
x-content-type-options: nosniff
content-type: application/json


With some additional indexes:

shell> curl --header 'accept: application/json' --dump - http://localhost:8529/_api/replication/inventory

HTTP/1.1 200 OK
keep-alive: timeout=300
x-content-type-options: nosniff
content-type: application/json


The batch method will create a snapshot of the current state that can then be dumped. A batchId is required when using the dump API with the RocksDB engine.

Create new dump batch

handle a dump batch command

POST /_api/replication/batch

Note: These calls are uninteresting to users.

A JSON object with these properties is required:

  • ttl: the time-to-live for the new batch (in seconds)

A JSON object with the batch configuration.

Creates a new dump batch and returns the batch’s id.

The response is a JSON object with the following attributes:

  • id: the id of the batch

Note: on a coordinator, this request must have the query parameter DBserver which must be an ID of a DBserver. The very same request is forwarded synchronously to that DBserver. It is an error if this attribute is not bound in the coordinator case.

Return codes

  • 200: is returned if the batch was created successfully.

  • 400: is returned if the ttl value is invalid or if the DBserver attribute is not specified or illegal on a coordinator.

  • 405: is returned when an invalid HTTP method is used.

Delete existing dump batch

handle a dump batch command

DELETE /_api/replication/batch/{id}

Note: These calls are uninteresting to users.

Path Parameters

  • id (required): The id of the batch.

Deletes the existing dump batch, allowing compaction and cleanup to resume.

Note: on a coordinator, this request must have the query parameter DBserver which must be an ID of a DBserver. The very same request is forwarded synchronously to that DBserver. It is an error if this attribute is not bound in the coordinator case.

Return codes

  • 204: is returned if the batch was deleted successfully.

  • 400: is returned if the batch was not found.

  • 405: is returned when an invalid HTTP method is used.

Prolong existing dump batch

handle a dump batch command

PUT /_api/replication/batch/{id}

Note: These calls are uninteresting to users.

Path Parameters

  • id (required): The id of the batch.

A JSON object with these properties is required:

  • ttl: the time-to-live for the new batch (in seconds)

Extends the ttl of an existing dump batch, using the batch’s id and the provided ttl value.

If the batch’s ttl can be extended successfully, the response is empty.

Note: on a coordinator, this request must have the query parameter DBserver which must be an ID of a DBserver. The very same request is forwarded synchronously to that DBserver. It is an error if this attribute is not bound in the coordinator case.

Return codes

  • 204: is returned if the batch’s ttl was extended successfully.

  • 400: is returned if the ttl value is invalid or the batch was not found.

  • 405: is returned when an invalid HTTP method is used.
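The three batch operations above (create, prolong, delete) can be sketched as simple request builders; this is an illustration only, with transport and error handling left to the client:

```python
import json

def batch_request(action, batch_id=None, ttl=None):
    """Build (method, path, body) tuples for the three batch endpoints
    described above. A sketch only, not a full client."""
    base = "/_api/replication/batch"
    if action == "create":
        # POST /_api/replication/batch with a ttl in seconds
        return ("POST", base, json.dumps({"ttl": ttl}))
    if action == "extend":
        # PUT /_api/replication/batch/{id} to prolong the ttl
        return ("PUT", "{}/{}".format(base, batch_id), json.dumps({"ttl": ttl}))
    if action == "delete":
        # DELETE /_api/replication/batch/{id}
        return ("DELETE", "{}/{}".format(base, batch_id), None)
    raise ValueError("unknown action: " + action)

print(batch_request("create", ttl=300))
print(batch_request("extend", batch_id="75", ttl=300))
print(batch_request("delete", batch_id="75"))
```

The batch id "75" is a made-up example; a real client would use the id returned by the create call.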

The dump method can be used to fetch data from a specific collection. As the results of the dump command can be huge, dump may not return all data from a collection at once. Instead, the dump command may be called repeatedly by replication clients until there is no more data to fetch. The dump command will not only return the current documents in the collection, but also document updates and deletions.

Please note that the dump method will only return documents, updates and deletions from a collection’s journals and datafiles. Operations that are stored in the write-ahead log only will not be returned. In order to ensure that these operations are included in a dump, the write-ahead log must be flushed first.

To get to an identical state of data, replication clients should apply the individual parts of the dump results in the same order as they are provided.

Return data of a collection

returns the whole content of one collection

GET /_api/replication/dump

Query Parameters

  • collection (required): The name or id of the collection to dump.

  • chunkSize (optional): Approximate maximum size of the returned result, in bytes.

  • batchId (required): RocksDB only - The id of the snapshot to use.

  • from (optional): MMFiles only - Lower bound tick value for results.

  • to (optional): MMFiles only - Upper bound tick value for results.

  • includeSystem (optional): MMFiles only - Include system collections in the result. The default value is true.

  • ticks (optional): MMFiles only - Whether or not to include tick values in the dump. The default value is true.

  • flush (optional): MMFiles only - Whether or not to flush the WAL before dumping. The default value is true.

Returns the data from the collection for the requested range.

When the from query parameter is not used, collection events are returned from the beginning. When the from parameter is used, the result will only contain collection entries which have higher tick values than the specified from value (note: the log entry with a tick value equal to from will be excluded).

The to query parameter can be used to optionally restrict the upper bound of the result to a certain tick value. If used, the result will only contain collection entries with tick values up to (including) to.

The chunkSize query parameter can be used to control the size of the result. It must be specified in bytes. The chunkSize value will only be honored approximately: otherwise a very low chunkSize value could prevent the server from putting even a single entry into the result. The chunkSize value is therefore only consulted after an entry has been written into the result. If the result size is then bigger than chunkSize, the server will respond with as many entries as are in the response already. If the result size is still smaller than chunkSize, the server will try to return more data if there is more data left to return.

If chunkSize is not specified, some server-side default value will be used.
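The chunkSize rule described above (the limit is checked only after an entry has been written) can be sketched as:

```python
def fill_chunk(entries, chunk_size):
    """Append entries until the accumulated size reaches chunk_size.

    Mirrors the rule above: the limit is consulted only after an entry
    has been written, so at least one entry is always returned."""
    result, size = [], 0
    for entry in entries:
        result.append(entry)
        size += len(entry)
        if size >= chunk_size:
            break
    return result

entries = ['{"tick":"1"}', '{"tick":"2"}', '{"tick":"3"}']
# Even with a chunk size smaller than any single entry, one entry is returned.
print(fill_chunk(entries, 5))
print(fill_chunk(entries, 20))
```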

The Content-Type of the result is application/x-arango-dump. This is an easy-to-process format, with all entries going onto separate lines in the response body.

Each line itself is a JSON object, with at least the following attributes:

  • tick: the operation’s tick attribute

  • key: the key of the document/edge or the key used in the deletion operation

  • rev: the revision id of the document/edge or the deletion operation

  • data: the actual document/edge data for types 2300 and 2301. The full document/edge data will be returned even for updates.

  • type: the type of entry. Possible values for type are:

    • 2300: document insertion/update

    • 2301: edge insertion/update

    • 2302: document/edge deletion

Note: there will be no distinction between inserts and updates when calling this method.
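A client applying a dump result line by line, in order, might look like the following sketch; the payload is a hypothetical two-line example in the format described above:

```python
import json

# Hypothetical application/x-arango-dump payload: one JSON object per line.
payload = '\n'.join([
    '{"tick":"100","type":2300,"key":"abc","rev":"1","data":{"_key":"abc","v":1}}',
    '{"tick":"101","type":2302,"key":"abc","rev":"2"}',
])

DOC_UPSERT, EDGE_UPSERT, DELETION = 2300, 2301, 2302

documents, removed = {}, set()
for line in payload.splitlines():
    op = json.loads(line)
    if op["type"] in (DOC_UPSERT, EDGE_UPSERT):
        documents[op["key"]] = op["data"]   # full document/edge is always present
        removed.discard(op["key"])
    elif op["type"] == DELETION:
        documents.pop(op["key"], None)
        removed.add(op["key"])

print(documents, removed)
```

Because entries are applied in order, the insertion at tick 100 is undone by the deletion at tick 101, which matches the requirement that clients apply the parts of the dump in the order provided.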

Return codes

  • 200: is returned if the request was executed successfully and data was returned. The header x-arango-replication-lastincluded is set to the tick of the last document returned.

  • 204: is returned if the request was executed successfully, but there was no content available. The header x-arango-replication-lastincluded is 0 in this case.

  • 400: is returned if either the from or to values are invalid.

  • 404: is returned when the collection could not be found.

  • 405: is returned when an invalid HTTP method is used.

  • 500: is returned if an error occurred while assembling the response.

Examples

Empty collection:

shell> curl --header 'accept: application/json' --dump - http://localhost:8529/_api/replication/dump?collection=testCollection

HTTP/1.1 204 No Content
keep-alive: timeout=300
x-arango-replication-checkmore: false
x-arango-replication-lastincluded: 0
x-content-type-options: nosniff
content-type: application/x-arango-dump

Non-empty collection (One JSON document per line):

shell> curl --header 'accept: application/json' --dump - http://localhost:8529/_api/replication/dump?collection=testCollection

HTTP/1.1 200 OK
keep-alive: timeout=300
x-arango-replication-checkmore: false
x-arango-replication-lastincluded: 212
x-content-type-options: nosniff
content-type: application/x-arango-dump


Synchronize data from a remote endpoint

start a replication

PUT /_api/replication/sync

A JSON object with these properties is required:

  • endpoint: the master endpoint to connect to (e.g. “tcp://192.168.173.13:8529”).

  • database: the database name on the master (if not specified, defaults to the name of the local current database).

  • username: an optional ArangoDB username to use when connecting to the endpoint.

  • password: the password to use when connecting to the endpoint.

  • includeSystem: whether or not system collection operations will be applied

  • incremental: if set to true, then an incremental synchronization method will be used for synchronizing data in collections. This method is useful when collections already exist locally, and only the remaining differences need to be transferred from the remote endpoint. In this case, the incremental synchronization can be faster than a full synchronization. The default value is false, meaning that the complete data from the remote collection will be transferred.

  • restrictType: an optional string value for collection filtering. When specified, the allowed values are include or exclude.

  • restrictCollections: an optional array of collections for use with restrictType. If restrictType is include, only the specified collections will be synchronized. If restrictType is exclude, all but the specified collections will be synchronized.

  • initialSyncMaxWaitTime: the maximum wait time (in seconds) that the initial synchronization will wait for a response from the master when fetching initial collection data. This wait time can be used to control after what time the initial synchronization will give up waiting for a response and fail. This value will be ignored if set to 0.

Starts a full data synchronization from a remote endpoint into the local ArangoDB database.

The sync method can be used by replication clients to connect an ArangoDB database to a remote endpoint, fetch the remote list of collections and indexes, and collection data. It will thus create a local backup of the state of data at the remote ArangoDB database. sync works on a per-database level.

sync will first fetch the list of collections and indexes from the remote endpoint. It does so by calling the inventory API of the remote database. It will then purge the data in the local ArangoDB database, and afterwards start transferring collection data from the remote database to the local ArangoDB database. It will extract the data from the remote database by calling the remote database's dump API until all data are fetched.

In case of success, the body of the response is a JSON object with the following attributes:

  • collections: an array of collections that were transferred from the endpoint

  • lastLogTick: the last log tick on the endpoint at the time the transfer was started. Use this value as the from value when starting the continuous synchronization later.

WARNING: calling this method will synchronize data from the collections found on the remote endpoint to the local ArangoDB database. All data in the local collections will be purged and replaced with data from the endpoint.

Use with caution!

Note: this method is not supported on a coordinator in a cluster.
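Assembling the request body for this endpoint, including the restrictType/restrictCollections pairing described above, could be sketched as follows; the endpoint value is the example address from the parameter list, and transport and credentials are out of scope:

```python
import json

def sync_body(endpoint, restrict_type=None, restrict_collections=None, **extra):
    """Assemble the JSON body for PUT /_api/replication/sync.

    Validates the restrictType/restrictCollections pairing described
    above. A sketch only, not a full client."""
    body = {"endpoint": endpoint}
    if restrict_type is not None:
        if restrict_type not in ("include", "exclude"):
            raise ValueError("restrictType must be 'include' or 'exclude'")
        if not restrict_collections:
            raise ValueError("restrictType requires restrictCollections")
        body["restrictType"] = restrict_type
        body["restrictCollections"] = restrict_collections
    body.update(extra)
    return json.dumps(body, sort_keys=True)

print(sync_body("tcp://192.168.173.13:8529",
                restrict_type="include",
                restrict_collections=["users"],
                incremental=True))
```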

Return codes

  • 200: is returned if the request was executed successfully.

  • 400: is returned if the configuration is incomplete or malformed.

  • 405: is returned when an invalid HTTP method is used.

  • 500: is returned if an error occurred during synchronization.

  • 501: is returned when this operation is called on a coordinator in a cluster.

Return cluster inventory of collections and indexes

returns an overview of collections and indexes in a cluster

GET /_api/replication/clusterInventory

Query Parameters

  • includeSystem (optional): Include system collections in the result. The default value is true.

Returns the array of collections and indexes available on the cluster.

The response will be an array of JSON objects, one for each collection. Each collection contains exactly two keys, "parameters" and "indexes". This information comes from Plan/Collections/{DB-Name}/* in the agency, with the indexes attribute relocated to match the data format of arangodump.

Return codes

  • 200: is returned if the request was executed successfully.

  • 405: is returned when an invalid HTTP method is used.

  • 500: is returned if an error occurred while assembling the response.