ArangoDB v3.8 reached End of Life (EOL) and is no longer supported.

This documentation is outdated. Please see the most recent version at docs.arangodb.com

Database Methods

Collection

returns a single collection or null
db._collection(collection-name)

Returns the collection with the given name or null if no such collection exists.

db._collection(collection-identifier)

Returns the collection with the given identifier or null if no such collection exists. Accessing collections by identifier is discouraged for end users. End users should access collections using the collection name.

Examples

Get a collection by name:

arangosh> db._collection("demo");
[ArangoCollection 91, "demo" (type document, status loaded)]

Get a collection by id:

arangosh> db._collection(123456);
[ArangoCollection 123456, "demo" (type document, status loaded)]

Unknown collection:

arangosh> db._collection("unknown");
null

Create

creates a new document or edge collection
db._create(collection-name)

Creates a new document collection named collection-name. If the collection name already exists or if the name format is invalid, an error is thrown. For more information on valid collection names please refer to the naming conventions.

db._create(collection-name, properties)

properties must be an object with the following attributes:

  • waitForSync (optional, default false): If true, creating a document only returns after the data has been synced to disk.

  • keyOptions (optional): additional options for key generation. If specified, then keyOptions should be a JSON object containing the following attributes (note: some of them are optional):
    • type: specifies the type of the key generator. The currently available generators are traditional, autoincrement, uuid and padded.

      The traditional key generator generates numerical keys in ascending order. The sequence of keys is not guaranteed to be gap-free.

      The autoincrement key generator generates numerical keys in ascending order; the initial offset and the spacing can be configured (note: autoincrement is currently only supported for non-sharded collections). The sequence of generated keys is not guaranteed to be gap-free, because a new key is generated on every document insert attempt, not just for successful inserts.

      The padded key generator generates keys of a fixed length (16 bytes) in ascending lexicographical sort order. This is ideal for use with the RocksDB engine, which slightly benefits from keys that are inserted in lexicographically ascending order. The key generator can be used in a single server or cluster. The sequence of generated keys is not guaranteed to be gap-free.

      The uuid key generator generates universally unique 128 bit keys, which are stored in a hexadecimal human-readable format. This key generator can be used in a single server or cluster to generate “seemingly random” keys. The keys produced by this key generator are not lexicographically sorted.

      Please note that keys are currently only guaranteed to be truly ascending in single server deployments. The reason is that document keys can be generated not only by the DB-Server, but also by Coordinators (of which there are normally multiple instances). While each component still generates an ascending sequence of keys, the overall sequence (mixing the results from different components) may not be ascending. ArangoDB 3.10 changes this behavior so that collections with only a single shard can provide truly ascending keys.

    • allowUserKeys: if set to true, you may supply your own key values in the _key attribute of a document. If set to false, the key generator is solely responsible for generating keys, and supplying your own key values in the _key attribute of documents is considered an error.
    • increment: increment value for autoincrement key generator. Not used for other key generator types.
    • offset: initial offset value for autoincrement key generator. Not used for other key generator types.
  • schema: An object that specifies the collection-level document schema. The attribute keys rule, level and message must follow the rules documented in Document Schema Validation.

  • cacheEnabled: Whether the in-memory hash cache for documents should be enabled for this collection (default: false). Can be controlled globally with the --cache.size startup option. The cache can speed up repeated reads of the same documents via their document keys. If the same documents are not fetched often or are modified frequently, then you may disable the cache to avoid the maintenance costs.

  • isSystem (optional, default is false): If true, create a system collection. In this case collection-name should start with an underscore. End users should normally create non-system collections only. API implementors may be required to create system collections on very special occasions, but normally a regular collection will do.

  • syncByRevision: Whether the newer revision-based replication protocol is enabled for this collection. This is an internal property.

  • numberOfShards (optional, default is 1): in a cluster, this value determines the number of shards to create for the collection. In a single server setup, this option is meaningless.

  • shardKeys (optional, default is [ "_key" ]): in a cluster, this attribute specifies which document attributes are used to determine the target shard for documents. Documents are sent to shards based on the values they have in their shard key attributes. The values of all shard key attributes in a document are hashed, and the hash value is used to determine the target shard. Note that values of shard key attributes cannot be changed once set. This option is meaningless in a single server setup.

    When choosing the shard keys, one must be aware of the following rules and limitations: In a sharded collection with more than one shard it is not possible to set up a unique constraint on an attribute that is not the one and only shard key given in shardKeys. This is because enforcing a unique constraint would otherwise make a global index necessary or need extensive communication for every single write operation. Furthermore, if _key is not the one and only shard key, then it is not possible to set the _key attribute when inserting a document, provided the collection has more than one shard. Again, this is because the database has to enforce the unique constraint on the _key attribute and this can only be done efficiently if this is the only shard key by delegating to the individual shards.

  • replicationFactor (optional, default is 1): in a cluster, this attribute determines how many copies of each shard are kept on different DB-Servers. The value 1 means that only one copy (no synchronous replication) is kept. A value of k means that k-1 replicas are kept. Any two copies reside on different DB-Servers. Replication between them is synchronous, that is, every write operation to the “leader” copy will be replicated to all “follower” replicas, before the write operation is reported successful.

    If a server fails, this is detected automatically and one of the servers holding copies takes over, usually without an error being reported.

    When using the Enterprise Edition of ArangoDB, the replicationFactor may be set to “satellite”, making the collection locally joinable on every DB-Server. This reduces the number of network hops dramatically when using joins in AQL, at the cost of reduced write performance on these collections.

  • writeConcern (optional, default is 1): in a cluster, this attribute determines how many copies of each shard are required to be in sync on the different DB-Servers. If there are fewer than this many copies in the cluster, a shard refuses to write. The value of writeConcern cannot be larger than replicationFactor. Please note: during server failures, this might make writes impossible until the failover is sorted out, and it might slow down writes in exchange for data durability.

  • shardingStrategy (optional): specifies the name of the sharding strategy to use for the collection. Since ArangoDB 3.4 there are different sharding strategies to select from when creating a new collection. The selected shardingStrategy value will remain fixed for the collection and cannot be changed afterwards. This is important to make the collection keep its sharding settings and always find documents already distributed to shards using the same initial sharding algorithm.

    The available sharding strategies are:

    • community-compat: default sharding used by ArangoDB Community Edition before version 3.4
    • enterprise-compat: default sharding used by ArangoDB Enterprise Edition before version 3.4
    • enterprise-smart-edge-compat: default sharding used by smart edge collections in ArangoDB Enterprise Edition before version 3.4
    • hash: default sharding used for new collections starting from version 3.4 (excluding smart edge collections)
    • enterprise-hash-smart-edge: default sharding used for new smart edge collections starting from version 3.4

    If no sharding strategy is specified, the default will be hash for all collections, and enterprise-hash-smart-edge for all smart edge collections (requires the Enterprise Edition of ArangoDB). Manually overriding the sharding strategy does not yet provide a benefit, but it may later in case other sharding strategies are added.

    In single-server mode, the shardingStrategy attribute is meaningless and will be ignored.

  • distributeShardsLike: distribute the shards of this collection by cloning the shard distribution of another collection. If this value is set, the replicationFactor, numberOfShards and shardingStrategy attributes are copied from the other collection.

  • isSmart: Whether the collection is for a SmartGraph (Enterprise Edition only). This is an internal property.

  • isDisjoint (boolean): Whether the collection is for a Disjoint SmartGraph (Enterprise Edition only). This is an internal property.

  • smartGraphAttribute: The attribute that is used for sharding: vertices with the same value of this attribute are placed in the same shard. All vertices are required to have this attribute set and it has to be a string. Edges derive the attribute from their connected vertices.

    This feature can only be used in the Enterprise Edition.

  • smartJoinAttribute: in an Enterprise Edition cluster, this attribute determines an attribute of the collection that must contain the shard key value of the referred-to SmartJoin collection. Additionally, the sharding key for a document in this collection must contain the value of this attribute, followed by a colon, followed by the actual primary key of the document.

    This feature can only be used in the Enterprise Edition and requires the distributeShardsLike attribute of the collection to be set to the name of another collection. It also requires the shardKeys attribute of the collection to be set to a single shard key attribute, with an additional ‘:’ at the end. A further restriction is that whenever documents are stored or updated in the collection, the value stored in the smartJoinAttribute must be a string.
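The autoincrement and padded key generators described above can be illustrated with a minimal JavaScript sketch. This is a hypothetical simulation of the documented behavior only, not ArangoDB's actual implementation; the function names are invented for illustration:

```javascript
// Sketch of an autoincrement key generator: keys start at `offset`
// and grow by `increment` on each insert attempt (document keys are strings).
function makeAutoincrementGenerator(offset, increment) {
  let lastValue = null;
  return function nextKey() {
    lastValue = (lastValue === null) ? offset : lastValue + increment;
    return String(lastValue);
  };
}

// Sketch of a padded key generator: a counter rendered as 16 zero-padded
// hex characters, so lexicographical order matches insertion order.
function makePaddedGenerator() {
  let counter = 0n;
  return function nextKey() {
    counter += 1n;
    return counter.toString(16).padStart(16, "0");
  };
}

const next = makeAutoincrementGenerator(10, 5);
console.log(next(), next(), next()); // → 10 15 20
const padded = makePaddedGenerator();
console.log(padded()); // → 0000000000000001
```

With offset 10 and increment 5, this sketch reproduces the key sequence shown in the autoincrement example further below (10, 15, 20).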
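Hash-based shard selection, as described for shardKeys above, can be sketched as follows. ArangoDB's real hash function differs; this hypothetical example (using FNV-1a) only illustrates why documents with equal shard key values always land on the same shard:

```javascript
// Pick a shard for a document by hashing the values of its shard key
// attributes and taking the result modulo the number of shards.
function shardFor(doc, shardKeys, numberOfShards) {
  // Concatenate the values of all shard key attributes.
  const material = shardKeys.map(k => String(doc[k] ?? "")).join("|");
  // Simple FNV-1a hash (illustrative only; not ArangoDB's hash).
  let h = 0x811c9dc5;
  for (let i = 0; i < material.length; i++) {
    h ^= material.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % numberOfShards;
}

const s = shardFor({ country: "de", _key: "123" }, ["country"], 4);
// Documents with the same shard key value always map to the same shard:
console.log(s === shardFor({ country: "de", _key: "999" }, ["country"], 4)); // true
```

This also shows why shard key values must not change after insertion: rehashing a modified value could map the document to a different shard than the one it is stored on.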


db._create(collection-name, properties, type)

Specifies the optional type of the collection; it can be either document or edge. By default, it is document. Instead of giving a type, you can also use db._createEdgeCollection or db._createDocumentCollection.


db._create(collection-name, properties[, type], options)

As an optional third parameter (if the type string is omitted) or fourth parameter, you can specify an options object that controls how the cluster creates the collection. These options are only relevant at creation time and are not persisted:

  • waitForSyncReplication (default: true): When enabled, the server only reports success back to the client once all replicas have created the collection. Set to false if you want faster server responses and don’t care about full replication.

  • enforceReplicationFactor (default: true): When enabled, the server checks whether enough replicas are available at creation time and bails out otherwise. Set to false to disable this extra check.

Examples

With defaults:

arangosh> c = db._create("users");
arangosh> c.properties();
[ArangoCollection 71121, "users" (type document, status loaded)]
{ 
  "globallyUniqueId" : "h92B84384354F/71121", 
  "isSystem" : false, 
  "waitForSync" : false, 
  "keyOptions" : { 
    "allowUserKeys" : true, 
    "type" : "traditional", 
    "lastValue" : 0 
  }, 
  "writeConcern" : 1, 
  "cacheEnabled" : false, 
  "syncByRevision" : true, 
  "schema" : null 
}

With properties:

arangosh> c = db._create("users", { waitForSync: true });
arangosh> c.properties();
[ArangoCollection 71101, "users" (type document, status loaded)]
{ 
  "globallyUniqueId" : "h92B84384354F/71101", 
  "isSystem" : false, 
  "waitForSync" : true, 
  "keyOptions" : { 
    "allowUserKeys" : true, 
    "type" : "traditional", 
    "lastValue" : 0 
  }, 
  "writeConcern" : 1, 
  "cacheEnabled" : false, 
  "syncByRevision" : true, 
  "schema" : null 
}

With a key generator:

arangosh> db._create("users",
........> { keyOptions: { type: "autoincrement", offset: 10, increment: 5 } });
arangosh> db.users.save({ name: "user 1" });
arangosh> db.users.save({ name: "user 2" });
arangosh> db.users.save({ name: "user 3" });
[ArangoCollection 71091, "users" (type document, status loaded)]
{ 
  "_id" : "users/10", 
  "_key" : "10", 
  "_rev" : "_fw2Y17S---" 
}
{ 
  "_id" : "users/15", 
  "_key" : "15", 
  "_rev" : "_fw2Y17S--_" 
}
{ 
  "_id" : "users/20", 
  "_key" : "20", 
  "_rev" : "_fw2Y17S--A" 
}

With a special key option:

arangosh> db._create("users", { keyOptions: { allowUserKeys: false } });
arangosh> db.users.save({ name: "user 1" });
arangosh> db.users.save({ name: "user 2", _key: "myuser" });
arangosh> db.users.save({ name: "user 3" });
[ArangoCollection 71109, "users" (type document, status loaded)]
{ 
  "_id" : "users/71114", 
  "_key" : "71114", 
  "_rev" : "_fw2Y17e--_" 
}
[ArangoError 1222: unexpected document key]
{ 
  "_id" : "users/71117", 
  "_key" : "71117", 
  "_rev" : "_fw2Y17e--A" 
}

creates a new edge collection
db._createEdgeCollection(collection-name)

Creates a new edge collection named collection-name. If the collection name already exists, an error is thrown. The default value for waitForSync is false.

db._createEdgeCollection(collection-name, properties)

properties must be an object with the following attributes:

  • waitForSync (optional, default false): If true, creating a document only returns after the data has been synced to disk.

creates a new document collection
db._createDocumentCollection(collection-name)

Creates a new document collection named collection-name. If a collection with this name already exists, an error is thrown.

All Collections

returns all collections
db._collections()

Returns all collections of the given database.

Examples

arangosh> db._collections();
[ 
  [ArangoCollection 19, "_analyzers" (type document, status loaded)], 
  [ArangoCollection 34, "_appbundles" (type document, status loaded)], 
  [ArangoCollection 31, "_apps" (type document, status loaded)], 
  [ArangoCollection 22, "_aqlfunctions" (type document, status loaded)], 
  [ArangoCollection 37, "_frontend" (type document, status loaded)], 
  [ArangoCollection 7, "_graphs" (type document, status loaded)], 
  [ArangoCollection 28, "_jobs" (type document, status loaded)], 
  [ArangoCollection 25, "_queues" (type document, status loaded)], 
  [ArangoCollection 10, "_statistics" (type document, status loaded)], 
  [ArangoCollection 13, "_statistics15" (type document, status loaded)], 
  [ArangoCollection 16, "_statisticsRaw" (type document, status loaded)], 
  [ArangoCollection 4, "_users" (type document, status loaded)], 
  [ArangoCollection 97, "animals" (type document, status loaded)], 
  [ArangoCollection 91, "demo" (type document, status loaded)], 
  [ArangoCollection 71327, "example" (type document, status loaded)] 
]

Collection Name

selects a collection from the database
db.collection-name

Returns the collection with the given collection-name. If no such collection exists, a collection named collection-name is created with the default properties.

Examples

arangosh> db.example;
[ArangoCollection 71084, "example" (type document, status loaded)]

Drop

drops a collection
db._drop(collection)

Drops a collection and all its indexes and data.

db._drop(collection-identifier)

Drops a collection identified by collection-identifier with all its indexes and data. No error is thrown if there is no such collection.

db._drop(collection-name)

Drops a collection named collection-name and all its indexes. No error is thrown if there is no such collection.

db._drop(collection-name, options)

To drop a system collection, you must specify an options object with the attribute isSystem set to true; otherwise, system collections cannot be dropped.

Note: cluster collections that are prototypes for collections with the distributeShardsLike parameter cannot be dropped.

Examples

Drops a collection:

arangosh> col = db.example;
arangosh> db._drop(col);
arangosh> col;
[ArangoCollection 71129, "example" (type document, status loaded)]
[ArangoCollection 71129, "example" (type document, status loaded)]

Drops a collection identified by name:

arangosh> col = db.example;
arangosh> db._drop("example");
arangosh> col;
[ArangoCollection 71136, "example" (type document, status loaded)]
[ArangoCollection 71136, "example" (type document, status deleted)]

Drops a system collection

arangosh> col = db._example;
arangosh> db._drop("_example", { isSystem: true });
arangosh> col;
[ArangoCollection 71143, "_example" (type document, status loaded)]
[ArangoCollection 71143, "_example" (type document, status deleted)]

Truncate

truncates a collection
db._truncate(collection)

Truncates a collection, removing all documents but keeping all its indexes.

db._truncate(collection-identifier)

Truncates a collection identified by collection-identifier. No error is thrown if there is no such collection.

db._truncate(collection-name)

Truncates a collection named collection-name. No error is thrown if there is no such collection.

Examples

Truncates a collection:

arangosh> col = db.example;
arangosh> col.save({ "Hello" : "World" });
arangosh> col.count();
arangosh> db._truncate(col);
arangosh> col.count();
[ArangoCollection 71151, "example" (type document, status loaded)]
{ 
  "_id" : "example/71156", 
  "_key" : "71156", 
  "_rev" : "_fw2Y18C---" 
}
1
0

Truncates a collection identified by name:

arangosh> col = db.example;
arangosh> col.save({ "Hello" : "World" });
arangosh> col.count();
arangosh> db._truncate("example");
arangosh> col.count();
[ArangoCollection 71163, "example" (type document, status loaded)]
{ 
  "_id" : "example/71168", 
  "_key" : "71168", 
  "_rev" : "_fw2Y19a---" 
}
1
0