ArangoDB 3.5 (Planned Q2, 2019)
Subject to change any time.
Improved Incremental Backup (E)
Incremental, consistent snapshot backup in cluster mode.
Time-to-Live Index (TTL)
Remove expired documents from a collection.
Performance improvements, parallel queries, the PRUNE keyword, and k-shortest-paths traversal support.
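As a sketch of how the PRUNE keyword could be used (the named graph `flights` and start vertex `airports/LAX` are hypothetical), a traversal can stop descending as soon as a condition matches:

```aql
FOR v, e, p IN 1..5 OUTBOUND 'airports/LAX' GRAPH 'flights'
  PRUNE v.closed == true      // stop expanding paths below matching vertices
  FILTER v.closed != true     // and also exclude them from the result
  RETURN p.vertices[*]._key
```

Unlike a plain FILTER, PRUNE cuts off whole subtrees during the traversal instead of discarding paths afterwards.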
Create custom analyzers by combining configurable tokenizers, token filters and character filters.
Data Masking (E)
Use data masking capabilities when exporting attributes containing sensitive data / PII via arangodump.
It will be possible to utilize storage across multiple machines automatically, reducing the administration cost as far as possible.
HTTP/2 and/or WebSockets
Support for newer web protocols such as HTTP/2 and WebSockets.
Instead of sharding based on a hash value, allow sharding by key ranges.
Define constraints on your collections in order to validate inserted data.
Transactions in a distributed environment with a transaction manager.
Support for big data blobs.
ArangoDB 3.4 - released on December 6th 2018 ✓
A sophisticated, integrated full-text search solution over a user-defined set of attributes and collections.
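A minimal ArangoSearch sketch, assuming a hypothetical view `articlesView` linked to an articles collection:

```aql
FOR doc IN articlesView
  SEARCH ANALYZER(PHRASE(doc.body, "graph database"), "text_en")
  SORT BM25(doc) DESC
  LIMIT 10
  RETURN doc.title
```

The SEARCH keyword queries the view's index, and BM25() ranks the matches by relevance.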
Improved geo functionality
The new geo index functionality allows indexing complex geographical objects in addition to indexing simple point coordinates. Functionality has been added for querying and comparing GeoJSON objects. Geo index performance has been improved vastly for the RocksDB engine.
Insert operations can now be turned into a replace automatically if the target document already exists. Such an operation (called a “repsert”) can simplify client application development.
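In AQL, a repsert can be expressed with the `overwrite` option (the `users` collection is assumed for illustration):

```aql
INSERT { _key: "alice", logins: 1 }
  INTO users
  OPTIONS { overwrite: true }   // replaces the document if "alice" already exists
```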
Optimized binary format for the RocksDB engine
ArangoDB 3.4 can use an optimized binary format for storing documents with the RocksDB storage engine, allowing for better long-term insertion performance.
Round-robin load-balancer support
ArangoDB now supports running multiple coordinators behind round-robin load balancers, as are commonly found in cloud environments.
Faster cluster AQL execution
The cluster-internal protocol for running AQL queries has been improved so that AQL queries can run in a cluster with less overhead.
AQL query profiling
AQL queries can now be profiled in detail, so that query execution plans show detailed runtime information.
In a cluster setup, COLLECT queries for grouping and aggregation can now execute significant parts of the query on the database servers, greatly reducing the amount of data to be transferred between database servers and the coordinator.
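A typical grouping query that benefits from this optimization might look like the following (the `orders` collection and its attributes are assumptions):

```aql
FOR o IN orders
  COLLECT customer = o.customerId
  AGGREGATE total = SUM(o.amount)
  RETURN { customer: customer, total: total }
```

With the pushdown, each database server pre-aggregates its own shards and only the partial results travel to the coordinator.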
Improved sparse index support
The AQL query optimizer can now use sparse indexes in more cases than it was able to in previous versions, making sparse indexes a viable option in more situations and queries.
Parallel dump and restore
ArangoDB’s tools for database backups are now multi-threaded, which means taking and restoring backups is now faster than in previous versions.
ArangoDB 3.3 - released on December 22nd 2017 ✓
DC-to-DC Replication (Enterprise Edition)
This feature allows you to run two ArangoDB clusters in two different datacenters A and B, and set up asynchronous replication from A to B.
Encrypted backup (Enterprise Edition)
The encryption key can be read from a file or from a generator program. It works in single server and cluster mode.
Resilient active/passive mode
There is now a mode to start two arangod instances as a pair of connected servers with automatic failover.
The new globalApplier has the same interface as the existing applier, but it replicates from all databases on the leader rather than just a single one.
Write operations in the RocksDB storage engine are throttled in order to prevent total write stalls.
Faster shard creation in cluster
Creating collections is something every ArangoDB user does, so it should be as quick as possible.
ArangoDB 3.2 - released on July 20th 2017 ✓
RocksDB & Pluggable Storage Engine
An additional storage engine for ArangoDB to work with huge datasets, offering document-level locking on writes and no locking on reads.
Distributed Graph Processing with Pregel
Run iterative graph processing algorithms in single-server mode or in a cluster.
Foxx services are now self-healing, even if all coordinators go down.
Get documents sorted by distance to a certain point in space. You can also apply filters and limits to geo_cursor, e.g. “Give me 10 vegetarian restaurants within a 1-mile radius of X”.
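The restaurant example could be sketched in AQL roughly as follows (the collection, attribute names, and coordinates are assumptions; a geo index on the coordinate attributes is required):

```aql
FOR r IN restaurants
  FILTER r.vegetarian == true
  FILTER DISTANCE(r.latitude, r.longitude, 52.52, 13.40) < 1609   // ~1 mile in meters
  SORT DISTANCE(r.latitude, r.longitude, 52.52, 13.40) ASC
  LIMIT 10
  RETURN r.name
```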
Export your data in multiple formats: export graph data to the XGMML format for Cytoscape visualizations, or arbitrary collections to JSON or JSONL.
Satellite Collections (Enterprise Edition)
Satellite Collections enable faster join operations when working with sharded datasets and avoid expensive network hops during join processing among machines.
Encryption at Rest (Enterprise Edition)
Even if a disk gets stolen, data can’t be accessed.
LDAP (Enterprise Edition)
ArangoDB can now be integrated with LDAP allowing for an external authentication server to manage users.
ArangoDB 3.1 - released on November 3rd 2016 ✓
VelocyPack over HTTP
Stream ArangoDB's binary storage format VelocyPack directly over HTTP for high-performance needs.
Boost.Asio server infrastructure
Performance boost with the new Boost.Asio-based server infrastructure.
Use ArangoDB as a resilient, RAFT-based key/value store as alternative to ZooKeeper or etcd
Much easier to use: choose JSON, tabular or graph output. Query development is simplified with the new Query Performance Profiler.
New Graph Viewer
Suitable for large graph visualization, with many more features; first WebGL implementation.
Overhauled Query Optimizer
Better overall query execution and performance increases
Preparations for pluggable storage engine and MVCC
Improved abstraction to integrate pluggable storage engine and MVCC
Create indexes on edges that combine a vertex with an edge attribute (vertex-centric indexes).
New Java Driver
Multi-document operations, VelocyStream ready, asynchronous request handling
SmartGraphs (Enterprise Edition)
Shard large graph datasets to a cluster and stay close to the performance of a single instance
Auditing (Enterprise Edition)
Keep a detailed log of all the important things that happened in ArangoDB
Encryption Control (Enterprise Edition)
Choose your level of SSL encryption
ArangoDB 3.0 - released on June 23rd 2016 ✓
Internal storage will change from JSON to VelocyPack for enhanced performance, smaller footprint and binary support.
We plan on making indexes persistent, which will allow for quicker recovery and start-up, as well as larger datasets.
Low-Level C++ Driver
Implementation of efficient, reusable, platform-independent core driver functionality to be used in multiple client languages.
Allow for automatic failover to slave nodes. A monitor process detects network failures and automatically switches to backup nodes.
Replicate data not just in a master/slave fashion, but also as true master/master.
Automatic Failover with Mesos
This release will contain the next iteration of our Mesosphere DCOS integration and will thus offer convenient set-up of synchronous replication and full automatic failover.
Health Check Dashboard in Mesos
Enables you to see the health status of your ArangoDB cluster in Mesos dashboard.
Improved cluster administration will be implemented.
Instead of a dedicated slave, you can use spare capacity on masters to hold the slaves for other shards.
ArangoDB 2.8 - released on January 25th 2016 ✓
Hash indexes and skiplist indexes support array values so they index individual array members.
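Assuming a hash index has been created on `tags[*]` of a hypothetical `posts` collection, queries on individual array members can then use that index:

```aql
FOR p IN posts
  FILTER "databases" IN p.tags   // can be served by an array index on tags[*]
  RETURN p.title
```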
Graph Traversals in AQL
Use AQL to traverse named graphs and edge collections.
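A minimal traversal sketch, assuming a hypothetical named graph `knows` and a start vertex `persons/alice`:

```aql
FOR v, e, p IN 1..3 OUTBOUND "persons/alice" GRAPH "knows"
  RETURN { vertex: v.name, depth: LENGTH(p.edges) }
```

The traversal yields each visited vertex `v`, the edge `e` leading to it, and the full path `p` from the start vertex.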
Reimplemented AQL functions in C++ for improved performance.
New Framework in Mesosphere DCOS 1.3
ArangoDB package for DCOS 1.3, enhanced replication and failover.
Automatic Deadlock Detection for Transactions
The new deadlock detection mechanism will kick in automatically when it detects operations that are mutually waiting for each other.
ArangoDB 2.7 - released on October 9th 2015 ✓
This allows much easier synchronization of a single collection from a master to a slave server.
A lot is not enough: throughput is another key requirement for a premium database, and with 2.7 we again pushed it a big step forward.
Improved Date Handling in AQL
AQL functions for date and time calculation and manipulation.
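A few of the AQL date functions in action (exact function availability may differ by version):

```aql
RETURN {
  nowIso:    DATE_ISO8601(DATE_NOW()),
  year:      DATE_YEAR("2015-10-09"),
  dayOfWeek: DATE_DAYOFWEEK("2015-10-09")   // day-of-week number (0 = Sunday)
}
```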
Split primary indexes and hash indexes into multiple index buckets.