Benchmark Archives - ArangoDB

ArangoDB Among Highest Rated Operational Database Management Systems Solutions in Gartner Report with 4.7/5 Rating


Firstly, a huge thank you to all our customers who took the time to review ArangoDB for the Gartner Peer Insights “Voice of the Customer”: Operational Database Management Systems Market report. Without your help, the continued improvements and enhancements we make to our software wouldn’t be possible. Read more

Multi-model benchmark round 1 – completed

The latest edition of the NoSQL Performance Benchmark (2018) has been released. Please click here

It’s time for another update of my NoSQL performance blog series. This hopefully concludes the first part of the series with the initial databases ArangoDB, MongoDB, Neo4j and OrientDB, so that I can now start to check out other databases. I’m getting a lot of requests to test others as well and will try to add them as soon as possible. Pull requests to my repository are also more than welcome. Remember, it is all open-source.

The first set of benchmarks was started as a proof that multi-model can compete with specialized solutions, so I began with the corresponding top dogs (Neo4j and MongoDB) for graphs and documents. After the first blog post, the community asked us to include OrientDB, the other multi-model database, as well, which made sense, so I expanded the initial lineup.

Concluding the tests took a bit longer than expected, because the vendors took up the challenge and improved their products with impressive results – just as we asked them to. Still, each iteration required time to rerun all tests, see below. On the upside, everyone can benefit from the improvements, which is an awesome by-product of the benchmark tests. More info

How an open-source competitive benchmark helped to improve databases


TL;DR: Our initial benchmark has raised a lot of interest. Initially we wanted to show that multi-model can compete with other solutions. Due to the open and competitive way we have conducted the benchmark, the discussions around it have led to improvements in all products: better algorithms, faster drivers and better ways to use the databases.

The latest edition of the NoSQL Performance Benchmark (2018) has been released. Please click here

General Setup

From the outset we published all code and data and asked the vendors of all tested products, as well as the general public, not only to run the tests on their own machines, but also to suggest improvements in the data models, test code, database configuration, driver usage and server configuration. This led to a lively discussion, lots of pull requests and even to the release of improved versions of the database products themselves!

This process exceeded all our expectations and is yet another great example of community collaboration not only for fact finding but also for product improvements. Obviously, the same benchmark code will always show slightly different results when run on different hardware, operating systems, network setups and with more or less RAM. Therefore, a reliable result of a benchmark can essentially only be achieved by allowing everybody to run it on their own machines.

The technical setup is described in the above blog post. Let me briefly repeat the key facts.

More info

Performance comparison between ArangoDB, MongoDB, Neo4j and OrientDB

The latest edition of the NoSQL Performance Benchmark (2018) has been released. Please click here

My recent blog post “Native multi-model can compete” has sparked considerable interest on HN and other channels. As expected, the community has immediately suggested improvements to the published code base and I have already published updated results several times (special thanks go to Hans-Peter Grahsl, Aseem Kishore, Chris Vest and Michael Hunger).

Please note: An update is available (June ’15), and a new performance test with PostgreSQL has been added.

Here are the latest figures and diagrams:

[Chart: performance comparison, revision 105]

The aim of the exercise was to show that a multi-model database can successfully compete with specialized players on their own turf with respect to performance and memory consumption. Therefore it is not surprising that quite a few interested readers have asked whether I could include OrientDB, the other prominent native multi-model database.

More info

Native multi-model can compete with pure document and graph databases


Claudius Weinberger, CEO ArangoDB

TL;DR Native multi-model databases combine different data models, like documents or graphs, in one tool and even allow mixing them in a single query. How can this concept compete with a pure document store like MongoDB or a graph database like Neo4j? A lot of folks in the community, myself included, asked that question.

So here are some benchmark results: 100k reads → competitive; 100k writes → competitive; friends-of-friends → superior; shortest-path → superior; aggregation → superior.
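To make the “mix them in a single query” idea concrete, here is a minimal sketch in Python using the python-arango driver. The collections (profiles, knows), the age filter and the connection details are hypothetical assumptions that only illustrate the concept; they are not the benchmark’s actual data model or queries:

    # A minimal sketch of one query that mixes models: a graph traversal
    # (friends of friends over a "knows" edge collection) combined with a
    # document filter. Collection names, the filter and the connection
    # details are illustrative assumptions, not the benchmark's actual setup.
    from arango import ArangoClient

    client = ArangoClient(hosts="http://localhost:8529")
    db = client.db("_system", username="root", password="")

    query = """
    FOR v IN 2..2 OUTBOUND @start knows
      FILTER v.age >= @minAge
      RETURN DISTINCT v.name
    """

    cursor = db.aql.execute(query, bind_vars={"start": "profiles/alice", "minAge": 30})
    print(list(cursor))

The point is that the traversal (a graph operation) and the attribute filter (a document operation) live in one declarative query, instead of requiring two separate systems.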

Feel free to comment, join the discussion on HN and contribute – it’s all on GitHub.

The latest edition of the NoSQL Performance Benchmark (2018) has been released. Please click here

More info

Additional results for mixed workload


In a comment on the last post, there was a request to conduct some benchmarks with a mixed workload that does not test insert/delete/update/get operations in isolation, but rather when they work together.

To do this, I put together a quick benchmark that inserts 10,000 documents, and after each insert either

  • directly fetches the inserted document (i.e. insert / get),
  • updates the inserted document and retrieves it (i.e. insert / update / get), or
  • deletes it (i.e. insert / delete)

The three cases are alternated deterministically, meaning each case occurs with the same frequency and in the same order. It’s probably still not the best ever test case, but at least it reflects a mixed read and write workload.

The document ids used in the test were monotonically increasing integers, starting from some base value. That means no random values were used.
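For illustration, a minimal Python sketch of such a mixed workload against ArangoDB, using the python-arango driver, could look like this. The collection name and connection details are assumptions; the original benchmark used its own harness and drivers:

    # A minimal sketch of the mixed workload: after each insert, alternate
    # deterministically between get, update+get and delete. The collection
    # name and connection details are assumptions; the original benchmark
    # used its own harness and drivers.
    from arango import ArangoClient

    client = ArangoClient(hosts="http://localhost:8529")
    db = client.db("_system", username="root", password="")
    col = db.collection("mixed")  # hypothetical collection

    BASE = 1000000  # ids are monotonically increasing integers from a base value

    for i in range(10000):
        key = str(BASE + i)
        col.insert({"_key": key, "value": i})
        if i % 3 == 0:      # insert / get
            col.get(key)
        elif i % 3 == 1:    # insert / update / get
            col.update({"_key": key, "value": i + 1})
            col.get(key)
        else:               # insert / delete
            col.delete(key)

The modulo over the loop counter is what makes the alternation deterministic: each of the three cases occurs equally often and always in the same order.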

The test was repeated for 100,000 documents as well; this dataset still fully fits in RAM. The tests were run in the same environment as the previous tests, so the results are directly comparable.

The results are in line with the results shown in the previous post. Here’s the chart with the results of the 10,000 documents benchmark:

And here are the results for the 100,000 documents benchmark: