
Performance Comparison: ArangoDB vs MongoDB, Neo4j, OrientDB

The latest edition of the NoSQL Performance Benchmark (2018) has been released.

My recent blog post “Native multi-model can compete” has sparked considerable interest on HN and other channels. As expected, the community has immediately suggested improvements to the published code base and I have already published updated results several times (special thanks go to Hans-Peter Grahsl, Aseem Kishore, Chris Vest and Michael Hunger).

Please note: an update is available (June ’15), and a new performance test with PostgreSQL has been added.

Here are the latest figures and diagrams:

[Chart: performance comparison, revision 105]

The aim of the exercise was to show that a multi-model database can successfully compete with specialised players on their own turf with respect to performance and memory consumption. Therefore it is not surprising that quite a few interested readers have asked whether I could include OrientDB, the other prominent native multi-model database.

Eager to make our case for multi-model even stronger, my colleagues and I immediately set out to give OrientDB a spin. There is an officially supported node.js driver, Oriento, which uses the fast binary protocol, and thanks to its SQL dialect and multi-model nature, OrientDB offers everything we need for the comparison at hand.

To our great disappointment, the results were not as good as expected. It seems obvious that something is wrong with the way I am using OrientDB. So I have asked OrientDB users to check the implementation, but they could not immediately spot the problem. Therefore I would love to see suggestions and will publish updated results as soon as they come in.

Results

Enough talk, here are the results, this time including OrientDB, as was suggested by many.

[Chart: performance comparison, revision 201]

[Table: comparison, revision 201]

Clearly something is wrong here. Any help in improving things would be much appreciated. Discuss on HN.

Details of how we use OrientDB

For OrientDB, we import the profiles into one class and the relations into another, but the relations are also stored as direct links between nodes in the objects of the Profile class (“index-free adjacency”). All queries are sent as SQL queries to the database. The Oriento driver uses promises in the usual node.js style. Single-document queries and aggregations are native to SQL and therefore straightforward. For the neighbor queries we use OrientDB’s graph extensions, which use the direct links. The second-degree neighbors query with unique results was a bit of a challenge, but with some outside help we managed to find an SQL query that works. The shortest path is also directly supported as a function call (shortestPath) in SQL.
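To make this concrete, here is a rough sketch of the kind of SQL statements involved, as one would send them through Oriento. The class name Profile matches our import, but the attribute names, record IDs, database name and connection settings below are illustrative placeholders, not the exact ones from our repository.

```javascript
// Build the SQL strings for three of the query types described above.
// Attribute names ("_key", "AGE") and record IDs are placeholders.

function singleReadSql(key) {
  // Single-document lookup by key.
  return "SELECT FROM Profile WHERE _key = '" + key + "'";
}

function aggregationSql() {
  // Aggregation over one attribute, native to OrientDB's SQL dialect.
  return "SELECT AGE, count(*) FROM Profile GROUP BY AGE";
}

function shortestPathSql(fromRid, toRid) {
  // shortestPath() is a built-in SQL function in OrientDB.
  return "SELECT shortestPath(" + fromRid + ", " + toRid + ", 'OUT')";
}

// Usage with Oriento (promise style), assuming a running server:
// var Oriento = require('oriento');
// var db = Oriento({ host: 'localhost', port: 2424,
//                    username: 'root', password: 'root' }).use('pokec');
// db.query(shortestPathSql('#12:1', '#12:42')).then(function (results) {
//   console.log(results);
// });
```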

Oriento currently supports only one connection to the database per driver instance. In order to parallelize the requests, we used 25 instances of the driver, giving us 25 parallel connections to the server. This improved performance considerably.
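A minimal sketch of this workaround: keep a pool of driver instances and hand out queries round-robin. The pool size of 25 matches the text; the connection factory and database settings are assumptions.

```javascript
// Round-robin pool over several single-connection driver instances.
function makePool(size, createConnection) {
  var connections = [];
  for (var i = 0; i < size; i++) {
    connections.push(createConnection(i));
  }
  var next = 0;
  return {
    // Return the next connection in round-robin order.
    acquire: function () {
      var conn = connections[next];
      next = (next + 1) % size;
      return conn;
    }
  };
}

// Against a real server this would look roughly like:
// var pool = makePool(25, function () {
//   return Oriento({ host: 'localhost', port: 2424,
//                    username: 'root', password: 'root' }).use('pokec');
// });
// pool.acquire().query('SELECT FROM Profile LIMIT 1').then(...);
```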

Some conjectures about reasons

As mentioned above, we do not really understand what is going on here. We had a short look at the (open-source version of the) code. It seems that OrientDB uses an implementation of Dijkstra’s algorithm for shortestPath that proceeds only from the source, contrary to ArangoDB and Neo4j, which search from both ends. This essentially explains the bad performance in the shortest path test, since our social graph is typical in that it is highly connected and shows roughly exponential growth of the neighborhood of each vertex with the distance.
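A back-of-envelope sketch of why this matters, assuming a uniform average branching factor b (the real graph’s degree distribution is of course not uniform, so these are orders of magnitude only):

```javascript
// One-sided search to distance d touches on the order of b^d vertices.
function frontierOneSided(b, d) {
  return Math.pow(b, d);
}

// Meeting in the middle: two searches of depth d/2, roughly 2 * b^(d/2).
function frontierBidirectional(b, d) {
  return 2 * Math.pow(b, d / 2);
}

// For b = 30 (a plausible average friend count) and a path of length 4:
// one-sided:      30^4   = 810,000 visited vertices
// bidirectional: 2*30^2  =   1,800
```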

We can only repeat that any hints about how to improve this situation would be more than welcome, and that we will immediately update this post when new findings arise.

We perform exactly the same tests as before; please read the full description there.

Resources and Contribution

All code used in this test can be downloaded from my GitHub repository and all the data is published in a public Amazon S3 bucket. The tar file consists of two folders: data (the database) and import (the source files).

Everybody is welcome to contribute by testing other databases and sharing the results.


37 Comments

  1. Michael Hunger on June 11, 2015 at 2:07 pm

    Can you please publish absolute numbers and more details about the system you ran it on?
    I also sent an update today to the google group thread around Neo4j.

    And please use a stable, released version of ArangoDB when comparing, not an “alpha” version, or use “alpha” versions of all databases.

    • Claudius Weinberger on June 11, 2015 at 2:31 pm

      Behind the link to the previous test we tried to give as much information as we can in the appendix. There we also describe the exact Google Compute Engine machine we are using. Is this not detailed enough?

      We are happy to test other alpha versions, if the vendor considers this to be sensible. From your comments in your Google group we conclude that you think that we should do this. Frank is working on that just now and as soon as we can reproduce your results, we will publish an update as before.

      • Michael Hunger on June 11, 2015 at 3:04 pm

        Please also add more graph queries, the code for 2nd and 3rd degrees or aggregations across those is there in the repo, just no results published.
        Currently it is mostly document/kv queries besides the shortest-path.
        Neighbors doesn’t count as it is just a read of an array property with primary-keys.

        • Claudius Weinberger on June 11, 2015 at 3:20 pm

          I think we have picked a balanced selection of tests:
          single reads and writes:
          every DB must be able to do this, quick operation

          aggregation:
          CPU intensive, every DB should be able to do this

          shortest path:
          only graph databases can do this

          neighbors of distance 1 and 2:
          every DB should be able to handle this, but graph databases should be particularly good at it, since in many use cases people match for short paths

          After all, this is not a graph database benchmark, but a multi-model vs. specialised solutions benchmark.

          I am not aware of any code in the repo for aggregations across neighbors, what do you mean by this?

          • Michael Hunger on June 11, 2015 at 3:38 pm

            Only there is no 2nd degree neighbours in the chart 🙂
            So what’s the typical document database operation then? Deep search in the doc?



          • Claudius Weinberger on June 11, 2015 at 3:52 pm

            Sorry, this is a misunderstanding: What we have in the charts under “neighbors” is to find both the distance 1 and the distance 2 neighbors, but each of them only once. We did not include the performance of direct neighbors, because every database can easily do this and thus it is not much of a challenge. We will change the wording in the post.

            If you want, I can send you the results of the direct neighbors test in our setup after Frank has integrated your changes.



  2. Claudius Weinberger on June 11, 2015 at 2:30 pm

    Behind the link to the previous test we tried to give as much information as we can in the appendix. There we also describe the exact Google Compute Engine machines we are using. Is this not detailed enough?

    We are happy to test other alpha versions, if the vendor considers this to be sensible. From your comments in your Google group we conclude that you think that we should do this. Frank is working on that just now and as soon as we can reproduce your results, we will publish an update as before.

  3. Nicolas Joseph on June 11, 2015 at 4:19 pm

    You should also probably mention that arango is a “mostly memory database”. Unless I am mistaken, that’s not the case for every other database in the benchmark. I had a question about that: how many nodes and edges can you store in arango? And how much RAM does it need to store that much?

    • Claudius Weinberger on June 11, 2015 at 4:29 pm

      We say in many places that we are “mostly-in-memory”. However, this essentially holds for the other databases as well. Therefore we do the warmup to give everybody a chance to load everything into RAM.

      In this example, there are 1.6M vertices, 30M edges, and the total memory ArangoDB uses is 12.5GB RAM for the whole graph, including indexes.

      • Nicolas Joseph on June 11, 2015 at 4:53 pm

        Thanks! And how much does each document weigh, for how many/what kind of properties?

        • Claudius Weinberger on June 11, 2015 at 5:22 pm

          The source file for the profiles is a CSV file of size 1.7GB; there are lots of different attributes (76 in total). Not all attributes are defined for every row. Roughly half of the entries are null.

  4. Ziink A on June 12, 2015 at 6:00 am

    Trying to run the arangodb benchmark on Windows but I’m getting

    ArangoError: unknown path ‘_db_system_apicollectionprofiles’

    Also, it would help to have absolute times instead of a comparative percentage.

    • fceller on June 12, 2015 at 9:17 am

      It seems that the data is not installed / accessible. Can you contact hackers (at) arangodb.org with some more details?

      • Ziink A on June 12, 2015 at 4:11 pm

        There’s a bug with arangojs which prevents it from being used on Windows. Have submitted a pull request.

        • fceller on June 12, 2015 at 4:30 pm

          Thanks a lot

  5. Rasmus Schultz on June 12, 2015 at 10:07 am

    Max,

    Oriento is not a full binary driver – it uses the older (pre-2.0) CSV-style data serialization format to encode records for transport, i.e. most of the data in transport is not in fact binary; numbers, record IDs, etc. are encoded as text. That means substantial encoding overhead on the server, and more substantial decoding overhead on the client, which could be part of the explanation for the distorted result.

    Sadly, there are no “official” drivers for JS or PHP or most other platforms besides Java – these are third-party initiatives that get labeled as “official” by the OrientDB team as a means of saying, I suppose, “we approve”; but they don’t develop them, and the developers have no responsibility (or real incentive) to keep them up to date. I have urged them before to stop using the term “official”, which to most people implies things like support, maintenance, stability, etc.

    Bottom line, if you want a fair benchmark, you need to use the only real official drivers, the Java drivers – likely any JVM language will do, as long as you’re using the latest, supported, truly “official” drivers.

    • Max Neunhöffer on June 12, 2015 at 11:39 am

      Well, “official” or not, I read this page http://orientdb.com/docs/last/Programming-Language-Bindings.html as saying that Oriento uses the “native binary protocol” and thus seems to be the most appropriate OrientDB driver for node.js.
      Are you essentially saying that the only language, from which you can use OrientDB efficiently, is Java?
      We wanted to make the benchmark “fair” by using the same language to query each database, and we decided to use node.js.
      Another way to conduct a “fair” benchmark would be to use the “native” language for each database. But then one would use C++ (probably with libcurl) for ArangoDB, C++ (with the native MongoDB C++ driver) for MongoDB, and Java for Neo4j and OrientDB. I am no prophet, but I can make an educated guess how things would look then…

      • Rasmus Schultz on June 14, 2015 at 3:00 pm

        > Are you essentially saying that the only language, from which you can use OrientDB efficiently, is Java?

        I’m saying it’s the only truly official driver – Java of course is not the only language available, there are plenty of good JVM languages. I also can’t speak to the quality of other third-party drivers, I haven’t tried them, but I wouldn’t expect miracles – the OrientDB protocol is quite complex, not fully documented, and a lot of things have to be referenced from the Java drivers to build a driver in another language.

        It sounds like they’re finally starting to acknowledge the state of the driver situation, and they’ve indicated they might be doing something about it soon:

        https://github.com/orientechnologies/orientdb/issues/4354#issuecomment-111685765

    • Charles Pick on June 12, 2015 at 11:40 am

      > that means substantial encoding overhead on the server, and more substantial decoding overhead on the client

      Rasmus, it would be better if you investigated this issue before making conjectures of your own. The next (now mothballed) oriento release uses the full binary record format and is not meaningfully faster when actually interacting with the database. Despite your claims, record serialization has never been the bottleneck here. I can believe that *protocol parsing* is a performance problem but that’s unavoidable due to OrientDB’s braindead wire format.

      If OrientDB are happy to call Oriento “official” then who are you to question their definition? The average user will use the “official” drivers for their platform, so this benchmark makes sense.

      • Rasmus Schultz on June 14, 2015 at 2:38 pm

        Using the binary serialization format should be meaningfully faster – if it isn’t, there should be some other explanation. Your client may not be net faster using the binary serialization protocol, but from what somebody at OrientDB explained (in the forum or a GitHub thread) earlier, this is the storage format used by the OrientDB server, on disk, which means it can stream records directly to the client without even parsing its own data; from what they said, encoding as CSV is substantially more load on the server (well, obviously) than just streaming the data as-is. That’s hardly conjecture. Though I can’t say anything about the performance of your unreleased driver, obviously.

        • Charles Pick on June 14, 2015 at 3:00 pm

          > Using the binary serialization format should be meaningfully faster – if it isn’t, there should be some other explanation.

          There is another explanation – OrientDB is not spending the majority of its time on serialization, which is what we’d expect – databases must do a lot more work than just that. Consider the difference in time between serializing a single record and performing even one disk seek to retrieve that record, it’s a difference of orders of magnitude.

          > which means it can stream records directly to the client without even parsing it’s own data

          *I* told you that, and while it’s possible in theory AFAIK OrientDB does not do it. Obviously this also would not work when you’re performing any kind of query other than “give me this particular document”

          I’m not arguing that serializing to “csv” is free, I’m saying that in the scheme of things it’s a drop in the bucket. Especially when you consider the kind of queries this benchmark uses, serialization barely comes into it!

          • Rasmus Schultz on June 14, 2015 at 3:16 pm

            CSV encoding and decoding is “a drop in the bucket”? You actually profiled the drivers and know this for a fact, or is this my speculation against your speculation?

            I’m just offering ideas to help pin down the issue with this benchmark. I may be all wrong, but if we’re both just speculating, well… If you’re sure this isn’t the issue, do you have any other ideas or clues, anything that might help set this benchmark straight?



          • Charles Pick on June 14, 2015 at 4:05 pm

            > You actually profiled the drivers and know this for a fact

            Yes, I performed the same queries using both deserializers and got essentially the same performance (binary record format is 1% – 3% better). Of course when benchmarking the deserializers alone in isolation the binary format is many times faster, but I’m not seeing any real benefit of that when actually hitting the database.

            The nature of OrientDB’s wire format makes for extremely complicated code in an async platform like node.js, and that *could* be a factor here, but I’ve spent a long time optimizing it, and when testing with fixed buffers (i.e. not connecting to a database at all) I can still parse many times the number of responses that OrientDB can send. If it’s a factor, it’s not the primary one.

            The OrientDB team will have a better idea of the causes of this, Luca promised a PR with some fixes, I’m as interested to see them as you are.



          • Rasmus Schultz on June 14, 2015 at 10:54 pm

            > when benchmarking the deserializers alone in isolation the binary format is many times faster

            hmm, are you benchmarking this with enough threads to max out the CPU? If you’re measuring turn-around with a single-thread benchmark, you’re probably just clocking idle CPU time.



          • Rasmus Schultz on June 14, 2015 at 10:56 pm

            btw, I meant, did you actually profile the drivers to see what percentage of CPU time is spent where? if you suspect there may be a bottleneck somewhere, profiling (not benchmarking) might reveal something.



  6. Dario on June 12, 2015 at 3:10 pm

    Great work Claudius! Given that the benchmark code is public it would be good to see the other vendors further optimising it to give the absolute best results for each platform.

    • CoDEmanX on June 12, 2015 at 6:33 pm

      +1, this is a very fair benchmark. I hope others will contribute additional benchmark scripts for popular database systems like CouchDB, PostgreSQL, maybe even Redis?

  7. Claudius Weinberger on June 13, 2015 at 4:17 pm

    I posted some answers yesterday in the Orient Group (https://groups.google.com/forum/#!topic/orient-database/nW9k_IISz6U). Unfortunately, the moderator has not had time to clear them yet. Therefore I have added the answers to the questions here:

    Hi Luca,

    I’m Claudius, the author of the blog post.

    You are very welcome to send a pull request and we will update the published results with any improvements we can reproduce, and acknowledge any contribution, as we did before for MongoDB and Neo4j. Obviously, we do not want to change the targets of the benchmark:

    – client/server test
    – from node.js using whatever driver is most appropriate

    In this framework we are happy to hear about any improvements w.r.t. database configuration, driver configuration, query formulation etc.

    Everything you need to reproduce the results is available on S3 and GitHub. So if someone doubts the results, they can run the benchmark themselves.

    As to master/master replication and ArangoDB, stay tuned over the summer…

    Best regards,

    Claudius

  8. Nicolas Harraudeau on June 17, 2015 at 9:02 am

    I just saw this: http://orientdb.com/welcome-to-orientjs/. This blog post has had some great effects.
    It might be fair to mention in this benchmark that the company behind OrientDB is fixing the node.js client library.

    • Claudius Weinberger on June 17, 2015 at 1:03 pm

      It’s really great. I love seeing all the contributions and improvements. MongoDB made really good steps forward and Neo4j will come with an update soon.

      We haven’t received any contribution or improvement proposal from OrientDB so far. Any improvements that fit the test are more than welcome. I tried to post such an invitation in their Google group (last Friday) but it is still awaiting approval from the moderator.

  9. Luca Garulli on June 20, 2015 at 5:10 pm

    Hey all, we sent a Pull Request 2 days ago to the author of the Benchmark, as they used OrientDB incorrectly. Now OrientDB is the fastest in all the benchmarks, except for “singleRead” and “neighbors2”, but we know why.

    We are still waiting for the Arango team to update the results…

    • Luca Garulli on June 20, 2015 at 5:39 pm

      However, anyone interested in running the tests themselves can just clone this repository:

      https://github.com/maggiolo00/nosql-tests

    • Claudius Weinberger on June 20, 2015 at 6:41 pm

      We will try the new OrientDB release, in which you have reimplemented shortest path to match Neo4j’s algorithm. I assume we should now see similar results, but as you know, performance tests should yield reliable and reproducible results, and that is not done within hours. We run the tests several times. It took you a week to reimplement the algorithm and produce new drivers, so give us a few days to run the tests.

      We will then publish an update – also with the new results for Neo4j. That should give us provable results instead of marketing statements.

  10. CoDEmanX on August 17, 2015 at 11:05 pm

    Would like to see another benchmark round, OrientDB in the left corner and ArangoDB 2.7-devel in the right corner – this time with AQL query cache enabled and run by ArangoDB team.
