Testing EventStore

I recently came across Event Store, which, as its name might hint, is, well, a store for events. The docs say it better than I can:

Event Store stores your data as a series of immutable events over time, making it easy to build event-sourced applications.

I wanted to see how useful it would be for us, and how it could fit in a Hadoop-based platform. This post describes my findings.

Principles

EventStore is thus a database to store events. How is that different from a standard RDBMS, say MySQL? The answer lies in the words Event Sourcing. Basically, a standard database stores the current state of an item or a concept. Think for instance about a shopping cart. If a user adds item A, then item B, then removes item A, the database would hold a shopping cart with one element only, B, in it.

If you follow the principles of Event Sourcing, instead of updating the state of your cart, you remember events. User added A. User added B. User removed A. That way, at any point in time you know the full history of your cart. This can help you in many ways: debugging, analysing why product A does not sell so well, or, when you have a great new idea, already having a lot of relevant data to test it against. You never know which analysis you will want to do in the future. You can read a lot about this; I strongly recommend this post by Martin Kleppmann: Using logs to build a solid data infrastructure.
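To make the idea concrete, here is a minimal sketch in Python of the cart example above. All names are my own, not from any library: the current state is never stored, only derived by replaying the events.

```python
# Minimal event-sourcing sketch (illustrative names, no real library).
# The cart state is never stored; it is derived by replaying events.

def replay(events):
    """Fold a list of (action, item) events into the current cart state."""
    cart = set()
    for action, item in events:
        if action == "added":
            cart.add(item)
        elif action == "removed":
            cart.discard(item)
    return cart

# The scenario from the text: add A, add B, remove A.
events = [("added", "A"), ("added", "B"), ("removed", "A")]
print(replay(events))  # {'B'}
```

The full event list stays available for any future analysis, while the current state is always just one fold away.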

Technology stack and installation

Note: I used the Linux build, version 3.0.5. The Windows build might have fewer bugs.

EventStore is developed in .Net, and can be built under Mono for Mac or Linux. It is (partly) open source, with some extra tools requiring a licence. Installation is quite easy if you follow the getting started doc. It does look like quite a young project though: the only installation method (for Linux) is to download a .tgz and uncompress it; there are no deb or rpm packages, for instance. Inside the tarball there is no init script, and there are some assumptions in the startup scripts (a proper chdir before running) which make me feel that the project is built for Windows first, with Linux as an afterthought (but it is there), or that the project is not fully mature yet.

Of course, running under Mono is still a bit worrying. The full .Net framework is not and will not be ported, and the legal status of Mono is not fully clear. You never know what the future will bring.

Managing and monitoring

There is a nice web interface, which gives you an instantaneous view of your cluster. A dashboard shows some monitoring information, which can also be accessed via an (undocumented) call to /stats. This returns a nice JSON object full of information.

Another bug is that the /stats page does require authentication, but will happily return an empty document with a 200 status code if you do not authenticate. This is another sign of the project's lack of maturity.

Data loading

With the HTTP API, it was quite easy. You just need to POST some JSON to an endpoint. That said, the doc on writing events to a stream seems wrong, or there is a bug in the version I am using (3.0.5): EventStore requires a UUID and an event type for each event, which can be passed either as part of the JSON, or as HTTP headers. The first documented example uses JSON, which did not work at all for me:

HTTP/1.1 400 Must include an event type with the request either in body or as ES-EventType header.

I had to use HTTP headers. Not a big deal, but it feels like a bad start.
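For reference, this is roughly what the working request looks like, sketched with Python's standard urllib. The stream name, event type and payload are made up; 2113 is EventStore's default HTTP port. The request is only built here, never sent:

```python
import json
import uuid
from urllib.request import Request

# Hypothetical event payload; stream name and event type are made up.
body = json.dumps({"name": "relevant"}).encode("utf-8")

req = Request(
    "http://127.0.0.1:2113/streams/test-stream",  # 2113: default HTTP port
    data=body,
    headers={
        "Content-Type": "application/json",
        # The UUID and event type EventStore insists on, as headers:
        "ES-EventId": str(uuid.uuid4()),
        "ES-EventType": "ItemAdded",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here.
```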

The load was quite slow (8 hours for 1GB of JSON), but I cannot say where the time was spent as I only did functional testing. I was running EventStore on a small virtual machine, with 1 core and 512MB of memory. I never went above 50% CPU usage or 350MB of memory. That said, I did have to generate a UUID per event, and that might be slow.
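The UUID hypothesis is easy to check in isolation. A quick Python sketch (my own test, nothing EventStore-specific) times the generation of a batch of random UUIDs:

```python
import time
import uuid

# How expensive is generating one UUID per event, client side?
n = 100_000
start = time.perf_counter()
ids = [str(uuid.uuid4()) for _ in range(n)]
elapsed = time.perf_counter() - start

print(f"{n} UUIDs generated in {elapsed:.2f}s")
```

On my machine this takes well under a second for 100,000 UUIDs, so client-side UUID generation alone is unlikely to explain an 8-hour load; the time is probably spent elsewhere (HTTP round trips, or the server itself).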

The .Net (TCP) API is said to be much faster. I did not try it, as there are other issues with Event Store which make it a bad choice for us.

There is as well a JVM client on GitHub. It is referenced but less thoroughly described in the doc, and is said to work well only up to older versions (3.0.1).

Data fetching

My feeling is that Event Store is mostly meant to be used as a queue. There are nice ways to subscribe to a stream of events (Atom feed), and to add processing to it via projections, which are JavaScript snippets. With projections you can set up simple triggers on events, or build counters. The official documentation is not great, but you can find a list of blog posts going into more depth. Note that projections are considered beta, and are not to be used in production.

Simple processing (counters) is quite easy via projections. One place where Event Store shines is the processing of temporal series. An example is given in some of the blog posts: analysing the time difference between commit and push per language on GitHub.
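The actual GitHub example uses a projection, but the underlying idea is easy to show outside Event Store. Here is a small Python sketch (event names, fields and data are all made up for illustration) that pairs up "commit" and "push" events and measures the delay between them:

```python
from datetime import datetime

# Made-up events: a commit followed by a push, for two repos.
events = [
    {"type": "commit", "repo": "r1", "at": "2015-06-03T10:00:00"},
    {"type": "push",   "repo": "r1", "at": "2015-06-03T10:05:00"},
    {"type": "commit", "repo": "r2", "at": "2015-06-03T11:00:00"},
    {"type": "push",   "repo": "r2", "at": "2015-06-03T11:30:00"},
]

def push_delays(events):
    """Return seconds between each commit and the next push, per repo."""
    pending, delays = {}, []
    for e in events:
        t = datetime.fromisoformat(e["at"])
        if e["type"] == "commit":
            pending[e["repo"]] = t
        elif e["type"] == "push" and e["repo"] in pending:
            delays.append((t - pending.pop(e["repo"])).total_seconds())
    return delays

print(push_delays(events))  # [300.0, 1800.0]
```

This is exactly the kind of computation a projection runs incrementally on the server, event by event, instead of in a batch like here.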

There are other APIs (.Net, JVM, plus some not officially supported), but they are all about reading a stream of events programmatically, without the built-in ability to do more. Of course, from your own language you can do whatever you want.

A big gap for me is that there is no SQL interface. If we want the data to be accessed, we need developer time, making it harder for the data analysts. Furthermore, doing joins looks quite tricky.

Oh, and I could not add projections at all, as the web interface would not let me, for some reason.

Summary

Event Store is not yet for us. The bad points for us are:

  • Mono does not feel safe to use for a major production brick
  • The project seems immature: errors in the documentation, which is also hard to find; the web UI is not fully functional.
  • Data fetching (projections) is considered beta and not supposed to be used in production.
  • The other APIs are production ready, but will cost lots of developer time, instead of giving analysts easy access to the data.
  • No SQL interface.
  • Loads of small bugs here and there.

Of course, I looked at it from the point of view of the guy who will have to maintain it, and develop against it. It has some pretty good points, though:

  • Although it is not well integrated in Linux environments, installation was fairly painless. It just worked.
  • The concepts behind Event Store are very neat
  • It is fairly active on GitHub, so I do expect some nice progress

Replacing a single MongoDB server

I am moving a single MongoDB server to new hardware, and I want to do that with the least possible production interruption, of course.

Well, it so happens that it is not possible if you did not plan it from the start. You can argue that if I have a single SPOF server in production I am doing my job badly, but this is beside the point for this post.

MongoDB has a neat replication feature, where you can build a cluster of servers with one primary and a few slaves (secondaries), among other options. If you properly configured mongo to use this feature, you can add a secondary, promote it to primary, and eventually switch off the initial primary. This is what I will describe here.

Note that there will be 2 (very short) downtimes: one to create the replica set (this is just a restart), and one when the primary is switched (you need to redirect connections to your new primary).

A note about vagrant

If you are using Vagrant, make sure that you use the vagrant-hostmanager plugin (vagrant plugin install vagrant-hostmanager), which helps manage /etc/hosts from inside the Vagrant boxes. Furthermore, make sure you set a different hostname for each of your VMs. By default, if you use the same base box, they will probably end up with the same hostname (config.vm.hostname in your Vagrantfile, or the more specific version if you define a cluster inside your Vagrantfile).

Configure the replication set

First of all, you need to tell MongoDB to use the replication feature. If not, you will end up with messages like:

> rs.initiate()
{ "ok" : 0, "errmsg" : "server is not running with --replSet" }

You just need to update your /etc/mongodb.conf to add a line like so:

replSet=spof

This is the config option that enables replication. All servers in your replica set must have the same option, with the same value.

On a side note, in the same file, make sure you are not binding only to 127.0.0.1, or you will have trouble getting your 2 mongo instances to talk to each other.

The sad thing is that mongo cannot reload its config file:

root@debian-800-jessie:~# service mongodb reload
[warn] Reloading mongodb daemon: not implemented, as the daemon ... (warning).
[warn] cannot re-read the config file (use restart). ... (warning).

Right. So a restart is needed:

service mongodb restart

You can now connect to your mongo shell the usual way, and initialise the replica set. This follows part of the tutorial explaining how to convert a standalone server to a replica set. Just type in the mongo shell:

rs.initiate()

and the (1 machine) replica set is now operational.

You can check this easily:

> rs.conf()
{
  "_id" : "spof",
  "version" : 1,
  "members" : [
    {
      "_id" : 0,
      "host" : "debian-800-jessie:27017"
    }
  ]
}

On your new server (with an empty mongo), make sure that you add the replSet line to mongodb.conf as well.

Note that the hostname must be resolvable from the other machines of the cluster. If MongoDB somehow picked the local hostname, your replica set will just not work. If the local hostname has been picked up, see the reconfig fix below.

You are now ready to add the second server to the set.

spof:PRIMARY> rs.add("server2.example.com:27017")
{ "ok" : 1 }

We can check that all is fine:

spof:PRIMARY> rs.conf()
{
  "_id" : "spof",
  "version" : 3,
  "members" : [
    {
      "_id" : 0,
      "host" : "debian-800-jessie:27017"
    },
    {
      "_id" : 1,
      "host" : "server2.example.com:27017"
    }
  ]
}

If the local hostname was chosen, the easiest option is to fully reconfigure the replica set from within the mongo shell, based on your current configuration:

// get the current configuration object
cfg = rs.conf()
// update the current machine to use the non-local name
cfg.members[0].host = "server1.example.com:27017"
// fully add server 2
cfg.members[1] = {"_id": 1, "host": "server2.example.com:27017"}
// use this new config
rs.reconfig(cfg)

OK, we are all good, and the replica set is properly set up. What happened on server2, which was empty when we started?

on server2:

spof:SECONDARY> use events
spof:SECONDARY> db.events.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

Hmm, what does that mean? In short, there can be a replication delay between the primary and the secondary, so by default mongo disables reads from a secondary to make sure you always read up-to-date data. You can read more about read preferences, but to tell mongo that yes, you know what you are doing, just issue the rs.slaveOk() command:

spof:SECONDARY> rs.slaveOk()
spof:SECONDARY> db.events.find()
{ "_id" : ObjectId("556d550b59a5fb8615044c72"), "name" : "relevant" }

Success! (In this vagrant example, there was only one document in the collection).

In real life, if the secondary needs to sync a lot of data, it will stay in state STARTUP2 for a long time, as you can see via rs.status(). In the log files of the new secondary, you can see the progress per collection. It will then move to RECOVERING, to finally become SECONDARY, which is when it will start accepting connections.

Switch primaries

We are all set; you waited long enough for the secondary to be in sync with the primary. What now? We first need to switch the primary and secondary roles. This can be done easily by changing the priorities:

spof:PRIMARY> cfg=rs.conf()
spof:PRIMARY> cfg.members[0].priority=0.5
0.5
spof:PRIMARY> cfg.members[1].priority=1
1
spof:PRIMARY> rs.reconfig(cfg)
spof:SECONDARY>

As you can see, your prompt changed from primary to secondary.

From this moment on, all connections to your now-secondary will succeed, but you will not be able to do much (a secondary cannot write, and remember slaveOk()). You must thus make sure that your clients connect to the new primary, or that you know a connection is read-only, in which case you can use slaveOk(). This switchover will be your last downtime.

Clean up

You can tell your new primary that the old primary, now a secondary, is not needed anymore:

rs.remove('server1.example.com:27017')

Note that if you switch the secondary off (service mongodb stop), the primary will step down to secondary as well, as it cannot guarantee that it is in a coherent state. This is what you get from using a replica set with only 2 machines.

You can now dispose of your old primary as you wish.

If you want to play around with your old primary, you will be out of luck to start with:

"not master or secondary; cannot currently read from this replSet member"

It will of course be obvious that you need to remove the replSet value from mongodb.conf and restart the server. Sadly, you will then be greeted by another, longer message when you connect:

Server has startup warnings: 
Wed Jun 3 13:10:44.435 [initandlisten] 
Wed Jun 3 13:10:44.435 [initandlisten] ** WARNING: mongod started without --replSet yet 1 documents are present in local.system.replset
Wed Jun 3 13:10:44.435 [initandlisten] ** Restart with --replSet unless you are doing maintenance and no other clients are connected.
Wed Jun 3 13:10:44.435 [initandlisten] ** The TTL collection monitor will not start because of this.
Wed Jun 3 13:10:44.435 [initandlisten] ** For more info see http://dochub.mongodb.org/core/ttlcollections
Wed Jun 3 13:10:44.435 [initandlisten]

Well, the solution is almost obvious from the error message: if there is a document in local.system.replset, let's just remove it!

> use local
switched to db local
> db.system.replset.find()
{ "_id" : "spof", "version" : 4, "members" : [ { "_id" : 1, "host" : "server1.example.com:27017" } ] }
> db.system.replset.remove()
> db.system.replset.find()
>

Once you exit and reconnect to MongoDB, all will be fine, and you will have your nice standalone server back.