ORC benchmarking

I need to use transactional tables in Hive, so I naturally use the ORC format. One limitation of these transactional tables is that they have to be bucketed (in Hive 2 at least; in Hive 3 my understanding is that bucketing is no longer mandatory, but the default is equivalent to having 1 bucket).
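
For reference, this is roughly what a Hive 2 transactional DDL looks like; the table and column names are made up for the example:

create table events (user_id bigint, hierarchy_level_1 int, hierarchy_level_2 int)
clustered by (user_id) into 4 buckets
stored as orc
tblproperties ("transactional" = "true");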

It is tricky to find a good answer on how to use buckets: how many? On which columns? The advice usually boils down to:

Buckets are something that should be done for a concrete problem, not just because you think you should have them. Normally I would not use them.

Which is not helpful when you actually have to use them.

So how to use buckets?

I did some benchmarking to try to get a valid answer. My setup was as follows:

  • Data:
    • A 1-billion-row table worth 3 months of data, with a compound (logical) key: a user id and a 4-level hierarchical identifier. This is about 15 GB (unreplicated) on disk.
    • 9 merges of 1 hour of data
  • Benchmarks (query shapes are sketched right after this list):
    • select count(*)
    • select count(*) where hierarchy_level_1=something
    • select 1 specific row
    • big select with a join on 2 other tables, resulting in 400k rows.
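
For concreteness, the four query shapes were roughly as follows; table and column names here are illustrative, not the real ones:

select count(*) from events;
select count(*) from events where hierarchy_level_1 = 1;
select * from events where user_id = 42 and hierarchy_level_1 = 1;
select e.user_id, d1.name, d2.name, count(*) as nb      -- the big join, ~400k rows out
from events e
join dim_one d1 on d1.id = e.hierarchy_level_1
join dim_two d2 on d2.id = e.hierarchy_level_2
group by e.user_id, d1.name, d2.name;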

All benchmarks were run 5 times, doing one iteration for all tables before starting the next iteration to prevent caching.

I tested a few variations:

  • transactional table or not,
  • partitioned or not (the partition key was the highest level of my hierarchical key; about 200 partitions, somewhat skewed),
  • 1 to 32 buckets,
  • bloom filters on levels #1 and/or #2 of my hierarchical key, and/or on the user id (declared as sketched right after this list),
  • as an extra test, I added some non-transactional tables, sorted on different fields.
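
Bloom filters on ORC tables are declared through table properties at creation time; a minimal sketch of such a declaration (column names are again illustrative):

create table events_bloom (user_id bigint, hierarchy_level_1 int, hierarchy_level_2 int)
clustered by (user_id) into 4 buckets
stored as orc
tblproperties (
  "transactional" = "true",
  "orc.bloom.filter.columns" = "hierarchy_level_1,hierarchy_level_2,user_id",
  "orc.bloom.filter.fpp" = "0.05"
);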

It is tricky to properly display the data here, so I show it as an image, but you can also find it as a PDF (ORC Benchmark – results) or see it on Google Sheets.


[Image: ORC benchmark results]

What gives?

I had some expectations:

  • More buckets would mean more files, so faster loads and faster queries, up to a point.
  • Buckets + partitions would create way too many files, at the cost of performance.
  • Proper bloom filters would make everything faster.

Well, looking at the results, here are the takeaways:

  • Bloom filters have basically no impact at all,
  • no partitioning usually helps,
  • 1 bucket is horrible (but that I did expect),
  • too many buckets combined with partitions is bad,
  • the sweet spot (for me) is partitioning + 4 buckets.

I am very disappointed because I had high hopes for bloom filters (I guess there is something I am doing wrong). The rest is more a confirmation of my expectations, now with numbers to prove them.

The worst of all? My sweet spot is the one I am already using, so after all those tests (which ran almost 3 days straight) I have no new information to speed up my queries. Well, the silver lining is that at least I now have data showing my current setup is the right one, so I can pat myself on the back.

The cluster I tested on: HDP 2.6, Hive 2, 8 data nodes (8 CPUs, 32 GB each) on AWS, doing nothing else apart from running these tests.

The cost of ACID with ORC tables

ACID introduction

ACID transactions (update, merge) in Hive are awesome. The merge statement especially is incredibly useful.

Of course, not all tables are ACID. You need to use ORC and have the table marked as transactional, but those are easy steps:

-- in Hive 2, a transactional table must also be bucketed
create table something (id bigint) clustered by (id) into 4 buckets stored as orc tblproperties("transactional"="true");

Of course, in HDFS you cannot change a file once it is created. The standard way (not Hadoop specific) to handle changing immutable files is to use deltas. Each table will consist of a few directories:

  • the base directory: the data at creation time,
  • one or more delta directories: contains updated rows.

Every hive.compactor.check.interval seconds a compaction will happen (or rather, the compactor will check whether a compaction must happen). The compactor merges the deltas and the base directory into one new base directory, with all the deltas applied to the original base.
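
You do not have to wait for the automatic compactor; a compaction can also be requested by hand. A small sketch (the table name is only for illustration):

alter table summary compact 'minor';   -- merge many small deltas into fewer, bigger ones
alter table summary compact 'major';   -- rewrite base + deltas into one new base directory
show compactions;                      -- list queued and running compactions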

Compaction matters because reading an ACID table with many deltas is a lot more expensive than reading a base directory alone: Hive has to go through each and every delta. This has IO and CPU costs, which disappear after compaction.

Naive ACID use

Every day I build a summary table gathering all the data that changed in the last 24h, as well as some related data. Many events are aggregated together. Think for instance about sending an email: I would get send data, open data, maybe click data, bounce data and a few others. I started building the summary following the temporal flow:


create table summary (id bigint, number_sent bigint, number_open bigint...) stored as orc tblproperties("transactional"="true");

insert into summary select .... from sent;

merge into summary select ... from open;

merge into summary select ... from click;

...

Overall a few billion rows will be read. The final summary table will have about 100 million rows.
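
As a point of reference, one of these merge statements looks roughly like the sketch below. I am assuming a simplified summary schema (id, number_sent, number_open) and an open table keyed by user id; the real statements are of course bigger:

merge into summary
using (select user_id, count(*) as nb_open from open group by user_id) o
on summary.id = o.user_id
when matched then update set number_open = o.nb_open
when not matched then insert values (o.user_id, 0, o.nb_open);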

What is interesting here is that I am inserting the biggest data first. This table sums up the reads and writes per event while building the whole summary, which ran for about 4 hours:

Event        Bytes read (GB)    Bytes written (GB)
Total        516.5              104.1
Sent         16.2               87.1
Open         88.8               14.2
Click        101.5              1.7
Conversion   102.9              0.01
Bounce       103                1
Spam         104                0.11

Seeing 500 GB read scared me a little, so instead of following the naive temporal flow, I started with the smallest event and finished with the biggest:

Event        Bytes read (GB)    Bytes written (GB)
Total        31.5               99.1
Conversion   0                  0
Spam         0                  0
Click        0.3                1.5
Bounce       1.7                1
Open         4.4                13.3
Sent         25.1               83.4

That’s much better already! The total number of bytes written does not change much (quite logical I suppose as the final data is the same) but the number of bytes read is only 6% of the original! Furthermore, it ran in 2h40 instead of 4 hours.

I added one last step. This summary data was written at user level, but I actually needed one extra aggregation on top of it. I was worried about joining against the user table at every step, as the user table is quite big and joins are expensive. But well, I experimented, doing the aggregation at each step instead of doing one big aggregation at the end (a sketch of this pattern follows after the table):

Event        Bytes read (GB)    Bytes written (GB)
Total        20.5               8.6
Conversion   0.2                0
Spam         1.2                0
Click        1.4                0.2
Bounce       1.5                0.2
Open         3.5                1.7
Sent         12.7               6.4

Total run time: 1.5 hours!
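
In SQL terms, the change is that each merge now joins its (small) batch of events against the user table and aggregates to the final level inside the using clause, instead of one big join and group by at the end. A hedged sketch, assuming a users (user_id, account_id) mapping table and an account-level summary:

merge into summary_account
using (
  select u.account_id, count(*) as nb_open
  from open o
  join users u on u.user_id = o.user_id
  group by u.account_id
) agg
on summary_account.account_id = agg.account_id
when matched then update set number_open = agg.nb_open
when not matched then insert values (agg.account_id, 0, agg.nb_open);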

TL;DR

When using ACID, deltas are expensive. When using HDFS, writes are expensive. Order your processing to have as little of those as possible.