Create and apply a patch from a github pull request

You have a local clone of a github repository; somebody created a pull request against this repository, and you would like to apply it to your clone before the maintainer of the origin actually merges it.

How to do that?

It is actually surprisingly neat. The url of a github PR follows this pattern:

https://github.com/<owner>/<repo>/pull/<number>

You can just add '.patch' at the end of the url to get a nicely formatted email patch:

https://github.com/<owner>/<repo>/pull/<number>.patch

From there on, you have a few options. If you download the patch (say, as pr.patch) at the root of your clone, you can apply it:

git am ./pr.patch

If you want to apply the code changes without actually applying the commits, you can use your old trusty patch command:

patch -p1 < ./pr.patch

If you are lazy (as my director of studies always said, 'laziness drives progress'), you can do it all in one line:

wget -q -O - 'https://github.com/<owner>/<repo>/pull/<number>.patch' | git am
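
As an aside, github also exposes every pull request as a read-only git ref, so if you would rather fetch the PR with its actual commits instead of going through a patch file, something like this works (pr-123 is just a local branch name made up for the example):

git fetch origin pull/123/head:pr-123
git checkout pr-123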

Understand and generate unix passwords

I needed to generate an entry in the /etc/shadow file, without using the standard tools. It turned out it was not a good idea for my needs, but in the meantime I learned a lot about the format of this file and how to manually generate passwords.

Shadow file format

User passwords are stored in the file /etc/shadow. The format of this file is extensively described in the shadow man page. Each line is divided in 9 fields, separated by colons, but we are only interested in the first 2 fields, namely username and password.
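
As an illustration, here is what a full entry can look like (made-up user and dates, with the hash we will generate below):

jdoe:$6$pepper$P9Wt3.3Uqh9UZbvz5/6UPtHqa4KE/2aeyeXbKm0mpv36Z5aCBv0OQEZ1e.aKcPR6RBYvQIa/ToAfdUX6HjEOL1:16000:0:99999:7:::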

The password field itself is split in 3 parts, divided by $, for instance:

$6$pepper$P9Wt3.3Uqh9UZbvz5/6UPtHqa4KE/2aeyeXbKm0mpv36Z5aCBv0OQEZ1e.aKcPR6RBYvQIa/ToAfdUX6HjEOL1
What does that mean?

The first field (in our example $6) tells us which hash function is used. It will usually be 6 (SHA-512) under linux. The possible values can be found in man crypt; they are as follows:

 ID  | Method
 1   | MD5
 2a  | Blowfish (not in mainline glibc; added in some
     | Linux distributions)
 5   | SHA-256 (since glibc 2.7)
 6   | SHA-512 (since glibc 2.7)

The second field is a salt. It is used to make sure that 2 users with the same password end up with 2 different hashes. It is randomly generated at password creation but, as we will see later, it can technically be set manually. Wikipedia has more information about the cryptographic salt.
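
You can see the effect easily: the same password hashed with 2 different salts gives 2 hashes that have nothing in common (MD5 via openssl here, for brevity):

openssl passwd -1 -salt pepper very_secure
openssl passwd -1 -salt sugar very_secure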

The third field is the password with the salt (second field) concatenated, hashed using the function defined in the first field.

How to generate a valid password entry?

For ID 1 (MD5) you can use openssl:

openssl passwd -1 -salt $salt $pwd
# outputs: $1$pepper$XQ7/iG9IT0XdBll4ApeJQ0

You cannot generate a SHA-256 or SHA-512 hash that way, though. The easiest and most generic way is then to use python or perl. For a salt 'pepper' and a password 'very_secure', both lines yield the same result:

python -c 'import crypt; print crypt.crypt("very_secure", "$6$pepper")'
# There is more than one way to do it
perl -e 'print crypt("very_secure", "\$6\$pepper") . "\n"'

# outputs: $6$pepper$P9Wt3.3Uqh9UZbvz5/6UPtHqa4KE/2aeyeXbKm0mpv36Z5aCBv0OQEZ1e.aKcPR6RBYvQIa/ToAfdUX6HjEOL1
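
Incidentally, this is also how you verify a password: crypt only reads the $id$salt$ prefix of its second argument, so passing the full stored hash as the salt returns that very same string if, and only if, the password is correct:

python -c 'import crypt; print crypt.crypt("very_secure", "$6$pepper$P9Wt3.3Uqh9UZbvz5/6UPtHqa4KE/2aeyeXbKm0mpv36Z5aCBv0OQEZ1e.aKcPR6RBYvQIa/ToAfdUX6HjEOL1")'
# prints the stored hash again only if the password matches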

Disco: purge all completed jobs

In Disco, jobs that are not purged still use some disk space for temporary data. This can end up filling all the disk space in your cluster. I have been there, and I promise you it is not fun.

You can set up a purge policy (a job might purge itself after completion, for instance), but if you need to quickly clean up all jobs, this snippet will help. It will purge all completed jobs, successful or not.

for job in $(disco jobs); do
  disco events $job | grep -q "WARN: Job killed\|READY"
  if [ $? -eq 0 ]; then
    disco purge $job
  fi
done
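
If you would rather check what would be purged before actually doing it, the same loop with an echo instead of the purge is a harmless dry run:

for job in $(disco jobs); do
  disco events $job | grep -q "WARN: Job killed\|READY" && echo "$job"
done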

Disco: replicate all data away from a blacklisted node

Before removing a node from Disco, the right way to proceed is to blacklist it, replicate its data away, and finally remove it from the cluster. Replicating data away is done by running the garbage collector. Unfortunately, the garbage collector does not migrate everything in one go, so a few runs are needed. To avoid doing this manually, the following script runs the garbage collector as often as needed, as long as some nodes are blacklisted but not yet safe for removal. The full script can be found on github.

#!/usr/bin/env bash

# Will check if there are nodes blacklisted for ddfs but not fully replicated yet.
# If this is the case, it will run the GC as long as all data is not replicated away.

# debug
#set -x

# Treat unset variables as an error when substituting.
set -u

# master (host and port of your Disco master; 8989 is the default port)
MASTER="http://localhost:8989"

# API commands (the exact endpoint paths are assumptions here and may differ
# between Disco versions; see the full script on github for the real ones)
GC_STATUS="$MASTER/ddfs/ctrl/gc_status"
GC_START="$MASTER/ddfs/ctrl/gc_start"
BLACKLIST="$MASTER/disco/ctrl/get_gc_blacklist"
SAFE_GC="$MASTER/disco/ctrl/safe_gc_blacklist"

# counter to mark how many times the GC ran.
CNT=0

function is_running {
    # will get "" if GC not running, or a string describing the current status.
    _GC_RES=$(wget -q -O- $GC_STATUS)
    if [ "$_GC_RES" == '""' ]; then
        _GC_RES=''
    fi
    echo $_GC_RES
}

function is_safe {
    _BLACKLISTED=$(wget -q -O- $BLACKLIST)
    _SAFE=$(wget -q -O- $SAFE_GC)

    # eg.
    # blacklisted:  ["slave1","slave2","slave3"]
    # safe_gc_blacklist: []

    # safe is a subset of blacklisted. If we concat the 2 (de-jsonised) and keep
    # only unique entries, we have 2 cases:
    # - no uniques => all nodes are safe (in blacklist *and* in safe)
    # - uniques => some nodes are not safe
    echo "$_BLACKLISTED $_SAFE" | tr -d '[]"' | tr ', ' '\n' | sort | uniq -u
}

while true; do

    GC_RES=$(is_running)
    NON_SAFE=$(is_safe)

    if [ -z "$GC_RES" ]; then
        echo "GC not running, let's check if it is needed."
        if [ -z "$NON_SAFE" ]; then
            echo "All nodes are safe for removal."
            break
        else
            echo "Some nodes are not yet safe: $NON_SAFE"
            date +'%Y-%m-%d %H:%M:%S'
            wget -q -O /dev/null $GC_START
            CNT=$((CNT + 1))
            echo "Run $CNT started."
        fi
    else
        echo "GC running ($GC_RES). Let's wait."
    fi
    sleep 60
done
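
A full drain can easily take hours, so you will probably want to run this detached from your terminal, for instance (replicate_away.sh being whatever name you saved the script under):

nohup ./replicate_away.sh > replicate_away.log 2>&1 &
tail -f replicate_away.log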

How to build a full flume interceptor by a non-java developer

Building a flume interceptor does not look too complex. You can find some examples, howtos or cookbooks around. The issue is that they all explain the specificities of the interceptor, leaving a python/perl guy like me in the dark about maven, classpaths or imports. This post aims to correct that. It describes the bare minimum to get it working, but I link to more documentation if you want to go deeper. You can find the code on github as well.

The interceptor itself

You need to create a directory structure in which to write the following java code. In this example, it needs to be under src/main/java/com/example/flume/interceptors/, for an interceptor named com.example.flume.interceptors.eventTweaker.
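
Something like this creates the expected layout:

mkdir -p src/main/java/com/example/flume/interceptors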

package com.example.flume.interceptors;

import java.io.UnsupportedEncodingException;
import java.util.List;
import java.util.Map;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import org.apache.log4j.Logger;

public class eventTweaker implements Interceptor {

  private static final Logger LOG = Logger.getLogger(eventTweaker.class);

  // private means that only Builder can build me.
  private eventTweaker() {}

  public void initialize() {}

  public Event intercept(Event event) {

    Map<String, String> headers = event.getHeaders();

    // example: add / remove headers
    if (headers.containsKey("lice")) {
      headers.put("shampoo", "antilice");
      headers.remove("lice");
    }

    // example: change body (the transformation itself is purely illustrative)
    String body = new String(event.getBody());
    if (body.contains("injuries")) {
      try {
        event.setBody(body.replace("injuries", "bruises").getBytes("UTF-8"));
      } catch (UnsupportedEncodingException e) {
        LOG.warn("Could not re-encode event body", e);
        // drop event completely
        return null;
      }
    }

    return event;
  }

  public List<Event> intercept(List<Event> events) {
    // delegate to the single-event version
    // (NB: events dropped there are not removed from the list; good enough here)
    for (Event event : events) {
      intercept(event);
    }
    return events;
  }

  public void close() {}

  public static class Builder implements Interceptor.Builder {

    public Interceptor build() {
      return new eventTweaker();
    }

    public void configure(Context context) {}
  }
}

Maven configuration

To build the above, you need a pom.xml, to be put at the root of the aforementioned tree. The following is the bare minimum (version numbers are examples; align flume-ng-core with your flume install), which would probably make any java developer cringe, but It Works™.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.flume</groupId>
  <artifactId>eventTweaker</artifactId>
  <version>1.0</version>
  <repositories>
    <repository><id>central</id><name>Maven Central</name><url>https://repo1.maven.org/maven2</url></repository>
    <repository><id>cloudera</id><name>Cloudera CDH</name><url>https://repository.cloudera.com/artifactory/cloudera-repos/</url></repository>
  </repositories>
  <dependencies>
    <dependency><groupId>org.apache.flume</groupId><artifactId>flume-ng-core</artifactId><version>1.3.1</version></dependency>
    <dependency><groupId>log4j</groupId><artifactId>log4j</artifactId><version>1.2.17</version></dependency>
  </dependencies>
</project>

Once done, you can just type

mvn package

and a jar will be created for you under the target directory.

Tree example

After compilation, here is what my directory structure looks like:

├── pom.xml
├── src
│   └── main
│       └── java
│           └── com
│               └── example
│                   └── flume
│                       └── interceptors
│                           └── eventTweaker.java
└── target
    ├── classes
    │   └── com
    │       └── example
    │           └── flume
    │               └── interceptors
    │                   ├── eventTweaker$1.class
    │                   ├── eventTweaker$Builder.class
    │                   └── eventTweaker.class
    ├── maven-archiver
    │   └── pom.properties
    └── eventTweaker-1.0.jar

Jar destination

You need to put the jar (in this case target/eventTweaker-1.0.jar) in the flume classpath for it to be loaded. You can do so in 2 ways:

  1. By editing the FLUME_CLASSPATH variable in the flume-env.sh file.
  2. By adding it under the $FLUME_HOME/plugins.d directory. In this directory, put your jar at your_interceptor_name/lib/yourinterceptor.jar and voilà, it will be loaded automagically. See the example just below.
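
For instance, with the jar we just built (assuming a standard $FLUME_HOME layout):

mkdir -p $FLUME_HOME/plugins.d/eventTweaker/lib
cp target/eventTweaker-1.0.jar $FLUME_HOME/plugins.d/eventTweaker/lib/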

Flume configuration

This is the easiest bit. On your source definition, you just have to add an interceptor name and class:
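
For instance, for an agent named agent1 with a source named src1 (both names made up for this example); note that the type is the Builder inner class:

agent1.sources.src1.interceptors = tweaker
agent1.sources.src1.interceptors.tweaker.type = com.example.flume.interceptors.eventTweaker$Builder

Logs

You can see log statements in the usual flume.log file, probably under /var/log/flume-ng/flume.log.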