Implementing a non-relational database in Go

In a previous Damn Cool Algorithms post, I discussed log-structured storage and how it applies to databases. For a long time, I've wanted to implement a database based on log-structured storage, along with a few other nice mechanics from other database systems:

  • Tables are key:value mappings, with duplicate keys allowed (Bigtable, BDB).
  • Map-based views, also known as materialized views, for indexing (CouchDB); there's a sketch of this below.
  • Reducer support for views (CouchDB).
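
To make the view mechanism concrete, here's a rough sketch of what a CouchDB-style map function might look like in Go. All of these names (Record, MapFunc, byValue) are placeholders I've made up for illustration, not a finalized API:

type Record struct {
  Key   []byte
  Value []byte
}

// A map function inspects a single record and emits zero or more
// key:value pairs into the view's index.
type MapFunc func(rec *Record, emit func(key, value []byte))

// Example: a view that indexes records by their value, so that
// lookups by value become scans over the view's index.
func byValue(rec *Record, emit func(key, value []byte)) {
  emit(rec.Value, rec.Key)
}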

Since my previous posts about Go have been generally well received, and because I want to explore the language a bit more, I'll be implementing all this in Go. The approach I'd like to take is one of gradually building up abstractions. We'll tackle each of the components in its own post:

  1. An interface for writing records to an append-only file or set of files (sketched below).
  2. A B-Tree implementation, built on the record interface.
  3. Map-based / materialized views, based on the B-Tree implementation.
  4. Reducers for views.
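
As a taste of step 1, here's the general shape of the record interface I have in mind. Treat it as a sketch for discussion - the names and signatures are provisional, not a finished design:

// A sketch of the append-only record store from step 1.
type RecordWriter interface {
  // Append writes a record at the end of the log and returns
  // the offset it was written at.
  Append(data []byte) (offset int64, err error)
}

type RecordReader interface {
  // ReadRecord returns the record that was written at the
  // given offset.
  ReadRecord(offset int64) (data []byte, err error)
}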

Unlike previous series, this one is likely to be fairly fragmented. There's a fair chunk of functionality to be implemented here, so I won't be able to get it out at my usual three-posts-a-week schedule. In the meantime ...

DIY USB preloading with *nix

Having recently received a large number of USB flash drives, I needed a solution for preloading them in bulk. Dedicated USB preloading/flashing devices are pricey - starting at over 500 euro for a small model - and while the preload services most companies offer (including Memotrek, the company we ordered the drives from) are handy, they add an extra 50c or so to the price of each drive, and the preload is quickly out of date. With that in mind, I decided to go the DIY route. This post documents my attempts and the final (successful) result.

To start, you need a lot of USB ports. I purchased two D-Link DUB-H7 7-port USB hubs, but any hubs ought to do, as long as the spacing between the ports is sufficient to accommodate a flash drive in every port. You won't need the included power bricks, as the power provided by the USB host is sufficient even for 7 USB flash drives.

The general process of bulk flashing goes something like this:

  1. Plug in one of your drives. Wipe it with "dd if=/dev/zero of=/dev/your-drive bs=1M", partition and format it, and write the data you want ...

Implementing a DHT in Go, part 2

In the previous post, we started a straightforward implementation of a Kademlia Distributed Hash Table in Go. Today, we'll add in the real meat - interaction between peers to find nodes.

First, though, a correction to the previous post. There were a couple of errors in it, the most notable of which is that the routing table's FindClosest method failed to order the returned results correctly. The original implementation ordered results by their absolute node ID, but the correct ordering is by their ID xor the target ID. To my shame, my rudimentary unit tests did not catch this. This is now fixed in the original article with the introduction of a 'ContactRecord' struct.
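
To make the corrected ordering concrete, here's a sketch of comparing contacts by XOR distance. The ContactRecord field names here are illustrative - see the updated part 1 for the real definitions:

import "bytes"

type NodeID [20]byte

// xor computes the Kademlia distance between two IDs.
func xor(a, b NodeID) (ret NodeID) {
  for i := range a {
    ret[i] = a[i] ^ b[i]
  }
  return
}

type ContactRecord struct {
  node    *Contact
  sortKey NodeID // the contact's ID xor the target ID
}

// Less reports whether a is closer to the target than b,
// comparing XOR distances as big-endian byte strings.
func (a *ContactRecord) Less(b *ContactRecord) bool {
  return bytes.Compare(a.sortKey[:], b.sortKey[:]) < 0
}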

Let's start by defining a Kademlia struct to hold information about a Kademlia network:

type Kademlia struct {
  routes    *RoutingTable // this node's routing table
  NetworkId string        // distinguishes this deployment from other networks
}

func NewKademlia(self *Contact, networkId string) (ret *Kademlia) {
  ret = new(Kademlia)
  ret.routes = NewRoutingTable(self)
  ret.NetworkId = networkId
  return
}

Note the presence of the 'NetworkId' field in the above code. This is an arbitrary string that should be unique for each deployment of our Kademlia implementation, to prevent different instances of the network merging together.

Go supports RPC calls using its built-in ...
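
The post is cut off here, but to give a flavour of where it's headed: Go's standard rpc package exposes exported methods with an (args, reply) signature, and the NetworkId check slots naturally into each handler. The sketch below is written against the current net/rpc package, and the names (KademliaCore, PingRequest) are ones I've invented for illustration:

import (
  "errors"
  "net"
  "net/rpc"
)

// Hypothetical message types; every request carries the sender's
// NetworkId so peers from other deployments can be rejected.
type PingRequest struct {
  NetworkId string
}

type PingResponse struct {
  NetworkId string
}

// KademliaCore wraps a Kademlia so that only RPC-safe methods
// are exported to the network.
type KademliaCore struct {
  kad *Kademlia
}

// Ping is remotely callable as "KademliaCore.Ping".
func (kc *KademliaCore) Ping(args *PingRequest, response *PingResponse) error {
  if args.NetworkId != kc.kad.NetworkId {
    return errors.New("wrong network")
  }
  response.NetworkId = kc.kad.NetworkId
  return nil
}

func serve(kad *Kademlia, addr string) error {
  if err := rpc.Register(&KademliaCore{kad}); err != nil {
    return err
  }
  listener, err := net.Listen("tcp", addr)
  if err != nil {
    return err
  }
  go rpc.Accept(listener) // serve connections in the background
  return nil
}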

Implementing a DHT in Go, part 1

In order to further explore the capabilities and limitations of Go, I thought it would be interesting to try implementing something that was practical, non-trivial, and of interest on its own. With that in mind, we're going to spend the next few posts creating an implementation of the Kademlia DHT in Go.

A DHT, or Distributed Hash Table, is a peer-to-peer networking primitive that, in its most basic formulation, permits storage and lookup of key/value pairs - that is, it's a hash table that's distributed across many nodes. The name is not entirely accurate for some newer DHTs, as some formulations permit many other operations besides those focused on data storage and retrieval.

Kademlia is a good example of a basic DHT, because unlike some competing algorithms, it's extremely simple. There are no explicit routing update messages, and the internal state it maintains is fairly straightforward and easy to understand. Lookups are also accomplished in an obvious - yet very efficient - manner. In return for this simplicity, Kademlia sacrifices a few of the features of competitors like Pastry and Chord - it's not as practical to implement other primitives such as pubsub over it.

The reference we ...

Recursion and concurrency with Go

Eager to jump on the bandwagon, I've been reading up on Go, the new language released by a group at Google. I had nothing to do with the development of the language, but several things about it interest me - in particular, its approach to concurrency.

Some time ago I came across a rather neat language (or rather, language extension) called Cilk. Cilk is an extension of ANSI C with concurrency primitives. It implements a concurrency model based on 'spawning' functions, which return a deferred result, and then 'syncing' in order to obtain the results. Most interestingly, Cilk uses a 'work stealing' scheduler, which means that spawning a function is nearly as cheap as simply calling it.
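
Go has no spawn and sync keywords, but goroutines and channels give a close analogue: the channel plays the role of Cilk's deferred result. Here's a minimal sketch of the pattern, with naive Fibonacci standing in for real work:

// fib 'spawns' its first recursive call as a goroutine and
// 'syncs' by receiving from the channel. In practice you'd
// stop spawning below some depth cutoff - goroutines are
// cheap, but not free.
func fib(n int) int {
  if n < 2 {
    return n
  }
  result := make(chan int)
  go func() { result <- fib(n - 1) }() // spawn
  y := fib(n - 2)                      // keep working locally
  return y + <-result                  // sync
}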

One of the demonstrations of Cilk that really captured my imagination was using it to search a game tree for perfect-knowledge games such as chess. Parallelizing tree search is generally a really hard problem, because most of the function invocations only do a small amount of work, and context switching and thread creation overhead often overwhelms any benefits gained from parallelization. Cilk's work-stealing strategy made it possible to parallelize these algorithms in an intuitive fashion and still see performance improvements from multiple ...