A History of Transaction Histories

I’ve been trying to understand database transactions for a long time, and recently spent some time researching this with Justin Jaffray. Here’s an attempt to summarize what we’ve learned.

One recurring challenge is that understanding transactions has been a decades-long journey for the database and distributed-infrastructure community, and reading the literature requires understanding that context. Otherwise it’s jarring to run into contradicting terminology and contradicting intuitions. To clarify this, I’ve written down how the discourse around database transactions has evolved.

In the beginning (1990?), database connections were nasty, brutish, and short. Under the law of the jungle, anarchy reigned and programmers had to contend with numerous anomalies. To deal with this, the ANSI SQL-92 standard made the first attempt to give isolation levels a uniform definition. It chose the following path:

  1. There are a few known anomalies, namely dirty reads, non-repeatable reads, and phantom reads (the toy sketch after this list illustrates all three).
  2. They recognized that there are some locking strategies that prohibit some or all of these anomalies. These they called “read uncommitted” (the jungle), “read committed” (no dirty reads), and “repeatable read” (no dirty reads or non-repeatable reads).
  3. The final level of enlightenment was the isolation level that didn’t have any of the three anomalies. They called it serializable, but like… let’s just say that was a flaw in this plan.
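To make those three concrete, here’s a toy sketch in Python (a plain dict standing in for a database with no isolation at all; none of this comes from the standard, it’s just an illustration):

    # A toy illustration of the three ANSI anomalies, using a plain dict as a
    # "database" with no isolation whatsoever.
    db = {"x": 10}

    # Dirty read: T2 sees T1's uncommitted write.
    db["x"] = 20            # T1 writes x = 20 but has not committed yet
    dirty = db["x"]         # T2 reads 20, a value that is about to be rolled back
    db["x"] = 10            # T1 aborts and rolls back
    print("dirty read saw:", dirty)          # 20, which never officially existed

    # Non-repeatable read: T2 reads the same row twice and gets different values.
    first = db["x"]         # T2 reads 10
    db["x"] = 30            # T1 updates the row and commits in between
    second = db["x"]        # T2 reads 30
    print("non-repeatable read:", first, "!=", second)

    # Phantom read: T2 runs the same predicate query twice and a new row appears.
    rows_before = [k for k in db if k.startswith("x")]
    db["x2"] = 99           # T1 inserts a row matching the predicate and commits
    rows_after = [k for k in db if k.startswith("x")]
    print("phantom:", rows_before, "->", rows_after)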

But serializability is one of the older gods. The definition goes back to the 1970s and is stated in terms of transaction histories. To quote Christos Papadimitriou from one of the earlier papers that discussed transactions:

A sequence of atomic user updates/retrievals is called serializable essentially if its overall effect is as though the users took turns, in some order, executing each their entire transaction indivisibly.

I personally find reading papers from the era of scanned typewritten sheets hard, so I’m going to skip ahead and say that Herlihy-Wing is a decent place to understand serializability. But the idea is surprisingly simple: every transaction happens as if it happened at some instantaneous point in time, and there’s some order between them (the definition doesn’t even say that this order has to be apparent to the users, or to the transactions themselves! It just has to exist, ex post facto).
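To make that concrete, here’s a toy sketch (plain Python, nothing from Herlihy-Wing) of an interleaved execution that is serializable because it’s equivalent to some serial order, and notably not the order in which the transactions started:

    db = {"x": 0, "y": 0, "z": 0}

    x = db["x"]          # T1: read x = 0
    y = db["y"]          # T2: read y = 0, before T1 touches y
    db["y"] = x + 1      # T1: write y = 1 and commit
    db["z"] = y + 1      # T2: write z = 1 and commit

    # Running T2 entirely, then T1 entirely, observes and produces exactly the
    # same values, so this interleaving is serializable. The equivalent serial
    # order only has to exist after the fact; nobody had to know it up front.
    print(db)            # {'x': 0, 'y': 1, 'z': 1}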

A few years later, folks from Microsoft’s database group wrote a great paper called A Critique of ANSI SQL Isolation Levels, which points out that the ANSI SQL definition of serializable is… not serializable? The way ANSI thinks about it, if you eliminate dirty reads, non-repeatable reads, and phantom reads, you’re serializable. This is wrong. It turns out you can eliminate all three of those anomalies and still not be serializable: the Critique exhibits exactly such a level, which they called Snapshot Isolation.
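The classic counterexample is write skew. Here’s a toy sketch (the on-call doctors scenario that shows up all over this literature, not anything verbatim from the Critique): both transactions read from the same consistent snapshot, write disjoint rows, and commit without a dirty, non-repeatable, or phantom read anywhere, yet the result matches no serial order.

    snapshot = {"alice_on_call": True, "bob_on_call": True}   # both doctors on call
    db = dict(snapshot)

    # T1: Alice may go off call as long as someone else is on call (per her snapshot).
    if snapshot["bob_on_call"]:
        db["alice_on_call"] = False     # T1 writes only Alice's row

    # T2: Bob runs the same check against the same snapshot.
    if snapshot["alice_on_call"]:
        db["bob_on_call"] = False       # T2 writes only Bob's row

    # Disjoint write sets, so snapshot isolation happily commits both.
    print(db)   # {'alice_on_call': False, 'bob_on_call': False}: nobody is on call,
                # an outcome no serial execution of T1 and T2 could produce.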

The other valuable contribution of the Critique is to point out that defining isolation in terms of “the absence of specific anomalous phenomena” is bad, and like, can we please start talking about transaction histories? The literature of this period talks about isolation levels by defining specific locking strategies and then proving that those strategies can or cannot exhibit various anomalies. This is confusing, because you can have multiple locking strategies with functionally equivalent isolation levels, but the names are attached to the strategies, and so you end up with multiple names for the same isolation level.

In comes a knight in shining armor to clear up this mess: Atul Adya, with his PhD thesis Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions. Adya starts by looking at transaction histories. His strategy is essentially, “here are some histories, let’s study the graphs”. Then he identifies various weird shapes in the graphs (honestly, mostly cycles of various lengths?), and says “these are bad”. Then he maps each badness back to the ANSI definitions, sort of retroactively giving some coherent mathematical meaning to the ANSI isolation levels. This is very valuable! We finally have precise definitions for isolation levels, ones that are independent of whichever locking strategy happens to live at that level!
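Here’s a rough sketch of the flavor of it (my own toy code, glossing over versions and most of Adya’s actual formalism): turn a history into a graph of ww / wr / rw dependency edges between transactions, and the “weird shapes” are cycles in that graph.

    def dependency_edges(history):
        # history: ordered list of (txn, op, key), with op in {"r", "w"}
        kinds = {("w", "w"): "ww", ("w", "r"): "wr", ("r", "w"): "rw"}
        edges = set()
        for i, (t1, op1, key1) in enumerate(history):
            for t2, op2, key2 in history[i + 1:]:
                if t1 != t2 and key1 == key2 and (op1, op2) in kinds:
                    edges.add((t1, t2, kinds[(op1, op2)]))
        return edges

    def has_cycle(edges):
        graph = {}
        for a, b, _ in edges:
            graph.setdefault(a, set()).add(b)
        visiting, done = set(), set()
        def dfs(node):
            if node in visiting:
                return True
            if node in done:
                return False
            visiting.add(node)
            found = any(dfs(nxt) for nxt in graph.get(node, ()))
            visiting.discard(node)
            done.add(node)
            return found
        return any(dfs(node) for node in list(graph))

    # The write-skew history from above: each transaction reads the row the other
    # one overwrites, giving a cycle made purely of rw (anti-dependency) edges.
    history = [("T1", "r", "bob"), ("T2", "r", "alice"),
               ("T1", "w", "alice"), ("T2", "w", "bob")]
    edges = dependency_edges(history)
    print(edges)              # {('T1', 'T2', 'rw'), ('T2', 'T1', 'rw')}
    print(has_cycle(edges))   # True: not serializable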

But besides giving meaning to the four ANSI levels, he also identifies subtler anomalies, e.g. G2, “anti-dependency cycles”, which live in the vast uncharted territory between Snapshot Isolation and Serializability (the write-skew history in the sketch above is exactly such a cycle).

Adya’s thesis is seminal in databases. It is the first time someone coherently gave mathematical definitions of isolation levels in terms of properties one can observe in a transaction history graph. The year is 1999. Infrastructure still sucks.

Now the next question comes up: what do we do with all these databases out in the wild (namely Oracle and Postgres; MySQL is still a toy at this point) that claim serializability but actually provide snapshot isolation, because they were built in a world of ANSI definitions, and now we know better?

Alan Fekete et al. have a great idea in 2005, which they call “Making Snapshot Isolation Serializable”. It essentially takes a vanilla snapshot isolation database and tweaks the application’s SQL statements so that every execution ends up serializable. They use TPC-C as a running example, because it has already been cleverly engineered to always be serializable, even when running on a snapshot isolation database. So you, the application programmer, even if you are forced to use a Snapshot Isolation database, can get serializability with this One Weird Trick (Anomalies hate him!).
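To give the flavor (this is my own illustration, with made-up table and column names, not SQL from the paper): one of the paper’s fixes is “promotion”, where a read that participates in a dangerous pattern is turned into a write, so two transactions that used to have disjoint write sets now collide and snapshot isolation’s first-committer-wins check aborts one of them.

    # The write-skew-prone shape: a read-only check followed by a disjoint write.
    unsafe_txn = [
        "SELECT on_call FROM doctors WHERE name = 'bob';",
        "UPDATE doctors SET on_call = false WHERE name = 'alice';",
    ]

    # Promoted version: the read becomes an identity write (some engines can use
    # SELECT ... FOR UPDATE instead). Do the same to the symmetric transaction
    # and the two write sets now overlap, so one of them gets aborted instead of
    # silently committing write skew.
    promoted_txn = [
        "UPDATE doctors SET on_call = on_call WHERE name = 'bob';",
        "UPDATE doctors SET on_call = false WHERE name = 'alice';",
    ]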

Now this is a good idea, and Fekete and his collaborators extend it in 2008 in a paper called Serializable Isolation for Snapshot Databases. This paper basically runs with the same idea, except instead of having those checks written in application code, they do the checks at the database level. So you don’t have to rip out your transaction processing engine; you just layer on another set of checks and your database becomes serializable. The technique is called SSI (Serializable Snapshot Isolation), because apparently taking two technical definitions and combining them to name a completely new definition is not confusing at all.
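A stripped-down sketch of the heart of it (my own toy code, not PostgreSQL’s implementation, and skipping the commit-ordering details the real algorithm cares about): track incoming and outgoing rw anti-dependency edges per transaction, and when some transaction ends up with both, you’ve found the “dangerous structure”, so you abort somebody.

    class Txn:
        def __init__(self, name):
            self.name = name
            self.in_rw = False    # a concurrent txn read something this txn overwrote
            self.out_rw = False   # this txn read something a concurrent txn overwrote
            self.aborted = False

    def record_rw_conflict(reader, writer):
        # reader read a version that writer overwrites: edge reader --rw--> writer
        reader.out_rw = True
        writer.in_rw = True
        # Dangerous structure: T_in --rw--> pivot --rw--> T_out. Serializability
        # might be violated, so conservatively abort one of the participants.
        for pivot in (reader, writer):
            if pivot.in_rw and pivot.out_rw and not pivot.aborted:
                pivot.aborted = True
                return pivot
        return None

    # The write-skew pair again: each transaction reads what the other overwrites.
    t1, t2 = Txn("T1"), Txn("T2")
    record_rw_conflict(t1, t2)           # T1 read Bob's row, T2 overwrites it
    victim = record_rw_conflict(t2, t1)  # T2 read Alice's row, T1 overwrites it
    print(victim.name)                   # one of them aborts; the other commits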

This is such a good idea that some people decide to implement it in PostgreSQL, and Postgres finally has serializability. The year is 2012. Note that this is still not the default Postgres mode, because ANSI says snapshot isolation (which it labels serializable) is sufficient. This is a good point to note that the ANSI definitions still have not been fixed.

This strategy is extended in a paper called A Critique of Snapshot Isolation. This paper points out that the SSI algorithm can be simplified. To expand: snapshot isolation’s own check prevents write-write conflicts, and SSI layers its anti-dependency tracking on top of that. Yabandeh proposes instead detecting read-write conflicts directly. While this requires holding in-memory data structures that track the commit timestamps of recent writes, so that a committing transaction’s reads can be validated, it has far better concurrency-control behavior (for some common workloads): read-only transactions never abort, only update transactions do.
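A toy sketch of that check (mine, not the paper’s implementation; a plain global counter stands in for real timestamp management): remember the last commit timestamp of each key, and at commit time abort any update transaction whose read set was overwritten by someone who committed after it started. Read-only transactions skip the check entirely, so they never abort.

    last_commit_ts = {}   # key -> commit timestamp of the latest committed write
    clock = 0

    def try_commit(start_ts, read_set, write_set):
        global clock
        if write_set:     # read-only transactions commit unconditionally
            for key in read_set:
                if last_commit_ts.get(key, 0) > start_ts:
                    return None          # read-write conflict: abort the writer
        clock += 1
        for key in write_set:
            last_commit_ts[key] = clock
        return clock

    # Write skew again: both transactions start at ts=0, read the other's row,
    # and write their own. The second committer sees its read was overwritten.
    print(try_commit(0, read_set={"bob"}, write_set={"alice"}))    # 1: commits
    print(try_commit(0, read_set={"alice"}, write_set={"bob"}))    # None: aborts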

This is also finally the year when people start to wake up and realize they care about serializability, because Jeff Dean said it’s important. Michael Stonebraker, meanwhile, has been shouting about this for about 20 years and is very frustrated that people apparently only care when The Very Important Googlers say it’s important.

At this point academics have a lot of angst, because now we’re in this weird zone where

  1. according to the theory, serializability is The Only Way
  2. many people use garbage non-serializable databases and… are basically fine?
  3. Therefore, are we just, like, wasting our time? Does serializability even matter? What are we doing with our lives?

Peter Bailis sums this up best in a very fine blog post in 2014. To quote him:

Despite the ubiquity of weak isolation, I haven’t found a database architect, researcher, or user who’s been able to offer an explanation of when, and, probably more importantly, why isolation models such as Read Committed are sufficient for correct execution. It’s reasonably well known that these weak isolation models represent “ACID in practice,” but I don’t think we have any real understanding of how so many applications are seemingly (!?) okay running under them.

The retort is basically: you’re not always fine, and you’re going to find out that you’re not fine when it’s too late, so don’t do that. This is not an easy argument to make. It finally starts to catch on once people have been burned by NoSQL databases for about a decade.