08 October, 2023
Building a high performance financial ledger with Postgres and Kotlin
It's been a few months since my cofounder and I decided to call time on our fundraise and as a result our journey building Nanumo. We fought as hard as we could to build what we (still) believe is a valuable product in an underserved market but optimism and graft can only take you so far. The stack of evidence suggesting we were working on the wrong thing grew too big to ignore and rather than wait for it to crush us in an avalanche, we called time.
Even though we only spent a year working on it, folding Nanumo has left me with a lot to process. Lessons, scars, things to repeat next time and mistakes to hopefully leave behind. On the one hand it's been the hardest thing I've ever done but on the other I'm grateful to have survived and to have retained the fight to try again.
Right now I'm still in the process of going over the debris (psychological and technological) and working out what to save. One of the things I think we got right was our ledger and it's an interesting illustration of some data modelling and type system basics that might be of interest to people needing to build something similar.
Ledger basics
The principles of double entry accounting have remained unchanged since their derivation, possibly as early as the 8th or 9th century, and their subsequent documentation by Luca Pacioli in the 15th century. A ledger is at the heart of such a system, and anyone building a product that moves money should start by reading the excellent accounting for software engineers series, which covers the basic principles of double entry accounting. Without rehashing the basics, the critical intuition is that from an accounting perspective value can never be created nor destroyed, only moved from one place to another.
Without getting too precise, we call those places accounts and divide those accounts into sources of money (credit normal accounts) and uses of money (debit normal accounts). We record movements between accounts as entries in the ledger and by summing up all the entries in the ledger for account a from the beginning of time until time t, we obtain the balance of the account a at t.
When we record a transaction in the ledger we have to obey one rule: the balance of debits and credits must be zero for each transaction (and by extension for the ledger as a whole, although not for each account). This doesn't guarantee that our transactions will be recorded correctly but it does catch one class of error. Note that this also implies that any transaction which actually moves money must consist of at least two entries.
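To make those two rules concrete, here's a minimal in-memory sketch (nothing like our production schema; the signed-amount convention of positive debits and negative credits is just one common choice):

import java.time.Instant

// One movement of value in or out of an account, in minor units (e.g. pence).
// Signed amounts: debits positive, credits negative.
data class LedgerEntry(
    val account: String,
    val amountMinor: Long,
    val recordedAt: Instant
)

// The balance of an account at time t is the sum of its entries up to t.
fun balance(ledger: List<LedgerEntry>, account: String, t: Instant): Long =
    ledger.filter { it.account == account && !it.recordedAt.isAfter(t) }
        .sumOf { it.amountMinor }

// A transaction is only valid if its entries net to zero.
fun isBalanced(entries: List<LedgerEntry>): Boolean =
    entries.sumOf { it.amountMinor } == 0L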
The purpose of this article isn't to go into detail about how ledgers work (see the above link for the full rundown on that); our ledger table just serves as a useful example for the schema design we used to store our transactions.
Transactions
For users of our ledger, transactions are the atomic unit. They're a useful level of abstraction as they map to the kinds of transactions we're used to dealing with every day. Things like bank transfers, e-money payments (e.g. Paypal, Venmo) and cash deposits are all represented as transactions in our data model. Most transactions consist of multiple movements of money (entries in the ledger). As well as moving a principal, there can be fees, holds, settlement delays, reversals, currency conversions and so on. Each of these involves a separate entry in the ledger, but they all relate to a single transaction.
Although there's data that's common to all transactions (though, as we came to realise, not as much as you might think), most data is specific to a certain transaction type, e.g. the last 4 digits of the card number for card payments, or account numbers and sort codes for bank transfers. Over my career I've seen 3 different approaches for storing this in a relational database:
- The supertable. Store all transactions in one table that includes columns for every datapoint of every transaction type. This makes querying transactions easy (they're all in one place) and adding new transaction types fairly easy (just add more columns), but it means we can no longer enforce nullability constraints on any of the transaction-specific columns, the table tends to get wide fast, and the potential blast radius of a bad DDL migration is significant.
- The supertable with a JSON "metadata" column. Similar to the above except we shove all transaction-specific data into a JSON column. In my experience this approach has no advantages over the wide-table approach and no long-term upsides, so we never really considered it.
- Separate tables for each transaction type. This makes it harder to run queries across all transactions and, in some respects, harder to add new transaction types (any queries that span all transactions need to be modified, something we can't enforce at compile time), but it means we can enforce nullability constraints and it reduces the blast radius of bad DDL updates.
We decided to go for option 3.
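As a rough sketch of what option 3 can look like, imagine per-type tables along these lines (the table and column names are illustrative rather than our actual schema). Note how each table can enforce NOT NULL on exactly the fields that make sense for that type:

// Illustrative DDL only: two type-specific tables, each with its own
// NOT NULL constraints. Names and columns are assumptions for the example.
val createBankTransfers = """
    CREATE TABLE bank_transfers (
        id             uuid PRIMARY KEY,
        created_at     timestamptz NOT NULL DEFAULT now(),
        amount_minor   bigint NOT NULL,
        sort_code      text NOT NULL,
        account_number text NOT NULL
    )
""".trimIndent()

val createCardPayments = """
    CREATE TABLE card_payments (
        id           uuid PRIMARY KEY,
        created_at   timestamptz NOT NULL DEFAULT now(),
        amount_minor bigint NOT NULL,
        card_last4   text NOT NULL
    )
""".trimIndent()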
The problem
So we've got our ledger and we've got our tables for each transaction type, but how do we go about maintaining foreign key relationships between the ledger table and the transaction tables? We could have one join column per transaction type in the ledger table, but there are already tens of other tables that need to reference transactions and having to modify every one of them each time we add a new transaction type is far from ideal.
The solution
We decided we needed one more table to act as a join table between the specific transaction tables and the other tables that need to reference transactions. That way we only need one join column in the ledger table (transaction_id) and we don't need to modify tables that reference transactions when we add new transaction types:
id | bank_transfer_id | card_payment_id
---|---|---
ID | ID | NULL
ID | NULL | ID
... | ... | ...
To make things a bit more user-friendly we added a check that enforces equality between the one non-null join column and the id column. It doesn't make a huge difference but it removes one layer of indirection when working with transaction ids (as the id columns in the join table and the specific tables are the same).
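Putting the join table and that check together, the DDL might look something like this sketch (building on the illustrative bank_transfers and card_payments tables above; num_nonnulls enforces that exactly one type-specific column is populated):

// Illustrative DDL only, reusing the hypothetical type-specific tables from earlier.
val createTransactions = """
    CREATE TABLE transactions (
        id               uuid PRIMARY KEY,
        bank_transfer_id uuid UNIQUE REFERENCES bank_transfers (id),
        card_payment_id  uuid UNIQUE REFERENCES card_payments (id),
        -- exactly one type-specific column is set...
        CHECK (num_nonnulls(bank_transfer_id, card_payment_id) = 1),
        -- ...and it must equal the row's own id
        CHECK (id = coalesce(bank_transfer_id, card_payment_id))
    )
""".trimIndent()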
Postgres table inheritance
This turned out to be a dead end, but we did look at Postgres table inheritance because, when we first heard about it, it sounded like it might be relevant to our use case. Unfortunately the current implementation comes with some serious limitations that preclude its use here: indexes, unique constraints and foreign keys only apply to individual tables, not to the whole inheritance hierarchy, so crucially a foreign key referencing the parent table can't see rows that were inserted into a child table.
Meanwhile in the application code
We decided early on that we weren't going to use an ORM for our Postgres layer. Enough has been written about that decision elsewhere so I won't touch on it here. What we did do was build some simple tools to move data between Kotlin objects and database rows. It's not very exciting: it uses reflection to read a Kotlin object, pick out the relevant fields and map them to database types. All the Postgres primitives are supported and there's some logic for handling enums and our typesafe constants (our attempt at getting enum-like classes with inheritance). The best thing is that you can add or remove fields from entity classes without needing to make any changes to the mapping logic.
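A very rough sketch of the idea (not our actual implementation; toRow and camelToSnake are names invented for this example, and it relies on the kotlin-reflect dependency):

import kotlin.reflect.full.memberProperties

// Sketch only: read an entity's public properties via reflection and turn them
// into a column-name -> value map that can be bound as query parameters.
fun toRow(entity: Any): Map<String, Any?> =
    entity::class.memberProperties.associate { prop ->
        camelToSnake(prop.name) to prop.getter.call(entity)
    }

// createdAt -> created_at, organisationId -> organisation_id, etc.
fun camelToSnake(name: String): String =
    name.replace(Regex("([a-z0-9])([A-Z])"), "$1_$2").lowercase()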
The base classes are the Entity and ID<T> classes (we're using UUIDs for primary keys everywhere):
import com.fasterxml.jackson.databind.annotation.JsonSerialize
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer
import java.time.Instant
import java.util.UUID

open class Entity<T>(
    open val id: ID<out T>,
    open val createdAt: Instant
)

open class DeletableEntity<T>(
    override val id: ID<out T>,
    override val createdAt: Instant,
    open val deletedAt: Instant?
) : Entity<T>(id, createdAt)

@JsonSerialize(using = ToStringSerializer::class)
data class ID<T>(
    private val uuid: UUID = UUID.randomUUID()
) {
    constructor(uuidString: String) : this(UUID.fromString(uuidString))

    override fun toString() = uuid.toString()
    override fun equals(other: Any?) = other is ID<*> && this.uuid == other.uuid
    override fun hashCode() = uuid.hashCode()
}
and everything stored in a table inherits from those classes. It's fairly straightforward. We also have some base classes that represent tables (e.g. ImmutableTable<T : Entity<T>>) that include some basic queries you can run on any table (e.g. get, insert, getAll).
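The rough shape of those base classes, signatures only (the tableName constructor parameter is an assumption for this sketch; the real implementations are built on top of the reflection mapper described earlier):

// Sketch of the base table class: concrete tables supply the table name and
// the query implementations.
abstract class ImmutableTable<T : Entity<T>>(protected val tableName: String) {
    abstract fun get(id: ID<out T>): T?
    abstract fun insert(entity: T): T
    abstract fun getAll(): List<T>
}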
The transaction classes all inherit from the Entity base class via the sealed Transaction class, which looks something like:
sealed class Transaction<T : Transaction<T>>(
    open val organisationId: ID<Organisation>,
    open val status: TransactionStatus,
    override val id: ID<T> = ID(),
    override val createdAt: Instant = Instant.now(),
) : Entity<T>(id, createdAt)
Being able to use a sealed class here is great for application logic that needs to handle all possible transaction types. A when expression enforces exhaustive case handling at compile time, e.g.
val inserted = when (transaction) {
    is LedgerCorrectingTransaction -> ledgerCorrectingTransactionTable.createTransaction(transaction)
    is ManualTopupTransaction -> manualTopupTransactionTable.createTransaction(transaction)
    // ...one branch per transaction type
}
meaning that adding a new transaction type to the hierarchy raises a compile error across the codebase until a branch is added. It's only a small thing but it makes it much easier to add new transaction types without risking runtime errors.
Common operations
Each of the sealed subclasses of the Transaction class represents a specific transaction type and has its own table in the database. When we want to create a transaction we insert a row into the relevant specific table and a row into the transactions table as part of the same database transaction.
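As a sketch of that insert path over plain JDBC (table and column names follow the illustrative DDL from earlier, not our real code), the two writes share one database transaction so neither row can exist without the other:

import java.sql.Connection
import java.util.UUID

// Sketch only: write the type-specific row and the parent transactions row atomically.
fun insertBankTransfer(
    conn: Connection,
    id: UUID,
    amountMinor: Long,
    sortCode: String,
    accountNumber: String
) {
    conn.autoCommit = false
    try {
        conn.prepareStatement(
            "INSERT INTO bank_transfers (id, amount_minor, sort_code, account_number) VALUES (?, ?, ?, ?)"
        ).use { stmt ->
            stmt.setObject(1, id)
            stmt.setLong(2, amountMinor)
            stmt.setString(3, sortCode)
            stmt.setString(4, accountNumber)
            stmt.executeUpdate()
        }
        conn.prepareStatement(
            "INSERT INTO transactions (id, bank_transfer_id) VALUES (?, ?)"
        ).use { stmt ->
            stmt.setObject(1, id)
            stmt.setObject(2, id)
            stmt.executeUpdate()
        }
        conn.commit()
    } catch (e: Exception) {
        conn.rollback()
        throw e
    }
}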
To retrieve a transaction we can either switch on the concrete type (if it's known at runtime) by selecting on the ID<SpecificTransactionType>, or we can invoke a slightly more expensive call that takes an ID<Transaction<*>> and performs a lookup on the transactions table to determine the correct concrete table to retrieve the transaction from.
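The second, slower path boils down to something like the following sketch (again with assumed column names): read the parent row and let whichever type-specific column is populated determine which concrete table to query next.

import java.sql.Connection
import java.util.UUID

// Sketch only: given the UUID behind an ID<Transaction<*>>, work out which
// concrete table holds the row; the caller then queries that table.
fun concreteTableFor(conn: Connection, transactionId: UUID): String? =
    conn.prepareStatement(
        "SELECT bank_transfer_id, card_payment_id FROM transactions WHERE id = ?"
    ).use { stmt ->
        stmt.setObject(1, transactionId)
        stmt.executeQuery().use { rs ->
            when {
                !rs.next() -> null
                rs.getObject("bank_transfer_id") != null -> "bank_transfers"
                rs.getObject("card_payment_id") != null -> "card_payments"
                else -> null
            }
        }
    }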
Optimisations
Our application necessitated building a lot of dashboards. To display a transaction dashboard we often had to collect rows from multiple concrete transaction tables. These queries can get pretty nasty (especially if you want to support sorting, filtering and pagination) unless the relevant data is copied from the specific tables into the transactions table. For example, transaction state (CREATED, EXECUTED, VOIDED, etc.) was duplicated into the transactions table, making it easy for us to display all transactions in a specific state; this would have been very tedious without duplicating data into the base table. Keeping the duplicated data in sync poses a significant challenge (we couldn't find a good way to use checks for this) but it did leave us with dashboards that were performant and easy to iterate on.
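The payoff is that dashboard queries stay boring: one table with ordinary filtering, sorting and pagination rather than a UNION ALL across every concrete transaction table. Something along these lines (illustrative columns, assuming status and created_at were among the duplicated fields):

// Illustrative only: a paginated dashboard query against the denormalised
// transactions table. Without the duplicated status column this would need a
// UNION ALL (or one query per concrete transaction table).
val executedTransactionsPage = """
    SELECT id, status, created_at
    FROM transactions
    WHERE status = 'EXECUTED'
    ORDER BY created_at DESC
    LIMIT 50 OFFSET 0
""".trimIndent()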