Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3
It’s called turbolite. It is experimental, buggy, and may corrupt data. I would not trust it with anything important yet.
I wanted to explore whether object storage has gotten fast enough to support embedded databases over cloud storage. Filesystems reward tiny random reads and in-place mutation. S3 rewards fewer requests, bigger transfers, immutable objects, and aggressively parallel operations where bandwidth is often the real constraint. This was explicitly inspired by turbopuffer’s ground-up S3-native design. https://turbopuffer.com/blog/turbopuffer
The use case I had in mind is lots of mostly-cold SQLite databases (database-per-tenant, database-per-session, or database-per-user architectures) where keeping a separate attached volume for inactive databases feels wasteful. turbolite assumes a single write source and is aimed much more at “many databases with bursty cold reads” than “one hot database.”
Instead of doing naive page-at-a-time reads from a raw SQLite file, turbolite introspects SQLite B-trees, stores related pages together in compressed page groups, and keeps a manifest that is the source of truth for where every page lives. Cache misses are served with S3 range GETs into seekable zstd frames, so fetching one needed page does not require downloading an entire object.
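For intuition, here’s a minimal sketch of what that read path could look like, assuming boto3 and the python zstandard bindings; the PageLoc fields and the manifest shape are invented for illustration, not turbolite’s actual format:

    # Illustrative only: manifest layout and names are hypothetical.
    import boto3
    import zstandard as zstd
    from dataclasses import dataclass

    @dataclass
    class PageLoc:
        key: str            # S3 object (page group) holding this page
        frame_offset: int   # byte offset of the compressed zstd frame in the object
        frame_clen: int     # compressed length of that frame
        slot: int           # page's position within the decompressed frame

    s3 = boto3.client("s3")

    def read_page(bucket, manifest, pgno, page_size, pages_per_frame=16):
        """Serve one SQLite page on a cache miss with a single range GET."""
        loc = manifest[pgno]  # the manifest is the source of truth for placement
        end = loc.frame_offset + loc.frame_clen - 1
        body = s3.get_object(Bucket=bucket, Key=loc.key,
                             Range=f"bytes={loc.frame_offset}-{end}")["Body"].read()
        # Each frame decompresses independently, so one ranged read is enough;
        # no need to download or decompress the rest of the object.
        frame = zstd.ZstdDecompressor().decompress(
            body, max_output_size=pages_per_frame * page_size)
        return frame[loc.slot * page_size:(loc.slot + 1) * page_size]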
At query time, turbolite can also pass storage hints derived from the query plan down to the VFS, frontrunning downloads for indexes and large scans in the order they will be accessed.
You can tune how aggressively turbolite prefetches. For point queries and small joins, it can stay conservative and avoid prefetching whole tables. For scans, it can get much more aggressive.
It also groups pages by page type in S3. Interior B-tree pages are bundled separately and loaded eagerly. Index pages prefetch aggressively. Data pages are stored by table. The goal is to make cold point queries and joins decent, while making scans less awful than naive remote paging would be.
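As a toy illustration of that tiering, a policy table along these lines (the numbers and field names are mine, not turbolite’s defaults):

    # Invented numbers; only the shape of the policy is the point.
    PREFETCH = {
        "interior": {"load": "eager", "readahead_pages": 0},   # tiny, always resident
        "index":    {"load": "lazy",  "readahead_pages": 64},  # warm aggressively
        "table":    {"load": "lazy",  "readahead_pages": 8},   # keep point queries cheap
    }

    def readahead(page_class: str, query_kind: str) -> int:
        """Scans widen the window; point queries and small joins stay tight."""
        base = PREFETCH[page_class]["readahead_pages"]
        return base * 8 if query_kind == "scan" else base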
On a 1M-row / 1.5GB benchmark on EC2 + S3 Express, I’m seeing results like sub-100ms cold point lookups, sub-200ms cold 5-join profile queries, and sub-600ms scans from an empty cache. It’s somewhat slower on standard S3/Tigris.
Current limitations are pretty straightforward: it’s single-writer only, and it is still very much a systems experiment rather than production infrastructure.
I’d love feedback from people who’ve worked on SQLite-over-network, storage engines, VFSes, or object-storage-backed databases. I’m especially interested in whether the B-tree-aware grouping / manifest / seekable-range-GET direction feels like the right one to keep pushing.
carlsverre
I do like the B-tree aware grouping idea. This seems like a useful optimization for larger scan-style workloads. It also reduces how much you need to vacuum.
Have you considered doing other kinds of optimizations? Empty pages, free pages, etc.
russellthehippo
I've tried out all sorts of optimizations - for free pages, I've considered leaving empty space in each S3 object and serving those as free pages to get efficient writes without shuffling pages too much. My current bias has been to over-store a little if it keeps the read path simpler, since the main goal so far has been making cold reads plausible rather than maximizing space efficiency. Especially because free pages compress well.
I have two related roadmap items: hole-punching and LSM-like writing. For local caches on non-HDD storage, we can evict empty pages automatically by releasing their space back to the OS. For writes, an LSM is attractive because it groups related things together, which is what we need, but that would mean doing a lot of rewriting on checkpoint. So both of these feel a little premature to optimize for vs other things.
agosta
I do wonder - for projects that do ultimately enforce single writer sqlite setups - it still feels to me as if it would always be better to keep the sqlite db local (and then rsync/stream backups to whatever S3 storage one prefers).
The nut I've yet to see anyone crack on such a setup is figuring out a way to achieve zero-downtime deploys. For instance, adding a persistent disk to VMs on Render prevents zero-downtime deploys (see https://render.com/docs/disks#disk-limitations-and-considera...), which is a real unfortunate side effect. I understand that the reason for this is that a VM instance is attached to the volume and needs to be swapped with the new version of said instance...
There are so many applications where merely scaling up a single VM as your product grows simplifies devops / product maintenance so much that it's a very compelling choice vs managing a cluster/separate db server. But getting forced downtime between releases to achieve that isn't acceptable in a lot of cases.
Not sure if it's truly a cheaply solvable problem. One potential option is to use a tool like turbolite as a parallel data store and, only during deployments, use it to keep the application running for the 10 to 60 seconds during a release swap. During this time, writes to the db are slower than usual but entirely online. And then, when your new release is live, it can sync the difference of data written to s3 back to the local db. In this way, during regular operation, we get the performance of local IO and fallback onto s3 backed sqlite during upgrades for persistent uptime.
Sounds like a fraught thing to build. But man it really is hard/impossible to beat the speed of local reads!
bob1029
https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3...
https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...
russellthehippo
Short answer: conditional PUTs scare me for distributed multi-writer setups. The issue isn't doing the writes, it's ensuring that the writer is writing against the most current data. For OLTP workloads with upserts, this is very hard! If you have immutable data without any upserts and your writes don't depend on reads, that actually works really well. But in any other scenario it's dangerous. The writer needs to ensure that the other writers have checkpointed first, and there's not a great way to do that.
One thing that could make this work in turbolite is a fast distributed lock system with transaction and lock timeouts. E.g. use Redis as the lock lease holder: distributed writers acquire it, fetch the latest manifest from S3, sync any data they need, do the write, checkpoint, release the lock while recording the new manifest version, and return success to the user. Then before any reads, they simply check Redis for the new manifest (or S3, to guarantee that the manifest update/lock release didn't fail). All writes have an N-second timeout, and the write lock has an N+T-second timeout, so successful checkpoints are guaranteed to be used in the next read, as long as readers check the manifest first.
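A minimal sketch of that lease protocol, assuming redis-py and boto3; the key names, timeouts, and manifest layout are all invented for illustration:

    import json
    import uuid
    import boto3
    import redis

    r = redis.Redis()
    s3 = boto3.client("s3")
    WRITE_TIMEOUT_MS = 5_000                 # the "N second" write timeout
    LEASE_TTL_MS = WRITE_TIMEOUT_MS + 2_000  # N + T, so the lease outlives any write

    def write_txn(bucket, db, apply_write):
        token = str(uuid.uuid4())
        if not r.set(f"lock:{db}", token, nx=True, px=LEASE_TTL_MS):
            raise RuntimeError("another writer holds the lease")
        try:
            manifest = json.loads(s3.get_object(
                Bucket=bucket, Key=f"{db}/manifest.json")["Body"].read())
            new_manifest = apply_write(manifest)  # sync pages, write, checkpoint
            s3.put_object(Bucket=bucket, Key=f"{db}/manifest.json",
                          Body=json.dumps(new_manifest).encode())
            # Readers check this before any read (falling back to S3 if absent).
            r.set(f"manifest:{db}", new_manifest["version"])
        finally:
            if r.get(f"lock:{db}") == token.encode():
                r.delete(f"lock:{db}")  # a real release would be an atomic Lua script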
This could work, but it's *still* single writer lol. But you'd only want it with infrequent writes. And reads cost an S3 GET. So I guess it would work best with Wasabi, which doesn't charge for operations, or self-hosted MinIO.
russellthehippo
The motivating question for me was less “can SQLite read over the network?” and more “what assumptions break once the storage layer is object storage instead of a filesystem?”
The biggest conceptual shift was around *layout*.
What felt most wrong in naive designs was that SQLite page numbers are not laid out in a way that matches how you want to fetch data remotely. If an index is scattered across many unrelated page ranges, then “prefetch nearby pages” is kind of a fake optimization. Nearby in the file is not the same thing as relevant to the query.
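You can see that scatter directly with SQLite's dbstat virtual table (available when the library is built with SQLITE_ENABLE_DBSTAT_VTAB, which many Python builds include); the database filename here is just a placeholder:

    import sqlite3

    con = sqlite3.connect("app.db")  # any existing database file
    # dbstat maps each table/index b-tree to the pages it occupies; an index's
    # pages are rarely contiguous in the file, which is exactly what defeats
    # "prefetch nearby pages".
    for name, pages in con.execute(
            "SELECT name, group_concat(pageno) FROM dbstat GROUP BY name"):
        print(name, pages)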
That pushed me toward B-tree-aware grouping. Once the storage layer starts understanding which table or index a page belongs to, a lot of other things get cleaner: more targeted prefetch, better scan behavior, less random fetching, and much saner request economics.
Another thing that became much more important than I expected is that *different page types matter a lot*. Interior B-tree pages are tiny in footprint but disproportionately important, because basically every query traverses them. That changed how I thought about the system: much less as “a database file” and much more as “different classes of pages with very different value on the critical path.”
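The page classes come straight from the file format: byte 0 of each b-tree page encodes its type (on page 1 the b-tree header starts at offset 100, after the database header). A small classifier along those lines, with the “other” bucket being my own grouping:

    # Page-format constants are from the SQLite file-format documentation.
    PAGE_TYPES = {2: "interior-index", 5: "interior-table",
                  10: "leaf-index", 13: "leaf-table"}

    def classify_pages(path):
        counts = {}
        with open(path, "rb") as f:
            header = f.read(100)
            page_size = int.from_bytes(header[16:18], "big")
            if page_size == 1:  # the format uses 1 to mean 65536
                page_size = 65536
            f.seek(0, 2)
            n_pages = f.tell() // page_size
            for pgno in range(1, n_pages + 1):
                f.seek((pgno - 1) * page_size)
                page = f.read(page_size)
                flag = page[100] if pgno == 1 else page[0]  # skip page 1's db header
                kind = PAGE_TYPES.get(flag, "other")  # freelist/overflow/ptrmap etc.
                counts[kind] = counts.get(kind, 0) + 1
        return counts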
The query-plan-aware “frontrun” part came from the same instinct. Reactive prefetch is fine, but SQLite often already knows a lot about what it is about to touch. If the storage layer can see enough of that early, it can start warming the right structures before the first miss fully cascades. That’s still pretty experimental, but it was one of the more fun parts of the project.
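For a taste of how much SQLite will tell you up front: EXPLAIN QUERY PLAN already names the tables and indexes a statement will touch. The parsing below is deliberately crude and illustrative, not how turbolite does it:

    import sqlite3

    def structures_to_warm(con, sql):
        """Collect table/index names a query will touch, per EXPLAIN QUERY PLAN."""
        hot = set()
        for *_, detail in con.execute("EXPLAIN QUERY PLAN " + sql):
            # detail strings look like "SEARCH t USING INDEX idx_t_x (x=?)" or "SCAN t"
            words = detail.split()
            if words and words[0] in ("SCAN", "SEARCH"):
                hot.add(words[1])
            if "INDEX" in words:
                hot.add(words[words.index("INDEX") + 1])
        return hot  # a prefetcher could start warming these trees immediately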
A few things I learned building this:
1. *Cold point reads and small joins seem more plausible than I expected.* Not local-disk fast, obviously, but plausible for the “many mostly-cold DBs” niche.
2. *The real enemy is request count more than raw bytes.* Once I leaned harder into grouping and prefetch by tree, the design got much more coherent.
3. *Scans are still where reality bites.* They got much less bad, but they are still the place where remote object storage most clearly reminds you that it is not a local SSD.
4. *The storage backend is super important.* Different storage backends (S3, S3 Express, Tigris) have very different round-trip latencies, and that's the single most important factor in how to tune prefetching.
Anyway, happy to talk about the architecture, the benchmark setup, what broke, or why I chose this shape instead of raw-file range GETs / replication-first approaches / etc.
alex_hirner
russellthehippo
The obvious policy-driven versions are things like:
- when cache size crosses a limit
- on checkpoint
- every N writes (kind of like autocheckpoint)
- after some idle / age threshold
My instinct is that for the workload I care about, the best answer is probably hybrid. The VFS should have a tier-aware policy internally that users can configure with separate policies for interior/index/data pages. But the user/application may still be in the best position to say “this tenant/session DB is cold now, evict aggressively.”
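Concretely, that hybrid could look something like this; the field names and thresholds are invented for illustration, not turbolite's API:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class TierPolicy:
        max_bytes: int           # evict beyond this cache footprint
        idle_secs: int           # evict after this much inactivity
        evict_on_checkpoint: bool

    DEFAULTS = {
        "interior": TierPolicy(16 << 20, 3600, False),  # keep the hot skeleton around
        "index":    TierPolicy(256 << 20, 600, False),
        "table":    TierPolicy(1 << 30, 120, True),
    }

    def cold_tenant_policy():
        """Application override: this tenant/session DB is cold, evict aggressively."""
        return {k: replace(p, idle_secs=0, evict_on_checkpoint=True)
                for k, p in DEFAULTS.items()}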
jijji
russellthehippo
If you evict the cache on every read/write, then all reads and writes hit S3 with GETs. PUTs are usually 10-100x more expensive than GETs. Check the benchmark/README.md for GET counts, but it's usually 5-50 per cold read (with interior B-tree pages on disk). S3 GETs are $0.0004/1000; S3 Express is $0.00003/1000. So 10 queries per minute all day long, averaging 20 GET operations each, with full eviction on each request, would be 20 × 10 × 60 × 24 × 30 × $0.0004/1000 = $3.45 per month. With S3 Express One Zone that's $0.26/mo. Both plus storage costs, which would probably be lower.
On Wasabi, that would be just the cost of storage (but they have minimum 1TB requirements at $6.99/mo).
If you checkpoint after every write and write 10 times per minute, each write hitting e.g. 5 page groups (S3 objects) plus the manifest file, the analysis looks like: 6 × 10 × 60 × 24 × 30 × $0.005/1000 = $12.96 per month. Again, that's worst case for the benchmark's 1.5GB database. On S3 Express that's $2.92.
Point is - it's not too bad, and that's kind of a worst-case scenario where you evict on every request, every 6 seconds, which isn't really realistic. If you evict the cache hourly, that cost drops to 1/600th of that: less than $0.01 per month.
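Same arithmetic as code, using the per-1,000-request prices quoted above (re-check current pricing before trusting these numbers):

    GET_S3, GET_EXPRESS, PUT_S3 = 0.0004, 0.00003, 0.005  # $ per 1,000 requests

    def monthly(requests_per_min, price_per_1000):
        return requests_per_min * 60 * 24 * 30 * price_per_1000 / 1000

    print(monthly(10 * 20, GET_S3))       # 10 qpm × 20 GETs -> ~$3.45/mo on S3
    print(monthly(10 * 20, GET_EXPRESS))  # -> ~$0.26/mo on S3 Express One Zone
    print(monthly(10 * 6, PUT_S3))        # 10 wpm × 6 PUTs  -> ~$12.96/mo on S3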
Summary: use S3 Express One Zone, don't evict the cache too often, checkpoint to S3 once a minute (turbolite lets you checkpoint either locally (disk-durability) or locally+S3 separately), and you're running a decent workload for pennies every month.
[Apologies for the spreadsheet math in plaintext]