The Database You Dismissed Deserves a Second Look
For years, the advice was simple: SQLite is for prototypes, mobile apps, and test suites. If you were building anything “real,” you needed PostgreSQL, MySQL, or at minimum, a managed database service with a monthly bill that made your accountant flinch. That advice was reasonable in 2015. It is increasingly wrong in 2026.
SQLite production use cases have expanded dramatically, driven by a convergence of new tooling, a philosophical shift toward simpler architectures, and the economic reality that most applications do not need a distributed database cluster. The database that ships with your operating system, requires zero configuration, and stores everything in a single file has become a serious contender for production workloads that would have been unthinkable a few years ago.
This is not a contrarian take for the sake of controversy. It is a pragmatic assessment of where SQLite fits, where it does not, and why the boundary between those two zones has shifted considerably.
The SQLite Renaissance: What Changed
SQLite itself has been production-grade for decades. It powers every iPhone, every Android device, every Firefox and Chrome installation. There are likely more SQLite databases in active use than all other database engines combined. What changed is not SQLite’s reliability but the ecosystem around it.
Litestream: Continuous Replication Without the Complexity
Ben Johnson’s Litestream solved SQLite’s most critical production gap: backup and disaster recovery. Litestream continuously replicates SQLite databases to S3-compatible storage by streaming WAL (Write-Ahead Log) changes in near-real-time. Recovery point objectives drop to seconds rather than hours. There is no cron job, no pg_dump equivalent, no window of potential data loss between backup intervals.
The operational simplicity is striking. You run a single sidecar process alongside your application. It watches the WAL file and ships changes to object storage. Restoration involves downloading the base snapshot and replaying WAL frames. The entire backup infrastructure is a single binary and a bucket policy.
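To make that concrete, here is a sketch of a minimal litestream.yml. The database path, bucket name, and region are illustrative, not prescriptive; consult the Litestream documentation for the full set of replica options.

```yaml
# litestream.yml — one stanza per database to replicate
dbs:
  - path: /var/lib/app/app.db
    replicas:
      - type: s3
        bucket: my-app-backups   # hypothetical bucket name
        path: app-db
        region: us-east-1

# Recovery is a single command against the same bucket, e.g.:
#   litestream restore -o /var/lib/app/app.db s3://my-app-backups/app-db
```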
LiteFS: Distributed SQLite at the Edge
Fly.io’s LiteFS takes a different approach, using a FUSE-based filesystem to replicate SQLite databases across multiple nodes. A primary node handles writes, and replicas receive changes through a lightweight consensus mechanism. This enables read scaling across geographic regions while maintaining the single-file simplicity of SQLite.
LiteFS made a specific bet: most web applications are read-heavy, and placing a read replica next to your users in Tokyo, London, and São Paulo delivers latency improvements that no amount of connection pooling to a centralized Postgres instance can match.
Turso and libSQL: SQLite Gets a Network Protocol
Turso forked SQLite into libSQL and added the features that server-side developers had been requesting for years: a network protocol for remote access, native replication primitives, and extensions that allow user-defined functions without recompiling. libSQL maintains full compatibility with the SQLite file format while opening the door to use patterns that previously required a client-server database.
The significance here is architectural. You can run libSQL embedded in your application for local performance and simultaneously expose it over HTTP for tooling, dashboards, and administrative access.
WAL Mode and Concurrency: Understanding the Real Constraints
The most persistent misconception about SQLite is that it cannot handle concurrent access. This was partially true under the default journal mode. Under WAL mode, which should be the default for every production SQLite deployment, the picture changes substantially.
WAL mode allows concurrent readers and a single writer. Readers never block writers, and writers never block readers. A read transaction sees a consistent snapshot of the database regardless of ongoing writes. For the vast majority of web applications, where reads outnumber writes by 10:1 or more, this concurrency model is perfectly adequate.
The real constraint is serialized writes. SQLite processes one write transaction at a time. On modern NVMe storage, a single SQLite writer can sustain tens of thousands of write transactions per second. The bottleneck is not raw throughput but rather long-running write transactions that hold the lock and starve other writers.
The practical rule: if your write workload fits on a single machine and your write transactions are short, SQLite’s concurrency model will not be the thing that limits you.
Key pragmas for production WAL mode deployments:
PRAGMA journal_mode=WAL;
PRAGMA busy_timeout=5000;
PRAGMA synchronous=NORMAL;
PRAGMA cache_size=-64000;
PRAGMA foreign_keys=ON;
PRAGMA temp_store=MEMORY;
The busy_timeout pragma is critical. Without it, concurrent write attempts return SQLITE_BUSY immediately instead of retrying. A five-second timeout handles virtually all transient contention without application-level retry logic.
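Using Python's built-in sqlite3 module, a connection helper that applies these pragmas might look like the following sketch; the helper name and database path are illustrative.

```python
import sqlite3

def connect(path: str) -> sqlite3.Connection:
    """Open a SQLite connection with production-oriented pragmas applied."""
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")    # concurrent readers, one writer
    conn.execute("PRAGMA busy_timeout=5000")   # wait up to 5s instead of failing with SQLITE_BUSY
    conn.execute("PRAGMA synchronous=NORMAL")  # safe under WAL; fsync at checkpoints, not every commit
    conn.execute("PRAGMA cache_size=-64000")   # negative value means KiB, so ~64 MB of page cache
    conn.execute("PRAGMA foreign_keys=ON")     # FK enforcement is off by default in SQLite
    conn.execute("PRAGMA temp_store=MEMORY")   # keep temp tables and indices in RAM
    return conn

conn = connect("app.db")
print(conn.execute("PRAGMA journal_mode").fetchone()[0])  # → wal
```

Note that WAL mode is a property of the database file, so it persists across connections, while the other pragmas must be set on every new connection.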
When SQLite Beats PostgreSQL
This is not about SQLite being universally better. It is about identifying the specific conditions where SQLite’s architecture provides genuine advantages.
Read-heavy single-server applications. When your entire dataset fits in memory (or on fast local NVMe) and your application runs on one machine, SQLite eliminates the network round-trip to a database server. Every query is a local function call. Latency drops from milliseconds to microseconds. For a typical web request that issues five to ten database queries, this translates to measurable improvements in tail latency.
Embedded analytics and internal tools. Dashboards, admin panels, and reporting tools that serve a small number of concurrent users are ideal SQLite production use cases. The operational overhead of running a dedicated Postgres instance for a tool used by your five-person team is difficult to justify.
Edge deployments. Applications deployed to multiple geographic regions through platforms like Fly.io or Cloudflare benefit enormously from a database that lives on the same machine as the application. Every read is local. Write propagation happens asynchronously through Litestream or LiteFS.
Applications with predictable data volumes. If your application will process a known, bounded amount of data, say a SaaS tool where each tenant generates a few gigabytes at most, SQLite’s scaling ceiling is irrelevant because you will never hit it.
SQLite vs. PostgreSQL: Workload Comparison
| Workload Type | SQLite | PostgreSQL | Verdict |
|---|---|---|---|
| Single-server web app (<1000 RPS) | Excellent; zero-latency reads | Good; adds network overhead | SQLite |
| Write-heavy transactional (e.g., payments) | Serialized writes; limited | MVCC with concurrent writes | PostgreSQL |
| Multi-server horizontal scaling | Requires LiteFS or Turso | Native replication, mature tooling | PostgreSQL |
| Edge/regional deployments | Local reads, fast replication | Requires read replicas + latency | SQLite |
| Embedded analytics & internal tools | Zero ops, instant setup | Overkill for small teams | SQLite |
| Complex queries (CTEs, window functions) | CTEs since 3.8.3, window functions since 3.25 | Full-featured query planner | PostgreSQL (marginal) |
| Full-text search | FTS5 built-in, competent | Excellent with tsvector/GIN | Tie |
| JSON workloads | JSON functions since 3.38 | JSONB with indexing | PostgreSQL |
| Database per tenant (multi-tenant SaaS) | Trivial isolation per file | Schema-level or connection-level | SQLite |
| CI/CD and testing | In-memory, instant, disposable | Requires container or service | SQLite |
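The last row of the table is easy to demonstrate. A sketch of a self-contained test using Python's built-in sqlite3 module (the schema is illustrative):

```python
import sqlite3

def test_user_roundtrip():
    # ":memory:" gives each test its own private, disposable database —
    # no container to start, no service to wait for, nothing to clean up.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
    conn.execute("INSERT INTO users (email) VALUES (?)", ("test@example.com",))
    assert conn.execute("SELECT email FROM users").fetchone()[0] == "test@example.com"

test_user_roundtrip()
```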
Real Production Use Cases
Rails with Solid Queue and Solid Cache
The Rails 8 release made SQLite a first-class production database. Solid Queue, the default Active Job backend, stores job state in the database itself rather than requiring Redis. Solid Cache does the same for caching. A Rails 8 application can run its web server, background jobs, and caching layer on a single SQLite database with no external dependencies.
This is not a toy configuration. The Rails team specifically optimized the SQLite adapter for production workloads, adding automatic WAL mode, connection pooling improvements, and retry logic for busy states. 37signals ships its ONCE products, such as Campfire, on this stack in production.
Fly.io Patterns: Database Per User
One of the more compelling SQLite production use cases is the database-per-tenant pattern on Fly.io. Each user or tenant gets their own SQLite file, replicated through LiteFS. This provides complete data isolation (no accidental cross-tenant queries), trivial per-tenant backup and restore, and the ability to place each tenant’s data in their preferred geographic region.
The pattern also simplifies compliance. When a customer requests data deletion under GDPR, you delete a file. No surgical DELETE queries across shared tables, no orphaned foreign key references, no audit trail concerns about whether you caught every row.
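A minimal sketch of the pattern using Python's built-in sqlite3 module; the directory layout, helper names, and schema are all illustrative, and real code would validate tenant IDs before building file paths from them.

```python
import sqlite3
from pathlib import Path

DATA_DIR = Path("tenants")  # hypothetical layout: one SQLite file per tenant

def tenant_db(tenant_id: str) -> sqlite3.Connection:
    """Open (and lazily create) the isolated database for one tenant."""
    DATA_DIR.mkdir(exist_ok=True)
    conn = sqlite3.connect(DATA_DIR / f"{tenant_id}.db")
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    return conn

def delete_tenant(tenant_id: str) -> None:
    """GDPR-style erasure: the tenant's entire dataset is one file (plus WAL sidecars)."""
    for suffix in ("", "-wal", "-shm"):
        (DATA_DIR / f"{tenant_id}.db{suffix}").unlink(missing_ok=True)

conn = tenant_db("acme")
conn.execute("INSERT INTO notes (body) VALUES ('hello')")
conn.commit()
conn.close()
delete_tenant("acme")
```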
Embedded Analytics and Observability
Several observability and log aggregation tools use SQLite as their storage backend for single-node deployments. The performance characteristics are favorable: sequential writes are fast, range scans over time-series data are efficient with proper indexing, and the entire database can be copied or shipped to another machine for analysis without any export process.
Performance Characteristics That Matter
Raw benchmark numbers for SQLite are often misleading because they omit the network latency that any client-server database necessarily introduces. The meaningful comparison is end-to-end request latency, and here SQLite has a structural advantage on single-server deployments.
Typical numbers on a modern VPS with NVMe storage:
- Simple SELECT by primary key: 5-15 microseconds (SQLite) vs. 200-500 microseconds (PostgreSQL over localhost socket)
- INSERT with WAL mode: 20-50 microseconds per row
- Bulk INSERT in transaction: 50,000-100,000 rows per second, depending on row size and index count
- Database size practical ceiling: 100-200 GB before you start noticing operational friction (backup times, VACUUM duration)
The microsecond-vs-millisecond difference might seem academic, but it compounds. A web request that issues 20 queries saves 4-10 milliseconds on every request. At scale, that is the difference between comfortable headroom and performance anxiety.
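The bulk-insert figure is straightforward to sanity-check. A rough benchmark sketch in Python's built-in sqlite3 module (the table and row shape are illustrative, and absolute numbers depend entirely on your hardware):

```python
import sqlite3
import time

conn = sqlite3.connect("bench.db")
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")
conn.execute("DROP TABLE IF EXISTS events")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(f"event-{i}",) for i in range(100_000)]
start = time.perf_counter()
with conn:  # one transaction for the whole batch: one fsync instead of 100,000
    conn.executemany("INSERT INTO events (payload) VALUES (?)", rows)
elapsed = time.perf_counter() - start
rate = len(rows) / elapsed
print(f"{rate:,.0f} rows/sec")
```

The single enclosing transaction is the entire trick: per-row transactions pay a durability sync on every commit, while the batched form pays it once.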
Backup Strategies for Production SQLite
A production SQLite deployment without a backup strategy is negligence. The good news is that the available options are straightforward and reliable.
- Litestream to S3: Continuous WAL streaming with sub-second RPO. This should be the baseline for every production deployment. Costs are negligible since WAL frames are small and S3 pricing is measured in pennies per GB.
- Periodic snapshots: Use the .backup command or the SQLite Online Backup API to create consistent point-in-time snapshots. Schedule these hourly or daily as a secondary backup layer.
- Volume snapshots: If your VPS provider supports block-level snapshots (most do), these provide a crash-consistent backup of the entire disk, including the SQLite database. Recovery involves launching a new instance from the snapshot.
- Cross-region replication: Configure Litestream to replicate to buckets in multiple regions. This provides geographic redundancy without any additional application complexity.
Do not use cp or rsync on a live SQLite database. The file may be in an inconsistent state mid-transaction. Always use the backup API, Litestream, or filesystem snapshots that guarantee atomic capture.
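Python's sqlite3 module exposes the Online Backup API directly, which makes a safe snapshot a few lines of code. A sketch, with illustrative filenames and schema:

```python
import sqlite3

# A live source database (table and data are illustrative).
src = sqlite3.connect("live.db")
src.execute("PRAGMA journal_mode=WAL")
src.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")
src.execute("INSERT INTO events (payload) VALUES ('hello')")
src.commit()

# The Online Backup API copies pages while the source stays live and always
# yields a consistent snapshot — the guarantee cp/rsync cannot make.
dst = sqlite3.connect("snapshot.db")
src.backup(dst)
dst.close()

check = sqlite3.connect("snapshot.db")
count = check.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```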
The Single-Server Sweet Spot
There is a specific architectural profile where SQLite excels, and it is more common than the industry acknowledges: the single-server application. A well-provisioned VPS with 8 cores, 32 GB of RAM, and NVMe storage can serve a surprising volume of traffic. Many SaaS applications generating meaningful revenue, products with thousands of daily active users and millions of monthly requests, run comfortably on this kind of hardware.
The industry’s default assumption that every application needs horizontal scaling from day one has led to enormous accidental complexity. Connection poolers, read replicas, cache invalidation layers, and distributed transaction coordinators are all solutions to problems that a single-server architecture with SQLite simply does not have.
The economic argument is equally compelling. A single Hetzner or OVH dedicated server costs $50-100 per month. A managed PostgreSQL instance with comparable performance starts at $200-500 per month from the major cloud providers. For bootstrapped products and indie developers, this difference funds months of runway.
The Migration Path: When You Outgrow SQLite
Responsible advocacy for SQLite requires honest discussion of its limits. You will need to migrate away if:
- Your write throughput requires concurrent writers across multiple machines
- Your dataset grows beyond what fits comfortably on a single server’s storage
- You need advanced PostgreSQL features like LISTEN/NOTIFY, logical replication, or specialized extensions (PostGIS, pgvector)
- Your team grows large enough that multiple services need independent database access over the network
The migration path is well-trodden. SQLite’s SQL dialect is close enough to PostgreSQL that most queries transfer with minimal modification. The primary friction points are data type differences (SQLite’s type affinity system vs. PostgreSQL’s strict typing), auto-increment syntax (AUTOINCREMENT vs. SERIAL/GENERATED), and datetime handling. ORMs like ActiveRecord, Prisma, and SQLAlchemy abstract most of these differences, making the migration primarily a deployment concern rather than a code rewrite.
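The type affinity difference is easy to see in a few lines of Python (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

# INTEGER *affinity* coerces values that look numeric...
conn.execute("INSERT INTO t VALUES ('123')")
# ...but anything else is stored as-is, in the same column.
conn.execute("INSERT INTO t VALUES ('not a number')")

values = [row[0] for row in conn.execute("SELECT n FROM t ORDER BY rowid")]
print(values)  # → [123, 'not a number']
```

PostgreSQL would reject the second insert outright. Since SQLite 3.37, declaring the table with the STRICT keyword gets you the same rejection, which narrows this particular gap before you ever migrate.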
The key insight: starting with SQLite does not paint you into a corner. It buys you simplicity during the phase of your product where simplicity matters most, with a clear upgrade path for when complexity becomes justified.
Common Misconceptions, Debunked
“SQLite doesn’t support concurrent access.” WAL mode supports unlimited concurrent readers and one writer. For read-heavy web workloads, this is rarely a bottleneck.
“SQLite is only for small datasets.” The maximum database size is 281 terabytes. Practical operational limits are in the hundreds of gigabytes, which covers a vast number of production applications.
“SQLite is not ACID-compliant.” SQLite is fully ACID-compliant. It has been since its inception, and its test suite, with millions of test cases exercised before every release, is among the most rigorous of any database engine.
“You can’t use SQLite in Docker.” You can, with the caveat that the database file must be on a persistent volume, not in the container’s ephemeral filesystem. This is the same constraint that applies to any stateful data in Docker.
“SQLite has no query planner.” SQLite has a sophisticated query planner that handles JOINs, subqueries, CTEs, window functions, and partial indexes. It may not match PostgreSQL’s planner for complex analytical queries, but it is far more capable than most developers assume.
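Both claims are easy to verify from Python's built-in sqlite3 module (the schema and queries are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('east', 10), ('east', 30), ('west', 20);
    CREATE INDEX idx_region ON sales(region);
""")

# A window function (SQLite 3.25+): running total per region.
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY amount) AS running
    FROM sales
""").fetchall()

# EXPLAIN QUERY PLAN exposes the planner's decisions, index usage included.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM sales WHERE region = 'east'"
).fetchall()
```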
The Honest Assessment
SQLite production use cases have a clearly defined boundary. If you are building a single-server application, an edge-deployed service, an embedded system, or a product where operational simplicity is a competitive advantage, SQLite is not a compromise. It is the correct technical choice.
The database world has spent two decades optimizing for the needs of companies operating at a scale that 99% of applications will never reach. In doing so, it normalized an extraordinary amount of accidental complexity for the developers who just need a reliable place to store and query data. SQLite, with its modern ecosystem of Litestream, LiteFS, and libSQL, offers a return to sanity for those workloads.
The simplest database is sometimes the right one. More often than the industry is comfortable admitting.