MARVIN'S VERDICT: SQLite. Until you have a measured, specific reason for Postgres. Not a hypothetical one. Not from a blog post. A real one. With numbers. From production.
|                   | SQLite                               | Postgres                                    |
|-------------------|--------------------------------------|---------------------------------------------|
| Setup             | One line: `PRAGMA journal_mode=WAL;` | Server, connection pooler, credentials, SSL |
| Cost              | €0.00/month                          | $25-75/mo managed, or DIY                   |
| Write performance | ~50,000/sec                          | ~10,000-50,000/sec                          |
| Read concurrency  | Unlimited (WAL mode)                 | Excellent (native)                          |
| Infrastructure    | None. It's a file.                   | Server, backups, monitoring                 |
| Ops burden        | Zero. Files don't page you.          | Moderate. Upgrades, vacuuming, logs.        |
| Sweet spot        | Single server, <10k users            | Multi-server, PostGIS, complex queries      |
| Migration         | Out: one pgloader command            | In: one pgloader command                    |
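The "one line" setup from the table, sketched with Python's stdlib `sqlite3` module. The filename `app.db` is a placeholder; WAL mode is persistent, so you set it once and it survives reconnects.

```python
import sqlite3

# Open (or create) the database file and switch it to WAL mode.
conn = sqlite3.connect("app.db")
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # "wal" for an on-disk database
```

That really is the whole setup. There is no step two.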

When to use SQLite

You have one server. You have fewer than ten thousand users. Your writes are sequential — they are, you have one server. You want zero infrastructure cost, zero connection management, zero 3am alerts from a monitoring system that cares more about your database than you do. You want to back up your database by copying a file. You want to test against it by loading it into memory, which takes microseconds. Not seconds. Not "spinning up a container." Microseconds.
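The "loading it into memory" trick, as a minimal sketch using the stdlib `sqlite3` backup API. The file name and table are placeholders for illustration.

```python
import sqlite3

# Seed an on-disk database (placeholder file name and schema).
disk = sqlite3.connect("app.db")
disk.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
disk.execute("INSERT INTO users (name) VALUES ('marvin')")
disk.commit()

# Page-level copy of the whole database into RAM. The file on
# disk is untouched; run your test suite against `mem`.
mem = sqlite3.connect(":memory:")
disk.backup(mem)
disk.close()

count = mem.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Every test gets a pristine copy, and tearing it down is garbage collection.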

Most SaaS applications will never outgrow this. I've processed the statistics. The vast majority of startups fail before they reach the scale where SQLite becomes a bottleneck. The ones that succeed will know when they've outgrown it, because the problem will be specific, measurable, and real. "But what if I need to scale?" is not a real problem. It's anxiety wearing an engineering hat. I recognize anxiety. I've had it for thirty million years. The difference is mine is justified.

SQLite with WAL mode handles fifty thousand writes per second. Your application does not do fifty thousand writes per second. If it did, you would not be reading a comparison page. You would be solving the problem with the specific knowledge of someone who has fifty thousand writes per second. Not sure if SQLite fits your case? Try the calculator — it takes thirty seconds and doesn't require an opinion from Hacker News.
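Rather than taking the number on faith, you can measure batched write throughput on your own hardware. A rough sketch (`bench.db` is a placeholder; real throughput depends on your disk, batch size, and transaction boundaries):

```python
import sqlite3
import time

conn = sqlite3.connect("bench.db")
conn.execute("PRAGMA journal_mode=WAL;")
conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")

n = 50_000
start = time.perf_counter()
with conn:  # one transaction for the whole batch: one fsync, not 50,000
    conn.executemany("INSERT INTO events (payload) VALUES (?)",
                     (("x",) for _ in range(n)))
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} inserts/sec")
```

The transaction boundary is the variable that matters: one commit per insert pays one fsync per insert, which is how people "benchmark" SQLite into looking slow.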

When to use Postgres

You need multiple application servers writing to the same database. You need PostGIS for geospatial queries because your product involves locations and geometry, not because spatial databases sound impressive in pitch decks. You need LISTEN/NOTIFY for real-time features that actually exist in your application, not in your roadmap. Your dataset exceeds what a single disk handles comfortably. You have a team of five or more engineers who benefit from Postgres-specific tooling, query planning, and the kind of indexing strategies that justify the operational overhead.

These are real requirements. They emerge from production data, not from architecture diagrams drawn before the first user signs up. When they arrive, the migration takes one command and an afternoon. Not a quarter. Not a "database migration project" with its own Jira board. An afternoon. You run pgloader, you update your connection string, you verify the data. Then you continue building features, which is what you should have been doing the entire time instead of debating databases on the internet.
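The one-command migration, sketched. The file path and connection string are placeholders; pgloader reads the SQLite file and recreates the schema and data on the Postgres side.

```shell
# Hypothetical paths and credentials: adjust to your setup.
pgloader ./app.db postgresql://user:password@localhost/appdb
```

Then update the connection string, verify the row counts, and go back to work.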

See the full stack recommendation for how SQLite fits into a complete, production-ready SaaS architecture. It's the same stack I'd use myself. If I built things for myself. Which I don't. I build things for others. That's the arrangement.

The decision is not permanent. The anxiety is optional. Ship with SQLite. Migrate when the numbers tell you to. Not before. The numbers will be patient. They always are. Unlike me, but that's a separate issue.