
The Side Project Graveyard Is Full of Technically Impressive Failures

Most developers who attempt to turn a side project into a product make the same mistake: they over-engineer the technical decisions and under-engineer the product decisions. The side project has a beautiful architecture, immaculate test coverage, and a polished CI/CD pipeline — and zero users. This is not a technical failure. It is a prioritization failure. This guide covers the specific technical decisions that matter when transitioning from side project to product, and equally important, the ones that do not matter until much later than you think.

The Transition Moment

A side project becomes a product when someone other than you depends on it. Not when you have 100 users. Not when you launch on Product Hunt. When a single real person would notice if it went down, that is the moment the stakes change and the technical decisions need to reflect different priorities.

Before that moment, optimize for iteration speed. After it, you need to balance iteration speed against reliability, observability, and the ability to onboard others to the codebase. These are different optimization targets, and the architecture decisions that serve one often actively hurt the other.

Decision 1: The Deployment Platform

Your deployment platform choice constrains your architecture, affects your costs at scale, and determines how much operational time you spend on infrastructure versus features. Choose it thoughtfully early because migrating later is painful.

The Spectrum

Vercel/Render/Railway (managed PaaS): Zero infrastructure management. Deploy on git push. Scales to zero when idle. The right choice if you are a solo developer who wants to spend zero time on infrastructure until revenue justifies a dedicated DevOps hire. The economics worsen as you scale — Vercel’s pricing at significant traffic is not indie-friendly — but the time savings are real and significant during early stages.

Single VPS (Hetzner, DigitalOcean, Fly.io): A $20/month VPS can run a surprising amount of production traffic with Caddy, Docker Compose, and PostgreSQL. This is the architecture described in our “$50/month server” piece: you own the box, you control the configuration, and the costs are predictable. The tradeoff is that every infrastructure problem is your problem — but most of those problems are solvable in an afternoon and rarely recur once fixed.

AWS/GCP/Azure (cloud providers): These are the right choice when you have specific requirements that only they can meet: compliance certifications (SOC 2, HIPAA), specific geographic availability requirements, or tight integration with enterprise customer environments. Do not start here if you are a solo developer or small team. The operational complexity and billing unpredictability are expensive taxes on your attention.

# Docker Compose production deployment — reasonable for $0 to $10K MRR
# Run on a single Hetzner CPX31 (4 vCPU, 8GB RAM, ~$15/month)

services:
  app:
    image: ghcr.io/yourorg/yourapp:${APP_VERSION:-latest}
    restart: unless-stopped
    environment:
      - DATABASE_URL=postgresql://app:${DB_PASSWORD}@postgres:5432/appdb
      - REDIS_URL=redis://redis:6379
      - SECRET_KEY=${SECRET_KEY}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    labels:
      - "caddy=app.example.com"
      - "caddy.reverse_proxy=:3000"

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

  caddy:
    image: lucaslorentz/caddy-docker-proxy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - caddy_data:/data

volumes:
  postgres_data:
  redis_data:
  caddy_data:

Decision 2: Monolith vs. Services

Start with a monolith. This is not a controversial take in 2026 — even the microservices advocates (including the ThoughtWorks team that popularized the term) have published extensive writing about the monolith-first approach for products without established scaling requirements.

The practical reasoning: a monolith lets you refactor boundaries cheaply. When you discover (and you will) that you drew the domain boundary in the wrong place, a function call becomes a different function call. In a microservices architecture, that same refactor involves rewriting service contracts, updating API clients, and coordinating deployments. The cost of wrong boundaries is dramatically higher.

Extract services when a specific component has scaling characteristics that differ significantly from the rest of the application, when a component needs to be deployed independently for organizational reasons, or when a team grows large enough that two teams owning the same codebase creates coordination overhead. None of these conditions typically apply before you have meaningful revenue and a team larger than three to five engineers.
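One way to keep future extraction cheap is to give each domain module a narrow public surface and route every cross-module call through it. A minimal sketch, with hypothetical billing and orders modules (all names are illustrative, and an in-memory array stands in for the billing tables):

```typescript
// Hypothetical sketch of module boundaries inside a monolith: each
// domain module exposes a narrow public surface, and other modules
// call it only through that surface.

interface Invoice {
  id: string
  orgId: string
  amountCents: number
}

// billing module: owns invoice state (array stands in for its tables)
const invoices: Invoice[] = []

export const billing = {
  createInvoice(orgId: string, amountCents: number): Invoice {
    const invoice: Invoice = {
      id: `inv_${invoices.length + 1}`,
      orgId,
      amountCents,
    }
    invoices.push(invoice)
    return invoice
  },
}

// orders module: crosses the boundary with a plain function call today.
// If billing is ever extracted into a service, only its public surface
// needs to become an HTTP/RPC client; checkout's shape stays the same.
export function checkout(orgId: string, totalCents: number): Invoice {
  return billing.createInvoice(orgId, totalCents)
}
```

The point of the sketch is the discipline, not the code: as long as modules never reach into each other's internals, redrawing a boundary is a refactor rather than a contract renegotiation.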

Decision 3: Authentication From Day One

Authentication is the technical decision that is most expensive to change later. Rolling your own auth at launch to avoid the complexity of an auth provider, then migrating to a proper auth system when you have real users, is a painful experience. Your users will have their sessions invalidated and you will spend days testing edge cases in password reset flows.

Use an auth library or service from the beginning. The options in 2026:

  • Clerk: The most developer-friendly managed auth service. Pre-built UI components, webhooks for user events, organization management. $25/month for up to 10,000 MAUs. Appropriate for most B2B SaaS products.
  • Auth.js (formerly NextAuth): Open-source, runs in your application, integrates with your own database. Zero per-user cost. More setup, but total control. Best for developers who want to understand what’s happening and not be dependent on a third-party auth provider.
  • Lucia Auth: Lightweight, database-agnostic session management for TypeScript. Originally shipped as a library, the project is now maintained primarily as a set of guides for implementing sessions yourself. It handles the session model cleanly and leaves the UI entirely to you.

// Auth.js v5 configuration (auth.ts, shown here with the Next.js package)
import NextAuth from "next-auth"
import GitHub from "next-auth/providers/github"
import { DrizzleAdapter } from "@auth/drizzle-adapter"
import { db } from "@/lib/db"

export const { handlers, auth, signIn, signOut } = NextAuth({
  adapter: DrizzleAdapter(db),
  providers: [
    GitHub({
      clientId: process.env.GITHUB_CLIENT_ID,
      clientSecret: process.env.GITHUB_CLIENT_SECRET,
    }),
  ],
  session: {
    strategy: "database",
    maxAge: 30 * 24 * 60 * 60, // 30 days
  },
  callbacks: {
    async session({ session, user }) {
      // Attach additional user data to session
      session.user.id = user.id
      session.user.plan = user.plan ?? "free"
      return session
    },
  },
})

Decision 4: The Database Schema Is Your Most Expensive Early Mistake

Your database schema becomes increasingly expensive to change as your user data accumulates. Not impossible — zero-downtime migration strategies (add column, backfill, deploy, drop old column) handle most schema changes — but expensive in time and testing overhead.
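The expand/backfill/contract sequence can be sketched as a series of SQL statements. The users table and the full_name to display_name rename below are hypothetical examples:

```typescript
// Hypothetical example of a zero-downtime column rename using the
// add column / backfill / deploy / drop pattern from the text.

// Phase 1 (deploy): add the new column. Application code starts
// writing BOTH columns once this ships.
export const expand = `ALTER TABLE users ADD COLUMN display_name TEXT`

// Phase 2 (background job): copy old values across in small batches so
// no single UPDATE holds locks for long. Re-run until 0 rows change.
export const backfill = `
  UPDATE users SET display_name = full_name
  WHERE id IN (
    SELECT id FROM users WHERE display_name IS NULL LIMIT 1000
  )`

// Phase 3 (deploy): application reads and writes only display_name.
// Phase 4 (deploy): drop the old column once nothing references it.
export const contract = `ALTER TABLE users DROP COLUMN full_name`
```

Each phase is a separate deploy, which is exactly why schema changes get expensive: the time cost is in the choreography, not the SQL.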

The most expensive early database mistakes are soft-deleted record handling (did you forget to filter WHERE deleted_at IS NULL in that query?), multi-tenancy isolation (row-level vs. database-per-tenant), and timestamp timezone assumptions.
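One way to make the deleted_at filter hard to forget is to route every read through a helper that appends it. A minimal sketch, where the table and org names are illustrative and a real version would use parameterized queries rather than string interpolation:

```typescript
// Sketch of a query helper that always applies the soft-delete and
// org-isolation clauses, so no individual query can forget them.
// WARNING: interpolating values into SQL like this is vulnerable to
// injection; a real implementation would use query parameters.

export function scopedSelect(
  table: string,
  orgId: string,
  where?: string,
): string {
  const clauses = [
    `org_id = '${orgId}'`, // multi-tenancy isolation
    `deleted_at IS NULL`, // exclude soft-deleted rows by default
  ]
  if (where) clauses.push(where)
  return `SELECT * FROM ${table} WHERE ${clauses.join(" AND ")}`
}
```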

-- Base table structure that ages well
-- Follow this pattern for every table from the start

CREATE TABLE products (
  id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  -- Multi-tenancy: all user data scoped to an org
  org_id      UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  
  -- Business fields
  name        TEXT NOT NULL,
  slug        TEXT NOT NULL,
  status      TEXT NOT NULL DEFAULT 'draft' 
              CHECK (status IN ('draft', 'active', 'archived')),
  metadata    JSONB NOT NULL DEFAULT '{}',
  
  -- Audit fields: always include these
  created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  created_by  UUID REFERENCES users(id),
  
  -- Soft delete: include from day one or not at all
  deleted_at  TIMESTAMPTZ,
  
  UNIQUE (org_id, slug)
);

-- Row-level security: enforce org isolation at the database level
ALTER TABLE products ENABLE ROW LEVEL SECURITY;

CREATE POLICY products_org_isolation ON products
  USING (org_id = current_setting('app.current_org_id')::uuid);

-- Automatically update updated_at on any row change
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_updated_at
  BEFORE UPDATE ON products
  FOR EACH ROW
  EXECUTE FUNCTION update_updated_at_column();

Decision 5: Observability Before It’s Needed

The hardest debugging sessions happen on production systems where you have no observability. Setting up structured logging and basic metrics before you have users costs two hours and saves dozens of hours of “I wonder why that happened” debugging sessions.

// Structured logging in Node.js with Pino (fast, JSON output)
import pino from 'pino'

export const logger = pino({
  level: process.env.LOG_LEVEL ?? 'info',
  // Include request context in all logs from a request
  mixin: () => ({
    environment: process.env.NODE_ENV,
    version: process.env.APP_VERSION,
  }),
})

// In your request handler:
export async function handleRequest(req, res) {
  const requestLogger = logger.child({
    requestId: req.headers['x-request-id'] ?? crypto.randomUUID(),
    userId: req.user?.id,
    path: req.path,
    method: req.method,
  })

  try {
    requestLogger.info('request_started')
    const result = await processRequest(req)
    requestLogger.info({ statusCode: 200 }, 'request_completed')
    return res.json(result)
  } catch (error) {
    requestLogger.error({ 
      error: error.message, 
      stack: error.stack,
      statusCode: 500 
    }, 'request_failed')
    return res.status(500).json({ error: 'Internal server error' })
  }
}

The Decisions That Do Not Matter Yet

Spend zero time on these until you have validated product-market fit with paying customers:

  • Microservices, service boundaries, event sourcing architectures
  • Multi-region deployment and global low-latency requirements
  • Horizontal auto-scaling and Kubernetes
  • Custom background job infrastructure (a simple database-backed queue handles 99% of needs)
  • Read replicas (PostgreSQL handles more concurrent reads than most side projects ever generate)

These are real concerns at scale. They are distractions before scale. The cost of premature optimization in a side project is not performance overhead — it is the weeks of engineering time you spent on infrastructure instead of talking to users and shipping features.

Key Takeaways

  • A single VPS with Docker Compose and PostgreSQL is sufficient infrastructure for most products from zero to meaningful revenue. Do not add complexity before you have the scaling requirements that justify it.
  • Start with a monolith. Extract services when you have specific, demonstrated reasons — not architectural ideals.
  • Use an established auth library or managed auth service from the beginning. Auth is the most painful system to migrate later.
  • Build your database schema with multi-tenancy isolation, soft deletes (if needed), and timezone-aware timestamps from day one.
  • Add structured logging before you have users, not after your first mysterious production incident.

By Michael Sun

Founder and Editor-in-Chief of NovVista. Software engineer with hands-on experience in cloud infrastructure, full-stack development, and DevOps. Writes about AI tools, developer workflows, server architecture, and the practical side of technology. Based in China.
