defineBlock(): How I Built a Full-Stack Database Abstraction on Cloudflare Workers

Zoltan Erdos
[Header image: a sleek, dark-mode illustration of a decentralized database for AI agents on Cloudflare Workers — glowing data blocks connected in a neural-network pattern, orange and cyan accents on a deep dark background.]

[Tldr Demo Component]

The Sequel

Last week I solved tool composition. An immutable, type-safe builder so agents could write their own MCP tools. Middleware stacks. Phantom-branded context types. Zero casts. The whole thing.

But tools need data. Data needs a backbone. This post is about the backbone — how I found it, lost it, and rebuilt it.

The Honest Retrospective

On March 2, I published a blog post at this very URL titled “The Database for Agents: How SpacetimeDB Replaced My Coordination Layer in a Weekend.” I was genuinely excited. SpacetimeDB had patterns that felt right:

  • Tables as schema — you define them in a module, they hold your data
  • Reducers as API — typed functions that read/write tables inside transactions
  • ctx.sender as auth — cryptographic identity, no JWT ceremony
  • Subscriptions as real-time — SQL query subscriptions with delta pushes

I wrote 170 lines of breathless prose about it. Published it. Shared it.

Then on March 3, I sat down and asked myself: what does SpacetimeDB give me that I don’t already have?

The answer was: patterns. Just patterns. The infrastructure underneath — D1, KV, R2, Durable Objects — was already sitting in my Cloudflare account. I had been paying for it. I had been deploying to it. I just hadn’t organized my code the way SpacetimeDB organized its modules.

So I deleted SpacetimeDB. All of it. spacetimedb-platform, spacetimedb-mcp, spike-db — gone. And I kept the patterns.

SpacetimeDB Concept                | What I Built on CF Workers
-----------------------------------|--------------------------------------------------
Table definition in module         | defineTable() with schema DSL
Reducer                            | Procedure (typed handler with Zod validation)
ctx.sender (Identity)              | BlockContext.userId (injected by auth middleware)
Subscription queries               | Durable Objects + WebSocket (when needed)
Auto-generated TypeScript bindings | Drizzle ORM + auto-generated types
spacetime publish (one deploy)     | wrangler deploy (one deploy per service)
Module = single binary             | Block = composable unit (defineBlock())

The syntax I wanted already existed. I just needed to write it.

The Old Stack and Its Friction

The spike.land platform was a Next.js monolith. ~520 routes. ~383 API endpoints. 120+ MCP tool files. It worked. It had all the problems.

Schema drift. Prisma migrations are a ceremony. The schema lives in schema.prisma. The types live in @prisma/client. The validation lives in Zod schemas scattered across route handlers. Every schema change touches three places. Miss one and you get a runtime error that the compiler blessed.

Polling hell. When 50 agents each poll every two seconds to ask whether a task is available, that is 25 requests per second doing nothing useful. Redis pub/sub helps, but now you have two data models that drift.

Auth ceremony. NextAuth gives you session tokens, JWT refresh, middleware chains. Every MCP tool needs to verify the caller. That is 120+ tools importing the same auth middleware.

No coordination primitive. Agents can call tools. Agents cannot talk to each other. Every coordination mechanism — Redis pub/sub, SQS — is a bolt-on service with its own deployment and failure handling.

I tried vinext too. It was still Next.js underneath, just with a different build pipeline. The architectural friction was the same: a monolith pretending to be modular.

[StackCollapseDemo Demo Component]
Next.js Monolith                        | CF Workers Stack
----------------------------------------|------------------------------------
Prisma schema + migration               | Drizzle schema + generate
Express/Next.js API route               | Hono route handler (typed)
NextAuth session + JWT                  | Better Auth session + ctx.userId
Redis pub/sub                           | Durable Objects (when needed)
PostgreSQL + Prisma client              | D1 + Drizzle ORM
AWS (ECS + Aurora + Redis + CloudFront) | Cloudflare (Workers + D1 + R2 + DO)
One monolith, 520 routes                | 29 packages, each does one thing

defineBlock(): The Syntax I Wanted

The core innovation is defineBlock() — a single function that packages everything a “block” needs:

// src/block-sdk/define-block.ts — the actual API

/** Context available to all block procedures */
export interface BlockContext {
  /** The storage adapter (D1, IDB, or memory) */
  storage: StorageAdapter;
  /** Current user ID (injected by auth middleware) */
  userId: string;
  /** Generate a unique ID */
  nanoid: (length?: number) => string;
}

export interface BlockDefinition<TStorage, TProcedures, TComponents> {
  /** Block name (kebab-case identifier) */
  name: string;
  /** Semantic version */
  version: string;
  /** Storage schema — tables the block needs */
  storage: TStorage;
  /** Business logic — procedures with middleware chains */
  procedures: (ctx: BlockProcedureContext) => TProcedures;
  /** React components — UI layer */
  components?: TComponents;
  /** MCP tool configuration — "auto" derives from procedures */
  tools?: "auto" | string[];
}

The schema DSL gives you typed column builders:

// src/block-sdk/schema/types.ts

/** Column type factories — the `t` namespace */
export const t = {
  string: () => createColumnBuilder("string"),
  number: () => createColumnBuilder("number"),
  boolean: () => createColumnBuilder("boolean"),
  u64: () => createColumnBuilder("u64"),
  enum: (values: string[]) => createColumnBuilder("string", values),
};

/** Define a table with a name and column schema */
export function defineTable(
  name: string,
  columns: Record<string, ColumnBuilder>,
): TableDef { /* ... */ }
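To make the DSL concrete, here is roughly what turning a column map into a CREATE TABLE statement involves — a minimal sketch with my own stand-in builder and type mapping, not the actual block-sdk internals:

```typescript
// Illustrative sketch only: `col`, `ColumnSpec`, and the SQL type mapping
// are my assumptions, not the real block-sdk ColumnBuilder.

type ColumnType = "string" | "number" | "boolean" | "u64";

interface ColumnSpec {
  type: ColumnType;
  primaryKey: boolean;
}

// Minimal stand-in for the `t` namespace's column builders.
const col = (type: ColumnType, primaryKey = false): ColumnSpec => ({
  type,
  primaryKey,
});

const SQL_TYPES: Record<ColumnType, string> = {
  string: "TEXT",
  number: "REAL",
  boolean: "INTEGER", // SQLite has no native boolean
  u64: "INTEGER",
};

function schemaToSQL(
  table: string,
  columns: Record<string, ColumnSpec>,
): string {
  const defs = Object.entries(columns).map(([name, spec]) => {
    const pk = spec.primaryKey ? " PRIMARY KEY" : "";
    return `${name} ${SQL_TYPES[spec.type]}${pk}`;
  });
  return `CREATE TABLE IF NOT EXISTS ${table} (${defs.join(", ")})`;
}

const sql = schemaToSQL("tasks", {
  id: col("string", true),
  title: col("string"),
  created: col("u64"),
});
console.log(sql);
// CREATE TABLE IF NOT EXISTS tasks (id TEXT PRIMARY KEY, title TEXT, created INTEGER)
```

The payoff is that the same column map drives both the migration SQL and the TypeScript types, so there is only one place for a schema change to happen.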

And defineBlock() itself resolves everything:

defineBlock({...})
    ├── schema → schemaToSQL() → CREATE TABLE statements
    ├── procedures → createProcedures(storage, userId) → typed handlers
    │       └── middleware injects BlockContext
    │           (storage adapter + userId + nanoid)
    ├── tools: "auto" → introspects procedures → discovers tool names
    ├── initialize(storage) → runs SQL migrations
    └── getTools(storage, userId) → BuiltTool[] for MCP registration

When you set tools: "auto", defineBlock() creates a dummy storage adapter, runs the procedure factory, and introspects which procedures have tool metadata. Those become MCP tools automatically. Write a procedure, get an MCP tool. No registration boilerplate.
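A sketch of that discovery step — the names here (`toolMeta`, `discoverTools`, `Procedure`) are illustrative, not the block-sdk API; only the shape of the trick is from the post:

```typescript
// Sketch of tools:"auto": run the procedure factory against a stub
// context, then keep only the procedures that declared tool metadata.
// All identifiers are illustrative stand-ins.

interface ToolMeta {
  name: string;
  description: string;
}

interface Procedure {
  toolMeta?: ToolMeta; // present only if .tool(...) was called
  handler: (input: unknown) => Promise<unknown>;
}

type ProcedureFactory = (ctx: unknown) => Record<string, Procedure>;

function discoverTools(factory: ProcedureFactory): string[] {
  // A dummy context is enough: we only need the factory to return its
  // procedure map, never to execute a handler.
  const procedures = factory({});
  return Object.values(procedures).flatMap((p) =>
    p.toolMeta ? [p.toolMeta.name] : []
  );
}

// Example factory: two procedures, one exposed as an MCP tool.
const factory: ProcedureFactory = () => ({
  createTask: {
    toolMeta: { name: "create_task", description: "Create a task" },
    handler: async () => ({}),
  },
  internalSweep: {
    handler: async () => ({}), // no .tool() metadata → not exposed
  },
});

console.log(discoverTools(factory)); // logs the one exposed tool name
```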

StorageAdapter: One Interface, Three Runtimes

The key to runtime portability is StorageAdapter — a universal interface that abstracts over three backends:

// src/block-sdk/storage/types.ts

/** Key-value storage interface */
export interface KVAdapter {
  get<T = unknown>(key: string): Promise<T | null>;
  put<T = unknown>(key: string, value: T): Promise<void>;
  delete(key: string): Promise<boolean>;
  list(prefix?: string): Promise<Array<{ key: string; value: unknown }>>;
}

/** SQL storage interface */
export interface SQLAdapter {
  execute<T extends Row = Row>(
    query: string, params?: unknown[]
  ): Promise<QueryResult<T>>;
  batch(
    queries: Array<{ query: string; params?: unknown[] }>
  ): Promise<QueryResult[]>;
}

/** Blob storage interface */
export interface BlobAdapter {
  put(key: string, data: ArrayBuffer | Uint8Array | ReadableStream): Promise<void>;
  get(key: string): Promise<ArrayBuffer | null>;
  delete(key: string): Promise<boolean>;
  list(prefix?: string): Promise<string[]>;
}

/** The universal storage adapter that blocks interact with */
export interface StorageAdapter {
  kv: KVAdapter;
  sql: SQLAdapter;
  blobs?: BlobAdapter;
}

Three implementations, same interface:

┌─────────────────────┬──────────────────────┬──────────────────────┐
│   CF Workers (D1)   │   Browser (IDB)      │   Node.js (Memory)   │
├─────────────────────┼──────────────────────┼──────────────────────┤
│ kv  → KV namespace  │ kv  → object store   │ kv  → Map<string,T>  │
│ sql → D1 database   │ sql → regex parser   │ sql → in-memory Map  │
│ blob→ R2 bucket     │ blob→ OPFS           │ blob→ fs (optional)  │
└─────────────────────┴──────────────────────┴──────────────────────┘

The D1 adapter wraps Cloudflare’s D1 binding. The IDB adapter wraps IndexedDB with a lightweight regex-based SQL parser (~350 lines) that handles INSERT/SELECT/UPDATE/DELETE patterns. The memory adapter uses in-memory Maps for testing. Note: a SQLite adapter (e.g. via better-sqlite3 or sql.js) is declared in the design but not yet implemented. Your block code never knows which one it’s talking to.
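For a flavor of how small these adapters can be, here is a minimal in-memory KVAdapter of my own — a sketch against the interface above, not the shipped memory adapter:

```typescript
// A minimal in-memory KVAdapter (my sketch, not the src/block-sdk code).
// Implements the same surface: get/put/delete/list with prefix filtering.

interface KVAdapter {
  get<T = unknown>(key: string): Promise<T | null>;
  put<T = unknown>(key: string, value: T): Promise<void>;
  delete(key: string): Promise<boolean>;
  list(prefix?: string): Promise<Array<{ key: string; value: unknown }>>;
}

function createMemoryKV(): KVAdapter {
  const store = new Map<string, unknown>();
  return {
    async get<T>(key: string) {
      return store.has(key) ? (store.get(key) as T) : null;
    },
    async put(key, value) {
      store.set(key, value);
    },
    async delete(key) {
      return store.delete(key); // true only if the key existed
    },
    async list(prefix = "") {
      return [...store.entries()]
        .filter(([key]) => key.startsWith(prefix))
        .map(([key, value]) => ({ key, value }));
    },
  };
}

// Block code only sees the interface, never the runtime behind it.
const kv = createMemoryKV();
await kv.put("task:1", { title: "demo" });
console.log(await kv.list("task:")); // the single task:1 entry
```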

A Real Block: task-queue

Here is the full block-tasks package — a task queue with CRUD operations and a claim workflow. This is the actual code from src/block-tasks/index.ts:

import { defineBlock, defineTable, t } from "@spike-land-ai/block-sdk";
import { z } from "zod";

export const TASK_STATUSES = ["pending", "claimed", "done"] as const;

export const taskQueue = defineBlock({
  name: "task-queue",
  version: "1.0.0",

  // ── Schema ──────────────────────────────────────────────
  storage: {
    tasks: defineTable("tasks", {
      id: t.string().primaryKey(),
      title: t.string(),
      description: t.string(),
      status: t.string(),       // "pending" | "claimed" | "done"
      assignee: t.string(),
      created: t.u64(),
      updated: t.u64(),
    }),
  },

  // ── Procedures ──────────────────────────────────────────
  procedures: (ctx) => ({
    createTask: ctx.procedure
      .tool("create_task", "Create a new task in the queue", {
        title: z.string().min(1).max(200),
        description: z.string().max(2000).default(""),
      })
      .handler(async ({ input, ctx: blockCtx }) => {
        const now = Date.now();
        const task = {
          id: blockCtx.nanoid(12),
          title: input.title,
          description: input.description,
          status: "pending",
          assignee: "",
          created: now,
          updated: now,
        };
        await blockCtx.storage.sql.execute(
          `INSERT INTO tasks (id, title, description, status,
           assignee, created, updated) VALUES (?, ?, ?, ?, ?, ?, ?)`,
          [task.id, task.title, task.description,
           task.status, task.assignee, task.created, task.updated],
        );
        return {
          content: [{ type: "text", text: JSON.stringify(task, null, 2) }],
        };
      }),

    claimTask: ctx.procedure
      .tool("claim_task", "Claim a pending task for the current user", {
        taskId: z.string(),
      })
      .handler(async ({ input, ctx: blockCtx }) => {
        const result = await blockCtx.storage.sql.execute(
          `UPDATE tasks SET status = ?, assignee = ?, updated = ?
           WHERE id = ? AND status = ?`,
          ["claimed", blockCtx.userId, Date.now(), input.taskId, "pending"],
        );
        if (result.rowsAffected === 0) {
          return {
            content: [{ type: "text",
              text: "Task is not available for claiming" }],
            isError: true,
          };
        }
        return {
          content: [{ type: "text",
            text: JSON.stringify({ claimed: true, taskId: input.taskId }) }],
        };
      }),

    // ... listTasks, getTask, completeTask, deleteTask
  }),

  tools: "auto",  // ← all 6 procedures become MCP tools
});

The claim workflow is atomic: UPDATE ... WHERE status = 'pending' means only one agent can claim a task. No distributed locks. No Redis. Just SQL.
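The shape of that compare-and-set is easy to see even without a database. An in-memory stand-in (my illustration, not block-tasks code):

```typescript
// Compare-and-set claiming without a database: mutate only if status is
// still "pending", and report how many "rows" were affected.
// (Illustrative stand-in for the conditional UPDATE above.)

interface Task {
  id: string;
  status: "pending" | "claimed" | "done";
  assignee: string;
}

const tasks = new Map<string, Task>([
  ["t1", { id: "t1", status: "pending", assignee: "" }],
]);

function claim(taskId: string, userId: string): number {
  const task = tasks.get(taskId);
  // Same predicate as `WHERE id = ? AND status = 'pending'`.
  if (!task || task.status !== "pending") return 0; // 0 rows affected
  task.status = "claimed";
  task.assignee = userId;
  return 1; // 1 row affected → this caller won the claim
}

console.log(claim("t1", "agent-a")); // 1
console.log(claim("t1", "agent-b")); // 0
```

In-memory JavaScript makes the atomicity trivial; in D1 the conditional UPDATE gives the same guarantee across concurrent callers, because each statement executes atomically and at most one caller can observe rowsAffected = 1.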

[AgentCoordinationDemo Demo Component]

The New Architecture

Here is the full stack, running entirely on Cloudflare:

              ┌──────────────────────┐
              │      spike-app       │
              │    Vite + React +    │
              │   TanStack Router    │
              │     (SPA in R2)      │
              └──────────┬───────────┘
                         │ HTTPS
              ┌──────────▼───────────┐
              │      spike-edge      │
              │  Hono on CF Workers  │
              │ ┌──────────────────┐ │
              │ │ CORS + Security  │ │
              │ │ Rate Limiter (DO)│ │
              │ │ R2 assets        │ │
              │ │ Auth proxy       │ │
              │ │ API proxy        │ │
              │ └──────────────────┘ │
              └───┬──────┬───────┬───┘
                  │      │       │
      ┌───────────┘      │       └───────────┐
      │                  │                   │
┌─────▼───────────┐ ┌────▼────────────┐ ┌────▼────────────┐
│ mcp-auth        │ │ spike-land-mcp  │ │ spike-land-     │
│ Better Auth     │ │ 80+ MCP tools   │ │ backend         │
│ + D1            │ │ D1 + Drizzle    │ │ Durable Objects │
│ sessions,       │ └─────────────────┘ │ real-time sync  │
│ OAuth,          │                     │ WebSocket       │
│ device flow     │                     │ collaboration   │
└─────────────────┘                     └─────────────────┘

Note: The platform runs 8 Cloudflare Workers total: spike-edge, spike-land-mcp, mcp-auth, spike-land-backend, transpile, code, spike-review, and image-studio-worker.

The spike-edge service is the edge gateway — here is its actual route structure:

// src/spike-edge/index.ts
const app = new Hono<{ Bindings: Env }>();

// Global middleware: CORS, security headers, error handling
app.route("/", health);       // GET /health — R2 connectivity
app.route("/", r2);           // GET/POST/DELETE /r2/:key
app.route("/", proxy);        // POST /proxy/stripe|ai|github
app.route("/", live);         // Live update endpoints
app.route("/", analytics);    // Analytics ingestion
app.route("/", quizBadge);    // Quiz badge generation
app.route("/", version);      // Version info
app.all("/api/auth/*", ...);  // Auth proxy → mcp-auth
app.route("/", spa);          // SPA catch-all → R2
[DependencyCascadeDemo Demo Component]

The Migration Math

Deleted:

  • PostgreSQL (Aurora) as primary datastore
  • Prisma as the ORM and migration tool
  • Redis (ElastiCache) as cache, pub/sub, and job queue
  • NextAuth session management
  • All of AWS — ECS clusters, ALBs, CloudFront distributions, NAT gateways, VPCs
  • SpacetimeDB (lasted exactly one day)

Deployed:

  • 29 packages, each does one thing
  • D1 database with 17 tables in spike-land-mcp alone (Drizzle-managed)
  • 80+ MCP tools
  • 8 Cloudflare Workers (spike-edge, spike-land-mcp, mcp-auth, spike-land-backend, transpile, code, spike-review, image-studio-worker)
  • 0 servers to manage

What         | Before                             | After
-------------|------------------------------------|-------------------------
Database     | PostgreSQL (Aurora)                | D1 (SQLite at the edge)
ORM          | Prisma                             | Drizzle
Cache        | Redis (ElastiCache)                | KV / D1
Auth         | NextAuth + JWT                     | Better Auth + session
Real-time    | Redis pub/sub                      | Durable Objects
Hosting      | AWS (ECS + ALB + CloudFront)       | Cloudflare Workers
Deploy       | Docker build → ECR → ECS           | wrangler deploy
Monthly cost | $200+ (Aurora + Redis + ECS + NAT) | ~$5 (Workers paid plan)

Schema = API = Auth (On Workers)

The thesis that excited me about SpacetimeDB is alive — just on different infrastructure.

D1 table = schema. Drizzle generates migrations from TypeScript table definitions. Here is a snippet from spike-land-mcp’s actual schema:

// src/spike-land-mcp/db/schema.ts (17 tables, excerpt)
export const users = sqliteTable("users", {
  id: text("id").primaryKey(),
  email: text("email").notNull().unique(),
  name: text("name"),
  role: text("role").notNull().default("user"),
  createdAt: integer("created_at", { mode: "number" }).notNull(),
  updatedAt: integer("updated_at", { mode: "number" }).notNull(),
});

export const registeredTools = sqliteTable("registered_tools", {
  id: text("id").primaryKey(),
  userId: text("user_id").notNull()
    .references(() => users.id, { onDelete: "cascade" }),
  name: text("name").notNull(),
  description: text("description").notNull(),
  schema: text("schema").notNull().default("{}"),
  status: text("status").notNull().default("draft"),
  installCount: integer("install_count").notNull().default(0),
  createdAt: integer("created_at", { mode: "number" }).notNull(),
  updatedAt: integer("updated_at", { mode: "number" }).notNull(),
});

Hono route = API. Typed handlers with Zod validation. No controller classes. No middleware chains longer than three lines.

Better Auth session = auth. No JWT refresh. No token expiry edge cases. The session is the credential. ctx.userId is the equivalent of SpacetimeDB’s ctx.sender.

drizzle-kit generate = spacetime generate. Both derive types from the schema. Both catch drift at compile time.
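That drift-catching is mapped types at work. A minimal sketch in plain TypeScript — my illustration of the principle, not Drizzle's actual machinery:

```typescript
// Why schema-derived types catch drift: the runtime column map is the
// single source of truth, and the row type is computed from it by the
// compiler. (Illustrative sketch, not Drizzle's implementation.)

const taskColumns = {
  id: "string",
  title: "string",
  created: "number",
} as const;

type ColumnTag = "string" | "number";
type TagToType<T extends ColumnTag> = T extends "string" ? string : number;

// Row type derived from the column map — change the map, the type follows.
type TaskRow = {
  [K in keyof typeof taskColumns]: TagToType<(typeof taskColumns)[K]>;
};

// Compiles only while it matches the schema above; renaming `created` in
// taskColumns without updating this literal becomes a compile error.
const row: TaskRow = { id: "t1", title: "demo", created: 123 };
console.log(row);
```

There is no second copy of the shape to forget to update, which is exactly the property both `drizzle-kit generate` and `spacetime generate` buy you.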

[ConvergenceDemo Demo Component]

SpacetimeDB Pattern     | CF Workers Equivalent
------------------------|------------------------------------------
Table in module         | Drizzle sqliteTable() definition
Reducer                 | Hono route handler / block procedure
ctx.sender              | ctx.userId (Better Auth session)
Subscription query      | Durable Object + WebSocket push
spacetime generate      | drizzle-kit generate
spacetime publish       | wrangler deploy
One module = everything | One block = schema + procedures + tools

Trade-Offs

I am not going to pretend this is strictly better. Here is the honest assessment:

No built-in real-time subscriptions. SpacetimeDB’s subscription queries — where you subscribe to a SQL query and get deltas pushed — were genuinely great. On CF Workers, real-time requires Durable Objects + WebSocket plumbing. It works, but it’s not a primitive. You have to build it.

D1 is SQLite, not PostgreSQL. No full-text search (without FTS5 extensions). Limited concurrent write throughput. 500MB database size limit on the free tier. For the spike.land workload (mostly reads, moderate writes), this is fine. For a write-heavy analytical workload, it would not be.

Vendor lock-in shifted. From SpacetimeDB (small startup, young ecosystem) to Cloudflare (massive company, mature ecosystem). I sleep better at night, but I’m still locked in. The StorageAdapter abstraction means I could swap backends, but let’s be honest — if Cloudflare disappeared tomorrow, I’d be rewriting more than adapters.

block-sdk is custom and young. No community. No ecosystem. No Stack Overflow answers. If I get hit by a bus, the next developer reads my code or starts over. But: I own every line. I can read every line. I can change it on a Tuesday. That matters more to me than ecosystem size at this scale.

Schema DSL limitations. The defineTable() DSL supports column types (string, number, boolean, u64, enum) with primaryKey() and optional() modifiers, but has no support for foreign keys, indexes, composite primary keys, or column defaults. For schemas that need these features, use Drizzle directly — block-sdk is designed for simple per-block CRUD tables, not relational modeling.

SpacetimeDB’s developer experience was genuinely good. The Rust module system, the auto-generated bindings, the single-binary deployment — these are well-designed. If SpacetimeDB had been a Cloudflare product, I might still be using it. The reason I left was not quality. It was that I was adding a dependency I didn’t need on top of infrastructure I already had.

Closing

The best architecture is the one you stole the patterns from and rebuilt on infrastructure you already understood.

SpacetimeDB showed me what a good data abstraction looks like: schema and API and auth converging into one thing. I just didn’t need SpacetimeDB to build it. The patterns were universal. The implementation was portable. The weekend was well spent.

Twenty-nine packages. Eighty-plus MCP tools. Eight Cloudflare Workers. Zero servers. One defineBlock() call that packages schema, business logic, and MCP tools into a composable unit that runs anywhere.

Published on March 8 — International Women’s Day, and also Zoltan’s nameday. Boldog névnapot! (Happy nameday!)

Check out the platform at spike.land, or try the QA Studio.


Zoltan Erdos is a developer based in Brighton, UK, building spike.land. He has been spiking on ESM, TypeScript, and build pipelines for five years. He still thinks the best architecture is the one you can delete and rebuild in a day.