Live · Aether memory runtime

Spike Chat keeps context sharp by refusing to carry the whole transcript.

AI chat assistant with Bayesian memory, a four-stage execution pipeline, and MCP-native tool use.

Pipeline: 4 stages
Memory: Bayesian
Tooling: 80+ MCP tools
Context: bounded

channel: app-spike-chat · execute · memory +3
Spike

I kept the stable prompt prefix cached. What do you want to ship today?

You

Find the deploy regression on /apps/spike-chat and explain the fix.

Spike

Route traced. The page is missing from the prerender catalog, so production falls back to the generic homepage shell.


Execution model

Transcript-first is the bug. Staged artifacts are the fix.

  1. Classify

    Routes intent, urgency, and domain before the model starts improvising.

  2. Plan

    Builds a narrow response strategy and decides whether tool work is necessary.

  3. Execute

    Streams the answer, runs tools, and emits compact artifacts instead of transcript sludge.

  4. Extract

    Promotes useful notes, demotes noise, and updates long-term memory after the turn (sketched in code below).
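
One way to read the loop is this minimal sketch, assuming hypothetical stage functions and types; the names, shapes, and hard-coded values below are illustrative, not Spike Chat's actual API.

type Classification = { intent: string; urgency: "low" | "normal" | "high"; domain: string };
type TurnPlan = { strategy: string; needsTools: boolean };
type Artifact = { stage: string; summary: string };

function classify(message: string): Classification {
  // 01 Classify: route intent, urgency, and domain before any generation.
  const intent = /deploy|regression/i.test(message) ? "deploy-regression" : "general";
  return { intent, urgency: "normal", domain: "app-spike-chat" };
}

function plan(c: Classification): TurnPlan {
  // 02 Plan: pick a narrow strategy and decide whether tool work is needed.
  return { strategy: `trace route for ${c.domain}`, needsTools: c.intent === "deploy-regression" };
}

function execute(p: TurnPlan): { answer: string; artifacts: Artifact[] } {
  // 03 Execute: produce the answer plus compact artifacts, not raw transcript.
  const artifacts = p.needsTools
    ? [{ stage: "execute", summary: "prerender catalog missing spike-chat" }]
    : [];
  return { answer: "Route traced; the prerender catalog entry is missing.", artifacts };
}

function extract(artifacts: Artifact[], memory: Artifact[]): void {
  // 04 Extract: promote useful artifacts into long-term memory after the turn.
  memory.push(...artifacts);
}

const memory: Artifact[] = [];
const turn = execute(plan(classify("Find the deploy regression on /apps/spike-chat")));
extract(turn.artifacts, memory);
console.log(turn.answer, `memory now holds ${memory.length} note(s)`);

Each stage hands the next a small typed value instead of the growing transcript, which is what keeps the context bounded.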

Memory policy

Notes compete for context.

Spike Chat scores notes instead of replaying every past turn. Helpful evidence survives. Noise decays.

0.92 Route /apps/spike-chat should resolve to a dedicated product page.
0.81 Full transcript replay is wasteful when stage artifacts already exist.
0.74 Deployment regressions are easier to isolate when route metadata is explicit.
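
A rough way to picture the competition, as a sketch only: the note shape and the multiplicative boost/decay constants below are assumptions, not the actual Bayesian update.

type Note = { text: string; score: number };

const DECAY = 0.9;   // uncited notes lose weight each turn (assumed constant)
const BOOST = 1.15;  // notes that supported the answer gain weight (assumed constant)

function updateScores(notes: Note[], citedThisTurn: Set<string>): Note[] {
  return notes
    .map((n) => ({
      ...n,
      score: Math.min(1, n.score * (citedThisTurn.has(n.text) ? BOOST : DECAY)),
    }))
    .sort((a, b) => b.score - a.score);
}

function packContext(notes: Note[], budget: number): Note[] {
  // Only the highest-scoring notes make it into the bounded context window.
  return notes.slice(0, budget);
}

const notes: Note[] = [
  { text: "Route /apps/spike-chat should resolve to a dedicated product page.", score: 0.92 },
  { text: "Full transcript replay is wasteful when stage artifacts already exist.", score: 0.81 },
  { text: "Deployment regressions are easier to isolate when route metadata is explicit.", score: 0.74 },
];

const cited = new Set([notes[0].text]);
console.log(packContext(updateScores(notes, cited), 2));

Notes that keep earning citations stay near the top; everything else decays until it falls out of the packed context.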

Tooling surface

Bounded tools, visible state.

Tool work stays legible through compact event traces, not giant hidden transcripts. The model gets enough evidence to act without drowning in history.

$ classify --channel app-spike-chat
$ plan --intent deploy-regression
$ execute --tool trace_app_route
artifact emitted: prerender catalog missing spike-chat
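
A compact event trace like the one above could be modeled roughly as follows; the ToolEvent shape and the stub are assumptions, trace_app_route is the tool name from the trace, and a real MCP tool call would go through the client where the stub only records an event.

type ToolEvent = { tool: string; args: Record<string, string>; artifact?: string };

const trace: ToolEvent[] = [];

function runTool(tool: string, args: Record<string, string>): string {
  // A real MCP call would hit the tool server here; this stub just logs the event.
  const artifact = "prerender catalog missing spike-chat";
  trace.push({ tool, args, artifact });
  return artifact;
}

runTool("trace_app_route", { channel: "app-spike-chat", route: "/apps/spike-chat" });

// The model sees this compact trace, not a giant hidden tool transcript.
console.log(JSON.stringify(trace, null, 2));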