Aether memory runtime
Spike Chat keeps context sharp by refusing to carry the whole transcript.
AI chat assistant with Bayesian memory, a four-stage execution pipeline, and MCP-native tool use.
- Pipeline: 4 stages
- Memory: Bayesian
- Tooling: 80+ MCP tools
- Context: bounded
Spike: I kept the stable prompt prefix cached. What do you want to ship today?
You: Find the deploy regression on /apps/spike-chat and explain the fix.
Spike: Route traced. The page is missing from the prerender catalog, so production falls back to the generic homepage shell.
Execution model
Transcript-first is the bug. Staged artifacts are the fix.
- 01 Classify: Routes intent, urgency, and domain before the model starts improvising.
- 02 Plan: Builds a narrow response strategy and decides whether tool work is necessary.
- 03 Execute: Streams the answer, runs tools, and emits compact artifacts instead of transcript sludge.
- 04 Extract: Promotes useful notes, demotes noise, and updates long-term memory after the turn.
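As a rough illustration, a staged turn loop like this could be wired up as below. This is a minimal TypeScript sketch: the names (runTurn, classify, makePlan, and so on) and shapes are hypothetical stand-ins, not Spike Chat's actual API. The point it shows is that each stage emits a compact artifact instead of appending to a transcript.

```ts
// Hypothetical sketch of the four-stage turn loop; names and shapes are
// illustrative, not the actual Spike Chat API.
type Artifact = { stage: "classify" | "plan" | "execute" | "extract"; summary: string };

interface Classification { intent: string; urgency: "low" | "normal" | "high"; domain: string }
interface Plan { strategy: string; needsTools: boolean }

async function runTurn(userMessage: string, memory: string[]): Promise<Artifact[]> {
  const artifacts: Artifact[] = [];

  // 01 Classify: route intent, urgency, and domain before generating anything.
  const cls = await classify(userMessage);
  artifacts.push({ stage: "classify", summary: `${cls.domain}/${cls.intent} (${cls.urgency})` });

  // 02 Plan: pick a narrow response strategy and decide whether tools are needed.
  const plan = await makePlan(cls, memory);
  artifacts.push({ stage: "plan", summary: plan.strategy });

  // 03 Execute: stream the answer, run tools if planned, emit a compact artifact.
  const result = plan.needsTools ? await runTools(plan) : await respond(plan);
  artifacts.push({ stage: "execute", summary: result });

  // 04 Extract: promote useful notes into long-term memory, let noise decay.
  const notes = await extractNotes(artifacts);
  memory.push(...notes);
  artifacts.push({ stage: "extract", summary: `${notes.length} note(s) promoted` });

  return artifacts; // compact per-turn record, not a transcript
}

// Stubs so the sketch type-checks; real implementations would call the model and tools.
declare function classify(msg: string): Promise<Classification>;
declare function makePlan(c: Classification, memory: string[]): Promise<Plan>;
declare function runTools(p: Plan): Promise<string>;
declare function respond(p: Plan): Promise<string>;
declare function extractNotes(a: Artifact[]): Promise<string[]>;
```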
Memory policy
Notes compete for context.
Spike Chat scores notes instead of replaying every past turn. Helpful evidence survives; noise decays.
- 0.92: Route /apps/spike-chat should resolve to a dedicated product page.
- 0.81: Full transcript replay is wasteful when stage artifacts already exist.
- 0.74: Deployment regressions are easier to isolate when route metadata is explicit.
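A minimal sketch of how scoring with decay might work, assuming a simple Beta-posterior mean with exponential decay as a stand-in for the actual Bayesian update; the Note shape, half-life, and selectContext helper below are illustrative assumptions, not the real memory runtime.

```ts
// Illustrative note-scoring sketch. Each note accumulates evidence of being
// useful (hits) or irrelevant (misses), and its score decays with age.
interface Note { text: string; hits: number; misses: number; lastUsedTurn: number }

const HALF_LIFE_TURNS = 20; // assumed decay horizon, purely illustrative

function score(note: Note, currentTurn: number): number {
  // Beta(1,1) prior -> posterior mean of usefulness.
  const posteriorMean = (note.hits + 1) / (note.hits + note.misses + 2);
  const age = currentTurn - note.lastUsedTurn;
  const decay = Math.pow(0.5, age / HALF_LIFE_TURNS);
  return posteriorMean * decay;
}

// Notes compete for a bounded context: only the top-k by score get replayed.
function selectContext(notes: Note[], currentTurn: number, k = 8): Note[] {
  return [...notes]
    .sort((a, b) => score(b, currentTurn) - score(a, currentTurn))
    .slice(0, k);
}
```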
Tooling surface
Bounded tools, visible state.
Tool work stays legible through compact event traces, not giant hidden transcripts. The
model gets enough evidence to act without drowning in history.
$ classify --channel app-spike-chat
$ plan --intent deploy-regression
$ execute --tool trace_app_route
artifact emitted: prerender catalog missing spike-chat
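One way to picture that trace is a thin wrapper that reduces every tool call to a small structured event. The callMcpTool placeholder and ToolEvent shape below are assumptions for illustration, not the runtime's real interface.

```ts
// Sketch of a compact event trace: each tool call becomes one small event,
// not a full transcript dump. `callMcpTool` stands in for the MCP client.
interface ToolEvent {
  tool: string;
  args: Record<string, unknown>;
  artifact: string;   // one-line summary the model sees later
  durationMs: number;
}

declare function callMcpTool(name: string, args: Record<string, unknown>): Promise<unknown>;

async function tracedToolCall(
  name: string,
  args: Record<string, unknown>,
  summarize: (result: unknown) => string,
): Promise<ToolEvent> {
  const start = Date.now();
  const result = await callMcpTool(name, args);
  return {
    tool: name,
    args,
    artifact: summarize(result), // e.g. "prerender catalog missing spike-chat"
    durationMs: Date.now() - start,
  };
}
```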