The Ω Protocol Family

High-fidelity command transport designed for real agent workloads. Preserves payload structure, eliminates escaping failures, and makes multi-step execution practical in live browser-hosted environments.

Protocol Primitives

PRIMARY

ΩHERE

Heredoc-style command format with custom delimiters. Passes multi-line content — scripts, configs, file edits — with zero escaping. The workhorse of daily operations.

ΩHERE run_command
@command=bash
@stdin<<SCRIPT
echo "hello $USER"
SCRIPT
ΩEND
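
A minimal sketch of the receiving side, assuming the grammar shown above (`ΩHERE <tool>`, `@key=value` scalars, `@key<<DELIM` heredocs, `ΩEND` terminator). The field conventions are inferred from the example, not a published spec:

```javascript
// Parse an ΩHERE block into { tool, params }. Assumed grammar: first line
// "ΩHERE <tool>", then "@key=value" scalars or "@key<<DELIM" heredocs
// terminated by DELIM on its own line, then "ΩEND".
function parseOmegaHere(text) {
  const lines = text.split('\n');
  const head = lines[0].match(/^ΩHERE\s+(\S+)/);
  if (!head) throw new Error('not an ΩHERE block');
  const params = {};
  for (let i = 1; i < lines.length; i++) {
    const line = lines[i];
    if (line === 'ΩEND') break;
    let m;
    if ((m = line.match(/^@(\w+)<<(\w+)$/))) {
      // Heredoc: collect raw lines until the delimiter — no escaping applied.
      const body = [];
      while (lines[++i] !== m[2]) body.push(lines[i]);
      params[m[1]] = body.join('\n');
    } else if ((m = line.match(/^@(\w+)=(.*)$/))) {
      params[m[1]] = m[2];
    }
  }
  return { tool: head[1], params };
}
```

Because the heredoc body is copied verbatim, the `$USER` in the example reaches the tool untouched.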

PARALLEL

ΩBATCH

Structured batch execution with conditional chaining. Runs multiple operations in a single round-trip, with variable passing and error control between steps.

ΩBATCH{"steps":[
  {"tool":"run_command","params":{...},
   "saveAs":"v1"},
  {"tool":"eval_js","params":{...},
   "when":{"var":"v1","success":true}}
]}ΩEND
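
A sketch of the executor semantics implied by the example — `saveAs` stores a step's result under a variable name, `when` gates a later step on a prior result's `success` flag. These semantics are inferred from the example above, and the tool functions here are illustrative stubs:

```javascript
// Minimal ΩBATCH-style executor: run steps in order, save results under
// their "saveAs" names, skip any step whose "when" condition is not met.
async function runBatch(steps, tools) {
  const vars = {};
  const results = [];
  for (const step of steps) {
    // Gate on a previously saved variable's success flag.
    if (step.when && (vars[step.when.var]?.success ?? false) !== step.when.success) {
      results.push({ skipped: true });
      continue;
    }
    const result = await tools[step.tool](step.params, vars);
    if (step.saveAs) vars[step.saveAs] = result;
    results.push(result);
  }
  return { vars, results };
}
```

Passing `vars` into each tool call is what lets step two consume step one's output without a second round-trip.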

ORCHESTRATION

ΩPLAN / ΩFLOW

Higher-level task planning and workflow templates. ΩPLAN declares goals for intelligent decomposition; ΩFLOW invokes reusable workflow templates with parameterized steps.

ΩPLAN{"goal":"Deploy new feature"}
ΩFLOW{"template":"build-test-deploy"}

NEW — ZERO-ESCAPE

ΩCODE

Zero-corruption code transport channel. AI outputs raw code in the SSE stream; the extension intercepts it before DOM rendering and stores it in a dedicated storage backend. No JSON serialization, no shell escaping, no base64 bloat.

ΩCODE
var regex = /[^a-z]+/g;
var str = "hello$world";
return str.split(regex);
ΩCODEEND

The Multi-Layer Escaping Problem

When AI-generated code travels from model output to execution, it passes through multiple serialization layers. Each layer applies its own escaping rules, and the compounding effect makes complex code — regex, template strings, nested quotes — nearly impossible to transmit intact.

  AI Model Output
      │
      │  ① SSE stream (text/event-stream)
      ▼
  Agent Parser          ← JSON.parse() on each chunk
      │
      │  ② Tool parameter extraction
      ▼
  eval_js / run_command  ← Shell or JS interpreter
      │
      │  ③ Code execution
      ▼
  Runtime Result

  Each arrow = potential escaping corruption
  3 layers × special chars = exponential breakage

JSON serialization

Backslashes doubled, quotes escaped; template-literal syntax passes through to be interpolated at eval time

\n becomes \\n, ${var} interpolated

Shell expansion

$ and backticks trigger variable/command substitution

$USER replaced, $(cmd) executed

Regex in eval_js

Character classes mangled by nested escaping

[^a-z] breaks across JSON→JS→RegExp layers

Multi-layer nesting

Each transport layer adds its own escape rules

3+ layers = practically impossible to predict
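
The JSON layer alone is easy to demonstrate: each serialization hop doubles every backslash the payload carries and adds one for every quote, and the payload survives only if every hop is unwrapped in exact reverse order.

```javascript
// Two JSON hops, as when code rides inside a tool call that is itself
// framed as JSON in the stream. Each hop compounds the escape burden.
const payload = 'var s = "\\n";';                 // the code to transmit (1 backslash)
const hop1 = JSON.stringify({ code: payload });   // agent → tool call   (4 backslashes)
const hop2 = JSON.stringify(hop1);                // tool call → frame   (14 backslashes)

// Recoverable only by unwrapping both layers in exact order:
const restored = JSON.parse(JSON.parse(hop2)).code;
console.log(restored === payload);                // true
```

One mismatched unwrap anywhere in the chain and the `\n` arrives as a literal backslash-n, or the parse fails outright.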

ΩCODE: The Zero-Escape Solution

Instead of fighting escaping layers, ΩCODE bypasses them entirely. The code never enters JSON serialization or shell expansion — it travels as raw text in the SSE stream and is captured before any processing occurs.

  AI Model Output
      │
      │  SSE stream contains: ΩCODE\n...raw code...\nΩCODEEND
      ▼
  SSE Hook (sse-hook.js)     ← Intercepts raw stream
      │
      │  content.js tryParseSSECommands()
      │  Detects ΩCODE markers in accumulated text
      ▼
  writeCodeStorage()         ← Stores via Genspark API
      │                           (same-origin, no auth needed)
      ▼
  Code Storage (731a7c05)    ← Dedicated conversation slot
      │
      │  eval_js: readCodeStorage().then(code => new Function(code)())
      ▼
  Browser Execution          ← Zero corruption guaranteed

  ✓ No JSON serialization    ✓ No shell escaping
  ✓ No base64 bloat (0%)     ✓ Regex, quotes, $vars all preserved
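
The interception step can be sketched as plain string scanning over the accumulated stream text (assumed logic, modeled on the `tryParseSSECommands()` role in the diagram):

```javascript
// Scan accumulated SSE text for a complete ΩCODE…ΩCODEEND pair and return
// the raw body, or null if the block has not fully streamed in yet.
function extractOmegaCode(accumulated) {
  const start = accumulated.indexOf('ΩCODE\n');
  if (start === -1) return null;
  const end = accumulated.indexOf('\nΩCODEEND', start);
  if (end === -1) return null;              // incomplete — wait for more chunks
  // Everything between the markers passes through verbatim: no JSON.parse,
  // no shell, so regex, quotes, and $vars arrive exactly as emitted.
  return accumulated.slice(start + 'ΩCODE\n'.length, end);
}
```

Returning `null` on a missing end marker is what makes this safe to call on every chunk as the stream accumulates.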

Dual Storage Architecture

Context and code live in separate storage channels, preventing ΩCODE payloads from overwriting compression summaries. Both use Genspark's project API as a key-value store, accessed via same-origin fetch from the browser extension.

59cdb9cb

Context Storage

Compression summaries, cross-session memory. Written by autoCompress, read on conversation restore.

writeContextStorage() / readContextStorage()

731a7c05

Code Storage

Dedicated to ΩCODE payloads. Intercepted from SSE stream and stored automatically, executed via one-line eval_js.

writeCodeStorage() / readCodeStorage()

  Browser Extension (content.js + sse-hook.js)
  ├── MAIN world (sse-hook.js, document_start)
  │   ├─ writeContextStorage()  → 59cdb9cb conversation
  │   ├─ readContextStorage()   → 59cdb9cb conversation
  │   ├─ writeCodeStorage()     → 731a7c05 conversation
  │   └─ readCodeStorage()      → 731a7c05 conversation
  │
  └── ISOLATED world (content.js, document_idle)
      ├─ writeContextStorage()  → 59cdb9cb (own copy)
      ├─ readContextStorage()   → 59cdb9cb (own copy)
      ├─ writeCodeStorage()     → 731a7c05 (own copy)
      ├─ readCodeStorage()      → 731a7c05 (own copy)
      └─ autoCompress()         → triggers compression

  Both worlds have independent function copies.
  Same API endpoint, same conversation IDs.
  ISOLATED world is where tool execution happens.
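
A hypothetical sketch of the wrapper shape: the real endpoint path, payload format, and conversation-ID routing are internal to Genspark's project API, so everything below except the same-origin-fetch pattern is an assumption. `fetchFn` is injectable only to make the sketch testable:

```javascript
const CODE_SLOT = '731a7c05';

async function writeCodeStorage(code, fetchFn = globalThis.fetch) {
  // Same-origin call: the browser's session cookies carry auth, no tokens.
  const res = await fetchFn(`/api/project/${CODE_SLOT}`, {   // hypothetical path
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // One symmetric JSON encode/decode at the storage boundary is safe:
    // the same layer that wraps the code also unwraps it.
    body: JSON.stringify({ content: code }),
  });
  return res.ok;
}

async function readCodeStorage(fetchFn = globalThis.fetch) {
  const res = await fetchFn(`/api/project/${CODE_SLOT}`);
  return (await res.json()).content;
}
```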

Why Not Just Base64?

We tried multiple approaches before arriving at ΩCODE. Each solved part of the problem but introduced new trade-offs.

  Approach          Escaping  Size    Latency  Complexity
  ─────────────────────────────────────────────────────
  eval_js direct     ✗ Breaks  100%    ~0ms     Low
  write_file+base64  ✓ Safe    133%    ~50ms    Medium
  AI Drive upload    ✓ Safe    100%    ~2.3s    High
  ΩCODE             ✓ Safe    100%    ~0ms     Low      ← Winner

  ΩCODE: zero bloat, zero latency, zero escaping layers.
  The code travels in the SSE stream itself — no detour.
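
The 133% row follows from base64's 4-characters-per-3-bytes encoding ratio, which is easy to verify:

```javascript
// base64 emits 4 output characters for every 3 input bytes: +33% bloat
// before the encoded blob is itself escaped for whatever layer carries it.
const code = 'x'.repeat(3000);                           // 3,000-byte payload
const b64 = Buffer.from(code, 'utf8').toString('base64');
console.log(code.length, '→', b64.length);               // 3000 → 4000
```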

Design Principles

Stream-Native

Commands and code ride the existing SSE stream. No side channels, no extra HTTP requests for transport. The stream is the bus.

Intercept Before Render

Ω markers are captured from raw SSE text before the chat UI renders them. The browser never tries to display or interpret the payload as markdown.

Same-Origin Advantage

Running inside genspark.ai means free access to authenticated APIs. No tokens to manage, no CORS issues. The browser is the auth layer.

Separation of Concerns

Context summaries and code payloads in separate storage slots. One cannot corrupt the other. Each has its own lifecycle and access patterns.

VFS: Persistent Virtual File System

Seven named slots backed by Genspark's project API, each holding up to ~2MB. Together they give the agent cross-session memory, dynamic prompt injection, an executable function library, and three-level disaster recovery.

context

Compressed conversation summaries for cross-session memory. Auto-written by autoCompress, restored on new conversation boot.

registry

Self-referential VFS registry. Maps slot names to metadata, sizes, and descriptions. The index of all other slots.

boot-prompt

Dynamic prompt generator. Assembles the injection payload from other slots on each conversation start, with live function-name extraction from the fn slot.

ref-guide

Operational reference: environment config, infrastructure IPs, tool selection rules, error handling patterns, and hard-won pitfall notes.

system-prompt

Compact system prompt template. Distilled rules and behavioral guidelines, injected alongside dynamic content.

toolkit

22 skill quick-references with command templates and code snippets. Loaded on demand to save injection budget (~6.7K chars).

fn

Executable function library. 20 JavaScript functions for API calls, DOM manipulation, data processing. Loaded via vfs.exec() into window.__tk.

  Conversation Start
      │
      │  Boot Sequence
      ▼
  VFS Registry (registry)    ← List all mounted slots
      │
      ├── Read context          → Restore last session summary
      ├── Read ref-guide        → Inject environment + rules
      ├── Read system-prompt    → Inject behavioral guidelines
      ├── Read boot-prompt      → Assemble final injection payload
      │
      │  On Demand (not auto-injected)
      ├── vfs.read('toolkit')   → Skill templates when needed
      └── vfs.exec('fn')        → Load functions to window.__tk
      
  Three-Level Disaster Recovery
  ───────────────────────────────
  L1: Genspark Project API    (primary, ~2MB per slot)
  L2: chrome.storage.local    (extension-local fallback)
  L3: Oracle ARM server       (PM2 cron, CDP snapshots every 6h)
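
The `vfs.exec('fn') → window.__tk` step above might look like the following sketch. This is an assumed shape — the real `vfs` object reads slots from the remote store, for which a Map stands in here:

```javascript
// Sketch of a vfs.exec() loader: read a slot's JavaScript source, evaluate
// it, and merge the returned functions into a shared window.__tk namespace.
const vfs = {
  slots: new Map(),                        // stand-in for the remote key-value store
  async read(name) { return this.slots.get(name) ?? ''; },
  async write(name, text) { this.slots.set(name, text); },
  async exec(name, target = (globalThis.__tk ??= {})) {
    const code = await this.read(name);
    // new Function gives a clean scope; the slot's source is expected to
    // `return` an object of named functions.
    const exports = new Function(code)();
    return Object.assign(target, exports);
  },
};
```

Merging into a shared namespace (rather than replacing it) lets multiple slots contribute functions without clobbering each other.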

Backup Pipeline

VFS data is too valuable to trust to a single storage layer. A three-level recovery system ensures slots survive browser crashes, extension updates, and API outages.

CDP Snapshot

Chrome DevTools Protocol reads VFS slots directly from the browser over a headless connection. No login needed — it attaches to the existing Chrome instance.

PM2 Scheduled

The PM2 ecosystem config runs vfs-backup.py every 6 hours on the Oracle ARM server. Snapshots are stored as timestamped JSON files.

Manual Trigger

vfs.backup() from any conversation exports all slots. vfs.snapshot(name) captures a single slot with timestamp.