A2QL is an open protocol where AI agents express data queries as declarative graphs. The runtime validates the DAG, topologically sorts it into parallel execution waves, and streams results into your React components. The agent never writes a query string. It writes a graph. The graph is the query.
// The agent outputs this. Not SQL. Not code. A validated DAG.
const graph: A2QLDataGraph = {
  graphId: 'incident-metrics',
  nodes: [
    { id: 'src', type: 'source', provider: 'servicenow', op: 'aggregate' },
    { id: 'grouped', type: 'transform', op: 'group', input: 'src' },
    { id: 'sorted', type: 'transform', op: 'sort', input: 'grouped' }
  ],
  bindings: { 'dashboard': 'sorted' }
};
// Topo-sort → execution waves → live React state. One hook.
const { surfaces, isRefreshing } = useLiveCanvas(a2uiSurfaces, [graph]);
How it works under the hood
An agent emits a JSON DAG. A2QL validates it, resolves the topology, splits independent branches into concurrent execution waves, and pushes results into React state. No query strings. No eval. No ORM.
Three node types: source, transform, parameter. Flat JSON arrays. No nesting, no inheritance, no hidden state. Agents generate them. Humans can git diff them.
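The flat schema can be sketched as TypeScript types. This reconstructs the shape of the `A2QLDataGraph` type used in the hero snippet; fields beyond the ones shown there (`table`, `config`, `kind`, `default`) are illustrative assumptions, not the official spec.

```typescript
// Sketch of the flat A2QL node schema. Field names beyond the hero
// example are assumptions for illustration.
type SourceNode = {
  id: string;
  type: 'source';
  provider: string;                  // e.g. 'servicenow'
  table?: string;                    // table or record set to read (assumed field)
  op: 'rows' | 'aggregate' | 'record';
};

type TransformNode = {
  id: string;
  type: 'transform';
  op: 'map' | 'filter' | 'group' | 'sort' | 'slice' | 'compute' | 'join' | 'merge';
  input: string;                     // id of the upstream node
  config?: Record<string, unknown>;  // parameterized config, never code (assumed field)
};

type ParameterNode = {
  id: string;
  type: 'parameter';
  kind: 'dateRange' | 'enum' | 'string' | 'number';  // assumed field
  default: unknown;                  // agent-set default, user-overridable (assumed field)
};

type A2QLNode = SourceNode | TransformNode | ParameterNode;

type A2QLDataGraph = {
  graphId: string;
  nodes: A2QLNode[];                 // flat array — no nesting
  bindings: Record<string, string>;  // surface name → node id
};
```

Because every node sits in one flat array and references others only by string ID, two graphs diff cleanly line by line.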
map, filter, group, sort, slice, compute, join, merge. Each one is a pure function over rows. Chain them. The topo-sort figures out the rest.
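A few of these ops, sketched as pure functions over rows. The config shapes (`field`, `dir`, `limit`) are assumptions for illustration, not the official op configs; the point is that each op is rows in, new rows out, nothing mutated.

```typescript
// Transforms as pure functions over rows (illustrative shapes).
type Row = Record<string, unknown>;

// sort: copy, then order by a numeric field
const sort = (rows: Row[], field: string, dir: 'asc' | 'desc' = 'asc'): Row[] =>
  [...rows].sort((a, b) => {
    const x = a[field] as number, y = b[field] as number;
    return dir === 'asc' ? x - y : y - x;
  });

// slice: keep the first N rows
const slice = (rows: Row[], limit: number): Row[] => rows.slice(0, limit);

// group: one output row per distinct value, with a count
const group = (rows: Row[], field: string): Row[] => {
  const counts = new Map<unknown, number>();
  for (const r of rows) counts.set(r[field], (counts.get(r[field]) ?? 0) + 1);
  return [...counts].map(([key, count]) => ({ [field]: key, count }));
};
```

Purity is what makes chaining safe: any transform's output can feed any other transform, in any order the topology allows.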
Topological sort groups nodes into waves. Nodes in the same wave have no data dependencies on each other, so they execute concurrently. You get parallelism from the graph structure itself.
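One way to derive waves is Kahn's algorithm: repeatedly peel off every node whose dependencies are already satisfied, and each peel is one wave. A minimal sketch, not the actual A2QL runtime:

```typescript
// Wave scheduling via Kahn's algorithm. Nodes in one wave have no
// dependencies on each other, so they can run concurrently.
type Edge = { from: string; to: string };

function executionWaves(nodeIds: string[], edges: Edge[]): string[][] {
  // Count incoming edges per node
  const indegree = new Map(nodeIds.map(id => [id, 0] as [string, number]));
  for (const e of edges) indegree.set(e.to, (indegree.get(e.to) ?? 0) + 1);

  const waves: string[][] = [];
  let ready = nodeIds.filter(id => indegree.get(id) === 0);
  while (ready.length > 0) {
    waves.push(ready);
    const next: string[] = [];
    for (const id of ready) {
      // Releasing a node decrements its dependents' indegree
      for (const e of edges.filter(edge => edge.from === id)) {
        const d = indegree.get(e.to)! - 1;
        indegree.set(e.to, d);
        if (d === 0) next.push(e.to);
      }
    }
    ready = next;
  }
  // Anything left with nonzero indegree is part of a cycle
  if (waves.flat().length !== nodeIds.length) throw new Error('cycle detected');
  return waves;
}
```

A diamond graph (`src → a`, `src → b`, `a → out`, `b → out`) yields three waves, with `a` and `b` sharing the middle wave.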
Cycle detection. Orphan node checks. Duplicate ID prevention. Binding validation. All of it runs before a single byte hits the network. Bad graphs fail fast and tell you exactly why.
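A pre-flight validator along these lines is straightforward to sketch. The error strings and the single-`input` edge model are assumptions for illustration, not the runtime's actual messages:

```typescript
// Pre-flight graph validation: duplicate IDs, dangling inputs,
// bad bindings, and cycles — all before any network call.
type GraphNode = { id: string; type: string; input?: string };
type Graph = { graphId: string; nodes: GraphNode[]; bindings: Record<string, string> };

function validateGraph(graph: Graph): string[] {
  const errors: string[] = [];

  // Duplicate ID prevention
  const ids = new Set<string>();
  for (const n of graph.nodes) {
    if (ids.has(n.id)) errors.push(`duplicate node id: ${n.id}`);
    ids.add(n.id);
  }

  // Orphan check: input refs must resolve to real nodes
  for (const n of graph.nodes) {
    if (n.input !== undefined && !ids.has(n.input)) {
      errors.push(`node ${n.id} references missing input: ${n.input}`);
    }
  }

  // Binding validation: every bound surface must target a real node
  for (const [surface, nodeId] of Object.entries(graph.bindings)) {
    if (!ids.has(nodeId)) errors.push(`binding ${surface} targets missing node: ${nodeId}`);
  }

  // Cycle detection via DFS coloring
  const adj = new Map(graph.nodes.map(n => [n.id, []] as [string, string[]]));
  for (const n of graph.nodes) {
    if (n.input !== undefined && adj.has(n.input)) adj.get(n.input)!.push(n.id);
  }
  const state = new Map<string, 'visiting' | 'done'>();
  const hasCycle = (id: string): boolean => {
    if (state.get(id) === 'visiting') return true;   // back edge → cycle
    if (state.get(id) === 'done') return false;
    state.set(id, 'visiting');
    for (const next of adj.get(id) ?? []) {
      if (hasCycle(next)) return true;
    }
    state.set(id, 'done');
    return false;
  };
  if (graph.nodes.some(n => hasCycle(n.id))) errors.push('cycle detected');

  return errors;
}
```

Collecting every error instead of throwing on the first lets the agent repair a rejected graph in one round trip.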
React hooks subscribe to graph outputs. When the refresh policy fires (configurable from 1 min to 1 day), the graph re-executes and your components re-render with fresh data. No polling boilerplate.
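The scheduling half of that loop needs no React-specific machinery. A sketch, assuming a policy expressed in minutes and clamped to the 1 min – 1 day range above; the names and the minutes-based policy shape are hypothetical:

```typescript
// Refresh scheduling sketch: clamp the policy, re-execute on a timer.
const MIN_MS = 60_000;        // 1 minute floor
const MAX_MS = 86_400_000;    // 1 day ceiling

function refreshIntervalMs(policyMinutes: number): number {
  return Math.min(MAX_MS, Math.max(MIN_MS, policyMinutes * MIN_MS));
}

// `execute` stands in for re-running the graph and pushing fresh
// rows into React state. Returns an unsubscribe for cleanup.
function scheduleRefresh(policyMinutes: number, execute: () => void): () => void {
  const timer = setInterval(execute, refreshIntervalMs(policyMinutes));
  return () => clearInterval(timer);
}
```

Inside a hook, the returned unsubscribe would be the effect cleanup, so unmounting a component stops its refresh timer.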
Compute nodes use a restricted arithmetic engine. Parameterized configs prevent injection. The entire attack surface of agent-generated code is eliminated by construction. There is no code to inject into.
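A restricted arithmetic engine can be nothing more than an interpreter over a whitelisted expression tree: JSON in, number out, no parser and no `eval()` anywhere. The tree shape below is an assumption for illustration, not the A2QL spec:

```typescript
// Restricted arithmetic sketch for compute nodes. The expression is
// data, not code — only these operators can ever execute.
type Expr =
  | { op: 'field'; name: string }                       // read a column from the row
  | { op: 'const'; value: number }
  | { op: 'add' | 'sub' | 'mul' | 'div'; left: Expr; right: Expr };

function evalExpr(expr: Expr, row: Record<string, number>): number {
  switch (expr.op) {
    case 'field': {
      const v = row[expr.name];
      if (typeof v !== 'number') throw new Error(`missing numeric field: ${expr.name}`);
      return v;
    }
    case 'const': return expr.value;
    case 'add': return evalExpr(expr.left, row) + evalExpr(expr.right, row);
    case 'sub': return evalExpr(expr.left, row) - evalExpr(expr.right, row);
    case 'mul': return evalExpr(expr.left, row) * evalExpr(expr.right, row);
    case 'div': return evalExpr(expr.left, row) / evalExpr(expr.right, row);
  }
}
```

An agent can only combine these operators; a malicious or confused graph degrades to a wrong number, never to arbitrary code.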
Architecture
The agent reasons about what data the user needs. A2QL handles how to get it. A2UI decides how to show it. Three layers, clean separation, no coupling between them.
Agent: Interprets user intent, emits a data graph
A2QL: Validates, topo-sorts, executes in waves
A2UI: Binds live results to declarative surfaces
Developer Experience
Source nodes point at a provider and table. Transform nodes wire into each other via input refs. The shape is a flat array. That's the whole schema.
useDataGraph takes a graph, returns streaming data. useLiveCanvas takes multiple graphs and wires them to A2UI surfaces. One line each.
Pass surfaces to <A2UIRenderer />. It resolves dynamic values, dispatches actions, and re-renders when the refresh policy fires. You write zero data-fetching code.
The Spec
Every A2QL graph is a validated DAG you can serialize to JSON, store in a database, diff in a PR, or replay in a test. The agent doesn't call your API. It describes a data flow. Your runtime executes it.
Three node types. That's the whole vocabulary.
Source, Transform, Parameter. Combine them to express any data pipeline.
Deterministic ordering
Same graph, same topological sort, same execution order. Always.
Named bindings
Export specific node outputs to UI surfaces with typed contracts.
Runtime parameters
Date ranges, enums, dynamic values. Resolved at execution time, not compile time.
Source: Points at a data provider and operation. Fetches rows, aggregates, or specific records. Field mapping and display values included.
Transform: A pure function over the output of another node, wired to its input by ID. Deterministic, side-effect-free.
Parameter: User-facing inputs with defaults and runtime overrides. Date ranges, enums, strings, numbers. The agent sets defaults. The user can override them.
Three packages. One graph definition. Your agent handles the rest.