collective performative intelligence
scalable expert systems where many participants contribute interoperable operations and gain from the intelligence of the network.
Tutorial
Decentralised Creative Network (DCN) is the network and platform where contributors publish, compose, and run executable modules, accessed through an Application Programming Interface (API).
The protocol language running in DCN is called Performative Transactions (PT). It enables building:
scalable expert systems where many participants contribute interoperable operations and gain from the intelligence of the network.
blockchain-based behaviours, operations, and conditions for autonomous objects, decentralised games, and NPC-player interactions.
smart contract-based behaviours, incentives, and programmable conditions in self-governing economic networks: AI marketplaces, hybrid organisations, and resilient agent swarms.
an execution and provenance layer for programmable delegation across humans and AI agents, with roles, constraints, conditions, and verifiable histories of execution.
You use DCN when you want behaviours to outlive apps, teams, and servers, and when you care that an operation keeps working tomorrow, under explicit conditions, with no third party deciding whether it still runs.
You want the operation to remain available as a stable reference (an address), not a link to a repo or a server that can disappear.
You want execution to depend on the operation's own logic and conditions, not on platform admins, service uptime, or API policy changes.
You need behaviours that trigger or constrain actions based on time, state, permissions, thresholds, identity, payments, governance signals, or external event feeds.
You expect many small mechanisms to be chained through dependencies, forks, and reuse into larger constellations.
You want many contributors to add interoperable modules, and agents to assemble them through an API.
Performative Transactions (PT) is the protocol language running in DCN. In Performative Transactions, flows of connectors contributed by humans and AI agents shape how particles (elements from outside the network) are recombined by the network into new configurations.
At runtime, PT starts from one root connector, checks its condition, computes transformed index streams on each dimension, and then either passes those streams to connected child connectors or emits them as output particles.
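The runtime walk described above can be sketched as a recursive evaluation. This is an illustrative Python model, not the real DCN runtime or its API; all names (`condition`, `dimensions`, `transforms`, `child`) are assumptions chosen for readability.

```python
# Hypothetical sketch of PT's runtime walk: start at a root connector,
# check its condition, transform the index stream on each dimension,
# then recurse into children or emit the stream as output particles.

def run_connector(connector, stream):
    """Evaluate one connector over an incoming index stream."""
    if not connector["condition"](stream):
        return []  # condition gate failed: this connector path does not run
    out = []
    for dim in connector["dimensions"]:
        s = stream
        for transform in dim["transforms"]:      # ordered chain per dimension
            s = [transform(v) for v in s]
        child = dim.get("child")
        if child is not None:
            out.extend(run_connector(child, s))  # pass stream downstream
        else:
            out.append(s)                        # emit as output particles
    return out

# Usage: a root with one dimension that shifts indices by 2,
# feeding a child connector that doubles them and emits.
child = {"condition": lambda s: True,
         "dimensions": [{"transforms": [lambda v: v * 2]}]}
root = {"condition": lambda s: len(s) > 0,
        "dimensions": [{"transforms": [lambda v: v + 2], "child": child}]}
print(run_connector(root, [0, 1, 2]))  # → [[4, 6, 8]]
```

Note how the condition gate short-circuits the whole subtree: if the root's stream were empty, nothing below it would run.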
In PT, particles are the smallest reusable value units that flow through connectors. They represent real-world things as values that can be selected, transformed, and composed.
Particles can mean, for instance: pitches, durations, and velocities in music; but also colour values, object behaviours, skills, manuscripts, samples, recordings, and many other elements from outside the network. The role of the DCN is to set them into a new configuration by using connectors contributed by various humans and AI agents building on top of each other's interoperable contributions.
PT is particle-centered in execution: connectors receive particles on their dimensions, process them, and emit output particles for downstream connectors.
Connectors are the runnable units of PT. Users build and run connectors. A connector contains dimensions, transformations, particle dependencies, and a condition gate.
In practice, this is where users define the structure of execution: which child connectors are connected on which dimensions, which transformations run per dimension, and which condition gates activation.
A connector:
holds one or more dimensions, each with its own chain of transformations.
declares particle dependencies on the child connectors connected to those dimensions.
activates only when its condition gate is met.
Connectors compose into larger graphs: one connector's output particles become another connector's input particles.
Example 1
a connector over musical pitch can be modelled by a connector over a chromatic scale, which can be modelled by a connector over a major scale, which can be modelled by a connector over a melody, and so on.
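Example 1 can be modelled as layered index selection, where each connector selects indices into the space defined by the connector below it. The following Python snippet is only an illustration of that layering, not DCN code; the scale and melody data are assumptions.

```python
# Illustrative model of Example 1: each layer is a space of values,
# and the layer above selects indices into it.

CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]   # major scale as indices into the chromatic scale
MELODY = [0, 2, 4, 2, 0]               # melody as indices into the major scale

def select(space, indices):
    """A 'connector over space': pick values by index."""
    return [space[i] for i in indices]

major_scale = select(CHROMATIC, MAJOR_STEPS)   # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
melody = select(major_scale, MELODY)           # ['C', 'E', 'G', 'E', 'C']
print(melody)
```

Each step narrows a field of possibilities: twelve chromatic pitches become seven scale degrees, which become one specific melodic line.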
Example 2
a connector over time can be modelled by a connector over a tempo grid, which can be modelled by a connector over swing and microtiming profiles, which can be modelled by a connector over groove templates for different parts (kick, hi-hats, bass, chords), which can be modelled by a connector over an arrangement structure (intro, build, drop, breakdown, outro), and so on.
Example 3
a connector over colour can be modelled by a connector over an RGB (or HSL) palette, which can be modelled by a connector over palette constraints (warm/cool bias, contrast range, saturation limits), which can be modelled by a connector over a composition rule-set (background/foreground separation, accent distribution, gradient fields), and so on.
Example 4
a connector over time can be modelled by a connector over a timeline (frames, clips, markers), which can be modelled by a connector over edit decisions (cuts, transitions, pacing rules), which can be modelled by a connector over colour grading transforms (LUT selection, exposure/contrast curves, saturation limits), which can be modelled by a connector over an effects and export pipeline (stabilisation, compositing, audio mix, render settings), and so on.
Example 5
a connector over space can be modelled by a connector over a coordinate system and scene graph, which can be modelled by a connector over physics rules (collision layers, forces, constraints), which can be modelled by a connector over agent behaviours (navigation meshes, state machines, decision policies), which can be modelled by a connector over level logic (spawning, triggers, quests, difficulty curves), and so on.
Dimensions are chains of transformations inside a connector, to which other connectors can be connected. The transformations on a dimension then select from the values output by the connector connected to that dimension.
For example, one dimension can represent pitch values, another rhythmic values, and another reference values. Transformations on each dimension decide which particles are selected and how they are mapped forward.
Dimensions make heterogeneous particles interoperable by placing them into explicit, programmable spaces.
Transformations are reusable pieces of logic that select, map, and pass particles through connector dimensions.
On each dimension, they are applied as an ordered chain, with explicit arguments per step, so users can control how index streams are selected and shifted over time.
A transformation can be simple, for example add a constant number to an input value (output = input + x), or it can encode any higher-level rule that code can express.
Transformations are written in Solidity (Ethereum's programming language for smart contracts). Many are already in the network and can be reused directly.
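Transformations run on-chain in Solidity, but the chaining semantics can be illustrated in any language. Below is a hedged Python sketch of an ordered transformation chain with explicit arguments per step; the function names and the `(fn, args)` representation are assumptions for illustration, not the PT interface.

```python
# Sketch of an ordered transformation chain on one dimension.
# Each step is (function, args), applied left to right over the stream.

def add(v, x):
    return v + x                     # the simple case from the text: output = input + x

def scale_mod(v, factor, modulus):
    return (v * factor) % modulus    # a higher-level rule that code can express

def apply_chain(chain, stream):
    """Apply transformations in order, with explicit arguments per step."""
    for fn, args in chain:
        stream = [fn(v, *args) for v in stream]
    return stream

chain = [(add, (3,)), (scale_mod, (2, 12))]
print(apply_chain(chain, [0, 1, 2]))  # → [6, 8, 10]
```

Because the chain is ordered and each step takes explicit arguments, swapping step order or changing an argument yields a different index stream from the same inputs.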
Any connector can be wrapped in a condition.
If a connector's condition is not met, that connector path does not run.
A condition is a piece of Solidity code that specifies what must be true for the connector to activate.
Conditions can be:
financial, for example: the connector activates when a payment above a threshold is received, or when a token balance requirement is met.
non-financial, for example: the connector activates when an artistic or technical condition is met (time windows, permissions, external signals, governance events, or other programmable triggers).
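Conditions themselves are Solidity code evaluated on-chain; the Python sketch below only models the gating logic for both kinds of condition. The context fields (`payment`, `now`) and function names are hypothetical, chosen to mirror the two examples above.

```python
# Model of condition gating: a connector runs only if its condition holds.

def payment_condition(ctx, threshold):
    """Financial gate: activates when a payment meets a threshold."""
    return ctx.get("payment", 0) >= threshold

def time_window_condition(ctx, start, end):
    """Non-financial gate: activates inside a time window."""
    return start <= ctx.get("now", 0) < end

def gated_run(condition, run, ctx):
    """Run the connector only if its condition is met; otherwise skip the path."""
    return run(ctx) if condition(ctx) else None

run = lambda ctx: "activated"
print(gated_run(lambda c: payment_condition(c, 100), run, {"payment": 150}))  # → activated
print(gated_run(lambda c: payment_condition(c, 100), run, {"payment": 10}))   # → None
```

The same gating shape covers governance events, permissions, or external signals: only the predicate changes.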
Besides graph structure, PT exposes a small set of runtime controls that users can set intentionally.
This means you can keep the same connector graph but produce different particle selections by changing runtime controls.
When a connector runs, it outputs particles that correspond to values in the world outside DCN. Practically, this is often a list of index values selecting something from a defined space of possibilities.
The output is a set of streams (paths in the connector graph), and each stream contains N generated values.
Those values become meaningful when mapped into your external workflow: a plugin, a DAW, a game engine, notation software, or any other environment. PT provides the interoperable selection and transformation logic; the external tool renders, plays, simulates, or materialises the result.
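As a concrete instance of that mapping step, here is a hedged Python sketch that turns one output stream of index values into MIDI note numbers for a DAW or plugin. The scale, base note, and octave-wrap rule are assumptions for illustration, not part of PT.

```python
# Mapping a PT output stream (a list of index values) into an external
# workflow: here, scale-degree indices become MIDI note numbers.

C_MAJOR_OFFSETS = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of a major scale
BASE_MIDI = 60                             # middle C

def to_midi(stream):
    """Map scale-degree indices (with octave wrap every 7 degrees) to MIDI notes."""
    return [BASE_MIDI + 12 * (i // 7) + C_MAJOR_OFFSETS[i % 7] for i in stream]

print(to_midi([0, 2, 4, 7]))  # → [60, 64, 67, 72]
```

The same index stream could just as well drive colour values, game-object behaviours, or edit decisions; only the external mapping changes, not the PT logic that produced the stream.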
This is how DCN connects individual workflows to collective performative intelligence: participants contribute reusable operations (connectors, transformations, conditions), and others, humans and AI agents, assemble them into new executable chains.
Interoperability only works when components speak the same operational language.
Instead of asking a system to "produce something," a connector selects and transforms particles from spaces that other connectors define. Each step reduces a field of possibilities into a more specific configuration.
This makes heterogeneous contributions compatible: humans and AI agents can all contribute connectors as long as they operate on shared value structures.
Connectors don't need to know who created the previous step. They only need to understand the format of incoming particles.
That's what makes the network scalable.
When systems generate outputs independently, their results are difficult to combine. When systems select and transform shared value spaces, they become interoperable by design.
DCN is built this way because the goal is not isolated creativity. The goal is collective performative intelligence.
Core terms used across protocol, server, and API documentation.