Cybarete

In 1967, Arthur Koestler coined a word that still feels like it was written for today’s distributed software: the holon—an entity that is both a whole and a part. A cell is a complete unit and a component of an organ. A team is coherent and also a subsystem of a company. A microservice can be deployed independently and still depends on the larger product to be meaningful.

Agent-based systems are a pragmatic return to this idea, not because “agents” are fashionable, but because the environments we’re automating are not tidy enough for a single, central intelligence to remain correct for long.

The holon is the right shape for reality

Koestler’s holon rejects a simplistic choice between atomism (“only parts are real”) and holism (“only the whole is real”). The holon says: both.

That matters for industrial systems because the field is not a spreadsheet. It’s moving equipment, wear, variability, constraints, and a constant parade of exceptions: a pump is down, weather shifts, comms degrade, operators improvise, and economics change mid-shift.

In those conditions, “the system” is not a monolith. It is a stack of semi-autonomous wholes:

  • A sensing node that can self-check, buffer, and degrade gracefully.
  • A machine controller that can choose safe states.
  • A local coordinator that can allocate tasks across a few resources.
  • A site-level optimizer that can rebalance priorities with incomplete data.
  • A business constraint layer that shifts what “good” means (throughput vs. energy vs. safety vs. compliance).

Each layer is a whole; each layer is also a part. That is holonic structure in practice.
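One way to make that duality concrete is a sketch in which every layer implements the same tiny interface: it can act as a whole on its own, and it can be composed as a part of a larger holon. The names here (`Holon`, the layer labels) are illustrative, not an existing framework.

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A unit that is both a whole (it can act alone) and a part (it composes)."""
    name: str
    parts: list = field(default_factory=list)

    def act(self) -> str:
        # Acting as a whole: local behavior that needs no parent.
        return f"{self.name}: ok"

    def step(self) -> list:
        # Acting as a part: delegate to sub-holons first, then act itself.
        results = [r for part in self.parts for r in part.step()]
        results.append(self.act())
        return results

# Four of the layers from the list above, composed bottom-up.
sensor = Holon("sensing-node")
controller = Holon("machine-controller", parts=[sensor])
coordinator = Holon("local-coordinator", parts=[controller])
site = Holon("site-optimizer", parts=[coordinator])

print(site.step())
```

The point of the sketch is that `controller.step()` is a complete, meaningful call on its own, and the identical call on `site` drives the whole stack: the same object is simultaneously a usable whole and a composed part.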

Holarchy vs. brittle hierarchy

A common failure mode in automation is not “lack of intelligence,” but the wrong topology.

Centralized control can be very smart and still be brittle because it assumes the future will resemble the test plan. When a novel condition appears, a central brain has two bad options:

  • Wait for a human to diagnose and patch the logic, or
  • Keep executing a model that is now wrong.

Holarchic systems—systems of holons—offer a third option: local autonomy with global coherence.

This is the core promise of agent-based architectures in industrial contexts:

  • Local agents act on local truth quickly.
  • Coordination agents negotiate and reconcile conflicts.
  • The system continues to function even when parts are degraded.
  • The whole remains legible because the coordination layer is explicit, not implicit tribal knowledge.
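A minimal sketch of that division of labor, under assumed names: each local agent proposes the best task it can see from its local view, and a coordination agent reconciles conflicting claims so no task is assigned twice. An agent whose claims all fail simply holds a safe idle state rather than blocking the others.

```python
def propose(agent, visible, taken):
    """Local truth: pick the best still-available task this agent can see."""
    for task in visible[agent]:
        if task not in taken:
            return task
    return None  # nothing available: the agent holds a safe idle state

def reconcile(agents, visible):
    """Coordination agent: grant claims one at a time, reconciling conflicts."""
    taken, plan = set(), {}
    for agent in agents:
        task = propose(agent, visible, taken)
        if task is not None:
            taken.add(task)
            plan[agent] = task
    return plan

# Two agents see overlapping work; both prefer task 1.
visible = {"pump-a": [1, 3], "pump-b": [1, 2]}
plan = reconcile(["pump-a", "pump-b"], visible)
# pump-a claims task 1; pump-b's first choice conflicts, so it settles for 2.
```

Note that the coordination logic is explicit and inspectable, which is what keeps the whole legible: the conflict-resolution rule lives in code, not in tribal knowledge.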

“Design for the unexpected” is a design constraint

Most engineering is optimization against known constraints. In volatile environments, the more important question is: what happens when the constraint set itself changes?

Designing for the unexpected doesn’t mean “anything goes.” It means adopting patterns that expect novelty:

  • Graceful degradation over perfect performance.
  • Fallback states that are safe and reversible.
  • Observability that makes surprises visible quickly.
  • Loosely coupled coordination so local failures don’t become systemic failures.
  • Learning loops so the system improves because the unexpected occurred.
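The first two patterns can be sketched concretely (names and thresholds are assumptions, not a real API): a sensing node that returns live readings while healthy, serves a buffered last-known-good value for a bounded time when its source fails, and then degrades to an explicit, labeled safe default instead of raising.

```python
class DegradingSensor:
    """Graceful degradation: live value -> buffered last-known-good -> safe default."""

    def __init__(self, source, safe_default=0.0, max_stale=3):
        self.source = source            # callable that may raise on failure
        self.safe_default = safe_default
        self.max_stale = max_stale      # failed reads tolerated before fallback
        self.last_good = None
        self.stale_count = 0

    def read(self):
        try:
            value = self.source()
            self.last_good, self.stale_count = value, 0
            return value, "live"        # labeling the mode keeps surprises visible
        except Exception:
            self.stale_count += 1
            if self.last_good is not None and self.stale_count <= self.max_stale:
                return self.last_good, "buffered"
            return self.safe_default, "degraded"

# Simulate one good reading followed by a sensor outage.
values = iter([21.5, None, None, None, None])
def flaky():
    v = next(values)
    if v is None:
        raise RuntimeError("sensor offline")
    return v

s = DegradingSensor(flaky, safe_default=0.0, max_stale=3)
states = [s.read() for _ in range(5)]
```

Returning the mode alongside the value is the observability half of the pattern: downstream consumers can see that they are running on stale or default data rather than silently trusting it.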

If you want a phrase with teeth, consider antifragility: systems that don’t merely survive variance, but can improve from it. Industrial systems rarely get the luxury of repeated clean resets; they evolve in-place. That makes learning—and the ability to incorporate learning safely—a first-class requirement.

The philosophical punchline: autonomy is a safety feature

In industrial settings, autonomy is often framed as a productivity story. But autonomy is also a safety story.

A holon that can maintain a safe state without permission is not “less controlled.” It is more governable under stress. It reduces the number of situations where the only safe response requires perfect comms, perfect central context, and perfect timing.

Centralized systems can be audited. Holonic systems can be audited and continue to operate under partial failure.

That combination is the future: not a single master controller, but a society of constrained actors—each bounded, each accountable, each contributing to a larger purpose.

References

  • Arthur Koestler, The Ghost in the Machine (1967) — origin of “holon”
  • Arthur Koestler, “Some general properties of self-regulating open hierarchic order (SOHO)” (1969), in Beyond Reductionism
  • Kennie H. Jones (NASA), “Engineering Antifragile Systems: A Change in Design Philosophy” (2014)
  • Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (2012)