In Beat 10, we re-enter the terrain of purposeful system composition — with a twist. This time, we zoom in on interfaces: not just as screens or buttons, but as relational spaces where humans, machines, environments, and other intelligences encounter one another. Through the frameworks of Eames, Simon, and cybernetics, we explore how design is not only a plan, but a practice of change — recursive, ethical, and deeply situated in networks of feedback.
We ask: What does it mean to design an interface for a gorilla? Or a neural net? Or a post-human city?
We trace how every “change” is actually a cascade — across boundaries, identities, ecologies, and time.
Talking Points
Design as cybernetic process: recursive, purposeful, feedback-driven
Interfaces as ethical boundaries, not neutral tools
Human–AI interaction as co-adaptation, not control
Animal–AI systems and nonhuman cognition
Machine-to-machine communication and invisible power dynamics
Cascading impacts: every design decision ripples
Charles Eames and Herbert Simon on design and transformation
Designing for change across legal, cultural, emotional, and biological systems
AI governance frameworks and Indigenous data sovereignty
Systemic accountability and designing with foresight
Script:
Welcome back to Cybernetic Heart.
I’m Annie, and this is Beat 10, “Interfacing with Change: Designing in the Age of AI.”
This week, we find ourselves at the edge — where human meets machine, where the built world touches the natural, and where transformation begins at the interface.
Not just the screen.
Not just the button.
But the space of exchange.
“One could describe design as a plan for arranging elements to accomplish a particular purpose.”
— Charles Eames, 1968
But that’s only half the story.
Design isn’t just a plan.
It’s a practice. A cybernetic loop — where feedback, reflection, and recalibration drive transformation.
And this fortnight, we asked:
If everything is hitched to everything else...
Then how do we design for change without pulling the whole system apart?
We began by returning to some grounding thinkers:
Charles Eames, with design as structured intention
and Herbert Simon, who positioned design as the foundational act of the sciences of the artificial: concerned not with what is, but with what could be
Design, then, isn’t just about form.
It’s a way of enacting transformation: shifting the current state into a preferred one.
But preferred by whom?
For how long?
And with what side effects?
These questions echoed through every design activity this fortnight — from mapping the ripple effects of AI systems, to applying governance frameworks, to imagining AI–animal collaboration.
What even is an interface?
To a developer: a screen.
To a user: a doorway.
To a cybernetician? A boundary where meaning emerges.
Interfaces aren’t just conduits. They’re mediators:
They control what can be seen.
They define what kinds of input are legible.
They constrain and expand possibility.
We explored design principles like:
Perceived affordances: How do we know what’s possible?
Minimal complexity: When does simplicity enable depth?
Feedback loops: Does the system let us know we’re being heard? (A code sketch of all three principles follows.)
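A minimal sketch of those three principles in one tiny text interface, written in Python for these notes. Every name in it (the commands, the prompt wording) is illustrative, not drawn from any system we discussed:

```python
# The prompt signals what is possible (perceived affordance), the
# command set stays small (minimal complexity), and every input gets
# an acknowledgment (feedback loop). All names are illustrative.

COMMANDS = {"help", "status", "quit"}  # a small, legible input space

def respond(command: str) -> str:
    """Acknowledge every input, even the illegible ones."""
    if command == "help":
        return f"You can type: {', '.join(sorted(COMMANDS))}"
    if command == "status":
        return "System is listening."
    if command == "quit":
        return "Goodbye."
    # Feedback even on failure: the user knows they were heard.
    return f"'{command}' isn't something I understand. Try 'help'."

def run() -> None:
    # The prompt itself is a perceived affordance: it shows that typing
    # is possible and hints at where to start.
    while True:
        command = input("type a command ('help' to see options) > ").strip()
        print(respond(command))
        if command == "quit":
            break

if __name__ == "__main__":
    run()
```

The detail worth noticing: the failure path gets the same care as the success path, because feedback on illegible input is what makes the input space learnable.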
But the deeper provocation was this:
What happens beyond the interface?
When the system responds — not to clicks, but to context.
When the interface isn’t a screen, but a relationship.
One of the most compelling insights from this fortnight was this:
“AI is no longer a tool. It’s a co-agent.”
In human–AI interaction, we are no longer just users.
We’re collaborators — co-steering, co-creating, and co-adapting.
We mapped three key dimensions of this collaboration (sketched in code after the list):
Agency: Who holds decision power?
Interaction: How is meaning built and negotiated?
Adaptation: How do both sides evolve over time?
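One way to make those dimensions concrete, sketched here in Python for the notes: treat them as fields to fill in when auditing any human–AI system. The schema and the example values are our illustration, not a published framework:

```python
from dataclasses import dataclass

@dataclass
class Collaboration:
    """The three dimensions as an auditable record (illustrative schema)."""
    agency: str       # who holds decision power?
    interaction: str  # how is meaning built and negotiated?
    adaptation: str   # how do both sides evolve over time?

# A made-up example: auditing a writing assistant along the dimensions.
writing_assistant = Collaboration(
    agency="shared: the human accepts, edits, or rejects every suggestion",
    interaction="turn-taking: drafts and suggestions negotiate meaning",
    adaptation="mutual: the model tunes to the writer; the writer learns its habits",
)

for dimension, answer in vars(writing_assistant).items():
    print(f"{dimension}: {answer}")
```

Filling in the three fields is the audit; a blank or evasive answer in any of them is itself a finding.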
But this doesn’t stop at humans.
What does it mean to build an interface with a gorilla?
Or a worm, as in the OpenWorm simulation of C. elegans?
Or an ecosystem, as in AI-driven climate response systems?
When the “user” is nonhuman — the interface becomes ethological.
Species-specific. Sensory. Situated.
And then, there’s AI-to-AI interaction —
protocols negotiating with protocols, at speeds we can’t see.
The takeaway?
All interfaces encode values.
Even when no human is watching.
We explored a spectrum of AI design guidelines — from Microsoft’s AI principles to Indigenous data sovereignty protocols.
These aren’t just policies. They’re meta-interfaces —
governing how systems should behave, even when we’re not looking.
In class, we used these to audit our own projects:
Where are the gaps in your system’s accountability?
What types of impact do you currently ignore?
And who gets to decide what’s a “preferred” outcome?
We came back to one powerful line:
“Design isn’t just about how things work.
It’s about why they’re built — and for whom.”
Here’s the truth at the heart of cybernetic design:
Change is never isolated.
It cascades.
Design a better user flow? It changes expectations.
Add a feedback loop? It alters behavior.
Build an AI with just a slightly different training set?
It may reproduce — or disrupt — an entire worldview.
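The cascade is easy to state in code. A sketch, assuming we model decisions and effects as a directed graph (the edges below are a toy example invented for these notes): a ripple is simply everything reachable from one change.

```python
from collections import deque

# Edges point from a design decision to the things it influences.
# This graph is a made-up illustration, not real project data.
INFLUENCES = {
    "new user flow": ["user expectations"],
    "user expectations": ["support tickets", "feature requests"],
    "feedback loop added": ["user behavior"],
    "user behavior": ["training data"],
    "training data": ["model outputs"],
    "model outputs": ["worldview reproduced, or disrupted"],
}

def cascade(change: str) -> list[str]:
    """Breadth-first trace of everything one change can reach."""
    seen, queue, ripples = {change}, deque([change]), []
    while queue:
        node = queue.popleft()
        for downstream in INFLUENCES.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                ripples.append(downstream)
                queue.append(downstream)
    return ripples

print(cascade("feedback loop added"))
# ['user behavior', 'training data', 'model outputs',
#  'worldview reproduced, or disrupted']
```

The hard part isn’t the traversal; it’s knowing which edges exist before the system teaches you.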
In our workshop, we traced cascading impacts of AI systems:
Who benefits?
Who’s overlooked?
What unintended futures are being quietly designed?
One group asked:
What if the system we build outlives us?
That’s the real cybernetic challenge —
To design for systems that will continue evolving without us.
Final Reflection: Designing for Difference
Design is transformation.
But it’s also care.
To design well — cybernetically — is to:
Anticipate interaction beyond intention.
Invite feedback instead of control.
Build with others — human and nonhuman — in mind.
Thanks for joining us on Beat 10 of Cybernetic Heart.
Until next time,
Stay relational.
Stay responsive.