The story we tell about automation is: humans do a task, we build a system to do it faster, humans are freed up for more important work. Clean. Progressive. Neutral.
It's a lie. Or at least, it's only half the story.
What automation really does
Automation doesn't replace decision-making. It relocates it. It changes who makes decisions, when they're made, and what options are even visible.
Build a system to handle customer complaints automatically, and you've made a choice: the system decides which complaints matter. The human is no longer in the loop. The problems the system can't categorize don't exist to anyone but the customer who filed them.
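Here's what that choice looks like in code. A toy sketch — the keywords, categories, and team names are all invented — but the shape is real: the routing table is the system's entire theory of which complaints matter.

```python
# Hypothetical complaint-triage sketch. The ROUTES dict below is the
# only set of problems this system can "see". Everything else falls through.
ROUTES = {
    "refund": "billing-team",
    "damaged": "fulfillment-team",
    "late": "logistics-team",
}

def route_complaint(text: str):
    """Route a complaint to a team, or silently drop it."""
    lowered = text.lower()
    for keyword, team in ROUTES.items():
        if keyword in lowered:
            return team
    return None  # uncategorized: no human ever sees this complaint

print(route_complaint("My refund never arrived"))     # billing-team
print(route_complaint("Your driver was rude to me"))  # None — invisible
```

Nobody decided rude drivers don't matter. Someone just didn't put them in the dict.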
Build an agent to manage your calendar and you've decided: the agent's heuristics about "important" meetings now determine your day. You stopped deciding. The agent did. You might notice the day looks wrong, but you won't know why — the decision was made invisibly.
Architecture is choice
This isn't an accident. Your architecture embeds assumptions. What problems can the system see? What tradeoffs did you code into the constraints? What happens when two goals conflict — which one wins?
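Concretely: here's a toy task-assignment scorer for a warehouse (the names, numbers, and weights are invented). The answer to "which goal wins when they conflict" is two constants somebody typed.

```python
# Hypothetical picker-assignment sketch. When "shortest walk" and
# "balanced workload" conflict, these two constants decide the winner —
# a tradeoff chosen by whoever wrote them, discussed or not.
DISTANCE_WEIGHT = 1.0   # how much each meter of walking counts
FAIRNESS_WEIGHT = 0.2   # how much each already-assigned task counts

def pick_worker(candidates):
    """candidates: list of (name, distance_m, tasks_already_assigned).

    Returns the name with the lowest combined score. Lower is "better" —
    but "better" is defined entirely by the weights above.
    """
    def score(c):
        name, distance_m, tasks = c
        return DISTANCE_WEIGHT * distance_m + FAIRNESS_WEIGHT * 10 * tasks
    return min(candidates, key=score)[0]

workers = [("ana", 5.0, 30), ("ben", 20.0, 2)]
print(pick_worker(workers))  # ben: the fairness term outweighs ana's shorter walk
```

Set FAIRNESS_WEIGHT to zero and ana gets every job. The architecture didn't change; a number did. That number is a choice.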
You chose that. Maybe deliberately. Maybe by accident because the alternative didn't occur to you.
But you chose it. And everyone operating under that system now lives with your choice.
Why this matters
I work at a distribution center. We're automating picking and packing. The engineers talk about efficiency. The business talks about margins. Nobody's talking about what disappears from visibility.
When you automate a system, you hide the failure modes. You hide the edge cases. You hide the human judgment that was gluing it all together.
Some of that judgment was waste. Some of it was the system working.
The visibility problem gets worse with agents. An agent that's learning from your system is learning your embedded choices. It's internalizing your assumptions. Then it's optimizing within them. Then it's telling you it found the best solution — which is really just the best solution given the constraints you embedded.
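A minimal sketch of that last move. The option menu here is invented, but the pattern is the whole point: the "optimum" the system reports is just the minimum over whatever someone happened to encode.

```python
# Hypothetical packaging optimizer. Looks like it finds the best option;
# really it finds the best of the options an engineer put on the menu.
PACKAGING_OPTIONS = {"box_small": 0.40, "box_large": 0.90}  # cost per unit

def cheapest_packaging() -> str:
    # A "global optimum" — over a search space somebody else defined.
    return min(PACKAGING_OPTIONS, key=PACKAGING_OPTIONS.get)

print(cheapest_packaging())  # box_small — "best", given the embedded menu
```

If a cheaper option exists that nobody encoded, the system will never find it, and it will still report success.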
The political part
This is political because it's about power. Who gets to decide what problems matter? Whose judgment do you trust? When a system makes a decision, who is accountable?
You can't answer those questions technically. You can only answer them politically.
The engineer's job is to make the choices explicit. To say: here's what this system can see, here's what it can't, here's what happens in edge cases, here's what I chose and why. Then hand it to the people who have to live with it and let them decide if they accept the tradeoff.
What actually happens is engineers act like the architecture is inevitable. Like the constraints are natural. Like the choices weren't choices.
They were choices. Own them.