At what point does delegation become self-erasure?
A colleague of mine, someone who actively uses, benefits from, and advocates for AI tools, sighed deeply before telling me:
My life and my work is legitimately getting better using AI. But I also feel myself disappearing into an abyss. What it means to be myself or human feels like it's slipping away. I have to be a lot more intentional to feel like I have a grasp on it.
I've been hearing versions of this everywhere. Not from luddites, but from practitioners. Something is getting better. And something is being lost. Perhaps worse, something is being given away.
In 2017, I wrote down a line from a Josh Brewer talk in my commonplace book:
As product people we are responsible. We are responsible for the decisions that we make. We are responsible for the ramifications of those decisions. But it does not get talked about very often, if ever. The behaviors that we are creating and promoting and the things that we put in front of people and incentivize them to use and the long-term consequences rarely ever get brought up in meetings, rarely ever get factored in when product roadmaps are being created, and there certainly isn't time baked in anywhere to stop and reflect.
I've been feeling this acutely lately. Brewer was talking about smartphones, which makes his concern seem almost quaint now. Because we're no longer just deciding how people spend their screen time. We're deciding which human capacities get exercised and which ones atrophy. The closest I can come to naming this nagging feeling is a question:
At what point does delegation become self-erasure?
The AI moment is presenting what amounts to a total delegation offer.
Much of it you should accept. Our AI agents will handle scheduling, research, inbox triage, logistics. A whole operational layer of life. I'm looking forward to it. But that same nagging feeling tells me there are categories of human activity where delegation doesn't simply save time and tedium.
It costs you yourself.
The most insidious part is the harm feels like a benefit at first. You hand something off and feel real relief. But a piece of you went with it. The logic is airtight right up until you realize you've optimized yourself out of your own life. By then, the capacity you gave away has atrophied, and you don't even remember you had it.
What's worth protecting
AI can tell us we could do something. Only we can decide if we should. What should we protect, in our own lives, and in what we build for others?
mastery
Human beings are learning creatures. I began my career as an educator, not writing code, and this is the thing I am certain of: developing mastery requires effort and intention. It isn't easy. And sometimes that's the point. The struggle is not an obstacle to a good life. It is a good life.
Mastery isn't only about capability or self-reliance. The effort teaches you about yourself. What you value, what you'll fight for, what actually matters to you. Delegate the effort and you lose both the skill and the self-knowledge that came with it.
When we build tools that promise to remove struggle entirely, we should ask: are we removing the productive kind? The effort that builds something in the person using the product? If so, it isn't offering convenience. It's offering to make people less capable and less self-aware. And they'll accept, because it feels like relief.
human connection
When people face important problems, they need help they can trust. And deep, enduring trust requires something only other humans can offer.
At one end, it's empathy: the kind of knowing that comes from having lived through something with real stakes. An AI has never struggled with something that matters. It has no life to live, no stakes, no emotions to navigate. This isn't a current capability gap, it's a categorical impossibility. Software built by people who have lived the problems their users face carries that understanding into the product. Software built without it can only simulate.
But human connection in our tools isn't only about empathy for hard things. It's the craft and joy that come through when someone builds out of genuine passion for a domain. You feel the difference between a tool made by someone who deeply loves the thing you're using it for, and a tool that is generically competent. No one has that depth of affinity for everything. An AI can't have it about anything. This isn't nostalgia. It's a resonance between the maker's care and the user's experience. And it's worth something.
attention
Human attention is finite and not fungible. When someone gives their attention to something, they are giving a piece of their finite life to it. Important activities deserve full presence, not a fraction of it. The tools we build for those activities should signal: this is not like everything else.
When we fold an activity into the stream of everything, we are telling the user, by design, that the activity isn't special. Sometimes that's the right call. Sometimes it's erasure.
What follows
This is not an argument against AI. It's an argument for discernment in where we apply it and what we build with it. The solution is not to expunge these new capabilities. It's to be deliberate about what they touch.
For those of us who build tools: right now, every founder, every platform, every product team is deciding what kind of relationship humans will have with the most important activities in their lives. What behaviors we create, what we incentivize, what we make easy to give away. I fear we aren't actually deciding. We're just stumbling into whatever comes next.
The tools that resist this share a common shape. They're single-purpose software.
Single-purpose software creates a bounded space where focused attention is possible and decisions are essential. It is built by people who understand and love a specific domain. It supports mastery rather than removing it. It carries human connection into the product itself, both the empathy of shared experience and the craft of genuine passion.
Single-purpose software is not for everything, but for the five things instead of the fifty. Single-purpose software may or may not use AI. That's not the point. It is for the activities where the effort is the point, where your presence matters, where you are becoming someone or expressing yourself as someone through the act of doing. These tools are humble in a way. Single-purpose software says: I'm not going to take this from you, but I'm going to be the best possible place for you to do it well.
I believe single-purpose software is what tools look like when they're designed to protect what matters in human experience rather than optimize it away. It's what saves you from accidentally giving away something you wish you hadn't.
The offer
The total delegation offer is real, and it's coming for everything. Be optimistic and accept a lot of it. But be skeptical and decide what you're protecting. The question we should be asking as makers, as people, is whether what we're building leaves room for mastery, connection, and attention.
AI is already extraordinary at many things. But that makes it more important, not less, to build tools that protect the things it can't replace. We should build on that conviction.