The Fantasy of AI Isn’t Technological: It’s About Disappearing from the Process

There’s a type of post I’ve been seeing a lot online lately. The author changes and the tone shifts (sometimes it’s written with enthusiasm, sometimes with a hint of existential exhaustion, sometimes with a lot of hope), but the idea is always the same: “I can’t wait for the moment when artificial intelligence does (insert any process or activity the writer doesn’t want to do or learn) so I can spend my time on more important things.” And I get it. I really do. But something about that sentence doesn’t quite sit right with me, and I’ve been turning it over in my head for a while now, because I think the issue isn’t wanting AI to help us, but what we mean when we say “help.”

When someone says they want AI to help them with a task they dislike, what they’re really describing isn’t efficiency, but full delegation. And that’s an important distinction, because even though the two sound similar, they lead in completely different directions. Efficiency means we’re still part of the process, just with less friction, less time, fewer unnecessary steps. Full delegation means we step out entirely, the process happens without us, and responsibility shifts elsewhere. And that second scenario is, as far as I understand how this works, essentially impossible —not because the technology isn’t advanced enough, but because there’s something in the nature of human processes that, by definition, requires a human. And that part isn’t negotiable.
Let’s take something simple. Imagine we all own a dishwasher. That’s already technology that makes the process of washing dishes more efficient, and no one argues with that, no one feels like the dishwasher is taking something away from them. But it doesn’t actually wash the dishes in an absolute sense: we still have to load it, add detergent, choose a setting, empty it when it’s done, and if something goes wrong, someone has to decide what to do. It handles the longest and most tedious part, but we’re still the ones carrying the process. That’s much closer to what AI does —except instead of dishes and hot water, it works with information, options, structures, and data.
And I think that’s where the core misunderstanding begins.
Right now, there are two groups of people reacting to AI, and interestingly, they seem opposed but share the same underlying assumption. On one side, there are those who are afraid —who feel AI is going to take something away from them, erase their processes, make them irrelevant. On the other, there are those who are excited —who feel AI is going to solve everything, free them, do the work for them so they can focus on higher-level things. But both are imagining a version of AI that doesn’t exist: one that makes decisions on its own, assumes responsibility on its own, carries out processes without supervision or intervention when something doesn’t go as expected. And that’s not a tool. That’s something else entirely.
At that point, the conversation stops being technological and becomes almost philosophical, because we’re no longer talking about improving processes —we’re talking about transferring responsibility. And I suspect that even if that were possible, we wouldn’t actually want what we think we want, because what we’re really looking for isn’t freedom, but relief. And relief and freedom are not the same thing. But that’s probably another post.
If we move this into a corporate context and think about structured processes, it helps to use a different analogy. Think about a film. When you produce a film, everything is captured: movements, dialogue, sequence. You can replay it a thousand times and it will always be the same. There’s no adaptation, no response to the unexpected, because the unexpected doesn’t exist within the film —it’s already been recorded, contained, controlled. Now imagine designing an AI process so perfect that it requires no human input at any point. What you would get is exactly that: a film. Something that can be repeated, but not adapted. Something that works perfectly within its boundaries, but breaks the moment something falls outside them, because it has no way of processing what wasn’t in the script.
And the real world always, always has something that wasn’t in the script.
So what *can* we expect from AI? Let’s imagine we wake up one day wanting to understand how a rocket works. We don’t have a background in aerospace engineering, and we don’t know anyone in the field. We ask AI, and within seconds we get an explanation tailored exactly to our level. Then we ask what it would take to build one, and it outlines the steps. Then we ask what happens if we don’t have the expertise, and it gives us real options: study, hire someone, buy something ready-made. We choose one, and AI opens the next layer: schools, requirements, companies, price ranges, comparisons. What just happened is that we collapsed weeks of research into a few hours. Something that used to require privileged access to knowledge is now within reach of anyone with an internet connection and the ability to ask a question. That opens up something that wasn’t accessible before.

But it’s important to notice what didn’t happen: AI didn’t choose for us. It didn’t evaluate our financial situation, our time, our priorities. It gave us the most complete set of options we could have gathered in that timeframe, but the moment we said yes to one thing and no to another, that was ours. AI gives us information. We make the decisions. That’s the relationship.
This shift in expectations doesn’t happen automatically, and it has context. There are industries that have spent decades working with technology, and they already understand how to integrate tools into processes. They evaluate tools based on what they do to the process, not on whether they replace the person carrying it out. But when that same conversation reaches industries without that background, enthusiasm often arrives before understanding, and that creates expectations no tool can meet, not because of technical limitations, but because of what we’re asking the tool to be. AI isn’t a magical solution. It’s a powerful tool, and its function is to amplify what we can do, not to replace us in doing it.

There’s one last thing worth saying, and it’s probably the most uncomfortable.
Fear of AI and excessive excitement about it have something in common: both imagine a version of reality where we don’t have to be present. One assumes we’ll be removed, the other assumes we’ll be freed. But neither is asking the more basic —and more difficult— question: what do we actually want to do with the time and energy that a more efficient tool gives back to us?
Because AI can optimize a process we hate. It can’t tell us why we hate it. It can give us five different ways to reach a goal. It can’t tell us whether that goal still matters. It can make faster what we already know how to do. It can’t do for us the work of figuring out what we want to do.
And that part —the most human and the most uncomfortable— is still entirely ours.