There’s a lot being said about AI at the moment, and most of it comes back to the same question.
What happens next?
I don’t think that’s really a question about the technology. I think it’s a question about what replaces the things we already understand.
I think we’re used to change happening in a way we can follow. Things improve, processes evolve, new tools appear, and the structure stays broadly the same.
We adjust to it.
This feels different. Not because the outcome will be worse, but because it’s harder to picture.
And when something is hard to picture, it’s easy to assume the worst.
That seems to be where most of the reaction sits. Not in what AI is doing now, but in what we think it might do next.
In most cases, nothing has actually changed yet. It’s just the sense that it might.
That’s an uncomfortable place to be, because there’s nothing concrete to respond to.
It’s a familiar feeling, just in a different context.
Most clients don’t come to us because something has clearly gone wrong. On paper, things are usually fine.
What they’re reacting to is something harder to describe. A sense that things are changing, or about to change, without being entirely sure what that means for them.
I think we’ve been in versions of that gap before – not often, but enough.
Times when things didn’t just improve slightly, but had to be reworked more fundamentally.
It didn’t always go smoothly, but more often than not, it ended up better than expected.
Not because we had a clear plan, but because we worked it out as we went. That doesn’t remove the uncertainty, but it does put it in context.
The question isn’t whether things will change. They will. It’s whether the inability to picture what comes next is being mistaken for something more serious.
Most of the time, I think that’s the mistake.
